survMisc/0000755000176200001440000000000014223606332012066 5ustar liggesuserssurvMisc/NAMESPACE0000744000176200001440000000471014223603142013304 0ustar liggesusers# Generated by roxygen2: do not edit by hand S3method(COV,numeric) S3method(COV,stratTen) S3method(COV,ten) S3method(asLong,ten) S3method(asWide,ten) S3method(autoplot,stratTen) S3method(autoplot,survfit) S3method(autoplot,tableAndPlot) S3method(autoplot,ten) S3method(ci,stratTen) S3method(ci,ten) S3method(comp,ten) S3method(cutp,coxph) S3method(cutp,survfit) S3method(gof,coxph) S3method(nc,stratTen) S3method(nc,ten) S3method(predict,ten) S3method(print,COV) S3method(print,lrt) S3method(print,stratTableAndPlot) S3method(print,sup) S3method(print,tableAndPlot) S3method(print,ten) S3method(rsq,coxph) S3method(rsq,survfit) S3method(sf,default) S3method(sf,numeric) S3method(sf,stratTen) S3method(sf,ten) S3method(ten,Surv) S3method(ten,coxph) S3method(ten,data.frame) S3method(ten,data.table) S3method(ten,formula) S3method(ten,numeric) S3method(ten,survfit) S3method(ten,ten) S3method(xtable,survfit) S3method(xtable,table) export(COV) export(asLong) export(asWide) export(autoplot) export(ci) export(comp) export(cutp) export(gof) export(nc) export(profLik) export(rsq) export(sf) export(ten) export(xtable) import(KMsurv) import(ggplot2) import(km.ci) import(knitr) import(survival) importFrom(data.table,':=') importFrom(data.table,as.data.table) importFrom(data.table,copy) importFrom(data.table,data.table) importFrom(data.table,rbindlist) importFrom(data.table,set) importFrom(data.table,setattr) importFrom(data.table,setcolorder) importFrom(data.table,setkey) importFrom(data.table,setnames) importFrom(grDevices,dev.new) importFrom(grDevices,graphics.off) importFrom(graphics,abline) importFrom(graphics,arrows) importFrom(graphics,grid) importFrom(graphics,mtext) importFrom(graphics,plot) importFrom(graphics,points) importFrom(graphics,segments) importFrom(graphics,title) importFrom(grid,unit.pmax) importFrom(gridExtra,grid.arrange) importFrom(stats,anova) importFrom(stats,as.formula) importFrom(stats,formula) importFrom(stats,is.empty.model) importFrom(stats,model.frame) importFrom(stats,model.matrix) importFrom(stats,model.response) importFrom(stats,pchisq) importFrom(stats,pnorm) importFrom(stats,predict) importFrom(stats,printCoefmat) importFrom(stats,qchisq) importFrom(stats,qnorm) importFrom(stats,runif) importFrom(stats,terms) importFrom(stats,update) importFrom(utils,combn) importFrom(utils,data) importFrom(utils,head) importFrom(utils,packageVersion) importFrom(utils,tail) importFrom(xtable,print.xtable) importFrom(xtable,xtable) importFrom(zoo,na.locf.default) survMisc/data/0000755000176200001440000000000012657760621013012 5ustar liggesuserssurvMisc/data/gastric.rda0000744000176200001440000000070312566132460015130 0ustar liggesusers=O@` j԰@R * q"*::::2:88888 ă߁nb׻{smzg|N1&0a 6~iٙUMn0f?5lE{z"|2@*VSב#^xkf)YԳ97N>uG23y8 Q K|]Wkt=6Zn;~:}-)qQT~bkQY2$]kf'$RI%v C0^Wڈy/s5Rx>}5o!r] survMisc/man/0000755000176200001440000000000014223602357012644 5ustar liggesuserssurvMisc/man/COV.Rd0000744000176200001440000000751613317032427013572 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/COV.R \name{COV} \alias{COV} \alias{COV.ten} \alias{COV.stratTen} \alias{COV.numeric} \title{\bold{cov}ariance matrix for survival data} \usage{ COV(x, ...) 
\method{COV}{ten}(x, ..., reCalc = FALSE) \method{COV}{stratTen}(x, ..., reCalc = FALSE) \method{COV}{numeric}(x, ..., n, ncg) } \arguments{ \item{x}{A \code{numeric} vector of \emph{number of events}, \eqn{e_t}{e[t]}. These are assumed to be ordered by discrete times. \cr A method is available for objects of \code{class} \code{ten}.} \item{...}{Additional arguments (not implemented).} \item{reCalc}{Recalcuate the values? \cr If \code{reCalc=FALSE} (the default) and the \code{ten} object already has the calculated values stored as an \code{attribute}, the value of the \code{attribute} is returned directly. \cr \cr \bold{--Arguments for the numeric method:}} \item{n}{\bold{n}umber at risk (total).} \item{ncg}{\bold{n}umber at risk, per \bold{c}ovariate \bold{g}roup. \cr If there are \eqn{2} groups, this can be given as a \code{vector} with the number at risk for group \eqn{1}. \cr If there are \eqn{\geq 2}{>= 2} groups, it is a \code{matrix} with one column for each group.} } \value{ An \code{array}. \cr The first two dimensions = the number of covariate groups \eqn{K}, \eqn{k = 1, 2, \ldots K}. This is the square matrix below. \cr The third dimension is the number of observations (discrete time points). \cr \cr To calculate this, we use \code{x} (= \eqn{e_t}{e[t]} below) and \eqn{n_1}{n1}, the number at risk in covariate group \eqn{1}. \cr Where there are \eqn{2} groups, the resulting sparse square matrix (i.e. the non-diagonal elements are \eqn{0}) at time \eqn{t} has diagonal elements: \deqn{cov_t = - \frac{n_{0t} n_{1t} e_t (n_t - e_t)}{n_t^2(n_t-1)}}{ cov[t] = - n0[t] * n1[t] * e[t] * (n[t] - e[t]) / (n[t]^2 * (n[t] - 1))} For \eqn{\geq 2}{>=2} groups, the resulting square matrix has diagonal elements given by: \deqn{cov_{kkt} = \frac{n_{kt}(n_t - n_{kt}) e_t(n_t - e_t)}{ n_t^2(n_t - 1)}}{ cov[k, k, t] = n[k, t] * (n[t] - n[k, t]) * e[t] * (n[t] - e[t]) / (n[t]^2 * (n[t] - 1))} The off diagonal elements are: \deqn{cov_{klt} = \frac{-n_{kt} n_{lt} e_t (n_t-e_t) }{ n_t^2(n_t-1)}}{ cov[k, l, t] = - n[k, t] * n[l, t] * e[t] * (n[t] - e[t]) / n[t]^2 * (n[t] - 1)} } \description{ \bold{cov}ariance matrix for survival data } \details{ Gives variance-covariance matrix for comparing survival data for two or more groups. \cr Inputs are vectors corresponding to observations at a set of discrete time points for right censored data, except for \eqn{n1}, the no. at risk by predictor. \cr This should be specified as a vector for one group, otherwise as a matrix with each column corresponding to a group. } \note{ Where the is just one subject at risk \eqn{n=1} at the final timepoint, the equations above may produce \code{NaN} due to division by zero. This is converted to \code{0} for simplicity. } \examples{ ## Two covariate groups ## K&M. Example 7.2, pg 210, table 7.2 (last column). \dontrun{ data("kidney", package="KMsurv") k1 <- with(kidney, ten(Surv(time=time, event=delta) ~ type)) COV(k1)[COV(k1) > 0] } ## Four covariate groups ## K&M. Example 7.6, pg 217. \dontrun{ data("larynx", package="KMsurv") l1 <- ten(Surv(time, delta) ~ stage, data=larynx) rowSums(COV(l1), dims=2) } ## example of numeric method ## Three covariate groups ## K&M. Example 7.4, pg 212. 
\dontrun{ data("bmt", package="KMsurv") b1 <- asWide(ten(Surv(time=t2, event=d3) ~ group, data=bmt)) rowSums(b1[, COV(x=e, n=n, ncg=matrix(data=c(n_1, n_2, n_3), ncol=3))], dims=2) } } \seealso{ Called by \code{\link{comp}} The name of the function is capitalized to distinguish it from: \cr ?stats::cov } \keyword{survival} survMisc/man/predict.Rd0000744000176200001440000000371313317032427014570 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/predict.R \name{predict} \alias{predict} \alias{predict.ten} \title{predicted events} \usage{ \method{predict}{ten}(object, ..., eMP = TRUE, reCalc = FALSE) } \arguments{ \item{object}{An object of class \code{ten}.} \item{...}{Additional arguments (not implemented).} \item{eMP}{Add column(s) indicating \bold{e}vents \bold{m}inus \bold{p}redicted.} \item{reCalc}{Recalcuate the values? \cr If \code{reCalc=FALSE} (the default) and the \code{ten} object already has the calculated values stored as an \code{attribute}, the value of the \code{attribute} is returned directly.} } \value{ An \code{attribute}, \code{pred} is added to \code{object}: \item{t}{Times with at least one observation} \item{P_}{\bold{p}redicted number of events} And if \code{eMP==TRUE} (the default): \item{eMP_}{\bold{e}vents \bold{m}inus \bold{p}redicted} The names of the \code{object}'s covariate groups are used to make the suffixes of the column names (i.e. after the \code{_} character). } \description{ predicted events } \details{ With \eqn{K} covariate groups, We use \eqn{ncg_{ik}}{ncg[i, k]}, the number at risk for group \eqn{k}, to calculate the number of expected events: \deqn{P_{ik} = \frac{e_i(ncg_{ik})}{n_i} \quad k=1, 2 \ldots K}{ P[i, k] = e[i] * ncg[i, k] / n[i]} } \note{ There is a predicted value for each unique time, for each covariate group. } \examples{ ## K&M. Example 7.2, Table 7.2, pp 209-210. data("kidney", package="KMsurv") k1 <- ten(Surv(time=time, event=delta) ~ type, data=kidney) predict(k1) predict(asWide(k1)) stopifnot(predict(asWide(k1))[, sum(eMP_1 + eMP_2)] <= .Machine$double.neg.eps) ## Three covariate groups ## K&M. Example 7.4, pp 212-214. data("bmt", package="KMsurv") b1 <- ten(Surv(time=t2, event=d3) ~ group, data=bmt) predict(b1) ## one group only predict(ten(Surv(time=t2, event=d3) ~ 1, data=bmt)) } \seealso{ ?survival::predict.coxph methods("predict") } survMisc/man/gof.Rd0000744000176200001440000001121014223602357013702 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/gof.R \name{gof} \alias{gof} \alias{gof.coxph} \title{\bold{g}oodness \bold{o}f \bold{f}it test for a \code{coxph} object} \source{ Method and example are from: \cr May S, Hosmer DW 1998. A simplified method of calculating an overall goodness-of-fit test for the Cox proportional hazards model. \emph{Lifetime Data Analysis} \bold{4}(2):109--20. \doi{10.1023/A:1009612305785} } \usage{ gof(x, ...) \method{gof}{coxph}(x, ..., G = NULL) } \arguments{ \item{x}{An object of class \code{coxph}} \item{...}{Additional arguments (not implemented)} \item{G}{Number of \bold{g}roups into which to divide risk score. If \code{G=NULL} (the default), uses closest integer to \deqn{G = \max(2, \quad \min(10, \quad \frac{ne}{40}))}{ G = max(2, min(10, ne/40))} where \eqn{ne} is the number of events overall.} } \value{ A \code{list} with elements: \item{groups}{A \code{data.table} with one row per group \eqn{G}. 
The columns are \describe{ \item{n}{Number of observations} \item{e}{Number of events} \item{exp}{Number of events expected. This is \deqn{exp = \sum e_i - M_i} where \eqn{e_i} are the events and \eqn{M_i} are the martingale residuals for each observation \eqn{i}} \item{z}{\eqn{Z} score, calculated as \deqn{ Z = \frac{e - exp}{\sqrt{exp}}}{ Z = (e - exp) / exp^0.5} } \item{p}{\eqn{p}-value for \eqn{Z}, which is \deqn{ p = 2. \code{pnorm}(-|z|)}{ p = 2 * pnorm(-|z|)} where \code{pnorm} is the normal distribution function with mean \eqn{\mu =0}{0} and standard deviation \eqn{\sigma =1}{1} and \eqn{|z|} is the absolute value.} }} \item{lrTest}{Likelihood-ratio test. Tests the improvement in log-likelihood with addition of an indicator variable with \eqn{G-1} groups. This is done with \code{survival:::anova.coxph}. The test is distributed as chi-square with \eqn{G-1} degrees of freedom} } \description{ \bold{g}oodness \bold{o}f \bold{f}it test for a \code{coxph} object } \details{ In order to verify the overall goodness of fit, the risk score \eqn{r_i}{r[i]} for each observation \eqn{i} is given by \deqn{r_i = \hat{\beta} X_i}{r[i] = B.X[i]} where \eqn{\hat{\beta}}{B} is the vector of fitted coefficients and \eqn{X_i}{X[i]} is the vector of predictor variables for observation \eqn{i}. \cr This risk score is then sorted and 'lumped' into a grouping variable with \eqn{G} groups, (containing approximately equal numbers of observations). \cr The number of observed (\eqn{e}) and expected (\eqn{exp}) events in each group are used to generate a \eqn{Z} statistic for each group, which is assumed to follow a normal distribution with \eqn{Z \sim N(0,1)}. \cr The indicator variable \code{indicG} is added to the original model and the two models are compared to determine the improvement in fit via the likelihood ratio test. } \note{ The choice of \eqn{G} is somewhat arbitrary but rarely should be \eqn{> 10}. \cr As illustrated in the example, a larger value for \eqn{G} makes the \eqn{Z} test for each group more likely to be significant. This does \emph{not} affect the significance of adding the indicator variable \code{indicG} to the original model. \cr \cr The \eqn{Z} score is chosen for simplicity, as for large sample sizes the Poisson distribution approaches the normal. Strictly speaking, the Poisson would be more appropriate for \eqn{e} and \eqn{exp}{exp} as per Counting Theory. \cr The \eqn{Z} score may be somewhat conservative as the expected events are calculated using the martingale residuals from the overall model, rather than by group. This is likely to bring the expected events closer to the observed events. \cr \cr This test is similar to the Hosmer-Lemeshow test for logistic regression. } \examples{ data("pbc", package="survival") pbc <- pbc[!is.na(pbc$trt), ] ## make corrections as per Fleming pbc[pbc$id==253, "age"] <- 54.4 pbc[pbc$id==107, "protime"] <- 10.7 ### misspecified; should be log(bili) and log(protime) instead c1 <- coxph(Surv(time, status==2) ~ age + log(albumin) + bili + edema + protime, data=pbc) gof(c1, G=10) gof(c1) } \references{ Default value for \eqn{G} as per: \cr May S, Hosmer DW 2004. A cautionary note on the use of the Gronnesby and Borgan goodness-of-fit test for the Cox proportional hazards model. \emph{Lifetime Data Analysis} \bold{10}(3):283--91. \doi{10.1023/B:LIDA.0000036393.29224.1d} Changes to the \code{pbc} dataset in the example are as detailed in: \cr Fleming T, Harrington D 2005. \emph{Counting Processes and Survival Analysis}. New Jersey: Wiley and Sons. 
Chapter 4, section 4.6, pg 188.
\doi{10.1002/9781118150672}
}
survMisc/man/ci.Rd0000744000176200001440000002125414223602357013533 0ustar liggesusers% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/ci.R
\name{ci}
\alias{ci}
\alias{ci.ten}
\alias{ci.stratTen}
\title{\bold{c}onfidence \bold{i}ntervals for survival curves.}
\source{
The function is loosely based on \code{km.ci::km.ci}.
}
\usage{
ci(x, ...)

\method{ci}{ten}(
  x,
  ...,
  CI = c("0.95", "0.9", "0.99"),
  how = c("point", "nair", "hall"),
  trans = c("log", "lin", "asi"),
  tL = NULL,
  tU = NULL,
  reCalc = FALSE
)

\method{ci}{stratTen}(
  x,
  ...,
  CI = c("0.95", "0.9", "0.99"),
  how = c("point", "nair", "hall"),
  trans = c("log", "lin", "asi"),
  tL = NULL,
  tU = NULL
)
}
\arguments{
\item{x}{An object of class \code{ten}.}

\item{...}{Additional arguments (not implemented).}

\item{CI}{Confidence intervals. As the function currently relies on lookup tables, only 90\%, 95\% (the default) and 99\% are supported.}

\item{how}{Method to use for the confidence interval.
\cr \code{point} (the default) uses pointwise confidence intervals.
\cr The alternatives use confidence \emph{bands} (see details).}

\item{trans}{Transformation to use.
\cr The default is \code{trans="log"}.
\cr Also supported are linear and arcsine-square root transformations.}

\item{tL}{\bold{L}ower time point. Used in construction of confidence bands.}

\item{tU}{\bold{U}pper time point. Used in construction of confidence bands.}

\item{reCalc}{Recalculate the values?
\cr If \code{reCalc=FALSE} (the default) and the \code{ten} object already has the calculated values stored as an \code{attribute}, the value of the \code{attribute} is returned directly.}
}
\value{
The \code{ten} object is modified in place by the addition of a \code{data.table} as an \code{attribute}.
\cr \code{attr(x, "ci")} is printed.
\cr This is analogous to a \code{survfit} object in which the \code{upper} and \code{lower} elements of the list (representing confidence intervals) have been modified from the original.
\cr Other elements will also be shortened if the time range under consideration has been reduced from the original.
}
\description{
\bold{c}onfidence \bold{i}ntervals for survival curves.
}
\details{
In the equations below
\deqn{\sigma^2_s(t) = \frac{\hat{V}[\hat{S}(t)]}{\hat{S}^2(t)} }{
 sigma^2(t) = V[S(t)]/[S(t)]^2}
where \eqn{\hat{S}(t) }{S(t)} is the Kaplan-Meier survival estimate and \eqn{\hat{V}[\hat{S}(t)]}{V[S(t)]} is Greenwood's estimate of its variance.
\cr The \bold{pointwise} confidence intervals are valid for \emph{individual} times, e.g. \code{median} and \code{\link{quantile}} values.
When plotted and joined for multiple points they tend to be narrower than the \emph{bands} described below.
Thus they tend to exaggerate the impression of certainty when used to plot confidence intervals for a time range.
They should not be interpreted as giving the intervals within which the \emph{entire} survival function lies.
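\cr As an informal illustration only (a sketch, not drawn from the package's own examples; it assumes the \code{bmt} data from the \code{KMsurv} package, as used in the examples below), pointwise intervals and confidence bands can be compared directly on the same \code{ten} object:
\preformatted{
data("bmt", package="KMsurv")
t1 <- ten(Surv(t2, d3) ~ 1, data=bmt[bmt$group==1, ])
## pointwise intervals (the default)
ci(t1, how="point", trans="log")
## Nair / equal-precision bands over a restricted time range;
## reCalc=TRUE forces the stored attribute to be recalculated
ci(t1, how="nair", trans="log", tL=100, tU=600, reCalc=TRUE)
}
Bands computed this way are typically noticeably wider than the corresponding pointwise intervals.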
\cr For a given significance level \eqn{\alpha}{alpha}, they are calculated using the standard normal distribution \eqn{Z} as follows: \itemize{ \item linear \deqn{\hat{S}(t) \pm Z_{1- \alpha} \sigma (t) \hat{S}(t)}{ S(t)+- Z(1-alpha) sigma(t) S(t)} \item log transform \deqn{ [ \hat{S}(t)^{\frac{1}{\theta}}, \hat{S}(t)^{\theta} ] }{ [S(t)^(1/theta), S(t)^theta]} where \deqn{ \theta = \exp{ \frac{Z_{1- \alpha} \sigma (t)}{ \log{\hat{S}(t)}}} }{ theta = exp ( Z(1-alpha)sigma(t) / log(S(t)) )} \item arcsine-square root transform \cr upper: \cr \deqn{ \sin^2(\max[0, \arcsin{\sqrt{\hat{S}(t)}} - \frac{Z_{1- \alpha}\sigma(t)}{2} \sqrt{ \frac{\hat{S}(t)}{1-\hat{S}(t)}}]) }{ sin^2(max[0, arcsin S(t)^0.5 - Z(1-alpha)sigma(t)/2 (S(t)/1-S(t))^0.5])} lower: \deqn{ \sin^2(\min[\frac{\pi}{2}, \arcsin{\sqrt{\hat{S}(t)}} + \frac{Z_{1- \alpha}\sigma(t)}{2} \sqrt{ \frac{\hat{S}(t)}{1-\hat{S}(t)}}]) }{ sin^2(min[pi/2, arcsin S(t)^0.5 + Z(1-alpha)sigma(t)/2 (S(t)/1-S(t))^0.5])} } Confidence \bold{bands} give the values within which the survival function falls within a \emph{range} of timepoints. \cr \cr The time range under consideration is given so that \eqn{t_l \geq t_{min}}{tL >= min(t)}, the minimum or lowest event time and \eqn{t_u \leq t_{max}}{tU <= max(t)}, the maximum or largest event time. \cr For a sample size \eqn{n} and \eqn{0 < a_l < a_u <1}: \deqn{a_l = \frac{n\sigma^2_s(t_l)}{1+n\sigma^2_s(t_l)}}{ a_l = n*sigma^2(t_l) / [1+n*sigma^2(t_l)]} \deqn{a_u = \frac{n\sigma^2_s(t_u)}{1+n\sigma^2_s(t_u)}}{ a_u = n*sigma^2(t_u) / [1+n*sigma^2(t_u)]} For the \bold{Nair} or \bold{equal precision} (\bold{EP}) confidence bands, we begin by obtaining the relevant confidence coefficient \eqn{c_{\alpha}}{c[alpha]}. This is obtained from the upper \eqn{\alpha}{a}-th fractile of the random variable \deqn{U = \sup{|W^o(x)\sqrt{[x(1-x)]}|, \quad a_l \leq x \leq a_u}}{ U = sup{ |W(x)[x(1-x)]^0.5|, a_l <= x <= a_u} } Where \eqn{W^o}{W} is a standard Brownian bridge. \cr The intervals are: \itemize{ \item linear \deqn{\hat{S}(t) \pm c_{\alpha} \sigma_s(t) \hat{S}(t)}{ S(t)+- c[alpha] sigma(t) S(t)} \item log transform (the default) \cr This uses \eqn{\theta}{theta} as below: \deqn{\theta = \exp{ \frac{c_{\alpha} \sigma_s(t)}{ \log{\hat{S}(t)}}}}{ theta = exp (c[alpha] * sigma(t) / log(S(t)))} And is given by: \deqn{[\hat{S}(t)^{\frac{1}{\theta}}, \hat{S}(t)^{\theta}]}{ [S(t)^(1/theta), S(t)^theta]} \item arcsine-square root transform \cr upper: \deqn{\sin^2(\max[0, \arcsin{\sqrt{\hat{S}(t)}} - \frac{c_{\alpha}\sigma_s(t)}{2} \sqrt{ \frac{\hat{S}(t)}{1-\hat{S}(t)}}])}{ sin^2(max[0, arcsin S(t)^0.5 - c[alpha]*sigma(t)/2 (S(t)/1-S(t))^0.5])} lower: \deqn{\sin^2(\min[\frac{\pi}{2}, \arcsin{\sqrt{\hat{S}(t)}} + \frac{c_{\alpha}\sigma_s(t)}{2} \sqrt{ \frac{\hat{S}(t)}{1-\hat{S}(t)}}]) }{ sin^2(min[pi/2, arcsin S(t)^0.5 - c[alpha]*sigma(t)/2 (S(t)/1-S(t))^0.5])} } For the \bold{Hall-Wellner} bands the confidence coefficient \eqn{k_{\alpha}}{k[alpha]} is obtained from the upper \eqn{\alpha}{a}-th fractile of a Brownian bridge. \cr In this case \eqn{t_l} can be \eqn{=0}. 
\cr The intervals are: \itemize{ \item linear \deqn{\hat{S}(t) \pm k_{\alpha} \frac{1+n\sigma^2_s(t)}{\sqrt{n}} \hat{S}(t)}{ S(t)+- k[alpha] [1+n*sigma^2(t)]*S(t) / n^0.5 } \item log transform \deqn{[\hat{S}(t)^{\frac{1}{\theta}}, \hat{S}(t)^{\theta}]}{ [S(t)^(1/theta), S(t)^theta]} where \deqn{\theta = \exp{ \frac{k_{\alpha}[1+n\sigma^2_s(t)]}{ \sqrt{n}\log{\hat{S}(t)}}} }{ theta = exp(k[alpha] * [1 + n * sigma^2(t)] / n^0.5 * log(S(t)))} \item arcsine-square root transform \cr upper: \deqn{ \sin^2(\max[0, \arcsin{\sqrt{\hat{S}(t)}} - \frac{k_{\alpha}[1+n\sigma_s(t)]}{2\sqrt{n}} \sqrt{ \frac{\hat{S}(t)}{1-\hat{S}(t)}}]) }{ sin^2( max[0, arcsin S(t)^0.5 - k[alpha]*[1+n*sigma^2(t)]/(2*n^0.5) (S(t)/1-S(t))^0.5])} lower: \deqn{ \sin^2(\min[\frac{\pi}{2}, \arcsin{\sqrt{\hat{S}(t)}} + \frac{k_{\alpha}[1+n\sigma^2_s(t)]}{2\sqrt{n}} \sqrt{ \frac{\hat{S}(t)}{1-\hat{S}(t)}}]) }{ sin^2( min[pi/2, arcsin S(t)^0.5 - k[alpha]*[1+n*sigma^2(t)]/(2*n^0.5) (S(t)/1-S(t))^0.5])} } } \note{ \itemize{ \item For the Nair and Hall-Wellner bands, the function currently relies on the lookup tables in \code{package:km.ci}. \item Generally, the arcsin-square root transform has the best coverage properties. \item All bands have good coverage properties for samples as small as \eqn{n=20}, except for the \bold{Nair} / \bold{EP} bands with a linear transformation, which perform poorly when \eqn{n < 200}. } } \examples{ ## K&M 2nd ed. Section 4.3. Example 4.2, pg 105. data("bmt", package="KMsurv") b1 <- bmt[bmt$group==1, ] # ALL patients ## K&M 2nd ed. Section 4.4. Example 4.2 (cont.), pg 111. ## patients with ALL t1 <- ten(Surv(t2, d3) ~ 1, data=bmt[bmt$group==1, ]) ci(t1, how="nair", trans="lin", tL=100, tU=600) ## Table 4.5, pg. 111. lapply(list("lin", "log", "asi"), function(x) ci(t1, how="nair", trans=x, tL=100, tU=600)) ## Table 4.6, pg. 111. lapply(list("lin", "log", "asi"), function(x) ci(t1, how="hall", trans=x, tL=100, tU=600)) t1 <- ten(Surv(t2, d3) ~ group, data=bmt) ci(t1, CI="0.95", how="nair", trans="lin", tL=100, tU=600) ## stratified model data("pbc", package="survival") t1 <- ten(coxph(Surv(time, status==2) ~ log(bili) + age + strata(edema), data=pbc)) ci(t1) } \references{ Nair V, 1984. Confidence bands for survival functions with censored data: a comparative study. \emph{Technometrics}. \bold{26}(3):265-75. \samp{http://www.jstor.org/stable/1267553} JSTOR Hall WJ, Wellner JA, 1980. Confidence bands for a survival curve from censored data. \emph{Biometrika}. \bold{67}(1):133-43. \samp{http://www.jstor.org/stable/2335326} JSTOR } \seealso{ \code{\link{sf}} \code{\link{quantile}} } \keyword{survival} survMisc/man/nc.Rd0000744000176200001440000000237513317032427013541 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/nc.R \name{nc} \alias{nc} \alias{nc.ten} \alias{nc.stratTen} \title{Add \bold{n}umber \bold{c}ensored.} \usage{ nc(x, ...) \method{nc}{ten}(x, ...) \method{nc}{stratTen}(x, ...) } \arguments{ \item{x}{An object of class \code{ten} or \code{stratTen}.} \item{...}{Additional arguments (not implemented).} } \value{ The original object, with new column(s) added indicating the number censored at each time point, depending on \code{attr(x, "shape")}: \item{"long"}{the new column, \code{c}, gives the number censored at each timepoint, by covariate group.} \item{"wide"}{new columns, beginning with \code{c_}, give the number censored at each timepoint, by covariate group. 
There is an additional \code{nc} column giving the \emph{total} number censored at each timepoint.}
A \code{stratTen} object has each \code{ten} element in the \code{list} modified as above.
}
\description{
Add \bold{n}umber \bold{c}ensored.
}
\examples{
data("kidney", package="KMsurv")
t1 <- ten(survfit(Surv(time, delta) ~ type, data=kidney))
nc(t1)
nc(asWide(t1))
## stratified model
data("pbc", package="survival")
t1 <- ten(coxph(Surv(time, status==2) ~ log(bili) + age + strata(edema), data=pbc))
nc(t1)
}
survMisc/man/gastric.Rd0000744000176200001440000000262614223602357014576 0ustar liggesusers% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/gastric.R
\docType{data}
\name{gastric}
\alias{gastric}
\title{gastric cancer trial data}
\format{
A \code{data.frame} with \eqn{90} rows (observations) and \eqn{3} columns (variables).
}
\source{
Klein J, Moeschberger M. Survival Analysis, 2nd edition. Springer 2003.
Example 7.9, pg 224.
}
\description{
gastric cancer trial data
}
\details{
Data from a trial of locally unresectable gastric cancer.
\cr Patients (\eqn{n=45} in each group) were randomized to one of two groups: chemotherapy vs. chemotherapy + radiotherapy.
\cr Columns are:
\describe{
 \item{time}{Time, in days}
 \item{event}{Death}
 \item{group}{Treatment
  \describe{
   \item{0}{chemotherapy}
   \item{1}{chemotherapy + radiotherapy}
  }
 }
}
}
\examples{
data("gastric", package="survMisc", verbose=TRUE)
head(gastric)
}
\references{
Gastrointestinal Tumor Study Group, 1982. A comparison of combination chemotherapy and combined modality therapy for locally advanced gastric carcinoma. \emph{Cancer}. \bold{49}(9):1771-7. \cr \samp{dx.doi.org/10.1002/1097-0142(19820501)49:9<1771::AID-CNCR2820490907>3.0.CO;2-M} Wiley (free)

Stablein DM, Koutrouvelis IA, 1985. A two-sample test sensitive to crossing hazards in uncensored and singly censored data. \emph{Biometrics}. \bold{41}(3):643-52. \cr \samp{dx.doi.org/10.2307/2531284} JSTOR
}
\seealso{
Examples in \code{\link{comp}}
}
survMisc/man/cutp.Rd0000744000176200001440000001070014223603142014100 0ustar liggesusers% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/cutp.R
\name{cutp}
\alias{cutp}
\alias{cutp.coxph}
\alias{cutp.survfit}
\title{\bold{cut p}oint for a continuous variable in a model fit with \code{coxph} or \code{survfit}.}
\usage{
cutp(x, ...)

\method{cutp}{coxph}(x, ..., defCont = 3)

\method{cutp}{survfit}(x, ..., defCont = 3)
}
\arguments{
\item{x}{A \code{survfit} or \code{coxph} object.}

\item{...}{Additional arguments (not implemented).}

\item{defCont}{\bold{def}inition of a \bold{cont}inuous variable.
\cr If the variable has \eqn{>} \code{defCont} unique values, it is treated as continuous and a cut point is determined.}
}
\value{
A \code{list} of \code{data.table}s.
\cr There is one list element per continuous variable.
\cr Each has a column with possible values of the cut point (i.e. unique values of the variable), and the additional columns:
\item{U}{The score (log-rank) test for a model with the variable 'cut' into those \eqn{\geq}{>=} the cutpoint and those below.}
\item{Q}{The test statistic.}
\item{p}{The \eqn{p}-value.}
The tables are ordered by \eqn{p}-value, lowest first.
}
\description{
\bold{cut p}oint for a continuous variable in a model fit with \code{coxph} or \code{survfit}.

Determine the optimal cut point for a continuous variable in a \code{coxph} or \code{survfit} model.
} \details{ For a cut point \eqn{\mu}{mu}, of a predictor \eqn{K}, the variable is split into two groups, those \eqn{\geq \mu}{>= mu} and those \eqn{< \mu}{< mu}. \cr The score (or log-rank) statistic, \eqn{sc}, is calculated for each unique element \eqn{k} in \eqn{K} and uses \itemize{ \item \eqn{e_i^+}{e1[i]} the number of events \item \eqn{n_i^+}{n1[i]} the number at risk } in those above the cut point, respectively. \cr The basic statistic is \deqn{sc_k = \sum_{i=1}^D ( e_i^+ - n_i^+ \frac{e_i}{n_i} )}{ sc[k] = sum (e1[i] - n1[i] * e[i] / n[i])} \cr The sum is taken across times with observed events, to \eqn{D}, the largest of these. \cr It is normalized (standardized), in the case of censoring, by finding \eqn{\sigma^2}{s^2} which is: \deqn{\sigma^2 = \frac{1}{D - 1} \sum_i^D (1 - \sum_{j=1}^i \frac{1}{D+ 1 - j})^2}{ s^2 = (1 / (D - 1)) * sum[i:D](1 - sum[j:i](1 / (D - j + 1))^2 )} The test statistic is then \deqn{Q = \frac{\max |sc_k|}{\sigma \sqrt{D-1}}}{ Q = max(abs(sc[k])) / s * sqrt((D - 1))} Under the null hypothesis that the chosen cut point does \emph{not} predict survival, the distribution of \eqn{Q} has a limiting distibution which is the supremum of the absolute value of a Brownian bridge: \deqn{p = Pr(\sup Q \geq q) = 2 \sum_{i=1}^{\infty} (-1)^{i + 1} \exp (-2 i^2 q^2)}{ p= P(Q >= q) = 2 sum[i:Inf](-1)^(i + 1) * e^(-2 * i^2 *q^2)} } \examples{ ## Mandrekar et al. above data("bmt", package="KMsurv") b1 <- bmt[bmt$group==1, ] # ALL patients c1 <- coxph(Surv(t2, d3) ~ z1, data=b1) # z1=age c1 <- cutp(c1)$z1 data.table::setorder(c1, "z1") ## [] below is used to print data.table to console c1[] \dontrun{ ## compare to output from survival::coxph matrix( unlist( lapply(26:30, function(i) c(i, summary(coxph(Surv(t2, d3) ~ z1 >= i, data=b1))$sctest))), ncol=5, dimnames=list(c("age", "score_test", "df", "p"))) cutp(coxph(Surv(t2, d3) ~ z1, data=bmt[bmt$group==2, ]))$z1[] cutp(coxph(Surv(t2, d3) ~ z1, data=bmt[bmt$group==3, ]))[[1]][] ## K&M. Example 8.3, pg 273-274. data("kidtran", package="KMsurv") k1 <- kidtran ## patients who are male and black k2 <- k1[k1$gender==1 & k1$race==2, ] c2 <- coxph(Surv(time, delta) ~ age, data=k2) print(cutp(c2)) ## check significance of computed value summary(coxph(Surv(time, delta) ~ age >= 58, data=k2)) k3 <- k1[k1$gender==2 & k1$race==2, ] c3 <- coxph(Surv(time, delta) ~ age, data=k3) print(cutp(c3)) ## doesn't apply to binary variables e.g. gender print(cutp(coxph(Surv(time, delta) ~ age + gender, data=k1))) } } \references{ Contal C, O'Quigley J, 1999. An application of changepoint methods in studying the effect of age on survival in breast cancer. \emph{Computational Statistics & Data Analysis} \bold{30}(3):253--70. \doi{10.1016/S0167-9473(98)00096-6} Mandrekar JN, Mandrekar, SJ, Cha SS, 2003. Cutpoint Determination Methods in Survival Analysis using SAS. \emph{Proceedings of the 28th SAS Users Group International Conference (SUGI)}. Paper 261-28. \href{https://support.sas.com/resources/papers/proceedings/proceedings/sugi28/261-28.pdf}{SAS (free)} } survMisc/man/autoplotTen.Rd0000744000176200001440000002357614223603142015460 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/autoplotTen.R \name{autoplotTen} \alias{autoplotTen} \alias{autoplot} \alias{autoplot.ten} \alias{autoplot.stratTen} \alias{autoplot.survfit} \title{Generate a \code{ggplot} for a \code{survfit} or \code{ten} object} \usage{ autoplot(object, ...) 
\method{autoplot}{ten}( object, ..., title = "Marks show times with censoring", type = c("single", "CI", "fill"), alpha = 0.05, ciLine = 10, censShape = 3, palette = c("Dark2", "Set2", "Accent", "Paired", "Pastel1", "Pastel2", "Set1", "Set3"), jitter = c("none", "noEvents", "all"), tabTitle = "Number at risk by time", xLab = "Time", timeTicks = c("major", "minor", "days", "months", "custom"), times = NULL, yLab = "Survival", yScale = c("perc", "frac"), legend = TRUE, legTitle = "Group", legLabs = NULL, legOrd = NULL, titleSize = 15, axisTitleSize = 15, axisLabSize = 10, survLineSize = 0.5, censSize = 5, legTitleSize = 10, legLabSize = 10, fillLineSize = 0.05, tabTitleSize = 15, tabLabSize = 5, nRiskSize = 5 ) \method{autoplot}{stratTen}( object, ..., title = NULL, type = c("single", "CI", "fill"), alpha = 0.05, ciLine = 10, censShape = 3, palette = c("Dark2", "Set2", "Accent", "Paired", "Pastel1", "Pastel2", "Set1", "Set3"), jitter = c("none", "noEvents", "all"), tabTitle = "Number at risk by time", xLab = "Time", timeTicks = c("major", "minor", "days", "months", "custom"), times = NULL, yLab = "Survival", yScale = c("perc", "frac"), legend = TRUE, legTitle = "Group", legLabs = NULL, legOrd = NULL, titleSize = 15, axisTitleSize = 15, axisLabSize = 10, survLineSize = 0.5, censSize = 5, legTitleSize = 10, legLabSize = 10, fillLineSize = 0.05, tabTitleSize = 15, tabLabSize = 5, nRiskSize = 5 ) \method{autoplot}{survfit}( object, ..., title = "Marks show times with censoring", type = c("single", "CI", "fill"), alpha = 0.05, ciLine = 10, censShape = 3, palette = c("Dark2", "Set2", "Accent", "Paired", "Pastel1", "Pastel2", "Set1", "Set3"), jitter = c("none", "noEvents", "all"), tabTitle = "Number at risk by time", xLab = "Time", timeTicks = c("major", "minor", "weeks", "months", "custom"), times = NULL, yLab = "Survival", yScale = c("perc", "frac"), legend = TRUE, legLabs = NULL, legOrd = NULL, legTitle = "Group", titleSize = 15, axisTitleSize = 15, axisLabSize = 10, survLineSize = 0.5, censSize = 5, legTitleSize = 10, legLabSize = 10, fillLineSize = 0.05, tabTitleSize = 15, tabLabSize = 5, nRiskSize = 5, pVal = FALSE, sigP = 1, pX = 0.1, pY = 0.1 ) } \arguments{ \item{object}{An object of class \code{survfit}, \code{ten} or \code{stratTen}.} \item{...}{Additional arguments (not implemented).} \item{title}{Title for survival plot.} \item{type}{\code{type="single"} (the default) plots single lines. \describe{ \item{\code{type="CI"}}{Adds lines indicating confidence intervals (taken from \code{upper} and \code{lower} values of \code{survfit} object). \cr Higher values of \code{alpha} (opacity) are recommended for this, e.g. \code{alpha=0.8}.} \item{\code{type="fill"}}{Adds filled rectangles from the survival lines to the confidence intervals above.} }} \item{alpha}{Opacity of lines indicating confidence intervals or filled rectangles. Should be in range \eqn{0-1}. Lower = more transparent. \cr Larger values e.g. \code{alpha=0.7} are recommended for confidence intervals.} \item{ciLine}{Confidence interval line type. See 'line type specification' in \cr ?graphics::par} \item{censShape}{Shape of marks to indicate censored onservations. \cr Default is \code{3} which gives vertical ticks. \cr Use \code{censShape=10} for circular marks. See \cr ?graphics::points} \item{palette}{Options are taken from \href{https://colorbrewer2.org/}{color_brewer}. \itemize{ \item \code{palette="Dark2"} (the default) is recommended for \code{single} or \code{CI} plots. 
\item \code{palette="Set2"} is recommended for \code{type="fill"} plots. }} \item{jitter}{By default, \code{jitter="none"}. \itemize{ \item If \code{jitter="noEvents"}, adds some random, positive noise to survival lines with no events (i.e. all observations censored). This will bring them just above 1 on the y-axis, making them easier to see separately. \item If \code{jitter="all"} add some vertical and horizontal noise to all survival lines. This can prevent overlapping of lines for censoring. }} \item{tabTitle}{Table title. \cr \cr \bold{--Axis arguments:}} \item{xLab}{Label for \eqn{x} axis on survival plot.} \item{timeTicks}{Numbers to mark on the \eqn{x} axis of the survival plot and the table. \describe{ \item{\code{"major"}}{ (the default) only the major \eqn{x}-axis (time) marks from the survival plot are are labelled on the plot and table.} \item{\code{"minor"}}{minor axis marks are labelled instead.} \item{\code{"days"}}{scale is \eqn{0, 7, 14, ..., t_{max}}} \item{\code{"months"}}{scale is \eqn{0, 12,, 24, ..., t_{max}}} \item{\code{"custom"}}{scale is given by \code{times} below} }} \item{times}{Vector of custom times to use for \eqn{x} axis.} \item{yLab}{Label for \eqn{y} axis on survival plot.} \item{yScale}{Display for point on \eqn{y} axis: \describe{ \item{\code{"perc"}}{Displays as percentages.} \item{\code{"frac"}}{Displays as fractions e.g. \eqn{0, 0.1, 0.2, ..., 1.0.}} } \bold{--Legend arguments:} \cr} \item{legend}{If \code{legend=FALSE}, no legends will be produced for the plot or table.} \item{legTitle}{Legend title.} \item{legLabs}{Legend labels. These can be used to replace the names of the covariate groups ('strata' in the case of a \code{survfit} object). \cr Should be given in the same order as those strata.} \item{legOrd}{Legend order. \cr \cr \bold{--Size arguments:} \cr Size arguments are passed to \code{ggplot2::element_text(size=)}.} \item{titleSize}{Title size for survival plot.} \item{axisTitleSize}{Title size for axes.} \item{axisLabSize}{Title size for labels on axes.} \item{survLineSize}{Survival line size.} \item{censSize}{Size of marks to indicate censored onservations.} \item{legTitleSize}{Title size for legend.} \item{legLabSize}{Legend labels width and height.} \item{fillLineSize}{Line size surrouding filled boxes.} \item{tabTitleSize}{Table title text size.} \item{tabLabSize}{Table legend text size.} \item{nRiskSize}{Number at risk - text size. \cr \cr \bold{--Arguments for autoplot.survfit only:} \cr} \item{pVal}{If \code{pVal=TRUE}, adds \eqn{p} value from log-rank test to plot} \item{sigP}{No. of significant digits to display in \eqn{p} value. Typically \eqn{1} to \eqn{3}.} \item{pX}{Location of \eqn{p} value on \eqn{x} axis. \cr Should be in the range of \eqn{0 - 1}, where value is to be placed relative to the maximum observed time. \cr E.g. \code{pX = 0.5} will place it half-way along \eqn{x}-axis} \item{pY}{Location of \eqn{p} value on \eqn{y} axis. \cr Should be in the range of \eqn{0 - 1}, as above.} } \description{ Generate a \code{ggplot} for a \code{survfit} or \code{ten} object } \note{ \code{autoplot.survfit} may be deprecated after packageVersion 0.6. Please try to use \code{autoplot.ten} instead. 
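\cr \cr For instance (a sketch only, reusing the \code{kidney} data from the examples below), a call such as
\preformatted{
autoplot(survfit(Surv(time, delta) ~ type, data=kidney))
}
can usually be rewritten by wrapping the fit in \code{\link{ten}}:
\preformatted{
autoplot(ten(survfit(Surv(time, delta) ~ type, data=kidney)))
}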
} \examples{ ## examples are slow to run; see vignette for output from these \dontrun{ ### autoplot.ten data("kidney", package="KMsurv") t1 <- ten(survfit(Surv(time, delta) ~ type, data=kidney)) autoplot(t1) autoplot(t1, type="fill", survLineSize=2, jitter="all") autoplot(t1, timeTicks="months", type="CI", jitter="all", legLabs=c("surgical", "percutaneous"), title="Time to infection following catheter placement \n by type of catheter, for dialysis patients", titleSize=10, censSize=2)$plot t2 <- ten(survfit(Surv(time=time, event=delta) ~ 1, data=kidney)) autoplot(t2, legLabs="")$plot autoplot(t2, legend=FALSE) data("rectum.dat", package="km.ci") t3 <- ten(survfit(Surv(time, status) ~ 1, data=rectum.dat)) ## change confidence intervals to log Equal-Precision confidence bands ci(t3, how="nair", tL=1, tU=40) autoplot(t3, type="fill", legend=FALSE)$plot ## manually changing the output t4 <- ten(survfit(Surv(time, delta) ~ type, data=kidney)) (a4 <- autoplot(t4, type="CI", alpha=0.8, survLineSize=2)$plot) ## change default colors a4 + list(ggplot2::scale_color_manual(values=c("red", "blue")), ggplot2::scale_fill_manual(values=c("red", "blue"))) ## change limits of y-axis suppressMessages(a4 + ggplot2::scale_y_continuous(limits=c(0, 1))) } \dontrun{ data("pbc", package="survival") t1 <- ten(Surv(time, status==2) ~ trt + strata(edema), data=pbc, abbNames=FALSE) autoplot(t1) } ### autoplot.survfit \dontrun{ data(kidney, package="KMsurv") s1 <- survfit(Surv(time, delta) ~ type, data=kidney) autoplot(s1, type="fill", survLineSize=2) autoplot(s1, type="CI", pVal=TRUE, pX=0.3, legLabs=c("surgical", "percutaneous"), title="Time to infection following catheter placement \n by type of catheter, for dialysis patients")$plot s1 <- survfit(Surv(time=time, event=delta) ~ 1, data=kidney) autoplot(s1, legLabs="")$plot autoplot(s1, legend=FALSE)$plot data(rectum.dat, package="km.ci") s1 <- survfit(Surv(time, status) ~ 1, data=rectum.dat) ## change confidence intervals to log Equal-Precision confidence bands if (require("km.ci")) { km.ci::km.ci(s1, method="logep") autoplot(s1, type="fill", legend=FALSE)$plot } ## manually changing the output s1 <- survfit(Surv(time, delta) ~ type, data=kidney) g1 <- autoplot(s1, type="CI", alpha=0.8, survLineSize=2)$plot ## change default colors g1 + ggplot2::scale_colour_manual(values=c("red", "blue")) + ggplot2::scale_fill_manual(values=c("red", "blue")) ## change limits of y-axis g1 + ggplot2::scale_y_continuous(limits=c(0, 1)) } } \seealso{ ?ggplot2::ggplot_build } \author{ Chris Dardis. \code{autoplot.survfit} based on existing work by R. Saccilotto, Abhijit Dasgupta, Gil Tomas and Mark Cowley. } \keyword{hplot} \keyword{survival} survMisc/man/sf.Rd0000744000176200001440000001022214223602357013541 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/sf.R \name{sf} \alias{sf} \alias{sf.default} \alias{sf.ten} \alias{sf.stratTen} \alias{strat.Ten} \alias{sf.numeric} \title{\bold{s}urvival (or hazard) \bold{f}unction based on \eqn{e} and \eqn{n}.} \usage{ sf(x, ...) 
\method{sf}{default}(x, ..., what = c("S", "H"), SCV = FALSE, times = NULL)

\method{sf}{ten}(x, ..., what = c("S", "H"), SCV = FALSE, times = NULL, reCalc = FALSE)

\method{sf}{stratTen}(x, ..., what = c("S", "H"), SCV = FALSE, times = NULL, reCalc = FALSE)

\method{sf}{numeric}(
  x,
  ...,
  n = NULL,
  what = c("all", "S", "Sv", "H", "Hv"),
  SCV = FALSE,
  times = NULL
)
}
\arguments{
\item{x}{One of the following:
\describe{
 \item{default}{A numeric vector of event status (assumed sorted by time).}
 \item{numeric}{Vectors of events and numbers at risk (assumed sorted by time).}
 \item{ten}{A \code{ten} object.}
 \item{stratTen}{A \code{stratTen} object.}
}}

\item{...}{Additional arguments (not implemented).}

\item{what}{See return, below.}

\item{SCV}{Include the \bold{S}quared \bold{C}oefficient of \bold{V}ariation, which is calculated using the mean \eqn{\bar{x}}{mean(x)} and the variance \eqn{\sigma_x^2}{var(x)}:
\deqn{SCV_x = \frac{\sigma_x^2}{\bar{x}^2}}{
 SCV[x] = var(x) / mean(x)^2}
This measure of \emph{dispersion} is also referred to as the 'standardized variance' or the 'noise'.}

\item{times}{Times for which to calculate the function.
\cr If \code{times=NULL} (the default), times are used for which at least one event occurred in at least one covariate group.}

\item{reCalc}{Recalculate the values?
\cr If \code{reCalc=FALSE} (the default) and the \code{ten} object already has the calculated values stored as an \code{attribute}, the value of the \code{attribute} is returned directly.}

\item{n}{Number at risk.}
}
\value{
A \code{data.table} which is stored as an attribute of the \code{ten} object.
\cr If \code{what="S"}, the \bold{s}urvival is returned, based on the Kaplan-Meier or product-limit estimator.
This is \eqn{1} at \eqn{t=0} and thereafter is given by:
\deqn{\hat{S}(t) = \prod_{t_i \leq t} (1-\frac{e_i}{n_i})}{
 S[t] = prod(1 - e[t] / n[t])}
If \code{what="Sv"}, the \bold{s}urvival \bold{v}ariance is returned.
\cr Greenwood's estimator of the variance of the Kaplan-Meier (product-limit) estimator is:
\deqn{Var[\hat{S}(t)] = [\hat{S}(t)]^2 \sum_{t_i \leq t} \frac{e_i}{n_i (n_i - e_i)}}{
 Var(S[t]) = S[t]^2 sum e[t] / (n[t] * (n[t] - e[t]))}
If \code{what="H"}, the \bold{h}azard is returned, based on the Nelson-Aalen estimator.
This has a value of \eqn{\hat{H}=0}{H=0} at \eqn{t=0} and thereafter is given by:
\deqn{\hat{H}(t) = \sum_{t_i \leq t} \frac{e_i}{n_i}}{
 H[t] = sum(e[t] / n[t])}
If \code{what="Hv"}, the \bold{h}azard \bold{v}ariance is returned.
\cr The variance of the Nelson-Aalen estimator is given by:
\deqn{Var[\hat{H}(t)] = \sum_{t_i \leq t} \frac{e_i}{n_i^2}}{
 Var(H[t]) = sum(e / n^2)}
If \code{what="all"} (the default), \emph{all} of the above are returned in a \code{data.table}, along with:
\cr Survival, based on the Nelson-Aalen hazard estimator \eqn{H}, which is:
\deqn{\hat{S}_{na}=e^{-H}}{
 S[t] = exp(-H[t])}
Hazard, based on the Kaplan-Meier survival estimator \eqn{S}, which is:
\deqn{\hat{H}_{km} = -\log{S}}{
 H[t] = -log(S[t])}
}
\description{
\bold{s}urvival (or hazard) \bold{f}unction based on \eqn{e} and \eqn{n}.
}
\examples{
data("kidney", package="KMsurv")
k1 <- ten(Surv(time=time, event=delta) ~ type, data=kidney)
sf(k1)
sf(k1, times=1:10, reCalc=TRUE)
k2 <- ten(with(kidney, Surv(time=time, event=delta)))
sf(k2)
## K&M. Table 4.1A, pg 93.
## 6MP patients data("drug6mp", package="KMsurv") d1 <- with(drug6mp, Surv(time=t2, event=relapse)) (d1 <- ten(d1)) sf(x=d1$e, n=d1$n, what="S") data("pbc", package="survival") t1 <- ten(Surv(time, status==2) ~ log(bili) + age + strata(edema), data=pbc) sf(t1) ## K&M. Table 4.2, pg 94. data("bmt", package="KMsurv") b1 <- bmt[bmt$group==1, ] # ALL patients t2 <- ten(Surv(time=b1$t2, event=b1$d3)) with(t2, sf(x=e, n=n, what="Hv")) ## K&M. Table 4.3, pg 97. sf(x=t2$e, n=t2$n, what="all") } \keyword{survival} survMisc/man/asWide.Rd0000744000176200001440000000242013317032427014344 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/asWide.R \name{asWide} \alias{asWide} \alias{asWide.ten} \alias{asLong} \alias{asLong.ten} \title{Convert an object to "wide" or "long" form.} \usage{ asWide(x, ...) \method{asWide}{ten}(x, ...) asLong(x, ...) \method{asLong}{ten}(x, ...) } \arguments{ \item{x}{An object of class \code{ten} or \code{pred}.} \item{...}{Additional arguments (not implemented).} } \value{ A new \code{data.table} is returned, with the data in 'wide' or 'long' format. \cr There is one row for each time point. \cr For a \code{ten} object generated from a \code{numeric} or \code{Surv} object, this has columns: \item{t}{\bold{t}ime.} \item{e}{number of \bold{e}vents.} \item{n}{\bold{n}umber at risk.} If derived from a \code{survfit}, \code{coxph} or \code{formula} object, there are additional columns for \code{e} and \code{n} for \emph{each} covariate group. } \description{ Convert an object to "wide" or "long" form. } \note{ Most methods for \code{ten} objects are designed for the 'long' form. } \examples{ \dontrun{ data("bmt", package="KMsurv") require("survival") t1 <- ten(c1 <- coxph(Surv(t2, d3) ~ z3*z10, data=bmt)) asWide(t1) } \dontrun{ asLong(asWide(t1)) stopifnot(asLong(asWide(t1)) == ten(ten(t1))) } } survMisc/man/survMisc_package.Rd0000744000176200001440000000371614223602357016431 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/survMisc_package.R \docType{package} \name{survMisc_package} \alias{survMisc_package} \alias{survMisc} \title{Miscellaneous Functions for Survival Analysis} \description{ Miscellaneous Functions for Survival Analysis \tabular{ll}{ Package: \tab survMisc\cr Type: \tab Package\cr Version: \tab 0.5.5 \cr Date: \tab 2018-07-03\cr License: \tab GPL (>= 2) \cr LazyLoad: \tab yes } A collection of functions for the analysis of survival data. These extend the methods already available in \code{package:survival}. \cr The intent is to generate a workspace for some of the common tasks arising in survival analysis. \cr \cr There are references in many of the functions to the textbooks: \tabular{cl}{ \bold{K&M} \tab Klein J, Moeschberger M (2003). \emph{Survival Analysis}, 2nd edition. \cr \tab New York: Springer. \doi{10.1007/b97377} \cr \bold{T&G} \tab Therneau TM, Grambsch PM (2000). \emph{Modeling Survival Data: Extending the Cox Model}. \cr \tab New York: Springer. \doi{10.1007/978-1-4757-3294-8} } \subsection{Notes for developers}{ \itemize{ \item This package should be regarded as 'in development' until release 1.0, meaning that there may be changes to certain function names and parameters, although I will try to keep this to a minimum. As such it is recommended that other packages do \emph{not} depend on or import from this one until at least version 1.0. 
\item Naming tends to follow the \strong{camelCase} convention; variables within functions are typically alphanumeric e.g. \code{a1 <- 1}. } } For bug reports, feature requests or suggestions for improvement, please try to submit to \href{https://github.com/dardisco/survMisc/issues}{github}. Otherwise email me at the address below. } \author{ Chris Dardis \email{christopherdardis@gmail.com} } \concept{survival} \keyword{package} survMisc/man/ten.Rd0000744000176200001440000001537714223602357013737 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/ten.R \name{ten} \alias{ten} \alias{ten.numeric} \alias{ten.Surv} \alias{ten.coxph} \alias{ten.survfit} \alias{ten.formula} \alias{ten.data.frame} \alias{ten.data.table} \alias{ten.ten} \title{\bold{t}ime, \bold{e}vent(s) and \bold{n}umber at risk.} \usage{ ten(x, ...) \method{ten}{numeric}(x, ...) \method{ten}{Surv}(x, ..., call = NULL) \method{ten}{coxph}(x, ..., abbNames = TRUE, contrasts.arg = NULL) \method{ten}{survfit}(x, ..., abbNames = TRUE, contrasts.arg = NULL) \method{ten}{formula}(x, ..., abbNames = TRUE, contrasts.arg = NULL) \method{ten}{data.frame}(x, ..., abbNames = TRUE, contrasts.arg = NULL, call = NULL) \method{ten}{data.table}(x, ..., abbNames = TRUE, mm = NULL, call = NULL) \method{ten}{ten}(x, ..., abbNames = NULL, call = NULL) } \arguments{ \item{x}{For the default method, a \code{numeric} vector indicating an \emph{event} (or status). \cr Each element indicates whether an event occurred (\code{1}) or not (\code{0}) for an observation. \cr These are assumed to be ordered by discrete times. \cr This is similar to the \code{event} argument for \code{Surv} objects. \cr \cr Methods are available for objects of class \code{Surv}, \code{survfit}, \code{coxph} and \code{formula}.} \item{...}{Additional arguments (not implemented).} \item{call}{Used to pass the \code{call} from a \code{formula} to the final \code{ten.data.table} method.} \item{abbNames}{\bold{Abb}reviate names? \cr If \code{abbNames="TRUE"} (the default), the covariate groups are referred to by number. \cr As the names for each covariate group are made by concatenating the predictor names, the full names can become unwieldly. \cr If \code{abbNames="FALSE"}, the full names are given. \cr In either case, the \code{longNames} are given as an \code{attribute} of the returned \code{ten} object.} \item{contrasts.arg}{Methods for handling factors. \cr A \code{list}. The \code{names} are the names of columns of the \code{model.frame} containing \code{factor}s. \cr The \emph{values} are used as replacement values for the \code{stats::contrasts} replacement function. These should be functions (given as character strings) or numeric matrices. \cr This can be passed from \code{survfit}, \code{coxph} and \code{formula} objects to: \cr ?stats::model.matrix} \item{mm}{Used to pass the \code{model.matrix} from a \code{formula} to the final \code{ten.data.table} method.} } \value{ A \code{data.table} with the additional \code{class} \code{ten}. \cr By default, the shape returned is 'long' i.e. there is one row for each unique timepoint per covariate group. \cr The basic form, for a \code{numeric} or \code{Surv} object, has columns: \item{t}{\bold{t}ime.} \item{e}{number of \bold{e}vents.} \item{n}{\bold{n}umber at risk.} A \code{survfit}, \code{coxph} or \code{formula} object will have additional columns: \item{cg}{\bold{c}ovariate \bold{g}roup. 
This is formed by combining the variables; these are separated by a comma ','.} \item{ncg}{\bold{n}umber at risk, by \bold{c}ovariate \bold{g}roup} \bold{Special terms}. \cr \cr The following are considered 'special' terms in a survival model: \item{strata}{For a stratified model, \code{ten} returns a \code{list} with one element per strata, which is a \code{ten} object. \cr This has the class \code{stratTen}. The name of the list elements are those of the strata in the model.} \item{cluster}{These terms are dropped.} \item{tt}{The variable is unchanged. That is, time-transform terms are handled as if the the function \code{tt(x)} was \code{identity(x)}.} \bold{Attribures}. \cr The returned object will also have the following \code{attributes}: \item{shape}{The default is \code{"long"} but is changed to \code{"wide"} when \code{asWide} is called on the object.} \item{abbNames}{Abbreviate names?} \item{longNames}{A \code{data.table} with two columns, showing the abbrevbiated and full names.} \item{ncg}{Number of covariate groups} \item{call}{The call used to generate the object} \item{mm}{The \code{model.matrix} used to generate to generate the object, if applicable.} Additional attributes will be added by the following functions: \cr \code{\link{sf}} \code{\link{ci}} } \description{ \bold{t}ime, \bold{e}vent(s) and \bold{n}umber at risk. } \note{ The methods for \code{data.frame} (for a model frame) and \code{data.table} are not typically intended for interactive use. \cr \cr Currently only binary status and right-censoring are supported. \cr \cr In stratified models, only one level of stratification is supported (i.e. strata cannot be 'nested' currently). \cr \cr Partial matching is available for the following arguments, based on the characters in bold: \itemize{ \item \bold{abb}Names \item \bold{con}trasts.arg } } \examples{ require("survival") ## binary vector ten(c(1, 0, 1, 0, 1)) ## Surv object df0 <- data.frame(t=c(1, 1, 2, 3, 5, 8, 13, 21), e=rep(c(0, 1), 4)) s1 <- with(df0, Surv(t, e, type="right")) ten(s1) ## some awkward values suppressWarnings( s1 <- Surv(time=c(Inf, -1, NaN, NA, 10, 12), event=c(c(NA, 1, 1, NaN, Inf, 0.75)))) ten(s1) ## coxph object ## K&M. Section 1.2. Table 1.1, page 2. data("hodg", package="KMsurv") hodg <- data.table::data.table(hodg) data.table::setnames(hodg, c(names(hodg)[!names(hodg) \%in\% c("score", "wtime")], "Z1", "Z2")) c1 <- coxph(Surv(time=time, event=delta) ~ Z1 + Z2, data=hodg[gtype==1 & dtype==1, ]) ten(c1) data("bmt", package="KMsurv") ten(c1 <- coxph(Surv(t2, d3) ~ z3*z10, data=bmt)) ## T&G. Section 3.2, pg 47. ## stratified model data("pbc", package="survival") c1 <- coxph(Surv(time, status==2) ~ log(bili) + age + strata(edema), data=pbc) ten(c1) ## K&M. Example 7.2, pg 210. data("kidney", package="KMsurv") with(kidney[kidney$type==2, ], ten(Surv(time=time, event=delta))) s1 <- survfit(Surv(time=time, event=delta) ~ type, data=kidney) ten(s1)[e > 0, ] ## A null model is passed to ten.Surv (t1 <- with(kidney, ten(Surv(time=time, event=delta) ~ 0))) ## but the original call is preserved attr(t1, "call") ## survival::survfit doesn't accept interaction terms... \dontrun{ s1 <- survfit(Surv(t2, d3) ~ z3*z10, data=bmt)} ## but ten.formula does: ten(Surv(time=t2, event=d3) ~ z3*z10, data=bmt) ## the same is true for the '.' 
(dot operator) in formulas (t1 <- ten(Surv(time=t2, event=d3) ~ ., data=bmt)) ## impractical long names stored as an attribute attr(t1, "longNames") ## not typically intended to be called directly mf1 <- stats::model.frame(Surv(time, status==2) ~ age + strata(edema) + strata(spiders), pbc, drop.unused.levels = TRUE) ten(mf1) } \seealso{ \code{\link{asWide}} \code{\link{print}} } survMisc/man/autoplotTAP.Rd0000744000176200001440000000307014223602357015350 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/autoplotTAP.R \name{autoplotTableAndPlot} \alias{autoplotTableAndPlot} \alias{autoplot.tableAndPlot} \title{Arrange a survival plot with corresponding table and legend.} \usage{ \method{autoplot}{tableAndPlot}(object, ..., hideTabLeg = TRUE, tabHeight = 0.25) } \arguments{ \item{object}{An object of class \code{"tableAndPlot"}, as returned by \code{ggplot.Ten}.} \item{...}{Additional arguments (not implemented).} \item{hideTabLeg}{Hide table legend. \cr If \code{hideTabLeg = TRUE} (the default), the table legend will not appear.} \item{tabHeight}{Table height, as a fraction/ proportion of the whole. \cr \code{tabHeight=0.25} (the default) makes the table \eqn{0.25 = 25\%} of the whole plot height.} } \value{ A graph, plotted with \code{gridExtra::grid.arrange}. } \description{ Arrange a survival plot with corresponding table and legend. } \details{ Arguments to \code{plotHeigth} and \code{tabHeight} are best specified as fractions adding to \eqn{1}, \cr } \note{ This method is called by \code{\link{print.tableAndPlot}} and by \code{print.stratTableAndPlot}. } \examples{ \dontrun{ data("kidney", package="KMsurv") autoplot(survfit(Surv(time, delta) ~ type, data=kidney), type="fill") autoplot(ten(survfit(Surv(time, delta) ~ type, data=kidney)), type="fill") data("bmt", package="KMsurv") s2 <- survfit(Surv(time=t2, event=d3) ~ group, data=bmt) autoplot(s2) } } \author{ Chris Dardis. Based on existing work by R. Saccilotto, Abhijit Dasgupta, Gil Tomas and Mark Cowley. } \keyword{graphics} survMisc/man/rsq.Rd0000744000176200001440000000451113317043765013747 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/rsq.R \name{rsq} \alias{rsq} \alias{rsq.coxph} \alias{rsq.survfit} \title{r^2 measures for a a \code{coxph} or \code{survfit} model} \usage{ rsq(x, ...) \method{rsq}{coxph}(x, ..., sigD = 2) \method{rsq}{survfit}(x, ..., sigD = 2) } \arguments{ \item{x}{A \code{survfit} or \code{coxph} object.} \item{...}{Additional arguments (not implemented).} \item{sigD}{\bold{sig}nificant \bold{d}igits (for ease of display). 
If \code{sigD=NULL}, will return the original numbers.}
}
\value{
A \code{list} with the following elements:
\item{cod}{The \bold{c}oefficient \bold{o}f \bold{d}etermination, which is
\deqn{R^2=1-\exp(\frac{2}{n}(L_0-L_1))}{
 R^2 = 1-exp((2/n).(L[0]-L[1]))}
where \eqn{L_0}{L[0]} and \eqn{L_1}{L[1]} are the log partial likelihoods for the \emph{null} and \emph{full} models respectively and \eqn{n} is the number of observations in the data set.}
\item{mer}{The \bold{m}easure of \bold{e}xplained \bold{r}andomness, which is:
\deqn{R^2_{mer}=1-\exp(\frac{2}{m}(L_0-L_1))}{
 R^2[mer] = 1-exp((2/m).(L[0]-L[1]))}
where \eqn{m} is the number of observed \emph{events}.}
\item{mev}{The \bold{m}easure of \bold{e}xplained \bold{v}ariation (similar to that for linear regression), which is:
\deqn{R^2=\frac{R^2_{mer}}{R^2_{mer} + \frac{\pi}{6}(1-R^2_{mer})}}{
 R^2 = R^2[mer] / ( R^2[mer] + pi/6(1-R^2[mer]) )}
}
}
\description{
r^2 measures for a \code{coxph} or \code{survfit} model
}
\examples{
data("kidney", package="KMsurv")
c1 <- coxph(Surv(time=time, event=delta) ~ type, data=kidney)
cbind(rsq(c1), rsq(c1, sigD=NULL))
}
\references{
Nagelkerke NJD, 1991. A Note on a General Definition of the Coefficient of Determination. \emph{Biometrika} \bold{78}(3):691--92. \samp{http://www.jstor.org/stable/2337038} JSTOR

O'Quigley J, Xu R, Stare J, 2005. Explained randomness in proportional hazards models. \emph{Stat Med} \bold{24}(3):479--89. \samp{http://dx.doi.org/10.1002/sim.1946} Wiley (paywall) \samp{http://www.math.ucsd.edu/~rxu/igain2.pdf} UCSD (free)

Royston P, 2006. Explained variation for survival models. \emph{The Stata Journal} \bold{6}(1):83--96. \samp{http://www.stata-journal.com/sjpdf.html?articlenum=st0098}
}
survMisc/man/profLik.Rd0000744000176200001440000000365214223602357014550 0ustar liggesusers% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/profLik.R
\name{profLik}
\alias{profLik}
\title{Profile likelihood for coefficients in a \code{coxph} model}
\usage{
profLik(x, CI = 0.95, interval = 50, mult = c(0.1, 2), devNew = TRUE, ...)
}
\arguments{
\item{x}{A \code{coxph} model.}

\item{CI}{\bold{C}onfidence \bold{I}nterval.}

\item{interval}{Number of points over which to evaluate the coefficient.}

\item{mult}{\bold{Mult}iplier. The coefficient will be multiplied by the lower and upper values and evaluated across this range.}

\item{devNew}{Open a new device for each plot. See
\cr ?grDevices::dev.new}

\item{...}{Additional parameters passed to \code{graphics::plot.default}.}
}
\value{
One plot for each coefficient in the model.
}
\description{
Profile likelihood for coefficients in a \code{coxph} model
}
\details{
Plots a range of values for each coefficient in the model, together with the partial log-likelihood of the model with the coefficient fixed at each of these values.
\cr \cr
For each coefficient a range of possible values is chosen, given by \eqn{\hat{B}*mult_{lower} - \hat{B}*mult_{upper}}{ Bhat * mult[lower] - Bhat * mult[upper]}.
A series of models is fit (the number given by \code{interval}).
The coefficient is included in the model as a \emph{fixed} term and the partial log-likelihood for the model is calculated.
\cr \cr
A curve is plotted which gives the partial log-likelihood for each of these candidate values.
An appropriate confidence interval (CI) is given by subtracting 1/2 the value of the appropriate quantile of a chi-squared distribution with \eqn{1} degree of freedom.
\cr \cr
Two circles are also plotted giving the 95\% CI for the Wald statistic.
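\cr \cr
As a rough sketch of the idea (this is \emph{not} the function's internal code; the fixed value \code{b0} below is arbitrary, and the \code{pbc} data from the example are assumed), the partial log-likelihood with one coefficient held fixed can be obtained by moving that term into an \code{offset}:
\preformatted{
data("pbc", package="survival")
b0 <- 0.03  # an arbitrary fixed value for the coefficient of age
c2 <- coxph(Surv(time, status == 2) ~ edema + log(bili) + log(albumin) +
            log(protime) + offset(b0 * age), data = pbc)
c2$loglik[2]  # partial log-likelihood with the age coefficient fixed at b0
}
Repeating this over a grid of values for \code{b0} traces out the curve described above.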
} \examples{ data("pbc", package="survival") c1 <- coxph(formula = Surv(time, status == 2) ~ age + edema + log(bili) + log(albumin) + log(protime), data = pbc) profLik(c1, col="red") } \references{ Example is from: \bold{T&G}. Section 3.4.1, pg 57. } survMisc/man/comp.Rd0000744000176200001440000002316414223602357014100 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/comp.R \name{comp} \alias{comp} \alias{comp.ten} \title{compare survival curves} \usage{ comp(x, ...) \method{comp}{ten}(x, ..., p = 1, q = 1, scores = seq.int(attr(x, "ncg")), reCalc = FALSE) } \arguments{ \item{x}{A \code{tne} object} \item{...}{Additional arguments (not implemented).} \item{p}{\eqn{p} for Fleming-Harrington test} \item{q}{\eqn{q} for Fleming-Harrington test} \item{scores}{scores for tests for trend} \item{reCalc}{Recalcuate the values? \cr If \code{reCalc=FALSE} (the default) and the \code{ten} object already has the calculated values stored as an \code{attribute}, the value of the \code{attribute} is returned directly.} } \value{ The \code{tne} object is given additional \code{attributes}. \cr The following are always added: \item{lrt}{The \bold{l}og-\bold{r}ank family of \bold{t}ests} \item{lrw}{The \bold{l}og-\bold{r}ank \bold{w}eights (used in calculating the tests).} An additional item depends on the number of covariate groups. \cr If this is \eqn{=2}: \item{sup}{The \bold{sup}remum or Renyi family of tests} and if this is \eqn{>2}: \item{tft}{Tests for trend. This is given as a \code{list}, with the statistics and the scores used.} } \description{ compare survival curves } \details{ The \bold{log-rank} tests are formed from the following elements, with values for each time where there is at least one event: \itemize{ \item \eqn{W_i}{W[i]}, the weights, given below. \item \eqn{e_i}{e[i]}, the number of events (per time). \item \eqn{\hat{e_i}}{P[i]}, the number of \emph{predicted} events, given by \code{\link{predict}}. \item \eqn{COV_i}{COV[, , i]}, the covariance matrix for time \eqn{i}, given by \code{\link{COV}}. } It is calculated as: \deqn{Q_i = \sum{W_i (e_i - \hat{e}_i)}^T \sum{W_i \hat{COV_i} W_i^{-1}} \sum{W_i (e_i - \hat{e}_i)}}{ Q[i] = sum(W[i] * (e[i] - P[i]))^T * sum(W[i] * COV[, , i] * W[i])^-1 * sum(W[i] * (e[i] - P[i]))} If there are \eqn{K} groups, then \eqn{K-1} are selected (arbitrary). \cr Likewise the corresponding variance-covariance matrix is reduced to the appropriate \eqn{K-1 \times K-1}{K-1 * K-1} dimensions. \cr \eqn{Q} is distributed as chi-square with \eqn{K-1} degrees of freedom. \cr \cr For \eqn{2} covariate groups, we can use: \itemize{ \item \eqn{e_i}{e[i]} the number of events (per time). \item \eqn{n_i}{e[i]} the number at risk overall. \item \eqn{e1_i}{e1[i]} the number of events in group \eqn{1}. \item \eqn{n1_i}{n1[i]} the number at risk in group \eqn{1}. } Then: \deqn{Q = \frac{\sum{W_i [e1_i - n1_i (\frac{e_i}{n_i})]} }{ \sqrt{\sum{W_i^2 \frac{n1_i}{n_i} (1 - \frac{n1_i}{n_i}) (\frac{n_i - e_i}{n_i - 1}) e_i }}}}{ Q = sum(W[i] * (e1[i] - n1[i] * e[i] / n[i])) / sqrt(sum(W[i]^2 * e1[i] / e[i] * (1 - n1[i] / n[i]) * (n[i] - e[i] / (n[i] - 1)) *e[i]))} Below, for the Fleming-Harrington weights, \eqn{\hat{S}(t)}{S(t)} is the Kaplan-Meier (product-limit) estimator. \cr Note that both \eqn{p} and \eqn{q} need to be \eqn{\geq 0}{>=0}. 
\cr \cr The weights are given as follows: \tabular{cll}{ \eqn{1} \tab log-rank \tab \cr \eqn{n_i}{n[i]} \tab Gehan-Breslow generalized Wilcoxon \tab \cr \eqn{\sqrt{n_i}}{sqrt(n[i])} \tab Tarone-Ware \tab \cr \eqn{S1_i}{S1[i]} \tab Peto-Peto's modified survival estimate \tab \eqn{\bar{S}(t)=\prod{1 - \frac{e_i}{n_i + 1}}}{ S1(t) = cumprod(1 - e / (n + 1))} \cr \eqn{S2_i}{S2[i]} \tab modified Peto-Peto (by Andersen) \tab \eqn{\tilde{S}(t)=\bar{S} - \frac{n_i}{n_i + 1}}{ S2(t) = S1[i] * n[i] / (n[i] + 1) } \cr \eqn{FH_i}{FH[i]} \tab Fleming-Harrington \tab The weight at \eqn{t_0 = 1} and thereafter is: \eqn{\hat{S}(t_{i-1})^p [1-\hat{S}(t_{i-1})^q]}{ S(t[i - 1])^p * (1 - S(t)[i - 1]^q)} } The \bold{supremum (Renyi)} family of tests are designed to detect differences in survival curves which \emph{cross}. \cr That is, an early difference in survival in favor of one group is balanced by a later reversal. \cr The same weights as above are used. \cr They are calculated by finding \deqn{Z(t_i) = \sum_{t_k \leq t_i} W(t_k)[e1_k - n1_k\frac{e_k}{n_k}], \quad i=1,2,...,k}{ Z(t[i]) = SUM W(t[k]) [ e1[k] - n1[k]e[k]/n[k] ]} (which is similar to the numerator used to find \eqn{Q} in the log-rank test for 2 groups above). \cr and it's variance: \deqn{\sigma^2(\tau) = \sum_{t_k \leq \tau} W(t_k)^2 \frac{n1_k n2_k (n_k-e_k) e_k}{n_k^2 (n_k-1)} }{ simga^2(tau) = sum(k=1, 2, ..., tau) W(t[k]) (n1[k] * n2[k] * (n[k] - e[k]) * e[k] / n[k]^2 * (n[k] - 1) ] } where \eqn{\tau}{tau} is the largest \eqn{t} where both groups have at least one subject at risk. \cr \cr Then calculate: \deqn{ Q = \frac{ \sup{|Z(t)|}}{\sigma(\tau)}, \quad t<\tau }{ Q = sup( |Z(t)| ) / sigma(tau), t < tau} When the null hypothesis is true, the distribution of \eqn{Q} is approximately \deqn{Q \sim \sup{|B(x)|, \quad 0 \leq x \leq 1}}{ Q ~ sup( |B(x)|, 0 <= x <= 1)} And for a standard Brownian motion (Wiener) process: \deqn{Pr[\sup|B(t)|>x] = 1 - \frac{4}{\pi} \sum_{k=0}^{\infty} \frac{(- 1)^k}{2k + 1} \exp{\frac{-\pi^2(2k + 1)^2}{8x^2}}}{ Pr[sup|B(t)| > x] = 1 - 4 / pi sum((-1)^k / (2 * k + 1) * exp(-pi^2 (2k + 1)^2 / x^2))} \bold{Tests for trend} are designed to detect ordered differences in survival curves. \cr That is, for at least one group: \deqn{S_1(t) \geq S_2(t) \geq ... \geq S_K(t) \quad t \leq \tau}{ S1(t) >= S2(t) >= ... >= SK(t) for t <= tau} where \eqn{\tau}{tau} is the largest \eqn{t} where all groups have at least one subject at risk. The null hypothesis is that \deqn{S_1(t) = S_2(t) = ... = S_K(t) \quad t \leq \tau}{ S1(t) = S2(t) = ... = SK(t) for t <= tau} Scores used to construct the test are typically \eqn{s = 1,2,...,K}, but may be given as a vector representing a numeric characteristic of the group. \cr They are calculated by finding: \deqn{ Z_j(t_i) = \sum_{t_i \leq \tau} W(t_i)[e_{ji} - n_{ji} \frac{e_i}{n_i}], \quad j=1,2,...,K}{ Z[t(i)] = sum(W[t(i)] * (e[j](i) - n[j](i) * e(i) / n(i)))} The test statistic is: \deqn{Z = \frac{ \sum_{j=1}^K s_jZ_j(\tau)}{\sqrt{\sum_{j=1}^K \sum_{g=1}^K s_js_g \sigma_{jg}}} }{ Z = sum(j=1, ..., K) s[j] * Z[j] / sum(j=1, ..., K) sum(g=1, ..., K) s[j] * s[g] * sigma[jg]} where \eqn{\sigma}{sigma} is the the appropriate element in the variance-covariance matrix (see \code{\link{COV}}). \cr If ordering is present, the statistic \eqn{Z} will be greater than the upper \eqn{\alpha}{alpha}-th percentile of a standard normal distribution. } \note{ Regarding the Fleming-Harrington weights: \itemize{ \item \eqn{p = q = 0} gives the log-rank test, i.e. 
\eqn{W=1} \item \eqn{p=1, q=0} gives a version of the Mann-Whitney-Wilcoxon test (tests if populations distributions are identical) \item \eqn{p=0, q>0} gives more weight to differences later on \item \eqn{p>0, q=0} gives more weight to differences early on } The example using \code{alloauto} data illustrates this. Here the log-rank statistic has a p-value of around 0.5 as the late advantage of allogenic transplants is offset by the high early mortality. However using Fleming-Harrington weights of \eqn{p=0, q=0.5}, emphasising differences later in time, gives a p-value of 0.04. \cr Stratified models (\code{stratTen}) are \emph{not} yet supported. } \examples{ ## Two covariate groups data("leukemia", package="survival") f1 <- survfit(Surv(time, status) ~ x, data=leukemia) comp(ten(f1)) ## K&M 2nd ed. Example 7.2, Table 7.2, pp 209--210. data("kidney", package="KMsurv") t1 <- ten(Surv(time=time, event=delta) ~ type, data=kidney) comp(t1, p=c(0, 1, 1, 0.5, 0.5), q=c(1, 0, 1, 0.5, 2)) ## see the weights used attributes(t1)$lrw ## supremum (Renyi) test; two-sided; two covariate groups ## K&M 2nd ed. Example 7.9, pp 223--226. data("gastric", package="survMisc") g1 <- ten(Surv(time, event) ~ group, data=gastric) comp(g1) ## Three covariate groups ## K&M 2nd ed. Example 7.4, pp 212-214. data("bmt", package="KMsurv") b1 <- ten(Surv(time=t2, event=d3) ~ group, data=bmt) comp(b1, p=c(1, 0, 1), q=c(0, 1, 1)) ## Tests for trend ## K&M 2nd ed. Example 7.6, pp 217-218. data("larynx", package="KMsurv") l1 <- ten(Surv(time, delta) ~ stage, data=larynx) comp(l1) attr(l1, "tft") ### see effect of F-H test data("alloauto", package="KMsurv") a1 <- ten(Surv(time, delta) ~ type, data=alloauto) comp(a1, p=c(0, 1), q=c(1, 1)) } \references{ Gehan A. A Generalized Wilcoxon Test for Comparing Arbitrarily Singly-Censored Samples. Biometrika 1965 Jun. 52(1/2):203--23. \samp{http://www.jstor.org/stable/2333825} JSTOR Tarone RE, Ware J 1977 On Distribution-Free Tests for Equality of Survival Distributions. \emph{Biometrika};\bold{64}(1):156--60. \samp{http://www.jstor.org/stable/2335790} JSTOR Peto R, Peto J 1972 Asymptotically Efficient Rank Invariant Test Procedures. \emph{J Royal Statistical Society} \bold{135}(2):186--207. \samp{http://www.jstor.org/stable/2344317} JSTOR Fleming TR, Harrington DP, O'Sullivan M 1987 Supremum Versions of the Log-Rank and Generalized Wilcoxon Statistics. \emph{J American Statistical Association} \bold{82}(397):312--20. \samp{http://www.jstor.org/stable/2289169} JSTOR Billingsly P 1999 \emph{Convergence of Probability Measures.} New York: John Wiley & Sons. \samp{http://dx.doi.org/10.1002/9780470316962} Wiley (paywall) } survMisc/man/xtable.Rd0000744000176200001440000000304514223602357014415 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/xtable.R \name{xtable} \alias{xtable} \alias{xtable.table} \alias{xtable.survfit} \title{\code{xtable} methods} \usage{ xtable( x, caption = NULL, label = NULL, align = NULL, digits = NULL, display = NULL, ... ) \method{xtable}{table}( x, caption = paste0(paste(names(dimnames(x)), collapse = " $\\\\times$ "), "\\\\\\\\ chi-sq=", signif(suppressWarnings(stats::chisq.test(x)$p.value), digits)), label = NULL, align = c("l", rep("c", dim(x)[2])), digits = 2, display = NULL, ... ) \method{xtable}{survfit}( x, caption = paste0("Survival for ", deparse(x$call[[2]])), label = NULL, align = c("l", rep("c", 7)), digits = NULL, display = rep("fg", 8), ... 
) } \arguments{ \item{x}{An object with an xtable method.} \item{caption}{Caption.} \item{label}{Label.} \item{align}{Alignment of columns.} \item{digits}{Number of digits to display.} \item{display}{How to display - passed to \code{formatC}.} \item{...}{Additional arguments (not implemented).} } \value{ An \code{xtable}, suitable for use with, and parsing by, LaTeX. } \description{ \code{xtable} methods } \note{ \code{xtable.survfit} - this does \emph{not} show the (restricted) mean survival, only the median with confidence intervals. } \examples{ data("kidney", package="KMsurv") xtable(with(kidney, table(delta, type))) ## K&M. Example 7.2, pg 210. xtable(survfit(Surv(time=time, event=delta) ~ type, data=kidney)) } \seealso{ ? xtable ? xtable::print.xtable methods("xtable") } survMisc/man/print.Rd0000744000176200001440000000676414223602357014275 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/print.R \name{print} \alias{print} \alias{print.ten} \alias{print.COV} \alias{print.lrt} \alias{print.sup} \alias{print.tableAndPlot} \alias{print.stratTableAndPlot} \title{\code{print} methods} \usage{ \method{print}{ten}( x, ..., maxRow = getOption("datatable.print.nrows", 50L), nRowP = getOption("datatable.print.topn", 5L), pRowNames = TRUE, maxCol = getOption("survMisc.maxCol", 8L), nColSP = getOption("survMisc.nColSP", 7L), sigDig = getOption("survMisc.sigDig", 2L) ) \method{print}{COV}(x, ..., n = 2L) \method{print}{lrt}(x, ..., dist = c("n", "c")) \method{print}{sup}(x, ...) \method{print}{tableAndPlot}(x, ..., hideTabLeg = TRUE, tabHeight = 0.25) \method{print}{stratTableAndPlot}(x, ..., hideTabLeg = TRUE, tabHeight = 0.25) } \arguments{ \item{x}{An object of class \code{ten}.} \item{...}{Additional arguments (not implemented). \cr \cr \bold{--print.ten}} \item{maxRow}{Maximum number of rows to print. \cr If \code{nrow(x) > maxRow}, just the first and last \code{nRowP} (below) are printed. \cr The default value is that used by \code{data.table}.} \item{nRowP}{\bold{N}umber of rows to \bold{p}rint from the start and end of the object. Used if \code{nrow(x) > maxRow}.} \item{pRowNames}{Print row names? \cr Default is \code{TRUE}.} \item{maxCol}{Maximum number of columns to print. \cr If \code{ncol(x) > maxCol}, just the first \code{nColSP} and last \code{maxCol - nColSP} columns are printed.} \item{nColSP}{\bold{N}umber of \bold{col}umns to \bold{p}rint from the \bold{s}tart of the object. Used if \code{ncol(x) > maxCol}.} \item{sigDig}{\bold{Sig}nificant \bold{dig}its. This is passed as an argument to \cr ?signif \cr when preparing the object for printing. \cr \cr \bold{--print.tableAndPlot} and \bold{print.stratTableAndPlot}} \item{n}{Similar to \code{n} from e.g. \cr ?utils::head \cr \cr \bold{--print.lrt}} \item{dist}{Which distribution to use for the statistics when printing. \cr Default (\code{dist="n"}) prints \eqn{Z} and \eqn{p} values based on the normal distribution. \cr If \code{dist="c"}, gives values based on the \eqn{\chi^2}{chi-squared} distribution. \cr The results are the same. The default value is typically easier to read. Both options are given for completeness.} \item{hideTabLeg}{Hide table legend.} \item{tabHeight}{Table height (relative to whole plot). \cr \cr \bold{--print.COV}} } \value{ A printed representation of the object is sent to the terminal as a \emph{side effect} of calling the function. \cr The return value cannot be \code{assign}ed.
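For the console-based methods (e.g. \code{print.ten}), the displayed text can still be captured with \code{utils::capture.output} if needed.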
} \description{ \code{print} methods } \details{ Prints a \code{ten} object with 'nice' formatting. \cr Options may be set for a session using e.g. \cr options(survMisc.nColSP=4L) \cr It is similar to the behavior of \code{print.data.table} but has additional arguments controlling the number of columns sent to the terminal. } \note{ All numeric arguments to the function must be supplied as integers. } \examples{ set.seed(1) (x <- data.table::data.table(matrix(rnorm(1800), ncol=15, nrow=120))) data.table::setattr(x, "class", c("ten", class(x))) p1 <- print(x) stopifnot(is.null(p1)) x[1:80, ] x[0, ] (data.table::set(x, j=seq.int(ncol(x)), value=NULL)) } \seealso{ For \code{print.ten}: data.table:::print.data.table ?stats::printCoefmat options()$datatable.print.nrows sapply(c("datatable.print.nrows", "datatable.print.topn"), getOption) } \author{ Chris Dardis. Based on existing work by Brian Diggs. } survMisc/DESCRIPTION0000744000176200001440000000171714223606332013603 0ustar liggesusersPackage: survMisc Type: Package Version: 0.5.6 Date: 2022-04-07 Depends: survival Imports: graphics, grDevices, stats, utils, knitr, KMsurv, ggplot2, data.table, zoo, grid, gridExtra, km.ci, xtable Author: Chris Dardis Maintainer: Chris Dardis License: GPL-2 Title: Miscellaneous Functions for Survival Data Description: A collection of functions to help in the analysis of right-censored survival data. These extend the methods available in package:survival. BugReports: https://github.com/dardisco/survMisc/issues LazyData: true VignetteBuilder: knitr Collate: 'ten.R' 'nc.R' 'sf.R' 'ci.R' 'autoplotTAP.R' 'autoplotTen.R' 'print.R' 'asWide.R' 'COV.R' 'predict.R' 'comp.R' 'cutp.R' 'gastric.R' 'gof.R' 'onAttach.R' 'profLik.R' 'rsq.R' 'survMisc_package.R' 'xtable.R' NeedsCompilation: no RoxygenNote: 7.1.2 Packaged: 2022-04-07 15:42:53 UTC; cd Repository: CRAN Date/Publication: 2022-04-07 16:10:02 UTC survMisc/build/0000755000176200001440000000000014223603175013167 5ustar liggesuserssurvMisc/build/vignette.rds0000644000176200001440000000027214223603175015527 0ustar liggesusersb```b`a@, $؀XX84gAN~I^P^9+Xꂔ44 v1@i `aB64/17PvԂԼ?iN,/AQU▙ 7$apq2݀a>9`~oMI,F(WJbI^ZP?survMisc/build/partial.rdb0000644000176200001440000000007414223603163015312 0ustar liggesusersb```b`a 00 FN ͚Z d@$w7survMisc/vignettes/0000755000176200001440000000000014223603175014100 5ustar liggesuserssurvMisc/vignettes/plots.Rnw0000744000176200001440000001252413317012436015733 0ustar liggesusers\documentclass{article} % \VignetteIndexEntry{plots} % \VignetteEngine{knitr::knitr} \usepackage[]{graphicx} \usepackage[]{color} \usepackage{framed} %%% recommended with knitr \usepackage{alltt} \usepackage{mathtools} \usepackage[sc]{mathpazo} \usepackage{geometry} %% for large numbers of floats \usepackage{morefloats} %%% to keep floats in same section \usepackage[section]{placeins} %%% for tables > 1 page \usepackage{longtable} \usepackage{booktabs} \begin{document} \title{Examples of output from plotting functions} \author{C Dardis} \maketitle % knitr chunks <>= library("knitr") ### Set global chunk options opts_chunk$set(eval=TRUE, ## text results echo=TRUE, results=c('markup', 'asis', 'hold', 'hide')[1], collapse=FALSE, warning=TRUE, message=TRUE, error=TRUE, split=FALSE, include=TRUE, strip.white=TRUE, ## code decoration tidy=FALSE, prompt=FALSE, comment='##', highlight=TRUE, size='normalsize', background=c('#F7F7F7', colors()[479], c(0.1, 0.2, 0.3))[1], ## cache cache=FALSE, ## plots fig.path=c('figure', 'figure/minimal-')[1], fig.keep=c('high', 'none', 'all', 
'first', 'last')[1], fig.align=c('center', 'left', 'right', 'default')[1], fig.show=c('hold', 'asis', 'animate', 'hide')[1], dev=c('pdf', 'png', 'tikz')[2], fig.width=7, fig.height=7, #inches fig.env=c('figure', 'marginfigure')[1], fig.pos=c('', 'h', 't', 'b', 'p', 'H')[3]) ### Set R options options(formatR.arrow=TRUE, width=60) @ Some minimal examples showing the output of plots from the examples. \section{autoplot.Ten} The 'autoplot' function is a generic S3 method used by 'ggplot2'. \subsection{Simple examples} <>= data("kidney", package="KMsurv") t1 <- ten(survfit(Surv(time, delta) ~ type, data=kidney)) autoplot(t1) @ Now, we increase the line size and use jitter to prevent overlap; we also make the relative size of the table larger. <>= print(autoplot(t1, type="fill", survLineSize=2, jitter="all"), tabHeight=0.35) @ A more customized example follows. Note that we return only the element marked 'plot' from the result (which is a list with two elements). <>= autoplot(t1, timeTicks="months", type="CI", jitter="all", legLabs=c("surgical", "percutaneous"), title="Time to infection following catheter placement \n by type of catheter, for dialysis patients", titleSize=10, censSize=2)$plot @ Here we assign the result in order to modify the $y$ axis. <>= str(a1 <- autoplot(t1), max.level=1) ## check the output is what we want a1$plot + ggplot2::scale_y_continuous(limits=c(0.8, 1), name="Survival") ## this is one simple way a1 <- autoplot(t1) suppressMessages(a1$plot <- a1$plot + ggplot2::scale_y_continuous(limits=c(0.8, 1), name="Survival")) a1 ## or we can assign them as follows a1 <- autoplot(t1) ls(a1$plot$scales$scales[[3]]$super$super) is.environment(a1$plot$scales$scales[[3]]$super$super$limits) is.null(a1$plot$scales$scales[[3]]$super$super$limits) a1$plot$scales$scales[[3]]$super$super$limits <- c(0.8, 1) a1 @ \subsection{Modifying the legend} Reordering the legend labels (example with 3 groups). <>= data("bmt", package="KMsurv") b1 <- ten(Surv(time=t2, event=d3) ~ group, data=bmt) autoplot(b1) autoplot(b1, legOrd=c(1, 3, 2)) @ Here we also re-label the legend. <>= autoplot(b1, legOrd=c(3, 2, 1), legLabs=letters[1:3]) @ Now, let's put the legend inside the plot itself. <>= a2 <- autoplot(b1) ## ensure this is what we want a2$plot + ggplot2::theme(legend.position=c(0.75, 0.75)) a2$plot <- a2$plot + ggplot2::theme(legend.position=c(0.75, 0.75)) a2 @ \subsection{One group only} A number of options for plotting a line with just one group. <>= t2 <- ten(survfit(Surv(time=time, event=delta) ~ 1, data=kidney)) autoplot(t2, legLabs="")$plot autoplot(t2, legend=FALSE) @ \subsection{Using confidence bands} Here we change the default pointwise confidence intervals to bands. <>= data("rectum.dat", package="km.ci") t3 <- ten(survfit(Surv(time, status) ~ 1, data=rectum.dat)) ## change confidence intervals to confidence bands ci(t3, how="nair", tL=1, tU=40) autoplot(t3, type="fill", alpha=0.6, legend=FALSE) @ \subsection{More customization} If the output of 'autoplot.ten' is assigned, it can be modified in place. The list elements are ggplot2 objects which can be altered as usual. 
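A complete ggplot2 theme can also be applied to the plot component in the same way; the black-and-white theme used below is just one arbitrary choice, and any other theme should behave identically.
<>=
## apply a different ggplot2 theme to the plot component only
a3 <- autoplot(t1)
a3$plot <- a3$plot + ggplot2::theme_bw()
a3
@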
<>= ## manually changing the output t4 <- ten(survfit(Surv(time, delta) ~ type, data=kidney)) (a4 <- autoplot(t4, type="CI", alpha=0.8, survLineSize=2)$plot) ## change default colors suppressMessages(a4 + list( ggplot2::scale_color_manual(values=c("red", "blue")), ggplot2::scale_fill_manual(values=c("red", "blue")))) ## change limits of y-axis suppressMessages(a4 + ggplot2::scale_y_continuous(limits=c(0, 1))) @ \section{autoplot.StratTen} An example of the plots from a stratified model: <>= data("pbc", package="survival") t1 <- ten(Surv(time, status==2) ~ trt + strata(edema), data=pbc, abbNames=FALSE) suppressWarnings(str(a1 <- autoplot(t1), max.level=1)) a1 @ \section{profLik} Plotting profile likelihood. <>= data("pbc", package="survival") c1 <- survival::coxph(formula = Surv(time, status == 2) ~ age + edema + log(bili) + log(albumin) + log(protime), data = pbc) profLik(c1, col="red", devNew=FALSE) @ \end{document} survMisc/R/0000755000176200001440000000000014223603175012271 5ustar liggesuserssurvMisc/R/rsq.R0000744000176200001440000000630413317043625013226 0ustar liggesusers#' @name rsq #' @title r^2 measures for a a \code{coxph} or \code{survfit} model #' @description r^2 measures for a a \code{coxph} or \code{survfit} model #' #' @param x A \code{survfit} or \code{coxph} object. #' @param sigD \bold{sig}nificant \bold{d}igits (for ease of display). #' If \code{sigD=NULL}, will return the original numbers. #' @inheritParams sf.ten #' #' @return A \code{list} with the following elements: #' \item{cod}{The \bold{c}oefficient \bold{o}f \bold{d}etermination, which is #' \deqn{R^2=1-\exp(\frac{2}{n}L_0-L_1)}{ #' R^2 = 1-exp((2/n).(L[0]-L[1]))} #' where \eqn{L_0}{L[0]} and \eqn{L_1}{L[1]} are the log partial #' likelihoods for the \emph{null} and \emph{full} models respectively #' and \eqn{n} #' is the number of observations in the data set.} #' \item{mer}{The \bold{m}easure of \bold{e}xplained \bold{r}andomness, which is: #' \deqn{R^2_{mer}=1-\exp(\frac{2}{m}L_0-L_1)}{ #' R^2[mer] = 1-exp((2/m).(L[0]-L[1]))} #' where \eqn{m} is the number of observed \emph{events}.} #' \item{mev}{The \bold{m}easure of \bold{e}xplained \bold{v}ariation (similar to #' that for linear regression), which is: #' \deqn{R^2=\frac{R^2_{mer}}{R^2_{mer} + \frac{\pi}{6}(1-R^2_{mer})}}{ #' R^2 = R^2[mer] / ( R^2[mer] + pi/6(1-R^2[mer]) )} #' } #' #' @references Nagelkerke NJD, 1991. #' A Note on a General Definition of the Coefficient of Determination. #' \emph{Biometrika} \bold{78}(3):691--92. #' \samp{http://www.jstor.org/stable/2337038} JSTOR #' @references #' O'Quigley J, Xu R, Stare J, 2005. #' Explained randomness in proportional hazards models. #' \emph{Stat Med} \bold{24}(3):479--89. #' \samp{http://dx.doi.org/10.1002/sim.1946} Wiley (paywall) #' \samp{http://www.math.ucsd.edu/~rxu/igain2.pdf} UCSD (free) #' @references #' Royston P, 2006. #' Explained variation for survival models. #' \emph{The Stata Journal} \bold{6}(1):83--96. #' \samp{http://www.stata-journal.com/sjpdf.html?articlenum=st0098} #' #' @rdname rsq #' @export #' rsq <- function(x, ...) 
UseMethod("rsq") #' #' @rdname rsq #' @export #' @method rsq coxph #' @aliases rsq.coxph #' @examples #' data("kidney", package="KMsurv") #' c1 <- coxph(Surv(time=time, event=delta) ~ type, data=kidney) #' cbind(rsq(c1), rsq(c1, sigD=NULL)) #' rsq.coxph <- function(x, ..., sigD=2){ stopifnot(inherits(x, "coxph")) l0 <- x$loglik[1] l1 <- x$loglik[2] n1 <- x$n ne1 <- x$nevent res1 <- vector(mode="list", length = 3L) names(res1) <- c("cod", "mer", "mev") res1$cod <- 1 - exp((2 / n1) * (l0 - l1)) res1$mer <- 1 - exp((2 / ne1) * (l0 - l1)) res1$mev <- res1$mer / (res1$mer + pi^2 / 6 * (1 - res1$mer)) if (is.null(sigD)) return(res1) res1 <- lapply(res1, function(N) as.numeric(formatC(signif(N, digits = sigD), digits=sigD, format="fg", flag="#"))) return(res1) } #' #' @rdname rsq #' @export #' @method rsq survfit #' @aliases rsq.survfit #' rsq.survfit <- function(x, ..., sigD=2){ c1 <- deparse(x$call) c1 <- sub("survfit", "coxph", c1) c1 <- eval(parse(text=c1)) rsq(c1, sigD=sigD) } survMisc/R/autoplotTen.R0000744000176200001440000007627714223603013014745 0ustar liggesusers#' @name autoplotTen #' @title Generate a \code{ggplot} for a \code{survfit} or \code{ten} object #' @description Generate a \code{ggplot} for a \code{survfit} or \code{ten} object #' #' @include ten.R #' @include print.R #' @include autoplotTAP.R #' @include sf.R #' @include ci.R #' @include nc.R #' #' @param object An object of class \code{survfit}, \code{ten} or \code{stratTen}. #' @param ... Additional arguments (not implemented). #' @param title Title for survival plot. #' @param type \code{type="single"} (the default) plots single lines. #' \describe{ #' \item{\code{type="CI"}}{Adds lines indicating #' confidence intervals (taken from \code{upper} and \code{lower} #' values of \code{survfit} object). #' \cr #' Higher values of \code{alpha} (opacity) are recommended for this, #' e.g. \code{alpha=0.8}.} #' \item{\code{type="fill"}}{Adds filled rectangles from the survival lines to #' the confidence intervals above.} #' } #' @param alpha Opacity of lines indicating confidence intervals #' or filled rectangles. Should be in range \eqn{0-1}. Lower = more transparent. #' \cr #' Larger values e.g. \code{alpha=0.7} are recommended for confidence #' intervals. #' @param ciLine Confidence interval line type. See 'line type specification' in #' \cr #' ?graphics::par #' @param censShape Shape of marks to indicate censored onservations. #' \cr #' Default is \code{3} which gives vertical ticks. #' \cr #' Use \code{censShape=10} for circular marks. See #' \cr #' ?graphics::points #' @param palette Options are taken from #' \href{https://colorbrewer2.org/}{color_brewer}. #' \itemize{ #' \item \code{palette="Dark2"} (the default) is recommended for #' \code{single} or \code{CI} plots. #' \item \code{palette="Set2"} is recommended for \code{type="fill"} plots. #' } #' @param jitter By default, \code{jitter="none"}. #' \itemize{ #' \item If \code{jitter="noEvents"}, adds some random, positive noise #' to survival lines with no events (i.e. all observations censored). #' This will bring them just above 1 on the y-axis, making them easier to see separately. #' \item If \code{jitter="all"} add some vertical #' and horizontal noise to all survival lines. This can prevent overlapping #' of lines for censoring. #' } #' @param tabTitle Table title. #' \cr \cr #' \bold{--Axis arguments:} #' @param xLab Label for \eqn{x} axis on survival plot. #' @param timeTicks Numbers to mark on the \eqn{x} axis of #' the survival plot and the table. 
#' \describe{ #' \item{\code{"major"}}{ #' (the default) only the major \eqn{x}-axis (time) marks from the #' survival plot are are labelled on the plot and table.} #' \item{\code{"minor"}}{minor axis marks are labelled instead.} #' \item{\code{"days"}}{scale is \eqn{0, 7, 14, ..., t_{max}}} #' \item{\code{"months"}}{scale is \eqn{0, 12,, 24, ..., t_{max}}} #' \item{\code{"custom"}}{scale is given by \code{times} below} #' } #' @param times Vector of custom times to use for \eqn{x} axis. #' @param yLab Label for \eqn{y} axis on survival plot. #' @param yScale Display for point on \eqn{y} axis: #' \describe{ #' \item{\code{"perc"}}{Displays as percentages.} #' \item{\code{"frac"}}{Displays as fractions e.g. \eqn{0, 0.1, 0.2, ..., 1.0.}} #' } #' \bold{--Legend arguments:} #' \cr #' @param legend If \code{legend=FALSE}, no legends will be produced #' for the plot or table. #' @param legTitle Legend title. #' @param legLabs Legend labels. These can be used to replace the names #' of the covariate groups ('strata' in the case of a \code{survfit} object). #' \cr #' Should be given in the same order as those strata. #' @param legOrd Legend order. #' \cr \cr #' \bold{--Size arguments:} #' \cr #' Size arguments are passed to \code{ggplot2::element_text(size=)}. #' @param titleSize Title size for survival plot. #' @param axisTitleSize Title size for axes. #' @param axisLabSize Title size for labels on axes. #' @param survLineSize Survival line size. #' #' @param legTitleSize Title size for legend. #' @param legLabSize Legend labels width and height. #' @param censSize Size of marks to indicate censored onservations. #' @param fillLineSize Line size surrouding filled boxes. #' @param tabTitleSize Table title text size. #' @param tabLabSize Table legend text size. #' @param nRiskSize Number at risk - text size. #' \cr \cr #' \bold{--Arguments for autoplot.survfit only:} #' \cr #' @param pVal If \code{pVal=TRUE}, adds \eqn{p} value from #' log-rank test to plot #' @param sigP No. of significant digits to display in \eqn{p} value. #' Typically \eqn{1} to \eqn{3}. #' @param pX Location of \eqn{p} value on \eqn{x} axis. #' \cr #' Should be in the range of \eqn{0 - 1}, #' where value is to be placed relative to the maximum observed #' time. #' \cr #' E.g. \code{pX = 0.5} will place it half-way along \eqn{x}-axis #' @param pY Location of \eqn{p} value on \eqn{y} axis. #' \cr #' Should be in the range of \eqn{0 - 1}, as above. #' #' @author Chris Dardis. \code{autoplot.survfit} based on existing work by #' R. Saccilotto, Abhijit Dasgupta, Gil Tomas and Mark Cowley. #' #' @note \code{autoplot.survfit} may be deprecated after packageVersion 0.6. #' Please try to use \code{autoplot.ten} instead. #' #' @keywords hplot #' @keywords survival #' #' @seealso ?ggplot2::ggplot_build #' #' @rdname autoplotTen #' @export #' autoplot <- function (object, ...) 
UseMethod("autoplot") #' #' @rdname autoplotTen #' @method autoplot ten #' @aliases autoplot.ten #' @export #' @examples #' ## examples are slow to run; see vignette for output from these #' \dontrun{ #' ### autoplot.ten #' data("kidney", package="KMsurv") #' t1 <- ten(survfit(Surv(time, delta) ~ type, data=kidney)) #' autoplot(t1) #' autoplot(t1, type="fill", survLineSize=2, jitter="all") #' autoplot(t1, timeTicks="months", #' type="CI", jitter="all", #' legLabs=c("surgical", "percutaneous"), #' title="Time to infection following catheter placement \n #' by type of catheter, for dialysis patients", #' titleSize=10, censSize=2)$plot #' t2 <- ten(survfit(Surv(time=time, event=delta) ~ 1, data=kidney)) #' autoplot(t2, legLabs="")$plot #' autoplot(t2, legend=FALSE) #' data("rectum.dat", package="km.ci") #' t3 <- ten(survfit(Surv(time, status) ~ 1, data=rectum.dat)) #' ## change confidence intervals to log Equal-Precision confidence bands #' ci(t3, how="nair", tL=1, tU=40) #' autoplot(t3, type="fill", legend=FALSE)$plot #' ## manually changing the output #' t4 <- ten(survfit(Surv(time, delta) ~ type, data=kidney)) #' (a4 <- autoplot(t4, type="CI", alpha=0.8, survLineSize=2)$plot) #' ## change default colors #' a4 + list(ggplot2::scale_color_manual(values=c("red", "blue")), #' ggplot2::scale_fill_manual(values=c("red", "blue"))) #' ## change limits of y-axis #' suppressMessages(a4 + ggplot2::scale_y_continuous(limits=c(0, 1))) #' } autoplot.ten <- function(object, ..., title="Marks show times with censoring", type=c("single", "CI", "fill"), alpha=0.05, ciLine=10, censShape=3, palette=c("Dark2", "Set2", "Accent", "Paired", "Pastel1", "Pastel2", "Set1", "Set3"), jitter=c("none", "noEvents", "all"), tabTitle="Number at risk by time", xLab="Time", timeTicks=c("major", "minor", "days", "months", "custom"), times=NULL, yLab="Survival", yScale=c("perc", "frac"), legend=TRUE, legTitle="Group", legLabs=NULL, legOrd=NULL, titleSize=15, axisTitleSize=15, axisLabSize=10, survLineSize=0.5, censSize=5, legTitleSize=10, legLabSize=10, fillLineSize=0.05, tabTitleSize=15, tabLabSize=5, nRiskSize=5) { stopifnot(inherits(object, "ten")) stopifnot(alpha > 0 & alpha < 1) ## add no. censored nc(object) ## confidence intervals for plot dt1 <- data.table::copy(ci(object)) dt1[, c("Sv", "SCV") := NULL] dt1 <- merge(object[, list(cg, t, nc)], dt1, all.x=FALSE, all.y=FALSE, by=c("cg", "t")) if (!is.null(legOrd)) { stopifnot(length(unique(legOrd))==length(unique(dt1[, cg]))) stopifnot(all(legOrd %in% dt1[, seq.int(length(cg))])) } ## make two extra rows for each covariate group ## for t=0 to t=time of first event dt2 <- data.table::rbindlist(list(dt1[, .SD[1, ], by=cg], dt1[, .SD[1, ], by=cg])) ## set surv, upper and lower to one dt2[, c("S", "lower", "upper") := list(1), by=cg] ## set initial time and no. censored to zero dt2[seq.int(unique(dt2$cg)), c("t", "nc") := list(0L)] ## reorder to allow binding dt1 <- data.table::rbindlist(list(dt2, dt1)) ### jitter jitter <- match.arg(jitter) ## for groups with no events add random no.to survival (by strata) if (jitter=="noEvents") { ## add column to indicate no. 
events by group dt1[, s1 := sum(n), by=list(cg)] dt1[s1==0, S := S + (stats::runif(1, 0.01, 0.05)), by=cg] } if(jitter=="all"){ ## for groups with no events add random no.to survival (by strata) dt1[, S := S + (stats::runif(1, 0.01, 0.05)), by=cg] dt1[, t := abs(jitter(t, factor=0.5))] } ### if (attr(object, "abbNames")) { na1 <- attr(object, "longNames")[, id] ## abbreviate function (used later) abbFn <- identity } else { na1 <- attr(object, "longNames")[, longName] abbFn <- as.integer } if (is.null(legLabs)) { dt1[, "cg" := factor(cg, labels=na1)] } else { stopifnot(length(legLabs)==length(unique(object$cg))) dt1[, "cg" := factor(cg, labels=legLabs)] } if (is.null(legOrd)) legOrd <- dt1[, seq.int(levels(cg))] ### ### plot single lines only ### g1 <- ggplot(data=dt1, aes(group=cg, color=cg, fill=cg)) + geom_step(aes(x=t, y=S), direction="hv", size=survLineSize) ### type <- match.arg(type) if (type=="CI") { g1 <- g1 + geom_step(aes(x=t, y=upper), direction="hv", linetype=ciLine, alpha=alpha) + geom_step(aes(x=t, y=lower), direction="hv", linetype=ciLine, alpha=alpha) } if (type=="fill") { ## copy dt1 to work allow further work dt2 <- dt1[, list(l=unique(lower), u=unique(upper), minT=as.numeric(min(t)), t=as.numeric(t) ), by=list(S, cg)] ## make max. time column dt2[, "maxT" := c(minT[2:length(minT)], Inf), by=cg] ## merge columns dt1 <- merge(dt1, dt2, by=c("t", "S", "cg"), all.y=TRUE) dt1 <- dt1[order(cg)] ## add shading g1 <- g1 + geom_rect(data=dt1, aes(x=NULL, y=NULL, ymax=S, ymin=l, xmax=maxT, xmin=minT, color=cg, group=cg, fill=cg), alpha=alpha, size=fillLineSize) + geom_rect(data=dt1, aes(x=NULL, y=NULL, ymax=u, ymin=S, xmax=maxT, xmin=minT, color=cg, group=cg, fill=cg), alpha=alpha, size=fillLineSize) } ## add lines to show times where subjects censored if (any(dt1[, nc >= 1])) { g1 <- g1 + geom_point(data=dt1[nc >= 1, ], aes(x=t, y=S), shape=censShape, size=censSize) } ### palette + legend ## use palette Dark2 for prominent shades ## (suitable for colorblind) ## use palette Set2 for lighter shades as large fill area palette <- match.arg(palette) g1 <- g1 + scale_color_brewer(type="qual", breaks=dt1[, levels(cg)[legOrd]], palette=palette, guide=guide_legend( title=legTitle)) + scale_fill_brewer(type="qual", breaks=dt1[, levels(cg)[legOrd]], palette=palette, guide=guide_legend( title=legTitle)) ### scales g1 <- g1 + ggtitle(title) yScale <- match.arg(yScale) if (yScale=="frac") { g1 <- g1 + scale_y_continuous(yLab) } else { y1 <- ggplot_build(g1)$panel$ranges[[1L]]$y.major_source g1 <- g1 + scale_y_continuous(yLab, breaks=y1, labels=paste0(y1 * 100, "%")) } ## times to show ## use marks from existing plot timeTicks <- match.arg(timeTicks) x1 <- get("range", envir=get("range", envir=layer_scales(g1)$x)) times1 <- switch(EXPR=timeTicks, major=ggplot_build(g1)$layout$panel_ranges[[1]]$x.major_source, minor=ggplot_build(g1)$layout$panel_ranges[[1]]$x.minor_source, custom=NaN, days=seq(from=min(x1), to=max(x1), by=7L), months=seq(from=min(x1), to=max(x1), by=12L)) if (is.nan(times1[1])) times1 <- times ## x axis g1 <- g1 + scale_x_continuous(name=xLab, breaks=times1) ## font sizes g1 <- g1 + theme(title=element_text(size=titleSize), legend.text=element_text(size=legLabSize), legend.title=element_text(size=legTitleSize), axis.text=element_text(size=axisLabSize), axis.title=element_text(size=axisTitleSize)) ### data for table of number at risk dt3 <- data.table::data.table("t"=times1) cg1 <- seq.int(attr(object, "ncg")) ## time, no. 
at risk, covariate group tnc1 <- lapply(cg1, FUN=function(cg1) { r1 <- data.table::setkey(object[abbFn(cg)==cg1, ncg, by=t], t) r1[dt3, roll=-Inf][, ncg] }) tnc1 <- data.table::data.table( "t"=rep(times1, attr(object, "ncg")), "n"=unlist(tnc1), "cg"=as.factor(rep(na1, each=length(times1)))) ## table g2 <- ggplot(data=tnc1, aes(x=t, y=cg, shape=cg)) + geom_point(size=0) + geom_text(aes(label=n), color=1, size=nRiskSize) + scale_x_continuous(name=xLab, limits=c(0, max(dt1[, t])), breaks=times1) + scale_y_discrete(name=legTitle, breaks=levels(tnc1$cg), labels=levels(tnc1$cg)) + ggtitle(tabTitle) + theme(axis.text=element_text(size=axisLabSize), axis.title=element_text(size=axisTitleSize), plot.title=element_text(size=tabTitleSize), legend.title=element_text(size=tabLabSize), legend.text=element_text(size=tabLabSize)) + guides(shape=guide_legend(title=legTitle, keywidht=tabLabSize, keyheight=tabLabSize)) ## remove legend if (!legend) { g1 <- g1 + theme(legend.position="none") g2 <- g2 + theme(legend.position="none") } res1 <- list("table"=g2, "plot"=g1) class(res1) <- c("tableAndPlot", "list") return(res1) } #' #' @rdname autoplotTen #' @method autoplot stratTen #' @aliases autoplot.stratTen #' @export #' @examples #' \dontrun{ #' data("pbc", package="survival") #' t1 <- ten(Surv(time, status==2) ~ trt + strata(edema), data=pbc, abbNames=FALSE) #' autoplot(t1) #' } autoplot.stratTen <- function(object, ..., title=NULL, type=c("single", "CI", "fill"), alpha=0.05, ciLine=10, censShape=3, palette=c("Dark2", "Set2", "Accent", "Paired", "Pastel1", "Pastel2", "Set1", "Set3"), jitter=c("none", "noEvents", "all"), tabTitle="Number at risk by time", xLab="Time", timeTicks=c("major", "minor", "days", "months", "custom"), times=NULL, yLab="Survival", yScale=c("perc", "frac"), legend=TRUE, legTitle="Group", legLabs=NULL, legOrd=NULL, titleSize=15, axisTitleSize=15, axisLabSize=10, survLineSize=0.5, censSize=5, legTitleSize=10, legLabSize=10, fillLineSize=0.05, tabTitleSize=15, tabLabSize=5, nRiskSize=5) { res1 <- lapply(object, autoplot, title=title, type=type, alpha=alpha, ciLine=ciLine, censShape=3, palette=palette, jitter=jitter, tabTitle=tabTitle, xLab=xLab, timeTicks=timeTicks, times=times, yLab=yLab, yScale=yScale, legend=TRUE, legTitle=legTitle, legLabs=legLabs, legOrd=legOrd, titleSize=titleSize, axisTitleSize=axisTitleSize, axisLabSize=axisLabSize, survLineSize=survLineSize, censSize=censSize, legTitleSize=legTitleSize, legLabSize=legLabSize, fillLineSize=fillLineSize, tabTitleSize=tabTitleSize, tabLabSize=tabLabSize, nRiskSize=nRiskSize) if (is.null(title)) { if (attr(object, "abbNames")) { title <- attr(object, "longNames")[, id] } else { title <- attr(object, "longNames")[, longName] } } else { title <- rep(title, length(object)) } for (i in seq.int(length(object))){ res1[[i]][[2]] <- res1[[i]][[2]] + ggplot2::ggtitle(title[i]) } data.table::setattr(res1, "class", c("stratTableAndPlot", class(res1))) return(res1) } #' #' @rdname autoplotTen #' @method autoplot survfit #' @aliases autoplot.survfit #' @export #' @examples #' ### autoplot.survfit #' \dontrun{ #' data(kidney, package="KMsurv") #' s1 <- survfit(Surv(time, delta) ~ type, data=kidney) #' autoplot(s1, type="fill", survLineSize=2) #' autoplot(s1, type="CI", pVal=TRUE, pX=0.3, #' legLabs=c("surgical", "percutaneous"), #' title="Time to infection following catheter placement \n #' by type of catheter, for dialysis patients")$plot #' s1 <- survfit(Surv(time=time, event=delta) ~ 1, data=kidney) #' autoplot(s1, legLabs="")$plot #' 
autoplot(s1, legend=FALSE)$plot #' data(rectum.dat, package="km.ci") #' s1 <- survfit(Surv(time, status) ~ 1, data=rectum.dat) #' ## change confidence intervals to log Equal-Precision confidence bands #' if (require("km.ci")) { #' km.ci::km.ci(s1, method="logep") #' autoplot(s1, type="fill", legend=FALSE)$plot #' } #' ## manually changing the output #' s1 <- survfit(Surv(time, delta) ~ type, data=kidney) #' g1 <- autoplot(s1, type="CI", alpha=0.8, survLineSize=2)$plot #' ## change default colors #' g1 + ggplot2::scale_colour_manual(values=c("red", "blue")) + #' ggplot2::scale_fill_manual(values=c("red", "blue")) #' ## change limits of y-axis #' g1 + ggplot2::scale_y_continuous(limits=c(0, 1)) #' } autoplot.survfit <- function(object, ..., title="Marks show times with censoring", type=c("single", "CI", "fill"), alpha=0.05, ciLine=10, censShape=3, palette=c("Dark2", "Set2", "Accent", "Paired", "Pastel1", "Pastel2", "Set1", "Set3"), jitter=c("none", "noEvents", "all"), tabTitle="Number at risk by time", xLab="Time", timeTicks=c("major", "minor", "weeks", "months", "custom"), times=NULL, yLab="Survival", yScale=c("perc", "frac"), legend=TRUE, legLabs=NULL, legOrd=NULL, legTitle="Group", titleSize=15, axisTitleSize=15, axisLabSize=10, survLineSize=0.5, censSize=5, legTitleSize=10, legLabSize=10, fillLineSize=0.05, tabTitleSize=15, tabLabSize=5, nRiskSize=5, pVal=FALSE, sigP=1, pX=0.1, pY=0.1) { stopifnot(inherits(object, "survfit")) if (!is.null(legLabs) &! length(object$strata)==0){ stopifnot(length(legLabs)==length(object$strata)) } ## change names for strata to legLabs if required if (is.null(legLabs)) { stNames <- names(object$strata) } else { stNames <- legLabs } ## if only one strata (intercept only model) if (is.null(object$strata)) { if (is.null(legLabs)) { st1 <- as.factor(rep(1, length(object$time))) } else { stopifnot(length(legLabs)==1) st1 <- as.factor(rep(legLabs, length(object$time))) } } else { ## add vector for one strata according to number of rows of strata st1 <- unlist(sapply(1:length(object$strata), function (i) rep(stNames[i], object$strata[i]))) } ## create data.table with data from survfit ## add column for strata ## (using data.table here as avoids duplication when adding rows later) ## also rename strata as 'st' to avoid calling survival::function dt1 <- data.table::data.table(time=object$time, n.risk=object$n.risk, n.event=object$n.event, n.censor=object$n.censor, surv=object$surv, upper=object$upper, lower=object$lower, cg=as.factor(st1)) ## make two rows for each covariate group ## for time=0 to time=time of first event dt2 <- data.table::rbindlist(list(dt1[, .SD[1, ], by=cg], dt1[, .SD[1, ], by=cg])) ## set n.event and n.censored to zero dt2[, c("n.event", "n.censor") := list(0), by=cg] ## set surv, upper and lower to one dt2[, c("surv", "upper", "lower") := list(1), by=cg] ## set first time to zero dt2[seq(length(unique(dt2$cg))), "time" := (0L) ] ## reorder to allow binding data.table::setcolorder(dt2, names(dt1)) dt1 <- data.table::rbindlist(list(dt2, dt1)) if (is.null(legOrd)) legOrd <- dt1[, seq.int(levels(cg))] ## ## jitter ## jitter <- match.arg(jitter) ## for groups with no events add random no.to survival (by strata) if (jitter=="noEvents") { ## add column to indicate no. 
events by group dt1[, s1 := sum(n.event), by=list(cg)] dt1[s1==0, surv := surv+(runif(1, 0.01, 0.05)), by=cg] } if(jitter=="all"){ ## for groups with no events add random no.to survival (by strata) dt1[, surv := surv+(runif(1, 0.01, 0.05)), by=cg] } ## dt1 <- dt1[order(cg)] ## ## plot single lines only ## g1 <- ggplot(data=dt1, aes(group=cg, colour=cg, fill=cg)) + geom_step(aes(x=time, y=surv), direction="hv", size=survLineSize) ## type <- match.arg(type) if (type=="CI"){ g1 <- g1 + geom_step(aes(x=time, y=upper), direction="hv", linetype=ciLine, alpha=alpha) + geom_step(aes(x=time, y=lower), direction="hv", linetype=ciLine, alpha=alpha) } if (type=="fill"){ ## copy dt1 to work allow further work dt2 <- dt1[, list(l=unique(lower), u=unique(upper), minT=as.numeric(min(time)), time=as.numeric(time) ), by=list(surv, cg)] ## make max. time column dt2[, "maxT" := c(minT[2:length(minT)], NA), by=cg] ## merge columns dt1 <- merge(dt1, dt2, by=c("time", "surv", "cg"), all.y=TRUE) dt1 <- dt1[order(cg)] ## add shading g1 <- g1 + geom_rect(data=dt1, aes(ymax=surv, ymin=l, xmax=maxT, xmin=minT, colour=cg, group=cg, fill=cg), alpha=alpha, size=fillLineSize) + geom_rect(data=dt1, aes(ymax=u, ymin=surv, xmax=maxT, xmin=minT, colour=cg, group=cg, fill=cg), alpha=alpha, size=fillLineSize) } ## add lines to show times where subjects censored if (any(dt1$n.censor >= 1)) { g1 <- g1 + geom_point(data=dt1[n.censor>=1, ], aes(x=time, y=surv), shape=censShape, size=censSize) } ## palette ## use palette Dark2 for prominent shades ## (suitable for colorblind) ## use palette Set2 for lighter shades as large fill area palette <- match.arg(palette) g1 <- g1 + scale_color_brewer(type="qual", breaks=dt1[, levels(cg)[legOrd]], palette=palette, guide=guide_legend( title=legTitle)) g1 <- g1 + scale_fill_brewer(type="qual", breaks=dt1[, levels(cg)[legOrd]], palette=palette, guide=guide_legend( title=legTitle)) ## scales g1 <- g1 + ggtitle(title) yScale <- match.arg(yScale) if (yScale=="frac") { g1 <- g1 + scale_y_continuous(yLab) } else { y1 <- ggplot_build(g1)$panel$ranges[[1L]]$y.major_source g1 <- g1 + scale_y_continuous(yLab, breaks=y1, labels=paste0(y1 * 100, "%")) } ## times to show timeTicks <- match.arg(timeTicks) x1 <- get("range", envir=get("range", envir=layer_scales(g1)$x)) times1 <- switch(EXPR=timeTicks, major=ggplot_build(g1)$layout$panel_ranges[[1]]$x.major_source, minor=ggplot_build(g1)$layout$panel_ranges[[1]]$x.minor_source, custom=NaN, weeks=seq(from=min(x1), to=max(x1), by=7L), months=seq(from=min(x1), to=max(x1), by=12L)) if (is.nan(times1[1])) times1 <- times ## x axis g1 <- g1 + scale_x_continuous(name=xLab, breaks=times1) ## font sizes g1 <- g1 + theme(title=element_text(size=titleSize), legend.text=element_text(size=legLabSize), legend.title=element_text(size=legTitleSize), axis.text = element_text(size=axisLabSize), axis.title = element_text(size=axisTitleSize)) ## remove legend if required if(!legend) g1 <- g1 + theme(legend.position="none") ## p value for log-rank test (only if >=2 groups) if (pVal & !is.null(object$strata)) { sd1 <- survival::survdiff(eval(object$call$formula), data=eval(object$call$data)) p1 <- stats::pchisq(sd1$chisq, length(sd1$n) - 1, lower.tail=FALSE) p1txt <- ifelse(p1 < 0.0001, "Log-rank test \n p < 0.0001", paste("Log-rank test \n p =", signif(p1, sigP))) g1 <- g1 + annotate(geom="text", x=pX * max(dt1$time), y=pY, label=p1txt, size=legLabSize) } ## data for table dt3 <- data.table::data.table( time=summary(object, times = times1, extend = TRUE)$time, 
n.risk=summary(object, times = times1, extend = TRUE)$n.risk) ## if intercept-only model if (is.null(object$strata)) { dt3[, "cg" := as.factor(rep(1, length(times1)))] } else { dt3[, "cg" := summary(object, times=times1, extend=TRUE)$strata] } ## change names of strata to legend labels if(!is.null(legLabs)) dt3[, "cg" := factor(cg, labels=legLabs) ] ## table ## reverse here to plot in same order as in main plot g2 <- ggplot(data=dt3, aes(x=time, y=cg, shape=cg)) + geom_point(size=0) + geom_text(aes(label=n.risk), colour=1, size=nRiskSize) + scale_x_continuous(name=xLab, limits=c(0, max(object$time)), breaks=times1) + ## reverse here to plot in same order as in main plot scale_y_discrete(name=legTitle, breaks=levels(dt3$cg), labels=levels(dt3$cg)) + ggtitle(tabTitle) + theme(axis.text = element_text(size=axisLabSize), axis.title = element_text(size=axisTitleSize), plot.title = element_text(size=tabTitleSize), legend.title = element_text(size=tabLabSize), legend.text = element_text(size=tabLabSize)) + guides(shape = guide_legend(title=legTitle, keywidht=tabLabSize, keyheight=tabLabSize)) ## remove legend if(!legend) g2 <- g2 + theme(legend.position = "none") res1 <- list("table"=g2, "plot"=g1) class(res1) <- c("tableAndPlot", "list") return(res1) } ## declare variables (for R CMD check) ## st1 is vector for strata identification surv <- n.risk <- n.censor <- n.event <- upper <- lower <- NULL .SD <- st1 <- stNames <- st <- s1 <- minT <- l <- maxT <- u <- NULL survMisc/R/print.R0000744000176200001440000002157513317032264013561 0ustar liggesusers#' @name print #' @title \code{print} methods #' @description \code{print} methods #' #' @include autoplotTen.R #' #' @details Prints a \code{ten} object with 'nice' formatting. #' \cr #' Options may be set for a session using e.g. #' \cr #' options(survMisc.nColSP=4L) #' \cr #' It is similar to the behavior of \code{print.data.table} but #' has additional arguments controlling the number of columns #' sent to the terminal. #' #' @param x An object of class \code{ten}. #' @param ... Additional arguments (not implemented). #' \cr \cr #' \bold{--print.ten} #' @param maxRow Maximum number of rows to print. #' \cr #' If \code{nrow(x) > maxRow}, just the first and last #' \code{nRowP} (below) are printed. #' \cr #' The default value is that used by \code{data.table}. #' @param nRowP \bold{N}umber of rows to \bold{p}rint from #' the start and end of the object. Used if \code{nrow(x) > maxRow}. #' @param pRowNames Print row names? #' \cr #' Default is \code{TRUE}. #' @param maxCol Maximum number of columns to print. #' \cr #' If \code{ncol(x) > maxCol}, just the first \code{nColSP} #' and last \code{maxCol - nColSP} columns are printed. #' @param nColSP \bold{N}umber of \bold{col}umns to \bold{p}rint from #' the \bold{s}tart of the object. Used if Used if \code{ncol(x) > maxCol}. #' @param sigDig \bold{Sig}nificant \bold{dig}its. This is passed as an argument to #' \cr #' ?signif #' \cr #' when preparing the object for printing. #' \cr \cr #' \bold{--print.tableAndPlot} and \bold{print.tableAndPlot} #' @param hideTabLeg Hide table legend. #' @param tabHeight Table height (relative to whole plot). #' \cr \cr #' \bold{--print.COV} #' @param n Similar to \code{n} from e.g. #' \cr #' ?utils::head #' \cr \cr #' \bold{--print.lrt} #' @param dist Which distribution to use for the statistics #' when printing. #' \cr #' Default (\code{dist="n"}) prints \eqn{Z} and \eqn{p} values #' based on the normal distribution. 
#' \cr #' If \code{dist="c"}, gives values based on the #' \eqn{\chi^2}{chi-squared} distribution. #' \cr #' The results are the same. The default value is typically #' easier to read. Both options are given for completeness. #' #' @return A printed representation of the object #' is send to the terminal as a \emph{side effect} of #' calling the function. #' \cr #' The return value cannot be \code{assign}ed. #' #' @author Chris Dardis. Based on existing work by Brian Diggs. #' #' @seealso For \code{print.ten}: #' @seealso data.table:::print.data.table #' @seealso ?stats::printCoefmat #' @seealso options()$datatable.print.nrows #' @seealso sapply(c("datatable.print.nrows", "datatable.print.topn"), getOption) #' #' @note #' All numeric arguments to the function must be supplied as integers. #' #' @rdname print #' @method print ten #' @aliases print.ten #' @export #' #' @examples #' set.seed(1) #' (x <- data.table::data.table(matrix(rnorm(1800), ncol=15, nrow=120))) #' data.table::setattr(x, "class", c("ten", class(x))) #' p1 <- print(x) #' stopifnot(is.null(p1)) #' x[1:80, ] #' x[0, ] #' (data.table::set(x, j=seq.int(ncol(x)), value=NULL)) #' print.ten <- function(x, ..., maxRow=getOption("datatable.print.nrows", 50L), nRowP=getOption("datatable.print.topn", 5L), pRowNames=TRUE, maxCol=getOption("survMisc.maxCol", 8L), nColSP=getOption("survMisc.nColSP", 7L), sigDig=getOption("survMisc.sigDig", 2L)){ if (nrow(x)==0L) { if (length(x)==0L) { cat("Null 'ten' object (0 rows and 0 cols)\n") } else { cat("Empty 'ten' object (0 rows) of ", length(x), " col", if (length(x) > 1L) "s", ": ", paste(head(names(x), 6), collapse = ", "), if (ncol(x) > 6) "...", "\n", sep = "") } return(invisible()) } stopifnot(all(sapply(list(maxRow, nRowP, maxCol, nColSP), as.integer))) stopifnot(maxRow > nRowP) ## lCol1 = last columns; needs to be at least one stopifnot((lCol1 <- maxCol - nColSP) >= 1) if (nrow(x) > maxRow) { toPrint1 <- rbind(head(x, nRowP), tail(x, nRowP)) ## row names rn1 <- c(seq_len(nRowP), seq.int(to=nrow(x), length.out=nRowP)) rowDots1 <- TRUE } else { toPrint1 <- x rn1 <- seq_len(nrow(x)) rowDots1 <- FALSE } if (ncol(x) > (nColSP + lCol1 + 1L)) { toPrint1 <- cbind( toPrint1[, seq.int(nColSP), with=FALSE], toPrint1[, seq.int(to=ncol(x), length.out=lCol1), with=FALSE]) colDots1 <- TRUE } else { colDots1 <- FALSE } toPrint1 <- do.call("cbind", lapply(toPrint1, function(col) signif(col, digits=sigDig))) if (pRowNames) { rownames(toPrint1) <- paste(format(rn1, right = TRUE), ":", sep = "") } else { rownames(toPrint1) <- rep.int("", nrow(x)) } if (rowDots1) { toPrint1 <- rbind(head(toPrint1, nRowP), "---" = "", tail(toPrint1, nRowP)) rownames(toPrint1) <- format(rownames(toPrint1), justify="right") } if (colDots1) { toPrint1 <- cbind( toPrint1[, seq.int(nColSP), drop=FALSE], rep("", nrow(toPrint1)), toPrint1[, seq.int(to=ncol(toPrint1), length.out=lCol1), drop=FALSE]) colnames(toPrint1)[colnames(toPrint1)==""] <- " ---" } if (!rowDots1) { toPrint1 <- rbind(toPrint1, matrix(colnames(toPrint1), nrow=1L)) } print(toPrint1, right=TRUE, quote=FALSE) return(invisible()) } #' #' @rdname print #' @method print COV #' @aliases print.COV #' @export #' print.COV <- function(x, ..., n=2L){ stopifnot(length(n)==1L) if (is.array(x)) { n <- min(n, dim(x)[3]) print(x[, , seq_len(n)]) cat(" ... 
") print(x[, , dim(x)[3] - seq_len(n)]) } else { n <- min(n, length(x)) print(utils::head(x, n)) cat("\n...\n\n") print(utils::tail(x, n)) } } #' #' @rdname print #' @method print lrt #' @aliases print.lrt #' @export #' print.lrt <- function(x, ..., dist=c("n", "c")){ dist <- match.arg(dist) x1 <- data.table::copy(x) data.table::setattr(x1, "class", "data.frame") rownames(x1) <- x1[, "W"] x1[, "W"] <- NULL if (ncol(x1)==3) { x1[, "pChisq"] <- format.pval(x1[, "pChisq"]) stats::printCoefmat(x1, has.Pvalue=TRUE, cs.ind=1L, # *c*oefficients and *s*tandard errors dig.tst=getOption("digits")) } else { x1[, c("pNorm", "pChisq")] <- format.pval(x1[, c("pNorm", "pChisq")]) ## no need to print pChiSq values routinely if (dist=="n") { stats::printCoefmat(x1[, c("Q", "Var", "Z", "pNorm")], has.Pvalue=TRUE, cs.ind=seq.int(2), dig.tst=getOption("digits")) } else { stats::printCoefmat(x1[, c("Q", "Var", "chiSq", "df", "pChisq")], has.Pvalue=TRUE, cs.ind=as.integer(c(1, 2)), dig.tst=getOption("digits")) } } } #' #' @rdname print #' @method print sup #' @aliases print.sup #' @export #' print.sup <- function(x, ...){ x1 <- data.table::copy(x) data.table::setattr(x1, "class", "data.frame") rownames(x1) <- x1[, "W"] x1[, "W"] <- NULL x1[, "pSupBr"] <- format.pval(x1[, "pSupBr"]) stats::printCoefmat(x1, has.Pvalue=TRUE, cs.ind=seq.int(2), # *c*oefficients and *s*tandard errors dig.tst=getOption("digits")) } #' #' @rdname print #' @method print tableAndPlot #' @aliases print.tableAndPlot #' @export #' print.tableAndPlot <- function(x, ..., hideTabLeg=TRUE, tabHeight=0.25){ autoplot(x, hideTabLeg=hideTabLeg, tabHeight=tabHeight) } #' #' @rdname print #' @method print stratTableAndPlot #' @aliases print.stratTableAndPlot #' @export #' print.stratTableAndPlot <- function(x, ..., hideTabLeg=TRUE, tabHeight=0.25) { for (i in seq.int(length(x))) { if (interactive()) { ## max. no devices is 63 if (i %% 63 == 0) grDevices::graphics.off() grDevices::dev.new() } autoplot(x[[i]], hideTabLeg=hideTabLeg, tabHeight=tabHeight) } } survMisc/R/survMisc_package.R0000744000176200001440000000541214223602063015700 0ustar liggesusers#' @name survMisc_package #' @title Miscellaneous Functions for Survival Analysis #' @description Miscellaneous Functions for Survival Analysis #' #' @description #' \tabular{ll}{ #' Package: \tab survMisc\cr #' Type: \tab Package\cr #' Version: \tab 0.5.5 \cr #' Date: \tab 2018-07-03\cr #' License: \tab GPL (>= 2) \cr #' LazyLoad: \tab yes #' } #' A collection of functions for the analysis of survival data. These #' extend the methods already available in \code{package:survival}. #' \cr #' The intent is to generate a workspace for some of the common tasks #' arising in survival analysis. #' \cr \cr #' There are references in many of the functions to the textbooks: #' \tabular{cl}{ #' \bold{K&M} \tab Klein J, Moeschberger M (2003). #' \emph{Survival Analysis}, 2nd edition. \cr #' \tab New York: Springer. #' \doi{10.1007/b97377} \cr #' \bold{T&G} \tab Therneau TM, Grambsch PM (2000). #' \emph{Modeling Survival Data: Extending the Cox Model}. \cr #' \tab New York: Springer. #' \doi{10.1007/978-1-4757-3294-8} #' } #' #' \subsection{Notes for developers}{ #' \itemize{ #' \item This package should be regarded as 'in development' until #' release 1.0, meaning that there may be changes to certain function #' names and parameters, although I will try to keep this to a minimum. #' As such it is recommended that other packages do \emph{not} depend on or import from this #' one until at least version 1.0. 
#' \item Naming tends to follow the \strong{camelCase} convention; #' variables within functions are typically alphanumeric e.g. \code{a1 <- 1}. #' } #' } #' For bug reports, feature requests or suggestions for improvement, #' please try to submit to \href{https://github.com/dardisco/survMisc/issues}{github}. #' Otherwise email me at the address below. #' #' @aliases survMisc #' @docType package #' @author Chris Dardis \email{christopherdardis@@gmail.com} #' #' @keywords package #' @concept survival #' #' @importFrom graphics plot abline arrows grid mtext points title segments #' @importFrom grDevices dev.new graphics.off #' @importFrom utils head tail combn data packageVersion #' @importFrom stats anova formula as.formula is.empty.model model.frame model.matrix model.response printCoefmat predict runif pchisq pnorm qchisq qnorm terms update #' #' @import knitr #' @import survival #' @import ggplot2 #' ## the following are imported for their datasets: #' @import KMsurv #' @import km.ci #' #' @importFrom data.table as.data.table data.table set setkey setattr setcolorder setnames ':=' copy rbindlist #' @importFrom zoo na.locf.default #' @importFrom grid unit.pmax #' @importFrom gridExtra grid.arrange #' @importFrom xtable xtable print.xtable #' NULL ## KMsurv ## km.ci survMisc/R/COV.R0000744000176200001440000001731713317032077013055 0ustar liggesusers#' @name COV #' @title \bold{cov}ariance matrix for survival data #' @description \bold{cov}ariance matrix for survival data #' #' @include ten.R #' @include print.R #' @include asWide.R #' #' @rdname COV #' @export #' COV <- function(x, ...) UseMethod("COV") #' #' #' @param x A \code{numeric} vector of #' \emph{number of events}, \eqn{e_t}{e[t]}. #' These are assumed to be ordered by discrete times. #' \cr #' A method is available for objects of \code{class} \code{ten}. #' @param ... Additional arguments (not implemented). #' @param reCalc Recalcuate the values? #' \cr #' If \code{reCalc=FALSE} (the default) and the \code{ten} object already has #' the calculated values stored as an \code{attribute}, #' the value of the \code{attribute} is returned directly. #' \cr \cr #' \bold{--Arguments for the numeric method:} #' @param n \bold{n}umber at risk (total). #' @param ncg \bold{n}umber at risk, per \bold{c}ovariate \bold{g}roup. #' \cr #' If there are \eqn{2} groups, this can be given as a \code{vector} with #' the number at risk for group \eqn{1}. #' \cr #' If there are \eqn{\geq 2}{>= 2} groups, it is #' a \code{matrix} with one column for each group. #' #' @details Gives variance-covariance matrix for comparing survival #' data for two or more groups. #' \cr #' Inputs are vectors corresponding to observations at a set of discrete #' time points for right censored data, except for \eqn{n1}, #' the no. at risk by predictor. #' \cr #' This should be specified as a vector for one group, #' otherwise as a matrix with each column corresponding to a group. #' #' @return An \code{array}. #' \cr #' The first two dimensions = the number of covariate groups \eqn{K}, #' \eqn{k = 1, 2, \ldots K}. #' This is the square matrix below. #' \cr #' The third dimension is the number of observations #' (discrete time points). #' \cr \cr #' To calculate this, we use \code{x} (= \eqn{e_t}{e[t]} below) and #' \eqn{n_1}{n1}, the number at risk in covariate group \eqn{1}. #' \cr #' Where there are \eqn{2} groups, the resulting sparse square matrix #' (i.e. 
the non-diagonal elements are \eqn{0}) #' at time \eqn{t} has diagonal elements: #' \deqn{cov_t = - \frac{n_{0t} n_{1t} e_t (n_t - e_t)}{n_t^2(n_t-1)}}{ #' cov[t] = - n0[t] * n1[t] * e[t] * (n[t] - e[t]) / #' (n[t]^2 * (n[t] - 1))} #' For \eqn{\geq 2}{>=2} groups, the resulting square matrix #' has diagonal elements given by: #' \deqn{cov_{kkt} = \frac{n_{kt}(n_t - n_{kt}) e_t(n_t - e_t)}{ #' n_t^2(n_t - 1)}}{ #' cov[k, k, t] = n[k, t] * (n[t] - n[k, t]) * e[t] * (n[t] - e[t]) / #' (n[t]^2 * (n[t] - 1))} #' The off diagonal elements are: #' \deqn{cov_{klt} = \frac{-n_{kt} n_{lt} e_t (n_t-e_t) }{ #' n_t^2(n_t-1)}}{ #' cov[k, l, t] = - n[k, t] * n[l, t] * e[t] * (n[t] - e[t]) / #' n[t]^2 * (n[t] - 1)} #' #' @note Where the is just one subject at risk \eqn{n=1} at #' the final timepoint, the equations above may produce \code{NaN} #' due to division by zero. This is converted to \code{0} for #' simplicity. #' #' @seealso Called by \code{\link{comp}} #' @seealso The name of the function is capitalized #' to distinguish it from: #' \cr #' ?stats::cov #' #' @keywords survival #' #' @rdname COV #' @method COV ten #' @aliases COV.ten #' @export #' @examples #' ## Two covariate groups #' ## K&M. Example 7.2, pg 210, table 7.2 (last column). #' \dontrun{ #' data("kidney", package="KMsurv") #' k1 <- with(kidney, #' ten(Surv(time=time, event=delta) ~ type)) #' COV(k1)[COV(k1) > 0] #' } #' ## Four covariate groups #' ## K&M. Example 7.6, pg 217. #' \dontrun{ #' data("larynx", package="KMsurv") #' l1 <- ten(Surv(time, delta) ~ stage, data=larynx) #' rowSums(COV(l1), dims=2) #' } COV.ten <- function(x, ..., reCalc=FALSE) { if (!reCalc & !is.null(attr(x, "COV"))) return (attr(x, "COV")) ## no. of groups g1 <- attr(x, "ncg") if (g1 <= 1) stop ("Only valid if more than one covariate group") ## if 2 groups only if (g1==2) { res2 <- x[, (ncg / n) * (1 - (ncg / n)) * ((n - e) / (n - 1)) * e, by=list(t, cg)] res2 <- data.table::setkey(res2[, sum(V1), by=t], t) res1 <- res2[, V1] if (is.nan(res1[length(res1)])) res1[length(res1)] <- 0 names(res1) <- res2[, t] } if (g1 > 2) { ## same as used in asWide.R ## get no. at risk for each unique time and covariate group t1 <- data.table::data.table("t" = x[, sort(unique(t))]) cg1 <- seq.int(attr(x, "ncg")) ## abbreviate function abbFn <- if (attr(x, "abbNames")) identity else as.integer n1 <- sapply(cg1, FUN=function(cg1){ r1 <- data.table::setkey(x[abbFn(cg)==cg1, ncg, by=t], t) r1 <- r1[t1, roll=-Inf] data.table::set(r1, i=which(is.na(r1$ncg)), j="ncg", value=0) r1[, ncg] }) ## total no. events, no. 
at risk at each time x1 <- x[, list("e"=sum(e), "n"=max(n)), by=t] data.table::setkey(x1, t) ## 'base variance'; used in all calcuations below bv1 <- x1[, e * (n - e) / (n^2 * (n - 1))] ## diagonal elements r1 <- bv1 * t(apply(n1, MARGIN=1, FUN= function(i) (i * (sum(i) - i)))) ## off-diagonal elements r2 <- bv1 * - t(apply(n1, MARGIN=1, FUN= function(i) apply(utils::combn(i, 2L), MARGIN=2, FUN=prod))) lt1 <- t1[, length(t)] res1 <- lapply(seq.int(lt1), FUN= function(i) { res1 <- diag(r1[i, ]) res1[lower.tri(res1)] <- r2[i, ] res2 <- matrix(res1, ncol=ncol(res1), byrow=TRUE) res1[upper.tri(res1)] <- res2[upper.tri(res2)] return(res1) }) res1 <- as.array(unlist(res1)) dim(res1) <- c(g1, g1, lt1) if (any(is.nan(res1[, , lt1]))) { res1[, , lt1][which(is.nan(res1[, , lt1]))] <- 0 } dimnames(res1) <- list(x[, unique(cg)], x[, unique(cg)], t1[, t]) } class(res1) <- c("COV", class(res1)) data.table::setattr(x, "COV", res1) return(attr(x, "COV")) } #' @rdname COV #' @method COV stratTen #' @aliases COV.stratTen #' @export #' COV.stratTen <- function(x, ..., reCalc=FALSE){ lapply(x, FUN=COV, reCalc=reCalc) lapply(x, attr, "COV") } #' @rdname COV #' @method COV numeric #' @aliases COV.numeric #' @export #' @examples #' ## example of numeric method #' ## Three covariate groups #' ## K&M. Example 7.4, pg 212. #' \dontrun{ #' data("bmt", package="KMsurv") #' b1 <- asWide(ten(Surv(time=t2, event=d3) ~ group, data=bmt)) #' rowSums(b1[, COV(x=e, n=n, ncg=matrix(data=c(n_1, n_2, n_3), ncol=3))], dims=2) #' } COV.numeric <- function(x, ..., n, ncg){ stopifnot(all(sapply(list(x, n, ncg), is.numeric))) ## ensure all same length stopifnot( diff(range(sapply(list(x, n), length))) < .Machine$double.eps) ## no. of groups g1 <- ncol(ncg) if (is.null(g1)) g1 <- 1L ## if 2 groups only if (g1==1) { cov1 <- (ncg / n) * (1 - (ncg / n)) * ((n - x) / (n - 1)) * x return(cov1) } ## hold results a1 <- array(data=0, dim=c(g1, g1, length(n))) ## diagonal elements for (i in seq_len(g1)) { a1[i, i, ] <- (ncg[, i] * (n - ncg[, i]) * x * (n - x)) / (n^2 * (n - 1)) } ## off-diagonal elements for (j in seq_len(g1)) { for (k in 1:g1){ if (j==k) next a1[j, k, ] <- - (ncg[, j] * ncg[, k] * x * (n - x)) / (n^2 * (n-1)) } } if (any(is.nan(a1[, , length(n)]))) { a1[, , length(n)][which(is.nan(a1[, , length(n)]))] <- 0 } dimnames(a1) <- list(1:g1, 1:g1, seq.int(length(n))) class(a1) <- c("COV", class(a1)) return(a1) } survMisc/R/cutp.R0000744000176200001440000001632114223603136013370 0ustar liggesusers#' @name cutp #' @title \bold{cut p}oint for a continuous variable in a #' model fit with \code{coxph} or \code{survfit}. #' @description \bold{cut p}oint for a continuous variable in a #' model fit with \code{coxph} or \code{survfit}. #' #' @include ten.R #' @include print.R #' #' @description Determine the optimal cut point for a continuous variable #' in a \code{coxph} or \code{survfit} model. #' #' @param x A \code{survfit} or \code{coxph} object #' @param defCont \bold{def}inition of a \bold{cont}inuous variable. #' \cr #' If the variable has \eqn{>} \code{defCont} unique values, it #' is treated as continuous and a cut point is determined. #' @param ... Additional arguments (not implemented). #' #' @return A \code{list} of \code{data.table}s. #' \cr #' There is one list element per continuous variable. #' \cr #' Each has a column with possible values of the cut point #' (i.e. 
unique values of the variable), and the #' additional columns: #' \item{U}{The score (log-rank) test for a model with the variable 'cut' #' into into those \eqn{\geq}{>=} the cutpoint and those below.} #' \item{Q}{The test statistic.} #' \item{p}{The \eqn{p}-value.} #' The tables are ordered by \eqn{p}-value, lowest first. #' #' @details #' For a cut point \eqn{\mu}{mu}, of a predictor \eqn{K}, #' the variable is split #' into two groups, those \eqn{\geq \mu}{>= mu} and #' those \eqn{< \mu}{< mu}. #' \cr #' The score (or log-rank) statistic, \eqn{sc}, #' is calculated for each unique element #' \eqn{k} in \eqn{K} and uses #' \itemize{ #' \item \eqn{e_i^+}{e1[i]} the number of events #' \item \eqn{n_i^+}{n1[i]} the number at risk #' } #' in those above the cut point, respectively. #' \cr #' The basic statistic is #' \deqn{sc_k = \sum_{i=1}^D ( e_i^+ - n_i^+ \frac{e_i}{n_i} )}{ #' sc[k] = sum (e1[i] - n1[i] * e[i] / n[i])} #' \cr #' The sum is taken across times with observed events, to \eqn{D}, #' the largest of these. #' \cr #' It is normalized (standardized), in the case of censoring, #' by finding \eqn{\sigma^2}{s^2} which is: #' \deqn{\sigma^2 = \frac{1}{D - 1} #' \sum_i^D (1 - \sum_{j=1}^i \frac{1}{D+ 1 - j})^2}{ #' s^2 = (1 / (D - 1)) * #' sum[i:D](1 - sum[j:i](1 / (D - j + 1))^2 )} #' The test statistic is then #' \deqn{Q = \frac{\max |sc_k|}{\sigma \sqrt{D-1}}}{ #' Q = max(abs(sc[k])) / s * sqrt((D - 1))} #' Under the null hypothesis that the chosen cut point #' does \emph{not} predict survival, #' the distribution of \eqn{Q} has a limiting distibution which #' is the supremum of the #' absolute value of a Brownian bridge: #' \deqn{p = Pr(\sup Q \geq q) = 2 \sum_{i=1}^{\infty} #' (-1)^{i + 1} \exp (-2 i^2 q^2)}{ #' p= P(Q >= q) = 2 sum[i:Inf](-1)^(i + 1) * e^(-2 * i^2 *q^2)} #' #' @references Contal C, O'Quigley J, 1999. #' An application of changepoint methods in studying the #' effect of age on survival in breast cancer. #' \emph{Computational Statistics & Data Analysis} \bold{30}(3):253--70. #' \doi{10.1016/S0167-9473(98)00096-6} #' #' @references Mandrekar JN, Mandrekar, SJ, Cha SS, 2003. #' Cutpoint Determination Methods in Survival Analysis using SAS. #' \emph{Proceedings of the 28th SAS Users Group International Conference (SUGI)}. Paper 261-28. #' \href{https://support.sas.com/resources/papers/proceedings/proceedings/sugi28/261-28.pdf}{SAS (free)} #' #' @rdname cutp #' @export #' cutp <- function(x, ...) UseMethod("cutp") #' #' @rdname cutp #' @method cutp coxph #' @aliases cutp.coxph #' @export #' @examples #' ## Mandrekar et al. above #' data("bmt", package="KMsurv") #' b1 <- bmt[bmt$group==1, ] # ALL patients #' c1 <- coxph(Surv(t2, d3) ~ z1, data=b1) # z1=age #' c1 <- cutp(c1)$z1 #' data.table::setorder(c1, "z1") #' ## [] below is used to print data.table to console #' c1[] #' #' \dontrun{ #' ## compare to output from survival::coxph #' matrix( #' unlist( #' lapply(26:30, #' function(i) c(i, summary(coxph(Surv(t2, d3) ~ z1 >= i, data=b1))$sctest))), #' ncol=5, #' dimnames=list(c("age", "score_test", "df", "p"))) #' cutp(coxph(Surv(t2, d3) ~ z1, data=bmt[bmt$group==2, ]))$z1[] #' cutp(coxph(Surv(t2, d3) ~ z1, data=bmt[bmt$group==3, ]))[[1]][] #' ## K&M. Example 8.3, pg 273-274. 
#' data("kidtran", package="KMsurv") #' k1 <- kidtran #' ## patients who are male and black #' k2 <- k1[k1$gender==1 & k1$race==2, ] #' c2 <- coxph(Surv(time, delta) ~ age, data=k2) #' print(cutp(c2)) #' ## check significance of computed value #' summary(coxph(Surv(time, delta) ~ age >= 58, data=k2)) #' k3 <- k1[k1$gender==2 & k1$race==2, ] #' c3 <- coxph(Surv(time, delta) ~ age, data=k3) #' print(cutp(c3)) #' ## doesn't apply to binary variables e.g. gender #' print(cutp(coxph(Surv(time, delta) ~ age + gender, data=k1))) #' } #' cutp.coxph <- function(x, ..., defCont=3){ stopifnot(inherits(x, "coxph")) d1 <- attr(x$terms, "dataClasses")[-1] d1 <- d1[d1=="numeric"] m1 <- data.frame(stats::model.matrix(x)) res1 <- lapply(seq.int(d1), function(i) { ## variable var1 <- get(names(d1[i]), m1) ## unique values u1 <- unique(var1) if (length(u1) < defCont) return(NaN) ## convert to logical l1 <- lapply(u1, function(j) var1 >= j) names(l1) <- u1 ## get tne; this is the longest step l1 <- lapply(l1, function(j) asWide(ten(x$y ~ j))) ## U = score test l1 <- lapply(l1, function(dt1) dt1[, sum(e_1 - e * n_1 / n)]) res2 <- data.table::data.table( u1, "U"=abs(unlist(l1))) data.table::setnames(res2, old="u1", new=names(d1[i])) data.table::setorder(res2, -U) s2 <- findS2(sum(x$nevent)) res2[, "Q" := U / (sqrt(s2) * sqrt(sum(x$nevent) - 1))] res2[, "p" := unlist(lapply(Q, findP))] return(res2) }) names(res1) <- names(d1) return(res1) } ### helper functions findS2 <- function(D) { (1 / (D - 1)) * sum( sapply(1:D, ## i in 1:D function(i) ## j in 1:i (1 - sum(sapply(1:i, function(j) 1 / (D + 1 - j))))^2)) } ### ## lim = limit (accuracy) ## should be to Inf but generally 1e3 is enough findP <- function(q, lim=1e3) { if (q < 0.2) return(1) 2 * sum(sapply(seq.int(lim), function(j) {(-1)^(j+1) * exp(-2 * j^2 * q^2)})) } #' #' @rdname cutp #' @method cutp survfit #' @aliases cutp.survfit #' @export #' cutp.survfit <- function(x, ..., defCont=3){ f1 <- deparse(x$call) f1 <- sub("survfit", "coxph", f1) c1 <- eval(parse(text=f1)) cutp(c1, defCont=defCont) } ## R CMD check U <- Q <- e_1 <- n_1 <- NULL ## if (plot){ ## m1 <- paste0( ## "Test statistic for cut points \n For variable ", var, ## "\nLarger values indicate cut point more likely here") ## setkey(res1, var) ## res1[, graphics::plot(var, U, ## xlab="Cut point", ## ylab="Test statistic", ## main=m1, ## ...)] ## res1[, graphics::lines(var, U, ...)] ## } survMisc/R/gof.R0000744000176200001440000001503214223601761013170 0ustar liggesusers#' @name gof #' @title \bold{g}oodness \bold{o}f \bold{f}it test for a \code{coxph} object #' @description \bold{g}oodness \bold{o}f \bold{f}it test for a \code{coxph} object #' #' @rdname gof #' @export #' gof <- function(x, ...) UseMethod("gof") #' #' @rdname gof #' @method gof coxph #' @aliases gof.coxph #' @export #' #' @param x An object of class \code{coxph} #' @param ... Additional arguments (not implemented) #' @param G Number of \bold{g}roups into which to divide risk score. #' If \code{G=NULL} (the default), uses closest integer to #' \deqn{G = \max(2, \quad \min(10, \quad \frac{ne}{40}))}{ #' G = max(2, min(10, ne/40))} #' where \eqn{ne} is the number of events overall. #' #' @return A \code{list} with elements: #' \item{groups}{A \code{data.table} with one row per group \eqn{G}. #' The columns are \describe{ #' \item{n}{Number of observations} #' \item{e}{Number of events} #' \item{exp}{Number of events expected. 
This is #' \deqn{exp = \sum e_i - M_i} #' where \eqn{e_i} are the events and #' \eqn{M_i} are the martingale residuals #' for each observation \eqn{i}} #' \item{z}{\eqn{Z} score, calculated as #' \deqn{ Z = \frac{e - exp}{\sqrt{exp}}}{ #' Z = (e - exp) / exp^0.5} #' } #' \item{p}{\eqn{p}-value for \eqn{Z}, which is #' \deqn{ p = 2. \code{pnorm}(-|z|)}{ #' p = 2 * pnorm(-|z|)} #' where \code{pnorm} is the normal distribution function #' with mean \eqn{\mu =0}{0} and standard deviation \eqn{\sigma =1}{1} #' and \eqn{|z|} is the absolute value.} #' }} #' \item{lrTest}{Likelihood-ratio test. #' Tests the improvement in log-likelihood with addition #' of an indicator variable with \eqn{G-1} groups. #' This is done with \code{survival:::anova.coxph}. #' The test is distributed as chi-square with \eqn{G-1} degrees of freedom} #' #' @details #' In order to verify the overall goodness of fit, #' the risk score \eqn{r_i}{r[i]} for each observation \eqn{i} is given by #' \deqn{r_i = \hat{\beta} X_i}{r[i] = B.X[i]} #' where \eqn{\hat{\beta}}{B} is the vector of fitted coefficients #' and \eqn{X_i}{X[i]} is the vector of predictor variables for #' observation \eqn{i}. #' \cr #' This risk score is then sorted and 'lumped' into #' a grouping variable with \eqn{G} groups, #' (containing approximately equal numbers of observations). #' \cr #' The number of observed (\eqn{e}) and expected (\eqn{exp}) events in #' each group are used to generate a \eqn{Z} statistic for each group, #' which is assumed to follow a normal distribution with #' \eqn{Z \sim N(0,1)}. #' \cr #' The indicator variable \code{indicG} is added to the #' original model and the two models are compared to determine the #' improvement in fit via the likelihood ratio test. #' #' @note The choice of \eqn{G} is somewhat arbitrary but rarely should #' be \eqn{> 10}. #' \cr #' As illustrated in the example, a larger value for #' \eqn{G} makes the \eqn{Z} test for each group more likely to be significant. #' This does \emph{not} affect the significance of adding the #' indicator variable \code{indicG} to the original model. #' \cr \cr #' The \eqn{Z} score is chosen for simplicity, as for large sample sizes #' the Poisson distribution approaches the normal. Strictly speaking, #' the Poisson would be more appropriate for \eqn{e} and \eqn{exp}{exp} as #' per Counting Theory. #' \cr #' The \eqn{Z} score may be somewhat conservative as the expected events #' are calculated using the martingale residuals from the overall model, #' rather than by group. This is likely to bring the expected events #' closer to the observed events. #' \cr \cr #' This test is similar to the Hosmer-Lemeshow test for logistic regression. #' #' @source #' Method and example are from: \cr #' May S, Hosmer DW 1998. #' A simplified method of calculating an overall goodness-of-fit test #' for the Cox proportional hazards model. #' \emph{Lifetime Data Analysis} \bold{4}(2):109--20. #' \doi{10.1023/A:1009612305785} #' #' @references #' Default value for \eqn{G} as per: \cr #' May S, Hosmer DW 2004. #' A cautionary note on the use of the Gronnesby and Borgan #' goodness-of-fit test for the Cox proportional hazards model. #' \emph{Lifetime Data Analysis} \bold{10}(3):283--91. #' \doi{10.1023/B:LIDA.0000036393.29224.1d} #' @references #' Changes to the \code{pbc} dataset in the example are as detailed in: \cr #' Fleming T, Harrington D 2005. #' \emph{Counting Processes and Survival Analysis}. #' New Jersey: Wiley and Sons. Chapter 4, section 4.6, pp 188. 
#' \doi{10.1002/9781118150672}
#'
#' @examples
#' data("pbc", package="survival")
#' pbc <- pbc[!is.na(pbc$trt), ]
#' ## make corrections as per Fleming
#' pbc[pbc$id==253, "age"] <- 54.4
#' pbc[pbc$id==107, "protime"] <- 10.7
#' ### misspecified; should be log(bili) and log(protime) instead
#' c1 <- coxph(Surv(time, status==2) ~
#'             age + log(albumin) + bili + edema + protime,
#'             data=pbc)
#' gof(c1, G=10)
#' gof(c1)
#'
gof.coxph <- function(x, ..., G=NULL){
    stopifnot(inherits(x, "coxph"))
    if(is.null(G)) G <- round(max(2, min(10, x$nevent/40)), 0)
    ## from survival:::predict.coxph
    r1 <- stats::predict(x, type="risk")
    r2 <- r1[order(r1)]
    ## 'chunk' size
    csize1 <- length(r1) / G
    s1 <- split(r2, ceiling(seq_along(r2) / csize1))
    ## 'quantiles' (approx. equal no.s in each)
    q1 <- rep(1:G, unlist(lapply(s1, length)))
    ## events
    e1 <- stats::model.frame(x)[, 1][, "status"]
    dt2 <- data.table::data.table(n=tapply(e1[order(r1)], q1, length))
    dt2[, e := tapply(e1[order(r1)], q1, sum)]
    ## martingale residuals, from survival:::residuals.coxph
    m1 <- stats::residuals(x, type="martingale")
    ## cumulative hazard = events - martingale residuals
    cumHaz <- e1[order(r1)] - m1[order(r1)]
    ## no. expected
    dt2[, exp:= tapply(cumHaz, q1, sum)]
    ## z = (observed - expected) / sqrt(expected)
    dt2[, z := (e - exp) / sqrt(exp)]
    ## p-value from z-score
    dt2[, p := 2 * pnorm(-abs(z))]
    ## reformulate
    f1 <- paste0(as.character(x$formula)[3], " + indicG")
    f2 <- stats::as.formula(paste(x$formula[2], x$formula[1], f1))
    environment(f2) <- environment(x$formula)
    d1 <- as.character(x$call$data)
    d1 <- data.table::data.table(get(d1))
    d1 <- d1[order(r1), ]
    d1[, indicG := as.factor(q1)]
    c1 <- survival::coxph(formula=f2, data=d1)
    a1 <- stats::anova(x, c1)
    ## result
    res1 <- vector(mode="list", length=2)
    res1[[1]] <- dt2
    res1[[2]] <- a1
    names(res1) <- c("groups", "lrTest")
    return(res1)
}
## for R CMD check
e <- z <- p <- indicG <- NULL
survMisc/R/profLik.R0000744000176200001440000001141313317032301014012 0ustar liggesusers#' @name profLik
#' @title Profile likelihood for coefficients in a \code{coxph} model
#' @description Profile likelihood for coefficients in a \code{coxph} model
#'
#' @param x A \code{coxph} model.
#' @param CI \bold{C}onfidence \bold{I}nterval.
#' @param interval Number of points over which to evaluate coefficient.
#' @param mult \bold{Mult}iplier. The coefficient will be multiplied by the lower and upper
#' value and evaluated across this range.
#' @param devNew Open a new device for each plot. See
#' \cr
#' ?grDevices::dev.new
#' @param ... Additional parameters passed to \code{graphics::plot.default}.
#'
#' @details
#' Plots a range of values for each coefficient in the model, with the partial log-likelihoods
#' for the model with the coefficient fixed at these values.
#' \cr \cr
#' For each coefficient a range of possible values is chosen, given by
#' \eqn{\hat{B}*mult_{lower} - \hat{B}*mult_{upper}}{
#' Bhat * mult[lower] - Bhat * mult[upper]}.
#' A series of models are fit (given by \code{interval}).
#' The coefficient is included in the model as a
#' \emph{fixed} term and the partial log-likelihood for the model is calculated.
#' \cr \cr
#' A curve is plotted which gives the partial log-likelihood for each of these candidate values.
#' An appropriate confidence interval (CI) is given
#' by subtracting 1/2 the value of the appropriate quantile
#' of a chi-squared distribution with \eqn{1} degree of freedom.
#' \cr \cr
#' Two circles are also plotted giving the 95% CI for the Wald statistic.
#'
#' @return One plot for each coefficient in the model.
#'
#' @references Example is from:
#' \bold{T&G}.
#' Section 3.4.1, pg 57. #' #' @export #' @examples #' data("pbc", package="survival") #' c1 <- coxph(formula = Surv(time, status == 2) ~ age + edema + log(bili) + #' log(albumin) + log(protime), data = pbc) #' profLik(c1, col="red") #' profLik <- function(x, CI=0.95, interval=50, mult=c(0.1, 2), devNew=TRUE, ...) { if (!inherits(x, "coxph")) stop ("Only applies to objects of class coxph") coef1 <- stats::coef(x) ## use collapse in case formula spans >1 line f1 <- paste0(deparse(x$formula), collapse="") f1 <- gsub(" ", "", f1) ## plot title main1 <- paste0("Partial likelihood profiles and ", 100 * CI, "% CI cutoff for model:\n", f1, " \n Circles show ", 100 * CI, "% CI limits for Wald interval") ###---------------------------------------- ### plots ###---------------------------------------- ## get names of the coefficients from model.frame ## note excluding Surv n1 <- names(stats::model.frame(x))[!grepl( "Surv", names(stats::model.frame(x)) )] ## allocate memory for log partical likelihood llik <- double(length=interval) for (i in seq(length(coef1))) { ## lower + upper limits low1 <- mult[1] * coef1[i] up1 <- mult[2] * coef1[i] ## range for coefficient beta1 <- seq(from=low1, to=up1, length.out=interval) for (j in seq(interval)) { ## right hand side of formula without coefficient rhs1 <- paste0(n1[-i], collapse="+") ## offset = includes coefficient as fixed covariate off1 <- beta1[j] off2 <- paste0("+offset(", off1, "*", n1[i], ")") ## new RHS for formula rhs2 <- paste0(rhs1, off2) f2 <- stats::as.formula(paste0(".~", rhs2)) ## refit model and find model loglik with this value (beta) of coefficient c2 <- stats::update(x, formula=f2) llik[j] <- c2$loglik[2] } graphics::par(oma=c(0, 0, 4, 0)) if (i > 1 & devNew == TRUE) grDevices::dev.new() graphics::plot.default(beta1, llik, type="l", xlab="Values for coefficient", ylab="Model partial likelihood", main=n1[i], ...) ## range for confidence interval is chi-square on with 1 df rCI <- stats::qchisq(CI, 1) ## confidence interval (calcuate lower only) ci1 <- x$loglik[2] - rCI / 2 graphics::abline(h=ci1, lty=2) sd1 <- sqrt(x$var[i, i]) ## range for confidence interval of Wald is normal ## if CI is 95% then need convert to 97.5% CI2 <- (1 - CI) / 2 rCI <- stats::qnorm(1 - CI2) graphics::points(coef1[i] + c(-rCI, rCI) * sd1, c(ci1, ci1), pch=1, cex=3, ...) graphics::mtext(main1, line=0.3, outer=TRUE) } } survMisc/R/sf.R0000744000176200001440000001577314223601307013035 0ustar liggesusers#' @name sf #' @title \bold{s}urvival (or hazard) \bold{f}unction #' based on \eqn{e} and \eqn{n}. #' @description \bold{s}urvival (or hazard) \bold{f}unction #' based on \eqn{e} and \eqn{n}. #' #' @include ten.R #' #' @param x One of the following: #' \describe{ #' \item{default}{A numeric vector of events status (assumed sorted by time).} #' \item{numeric}{Vectors of events and numbers at risk (assumed sorted by time).} #' \item{ten}{A \code{ten} object.} #' \item{stratTen}{A \code{stratTen} object.} #' } #' @param ... Additional arguments (not implemented). #' @param n Number at risk. #' @param what See return, below. #' @param SCV Include the \bold{S}quared \bold{C}oefficient of #' \bold{V}ariation, which is calcluated using #' the mean \eqn{\bar{x}}{mean(x)} and #' the variance \eqn{\sigma_x^2}{var(x)}: #' \deqn{SCV_x = \frac{\sigma_x^2}{\bar{x}^2}}{ #' SCV[x] = var(x) / mean(x)^2} #' This measure of \emph{dispersion} is also referred to as #' the 'standardized variance' or the 'noise'. #' @param times Times for which to calculate the function. 
#' \cr #' If \code{times=NULL} (the default), times are used for #' which at least one event occurred in at least one covariate group. #' @param reCalc Recalcuate the values? #' \cr #' If \code{reCalc=FALSE} (the default) and the \code{ten} object already has #' the calculated values stored as an \code{attribute}, #' the value of the \code{attribute} is returned directly. #' #' @return #' A {data.table} which is stored as an attribute of #' the \code{ten} object. #' \cr #' If \code{what="s"}, the \bold{s}urvival is returned, based on the #' Kaplan-Meier or product-limit estimator. #' This is \eqn{1} at \eqn{t=0} and thereafter is given by: #' \deqn{\hat{S}(t) = \prod_{t \leq t_i} (1-\frac{e_i}{n_i} )}{ #' S[t] = prod (1 - e[t]) / n[t] } #' #' If \code{what="sv"}, the \bold{s}urvival \bold{v}ariance is returned. #' \cr #' Greenwoods estimtor of the variance of the #' Kaplan-Meier (product-limit) estimator is: #' \deqn{Var[\hat{S}(t)] = [\hat{S}(t)]^2 \sum_{t_i \leq t} #' \frac{e_i}{n_i (n_i - e_i)}}{ #' Var(S[t]) = S[t]^2 sum e[t] / (n[t] * (n[t] - e[t]))} #' #' If \code{what="h"}, the \bold{h}azard is returned, #' based on the the Nelson-Aalen estimator. #' This has a value of \eqn{\hat{H}=0}{H=0} at \eqn{t=0} #' and thereafter is given by: #' \deqn{\hat{H}(t) = \sum_{t \leq t_i} \frac{e_i}{n_i}}{ #' H[t] = sum(e[t] / n[t])} #' #' If \code{what="hv"}, the \bold{h}azard \bold{v}ariance is returned. #' \cr #' The variance of the Nelson-Aalen estimator is given by: #' \deqn{Var[\hat{H}(t)] = \sum_{t_i \leq t} \frac{e_i}{n_i^2}}{ #' Var(H[t]) = sum(e / n^2)} #' #' If \code{what="all"} (the default), \emph{all} of the above #' are returned in a \code{data.table}, along with: #' \cr #' Survival, based on the Nelson-Aalen hazard estimator \eqn{H}, #' which is: #' \deqn{\hat{S_{na}}=e^{H}}{ #' S[t] = exp(H[t])} #' Hazard, based on the Kaplan-Meier survival estimator \eqn{S}, #' which is: #' \deqn{\hat{H_{km}} = -\log{S}}{ #' H[t] = -log(S[t])} #' #' @keywords survival #' #' @rdname sf #' @export #' sf <- function(x, ...) UseMethod("sf") #' #' @rdname sf #' @export #' sf.default <- function(x, ..., what=c("S", "H"), SCV=FALSE, times=NULL){ stopifnot(all(x >= 0 && x <=1)) what <- match.arg(what) t1 <- ten(x) return(sf.ten(t1, what=what, SCV=SCV, times=times)) } #' #' @rdname sf #' @method sf ten #' @aliases sf.ten #' @export #' #' @examples #' data("kidney", package="KMsurv") #' k1 <- ten(Surv(time=time, event=delta) ~ type, data=kidney) #' sf(k1) #' sf(k1, times=1:10, reCalc=TRUE) #' k2 <- ten(with(kidney, Surv(time=time, event=delta))) #' sf(k2) #' ## K&M. Table 4.1A, pg 93. 
#' ## 6MP patients #' data("drug6mp", package="KMsurv") #' d1 <- with(drug6mp, Surv(time=t2, event=relapse)) #' (d1 <- ten(d1)) #' sf(x=d1$e, n=d1$n, what="S") #' sf.ten <- function(x, ..., what=c("S", "H"), SCV=FALSE, times=NULL, reCalc=FALSE){ stopifnot(inherits(x, "ten")) if (!reCalc & !is.null(attr(x, "sf"))) return (attr(x, "sf")) what <- match.arg(what) ## name of variance nv1 <- paste0(what, "v") ## functions to use if (what=="S") { fun1 <- km fun1v <- kmv } else { fun1 <- na fun1v <- nav } if (attr(x, "ncg")==0) { cg <- quote(NULL) ncg <- quote(n) } else { cg <- quote(cg) ncg <- quote(ncg) } if (is.null(times)) { res1 <- x[, t, by=eval(cg)] } else { res1 <- data.table::data.table( data.table::rbindlist( lapply(times, function(i) x[t <= i, list("t1"=i, "t"=max(t)), by=eval(cg)]))) } res1[, (what) := x[which(t %in% res1$t), fun1(e=e, n=eval(ncg)), by=eval(cg)]$V1] res1[, (nv1) := x[which(t %in% res1$t), fun1v(e=e, n=eval(ncg)), by=eval(cg)]$V1] if (SCV) res1[, "SCV" := Sv / S^2] data.table::setattr(x, "sf", res1) return(attr(x, "sf")) } #' #' @rdname sf #' @method sf stratTen #' @aliases strat.Ten #' @export #' @examples #' data("pbc", package="survival") #' t1 <- ten(Surv(time, status==2) ~ log(bili) + age + strata(edema), data=pbc) #' sf(t1) #' sf.stratTen <- function(x, ..., what=c("S", "H"), SCV=FALSE, times=NULL, reCalc=FALSE){ lapply(x, function (i) sf(i, what=what, times=times, reCalc=reCalc)) return(lapply(x, attr, "sf")) } #' #' @rdname sf #' @method sf numeric #' @aliases sf.numeric #' @export #' @examples #' ## K&M. Table 4.2, pg 94. #' data("bmt", package="KMsurv") #' b1 <- bmt[bmt$group==1, ] # ALL patients #' t2 <- ten(Surv(time=b1$t2, event=b1$d3)) #' with(t2, sf(x=e, n=n, what="Hv")) #' ## K&M. Table 4.3, pg 97. #' sf(x=t2$e, n=t2$n, what="all") #' sf.numeric <- function(x, ..., n=NULL, what=c("all", "S", "Sv", "H", "Hv"), SCV=FALSE, times=NULL){ what <- match.arg(what) if (is.null(n)) { what <- substring(what, 1L, 1L) if (what=="a") what <- "S" return(sf.default(x, what=what, SCV=SCV, time=times)) } stopifnot(is.numeric(x) & is.numeric(n)) stopifnot(length(x)==length(n)) if(what == "all") { res1 <- data.table::data.table( "S"=km(x, n), "Sv"=kmv(x, n), "H"=na(x, n), "Hv"=nav(x, n)) res1[, "sna" := exp(-H)] res1[, "hkm" := -log(S)] return(res1) } res1 <- switch(what, "S"=km(x, n), "Sv"=kmv(x, n), "H"=na(x, n), "Hv"=nav(x, n)) return(res1) } ### helper functions km <- function(e, n) cumprod(1 - (e / n)) kmv <- function(e, n) cumprod(1 - (e / n))^2 * cumsum(e / (n * (n - e))) na <- function(e, n) cumsum(e / n) nav <- function(e, n) cumsum(e / (n^2)) ## for R CMD check H <- S <- Hx <- Sv <- NULL survMisc/R/xtable.R0000744000176200001440000002131213317032411013663 0ustar liggesusers#' @name xtable #' @title \code{xtable} methods #' @description \code{xtable} methods #' #' @param x #' An object with an xtable method. #' @param caption Caption. #' @param label Label. #' @param align Alignment of columns. #' @param digits Number of digits to display. #' @param display How to display - passed to \code{formatC}. #' @param ... Additional arguments (not implemented). #' #' @return An \code{xtable}, suitable for use with/ parsing by LaTeX. #' #' @note #' \code{xtable.survfit} - this does \emph{not} show the (restricted) mean survival, #' only the median with confidence intervals. #' #' @seealso ? xtable #' ? xtable::print.xtable #' methods("xtable") #' #' @rdname xtable #' @export #' xtable <- function (x, caption=NULL, label=NULL, align=NULL, digits=NULL, display=NULL, ...) 
{ UseMethod("xtable") } #' #' @rdname xtable #' @method xtable table #' @aliases xtable.table #' @export #' #' @examples #' data("kidney", package="KMsurv") #' xtable(with(kidney, table(delta, type))) #' xtable.table <- function(x, caption=paste0( paste(names(dimnames(x)), collapse=" $\\times$ "), "\\\\ chi-sq=", signif(suppressWarnings( stats::chisq.test(x)$p.value), digits)), label=NULL, align=c("l", rep("c", dim(x)[2])), digits=2, display=NULL, ...){ identity(caption) dn1 <- dimnames(x) for (i in 1:length(dn1)) dn1[[i]][which(is.na(dn1[[i]]))] <- "NA" n1 <- names(dn1) names(attr(x, "dimnames")) <- NULL rownames(x) <- NULL if (length(dim(x))==1) { dim(x) <- c(1L, dim(x)) r1 <- x colnames(r1) <- dn1[[1]] rownames(r1) <- "." class(r1) <- "matrix" } else { r1 <- rbind(dn1[[2]], signif(x, digits)) rownames(r1) <- c(n1[1], dn1[[1]]) colnames(r1) <- c(n1[2], rep(".", dim(x)[2] - 1)) } xtable::xtable(r1, caption=caption, label=label, align=align, digits=digits, display=display, ...) } #' #' @rdname xtable #' @method xtable survfit #' @aliases xtable.survfit #' @export #' #' @examples #' ## K&M. Example 7.2, pg 210. #' xtable(survfit(Surv(time=time, event=delta) ~ type, data=kidney)) #' #' xtable.survfit <- function (x, caption=paste0("Survival for ", deparse(x$call[[2]])), label=NULL, align=c("l", rep("c", 7)), digits=NULL, display=rep("fg", 8), ...) { m1 <- survMean(x) xtable::xtable(m1, caption=caption, label=label, align=align, digits=digits, display=display, ...) } #### hidden functions ### these are based on ## survival:::survmean minMin <- function(y, x) { tolerance <- .Machine$double.eps^0.5 keep <- (!is.na(y) & y < (0.5 + tolerance)) if (!any(keep)) { return(NA) } else { x <- x[keep] y <- y[keep] if (abs(y[1] - 0.5) < tolerance && any(y < y[1])) { return((x[1] + x[min(which(y < y[1]))]) / 2) } else { return(x[1]) } } } printFun <- function(nused, time, surv, n.risk, n.event, lower, upper, start.time, end.time) { if (!is.na(end.time)) { hh <- ifelse((n.risk - n.event) == 0, 0, n.event / (n.risk * (n.risk - n.event))) keep <- which(time <= end.time) if (length(keep) == 0) { temptime <- end.time tempsurv <- 1 hh <- 0 } else { temptime <- c(time[keep], end.time) tempsurv <- c(surv[keep], surv[max(keep)]) hh <- c(hh[keep], 0) } n <- length(temptime) delta <- diff(c(start.time, temptime)) rectangles <- delta * c(1, tempsurv[-n]) varmean <- sum(cumsum(rev(rectangles[-1]))^2 * rev(hh)[-1]) mean <- sum(rectangles) + start.time } else { mean <- 0 varmean <- 0 } med <- minMin(surv, time) if (!is.null(upper)) { upper <- minMin(upper, time) lower <- minMin(lower, time) return(c(nused, max(n.risk), n.risk[1], sum(n.event), sum(mean), sqrt(varmean), med, lower, upper)) } else { return(c(nused, max(n.risk), n.risk[1], sum(n.event), sum(mean), sqrt(varmean), med, 0, 0)) } } survMean <- function(x, scale=1, rmean="none") { if (!is.null(x$start.time)) { start.time <- x$start.time } else { start.time <- min(0, x$time) } stime <- x$time / scale surv <- x$surv plab <- c("records", "n.max", "n.start", "events", "*rmean", "*se(rmean)", "median", paste(x$conf.int, c("LCL", "UCL"), sep = "")) ncols <- 9 if (is.null(x$strata)) { end.time <- NA if (is.matrix(surv)) { out <- matrix(0, ncol(surv), ncols) for (i in 1:ncol(surv)) { if (is.null(x$conf.int)) { out[i, ] <- printFun(x$n, stime, surv[, i], x$n.risk, x$n.event, NULL, NULL, start.time, end.time) } else { out[i, ] <- printFun(x$n, stime, surv[, i], x$n.risk, x$n.event, x$lower[, i], x$upper[, i], start.time, end.time) } } dimnames(out) <- 
list(dimnames(surv)[[2]], plab) } else { out <- matrix(printFun(x$n, stime, surv, x$n.risk, x$n.event, x$lower, x$upper, start.time, end.time), nrow = 1) dimnames(out) <- list(NULL, plab) } } else { nstrat <- length(x$strata) stemp <- rep(1:nstrat, x$strata) last.time <- (rev(x$time))[match(1:nstrat, rev(stemp))] end.time <- rep(NA, nstrat) if (is.matrix(surv)) { ns <- ncol(surv) out <- matrix(0, nstrat * ns, ncols) if (is.null(dimnames(surv)[[2]])) { dimnames(out) <- list(rep(names(x$strata), rep(ns, nstrat)), plab) } else { cname <- outer(dimnames(surv)[[2]], names(x$strata), paste, sep = ", ") dimnames(out) <- list(c(cname), plab) } k <- 0 for (i in 1:nstrat) { who <- (stemp == i) for (j in 1:ns) { k <- k + 1 if (is.null(x$lower)) { out[k, ] <- printFun(x$n[i], stime[who], surv[who, j], x$n.risk[who], x$n.event[who], NULL, NULL, start.time, end.time[i]) } else { out[k, ] <- printFun(x$n[i], stime[who], surv[who, j], x$n.risk[who], x$n.event[who], x$lower[who, j], x$upper[who, j], start.time, end.time[i]) } } } } else { out <- matrix(0, nstrat, ncols) dimnames(out) <- list(names(x$strata), plab) for (i in 1:nstrat) { who <- (stemp == i) if (is.null(x$lower)) { out[i, ] <- printFun(x$n[i], stime[who], surv[who], x$n.risk[who], x$n.event[who], NULL, NULL, start.time, end.time[i]) } else { out[i, ] <- printFun(x$n[i], stime[who], surv[who], x$n.risk[who], x$n.event[who], x$lower[who], x$upper[who], start.time, end.time[i]) } } } } if (is.null(x$lower)) out <- out[, 1:7, drop=FALSE] out <- out[, -(5:6), drop=FALSE] return(out[, , drop=FALSE]) } survMisc/R/predict.R0000744000176200001440000000550413317032246014051 0ustar liggesusers#' @name predict #' @title predicted events #' @description predicted events #' #' @include ten.R #' @include print.R #' @include asWide.R #' #' @param object An object of class \code{ten}. #' @param eMP Add column(s) indicating #' \bold{e}vents \bold{m}inus \bold{p}redicted. #' @inheritParams sf.ten #' #' @return An \code{attribute}, \code{pred} is added #' to \code{object}: #' \item{t}{Times with at least one observation} #' \item{P_}{\bold{p}redicted number of events} #' And if \code{eMP==TRUE} (the default): #' \item{eMP_}{\bold{e}vents \bold{m}inus \bold{p}redicted} #' The names of the \code{object}'s covariate groups are #' used to make the suffixes of the column names (i.e. after the #' \code{_} character). #' #' @details #' With \eqn{K} covariate groups, We use \eqn{ncg_{ik}}{ncg[i, k]}, #' the number at risk for group \eqn{k}, #' to calculate the number of expected events: #' \deqn{P_{ik} = \frac{e_i(ncg_{ik})}{n_i} \quad k=1, 2 \ldots K}{ #' P[i, k] = e[i] * ncg[i, k] / n[i]} #' #' @note There is a predicted value for each unique time, for each covariate group. #' #' @seealso #' ?survival::predict.coxph #' methods("predict") #' #' @rdname predict #' @method predict ten #' @aliases predict.ten #' @export #' @examples #' ## K&M. Example 7.2, Table 7.2, pp 209-210. #' data("kidney", package="KMsurv") #' k1 <- ten(Surv(time=time, event=delta) ~ type, data=kidney) #' predict(k1) #' predict(asWide(k1)) #' stopifnot(predict(asWide(k1))[, sum(eMP_1 + eMP_2)] <= #' .Machine$double.neg.eps) #' ## Three covariate groups #' ## K&M. Example 7.4, pp 212-214. 
#' data("bmt", package="KMsurv") #' b1 <- ten(Surv(time=t2, event=d3) ~ group, data=bmt) #' predict(b1) #' ## one group only #' predict(ten(Surv(time=t2, event=d3) ~ 1, data=bmt)) #' predict.ten <- function(object, ..., eMP=TRUE, reCalc=FALSE){ if (!reCalc & !is.null(attr(object, "pred"))) return (attr(object, "pred")) stopifnot(attr(object, "ncg")>=1) if (attr(object, "shape")=="long" & (attr(object, "ncg")==1)) { res1 <- object[, "P" := e * ncg / n][, list(t, P)] if(eMP) res1[, "eMP" := object[, e] - P] } else { x1 <- if(attr(object, "shape")=="wide") object else asWide(object) ## names of columns n1 <- x1[, as.matrix(.SD), .SDcols=grep("n_", names(x1))] res1 <- data.table::data.table(n1 * x1[, e / n]) data.table::setnames(res1, paste("P", seq.int(ncol(res1)), sep="_")) if (eMP) { e1 <- x1[, as.matrix(.SD), .SDcols=grep("e_", names(x1))] na1 <- paste("eMP", seq.int(ncol(res1)), sep="_") res1[, (na1) := data.frame(as.matrix(res1) - e1)] } } data.table::setattr(res1, "class", c("pred", class(res1))) data.table::setattr(object, "pred", res1) return(attr(object, "pred")) } survMisc/R/autoplotTAP.R0000744000176200001440000000537413317031777014650 0ustar liggesusers#' @name autoplotTableAndPlot #' @title Arrange a survival plot with corresponding table and legend. #' @description Arrange a survival plot with corresponding table and legend. #' #' @include autoplotTen.R #' #' @param object An object of class \code{"tableAndPlot"}, as returned by #' \code{ggplot.Ten}. #' @param ... Additional arguments (not implemented). #' @param hideTabLeg Hide table legend. #' \cr #' If \code{hideTabLeg = TRUE} (the default), the table legend will not appear. #' @param tabHeight Table height, as a fraction/ proportion of the whole. #' \cr #' \code{tabHeight=0.25} (the default) makes the table #' \eqn{0.25 = 25\%} of the whole plot height. #' #' @return A graph, plotted with \code{gridExtra::grid.arrange}. #' #' @details Arguments to \code{plotHeigth} and \code{tabHeight} are #' best specified as fractions adding to \eqn{1}, #' \cr #' #' @note This method is called by \code{\link{print.tableAndPlot}} #' and by \code{print.stratTableAndPlot}. #' #' @author Chris Dardis. Based on existing work by #' R. Saccilotto, Abhijit Dasgupta, Gil Tomas and Mark Cowley. 
#'
#' @keywords graphics
#'
#' @rdname autoplotTAP
#' @method autoplot tableAndPlot
#' @aliases autoplot.tableAndPlot
#' @export
#' @examples
#' \dontrun{
#' data("kidney", package="KMsurv")
#' autoplot(survfit(Surv(time, delta) ~ type, data=kidney), type="fill")
#' autoplot(ten(survfit(Surv(time, delta) ~ type, data=kidney)), type="fill")
#' data("bmt", package="KMsurv")
#' s2 <- survfit(Surv(time=t2, event=d3) ~ group, data=bmt)
#' autoplot(s2)
#' }
autoplot.tableAndPlot <- function(object, ...,
                                  hideTabLeg=TRUE,
                                  tabHeight=0.25){
    stopifnot(inherits(object, "tableAndPlot"))
    stopifnot(0 < tabHeight & tabHeight < 1)
    if (hideTabLeg) {
        object$table <- object$table +
            theme(legend.key.height=NULL,
                  legend.key.width=NULL,
                  legend.key=element_rect(colour=NA, fill=NA),
                  legend.text=element_text(colour=NA),
                  legend.title=element_text(colour=NA))
    }
    ## change to graphical objects
    grobs1 <- lapply(rev(seq.int(object)), function(i) ggplotGrob(object[[i]]))
    ## collect the widths for each grob of each plot
    w1 <- lapply(seq.int(grobs1), function(i) grobs1[[i]]$widths[2:5])
    ## use do.call to get the max width
    maxWidth1 <- do.call(grid::unit.pmax, w1)
    ## assign the max width to each grob
    for (i in seq.int(grobs1)) {
        grobs1[[i]]$widths[2:5] <- as.list(maxWidth1)
    }
    ## plot
    do.call(gridExtra::grid.arrange,
            c(grobs1, nrow=2, heights=list(c(1 - tabHeight, tabHeight))))
}
survMisc/R/comp.R0000744000176200001440000004213313317043507013355 0ustar liggesusers#' @name comp
#' @title compare survival curves
#' @description compare survival curves
#'
#' @include ten.R
#' @include asWide.R
#' @include COV.R
#' @include predict.R
#' @include sf.R
#' @include print.R
#'
#' @rdname comp
#' @export comp
#'
comp <- function(x, ...) UseMethod("comp")
#'
#' @param x A \code{ten} object
#' @param p \eqn{p} for Fleming-Harrington test
#' @param q \eqn{q} for Fleming-Harrington test
#' @param scores scores for tests for trend
#' @inheritParams sf.ten
#'
#' @return The \code{ten} object is given
#' additional \code{attributes}.
#' \cr
#' The following are always added:
#' \item{lrt}{The \bold{l}og-\bold{r}ank family of \bold{t}ests}
#' \item{lrw}{The \bold{l}og-\bold{r}ank \bold{w}eights (used in calculating the tests).}
#' An additional item depends on the number of covariate groups.
#' \cr
#' If this is \eqn{=2}:
#' \item{sup}{The \bold{sup}remum or Renyi family of tests}
#' and if this is \eqn{>2}:
#' \item{tft}{Tests for trend. This is given as a \code{list},
#' with the statistics and the scores used.}
#'
#' @details
#' The \bold{log-rank} tests are formed from the following elements,
#' with values for each time where there is at least one event:
#' \itemize{
#'  \item \eqn{W_i}{W[i]}, the weights, given below.
#'  \item \eqn{e_i}{e[i]}, the number of events (per time).
#'  \item \eqn{\hat{e_i}}{P[i]}, the number of \emph{predicted} events,
#'        given by \code{\link{predict}}.
#'  \item \eqn{COV_i}{COV[, , i]}, the covariance matrix for time \eqn{i},
#'        given by \code{\link{COV}}.
#' }
#' It is calculated as:
#' \deqn{Q_i = \sum{W_i (e_i - \hat{e}_i)}^T
#'       \sum{W_i \hat{COV_i} W_i^{-1}}
#'       \sum{W_i (e_i - \hat{e}_i)}}{
#' Q[i] = sum(W[i] * (e[i] - P[i]))^T *
#'        sum(W[i] * COV[, , i] * W[i])^-1 *
#'        sum(W[i] * (e[i] - P[i]))}
#'
#' If there are \eqn{K} groups, then \eqn{K-1} are selected (arbitrary).
#' \cr
#' Likewise the corresponding variance-covariance matrix is reduced to the
#' appropriate \eqn{K-1 \times K-1}{K-1 * K-1} dimensions.
#' \cr
#' \eqn{Q} is distributed as chi-square with \eqn{K-1} degrees of freedom.
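#' \cr \cr
#' As an illustration of the above, with two covariate groups the unweighted
#' (weight \eqn{=1}) statistic can be assembled directly from the output of
#' \code{\link{predict}} and \code{\link{COV}}.
#' A minimal sketch (not run), assuming the \code{kidney} data from
#' \pkg{KMsurv} as used in the examples below:
#' \preformatted{
#' k1 <- ten(Surv(time=time, event=delta) ~ type, data=kidney)
#' eMP1 <- predict(k1)[, eMP_1]  # events minus predicted, group 1
#' V1 <- sum(COV(k1))            # summed covariance across times
#' sum(eMP1)^2 / V1              # chi-squared statistic, 1 degree of freedom
#' }
#' The result should correspond to the weight-\code{1} (log-rank) row of the
#' tests printed by \code{comp}.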
#' \cr \cr #' For \eqn{2} covariate groups, we can use: #' \itemize{ #' \item \eqn{e_i}{e[i]} the number of events (per time). #' \item \eqn{n_i}{e[i]} the number at risk overall. #' \item \eqn{e1_i}{e1[i]} the number of events in group \eqn{1}. #' \item \eqn{n1_i}{n1[i]} the number at risk in group \eqn{1}. #' } #' Then: #' \deqn{Q = \frac{\sum{W_i [e1_i - n1_i (\frac{e_i}{n_i})]} }{ #' \sqrt{\sum{W_i^2 \frac{n1_i}{n_i} #' (1 - \frac{n1_i}{n_i}) #' (\frac{n_i - e_i}{n_i - 1}) e_i }}}}{ #' Q = sum(W[i] * (e1[i] - n1[i] * e[i] / n[i])) / #' sqrt(sum(W[i]^2 * e1[i] / e[i] * (1 - n1[i] / n[i]) * (n[i] - e[i] / (n[i] - 1)) *e[i]))} #' Below, for the Fleming-Harrington weights, #' \eqn{\hat{S}(t)}{S(t)} is the Kaplan-Meier (product-limit) estimator. #' \cr #' Note that both \eqn{p} and \eqn{q} need to be \eqn{\geq 0}{>=0}. #' \cr \cr #' The weights are given as follows: #' \tabular{cll}{ #' \eqn{1} \tab log-rank \tab \cr #' \eqn{n_i}{n[i]} \tab Gehan-Breslow generalized Wilcoxon \tab \cr #' \eqn{\sqrt{n_i}}{sqrt(n[i])} \tab Tarone-Ware \tab \cr #' \eqn{S1_i}{S1[i]} \tab Peto-Peto's modified survival estimate \tab #' \eqn{\bar{S}(t)=\prod{1 - \frac{e_i}{n_i + 1}}}{ #' S1(t) = cumprod(1 - e / (n + 1))} \cr #' \eqn{S2_i}{S2[i]} \tab modified Peto-Peto (by Andersen) \tab #' \eqn{\tilde{S}(t)=\bar{S} - \frac{n_i}{n_i + 1}}{ #' S2(t) = S1[i] * n[i] / (n[i] + 1) } \cr #' \eqn{FH_i}{FH[i]} \tab Fleming-Harrington \tab #' The weight at \eqn{t_0 = 1} and thereafter is: #' \eqn{\hat{S}(t_{i-1})^p [1-\hat{S}(t_{i-1})^q]}{ #' S(t[i - 1])^p * (1 - S(t)[i - 1]^q)} #' } #' The \bold{supremum (Renyi)} family of tests are designed #' to detect differences in survival curves which \emph{cross}. #' \cr #' That is, an early difference in survival in favor of one group #' is balanced by a later reversal. #' \cr #' The same weights as above are used. #' \cr #' They are calculated by finding #' \deqn{Z(t_i) = \sum_{t_k \leq t_i} W(t_k)[e1_k - n1_k\frac{e_k}{n_k}], \quad i=1,2,...,k}{ #' Z(t[i]) = SUM W(t[k]) [ e1[k] - n1[k]e[k]/n[k] ]} #' (which is similar to the numerator used to find \eqn{Q} #' in the log-rank test for 2 groups above). #' \cr #' and it's variance: #' \deqn{\sigma^2(\tau) = \sum_{t_k \leq \tau} W(t_k)^2 \frac{n1_k n2_k (n_k-e_k) e_k}{n_k^2 (n_k-1)} }{ #' simga^2(tau) = sum(k=1, 2, ..., tau) W(t[k]) (n1[k] * n2[k] * (n[k] - e[k]) * #' e[k] / n[k]^2 * (n[k] - 1) ] } #' where \eqn{\tau}{tau} is the largest \eqn{t} #' where both groups have at least one subject at risk. #' \cr \cr #' Then calculate: #' \deqn{ Q = \frac{ \sup{|Z(t)|}}{\sigma(\tau)}, \quad t<\tau }{ #' Q = sup( |Z(t)| ) / sigma(tau), t < tau} #' When the null hypothesis is true, #' the distribution of \eqn{Q} is approximately #' \deqn{Q \sim \sup{|B(x)|, \quad 0 \leq x \leq 1}}{ #' Q ~ sup( |B(x)|, 0 <= x <= 1)} #' And for a standard Brownian motion (Wiener) process: #' \deqn{Pr[\sup|B(t)|>x] = 1 - \frac{4}{\pi} #' \sum_{k=0}^{\infty} #' \frac{(- 1)^k}{2k + 1} \exp{\frac{-\pi^2(2k + 1)^2}{8x^2}}}{ #' Pr[sup|B(t)| > x] = 1 - 4 / pi sum((-1)^k / (2 * k + 1) * exp(-pi^2 (2k + 1)^2 / x^2))} #' \bold{Tests for trend} are designed to detect ordered differences in survival curves. #' \cr #' That is, for at least one group: #' \deqn{S_1(t) \geq S_2(t) \geq ... \geq S_K(t) \quad t \leq \tau}{ #' S1(t) >= S2(t) >= ... >= SK(t) for t <= tau} #' where \eqn{\tau}{tau} is the largest \eqn{t} where all groups have at least one subject at risk. #' The null hypothesis is that #' \deqn{S_1(t) = S_2(t) = ... 
= S_K(t) \quad t \leq \tau}{ #' S1(t) = S2(t) = ... = SK(t) for t <= tau} #' Scores used to construct the test are typically \eqn{s = 1,2,...,K}, #' but may be given as a vector representing a numeric characteristic of the group. #' \cr #' They are calculated by finding: #' \deqn{ Z_j(t_i) = \sum_{t_i \leq \tau} W(t_i)[e_{ji} - n_{ji} \frac{e_i}{n_i}], #' \quad j=1,2,...,K}{ #' Z[t(i)] = sum(W[t(i)] * (e[j](i) - n[j](i) * e(i) / n(i)))} #' The test statistic is: #' \deqn{Z = \frac{ \sum_{j=1}^K s_jZ_j(\tau)}{\sqrt{\sum_{j=1}^K \sum_{g=1}^K s_js_g \sigma_{jg}}} }{ #' Z = sum(j=1, ..., K) s[j] * Z[j] / sum(j=1, ..., K) sum(g=1, ..., K) #' s[j] * s[g] * sigma[jg]} #' where \eqn{\sigma}{sigma} is the the appropriate element in the #' variance-covariance matrix (see \code{\link{COV}}). #' \cr #' If ordering is present, the statistic \eqn{Z} will be greater than the #' upper \eqn{\alpha}{alpha}-th #' percentile of a standard normal distribution. #' #' @note Regarding the Fleming-Harrington weights: #' \itemize{ #' \item \eqn{p = q = 0} gives the log-rank test, i.e. \eqn{W=1} #' \item \eqn{p=1, q=0} gives a version of the Mann-Whitney-Wilcoxon test #' (tests if populations distributions are identical) #' \item \eqn{p=0, q>0} gives more weight to differences later on #' \item \eqn{p>0, q=0} gives more weight to differences early on #' } #' The example using \code{alloauto} data illustrates this. #' Here the log-rank statistic #' has a p-value of around 0.5 #' as the late advantage of allogenic transplants #' is offset by the high early mortality. However using #' Fleming-Harrington weights of \eqn{p=0, q=0.5}, #' emphasising differences later in time, gives a p-value of 0.04. #' \cr #' Stratified models (\code{stratTen}) are \emph{not} yet supported. #' #' @references Gehan A. #' A Generalized Wilcoxon Test for Comparing Arbitrarily #' Singly-Censored Samples. #' Biometrika 1965 Jun. 52(1/2):203--23. #' \samp{http://www.jstor.org/stable/2333825} JSTOR #' @references Tarone RE, Ware J 1977 #' On Distribution-Free Tests for Equality of Survival Distributions. #' \emph{Biometrika};\bold{64}(1):156--60. #' \samp{http://www.jstor.org/stable/2335790} JSTOR #' @references Peto R, Peto J 1972 #' Asymptotically Efficient Rank Invariant Test Procedures. #' \emph{J Royal Statistical Society} \bold{135}(2):186--207. #' \samp{http://www.jstor.org/stable/2344317} JSTOR #' @references Fleming TR, Harrington DP, O'Sullivan M 1987 #' Supremum Versions of the Log-Rank and Generalized Wilcoxon Statistics. #' \emph{J American Statistical Association} \bold{82}(397):312--20. #' \samp{http://www.jstor.org/stable/2289169} JSTOR #' @references Billingsly P 1999 #' \emph{Convergence of Probability Measures.} #' New York: John Wiley & Sons. #' \samp{http://dx.doi.org/10.1002/9780470316962} Wiley (paywall) #' #' @examples #' ## Two covariate groups #' data("leukemia", package="survival") #' f1 <- survfit(Surv(time, status) ~ x, data=leukemia) #' comp(ten(f1)) #' ## K&M 2nd ed. Example 7.2, Table 7.2, pp 209--210. #' data("kidney", package="KMsurv") #' t1 <- ten(Surv(time=time, event=delta) ~ type, data=kidney) #' comp(t1, p=c(0, 1, 1, 0.5, 0.5), q=c(1, 0, 1, 0.5, 2)) #' ## see the weights used #' attributes(t1)$lrw #' ## supremum (Renyi) test; two-sided; two covariate groups #' ## K&M 2nd ed. Example 7.9, pp 223--226. #' data("gastric", package="survMisc") #' g1 <- ten(Surv(time, event) ~ group, data=gastric) #' comp(g1) #' ## Three covariate groups #' ## K&M 2nd ed. Example 7.4, pp 212-214. 
#' data("bmt", package="KMsurv") #' b1 <- ten(Surv(time=t2, event=d3) ~ group, data=bmt) #' comp(b1, p=c(1, 0, 1), q=c(0, 1, 1)) #' ## Tests for trend #' ## K&M 2nd ed. Example 7.6, pp 217-218. #' data("larynx", package="KMsurv") #' l1 <- ten(Surv(time, delta) ~ stage, data=larynx) #' comp(l1) #' attr(l1, "tft") #' ### see effect of F-H test #' data("alloauto", package="KMsurv") #' a1 <- ten(Surv(time, delta) ~ type, data=alloauto) #' comp(a1, p=c(0, 1), q=c(1, 1)) #' #' @rdname comp #' @aliases comp.ten #' @method comp ten #' @export #' comp.ten <- function(x, ..., p=1, q=1, scores=seq.int(attr(x, "ncg")), reCalc=FALSE){ if (!reCalc & !is.null(attr(x, "lrt"))) { print(attr(x, "lrt")) print(if (!is.null(attr(x, "sup"))) attr(x, "sup") else attr(x, "tft")) return(invisible()) } stopifnot(attr(x, "ncg") >= 2) stopifnot(length(p)==length(q)) ## number of F-H tests fh1 <- length(p) if (!attr(x, "sorted")=="t") data.table::setkey(x, t) ## times with at least one event t1 <- x[e>0, t, by=t][, t] ## WEIGHTS wt1 <- data.table::data.table( array(data=1, dim=c(length(t1), 5L + fh1))) ## names for F-H tests FHn <- paste("FH_p=", p, "_q=", q, sep="") ## names for weights n1 <- c("1", "n", "sqrtN", "S1", "S2", FHn) data.table::setnames(wt1, n1) ## Gehan-Breslow generalized Wilcoxon, weight = n data.table::set(wt1, j="n", value=x[e>0, max(n), by=t][, V1]) ## Tarone-Ware, weight = sqrt(n) data.table::set(wt1, j="sqrtN", value=wt1[, sqrt(.SD), .SDcols="n"]) ## Peto-Peto, weight = S(t) = modified estimator of survival function data.table::set(wt1, j="S1", value=cumprod(x[e > 0, 1 - sum(e) / (max(n) + 1), by=t][, V1])) ## modified Peto-Peto (by Andersen), weight = S(t)n / n+1 data.table::set(wt1, j="S2", value=wt1[, S1] * x[e > 0, max(n) / (max(n) + 1), by=t][, V1]) ## Fleming-Harrington S3 <- sf(x=x[e>0, sum(e), by=t][, V1], n=x[e>0, max(n), by=t][, V1], what="S") ## weight of first 1st element is 1 as depends on [i-1] S3 <- c(1, S3[seq.int(length(S3) - 1L)]) ## assign to names ## SIMPLIFY = FALSE returns list rather than matrix wt1[, (FHn) := mapply(function(p, q) S3^p * ((1 - S3)^q), p, q, SIMPLIFY=FALSE)] ## Hold results: ## if 2 groups res1 = log-rank tests ## if >2 groups, res1 = tests for trend n2 <- c("W", "Q", "Var", "Z", "pNorm", "chiSq", "df", "pChisq") res1 <- data.table::data.table(matrix(0, nrow=ncol(wt1), ncol=length(n2))) data.table::setnames(res1, n2) data.table::set(res1, j=1L, value=n1) predict(x) ## events minus predicted eMP1 <- attr(x, "pred") eMP1 <- eMP1[rowSums(eMP1) > 0, ] ## covariance COV(x) cov1 <- attr(x, "COV") if (is.null(dim(cov1))) { cov1 <- cov1[names(cov1) %in% t1] } else { ## 3rd dimension = times cov1 <- cov1[, , dimnames(cov1)[[3]] %in% t1] } ## number of covariate groups ncg1 <- attr(x, "ncg") ### 2 groups only: if (ncg1==2) { ### log-rank family ## make observed - expected for one group eMP1 <- unlist(eMP1[, .SD, .SDcols=(length(eMP1) - 1L)]) data.table::set(res1, j="Q", value=colSums(wt1 * eMP1)) data.table::set(res1, j="Var", value=colSums(wt1^2 * cov1)) ### supremum tests ### aka Renyi statistics ### (analagous to 2-sample Kolmogorov-Smirnov test) n3 <- c("W", "maxAbsZ", "Var", "Q", "pSupBr") res2 <- data.table::data.table(matrix(0, nrow=5 + fh1, ncol=length(n3))) data.table::setnames(res2, n3) data.table::set(res2, j=1L, value=n1) data.table::set(res2, j="maxAbsZ", value=sapply(abs(cumsum(eMP1 * wt1)), max)) data.table::set(res2, j="Var", value=res1[, Var]) res2[, "Q" := maxAbsZ / sqrt(Var)] res2[, "pSupBr" := sapply(Q, probSupBr)] data.table::setattr(res2, 
"class", c("sup", class(res2))) } ### >2 groups if (ncg1 > 2) { ## degrees of freedom df1 <- seq.int(ncg1 - 1L) ## Subset - reduce to df1 degrees of freedom eMP1 <- eMP1[, .SD, .SDcols=grep("eMP_", names(eMP1))] ## hold results res3 <- data.table::data.table(array(0, dim=c(ncol(wt1), 4L))) data.table::setnames(res3, c("W", "chiSq", "df", "pChisq")) data.table::set(res3, j=1L, value=n1) ## We save results below as these are also used by ## the tests for trend ## Take each column of weight, multiply by each column of eMP ## then get sum of these values eMP1w <- apply(wt1, MARGIN=2, FUN=function(wt) colSums(sweep(eMP1, MARGIN=1, STATS=wt, FUN="*"))) ## Same as above but use weight^2; then cov1w <- apply(wt1, MARGIN=2, FUN=function(wt) rowSums(sweep(cov1, MARGIN=3, STATS=wt^2, FUN="*"), dims=2)) dim(cov1w) <- c(ncg1, ncg1, ncol(cov1w)) ## Subset - reduce to df1 degrees of freedom before solving cov1ws <- cov1w[df1, df1, ] cov1ws <- apply(cov1ws, MARGIN=3, FUN=solve) dim(cov1ws) <- c(max(df1), max(df1), length(n1)) eMP1ss <- eMP1w[df1, ] ## only need subset for this calculation data.table::set(res3, j="chiSq", value= sapply( seq.int(length(n1)), function(i) eMP1ss[, i] %*% cov1ws[, , i] %*% eMP1ss[, i])) ## results res3[, "df" := max(df1)] res3[, "pChisq" := 1 - stats::pchisq(chiSq, df)] data.table::setattr(res3, "class", c("lrt", class(res3))) ### Tests for trend ## scores - all combinations sAC1 <- as.matrix(expand.grid(scores, scores)) ## scores - product of all combinations scoProd1 <- apply(sAC1, MARGIN=1, FUN=prod) data.table::set(res1, j="Q", value=colSums(eMP1w * scores)) data.table::set(res1, j="Var", value=abs(apply(cov1w * scoProd1, MARGIN=3, sum))) } ## results res1[, "Z" := Q / sqrt(Var)] res1[, "pNorm" := 2 * (1 - stats::pnorm(abs(Z)))] res1[, "chiSq" := Q^2 / Var] res1[, "df" := 1] res1[, "pChisq" := 1 - stats::pchisq(chiSq, df)] data.table::setattr(res1, "class", c("lrt", class(res1))) ## add column for times ## these are unique times with at least one event data.table::set(wt1, j="t", value=t1) data.table::setattr(x, "lrw", wt1) if (ncg1==2) { data.table::setattr(x, "lrt", res1) data.table::setattr(x, "sup", res2) } else { data.table::setattr(x, "lrt", res3) res1 <- list("tft"=res1, scores=scores) data.table::setattr(x, "tft", res1) } print(attr(x, "lrt")) print(if (!is.null(attr(x, "sup"))) attr(x, "sup") else attr(x, "tft")) return(invisible()) } ### Helper functions ## Probability of supremum of absolute value of ## standard Brownian motion process B(t) ## For k, 1e4 is good enough for all practical purposes probSupBr <- function(x) { k <- c(0L, seq.int(1e4)) 1 - (4 / pi) * (sum(((( - 1)^k) / (2 * k + 1)) * exp(-(((pi^2) * (2 * k + 1)^2) / (8 * x^2))))) } ## for R CMD check S1 <- Var <- maxAbsZ <- chiSq <- Z <- df <- NULL survMisc/R/onAttach.R0000744000176200001440000000024712737213545014167 0ustar liggesusers.onAttach <- function(libname, pkgname) { if (interactive()) { packageStartupMessage("survMisc ", as.character(utils::packageVersion("survMisc"))) } } survMisc/R/ten.R0000744000176200001440000004206614223602150013203 0ustar liggesusers#' @name ten #' @title \bold{t}ime, \bold{e}vent(s) and \bold{n}umber at risk. #' @description \bold{t}ime, \bold{e}vent(s) and \bold{n}umber at risk. #' #' @include print.R #' @include asWide.R #' #' @param x #' For the default method, a \code{numeric} vector indicating an #' \emph{event} (or status). #' \cr #' Each element indicates whether an event occurred (\code{1}) or #' not (\code{0}) for an observation. 
#' \cr #' These are assumed to be ordered by discrete times. #' \cr #' This is similar to the \code{event} argument for \code{Surv} #' objects. #' \cr \cr #' Methods are available for objects of class #' \code{Surv}, \code{survfit}, #' \code{coxph} and \code{formula}. #' @param abbNames \bold{Abb}reviate names? #' \cr #' If \code{abbNames="TRUE"} (the default), #' the covariate groups are referred to by number. #' \cr #' As the names for each covariate group are made by concatenating #' the predictor names, the full names can become unwieldy. #' \cr #' If \code{abbNames="FALSE"}, the full names are given. #' \cr #' In either case, the \code{longNames} are given #' as an \code{attribute} of the returned \code{ten} object. #' @param contrasts.arg Methods for handling factors. #' \cr #' A \code{list}. The \code{names} are the names of #' columns of the \code{model.frame} containing #' \code{factor}s. #' \cr #' The \emph{values} are used as replacement #' values for the \code{stats::contrasts} replacement function. #' These should be functions (given as character strings) #' or numeric matrices. #' \cr #' This can be passed from #' \code{survfit}, \code{coxph} and \code{formula} objects to: #' \cr #' ?stats::model.matrix #' @param call Used to pass the \code{call} from a \code{formula} #' to the final \code{ten.data.table} method. #' @param mm Used to pass the \code{model.matrix} from a \code{formula} #' to the final \code{ten.data.table} method. #' @inheritParams sf.ten #' #' @return A \code{data.table} with the additional \code{class} #' \code{ten}. #' \cr #' By default, the shape returned is 'long' i.e. there is one row for each unique #' timepoint per covariate group. #' \cr #' The basic form, for a \code{numeric} or \code{Surv} object, has columns: #' \item{t}{\bold{t}ime.} #' \item{e}{number of \bold{e}vents.} #' \item{n}{\bold{n}umber at risk.} #' A \code{survfit}, \code{coxph} or \code{formula} object #' will have additional columns: #' \item{cg}{\bold{c}ovariate \bold{g}roup. #' This is formed by combining the variables; these #' are separated by a comma ','.} #' \item{ncg}{\bold{n}umber at risk, by \bold{c}ovariate \bold{g}roup} #' #' \bold{Special terms}. #' \cr \cr #' The following are considered 'special' #' terms in a survival model: #' \item{strata}{For a stratified model, \code{ten} returns a \code{list} with #' one element per strata, which is a \code{ten} object. #' \cr #' This has the class \code{stratTen}. The names of the #' list elements are those of the strata in the model.} #' \item{cluster}{These terms are dropped.} #' \item{tt}{The variable is unchanged. That is, time-transform #' terms are handled as if the function #' \code{tt(x)} was \code{identity(x)}.} #' \bold{Attributes}. #' \cr #' The returned object will also have the following \code{attributes}: #' \item{shape}{The default is \code{"long"} but #' is changed to \code{"wide"} when \code{asWide} is called on the object.} #' \item{abbNames}{Abbreviate names?} #' \item{longNames}{A \code{data.table} with two columns, showing the abbreviated #' and full names.} #' \item{ncg}{Number of covariate groups} #' \item{call}{The call used to generate the object} #' \item{mm}{The \code{model.matrix} used to #' generate the object, if applicable.} #' Additional attributes will be added by the following functions: #' \cr #' \code{\link{sf}} #' \code{\link{ci}} #' #' @note #' The methods for \code{data.frame} (for a model frame) #' and \code{data.table} are not typically intended for interactive use.
#' \cr \cr #' Currently only binary status and right-censoring #' are supported. #' \cr \cr #' In stratified models, only one level of stratification is supported #' (i.e. strata cannot be 'nested' currently). #' \cr \cr #' Partial matching is available for the #' following arguments, based on the characters in bold: #' \itemize{ #' \item \bold{abb}Names #' \item \bold{con}trasts.arg #' } #' #' @seealso \code{\link{asWide}} #' @seealso \code{\link{print}} #' #' @rdname ten #' @export #' ten <- function(x, ...) UseMethod("ten") ### all are methods ultimately passed to ### ten.data.frame (below) ### except ten.numeric() and ten.Surv() ###---------------------------------------- #' #' @rdname ten #' @method ten numeric #' @aliases ten.numeric #' @export #' @examples #' require("survival") #' ## binary vector #' ten(c(1, 0, 1, 0, 1)) #' ten.numeric <- function(x, ...){ stopifnot(all(x >= 0 & x <=1)) res1 <- data.table::data.table( "t"=(t <- seq_along(x)), "n"=rev(t), "e"=x) data.table::setattr(res1, "class", c("ten", class(res1))) setAttr(res1, shape="long", abbNames=TRUE, ncg=0, call=match.call()) return(res1) } #' #' @rdname ten #' @method ten Surv #' @aliases ten.Surv #' @export #' @examples #' ## Surv object #' df0 <- data.frame(t=c(1, 1, 2, 3, 5, 8, 13, 21), #' e=rep(c(0, 1), 4)) #' s1 <- with(df0, Surv(t, e, type="right")) #' ten(s1) #' ## some awkward values #' suppressWarnings( #' s1 <- Surv(time=c(Inf, -1, NaN, NA, 10, 12), #' event=c(c(NA, 1, 1, NaN, Inf, 0.75)))) #' ten(s1) #' ten.Surv <- function(x, ..., call=NULL){ stopifnot(inherits(x, "Surv")) stopifnot(attributes(x)$type=="right") if(is.null(call)) call <- match.call() res1 <- data.table::data.table(unclass(x)) data.table::setkey(res1, "time") res1 <- res1[, list("e"=sum(status), "n"=length(status)), by=sort(time, na.last=TRUE)] res1[, "n" := c(sum(n), sum(n) - cumsum(n)[ - length(n)])] data.table::setnames(res1, c("t", "e", "n")) data.table::setattr(res1, "class", c("ten", class(res1))) setAttr(res1, shape="long", ncg=0, call=call) return(res1) } #' #' @rdname ten #' @method ten coxph #' @aliases ten.coxph #' @export #' @examples #' ## coxph object #' ## K&M. Section 1.2. Table 1.1, page 2. #' data("hodg", package="KMsurv") #' hodg <- data.table::data.table(hodg) #' data.table::setnames(hodg, #' c(names(hodg)[!names(hodg) %in% #' c("score", "wtime")], #' "Z1", "Z2")) #' c1 <- coxph(Surv(time=time, event=delta) ~ Z1 + Z2, #' data=hodg[gtype==1 & dtype==1, ]) #' ten(c1) #' data("bmt", package="KMsurv") #' ten(c1 <- coxph(Surv(t2, d3) ~ z3*z10, data=bmt)) #' ## T&G. Section 3.2, pg 47. #' ## stratified model #' data("pbc", package="survival") #' c1 <- coxph(Surv(time, status==2) ~ log(bili) + age + strata(edema), data=pbc) #' ten(c1) #' ten.coxph <- function(x, ..., abbNames=TRUE, contrasts.arg=NULL){ partMatch(env1=environment(), ...) x$call$formula <- stats::terms( x=stats::formula(x$call), specials=c("strata", "cluster", "tt")) mode(x$call) <- "list" length(x$call) <- 3 mode(x$call) <- "call" call1 <- x$call x$call$drop.unused.levels <- TRUE x$call[[1]] <- as.name("model.frame") ## model.frame xMF1 <- eval(x$call, parent.frame()) ten(x=xMF1, abbNames=abbNames, contrasts.arg=contrasts.arg, call=call1) } #' @rdname ten #' @aliases ten.survfit #' @method ten survfit #' @export #' @examples #' ## K&M. Example 7.2, pg 210. 
#' data("kidney", package="KMsurv") #' with(kidney[kidney$type==2, ], ten(Surv(time=time, event=delta))) #' s1 <- survfit(Surv(time=time, event=delta) ~ type, data=kidney) #' ten(s1)[e > 0, ] #' ten.survfit <- function(x, ..., abbNames=TRUE, contrasts.arg=NULL){ partMatch(env1=environment(), ...) x$call$formula <- stats::terms( x=stats::formula(x$call), specials=c("strata", "cluster", "tt")) mode(x$call) <- "list" length(x$call) <- 3 mode(x$call) <- "call" call1 <- x$call x$call$drop.unused.levels <- TRUE x$call[[1]] <- as.name("model.frame") xMF1 <- eval(x$call, parent.frame()) ten(x=xMF1, abbNames=abbNames, contrasts.arg=contrasts.arg, call=call1) } #' #' @rdname ten #' @method ten formula #' @aliases ten.formula #' @export #' @examples #' ## A null model is passed to ten.Surv #' (t1 <- with(kidney, ten(Surv(time=time, event=delta) ~ 0))) #' ## but the original call is preserved #' attr(t1, "call") #' ## survival::survfit doesn't accept interaction terms... #' \dontrun{ #' s1 <- survfit(Surv(t2, d3) ~ z3*z10, data=bmt)} #' ## but ten.formula does: #' ten(Surv(time=t2, event=d3) ~ z3*z10, data=bmt) #' ## the same is true for the '.' (dot operator) in formulas #' (t1 <- ten(Surv(time=t2, event=d3) ~ ., data=bmt)) #' ## impractical long names stored as an attribute #' attr(t1, "longNames") #' ten.formula <- function(x, ..., abbNames=TRUE, contrasts.arg=NULL){ partMatch(env1=environment(), ...) stopifnot(inherits(x, "formula")) ## based on code from stats::lm() mc1 <- match.call() names(mc1)[names(mc1)=="x"] <- "formula" mc1 <- mc1[c(1L, match(c("formula", "data"), names(mc1), 0L))] call1 <- mc1 mc1$drop.unused.levels <- TRUE mc1[[1L]] <- as.name("model.frame") mf1 <- eval(mc1, parent.frame()) ten(x=mf1, abbNames=abbNames, contrasts.arg=contrasts.arg, call=call1) } #' #' @rdname ten #' @method ten data.frame #' @aliases ten.data.frame #' @export #' @examples #' ## not typically intended to be called directly #' mf1 <- stats::model.frame(Surv(time, status==2) ~ age + strata(edema) + strata(spiders), pbc, #' drop.unused.levels = TRUE) #' ten(mf1) #' ten.data.frame <- function(x, ..., abbNames=TRUE, contrasts.arg=NULL, call=NULL){ stopifnot(survival::is.Surv(x[[1]])) stopifnot(attr(x[[1]], "type") == "right") partMatch(env1=environment(), ...) 
if (stats::is.empty.model(stats::terms(x))) { ## extract Surv object return(ten(x[[1]], call=call)) } ## names of strata xNS1 <- grep("^strata\\(.*\\)", names(x)) ## data.table from x xDT <- data.table::as.data.table( stats::model.matrix(stats::terms(x), x, contrasts.arg=contrasts.arg)) xDT[, c("time", "status") := lapply(1:2L, function(i) stats::model.response(x)[, i])] ## names of clusters xNC1 <- grep("^cluster\\(.*\\)", names(x)) if (any(xNC1)) { ## drop cluster terms data.table::set(xDT, j=grep("^cluster\\(.*\\)", names(xDT)), value=NULL) } if (any(xNS1)) { ## strata numbers xDTnS1 <- grep("^strata\\(.*\\)", names(xDT)) # xDTSn1 <- grep("^strata\\(.*\\)", names(xDT), value=TRUE) ## separate table only for strata xDTstr <- xDT[, .SD, .SDcols=xDTnS1] data.table::set(xDT, j=xDTnS1, value=NULL) setnames(xDTstr, sub("^strata\\(.+\\)", "", names(xDTstr))) c1 <- colnames(xDTstr) xDTstr[, (c1) := lapply(.SD, as.logical), .SDcols=seq.int(ncol(xDTstr))] collapseDT(xDTstr, except=NA, nName="strat") collapseDT(xDT, except=c("time", "status"), nName="cg") xDT[, "cg" := as.factor(cg)] xDT[, "strat" := xDTstr[, factor(strat)]] ## columns which are not strata xDTnotS1 <- seq.int(names(xDT))[!(grepl("^strat", names(xDT)))] res1 <- lapply(xDT[, seq.int(levels(strat))], function(i) { data.table::copy(xDT[as.integer(strat)==i, .SD, .SDcols=xDTnotS1])}) ## drop unused levels res1 <- lapply(res1, function(i) { i[, "cg" := factor(cg)]}) res1 <- lapply(res1, ten, abbNames=abbNames) ln1 <- data.table::data.table( "id" = xDT[, seq.int(levels(strat))], "longName" = xDT[, levels(strat)]) data.table::setattr(res1, name="longNames", value=ln1) if (abbNames) { names(res1) <- ln1[, id] } else { names(res1) <- xDT[, levels(strat)] } data.table::setattr(res1, name="class", value=c("stratTen", class(res1))) data.table::setattr(res1, name="call", value=call) data.table::setattr(res1, name="abbNames", value=abbNames) return(res1) } if (!any(xNS1)) { mm1 <- data.table::copy(xDT) collapseDT(xDT, except=c("time", "status"), nName="cg") ten(x=xDT, abbNames=abbNames, mm=mm1, call=call) } } #' @rdname ten #' @method ten data.table #' @aliases ten.data.table #' @export #' ten.data.table <- function(x, ..., abbNames=TRUE, mm=NULL, call=NULL){ stopifnot(all(names(x) %in% c("time", "status", "cg"))) partMatch(env1=environment(), ...) data.table::setkey(x, time, cg) ## number at risk x[, "n" := rev(seq.int(nrow(x)))] ## number at risk per covariate group x[, "ncg" := rev(seq.int(length(n))), by=cg] ## drop unused levels x[, "cg" := as.factor(cg)[, drop=TRUE]] ## long names ln1 <- data.table::data.table( "id" = x[, seq_along(levels(cg))], "longName" = x[, levels(cg)]) if(abbNames) x[, "cg" := as.integer(cg)] data.table::setnames(x, c("t", colnames(x)[-1])) x[, "e" := sum(status), by=list(t, cg)] x[, "ncg" := max(ncg), by=list(t, cg)] x[, "n" := max(n), by=list(t)] x[, "status" := NULL] x <- x[!(duplicated(x)), ] data.table::setcolorder(x, c("t", "e", "n", "cg", "ncg")) data.table::setkey(x, "cg") data.table::setattr(x, "class", c("ten", class(x))) setAttr(x, "shape"="long", "abbNames"=abbNames, "longNames"=ln1, "ncg"=nrow(ln1), "call"=call, "mm"=mm) return(x) } #' #' @rdname ten #' @method ten ten #' @aliases ten.ten #' @export #' ten.ten <- function(x, ..., abbNames=NULL, call=NULL){ partMatch(env1=environment(), ...) 
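  ## Applying ten() to a 'ten' object toggles its shape:
  ## a "long" object is returned in "wide" form (via asWide),
  ## and a "wide" object in "long" form (via asLong)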
if (attr(x, "shape")=="long") { return(asWide(x)) } else { return(asLong(x)) } } ### helper functions ## ## partial matching with an ellipsis ## from environment env1 partMatch <- function(env1=NULL, ...){ stopifnot(is.environment(env1)) l1 <- as.list(substitute(list(...)))[-1L] n1 <- c("sh", "abb", "con") s1 <- sapply(n1, pmatch, names(l1)) n2 <- c("shape", "abbNames", "contrasts.arg") names(s1) <- n2 s1 <- s1[!is.na(s1)] for (i in seq_along(s1)){ names(l1)[s1[i]] <- names(s1[i]) } l1 <- l1[names(l1) %in% n2] for(i in seq_along(l1)){ ## this isn't v. pretty... if (is.character(l1[[i]])){ p1 <- paste0("env1$", names(l1)[i], " <- \"", l1[[i]], "\"") } else { p1 <- paste0("env1$", names(l1)[i], " <- ", l1[[i]]) } eval(parse(text=p1)) } } ## collapse/ paste a data table ## x = data.table ## except = columns to remain unmodified ## nName = new name for collapsed column ## returns the modified data.table collapseDT <- function(x, except=c("time", "status"), nName="cg"){ stopifnot(inherits(x, "data.table")) if (ncol(x)==1) { data.table::setnames(x, nName) return(invisible()) } ## names in 'except'? toCollapse1 <- names(x)[!names(x) %in% except] x[, (nName) := paste(toCollapse1, .SD, sep="=", collapse=", "), .SDcols=toCollapse1, by=seq.int(nrow(x))] toRemove1 <- which(names(x) %in% toCollapse1) if (length(toRemove1)) { data.table::set(x, j=toRemove1, value=NULL) } return(invisible()) } ## set attributes for a ten object (a data.table) setAttr <- function(x, ...) UseMethod("setAttr") setAttr.ten <- function(x, ..., shape=NULL, abbNames=NULL, longNames=NULL, ncg=NULL, call=NULL, mm=NULL){ stopifnot(inherits(x, "ten")) ## can't use .Internal in a package... ## l1 <- .Internal(ls(envir=environment(), all.names=TRUE)) l1 <- ls() l1 <- l1[!grepl("x", l1)] for(i in seq_along(l1)){ data.table::setattr(x, name=l1[i], value=eval(as.name(l1[i]))) } return(x) } ## for R CMD check n <- status <- strat <- time <- NULL survMisc/R/asWide.R0000744000176200001440000001141213317031760013626 0ustar liggesusers#' @name asWide #' @title Convert an object to "wide" or "long" form. #' @description Convert an object to "wide" or "long" form. #' @include ten.R #' @include nc.R #' @include print.R #' #' @param x An object of class \code{ten} or \code{pred}. #' @param ... Additional arguments (not implemented). #' #' @return #' A new \code{data.table} is returned, #' with the data in 'wide' or 'long' format. #' \cr #' There is one row for each time point. #' \cr #' For a \code{ten} object generated from a \code{numeric} or \code{Surv} object, #' this has columns: #' \item{t}{\bold{t}ime.} #' \item{e}{number of \bold{e}vents.} #' \item{n}{\bold{n}umber at risk.} #' If derived from a \code{survfit}, \code{coxph} or \code{formula} object, #' there are additional columns for \code{e} and \code{n} #' for \emph{each} covariate group. #' #' @note #' Most methods for \code{ten} objects are designed for the 'long' form. #' #' @rdname asWide #' @export #' asWide <- function(x, ...) 
UseMethod("asWide") #' #' @rdname asWide #' @method asWide ten #' @aliases asWide.ten #' @export #' @examples #' \dontrun{ #' data("bmt", package="KMsurv") #' require("survival") #' t1 <- ten(c1 <- coxph(Surv(t2, d3) ~ z3*z10, data=bmt)) #' asWide(t1) #' } asWide.ten <- function(x, ...){ data.table::setkey(x, t) t1 <- data.table::data.table("t" = x[, sort(unique(t))]) if (attr(x, "abbNames")) { na1 <- attr(x, "longNames")[, id] abbFn <- identity } else { na1 <- attr(x, "longNames")[, longName] abbFn <- as.integer } cg1 <- seq.int(attr(x, "ncg")) res1 <- lapply(cg1, FUN=function(cg1){ r1 <- data.table::setkey(x[abbFn(cg)==cg1, ncg, by=t], t) r1 <- r1[t1, roll=-Inf] data.table::set(r1, i=which(is.na(r1$ncg)), j="ncg", value=0) r1[, ncg] }) res1 <- data.table::as.data.table(res1) ## names for 'n' and 'e' columns nne1 <- outer(c("n_", "e_"), na1, paste, sep="") data.table::setnames(res1, nne1[1, ]) res2 <- lapply(cg1, FUN=function(cg1){ r1 <- data.table::setkey(x[abbFn(cg)==cg1, e, by=t], t) r1 <- r1[t1] data.table::set(r1, i=which(is.na(r1$e)), j="e", value=0) r1[, e] }) res1[, (nne1[2, ]) := res2] ## make no. at risk (total) per time period res1[, "n" := rowSums(.SD), .SDcols = grep("n_", names(res1))] ## total events per time period res1[, "e" := rowSums(.SD), .SDcols = grep("e_", names(res1))] ## now add time res1[, "t" := t1] data.table::setcolorder(res1, c("t", "n", "e", as.vector(nne1))) data.table::setattr(res1, "class", c("ten", class(res1))) setAttr(res1, shape="wide", abbNames=attr(x, "abbNames"), longNames=attr(x, "longNames"), ncg=attr(x, "ncg"), call=attr(x, "call"), mm=attr(x, "mm")) return(res1) } #' #' @rdname asWide #' @export #' asLong <- function(x, ...) UseMethod("asLong") #' #' @rdname asWide #' @method asLong ten #' @aliases asLong.ten #' @export #' @examples #' \dontrun{ #' asLong(asWide(t1)) #' stopifnot(asLong(asWide(t1)) == ten(ten(t1))) #' } asLong.ten <- function(x, ...){ stopifnot(inherits(x, "ten")) ## add no. censored nc(x) ## n at risk + no. events n_ <- grep("n_", names(x)) e_ <- grep("e_", names(x)) ## names of covariate groups n1 <- sub("n_", "", grep("n_", names(x), value=TRUE)) l1 <- vector(mode="list", length=length(n1)) for (i in seq_along(n1)){ e_n <- grep(paste0("e_", n1[i]), names(x)) c_n <- grep(paste0("c_", n1[i]), names(x)) ## which times have at least one event ## or one censored observation l1[[i]] <- as.logical(rowSums(x[, .SD, .SDcols=c(e_n, c_n)])) } ### new structure here res1 <- data.table::data.table( t=unlist(lapply(l1, function(i) x[i, t])), n=unlist(lapply(l1, function(i) x[i, n])), e=unlist(sapply(seq_along(n1), function(i) x[, .SD, .SDcols=e_[i]][l1[[i]], ])), cg=as.integer(unlist(mapply(rep, n1, sapply(l1, sum)))), ncg=unlist(sapply(seq_along(n1), function(i) x[, .SD, .SDcols=n_[i]][l1[[i]], ]))) data.table::setkey(res1, t) data.table::setattr(res1, "class", c("ten", class(res1))) setAttr(res1, "shape"="long", "abbNames"=attr(x, "abbNames"), "longNames"=attr(x, "longNames"), "ncg"=attr(x, "ncg"), "call"=attr(x, "call"), "mm"=attr(x, "mm")) return(res1) } ## for R CMD check .SD <- cg <- cg_ <- P <- eMP <- NULL abbNames <- longName <- id <- NULL n <- e <- ncg <- NULL survMisc/R/gastric.R0000744000176200001440000000264513317043741014060 0ustar liggesusers#' @name gastric #' @docType data #' @title gastric cancer trial data #' @description gastric cancer trial data #' #' @format A \code{data.frame} with \eqn{90} rows (observations) and \eqn{3} columns (variables). #' @details Data from a trial of locally unresectable gastic cancer. 
#' \cr #' Patients (\eqn{n=45} in each group) were randomized to one of two groups: #' chemotherapy vs. chemotherapy + radiotherapy. #' \cr #' Columns are: #' \describe{ #' \item{time}{Time, in days} #' \item{event}{Death} #' \item{group}{Treatment #' \describe{ #' \item{0}{chemotherapy} #' \item{1}{chemotherapy + radiotherapy} #' } #' } #' } #' #' @seealso Examples in \code{\link{comp}} #' #' @source Klein J, Moeschberger. Survival Analysis, 2nd edition. Springer 2003. #' Example 7.9, pg 224. #' @references Gastrointestinal Tumor Study Group, 1982. #' A comparison of combination chemotherapy and #' combined modality therapy for locally advanced gastric carcinoma. #' \emph{Cancer}. \bold{49}(9):1771-7. \cr #' \samp{dx.doi.org/10.1002/1097-0142(19820501)49:9<1771::AID-CNCR2820490907>3.0.CO;2-M} Wiley (free) #' @references Stablein DM, Koutrouvelis IA, 1985. #' A two-sample test sensitive to crossing hazards in uncensored and singly censored data. #' \emph{Biometrics}. \bold{41}(3):643-52. \cr #' \samp{dx.doi.org/10.2307/2531284} JSTOR #' #' @examples #' data("gastric", package="survMisc", verbose=TRUE) #' head(gastric) #' NULL survMisc/R/ci.R0000744000176200001440000003354113317043530013012 0ustar liggesusers#' @name ci #' @title \bold{c}onfidence \bold{i}ntervals for survival curves. #' @description \bold{c}onfidence \bold{i}ntervals for survival curves. #' #' @include ten.R #' @include sf.R #' #' @param x An object of class \code{ten}. #' @param CI Confidence intervals. As the function currently relies on lookup #' tables, only 90\%, 95\% (the default) and 99\% are supported. #' @param how Method to use for confidence interval. #' \cr #' \code{point} (the default) uses pointwise confidence intervals. #' \cr #' The alternatives use confidence \emph{bands} (see details). #' @param trans Transformation to use. #' \cr #' The default is \code{trans="log"}. #' \cr #' Also supported are linear and arcsine-square root transformations. #' @param tL \bold{L}ower time point. Used in construction of confidence bands. #' @param tU \bold{U}pper time point. Used in construction of confidence bands. #' @inheritParams sf.ten #' #' @return The \code{ten} object is modified in place by the addition of a #' \code{data.table} as an \code{attribute}. #' \cr #' \code{attr(x, "ci")} is printed. #' \cr #' This is a \code{survfit} object. The \code{upper} and \code{lower} #' elements in the list (representing confidence intervals) #' are modified from the original. #' \cr #' Other elements will also be shortened if the time range under consideration has been #' reduced from the original. #' #' @details #' In the equations below #' \deqn{\sigma^2_s(t) = \frac{\hat{V}[\hat{S}(t)]}{\hat{S}^2(t)} }{ #' sigma^2(t) = V[S(t)]/[S(t)]^2} #' Where \eqn{\hat{S}(t) }{S(t)} is the Kaplan-Meier survival estimate and #' \eqn{\hat{V}[\hat{S}(t)]}{V[S(t)]} is Greenwood's estimate of its #' variance. #' \cr #' The \bold{pointwise} confidence intervals are valid for \emph{individual} #' times, e.g. \code{median} and \code{\link{quantile}} values. #' When plotted and joined for multiple points they tend to #' be narrower than the \emph{bands} described below. #' Thus they tend to exaggerate the impression of certainty #' when used to plot confidence intervals for a time range. #' They should not be interpreted as giving the intervals #' within which the \emph{entire} survival function lies.
#' \cr #' For a given significance level \eqn{\alpha}{alpha}, #' they are calculated using the standard normal distribution \eqn{Z} #' as follows: #' #' \itemize{ #' #' \item linear #' \deqn{\hat{S}(t) \pm Z_{1- \alpha} \sigma (t) \hat{S}(t)}{ #' S(t)+- Z(1-alpha) sigma(t) S(t)} #' #' \item log transform #' \deqn{ [ \hat{S}(t)^{\frac{1}{\theta}}, \hat{S}(t)^{\theta} ] }{ #' [S(t)^(1/theta), S(t)^theta]} #' where #' \deqn{ \theta = \exp{ \frac{Z_{1- \alpha} \sigma (t)}{ \log{\hat{S}(t)}}} }{ #' theta = exp ( Z(1-alpha)sigma(t) / log(S(t)) )} #' #' \item arcsine-square root transform #' \cr #' upper: #' \cr #' \deqn{ \sin^2(\max[0, \arcsin{\sqrt{\hat{S}(t)}} - #' \frac{Z_{1- \alpha}\sigma(t)}{2} #' \sqrt{ \frac{\hat{S}(t)}{1-\hat{S}(t)}}]) }{ #' sin^2(max[0, arcsin S(t)^0.5 - Z(1-alpha)sigma(t)/2 (S(t)/1-S(t))^0.5])} #' lower: #' \deqn{ \sin^2(\min[\frac{\pi}{2}, \arcsin{\sqrt{\hat{S}(t)}} + #' \frac{Z_{1- \alpha}\sigma(t)}{2} #' \sqrt{ \frac{\hat{S}(t)}{1-\hat{S}(t)}}]) }{ #' sin^2(min[pi/2, arcsin S(t)^0.5 + Z(1-alpha)sigma(t)/2 (S(t)/1-S(t))^0.5])} #' #' } #' #' Confidence \bold{bands} give the values within which the survival function #' falls within a \emph{range} of timepoints. #' \cr \cr #' The time range under consideration is given so that #' \eqn{t_l \geq t_{min}}{tL >= min(t)}, the minimum or lowest event time and #' \eqn{t_u \leq t_{max}}{tU <= max(t)}, the maximum or largest event time. #' \cr #' For a sample size \eqn{n} and \eqn{0 < a_l < a_u <1}: #' \deqn{a_l = \frac{n\sigma^2_s(t_l)}{1+n\sigma^2_s(t_l)}}{ #' a_l = n*sigma^2(t_l) / [1+n*sigma^2(t_l)]} #' \deqn{a_u = \frac{n\sigma^2_s(t_u)}{1+n\sigma^2_s(t_u)}}{ #' a_u = n*sigma^2(t_u) / [1+n*sigma^2(t_u)]} #' #' For the \bold{Nair} or \bold{equal precision} (\bold{EP}) confidence bands, #' we begin by obtaining the relevant #' confidence coefficient \eqn{c_{\alpha}}{c[alpha]}. This is obtained from #' the upper \eqn{\alpha}{a}-th fractile of the random variable #' \deqn{U = \sup{|W^o(x)\sqrt{[x(1-x)]}|, \quad a_l \leq x \leq a_u}}{ #' U = sup{ |W(x)[x(1-x)]^0.5|, a_l <= x <= a_u} } #' Where \eqn{W^o}{W} is a standard Brownian bridge. #' \cr #' The intervals are: #' #' \itemize{ #' \item linear #' \deqn{\hat{S}(t) \pm c_{\alpha} \sigma_s(t) \hat{S}(t)}{ #' S(t)+- c[alpha] sigma(t) S(t)} #' #' \item log transform (the default) #' \cr #' This uses \eqn{\theta}{theta} as below: #' \deqn{\theta = \exp{ \frac{c_{\alpha} \sigma_s(t)}{ \log{\hat{S}(t)}}}}{ #' theta = exp (c[alpha] * sigma(t) / log(S(t)))} #' And is given by: #' \deqn{[\hat{S}(t)^{\frac{1}{\theta}}, \hat{S}(t)^{\theta}]}{ #' [S(t)^(1/theta), S(t)^theta]} #' #' \item arcsine-square root transform #' \cr #' upper: #' \deqn{\sin^2(\max[0, \arcsin{\sqrt{\hat{S}(t)}} #' - \frac{c_{\alpha}\sigma_s(t)}{2} #' \sqrt{ \frac{\hat{S}(t)}{1-\hat{S}(t)}}])}{ #' sin^2(max[0, arcsin S(t)^0.5 - c[alpha]*sigma(t)/2 (S(t)/1-S(t))^0.5])} #' lower: #' \deqn{\sin^2(\min[\frac{\pi}{2}, \arcsin{\sqrt{\hat{S}(t)}} #' + \frac{c_{\alpha}\sigma_s(t)}{2} #' \sqrt{ \frac{\hat{S}(t)}{1-\hat{S}(t)}}]) }{ #' sin^2(min[pi/2, arcsin S(t)^0.5 - c[alpha]*sigma(t)/2 (S(t)/1-S(t))^0.5])} #' #' } #' #' For the \bold{Hall-Wellner} bands the confidence coefficient #' \eqn{k_{\alpha}}{k[alpha]} #' is obtained from the upper \eqn{\alpha}{a}-th fractile of a #' Brownian bridge. #' \cr #' In this case \eqn{t_l} can be \eqn{=0}. 
#' \cr #' The intervals are: #' #' \itemize{ #' #' \item linear #' \deqn{\hat{S}(t) \pm #' k_{\alpha} \frac{1+n\sigma^2_s(t)}{\sqrt{n}} \hat{S}(t)}{ #' S(t)+- k[alpha] [1+n*sigma^2(t)]*S(t) / n^0.5 } #' #' \item log transform #' \deqn{[\hat{S}(t)^{\frac{1}{\theta}}, \hat{S}(t)^{\theta}]}{ #' [S(t)^(1/theta), S(t)^theta]} #' where #' \deqn{\theta = \exp{ \frac{k_{\alpha}[1+n\sigma^2_s(t)]}{ #' \sqrt{n}\log{\hat{S}(t)}}} }{ #' theta = exp(k[alpha] * [1 + n * sigma^2(t)] / n^0.5 * log(S(t)))} #' #' \item arcsine-square root transform #' \cr #' upper: #' \deqn{ \sin^2(\max[0, \arcsin{\sqrt{\hat{S}(t)}} #' - \frac{k_{\alpha}[1+n\sigma_s(t)]}{2\sqrt{n}} #' \sqrt{ \frac{\hat{S}(t)}{1-\hat{S}(t)}}]) }{ #' sin^2( max[0, arcsin S(t)^0.5 - k[alpha]*[1+n*sigma^2(t)]/(2*n^0.5) (S(t)/1-S(t))^0.5])} #' lower: #' \deqn{ \sin^2(\min[\frac{\pi}{2}, \arcsin{\sqrt{\hat{S}(t)}} #' + \frac{k_{\alpha}[1+n\sigma^2_s(t)]}{2\sqrt{n}} #' \sqrt{ \frac{\hat{S}(t)}{1-\hat{S}(t)}}]) }{ #' sin^2( min[pi/2, arcsin S(t)^0.5 - k[alpha]*[1+n*sigma^2(t)]/(2*n^0.5) (S(t)/1-S(t))^0.5])} #' #' } #' #' @source The function is loosely based on \code{km.ci::km.ci}. #' #' @note #' \itemize{ #' \item For the Nair and Hall-Wellner bands, the function currently #' relies on the lookup tables in \code{package:km.ci}. #' \item Generally, the arcsin-square root transform has the best coverage properties. #' \item All bands have good coverage properties for samples as small as \eqn{n=20}, #' except for the \bold{Nair} / \bold{EP} bands with a linear transformation, #' which perform poorly when \eqn{n < 200}. #' } #' #' @keywords survival #' #' @references #' Nair V, 1984. #' Confidence bands for survival functions with censored data: a comparative study. #' \emph{Technometrics}. \bold{26}(3):265-75. #' \samp{http://www.jstor.org/stable/1267553} JSTOR #' @references #' Hall WJ, Wellner JA, 1980. #' Confidence bands for a survival curve from censored data. #' \emph{Biometrika}. \bold{67}(1):133-43. #' \samp{http://www.jstor.org/stable/2335326} JSTOR #' #' @seealso \code{\link{sf}} #' @seealso \code{\link{quantile}} #' #' @rdname ci #' @export #' ci <- function(x, ...) UseMethod("ci") #' #' @rdname ci #' @method ci ten #' @aliases ci.ten #' @export #' @examples #' ## K&M 2nd ed. Section 4.3. Example 4.2, pg 105. #' data("bmt", package="KMsurv") #' b1 <- bmt[bmt$group==1, ] # ALL patients #' ## K&M 2nd ed. Section 4.4. Example 4.2 (cont.), pg 111. #' ## patients with ALL #' t1 <- ten(Surv(t2, d3) ~ 1, data=bmt[bmt$group==1, ]) #' ci(t1, how="nair", trans="lin", tL=100, tU=600) #' ## Table 4.5, pg. 111. #' lapply(list("lin", "log", "asi"), #' function(x) ci(t1, how="nair", trans=x, tL=100, tU=600)) #' ## Table 4.6, pg. 111. 
#' lapply(list("lin", "log", "asi"), #' function(x) ci(t1, how="hall", trans=x, tL=100, tU=600)) #' t1 <- ten(Surv(t2, d3) ~ group, data=bmt) #' ci(t1, CI="0.95", how="nair", trans="lin", tL=100, tU=600) #' ci.ten <- function(x, ..., CI=c("0.95", "0.9", "0.99"), how=c("point", "nair", "hall"), trans=c("log", "lin", "asi"), tL=NULL, tU=NULL, reCalc=FALSE){ stopifnot(inherits(x, "ten")) if (!reCalc & !is.null(attr(x, "ci"))) return (attr(x, "ci")) ### trans <- match.arg(trans) CI <- 100 * as.numeric(match.arg(CI)) how <- match.arg(how) ### sf(x, SCV=TRUE) if (is.null(tL)) tL <- min(x[, min(t), by=cg][, V1]) if (is.null(tU)) tU <- max(x[, max(t), by=cg][, V1]) s1 <- data.table::copy(attr(x, "sf")[t >= tL & t <= tU, ]) if (!"cg" %in% names(s1)) s1[, "cg" := 1] if (attr(x, "ncg") >= 1) { n1 <- x[, max(ncg), by=cg] } else { n1 <- data.table::data.table( cg=1, V1=x[, max(n)]) } ### get reference value if (how=="point") { ## get Z (quantile from normal distribution) alpha <- (100 - CI) / 100 ref1 <- stats::qnorm(1 - alpha / 2) } if (how=="nair" | how=="hall") { A1 <- mapply(FUN=function(cg2, n2) data.table::rbindlist( list( utils::head(s1[cg==cg2, ], 1), utils::tail(s1[cg==cg2, ], 1)))[, genA(SCV=SCV, n=n2)], n1[, cg], n1[, V1]) ## get lookup table for confidence coefficient d1s <- paste0("critical.value.", how, ".", CI) do.call(utils::data, list(eval(substitute(d1s)), package="km.ci")) d1 <- NULL do.call(assign, list("d1", eval(parse(text=d1s)))) ## label lookup table if (how=="nair") { rownames(d1) <- seq(0.1, 0.98, by=0.02) colnames(d1) <- seq(0.02, 0.6, by=0.02) } else { rownames(d1) <- seq(0.1, 1.0, by=0.02) colnames(d1) <- seq(0, 0.6, by=0.02) } ## if a_L and/or a_U are outside the range of lookup table. err1 <- " Confidence coefficients are outside the range available in the lookup table. Suggest try narrower range for time i.e. increase lower time (tL) and/or decrease upper time (tU). 
" ref1 <- mapply(function(i, j) tryCatch(d1[i, j], error=function(e) NaN), i=as.character(A1[2, ]), j=as.character(A1[1, ])) if (any(is.nan(ref1))) message(err1) } ## transform function tf1 <- switch(trans, lin=LIN, log=LOG, asi=ASI) s1[, "lower" := with( list(bound=-1, how=how), unlist( mapply( FUN=function(cg2, ref2, n2) s1[cg==cg2, tf1(how=how, S=S, SCV=SCV, ref=ref2, n=n2, bound=bound)], ref2=ref1, cg2=n1[, cg], n2=n1[, V1])))] data.table::set(s1, i=s1[, which(lower < 0)], j="lower", value=0) s1[, "upper" := with( list(bound=1, how=how), unlist( mapply( FUN=function(cg2, ref2, n2) s1[cg==cg2, tf1(how=how, S=S, SCV=SCV, ref=ref2, n=n2, bound=bound)], cg2=n1[, cg], ref2=ref1, n2=n1[, V1])))] data.table::set(s1, i=s1[, which(upper > 1)], j="upper", value=1) data.table::setattr(s1, "CI", CI) data.table::setattr(s1, "how", how) data.table::setattr(s1, "trans", trans) data.table::setattr(s1, "class", c("tenAttr", class(s1))) data.table::setattr(x, "ci", s1) return(attr(x, "ci")) } #' #' @rdname ci #' @method ci stratTen #' @aliases ci.stratTen #' @export #' @examples #' ## stratified model #' data("pbc", package="survival") #' t1 <- ten(coxph(Surv(time, status==2) ~ log(bili) + age + strata(edema), data=pbc)) #' ci(t1) #' ci.stratTen <- function(x, ..., CI=c("0.95", "0.9", "0.99"), how=c("point", "nair", "hall"), trans=c("log", "lin", "asi"), tL=NULL, tU=NULL) { lapply(x, ci, CI=CI, how=how, trans=trans, tL=tL, tU=tU) lapply(x, attr, "ci") } ### Helper functions ## generate 'a', lower and upper genA <- function(SCV, n) round((SCV * n) / (1 + SCV * n), 1) ## linear transform LIN <- function(how, S, SCV, ref=NULL, n=NULL, bound=NULL){ if (how=="hall") { S + sign(bound) * S * (ref * (1 + SCV * n)) / sqrt(n) } else { S + sign(bound) * ref * S * sqrt(SCV) } } ## log transform LOG <- function(how, S, SCV, ref=NULL, n=NULL, bound=NULL){ if (how=="hall"){ theta <- exp(ref * (1 + SCV * n) / (log(S) * sqrt(n))) } else { theta <- exp(ref * sqrt(SCV) / log(S)) } S^(theta^sign(bound)) } ## acrsin-sqrt transform ASI <- function(how, S, SCV, ref=NULL, n=NULL, bound=NULL){ if (how=="hall") { sin( asin(sqrt(S)) + sign(bound) * (ref / 2) * (1 + SCV * n) / sqrt(n) * sqrt(S / (1 - S)) )^2 } else { sin( asin(sqrt(S)) + sign(bound) * (ref / 2) * sqrt(SCV) * sqrt(S / (1 - S)) )^2 } } ## R CMD check V1 <- SCV <- NULL survMisc/R/nc.R0000744000176200001440000000452313317032230013010 0ustar liggesusers#' @name nc #' @title Add \bold{n}umber \bold{c}ensored. #' @description Add \bold{n}umber \bold{c}ensored. #' #' @include ten.R #' @include print.R #' #' @param x An object of class \code{ten} or \code{stratTen}. #' @inheritParams sf.ten #' #' @return #' The original object, with new column(s) added indicating the #' number censored at each time point, depending on \code{attr(x, "shape")}: #' \item{"long"}{the new column, \code{c}, gives #' the number censored at each timepoint, by covariate group.} #' \item{"wide"}{new columns, beginning with \code{c_}, give #' the number censored at each timepoint, by covariate group. #' There is an additional \code{nc} column giving #' the \emph{total} number censored at each timepoint.} #' A \code{stratTen} object has each \code{ten} element in the #' \code{list} modified as above. #' #' @rdname nc #' @export #' nc <- function(x, ...) 
UseMethod("nc") #' #' @rdname nc #' @method nc ten #' @aliases nc.ten #' @export #' @examples #' data("kidney", package="KMsurv") #' t1 <- ten(survfit(Surv(time, delta) ~ type, data=kidney)) #' nc(t1) #' nc(asWide(t1)) #' nc.ten <- function(x, ...){ if (attr(x, "shape")=="long") { x[, "nc" := (c(-diff(ncg), tail(ncg, 1))-e), by=cg] } else { n_ <- grep("n_", names(x)) e_ <- grep("e_", names(x)) ## no. at risk - no. events nMe1 <- x[, .SD, .SDcols=n_] - x[, .SD, .SDcols=e_] ## add no. censored c1 <- nMe1[seq.int(nrow(x) - 1), ] - x[seq.int(2, nrow(x)), .SD, .SDcols=n_] c1 <- data.table::rbindlist(list( c1, x[nrow(x), .SD, .SDcols=n_])) ## names for censored columns c_ <- grep("e_", names(x), value=TRUE) substr(c_, 1, 1) <- "c" x[, (c_) := c1] ## total no. censored per time period x[, "c" := rowSums(.SD), .SDcols = grep("c_", names(x))] ## reorder data.table::setcolorder(x, c("t", "n", "e", "c", colnames(x)[4L:(ncol(x) - 1L)])) } return(x) } #' @rdname nc #' @method nc stratTen #' @aliases nc.stratTen #' @export #' @examples #' ## stratified model #' data("pbc", package="survival") #' t1 <- ten(coxph(Surv(time, status==2) ~ log(bili) + age + strata(edema), data=pbc)) #' nc(t1) #' nc.stratTen <- function(x, ...){ lapply(x, nc) return(x) } survMisc/MD50000644000176200001440000000424514223606332012403 0ustar liggesusers926818c70e7bc55f14b26b3df677d6a4 *DESCRIPTION e8bcb39256fd25e426be13cc70fba3b7 *NAMESPACE bf2e08485d4bb3c752b28f262ddcc483 *R/COV.R 417958b108c9c53977e64bc1cc3c7530 *R/asWide.R d18091669d4793eda4255a8a12af9f1d *R/autoplotTAP.R df222466c8de362ae8c5e8796947f6e0 *R/autoplotTen.R 374f0cdbde5ea82b9943dd316c036d38 *R/ci.R 55edb3a22fe6a336bafe82be434f3dae *R/comp.R f5b594ab6d23b193c4cd22dc76e424f5 *R/cutp.R 20425c00a56986e223962f76fc4ef85b *R/gastric.R 1bb6dce03da362d3b192ac120efd0267 *R/gof.R 7910d622bd88e43a07994bf06fd6319f *R/nc.R 44bcc72eba1feb5178aab2f2f71cc44d *R/onAttach.R 2d25c82a4a303c8e251c1faae660070e *R/predict.R 52c103b38fe2d6c497ba74827bd579de *R/print.R 5d12671f69a55a944a6b8a20cb166f9f *R/profLik.R cefedd55fa8dbfc0504229df926b938f *R/rsq.R b8237a658e38532b4d57c0a3b50ce5f0 *R/sf.R c927a0e3eda445793ad0c1bd7ac88aa9 *R/survMisc_package.R 110ec2426aaf09a6762177377aa3f96d *R/ten.R 0cebce81d1df1e01ba72351f41b38319 *R/xtable.R 439bf689fa27cf9affd0335332142165 *build/partial.rdb aa8da97747bb75cec5fe4fc8995cce6f *build/vignette.rds 4f1626f137174f3193d86c26a2bb0a15 *data/gastric.rda 693c5d3d4e78444d3060912ec3644cab *inst/NEWS.Rd b95ec240ffbd73d2c9ebafffcd0d7dd5 *inst/doc/plots.R 1867fb12511aeb329f23259cf10d82c4 *inst/doc/plots.Rnw de301af9716d4edbfae56295fc283db5 *inst/doc/plots.pdf 7463d9449e48ee77a93984278e932cb7 *man/COV.Rd d6be9a4bdca6bf1488e24792044c652c *man/asWide.Rd 2cda0e3e299e63e47abf47bfc185b5ce *man/autoplotTAP.Rd 7607dc6570e02bed2c36daa108b49634 *man/autoplotTen.Rd 6bd3bf31a50ca25210c391a754e01f83 *man/ci.Rd 25456f6d47e7c28fe000904de40eedb5 *man/comp.Rd b7769d28b2a4b81e00dcd03dc82ffee5 *man/cutp.Rd 94242574d3ad2e2c9f03ca3d5e2e1d3b *man/gastric.Rd d8923bff76aa0686307e35996c093ae1 *man/gof.Rd 62c4a240886fa4a99be12d6d0e092aa3 *man/nc.Rd 44ec66f70150876b1c25b2a908bac6e2 *man/predict.Rd 3c0f3fca78f13a115977e8a40835117a *man/print.Rd 4f6f17cf3e05e31bc119f68ab9728092 *man/profLik.Rd 621929a968d55656e041777fbfc103bd *man/rsq.Rd afd7b6ee6a7cd35cd809fe9fb29262e2 *man/sf.Rd 337118fc35be7ef1f3a62308857c11b6 *man/survMisc_package.Rd 04e93274a3bf1b56e093bd817172d86c *man/ten.Rd 8a4196aee39040dfd998190209752a2d *man/xtable.Rd 1867fb12511aeb329f23259cf10d82c4 *vignettes/plots.Rnw 
survMisc/inst/0000755000176200001440000000000014223603175013045 5ustar liggesuserssurvMisc/inst/doc/0000755000176200001440000000000013317044172013611 5ustar liggesuserssurvMisc/inst/doc/plots.pdf0000644000176200001440000024131714223603174015455 0ustar liggesusers
$X[1a=b" &dheˆG4*&5]t6{+Qmփ?A[Cc~_7?_+~ ݣb$ebmeߕu9I,f?FQQwq6yip3<(bݗY ӍЙu<ΓlYnӪPg({c9㘗_28b6aǵo[g-Y]d}{|jp&ӆpvU[ W|ڇF6TçB[N+;gmX` $JV00%1,%ww1 z&xG_(N1"ݩK:#W ZT>6l?\WsjNW,kq3s4Ӎͺ`sq[T "vw3Ѓ?J$qhC*Lv ff;M׸^+nw Mkx-#P-jOϏ;rq񣸴 E\k.t?^ zZu2sĽ!IYe;t3F<6VW~ endstream endobj 45 0 obj << /Type /XRef /Index [0 46] /Size 46 /W [1 3 1] /Root 43 0 R /Info 44 0 R /ID [<29C508C3F2CBB98CB567EDEBFE91B976> <29C508C3F2CBB98CB567EDEBFE91B976>] /Length 141 /Filter /FlateDecode >> stream x;ݐ!W<3t:3:иkPht n_ AU){ һJLD(Dr%""yl(;芞HEFTk@ޮ!yFA endstream endobj startxref 82248 %%EOF survMisc/inst/doc/plots.R0000644000176200001440000001032614223603174015077 0ustar liggesusers## ----setup, include=FALSE--------------------------------- library("knitr") ### Set global chunk options opts_chunk$set(eval=TRUE, ## text results echo=TRUE, results=c('markup', 'asis', 'hold', 'hide')[1], collapse=FALSE, warning=TRUE, message=TRUE, error=TRUE, split=FALSE, include=TRUE, strip.white=TRUE, ## code decoration tidy=FALSE, prompt=FALSE, comment='##', highlight=TRUE, size='normalsize', background=c('#F7F7F7', colors()[479], c(0.1, 0.2, 0.3))[1], ## cache cache=FALSE, ## plots fig.path=c('figure', 'figure/minimal-')[1], fig.keep=c('high', 'none', 'all', 'first', 'last')[1], fig.align=c('center', 'left', 'right', 'default')[1], fig.show=c('hold', 'asis', 'animate', 'hide')[1], dev=c('pdf', 'png', 'tikz')[2], fig.width=7, fig.height=7, #inches fig.env=c('figure', 'marginfigure')[1], fig.pos=c('', 'h', 't', 'b', 'p', 'H')[3]) ### Set R options options(formatR.arrow=TRUE, width=60) ## ----p2--------------------------------------------------- data("kidney", package="KMsurv") t1 <- ten(survfit(Surv(time, delta) ~ type, data=kidney)) autoplot(t1) ## ----p3--------------------------------------------------- print(autoplot(t1, type="fill", survLineSize=2, jitter="all"), tabHeight=0.35) ## ----p4--------------------------------------------------- autoplot(t1, timeTicks="months", type="CI", jitter="all", legLabs=c("surgical", "percutaneous"), title="Time to infection following catheter placement \n by type of catheter, for dialysis patients", titleSize=10, censSize=2)$plot ## ----p5--------------------------------------------------- str(a1 <- autoplot(t1), max.level=1) ## check the output is what we want a1$plot + ggplot2::scale_y_continuous(limits=c(0.8, 1), name="Survival") ## this is one simple way a1 <- autoplot(t1) suppressMessages(a1$plot <- a1$plot + ggplot2::scale_y_continuous(limits=c(0.8, 1), name="Survival")) a1 ## or we can assign them as follows a1 <- autoplot(t1) ls(a1$plot$scales$scales[[3]]$super$super) is.environment(a1$plot$scales$scales[[3]]$super$super$limits) is.null(a1$plot$scales$scales[[3]]$super$super$limits) a1$plot$scales$scales[[3]]$super$super$limits <- c(0.8, 1) a1 ## ----p6--------------------------------------------------- data("bmt", package="KMsurv") b1 <- ten(Surv(time=t2, event=d3) ~ group, data=bmt) autoplot(b1) autoplot(b1, legOrd=c(1, 3, 2)) ## ----p7--------------------------------------------------- autoplot(b1, legOrd=c(3, 2, 1), legLabs=letters[1:3]) ## ----p8--------------------------------------------------- a2 <- autoplot(b1) ## ensure this is what we want a2$plot + ggplot2::theme(legend.position=c(0.75, 0.75)) a2$plot <- a2$plot + ggplot2::theme(legend.position=c(0.75, 0.75)) a2 ## ----p9--------------------------------------------------- t2 <- ten(survfit(Surv(time=time, event=delta) ~ 1, data=kidney)) autoplot(t2, legLabs="")$plot autoplot(t2, legend=FALSE) ## 
----p10-------------------------------------------------- data("rectum.dat", package="km.ci") t3 <- ten(survfit(Surv(time, status) ~ 1, data=rectum.dat)) ## change confidence intervals to confidence bands ci(t3, how="nair", tL=1, tU=40) autoplot(t3, type="fill", alpha=0.6, legend=FALSE) ## ----p11-------------------------------------------------- ## manually changing the output t4 <- ten(survfit(Surv(time, delta) ~ type, data=kidney)) (a4 <- autoplot(t4, type="CI", alpha=0.8, survLineSize=2)$plot) ## change default colors suppressMessages(a4 + list( ggplot2::scale_color_manual(values=c("red", "blue")), ggplot2::scale_fill_manual(values=c("red", "blue")))) ## change limits of y-axis suppressMessages(a4 + ggplot2::scale_y_continuous(limits=c(0, 1))) ## ----p30-------------------------------------------------- data("pbc", package="survival") t1 <- ten(Surv(time, status==2) ~ trt + strata(edema), data=pbc, abbNames=FALSE) suppressWarnings(str(a1 <- autoplot(t1), max.level=1)) a1 ## ----p40-------------------------------------------------- data("pbc", package="survival") c1 <- survival::coxph(formula = Surv(time, status == 2) ~ age + edema + log(bili) + log(albumin) + log(protime), data = pbc) profLik(c1, col="red", devNew=FALSE) survMisc/inst/doc/plots.Rnw0000744000176200001440000001252413317012436015445 0ustar liggesusers\documentclass{article} % \VignetteIndexEntry{plots} % \VignetteEngine{knitr::knitr} \usepackage[]{graphicx} \usepackage[]{color} \usepackage{framed} %%% recommended with knitr \usepackage{alltt} \usepackage{mathtools} \usepackage[sc]{mathpazo} \usepackage{geometry} %% for large numbers of floats \usepackage{morefloats} %%% to keep floats in same section \usepackage[section]{placeins} %%% for tables > 1 page \usepackage{longtable} \usepackage{booktabs} \begin{document} \title{Examples of output from plotting functions} \author{C Dardis} \maketitle % knitr chunks <>= library("knitr") ### Set global chunk options opts_chunk$set(eval=TRUE, ## text results echo=TRUE, results=c('markup', 'asis', 'hold', 'hide')[1], collapse=FALSE, warning=TRUE, message=TRUE, error=TRUE, split=FALSE, include=TRUE, strip.white=TRUE, ## code decoration tidy=FALSE, prompt=FALSE, comment='##', highlight=TRUE, size='normalsize', background=c('#F7F7F7', colors()[479], c(0.1, 0.2, 0.3))[1], ## cache cache=FALSE, ## plots fig.path=c('figure', 'figure/minimal-')[1], fig.keep=c('high', 'none', 'all', 'first', 'last')[1], fig.align=c('center', 'left', 'right', 'default')[1], fig.show=c('hold', 'asis', 'animate', 'hide')[1], dev=c('pdf', 'png', 'tikz')[2], fig.width=7, fig.height=7, #inches fig.env=c('figure', 'marginfigure')[1], fig.pos=c('', 'h', 't', 'b', 'p', 'H')[3]) ### Set R options options(formatR.arrow=TRUE, width=60) @ Some minimal examples showing the output of plots from the examples. \section{autoplot.Ten} The 'autoplot' function is a generic S3 method used by 'ggplot2'. \subsection{Simple examples} <>= data("kidney", package="KMsurv") t1 <- ten(survfit(Surv(time, delta) ~ type, data=kidney)) autoplot(t1) @ Now, we increase the line size and use jitter to prevent overlap; we also make the relative size of the table larger. <>= print(autoplot(t1, type="fill", survLineSize=2, jitter="all"), tabHeight=0.35) @ A more customized example follows. Note that we return only the element marked 'plot' from the result (which is a list with two elements). 
<>= autoplot(t1, timeTicks="months", type="CI", jitter="all", legLabs=c("surgical", "percutaneous"), title="Time to infection following catheter placement \n by type of catheter, for dialysis patients", titleSize=10, censSize=2)$plot @ Here we assign the result in order to modify the $y$ axis. <>= str(a1 <- autoplot(t1), max.level=1) ## check the output is what we want a1$plot + ggplot2::scale_y_continuous(limits=c(0.8, 1), name="Survival") ## this is one simple way a1 <- autoplot(t1) suppressMessages(a1$plot <- a1$plot + ggplot2::scale_y_continuous(limits=c(0.8, 1), name="Survival")) a1 ## or we can assign them as follows a1 <- autoplot(t1) ls(a1$plot$scales$scales[[3]]$super$super) is.environment(a1$plot$scales$scales[[3]]$super$super$limits) is.null(a1$plot$scales$scales[[3]]$super$super$limits) a1$plot$scales$scales[[3]]$super$super$limits <- c(0.8, 1) a1 @ \subsection{Modifying the legend} Reordering the legend labels (example with 3 groups). <>= data("bmt", package="KMsurv") b1 <- ten(Surv(time=t2, event=d3) ~ group, data=bmt) autoplot(b1) autoplot(b1, legOrd=c(1, 3, 2)) @ Here we also re-label the legend. <>= autoplot(b1, legOrd=c(3, 2, 1), legLabs=letters[1:3]) @ Now, let's put the legend inside the plot itself. <>= a2 <- autoplot(b1) ## ensure this is what we want a2$plot + ggplot2::theme(legend.position=c(0.75, 0.75)) a2$plot <- a2$plot + ggplot2::theme(legend.position=c(0.75, 0.75)) a2 @ \subsection{One group only} A number of options for plotting a line with just one group. <>= t2 <- ten(survfit(Surv(time=time, event=delta) ~ 1, data=kidney)) autoplot(t2, legLabs="")$plot autoplot(t2, legend=FALSE) @ \subsection{Using confidence bands} Here we change the default pointwise confidence intervals to bands. <>= data("rectum.dat", package="km.ci") t3 <- ten(survfit(Surv(time, status) ~ 1, data=rectum.dat)) ## change confidence intervals to confidence bands ci(t3, how="nair", tL=1, tU=40) autoplot(t3, type="fill", alpha=0.6, legend=FALSE) @ \subsection{More customization} If the output of 'autoplot.ten' is assigned, it can be modified in place. The list elements are ggplot2 objects which can be altered as usual. <>= ## manually changing the output t4 <- ten(survfit(Surv(time, delta) ~ type, data=kidney)) (a4 <- autoplot(t4, type="CI", alpha=0.8, survLineSize=2)$plot) ## change default colors suppressMessages(a4 + list( ggplot2::scale_color_manual(values=c("red", "blue")), ggplot2::scale_fill_manual(values=c("red", "blue")))) ## change limits of y-axis suppressMessages(a4 + ggplot2::scale_y_continuous(limits=c(0, 1))) @ \section{autoplot.StratTen} An example of the plots from a stratified model: <>= data("pbc", package="survival") t1 <- ten(Surv(time, status==2) ~ trt + strata(edema), data=pbc, abbNames=FALSE) suppressWarnings(str(a1 <- autoplot(t1), max.level=1)) a1 @ \section{profLik} Plotting profile likelihood. <>= data("pbc", package="survival") c1 <- survival::coxph(formula = Surv(time, status == 2) ~ age + edema + log(bili) + log(albumin) + log(protime), data = pbc) profLik(c1, col="red", devNew=FALSE) @ \end{document} survMisc/inst/NEWS.Rd0000744000176200001440000001245314223600657014120 0ustar liggesusers\name{NEWS} \title{NEWS file for the survMisc package} \section{07/03/18: Version 0.5.6}{ \itemize{ \item Minor correction to reflect change in behavior of '&' operator with new release of R. \item A substantial revision to this package in still planned to occur with version 0.6.0. 
} } \section{07/03/18: Version 0.5.5}{ \itemize{ \item Removed the \code{plot} method for \code{Surv} objects, so as to avoid overwriting the new \code{plot.Surv} method in \code{package:survival}. \item A substantial revision to this package is planned in the months ahead and will occur with version 0.6.0. } } \section{11/22/16: Version 0.5.4}{ \itemize{ \item Minor update to take account of the release of \code{package:ggplot2 2.2.0}. } } \section{7/6/16: Version 0.5.3}{ \itemize{ \item Corrected error in \code{sf}. Thanks to Walton Jones for bringing this to my attention. \item Added some \code{xtable} methods applicable to survival data. } } \section{5/16/16: Version 0.5.2}{ \itemize{ \item Corrected error in \code{comp} when comparing two groups. This was present in Versions 0.5.0 and 0.5.1. Thanks to onesandzeroes and drcanak for pointing this out. \item Also re-named the vignette again. } } \section{5/11/16: Version 0.5.1}{ \itemize{ \item Have included the \code{profLik} function from version 0.4.6 again. \item Re-named the vignette. } } \section{2/16/16: Version 0.5.0}{ \itemize{ \item This is a re-write 'from the ground up'. A more 'object-oriented' approach has been adopted. The 'ten' function/object has replaced the former 'tne' method. (Motivated in part by the user who commented that "I'm afraid the code in the function `.getTne` appears somewhat cobbled together".) \item A number of functions and datasets have been left out temporarily for this release in order to focus on the 'core' package elements. These will be added back in due course, most likely in version 0.6.0. \item \describe{ \item{\code{cutp}}{Error in calculation of the log-rank test in some cases has been fixed. Code also simplified to make better use of \code{data.table}. Thanks in particular to Rached Alkallas for her helpful work on this. As suggested: \cr \emph{There was a problem with the previous version [0.4.6], which may have affected the scientific results of those using this function.} } \item{\code{comp}}{Problems when calculating the 'test for trend' have been fixed. Thanks to Karl Ove Hufthammer for pointing this out. I have also modified the help for \code{comp}, which was using the words 'groups' and 'strata' interchangeably. This has been corrected elsewhere in the manual. \code{comp} does *not* yet work with stratified models, but this will be implemented in due course. \cr Thanks also to Mario Ouwens for input on the Renyi function and Haitao Yang for comments on the log-rank test. } \item{\code{autoplot.survfit}}{Function has been improved. This will likely be deprecated in due course in favor of \code{autoplot.ten}. \cr The returned object will now \code{print} using \code{autoplot.tableAndPlot}. \cr Examples of changing the output are now given in the 'plots' vignette. These include \eqn{y} axis limits and the order of the legend labels and position. Thanks to Line Heylen for these suggestions. \cr Thanks also to Jason Kennedy for bringing up the problem of legend re-ordering. } \item{\code{rsq.coxph}}{Typo fixed. Thanks to Nir Y Krakauer for bringing this to my attention.} } }} \section{12/21/14: Version 0.4.4}{ \itemize{ \item Error in use of \code{dQuote} in \code{comp} now fixed. Thanks to all who pointed this out: J Smith, M Fiocco, D Winsemius, J Bienkowska, A Calcareo, N Villanueva, A Lover, R Perry, K Karakostas. \item Typo in \code{autoplot.survfit} fixed. Thanks to D Menne. \item Example in \code{autoplot.survfit} now includes a method to customize the output further.
Thanks to S Melov and L Neves. \item Scoping in \code{comp} fixed. Thanks to A. Atkinson. }} \section{Version 0.4 and before}{ \itemize{ \item 8/14/14: Add \code{cf6.c} to \code{src}; use this instead of depending on the \code{survival} package. \item 7/6/2014: Add \code{gof} method. \item 7/2/2014: Fix \code{.getTne} to work on Win and Linux. \item 6/21/2014: Change name of \code{autoplot} and push the new version to GitHub. \item 6/9/2014: Improve \code{autoplot.survfit}; coloring now consistent for line and fill. \cr Add \code{ci} method. \cr Move documentation from \code{quantile} to \code{ci}. \item 5/21/14: Fixed \code{autoplot.survfit}: arbitrary alpha now works with upper box in filled survival lines. \item 6/6/2014: Fixed \code{.getTne}, which was not calculating values correctly; \code{comp} now works correctly. \item 5/20/14: Fixed \code{comp}, \code{compNSurv} for cases where the last element in \code{covMatSurv} is \code{NaN}. \cr Fixed \code{sig} so the likelihood-ratio test works for factors (needs a call to \code{model.matrix}). \cr Fixed \code{cutp}: need to convert \code{data.table} to \code{data.frame} in the function body for now. }}
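The entries above refer repeatedly to \code{ten} and \code{comp}. As a minimal sketch (not taken from the package sources) of how such a comparison is typically set up, assuming the kidney data from \code{KMsurv} that the vignette also uses:

## illustrative sketch only; assumes KMsurv is installed
library(survival)                 # provides Surv()
library(survMisc)
data("kidney", package="KMsurv")
## a 'ten' (time, event, number at risk) object, one group per catheter type
t1 <- ten(Surv(time=time, event=delta) ~ type, data=kidney)
## compare the two groups with the log-rank family of tests
comp(t1)
## the same object can then be plotted, as in the vignette
autoplot(t1)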