mitml/ 0000755 0001762 0000144 00000000000 13414712773 011405 5 ustar ligges users mitml/inst/ 0000755 0001762 0000144 00000000000 13413110673 012350 5 ustar ligges users mitml/inst/doc/ 0000755 0001762 0000144 00000000000 13413110673 013115 5 ustar ligges users mitml/inst/doc/Introduction.Rmd 0000644 0001762 0000144 00000017363 13321347222 016254 0 ustar ligges users --- title: "Introduction" output: rmarkdown::html_vignette: css: "css/vignette.css" vignette: > %\VignetteEngine{knitr::rmarkdown} %\VignetteIndexEntry{Introduction} %\VignetteEncoding{UTF-8} --- --- ```{r setup, include=FALSE, cache=FALSE} library(knitr) set.seed(123) options(width=87) opts_chunk$set(background="#ffffff", comment="#", collapse=FALSE, fig.width=9, fig.height=9, warning=FALSE, message=FALSE) ``` This vignette is intended to provide a first introduction to the R package `mitml` for generating and analyzing multiple imputations for multilevel missing data. A usual application of the package may consist of the following steps. 1. Imputation 2. Assessment of convergence 3. Completion of the data 4. Analysis 5. Pooling The `mitml` package offers a set of tools to facilitate each of these steps. This vignette is intended as a step-by-step illustration of the basic features of `mitml`. Further information can be found in the other [vignettes](https://github.com/simongrund1/mitml/wiki) and the package [documentation](https://cran.r-project.org/package=mitml/mitml.pdf). ## Example data For the purposes of this vignette, we employ a simple example that makes use of the `studentratings` data set, which is provided with `mitml`. To use it, the `mitml` package and the data set must be loaded as follows. ```{r} library(mitml) data(studentratings) ``` More information about the variables in the data set can be obtained from its `summary`. ```{r} summary(studentratings) ``` In addition, the correlations between variables (based on pairwise observations) may be useful for identifying possible sources of information that may be used during the treatment of missing data. ```{r, echo=FALSE} round(cor(studentratings[,-(1:3)], use="pairwise"),3) ``` This illustrates that (a) most variables in the data set are affected by missing data, but also (b) that substantial relations exist between variables. For simplicity, we focus on only a subset of these variables. ## Model of interest For the present example, we focus on the two variables `ReadDis` (disciplinary problems in reading class) and `ReadAchiev` (reading achievement). Assume we are interested in the relation between these variables. Specifically, we may be interested in the following analysis model $$ \mathit{ReadAchiev}_{ij} = \gamma_{00} + \gamma_{10} \mathit{ReadDis}_{ij} + u_{0j} + e_{ij} $$ On the basis of the syntax used in the R package `lme4`, this model may be written as follows. ```{r, results="hide"} ReadAchiev ~ 1 + ReadDis + (1|ID) ``` In this model, the relation between `ReadDis` and `ReadAchiev` is represented by a single fixed effect of `ReadDis`, and a random intercept is included to account for the clustered structure of the data and the group-level variance in `ReadAchiev` that is not explained by `ReadDis`. ## Generating imputations The `mitml` package includes wrapper functions for the R packages `pan` (`panImpute`) and `jomo` (`jomoImpute`). Here, we will use the first option. To generate imputations with `panImpute`, the user must specify (at least): 1. an imputation model 2. 
the number of iterations and imputations The easiest way of specifying the imputation model is to use the `formula` argument of `panImpute`. Generally speaking, the imputation model should include all variables that are either (a) part of the model of interest, (b) related to the variables in the model, or (c) related to whether the variables in the model are missing. In this simple example, we include only `ReadDis` and `ReadAchiev` as the main target variables and `SchClimate` as an auxiliary variable. ```{r} fml <- ReadAchiev + ReadDis + SchClimate ~ 1 + (1|ID) ``` Note that, in this specification of the imputation model. all variables are included on the left-hand side of the model, whereas the right-hand side is left "empty". This model allows for all relations between variables at Level 1 and 2 and is thus suitable for most applications of the multilevel random intercept model (for further discussion, see also Grund, Lüdtke, & Robitzsch, 2016, in press). The imputation procedure is then run for 5,000 iterations (burn-in), after which 100 imputations are drawn every 100 iterations. ```{r, results="hide"} imp <- panImpute(studentratings, formula=fml, n.burn=5000, n.iter=100, m=100) ``` This step may take a few seconds. Once the process is completed, the imputations are saved in the `imp` object. ## Assessing convergence In `mitml`, there are two options for assessing the convergence of the imputation procedure. First, the `summary` calculates the "potential scale reduction factor" ($\hat{R}$) for each parameter in the imputation model. If this value is noticeably larger than 1 for some parameters (say $>1.05$), a longer burn-in period may be required. ```{r} summary(imp) ``` Second, diagnostic plots can be requested with the `plot` function. These plots consist of a trace plot, an autocorrelation plot, and some additional information about the posterior distribution. Convergence can be assumed if the trace plot is stationary (i.e., does not "drift"), and the autocorrelation is within reasonable bounds for the chosen number of iterations between imputations. For this example, we examine only the plot for the parameter `Beta[1,2]` (i.e., the intercept of `ReadDis`). ```{r conv, echo=FALSE} plot(imp, trace="all", print="beta", pos=c(1,2), export="png", dev.args=list(width=720, height=380, pointsize=16)) ``` ```{r, eval=FALSE} plot(imp, trace="all", print="beta", pos=c(1,2)) ```  Taken together, both $\hat{R}$ and the diagnostic plots indicate that the imputation model converged, setting the basis for the analysis of the imputed data sets. ## Completing the data In order to work with and analyze the imputed data sets, the data sets must be completed with the imputations generated in the previous steps. To do so, `mitml` provides the function `mitmlComplete`. ```{r} implist <- mitmlComplete(imp, "all") ``` This resulting object is a list that contains the 100 completed data sets. ## Analysis and pooling In order to obtain estimates for the model of interest, the model must be fit separately to each of the completed data sets, and the results must be pooled into a final set of estimates and inferences. The `mitml` package offers the `with` function to fit various statistical models to a list of completed data sets. In this example, we use the `lmer` function from the R package `lme4` to fit the model of interest. ```{r, message=FALSE} library(lme4) fit <- with(implist, lmer(ReadAchiev ~ 1 + ReadDis + (1|ID))) ``` The resulting object is a list containing the 100 fitted models. 
To pool the results of these models into a set of final estimates and inferences, `mitml` offers the `testEstimates` function. ```{r} testEstimates(fit, var.comp=TRUE) ``` The estimates can be interpreted in a manner similar to the estimates from the corresponding complete-data procedure. In addition, the output includes diagnostic quantities such as the fraction of missing information (FMI), which can be helpful for interpreting the results and understanding problems with the imputation procedure. ###### References Grund, S., Lüdtke, O., & Robitzsch, A. (2016). Multiple imputation of multilevel missing data: An introduction to the R package pan. *SAGE Open*, *6*(4), 1–17. doi: 10.1177/2158244016668220 ([Link](https://doi.org/10.1177/2158244016668220)) Grund, S., Lüdtke, O., & Robitzsch, A. (in press). Multiple imputation of missing data for multilevel models: Simulations and recommendations. *Organizational Research Methods*. doi: 10.1177/1094428117703686 ([Link](https://doi.org/10.1177/1094428117703686)) --- ```{r, echo=F} cat("Author: Simon Grund (grund@ipn.uni-kiel.de)\nDate: ", as.character(Sys.Date())) ``` mitml/inst/doc/Analysis.Rmd 0000644 0001762 0000144 00000022746 13321375504 015363 0 ustar ligges users --- title: "Analysis of Multiply Imputed Data Sets" output: rmarkdown::html_vignette: css: "css/vignette.css" vignette: > %\VignetteEngine{knitr::rmarkdown} %\VignetteIndexEntry{Analysis of multiply imputed data sets} %\VignetteEncoding{UTF-8} --- ```{r setup, include=FALSE, cache=FALSE} library(knitr) set.seed(123) options(width=87) opts_chunk$set(background="#ffffff", comment="#", collapse=FALSE, fig.width=9, fig.height=9, warning=FALSE, message=FALSE) ``` This vignette is intended to provide an overview of the analysis of multiply imputed data sets with `mitml`. Specifically, this vignette addresses the following topics: 1. Working with multiply imputed data sets 2. Rubin's rules for pooling individual parameters 3. Model comparisons 4. Parameter constraints Further information can be found in the other [vignettes](https://github.com/simongrund1/mitml/wiki) and the package [documentation](https://cran.r-project.org/package=mitml/mitml.pdf). ## Example data (`studentratings`) For the purposes of this vignette, we make use of the `studentratings` data set, which contains simulated data from 750 students in 50 schools including scores on reading and math achievement, socioeconomic status (SES), and ratings on school and classroom environment. The package and the data set can be loaded as follows. ```{r} library(mitml) library(lme4) data(studentratings) ``` As evident from its `summary`, most variables in the data set contain missing values. ```{r} summary(studentratings) ``` In the present example, we investigate the differences in mathematics achievement that can be attributed to differences in SES when controlling for students' sex. Specifically, we are interested in the following model. $$ \mathit{MA}_{ij} = \gamma_{00} + \gamma_{10} \mathit{Sex}_{ij} + \gamma_{20} (\mathit{SES}_{ij}-\overline{\mathit{SES}}_{\bullet j}) + \gamma_{01} \overline{\mathit{SES}}_{\bullet j} + u_{0j} + e_{ij} $$ Note that this model also employs group-mean centering to separate the individual and group-level effects of SES. ## Generating imputations In the present example, we generate 20 imputations from the following imputation model. 
```{r, results="hide"} fml <- ReadDis + SES ~ 1 + Sex + (1|ID) imp <- panImpute(studentratings, formula=fml, n.burn=5000, n.iter=200, m=20) ``` The completed data are then extracted with `mitmlComplete`. ```{r} implist <- mitmlComplete(imp, "all") ``` ## Transforming the imputed data sets In empirical research, the raw data rarely enter the analyses but often require to be transformed beforehand. For this purpose, the `mitml` package provides the `within` function, which applies a given transformation directly to each data set. In the following, we use this to (a) calculate the group means of SES and (b) center the individual scores around their group means. ```{r} implist <- within(implist,{ G.SES <- clusterMeans(SES,ID) # calculate group means I.SES <- SES - G.SES # center around group means }) ``` This method can be used to apply arbitrary transformations to all of the completed data sets simultaneously. > **Note regarding** `dplyr`**:** > Due to how it is implemented, `within` cannot be used directly with `dplyr`. > Instead, users may use `with` instead of `within` with the following workaround. >```{r, eval=FALSE} implist <- with(implist,{ df <- data.frame(as.list(environment())) df <- ... # dplyr commands df }) implist <- as.mitml.list(implist) ``` > Advanced users may also consider using `lapply` for a similar workaround.` ## Fitting the analysis model In order to analyze the imputed data, each data set is analyzed using regular complete-data techniques. For this purpose, `mitml` offers the `with` function. In the present example, we use it to fit the model of interest with the R package `lme4`. ```{r} fit <- with(implist,{ lmer(MathAchiev ~ 1 + Sex + I.SES + G.SES + (1|ID)) }) ``` This results in a list of fitted models, one for each of the imputed data sets. ## Pooling The results obtained from the imputed data sets must be pooled in order to obtain a set of final parameter estimates and inferences. In the following, we employ a number of different pooling methods that can be used to address common statistical tasks, for example, for (a) estimating and testing individual parameters, (b) model comparisons, and (c) tests of constraints about one or several parameters. #### Parameter estimates Individual parameters are commonly pooled with the rules developed by Rubin (1987). In `mitml`, Rubin's rules are implemented in the `testEstimates` function. ```{r} testEstimates(fit) ``` In addition, the argument `var.comp=TRUE` can be used to obtain pooled estimates of variance components, and `df.com` can be used to specify the complete-data degrees of freedom, which provides more appropriate (i.e., conservative) inferences in smaller samples. For example, using a conservative value for the complete-data degrees of freedom for the fixed effects in the model of interest (Snijders & Bosker, 2012), the output changes as follows. ```{r} testEstimates(fit, var.comp=TRUE, df.com=46) ``` #### Multiple parameters and model comparisons Oftentimes, statistical inference concerns more than one parameter at a time. For example, the combined influence of SES (within and between groups) on mathematics achievement is represented by two parameters in the model of interest. Multiple pooling methods for Wald and likelihood ratio tests (LRTs) are implemented in the `testModels` function. This function requires the specification of a full model and a restricted model, which are then compared using (pooled) Wald tests or LRTs. 
Specifically, `testModels` allows users to pool Wald tests ($D_1$), $\chi^2$ test statistics ($D_2$), and LRTs ($D_3$; for a comparison of these methods, see also Grund, Lüdtke, & Robitzsch, 2016b). To examine the combined influence of SES on mathematics achievement, the following restricted model can be specified and compared with the model of interest (using $D_1$). ```{r} fit.null <- with(implist,{ lmer(MathAchiev ~ 1 + Sex + (1|ID)) }) testModels(fit, fit.null) ``` > **Note regarding the order of arguments:** > Please note that `testModels` expects that the first argument contains the full model, and the second argument contains the restricted model. > If the order of the arguments is reversed, the results will not be interpretable. Similar to the test for individual parameters, smaller samples can be accommodated with `testModels` (with method $D_1$) by specifying the complete-data degrees of freedom for the denominator of the $F$ statistic. ```{r} testModels(fit, fit.null, df.com=46) ``` The pooling method used by `testModels` is determined by the `method` argument. For example, to calculate the pooled LRT corresponding to the Wald test above (i.e., $D_3$), the following command can be issued. ```{r} testModels(fit, fit.null, method="D3") ``` #### Constraints on parameters Finally, it is often useful to investigate functions (or constraints) of the parameters in the model of interest. In complete data sets, this can be achieved with a test of linear hypotheses or the delta method. The `mitml` package implements a pooled version of the delta method in the `testConstraints` function. For example, the combined influence of SES on mathematics achievement can also be tested without model comparisons by testing the constraint that the parameters pertaining to `I.SES` and `G.SES` are both zero. This constraint is defined and tested as follows. ```{r} c1 <- c("I.SES", "G.SES") testConstraints(fit, constraints=c1) ``` This test is identical to the Wald test given in the previous section. Arbitrary constraints on the parameters can be specified and tested in this manner, where each character string denotes an expression to be tested against zero. In the present example, we are also interested in the *contextual* effect of SES on mathematics achievement (e.g., Snijders & Bosker, 2012). The contextual effect is simply the difference between the coefficients pertaining to `G.SES` and `I.SES` and can be tested as follows. ```{r} c2 <- c("G.SES - I.SES") testConstraints(fit, constraints=c2) ``` Similar to model comparisons, constraints can be tested with different methods ($D_1$ and $D_2$) and can accommodate smaller samples by a value for `df.com`. Further examples for the analysis of multiply imputed data sets with `mitml` are given by Enders (2016) and Grund, Lüdtke, and Robitzsch (2016a). ###### References Enders, C. K. (2016). Multiple imputation as a flexible tool for missing data handling in clinical research. *Behaviour Research and Therapy*. doi: 10.1016/j.brat.2016.11.008 ([Link](https://doi.org/10.1016/j.brat.2016.11.008)) Grund, S., Lüdtke, O., & Robitzsch, A. (2016a). Multiple imputation of multilevel missing data: An introduction to the R package pan. *SAGE Open*, *6*(4), 1–17. doi: 10.1177/2158244016668220 ([Link](https://doi.org/10.1177/2158244016668220)) Grund, S., Lüdtke, O., & Robitzsch, A. (2016b). Pooling ANOVA results from multiply imputed datasets: A simulation study. *Methodology*, *12*, 75–88. 
doi: 10.1027/1614-2241/a000111 ([Link](https://doi.org/10.1027/1614-2241/a000111)) Rubin, D. B. (1987). *Multiple imputation for nonresponse in surveys*. Hoboken, NJ: Wiley. Snijders, T. A. B., & Bosker, R. J. (2012). *Multilevel analysis: An introduction to basic and advanced multilevel modeling*. Thousand Oaks, CA: Sage. --- ```{r, echo=F} cat("Author: Simon Grund (grund@ipn.uni-kiel.de)\nDate: ", as.character(Sys.Date())) ``` mitml/inst/doc/Analysis.R 0000644 0001762 0000144 00000005175 13413110657 015035 0 ustar ligges users ## ----setup, include=FALSE, cache=FALSE----------------------------------------------- library(knitr) set.seed(123) options(width=87) opts_chunk$set(background="#ffffff", comment="#", collapse=FALSE, fig.width=9, fig.height=9, warning=FALSE, message=FALSE) ## ------------------------------------------------------------------------------------ library(mitml) library(lme4) data(studentratings) ## ------------------------------------------------------------------------------------ summary(studentratings) ## ---- results="hide"----------------------------------------------------------------- fml <- ReadDis + SES ~ 1 + Sex + (1|ID) imp <- panImpute(studentratings, formula=fml, n.burn=5000, n.iter=200, m=20) ## ------------------------------------------------------------------------------------ implist <- mitmlComplete(imp, "all") ## ------------------------------------------------------------------------------------ implist <- within(implist,{ G.SES <- clusterMeans(SES,ID) # calculate group means I.SES <- SES - G.SES # center around group means }) ## ---- eval=FALSE--------------------------------------------------------------------- # implist <- with(implist,{ # df <- data.frame(as.list(environment())) # df <- ... # dplyr commands # df # }) # implist <- as.mitml.list(implist) ## ------------------------------------------------------------------------------------ fit <- with(implist,{ lmer(MathAchiev ~ 1 + Sex + I.SES + G.SES + (1|ID)) }) ## ------------------------------------------------------------------------------------ testEstimates(fit) ## ------------------------------------------------------------------------------------ testEstimates(fit, var.comp=TRUE, df.com=46) ## ------------------------------------------------------------------------------------ fit.null <- with(implist,{ lmer(MathAchiev ~ 1 + Sex + (1|ID)) }) testModels(fit, fit.null) ## ------------------------------------------------------------------------------------ testModels(fit, fit.null, df.com=46) ## ------------------------------------------------------------------------------------ testModels(fit, fit.null, method="D3") ## ------------------------------------------------------------------------------------ c1 <- c("I.SES", "G.SES") testConstraints(fit, constraints=c1) ## ------------------------------------------------------------------------------------ c2 <- c("G.SES - I.SES") testConstraints(fit, constraints=c2) ## ---- echo=F------------------------------------------------------------------------- cat("Author: Simon Grund (grund@ipn.uni-kiel.de)\nDate: ", as.character(Sys.Date())) mitml/inst/doc/Analysis.html 0000644 0001762 0000144 00000067056 13413110657 015606 0 ustar ligges users
This vignette is intended to provide an overview of the analysis of multiply imputed data sets with mitml. Specifically, this vignette addresses the following topics:
1. Working with multiply imputed data sets
2. Rubin's rules for pooling individual parameters
3. Model comparisons
4. Parameter constraints
Further information can be found in the other vignettes and the package documentation.
For the purposes of this vignette, we make use of the studentratings data set, which contains simulated data from 750 students in 50 schools including scores on reading and math achievement, socioeconomic status (SES), and ratings on school and classroom environment.
The package and the data set can be loaded as follows.
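The corresponding code chunk is not reproduced in this text rendering; from the vignette source (.Rmd) above, it reads:
library(mitml)
library(lme4)
data(studentratings)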
As evident from its summary, most variables in the data set contain missing values.
# ID FedState Sex MathAchiev MathDis
# Min. :1001 B :375 Length:750 Min. :225.0 Min. :0.2987
# 1st Qu.:1013 SH:375 Class :character 1st Qu.:440.7 1st Qu.:1.9594
# Median :1513 Mode :character Median :492.7 Median :2.4350
# Mean :1513 Mean :495.4 Mean :2.4717
# 3rd Qu.:2013 3rd Qu.:553.2 3rd Qu.:3.0113
# Max. :2025 Max. :808.1 Max. :4.7888
# NA's :132 NA's :466
# SES ReadAchiev ReadDis CognAbility SchClimate
# Min. :-9.00 Min. :191.1 Min. :0.7637 Min. :28.89 Min. :0.02449
# 1st Qu.:35.00 1st Qu.:427.4 1st Qu.:2.1249 1st Qu.:43.80 1st Qu.:1.15338
# Median :46.00 Median :490.2 Median :2.5300 Median :48.69 Median :1.65636
# Mean :46.55 Mean :489.9 Mean :2.5899 Mean :48.82 Mean :1.73196
# 3rd Qu.:59.00 3rd Qu.:558.4 3rd Qu.:3.0663 3rd Qu.:53.94 3rd Qu.:2.24018
# Max. :93.00 Max. :818.5 Max. :4.8554 Max. :71.29 Max. :4.19316
# NA's :281 NA's :153 NA's :140
In the present example, we investigate the differences in mathematics achievement that can be attributed to differences in SES when controlling for students’ sex. Specifically, we are interested in the following model.
\[ \mathit{MA}_{ij} = \gamma_{00} + \gamma_{10} \mathit{Sex}_{ij} + \gamma_{20} (\mathit{SES}_{ij}-\overline{\mathit{SES}}_{\bullet j}) + \gamma_{01} \overline{\mathit{SES}}_{\bullet j} + u_{0j} + e_{ij} \]
Note that this model also employs group-mean centering to separate the individual and group-level effects of SES.
In the present example, we generate 20 imputations from the following imputation model.
fml <- ReadDis + SES ~ 1 + Sex + (1|ID)
imp <- panImpute(studentratings, formula=fml, n.burn=5000, n.iter=200, m=20)
The completed data are then extracted with mitmlComplete.
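The chunk for this step, as given in the vignette source above, is:
implist <- mitmlComplete(imp, "all")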
In empirical research, the raw data rarely enter the analyses as they are but often need to be transformed beforehand. For this purpose, the mitml package provides the within function, which applies a given transformation directly to each data set.
In the following, we use this to (a) calculate the group means of SES and (b) center the individual scores around their group means.
implist <- within(implist,{
G.SES <- clusterMeans(SES,ID) # calculate group means
I.SES <- SES - G.SES # center around group means
})
This method can be used to apply arbitrary transformations to all of the completed data sets simultaneously.
Note regarding dplyr: Due to how it is implemented, within cannot be used directly with dplyr. Instead, users may use with instead of within with the following workaround.
implist <- with(implist,{
  df <- data.frame(as.list(environment()))
  df <- ... # dplyr commands
  df
})
implist <- as.mitml.list(implist)
Advanced users may also consider using lapply for a similar workaround.
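As an illustration only, a minimal sketch of such an lapply-based workaround is shown below; the mutate call and the derived variable C.SES are hypothetical stand-ins for arbitrary dplyr commands and are not part of the original vignette.
library(dplyr)
implist <- as.mitml.list(
  lapply(implist, function(df){
    df %>% mutate(C.SES = SES - mean(SES))  # hypothetical dplyr transformation
  })
)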
In order to analyze the imputed data, each data set is analyzed using regular complete-data techniques. For this purpose, mitml offers the with function. In the present example, we use it to fit the model of interest with the R package lme4.
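The corresponding chunk from the vignette source above is:
fit <- with(implist,{
  lmer(MathAchiev ~ 1 + Sex + I.SES + G.SES + (1|ID))
})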
This results in a list of fitted models, one for each of the imputed data sets.
The results obtained from the imputed data sets must be pooled in order to obtain a set of final parameter estimates and inferences. In the following, we employ a number of different pooling methods that can be used to address common statistical tasks, for example, for (a) estimating and testing individual parameters, (b) model comparisons, and (c) tests of constraints about one or several parameters.
Individual parameters are commonly pooled with the rules developed by Rubin (1987). In mitml, Rubin’s rules are implemented in the testEstimates function.
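The call producing the output below (also shown in its Call section) is:
testEstimates(fit)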
#
# Call:
#
# testEstimates(model = fit)
#
# Final parameter estimates and inferences obtained from 20 imputed data sets.
#
# Estimate Std.Error t.value df P(>|t|) RIV FMI
# (Intercept) 433.015 28.481 15.203 1081.280 0.000 0.153 0.134
# SexGirl 3.380 7.335 0.461 279399.841 0.645 0.008 0.008
# I.SES 0.692 0.257 2.690 233.427 0.008 0.399 0.291
# G.SES 1.296 0.597 2.173 1096.956 0.030 0.152 0.133
#
# Unadjusted hypothesis test as appropriate in larger samples.
In addition, the argument var.comp=TRUE can be used to obtain pooled estimates of variance components, and df.com can be used to specify the complete-data degrees of freedom, which provides more appropriate (i.e., conservative) inferences in smaller samples.
For example, using a conservative value for the complete-data degrees of freedom for the fixed effects in the model of interest (Snijders & Bosker, 2012), the output changes as follows.
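The corresponding call is:
testEstimates(fit, var.comp=TRUE, df.com=46)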
#
# Call:
#
# testEstimates(model = fit, var.comp = TRUE, df.com = 46)
#
# Final parameter estimates and inferences obtained from 20 imputed data sets.
#
# Estimate Std.Error t.value df P(>|t|) RIV FMI
# (Intercept) 433.015 28.481 15.203 36.965 0.000 0.153 0.134
# SexGirl 3.380 7.335 0.461 43.752 0.647 0.008 0.008
# I.SES 0.692 0.257 2.690 27.781 0.012 0.399 0.291
# G.SES 1.296 0.597 2.173 37.022 0.036 0.152 0.133
#
# Estimate
# Intercept~~Intercept|ID 168.506
# Residual~~Residual 8092.631
# ICC|ID 0.020
#
# Hypothesis test adjusted for small samples with df=[46]
# complete-data degrees of freedom.
Oftentimes, statistical inference concerns more than one parameter at a time. For example, the combined influence of SES (within and between groups) on mathematics achievement is represented by two parameters in the model of interest.
Multiple pooling methods for Wald and likelihood ratio tests (LRTs) are implemented in the testModels function. This function requires the specification of a full model and a restricted model, which are then compared using (pooled) Wald tests or LRTs. Specifically, testModels allows users to pool Wald tests (\(D_1\)), \(\chi^2\) test statistics (\(D_2\)), and LRTs (\(D_3\); for a comparison of these methods, see also Grund, Lüdtke, & Robitzsch, 2016b).
To examine the combined influence of SES on mathematics achievement, the following restricted model can be specified and compared with the model of interest (using \(D_1\)).
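From the vignette source above, the restricted model and the comparison are specified as:
fit.null <- with(implist,{
  lmer(MathAchiev ~ 1 + Sex + (1|ID))
})
testModels(fit, fit.null)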
#
# Call:
#
# testModels(model = fit, null.model = fit.null)
#
# Model comparison calculated from 20 imputed data sets.
# Combination method: D1
#
# F.value df1 df2 P(>F) RIV
# 6.095 2 674.475 0.002 0.275
#
# Unadjusted hypothesis test as appropriate in larger samples.
Note regarding the order of arguments: Please note that testModels expects that the first argument contains the full model, and the second argument contains the restricted model. If the order of the arguments is reversed, the results will not be interpretable.
Similar to the test for individual parameters, smaller samples can be accommodated with testModels (with method \(D_1\)) by specifying the complete-data degrees of freedom for the denominator of the \(F\) statistic.
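The corresponding call is:
testModels(fit, fit.null, df.com=46)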
#
# Call:
#
# testModels(model = fit, null.model = fit.null, df.com = 46)
#
# Model comparison calculated from 20 imputed data sets.
# Combination method: D1
#
# F.value df1 df2 P(>F) RIV
# 6.095 2 40.687 0.005 0.275
#
# Hypothesis test adjusted for small samples with df=[46]
# complete-data degrees of freedom.
The pooling method used by testModels is determined by the method argument. For example, to calculate the pooled LRT corresponding to the Wald test above (i.e., \(D_3\)), the following command can be issued.
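The command reads:
testModels(fit, fit.null, method="D3")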
#
# Call:
#
# testModels(model = fit, null.model = fit.null, method = "D3")
#
# Model comparison calculated from 20 imputed data sets.
# Combination method: D3
#
# F.value df1 df2 P(>F) RIV
# 5.787 2 519.143 0.003 0.328
#
# Models originally fit with REML were automatically refit using ML.
Finally, it is often useful to investigate functions (or constraints) of the parameters in the model of interest. In complete data sets, this can be achieved with a test of linear hypotheses or the delta method. The mitml package implements a pooled version of the delta method in the testConstraints function.
For example, the combined influence of SES on mathematics achievement can also be tested without model comparisons by testing the constraint that the parameters pertaining to I.SES and G.SES are both zero. This constraint is defined and tested as follows.
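From the vignette source above, the constraint is specified as a character vector and passed to testConstraints:
c1 <- c("I.SES", "G.SES")
testConstraints(fit, constraints=c1)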
#
# Call:
#
# testConstraints(model = fit, constraints = c1)
#
# Hypothesis test calculated from 20 imputed data sets. The following
# constraints were specified:
#
# Estimate Std. Error
# I.SES: 0.692 0.245
# G.SES: 1.296 0.628
#
# Combination method: D1
#
# F.value df1 df2 P(>F) RIV
# 6.095 2 674.475 0.002 0.275
#
# Unadjusted hypothesis test as appropriate in larger samples.
This test is identical to the Wald test given in the previous section. Arbitrary constraints on the parameters can be specified and tested in this manner, where each character string denotes an expression to be tested against zero.
In the present example, we are also interested in the contextual effect of SES on mathematics achievement (e.g., Snijders & Bosker, 2012). The contextual effect is simply the difference between the coefficients pertaining to G.SES and I.SES and can be tested as follows.
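The corresponding chunk is:
c2 <- c("G.SES - I.SES")
testConstraints(fit, constraints=c2)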
#
# Call:
#
# testConstraints(model = fit, constraints = c2)
#
# Hypothesis test calculated from 20 imputed data sets. The following
# constraints were specified:
#
# Estimate Std. Error
# G.SES - I.SES: 0.605 0.644
#
# Combination method: D1
#
# F.value df1 df2 P(>F) RIV
# 0.881 1 616.380 0.348 0.166
#
# Unadjusted hypothesis test as appropriate in larger samples.
Similar to model comparisons, constraints can be tested with different methods (\(D_1\) and \(D_2\)) and can accommodate smaller samples by specifying a value for df.com. Further examples for the analysis of multiply imputed data sets with mitml are given by Enders (2016) and Grund, Lüdtke, and Robitzsch (2016a).
Enders, C. K. (2016). Multiple imputation as a flexible tool for missing data handling in clinical research. Behaviour Research and Therapy. doi: 10.1016/j.brat.2016.11.008 (Link)
Grund, S., Lüdtke, O., & Robitzsch, A. (2016a). Multiple imputation of multilevel missing data: An introduction to the R package pan. SAGE Open, 6(4), 1–17. doi: 10.1177/2158244016668220 (Link)
Grund, S., Lüdtke, O., & Robitzsch, A. (2016b). Pooling ANOVA results from multiply imputed datasets: A simulation study. Methodology, 12, 75–88. doi: 10.1027/1614-2241/a000111 (Link)
Rubin, D. B. (1987). Multiple imputation for nonresponse in surveys. Hoboken, NJ: Wiley.
Snijders, T. A. B., & Bosker, R. J. (2012). Multilevel analysis: An introduction to basic and advanced multilevel modeling. Thousand Oaks, CA: Sage.
# Author: Simon Grund (grund@ipn.uni-kiel.de)
# Date: 2019-01-02
This vignette illustrates the use of mitml for the treatment of missing data at Level 2. Specifically, the vignette addresses the following topics:
Further information can be found in the other vignettes and the package documentation.
For purposes of this vignette, we make use of the leadership data set, which contains simulated data from 750 employees in 50 groups including ratings on job satisfaction, leadership style, and work load (Level 1) as well as cohesion (Level 2).
The package and the data set can be loaded as follows.
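The loading code is not shown in this rendering; assuming it mirrors the other vignettes (the data set name leadership is taken from the text above), it would be:
library(mitml)
data(leadership)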
In the summary of the data, it becomes visible that all variables are affected by missing data.
# GRPID JOBSAT COHES NEGLEAD WLOAD
# Min. : 1.0 Min. :-7.32934 Min. :-3.4072 Min. :-3.13213 low :416
# 1st Qu.:13.0 1st Qu.:-1.61932 1st Qu.:-0.4004 1st Qu.:-0.70299 high:248
# Median :25.5 Median :-0.02637 Median : 0.2117 Median : 0.08027 NA's: 86
# Mean :25.5 Mean :-0.03168 Mean : 0.1722 Mean : 0.04024
# 3rd Qu.:38.0 3rd Qu.: 1.64571 3rd Qu.: 1.1497 3rd Qu.: 0.79111
# Max. :50.0 Max. :10.19227 Max. : 2.5794 Max. : 3.16116
# NA's :69 NA's :30 NA's :92
The following data segment illustrates this fact, including cases with missing data at Level 1 (e.g., job satisfaction) and 2 (e.g., cohesion).
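The display code is not shown here; judging from the row names in the output below (rows 73 to 78), it could be reproduced with something like:
leadership[73:78, ]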
# GRPID JOBSAT COHES NEGLEAD WLOAD
# 73 5 -1.72143400 0.9023198 0.83025589 high
# 74 5 NA 0.9023198 0.15335056 high
# 75 5 -0.09541178 0.9023198 0.21886272 low
# 76 6 0.68626611 NA -0.38190591 high
# 77 6 NA NA NA low
# 78 6 -1.86298201 NA -0.05351001 high
In the following, we will employ a two-level model to address missing data at both levels simultaneously.
The specification of the two-level model involves two components, one pertaining to the variables at each level of the sample (Goldstein, Carpenter, Kenward, & Levin, 2009; for further discussion, see also Enders, Mistler, & Keller, 2016; Grund, Lüdtke, & Robitzsch, in press).
Specifically, the imputation model is specified as a list with two components, where the first component denotes the model for the variables at Level 1, and the second component denotes the model for the variables at Level 2.
For example, using the formula interface, an imputation model targeting all variables in the data set can be written as follows.
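The formula itself is not shown in this rendering. Based on the description that follows and the target variables listed in the summary output further below, it presumably takes the form of a two-component list such as:
fml <- list(JOBSAT + NEGLEAD + WLOAD ~ 1 + (1|GRPID),   # Level 1
            COHES ~ 1)                                  # Level 2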
The first component of this list includes the three target variables at Level 1 and a fixed (1) as well as a random intercept (1|GRPID). The second component includes the target variable at Level 2 with a fixed intercept (1).
From a statistical point of view, this specification corresponds to the following model \[ \begin{aligned} \mathbf{y}_{1ij} &= \boldsymbol\mu_{1} + \mathbf{u}_{1j} + \mathbf{e}_{ij} \\ \mathbf{y}_{2j} &= \boldsymbol\mu_{2} + \mathbf{u}_{2j} \; , \end{aligned} \] where \(\mathbf{y}_{1ij}\) denotes the target variables at Level 1, \(\mathbf{y}_{2j}\) the target variables at Level 2, and the right-hand side of the model contains the fixed effects, random effects, and residual terms as mentioned above.
Note that, even though the two components of the model appear to be separated, they define a single (joint) model for all target variables at both Level 1 and 2. Specifically, this model employs a two-level covariance structure, which allows for relations between variables at both Level 1 (i.e., correlated residuals at Level 1) and 2 (i.e., correlated random effects residuals at Level 2).
Because the data contain missing values at both levels, imputations will be generated with jomoImpute (and not panImpute). Except for the specification of the two-level model, the syntax is the same as in applications with missing data only at Level 1.
Here, we will run 5,000 burn-in iterations and generate 20 imputations, each 250 iterations apart.
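The imputation call is not reproduced here; as printed in the Call section of the summary below (the object name imp is assumed), it is:
imp <- jomoImpute(leadership, formula=fml, n.burn=5000, n.iter=250, m=20)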
By looking at the summary, we can then review the imputation procedure and verify that the imputation model converged.
#
# Call:
#
# jomoImpute(data = leadership, formula = fml, n.burn = 5000, n.iter = 250,
# m = 20)
#
# Level 1:
#
# Cluster variable: GRPID
# Target variables: JOBSAT NEGLEAD WLOAD
# Fixed effect predictors: (Intercept)
# Random effect predictors: (Intercept)
#
# Level 2:
#
# Target variables: COHES
# Fixed effect predictors: (Intercept)
#
# Performed 5000 burn-in iterations, and generated 20 imputed data sets,
# each 250 iterations apart.
#
# Potential scale reduction (Rhat, imputation phase):
#
# Min 25% Mean Median 75% Max
# Beta: 1.001 1.001 1.001 1.001 1.001 1.001
# Beta2: 1.001 1.001 1.001 1.001 1.001 1.001
# Psi: 1.000 1.001 1.003 1.001 1.003 1.009
# Sigma: 1.000 1.003 1.004 1.004 1.006 1.009
#
# Largest potential scale reduction:
# Beta: [1,3], Beta2: [1,1], Psi: [4,3], Sigma: [3,1]
#
# Missing data per variable:
# GRPID JOBSAT NEGLEAD WLOAD COHES
# MD% 0 9.2 12.3 11.5 4.0
Due to the greater complexity of the two-level model, the output includes more information than in applications with missing data only at Level 1. For example, the output features the model specification for variables at both Level 1 and 2. Furthermore, it provides convergence statistics for the additional regression coefficients for the target variables at Level 2 (i.e., Beta2).
Finally, it also becomes visible that the two-level model indeed allows for relations between target variables at Level 1 and 2. This can be seen from the fact that the potential scale reduction factor (\(\hat{R}\)) for the covariance matrix at Level 2 (Psi) was largest for Psi[4,3], which is the covariance between cohesion and the random intercept of work load.
The completed data sets can then be extracted with mitmlComplete.
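The extraction code is not shown here; assuming it follows the same pattern as the other vignettes, it would be:
implist <- mitmlComplete(imp, "all")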
When inspecting the completed data, it is easy to verify that the imputations for variables at Level 2 are constant within groups as intended, thus preserving the two-level structure of the data.
# GRPID JOBSAT NEGLEAD WLOAD COHES
# 73 5 -1.72143400 0.83025589 high 0.9023198
# 74 5 -2.80749991 0.15335056 high 0.9023198
# 75 5 -0.09541178 0.21886272 low 0.9023198
# 76 6 0.68626611 -0.38190591 high -1.0275552
# 77 6 1.52825873 -1.11035850 low -1.0275552
# 78 6 -1.86298201 -0.05351001 high -1.0275552
Enders, C. K., Mistler, S. A., & Keller, B. T. (2016). Multilevel multiple imputation: A review and evaluation of joint modeling and chained equations imputation. Psychological Methods, 21, 222–240. doi: 10.1037/met0000063 (Link)
Goldstein, H., Carpenter, J. R., Kenward, M. G., & Levin, K. A. (2009). Multilevel models with multivariate mixed response types. Statistical Modelling, 9, 173–197. doi: 10.1177/1471082X0800900301 (Link)
Grund, S., Lüdtke, O., & Robitzsch, A. (in press). Multiple imputation of missing data for multilevel models: Simulations and recommendations. Organizational Research Methods. doi: 10.1177/1094428117703686 (Link)
# Author: Simon Grund (grund@ipn.uni-kiel.de)
# Date: 2019-01-02
This vignette is intended to provide a first introduction to the R package mitml for generating and analyzing multiple imputations for multilevel missing data. A usual application of the package may consist of the following steps.
1. Imputation
2. Assessment of convergence
3. Completion of the data
4. Analysis
5. Pooling
The mitml package offers a set of tools to facilitate each of these steps. This vignette is intended as a step-by-step illustration of the basic features of mitml. Further information can be found in the other vignettes and the package documentation.
For the purposes of this vignette, we employ a simple example that makes use of the studentratings data set, which is provided with mitml. To use it, the mitml package and the data set must be loaded as follows.
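From the vignette source (.Rmd) above, the corresponding chunk is:
library(mitml)
data(studentratings)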
More information about the variables in the data set can be obtained from its summary.
# ID FedState Sex MathAchiev MathDis
# Min. :1001 B :375 Length:750 Min. :225.0 Min. :0.2987
# 1st Qu.:1013 SH:375 Class :character 1st Qu.:440.7 1st Qu.:1.9594
# Median :1513 Mode :character Median :492.7 Median :2.4350
# Mean :1513 Mean :495.4 Mean :2.4717
# 3rd Qu.:2013 3rd Qu.:553.2 3rd Qu.:3.0113
# Max. :2025 Max. :808.1 Max. :4.7888
# NA's :132 NA's :466
# SES ReadAchiev ReadDis CognAbility SchClimate
# Min. :-9.00 Min. :191.1 Min. :0.7637 Min. :28.89 Min. :0.02449
# 1st Qu.:35.00 1st Qu.:427.4 1st Qu.:2.1249 1st Qu.:43.80 1st Qu.:1.15338
# Median :46.00 Median :490.2 Median :2.5300 Median :48.69 Median :1.65636
# Mean :46.55 Mean :489.9 Mean :2.5899 Mean :48.82 Mean :1.73196
# 3rd Qu.:59.00 3rd Qu.:558.4 3rd Qu.:3.0663 3rd Qu.:53.94 3rd Qu.:2.24018
# Max. :93.00 Max. :818.5 Max. :4.8554 Max. :71.29 Max. :4.19316
# NA's :281 NA's :153 NA's :140
In addition, the correlations between variables (based on pairwise observations) may be useful for identifying possible sources of information that may be used during the treatment of missing data.
# MathAchiev MathDis SES ReadAchiev ReadDis CognAbility SchClimate
# MathAchiev 1.000 -0.106 0.260 0.497 -0.080 0.569 -0.206
# MathDis -0.106 1.000 -0.206 -0.189 0.613 -0.203 0.412
# SES 0.260 -0.206 1.000 0.305 -0.153 0.138 -0.176
# ReadAchiev 0.497 -0.189 0.305 1.000 -0.297 0.413 -0.320
# ReadDis -0.080 0.613 -0.153 -0.297 1.000 -0.162 0.417
# CognAbility 0.569 -0.203 0.138 0.413 -0.162 1.000 -0.266
# SchClimate -0.206 0.412 -0.176 -0.320 0.417 -0.266 1.000
This illustrates that (a) most variables in the data set are affected by missing data, but also (b) that substantial relations exist between variables. For simplicity, we focus on only a subset of these variables.
For the present example, we focus on the two variables ReadDis (disciplinary problems in reading class) and ReadAchiev (reading achievement).
Assume we are interested in the relation between these variables. Specifically, we may be interested in the following analysis model
\[ \mathit{ReadAchiev}_{ij} = \gamma_{00} + \gamma_{10} \mathit{ReadDis}_{ij} + u_{0j} + e_{ij} \]
On the basis of the syntax used in the R package lme4, this model may be written as follows.
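From the vignette source above, the model formula reads:
ReadAchiev ~ 1 + ReadDis + (1|ID)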
In this model, the relation between ReadDis and ReadAchiev is represented by a single fixed effect of ReadDis, and a random intercept is included to account for the clustered structure of the data and the group-level variance in ReadAchiev that is not explained by ReadDis.
The mitml package includes wrapper functions for the R packages pan (panImpute) and jomo (jomoImpute). Here, we will use the first option. To generate imputations with panImpute, the user must specify (at least):
1. an imputation model
2. the number of iterations and imputations
The easiest way of specifying the imputation model is to use the formula argument of panImpute. Generally speaking, the imputation model should include all variables that are either (a) part of the model of interest, (b) related to the variables in the model, or (c) related to whether the variables in the model are missing.
In this simple example, we include only ReadDis and ReadAchiev as the main target variables and SchClimate as an auxiliary variable.
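From the vignette source above, this imputation model is specified as:
fml <- ReadAchiev + ReadDis + SchClimate ~ 1 + (1|ID)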
Note that, in this specification of the imputation model, all variables are included on the left-hand side of the model, whereas the right-hand side is left “empty”. This model allows for all relations between variables at Level 1 and 2 and is thus suitable for most applications of the multilevel random intercept model (for further discussion, see also Grund, Lüdtke, & Robitzsch, 2016, in press).
The imputation procedure is then run for 5,000 iterations (burn-in), after which 100 imputations are drawn every 100 iterations.
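The corresponding chunk from the vignette source above is:
imp <- panImpute(studentratings, formula=fml, n.burn=5000, n.iter=100, m=100)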
This step may take a few seconds. Once the process is completed, the imputations are saved in the imp object.
In mitml, there are two options for assessing the convergence of the imputation procedure. First, the summary calculates the “potential scale reduction factor” (\(\hat{R}\)) for each parameter in the imputation model. If this value is noticeably larger than 1 for some parameters (say \(>1.05\)), a longer burn-in period may be required.
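The call producing the output below is:
summary(imp)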
#
# Call:
#
# panImpute(data = studentratings, formula = fml, n.burn = 5000,
# n.iter = 100, m = 100)
#
# Cluster variable: ID
# Target variables: ReadAchiev ReadDis SchClimate
# Fixed effect predictors: (Intercept)
# Random effect predictors: (Intercept)
#
# Performed 5000 burn-in iterations, and generated 100 imputed data sets,
# each 100 iterations apart.
#
# Potential scale reduction (Rhat, imputation phase):
#
# Min 25% Mean Median 75% Max
# Beta: 1.000 1.001 1.001 1.001 1.002 1.003
# Psi: 1.000 1.001 1.001 1.001 1.001 1.002
# Sigma: 1.000 1.000 1.000 1.000 1.000 1.001
#
# Largest potential scale reduction:
# Beta: [1,3], Psi: [2,1], Sigma: [2,1]
#
# Missing data per variable:
# ID ReadAchiev ReadDis SchClimate FedState Sex MathAchiev MathDis SES CognAbility
# MD% 0 0 20.4 18.7 0 0 17.6 62.1 37.5 0
Second, diagnostic plots can be requested with the plot function. These plots consist of a trace plot, an autocorrelation plot, and some additional information about the posterior distribution. Convergence can be assumed if the trace plot is stationary (i.e., does not “drift”), and the autocorrelation is within reasonable bounds for the chosen number of iterations between imputations.
For this example, we examine only the plot for the parameter Beta[1,2] (i.e., the intercept of ReadDis).
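From the vignette source above, this plot can be requested with:
plot(imp, trace="all", print="beta", pos=c(1,2))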
Taken together, both \(\hat{R}\) and the diagnostic plots indicate that the imputation model converged, setting the basis for the analysis of the imputed data sets.
In order to work with and analyze the imputed data sets, the data sets must be completed with the imputations generated in the previous steps. To do so, mitml provides the function mitmlComplete.
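From the vignette source above:
implist <- mitmlComplete(imp, "all")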
The resulting object is a list that contains the 100 completed data sets.
In order to obtain estimates for the model of interest, the model must be fit separately to each of the completed data sets, and the results must be pooled into a final set of estimates and inferences. The mitml package offers the with function to fit various statistical models to a list of completed data sets.
In this example, we use the lmer function from the R package lme4 to fit the model of interest.
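The corresponding chunk from the vignette source above is:
library(lme4)
fit <- with(implist, lmer(ReadAchiev ~ 1 + ReadDis + (1|ID)))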
The resulting object is a list containing the 100 fitted models. To pool the results of these models into a set of final estimates and inferences, mitml offers the testEstimates function.
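The call producing the output below is:
testEstimates(fit, var.comp=TRUE)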
#
# Call:
#
# testEstimates(model = fit, var.comp = TRUE)
#
# Final parameter estimates and inferences obtained from 100 imputed data sets.
#
# Estimate Std.Error t.value df P(>|t|) RIV FMI
# (Intercept) 582.186 14.501 40.147 4335.314 0.000 0.178 0.152
# ReadDis -35.689 5.231 -6.822 3239.411 0.000 0.212 0.175
#
# Estimate
# Intercept~~Intercept|ID 902.868
# Residual~~Residual 6996.303
# ICC|ID 0.114
#
# Unadjusted hypothesis test as appropriate in larger samples.
The estimates can be interpreted in a manner similar to the estimates from the corresponding complete-data procedure. In addition, the output includes diagnostic quantities such as the fraction of missing information (FMI), which can be helpful for interpreting the results and understanding problems with the imputation procedure.
Grund, S., Lüdtke, O., & Robitzsch, A. (2016). Multiple imputation of multilevel missing data: An introduction to the R package pan. SAGE Open, 6(4), 1–17. doi: 10.1177/2158244016668220 (Link)
Grund, S., Lüdtke, O., & Robitzsch, A. (in press). Multiple imputation of missing data for multilevel models: Simulations and recommendations. Organizational Research Methods. doi: 10.1177/1094428117703686 (Link)
# Author: Simon Grund (grund@ipn.uni-kiel.de)
# Date: 2019-01-02