mclust/
mclust/inst/
mclust/inst/CITATION

citHeader("To cite the 'mclust' R package in publications, please use:")

citEntry(entry = "Article",
  title = "{mclust} 5: clustering, classification and density estimation using {G}aussian finite mixture models",
  author = personList(person(given="Luca", family="Scrucca"),
                      person(given="Michael", family="Fop"),
                      person(given=c("Thomas", "Brendan"), family="Murphy"),
                      person(given=c("Adrian", "E."), family="Raftery")),
  journal = "The {R} Journal",
  year = "2016",
  volume = "8",
  number = "1",
  pages = "205--233",
  url = "https://journal.r-project.org/archive/2016-1/scrucca-fop-murphy-etal.pdf",
  #
  textVersion = paste("Scrucca L., Fop M., Murphy T. B. and Raftery A. E. (2016)",
    "mclust 5: clustering, classification and density estimation using Gaussian finite mixture models",
    "The R Journal", "8/1, pp. 205-233"))

mclust/inst/NEWS

Version 5.4.2

o Add mclustBICupdate() to merge the best values from two BIC results as returned by mclustBIC().
o Add mclustLoglik() to compute the maximal log-likelihood values from BIC results as returned by mclustBIC().
o Add option type = "level" to plot.densityMclust() and surfacePlot() to draw highest density regions.
o Add meXXI() and meXXX() to exported functions.
o Update vignette.

Version 5.4.1

o Add parametric bootstrap option (type = "pb") in MclustBootstrap().
o Add the options to get averages of resampling distributions in summary.MclustBootstrap() and to plot resampling-based confidence intervals in plot.MclustBootstrap().
o Add function catwrap() for wrapping printed lines at getOption("width") when using cat().
o mclust.options() now modifies the variable .mclust in the namespace of the package, so it should work even inside an mclust function call.
o Fix a bug in covw() when normalize = TRUE.
o Fix a bug in estepVEV() and estepVEE() when parameters contains Vinv.
o Fix a bug in plotDensityMclustd() when drawing marginal axes.
o Fix a bug in summary.MclustDA() when computing the classification error in the extreme case of a minor class of assignment.
o Fix a bug in the initialisation of mclustBIC() when a noise component is present for 1-dimensional data.
o Fix bugs in some examples documenting clustCombi() and related functions.

Version 5.4

o Model-based hierarchical clustering used to start the EM algorithm is now based on the scaled SVD transformation proposed by Scrucca and Raftery (2016). This change is not backward compatible. However, previous results can easily be obtained by issuing the command mclust.options(hcUse = "VARS"). For more details see help("mclust.options").
o Added 'subset' parameter in mclust.options() to control the maximal sample size to be used in the initial model-based hierarchical phase.
o predict.densityMclust() can optionally return the density on a logarithmic scale.
o Removed normalization of mixing proportions for new models in single mstep.
o Internal rewrite of code used by packageStartupMessage().
o Fix a small bug in MclustBootstrap() in the univariate data case.
o Fix bugs when both the noise and subset are provided for initialization.
o Vignette updated to include references, startup message, css style, etc.
o Various bug fixes in plotting methods when noise is present.
o Update references in citation() and man pages.
Version 5.3 (2017-05)

o added gmmhd() function and related methods.
o added MclustDRsubsel() function and related methods.
o added option to use a subset in the hierarchical initialization step when a noise component is present.
o plot.clustCombi() presents a menu in interactive sessions; data are no longer needed for classification plots, since they are extracted from the 'clustCombi' object.
o added combiTree() plot for 'clustCombi' objects.
o clPairs() now produces a single scatterplot in the bivariate case.
o fix a bug in imputeData() when a seed is provided. Now if a seed is provided the data matrix is reproducible.
o in imputeData() and imputePairs() some argument names have been modified to be consistent with the rest of the package.
o added functions matchCluster() and majorityVote().
o rewriting of print and summary methods for objects of class 'clustCombi'.
o added clustCombiOptim().
o fix a bug in randomPairs() when the number of rows of the input data is odd.
o fix a bug in plotDensityMclust2(), plotDensityMclustd() and surfacePlot() when a noise component is present.

Version 5.2.3 (2017-03)

o added native routine registration for Fortran code.
o fix lowercase argument PACKAGE in .Fortran() calls.

Version 5.2.2 (2017-01)

o fix a bug in a rare case when performing an extra M step at the end of the EM algorithm.

Version 5.2.1 (2017-01)

o replaced "structure(NULL, *)" with "structure(list(), *)".

Version 5.2 (2016-03)

o added argument 'x' to Mclust() to use BIC values from previous computations to avoid recomputing for the same models. The same argument and functionality were already available in mclustBIC().
o added argument 'x' to mclustICL() to use ICL values from previous computations to avoid recomputing for the same models.
o corrected a bug in plot.MclustBootstrap for the "mean" and "var" in the univariate case.
o modified uncertainty plots.
o introduction of as.Mclust and as.densityMclust to convert objects to specific mclust classes.
o solved a numerical accuracy problem in qclass when the scale of x is (very) large by making the tolerance eps scale dependent.
o use the transpose subroutine instead of the non-Fortran 77 TRANSPOSE function in mclustaddson.f.
o predict.Mclust and predict.MclustDR implement a more efficient and accurate algorithm for computing the densities.

Version 5.1 (2015-10)

o fix slow convergence for VVE and EVE models.
o fix a bug in orientation for model VEE.
o add an extra M-step and parameter update in the Mclust call via summaryMclustBIC.

Version 5.0.2 (2015-07)

o add option to MclustBootstrap for using the weighted likelihood bootstrap.
o add a plot method to MclustBootstrap.
o add errorBars function.
o add clPairsLegend function.
o add covw function.
o fix rescaling of mixing probabilities in new models.
o bug fixes.

Version 5.0.1 (2015-04)

o bug fixes.
o add print method to hc.

Version 5.0.0 (2015-03)

o added the four missing models (EVV, VEE, EVE, VVE) to the mclust family. A noise component is allowed, but no prior is available.
o added mclustBootstrapLRT function (and print and plot methods) for selecting the number of mixture components based on the bootstrap sequential likelihood ratio test.
o added MclustBootstrap function (and print and summary methods) for performing bootstrap inference. This provides standard errors for parameters and confidence intervals.
o the "A quick tour of mclust" vignette is included as HTML generated using rmarkdown and knitr. Older vignettes are included as other documentation for the package.
o modified arguments to mvn2plot to control colour, lty, lwd, and pch of ellipses and mean point.
o added functions emX, emXII, emXXI, emXXX, cdensX, cdensXII, cdensXXI, and cdensXXX, to deal with single-component cases, so calling the em function works even if G = 1.
o small changes to icl.R: icl is now a generic method, with specialized methods for 'Mclust' and 'MclustDA' objects.
o bug fixes for transformations in the initialization step when some variables are constant (i.e. the variance is zero) or one-dimensional data are provided.
o change the order of arguments in hc (and all the functions calling it).
o small modification to CITATION file upon request of CRAN maintainers.
o small bug fixes.

Version 4.4 (2014-09)

o add option for using a transformation of the variables in the hierarchical initialization step.
o add quantileMclust for computing the quantiles from a univariate Gaussian mixture distribution.
o bug fixes in summaryMclustBIC, summaryMclustBICn, and Mclust to return a single-column matrix of 1s for z even in the case of G = 1. This avoids errors in some plots.
o pdf files (previously included as vignettes) moved to inst/doc with corresponding index.html.

Version 4.3 (2014-03)

o bug fix for logLik.MclustDA() in the univariate case.
o add argument "what" to the predict.densityMclust() function for choosing what to retrieve: the mixture density or the component densities.
o the hc function has an additional parameter to control whether the original variables or a transformation of them should be used for hierarchical clustering.
o included "hcUse" in mclust.options to be passed as default to hc().
o original data (and class for classification models) are stored in the object returned by the main functions.
o add component "hypvol" to the Mclust object, which provides the hypervolume of the noise component when required, and otherwise is set to NA.
o add a warning when a prior is used and BIC returns NAs.
o bug fixes for summary.Mclust(), print.summary.Mclust(), plot.Mclust() and icl() when a noise component is present.
o some plots in plot.MclustDR() require plot.new() before calling plot.window().
o bug fixes for MclustDR() when p = 1.
o correction to Mclust man page.
o bug fixes.

Version 4.2 (2013-07)

o fix bug in sim* functions when no observations are assigned to a component.
o MclustDA allows fitting a single-class model.
o fix bug in summary.Mclust when a subset is used for initialization.
o fix a bug in the function qclass when ties are present in quantiles, so it always returns the required number of classes.
o various small bug fixes.

Version 4.1 (2013-04)

o new icl function for computing the integrated complete-data likelihood
o new mclustICL function with associated print and plot methods
o print.mclustBIC also shows the top models based on BIC
o modified summary.Mclust to also return the icl
o rewrite of the adjustedRandIndex function. This version is more efficient for large vectors
o updated help for adjustedRandIndex
o modifications to MclustDR and its summary method
o changed behavior of plot.MclustDR(..., what = "contour")
o improved plot of uncertainty for plot.MclustDR(..., what = "boundaries")
o corrected a bug for malformed GvHD data
o corrected version of qclass for selecting initial values in the case of 1D data when successive quantiles coincide
o corrected plotting of BIC values when models are fitted for only a single number of components G
o various bug fixes

Version 4.0 (2012-08)

o new summary and print methods for Mclust.
o new summary and print methods for densityMclust.
o included MclustDA function and methods.
o included MclustDR function and methods.
o included me.weighted function.
o restored hierarchical clustering capability for the EEE model (hcEEE).
o included vignettes for mclust version 4 from Technical Report No. 597 and for using weights in mclust.
o adoption of GPL (>= 2) license.

Version 3.5 (2012-07)

o added summary.Mclust
o new functions for plotting and summarizing density estimation
o various bug fixes
o clustCombi (code and doc provided by Jean-Patrick Baudry)
o bug fix: variable names lost when G = 1

Version 3.4.11 (2012-01)

o added NAMESPACE

Version 3.4.10 (2011-05)

o removed intrinsic gamma

Version 3.4.9 (2011-05)

o fixed hypvol function to avoid overflow
o fixed hypvol helpfile value description
o removed unused variables and tabs from source code
o switched to intrinsic gamma in source code
o fixed default warning in estepVEV and mstepVEV

Version 3.4.8 (2010-12)

o fixed output when G = 1 (it had NA for the missing "z" component)

Version 3.4.7 (2010-10)

o removed hierarchical clustering capability for the EEE model (hcEEE)
o The R 2.12.0 build failed due to a 32-bit Windows compiler error, forcing removal of the underlying Fortran code for hcEEE from the package, which does not contain errors and compiles on other platforms.

Version 3.4.6 (2010-08)

o added description of the parameters output component to the Mclust and summary.mclustBIC help files

Version 3.4.5 (2010-07)

o added densityMclust function

Version 3.4.4 (2010-04)

o fixed bug in covariance matrix output for EEV and VEV models

Version 3.4.3 (2010-02)

o bug fixes

Version 3.4.2 (2010-02)

o moved CITATION to inst and used standard format
o BibTeX entries are in inst/cite
o fixed bug in handling missing classes in mclustBIC
o clarified license wording

Version 3.4.1 (2010-01)

o corrected output description in mclustModel help file
o updated mclust manual reference to show revision

Version 3.4 (2009-12)

o updated defaultPrior help file
o added utility functions for imputing missing data with the mix package
o changed default max # of mixture components in each class from 9 to 3

Version 3.3.2 (2009-10)

o fixed problems with \cr in mclustOptions help file

Version 3.3.1 (2009-06)

o fixed plot.mclustBIC/plot.Mclust to handle modelNames
o changed "orientation" for VEV, VVV models to be consistent with R eigen() and the literature
o fixed some problems including doc for the noise option
o updated the unmap function to optionally include missing groups

Version 3.3 (2009-06)

o fixed bug in the "errors" option for randProj
o fixed boundary cases for the "noise" option

Version 3.2 (2009-04)

o added permission for CRAN distribution to LICENSE
o fixed problems with help files found by new parser
o changed PKG_LIBS order in src/Makevars
o fixed Mclust to handle sampling in data expression in call

Version 3.1.10 (2008-11)

o added EXPR = to all switch functions that didn't already have it

Version 3.1.9 (2008-10)

o added pro component to parameters in dens help file
o fixed some problems with the noise option

Version 3.1.1 (2007-03)

o Default seed changed in sim functions.
o Model name check added to various functions.
o Otherwise backward compatible with version 3.0.

Version 3.1 (2007-01)

o Most plotting functions changed to use color.
o Mclust/mclustBIC fixed to work with G = 1.
o Otherwise backward compatible with version 3.0.

Version 3.0 (2006-10)

o New functionality added, including conjugate priors for Bayesian regularization.
o Backward compatibility is not guaranteed since the implementation of some functions has changed to make them easier to use or maintain. mclust/inst/doc/0000755000176200001440000000000013510412700013274 5ustar liggesusersmclust/inst/doc/mclust.R0000644000176200001440000001275713510412677014757 0ustar liggesusers## ----setup, include=FALSE------------------------------------------------ library(knitr) opts_chunk$set(fig.align = "center", out.width = "90%", fig.width = 6, fig.height = 5.5, dev.args=list(pointsize=10), par = TRUE, # needed for setting hook collapse = TRUE, # collapse input & ouput code in chunks warning = FALSE) knit_hooks$set(par = function(before, options, envir) { if(before && options$fig.show != "none") par(family = "sans", mar=c(4.1,4.1,1.1,1.1), mgp=c(3,1,0), tcl=-0.5) }) set.seed(1) # for exact reproducibility ## ---- message = FALSE, echo=-2------------------------------------------- library(mclust) cat(mclust:::mclustStartupMessage(), sep="") ## ------------------------------------------------------------------------ data(diabetes) class <- diabetes$class table(class) X <- diabetes[,-1] head(X) clPairs(X, class) BIC <- mclustBIC(X) plot(BIC) summary(BIC) mod1 <- Mclust(X, x = BIC) summary(mod1, parameters = TRUE) plot(mod1, what = "classification") table(class, mod1$classification) plot(mod1, what = "uncertainty") ICL <- mclustICL(X) summary(ICL) plot(ICL) LRT <- mclustBootstrapLRT(X, modelName = "VVV") LRT ## ------------------------------------------------------------------------ (hc1 <- hc(X, modelName = "VVV", use = "SVD")) BIC1 <- mclustBIC(X, initialization = list(hcPairs = hc1)) # default summary(BIC1) (hc2 <- hc(X, modelName = "VVV", use = "VARS")) BIC2 <- mclustBIC(X, initialization = list(hcPairs = hc2)) summary(BIC2) (hc3 <- hc(X, modelName = "EEE", use = "SVD")) BIC3 <- mclustBIC(X, initialization = list(hcPairs = hc3)) summary(BIC3) ## ------------------------------------------------------------------------ BIC <- mclustBICupdate(BIC1, BIC2, BIC3) summary(BIC) plot(BIC) ## ---- echo=-1------------------------------------------------------------ set.seed(20181116) data(galaxies, package = "MASS") galaxies <- galaxies / 1000 BIC <- NULL for(j in 1:20) { rBIC <- mclustBIC(galaxies, verbose = FALSE, initialization = list(hcPairs = randomPairs(galaxies))) BIC <- mclustBICupdate(BIC, rBIC) } summary(BIC) plot(BIC) mod <- Mclust(galaxies, x = BIC) summary(mod) ## ------------------------------------------------------------------------ data(iris) class <- iris$Species table(class) X <- iris[,1:4] head(X) mod2 <- MclustDA(X, class, modelType = "EDDA") summary(mod2) plot(mod2, what = "scatterplot") plot(mod2, what = "classification") ## ------------------------------------------------------------------------ data(banknote) class <- banknote$Status table(class) X <- banknote[,-1] head(X) mod3 <- MclustDA(X, class) summary(mod3) plot(mod3, what = "scatterplot") plot(mod3, what = "classification") ## ------------------------------------------------------------------------ cv <- cvMclustDA(mod2, nfold = 10) str(cv) unlist(cv[3:4]) cv <- cvMclustDA(mod3, nfold = 10) str(cv) unlist(cv[3:4]) ## ------------------------------------------------------------------------ data(acidity) mod4 <- densityMclust(acidity) summary(mod4) plot(mod4, what = "BIC") plot(mod4, what = "density", data = acidity, breaks = 15) plot(mod4, what = "diagnostic", type = "cdf") plot(mod4, what = "diagnostic", type = "qq") ## ------------------------------------------------------------------------ 
data(faithful) mod5 <- densityMclust(faithful) summary(mod5) plot(mod5, what = "BIC") plot(mod5, what = "density") plot(mod5, what = "density", type = "hdr") plot(mod5, what = "density", type = "hdr", data = faithful, points.cex = 0.5) plot(mod5, what = "density", type = "persp") ## ------------------------------------------------------------------------ boot1 <- MclustBootstrap(mod1, nboot = 999, type = "bs") summary(boot1, what = "se") summary(boot1, what = "ci") par(mfrow=c(4,3)) plot(boot1, what = "pro") plot(boot1, what = "mean") par(mfrow=c(1,1)) ## ------------------------------------------------------------------------ boot4 <- MclustBootstrap(mod4, nboot = 999, type = "bs") summary(boot4, what = "se") summary(boot4, what = "ci") par(mfrow=c(2,2)) plot(boot4, what = "pro") plot(boot4, what = "mean") par(mfrow=c(1,1)) ## ------------------------------------------------------------------------ mod1dr <- MclustDR(mod1) summary(mod1dr) plot(mod1dr, what = "pairs") plot(mod1dr, what = "boundaries", ngrid = 200) mod1dr <- MclustDR(mod1, lambda = 1) summary(mod1dr) plot(mod1dr, what = "scatterplot") plot(mod1dr, what = "boundaries", ngrid = 200) ## ------------------------------------------------------------------------ mod2dr <- MclustDR(mod2) summary(mod2dr) plot(mod2dr, what = "scatterplot") plot(mod2dr, what = "boundaries", ngrid = 200) mod3dr <- MclustDR(mod3) summary(mod3dr) plot(mod3dr, what = "scatterplot") plot(mod3dr, what = "boundaries", ngrid = 200) ## ------------------------------------------------------------------------ mclust.options("bicPlotColors") mclust.options("classPlotColors") ## ------------------------------------------------------------------------ cbPalette <- c("#E69F00", "#56B4E9", "#009E73", "#999999", "#F0E442", "#0072B2", "#D55E00", "#CC79A7") bicPlotColors <- mclust.options("bicPlotColors") bicPlotColors[1:14] <- c(cbPalette, cbPalette[1:6]) mclust.options("bicPlotColors" = bicPlotColors) mclust.options("classPlotColors" = cbPalette) clPairs(iris[,-5], iris$Species) mod <- Mclust(iris[,-5]) plot(mod, what = "BIC") plot(mod, what = "classification") ## ------------------------------------------------------------------------ sessionInfo() mclust/inst/doc/mclust.Rmd0000644000176200001440000001726713427502265015301 0ustar liggesusers--- title: "A quick tour of mclust" author: "Luca Scrucca" date: "`r format(Sys.time(), '%d %b %Y')`" output: rmarkdown::html_vignette: toc: true number_sections: false css: "vignette.css" vignette: > %\VignetteIndexEntry{A quick tour of mclust} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- ```{r setup, include=FALSE} library(knitr) opts_chunk$set(fig.align = "center", out.width = "90%", fig.width = 6, fig.height = 5.5, dev.args=list(pointsize=10), par = TRUE, # needed for setting hook collapse = TRUE, # collapse input & ouput code in chunks warning = FALSE) knit_hooks$set(par = function(before, options, envir) { if(before && options$fig.show != "none") par(family = "sans", mar=c(4.1,4.1,1.1,1.1), mgp=c(3,1,0), tcl=-0.5) }) set.seed(1) # for exact reproducibility ``` # Introduction **mclust** is a contributed R package for model-based clustering, classification, and density estimation based on finite normal mixture modelling. It provides functions for parameter estimation via the EM algorithm for normal mixture models with a variety of covariance structures, and functions for simulation from these models. 
Also included are functions that combine model-based hierarchical clustering, EM for mixture estimation and the Bayesian Information Criterion (BIC) in comprehensive strategies for clustering, density estimation and discriminant analysis. Additional functionalities are available for displaying and visualizing fitted models along with clustering, classification, and density estimation results. This document gives a quick tour of **mclust** (version `r packageVersion("mclust")`) functionalities. It was written in R Markdown, using the [knitr](https://cran.r-project.org/package=knitr) package for production. See `help(package="mclust")` for further details and references provided by `citation("mclust")`. ```{r, message = FALSE, echo=-2} library(mclust) cat(mclust:::mclustStartupMessage(), sep="") ``` # Clustering ```{r} data(diabetes) class <- diabetes$class table(class) X <- diabetes[,-1] head(X) clPairs(X, class) BIC <- mclustBIC(X) plot(BIC) summary(BIC) mod1 <- Mclust(X, x = BIC) summary(mod1, parameters = TRUE) plot(mod1, what = "classification") table(class, mod1$classification) plot(mod1, what = "uncertainty") ICL <- mclustICL(X) summary(ICL) plot(ICL) LRT <- mclustBootstrapLRT(X, modelName = "VVV") LRT ``` ## Initialisation EM algorithm is used by **mclust** for maximum likelihood estimation. Initialisation of EM is performed using the partitions obtained from agglomerative hierarchical clustering. For details see `help(mclustBIC)` or `help(Mclust)`, and `help(hc)`. ```{r} (hc1 <- hc(X, modelName = "VVV", use = "SVD")) BIC1 <- mclustBIC(X, initialization = list(hcPairs = hc1)) # default summary(BIC1) (hc2 <- hc(X, modelName = "VVV", use = "VARS")) BIC2 <- mclustBIC(X, initialization = list(hcPairs = hc2)) summary(BIC2) (hc3 <- hc(X, modelName = "EEE", use = "SVD")) BIC3 <- mclustBIC(X, initialization = list(hcPairs = hc3)) summary(BIC3) ``` Update BIC by merging the best results: ```{r} BIC <- mclustBICupdate(BIC1, BIC2, BIC3) summary(BIC) plot(BIC) ``` Univariate fit using random starting points obtained by creating random agglomerations (see `help(randomPairs)`) and merging best results: ```{r, echo=-1} set.seed(20181116) data(galaxies, package = "MASS") galaxies <- galaxies / 1000 BIC <- NULL for(j in 1:20) { rBIC <- mclustBIC(galaxies, verbose = FALSE, initialization = list(hcPairs = randomPairs(galaxies))) BIC <- mclustBICupdate(BIC, rBIC) } summary(BIC) plot(BIC) mod <- Mclust(galaxies, x = BIC) summary(mod) ``` # Classification ## EDDA ```{r} data(iris) class <- iris$Species table(class) X <- iris[,1:4] head(X) mod2 <- MclustDA(X, class, modelType = "EDDA") summary(mod2) plot(mod2, what = "scatterplot") plot(mod2, what = "classification") ``` ## MclustDA ```{r} data(banknote) class <- banknote$Status table(class) X <- banknote[,-1] head(X) mod3 <- MclustDA(X, class) summary(mod3) plot(mod3, what = "scatterplot") plot(mod3, what = "classification") ``` ## Cross-validation error ```{r} cv <- cvMclustDA(mod2, nfold = 10) str(cv) unlist(cv[3:4]) cv <- cvMclustDA(mod3, nfold = 10) str(cv) unlist(cv[3:4]) ``` # Density estimation ## Univariate ```{r} data(acidity) mod4 <- densityMclust(acidity) summary(mod4) plot(mod4, what = "BIC") plot(mod4, what = "density", data = acidity, breaks = 15) plot(mod4, what = "diagnostic", type = "cdf") plot(mod4, what = "diagnostic", type = "qq") ``` ## Multivariate ```{r} data(faithful) mod5 <- densityMclust(faithful) summary(mod5) plot(mod5, what = "BIC") plot(mod5, what = "density") plot(mod5, what = "density", type = "hdr") plot(mod5, what = 
"density", type = "hdr", data = faithful, points.cex = 0.5) plot(mod5, what = "density", type = "persp") ``` # Bootstrap inference ```{r} boot1 <- MclustBootstrap(mod1, nboot = 999, type = "bs") summary(boot1, what = "se") summary(boot1, what = "ci") par(mfrow=c(4,3)) plot(boot1, what = "pro") plot(boot1, what = "mean") par(mfrow=c(1,1)) ``` ```{r} boot4 <- MclustBootstrap(mod4, nboot = 999, type = "bs") summary(boot4, what = "se") summary(boot4, what = "ci") par(mfrow=c(2,2)) plot(boot4, what = "pro") plot(boot4, what = "mean") par(mfrow=c(1,1)) ``` # Dimension reduction ## Clustering ```{r} mod1dr <- MclustDR(mod1) summary(mod1dr) plot(mod1dr, what = "pairs") plot(mod1dr, what = "boundaries", ngrid = 200) mod1dr <- MclustDR(mod1, lambda = 1) summary(mod1dr) plot(mod1dr, what = "scatterplot") plot(mod1dr, what = "boundaries", ngrid = 200) ``` ## Classification ```{r} mod2dr <- MclustDR(mod2) summary(mod2dr) plot(mod2dr, what = "scatterplot") plot(mod2dr, what = "boundaries", ngrid = 200) mod3dr <- MclustDR(mod3) summary(mod3dr) plot(mod3dr, what = "scatterplot") plot(mod3dr, what = "boundaries", ngrid = 200) ``` # Using colorblind-friendly palettes Most of the graphs produced by **mclust** use colors that by default are defined in the following options: ```{r} mclust.options("bicPlotColors") mclust.options("classPlotColors") ``` The first option controls colors used for plotting BIC, ICL, etc. curves, whereas the second option is used to assign colors for indicating clusters or classes when plotting data. Color-blind-friendly palettes can be defined and assigned to the above options as follows: ```{r} cbPalette <- c("#E69F00", "#56B4E9", "#009E73", "#999999", "#F0E442", "#0072B2", "#D55E00", "#CC79A7") bicPlotColors <- mclust.options("bicPlotColors") bicPlotColors[1:14] <- c(cbPalette, cbPalette[1:6]) mclust.options("bicPlotColors" = bicPlotColors) mclust.options("classPlotColors" = cbPalette) clPairs(iris[,-5], iris$Species) mod <- Mclust(iris[,-5]) plot(mod, what = "BIC") plot(mod, what = "classification") ``` The above color definitions are adapted from http://www.cookbook-r.com/Graphs/Colors_(ggplot2)/, but users can easily define their own palettes if needed. # References Scrucca L., Fop M., Murphy T. B. and Raftery A. E. (2016) mclust 5: clustering, classification and density estimation using Gaussian finite mixture models, *The R Journal*, 8/1, pp. 205-233. https://journal.r-project.org/archive/2016/RJ-2016-021/RJ-2016-021.pdf Fraley C. and Raftery A. E. (2002) Model-based clustering, discriminant analysis and density estimation, *Journal of the American Statistical Association*, 97/458, pp. 611-631. Fraley C., Raftery A. E., Murphy T. B. and Scrucca L. (2012) mclust Version 4 for R: Normal Mixture Modeling for Model-Based Clustering, Classification, and Density Estimation. *Technical Report* No. 597, Department of Statistics, University of Washington. ---- ```{r} sessionInfo() ```mclust/inst/doc/mclust.html0000644000176200001440001377664213510412700015523 0ustar liggesusers A quick tour of mclust

A quick tour of mclust

Luca Scrucca

07 Jul 2019

Introduction

mclust is a contributed R package for model-based clustering, classification, and density estimation based on finite normal mixture modelling. It provides functions for parameter estimation via the EM algorithm for normal mixture models with a variety of covariance structures, and functions for simulation from these models. Also included are functions that combine model-based hierarchical clustering, EM for mixture estimation and the Bayesian Information Criterion (BIC) in comprehensive strategies for clustering, density estimation and discriminant analysis. Additional functionalities are available for displaying and visualizing fitted models along with clustering, classification, and density estimation results.

This document gives a quick tour of mclust (version 5.4.5) functionalities. It was written in R Markdown, using the knitr package for production. See help(package="mclust") for further details and references provided by citation("mclust").

library(mclust)
##     __  ___________    __  _____________
##    /  |/  / ____/ /   / / / / ___/_  __/
##   / /|_/ / /   / /   / / / /\__ \ / /   
##  / /  / / /___/ /___/ /_/ /___/ // /    
## /_/  /_/\____/_____/\____//____//_/    version 5.4.5
## Type 'citation("mclust")' for citing this R package in publications.

Clustering

data(diabetes)
class <- diabetes$class
table(class)
## class
## Chemical   Normal    Overt 
##       36       76       33
X <- diabetes[,-1]
head(X)
##   glucose insulin sspg
## 1      80     356  124
## 2      97     289  117
## 3     105     319  143
## 4      90     356  199
## 5      90     323  240
## 6      86     381  157
clPairs(X, class)


BIC <- mclustBIC(X)
plot(BIC)

summary(BIC)
## Best BIC values:
##              VVV,3       VVV,4       EVE,6
## BIC      -4751.316 -4784.32213 -4785.24591
## BIC diff     0.000   -33.00573   -33.92951

mod1 <- Mclust(X, x = BIC)
summary(mod1, parameters = TRUE)
## ---------------------------------------------------- 
## Gaussian finite mixture model fitted by EM algorithm 
## ---------------------------------------------------- 
## 
## Mclust VVV (ellipsoidal, varying volume, shape, and orientation) model
## with 3 components: 
## 
##  log-likelihood   n df       BIC       ICL
##       -2303.496 145 29 -4751.316 -4770.169
## 
## Clustering table:
##  1  2  3 
## 81 36 28 
## 
## Mixing probabilities:
##         1         2         3 
## 0.5368974 0.2650129 0.1980897 
## 
## Means:
##              [,1]     [,2]       [,3]
## glucose  90.96239 104.5335  229.42136
## insulin 357.79083 494.8259 1098.25990
## sspg    163.74858 309.5583   81.60001
## 
## Variances:
## [,,1]
##          glucose    insulin       sspg
## glucose 57.18044   75.83206   14.73199
## insulin 75.83206 2101.76553  322.82294
## sspg    14.73199  322.82294 2416.99074
## [,,2]
##           glucose   insulin       sspg
## glucose  185.0290  1282.340  -509.7313
## insulin 1282.3398 14039.283 -2559.0251
## sspg    -509.7313 -2559.025 23835.7278
## [,,3]
##           glucose   insulin       sspg
## glucose  5529.250  20389.09  -2486.208
## insulin 20389.088  83132.48 -10393.004
## sspg    -2486.208 -10393.00   2217.533

plot(mod1, what = "classification")

table(class, mod1$classification)
##           
## class       1  2  3
##   Chemical  9 26  1
##   Normal   72  4  0
##   Overt     0  6 27

plot(mod1, what = "uncertainty")


ICL <- mclustICL(X)
summary(ICL)
## Best ICL values:
##              VVV,3       EVE,6       EVE,7
## ICL      -4770.169 -4797.38232 -4797.50566
## ICL diff     0.000   -27.21342   -27.33677
plot(ICL)


LRT <- mclustBootstrapLRT(X, modelName = "VVV")
LRT
## ------------------------------------------------------------- 
## Bootstrap sequential LRT for the number of mixture components 
## ------------------------------------------------------------- 
## Model        = VVV 
## Replications = 999 
##               LRTS bootstrap p-value
## 1 vs 2   361.16739             0.001
## 2 vs 3   123.49685             0.001
## 3 vs 4    16.76161             0.498

Initialisation

The EM algorithm is used by mclust for maximum likelihood estimation. Initialisation of EM is performed using the partitions obtained from model-based agglomerative hierarchical clustering. For details see help(mclustBIC) or help(Mclust), and help(hc).
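
The hc() calls below select the data transformation explicitly through the use argument. As a minimal sketch (not part of the original vignette output), the same choice can also be made globally through the "hcUse" option, which hc() picks up as its default; "SVD" is the default in recent versions, while "VARS" reproduces the behaviour of versions prior to 5.4:

mclust.options("hcUse")          # current default transformation, e.g. "SVD"
mclust.options(hcUse = "VARS")   # initialise on the original variables
BICvars <- mclustBIC(X)          # hierarchical initialisation now uses "VARS"
mclust.options(hcUse = "SVD")    # restore the default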

(hc1 <- hc(X, modelName = "VVV", use = "SVD"))
## Call:
## hc(data = X, modelName = "VVV", use = "SVD") 
## 
## Model-Based Agglomerative Hierarchical Clustering 
## Model name        = VVV 
## Use               = SVD 
## Number of objects = 145
BIC1 <- mclustBIC(X, initialization = list(hcPairs = hc1)) # default 
summary(BIC1)
## Best BIC values:
##              VVV,3       VVV,4       EVE,6
## BIC      -4751.316 -4784.32213 -4785.24591
## BIC diff     0.000   -33.00573   -33.92951

(hc2 <- hc(X, modelName = "VVV", use = "VARS"))
## Call:
## hc(data = X, modelName = "VVV", use = "VARS") 
## 
## Model-Based Agglomerative Hierarchical Clustering 
## Model name        = VVV 
## Use               = VARS 
## Number of objects = 145
BIC2 <- mclustBIC(X, initialization = list(hcPairs = hc2))
summary(BIC2)
## Best BIC values:
##              VVV,3       VVE,3       EVE,4
## BIC      -4760.091 -4775.53693 -4793.26143
## BIC diff     0.000   -15.44628   -33.17079

(hc3 <- hc(X, modelName = "EEE", use = "SVD"))
## Call:
## hc(data = X, modelName = "EEE", use = "SVD") 
## 
## Model-Based Agglomerative Hierarchical Clustering 
## Model name        = EEE 
## Use               = SVD 
## Number of objects = 145
BIC3 <- mclustBIC(X, initialization = list(hcPairs = hc3))
summary(BIC3)
## Best BIC values:
##              VVV,3        VVE,4       VVE,3
## BIC      -4751.354 -4757.091572 -4775.69587
## BIC diff     0.000    -5.737822   -24.34212

Update BIC by merging the best results:

BIC <- mclustBICupdate(BIC1, BIC2, BIC3)
summary(BIC)
## Best BIC values:
##              VVV,3        VVE,4       VVE,3
## BIC      -4751.316 -4757.091572 -4775.53693
## BIC diff     0.000    -5.775172   -24.22053
plot(BIC)

A univariate fit using random starting points obtained by creating random agglomerations (see help(randomPairs)) and merging the best results:

data(galaxies, package = "MASS") 
galaxies <- galaxies / 1000
BIC <- NULL
for(j in 1:20)
{
  rBIC <- mclustBIC(galaxies, verbose = FALSE,
                    initialization = list(hcPairs = randomPairs(galaxies)))
  BIC <- mclustBICupdate(BIC, rBIC)
}
summary(BIC)
## Best BIC values:
##                V,3         V,4        V,5
## BIC      -441.6122 -443.399746 -446.34966
## BIC diff    0.0000   -1.787536   -4.73745
plot(BIC)

mod <- Mclust(galaxies, x = BIC)
summary(mod)
## ---------------------------------------------------- 
## Gaussian finite mixture model fitted by EM algorithm 
## ---------------------------------------------------- 
## 
## Mclust V (univariate, unequal variance) model with 3 components: 
## 
##  log-likelihood  n df       BIC       ICL
##       -203.1792 82  8 -441.6122 -441.6126
## 
## Clustering table:
##  1  2  3 
##  3  7 72

Classification

EDDA

data(iris)
class <- iris$Species
table(class)
## class
##     setosa versicolor  virginica 
##         50         50         50
X <- iris[,1:4]
head(X)
##   Sepal.Length Sepal.Width Petal.Length Petal.Width
## 1          5.1         3.5          1.4         0.2
## 2          4.9         3.0          1.4         0.2
## 3          4.7         3.2          1.3         0.2
## 4          4.6         3.1          1.5         0.2
## 5          5.0         3.6          1.4         0.2
## 6          5.4         3.9          1.7         0.4
mod2 <- MclustDA(X, class, modelType = "EDDA")
summary(mod2)
## ------------------------------------------------ 
## Gaussian finite mixture model for classification 
## ------------------------------------------------ 
## 
## EDDA model summary: 
## 
##  log-likelihood   n df       BIC
##       -187.7097 150 36 -555.8024
##             
## Classes       n     % Model G
##   setosa     50 33.33   VEV 1
##   versicolor 50 33.33   VEV 1
##   virginica  50 33.33   VEV 1
## 
## Training confusion matrix:
##             Predicted
## Class        setosa versicolor virginica
##   setosa         50          0         0
##   versicolor      0         47         3
##   virginica       0          0        50
## Classification error = 0.02 
## Brier score          = 0.0127
plot(mod2, what = "scatterplot")

plot(mod2, what = "classification")

MclustDA

data(banknote)
class <- banknote$Status
table(class)
## class
## counterfeit     genuine 
##         100         100
X <- banknote[,-1]
head(X)
##   Length  Left Right Bottom  Top Diagonal
## 1  214.8 131.0 131.1    9.0  9.7    141.0
## 2  214.6 129.7 129.7    8.1  9.5    141.7
## 3  214.8 129.7 129.7    8.7  9.6    142.2
## 4  214.8 129.7 129.6    7.5 10.4    142.0
## 5  215.0 129.6 129.7   10.4  7.7    141.8
## 6  215.7 130.8 130.5    9.0 10.1    141.4
mod3 <- MclustDA(X, class)
summary(mod3)
## ------------------------------------------------ 
## Gaussian finite mixture model for classification 
## ------------------------------------------------ 
## 
## MclustDA model summary: 
## 
##  log-likelihood   n df       BIC
##       -646.0798 200 66 -1641.849
##              
## Classes         n  % Model G
##   counterfeit 100 50   EVE 2
##   genuine     100 50   XXX 1
## 
## Training confusion matrix:
##              Predicted
## Class         counterfeit genuine
##   counterfeit         100       0
##   genuine               0     100
## Classification error = 0 
## Brier score          = 0
plot(mod3, what = "scatterplot")

plot(mod3, what = "classification")

Cross-validation error

cv <- cvMclustDA(mod2, nfold = 10)
str(cv)
## List of 5
##  $ classification: Factor w/ 3 levels "setosa","versicolor",..: 1 1 1 1 1 1 1 1 1 1 ...
##  $ z             : num [1:150, 1:3] 1 1 1 1 1 1 1 1 1 1 ...
##   ..- attr(*, "dimnames")=List of 2
##   .. ..$ : NULL
##   .. ..$ : chr [1:3] "setosa" "versicolor" "virginica"
##  $ error         : num 0.0267
##  $ brier         : logi NA
##  $ se            : num 0.0109
unlist(cv[3:4])
##      error      brier 
## 0.02666667         NA
cv <- cvMclustDA(mod3, nfold = 10)
str(cv)
## List of 5
##  $ classification: Factor w/ 2 levels "counterfeit",..: 2 2 2 2 2 2 2 2 2 2 ...
##  $ z             : num [1:200, 1:2] 6.72e-05 3.69e-19 5.13e-28 3.32e-20 2.26e-29 ...
##   ..- attr(*, "dimnames")=List of 2
##   .. ..$ : NULL
##   .. ..$ : chr [1:2] "counterfeit" "genuine"
##  $ error         : num 0
##  $ brier         : logi NA
##  $ se            : num 0
unlist(cv[3:4])
## error brier 
##     0    NA

Density estimation

Univariate

data(acidity)
mod4 <- densityMclust(acidity)
summary(mod4)
## ------------------------------------------------------- 
## Density estimation via Gaussian finite mixture modeling 
## ------------------------------------------------------- 
## 
## Mclust E (univariate, equal variance) model with 2 components: 
## 
##  log-likelihood   n df       BIC       ICL
##       -185.9493 155  4 -392.0723 -398.5554
plot(mod4, what = "BIC")

plot(mod4, what = "density", data = acidity, breaks = 15)

plot(mod4, what = "diagnostic", type = "cdf")

plot(mod4, what = "diagnostic", type = "qq")

Multivariate

data(faithful)
mod5 <- densityMclust(faithful)
summary(mod5)
## ------------------------------------------------------- 
## Density estimation via Gaussian finite mixture modeling 
## ------------------------------------------------------- 
## 
## Mclust EEE (ellipsoidal, equal volume, shape and orientation) model with 3
## components: 
## 
##  log-likelihood   n df       BIC       ICL
##       -1126.326 272 11 -2314.316 -2357.824
plot(mod5, what = "BIC")

plot(mod5, what = "density")

plot(mod5, what = "density", type = "hdr")
plot(mod5, what = "density", type = "hdr",
     data = faithful, points.cex = 0.5)

plot(mod5, what = "density", type = "persp")

Bootstrap inference

boot1 <- MclustBootstrap(mod1, nboot = 999, type = "bs")
summary(boot1, what = "se")
## ---------------------------------------------------------- 
## Resampling standard errors 
## ---------------------------------------------------------- 
## Model                      = VVV 
## Num. of mixture components = 3 
## Replications               = 999 
## Type                       = nonparametric bootstrap 
## 
## Mixing probabilities:
##          1          2          3 
## 0.05185780 0.05058160 0.03559685 
## 
## Means:
##                1         2         3
## glucose 1.042239  3.444948 16.340816
## insulin 7.554105 29.047203 63.483315
## sspg    7.669033 31.684647  9.926121
## 
## Variances:
## [,,1]
##          glucose   insulin      sspg
## glucose 10.78177  51.28084  51.61617
## insulin 51.28084 529.62298 416.38176
## sspg    51.61617 416.38176 623.81098
## [,,2]
##           glucose   insulin      sspg
## glucose  65.66172  616.6785  442.0993
## insulin 616.67852 7279.0671 3240.3558
## sspg    442.09927 3240.3558 7070.4152
## [,,3]
##           glucose   insulin      sspg
## glucose 1045.6542  4178.685  667.2709
## insulin 4178.6846 18873.253 2495.0278
## sspg     667.2709  2495.028  506.8173
summary(boot1, what = "ci")
## ---------------------------------------------------------- 
## Resampling confidence intervals 
## ---------------------------------------------------------- 
## Model                      = VVV 
## Num. of mixture components = 3 
## Replications               = 999 
## Type                       = nonparametric bootstrap 
## Confidence level           = 0.95 
## 
## Mixing probabilities:
##               1         2         3
## 2.5%  0.4490043 0.1510533 0.1324862
## 97.5% 0.6518326 0.3548749 0.2688038
## 
## Means:
## [,,1]
##        glucose  insulin     sspg
## 2.5%  89.13950 344.9890 150.8405
## 97.5% 93.16603 374.7221 181.8322
## [,,2]
##         glucose  insulin     sspg
## 2.5%   98.82567 447.4121 257.9011
## 97.5% 112.28459 561.3273 374.6194
## [,,3]
##        glucose   insulin      sspg
## 2.5%  198.5986  969.6231  63.22103
## 97.5% 263.2932 1226.2654 101.09078
## 
## Variances:
## [,,1]
##        glucose  insulin     sspg
## 2.5%  38.65508 1234.198 1514.416
## 97.5% 79.43401 3287.722 4146.024
## [,,2]
##         glucose   insulin     sspg
## 2.5%   88.35268  3514.662 12583.92
## 97.5% 358.15175 31416.557 39228.47
## [,,3]
##        glucose   insulin     sspg
## 2.5%  3377.773  47477.74 1317.041
## 97.5% 7379.344 120297.75 3229.747

par(mfrow=c(4,3))
plot(boot1, what = "pro")
plot(boot1, what = "mean")

par(mfrow=c(1,1))
boot4 <- MclustBootstrap(mod4, nboot = 999, type = "bs")
summary(boot4, what = "se")
## ---------------------------------------------------------- 
## Resampling standard errors 
## ---------------------------------------------------------- 
## Model                      = E 
## Num. of mixture components = 2 
## Replications               = 999 
## Type                       = nonparametric bootstrap 
## 
## Mixing probabilities:
##          1          2 
## 0.04130937 0.04130937 
## 
## Means:
##          1          2 
## 0.04669993 0.06719883 
## 
## Variances:
##          1          2 
## 0.02376885 0.02376885
summary(boot4, what = "ci")
## ---------------------------------------------------------- 
## Resampling confidence intervals 
## ---------------------------------------------------------- 
## Model                      = E 
## Num. of mixture components = 2 
## Replications               = 999 
## Type                       = nonparametric bootstrap 
## Confidence level           = 0.95 
## 
## Mixing probabilities:
##               1         2
## 2.5%  0.5364895 0.3004131
## 97.5% 0.6995869 0.4635105
## 
## Means:
##              1        2
## 2.5%  4.279055 6.184439
## 97.5% 4.461108 6.449465
## 
## Variances:
##               1         2
## 2.5%  0.1395796 0.1395796
## 97.5% 0.2317769 0.2317769

par(mfrow=c(2,2))
plot(boot4, what = "pro")
plot(boot4, what = "mean")

par(mfrow=c(1,1))

Dimension reduction

Clustering

mod1dr <- MclustDR(mod1)
summary(mod1dr)
## ----------------------------------------------------------------- 
## Dimension reduction for model-based clustering and classification 
## ----------------------------------------------------------------- 
## 
## Mixture model type: Mclust (VVV, 3) 
##         
## Clusters  n
##        1 81
##        2 36
##        3 28
## 
## Estimated basis vectors: 
##              Dir1     Dir2      Dir3
## glucose -0.988671  0.76532 -0.966565
## insulin  0.142656 -0.13395  0.252109
## sspg    -0.046689  0.62955  0.046837
## 
##                Dir1     Dir2      Dir3
## Eigenvalues  1.3506  0.75608   0.53412
## Cum. %      51.1440 79.77436 100.00000
plot(mod1dr, what = "pairs")

plot(mod1dr, what = "boundaries", ngrid = 200)


mod1dr <- MclustDR(mod1, lambda = 1)
summary(mod1dr)
## ----------------------------------------------------------------- 
## Dimension reduction for model-based clustering and classification 
## ----------------------------------------------------------------- 
## 
## Mixture model type: Mclust (VVV, 3) 
##         
## Clusters  n
##        1 81
##        2 36
##        3 28
## 
## Estimated basis vectors: 
##              Dir1     Dir2
## glucose  0.764699  0.86359
## insulin -0.643961 -0.22219
## sspg     0.023438 -0.45260
## 
##                Dir1      Dir2
## Eigenvalues  1.2629   0.35218
## Cum. %      78.1939 100.00000
plot(mod1dr, what = "scatterplot")

plot(mod1dr, what = "boundaries", ngrid = 200)

Classification

mod2dr <- MclustDR(mod2)
summary(mod2dr)
## ----------------------------------------------------------------- 
## Dimension reduction for model-based clustering and classification 
## ----------------------------------------------------------------- 
## 
## Mixture model type: EDDA 
##             
## Classes       n Model G
##   setosa     50   VEV 1
##   versicolor 50   VEV 1
##   virginica  50   VEV 1
## 
## Estimated basis vectors: 
##                  Dir1      Dir2     Dir3     Dir4
## Sepal.Length  0.17425 -0.193663  0.64081 -0.46231
## Sepal.Width   0.45292  0.066561  0.34852  0.57110
## Petal.Length -0.61629 -0.311030 -0.42366  0.46256
## Petal.Width  -0.62024  0.928076  0.53703 -0.49613
## 
##                 Dir1     Dir2      Dir3       Dir4
## Eigenvalues  0.94747  0.68835  0.076141   0.052607
## Cum. %      53.69408 92.70374 97.018700 100.000000
plot(mod2dr, what = "scatterplot")

plot(mod2dr, what = "boundaries", ngrid = 200)


mod3dr <- MclustDR(mod3)
summary(mod3dr)
## ----------------------------------------------------------------- 
## Dimension reduction for model-based clustering and classification 
## ----------------------------------------------------------------- 
## 
## Mixture model type: MclustDA 
##              
## Classes         n Model G
##   counterfeit 100   EVE 2
##   genuine     100   XXX 1
## 
## Estimated basis vectors: 
##              Dir1      Dir2     Dir3      Dir4      Dir5      Dir6
## Length   -0.10027 -0.327553  0.79718 -0.033721 -0.317043  0.084618
## Left     -0.21760 -0.305350 -0.30266 -0.893676  0.371043 -0.565611
## Right     0.29180 -0.018877 -0.49600  0.406605 -0.861020  0.481331
## Bottom    0.57603  0.445501  0.12002 -0.034570  0.004359 -0.078688
## Top       0.57555  0.385645  0.10093 -0.103629  0.136005  0.625416
## Diagonal -0.44088  0.672251 -0.04781 -0.151473 -0.044035  0.209542
## 
##                 Dir1     Dir2     Dir3     Dir4      Dir5       Dir6
## Eigenvalues  0.87241  0.55372  0.48603  0.13301  0.053113   0.027239
## Cum. %      41.04429 67.09530 89.96182 96.21965 98.718473 100.000000
plot(mod3dr, what = "scatterplot")

plot(mod3dr, what = "boundaries", ngrid = 200)

Using colorblind-friendly palettes

Most of the graphs produced by mclust use colors that by default are defined in the following options:

mclust.options("bicPlotColors")
##       EII       VII       EEI       EVI       VEI       VVI       EEE 
##    "gray"   "black" "#218B21" "#41884F" "#508476" "#58819C" "#597DC3" 
##       EVE       VEE       VVE       EEV       VEV       EVV       VVV 
## "#5178EA" "#716EE7" "#9B60B8" "#B2508B" "#C03F60" "#C82A36" "#CC0000" 
##         E         V 
##    "gray"   "black"
mclust.options("classPlotColors")
##  [1] "dodgerblue2"    "red3"           "green3"         "slateblue"     
##  [5] "darkorange"     "skyblue1"       "violetred4"     "forestgreen"   
##  [9] "steelblue4"     "slategrey"      "brown"          "black"         
## [13] "darkseagreen"   "darkgoldenrod3" "olivedrab"      "royalblue"     
## [17] "tomato4"        "cyan2"          "springgreen2"

The first option controls the colors used for plotting the BIC, ICL, etc. curves, whereas the second option is used to assign colors to clusters or classes when plotting data.

Color-blind-friendly palettes can be defined and assigned to the above options as follows:

cbPalette <- c("#E69F00", "#56B4E9", "#009E73", "#999999", "#F0E442", "#0072B2", "#D55E00", "#CC79A7")
bicPlotColors <- mclust.options("bicPlotColors")
bicPlotColors[1:14] <- c(cbPalette, cbPalette[1:6])
mclust.options("bicPlotColors" = bicPlotColors)
mclust.options("classPlotColors" = cbPalette)

clPairs(iris[,-5], iris$Species)

mod <- Mclust(iris[,-5])
plot(mod, what = "BIC")

plot(mod, what = "classification")

The above color definitions are adapted from http://www.cookbook-r.com/Graphs/Colors_(ggplot2)/, but users can easily define their own palettes if needed.
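
As a small additional sketch (not part of the original vignette, with illustrative object names), the current settings can be kept in separate objects before they are overridden and then restored, so that later plots are not affected:

defaultBicColors   <- mclust.options("bicPlotColors")    # keep copies of the defaults
defaultClassColors <- mclust.options("classPlotColors")
# ... produce plots with a custom palette as shown above ...
mclust.options("bicPlotColors" = defaultBicColors)       # restore the previous settings
mclust.options("classPlotColors" = defaultClassColors)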

References

Scrucca L., Fop M., Murphy T. B. and Raftery A. E. (2016) mclust 5: clustering, classification and density estimation using Gaussian finite mixture models, The R Journal, 8/1, pp. 205-233. https://journal.r-project.org/archive/2016/RJ-2016-021/RJ-2016-021.pdf

Fraley C. and Raftery A. E. (2002) Model-based clustering, discriminant analysis and density estimation, Journal of the American Statistical Association, 97/458, pp. 611-631.

Fraley C., Raftery A. E., Murphy T. B. and Scrucca L. (2012) mclust Version 4 for R: Normal Mixture Modeling for Model-Based Clustering, Classification, and Density Estimation. Technical Report No. 597, Department of Statistics, University of Washington.


sessionInfo()
## R version 3.6.0 (2019-04-26)
## Platform: x86_64-apple-darwin15.6.0 (64-bit)
## Running under: macOS Sierra 10.12.5
## 
## Matrix products: default
## BLAS:   /Library/Frameworks/R.framework/Versions/3.6/Resources/lib/libRblas.0.dylib
## LAPACK: /Library/Frameworks/R.framework/Versions/3.6/Resources/lib/libRlapack.dylib
## 
## locale:
## [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
## 
## attached base packages:
## [1] stats     graphics  grDevices utils     datasets  methods   base     
## 
## other attached packages:
## [1] mclust_5.4.5 knitr_1.23  
## 
## loaded via a namespace (and not attached):
##  [1] compiler_3.6.0  magrittr_1.5    htmltools_0.3.6 tools_3.6.0    
##  [5] yaml_2.2.0      Rcpp_1.0.1      stringi_1.4.3   rmarkdown_1.13 
##  [9] stringr_1.4.0   digest_0.6.19   xfun_0.8        evaluate_0.14
mclust/src/0000755000176200001440000000000013510412701012342 5ustar liggesusersmclust/src/Makevars0000644000176200001440000000006013475427014014047 0ustar liggesusersPKG_LIBS = $(LAPACK_LIBS) $(BLAS_LIBS) $(FLIBS) mclust/src/mclustaddson.f0000644000176200001440000021453113507676041015236 0ustar liggesusers* ===================================================================== subroutine transpose(X, p) * * Compute transpose of a matrix * * ===================================================================== implicit NONE integer :: p, i, j double precision :: X(p,p), temp do j = 2, p do i = 1, j-1 temp = X(i,j) X(i,j) = X(j,i) X(j,i) = temp end do end do return end * ===================================================================== subroutine crossprodf(X, Y, n, p, q, XTY) * * Given matrices X and Y of dimension (n x p) and (n x q) computes * the matrix of cross-product, i.e. X' Y * * ===================================================================== implicit NONE integer n, p, q double precision X(n,p), Y(n,q), XTY(p,q) * Compute X'Y using DGEMM blas subroutine call DGEMM('T', 'N', p, q, n, 1.d0, X, n, Y, n, 0.d0, XTY, p) end * ====================================================================== subroutine covwf ( X, Z, n, p, G, M, S, W ) * * Given data matrix X(n x p) and weight matrix Z(n x G) computes * weighted means M(p x G), weighted covariance matrices S(p x p x G) * and weighted scattering matrices W(p x p x G) * * ====================================================================== implicit none integer :: n, p, G double precision :: X(n,p), Z(n,G) double precision :: M(p,G), S(p,p,G), W(p,p,G) integer :: j, k double precision :: sumZ(G), temp(n,p) * compute X'Z using BLAS call dgemm('T', 'N', p, G, n, 1.d0, X, n, Z, n, 0.d0, M, p) * compute row sums of Z sumZ = sum(Z, DIM = 1) do k = 1,G * compute means call dscal(p, (1.d0/sumZ(k)), M(:,k), 1) do j = 1,p * compute sqrt(Z) * (X - M) temp(:,j) = sqrt(Z(:,k)) * (X(:,j) - M(j,k)) end do * compute scattering matrix call dgemm('T', 'N', p, p, n, 1.d0, temp, n, temp, n, * 0.d0, W(:,:,k), p) * compute covariance matrix S(:,:,k) = W(:,:,k)/sumZ(k) end do return end ************************************************************************ **** EVV model ************************************************************************ * ====================================================================== subroutine msevv (x,z, n,p,G, mu,O,U,scale,shape,pro, lwork,info, * eps) * Maximization step for model EEV * ====================================================================== implicit none integer :: n, p, G double precision :: x(n,p), z(n,G) double precision :: mu(p,G), O(p,p,*), U(p,p,*), pro(G) double precision :: scale(G), shape(p,G) double precision :: sumz(G) integer :: i, j, k, info, lwork, l double precision :: temp(p), wrk(lwork), eps, dummy double precision :: FLMAX parameter (FLMAX = 1.7976931348623157d308) * * double precision :: BIGLOG * parameter (BIGLOG = 709.d0) * * double precision :: SMALOG * parameter (SMALOG = -708.d0) *----------------------------------------------------------------------- * colsums of z sumz = sum(z, dim = 1) * a priori probabilities pro = sumz / dble(n) * pro = sumz / sum(sumz) * if there is noise sum(sumz) does not sum to n. See help(mstep) do k = 1,G do j = 1,p mu(j,k) = sum(x(:,j)*z(:,k))/sumz(k) end do do i = 1,n temp = ( x(i,:) - mu(:,k) ) * sqrt(z(i,k)) * U(:,:,k) = U(:,:,k) + * * spread(temp, dim = 2, ncopies = p)* * * spread(temp, dim = 1, ncopies = p) * outer product, Press et al. 
(1992), p. 970 call dger(p, p, 1.d0, temp, 1, temp, 1, U(:,:,k), p) * more efficient end do * U contains the weighted scattering matrix O(:,:,k) = U(:,:,k) * call dgesvd('O', 'N', p, p, O(:,:,k), p, shape(:,k), * * dummy, 1, dummy, 1, wrk, lwork, info) call dgesvd('N', 'O', p, p, O(:,:,k), p, shape(:,k), * dummy, 1, dummy, 1, wrk, lwork, info) * O now contains eigenvectors of the scattering matrix * ##### NOTE: O is transposed * shape contains the eigenvalues * check if dgesvd converged (info == 0) if (info .ne. 0) then l = info else scale(k) = exp( sum( log(shape(:,k)) ) )**(1.d0/p) call dscal(p*p, 1.d0/scale(k), U(:,:,k), 1) call dscal(p, 1.d0/scale(k), shape(:,k), 1) * now U is the matrix Ck (Celeux, Govaert 1995, p.787) * and shape is the proper scaled shape (matrix A) end if end do * check very small eigenvalues (singular covariance) if (minval(shape) .le. sqrt(eps) .or. * minval(scale) .le. sqrt(eps)) then shape = FLMAX scale = FLMAX return end if scale(1) = sum(scale) / sum(sumz) return end * ====================================================================== subroutine esevv (x,z, n,p,G,Gnoise, mu,O,scale,shape,pro, Vinv, * loglik, eps) * Expectation step for model EVV * ====================================================================== implicit none integer :: n, p, G, Gnoise double precision :: x(n,p), z(n,Gnoise) double precision :: mu(p,G), O(p,p,G), scale, shape(p,G) double precision :: Vinv, pro(Gnoise) double precision :: temp1(p), temp2(p), temp3 integer :: i, k, j double precision :: const, logdet, loglik, eps double precision :: log2pi parameter (log2pi = 1.837877066409345d0) double precision :: FLMAX parameter (FLMAX = 1.7976931348623157d308) external :: ddot double precision :: ddot * double precision :: smalog * parameter (smalog = -708.d0) *----------------------------------------------------------------------- * check very small eigenvalues (singular covariance) if (minval(shape) .le. sqrt(eps) .or. scale .le. sqrt(eps)) then loglik = FLMAX return end if const = (-dble(p)/2.d0)*log2pi do k = 1,G logdet = 0.d0 do j = 1,p logdet = logdet + ( log(shape(j,k)) + log(scale) ) end do * compute mahalanobis distance for each observation * ##### NOTE: O is transposed do i = 1,n temp1 = ( x(i,:) - mu(:,k) ) call dcopy(p, 0.d0, 0, temp2, 1) call dgemv('N', p, p, 1.d0, * O(:,:,k), p, temp1, 1, 0.d0, temp2, 1) temp2 = temp2/sqrt(scale*shape(:,k)) temp3 = ddot(p, temp2, 1, temp2, 1) * temp3 contains the mahalanobis distance * z(i,k) = const - logdet/2.d0 - temp3/2.d0 + log(pro(k)) z(i,k) = const - logdet/2.d0 - temp3/2.d0 * help(cdens) --> The densities are not scaled by mixing proportions end do * z contains the log-density log(N(x|theta_k)) end do if ( pro(1) .lt. 0.d0 ) return * cdens function * noise component if (Vinv .gt. 0.d0) then call dcopy( n, log(Vinv), 0, z(:,Gnoise), 1) end if * now column Gnoise of z contains log(Vinv) do i = 1,n z(i,:) = z(i,:) + log( pro(:) ) * Numerical Recipes pag.844 temp3 = maxval(z(i,:)) temp1(1) = temp3 + log( sum(exp(z(i,:) - temp3)) ) loglik = loglik + temp1(1) * ##### NOTE: do we need to check if (z - zmax) is too small? 
z(i,:) = exp( z(i,:) - temp1(1) ) * re-normalize probabilities temp3 = sum( z(i,:) ) call dscal( Gnoise, 1.d0/temp3, z(i,:), 1 ) end do return end * ====================================================================== subroutine meevv (x,z, n,p,G,Gnoise, mu,O,U,scale,shape,pro,Vinv, * loglik, eqpro,itmax,tol,eps, * niterout,errout,lwork,info) * Maximization-expectation algorithm for model EVV * ====================================================================== implicit none logical :: eqpro integer :: n, p, G, Gnoise double precision :: x(n,p), z(n,Gnoise) double precision :: mu(p,G), O(p,p,G),scale(G),shape(p,G) double precision :: Vinv, pro(Gnoise) double precision :: U(p,p,G), sumz(Gnoise) double precision :: temp1(p), temp2(p), temp3, scsh(p) * double precision :: temp(*) integer :: i, j, k, info, lwork, l, itmax, niterout double precision :: tol, eps, errout, rteps, dummy double precision :: const, logdet, loglik, lkprev, wrk(lwork) double precision :: log2pi parameter (log2pi = 1.837877066409345d0) double precision :: FLMAX parameter (FLMAX = 1.7976931348623157d308) external :: ddot double precision :: ddot * double precision :: smalog * parameter (smalog = -708.d0) *----------------------------------------------------------------------- l = 0 rteps = sqrt(eps) niterout = 0 errout = FLMAX lkprev = FLMAX/2 loglik = FLMAX const = (-dble(p)/2.d0)*log2pi * WHILE loop using goto statement 100 continue niterout = niterout + 1 sumz = sum(z, dim = 1) if ( eqpro ) then if ( Vinv .gt. 0 ) then pro(Gnoise) = sumz(Gnoise) / dble(n) pro(1:G) = ( 1 - pro(Gnoise) ) / dble(G) sumz = pro * dble(n) else pro = 1 / dble(G) sumz = pro * dble(n) end if else pro = sumz / dble(n) end if * re-initialise U call dcopy(p*p*G, 0.d0, 0, U, 1) * M step.......................................................... do k = 1,G do j = 1,p mu(j,k) = sum(x(:,j)*z(:,k))/sumz(k) end do do i = 1,n temp1 = ( x(i,:) - mu(:,k) ) * sqrt(z(i,k)) call dger(p, p, 1.d0, temp1, 1, temp1, 1, U(:,:,k), p) end do * U contains the weighted scattering matrix O(:,:,k) = U(:,:,k) call dgesvd('N', 'O', p, p, O(:,:,k), p, shape(:,k), * dummy, 1, dummy, 1, wrk, lwork, info) * O now contains eigenvectors of the scattering matrix * ##### NOTE: O is transposed * shape contains the eigenvalues * check if dgesvd converged (info == 0) if (info .ne. 0) then l = info return else scale(k) = exp( sum( log(shape(:,k)) ) )**(1.d0/dble(p)) call dscal(p*p, 1.d0/scale(k), U(:,:,k), 1) call dscal(p, 1.d0/scale(k), shape(:,k), 1) * now U is the matrix Ck (Celeux, Govaert 1995, p.787) * and shape is the proper scaled shape (matrix A) end if end do if ( Vinv .gt. 0.d0 ) then scale(1) = sum(scale) / sum(sumz(1:G)) else scale(1) = sum(scale)/dble(n) end if * if noise lambda = num/sum_{k=1}^{G} n_k; pag. 787 Celeux, Govaert * ................................................................ * check very small eigenvalues (singular covariance) if (minval(shape) .le. rteps .or. minval(scale) .le. rteps) then loglik = FLMAX return end if * E step.......................................................... 
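*      E step for the EVV model: Sigma_k = lambda * D_k A_k D_k', with
*      common volume lambda = scale(1), component shape A_k given by
*      shape(:,k), and orientation D_k stored TRANSPOSED in O(:,:,k).
*      For each observation the centred vector is rotated by t(D_k),
*      scaled by sqrt(lambda*shape(:,k)), and its squared norm gives the
*      Mahalanobis distance entering the log-density computed below.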
do k = 1,G logdet = 0.d0 do j = 1,p logdet = logdet + ( log(shape(j,k)) + log(scale(1)) ) end do * compute mahalanobis distance for each observation * ##### NOTE: O is transposed scsh = sqrt(scale(1)*shape(:,k)) do i = 1,n temp1 = ( x(i,:) - mu(:,k) ) call dcopy(p, 0.d0, 0, temp2, 1) call dgemv('N', p, p, 1.d0, * O(:,:,k), p, temp1, 1, 0.d0, temp2, 1) temp2 = temp2/scsh temp3 = ddot(p, temp2, 1, temp2, 1) * temp3 contains the mahalanobis distance z(i,k) = const - logdet/2.d0 - temp3/2.d0 + log(pro(k)) end do * z contains the log-density log(N(x|theta_k)) + log(p_k) end do * noise component if (Vinv .gt. 0.d0) then * call dcopy( n, log(Vinv) + log(pro(Gnoise)), 0, z(:,Gnoise), 1) z(:,Gnoise) = log(Vinv) + log( pro(Gnoise) ) end if * now column Gnoise of z contains log(Vinv) + log(p_0) * with p_0 the proportion of noise loglik = 0.d0 do i = 1,n * Numerical Recipes pag.844 temp3 = maxval(z(i,:)) temp1(1) = temp3 + log( sum(exp(z(i,:) - temp3)) ) loglik = loglik + temp1(1) * ##### NOTE: do we need to check if (z - zmax) is too small? z(i,:) = exp( z(i,:) - temp1(1) ) * re-normalize probabilities temp3 = sum( z(i,:) ) call dscal( Gnoise, 1.d0/temp3, z(i,:), 1 ) end do * ................................................................ errout = abs(loglik - lkprev)/(1.d0 + abs(loglik)) * errout = abs(loglik - lkprev) lkprev = loglik * temp(niterout) = loglik * Chris F (June 2015): pro should not be computed in the E-step * sumz = sum(z, dim = 1) * if ( eqpro ) then * if ( Vinv .gt. 0 ) then * pro(Gnoise) = sumz(Gnoise) / dble(n) * pro(1:G) = ( 1 - pro(Gnoise) ) / dble(G) * sumz = pro * dble(n) * else * pro = 1 / dble(G) * sumz = pro * dble(n) * end if * else * pro = sumz / dble(n) * end if * check if empty components * if ( minval(pro) .lt. rteps ) then if ( any(sumz .lt. rteps, 1) ) then loglik = -FLMAX return end if * WHILE condition if ( errout .gt. tol .and. niterout .lt. itmax ) goto 100 return end ************************************************************************ **** VEE model ************************************************************************ * ====================================================================== subroutine msvee (x,z, n,p,G, mu,U,C,scale,pro, lwork,info, * itmax,tol, niterin,errin) * Maximization step for model VEE * ====================================================================== implicit none integer :: n, p, G double precision :: x(n,p), z(n,G) double precision :: mu(p,G), U(p,p,G), C(p,p), pro(G) * ### NOTE: shape and orientation parameters are computed in R double precision :: scale(G) double precision :: sumz(G) integer :: i, j, k, info, lwork, l * integer :: dummy double precision :: temp1(p), temp2(p,p), temp3 double precision :: wrk(lwork), tol, errin, trgt, trgtprev integer :: itmax, niterin double precision :: FLMAX parameter (FLMAX = 1.7976931348623157d308) external :: ddot double precision :: ddot *----------------------------------------------------------------------- * colsums of z sumz = sum(z, dim = 1) * a priori probabilities pro = sumz / dble(n) * pro = sumz / sum(sumz) * if there is noise sum(sumz) does not sum to n. 
See help(mstep) * compute weighted scattering matrix and means do k = 1,G do j = 1,p mu(j,k) = sum(x(:,j)*z(:,k))/sumz(k) end do do i = 1,n temp1 = ( x(i,:) - mu(:,k) ) * sqrt(z(i,k)) call dger(p, p, 1.d0, temp1, 1, temp1, 1, U(:,:,k), p) end do * U contains the weighted scattering matrix * check if U is positive definite (see help of dpotrf) * (through Choleski is more efficient) temp2 = U(:,:,k) call dpotrf('U', p, temp2, p, info) if ( info .ne. 0 ) then if ( info .lt. 0) then l = info return else if ( info .gt. 0 ) then info = 0 scale = FLMAX return end if end if end do * covariance matrix components estimation niterin = 0 errin = FLMAX trgt = FLMAX trgtprev = FLMAX/2 * WHILE loop using goto statement 100 continue niterin = niterin + 1 * initialise C call dcopy(p*p, 0.d0, 0, C, 1) * ### NOTE: scale is initialised in R do k = 1,G C = C + U(:,:,k)/scale(k) end do * C contains the numerator of matrix C in pag.785, Celeux, Govaert temp2 = C call dsyev('N', 'U', p, temp2, p, temp1, wrk, lwork, info) temp1 = temp1(p:1:-1) * temp1 contains the (decreasing) ordered eigenvalues of C * check if dsyev converged or illegal value if ( info .ne. 0 ) then l = info return end if temp3 = exp( sum(log(temp1)) )**(1/dble(p)) * temp3 is the denominator of C C = C/temp3 * C is now the actual matrix C of pag.785 * compute the inverse of C via Choleski temp2 = C call dpotrf('U', p, temp2, p, info) if ( info .ne. 0 ) then if ( info .lt. 0) then l = info return else if ( info .gt. 0 ) then info = 0 scale = FLMAX return end if end if call dpotri('U', p, temp2, p, info) if ( info .ne. 0 ) return do j = 2,p do k = 1,(j-1) temp2(j,k) = temp2(k,j) end do end do * temp2 is now the inverse of C scale = 0.d0 do k = 1,G do j = 1,p scale(k) = scale(k) + ddot(p, U(j,:,k), 1, temp2(:,j), 1) end do scale(k) = scale(k) / (dble(p)*sumz(k)) end do * scale contains now the lambdas (pag.784 of Celeux, Govaert) * evaluate target function * trgt = dble(n)*dble(p) + dble(p)*SUM(log(scale)*sumz) trgt = sum(sumz)*dble(p) + dble(p)*SUM(log(scale)*sumz) * error errin = abs(trgt - trgtprev)/(1.d0 + abs(trgt)) trgtprev = trgt * WHILE condition if ( errin .gt. tol .and. niterin .lt. itmax ) goto 100 return end * ====================================================================== subroutine esvee (x,z, n,p,G,Gnoise, mu,O,scale,shape,pro, Vinv, * loglik, eps) * Expectation step for model VEE * ====================================================================== implicit none integer :: n,p,G,Gnoise double precision :: x(n,p), z(n,Gnoise), pro(Gnoise), Vinv double precision :: mu(p,G), O(p,p), scale(G), shape(p) double precision :: temp1(p), temp2(p), temp3 integer :: i, k, j double precision :: const, logdet, loglik, eps double precision :: log2pi parameter (log2pi = 1.837877066409345d0) double precision :: FLMAX parameter (FLMAX = 1.7976931348623157d308) external :: ddot double precision :: ddot * double precision :: smalog * parameter (smalog = -708.d0) *----------------------------------------------------------------------- * check very small eigenvalues (cannot compute E step) if ( minval(shape) .le. sqrt(eps) .or. * minval(scale) .le. 
sqrt(eps) ) then loglik = FLMAX return end if const = (-dble(p)/2.d0)*log2pi do k = 1,G logdet = 0.d0 do j = 1,p logdet = logdet + ( log(shape(j)) + log(scale(k)) ) end do * compute mahalanobis distance for each observation * ##### NOTE: O is transposed do i = 1,n temp1 = ( x(i,:) - mu(:,k) ) call dcopy(p, 0.d0, 0, temp2, 1) call dgemv('N', p, p, 1.d0, * O, p, temp1, 1, 0.d0, temp2, 1) temp2 = temp2/sqrt(scale(k)*shape) temp3 = ddot(p, temp2, 1, temp2, 1) * temp3 contains the mahalanobis distance * z(i,k) = const - logdet/2.d0 - temp3/2.d0 + log(pro(k)) z(i,k) = const - logdet/2.d0 - temp3/2.d0 * help(cdens) --> The densities are not scaled by mixing proportions end do * z contains the log-density log(N(x|theta_k)) end do if ( pro(1) .lt. 0.d0 ) return * cdens function * noise component if (Vinv .gt. 0.d0) then call dcopy( n, log(Vinv), 0, z(:,Gnoise), 1) end if * now column Gnoise of z contains log(Vinv) do i = 1,n z(i,:) = z(i,:) + log( pro(:) ) * Numerical Recipes pag.844 temp3 = maxval(z(i,:)) temp1(1) = temp3 + log( sum(exp(z(i,:) - temp3)) ) loglik = loglik + temp1(1) z(i,:) = exp( z(i,:) - temp1(1) ) * re-normalize probabilities temp3 = sum( z(i,:) ) call dscal( Gnoise, 1.d0/temp3, z(i,:), 1 ) end do return end * ====================================================================== subroutine mevee ( x,z, n,p,G,Gnoise, mu,C,U,scale,shape,pro,Vinv, * loglik, eqpro,itmaxin,tolin,itmaxout,tolout,eps, * niterin,errin,niterout,errout,lwork,info ) * Maximization-expectation algorithm for model VEE * ====================================================================== implicit none logical :: eqpro integer :: n,p,G,Gnoise double precision :: x(n,p), z(n,Gnoise), pro(Gnoise), Vinv double precision :: mu(p,G), C(p,p), scale(G), shape(p) double precision :: U(p,p,G), sumz(Gnoise) double precision :: temp1(p), temp2(p,p), temp3, temp4(p) integer :: i, j, k, info, lwork, l * integer :: dummy integer :: itmaxin, itmaxout, niterin, niterout double precision :: tolin, tolout, errin, errout, eps, rteps double precision :: const, logdet, loglik, lkprev, wrk(lwork) double precision :: trgt, trgtprev double precision :: log2pi parameter (log2pi = 1.837877066409345d0) double precision :: FLMAX parameter (FLMAX = 1.7976931348623157d308) external :: ddot double precision :: ddot * double precision :: smalog * parameter (smalog = -708.d0) *----------------------------------------------------------------------- l = 0 rteps = sqrt(eps) niterout = 0 errout = FLMAX lkprev = FLMAX/2 loglik = FLMAX const = (-dble(p)/2.d0)*log2pi * WHILE loop for EM algorithm 100 continue niterout = niterout + 1 sumz = sum(z, dim = 1) if ( eqpro ) then if ( Vinv .gt. 0 ) then pro(Gnoise) = sumz(Gnoise) / dble(n) pro(1:G) = ( 1 - pro(Gnoise) ) / dble(G) sumz = pro * dble(n) else pro = 1 / dble(G) sumz = pro * dble(n) end if else pro = sumz / dble(n) end if * re-initialise U call dcopy(p*p*G, 0.d0, 0, U, 1) * compute weighted scattering matrix and means do k = 1,G do j = 1,p mu(j,k) = sum(x(:,j)*z(:,k))/sumz(k) end do do i = 1,n temp1 = ( x(i,:) - mu(:,k) ) * sqrt(z(i,k)) call dger(p, p, 1.d0, temp1, 1, temp1, 1, U(:,:,k), p) end do * U contains the weighted scattering matrix * check if U is positive definite (see help of dpotrf) * (through Choleski is more efficient) temp2 = U(:,:,k) call dpotrf('U', p, temp2, p, info) if ( info .ne. 0 ) then if ( info .lt. 0) then l = info return else if ( info .gt. 0 ) then info = 0 loglik = FLMAX return end if end if end do * M step.......................................................... 
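*      M step for the VEE model: Sigma_k = lambda_k * C, where the
*      matrix C = D A D' is common to all components and only the
*      volume lambda_k varies.  The inner loop below alternates between
*      (i) forming C as the determinant-normalised sum of W_k/lambda_k
*      and (ii) setting lambda_k = tr(W_k C^-1)/(p*n_k), following
*      Celeux and Govaert (1995, pp. 784-785), until the relative change
*      in the target function falls below tolin.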
* covariance matrix components estimation niterin = 0 errin = FLMAX trgt = FLMAX trgtprev = FLMAX/2 * initialise scale call dcopy(G, 1.d0, 0, scale, 1) * WHILE loop for M step 110 continue niterin = niterin + 1 * initialise C call dcopy(p*p, 0.d0, 0, C, 1) do k = 1,G C = C + U(:,:,k)/scale(k) end do * C contains the numerator of matrix C in pag.785, Celeux, Govaert temp2 = C call dsyev('N', 'U', p, temp2, p, temp1, wrk, lwork, info) temp1 = temp1(p:1:-1) * temp1 contains the (decreasing) ordered eigenvalues of C * check if dsyev converged or illegal value if ( info .ne. 0 ) then l = info return end if temp3 = exp( sum(log(temp1)) )**(1/dble(p)) * temp3 is the denominator of C C = C/temp3 * C is now the actual matrix C of pag.785 * compute the inverse of C via Choleski temp2 = C call dpotrf('U', p, temp2, p, info) if ( info .ne. 0 ) then if ( info .lt. 0) then l = info return else if ( info .gt. 0 ) then info = 0 loglik = FLMAX return end if end if call dpotri('U', p, temp2, p, info) if ( info .ne. 0 ) return do j = 2,p do k = 1,(j-1) temp2(j,k) = temp2(k,j) end do end do * temp2 is now the inverse of C scale = 0.d0 do k = 1,G do j = 1,p scale(k) = scale(k) + ddot(p, U(j,:,k), 1, temp2(:,j), 1) end do scale(k) = scale(k) / (dble(p)*sumz(k)) end do * scale contains now the lambdas (pag.784 of Celeux, Govaert) * evaluate target function * trgt = dble(n)*dble(p) + dble(p)*SUM(log(scale)*sumz) trgt = sum(sumz(1:G))*dble(p) + dble(p)*SUM(log(scale)*sumz(1:G)) * error errin = abs(trgt - trgtprev)/(1.d0 + abs(trgt)) trgtprev = trgt * WHILE condition for M step if ( errin .gt. tolin .and. niterin .lt. itmaxin ) goto 110 * ................................................................ * eigenvalues of C shape = temp1 / temp3 * check very small eigenvalues (singular covariance) if (minval(shape) .le. rteps .or. minval(scale) .le. rteps) then loglik = FLMAX return end if * E step.......................................................... do k = 1,G logdet = 0.d0 do j = 1,p logdet = logdet + log(shape(j)) + log(scale(k)) end do * compute mahalanobis distance for each observation do i = 1,n temp1 = ( x(i,:) - mu(:,k) ) call dcopy(p, 0.d0, 0, temp4, 1) call dgemv('N', p, p, 1.d0, * temp2, p, temp1, 1, 0.d0, temp4, 1) temp4 = temp4/scale(k) temp3 = ddot(p, temp4, 1, temp1, 1) * temp3 contains the mahalanobis distance z(i,k) = const - logdet/2.d0 - temp3/2.d0 + log(pro(k)) * z(i,k) = const - logdet/2.d0 - temp3/2.d0 end do * z contains the log-density log(N(x|theta_k)) + log(p_k) end do * if ( pro(1) .lt. 0.d0 ) return * cdens function * noise component if (Vinv .gt. 0.d0) then z(:,Gnoise) = log(Vinv) + log( pro(Gnoise) ) end if * now column Gnoise of z contains log(Vinv) + log(p_0) * with p_0 the proportion of noise loglik = 0.d0 do i = 1,n * Numerical Recipes pag.844 temp3 = maxval(z(i,:)) temp1(1) = temp3 + log( sum(exp(z(i,:) - temp3)) ) loglik = loglik + temp1(1) z(i,:) = exp( z(i,:) - temp1(1) ) * re-normalize probabilities temp3 = sum( z(i,:) ) call dscal( Gnoise, 1.d0/temp3, z(i,:), 1 ) end do * ................................................................ errout = abs(loglik - lkprev)/(1.d0 + abs(loglik)) * errout = abs(loglik - lkprev) lkprev = loglik * temp(niterout) = loglik * Chris F (June 2015): pro should not be computed in the E-step * sumz = sum(z, dim = 1) * if ( eqpro ) then * if ( Vinv .gt. 
0 ) then * pro(Gnoise) = sumz(Gnoise) / dble(n) * pro(1:G) = ( 1 - pro(Gnoise) ) / dble(G) * sumz = pro * dble(n) * else * pro = 1 / dble(G) * sumz = pro * dble(n) * end if * else * pro = sumz / dble(n) * end if * check if empty components if ( minval(sumz) .lt. rteps ) then loglik = -FLMAX return end if * WHILE condition EM if ( errout .gt. tolout .and. niterout .lt. itmaxout ) goto 100 return end ************************************************************************ **** EVE model ************************************************************************ * ====================================================================== subroutine mseve (x,z, n,p,G, mu,U,O,scale,shape,pro, lwork,info, * itmax,tol, niterin,errin, eps) * Maximization step for model EVE * ====================================================================== implicit none integer :: n, p, G double precision :: x(n,p), z(n,G) double precision :: mu(p,G), U(p,p,G), pro(G), O(p,p) double precision :: scale, shape(p,G) double precision :: sumz(G), omega(G) integer :: i, j, k, info, lwork double precision :: temp1(p,p), temp2(p,p), temp3(p,p), temp4(p) double precision :: wrk(lwork), tol, errin, trgt, trgtprev, eps integer :: itmax, niterin double precision :: FLMAX parameter (FLMAX = 1.7976931348623157d308) external :: ddot double precision :: ddot *----------------------------------------------------------------------- * colsums of z sumz = sum(z, dim = 1) * a priori probabilities pro = sumz / dble(n) * pro = sumz / sum(sumz) * if there is noise sum(sumz) does not sum to n. See help(mstep) * compute weighted scattering matrix and means do k = 1,G do j = 1,p mu(j,k) = sum(x(:,j)*z(:,k))/sumz(k) end do do i = 1,n temp1(:,1) = ( x(i,:) - mu(:,k) ) * sqrt(z(i,k)) call dger(p, p, 1.d0, temp1(:,1), 1, temp1(:,1), 1, * U(:,:,k), p) end do * U contains the weighted scattering matrix * compute the eigenvalues of U to be stored in omega temp2 = U(:,:,k) call dsyev('N', 'U', p, temp2, p, temp1(:,1), wrk, lwork, info) * now temp1 contains all the eigenvalues of U * check if dsyev converged and positive definite if ( info .ne. 0 ) then return else if ( minval(temp1(:,1)) .lt. sqrt(eps) ) then info = 0 scale = FLMAX return end if end if omega(k) = temp1(p,1) end do * omega contains the largest eigenvalue of each scattering matrix niterin = 0 errin = FLMAX trgt = FLMAX trgtprev = FLMAX/2 * covariance matrix components estimation * we consider algorithm MM 1 and MM 2 of Browne, McNicholas 2013 * with a modification in computing the orientation matrix in the MM 2 step * shape (matrix A) and orientation (matrix D) initialised in R * shape = matrix(1, p,G) * O = diag(p) * WHILE loop using goto statement 100 continue * ### NOTE: O is transposed niterin = niterin + 1 temp2 = 0.d0 temp3 = 0.d0 * temp3 will contain matrix F * Algorithm MM 1 ...................................... do k = 1,G do j = 1,p * temp1(j,:) = O(:,j) / shape(j,k) temp1(j,:) = O(j,:) / shape(j,k) end do * temp1 contains inv(A)t(D) call dgemm( 'N','N', p,p,p, 1.d0, temp1,p, U(:,:,k),p, * 0.d0, temp2,p ) * temp2 contains inv(A) %*% t(D) %*% W temp1 = temp2 - omega(k)*temp1 temp3 = temp3 + temp1 * temp3 contains the matrix F end do * compute matrices P and R where svd(F) = P %*% B %*% t(R) call dgesvd('A','A', p,p, temp3,p, temp4, temp1,p, temp2,p, * wrk, lwork, info) * now temp1 contains matrix P, temp2 contains matrix t(R) * temp4 contains the singular values * check if dgesvd converged if ( info .ne. 
0 ) return * NOTE: t(O) = t( R %*% t(P) ) = P %*% t(R) call dgemm( 'N','N', p,p,p, 1.d0, temp1,p, temp2,p, * 0.d0, O,p ) * O contains TRANSPOSED matrix D of Browne, McNicholas * ..................................................... * Algorithm MM 2 ...................................... * call dgemm( 'T','T', p,p,p, 1.d0, temp2,p, temp1,p, * * 0.d0, O,p ) call transpose(O, p) * O contains matrix D of Browne, McNicholas * Algorithm MM 2 temp1 = 0.d0 temp3 = 0.d0 do k = 1,G call dgemm( 'N','N', p,p,p, 1.d0, U(:,:,k),p, O,p, * 0.d0, temp1,p ) * temp1 contains W %*% D do j = 1,p temp2(:,j) = temp1(:,j) / shape(j,k) end do * temp2 contains W %*% D %*% inv(A) temp1 = temp2 - maxval( 1/shape(:,k) )*temp1 temp3 = temp3 + temp1 * temp3 contains the matrix F end do * compute matrices P and R where svd(F) = P %*% B %*% t(R) call dgesvd('A','A', p,p, temp3,p, temp4, temp1,p, temp2,p, * wrk, lwork, info) * now temp1 contains matrix P, temp2 contains matrix t(R) * temp4 contains the singular values * check if dgesvd converged if ( info .ne. 0 ) return * NOTE: t(O) = R %*% t(P) * call dgemm( 'T','T', p,p,p, 1.d0, temp2,p, temp1,p, * * 0.d0, O,p ) O = 0.d0 call dgemm( 'N','N', p,p,p, 1.d0, temp1,p, temp2,p, * 0.d0, O,p ) * NOTE: we compute the TRANSPOSED of the matrix in the output in the paper call transpose(O, p) * O contains TRANSPOSED matrix D of Browne, McNicholas * ..................................................... * compute shape (matrix A) and target function trgt = 0.d0 do k = 1,G temp1 = 0.d0 call dgemm( 'N','N', p,p,p, 1.d0, O,p, U(:,:,k),p, * 0.d0, temp1,p ) * temp1 contains t(D) %*% W do j = 1,p shape(j,k) = ddot(p, temp1(j,:), 1, O(j,:), 1) end do shape(:,k) = shape(:,k)/ * exp( sum( log(shape(:,k)) ) )**(1.d0/dble(p)) * now shape contains matrix A of Celeux, Govaert pag. 785 * check positive values if ( minval(shape(:,k)) .lt. sqrt(eps) ) then info = 0 shape = FLMAX return end if temp4(1) = 0.d0 do j = 1,p * temp2(:,j) = O(:,j) * 1.d0/shape(j,k) temp2(:,j) = O(j,:) * 1.d0/shape(j,k) temp4(1) = temp4(1) + ddot(p, temp1(j,:), 1, temp2(:,j), 1) end do trgt = trgt + temp4(1) end do * error errin = abs(trgt - trgtprev)/(1.d0 + abs(trgt)) trgtprev = trgt * WHILE condition if ( errin .gt. tol .and. niterin .lt. itmax ) goto 100 scale = trgt / ( sum(sumz)*dble(p) ) return end * ====================================================================== subroutine eseve (x,z, n,p,G,Gnoise, mu,O,scale,shape,pro, Vinv, * loglik, eps) * Expectation step for model EVE * ====================================================================== implicit none integer :: n, p, G, Gnoise double precision :: x(n,p), z(n,Gnoise) double precision :: mu(p,G), O(p,p), scale, shape(p,G) double precision :: Vinv, pro(Gnoise) double precision :: temp1(p), temp2(p), temp3, temp4(n) integer :: i, k, j double precision :: const, logdet, loglik, eps double precision :: log2pi parameter (log2pi = 1.837877066409345d0) double precision :: FLMAX parameter (FLMAX = 1.7976931348623157d308) external :: ddot double precision :: ddot * double precision :: smalog * parameter (smalog = -708.d0) *----------------------------------------------------------------------- * check very small eigenvalues (singular covariance) if (minval(shape) .le. sqrt(eps) .or. scale .le. 
sqrt(eps)) then loglik = FLMAX return end if const = (-dble(p)/2.d0)*log2pi do k = 1,G logdet = 0.d0 do j = 1,p logdet = logdet + ( log(shape(j,k)) + log(scale) ) end do * compute mahalanobis distance for each observation * ##### NOTE: O is transposed do i = 1,n temp1 = ( x(i,:) - mu(:,k) ) call dcopy(p, 0.d0, 0, temp2, 1) call dgemv('N', p, p, 1.d0, * O, p, temp1, 1, 0.d0, temp2, 1) temp2 = temp2/sqrt(scale*shape(:,k)) temp3 = ddot(p, temp2, 1, temp2, 1) temp4(i) = temp3 * temp3 contains the mahalanobis distance * z(i,k) = const - logdet/2.d0 - temp3/2.d0 + log(pro(k)) z(i,k) = const - logdet/2.d0 - temp3/2.d0 * help(cdens) --> The densities are not scaled by mixing proportions end do * z contains the log-density log(N(x|theta_k)) end do if ( pro(1) .lt. 0.d0 ) return * cdens function * noise component if (Vinv .gt. 0.d0) then call dcopy( n, log(Vinv), 0, z(:,Gnoise), 1) end if * now column Gnoise of z contains log(Vinv) do i = 1,n z(i,:) = z(i,:) + log( pro ) * Numerical Recipes pag.844 temp3 = maxval(z(i,:)) temp1(1) = temp3 + log( sum(exp(z(i,:) - temp3)) ) loglik = loglik + temp1(1) * ##### NOTE: do we need to check if (z - zmax) is too small? z(i,:) = exp( z(i,:) - temp1(1) ) * re-normalize probabilities temp3 = sum( z(i,:) ) call dscal( Gnoise, 1.d0/temp3, z(i,:), 1 ) end do return end * ====================================================================== subroutine meeve ( x,z, n,p,G,Gnoise, mu,O,U,scale,shape,pro,Vinv, * loglik, eqpro,itmaxin,tolin,itmaxout,tolout,eps, * niterin,errin,niterout,errout,lwork,info ) * Maximization-expectation algorithm for model EVE * ====================================================================== implicit none logical :: eqpro integer :: n,p,G,Gnoise double precision :: x(n,p), z(n,Gnoise), pro(Gnoise), Vinv double precision :: mu(p,G), O(p,p), scale, shape(p,G) double precision :: U(p,p,G), sumz(Gnoise), omega(G) double precision :: temp1(p,p), temp2(p,p), temp3(p,p), temp4(p) integer :: i, j, k, info, lwork integer :: itmaxin, itmaxout, niterin, niterout double precision :: tolin, tolout, errin, errout, eps, rteps double precision :: const, logdet, loglik, lkprev, wrk(lwork) double precision :: trgt, trgtprev double precision :: log2pi parameter (log2pi = 1.837877066409345d0) double precision :: FLMAX parameter (FLMAX = 1.7976931348623157d308) external :: ddot double precision :: ddot * double precision :: smalog * parameter (smalog = -708.d0) *----------------------------------------------------------------------- rteps = sqrt(eps) niterout = 0 errout = FLMAX lkprev = FLMAX/2 loglik = FLMAX const = (-dble(p)/2.d0)*log2pi * WHILE loop for EM algorithm 100 continue niterout = niterout + 1 sumz = sum(z, dim = 1) if ( eqpro ) then if ( Vinv .gt. 0 ) then pro(Gnoise) = sumz(Gnoise) / dble(n) pro(1:G) = ( 1 - pro(Gnoise) ) / dble(G) sumz = pro * dble(n) else pro = 1 / dble(G) sumz = pro * dble(n) end if else pro = sumz / dble(n) end if * re-initialise U call dcopy(p*p*G, 0.d0, 0, U, 1) * compute weighted scattering matrix and means do k = 1,G do j = 1,p mu(j,k) = sum(x(:,j)*z(:,k))/sumz(k) end do do i = 1,n temp1(:,1) = ( x(i,:) - mu(:,k) ) * sqrt(z(i,k)) call dger(p, p, 1.d0, temp1(:,1), 1, temp1(:,1), 1, * U(:,:,k), p) end do * U contains the weighted scattering matrix * compute the eigenvalues of U to be stored in omega temp2 = U(:,:,k) call dsyev('N', 'U', p, temp2, p, temp1(:,1), wrk, lwork, info) * now temp1 contains all the eigenvalues of U * check if dsyev converged and positive definite if ( info .ne. 
0 ) then return else if ( minval(temp1(:,1)) .lt. rteps ) then info = 0 scale = FLMAX return end if end if omega(k) = temp1(p,1) end do * omega contains the largest eigenvalue of each scattering matrix * M step.......................................................... niterin = 0 errin = FLMAX trgt = FLMAX trgtprev = FLMAX/2 * covariance matrix components estimation * we consider algorithm MM 1 and MM 2 of Browne, McNicholas 2013 * with a modification in computing the orientation matrix in the MM 2 step * shape (matrix A) and orientation (matrix D) initialised in R * shape = matrix(1, p,G) * O = diag(p) * ### NOTE: we don't re-initialize shape and orientation at each * outer iteration of the EM algorithm * WHILE loop for M step 110 continue * ### NOTE: O is transposed niterin = niterin + 1 temp2 = 0.d0 temp3 = 0.d0 * temp3 will contain matrix F * Algorithm MM 1 ...................................... do k = 1,G do j = 1,p * temp1(j,:) = O(:,j) / shape(j,k) temp1(j,:) = O(j,:) / shape(j,k) end do * temp1 contains inv(A)t(D) call dgemm( 'N','N', p,p,p, 1.d0, temp1,p, U(:,:,k),p, * 0.d0, temp2,p ) * temp2 contains inv(A) %*% t(D) %*% W temp1 = temp2 - omega(k)*temp1 temp3 = temp3 + temp1 * temp3 contains the matrix F end do * compute matrices P and R where svd(F) = P %*% B %*% t(R) call dgesvd('A','A', p,p, temp3,p, temp4, temp1,p, temp2,p, * wrk, lwork, info) * now temp1 contains matrix P, temp2 contains matrix t(R) * temp4 contains the singular values * check if dgesvd converged if ( info .ne. 0 ) return * NOTE: t(O) = t( R %*% t(P) ) = P %*% t(R) call dgemm( 'N','N', p,p,p, 1.d0, temp1,p, temp2,p, * 0.d0, O,p ) * O contains TRANSPOSED matrix D of Browne, McNicholas * ..................................................... * Algorithm MM 2 ...................................... * call dgemm( 'T','T', p,p,p, 1.d0, temp2,p, temp1,p, * * 0.d0, O,p ) call transpose(O, p) * O contains matrix D of Browne, McNicholas * Algorithm MM 2 temp1 = 0.d0 temp3 = 0.d0 do k = 1,G call dgemm( 'N','N', p,p,p, 1.d0, U(:,:,k),p, O,p, * 0.d0, temp1,p ) * temp1 contains W %*% D do j = 1,p temp2(:,j) = temp1(:,j) / shape(j,k) end do * temp2 contains W %*% D %*% inv(A) temp1 = temp2 - maxval( 1/shape(:,k) )*temp1 temp3 = temp3 + temp1 * temp3 contains the matrix F end do * compute matrices P and R where svd(F) = P %*% B %*% t(R) call dgesvd('A','A', p,p, temp3,p, temp4, temp1,p, temp2,p, * wrk, lwork, info) * now temp1 contains matrix P, temp2 contains matrix t(R) * temp4 contains the singular values * check if dgesvd converged if ( info .ne. 0 ) return * NOTE: t(O) = R %*% t(P) * call dgemm( 'T','T', p,p,p, 1.d0, temp2,p, temp1,p, * * 0.d0, O,p ) O = 0.d0 * NOTE: we compute the TRANSPOSED of the matrix in the output in the paper call dgemm( 'N','N', p,p,p, 1.d0, temp1,p, temp2,p, * 0.d0, O,p ) call transpose(O, p) * O contains TRANSPOSED matrix D of Browne, McNicholas * ..................................................... * compute shape (matrix A) and target function trgt = 0.d0 do k = 1,G temp1 = 0.d0 call dgemm( 'N','N', p,p,p, 1.d0, O,p, U(:,:,k),p, * 0.d0, temp1,p ) * temp1 contains t(D) %*% W do j = 1,p shape(j,k) = ddot(p, temp1(j,:), 1, O(j,:), 1) end do shape(:,k) = shape(:,k)/ * exp( sum( log(shape(:,k)) ) )**(1.d0/dble(p)) * now shape contains matrix A of Celeux, Govaert pag. 785 * check positive values if ( minval(shape(:,k)) .lt. 
rteps ) then info = 0 loglik = FLMAX return end if temp4(1) = 0.d0 do j = 1,p * temp2(:,j) = O(:,j) * 1.d0/shape(j,k) temp2(:,j) = O(j,:) * 1.d0/shape(j,k) temp4(1) = temp4(1) + ddot(p, temp1(j,:), 1, temp2(:,j), 1) end do trgt = trgt + temp4(1) end do * error errin = abs(trgt - trgtprev)/(1.d0 + abs(trgt)) trgtprev = trgt * WHILE condition M step if ( errin .gt. tolin .and. niterin .lt. itmaxin ) goto 110 scale = trgt / ( sum(sumz(1:G))*dble(p) ) * ................................................................ * E step.......................................................... const = (-dble(p)/2.d0)*log2pi do k = 1,G logdet = 0.d0 do j = 1,p logdet = logdet + ( log(shape(j,k)) + log(scale) ) end do * compute mahalanobis distance for each observation * ##### NOTE: O is transposed do i = 1,n temp1(:,1) = ( x(i,:) - mu(:,k) ) call dcopy(p, 0.d0, 0, temp2(:,1), 1) call dgemv('N', p, p, 1.d0, * O, p, temp1(:,1), 1, 0.d0, temp2(:,1), 1) temp2(:,1) = temp2(:,1)/sqrt(scale*shape(:,k)) temp3(1,1) = ddot(p, temp2(:,1), 1, temp2(:,1), 1) * temp3 contains the mahalanobis distance z(i,k) = const - logdet/2.d0 - temp3(1,1)/2.d0 + log(pro(k)) * z(i,k) = const - logdet/2.d0 - temp3(1,1)/2.d0 end do * z contains the log-density log(N(x|theta_k)) + log(p_k) end do * noise component if (Vinv .gt. 0.d0) then z(:,Gnoise) = log(Vinv) + log( pro(Gnoise) ) end if * now column Gnoise of z contains log(Vinv) + log(p_0) loglik = 0.d0 do i = 1,n * Numerical Recipes pag.844 temp3(1,1) = maxval(z(i,:)) temp1(1,1) = temp3(1,1) + log( sum(exp(z(i,:) - temp3(1,1))) ) loglik = loglik + temp1(1,1) * ##### NOTE: do we need to check if (z - zmax) is too small? z(i,:) = exp( z(i,:) - temp1(1,1) ) * re-normalize probabilities temp3(1,1) = sum( z(i,:) ) call dscal( Gnoise, 1.d0/temp3(1,1), z(i,:), 1 ) end do * ................................................................ errout = abs(loglik - lkprev)/(1.d0 + abs(loglik)) lkprev = loglik * Chris F (June 2015): pro should not be computed in the E-step * sumz = sum(z, dim = 1) * if ( eqpro ) then * if ( Vinv .gt. 0 ) then * pro(Gnoise) = sumz(Gnoise) / dble(n) * pro(1:G) = ( 1 - pro(Gnoise) ) / dble(G) * sumz = pro * dble(n) * else * pro = 1 / dble(G) * sumz = pro * dble(n) * end if * else * pro = sumz / dble(n) * end if * check if empty components if ( minval(sumz) .lt. rteps ) then loglik = -FLMAX return end if * WHILE condition EM if ( errout .gt. tolout .and. niterout .lt. 
itmaxout ) goto 100 return end ************************************************************************ **** VVE model ************************************************************************ * ====================================================================== subroutine msvve (x,z, n,p,G, mu,U,O,scale,shape,pro, lwork,info, * itmax,tol, niterin,errin, eps) * Maximization step for model VVE * ====================================================================== implicit none integer :: n, p, G double precision :: x(n,p), z(n,G) double precision :: mu(p,G), U(p,p,G), pro(G), O(p,p) double precision :: scale(G), shape(p,G) double precision :: sumz(G), omega(G) integer :: i, j, k, info, lwork double precision :: temp1(p,p), temp2(p,p), temp3(p,p), temp4(p) double precision :: wrk(lwork), tol, errin, trgt, trgtprev, eps integer :: itmax, niterin double precision :: FLMAX parameter (FLMAX = 1.7976931348623157d308) external :: ddot double precision :: ddot *----------------------------------------------------------------------- * colsums of z sumz = sum(z, dim = 1) * a priori probabilities pro = sumz / dble(n) * pro = sumz / sum(sumz) * if there is noise sum(sumz) does not sum to n. See help(mstep) * compute weighted scattering matrix and means do k = 1,G do j = 1,p mu(j,k) = sum(x(:,j)*z(:,k))/sumz(k) end do do i = 1,n temp1(:,1) = ( x(i,:) - mu(:,k) ) * sqrt(z(i,k)) call dger(p, p, 1.d0, temp1(:,1), 1, temp1(:,1), 1, * U(:,:,k), p) end do * U contains the weighted scattering matrix * compute the eigenvalues of U to be stored in omega temp2 = U(:,:,k) call dsyev('N', 'U', p, temp2, p, temp1(:,1), wrk, lwork, info) * now temp1 contains all the eigenvalues of U * check if dsyev converged and positive definite if ( info .ne. 0 ) then return else if ( minval(temp1(:,1)) .lt. sqrt(eps) ) then info = 0 scale = FLMAX return end if end if omega(k) = temp1(p,1) end do * omega contains the largest eigenvalue of each scattering matrix niterin = 0 errin = FLMAX trgt = FLMAX trgtprev = FLMAX/2 * covariance matrix components estimation * we consider algorithm MM 1 and MM 2 of Browne, McNicholas 2013 * with a modification in computing the orientation matrix in the MM 2 step * shape (matrix A) and orientation (matrix D) initialised in R * shape = matrix(1, p,G) * O = diag(p) * WHILE loop using goto statement 100 continue * ### NOTE: O is transposed niterin = niterin + 1 temp2 = 0.d0 temp3 = 0.d0 * temp3 will contain matrix F * Algorithm MM 1 ...................................... do k = 1,G do j = 1,p * temp1(j,:) = O(:,j) / shape(j,k) temp1(j,:) = O(j,:) / shape(j,k) end do * temp1 contains inv(A)t(D) call dgemm( 'N','N', p,p,p, 1.d0, temp1,p, U(:,:,k),p, * 0.d0, temp2,p ) * temp2 contains inv(A) %*% t(D) %*% W temp1 = temp2 - omega(k)*temp1 temp3 = temp3 + temp1 * temp3 contains the matrix F end do * compute matrices P and R where svd(F) = P %*% B %*% t(R) call dgesvd('A','A', p,p, temp3,p, temp4, temp1,p, temp2,p, * wrk, lwork, info) * now temp1 contains matrix P, temp2 contains matrix t(R) * temp4 contains the singular values * check if dgesvd converged if ( info .ne. 0 ) return * NOTE: t(P %*% t(R)) = R %*% t(P) call dgemm( 'N','N', p,p,p, 1.d0, temp1,p, temp2,p, * 0.d0, O,p ) * O contains TRANSPOSED matrix D of Browne, McNicholas * ..................................................... * Algorithm MM 2 ...................................... 
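*      Second majorisation-minimisation sweep for the common
*      orientation: F is rebuilt from W_k %*% D %*% inv(A_k) minus its
*      majorising term, an SVD of F yields two orthogonal factors, and
*      the product of those factors (stored transposed in O) becomes the
*      updated orientation matrix, as in the MM algorithms of Browne and
*      McNicholas cited above.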
call transpose(O, p) * O contains matrix D of Browne, McNicholas * Algorithm MM 2 temp1 = 0.d0 temp3 = 0.d0 do k = 1,G call dgemm( 'N','N', p,p,p, 1.d0, U(:,:,k),p, O,p, * 0.d0, temp1,p ) * temp1 contains W %*% D do j = 1,p temp2(:,j) = temp1(:,j) / shape(j,k) end do * temp2 contains W %*% D %*% inv(A) temp1 = temp2 - maxval( 1/shape(:,k) )*temp1 temp3 = temp3 + temp1 * temp3 contains the matrix F end do * compute matrices P and R where svd(F) = P %*% B %*% t(R) call dgesvd('A','A', p,p, temp3,p, temp4, temp1,p, temp2,p, * wrk, lwork, info) * now temp1 contains matrix P, temp2 contains matrix t(R) * temp4 contains the singular values * check if dgesvd converged if ( info .ne. 0 ) return * NOTE: t(O) = R %*% t(P) O = 0.d0 call dgemm( 'N','N', p,p,p, 1.d0, temp1,p, temp2,p, * 0.d0, O,p ) call transpose(O, p) * O contains TRANSPOSED matrix D of Browne, McNicholas * ..................................................... * compute shape (matrix A) and target function trgt = 0.d0 do k = 1,G temp1 = 0.d0 call dgemm( 'N','N', p,p,p, 1.d0, O,p, U(:,:,k),p, * 0.d0, temp1,p ) * temp1 contains t(D) %*% W do j = 1,p shape(j,k) = ddot(p, temp1(j,:), 1, O(j,:), 1) end do * shape(:,k) = shape(:,k)/ * * exp( sum( log(shape(:,k)) ) )**(1.d0/dble(p)) shape(:,k) = shape(:,k)/sumz(k) * now shape contains matrix A (scale*A) of Celeux, Govaert pag. 785 * compute scale parameter and shape matrix A scale(k) = exp( sum( log(shape(:,k)) ) )**(1/dble(p)) shape(:,k) = shape(:,k)/scale(k) * check positive values if ( minval(shape(:,k)) .lt. sqrt(eps) ) then info = 0 shape = FLMAX return end if temp4(1) = 0.d0 do j = 1,p * temp2(:,j) = O(:,j) * 1.d0/shape(j,k) temp2(:,j) = O(j,:) * 1.d0/shape(j,k) temp4(1) = temp4(1) + ddot(p, temp1(j,:), 1, temp2(:,j), 1) end do trgt = trgt + temp4(1) end do * error errin = abs(trgt - trgtprev)/(1.d0 + abs(trgt)) trgtprev = trgt * WHILE condition if ( errin .gt. tol .and. niterin .lt. itmax ) goto 100 return end * ====================================================================== subroutine esvve (x,z, n,p,G,Gnoise, mu,O,scale,shape,pro, Vinv, * loglik, eps) * Expectation step for model VVE * ====================================================================== implicit none integer :: n, p, G, Gnoise double precision :: x(n,p), z(n,Gnoise) double precision :: mu(p,G), O(p,p), scale(G), shape(p,G) double precision :: Vinv, pro(Gnoise) double precision :: temp1(p), temp2(p), temp3, temp4(n) integer :: i, k, j double precision :: const, logdet, loglik, eps double precision :: log2pi parameter (log2pi = 1.837877066409345d0) double precision :: FLMAX parameter (FLMAX = 1.7976931348623157d308) external :: ddot double precision :: ddot * double precision :: smalog * parameter (smalog = -708.d0) *----------------------------------------------------------------------- * check very small eigenvalues (singular covariance) if ( minval(shape) .le. sqrt(eps) .or. * minval(scale) .le. 
sqrt(eps) ) then loglik = FLMAX return end if const = (-dble(p)/2.d0)*log2pi do k = 1,G logdet = 0.d0 do j = 1,p logdet = logdet + ( log(shape(j,k)) + log(scale(k)) ) end do * compute mahalanobis distance for each observation * ##### NOTE: O is transposed do i = 1,n temp1 = ( x(i,:) - mu(:,k) ) call dcopy(p, 0.d0, 0, temp2, 1) call dgemv('N', p, p, 1.d0, * O, p, temp1, 1, 0.d0, temp2, 1) temp2 = temp2/sqrt(scale(k)*shape(:,k)) temp3 = ddot(p, temp2, 1, temp2, 1) temp4(i) = temp3 * temp3 contains the mahalanobis distance * z(i,k) = const - logdet/2.d0 - temp3/2.d0 + log(pro(k)) z(i,k) = const - logdet/2.d0 - temp3/2.d0 * help(cdens) --> The densities are not scaled by mixing proportions end do * z contains the log-density log(N(x|theta_k)) end do if ( pro(1) .lt. 0.d0 ) return * cdens function * noise component if (Vinv .gt. 0.d0) then call dcopy( n, log(Vinv), 0, z(:,Gnoise), 1) end if * now column Gnoise of z contains log(Vinv) do i = 1,n z(i,:) = z(i,:) + log( pro ) * Numerical Recipes pag.844 temp3 = maxval(z(i,:)) temp1(1) = temp3 + log( sum(exp(z(i,:) - temp3)) ) loglik = loglik + temp1(1) * ##### NOTE: do we need to check if (z - zmax) is too small? z(i,:) = exp( z(i,:) - temp1(1) ) * re-normalize probabilities temp3 = sum( z(i,:) ) call dscal( Gnoise, 1.d0/temp3, z(i,:), 1 ) end do return end * ====================================================================== subroutine mevve ( x,z, n,p,G,Gnoise, mu,O,U,scale,shape,pro,Vinv, * loglik, eqpro,itmaxin,tolin,itmaxout,tolout,eps, * niterin,errin,niterout,errout,lwork,info) * Maximization-expectation algorithm for model VVE * ====================================================================== implicit none logical :: eqpro integer :: n,p,G,Gnoise double precision :: x(n,p), z(n,Gnoise), pro(Gnoise), Vinv double precision :: mu(p,G), O(p,p), scale(G), shape(p,G) double precision :: U(p,p,G), sumz(Gnoise), omega(G) double precision :: temp1(p,p), temp2(p,p), temp3(p,p), temp4(p) integer :: i, j, k, info, lwork integer :: itmaxin, itmaxout, niterin, niterout double precision :: tolin, tolout, errin, errout, eps, rteps double precision :: const, logdet, loglik, lkprev, wrk(lwork) double precision :: trgt, trgtprev double precision :: log2pi parameter (log2pi = 1.837877066409345d0) double precision :: FLMAX parameter (FLMAX = 1.7976931348623157d308) external :: ddot double precision :: ddot * double precision :: smalog * parameter (smalog = -708.d0) *----------------------------------------------------------------------- rteps = sqrt(eps) niterout = 0 errout = FLMAX lkprev = FLMAX/2 loglik = FLMAX const = (-dble(p)/2.d0)*log2pi * WHILE loop for EM algorithm 100 continue niterout = niterout + 1 sumz = sum(z, dim = 1) if ( eqpro ) then if ( Vinv .gt. 0 ) then pro(Gnoise) = sumz(Gnoise) / dble(n) pro(1:G) = ( 1 - pro(Gnoise) ) / dble(G) sumz = pro * dble(n) else pro = 1 / dble(G) sumz = pro * dble(n) end if else pro = sumz / dble(n) end if * re-initialise U call dcopy(p*p*G, 0.d0, 0, U, 1) * compute weighted scattering matrix and means do k = 1,G do j = 1,p mu(j,k) = sum(x(:,j)*z(:,k))/sumz(k) end do do i = 1,n temp1(:,1) = ( x(i,:) - mu(:,k) ) * sqrt(z(i,k)) call dger(p, p, 1.d0, temp1(:,1), 1, temp1(:,1), 1, * U(:,:,k), p) end do * U contains the weighted scattering matrix * compute the eigenvalues of U to be stored in omega temp2 = U(:,:,k) call dsyev('N', 'U', p, temp2, p, temp1(:,1), wrk, lwork, info) * now temp1 contains all the eigenvalues of U * check if dsyev converged and positive definite if ( info .ne. 
0 ) then return else if ( minval(temp1(:,1)) .lt. rteps ) then info = 0 scale = FLMAX return end if end if omega(k) = temp1(p,1) end do * omega contains the largest eigenvalue of each scattering matrix * M step.......................................................... niterin = 0 errin = FLMAX trgt = FLMAX trgtprev = FLMAX/2 * covariance matrix components estimation * we consider algorithm MM 1 and MM 2 of Browne, McNicholas 2013 * with a modification in computing the orientation matrix in the MM 2 step * shape (matrix A) and orientation (matrix D) initialised in R * shape = matrix(1, p,G) * O = diag(p) * ### NOTE: we don't re-initialize shape and orientation at each * outer iteration of the EM algorithm * WHILE loop for M step 110 continue * ### NOTE: O is transposed niterin = niterin + 1 temp2 = 0.d0 temp3 = 0.d0 * temp3 will contain matrix F * Algorithm MM 1 ...................................... do k = 1,G do j = 1,p * temp1(j,:) = O(:,j) / shape(j,k) temp1(j,:) = O(j,:) / shape(j,k) end do * temp1 contains inv(A)t(D) call dgemm( 'N','N', p,p,p, 1.d0, temp1,p, U(:,:,k),p, * 0.d0, temp2,p ) * temp2 contains inv(A) %*% t(D) %*% W temp1 = temp2 - omega(k)*temp1 temp3 = temp3 + temp1 * temp3 contains the matrix F end do * compute matrices P and R where svd(F) = P %*% B %*% t(R) call dgesvd('A','A', p,p, temp3,p, temp4, temp1,p, temp2,p, * wrk, lwork, info) * now temp1 contains matrix P, temp2 contains matrix t(R) * temp4 contains the singular values * check if dgesvd converged if ( info .ne. 0 ) return * NOTE: t(P %*% t(R)) = R %*% t(P) call dgemm( 'N','N', p,p,p, 1.d0, temp1,p, temp2,p, * 0.d0, O,p ) * O contains TRANSPOSED orientation (matrix D of Browne, McNicholas) * ..................................................... * Algorithm MM 2 ...................................... call transpose(O, p) * O contains matrix D of Browne, McNicholas * Algorithm MM 2 temp1 = 0.d0 temp3 = 0.d0 do k = 1,G call dgemm( 'N','N', p,p,p, 1.d0, U(:,:,k),p, O,p, * 0.d0, temp1,p ) * temp1 contains W %*% D do j = 1,p temp2(:,j) = temp1(:,j) / shape(j,k) end do * temp2 contains W %*% D %*% inv(A) temp1 = temp2 - maxval( 1/shape(:,k) )*temp1 temp3 = temp3 + temp1 * temp3 contains the matrix F end do * compute matrices P and R where svd(F) = P %*% B %*% t(R) call dgesvd('A','A', p,p, temp3,p, temp4, temp1,p, temp2,p, * wrk, lwork, info) * now temp1 contains matrix P, temp2 contains matrix t(R) * temp4 contains the singular values * check if dgesvd converged if ( info .ne. 0 ) return * NOTE: t(O) = R %*% t(P) O = 0.d0 call dgemm( 'N','N', p,p,p, 1.d0, temp1,p, temp2,p, * 0.d0, O,p ) call transpose(O, p) * O contains TRANSPOSED matrix D of Browne, McNicholas * ..................................................... * compute shape (matrix A) and target function trgt = 0.d0 do k = 1,G temp1 = 0.d0 call dgemm( 'N','N', p,p,p, 1.d0, O,p, U(:,:,k),p, * 0.d0, temp1,p ) * temp1 contains t(D) %*% W do j = 1,p shape(j,k) = ddot(p, temp1(j,:), 1, O(j,:), 1) end do * shape(:,k) = shape(:,k)/ * * exp( sum( log(shape(:,k)) ) )**(1.d0/dble(p)) shape(:,k) = shape(:,k)/sumz(k) * now shape contains matrix A (scale*A) of Celeux, Govaert pag. 785 * compute scale parameter and shape matrix A scale(k) = exp( sum( log(shape(:,k)) ) )**(1/dble(p)) shape(:,k) = shape(:,k)/scale(k) * check positive values if (minval(shape(:,k)) .lt. rteps .or. * scale(k) .lt. 
rteps) then info = 0 loglik = FLMAX return end if temp4(1) = 0.d0 do j = 1,p * temp2(:,j) = O(:,j) * 1.d0/shape(j,k) temp2(:,j) = O(j,:) * 1.d0/shape(j,k) temp4(1) = temp4(1) + ddot(p, temp1(j,:), 1, temp2(:,j), 1) end do trgt = trgt + temp4(1) end do * error errin = abs(trgt - trgtprev)/(1.d0 + abs(trgt)) trgtprev = trgt * WHILE condition M step if ( errin .gt. tolin .and. niterin .lt. itmaxin ) goto 110 * do k = 1,G * scale(k) = exp( sum( log(shape(:,k)) ) )**(1/dble(p)) * shape(:,k) = shape(:,k)/scale(k) * end do * ................................................................ * E step.......................................................... const = (-dble(p)/2.d0)*log2pi do k = 1,G logdet = 0.d0 do j = 1,p logdet = logdet + ( log(shape(j,k)) + log(scale(k)) ) end do * compute mahalanobis distance for each observation * ##### NOTE: O is transposed do i = 1,n temp1(:,1) = ( x(i,:) - mu(:,k) ) call dcopy(p, 0.d0, 0, temp2(:,1), 1) call dgemv('N', p, p, 1.d0, * O, p, temp1(:,1), 1, 0.d0, temp2(:,1), 1) temp2(:,1) = temp2(:,1)/sqrt(scale(k)*shape(:,k)) temp3(1,1) = ddot(p, temp2(:,1), 1, temp2(:,1), 1) * temp3 contains the mahalanobis distance z(i,k) = const - logdet/2.d0 - temp3(1,1)/2.d0 + log(pro(k)) * z(i,k) = const - logdet/2.d0 - temp3(1,1)/2.d0 end do * z contains the log-density log(N(x|theta_k)) + log(p_k) end do * noise component if (Vinv .gt. 0.d0) then z(:,Gnoise) = log(Vinv) + log( pro(Gnoise) ) end if * now column Gnoise of z contains log(Vinv) + log(p_0) loglik = 0.d0 do i = 1,n * Numerical Recipes pag.844 temp3(1,1) = maxval(z(i,:)) temp1(1,1) = temp3(1,1) + log( sum(exp(z(i,:) - temp3(1,1))) ) loglik = loglik + temp1(1,1) * ##### NOTE: do we need to check if (z - zmax) is too small? z(i,:) = exp( z(i,:) - temp1(1,1) ) * re-normalize probabilities temp3(1,1) = sum( z(i,:) ) call dscal( Gnoise, 1.d0/temp3(1,1), z(i,:), 1 ) end do * ................................................................ errout = abs(loglik - lkprev)/(1.d0 + abs(loglik)) lkprev = loglik * Chris F (June 2015): pro should not be computed in the E-step * sumz = sum(z, dim = 1) * if ( eqpro ) then * if ( Vinv .gt. 0 ) then * pro(Gnoise) = sumz(Gnoise) / dble(n) * pro(1:G) = ( 1 - pro(Gnoise) ) / dble(G) * sumz = pro * dble(n) * else * pro = 1 / dble(G) * sumz = pro * dble(n) * end if * else * pro = sumz / dble(n) * end if * check if empty components if ( minval(sumz) .lt. rteps ) then loglik = -FLMAX return end if * WHILE condition EM if ( errout .gt. tolout .and. niterout .lt. 
itmaxout ) goto 100 return end mclust/src/dmvnorm.f0000644000176200001440000000377113463252677014230 0ustar liggesusers* ===================================================================== subroutine dmvnorm ( x, mu, Sigma, n, p, w, hood, logdens) * * Compute log-density of multivariate Gaussian * * ===================================================================== implicit NONE integer n, p double precision hood double precision x(n,p), w(*), logdens(n) double precision mu(p), Sigma(p,p) integer info, i, j double precision detlog, umin, umax, const, temp double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision RTMAX parameter (RTMAX = 1.340780792994260d154) double precision RTMIN parameter (RTMIN = 1.49166814624d-154) double precision ddot external ddot * --------------------------------------------------------------------------- * Cholesky factorization call dpotrf('U', p, Sigma, p, info) if (info .ne. 0) then w(1) = dble(info) hood = FLMAX return end if call absrng( p, Sigma, (p+1), umin, umax) if (umax .le. one .and. umax .ge. umin*RTMAX) then w(1) = zero hood = FLMAX return end if if (umax .ge. one .and. umin .le. umax*RTMIN) then w(1) = zero hood = FLMAX return end if detlog = zero do j = 1, p detlog = detlog + log(abs(Sigma(j,j))) end do const = dble(p)*pi2log/two + detlog do i = 1, n call dcopy(p, x(i,1), n, w, 1) call daxpy(p, (-one), mu(1), 1, w, 1) call dtrsv('U', 'T', 'N', p, Sigma, p, w, 1) temp = ddot(p, w, 1, w, 1)/two logdens(i) = -(const+temp) end do w(1) = zero return end mclust/src/init.c0000644000176200001440000004331613507677506013506 0ustar liggesusers#include #include // for NULL #include /* Routines registration obtained with tools::package_native_routine_registration_skeleton("~/R/mclust") FIXME: Check these declarations against the C/Fortran source code. 
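   Each extern declaration below mirrors a Fortran entry point compiled
   with the package, and the R_FortranMethodDef table lists the routine
   name together with its argument count, so that .Fortran() calls from
   the R code can be checked against the registered interface.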
*/ /* .Fortran calls */ extern void F77_NAME(dmvnorm)(void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(covwf)(void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(crossprodf)(void *, void *, void *, void *, void *, void *); extern void F77_NAME(d2norm)(void *, void *, void *, void *); extern void F77_NAME(es1e)(void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(es1v)(void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(eseee)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(eseei)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(eseev)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(eseii)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(eseve)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(esevi)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(esevv)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(esvee)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(esvei)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(esvev)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(esvii)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(esvve)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(esvvi)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(esvvv)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(hc1e)(void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(hc1v)(void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(hceee)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(hceii)(void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(hcvii)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(hcvvv)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(mcltrw)(void *, void *, void *, void *, void *); extern void F77_NAME(me1e)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(me1ep)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(me1v)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(me1vp)(void *, void *, void *, void *, void *, void *, void *, void 
*, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(meeee)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(meeeep)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(meeei)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(meeeip)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(meeev)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(meeevp)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(meeii)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(meeiip)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(meeve)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(meevi)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(meevip)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(meevv)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(mevee)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(mevei)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(meveip)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(mevev)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(mevevp)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(mevii)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(meviip)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(mevve)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, 
void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(mevvi)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(mevvip)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(mevvv)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(mevvvp)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(mnxiip)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(mnxxip)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(mnxxxp)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(ms1e)(void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(ms1ep)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(ms1v)(void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(ms1vp)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(mseee)(void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(mseeep)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(mseei)(void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(mseeip)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(mseev)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(mseevp)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(mseii)(void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(mseiip)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(mseve)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(msevi)(void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(msevip)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(msevv)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(msvee)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(msvei)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(msveip)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(msvev)(void *, void *, void *, void *, void *, 
void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(msvevp)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(msvii)(void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(msviip)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(msvve)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(msvvi)(void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(msvvip)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(msvvv)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(msvvvp)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(mvn1d)(void *, void *, void *, void *, void *); extern void F77_NAME(mvn1p)(void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(mvnxii)(void *, void *, void *, void *, void *, void *); extern void F77_NAME(mvnxxi)(void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(mvnxxx)(void *, void *, void *, void *, void *, void *); extern void F77_NAME(shapeo)(void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(uncholf)(void *, void *, void *, void *, void *); static const R_FortranMethodDef FortranEntries[] = { {"dmvnorm", (DL_FUNC) &F77_NAME(dmvnorm), 8}, {"covwf", (DL_FUNC) &F77_NAME(covwf), 8}, {"crossprodf", (DL_FUNC) &F77_NAME(crossprodf), 6}, {"d2norm", (DL_FUNC) &F77_NAME(d2norm), 4}, {"es1e", (DL_FUNC) &F77_NAME(es1e), 9}, {"es1v", (DL_FUNC) &F77_NAME(es1v), 9}, {"eseee", (DL_FUNC) &F77_NAME(eseee), 12}, {"eseei", (DL_FUNC) &F77_NAME(eseei), 11}, {"eseev", (DL_FUNC) &F77_NAME(eseev), 14}, {"eseii", (DL_FUNC) &F77_NAME(eseii), 10}, {"eseve", (DL_FUNC) &F77_NAME(eseve), 14}, {"esevi", (DL_FUNC) &F77_NAME(esevi), 11}, {"esevv", (DL_FUNC) &F77_NAME(esevv), 14}, {"esvee", (DL_FUNC) &F77_NAME(esvee), 14}, {"esvei", (DL_FUNC) &F77_NAME(esvei), 11}, {"esvev", (DL_FUNC) &F77_NAME(esvev), 14}, {"esvii", (DL_FUNC) &F77_NAME(esvii), 10}, {"esvve", (DL_FUNC) &F77_NAME(esvve), 14}, {"esvvi", (DL_FUNC) &F77_NAME(esvvi), 11}, {"esvvv", (DL_FUNC) &F77_NAME(esvvv), 12}, {"hc1e", (DL_FUNC) &F77_NAME(hc1e), 7}, {"hc1v", (DL_FUNC) &F77_NAME(hc1v), 8}, {"hceee", (DL_FUNC) &F77_NAME(hceee), 12}, {"hceii", (DL_FUNC) &F77_NAME(hceii), 9}, {"hcvii", (DL_FUNC) &F77_NAME(hcvii), 10}, {"hcvvv", (DL_FUNC) &F77_NAME(hcvvv), 14}, {"mcltrw", (DL_FUNC) &F77_NAME(mcltrw), 5}, {"me1e", (DL_FUNC) &F77_NAME(me1e), 12}, {"me1ep", (DL_FUNC) &F77_NAME(me1ep), 16}, {"me1v", (DL_FUNC) &F77_NAME(me1v), 12}, {"me1vp", (DL_FUNC) &F77_NAME(me1vp), 16}, {"meeee", (DL_FUNC) &F77_NAME(meeee), 14}, {"meeeep", (DL_FUNC) &F77_NAME(meeeep), 18}, {"meeei", (DL_FUNC) &F77_NAME(meeei), 14}, {"meeeip", (DL_FUNC) &F77_NAME(meeeip), 18}, {"meeev", (DL_FUNC) &F77_NAME(meeev), 18}, {"meeevp", (DL_FUNC) &F77_NAME(meeevp), 22}, {"meeii", (DL_FUNC) &F77_NAME(meeii), 13}, {"meeiip", (DL_FUNC) &F77_NAME(meeiip), 17}, {"meeve", (DL_FUNC) &F77_NAME(meeve), 26}, {"meevi", (DL_FUNC) &F77_NAME(meevi), 14}, {"meevip", (DL_FUNC) &F77_NAME(meevip), 
18}, {"meevv", (DL_FUNC) &F77_NAME(meevv), 22}, {"mevee", (DL_FUNC) &F77_NAME(mevee), 26}, {"mevei", (DL_FUNC) &F77_NAME(mevei), 17}, {"meveip", (DL_FUNC) &F77_NAME(meveip), 21}, {"mevev", (DL_FUNC) &F77_NAME(mevev), 18}, {"mevevp", (DL_FUNC) &F77_NAME(mevevp), 22}, {"mevii", (DL_FUNC) &F77_NAME(mevii), 13}, {"meviip", (DL_FUNC) &F77_NAME(meviip), 17}, {"mevve", (DL_FUNC) &F77_NAME(mevve), 26}, {"mevvi", (DL_FUNC) &F77_NAME(mevvi), 14}, {"mevvip", (DL_FUNC) &F77_NAME(mevvip), 18}, {"mevvv", (DL_FUNC) &F77_NAME(mevvv), 15}, {"mevvvp", (DL_FUNC) &F77_NAME(mevvvp), 19}, {"mnxiip", (DL_FUNC) &F77_NAME(mnxiip), 10}, {"mnxxip", (DL_FUNC) &F77_NAME(mnxxip), 11}, {"mnxxxp", (DL_FUNC) &F77_NAME(mnxxxp), 11}, {"ms1e", (DL_FUNC) &F77_NAME(ms1e), 7}, {"ms1ep", (DL_FUNC) &F77_NAME(ms1ep), 11}, {"ms1v", (DL_FUNC) &F77_NAME(ms1v), 7}, {"ms1vp", (DL_FUNC) &F77_NAME(ms1vp), 11}, {"mseee", (DL_FUNC) &F77_NAME(mseee), 9}, {"mseeep", (DL_FUNC) &F77_NAME(mseeep), 13}, {"mseei", (DL_FUNC) &F77_NAME(mseei), 9}, {"mseeip", (DL_FUNC) &F77_NAME(mseeip), 13}, {"mseev", (DL_FUNC) &F77_NAME(mseev), 12}, {"mseevp", (DL_FUNC) &F77_NAME(mseevp), 16}, {"mseii", (DL_FUNC) &F77_NAME(mseii), 8}, {"mseiip", (DL_FUNC) &F77_NAME(mseiip), 12}, {"mseve", (DL_FUNC) &F77_NAME(mseve), 18}, {"msevi", (DL_FUNC) &F77_NAME(msevi), 9}, {"msevip", (DL_FUNC) &F77_NAME(msevip), 13}, {"msevv", (DL_FUNC) &F77_NAME(msevv), 14}, {"msvee", (DL_FUNC) &F77_NAME(msvee), 16}, {"msvei", (DL_FUNC) &F77_NAME(msvei), 14}, {"msveip", (DL_FUNC) &F77_NAME(msveip), 18}, {"msvev", (DL_FUNC) &F77_NAME(msvev), 14}, {"msvevp", (DL_FUNC) &F77_NAME(msvevp), 18}, {"msvii", (DL_FUNC) &F77_NAME(msvii), 8}, {"msviip", (DL_FUNC) &F77_NAME(msviip), 12}, {"msvve", (DL_FUNC) &F77_NAME(msvve), 18}, {"msvvi", (DL_FUNC) &F77_NAME(msvvi), 9}, {"msvvip", (DL_FUNC) &F77_NAME(msvvip), 13}, {"msvvv", (DL_FUNC) &F77_NAME(msvvv), 10}, {"msvvvp", (DL_FUNC) &F77_NAME(msvvvp), 14}, {"mvn1d", (DL_FUNC) &F77_NAME(mvn1d), 5}, {"mvn1p", (DL_FUNC) &F77_NAME(mvn1p), 9}, {"mvnxii", (DL_FUNC) &F77_NAME(mvnxii), 6}, {"mvnxxi", (DL_FUNC) &F77_NAME(mvnxxi), 7}, {"mvnxxx", (DL_FUNC) &F77_NAME(mvnxxx), 6}, {"shapeo", (DL_FUNC) &F77_NAME(shapeo), 7}, {"uncholf", (DL_FUNC) &F77_NAME(uncholf), 5}, {NULL, NULL, 0} }; void R_init_mclust(DllInfo *dll) { R_registerRoutines(dll, NULL, NULL, FortranEntries, NULL); R_useDynamicSymbols(dll, FALSE); } mclust/src/mclust.f0000644000176200001440000143640513504406167014051 0ustar liggesusersC modified to avoid printing for calls from Fortran within R double precision function dgam (x) c jan 1984 edition. w. fullerton, c3, los alamos scientific lab. c jan 1994 wpp@ips.id.ethz.ch, ehg@research.att.com declare xsml c jun 2019 renamed function from dgamma to avoid warning with intrinsic c function already named dgamma double precision x, gamcs(42), dxrel, pi, sinpiy, sq2pil, xmax, 1 xmin, y, d9lgmc, dcsevl, d1mach, xsml C external d1mach, d9lgmc, dcsevl, dexp, dint, dlog, dsin, dsqrt, C 1 initds external d1mach, d9lgmc, dcsevl c c series for gam on the interval 0. 
to 1.00000e+00 c with weighted error 5.79e-32 c log weighted error 31.24 c significant figures required 30.00 c decimal places required 32.05 c data gam cs( 1) / +.8571195590 9893314219 2006239994 2 d-2 / data gam cs( 2) / +.4415381324 8410067571 9131577165 2 d-2 / data gam cs( 3) / +.5685043681 5993633786 3266458878 9 d-1 / data gam cs( 4) / -.4219835396 4185605010 1250018662 4 d-2 / data gam cs( 5) / +.1326808181 2124602205 8400679635 2 d-2 / data gam cs( 6) / -.1893024529 7988804325 2394702388 6 d-3 / data gam cs( 7) / +.3606925327 4412452565 7808221722 5 d-4 / data gam cs( 8) / -.6056761904 4608642184 8554829036 5 d-5 / data gam cs( 9) / +.1055829546 3022833447 3182350909 3 d-5 / data gam cs( 10) / -.1811967365 5423840482 9185589116 6 d-6 / data gam cs( 11) / +.3117724964 7153222777 9025459316 9 d-7 / data gam cs( 12) / -.5354219639 0196871408 7408102434 7 d-8 / data gam cs( 13) / +.9193275519 8595889468 8778682594 0 d-9 / data gam cs( 14) / -.1577941280 2883397617 6742327395 3 d-9 / data gam cs( 15) / +.2707980622 9349545432 6654043308 9 d-10 / data gam cs( 16) / -.4646818653 8257301440 8166105893 3 d-11 / data gam cs( 17) / +.7973350192 0074196564 6076717535 9 d-12 / data gam cs( 18) / -.1368078209 8309160257 9949917230 9 d-12 / data gam cs( 19) / +.2347319486 5638006572 3347177168 8 d-13 / data gam cs( 20) / -.4027432614 9490669327 6657053469 9 d-14 / data gam cs( 21) / +.6910051747 3721009121 3833697525 7 d-15 / data gam cs( 22) / -.1185584500 2219929070 5238712619 2 d-15 / data gam cs( 23) / +.2034148542 4963739552 0102605193 2 d-16 / data gam cs( 24) / -.3490054341 7174058492 7401294910 8 d-17 / data gam cs( 25) / +.5987993856 4853055671 3505106602 6 d-18 / data gam cs( 26) / -.1027378057 8722280744 9006977843 1 d-18 / data gam cs( 27) / +.1762702816 0605298249 4275966074 8 d-19 / data gam cs( 28) / -.3024320653 7353062609 5877211204 2 d-20 / data gam cs( 29) / +.5188914660 2183978397 1783355050 6 d-21 / data gam cs( 30) / -.8902770842 4565766924 4925160106 6 d-22 / data gam cs( 31) / +.1527474068 4933426022 7459689130 6 d-22 / data gam cs( 32) / -.2620731256 1873629002 5732833279 9 d-23 / data gam cs( 33) / +.4496464047 8305386703 3104657066 6 d-24 / data gam cs( 34) / -.7714712731 3368779117 0390152533 3 d-25 / data gam cs( 35) / +.1323635453 1260440364 8657271466 6 d-25 / data gam cs( 36) / -.2270999412 9429288167 0231381333 3 d-26 / data gam cs( 37) / +.3896418998 0039914493 2081663999 9 d-27 / data gam cs( 38) / -.6685198115 1259533277 9212799999 9 d-28 / data gam cs( 39) / +.1146998663 1400243843 4761386666 6 d-28 / data gam cs( 40) / -.1967938586 3451346772 9510399999 9 d-29 / data gam cs( 41) / +.3376448816 5853380903 3489066666 6 d-30 / data gam cs( 42) / -.5793070335 7821357846 2549333333 3 d-31 / c data pi / 3.1415926535 8979323846 2643383279 50 d0 / c sq2pil is 0.5*alog(2*pi) = alog(sqrt(2*pi)) data sq2pil / 0.9189385332 0467274178 0329736405 62 d0 / data ngam, xmin, xmax, xsml, dxrel / 0, 4*0.d0 / c if (ngam.ne.0) go to 10 ngam = initds (gamcs, 42, 0.1*sngl(d1mach(3)) ) c call d9gaml (xmin, xmax) xsml = exp (max (log(d1mach(1)), -log(d1mach(2)))+0.01d0) dxrel = sqrt (d1mach(4)) c 10 y = abs(x) if (y.gt.10.d0) go to 50 c c compute gamma(x) for -xbnd .le. x .le. xbnd. reduce interval and find c gamma(1+y) for 0.0 .le. y .lt. 1.0 first of all. c n = int(x) if (x.lt.0.d0) n = n - 1 y = x - dble(float(n)) n = n - 1 dgam = 0.9375d0 + dcsevl (2.d0*y-1.d0, gamcs, ngam) if (n.eq.0) return c if (n.gt.0) go to 30 c c compute gamma(x) for x .lt. 
1.0 c n = -n if (x.eq.0.d0) then dgam = d1mach(2) return endif if (x.lt.0.0d0 .and. x+dble(float(n-2)).eq.0.d0) then dgam = -d1mach(2) return endif if (y.lt.xsml) then dgam = d1mach(2) return endif c do 20 i=1,n dgam = dgam/(x+dble(float(i-1)) ) 20 continue return c c gamma(x) for x .ge. 2.0 and x .le. 10.0 c 30 do 40 i=1,n dgam = (y+dble(float(i))) * dgam 40 continue return c c gamma(x) for dabs(x) .gt. 10.0. recall y = dabs(x). c 50 if (x.gt.xmax) then dgam = d1mach(2) return endif c dgam = 0.d0 if (x.lt.xmin) return c dgam = exp ((y-0.5d0)*log(y) - y + sq2pil + d9lgmc(y) ) if (x.gt.0.d0) return c sinpiy = sin (pi*y) c if (sinpiy.eq.0.d0) then dgam = -d1mach(2) return endif c dgam = -pi/(y*sinpiy*dgam) c return end C modified to omit printing for calls from Fortran within R subroutine d9gaml (xmin, xmax) c june 1977 edition. w. fullerton, c3, los alamos scientific lab. c c calculate the minimum and maximum legal bounds for x in gamma(x). c xmin and xmax are not the only bounds, but they are the only non- c trivial ones to calculate. c c output arguments -- c xmin dble prec minimum legal value of x in gamma(x). any smaller c value of x might result in underflow. c xmax dble prec maximum legal value of x in gamma(x). any larger c value of x might cause overflow. c double precision xmin, xmax, alnbig, alnsml, xln, xold, d1mach C external d1mach, dlog external d1mach c alnsml = log(d1mach(1)) xmin = -alnsml do 10 i=1,10 xold = xmin xln = log(xmin) xmin = xmin - xmin*((xmin+0.5d0)*xln - xmin - 0.2258d0 + alnsml) 1 / (xmin*xln+0.5d0) if (abs(xmin-xold).lt.0.005d0) go to 20 10 continue C call seteru (27hd9gaml unable to find xmin, 27, 1, 2) xmin = d1mach(2) xmax = -d1mach(2) return c 20 xmin = -xmin + 0.01d0 c alnbig = log (d1mach(2)) xmax = alnbig do 30 i=1,10 xold = xmax xln = log(xmax) xmax = xmax - xmax*((xmax-0.5d0)*xln - xmax + 0.9189d0 - alnbig) 1 / (xmax*xln-0.5d0) if (abs(xmax-xold).lt.0.005d0) go to 40 30 continue C call seteru (27hd9gaml unable to find xmax, 27, 2, 2) xmin = d1mach(2) xmax = -d1mach(2) return c 40 xmax = xmax - 0.01d0 xmin = dmax1 (xmin, -xmax+1.d0) c return end double precision function dcsevl (x, a, n) double precision a(n), x, twox, b0, b1, b2 double precision d1mach external d1mach c if (n.lt.1) then dcsevl = -d1mach(2) return endif if (n.gt.1000) then dcsevl = d1mach(2) return endif if (x.lt.(-1.1d0) .or.
x.gt.1.1d0) then dcsevl = d1mach(2) return endif C added by CF to avoid uninitialized warnings b2 = 0 c twox = 2.0d0*x b1 = 0.d0 b0 = 0.d0 do 10 i=1,n b2 = b1 b1 = b0 ni = n - i + 1 b0 = twox*b1 - b2 + a(ni) 10 continue c dcsevl = 0.5d0 * (b0-b2) c return end double precision function d9lgmc (x) double precision x, algmcs(15), xbig, xmax, dcsevl, d1mach external d1mach, dcsevl, initds c data algmcs( 1) / +.1666389480 4518632472 0572965082 2 d+0 / data algmcs( 2) / -.1384948176 0675638407 3298605913 5 d-4 / data algmcs( 3) / +.9810825646 9247294261 5717154748 7 d-8 / data algmcs( 4) / -.1809129475 5724941942 6330626671 9 d-10 / data algmcs( 5) / +.6221098041 8926052271 2601554341 6 d-13 / data algmcs( 6) / -.3399615005 4177219443 0333059966 6 d-15 / data algmcs( 7) / +.2683181998 4826987489 5753884666 6 d-17 / data algmcs( 8) / -.2868042435 3346432841 4462239999 9 d-19 / data algmcs( 9) / +.3962837061 0464348036 7930666666 6 d-21 / data algmcs( 10) / -.6831888753 9857668701 1199999999 9 d-23 / data algmcs( 11) / +.1429227355 9424981475 7333333333 3 d-24 / data algmcs( 12) / -.3547598158 1010705471 9999999999 9 d-26 / data algmcs( 13) / +.1025680058 0104709120 0000000000 0 d-27 / data algmcs( 14) / -.3401102254 3167487999 9999999999 9 d-29 / data algmcs( 15) / +.1276642195 6300629333 3333333333 3 d-30 / c data nalgm, xbig, xmax / 0, 2*0.d0 / c if (nalgm.ne.0) go to 10 nalgm = initds (algmcs, 15, sngl(d1mach(3)) ) xbig = 1.0d0/sqrt(d1mach(3)) xmax = exp (dmin1(log(d1mach(2)/12.d0), -log(12.d0*d1mach(1)))) c 10 if (x.lt.10.d0) then d9lgmc = d1mach(2) return endif c if (x.ge.xmax) go to 20 c d9lgmc = 1.d0/(12.d0*x) if (x.lt.xbig) d9lgmc = dcsevl (2.0d0*(10.d0/x)**2-1.d0, algmcs, 1 nalgm) / x return c 20 d9lgmc = 0.d0 C call seteru (34hd9lgmc x so big d9lgmc underflows, 34, 2, 0) return c end double precision function dlngam (x) double precision x, y, xmax, dxrel, pi double precision sinpiy, sqpi2l, sq2pil double precision d1mach, d9lgmc external d1mach, d9lgmc double precision dgam c external dgamma c data sq2pil / 0.9189385332 0467274178 0329736405 62 d0 / c sq2pil = alog (sqrt(2*pi)), sqpi2l = alog(sqrt(pi/2)) data sqpi2l / +.2257913526 4472743236 3097614947 441 d+0 / data pi / 3.1415926535 8979323846 2643383279 50 d0 / c data xmax, dxrel / 2*0.d0 / c C added by CF to avoid uninitialized warnings dlngam = 0.d0 if (xmax.ne.0.d0) go to 10 xmax = d1mach(2)/dlog(d1mach(2)) dxrel = dsqrt (d1mach(4)) c 10 y = abs (x) if (y.gt.10.d0) go to 20 c c dlog (dabs (dgam(x)) ) for dabs(x) .le. 10.0 c dlngam = log (abs (dgam(x)) ) return c c dlog ( dabs (dgam(x)) ) for dabs(x) .gt. 10.0 c C20 if (y.gt.xmax) call seteru ( C 1 39hdlngam dabs(x) so big dlngam overflows, 39, 2, 2) 20 if (y.gt.xmax) dlngam = d1mach(2) if (y.gt.xmax) return c if (x.gt.0.d0) dlngam = sq2pil + (x-0.5d0)*log(x) - x + d9lgmc(y) if (x.gt.0.d0) return c sinpiy = abs (sin(pi*y)) C if (sinpiy.eq.0.d0) call seteru ( C 1 31hdlngam x is a negative integer, 31, 3, 2) if (sinpiy.eq.0.d0) dlngam = -d1mach(2) if (sinpiy.eq.0.d0) return c dlngam = sqpi2l + (x-0.5d0)*log(y) - x - log(sinpiy) - d9lgmc(y) c C if (dabs((x-dint(x-0.5d0))*dlngam/x).lt.dxrel) call seteru ( C 1 68hdlngam answer lt half precision because x too near negative C 2integer, 68, 1, 1) return c end function initds (dos, nos, eta) double precision dos(nos) integer i1mach external i1mach c C if (nos.lt.1) call seteru ( C 1 35hinitds number of coefficients lt 1, 35, 2, 2) if (nos.lt.1) initds = i1mach(9) c C added by CF to avoid uninitialized warnings i = 0 err = 0. 
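c     determine how many leading Chebyshev coefficients of dos to keep:
c     the loop below scans the coefficients from last to first,
c     accumulating their magnitudes, and returns the first index at
c     which the accumulated tail exceeds eta, so the trailing terms
c     that are dropped contribute no more than eta in absolute value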
do 10 ii=1,nos i = nos + 1 - ii err = err + abs(sngl(dos(i))) if (err.gt.eta) go to 20 10 continue c C20 if (i.eq.nos) call seteru (28hinitds eta may be too small, 28, C 1 1, 2) 20 continue initds = i c return end subroutine absrng( l, v, i, vmin, vmax) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE double precision v(*) integer i, j, k, l double precision temp, vmin, vmax c---------------------------------------------------------------------------- temp = abs(v(1)) vmin = temp vmax = temp if (l .eq. 1) return if (i .eq. 1) then do j = 2, l temp = abs(v(j)) vmin = min(vmin,temp) vmax = max(vmax,temp) end do else k = 1 + i do j = 2, l temp = abs(v(k)) vmin = min(vmin,temp) vmax = max(vmax,temp) k = k + i end do end if return end SUBROUTINE D2NORM ( N, X, INCX, VALUE ) * .. Scalar Arguments .. INTEGER INCX, N * .. Array Arguments .. DOUBLE PRECISION X( * ), VALUE * .. * * DNRM2 returns the euclidean norm of a vector via the function * name, so that * * DNRM2 := sqrt( x'*x ) * * THIS FUNCTION MODELLED AFTER DNRM2 BUT WRITTEN AS A SUBROUTINE * * -- This version written on 25-October-1982. * Modified on 14-October-1993 to inline the call to DLASSQ. * Sven Hammarling, Nag Ltd. * * * .. Parameters .. DOUBLE PRECISION ONE , ZERO PARAMETER ( ONE = 1.0D+0, ZERO = 0.0D+0 ) * .. Local Scalars .. INTEGER IX DOUBLE PRECISION ABSXI, NORM, SCALE, SSQ * .. Intrinsic Functions .. INTRINSIC ABS, SQRT * .. * .. Executable Statements .. IF( N.LT.1 .OR. INCX.LT.1 )THEN NORM = ZERO ELSE IF( N.EQ.1 )THEN NORM = ABS( X( 1 ) ) ELSE SCALE = ZERO SSQ = ONE * The following loop is equivalent to this call to the LAPACK * auxiliary routine: * CALL DLASSQ( N, X, INCX, SCALE, SSQ ) * DO 10, IX = 1, 1 + ( N - 1 )*INCX, INCX IF( X( IX ).NE.ZERO )THEN ABSXI = ABS( X( IX ) ) IF( SCALE.LT.ABSXI )THEN SSQ = ONE + SSQ*( SCALE/ABSXI )**2 SCALE = ABSXI ELSE SSQ = SSQ + ( ABSXI/SCALE )**2 END IF END IF 10 CONTINUE NORM = SCALE * SQRT( SSQ ) END IF * VALUE = NORM RETURN * * End of D2NORM. * END subroutine mclrup( l, n, v, r, lr) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer l, n, lr double precision cs, sn c double precision v(n), r(lr,n) double precision v(*), r(lr,*) integer i, j, k, m if (l .eq. 1) return k = l - 1 if (k .le. n) then call dcopy( n, v, 1, r(k,1), lr) if (k .eq. 1) return if (n .gt. 1) then i = 1 m = n do j = 2, k call drotg( r(i,i), r(k,i), cs, sn) m = m - 1 call drot( m, r(i,j), lr, r(k,j), lr, cs, sn) i = j end do else call drotg( r(1,1), r(k,1), cs, sn) end if else if (n .gt. 
1) then i = 1 m = n do j = 2, n call drotg( r(i,i), v(i), cs, sn) m = m - 1 call drot( m, r(i,j), lr, v(j), 1, cs, sn) i = j end do end if call drotg( r(n,n), v(n), cs, sn) end if return end subroutine mcltrw( x, n, p, u, ss) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE c integer n, p integer n, p double precision ss c double precision x(n,p), u(p) double precision x(n,*), u(*) double precision ddot external ddot integer i, j double precision fac double precision zero, one parameter (zero = 0.d0, one = 1.d0) c------------------------------------------------------------------------------ c form mean fac = one / sqrt(dble(n)) call dcopy( p, zero, 0, u, 1) do i = 1, n call daxpy( p, fac, x(i,1), n, u, 1) end do c subtract mean and form sum of squares ss = zero do j = 1, p call daxpy( n, (-fac), u(j), 0, x(1,j), 1) ss = ss + ddot(n, x(1,j), 1, x(1,j), 1) end do return end subroutine mclvol( x, n, p, u, v, w, * work, lwork, iwork, liwork, * info) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, p, lwork, liwork, info c integer iwork(liwork) integer iwork(*) c double precision x(n,p), u(p), v(p,p), w(p,p), work(lwork), double precision x(n,*), u(*), v(p,*), w(p,p), work(*) integer i, j double precision temp, dummy, cmin, cmax double precision zero, one parameter (zero = 0.d0, one = 1.d0) double precision EPSMAX parameter (EPSMAX = 2.2204460492503131d-16) double precision FLMAX parameter (FLMAX = 1.7976931348623157D+308) c------------------------------------------------------------------------------ c form mean temp = one / dble(n) call dcopy( p, zero, 0, u, 1) do i = 1, n call daxpy( p, temp, x(i,1), n, u, 1) end do c subtract mean do j = 1, p call daxpy( n, (-one), u(j), 0, x(1,j), 1) end do c if (.false.) then c this gets the eigenvectors but x is overwritten c get right singular vectors c call dgesvd( 'N', 'A', n, p, x, n, u, c * dummy, 1, w, p, work, lwork, info) c if (info .lt. 0) return c if (info .eq. 0) then c lwork = int(work(1)) c do i = 1, p c v(i,i) = w(i,i) c if (i .gt. 1) then c do j = 1, (i-1) c v(i,j) = w(j,i) c v(j,i) = w(i,j) c end do c end if c end do c goto 100 c end if c end if c form crossproduct call dsyrk( 'U', 'T', p, n, one, x, n, zero, w, p) c get eigenvectors do j = 1, p do i = 1, j v(i,j) = w(i,j) end do end do call dsyevd( 'V', 'U', p, v, p, u, * work, lwork, iwork, liwork, info) if (info .lt. 0) return if (info .eq. 0) then lwork = int(work(1)) liwork = iwork(1) goto 100 end if c EPSMAX = d1mach(4) call dsyevx( 'V', 'A', 'U', p, w, p, dummy, dummy, i, i, * sqrt(EPSMAX), j, u, v, p, * work, lwork, iwork(p+1), iwork, info) if (info .ne. 0) return lwork = int(work(1)) liwork = -1 100 continue c FLMAX = d1mach(2) c form xv c vol = one do j = 1, p call dgemv( 'N', n, p, one, x, n, v(1,j), 1, zero, work, 1) cmax = -FLMAX cmin = FLMAX do i = 1, n temp = work(i) if (temp .gt. cmax) cmax = temp if (temp .lt. 
cmin) cmin = temp end do u(j) = cmax - cmin c vol = vol * (cmax - cmin) end do return end subroutine sgnrng( l, v, i, vmin, vmax) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE double precision v(*) integer i, j, k, l double precision temp, vmin, vmax c---------------------------------------------------------------------------- temp = v(1) vmin = temp vmax = temp if (l .eq. 1) return if (i .eq. 1) then do j = 2, l temp = v(j) vmin = min(vmin,temp) vmax = max(vmax,temp) end do else k = 1 + i do j = 2, l temp = v(k) vmin = min(vmin,temp) vmax = max(vmax,temp) k = k + i end do end if return end subroutine shapeo( TRANSP, s, O, l, m, w, info) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE logical TRANSP integer l, m, info c double precision s(l), O(l,l,m), w(l,l) double precision s(*), O(l,l,*), w(l,*) integer j, k double precision temp double precision zero, one parameter (zero = 0.d0, one = 1.d0) c------------------------------------------------------------------------------ if (TRANSP) then do j = 1, l temp = sqrt(s(j)) do k = 1, m call dscal( l, temp, O(j,1,k), l) end do end do do k = 1, m call dsyrk( 'U', 'T', l, l, one, O(1,1,k), l, zero, w, l) do j = 1, l call dcopy( j, w(1,j), 1, O(1,j,k), 1) end do do j = 2, l call dcopy( j-1, w(1,j), 1, O(j,1,k), l) end do end do info = 0 return end if if (.not. TRANSP) then do j = 1, l temp = sqrt(s(j)) do k = 1, m call dscal( l, temp, O(1,j,k), 1) end do end do do k = 1, m call dsyrk( 'U', 'N', l, l, one, O(1,1,k), l, zero, w, l) do j = 1, l call dcopy( j, w(1,j), 1, O(1,j,k), 1) end do do j = 2, l call dcopy( j-1, w(1,j), 1, O(j,1,k), l) end do end do info = 0 return end if info = -1 return end subroutine uncholf ( UPPER, T, l, n, info) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE logical UPPER integer l, n, info c double precision T(abs(n), abs(n)) double precision T( l , * ) integer i, j, k double precision ddot external ddot c------------------------------------------------------------------------------ if (UPPER) then do i = 2, n do j = 1, (i-1) T(i,j) = ddot( j, T(1,i), 1, T(1,j), 1) end do end do do k = 1, n T(k,k) = ddot( k, T(1,k), 1, T(1,k), 1) end do do k = 1, n-1 call dcopy( n-k, T(k+1,k), 1, T(k,k+1), l) end do info = 0 return end if if (.not. 
UPPER) then do i = 2, n do j = 1, (i-1) T(j,i) = ddot( j, T(i,1), l, T(j,1), l) end do end do do k = 1, n T(k,k) = ddot( k, T(k,1), l, T(k,1), l) end do do k = 2, n call dcopy( k-1, T(1,k), 1, T(k,1), l) end do return end if info = -1 return end subroutine wardsw( i, n, d) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer i, n double precision d(*) integer i1, n1, ii, nn, k double precision temp double precision FLMAX parameter (FLMAX = 1.7976931348623157D+308) *----------------------------------------------------------------------------- i1 = i - 1 ii = (i1*(i1-1))/2 + 1 n1 = n - 1 nn = (n1*(n1-1))/2 + 1 c if (i .gt. 1) then call dswap( i1, d(nn), 1, d(ii), 1) c call dcopy( i1, FLMAX, 0, d(nn), 1) ii = ii + i1 + i1 nn = nn + i c end if if (n1 .eq. i) return k = i 100 continue temp = d(ii) d(ii) = d(nn) d(nn) = temp c d(nn) = FLMAX ii = ii + k nn = nn + 1 k = k + 1 if (k .lt. n1) goto 100 c d(nn) = FLMAX return end subroutine es1e ( x, mu, sigsq, pro, n, G, Vinv, hood, z) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, G double precision sigsq, hood, Vinv c double precision x(n), mu(G), pro(G[+1]), z(n,G[+1]) double precision x(*), mu(*), pro( * ), z(n, * ) integer i, k, nz double precision temp, const, muk, prok, tmin, tmax, sum double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision RTMAX parameter (RTMAX = 1.340780792994260d154) double precision RTMIN parameter (RTMIN = 1.49166814624d-154) double precision SMALOG parameter (SMALOG = -708.d0) c------------------------------------------------------------------------------ if (sigsq .le. zero) then hood = FLMAX return end if const = pi2log + log(sigsq) do k = 1, G muk = mu(k) c prok = pro(k) do i = 1, n temp = x(i) - muk c z(i,k) = prok*exp(-(const+(temp*temp)/sigsq)/two) if (sigsq .lt. one .and. * abs(temp) .ge. sqrt(sigsq)*RTMAX) then hood = FLMAX return end if z(i,k) = -(const+(temp*temp)/sigsq)/two end do end do if (pro(1) .lt. zero) return nz = G if (Vinv .gt. zero) then nz = nz + 1 c call dcopy( n, pro(nz)*Vinv, 0, z(1,nz), 1) call dcopy( n, log(Vinv), 0, z(1,nz), 1) end if c hood = zero c do i = 1, n c temp = zero c do k = 1, nz c temp = temp + z(i,k) c end do c hood = hood + log(temp) c call dscal( nz, (one/temp), z(i,1), n) c end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) - tmax if (temp .ge. SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)+tmax) if (sum .lt. one .and. one .ge. 
sum*FLMAX) then hood = FLMAX return end if call dscal( nz, (one/sum), z(i,1), n) end do return end subroutine hc1e ( x, n, ic, ng, ns, nd, d) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, ic(n), ng, ns, nd c double precision x(n), d(ng*(ng-1)/2) double precision x(*), d(*) integer lg, ld, ll, lo, ls integer i, j, k, m integer ni, nj, nij, iopt, jopt, iold, jold integer ij, ici, icj, ii, ik, jk double precision ri, rj, rij, si, sj, sij double precision temp, dij, dopt, dold external wardsw double precision one parameter (one = 1.d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) c------------------------------------------------------------------------------ iopt = 0 jopt = 0 lg = ng ld = (ng*(ng-1))/2 ll = nd-ng lo = nd c call intpr( 'ic', -1, ic, n) c call intpr( 'no. of groups', -1, lg, 1) c group heads should be first among rows of x i = 1 j = 2 1 continue icj = ic(j) if (icj .ne. j) goto 2 if (j .eq. lg) goto 3 i = j j = j + 1 goto 1 2 continue k = i m = j + 1 do j = m, n icj = ic(j) if (icj .gt. k) then k = k + 1 c call dswap( p, x(k,1), n, x(j,1), n) temp = x(k) x(k) = x(j) x(j) = temp ic(j) = ic(k) ic(k) = icj end if end do 3 continue c call intpr( 'ic', -1, ic, n) do j = 1, n i = ic(j) if (i .ne. j) then ic(j) = 0 ni = ic(i) nij = ni + 1 ic(i) = nij ri = dble(ni) rij = dble(nij) sj = sqrt(one/rij) si = sqrt(ri)*sj c update column sum in kth row c call dscal( p, si, x(i,1), n) c call daxpy( p, sj, x(j,1), n, x(i,1), n) x(i) = si*x(i) + sj*x(j) else ic(j) = 1 end if end do c call intpr( 'ic', -1, ic, n) dopt = FLMAX ij = 0 do j = 2, lg nj = ic(j) rj = dble(nj) do i = 1, (j-1) ni = ic(i) ri = dble(ni) nij = ni + nj rij = dble(nij) si = sqrt(ri/rij) sj = sqrt(rj/rij) c call dcopy( p, x(i,1), n, v, 1) c call dscal( p, sj, v, 1) c call daxpy( p, (-si), x(j,1), n, v, 1) c dij = ddot(p, v, 1, v, 1) temp = sj*x(i) - si*x(j) dij = temp*temp ij = ij + 1 d(ij) = dij if (dij .le. dopt) then dopt = dij iopt = i jopt = j end if end do end do c if (.false.) then c i = 1 c ij = 1 c do j = 2, ng c call dblepr( 'dij', -1, d(ij), i) c ij = ij + i c i = j c end do c end if if (ns .eq. 1) then if (iopt .lt. jopt) then x(1) = dble(iopt) ic(1) = jopt else x(1) = dble(jopt) ic(1) = iopt end if d(1) = dopt return end if ls = 1 100 continue ni = ic(iopt) nj = ic(jopt) nij = ni + nj ic(iopt) = nij ic(jopt) = -iopt if (jopt .ne. lg) then call wardsw( jopt, lg, d) m = ic(jopt) ic(jopt) = ic(lg) ic(lg) = m end if si = dble(ni) sj = dble(nj) sij = dble(nij) dold = dopt iold = iopt jold = jopt iopt = -1 jopt = -1 dopt = FLMAX lg = lg - 1 ld = ld - lg ii = (iold*(iold-1))/2 if (iold .gt. 1) then ik = ii - iold + 1 do j = 1, (iold - 1) nj = ic(j) rj = dble(nj) ik = ik + 1 jk = ld + j dij = (rj+si)*d(ik)+(rj+sj)*d(jk) dij = (dij-rj*dold)/(rj+sij) d(ik) = dij end do end if if (iold .lt. lg) then ik = ii + iold i = iold do j = (iold + 1), lg nj = ic(j) rj = dble(nj) jk = ld + j dij = (rj+si)*d(ik)+(rj+sj)*d(jk) dij = (dij-rj*dold)/(rj+sij) d(ik) = dij ik = ik + i i = j end do end if d(lo) = dold lo = lo - 1 d(lo) = dble(iold) lo = lo - 1 d(lo) = dble(jold) lo = lo - 1 c update d and find max jopt = 2 iopt = 1 dopt = d(1) if (lg .eq. 2) goto 900 ij = 1 do i = 2, ld si = d(i) if (si .le. dopt) then ij = i dopt = si end if end do if (ij .gt. 1) then do i = 2, ij iopt = iopt + 1 if (iopt .ge. 
jopt) then jopt = jopt + 1 iopt = 1 end if end do end if ls = ls + 1 if (ls .eq. ns) goto 900 goto 100 900 continue d(lo) = dopt lo = lo - 1 d(lo) = dble(iopt) lo = lo - 1 d(lo) = dble(jopt) do i = 1, ng ic(i) = i end do lo = nd - 1 ld = lo si = d(lo) lo = lo - 1 sj = d(lo) ic(int(sj)) = ng if (si .lt. sj) then x(1) = si d(ld) = sj else x(1) = sj d(ld) = si end if ld = ld - 1 lg = ng + 1 do k = 2, ns lo = lo - 1 d(ld) = d(lo) ld = ld - 1 lo = lo - 1 i = int(d(lo)) ici = ic(i) lo = lo - 1 j = int(d(lo)) icj = ic(j) if (ici .gt. icj) ic(i) = icj ic(j) = ic(lg-k) if (ici .lt. icj) then x(k) = dble(ici) d(ld) = dble(icj) else x(k) = dble(icj) d(ld) = dble(ici) end if ld = ld - 1 end do ld = nd lo = nd - 1 do k = 1, ns ic(k) = int(d(lo)) lo = lo - 1 ld = ld - 1 d(ld) = d(lo) lo = lo - 1 end do ld = nd lo = 1 do k = 1, ns si = d(lo) d(lo) = d(ld) d(ld) = si ld = ld - 1 lo = lo + 1 end do return end subroutine me1e ( EQPRO, x, n, G, Vinv, z, maxi, tol, eps, * mu, sigsq, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE logical EQPRO integer n, G, maxi double precision Vinv, eps, tol c double precision x(n), z(n,G[+1]), mu(G), sigsq, pro(G[+1]) double precision x(*), z(n, * ), mu(*), sigsq, pro( * ) integer nz, iter, k, i double precision hold, hood, err, prok, tmin, tmax, ViLog double precision const, sum, sumz, smu, temp, term, zsum double precision rteps double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision SMALOG parameter (SMALOG = -708.d0) c------------------------------------------------------------------------------ if (maxi .le. 0) return if (Vinv .gt. zero) then nz = G + 1 ViLog = log(Vinv) else nz = G if (EQPRO) call dcopy( nz, one/dble(nz), 0, pro, 1) end if eps = max(eps,zero) tol = max(tol,zero) rteps = sqrt(eps) c FLMAX = d1mach(2) hold = FLMAX/two hood = FLMAX err = FLMAX iter = 0 100 continue iter = iter + 1 sumz = zero sigsq = zero zsum = one do k = 1, G sum = zero smu = zero do i = 1, n temp = z(i,k) sum = sum + temp smu = smu + temp*x(i) end do sumz = sumz + sum if (.not. EQPRO) pro(k) = sum / dble(n) zsum = min(sum,zsum) if (sum .gt. rteps) then smu = smu / sum mu(k) = smu do i = 1, n temp = x(i) - smu temp = temp*temp sigsq = sigsq + z(i,k)*temp z(i,k) = temp end do end if end do if (zsum .le. rteps) then tol = zsum eps = -FLMAX maxi = iter return end if if (Vinv .le. zero) then sigsq = sigsq / dble(n) else sigsq = sigsq / sumz end if if (Vinv .gt. zero) then term = zero do i = 1, n term = term + z(i,nz) end do temp = term / dble(n) pro(nz) = temp call dcopy( n, ViLog, 0, z(1,nz), 1) if (EQPRO) then temp = (one - pro(nz))/dble(G) call dcopy( G, temp, 0, pro, 1) end if end if if (sigsq .le. eps) then tol = err eps = FLMAX maxi = iter return end if const = pi2log + log(sigsq) do k = 1, G c temp = pro(k) do i = 1, n c z(i,k) = temp*exp(-(const+(z(i,k)/sigsq))/two) z(i,k) = -(const+(z(i,k)/sigsq))/two end do end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) - tmax if (temp .ge. 
SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)+tmax) call dscal( nz, (one/sum), z(i,1), n) end do err = abs(hold-hood)/(one+abs(hood)) hold = hood if (err .gt. tol .and. iter .lt. maxi) goto 100 tol = err eps = hood maxi = iter return end subroutine me1ep ( EQPRO, x, n, G, Vinv, * pshrnk, pmu, pscale, pdof, * z, maxi, tol, eps, * mu, sigsq, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE logical EQPRO integer n, G, maxi double precision pshrnk, pmu, pscale, pdof double precision Vinv, eps, tol c double precision x(n), z(n,G[+1]), mu(G), sigsq, pro(G[+1]) double precision x(*), z(n, * ), mu(*), sigsq, pro( * ) integer nz, iter, k, i double precision hold, hood, err, prok, tmin, tmax, ViLog double precision const, sum, sumz, smu, temp, term, zsum double precision pmupmu, cgam, cmu, rmu, rgam, rteps double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision SMALOG parameter (SMALOG = -708.d0) double precision dlngam external dlngam c------------------------------------------------------------------------------ if (maxi .le. 0) return if (Vinv .gt. zero) then nz = G + 1 ViLog = log(Vinv) else nz = G if (EQPRO) call dcopy( nz, one/dble(nz), 0, pro, 1) end if eps = max(eps,zero) tol = max(tol,zero) rteps = sqrt(eps) c FLMAX = d1mach(2) hold = FLMAX/two hood = FLMAX err = FLMAX pmupmu = pmu*pmu iter = 0 100 continue iter = iter + 1 sigsq = zero zsum = one do k = 1, G sumz = zero smu = zero do i = 1, n temp = z(i,k) sumz = sumz + temp smu = smu + temp*x(i) end do if (.not. EQPRO) pro(k) = sumz / dble(n) zsum = min(zsum,sumz) if (sumz .gt. rteps) then smu = smu/sumz sum = zero do i = 1, n term = x(i) - smu term = term*term sum = sum + z(i,k)*term end do term = (pshrnk*sumz)/(pshrnk+sumz) temp = (pmupmu + smu*smu) - two*pmu*smu sigsq = sigsq + sum + term*temp term = sumz/(pshrnk+sumz) temp = pshrnk/(pshrnk+sumz) mu(k) = term*smu + temp*pmu end if end do if (zsum .le. rteps) then tol = zsum eps = -FLMAX maxi = iter return end if sigsq = (pscale + sigsq)/(pdof + dble(n+G) + two) c if (Vinv .le. zero) then c sigsq = sigsq / dble(n) c else c sigsq = sigsq / sumz c end if term = zero if (Vinv .gt. zero) then do i = 1, n term = term + z(i,nz) end do temp = term / dble(n) pro(nz) = temp call dcopy( n, ViLog, 0, z(1,nz), 1) if (EQPRO) then temp = (one - pro(nz))/dble(G) call dcopy( G, temp, 0, pro, 1) end if end if if (sigsq .le. eps) then tol = err eps = FLMAX maxi = iter return end if const = pi2log + log(sigsq) do k = 1, G c temp = pro(k) do i = 1, n term = x(i) - mu(k) z(i,k) = -(const+((term*term)/sigsq))/two end do end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) - tmax if (temp .ge. SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)+tmax) call dscal( nz, (one/sum), z(i,1), n) end do err = abs(hold-hood)/(one+abs(hood)) hold = hood if (err .gt. tol .and. iter .lt. 
maxi) goto 100 tol = err eps = hood maxi = iter cmu = dble(G)*(pi2log-log(pshrnk))/two sum = zero do k = 1, G temp = pmu - mu(k) temp = temp*temp sum = sum - (pshrnk/sigsq)*temp end do term = log(sigsq) rmu = (sum - dble(G)*term)/two temp = pdof/two cgam = temp*log(pscale/two) - dlngam(temp) rgam = -(temp+one)*term - (pscale/sigsq)/two pdof = (cmu+cgam) + (rmu+rgam) return end subroutine ms1e ( x, z, n, G, mu, sigsq, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, G c double precision x(n), z(n,G), mu(G), sigsq, pro(G) double precision x(*), z(n,*), mu(*), sigsq, pro(*) integer i, k double precision sum, smu, sumz, temp double precision zero, one parameter (zero = 0.d0, one = 1.d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision RTMIN parameter (RTMIN = 1.49166814624d-154) c------------------------------------------------------------------------------ sumz = zero sigsq = zero do k = 1, G sum = zero smu = zero do i = 1, n temp = z(i,k) sum = sum + temp smu = smu + temp*x(i) end do sumz = sumz + sum pro(k) = sum / dble(n) if (sigsq .gt. one .or. smu .le. sum*FLMAX) then smu = smu / sum mu(k) = smu if (sigsq .ne. FLMAX) then do i = 1, n temp = abs(x(i) - smu) sigsq = sigsq + z(i,k)*(temp*temp) end do end if else mu(k) = FLMAX sigsq = FLMAX end if end do c sumz .eq. n when no noise if (sigsq .ne. FLMAX) sigsq = sigsq / sumz return end subroutine ms1ep ( x, z, n, G, * pshrnk, pmu, pscale, pdof, * mu, sigsq, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, G double precision pshrnk, pmu, pscale, pdof c double precision x(n), z(n,G), mu(G), sigsq, pro(G) double precision x(*), z(n,*), mu(*), sigsq, pro(*) integer k, i double precision pmupmu double precision sum, sumz, smu, temp, term double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision RTMIN parameter (RTMIN = 1.49166814624d-154) c------------------------------------------------------------------------------ if (pshrnk .lt. zero) pshrnk = zero pmupmu = pmu*pmu sigsq = zero do k = 1, G sumz = zero smu = zero do i = 1, n temp = z(i,k) sumz = sumz + temp smu = smu + temp*x(i) end do pro(k) = sumz / dble(n) if (sumz .gt. one .or. smu .lt. sumz*FLMAX) then smu = smu/sumz sum = zero term = sumz/(pshrnk+sumz) temp = pshrnk/(pshrnk+sumz) mu(k) = term*smu + temp*pmu if (sigsq .ne. FLMAX) then do i = 1, n term = abs(x(i) - smu) sum = sum + z(i,k)*(term*term) end do term = (pshrnk*sumz)/(pshrnk+sumz) temp = (pmupmu + smu*smu) - two*pmu*smu sigsq = sigsq + sum + term*temp end if else mu(k) = FLMAX sigsq = FLMAX end if end do if (sigsq .ne. FLMAX) then temp = pdof + dble(n) + two if (pshrnk .gt. 
zero) temp = temp + dble(G) sigsq = (pscale + sigsq)/temp end if return end subroutine eseee ( CHOL, x, mu, Sigma, pro, n, p, G, Vinv, * w, hood, z) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE c character CHOL logical CHOL c integer n, p, G integer n, p, G double precision hood, Vinv c double precision x(n,p), w(p), z(n,G[+1]) double precision x(n,*), w(*), z(n, * ) c double precision mu(p,G), Sigma(p,p), pro(G[+1]) double precision mu(p,*), Sigma(p,*), pro( * ) integer info, i, j, k, nz double precision detlog, prok, tmin, tmax double precision umin, umax, const, temp, sum double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision RTMAX parameter (RTMAX = 1.340780792994260d154) double precision RTMIN parameter (RTMIN = 1.49166814624d-154) double precision SMALOG parameter (SMALOG = -708.d0) double precision ddot external ddot c------------------------------------------------------------------------------ c if (CHOL .eq. 'N') then if (.not. CHOL) then c Cholesky factorization call dpotrf( 'U', p, Sigma, p, info) if (info .ne. 0) then w(1) = dble(info) hood = FLMAX return end if end if call absrng( p, Sigma, (p+1), umin, umax) c rc = umin/(one+umax) if (umax .le. one .and. umax .ge. umin*RTMAX) then w(1) = zero hood = FLMAX return end if if (umax .ge. one .and. umin .le. umax*RTMIN) then w(1) = zero hood = FLMAX return end if detlog = zero do j = 1, p detlog = detlog + log(abs(Sigma(j,j))) end do const = dble(p)*pi2log/two + detlog do k = 1, G c prok = pro(k) do i = 1, n call dcopy( p, x(i,1), n, w, 1) call daxpy( p, (-one), mu(1,k), 1, w, 1) call dtrsv( 'U', 'T', 'N', p, Sigma, p, w, 1) temp = ddot( p, w, 1, w, 1)/two c z(i,k) = prok*exp(-(const+temp)) z(i,k) = -(const+temp) end do end do w(1) = zero if (pro(1) .lt. zero) return nz = G if (Vinv .gt. zero) then nz = nz + 1 c call dcopy( n, pro(nz)*Vinv, 0, z(1,nz), 1) call dcopy( n, log(Vinv), 0, z(1,nz), 1) end if c hood = zero c do i = 1, n c sum = zero c do k = 1, nz c sum = sum + z(i,k) c end do c hood = hood + log(sum) c call dscal( nz, (one/sum), z(i,1), n) c end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) - tmax if (temp .ge. SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)+tmax) if (sum .lt. one .and. one .ge. sum*FLMAX) then w(1) = zero hood = FLMAX return end if call dscal( nz, (one/sum), z(i,1), n) end do w(1) = zero return end double precision function detmc2( n, u) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer k, n double precision q double precision u(n,*) double precision zero, two parameter (zero = 0.d0, two = 2.d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) detmc2 = zero do k = 1, n q = u(k,k) if (q .eq. 
zero) then detmc2 = -FLMAX return end if detmc2 = detmc2 + log(abs(q)) end do detmc2 = two*detmc2 return end subroutine meeee ( EQPRO, x, n, p, G, Vinv, z, maxi, tol, eps, * mu, U, pro, w) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE logical EQPRO integer n, p, G, maxi double precision Vinv, eps, tol c double precision x(n,p), z(n,G), w(p) double precision x(n,*), z(n,*), w(*) c double precision mu(p,G), U(p,p), pro(G) double precision mu(p,*), U(p,*), pro(*) integer nz, p1, iter, i, j, k, j1 double precision piterm, sclfac, sumz, sum, zsum double precision cs, sn, umin, umax, rc, detlog, rteps double precision const, hold, hood, err, temp, term double precision prok, tmin, tmax, ViLog double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision SMALOG parameter (SMALOG = -708.d0) double precision ddot external ddot c------------------------------------------------------------------------------ if (maxi .le. 0) return if (Vinv .gt. zero) then nz = G + 1 ViLog = log(Vinv) else nz = G if (EQPRO) call dcopy( G, one/dble(G), 0, pro, 1) end if piterm = dble(p)*pi2log/two p1 = p + 1 eps = max(eps,zero) rteps = sqrt(eps) tol = max(tol,zero) c FLMAX = d1mach(2) hold = FLMAX/two hood = FLMAX err = FLMAX c zero out the lower triangle i = 1 do j = 2, p call dcopy( p-i, zero, 0, U(j,i), 1) i = j end do iter = 0 100 continue iter = iter + 1 do j = 1, p call dcopy( j, zero, 0, U(1,j), 1) end do sumz = zero zsum = one do k = 1, G call dcopy( p, zero, 0, mu(1,k), 1) sum = zero do i = 1, n temp = z(i,k) sum = sum + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) end do sumz = sumz + sum if (.not. EQPRO) pro(k) = sum / dble(n) zsum = min(zsum,sum) if (sum .gt. rteps) then call dscal( p, (one/sum), mu(1,k), 1) do i = 1, n call dcopy( p, x(i,1), n, w, 1) call daxpy( p, (-one), mu(1,k), 1, w, 1) call dscal( p, sqrt(z(i,k)), w, 1) j = 1 do j1 = 2, p call drotg( U(j,j), w(j), cs, sn) call drot( p-j, U(j,j1), p, w(j1), 1, cs, sn) j = j1 end do call drotg( U(p,p), w(p), cs, sn) end do end if end do if (zsum .le. rteps) then tol = zsum eps = -FLMAX hood = eps maxi = iter return end if if (Vinv .le. zero) then sclfac = one/sqrt(dble(n)) else sclfac = one/sqrt(sumz) end if do j = 1, p call dscal( j, sclfac, U(1,j), 1) end do c condition number call absrng( p, U, p1, umin, umax) rc = umin/(one+umax) if (Vinv .gt. zero) then term = zero do i = 1, n term = term + z(i,nz) end do temp = term / dble(n) pro(nz) = temp call dcopy( n, ViLog, 0, z(1,nz), 1) if (EQPRO) then temp = (one - pro(nz))/dble(G) call dcopy( G, temp, 0, pro, 1) end if end if if (rc .le. rteps) then tol = err eps = FLMAX hood = eps maxi = iter return end if detlog = zero do j = 1, p detlog = detlog + log(abs(U(j,j))) end do const = piterm + detlog do k = 1, G c temp = pro(k) do i = 1, n call dcopy( p, x(i,1), n, w, 1) call daxpy( p, (-one), mu(1,k), 1, w, 1) call dtrsv( 'U', 'T', 'N', p, U, p, w, 1) sum = ddot( p, w, 1, w, 1)/two c z(i,k) = temp * exp(-(const+sum)) z(i,k) = -(const+sum) end do end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. 
zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) - tmax if (temp .ge. SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)+tmax) call dscal( nz, (one/sum), z(i,1), n) end do err = abs(hold-hood)/(one+abs(hood)) hold = hood if (err .gt. tol .and. iter .lt. maxi) goto 100 tol = err eps = hood maxi = iter return end subroutine meeeep( EQPRO, x, n, p, G, Vinv, * pshrnk, pmu, pscale, pdof, * z, maxi, tol, eps, * mu, U, pro, w) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE logical EQPRO integer n, p, G, maxi double precision Vinv, eps, tol c double precision pshrnk, pmu(p), pscale(p,p), pdof double precision pshrnk, pmu(*), pscale(p,*), pdof c double precision x(n,p), z(n,G), w(p) double precision x(n,*), z(n,*), w(*) c double precision mu(p,G), U(p,p), pro(G) double precision mu(p,*), U(p,*), pro(*) integer nz, p1, iter, i, j, k, j1 double precision piterm, sclfac, sumz, sum, zsum double precision cs, sn, umin, umax, rc, detlog, rteps double precision const, hold, hood, err, temp, term double precision prok, tmin, tmax, ViLog double precision cmu, cgam, rmu, rgam double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision twolog parameter (twolog = 0.6931471805599453d0) double precision pilog parameter (pilog = 1.144729885849400d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision SMALOG parameter (SMALOG = -708.d0) double precision ddot, dlngam external ddot, dlngam c------------------------------------------------------------------------------ if (pshrnk .lt. zero) pshrnk = zero if (maxi .le. 0) return if (Vinv .gt. zero) then nz = G + 1 ViLog = log(Vinv) else nz = G if (EQPRO) call dcopy( G, one/dble(G), 0, pro, 1) end if piterm = dble(p)*pi2log/two p1 = p + 1 sclfac = one/sqrt(dble(n)) eps = max(eps,zero) rteps = sqrt(eps) tol = max(tol,zero) c FLMAX = d1mach(2) hold = FLMAX/two hood = FLMAX err = FLMAX iter = 0 100 continue iter = iter + 1 c copy pscale to U do j = 1, p call dcopy( p, pscale(1,j), 1, U(1,j), 1) end do sumz = zero zsum = one do k = 1, G sum = zero call dcopy( p, zero, 0, mu(1,k), 1) do i = 1, n temp = z(i,k) sum = sum + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) end do sumz = sumz + sum if (.not. EQPRO) pro(k) = sum / dble(n) zsum = min(zsum,sum) if (sum .gt. rteps) then call dscal( p, (one/sum), mu(1,k), 1) do i = 1, n call dcopy( p, x(i,1), n, w, 1) call daxpy( p, (-one), mu(1,k), 1, w, 1) call dscal( p, sqrt(z(i,k)), w, 1) j = 1 do j1 = 2, p call drotg( U(j,j), w(j), cs, sn) call drot( p-j, U(j,j1), p, w(j1), 1, cs, sn) j = j1 end do call drotg( U(p,p), w(p), cs, sn) end do call dcopy( p, pmu, 1, w, 1) call daxpy( p, (-one), mu(1,k), 1, w, 1) const = sum + pshrnk temp = (sum*pshrnk)/const call dscal( p, sqrt(temp), w, 1) j = 1 do j1 = 2, p call drotg( U(j,j), w(j), cs, sn) call drot( p-j, U(j,j1), p, w(j1), 1, cs, sn) j = j1 end do call drotg( U(p,p), w(p), cs, sn) call dscal( p, sum/const, mu(1,k), 1) call daxpy( p, pshrnk/const, pmu, 1, mu(1,k), 1) end if end do if (zsum .le. 
rteps) then tol = zsum eps = -FLMAX maxi = iter return end if term = pdof + dble(p) + one if (pshrnk .gt. zero) term = term + dble(G) if (Vinv .le. zero) then sclfac = one/sqrt(term+dble(n)) else sclfac = one/sqrt(term+dble(sumz)) end if do j = 1, p call dscal( j, sclfac, U(1,j), 1) end do if (Vinv .gt. zero) then term = zero do i = 1, n term = term + z(i,nz) end do temp = term / dble(n) pro(nz) = temp call dcopy( n, ViLog, 0, z(1,nz), 1) if (EQPRO) then temp = (one - pro(nz))/dble(G) call dcopy( G, temp, 0, pro, 1) end if end if c condition number call absrng( p, U, p1, umin, umax) rc = umin/(one+umax) if (rc .le. rteps) then tol = err eps = FLMAX maxi = iter return end if detlog = zero do j = 1, p detlog = detlog + log(abs(U(j,j))) end do const = piterm + detlog do k = 1, G c temp = pro(k) do i = 1, n call dcopy( p, x(i,1), n, w, 1) call daxpy( p, (-one), mu(1,k), 1, w, 1) call dtrsv( 'U', 'T', 'N', p, U, p, w, 1) sum = ddot( p, w, 1, w, 1)/two c z(i,k) = temp * exp(-(const+sum)) z(i,k) = -(const+sum) end do end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) - tmax if (temp .ge. SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)+tmax) call dscal( nz, (one/sum), z(i,1), n) end do err = abs(hold-hood)/(one+abs(hood)) hold = hood if (err .gt. tol .and. iter .lt. maxi) goto 100 tol = err eps = hood maxi = iter if (pshrnk .gt. zero) then cmu = dble(p)*(log(pshrnk) - pi2log)/two rmu = zero do k = 1, G call daxpy( p, (-one), mu(1,k), 1, pmu, 1) call dtrsv('U','T','N',p,U,p,pmu,1) rmu = rmu + ddot( p, pmu, 1, pmu, 1) end do sum = zero term = zero temp = zero do j = 1, p call dcopy( p, pscale(j,1), p, pmu, 1) c call dtrsv('U','T','N', p, U, p, pmu, 1) i = p-j+1 c call dtrsv('U','T','N', i, U(j,j),i,pmu(j),1) call dtrsv('U','T','N', i, U(j,j),p,pmu(j),1) sum = sum + ddot(i, pmu(j), 1, pmu(j), 1) temp = temp + log(abs(pscale(j,j))) term = term + dlngam((pdof+one-dble(j))/two) end do rmu = -(detlog+pshrnk*rmu/two) const = -dble(p)*(pdof*twolog+(dble(p)-one)*pilog/two) cgam = (const/two-pdof*temp) - term rgam = -((pdof+dble(p)+one)*detlog + sum/two) pdof = (dble(G)*cmu+rmu) + (cgam+rgam) else pdof = FLMAX end if return end subroutine mseee ( x, z, n, p, G, w, mu, U, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, p, G c double precision x(n,p), z(n,G), w(p) double precision x(n,*), z(n,*), w(*) c double precision mu(p,G), U(p,p), pro(G) double precision mu(p,*), U(p,*), pro(*) c------------------------------------------------------------------------------ c c x double (input) (n,p) matrix of observations. c z double (input) (n,G) conditional probabilities. c n integer (input) number of observations. c p integer (input) dimension of the data. c G integer (input) number of Gaussian clusters in the mixture. c w double (scratch) (p) c mu double (output) (p,G) mean for each group. c U double (output) (p,p) upper triangular Cholesky factor of the c common covariance matrix for the groups: transpose(U) * U = Sigma. c pro double (output) (G) mixing proportions (ignore result if equal). 
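c
c As an illustration only (a hedged sketch assuming the exported mclust
c R functions mstep()/mstepEEE(); none of the names below are defined
c in this file), the quantities computed here correspond to the
c R-level call
c
c     fit <- mstep(modelName = "EEE", data = x, z = z)
c     fit$parameters$mean            # (p,G) matrix of group means
c     fit$parameters$variance$Sigma  # common EEE covariance matrix
c     fit$parameters$pro             # mixing proportions
c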
integer i, j, k, j1 double precision sum, sumz, zsum, temp, cs, sn double precision zero, one parameter (zero = 0.d0, one = 1.d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) c------------------------------------------------------------------------------ do j = 1, p call dcopy( p, zero, 0, U(1,j), 1) end do sumz = zero zsum = one do k = 1, G sum = zero call dcopy( p, zero, 0, mu(1,k), 1) do i = 1, n temp = z(i,k) sum = sum + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) end do sumz = sumz + sum pro(k) = sum / dble(n) if (sum .gt. one .or. one .gt. sum*FLMAX) then zsum = min(zsum,sum) call dscal( p, (one/sum), mu(1,k), 1) do i = 1, n call dcopy( p, x(i,1), n, w, 1) call daxpy( p, (-one), mu(1,k), 1, w, 1) call dscal( p, sqrt(z(i,k)), w, 1) j = 1 do j1 = 2, p call drotg( U(j,j), w(j), cs, sn) call drot( p-j, U(j,j1), p, w(j1), 1, cs, sn) j = j1 end do call drotg( U(p,p), w(p), cs, sn) end do else zsum = zero call dcopy( p, FLMAX, 0, mu(1,k), 1) end if end do if (zsum .eq. zero) return c sumz .eq. n when no noise do j = 1, p call dscal( j, one/sqrt(sumz), U(1,j), 1) end do return end subroutine mseeep( x, z, n, p, G, * pshrnk, pmu, pscale, pdof, * w, mu, U, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, p, G c double precision x(n,p), z(n,G), w(p) double precision x(n,*), z(n,*), w(*) c double precision pshrnk, pmu(p), pscale(p,p), pdof double precision pshrnk, pmu(*), pscale(p,*), pdof c double precision mu(p,G), U(p,p), pro(G) double precision mu(p,*), U(p,*), pro(*) integer i, j, k, j1 double precision sclfac, const, temp double precision sum, sumz, zsum, cs, sn double precision zero, one parameter (zero = 0.d0, one = 1.d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) c------------------------------------------------------------------------------ if (pshrnk .le. zero) pshrnk = zero do j = 1, p call dcopy( p, pscale(1,j), 1, U(1,j), 1) end do sumz = zero zsum = one do k = 1, G sum = zero call dcopy( p, zero, 0, mu(1,k), 1) do i = 1, n temp = z(i,k) sum = sum + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) end do sumz = sumz + sum pro(k) = sum / dble(n) if (sum .ge. one .or. one .gt. sum*FLMAX) then zsum = min(zsum,sum) call dscal( p, (one/sum), mu(1,k), 1) do i = 1, n call dcopy( p, x(i,1), n, w, 1) call daxpy( p, (-one), mu(1,k), 1, w, 1) call dscal( p, sqrt(z(i,k)), w, 1) j = 1 do j1 = 2, p call drotg( U(j,j), w(j), cs, sn) call drot( p-j, U(j,j1), p, w(j1), 1, cs, sn) j = j1 end do call drotg( U(p,p), w(p), cs, sn) end do call dcopy( p, pmu, 1, w, 1) call daxpy( p, (-one), mu(1,k), 1, w, 1) const = sum + pshrnk temp = (sum*pshrnk)/const call dscal( p, sqrt(temp), w, 1) j = 1 do j1 = 2, p call drotg( U(j,j), w(j), cs, sn) call drot( p-j, U(j,j1), p, w(j1), 1, cs, sn) j = j1 end do call drotg( U(p,p), w(p), cs, sn) call dscal( p, sum/const, mu(1,k), 1) call daxpy( p, pshrnk/const, pmu, 1, mu(1,k), 1) else zsum = zero call dcopy( p, FLMAX, 0, mu(1,k), 1) end if end do if (zsum .eq. zero) return temp = pdof+dble(n+p+1) if (pshrnk .gt. 
zero) temp = temp + dble(G) sclfac = one/sqrt(temp) do j = 1, p call dscal( j, sclfac, U(1,j), 1) end do return end subroutine eseei ( x, mu, scale, shape, pro, n, p, G, * Vinv, hood, z) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, p, G double precision scale, hood, Vinv c double precision x(n,p), z(n,G[+1]) double precision x(n,*), z(n, * ) c double precision mu(p,G), shape(p), pro(G[+1]) double precision mu(p,*), shape(*), pro( * ) integer i, j, k, nz double precision sum, temp, const, tmin, tmax double precision smin, smax, prok double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision RTMIN parameter (RTMIN = 1.49166814624d-154) double precision SMALOG parameter (SMALOG = -708.d0) c------------------------------------------------------------------------------ if (scale .le. zero) then hood = FLMAX return end if call sgnrng( p, shape, 1, smin, smax) if (smin .le. zero) then hood = FLMAX return end if temp = sqrt(scale) do j = 1, p shape(j) = temp*sqrt(shape(j)) end do const = dble(p)*(pi2log+log(scale)) do k = 1, G c prok = pro(k) do i = 1, n sum = zero do j = 1, p temp = x(i,j) - mu(j,k) if (abs(temp) .ge. shape(j)*FLMAX) then hood = FLMAX return end if temp = temp/shape(j) if (abs(temp) .gt. RTMIN) sum = sum + temp*temp end do c z(i,k) = prok*exp(-(const+sum)/two) z(i,k) = -(const+sum)/two end do end do if (pro(1) .lt. zero) return nz = G if (Vinv .gt. zero) then nz = nz + 1 c call dcopy( n, pro(nz)*Vinv, 0, z(1,nz), 1) call dcopy( n, log(Vinv), 0, z(1,nz), 1) end if c hood = zero c do i = 1, n c sum = zero c do k = 1, nz c sum = sum + z(i,k) c end do c hood = hood + log(sum) c call dscal( nz, (one/sum), z(i,1), n) c end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) - tmax if (temp .ge. SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)+tmax) if (sum .lt. one .and. one .ge. 
sum*FLMAX) then hood = FLMAX return end if call dscal( nz, (one/sum), z(i,1), n) end do return end subroutine meeei ( EQPRO, x, n, p, G, Vinv, z, maxi, tol, eps, * mu, scale, shape, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE logical EQPRO integer n, p, G, maxi double precision Vinv, eps, tol, scale c double precision x(n,p), z(n,G[+1]) double precision x(n,*), z(n, * ) c double precision mu(p,G), shape(p), pro(G[+1]) double precision mu(p,*), shape(*), pro( * ) integer nz, iter, i, j, k double precision sum, sumz, temp, term, zsum double precision const, hold, hood, err, smin, smax double precision prok, tmin, tmax, ViLog, rteps double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision RTMIN parameter (RTMIN = 1.49166814624d-154) double precision SMALOG, BIGLOG parameter (SMALOG = -708.d0, BIGLOG = 709.d0) c------------------------------------------------------------------------------ if (maxi .le. 0) return if (Vinv .gt. zero) then nz = G + 1 ViLog = log(Vinv) else nz = G if (EQPRO) call dcopy( G, one/dble(G), 0, pro, 1) end if eps = max(eps,zero) tol = max(tol,zero) rteps = sqrt(eps) c FLMAX = d1mach(2) hold = FLMAX/two hood = FLMAX err = FLMAX iter = 0 100 continue iter = iter + 1 call dcopy( p, zero, 0, shape, 1) sumz = zero zsum = one do k = 1, G call dcopy( p, zero, 0, mu(1,k), 1) sum = zero do i = 1, n temp = z(i,k) sum = sum + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) z(i,k) = sqrt(temp) end do sumz = sumz + sum if (.not. EQPRO) pro(k) = sum/dble(n) zsum = min(zsum,sum) if (sum .gt. rteps) then call dscal( p, (one/sum), mu(1,k), 1) do j = 1, p sum = zero do i = 1, n temp = z(i,k)*(x(i,j) - mu(j,k)) sum = sum + temp*temp end do shape(j) = shape(j) + sum end do end if end do if (zsum .le. rteps) then tol = zsum eps = -FLMAX maxi = iter return end if call sgnrng(p, shape, 1, smin, smax) if (smin .le. zero) then scale = zero tol = err eps = FLMAX maxi = iter return end if sum = zero do j = 1, p sum = sum + log(shape(j)) end do temp = sum/dble(p) if (temp .gt. BIGLOG) then scale = FLMAX tol = err eps = FLMAX maxi = iter return end if if (temp .gt. SMALOG) then temp = exp(temp) else temp = zero end if if (Vinv .le. zero) then scale = temp/dble(n) else scale = temp/sumz end if if (temp .le. eps) then tol = err eps = FLMAX maxi = iter return end if call dscal( p, one/temp, shape, 1) call sgnrng(p, shape, 1, smin, smax) if (smin .le. eps .or. scale .le. eps) then tol = err eps = FLMAX maxi = iter return end if if (Vinv .gt. zero) then term = zero do i = 1, n term = term + z(i,nz) end do temp = term / dble(n) pro(nz) = temp call dcopy( n, ViLog, 0, z(1,nz), 1) if (EQPRO) then temp = (one - pro(nz))/dble(G) call dcopy( G, temp, 0, pro, 1) end if end if const = dble(p)*(pi2log+log(scale)) do k = 1, G c prok = pro(k) do i = 1, n sum = zero do j = 1, p temp = x(i,j) - mu(j,k) sum = sum + (temp*temp)/shape(j) end do c z(i,k) = prok*exp(-(const+(sum/scale))/two) z(i,k) = -(const+(sum/scale))/two end do end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. 
zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) - tmax if (temp .ge. SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)+tmax) call dscal( nz, (one/sum), z(i,1), n) end do err = abs(hold-hood)/(one+abs(hood)) hold = hood if (err .gt. tol .and. iter .lt. maxi) goto 100 tol = err eps = hood maxi = iter return end subroutine meeeip( EQPRO, x, n, p, G, Vinv, * pshrnk, pmu, pscale, pdof, * z, maxi, tol, eps, * mu, scale, shape, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE logical EQPRO integer n, p, G, maxi c double precision pshrnk, pmu(p), pscale, pdof double precision pshrnk, pmu(*), pscale, pdof double precision Vinv, eps, tol, scale c double precision x(n,p), z(n,G[+1]) double precision x(n,*), z(n, * ) c double precision mu(p,G), shape(p), pro(G[+1]) double precision mu(p,*), shape(*), pro( * ) integer nz, iter, i, j, k double precision sum, sumz, temp, term, zsum double precision const, hold, hood, err, smin, smax double precision prok, tmin, tmax, ViLog, rteps double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision RTMIN parameter (RTMIN = 1.49166814624d-154) double precision SMALOG, BIGLOG parameter (SMALOG = -708.d0, BIGLOG = 709.d0) c------------------------------------------------------------------------------ if (pshrnk .lt. zero) pshrnk = zero if (maxi .le. 0) return if (Vinv .gt. zero) then nz = G + 1 ViLog = log(Vinv) else nz = G if (EQPRO) call dcopy( G, one/dble(G), 0, pro, 1) end if eps = max(eps,zero) tol = max(tol,zero) rteps = sqrt(eps) c FLMAX = d1mach(2) hold = FLMAX/two hood = FLMAX err = FLMAX iter = 0 100 continue iter = iter + 1 call dcopy( p, pscale, 0, shape, 1) sumz = zero zsum = one do k = 1, G sum = zero call dcopy( p, zero, 0, mu(1,k), 1) do i = 1, n temp = z(i,k) sum = sum + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) z(i,k) = sqrt(temp) end do sumz = sumz + sum if (.not. EQPRO) pro(k) = sum/dble(n) zsum = min(zsum,sum) if (sum .gt. rteps) then call dscal( p, (one/sum), mu(1,k), 1) term = pshrnk + sum const = (pshrnk*sum)/term do j = 1, p sum = zero do i = 1, n temp = z(i,k)*(x(i,j) - mu(j,k)) sum = sum + (temp*temp) end do shape(j) = shape(j) + sum temp = pmu(j) - mu(j,k) shape(j) = shape(j) + const*(temp*temp) end do end if end do if (zsum .le. rteps) then tol = zsum eps = -FLMAX maxi = iter return end if call sgnrng(p, shape, 1, smin, smax) if (smin .le. zero) then scale = zero tol = err eps = FLMAX maxi = iter return end if sum = zero do j = 1, p sum = sum + log(shape(j)) end do temp = sum/dble(p) if (temp .gt. BIGLOG) then scale = FLMAX tol = err eps = FLMAX maxi = iter return end if if (temp .gt. SMALOG) then temp = exp(temp) else temp = zero end if term = pdof + one if (pshrnk .gt. zero) term = term + one if (Vinv .le. zero) then scale = temp/(term + dble(n)) else scale = temp/(term + sumz) end if if (temp .le. eps) then tol = err eps = FLMAX maxi = iter return end if call dscal( p, one/temp, shape, 1) call sgnrng(p, shape, 1, smin, smax) if (smin .le. eps .or. scale .le. 
eps) then tol = err eps = FLMAX maxi = iter return end if if (Vinv .gt. zero) then term = zero do i = 1, n term = term + z(i,nz) end do temp = term / dble(n) pro(nz) = temp call dcopy( n, ViLog, 0, z(1,nz), 1) if (EQPRO) then temp = (one - pro(nz))/dble(G) call dcopy( G, temp, 0, pro, 1) end if end if const = dble(p)*(pi2log+log(scale)) do k = 1, G c prok = pro(k) do i = 1, n sum = zero do j = 1, p temp = x(i,j) - mu(j,k) sum = sum + (temp*temp)/shape(j) end do c z(i,k) = prok*exp(-(const+(sum/scale))/two) z(i,k) = -(const+(sum/scale))/two end do end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) - tmax if (temp .ge. SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)+tmax) call dscal( nz, (one/sum), z(i,1), n) end do err = abs(hold-hood)/(one+abs(hood)) hold = hood if (err .gt. tol .and. iter .lt. maxi) goto 100 tol = err eps = hood maxi = iter return end subroutine mseei ( x, z, n, p, G, mu, scale, shape, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, p, G c double precision x(n,p), z(n,G) double precision x(n,*), z(n,*) c double precision mu(p,G), scale, shape(p), pro(G) double precision mu(p,*), scale, shape(*), pro(*) integer i, j, k double precision sum, sumz, temp, smin, smax double precision zero, one parameter (zero = 0.d0, one = 1.d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision RTMIN parameter (RTMIN = 1.49166814624d-154) double precision SMALOG, BIGLOG parameter (SMALOG = -708.d0, BIGLOG = 709.d0) c------------------------------------------------------------------------------ sumz = zero do k = 1, G sum = zero call dcopy( p, zero, 0, mu(1,k), 1) do i = 1, n temp = z(i,k) sum = sum + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) z(i,k) = sqrt(temp) end do sumz = sumz + sum pro(k) = sum/dble(n) if (sum .gt. one .or. one .lt. sum*FLMAX) then call dscal( p, (one/sum), mu(1,k), 1) else call dcopy( p, FLMAX, 0, mu(1,k), 1) end if end do call dcopy( p, zero, 0, shape, 1) do j = 1, p sum = zero do i = 1, n do k = 1, G if (mu(1,k) .eq. FLMAX) then scale = FLMAX return end if temp = z(i,k)*(x(i,j) - mu(j,k)) if (abs(temp) .gt. RTMIN) sum = sum + temp*temp end do end do shape(j) = shape(j) + sum end do call sgnrng(p, shape, 1, smin, smax) if (smin .eq. zero) then scale = zero return end if sum = zero do j = 1, p sum = sum + log(shape(j)) end do temp = sum/dble(p) if (temp .gt. BIGLOG) then scale = FLMAX call dcopy( p, FLMAX, 0, shape, 1) return end if if (temp .gt. SMALOG) then temp = exp(temp) else temp = zero end if if (sumz .lt. one .and. temp .ge. sumz*FLMAX) then scale = FLMAX call dcopy( p, FLMAX, 0, shape, 1) return end if scale = temp/sumz if (temp .lt. one .and. one .ge. 
temp*FLMAX) then call dcopy( p, FLMAX, 0, shape, 1) return end if call dscal( p, one/temp, shape, 1) return end subroutine mseeip( x, z, n, p, G, * pshrnk, pmu, pscale, pdof, * mu, scale, shape, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, p, G c double precision pshrnk, pmu(p), pscale, pdof double precision pshrnk, pmu(*), pscale, pdof c double precision x(n,p), z(n,G) double precision x(n,*), z(n,*) c double precision mu(p,G), scale, shape(p), pro(G[+1]) double precision mu(p,*), scale, shape(*), pro( * ) integer i, j, k double precision sum, sumz, temp, term double precision const, smin, smax double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision RTMIN parameter (RTMIN = 1.49166814624d-154) double precision SMALOG, BIGLOG parameter (SMALOG = -708.d0, BIGLOG = 709.d0) c------------------------------------------------------------------------------ if (pshrnk .lt. zero) pshrnk = zero call dcopy( p, pscale, 0, shape, 1) sumz = zero do k = 1, G sum = zero call dcopy( p, zero, 0, mu(1,k), 1) do i = 1, n temp = z(i,k) sum = sum + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) z(i,k) = sqrt(temp) end do sumz = sumz + sum pro(k) = sum/dble(n) if (sum .gt. one .or. one .lt. sum*FLMAX) then call dscal( p, (one/sum), mu(1,k), 1) term = pshrnk + sum const = (pshrnk*sum)/term do j = 1, p sum = zero do i = 1, n temp = z(i,k)*(x(i,j) - mu(j,k)) if (abs(temp) .gt. RTMIN) sum = sum + temp*temp end do shape(j) = shape(j) + sum temp = pmu(j) - mu(j,k) shape(j) = shape(j) + const*(temp*temp) end do else call dcopy( p, FLMAX, 0, mu(1,k), 1) end if end do call sgnrng(p, shape, 1, smin, smax) if (smin .eq. zero) then scale = zero return end if sum = zero do j = 1, p sum = sum + log(shape(j)) end do temp = sum/dble(p) if (temp .ge. BIGLOG) then scale = FLMAX call dcopy( p, FLMAX, 0, shape, 1) return end if if (temp .gt. SMALOG) then smin = exp(temp) else smin = zero end if term = pdof + sumz + two if (pshrnk .gt. zero) term = term + dble(G) scale = smin/term if (smin .lt. one .and. one .ge. smin*FLMAX) then call dcopy( p, FLMAX, 0, shape, 1) return end if call dscal( p, one/smin, shape, 1) return end subroutine eseev ( x, mu, scale, shape, O, pro, n, p, G, * Vinv, v, w, hood, z) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE c integer n, p, G integer n, p, G double precision scale, Vinv, hood c double precision x(n,p), v(p), w(p), z(n,G[+1]) double precision x(n,*), v(*), w(*), z(n, * ) c double precision mu(p,G), shape(p), O(p,p,G), pro(G[+1]) double precision mu(p,*), shape(*), O(p,p,*), pro( * ) integer i, j, k, nz double precision const, temp, tmin, tmax double precision smin, smax, prok, sum double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision SMALOG parameter (SMALOG = -708.d0) double precision ddot external ddot c------------------------------------------------------------------------------ if (scale .le. 
zero) then hood = FLMAX return end if call sgnrng( p, shape, 1, smin, smax) if (smin .le. zero) then hood = FLMAX return end if temp = sqrt(scale) do j = 1, p shape(j) = temp*sqrt(shape(j)) end do const = dble(p)*(pi2log + log(scale)) do k = 1, G c prok = pro(k) do i = 1, n call dcopy( p, x(i,1), n, w, 1) call daxpy( p, (-one), mu(1,k), 1, w, 1) call dgemv( 'N', p, p, one, O(1,1,k), p, * w, 1, zero, v, 1) do j = 1, p if (shape(j) .lt. one .and. * abs(v(j)) .ge. shape(j)*FLMAX) then hood = FLMAX return end if v(j) = v(j)/shape(j) end do temp = ddot( p, v, 1, v, 1) c z(i,k) = prok*exp(-(const+temp)/two) z(i,k) = -(const+temp)/two end do end do if (pro(1) .lt. zero) return nz = G if (Vinv .gt. zero) then nz = nz + 1 c call dcopy( n, pro(nz)*Vinv, 0, z(1,nz), 1) call dcopy( n, log(Vinv), 0, z(1,nz), 1) end if c hood = zero c do i = 1, n c sum = zero c do k = 1, nz c sum = sum + z(i,k) c end do c hood = hood + log(sum) c call dscal( nz, (one/sum), z(i,1), n) c end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) - tmax if (temp .ge. SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)+tmax) if (sum .lt. one .and. one .ge. sum*FLMAX) then hood = FLMAX return end if call dscal( nz, (one/sum), z(i,1), n) end do return end subroutine meeev ( EQPRO, x, n, p, G, Vinv, z, maxi, tol, eps, * lwork, mu, scale, shape, O, pro, w, s) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE logical EQPRO integer n, p, G, maxi, lwork double precision Vinv, eps, tol, scale double precision x(n,*), z(n, * ), w( * ), s(*) double precision mu(p,*), shape(*), O(p,p,*), pro( * ) integer nz, p1, iter, i, j, k, l, j1, info double precision dnp, dummy, temp, term, rteps double precision sumz, sum, smin, smax, cs, sn double precision const, rc, hood, hold, err double precision prok, tmin, tmax, ViLog, zsum double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision SMALOG, BIGLOG parameter (SMALOG = -708.d0, BIGLOG = 709.d0) double precision ddot external ddot c------------------------------------------------------------------------------ if (maxi .le. 0) return if (Vinv .gt. zero) then nz = G + 1 ViLog = log(Vinv) else nz = G if (EQPRO) call dcopy( G, one/dble(G), 0, pro, 1) end if p1 = p + 1 dnp = dble(n*p) eps = max(eps,zero) rteps = sqrt(eps) tol = max(tol,zero) c FLMAX = d1mach(2) hold = FLMAX/two hood = FLMAX err = FLMAX iter = 0 100 continue iter = iter + 1 call dcopy( p, zero, 0, shape, 1) sumz = zero zsum = one l = 0 do k = 1, G call dcopy( p, zero, 0, mu(1,k), 1) do j = 1, p call dcopy( p, zero, 0, O(1,j,k), 1) end do sum = zero do i = 1, n temp = z(i,k) sum = sum + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) end do sumz = sumz + sum zsum = min(zsum,sum) if (.not. EQPRO) pro(k) = sum / dble(n) if (sum .ge. 
rteps) then call dscal( p, (one/sum), mu(1,k), 1) do i = 1, n call dcopy( p, x(i,1), n, w, 1) call daxpy( p, (-one), mu(1,k), 1, w, 1) call dscal( p, sqrt(z(i,k)), w, 1) j = 1 do j1 = 2, p call drotg( O(j,j,k), w(j), cs, sn) call drot( p-j, O(j,j1,k), p, w(j1), 1, cs, sn) j = j1 end do call drotg( O(p,p,k), w(p), cs, sn) end do call dgesvd( 'N', 'O', p, p, O(1,1,k), p, s, * dummy, 1, dummy, 1, w, lwork, info) if (info .ne. 0) then l = info else do j = 1, p temp = s(j) shape(j) = shape(j) + temp*temp end do end if end if end do if (l .ne. 0 .or. zsum .lt. rteps) then lwork = l c w(1) = FLMAX tol = err if (l .ne. 0) then eps = FLMAX else eps = -FLMAX end if maxi = iter return end if term = zero if (Vinv .gt. zero) then do i = 1, n term = term + z(i,nz) end do temp = term / dble(n) pro(nz) = temp call dcopy( n, ViLog, 0, z(1,nz), 1) if (EQPRO) then temp = (one - pro(nz))/dble(G) call dcopy( G, temp, 0, pro, 1) end if end if call sgnrng( p, shape, 1, smin, smax) if (smin .le. zero) then lwork = 0 c w(1) = smin tol = err eps = FLMAX maxi = iter return end if sum = zero do j = 1, p sum = sum + log(shape(j)) end do temp = sum/dble(p) if (temp .gt. BIGLOG) then tol = err eps = FLMAX maxi = iter return end if if (temp .gt. SMALOG) then temp = exp(temp) else temp = zero end if if (Vinv .le. zero) then scale = temp/dble(n) else scale = temp/sumz end if if (temp .le. eps) then lwork = 0 c w(1) = temp tol = err eps = FLMAX maxi = iter return end if call dscal( p, one/temp, shape, 1) call sgnrng( p, shape, 1, smin, smax) if (smin .le. eps) then lwork = 0 c w(1) = -smin tol = err eps = FLMAX maxi = iter return end if if (scale .le. eps) then c w(1) = -scale lwork = 0 tol = err eps = FLMAX maxi = iter return end if temp = sqrt(scale) do j = 1, p w(j) = temp*sqrt(shape(j)) end do call absrng( p, w, 1, smin, smax) rc = smin / (one + smax) if (smin .le. rteps) then c w(1) = -smin lwork = 0 tol = err eps = FLMAX maxi = iter return end if const = dble(p)*(pi2log + log(scale))/two do k = 1, G c temp = pro(k) do i = 1, n call dcopy( p, x(i,1), n, w(p1), 1) call daxpy( p, (-one), mu(1,k), 1, w(p1), 1) call dgemv( 'N', p, p, one, O(1,1,k), p, w(p1), 1, zero, s, 1) do j = 1, p s(j) = s(j) / w(j) end do sum = ddot( p, s, 1, s, 1)/two c z(i,k) = temp*exp(-(const+sum)) z(i,k) = -(const+sum) end do end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) - tmax if (temp .ge. SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)+tmax) call dscal( nz, (one/sum), z(i,1), n) end do err = abs(hold-hood)/(one+abs(hood)) hold = hood if (err .gt. tol .and. iter .lt. 
maxi) goto 100 lwork = 0 c w(1) = rc tol = err eps = hood maxi = iter return end subroutine meeevp( EQPRO, x, n, p, G, Vinv, * pshrnk, pmu, pscale, pdof, * z, maxi, tol, eps, * lwork, mu, scale, shape, O, pro, w, s) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE logical EQPRO integer n, p, G, maxi, lwork c double precision pshrnk, pmu(p), pscale(p,p), pdof double precision pshrnk, pmu(*), pscale(p,*), pdof double precision Vinv, eps, tol, scale c double precision x(n,p), z(n,G[+1]), w(lwork), s(p) double precision x(n,*), z(n, * ), w( * ), s(*) c double precision mu(p,G), shape(p), O(p,p,G), pro(G[+1]) double precision mu(p,*), shape(*), O(p,p,*), pro( * ) integer nz, p1, iter, i, j, k, l, j1, info double precision dnp, dummy, temp, term, rteps double precision sumz, sum, smin, smax, cs, sn double precision const, rc, hood, hold, err double precision prok, tmin, tmax, ViLog, zsum double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision SMALOG, BIGLOG parameter (SMALOG = -708.d0, BIGLOG = 709.d0) double precision ddot external ddot c------------------------------------------------------------------------------ if (pshrnk .lt. zero) pshrnk = zero if (maxi .le. 0) return if (Vinv .gt. zero) then nz = G + 1 ViLog = log(Vinv) else nz = G if (EQPRO) call dcopy( G, one/dble(G), 0, pro, 1) end if p1 = p + 1 dnp = dble(n*p) eps = max(eps,zero) rteps = sqrt(eps) tol = max(tol,zero) hold = FLMAX/two hood = FLMAX err = FLMAX iter = 0 100 continue iter = iter + 1 call dcopy( p, zero, 0, shape, 1) zsum = one sumz = zero l = 0 do k = 1, G call dcopy( p, zero, 0, mu(1,k), 1) do j = 1, p call dcopy( p, pscale(1,j), 1, O(1,j,k), 1) end do sum = zero do i = 1, n temp = z(i,k) sum = sum + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) end do sumz = sumz + sum if (.not. EQPRO) pro(k) = sum / dble(n) zsum = min(zsum,sum) if (sum .gt. rteps) then call dscal( p, (one/sum), mu(1,k), 1) do i = 1, n call dcopy( p, x(i,1), n, w, 1) call daxpy( p, (-one), mu(1,k), 1, w, 1) call dscal( p, sqrt(z(i,k)), w, 1) j = 1 do j1 = 2, p call drotg( O(j,j,k), w(j), cs, sn) call drot( p-j, O(j,j1,k), p, w(j1), 1, cs, sn) j = j1 end do call drotg( O(p,p,k), w(p), cs, sn) end do call dcopy( p, pmu, 1, w, 1) call daxpy( p, (-one), mu(1,k), 1, w, 1) term = sum+pshrnk const = (sum*pshrnk)/term call dscal( p, sqrt(const), w, 1) j = 1 do j1 = 2, p call drotg( O(j,j,k), w(j), cs, sn) call drot( p-j, O(j,j1,k), p, w(j1), 1, cs, sn) j = j1 end do call drotg( O(p,p,k), w(p), cs, sn) call dscal( p, sum/term, mu(1,k), 1) call daxpy( p, pshrnk/term, pmu, 1, mu(1,k), 1) call dgesvd( 'N', 'O', p, p, O(1,1,k), p, s, * dummy, 1, dummy, 1, w, lwork, info) if (info .ne. 0) then l = info else do j = 1, p temp = s(j) shape(j) = shape(j) + temp*temp end do end if end if end do if (l .ne. 0 .or. zsum .le. rteps) then lwork = l c w(1) = FLMAX tol = err if (l .ne. 0) then eps = FLMAX else eps = -FLMAX end if maxi = iter return end if if (Vinv .gt. 
zero) then term = zero do i = 1, n term = term + z(i,nz) end do temp = term / dble(n) pro(nz) = temp call dcopy( n, ViLog, 0, z(1,nz), 1) if (EQPRO) then temp = (one - pro(nz))/dble(G) call dcopy( G, temp, 0, pro, 1) end if end if call sgnrng( p, shape, 1, smin, smax) if (smin .eq. zero) then c w(1) = smin tol = err eps = FLMAX maxi = iter return end if sum = zero do j = 1, p sum = sum + log(shape(j)) end do temp = sum/dble(p) if (temp .gt. BIGLOG) then tol = err eps = FLMAX maxi = iter return end if if (temp .gt. SMALOG) then temp = exp(temp) else temp = zero end if term = pdof + dble(p) + one if (pshrnk .gt. zero) term = term + one if (Vinv .le. zero) then scale = temp/(term + dble(n)) else scale = temp/(term + sumz) end if if (temp .le. eps) then c w(1) = temp tol = err eps = FLMAX maxi = iter return end if call dscal( p, one/temp, shape, 1) call sgnrng( p, shape, 1, smin, smax) if (smin .le. eps) then c w(1) = -smin tol = err eps = FLMAX maxi = iter return end if if (scale .le. eps) then c w(1) = -scale tol = err eps = FLMAX maxi = iter return end if temp = sqrt(scale) do j = 1, p w(j) = temp*sqrt(shape(j)) end do call sgnrng( p, w, 1, smin, smax) rc = smin / (one + smax) if (smin .le. rteps) then c w(1) = -smin tol = err eps = FLMAX maxi = iter return end if const = dble(p)*(pi2log + log(scale))/two do k = 1, G c temp = pro(k) do i = 1, n call dcopy( p, x(i,1), n, w(p1), 1) call daxpy( p, (-one), mu(1,k), 1, w(p1), 1) call dgemv( 'N', p, p, one, O(1,1,k), p, w(p1), 1, zero, s, 1) do j = 1, p s(j) = s(j) / w(j) end do sum = ddot( p, s, 1, s, 1)/two c z(i,k) = temp*exp(-(const+sum)) z(i,k) = -(const+sum) end do end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) - tmax if (temp .ge. SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)+tmax) call dscal( nz, (one/sum), z(i,1), n) end do err = abs(hold-hood)/(one+abs(hood)) hold = hood if (err .gt. tol .and. iter .lt. 
maxi) goto 100 lwork = 0 c w(1) = rc tol = err eps = hood maxi = iter return end subroutine mseev ( x, z, n, p, G, w, lwork, * mu, scale, shape, O, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, p, G, lwork double precision scale c double precision x(n,p), z(n,G), w(lwork) double precision x(n,*), z(n,*), w( * ) c double precision shape(p), O(p,p,G), mu(p,G), pro(G) double precision shape(*), O(p,p,*), mu(p,*), pro(*) integer i, j, k, j1, l, info double precision dummy, sum, sumz, temp double precision cs, sn, smin, smax double precision zero, one parameter (zero = 0.d0, one = 1.d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision BIGLOG parameter (BIGLOG = 709.d0) double precision SMALOG parameter (SMALOG = -708.d0) c------------------------------------------------------------------------------ call dcopy( p, zero, 0, shape, 1) l = 0 sumz = zero scale = zero do k = 1, G call dcopy( p, zero, 0, mu(1,k), 1) do j = 1, p call dcopy( p, zero, 0, O(1,j,k), 1) end do sum = zero do i = 1, n temp = z(i,k) sum = sum + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) end do sumz = sumz + sum pro(k) = sum / dble(n) if (sum .ge. one .or. one .lt. sum*FLMAX) then call dscal( p, (one/sum), mu(1,k), 1) do i = 1, n call dcopy( p, x(i,1), n, w, 1) call daxpy( p, (-one), mu(1,k), 1, w, 1) call dscal( p, sqrt(z(i,k)), w, 1) j = 1 do j1 = 2, p call drotg( O(j,j,k), w(j), cs, sn) call drot( p-j, O(j,j1,k), p, w(j1), 1, cs, sn) j = j1 end do call drotg( O(p,p,k), w(p), cs, sn) end do call dgesvd( 'N', 'O', p, p, O(1,1,k), p, z(1,k), * dummy, 1, dummy, 1, w, lwork, info) if (info .ne. 0) then l = info else if (scale .ne. FLMAX) then do j = 1, p temp = z(j,k) shape(j) = shape(j) + temp*temp end do end if else scale = FLMAX call dcopy( p, FLMAX, 0, mu(1,k), 1) end if end do if (scale .eq. FLMAX .or. l .ne. 0) then lwork = l if (l .ne. 0) then scale = FLMAX else scale = -FLMAX end if call dcopy( p, FLMAX, 0, shape, 1) return end if lwork = 0 call sgnrng( p, shape, 1, smin, smax) if (smin .eq. zero) then scale = FLMAX return end if sum = zero do j = 1, p sum = sum + log(shape(j)) end do temp = sum/dble(p) if (temp .gt. BIGLOG) then scale = FLMAX call dcopy( p, FLMAX, 0, shape, 1) return end if if (temp .ge. SMALOG) then temp = exp(temp) else temp = zero end if if (temp .ge. sumz*FLMAX) then scale = FLMAX call dcopy( p, FLMAX, 0, shape, 1) return end if scale = temp/sumz if (temp .lt. one .and. one .ge. 
temp*FLMAX) then call dcopy( p, FLMAX, 0, shape, 1) return end if call dscal( p, one/temp, shape, 1) return end subroutine mseevp( x, z, n, p, G, * pshrnk, pmu, pscale, pdof, * w, lwork, mu, scale, shape, O, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, p, G, lwork c double precision pshrnk, pmu(p), pscale(p,p), pdof double precision pshrnk, pmu(*), pscale(p,*), pdof double precision scale c double precision x(n,p), z(n,G), w(lwork) double precision x(n,*), z(n,*), w( * ) c double precision mu(p,G), shape(p), O(p,p,G), pro(G) double precision mu(p,*), shape(*), O(p,p,*), pro(*) integer p1, i, j, k, l, j1, info double precision dummy, temp, term, const double precision sumz, sum, smin, smax, cs, sn double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision BIGLOG parameter (BIGLOG = 709.d0) double precision SMALOG parameter (SMALOG = -708.d0) c------------------------------------------------------------------------------ if (pshrnk .gt. zero) pshrnk = zero p1 = p + 1 call dcopy( p, zero, 0, shape, 1) l = 0 sumz = zero scale = zero do k = 1, G call dcopy( p, zero, 0, mu(1,k), 1) do j = 1, p call dcopy( p, pscale(1,j), 1, O(1,j,k), 1) end do sum = zero do i = 1, n temp = z(i,k) sum = sum + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) end do sumz = sumz + sum pro(k) = sum / dble(n) if (sum .ge. one .or. one .lt. sum*FLMAX) then call dscal( p, (one/sum), mu(1,k), 1) do i = 1, n call dcopy( p, x(i,1), n, w, 1) call daxpy( p, (-one), mu(1,k), 1, w, 1) call dscal( p, sqrt(z(i,k)), w, 1) j = 1 do j1 = 2, p call drotg( O(j,j,k), w(j), cs, sn) call drot( p-j, O(j,j1,k), p, w(j1), 1, cs, sn) j = j1 end do call drotg( O(p,p,k), w(p), cs, sn) end do call dcopy( p, pmu, 1, w, 1) call daxpy( p, (-one), mu(1,k), 1, w, 1) term = sum+pshrnk const = (sum*pshrnk)/term call dscal( p, sqrt(const), w, 1) j = 1 do j1 = 2, p call drotg( O(j,j,k), w(j), cs, sn) call drot( p-j, O(j,j1,k), p, w(j1), 1, cs, sn) j = j1 end do call drotg( O(p,p,k), w(p), cs, sn) call dscal( p, sum/term, mu(1,k), 1) call daxpy( p, pshrnk/term, pmu, 1, mu(1,k), 1) call dgesvd( 'N', 'O', p, p, O(1,1,k), p, z(1,k), * dummy, 1, dummy, 1, w, lwork, info) if (info .ne. 0) then l = info else if (scale .ne. FLMAX) then do j = 1, p temp = z(j,k) shape(j) = shape(j) + temp*temp end do end if else scale = FLMAX call dcopy( p, FLMAX, 0, mu(1,k), 1) end if end do if (scale .eq. FLMAX .or. l .ne. 0) then lwork = l if (l .ne. 0) then scale = FLMAX else scale = -FLMAX end if call dcopy( p, FLMAX, 0, shape, 1) return end if lwork = 0 call sgnrng( p, shape, 1, smin, smax) if (smin .eq. zero) then scale = FLMAX return end if sum = zero do j = 1, p sum = sum + log(shape(j)) end do temp = sum/dble(p) if (temp .gt. BIGLOG) then scale = FLMAX call dcopy( p, FLMAX, 0, shape, 1) return end if if (temp .ge. SMALOG) then temp = exp(temp) else temp = zero end if term = pdof + dble(p) + one if (pshrnk .gt. zero) term = term + one scale = temp/(term + sumz) if (temp .lt. one .and. one .ge. 
temp*FLMAX) then call dcopy( p, FLMAX, 0, shape, 1) return end if call dscal( p, one/temp, shape, 1) return end subroutine eseii ( x, mu, sigsq, pro, n, p, G, Vinv, hood, z) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, p, G double precision sigsq, hood, Vinv c double precision x(n,p), mu(p,G), pro(G[+1]), z(n,G[+1]) double precision x(n,*), mu(p,*), pro( * ), z(n, * ) integer i, j, k, nz double precision sum, temp, const, prok, tmin, tmax double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision SMALOG parameter (SMALOG = -708.d0) c------------------------------------------------------------------------------ if (sigsq .le. zero) then hood = FLMAX return end if const = dble(p)*(pi2log+log(sigsq)) do k = 1, G c prok = pro(k) do i = 1, n sum = zero do j = 1, p temp = x(i,j) - mu(j,k) sum = sum + temp*temp end do c z(i,k) = prok*exp(-(const+sum/sigsq)/two) if (sigsq .lt. one .and. sum .ge. sigsq*FLMAX) then hood = FLMAX return end if z(i,k) = -(const+sum/sigsq)/two end do end do if (pro(1) .lt. zero) return nz = G if (Vinv .gt. zero) then nz = nz + 1 c call dcopy( n, pro(nz)*Vinv, 0, z(1,nz), 1) call dcopy( n, log(Vinv), 0, z(1,nz), 1) end if c hood = zero c do i = 1, n c sum = zero c do k = 1, nz c sum = sum + z(i,k) c end do c hood = hood + log(sum) c call dscal( nz, (one/sum), z(i,1), n) c end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) - tmax if (temp .ge. SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)+tmax) if (sum .lt. one .and. one .ge. sum*FLMAX) then hood = FLMAX return end if call dscal( nz, (one/sum), z(i,1), n) end do return end subroutine hceii ( x, n, p, ic, ng, ns, v, nd, d) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, p, ic(n), ng, ns, nd c double precision x(n,p), v(p), d(ng*(ng-1)/2) double precision x(n,*), v(*), d(*) integer lg, ld, ll, lo, ls integer i, j, k, m integer ni, nj, nij, iopt, jopt, iold, jold integer ij, ici, icj, ii, ik, jk double precision ri, rj, rij, si, sj, sij double precision dij, dopt, dold external wardsw double precision one parameter (one = 1.d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision ddot external ddot c------------------------------------------------------------------------------ iopt = 0 lg = ng ld = (ng*(ng-1))/2 ll = nd-ng lo = nd c call intpr( 'ic', -1, ic, n) c call intpr( 'no. of groups', -1, lg, 1) c group heads should be first among rows of x i = 1 j = 2 1 continue icj = ic(j) if (icj .ne. j) goto 2 if (j .eq. lg) goto 3 i = j j = j + 1 goto 1 2 continue k = i m = j + 1 do j = m, n icj = ic(j) if (icj .gt. 
k) then k = k + 1 call dswap( p, x(k,1), n, x(j,1), n) ic(j) = ic(k) ic(k) = icj end if end do 3 continue c call intpr( 'ic', -1, ic, n) do j = 1, n i = ic(j) if (i .ne. j) then ic(j) = 0 ni = ic(i) nij = ni + 1 ic(i) = nij ri = dble(ni) rij = dble(nij) sj = sqrt(one/rij) si = sqrt(ri)*sj c update column sum in kth row call dscal( p, si, x(i,1), n) call daxpy( p, sj, x(j,1), n, x(i,1), n) else ic(j) = 1 end if end do c call intpr( 'ic', -1, ic, n) dopt = FLMAX ij = 0 do j = 2, lg nj = ic(j) rj = dble(nj) do i = 1, (j-1) ni = ic(i) ri = dble(ni) nij = ni + nj rij = dble(nij) si = sqrt(ri/rij) sj = sqrt(rj/rij) call dcopy( p, x(i,1), n, v, 1) call dscal( p, sj, v, 1) call daxpy( p, (-si), x(j,1), n, v, 1) dij = ddot(p, v, 1, v, 1) ij = ij + 1 d(ij) = dij if (dij .le. dopt) then dopt = dij iopt = i jopt = j end if end do end do c if (.false.) then c i = 1 c ij = 1 c do j = 2, ng c call dblepr( 'dij', -1, d(ij), i) c ij = ij + i c i = j c end do c end if if (ns .eq. 1) then if (iopt .lt. jopt) then x(1,1) = iopt x(1,2) = jopt else x(1,1) = jopt x(1,2) = iopt end if d(1) = dopt return end if ls = 1 100 continue ni = ic(iopt) nj = ic(jopt) nij = ni + nj ic(iopt) = nij ic(jopt) = -iopt if (jopt .ne. lg) then call wardsw( jopt, lg, d) m = ic(jopt) ic(jopt) = ic(lg) ic(lg) = m end if si = dble(ni) sj = dble(nj) sij = dble(nij) dold = dopt iold = iopt jold = jopt iopt = -1 jopt = -1 dopt = FLMAX lg = lg - 1 ld = ld - lg ii = (iold*(iold-1))/2 if (iold .gt. 1) then ik = ii - iold + 1 do j = 1, (iold - 1) nj = ic(j) rj = dble(nj) ik = ik + 1 jk = ld + j dij = (rj+si)*d(ik)+(rj+sj)*d(jk) dij = (dij-rj*dold)/(rj+sij) d(ik) = dij end do end if if (iold .lt. lg) then ik = ii + iold i = iold do j = (iold + 1), lg nj = ic(j) rj = dble(nj) jk = ld + j dij = (rj+si)*d(ik)+(rj+sj)*d(jk) dij = (dij-rj*dold)/(rj+sij) d(ik) = dij ik = ik + i i = j end do end if d(lo) = dold lo = lo - 1 d(lo) = dble(iold) lo = lo - 1 d(lo) = dble(jold) lo = lo - 1 c update d and find max jopt = 2 iopt = 1 dopt = d(1) if (lg .eq. 2) goto 900 ij = 1 do i = 2, ld si = d(i) if (si .le. dopt) then ij = i dopt = si end if end do if (ij .gt. 1) then do i = 2, ij iopt = iopt + 1 if (iopt .ge. jopt) then jopt = jopt + 1 iopt = 1 end if end do end if ls = ls + 1 if (ls .eq. ns) goto 900 goto 100 900 continue d(lo) = dopt lo = lo - 1 d(lo) = dble(iopt) lo = lo - 1 d(lo) = dble(jopt) do i = 1, ng ic(i) = i end do lo = nd - 1 ld = lo si = d(lo) lo = lo - 1 sj = d(lo) ic(int(sj)) = ng if (si .lt. sj) then x(1,1) = si x(1,2) = sj else x(1,1) = sj x(1,2) = si end if lg = ng + 1 do k = 2, ns lo = lo - 1 d(ld) = d(lo) ld = ld - 1 lo = lo - 1 i = int(d(lo)) ici = ic(i) lo = lo - 1 j = int(d(lo)) icj = ic(j) if (ici .gt. icj) ic(i) = icj ic(j) = ic(lg-k) if (ici .lt. 
icj) then x(k,1) = dble(ici) x(k,2) = dble(icj) else x(k,1) = dble(icj) x(k,2) = dble(ici) end if end do ld = nd lo = 1 do k = 1, ns si = d(lo) d(lo) = d(ld) d(ld) = si ld = ld - 1 lo = lo + 1 end do return end subroutine meeii ( EQPRO, x, n, p, G, Vinv, z, maxi, tol, eps, * mu, sigsq, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE logical EQPRO integer n, p, G, maxi double precision Vinv, eps, tol, sigsq c double precision x(n,p), z(n,G[+1]), mu(p,G), pro(G[+1]) double precision x(n,*), z(n, * ), mu(p,*), pro( * ) integer nz, iter, i, j, k double precision sum, sumz, temp, term, prok, tmax, tmin, rteps double precision const, hold, hood, err, dnp, ViLog, zsum double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision RTMIN parameter (RTMIN = 1.49166814624d-154) double precision SMALOG parameter (SMALOG = -708.d0) c------------------------------------------------------------------------------ if (maxi .le. 0) return dnp = dble(n*p) if (Vinv .gt. zero) then nz = G + 1 ViLog = log(Vinv) else nz = G if (EQPRO) call dcopy( G, one/dble(G), 0, pro, 1) end if eps = max(eps,zero) tol = max(tol,zero) rteps = sqrt(eps) c FLMAX = d1mach(2) hold = FLMAX/two hood = FLMAX err = FLMAX iter = 0 100 continue iter = iter + 1 sigsq = zero sumz = zero zsum = one do k = 1, G sum = zero call dcopy( p, zero, 0, mu(1,k), 1) do i = 1, n temp = z(i,k) sum = sum + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) end do sumz = sumz + sum if (.not. EQPRO) pro(k) = sum/dble(n) zsum = min(sum,zsum) if (sum .gt. rteps) then call dscal( p, (one/sum), mu(1,k), 1) do i = 1, n sum = zero do j = 1, p temp = abs(x(i,j) - mu(j,k)) if (temp .gt. RTMIN) sum = sum + temp*temp end do if (sqrt(z(i,k))*sqrt(sum) .gt. RTMIN) * sigsq = sigsq + z(i,k)*sum z(i,k) = sum end do else sigsq = FLMAX call dcopy( p, FLMAX, 0, mu(1,k), 1) end if end do if (zsum .le. rteps) then tol = zsum eps = -FLMAX maxi = iter return end if if (Vinv .le. zero) then sigsq = sigsq / dnp else sigsq = sigsq / (dble(p)*sumz) end if term = zero if (Vinv .gt. zero) then do i = 1, n term = term + z(i,nz) end do temp = term / dble(n) pro(nz) = temp call dcopy( n, ViLog, 0, z(1,nz), 1) if (EQPRO) then temp = (one - pro(nz))/dble(G) call dcopy( G, temp, 0, pro, 1) end if end if if (sigsq .le. eps) then tol = err eps = FLMAX maxi = iter return end if const = dble(p)*(pi2log+log(sigsq)) do k = 1, G c temp = pro(k) do i = 1, n c z(i,k) = temp*exp(-(const+(z(i,k)/sigsq))/two) z(i,k) = -(const+(z(i,k)/sigsq))/two end do end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) - tmax if (temp .ge. SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)+tmax) call dscal( nz, (one/sum), z(i,1), n) end do err = abs(hold-hood)/(one+abs(hood)) hold = hood if (err .gt. tol .and. iter .lt. 
maxi) goto 100 tol = err eps = hood maxi = iter return end subroutine meeiip( EQPRO, x, n, p, G, Vinv, * pshrnk, pmu, pscale, pdof, * z, maxi, tol, eps, * mu, sigsq, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE logical EQPRO integer n, p, G, maxi c double precision pshrnk, pmu(p), pscale, pdof double precision pshrnk, pmu(*), pscale, pdof double precision Vinv, eps, tol, sigsq c double precision x(n,p), z(n,G[+1]), mu(p,G), pro(G[+1]) double precision x(n,*), z(n, * ), mu(p,*), pro( * ) integer nz, iter, i, j, k double precision sum, sumk, sumz, temp, term, tmax, tmin double precision const, hold, hood, err, dnp, ViLog, prok double precision pmupmu, cmu, cgam, rmu, rgam, zsum, rteps double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision SMLOG parameter (SMLOG = -708.d0) double precision ddot, dlngam external ddot, dlngam c------------------------------------------------------------------------------ if (maxi .le. 0) return dnp = dble(n*p) if (Vinv .gt. zero) then nz = G + 1 ViLog = log(Vinv) else nz = G if (EQPRO) call dcopy( G, one/dble(G), 0, pro, 1) end if eps = max(eps,zero) tol = max(tol,zero) rteps = sqrt(eps) c FLMAX = d1mach(2) hold = FLMAX/two hood = FLMAX err = FLMAX iter = 0 pmupmu = ddot( p, pmu, 1, pmu, 1) 100 continue iter = iter + 1 sigsq = zero sumz = zero zsum = one do k = 1, G sumk = zero call dcopy( p, zero, 0, mu(1,k), 1) do i = 1, n temp = z(i,k) sumk = sumk + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) end do sumz = sumz + sumk if (.not. EQPRO) pro(k) = sumk/dble(n) zsum = min(zsum,sumk) if (sumk .gt. rteps) then call dscal( p, (one/sumk), mu(1,k), 1) do i = 1, n sum = zero do j = 1, p temp = x(i,j) - mu(j,k) sum = sum + temp*temp end do sigsq = sigsq + z(i,k)*sum end do temp = pmupmu + ddot( p, mu(1,k), 1, mu(1,k), 1) temp = temp - two*ddot( p, mu(1,k), 1, pmu, 1) const = sumk+pshrnk sigsq = sigsq + ((pshrnk*sumk)/const)*temp call dscal( p, (sumk/const), mu(1,k), 1) call daxpy(p, (pshrnk/const), pmu, 1, mu(1,k), 1) end if end do if (zsum .le. rteps) then tol = zsum eps = -FLMAX maxi = iter return end if term = zero if (Vinv .le. zero) then sigsq = sigsq / (pdof + dble((n+G)*p) + two) else sigsq = sigsq / (pdof + (sumz+dble(G))*dble(p) + two) do i = 1, n term = term + z(i,nz) end do temp = term / dble(n) pro(nz) = temp call dcopy( n, ViLog, 0, z(1,nz), 1) if (EQPRO) then temp = (one - pro(nz))/dble(G) call dcopy( G, temp, 0, pro, 1) end if end if if (sigsq .le. eps) then tol = err eps = FLMAX maxi = iter return end if const = dble(p)*(pi2log+log(sigsq)) do i = 1, n do k = 1, G sum = zero do j = 1, p temp = x(i,j) - mu(j,k) sum = sum + temp*temp end do z(i,k) = -(const+(sum/sigsq))/two end do end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) - tmax if (temp .ge. 
SMLOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)+tmax) call dscal( nz, (one/sum), z(i,1), n) end do err = abs(hold-hood)/(one+abs(hood)) hold = hood if (err .gt. tol .and. iter .lt. maxi) goto 100 tol = err eps = hood maxi = iter cmu = dble(p)*(log(pshrnk)-pi2log)/two const = pdof/two cgam = const*log(pscale/two)-dlngam(const) rmu = zero do k = 1, G temp = pmupmu + ddot( p, mu(1,k), 1, mu(1,k), 1) temp = temp - two*ddot( p, mu(1,k), 1, pmu, 1) rmu = rmu + (pshrnk*temp)/sigsq end do term = log(sigsq) rmu = -(rmu + dble(p)*term)/two rgam = -(const+one)*term - (pscale/sigsq)/two pdof = (dble(G)*cmu+cgam) + (rmu+rgam) return end subroutine mseii ( x, z, n, p, G, mu, sigsq, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, p, G c double precision x(n,p), z(n,G), mu(p,G), sigsq, pro(G) double precision x(n,*), z(n,*), mu(p,*), sigsq, pro(*) integer i, j, k double precision sum, sumz, temp double precision zero, one parameter (zero = 0.d0, one = 1.d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision RTMIN parameter (RTMIN = 1.49166814624d-154) c------------------------------------------------------------------------------ sumz = zero sigsq = zero do k = 1, G sum = zero call dcopy( p, zero, 0, mu(1,k), 1) do i = 1, n temp = z(i,k) sum = sum + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) end do sumz = sumz + sum pro(k) = sum/dble(n) if (sum .ge. one .or. one .lt. sum*FLMAX) then call dscal( p, (one/sum), mu(1,k), 1) if (sigsq .ne. FLMAX) then do i = 1, n sum = zero do j = 1, p temp = abs(x(i,j) - mu(j,k)) if (temp .gt. RTMIN) sum = sum + temp*temp end do if (sqrt(z(i,k))*sqrt(sum) .gt. RTMIN) * sigsq = sigsq + z(i,k)*sum end do end if else sigsq = FLMAX call dcopy( p, FLMAX, 0, mu(1,k), 1) end if end do c sumz .eq. n when no noise if (sigsq .ne. FLMAX) sigsq = sigsq / (sumz*dble(p)) return end subroutine mseiip( x, z, n, p, G, * pshrnk, pmu, pscale, pdof, * mu, sigsq, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, p, G c double precision pshrnk, pmu(p), pscale, pdof double precision pshrnk, pmu(*), pscale, pdof c double precision x(n,p), z(n,G), mu(p,G), sigsq, pro(G) double precision x(n,*), z(n,*), mu(p,*), sigsq, pro(*) integer i, j, k double precision sum, sumz, pmupmu double precision const, temp, dnp double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision RTMIN parameter (RTMIN = 1.49166814624d-154) double precision ddot external ddot c------------------------------------------------------------------------------ if (pshrnk .lt. zero) pshrnk = zero pscale = pscale*1.d0 dnp = dble(n*p) pmupmu = ddot( p, pmu, 1, pmu, 1) sumz = zero sigsq = zero do k = 1, G sum = zero call dcopy( p, zero, 0, mu(1,k), 1) do i = 1, n temp = z(i,k) sum = sum + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) end do sumz = sumz + sum pro(k) = sum/dble(n) if (sum .gt. one .or. one .le. 
sum*FLMAX) then call dscal( p, (one/sum), mu(1,k), 1) temp = pmupmu + ddot( p, mu(1,k), 1, mu(1,k), 1) temp = temp - two*ddot( p, mu(1,k), 1, pmu, 1) const = sum+pshrnk call dscal( p, (sum/const), mu(1,k), 1) call daxpy(p, (pshrnk/const), pmu, 1, mu(1,k), 1) if (sigsq .ne. FLMAX) then sigsq = sigsq + ((pshrnk*sum)/const)*temp do i = 1, n sum = zero do j = 1, p temp = abs(x(i,j) - mu(j,k)) if (temp .gt. RTMIN) sum = sum + temp*temp end do if (sqrt(z(i,k))*sqrt(sum) .gt. RTMIN) * sigsq = sigsq + z(i,k)*sum end do end if else sigsq = FLMAX call dcopy( p, FLMAX, 0, mu(1,k), 1) end if end do if (sigsq .eq. FLMAX) return temp = pdof + sumz*dble(p) + two if (pshrnk .gt. zero) temp = temp + dble(G*p) sigsq = sigsq / temp return end subroutine esevi ( x, mu, scale, shape, pro, n, p, G, * Vinv, hood, z) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, p, G double precision scale, hood, Vinv c double precision x(n,p), z(n,G[+1]) double precision x(n,*), z(n, * ) c double precision mu(p,G), shape(p,G), pro(G[+1]) double precision mu(p,*), shape(p,*), pro( * ) integer i, j, k, nz double precision sum, temp, const, tmin, tmax double precision smin, smax, prok double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision RTMAX parameter (RTMAX = 1.340780792994260d154) double precision RTMIN parameter (RTMIN = 1.49166814624d-154) double precision SMALOG parameter (SMALOG = -708.d0) c------------------------------------------------------------------------------ if (scale .le. zero) then hood = FLMAX return end if do k = 1, G call sgnrng( p, shape(1,k), 1, smin, smax) if (smin .eq. zero) then hood = FLMAX return end if end do temp = sqrt(scale) do k = 1, G do j = 1, p shape(j,k) = temp*sqrt(shape(j,k)) end do end do const = dble(p)*(pi2log+log(scale)) do k = 1, G c prok = pro(k) do i = 1, n sum = zero do j = 1, p temp = x(i,j) - mu(j,k) if (shape(j,k) .lt. one .and. * abs(temp) .ge. shape(j,k)*FLMAX) then hood = FLMAX return end if temp = temp/shape(j,k) if (abs(temp) .ge. RTMAX) then hood = FLMAX return end if if (abs(temp) .gt. RTMIN) sum = sum + temp*temp end do c z(i,k) = prok*exp(-(const+sum)/two) z(i,k) = -(const+sum)/two end do end do if (pro(1) .lt. zero) return nz = G if (Vinv .gt. zero) then nz = nz + 1 c call dcopy( n, pro(nz)*Vinv, 0, z(1,nz), 1) call dcopy( n, log(Vinv), 0, z(1,nz), 1) end if c hood = zero c do i = 1, n c sum = zero c do k = 1, nz c sum = sum + z(i,k) c end do c hood = hood + log(sum) c call dscal( nz, (one/sum), z(i,1), n) c end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) - tmax if (temp .ge. SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)+tmax) if (sum .lt. one .and. one .ge. 
sum*FLMAX) then hood = FLMAX return end if call dscal( nz, (one/sum), z(i,1), n) end do return end subroutine meevi ( EQPRO, x, n, p, G, Vinv, z, maxi, tol, eps, * mu, scale, shape, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE logical EQPRO integer n, p, G, maxi double precision Vinv, eps, tol, scale c double precision x(n,p), z(n,G[+1]) double precision x(n,*), z(n, * ) c double precision mu(p,G), shape(p,G), pro(G[+1]) double precision mu(p,*), shape(p,*), pro( * ) integer nz, iter, i, j, k double precision sum, sumz, temp, term, epsmin double precision hold, hood, err, smin, smax, const double precision prok, tmin, tmax, ViLog, zsum, rteps double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision RTMIN parameter (RTMIN = 1.49166814624d-154) double precision SMALOG, BIGLOG parameter (SMALOG = -708.d0, BIGLOG = 709.d0) c------------------------------------------------------------------------------ if (maxi .le. 0) return if (Vinv .gt. zero) then nz = G + 1 ViLog = log(Vinv) else nz = G if (EQPRO) call dscal( G, one/dble(G), pro, 1) end if eps = max(eps,zero) tol = max(tol,zero) rteps = sqrt(eps) c FLMAX = d1mach(2) hold = FLMAX/two hood = FLMAX err = FLMAX iter = 0 100 continue iter = iter + 1 sumz = zero zsum = one do k = 1, G call dcopy( p, zero, 0, shape(1,k), 1) sum = zero call dcopy( p, zero, 0, mu(1,k), 1) do i = 1, n temp = z(i,k) sum = sum + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) end do sumz = sumz + sum if (.not. EQPRO) pro(k) = sum /dble(n) zsum = min(sum,zsum) if (sum .gt. rteps) then call dscal( p, (one/sum), mu(1,k), 1) do j = 1, p sum = zero do i = 1, n temp = x(i,j) - mu(j,k) if (sqrt(z(i,k))*abs(temp) .gt. RTMIN) * sum = sum + z(i,k)*(temp*temp) end do shape(j,k) = shape(j,k) + sum end do else call dcopy( p, FLMAX, 0, mu(1,k), 1) call dcopy( p, FLMAX, 0, shape(1,k), 1) end if end do if (zsum .le. rteps) then tol = zsum eps = -FLMAX maxi = iter return end if scale = zero epsmin = FLMAX do k = 1, G call sgnrng(p, shape(1,k), 1, smin, smax) epsmin = min(smin,epsmin) if (smin .gt. zero) then sum = zero do j = 1, p sum = sum + log(shape(j,k)) end do temp = sum/dble(p) if (temp .gt. BIGLOG) then scale = FLMAX call dcopy( p, FLMAX, 0, shape(1,k), 1) tol = err eps = FLMAX maxi = iter return end if if (temp .gt. SMALOG) then temp = exp(temp) else temp = zero end if scale = scale + temp epsmin = min(temp,epsmin) if (temp .lt. eps) then scale = FLMAX call dcopy( p, FLMAX, 0, shape(1,k), 1) tol = err eps = FLMAX maxi = iter return end if call dscal( p, one/temp, shape(1,k), 1) end if end do term = zero if (Vinv .gt. zero) then scale = scale /sumz do i = 1, n term = term + z(i,nz) end do temp = term / dble(n) pro(nz) = temp call dcopy( n, ViLog, 0, z(1,nz), 1) if (EQPRO) then temp = (one - pro(nz))/dble(G) call dcopy( G, temp, 0, pro, 1) end if else scale = scale /dble(n) end if if (scale .le. eps) then tol = epsmin eps = FLMAX maxi = iter return end if do k = 1, G call sgnrng( p, shape(1,k), 1, smin, smax) if (smin .le. 
eps) then tol = smin eps = FLMAX maxi = iter return end if end do const = dble(p)*(pi2log + log(scale)) do k = 1, G do i = 1, n sum = zero do j = 1, p temp = x(i,j) - mu(j,k) sum = sum + (temp*temp)/shape(j,k) end do c z(i,k) = pro(k)*exp(-(const+(sum/scale))/two) z(i,k) = -(const+(sum/scale))/two end do end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) - tmax if (temp .ge. SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)+tmax) call dscal( nz, (one/sum), z(i,1), n) end do err = abs(hold-hood)/(one+abs(hood)) hold = hood if (err .gt. tol .and. iter .lt. maxi) goto 100 tol = err eps = hood maxi = iter return end subroutine meevip( EQPRO, x, n, p, G, Vinv, * pshrnk, pmu, pscale, pdof, * z, maxi, tol, eps, * mu, scale, shape, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE logical EQPRO integer n, p, G, maxi c double precision pshrnk, pmu(p), pscale, pdof double precision pshrnk, pmu(*), pscale, pdof double precision Vinv, eps, tol, scale c double precision x(n,p), z(n,G[+1]) double precision x(n,*), z(n, * ) c double precision mu(p,G), shape(p,G), pro(G[+1]) double precision mu(p,*), shape(p,*), pro( * ) integer nz, iter, i, j, k double precision sum, sumz, temp, term, epsmin, zsum double precision hold, hood, err, smin, smax, const double precision prok, tmin, tmax, ViLog, rteps double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision RTMIN parameter (RTMIN = 1.49166814624d-154) double precision SMALOG, BIGLOG parameter (SMALOG = -708.d0, BIGLOG = 709.d0) c------------------------------------------------------------------------------ if (pshrnk .lt. zero) pshrnk = zero if (maxi .le. 0) return if (Vinv .gt. zero) then nz = G + 1 ViLog = log(Vinv) else nz = G if (EQPRO) call dscal( G, one/dble(G), pro, 1) end if eps = max(eps,zero) tol = max(tol,zero) rteps = sqrt(eps) c FLMAX = d1mach(2) hold = FLMAX/two hood = FLMAX err = FLMAX iter = 0 100 continue iter = iter + 1 sumz = zero zsum = one do k = 1, G call dcopy( p, pscale, 0, shape(1,k), 1) sum = zero call dcopy( p, zero, 0, mu(1,k), 1) do i = 1, n temp = z(i,k) sum = sum + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) end do sumz = sumz + sum if (.not. EQPRO) pro(k) = sum /dble(n) zsum = min(sum,zsum) if (sum .gt. rteps) then call dscal( p, (one/sum), mu(1,k), 1) term = pshrnk + sum const = (pshrnk*sum)/term do j = 1, p do i = 1, n temp = x(i,j) - mu(j,k) if (abs(temp)*sqrt(z(i,k)) .gt. RTMIN) * shape(j,k) = shape(j,k) + z(i,k)*(temp*temp) end do temp = pmu(j) - mu(j,k) shape(j,k) = shape(j,k) + const*(temp*temp) end do call dscal( p, sum/term, mu(1,k), 1) call daxpy( p, pshrnk/term, pmu, 1, mu(1,k), 1) else call dcopy( p, FLMAX, 0, mu(1,k), 1) call dcopy( p, FLMAX, 0, shape(1,k), 1) end if end do if (zsum .le. 
rteps) then tol = zsum eps = -FLMAX maxi = iter return end if scale = zero epsmin = FLMAX do k = 1, G call sgnrng(p, shape(1,k), 1, smin, smax) epsmin = min(smin,epsmin) if (smin .gt. zero) then sum = zero do j = 1, p sum = sum + log(shape(j,k)) end do temp = sum/dble(p) if (temp .gt. BIGLOG) then scale = FLMAX call dcopy( p, FLMAX, 0, shape(1,k), 1) tol = err eps = FLMAX maxi = iter return end if if (temp .gt. SMALOG) then temp = exp(temp) else temp = zero end if scale = scale + temp epsmin = min(temp,epsmin) if (temp .lt. eps) then scale = FLMAX call dcopy( p, FLMAX, 0, shape(1,k), 1) tol = err eps = FLMAX maxi = iter return end if call dscal( p, one/temp, shape(1,k), 1) end if end do term = pdof + one if (Vinv .le. zero) then term = term + dble(n) else term = term + sumz end if if (pshrnk .gt. zero) term = term + one scale = scale/term if (Vinv .gt. zero) then term = zero do i = 1, n term = term + z(i,nz) end do temp = term / dble(n) pro(nz) = temp call dcopy( n, ViLog, 0, z(1,nz), 1) if (EQPRO) then temp = (one - pro(nz))/dble(G) call dcopy( G, temp, 0, pro, 1) end if end if if (scale .le. eps) then tol = err eps = FLMAX maxi = iter return end if do k = 1, G call sgnrng( p, shape(1,k), 1, smin, smax) if (smin .le. eps) then tol = err eps = FLMAX maxi = iter return end if end do const = dble(p)*(pi2log + log(scale)) do k = 1, G do i = 1, n sum = zero do j = 1, p temp = x(i,j) - mu(j,k) sum = sum + (temp*temp)/shape(j,k) end do c z(i,k) = pro(k)*exp(-(const+(sum/scale))/two) z(i,k) = -(const+(sum/scale))/two end do end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) - tmax if (temp .ge. SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)+tmax) call dscal( nz, (one/sum), z(i,1), n) end do err = abs(hold-hood)/(one+abs(hood)) hold = hood if (err .gt. tol .and. iter .lt. maxi) goto 100 tol = err eps = hood maxi = iter return end subroutine msevi ( x, z, n, p, G, mu, scale, shape, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, p, G c double precision x(n,p), z(n,G) double precision x(n,*), z(n,*) c double precision mu(p,G), scale, shape(p,G), pro(G) double precision mu(p,*), scale, shape(p,*), pro(*) integer i, j, k double precision smin, smax double precision sum, sumz, temp double precision zero, one parameter (zero = 0.d0, one = 1.d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision RTMIN parameter (RTMIN = 1.49166814624d-154) double precision SMALOG, BIGLOG parameter (SMALOG = -708.d0, BIGLOG = 709.d0) c------------------------------------------------------------------------------ scale = zero sumz = zero do k = 1, G call dcopy( p, zero, 0, shape(1,k), 1) sum = zero call dcopy( p, zero, 0, mu(1,k), 1) do i = 1, n temp = z(i,k) sum = sum + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) z(i,k) = sqrt(temp) end do sumz = sumz + sum pro(k) = sum/dble(n) if (sum .ge. one .or. one .lt. 
sum*FLMAX) then call dscal( p, (one/sum), mu(1,k), 1) else scale = FLMAX call dcopy( p, FLMAX, 0, mu(1,k), 1) call dcopy( p, FLMAX, 0, shape(1,k), 1) end if end do if (scale .eq. FLMAX) return c pro(k) now contains n_k do j = 1, p do k = 1, G sum = zero do i = 1, n temp = z(i,k)*(x(i,j) - mu(j,k)) if (abs(temp) .gt. RTMIN) sum = sum + temp*temp end do shape(j,k) = shape(j,k) + sum end do end do scale = zero do k = 1, G call sgnrng(p, shape(1,k), 1, smin, smax) if (smin .le. zero) then scale = FLMAX call dcopy( p, FLMAX, 0, shape(1,k), 1) goto 100 end if sum = zero do j = 1, p sum = sum + log(shape(j,k)) end do temp = sum/dble(p) if (temp .ge. BIGLOG) then scale = FLMAX call dcopy( p, FLMAX, 0, shape(1,k), 1) goto 100 end if if (temp .ge. SMALOG) then temp = exp(temp) else temp = zero end if if (scale .ne. FLMAX) scale = scale + temp if (temp .lt. one .and. one .ge. temp*FLMAX) then scale = FLMAX call dcopy( p, FLMAX, 0, shape(1,k), 1) goto 100 end if call dscal( p, one/temp , shape(1,k), 1) 100 continue end do if (sumz .lt. one .and. one .ge. sumz*FLMAX) then scale = FLMAX return end if scale = scale/sumz return end subroutine msevip( x, z, n, p, G, * pshrnk, pmu, pscale, pdof, * mu, scale, shape, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, p, G c double precision pshrnk, pmu(p), pscale, pdof double precision pshrnk, pmu(*), pscale, pdof c double precision x(n,p), z(n,G) double precision x(n,*), z(n,*) c double precision mu(p,G), scale, shape(p,G), pro(G) double precision mu(p,*), scale, shape(p,*), pro(*) integer i, j, k double precision sum, sumz, temp, term double precision smin, smax, const double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision RTMIN parameter (RTMIN = 1.49166814624d-154) double precision SMALOG, BIGLOG parameter (SMALOG = -708.d0, BIGLOG = 709.d0) c------------------------------------------------------------------------------ if (pshrnk .lt. zero) pshrnk = zero sumz = zero scale = zero do k = 1, G call dcopy( p, pscale, 0, shape(1,k), 1) sum = zero call dcopy( p, zero, 0, mu(1,k), 1) do i = 1, n temp = z(i,k) sum = sum + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) end do sumz = sumz + sum pro(k) = sum /dble(n) if (sum .ge. one .or. one .lt. sum*FLMAX) then call dscal( p, (one/sum), mu(1,k), 1) term = pshrnk + sum const = (pshrnk*sum)/term do j = 1, p do i = 1, n temp = x(i,j) - mu(j,k) if (abs(temp)*sqrt(z(i,k)) .gt. RTMIN) * shape(j,k) = shape(j,k) + z(i,k)*(temp*temp) end do temp = pmu(j) - mu(j,k) shape(j,k) = shape(j,k) + const*(temp*temp) end do call dscal( p, sum/term, mu(1,k), 1) call daxpy( p, pshrnk/term, pmu, 1, mu(1,k), 1) else scale = FLMAX call dcopy( p, FLMAX, 0, mu(1,k), 1) call dcopy( p, FLMAX, 0, shape(1,k), 1) end if end do if (scale .eq. FLMAX) return scale = zero do k = 1, G call sgnrng(p, shape(1,k), 1, smin, smax) if (smin .le. zero) then call dcopy( p, FLMAX, 0, shape(1,k), 1) goto 100 end if sum = zero do j = 1, p sum = sum + log(shape(j,k)) end do temp = sum/dble(p) if (temp .ge. BIGLOG) then scale = FLMAX call dcopy( p, FLMAX, 0, shape(1,k), 1) goto 100 end if if (temp .ge. SMALOG) then temp = exp(temp) else temp = zero endif if (scale .ne. FLMAX) scale = scale + temp if (temp .le. one .and. one .ge. 
temp*FLMAX) then call dcopy( p, FLMAX, 0, shape(1,k), 1) goto 100 end if call dscal( p, one/temp, shape(1,k), 1) 100 continue end do term = pdof + sumz + two if (pshrnk .gt. zero) term = term + dble(G) scale = scale/term return end subroutine es1v ( x, mu, sigsq, pro, n, G, Vinv, hood, z) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, G double precision hood, Vinv c double precision x(n), mu(G), sigsq(G), pro(G[+1]), z(n,G[+1]) double precision x(*), mu(*), sigsq(*), pro( * ), z(n, * ) integer i, k, nz double precision temp, const, tmin, tmax, sum double precision muk, sigsqk, prok, sigmin double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision RTMAX parameter (RTMAX = 1.340780792994260d154) double precision SMALOG parameter (SMALOG = -708.d0) c------------------------------------------------------------------------------ call sgnrng( G, sigsq, 1, sigmin, temp) if (sigmin .le. zero) then hood = FLMAX return end if do k = 1, G c prok = pro(k) muk = mu(k) sigsqk = sigsq(k) const = pi2log + log(sigsqk) do i = 1, n temp = x(i) - muk c z(i,k) = prok*exp(-(const+(temp*temp)/sigsqk)/two) if (sigsqk .lt. one .and. * abs(temp) .ge. sqrt(sigsqk)*RTMAX) then hood = FLMAX return end if z(i,k) = -(const+(temp*temp)/sigsqk)/two end do end do if (pro(1) .lt. zero) return nz = G if (Vinv .gt. zero) then nz = nz + 1 c call dcopy( n, pro(nz)*Vinv, 0, z(1,nz), 1) call dcopy( n, log(Vinv), 0, z(1,nz), 1) end if c hood = zero c do i = 1, n c temp = zero c do k = 1, nz c temp = temp + z(i,k) c end do c hood = hood + log(temp) c call dscal( nz, (one/temp), z(i,1), n) c end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) - tmax if (temp .ge. SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)+tmax) if (sum .lt. one .and. one .ge. 
sum*FLMAX) then hood = FLMAX return end if call dscal( nz, (one/sum), z(i,1), n) end do return end subroutine hc1v ( x, n, ic, ng, ns, ALPHA, nd, d) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE c integer n, ic(n), ng, ns, nd integer n, ic(*), ng, ns, nd c double precision x(n), ALPHA, d(ng*(ng-1)/2) double precision x(*), ALPHA, d(*) integer lg, ld, ll, lo, ls, i, j, k, m integer ni, nj, nij, nopt, niop, njop integer ij, ici, icj, iopt, jopt, iold double precision ALFLOG double precision qi, qj, qij, ri, rj, rij, si, sj double precision tracei, tracej, trcij, trop double precision termi, termj, trmij, tmop double precision temp, dij, dopt, siop, sjop double precision zero, one parameter (zero = 0.d0, one = 1.d0) double precision sqrthf parameter (sqrthf = .70710678118654757274d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision EPSMAX parameter (EPSMAX = 2.2204460492503131d-16) double precision ddot external ddot c------------------------------------------------------------------------------ c call dblepr( 'x', -1, x, n) c call intpr( 'n', -1, n, 1) c call intpr( 'ic', -1, ic, n) c call intpr( 'ng', -1, ng, 1) c call intpr( 'ns', -1, ns, 1) c call dblepr( 'alpha', -1, alpha, 1) c call intpr( 'nd', -1, nd, 1) iopt = 0 jopt = 0 niop = 0 njop = 0 nopt = 0 siop = 0 sjop = 0 tmop = 0.d0 trop = 0.d0 lg = ng ld = (ng*(ng-1))/2 ll = nd-ng lo = nd if (ng .eq. 1) return ALPHA = max(ALPHA,EPSMAX) ALFLOG = log(ALPHA) c group heads should be first among rows of x i = 1 j = 2 1 continue icj = ic(j) if (icj .ne. j) goto 2 if (j .eq. lg) goto 3 i = j j = j + 1 goto 1 2 continue k = i m = j + 1 do j = m, n icj = ic(j) if (icj .gt. k) then k = k + 1 c call dswap( p, x(k,1), n, x(j,1), n) temp = x(k) x(k) = x(j) x(j) = temp ic(j) = ic(k) ic(k) = icj end if end do 3 continue c set up pointers do j = 1, n i = ic(j) if (i .ne. j) then c update sum of squares k = ic(i) if (k .eq. 1) then ic(i) = j ic(j) = 2 c call dscal( p, sqrthf, x(i,1), n) c call dscal( p, sqrthf, x(j,1), n) c call dcopy( p, x(j,1), n, v, 1) c call daxpy( p, (-one), x(i,1), n, v, 1) c call daxpy( p, one, x(j,1), n, x(i,1), n) c x(j,1) = ddot( p, v, 1, v, 1) temp = sqrthf*(x(j) - x(i)) x(i) = sqrthf*(x(j) + x(i)) x(j) = temp*temp else ic(j) = 0 ni = ic(k) ic(k) = ni + 1 ri = dble(ni) rij = dble(ni+1) qj = one/rij qi = ri*qj si = sqrt(qi) sj = sqrt(qj) c call dcopy( p, x(j,1), n, v, 1) c call dscal( p, si, v, 1) c call daxpy( p, (-sj), x(i,1), n, v, 1) c x(k,1) = x(k,1) + ddot(p, v, 1, v, 1) c call dscal( p, si, x(i,1), n) c call daxpy( p, sj, x(j,1), n, x(i,1), n) temp = si*x(j) - sj*x(i) x(k) = x(k) + temp*temp x(i) = si*x(i) + sj*x(j) end if else ic(j) = 1 end if end do c store terms also so as not to recompute them do k = 1, ng i = ic(k) if (i .ne. 1) then ni = ic(i) ri = dble(ni) d(nd-k+1) = ri*log((x(i)+ALPHA)/ri) end if end do c call intpr( 'ic', -1, ic, n) c compute change in likelihood and determine minimum dopt = FLMAX ij = 0 do j = 2, ng nj = ic(j) if (nj .eq. 1) then tracej = zero termj = ALFLOG rj = one else tracej = x(nj) nj = ic(nj) rj = dble(nj) termj = d(nd-j+1) end if do i = 1, (j-1) ni = ic(i) if (ni .eq. 
1) then tracei = zero termi = ALFLOG ri = one else tracei = x(ni) ni = ic(ni) ri = dble(ni) termi = d(nd-i+1) end if nij = ni + nj rij = dble(nij) qij = one/rij qi = ri*qij qj = rj*qij si = sqrt(qi) sj = sqrt(qj) c call dcopy(p, x(i,1), n, v, 1) c call dscal( p, sj, v, 1) c call daxpy( p, (-si), x(j,1), n, v, 1) temp = sj*x(i) - si*x(j) c trcij = (tracei + tracej) + ddot(p,v,1,v,1) trcij = (tracei + tracej) + temp*temp trmij = rij*log((trcij+ALPHA)/rij) dij = trmij - (termi + termj) ij = ij + 1 d(ij) = dij if (dij .le. dopt) then dopt = dij trop = trcij tmop = trmij nopt = nij niop = ni njop = nj siop = si sjop = sj iopt = i jopt = j end if end do end do c call dblepr( 'dij', -1, d, (ng*(ng-1))/2) if (ns .eq. 1) then if (iopt .lt. jopt) then x(1) = dble(iopt) ic(1) = jopt else x(1) = dble(jopt) ic(1) = iopt end if d(1) = dopt return end if if (niop .ne. 1) ic(ic(iopt)) = 0 if (njop .ne. 1) ic(ic(jopt)) = 0 ls = 1 100 continue c if (.false.) then c ij = 1 c jj = 1 c do j = 2, n c nj = ic(j) c if (nj .ne. 0 .and. abs(nj) .le. n) then c call dblepr( 'dij', -1, d(ij), jj) c ij = ij + jj c jj = jj + 1 c end if c end do c end if c call dscal( p, siop, x(iopt,1), n) c call daxpy( p, sjop, x(jopt,1), n, x(iopt,1), n) x(iopt) = siop*x(iopt)+sjop*x(jopt) if (jopt .ne. lg) then call wardsw( jopt, lg, d) c call dcopy( p, x(lg,1), n, x(jopt,1), n) x(jopt) = x(lg) m = ic(jopt) ic(jopt) = ic(lg) ic(lg) = m end if ic(iopt) = lg c ic(lg) = nopt c x(lg,1) = trop x(lg) = trop c x(lg,2) = tmop d(lo) = dopt lo = lo - 1 ic(lg) = lo d(lo) = tmop lo = lo - 1 d(lo) = dble(nopt) lo = lo - 1 d(lo) = dble(iopt) lo = lo - 1 d(lo) = dble(jopt) lo = lo - 1 lg = lg - 1 ld = ld - lg iold = iopt iopt = -1 jopt = -1 dopt = FLMAX ni = nopt ri = dble(ni) tracei = trop termi = tmop ij = ((iold-1)*(iold-2))/2 if (iold .gt. 1) then do j = 1, (iold - 1) nj = ic(j) if (nj .ne. 1) then c tracej = x(nj,1) tracej = x(nj) k = ic(nj) termj = d(k) nj = int(d(k-1)) rj = dble(nj) else tracej = zero termj = ALFLOG rj = one end if nij = ni + nj rij = dble(nij) qij = one/rij qi = ri*qij qj = rj*qij si = sqrt(qi) sj = sqrt(qj) c call dcopy( p, x(iold,1), n, v, 1) c call dscal( p, sj, v, 1) c call daxpy( p, (-si), x(j,1), n, v, 1) temp = sj*x(iold)-si*x(j) c trcij = (tracei + tracej) + ddot(p,v,1,v,1) trcij = (tracei + tracej) + temp*temp trmij = rij*log((trcij+ALPHA)/rij) dij = trmij - (termi + termj) ij = ij + 1 d(ij) = dij if (dij .le. dopt) then dopt = dij trop = trcij tmop = trmij iopt = j jopt = iold nopt = nij niop = ni njop = nj sjop = si siop = sj end if end do end if if (iold .lt. lg) then i = iold ij = ij + i do j = (iold + 1), lg nj = ic(j) if (nj .ne. 1) then c tracej = x(nj,1) tracej = x(nj) k = ic(nj) termj = d(k) nj = int(d(k-1)) rj = dble(nj) else tracej = zero termj = ALFLOG rj = one end if nij = ni + nj rij = dble(nij) qij = one /rij qi = ri*qij qj = rj*qij si = sqrt(qi) sj = sqrt(qj) c call dcopy( p, x(iold,1), n, v, 1) c call dscal( p, sj, v, 1) c call daxpy( p, (-si), x(j,1), n, v, 1) temp = sj*x(iold) - si*x(j) c trcij = (tracei + tracej) + ddot(p,v,1,v,1) trcij = (tracei + tracej) + temp*temp trmij = rij*log((trcij+ALPHA)/rij) dij = trmij - (termi + termj) d(ij) = dij if (dij .le. dopt) then dopt = dij trop = trcij tmop = trmij iopt = iold jopt = j nopt = nij niop = ni njop = nj siop = si sjop = sj end if ij = ij + i i = j end do end if c update d and find max jopt = 2 iopt = 1 dopt = d(1) if (lg .eq. 2) goto 900 ij = 1 dopt = d(1) do i = 2, ld qi = d(i) if (qi .le. dopt) then ij = i dopt = qi end if end do if (ij .gt. 
1) then do i = 2, ij iopt = iopt + 1 if (iopt .ge. jopt) then jopt = jopt + 1 iopt = 1 end if end do end if i = ic(iopt) j = ic(jopt) if (iopt .ne. iold .and. jopt .ne. iold) then if (i .ne. 1) then tracei = x(i) ici = ic(i) termi = d(ici) niop = int(d(ici-1)) ri = dble(niop) else tracei = zero termi = ALFLOG niop = 1 ri = one end if if (j .ne. 1) then c tracej = x(j,1) tracej = x(j) icj = ic(j) termj = d(icj) njop = int(d(icj-1)) rj = dble(njop) else tracej = zero termj = ALFLOG njop = 1 rj = one end if nopt = niop + njop rij = dble(nopt) qij = one/rij qi = ri*qij qj = rj*qij siop = sqrt(qi) sjop = sqrt(qj) c call dcopy( p, x(iopt,1), n, v, 1) c call dscal( p, sjop, v, 1) c call daxpy( p, (-siop), x(jopt,1), n, v, 1) temp = sjop*x(iopt)-siop*x(jopt) c trop = (tracei + tracej) + ddot(p,v,1,v,1) trop = (tracei + tracej) + temp*temp tmop = rij*log((trop+ALPHA)/rij) end if ls = ls + 1 if (ls .eq. ns) goto 900 goto 100 900 continue d(lo) = dopt lo = lo - 1 d(lo) = tmop lo = lo - 1 d(lo) = dble(nopt) lo = lo - 1 d(lo) = dble(iopt) lo = lo - 1 d(lo) = dble(jopt) do i = 1, ng ic(i) = i end do lo = nd - 3 ld = nd - 1 si = d(lo) lo = lo - 1 sj = d(lo) lo = lo - 1 ic(int(sj)) = ng if (si .lt. sj) then x(1) = si d(ld) = sj else x(1) = sj d(ld) = si end if ld = ld - 1 lg = ng + 1 do k = 2, ns d(ld) = d(lo) ld = ld - 1 lo = lo - 3 i = int(d(lo)) ici = ic(i) lo = lo - 1 j = int(d(lo)) lo = lo - 1 icj = ic(j) if (ici .gt. icj) ic(i) = icj ic(j) = ic(lg-k) if (ici .lt. icj) then x(k) = dble(ici) d(ld) = dble(icj) else x(k) = dble(icj) d(ld) = dble(ici) end if ld = ld - 1 end do ld = nd lo = nd - 1 do k = 1, ns ic(k) = int(d(lo)) lo = lo - 1 ld = ld - 1 d(ld) = d(lo) lo = lo - 1 end do ld = nd lo = 1 do k = 1, ns si = d(lo) d(lo) = d(ld) d(ld) = si ld = ld - 1 lo = lo + 1 end do return end subroutine me1v ( EQPRO, x, n, G, Vinv, z, maxi, tol, eps, * mu, sigsq, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE logical EQPRO integer n, G, maxi double precision Vinv, eps, tol c double precision x(n), z(n,G[+1]), mu(G), sigsq(G), pro(G[+1]) double precision x(*), z(n, * ), mu(*), sigsq(*), pro( * ) integer nz, iter, k, i double precision hold, hood, err, sum, smu, zsum double precision const, temp, term, sigmin, sigsqk double precision prok, tmin, tmax, ViLog, rteps double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision SMALOG parameter (SMALOG = -708.d0) c------------------------------------------------------------------------------ if (maxi .le. 0) return if (Vinv .gt. zero) then nz = G + 1 ViLog = log(Vinv) else nz = G if (EQPRO) call dcopy( G, one/dble(G), 0, pro, 1) end if eps = max(eps,zero) tol = max(tol,zero) rteps = sqrt(eps) c FLMAX = d1mach(2) hold = FLMAX/two hood = FLMAX err = FLMAX iter = 0 100 continue iter = iter + 1 zsum = one do k = 1, G sum = zero smu = zero do i = 1, n temp = z(i,k) sum = sum + temp smu = smu + temp*x(i) end do if (.not. EQPRO) pro(k) = sum / dble(n) zsum = min(sum,zsum) if (sum .gt. rteps) then smu = smu / sum mu(k) = smu sigsqk = zero do i = 1, n temp = x(i) - smu temp = temp*temp sigsqk = sigsqk + z(i,k)*temp z(i,k) = temp end do sigsq(k) = sigsqk / sum end if end do if (zsum .le. 
rteps) then tol = zsum eps = -FLMAX maxi = iter return end if term = zero if (Vinv .gt. zero) then do i = 1, n term = term + z(i,nz) end do temp = term / dble(n) pro(nz) = temp call dcopy( n, ViLog, 0, z(1,nz), 1) if (EQPRO) then temp = (one - pro(nz))/dble(G) call dcopy( G, temp, 0, pro, 1) end if end if sigmin = FLMAX do k = 1, G sigmin = min(sigmin,sigsq(k)) end do if (sigmin .le. eps) then tol = err eps = FLMAX maxi = iter return end if do k = 1, G sigsqk = sigsq(k) const = pi2log + log(sigsqk) do i = 1, n c z(i,k) = temp*exp(-(const+(z(i,k)/sigsqk))/two) z(i,k) = -(const+(z(i,k)/sigsqk))/two end do end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) - tmax if (temp .ge. SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)+tmax) call dscal( nz, (one/sum), z(i,1), n) end do err = abs(hold-hood)/(one+abs(hood)) hold = hood if (err .gt. tol .and. iter .lt. maxi) goto 100 tol = err eps = hood maxi = iter return end subroutine me1vp ( EQPRO, x, n, G, Vinv, * pshrnk, pmu, pscale, pdof, * z, maxi, tol, eps, mu, sigsq, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE logical EQPRO integer n, G, maxi double precision pshrnk, pmu, pscale, pdof double precision Vinv, eps, tol c double precision x(n), z(n,G[+1]), mu(G), sigsq(G), pro(G[+1]) double precision x(*), z(n, * ), mu(*), sigsq(*), pro( * ) integer nz, iter, k, i double precision hold, hood, err, pmupmu double precision sumz, sum, smu, zsum, rteps double precision const, temp, term, sigmin, sigsqk double precision prok, tmin, tmax, ViLog double precision cmu, cgam, rmu, rgam double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision three parameter (three = 3.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision RTMIN parameter (RTMIN = 1.49166814624d-154) double precision SMALOG parameter (SMALOG = -708.d0) double precision dlngam external dlngam c------------------------------------------------------------------------------ if (maxi .le. 0) return if (Vinv .gt. zero) then nz = G + 1 ViLog = log(Vinv) else nz = G if (EQPRO) call dcopy( G, one/dble(G), 0, pro, 1) end if eps = max(eps,zero) tol = max(tol,zero) rteps = sqrt(eps) c FLMAX = d1mach(2) hold = FLMAX/two hood = FLMAX err = FLMAX pmupmu = pmu*pmu iter = 0 100 continue iter = iter + 1 zsum = one do k = 1, G sumz = zero smu = zero do i = 1, n temp = z(i,k) sumz = sumz + temp smu = smu + temp*x(i) end do if (.not. EQPRO) pro(k) = sumz / dble(n) zsum = min(zsum,sumz) if (sumz .gt. rteps) then smu = smu/sumz sum = zero do i = 1, n term = abs(x(i) - smu) if (term .ge. eps .or. sqrt(z(i,k))*term .gt. RTMIN) * sum = sum + z(i,k)*(term*term) end do term = (pshrnk*sumz)/(pshrnk+sumz) temp = (pmupmu + smu*smu) - two*pmu*smu sigsq(k) = (pscale + sum + term*temp)/(pdof+sumz+three) term = sumz/(pshrnk+sumz) temp = pshrnk/(pshrnk+sumz) mu(k) = term*smu + temp*pmu end if end do if (zsum .le. rteps) then tol = zsum eps = -FLMAX maxi = iter return end if term = zero if (Vinv .gt. 
zero) then do i = 1, n term = term + z(i,nz) end do temp = term / dble(n) pro(nz) = temp call dcopy( n, ViLog, 0, z(1,nz), 1) if (EQPRO) then temp = (one - pro(nz))/dble(G) call dcopy( G, temp, 0, pro, 1) end if end if sigmin = FLMAX do k = 1, G sigmin = min(sigmin,sigsq(k)) end do if (sigmin .le. eps) then tol = err eps = FLMAX maxi = iter return end if do k = 1, G sigsqk = sigsq(k) const = pi2log + log(sigsqk) do i = 1, n term = abs(x(i) - mu(k)) if (term .gt. RTMIN) then z(i,k) = -(const+((term*term)/sigsqk))/two else z(i,k) = -const/two end if end do end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) - tmax if (temp .ge. SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)+tmax) call dscal( nz, (one/sum), z(i,1), n) end do err = abs(hold-hood)/(one+abs(hood)) hold = hood if (err .gt. tol .and. iter .lt. maxi) goto 100 tol = err eps = hood maxi = iter cmu = dble(G)*(pi2log-log(pshrnk))/two const = pdof/two cgam = dble(G)*(const*log(pscale/two) - dlngam(const)) rmu = zero rgam = zero do k = 1, G temp = pmu - mu(k) temp = temp*temp term = log(sigsq(k)) rmu = rmu + (term + (pshrnk/sigsq(k))*temp) rgam = rgam + ((pdof+3.d0)*term + pscale/sigsq(k)) end do rmu = -rmu /two rgam = -rgam/two pdof = (cmu+cgam) + (rmu+rgam) return end subroutine ms1v ( x, z, n, G, mu, sigsq, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, G c double precision x(n), z(n,G), mu(G), sigsq(G), pro(G) double precision x(*), z(n,*), mu(*), sigsq(*), pro(*) integer i, k double precision sum, smu, temp, sigsqk double precision zero, one parameter (zero = 0.d0, one = 1.d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision RTMIN parameter (RTMIN = 1.49166814624d-154) c------------------------------------------------------------------------------ do k = 1, G sum = zero smu = zero do i = 1, n temp = z(i,k) sum = sum + temp smu = smu + temp*x(i) end do pro(k) = sum / dble(n) if (sum .gt. one .or. smu .le. sum*FLMAX) then smu = smu / sum mu(k) = smu sigsqk = zero do i = 1, n temp = abs(x(i) - smu) sigsqk = sigsqk + z(i,k)*(temp*temp) end do sigsq(k) = sigsqk / sum else mu(k) = FLMAX sigsq(k) = FLMAX end if end do return end subroutine ms1vp ( x, z, n, G, * pshrnk, pmu, pscale, pdof, * mu, sigsq, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, G double precision pshrnk, pmu, pscale, pdof c double precision x(n), z(n,G), mu(G), sigsq(G), pro(G) double precision x(*), z(n,*), mu(*), sigsq(*), pro(*) integer k, i double precision pmupmu double precision sumz, sum, smu double precision temp, term double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision RTMIN parameter (RTMIN = 1.49166814624d-154) c------------------------------------------------------------------------------ if (pshrnk .lt. 
zero) pshrnk = zero pmupmu = pmu*pmu do k = 1, G sumz = zero smu = zero do i = 1, n temp = z(i,k) sumz = sumz + temp smu = smu + temp*x(i) end do pro(k) = sumz / dble(n) if (sumz .gt. one .or. smu .le. sumz*FLMAX) then smu = smu/sumz term = sumz/(pshrnk+sumz) temp = pshrnk/(pshrnk+sumz) mu(k) = term*smu + temp*pmu sum = zero do i = 1, n term = abs(x(i) - smu) sum = sum + z(i,k)*(term*term) end do term = (pshrnk*sumz)/(pshrnk+sumz) temp = (pmupmu + smu*smu) - two*pmu*smu if (pshrnk .gt. zero) then sigsq(k) = (pscale + sum + term*temp)/(pdof+sumz+3.d0) else sigsq(k) = (pscale + sum + term*temp)/(pdof+sumz+two) end if else mu(k) = FLMAX sigsq(k) = FLMAX end if end do return end subroutine esvei ( x, mu, scale, shape, pro, n, p, G, * Vinv, hood, z) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, p, G double precision hood, Vinv c double precision x(n,p), z(n,G[+1]) double precision x(n,*), z(n, * ) c double precision mu(p,G), scale(G), shape(p), pro(G[+1]) double precision mu(p,*), scale(*), shape(*), pro( * ) integer i, j, k, nz double precision sum, temp, const, tmin, tmax double precision smin, smax, prok, scalek double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision RTMAX parameter (RTMAX = 1.340780792994260d154) double precision RTMIN parameter (RTMIN = 1.49166814624d-154) double precision SMALOG parameter (SMALOG = -708.d0) c------------------------------------------------------------------------------ call sgnrng( G, scale, 1, smin, smax) if (smin .le. zero) then hood = FLMAX return end if call sgnrng( p, shape, 1, smin, smax) if (smin .le. zero) then hood = FLMAX return end if do j = 1, p shape(j) = sqrt(shape(j)) end do do k = 1, G c prok = pro(k) scalek = scale(k) const = dble(p)*(pi2log+log(scalek)) do i = 1, n sum = zero do j = 1, p temp = x(i,j) - mu(j,k) if (shape(j) .lt. one .and. * abs(temp) .ge. shape(j)*FLMAX) then hood = FLMAX return end if temp = temp/shape(j) if (abs(temp) .ge. RTMAX) then hood = FLMAX return end if if (abs(temp) .gt. RTMIN) sum = sum + temp*temp end do c z(i,k) = prok*exp(-(const+sum/scalek)/two) if (scalek .lt. one .and. * sum .ge. scalek*FLMAX) then hood = FLMAX return end if z(i,k) = -(const+sum/scalek)/two end do end do if (pro(1) .lt. zero) return nz = G if (Vinv .gt. zero) then nz = nz + 1 c call dcopy( n, pro(nz)*Vinv, 0, z(1,nz), 1) call dcopy( n, log(Vinv), 0, z(1,nz), 1) end if c hood = zero c do i = 1, n c sum = zero c do k = 1, nz c sum = sum + z(i,k) c end do c hood = hood + log(sum) c call dscal( nz, (one/sum), z(i,1), n) c end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) - tmax if (temp .ge. SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)+tmax) if (sum .lt. one .and. * one .le. sum*FLMAX) then hood = FLMAX return end if if (sum .lt. one .and. one .ge. 
sum*FLMAX) then hood = FLMAX return end if call dscal( nz, (one/sum), z(i,1), n) end do return end subroutine mevei ( EQPRO, x, n, p, G, Vinv, z, maxi, tol, eps, * mu, scale, shape, pro, scl, shp, w) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE logical EQPRO integer n, p, G, maxi(2) double precision Vinv, eps, tol(2) c double precision x(n,p), z(n,G[+1]), scl(G), shp(p), w(p,G) double precision x(n,*), z(n, * ), scl(*), shp(*), w(p,*) c double precision mu(p,G), scale(G), shape(p), pro(G[+1]) double precision mu(p,*), scale(*), shape(*), pro( * ) integer nz, i, j, k integer iter, maxi1, maxi2, inner, inmax double precision tol1, tol2, sum, temp, term, tmin, tmax double precision prok, scalek, smin, smax, const, zsum double precision hold, hood, err, errin, dnp, ViLog, rteps double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision SMALOG, BIGLOG parameter (SMALOG = -708.d0, BIGLOG = 709.d0) c------------------------------------------------------------------------------ maxi1 = maxi(1) maxi2 = max(maxi(2),0) if (maxi1 .le. 0) return dnp = dble(n*p) inmax = 0 if (Vinv .gt. zero) then nz = G + 1 ViLog = log(Vinv) else nz = G end if eps = max(eps,zero) tol1 = max(tol(1),zero) tol2 = max(tol(2),zero) rteps = sqrt(eps) c FLMAX = d1mach(2) hold = FLMAX/two hood = FLMAX err = FLMAX errin = FLMAX c start with shape and scale equal to 1 call dcopy(p, one, 0, shape, 1) call dcopy(G, one, 0, scale, 1) iter = 0 100 continue inner = 0 zsum = one do k = 1, G sum = zero call dcopy( p, zero, 0, mu(1,k), 1) do i = 1, n temp = z(i,k) sum = sum + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) end do pro(k) = sum zsum = min(zsum,sum) if (sum .gt. rteps) then call dscal( p, (one/sum), mu(1,k), 1) do j = 1, p sum = zero do i = 1, n temp = x(i,j) - mu(j,k) sum = sum + z(i,k)*(temp*temp) end do w(j,k) = sum end do end if end do call dscal( G, dble(p), pro, 1) if (zsum .le. rteps) then eps = -FLMAX tol(1) = zsum tol(2) = errin maxi(1) = iter maxi(2) = max(inner,inmax) return end if if (maxi2 .le. 0) goto 120 110 continue call sgnrng(p, shape, 1, smin, smax) if (smin .le. eps) then eps = FLMAX tol(1) = err tol(2) = errin maxi(1) = iter maxi(2) = max(inner,inmax) return end if inner = inner + 1 c scale estimate call dcopy( G, scale, 1, scl, 1) do k = 1, G sum = zero do j = 1, p sum = sum + w(j,k)/shape(j) end do scale(k) = sum/pro(k) end do call sgnrng(G, scale, 1, smin, smax) if (smin .le. eps) then eps = FLMAX tol(1) = err tol(2) = errin maxi(1) = iter maxi(2) = max(inner,inmax) return end if c shape estimate call dcopy( p, shape, 1, shp, 1) do j = 1, p sum = zero do k = 1, G sum = sum + w(j,k)/scale(k) end do shape(j) = sum end do call sgnrng(p, shape, 1, smin, smax) if (smin .le. eps) then eps = FLMAX tol(1) = err tol(2) = errin maxi(1) = iter maxi(2) = max(inner,inmax) return end if sum = zero do j = 1, p sum = sum + log(shape(j)) end do temp = sum/dble(p) if (temp .gt. BIGLOG) then eps = FLMAX tol(1) = err tol(2) = errin maxi(1) = iter maxi(2) = max(inner,inmax) return end if if (temp .gt. SMALOG) then temp = exp(temp) else temp = zero end if if (temp .le. 
eps) then eps = temp tol(1) = err tol(2) = errin maxi(1) = iter maxi(2) = max(inner,inmax) return end if call dscal( p, one/temp, shape, 1) errin = zero do k = 1, G errin = max(errin, abs(scl(k)-scale(k))/(one + scale(k))) end do do j = 1, p errin = max(errin, abs(shp(j)-shape(j))/(one + shape(j))) end do if (errin .gt. tol2 .and. inner .le. maxi2) goto 110 120 continue iter = iter + 1 inmax = max(inner, inmax) if (.not. EQPRO) call dscal( G, one/dnp, pro, 1) term = zero if (Vinv .gt. zero) then do i = 1, n term = term + z(i,nz) end do temp = term / dble(n) pro(nz) = temp call dcopy( n, ViLog, 0, z(1,nz), 1) if (EQPRO) then temp = (one - pro(nz))/dble(G) call dcopy( G, temp, 0, pro, 1) end if else if (EQPRO) call dscal( G, one/dble(G), pro, 1) end if call sgnrng( G, scale, 1, smin, smax) if (smin .le. eps) then eps = FLMAX tol(1) = err tol(2) = errin maxi(1) = iter maxi(2) = inmax return end if call sgnrng( p, shape, 1, smin, smax) if (smin .le. eps) then eps = FLMAX tol(1) = err tol(2) = errin maxi(1) = iter maxi(2) = inmax return end if do k = 1, G c prok = pro(k) scalek = scale(k) const = dble(p)*(pi2log+log(scalek)) do i = 1, n sum = zero do j = 1, p temp = x(i,j) - mu(j,k) sum = sum + (temp*temp)/shape(j) end do c z(i,k) = prok*exp(-(const+sum/scalek)/two) z(i,k) = -(const+sum/scalek)/two end do end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) - tmax if (temp .ge. SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)+tmax) call dscal( nz, (one/sum), z(i,1), n) end do err = abs(hold-hood)/(one+abs(hood)) hold = hood if (err .gt. tol1 .and. iter .lt. maxi1) goto 100 tol(1) = err tol(2) = errin eps = hood maxi(1) = iter maxi(2) = inmax return end subroutine meveip( EQPRO, x, n, p, G, Vinv, * pshrnk, pmu, pscale, pdof, * z, maxi, tol, eps, * mu, scale, shape, pro, scl, shp, w) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE logical EQPRO integer n, p, G, maxi(2) double precision Vinv, eps, tol(2) c double precision pshrnk, pmu(p), pscale, pdof double precision pshrnk, pmu(*), pscale, pdof c double precision x(n,p), z(n,G[+1]), scl(G), shp(p), w(p,G) double precision x(n,*), z(n, * ), scl(*), shp(*), w(p,*) c double precision mu(p,G), scale(G), shape(p), pro(G[+1]) double precision mu(p,*), scale(*), shape(*), pro( * ) integer nz, i, j, k integer iter, maxi1, maxi2, inner, inmax double precision tol1, tol2, sum, temp, term, tmin, tmax double precision prok, scalek, smin, smax, const, sumz double precision hold, hood, err, errin, dnp, ViLog, zsum double precision rteps double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision SMALOG, BIGLOG parameter (SMALOG = -708.d0, BIGLOG = 709.d0) c------------------------------------------------------------------------------ if (pshrnk .lt. zero) pshrnk = zero maxi1 = maxi(1) maxi2 = max(maxi(2),0) if (maxi1 .le. 0) return dnp = dble(n*p) inmax = 0 if (Vinv .gt. 
zero) then nz = G + 1 ViLog = log(Vinv) else nz = G end if eps = max(eps,zero) tol1 = max(tol(1),zero) tol2 = max(tol(2),zero) rteps = sqrt(eps) c FLMAX = d1mach(2) hold = FLMAX/two hood = FLMAX err = FLMAX errin = FLMAX c start with shape and scale equal to 1 call dcopy(p, one, 0, shape, 1) call dcopy(G, one, 0, scale, 1) iter = 0 100 continue inner = 0 zsum = one do k = 1, G sumz = zero call dcopy( p, zero, 0, mu(1,k), 1) do i = 1, n temp = z(i,k) sumz = sumz + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) end do pro(k) = sumz zsum = min(zsum,sumz) if (sumz .gt. rteps) then term = pshrnk + sumz const = (pshrnk*sumz)/term call dscal( p, (one/sumz), mu(1,k), 1) do j = 1, p sum = zero do i = 1, n temp = x(i,j) - mu(j,k) sum = sum + z(i,k)*(temp*temp) end do temp = pmu(j) - mu(j,k) w(j,k) = pscale + sum + const*(temp*temp) end do call dscal( p, sumz/term, mu(1,k), 1) call daxpy( p, pshrnk/term, pmu, 1, mu(1,k), 1) end if end do call dscal( G, dble(p), pro, 1) if (zsum .le. rteps) then eps = -FLMAX tol(1) = zsum tol(2) = errin maxi(1) = iter maxi(2) = max(inner,inmax) return end if if (maxi2 .le. 0) goto 120 110 continue call sgnrng(p, shape, 1, smin, smax) if (smin .le. eps) then eps = FLMAX tol(1) = err tol(2) = errin maxi(1) = iter maxi(2) = max(inner,inmax) return end if inner = inner + 1 c scale estimate call dcopy( G, scale, 1, scl, 1) temp = pdof + two if (pshrnk .gt. zero) temp = temp + one do k = 1, G sum = zero do j = 1, p sum = sum + w(j,k)/shape(j) end do scale(k) = sum/(pro(k)+temp) end do call sgnrng(G, scale, 1, smin, smax) if (smin .le. eps) then eps = FLMAX tol(1) = err tol(2) = errin maxi(1) = iter maxi(2) = max(inner,inmax) return end if c shape estimate call dcopy( p, shape, 1, shp, 1) do j = 1, p sum = zero do k = 1, G sum = sum + w(j,k)/scale(k) end do shape(j) = sum end do call sgnrng(p, shape, 1, smin, smax) if (smin .le. eps) then eps = FLMAX tol(1) = err tol(2) = errin maxi(1) = iter maxi(2) = max(inner,inmax) return end if sum = zero do j = 1, p sum = sum + log(shape(j)) end do temp = sum/dble(p) if (temp .gt. BIGLOG) then eps = FLMAX tol(1) = err tol(2) = errin maxi(1) = iter maxi(2) = max(inner,inmax) return end if if (temp .gt. SMALOG) then temp = exp(temp) else temp = zero end if if (temp .le. eps) then eps = temp tol(1) = err tol(2) = errin maxi(1) = iter maxi(2) = max(inner,inmax) return end if call dscal( p, one/temp, shape, 1) errin = zero do k = 1, G errin = max(errin, abs(scl(k)-scale(k))/(one + scale(k))) end do do j = 1, p errin = max(errin, abs(shp(j)-shape(j))/(one + shape(j))) end do if (errin .gt. tol2 .and. inner .le. maxi2) goto 110 120 continue iter = iter + 1 inmax = max(inner, inmax) if (.not. EQPRO) call dscal( G, one/dnp, pro, 1) term = zero if (Vinv .gt. zero) then do i = 1, n term = term + z(i,nz) end do temp = term / dble(n) pro(nz) = temp call dcopy( n, ViLog, 0, z(1,nz), 1) if (EQPRO) then temp = (one - pro(nz))/dble(G) call dcopy( G, temp, 0, pro, 1) end if else if (EQPRO) call dscal( G, one/dble(G), pro, 1) end if call sgnrng( G, scale, 1, smin, smax) if (smin .le. eps) then eps = FLMAX tol(1) = err tol(2) = errin maxi(1) = iter maxi(2) = inmax return end if call sgnrng( p, shape, 1, smin, smax) if (smin .le. 
eps) then eps = FLMAX tol(1) = err tol(2) = errin maxi(1) = iter maxi(2) = inmax return end if do k = 1, G c prok = pro(k) scalek = scale(k) const = dble(p)*(pi2log+log(scalek)) do i = 1, n sum = zero do j = 1, p temp = x(i,j) - mu(j,k) sum = sum + (temp*temp)/shape(j) end do c z(i,k) = prok*exp(-(const+sum/scalek)/two) z(i,k) = -(const+sum/scalek)/two end do end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) - tmax if (temp .ge. SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)+tmax) call dscal( nz, (one/sum), z(i,1), n) end do err = abs(hold-hood)/(one+abs(hood)) hold = hood if (err .gt. tol1 .and. iter .lt. maxi1) goto 100 tol(1) = err tol(2) = errin eps = hood maxi(1) = iter maxi(2) = inmax return end subroutine msvei ( x, z, n, p, G, maxi, tol, * mu, scale, shape, pro, scl, shp, w) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, p, G, maxi double precision tol c double precision x(n,p), z(n,G), scl(G), shp(p), w(p,G) double precision x(n,*), z(n,*), scl(*), shp(*), w(p,*) c double precision mu(p,G), scale(G), shape(p), pro(G) double precision mu(p,*), scale(*), shape(*), pro(*) integer i, j, k, inner double precision sum, temp, smin, smax, err double precision zero, one parameter (zero = 0.d0, one = 1.d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision SMALOG, BIGLOG parameter (SMALOG = -708.d0, BIGLOG = 709.d0) c------------------------------------------------------------------------------ tol = max(tol,zero) err = FLMAX c start with the equal volume and shape estimate do k = 1, G sum = zero call dcopy( p, zero, 0, mu(1,k), 1) do i = 1, n temp = z(i,k) sum = sum + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) end do pro(k) = sum if (sum .gt. one .or. one .lt. sum*FLMAX) then err = min(err,sum) call dscal( p, (one/sum), mu(1,k), 1) do j = 1, p sum = zero do i = 1, n temp = x(i,j) - mu(j,k) temp = temp*temp temp = z(i,k)*temp sum = sum + temp end do w(j,k) = sum end do else err = -FLMAX call dcopy( p, FLMAX, 0, mu(1,k), 1) end if end do if (err .lt. zero) then call dscal( G, one/dble(n), pro, 1) call dcopy( p, FLMAX, 0, shape, 1) call dcopy( G, FLMAX, 0, scale, 1) tol = FLMAX maxi = 0 return end if call dcopy( p, one, 0, shape, 1) call dcopy( G, one, 0, scale, 1) call dscal( G, dble(p), pro, 1) inner = 0 err = FLMAX 100 continue call sgnrng(p, shape, 1, smin, smax) if (smin .le. zero) goto 200 inner = inner + 1 c scale estimate call dcopy( G, scale, 1, scl, 1) do k = 1, G sum = zero do j = 1, p if (shape(j) .gt. one .or. * w(j,k) .lt. shape(j)*FLMAX) then sum = sum + w(j,k)/shape(j) else scale(k) = FLMAX goto 110 end if end do scale(k) = sum/pro(k) 110 continue end do call sgnrng(G, scale, 1, smin, smax) if (smin .le. zero .or. smax .eq. FLMAX) goto 200 c shape estimate call dcopy( p, shape, 1, shp, 1) do j = 1, p sum = zero do k = 1, G if (scale(k) .gt. one .or. w(j,k) .lt. scale(k)*FLMAX) then sum = sum + w(j,k)/scale(k) else shape(j) = FLMAX goto 120 end if end do shape(j) = sum 120 continue end do call sgnrng(p, shape, 1, smin, smax) if (smin .le. 
zero .or. smax .eq. FLMAX) goto 200 sum = zero do j = 1, p sum = sum + log(shape(j)) end do temp = sum/dble(p) if (temp .gt. BIGLOG) then call dcopy( G, FLMAX, 0, scale, 1) call dcopy( p, FLMAX, 0, shape, 1) goto 200 end if if (temp .ge. SMALOG) then temp = exp(temp) else temp = zero end if if (temp .lt. one .and. one .ge. temp*FLMAX) then call dcopy( p, FLMAX, 0, shape, 1) goto 200 end if call dscal( p, one/temp, shape, 1) err = zero do k = 1, G err = max(err, abs(scl(k) - scale(k))/(one + scale(k))) end do do j = 1, p err = max(err, abs(shp(j) - shape(j))/(one + shape(j))) end do if (err .gt. tol .and. inner .le. maxi) goto 100 200 continue call dscal( G, one/dble(n*p), pro, 1) tol = err maxi = inner return end subroutine msveip( x, z, n, p, G, * pshrnk, pmu, pscale, pdof, * maxi, tol, * mu, scale, shape, pro, scl, shp, w) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, p, G, maxi double precision tol c double precision pshrnk, pmu(p), pscale, pdof double precision pshrnk, pmu(*), pscale, pdof c double precision x(n,p), z(n,G), scl(G), shp(p), w(p,G) double precision x(n,*), z(n,*), scl(*), shp(*), w(p,*) c double precision mu(p,G), scale(G), shape(p), pro(G) double precision mu(p,*), scale(*), shape(*), pro(*) integer i, j, k, inner double precision sum, temp, term, err double precision smin, smax, const, sumz double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision SMALOG, BIGLOG parameter (SMALOG = -708.d0, BIGLOG = 709.d0) c------------------------------------------------------------------------------ tol = max(tol,zero) err = FLMAX c start with shape and scale equal to 1 do k = 1, G sumz = zero call dcopy( p, zero, 0, mu(1,k), 1) do i = 1, n temp = z(i,k) sumz = sumz + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) end do pro(k) = sumz if (sumz .gt. one .or. one .lt. sumz*FLMAX) then err = min(err,sumz) term = pshrnk + sumz const = (pshrnk*sumz)/term call dscal( p, (one/sumz), mu(1,k), 1) do j = 1, p sum = zero do i = 1, n temp = x(i,j) - mu(j,k) sum = sum + z(i,k)*(temp*temp) end do temp = pmu(j) - mu(j,k) w(j,k) = pscale + sum + const*(temp*temp) end do call dscal( p, sumz/term, mu(1,k), 1) call daxpy( p, pshrnk/term, pmu, 1, mu(1,k), 1) else err = -FLMAX call dcopy( p, FLMAX, 0, mu(1,k), 1) end if end do if (err .lt. zero) then call dscal( G, one/dble(n), pro, 1) call dcopy( p, FLMAX, 0, shape, 1) call dcopy( G, FLMAX, 0, scale, 1) tol = FLMAX maxi = 0 return end if call dcopy(p, one, 0, shape, 1) call dcopy(G, one, 0, scale, 1) call dscal( G, dble(p), pro, 1) if (maxi .le. 0) return inner = 0 err = FLMAX 100 continue call sgnrng(p, shape, 1, smin, smax) if (smin .le. zero) goto 200 inner = inner + 1 c scale estimate call dcopy( G, scale, 1, scl, 1) do k = 1, G sum = zero do j = 1, p if (shape(j) .ge. one .or. * w(j,k) .le. shape(j)*FLMAX) then sum = sum + w(j,k)/shape(j) else scale(k) = FLMAX goto 110 end if end do temp = pdof + pro(k) + two if (pshrnk .gt. zero) temp = temp + one scale(k) = sum/temp 110 continue end do call sgnrng(G, scale, 1, smin, smax) if (smin .le. zero .or. smax .ge. 
FLMAX) then call dcopy( G, FLMAX, 0, scale, 1) call dcopy( p, FLMAX, 0, shape, 1) goto 200 end if c shape estimate call dcopy( p, shape, 1, shp, 1) do j = 1, p sum = zero do k = 1, G if (scale(k) .gt. w(j,k) .or. * w(j,k) .lt. scale(k)*FLMAX) then sum = sum + w(j,k)/scale(k) else shape(j) = FLMAX goto 120 end if end do shape(j) = sum 120 continue end do call sgnrng(p, shape, 1, smin, smax) if (smin .le. zero .or. smax .ge. FLMAX) then call dcopy( G, FLMAX, 0, scale, 1) call dcopy( p, FLMAX, 0, shape, 1) goto 200 end if sum = zero do j = 1, p sum = sum + log(shape(j)) end do temp = sum/dble(p) if (temp .ge. BIGLOG) then call dcopy( G, FLMAX, 0, scale, 1) call dcopy( p, FLMAX, 0, shape, 1) goto 200 end if if (temp .ge. SMALOG) then temp = exp(temp) else temp = zero end if if (temp .lt. one .and. one .ge. temp*FLMAX) then call dcopy( p, FLMAX, 0, shape, 1) goto 200 end if call dscal( p, one/temp, shape, 1) err = zero do k = 1, G err = max(err, abs(scl(k)-scale(k))/(one + scale(k))) end do do j = 1, p err = max(err, abs(shp(j)-shape(j))/(one + shape(j))) end do if (err .gt. tol .and. inner .le. maxi) goto 100 200 continue call dscal( G, one/dble(n*p), pro, 1) tol = err maxi = inner return end subroutine esvev ( x, mu, scale, shape, O, pro, n, p, G, * Vinv, v, w, hood, z) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE c integer n, p, G integer n, p, G double precision Vinv, hood c double precision x(n,p), z(n,G[+1]), mu(p,G), pro(G[+1]) double precision x(n,*), z(n, * ), mu(p,*), pro( * ) c double precision v(p), w(p) double precision v(*), w(*) c double precision scale(G), shape(p), O(p,p,G) double precision scale(*), shape(*), O(p,p,*) integer i, j, k, nz double precision const, temp, tmin, tmax double precision smin, smax, scalek, prok, sum double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision SMALOG parameter (SMALOG = -708.d0) double precision ddot external ddot c------------------------------------------------------------------------------ call sgnrng( G, scale, 1, smin, smax) if (smin .le. zero) then hood = FLMAX return end if call sgnrng( p, shape, 1, smin, smax) if (smin .le. zero) then hood = FLMAX return end if do j = 1, p shape(j) = sqrt(shape(j)) end do do k = 1, G scalek = scale(k) const = dble(p)*(pi2log+log(scalek)) c prok = pro(k) do i = 1, n call dcopy( p, x(i,1), n, w, 1) call daxpy( p, (-one), mu(1,k), 1, w, 1) call dgemv( 'N', p, p, one, O(1,1,k), p, * w, 1, zero, v, 1) do j = 1, p if (shape(j) .lt. one .and. * abs(v(j)) .ge. shape(j)*FLMAX) then hood = FLMAX return end if v(j) = v(j)/shape(j) end do temp = ddot( p, v, 1, v, 1) if (scalek .lt. one .and. temp .ge. scalek*FLMAX) then hood = FLMAX return end if temp = temp/scalek c z(i,k) = prok*exp(-(const+temp)/two) z(i,k) = -(const+temp)/two end do end do if (pro(1) .lt. zero) return nz = G if (Vinv .gt. zero) then nz = nz + 1 c call dcopy( n, pro(nz)*Vinv, 0, z(1,nz), 1) call dcopy( n, log(Vinv), 0, z(1,nz), 1) end if c hood = zero c do i = 1, n c sum = zero c do k = 1, nz c sum = sum + z(i,k) c end do c hood = hood + log(sum) c call dscal( nz, (one/sum), z(i,1), n) c end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. 
zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) - tmax if (temp .ge. SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)+tmax) if (sum .lt. one .and. one .ge. sum*FLMAX) then hood = FLMAX return end if call dscal( nz, (one/sum), z(i,1), n) end do return end subroutine mevev ( EQPRO, x, n, p, G, Vinv, z, * maxi, tol, eps, lwork, * mu, scale, shape, O, pro, w, s) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE logical EQPRO integer n, p, G, maxi(2), lwork double precision Vinv, eps, tol(2) double precision x(n,*), z(n, * ), w( * ), s(*) double precision mu(p,*), pro( * ) double precision scale(*), shape(*), O(p,p,*) integer maxi1, maxi2, p1, inmax, iter integer nz, i, j, k, l, j1, info, inner double precision tol1, tol2, dnp, term, rteps, ViLog double precision errin, smin, smax, sumz, tmin, tmax double precision cs, sn, dummy, hold, hood, err, zsum double precision const, temp, sum, prok, scalek double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision SMALOG, BIGLOG parameter (SMALOG = -708.d0, BIGLOG = 709.d0) double precision ddot external ddot c------------------------------------------------------------------------------ maxi1 = maxi(1) maxi2 = maxi(2) if (maxi1 .le. 0) return if (Vinv .gt. zero) then nz = G + 1 ViLog = log(Vinv) else nz = G end if eps = max(eps,zero) rteps = sqrt(eps) tol1 = max(tol(1),zero) tol2 = max(tol(2),zero) p1 = p + 1 dnp = dble(n*p) c FLMAX = d1mach(2) hold = FLMAX/two hood = FLMAX err = FLMAX errin = FLMAX inmax = 0 iter = 0 100 continue sumz = zero zsum = one l = 0 do k = 1, G call dcopy( p, zero, 0, mu(1,k), 1) do j = 1, p call dcopy( p, zero, 0, O(1,j,k), 1) end do sum = zero do i = 1, n temp = z(i,k) sum = sum + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) end do sumz = sumz + sum zsum = min(zsum,sum) pro(k) = sum if (sum .ge. rteps) then call dscal( p, (one/sum), mu(1,k), 1) do i = 1, n call dcopy( p, x(i,1), n, w, 1) call daxpy( p, (-one), mu(1,k), 1, w, 1) call dscal( p, sqrt(z(i,k)), w, 1) j = 1 do j1 = 2, p call drotg( O(j,j,k), w(j), cs, sn) call drot( p-j, O(j,j1,k), p, w(j1), 1, cs, sn) j = j1 end do call drotg( O(p,p,k), w(p), cs, sn) end do call dgesvd( 'N', 'O', p, p, O(1,1,k), p, z(1,k), * dummy, 1, dummy, 1, w, lwork, info) if (info .ne. 0) then l = info else do j = 1, p temp = z(j,k) z(j,k) = temp*temp end do end if end if end do iter = iter + 1 if (l .ne. 0 .or. zsum .lt. rteps) then if (Vinv .ge. zero) then term = zero do i = 1, n term = term + z(i,nz) end do temp = term / dble(n) pro(nz) = temp call dcopy( n, ViLog, 0, z(1,nz), 1) if (EQPRO) then temp = (one - pro(nz))/dble(G) call dcopy( G, temp, 0, pro, 1) end if else if (EQPRO) call dcopy( G, one/dble(G), 0, pro, 1) end if lwork = l c w(1) = FLMAX tol(1) = err tol(2) = errin if (l .ne. 0) then eps = FLMAX else eps = -FLMAX end if maxi(1) = -1 maxi(2) = -1 return end if if (iter .eq. 
1) then call dcopy( p, zero, 0, shape, 1) do j = 1, p sum = zero do k = 1, G sum = sum + z(j,k) end do shape(j) = sum end do call sgnrng( p, shape, 1, smin, smax) if (smin .le. zero) then if (.not. EQPRO) call dscal( G, one/dble(n), pro, 1) if (Vinv .ge. zero) then term = zero do i = 1, n term = term + z(i,nz) end do temp = term / dble(n) pro(nz) = temp call dcopy( n, ViLog, 0, z(1,nz), 1) if (EQPRO) then temp = (one - pro(nz))/dble(G) call dcopy( G, temp, 0, pro, 1) end if else if (EQPRO) then call dcopy( G, one/dble(G), 0, pro, 1) end if lwork = 0 c w(1) = smin tol(1) = err tol(2) = errin eps = FLMAX maxi(1) = -1 maxi(2) = -1 return end if sum = zero do j = 1, p sum = sum + log(shape(j)) end do temp = sum/dble(p) if (temp .gt. BIGLOG) then tol(1) = err tol(2) = errin eps = FLMAX maxi(1) = -1 maxi(2) = -1 return end if if (temp .gt. SMALOG) then temp = exp(temp) else temp = zero end if if (Vinv .le. zero) then call dcopy (G, temp/dble(n), 0, scale, 1) else call dcopy (G, temp/sumz, 0, scale, 1) end if if (temp .le. eps) then if (.not. EQPRO) call dscal( G, one/dble(n), pro, 1) if (Vinv .gt. zero) then term = zero do i = 1, n term = term + z(i,nz) end do temp = term / dble(n) pro(nz) = temp call dcopy( n, ViLog, 0, z(1,nz), 1) if (EQPRO) then temp = (one - pro(nz))/dble(G) call dcopy( G, temp, 0, pro, 1) end if else if (EQPRO) call dcopy( G, one/dble(G), 0, pro, 1) end if lwork = 0 c w(1) = temp c w(2) = zero tol(1) = err tol(2) = errin eps = FLMAX maxi(1) = -1 maxi(2) = -1 return end if call dscal( p, one/temp, shape, 1) end if c inner iteration to estimate scale and shape c pro now contains n*pro inner = 0 errin = zero if (maxi2 .le. 0) goto 120 110 continue call dcopy( p, shape, 1, w , 1) call dcopy( G, scale, 1, w(p1), 1) call dcopy( p, zero, 0, shape, 1) do k = 1, G sum = zero do j = 1, p sum = sum + z(j,k)/w(j) end do temp = sum/(pro(k)*dble(p)) scale(k) = temp if (temp .le. eps) then lwork = 0 c w(1) = temp tol(1) = err tol(2) = errin eps = FLMAX maxi(1) = iter maxi(2) = max(inner,inmax) return end if do j = 1, p shape(j) = shape(j) + z(j,k)/temp end do end do inner = inner + 1 call sgnrng( p, shape, 1, smin, smax) if (smin .le. zero) then if (.not. EQPRO) call dscal( G, one/dble(n), pro, 1) if (Vinv .gt. zero) then term = zero do i = 1, n term = term + z(i,nz) end do temp = term / dble(n) pro(nz) = temp call dcopy( n, ViLog, 0, z(1,nz), 1) if (EQPRO) then temp = (one - pro(nz))/dble(G) call dcopy( G, temp, 0, pro, 1) end if else if (EQPRO) call dcopy( G, one/dble(G), 0, pro, 1) end if lwork = 0 c w(1) = smin tol(1) = err tol(2) = errin eps = FLMAX maxi(1) = iter maxi(2) = max(inner,inmax) return end if c normalize the shape matrix sum = zero do j = 1, p sum = sum + log(shape(j)) end do temp = sum/dble(p) if (temp .gt. BIGLOG) then tol(1) = err tol(2) = errin eps = FLMAX maxi(1) = -1 maxi(2) = -1 return end if if (temp .gt. SMALOG) then temp = exp(temp) else temp = zero end if if (temp .le. eps) then if (.not. EQPRO) call dscal( G, one/dble(n), pro, 1) if (Vinv .gt. 
zero) then term = zero do i = 1, n term = term + z(i,nz) end do temp = term / dble(n) pro(nz) = temp call dcopy( n, ViLog, 0, z(1,nz), 1) if (EQPRO) then temp = (one - pro(nz))/dble(G) call dcopy( G, temp, 0, pro, 1) end if else if (EQPRO) call dcopy( G, one/dble(G), 0, pro, 1) end if lwork = 0 c w(1) = temp tol(1) = err tol(2) = errin eps = FLMAX maxi(1) = iter maxi(2) = max(inner,inmax) end if call dscal( p, one/temp, shape, 1) errin = zero do j = 1, p errin = max(abs(w(j)-shape(j))/(one+shape(j)), errin) end do do k = 1, G errin = max(abs(scale(k)-w(p+k))/(one+scale(k)), errin) end do if (errin .ge. tol2 .and. inner .lt. maxi2) goto 110 120 continue inmax = max(inner,inmax) smin = smin/temp smax = smax/temp if (.not. EQPRO) call dscal( G, one/dble(n), pro, 1) term = zero if (Vinv .gt. zero) then do i = 1, n term = term + z(i,nz) end do temp = term / dble(n) pro(nz) = temp call dcopy( n, ViLog, 0, z(1,nz), 1) if (EQPRO) then temp = (one - pro(nz))/dble(G) call dcopy( G, temp, 0, pro, 1) end if else if (EQPRO) call dcopy( G, one/dble(G), 0, pro, 1) end if call sgnrng( p, shape, 1, smin, smax) if (smin .le. eps) then lwork = 0 c w(1) = -smin tol(1) = err tol(2) = errin eps = FLMAX maxi(1) = iter maxi(2) = inmax return end if call sgnrng( G, scale, 1, smin, smax) if (smin .le. eps) then lwork = 0 c w(1) = -smin tol(1) = err tol(2) = errin eps = FLMAX maxi(1) = iter maxi(2) = inmax return end if do j = 1, p s(j) = sqrt(shape(j)) end do call sgnrng( p, s, 1, smin, smax) if (smin .le. rteps) then lwork = 0 c w(1) = -smin tol(1) = err tol(2) = errin eps = FLMAX maxi(1) = iter maxi(2) = inmax return end if do k = 1, G c prok = pro(k) scalek = scale(k) const = dble(p)*(pi2log + log(scalek)) do i = 1, n call dcopy( p, x(i,1), n, w(p1), 1) call daxpy( p, (-one), mu(1,k), 1, w(p1), 1) call dgemv( 'N', p, p, one, O(1,1,k), p, w(p1), 1, zero, w, 1) do j = 1, p w(j) = w(j) / s(j) end do sum = ddot(p,w,1,w,1)/scalek c z(i,k) = prok*exp(-(const+sum)/two) z(i,k) = -(const+sum)/two end do end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) - tmax if (temp .ge. SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)+tmax) call dscal( nz, (one/sum), z(i,1), n) end do err = abs(hold-hood)/(one+abs(hood)) hold = hood if (err .gt. tol1 .and. iter .lt. 
maxi1) goto 100 c smin = sqrt(smin) c smax = sqrt(smax) c rcmin = FLMAX c do k = 1, G c temp = sqrt(scale(k)) c rcmin = min(rcmin,(temp*smin)/(one+temp*smax)) c end do lwork = 0 c w(1) = rcmin tol(1) = err tol(2) = errin eps = hood maxi(1) = iter maxi(2) = inmax return end subroutine mevevp( EQPRO, x, n, p, G, Vinv, * pshrnk, pmu, pscale, pdof, * z, maxi, tol, eps, lwork, * mu, scale, shape, O, pro, w, s) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE logical EQPRO integer n, p, G, maxi(2), lwork c double precision pshrnk, pmu(p), pscale(p,p), pdof double precision pshrnk, pmu(*), pscale(p,*), pdof double precision Vinv, eps, tol(2) c double precision x(n,p), z(n,G[+1]), w(lwork), s(p) double precision x(n,*), z(n, * ), w( * ), s(*) c double precision mu(p,G), pro(G[+1]) double precision mu(p,*), pro( * ) c double precision scale(G), shape(p), O(p,p,G) double precision scale(*), shape(*), O(p,p,*) integer maxi1, maxi2, p1, inmax, iter integer nz, i, j, k, l, j1, info, inner double precision tol1, tol2, dnp, term, rteps, ViLog double precision errin, smin, smax, sumz, tmin, tmax double precision cs, sn, dummy, hold, hood, err, zsum double precision const, temp, sum, prok, scalek double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision SMALOG, BIGLOG parameter (SMALOG = -708.d0, BIGLOG = 709.d0) double precision ddot external ddot c------------------------------------------------------------------------------ if (pshrnk .lt. zero) pshrnk = zero pdof = pdof*1.d0 maxi1 = maxi(1) maxi2 = maxi(2) if (maxi1 .le. 0) return if (Vinv .gt. zero) then nz = G + 1 ViLog = log(Vinv) else nz = G end if eps = max(eps,zero) rteps = sqrt(eps) tol1 = max(tol(1),zero) tol2 = max(tol(2),zero) p1 = p + 1 dnp = dble(n*p) c FLMAX = d1mach(2) hold = FLMAX/two hood = FLMAX err = FLMAX errin = FLMAX inmax = 0 inner = 0 iter = 0 100 continue zsum = one l = 0 do k = 1, G call dcopy( p, zero, 0, mu(1,k), 1) do j = 1, p call dcopy( p, pscale(1,j), 1, O(1,j,k), 1) end do sumz = zero do i = 1, n temp = z(i,k) sumz = sumz + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) end do pro(k) = sumz / dble(n) zsum = min(zsum,sumz) if (sumz .gt. rteps) then call dscal( p, (one/sumz), mu(1,k), 1) do i = 1, n call dcopy( p, x(i,1), n, w, 1) call daxpy( p, (-one), mu(1,k), 1, w, 1) call dscal( p, sqrt(z(i,k)), w, 1) j = 1 do j1 = 2, p call drotg( O(j,j,k), w(j), cs, sn) call drot( p-j, O(j,j1,k), p, w(j1), 1, cs, sn) j = j1 end do call drotg( O(p,p,k), w(p), cs, sn) end do call dcopy( p, pmu, 1, w, 1) call daxpy( p, (-one), mu(1,k), 1, w, 1) term = sumz+pshrnk const = (sumz*pshrnk)/term call dscal( p, sqrt(const), w, 1) j = 1 do j1 = 2, p call drotg( O(j,j,k), w(j), cs, sn) call drot( p-j, O(j,j1,k), p, w(j1), 1, cs, sn) j = j1 end do call drotg( O(p,p,k), w(p), cs, sn) call dscal( p, sumz/term, mu(1,k), 1) call daxpy( p, pshrnk/term, pmu, 1, mu(1,k), 1) call dgesvd( 'N', 'O', p, p, O(1,1,k), p, z(1,k), * dummy, 1, dummy, 1, w, lwork, info) if (info .ne. 0) then l = info else do j = 1, p temp = z(j,k) z(j,k) = temp*temp end do end if end if end do iter = iter + 1 if (l .ne. 0 .or. zsum .le. 
rteps) then lwork = l call dcopy( p, FLMAX, 0, shape, 1) call dcopy( G, FLMAX, 0, scale, 1) if (l .ne. 0) then eps = FLMAX else eps = -FLMAX end if goto 200 end if if (iter .eq. 1) then call dcopy( p, zero, 0, shape, 1) do j = 1, p sum = zero do k = 1, G sum = sum + z(j,k) end do shape(j) = sum end do call sgnrng( p, shape, 1, smin, smax) if (smin .le. zero) then eps = FLMAX goto 200 return end if sum = zero do j = 1, p sum = sum + log(shape(j)) end do temp = sum/dble(p) if (temp .gt. BIGLOG) then eps = FLMAX goto 200 return end if if (temp .ge. SMALOG) then temp = exp(temp) else temp = zero end if do k = 1, G scale(k) = temp / (pro(k)*dble(n)) end do if (temp .le. eps) then eps = FLMAX goto 200 return end if call dscal( p, one/temp, shape, 1) end if inner = 0 errin = zero if (maxi2 .le. 0) goto 120 110 continue call dcopy( p, shape, 1, w , 1) call dcopy( G, scale, 1, w(p1), 1) call sgnrng( p+G, w, 1, smin, smax) if (smin .le. zero) then call dcopy( p, FLMAX, 0, shape, 1) call dcopy( G, FLMAX, 0, scale, 1) goto 200 end if call dcopy( p, zero, 0, shape, 1) do k = 1, G sum = zero do j = 1, p if (w(j) .le. z(j,k) .and. z(j,k) .lt. w(j)*rteps) then call dcopy( p, FLMAX, 0, shape, 1) call dcopy( G, FLMAX, 0, scale, 1) goto 200 end if sum = sum + z(j,k)/w(j) end do temp = sum/(pro(k)*dble(n*p)) scale(k) = temp do j = 1, p if (temp .le. z(j,k) .and. z(j,k) .lt. temp*rteps) then call dcopy( p, FLMAX, 0, shape, 1) call dcopy( G, FLMAX, 0, scale, 1) goto 200 end if shape(j) = shape(j) + z(j,k)/temp end do end do inner = inner + 1 call sgnrng( p, shape, 1, smin, smax) if (smin .le. zero) then eps = FLMAX goto 200 return end if sum = zero do j = 1, p sum = sum + log(shape(j)) end do temp = sum/dble(p) if (temp .gt. BIGLOG) then eps = FLMAX goto 200 return end if if (temp .gt. SMALOG) then temp = exp(temp) else temp = zero end if if (temp .le. eps) then eps = FLMAX goto 200 end if call dscal( p, one/temp, shape, 1) errin = zero do j = 1, p errin = max(abs(w(j)-shape(j))/(one+shape(j)), errin) end do do k = 1, G errin = max(abs(scale(k)-w(p+k))/(one+scale(k)), errin) end do if (errin .ge. tol2 .and. inner .lt. maxi2) goto 110 120 continue inmax = max(inner,inmax) smin = smin/temp smax = smax/temp term = zero if (Vinv .gt. zero) then do i = 1, n term = term + z(i,nz) end do temp = term / dble(n) pro(nz) = temp call dcopy( n, ViLog, 0, z(1,nz), 1) if (EQPRO) then temp = (one - pro(nz))/dble(G) call dcopy( G, temp, 0, pro, 1) end if else if (EQPRO) call dcopy( G, one/dble(G), 0, pro, 1) end if call sgnrng( p, shape, 1, smin, smax) if (smin .le. eps) then eps = FLMAX goto 200 return end if call sgnrng( G, scale, 1, smin, smax) if (smin .le. eps) then eps = FLMAX goto 200 return end if do j = 1, p s(j) = sqrt(shape(j)) end do call sgnrng( p, s, 1, smin, smax) if (smin .le. rteps) then eps = FLMAX goto 200 return end if do k = 1, G c prok = pro(k) scalek = scale(k) const = dble(p)*(pi2log + log(scalek)) do i = 1, n call dcopy( p, x(i,1), n, w(p1), 1) call daxpy( p, (-one), mu(1,k), 1, w(p1), 1) call dgemv( 'N', p, p, one, O(1,1,k), p, w(p1), 1, zero, w, 1) do j = 1, p w(j) = w(j) / s(j) end do sum = ddot(p,w,1,w,1)/scalek c z(i,k) = prok*exp(-(const+sum)/two) z(i,k) = -(const+sum)/two end do end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) - tmax if (temp .ge. 
SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)+tmax) call dscal( nz, (one/sum), z(i,1), n) end do err = abs(hold-hood)/(one+abs(hood)) hold = hood if (err .gt. tol1 .and. iter .lt. maxi1) goto 100 c smin = sqrt(smin) c smax = sqrt(smax) c rcmin = FLMAX c do k = 1, G c temp = sqrt(scale(k)) c rcmin = min(rcmin,(temp*smin)/(one+temp*smax)) c end do c w(1) = rcmin lwork = 0 eps = hood 200 continue tol(1) = err tol(2) = errin maxi(1) = iter maxi(2) = inmax return end subroutine msvev ( x, z, n, p, G, w, lwork, maxi, tol, * mu, scale, shape, O, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, p, G, maxi, lwork double precision tol c double precision x(n,p), z(n,G), w(max(4*p,5*p-4,p+G)) double precision x(n,*), z(n,*), w(*) c double precision scale(G), shape(p), O(p,p,G), mu(p,G), pro(G) double precision scale(*), shape(*), O(p,p,*), mu(p,*), pro(*) integer p1, i, j, k, j1, inner, info double precision err, dummy double precision temp, sum, smin, smax, cs, sn double precision zero, one parameter (zero = 0.d0, one = 1.d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision BIGLOG parameter (BIGLOG = 709.d0) double precision SMALOG parameter (SMALOG = -708.d0) c------------------------------------------------------------------------------ tol = max(tol,zero) p1 = p + 1 err = FLMAX inner = 0 call dcopy( p, zero, 0, shape, 1) do k = 1, G call dcopy( p, zero, 0, mu(1,k), 1) do j = 1, p call dcopy( p, zero, 0, O(1,j,k), 1) end do sum = zero do i = 1, n temp = z(i,k) sum = sum + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) end do pro(k) = sum if (sum .ge. one .or. one .lt. sum*FLMAX) then call dscal( p, (one/sum), mu(1,k), 1) if (lwork .gt. 0) then do i = 1, n call dcopy( p, x(i,1), n, w, 1) call daxpy( p, (-one), mu(1,k), 1, w, 1) call dscal( p, sqrt(z(i,k)), w, 1) j = 1 do j1 = 2, p call drotg( O(j,j,k), w(j), cs, sn) call drot( p-j, O(j,j1,k), p, w(j1), 1, cs, sn) j = j1 end do call drotg( O(p,p,k), w(p), cs, sn) end do call dgesvd( 'N', 'O', p, p, O(1,1,k), p, z(1,k), * dummy, 1, dummy, 1, w, lwork, info) if (info .ne. 0) then inner = info else do j = 1, p temp = z(j,k) temp = temp*temp shape(j) = shape(j) + temp z(j,k) = temp end do end if end if else err = zero call dcopy( p, FLMAX, 0, mu(1,k), 1) end if end do c inner iteration estimates scale and shape c pro now contains n*pro if (inner .ne. 0 .or. err .eq. zero) then lwork = inner call dcopy( p, FLMAX, 0, shape, 1) call dcopy( G, FLMAX, 0, scale, 1) goto 200 end if lwork = 0 call sgnrng( p, shape, 1, smin, smax) if (smin .eq. zero) then call dcopy( p, FLMAX, 0, shape, 1) call dcopy( G, FLMAX, 0, scale, 1) goto 200 end if sum = zero do j = 1, p sum = sum + log(shape(j)) end do temp = sum/dble(p) if (temp .gt. BIGLOG) then call dcopy( p, FLMAX, 0, shape, 1) call dcopy( G, FLMAX, 0, scale, 1) goto 200 end if if (temp .ge. SMALOG) then temp = exp(temp) else temp = zero end if do k = 1, G scale(k) = temp / (pro(k)*dble(n)) end do if (temp .lt. one .and. one .ge. temp*FLMAX) then call dcopy( p, FLMAX, 0, shape, 1) goto 200 end if call dscal( p, one/temp, shape, 1) c iteration to estimate scale and shape c pro now contains n*pro if (maxi .le. 
0) goto 200 100 continue call dcopy( p, shape, 1, w , 1) call dcopy( G, scale, 1, w(p1), 1) call absrng( p, w, 1, smin, smax) if (smin .le. one .and. one .ge. smin*FLMAX) then call dcopy( p, FLMAX, 0, shape, 1) call dcopy( G, FLMAX, 0, scale, 1) goto 200 end if call dcopy( p, zero, 0, shape, 1) do k = 1, G sum = zero do j = 1, p sum = sum + z(j,k)/w(j) end do temp = (sum/pro(k))/dble(p) scale(k) = temp if (temp .lt. one .and. one .ge. temp*FLMAX) then call dcopy( p, FLMAX, 0, shape, 1) goto 200 end if do j = 1, p shape(j) = shape(j) + z(j,k)/temp end do end do inner = inner + 1 call sgnrng( p, shape, 1, smin, smax) if (smin .le. zero) then call dcopy( p, FLMAX, 0, shape, 1) goto 200 end if c normalize the shape matrix sum = zero do j = 1, p sum = sum + log(shape(j)) end do temp = sum/dble(p) if (temp .ge. BIGLOG) then call dcopy( p, FLMAX, 0, shape, 1) goto 200 end if if (temp .ge. SMALOG) then temp = exp(temp) else temp = zero end if if (temp .lt. one .and. one .ge. temp*FLMAX) then call dcopy( p, FLMAX, 0, shape, 1) goto 200 end if call dscal( p, one/temp, shape, 1) err = zero do j = 1, p err = max(abs(w(j)-shape(j))/(one+shape(j)), err) end do do k = 1, G err = max(abs(scale(k)-w(p+k))/(one+scale(k)), err) end do if (err .ge. tol .and. inner .lt. maxi) goto 100 200 continue call dscal( G, one/dble(n), pro, 1) tol = err maxi = inner return end subroutine msvevp( x, z, n, p, G, * pshrnk, pmu, pscale, pdof, * w, lwork, maxi, tol, * mu, scale, shape, O, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, p, G, maxi, lwork c double precision pshrnk, pmu(p), pscale(p,p), pdof double precision pshrnk, pmu(*), pscale(p,*), pdof double precision tol c double precision x(n,p), z(n,G), w(lwork) double precision x(n,*), z(n,*), w( * ) c double precision mu(p,G), pro(G) double precision mu(p,*), pro(*) c double precision scale(G), shape(p), O(p,p,G) double precision scale(*), shape(*), O(p,p,*) integer p1, i, j, k, l, j1, inner, info double precision sum, term, temp, err, smin, smax double precision sumz, cs, sn, dummy, const double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision SMALOG, BIGLOG parameter (SMALOG = -708.d0, BIGLOG = 709.d0) c------------------------------------------------------------------------------ if (maxi .le. 0) return if (pshrnk .le. zero) pshrnk = zero pdof = pdof*1.d0 tol = max(tol,zero) p1 = p + 1 err = FLMAX inner = 0 l = 0 call dcopy( p, zero, 0, shape, 1) do k = 1, G call dcopy( p, zero, 0, mu(1,k), 1) do j = 1, p call dcopy( p, pscale(1,j), 1, O(1,j,k), 1) end do sumz = zero do i = 1, n temp = z(i,k) sumz = sumz + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) end do pro(k) = sumz / dble(n) if (sumz .ge. one .or. one .lt. 
sumz*FLMAX) then call dscal( p, (one/sumz), mu(1,k), 1) do i = 1, n call dcopy( p, x(i,1), n, w, 1) call daxpy( p, (-one), mu(1,k), 1, w, 1) call dscal( p, sqrt(z(i,k)), w, 1) j = 1 do j1 = 2, p call drotg( O(j,j,k), w(j), cs, sn) call drot( p-j, O(j,j1,k), p, w(j1), 1, cs, sn) j = j1 end do call drotg( O(p,p,k), w(p), cs, sn) end do call dcopy( p, pmu, 1, w, 1) call daxpy( p, (-one), mu(1,k), 1, w, 1) term = sumz+pshrnk const = (sumz*pshrnk)/term call dscal( p, sqrt(const), w, 1) j = 1 do j1 = 2, p call drotg( O(j,j,k), w(j), cs, sn) call drot( p-j, O(j,j1,k), p, w(j1), 1, cs, sn) j = j1 end do call drotg( O(p,p,k), w(p), cs, sn) call dscal( p, sumz/term, mu(1,k), 1) call daxpy( p, pshrnk/term, pmu, 1, mu(1,k), 1) call dgesvd( 'N', 'O', p, p, O(1,1,k), p, z(1,k), * dummy, 1, dummy, 1, w, lwork, info) if (info .ne. 0) then l = info else do j = 1, p temp = z(j,k) temp = temp*temp shape(j) = shape(j) + temp z(j,k) = temp end do end if else err = zero call dcopy( p, FLMAX, 0, mu(1,k), 1) end if end do if (l .ne. 0 .or. err .eq. zero) then call dcopy( p, FLMAX, 0, shape, 1) call dcopy( G, FLMAX, 0, scale, 1) goto 200 end if call sgnrng( p, shape, 1, smin, smax) if (smin .le. zero) then call dcopy( p, FLMAX, 0, shape, 1) call dcopy( G, FLMAX, 0, scale, 1) goto 200 end if sum = zero do j = 1, p sum = sum + log(shape(j)) end do temp = sum/dble(p) if (temp .gt. BIGLOG) then call dcopy( p, FLMAX, 0, shape, 1) call dcopy( G, FLMAX, 0, scale, 1) goto 200 end if if (temp .ge. SMALOG) then temp = exp(temp) else temp = zero end if do k = 1, G scale(k) = temp / (pro(k)*dble(n)) end do if (temp .lt. one .and. one .ge. temp*FLMAX) then call dcopy( p, FLMAX, 0, shape, 1) call dcopy( G, FLMAX, 0, scale, 1) goto 200 end if call dscal( p, one/temp, shape, 1) if (maxi .le. 0) goto 200 100 continue call dcopy( p, shape, 1, w , 1) call dcopy( G, scale, 1, w(p1), 1) call sgnrng( p+G, w, 1, smin, smax) if (smin .le. zero) then call dcopy( p, FLMAX, 0, shape, 1) call dcopy( G, FLMAX, 0, scale, 1) goto 200 end if call dcopy( p, zero, 0, shape, 1) do k = 1, G sum = zero do j = 1, p if (w(j) .le. z(j,k) .and. z(j,k) .ge. w(j)*FLMAX) then call dcopy( p, FLMAX, 0, shape, 1) call dcopy( G, FLMAX, 0, scale, 1) goto 200 end if sum = sum + z(j,k)/w(j) end do temp = sum/(pro(k)*dble(n*p)) scale(k) = temp do j = 1, p if (temp .le. z(j,k) .and. z(j,k) .ge. temp*FLMAX) then call dcopy( p, FLMAX, 0, shape, 1) call dcopy( G, FLMAX, 0, scale, 1) goto 200 end if shape(j) = shape(j) + z(j,k)/temp end do end do inner = inner + 1 call sgnrng( p, shape, 1, smin, smax) if (smin .le. zero) then call dcopy( p, FLMAX, 0, shape, 1) goto 200 end if sum = zero do j = 1, p sum = sum + log(shape(j)) end do temp = sum/dble(p) if (temp .ge. BIGLOG) then call dcopy( p, FLMAX, 0, shape, 1) goto 200 end if if (temp .ge. SMALOG) then temp = exp(temp) else temp = zero end if if (temp .lt. one .and. one .ge. temp*FLMAX) then call dcopy( p, FLMAX, 0, shape, 1) goto 200 end if call dscal( p, one/temp, shape, 1) err = zero do j = 1, p err = max(abs(w(j)-shape(j))/(one+shape(j)), err) end do do k = 1, G err = max(abs(scale(k)-w(p+k))/(one+scale(k)), err) end do if (err .ge. tol .and. inner .lt. 
maxi) goto 100 200 continue lwork = l tol = err maxi = inner return end subroutine esvii ( x, mu, sigsq, pro, n, p, G, Vinv, hood, z) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, p, G double precision hood, Vinv c double precision x(n,p), z(n,G[+1]) double precision x(n,*), z(n, * ) c double precision mu(p,G), sigsq(G), pro(G[+1]) double precision mu(p,*), sigsq(*), pro( * ) integer i, j, k, nz double precision sum, temp, const, tmin, tmax double precision prok, sigsqk, sigmin double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision RTMAX parameter (RTMAX = 1.340780792994260d154) double precision RTMIN parameter (RTMIN = 1.49166814624d-154) double precision SMALOG parameter (SMALOG = -708.d0) c------------------------------------------------------------------------------ call sgnrng( G, sigsq, 1, sigmin, temp) if (sigmin .le. zero) then hood = FLMAX return end if do k = 1, G c prok = pro(k) sigsqk = sigsq(k) const = dble(p)*(pi2log+log(sigsq(k))) do i = 1, n sum = zero do j = 1, p temp = x(i,j) - mu(j,k) if (abs(temp) .ge. RTMAX) then hood = FLMAX return end if if (abs(temp) .gt. RTMIN) sum = sum + temp*temp end do c z(i,k) = prok*exp(-(const+sum/sigsqk)/two) if (sigsqk .lt. one .and. sum .ge. sigsqk*FLMAX) then hood = FLMAX return end if z(i,k) = -(const+sum/sigsqk)/two end do end do if (pro(1) .lt. zero) return nz = G if (Vinv .gt. zero) then nz = nz + 1 c call dcopy( n, pro(nz)*Vinv, 0, z(1,nz), 1) call dcopy( n, log(Vinv), 0, z(1,nz), 1) end if c hood = zero c do i = 1, n c sum = zero c do k = 1, nz c sum = sum + z(i,k) c end do c hood = hood + log(sum) c call dscal( nz, (one/sum), z(i,1), n) c end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) - tmax if (temp .ge. SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)+tmax) if (sum .lt. one .and. one .ge. sum*FLMAX) then hood = FLMAX return end if call dscal( nz, (one/sum), z(i,1), n) end do return end subroutine hcvii ( x, n, p, ic, ng, ns, ALPHA, v, nd, d) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, p, ic(n), ng, ns, nd c double precision x(n,p), v(p). 
d(*), ALPHA double precision x(n,*), v(*), d(*), ALPHA integer lg, ld, ll, lo, ls, i, j, k, m integer ni, nj, nij, nopt, niop, njop integer ij, ici, icj, iopt, jopt, iold double precision ALFLOG double precision qi, qj, qij, ri, rj, rij, si, sj double precision tracei, tracej, trcij, trop double precision termi, termj, trmij, tmop double precision dij, dopt, siop, sjop double precision zero, one parameter (zero = 0.d0, one = 1.d0) double precision sqrthf parameter (sqrthf = .70710678118654757274d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision EPSMAX parameter (EPSMAX = 2.2204460492503131d-16) double precision ddot external ddot c------------------------------------------------------------------------------ iopt = 0 niop = 0 njop = 0 nopt = 0 tmop = 0.d0 trop = 0.d0 lg = ng ld = (ng*(ng-1))/2 ll = nd-ng lo = nd if (ng .eq. 1) return ALPHA = max(ALPHA,EPSMAX) ALFLOG = log(ALPHA) c call intpr( 'ic', -1, ic, n) c group heads should be first among rows of x i = 1 j = 2 1 continue icj = ic(j) if (icj .ne. j) goto 2 if (j .eq. lg) goto 3 i = j j = j + 1 goto 1 2 continue k = i m = j + 1 do j = m, n icj = ic(j) if (icj .gt. k) then k = k + 1 call dswap( p, x(k,1), n, x(j,1), n) ic(j) = ic(k) ic(k) = icj end if end do 3 continue c set up pointers do j = 1, n i = ic(j) if (i .ne. j) then c update sum of squares k = ic(i) if (k .eq. 1) then ic(i) = j ic(j) = 2 call dscal( p, sqrthf, x(i,1), n) call dscal( p, sqrthf, x(j,1), n) call dcopy( p, x(j,1), n, v, 1) call daxpy( p, (-one), x(i,1), n, v, 1) call daxpy( p, one, x(j,1), n, x(i,1), n) c call dcopy( p, FLMAX, 0, x(j,1), n) c x(j,1) = ddot( p, v, 1, v, 1) / two x(j,1) = ddot( p, v, 1, v, 1) else ic(j) = 0 ni = ic(k) ic(k) = ni + 1 ri = dble(ni) rij = dble(ni+1) qj = one/rij qi = ri*qj si = sqrt(qi) sj = sqrt(qj) call dcopy( p, x(j,1), n, v, 1) call dscal( p, si, v, 1) call daxpy( p, (-sj), x(i,1), n, v, 1) c x(k,1) = qi*x(k,1) + qj*ddot(p, v, 1, v, 1) x(k,1) = x(k,1) + ddot(p, v, 1, v, 1) call dscal( p, si, x(i,1), n) call daxpy( p, sj, x(j,1), n, x(i,1), n) c call dcopy( p, FLMAX, 0, x(j,1), n) end if else ic(j) = 1 end if end do c store terms also so as not to recompute them do k = 1, ng i = ic(k) if (i .ne. 1) then ni = ic(i) ri = dble(ni) c x(i,2) = ri*log(x(i,1)+ALPHA) x(i,2) = ri*log((x(i,1)+ALPHA)/ri) end if end do c call intpr( 'ic', -1, ic, n) c call dblepr( 'trace', -1, x(1,1), n) c call dblepr( 'term', -1, x(1,2), n) c compute change in likelihood and determine minimum dopt = FLMAX ij = 0 do j = 2, ng nj = ic(j) if (nj .eq. 1) then tracej = zero termj = ALFLOG rj = one else tracej = x(nj,1) termj = x(nj,2) nj = ic(nj) rj = dble(nj) end if do i = 1, (j-1) ni = ic(i) if (ni .eq. 1) then tracei = zero termi = ALFLOG ri = one else tracei = x(ni,1) termi = x(ni,2) ni = ic(ni) ri = dble(ni) end if nij = ni + nj rij = dble(nij) qij = one/rij qi = ri*qij qj = rj*qij si = sqrt(qi) sj = sqrt(qj) call dcopy(p, x(i,1), n, v, 1) call dscal( p, sj, v, 1) call daxpy( p, (-si), x(j,1), n, v, 1) c trcij = (qi*tracei + qj*tracej) + qij*ddot(p,v,1,v,1) trcij = (tracei + tracej) + ddot(p,v,1,v,1) c trmij = rij*log(trcij+ALPHA) trmij = rij*log((trcij+ALPHA)/rij) dij = trmij - (termi + termj) ij = ij + 1 d(ij) = dij if (dij .le. dopt) then dopt = dij trop = trcij tmop = trmij nopt = nij niop = ni njop = nj siop = si sjop = sj iopt = i jopt = j end if end do end do c call dblepr( 'dij', -1, d, (l*(l-1))/2) if (ns .eq. 1) then if (iopt .lt. 
jopt) then x(1,1) = dble(iopt) x(1,2) = dble(jopt) else x(1,1) = dble(jopt) x(1,2) = dble(iopt) end if d(1) = dopt return end if if (niop .ne. 1) ic(ic(iopt)) = 0 if (njop .ne. 1) ic(ic(jopt)) = 0 ls = 1 100 continue c if (.false.) then c ij = 1 c jj = 1 c do j = 2, n c nj = ic(j) c if (nj .ne. 0 .and. abs(nj) .le. n) then c call dblepr( 'dij', -1, d(ij), jj) c ij = ij + jj c jj = jj + 1 c end if c end do c end if call dscal( p, siop, x(iopt,1), n) call daxpy( p, sjop, x(jopt,1), n, x(iopt,1), n) if (jopt .ne. lg) then call wardsw( jopt, lg, d) call dcopy( p, x(lg,1), n, x(jopt,1), n) m = ic(jopt) ic(jopt) = ic(lg) ic(lg) = m end if ic(iopt) = lg ic(lg) = nopt x(lg,1) = trop x(lg,2) = tmop d(lo) = dopt lo = lo - 1 d(lo) = dble(iopt) lo = lo - 1 d(lo) = dble(jopt) lo = lo - 1 lg = lg - 1 ld = ld - lg iold = iopt iopt = -1 jopt = -1 dopt = FLMAX ni = nopt ri = dble(ni) tracei = trop termi = tmop ij = ((iold-1)*(iold-2))/2 if (iold .gt. 1) then do j = 1, (iold - 1) nj = ic(j) if (nj .ne. 1) then tracej = x(nj,1) termj = x(nj,2) nj = ic(nj) rj = dble(nj) else tracej = zero termj = ALFLOG rj = one end if nij = ni + nj rij = dble(nij) qij = one/rij qi = ri*qij qj = rj*qij si = sqrt(qi) sj = sqrt(qj) call dcopy( p, x(iold,1), n, v, 1) call dscal( p, sj, v, 1) call daxpy( p, (-si), x(j,1), n, v, 1) c trcij = (qi*tracei + qj*tracej) + qij*ddot(p,v,1,v,1) trcij = (tracei + tracej) + ddot(p,v,1,v,1) c trmij = rij*log(trcij+ALPHA) trmij = rij*log((trcij+ALPHA)/rij) dij = trmij - (termi + termj) ij = ij + 1 d(ij) = dij if (dij .le. dopt) then dopt = dij trop = trcij tmop = trmij iopt = j jopt = iold nopt = nij niop = ni njop = nj sjop = si siop = sj end if end do end if if (iold .lt. lg) then i = iold ij = ij + i do j = (iold + 1), lg nj = ic(j) if (nj .ne. 1) then tracej = x(nj,1) termj = x(nj,2) nj = ic(nj) rj = dble(nj) else tracej = zero termj = ALFLOG rj = one end if nij = ni + nj rij = dble(nij) qij = one /rij qi = ri*qij qj = rj*qij si = sqrt(qi) sj = sqrt(qj) call dcopy( p, x(iold,1), n, v, 1) call dscal( p, sj, v, 1) call daxpy( p, (-si), x(j,1), n, v, 1) c trcij = (qi*tracei + qj*tracej) + qij*ddot(p,v,1,v,1) trcij = (tracei + tracej) + ddot(p,v,1,v,1) c trmij = rij*log(trcij+ALPHA) trmij = rij*log((trcij+ALPHA)/rij) dij = trmij - (termi + termj) d(ij) = dij if (dij .le. dopt) then dopt = dij trop = trcij tmop = trmij iopt = iold jopt = j nopt = nij niop = ni njop = nj siop = si sjop = sj end if ij = ij + i i = j end do end if c update d and find max jopt = 2 iopt = 1 dopt = d(1) if (lg .eq. 2) goto 900 ij = 1 dopt = d(1) do i = 2, ld qi = d(i) if (qi .le. dopt) then ij = i dopt = qi end if end do if (ij .gt. 1) then do i = 2, ij iopt = iopt + 1 if (iopt .ge. jopt) then jopt = jopt + 1 iopt = 1 end if end do end if i = ic(iopt) j = ic(jopt) if (iopt .ne. iold .and. jopt .ne. iold) then if (i .ne. 1) then tracei = x(i,1) termi = x(i,2) niop = ic(i) ri = dble(niop) else tracei = zero termi = ALFLOG niop = 1 ri = one end if if (j .ne. 1) then tracej = x(j,1) termj = x(j,2) njop = ic(j) rj = dble(njop) else tracej = zero termj = ALFLOG njop = 1 rj = one end if nopt = niop + njop rij = dble(nopt) qij = one/rij qi = ri*qij qj = rj*qij siop = sqrt(qi) sjop = sqrt(qj) call dcopy( p, x(iopt,1), n, v, 1) call dscal( p, sjop, v, 1) call daxpy( p, (-siop), x(jopt,1), n, v, 1) c trop = (qi*tracei + qj*tracej) + qij*ddot(p,v,1,v,1) trop = (tracei + tracej) + ddot(p,v,1,v,1) c tmop = rij*log(trop+ALPHA) tmop = rij*log((trop+ALPHA)/rij) end if ls = ls + 1 if (ls .eq. 
ns) goto 900 goto 100 900 continue d(lo) = dopt lo = lo - 1 d(lo) = dble(iopt) lo = lo - 1 d(lo) = dble(jopt) do i = 1, ng ic(i) = i end do lo = nd - 1 ld = lo si = d(lo) lo = lo - 1 sj = d(lo) ic(int(sj)) = ng if (si .lt. sj) then x(1,1) = si x(1,2) = sj else x(1,1) = sj x(1,2) = si end if lg = ng + 1 do k = 2, ns lo = lo - 1 d(ld) = d(lo) ld = ld - 1 lo = lo - 1 i = int(d(lo)) ici = ic(i) lo = lo - 1 j = int(d(lo)) icj = ic(j) if (ici .gt. icj) ic(i) = icj ic(j) = ic(lg-k) if (ici .lt. icj) then x(k,1) = dble(ici) x(k,2) = dble(icj) else x(k,1) = dble(icj) x(k,2) = dble(ici) end if end do ld = nd lo = 1 do k = 1, ns si = d(lo) d(lo) = d(ld) d(ld) = si ld = ld - 1 lo = lo + 1 end do return end subroutine mevii ( EQPRO, x, n, p, G, Vinv, z, maxi, tol, eps, * mu, sigsq, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE logical EQPRO integer n, p, G, maxi double precision Vinv, eps, tol c double precision x(n,p), z(n,G[+1]) double precision x(n,*), z(n, * ) c double precision mu(p,G), sigsq(G), pro(G[+1]) double precision mu(p,*), sigsq(*), pro( * ) integer nz, iter, i, j, k double precision sumz, sum, temp, const, term, zsum double precision sigmin, sigsqk, hold, hood, err double precision prok, tmin, tmax, ViLog, rteps double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision RTMIN parameter (RTMIN = 1.49166814624d-154) double precision SMALOG parameter (SMALOG = -708.d0) c------------------------------------------------------------------------------ if (maxi .le. 0) return if (Vinv .gt. zero) then nz = G + 1 ViLog = log(Vinv) else nz = G if (EQPRO) call dcopy( G, one/dble(G), 0, pro, 1) end if eps = max(eps,zero) tol = max(tol,zero) rteps = sqrt(eps) c FLMAX = d1mach(2) hold = FLMAX/two hood = FLMAX err = FLMAX iter = 0 100 continue iter = iter + 1 zsum = one do k = 1, G sumz = zero call dcopy( p, zero, 0, mu(1,k), 1) do i = 1, n temp = z(i,k) sumz = sumz + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) end do if (.not. EQPRO) pro(k) = sumz / dble(n) zsum = min(sumz,zsum) if (sumz .gt. rteps) then call dscal( p, (one/sumz), mu(1,k), 1) sigsqk = zero do i = 1, n sum = zero do j = 1, p temp = abs(x(i,j) - mu(j,k)) if (temp .gt. RTMIN) sum = sum + temp*temp end do if (sqrt(z(i,k))*sqrt(sum) .gt. RTMIN) * sigsqk = sigsqk + z(i,k)*sum z(i,k) = sum end do sigsq(k) = (sigsqk/sumz)/dble(p) else sigsq(k) = FLMAX call dcopy( p, FLMAX, 0, mu(1,k), 1) end if end do if (zsum .le. rteps) then tol = zsum eps = -FLMAX maxi = iter return end if term = zero if (Vinv .gt. zero) then do i = 1, n term = term + z(i,nz) end do temp = term / dble(n) pro(nz) = temp call dcopy( n, ViLog, 0, z(1,nz), 1) if (EQPRO) then temp = (one - pro(nz))/dble(G) call dcopy( G, temp, 0, pro, 1) end if end if call sgnrng( G, sigsq, 1, sigmin, temp) if (sigmin .le. eps) then tol = err eps = FLMAX maxi = iter return end if do k = 1, G c temp = pro(k) sigsqk = sigsq(k) const = dble(p)*(pi2log+log(sigsqk)) do i = 1, n c z(i,k) = temp*exp(-(const+z(i,k)/sigsqk)/two) z(i,k) = -(const+z(i,k)/sigsqk)/two end do end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. 
zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do const = zero - tmax sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) + const if (temp .ge. SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)-const) call dscal( nz, (one/sum), z(i,1), n) end do err = abs(hold-hood)/(one+abs(hood)) hold = hood if (err .gt. tol .and. iter .lt. maxi) goto 100 tol = err eps = hood maxi = iter return end subroutine meviip( EQPRO, x, n, p, G, Vinv, * pshrnk, pmu, pscale, pdof, * z, maxi, tol, eps, * mu, sigsq, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE logical EQPRO integer n, p, G, maxi c double precision pshrnk, pmu(p), pscale, pdof double precision pshrnk, pmu(*), pscale, pdof double precision Vinv, eps, tol c double precision x(n,p), z(n,G[+1]) double precision x(n,*), z(n, * ) c double precision mu(p,G), sigsq(G), pro(G[+1]) double precision mu(p,*), sigsq(*), pro( * ) integer nz, iter, i, j, k double precision sumz, sum, temp, const, term, zsum double precision sigmin, sigsqk, hold, hood, err double precision prok, tmin, tmax, ViLog, rteps double precision pmupmu, cmu, cgam, rmu, rgam double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision RTMIN parameter (RTMIN = 1.49166814624d-154) double precision SMALOG parameter (SMALOG = -708.d0) double precision ddot, dlngam external ddot, dlngam c------------------------------------------------------------------------------ if (pshrnk .lt. zero) pshrnk = zero if (maxi .le. 0) return if (Vinv .gt. zero) then nz = G + 1 ViLog = log(Vinv) else nz = G if (EQPRO) call dcopy( G, one/dble(G), 0, pro, 1) end if eps = max(eps,zero) tol = max(tol,zero) rteps = sqrt(eps) c FLMAX = d1mach(2) hold = FLMAX/two hood = FLMAX err = FLMAX iter = 0 pmupmu = ddot(p,pmu,1,pmu,1) 100 continue iter = iter + 1 zsum = one do k = 1, G sumz = zero call dcopy( p, zero, 0, mu(1,k), 1) do i = 1, n temp = z(i,k) sumz = sumz + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) end do if (.not. EQPRO) pro(k) = sumz / dble(n) zsum = min(sumz,zsum) if (sumz .gt. rteps) then call dscal( p, (one/sumz), mu(1,k), 1) sigsqk = pscale do i = 1, n sum = zero do j = 1, p temp = abs(x(i,j) - mu(j,k)) if (temp .gt. RTMIN) sum = sum + temp*temp end do if (sqrt(z(i,k))*sqrt(sum) .gt. RTMIN) * sigsqk = sigsqk + z(i,k)*sum end do temp = pmupmu + ddot(p, mu(1,k), 1, mu(1,k), 1) temp = temp - two*ddot(p,mu(1,k),1,pmu,1) const = sumz+pshrnk sigsqk = sigsqk + ((sumz*pshrnk)/const) * temp c sigsq(k) = sigsqk/(pdof+(sumz+one)*dble(p)+two) temp = pdof+sumz*dble(p)+two if (pshrnk .gt. zero) temp = temp + dble(p) sigsq(k) = sigsqk/temp call dscal( p, sumz/const, mu(1,k), 1) call daxpy( p, pshrnk/const, pmu, 1, mu(1,k), 1) else sigsq(k) = FLMAX call dcopy( p, FLMAX, 0, mu(1,k), 1) end if end do if (zsum .le. rteps) then tol = zsum eps = -FLMAX maxi = iter return end if term = zero if (Vinv .gt. 
zero) then do i = 1, n term = term + z(i,nz) end do temp = term / dble(n) pro(nz) = temp call dcopy( n, ViLog, 0, z(1,nz), 1) if (EQPRO) then temp = (one - pro(nz))/dble(G) call dcopy( G, temp, 0, pro, 1) end if end if call sgnrng( G, sigsq, 1, sigmin, temp) if (sigmin .le. eps) then tol = err eps = FLMAX maxi = iter return end if do k = 1, G sigsqk = sigsq(k) const = dble(p)*(pi2log+log(sigsqk)) do i = 1, n sum = zero do j = 1, p temp = x(i,j) - mu(j,k) sum = sum + temp*temp end do z(i,k) = -(const+sum/sigsqk)/two end do end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do const = zero - tmax sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) + const if (temp .ge. SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)-const) call dscal( nz, (one/sum), z(i,1), n) end do err = abs(hold-hood)/(one+abs(hood)) hold = hood if (err .gt. tol .and. iter .lt. maxi) goto 100 tol = err eps = hood maxi = iter if (pshrnk .gt. zero) then cmu = dble(p)*(log(pshrnk)-pi2log)/two const = pdof/two cgam = const*log(pscale/two)-dlngam(const) rmu = zero rgam = zero do k = 1, G term = log(sigsq(k)) temp = pmupmu + ddot( p, mu(1,k), 1, mu(1,k), 1) temp = temp - two*ddot( p, mu(1,k), 1, pmu, 1) rmu = rmu + (pshrnk*temp)/sigsq(k) rgam = rgam + ((pdof+3.d0)*term - (pscale/sigsq(k))) end do rmu = -rmu /two rgam = -rgam/two pdof = (dble(G)*cmu+rmu) + (dble(G)*cgam+rgam) else pdof = FLMAX end if return end subroutine msvii ( x, z, n, p, G, mu, sigsq, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, p, G c double precision x(n,p), z(n,G), mu(p,G), sigsq(G), pro(G) double precision x(n,*), z(n,*), mu(p,*), sigsq(*), pro(*) integer i, j, k double precision sum, sumz, temp, sigsqk double precision zero, one parameter (zero = 0.d0, one = 1.d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision RTMIN parameter (RTMIN = 1.49166814624d-154) c------------------------------------------------------------------------------ do k = 1, G sumz = zero call dcopy( p, zero, 0, mu(1,k), 1) do i = 1, n temp = z(i,k) sumz = sumz + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) end do pro(k) = sumz / dble(n) if (sumz .ge. one .or. one .le. sumz*FLMAX) then call dscal( p, (one/sumz), mu(1,k), 1) sigsqk = zero do i = 1, n sum = zero do j = 1, p temp = abs(x(i,j) - mu(j,k)) if (temp .gt. RTMIN) sum = sum + temp*temp end do if (sqrt(z(i,k))*sqrt(sum) .gt. RTMIN) * sigsqk = sigsqk + z(i,k)*sum end do temp = sumz*dble(p) if (temp .ge. one .or. sigsqk .le. 
temp*FLMAX) then sigsq(k) = sigsqk/temp else sigsq(k) = FLMAX end if else sigsq(k) = FLMAX call dcopy( p, FLMAX, 0, mu(1,k), 1) end if end do return end subroutine msviip( x, z, n, p, G, * pshrnk, pmu, pscale, pdof, * mu, sigsq, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, p, G c double precision pshrnk, pmu(p), pscale, pdof double precision pshrnk, pmu(*), pscale, pdof c double precision x(n,p), z(n,G) double precision x(n,*), z(n,*) c double precision mu(p,G), sigsq(G), pro(G) double precision mu(p,*), sigsq(*), pro(*) integer i, j, k double precision sumz, sum, temp double precision sigsqk, const, pmupmu double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision RTMIN parameter (RTMIN = 1.49166814624d-154) double precision ddot external ddot c------------------------------------------------------------------------------ if (pshrnk .lt. zero) pshrnk = zero pmupmu = ddot(p,pmu,1,pmu,1) do k = 1, G sumz = zero call dcopy( p, zero, 0, mu(1,k), 1) do i = 1, n temp = z(i,k) sumz = sumz + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) end do pro(k) = sumz / dble(n) if (sumz .ge. one .or. one .lt. sumz*FLMAX) then call dscal( p, (one/sumz), mu(1,k), 1) sigsqk = pscale do i = 1, n sum = zero do j = 1, p temp = abs(x(i,j) - mu(j,k)) if (temp .gt. RTMIN) sum = sum + temp*temp end do if (sqrt(z(i,k))*sqrt(sum) .gt. RTMIN) * sigsqk = sigsqk + z(i,k)*sum end do temp = pmupmu + ddot(p, mu(1,k), 1, mu(1,k), 1) temp = temp - two*ddot(p,mu(1,k),1,pmu,1) const = sumz+pshrnk sigsqk = sigsqk + ((sumz*pshrnk)/const) * temp temp = pdof+sumz*dble(p)+two if (pshrnk .gt. zero) temp = temp + dble(p) sigsq(k) = sigsqk/temp call dscal( p, sumz/const, mu(1,k), 1) call daxpy( p, pshrnk/const, pmu, 1, mu(1,k), 1) else sigsq(k) = FLMAX call dcopy( p, FLMAX, 0, mu(1,k), 1) end if end do return end subroutine esvvi ( x, mu, scale, shape, pro, n, p, G, * Vinv, hood, z) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, p, G double precision hood, Vinv c double precision x(n,p), z(n,G[+1]) double precision x(n,*), z(n, * ) c double precision mu(p,G), scale(G), shape(p,G), pro(G[+1]) double precision mu(p,*), scale(*), shape(p,*), pro( * ) integer i, j, k, nz double precision sum, temp, const, tmin, tmax double precision smin, smax, prok, scalek double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision RTMAX parameter (RTMAX = 1.340780792994260d154) double precision RTMIN parameter (RTMIN = 1.49166814624d-154) double precision SMALOG parameter (SMALOG = -708.d0) c------------------------------------------------------------------------------ call sgnrng( G, scale, 1, smin, smax) if (smin .le. zero) then hood = FLMAX return end if do k = 1, G call sgnrng( p, shape(1,k), 1, smin, smax) if (smin .le. 
zero) then hood = FLMAX return end if temp = sqrt(scale(k)) do j = 1, p shape(j,k) = temp*sqrt(shape(j,k)) end do end do do k = 1, G c prok = pro(k) scalek = scale(k) const = dble(p)*(pi2log+log(scalek)) do i = 1, n sum = zero do j = 1, p temp = x(i,j) - mu(j,k) if (shape(j,k) .lt. one .and. * abs(temp) .ge. shape(j,k)*FLMAX) then hood = FLMAX return end if temp = temp/shape(j,k) if (abs(temp) .gt. RTMIN) sum = sum + temp*temp end do c z(i,k) = prok*exp(-(const+sum)/two) z(i,k) = -(const+sum)/two end do end do if (pro(1) .lt. zero) return nz = G if (Vinv .gt. zero) then nz = nz + 1 c call dcopy( n, pro(nz)*Vinv, 0, z(1,nz), 1) call dcopy( n, log(Vinv), 0, z(1,nz), 1) end if c hood = zero c do i = 1, n c sum = zero c do k = 1, nz c sum = sum + z(i,k) c end do c hood = hood + log(sum) c call dscal( nz, (one/sum), z(i,1), n) c end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) - tmax if (temp .ge. SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)+tmax) if (sum .lt. one .and. one .ge. sum*FLMAX) then hood = FLMAX return end if call dscal( nz, (one/sum), z(i,1), n) end do return end subroutine mevvi ( EQPRO, x, n, p, G, Vinv, z, maxi, tol, eps, * mu, scale, shape, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE logical EQPRO integer n, p, G, maxi double precision Vinv, eps, tol double precision x(n,*), z(n, * ) double precision mu(p,*), scale(*), shape(p,*), pro( * ) integer nz, iter, i, j, k double precision sum, temp, term, scalek, epsmin double precision hold, hood, err, smin, smax, const double precision prok, tmin, tmax, ViLog, zsum, rteps double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision SMALOG, BIGLOG parameter (SMALOG = -708.d0, BIGLOG = 709.d0) c------------------------------------------------------------------------------ if (maxi .le. 0) return if (Vinv .gt. zero) then nz = G + 1 ViLog = log(Vinv) else nz = G end if tol = max(tol,zero) eps = max(eps,zero) rteps = sqrt(eps) c FLMAX = d1mach(2) hold = FLMAX/two hood = FLMAX err = FLMAX iter = 0 100 continue iter = iter + 1 zsum = one do k = 1, G call dcopy( p, zero, 0, shape(1,k), 1) call dcopy( p, zero, 0, mu(1,k), 1) sum = zero do i = 1, n temp = z(i,k) sum = sum + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) z(i,k) = sqrt(temp) end do pro(k) = sum c pro(k) now contains n_k zsum = min(zsum,sum) if (sum .gt. rteps) then call dscal( p, (one/sum), mu(1,k), 1) do j = 1, p sum = zero do i = 1, n temp = z(i,k)*(x(i,j) - mu(j,k)) sum = sum + temp*temp end do shape(j,k) = shape(j,k) + sum end do end if end do if (zsum .le. rteps) then call dscal( G, one/dble(n), pro, 1) tol = zsum eps = -FLMAX maxi = iter return end if epsmin = FLMAX do k = 1, G call sgnrng(p, shape(1,k), 1, smin, smax) epsmin = min(smin,epsmin) if (smin .le. zero) then scale(k) = zero else temp = zero do j = 1, p temp = temp + log(shape(j,k)) end do temp = temp/dble(p) if (temp .gt. 
BIGLOG) then call dscal( G, one/dble(n), pro, 1) tol = zsum eps = FLMAX maxi = iter return end if if (temp .gt. SMALOG) then temp = exp(temp) else temp = zero end if scale(k) = temp/pro(k) epsmin = min(temp,epsmin) if (temp .le. eps) then call dscal( G, one/dble(n), pro, 1) tol = zsum eps = FLMAX maxi = iter return end if call dscal( p, one/temp, shape(1,k), 1) end if end do if (.not. EQPRO) then call dscal( G, one/dble(n), pro, 1) else if (Vinv .le. zero) then call dscal( G, one/dble(G), pro, 1) end if if (Vinv .gt. zero) then term = zero do i = 1, n term = term + z(i,nz) end do temp = term / dble(n) pro(nz) = temp call dcopy( n, ViLog, 0, z(1,nz), 1) if (EQPRO) then temp = (one - pro(nz))/dble(G) call dcopy( G, temp, 0, pro, 1) end if end if if (epsmin .le. eps) then tol = err eps = -FLMAX maxi = iter return end if call sgnrng( G, scale, 1, smin, smax) if (smin .le. eps) then tol = err eps = FLMAX maxi = iter return end if do k = 1, G call sgnrng( p, shape(1,k), 1, smin, smax) if (smin .le. eps) then tol = err eps = FLMAX maxi = iter return end if end do do k = 1, G scalek = scale(k) const = dble(p)*(pi2log + log(scalek)) do i = 1, n sum = zero do j = 1, p temp = x(i,j) - mu(j,k) sum = sum + (temp*temp)/shape(j,k) end do c z(i,k) = pro(k)*exp(-(const+(sum/scalek))/two) z(i,k) = -(const+(sum/scalek))/two end do end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) - tmax if (temp .ge. SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)+tmax) call dscal( nz, (one/sum), z(i,1), n) end do err = abs(hold-hood)/(one+abs(hood)) hold = hood if (err .gt. tol .and. iter .lt. maxi) goto 100 tol = err eps = hood maxi = iter return end subroutine mevvip( EQPRO, x, n, p, G, Vinv, * pshrnk, pmu, pscale, pdof, * z, maxi, tol, eps, * mu, scale, shape, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE logical EQPRO integer n, p, G, maxi c double precision pshrnk, pmu(p), pscale, pdof double precision pshrnk, pmu(*), pscale, pdof double precision Vinv, eps, tol c double precision x(n,p), z(n,G[+1]) double precision x(n,*), z(n, * ) c double precision mu(p,G), scale(G), shape(p,G), pro(G[+1]) double precision mu(p,*), scale(*), shape(p,*), pro( * ) integer nz, iter, i, j, k double precision sumz, sum, temp, term, scalek, epsmin double precision hold, hood, err, smin, smax, const double precision prok, tmin, tmax, ViLog, zsum, rteps double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision SMALOG, BIGLOG parameter (SMALOG = -708.d0, BIGLOG = 709.d0) c------------------------------------------------------------------------------ if (pshrnk .lt. zero) pshrnk = zero if (maxi .le. 0) return if (Vinv .gt. 
zero) then nz = G + 1 ViLog = log(Vinv) else nz = G end if eps = max(eps,zero) tol = max(tol,zero) rteps = sqrt(eps) c FLMAX = d1mach(2) hold = FLMAX/two hood = FLMAX err = FLMAX iter = 0 100 continue iter = iter + 1 zsum = one do k = 1, G call dcopy( p, pscale, 0, shape(1,k), 1) call dcopy( p, zero, 0, mu(1,k), 1) sumz = zero do i = 1, n temp = z(i,k) sumz = sumz + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) z(i,k) = sqrt(temp) end do pro(k) = sumz zsum = min(zsum,sumz) if (sumz .gt. rteps) then call dscal( p, (one/sumz), mu(1,k), 1) term = pshrnk+sumz const = (pshrnk*sumz)/term do j = 1, p sum = zero do i = 1, n temp = z(i,k)*(x(i,j) - mu(j,k)) sum = sum + temp*temp end do shape(j,k) = shape(j,k) + sum temp = pmu(j) - mu(j,k) shape(j,k) = shape(j,k) + const*(temp*temp) end do call dscal( p, sumz/term, mu(1,k), 1) call daxpy( p, pshrnk/term, pmu, 1, mu(1,k), 1) end if end do if (zsum .le. rteps) then call dscal( G, one/dble(n), pro, 1) tol = zsum eps = -FLMAX maxi = iter return end if c pro(k) now contains n_k epsmin = FLMAX term = pdof+two if (pshrnk .gt. zero) term = term + one do k = 1, G call sgnrng(p, shape(1,k), 1, smin, smax) epsmin = min(smin,epsmin) if (smin .eq. zero) then scale(k) = zero else sum = zero do j = 1, p sum = sum + log(shape(j,k)) end do temp = sum/dble(p) if (temp .gt. BIGLOG) then call dscal( G, one/dble(n), pro, 1) tol = zsum eps = FLMAX maxi = iter return end if if (temp .gt. SMALOG) then temp = exp(temp) else temp = zero end if c pro(k) contains n_k scale(k) = temp/(pro(k)+term) epsmin = min(temp,epsmin) if (temp .le. eps) then call dscal( G, one/dble(n), pro, 1) tol = zsum eps = FLMAX maxi = iter return end if call dscal( p, one/temp, shape(1,k), 1) end if end do if (.not. EQPRO) then call dscal( G, one/dble(n), pro, 1) else if (Vinv .le. zero) then call dcopy( G, one/dble(G), 0, pro, 1) end if term = zero if (Vinv .gt. zero) then do i = 1, n term = term + z(i,nz) end do temp = term / dble(n) pro(nz) = temp call dcopy( n, ViLog, 0, z(1,nz), 1) if (EQPRO) then temp = (one - pro(nz))/dble(G) call dcopy( G, temp, 0, pro, 1) end if end if if (epsmin .le. eps) then tol = err eps = FLMAX maxi = iter return end if call sgnrng( G, scale, 1, smin, smax) if (smin .le. eps) then tol = err eps = FLMAX maxi = iter return end if do k = 1, G call sgnrng( p, shape(1,k), 1, smin, smax) if (smin .le. eps) then tol = err eps = FLMAX maxi = iter return end if end do do k = 1, G scalek = scale(k) const = dble(p)*(pi2log + log(scalek)) do i = 1, n sum = zero do j = 1, p temp = x(i,j) - mu(j,k) sum = sum + (temp*temp)/shape(j,k) end do c z(i,k) = pro(k)*exp(-(const+(sum/scalek))/two) z(i,k) = -(const+(sum/scalek))/two end do end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) - tmax if (temp .ge. SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)+tmax) call dscal( nz, (one/sum), z(i,1), n) end do err = abs(hold-hood)/(one+abs(hood)) hold = hood if (err .gt. tol .and. iter .lt. 
maxi) goto 100 tol = err eps = hood maxi = iter return end subroutine msvvi ( x, z, n, p, G, mu, scale, shape, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, p, G c double precision x(n,p), z(n,G) double precision x(n,*), z(n,*) c double precision mu(p,G), scale(G), shape(p,G), pro(G) double precision mu(p,*), scale(*), shape(p,*), pro(*) integer i, j, k double precision sum, temp, smin, smax double precision zero, one parameter (zero = 0.d0, one = 1.d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision SMALOG, BIGLOG parameter (SMALOG = -708.d0, BIGLOG = 709.d0) c------------------------------------------------------------------------------ do k = 1, G call dcopy( p, zero, 0, shape(1,k), 1) sum = zero call dcopy( p, zero, 0, mu(1,k), 1) do i = 1, n temp = z(i,k) sum = sum + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) z(i,k) = sqrt(temp) end do pro(k) = sum if (sum .ge. one .or. one .lt. sum*FLMAX) then call dscal( p, (one/sum), mu(1,k), 1) else call dcopy( p, FLMAX, 0, mu(1,k), 1) end if end do c pro(k) now contains n_k do k = 1, G if (mu(1,k) .ne. FLMAX) then do j = 1, p sum = zero do i = 1, n temp = z(i,k)*(x(i,j) - mu(j,k)) sum = sum + temp*temp end do shape(j,k) = shape(j,k) + sum end do else call dcopy( p, FLMAX, 0, shape(1,k), 1) end if end do do k = 1, G call sgnrng(p, shape(1,k), 1, smin, smax) if (smin .le. zero) then scale(k) = zero call dcopy( p, FLMAX, 0, shape(1,k), 1) goto 100 end if if (smax .eq. FLMAX) then scale(k) = FLMAX call dcopy( p, FLMAX, 0, shape(1,k), 1) goto 100 end if sum = zero do j = 1, p sum = sum + log(shape(j,k)) end do temp = sum/dble(p) if (temp .gt. BIGLOG) then scale(k) = FLMAX call dcopy( p, FLMAX, 0, shape(1,k), 1) goto 100 end if if (temp .lt. SMALOG) then temp = zero scale(k) = zero call dcopy( p, FLMAX, 0, shape(1,k), 1) goto 100 end if temp = exp(temp) if (pro(k) .lt. one .and. temp .ge. pro(k)*FLMAX) then scale(k) = FLMAX call dcopy( p, FLMAX, 0, shape(1,k), 1) goto 100 end if scale(k) = temp/pro(k) if (temp .lt. one .and. one .ge. temp*FLMAX) then call dcopy( p, FLMAX, 0, shape(1,k), 1) goto 100 end if call dscal( p, one/temp, shape(1,k), 1) 100 continue end do call dscal( G, one/dble(n), pro, 1) return end subroutine msvvip( x, z, n, p, G, * pshrnk, pmu, pscale, pdof, * mu, scale, shape, pro) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, p, G c double precision pshrnk, pmu(p), pscale, pdof double precision pshrnk, pmu(*), pscale, pdof c double precision x(n,p), z(n,G) double precision x(n,*), z(n,*) c double precision mu(p,G), scale(G), shape(p,G), pro(G) double precision mu(p,*), scale(*), shape(p,*), pro(*) integer i, j, k double precision sumz, sum, temp, term double precision smin, smax, const double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision SMALOG, BIGLOG parameter (SMALOG = -708.d0, BIGLOG = 709.d0) c------------------------------------------------------------------------------ if (pshrnk .lt. 
zero) pshrnk = zero do k = 1, G call dcopy( p, pscale, 0, shape(1,k), 1) sumz = zero call dcopy( p, zero, 0, mu(1,k), 1) do i = 1, n temp = z(i,k) sumz = sumz + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) z(i,k) = sqrt(temp) end do pro(k) = sumz if (sumz .ge. one .or. one .le. sumz*FLMAX) then call dscal( p, (one/sumz), mu(1,k), 1) term = pshrnk+sumz const = (pshrnk*sumz)/term do j = 1, p sum = zero do i = 1, n temp = z(i,k)*(x(i,j) - mu(j,k)) sum = sum + temp*temp end do shape(j,k) = shape(j,k) + sum temp = pmu(j) - mu(j,k) shape(j,k) = shape(j,k) + const*(temp*temp) end do call dscal( p, sumz/term, mu(1,k), 1) call daxpy( p, pshrnk/term, pmu, 1, mu(1,k), 1) else call dcopy( p, FLMAX, 0, mu(1,k), 1) call dcopy( p, FLMAX, 0, shape(1,k), 1) end if end do c pro(k) now contains n_k do k = 1, G call sgnrng(p, shape(1,k), 1, smin, smax) if (smin .le. zero) then scale(k) = zero call dcopy( p, FLMAX, 0, shape(1,k), 1) else if (smax .eq. FLMAX) then scale(k) = FLMAX else sum = zero do j = 1, p sum = sum + log(shape(j,k)) end do temp = sum/dble(p) if (temp .gt. BIGLOG) then scale(k) = FLMAX call dcopy( p, FLMAX, 0, shape(1,k), 1) else if (temp .lt. SMALOG) then temp = zero scale(k) = zero call dcopy( p, FLMAX, 0, shape(1,k), 1) else temp = exp(temp) c pro(k) contains n_k term = pro(k) + pdof + two if (pshrnk .gt. zero) term = term + one scale(k) = temp/term if (temp .ge. one .or. one .le. temp*FLMAX) then call dscal( p, one/temp, shape(1,k), 1) else call dcopy( p, FLMAX, 0, shape(1,k), 1) end if end if end if end do call dscal( G, one/dble(n), pro, 1) return end subroutine esvvv ( CHOL, x, mu, Sigma, pro, n, p, G, Vinv, * w, hood, z) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE c character CHOL logical CHOL c integer n, p, G integer n, p, G double precision hood, Vinv c double precision x(n,p), w(p), z(n,G[+1]) double precision x(n,*), w(*), z(n, * ) c double precision mu(p,G), Sigma(p,p,G), pro(G[+1]) double precision mu(p,*), Sigma(p,p,*), pro( * ) integer nz, p1, info, i, j, k double precision const, detlog, temp, prok, tmin, tmax double precision umin, umax, sum double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision RTMAX parameter (RTMAX = 1.340780792994260d154) double precision RTMIN parameter (RTMIN = 1.49166814624d-154) double precision SMALOG parameter (SMALOG = -708.d0) double precision ddot external ddot c------------------------------------------------------------------------------ p1 = p + 1 c if (CHOL .eq. 'N') then if (.not. CHOL) then do k = 1, G call dpotrf( 'U', p, Sigma(1,1,k), p, info) w(1) = dble(info) if (info .ne. 0) then hood = FLMAX return end if end do end if do k = 1, G call absrng( p, Sigma(1,1,k), p1, umin, umax) if (umax .le. one .and. umax .ge. umin*RTMAX) then w(1) = zero hood = FLMAX return end if if (umax .ge. one .and. umin .le. 
umax*RTMIN) then w(1) = zero hood = FLMAX return end if end do do k = 1, G detlog = zero do j = 1, p detlog = detlog + log(abs(Sigma(j,j,k))) end do const = dble(p)*pi2log/two + detlog c prok = pro(k) do i = 1, n call dcopy( p, x(i,1), n, w, 1) call daxpy( p, (-one), mu(1,k), 1, w, 1) call dtrsv( 'U', 'T', 'N', p, Sigma(1,1,k), p, w, 1) temp = ddot( p, w, 1, w, 1)/two c z(i,k) = prok*exp(-(const+temp)) z(i,k) = -(const+temp) end do end do w(1) = zero if (pro(1) .lt. zero) return nz = G if (Vinv .gt. zero) then nz = nz + 1 c call dcopy( n, pro(nz)*Vinv, 0, z(1,nz), 1) call dcopy( n, log(Vinv), 0, z(1,nz), 1) end if c hood = zero c do i = 1, n c sum = zero c do k = 1, nz c sum = sum + z(i,k) c end do c hood = hood + log(sum) c call dscal( nz, (one/sum), z(i,1), n) c end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) - tmax if (temp .ge. SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)+tmax) if (sum .lt. one .and. one .ge. sum*FLMAX) then w(1) = zero hood = FLMAX return end if call dscal( nz, (one/sum), z(i,1), n) end do w(1) = zero return end subroutine hcvvv ( x, n, p, ic, ng, ns, ALPHA, BETA, * v, u, s, r, nd, d) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, p, ic(n), ng, ns, nd double precision ALPHA, BETA c double precision x(n,p+1), v(p), u(p,p), s(p,p) c double precision r(p,p), d(ng*(ng-1)/2) double precision x(n,*), v(*), u(p,*), s(p,*) double precision r(p,*), d(*) integer psq, pm1, pp1 integer i, j, k, l, m, ij, iold integer lg, ld, ll, lo, ls integer ici, icj, ni, nj, nij integer nopt, niop, njop, iopt, jopt double precision trcij, trmij, trop, tmop double precision traci, tracj, termi, termj double precision qi, qj, qij, si, sj, sij, ri, rj, rij double precision dij, dopt, siop, sjop double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision rthalf parameter (rthalf = .7071067811865476d0) double precision ddot, vvvtij external ddot, vvvtij double precision BETA0, ALPHA0, ABLOG common /VVVMCL/ BETA0, ALPHA0, ABLOG save /VVVMCL/ double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision EPSMAX parameter (EPSMAX = 2.2204460492503131d-16) c------------------------------------------------------------------------------ iopt = 0 niop = 0 nopt = 0 tmop = 0.d0 trop = 0.d0 lg = ng ld = (ng*(ng-1))/2 ll = nd-ng lo = nd psq = p*p pm1 = p-1 pp1 = p+1 if (ng .eq. 1) return ALPHA = max(ALPHA,EPSMAX) BETA0 = BETA ALPHA0 = ALPHA ABLOG = log(BETA*ALPHA) c call intpr( 'ic', -1, ic, n) c group heads should be first among rows of x i = 1 j = 2 1 continue icj = ic(j) if (icj .ne. j) goto 2 if (j .eq. ng) goto 3 i = j j = j + 1 goto 1 2 continue k = i m = j + 1 do j = m, n icj = ic(j) if (icj .gt. k) then k = k + 1 call dswap( p, x(k,1), n, x(j,1), n) ic(j) = ic(k) ic(k) = icj end if end do 3 continue c set up pointers if (ng .eq. n) goto 4 do j = n, ng+1, -1 icj = ic(j) i = ic(icj) ic(icj) = j if (i .ne. 
icj) then ic(j) = i else ic(j) = j end if end do 4 continue c call intpr( 'ic', -1, ic, n) c initialize by simulating merges do k = 1, ng j = ic(k) if (j .ne. k) then c non-singleton call dcopy( psq, zero, 0, r, 1) trcij = zero l = 1 10 continue m = l + 1 qj = one/dble(m) qi = dble(l)*qj si = sqrt(qi) sj = sqrt(qj) call dcopy( p, x(j,1), n, v, 1) call dscal( p, si, v, 1) call daxpy( p, (-sj), x(k,1), n, v, 1) trcij = trcij + ddot( p, v, 1, v, 1) call dscal( p, si, x(k,1), n) call daxpy( p, sj, x(j,1), n, x(k,1), n) call mclrup( m, p, v, r, p) l = m i = ic(j) if (i .eq. j) goto 20 j = i goto 10 20 continue c d(ll+k) = trcij c copy triangular factor into the rows of x j = k m = p do i = 1, min(l-1,p) j = ic(j) call dcopy( m, r(i,i), p, x(j,i), n) m = m - 1 end do ij = j if (l .ge. p) then do m = p, l icj = ic(j) ic(j) = -k j = icj end do end if ic(ij) = n+l x(k, pp1) = zero if (l .ge. 2) then x( k, pp1) = trcij trmij = vvvtij( l, p, r, sj, trcij) x(ic(k),pp1) = trmij end if else ic(k) = 1 c d(ll+k) = zero end if end do c call intpr( 'ic', -1, ic, n) c call dblepr( '', -1, x(1,pp1), n) c call dblepr( 'trac', -1, d(ll+1), ng) c call dblepr( 'term', -1, term, n) c compute change in likelihood and determine minimum dopt = FLMAX ij = 0 do j = 2, ng icj = ic(j) nj = 1 if (icj .eq. 1) then tracj = zero termj = ABLOG do i = 1, (j-1) ni = 1 ici = ic(i) if (ici .eq. 1) then nij = 2 rij = two si = rthalf sj = rthalf sij = rthalf call dcopy( p, x(i,1), n, v, 1) call daxpy( p, (-one), x(j,1), n, v, 1) call dscal( p, rthalf, v, 1) c trcij = half*ddot( p, v, 1, v, 1) trcij = ddot( p, v, 1, v, 1) call dcopy( p, v, 1, u, p) c trmij = rij*log(BETA*trcij+ALPHA) trmij = two*log(BETA*(trcij+ALPHA)/two) termi = ABLOG else m = p l = ici 110 continue call dcopy( m, x(l,ni), n, u(ni,ni), p) ni = ni + 1 m = m - 1 l = ic(l) if (l .le. n) goto 110 ni = l - n c traci = d(ll+i) c traci = trac(i) c termi = vvvtrm(i,ni,n,p,ic,x,traci) c termi = term(i) traci = x( i , pp1) termi = x(ic(i), pp1) ri = dble(ni) nij = ni + 1 rij = dble(nij) qij = one/rij qi = ri*qij si = sqrt(qi) sj = sqrt(qij) sij = sj call dcopy(p, x(i,1), n, v, 1) call dscal( p, sj, v, 1) call daxpy( p, (-si), x(j,1), n, v, 1) trcij = traci + ddot(p,v,1,v,1) call mclrup( nij, p, v, u, p) trmij = vvvtij( nij, p, u, sij, trcij) end if dij = trmij - (termi + termj) ij = ij + 1 d(ij) = dij if (dij .le. dopt) then dopt = dij trop = trcij tmop = trmij nopt = nij niop = ni njop = nj siop = si sjop = sj iopt = i jopt = j m = p do k = 1, min(nij-1,p) call dcopy( m, u(k,k), p, r(k,k), p) m = m - 1 end do end if end do else m = p l = icj 120 continue call dcopy( m, x(l,nj), n, s(nj,nj), p) nj = nj + 1 m = m - 1 l = ic(l) if (l .le. n) goto 120 nj = l - n c tracj = d(ll+j) c termj = vvvtrm(j,nj,n,p,ic,x,tracj) tracj = x( j , pp1) termj = x( ic(j), pp1) rj = dble(nj) do i = 1, (j-1) m = p do k = 1, min(nj-1,p) call dcopy( m, s(k,k), p, u(k,k), p) m = m - 1 end do ni = 1 ici = ic(i) if (ici .eq. 1) then nij = nj + 1 rij = dble(nij) qij = one/rij qi = qij qj = rj*qij si = sqrt(qi) sj = sqrt(qj) sij = sqrt(qij) call dcopy(p, x(i,1), n, v, 1) call dscal( p, sj, v, 1) call daxpy( p, (-si), x(j,1), n, v, 1) trcij = tracj + ddot(p,v,1,v,1) termi = ABLOG else m = p l = ici k = nj + 1 130 continue call dcopy( m, x(l,ni), n, v, 1) call mclrup( k, m, v, u(ni,ni), p) ni = ni + 1 m = m - 1 l = ic(l) if (l .le. 
n) goto 130 ni = l - n c traci = d(ll+i) c termi = vvvtrm(i,ni,n,p,ic,x,traci) traci = x( i , pp1) termi = x(ic(i), pp1) ri = dble(ni) nij = ni + nj rij = dble(nij) qij = one/rij qi = ri*qij qj = rj*qij si = sqrt(qi) sj = sqrt(qj) sij = sqrt(qij) call dcopy(p, x(i,1), n, v, 1) call dscal( p, sj, v, 1) call daxpy( p, (-si), x(j,1), n, v, 1) trcij = ( traci + tracj) + ddot(p,v,1,v,1) end if call mclrup( nij, p, v, u, p) trmij = vvvtij( nij, p, u, sij, trcij) dij = trmij - (termi + termj) ij = ij + 1 d(ij) = dij if (dij .le. dopt) then dopt = dij trop = trcij tmop = trmij nopt = nij niop = ni njop = nj siop = si sjop = sj iopt = i jopt = j m = p do k = 1, min(nij-1,p) call dcopy( m, u(k,k), p, r(k,k), p) m = m - 1 end do end if end do end if end do c if (.false.) then c i = 1 c ij = 1 c do j = 2, ng c call dblepr( 'dij', -1, d(ij), i) c ij = ij + i c i = j c end do c end if if (ns .eq. 1) then if (iopt .lt. jopt) then x(1,1) = iopt x(1,2) = jopt else x(1,1) = jopt x(1,2) = iopt end if d(1) = dopt return end if ls = 1 200 continue call dcopy( p, x(iopt,1), n, v, 1) call dscal( p, siop, v, 1) call daxpy( p, sjop, x(jopt,1), n, v, 1) if (jopt .ne. lg) then call wardsw( jopt, lg, d) call dcopy( p, x(lg,1), n, x(jopt,1), n) m = ic(jopt) icj = ic(lg) if (icj .ne. 1) x( jopt, pp1) = x( lg, pp1) ic(jopt) = icj ic(lg) = m end if if (niop .eq. 1) then ic(iopt) = lg else l = ic(iopt) do k = 1, min(niop-1,p) m = l l = ic(l) end do if (l .lt. n) call intpr("l .lt. n", 8, l, 1) ic(m) = lg end if l = ic(iopt) do k = 1, min(nopt-1,p) call dcopy( p, r(1,1), p, x(l,1), n) m = l l = ic(l) end do ic(m) = nopt + n c call intpr('ic', 2, ic, n) c term(iopt) = tmop c trac(iopt) = trop x(iopt, pp1) = zero if (nopt .ge. 2) then x(iopt,pp1) = trop x(ic(iopt),pp1) = tmop endif call dcopy( p, v, 1, x(iopt,1), n) d(lo) = dopt lo = lo - 1 d(lo) = dble(iopt) lo = lo - 1 d(lo) = dble(jopt) lo = lo - 1 lg = lg - 1 ld = ld - lg iold = iopt dopt = FLMAX ni = nopt ri = dble(ni) termi = tmop traci = trop ij = ((iold-1)*(iold-2))/2 if (iold .gt. 1) then do j = 1, (iold-1) call dcopy(psq, zero, 0, u, 1) m = p do k = 1, min(ni-1,p) call dcopy(m, r(k,k), p, u(k,k), p) m = m - 1 end do nj = 1 icj = ic(j) if (icj .eq. 1) then nij = ni + 1 rij = dble(nij) qij = one/rij qi = ri*qij si = sqrt(qi) sj = sqrt(qij) sij = sj call dcopy(p, x(j,1), n, v, 1) call dscal( p, si, v, 1) call daxpy( p, (-sj), x(iold,1), n, v, 1) trcij = traci + ddot(p,v,1,v,1) tracj = zero termj = ABLOG else m = p l = icj k = ni + 1 310 continue call dcopy( m, x(l,nj), n, v, 1) call mclrup( k, m, v, u(nj,nj), p) k = k + 1 nj = nj + 1 m = m - 1 l = ic(l) if (l .le. n) goto 310 nj = l - n c call vvvget(j,nj,n,p,ic,x,tracj,termj) tracj = x( j ,pp1) termj = x(ic(j),pp1) rj = dble(nj) nij = ni + nj rij = dble(nij) qij = one/rij qi = ri*qij qj = rj*qij si = sqrt(qi) sj = sqrt(qj) sij = sqrt(qij) call dcopy(p, x(j,1), n, v, 1) call dscal( p, si, v, 1) call daxpy( p, (-sj), x(iold,1), n, v, 1) trcij = ( traci + tracj) + ddot(p,v,1,v,1) end if call mclrup( nij, p, v, u, p) trmij = vvvtij( nij, p, u, sij, trcij) dij = trmij - (termi + termj) ij = ij + 1 d(ij) = dij if (dij .le. dopt) then dopt = dij trop = trcij tmop = trmij nopt = nij niop = nj njop = ni siop = sj sjop = si iopt = j jopt = iold m = p do k = 1, min(nij-1,p) call dcopy(m, u(k,k), p, s(k,k), p) m = m - 1 end do end if end do end if if (iold .lt. 
lg) then i = iold ij = ij + i do j = (iold+1), lg call dcopy(psq, zero, 0, u, 1) m = p do k = 1, min(ni-1,p) call dcopy(m, r(k,k), p, u(k,k), p) m = m - 1 end do nj = 1 icj = ic(j) if (icj .eq. 1) then nij = ni + 1 rij = dble(nij) qij = one/rij qi = ri*qij si = sqrt(qi) sj = sqrt(qij) sij = sj call dcopy(p, x(j,1), n, v, 1) call dscal( p, si, v, 1) call daxpy( p, (-sj), x(iold,1), n, v, 1) trcij = traci + ddot(p,v,1,v,1) termj = ABLOG else m = p l = icj k = ni + 1 410 continue call dcopy( m, x(l,nj), n, v, 1) call mclrup( k, m, v, u(nj,nj), p) k = k + 1 nj = nj + 1 m = m - 1 l = ic(l) if (l .le. n) goto 410 nj = l - n c call vvvget(j,nj,n,p,ic,x,tracj,termj) tracj = x( j ,pp1) termj = x(ic(j),pp1) rj = dble(nj) nij = ni + nj rij = dble(nij) qij = one/rij qi = ri*qij qj = rj*qij si = sqrt(qi) sj = sqrt(qj) sij = sqrt(qij) call dcopy(p, x(j,1), n, v, 1) call dscal( p, si, v, 1) call daxpy( p, (-sj), x(iold,1), n, v, 1) trcij = ( traci + tracj) + ddot(p,v,1,v,1) end if call mclrup( nij, p, v, u, p) trmij = vvvtij( nij, p, u, sij, trcij) dij = trmij - (termi + termj) d(ij) = dij ij = ij + i i = j if (dij .le. dopt) then dopt = dij trop = trcij tmop = trmij nopt = nij niop = ni njop = nj siop = si sjop = sj iopt = iold jopt = j m = p do k = 1, min(nij-1,p) call dcopy(m, u(k,k), p, s(k,k), p) m = m - 1 end do end if end do end if c update d and find max jopt = 2 iopt = 1 dopt = d(1) if (lg .eq. 2) goto 900 ij = 1 do i = 2, ld qi = d(i) if (qi .le. dopt) then ij = i dopt = qi end if end do c call dblepr("d", 1, d, nd) c call dblepr("d", 1, d, ld) if (ij .gt. 1) then do i = 2, ij iopt = iopt + 1 if (iopt .ge. jopt) then jopt = jopt + 1 iopt = 1 end if end do end if do k = 1, p call dcopy( p, zero, 0, r(1,k), 1) end do if (iopt .ne. iold .and. jopt .ne. iold) then i = iopt j = jopt nj = 1 icj = ic(j) ni = 1 ici = ic(i) if (icj .eq. 1) then termj = ABLOG if (ici .eq. 1) then nij = 2 rij = two si = rthalf sj = rthalf call dcopy(p, x(i,1), n, v, 1) call daxpy( p, (-one), x(j,1), n, v, 1) call dscal( p, rthalf, v, 1) trcij = ddot( p, v, 1, v, 1) call dcopy( p, v, 1, r, p) termi = ABLOG else m = p l = ici 610 continue call dcopy( m, x(l,ni), n, r(ni,ni), p) ni = ni + 1 m = m - 1 l = ic(l) if (l .le. n) goto 610 ni = l - n c call vvvget(i,ni,n,p,ic,x,traci,termi) traci = x( i , pp1) termi = x(ic(i), pp1) ri = dble(ni) nij = ni + 1 rij = dble(nij) qij = one/rij qi = ri*qij si = sqrt(qi) sj = sqrt(qij) call dcopy(p, x(i,1), n, v, 1) call dscal( p, sj, v, 1) call daxpy( p, (-si), x(j,1), n, v, 1) trcij = traci + ddot( p, v, 1, v, 1) call mclrup( nij, p, v, r, p) end if else m = p l = icj 620 continue call dcopy( m, x(l,nj), n, r(nj,nj), p) nj = nj + 1 m = m - 1 l = ic(l) if (l .le. n) goto 620 nj = l - n c call vvvget(j,nj,n,p,ic,x,tracj,termj) tracj = x( j , pp1) termj = x(ic(j), pp1) rj = dble(nj) if (ici .eq. 1) then nij = nj + 1 rij = dble(nij) qij = one/rij qj = rj*qij si = sqrt(qij) sj = sqrt(qj) call dcopy(p, x(i,1), n, v, 1) call dscal( p, sj, v, 1) call daxpy( p, (-si), x(j,1), n, v, 1) trcij = tracj + ddot( p, v, 1, v, 1) termi = ABLOG else m = p l = ici k = nj + 1 630 continue call dcopy( m, x(l,ni), n, v, 1) call mclrup( k, m, v, r(ni,ni), p) ni = ni + 1 m = m - 1 l = ic(l) if (l .le. 
n) goto 630 ni = l - n c call vvvget(i,ni,n,p,ic,x,traci,termi) traci = x( i , pp1) termi = x(ic(i), pp1) ri = dble(ni) nij = ni + nj rij = dble(nij) qij = one/rij qi = ri*qij qj = rj*qij si = sqrt(qi) sj = sqrt(qj) call dcopy(p, x(i,1), n, v, 1) call dscal( p, sj, v, 1) call daxpy( p, (-si), x(j,1), n, v, 1) trcij = ( traci + tracj) + ddot( p,v,1,v,1) end if call mclrup( nij, p, v, r, p) end if trop = trcij tmop = dopt + (termi + termj) nopt = nij niop = ni njop = nj siop = si sjop = sj else m = p do k = 1, min(nopt-1,p) call dcopy(m, s(k,k), p, r(k,k), p) m = m - 1 end do l = ic(iopt) if (l .ne. 1) then 710 continue if (l .le. n) then l = ic(l) goto 710 end if niop = l-n else niop = 1 end if l = ic(jopt) if (l .ne. 1) then 720 continue if (l .le. n) then l = ic(l) goto 720 end if njop = l-n else njop = 1 end if nopt = niop + njop end if ls = ls + 1 if (ls .eq. ns) goto 900 goto 200 900 continue d(lo) = dopt lo = lo - 1 d(lo) = dble(iopt) lo = lo - 1 d(lo) = dble(jopt) do i = 1, ng ic(i) = i end do lo = nd - 1 ld = lo si = d(lo) lo = lo - 1 sj = d(lo) ic(int(sj)) = ng if (si .lt. sj) then x(1,1) = si x(1,2) = sj else x(1,1) = sj x(1,2) = si end if lg = ng + 1 do k = 2, ns lo = lo - 1 d(ld) = d(lo) ld = ld - 1 lo = lo - 1 i = int(d(lo)) ici = ic(i) lo = lo - 1 j = int(d(lo)) icj = ic(j) if (ici .gt. icj) ic(i) = icj ic(j) = ic(lg-k) if (ici .lt. icj) then x(k,1) = dble(ici) x(k,2) = dble(icj) else x(k,1) = dble(icj) x(k,2) = dble(ici) end if end do ld = nd lo = 1 do k = 1, ns si = d(lo) d(lo) = d(ld) d(ld) = si ld = ld - 1 lo = lo + 1 end do return end double precision function vvvtij( l, p, r, s, trac) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer l, p double precision r(p,*), s, trac double precision detlog double precision zero, one parameter (zero = 0.d0, one = 1.d0) double precision det2mc external det2mc double precision BETA, ALPHA, ABLOG common /VVVMCL/ BETA, ALPHA, ABLOG save /VVVMCL/ double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) if (l .le. p) then vvvtij = log(BETA*(trac+ALPHA)/dble(l)) else if (trac .eq. zero) then vvvtij = log((ALPHA*BETA)/dble(l)) else detlog = det2mc( p, r, s) if (detlog .eq. -FLMAX) then vvvtij = log(BETA*(trac+ALPHA)/dble(l)) else if (detlog .le. zero) then vvvtij = log(exp(detlog)+BETA*(trac+ALPHA)/dble(l)) else vvvtij = log(one+exp(-detlog)*(BETA*(trac+ALPHA)/dble(l))) * + detlog end if end if end if vvvtij = dble(l)*vvvtij return end double precision function det2mc( n, u, s) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer k, n double precision q, s double precision u(n,*) double precision zero, two parameter (zero = 0.d0, two = 2.d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) det2mc = zero do k = 1, n q = u(k,k)*s if (abs(q) .le. 
zero) then det2mc = -FLMAX return end if det2mc = det2mc + log(abs(q)) end do det2mc = two*det2mc return end subroutine mevvv ( EQPRO, x, n, p, G, Vinv, z, maxi, tol, eps, * mu, U, pro, w, S) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE logical EQPRO integer n, p, G, maxi double precision Vinv, eps, tol c double precision x(n,p), z(n,G), w(p) double precision x(n,*), z(n,*), w(*) c double precision mu(p,G), U(p,p,G), pro(G), S(p,p) double precision mu(p,*), U(p,p,*), pro(*), S(p,*) integer nz, p1, iter, i, j, k, l, j1 double precision piterm, hold, rcmin, rteps double precision temp, term, cs, sn, umin, umax double precision sumz, sum, detlog, const, hood, err double precision prok, tmin, tmax, ViLog, zsum double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision SMALOG parameter (SMALOG = -708.d0) double precision ddot external ddot c------------------------------------------------------------------------------ if (maxi .le. 0) return if (Vinv .gt. zero) then nz = G + 1 ViLog = log(Vinv) else nz = G if (EQPRO) call dcopy( G, one/dble(G), 0, pro, 1) end if piterm = dble(p)*pi2log/two p1 = p + 1 eps = max(eps,zero) rteps = sqrt(eps) tol = max(tol,zero) c FLMAX = d1mach(2) hold = FLMAX/two hood = FLMAX err = FLMAX c zero out the lower triangle do k = 1, G do j = 1, p do l = 1, p S(l,j) = U(l,j,k) end do end do i = 1 do j = 2, p call dcopy( p-i, zero, 0, S(j,i), 1) i = j end do do j = 1, p do l = 1, p U(l,j,k) = S(l,j) end do end do end do iter = 0 100 continue iter = iter + 1 zsum = one do k = 1, G do j = 1, p do l = 1, p S(l,j) = U(l,j,k) end do end do do j = 1, p call dcopy( j, zero, 0, S(1,j), 1) end do call dcopy( p, zero, 0, mu(1,k), 1) sumz = zero do i = 1, n temp = z(i,k) sumz = sumz + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) end do if (.not. EQPRO) pro(k) = sumz / dble(n) zsum = min(sumz,zsum) if (sumz .gt. rteps) then call dscal( p, (one/sumz), mu(1,k), 1) do i = 1, n call dcopy( p, x(i,1), n, w, 1) call daxpy( p, (-one), mu(1,k), 1, w, 1) call dscal( p, sqrt(z(i,k)), w, 1) j = 1 do j1 = 2, p call drotg( S(j,j), w(j), cs, sn) call drot( p-j, S(j,j1), p, w(j1), 1, cs, sn) j = j1 end do call drotg( S(p,p), w(p), cs, sn) end do do j = 1, p call dscal( j, one/sqrt(sumz), S(1,j), 1) end do else call dcopy( p, FLMAX, 0, z(1,k), 1) end if do j = 1, p do l = 1, p U(l,j,k) = S(l,j) end do end do end do if (zsum .le. rteps) then tol = zsum eps = -FLMAX maxi = iter return end if if (Vinv .gt. zero) then term = zero do i = 1, n term = term + z(i,nz) end do temp = term / dble(n) pro(nz) = temp call dcopy( n, ViLog, 0, z(1,nz), 1) if (EQPRO) then temp = (one - pro(nz))/dble(G) call dcopy( G, temp, 0, pro, 1) end if end if rcmin = FLMAX do k = 1, G do j = 1, p do l = 1, p S(l,j) = U(l,j,K) end do end do call absrng( p, S, p1, umin, umax) rcmin = min(umin/(one+umax),rcmin) end do if (rcmin .le. 
rteps) then tol = rcmin eps = FLMAX maxi = iter return end if do k = 1, G do j = 1, p do l = 1, p S(l,j) = U(l,j,K) end do end do c temp = pro(k) detlog = zero do j = 1, p detlog = detlog + log(abs(S(j,j))) end do const = piterm+detlog do i = 1, n call dcopy( p, x(i,1), n, w, 1) call daxpy( p, (-one), mu(1,k), 1, w, 1) call dtrsv( 'U', 'T', 'N', p, S, p, w, 1) sum = ddot( p, w, 1, w, 1)/two c z(i,k) = temp*exp(-(const+sum)) z(i,k) = -(const+sum) end do end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) - tmax if (temp .ge. SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)+tmax) call dscal( nz, (one/sum), z(i,1), n) end do err = abs(hold-hood)/(one+abs(hood)) hold = hood if (err .gt. tol .and. iter .lt. maxi) goto 100 c w(1) = rcmin tol = err eps = hood maxi = iter return end subroutine mevvvp( EQPRO, x, n, p, G, Vinv, * pshrnk, pmu, pscale, pdof, * z, maxi, tol, eps, mu, U, pro, w, S) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE logical EQPRO integer n, p, G, maxi c double precision pshrnk, pmu(p), pscale(p,p), pdof double precision pshrnk, pmu(*), pscale(p,*), pdof double precision Vinv, eps, tol c double precision x(n,p), z(n,G), w(p) double precision x(n,*), z(n,*), w(*) c double precision mu(p,G), U(p,p,G), pro(G), S(p,p) double precision mu(p,*), U(p,p,*), pro(*), S(p,*) integer nz, p1, iter, i, j, k, l, j1 double precision piterm, hold, rcmin, rteps double precision temp, term, cs, sn, umin, umax double precision sum, sumz, detlog, const, hood, err double precision prok, tmin, tmax, ViLog double precision cmu, cgam, rmu, rgam, zsum double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision twolog parameter (twolog = 0.6931471805599453d0) double precision pilog parameter (pilog = 1.144729885849400d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision SMALOG parameter (SMALOG = -708.d0) double precision ddot, dlngam external ddot, dlngam c------------------------------------------------------------------------------ if (pshrnk .lt. zero) pshrnk = zero if (maxi .le. 0) return if (Vinv .gt. zero) then nz = G + 1 ViLog = log(Vinv) else nz = G if (EQPRO) call dcopy( G, one/dble(G), 0, pro, 1) end if piterm = dble(p)*pi2log/two p1 = p + 1 eps = max(eps,zero) rteps = sqrt(eps) tol = max(tol,zero) c FLMAX = d1mach(2) hold = FLMAX/two hood = FLMAX err = FLMAX iter = 0 100 continue iter = iter + 1 zsum = one do k = 1, G do j = 1, p do l = 1, p S(l,j) = U(l,j,k) end do end do do j = 1, p call dcopy( p, pscale(1,j), 1, S(1,j), 1) end do sumz = zero call dcopy( p, zero, 0, mu(1,k), 1) do i = 1, n temp = z(i,k) sumz = sumz + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) end do if (.not. EQPRO) pro(k) = sumz / dble(n) zsum = min(sumz,zsum) if (sumz .gt. 
rteps) then call dscal( p, (one/sumz), mu(1,k), 1) do i = 1, n call dcopy( p, x(i,1), n, w, 1) call daxpy( p, (-one), mu(1,k), 1, w, 1) call dscal( p, sqrt(z(i,k)), w, 1) j = 1 do j1 = 2, p call drotg( S(j,j), w(j), cs, sn) call drot( p-j, S(j,j1), p, w(j1), 1, cs, sn) j = j1 end do call drotg( S(p,p), w(p), cs, sn) end do call dcopy( p, pmu, 1, w, 1) call daxpy( p, (-one), mu(1,k), 1, w, 1) const = sumz+pshrnk temp = (sumz*pshrnk)/const call dscal( p, sqrt(temp), w, 1) j = 1 do j1 = 2, p call drotg( S(j,j), w(j), cs, sn) call drot( p-j, S(j,j1), p, w(j1), 1, cs, sn) j = j1 end do call drotg( S(p,p), w(p), cs, sn) do j = 1, p temp = pdof+sumz+dble(p)+two call dscal( j, one/sqrt(temp), S(1,j), 1) end do call dscal( p, sumz/const, mu(1,k), 1) call daxpy( p, pshrnk/const, pmu, 1, mu(1,k), 1) else call dcopy( p, FLMAX, 0, z(1,k), 1) end if do j = 1, p do l = 1, p U(l,j,k) = S(l,j) end do end do end do if (zsum .le. rteps) then tol = zsum eps = -FLMAX maxi = iter return end if term = zero if (Vinv .gt. zero) then do i = 1, n term = term + z(i,nz) end do temp = term / dble(n) pro(nz) = temp call dcopy( n, ViLog, 0, z(1,nz), 1) if (EQPRO) then temp = (one - pro(nz))/dble(G) call dcopy( G, temp, 0, pro, 1) end if end if rcmin = FLMAX do k = 1, G do j = 1, p do l = 1, p S(l,j) = U(l,j,K) end do end do call absrng( p, S, p1, umin, umax) rcmin = min(umin/(one+umax),rcmin) end do if (rcmin .le. rteps) then tol = rcmin eps = FLMAX maxi = iter return end if rmu = zero rgam = zero do k = 1, G c temp = pro(k) do j = 1, p do l = 1, p S(l,j) = U(l,j,k) end do end do detlog = zero do j = 1, p detlog = detlog + log(abs(S(j,j))) end do rmu = rmu - detlog rgam = rgam - (pdof+dble(p)+one)*detlog const = piterm+detlog do i = 1, n call dcopy( p, x(i,1), n, w, 1) call daxpy( p, (-one), mu(1,k), 1, w, 1) call dtrsv( 'U', 'T', 'N', p, S, p, w, 1) sum = ddot( p, w, 1, w, 1)/two c z(i,k) = temp*exp(-(const+sum)) z(i,k) = -(const+sum) end do end do hood = zero do i = 1, n tmin = FLMAX tmax = -FLMAX do k = 1, nz prok = pro(k) if (prok .eq. zero) then z(i,k) = zero else temp = log(prok) + z(i,k) tmin = min(tmin,temp) tmax = max(tmax,temp) z(i,k) = temp end if end do sum = zero do k = 1, nz if (pro(k) .ne. zero) then temp = z(i,k) - tmax if (temp .ge. SMALOG) then z(i,k) = exp(temp) sum = sum + z(i,k) else z(i,k) = zero end if end if end do hood = hood + (log(sum)+tmax) call dscal( nz, (one/sum), z(i,1), n) end do err = abs(hold-hood)/(one+abs(hood)) hold = hood if (err .gt. tol .and. iter .lt. 
maxi) goto 100 c w(1) = rcmin tol = err eps = hood maxi = iter cmu = dble(p)*(log(pshrnk) - pi2log)/two sum = zero do k = 1, G do j = 1, p do l = 1, p S(l,j) = U(l,j,k) end do end do call daxpy( p, (-one), mu(1,k), 1, pmu, 1) call dtrsv('U','T','N',p, S, p, pmu, 1) sum = sum + ddot( p, pmu, 1, pmu, 1) end do rmu = rmu - pshrnk*sum/two sum = zero term = zero temp = zero do j = 1, p call dcopy( p, pscale(j,1), p, pmu, 1) c call dtrsv('U','T','N', p, U, p, pmu, 1) i = p-j+1 c call dtrsv('U','T','N', i, U(j,j,k),i,pmu(j),1) call dtrsv('U','T','N', i, S(j,j), p, pmu(j), 1) sum = sum + ddot(i, pmu(j), 1, pmu(j), 1) temp = temp + log(abs(pscale(j,j))) term = term + dlngam((pdof+one-dble(j))/two) end do rgam = rgam - sum/two const = -dble(p)*(pdof*twolog+(dble(p)-one)*pilog/two) cgam = (const-pdof*temp)/two-term pdof = (dble(G)*cmu+rmu) + (dble(G)*cgam+rgam) return end subroutine msvvv ( x, z, n, p, G, w, mu, U, pro, S) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, p, G c double precision x(n,p), z(n,G), w(p) double precision x(n,*), z(n,*), w(*) c double precision mu(p,G), U(p,p,G), pro(G), S(p,p) double precision mu(p,*), U(p,p,*), pro(*), S(p,*) integer i, j, k, l, j1 double precision sum, temp, cs, sn double precision zero, one parameter (zero = 0.d0, one = 1.d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) c------------------------------------------------------------------------------ do k = 1, G call dcopy( p, zero, 0, mu(1,k), 1) do j = 1, p c call dcopy( j, zero, 0, U(1,j,k), 1) call dcopy( j, zero, 0, S(1,j), 1) end do sum = zero do i = 1, n temp = z(i,k) sum = sum + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) end do pro(k) = sum / dble(n) if (sum .ge. one .or. one .lt. sum*FLMAX) then call dscal( p, (one/sum), mu(1,k), 1) do i = 1, n call dcopy( p, x(i,1), n, w, 1) call daxpy( p, (-one), mu(1,k), 1, w, 1) call dscal( p, sqrt(z(i,k)), w, 1) j = 1 do j1 = 2, p call drotg( S(j,j), w(j), cs, sn) call drot( p-j, S(j,j1), p, w(j1), 1, cs, sn) j = j1 end do call drotg( S(p,p), w(p), cs, sn) end do temp = sqrt(sum) if (temp .ge. one .or. one .lt. temp*FLMAX) then do j = 1, p call dscal( j, one/temp, S(1,j), 1) end do else do j = 1, p call dcopy( j, FLMAX, 0, S(1,j), 1) end do end if else call dcopy( p, FLMAX, 0, mu(1,k), 1) end if do j = 1, p do l = 1, p U(l,j,k) = S(l,j) end do end do end do return end subroutine msvvvp( x, z, n, p, G, * pshrnk, pmu, pscale, pdof, * w, mu, U, pro, S) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n, p, G c double precision pshrnk, pmu(p), pscale(p,p), pdof double precision pshrnk, pmu(*), pscale(p,*), pdof c double precision x(n,p), z(n,G), w(p) double precision x(n,*), z(n,*), w(*) c double precision mu(p,G), U(p,p,G), pro(G), S(p,p) double precision mu(p,*), U(p,p,*), pro(*), S(p,*) c------------------------------------------------------------------------------ c c x double (input) (n,p) matrix of observations. c z double (input/output) (n,G[+1]) conditional probabilities. c n integer (input) number of observations. c p integer (input) dimension of the data. c G integer (input) number of Gaussian clusters in the mixture. 
c mu double (output) (p,G) mean for each group. c U double (output) (p,p,G) c pro double (output) (G) mixing proportions (used even if equal). c w double (scratch) (max(p,G)) integer i, j, k, l, j1 double precision sumz, temp, cs, sn, const double precision zero, one parameter (zero = 0.d0, one = 1.d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) c------------------------------------------------------------------------------ do k = 1, G do j = 1, p do l = 1, p S(l,j) = U(l,j,k) end do end do do j = 1, p call dcopy( p, pscale(1,j), 1, S(1,j), 1 ) end do sumz = zero call dcopy( p, zero, 0, mu(1,k), 1) do i = 1, n temp = z(i,k) sumz = sumz + temp call daxpy( p, temp, x(i,1), n, mu(1,k), 1) end do pro(k) = sumz / dble(n) if (sumz .ge. one .or. one .lt. sumz*FLMAX) then call dscal( p, (one/sumz), mu(1,k), 1) do i = 1, n call dcopy( p, x(i,1), n, w, 1) call daxpy( p, (-one), mu(1,k), 1, w, 1) call dscal( p, sqrt(z(i,k)), w, 1) j = 1 do j1 = 2, p call drotg( S(j,j), w(j), cs, sn) call drot( p-j, S(j,j1), p, w(j1), 1, cs, sn) j = j1 end do call drotg( S(p,p), w(p), cs, sn) end do call dcopy( p, pmu, 1, w, 1) call daxpy( p, (-one), mu(1,k), 1, w, 1) const = sumz+pshrnk temp = (sumz*pshrnk)/const call dscal( p, sqrt(temp), w, 1) j = 1 do j1 = 2, p call drotg( S(j,j), w(j), cs, sn) call drot( p-j, S(j,j1), p, w(j1), 1, cs, sn) j = j1 end do call drotg( S(p,p), w(p), cs, sn) temp = pdof+sumz+dble(p)+one if (pshrnk .gt. zero) temp = temp + one do j = 1, p call dscal( j, one/sqrt(temp), S(1,j), 1) end do call dscal( p, sumz/const, mu(1,k), 1) call daxpy( p, pshrnk/const, pmu, 1, mu(1,k), 1) else call dcopy( p, FLMAX, 0, mu(1,k), 1) end if do j = 1, p do l = 1, p U(l,j,k) = S(l,j) end do end do end do return end subroutine mvn1d ( x, n, mu, sigsq, hood) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE c integer n integer n double precision mu, sigsq, hood c double precision x(n) double precision x(*) c------------------------------------------------------------------------------ c c x double (input) (n) matrix of observations (destroyed). c n integer (input) number of observations. c mu double (output) mean. c sigsq double (output) variance. c hood double (output) loglikelihood. double precision dn double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision ddot external ddot c------------------------------------------------------------------------------ dn = dble(n) mu = ddot( n, one/dn, 0, x, 1) sigsq = zero call daxpy( n, (-one), mu, 0, x, 1) sigsq = ddot( n, x, 1, x, 1)/dn if (sigsq .eq. 
zero) then hood = FLMAX else hood = -dn*(pi2log + (one + log(sigsq)))/two end if return end subroutine mvn1p ( x, n, pshrnk, pmu, pscale, pdof, * mu, sigsq, hood) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE integer n double precision pshrnk, pmu, pscale, pdof double precision mu, sigsq, hood c double precision x(n) double precision x(*) integer i double precision dn, const, term, temp, xbar double precision cmu, cgam, rmu, rgam double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision ddot, dlngam external ddot, dlngam c------------------------------------------------------------------------------ if (pshrnk .lt. zero) pshrnk = zero dn = dble(n) xbar = ddot( n, one/dn, 0, x, 1) const = pshrnk + dn mu = (dn/const)*xbar + (pshrnk/const)*pmu sigsq = zero do i = 1, n temp = xbar - x(i) sigsq = sigsq + temp*temp end do temp = xbar - pmu sigsq = sigsq + pscale + dn*(pshrnk/const)*(temp*temp) temp = pdof + dn + two if (pshrnk .gt. zero) temp = temp + one sigsq = sigsq / temp if (sigsq .eq. zero) then hood = FLMAX else call daxpy( n, (-one), mu, 0, x, 1) temp = ddot( n, x, 1, x, 1) if (sigsq .lt. one .and. temp .ge. sigsq*FLMAX) then hood = FLMAX return end if temp = temp/sigsq hood = -(dn*(pi2log + log(sigsq)) + temp)/two end if if (pshrnk .gt. zero) then cmu = (pi2log-log(pshrnk))/two term = pdof/two cgam = term*log(pscale/two) - dlngam(term) temp = pmu - mu const = log(sigsq) rmu = -(const - (pshrnk/sigsq)*(temp*temp))/two rgam = -(term+one)*const - (pscale/sigsq)/two pdof = (cmu+rmu) + (cgam+rgam) else pdof = FLMAX end if return end subroutine mnxiip( x, n, p, pshrnk, pmu, pscale, pdof, * mu, sigsq, hood) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE c integer n, p integer n, p c double precision pshrnk, pmu(p), pscale, pdof double precision pshrnk, pmu(*), pscale, pdof double precision sigsq, hood c double precision x(n,p), mu(p) double precision x(n,*), mu(*) integer i, j double precision dnp, scl, temp, term, sum double precision dmudmu, pmupmu, cmu, cgam, rmu, rgam double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision ddot, dlngam external ddot, dlngam c------------------------------------------------------------------------------ dnp = dble(n*p) scl = one/dble(n) do j = 1, p mu(j) = ddot( n, scl, 0, x(1,j), 1) end do sum = zero do i = 1, n do j = 1, p temp = x(i,j) - mu(j) sum = sum + temp*temp end do end do pmupmu = ddot(p,pmu,1,pmu,1) dmudmu = ddot(p,mu,1,mu,1) temp = dmudmu + pmupmu temp = temp - two*ddot(p,mu,1,pmu,1) term = pshrnk + dble(n) scl = (pshrnk*dble(n))/term sigsq = pscale + scl*temp + sum temp = pdof + dble(n*p) + two if (pshrnk .gt. zero) temp = temp + dble(p) sigsq = sigsq/temp call dscal( p, dble(n)/term, mu, 1) call daxpy( p, pshrnk/term, pmu, 1, mu, 1) if (sigsq .eq. 
zero) then hood = FLMAX else sum = zero do i = 1, n do j = 1, p temp = x(i,j) - mu(j) sum = sum + temp*temp end do end do hood = -(sum/sigsq + dnp*(pi2log + log(sigsq)))/two end if if (pshrnk .gt. zero) then dmudmu = ddot(p,mu,1,mu,1) cmu = dble(p)*(log(pshrnk)-pi2log)/two temp = (dmudmu+pmupmu) - two*ddot(p,pmu,1,mu,1) term = log(sigsq) rmu = -(dble(p)*term + (pshrnk*temp)/sigsq)/two temp = pdof/two cgam = temp*log(pscale/two) - dlngam(temp) rgam = -(temp+one)*term - pscale/(two*sigsq) pdof = (cmu+rmu) + (cgam+rgam) else pdof = FLMAX end if return end subroutine mvnxii( x, n, p, mu, sigsq, hood) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE c integer n, p integer n, p double precision sigsq, hood c double precision x(n,p), mu(p) double precision x(n,*), mu(*) integer j double precision dnp, scl double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision ddot external ddot c------------------------------------------------------------------------------ dnp = dble(n*p) scl = one/dble(n) do j = 1, p mu(j) = ddot( n, scl, 0, x(1,j), 1) end do sigsq = zero do j = 1, p call daxpy( n, (-one), mu(j), 0, x(1,j), 1) sigsq = sigsq + ddot( n, x(1,j), 1, x(1,j), 1) end do sigsq = sigsq/dnp if (sigsq .eq. zero) then hood = FLMAX else hood = -dnp*(pi2log + (one + log(sigsq)))/two end if return end subroutine mnxxip( x, n, p, pshrnk, pmu, pscale, pdof, * mu, scale, shape, hood) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE c integer n, p integer n, p c double precision pshrnk, pmu(p), pscale, pdof double precision pshrnk, pmu(*), pscale, pdof double precision scale, hood c double precision x(n,p), mu(p), shape(p) double precision x(n,*), mu(*), shape(*) integer i, j double precision sum, temp, smin, smax double precision term, const, scl double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision SMALOG, BIGLOG parameter (SMALOG = -708.d0, BIGLOG = 709.d0) double precision ddot external ddot c------------------------------------------------------------------------------ temp = one/dble(n) do j = 1, p mu(j) = ddot( n, temp, 0, x(1,j), 1) shape(j) = zero end do do j = 1, p sum = zero do i = 1, n temp = x(i,j) - mu(j) sum = sum + temp*temp end do shape(j) = shape(j) + sum end do term = pshrnk + dble(n) scl = (pshrnk*dble(n))/term do j = 1, p temp = pmu(j) - mu(j) shape(j) = shape(j) + scl*(temp*temp) + pscale end do call dscal( p, dble(n)/term, mu, 1) call daxpy( p, pshrnk/term, pmu, 1, mu, 1) call sgnrng(p, shape, 1, smin, smax) if (smin .le. zero) then call dcopy( p, FLMAX, 0, shape, 1) scale = zero hood = FLMAX return end if sum = zero do j = 1, p sum = sum + log(shape(j)) end do temp = sum/dble(p) if (temp .ge. BIGLOG) then call dcopy( p, FLMAX, 0, shape, 1) scale = FLMAX hood = FLMAX return end if if (temp .le. 
SMALOG) then call dcopy( p, FLMAX, 0, shape, 1) scale = zero hood = FLMAX return end if temp = exp(temp) term = pdof + dble(n) + two if (pshrnk .gt. zero) term = term + one scale = temp/term if (temp .lt. one .and. one .ge. temp*FLMAX) then call dcopy( p, FLMAX, 0, shape, 1) hood = FLMAX return end if call dscal( p, one/temp, shape, 1) const = dble(p)*(pi2log+log(scale)) hood = zero do i = 1, n sum = zero do j = 1, p temp = x(i,j) - mu(j) sum = sum + (temp*temp)/shape(j) end do hood = hood - (const+(sum/scale))/two end do c log posterior computation not yet available pdof = FLMAX return end subroutine mvnxxi( x, n, p, mu, scale, shape, hood) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE c integer n, p integer n, p double precision scale, hood c double precision x(n,p), mu(p), shape(p) double precision x(n,*), mu(*), shape(*) integer i, j double precision dn, scl, sum, temp, smin, smax double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision SMALOG, BIGLOG parameter (SMALOG = -708.d0, BIGLOG = 709.d0) double precision ddot external ddot c------------------------------------------------------------------------------ dn = dble(n) scl = one/dn do j = 1, p mu(j) = ddot( n, scl, 0, x(1,j), 1) shape(j) = zero end do do j = 1, p sum = zero do i = 1, n temp = x(i,j) - mu(j) sum = sum + temp*temp end do shape(j) = shape(j) + sum end do call sgnrng(p, shape, 1, smin, smax) if (smin .le. zero) then call dcopy( p, FLMAX, 0, shape, 1) scale = zero hood = FLMAX return end if sum = zero do j = 1, p sum = sum + log(shape(j)) end do temp = sum/dble(p) if (temp .gt. BIGLOG) then call dcopy( p, FLMAX, 0, shape, 1) scale = FLMAX hood = FLMAX return end if if (temp .lt. SMALOG) then call dcopy( p, FLMAX, 0, shape, 1) scale = zero hood = FLMAX return end if temp = exp(temp) scale = temp/dn if (temp .lt. one .and. one .ge. 
temp*FLMAX) then call dcopy( p, FLMAX, 0, shape, 1) hood = FLMAX return end if call dscal( p, one/temp, shape, 1) hood = -dble(n*p)*(one + pi2log + log(scale))/two return end subroutine mnxxxp( x, n, p, w, * pshrnk, pmu, pscale, pdof, * mu, U, hood) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE c integer n, p integer n, p c double precision pshrnk, pmu(p), pscale(p,p), pdof double precision pshrnk, pmu(*), pscale(p,*), pdof double precision hood c double precision x(n,p), w(p), mu(p), U(p,p) double precision x(n,*), w(*), mu(*), U(p,*) integer i, j, j1 double precision dnp, scl, detlog, sum, term, temp double precision umin, umax, cs, sn, const double precision cmu, cgam, rmu, rgam double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision twolog parameter (twolog = 0.6931471805599453d0) double precision pilog parameter (pilog = 1.144729885849400d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision ddot, dlngam external ddot, dlngam c------------------------------------------------------------------------------ dnp = dble(n*p) scl = one/dble(n) do j = 1, p mu(j) = ddot( n, scl, 0, x(1,j), 1) call dcopy( p, pscale(1,j), 1, U(1,j), 1) end do c mu contains ybar; U contains Cholesky factor of inverse Wishart scale do i = 1, n call dcopy( p, x(i,1), n, w, 1) call daxpy( p, (-one), mu, 1, w, 1) j = 1 do j1 = 2, p call drotg( U(j,j), w(j), cs, sn) call drot( p-j, U(j,j1), p, w(j1), 1, cs, sn) j = j1 end do call drotg( U(p,p), w(p), cs, sn) end do call dcopy( p, pmu, 1, w, 1) call daxpy( p, (-one), mu, 1, w, 1) term = (pshrnk*dble(n))/(pshrnk+dble(n)) call dscal( p, sqrt(term), w, 1) j = 1 do j1 = 2, p call drotg( U(j,j), w(j), cs, sn) call drot( p-j, U(j,j1), p, w(j1), 1, cs, sn) j = j1 end do call drotg( U(p,p), w(p), cs, sn) scl = pdof + dble(n+p+1) if (pshrnk .gt. zero) scl = scl + one scl = one/sqrt(scl) do j = 1, p call dscal( j, scl, U(1,j), 1) end do term = pshrnk + dble(n) call dscal( p, dble(n)/term, mu, 1) call daxpy( p, pshrnk/term, pmu, 1, mu, 1) call absrng( p, U, p+1, umin, umax) c rcond = umin / (one + umax) if (umin .eq. zero) then hood = FLMAX return end if detlog = zero do j = 1, p detlog = detlog + log(abs(U(j,j))) end do const = dble(n)*(detlog + dble(p)*pi2log/two) sum = zero do i = 1, n call dcopy( p, x(i,1), n, w, 1) call daxpy( p, (-one), mu, 1, w, 1) call dtrsv( 'U', 'T', 'N', p, U, p, w, 1) sum = sum + ddot(p, w, 1, w, 1) end do hood = -(const+sum/two) cmu = dble(p)*(log(pshrnk) - pi2log)/two call dcopy( p, pmu, 1, w, 1) call daxpy( p, (-one), mu, 1, w, 1) call dtrsv( 'U', 'T', 'N', p, U, p, w, 1) temp = ddot( p, w, 1, w, 1) sum = zero term = zero do j = 1, p term = term + dlngam((pdof+dble(1-j))/two) call dcopy( p, pscale(j,1), p, pmu, 1) c call dtrsv('U','T','N', p, U, p, pmu, 1) i = p-j+1 c call dtrsv('U','T','N', i, U(j,j),i,pmu(j),1) call dtrsv('U','T','N', i, U(j,j),p,pmu(j),1) sum = sum + ddot(i, pmu(j), 1, pmu(j), 1) end do if (pshrnk .gt. 
zero) then rmu = -(detlog+pshrnk*temp/two) const = -dble(p)*(pdof*twolog+(dble(p)-one)*pilog/two) cgam = (const/two-pdof*detlog) - term rgam = -((pdof+dble(p)+one)*detlog + sum/two) pdof = (cmu+cgam) + (rmu+rgam) else pdof = FLMAX end if return end subroutine mvnxxx( x, n, p, mu, U, hood) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt implicit NONE c integer n, p integer n, p double precision hood c double precision x(n,p), mu(p), U(p,p) double precision x(n,*), mu(*), U(p,*) integer i, j, j1 double precision dn, dnp, scl double precision umin, umax, cs, sn double precision zero, one, two parameter (zero = 0.d0, one = 1.d0, two = 2.d0) double precision pi2log parameter (pi2log = 1.837877066409345d0) double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision ddot external ddot c------------------------------------------------------------------------------ dn = dble(n) dnp = dble(n*p) scl = one/dn do j = 1, p mu(j) = ddot( n, scl, 0, x(1,j), 1) call dcopy( p, zero, 0, U(1,j), 1) end do do i = 1, n call daxpy( p, (-one), mu, 1, x(i,1), n) j = 1 do j1 = 2, p call drotg( U(j,j), x(i,j), cs, sn) call drot( p-j, U(j,j1), p, x(i,j1), n, cs, sn) j = j1 end do call drotg( U(p,p), x(i,p), cs, sn) end do scl = sqrt(scl) do j = 1, p call dscal( j, scl, U(1,j), 1) end do call absrng( p, U, p+1, umin, umax) c rcond = umin / (one + umax) if (umin .eq. zero) then hood = FLMAX else hood = zero do j = 1, p hood = hood + log(abs(U(j,j))) end do hood = -dn*(hood + dble(p)*(pi2log + one)/two) end if c c do j = 1, p c do i = 1, j c x(i,j) = ddot(i,U(1,i),1,U(1,j),1) c if (i .ne. j) x(j,i) = x(i,j) c end do c end do c do j = 1, p c call dcopy( p, x(1,j), 1, U(1,j), 1) c end do return end c Luca: add to check if compile ok subroutine hceee ( x, n, p, ic, ng, ns, io, jo, v, s, u, r) c This function is part of the MCLUST software described at c http://www.stat.washington.edu/mclust c Copyright information and conditions for use of MCLUST are given at c http://www.stat.washington.edu/mclust/license.txt c Gaussian model-based clustering algorithm in which clusters share a common c covariance (shape, volume, and orientation are the same for all clusters). implicit NONE integer n, p, ic(n), ng, ns, io(*), jo(*) c double precision x(n,p), v(p), s(p,p), u(p,p), r(p,p) double precision x(n,*), v(*), s(*), u(*), r(*) c------------------------------------------------------------------------------ c c x double (input/output) On input, the (n by p) matrix containing c the observations. On output, the first two columns c and ns rows contain the determinant and trace of the c sum of the sample cross product matrices. Columns 3 and 4 c contain the merge indices if p .ge. 4 c n integer (input) number of observations c p integer (input) dimension of the data c ic integer (input) (n) Initial partitioning of the data; groups must c be numbered consecutively. c ng integer (input) Number of groups in initial partition. c ns integer (input) Desired number of stages of clustering. c io,jo integer (output [p .le. 3]) If p .lt. 3, both io and jo must be of c length ns and contain the indices of the merged pairs on c output. If p .eq. 3, jo must be of length ns and contains c an index of each merged pair on output. Otherwise io and c jo are not used and can be of length 1.
c v double (scratch/output) (p) On output, algorithm breakpoints; c tells where the algorithm switches from using trace c to trace + det, and from trace + det to det as criterion. c s double (scratch/output) (p,p) On output the first column contains c the initial trace and determinant of the sum of sample c cross product matrices. c u,r double (scratch) (p,p) integer q, i, j, k, l, m, i1, i2, l1, l2 integer ni, nj, nij, lw, ls, lg, ici, icj integer nopt, iopt, jopt, idet, jdet, ndet double precision DELOG double precision ri, rj, rij, dij, tij, zij double precision trc0, trc1, trcw, det0, det1, detw double precision si, sj, siop, sjop, sidt, sjdt double precision dopt, zopt, dijo, tijo, tdet double precision zero, one parameter (zero = 0.d0, one = 1.d0) double precision ddot, detmc2 external ddot, detmc2 double precision FLMAX parameter (FLMAX = 1.7976931348623157d308) double precision EPSMIN parameter (EPSMIN = 1.1102230246251565d-16) c------------------------------------------------------------------------------ i1 = 0 i2 = 0 trcw = 0.d0 tijo = 0.d0 tdet = 0.d0 sjdt = 0.d0 sidt = 0.d0 dijo = 0.d0 ndet = 0 jdet = 0 idet = 0 iopt = 0 nopt = 0 jopt = 0 lw = p*p c call intpr('ic', -1, ic, n) c form scaled column sums call dscal( n*p, one/sqrt(dble(n)), x, 1) si = one/sqrt(dble(p)) sj = si / dble(n) call dcopy( p, zero, 0, v, 1) do k = 1, n call daxpy( p, sj, x(k,1), n, v, 1) end do trc0 = zero call dcopy( lw, zero, 0, r, 1) do k = 1, n call dcopy( p, v, 1, s, 1) call daxpy( p, (-si), x(k,1), n, s, 1) trc0 = trc0 + ddot( p, s, 1, s, 1) call mclrup( (k+1), p, s, r, p) end do det0 = detmc2( p, r) DELOG = log(trc0+EPSMIN) c group heads should be first among rows of x i = 1 j = 2 1 continue icj = ic(j) if (icj .ne. j) goto 2 if (j .eq. ng) goto 3 i = j j = j + 1 goto 1 2 continue k = i m = j + 1 do j = m, n icj = ic(j) if (icj .gt. k) then k = k + 1 call dswap( p, x(k,1), n, x(j,1), n) ic(j) = ic(k) ic(k) = icj end if end do 3 continue c call intpr( 'ic', -1, ic, n) call dcopy( lw, zero, 0, r, 1) q = 1 do j = 1, n i = ic(j) if (i .ne. j) then c update trace and Cholesky factor as if a merge q = q + 2 ni = ic(i) ri = dble(ni) rij = dble(ni+1) sj = sqrt(one/rij) si = sqrt(ri)*sj call dcopy( p, x(i,1), n, v, 1) call dscal( p, sj, v, 1) call daxpy( p, (-si), x(j,1), n, v, 1) trcw = trcw + ddot(p, v, 1, v, 1) call mclrup( q, p, v, r, p) ic(j) = 0 ic(i) = ic(i) + 1 call dscal( p, si, x(i,1), n) call daxpy( p, sj, x(j,1), n, x(i,1), n) c call dcopy( p, FLMAX, 0, x(j,1), n) c update column sum in jth row else ic(j) = 1 end if end do c call intpr('ic', -1, ic, n) trc1 = trcw if (q .lt. p) then detw = -FLMAX else detw = detmc2( p, r) end if det1 = detw ls = 1 lg = ng l1 = 0 l2 = 0 100 continue if (q .ge. p) then c if (.false.) c * call intpr('PART 2 --------------------------', -1, ls, 0) if (detw .lt. DELOG) then goto 200 else goto 300 end if end if dopt = FLMAX do j = 2, lg nj = ic(j) rj = dble(nj) do i = 1, (j-1) ni = ic(i) ri = dble(ni) nij = ni + nj rij = dble(nij) si = sqrt(ri/rij) sj = sqrt(rj/rij) call dcopy( p, x(i,1), n, s, 1) call dscal( p, sj, s, 1) call daxpy( p, (-si), x(j,1), n, s, 1) tij = trcw + ddot(p, s, 1, s, 1) zij = max(tij,EPSMIN) if (zij .le. dopt) then dopt = zij nopt = nij siop = si sjop = sj iopt = i jopt = j call dcopy( p, s, 1, v, 1) end if end do end do trcw = dopt if (ls .eq. ns) goto 900 call dscal( p, siop, x(iopt,1), n) call daxpy( p, sjop, x(jopt,1), n, x(iopt,1), n) if (jopt .ne. 
lg) then call dcopy( p, x(lg,1), n, x(jopt,1), n) ic(jopt) = ic(lg) end if ic(iopt) = nopt x(lg,1) = detw x(lg,2) = trcw if (p .ge. 4) then x(lg,3) = dble(iopt) x(lg,4) = dble(jopt) else if (p .eq. 3) then x(lg,3) = dble(iopt) jo(ls) = jopt else io(ls) = iopt jo(ls) = jopt end if c update the Cholesky factor q = q + 1 call mclrup( q, p, v, r, p) ls = ls + 1 lg = lg - 1 goto 100 200 continue q = q + 1 c call intpr('ic', -1, ic, n) dopt = FLMAX zopt = FLMAX do j = 2, lg nj = ic(j) rj = dble(nj) do i = 1, (j-1) ni = ic(i) ri = dble(ni) nij = ni + nj rij = dble(nij) si = sqrt(ri/rij) sj = sqrt(rj/rij) call dcopy( p, x(i,1), n, v, 1) call dscal( p, sj, v, 1) call daxpy( p, (-si), x(j,1), n, v, 1) tij = trcw + ddot(p, v, 1, v, 1) call dcopy( lw, r, 1, u, 1) call mclrup( q, p, v, u, p) dij = detmc2( p, u) if (dij .le. dopt) then dopt = dij tdet = tij ndet = nij sidt = si sjdt = sj idet = i jdet = j end if if (tij .eq. zero) then zij = -FLMAX else zij = max(tij,EPSMIN) if (dij .eq. (-FLMAX)) then zij = log(zij) else if (dij .le. zero) then zij = log(exp(dij) + zij) else zij = log(one + zij*exp(-dij)) + dij end if end if if (zij .le. zopt) then zopt = zij dijo = dij tijo = tij nopt = nij siop = si sjop = sj iopt = i jopt = j call dcopy( lw, u, 1, s, 1) end if end do end do if (dopt .lt. DELOG) then if (l1 .eq. 0) l1 = ls trcw = tijo detw = dijo call dcopy( lw, s, 1, r, 1) else l2 = ls trcw = tdet detw = dopt siop = sidt sjop = sjdt nopt = ndet iopt = idet jopt = jdet call dcopy( p, x(iopt,1), n, v, 1) call dscal( p, sjop, v, 1) call daxpy( p, (-siop), x(jopt,1), n, v, 1) call mclrup( q, p, v, r, p) end if if (ls .eq. ns) goto 900 call dscal( p, siop, x(iopt,1), n) call daxpy( p, sjop, x(jopt,1), n, x(iopt,1), n) if (jopt .ne. lg) then call dcopy( p, x(lg,1), n, x(jopt,1), n) ic(jopt) = ic(lg) end if ic(iopt) = nopt x(lg,1) = detw x(lg,2) = trcw if (p .ge. 4) then x(lg,3) = dble(iopt) x(lg,4) = dble(jopt) else if (p .eq. 3) then x(lg,3) = dble(iopt) jo(ls) = jopt else io(ls) = iopt jo(ls) = jopt end if ls = ls + 1 lg = lg - 1 if (detw .ge. DELOG) then c if (.false.) c * call intpr('PART 3 --------------------------', -1, ls, 0) goto 300 end if goto 200 300 continue q = q + 1 detw = FLMAX do j = 2, lg nj = ic(j) rj = dble(nj) do i = 1, (j-1) ni = ic(i) ri = dble(ni) nij = ni + nj rij = dble(nij) si = sqrt(ri/rij) sj = sqrt(rj/rij) call dcopy( p, x(i,1), n, v, 1) call dscal( p, sj, v, 1) call daxpy( p, (-si), x(j,1), n, v, 1) call dcopy( lw, r, 1, u, 1) call mclrup( q, p, v, u, p) dij = detmc2( p, u) if (dij .le. detw) then detw = dij nopt = nij siop = si sjop = sj iopt = i jopt = j call dcopy( lw, u, 1, s, 1) end if end do end do c update the trace call dcopy( p, x(iopt,1), n, v, 1) call dscal( p, sjop, v, 1) call daxpy( p, (-siop), x(jopt,1), n, v, 1) trcw = trcw + ddot( p, v, 1, v, 1) if (ls .eq. ns) goto 900 call dcopy( lw, s, 1, r, 1) call dscal( p, siop, x(iopt,1), n) call daxpy( p, sjop, x(jopt,1), n, x(iopt,1), n) if (jopt .ne. lg) then call dcopy( p, x(lg,1), n, x(jopt,1), n) ic(jopt) = ic(lg) end if ic(iopt) = nopt x(lg,1) = detw x(lg,2) = trcw if (p .ge. 4) then x(lg,3) = dble(iopt) x(lg,4) = dble(jopt) else if (p .eq. 3) then x(lg,3) = dble(iopt) jo(ls) = jopt else io(ls) = iopt jo(ls) = jopt end if ls = ls + 1 lg = lg - 1 goto 300 900 continue x(lg,1) = detw x(lg,2) = trcw if (p .ge. 4) then if (iopt .lt. jopt) then x(lg,3) = dble(iopt) x(lg,4) = dble(jopt) else x(lg,3) = dble(jopt) x(lg,4) = dble(iopt) end if else if (p .eq. 3) then if (iopt .lt. 
jopt) then x(lg,3) = dble(iopt) jo(ls) = jopt else x(lg,3) = dble(jopt) jo(ls) = iopt end if else if (iopt .lt. jopt) then io(ls) = iopt jo(ls) = jopt else io(ls) = jopt jo(ls) = iopt end if end if c decode do k = 1, ng ic(k) = k end do m = ng + 1 if (p .ge. 4) then l = m do k = 1, ns l = l - 1 i = int(x(l,3)) ici = ic(i) j = int(x(l,4)) icj = ic(j) if (ici .gt. icj) ic(i) = icj ic(j) = ic(m - k) if (ici .lt. icj) then x(l,3) = dble(ici) x(l,4) = dble(icj) else x(l,3) = dble(icj) x(l,4) = dble(ici) end if end do else if (p .eq. 3) then l = m do k = 1, ns l = l - 1 i = int(x(l,3)) ici = ic(i) j = jo(k) icj = ic(j) if (ici .gt. icj) ic(i) = icj ic(j) = ic(m - k) if (ici .lt. icj) then x(l,3) = dble(ici) jo(k) = icj else x(l,3) = dble(icj) jo(k) = ici end if end do else do k = 1, ns i = io(k) ici = ic(i) j = jo(k) icj = ic(j) if (ici .gt. icj) ic(i) = icj ic(j) = ic(m - k) if (ici .lt. icj) then io(k) = ici jo(k) = icj else io(k) = icj jo(k) = ici end if end do end if l = 2 m = min(p,4) do k = ng, lg, -1 if (k .le. l) goto 950 call dswap( m, x(k,1), n, x(l,1), n) l = l + 1 end do 950 continue x(1,1) = det1 x(1,2) = trc1 v(1) = dble(l1) v(2) = dble(l2) s(1) = det0 s(2) = trc0 return end mclust/NAMESPACE0000644000176200001440000001341713474424342013015 0ustar liggesusersuseDynLib(mclust) # useDynLib(mclust, .registration = TRUE) # Export all names # exportPattern(".") # Import all packages listed as Imports or Depends import("stats", "utils", "graphics", "grDevices") # export(.mclust) export(mclust.options, emControl) export(em, emE, emEEE, emEEI, emEEV, emEII, emEVI, emV, emVEI, emVEV, emVII, emVVI, emVVV, emEVV, emVEE, emEVE, emVVE, emX, emXII, emXXI, emXXX) export(me, meE, meEEE, meEEI, meEEV, meEII, meEVI, meV, meVEI, meVEV, meVII, meVVI, meVVV, meEVV, meVEE, meEVE, meVVE, meX, meXII, meXXI, meXXX) export(sim, simE, simEEE, simEEI, simEEV, simEII, simEVI, simV, simVEI, simVEV, simVII, simVVI, simVVV, simEVV, simVEE, simEVE, simVVE) export(estep, estepE, estepEEE, estepEEI, estepEEV, estepEII, estepEVI, estepV, estepVEI, estepVEV, estepVII, estepVVI, estepVVV, estepEVV, estepVEE, estepEVE, estepVVE) export(mstep, mstepE, mstepEEE, mstepEEI, mstepEEV, mstepEII, mstepEVI, mstepV, mstepVEI, mstepVEV, mstepVII, mstepVVI, mstepVVV, mstepEVV, mstepVEE, mstepEVE, mstepVVE) export(mvn, mvnX, mvnXII, mvnXXI, mvnXXX) export(cdens, cdensE, cdensEEE, cdensEEI, cdensEEV, cdensEII, cdensEVI, cdensV, cdensVEI, cdensVEV, cdensVII, cdensVVI, cdensVVV, cdensEVV, cdensVEE, cdensEVE, cdensVVE, cdensX, cdensXII, cdensXXI, cdensXXX) export(bic, pickBIC, mclustBICupdate) export(mclustLoglik, print.mclustLoglik) S3method("print", "mclustLoglik") export(nVarParams, nMclustParams) export(map, unmap, partconv, partuniq, errorBars) export(mclustModel, mclustModelNames, checkModelName, mclustVariance) export(decomp2sigma, sigma2decomp) export(imputeData, imputePairs, matchCluster, majorityVote) export(mapClass, classError, adjustedRandIndex, BrierScore) export(mclust1Dplot, mclust2Dplot, mvn2plot, surfacePlot, uncerPlot) export(clPairs, clPairsLegend, coordProj, randProj, randomOrthogonalMatrix) export(priorControl, defaultPrior, hypvol) export(hc, print.hc, plot.hc, as.hclust.hc, as.dendrogram.hc) S3method("print", "hc") S3method("plot", "hc") S3method("as.hclust", "hc") S3method("as.dendrogram", "hc") export(hcE, hcEEE, hcEII, hcV, hcVII, hcVVV) export(hclass, randomPairs) export(mclustBIC, print.mclustBIC, summary.mclustBIC, print.summary.Mclust, plot.mclustBIC, summaryMclustBIC, summaryMclustBICn) S3method("print", 
"mclustBIC") S3method("summary", "mclustBIC") S3method("print", "summary.mclustBIC") S3method("plot", "mclustBIC") export(Mclust, print.Mclust, summary.Mclust, print.summary.Mclust, plot.Mclust, predict.Mclust, logLik.Mclust) S3method("print", "Mclust") S3method("summary", "Mclust") S3method("print", "summary.Mclust") S3method("plot", "Mclust") S3method("predict", "Mclust") S3method("logLik", "Mclust") export(densityMclust, plot.densityMclust, dens, predict.densityMclust, cdfMclust, quantileMclust, densityMclust.diagnostic, plotDensityMclust1, plotDensityMclust2, plotDensityMclustd) S3method("plot", "densityMclust") S3method("predict", "densityMclust") export(MclustDA, print.MclustDA, summary.MclustDA, print.summary.MclustDA, plot.MclustDA, predict.MclustDA, cvMclustDA, getParameters.MclustDA, logLik.MclustDA, classPriorProbs) S3method("print", "MclustDA") S3method("summary", "MclustDA") S3method("print", "summary.MclustDA") S3method("plot", "MclustDA") S3method("predict", "MclustDA") S3method("logLik", "MclustDA") export(MclustDR, print.MclustDR, summary.MclustDR, print.summary.MclustDR, plot.MclustDR, plotEvalues.MclustDR, projpar.MclustDR, predict.MclustDR, predict2D.MclustDR) S3method("print", "MclustDR") S3method("summary", "MclustDR") S3method("print", "summary.MclustDR") S3method("plot", "MclustDR") S3method("predict", "MclustDR") export(MclustDRsubsel, MclustDRsubsel_cluster, MclustDRsubsel_classif, MclustDRsubsel1cycle, MclustDRrecoverdir, print.MclustDRsubsel, summary.MclustDRsubsel) S3method("print", "MclustDRsubsel") S3method("summary", "MclustDRsubsel") export(me.weighted, covw, hdrlevels, dmvnorm) export(icl, mclustICL, print.mclustICL, summary.mclustICL, print.summary.mclustICL, plot.mclustICL) S3method("icl", "Mclust") S3method("icl", "MclustDA") S3method("print", "mclustICL") S3method("summary", "mclustICL") S3method("print", "summary.mclustICL") S3method("plot", "mclustICL") export(mclustBootstrapLRT, print.mclustBootstrapLRT, plot.mclustBootstrapLRT) S3method("print", "mclustBootstrapLRT") S3method("plot", "mclustBootstrapLRT") export(MclustBootstrap, print.MclustBootstrap, summary.MclustBootstrap, print.summary.MclustBootstrap, plot.MclustBootstrap) S3method("print", "MclustBootstrap") S3method("summary", "MclustBootstrap") S3method("print", "summary.MclustBootstrap") S3method("plot", "MclustBootstrap") export(as.Mclust, as.Mclust.default, as.Mclust.densityMclust) S3method("as.Mclust", "default") S3method("as.Mclust", "densityMclust") export(as.densityMclust, as.densityMclust.default, as.densityMclust.Mclust) S3method("as.densityMclust", "default") S3method("as.densityMclust", "Mclust") export(clustCombi, print.clustCombi, summary.clustCombi, print.summary.clustCombi, plot.clustCombi, combiPlot, entPlot, combiTree, combMat, clustCombiOptim) S3method("plot", "clustCombi") S3method("print", "clustCombi") S3method("summary", "clustCombi") S3method("print", "summary.clustCombi") export(gmmhd, print.gmmhd, summary.gmmhd, print.summary.gmmhd, plot.gmmhd, gmmhdClusterCores, gmmhdClassify) S3method("print", "gmmhd") S3method("summary", "gmmhd") S3method("print", "summary.gmmhd") S3method("plot", "gmmhd") # deprecated functions export(cv.MclustDA, cv1EMtrain, bicEMtrain) mclust/NEWS.md0000644000176200001440000003622313510361761012670 0ustar liggesusers# mclust 5.4.5 * Fixed warnings in Fortran calls raised by CRAN. # mclust 5.4.4 - Added `classPriorProbs()` to estimate prior class probabilities. 
- Added `BrierScore()` to compute the Brier score for assessing the accuracy of probabilistic predictions.
- Added `randomOrthogonalMatrix()` to generate random orthogonal basis matrices.
- Partial rewriting of `summary.MclustDA()` internals to provide both the classification error and the Brier score for training and/or test data.
- Partial rewriting of `plot.MclustDA()` internals.
- Added `dmvnorm()` for computing the density of a general multivariate Gaussian distribution via efficient Fortran code.
- Added Wisconsin diagnostic breast cancer (WDBC) data.
- Added EuroUnemployment data.
- Fixed mismatches in Fortran calls.
- Bug fixes.

# mclust 5.4.3

- Added website and updated DESCRIPTION with its URL.
- Fixed a bug when checking for univariate data with a single observation in several instances; using `NCOL()` works both for a length-n vector and an n x 1 matrix.
- Fixed a bug when `hcPairs` are provided in the `initialization` argument of `mclustBIC()` (and relatives) and the number of observations exceeds the threshold for subsetting.
- Fixed bugs on axes for some manual pairs plots.
- Renamed `type = "level"` to `type = "hdr"`, and `level.prob` to `prob`, in `surfacePlot()` for getting HDR graphs.
- Fixed a bug in the `type = "hdr"` plot in `surfacePlot()`.
- Fixed a bug in `as.Mclust()`.
- Small changes to `summary.MclustDA()` when `modelType = "EDDA"`, and in general a more compact output.

# mclust 5.4.2

- Added `mclustBICupdate()` to merge the best values from two BIC results as returned by `mclustBIC()`.
- Added `mclustLoglik()` to compute the maximal log-likelihood values from BIC results as returned by `mclustBIC()`.
- Added option `type = "level"` to `plot.densityMclust()` and `surfacePlot()` to draw highest density regions.
- Added `meXXI()` and `meXXX()` to exported functions.
- Updated vignette.

# mclust 5.4.1

- Added parametric bootstrap option (`type = "pb"`) in `MclustBootstrap()`.
- Added the options to get averages of resampling distributions in `summary.MclustBootstrap()` and to plot resampling-based confidence intervals in `plot.MclustBootstrap()`.
- Added function `catwrap()` for wrapping printed lines at `getOption("width")` when using `cat()`.
- `mclust.options()` now modifies the variable `.mclust` in the namespace of the package, so it should work even inside an mclust function call.
- Fixed a bug in `covw()` when `normalize = TRUE`.
- Fixed a bug in `estepVEV()` and `estepVEE()` when parameters contain `Vinv`.
- Fixed a bug in `plotDensityMclustd()` when drawing marginal axes.
- Fixed a bug in `summary.MclustDA()` when computing the classification error in the extreme case of a minor class of assignment.
- Fixed a bug in the initialisation of `mclustBIC()` when a noise component is present for 1-dimensional data.
- Fixed bugs in some examples documenting `clustCombi()` and related functions.

# mclust 5.4

- Model-based hierarchical clustering used to start the EM algorithm is now based on the scaled SVD transformation proposed by Scrucca and Raftery (2016). This change is not backward compatible; previous results can be obtained by issuing the command `mclust.options(hcUse = "VARS")`. For more details see `help("mclust.options")`.
- Added `subset` parameter in `mclust.options()` to control the maximal sample size to be used in the initial model-based hierarchical phase.
- `predict.densityMclust()` can optionally return the density on a logarithm scale.
- Removed normalization of mixing proportions for new models in single mstep.
- Internal rewrite of code used by `packageStartupMessage()`.
- Fixed a small bug in `MclustBootstrap()` in the univariate data case.
- Fixed bugs when both the noise and subset are provided for initialization.
- Vignette updated to include references, startup message, css style, etc.
- Various bug fixes in plotting methods when noise is present.
- Updated references in `citation()` and man pages.

# mclust 5.3 (2017-05)

- Added `gmmhd()` function and related methods.
- Added `MclustDRsubsel()` function and related methods.
- Added option to use subset in the hierarchical initialization step when a noise component is present.
- `plot.clustCombi()` presents a menu in interactive sessions; there is no more need to pass the data for classification plots, since the data are extracted from the `clustCombi` object.
- Added `combiTree()` plot for `clustCombi` objects.
- `clPairs()` now produces a single scatterplot in the bivariate case.
- Fixed a bug in `imputeData()` when a seed is provided. Now if a seed is provided the imputed data matrix is reproducible.
- In `imputeData()` and `imputePairs()` some argument names have been modified to be coherent with the rest of the package.
- Added functions `matchCluster()` and `majorityVote()`.
- Rewrite of print and summary methods for `clustCombi` class objects.
- Added `clustCombiOptim()`.
- Fixed a bug in `randomPairs()` when the number of rows of the input data is odd.
- Fixed a bug in `plotDensityMclust2()`, `plotDensityMclustd()` and `surfacePlot()` when a noise component is present.

# mclust 5.2.3 (2017-03)

- Added native routine registration for Fortran code.
- Fixed lowercase argument PACKAGE in `.Fortran()` calls.

# mclust 5.2.2 (2017-01)

- Fixed a bug in a rare case when performing an extra M step at the end of the EM algorithm.

# mclust 5.2.1 (2017-01)

- Replaced `structure(NULL, *)` with `structure(list(), *)`.

# mclust 5.2 (2016-03)

- Added argument `x` to `Mclust()` to use BIC values from previous computations to avoid recomputing for the same models. The same argument and functionality were already available in `mclustBIC()`.
- Added argument `x` to `mclustICL()` to use ICL values from previous computations to avoid recomputing for the same models.
- Fixed a bug in `plot.MclustBootstrap()` for the `"mean"` and `"var"` plots in the univariate case.
- Fixed uncertainty plots.
- Added functions `as.Mclust()` and `as.densityMclust()` to convert objects to specific mclust classes.
- Solved a numerical accuracy problem in `qclass()` when the scale of `x` is (very) large by making the tolerance `eps` scale dependent.
- Use transpose subroutine instead of non-Fortran 77 TRANSPOSE function in `mclustaddson.f`.
- Fixed `predict.Mclust()` and `predict.MclustDR()` by implementing a more efficient and accurate algorithm for computing the densities.

# mclust 5.1 (2015-10)

- Fixed slow convergence for VVE and EVE models.
- Fixed a bug in orientation for model VEE.
- Added an extra M-step and parameters update in the `Mclust()` call via `summaryMclustBIC()`.

# mclust 5.0.2 (2015-07)

- Added option to `MclustBootstrap()` for using weighted likelihood bootstrap.
- Added a plot method for `MclustBootstrap` objects.
- Added `errorBars()` function.
- Added `clPairsLegend()` function.
- Added `covw()` function.
- Fixed rescaling of mixing probabilities in new models.
- Bug fixes.

# mclust 5.0.1 (2015-04)

- Fixed bugs.
- Added print method for `hc` objects.

# mclust 5.0.0 (2015-03)

- Added the four missing models (EVV, VEE, EVE, VVE) to the mclust family. A noise component is allowed, but no prior is available.
- Added `mclustBootstrapLRT()` function (and corresponding print and plot methods) for selecting the number of mixture components based on the sequential bootstrap likelihood ratio test.
- Added `MclustBootstrap()` function (and corresponding print and summary methods) for performing bootstrap inference. This provides standard errors for parameters and confidence intervals.
- Added `"A quick tour of mclust"` vignette as HTML generated using rmarkdown and knitr. Older vignettes are included as other documentation for the package.
- Modified arguments to `mvn2plot()` to control colour, lty, lwd, and pch of ellipses and mean point.
- Added functions `emX()`, `emXII()`, `emXXI()`, `emXXX()`, `cdensX()`, `cdensXII()`, `cdensXXI()`, and `cdensXXX()` to deal with single-component cases, so calling the em function works even if `G = 1`.
- Small changes to `icl()`: it is now a generic method, with specialized methods for `Mclust` and `MclustDA` objects.
- Fixed a bug for transformations in the initialization step when some variables are constant (i.e. the variance is zero) or one-dimensional data are provided.
- Changed the order of arguments in `hc()` (and all the functions calling it).
- Small modification to `CITATION` file upon request of CRAN maintainers.
- Various bug fixes.

# mclust 4.4 (2014-09)

- Added option for using transformation of variables in the hierarchical initialization step.
- Added `quantileMclust()` for computing the quantiles from a univariate Gaussian mixture distribution.
- Fixed bugs in `summaryMclustBIC()`, `summaryMclustBICn()`, and `Mclust()` so that `z` is returned as a one-column matrix of 1s even in the case of `G = 1`. This avoids errors in some plots.
- Moved pdf files (previously included as vignettes) to `inst/doc` with corresponding `index.html`.

# mclust 4.3 (2014-03)

- Fixed a bug for `logLik.MclustDA()` in the univariate case.
- Added argument `"what"` to the `predict.densityMclust()` function for choosing what to retrieve: the mixture density or the component densities.
- The `hc()` function has an additional parameter to control whether the original variables or a transformation of them should be used for hierarchical clustering.
- Added `"hcUse"` argument in `mclust.options()` to be passed as default to `hc()`.
- Added the storing of the original data (and class for classification models) in the object returned by the main functions.
- Added component `hypvol` to `Mclust` objects, which provides the hypervolume of the noise component when required; otherwise it is set to NA.
- Added a warning when a prior is used and BIC returns NAs.
- Fixed bugs in `summary.Mclust()`, `print.summary.Mclust()`, `plot.Mclust()` and `icl()` in the presence of a noise component.
- Fixed a bug in some plots in `plot.MclustDR()`, which require `plot.new()` before calling `plot.window()`.
- Fixed a bug in `MclustDR()` for the one-dimensional case.
- Corrections to `Mclust` man page.
- Various small bug fixes.

# mclust 4.2 (2013-07)

- Fixed a bug in `sim*()` functions when no observations are assigned to a component.
- `MclustDA()` allows fitting a single-class model.
- Fixed a bug in `summary.Mclust()` when a subset is used for initialization.
- Fixed a bug in the function `qclass()` when ties are present in quantiles, so it always returns the required number of classes.
- Various small bug fixes.

# mclust 4.1 (2013-04)

- Added `icl()` function for computing the integrated complete-data likelihood.
- Added `mclustICL()` function with associated print and plot methods.
- `print.mclustBIC()` now also shows the top models based on BIC.
- Modified `summary.Mclust()` to also return the ICL.
- Rewrite of the `adjustedRandIndex()` function. This version is more efficient for large vectors.
- Updated help for `adjustedRandIndex()`.
- Modifications to `MclustDR()` and its summary method.
- Changed behavior of `plot.MclustDR(..., what = "contour")`.
- Improved plot of uncertainty for `plot.MclustDR(..., what = "boundaries")`.
- Corrected a bug for malformed GvHD data.
- Corrected version of `qclass()` for selecting initial values in the case of 1D data when successive quantiles coincide.
- Corrected plotting of BIC values when only single-component models are fitted.
- Various bug fixes.

# mclust 4.0 (2012-08)

- Added new summary and print methods for `Mclust()`.
- Added new summary and print methods for `densityMclust()`.
- Included `MclustDA()` function and methods.
- Included `MclustDR()` function and methods.
- Included `me.weighted()` function.
- Restored hierarchical clustering capability for the EEE model (hcEEE).
- Included vignettes for mclust version 4 from Technical Report No. 597 and for using weights in mclust.
- Adoption of GPL (>= 2) license.

# mclust 3.5 (2012-07)

- Added `summary.Mclust()`.
- New functions for plotting and summarizing density estimation.
- Various bug fixes.
- Added `clustCombi()` and related functions (code and doc provided by Jean-Patrick Baudry).
- Bug fix: variable names lost when G = 1.

# mclust 3.4.11 (2012-01)

- Added `NAMESPACE`.

# mclust 3.4.10 (2011-05)

- Removed intrinsic gamma.

# mclust 3.4.9 (2011-05)

- Fixed `hypvol()` function to avoid overflow.
- Fixed `hypvol()` help file value description.
- Removed unused variables and tabs from source code.
- Switched to intrinsic gamma in source code.
- Fixed default warning in `estepVEV()` and `mstepVEV()`.

# mclust 3.4.8 (2010-12)

- Fixed output when G = 1 (it had NA for the missing `z` component).

# mclust 3.4.7 (2010-10)

- Removed hierarchical clustering capability for the `EEE` model (hcEEE).
- The R 2.12.0 build failed due to a 32-bit Windows compiler error, forcing removal of the underlying Fortran code for hcEEE from the package, which does not contain errors and compiles on other platforms.

# mclust 3.4.6 (2010-08)

- Added description of the parameters output component to the `Mclust` and `summary.mclustBIC` help files.

# mclust 3.4.5 (2010-07)

- Added `densityMclust()` function.

# mclust 3.4.4 (2010-04)

- Fixed bug in covariance matrix output for EEV and VEV models.

# mclust 3.4.3 (2010-02)

- Bug fixes.

# mclust 3.4.2 (2010-02)

- Moved CITATION to inst and used standard format; BibTeX entries are in inst/cite.
- Fixed bug in handling missing classes in `mclustBIC()`.
- Clarified license wording.

# mclust 3.4.1 (2010-01)

- Corrected output description in `mclustModel` help file.
- Updated mclust manual reference to show revision.

# mclust 3.4 (2009-12)

- Updated `defaultPrior` help file.
- Added utility functions for imputing missing data with the mix package.
- Changed the default maximum number of mixture components in each class from 9 to 3.

# mclust 3.3.2 (2009-10)

- Fixed problems with \cr in `mclustOptions` help file.

# mclust 3.3.1 (2009-06)

- Fixed `plot.mclustBIC()` and `plot.Mclust()` to handle `modelNames`.
- Changed "orientation" for VEV, VVV models to be consistent with R `eigen()` and the literature.
- Fixed some problems, including documentation, for the noise option.
- Updated the `unmap()` function to optionally include missing groups (see the sketch below).
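A minimal usage sketch, not part of the package sources, illustrating the `map()`/`unmap()` pair mentioned in the item above; the `groups` argument name is assumed from the change note.

```r
library(mclust)

cl <- c(1, 1, 3, 3, 3)           # a classification in which group 2 never occurs
z  <- unmap(cl, groups = 1:3)    # indicator matrix keeping a column for the empty group 2
colSums(z)                       # 2 0 3: the missing group is retained
map(z)                           # back to the original labels: 1 1 3 3 3
```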
# mclust 3.3 (2009-06)

- Fixed bug in the `"errors"` option for `randProj()`.
- Fixed boundary cases for the `"noise"` option.

# mclust 3.2 (2009-04)

- Added permission for CRAN distribution to LICENSE.
- Fixed problems with help files found by the new parser.
- Changed PKG_LIBS order in src/Makevars.
- Fixed `Mclust()` to handle sampling in the data expression in the call.

# mclust 3.1.10 (2008-11)

- Added `EXPR =` to all switch functions that didn't already have it.

# mclust 3.1.9 (2008-10)

- Added `pro` component to parameters in `dens()` help file.
- Fixed some problems with the noise option.

# mclust 3.1.1 (2007-03)

- Default seed changed in `sim*()` functions.
- Added model name check to various functions.
- Otherwise backward compatible with version 3.0.

# mclust 3.1 (2007-01)

- Most plotting functions changed to use color.
- `Mclust()` and `mclustBIC()` fixed to work with `G = 1`.
- Otherwise backward compatible with version 3.0.

# mclust 3.0 (2006-10)

- New functionality added, including conjugate priors for Bayesian regularization.
- Backward compatibility is not guaranteed since the implementation of some functions has changed to make them easier to use or maintain.

mclust/data/ [directory]
mclust/data/Baudry_etal_2010_JCGS_examples.rda [binary R data file; payload omitted]
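The `data/` entries listed here ship as binary `.rda` files whose payloads are omitted. A minimal sketch, not taken from the package sources, of loading one of the bundled data sets and fitting a mixture-based density estimate; it assumes `acidity` is a univariate numeric vector.

```r
library(mclust)

data(acidity)                    # load the bundled data set (assumed univariate)
dens <- densityMclust(acidity)   # Gaussian finite mixture density estimate, model chosen by BIC
summary(dens)                    # selected model and number of components
plot(dens, what = "density", data = acidity)  # density curve overlaid on the data
```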
mclust/data/acidity.rda [binary R data file; payload omitted]
mclust/data/thyroid.rda [binary R data file; payload omitted]
mclust/data/chevron.rda [binary R data file; payload omitted]
p)ŠÝ¥…½¦«Ü3PÿØ‹uÀÍÆOáåÑ·A.‘Ê>ÈQ·~a½2bXßOx*ýýÆy%毡Ð+ñ@ò/å}É,ÿűo.¬ü†Ï\Wb¿…n‡ß"¼U1¢üAóüԹߚš3æc³åw'øG1ºä$âœð^VAxnЪ٩ˆCuñ-Ÿ"ÉXzXY· º½ëºW§í=Á½‚åó`6"[qN㘹«PßÄJ5·…¬Ü¯Ì—µe½s°ï<¹Ïý¦s;ï2 ý–ÑêÍ ìˈë·{d}ÔgÙ/]Œ§‘g¬Ð‰[ù%ˆw2F§bþêÔoždË%îl~Ólœ²þ±à0æ{þkæeá¹ìŒ-Aȳ—³iS Ìc~ÜÃü&©Eýrñèïµþx¿»fS¬;FØLo£¼ß#î m„qôàj"æ{Ú~û‘OUWq¨üó}M”+ÍMs~$Fµý2¸•ݦ¢WxZå ÿ¶Ç û õ}&ÇZÂΦûç㽚ø¶ A ßã•ý º *ë2 ‡»ã9\“ËÝ”}$Ñ[Û!NOí÷Ùýäù÷'…ѾO0Î5,Ôˆ¿SJ*ÞÀ~æäÅËz#^JŸe÷þ$¶ŽÍw¼ŸãùÞþü¿†» ÄO>³Ã?Âd~«9œöZ›Tˆ”öãÒtä]ôG<‚£ZŸiX·ª•éÔó‹?~#øÍ|Ûÿ ókÝæ6J9#ÌIAœ•5qÛuÌC3tûhi³Å ~4£¼N—«ä}^F?„¯HnŽý3šZ†½ûÏþþ#ʾ²§ï¼àŠ*ebÝ0½Ú…xŽÃŽg©ˆû™ ›0Ÿ eßü„½…5×}WÞÙ]¹âè̘¬àzè:‡FØßž`öƒ½ŒvÞ>x ÖMëöËk£ø‰ÂœÆkg4`yŒ_•za 桺š7®}ÇÙ€ØÏu,< ñsÊ©éqÛ`o%s ð~²Þ¾Ê}%Žõ™Œ÷XÜëVìñhãòÅí¨ñMX†¸K{ û^¼+ñ7i&‚¸..¸ãÐʺ´]'ä›Òw­Cüåz=áµò÷ø{ðñ‡ýá/~k ¾‡î[ñí³Öõ:ì)ÁúEú_½¡Çìk ~àTŒ}±ú1qs±o/kÉ45òÑ˾¯ÏUòùèŸÔͧÏb=*ÑbJìZÛèóKð5fÄ^3äg=k¨Šò&{åiñ~Šã‘7ëgëó7Ö‚}e-Žü|êë@ûÐ<ïDoQa(ÊÛiC;Ì+ŸÙÕÂû™'žÃû™´}‚y’ŸÅde¿çã ÙÈ JËk9)ï Ü-i¥ÌoÿŽŽÀþF¯úD|-vÓA“åZ]Fœ Ù#VF|ºöèoÄ?5G¬Râmañ-Ì{²˜¿×¢]†]¹Xªìã9^ö ËK-˜8L½ÈõOµe.£SCË.øbŸ^yûV w‹^`_ObaòÔ[Š}—`ý7@¸^÷2ü‰ÜjJ_”W“=Àå3Ö…kht¨¿v蛪¨ìÙ¶ƒ„ÎÝÝñQð×]µào¤¯<àwOܳÃþѺqcÆ)û¹Ù±Ø¢Ï[³ íã±xÞ$e½ì¡u»ùÊ>ÈáÍÁ“„ê¯=‘çÈhd£êƒvvºwûÅbŲû¨gý–]0Ž-®Ùà©ÌSÃO ?—Þ(&GÉ›ivêá’/i±/eë£(ìûÐÆ}9y;åK†ò¾NZî̯R^˜·ƒÿP/vž[vVù6ÚAÛ/ØkŒãÕ/’±¾ªqóçƒæš_Æ>Vw~ô3¬+%ß•c¾œ¶$Pá¿Í™Dp›+j–«Ä–wã=°à[§7c&êÔTyqåÓ³ [sh]ì—ÍŒhÑ ÜÎ;Yqmúµ«±¯*üKA÷®Ê|º{¯°K!«æisÏ‚§j¯[·o§}w¹ˆ8þpówÄ%õÚ¦ÞÅQ;ºï3¼7ZdûVy°QûeØç¶E‰ôg*ÜÆ|'2¶‰ ÜL¸—žf¸MðGÜ‘¹¦…û št\éŽügZ\Õ·íÑní~ÅÃ޺ݰD¾)½´Û=ܧ›¸o-ü@]÷A²ƒ²1ì Þ×ñ?yQ‹õ=í¼Ý·°_/~ÂB´›®Í‹ÈkKú¼À~RŸ«ûa¼Ù-:°^ù÷$,ŽU‡}¦..YûSyϱîkøw>¿‘ÇJ𚊸"69ÆùBMÔÐCŠ?Þ—¹~"~Ú´kØoÖºÔ ûåš èœ®¬óê…¢]ë• ³Eܟ𣽼tì¦áa¿s¶o7Œ/ÿ¯¶‡Ò`÷ê¹_Ì–b\¯.Ý¿›¼ûï˜k–Ž=ªì'ߟ^y!ÇMÕ휕÷ 2m`Gáíu!ˆ'5-GõAœ§[‘¾e¾Üð¦=¸5cÖåýâ¢z»]?>Ö«§’W«Z£ÞS­¿âc<úEû¦Ü¬X%ή´þ+öÙhKĵUŽÊþbÍêŽJÞf`Ieؽí£kk0Ž“Ô§”÷:u ŽýpG»·*Y »ÍúTÆ¡ê=iÿZÙìsñxÕ§ç•ýñ½’CoGÅeŸÅúyÄÑË0ÿÎÌãz"®jPxÈRy/{ézÌ‹¤Ý³N(¼ð€}€‰b÷ƒç<Ï>%?Eù½Çõ¾¬ì'™›?ÿîEZN´ ãÌ{^qÔÛ'»Ûtô¯öcn>ÆMˆÓ)ŸYJæ^;ðÊsþì*Êû«§·.„}7¾Ó¤™²ßËqÐŒ—tûe1° «ŽøL›{ ñÞÿó)=²»·ÿ÷ŸO)ÿ¿ÿM•ÿ\”øïÉ ÿú­eŸžZ™üÞBýïªO˜OØO¸OøO„OÄO¤Oäÿ91WyÏTÆ3ÆxÆÏ8ão<Œg¢ñL2ž5£cÔ`ŒŒQƒ1j0F ƨÁ5£cÔ`¬Qƒ5j°F Ö¨Á5X£kÔ`¬Qƒ3jpF ΨÁ58£gÔàŒœQƒ3jpF Þ¨Á5x£oÔà¼Qƒ7jðF Þ¨Á5£†`ÔŒ‚QC0jF Á¨!5£†`Ô¢QC4jˆF Ѩ!5D£†hÔ¢QC2jHF ɨ!5$£†dÔŒ’QC2jHF Ù¨!5d£†lÔ²QC6jÈF Ù¨!ÿ«Q^åéI§*:e蔥SŽNy:èT¤S‰NIMEj*RS‘šŠÔT¤¦"5©©HMEj*RcH!5†ÔRcH!5†ÔRcH!5–ÔXRcI%5–ÔXRcI%5–ÔXRãH#5ŽÔ8RãH#5ŽÔ8RãH#5žÔxRãI'5žÔxRãI'5žÔxRHM 5ÔRHM 5ÔRHM 5‘ÔDRIM$5‘ÔDRIM$5‘ÔDR“HM"5‰Ô$R“HM"5‰Ô$R“HM"5™ÔdR“IM&5™ÔdR“IM&5™Ôˆ% ±„!–0ĆXÂKb C,aˆ% ±„!–0ĆXÂKb C,aˆ% ±„!–0ĆXÂKb C,aˆ% ±„!–0ĆXÂKb C,aˆ% ±„!–0ĆXÂKb C,aˆ% ±„!–0ĆXÂKb C,aˆ% ±„!–0ĆXÂKb C,aˆ% ±„!–0ĆXÂKb C,aˆ% ±„!–0ĆXÂKb C,aˆ% ±„!–0ĆXÂKb C,aˆ% ±„!–0ĆXÂKb C,aˆ% ±„!–0ĆXÂKb C,aˆ% ±„!–0ĆXÂKXb K,a‰%,±„%–°Ä–XÂKXb K,a‰%,±„%–°Ä–XÂKXb K,a‰%,±„%–°Ä–XÂKXb K,a‰%,±„%–°Ä–XÂKXb K,a‰%,±„%–°Ä–XÂKXb K,a‰%,±„%–°Ä–XÂKXb K,a‰%,±„%–°Ä–XÂKXb K,a‰%,±„%–°Ä–XÂKXb K,a‰%,±„%–°Ä–XÂKXb K,a‰%,±„%–°Ä–XÂKXb K,a‰%,±„%–°Ä–XÂKXb K,a‰%,±„%–°Ä–XÂKXb K,a‰%±„#–pÄŽXÂK8b G,áˆ%±„#–pÄŽXÂK8b G,áˆ%±„#–pÄŽXÂK8b G,áˆ%±„#–pÄŽXÂK8b G,áˆ%±„#–pÄŽXÂK8b G,áˆ%±„#–pÄŽXÂK8b G,áˆ%±„#–pÄŽXÂK8b G,áˆ%±„#–pÄŽXÂK8b G,áˆ%±„#–pÄŽXÂK8b G,áˆ%±„#–pÄŽXÂK8b G,áˆ%±„#–pÄŽXÂK8b G,áˆ%±„#–pÄŽXÂK8b G,áˆ%±„#–pÄŽXÂKxb O,á‰%<±„'–ðÄžXÂKxb O,á‰%<±„'–ðÄžXÂKxb O,á‰%<±„'–ðÄžXÂKxb O,á‰%<±„'–ðÄžXÂKxb O,á‰%<±„'–ðÄžXÂKxb O,á‰%<±„'–ðÄžXÂKxb O,á‰%<±„'–ðÄžXÂKxb O,á‰%<±„'–ðÄžXÂKxb O,á‰%<±„'–ðÄžXÂKxb O,á‰%<±„'–ðÄžXÂKxb O,á‰%<±„'–ðÄžXÂKxb O,á‰%<±„'–ðÄžXÂKxb O,á‰%±D –ÄX"Kb‰@,ˆ%±D –ÄX"Kb‰@,ˆ%±D –ÄX"Kb‰@,ˆ%±D –ÄX"Kb‰@,ˆ%±D –ÄX"Kb‰@,ˆ%±D –ÄX"Kb‰@,ˆ%±D –ÄX"Kb‰@,ˆ%±D –ÄX"Kb‰@,ˆ%±D –ÄX"Kb‰@,ˆ%±D –ÄX"Kb‰@,ˆ%±D –ÄX"Kb‰@,ˆ%±D –ÄX"Kb‰@,ˆ%±D 
–ÄX"KDb‰H,‰%"±D$–ˆÄ‘X"KDb‰H,‰%"±D$–ˆÄ‘X"KDb‰H,‰%"±D$–ˆÄ‘X"KDb‰H,‰%"±D$–ˆÄ‘X"KDb‰H,‰%"±D$–ˆÄ‘X"KDb‰H,‰%"±D$–ˆÄ‘X"KDb‰H,‰%"±D$–ˆÄ‘X"KDb‰H,‰%"±D$–ˆÄ‘X"KDb‰H,‰%"±D$–ˆÄ‘X"KDb‰H,‰%"±D$–ˆÄ‘X"KDb‰H,‰%"±D$–ˆÄ‘X"KDb‰H,‰%"±D$–ˆÄ‘X"KDb‰H,‰%±D"–HĉX"K$b‰D,‘ˆ%±D"–HĉX"K$b‰D,‘ˆ%±D"–HĉX"K$b‰D,‘ˆ%±D"–HĉX"K$b‰D,‘ˆ%±D"–HĉX"K$b‰D,‘ˆ%±D"–HĉX"K$b‰D,‘ˆ%±D"–HĉX"K$b‰D,‘ˆ%±D"–HĉX"K$b‰D,‘ˆ%±D"–HĉX"K$b‰D,‘ˆ%±D"–HĉX"K$b‰D,‘ˆ%±D"–HĉX"K$b‰D,‘ˆ%±D"–HĉX"K$b‰D,‘ˆ%±D"–HĉX"Kdb‰L,‘‰%2±D&–ÈÄ™X"Kdb‰L,‘‰%2±D&–ÈÄ™X"Kdb‰L,‘‰%2±D&–ÈÄ™X"Kdb‰L,‘‰%2±D&–ÈÄ™X"Kdb‰L,‘‰%2±D&–ÈÄ™X"Kdb‰L,‘‰%2±D&–ÈÄ™X"Kdb‰L,‘‰%2±D&–ÈÄ™X"Kdb‰L,‘‰%2±D&–ÈÄ™X"Kdb‰L,‘‰%2±D&–ÈÄ™X"Kdb‰L,‘‰%2±D&–ÈÄ™X"Kdb‰L,‘‰%2±D&–ÈÄ™X"Kdb‰L,‘‰%2±D&–ÈÄ™X"Kdb‰L,‘,±Pya‚s•É9crΚœs&ç¼É¹`r.šœK&ç&º*]•‰®ÊDWe¢«2ÑU™èªLtU&º*]•‰.c¢Ë˜è2&ºŒ‰.c¢Ë˜è2&ºŒ‰.c¢Ë˜è²&º¬‰.k¢Ëšè²&º¬‰.k¢Ëšè²&º¬‰.g¢Ë™èr&ºœ‰.g¢Ë™èr&ºœ‰.g¢Ë™èò&º¼‰.o¢Ë›èò&º¼‰.o¢Ë›èò&º¼‰®`¢+˜è &º‚‰®`¢+˜è &º‚‰®`¢+˜èŠ&º¢‰®h¢+šèŠ&º¢‰®h¢+šèŠ&º¢‰®d¢+™èJ&º’‰®d¢+™èJ&º’‰®d¢+™èÊ&º²‰®l¢+›èÊ&º²‰®l¢+›èÊ&º&¼R™ðJeÂ+• ¯T&¼Â[e7~ùßÿõÿ>µÂÿ1µU‡>ÙÝÛÿÏÿÿô×ÿ‘~mèä†mclust/data/EuroUnemployment.rda0000644000176200001440000000120413510412701016500 0ustar liggesusers‹]SënQÞ–rU‰µ^ªFé…Ö` ÖR…˜¶èEl(IûóNaÃîYrØ-â/_C_Ãðô)LLüíIã.ç›…r’Ý93óÍ|3luçd=q’Ð4mZ E¼wÈ»ÎL{¯)mF‹{öfÉ•v]p«kÚ‹ GÓBs^Ü&½'¥/jã¯~óÏw}þ|xô°·‘ôÉ?ŸõUØ×U¼›FüEEõÉ ~í¾Š¯ ¿‚ø=øÁK>ñÜn üÏrã?„]Fþ)ì"l õóó_‡]V|ÁþyôËcŽç°Äi¾,êó?UßäÀWÀžå¯ W:Uþæo…Ë—ƒ>/Q¿9Ÿ Oº‘žÄ·‰~̱ú7_пÃ+à×`ÓÀ“.ëµÿ,éŠ|XÙâ?ħOÐÿzäÀ—? \u© ýnÁΩxñ/ð7®òê15\ñæN‚wõøwÐ ó^ªxP‡¹‹?À÷ ýƒõ¯|'aÁ,Þó?˜áÇ¡‚¡ãz•®§£kÅ»NT7LÖ£j &šÌaÙsé5ž€Ç¥ÝÏŽ¦ˆnq³e¸ÜØ–k¶˜4üäöÞh/Ty×=3ípa1Ù!w—K‹‰¹¥žc‹ Et_r“‰&ÜÈ®ä¼ÁiZ—‚ReÉDŠnK›9A›ð¾ÃL¢ˆlºÒí‘WaÎEŒW §í²Ñ‰Šûž[g¶+[ÔzÏÞ–Ô-üŽ™¡¯r§Í¥?1õnº=g¤IäÈÛ'vdKÇm1“ÀUÛ#ÕLû‚Oø¬3R§lˆquj}Þä$H². ‡7Þ¢Õ´­@ÐÆAmÙg2Ç®ìðw»ôŸÿ)šè¢¯mclust/data/GvHD.rda0000644000176200001440000033717213510412702013762 0ustar liggesusers‹tý9lÜë•.üÖwιûûpïÏqà àÛ0Œ†aÐî¶½=מµgjž¥¢FŠ¢Ä"©y,iÛn‡ }2†2ì°B‡ ;dØ!ÃŽ.®Ïõÿ·Öc¹nõ–Ȫÿÿ}×ð¬çYïz}xæÇÿ×™ÿk4ý—Ñýþåÿþ׿ü¿ÿí¿üåÿü£ÿ6ú?ÿòçÿë“G>üáÕÍ»÷¶77F£ÿúÿ¯øËÿþ÷ÑèüÏÉÚÁèýÏd{ú×?ONþúç?þúç{ó¿þùpøóÌðï·†2üÞ£Ù_ÿÜÚÿëŸ;‡ýóþÒ_ÿ|gøùéð}§†Ïûjøóâðïgýóèð=÷†Ï½;üýíñð¼ËýóÅ›áù†Ï=:|ÎÍáÏÙðû¯†ç¸·û×?7†_þ|4üýƒ½¿þ¹<üýúðçñIÿóÑðý¯†ÏßÖáùð'†çŸù럗†ßÛ>ÿ¾çþûÞðï›Ãs¼7|ïƒá¹ ?¿æý‡ïÎû÷î ?¿=üÜùás2üüæðï?¾çÙR·ƒíáçïÌû>>>okØß†ß÷çžÙ¿aî/÷ý`OχÏݾ÷üðó«ÃŸ§‡Ïy6¬Óƒáïß¾gkøœïÿ}ø¹—ûý9üüóiÿëåÏgÃ:Ÿþ}6üùpeX·ás.~ÅN¦ÝŽßû`øóñð9¯þ4|îð{›ó¾ÏÏÆÝ/¶†ï½ØÎ÷þ|øóîAß×sÃï?þ}gøû‡ÿ~1¬ÓöÁÛ÷Ý÷Üþýž?‡¿ŸÏúºÞÞó¼ý~nmÒíôʸ¿×­ÝîW÷†ç::¬×öR+¯¶‡ŸûtÞã×GóîoõßÖsøœß Ï{N|aWƒœ>ïþ¸¯ýà×ìßs¾¾çîðœ[ÃϽz·ÛÕJø×ižûÃçÜô~ö{øïÓaÿkÓn7­×ð\†¿?6ïï¹:ü÷Óáç ?ÿ­Q~ï]v1üþ«a]Ö†?w&=®¯Oúþx®»Ýß6†8ü÷‰á÷/¿ÂN†¿¿ñ@Þ°¾;oâ{‡¿Ÿ?ÿåð絃îçñwÖïŠÏ¿†uº=ü÷gß§&Ýÿï¹9ìÏéX—GÃç<˜ö8/þmGÜÝþ}6üÞ½åÝ󛼺:üû‡òî¢ûãÎr÷—;ÃïoÔ×ùÞJÇ—‡Ï¹1<×ʨ?×þ0üý„ý ÿ~uøï{‡=Žùþìï¼ÇÏûÖeØ¿­YCïÏ{ÜÚþ{gX·3£þ¼ŸzÜy²Ü?ŸŸˆ›÷¦=~m û|fÞíÙ>[·•À?üIþz<|ÎÏF=¾] ÿX_ôxÁ¯½ÇËa¯/:n¸=|þÝÀ{ów;x<<ïSypxßoÖÿ™w»´ÏðÐñ`Ñýáþ¬ã¤‡ýßÙEå'ï=ëßë{ì÷Ëá¿gÃs¼˜ö¼#lî’ÏNúú=÷¼öÆ®žÉïþ~°¯¯‡Ïy%,zþy6üýúADZð)¿„øürLüÚíqž½L½ß°n?u\-Î\>>îñ_<»ùf=p…8¹þ&>ù=8žØžïҼǯkfoº}˃üX>x6¬Ïê¸ï‡ümÙ)ÿy9éëÏÏò‹wúºÍ¿°#ûûp¿ûåÍáç~= ÁŽÕ!ãŸ.÷ºN‡ï³wëtØŸ}<GüXê~¶ñ‹zß¹ú'â <Ͼ؃çÚ˜vÿ}<é8é¡õ=ÒëÊÇoz½¶5Šõc÷“ÀQÖk¹ã“Û=«‡àšGó÷­Ó½yÏ/GWªw®Lº=_¾ïù°ó¥¾îÖõé›žßØå£•¾÷#‰ƒŸ ÏwV]·èyð“QÇð=>êõÊøeÖí^|›îöø!~ËW[±Ž¯Þtk¾×óâé¨sá±5|Ó¨×ÝòÁE_¿ª–zÜcÇðÄÕÝŽãÄ[ùNÝ)^Ýwœí¹Õ¥Wv»žæ?øëµè|ÅwüýnvýëˆÇÿm-õ|8ÓíA=|gÒíèW£Î7ˆ«â²çÂsà/ðoz=|.êHu?Ü%‰sÅÃëðYäqW\{2¬ç…yǽžGÞRžœG}·ßñù½øSÞ~²Ôó‘ø#³yD€;¼??û¦¾œw˜_ãsÔCì>^>çѤç{x¯úè ÛQ½÷nÿž3ðÜðsχç>7ê8åù¢çµkÓþïò'þHþ`·×5žîçϯì÷~7ðþzŠXéõ0üˆ·Ø ¾ ¤Þ~ýíþß[»ápvŸÁ¥õƒ=|>ïv†g~õ‘¼$noF½ºqÝû_‹º¡®ûbÔßKܧ­;;ùà^Ô³y×#ð ìU½Š7¶ŸöA¼’WØ»¼ W[ôxRq{øýõ8Ã_o Ÿ÷ø°û 
ܽ³Òëmßw<øÐóó^/Š·xª“—ÖFý÷ÄÍÛ‹çä#<žr>¬û‚/÷ß§=¯~êï—ú~á5®.Pg¨[į­È—¯ç½®S·[ø™ñt¥ÿ÷íI×}à7þ~eÞýK<ú2ô'ñ9ùh|‘xõÛQ¯«ùÁÓåont»Ã¯Ïv{ýä÷ñå/àE·_vðeØÅ½¨+ø»úð~è>p¾Œ¿>öçUä¹äÑÅïû½Ý^ïã‹äëúÏ£^óõ0^ò«À­öÁ÷>šF=~þ,žã^àÛ=®ááì×ü°ÛxÇÀ%â4ÀΟ¿^éùHý@¸2|ÞÆAÇÇò)|ymÔë vúñ¼ÿ>žÙzª'î ëö8âÐ$xt¸òzðìêz~]ö9íïÍÞà5ño|–¸o?é¹ò;>évê{½®´/žƒþ!>‹—Gèdÿ_÷8Á.žìv;|½Üyõ΋áß¿>÷jð(Óq·;xb}Ü÷‰ÎŸ~º¹zïÙ~Ïÿ/Æ=_^›÷<;ð·Wot{„Ïf˽.`pÜ/~È“ð½ÅûÁ[¥_-¿]ÏN}À~ϦÈÛç"ÿ«¯àE|¼~‰ÇãŽoÔÉâ¼€GâpÐÏB¤Ÿ< ìƒÈ“x³¯‡ÁéWB?ÀKÑ?ìSê3_ötÚó(œÂ~EýzyÒyXë‹Ç:ºŸ:èõ;ÝžÅ=uHñŸ{½Å©»¼þãG£à=|çÄZ7q¾¶~â£:ŒNFó=úì3ž ÿ–û1ÿ×ÁO:Lï²>·£.ùuÄq|˜Ÿg?Ö ýt7Öu¼JðŸø»w£>|8ØùéEç¯Äoù.ÎÀYCñÛÃQWêyyTüñƒ/£.Rר'uÈ‹ýΓð×dz®øÿ‡ÿxžëµÒãû¼|„~ƒõ|¬>‡Ÿ^Nµßxës)t8ö ÷á#ìgéÓn'êåíÀ[OF½Ž–õ§\@|–—^­u;¿ü<ülÒë0þSüXð”pÔ³QþKÃËŽç]7T·.…Éïpê§aoìèB且êOÀÊx9y .¤ Àá>—Ý =ðº¼;ê¼»£ûÀ1ãˆËÛ_åe|íVðöÛSñÛêDñŸ½ Lœ¬ýÛïñíÃàÛõ…É›óÿòÅ‘á²êO Þ„Ð%Ô·ë»Ý_ý>û»~«ÿÎsÉøÃ—a¯«¡ÏÁëêgv‚wcÏðâ“á9õ‡Ý‹ºz3êºàëíu²þõaéCú¾½xÓóóñoé9N/(¯ù~ºµzŽÙ?T?E|Šýbwx3¸I^‡ü»ú@Ç÷ O Îÿ­ï»¼ªŸõ~ð<ê àõ½Š;÷¢¾OÄSõéÇ£®Ãá›é“ìóå¢ã}ù?¦ƒ?ëvXub<ü߀Óùõ]ì÷}„ß*ï/:ï¶¿øDÿÊtÔóý‹àµé©£îÆoªÃ>õºžß\ ýU]ã¿Õµ§=ïÓKÔ‘üJÝýd¯ãÁÇ£à¡"Þ }ßw6Öå^<OìӽΉSú½õ­T_Þnçíõ‘] ¿³žpÜïˆx[ºœw+pÝfÄûg«·Ô9ô },«ÁëgÔ·»ú¸ú^]6þ<6éõœï+¾!ðÝW=!ˆð}õé?Wý2¡÷•~´ÛýA?‰>ŸíЋéî'‚‡—ÇV¢öy7§OQ׉·Oè3ÑŸE¿¾z±õŸ†ÿªé`ð„ç‚ÛnM;^RGÑ7ñ‘ê²gƒ]ë·U'ð«E]ôqð¾wøE¸~Ú'+§ÃøuÀÑÀÏOBÿ´^p!~YžP·Ð ªŽ^êŸÇž½'»::ýÞÝìÛ£îßòõêðûú¬«z‰_ûyçàøàÔ¨ÿ)^ÊYÏé;yºßy+øNaúzÔcç‚×´n7ßÒãáhõgñi±üV=Oã©ô‡ëŸfוÿw;ŽGåñ_úz­ã’‹¯ëüÁa·Û‹áó?w>DŸˆ|©ƒð‘ö™ÞN¿ÖWGá—ÄÿÊ£®ßø}º±zG>ªÏl¯ç+õîaφžøò0ðä~ÇûÖÍs”^<|žóðæ­I×;ê|̸ׇ³ÀâŠ>øn)â)^Oþºõ;ÑÇ`®t¾Yü÷ó[ñ9úTägvYýË=lÏ; oÈ»ëQ_áÑôËÉ£üçaàWúüéßý7¼MÇ¥ú=qXq'ê6¼ŸþšwCg _Ðì7œûx¯Û»èÛ¡ÏèƒRϱ[8âFøÓóÃÈ‹nDý¤Ž­ó2‘×àI:ŵqàÞ¥Žñ…×—zç¿ü†>ñ<ø·iäÛÒ¥‡ç2ëq/^ó{xMžcüTŸšõÀïðOyÍ÷âQªÞõ}‡Kè–üžV×97(é3ÐGw—îºóNètxv'yxCœ »¨¯ë\Mð[ô}}ÕKKçóOñF¾÷}ø ûDïÔ/÷ ⓼GêÃ>1ïvr4â˜øýjÈë¿õüu.êœÊw+=/?U_Œ»Þ÷tÞëbú¶s_ø¸€Þ ÿé«€ÄGqà‹yÏ›ú—>Šõ©>ïáû^ýGÇ5Ö•ŽXùnþÇŽÕið‡>¿“QÏáAðÉÖO]k}ïF|ÃKëK‘çôÑkðn×b]ð…‚W9ë/¯Àcüï7_îëÌîŸFÞwÔ+ü¯ï9éøpPÛ™t=îYŠÿ®>Õ½¾t:|>SP¯xO¼ÏÙȳúª¯qx~ýšêÁ§ÁÃã Õ ðçÃ+Óñ'ìz+x*vX}zoº.ƒä÷øÈï­ßG^ñ<÷B,7øîçÁ+©Óù=~|x¾ªÃõ Ê/Ϊ¿ðŽêñäÕ;]gѯ÷zÜ×Ñ~à¡+ß½éÏ÷ã-ßÏþÄoý%[Q_9OÑÃÅK|ÑÓ¥^‡ÒWè«Áʧž÷«ùÛñŠu—/øãzü<\¨ŸKÝb?~öð toqÚy0øƒ¿l‡®(/‰_ïÏ;ßÈoñ9ìc>í|:?µ_p™÷¦¿Ù'¸NÑ'°ºG} w> Ýþ£¨cåùH¿œ|d?KŸ Ÿ^¯øûUgãñòïñ,xUùFÿ†s‰â±ç©þÐy#ÙÏhÿäKq¿Æ¾NÆ÷x¯›ñ<Î{^uÞ£ì3êÿÏCÿÚ ~D¥ú <Œ‡ëàJxOÆ^?<Ž7d'öÝ9õÚKÏ÷Çà'ýý²Ÿk-thyB>ÑŸT纃'«~ë7½Î¨9£Î'ÃçòíÐÔÕê,u·8ìü«º×þ œªïïñ~OüˆÝ•7ïu9üäO~ ÓËÙ©º“½Á¹‰û}Þ8tCç±õ¹½ œ$þÕùɽ÷ñuWÇ‹cuŽhÜù*yÂyHý^÷‚ߺ<¨~&ýOyòYð4êëŸD<+>m؇ÛÓÎ/Ò§ð¹ð>ÿ¬ó7ã®·àÁÅ ú®üš8Êyë¥>¾9îøùFäW~u4xnû)Žâƒ½ÇËa]ôµ_ ;æG'CïQ¯ÁÉÅÛîw}Þ:©oá~¸âÒ¢×êÙê#Yîº1Þj'êÜš£pÐO<©úfÜ×I'ôc^<–ç´Šw¼¢ɹ><»øëOOT÷È+x!u–÷ƒð£w?xo~Á_/ï‰_æ&ä¼}#ò{Ûÿ‘÷j~Êa×ä}³êlçð’ðŠuæO•Cÿ|¹Üëü¡ý¡‹Âoêpûƒ7qn¾?ü©?Ÿï÷<§ºL¼xõ}/¯/ úœB·öyw‚7r®Žäç¾^ÓßÃßùA?žô:XŸ–>Fx®ê„IÿñK]‡ìOÍWÚëu§üÂNÄùÇÁ·•þ²Üñ þ^ü¬z=ìŸNOÔ9ÅÀx3¼Žü†w«~Äá÷çÿÞíS½oý¾H½2ò8œ¡O§øÓá{ÿ!üƒŸÞ®æ” ÿ­ïZÿ’úóÕðï+ÁŸ?«î~þjÔóò¶uOÏG|£ücðêÙùZ¯¿Ä¹»ã÷ñ^p›8Ânì“þ*¼|ºzáùøý;â%}P? zCžÀש£jNÜ´çùÂEâ×›ÎWŸ÷a×±¯G=Cg“Wø½úŒûw|=é;awŸ®ƒ7ìãÓÃÎ+›+¢ßÆû>Ü 7‹Cì®ô“ÃÎ/É_Û‡á]ëBG¼™úθÇ5ýâJñIÃÏéo¹ù§ßŠ:ÏúŠ£ôD¼Õ£à…Ôp? 
ßú>ñïÇá%|xöK–^z¦ÜùjÒóIÍ¥YîyMüÙ ˆ-O²ý®úÜœ£5ŸP^ÀTý¿Üy ý:ô¡÷ÃÏÕúLð—xá‘OÔ7ΕÁÛêOaÔó§>õ~e'tçOߟDoDí÷wB—«y6Ó¾ÏêøÖú‰O—Ã*nô|!ÂÁO£Þ€'î„#?l…áwì—xNç”/êü´ïkñôpõؼ?~Q¼—Ïôøyç%œ·~™:QÔ±óow|¥U¯;Ÿ-ÿ½~·ëœòÑ?Þ‘ïÎÁÿû{¼‰þƒ—‘§ÅéЭ¶ƒwÇ›ÑéÔM³àŸL:¯]}&KQ§-:(®‰§êçÅ%u…zóÓÈwøDu??…¿è_x |<¼ï3~Ã:mÞ•oå%¼=œìyü~å¡•î_[Á¿ñ~ª.z5ôÙ¿ú~÷s~ gÁx•Ûá'üý~äu}ìêß×ßïöl¿Åµ[a—öó~äùÌz‹ú‡ñŒì >¼<¥¸¬Ï Ï@7pŽäbè?¯†þzó‰á3öqzÚñ¿z:&žÖzÒ]éëóEèÂ{=OŠ—C¼.=”ÝMWÞý·æ Í»ù>xëXðüÛ{Šãp~U&>ˆú¯Uç¡g}‹'œô÷~¼ ;À;éSÕßs:tž{Áï¨Ã=¿ ¬F€'Óׯ¿´xžÈÛ?½E^³./£žÆŸŠ3ÖßsW}/ükç±álñ¤Î§ï¾½€3Åѯ'Õܸ½n/úÜàž£Á;Šox\üÜÀ~ü·8ð8p#þþQç²ùÐyG~×È75çvÑý?t&pùûÁÏWý2îú­¸¦¯¹Xý÷Ä稜CÀ_ø<¼~Âú}+ððQoÎC¿Ñ‡öiêÕ˽Á O/š‹¦ÞÒ_#®:o®ïµÎ±:€gá4ï-ž¨'×"Þ°ü¹ºR½RsWð7ÿxq7žwx}Îú5F£žçè.wB?0'_ÝÁ/áZ¸€Î WX¼Oé(ËýùêÜԨףx¿êã ¾/ànGüºõ’zÄúþ(ù÷qÏ÷Ö?HÇÔ‡ª.£°ÿÇQ?W_ɰÿÎÙ~õ <Í®Íg‡t™G¡Ëó‹œGWóßt^³ö/Ö<½ïw¼¤ßT½TŸÞr÷ë;'àÃSQ_伊ãùìýΜ|ƒó¼ð‘¼¶öÏ¿¿ ^DÆ—ÝOn7øŸá¿Ór¾÷ù›ž7ªi·ÛgõÉv;5ŸÇ>áÇí>“®¥¼ÎÁíu^YþÖ7Fwx=àý+øUñ©ÎçMz^eǧÂïk¿–{þªo{øï+ó΃¿ˆ:_Wù{©óÞö¡æY†â…åý«‘ŸèNtAõ‚ ?õ¹Îy¼:ífäiç/Þ \Œß¼ú²|u#êDùò³àåéIúòg;õÕzžÊ ãÎ;¼øF}&®= H½¦OA ž‹ß_üdΨ|aà%þê¼°ùúkÞ(ü»|ù¸ã£âSVz<óü7×:_q%t÷­Gþ¢_Þ‰uÂË+£—Ù)ÝÅ=FC÷Ÿù¾J\•wà4ŸÇÖœü7ÝŸõ Þ1õ|Ÿó%ú»F’ÇÌo†W+oO:ÿ‹Ct¶‡þÀà3ëÆ~Gçð}[ªŽÛï|«þ ø’ߨåf¼¿ç¸þRç'½®Q?ª³ñ Îg×ù–I·s~Wó†Ÿw>ãhðÃúpàë¤ï%ù|~ ©¯Õ¯5`¼Ê¬ï7¿´ÎÉÏ×ý$ÓþûöÙ|Ž»Q7l‡ž+Ð7"oÃ}9/t'x>|ž¼‡?Ô‡á½î‡>÷,øGøJŸ\ÞTÏâ ïÆz?:õ2üäëà=Ôê÷³ÁG•ÎyØùDý¬þïÌžøácýå›YÄwùŸ¬ôtÔö™ŽìÜ«üö(âEÝ“6î~ñÓàà.ç¯ñ‘úâ&ÉÓÍ:?R÷ýí†~9êxσ¿z5ï¼(;£oÉ#ê£êƒ8Òuqó¼èiúù:P¸¥oöAÔ9ðMÍa ¦xíƒ^éP§ß8ü½¾ÞìcÛ~°æ'E^n¸<4œ‰ÿ¤÷Í"«ûÕOuç¼ûǃÀåüos}ÜóÌýàÙÁ™àõœoćª‡÷z]£¾WÕ9ݨÿÊïÞt|g?ðçâJêòÑÝЪïkxŽ×{¡çMº]Ô¹Ž¥Ïáz>^Tþ|ºé‰ÐAäëÃoøT‘w}Þ³¨Ëê^>¸X6<¯þúœûP…¿è7‡wÄ%ç˜à—ês ýFܸ ΄«ÔúÄ­:çwNû¾Âÿâ»c?Ö„ß=7{¦ûÐýWƒ|º ~¢ê þšXoç§>Ž:ß(Ý ýîú 샿¿®Þ›v¯D—Ü ÝþÒ§RuÃR¯?øõ“ƾé'§ÇnO®þ¹¸ æêÏz}^󨆟«y^“ÎkÕyMüæ¤Ç[:ÊVà~vˆwt>óG¡³Ú?y×~=ŽüÊ_KŸXéúÜ—òߊç`pVÝK5ê8o-tHuUõ ï¯ïánè…ú*åuñQ_£}€›¿ð8pü‹àçèñp:^[\Ÿê>ÐEǯtav‡­F\x=ëq–NÈ?ôiÿ=íñÖß›WUs†GÝàañp#òPë4íþwÁ[ìE\øyè?Î!êëöÞu/bðꆓ¡3y.?Ï_ñúÊß—úïãÙðª9WîÐ'M¢wéû“·ä8ÿç\ 8EÏ…Ù?|ý³Ð£ìcý9ëŸkn‰|¾öR÷—„¾MWY ~bìŸ÷šŽ:¾‘çàNõ‡÷‘ßüP_¨{ðýuÏÀ¢û¿¹¨â«¸¨®rÏ ¾¥x‰ÝžWkŽÏAÇûúîK·ìà›å¾Þ[ßÿ9òuÅ{á_ÓŸ‹? üïÄ—GOFF²tâ+áìþf÷øRö Šê“ê£Øï8üaè<5qÑñʫà šÆzª3ôëè›qþ_ EÝPó€–{œÄßÝ]I=Uó.ƒOûðLUo ?÷jÔó¯ùcÉówùq%Ö Ž°Þ—C_Ý ýJ¿Ä½àAäkð¢â\Ýã7í|Ÿ¼¡ïW]HïÑgÇ‹#9gþÓàÓð!Îíá̧¨¹…ÿ? 
ÝnªûÑÂ.ám:V§¾ßù2yüTÔûôC¼WÍÏõ8*ŸŠø»ÒCæÝª¯zÒñãÕÐß O…~1þÌßÏÕÿ+=?ÙøAõ.ž^¿1^MDZ·ŸOm½éjÕÏ0üw;þÔo?û}uwߟvÝ_(=Œ8Xyö çiñ£_®xì•^Á_Ö[¨®uîÍ9øßy.¿ÿ^àu|/ÞÂÏ«ûÕ¹î/{Xt?Óç+½Šýk…¾âÞWö$^êoúMðØðMÍ·ZéÏá\Jo\ôø‡©¹âó¾w"‹»UÌzþ§·ào­[Íëw;?õÛÔ}G¡ê“O~»æìw=ÿ8òeæyXÿŽïƒ#Ù1^ÿ= ý…=Õ}0¡ÛÉ‹[±þÅÓÅ>ð3sþÔ‹5¿Ϻ×÷áaÔ×·‚ר{÷¦–:þÄk\ Ý ¯¾Ëa}Í¥Á/Xëý0â6ÿÿœg)ܲ×ëLëå\bž \†ßÇ£Xüž¾üsõoîu|LÇ?¼À8êÚ¼ïl+øåº|Öý#çåàéùsÍk§óÏÐQÔçð{ÁûÀåêêš³»ÖyŸí¨Wî„®x6t9v«ÿìÃŽ³ªŸ_?€¼®Ž©û¢Ø©ý‹8Wó\§ý½Ü' >¯9²KýùÅ3|†úÁïÁ3ÞSž.œ:é?'¸W æOvþK|§Ü~\<ƒO<¯>—º‡cx^ýôò½>zúÆÕàðÀÞ§îÛíºÞƒà%¼·÷U÷™þ*pŒ:¢æ Í;O î¥ŸÑ?å38®¯{–›ß•}©ßèñyžÎûÍé=ô»‘Ÿ†T>îúiñòÁƒöxdND;îx<¸}Ù ^×=&…£Ãê~¸ásõuÖýγžgÍkg—p~Fß*ÞO—½ü±¼æûÝó)þë‡t>îVÔ¡[±^t@ñN¾Ñ?A»xŒ?«§íëOå¹²š³Ôó7ŸE½˜÷Ó×|ă¾žê“šï<Žóâà·ïãÕáìšÃ÷¦çIs\òþS÷Tëg`‡Õ߸âQè‚?Åzøþ$NÕy˜•Ž«Oø"êgýUÏ‚WÅ‘êoyÓãCÆ]<µzK~£èZêÌÒ‡ÆçZg}Ûæá9œy?tÂ:¾Ôý,Ïá9‡¢žâÏ¡¨ƒð˜úzØMéÕýý­SÕ?t™x^u£ó!Λê~ :×~Ðu=Ïý8ôðÀ¯÷BÇÛmïRuÇðûú'*Ž»½ÊóxºÏo©ëWÙïG¾º þH^éAÔÇü[œô¼ðž÷,ÝdÖó=¼ú"|~>ø¼ü¥ÎÁC“GÿÔ×iü.@_"| þ¨ûª>}Óy7ëÀájz¶÷¨{‰Ç½îÇùÑZü{€?éV~ž¿¨+îg|\ô8UqëMÇoæwÔ¹ðQ·ùŠú¾ãQåý®ò>{¨¾ iÏ£O‡7·*þ‰Ûâ?{«þœÐKîžøÅ~É;‘ßÙ§øÂ¯Ø‰ßsžºæ*F^›ö÷0ÏûÔ}"Ó·óÕ'ñ¨Ç£Þx¼?~Ç<2û—°ož§úW—ºN³v¨¿[þÖ7ëß­?¼×Ë ðõ®>Õå^_•Ž;ë¼”ý/ü1î:WÕ]îû"å=Øø¼‘ó^·‚o¶ta¼˜õ•ÖÂÿä¹[¡CÃ_'OãÿðÁuž,쯭¾¹z©:¥ê5þ¿Üõqž.óÍ;½îÂ;ËïwB‡OðO§=OŠSuŸÔð¹îy»Ã73Oî?»œvûõ_ÖÜ…I×oß‚¿-¼8ëøSýʮչ¯â=ლËåïç=o>Œ8hÝRo•ïñ³y~̹ë툻…Ž/…7·W¨ãj¾ï¯ôcàíò¾ý(ì o¢žÒW”s¿jý÷ú¾à¹ô¬G]컩{Ëg]RWË~oU÷3Φ[]m;tݺÏnÔukïé~ÏÀCÖÿ\èæøyÏs;ê¥Á êvð?ì®óïø%º~y|^S_œmN‚{Žü¾:©Î,õºï oî×ï/žé›TwÊò]寃®ŸZÇ‹Á~ü:Po_ñ ⻨ù@øêÐÿø±õ®<·ß߯ÎëO»}æÜ|çGðÝúëþÝØEçµ}¿ùp2\í\²zNþÑÏÁÎðøøbqÑÜ…ß_ë~h<™øKÿ¬¹(+]ŸQ‡ÃÁôB<ýšNûêß»]½Xéþ °wø>­¹.ãŽ#~¸V\>qýqðÖò^ætèžú ð|GÃoÅÏŸ‡þ§ïŽ¿¨ãÅ…š3>ïx®ÕÏ Ç˜¯Wý7ûÈ3æáoà˜—«^Þ®ÏÕÜßÝàó÷{^c¯ðŸy¼øl¸§úNc=‹^é~«Ïé“È·ú½÷zè'‘ª¿h¯ëqpEÝÛ5êyÆ>;¯;Ø[Å¡ê)ü`iÔõi¸'Ïå«Åwçœñ8x[ñƒ½Ô=„¡›‹§ðþK½.¿o¦C°Ûêï›vnìsÕM›¡C}zŒ¼i^­õÒXs;>Ï%?Á%K¯ò\¸ºh+ô“êß÷ú²î'žôçÃÄâï÷:Χðæøð;þû^/ÿáÛñRö[?Ùƒðú‰{äYû¦b. wôó8_á|·:/óuä)|Ù÷Ìu|õíŽËðSú.Äë¿é#Zîö¯= ý®î7SÏ ï|(œg¯‡¾c¿î¯náNöÿ­{Á'}]à|ºX­ûð÷úƒá¿êSSÇ? >¾Š×Ô+ò¬ù¦uOðâ¹<æsô%ª§ª¿ãM¯c¡¿ÊGö?†÷ÿvðBêyÂ\ˆšã<íü‰<¤ÏÈ÷V|Ýïq]¾Ù½©Î „ß:ï§Ž¡ ãå·Õàåº÷r©ótÕÕ|ÿiçGéÜìÙù´ê×[„î2éu(ûÍû¸å-ñŽVÔ½¥Ë]·P׸׹î{]êöøt¹ëœò,ÿQˆcîã¬þ€Yç¯Ôá5oüMßgçÆð±úy^ÿk_Ï:¯m½»}:_Uý×ÃçëçÏœ—§+øû:WµÜëÅ÷Ï‹/ðœ÷ÿ,êó¡÷Õùø½®³Ô½´{=Ù_}5ü÷NèéôÁí¨GÕQ>WÄ×8ÏÉŸè;ø"u¹¾íº×-tm}þø ó4JG›vô9Û+=éÞ/s5ìIýYçâ—»°úÜ«ån_ò†üï¿=·ºu+tñ>Þ^ñAàêê÷ÜëuoõÛ¾ ~Üy[þäsàáº÷xÑý1ïëUOÂáæ1² ºœ¸Êï×nEÝt:ø ‘G|®óqtÒ‹QšsW÷ÒÌ»ý¨Wð6u?õRÏcüÄ|&¼ÊNVCo›¼ß4ô9~ N’¯ßå=¬û,â¾çb‡'ƒÇÇSº?^¼ðœìGB§R§Þˆ8*¯©£Åï:Ÿxï3íøBþôyG¢ÎÆÃ˜ëv>ôØàÅ_ü¼yÖø8¾œþÄ—àjžöa¯GÄmÞß‹SγXG8ÿnð#Å¿/zýw:ô¿âK&]àÿø-q£úÜB|ëÈ¯Ä ¸Õ~ü8ø.ÿ]ºëð9úÔù—úH\-u·óÚúoª.8 ½>tsˆœÛ¥“転Ï‚—+> }¬î÷Ù œö¦ÇøËsŠò¡ùvâ¹>¦äcå;¼¸¾%ûï9‹Ýíø¨ævœùê?{>œ…¿}u—zá^è¯ò~â|yõWN"®-uÉ^õµÊ»#.”^ÿžôïÍ~{ñþ‘¯Ìófßx;ñݾûýgÓÎC¾Üí:aÝ;ºÜãù,t⺷æM¯SÕÝÕõ¼¸ü–z¼æ:O{½ãýéTê:âóÝþüÅÃDZ.æmè[Á‡½ ýüJèô¯Ÿÿ6xL<‚|,Ÿâ³ð”æ Àop'=°î^ ^7ê§â›üürãïáü#¡ëšçÊÎ}>ÞähÄ›{ayîìõ›¾n·‚?ð'>¾‘ÿ7ÃÕiâЙø{u™z…]òçû«ŸÃ>¯tÝo~qÎäqýrÇ3¯í÷üdý­wÝ5éû!þÀKì¢ðã¬ÇWq¿ˆ‡Õ73íüññÔWzœÄ{º‡œÿT¿êRÇo³àíÄ-çŠñ»? 
Éÿ íîÓëMvIg„Ó¼Oéêû½.„Ç:b›žvý¤pÏ¢óß Þu#꩚ß<ízkë=ìú2½¢úABÆ/êcÒ?ó$âüQóöz©9–{п//´Üí²ôìo÷|u?òЯ">äýeü¢Îwìöz]Õ¯Co¤?œ;ÀÛCŸÝ >¿¥ß‹ž©_F]9 Þ†Ì^ôé9/bNš¹AyOvÍ[îû 7Ö}ÍÇjÈJßñ7ç Ô|ý½ŽwÅvO§ôþìN¾+|tÐqsžó›Q¯OJG|Óë ùÎÜl:8ýÝ<ºæQè¬öƒŸÝ }Úó?{ УÄ5qf%êºGdøÓ8ÈÏÁ‹FýÄßÄ«âA‚Ïä‡ü£æý Ÿãœ¡8ÁÍÖMýeäÓš´~uXÇQçË÷;ÁË>Œx¦Î›NFµÿ5ß|·ã¡iÄÁ×ÿÑ?G_~õôýâwø³Âï‡ÝÞê€8ºÔë(¼®~3ºMÕ³û½^’}ΣÐkœsÃsŸI»þÝ<‘+øƒÎï›·Â.keèj'þ/3‹õS·êGwV\,qØíòaÄ1ûù8xž{Á;¨“ÌÓ­óÍ¡Êcú§Õáø/¼…þ¸šG> Ý.ü“nŒ/áwð±Ð^,õ}}7ò |‹g¨þ •®ŸŠËâ$}$ïýòôùC݈W‘ÇžÄsÚ_vÍåmº©ýd·úè W‡ßâÁ<Ÿ¼CÖ¢NÀßó'ùŸ^,¿OF=?©ÿ¬+¾ö~컸X:ÿðßîE¨{`æ}½Õu>xÔùVö'}:–¾º\݇¼Üujþg®ðNر>еÐÓõçØou¡<O‹¸[÷8v;1—»ú2v;Þ°Ÿ÷bÿŠYôº¿Ä¿ôÕ»7½¾•xÒߣ΅ïöuõÞ ýívÔ7ê[ëfþ8œã~‡ëQ'n…®ªžú>þò?ÿ}4úÿó÷½Ïš.Q}ˆ¿N\û¦?9$âé“xoøòBèxhs¼ù½¼æv¿éþVø!xöz%t Ï£OSó0êûëœÉ͈Õo0î8õûÁOê£QÕÜœqÇ9ð/Ý Oå\ÛõÀ…ò§óŒò>þ‹ÞXúÞa×™ôx>Ÿ+üÍù¥Ž3øßňSp¶øWzCàY}øìå~äEï¥/Iœvþ]Ü¡CÝ ò~ð_?³¿ò×Ý^oÚ¼Œý篞¿ˆ7¯úv¿×KxùQàVvi¿é<æk|zó³½ng߉ø^÷Úív|õ8øãQÿÔ¼–½ž7Ì—²OuŸððùú~ÏÞõžW¢¾P‡É/úò­?,ý|ÞyY¼~|;ÞË÷±gx^ªþQ‹óôŒyäåÇñßÏ#_{ÿá_úÿÕ»ü ï³#ü¸üÊŸééx=yÛ¹+yìÙÁÛã›ûDØÀ‡9¿+Ïê'y:ïöŽÿ€¯žE}÷ôMÏÛž“Nb|ßZðxx_y^ßàñÐcÔ5t_û£îsN’_è°úPØù<â¬ßÓŸQsªæ=¿³SçläkçQÙãûáïø’•X/üEùëðsú·CÿzyC?™ºO o³/¯Ÿ<®oæjðn΋âçÄ!?g~ÞQ=Ãߨ‹u‚;ÝK(ŠÏ;¡»úþ¯‚'Ç/ß 韂ç«ù§K}®ö\òÝJÉñМ«°ïOǽþƒWèÏ¢NÖ×MÒ_bŽǹA¸ªæfïvþB¿ÜÁζ£>¢ëˆƒž—=㑜[ú<ôÈŸE}j}‹'xÓ÷Ï"žÝ ÿÖwXs~øïa¯—íCí¿õ˜÷|U|å¸çŸ§ ¾áGüD]ølxns ëþ½y_OyWß‘|ªîÏàa}­×#ÎÈ7pVÍ5÷x¦>9›yr¿óÚuþf©×ü†ŠïÙqžÓ«~ÃÝÀµø¥•þßìÔ¹}IðŒŽ=¨ã‹›w/u?~ü"ÿTŸÀqîÃý,øayøAÔ{ôßÒS§Ýoñ%ôþ‡Ý .¿Y Êï/:­{Oç=þÀ?úåE~Œ'°Þu/ËJÏc57f¹×)t÷3‘?Å—š7év£oLž-½vøÞƒß½ñ©úv;o?UgÃÎáµ çïvÜ _ÊÎOË“õ{GÅøKùYÔ#p/¿QǨSðZxnö.^”¿LºÞ‡ÉGú+_ŒCGžôºØŸôfz¶:ãã¨wñ.òŒõ«ùÙ³ngÏÇ]O†£ë~£Q÷CñÈ÷ã[ý>Þ̾<ßëõò³ðWþˆ‡3ßx'ôÚ¼ïP\V¯ªCá¾û—ðJìý\ ;£ñ_ynaçU§Ë—»=/É·âÖJ¬£>yÝzÒ½_îvžMÝΞðRUðÛQÇáÓÐ g£žŸOú>T_þr¯Ëõ úwøÎ÷ùüÁ~ÃKæF•>´ßëXx? oñ^âêÃŽ¿‹Ÿ_ô:A¾÷½öEÞ¡ëó¿ÿÃЃñxa:®¼"~±çõÈ;[Cñåò´|õ$âjñî‹þsâ„úù£à›èUæÊ?~ßœ‰g‡]OTÿ$Èäe~"ßÃÅüŸª¾Ä‹ŠÛžûIä;û ŸéGU‡ü$ùË¥žßjÎçn¯7_/ßúܺ/h·ç‘ägèÿês(å1qǹ_xâyè:/Ãôïá_Dý‡ŸØ þE—’·àpü3^”Î?ÕïÝ^·~7xAùÌsû9qRý¯o„¿½üû(âLòe‚ÏW·yÏœÏ%߬'nžuýX?rñÚô–q7üþ³à—? 
œ%þ©¯áxÏ¢Ns~ÁsÜ (^TßÄJçs‹/ 3Mz=žz¾~[yo.Õï>Hÿ}òxày„ŸñççÁ×òK<žxY÷àLºÝ”n¸èxõTè¿ÅKN;¯¶àÁðh¥Ç¿_½×ñ]é}?Ø>PÝêHÕ÷‹Î×ƹ¼|Æ>knÈnç¹Å•ùƒ—žÏ}`pëâ¹®†‹×»õ¼ÅïélðÝ\?OéG‹žÏœû¹ú‡:Iܸ:é~ù8ô½ÒACg`—¾ÇÏ×=€‘GÔi¢Þ}xöYäïºvÜu(¼†õ—¿Äy¼‰º¦üvÑq?ötÆã¡·À7BßVïð}Ûצ]ç­{Ô‚ÏÃ[X?ý§òvÝK³ßßßœÊ_‡~ŽßøEèTtT8Á\ùàeÔmââ•ÐGí+¾L¼07 ï«O¥…‡sîžú°æÏìw|w,ð¡xDq¿îÝÌ+±Ï5¯f:ç¨óⰺϾ< ½I|çøñ»»=œx|:p]…>R÷9,zÜk½»d¿ÖÑóÊÃà=þYqb©ï\_u~Ô7‚Ï _ùóyð6ôÀªçö;Þ;ú“¾3ýèC?×> }Ic]Ù5\ú$ùª¨ïå+~«_Zœü:â˜8TüÖ$â6Ýá¯vRý6úÙÔ15Ï{Ü㮼z/ì»üiø}ß ðZ¼/<ô"ôƒÓ¡‡Áü@<Ð'YüÇa¯³Ÿï÷õ¨yËaÿ¡«©§ÕEpaõ¡ïu\,O‰‡ú­«Oé°çv+^û©ù(“÷ĺ²¼Y|ø¤óŽp~PÿÔ4Þïóà/Å ü̽À§Õ/>é|CÙaäüêãàq„ŽìÜÂNè:ê›§‘w†Ë+ô°O#¾ÊCpçƒÇ<ŸóFæª}|·>Óâ†çÒO.ÑWðï5×n֟˾­D]MWábçq^¼‰:3ìZýëþ õ’¼LÿàϯSñAÝ)ï87r5â‚|)®;OVóÜ:Îf't+~Åßë>˜áç7W×\øyÏ¢nu.TG÷~ö¯œ Ûùb|ÿ|:ºs^t+} ÖW½/Š?ìÞÜ(ñWÂçýZðуX7:EéÁOÈc—ÞáyZ݃S7ªóåñŽx8Ã÷ˆ[Oƒ}õ™s6øE<~ ÿ"Ž~:Í•À7òð4øTº”}ý<üìIä_þŠª{tGýs~ü.€›ü7|E—çáRöÀåïŸ~|zœöqÔñø´{Á#ñ{s.ÅéíÀýâ»z_T}Ñ+ë[Æó‹¿×ßðE¬ÛƒÐ{V#/ ÿÇ é—…#='\Zz÷aß×BÿÍ{Àï‰Oö‡¾ð(x,<Ô«IçMõ9ÔýT{½^ÑŸÁ¾ùϾ߿ åIð%úÐùݳÐSõ÷m‡îš÷BàSà2ëàóø[í >u9üù Çy¤Î¡ÿîž <¿ø¨ŸŠ?Wë¬óàuŸÎð>æª|º.ÞÆz] ž^<ü^^ÃÏâùœV7{ï²á½Þ >—Ÿù>z0}'ê3ù £Ÿö|Ä98äQèìêtø«çs¾@¬ß¡ðêJ÷ü4÷:õAÏ£îª:hÞýÞ~:'Â+nŽ{ú—|)®ŠËtæ üÇÿ®Ìß®Ô=‡}ý…³y@Ü{ü¡z¿•N?íü®ø¦€¾ç×\ýýЃ»X÷š“±ÜqÞÇÁóóøƒ‚Wò¼êÐû‘7n?µ%>MzÜ·òùéØŸšã8Cœ-Oé#‡SE]Â/œ7¨þ™½¾ïÏBǪ{x‡Ï3gï—Á§òCøèaè ò#¿€<‡º8ydqÁ¾²qýÅoÍzœ†£å!uƒþ^ç¤äY~ø0ÖÕ÷Šg"ßY'öanJÕ¿»]÷tŸý‰wø—ØÇš õ@ÕÉÁû\¾Êy}{ú¿Î‡¾@3oB^VŸ}ø«ÎqÍÃ_—z>`¯ð§8÷pϼ¾rqë‡gÍ],~pÞq ¯ûÐWz>°_ÎGØqYߌ9;Q·Þ¾”Îö(âñVð?w§Ã¢´ouÜçïå"VÅ^ÙUÙçnߟið·Ö ß&Nðk:±þ+:ý®ÎíLzgÕUÖÓ¹ù/ÃoÄMy©ôÃÎoÈ#5OiÚíK¬:!êo|JÍ»;èöôÇ#_\  Ÿ¬úê«ù•_âÓá%}Þò­ú»æyÌ¢~vœ‡ã/ß )<½Òëë ~úªsÌ>N¥×Ó½œ«ÔOLW¯sw»S»Ÿ»_>¼yj%t ¿oécÑÿ­ÏÞ¹¡Í¨àëI`?uoõAÇuOïð§ù?ê²'áS§;ìëƒðyøº£Q§Ö½{—Ò©à|Žsˆ'ÏÃÕO‚VÞõ«sS‡Á‹ì÷ç¿|éãÐÄiýÇžïyØ¡ï1GL܆óžD<Ä“Šoìc~aÿéÛæ ÿ»þ¥íˆóÕ_ºÔù8“Ý\{…£ÔIüýF¬Û‹¨ŸJ—=ìùþN³êÕÂû=V´ç:½Z;]”?âð„êº#¡{ÉúÅÌ™ñ{ì‰?ëOÆ·ëOò¾ÞãQèEußÔAÇsu®o¯ãfu/\ÉžFܸ~c>„uªø¸ÿiÔísD£îÇ¢.€;ÿ!p¡zGü®¾ÅásW§=ßÉ?ÆoçÅáïš2îyÓ\sëZ:Õ¤ëEê-v_sf}ßó¾s ÔW£¾µ¯…¯Å©Yàµáûk~é¤ûñã°›ûQwËêmq\<^xÅOvÏŸ ¾A¿‚ø Î¶Ž5os¥ç]<œ\ýñÃç9']û1|._ßÑ·¢³ÎëGÔ»_…ësÂË^Ùëyj5x¤G¡[âIò½|û2ø-:™s–Õï½Ôõ+þ§ªëô‘Â%#þÕ<¿IÇcy…u/Ÿ‡že¾…~€Gcxç3Õ±êJºÂç/‡_«ŸëœøJßÁŸ©«ø|`žÎ,ÝaÚq•8''~pŽÌ>×¼ÂEß§O"®Á™ü€^Sçîç}?Ù¥¼+.øýóŸåíš33é|¿:¹æê¤ž¶ßësu¬¾Pq÷UÄ9ø/8 }ݱûӡψ“œôzÎóÐ ñð<üI'u>¢îWØëq/¤~Ï…cÿñ»Ë‘/ìõ-Î‚ß ½Œ~t/øvx/ô^z‰¾}*žO¿•ºÚŸe‡ó³D}ù\¯|ü>¼džþbõ~J½ò‹àÁÅië‡ß¹xEÿ¼¾š÷½ñT}vÐã?ÄCÜ<¤Þ­û‰ðçóàËFÝîžϊϦk™ÀÎÅ:7¾ßã{PŸ×yááyõéþ&êDþ@'N] Ÿ ×Y‡êã›÷Ï¿û´~ï|?þ‰þ¢ÖOõ,»ª9ˆËÝoÙ‘ß³îò\Íå'ñv5ò±~2¼ª¼´z•xZçpßt^“ýZ×—Á3޽PßÝhšuФã©Ob=Ï_ó»w»_ÿ ø?ùþ~à§ÒµàÐEèÛÓ?äsyþiÔ‹p‹ºFÞÙ‰8y‹zÚs?ø>ÿ·^—ÀÏ#NÕ}›·Ì§z7ð¸|õƒˆëò›xç<ˆº—~–¿Òù1~).Ã/â¸8ˆwÁ;?ë¹äUú{9îñ†¿'¾¾ñQü©ó‡׊»žK]ãqèˆüN=WówG}ÿåq˜ž\úÄnϧæHÖœ£Eÿù­î_;;\ê|už»È¾iõÞ§ú‚§ÒÏo¿Åeº0]¾¨yE+½Þ-œ|ØóÛ•à+ÕƒžËºŽZôýå÷ü‰Ÿ~ü注—||v-ê‘'¡Û×½óΣx®ºmxÞâýðËt"~Zý7³Cþ„¼þ·ÎÿUŸý¨Çys"³_èaÔ£O‚Çû0x˺Oý ón5·o¿ëD9ŸCß[ÝÏ:ív[ϸïõ Ç¹ŽÓ_¥®ª>ʽÐOËkø,üNñvÃ>ý8øö?ë>ˆå΃» ú;†ÿ6?oǨû †çX¢ûÌ»Þ!þoǾ=|ä÷ð;îC(Ü5|yÉâ7þ Ž­y®Ãzþ0òKõ1-u¼ñ$xì3ÁëW¼žw\²:Xñ³žÏá¤o¦·˜«W¸2ôcõ™¼÷óØ_xRi~vÍ{¤§ÂSÁ—T¿Õ¸ã«áo>ïYè êlz¶|ƒ¯¹üŽ}<y5õÔ/¢Þ‡ùÃgÏ §;þªs=£^Ïȧôgûˆ‡®95+͸ó<ù¶áyÝ·WýŸo‚ŸÖËùDyÛy0çC>¼&Î݈z’ÿ°uÎ4t!ùAývîïÔ#ês'ă:¼×óëÍàKéO5l¹ãŽ/Âo~|¢ç– ¿Ï·³è?ïÞû«ÿŽ Kmߤޯ9Øû¯{‘Æ]ÿbòàfðìü»æt\û»Ã¾>p·:îÕ¸ãñûtðëÁ»égfü†zn8P]Q}û½®¸—zÔðü„}ÎBG¬ùÒ{«{^–;oPypÔãÆ8ø'8^_Kçw~ÐçÏGa§Ã÷é/ð¹òƒ÷¿z1e§â—úã{Á{øfvm~s[ì´ô僎/èMê0ýeê¸ ¿¯ÑÅIùNú^à qE ½h…Î,oîõ| ¿éûvî|'üdqIýBßvî¨îm˜u^Â9tyT½>øé¿Õ£ìÿ¦ßB\ð^úêOF½G_¨þ¸ÐYŸGÒïô<òiíÛn׿‰÷įã“KÜíºuñ닎×[´Îðié}½®.‚#knü¤ã;u‘xVó¦XÇ?®už~®X]+_Ë÷÷ƒw©óÓËÝïñÕ5W[\šõý­þ¹¥nâèÖ¤Ù̓RÁS¥t}B¾†+Õ­yÿ”çÀG=‰z³üÜýWÝ%îVxØó\¡ßòAègøa}(¥§Œ:.ª9°‹îŸ×ÿs.>Eÿ{­9ÛÓžGÕ§æG‹Ët÷âY÷OÂŽá4yF¼qˆ¼`~ˆ~õ‡QçÀµÕw7ïñ¨æí„Ÿ²Cxúyè3øb¿¯~„Ÿà x?¤ž¬û[fÝOêÜ׸Ç9üÇËYÇâ»²Ïøš³wÐý(çQÃu5n¥ûKÝó2ü\õétûÀ‹Ð‹Ùëþ_Zu\‰/3·R`Õ§“Ÿõàiï„_Ùùßüžçð3ëà ý¯5å°çyqÕ{ž ;ú<Ö³âënÏßåw¡—‰ 
+¡CÊ?¥ÆsàïùWÝÃ5íx©øÇ•î'x}†xã:7¹Ûñ‰zÚçVŸãnç#«/„¾9éù¶æÎîv¼õ$ž÷bðü°æ‹Ÿ¤þY¬Íõ™v»e?§"×=+ûߪ«ë²áßk¦ÿÁCê8y›Ÿ5íz@¼›¼šù¸æ3L:þÒ—ˆ×SëççŸuq¹× ¿ ÿ ýoÜídã ãýÞU§î÷øú$ìO¹q„»OªúDöú>лÄ|R' |#?Їx¥š³þG‰—ô¯š¿²ßýìAèIâ_ÍKšôx O‰>&] û~àjq+ãá³Ðqê^ÒÃŽWÔ“Ÿ³3|5~J\-¾4øyðyä‰Òõ—ºžù,pœúìdè‘þãßáo8@=³þXç1=¾ÐÇKÿ¨ûn½nÇVâ÷kNë^àí¥Ž+¯ÀwŽYê¼&~É:×½ó®ï•ßLz¾´Õ¯2í8Ž}ꇪ¾~|gÔ_烷Ä­†®’øŠÝàÙgÍ) {ó+.¿é¸î¿ñÞª¹òK=.ªG—G=ï«ÿõ±Wüë“à}ñ]êaý…pBòæÊ‰³ÎÙà¡åsq¬øÌYÇ{©‡[/ùÏ{×üªýnÏ'£¾†¿ =ÍùHùBŸ==ýRÔYæœÉãÖÑüßÇñyðBñWû=®²ãÔ-ŸEÜ3°tçyÿyz—ÏÿÝŸB7?\8ïõ\×=Ž<†O3w–ß¾<ÝŸ>ƒ¯Ç.îúE䯚ß4îøGÞPDZ 8›±øJ<ðþ~ÞüùíÄÅÓÎÖЍ«Ÿ_Q}Ó“þ{ò<ñ8òºzø^|Žúåbè‘OÂÎkÞyèì›Þx;ôuðüM»ÕqÐù`ü‹û ÅW¼UÝ+2ëzzõçÍß^OÜxÍŸîßô8ø×÷çËeï‘/á:g6éñWœ­{*ðÃçë—o²OÜ÷à5Ä7þ¢Ž†Ï /-zÀŽÅeq£Î“ïw;Ñ·W:Ï~×SáñƒË—£ž­óóÝòq8íB|îÕàýê¾ Ý®cU?Ò¬çñ§øÛ½ÎÓéÃÃ;çý5;QoïD¼z/ô.~'Î[gýÂtÏ!¯Ò ªûÓ—zýl½jNè°nî[c/ì}ã!ªk)âÙ¬ûSÍÅXîy[]ï¦c= ÿúIÄ?û—_<¼|gõ5ÏúºV6ø·èü„:“½ÖýU—r>¶øæqç‰Í dÿú†ñü¯ú›—{¼¯ûÛ—»ýúwõ¨¹Ûu~Ÿ.2üœs&5Öulú–óx5¿s©ó‘9ÿ?ò<ø›š?9ü¾þ÷šû4ü^͵ÛívÌþà¾àmž‡½â7ëä›®c ª9׋^·{¿ºvÑq÷û,ül)øš_…=˜Û•sͽ¼ú—sâü¯¾Ñ—Ѝ9v‹ŽÇ¼GÍ L}gÔŸ¿æµ­D<ŸwþZ~Ô‡ }yA\ϹFüEü5O‚×yªq·—¯‚Ó/V}º–8`ßñ¼öåܤ:}ûòãàI^Í»®¸ówt店+øEûRöÆw{¼Û‰º[aé!‡=zçýñBpÞ/ƒw=ü¼ÂŽ‹ ¼RóôB€¿­ÿÍЧ³?­tüýW< /až­8¨žO:ï§ž‡Þ7t¸®Î¯ôºájÄ~¤OL}Õ9áƒþ}ú¼V'Áöý3'¿= *¯Ö}•êÞà™ð9É î~ÿ ¦øïáçÏG=í~1¼<â<¿y¡Õ¿ú¦Û9¼@ïWîÇ>þCàŠšÛ<ëuDÞOê¹áø î}Zî¼zÍ‘Üï|Èý¨#×]ßÅ÷<‹|)Šoúå¶£Þå—ø›šë4é|JÎÅTW|:˜ïó>úÓà?‡u^ð\ðAîaµÎü\Ä.?>z-pýÓà‰/ÀÎ~ùÕ犃źGˆË—ƒ÷«¾óqÇ×p½º…óÙí£ÐiÕžóê¢ÛkÝ›üyÞËÌÏNNÅ›×<¿Ý¯Õaæ4È_pÇhÔó›üB'òžóÐèjøÚ3¡ËÐèÝOCgV7è£+oÞõ#ú\O{:ý¥æÜt¼x?â”zºæ ìO»ÔÿN¨û0†u­ó¼—àÏìCÅOË'ìÏ \Y¸mø=çGØ«~ê§ÁO‰kÖÅ¾Þ >غ³ß‡¡+ÀWüðõ¨ëÓׂÙŠ¼RúÊR¯3rž*ž_¾Ñ§WóäÀ¥už.^ô¸'­¾ÎÝà‰¦}ÿåc<§xJÇ)vÖ÷]}®IRsBÆÝߪoj¿× 5w`©çñº‡|ÑõJøKtݱ×ùך 3íqÜŸxtz >îJðüú¶ç=^ò{qÈï³3ñ¢òߤãv‡or¯fòâ>žÊ›GïzÆ3‹{Þ«úµ{Üù~Ô? žôAäõÆÏ–Œz\ÿIÄ»§ÁKñ¿Šz¢pó^·ùÏïÃ#ø’ê7Åô<‘8BÝh}Ÿ¦}À/áq—ÝÔ<ÅIÏ“u~jÒã‚|mô÷ŠO"þ×\êá½õ:ïGösÕ×> ¿}Óë<Ç£àã×£.Æûˆ ÖátØý£ø>ú…u1×E®Í%)¾ú ï[ÍiwÞ\ÿ’|n]D}§Näo☺éXèéÎ[ª¶ÆßÁ¯Ó÷Ì•k.ò~çQk®ý÷øVóew{e‡ø+ë»z³ý{) ߣ¯ÁüXsnëüÁ¨óHâ|ðIìgÍ/zÓëÊ󭣺þÆ;Ñiò<,œ`ÞMÝ÷²ÒýN=yD|”ô W¿ć;»½žæçðºx¢^_ ?§7¨OŠ_žuSó:Óž—»NÉžœ'Èó â©8ô8>~¨û2ÆsØßþ‰×B—:/§Š/¥³ïu»—g^D]~*ð¤øYó‚–û:U{ÔKöÍ9”KÁ×Ú¯có^/þ øˆšk9ë|©úÐ}“ò<~°îSZéü(]T?ÚÃÐÕŠ×9èy@²Ö©Ö}¿ÇiuD͇÷ø ÿÕ=yË]¯‡ ð xq¯î‹ýëçWÿסǫGJ¾Ï~Õ½+of_Ÿç-]q·çkøWŸÝƒX_vù(ôYðñø¤‘7ê•àm¾Šú„Žéó;ÖÏä<ÎjÔÉuÆaà”Q.uNÝ{´Ôëv(ÎÕý™{½ÞÑï“çaOFýÍ^ÅO|¼ç…Å yèû¹:XÍ-÷u6¿É=iÎÚ× ž†.kŸ‡^£ï[¶yŸôó´«•ïÉ?áâº_kÔ×±òEðÒƒ·cWÖÍœÁ:3í:‘ç¹õ)|åµú¨æì‡ž¢_Hþ~þŸßw^”î—Óy|œ€×TwäyA|Ìüݾ¯pâ¹À=æ©§œï¾þ`]Š· ]Rñ‹àen¡/C¾€«Ä3q¸ÎkÍûþ‹ŸôÕšÿü'}Tzº%z;><ϵúÅZà­ËèN¥Ï‡~A?÷9꘼'§òÛ¬Ç58óIàx¼‡ù¦>o+p žh3ø'QÿÑô•l…ŸÝ‹÷ä¯ê|v§ÀIðɵø=:‡~½š_Èß{}û›àiê¾›7‡æwú£—ÃØ…ù'‚{úГˆîñÄ#ü+¼)>Ðç› ÷ÕœŽq÷‡SÊGZîï×Ó¹ëÞò½®<§xŸ¥žÏäIýôUçÄVCÇV‡¾ ½©>wÚyÛš˜uðRçíð¥ò«~ø^=®.©yt+ß?‹:S®st‡'ê+€×_§â%ÕOú!ᾇQ/³G:©8ºy§ð¬úcÞu³º ê‡â)ßôøèü>U­sñË}?è1úd×Â.ó^@y—¾ÆŽØ³<2 £ÎÁª»—B¯Úíy¤ú;ö;ÿ=§ý¿:À§ðWâÆãХฺîûõõÏþ$ï).×¼¬½®{< þ“¨›íëË•žÏà!º8^½ê[MÜc>)>§î+wž¸ÎQÌ#¿/Yó͆÷0_çÊ߉gx¦ª“Vú÷97ø,p=œP细ßw.–.T}]³Î[×ý8{7–o„>hî}Ÿ=LCW¯þÌ7·Õ|ë3Ý|øs#ð€º¢Î¯®ôü„ïªûçó°ý¢ë£þÜòXÝ÷ü¦Û•õÝŠºÛû«ûÆQ7zÎÏ#ߊWtqn;ôðºÇ5ð7\æœýÅÄ糞?Jg‘?ÂNá&u/šß§o%ïõVßœ ¾æRèÇp7<ôjÔõžê[Xé<‘ï«ù>Ãïsøv¾»î?[î<ûù¨›ê¼Õ›À‡Ác®tóqä×+Ý_éÉOƒŸã?ð~´òÀ~äçYߟ×+'<q¿^<Çð9úÇôuá;3G˜}à…ë\â¨ã³§Á§½\t]Žž} >‡~õq䳺‡0ôvöWà<—|%Ÿ~uŽývŠ|öièøòsÁ/«ŇêçÙë:Dãw<ÍÏø•ø/.á]Ì5ÁwŠ›öI\«yôû=/x.ç7+¾â¦½ÞäWp0ËçÝz[ÜÀ¿}ÏAU/^ýWëÞáçN‡VWÂuÏÂ>Ùïã?âu9t³¼Àsþ2ê,uÅÏ#Ÿ¼ž§îñZýPÁÃunx©¯;þŒÿ” ?¿ÏaÄ/u½´úÞ&o¬:jÜíKüÐ7fÄ9çNÕUe—£žÏáåê\îñðjà}ÈúÏE¼‘g~zAÍÞõ¸«¿©ú\ÇÝ>ð÷âNÍO‰øHÿ"ðé˨ùŸ~nç\õíØïsô±Ù¯ê«Øí~OTÿ^ä'uSÝ{=ê|¯ºy¹ç£#©-zÝ}7xÿšw0ë|½ï‹úèIاu£w‚Ós2x«¿™3=í|!ž ŽToÖüÀqÏæÔÃyú‹¾öýb¯ûSé ü$ø¤òãÐ †¾½Ï_÷äÍ»]ëÀó9÷§.¾b·Û/û'éµ5/l¿ãÒ‹+«vÜí ¿ïóås’ÄùÁÃÁãø:Ÿú_†c_v­{,ÇÝ. 
O:ÿ±z¢¼ñ<ò0]n~¤×g5_!êè:¿Üy¼ÊË7?g“Ð)«qÜã0\ªÎewÿòý®«Ô\çýî?uŽq¿ëƒêyvë—Ó¯.ŸVí›^‹KpÔÙ¨ßOnÅ‹?M—¬~£ýÎ+Õ}4êýƒØ!>μ%|V΃pü.ߊÇÅ».:?„çþ,tù\_µçƒ#èHâ+e|˜8aÝø;¼·ˆ£5×'t£iøÛ÷â3|OKÀçÌ»Á©ðmרïy_ü£#~+tÆÒ½®¨÷Zô8TsP¦OÏêüõ¢?wÝ3:ê:Ãßð!“®»àÓ¿Yéù ¿FïªyŽ£ž§êþÙqßiðYgƒŸÄm†.ˆW‚éZxõºW2ö½î÷^û8èöh^x¡o[þ«>ÍQçßøkûÖǽŸ„'oà®NCgZî|“ó=xÆawun`Þu?ñéÃÐgØGöy♾ãç÷åˆÓ»=NîD<©ó{³à˧ȣðQÝÿ»ßí–ýÀa×£^¼:Üã\/}òRðNtÃê‹õ¸¹zŒ¿WÊËôwu¶<â|ÁÕˆçð]gÜëø°îÕÖïfè\yO´øÀ|Žú®îßYêñ~¬ç ½‰꿨{¶ƒ×Ä¿ÂSâ+ÿW«¯}¯û“üâ¥z˜_ª“îD=\}yÃs9¯ïûÎÿïâ ë^Ε°ÃƒÀ{/®þª½§žDþòýU—_Zý2Ãó»OïŒsc'xÜú¾•þÜŸ†Nò8t9y¹ú®v»Ÿx^xõHäqk¾Ò?÷qÄßû¡+oŒ{¼„Ãá4¸ºÎ§¢î_Þm©ãÐ:?¸{u«|VºÈr·Ov/à¯g"ÀAêýGêfûð(Ö¡ðrÔkâôƒð[s(k~Ör×UÔ[§£ž°o÷£®‘¿OGÝ…Ÿ¼~UýNû½}}Øí¤î»]ôúìqèPâ^àÐAäEyïIÄó<ð^#Þm‡n%¾\Œ¼­ï¨æm¼éñÁûVþÛíû´“:^èÿ†ŸÓ†/: ÞåeðÍpªsÓuoÓR껕ÐÓùÞçØâíõ£~ö»'ÏÆþÀåê5yäËÐ+k>ûR¯—ñ½ÇÃ/îEyº;]ÈyßÒQ¯f~xøÊçê#ª~±½¾Žu~^›DÝ1y;?ÂîÄŸšƒ=üÞ×Á×]‰:ædðZBï¨{:†uzùs-ìë|Ôµu.{Úí§î‹ºˆýÿÍ}¤Ãßÿ(øåí¬×‘çö:¾º<ý¦æû^÷¯-w|,þªÏè\t©g¡GW¾×õxøFhŠu|Œw{yk'òcÝc¼½x£>_è¶üC<®þ¯ØŸKÝÿò?ÿ}4úÿ³êX:.=Åù-ñuØÇù£®÷è3¹ü;Ý^?K{Šúéxè”xQç Ù7œ„Wºúîíà®Ì{\ÓÿkÝô}ã—ÅMߣ.ð~óáÞý úG^-u;×ÏäûÙ9N½{%òšø¢nÂ~ü¬ø)ß['¸–}N‚wä¯ÞÿJðŽô(õ¶u¿ñg>êº8¼¶ùÇ¿ÓwÅ·ó¡¿žu;Ì—«~”áç–C—ý§àEüžûör.û|¹ÛëåÐ7àõÕ$üañ×sàÇnÆÏ{ÿ:W3ï¼:þJÔçxh~dnÞ4ø)þ¢ ÎgÇ¿Œ¼üNäSýŸö™ÝOƒWµ.öqþíþ½ÎI:O„ÿã/ì{ø†ž±ññÃùÛíBž«ó:“®ßÞêtà~ýU¿·¹g6çݪÛІßÇä[þ«/ݺYûg.ÂñEîÄú«ÛÅ_ß/UWø{÷ømDœÕ·6‰x},x…ó±žx{|BÍ};èùŸ oûtà‡#ŽØgö3 Þb5êþ‚ç_þÞüÏçñÞgþq$tcý>.>è7½ŸûiðÚÎ1áëá:¿b÷ìo1 þ±æ¿ºŸ3‹Ï÷ÄkÁQÎK›·¼*^P½Êßœ÷’·ìŸüJ¾±Ûqˆ>ès¡×ŸŠzæ"ªÏÍÕsîì|äÕ÷ç}¿¿Ž¸¨? /Uüá¬Ç%< »fâÁgá÷â¡}Ô‡£~¼¼â·#.©ØQÎ/Sgçy"qDŸÆ4ôMv+®³¿WÐçìq ïà½æQzžŸÅzŸ }]ã?ðÊ«á™~Jqæjð,êIxžP—ÎÿµÇ—ùð}¯öûçò?q™.ÃþއÈö ~ðú´ô;Ëâ¹óßö³pÉrïò;ÿч¦žÎùÏp-{ù<â<×é¯Ñ?$¯–>9ïñl5ø^ûiÝ.„[ß•ðC?gN¿ò§¼%>⽇~Ëóa_pKñ$óÀS{=ÎÉxM¿GŸù4ö‰¿< Ý/u'ùGœd§ìä§/áŒÄ9…c§]‡QWù\ë$Ï»Gïhä…Ëa?t­—=¯âáà*¼;ûÑ_iÝÄg¼óÑüÈ:Ã>÷ýˆ£~^Rg›ë þ\‰úL}e_­Cá%ö±èë«ÿË:Á›çƒ–_¾ˆ¼'<yûuèX5u·ë*¿ \HßÑ'êsGýk¼¯ü»šºÌ¨ÛéÕøýiðξO?ÔéÀíø®kßÜŸ7ßïö oÂùe·£ŽkÏ¿è=_½éþkÿé‚úìwÍg÷÷¼õÐÉÈÇü£æžŒúsf]$>ЇG#ë/¤ƒÒ•à5ç:/NÕï÷êÎWýº ~·øŒq×¥­3Þd5ôìsaß¿Œç‡èrò±¸äŽkñ|×c¿Ù‹÷Ò_óUÔÏâþÒœ~qØ|3ýÕεú¾k/Ü“"îT| óË¨Ã¼çµøùÔõé+ÎâÉàçæñ"?ª¯¬ç«o÷ÿ~õ†8öeÔ/…ËþØëAxXÃûþ8ö[^Q¯Áêí »gOçïØg¿˜o¤/üTÔ©ç"~²?¼ž¹GÞÇçzŽŸ…>È«/s9âTð^G}´zÅ¥´çyÇg¥+ûûØoùžîD?¸º·yƒçÇW¢u~îvðMÎÁ©à”÷îô?«ÓàAϯŸ/²k<>¿8ÿ·ž~øÈ¾ãÝäa÷~YWõÀ#.¯ÇÂR×àÇmyqÚq‘}ÀKýsäS?¯.¨~½à«}ï¿„ÎÂø%~ðyðWb½Žî×ÔÁòš>¦¢.´Ï"N”ÝõÛÀO§Â¾ÏDo]ÅIϹuÜôêLÇGâ$>^”·èüm|œÀÿÇ—ð§à_éõù‘¨£ï„nì}Ôp\ð$ðÖÂŽÔû5÷y©ûûÓˆ÷ê<öw.ôx\Þggußæ¸ÇexowâkÙÃÕÀÃt}õËÑX_óâõg }’]¨ƒœïºõäJà‘ùaßç+‘_áŸ{,ž—]W½¸Ÿº<êÅЇžE=AÏ;u–:;ûÍø5>à›Gýy>ÿ/%ðž¼µüöýÐàñM^“/Ô7úÌü7½U}öqì û{qØóÝçÙó™À)ú)ÆßÔc—Ï»ÎþrÞñ€÷ƒ§ÔžÇ{™_`Ÿáƒ:oµ×ã°zÏ!þ©{øá‰È7ðüZs°÷:?ûóX÷ÂÛ‘o¬3|„:q⽨s¾ ~O|½ùR_˜û .ŒÞ®Â_æï}uûæâNò•7ƒÿǧ¼ú+7ŸìjàNùÐú×¼ØÃþùÇ‚¯…/œ¬z~Ñ×Y_R͵Ûë|è­ÑÛuªŠ“‹ž?W‚> ûp¾[ߥþг¡{Ô½@ƒ};glçg¢.÷øfŽ>\e]ŠÖÓy“ó±Nkáp‰u÷Þpß|¯ãÁ«úCþ©{ç'=î™oä}Ä¥S{õÙ²çÓ¡oÉCêõ/ƒ×ºyV·8‰©? üôEò‹»ÝoáÓŸGýt9xë†P'|uÓ¹àoíß™àUÍ¡G]öuðCŸF~¾|Þ±ÉÛùŸkQ÷Â_G<€+?v>êQýî_G=+Þ p$ôXçײî™w^÷TÄõ#ÁÛ^ ýùTð$ÅïOz>ýnðì¶îŸw€? ÝöjðÁÞO¾„‹­ƒó%ïn¯~x.¹:É{Q×9§Ê_ÿ)x¬„Ý\‰:Ýzò¯‡Ý×pIݹù;êðK±_Ï®K7šö8àùõñ×QÏù9}sßJû™ô8ü~ÔµOïOEÜ´_~RW‰¿ŽÏ=8'û%ðꉎ禨ÏõW ÞD=$¯ý(âÅ«ƒþ{߯îòœÐÙà+õÉÜŠúμ9ï¡ïôVìÏÑÐ¥þ†oÚí<°xýÝÀÍW?l¿ OyOùßçÙ?󏨑<¯_g;âü‰à ñ$âDá¦áï—G=¯œþý{ÁãW6BßVгüœÎ%? 
~Éy¦úœÝÎw‹ßüþRä v¡O2u4ñÕçˆ烷Ÿÿ©ãç7_D^ZçÌóüâ©üy!â(¾ãrøí­¨÷ØÃ“X}ï–;/þíÐwÔüW<„SÊ?ÄÏÝWàîqØ¿uU÷Йœ¥ÑÛy÷WÓÎ'â =—ó—øç\<ï/âóõq³ÎÍyvu1øíkáGú¦Ÿ'¿oÉ/ÿux=èõû¯î~ùNÝúð½ò“x~>âýùØ·w"¿Xõ^UÝ þ…óÅvRç#YOùÿJ¼ÿjä?þŒz½ÖùÚï÷øZ~hN„~ ú±¾Cø‘Ÿ¨»Ä÷ïÅó°³#‘¯ËY <¯×=Yý9åùð<Î+Ò‡ÔÑÖýt<'¼Vó…†økž„ç¼qæDðì NOØ…s5â„:€ðY<Ï¥°£/#þâßO‡ ø*òÜ#o]»þ(x1v«ŽÏúMÞcú0ž‡Þ^zàðïOCwwÞO]ñ~Ôx.ëY}!ãGŽOp#øw8Ç|ÏóEÔéìå7‘?íûåÀŸG=®Þpn³òý¼ó©U7D<3§K|øÚó:_‹OX:±úV÷ºÞ{#x)<_Õ_gúº‰›êFüÐÏŸßyoú·:¢òã´ç}úâ¹ûÛÄ›ið\/‚Ç[ p+ôu8“¾Àìß©¨»ø!;ÔO$®= ^ïiðb—ÂÞá(ñö½Ð-ÏÎ`·×#Ï×|ñƒŽç¾ %x¢•Зأµ×yòz®QÏg‚G眫SGˆ«uþ*xÅBO¯œ£…ïœßŇü&ørõæÕÀsâ„s»ðž|ÃÿäÕŸÿkÿ³žS×ÜÙy×ÁÄGßߨGßû­À¡¿ |„ÑwŠÀˆx~.pÈóà‘Wÿ ž³t·ˆÛêüË'ñ{O£îÓ;×±qÕþWø4tèqÏ?W#Ÿòwö™=¹>W_«zÃz×½¯‡½îM=WýXúƼãQqï“à[S)]aÜ>¸ø®î[~<Üjàð£ÁûæùCç|ÍóÂ÷[¯Q¦Þã<ƒïsº‘¸b/e·u_xè§ô±ÛO6ƒ÷?º„xæ=ÅáÁ“Ú'süØ{݇¶èöª¿#çšÍÿ³ëå«¡ËêCßÅõ÷#náéŒøÂë¡Y—c¡kø¹ù¨û3܆t¿ËÃà=nÏ ú\ñøë°ßÁϪ‹è¢Îeó[çÆ=¿¼ßê³N>΄—//Êžá@yOu2ôvU÷÷¾éùÄþ{O¼c?Þëï‘ü*\·;î8ʺÁi“؇£«uîÌçþ$êÍßDÝp,âÖjàÛºŸj·ëJò¢u<õ¾x _Z?çläÇiÄ—¨CÄ5ß×Nc?N§ÍÕ9q ®½õ»óOïǾ\ –þPþ:ïyŽJ‡‡¿.¯åùkNÉJ·cs†ÝOŠ€Ï‹g?èñÕz™×p!t×ìK.{ ^¥t‡I?ø*¼ý/"Þð7ü€9Kt:ë ÷ß >ˆ>gn|C×ÇGYÿWÿÑý÷Vàà¡o»wŸsû ¯‡º.o¾ù·nïöáõ·;/[zJðµú™¾8h5tãºouxsVW/È7Eœ€;Ý' Õ½l£Žëţϗœ ~Þç°ƒQ_±£Œºòóx8Ž~y*x8õLägù/§üNàäÍÐ=^¿ë@ë‘—_úýÐÙ¾ìhð‰øŸËQ[7|Æ•À¡úL;‘Ø­úJÜaÖµú&ý}ñ,ž ïÍ7Ê{TáoF$Ozo:;ÚŽ8WßÞßt=xGqFÜ×ÿêßñ°tÁÁŠ#êA8ðdÄa¼•sPß ÷[“ðÞuïkè¥ìôjàÅ[Á/\ ;7¿þÁ“ʯü'ßϼ·‘þ9xq*çUøyõÃÝÐSýÜ‹¨‹žÄûœ8vÛï öýQð™æi½‡ñç›_/„¿®½—ø xß+ÁŸyŸ:'1ïû‡ÒŸv$ðŸu^ þÁÜâw£Ž?uÍYKýo·×±¥Oº¾¡Þ¸õ‘}/ùNètÖc˜;VzòNè0Gƒ/û(xú:w1î?w3tóí{ÕWÝ^> ~E¼‘o|½ß4‹úìeàíìã/>O¿û— Ôýê ê/Ûíu¥¸×™· ?ÿ4ø ùY~yõï}]è ¾çRà«ÔÙÿfÎFÄñ»ú¶£^Ñgƾ?'éwߺ¡Îï¿gôwøºX/z,žY\Ùÿ>ŸƒßNƺø£Ø‡µØç“¡âsëœÿ~vu1tü‰úcgÑóЅدk‘Gà~Ö×ïó§S‘ïß<qé˰W¼ü‡â#.…. ãSáù†žÂ¯‹úVèÉÇ~uJå߃Χ˜¿Áo'ßrNܯ‚çºüŒ¸x%xJ÷Õ}sô£¥þûòÀ™àÇjžZäõ0ÿ'ü–øõiè=ì ¿óÁ[˜‹)ö¿ðú~áUà7¼~Ù>Ô½Gv{½^wâ®î3×Ê;õœ|"žK)ÃéîÝ´o5ÇfÜõ‚ìÓ燗‚Ês¾â¤x­®Ãoˆxö;ÁKd?}åLäÉ ÁËŠ“ðu­Û´û…ýç^„ßïþµç=8érèXß\ /˜»ñIøÕµÐ-᦯ÃnÌs0—ñ;¡Uö´Ç¡KÈÓkQoë OâQœ;tÏœE¯£ÿØ·ïG^·Nü×üá3Q7úûÁœ ¾;Ïëg_µ¾ z<ñqð+p¨8¾ôwô¼%䉃Ùݨÿ¾ë·ÂÇ‚7Óo oÚ•ðsu‘}Äÿ ÏOÇùv¿¯âýʼn‡Áý ôô_D]¤/œ=«#ë^ȽŽG.źÁø—O¢Žuÿ‰Ï£×Èÿß÷¬üºþø|è<úçüžù pÄ?/ 'â×äA}B÷£®^‹ü:›ö÷e_7£nûYÄqx÷fàça/†ß] ’ýÚ‡k‘/kFàÞ£¡[OÇX'qáDð‰tVú¼¾{uºç†/áDõÇͨÏé›Óþüòß]sÆ=œ žºÎÉ-ú~Ûÿa¯¾ßqć¡ÿfŸ:?•ýaè_â¿þ}¿_¿kVc­ûõÐàŠœ›V÷£ÁÇóþyæuÀ7âÔÐñœ×Îs­îEòû_~¯óa£ÎÇž:_‡ß÷‡^ñÍŸzç^?w)tp|ÝÙÈ_ïG¼’÷Íé;ütå‰E·gÿŽ/U×¼üA¿Áû¡Ãó·[¡ÛÑÉJ_öxQç±çÝ~ÔÝσOÉyâ‚zÚºYsŽF~-xÐ÷ï‡áßâ¥y“©ÿŠ+ÕÇxÐçŠ'çƒß¬¹‡QÏ\çÅo }çoôÖI¯·éPò‹ý>üÔzÄ1}“âçêâíñîòþ+QÇNƒ‡?ùáRèêp}>øÕ‹?~<ý‡Q‡¨—Wƒ·?×ûyÏëÞKGÇüaرséçƒWà¯Öºøuçôïä=þΟ¯ò9„`¿ùeÎ÷~ý¦óŸÞÿí|äoût*pûG¾ ûÇïéo8ºÙhÔùh<Ég¡?ãÃÕ?çC·xyh-òØ×ÁÃä<՚Ϲè|ÆwBo¼õÉÕÐMÄ5ñHŸÄ7‡ßõ¹ì ásì3û‘/ÎRÿ‰«îYòžwÂ_jNθï'{©y¤‹°¯I¯ÓÄ󓡃8Ÿ /ËÏâßjðŠ×#ª_Îì õ^ =ã\ðè?ý}¹ßíæŸ‚·Ð7&àýÜ›AðŸð›~%ù nÑï gš †·tžLžÆ‹%ïÀ¿áPç&O¿u1xÄÕÀck¡œ½¨úÁ#ÎÔ9%ñ}·ï>Šù=ç‹=¾]}¸K\•g^νþ\s|ýùò<ˆºäÛÁWðOxÙ¹é×î¸ò'7OGÿQð:ÖQüQÀ£Þ£æó‡*ÏŸŠÏ·Oú÷Þ ,þZ×é‡Â‹ë'<ùótàtñÿ|ðâôO£^‘wàÊO£wð£ôŸ×K½.Ðg~$âÔ{ßáij‘?þïSçá§ýßÕÿˆ¼'Á—#¿â-~:°>ðÏ‚çº:÷vàÚÓa7u^)øíW‹ž_knÕ¢û‡>LþªO_ÍNODÝ_¸p·ÇKù6õùñhÄù­I·ãš¿z.¿ýEàŒºooÒ¿ïQðgt;ñ݈ 9§€¾ÊéCìg¿O#~|;tä«®/îÜê©ÈãO#Ïú€{Ù<¨~ÿ8ñÐß±Cs'ŽG=\ùÈã¡ûœ·½Ü¿G^€+àøÀmW÷|úøw¢Î÷¹xu¥9GC'Õ•Îß þëh䛚«~zÓ¼ãfùëDð6ìÅ< ê[ÝëöˆÇÕGRs&×Ë›þÞ{è“4§n†'kîÒ¼¿×?y5ëûq$âéWQ祎z>x‡<ÿÄÏðÙÖý'øUÍåžö?Åeö[óoûœk‘?à7xnúað”ïGÓÅ9ý2y˜<«®´âÑÁŸýóZàþƒ¿Ò€_ü2â?Ó·n¯Ò{탼öaàÑ«Q/ß›ô8§®ÃßÿsÔò@õ©O:Þ_¼ÿauŸïn·§º·uÚýIþ3O•ÿÔ<ñh¥ç×qðmð1>\Ÿ¢óçßxÈþÌ-†ƒô|uÆ4p·>ÔºÇ~Þí—=>Á~žœ¶uµ¸O|s:ô÷Î p¼üô£Ð3N.æGêqêHèò¿ ]âëàÏŽç9¬ûÇ¡Ïһũo‡Ýš/ûqð€ðþé°ovoýÝk­þ<uýöJÇ/§FoçÏz½p6tîù¬Çç÷âó¾ëx.ôiöðËàµÌ…©} œZs-&OÁóÒ{àXs·o/"žàÿ¿º ·íWé©ã¾úõµ= ¾VÞS'Ý÷Ï©s§ÓÈï£nïtyùKß~Ÿu}\g>Šïó\ÇÃ_ü¾º’îú›ˆGæUˆû[QÏÝ}K>8úÊ?s.âû‘×à¡¿±‹¬W~¸÷zĹ3Áƒ}öi=ß ¿ƒ³Þ ~ç\øÏ·ÂÎ婼¿R§_|ùÓóVýº±ûWŸE=á\­Ïù øYŸkÍÝI߇¼ïÕyZ~ WÞ ÞëBÔíò䉨¶f=ÞN#o¨ÏÏD¾7¿0u×:_?ïùÞ¾°ÃŠïƒ—WÃ>åû{aÇò+½R}p"ÖÏüíÒ/=¿ã£õEâïD=Ê íöçñžžÀÿòž:vk°?}ò¬yAÅÏïöúꓨw­÷Ó¨#ÖCgßðxOº7\¥þÈ>CñH½€O¡Ïÿ$øn|&^ÚúMâóNE½«þŸÌ¡û"p‹ÏyxËù©œs}1â§zN½q·ÿžüÍ>³ïø³¨kä ñÃ=g[»ýùäïÊ÷ß ÞYûæ 
ó]×Bÿ6·…Ÿþ8ò þ„n£¯«¾Vç-G¿zÈéÈë×bÿõ§nq&ö=û˦ÁW^ ÞælÔÕâw☺¿rÑë9ç©îF]›÷Ô°ßšËôýŽÇà½Â¼ßµ°óaN¯…—ðœââ7ßïþ¯Î1âèoGü´Ÿów;.§ÿÀI>÷Bðtp¶ø¿¨ºk·ótò°¾à‹Ás™×ù‹¨/ÅãâÃ~ëÞ)z)~$u¼À_…÷> >å|ÔWìëjØÇÑÈ_òOñÙ“Ç> ;—«® ís'QW=n5tåb?ñW7¢ž¼ù™_‰||;òà—Ág‹gæiýd‡W"¯¨£kîRÔwð~âJÔ[úűDü[ îÁ›§¼¤ú p1⾺øÑ¤?Ÿ¸v2âFöqòñúhð±üÀ¹oöê¹¶cýá:õÿT<<èqýÒ~ypõ4¿q.E¼o…þp4ì/¨^Çý\Ý›4îu™¼éÜÐÃÐᮆþlî…ÎÆŸàû _½êÍYØ“u=üÿ{/Ä1ù‡¿Õþ:ÏêŠxOxøJàJŸ£~¹õÝå°×åÐSó‡ºv©×§ìóÕ¿uZs¸æ½ž¨9gÁ[âc~öu)ðÜ]}Éÿñ ¿ »ág×_ÉKÿy¥ÎMív¾Ò>^|S<ÅnÇ ï/ñ<ø³yàÓ×ìqúbøCÝ3;îñt5xºKQŸæù‘O‚”‡œƒÂ5ú‹¯îÁ'ÜÝíqýEàxJÝè¹Á¯Ä_ùÆúÿ,ð,ûeWå»=ŸÒã·†Ïs¾Iþ¸þðãàµí»õù$Þïyè*5aÞóù§ÁŸ ž†%¾À+óÏˇøsö‰þcÄv÷Ußû<Ïu!âÒ•ð£•ÐkñK©“|ü¿÷«9WQ¯¯^§ž|êäíulï^éùáRð~_Þ9ÿî9¾ ¾þH<Ï•À}ð*þúÞr·×{ãŽK¯F¼ðs·B’/¯Í/¯®Î,]8üeëeŽšúBÿÒ¶zuÜ׳òÝ^׳̙ÿ<Ö5σ »?ºŽ<õp¹ë1ÖAÄõ¨·.w:øg¡‹~õÜE×~u“ûjÞSÔËynðeì·þqßg~iÎß »ÒmÙ©sµ§OŸ >ŽÞy1tæ¼Gõfðh‰¯î†^^uå¤Çyë8Ôí0õ³ë¡²u¯¼v?ꆚ[=üýFð·ìZžQ7À[Ó¨ßÅAŸ[sˆ‚oÖ÷/þ=œQóG=¿ª;üÞÅÀkÁ{©kÌý¾:ÃùÀÙú¤³?Žàâ÷Zè©úB®ž”ŸÕÃÿuÅrðFžß=×§CÏOÝW&Ϥîþ*ø¡/‚™:rwx¾:§ã¾íáaä‡q¯'?›[ã¾Yö©~9õžs?ðŠþÏ}'â¡ßûQè옎œõÍo"ný4ðГàµå•S±.ÏUÕ$êæÀÝ—‚”ÇÝ_¢Î\ \ñ^Äù[_®xTç¥c=ÙËÑàŸŽ…}Ú?~{%t,ñ¢î?õ:xùT]P}:£îïø|ü‡us꛸À{ÿ(ê)ïË.~¼”üüúß»NYý1¡[Ê?5jÒñ+^Àxý±Wâ=w&=¿‰3Û‘ÿÄu²u³®¯Vz¾ø*ö n>d¿äÙàCCU'ÕýàÓžWOp-t%ùÉwuùËùÛqËà¯ñCp£ý¾ŽÏ¦Ý^‡ç¨ø_s¶×$Ÿg¿oîËyΫ‹f‡½îuT)þèÇ_ϼ:Ç/‚÷:~·õüÕà÷~y‚?|uVÍÙô¸b¶CWq ÏáßtêˆãÞWÿÊÙà;¯DÞýEÄS}°ð«ý¨>­YÇ]ž³æ‰L;Æ·\ øLäOý•ÓÀcyžòÛÁOå\zv§×ý.Á{ç¼¢³á¯§ƒÿtAéÃûβޙôçæ¯úùoF¾yg%ôøÄÜ/Bw¸ñÏúÜYt^GÞ[ SÀ÷Û'þô"p*3wéhè«üÿ#߈—æ\ =i¼üì^ç¬_áõòËåÎÓ~'ê[önýæáÿ§ç;t3t§šo8îxÞVÇÓ—3ÀÛ‘Gφ^p<ø=yîTìÛÉÐ}Õïî]À/ÚG|*|è¼Wá²qÏ¿Ÿ‡îâ<ˆs•—#ïâeíGòìh%ô]õËõÀçÉ—ùùtð¦úyÌãÑG$¾Ïбô¯Fžæ_ì"ï7[zDžú:tlxþdò7ñ|ç"~àå> {^žL]+~¾úÜ þ{k¹.¼`Ί¼É?æïtûÑç¢poâž­Óç/Dt'ô¬š2îë£Ï@3osè‹üß÷ž/º®ñ½à ÅuÕB?—ï? üx:êxzÃZäi}…ߢ>û ôWþŽ5ß®³Þxþ<§YóWw»Ñcõ«¬„nÂ~Åeyã \ƒzõŸû¯ÏáÜõ‰À ?Šú¶ú„÷úóÓO‚ï`ÏÖ[¿v‹Œz!û¤­—øQ÷¡Î»žnËð[ûö,ð剸9zˆüþIèBðåÉŽ§ ·F>¸8I^ŸNDÝþ"ð›õT÷ø>zSÎA¹õªø„¹?üû|­óÇ¢î€+î/ìç/î¼üg¡;Üp5tuŸs.p°¾[ß§ÎqòËàm=Ç‹à7O‡}7êeñNâ¯ôçÄŽ/…¿'F}v*>ÿJà'z‡õй¼áéÌO‹Ès“^׈?êð¡_ ÷rT\«ú?ê7ºÊåÀÃìÉßëã|Ÿ[ø`Ôãóbyî~3ð Êþþ:âTÝ]ýúÓî_ædŠ'ê5¸Õ:Ó]áTx(ÏêGø¡ï­sØ‹¾¯t(ù„}ë7f¿§ƒ×Î{!ù‰yl"_þ.IÞ¤æžtû?8ƒ¿_ \p:t<ûë9Í·/Íõù,òï¹àEÏÆûÃ9·b=/…¾$¯ÿ4tHqBßižƒ,^oÑí/íBüü»Á'ÁåïǺ;g~<âóÅÈgӨ˦¯ðgt—÷ãçÄïªÓç=^]^úJðqu®kÜí@ÝPsC§³¿þûûñö×óûþ3¡sâOÅÉG‹îOÖçLÔ‡ð‰{zÅ«êCÚ‹x6éñäËÐáí›ýbòø_}ûk¾¾t!êCø\^ÄËþ8ò¬9Ç‚“Çù·zþ/øSüœþ™Ó™¿‚Ïóûú_åQñƒ_Îâ9ÙÝJð~üëTðÛðªûð¼ú$ØŸ>Ãç/6¢>¯ú‰6ëßs2òÊÃàÉ~¸ãJàQëþÍPß}³Ò÷¼€8ï=àæOB?z;U=^Ð~FÝϲ®óßú†…Þ‚—2_‚®‡èŸ3.·/ÛÓþ¹΄;jÎ]èkìKýôÁ›È7ô-ùÖ:™/¼ùàY詪OÞ ~ñjÔ­t¹oE¼cw«ñþÞoþçÏSw§À™ô\~8Ž}Ä#àŸ>þäHäc¼uùͼã{σ}Çþá©óÿôÕÝo¯ª‹ñóÂίǺ°Wï1àߊç"o×¼ÊùÛù§k±>O‚?Uw>ØþŸžM^Ož†Îw>ôŽ£¡çMþŽÎ#NÞ û¯ó"‹Ž+K~^ŸMé±û}_F¼sϱsÌxduŒusÿ‘ó†pÙÕXG¸G¿¾Ï}y$Ï=ò7và~#¸Š=ÀëÖI?“¸¿3z;ŽÕwŒÇ¸¼>ñEà%ß“çiO„݉G# =ãvà[õNõ÷ügÿo~¨®¥¯Š/æV<]ê¸p#xû î›|ßjð5Âþ/õ8¢Ãó9o(Þ}úÂ'ÏÙ‹þcûSóTßúù;SŽÆúÖ¹îQÏÇxù Qs†§û9ñV¿ ’ç.D]ŠßgÏþ,0p·"Î×}5KÝõqð7ø ¾§/^ ]1û]*ŽOºŸT¿ä^÷ûo¯ã~Î'K=Ž­FÜûAÔst<ñ§î!÷¼·:žoIßü0ðà‰°Ëš¹èqñýÐÄý/"Îÿ:ô‚¼ÏA|¾ñý|Ôuò>ïÙõ9×?U^ “ÎýÛàG¾ˆüޝ°ÏyN\õ^è´òôÙԟ黇VŸjÔÓWƒ÷¶oê®­7½Ž…îˆçÆw|ñã7¿¾ϼxûŸbýÏE}«ÏÎ~ÙW|º{º~º;xvËß¾9Òýƒß¹_MÞøçx.~¨^Ç—}ù¶ê4úÑnÇï÷£¦ _ M3¿¯y»‡'É>LöaÝò|]Ý7…§ ¼¢ÎY üæ=¿û ‡™Ót4ò?<öAðèU,:Ïs*ö¯î+~Îýá“ÀüßÛ·‚*ý~ÒqÀ4â(|ÊŸ²>Ô_Éj>×^ÇÑçCWПw/ð”óöêBù§ÎoGýF'þUØÓõÐ_ò^ýwu¿Å¤ÇÙ¯"ߊ‹ðíW¡ƒ_ ½røÒúþ*ôbõ ¿Ñ+îŒúãf쇸É>õ/²»ƒŸ×ït9p…8iùºTßÄü°?¾¦÷ƒ§˜;Þ¹yþTè®ìĺü6ê¤:—ø þ·oׂOƒcÌ¿€s~uëµàM3¯}'ô™¯"½õ¼ý,Ö·æÎF=?ðÀl©ÿ|%TŸÜ¢¯ËѨð¾u¾tÞ¿ïtØ5þCÜ„—o/%âǹÐQ®?áóàç‘}ï«wú:á5Íõ./ è[äµ{{½>°ÏÖç£à¡Ôçƒwf‡xÎk‘'N.;ßs-t¡š¯÷ ß«_êy<Ÿý¦ðãÓ¡Ÿ'>™„]Þou5ø¦ê;Ýíë-_ã—|þ»¡ÀòÇ7ÃÜ }‘þýdðºø{º>kûM¯7j®dÔŸ†=^ ¾Öþð÷é|8|èc¨xe=>‹úI~;u2âEèEÅ / çé§Ugæ¹®_GÜ©sbÁ»Ë“yïx©®¸8Çûÿ,ô&q_|rî^–ú‚«?à±Oà}oG=dý.‡.÷ƒØ?ujÞ;§ÞUŸÖ|XŸ»ÁO¾úÕ½¥îÿ—¢«óÌÁ¯îvœªŽÁ_èc°þòÖåàÞþÒ>lF|;ñþX¬§õÊó/#­F¿œ‹úýrð}ÇBgüuècöŸ¿|¼Å¹Àß \oÿùûéÈëïÎñ&/ƒwû÷xÜqCµî¿¿ƒ×φKþ‡QŸß ½ù\ðíâ³xOÏümàró 
ÙÙ4ôfü©y?ðÀZðø5o5p…¸Ïø‰þuï#îŠ/pµxâ>Û›ÁWàÛu\͇÷º¶îUžô|#OÑk/Ïs:xº¼OÁü¡œ?w9êϳ‘ë8Éù’•È>÷'Qâ%¬Û4t8’Ÿæ½YOï˯âí$¾ïrø×GQ·À“òÏÐÇWùð³Àuâ9>I}ìßÙ§ßcÏÇBŸO v¡¿þ=NŠûÇ"ÿoo3»<ºÎˈûu¿jÄ3v·Þ½ïÕ¹÷à_òœnÚ×—Á·«CêžßÈC§£Ž=zïZðøÛÏ£>9z©Ï¹zœåžªƒ®þØÐm¾þ/ÿóßG£ÿñ?K/Oäv"ÁÿúênF^çÙ7‹Ï5Ÿ×}"›“¨¯‡ý˜»žãùÔ{ø›ÂÇ‹Ž…?á׃'ð|ÎYáeîN;£ßt'x}µQ_âµ.‡¾ã羞ۼYxÛïoƺ܊zb%øs-ôMºÅ›#‡ßÒà}ê¾Íq¾ãá׿ö]º€];¿J/ó97‚‡¤ëÔ9¸á{.õ|ì}ä­:ÇpÐëtç}bwx¹Zï¨Í{œGÀïZó¤òžpùè—Á#Ë#tŸÍÈOôMqKßRíägó·ƒ·þëwÔ_ìÏù*u‰¼²|€óCw"Nê'®¹ »ý=Øû݈·ô„¿ñËQÇy?ˆú¾ÀwÜ \z3p¡ýÝ]Fß»±Žø×ŽïÔMb߆®ˆ§ÀÝ ÿ27úzð¼žÃþÐQúºØÏ:_=ïvz5ðŠó×ú™6"_éî1)ü žò—­/è÷²׃ï“_n†ËçÎû;÷ís=×¥Ðw~:ž÷¤C\ ^Ç}(eÓ¾.;ãÀñÓî¯a¿pü´zƒ9sÖ©îå~Oßš÷w®ªæ0O»Y·­ÈwâûfðÕp´Ÿ“o=ÿõ¨ô¹¬?/ÏÈ>—ž”ñOœæ3uéa·söw>x }“x}ÉìnyG] '¼©|¥.ÛÚïßç9ð|âœýoĵ?ìwû±®p¥x)_Ü ¿a§›»}ÿØÍ©˜vüºüïzÔµxøSß“|„ßÿé—pPƒŸtüÃóåÁëÇ6"³£/£î·^›Ó¾ŸøJûg>ûfÖõÃû½ú÷¾?ΕÚoûêsáþ»4êxlöa^$>F^½õ­ý[ûÅ©‚ÃÿˆÿìNþ¡s©×îG])¿áÁä—¨¯oW}'x-qŹSñ§Î£È_áŸüþƒ¨O®F|(Ýp·Û‹>ŽêW÷uæþœ…Ÿ^ žDž¸¿·½Û÷Ž]5ßwÑã:< wÞ þ»úG‚ßS?Š/—$sÑ®DÝPó—ûïyŸ? ï«ÿÍ÷þ~ÒýãJàkñ¾þqÔñò{‚ãØ1ÿ¾ñ¾´pËà7ᯚ»3êþmŽû«ûï&=~úù㱿W#~Èßò¼ý’çÜÇ'®~ó¨ã{x›~ NËÓ›ÁW½;¾õ­õ+>{Ñu :ï©øû»‰Kç=îÃ'âˆóâ¡ókøåœ[²qðJ¼ÿ8ð/ûÀ«Ì÷º½À™Û=¾ç½–p¤¾çØV#ïV°èùÈ>ñ3u¥ï©û.‡ßso‘øoýgQ\ýßSssç=³›+Q¯á'§¯VCzq¶ìc·ó§¡»=u›_òqàĺï>ø$8žUÿÉŸøMqY¾µÏtWü!ýk%êëì½ðc쾨{Í=ÏŸ‰ü_uIàUóÔt }qôèÌËYŸ§?-zݬ^³/ðù•¨ñ÷Bg…§áa?/¾d>„Ïä?øñnäíª:.X žow.ö×{W>—ßF=žÂÍ¡‹\ œJÀ'Õ= ã“|¸ÝóÝ ýJ<¸qÂÊÙ´ïëzèmêÚo»=Y_}ùìÅ})â~Âó;?PüÖ¸ï¯xt=ìÆãÉj.œ~Ðó—~†YÔ›õ\ÃsèƒPOX_u œ Žà»_ý¹ã%<>ìHÔ—ƒ/7/B<¥ÏÁU/…½¨¿ðÉúhíóü ëuÖ×½ƒìïvÔMƒ¹õ¨ó`UÅçdâ׿ôŠÓ§b]¬·õ½ö(¿9§¢ïõØð9¿{§çW¼@¬ºçMÇ«ÕG9üÜñEËwïáqë%î™çI§e—ìn w²SÏëç6‚W¿üùµàoëIðöŸDÝf®ô•à­“þ2v¹_¯æ‰-:®¸x¤îožtýX]ø‡ÿ Þ"x«ª["oÛþòèMïið؃òËQ׉ÿêZó ÄñÔ\ û‹‡ø$ð,ÿ(^å°ç¥Ð àrö;‹ç¹~×Õ¼äàÿ|<üNà ö .ˆWÅ?O;ÎÏä]ër<øœˆËîYünèóÖ÷Rì󪡰çœËªþ¾¸ùVð#7"ÿ›/Žï«¹TQ¯¹wëlÄý͈Oê&8ø›3]‡`ç[QGócºüfÈ«Á»<˜tÜé¹éAâ¯óöõõ‘Î+˜cr6ò?~<‘zʾ°ãÔAܬ׭à—j^Éð{£ˆ?ìÝšÓäþª+á_Öå[a? ¾Nß§¼)ÞÔ|…¨oäÕÕà—F¼Àƒ8çr7øñ÷G°û¦­~ãˆã·CÇô½p†x{-xUçå3N~Þ}»N°|Œø žØíøp=xuëËœg¬ÏŸv%þ±küÙõàMàIõ³~ u³xñÍŸúzÒ‡nÄçäzlÍ{½T¼Ë¸ïJàOç1_{=·¼ÁÍÐ=SÁK°Ÿ»Q›ÇîçÅs<*—×Õ5>OŽýÞ»”'áëÇ•oÖƒ·Áï®»û‰tNó‡a×üà¶"_®Ï+ê1|ž:NÿÊFà>ëòqı÷ç_ºÛuÅÿ]o ÿîízÄâ…™okëœß´Ç5sõÇ­GBGºu§ø.ðÓÐ/oÿIð —‚·”oáïôŸ©C¼éßoùüïèðð®¸ù~è_üÖ~]ŠŸÇ£ëãaw׃',ž,x{u—>…êOœw½@ü…Õ?ø@ýÖwB_²/Õw=ï8¾ºÃÉÜ_y4ê±:G|ÐíîVÔ5«Q?×|<ߨﳼ Çéƒo×BÇIÜå÷ñŸEÜáç…ç·/øPöÝ ^ìnð¹wBg†#Ï®a‡ú oŽQ?ïG«y«ñïô¾qØ£ß+¿:èñÂù‰¼ïóDèEÇ‚·öó+£Žg¾ =÷VðkQ_ÁÖO>Ëû}­Ë`ÏÕ‡¡Ž¯ù³“î/ð(¼´xªú²‡ïýÙ¾ÿ³à~ÿnÇmös#üäFԣ׃—ǻҟð=yŽM¾”èÁ·Ù'x.ÖY÷„:®S÷ÿþ‡nZ œ]|ÄnßþV~5í8¯y?òÊïf=ÿ¬OÑü;œÄ?—~:ùfð$¡oÕú.z]~7xüOÂŽè®òCöYëßá¿×ú*êßÿG×ÃݳãÀŨ7N//Ψ»õM‹ëôLø>ÜU}LÃë[¸¼=îÝ >¼ú F=_ùþµ¨o„½Ó+Y >ír|OõÓŒûþ—~±Ü÷¯î{£3.õ<ô³Ðí¬/\çóèW5ouÜu`yÎs¯F=[¼û¼óÆöo=ø<ë5 ½>ñûæJ˜½ù æ¡Î:‚CoDáç·"_âkjÝJÇÃâËÃoànu…:îdøßéàWå·Ýþ¾wC+|¼¤ç^‰¸B×<: Ýiú9“¿‹ûxDñ×óâA¬ç7òâaàÍØoÏs#ò±çRÁåòû€/žF^‚ïþeÐY/¯^À;©ïéš•/ŸßŠ÷½8ôFàù·î¸èœÌíЇ®_h=kî}ð°û ;t^áj訫‘/}ÞÖJ_WþêüÇïžL3=¢xøàqÕÕïñX]Gÿg¡ë“ðwÝ ¼ˆº;òö­ØOyïuð&¾_³¸ƒáǪOk©ÛÅï׺Ý<žôº˜ÎµxëLðíòµ9sW£³_·ãýWƒO©óµ“®;]Rç7CW?ðKõ¹û§ÄÓì£ïíë,ð¤zçlìËZòûóž_®oVõÊ¢ãjøÞü:Ïñ´æ…²ï¡üyè@K£Î[«Ëì3?á1÷Y¬Å{ÂíÅ#EžÃ›ÜŠz÷³¨‡à­s?ïnô<Ï‚ç?š¼bäc|ÑõàE¬—þbÏ[ýØ»}?œ7¯‡>v+ø9}È¥ÇñóxOï!±_ºCÝg<ê~*±ÏS}FË÷T¿Ò¸ûÓ¿,bÿ‚/«~×içáZŸŸº€ý€Oï¿éÿ~)ìA¼ûýŸzä§ÁŸ×}}©3ô¸Å>äõ_…Èÿ›[îw)â1?:ûˆ/ÓOx6ðnéÊ¡À£êÒ¡ËðK¸Z¼³?>]ñþ_ýâoª¼8Óý쉗փO“ワà\è,xMûëýn…ŽºyM=o‡~;u õP/> ÝäJÔ]â8ÞÉ9Í;‘‡ð¿ûS÷‹:¿´×ëjù¤øºà³îž¬û›ç=/‹Ë/‚ÿWù^ë_çO&ýy¼'€ní½~õ5žš=ooS¼^Ô™u.6øeë‹OÕ—¼z‡sìpšï¿n†]ýnøóõèízÒÝàU¯_W²7uþÍX÷õxÞ«‰£ƒÀ'Ó'à’Ðg/'Ÿ>ü»9{päï‡ÿv¾Sÿ<žªÎ /÷ø(Oáåaq—=À+ì|øïi=ÏFþç߇}ÂÍ5'nÑu$|þ—‘¯õ×èÏ>–•øÁõÐYÙÅí¨Gõ¹¼Þív\}:ÁÃè_«o„¾,N~ó¦ïKêoâ^ñZÄñëdð‰ìõ+Á—Œûïã­ÄIõQÕ·“^¿«Sسy k±ð½\ü;úTùô×-ò‚9†¾ÞS¯Ñ¯7B‡?|¾øŸWßÌAç}> ûDòmQ'‹SXêñx3ò­þBsê>Á•‡è'~o-üÿ|ÔÿâÏ•àÏÄþöQèÇødçEÕqæ Áuævó{ûÿ |ï9oÇo/ÞÊ9}wÂ>øßëY÷ë'¿éW>ú#~PžžEžøvÆÉÈëü–U¾Üí¼;÷§ú[>¯’Wúý¼ãT|BÍ÷ï½þë9Ücu3p”ý݈¼«^‹ýnÚóOÍóNðâ{®Nò>tù…ÞžŽ¿‰óúå8e#êv'~Òè‡yΆÛxöXÔÿ¯öz^›Gœ¼Ïñ»?öç¸xn/¾êñ{ùóÞãÆüí:ö½ˆ#ðÉ7ãž¿Í{²è8Þ¿óÓ³a?Åï̻޼³zžãÕaSW‚'±¯…·Ç§|y½Î±Ÿÿuðõü^óùÕÿy¿o¾Mé 
‹^7Ë?ò¿áW[½®ÞÌ:&x뺧oÑq§}Çw«o}î÷ƒ×ó|‰x]üh¬£yVu/yð|£.`ÏWgW=õ?ù—ïIÄü 1ï;‡^-õ}­sqñûÕÿ~ÐqÓåX—ê+§ GÞ“¯ªo|Þí^¡çxÿIè¢ê™íÐÕß¡îSó£°‹êC½f-p?û¿ðkú+n?f†û ­ÿþÜÏv;ôq÷Ér_/}öðùÕˆW#¾ðO8Jœ÷tûšŸ2|Žy¹Î¡¥¿àåéà9ïébàjûç<½8ß›û€·W–n<¬ƒ~{8‚ŽV|ôA¯kkÎÔ¨¿Å娣ø«8^8á óIâÀàC¿¯ÏRÝz1â¯~!|]ü^ÔÙ¿Œ:J^¦?°÷õðsñªt¥áïõ»n…No?ÿð§Ž÷7â{­¯}®yÏÃ:››[ó潎€O»ƒ[W¢îw®Äù8Mþ÷ßGÃÞnUú^ð¶7".½þS7ƒ—±ž›Yç,Ÿ{œ9¼hñ¹Q÷ ßû‡ow=ucåíyçØ»øt,tá»÷}¸+ï™§à{¼äµà]Ä õ*|NLJØ>·Î¿â«Ã®­›¾Sq†d]ކ¾ªNVÿ9ç÷Å-8ówßï|Põo¶y!Ï{Õ9¤qßßš5íù1ç á¯×}'pé¹³ ¿:.gù5<¤î͹:u¾Ÿ0íøÈzˆÓøõÀ©¾GŸ;‡_ÍÓ]=R|àG¯ÎD~žvümý䫚GxÀ>¹öbàLüÏñÈ{Õ·¼èù™?½z·¯úÏï›·c?é'âøëï÷ç?:¢8·õÕóà¿n„.fàéÂÓnpÿ8z¸üñ^èHö™¼^êúñôïFîyÕ5Ï"p ^áw‹Î#W¼ ÿ±·C¬{£ýï¯F=Å>Ä/øëJ<ÿOƒgµÎ{®‡~!_Šýš¥Þ8í~'?\Š:Ÿ_øx6ô›µð«›·óÞ|{„#ô…á'ÅÚۼ蚃ùnèìÏ¢^…C¼qÍEŽ÷ºú+üûAð†â¦~КüzS¾$ïY¬}wüŽçÇk”¾¼µó¨ì ½¼ÁFÔ¿? ^W_¼ª®ÇW©—ê>›q·«Š|¦n€ŸïF|‡ïý>\÷ìMqo+ꪜ“u<â4þü÷ÿÞ×µêïÐÔ—tƒÛ¡/;~'pÉzìïµÀÓÎÛ_ë8‹÷[¸ לQzäAq§æ†N²¼ù4êïë¡Gù}qÆ\4ëXóä}ñQ_žP¯\‹úw#øe8âlàrçÇÌU‘¿NÏnø|s/Þ[ž´Ïø‹Ò;§¡“ÄúÕ åÐwv;«ó§šŸévü(ð# .úæ^ÇÈ'#ÿ°ûãœ\Úc+õ¼\óFéÇ:ügõ•ú¾©ok~ç¸ã—Ó¡7VÿTÔg7B‡ÂÃï´ßqœûû  ¿ÿò°Çi}ìê:¸Ÿýoo¼yçZèò¼FªyWÓîoy¹Ÿ;ûUýz£nìÞ´Q¯ÉsptéÇÁß}qx;tñÐÅ7ך·(îâ³þ8êÏü´ûdéPðÕµxžÕÐìgñÁ‡¹ßÇÜ+Ÿ—÷Ë•ƒ—Ø=UÝùû?öýú—YÇm§gl oœ ¼on¿?ÿ·þùx;Áƒ‹W«Áû°·ê÷œöýâÇŸ…/8“üÀ´ï3œ‹·ï> ._ >.óœæwñ§ïEþ3_ªøêàãÙ-?Ò1 „=ý6ðúËðzÒÝÀ»÷¢.+œ2¼ç7¡/ÔyÝq¾oG}zûüýr×Ï*O¬tâVø ¿ûQèåu¿}xøoó%ôU~Ä=q¤æäãƒÇ]Ç•gKG™vüUç w»xï ¡×lÿ,lF]ºü¤~«º|Ú÷åþ¢Û]Î=_Þ\~8¦îG‹¼áýÔOuŸÝ¨ïçzè;‡¬9Ù£ß*¯/uœ­ÞÅëß:=ïã–í{^ü­~Á;òoþ|>ø“ÕÅÛùBçü? ïVÄõÖZð\êÍ÷Cç¸õÿzè%g¢Î] Þ¹úkïó+ùÃzë <<{{­Û,ô¸?Œû{Õ}Á“·ë3üš={ÏÏ"®^ÞS}j.Âõ¨ßÌ_š,ÝüÔ|Ü3ÁSYwû¹ùý›Å ¼é“àC.O`†¾‡d×üY}d—Âþèæ²9oËôAÖyÙáç¾ù׎«g±pš¸9\p5ôVyòþ¼ûwž·ó{t•°CÏQsq'•Þ7íús®âjø?ûIýÙœö½º(ôçÑðß«¡k¸ß]:Ïœ ¾—ÿŸˆºïY÷ôçÂ_^Џgº­ÎmÄçðs<„| w;פ»uíÝÈ«ìøß;oçoD\tKH¯U‡Á¯¢3zÿò÷¨ïg{ÅÛê ¾ î˜u;«yZ‹î'W㽦Áƒš³ËîØaÝÓxD?¡¸j߯/[úØ¢×{›‘WùÏ«#Á#ïö8x#p©:¿¦Ï•]௿ Þ•ïÎÎ¥ïUß×aÿýº¿>x³²ßá=3ê<ÄÇÁ/Þˆï­~®IìǤDZ<ŸB§Å7ç¹ë{-ðœõÍ-þý~å¿“Ž³o®ðy¥+vœv5t¼ÁwÌÿµÇWøÓùÐgáw57tÀu¯¾ÝíO]%> ýçIèÝW"¬߇¿¼Ñ'Q‹søÙ»‘wèë#«Oiúé¨×ð¿ç«ºNž Þ]¼¬ó’“þòý‹¿ÖyùÝžÇN†]Ó²/Þ¦‡ñýAÛ‡ÿÊ>æýyEÝ4pÎõÀiæ óK8F}§ÍïÃS?ù;õšxz2ðù­àÕ®Æ÷Üý¹ºŠþõÚûïùý微užeÏúêxÔã­ï÷yž£î—Yôüy;ø½â†ÿ~:îu{ñÞp x§ï±æ…ßëþrý]·R·]ôx÷ÃÀð\z'ø6uï…ùÛujº²üéûïŒÿ§Gž{|ØãZÎÓ³_âû¿|»ÛÍS?x;¿ÄßOGf~¦¾XzžC_GÍË\ôõSWØ/þ~3pÚ¥Àöõvð¬âIÞÃ8 ýïVºØ¼óGòÄ÷R÷ºSüt/Ä•ˆ×'ÌÿÜqÉѰ‹›Á—ÜŠºSÌ9‹üb=ø~–ýÿC¸ß/þ;?ë^hq¾ùmØÙÝàaφ¾¸xŽ´ÿ¾W?‰u¸|rý9üüÁ§Þ \ný²ïÛù¥šS¿Ûy ÷eÛ/Ï?z[þ¹ñ®Î5.wÿÌ÷ÆÀÿ¯V‚§™ö¸à<‹sUâüx+êµÛùgÔwæ†ä=ÀÙW±ëk½D½õò^Ïwub©ëÅâý î”Oœ'¦ÃÔ=}ãîoò$»;ùwâ‘ýºù†~ú)ÿ^Þº‰ÎéÞýÌú›‡Ã•Ù'Yö5îŸs&žË{¹Ïn‡ƒø«ùw¾Ç:\ \nÿ ?.:>4—öføßà…VüÇ¢Û­ç?²¿ßþÔ½¹£Ž«/ßW}½ÓžÇoF]Ê_¾ùsèÞñ=ÓЙØÁ—Áëœ ¾Ðz²G|êͬǗû:e©ûç7ïû¹ªG£.oÊ_–{\¾8çjäÿÍà)õ]¾þs·gü£>%8ž×'·¼þ×þùÇ"ÂKø%v~<ðQê…Þ»æïtÿ¡3Z/óŽÕ™uîÒõÝâ/Æ]‡€Ï6¢Nßj~jÄÍëá_o.¿:ߤî=ú }áÉ~çqþ‡ìõ8.žf¤?ᱺ'ãM¯ ïïÉÎì§úúÕï¿uT౺/`¿ÛÙ­à‰ÅõãQßͪÿvø>sÀ.þz'x3vçßoï’s1®Dý–q Î¹ ¿çœ»`¯ÿò¯¿°7}+WƒÇ®{7‡ÿþßûú?Öñ1;zÓã¸s#ôœ+êÜ{†óßu¤+íLßž/éÓÕ—±Û÷Y}¡ý^ð„xû›Qïoz½þƒÏCÇó¾Þë\ðÞE^ñ÷žS\ žŽz¶Îdž~-~œ ¾ñLðÉ碮_uf∸£¢‡üË‘¾ou¿î¼ãRx€þý‹Ðµï¥>ÅWð#ñõ\ÔEü´úÛç}ÙƒºÊ½óuOÉ´Ç™›Q§êÏ9O7CO:úpé³âY¬÷•ðCyåjÄös1íeÚŸ«îó‹úõD躧³¾›vÜ©g¾YîŸ_ùhøÓý¯ÎY幇Õ𣚛¿èïw,ø+qXß8ϯn]xS<ò¼ã_ñ‚þ .[Wº½x_õ’¼pÐíT¾Þíë_8uÒytúØïþ½ÇOÏ¥ÿW;¯¹äKÝÏsN€>¡{ãÎ3ç}~+ëþqñßzÜsþi=ü0ϳ£¤þöçŽoíãføoöm_Š|ˆç«sEÁÓÕÜ‚wº_⾌ziž<”÷0Öý³£¾ö[^àϳàß/ïÁ¶ÂŽê¾Â¨Ç­ <©?¯UçÆ=¿?:]oP÷—ív»W·Ü‹ú{=êz<‚ïQOŸ üw-pÚµ¨¯„NPç˦Ýïä©g¡;Õý§“Žs.«:ðjð ê yöEèh³ˆ«žçBèⷂǪógóŽWå©kÓÙüc}FøÇ¨{Äk8k3þ}=x›ñ¼øTúœy#ô•ºrÞùIñ‡ýŠSwãù®E=–çßì¿øq*õÆi¯×‡ggñ|kÁ‡¾xénü{Ýw?ëqáaèp¥u®¹¶+OÃË¡gU?Køýï¢^ÆŠg5—|Þq¤øt&ð#>êfÔ‡ÙŸr-p_Ío˜v\t4ôzyËï³×ó_ž3:<úÍÐ+î†~ À øÜíyßçã¡Ï胹~W÷Ë‹Ÿ»]§—ï..”÷à9÷˜ÿ6þ^?ÅZèFâ¹Ï­ó¼þ=ù§7=ŽÜ œ|3ølŸ«ŸíÓÐ÷.^¬¹,=©‹gÁ3Y¯? 
øåÑ¢û-¿Î{7ù—ÿÖ¿ ÏÒŸà½÷/ñ³ÂAû}ØÑëïþÆ¿Ä;yl5ô¸¼ÇGp.;\¸Rý»]‡³^›¡ïZçäóù£žÏó~m~Ew‘‡Ï†?Ë«§BqßýýÝ€ïìv>Î:ªSäQùfú¶¸ä{­ç±àµÅSúíõ°'ºÞ“ˆo¾ŸD?4ø_þØó?à?êÊÒ?=_ÄMù¤æŸôŸcßêhñ«î‰þ{yÔyˆ›Q7Èã¾çÙJçi~õãÕÀ7Ãnn.¥oÞ ¼(Oæ=I‘Ϫß8òÇ—‘'ø©xw+òý¨ÿë^áX¯šŸñU½™sX×"?áaÔê4<Î#.ŸçûÞê ãÅ[¡×Î^ôuð>òTñ•£ÎGñòT½×´Ç·äÕ«ÿ&ðª8÷óñ»[±ßôÐ;ÿù•üw=êè­Ðåá;Q‡ÊWöoßÿe¥ÿ¾¼`ž‘÷yø×õ¨sòè•àÕEÂÏäÙK‹·ëSøì:Wøý‹°;Ÿs-t툃53xûõðuœ#êxþOW¥7À™7Ãè#ö¹î[w=<ï ] <µv þù\õ:¼o¸OYçÑC¿¹z.gªnøyçr/,}ä ÛŸ|²üÅJèìêDä}z³y^~Ž=ÀÁ5÷qÞë7øJý&žnEÞÁ§]÷³nâbΨ>™Ð­á®K÷ëþ†Àö{-xó+Qg×Ü×ásœû2×}>Ü#t9xvë\sˆçÝÏÔ—Õ¿=îö(NÂ]OB²îòH÷ø;:~òAWÿ‹—âÞƒýŽ?áüÜí¨Ïþ­>ïáÏo¾ß×Aü—œ«,|>í|šŸ›Œz=Ê{W?¾öÏV<ÚízÝíÐëñY?›zøtÔò|iØ%¾çü(\øê=/þ.pÍÍÈo—Çâ1ávuÆýƒÿñx_øÌ=;êØš›´èïëÅ|ø"¾‡Ÿ]ˆzÍ~ɳê=üö½qñfüœzãÉnï|_8c=ôÖ¡—Þ ž`=p‘/žc¹ãÚ«²èñ¿©N©óS{Ý®Å÷êk>ˆxu»ºÒyǧ‚§¡¯:\¸oÀUú?…¾k}à4xÔ|ùÀ;<Þ œv.O«ÿá ñÿ'GOž]½Ç_ÜÛç\kÝG0ÞÒãïÍà'ý¼u¬y¨Ãï™4 ¾£ÎåMûsq¯ø—çoº³ö[÷ä ߯ßß÷°¿Õ¨Ç·£®\:m=òOïtðú Å­ßöÿ®~q×uáOxOŸ¼û¾ü¶uÈ9§p×$ôæs¡GÜ ]«ŽÏåyVÞÁÓЧ}9GÂ+…î¨Nô¾òXÝ›3éüô¨Õ·#n×üªÈ7üÿJ¬»†oìë7Ãy#söðußã¼ïÃ¥à‰äGzÖш·×£>uοt°Ãî_Î}Nƒ÷õ¼ëÁÏ^‹ü§ONÝ-þЧtÏ? ¼jîöfÔmøoøÄ¾? ónÄ]}›aÿ✪O)ïiÖ?S8*üz5ê~ûu%üçzØQé‹—«>x^ñDþ—¡Ï[?~á|!ðøÍà3ù•ûùä÷œ« gœ <ÌN6Bß~uð£ˆëâü‹•¾>užaÒóáW‘ï­¾i|;\ëÀ[Á3ç9Òš‹?í:ié+ÓnæpTþ .…W«~]ô÷´_âdÍ£‰zÂû^Ž|†·–?ñ.ú4ÎÏ)þë'ºúåÕÀ·ƒW½¸¾tµIs;‡}Ýk^ѼïSñ°ÁgžŠ¸}6žóvð7eûý}ÅMqQ_ûW¡¯Àƒ—CG2×MüÓgQ¼Û(ê zÀ·ÝŸ.^ËËx}gu¯h¬“:Eøýt\Án|ýîÇÁ7âIžÞΣߌ¿ß¯øËˆ[ÖÁ9~ùîw{ݾjÜèíø¬ê‚áç^?; »¬{óB—®yhÃß?}Óó}m¾Ôëï+ÁÞŽü$ÏÌâùÏϵõŒ{7_~Ó75 ^ï³à™O„®{7êŽ[§ÄÁäÿ³ïÿjð_FÞ¿”uÎnÇAôÊßGó‰ä£ƒO*]2ðá¯#þíD>°žæ×f?íÃýÀç·Bwÿ›{éFwÈþ,uèBw¸™yùMçcľ±÷<÷ÞÕÅæû{õÞXOvY÷åtÜ> ½ðãÐmñwâlÍ!?èõJõ×O:oËjŽìA¯#ª*Þ?ïwÏGÝõ»GÝÞÅ}ÏÃÎåñó—Á?À=ú=éI[Á—«#Ímqžn©÷˜ô8P|´¿7Mœ)|2êëv$x ï‰'¬ù¡ïä¹ö_xsøÓ¹Ô«¡ƒ žõZà¼ÓÏÕç‚—¸ën=_¯õxxbôöúx#ìîVð¾Õ¿ÿݨÛò¼gõ)„ž(λ/¨øÉE׫ÔÎñ›·¶õá§…þ›çäóÇYØyÍ|ÓŸo-ò¥ºŽ"þÏ—;^¬ùCõ‰C—·æýÆòŒ8¹¼Ï'ñ¹Öq+u؈‹u/û¨Ç%uì™à³7ç?Yé~»ë¤/Èzà]ï&o=îëx'â’~*õ˜ºÂÏÿ4ô˜Õ¨ë¾–ÐÙrNÊùÔ݇ŸÿMØú­ê¹¨ûáÿ÷çý}ÿ9ö£æ{<üGïúæÁ·….½ùôZèŒ/û>¾èøãnä—º7n7t„IוîDÄoà…Òiz|ÿvüÞùÈÃ_…?ˆ:õfàƒëã·ë·þý_Ît~èÖßÑojÀðÞuÿͤ×óuþ2xxówQøù3¡wyës#üáõ™þ~u~eÞùÄšï?íù­x‹qבø³ó¯ì¼ôD|Nð#úAfÁË\ ^Î÷~~ð‡#ýórÎ&¿².Ï–»ž(nÒ£¬Çëwz=™óœøËï‡ÿvÏäÀ9êfs(äßQ«gj.áA÷ûiè8ÇÌâsôEž ^ëƒÀ‹î‡¿›û;ï8O¼O®E¶Ïü{{ôv¸xD=¯¾1íAè<ìòHà‡âc?×Oˆ‹¥ _šú"{'é}¿·ãг÷å‡âWðäoBg²NƒÝÿþQ·ü€õñþ¿žË¾ëHè…«¦=®?ŽºÌŸŸžÊ9'â0?g§â›?õ©ŠÕOÒý½úY¬WéL¡:¿kß¿zÿ¢«¨‹­£~¤«QÁÝêÙ߯õï­9[+=Ÿ×¼€ˆ—×#?U_ùðy÷&Ýί®¼zK«Ùíïk¿ÕïöO¿¨ùúpkÞ3îö‡WÅcðËŠGÁk‹“?Ž:ÿþ¬ÛOC^ËœÔäsø3=OÝ¥N>x¸üeøsø»î«æw˳ïÎééW€wÌU¿Æ£OnÄãN…]š÷Ì/ðݾÿëà j>eÔ?ú.Ä>ÀðLÞ#T|ðòá¥X/ùþfð]ê@ë¯ï?ÏŸˆ7v;»ºˆÿ¿Ûïö#ŽàÙ¿ þzÔIÇÂ_k]Æ=^‰ës¬¯|åüÈÅÐ…Øÿ«àçøÃãÐÙõ _ =¹úöGÖ܈åþïâýø½×¡?œ Ýïå¢ãE¿$x4üì«ÿèºcwu|€X >R]!~œ ?cï¿\ucüv®»œö4î|¨úc3t=¿Wç”æÝÎ3?À‡ÖóVè—©S ÿ=ý·ÿò—ÿóŒþÛèÿüËŸÿ»«w®ÿ¯£6ÿë[ÿÛð—ÿõƒÿyøÿÛ¾{åÿõ?ýïÿ÷Ýü¤««;>É_þ_×Vï­þðÆö_¾$~üÿÜÞ|øC_þÿþËÿþ—¿¾ÇÿøŸù?ÿßøáw>ypäÃÎ6ÿò³ÿõÿþÿ?èhôßG£ÿÏÓšZªk¤T‘Q·ê'³¾ªuZ%XBè«\·ÞM{v•%í†Ó¢Ø±P»wözµ :×´ïÃnÅkQ}Ý ´³*²¬,êÕíѳþܳPé‹õ¼Jwê'‘-ª«cÞ«óìæ¨ÓV¡ŽÝ‰¨P¿·ŸG­ ß·^Óž•E[èo+Tj§,|žl%ûØçªÊw{æ­ÅÎìvôêyêô–-zÔ;—lõ¸«ýØ v³ì‰Ï—M,úóC3u+'{œ‡ª8ꪷ}UÕA½¦­\Œ¨-+뎺Q—€S³¢WMé vH7ƒ¬ y/Ykêß7ãùtb[رi/~O¶â·ºìãFDé`9Ù­.®GÿçtEñåx°ºP ZÁ>wÔy1T¿Í`c/E5ÓMïöžü…ý܉*ëIm¶þT2·°®z¬[T£jX –D5\S–']½_ß›{ü`¹Ž.8lû‚2ø½ˆ›Ð ÖÍ4<Õµìþ\>÷éáÛ«ZùÁ´rÕ[M¡wv“JàôûsUÁà7¦úzÞç£þ\uûS¨HöËôóßž ¯‹Ê4q쀪ðÕðsN}¼œ÷ªØiUl¹Ó•Þt5M:Uíš‚þë”ã£`ÙV¬º>‡Ÿ{<êÏã”ÖÍ­IºíDÜ~2<.ï?¸Ýû‹•rŠT·&–Äi"ñýY°“¥ÂMz_8]¥ú™»ªº·£Z¨ç}·Wë[Á2ëÆ±]ª!¾ÖQwÒÎ`ߺzîGœÿ¬tÿ-{>×éMSßoF•~*ª©çªþ7]­äçX(ë€Ý¥š°#§ù»Ûô^î÷õªÓeoúßë}ÝnºxHÜzºèñ[76þÕZ¯:tSˆs¦p[_Ó‘á«­`;·ƒ%aWü›¿Éïó#½[ÀÔd~übX§ù`_º†ŸúÞEà¹ýÀ§+Ý^äQSnëԸ纾žëÑ›/jÚíð~÷"¯>ßïõ€us{ \ÇŸžFÀ¿ÄkÝ!òæ£åÎv˜Fj:rùÛ¸ÿ>µît¨ÞO‡¶èë„-ù÷†Cîôø^·‰Ìza¿Øµü&ß鲆Ÿ^Ž:‹Çß^~(Öh¿¿7öÃû¸Õ¾Ÿÿœ‹s‡Ý_ý;uÉ{›bÇŽÔº9+Ë7ÃϹ}Ê©RÓ*|¾õ•7E2ièºù­xðÌ:?gz/?ç¯Îfgöõv°b/Bu¢zНþ­ãeqØéqÁzË Qÿòù÷qà,xõqØ3<½Ãžö»=οÝ÷IœÃ¦S™ø“|e]á:8T÷¨Ó4Ô]ùªNà ûo*n¿Ÿˆ¯Îtû÷sºdL=«Žu¶Í>ªWëÏ7½î°ž/Wº‰eïówÕ?Ö’±/Âý¼uw¨íò¸xñtøûêÂwÖÊ{ð»Ç‘¿°“ìIܳžpç,ðŸ<¤»®¢¶¨Ÿì¯xÀßï½é8U|è ù‰|€ï jÂ9âÖ­ÀÝõ~³^ßÝüáý§‘¯êy© 
îvü0Ôñ•ÿóg¼Ž:ëN%ÕÕb½J5\t6–;-é½Ô3W:ž¢zˆßºØuW<:èø‰ýPœ¾…Ø/•Ì¿{_vw/qÓ~·ŸìZV—>ÂãÍ:~ш—òœìHže¯¦ ¼P7îö¼g²¯¹|<éõˆ¸·y<ñ‡:¹ðÖrÛuºtÜñ±}€Å-ø@œó~÷æ™þpÚýüùJg»ÕUê×'Á{‰p”nOÏÅÄAûs3ëÍý΢«gà~¼§<ùü ç?§ið¥â ;“w«~ÞëyÿNðtüQ^ÕW³þòÚ“EÇ >ßûÉçu«êA¯3ø|ë{«>šô:Êßü7‹…·ÅGù…Y'qšÚ[¼Ù¢ç-ñÕtˆGû=/±ïYÄGùF·°iÆêªkÁûW½²Üó¸~·n)w|_ÀçìC^ùzÜÕgø.áÇPÇñ#?—»šúâ ï¯ïå§øªšæ³èù–=Põ¬«iÐôÝu¥^ëóâMçë_á4‰õ*]!øà§Ëg'äýšj<êñE<çgw"ócü¿Ü¾È{Á/púWü™Ÿéxi+Ö]Þg?üïXõáaƒò€ü®~,>e¥çe8¡ôžq×gÖ£Nd?ô–‹À±»¡†‡ZëyðwÃŽvbÿðyøº¾¤+ÆçÞÞëþÉ>®~äO¡cÞ |+¾×mg²âÈÓ½ÎË»­Bœ.^h·ëpž)aü_?Œ|gï_ã{à…YÄ}ùüvüþ‹y¯ÃÕ]ü=²‡º}~©óý5]lÔõß»û.oªŸÄ_~NwÁKª‹ðîÕý5ëúbá÷¥®3ªŸvBß¹ñP}PS’Fý}åú ¼„wòNàK¶\ ]ûAèp|é}¶‚ÅoÑ-ØÓfúûRÇ—ö—áåà­ïOBœ÷õ)~b¹ãºí¨_à-ú½nøR\¹õûjø-¿<º8Y]὎«Û%=>ºHäQûQq+t~:”>ëŽÿ¥{àïn‡]Õ­ ÃºÔ)°i¯#}^íû~¯cðt(]NìSý{#òÿÃãøOH OoÏíϺ ð ÞE<+mÔóëÓÀžžõyÖ]žQçy.zÝÅ­òA¾œw^n5êRï /³§ˆê¦:­:|^ê]tÿ¬úuÜ×Å4œº}|Þu2õiõ¹„n"ßoF=QSX]Ï…{ëVôQOtÁͨS6CG÷9B†Û®‡ždõãÔ)ÝEç5ΰzEÝ0ïþ"ÎÑq|¿:.½<)¿)Þ?ê::¦u¦Ëù~Ýã…ŸGO°®5­â°çSùǾ©ì÷Nàª+¡Ï×m\xœIÇ¿Õ2ïñœŽÆfÁÃ{þ{3x_ö§ÿn§Ãþ6â>œîÔ ]z=âì¥ÐïðmìÔþY꺻„Ço…¿o?g=.‡¿Þßë냟°ßÏ£~ÞŒ¼8 ÞCÜ=ñ¡n!ÜïqG<Ó=YüXä5¼“üîsêTEà?º:äwð`ñÛ‡=n–þü7hýÙ§ýߌ¼\:Ë´ãSQ‹uu6êN<¦>”ØyC\†Kë´_àIë*>Žø­âzè ôLö Ž=¶N.ºÝû³ðΤÛSzuÿ’¿WÃÿ½ïµÐ‰ ÇMºÝ²KïU§_‚÷TWáeñ§‚G–G\§GŠ÷¯R¯?´×ëq§¤ì³x]ï?êÏ¿¼›þ­Ðß|©sðžÏ¿ºæVÄùÏu*x)}žÅ÷Ϻÿú“ýÑ•êöÇQ×E«¿s¿ãú¦÷¬Û&èÎob‚§“¶ƒ×†3kêvðóx°º=iÒý3ü¸xÀI·'z;¬~Ãå®SãqÅ-qA>SŸOCoƒÏå_8^a¿ðTÝ‚¨>Ńív»Z‰|S¼æ¬Ç vžÓøù¸ //5ïqE¨®¨ßÛíúÂÝÀeÕ?<ïñu-ðºîüê»[éxG\P?ÒçäïMǸ¸}'òÔ,x û?O¼RÝÂu屈ÇÖ _„ÿ«S‘ßCª|¿ÜñTêXçƒÒ·¨Ÿìëˆ×üÂ{=¿æ/‰×Ø;û°Ï¥'…-è*üÐñªü¬_†®%>‹ÇÖ»¦Or/òš¼c_á"ïUÓ¼ßîDþïO#ŽV_é¤Ç³ê—[·ná ]Iž¬©§w6ÂëV±I¼×AçY‹§ÜïyM|Þ ½Ö¾ß ¿…ãôÁãyêtæA_Oû“¼Võó-÷ï_ >âzðCu GðÕn¥ªú?žÛ~Õm{‹þûüòZè˜ìŸ ÇŸˆ¿ç§ðœ|nZ^e'ìTܯ©d{ÝMÇé}ù>úžéòÆ­à «ŸéMÇ/ìÏsÝ_êöRÿ}øvþçNä3§‹áxöXì3¼VçuÝ ž—«¿àWqônðê⼈OUgà ÅOù­êµ½X'uš<±èÏ¿;ïÁøëÀot%¸ïvØEN¿™:Ò¬ÇÃË‘Ö#ÞÔmž»½~Ýw=´x»Ýî'çcýà“­Ð‹gWŸAà8V>u~Éy¼‰|®Ÿõnð‹xé²Ë|¿àmèw÷¿í/‚§ƒ· Ïì¾]Ÿ<:é~Q}ÛQ߈“ø)ﯮé|‹ŽûñÿÞSŸÏÙ°ã;‡jÚâðsúKà Yð+ìõBèbWºÝêÏߌxŒ?Ù‰zOßï?í¼ÿ>8ÆÏU½1êºóZðÕ·;ü©>ùù¨ç9vS§ˆc¼·ýxººû›÷ôKo±Òã”<_ýè»ÝÞžïvŽºxýKÖÏy ß÷$ð ëJØïNðüænÔçúUïS¼Ê¼ã+|¢¾cxŒø<ûµü¾±ÎÙ…¾K_fW¦¾è{¼~K¯¾™ºó¨?§zË:문¸W|ä¯÷ß¼Wß@è-ÖOž¢Óß}à«àòVÄWK—½ºŠxQý}“àµWB_:ì~"o«³ô7Þ‹¸¿8›ŽS·–Œ‚ï<èøÒº‰¸ï<Ƶø}ýö];Ÿú øìQ7À;5•2ô18Î+ÎVé´ïÝàÑħêS¦ý½×Â.=¿ÊþÃ;‘g«_zÖñ¤º=«à®Ç±oxH}IÖùvä?ë5u?,~$ìß#®áå¶SŸuœçßùwžo+Ü2ëþ\ùþ ë\ÎóÚO·Þ¬„^ÉÞðMðf§ßíûy1ø.ùRŸbòq÷CרùÃç釯§ê[öÞBµ®üå~èS›¡'z8NÌOÌuØÿÖ/©X\=פóDžß¹Só7ôux.ö¿Ó7Êî÷:>¨>0yyx~}¿x俪¿.ê ïy/ê GÁ÷XךŽ8îö+ÿïOÂk¡s©·å©àìܯÞ×Åû¦b+ò?”ëyŸl„nJ_³ŽòˆyFO¬¹>ûý¿áH8 wþÙ~¾Øïy­ê3ß·ÜýÃû°;¸þª¼9î¿/ðù9ÍTÿ~ÝZñâé<ò×¼ã¥:g0 7ê@ù‘}Þ =¡nŸ›ôø„WƒŸªyÒño=÷^·s|áÐ2.É#ÔtÔiìȾUÜzÓy¢ÍÐðeÿ‹î¿òL+[¼]7¬ù+½ÞŸVÃõ}{ñpk÷íñz+òhé£ÎËá‘Ù3;¨¾†ˆ‡u~qÔ×ÿù½;Á‡±“ºÝà ã˜:~¨zå ¯³º•ݘss4öC]†ÏÖ?°:d;šv^ÏçÞŠ:ÁœŒ‡Á+ªsÎDÜZ ;W‡êûxzJ鬋ŽWà ùýVä ¸Dý~=twÏ9‹õ–Gÿ¦ÿn·¯ ~C<ø¿¿'·ƒ‡f¯—‚_UŸûœ×Ÿ‹Ï ¼¢nR·U_°ïu‹Ðn¯#áõÐ_œ'PGTßðŸCÿ[éqp¸&§Êçt}Ýn«)?Ÿtýšým„ÎbáyK›…¾åœ¼Ê«oz¾È[„ø£óÜÛÁëÊ·xqQž>úƒ<"ž­¯-Þzó-êÏJÿ^vCOÔWêÜ[ñ•Ãïë?.]tø{ý2kôa§ _< ÿð¹âÓ³À ðýÈóÐ/Ä-õHáªYäáˆÇ:³gs%ª>Yéxägáïx›çoø8_ÓMg÷€ýœþ«Ò•&¡gÍ:¯Q·tŽ{œæßüîæ›«±¾5—!ô°;¡sä-›—‚¸üŸøµõËNð¥C,Çþíö|šýˆ×¢Î(<·×óÓµŒkÃ÷]™÷÷X‰}·žøV:ÛNÆ÷¥Ž›Ø»sVwƒ7V×ʃ†]ÕmÃçëßcÎ Í÷{>«þ©Ýެ¯ºÔíYÞÓþÙ'ñµt©ð7yÂ|:ËÅÈ“tñ¤n5˜w]H?Ò£ägC÷¹øÏ›sX­ƒúZ<€gê¼ä¼Çq~¸qœNµz‡9büÚºàŸÄú‹ku+ϤëžûaðÂ7‚«<4|žþ€ÄMx ~(_] <%^ Ýåy]¾CÜÆ‹>÷iàùG\«¹—ì}9t×q·ë:xônð¡òŸøS·PÿZï·ßãIõ5¬tûH¼ ·Ð½àzqnõèvàÔ˱®×Bo†¿ëvtüê¢Û)û¨~ÂE×ø1üSç÷».ôà°Ç?ø.÷žêøº~|að`xœâóf]GÈÛ«ŸwÜ×Ñ{}šºæ¢¯SሕnßÛWùOݦ´ÒëSûŽï3ϔݛ |=uÏá{ô©ÛÜæP· -÷õÆCÝüU·DFÝÇ/ׂÃG=‰çÙŒ8/ßÜ ]j;â½o#p¸¼îÖ‡“±ÏÔI»ý÷Ø1^“=êÃT7Ó«ßgÚybqÀºØw¸¤ôÿåžÙã­À[³ÐÇ>½´æ ú~Õ¹±àéÿâé£Ã·ÿ7 þO]_¦Î?¼ý¦nc…w&?KÜr_ºð¢û›x[ç¥ö{|¿öi}ð®›ã·×uŽv¹çÿê>o2êy°t@ú^ðÕÏ~Ðù©ºpÚùòêãŽúWÝEgtþT^®¾Ÿq_oqZüºxX^¸õ¦÷Ó×°x^·³óoåx Q·\F}U·G^àWxnõç8ñ ÎAÀOì^À§nFgcÇpÂ4ê\ùEþR;ïŸ-¿ø;ú ï¯Ûƒ—zÕ§ Á}Õ?ñ”¿ÂÛ[ñ=þéïõKÒÄ9yO¼Ê9cô±³‘‡í¾>SßÕ<Еîç5'`Ôë&v8 œ£Ž^+Žãáåߚ׸ÛãVݺ6»½> 
mclust/data/wreath.rda0000644000176200001440000003623613510412702014461 0ustar liggesusers
­¯J‚ÖYj#"ÿ}ÐûA…hÓ–B¿Pìýô¿‹âÝÜœú3æL:GFòÕ¥µú•fÖLg⧃#ùÚH>x6çÿñ³ô±ô—Ùÿ“þ±ù7ñßÿÔ^;ÔhÕÓÑFsŽ=×yx¹ÞÈ6»–éáV³ßl‰í„lÖ÷V›ëkzK¬>Ú쮳uèã~³›”?G¿×ŸîáêŒóì_u¾ZÊÈÙ\Ësça®ªç‰œ-0yö%~„+ýó…ökóga?Wèþ9ôO‹;N«ÿ‹7mùc^…ö#y,Ø+Î9+¿¹ªŽƒþ—ÔKæ¿0/×ó¤= »™âàü/Xõ“~ÖßZ·y«Î3·ÖË®¯Ä»€yƒó™«¼îÌ·ªã’x.–‰vÍ•}|¾°Ö3×u9Ÿœ÷E«~²Ï½o¤ÿYÕ~¤þŒó»"/åeÌëÀîæ­ÀîÊmßÜ/åZ­”«õ\Ûw¦JÙ†ÿ+˜·~mô_ï*Æ3ÈEðk°[„]íðzÂÙ+eÿX)—s˯ؗmžG}øëb^Š<¥?ƒÿÁcÌ—z =øí\óSäÙ…ÀO »¡Ä‹<ûmí§‡ñâO‘¼ öCøï?¶$ì³3ˆ«/þm'q`?|P rˆx;gt|â·+ë8Ò'ºþÙ]_±—ùéž®Ÿ¬Gþ²B¯‹Èìvà¹ø®£^ÛârÜëˆgÜ¡Ôû{uÚ¼á1½ŽR¯-ijeñeůä/qŠŸí=]·pv1ïSÈ\Ʊ7ç.8ùòØ!¯[?¡må2ÛâÀÿN¡íOKÝ_þü×v»’ü}ö&ì¿/å#¬Û·'Kù9xþ(僥RÞG~w±ÿÂî.Öå~RÊ{ˆó6ò¸‹üfð÷#ú¿(å×7Жz¡._}€¸àowñ#ÞÛ¨ÿp¾DÿC´¿ùy Î{¿–roãà}—ëóHÎÃKØupV!70Þ@ݯA®ˆ½œ›5=¾,㘿&ç®´1ÞóëxÉÑç±Ì¯¡½‚ö*dëÜ’û~šh/K\…ö#y6a߆Ÿ¦e· ¹»š£ý-!ßξ~XD~ÓUý^u)×÷¦Ôµ[Õuº&ùÈ=]Õ÷]£Ðþg¬ñÅ\¯+ë~\¯×’ÌËõýW—{¤­ïÏ^®ë±Ð÷rÏKgO߇\ïÙ_gáçæ¾^Ékýwà?sô}| Ïm¼uñyñwð¼´÷õ{€ÌËä<ÃøV®ï³ëÖ=?ÌõºÈ=RCÏu~¯¢¿QÓï'!cGÇÿ¡£×•ßèßÑïwòž²äèõ•çá´µÿVå}íF¡ý¿ãè}qíT΃š~_~ËÑïë§À;‹ùï:ºþ'ä8Ò߉õõ¦|ûMêDzñÊJ«×ØL›Ò\ÛH{­µ 4'Ó´½RúÿËúÜ<ºTÏê'–»#‚E=ÒÝœxžüc®(ž(¾((¡(‘(±(‰((ãî5—šGͧP ©EÔbj 52<2<2<2<2<2<2<2<2<2<2|2|2|2|2|2|2|2|2|2|22222222222B2B2B2B2B2B2B2B2B2B2"2"2"2"2"2"2"2"2"2"2b2b2b2b2b2b2b2b2b2b22222222222*dTȨQ!£BF…Œ 2*dT„1áNMÕ5ªgTߨQC£FFšÕÐ\Cs Í54×Ð\Cs Í54×Ð\Cs Í34ÏÐ{ûÕF<åöžÊ{^óÄS¾ZJmýèÇ®åçºü;=y}ü¹¥½u½uߣz/ìþ¼oÜêÏûéïG¿—¦'XËÕ?zÞõj‘µm=¨â5¾ÿ¿îá§+ï}Í_÷-*_ƒ~èxÿO¯çt=ŽV¦ÖȒ˘ïÅ¿å¼güÞÙ{ïu¶Ÿ÷}vÑOC/¦øÛcêÕ·?¡¯ªtö<±ªqú{{oUõýõxkïqìý¾Ô¯~Ö·œ~f;ýú iaïéÇïæÕ.z«ÕŒgZß3Xïe÷ÃÛþi^×­UoyxïõVÞÙö“Yü½ö^f}oë-o½Ý÷®ö¯¹ÞFÒƒ¼ñÏßËŠ¿½¼uÚÔ‹z¦7¹{¬kž¡ãõVÔn|à۵Ϸ¢Úô«7þÛ•úWúÜZôŒê)~]çÄo÷ÜM¯ë=¹ù3ÞOy¯kx“Õ¡¿Ù†—3×a+oÍ­7ýöj}ïõîw`êúykŽ÷ÕÞÿê«×®Ýò^r/ ­_[Ÿò.&ÆÛü)>ô‹Ã|t˜Ë*ÚòÓwì=±—~æ ½süTNÎÖ½U:з͚ᤔ¼ëqÄYËô]~ïKõ¸Æªkõæ™K¡ìýFaP_õ=Z˜2"x¬ØçÊ5"6xç=:oPï,-f¾¸‹‰çûþìýo½†êÈw|¾(âDÑwß ƒj=ï{·›ø…€ŸHé|¼‹.…~‹s }b݊½*F¾ýò»œ«_Ô nq|_¼xÚ¾Ïh-…wÕ½ó|¾wåD¡ëGéÚÿo‘/·|?=ž²ßÀ;{ÞÁÕ¡â½V¿ªëMЮ·F¤¾×bŽô5bY¼®x˜ @õ6=è®ô·’÷’¬vv,míáØÑ#½»ãí‘›ðÝÁŽBþ°Ö®¢Cm\;äI7·¶HéëRìxŸôö@ó^£‘Fv¯lú„½lÁqþzs¾O¾]Áã=—÷!ï3J¤¼ï ÿéãïí­Ÿ¿®cVÓ~w±ï¹‘~ßË‘º¼Çôþ÷®XæR­µé¥OÕ Õ[àEøåçæ¸¸_œÓwöÞÜ»^ß-û¾Û$—XEowèi—á4¡øÆÙÔÜþkrÞ)¤¿|å­ñ÷¬Gמ®¾³ÖÔÛ¥9QA1šÇ+b]'n­êŒ9Ý;Së=¾ŒILü¼ëD:Ûôb7µ¶ÅØ-ö¾ûK:Þ_“âë%é~1ï ±ù= ó2_ô8Ê|ÞCx—1»3ÚÁ péw€Bˆºç¥e±®Õ¸ÆV¬©é戗__ÄÏ÷åDÚzû—¢ ô>ÃéfŸŽRïÀÄëÊ8}Æx¶o{öJÆA,©‹âë…ÃFøèïær8Œÿ­Nõ×^)Üõ]ðݬ};F¿x »ïY9¾ÃûNXÍÚk9C<>­E¹á+’}i gõÓ1zú%îøm›öeóçEÎw>Ì·ôËË´m_Õé}8}k'/R)sä™~a'–Õ:?P[Îk|·Žu×jï”û°×w?Ïí߃ü+"GT/ýY‘M½ýý ¤"½óêbµ8#¾'©4^Bª{¹ÂÙj¬ê¯FzWâùìÞ‚§ý^ÀËvS¥fpImt¼_eHE>òrUr^g=ñ®ŒÄSúÂCÝy¨â>ŒG»£Ê À¸Q]g¿œ-~èÒ“x'Ù§JepA¼0‘¢·Â© d~Æž Tò•ˆ?®•]b½?Ëå›áN#)Îì—Ø2|z~‘æm·Uý1ïu6Z ,p ÞæÄîÛÉaQ®ñvèËÎãmùbª>HÍ 6Œ•:oય ãÕ©¿N$ÍÉ‹‹ær|ºÈ •ñ´î–FÓÎ|uA÷ºô.O{FØÆ É;ðÙÕ»±ÚWT}{ËÈ|)ä$S·Ë{²o5±´IÆËDŠw›ÏžñªÝx³>ºµ×Îi±¨U›ù£ûo.ÿöùÿ·ŽqýÀÝNôî7ÞÇÍ|(q&6z® ÌØŠ‘,Õ¥ìT#{Ú>kú4_·ãÛxíäæ-¬ß¯¢ ˆ-ŽÕþuÊþ°Ò§¹0房 ÿF½»¦¾r׳\QÍ¿â»>³í5ªsÛæ ð]\®rÊ-ÎárcQÕÑðwˆ´ 6¶™6Ý<úVïvâVu;Ì;©kUmv¯êý•¸‘¯ÀðW¥¬ àìÂZÞvðJŠa„ãÃ?ÝÎÙnìÜ›/ Ú)ÓEÅ‹ÈdN|ÞF¼Üi”ÊoÉ“hør€¦zrèüfª¹Ðü4JÇ«|e¿wâbiï*ÙpÒ²u“¾¿â<ïöF;ňˆ¿è{^™_Ä >޾K·W€Q±cÛ—¿Ïã“ÖH"uÍÛîY€ˆF:¸Ÿú¾€ _`úY¿ßØpv:·{èæmùò–X×}aË%et§b=ÂÚ¦ðãx×±¼».m.J0¯xÐlôóªÑÑl/ìÙü‡öÒ¥·¨Kwˆ£BÒ–áà·-Õßþì÷’’3T¼· ^(w~Ĩêñ+xgØM—àë%o“Ú˽ê—v¼»92 tS.%û4žXÀÞ“¾ìc*Mß :g/´Æ‰ŒÎˆ¾ó:$%ÑPªK„ÙdùѾÿÿù:^Ý1H:„¹Eãè½Ä_oãH¸ÈÜ)¤™6˜²3`óšžÞ $ìÜh†žÞØ»6Àˆfƒß`üè.X²RôÂ@S{äµ÷÷Ȭïi‚Éåc´ý÷wÕªÖfUŽ#ïi/µ9“DÿUhh‘±ú·Õ×{Dd&:|ïîŠÛ:ÚÎA£ YÇÍ—I†ïÓVubg$ ÝÙ¸«÷CËÑZV‡Ær†ëý°«Êå9]³DA9utŸÞ‡¼Òn3ñ´Ò:÷JÜGzÒa„unûåu4 …ºsi§f‘íúdÍ›År&¼/Ã-ïyuaoAÁÐëÊØèÏ(œvWK×hJ¬*:ëÿl1RÅ@ékpmÑØhAÛ¡ù.GƒøT}EÒz,`©b¼¥Dæïgâw·›6meKÃQ ¼5»×¨NX„ —âEÅë:¢)n§…»¹ó<€6¼ûykZÕhv~S\'žÎ+ "qGûrt߀D?¸&æ>8£][Ǿo=ˆû¦Kó•ò|á þB dߌ‹ä¦Dr"öÏ[p¼ô÷ÕꯛÃ4¾ÛÜnvjn°¬»È7ÖõŽCvÐ]סˆïAÌ€ülá£ø 0—é’¹nƒJ«nz+šhUéÈ n#¿1d"ú^TÒoÓyë(ŠâëqM«ß8〽¸a„Å­0Ó­º¡ êöÆ»ú}¯¯íÝ …ªeªìq𯙣èò¡ÈÐÛð~ƒŽn«)‡XqÁ¿[ÿý”]õ]ͧºsºß¹öÉCÚàž~é¬(*‘ãר=o)ñ¦·Ú í¤q›ÒÜBO å¾ÍËz!ö½µ¿Þ¡ˆÉ ¶h>P}B²}’ ëÑ.Ô¶ü,I}n|‘ຽ³Uæ…»~ý¢ÿù£^<ñæ~ÿ'.œŸˆoy7šs‘=¾å{S "ÜÀãiS»³S¾—|µÂùmÊx{+6å 
¶Ibn>WIYØÝå&pçÖŽlnꜫÖX1šÒ6G‘Ĥ|lÛ¥:kdZ;óüHüùUOâ:æŸ_¯´ðU¶õ¸GUR4”š+c·ºYÖŒüª ‘1î(éTfÜa/ÚúN›7Ñ@ïT‹ËeékïØQÃuo²‘— •ÜÖ¸¾wûpû¦¾j0ƒðt£þ˜§‘;ú6«&©øñ|ß—rDx•°G£eˆÞÓãš~Ùåt¶z\°¼“çf|,(3|’¶i±ʪðíë®´„qÌ #ð2Û'¬,JoÑ£Ï2#¼¾¿Õë×¢[^Õ]ccMþ… SŽeÅUôW€ñ„‚yµG$ü£íuTÌ‹Í"ÄfuA©Sl/Þà†EpèB‰Â½pÉZ¼v]×M·þé±ÈÙ¸ºÝ N¬3 xOï’1,w2Nê‚ë·AìkF°jLƒøa ÇYÒpº :ˆ0 8¯hÚ¢SÅívÞ»‰s‡ƒ­A}h‡æ6G”W©Ñ‹:H4Ä…ˆô$ƒv·GÜÿöð*˜¡- -­{tà†«˜øÇ/Œ¼‹îêÙ°&Á¨g›ô’;uÈÁùãf9]¥htœ‚e£ŠxZýŒ_Ñÿ¶´Jl þGôŽÞ7z¡$Î^Юã) qì@u*šÙNÁ?à:jó^]ÜÕ;ç©s÷8r‚îtksŽ tÍÍñÔ/m:9éf~Þ¨á ÷©¬ŸjæÅÝöçf~àò­gF—z‘÷G‡âÝÊgY\Ûþw>ÕïályûF7B/¼‹¹ÿŽXø5Q2nõƒsPõâ›33àÞ^¿®1j2•Ež vxŒ2ödßÔYº·á87~ú»ý''ïZ¶U¢ßM.蘻èà Siœ OÓ~`’ks¼˜Fº{£´Þûì ˜¥úT@ò ß½$Åh—“ɲ¼ÙGä+Aâ~o-î¶ÏÕ‡Í`á;dt—&‘¨ñk¾Q‹Ùxâ³é,zÚ÷LºíË?GM3W¾o{¸€‡íe— ð=æƒ|Ž¢zÔnI -Zd|ƒÅ4à–[Wà"QgÆ\‚ÃÞº3i%ýóG•FiL¾ÀÖ:mÁ ôŽ_KL$•.¾Ÿ+ø.~2èÑ“;9׉+'RåWè¿Á² 6AK"™Éä!7ˆ ígóiW?Hê} ÐWÜf¼¨°ªf>€òÌžŽdÅ+Ü/‚ƒ’—¼GíåSÞJÛÁöYÀ00ÃOza$ÎÛKgõþ èÚûԥ˶ךÛrðùfI:t¿Ñùí2g²Šëú~PP ¢6~ ­c6!ø½à¨Ÿê·9hò8 &¤oÆwB^B¹~½é„’™{YÞÛéÞ¬N-Žz'ž_¼»@NƒãtâݽØZ… =ˆ Æ‚2|*?YLCQwÅo£è¾8a¦³†’ô·O‰²õÃGôÒ–¦.JìHñ¼“•7“üözœè½ê {Çéﻼ²õAÛê%8o“¼ÐF‚㮩³¸ /v;~Þ¤8ëEðS÷žäŒ?è24i.Á͹n=Ð ¼´UÝyXº#ó ^ì qFêÉ.£%ëBl3‡þ¸‹óbù[Öz4¨;5xâ=^¼°Ç‡èK·¬|w_Ê@²þAê­×9?o1Ér1ÓÊòݧnÍÀaæ×)o7Z¦°°B´©2bwW#BÛÐ͸øp¦g9:Ôã3"¼e3OÖÄiìqcF´¼¿Üx¨ÕÆ8hÍ2ôÀÖÏÚÉ|¼©»ÍäZ·ánoxÕm ÐÆÉ|«Nk7º|Ñã²½°ècªaH»ËñØ(«ÛÕ¯Þñ¥}²ØÔºN^m}0'~­•ÔZ§¦üquG2LBWц) Y aœý—6ˆt—ZÙÙË qÿµàÏ︇z £½EEÁ›èuÑÆõ2XÔRlêu«V”ÎØ‰Ã¤÷°V•cQ³§nÓÃb*…TxTûÓ^’ÁFGU‹¬©øñ²šáecžº ˆŠœÔTà©EmÜ'f(V$'^’Ý’ Ý©e|_wúèÖºùë$k1ã£;ó£êÝ×[/,ï»çɈ€ü.\ç Ìe½3W–àš{h c‹Jt’†4¦£BrŒkè:üÖbÊNuè°ƒiNd7}T•¨á’øýæxA 0åk(Ù12£óªÒ?ñ¦×d-LÂd+ƒ1,]ÂÑ{XšqŒ‡ýBfP¹ïAýš;(0¥„5•=Þ@ˆ"‰/±hävü˜$>Q~}l§÷Ú3¯[IæÎN_ ¯6ÑìÜxÃz¸¯ßì{yÆ ê×fõÅØâ›ßÖ|)HaÌÑVL8íL$Ù4¾ ±jxaÛ$îF5ª˜v[†GÏõxydêªÙ¥œzrªF#޼ñ%‘ñ}˜–ß,%ù€f—†¯($ÿhžî"Ëiê1¶º˜TÒÞ]eâ2‚Y˜»#@}å}¬êú.Ó´ÓŒ÷tãx¾w–µƒ;‹¾šwd ôIv=s+•êâç ¼å]¡Œ/[ÿ&×Àøí2ÁPZcýÓä‚ÿµý¨d Æ{?É0¸èy4(¡³5‹ãë Á¤6UðyD»a.X‰çý¯_[~clÊ=×–­A=æ3V.ŽSV\¶h–!ù<(ÕP!gtê’PÕ\úKÓ}ùËŒöÇ º !¡Ÿl·ø5åÊü‡Ðݢߘùa/–=’ ÷³ûfÝße:Ê#‰Ôévßvt……Æ“{Q Nä« ' ¥;ß!‚úv@¼”âä!2üL ÏÍ(“Qì¤æ›âöŽWIlÑ¿^&üGèÁD%ŽãŒë9H·Ø£#wäƒî2³š0”8÷»««—¦ÔC•sxgäÕk~ Šæóå*Å6i±ÕJ·Ñ¤sS*v”6qÇ_£ýl¢ˆSŸè“øš‡ŠÇëž ­³ãèÓ“ ðò#mÇž a¿¨ú8·3¶"dÀ‘^œvc´ªGÔz¯Dqâ_â\¤]Î0£Å.‹ú¶0ATT?öž«p>&qM®A"p>‚¬’³ê´E6]bWIAÄóæ"òs¾Œâ®«Ü&“Ín;HFW!ÂS¤5%’üÝ2–¸«k¼¢÷%y.ù@P ÏNÇîn™ÕŒ¯7Ksè‘È4€Sê^¼;Éq3òh !ÅÔN eœÓ™TõÐO%S=ŒŸùÚžÙi:s'§§›:g[YÚ²tõ»Û¾àC݇ –À|ÖÁÓC"#ºÿUmÿ÷œ"P®,ˆs:µÃï¡­ÈSIJj©ˆ-þuÁ^K*‡5ôÔ(<Äš æÜ¥9VS±ÄÜë_æÐ¿‰tÏ»ŽArhC°&’^,õl­$“~‚æ§²Tù îZ›Ï1i/Ìàb4‘ÜQ‰õC²ïf.ãåk)'Ñȃ6dh)NpE¯ä¡•ÁD–y˜Ž%ÑnúZÃÃ/˜ÁVë×Ð0lqëÏá©8(ôTbÅIè‰#ÕjWæ)nàôw0ÁÝúHŒWùøµC[¶õ¡e„Ô<6ÝS‹¦Ñ Ëë‹f "H*Ó)ÆžVF*R¾›Â¸±—ùÿ¥¶Ž–ìnr˜kC«–ÔÔ«ˆ‰yMlmôƒÜ©Õ“̸8Sî¨RX k5§5¸)"+Œ°{qmL0¢Âh<šPMÚuÝ—2¨)kÜÑYÒ·#xLèl5‹Lf%ê¬ô<€â†qãQ¤¹¸AîLâ•5‡“WœhwC­o¯¶Ü¤ˆ°J´gyNøšÐxüP¹·¯á‰@Þ7† Îàt¶àÄhD) ^Ú é2û§Ì˜ÀjY®>–\ ¢š­óˆ¬xÒ>mà\å#-øb*©“´Å˜•iõœÚ|Á¤¦“äœÇ˜4 éˆO)r³ðx*ÌoÞ …Zpƒæ'î‹ Øuê·*jÕ®³6+ó€ONÓ¶u äéì©w6çÇ Œï*W~ùôßø í˜k‰…¡ªiBÅðK;ŠG Aßj7d¿¥P¨çdƒ’©‰x©£U—ZW²Ez‰¶ûi X‹ë¥s¥<ùÓÀfBü-Ö¨ß5*>‚/qÐÍ”€ºÐ‚ϤS¶xƒ ÿ®ÔeTðÓ­(Hß•“ÕDÈ™2Ÿ÷UÁÍU’K«NŽi”«nðVöÿj’WÕ i5Q½ÀHaV^ÿ~”Vׂ;·b,Dw[/´¯ :Ã>¤Ò:ùU~}™ñvÖôêkÅ¢ü8~˜«4 MÑ?hHç¶þÒXpLPŒO9DyH~ÿ®Y–/¶M‹/}×üb`̵ˆo!crÄB“ô ;k T0þŠ\§6MY*¦+J-„BÃ* 7™V¬„ËàÕ ½Qê:ÞGª slÈÑÐòD‚´÷á»+ܯ=[±â $Þ¹nä %=öþF»OÁE‰àyCMÉ ¢AþëÄÝáVÜZ¶ØX©Ïš±}‰7έ VDBÎâK \½.ôuÌÞj®,ÛÊ6žCÅÊÈq½´krUÔ¯WQU’½S—ÝËu7ÿ>©Y›³2—@„£žîÍfàe3’Õ`À5F,Ž© ìý]a¢æH(,(Z—c+•ʃψQnf;©œW‰ÆÆìA^ý0e`z+ißÝT{P‡—ƒ»ßáïp‘"&EÚî ö«Íw#5ÕõÜEÓÒÚjNˆ";ÌT «o?$ÚsÕw«}Hf26𢽇 h«;½†ã=+U_î…Un¯Ac|ÏHGÞ_*ä§m´©C’æFuƒ9 ‰ê±ÜØ9b7Ó\ñ6N:KP9’0ôˆ‡ µØg°++SýVvrS ¢T(€¢[ûz3ÑOAÀU#0‚÷U“aQÔúÅ”pËYuäw<@¬²®jÌ Ýk{G˜$'¶ìü[ÖCó¡»õ2亢† ö¢h ÚHsÐn­¨·zøB©Eáõ~´J½6ÂMWÄMß@ÂÌè€ÿ0[‘›0½v&!\J µ2´ªDªnãÁÌÈGÒ³ß ¢„‘F£Ö €êýOLŠëŽI­šO³\`Þå~–ánÊÄ»ƒ8°FŒ±›¢#FDØ[¤ýúâ‹|ĉ ûÌ,ˆÙÞÁäò4UÓj ÷8 ¾‹`Òåfš83fli ú›3…N†‡©=_§XïË£ü*“b¡Ô—…t:ß­L¿%yH¥À{s½á},Ñ£äË]ìÀÕ›†…3¼m~(ìGbÔ¢y«!¹dtx7s«¦>M-Œ´´“TþnN«… 
Z\ŒžUf,PV+=-6ýÔÏPÅ2²S¶T%lešï9ã¡åh–ù”Âg¡åkNrݨ?tWÇ•ÖBÐ]Âhœ—ÌÉŒ‰kkÜ5k›s•êtáÁÍ¿¢f;çðä¯ULϪ)T›BðgiQ×0U[t×C›¶¿vaežîÒÚR°~o.ŸÜrïK3ÆÃjñH·¸ýç®~Êþt.5fØrò,þšuºå„ßËòÝ}ÌkNÛVrkRq̧ViÕˆÕÝ<õ\«*žÞ[¥œsB<&½›wç-æ(`ãz ·‹@kE¿!J˜ª‹Q·ìÖ™ò-YZGû§Z÷¬iª£IùiCÌq¡´Íàv5ýˆtόAûræéµ÷§®–A¬¡{PSº2$Ì`+QT÷Ïf°Èbšnh>ÓsN=5Ïú¿£ÄtWá ’Oö¸‰î^{ü óh-$çh£ßjÀÖêAlC7l㢕­¬*oG¡+”`#‰75¼Ü”n< K£Ý_ Læ°®{ªsŽPƒ‰j'GÉLÓ‰(Ò‡Ûv~Û¹2ß^Kr2 ¥¾:Ïo–GGgat"*Èook5¿²ëÚv‰"í™ 'ÒÕßS ëe1Uù%¢7Iãæ\ÇŸ Y4;þþ1Q74"E{™¸Æ]‚Q³ºÕ Þì?ªØÜǨZÝ«˜›v]*Ÿ¨þÕ!['3)/l1Ê!ö¥ŒÈÕúnìø 7êÌV¼/K­Ó…ìbCÑ¡—ïEÑ<û”„,Ôt¤LÓ­ÿ#!4§nÖ½SŒŽZ£Ý ²×±® Ú¤¤uS¯5FÌ­”ì0Š–¾i=\Ìw䀳®Ü/Ä«Tl»Æ¦$²tÀz4îl•Ô]€¦3~Ó<!L½–ÚÅò€¶eÐr8ÝF6öåó:hï4åó³Õx2’_fô%‰p'ÄOWÜrÄ Šî4x9§2ßä ÒÛŽ¹‹ l+¸Xp© á\Trf©ÃÔRú?êBG[%!K¸vŽâœÒ_ ¦#-®oHñ®‚1’csÙÆueÑRé“VW²eä¬@¨:3CýäÈUt“x3þ&å%“/%Çß;Ws‹Æ^m&+ëä¥RÑ>àƒàs›‚•RXÇýµ{fÈ?-Û¹xÌ=Ð-w¢ÕÛ0eÂ,#=22ó³@MË—Cø®ºšg_~u ò#ã€Ùx"Óè}dÕ UgЬ˜;Å¥'¥åj¶tM•OȱC=þÙŽƒT(âÇšF "¦GíÍ [1OXS«ÌÔÀú°’XsƒU£Î@v‡rBI¿ÖSi‰žÈ¦=;‹JIìµç¹2‰š¢€\=ø4LæçO´n|Š8Š3¨Yà66àUW=PCøûmõK–LfGº{h>ù¾‹QÕ—ZF·‰ªœÙàkˆZq²÷ zÐüÕÞ;)G0rüÏ›|»ºÄ…GjƒÛõVå×´ì"Eø·[åI3'Š›tc3—}6»0ÏoFZÕ”e¤’‚¦’ááÕuĈoÖ—4ÄZJûGßyª³¨[UXô–}+]Êîvî½s]Ÿ  º—ú{3iS¦ý=õDÇ`ž„Ýöù3 KF¨/C%eBD³jÐTñ»îo†x2w¹ÿÞÿù:ÍL͵Þ¶_ÛÙyžl^µô¿˜ô~:!ûœjªàU’él¨w™ Mz@úk1¥pò[jDF$‰ iX6pŒð ©o6zk~'à4u“&iŒ?gˆ×ˆoh‘è]W'þÕ ÌJl6 ©“ÁïQõóƒe5< ’Êù§P{¨°ç| 1NPĬnc‡¡¨âG3šëfh¯ÇÔßv Ñ R¦¾Í@>xhp4š÷±¬µé¿E‘Ð9¥Å7rÚª¤¼MfôA¡kªÞ»˜v?žÙÓ~¬iúqÝðÃÐJWÌq¶¿\[ÒØ#ª1- Åd²*'¶¯©¡5ƒà§ÊG|Å?ãhÉäï‘ÄãÚÊ>Q8ò@Ý‘_iQT&¸4æ}î¥PôÔr7×ðmÇ]œ{ö'ȸ% É=NŠ˜95KU?ç+Z)³J>ÃÃCÄ;|ˆºÄƒÞ‡tÚžÙê:PÔª |#æ-íîe5ü({zÔ?EN›‡?:Jgeíôl¡ÄfܽÀ·° (·[Ñ{˜¼6×\ >ú€æ ÈXÜ1 ×5^åéÓ˜rý÷ò%7ÇjŽdï¹g÷k«q4Äáb_»×Ç•ÌÚ܆›íSÊïy±4OI¸äíhG½¥¾ÿš}½”@õ¯›KdWAMžælvwÊNUG"[!fCõH„­ ¥)ð_[ê@¥W×Ȱt‚ÔÞ ¿nËV„wŽ4ÍÄ-‰£Ò÷‹ýŸLLawŠùž$€‰IÙ̇Ð3^h)3š4=Äøj,AÀ%ʱà8Åk&ö%û¯íÏ›S‰…4¥ö¶’ý¡; 4æzœÀ ‚ C¦—vlG¶ÈoéNÖ4H’Úïiÿ¦ÞÏ 4ˆn4_SxÇ?²/4D2÷jOéSj©x7´).i>H[öê"—½Ós›&Ù'ä7‹ ÔI õ€+Ý~1K«k)ŒTAR‰ÃhAPé)f冾¦;'ø@ô0{„°h Á]wîÄ ‡µš×dÎÛä`°R«DF/"GP79$à¿r³“±° ±¨Èbq”› Õï«n‘ú½hÐà³ȧ;Q³¤wµYûâoÉp)ä.„™žJÂþé€ÂÔ^0û¼P`¬¼V‹HçÝH!ŽB—Ö Ü4ª¨o Fï—¸ÛÎih8‘};銵ðN:mäYŠïn\¬Ì+Éûj$hʰ‚Áp·Õ=⭨ǂ¦¯ý‚l^%•­Zè…G;˨}§ò½»þ9‡kN‡qïR>µßE­Ýx6]?9‚ù•ýêg¤Þ—khçúƒ!úAŠõØ2 !E²È{2;ÖORŒÐ]‡Ô*]Šœƒ>Û£Ã@§°·ö:#œB}I<=ÈKÕ„ibL3¦Ï¯Û,no¼ûrÉé=9Ø*IÁùõ&æmÜ ãÐÑb¦¬‘eh-å[­œreÑÍ[ZÂK¥aZæÌBIÂDJøY®[s¦Ï9f”ej¦/Ǽ×:îcƒPªã; óPËjAj‹ŽîçQ»ò*f‚;Õ¬kröàõxúì·ôÐŒ†¯G[W¾€æ¬·8ù©î8^È’açún›ìëŽRÁ +Bƒã >xôàQ¡-¬'(—çKÚÑ»"¤DwÔî"ZÖJÕVZx¹·¾”ƒÁü>•1 Ê\4¬BZß“$€ù]0ý„!h½ähiiœW9ÁÒjhñ¬3ç`>€ WÓ”H†àIôÐf ÎÏNbâ ¯ú]ø$Š4!Š ¾Y†#-°—Ÿ¬a=ê´l£'çü)'ükõ{›Ã £-éÅ8H'ÎÁ *?Ùîr2@۲ɽO&k0q€U4<Û„yÓB[ O5¥<2¥ÿfòSܺI¬Äv·¤G¸*È”¶t$„ˆ<(Ù‰é‘Þˆôn®w =â8x£“¡¿ÅL? 
túÒ¥f›áÖ-!°¢Æíñ.âÀäД“ôKiutz"7jÅ*‹†VÍ÷xÖoßf`³R9ûPƒfR¬kS¾@‚Gƒ>êÄÂÞ‹`r)Uõu¶¨L‡[7ä­ÛkíÕ#:QaXŸD¹ŠÞ “õö‘ÖV'«ƒÔYG¹/z;+êU/k^ØîU‡<¡«3€wºàˆôâ€Á#ø+¯à#¢ø±ï˜ÀÀ™®¦Ü )º•h U,ºÔPYN¿?&õ2õʤšs÷=ÝÖ¤;#:“>sðꫪ&ÿxꢀ÷øZ‹¸˜K(šøx[ÿj6ÒúûÃRÆëÀs˜(ä@‰¶ä¥Fÿ[  ­¢:s C¼·2Ûòñ4ñÈäŠ ó„€øŠÆó¦WqM„Xùö:zo·*îxäÕAšŽbM¦ºWÓ '–»±wkPFy²ÞNªB(ÿï˜#˜Þ“&>hJ8¨J–>åž$¼¤RÍ7š"ËÁˆ2¤J«,ˆ‘6¤ &ô¥&ÞŠ”U–uí&µ(®y4‰ \µ}Výe€Ý,u iPW.x{Ÿß꬘¥º„n§F?ÞÍÃò~Τ².òd-›t®®œ¢¯¥\‡ºGdïôx§%ŠÂ¦ƒÀIÑÙQˆ—z5?€@l¡HhÚ¦°þ•Œ9á–-5UÔŽ*FPÑ{e/¬ûÅö¼DÚï=RzÅ®\œïkèy˜rôa¼4+Z$ãÂfN«¶ë¡çŽ9âFÉñ&TÑÚo7j̲ / §'f&„ìeeŸfÂÍk‡F©n3¹PÆ“ÙEZ‡§æÖö†›I}¢Ë‰óç8Æn¬GÄ0ÉM˜8/°Ýœôþ«éùÝR6f ‹‰ªŽD ¸‘Ô‡5Ó kNE|%ž›B @.žoØt¿ªÅeÂûWÞ‘ÖÛŸ‘M¾×ÍéçŠKö3^Eý°Úò ûÊ'5´;¢ÂÊ2©âèõ’ͱÉFLã}´ØÔ‚+4ö'½¤÷•M嵎BkhA£Xê~r?)¨)Z_w2ÙÜ- =¨8p#¼ý7Ó*šM„/JëøSf"û+ÈoR•†®‰sÖ¤\PÒü¶lXX “1gN1y(jAj(£;—t:Í -l³#™»£4²³q·(j‡¯þû˜}Ú“³â® )7=Ÿ~¤½"²þíÙXº%u¥Ùê)-Cc!ÓížÖ—Þœ¹ºRxSœ«Çp)Y{’[P‚^¤¾ár˪:w”¼rV÷8˜|RÔ›œ°mÔ‚°¨0V8ö±Å²<±^ζ~V™ýxJßð:ÄTþ4Âcqãf{ÊT7v4&©âg'Ï¡3ÉH=ÉyÓì@Õ·‰jiÄñ•]f’˜~ŸÇ›"ÐWQà²ò.‡oGK­ê D>†îb‚úÒl›îó71;ôà€?enSBå>AT,\Y®'²c,ÞW.È‹\Zိ~QvBÜÕCþÌ¡áœË†0¿ðÜÙ²¶¸¼Ñ{U“´cý‹š Jeˆdõy졊nÓ¥·Ü$x¿C¶H§Ä'á´d81àQí†Ècèk…4u£E‰FËŽnõ¨ î»ÍÃàÎJeÌ À"úðEYˆ9‡c[Õ^Üe1—‰²æ”àÓ§¥ˆ Ób°;¹‡K¶ª-zÌ!Ü0Põ,Y5Ëì÷‹Kö•³êõ[ܡ祈“æ$kT¿Ò-ðxð‚Lj–Þµu€¬EJøK>…+Fe¶µ<; c Øy@fi©ÿ|æ[á2ÈØi5Õ8»¿à,½Ã䶤3g¸ØØ-p²W hW¯ÍWq•– Ž]E,–½Ì€ŸÙœ1ˉ™VÉÊï<Ò‰ WH@úæø ÐÕÈÑ,wÖÉeU£$r$ŠÊ]’»Í¬šÖA·îûJê^ë^Õ*qîÝƾ3VÖ?ÚŸ1iŒekjÌõoWàV`p ¥¿Éñê]–ÚòÏý+³hïóeü-¢Õ‰;)Üibà×÷¶ilöæLå1‹²ÑÖ¹âÕ¾•…ôÎ?¤è¥žE±!k üN1²Äƒ<™_6 ðú'/kTˆ71~BÌ"ª\ˆF©zÄ$Éqý<-A½M&>*÷C’u®Óœp«õ çÊêT*·ú€‰¶ôØCÂûׂ±|6ù´r½#¡E7Ex— ý9mGÈõ¸ÕÑs‰£ ‡õÈŸlÝ¢›Îž˜go"òtñݬôrh˜pòÛ ¤½¢ðضã!Ä&X¡OLîDKk“ôåŒ=‚àTœe=ÖQtäBÖ¿ÖL˜7ZiåÒSü4¡'Ѥ¦Àd“¥‡HK'5lQRì“‘ë´l¢Ýëu5ì·.¡£Éþ‚-]´cŒ%Þt¾H§Æ3ÝÆ©»õ^¢Š®^lÉê™®ž×ZJEø´®ÆU‰¡®€ÿ;CºŽÐD4ˆP1Vªl{w Hr. Z;^ØZÎ"‡u¦ŽDÂÂy›ã<ç 4­Së~Y|4 X±#C2uË!¸³(£’¹+#o/ÔS‰Âü ÂdS.Ò%âZŒ¾Ðl«³hC3+jŸ¦ùu\DUÁgLzAwÅ©]'1 º®;³}º\ÝX)XÇ7ݪi@ ©]OÙƒ…`ªA’±aËŤ<Ì/Jb]¨aØ?l½HMŽNJ¦²\>k0‚^ÓËzÁ¥B/@bs/—!æÿÑZ :³“Úœ¿3i"ÈI+WñQ1ÉÈ Èî”T³§ÝÎ4è—–¶äŸò¥Î4d¶4ÝËÀ·³b•ÍÞKº4r¡foÝÑ &ÙUáýîhâà½f^ Ö­–“ÐîEt_˜u2+oT{,!­1“»©o‚óU>é-Ÿ³„ýïHy¨DÑxä¶u,^¸¾Ù¥6×fÈBÆ’“âÎÒÆožKµ—˜Ìúúú›`ÈÄþ=Ae¿±OŠFÈIÃÍ5ô~Âi}Œßí ôt@F&DûýÛè@lKÚXv{AœðÕ «ÑWÙÌ_Ùµ°$’àrð£ØBŸúƼy|ôük–~p†Ýw¶j=!ó¡›±$ÂEÍÆ4PÖÄù’Зt›ïÊþÝ8fÅ(äŠíÈ>:ïµ{uÌõ¬4oZ,|1íæè4<}ðæ5%Ãü¸hŒ2µYìˆsìžžÙÎ"HÆÓµAi‰û*fksŽ9{èq„<éý•ñ ¶(ãå3!ÿð ³Li ´´Ò¸0†‚•¿9düÅézáÁ¼û±ÙPp êî÷Ý|‘(InLp®ÆÎšG22©4Ê®…6üíA—÷‹wñõ—&ßîOØêE»vÅœO([5‚€§ªKµd_ôM ]Ùl%¬[Âh…"°-ö7ùFyœêƒò´Oyò;/µÚ<.É/šäh¡E5BntìR:xÌTA6‰°²%íA®0³‚TL¢©QÀ0ψªmCÿJÑ9†#0sÙÓBÛ¥z}°„íÒp€ë" ûí +"„Ö•'C˼[©9¹4éYÊæÔ;‹´~ 8¦fΞºº o_¸-M‘胾J‰¦#;VîGO&7Õc6ÃOn¥, Ü+'zKʼnÚ´£:·&ÜMC]/£L£Ñw(¡2‘Y´Ž[dG[6™>$ÔAÒÜ©xþ{†=µˆZž¦Ÿ _-u(;꨼·¢Ú×\~ÚˆH&)1ñ9~­$¸ uÜ™!y$µß‰Kn³ft„ïjV+3Œpà3•ØÑïŸ9ÜtÙ*#å&ét_§û¤®³«wgSc&Ì« âS"jÆ4Q¸"\‚]Ô>þ—v$R°ý0äf‰öTðÇ€ Ÿ‚[dP kÆO"2Eeª]L?ˆ­0`-_Ýàj¥ô°‘xO!Èbáø–Ê¹7¦â¼#é3ß(«aÞ¡ýxm?ÜÚiòZ]“ÃF@ô?ëÊgˆ4A;Ëø¯r(uä{ u$øÐ)kº9m˜i’4DT}÷xXôíc :D7唕>¡z3ŽÙÏžjd6:\èÞšvtlÖbuÒUR§’l9nØU ‘sz=Ù£*ºXoA“’ˆÚ+!ª‰À:$ã™Qç–Ò‘r«TÿM6jø$¸“÷6?C ÖgŽŽ¬¬!9ÿBG$P{ØDóJ[QŽ6!¢Y4Ød~36ƒ…6¥Ô´ÒÃn Xo]²s|Jǘ2ãÓYê-‘Ñš ÀµcËŠjL//-×jæÞôÓs4ÈöÙÎ+›¥ Rjù]Âo}k¤çIñ m¼´h1ÑÁ{Ò¸Fßk  |F•ìÃéO÷o–0©¾x¦R5õÂq!ƒ×œF”<½Þ'¼›Dи»a˜ê†‰4%BOÀ­6~KraΙÆ3¿šts¹ô,˜Ù§ æ»ÝÝ!ÞÈgW‹U ƒ’a†µBÁÿÀTnÂK¯Ä¾V?ŒYzÍÅ öx­¬8vûªk^7àqtˆ4ê}åc¦‹|x𩕇¤™i ¢› vbÄ£«³7tëfܘî]K@»§6ì·VIA k©”Þ#µHSaw{ÍKêÐ.Üæn€éûÄ”_N|!9Òå;²9€‡b$ǾÂgØ~¢í¦%Þ+U©¥ÔäžÃÔªiàɬúHƒÉ¼ bc{Ƽ«4­jW˜ð‰õŒaˆ} šZÏÌÏüm*¡Â/³Ç"NÆeV™Ñ‹ü)Žu‹*’‡×‰\…lsf\•èâßÏ– îÉv„ÛL(Sú­Þ„z¦¥Ì["E#¥û¥+þ¹èŸî= 7ü›÷ÿÈoGr¨€Pàa–9UTTSœø&[ºŽXNOû’ç;¢ª<Æ[ìi÷›Tõz>w¤o.é,ÁM2»8™ÔДõ»Ñ¶Åpfõ\IÊ#ºSR2ÕâÂðî‰äeWP·sPÄ`Nf4®€ä½Sh™ô%¯…ëYŽOJ<9è°âyYNe.¶˜Ù¨”pÀæë4Ô±($9Ç¡´9ï\Wãž2~Ëçî(WÓŒÚdˆNvjµo˜’žKåîeqVÕ¡èBðcºÆ~¬:f‰c ²^öå\m¨’ÜfAÐäßPýÅÄŽ¼¡÷š3ÂNrG"~9Õ`Ü”ï+ˆã‹ÌnÁ E„š)¹edJ hÐi·þ;yVøeÍMQmÞªö#‚ãëeP?DÙž,¡±àùG(\šáïqj~Ñ?L…Ëý÷A^Dž |õÀ¦Ôˆ¢a'!U'5¤2³ŽüZƒM€#âôñÊf)š´TE ô†ã…Úº8;f÷Ùp¤÷d÷PìZÒ§ ¦mÛY­æõ9`rÂÙïû4j>¬¤åJàöbàŸŽC›õ†fyOŠOœ¿iÈdÑ/ݾ÷°H-)T6nÛ^ZA1òw 
ø©èf¡À…4õÇ+\ãG0¾?Œ…þžwé¶œÿ§ob.•xí6wÔ¬‹(gJÜZæö¶c0å³á!Šþ&n±Às&Íômâ\ÂÿDüïôA˜³Ò@íffu¨“\µH-.9­…7?/‚4©€ùƒB8]Šv&ºÚH5,à‘·µ#Å8VŸM±+)aÛÕ„ ¼Òjç¦4—÷…º#V“åÂc ›öHÉA ˜ŒÙnð7ÐІÓÊ‚&Ù%ò\œxý‚@ÞsÆ}›iKIBì"O(¼ƒ(,ôXRë'µË݉ÆÃ{y|ÕÒÌ”‰7dc{3õ0Dz< ü˜4ÏÛ½s¤`)Ï2'7XEDrŒ+9ëÌör[¦´LNz(¤%yÊ"+† mNd Ùݤf†UŽúV;Ä{0TŒbE££‘1dfw)„uWßðä PèL8„„\½·|Š—U…j‚¨æJèÒJ/‘*‰:4—[Ç‹°È<Õýimm |n¾N­²¯D 2 ;9Jp µ°Û8#ŠÈ9 Ÿm3“Ë€Ê(èØáÃMAD8îÊ«t‚Õ1°«ó…i`ªGôò)ÐPš‚æ&(Ømm†]R_êˆhL”|Ú©Lƒ–ù¯V¶ZŽJš®J$ëâߪÄ@fd×€o¶™2:‰¨Ûq‹„~CøÖ÷Ÿ©\ó[îÙ/‘› 팣‹Z‡Åƒ"óNSGtÁýtu¶Ñ±5q$¨:!k’ƒî9ºuœ®Œ^΀6BÛ­Ce\Š牳6¬n{M Ù¨ÑlÒØ…TBÃ1ðëP–nÒ74*DK!O|Ì fðh)h±OêO©±Ç<Öu¸Ÿ’ßëmî·ÇÎÓóͬÌH«G;c%s}!¦2Å9.’Ï¢`¸%æÇ¸!š*u%¢À8á’À[Ý)[aýY˜Ø·PÞ¬d{¾màv#ÚóJ‹¯#c\“âDv²5Ç‹U œÑOÑ^‡)D.‰Ehä{.A»ê;›T PâBåNCO¤ÄØ]V«Åä¸Ê‚7¢¾Ãh$ºÐ3ò~B½RÀÈìü¼O‡s^6ãØòŸ9#s¡HÌ‹tÓJ¸ÎæߧQø'®µ`kŠgGSž˜ÃÔ•$KoIÂvTU ¤ÂíÑ@RŒ~û4¹Ë™—ï~¹-ÔžIt¿Ú©HäãÓ>Ýãd®ö«o#'¾t¿ ÇøêÞV-Ü¡Dt‚«Ùt¥ÀüØ?;A\Ô”CüYƒR$´l›%WÌZô·Š9ç$T>ÓB6–´ÊÎL_:mìÞ×­ÅëÆÌ±Œ`‡9À¬oºyÎÔØgÑ{OËA1ŸÔº4©-8’Z§Ñé 5Øj·!Å ©tðÄžža ¡Xƒ(~Vkû‡2뮉fáÞjŽe¸ó@—3ÇnÓIüŒ‚í%g7 |/ë“ùÖ¢c°ã7bܦÛQuÆ„ÀŠ®âÕWV7ø1<\¯§vó[Ý@ŒWh°‰>rë•…ÀB_#'ôPÔMíÈieŒJðËòÌщ¦&®EŸŸôü—#"ª òŸõjbZï/ÜIÞlì$„>c,´E½y¿àÞþtïÁA9JvíÁqË'ñ7£ù/‡ñ.Õ/gS®\Za˜úÐH4êx\i °¡˜?æ¦qòçÎçä˜C(‘LšÚÙ—[Æ‹Ø\Ó׿ht Q#¨°®ôÆb©3ÚÌÜ*tסÒP”­¤šÁE”–0«~Ù‚Þêšâ"V<â%¾Ò45ÔåâßPÒ¯çk“ÚaÀT¡%QÇ££îFƒmÔ}Bs fÞw{¿½IýŽt¨SZ9 ’\¡íxiy<Mùg'”ñ–$ÉçœT[©¤œ“V­§µOÀEA.ŽZº+"³ u;Nn/F€{’aŒPÐ8/ž.GÂ)Úˆ¯dk¶*…w½4Ò°ícr,È‘ÊnžÃjò±ù‰â=Ä äSNóo&K ã´ÓLyïrÑ }ª^TWpŽ¥âˆ\Îo¦‹”¹éÆ¢Ö8S¢dÙ!ñÔ&¢{±RÞ×b\îÂÐïéùËÇ®6Wvªá¤eïŽYóŸlô¾1ã“Ãái6")ÓÐäéQ—†ÐÒôaÈȆ!ÜŽ’ò"F28qCÔ/,ˆ7—ŠæXVh¼÷Îø5iÝrg°RŽƒŠ€s éÂlQ#üÆl÷œ ¼pî–u kº;‘Å‚’ÿíEÓŽäU\9׿ÊÇ wZ•„ÍúšåDàx7œ÷¦Ø*ÒWô(ÞÖ@°æ"¥äÊÿlîx&F¦!A«5ô\_ŠM„[Òªm%žÅ öÈ„ÌþŠVsÑSˆi¬ØB—!)§™8Ñ£2Š/ÕXµŠqÚé¼23vã…Y ûõuY˜þÚe÷H*¬[š#Ú« Ñ’i¹7~ÒÕP¾¨ñ´²µzx3ŸGž9°dtŸÄܵC·–žfòß§Âð…côPRyNäõ”§Å¦ÔˆÄ?bàVÔÄe„Y¿*¶Ó!²]ŒwïäH¡ZñnáínŸj™ÒH—Šœù,©G_d±6OPü„ÈA.á`Ri&aO,‡£z=Œ…ðÚÖEý¾qÉæÙVo@![´í®="«{𩊙P÷‘³ *_£ù?ÑwÛ›·—îM±¡œ>MP6¼i_AÜBÞ¤klsl6Ž_ÈB1±C¯W™Â–\È#Ë‘ %ˆ"* oHºFjÝ“çs™¿LòRg”˜a×Ë š‡vÆ”^“IQTE ÿë>Qw´vÁ—(v:bn‰½å­{¢X‘58îÉó;"IòÉŽ k­l\(Ló2·kœrÒmC$÷@7özˆGîç… ¤½4ìø?°y~\vQ:SMØO:=BymÁç À;p¸2dÚ *«öUE.ph0G_Å2´´Y×ZÑ$ÉX~â:w€0`®ÙÿÄ?h7eë֬ѽHxˆ p·Q67”EªÀ…Îåû+æê2e´åõ–…j˲Üq¦6GÏB@:üÁMjÖˆogëÉWõöq¥Ø.ß[‡¥ß–we\aZÜ™8ômƒw†²OVÊTÝìÒG­O¢éÍÿñÆ]Qƒ°-—6£|HN†„4LºOï› Y˜ç5ânŠŽµ$Çp‰±Òy&çuÐ]_ô(¯ïo5¢ÿg­É´$¬Y~1#ÆôµsÓwùÒs{'¥¨ý —ÕTÜš"z9¯F£E±Ç 3…eÐÒz0Íp"é J@û%Þ0? 
±RBw¬Oâ†ã˜]‘yQ,ê-ÒJ⛄™Âˆí8Úf;çcݲ·g‘«V!cxpj.\¡†úBß?éTa±­’f@ÉÁ®'ÞR$  Îi öíLiHÍŽ´>¹êHxØZõ©ç—y›K‹kI2¨N¤—þ"¨çg ÛÐÈgìàñóaƒø‚MÀÝüDF&fc­^úÖLC}Lj(¬²rŽW¦œ$"ìÔ]õ×ÕDw³``‰°qs»vkÍì … [|VMy¹ ëRR¸ Mu´XËè²õ³n”Vѳ% <,)bó ÙÑ0é‰Aƒi×:o"\@ÖW練µç©3#ktq’ P(‰Þuô2ãöÎ<)§‹¾¤'.øâ}Ãac¤ÆÓ\Mé¶ôɤa3õ P™8Jž¼eÆ3ß9,ÏòMoÏ"ž¾ÀÕÇckØ8gH} —îheý÷žT“ø(¡œã‘ÅEÑ7Ejñ¡%ÁHlõœBàaU-ÞÚrÔkìž”Tßm9`9‘X•M¤ßœË—J „D¿Ã3JƤ’† Ù2]Bߥ¦ü¿—ÁKk™Ûlß*w€—ëq¸$©É‚Tû²ÛùP/é'¾n /[ù;5õ†Xjôý€u2ÉO~I[FóS¡$íÌqR¼A®ËŒx–*A¦¸d&ñ K»è„…m”ɸŒ¦iÃä¬4Œ¸›&ä>g|´„RFò\åáoMù–òÌ=]c8t}ü–®8Ì-D²¥0úã-ôÖ.~ê9À´Œ“uJ‹Wƒ\az"W¡†“¤~{î°+uU$vYjãóçéY±tC¸>Ò^€ñ½Q=A°éØ •­xß5yUMÄ ™Œ uK‰ëds=KçÞ-§È÷Õ‹›8ýhKjv-Vtø F&ÓN½Ïœe‡©ió€fÁĸ›øMLa+Ø­&90™kD”T©kÇnãÍ òÕÀ·=¬Í€èàÞX¡Ÿ£rÎÚN·ÖFKË-ò•/Ò¬™Ým§ø(EŒcˆw%bFí÷Ò¤vŒTyª:B»°måãU$OÐ+^Àg#º¸N‹ zTl½g‡L_YÁªxP¨ êyk'î’j~øVdèèÄÄ¡¯ç_€  q Ó€ V£”‚•›GPI56‡W–ZÚ‡ãgˆ#6€Mš¨ï/“èÁíK)¶E·–7è8BkÚiä #ººwO‹JÖš8·ýd–ϳõBÖžw77)x ,\bxi¨›¯’ƒ4 “h—ŠÚP#Ì4š¤>+x¦ûä{xƒbä9½Oéò¸»–éèfWD¬=òh™'õ ¹[©IY²,v¢3Su×Q¨E•7kp @å²ð?`aXÙÝÖ†HQòúbÀÚ©‰2"pGCÀ×7¤Z·ÿ7©Ÿ/«F`XÛQòZtªHT¶Å6xWt„>Ym ü±Ók›HÀ.¡[ä³[ªŒûª-˜s§3õéwòÃË'ñ渉˜æ¾˜ÖFÁ#ºŸ8Šwöf4;í§:抴¾b0ù-±U«Djú0ŠÐkyg¯²AM`þ¸ëÍ ‹¥Ó*"¯@ƒzɪ»J†5jÛ0þJ€õša*Õ?2õþÃícÊB¬«HíRhªÚ NM øß¡÷RëßÞÎPZ´îÜðÊvЛ‚Óš }¼ø ÑúÝ/ysøX¬ª±”|Õ.Æýÿ–4!%Þ\²%GŽCd7ÝWHºs1CuêM "OUæçœKZ®´¥Ñî =´«|ã/†á" ¿Áž@Ç:š8·vºý.¿ñ_“FÔ¾ —(å¢ô-¢$Šo’dótÌ!ÒˆO%G¡öû!…Ʀ;öB*~/#¨­}æo7۰ܱôVê1+@\Ÿ]¶Þ jP:/¹Ð¤V•–nJù ²{HйÝBq&ùsg%ÃéJÍ>_™³FѼã`´‚‰©âpA°pÿ°~çæ%‘—´U(O­u{Dõ}ÏdìJj‚ðR+ýÌҜǶáÑ¢§ QK èo'R”ÄÜæN—‚Œî`æÏOEOÖ7ÜGE†ð2»¡…õ–übe­õ)×¹Œ0ÎníÝf,µ2ÏËİKš¢î}Ľ‹6È\½Ð»Ùˆ ‚õN–äM¯øäÙEŠÜ4±Ó ]ºu1{¡ã»ÚvO)E#Á.÷•”â[ÖÌ¡ 1¦ÞZÖëêƒGDGˆ¾µœyôŠ,Úòo&úSAuš£è6#w‰¿(¨çquÒ8M„îê¦BžC³ÔK+η*¬cK9s6yÒÉlqs75¦ªg':•Cöɹ?Cn‹ž.ª3íÛs;úÙ‰•èUh#MQ 8h¢-M=ÏÍàÑ»}`(Ägº¢¡uÞו < ðI³”Ðàôøjk”<+>¯Ä þ²Ëk“Bu“ï¬Õ†ñ”¢)ÐÆmB:ž 2ó q¢ÎÔ–¼DhÏ«XôI“<˜³Qeß’†Ü¼¼ÐŸ©Q}Aß(ÝXçÈ2¶Ãè áœ=âfëc|ƒ5rÕ ¬+€Î,sfÚÆ£EQGzÜ7¼‰2ëÁÅKáaJ|éGrhj†Ckã…ñµ‚·%Êf@›gROÆtnpÐß]‹.<ÒŠýS@Õs£8†÷x¼8uMõA¿¦tÜòŒ‘WûÖ©5ï'“"ôÓÁ‚¬uå’Ýì™÷5ïæo$û=â.ÿ§Ïúd„#µ|YdÌ*(ÌiZ,¢.éòhfÊáfßµSÉù¿-³Ï†ì–œwk¾ÏY¡KµØ »ò3j{gÛu*ìž™d_=ÿudŒ'ÂüGô?Úp ?Ö ö«¨ÒYpþöÈáCêt¿¶[²c– .f"”á  /œj9úe@>ZUZêìLûü¬™±C ŒÐù|RuàÊt Ÿ&3èŒuÙéŸæF:O¶tß'Õð³ ~c8”vË“]‘ßWvRÑh™‡YÛÌ6ôÖÅ[jð÷PßtD`C0Ó#ÌðÈN’]ÛßMJqÈÛkq4m+#€ðš¾ÂYÝ”'0¦*Šw?€Iuy4e‰H5g$*XN’hÚÔóµBäå6D¬œÀØ/@¿ÒÁøåÆ;Žj ’Dž›Áù 3&ò¸T¾‚.…‚á¹$¡dj!²Îž4ÌÜ_®å‡mí“î™fBˆT}†Ó…D‰w ö³óçIWýnÝ™‹Dh ψ™âf+Ç„11Üüçîj›ã{ozæ7êl®@–¢y !«r–ÐvJ3rk [wîZé’*•yÉ•åýªðê©ã 0@î9¾“ÛÌ1üÕº€¥l "@Úp묛…wsQÅ$t!·€‘ö F<³ªÁÔȲ8|ˆ WšuHÛЂ‘q±J¸­uÌ´ ëÌ.=¿¿ûg[íðþ‡áød/KŒX:£=ümcR˜ŸCi(jšH®pžm1[AìÿPÃE¡GÔ„,Ö í$uÈœ;ÛbhU¢hèe«S&‚C÷T´èFÁ*ÏÇ`klTÿiR™Ñꋟœ­7mMZ;ÕÚøt)SsBüé¶àÜ®2=;m®‚ØÆJ{”âR ß:`¹†a‰›¤çõ$*È:°ëÝHyñˆØ3E% -ãÜ'1@c\GöîZä³´ðaˆí€B_6‚ØÖöâ:^b)Œ³jÎS`,8áo´Š¯d…‡u·a¨¡%ìz84ÿôY¥¦§k¨žhA©C;p Iu —ò.=e õUDËÝcar¸Šºe`Õ–ÓzF.¨]ajÀD‚¥C;<7c`&ÄÛÃî'“î13[Z¿ÖT—Kø$ÊH³rcÖyˆôbóøê><í÷~0´Û4³Øž1à‹²w—faâK¬"æ°PŽœ¡Ú?Г„g¯ï±„yÅÝšFÒÎÙB}d!„ö3Êÿö߲ԦVå2Ks¶¯ÞÊ;M—×¶?”­HhÔ¿ë¾a踾EõÉ™‹Ë_b2ôò™˜¡¤P“a1¨ù‚]½?š5zo<Ð2F8z²Iv²•iêûwLòJ»L‘Já² ^¿;ƒ‡Ùå¬à›Ey î´5•œ_K^(ÒÊóe“3d  •‘Ž®ìÛƒ~}i*'ôÏ«"kËFcµ¦#å‹ÐSä/(Ae£‡¦yHøÓº5g„W0˜åVxÃËfçh®9¶I¾šïÐ÷—ÓÆ¦\ñ#´2] ÿjsíÛsˆÊ·BÂ-'•ÌÀ«kZF=©-ƒ“Ÿ˜¼ízqÚÄ,¦·tOM"y””²=ÞM$åé°RÍÂ.¦òš;¼qÆ%hÚŒ®¤„§îalfah.·‹ðò,BÙ5Ñ-uh6x#­;ç2¹g4†Ñ4ºB“A!‡ Rà=˳{ÒàŒ+aÄu•v¬LzùA Î.…éÝ GÇ,ýZ¹·Vߺ¢$Ý´R®Õ:Th¸Ãž!³‰&]¢6ÜÆÛ¬{ù¢h2hü¹æuù†KYœoÞ…Ò2Ê ;èœYnV½³àbœˆooóŒo¸ŸsP‡½bûÑ–Ý@vµÒ–®µÀ%ºûNpGè?˜¾ ²TƽlN.y¤Ý”¸v¦ ¾Uœ±9¡ƒ‹üUA²ÆŸš :xƒ#Bxók†ú¦‘YmµÓ¡ÉÓÍë¿ð¿beËÔa,ü3N‡•‰ß&6„º¿s1o“é‘ùª±£4Á0/Rÿð| 3Ù$ Ð×wLg¹7Ù ¼É-ÎM°àê¿Ó8°ïd±´Ck¯‰W³G;Ö÷ö@³ m¤?ŒW[`„à—#~×@éµì„r¢;äÜWãÆ©aV.íÁ -ôñ™)Ð ƒy vgr-¨„Ž<ÑM=8 ôŒ}¼êòÕí)öÚþºÄš T,YÌ#µ$}„pNŒŸl¦àÖìÇ·ø0½LuºEý­B€vlë_OÅgnwÂ(:ñ/u‹·fýÞñKKÈ¢@5›M†E'ºxq™üÎcÓK4¢@ÊóåBm˜ÞD®çÁ›–/çh¾ Ê!©Åqc ÆŽY¿9*༫¥:ùª…©]NòUô z˜Öã~Kc8ñ8|GÅæŠR/çNUÞe¦‡ñ»í3vwÉÝÀõêös(\ve”ÚYE-Ø”hЇ-«‚oÓg±•пª šàÃñ"Üù£ìþž0S*7htb#¿ @HÊÁSÄÇfÁß  ÒÕ_äªf†‹&H‚(ËDôÇn;7îèÀìc¬‡QÃ, ½0Ž'#>±7•m-¹~FŽ"…¢f' D ™–ƒÌÙ U¾:·¼ï1#[f$ÿCä°ù‡9ÅÆÆ{ÿðö3Õ ' 
ˆjGúeÓ–·p-œêŽ&jŠÉ''óñàgJòoí¬•£/ÿë¬F §·BÔèî5ðÑgóÆFh·ìÐ}¢e𣱌‚uÃH*6ö{tÅ6kiœ ó¾Ýb€ú}%¬}·€“¼IqX0ÃI{Ü&NUìj)?è\† ¡8ž¦ú6×›áýè]Å¿ @˜àvõ/àîNyŠ’ma÷a¼Õ{ÞÜŽ'7eðü%ù‚Ñõ#›Œ„{!'Ó}…ÚÙ’‰Ø® WÙ’f2ßÃ)e…v¡ÿ5‚è~Bgoߢýø…˜Nð\ND÷>ÊÆXσ*ß3´yH•÷J ,Ø[#”¬ß®ZBÐkàŠïLˆŽœ8åoåÁËĬ±§øQ¢{þÕH÷):?Ôåâ¾JC^¢üÉÿouC»oFM0Ò*üB1d“ø5¦ LbÄãkI¾é}µ•`øqÂTƒ‚Ÿ€Š¬äå¡­hUq^Žf_g]@’dFŒéZ×0ý1_ ^ZÛ†Ô´·ÈAMnNÕ[åó”çh Ž1o›!ZôÍ}ûŸP­³¶œ‘6ji^—}OIƒ%Τ&º«üUwXA¼SDŽÕ^žn§ý±Æt^ÔË^ÛR°pIdÜ+ö5jÖÖÇÌt4K‘,¸'·ŠoKëìV±ñ¦Õ§2íLt ÊÌ< š$Sñ¦=QŒ!Z_ÛbSG‘{®F1'FÁHßò‚-¤C“¨›ÿ·ßœâ¸@®¯g¿Á}PRQa2j—óÞÄëÙ,ßö!”ÉUKõÊ0|Žég2ð„žZn‚p°·9ä–¯*˜šL€=Ý5õM•мµ1Ø=6å÷–HXˆ÷Âá€ïzIŸˆÎôFé®Ã¥Q¨ÜKÙHIV9Ô_¸O‰ òþ̱»Fœ¥Yí?ÔÛÎODÁxÐ/'ÇæÏEg\W;pÂÓaã«Õ%g^Åîµ&ý8“óµŒQÌÃã›ùê;^îÝñÒ)«Xw6×HAr¦ \ æT±‚¦ä¢øQGõÚãÆn©ê»Ì–ÓúœhµåÒº:5‰œ[Ù˜xçÙÈA`ª_Ì|ð^uîåµÉÕ=Õ)-â5ˆþ…S ªäÕï¥þ7#ˆeT†O@¢á#7Å…œ: «f/fŠ ŒÆ4¾u’{¨W…ýØŠË6\•RÊÈåFîö’„\ÛýM(”çѸ8º”¨WõO!Úyrnеu©ò-a63ÔãDw¡­Õ4”¥#Ì8´òmœ¤Õ>¾x˜ÒÓR <# Oø|¯{Ý^<Éß‚cGxòÍù"Zósd"} 0)ÇCñ€“)ÝꢚÚø[™+G^‚ž°Nc`j•ÔUH"/^ ás* ï+Å“ué¼9 å¢àÌÉ`Ãêür‡lr,1º•ÁÑ 6uÔAukT{ðiöº›¤jÝ­Ø6K:¥«B<šÿg¹Ì"çñ+í­&aWô:"… Œoípjñ9ÞENþéü\¦Ï‡gX3QÞ1´®u®žpPà”+÷xoqŠÛN™Çt­A™5å=-Ò~[wÕ ÆH•µR×lX—#„^œÛrH‰¿"t1#ôªÅÌ•«Àd˯ú;#ëz§“ÙΘ4ïìQ ¾œW0|+¤'<>ì!=Ô¤PŸDœ%ENÆ`ÈVü¢=lŸ"7‚|,!ç¾ ^Ý1ç, Èjw%Ñb¬kmò…‹*¿[IÜɵ6Guï²®tD0öºAQÔ¹‘Zf¸{†}dÅø©Zo. Lg¦õ Ø'UÉKSÎb+κÊ/*G,Á@aûþÓ®±sg½¼±(8kn©Ç sâayÐéJOB0à‘ÂTx31x:-2 Ûs{ºJ^|çc®A=O± ì1¢±÷8Oô?›ØkWb« n<+Tc¤_¨>ÏŒy0­-Ú¢Tvð›@û‰… 7]xÓL:%Ê×Ñ'¯w+X¡¯–m9aSæX­)×F¡_£çÓž/9N‹×?-dŸ3·Û". \{¿ {ÑHŒ¼è,‰' Í Zm¤=Öàd9'`Ä)üqÆ·®¼•ˆMèƒ _7˜Êê$kTª¢YjW»¥Ñékþbþ7›ÆL­é"fà8s{M4ý³$ 4S §îKÚ £Bœ8nà7öæòÝìɘËþ JóÈñ7¼§¢˜McXÁê5™ Å~{Á&¨b¾¨òNÏPÖòg7ç™ã$‡¡za±;þRúíf»TLÞ?ܵb›{±D»ÐBS!jŠRΙT¹ZÝêÂR²ƒÞVÍù`4õ‘ÉŽ°Kx$Ïr *Tì(Á?¥¿šÊ\nV ]âaØ–sE{)­l‘òè˜uW2_ŒèxHš}Q¡Eí²3€÷´³J¥o$ HÐyˆ¦ößð+Ê“)ûÛ1·¥8 EF:TÝN­+ŬC[-¸¹¸3vÌÏw²wÑ™r(Y¬*/²5pˆ)h]g†ËÜÁ¨†`+$÷sӿگ̤åÁ $Ñ -J “ð·:¬6ÀŽ\¦@àìdá¹|ëÒ/“+éÒqKÏ_…w÷€´µšJ³¾eƒ7£[&—V `Aò“‘­ùXag»ù¬(½8)à‚tJ1¼&j¦è™¢ž ,z¯ÑÏ*Ì<¼[CübÀž¿¶Þþ‡ØKŽ}Sgéc{‡s ÒZ]wòX7‹«òÛçäx§+í5ó¡¿È˜ö€Ü2×ò¦Žî0ÐZM)-j’&+±ÿ&¬,á8[õJj¤j,夜yûS›2 Û)ûª Ù­…i‘’L߀B~¨ÝL™†ëL¥kb¬Ó{q¯[uç_TQÕ‚…D{¥n«‚{åÁ$‚»Õ¾_1Èl°ÇÈÍùYxŒ;§‚A©Ü£»î¯Ô-Ðv [W ²Jÿ¼‘YqÃ9PŸ››SïI‹¾*Šéš³²œ5|`RnW‰2Kj8”{„j™­~Ñ{ß<^2‰j£ZØá ¯õ]æºÕx/v$a6ONÍ6~i» %êÓìx3Z1­'Åõ´¤fl¦k‡lh6Ðý@/1àWMoMu&—ôõcdÖwçŽzý“¾Q0ÑÏ»\¦õcU DªþTÈéV:έãSW2cÌr㜸\ÖÆOí{w€m+¯wÞU™é¶šJô9¯8zŠüumÊë œâ¢±E ù0°b§ÊÁWE×ü„ð4tg!¼·–ð¹7öçk‘áDÆ|‚qúZîw4Ge4ÃÔá– †T¤nx Ò<=-ZBLÐ4wÇ)YHOo8“VQO;&³y:.²Þ·¹ p::e¤a¢98›†â¾ôÉYT'Qnô–™]†!["ýªèúך†³e„å´[6ªØ™ë­ÍC$\7îb¦­a²:aYƶCxÜíDºß&ÉEQi«‰ 7qÖezÌ} «µ‹2kczÍúóCÍŽh‡¿RÒdi÷ÇzJ2Ò&ÃÚ!„HÛ–a™Yvë×8ÔUǵ/Èf»ßÎOèǾørµ4fUl2_ÊKÆÂIØß¨¶T¥BCQgŸTÙ›!#‘ףѼó«nÝ1´þ,ùrÞò¤:Ùq¬š•3ÀmÛULKzQ²|÷[‚Cá‚4_U£íÒþ;Â7ÃSÖ7MÒƒžÔL‰A:)\HÓÊ 7Ï ¯Å¨›g{È ¹;ï¿ïΣ|KQxuX(ò­»¢±¦íx,j8=\„"“<¯ò:–è>¡¤ y‚­y{§‹+ue2†©ÛèO¡³Jæ!?KPkėߊ iÑj3Ý1ŒšEMƒwRÄÝ uxWÅ% ¢gëN‡7áä:3£ÑƒÕmøÁv­•í?p iÇq ^2µ.I`i˜$ ¸*äyû¿;ó/ΕnfþðY­\'£WnUr,S’«ˆÒMäK:fûæ(?•\A Ncàz^íæÅ!éqDÙ r‘´Âg"¬;¿  ÿ¾}š|4^§ˆ*Oßs·°íÐŽ… 5eÐd'sÂï«ÙîÎŒÛèjzrLžÁ-‡Áàâì´w\ÿz§ÍŸ©«;˜óSG¢èÎ^I海î%‹± &•ò´öødƒö§}E+i|ORÙº[vCøî·ƒ+:~v „{-|F4‡ÖTÛo‹HÕÙU‚Õ»;hߌõL:ßîIa„yS‹ŒÑß™|ötKŸéjF»fÚTÓYúÆC(9¦OËØ™•Æ¡ïkS>G9&Éex•¦ížÎÁL“ ÿãâ75 ÖÍûEþ*@4¯So€<Îz#shX–o5Å4çJª.Ÿ3>é#œUŸkAV Z„ˆˆ-|»d‚/Æ0õYÈÖÑIÛÞ»i…㸗$S\§xí“åJ}ˆ8$R^x/ËäSäg•MYX—]L´í?X’ê{ >¤Mº-ÎÃRR`Á ’wøb§×»wJJL€%Ÿ÷ãKÇÿ­OŸÄ*úrØ@rAüú*òŠ«à­°D ´¸çG;@œÐÿp»V-¯âqÀdâyÜDªÈ{’Ð#€µ‘š<Þ^“EŠÓ£ Ý•éŠf|·¸­ì-cUuÅY 9¦é~á^øiëBÁÚÔúi ‹_c’Ï iø!‡ÕS2›Ÿ-s âg²¡õ}qt¨Í¶äFu$J«Ö ?ÖuñÙ]pˆðˆoõYZÔd#I][MÏ1 u™_,qzÕ v¦Í鼑c‡‹Â{>"?‡ºt®AHä½r¥ä%M#Ôi!có{Ý .‹%”úËfÄbSÊg¾m’Óúûý¸XÜ©Èù±ÌÂbãFpXŸ Os¢B–ØRÇ’Ö-Š(ŸM' kƒ›Ña½6Úé&ù Ä~¸b Ô1\[¨_ýNõ»1G“Q–“oKèÑSÀða_´nÈÞÞúF©ihÁ¢Ý!løžÖICðßÞ~‰¢Áɉ“m,LêÈ]sOóËŠ[Û‰á@1G¡§M3ï® ÙÇþÉÈGᆙ¿G*á‡üÒh@©#‹¸~V.Œ>üfvÄ„‰bµóæ\$€ 2[;LUJ uçÄ="BÈÏ÷1H~½|’Nò6r˜=§N*ÊÄýmõç&Jþ•¶Z—¤ÍXWjU×x™¯Ä9ŒÅõ¤‰›Øïëó½î¾zhûãaøÌ#‰ñBPf‘œ}˜h»Ž™"«3®éØ÷M†rG<ÖìȤÚâd?Û¢TBZÛäN7‡jï”ß½;ZLŒªû>ú„Úyè“oU!°ë¿GZŠÛô cHë,F¸`ÓyUûD6¨ó·"Bj+|*´Ø>ב6v1ø ÚDñ„uƒ 
B:ŽfLM#Bºð½Ôª…Ý3xW]%iêÈSö’WþúR”î9§Š8jªà˜mo½ƒ½Å|©jü¡±“AOÜ_¤ _9“ÇZ,Χ£]ä„a ÛL7q. ~P Ç7‹U­cÜ‘}ÞÒÚGþ#ˆck)Fž’ÞݨAl³Y—#bä²–h>èAåüó'ü??R•ü裸 Š,ÞWU;ù“ÓUîôÆYæ¬á,ü^hÓêRuê.˜Geùl3µqD†G5JHNêiyd¬«ÂŒù–À6`¼6&ÉŸ„ŽqZIµÔ•zà€tƒß½óŽØ;ý«åù0£·‘‡ck¹ŒJtÜÜ_$ˆëÛïNDãù!„ñVWvÎe°…áSõä<ã¿qÄ‘øh¯ÄÝ(^#ƒ…B…øV×79 É %®ußÏ®ûÈ &à\ŸÔ“L[?´RÙ¦d8=*Ìqo®ûºœr·xiåÝMB>h%5†ZêÑ$IÜÎÀÇ-’1ŠÛ¥«9 UlZM¦ŠŒÑÖVîQõ„=¦x$°1{Êh¡¼[gPwí¹ 3|bÎß ÜøåÞÁsKŲ{FŸÁb¨ÁãB¿¢;h ì·+¬·¥ÌÉ@TõÜÉJq! A’>bÿ¤¯9ÍÃ~–ýžÁ;"ïðrºìH]ö+ð°Â,÷ÅS-ÏeÀP4‰FJTƒ!p‰¸àÅ`üø2³Á…Çæì^œ"‚˜Š'æAÒÞ’ï‰ô<ƒ$È;…†|ló°F±Ã•´Íáv×¥•ó›Îv¿Í±rÙTp*‡S\‹2)4ÿ.c,iâÝ= ã¶ÍPó$£VÝ0– ·h[„Óu»ŸÍ w£ØÙЋ)¢zMÁó´ä.„,÷¾Vgc[¯{1hÛ•E}œ®7UðÝŒoûVÀ!CÆTÒnQ5>U˜“¢æ$•ò0-R|‰ Á“ä‹jŒû1SºçTÆ” ¡§a¿Ydy嘰BÍI˜Fs/¥š0œs ;¡7òüXÛÕ´ïý‰ž‰¨¼3ʦ¶S>Ùyª°ßH¼•Ðr¯:rª’oóˆr]邾þJýÔÎÈ*1£i¢H×õc¹ïÊœ×ïÍR͙ʘÕ6ĈØû°·´Â&ÍûrroìÆ=Ik¿¾Ä8»HhéjQõ®w˘ÿ{; Ý8é¥ú˜z'¾ÓÇBûŠþ~¨OOxXmÛÂs[MÃdK=¨áY¨Ñp´í¾½ #ÇË¡>5Žæ¾3‹ÜÌâ0Q#õaüs™µÑ, µ†ª¸ét€9™ãúã¢^vÎÛIü*»P¥÷x¶âÉmÂÜšôìMvgVÁEÄíþ†§)L7Ë È=5–×r†¹*Ùy/8.¹·¢<Ð Ý´œt‚†X—ý¶´Àï_0‘AE´8ì-­Þ´ÚH2Ϻi)•ûý7¥óÞQÂt…á0nìë­æ¤8u˜³®viÑTõÑ–¯¹ª“˜Ñò¦7×»ðOè2ÑB„ÒWCûIÛs Þza˜f·ø¸<„Ù“EPòºŠ´ò¦Xßb’æÝÙhΛa²ÝYµd/i¡{ôîæBfq™#]Cµ Vú‚huâŒøo÷qóª=5%ó³Bã'R¿—-ƒóß›–M\ˆ,”¤…Ïx‹ÄÇdÖÐT½÷þè\wsjzƒd`ùâƒpóûV±¬U‰ÿÁw^ÌT…ýz¡ Î(â¦Ù7’yLmÚ±Šßm=ôŸ`€T1Ø" barѲ7ðòI«† ·8iF–Ç<»fêÝŽ²ŽÁˆÜðuÞK;ûü&œÝ¸¿œåßE˜AwLA̯Y_aKâ„•ߨFÇžÛÅJÎ6À iË¾Ê ü5 ÆgZohèi àŒ¸Fau“Ê÷–Utk©ã®cíÊx^ÀŽÝ›Ì`oÄ£¥‡ÔF„³“Zªç&"à“…-çvÇg]Å‘¸¼ßöŽŽPÿÐJp„AÿÏ ǛSÓ_R3q‹p ì*ZEbA~B)b‘÷ϰêg'{;EŽ÷§E§4óBÃêí(écÂcðÖÒ’e4+}>_#30[ Ê-+È:t‹C‰ªy̤vDž’¦#»U}Þª|4†Õ®ËXpTÿ»ä”QÏ*П×c2ä³q0a_A~ï+Á<·MQMŽp3¥¿^r“‹ŽÄk3ëÀ.ÃÉ»BÏö Ë‡ZquðZ õÔõ48ÈÉ ›}kÐK“GšOÏXZÄ܃e$9WIè$…(£¿éóvºIux”Xξ—m–O”ߦp‰Íó!Mšš“‡'§}×e8°‹Tqz/¹ævú…~×Û×RC†n/áËûý¥´Í­Vµë8€ýƒ¡à!EO_ƒ”l6形޶°Hq(»žSŠûû^ø3}{Ö—¢e"‘}å´a¤ˆfÆó‘!ÔÛ.£Õ¼˜ôC!˜ ÿ8œ”6l±¥¹ËÓ1¤áœz4ß9KMã{ב*ˆáÀœ7œ4Œü5#‚à›ÌIØ}ã-êÔœ2¬j‘T’'(!D™Þ…«0¿s“²ÝTF37æoz܈¿fûD(°œÏIšJ¿´å¨è(fÓͽJP>ºº9ç꘡t{KÀ/»csÞµ‹Š;EZ‘ÇÂ¶ššŒ’§ZWÅU[çã ~ÈúÌß1îþH&‡ÚzWÃ]6œÁ…éP2W©žr$=À™ZÑ'‰Ï‰¾…k½î,7n¥Ïît w‘úSg¥ÍÖˆüoí„pýÞ|µÎð fÔu¦¹M½-»·Bç;òÓö‡ùDÓÇþ›®z\û†½$ŠYÕv‹¬è´÷_{ïœçuê߸×aÅÆ6­R>y†½d;òÈK¤ú¨ÓEÀÍ6! 
ÁtSRÓÚQÑ»7tIª:T5‰Zß:B`ŽØ*ÿáN úx;ÊJã—Ü$c˜&ÞX¦»q×=9´túßÞTÏ@…7ÍŽÍ);b»L Šr+DÞ“ iê‰ûCÑ~Æ‚’AÀP:h¶ØHDafÏ ‡ž]@W*|këêsÓÇõåÍ«QwC‡HIØŠì­©|{Xù6ƒË †¹A/7©cnÅ™0ÕäŒ\85;Ãî/^÷½%Œ94bÁ ¶©à*!²®¬Ã¯¦þ Ñ1…Hᮊ|ì°æŽ€I¥Ì÷DjeR5úÛMÍSM$›ž«M]Ž˜c5Öâ.+çií#ជÑRëÆB¯ÝŸþÿ”y™'©*åÃHº‚1V52Ôd$QKá`lªRðÐ3ŵ MSRÞ[#U¾£æ»k<¯Èݳ‡Mµ¿ŽMêͽ7ê„Ίѱ}WÌ/T‹‰Þ~©£r˜ìåöùT×¢CUQ ÀûœÔë„e ý…'f€°ÆÛ,0{¨šj§þ*‹ ¾œ*þ*—q«îãÍÕW Z´àB¢;ó ™\l\mÒ«ƒ ÞQ(£ )›ZnEd£$‰¿¬5‰†i3l"$w¢¥…ùš“ø_ä4-ldøvrœl¡9 …ŠÅ%°j¸%^kœœy6?Pô„o¾m§îOz° ßdjÐqÔ.ÆYC=¸{¬@WÀ¤ýi Ÿóöäe}DºPOnM6ÄxEOâÚàªOŠrÊ ÙÒå •f߯£´rLñÃó€ ±¸êGVShàðWÅ@ľ°,å{É/Ó9 p5y]֠»[t.w§o7ÇÐÕ$Ñ­éª>³Wvÿv‡Õ4Þ]ÈÉ·”L³Û½ò¦Ì&,N^kä¹ù£i&ùn£{ #ïhmurwOõ» ‚·iÖ̆â'¸Ú6›á-$~*2õ´›­ò:”d¸âb+«cßû>ñó¹ù2 Â+1 ˜Z±§Ó@V݆ŒNèõ Âr°´º·ÝƒÎµEûX+θ®3¯Z!/:Sj`ú0íTµÚvQÇE ;õCÎGZŒ"öP³i‚N&´ý÷þŠ8³Az––ù«|¤Öéóe©˜^igšãˆ:àþh¼¶´P?ÖmÑ>qLmÀŽîkÎì"qÖ¹@_¦™&zkSF0É|:X®ðZ¤¢r²µuöã‰ýˆ”Õ‚Ù/|0}”œõž3¤ÞM°Ê °ÎlZ[ˆ[y0êÊD2µ¿ FW õ{9ôµ}KÖpÞ¤@‹\kj1hÆ©×Ç(Ó-ã¢ùä¤A‘CfS‡« ÖzÛ4¢ÀÀl©Á‘Æk@”—þÛ¦óV¯JêøQ"Ÿ˜y¯koÑzB¢‚éß ©x |]E“½®áínVÏbÀÎÁ‘œ:­^ä²ã˜’+½¸G±ž 15н/ƒgøäü˜v^€?±¸–FͳgçMßh¥okñs½–EÜrÛ8Ñ?æ|®väõáf $çk s-þæ±TÒõ·(]” ‡!—úÛàÌE0N]˜«a(‘tê}ª–žt‚Z«ˆø¡zi-í»S„´¡ÇáuÏãxl¶åmYkº.ê+ÃU zS¤“‹ÿòª¢æ‡#Ë•gŠhV÷©æØ<ñ9DÁbÖè¾mâå”Uº€˜g^ñ¹sæÁÅÛj§žÍÁ/Åž:a¾µ‡™¯¥Ÿ†ÛnÖ¦xÁ 1ŠÅ΄õ„uthBLiÿÎ aš’½"OžˆZlæW¼%òßÕø„–/ïm:·OÅ2øꃒþ)2i}!ØXµ¸2ÌW¸*ÂÙ]’ݲ:€R‘ü†–°Mi‰{™!ŽÊÔÓãï¿Ë脸J\“³» M«½Ýo Ùù¤/.Í#ju9.±fô¯ÙÓˆj6šºÉò4 _ñ9š–å{ûAS·xJM2sTô`úL§•Â2=§~Oóî”…Ãûò!K{ßîlé.èò{ý†.F+¬@¡ ÕÝp1CDTž.È«ŠÓÆyÖdBþ^ßè’gyjl‹b6 TEaI€ò]cŸZ- áÕTZò—§yÔïtM£^M­vþV¢›Õ×4ù¼A…]Í—‚F‰Ô&žÊ™Õ%Ž&Ñ¥ŠÛÈ£ ð‰Š²s- ­¦€”mêšX•r>PoVëH’lšÒ,®i ›÷‘hT«lˆL³”É÷ õ8 0D7ØÈV½¸»Úç’¥¹˜`ü Ï»õ]Ãu|,`õiòJ[̲ŠM8zÈ.ÉÚqœATŸèŸ5Ș¡øcHsx|öEˆ®É#´¶–›}ë÷DDÔä³LYߺ”UnùÞɲž@=ÛLf¦¯iR˜ä L³²ûgvÙ®ºòÂ;lJÞÈÝÌž6ðú&ãTÙáÚAº˜bmJ ˜RØÁmø2Y»xGÁõ#Z˜bc„Æå\¸‰‹sCô5jT¡}%º¿p)×à’*-Xí1Ý<†OLÓ› \ç+U~0³J¦‚vìÇ:¯Ç¢Z\ ú%ÚÝ»#®Xþ€þm‘íÁGÌ®˜-3›Ã&åH''¸k5g«ÐYšxeô¨Ü4?è|ŠÁÓk/¥ÏÝ&zTkC¢ß%O¿ÔA!—¦|ÁŸ¿¥)æ%:Ö)VJ9$c€”Œèøí;Z_óeP7†ðï2x nß‘$sà¨ÔÌÕDyN ãô–s¾ÖžN9XÒ¬zÙ RMå÷¯µ®uv:§x""5GýNÝ{Ly÷늧·Žø–IqüŒžDƒ¨ç•¹×¹¬ÎåÄ]Þå=ÐzŽCŒx»K[г4PÍ®s$U¨†²@¦`úqÓª!ËSŽbCÔ¬…TN8lOR°:K?ê̾>gç?*ß 3ǃrL´£ÓÚ­™º#G^¶eó½ñ@²h+½ …ÔD¡âñQBÎ#ÃF`KDêºØŽN"½A†M³+쫉òMÝ>oqQÊÌ´Çw[pQ¥Šž¤#áHA‘ÛÀ×a‚ãu£y-*\#~kZׯ)Žœ”§¥ª¥ w»!q.ŵ˜õŽÿ¾Cp \›H×Ò—Ž 6_l‡ã´²O‹‰mKåÄVx—ÏM§³Ûö¦Ìà€dz-T– þˆ1Ð,÷Áÿç¨3~2Š0‡DçôRçÉÕõ·Ã jb]„ýãú€.EÃÌ܉ (4!& Zó+ûReu;«ìNß:ÇÇÃdÑØ%í/;v\’{‹ÝÜ%“¹-˜ƒ$kãÂPÈúà¤Å[£2‘ðÀ«7’dFéí+;0bn9ÉåRÛPÌ—]Å–0æt@÷†l…Kú'^wÑKgâî0oߢc’îíù©¾®Ò@Tɦ>Øû¾?AwRîvc°Zi ª‘ÝbXÌ’N‹n6Ê ó›qH³O¼´/:z&¸×…Ë›f¡Ï,i!™‚ôã@û‚ozLd÷ðØ½¦kRýýô#5ÜÏËÕjkÐ&}p…L^=‹K9L…¥¦$-Œ_KKø ô1ͧœ’Ö™Â5x kÝ7åPEÕ°SÞX°®ë‡>&÷šYü]­5Ëh3Ü”M?Ú‡NVÍëÎüÙî9x¸Éå/ÖR V{ÞˆV…"j“)ó_ßeh7Ì1©óÏï°%’¤(8åû§ÔÙ ¹¾ã23N¦Å4ØmvÊ‹Æg¬QdÎÇšsXY¿øp øÂFµ:#F ½7¡ÅM&oÄDYu õá Mžµ #}Í+¤ªNK´úD_¦¦<ÝÀ ‡¼M”Ëgým1·e3ç$²Ú¾ša(™Ö`žÅMpiÞø+ŒìÑ›”œ:ÆZÝ»la° Jé+gÆÎÁRÓý‘°ˆ‘¶¯;ƒð£*]J¤jGÙ-M»¯@«ð¦ì‰3i^Ó…º©Ã*MÝ»qÅEÒ…f R=Wf¸dd3´Io¢ªÕã26ƒ|²Ëô/‚ûå÷뢅9†tT.KÃæmã~ˆG×/¶¡4Xœ6î–fcìq( «½¥žZŒI›fÕd;qm£ôéë cžã¤¾21Ä=Jp$«, =¼7 û!œ¡HBëgûÌŠf°ÏH%=Ö hæÄQ~†¥FØvµ.KÎÇ2¯ùá!Vûô_ÈÜáCjdWP~ѸÝN.ZjBLíÄ>³—=tòª B~‚:–iø,VÄéü8wž’rZ_äüÁ¯ÚŸ·Þ—bùHöÆŸ…£%% h‹G䊩ؽm¯í½ÓÍ^îϹLïYTk,°¤.ÌÜŽq2ÉKŽÛ¤§w™ nsâ+"c+I8Õ.ÍCÚ+ƒ1 …›àþƒiðþ 4€(w¹‚©PÈ•Is ƒ£_4ÅèÑÓ}pàG°â%ÈÚŠ±C”‹¬ëq&;Åž‰ŒS‚¸ÿœ¾Ð<ð(^E%pâ$nŽÌ\²×‹!:ZTu0΀Ehø?€Ž _WiLF0ièfŸsfKq#³vŒÉ”‚·ttCãkò+ú&¡š!¨'äWLEàhqbTQ¦YÔ­i´êçÂ’óQé*Dç±$aÌìÞ—¬©¯òÙ·Æ^i;„`F p*B7Þ9\kFpZù§})°©JæµJÍúÞth ÉZ §.ßQQUgéºç€EÞuÌò‘Ðè=F=ïç“)Éþœ^û¨ÝUÛ¨)IÓè}«` ŽÉâ”­:–pÁ='”]â‘ç˜iBSÔKQ¸}õ6€rw~÷HVd|eÊáʟ|†i³÷I‘§Kán³ Å d´ÒD4Á.,ž…8Š`ç£Ö½+™KÄ È\èžUKà¾ÀB_k‹º{+#}{ù'GÌfMÆ]úô|8hËÓ˜Œ[†,WL´_˜q•Ôç­%©uÚN?›Ob2êÕª±Ö`£Iò2 ,ÌÓð=[·”–0O‹úëíÆãåe_f1ár”OÎŽëûfxtf:›€¶SšB·}¤1$ˆyu“iŒ×欙I&y¦(#4!ë‹…NVäØ"4F~Fe»C Q=óÐÖ7ÄwÁ°¿j©7œ¥ç“M^ütNväpœ–¹©ø3MkQ4u ~–äO§wXú§äxO9hº5VcËv¹‰YyÃÌÕ>ýO§Äû}pWÏ! 
ëµ-@º%kÇj±+KcŸVL;7HZ°™­›I‹qÖÖµîeM×e DV¶Ó67¸2•NÀfÓ€ˆÜ¿¹kË6Êݬc´ QïôÅDý/EãÈYv#´ "«ï`q61øî‚<¡lƒ­’¨î<)¡œÊºSÊ4I­©ÉÛ“”V&7¸Òj‚"®|QCå8Kµ =áäÉüžë•µ&£ƒôàþ ±4™vÇiÚKG¢}œ’׈L&‡Rqn_íÇèJ†ÙrUYÂyƒ.’n¿²tÑŠN¸=„Õ±M>l@ü˜%;Mið!à× öÙÿ 2(cR#Öu¦g-íõRp÷bNÂc‰ØRšÃPÑ…0žZ †–äÓja‹9Þ–ˆB")7©uÔÖÙŸËa§Å–·×Q6wuYU.¯Ê"Frq.By‡ã0°×ªVs¹fe1¹½ G8 wtPÒËsS†,'âuÑ)tèöUq†ø¡]ÑÇWi“=Ô“üO¶`Ix)Y©S…7h®¼Ì¶Älί?q~EÍF !zoqÑ#ýGNF‹tr›´ïä£ +¬îöø¼7îƒmNûØš± Öi˜ ùÍ‘\ùôƒ/`23d¶‘Ã*QØ;2v •Æ»'†1$ÞQ¥žæd.z1ƒ^g™¿]™-+§^U¨ïcÅ´ª'‘¤(å|l²% ˜ôå3‹S«E¡Z«l˜ªT­vÖ(7W7RÿÌá2­Í߆þ•ï ‡¨(z¯Ì.øß!×Í·gKôˆ®®«xPÿ! ­jwTñ1ž·¬µŒuDoê $¶œBi¹ªH´º„Ÿ[7Ñ;ÎÚFðÇ\jsyº_^3-³¡-ôèÖÂ$!¢A‡Rz Ã Hµ*_k¨FÀê÷p£æš³m/,^…)küˆ÷Ð büÆ´}ÉËô!žþõ¶r'BUñ€¶ý“5&jäq'òèRJX= ÎQ½E…ð kþNX§7T ªSéeÊå«ÑwëÃÆLm[#)¿ ©áô¹è…OãÚìÛÉ‘Pñ8ºà >h ‰A`fÍ‘ä'»¾®–PCyí p—ËCAnˆ$ÈraCYÇ üCt=<…ƒŠÑ<-ØçÊvçŒàëÛŠ9Ê+¨Õ¦§AÐh¤¾zóÅ•nõŸuƒVZ!twã]ìo5ÐKÂÐæX.žÐŽYf³øS4f´¬‰ ¸Û†‚ž·ãÒÃJ—Ü’1Šˆ“)¼ö‰[ú°ÜOÎ+,úÔ;Ý\ æœUôåbÍ„eŸÃ(rê7÷´.Ø0MQ’Há;±Ñ¼Y1b®0S^¨îf]øÓåÐ=ßBŠ´u¶VŠ åÒ;`Ôw¤£sZ’,Id|pÝ.X>Õ…6ŒÏL3´½`MoUx©R·KÎÒ°ýè͠ך:®>žSòe;çèCÄ’hox-I-62`Y½\ŒLO«³AYÀ &“Æ$}Œ+¤/‚Û¨áW|pêþÿøÐoýÝÜÛîë¬þ!ÐO›ÝЫÇç%-~ëÐnºÖh޹Ð#'š’Æde³R¹@¢s²^…¨Î;rï ˆvå)¾&£Ø—²ä\†§`f,7\éÓ¼VäMNÙ]§¦¸¹›I°·&µS`ìŸ 4™$* ®âñÄ,Þüu0kp=yÁ½ ^·yÚJ±­q¼º•b˜E Þ®í(J íÓƒ=3ͪŒU c#IPœjY]WjÝKžäŒãòªVäø•Þ4÷MÀX*ÂÞ1­êW÷‚£qyO©àqùÉøh5fpß%ô–¦AN«> áßr[Ìê~"¦äœX€ 6,Q´¢ù$Ö¾E?Q•æÎ8h" –d&av–~UTu}d²²'«JŽ‹ñ§%K] >™È[iþŠÑUÑ(³ÄÕ*ãKYL#ð<Ó6TÛ ¸Q_f0nŒ&ŬŠÔÔ(ûWv)Ü«›Š€âìC² ójrŽŠÿ½Ðl\MèS»ë'ó:Z60¥>°W‚ºEœµ‡S±¤à"npªR€£‰Ñ´3Éiþ€°ö ¿rlöÅדMäÏ%"Iòô.õd¶¦\šT}âË,Híº/ºÁÔQ)ð]EÃzƒi°ËJ“ž$§©³¼äT†Ù.µa·`‰8ÿ`¨Ã’Y20vˆ|ʶ&&z¢8RvœÖÐJÀ¾¥@Œ*ú^Ž|Ó¿¾∹iøæ:dîqpäHm2¦¯`ëZ[ÂUÛªÑÃͪ¥r8˜¢™@½qqëÏiÆÆ ê‡DÔO̾ӟZ7)}O¬© ç¦OÃ!îË^í%·Kë?Rc­Ì¢[ºÙÕÁÞª“Ôž9·–v= mQ)¦¼$_± ·kÃÑe&Ñ~¿5.ßmRq7‡N£“ñjjÃÔtÆuæœÿŒ;ºßÒYª²X_úßÞ 8Zw…ÑÏËÏyvgfê¼”'ÖõJ É Þiêb‰)oª¯•—"§Ík…6²|§]‡ü ÍÖýÑ!·¾Ðz |r—a3R¹D°HŠÑú%–¥æuBõ¨D†Þ– z×õÇz¬é kiŠDiÁ¼Ã›^Ë ˆ`GÜÏ5és–Ì)° úb;ØUt:=¾q#ßÞ0¶½%‘_ÚŠQ Á†ÐšI…W}˜Ñ“µÝa ÀÓ¸`ŒnñŸƒý‚Šˆ—èÖõ³¤žì½È^èFM¹»òÿDþMë–> ßÕTQž‘®¤[Iv&R€4uêºôʧD=mp~µB²†w´´#§/h“aJÂã+ݼ°­svü‰+DKßžè9¡Ž•è ðà!lø­Ùñ­è€ž†8œ””Z’vRz#±‚ˬªt?^Ry¥tÚ4Æ{- i½R³ì­$èVd ˆûÑNlÛÐôlJ¹ƒ*ÒMÍb·ö‰ŠÐÉáË¡4Cya¬bA´jmçµeˆÞºAeÓûcÄ2¢“JV]'êo=®ºTç͈}ßgÓ6qÃŒOµ?vœmÈËxðÓóâ_ŠùXLù˳£o=gÊg:ÚV*Hð3¦aÒÊ{ößäº)WK\¦$Ù˜Ì3úÊ}£¥¦ÞA@450ÅìÓ†ÜÔú¤UÝ@L[صã3Mþ¾BÞßëk¿½•ÄßÒ¯ë0M’>æhý¶›jÓ.ÙWÚÁàˆxÆp8wÉósØÎÁÖ[³Í«+3ÙˆUh$JÑgÞ· .èªC6…o@« Ž;Mˆ¼Í ^ÚŸw0œvÕ¹Ÿw¤ÇŠ+œmÝ Fµ<¡TAG—$æðŒB¨§Fæ3 |o0ü2h\}†vHGî¢å¥ÁÂЭQI®Ù}V ®Ðk ù¬Mk.¼° ž8x *¨[mëÁ$Gåí]b¤CÝöÌ+fX‰!ªfOËxgõ¤#Êfh?)uÃeÚH(Õû±û€Ø_Õ¥:šá9½;‹0 g¡¥m¯÷B‰Þš ‚õÇNS¢ Ÿôpwx³;MYäùÙÊ”ê2ýdt⡲ÑE¾é%ÍlÍázèí&Û5šµÁ±Â¬­Pt+½·ö^&oÍUðIHk};© ÂÛV¾u䘽$<ÊêïÑŒš6«þÙ×r~DòtoN!.ëY‚Ó-‹½x”u&z¦NÅæf»¥%"Ek[ŽÛ]Ùh(Dû»6\ºç Õ¹3ÆICP³´Î™siÐZú æuë-‰º??ÓC=CCiaí$€8’nô»åÿA"è³¼¯9U¯QåàqϤSE.¸w‡mÌA[p½$7ÄÛI¾±rÊÛÃh×-бADß«—]eØÒpßÌaPޝ‹R_­§‹c°å›- ÒR~’ﻄžl8hÓÑõOiYâ´˜ob1à½f«¸TçÉŒ)ôb­04?}öV‡ÕzÃŸË ƒ‚ZrȘþÓçünÅÈ_ñh ¥K mŠßð’..C¹èð­Ýiu±–Ö•SM’èÓNÁ·9Er/"m™$‰Ÿ˜ÿîŠ7iµÿHUá…®C¼Ñ»a‡Öl¿‘MÑqú.ç¨B" Wÿ>ç틘#üàË@‘Cey”ODì%õÃKKA É;\é¼ë=iüÊ€ä–"Q4y?©Õqò•6§ ÊCÔ^Yø`>óîf aúŸˆïGžÿ"k¯žˆ|ê• ©‹¥°@›ŸëØ ˆ"Jô›øEZÖÈÆÞ¸‚ßöCÏø£­äøªÿf”ò¤­zÝ‹Z‚± ÂNJ“âIÞI÷@‚4í™»LŒ@͆ª;êņæJ 2F¤b´.ý™h {#¶Äòc+¶–ØG«4Î45ˆš± £/±jŒß’ÂàséÖÍב”e4UjNÁ‡†¯wÄÖ:r,&%IÓù¤PTŸvÅ‹˜×<ºab a1ÁêW¾obXðìcj7ÖǼ–©š[œßPÅM!K_¡irÕWúg®?&ÁÖ¢©o…í-ÛM5†²[¤·÷ÖVNI–DK2ãŸøðm×X`H4Dü¸Ò|sôA¾¶J‹¥ÍÍ«[|ì] »Jf47P¢„¼×RJŽnÓˆFdCip èýsƒi´H¦á·«K’-ûrz€ØjÁïl_u›væ°PÑi£M´=á2;³{y$ˆ½aEñ2Ãp©o#g¶;¤ZÂYámL4öðZLcZø NMü€ï$ ïäüÓG}—²Äòª:Õ‚ËSH¢f‰`µ†î`ºµy¯6 ˆ—öÝXâqÔ ™Oj©fèê9és`Ö!I?:0Èèô5èÊ»S¬6#«ïìÆk2Û _heÇ)eVÁýCD1fÜ¥²è³lüúdN’ Æä'óé/fÅêz]ö•U‚„VC%ê½™yf%átôT:Á­Ø Uø@Ǩ¤Ð‚´l ãüo~NÐËŒL*«ÁÆSzí!ˆñ.ùŸ?èe/$øØ\fÜ™$#íÛ:| ¤@rŒ±­ A_HÿæLÉZÉè#ÇD)Až®ø9›ÄÚ5O˜Yi”þçùš.ª´ÈÅÙ"Êe5У¾J Â[é³yŽp•s}¿WLJ¶¬t¯ÍŽ€òKŠ«f¡õÂ@s8líÌW6«]7D3ºŠ‹Ö«ó*ÏIª-ãÓiÉ ŠÒQ .ÎUœ¯aj˜G+!®2 l“é³OÚööñ¯o.´;W¾ ‡Ðé–›~D~ÝM©BFD™ß·Yfï½`®”#­…OK°`l•‚v–³_£§~†Ó Ž -¤ð €åËæci£œ4¢”~Æ âG²,¡9›Õ§ÃDúùÜ䩘P8“0a[›˜ Œåá M2ùßG’F­’þ 
@åÚš1u·ÅõÚåò`“d„D"'àlS4Rú.¬ £—ïž`Ä’&M‹yƒ€'&-3Ý”’ÆŒžÖâ°Íí\ÓuG™cµû:)qZ¡á7óOŸSzNŒhFçDp÷û¡«é8\†ó-?\üøô"†@c,.àÇìSÛÏ2š¹òm÷Ǻ&{qUFsh™±eçÆÁÇÂP) kj½·Q¸#/NÜ”æÅÊ1L!Y½Äƒ¿’å]„™ ®­l§£/ÈnéˆqôÒîbau!Íü.Š0Á¨iGÏÂæ7ÿ×ÈÞ™§§käÞ&¢õÏó;‰«¤gŽtÛ /·¹ìÞæÛæPÅ’Œq9.b¥+ºp<..7˜¡/Œ·6Ë ±a¼#óïf#â¿mÉ!B´xÈÐ|_LT¦e½bPóRŸUÜ\«ºAÿâ{­J1yÈq8º½—LîýPuûáQܽA½¯¾1öžÒ¢ ‹:¹C¯¹éH»Àó%òž3éu~çÒ×À\¤58 é5 Ü<Û°­¢—\ñ, è=\yJ4#6Pt6ÜívY-ãŽoQÿ±íˆ„zYb“ÅÂ(šåw®*å†çá`­=ŒÓvgYe»±Còs[tŸtgaÙº¨÷wÂÀSÌËcVɸ§ãœ¡Ñ#Züñ}Ž–kò’¿ã Æ ©‹­õBT`ΡVH|”ßFŠw¨CDÔdVÛ2ô]ÃD?£Ù 1) { sigma <- object$parameters$variance$sigma } else { sigma <- rep(object$parameters$variance$sigmasq, object$G)[1:object$G] names(sigma) <- names(mean) } if(is.null(object$density)) { title <- paste("Gaussian finite mixture model fitted by EM algorithm") classification <- factor(object$classification, levels = { l <- seq_len(object$G) if(is.numeric(noise)) l <- c(l,0) l }) } else { title <- paste("Density estimation via Gaussian finite mixture modeling") classification <- NULL } # obj <- list(title = title, n = object$n, d = object$d, G = G, modelName = object$modelName, loglik = object$loglik, df = object$df, bic = object$bic, icl = icl(object), pro = pro, mean = mean, variance = sigma, noise = noise, prior = attr(object$BIC, "prior"), classification = classification, printParameters = parameters) class(obj) <- "summary.Mclust" return(obj) } print.summary.Mclust <- function(x, digits = getOption("digits"), ...) { txt <- paste(rep("-", min(nchar(x$title), getOption("width"))), collapse = "") catwrap(txt) catwrap(x$title) catwrap(txt) # cat("\n") if(x$G == 0) { catwrap("Mclust model with only a noise component:") } else { catwrap(paste0("Mclust ", x$modelName, " (", mclustModelNames(x$modelName)$type, ") model with ", x$G, ifelse(x$G > 1, " components", " component"), if(x$noise) " and a noise term", ":")) } cat("\n") # if(!is.null(x$prior)) { catwrap(paste0("Prior: ", x$prior$functionName, "(", paste(names(x$prior[-1]), x$prior[-1], sep = " = ", collapse = ", "), ")", sep = "")) cat("\n") } # tab <- data.frame("log-likelihood" = x$loglik, "n" = x$n, "df" = x$df, "BIC" = x$bic, "ICL" = x$icl, row.names = "", check.names = FALSE) print(tab, digits = digits) # if(!is.null(x$classification)) { cat("\nClustering table:") print(table(x$classification), digits = digits) } # if(x$printParameters) { cat("\nMixing probabilities:\n") print(x$pro, digits = digits) cat("\nMeans:\n") print(x$mean, digits = digits) cat("\nVariances:\n") if(x$d > 1) { for(g in 1:x$G) { cat("[,,", g, "]\n", sep = "") print(x$variance[,,g], digits = digits) } } else print(x$variance, digits = digits) if(x$noise) { cat("\nHypervolume of noise component:\n") cat(signif(x$noise, digits = digits), "\n") } } # invisible(x) } plot.Mclust <- function(x, what = c("BIC", "classification", "uncertainty", "density"), dimens = NULL, xlab = NULL, ylab = NULL, ylim = NULL, addEllipses = TRUE, main = FALSE, ...) { object <- x # Argh. Really want to use object anyway if(!inherits(object, "Mclust")) stop("object not of class \"Mclust\"") data <- object$data p <- ncol(data) if(p == 1) colnames(data) <- deparse(x$call$data) dimens <- if(is.null(dimens)) seq(p) else dimens[dimens <= p] d <- length(dimens) main <- if(is.null(main) || is.character(main)) FALSE else as.logical(main) what <- match.arg(what, several.ok = TRUE) oldpar <- par(no.readonly = TRUE) plot.Mclust.bic <- function(...) plot.mclustBIC(object$BIC, xlab = xlab, ylim = ylim, ...) plot.Mclust.classification <- function(...) 
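  # Helper for what = "classification": depending on how many coordinates are
  # selected via 'dimens', this draws a 1-D classification plot (mclust1Dplot),
  # a single 2-D scatterplot with optional covariance ellipses (mclust2Dplot),
  # or a pairs-style matrix of coordinate projections (coordProj), with the
  # fitted means and covariances restricted to the chosen dimensions.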
{ if(d == 1) { mclust1Dplot(data = data[,dimens,drop=FALSE], what = "classification", classification = object$classification, z = object$z, xlab = if(is.null(xlab)) colnames(data)[dimens] else xlab, main = main, ...) } if(d == 2) { pars <- object$parameters pars$mean <- pars$mean[dimens,,drop=FALSE] pars$variance$d <- length(dimens) pars$variance$sigma <- pars$variance$sigma[dimens,dimens,,drop=FALSE] mclust2Dplot(data = data[,dimens,drop=FALSE], what = "classification", classification = object$classification, parameters = if(addEllipses) pars else NULL, xlab = if(is.null(xlab)) colnames(data)[dimens][1] else xlab, ylab = if(is.null(ylab)) colnames(data)[dimens][2] else ylab, main = main, ...) } if(d > 2) { pars <- object$parameters pars$mean <- pars$mean[dimens,,drop=FALSE] pars$variance$d <- length(dimens) pars$variance$sigma <- pars$variance$sigma[dimens,dimens,,drop=FALSE] on.exit(par(oldpar)) par(mfrow = c(d, d), mar = rep(0.2/2,4), oma = rep(3,4)) for(i in seq(d)) { for(j in seq(d)) { if(i == j) { plot(data[, dimens[c(j, i)]], type = "n", xlab = "", ylab = "", axes = FALSE) text(mean(par("usr")[1:2]), mean(par("usr")[3:4]), labels = colnames(data[, dimens])[i], cex = 1.5, adj = 0.5) box() } else { coordProj(data = data, dimens = dimens[c(j,i)], what = "classification", classification = object$classification, parameters = object$parameters, addEllipses = addEllipses, main = FALSE, xaxt = "n", yaxt = "n", ...) } if(i == 1 && (!(j%%2))) axis(3) if(i == d && (j%%2)) axis(1) if(j == 1 && (!(i%%2))) axis(2) if(j == d && (i%%2)) axis(4) } } } } plot.Mclust.uncertainty <- function(...) { pars <- object$parameters if(d > 1) { pars$mean <- pars$mean[dimens,,drop=FALSE] pars$variance$d <- length(dimens) pars$variance$sigma <- pars$variance$sigma[dimens,dimens,,drop=FALSE] } # if(p == 1 || d == 1) { mclust1Dplot(data = data[,dimens,drop=FALSE], what = "uncertainty", parameters = pars, z = object$z, xlab = if(is.null(xlab)) colnames(data)[dimens] else xlab, main = main, ...) } if(p == 2 || d == 2) { mclust2Dplot(data = data[,dimens,drop=FALSE], what = "uncertainty", parameters = pars, # uncertainty = object$uncertainty, z = object$z, classification = object$classification, xlab = if(is.null(xlab)) colnames(data)[dimens][1] else xlab, ylab = if(is.null(ylab)) colnames(data)[dimens][2] else ylab, addEllipses = addEllipses, main = main, ...) } if(p > 2 && d > 2) { on.exit(par(oldpar)) par(mfrow = c(d, d), mar = rep(0,4), mar = rep(0.2/2,4), oma = rep(3,4)) for(i in seq(d)) { for(j in seq(d)) { if(i == j) { plot(data[, dimens[c(j, i)]], type="n", xlab = "", ylab = "", axes = FALSE) text(mean(par("usr")[1:2]), mean(par("usr")[3:4]), labels = colnames(data[,dimens])[i], cex = 1.5, adj = 0.5) box() } else { coordProj(data = data, what = "uncertainty", parameters = object$parameters, # uncertainty = object$uncertainty, z = object$z, classification = object$classification, dimens = dimens[c(j,i)], main = FALSE, addEllipses = addEllipses, xaxt = "n", yaxt = "n", ...) } if(i == 1 && (!(j%%2))) axis(3) if(i == d && (j%%2)) axis(1) if(j == 1 && (!(i%%2))) axis(2) if(j == d && (i%%2)) axis(4) } } } } plot.Mclust.density <- function(...) { if(p == 1) { mclust1Dplot(data = data, parameters = object$parameters, z = object$z, what = "density", xlab = if(is.null(xlab)) colnames(data)[dimens] else xlab, main = main, ...) 
} if(p == 2) { surfacePlot(data = data, parameters = object$parameters, what = "density", xlab = if(is.null(xlab)) colnames(data)[1] else xlab, ylab = if(is.null(ylab)) colnames(data)[2] else ylab, main = main, ...) } if(p > 2) { objdens <- as.densityMclust(object) objdens$data <- objdens$data[,dimens,drop=FALSE] objdens$varname <- colnames(data)[dimens] objdens$range <- apply(data, 2, range) objdens$d <- d objdens$parameters$mean <- objdens$parameters$mean[dimens,,drop=FALSE] objdens$parameters$variance$d <- d objdens$parameters$variance$sigma <- objdens$parameters$variance$sigma[dimens,dimens,,drop=FALSE] # if (d == 1) plotDensityMclust1(objdens, ...) else if (d == 2) plotDensityMclust2(objdens, ...) else plotDensityMclustd(objdens, ...) } } if(interactive() & length(what) > 1) { title <- "Model-based clustering plots:" # present menu waiting user choice choice <- menu(what, graphics = FALSE, title = title) while(choice != 0) { if(what[choice] == "BIC") plot.Mclust.bic(...) if(what[choice] == "classification") plot.Mclust.classification(...) if(what[choice] == "uncertainty") plot.Mclust.uncertainty(...) if(what[choice] == "density") plot.Mclust.density(...) # re-present menu waiting user choice choice <- menu(what, graphics = FALSE, title = title) } } else { if(any(what == "BIC")) plot.Mclust.bic(...) if(any(what == "classification")) plot.Mclust.classification(...) if(any(what == "uncertainty")) plot.Mclust.uncertainty(...) if(any(what == "density")) plot.Mclust.density(...) } invisible() } logLik.Mclust <- function(object, ...) { if(is.null(object$loglik)) l <- sum(do.call("dens", c(object, logarithm = TRUE))) else l <- object$loglik if(is.null(object$df)) { noise <- if(is.null(object$hypvol)) FALSE else (!is.na(object$hypvol)) equalPro <- if(is.null(object$BIC)) FALSE else attr(object$BIC, "control")$equalPro df <- with(object, nMclustParams(modelName, d, G, noise = noise, equalPro = equalPro)) } else df <- object$df attr(l, "nobs") <- object$n attr(l, "df") <- df class(l) <- "logLik" return(l) } predict.Mclust <- function(object, newdata, ...) { if(!inherits(object, "Mclust")) stop("object not of class \"Mclust\"") if(missing(newdata)) { newdata <- object$data } newdata <- as.matrix(newdata) if(ncol(object$data) != ncol(newdata)) { stop("newdata must match ncol of object data") } # object$data <- newdata z <- do.call("cdens", c(object, list(logarithm = TRUE))) pro <- object$parameters$pro pro <- pro/sum(pro) noise <- (!is.na(object$hypvol)) z <- if(noise) cbind(z, log(object$parameters$Vinv)) else cbind(z) # drop redundant attributes z <- sweep(z, MARGIN = 2, FUN = "+", STATS = log(pro)) z <- sweep(z, MARGIN = 1, FUN = "-", STATS = apply(z, 1, logsumexp)) z <- exp(z) cl <- c(seq(object$G), if(noise) 0) colnames(z) <- cl cl <- cl[apply(z, 1, which.max)] out <- list(classification = cl, z = z) return(out) } mclustBIC <- function(data, G = NULL, modelNames = NULL, prior = NULL, control = emControl(), initialization = list(hcPairs = NULL, subset = NULL, noise = NULL), Vinv = NULL, warn = mclust.options("warn"), x = NULL, verbose = interactive(), ...) 
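# mclustBIC() runs EM for every requested combination of mixture model and
# number of components G, initialised from model-based hierarchical clustering
# (optionally on a random subset, and optionally with a noise component), and
# returns the matrix of BIC values with the fitting details kept as attributes.
# Illustrative usage sketch (comment only, not part of the original source;
# 'faithful' is the Old Faithful dataset shipped with base R):
#   BIC <- mclustBIC(faithful)               # all models, default G = 1:9
#   plot(BIC)                                # BIC traces by model and G
#   summary(BIC, data = faithful)            # refit and summarise the best model
#   mclustBIC(faithful, x = BIC, G = 10:12)  # reuse previous values, add larger G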
{ dimData <- dim(data) oneD <- (is.null(dimData) || length(dimData[dimData > 1]) == 1) if(!oneD && length(dimData) != 2) stop("data must be a vector or a matrix") if(oneD) { data <- drop(as.matrix(data)) n <- length(data) d <- 1 } else { data <- as.matrix(data) n <- nrow(data) d <- ncol(data) } if(is.null(x)) { if(is.null(modelNames)) { if(d == 1) { modelNames <- c("E", "V") } else { modelNames <- mclust.options("emModelNames") if(n <= d) { # select only spherical and diagonal models m <- match(modelNames, c("EII", "VII", "EEI", "VEI", "EVI", "VVI"), nomatch = 0) modelNames <- modelNames[m] } } } if(!is.null(prior)) { # remove models not available with prior modelNames <- setdiff(modelNames, c("EVE","VEE","VVE","EVV")) } if(is.null(G)) { G <- if (is.null(initialization$noise)) 1:9 else 0:9 } else { G <- sort(as.integer(unique(G))) } if(is.null(initialization$noise)) { if (any(G > n)) G <- G[G <= n] } else { noise <- initialization$noise if(is.logical(noise)) noise <- which(noise) if(any(match(noise, 1:n, nomatch = 0) == 0)) stop("numeric or logical vector for noise must correspond to row indexes of data") initialization$noise <- noise nnoise <- length(noise) if(any(G > (n-nnoise))) G <- G[G <= n-nnoise] } if(!is.null(initialization$subset)) { subset <- initialization$subset if(is.logical(subset)) subset <- which(subset) initialization$subset <- subset if(any(G > n)) G <- G[G <= n] } Gall <- G Mall <- modelNames } else { if(!missing(prior) || !missing(control) || !missing(initialization) || !missing(Vinv)) stop("only G and modelNames may be specified as arguments when x is supplied") prior <- attr(x,"prior") control <- attr(x,"control") initialization <- attr(x,"initialization") Vinv <- attr(x,"Vinv") warn <- attr(x,"warn") Glabels <- dimnames(x)[[1]] Mlabels <- dimnames(x)[[2]] if(is.null(G)) G <- Glabels if(is.null(modelNames)) modelNames <- Mlabels Gmatch <- match(as.character(G), Glabels, nomatch = 0) Mmatch <- match(modelNames, Mlabels, nomatch = 0) if(all(Gmatch) && all(Mmatch)) { out <- x[as.character(G),modelNames,drop=FALSE] mostattributes(out) <- attributes(x) attr(out, "dim") <- c(length(G), length(modelNames)) attr(out, "dimnames") <- list(G, modelNames) attr(out, "G") <- as.numeric(G) attr(out, "modelNames") <- modelNames attr(out, "returnCodes") <- attr(x, "returnCodes")[as.character(G),modelNames,drop=FALSE] return(out) } Gall <- sort(as.numeric(unique(c(as.character(G), Glabels)))) Mall <- unique(c(modelNames, Mlabels)) } if(any(as.logical(as.numeric(G))) < 0) { if(is.null(initialization$noise)) { stop("G must be positive") } else { stop("G must be nonnegative") } } if(d == 1 && any(nchar(modelNames) > 1)) { Emodel <- any(sapply(modelNames, function(x) charmatch("E", x, nomatch = 0)[1]) == 1) Vmodel <- any(sapply(modelNames, function(x) charmatch("V", x, nomatch = 0)[1]) == 1) modelNames <- c("E", "V")[c(Emodel, Vmodel)] } # set subset for initialization when subset is not, no hcPairs is provided, and # data size is larger than the value specified in mclust.options() if(is.null(initialization$subset) & is.null(initialization$hcPairs) & n > mclust.options("subset")) { initialization$subset <- sample(seq.int(n), size = mclust.options("subset"), replace = FALSE) } l <- length(Gall) m <- length(Mall) if(verbose) { cat("fitting ...\n") flush.console() pbar <- txtProgressBar(min = 0, max = l*m+1, style = 3) on.exit(close(pbar)) ipbar <- 0 } EMPTY <- -.Machine$double.xmax BIC <- RET <- matrix(EMPTY, nrow = l, ncol = m, dimnames = list(as.character(Gall), as.character(Mall))) 
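  # If previous results were supplied via 'x', copy the BIC values and return
  # codes already computed into the freshly allocated matrices, so that the
  # fitting loops below only run for (model, G) pairs still set to EMPTY.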
if(!is.null(x)) { BIC[dimnames(x)[[1]],dimnames(x)[[2]]] <- x RET[dimnames(x)[[1]],dimnames(x)[[2]]] <- attr(x, "returnCodes") BIC <- BIC[as.character(G),modelNames,drop=FALSE] RET <- RET[as.character(G),modelNames,drop=FALSE] } G <- as.numeric(G) Glabels <- as.character(G) Gout <- G if(is.null(initialization$noise)) { ## standard case ---- if (G[1] == 1) { for(mdl in modelNames[BIC["1",] == EMPTY]) { out <- mvn(modelName = mdl, data = data, prior = prior) BIC["1", mdl] <- bic(modelName = mdl, loglik = out$loglik, n = n, d = d, G = 1, equalPro = FALSE) RET["1", mdl] <- attr(out, "returnCode") if(verbose) { ipbar <- ipbar+1; setTxtProgressBar(pbar, ipbar) } } if (l == 1) { BIC[BIC == EMPTY] <- NA if(verbose) { ipbar <- l*m+1; setTxtProgressBar(pbar, ipbar) } return(structure(BIC, G = G, modelNames = modelNames, prior = prior, control = control, initialization = initialization, warn = warn, n = n, d = d, oneD = oneD, returnCodes = RET, class = "mclustBIC")) } G <- G[-1] Glabels <- Glabels[-1] } if (is.null(initialization$subset)) { ## all data in initial hierarchical clustering phase (no subset) ---- if (is.null(initialization$hcPairs)) { if (d != 1) { if (n > d) { hcPairs <- hc(data = data, modelName = mclust.options("hcModelName")[1]) } else { hcPairs <- hc(data = data, modelName = "EII") } } else { hcPairs <- NULL # hcPairs <- hc(data = data, modelName = "E") } } else hcPairs <- initialization$hcPairs if (d > 1 || !is.null(hcPairs)) clss <- hclass(hcPairs, G) for (g in Glabels) { if (d > 1 || !is.null(hcPairs)) { cl <- clss[,g] } else { cl <- qclass(data, as.numeric(g)) } if(verbose) { ipbar <- ipbar+1; setTxtProgressBar(pbar, ipbar) } z <- unmap(cl, groups = 1:max(cl)) if(any(apply( z, 2, max) == 0) & warn) { # missing groups if(warn) warning("there are missing groups") small <- sqrt(.Machine$double.neg.eps) z[z < small] <- small z <- t(apply( z, 1, function(x) x/sum(x))) } for(modelName in na.omit(modelNames[BIC[g,] == EMPTY])) { out <- me(modelName = modelName, data = data, z = z, prior = prior, control = control, warn = warn) BIC[g, modelName] <- bic(modelName = modelName, loglik = out$loglik, n = n, d = d, G = as.numeric(g), equalPro = control$equalPro) RET[g, modelName] <- attr(out, "returnCode") if(verbose) { ipbar <- ipbar+1; setTxtProgressBar(pbar, ipbar) } } } } else { ## initial hierarchical clustering phase on a subset ---- subset <- initialization$subset # TODO: remove after check # if (is.logical(subset)) subset <- which(subset) if (is.null(initialization$hcPairs)) { if (d != 1) { if (n > d) { hcPairs <- hc(modelName = mclust.options("hcModelName")[1], data = data[subset,]) } else { hcPairs <- hc(modelName = "EII", data = data[subset,]) } } else { hcPairs <- NULL # hcPairs <- hc(modelName = "E", data = data[subset]) } } else hcPairs <- initialization$hcPairs if (d > 1 || !is.null(hcPairs)) clss <- hclass(hcPairs, G) for (g in Glabels) { if (d > 1 || !is.null(hcPairs)) { cl <- clss[, g] } else { cl <- qclass(data[subset], as.numeric(g)) } if(verbose) { ipbar <- ipbar+1; setTxtProgressBar(pbar, ipbar) } z <- unmap(cl, groups = 1:max(cl)) if(any(apply( z, 2, max) == 0) & warn) { # missing groups if(warn) warning("there are missing groups") small <- sqrt(.Machine$double.neg.eps) z[z < small] <- small z <- t(apply( z, 1, function(x) x/sum(x))) } for (modelName in modelNames[!is.na(BIC[g,])]) { ms <- mstep(modelName = modelName, z = z, data = as.matrix(data)[initialization$subset,], prior = prior, control = control, warn = warn) # # ctrl <- control # ctrl$itmax[1] <- 1 # ms <- 
me(modelName = modelName, data = as.matrix(data)[ # initialization$subset, ], z = z, prior = prior, control = ctrl) # es <- do.call("estep", c(list(data = data, warn = warn), ms)) out <- me(modelName = modelName, data = data, z = es$z, prior = prior, control = control, warn = warn) BIC[g, modelName] <- bic(modelName = modelName, loglik = out$loglik, n = n, d = d, G = as.numeric(g), equalPro = control$equalPro) RET[g, modelName] <- attr(out, "returnCode") if(verbose) { ipbar <- ipbar+1; setTxtProgressBar(pbar, ipbar) } } } } } else { ## noise case ---- noise <- initialization$noise if (is.null(Vinv) || Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE) if (is.null(initialization$subset)) { ## all data in initial hierarchical clustering phase (no subset) ---- if(nnoise == n) stop("All observations cannot be initialised as noise!") if (!G[1]) { hood <- n * log(Vinv) BIC["0",] <- 2 * hood - log(n) if (l == 1) { return(structure(BIC, G = G, modelNames = modelNames, prior = prior, control = control, initialization = list(hcPairs = hcPairs, noise = initialization$noise), warn = warn, n = n, d = d, oneD = oneD, returnCodes = RET, class = "mclustBIC")) } G <- G[-1] Glabels <- Glabels[-1] } if (is.null(initialization$hcPairs)) { if (d != 1) { if (n > d) { hcPairs <- hc(modelName = mclust.options("hcModelName")[1], data = data[-noise,]) } else { hcPairs <- hc(modelName = "EII", data = data[-noise,]) } } else { hcPairs <- NULL # hcPairs <- hc(modelName = "E", data = data[-noise]) } } else hcPairs <- initialization$hcPairs if (d > 1 || !is.null(hcPairs)) clss <- hclass(hcPairs, G) if(verbose) { ipbar <- ipbar+1; setTxtProgressBar(pbar, ipbar) } z <- matrix(0, n, max(G) + 1) for (g in Glabels) { z[] <- 0 k <- as.numeric(g) if(d > 1 || !is.null(hcPairs)) { cl <- clss[,g] } else { cl <- qclass(data[-noise], k = k) } z[-noise,1:k] <- unmap(cl, groups = 1:max(cl)) if(any(apply(z[-noise,1:k,drop=FALSE], 2, max) == 0) & warn) { # missing groups if(warn) warning("there are missing groups") # todo: should be pmax(...) qui sotto?? z[-noise,1:k] <- max(z[-noise,1:k], sqrt(.Machine$double.neg.eps)) # todo: should be t(...) qui sotto?? 
z[-noise,1:k] <- apply(z[-noise,1:k,drop=FALSE], 1, function(z) z/sum(z)) } z[noise, k+1] <- 1 K <- 1:(k+1) for (modelName in na.omit(modelNames[BIC[g,] == EMPTY])) { out <- me(modelName = modelName, data = data, z = z[, K], prior = prior, Vinv = Vinv, control = control, warn = warn) BIC[g, modelName] <- bic(modelName = modelName, loglik = out$loglik, n = n, d = d, G = k, noise = TRUE, equalPro = control$equalPro) RET[g, modelName] <- attr(out, "returnCode") if(verbose) { ipbar <- ipbar+1; setTxtProgressBar(pbar, ipbar) } } } } else { ## initial hierarchical clustering phase on a subset ---- subset <- initialization$subset subset <- setdiff(subset, noise) # remove from subset noise obs initialization$subset <- subset if(length(subset) == 0) stop("No observations in the initial subset after removing the noise!") if (!G[1]) { hood <- n * log(Vinv) BIC["0",] <- 2 * hood - log(n) if (l == 1) { return(structure(BIC, G = G, modelNames = modelNames, prior = prior, control = control, initialization = list(hcPairs = hcPairs, subset = initialization$subset), warn = warn, n = n, d = d, oneD = oneD, returnCodes = RET, class = "mclustBIC")) } G <- G[-1] Glabels <- Glabels[-1] } if (is.null(initialization$hcPairs)) { if (d != 1) { if (n > d) { hcPairs <- hc(modelName = mclust.options("hcModelName")[1], data = data[subset,]) } else { hcPairs <- hc(modelName = "EII", data = data[subset,]) } } else { hcPairs <- NULL # hcPairs <- hc(modelName = "E", data = data[subset]) } } else hcPairs <- initialization$hcPairs if (d > 1 || !is.null(hcPairs)) clss <- hclass(hcPairs, G) if(verbose) { ipbar <- ipbar+1; setTxtProgressBar(pbar, ipbar) } for (g in Glabels) { k <- as.numeric(g) if (d > 1 || !is.null(hcPairs)) { cl <- clss[, g] } else { cl <- qclass(data[subset], k = k) } z <- unmap(cl, groups = 1:max(cl)) if(any(apply(z, 2, max) == 0) & warn) { # missing groups if(warn) warning("there are missing groups") small <- sqrt(.Machine$double.neg.eps) z[z < small] <- small z <- t(apply( z, 1, function(x) x/sum(x))) } for (modelName in na.omit(modelNames[BIC[g,] == EMPTY])) { ms <- mstep(modelName = modelName, z = z, data = as.matrix(data)[subset,], prior = prior, control = control, warn = warn) es <- do.call("estep", c(list(data = data, warn = warn), ms)) if(is.na(es$loglik)) { BIC[g, modelName] <- NA RET[g, modelName] <- attr(es, "returnCode") } else { es$z <- cbind(es$z, 0) es$z[noise,] <- matrix(c(rep(0,k),1), byrow = TRUE, nrow = length(noise), ncol = k+1) out <- me(modelName = modelName, data = data, z = es$z, prior = prior, Vinv = Vinv, control = control, warn = warn) BIC[g, modelName] <- bic(modelName = modelName, loglik = out$loglik, n = n, d = d, G = k, noise = TRUE, equalPro = control$equalPro) RET[g, modelName] <- attr(out, "returnCode") } if(verbose) { ipbar <- ipbar+1; setTxtProgressBar(pbar, ipbar) } } } } } if(verbose) { ipbar <- l*m+1; setTxtProgressBar(pbar, ipbar) } if(!is.null(prior) & any(is.na(BIC))) warning("The presence of BIC values equal to NA is likely due to one or more of the mixture proportions being estimated as zero, so that the model estimated reduces to one with a smaller number of components.") structure(BIC, G = Gout, modelNames = modelNames, prior = prior, Vinv = Vinv, control = control, initialization = list(hcPairs = hcPairs, subset = initialization$subset, noise = initialization$noise), warn = warn, n = n, d = d, oneD = oneD, criterion = "BIC", returnCodes = RET, class = "mclustBIC") } print.mclustBIC <- function(x, pick = 3, ...) 
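# Print method for "mclustBIC" objects: strips the bookkeeping attributes,
# prints the BIC matrix itself, and then lists the top 'pick' models as
# selected by pickBIC().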
{ subset <- !is.null(attr(x, "subset")) oldClass(x) <- attr(x, "args") <- NULL attr(x, "criterion") <- NULL attr(x, "control") <- attr(x, "initialization") <- NULL attr(x, "oneD") <- attr(x, "warn") <- attr(x, "Vinv") <- NULL attr(x, "prior") <- attr(x, "G") <- attr(x, "modelNames") <- NULL ret <- attr(x, "returnCodes") == -3 n <- attr(x, "n") d <- attr(x, "d") attr(x, "returnCodes") <- attr(x, "n") <- attr(x, "d") <- NULL catwrap("Bayesian Information Criterion (BIC):") NextMethod("print") cat("\n") catwrap(paste("Top", pick, "models based on the BIC criterion:")) print(pickBIC(x, pick), ...) invisible() } summary.mclustBIC <- function(object, data, G, modelNames, ...) { mc <- match.call(expand.dots = FALSE) if(missing(data)) { if(!missing(G)) object <- object[rownames(object) %in% G,,drop=FALSE] if(!missing(modelNames)) object <- object[,colnames(object) %in% modelNames,drop=FALSE] ans <- pickBIC(object, ...) class(ans) <- "summary.mclustBIC" } else { if(is.null(attr(object,"initialization")$noise)) { mc[[1]] <- as.name("summaryMclustBIC") } else { mc[[1]] <- as.name("summaryMclustBICn") } warn <- attr(object, "warn") ans <- eval(mc, parent.frame()) if(length(ans) == 0) return(ans) Glabels <- dimnames(object)[[1]] if(length(Glabels) != 1 && (!missing(G) && length(G) > 1)) { Grange <- range(as.numeric(Glabels)) if(match(ans$G, Grange, nomatch = 0) & warn) warning("best model occurs at the min or max of number of components considered!") } } ans } summaryMclustBIC <- function (object, data, G = NULL, modelNames = NULL, ...) { dimData <- dim(data) oneD <- (is.null(dimData) || length(dimData[dimData > 1]) == 1) if (!oneD && length(dimData) != 2) stop("data must be a vector or a matrix") if (oneD) { data <- drop(as.matrix(data)) n <- length(data) d <- 1 } else { data <- as.matrix(data) n <- nrow(data) d <- ncol(data) } initialization <- attr(object, "initialization") hcPairs <- initialization$hcPairs subset <- initialization$subset prior <- attr(object, "prior") control <- attr(object, "control") warn <- attr(object, "warn") oldClass(object) <- NULL attr(object, "prior") <- attr(object, "warn") <- NULL attr(object, "modelNames") <- attr(object, "oneD") <- NULL attr(object, "initialization") <- attr(object, "control") <- NULL d <- if (is.null(dim(data))) 1 else ncol(data) if(is.null(G)) G <- dimnames(object)[[1]] if(is.null(modelNames)) modelNames <- dimnames(object)[[2]] bestBICs <- pickBIC(object[as.character(G), modelNames, drop = FALSE], k = 3) if(all(is.na(bestBICs))) { return(structure(list(), bestBICvalues = bestBICs, prior = prior, control = control, initialization = initialization, class = "summary.mclustBIC")) } temp <- unlist(strsplit(names(bestBICs)[1], ",")) bestModel <- temp[1] G <- as.numeric(temp[2]) if(G == 1) { out <- mvn(modelName = bestModel, data = data, prior = prior) ans <- c(list(bic = bestBICs, z = unmap(rep(1,n)), classification = rep(1, n), uncertainty = rep(0, n)), out) orderedNames <- c("modelName", "n", "d", "G", "bic", "loglik", "parameters", "z", "classification", "uncertainty") return(structure(ans[orderedNames], bestBICvalues = bestBICs, prior = prior, control = control, initialization = initialization, class = "summary.mclustBIC")) } if(is.null(subset)) { if(d > 1 || !is.null(hcPairs)) { z <- unmap(hclass(hcPairs, G)) } else { z <- unmap(qclass(data, G), groups = 1:G) } out <- me(modelName = bestModel, data = data, z = z, prior = prior, control = control, warn = warn) if(sum((out$parameters$pro - colMeans(out$z))^2) > sqrt(.Machine$double.eps)) { # perform 
extra M-step and update parameters ms <- mstep(modelName = bestModel, data = data, z = out$z, prior = prior, warn = warn) if(attr(ms, "returnCode") == 0) out$parameters <- ms$parameters } } else { if(d > 1 || !is.null(hcPairs)) { z <- unmap(hclass(hcPairs, G)) } else { z <- unmap(qclass(data[subset], G)) } ms <- mstep(modelName = bestModel, prior = prior, z = z, data = as.matrix(data)[subset,], control = control, warn = warn) es <- do.call("estep", c(list(data = data), ms)) out <- me(modelName = bestModel, data = data, z = es$z, prior = prior, control = control, warn = warn) # perform extra M-step and update parameters ms <- mstep(modelName = bestModel, data = data, z = out$z, prior = prior, warn = warn) if(attr(ms, "returnCode") == 0) out$parameters <- ms$parameters } obsNames <- if (is.null(dim(data))) names(data) else dimnames(data)[[1]] classification <- map(out$z, warn = warn) uncertainty <- 1 - apply(out$z, 1, max) names(classification) <- names(uncertainty) <- obsNames ans <- c(list(bic = bic(bestModel, out$loglik, out$n, out$d, out$G, noise = FALSE, equalPro = control$equalPro), # bic = as.vector(bestBICs[1]), classification = classification, uncertainty = uncertainty), out) orderedNames <- c("modelName", "n", "d", "G", "bic", "loglik", "parameters", "z", "classification", "uncertainty") structure(ans[orderedNames], bestBICvalues = bestBICs, prior = prior, control = control, initialization = initialization, class = "summary.mclustBIC") } summaryMclustBICn <- function(object, data, G = NULL, modelNames = NULL, ...) { dimData <- dim(data) oneD <- is.null(dimData) || length(dimData[dimData > 1]) == 1 if(!oneD && length(dimData) != 2) stop("data must be a vector or a matrix") if(oneD) { data <- drop(as.matrix(data)) n <- length(data) d <- 1 } else { data <- as.matrix(data) n <- nrow(data) d <- ncol(data) } initialization <- attr(object, "initialization") hcPairs <- initialization$hcPairs subset <- initialization$subset noise <- initialization$noise # TODO: remove after check # if(!is.logical(noise)) # noise <- as.logical(match(1:n, noise, nomatch = 0)) if(is.logical(noise)) noise <- which(noise) prior <- attr(object, "prior") control <- attr(object, "control") warn <- attr(object, "warn") Vinv <- attr(object, "Vinv") oldClass(object) <- NULL attr(object, "control") <- attr(object, "initialization") <- NULL attr(object, "prior") <- attr(object, "Vinv") <- NULL attr(object, "warn") <- NULL ## if (is.null(G)) G <- dimnames(object)[[1]] if (is.null(modelNames)) modelNames <- dimnames(object)[[2]] bestBICs <- pickBIC(object[as.character(G), modelNames, drop = FALSE], k = 3) if(all(is.na(bestBICs))) { return(structure(list(), bestBICvalues = bestBICs, prior = prior, control = control, initialization = initialization, class = "summary.mclustBIC")) } temp <- unlist(strsplit(names(bestBICs)[1], ",")) bestModel <- temp[1] G <- as.numeric(temp[2]) if(G == 0) { ans <- list(bic = bestBICs[1], z = unmap(rep(0,n)), classification = rep(0, n), uncertainty = rep(0, n), n = n, d = ncol(data), modelName = bestModel, G = 0, loglik = n * log(Vinv), Vinv = Vinv, parameters = NULL) orderedNames <- c("modelName", "n", "d", "G", "bic", "loglik", "Vinv", "parameters", "z", "classification", "uncertainty") return(structure(ans[orderedNames], bestBICvalues = bestBICs, prior = prior, control = control, initialization = initialization, class = "summary.mclustBIC")) } G1 <- G + 1 if(is.null(subset)) { z <- matrix(0, n, G1) if(d > 1 || !is.null(hcPairs)) { z[-noise, 1:G] <- unmap(hclass(hcPairs, G)) } else { 
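# univariate data without hierarchical clustering: initialise the non-noise
# observations by quantile-based classification (qclass); the extra column
# G1 = G + 1 of z is reserved for the noise component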
z[-noise, 1:G] <- unmap(qclass(data[-noise], G)) } z[noise, G1] <- 1 out <- me(modelName = bestModel, data = data, z = z, prior = prior, Vinv = Vinv, control = control, warn = warn) } else { subset <- setdiff(subset, noise) # set subset among those obs not noise if(d > 1 || !is.null(hcPairs)) { z <- unmap(hclass(hcPairs, G)) } else { z <- unmap(qclass(data[subset], G)) } ms <- mstep(modelName = bestModel, data = as.matrix(data)[subset,], z = z, prior = prior, control = control, warn = warn) es <- do.call("estep", c(list(data = data, warn = warn), ms)) es$z <- cbind(es$z, 0) es$z[noise,] <- matrix(c(rep(0,G),1), byrow = TRUE, nrow = length(noise), ncol = G+1) out <- me(modelName = bestModel, data = data, z = es$z, prior = prior, Vinv = Vinv, control = control, warn = warn) } obsNames <- if(is.null(dim(data))) names(data) else dimnames(data)[[1]] classification <- map(out$z, warn = warn) classification[classification == G1] <- 0 uncertainty <- 1 - apply(out$z, 1, max) names(classification) <- names(uncertainty) <- obsNames ans <- c(list(bic = as.vector(bestBICs[1]), classification = classification, uncertainty = uncertainty, Vinv = Vinv), out) orderedNames <- c("modelName", "n", "d", "G", "bic", "loglik", "parameters", "Vinv", "z", "classification", "uncertainty") structure(ans[orderedNames], bestBICvalues = bestBICs, prior = prior, control = control, initialization = initialization, class = "summary.mclustBIC") } print.summary.mclustBIC <- function(x, digits = getOption("digits"), ...) { if("classification" %in% names(x)) { bic <- attr(x,"bestBICvalues") l <- length(bic) if(l == 1) { cat("BIC value:\n") print(bic, digits = digits) } else { cat("Best BIC values:\n") bic <- drop(as.matrix(bic)) bic <- rbind(BIC = bic, "BIC diff" = bic - max(bic)) print(bic, digits = digits) } cat("\n") catwrap(paste0("Classification table for model (", if(l == 1) names(bic)[1] else colnames(bic)[1], "):")) print(table(x$classification), digits = digits, ...) } else { cat("Best BIC values:\n") x <- if(length(x) == 0) attr(x,"bestBICvalues") else drop(as.matrix(x)) x <- rbind(BIC = x, "BIC diff" = x - max(x)) print(x, digits = digits) } invisible() } plot.mclustBIC <- function(x, G = NULL, modelNames = NULL, symbols = NULL, colors = NULL, xlab = NULL, ylab = "BIC", ylim = NULL, legendArgs = list(x = "bottomright", ncol = 2, cex = 1, inset = 0.01), ...) 
{ if(is.null(xlab)) xlab <- "Number of components" fill <- FALSE subset <- !is.null(attr(x, "initialization")$subset) noise <- !is.null(attr(x, "initialization")$noise) ret <- attr(x, "returnCodes") == -3 legendArgsDefault <- list(x = "bottomright", ncol = 2, cex = 1, inset = 0.01) legendArgs <- append(as.list(legendArgs), legendArgsDefault) legendArgs <- legendArgs[!duplicated(names(legendArgs))] n <- ncol(x) dnx <- dimnames(x) x <- matrix(as.vector(x), ncol = n) dimnames(x) <- dnx if(is.null(modelNames)) modelNames <- dimnames(x)[[2]] if(is.null(G)) G <- as.numeric(dimnames(x)[[1]]) # BIC <- x[as.character(G), modelNames, drop = FALSE] # X <- is.na(BIC) # nrowBIC <- nrow(BIC) # ncolBIC <- ncol(BIC) if(is.null(symbols)) { colNames <- dimnames(x)[[2]] m <- length(modelNames) if(is.null(colNames)) { symbols <- if(m > 9) LETTERS[1:m] else as.character(1:m) names(symbols) <- modelNames } else { symbols <- mclust.options("bicPlotSymbols")[modelNames] } } if(is.null(colors)) { colNames <- dimnames(x)[[2]] if(is.null(colNames)) { colors <- 1:m names(colors) <- modelNames } else { # colors <- mclust.options("bicPlotColors")[modelNames] colors <- mclust.options("bicPlotColors") if(!is.null(names(colors)) & !any(names(colors) == "")) colors <- colors[modelNames] } } x <- x[,modelNames, drop = FALSE] if(is.null(ylim)) ylim <- range(as.vector(x[!is.na(x)])) matplot(as.numeric(dnx[[1]]), x, type = "b", xaxt = "n", xlim = range(G), ylim = ylim, pch = symbols, col = colors, lty = 1, xlab = xlab, ylab = ylab, main = "") axis(side = 1, at = as.numeric(dnx[[1]])) if(!is.null(legendArgs)) { do.call("legend", c(list(legend = modelNames, col = colors, pch = symbols), legendArgs)) } invisible(symbols) } pickBIC <- function(x, k = 3, ...) { if(!is.matrix(x)) { warning("sorry, the pickBIC function cannot be applied to the provided argument!") return() } Glabels <- dimnames(x)[[1]] modelNames <- dimnames(x)[[2]] mis <- is.na(x) if(all(mis) & mclust.options("warn")) { warning("none of the selected models could be fitted") return(rep(NA,k)) } x[mis] <- - .Machine$double.xmax x <- data.frame(as.vector(x), Glabels[as.vector(row(x))], modelNames[as.vector(col(x))]) # x <- x[rev(order(x[,1])),] # order by including first simpler models if ties are present x <- x[order(-x[, 1], x[,2], x[,3]),] namesx <- apply(x[,-1,drop = FALSE], 1, function(z) paste(as.character(z[2]), as.character(z[1]), sep = ",")) k <- min(k, nrow(x)) x <- x[1:k,1] x[x == - .Machine$double.xmax] <- NA namesx <- namesx[1:k] namesx[is.na(x)] <- " " names(x) <- namesx x } mclustBICupdate <- function(BIC, ...) { args <- list(...) 
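  # Merge two (or more) BIC tables computed on the same data: the result covers
  # the union of the numbers of components and of the model names, and where
  # both inputs provide a value the larger (better) BIC is retained.
  # Illustrative usage sketch (assuming 'X' is a numeric data matrix):
  #   BIC1 <- mclustBIC(X, G = 1:3)
  #   BIC2 <- mclustBIC(X, G = 4:6)
  #   BIC  <- mclustBICupdate(BIC1, BIC2)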
nargs <- length(args)
  BIC1 <- BIC
  if(length(args) > 1)
    { # recursively call the function when multiple arguments are provided
      BIC2 <- do.call("mclustBICupdate", args)
  } else
    { BIC2 <- args[[1]] }
  if(is.null(BIC1)) return(BIC2)
  if(is.null(BIC2)) return(BIC1)
  stopifnot(inherits(BIC1, c("mclustBIC", "mclustSBIC", "mclustICL")) &
            inherits(BIC2, c("mclustBIC", "mclustSBIC", "mclustICL")))
  stopifnot(all.equal(attributes(BIC1)[c("n", "d")],
                      attributes(BIC2)[c("n", "d")]))
  G <- unique(c(rownames(BIC1), rownames(BIC2)))
  modelNames <- unique(c(colnames(BIC1), colnames(BIC2)))
  BIC <- matrix(as.double(NA), nrow = length(G), ncol = length(modelNames),
                dimnames = list(G, modelNames))
  BIC[rownames(BIC1),colnames(BIC1)] <- BIC1[rownames(BIC1),colnames(BIC1)]
  BIC[rownames(BIC2),colnames(BIC2)] <- BIC2[rownames(BIC2),colnames(BIC2)]
  r <- intersect(rownames(BIC1), rownames(BIC2))
  c <- intersect(colnames(BIC1), colnames(BIC2))
  BIC[r,c] <- pmax(BIC1[r,c], BIC2[r,c], na.rm = TRUE)
  attr <- if(pickBIC(BIC2,1) > pickBIC(BIC1,1)) 
             attributes(BIC2) else attributes(BIC1)
  attr$dim <- dim(BIC)
  attr$dimnames <- dimnames(BIC)
  attr$G <- as.numeric(G)
  attr$modelNames <- modelNames
  attr$returnCodes <- NULL
  attributes(BIC) <- attr
  return(BIC)
}

mclustLoglik <- function(object, ...)
{
  stopifnot(inherits(object, "mclustBIC"))
  BIC <- object
  G <- as.numeric(rownames(BIC))
  modelNames <- colnames(BIC)
  n <- attr(BIC, "n")
  d <- attr(BIC, "d")
  # a noise component, if any, is recorded in the "initialization" attribute
  noise <- !is.null(attr(BIC, "initialization")$noise)
  loglik <- matrix(as.double(NA), nrow = length(G), ncol = length(modelNames),
                   dimnames = list(G, modelNames))
  for(i in seq_along(G))
    for(j in seq_along(modelNames))
    {
      npar <- nMclustParams(G = G[i], modelName = modelNames[j],
                            d = d, noise = noise)
      # invert BIC = 2*loglik - npar*log(n) to recover the maximal log-likelihood
      loglik[i,j] <- 0.5*(BIC[i,j] + npar*log(n))
    }
  mostattributes(loglik) <- attributes(BIC)
  attr(loglik, "criterion") <- "loglik"
  class(loglik) <- "mclustLoglik"
  return(loglik)
}

print.mclustLoglik <- function(x, ...)
{
  oldClass(x) <- attr(x, "args") <- NULL
  attr(x, "criterion") <- NULL
  attr(x, "control") <- attr(x, "initialization") <- NULL
  attr(x, "oneD") <- attr(x, "warn") <- attr(x, "Vinv") <- NULL
  attr(x, "prior") <- attr(x, "G") <- attr(x, "modelNames") <- NULL
  attr(x, "returnCodes") <- attr(x, "n") <- attr(x, "d") <- NULL
  catwrap("Log-likelihood:")
  NextMethod("print")
  invisible()
}

mclustModel <- function(data, BICvalues, G=NULL, modelNames=NULL, ...)
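# mclustModel() dispatches to summaryMclustBIC() or summaryMclustBICn()
# (the latter when a noise component was used) and returns the parameters of
# the best model for the given data and BIC values, dropping the
# classification and uncertainty components.
# Illustrative usage sketch (assuming 'X' is the data used to compute 'BICvalues'):
#   BICvalues <- mclustBIC(X)
#   mod <- mclustModel(X, BICvalues)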
{ mc <- match.call(expand.dots = FALSE) if (is.null(attr(BICvalues,"initialization")$noise)) { mc[[1]] <- as.name("summaryMclustBIC") } else { mc[[1]] <- as.name("summaryMclustBICn") } nm <- names(mc) mc[1:3] <- mc[c(1,3,2)] nm[1:3] <- nm[c(1,3,2)] nm[nm == "BICvalues"] <- "object" names(mc) <- nm ans <- eval(mc, parent.frame()) ans$classification <- ans$uncertainty <- NULL attr( ans, "bestBICvalues") <- NULL attr( ans, "prior") <- NULL attr( ans, "control") <- NULL attr( ans, "initialization") <- NULL oldClass(ans) <- "mclustModel" ans } mclustModelNames <- function(model) { type <- switch(EXPR = as.character(model), "E" = "univariate, equal variance", "V" = "univariate, unequal variance", "EII" = "spherical, equal volume", "VII" = "spherical, varying volume", "EEI" = "diagonal, equal volume and shape", "VEI" = "diagonal, equal shape", "EVI" = "diagonal, equal volume, varying shape", "VVI" = "diagonal, varying volume and shape", "EEE" = "ellipsoidal, equal volume, shape and orientation", "EVE" = "ellipsoidal, equal volume and orientation", "VEE" = "ellipsoidal, equal shape and orientation", "VVE" = "ellipsoidal, equal orientation", "EEV" = "ellipsoidal, equal volume and shape", "VEV" = "ellipsoidal, equal shape", "EVV" = "ellipsoidal, equal volume", "VVV" = "ellipsoidal, varying volume, shape, and orientation", "X" = "univariate normal", "XII" = "spherical multivariate normal", "XXI" = "diagonal multivariate normal", "XXX" = "ellipsoidal multivariate normal", warning("invalid model")) return(list(model = model, type = type)) } defaultPrior <- function(data, G, modelName, ...) { aux <- list(...) if(is.null(aux$shrinkage)) { shrinkage <- 0.01 } else if(is.na(aux$shrinkage) || !aux$shrinkage) { shrinkage <- 0 } else if(aux$shrinkage < 0) { stop("negative value given for shrinkage") } else { shrinkage <- aux$shrinkage } if(is.null(aux$mean)) { mean <- if (is.null(dim(data))) mean(data) else colMeans(data) } else if(any(is.na(aux$mean))) { if(shrinkage) stop("positive shrinkage with no prior mean specified") mean <- if (is.null(dim(data))) mean(data) else colMeans(data) } else { if(!shrinkage) stop("prior mean specified but not shrinkage") mean <- aux$mean } switch(EXPR = modelName, E = , V = , X = { dof <- 3 if(is.null(aux$scale)) { scale <- var(data)/G^2 } else { scale <- aux$scale } list(shrinkage = shrinkage, mean = mean, dof = dof, scale = scale) }, ## EII = , VII = , XII = , EEI = , EVI = , VEI = , VVI = , XXI = { n <- nrow(data) p <- ncol(data) dof <- p + 2 if(is.null(aux$scale)) { fac <- (1/G)^(2/p) scale <- (fac * sum(apply(data, 2, var)))/ p } else { scale <- aux$scale } list(shrinkage = shrinkage, mean = mean, dof = dof, scale = scale) }, ## EEE = , EVE = , VEE = , VVE = , EEV = , VEV = , EVV = , VVV = , XXX = { n <- nrow(data) p <- ncol(data) dof <- p + 2 if(is.null(aux$scale)) { fac <- (1/G)^(2/p) if(n > p) { scale <- fac * var(data) } else { scale <- fac * diag(apply(data, 2, var)) } } else { scale <- aux$scale } list(shrinkage = shrinkage, mean = mean, dof = dof, scale = scale) }, stop("no default prior for this model")) } emControl <- function(eps = .Machine$double.eps, tol = c(1.0e-05, sqrt(.Machine$double.eps)), itmax = c(.Machine$integer.max, .Machine$integer.max), equalPro = FALSE) { if(any(eps < 0)) stop("eps is negative") if(any(eps >= 1)) stop("eps is not less than 1") if(any(tol < 0)) stop("tol is negative") if(any(tol >= 1)) stop("tol is not less than 1") if(any(itmax < 0)) stop("itmax is negative") if(length(tol) == 1) tol <- rep(tol, 2) if(length(itmax) == 1) itmax 
<- c(itmax, .Machine$integer.max) i <- is.infinite(itmax) if(any(i)) itmax[i] <- .Machine$integer.max list(eps = eps, tol = tol, itmax = itmax, equalPro = equalPro) } priorControl <- function(functionName = "defaultPrior", ...) { c(list(functionName = functionName), list(...)) } cdensEEE <- function(data, logarithm = FALSE, parameters, warn = NULL, ...) { dimdat <- dim(data) if(is.null(dimdat) || length(dimdat) > 2) stop("data must be a matrix or a vector") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) mu <- as.matrix(parameters$mean) G <- ncol(mu) if(any(is.na(unlist(parameters[c("pro", "mean", "variance")]))) || any(is.null(parameters[c("pro", "mean", "variance")]))) { WARNING <- "parameters are missing" if(warn) warning(WARNING) z <- matrix(as.double(NA),n,G) dimnames(z) <- list(dimnames(data)[[1]], NULL) return(structure(z, logarithm = logarithm, modelName = "EEE", WARNING = WARNING, returnCode = 9)) } if(is.null(parameters$variance$cholSigma)) stop("variance parameters are missing") temp <- .Fortran("eseee", as.logical(1), as.double(data), as.double(mu), as.double(parameters$variance$cholSigma), as.double(-1), as.integer(n), as.integer(p), as.integer(G), as.double(-1), double(p), double(1), double(n * G), PACKAGE = "mclust")[10:12] lapackCholInfo <- temp[[1]][1] loglik <- temp[[2]] z <- matrix(temp[[3]], n, G) WARNING <- NULL if(lapackCholInfo) { if(lapackCholInfo > 0) { WARNING <- "sigma is not positive definite" if(warn) warning(WARNING) } else { WARNING <- "input error for LAPACK DPOTRF" if(warn) warning(WARNING) } z[] <- NA ret <- -9 } else if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "singular covariance" if(warn) warning(WARNING) z[] <- NA ret <- -1 } else { if (!logarithm) z <- exp(z) ret <- 0 } dimnames(z) <- list(dimnames(data)[[1]],NULL) structure(z, logarithm = logarithm, modelName = "EEE", WARNING = WARNING, returnCode = ret) } emEEE <- function(data, parameters, prior = NULL, control = emControl(), warn = NULL, ...) { z <- estepEEE(data, parameters = parameters, warn = warn)$z meEEE(data, z = z, prior = prior, control = control, Vinv = parameters$Vinv, warn = warn) } estepEEE <- function(data, parameters, warn = NULL, ...) 
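# estepEEE(): E-step for the EEE model. Given the current parameters (with the
# common covariance supplied through its Cholesky factor), it returns the
# n x K matrix of conditional membership probabilities z and the
# log-likelihood; when parameters$pro has G + 1 entries, the extra column
# corresponds to a uniform noise component with density Vinv.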
{ if (is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) if(is.null(dimdat) || length(dimdat) > 2) stop("data must be a matrix or a vector") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) pro <- parameters$pro pro <- pro/sum(pro) l <- length(pro) mu <- as.matrix(parameters$mean) G <- ncol(mu) noise <- (l == G + 1) if(!noise) { if(l != G) stop("pro improperly specified") K <- G Vinv <- NULL } else { K <- G + 1 Vinv <- parameters$Vinv if(is.null(Vinv) || Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE) } if(any(is.na(unlist(parameters[c("pro", "mean", "variance")]))) || any(is.null(parameters[c("pro", "mean", "variance")]))) { WARNING <- "parameters are missing" if(warn) warning(WARNING) z <- matrix(as.double(NA),n,K) dimnames(z) <- list(dimnames(data)[[1]], NULL) return(structure(list(modelName = "EEE", n=n, d=p, G=G, z=z, parameters=parameters, loglik=NA), WARNING = WARNING, returnCode = 9)) } if (is.null(parameters$variance$cholSigma)) stop("variance parameters are missing") temp <- .Fortran("eseee", as.logical(1), as.double(data), as.double(mu), as.double(parameters$variance$cholSigma), as.double(pro), as.integer(n), as.integer(p), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), double(p), double(1), double(n * K), PACKAGE = "mclust")[10:12] lapackCholInfo <- temp[[1]][1] loglik <- temp[[2]] z <- matrix(temp[[3]], n, K) WARNING <- NULL if(lapackCholInfo) { if(lapackCholInfo > 0) { WARNING <- "sigma is not positive definite" warning(WARNING) ret <- -4 } else { WARNING <- "input error for LAPACK DPOTRF" warning(WARNING) ret <- -5 } z[] <- loglik <- NA } else if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "singular covariance" if(warn) warning(WARNING) z[] <- loglik <- NA ret <- -1 } else ret <- 0 dimnames(z) <- list(dimnames(data)[[1]],NULL) structure(list(modelName = "EEE", n = n, d = p, G = G, z = z, parameters = parameters, loglik = loglik), WARNING = WARNING, returnCode = ret) } meEEE <- function(data, z, prior = NULL, control = emControl(), Vinv = NULL, warn = NULL, ...) 
{ if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- (is.null(dimdat) || NCOL(data) == 1) if(oneD || length(dimdat) != 2) stop("data should in the form of a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) z <- as.matrix(z) dimz <- dim(z) if(dimz[1] != n) stop("data and z should have the same row dimension") K <- dimz[2] if (!is.null(Vinv)) { G <- K - 1 if(Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE) } else G <- K if(all(is.na(z))) { WARNING <- "z is missing" if(warn) warning(WARNING) variance <- list(modelName = "EEE", d = p, G = G, Sigma = matrix(as.double(NA), p, p), cholSigma = matrix(as.double(NA), p, p)) parameters <- list(pro=rep(NA,G), mean=matrix(as.double(NA),p,G), variance=variance, Vinv=Vinv) return(structure(list(modelName="EEE", prior=prior, n=n, d=p, G=G, z=z, parameters=parameters, control=control, loglik=NA), WARNING = WARNING, returnCode = 9)) } if(any(is.na(z)) || any(z < 0) || any(z > 1)) stop("improper specification of z") storage.mode(z) <- "double" if(is.null(prior)) { temp <- .Fortran("meeee", as.logical(control$equalPro), as.double(data), as.integer(n), as.integer(p), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), z, as.integer(control$itmax[1]), as.double(control$tol[1]), as.double(control$eps), double(p * G), double(p * p), double(K), double(p), PACKAGE = "mclust")[7:13] } else { priorParams <- do.call(prior$functionName, c(list(data = data, G = G, modelName = "EEE"), prior[names(prior) != "functionName"])) temp <- .Fortran("meeeep", as.logical(control$equalPro), as.double(data), as.integer(n), as.integer(p), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), as.double(priorParams$shrinkage), as.double(priorParams$mean), as.double(if(any(priorParams$scale != 0)) chol(priorParams$ scale) else priorParams$scale), as.double(priorParams$dof), z, as.integer(control$itmax[1]), as.double(control$tol[1]), as.double(control$eps), double(p * G), double(p * p), double(K), double(p), PACKAGE = "mclust")[c(11:17, 10)] } z <- temp[[1]] its <- temp[[2]] err <- temp[[3]] loglik <- temp[[4]] mu <- matrix(temp[[5]], p, G) dimnames(mu) <- list(NULL, as.character(1:G)) cholSigma <- matrix(temp[[6]], p, p) pro <- temp[[7]] WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "singular covariance" if(warn) warning(WARNING) mu[] <- pro[] <- z[] <- loglik <- NA sigma <- array(NA, c(p, p, G)) Sigma <- matrix( NA, p, p) ret <- -1 } else if(loglik < - signif(.Machine$double.xmax, 6)) { if(control$equalPro) { WARNING <- "z column sum fell below threshold" if(warn) warning(WARNING) } else { WARNING <- "mixing proportion fell below threshold" if(warn) warning(WARNING) } mu[] <- pro[] <- z[] <- loglik <- logprior <- NA sigma <- array(NA, c(p, p, G)) Sigma <- matrix(as.double(NA), p, p) ret <- if(control$equalPro) -2 else -3 } else { Sigma <- unchol(cholSigma, upper = TRUE) sigma <- array(0, c(p, p, G)) for(k in 1:G) sigma[, , k] <- Sigma if(its >= control$itmax[1]) { WARNING <- "iteration limit reached" if(warn) warning(WARNING) its <- - its ret <- 1 } else ret <- 0 } info <- c(iterations = its, error = err) dimnames(z) <- list(dimnames(data)[[1]], NULL) dimnames(mu) <- list(dimnames(data)[[2]], NULL) dimnames(Sigma) <- dimnames(cholSigma) <- list(dimnames(data)[[2]], dimnames(data)[[2]]) dimnames(sigma) <- list(dimnames(data)[[2]], dimnames(data)[[2]], NULL) variance <- list(modelName = "EEE", d = p, G = G, sigma = sigma, Sigma = Sigma, cholSigma = cholSigma) parameters <- list(pro=pro, mean=mu, 
variance=variance, Vinv=Vinv) structure(list(modelName = "EEE", prior = prior, n = n, d = p, G = G, z = z, parameters = parameters, control = control, loglik = loglik), info = info, WARNING = WARNING, returnCode = ret) } mstepEEE <- function(data, z, prior = NULL, warn = NULL, ...) { if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- (is.null(dimdat) || NCOL(data) == 1) if(oneD || length(dimdat) != 2) stop("data should be a matrix or a vector") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) ## z <- as.matrix(z) dimz <- dim(z) if(dimz[1] != n) stop("row dimension of z should equal data length") G <- dimz[2] if(all(is.na(z))) { WARNING <- "z is missing" if(warn) warning(WARNING) variance <- list(modelName = "EEE", d = p, G = G, sigma <- array(NA, c(p,p, G)), Sigma = matrix(as.double(NA), p, p), cholSigma = matrix(as.double(NA), p, p)) parameters <- list(pro=rep(NA,G), mean=matrix(as.double(NA),p,G), variance=variance) return(structure(list(modelName="EEE", prior=prior, n=n, d=p, G=G, z=z, parameters=parameters, loglik=NA), WARNING = WARNING, returnCode = 9)) } if(any(is.na(z)) || any(z < 0) || any(z > 1)) stop("improper specification of z") if(is.null(prior)) { temp <- .Fortran("mseee", as.double(data), as.double(z), as.integer(n), as.integer(p), as.integer(G), double(p), double(p * G), double(p * p), double(G), PACKAGE = "mclust")[7:9] } else { priorParams <- do.call(prior$functionName, c(list(data = data, G = G, modelName = "EEE"), prior[names(prior) != "functionName"])) temp <- .Fortran("mseeep", as.double(data), as.double(z), as.integer(n), as.integer(p), as.integer(G), as.double(priorParams$shrinkage), as.double(priorParams$mean), as.double(if(any(priorParams$scale != 0)) chol(priorParams$scale) else priorParams$scale), as.double(priorParams$dof), double(p), double(p * G), double(p * p), double(G), PACKAGE = "mclust")[11:13] } mu <- matrix(temp[[1]], p, G) dimnames(mu) <- list(NULL, as.character(1:G)) cholSigma <- matrix(temp[[2]], p, p) pro <- temp[[3]] sigma <- array(0, c(p, p, G)) Sigma <- unchol(cholSigma, upper = TRUE) for(k in 1:G) sigma[, , k] <- Sigma WARNING <- NULL if(any(mu > signif(.Machine$double.xmax, 6))) { WARNING <- "cannot compute M-step" if(warn) warning(WARNING) mu[] <- sigma[] <- Sigma[] <- cholSigma[] <- NA ret <- -1 } else ret <- 0 dimnames(z) <- list(dimnames(data)[[1]], NULL) dimnames(mu) <- list(dimnames(data)[[2]], NULL) dimnames(Sigma) <- dimnames(cholSigma) <- list(dimnames(data)[[2]], dimnames(data)[[2]]) dimnames(sigma) <- list(dimnames(data)[[2]], dimnames(data)[[2]], NULL) variance <- list(modelName = "EEE", d = p, G = G, sigma = sigma, Sigma = Sigma, cholSigma= cholSigma) parameters <- list(pro=pro, mean=mu, variance=variance) structure(list(modelName = "EEE", prior = prior, n = n, d = p, G = G, z = z, parameters = parameters), WARNING = WARNING, returnCode = ret) } simEEE <- function(parameters, n, seed = NULL, ...) 
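# simEEE(): simulate n observations from an EEE mixture. Component labels are
# drawn from the mixing proportions and data are generated from the common
# covariance via its Cholesky factor; the first column of the result ('group')
# holds the component labels.
# Illustrative usage sketch (assuming 'fit' is an Mclust fit with an EEE model):
#   xsim <- simEEE(fit$parameters, n = 100, seed = 1)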
{
  if(!is.null(seed)) set.seed(seed)
  mu <- as.matrix(parameters$mean)
  d <- nrow(mu)
  G <- ncol(mu)
  if(any(is.na(parameters[c("mean", "variance")])) ||
     any(is.null(parameters[c("mean", "variance")])))
    { warning("parameters are missing")
      return(structure(matrix(as.double(NA), n, d + 1), modelName = "EEE")) }
  pro <- parameters$pro
  if(is.null(pro)) pro <- rep(1/G, G)
  clabels <- sample(1:G, size = n, replace = TRUE, prob = pro)
  ctabel <- tabulate(clabels, nbins=G)
  x <- matrix(0, n, d)
  if(is.null(cholSigma <- parameters$variance$cholSigma))
    { if(is.null(Sigma <- parameters$variance$Sigma))
        { stop("variance parameters must include either Sigma or cholSigma") }
      cholSigma <- chol(Sigma) }
  for(k in 1:G)
     { m <- ctabel[k]
       x[clabels == k,] <- sweep(matrix(rnorm(m * d), nrow = m, ncol = d) %*%
                                 cholSigma, MARGIN = 2, STATS = mu[,k], FUN = "+") }
  dimnames(x) <- list(NULL, paste0("x", 1:d))
  structure(cbind(group = clabels, x), modelName = "EEE")
}

cdensEEI <- function(data, logarithm = FALSE, parameters, warn = NULL, ...)
{
  if(is.null(warn)) warn <- mclust.options("warn")
  dimdat <- dim(data)
  if(is.null(dimdat) || length(dimdat) != 2)
    stop("data must be a matrix")
  data <- as.matrix(data)
  n <- nrow(data)
  p <- ncol(data)
  mu <- as.matrix(parameters$mean)
  G <- ncol(mu)
  if(any(is.na(unlist(parameters[c("pro", "mean", "variance")]))) ||
     any(is.null(parameters[c("pro", "mean", "variance")])))
    { WARNING <- "parameters are missing"
      if(warn) warning(WARNING)
      z <- matrix(as.double(NA),n,G)
      dimnames(z) <- list(dimnames(data)[[1]], NULL)
      return(structure(z, logarithm = logarithm, modelName = "EEI",
                       WARNING = WARNING, returnCode = 9)) }
  if (is.null(parameters$variance$scale) ||
      is.null(parameters$variance$shape))
    stop("variance parameters are missing")
  temp <- .Fortran("eseei",
                   as.double(data),
                   as.double(mu),
                   as.double(parameters$variance$scale),
                   as.double(parameters$variance$shape),
                   as.double(-1),
                   as.integer(n),
                   as.integer(p),
                   as.integer(G),
                   as.double(-1),
                   double(1),
                   double(n * G),
                   PACKAGE = "mclust")[10:11]
  loglik <- temp[[1]]
  z <- matrix(temp[[2]], n, G)
  WARNING <- NULL
  if(loglik > signif(.Machine$double.xmax, 6))
    { WARNING <- "sigma-squared falls below threshold"
      if(warn) warning(WARNING)
      z[] <- NA
      ret <- -1
  } else {
      if (!logarithm) z <- exp(z)
      ret <- 0
  }
  dimnames(z) <- list(dimnames(data)[[1]],NULL)
  structure(z, logarithm = logarithm, modelName = "EEI",
            WARNING = WARNING, returnCode = ret)
}

cdensEII <- function(data, logarithm = FALSE, parameters, warn = NULL, ...)
{ if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) if(is.null(dimdat) || length(dimdat) != 2) stop("data must be a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) mu <- as.matrix(parameters$mean) G <- ncol(mu) if(any(is.na(unlist(parameters[c("pro", "mean", "variance")]))) || any(is.null(parameters[c("pro", "mean", "variance")]))) { WARNING <- "parameters are missing" if(warn) warning(WARNING) z <- matrix(as.double(NA),n,G) dimnames(z) <- list(dimnames(data)[[1]], NULL) return(structure(z, logarithm = logarithm, modelName = "EII", WARNING = WARNING, returnCode = 9)) } sigmasq <- parameters$variance$sigmasq if(sigmasq < 0) stop("sigma-squared is negative") if(!sigmasq) { WARNING <- "sigma-squared vanishes" if(warn) warning(WARNING) z <- matrix(as.double(NA),n,G) dimnames(z) <- list(dimnames(data)[[1]], NULL) return(structure(z, logarithm = logarithm, modelName = "EII", WARNING = WARNING, returnCode = 9)) } temp <- .Fortran("eseii", as.double(data), as.double(mu), as.double(sigmasq), as.double(-1), as.integer(n), as.integer(p), as.integer(G), as.double(-1), double(1), double(n * G), PACKAGE = "mclust")[9:10] loglik <- temp[[1]] z <- matrix(temp[[2]], n, G) WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "sigma-squared falls below threshold" if(warn) warning(WARNING) z[] <- NA ret <- -1 } else { if (!logarithm) z <- exp(z) ret <- 0 } dimnames(z) <- list(dimnames(data)[[1]],NULL) structure(z, logarithm = logarithm, modelName = "EII", WARNING = WARNING, returnCode = ret) } emEEI <- function(data, parameters, prior = NULL, control = emControl(), warn = NULL, ...) { z <- estepEEI(data, parameters = parameters, warn = warn)$z meEEI(data, z = z, prior = prior, control = control, Vinv = parameters$Vinv, warn = warn) } estepEEI <- function(data, parameters, warn = NULL, ...) 
{ if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) if(is.null(dimdat) || length(dimdat) != 2) stop("data must be a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) pro <- parameters$pro pro <- pro/sum(pro) l <- length(pro) mu <- as.matrix(parameters$mean) G <- ncol(mu) noise <- (l == G + 1) if(!noise) { if(l != G) stop("pro improperly specified") K <- G Vinv <- NULL } else { K <- G + 1 Vinv <- parameters$Vinv if(is.null(Vinv) || Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE) } if(any(is.na(unlist(parameters[c("pro", "mean", "variance")]))) || any(is.null(parameters[c("pro", "mean", "variance")]))) { WARNING <- "parameters are missing" if(warn) warning(WARNING) z <- matrix(as.double(NA),n,K) dimnames(z) <- list(dimnames(data)[[1]], NULL) return(structure(list(modelName = "EEI", n=n, d=p, G=G, z=z, parameters=parameters, loglik=NA), WARNING = WARNING, returnCode = 9)) } if (is.null(parameters$variance$scale) || is.null(parameters$variance$shape)) stop("variance parameters are missing") temp <- .Fortran("eseei", as.double(data), as.double(mu), as.double(parameters$variance$scale), as.double(parameters$variance$shape), as.double(pro), as.integer(n), as.integer(p), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), double(1), double(n * K), PACKAGE = "mclust")[10:11] loglik <- temp[[1]] z <- matrix(temp[[2]], n, K) WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "singular covariance" if(warn) warning(WARNING) z[] <- loglik <- NA ret <- -1 } else ret <- 0 dimnames(z) <- list(dimnames(data)[[1]],NULL) structure(list(modelName = "EEI", n = n, d = p, G = G, z = z, parameters = parameters, loglik = loglik), WARNING = WARNING, returnCode = ret) } meEEI <- function(data, z, prior = NULL, control = emControl(), Vinv = NULL, warn = NULL, ...) 
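# meEEI(): EM algorithm for the EEI (diagonal, equal volume and shape) model,
# started from the membership matrix z (M-step first); a non-NULL Vinv
# indicates that the last column of z refers to a uniform noise component.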
{ if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- (is.null(dimdat) || NCOL(data) == 1) if(oneD || length(dimdat) > 2) stop("data should be in the form of a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) z <- as.matrix(z) dimz <- dim(z) if(dimz[1] != n) stop("data and z should have the same row dimension") K <- dimz[2] if (!is.null(Vinv)) { G <- K - 1 if(Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE) } else G <- K if(all(is.na(z))) { WARNING <- "z is missing" if(warn) warning(WARNING) variance <- list(modelName = "EEI", d = p, G = G, scale = NA, shape = rep(NA,p)) parameters <- list(pro=rep(NA,G), mean=matrix(as.double(NA),p,G), variance=variance, Vinv=Vinv) return(structure(list(modelName="EEI", prior=prior, n=n, d=p, G=G, z=z, parameters=parameters, control=control, loglik=NA), WARNING = WARNING, returnCode = 9)) } if(any(is.na(z)) || any(z < 0) || any(z > 1)) stop("improper specification of z") storage.mode(z) <- "double" if(is.null(prior)) { temp <- .Fortran("meeei", as.logical(control$equalPro), as.double(data), as.integer(n), as.integer(p), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), z, as.integer(control$itmax[1]), as.double(control$tol[1]), as.double(control$eps), double(p * G), double(1), double(p), double(K), PACKAGE = "mclust")[7:14] } else { priorParams <- do.call(prior$functionName, c(list(data = data, G = G, modelName = "EEI"), prior[names(prior) != "functionName"])) temp <- .Fortran("meeeip", as.logical(control$equalPro), as.double(data), as.integer(n), as.integer(p), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), as.double(priorParams$shrinkage), as.double(priorParams$mean), as.double(priorParams$scale), as.double(priorParams$dof), z, as.integer(control$itmax[1]), as.double(control$tol[1]), as.double(control$eps), double(p * G), double(1), double(p), double(K), PACKAGE = "mclust")[11:18] } z <- temp[[1]] its <- temp[[2]] err <- temp[[3]] loglik <- temp[[4]] mu <- matrix(temp[[5]], p, G) dimnames(mu) <- list(NULL, as.character(1:G)) scale <- temp[[6]] shape <- temp[[7]] pro <- temp[[8]] WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "singular covariance" if(warn) warning(WARNING) sigma <- array(NA, c(p, p, G)) Sigma <- matrix(as.double(NA), p, p) mu[] <- pro[] <- z[] <- loglik <- shape[] <- NA ret <- -1 } else if(loglik < - signif(.Machine$double.xmax, 6)) { if(control$equalPro) { WARNING <- "z column sum fell below threshold" if(warn) warning(WARNING) } else { WARNING <- "mixing proportion fell below threshold" if(warn) warning(WARNING) } sigma <- array(NA, c(p, p, G)) Sigma <- matrix(as.double(NA), p, p) mu[] <- pro[] <- z[] <- loglik <- shape[] <- NA ret <- if(control$equalPro) -2 else -3 } else { sigma <- array(0, c(p, p, G)) Sigma <- diag(scale * shape) for(k in 1:G) sigma[, , k] <- Sigma if(its >= control$itmax[1]) { WARNING <- "iteration limit reached" if(warn) warning(WARNING) its <- - its ret <- 1 } else ret <- 0 } info <- c(iterations = its, error = err) dimnames(z) <- list(dimnames(data)[[1]], NULL) dimnames(mu) <- list(dimnames(data)[[2]], NULL) dimnames(Sigma) <- list(dimnames(data)[[2]], dimnames(data)[[2]]) dimnames(sigma) <- list(dimnames(data)[[2]], dimnames(data)[[2]], NULL) variance <- list(modelName = "EEI", d = p, G = G, sigma = sigma, Sigma = Sigma, scale = scale, shape = shape) parameters <- list(pro=pro, mean=mu, variance=variance, Vinv=Vinv) structure(list(modelName = "EEI", prior = prior, n = n, d = p, G = G, z = z, parameters = parameters, 
control = control, loglik = loglik), info = info, WARNING = WARNING, returnCode = ret) } mstepEEI <- function(data, z, prior = NULL, warn = NULL, ...) { if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- (is.null(dimdat) || NCOL(data) == 1) if(oneD || length(dimdat) != 2) stop("data should be a matrix or a vector") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) z <- as.matrix(z) dimz <- dim(z) if(dimz[1] != n) stop("row dimension of z should equal data length") G <- dimz[2] if(all(is.na(z))) { WARNING <- "z is missing" if(warn) warning(WARNING) variance <- list(modelName = "EEI", d = p, G = G, scale = NA, shape = rep(NA,p)) parameters <- list(pro=rep(NA,G), mean=matrix(as.double(NA),p,G), variance=variance) return(structure(list(modelName="EEI", prior=prior, n=n, d=p, G=G, z=z, parameters=parameters), WARNING = WARNING, returnCode = 9)) } if(any(is.na(z)) || any(z < 0) || any(z > 1)) stop("improper specification of z") if(is.null(prior)) { temp <- .Fortran("mseei", as.double(data), as.double(z), as.integer(n), as.integer(p), as.integer(G), double(p * G), double(1), double(p), double(G), PACKAGE = "mclust")[6:9] } else { storage.mode(z) <- "double" priorParams <- do.call(prior$functionName, c(list(data = data, G = G, modelName = "EEI"), prior[names( prior) != "functionName"])) temp <- .Fortran("mseeip", as.double(data), as.double(z), as.integer(n), as.integer(p), as.integer(G), as.double(priorParams$shrinkage), as.double(priorParams$mean), as.double(priorParams$scale), as.double(priorParams$dof), double(p * G), double(1), double(p), double(G), PACKAGE = "mclust")[10:13] } mu <- matrix(temp[[1]], p, G) dimnames(mu) <- list(NULL, as.character(1:G)) scale <- temp[[2]] shape <- temp[[3]] pro <- temp[[4]] WARNING <- NULL if(any(c(shape, scale) > signif(.Machine$double.xmax, 6)) || any(!c( scale, shape))) { WARNING <- "cannot compute M-step" if(warn) warning(WARNING) mu[] <- pro[] <- scale <- shape[] <- NA sigma <- Sigma <- array(NA, c(p, p, G)) ret <- -1 } else { sigma <- array(0, c(p, p, G)) Sigma <- diag(scale * shape) for(k in 1:G) sigma[, , k] <- Sigma ret <- 0 } dimnames(z) <- list(dimnames(data)[[1]], NULL) dimnames(mu) <- list(dimnames(data)[[2]], NULL) dimnames(Sigma) <- list(dimnames(data)[[2]], dimnames(data)[[2]]) dimnames(sigma) <- list(dimnames(data)[[2]], dimnames(data)[[2]], NULL) variance <- list(modelName = "EEI", d = p, G = G, sigma = sigma, Sigma = Sigma, scale = scale, shape = shape) parameters <- list(pro=pro, mean=mu, variance=variance) structure(list(modelName = "EEI", prior = prior, n = n, d = p, G = G, z = z, parameters = parameters), WARNING = WARNING, returnCode = ret) } simEEI <- function(parameters, n, seed = NULL, ...) 
{ if(!is.null(seed)) set.seed(seed) mu <- as.matrix(parameters$mean) d <- nrow(mu) G <- ncol(mu) if(any(is.na(parameters[c("mean", "variance")])) || any(is.null(parameters[c("mean", "variance")]))) { warning("parameters are missing") return(structure(matrix(as.double(NA), n, d + 1), modelName = "EEI")) } pro <- parameters$pro if(is.null(pro)) pro <- rep(1/G, G) clabels <- sample(1:G, size = n, replace = TRUE, prob = pro) ctabel <- tabulate(clabels, nbins=G) x <- matrix(0, n, d) shape <- parameters$variance$shape if(length(shape) != d) stop("shape incompatible with mean") cholSigma <- diag(sqrt(parameters$variance$scale * shape)) for(k in 1:G) { m <- ctabel[k] x[clabels == k, ] <- sweep(matrix(rnorm(m * d), nrow = m, ncol = d) %*% cholSigma, MARGIN = 2, STATS = mu[, k], FUN = "+") } dimnames(x) <- list(NULL, paste0("x", 1:d)) structure(cbind(group = clabels, x), modelName = "EEI") } cdensE <- function(data, logarithm = FALSE, parameters, warn = NULL, ...) { if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- (is.null(dimdat) || NCOL(data) == 1) if(!oneD) stop("data must be one-dimensional") data <- drop(data) n <- length(data) mu <- drop(parameters$mean) G <- length(mu) if(any(is.na(unlist(parameters[c("mean", "variance")]))) || any(is.null(parameters[c("mean", "variance")]))) { WARNING <- "parameters are missing" if(warn) warning(WARNING) z <- matrix(as.double(NA),n,G) dimnames(z) <- list(names(data), NULL) return(structure(z, logarithm = logarithm, modelName = "E", WARNING = WARNING, returnCode = 9)) } sigmasq <- parameters$variance$sigmasq if(is.null(sigmasq)) stop("variance parameters are missing") if(length(sigmasq) > 1) if(warn) warning("more than one sigma-squared given") if(sigmasq < 0) stop("sigma-squared is negative") if(!sigmasq) { WARNING <- "sigma-squared vanishes" if(warn) warning(WARNING) z <- matrix(as.double(NA),n,G) dimnames(z) <- list(names(data), NULL) return(structure(z, logarithm = logarithm, modelName = "E", WARNING = WARNING, returnCode = 9)) } temp <- .Fortran("es1e", as.double(data), as.double(mu), as.double(sigmasq), as.double(-1), as.integer(n), as.integer(G), as.double(-1), double(1), double(n * G), PACKAGE = "mclust")[8:9] loglik <- temp[[1]] z <- matrix(temp[[2]], n, G) WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "sigma-squared falls below threshold" if(warn) warning(WARNING) z[] <- NA ret <- -1 } else { if (!logarithm) z <- exp(z) ret <- 0 } dimnames(z) <- list(names(data),NULL) structure(z, logarithm = logarithm, modelName = "E", WARNING = WARNING, returnCode = ret) } emE <- function(data, parameters, prior = NULL, control = emControl(), warn = NULL, ...) { z <- estepE(data, parameters = parameters, warn = warn)$z meE(data, z = z, prior = prior, control = control, Vinv = parameters$Vinv, warn = warn) } estepE <- function(data, parameters, warn = NULL, ...) 
{ if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- (is.null(dimdat) || NCOL(data) == 1) if(!oneD) stop("data must be one-dimensional") data <- drop(data) n <- length(data) pro <- parameters$pro pro <- pro/sum(pro) l <- length(pro) mu <- drop(parameters$mean) G <- length(mu) noise <- (l == G + 1) if(!noise) { if(l != G) stop("pro improperly specified") K <- G Vinv <- NULL } else { K <- G + 1 Vinv <- parameters$Vinv if(is.null(Vinv) || Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE) } if(any(is.na(unlist(parameters[c("pro", "mean", "variance")]))) || any(is.null(parameters[c("pro", "mean", "variance")]))) { WARNING <- "parameters are missing" if(warn) warning(WARNING) z <- matrix(as.double(NA),n,K) dimnames(z) <- list(names(data), NULL) return(structure(list(modelName = "E", n=n, d=1, G=G, z=z, parameters=parameters, loglik=NA), WARNING = WARNING, returnCode = 9)) } sigmasq <- parameters$variance$sigmasq if(is.null(sigmasq)) stop("variance parameters are missing") if(length(sigmasq) > 1) if(warn) warning("more than one sigma-squared specified") if(sigmasq < 0) stop("sigma-squared is negative") if(!sigmasq) { WARNING <- "sigma-squared vanishes" if(warn) warning(WARNING) z <- matrix(as.double(NA),n,K) dimnames(z) <- list(names(data), NULL) return(structure(list(modelName = "E", n=n, d=1, G=G, z=z, parameters=parameters, loglik=NA), WARNING = WARNING, returnCode = -1)) } temp <- .Fortran("es1e", as.double(data), as.double(mu), as.double(sigmasq), as.double(pro), as.integer(n), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), double(1), double(n * K), PACKAGE = "mclust")[8:9] loglik <- temp[[1]] z <- matrix(temp[[2]], n, K) WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "cannot compute E-step" if(warn) warning(WARNING) z[] <- loglik <- NA ret <- -1 } else ret <- 0 dimnames(z) <- list(names(data),NULL) structure(list(modelName = "E", n = n, d = 1, G = G, z = z, parameters = parameters, loglik = loglik), WARNING = WARNING, returnCode = ret) } cdensEEV <- function(data, logarithm = FALSE, parameters, warn = NULL, ...) 
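# cdensEEV(): component densities for the EEV model. Like the other cdens*
# functions, it returns the n x G matrix of densities of each observation
# under each component (optionally on the log scale), without weighting by
# the mixing proportions; the covariances are passed as scale, shape and
# per-component orientation.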
{ dimdat <- dim(data) if(is.null(dimdat) || length(dimdat) != 2) stop("data must be a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) mu <- as.matrix(parameters$mean) G <- ncol(mu) if(any(is.na(unlist(parameters[c("pro", "mean", "variance")]))) || any(is.null(parameters[c("pro", "mean", "variance")]))) { WARNING <- "parameters are missing" if(warn) warning(WARNING) z <- matrix(as.double(NA),n,G) dimnames(z) <- list(dimnames(data)[[1]], NULL) return(structure(z, logarithm = logarithm, modelName = "EEV", WARNING = WARNING, returnCode = 9)) } if (is.null(parameters$variance$scale) || is.null(parameters$variance$shape) || is.null(parameters$variance$orientation)) stop("variance parameters are missing") temp <- .Fortran("eseev", as.double(data), as.double(mu), as.double(parameters$variance$scale), as.double(parameters$variance$shape), as.double(aperm(parameters$variance$orientation,c(2,1,3))), as.double(-1), as.integer(n), as.integer(p), as.integer(G), as.double(-1), double(p), double(p), double(1), double(n * G), PACKAGE = "mclust")[13:14] loglik <- temp[[1]] z <- matrix(temp[[2]], n, G) WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "singular covariance" if(warn) warning(WARNING) z[] <- NA ret <- -1 } else { if (!logarithm) z <- exp(z) ret <- 0 } dimnames(z) <- list(dimnames(data)[[1]],NULL) structure(z, logarithm = logarithm, modelName = "EEV", WARNING = WARNING, returnCode = ret) } emEEV <- function(data, parameters, prior = NULL, control = emControl(), warn = NULL, ...) { z <- estepEEV(data, parameters = parameters, warn = warn)$z meEEV(data, z = z, prior = prior, control = control, Vinv = parameters$Vinv, warn = warn) } estepEEV <- function(data, parameters, warn = NULL, ...) { if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) if(is.null(dimdat) || length(dimdat) != 2) stop("data must be a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) pro <- parameters$pro pro <- pro/sum(pro) l <- length(pro) mu <- as.matrix(parameters$mean) G <- ncol(mu) noise <- (l == G + 1) if(!noise) { if(l != G) stop("pro improperly specified") K <- G Vinv <- NULL } else { K <- G + 1 Vinv <- parameters$Vinv if(is.null(Vinv) || Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE) } if(any(is.na(unlist(parameters[c("pro", "mean", "variance")]))) || any(is.null(parameters[c("pro", "mean", "variance")]))) { WARNING <- "parameters are missing" if(warn) warning(WARNING) z <- matrix(as.double(NA),n,K) dimnames(z) <- list(dimnames(data)[[1]], NULL) return(structure(list(modelName = "EEV", n=n, d=p, G=G, z=z, parameters=parameters, loglik=NA), WARNING = WARNING, returnCode = 9)) } if (is.null(parameters$variance$scale) || is.null(parameters$variance$shape) || is.null(parameters$variance$orientation)) stop("variance parameters are missing") temp <- .Fortran("eseev", as.double(data), as.double(mu), as.double(parameters$variance$scale), as.double(parameters$variance$shape), as.double(aperm(parameters$variance$orientation,c(2,1,3))), as.double(pro), as.integer(n), as.integer(p), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), double(p), double(p), double(1), double(n * K), PACKAGE = "mclust")[13:14] loglik <- temp[[1]] z <- matrix(temp[[2]], n, K) WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "singular covariance" if(warn) warning(WARNING) z[] <- loglik <- NA ret <- -1 } else ret <- 0 dimnames(z) <- list(dimnames(data)[[1]],NULL) structure(list(modelName = "EEV", n = n, d = p, G = G, z = z, parameters = 
parameters, loglik = loglik), WARNING = WARNING, returnCode = ret) } meEEV <- function(data, z, prior = NULL, control = emControl(), Vinv = NULL, warn = NULL, ...) { if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- (is.null(dimdat) || NCOL(data) == 1) if(oneD || length(dimdat) != 2) stop("data should in the form of a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) z <- as.matrix(z) dimz <- dim(z) if(dimz[1] != n) stop("data and z should have the same row dimension") K <- dimz[2] if (!is.null(Vinv)) { G <- K - 1 if(Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE) } else G <- K if(all(is.na(z))) { WARNING <- "z is missing" if(warn) warning(WARNING) variance <- list(modelName = "EEV", d = p, G = G, scale = NA, shape = rep(NA,p), orientation = array(NA,c(p,p,G))) parameters <- list(pro=rep(NA,G), mean=matrix(as.double(NA),p,G), variance=variance, Vinv=Vinv) return(structure(list(modelName="EEV", prior=prior, n=n, d=p, G=G, z=z, parameters=parameters, control=control, loglik=NA), WARNING = WARNING, returnCode = 9)) } if(any(is.na(z)) || any(z < 0) || any(z > 1)) stop("improper specification of z") lwork <- max(3 * min(n, p) + max(n, p), 5 * min(n, p)) storage.mode(z) <- "double" if(is.null(prior)) { temp <- .Fortran("meeev", as.logical(control$equalPro), as.double(data), as.integer(n), as.integer(p), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), z, as.integer(control$itmax[1]), as.double(control$tol[1]), as.double(control$eps), as.integer(lwork), double(p * G), double(1), double(p), double(p * p * G), double(K), double(lwork), double(p), PACKAGE = "mclust")[7:16] } else { priorParams <- do.call(prior$functionName, c(list(data = data, G = G, modelName = "EEV"), prior[names(prior) !="functionName"])) temp <- .Fortran("meeevp", as.logical(control$equalPro), as.double(data), as.integer(n), as.integer(p), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), as.double(priorParams$shrinkage), as.double(priorParams$mean), as.double(if(any(priorParams$scale != 0)) chol(priorParams$ scale) else priorParams$scale), as.double(priorParams$dof), z, as.integer(control$itmax[1]), as.double(control$tol[1]), as.double(control$eps), as.integer(lwork), double(p * G), double(1), double(p), double(p * p * G), double(K), double(lwork), double(p), PACKAGE = "mclust")[11:20] } z <- temp[[1]] its <- temp[[2]] err <- temp[[3]] loglik <- temp[[4]] lapackSVDinfo <- temp[[5]] mu <- matrix(temp[[6]], p, G) dimnames(mu) <- list(NULL, as.character(1:G)) scale <- temp[[7]] shape <- temp[[8]] O <- aperm(array(temp[[9]], c(p, p, G)),c(2,1,3)) pro <- temp[[10]] WARNING <- NULL if(lapackSVDinfo) { if(lapackSVDinfo > 0) { WARNING <- "LAPACK DGESVD fails to converge" } else { WARNING <- "input error for LAPACK DGESVD" } z[] <- O[] <- shape[] <- NA scale <- loglik <- NA sigma <- array(NA, c(p, p, G)) ret <- -9 } else if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "singular covariance" if(warn) warning(WARNING) shape[] <- NA mu[] <- pro[] <- z[] <- loglik <- NA sigma <- array(NA, c(p, p, G)) ret <- -1 } else if(loglik < - signif(.Machine$double.xmax, 6)) { if(control$equalPro) { WARNING <- "a z column sum fell below threshold" if(warn) warning(WARNING) } else { WARNING <- "mixing proportion fell below threshold" if(warn) warning(WARNING) } mu[] <- pro[] <- z[] <- loglik <- NA sigma <- array(NA, c(p, p, G)) ret <- if(control$equalPro) -2 else -3 } else { sigma <- scale * shapeO(shape, O, transpose = FALSE) if(its >= control$itmax[1]) { WARNING <- "iteration 
limit reached" if(warn) warning(WARNING) its <- - its ret <- 1 } else ret <- 0 } info <- c(iterations = its, error = err) dimnames(z) <- list(dimnames(data)[[1]], NULL) dimnames(mu) <- list(dimnames(data)[[2]], NULL) dimnames(O) <- list(dimnames(data)[[2]], dimnames(data)[[2]], NULL) ## Sigma = scale * O %*% diag(shape) %*% t(O) variance <- list(modelName = "EEV", d = p, G = G, sigma = sigma, scale = scale, shape = shape, orientation = O) parameters <- list(pro=pro, mean=mu, variance=variance, Vinv=Vinv) structure(list(modelName = "EEV", prior = prior, n = n, d = p, G = G, z = z, parameters = parameters, control = control, loglik = loglik), info = info, WARNING = WARNING, returnCode = ret) } mstepEEV <- function(data, z, prior = NULL, warn = NULL, ...) { if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- (is.null(dimdat) || NCOL(data) == 1) if(oneD || length(dimdat) != 2) stop("data should be a matrix or a vector") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) ## z <- as.matrix(z) dimz <- dim(z) if(dimz[1] != n) stop("row dimension of z should equal data length") G <- dimz[2] if(all(is.na(z))) { WARNING <- "z is missing" if(warn) warning(WARNING) variance <- list(modelName = "EEV", d = p, G = G, scale = NA, shape = rep(NA,p), orientation=array(NA,c(p,p,G))) parameters <- list(pro=rep(NA,G), mean=matrix(as.double(NA),p,G), variance=variance) return(structure(list(modelName="EEV", prior=prior, n=n, d=p, G=G, z=z, parameters=parameters), WARNING = WARNING, returnCode = 9)) } # shape <- sqrt(rev(sort(shape/exp(sum(log(shape))/p)))) if(any(is.na(z)) || any(z < 0) || any(z > 1)) stop( "improper specification of z") lwork <- max(3 * min(n, p) + max(n, p), 5 * min(n, p), G) if(is.null(prior)) { temp <- .Fortran("mseev", as.double(data), as.double(z), as.integer(n), as.integer(p), as.integer(G), double(lwork), as.integer(lwork), double(p * G), double(1), double(p), double(p * p * G), double(G), PACKAGE = "mclust")[7:12] } else { priorParams <- do.call(prior$functionName, c(list(data = data, G = G, modelName = "EEV"), prior[names(prior) != "functionName"])) temp <- .Fortran("mseevp", as.double(data), as.double(z), as.integer(n), as.integer(p), as.integer(G), as.double(priorParams$shrinkage), as.double(priorParams$mean), as.double(if(any(priorParams$scale != 0)) chol(priorParams$ scale) else priorParams$scale), as.double(priorParams$dof), double(lwork), as.integer(lwork), double(p * G), double(1), double(p), double(p * p * G), double(G), PACKAGE = "mclust")[11:16] } lapackSVDinfo <- temp[[1]] mu <- matrix(temp[[2]], p, G) dimnames(mu) <- list(NULL, as.character(1:G)) scale <- temp[[3]] shape <- temp[[4]] O <- aperm( array(temp[[5]], c(p, p, G)), c(2,1,3)) pro <- temp[[6]] WARNING <- NULL if(lapackSVDinfo) { if(lapackSVDinfo > 0) { WARNING <- "LAPACK DGESVD fails to converge" if(warn) warning(WARNING) ret <- -4 } else { WARNING <- "input error for LAPACK DGESVD" if(warn) warning(WARNING) ret <- -5 } O[] <- shape[] <- scale <- NA sigma <- array(NA, c(p, p, G)) } else if(any(c(abs(scale), shape) > signif(.Machine$double.xmax, 6))) { WARNING <- "cannot compute M-step" if(warn) warning(WARNING) mu[] <- pro[] <- scale <- O[] <- shape[] <- NA sigma <- array(NA, c(p, p, G)) ret <- -1 } else { sigma <- scale * shapeO(shape, O, transpose = FALSE) ret <- 0 } dimnames(z) <- list(dimnames(data)[[1]], NULL) dimnames(mu) <- list(dimnames(data)[[2]], NULL) dimnames(sigma) <- dimnames(O) <- list(dimnames(data)[[2]], dimnames(data)[[2]], NULL) variance <- list(modelName = "EEV", d = 
p, G = G, sigma = sigma, scale = scale, shape = shape, orientation = O) parameters <- list(pro=pro, mean=mu, variance=variance) structure(list(modelName = "EEV", prior = prior, n = n, d = p, G = G, z = z, parameters = parameters), WARNING = WARNING, returnCode = ret) } simEEV <- function(parameters, n, seed = NULL, ...) { if(!is.null(seed)) set.seed(seed) mu <- as.matrix(parameters$mean) d <- nrow(mu) G <- ncol(mu) if(any(is.na(parameters[c("mean", "variance")])) || any(is.null(parameters[c("mean", "variance")]))) { warning("parameters are missing") return(structure(matrix(as.double(NA), n, d + 1), modelName = "EEV")) } pro <- parameters$pro if(is.null(pro)) pro <- rep(1/G, G) clabels <- sample(1:G, size = n, replace = TRUE, prob = pro) ctabel <- tabulate(clabels, nbins=G) x <- matrix(0, n, d) shape <- parameters$variance$shape if(length(shape) != d) stop("shape incompatible with mean") sss <- sqrt(parameters$variance$scale * shape) for(k in 1:G) { m <- ctabel[k] cholSigma <- t(parameters$variance$orientation[, , k]) * sss x[clabels == k, ] <- sweep(matrix(rnorm(m * d), nrow = m, ncol = d) %*% cholSigma, MARGIN = 2, STATS = mu[, k], FUN = "+") } dimnames(x) <- list(NULL, paste0("x", 1:d)) structure(cbind(group = clabels, x), modelName = "EEV") } emEII <- function(data, parameters, prior = NULL, control = emControl(), warn = NULL, ...) { z <- estepEII(data, parameters = parameters, warn = warn)$z meEII(data, z = z, prior = prior, control = control, Vinv = parameters$Vinv, warn = warn) } estepEII <- function(data, parameters, warn = NULL, ...) { if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) if(is.null(dimdat) || length(dimdat) != 2) stop("data must be a matrix") data <- as.matrix(data) p <- ncol(data) n <- nrow(data) pro <- parameters$pro pro <- pro/sum(pro) l <- length(pro) mu <- as.matrix(parameters$mean) G <- ncol(mu) noise <- (l == G + 1) if(!noise) { if(l != G) stop("pro improperly specified") K <- G Vinv <- NULL } else { K <- G + 1 Vinv <- parameters$Vinv if(is.null(Vinv) || Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE) } if(any(is.na(unlist(parameters[c("pro", "mean", "variance")]))) || any(is.null(parameters[c("pro", "mean", "variance")]))) { WARNING <- "parameters are missing" if(warn) warning(WARNING) z <- matrix(as.double(NA),n,K) dimnames(z) <- list(dimnames(data)[[1]], NULL) return(structure(list(modelName = "EII", n=n, d=p, G=G, z=z, parameters=parameters, loglik=NA), WARNING = WARNING, returnCode = 9)) } sigmasq <- parameters$variance$sigmasq if(is.null(sigmasq)) if(warn) warning("variance parameters are missing") if(sigmasq < 0) stop("sigma-squared is negative") if(!sigmasq) { WARNING <- "sigma-squared vanishes" if(warn) warning(WARNING) z <- matrix(as.double(NA),n,K) dimnames(z) <- list(dimnames(data)[[1]], NULL) return(structure(list(modelName = "EII", n=n, d=p, G=G, z=z, parameters=parameters, loglik=NA), WARNING = WARNING, returnCode = -1)) } temp <- .Fortran("eseii", as.double(data), as.double(mu), as.double(sigmasq), as.double(pro), as.integer(n), as.integer(p), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), double(1), double(n * K), PACKAGE = "mclust")[9:10] loglik <- temp[[1]] z <- matrix(temp[[2]], n, K) WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "sigma-squared falls below threshold" if(warn) warning(WARNING) z[] <- loglik <- NA ret <- -1 } else ret <- 0 dimnames(z) <- list(dimnames(data)[[1]],NULL) structure(list(modelName = "EII", n = n, d = p, G = G, z = z, parameters = parameters, loglik = 
loglik), WARNING = WARNING, returnCode = ret) } meEII <- function(data, z, prior = NULL, control = emControl(), Vinv = NULL, warn = NULL, ...) { if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- (is.null(dimdat) || NCOL(data) == 1) if(oneD || length(dimdat) > 2) stop("data should in the form of a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) z <- as.matrix(z) dimz <- dim(z) if(dimz[1] != n) stop("data and z should have the same row dimension") K <- dimz[2] # number of groups if (!is.null(Vinv)) { G <- K - 1 if(Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE) } else G <- K if(all(is.na(z))) { WARNING <- "z is missing" if(warn) warning(WARNING) variance <- list(modelName = "EII", d = p, G = G, sigmasq = NA) parameters <- list(pro=rep(NA,G), mean=matrix(as.double(NA),p,G), variance=variance, Vinv=Vinv) return(structure(list(modelName="EII", prior=prior, n=n, d=p, G=G, z=z, parameters=parameters, control=control, loglik=NA), WARNING = WARNING, returnCode = 9)) } if(any(is.na(z)) || any(z < 0) || any(z > 1)) stop("improper specification of z") storage.mode(z) <- "double" if(is.null(prior)) { temp <- .Fortran("meeii", as.logical(control$equalPro), as.double(data), as.integer(n), as.integer(p), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), z, as.integer(control$itmax[1]), as.double(control$tol[1]), as.double(control$eps), double(p * G), double(1), double(K), PACKAGE = "mclust")[7:13] } else { priorParams <- do.call(prior$functionName, c(list(data = data, G = G, modelName = "EII"), prior[names(prior) != "functionName"])) temp <- .Fortran("meeiip", as.logical(control$equalPro), as.double(data), as.integer(n), as.integer(p), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), as.double(priorParams$shrinkage), as.double(priorParams$mean), as.double(priorParams$scale), as.double(priorParams$dof), z, as.integer(control$itmax[1]), as.double(control$tol[1]), as.double(control$eps), double(p * G), double(1), double(K), PACKAGE = "mclust")[c(11:17, 10)] } mu <- matrix(temp[[5]], p, G) dimnames(mu) <- list(NULL, as.character(1:G)) z <- temp[[1]] its <- temp[[2]] err <- temp[[3]] loglik <- temp[[4]] sigmasq <- temp[[6]] Sigma <- diag(rep(sigmasq, p)) pro <- temp[[7]] WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6) || sigmasq <= max(control$eps,0)) { WARNING <- "sigma-squared falls below threshold" if(warn) warning(WARNING) mu[] <- pro[] <- sigmasq <- z[] <- loglik <- NA sigma <- array(NA, c(p, p, G)) ret <- -1 } else if(loglik < - signif(.Machine$double.xmax, 6)) { if(control$equalPro) { WARNING <- "z column sum fell below threshold" if(warn) warning(WARNING) } else { WARNING <- "mixing proportion fell below threshold" if(warn) warning(WARNING) } mu[] <- pro[] <- sigmasq <- z[] <- loglik <- NA sigma <- array(NA, c(p, p, G)) ret <- if(control$equalPro) -2 else -3 } else { sigma <- array(0, c(p, p, G)) for(k in 1:G) sigma[, , k] <- Sigma if(its >= control$itmax[1]) { WARNING <- "iteration limit reached" if(warn) warning(WARNING) its <- - its ret <- 1 } else ret <- 0 } info <- c(iterations = its, error = err) dimnames(z) <- list(dimnames(data)[[1]], NULL) dimnames(mu) <- list(dimnames(data)[[2]], NULL) dimnames(Sigma) <- list(dimnames(data)[[2]], dimnames(data)[[2]]) dimnames(sigma) <- list(dimnames(data)[[2]], dimnames(data)[[2]], NULL) variance <- list(modelName = "EII", d = p, G = G, sigma = sigma, Sigma = Sigma, sigmasq = sigmasq, scale = sigmasq) parameters <- list(pro=pro, mean=mu, variance = variance, Vinv=Vinv) 
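  ## For EII every component covariance is sigmasq * I, so the G slices of
  ## 'sigma' all repeat the common diagonal matrix 'Sigma'.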
structure(list(modelName = "EII", prior = prior, n = n, d = p, G = G, z = z, parameters = parameters, control = control, loglik = loglik), info = info, WARNING = WARNING, returnCode = ret) } mstepEII <- function(data, z, prior = NULL, warn = NULL, ...) { if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- (is.null(dimdat) || NCOL(data) == 1) if(oneD || length(dimdat) != 2) stop("data should be a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) z <- as.matrix(z) dimz <- dim(z) if(dimz[1] != n) stop("row dimension of z should equal data length") G <- dimz[2] if(all(is.na(z))) { WARNING <- "z is missing" if(warn) warning(WARNING) variance <- list(modelName = "EII", d = p, G = G, sigmasq = NA) parameters <- list(pro=rep(NA,G), mean=matrix(as.double(NA),p,G), variance=variance) return(structure(list(modelName="EII", prior=prior, n=n, d=p, G=G, z=z, parameters=parameters), WARNING = WARNING, returnCode = 9)) } if(any(is.na(z)) || any(z < 0) || any(z > 1)) stop("improper specification of z") storage.mode(z) <- "double" if(is.null(prior)) { temp <- .Fortran("mseii", as.double(data), as.double(z), as.integer(n), as.integer(p), as.integer(G), double(p * G), double(1), double(G), PACKAGE = "mclust")[6:8] } else { priorParams <- do.call(prior$functionName, c(list(data = data, G = G, modelName = "EII"), prior[names(prior) !="functionName"])) temp <- .Fortran("mseiip", as.double(data), as.double(z), as.integer(n), as.integer(p), as.integer(G), as.double(priorParams$shrinkage), as.double(priorParams$mean), as.double(priorParams$scale), as.double(priorParams$dof), double(p * G), double(1), double(G), PACKAGE = "mclust")[10:12] } mu <- matrix(temp[[1]], p, G) dimnames(mu) <- list(NULL, as.character(1:G)) sigmasq <- temp[[2]] pro <- temp[[3]] sigma <- array(0, c(p, p, G)) Sigma <- diag(rep(sigmasq, p)) for(k in 1:G) sigma[, , k] <- Sigma WARNING <- NULL if(sigmasq > signif(.Machine$double.xmax, 6)) { WARNING <- "cannot compute M-step" if(warn) warning(WARNING) ret <- -1 } else ret <- 0 dimnames(z) <- list(dimnames(data)[[1]], NULL) dimnames(mu) <- list(dimnames(data)[[2]], NULL) dimnames(Sigma) <- list(dimnames(data)[[2]], dimnames(data)[[2]]) dimnames(sigma) <- list(dimnames(data)[[2]], dimnames(data)[[2]], NULL) variance <- list(modelName = "EII", d = p, G = G, sigma = sigma, Sigma = Sigma, sigmasq = sigmasq, scale = sigmasq) parameters <- list(pro=pro, mean=mu, variance = variance) structure(list(modelName = "EII", prior = prior, n = n, d = p, G = G, z = z, parameters = parameters), WARNING = WARNING, returnCode = ret) } simEII <- function(parameters, n, seed = NULL, ...) { if(!is.null(seed)) set.seed(seed) mu <- as.matrix(parameters$mean) d <- nrow(mu) G <- ncol(mu) if(any(is.na(parameters[c("mean", "variance")])) || any(is.null(parameters[c("mean", "variance")]))) { warning("parameters are missing") return(structure(matrix(as.double(NA), n, d), modelName = "EII")) } pro <- parameters$pro if(is.null(pro)) pro <- rep(1/G, G) clabels <- sample(1:G, size = n, replace = TRUE, prob = pro) ctabel <- tabulate(clabels, nbins=G) x <- matrix(0, n, d) sigmasq <- parameters$variance$sigmasq cholSigma <- diag(rep(sqrt(sigmasq), d)) for(k in 1:G) { m <- ctabel[k] x[clabels == k, ] <- sweep(matrix(rnorm(m * d), nrow = m, ncol = d) %*% cholSigma, MARGIN = 2, STATS = mu[, k], FUN = "+") } dimnames(x) <- list(NULL, paste0("x", 1:d)) structure(cbind(group = clabels, x), modelName = "EII") } meE <- function(data, z, prior = NULL, control = emControl(), Vinv = NULL, warn = NULL, ...) 
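## meE: EM algorithm (M-step first) for the one-dimensional, equal-variance
## model "E", given an n x G matrix z of initial conditional probabilities.
## Illustrative sketch, assuming the base R 'precip' data (not run):
#  z0  <- unmap(sample(1:2, length(precip), replace = TRUE))
#  fit <- meE(precip, z = z0)
#  fit$parameters$mean; fit$parameters$variance$sigmasq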
{ if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- (is.null(dimdat) || NCOL(data) == 1) if(!oneD) stop("data must be 1 dimensional") data <- as.vector(data) n <- length(data) z <- as.matrix(z) dimz <- dim(z) if(dimz[1] != n) stop("row dimension of z should equal length of data") K <- dimz[2] if (!is.null(Vinv)) { G <- K - 1 if (Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE) } else G <- K if(all(is.na(z))) { WARNING <- "z is missing" if(warn) warning(WARNING) variance <- list(modelName = "E", d = 1, G = G, sigmasq = NA) parameters <- list(pro=rep(NA,G), mean=rep(NA,G), variance=variance, Vinv=Vinv) return(structure(list(modelName="E", prior=prior, n=n, d=1, G=G, z=z, parameters=parameters, control=control, loglik=NA), WARNING = WARNING, returnCode = 9)) } if(any(is.na(z)) || any(z < 0) || any(z > 1)) stop("improper specification of z") storage.mode(z) <- "double" if(is.null(prior)) { temp <- .Fortran("me1e", as.logical(control$equalPro), as.double(data), as.integer(n), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), z, as.integer(control$itmax[1]), as.double(control$tol[1]), as.double(control$eps), double(G), double(1), double(K), PACKAGE = "mclust")[6:12] } else { priorParams <- do.call(prior$functionName, c(list(data = data, G = G, modelName = "E"), prior[names(prior) != "functionName"])) temp <- .Fortran("me1ep", as.logical(control$equalPro), as.double(data), as.integer(n), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), as.double(priorParams$shrinkage), as.double(priorParams$mean), as.double(priorParams$scale), as.double(priorParams$dof), z, as.integer(control$itmax[1]), as.double(control$tol[1]), as.double(control$eps), double(G), double(1), double(K), PACKAGE = "mclust")[c(10:16, 9)] } mu <- temp[[5]] names(mu) <- as.character(1:G) z <- temp[[1]] its <- temp[[2]] err <- temp[[3]] loglik <- temp[[4]] sigmasq <- temp[[6]] pro <- temp[[7]] ## log post <- temp[[8]] WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6) || sigmasq <= max(control$eps,0)) { WARNING <- "sigma-squared falls below threshold" if(warn) warning(WARNING) mu[] <- pro[] <- sigmasq <- z[] <- loglik <- logprior <- NA ret <- -1 } else if(loglik < - signif(.Machine$double.xmax, 6)) { if(control$equalPro) { WARNING <- "z column sum fell below threshold" if(warn) warning(WARNING) } else { WARNING <- "mixing proportion fell below threshold" if(warn) warning(WARNING) } mu[] <- pro[] <- sigmasq <- z[] <- loglik <- NA ret <- if(control$equalPro) -2 else -3 } else if(its >= control$itmax[1]) { WARNING <- "iteration limit reached" if(warn) warning(WARNING) its <- - its ret <- 1 } else ret <- 0 info <- c(iterations = its, error = err) dimnames(z) <- list(names(data), NULL) variance <- list(modelName = "E", d = 1, G = G, sigmasq = sigmasq) parameters <- list(pro=pro, mean=mu, variance=variance, Vinv=Vinv) structure(list(modelName = "E", prior = prior, n = n, d = 1, G = G, z = z, parameters = parameters, control = control, loglik = loglik), info = info, WARNING = WARNING, returnCode = ret) } mstepE <- function(data, z, prior = NULL, warn = NULL, ...) 
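## mstepE: single M-step for the one-dimensional "E" model; returns the
## weighted means, the common variance sigmasq, and the mixing proportions.
## Illustrative sketch, assuming base R 'precip' and kmeans() from stats (not run):
#  ms <- mstepE(data = precip, z = unmap(kmeans(precip, 2)$cluster))
#  ms$parameters$variance$sigmasq   # one pooled variance shared by both groups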
{ if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- (is.null(dimdat) || NCOL(data) == 1) if(!oneD) stop("data must be one-dimensional") data <- as.vector(data) n <- length(data) ## z <- as.matrix(z) dimz <- dim(z) if(dimz[1] != n) stop("row dimension of z should equal data length") # number of groups G <- dimz[2] ## if(all(is.na(z))) { WARNING <- "z is missing" if(warn) warning(WARNING) variance <- list(modelName="E", d=1, G=G, sigmasq=NA) parameters <- list(pro=rep(NA,G), mean=rep(NA,G), variance=variance) return(structure(list(modelName="E", prior=prior, n=n, d=1, G=G, z = z, parameters=parameters), WARNING = WARNING, returnCode = 9)) } if(any(is.na(z)) || any(z < 0) || any(z > 1)) stop("improper specification of z") if(is.null(prior)) { temp <- .Fortran("ms1e", as.double(data), as.double(z), as.integer(n), as.integer(G), double(G), double(1), double(G), PACKAGE = "mclust")[5:7] } else { priorParams <- do.call(prior$functionName, c(list(data = data, G = G, modelName = "E"), prior[names(prior) != "functionName"])) storage.mode(z) <- "double" temp <- .Fortran("ms1ep", as.double(data), z, as.integer(n), as.integer(G), as.double(priorParams$shrinkage), as.double(priorParams$mean), as.double(priorParams$scale), as.double(priorParams$dof), double(G), double(1), double(G), PACKAGE = "mclust")[9:11] } mu <- temp[[1]] names(mu) <- as.character(1:G) sigmasq <- temp[[2]] pro <- temp[[3]] WARNING <- NULL if(sigmasq > signif(.Machine$double.xmax, 6)) { WARNING <- "cannot compute M-step" if(warn) warning(WARNING) pro[] <- mu[] <- sigmasq <- NA ret <- -1 } else ret <- 0 dimnames(z) <- list(names(data), NULL) variance <- list(modelName = "E", d = 1, G = G, sigmasq = sigmasq) parameters <- list(pro=pro, mean=mu, variance=variance) structure(list(modelName = "E", prior = prior, n = n, d = 1, G = G, z = z, parameters = parameters), WARNING = WARNING, returnCode = ret) } simE <- function(parameters, n, seed = NULL, ...) { if(any(is.na(parameters[c("mean", "variance")])) || any(is.null(parameters[c("mean", "variance")]))) { warning("parameters are missing") return(structure(matrix(as.double(NA), n, 2), modelName = "E")) } if(!is.null(seed)) set.seed(seed) mu <- parameters$mean G <- length(mu) pro <- parameters$pro if(is.null(pro)) pro <- rep(1/G, G) clabels <- sample(1:G, size = n, replace = TRUE, prob = pro) ctabel <- tabulate(clabels, nbins=G) x <- rep(0, n) sd <- sqrt(parameters$variance$sigmasq) for(k in 1:G) { x[clabels == k] <- mu[k] + rnorm(ctabel[k], sd = sd) } structure(cbind(group = clabels, "1" = x), modelName = "E") } cdensEVI <- function(data, logarithm = FALSE, parameters, warn = NULL, ...) 
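## cdensEVI: per-component densities (optionally on the log scale) for the
## diagonal, equal-volume / varying-shape model "EVI".  Illustrative sketch,
## assuming the iris data (not run):
#  X  <- as.matrix(iris[, 1:4])
#  ms <- mstepEVI(X, z = unmap(iris$Species))
#  head(cdensEVI(X, parameters = ms$parameters, logarithm = TRUE))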
{ if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) if(is.null(dimdat) || length(dimdat) != 2) stop("data must be a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) mu <- parameters$mean G <- ncol(mu) if(any(is.na(unlist(parameters[c("pro", "mean", "variance")]))) || any(is.null(parameters[c("pro", "mean", "variance")]))) { WARNING <- "parameters are missing" if(warn) warning(WARNING) z <- matrix(as.double(NA),n,G) dimnames(z) <- list(dimnames(data)[[1]], NULL) return(structure(z, logarithm = logarithm, modelName = "EVI", WARNING = WARNING, returnCode = 9)) } if (is.null(parameters$variance$scale) || is.null(parameters$variance$shape)) stop("variance parameters are missing") temp <- .Fortran("esevi", as.double(data), as.double(mu), as.double(parameters$variance$scale), as.double(parameters$variance$shape), as.double(-1), as.integer(n), as.integer(p), as.integer(G), as.double(-1), double(1), double(n * G), PACKAGE = "mclust")[10:11] loglik <- temp[[1]] z <- matrix(temp[[2]], n, G) WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "singular covariance" if(warn) warning(WARNING) z[] <- NA ret <- -1 } else { if (!logarithm) z <- exp(z) ret <- 0 } dimnames(z) <- list(dimnames(data)[[1]],NULL) structure(z, logarithm = logarithm, modelName = "EVI", WARNING = WARNING, returnCode = ret) } emEVI <- function(data, parameters, prior = NULL, control = emControl(), warn = NULL, ...) { z <- estepEVI(data, parameters = parameters, warn = warn)$z meEVI(data, z = z, prior = prior, control = control, Vinv = parameters$Vinv, warn = warn) } estepEVI <- function(data, parameters, warn = NULL, ...) { if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) if(is.null(dimdat) || length(dimdat) != 2) stop("data must be a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) pro <- parameters$pro pro <- pro/sum(pro) l <- length(pro) mu <- as.matrix(parameters$mean) G <- ncol(mu) noise <- (l == G + 1) if(!noise) { if(l != G) stop("pro improperly specified") K <- G Vinv <- NULL } else { K <- G + 1 Vinv <- parameters$Vinv if(is.null(Vinv) || Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE) } if(any(is.na(unlist(parameters[c("pro", "mean", "variance")]))) || any(is.null(parameters[c("pro", "mean", "variance")]))) { WARNING <- "parameters are missing" if(warn) warning(WARNING) z <- matrix(as.double(NA),n,K) dimnames(z) <- list(dimnames(data)[[1]], NULL) return(structure(list(modelName = "EVI", n=n, d=p, G=G, z=z, parameters=parameters, loglik=NA), WARNING = WARNING, returnCode = 9)) } if (is.null(parameters$variance$scale) || is.null(parameters$variance$shape)) stop("variance parameters are missing") temp <- .Fortran("esevi", as.double(data), as.double(mu), as.double(parameters$variance$scale), as.double(parameters$variance$shape), as.double(pro), as.integer(n), as.integer(p), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), double(1), double(n * K), PACKAGE = "mclust")[10:11] loglik <- temp[[1]] z <- matrix(temp[[2]], n, K) WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "singular covariance" if(warn) warning(WARNING) z[] <- loglik <- NA ret <- -1 } else ret <- 0 dimnames(z) <- list(dimnames(data)[[1]],NULL) structure(list(modelName = "EVI", n = n, d = p, G = G, z = z, parameters = parameters, loglik = loglik), WARNING = WARNING, returnCode = ret) } meEVI <- function(data, z, prior = NULL, control = emControl(), Vinv = NULL, warn = NULL, ...) 
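## meEVI: full EM for the "EVI" model starting from a z matrix.
## Illustrative sketch, assuming the iris data (not run):
#  X   <- as.matrix(iris[, 1:4])
#  fit <- meEVI(X, z = unmap(iris$Species))
#  fit$loglik; attr(fit, "info")   # log-likelihood, iterations and error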
{ if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- (is.null(dimdat) || NCOL(data) == 1) if(oneD || length(dimdat) > 2) stop("data should in the form of a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) z <- as.matrix(z) dimz <- dim(z) if(dimz[1] != n) stop("data and z should have the same row dimension") K <- dimz[2] if (!is.null(Vinv)) { G <- K - 1 if (Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE) } else G <- K if(all(is.na(z))) { WARNING <- "z is missing" if(warn) warning(WARNING) variance <- list(modelName = "EVI", d = p, G = G, scale = NA, shape = matrix(as.double(NA),p,G)) parameters <- list(pro=rep(NA,G), mean=matrix(as.double(NA),p,G), variance=variance, Vinv=Vinv) return(structure(list(modelName="EVI", prior=prior, n=n, d=p, G=G, z=z, parameters=parameters, control=control, loglik=NA), WARNING = WARNING, returnCode = 9)) } if(any(is.na(z)) || any(z < 0) || any(z > 1)) stop("improper specification of z") storage.mode(z) <- "double" if(is.null(prior)) { temp <- .Fortran("meevi", as.logical(control$equalPro), as.double(data), as.integer(n), as.integer(p), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), z, as.integer(control$itmax[1]), as.double(control$tol[1]), as.double(control$eps), double(p * G), double(1), double(p * G), double(K), PACKAGE = "mclust")[7:14] } else { priorParams <- do.call(prior$functionName, c(list(data = data, G = G, modelName = "EVI"), prior[names(prior) != "functionName"])) temp <- .Fortran("meevip", as.logical(control$equalPro), as.double(data), as.integer(n), as.integer(p), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), as.double(priorParams$shrinkage), as.double(priorParams$mean), as.double(priorParams$scale), as.double(priorParams$dof), z, as.integer(control$itmax[1]), as.double(control$tol[1]), as.double(control$eps), double(p * G), double(1), double(p * G), double(K), PACKAGE = "mclust")[11:18] } z <- temp[[1]] its <- temp[[2]] err <- temp[[3]] loglik <- temp[[4]] mu <- matrix(temp[[5]], p, G) scale <- temp[[6]] shape <- matrix(temp[[7]], p, G) dimnames(mu) <- dimnames(shape) <- list(NULL, as.character(1:G)) pro <- temp[[8]] WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "singular covariance" if(warn) warning(WARNING) mu[] <- pro[] <- z[] <- loglik <- shape[] <- NA sigma <- array(NA, c(p, p, G)) ret <- -1 } else if(loglik < - signif(.Machine$double.xmax, 6)) { if(control$equalPro) { if(warn) warning("z column sum fell below threshold") WARNING <- "z column sum fell below threshold" } else { WARNING <- "mixing proportion fell below threshold" if(warn) warning(WARNING) } mu[] <- pro[] <- z[] <- loglik <- shape[] <- NA sigma <- array(NA, c(p, p, G)) ret <- if(control$equalPro) -2 else -3 } else { sigma <- array(apply(scale * shape, 2, diag), c(p, p, G)) if(its >= control$itmax[1]) { WARNING <- "iteration limit reached" if(warn) warning(WARNING) its <- - its ret <- 1 } else ret <- 0 } info <- c(iterations = its, error = err) dimnames(z) <- list(dimnames(data)[[1]], NULL) dimnames(mu) <- list(dimnames(data)[[2]], NULL) dimnames(sigma) <- list(dimnames(data)[[2]], dimnames(data)[[2]], NULL) variance <- list(modelName = "EVI", d = p, G = G, sigma = sigma, scale = scale, shape = shape) parameters <- list(pro=pro, mean=mu, variance=variance, Vinv=Vinv) structure(list(modelName = "EVI", prior = prior, n = n, d = p, G = G, z = z, parameters = parameters, control = control, loglik = loglik), info = info, WARNING = WARNING, returnCode = ret) } mstepEVI <- function(data, 
z, prior = NULL, warn = NULL, ...) { if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- (is.null(dimdat) || NCOL(data) == 1) if(oneD || length(dimdat) != 2) stop("data should be a matrix or a vector") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) z <- as.matrix(z) dimz <- dim(z) if(dimz[1] != n) stop("row dimension of z should equal data length") G <- dimz[2] if(all(is.na(z))) { WARNING <- "z is missing" if(warn) warning(WARNING) variance <- list(modelName = "EVI", d = p, G = G, scale = NA, shape = matrix(as.double(NA),p,G)) parameters <- list(pro=rep(NA,G), mean=matrix(as.double(NA),p,G), variance=variance) return(structure(list(modelName="EVI", prior=prior, n=n, d=p, G=G, z=z, parameters=parameters), WARNING = WARNING, returnCode = 9)) } if(any(is.na(z)) || any(z < 0) || any(z > 1)) stop("improper specification of z") if(is.null(prior)) { temp <- .Fortran("msevi", as.double(data), as.double(z), as.integer(n), as.integer(p), as.integer(G), double(p * G), double(1), double(p * G), double(G), PACKAGE = "mclust")[6:9] } else { priorParams <- do.call(prior$functionName, c(list(data = data, G = G, modelName = "EVI"), prior[names( prior) != "functionName"])) temp <- .Fortran("msevip", as.double(data), as.double(z), as.integer(n), as.integer(p), as.integer(G), as.double(priorParams$shrinkage), as.double(priorParams$mean), as.double(priorParams$scale), as.double(priorParams$dof), double(p * G), double(1), double(p * G), double(G), PACKAGE = "mclust")[10:13] } mu <- matrix(temp[[1]], p, G) scale <- temp[[2]] shape <- matrix(temp[[3]], p, G) dimnames(mu) <- dimnames(shape) <- list(NULL, as.character(1:G)) pro <- temp[[4]] WARNING <- NULL if(any(c(scale, shape) > signif(.Machine$double.xmax, 6)) || any(!c( scale, shape))) { WARNING <- "cannot compute M-step" if(warn) warning(WARNING) mu[] <- pro[] <- scale <- shape[] <- NA sigma <- array(NA, c(p, p, G)) ret <- -1 } else { sigma <- array(apply(scale * shape, 2, diag), c(p, p, G)) ret <- 0 } dimnames(z) <- list(dimnames(data)[[1]], NULL) dimnames(mu) <- list(dimnames(data)[[2]], NULL) dimnames(sigma) <- list(dimnames(data)[[2]], dimnames(data)[[2]], NULL) variance <- list(modelName = "EVI", d = p, G = G, sigma = sigma, scale = scale, shape = shape) parameters <- list(pro=pro, mean=mu, variance=variance) structure(list(modelName = "EVI", prior = prior, n = n, d = p, G = G, z = z, parameters = parameters), WARNING = WARNING, returnCode = ret) } simEVI <- function(parameters, n, seed = NULL, ...) { if(!is.null(seed)) set.seed(seed) mu <- as.matrix(parameters$mean) d <- nrow(mu) G <- ncol(mu) if(any(is.na(parameters[c("mean", "variance")])) || any(is.null(parameters[c("mean", "variance")]))) { warning("parameters are missing") return(structure(matrix(as.double(NA), n, d + 1), modelName = "EVI")) } pro <- parameters$pro if(is.null(pro)) pro <- rep(1/G, G) clabels <- sample(1:G, size = n, replace = TRUE, prob = pro) ctabel <- tabulate(clabels, nbins=G) x <- matrix(0, n, d) shape <- as.matrix(parameters$variance$shape) if(!all(dim(shape) == dim(mean))) stop("shape incompatible with mean") sss <- sqrt(parameters$variance$scale * shape) for(k in 1:G) { m <- ctabel[k] x[clabels == k, ] <- sweep(matrix(rnorm(m * d), nrow = m, ncol = d) %*% diag(sss[, k]), MARGIN = 2, STATS = mu[, k], FUN = "+") } dimnames(x) <- list(NULL, paste0("x", 1:d)) structure(cbind(group = clabels, x), modelName = "EVI") } # old version: LS 20150317 sigma2decomp <- function(sigma, G = NULL, tol = sqrt(.Machine$double.eps), ...) 
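## sigma2decomp: decompose each covariance matrix as
##   Sigma_k = scale_k * O_k %*% diag(shape_k) %*% t(O_k),
## where scale_k is the geometric mean of the eigenvalues of Sigma_k.  This
## first ("old") definition is kept for reference and is overwritten by the
## redefinition further below.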
{ dimSigma <- dim(sigma) if(is.null(dimSigma)) stop("sigma improperly specified") d <- dimSigma[1] if(dimSigma[2] != d) stop("sigma improperly specified") l <- length(dimSigma) if(l < 2 || l > 3) stop("sigma improperly specified") if(is.null(G)) { if(l == 2) { G <- 1 sigma <- array(sigma, c(dimSigma, 1)) } else { G <- dimSigma[3] } } else { if(l == 3 && G != dimSigma[3]) stop("sigma and G are incompatible") if(l == 2 && G != 1) sigma <- array(sigma, c(d,d,G)) } # angle between subspaces subspace <- function(A, B) { for(k in 1:ncol(A)) { B <- B - A[,k,drop=FALSE] %*% (t(A[,k,drop=FALSE]) %*% B) } norm(B, type = "2") } # check equality of values uniq <- function(x) { abs(max(x) - min(x)) < tol } decomp <- list(d = d, G = G, scale = rep(0, G), shape = matrix(0, d, G), orientation = array(0, c(d, d, G))) for(k in 1:G) { ev <- eigen(sigma[,,k], symmetric = TRUE) temp <- log(ev$values); temp[!is.finite(temp)] <- 0 logScale <- sum(temp)/d decomp$scale[k] <- exp(logScale) decomp$shape[,k] <- exp(temp - logScale) decomp$orientation[,,k] <- ev$vectors } scaleName <- "V" shapeName <- "V" orientName <- "V" # check scale/volume if(uniq(decomp$scale)) { decomp$scale <- decomp$scale[1] scaleName <- "E" } # check shape if(all(apply(decomp$shape, 1, uniq))) { decomp$shape <- decomp$shape[, 1] if(all(uniq(decomp$shape))) { shapeName <- "I" decomp$shape <- rep(1, d) } else { shapeName <- "E" } } # check orientation eqOrientation <- { if(d == 2) all(apply(matrix(decomp$orientation, nrow = d * d, ncol = G), 1, uniq)) else all(apply(decomp$orientation[,,-1,drop=FALSE], 3, function(o) subspace(decomp$orientation[,,1],o)) < tol) } if(eqOrientation) { decomp$orientation <- decomp$orientation[,,1] if(all(apply(cbind(decomp$orientation, diag(d)), 1, uniq))) { orientName <- "I" decomp$orientation <- NULL } else { orientName <- "E" } } decomp$modelName <- paste0(scaleName, shapeName, orientName) decomp$sigma <- sigma orderedNames <- c("sigma", "d", "modelName", "G", "scale", "shape", "orientation") return(decomp[orderedNames]) } sigma2decomp <- function(sigma, G = NULL, tol = sqrt(.Machine$double.eps), ...) 
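## Current sigma2decomp: as above, but it also identifies the most
## parsimonious model name consistent with the decomposition.
## Illustrative sketch (not run):
#  S <- array(c(diag(2), 2 * diag(2)), dim = c(2, 2, 2))
#  sigma2decomp(S)$modelName   # should give "VII": spherical, varying volume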
{ dimSigma <- dim(sigma) if(is.null(dimSigma)) stop("sigma improperly specified") d <- dimSigma[1] if(dimSigma[2] != d) stop("sigma improperly specified") l <- length(dimSigma) if(l < 2 || l > 3) stop("sigma improperly specified") if(is.null(G)) { if(l == 2) { G <- 1 sigma <- array(sigma, c(dimSigma, 1)) } else { G <- dimSigma[3] } } else { if(l == 3 && G != dimSigma[3]) stop("sigma and G are incompatible") if(l == 2 && G != 1) sigma <- array(sigma, c(d,d,G)) } # angle between subspaces subspace <- function(A, B) { for(k in 1:ncol(A)) { B <- B - A[,k,drop=FALSE] %*% (t(A[,k,drop=FALSE]) %*% B) } norm(B, type = "2") } # check equality of values uniq <- function(x) { abs(max(x) - min(x)) < tol } decomp <- list(d = d, G = G, scale = rep(0, G), shape = matrix(0, d, G), orientation = array(0, c(d, d, G))) for(k in 1:G) { ev <- eigen(sigma[,,k], symmetric = TRUE) temp <- log(ev$values); temp[!is.finite(temp)] <- 0 logScale <- sum(temp)/d decomp$scale[k] <- exp(logScale) decomp$shape[,k] <- exp(temp - logScale) decomp$orientation[,,k] <- ev$vectors } scaleName <- "V" shapeName <- "V" orientName <- "V" # check scale/volume if(uniq(decomp$scale)) { decomp$scale <- decomp$scale[1] scaleName <- "E" } # check shape if(all(apply(decomp$shape, 1, uniq))) { decomp$shape <- decomp$shape[, 1] if(all(uniq(decomp$shape))) { shapeName <- "I" decomp$shape <- rep(1, d) } else { shapeName <- "E" } } # check orientation D <- decomp$orientation eqOrientation <- all(apply(D, 3, function(d) any(apply(d, 2, function(x) cor(D[,,1], x)^2) > (1-tol)))) if(eqOrientation) { decomp$orientation <- decomp$orientation[,,1] orientName <- "E" if(sum(abs(svd(decomp$orientation)$v) - diag(d)) < tol) { orientName <- "I" # decomp$orientation <- NULL } } decomp$modelName <- paste0(scaleName, shapeName, orientName) decomp$sigma <- sigma orderedNames <- c("sigma", "d", "modelName", "G", "scale", "shape", "orientation") return(decomp[orderedNames]) } decomp2sigma <- function(d, G, scale, shape, orientation = NULL, ...) 
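## decomp2sigma: inverse of sigma2decomp(), rebuilding the covariance array
## from scale, shape and (optionally) orientation.  Illustrative sketch (not run):
#  S <- decomp2sigma(d = 2, G = 3, scale = c(1, 2, 3), shape = c(1, 4))
#  attr(S, "modelName")   # "VEI": varying volume, equal shape, axis-aligned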
{ nod <- missing(d) noG <- missing(G) lenScale <- length(scale) if(lenScale != 1) { if(!noG && G != lenScale) stop("scale incompatibile with G") G <- lenScale } shape <- as.matrix(shape) p <- nrow(shape) if(!nod && p != d) stop("shape incompatible with d") d <- p g <- ncol(shape) if(g != 1) { if(!is.null(G) && g != G) stop("shape incompatible with scale") if(!noG && g != G) stop("shape incompatible with G") G <- g } if(is.null(orientation)) { orientName <- "I" if(is.null(G)) { G <- if(noG) 1 else G } orientation <- array(diag(d), c(d, d, G)) } else { dimO <- dim(orientation) l <- length(dimO) if(is.null(dimO) || l < 2 || l > 3 || dimO[1] != dimO[2]) stop("orientation improperly specified") if(dimO[1] != d) stop("orientation incompatible with shape") if(l == 3) { orientName <- "V" if(is.null(G)) { if(!noG && dimO[3] != G) stop("orientation incompatible with G") G <- dimO[3] } else if(G != dimO[3]) stop("orientation incompatible with scale and/or shape" ) } else { orientName <- "E" if(is.null(G)) { G <- if(noG) 1 else G } orientation <- array(orientation, c(d, d, G)) } } if(G == 1) { scaleName <- shapeName <- "X" } else { scaleName <- if(lenScale == 1) "E" else "V" shapeName <- if(g == 1) "E" else "V" scale <- rep(scale, G) shape <- matrix(shape, nrow = d, ncol = G) } sigma <- array(0, c(d, d, G)) for(k in 1:G) { sigma[,,k] <- crossprod(t(orientation[,,k]) * sqrt(scale[k] * shape[,k])) } structure(sigma, modelName = paste0(scaleName, shapeName, orientName)) } grid1 <- function (n, range = c(0, 1), edge = TRUE) { if (any(n < 0 | round(n) != n)) stop("n must be nonpositive and integer") G <- rep(0, n) if (edge) { G <- seq(from = min(range), to = max(range), by = abs(diff(range))/(n - 1)) } else { lj <- abs(diff(range)) incr <- lj/(2 * n) G <- seq(from = min(range) + incr, to = max(range) - incr, by = 2 * incr) } G } grid2 <- function (x, y) { lx <- length(x) ly <- length(y) xy <- matrix(0, nrow = lx * ly, ncol = 2) l <- 0 for (j in 1:ly) { for (i in 1:lx) { l <- l + 1 xy[l,] <- c(x[i], y[j]) } } xy } hypvol <- function (data, reciprocal = FALSE) { dimdat <- dim(data) oneD <- ((is.null(dimdat) || NCOL(data) == 1)) if (oneD) { n <- length(as.vector(data)) ans <- if (reciprocal) 1/diff(range(data)) else diff(range(data)) return(ans) } if (length(dimdat) != 2) stop("data must be a vector or a matrix") data <- as.matrix(data) sumlogdifcol <- function(x) sum(log(apply(x, 2, function(colm) diff(range(colm))))) bdvolog <- sumlogdifcol(data) pcvolog <- sumlogdifcol(princomp(data)$scores) volog <- min(bdvolog, pcvolog) if(reciprocal) { minlog <- log(.Machine$double.xmin) if (-volog < minlog) { warning("hypervolume smaller than smallest machine representable positive number") ans <- 0 } else ans <- exp(-volog) } else { maxlog <- log(.Machine$double.xmax) if (volog > maxlog) { warning("hypervolume greater than largest machine representable number") ans <- Inf } else ans <- exp(volog) } return(ans) } "[.mclustBIC" <- function (x, i, j, drop = FALSE) { ATTR <- attributes(x)[c("G", "modelNames", "prior", "control", "initialization", "Vinv", "warn", "n", "d", "oneD", "returnCodes", "class")] oldClass(x) <- NULL x <- NextMethod("[") if (is.null(dim(x))) return(x) ATTR$G <- as.numeric(dimnames(x)[[1]]) ATTR$modelNames <- dimnames(x)[[2]] ATTR$returnCodes <- ATTR$returnCodes[dimnames(x)[[1]],dimnames(x)[[2]], drop=FALSE] do.call("structure", c(list(.Data = x), ATTR)) } bic <- function(modelName, loglik, n, d, G, noise = FALSE, equalPro = FALSE, ...) 
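## bic: Bayesian Information Criterion, 2 * loglik - npar * log(n), with the
## number of free parameters taken from nMclustParams().  Illustrative
## sketch, assuming the iris data (not run):
#  X  <- as.matrix(iris[, 1:4])
#  ms <- mstep("VVV", data = X, z = unmap(iris$Species))
#  es <- estep("VVV", data = X, parameters = ms$parameters)
#  bic("VVV", loglik = es$loglik, n = nrow(X), d = ncol(X), G = 3)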
{ nparams <- nMclustParams(modelName = modelName, d = d, G = G, noise = noise, equalPro = equalPro) 2 * loglik - nparams * log(n) } checkModelName <- function(modelName) { switch(EXPR = modelName, "X" = , "E" = , "V" = , "XII" = , "XXI" = , "XXX" = , "EII" = , "VII" = , "EEI" = , "VEI" = , "EVI" = , "VVI" = , "EEE" = , "EVE" = , "VEE" = , "VVE" = , "EEV" = , "VEV" = , "EVV" = , "VVV" = TRUE, stop("invalid model name")) } em <- function(modelName, data, parameters, prior = NULL, control = emControl(), warn = NULL, ...) { checkModelName(modelName) funcName <- paste("em", modelName, sep = "") mc <- match.call(expand.dots = TRUE) mc[[1]] <- as.name(funcName) mc$modelName <- NULL eval(mc, parent.frame()) } estep <- function(modelName, data, parameters, warn = NULL, ...) { checkModelName(modelName) funcName <- paste("estep", modelName, sep = "") mc <- match.call(expand.dots = TRUE) mc[[1]] <- as.name(funcName) mc$modelName <- NULL eval(mc, parent.frame()) } mclustVariance <- function(modelName, d=NULL, G=2) { x <- -1 if (nchar(modelName) == 1) { if (!is.null(d) && d != 1) stop("modelName and d are incompatible") varList <- switch(EXPR = modelName, "X" = list(sigmasq = x), "E" = list(sigmasq = x), "V" = list(sigmasq = rep(x,G)), stop("modelName not recognized")) } else { if (nchar(modelName) != 3) stop("modelName is misspecified") if (is.null(d)) d <- 3 varList <- switch(EXPR = modelName, "XII" = list(sigmasq = x), "EII" = list(sigmasq = x, scale = x, shape = rep(x,d)), "VII" = list(sigmasq = rep(x,G), scale = rep(x,G), shape = rep(x,d)), "XXI" = list(scale = x, shape = rep(x,d)), "EEI" = list(scale = x, shape = rep(x,d)), "EVI" = list(scale = x, shape = matrix(x,d,G)), "VEI" = list(scale = rep(x,G), shape = rep(x,d)), "VVI" = list(scale = rep(x,G), shape = matrix(x,d,G)), "XXX" = { M <- matrix(x,d,d); M[row(M) > col(M)] <- 0; list(cholSigma = M) }, "EEE" = { M <- matrix(x,d,d); M[row(M) > col(M)] <- 0; list(cholSigma = M) }, "VEE" = list(scale = rep(x,G), shape = rep(x,d), orientation = matrix(x,d,d)), "VVE" = list(scale = rep(x,G), shape = matrix(x,d,G), orientation = matrix(x,d,d)), "EVV" = list(scale = x, shape = matrix(x,d,G), orientation = array(x,c(d,d,G))), "EVE" = list(scale = x, shape = matrix(x,d,G), orientation = matrix(x,d,d)), "EEV" = list(scale = x, shape = rep(x,d), orientation = array(x,c(d,d,G))), "VEV" = list(scale = x, shape = matrix(x,d,G), orientation = array(x,c(d,d,G))), "VVV" = { A <- array(x,c(d,d,G)); I <- row(A[,,1]) > col(A[,,1]) for (k in 1:G) A[,,k][I] <- 0 list(cholsigma = A)}, stop("modelName not recognized")) } c(modelName = modelName, d = d, G = G, varList) } me <- function(modelName, data, z, prior = NULL, control = emControl(), Vinv = NULL, warn = NULL, ...) { checkModelName(modelName) funcName <- paste("me", modelName, sep = "") mc <- match.call(expand.dots = TRUE) mc[[1]] <- as.name(funcName) mc$modelName <- NULL eval(mc, parent.frame()) } mstep <- function(modelName, data, z, prior = NULL, warn = NULL, ...) { checkModelName(modelName) funcName <- paste("mstep", modelName, sep = "") mc <- match.call(expand.dots = TRUE) mc[[1]] <- as.name(funcName) mc$modelName <- NULL eval(mc, parent.frame()) } mvn <- function(modelName, data, prior = NULL, warn = NULL, ...) 
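## mvn: fit a single multivariate (or univariate) normal, i.e. G = 1, using
## the matching one-component model "X", "XII", "XXI" or "XXX".
## Illustrative sketch, assuming the iris data (not run):
#  one <- mvn("XXX", data = as.matrix(iris[, 1:4]))
#  one$parameters$mean; one$parameters$variance$Sigma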
{ modelName <- switch(EXPR = modelName, "E" = "X", "V" = "X", "X" = "X", "Spherical" = "XII", "EII" = "XII", "VII" = "XII", "XII" = "XII", "Diagonal" = "XXI", "EEI" = "XXI", "VEI" = "XXI", "EVI" = "XXI", "VVI" = "XXI", "XXI" = "XXI", "Ellipsoidal" = "XXX", "EEE" = "XXX", "VEE" = "XXX", "EVE" = "XXX", "EVV" = "XXX", "VVE" = "XXX", "EEV" = "XXX", "VEV" = "XXX", "VVV" = "XXX", "XXX" = "XXX", stop("invalid model name")) funcName <- paste("mvn", modelName, sep = "") mc <- match.call() mc[[1]] <- as.name(funcName) mc[[2]] <- NULL out <- eval(mc, parent.frame()) varnames <- colnames(as.matrix(data)) if(!all(is.null(varnames))) { rownames(out$parameters$mean) <- varnames dimnames(out$parameters$variance$Sigma) <- list(varnames, varnames) dimnames(out$parameters$variance$sigma) <- list(varnames, varnames, NULL) } return(out) } nVarParams <- function(modelName, d, G, ...) { modelName <- switch(EXPR = modelName, X = "E", XII = "EII", XXI = "EEI", XXX = "EEE", modelName) # checkModelName(modelName) switch(EXPR = modelName, "E" = 1, "V" = G, "EII" = 1, "VII" = G, "EEI" = d, "VEI" = G + (d-1), "EVI" = 1 + G * (d-1), "VVI" = G * d, "EEE" = d*(d+1)/2, "EVE" = 1 + G*(d-1) + d*(d-1)/2, "VEE" = G + (d-1) + d*(d-1)/2, "VVE" = G + G * (d-1) + d*(d-1)/2, "EEV" = 1 + (d-1) + G * d*(d-1)/2, "VEV" = G + (d-1) + G * d*(d-1)/2, "EVV" = 1 - G + G * d*(d+1)/2, "VVV" = G * d*(d+1)/2, stop("invalid model name")) } nMclustParams <- function(modelName, d, G, noise = FALSE, equalPro = FALSE, ...) { modelName <- switch(EXPR = modelName, X = "E", XII = "EII", XXI = "EEI", XXX = "EEE", modelName) checkModelName(modelName) if(G == 0) { ## one noise cluster case if(!noise) stop("undefined model") nparams <- 1 } else { nparams <- nVarParams(modelName, d = d, G = G) + G*d if(!equalPro) nparams <- nparams + (G - 1) if(noise) nparams <- nparams + 2 } return(nparams) } sim <- function(modelName, parameters, n, seed = NULL, ...) { modelName <- switch(EXPR = modelName, X = "E", XII = "EII", XXI = "EEI", XXX = "EEE", modelName) checkModelName(modelName) funcName <- paste("sim", modelName, sep = "") mc <- match.call(expand.dots = TRUE) mc[[1]] <- as.name(funcName) mc$modelName <- NULL eval(mc, parent.frame()) } cdensVEI <- function(data, logarithm = FALSE, parameters, warn = NULL, ...) 
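## cdensVEI: per-component densities for the diagonal model "VEI" (varying
## volume, equal shape).  Illustrative sketch, assuming the faithful data
## and kmeans() from stats (not run):
#  X  <- as.matrix(faithful)
#  ms <- mstepVEI(X, z = unmap(kmeans(X, 2)$cluster))
#  head(cdensVEI(X, parameters = ms$parameters))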
{ if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) if(is.null(dimdat) || length(dimdat) != 2) stop("data must be a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) mu <- as.matrix(parameters$mean) G <- ncol(mu) if(any(is.na(unlist(parameters[c("pro", "mean", "variance")]))) || any(is.null(parameters[c("pro", "mean", "variance")]))) { WARNING <- "parameters are missing" if(warn) warning(WARNING) z <- matrix(as.double(NA),n,G) dimnames(z) <- list(dimnames(data)[[1]], NULL) return(structure(z, logarithm = logarithm, modelName = "VEI", WARNING = WARNING, returnCode = 9)) } if (is.null(parameters$variance$scale) || is.null(parameters$variance$shape)) stop("variance parameters are missing") temp <- .Fortran("esvei", as.double(data), as.double(mu), as.double(parameters$variance$scale), as.double(parameters$variance$shape), as.double(-1), as.integer(n), as.integer(p), as.integer(G), as.double(-1), double(1), double(n * G), PACKAGE = "mclust")[10:11] loglik <- temp[[1]] z <- matrix(temp[[2]], n, G) WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "cannot compute E-step" if(warn) warning(WARNING) z[] <- NA ret <- -1 } else { if (!logarithm) z <- exp(z) ret <- 0 } dimnames(z) <- list(dimnames(data)[[1]],dimnames(mu)[[2]]) structure(z, logarithm = logarithm, modelName = "VEI", WARNING = WARNING, returnCode = ret) } emVEI <- function(data, parameters, prior = NULL, control = emControl(), warn = NULL, ...) { z <- estepVEI(data, parameters = parameters, warn = warn)$z meVEI(data, z = z, prior = prior, control = control, Vinv = parameters$Vinv, warn = warn) } estepVEI <- function(data, parameters, warn = NULL, ...) { if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) if(is.null(dimdat) || length(dimdat) != 2) stop("data must be a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) pro <- parameters$pro pro <- pro/sum(pro) l <- length(pro) mu <- as.matrix(parameters$mean) G <- ncol(mu) noise <- (l == G + 1) if(!noise) { if(l != G) stop("pro improperly specified") K <- G Vinv <- NULL } else { K <- G + 1 Vinv <- parameters$Vinv if(is.null(Vinv) || Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE) } if(any(is.na(unlist(parameters[c("pro", "mean", "variance")]))) || any(is.null(parameters[c("pro", "mean", "variance")]))) { WARNING <- "parameters are missing" if(warn) warning(WARNING) z <- matrix(as.double(NA),n,K) dimnames(z) <- list(dimnames(data)[[1]], NULL) return(structure(list(modelName = "VEI", n=n, d=p, G=G, z=z, parameters=parameters, loglik=NA), WARNING = WARNING, returnCode = 9)) } if (is.null(parameters$variance$scale) || is.null(parameters$variance$shape)) stop("variance parameters are missing") temp <- .Fortran("esvei", as.double(data), as.double(mu), as.double(parameters$variance$scale), as.double(parameters$variance$shape), as.double(pro), as.integer(n), as.integer(p), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), double(1), double(n * K), PACKAGE = "mclust")[10:11] loglik <- temp[[1]] z <- matrix(temp[[2]], n, K) WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "singular covariance" if(warn) warning(WARNING) z[] <- loglik <- NA ret <- -1 } else ret <- 0 dimnames(z) <- list(dimnames(data)[[1]],NULL) structure(list(modelName = "VEI", n = n, d = p, G = G, z = z, parameters = parameters, loglik = loglik), WARNING = WARNING, returnCode = ret) } meVEI <- function(data, z, prior = NULL, control = emControl(), Vinv = NULL, warn = NULL, ...) 
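## meVEI: full EM for the "VEI" model.  The shape estimate requires an inner
## iterative loop, so the second elements of control$itmax and control$tol
## govern those inner iterations; their counts are attached to the "info"
## attribute of the fit.  Illustrative sketch (not run):
#  X   <- as.matrix(faithful)
#  fit <- meVEI(X, z = unmap(kmeans(X, 2)$cluster))
#  attr(fit, "info")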
{ if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- (is.null(dimdat) || NCOL(data) == 1) if(oneD || length(dimdat) > 2) stop("data should in the form of a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) z <- as.matrix(z) dimz <- dim(z) if(dimz[1] != n) stop("data and z should have the same row dimension") K <- dimz[2] if (!is.null(Vinv)) { G <- K - 1 if(Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE) } else G <- K if(all(is.na(z))) { WARNING <- "z is missing" if(warn) warning(WARNING) variance <- list(modelName = "VEI", d = p, G = G, scale = rep(NA,G), shape = rep(NA,p)) parameters <- list(pro=rep(NA,G), mean=matrix(as.double(NA),p,G), variance=variance, Vinv=Vinv) return(structure(list(modelName="VEI", prior=prior, n=n, d=p, G=G, z=z, parameters=parameters, control=control, loglik=NA), WARNING = WARNING, returnCode = 9)) } if(any(is.na(z)) || any(z < 0) || any(z > 1)) stop("improper specification of z") storage.mode(z) <- "double" if(is.null(prior)) { temp <- .Fortran("mevei", as.logical(control$equalPro), as.double(data), as.integer(n), as.integer(p), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), z, as.integer(control$itmax), as.double(control$tol), as.double(control$eps), double(p * G), double(G), double(p), double(K), double(G), double(p), double(p * G), PACKAGE = "mclust")[7:14] } else { priorParams <- do.call(prior$functionName, c(list(data = data, G = G, modelName = "VEI"), prior[names(prior) != "functionName"])) temp <- .Fortran("meveip", as.logical(control$equalPro), as.double(data), as.integer(n), as.integer(p), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), as.double(priorParams$shrinkage), as.double(priorParams$mean), as.double(priorParams$scale), as.double(priorParams$dof), z, as.integer(control$itmax), as.double(control$tol), as.double(control$eps), double(p * G), double(G), double(p), double(K), double(G), double(p), double(p * G), PACKAGE = "mclust")[11:18] } z <- temp[[1]] its <- temp[[2]][1] inner <- temp[[2]][2] err <- temp[[3]][1] inerr <- temp[[3]][2] loglik <- temp[[4]] mu <- matrix(temp[[5]], p, G) scale <- temp[[6]] shape <- temp[[7]] dimnames(mu) <- list(NULL, as.character(1:G)) pro <- temp[[8]] WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "singular covariance" if(warn) warning(WARNING) sigma <- array(NA, c(p, p, G)) mu[] <- pro[] <- z[] <- loglik <- shape[] <- NA ret <- -1 } else if(loglik < - signif(.Machine$double.xmax, 6)) { if(control$equalPro) { WARNING <- "z column sum fell below threshold" if(warn) warning(WARNING) } else { WARNING <- "mixing proportion fell below threshold" if(warn) warning(WARNING) } sigma <- array(NA, c(p, p, G)) mu[] <- pro[] <- z[] <- loglik <- shape[] <- NA ret <- if(control$equalPro) -2 else -3 } else { sigma <- array(0, c(p, p, G)) for(k in 1:G) sigma[, , k] <- diag(scale[k] * shape) if(inner >= control$itmax[2]) { WARNING <- "inner iteration limit reached" if(warn) warning(WARNING) inner <- - inner ret <- 2 } else if(its >= control$itmax[1]) { WARNING <- "iteration limit reached" if(warn) warning(WARNING) its <- - its ret <- 1 } else ret <- 0 } info <- c(iterations = its, error = err) attr(info, "inner") <- c(iterations = inner, error = inerr) dimnames(z) <- list(dimnames(data)[[1]], NULL) dimnames(mu) <- list(dimnames(data)[[2]], NULL) dimnames(sigma) <- list(dimnames(data)[[2]], dimnames(data)[[2]], NULL) variance <- list(modelName = "VEI", d = p, G = G, sigma = sigma, scale = scale, shape = shape) parameters <- list(pro=pro, 
mean=mu, variance=variance, Vinv=Vinv) structure(list(modelName = "VEI", prior = prior, n = n, d = p, G = G, z = z, parameters = parameters, control = control, loglik = loglik), info = info, WARNING = WARNING, returnCode = ret) } mstepVEI <- function(data, z, prior = NULL, warn = NULL, control = NULL,...) { if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- (is.null(dimdat) || NCOL(data) == 1) if(oneD || length(dimdat) != 2) stop("data should be a matrix or a vector") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) z <- as.matrix(z) dimz <- dim(z) if(dimz[1] != n) stop("row dimension of z should equal data length") G <- dimz[2] if(all(is.na(z))) { WARNING <- "z is missing" if(warn) warning(WARNING) variance <- list(modelName = "VEI", d = p, G = G, scale = rep(NA,G), shape = rep(NA,p)) parameters <- list(pro=rep(NA,G), mean=matrix(as.double(NA),p,G), variance=variance) return(structure(list(modelName="VEI", prior=prior, n=n, d=p, G=G, z=z, parameters=parameters, control=control, loglik=NA), WARNING = WARNING, returnCode = 9)) } if(any(is.na(z)) || any(z < 0) || any(z > 1)) stop("improper specification of z") if (is.null(control)) control <- emControl() itmax <- if(length(control$itmax) == 1) control$itmax else control$ itmax[2] tol <- if(length(control$tol) == 1) control$tol else control$tol[2] if(is.null(prior)) { temp <- .Fortran("msvei", as.double(data), as.double(z), as.integer(n), as.integer(p), as.integer(G), as.integer(itmax), as.double(tol), double(p * G), double(G), double(p), double(G), double(G), double(p), double(p * G), PACKAGE = "mclust")[6:11] } else { priorParams <- do.call(prior$functionName, c(list(data = data, G = G, modelName = "VEI"), prior[names( prior) != "functionName"])) temp <- .Fortran("msveip", as.double(data), as.double(z), as.integer(n), as.integer(p), as.integer(G), as.double(priorParams$shrinkage), as.double(priorParams$mean), as.double(priorParams$scale), as.double(priorParams$dof), as.integer(itmax), as.double(tol), double(p * G), double(G), double(p), double(G), double(G), double(p), double(p * G), PACKAGE = "mclust")[10:15] } inner <- temp[[1]] inerr <- temp[[2]] mu <- matrix(temp[[3]], p, G) scale <- temp[[4]] shape <- temp[[5]] dimnames(mu) <- list(NULL, as.character(1:G)) pro <- temp[[6]] WARNING <- NULL if(any(c(scale, shape) > signif(.Machine$double.xmax, 6)) || any(! c(scale, shape))) { WARNING <- "cannot compute M-step" if(warn) warning(WARNING) mu[] <- pro[] <- shape <- scale[] <- NA sigma <- array(NA, c(p, p, G)) ret <- -1 } else { ret <- 0 sigma <- array(0, c(p, p, G)) for(k in 1:G) sigma[, , k] <- diag(scale[k] * shape) if(inner >= itmax) { WARNING <- "inner iteration limit reached" if(warn) warning(WARNING) inner <- - inner } } info <- c(iterations = inner, error = inerr) dimnames(z) <- list(dimnames(data)[[1]], NULL) dimnames(mu) <- list(dimnames(data)[[2]], NULL) dimnames(sigma) <- list(dimnames(data)[[2]], dimnames(data)[[2]], NULL) variance <- list(modelName = "VEI", d = p, G = G, sigma = sigma, scale = scale, shape = shape) parameters <- list(pro=pro, mean=mu, variance=variance) structure(list(modelName = "VEI", prior = prior, n = n, d = p, G = G, z = z, parameters = parameters, control = control), info = info, WARNING = WARNING, returnCode = ret) } simVEI <- function(parameters, n, seed = NULL, ...) 
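## simVEI: simulate n observations from a fitted "VEI" mixture; the first
## column of the returned matrix holds the component labels.  Illustrative
## sketch, reusing 'fit' from the meVEI() sketch above (not run):
#  xg <- simVEI(parameters = fit$parameters, n = 500, seed = 1)
#  table(xg[, "group"])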
{ if(!is.null(seed)) set.seed(seed) mu <- as.matrix(parameters$mean) d <- nrow(mu) G <- ncol(mu) if(any(is.na(parameters[c("mean", "variance")])) || any(is.null(parameters[c("mean", "variance")]))) { warning("parameters are missing") return(structure(matrix(as.double(NA), n, d + 1), modelName = "VEI")) } pro <- parameters$pro if(is.null(pro)) pro <- rep(1/G, G) clabels <- sample(1:G, size = n, replace = TRUE, prob = pro) ctabel <- tabulate(clabels, nbins=G) x <- matrix(0, n, d) rtshape <- sqrt(parameters$variance$shape) if(length(rtshape) != d) stop("shape incompatible with mean") rtscale <- sqrt(parameters$variance$scale) if(length(rtscale) != G) stop("scale incompatible with mean") for(k in 1:G) { m <- ctabel[k] x[clabels == k, ] <- sweep(matrix(rnorm(m * d), nrow = m, ncol = d) %*% diag(rtscale[k] * rtshape), MARGIN = 2, STATS = mu[, k], FUN = "+") } dimnames(x) <- list(NULL, paste0("x", 1:d)) structure(cbind(group = clabels, x), modelName = "VEI") } cdensV <- function(data, logarithm = FALSE, parameters, warn = NULL, ...) { if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- (is.null(dimdat) || NCOL(data) == 1) if(!oneD) stop("data must be one-dimensional") data <- drop(data) n <- length(data) mu <- drop(parameters$mean) G <- length(mu) if(any(is.na(unlist(parameters[c("pro", "mean", "variance")]))) || any(is.null(parameters[c("pro", "mean", "variance")]))) { WARNING <- "parameters are missing" if(warn) warning(WARNING) z <- matrix(as.double(NA),n,G) dimnames(z) <- list(names(data), NULL) return(structure(z, logarithm = logarithm, modelName = "V", WARNING = WARNING, returnCode = 9)) } sigmasq <- parameters$variance$sigmasq if(is.null(sigmasq)) stop("variance parameters are missing") if(any(sigmasq < 0)) stop("sigma-squared is negative") if(any(!sigmasq)) { WARNING <- "sigma-squared vanishes" if(warn) warning(WARNING) z <- matrix(as.double(NA),n,G) dimnames(z) <- list(names(data), NULL) return(structure(z, logarithm = logarithm, modelName = "V", WARNING = WARNING, returnCode = 9)) } if (length(sigmasq) == 1) sigmasq <- rep(sigmasq,G) temp <- .Fortran("es1v", as.double(data), as.double(mu), as.double(sigmasq), as.double(-1), as.integer(n), as.integer(G), as.double(-1), double(1), double(n * G), PACKAGE = "mclust")[8:9] loglik <- temp[[1]] z <- matrix(temp[[2]], n, G) WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "sigma-squared falls below threshold" if(warn) warning(WARNING) z[] <- NA ret <- -1 } else { if (!logarithm) z <- exp(z) ret <- 0 } dimnames(z) <- list(names(data),NULL) structure(z, logarithm = logarithm, modelName = "V", WARNING = WARNING, returnCode = ret) } emV <- function(data, parameters, prior = NULL, control = emControl(), warn = NULL, ...) { z <- estepV(data, parameters = parameters, warn = warn)$z meV(data, z = z, prior = prior, control = control, Vinv = parameters$Vinv, warn = warn) } estepV <- function(data, parameters, warn = NULL, ...) 
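## estepV: E-step for the one-dimensional, unequal-variance model "V",
## returning the conditional probabilities z and the log-likelihood.
## Illustrative sketch, assuming base R 'precip' (not run):
#  ms <- mstepV(precip, z = unmap(kmeans(precip, 2)$cluster))
#  estepV(precip, parameters = ms$parameters)$loglik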
{ if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- (is.null(dimdat) || NCOL(data) == 1) if(!oneD) stop("data must be one-dimensional") data <- drop(data) n <- length(data) pro <- parameters$pro pro <- pro/sum(pro) l <- length(pro) mu <- drop(parameters$mean) G <- length(mu) noise <- (l == G + 1) if(!noise) { if(l != G) stop("pro improperly specified") K <- G Vinv <- NULL } else { K <- G + 1 Vinv <- parameters$Vinv if(is.null(Vinv) || Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE) } if(any(is.na(unlist(parameters[c("pro", "mean", "variance")]))) || any(is.null(parameters[c("pro", "mean", "variance")]))) { WARNING <- "parameters are missing" if(warn) warning(WARNING) z <- matrix(as.double(NA),n,K) dimnames(z) <- list(names(data), NULL) return(structure(list(modelName = "V", n=n, d=1, G=G, z=z, parameters=parameters, loglik=NA), WARNING = WARNING, returnCode = 9)) } sigmasq <- parameters$variance$sigmasq if(is.null(sigmasq)) stop("variance parameters are missing") if(any(sigmasq < 0)) stop("sigma-squared is negative") if(any(!sigmasq)) { WARNING <- "sigma-squared vanishes" if(warn) warning(WARNING) z <- matrix(as.double(NA),n,K) dimnames(z) <- list(names(data), NULL) return(structure(list(modelName = "V", n=n, d=1, G=G, z=z, parameters=parameters, loglik=NA), WARNING = WARNING, returnCode = -1)) } temp <- .Fortran("es1v", as.double(data), as.double(mu), as.double(sigmasq), as.double(pro), as.integer(n), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), double(1), double(n * K), PACKAGE = "mclust")[8:9] loglik <- temp[[1]] z <- matrix(temp[[2]], n, K) WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "cannot compute E-step" if(warn) warning(WARNING) z[] <- loglik <- NA ret <- -1 } else ret <- 0 dimnames(z) <- list(names(data),NULL) structure(list(modelName = "V", n = n, d = 1, G = G, z = z, parameters = parameters, loglik = loglik), WARNING = WARNING, returnCode = ret) } cdensVEV <- function(data, logarithm = FALSE, parameters, warn = NULL, ...) 
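## cdensVEV: per-component densities for the ellipsoidal model "VEV"
## (varying volume, equal shape, varying orientation).  Illustrative sketch,
## assuming the iris data (not run):
#  X  <- as.matrix(iris[, 1:4])
#  ms <- mstepVEV(X, z = unmap(iris$Species))
#  head(cdensVEV(X, parameters = ms$parameters))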
{ dimdat <- dim(data) if(is.null(dimdat) || length(dimdat) != 2) stop("data must be a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) mu <- as.matrix(parameters$mean) G <- ncol(mu) if(any(is.na(unlist(parameters[c("pro", "mean", "variance")]))) || any(is.null(parameters[c("pro", "mean", "variance")]))) { WARNING <- "parameters are missing" if(warn) warning(WARNING) z <- matrix(as.double(NA),n,G) dimnames(z) <- list(dimnames(data)[[1]], NULL) return(structure(z, logarithm = logarithm, modelName = "VEV", WARNING = WARNING, returnCode = 9)) } if (is.null(parameters$variance$scale) || is.null(parameters$variance$shape) || is.null(parameters$variance$orientation)) stop("variance parameters are missing") temp <- .Fortran("esvev", as.double(data), as.double(mu), as.double(parameters$variance$scale), as.double(parameters$variance$shape), as.double(aperm(parameters$variance$orientation,c(2,1,3))), as.double(-1), as.integer(n), as.integer(p), as.integer(G), as.double(-1), double(p), double(p), double(1), double(n * G), PACKAGE = "mclust")[13:14] loglik <- temp[[1]] z <- matrix(temp[[2]], n, G) WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "cannot compute E-step" if(warn) warning(WARNING) z[] <- NA ret <- -1 } else { if (!logarithm) z <- exp(z) ret <- 0 } dimnames(z) <- list(dimnames(data)[[1]],NULL) structure(z, logarithm = logarithm, modelName = "VEV", WARNING = WARNING, returnCode = ret) } emVEV <- function(data, parameters, prior = NULL, control = emControl(), warn = NULL, ...) { z <- estepVEV(data, parameters = parameters, warn = warn)$z meVEV(data, z = z, prior = prior, control = control, Vinv = parameters$Vinv, warn = warn) } estepVEV <- function(data, parameters, warn = NULL, ...) { if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) if(is.null(dimdat) || length(dimdat) != 2) stop("data must be a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) pro <- parameters$pro pro <- pro/sum(pro) l <- length(pro) mu <- as.matrix(parameters$mean) G <- ncol(mu) noise <- (l == G + 1) if(!noise) { if(l != G) stop("pro improperly specified") K <- G Vinv <- NULL } else { K <- G + 1 Vinv <- parameters$Vinv if(is.null(Vinv) || Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE) } if(any(is.na(unlist(parameters[c("pro", "mean", "variance")]))) || any(is.null(parameters[c("pro", "mean", "variance")]))) { WARNING <- "parameters are missing" if(warn) warning(WARNING) z <- matrix(as.double(NA),n,K) dimnames(z) <- list(dimnames(data)[[1]], NULL) return(structure(list(modelName = "VEV", n=n, d=p, G=G, z=z, parameters=parameters, loglik=NA), WARNING = WARNING, returnCode = 9)) } if (is.null(parameters$variance$scale) || is.null(parameters$variance$shape) || is.null(parameters$variance$orientation)) stop("variance parameters are missing") temp <- .Fortran("esvev", as.double(data), as.double(mu), as.double(parameters$variance$scale), as.double(parameters$variance$shape), as.double(aperm(parameters$variance$orientation,c(2,1,3))), as.double(pro), as.integer(n), as.integer(p), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), double(p), double(p), double(1), double(n * K), PACKAGE = "mclust")[13:14] loglik <- temp[[1]] z <- matrix(temp[[2]], n, K) WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "cannot compute E-step" if(warn) warning(WARNING) z[] <- loglik <- NA ret <- -1 } else ret <- 0 dimnames(z) <- list(dimnames(data)[[1]],NULL) structure(list(modelName = "VEV", n = n, d = p, G = G, z = z, parameters = 
parameters, loglik = loglik), WARNING = WARNING, returnCode = ret) } meVEV <- function(data, z, prior = NULL, control = emControl(), Vinv = NULL, warn = NULL, ...) { if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- (is.null(dimdat) || NCOL(data) == 1) if(oneD || length(dimdat) != 2) stop("data should in the form of a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) z <- as.matrix(z) dimz <- dim(z) if(dimz[1] != n) stop("data and z should have the same row dimension") K <- dimz[2] if (!is.null(Vinv)) { G <- K - 1 if(Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE) } else G <- K if(all(is.na(z))) { WARNING <- "z is missing" if(warn) warning(WARNING) variance <- list(modelName = "VEV", d = p, G = G, scale=rep(NA,G), shape=rep(NA,p), orientation=array(NA,c(p,p,G))) parameters <- list(pro=rep(NA,G), mean=matrix(as.double(NA),p,G), variance=variance) return(structure(list(modelName="VEV", prior=prior, n=n, d=p, G=G, z=z, parameters=parameters, control=control, loglik=NA), WARNING = WARNING, returnCode = 9)) } if(any(is.na(z)) || any(z < 0) || any(z > 1)) stop("improper specification of z") lwork <- max(3 * min(n, p) + max(n, p), 5 * min(n, p), p + G) storage.mode(z) <- "double" if(is.null(prior)) { temp <- .Fortran("mevev", as.logical(control$equalPro), as.double(data), as.integer(n), as.integer(p), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), z, as.integer(control$itmax), as.double(control$tol), as.double(control$eps), as.integer(lwork), double(p * G), double(G), double(p), double(p * p * G), double(K), double(lwork), double(p), PACKAGE = "mclust")[7:16] } else { priorParams <- do.call(prior$functionName, c(list(data = data, G = G, modelName = "VEV"), prior[names(prior) != "functionName"])) temp <- .Fortran("mevevp", as.logical(control$equalPro), as.double(data), as.integer(n), as.integer(p), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), as.double(priorParams$shrinkage), as.double(priorParams$mean), as.double(if(any(priorParams$scale != 0)) chol(priorParams$ scale) else priorParams$scale), as.double(priorParams$dof), z, as.integer(control$itmax), as.double(control$tol), as.double(control$eps), as.integer(lwork), double(p * G), double(G), double(p), double(p * p * G), double(K), double(lwork), double(p), PACKAGE = "mclust")[11:20] } z <- temp[[1]] its <- temp[[2]][1] inner <- temp[[2]][2] err <- temp[[3]][1] inerr <- temp[[3]][2] loglik <- temp[[4]] lapackSVDinfo <- temp[[5]] mu <- matrix(temp[[6]], p, G) dimnames(mu) <- list(NULL, as.character(1:G)) scale <- temp[[7]] shape <- temp[[8]] O <- aperm( array(temp[[9]], c(p, p, G)), c(2,1,3)) pro <- temp[[10]] WARNING <- NULL if(lapackSVDinfo) { if(lapackSVDinfo > 0) { WARNING <- "LAPACK DGESVD fails to converge" } else { WARNING <- "input error for LAPACK DGESVD" } if(warn) warning(WARNING) O[] <- shape[] <- scale[] <- NA mu[] <- pro[] <- z[] <- loglik <- NA sigma <- array(NA, c(p, p, G)) ret <- -9 } else if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "singular covariance" if(warn) warning(WARNING) O[] <- shape[] <- scale[] <- NA mu[] <- pro[] <- z[] <- loglik <- NA sigma <- array(NA, c(p, p, G)) ret <- -1 } else if(loglik < - signif(.Machine$double.xmax, 6)) { if(control$equalPro) { WARNING <- "z column sum fell below threshold" if(warn) warning(WARNING) } else { WARNING <- "mixing proportion fell below threshold" if(warn) warning(WARNING) } mu[] <- pro[] <- z[] <- loglik <- NA sigma <- array(NA, c(p, p, G)) ret <- if(control$equalPro) -2 else -3 } else { sigma <- 
shapeO(shape, O, transpose = FALSE) sigma <- sweep(sigma, MARGIN = 3, STATS = scale, FUN = "*") if(inner >= control$itmax[2]) { WARNING <- "inner iteration limit reached" if(warn) warning(WARNING) inner <- - inner ret <- 2 } else if(its >= control$itmax[1]) { WARNING <- "iteration limit reached" if(warn) warning(WARNING) its <- - its ret <- 1 } else ret <- 0 } info <- structure(c(iterations = its, error = err), inner = c( iterations = inner, error = inerr)) dimnames(z) <- list(dimnames(data)[[1]],NULL) dimnames(mu) <- list(dimnames(data)[[2]], NULL) dimnames(sigma) <- dimnames(O) <- list(dimnames(data)[[2]], dimnames(data)[[2]], NULL) ## Sigma = scale * O %*% diag(shape) %*% t(O) variance <- list(modelName = "VEV", d = p, G = G, sigma = sigma, scale = scale, shape = shape, orientation = O) parameters <- list(pro=pro, mean=mu, variance=variance, Vinv=Vinv) structure(list(modelName = "VEV", prior = prior, n = n, d = p, G = G, z = z, parameters = parameters, control = control, loglik = loglik), info = info, WARNING = WARNING, returnCode = ret) } mstepVEV <- function(data, z, prior = NULL, warn = NULL, control = NULL, ...) { if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- (is.null(dimdat) || NCOL(data) == 1) if(oneD || length(dimdat) != 2) stop("data should be a matrix or a vector") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) ## z <- as.matrix(z) dimz <- dim(z) if(dimz[1] != n) stop("row dimension of z should equal data length") G <- dimz[2] if(all(is.na(z))) { WARNING <- "z is missing" if(warn) warning(WARNING) variance <- list(modelName = "VEV", d = p, G = G, scale = rep(NA,G), shape = rep(NA,p), orientation = array(NA,c(p,p,G))) parameters <- list(pro=rep(NA,G), mean=matrix(as.double(NA),p,G), variance=variance) return(structure(list(modelName="VEV", prior=prior, n=n, d=p, G=G, z=z, parameters=parameters, control=control, loglik=NA), WARNING = WARNING, returnCode = 9)) WARNING <- "z is missing" if(warn) warning(WARNING) return(structure(list(n = n, d = p, G = G, mu = matrix(as.double(NA), p, G), sigma = array(NA, c(p, p, G)), decomp = list( d = p, G = G, scale = rep(NA, G), shape = rep(NA, p), orientation = array(NA, c(p, p, G))), pro = rep(NA, G), modelName = "VEV", prior = prior), WARNING = WARNING)) } # shape <- sqrt(rev(sort(shape/exp(sum(log(shape))/p)))) if(any(is.na(z)) || any(z < 0) || any(z > 1)) stop( "improper specification of z") if (is.null(control)) control <- emControl() itmax <- if(length(control$itmax) == 1) control$itmax else control$ itmax[2] tol <- if(length(control$tol) == 1) control$tol else control$tol[2] lwork <- max(3 * min(n, p) + max(n, p), 5 * min(n, p), p + G) if(is.null(prior)) { temp <- .Fortran("msvev", as.double(data), as.double(z), as.integer(n), as.integer(p), as.integer(G), double(lwork), as.integer(lwork), as.integer(itmax), as.double(tol), double(p * G), double(G), double(p), double(p * p * G), double(G), PACKAGE = "mclust")[7:14] } else { priorParams <- do.call(prior$functionName, c(list(data = data, G = G, modelName = "VEV"), prior[names(prior) != "functionName"])) temp <- .Fortran("msvevp", as.double(data), as.double(z), as.integer(n), as.integer(p), as.integer(G), as.double(priorParams$shrinkage), as.double(priorParams$mean), as.double(if(any(priorParams$scale != 0)) chol(priorParams$ scale) else priorParams$scale), as.double(priorParams$dof), double(lwork), as.integer(lwork), as.integer(itmax), as.double(tol), double(p * G), double(G), double(p), double(p * p * G), double(G), PACKAGE = "mclust")[11:18] } 
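## The M-step Fortran routine ("msvev", or "msvevp" with a conjugate prior) returns,
## in the order unpacked below: the LAPACK SVD status flag, the iteration count and
## error of the iterative M-step, the component means, the volume parameters
## lambda_k ('scale'), the common shape vector A ('shape'), the orientation
## matrices ('O', transposed back via aperm), and the mixing proportions, so that
## each covariance is Sigma_k = scale[k] * O[,,k] %*% diag(shape) %*% t(O[,,k]).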
lapackSVDinfo <- temp[[1]] inner <- temp[[2]] inerr <- temp[[3]] mu <- matrix(temp[[4]], p, G) dimnames(mu) <- list(NULL, as.character(1:G)) scale <- temp[[5]] shape <- temp[[6]] O <- aperm(array(temp[[7]], c(p, p, G)),c(2,1,3)) pro <- temp[[8]] WARNING <- NULL if(lapackSVDinfo) { if(lapackSVDinfo > 0) { WARNING <- "LAPACK DGESVD fails to converge" if(warn) warning(WARNING) } else { WARNING <- "input error for LAPACK DGESVD" if(warn) warning(WARNING) } O[] <- shape[] <- scale[] <- NA sigma <- array(NA, c(p, p, G)) ret <- -9 } else if(any(c(scale, shape) > signif(.Machine$double.xmax, 6)) || any( !c(scale, shape))) { WARNING <- "cannot compute M-step" if(warn) warning(WARNING) mu[] <- pro[] <- O[] <- shape[] <- scale[] <- NA sigma <- array(NA, c(p, p, G)) ret <- -1 } else { sigma <- sweep(shapeO(shape, O, transpose = FALSE), MARGIN = 3, STATS = scale, FUN = "*") if(inner >= itmax) { WARNING <- "inner iteration limit reached" if(warn) warning(WARNING) inner <- - inner } ret <- 2 } info <- c(iteration = inner, error = inerr) dimnames(z) <- list(dimnames(data)[[1]], NULL) dimnames(mu) <- list(dimnames(data)[[2]], NULL) dimnames(sigma) <- dimnames(O) <- list(dimnames(data)[[2]], dimnames(data)[[2]], NULL) variance <- list(modelName = "VEV", d = p, G = G, sigma = sigma, scale = scale, shape = shape, orientation = O) parameters <- list(pro=pro, mean=mu, variance=variance) structure(list(modelName = "VEV", prior = prior, n = n, d = p, G = G, z = z, parameters = parameters, control = control), info = info, WARNING = WARNING, returnCode = ret) } simVEV <- function(parameters, n, seed = NULL, ...) { if(!is.null(seed)) set.seed(seed) mu <- as.matrix(parameters$mean) d <- nrow(mu) G <- ncol(mu) if(any(is.na(parameters[c("mean", "variance")])) || any(is.null(parameters[c("mean", "variance")]))) { warning("parameters are missing") return(structure(matrix(as.double(NA), n, d + 1), modelName = "VEV")) } pro <- parameters$pro if(is.null(pro)) pro <- rep(1/G, G) clabels <- sample(1:G, size = n, replace = TRUE, prob = pro) ctabel <- tabulate(clabels, nbins=G) x <- matrix(0, n, d) rtshape <- sqrt(parameters$variance$shape) if(length(rtshape) != d) stop("shape incompatible with mean") rtscale <- sqrt(parameters$variance$scale) if(length(rtscale) != G) stop("scale incompatible with mean") for(k in 1:G) { m <- ctabel[k] sss <- rtscale[k] * rtshape cholSigma <- t(parameters$variance$orientation[, , k]) * sss x[clabels == k, ] <- sweep(matrix(rnorm(m * d), nrow = m, ncol = d) %*% cholSigma, MARGIN = 2, STATS = mu[, k], FUN = "+") } dimnames(x) <- list(NULL, paste0("x", 1:d)) structure(cbind(group = clabels, x), modelName = "VEV") } cdensVII <- function(data, logarithm = FALSE, parameters, warn = NULL, ...) 
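## Component densities for the spherical, varying-volume model VII, in which
## Sigma_k = sigmasq_k * I. Returns an n-by-G matrix of densities (on the log
## scale when logarithm = TRUE). Illustrative sketch, not run here; it assumes
## only base R (kmeans, faithful) and the package helper unmap(), which turns a
## classification vector into an indicator matrix:
##   z0  <- unmap(kmeans(faithful, 2)$cluster)
##   par <- mstepVII(faithful, z = z0)$parameters
##   den <- cdensVII(faithful, parameters = par, logarithm = TRUE)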
{ if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) if(is.null(dimdat) || length(dimdat) != 2) stop("data must be a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) mu <- as.matrix(parameters$mean) G <- ncol(mu) if(any(is.na(unlist(parameters[c("pro", "mean", "variance")]))) || any(is.null(parameters[c("pro", "mean", "variance")]))) { WARNING <- "parameters are missing" if(warn) warning(WARNING) z <- matrix(as.double(NA),n,G) dimnames(z) <- list(dimnames(data)[[1]], NULL) return(structure(z, logarithm = logarithm, modelName = "VII", WARNING = WARNING, returnCode = 9)) } sigmasq <- parameters$variance$sigmasq if(any(sigmasq < 0)) stop("sigma-squared is negative") if(any(!sigmasq)) { WARNING <- "sigma-squared vanishes" if(warn) warning(WARNING) z <- matrix(as.double(NA),n,G) dimnames(z) <- list(dimnames(data)[[1]], NULL) return(structure(z, logarithm = logarithm, modelName = "VII", WARNING = WARNING, returnCode = 9)) } temp <- .Fortran("esvii", as.double(data), as.double(mu), as.double(sigmasq), as.double(-1), as.integer(n), as.integer(p), as.integer(G), as.double(-1), double(1), double(n * G), PACKAGE = "mclust")[9:10] loglik <- temp[[1]] z <- matrix(temp[[2]], n, G) WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "sigma-squared falls below threshold" if(warn) warning(WARNING) z[] <- NA ret <- -1 } else { if (!logarithm) z <- exp(z) ret <- 0 } dimnames(z) <- list(dimnames(data)[[1]],NULL) structure(z, logarithm = logarithm, modelName = "VII", WARNING = WARNING, returnCode = ret) } emVII <- function(data, parameters, prior = NULL, control = emControl(), warn = NULL, ...) { z <- estepVII(data, parameters = parameters, warn = warn)$z meVII(data, z = z, prior = prior, control = control, Vinv = parameters$Vinv, warn = warn) } estepVII <- function(data, parameters, warn = NULL, ...) 
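## E-step for VII: given the current parameters, computes the log-likelihood and
## the responsibility matrix with entries z[i,k] proportional to
## pro_k * N(x_i; mu_k, sigmasq_k * I). When a noise component is present
## (length(pro) == G + 1) an extra column proportional to pro_0 * Vinv is added,
## with Vinv defaulting to the reciprocal hypervolume of the data region.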
{ if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) if(is.null(dimdat) || length(dimdat) != 2) stop("data must be a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) pro <- parameters$pro pro <- pro/sum(pro) l <- length(pro) mu <- as.matrix(parameters$mean) G <- ncol(mu) noise <- (l == G + 1) if(!noise) { if(l != G) stop("pro improperly specified") K <- G Vinv <- NULL } else { K <- G + 1 Vinv <- parameters$Vinv if(is.null(Vinv) || Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE) } if(any(is.na(unlist(parameters[c("pro", "mean", "variance")]))) || any(is.null(parameters[c("pro", "mean", "variance")]))) { WARNING <- "parameters are missing" if(warn) warning(WARNING) z <- matrix(as.double(NA),n,K) dimnames(z) <- list(dimnames(data)[[1]], NULL) return(structure(list(modelName = "VII", n=n, d=p, G=G, z=z, parameters=parameters, loglik=NA), WARNING = WARNING, returnCode = 9)) } sigmasq <- parameters$variance$sigmasq if(is.null(sigmasq)) stop("variance parameters are missing") if(any(sigmasq < 0)) stop("sigma-squared is negative") if(any(!sigmasq)) { WARNING <- "sigma-squared vanishes" if(warn) warning(WARNING) z <- matrix(as.double(NA),n,K) dimnames(z) <- list(dimnames(data)[[1]], NULL) return(structure(list(modelName = "VII", n=n, d=p, G=G, z=z, parameters=parameters, loglik=NA), WARNING = WARNING, returnCode = -1)) } temp <- .Fortran("esvii", as.double(data), as.double(mu), as.double(sigmasq), as.double(pro), as.integer(n), as.integer(p), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), double(1), double(n * K), PACKAGE = "mclust")[9:10] loglik <- temp[[1]] z <- matrix(temp[[2]], n, K) WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "cannot compute E-step" if(warn) warning(WARNING) z[] <- loglik <- NA ret <- -1 } else ret <- 0 dimnames(z) <- list(dimnames(data)[[1]],NULL) structure(list(modelName = "VII", n = n, d = p, G = G, z = z, parameters = parameters, loglik = loglik), WARNING = WARNING, returnCode = ret) } meVII <- function(data, z, prior = NULL, control = emControl(), Vinv = NULL, warn = NULL, ...) 
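## EM for the VII model started from initial memberships z (an n-by-G matrix, or
## n-by-(G+1) when Vinv is supplied for a noise term). Illustrative sketch, not
## run here, using the package's model-based hierarchical clustering helpers
## hc(), hclass() and unmap() for initialisation:
##   z0  <- unmap(hclass(hc(faithful, modelName = "VVV"), 2))
##   fit <- meVII(faithful, z = z0)
##   fit$parameters$variance$sigmasq   # one spherical variance per component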
{ if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- (is.null(dimdat) || NCOL(data) == 1) if(oneD || length(dimdat) > 2) stop("data must be in the form of a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) z <- as.matrix(z) dimz <- dim(z) if(dimz[1] != n) stop("data and z should have the same row dimension") K <- dimz[2] if (!is.null(Vinv)) { G <- K - 1 if(Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE) } else G <- K if(all(is.na(z))) { WARNING <- "z is missing" if(warn) warning(WARNING) variance <- list(modelName = "VII", d=p, G=G, sigmasq=rep(NA,G)) parameters <- list(pro=rep(NA,G), mean=matrix(as.double(NA),p,G), variance=variance, Vinv=Vinv) return(structure(list(modelName="VII", prior=prior, n=n, d=p, G=G, z=z, parameters=parameters, control=control, loglik=NA), WARNING = WARNING, returnCode = 9)) } if(any(is.na(z)) || any(z < 0) || any(z > 1)) stop("improper specification of z") storage.mode(z) <- "double" if(is.null(prior)) { temp <- .Fortran("mevii", as.logical(control$equalPro), as.double(data), as.integer(n), as.integer(p), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), z, as.integer(control$itmax[1]), as.double(control$tol[1]), as.double(control$eps), double(p * G), double(G), double(K), PACKAGE = "mclust")[7:13] } else { priorParams <- do.call(prior$functionName, c(list(data = data, G = G, modelName = "VII"), prior[names(prior) != "functionName"])) storage.mode(z) <- "double" temp <- .Fortran("meviip", as.logical(control$equalPro), as.double(data), as.integer(n), as.integer(p), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), as.double(priorParams$shrinkage), as.double(priorParams$mean), as.double(priorParams$scale), as.double(priorParams$dof), z, as.integer(control$itmax[1]), as.double(control$tol[1]), as.double(control$eps), double(p * G), double(G), double(K), PACKAGE = "mclust")[c(11:17, 10)] } mu <- matrix(temp[[5]], p, G) dimnames(mu) <- list(NULL, as.character(1:G)) z <- temp[[1]] its <- temp[[2]] err <- temp[[3]] loglik <- temp[[4]] sigmasq <- temp[[6]] pro <- temp[[7]] WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6) || any(sigmasq <= max(control$eps, 0))) { WARNING <- "sigma-squared falls below threshold" if(warn) warning(WARNING) mu[] <- pro[] <- sigmasq <- z[] <- loglik <- NA sigma <- array(NA, c(p, p, G)) ret <- -1 } else if(loglik < - signif(.Machine$double.xmax, 6)) { if(control$equalPro) { WARNING <- "z column sum fell below threshold" if(warn) warning(WARNING) } else { WARNING <- "mixing proportion fell below threshold" if(warn) warning(WARNING) } mu[] <- pro[] <- sigmasq <- z[] <- loglik <- NA sigma <- array(NA, c(p, p, G)) ret <- if(control$equalPro) -2 else -3 } else { sigma <- array(0, c(p, p, G)) for(k in 1:G) sigma[, , k] <- diag(rep(sigmasq[k], p)) if(its >= control$itmax[1]) { WARNING <- "iteration limit reached" if(warn) warning(WARNING) its <- - its ret <- 1 } else ret <- 0 } info <- c(iterations = its, error = err) dimnames(z) <- list(dimnames(data)[[1]], NULL) dimnames(mu) <- list(dimnames(data)[[2]], NULL) dimnames(sigma) <- list(dimnames(data)[[2]], dimnames(data)[[2]], NULL) variance <- list(modelName = "VII", d = p, G = G, sigma = sigma, sigmasq = sigmasq, scale = sigmasq) parameters <- list(pro=pro, mean=mu, variance=variance, Vinv=Vinv) structure(list(modelName = "VII", prior = prior, n = n, d = p, G = G, z = z, parameters = parameters, control = control, loglik = loglik), info = info, WARNING = WARNING, returnCode = ret) } meVVI <- function(data, z, prior = NULL, 
control = emControl(), Vinv = NULL, warn = NULL, ...) { if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- (is.null(dimdat) || NCOL(data) == 1) if(oneD || length(dimdat) > 2) stop("data should in the form of a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) z <- as.matrix(z) dimz <- dim(z) if(dimz[1] != n) stop("data and z should have the same row dimension") K <- dimz[2] if (!is.null(Vinv)) { G <- K - 1 if(Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE) } else G <- K if(all(is.na(z))) { WARNING <- "z is missing" if(warn) warning(WARNING) variance <- list(modelName = "VVI", d = p, G = G, scale = rep(NA,G), shape = matrix(as.double(NA),p,G)) parameters <- list(pro=rep(NA,G), mean=matrix(as.double(NA),p,G), variance=variance, Vinv=Vinv) return(structure(list(modelName="VVI", prior=prior, n=n, d=p, G=G, z=z, parameters=parameters, control=control, loglik=NA), WARNING = WARNING, returnCode = 9)) } if(any(is.na(z)) || any(z < 0) || any(z > 1)) stop("improper specification of z") storage.mode(z) <- "double" if(is.null(prior)) { temp <- .Fortran("mevvi", as.logical(control$equalPro), as.double(data), as.integer(n), as.integer(p), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), z, as.integer(control$itmax[1]), as.double(control$tol[1]), as.double(control$eps), double(p * G), double(G), double(p * G), double(K), PACKAGE = "mclust")[7:14] } else { priorParams <- do.call(prior$functionName, c(list(data = data, G = G, modelName = "VVI"), prior[names(prior) != "functionName"])) temp <- .Fortran("mevvip", as.logical(control$equalPro), as.double(data), as.integer(n), as.integer(p), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), as.double(priorParams$shrinkage), as.double(priorParams$mean), as.double(priorParams$scale), as.double(priorParams$dof), z, as.integer(control$itmax[1]), as.double(control$tol[1]), as.double(control$eps), double(p * G), double(G), double(p * G), double(K), PACKAGE = "mclust")[11:18] } z <- temp[[1]] its <- temp[[2]] err <- temp[[3]] loglik <- temp[[4]] mu <- matrix(temp[[5]], p, G) scale <- temp[[6]] shape <- matrix(temp[[7]], p, G) dimnames(mu) <- dimnames(shape) <- list(NULL, as.character(1:G)) pro <- temp[[8]] WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "singular covariance" if(warn) warning(WARNING) sigma <- array(NA, c(p, p, G)) mu[] <- pro[] <- z[] <- loglik <- shape[] <- NA ret <- -1 } else if(loglik < - signif(.Machine$double.xmax, 6)) { if(control$equalPro) { WARNING <- "z column sum fell below threshold" if(warn) warning(WARNING) } else { WARNING <- "mixing proportion fell below threshold" if(warn) warning(WARNING) } sigma <- array(NA, c(p, p, G)) mu[] <- pro[] <- z[] <- loglik <- shape[] <- NA ret <- if(control$equalPro) -2 else -3 } else { sigma <- array(apply(sweep(shape, MARGIN = 2, STATS = scale, FUN = "*"), 2, diag), c(p, p, G)) if(its >= control$itmax[1]) { WARNING <- "iteration limit reached" if(warn) warning(WARNING) its <- - its ret <- 1 } else ret <- 0 } info <- c(iterations = its, error = err) dimnames(z) <- list(dimnames(data)[[1]], NULL) dimnames(mu) <- list(dimnames(data)[[2]], NULL) dimnames(sigma) <- list(dimnames(data)[[2]], dimnames(data)[[2]], NULL) variance <- list(modelName = "VVI", d = p, G = G, sigma = sigma, scale = scale, shape = shape) parameters <- list(pro=pro, mean=mu, variance=variance, Vinv=Vinv) structure(list(modelName = "VVI", prior = prior, n = n, d = p, G = G, z = z, parameters = parameters, control = control, loglik = loglik), info = 
info, WARNING = WARNING, returnCode = ret) } mstepVII <- function(data, z, prior = NULL, warn = NULL, ...) { if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- (is.null(dimdat) || NCOL(data) == 1) if(oneD || length(dimdat) != 2) stop("data should be a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) z <- as.matrix(z) dimz <- dim(z) if(dimz[1] != n) stop("row dimension of z should equal number of observations") G <- dimz[2] if(all(is.na(z))) { WARNING <- "z is missing" if(warn) warning(WARNING) variance <- list(modelName = "VII", d=p, G=G, sigmasq=rep(NA,G)) parameters <- list(pro=rep(NA,G), mean=matrix(as.double(NA),p,G), variance=variance) return(structure(list(modelName="VII", prior=prior, n=n, d=p, G=G, z=z, parameters=parameters), WARNING = WARNING, returnCode = 9)) } if(any(is.na(z)) || any(z < 0) || any(z > 1)) stop("improper specification of z") storage.mode(z) <- "double" if(is.null(prior)) { temp <- .Fortran("msvii", as.double(data), z, as.integer(n), as.integer(p), as.integer(G), double(p * G), double(G), double(G), PACKAGE = "mclust")[6:8] } else { priorParams <- do.call(prior$functionName, c(list(data = data, G = G, modelName = "VII"), prior[names(prior) != "functionName"])) temp <- .Fortran("msviip", as.double(data), z, as.integer(n), as.integer(p), as.integer(G), as.double(priorParams$shrinkage), as.double(priorParams$mean), as.double(priorParams$scale), as.double(priorParams$dof), double(p * G), double(G), double(G), PACKAGE = "mclust")[10:12] } mu <- matrix(temp[[1]], p, G) dimnames(mu) <- list(NULL, as.character(1:G)) sigmasq <- temp[[2]] pro <- temp[[3]] sigma <- array(0, c(p, p, G)) for(k in 1:G) sigma[, , k] <- diag(rep(sigmasq[k], p)) WARNING <- NULL if(any(sigmasq > signif(.Machine$double.xmax, 6))) { WARNING <- "cannot compute M-step" if(warn) warning(WARNING) ret <- -1 } else ret <- 0 dimnames(z) <- list(dimnames(data)[[1]], NULL) dimnames(mu) <- list(dimnames(data)[[2]], NULL) dimnames(sigma) <- list(dimnames(data)[[2]], dimnames(data)[[2]], NULL) variance <- list(modelName = "VII", d = p, G = G, sigma = sigma, sigmasq = sigmasq, scale = sigmasq) parameters <- list(pro=pro, mean=mu, variance=variance) structure(list(modelName = "VII", prior = prior, n = n, d = p, G = G, z = z, parameters = parameters), WARNING = WARNING, returnCode = ret) } simVII <- function(parameters, n, seed = NULL, ...) { if(!is.null(seed)) set.seed(seed) mu <- as.matrix(parameters$mean) d <- nrow(mu) G <- ncol(mu) if(any(is.na(parameters[c("mean", "variance")])) || any(is.null(parameters[c("mean", "variance")]))) { warning("parameters are missing") return(structure(matrix(as.double(NA), n, d), modelName = "VII")) } pro <- parameters$pro if(is.null(pro)) pro <- rep(1/G, G) clabels <- sample(1:G, size = n, replace = TRUE, prob = pro) ctabel <- tabulate(clabels, nbins=G) x <- matrix(0, n, d) sigmasq <- parameters$variance$sigmasq for(k in 1:G) { m <- ctabel[k] x[clabels == k, ] <- sweep(matrix(rnorm(m * d), nrow = m, ncol = d) %*% diag(rep(sqrt(sigmasq[k]), d)), MARGIN = 2, STATS = mu[, k], FUN = "+") } dimnames(x) <- list(NULL, paste0("x", 1:d)) structure(cbind(group = clabels, x), modelName = "VII") } meV <- function(data, z, prior = NULL, control = emControl(), Vinv = NULL, warn = NULL, ...) 
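## EM for the univariate unequal-variance model "V"; data must be a vector and z
## an n-by-G (or n-by-(G+1) with noise) membership matrix. Illustrative sketch,
## not run here, assuming only base R and the package helper unmap():
##   x   <- faithful$waiting
##   fit <- meV(x, z = unmap(kmeans(x, 2)$cluster))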
{ if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- (is.null(dimdat) || NCOL(data) == 1) if(!oneD) stop("data must be one-dimensional") data <- as.vector(data) n <- length(data) z <- as.matrix(z) dimz <- dim(z) if(dimz[1] != n) stop("row dimension of z should equal length of data") K <- dimz[2] if(!is.null(Vinv)) { G <- K - 1 if (Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE) } else G <- K if(all(is.na(z))) { WARNING <- "z is missing" if(warn) warning(WARNING) variance <- list(modelName = "V", d=1, G=G, sigmasq = rep(NA,G)) parameters <- list(pro=rep(NA,G), mean=rep(NA,G), variance=variance, Vinv=Vinv) return(structure(list(modelName="V", prior=prior, n=n, d=1, G=G, z=z, parameters=parameters, control=control, loglik=NA), WARNING = WARNING, returnCode = 9)) } if(any(is.na(z)) || any(z < 0) || any(z > 1)) stop("improper specification of z") storage.mode(z) <- "double" if(is.null(prior)) { temp <- .Fortran("me1v", as.logical(control$equalPro), as.double(data), as.integer(n), as.integer(G), as.double(if(is.null(Vinv)) -1 else Vinv), z, as.integer(control$itmax[1]), as.double(control$tol[1]), as.double(control$eps), double(G), double(G), double(K), PACKAGE = "mclust")[6:12] } else { priorParams <- do.call(prior$functionName, c(list(data = data, G = G, modelName = "V"), prior[names(prior) != "functionName"])) temp <- .Fortran("me1vp", as.logical(control$equalPro), as.double(data), as.integer(n), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), as.double(priorParams$shrinkage), as.double(priorParams$mean), as.double(priorParams$scale), as.double(priorParams$dof), z, as.integer(control$itmax[1]), as.double(control$tol[1]), as.double(control$eps), double(G), double(G), double(K), PACKAGE = "mclust")[c(10:16, 9)] } z <- temp[[1]] its <- temp[[2]] err <- temp[[3]] loglik <- temp[[4]] mu <- temp[[5]] names(mu) <- as.character(1:G) sigmasq <- temp[[6]] pro <- temp[[7]] ## logpost <- temp[[8]] WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6) || any(sigmasq <= max(control$eps, 0))) { WARNING <- "sigma-squared falls below threshold" if(warn) warning(WARNING) mu[] <- pro[] <- sigmasq[] <- z[] <- loglik <- NA ret <- -1 } else if(loglik < - signif(.Machine$double.xmax, 6)) { if(control$equalPro) { WARNING <- "z column sum fell below threshold" if(warn) warning(WARNING) } else { WARNING <- "mixing proportion fell below threshold" if(warn) warning(WARNING) } mu[] <- pro[] <- sigmasq[] <- z[] <- loglik <- NA ret <- if(control$equalPro) -2 else -3 } else if(its >= control$itmax[1]) { WARNING <- "iteration limit reached" if(warn) warning(WARNING) its <- - its ret <- 1 } else ret <- 0 info <- c(iterations = its, error = err) dimnames(z) <- list(names(data),NULL) variance = list(modelName = "V", d = 1, G = G, sigmasq = sigmasq, scale = sigmasq) parameters <- list(pro=pro, mean=mu, variance=variance, Vinv=Vinv) structure(list(modelName = "V", prior = prior, n = n, d = 1, G = G, z = z, parameters = parameters, control = control, loglik = loglik), info = info, WARNING = WARNING, returnCode = ret) } mstepV <- function(data, z, prior = NULL, warn = NULL, ...) 
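## M-step for the univariate model "V". Without a prior the Fortran routine
## "ms1v" applies the standard weighted maximum-likelihood updates
##   n_k       = sum_i z[i,k],            pro_k = n_k / n,
##   mu_k      = sum_i z[i,k] * x_i / n_k,
##   sigmasq_k = sum_i z[i,k] * (x_i - mu_k)^2 / n_k,
## while "ms1vp" uses the corresponding updates under the conjugate prior
## supplied through 'prior'.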
{ if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- (is.null(dimdat) || NCOL(data) == 1) if(!oneD) stop("data must be one-dimensional") data <- as.vector(data) n <- length(data) ## z <- as.matrix(z) dimz <- dim(z) if(dimz[1] != n) stop("row dimension of z should equal data length") # number of groups G <- dimz[2] ## if(all(is.na(z))) { WARNING <- "z is missing" if(warn) warning(WARNING) variance <- list(modelName = "V", d=1, G=G, sigmasq=rep(NA,G)) parameters <- list(pro=rep(NA,G), mean=rep(NA,G), variance=variance) return(structure(list(modelName="V", prior=prior, n=n, d=1, G=G, z=z, parameters=parameters), WARNING = WARNING, returnCode = 9)) } if(any(is.na(z)) || any(z < 0) || any(z > 1)) stop("improper specification of z") if(is.null(prior)) { temp <- .Fortran("ms1v", as.double(data), as.double(z), as.integer(n), as.integer(G), double(G), double(G), double(G), PACKAGE = "mclust")[5:7] } else { priorParams <- do.call(prior$functionName, c(list(data = data, G = G, modelName = "V"), prior[names(prior) != "functionName"])) storage.mode(z) <- "double" temp <- .Fortran("ms1vp", as.double(data), z, as.integer(n), as.integer(G), as.double(priorParams$shrinkage), as.double(priorParams$mean), as.double(priorParams$scale), as.double(priorParams$dof), double(G), double(G), double(G), PACKAGE = "mclust")[9:11] } mu <- temp[[1]] names(mu) <- as.character(1:G) sigmasq <- temp[[2]] pro <- temp[[3]] WARNING <- NULL if(any(sigmasq > signif(.Machine$double.xmax, 6))) { WARNING <- "cannot compute M-step" if(warn) warning(WARNING) mu[] <- pro[] <- sigmasq[] <- z[] <- loglik <- NA ret <- -1 } else ret <- 0 dimnames(z) <- list(names(data),NULL) variance = list(modelName = "V", d = 1, G = G, sigmasq = sigmasq, scale = sigmasq) parameters <- list(pro=pro, mean=mu, variance=variance) structure(list(modelName = "V", prior = prior, n = n, d = 1, G = G, z = z, parameters = parameters), WARNING = WARNING, returnCode = ret) } simV <- function(parameters, n, seed = NULL, ...) { if(any(is.na(parameters[c("mean", "variance")])) || any(is.null(parameters[c("mean", "variance")]))) { warning("parameters are missing") return(structure(matrix(as.double(NA), n, 2), modelName = "V")) } if(!is.null(seed)) set.seed(seed) mu <- parameters$mean G <- length(mu) pro <- parameters$pro if(is.null(pro)) pro <- rep(1/G, G) clabels <- sample(1:G, size = n, replace = TRUE, prob = pro) ctabel <- tabulate(clabels, nbins=G) x <- rep(0, n) sd <- sqrt(parameters$variance$sigmasq) for(k in 1:G) { x[clabels == k] <- mu[k] + rnorm(ctabel[k], sd = sd[k]) } structure(cbind(group = clabels, "1" = x), modelName = "V") } cdensVVI <- function(data, logarithm = FALSE, parameters, warn = NULL, ...) 
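## Component densities for the diagonal model VVI, for which
## Sigma_k = scale[k] * diag(shape[, k]), with 'scale' a vector of length G
## (the volumes lambda_k) and 'shape' a p-by-G matrix; densities are returned
## as an n-by-G matrix, on the log scale when logarithm = TRUE.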
{
  if(is.null(warn)) warn <- mclust.options("warn")
  dimdat <- dim(data)
  if(is.null(dimdat) || length(dimdat) != 2)
    stop("data must be a matrix")
  data <- as.matrix(data)
  n <- nrow(data)
  p <- ncol(data)
  mu <- as.matrix(parameters$mean)
  G <- ncol(mu)
  if(any(is.na(unlist(parameters[c("pro", "mean", "variance")]))) ||
     any(is.null(parameters[c("pro", "mean", "variance")])))
    { WARNING <- "parameters are missing"
      if(warn) warning(WARNING)
      z <- matrix(as.double(NA),n,G)
      dimnames(z) <- list(dimnames(data)[[1]], NULL)
      return(structure(z, logarithm = logarithm, modelName = "VVI",
                       WARNING = WARNING, returnCode = 9))
    }
  if (is.null(parameters$variance$scale) ||
      is.null(parameters$variance$shape))
    stop("variance parameters are missing")
  temp <- .Fortran("esvvi",
                   as.double(data),
                   as.double(mu),
                   as.double(parameters$variance$scale),
                   as.double(parameters$variance$shape),
                   as.double(-1),
                   as.integer(n),
                   as.integer(p),
                   as.integer(G),
                   as.double(-1),
                   double(1),
                   double(n * G),
                   PACKAGE = "mclust")[10:11]
  loglik <- temp[[1]]
  z <- matrix(temp[[2]], n, G)
  WARNING <- NULL
  if(loglik > signif(.Machine$double.xmax, 6))
    { WARNING <- "cannot compute E-step"
      if(warn) warning(WARNING)
      z[] <- NA
      ret <- -1
    }
  else
    { if (!logarithm) z <- exp(z)
      ret <- 0
    }
  dimnames(z) <- list(dimnames(data)[[1]],NULL)
  structure(z, logarithm = logarithm, modelName = "VVI",
            WARNING = WARNING, returnCode = ret)
}

emVVI <- function(data, parameters, prior = NULL, control = emControl(), warn = NULL, ...)
{
  z <- estepVVI(data, parameters = parameters, warn = warn)$z
  meVVI(data, z = z, prior = prior, control = control,
        Vinv = parameters$Vinv, warn = warn)
}

estepVVI <- function(data, parameters, warn = NULL, ...)
{
  if(is.null(warn)) warn <- mclust.options("warn")
  dimdat <- dim(data)
  if(is.null(dimdat) || length(dimdat) != 2)
    stop("data must be a matrix")
  data <- as.matrix(data)
  n <- nrow(data)
  p <- ncol(data)
  pro <- parameters$pro
  pro <- pro/sum(pro)
  l <- length(pro)
  mu <- as.matrix(parameters$mean)
  G <- ncol(mu)
  noise <- (l == G + 1)
  if(!noise)
    { if(l != G) stop("pro improperly specified")
      K <- G
      Vinv <- NULL
    }
  else
    { K <- G + 1
      Vinv <- parameters$Vinv
      if (is.null(Vinv) || Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE)
    }
  if(any(is.na(unlist(parameters[c("pro", "mean", "variance")]))) ||
     any(is.null(parameters[c("pro", "mean", "variance")])))
    { WARNING <- "parameters are missing"
      if(warn) warning(WARNING)
      z <- matrix(as.double(NA),n,K)
      dimnames(z) <- list(dimnames(data)[[1]], NULL)
      return(structure(list(modelName = "VVI", n=n, d=p, G=G, z=z,
                            parameters=parameters, loglik=NA),
                       WARNING = WARNING, returnCode = 9))
    }
  if (is.null(parameters$variance$scale) ||
      is.null(parameters$variance$shape))
    stop("variance parameters are missing")
  temp <- .Fortran("esvvi",
                   as.double(data),
                   as.double(mu),
                   as.double(parameters$variance$scale),
                   as.double(parameters$variance$shape),
                   as.double(pro),
                   as.integer(n),
                   as.integer(p),
                   as.integer(G),
                   as.double(if (is.null(Vinv)) -1 else Vinv),
                   double(1),
                   double(n * K),
                   PACKAGE = "mclust")[10:11]
  loglik <- temp[[1]]
  z <- matrix(temp[[2]], n, K)
  WARNING <- NULL
  if(loglik > signif(.Machine$double.xmax, 6))
    { WARNING <- "cannot compute E-step"
      if(warn) warning(WARNING)
      z[] <- loglik <- NA
      ret <- -1
    }
  else ret <- 0
  dimnames(z) <- list(dimnames(data)[[1]],NULL)
  structure(list(modelName = "VVI", n = n, d = p, G = G, z = z,
                 parameters = parameters, loglik = loglik),
            WARNING = WARNING, returnCode = ret)
}

meVVI <- function(data, z, prior = NULL, control = emControl(), Vinv = NULL, warn = NULL, ...)
{ if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- (is.null(dimdat) || NCOL(data) == 1) if(oneD || length(dimdat) > 2) stop("data should in the form of a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) z <- as.matrix(z) dimz <- dim(z) if(dimz[1] != n) stop("data and z should have the same row dimension") K <- dimz[2] if (!is.null(Vinv)) { G <- K - 1 if(Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE) } else G <- K if(all(is.na(z))) { WARNING <- "z is missing" if(warn) warning(WARNING) variance <- list(modelName = "VVI", d = p, G = G, scale = rep(NA,G), shape = matrix(as.double(NA),p,G)) parameters <- list(pro=rep(NA,G), mean=matrix(as.double(NA),p,G), variance=variance, Vinv=Vinv) return(structure(list(modelName="VVI", prior=prior, n=n, d=p, G=G, z=z, parameters=parameters, control=control, loglik=NA), WARNING = WARNING, returnCode = 9)) } if(any(is.na(z)) || any(z < 0) || any(z > 1)) stop("improper specification of z") storage.mode(z) <- "double" if(is.null(prior)) { temp <- .Fortran("mevvi", as.logical(control$equalPro), as.double(data), as.integer(n), as.integer(p), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), z, as.integer(control$itmax[1]), as.double(control$tol[1]), as.double(control$eps), double(p * G), double(G), double(p * G), double(K), PACKAGE = "mclust")[7:14] } else { priorParams <- do.call(prior$functionName, c(list(data = data, G = G, modelName = "VVI"), prior[names(prior) != "functionName"])) temp <- .Fortran("mevvip", as.logical(control$equalPro), as.double(data), as.integer(n), as.integer(p), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), as.double(priorParams$shrinkage), as.double(priorParams$mean), as.double(priorParams$scale), as.double(priorParams$dof), z, as.integer(control$itmax[1]), as.double(control$tol[1]), as.double(control$eps), double(p * G), double(G), double(p * G), double(K), PACKAGE = "mclust")[11:18] } z <- temp[[1]] its <- temp[[2]] err <- temp[[3]] loglik <- temp[[4]] mu <- matrix(temp[[5]], p, G) scale <- temp[[6]] shape <- matrix(temp[[7]], p, G) dimnames(mu) <- dimnames(shape) <- list(NULL, as.character(1:G)) pro <- temp[[8]] WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "singular covariance" if(warn) warning(WARNING) sigma <- array(NA, c(p, p, G)) mu[] <- pro[] <- z[] <- loglik <- shape[] <- NA ret <- -1 } else if(loglik < - signif(.Machine$double.xmax, 6)) { if(control$equalPro) { WARNING <- "z column sum fell below threshold" if(warn) warning(WARNING) } else { WARNING <- "mixing proportion fell below threshold" if(warn) warning(WARNING) } sigma <- array(NA, c(p, p, G)) mu[] <- pro[] <- z[] <- loglik <- shape[] <- NA ret <- if(control$equalPro) -2 else -3 } else { sigma <- array(apply(sweep(shape, MARGIN = 2, STATS = scale, FUN = "*"), 2, diag), c(p, p, G)) if(its >= control$itmax[1]) { WARNING <- "iteration limit reached" if(warn) warning(WARNING) its <- - its ret <- 1 } else ret <- 0 } info <- c(iterations = its, error = err) dimnames(z) <- list(dimnames(data)[[1]], NULL) dimnames(mu) <- list(dimnames(data)[[2]], NULL) dimnames(sigma) <- list(dimnames(data)[[2]], dimnames(data)[[2]], NULL) variance <- list(modelName = "VVI", d = p, G = G, sigma = sigma, scale = scale, shape = shape) parameters <- list(pro=pro, mean=mu, variance=variance, Vinv=Vinv) structure(list(modelName = "VVI", prior = prior, n = n, d = p, G = G, z = z, parameters = parameters, control = control, loglik = loglik), info = info, WARNING = WARNING, returnCode = ret) } mstepVVI <- 
function(data, z, prior = NULL, warn = NULL, ...) { if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- (is.null(dimdat) || NCOL(data) == 1) if(oneD || length(dimdat) != 2) stop("data should be a matrix or a vector") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) z <- as.matrix(z) dimz <- dim(z) if(dimz[1] != n) stop("row dimension of z should equal data length") G <- dimz[2] if(all(is.na(z))) { WARNING <- "z is missing" if(warn) warning(WARNING) variance <- list(modelName = "VII", d=p, G=G, sigmasq=rep(NA,G)) parameters <- list(pro=rep(NA,G), mean=matrix(as.double(NA),p,G), variance=variance) return(structure(list(modelName="VII", prior=prior, n=n, d=p, G=G, z=z, parameters=parameters), WARNING = WARNING, returnCode = 9)) } if(any(is.na(z)) || any(z < 0) || any(z > 1)) stop("improper specification of z") if(is.null(prior)) { temp <- .Fortran("msvvi", as.double(data), as.double(z), as.integer(n), as.integer(p), as.integer(G), double(p * G), double(G), double(p * G), double(G), PACKAGE = "mclust")[6:9] } else { priorParams <- do.call(prior$functionName, c(list(data = data, G = G, modelName = "VVI"), prior[names( prior) != "functionName"])) temp <- .Fortran("msvvip", as.double(data), as.double(z), as.integer(n), as.integer(p), as.integer(G), as.double(priorParams$shrinkage), as.double(priorParams$mean), as.double(priorParams$scale), as.double(priorParams$dof), double(p * G), double(G), double(p * G), double(G), PACKAGE = "mclust")[10:13] } mu <- matrix(temp[[1]], p, G) scale <- temp[[2]] shape <- matrix(temp[[3]], p, G) dimnames(mu) <- dimnames(shape) <- list(NULL, as.character(1:G)) pro <- temp[[4]] WARNING <- NULL if(any(c(scale, shape) > signif(.Machine$double.xmax, 6)) || any(! c(scale, shape))) { WARNING <- "cannot compute M-step" if(warn) warning(WARNING) mu[] <- pro[] <- shape <- scale[] <- NA sigma <- array(NA, c(p, p, G)) ret <- -1 } else { sigma <- array(apply(sweep(shape, MARGIN = 2, STATS = scale, FUN = "*"), 2, diag), c(p, p, G)) ret <- 0 } dimnames(z) <- list(dimnames(data)[[1]], NULL) dimnames(mu) <- list(dimnames(data)[[2]], NULL) dimnames(sigma) <- list(dimnames(data)[[2]], dimnames(data)[[2]], NULL) variance <- list(modelName = "VVI", d = p, G = G, sigma = sigma, sigmasq = scale, scale = scale, shape = shape) parameters <- list(pro=pro, mean=mu, variance=variance) structure(list(modelName = "VVI", prior = prior, n = n, d = p, G = G, z = z, parameters = parameters), WARNING = WARNING, returnCode = ret) } simVVI <- function(parameters, n, seed = NULL, ...) 
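## Simulate from a fitted VVI model: class labels are drawn from 'pro' and each
## observation in component k is mu_k + diag(sqrt(scale[k] * shape[, k])) %*% N(0, I);
## the returned matrix carries the labels in its first column ("group").
## Illustrative sketch, not run here, assuming base R and the package helper unmap():
##   par  <- mstepVVI(faithful, z = unmap(kmeans(faithful, 2)$cluster))$parameters
##   xsim <- simVVI(par, n = 500, seed = 1)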
{ if(!is.null(seed)) set.seed(seed) mu <- as.matrix(parameters$mean) d <- nrow(mu) G <- ncol(mu) if(any(is.na(parameters[c("mean", "variance")])) || any(is.null(parameters[c("mean", "variance")]))) { warning("parameters are missing") return(structure(matrix(as.double(NA), n, d + 1), modelName = "VVI")) } pro <- parameters$pro if(is.null(pro)) pro <- rep(1/G, G) clabels <- sample(1:G, size = n, replace = TRUE, prob = pro) ctabel <- tabulate(clabels, nbins=G) x <- matrix(0, n, d) rtshape <- sqrt(parameters$variance$shape) if(!all(dim(rtshape) == dim(mu))) stop("shape incompatible with mean") rtscale <- sqrt(parameters$variance$scale) if(length(rtscale) != G) stop("scale incompatible with mean") for(k in 1:G) { m <- ctabel[k] x[clabels == k, ] <- sweep(matrix(rnorm(m * d), nrow = m, ncol = d) %*% diag(rtscale[k] * rtshape[, k]), MARGIN = 2, STATS = mu[, k], FUN = "+") } dimnames(x) <- list(NULL, paste0("x", 1:d)) structure(cbind(group = clabels, x), modelName = "VVI") } cdensVVV <- function(data, logarithm = FALSE, parameters, warn = NULL, ...) { if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) if(is.null(dimdat) || length(dimdat) != 2) stop("data must be a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) mu <- as.matrix(parameters$mean) G <- ncol(mu) if(any(is.na(unlist(parameters[c("pro", "mean", "variance")]))) || any(is.null(parameters[c("pro", "mean", "variance")]))) { WARNING <- "parameters are missing" if(warn) warning(WARNING) z <- matrix(as.double(NA),n,G) dimnames(z) <- list(dimnames(data)[[1]], NULL) return(structure(list(modelName = "VVV", n=n, d=p, G=G, z=z, parameters=parameters, loglik=NA), WARNING = WARNING, returnCode = 9)) } if (is.null(parameters$variance$cholsigma)) stop("variance parameters are missing") temp <- .Fortran("esvvv", as.logical(1), as.double(data), as.double(mu), as.double(parameters$variance$cholsigma), as.double(-1), as.integer(n), as.integer(p), as.integer(G), as.double(-1), double(p), double(1), double(n * G), PACKAGE = "mclust")[10:12] lapackCholInfo <- temp[[1]][1] loglik <- temp[[2]] z <- matrix(temp[[3]], n, G) WARNING <- NULL if(lapackCholInfo) { if(lapackCholInfo > 0) { WARNING <- "sigma is not positive definite" if(warn) warning(WARNING) } else { WARNING <- "input error for LAPACK DPOTRF" if(warn) warning(WARNING) } z[] <- NA ret <- -9 } else if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "cannot compute E-step" if(warn) warning(WARNING) z[] <- NA ret <- -1 } else { if (!logarithm) z <- exp(z) ret <- 0 } dimnames(z) <- list(dimnames(data)[[1]],NULL) structure(z, logarithm = logarithm, modelName = "VVV", WARNING = WARNING, returnCode = ret) } emVVV <- function(data, parameters, prior = NULL, control = emControl(), warn = NULL, ...) { z <- estepVVV(data, parameters = parameters, warn = warn)$z meVVV(data, z = z, prior = prior, control = control, Vinv = parameters$Vinv, warn = warn) } estepVVV <- function(data, parameters, warn = NULL, ...) 
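## E-step and component densities for the unconstrained model VVV. Covariances
## are supplied through their upper-triangular Cholesky factors
## (parameters$variance$cholsigma); the Fortran routine "esvvv" reports a LAPACK
## DPOTRF status flag, the log-likelihood, and the responsibility matrix (with an
## extra noise column when Vinv is used).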
{ if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) if(is.null(dimdat) || length(dimdat) != 2) stop("data must be a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) pro <- parameters$pro pro <- pro/sum(pro) l <- length(pro) mu <- as.matrix(parameters$mean) G <- ncol(mu) noise <- (l == G + 1) if(!noise) { if(l != G) stop("pro improperly specified") K <- G Vinv <- NULL } else { K <- G + 1 Vinv <- parameters$Vinv if(is.null(Vinv) || Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE) } if(any(is.na(unlist(parameters[c("pro", "mean", "variance")]))) || any(is.null(parameters[c("pro", "mean", "variance")]))) { WARNING <- "parameters are missing" if(warn) warning(WARNING) z <- matrix(as.double(NA),n,K) dimnames(z) <- list(dimnames(data)[[1]], NULL) return(structure(list(modelName = "VVV", n=n, d=p, G=G, z=z, parameters=parameters, loglik=NA), WARNING = WARNING, returnCode = 9)) } if (is.null(parameters$variance$cholsigma)) stop("variance parameters are missing") temp <- .Fortran("esvvv", as.logical(1), as.double(data), as.double(mu), as.double(parameters$variance$cholsigma), as.double(pro), as.integer(n), as.integer(p), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), double(p), double(1), double(n * K), PACKAGE = "mclust")[10:12] lapackCholInfo <- temp[[1]][1] loglik <- temp[[2]] z <- matrix(temp[[3]], n, K) WARNING <- NULL if(lapackCholInfo) { if(lapackCholInfo > 0) { WARNING <- "sigma is not positive definite" if(warn) warning(WARNING) } else { WARNING <- "input error for LAPACK DPOTRF" if(warn) warning(WARNING) } z[] <- loglik <- NA ret <- -9 } else if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "cannot compute E-step" if(warn) warning(WARNING) z[] <- loglik <- NA ret <- -1 } else ret <- 0 dimnames(z) <- list(dimnames(data)[[1]],NULL) structure(list(modelName = "VVV", n = n, d = p, G = G, z = z, parameters = parameters, loglik = loglik), WARNING = WARNING, returnCode = ret) } meVVV <- function(data, z, prior = NULL, control = emControl(), Vinv = NULL, warn = NULL, ...) 
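## EM for the fully general model VVV from initial memberships z; on success the
## component covariances are rebuilt from the returned Cholesky factors via
## unchol(). Illustrative sketch, not run here, using the package helpers
## hc(), hclass() and unmap() for initialisation:
##   z0  <- unmap(hclass(hc(faithful, modelName = "VVV"), 2))
##   fit <- meVVV(faithful, z = z0)
##   fit$parameters$variance$sigma    # p x p x G array of covariance matrices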
{ if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- (is.null(dimdat) || NCOL(data) == 1) if(oneD || length(dimdat) != 2) stop("data should in the form of a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) z <- as.matrix(z) dimz <- dim(z) if(dimz[1] != n) stop("data and z should have the same row dimension") K <- dimz[2] if (!is.null(Vinv)) { G <- K - 1 if(Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE) } else G <- K if(all(is.na(z))) { WARNING <- "z is missing" if(warn) warning(WARNING) variance <- list(modelName = "VVV", d = p, G = G, sigma = array(NA, c(p,p,G)), cholsigma = array(NA, c(p,p,G))) parameters <- list(pro=rep(NA,G), mean=matrix(as.double(NA),p,G), variance=variance, Vinv=Vinv) return(structure(list(modelName="VVV", prior=prior, n=n, d=p, G=G, z=z, parameters=parameters, control=control, loglik=NA), WARNING = WARNING, returnCode = 9)) } if(any(is.na(z)) || any(z < 0) || any(z > 1)) stop("improper specification of z") storage.mode(z) <- "double" if(is.null(prior)) { temp <- .Fortran("mevvv", as.logical(control$equalPro), as.double(data), as.integer(n), as.integer(p), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), z, as.integer(control$itmax[1]), as.double(control$tol[1]), as.double(control$eps), double(p * G), double(p * p * G), double(K), double(p), double(p*p), PACKAGE = "mclust")[7:13] } else { priorParams <- do.call(prior$functionName, c(list(data = data, G = G, modelName = "VVV"), prior[names(prior) != "functionName"])) temp <- .Fortran("mevvvp", as.logical(control$equalPro), as.double(data), as.integer(n), as.integer(p), as.integer(G), as.double(if (is.null(Vinv)) -1 else Vinv), as.double(priorParams$shrinkage), as.double(priorParams$mean), as.double(if(any(priorParams$scale != 0)) chol(priorParams$ scale) else priorParams$scale), as.double(priorParams$dof), z, as.integer(control$itmax[1]), as.double(control$tol[1]), as.double(control$eps), double(p * G), double(p * p * G), double(K), double(p), double(p*p), PACKAGE = "mclust")[c(11:17, 10)] } z <- temp[[1]] its <- temp[[2]] err <- temp[[3]] loglik <- temp[[4]] mu <- matrix(temp[[5]], p, G) dimnames(mu) <- list(NULL, as.character(1:G)) cholsigma <- array(temp[[6]], c(p, p, G)) pro <- temp[[7]] WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "singular covariance" if(warn) warning(WARNING) mu[] <- pro[] <- z[] <- loglik <- NA sigma <- array(NA, c(p, p, G)) ret <- -1 } else if(loglik < - signif(.Machine$double.xmax, 6)) { if(control$equalPro) { WARNING <- "z column sum fell below threshold" if(warn) warning(WARNING) } else { WARNING <- "mixing proportion fell below threshold" if(warn) warning(WARNING) } mu[] <- pro[] <- z[] <- loglik <- NA sigma <- array(NA, c(p, p, G)) ret <- if(control$equalPro) -2 else -3 } else { sigma <- array(apply(cholsigma, 3, unchol, upper = TRUE), c(p,p,G)) if(its >= control$itmax[1]) { WARNING <- "iteration limit reached" if(warn) warning(WARNING) its <- - its ret <- 1 } else ret <- 0 } info <- c(iterations = its, error = abs(err)) dimnames(z) <- list(dimnames(data)[[1]], NULL) dimnames(mu) <- list(dimnames(data)[[2]], NULL) dimnames(sigma) <- dimnames(cholsigma) <- list(dimnames(data)[[2]], dimnames(data)[[2]], NULL) variance <- list(modelName = "VVV", d = p, G = G, sigma = sigma, cholsigma = cholsigma) parameters <- list(pro=pro, mean=mu, variance=variance, Vinv=Vinv) structure(list(modelName = "VVV", prior = prior, n = n, d = p, G = G, z = z, parameters = parameters, control = control, loglik = loglik), info = 
info, WARNING = WARNING, returnCode = ret) } mstepVVV <- function(data, z, prior = NULL, warn = NULL, ...) { if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- (is.null(dimdat) || NCOL(data) == 1) if(oneD || length(dimdat) != 2) stop("data should be a matrix or a vector") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) z <- as.matrix(z) dimz <- dim(z) if(dimz[1] != n) stop("row dimension of z should equal data length") G <- dimz[2] if(all(is.na(z))) { WARNING <- "z is missing" if(warn) warning(WARNING) variance <- list(modelName = "VVV", d = p, G = G, sigma <- array(NA, c(p,p, G)), cholsigma = array(NA, c(p,p,G))) parameters <- list(pro=rep(NA,G), mean=matrix(as.double(NA),p,G), variance=variance) return(structure(list(modelName="VVV", prior=prior, n=n, d=p, G=G, z=z, parameters=parameters, loglik=NA), WARNING = WARNING, returnCode = 9)) } if(any(is.na(z)) || any(z < 0) || any(z > 1)) stop("improper specification of z") if(is.null(prior)) { temp <- .Fortran("msvvv", as.double(data), as.double(z), as.integer(n), as.integer(p), as.integer(G), double(p), double(p * G), double(p * p * G), double(G), double(p * p), PACKAGE = "mclust")[7:9] } else { priorParams <- do.call(prior$functionName, c(list(data = data, G = G, modelName = "VVV"), prior[names(prior) != "functionName"])) temp <- .Fortran("msvvvp", as.double(data), as.double(z), as.integer(n), as.integer(p), as.integer(G), as.double(priorParams$shrinkage), as.double(priorParams$mean), as.double(if(any(priorParams$scale != 0)) chol(priorParams$ scale) else priorParams$scale), as.double(priorParams$dof), double(p), double(p * G), double(p * p * G), double(G), double(p * p), PACKAGE = "mclust")[11:13] } mu <- matrix(temp[[1]], p, G) dimnames(mu) <- list(NULL, as.character(1:G)) cholsigma <- array(temp[[2]], c(p, p, G)) pro <- temp[[3]] WARNING <- NULL if(any(c(mu, cholsigma) > signif(.Machine$double.xmax, 6))) { WARNING <- "cannot compute M-step" if(warn) warning(WARNING) mu[] <- sigma[] <- cholsigma[] <- NA ret <- -1 } else { sigma <- array(apply(cholsigma, 3, unchol, upper = TRUE), c(p,p,G)) ret <- 0 } dimnames(z) <- list(dimnames(data)[[1]], NULL) dimnames(mu) <- list(dimnames(data)[[2]], NULL) dimnames(sigma) <- list(dimnames(data)[[2]], dimnames(data)[[2]], NULL) dimnames(cholsigma) <- list(dimnames(data)[[2]], dimnames(data)[[2]], NULL) variance <- list(modelName = "VVV", d = p, G = G, sigma = sigma, cholsigma= cholsigma) parameters <- list(pro=pro, mean=mu, variance=variance) structure(list(modelName = "VVV", prior = prior, n = n, d = p, G = G, z = z, parameters = parameters), WARNING = WARNING, returnCode = ret) } simVVV <- function(parameters, n, seed = NULL, ...) 
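## Simulate from a fitted VVV model: labels are drawn from 'pro' and each row of
## component k is generated as N(0, I) %*% R_k + mu_k, where R_k is the
## upper-triangular Cholesky factor of Sigma_k; either 'cholsigma' or 'sigma'
## may be supplied in parameters$variance.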
{ if(!is.null(seed)) set.seed(seed) mu <- as.matrix(parameters$mean) d <- nrow(mu) G <- ncol(mu) if(any(is.na(parameters[c("mean", "variance")])) || any(is.null(parameters[c("mean", "variance")]))) { warning("parameters are missing") return(structure(matrix(as.double(NA), n, d + 1), modelName = "VVV")) } pro <- parameters$pro if(is.null(pro)) pro <- rep(1/G, G) clabels <- sample(1:G, size = n, replace = TRUE, prob = pro) ctabel <- tabulate(clabels, nbins=G) x <- matrix(0, n, d) if(is.null(cholsigma <- parameters$variance$cholsigma)) { if(is.null(sigma <- parameters$variance$sigma)) { stop("variance parameters must inlcude either sigma or cholsigma" ) } cholsigma <- apply(sigma, 3, chol) for(k in 1:ncol(cholsigma)) sigma[, , k] <- cholsigma[, k] cholsigma <- sigma } if(dim(cholsigma)[3] != G) stop("variance incompatible with mean") for(k in 1:G) { m <- ctabel[k] x[clabels == k,] <- sweep(matrix(rnorm(m * d), nrow = m, ncol = d) %*% cholsigma[,,k], MARGIN = 2, STATS = mu[,k], FUN = "+") } dimnames(x) <- list(NULL, paste0("x", 1:d)) structure(cbind(group = clabels, x), modelName = "VVV") } # single component univariate case mvnX <- function(data, prior = NULL, warn = NULL, ...) { if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- (is.null(dimdat) || NCOL(data) == 1) if(!oneD) stop("data must be one dimensional") data <- as.vector(data) n <- length(data) if(is.null(prior)) { temp <- .Fortran("mvn1d", as.double(data), as.integer(n), double(1), double(1), double(1), PACKAGE = "mclust")[3:5] logpost <- NULL } else { priorParams <- do.call(prior$functionName, c(list(data = data, G = 1, modelName = "X"), prior[names(prior) != "functionName"])) temp <- .Fortran("mvn1p", as.double(data), as.integer(n), as.double(priorParams$shrinkage), as.double(priorParams$mean), as.double(priorParams$scale), as.double(priorParams$dof), double(1), double(1), double(1), PACKAGE = "mclust")[c(7:9, 6)] logpost <- temp[[4]] } mu <- temp[[1]] sigmasq <- temp[[2]] loglik <- temp[[3]] ret <- 0 WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "sigma-squared vanishes" if(warn) warning(WARNING) loglik <- NA ret <- -1 } variance = list(modelName= "X", d = 1, G = 1, sigmasq = sigmasq) parameters <- list(pro = 1, mean = mu, variance = variance) structure(list(modelName = "X", prior = prior, n = n, d = 1, G = 1, parameters = parameters, loglik = loglik), WARNING = WARNING, returnCode = ret) } cdensX <- function(data, logarithm = FALSE, parameters, warn = NULL, ...) { call <- match.call() mc <- match.call(expand.dots = FALSE) mc[[1]] <- as.name("cdensE") z <- eval(mc, parent.frame()) attr(z, "modelName") <- "X" return(z) } emX <- function(data, prior = NULL, warn = NULL, ...) { mvnX(data = data, prior = prior, warn = warn, ...) } meX <- emX # single component multivariate case with diagonal common variance mvnXII <- function(data, prior = NULL, warn = NULL, ...) 
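## Single-component (G = 1) fits: mvnX above handles univariate data, while
## mvnXII (spherical, Sigma = sigmasq * I), mvnXXI (diagonal) and mvnXXX
## (general covariance) below handle multivariate data; the emX*/meX* wrappers
## simply call them. Illustrative, not run here:
##   mvnXII(faithful)$parameters$variance$sigmasq   # common spherical variance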
{ if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- (is.null(dimdat) || NCOL(data) == 1) if(oneD) stop("for multidimensional data only") if(length(dimdat) != 2) stop("data must be a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) if(is.null(prior)) { temp <- .Fortran("mvnxii", as.double(data), as.integer(n), as.integer(p), double(p), double(1), double(1), PACKAGE = "mclust")[4:6] logpost <- NULL } else { priorParams <- do.call(prior$functionName, c(list(data = data, G = 1, modelName = "XII"), prior[names(prior) != "functionName"])) temp <- .Fortran("mnxiip", as.double(data), as.integer(n), as.integer(p), as.double(priorParams$shrinkage), as.double(priorParams$mean), as.double(priorParams$scale), as.double(priorParams$dof), double(p), double(1), double(1), PACKAGE = "mclust")[c(8:10, 7)] logpost <- temp[[4]] } mu <- temp[[1]] sigmasq <- temp[[2]] loglik <- temp[[3]] Sigma <- sigmasq * diag(p) ret <- 0 WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "singular covariance" if(warn) warning(WARNING) loglik <- NA ret <- -1 } variance <- list(modelName = "XII", d = p, G = 1, sigmasq = sigmasq, Sigma = Sigma, sigma = array(Sigma, c(p, p, 1)), scale = sigmasq) parameters <- list(pro = 1, mean = matrix(mu, ncol = 1), variance = variance) structure(list(modelName = "XII", prior = prior, n = n, d = p, G = 1, parameters = parameters, loglik = loglik), WARNING = WARNING, returnCode = ret) } cdensXII <- function(data, logarithm = FALSE, parameters, warn = NULL, ...) { call <- match.call() mc <- match.call(expand.dots = FALSE) mc[[1]] <- as.name("cdensEII") z <- eval(mc, parent.frame()) attr(z, "modelName") <- "XII" return(z) } emXII <- function(data, prior = NULL, warn = NULL, ...) { mvnXII(data = data, prior = prior, warn = warn, ...) } meXII <- emXII # single component multivariate case with diagonal different variances mvnXXI <- function(data, prior = NULL, warn = NULL, ...) { if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- (is.null(dimdat) || NCOL(data) == 1) if(oneD) stop("for multidimensional data only") if(length(dimdat) != 2) stop("data must be a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) if(is.null(prior)) { temp <- .Fortran("mvnxxi", as.double(data), as.integer(n), as.integer(p), double(p), double(1), double(p), double(1), PACKAGE = "mclust")[4:7] logpost <- NULL } else { priorParams <- do.call(prior$functionName, c(list(data = data, G = 1, modelName = "XXI"), prior[names(prior) != "functionName"])) temp <- .Fortran("mnxxip", as.double(data), as.integer(n), as.integer(p), as.double(priorParams$shrinkage), as.double(priorParams$mean), as.double(priorParams$scale), as.double(priorParams$dof), double(p), double(1), double(p), double(1), PACKAGE = "mclust")[c(8:11, 7)] logpost <- temp[[5]] } mu <- temp[[1]] scale <- temp[[2]] shape <- temp[[3]] loglik <- temp[[4]] Sigma <- diag(scale * shape) ret <- 0 WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "singular covariance" if(warn) warning(WARNING) loglik <- NA ret <- -1 } variance <- list(modelName = "XXI", d = p, G = 1, Sigma = Sigma, sigma = array(Sigma, c(p, p, 1)), scale = scale, shape = shape) parameters <- list(pro = 1, mean = matrix(mu, ncol = 1), variance = variance) structure(list(modelName = "XXI", prior = prior, n = n, d = p, G = 1, parameters = parameters, loglik = loglik), WARNING = WARNING, returnCode = ret) } cdensXXI <- function(data, logarithm = FALSE, parameters, warn = NULL, ...) 
{ call <- match.call() mc <- match.call(expand.dots = FALSE) mc[[1]] <- as.name("cdensEEI") z <- eval(mc, parent.frame()) attr(z, "modelName") <- "XXI" return(z) } emXXI <- function(data, prior = NULL, warn = NULL, ...) { mvnXXI(data = data, prior = prior, warn = warn, ...) } meXXI <- emXXI # single component multivariate case with full covariance matrix mvnXXX <- function(data, prior = NULL, warn = NULL, ...) { if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- (is.null(dimdat) || NCOL(data) == 1) if(oneD) stop("for multidimensional data only") if(length(dimdat) != 2) stop("data must be a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) if(is.null(prior)) { temp <- .Fortran("mvnxxx", as.double(data), as.integer(n), as.integer(p), double(p), double(p * p), double(1), PACKAGE = "mclust")[c(4:6)] logpost <- NULL } else { priorParams <- do.call(prior$functionName, c(list(data = data, G = 1, modelName = "XXX"), prior[names(prior) != "functionName"])) temp <- .Fortran("mnxxxp", as.double(data), as.integer(n), as.integer(p), double(p), as.double(priorParams$shrinkage), as.double(priorParams$mean), as.double(if(any(priorParams$scale != 0)) chol(priorParams$scale) else priorParams$scale), as.double(priorParams$dof), double(p), double(p * p), double(1), PACKAGE = "mclust")[c(9:11, 8)] logpost <- temp[[4]] } mu <- temp[[1]] cholSigma <- matrix(temp[[2]], p, p) Sigma <- unchol(cholSigma, upper = TRUE) loglik <- temp[[3]] ## Sigma = t(cholSigma) %*% cholSigma ret <- 0 WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "singular covariance" if(warn) warning(WARNING) loglik <- NA ret <- -1 } variance <- list(modelName = "XXX", d = p, G = 1, Sigma = Sigma, cholSigma = cholSigma, cholsigma = cholSigma, sigma = array(Sigma, c(p, p, 1))) parameters <- list(pro = 1, mean = matrix(mu, ncol = 1), variance = variance) structure(list(modelName = "XXX", prior = prior, n = n, d = p, G = 1, parameters = parameters, loglik = loglik), WARNING = WARNING, returnCode = ret) } cdensXXX <- function(data, logarithm = FALSE, parameters, warn = NULL, ...) { call <- match.call() mc <- match.call(expand.dots = FALSE) mc[[1]] <- as.name("cdensEEE") z <- eval(mc, parent.frame()) attr(z, "modelName") <- "XXX" return(z) } emXXX <- function(data, prior = NULL, warn = NULL, ...) { mvnXXX(data = data, prior = prior, warn = warn, ...) 
} meXXX <- emXXX mclust/R/mclustdr.R0000644000176200001440000011546013455320710013751 0ustar liggesusers###################################################### ## ## ## Dimension reduction for model-based ## ## clustering and classification ## ## ## ## Author: Luca Scrucca ## ###################################################### # GMMDR dimension reduction ----------------------------------------------- MclustDR <- function(object, lambda = 0.5, normalized = TRUE, Sigma, tol = sqrt(.Machine$double.eps)) { # Dimension reduction for model-based clustering and classification call <- match.call() if(!any(class(object) %in% c("Mclust", "MclustDA"))) stop("object must be of class 'Mclust' or 'MclustDA'") x <- data.matrix(object$data) n <- nrow(x) p <- ncol(x) lambda <- pmax(0, min(lambda, 1)) #----------------------------------------------------------------- # overall parameters mu <- colMeans(x) if(missing(Sigma)) Sigma <- var(x)*(n-1)/n # within-cluster parameters based on fitted mixture model if(inherits(object, "Mclust")) { type <- "Mclust" G <- object$G modelName <- object$modelName y <- object$classification cl2mc <- seq(G) class <- as.factor(y) par <- object$parameters f <- par$pro if(is.null(f)) f <- 1 if(!is.na(object$hypvol)) f <- f[-length(f)] # within-group means mu.G <- matrix(par$mean,p,G) # within-group covars if(p == 1) { Sigma.G <- array(par$variance$sigmasq, c(p,p,G)) } else { Sigma.G <- par$variance$sigma } } else if(inherits(object, "MclustDA")) { type <- object$type modelName <- sapply(object$models, function(m) m$modelName) class <- object$class class <- factor(class, levels = names(object$models)) y <- rep(NA, length(class)) for(i in 1:nlevels(class)) { y[class == levels(class)[i]] <- paste(levels(class)[i], object$models[[i]]$classification, sep =":") } y <- as.numeric(factor(y)) cl2mc <- rep(seq(length(object$models)), sapply(object$models, function(m) m$G)) m <- sapply(object$models, function(mod) mod$n) ncomp <- sapply(object$models, function(mod) mod$G) G <- sum(ncomp) f <- vector(length = G) mu.G <- matrix(as.double(NA), nrow = p, ncol = G) Sigma.G <- array(NA, dim = c(p,p,G)) for(i in 1:length(object$models)) { ii <- seq(c(0,cumsum(ncomp))[i]+1,c(0,cumsum(ncomp))[i+1]) par <- object$models[[i]]$parameters if(is.null(par$pro)) par$pro <- 1 f[ii] <- par$pro * m[i]/sum(m) # within-group means mu.G[,ii] <- par$mean # within-group covars if(p == 1) { Sigma.G[,,ii] <- array(par$variance$sigmasq, c(p,p,1)) } else { Sigma.G[,,ii] <- par$variance$sigma } } } #----------------------------------------------------------------- SVD <- svd(Sigma, nu = 0, nv = min(n,p)) pos <- which(SVD$d > max(tol*SVD$d[1], 0)) SVD$d <- SVD$d[pos] SVD$v <- SVD$v[,pos,drop=FALSE] inv.Sigma <- SVD$v %*% (1/SVD$d * t(SVD$v)) inv.sqrt.Sigma <- SVD$v %*% (1/sqrt(SVD$d) * t(SVD$v)) #----------------------------------------------------------------- # pooled within-group covariance S <- matrix(0, p, p) for(j in seq_len(G)) S <- S + f[j]*Sigma.G[,,j] #----------------------------------------------------------------- # kernel matrix M.I <- crossprod(t(sweep(mu.G, 1, FUN="-", STATS=mu))*sqrt(f)) M.II <- matrix(0, p, p) if(lambda < 1) { for(j in seq_len(G)) M.II <- M.II + f[j]*crossprod(inv.sqrt.Sigma%*%(Sigma.G[,,j]-S)) } # convex combination of M_I and M_II M <- 2*lambda*crossprod(inv.sqrt.Sigma %*% M.I) + 2*(1-lambda)*M.II # regularize the M_II # M <- M.I + lambda*M.II # M <- crossprod(inv.sqrt.Sigma %*% M.I) + # (1-lambda)*M.II + lambda/p * diag(p) # SVD <- eigen.decomp(M, inv.sqrt.Sigma, invsqrt = TRUE) 
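## The GMMDR directions are the generalized eigenvectors of the kernel matrix M
## with respect to the marginal covariance Sigma (Scrucca, 2010); the eigenvalues
## 'l' extracted below rank the directions by the amount of group separation they
## capture. Illustrative use of the exported interface, not run here:
##   mod <- Mclust(iris[, 1:4], G = 3)
##   dr  <- MclustDR(mod, lambda = 0.5)
##   summary(dr)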
l <- SVD$l; l <- (l+abs(l))/2 numdir <- min(p, sum(l > sqrt(.Machine$double.eps))) basis <- as.matrix(SVD$v)[,seq(numdir),drop=FALSE] sdx <- diag(Sigma) std.basis <- as.matrix(apply(basis, 2, function(x) x*sdx)) if(normalized) { basis <- as.matrix(apply(basis, 2, normalize)) std.basis <- as.matrix(apply(std.basis, 2, normalize)) } dimnames(basis) <- list(colnames(x), paste("Dir", 1:ncol(basis), sep="")) dimnames(std.basis) <- dimnames(basis) Z <- scale(x, scale = FALSE) %*% basis # out = list(call = call, type = type, x = x, Sigma = Sigma, classification = class, mixcomp = y, class2mixcomp = cl2mc, G = G, modelName = modelName, mu = mu.G, sigma = Sigma.G, pro = f, M = M, M.I = M.I, M.II = M.II, lambda = lambda, evalues = l, raw.evectors = as.matrix(SVD$v), basis = basis, std.basis = std.basis, numdir = numdir, dir = Z) class(out) = "MclustDR" return(out) } print.MclustDR <- function(x, digits = getOption("digits"), ...) { txt <- paste0("\'", class(x)[1], "\' model object: ") catwrap(txt) cat("\n") catwrap("\nAvailable components:\n") print(names(x)) # str(x, max.level = 1, give.attr = FALSE, strict.width = "wrap") invisible(x) } summary.MclustDR <- function(object, numdir, std = FALSE, ...) { if(missing(numdir)) numdir <- object$numdir dim <- seq(numdir) if(object$type == "Mclust") { n <- as.vector(table(object$classification)) G <- object$G } else { n <- as.vector(table(object$classification)) G <- as.vector(table(object$class2mixcomp)) } obj <- list(type = object$type, modelName = object$modelName, classes = levels(object$classification), n = n, G = G, basis = object$basis[,seq(dim),drop=FALSE], std = std, std.basis = object$std.basis[,seq(dim),drop=FALSE], evalues = object$evalues[seq(dim)], evalues.cumperc = with(object, { evalues <- evalues[seq(numdir)] cumsum(evalues)/sum(evalues)*100 }) ) class(obj) <- "summary.MclustDR" return(obj) } print.summary.MclustDR <- function(x, digits = max(5, getOption("digits") - 3), ...) { title <- paste("Dimension reduction for model-based clustering and classification") txt <- paste(rep("-", min(nchar(title), getOption("width"))), collapse = "") catwrap(txt) catwrap(title) catwrap(txt) if(x$type == "Mclust") { tab <- data.frame(n = x$n) rownames(tab) <- x$classes tab <- as.matrix(tab) names(dimnames(tab)) <- c("Clusters", "") cat("\n") catwrap(paste0("Mixture model type: ", x$type, " (", x$modelName, ", ", x$G, ")")) print(tab, quote = FALSE, right = TRUE) } else if(x$type == "MclustDA" | x$type == "EDDA") { tab <- data.frame(n = x$n, Model = x$modelName, G = x$G) rownames(tab) <- x$classes tab <- as.matrix(tab) names(dimnames(tab)) <- c("Classes", "") cat("\n") catwrap(paste("Mixture model type:", x$type)) print(tab, quote = FALSE, right = TRUE) } else stop("invalid model type") cat("\n") if(x$std) { catwrap("Standardized basis vectors using predictors scaled to have std.dev. equal to one:") print(x$std.basis, digits = digits) } else { catwrap("Estimated basis vectors:") print(x$basis, digits = digits) } cat("\n") evalues <- rbind("Eigenvalues" = x$evalues, "Cum. 
%" = x$evalues.cumperc) colnames(evalues) <- colnames(x$basis) print(evalues, digits=digits) invisible() } projpar.MclustDR <- function(object, dim, center = TRUE, raw = FALSE) { # Transform estimated parameters to projection subspace given by # 'dim' directions x <- object$x p <- ncol(x) n <- nrow(x) G <- object$G numdir <- object$numdir if(missing(dim)) dim <- seq(numdir) numdir <- length(dim) if(raw) V <- object$raw.evectors[,dim,drop=FALSE] else V <- object$basis[,dim,drop=FALSE] # mu <- t(object$mu) if(center) mu <- scale(mu, center = apply(x,2,mean), scale = FALSE) Mu <- mu %*% V # sigma <- object$sigma cho <- array(apply(sigma, 3, chol), c(p, p, G)) Sigma <- array(apply(cho, 3, function(R) crossprod(R %*% V)), c(numdir, numdir, G)) # return(list(mean = Mu, variance = Sigma)) } predict.MclustDR <- function(object, dim = 1:object$numdir, newdata, eval.points, ...) { dim <- dim[dim <= object$numdir] if(missing(newdata) & missing(eval.points)) { dir <- object$dir[,dim,drop=FALSE] } else if(!missing(newdata)) { newdata <- as.matrix(newdata) newdata <- scale(newdata, center = colMeans(object$x), scale = FALSE) dir <- newdata %*% object$basis[,dim,drop=FALSE] } else if(!missing(eval.points)) { dir <- as.matrix(eval.points) } n <- nrow(dir) G <- object$G # num. components nclass <- nlevels(object$classification) # num. classes par <- projpar.MclustDR(object, dim) Mu <- par$mean Sigma <- par$variance # old version # cden <- array(NA, c(n, G)) # for(j in 1:G) # { cden[,j] <- mvdnorm(dir, Mu[j,], Sigma[,,j], log = FALSE) } # z <- sweep(cden, 2, FUN = "*", STATS = object$pro) # den <- apply(z, 1, sum) # z <- sweep(z, 1, FUN = "/", STATS = den) # new version: more efficient and accurate z <- array(NA, c(n, G)) for(j in 1:G) { z[,j] <- mvdnorm(dir, Mu[j,], Sigma[,,j], log = TRUE) } z <- sweep(z, 2, FUN = "+", STATS = log(object$pro)) logden <- apply(z, 1, logsumexp) z <- sweep(z, 1, FUN = "-", STATS = logden) z <- exp(z) # zz <- matrix(0, n, nclass) for(j in seq(nclass)) { zz[,j] <- rowSums(z[,object$class2mixcomp == j,drop=FALSE]) } z <- zz; rm(zz) class <- factor(apply(z,1,which.max), levels = 1:nclass, labels = levels(object$classification)) out <- list(dir = dir, density = exp(logden), z = z, uncertainty = 1 - apply(z,1,max), classification = class) return(out) } predict2D.MclustDR <- function(object, dim = 1:2, ngrid = 100, xlim, ylim) { dim <- dim[1:2] dir <- object$dir[,dim,drop=FALSE] G <- object$G par <- projpar.MclustDR(object, dim) Mu <- par$mean Sigma <- par$variance if(missing(xlim)) xlim <- range(dir[,1]) # +c(-1,1)*0.05*diff(range(x))) if(missing(ylim)) ylim <- range(dir[,2]) # +c(-1,1)*0.05*diff(range(x))) xygrid <- cbind(seq(xlim[1], xlim[2], length = ngrid), seq(ylim[1], ylim[2], length = ngrid)) grid <- expand.grid(xygrid[,1], xygrid[,2]) pred <- predict.MclustDR(object, dim = dim, eval.points = grid) out <- list(x = xygrid[,1], y = xygrid[,2], density = matrix(pred$density, ngrid, ngrid), z = array(pred$z, c(ngrid, ngrid, ncol(pred$z))), uncertainty = matrix(pred$uncertainty, ngrid, ngrid), classification = matrix(pred$classification, ngrid, ngrid)) return(out) } plot.MclustDR <- function(x, dimens, what = c("scatterplot", "pairs", "contour", "classification", "boundaries", "density", "evalues"), symbols, colors, col.contour = gray(0.7), col.sep = grey(0.4), ngrid = 200, nlevels = 5, asp = NULL, ...) 
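## Illustrative usage sketch for the functions above (assumes only the base R
## 'iris' data; not taken from the package documentation):
##   mod <- Mclust(iris[, 1:4], G = 3)
##   dr  <- MclustDR(mod, lambda = 0.5)
##   summary(dr)                    # estimated basis vectors and eigenvalues
##   pr  <- predict(dr)             # dir, density, z, uncertainty, classification
##   plot(dr, what = "scatterplot") # data projected onto the estimated directions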
{ object <- x x <- object$x p <- ncol(x) n <- nrow(x) G <- object$G y <- object$mixcomp class <- as.numeric(object$classification) nclass <- length(table(class)) dir <- object$dir numdir <- object$numdir dimens <- if(missing(dimens)) seq(numdir) else intersect(as.numeric(dimens), seq(numdir)) if(missing(symbols)) { if(G <= length(mclust.options("classPlotSymbols"))) { symbols <- mclust.options("classPlotSymbols") } else if(G <= 26) { symbols <- LETTERS } } if(length(symbols) == 1) symbols <- rep(symbols,nclass) if(length(symbols) < nclass) { warning("more symbols needed to show classification") symbols <- rep(16, nclass) } if(missing(colors)) { colors <- mclust.options("classPlotColors") } if(length(colors) == 1) colors <- rep(colors,nclass) if(length(colors) < nclass) { warning("more colors needed to show mixture components") colors <- rep("black", nclass) } niceRange <- function (x, f = 0.04) { r <- range(x) d <- diff(r) out <- c(r[1] - d*f, r[2] + d*f) return(out) } #################################################################### what <- match.arg(what, several.ok = TRUE) oldpar <- par(no.readonly = TRUE) # on.exit(par(oldpar)) if(any(i <- (what == "pairs")) & (length(dimens) == 2)) { what[i] <- "scatterplot" } if(length(dimens) == 1) { what[!(what == "density" | what == "evalues")] <- "density" } what <- unique(what) plot.MclustDR.scatterplot <- function(...) { dir <- dir[,dimens,drop=FALSE] plot(dir, col = colors[class], pch = symbols[class], xlab = colnames(dir)[1], ylab = colnames(dir)[2], asp = asp, ...) } plot.MclustDR.pairs <- function(...) { dir <- dir[,dimens,drop=FALSE] pairs(dir, col = colors[class], pch = symbols[class], gap = 0.2, asp = asp, ...) } plot.MclustDR.density <- function(...) { dimens <- dimens[1] dir <- object$dir[,dimens,drop=FALSE] par <- projpar.MclustDR(object, dimens) Mu <- par$mean Sigma <- par$variance q <- seq(min(dir), max(dir), length=2*ngrid) dens <- matrix(as.double(NA), length(q), G) for(j in 1:G) dens[,j] <- dnorm(q, Mu[j,], sqrt(Sigma[,,j])) # if(object$type == "MclustDA") { d <- t(apply(dens, 1, function(x, p = object$pro) p*x)) dens <- matrix(as.double(NA), length(q), nclass) tab <- table(y, class) for(i in 1:ncol(tab)) { j <- which(tab[,i] > 0) dens[,i] <- apply(d[,j,drop=FALSE],1,sum) } } # oldpar <- par(mar = c(0,5.1,1,1), mfrow = par("mfrow"), no.readonly = TRUE) on.exit(par(oldpar)) layout(matrix(1:2,2,1), heights = c(2,1)) plot(0, 0, type = "n", xlab = colnames(dir), ylab = "Density", xlim = range(q, dir), ylim = range(0, dens*1.1), xaxt = "n") for(j in 1:ncol(dens)) lines(q, dens[,j], col = colors[j]) dir.class <- split(dir, class) par(mar = c(4.1,5.1,0,1)) boxplot(dir.class, col = adjustcolor(colors[1:nclass], alpha.f = 0.3), border = colors[1:nclass], horizontal = TRUE, pars = list(boxwex = 0.6, staplewex = 0.8, medlwd = 2, whisklty = 3, outlty = 1, outpch = NA), ylim = range(q,dir), yaxt = "n", xlab = colnames(dir)) axis(2, at = 1:nclass, labels = levels(object$classification), tick = FALSE, cex = 0.8, las = 2) } plot.MclustDR.contour <- function(...) 
{ dimens <- dimens[1:2] dir <- object$dir[,dimens,drop=FALSE] par <- projpar.MclustDR(object, dimens) Mu <- par$mean Sigma <- par$variance # draw contours for each class or cluster plot(dir, type = "n", asp = asp) for(k in seq(nclass)) { i <- which(object$class2mixcomp == k) parameters <- list(pro = object$pro[i]/sum(object$pro[i]), mean = t(par$mean[i,,drop=FALSE]), variance = list(G = length(i), d = 2, sigma = par$variance[,,i,drop=FALSE])) surfacePlot(dir, parameters, col = col.contour, nlevels = nlevels, grid = ngrid, xlim = par("usr")[1:2], ylim = par("usr")[3:4], asp = asp, add = TRUE) } points(dir, col = colors[class], pch = symbols[class], ...) } plot.MclustDR.classification.Mclust <- function(...) { dimens <- dimens[1:2] dir <- object$dir[,dimens,drop=FALSE] pred <- predict2D.MclustDR(object, dimens, ngrid, xlim = niceRange(dir[,1]), ylim = niceRange(dir[,2])) pred$classification <- apply(pred$z, 1:2, which.max) image(pred$x, pred$y, pred$classification, col = adjustcolor(colors[1:G], alpha.f = 0.1), xlab = colnames(dir)[1], ylab = colnames(dir)[2], xaxs = "i", yaxs = "i", asp = asp) points(dir, col = colors[class], pch = symbols[class], ...) } plot.MclustDR.classification.MclustDA <- function(...) { dimens <- dimens[1:2] dir <- object$dir[,dimens,drop=FALSE] pred <- predict2D.MclustDR(object, dimens, ngrid, xlim = niceRange(dir[,1]), ylim = niceRange(dir[,2])) pred$classification <- apply(pred$z, 1:2, which.max) image(pred$x, pred$y, pred$classification, col = adjustcolor(colors[1:nclass], alpha.f = 0.1), xlab = colnames(dir)[1], ylab = colnames(dir)[2], xaxs = "i", yaxs = "i", asp = asp) points(dir, col = colors[class], pch = symbols[class], ...) } plot.MclustDR.boundaries.Mclust <- function(...) { dimens <- dimens[1:2] dir <- object$dir[,dimens,drop=FALSE] pred <- predict2D.MclustDR(object, dimens, ngrid, xlim = niceRange(dir[,1]), ylim = niceRange(dir[,2])) image(pred$x, pred$y, pred$uncertainty, col = rev(gray.colors(10, start = 0, end = 1)), breaks = seq(0, 1-1/nclass, length = 11), xlab = colnames(dir)[1], ylab = colnames(dir)[2], xaxs = "i", yaxs = "i", asp = asp) points(dir, col = colors[class], pch = symbols[class], ...) } plot.MclustDR.boundaries.MclustDA <- function(...) { dimens <- dimens[1:2] dir <- object$dir[,dimens,drop=FALSE] pred <- predict2D.MclustDR(object, dimens, ngrid, xlim = niceRange(dir[,1]), ylim = niceRange(dir[,2])) levels <- seq(0, 1-1/nclass, length = 11) col <- rev(gray.colors(10, start = 0, end = 1)) image(pred$x, pred$y, pred$uncertainty, col = col, breaks = levels, xlab = colnames(dir)[1], ylab = colnames(dir)[2], xaxs = "i", yaxs = "i", asp = asp) points(dir, col = colors[class], pch = symbols[class], ...) } plot.MclustDR.evalues <- function(...) { plotEvalues.MclustDR(object, numdir = max(dimens), plot = TRUE) } if(interactive() & length(what) > 1) { title <- "Dimension reduction for model-based clustering and classification plots:" # present menu waiting user choice choice <- menu(what, graphics = FALSE, title = title) while(choice != 0) { if(what[choice] == "scatterplot") plot.MclustDR.scatterplot(...) if(what[choice] == "pairs") plot.MclustDR.pairs(...) if(what[choice] == "contour") plot.MclustDR.contour(...) if(what[choice] == "classification" & object$type == "Mclust") plot.MclustDR.classification.Mclust(...) if(what[choice] == "classification" & (object$type == "EDDA" | object$type == "MclustDA")) plot.MclustDR.classification.MclustDA(...) if(what[choice] == "boundaries" & object$type == "Mclust") plot.MclustDR.boundaries.Mclust(...) 
if(what[choice] == "boundaries" & (object$type == "EDDA" | object$type == "MclustDA")) plot.MclustDR.boundaries.MclustDA(...) if(what[choice] == "density") plot.MclustDR.density(...) if(what[choice] == "evalues") plot.MclustDR.evalues(...) # re-present menu waiting user choice choice <- menu(what, graphics = FALSE, title = title) } } else { if(any(what == "scatterplot")) plot.MclustDR.scatterplot(...) if(any(what == "pairs")) plot.MclustDR.pairs(...) if(any(what == "contour")) plot.MclustDR.contour(...) if(any(what == "classification" & object$type == "Mclust")) plot.MclustDR.classification.Mclust(...) if(any(what == "classification" & (object$type == "EDDA" | object$type == "MclustDA"))) plot.MclustDR.classification.MclustDA(...) if(any(what == "boundaries" & object$type == "Mclust")) plot.MclustDR.boundaries.Mclust(...) if(any(what == "boundaries" & (object$type == "EDDA" | object$type == "MclustDA"))) plot.MclustDR.boundaries.MclustDA(...) if(any(what == "density")) plot.MclustDR.density(...) if(any(what == "evalues")) plot.MclustDR.evalues(...) } invisible() } plotEvalues.MclustDR <- function(x, numdir, plot = FALSE, legend = TRUE, ylim, ...) { object <- x G <- object$G f <- object$pro lambda <- object$lambda # dim <- if(missing(numdir)) seq(object$numdir) else seq(numdir) if(missing(numdir)) numdir <- object$numdir dim <- seq(numdir) d <- length(dim) par <- projpar.MclustDR(object, dim = dim, center = TRUE, raw = TRUE) mu <- par$mean Sigma.G <- par$variance # M1 <- t(mu) %*% diag(f) %*% mu l1 <- 2*lambda*diag(crossprod(M1)) # S <- matrix(0, d, d) for(j in seq(G)) S <- S + f[j]*Sigma.G[,,j] M2 <- matrix(0, d, d) for(j in 1:G) { C <- (Sigma.G[,,j]-S) M2 <- M2 + f[j] * tcrossprod(C) } l2 <- 2*(1-lambda)*diag(M2) # l <- object$evalues[dim] # if(plot) { if(missing(ylim)) ylim <- range(0, max(l)+diff(range(l))*0.05) plot(dim, l, type="b", lty = 1, pch = 16, cex = 1.5, xaxt = "n", ylim = ylim, xlab = "MclustDR directions", ylab = "Eigenvalues", panel.first = { abline(v = dim, col = "lightgray", lty = "dotted") abline(h = axTicks(2,par("yaxp")), col = "lightgray", lty = "dotted") } ) axis(1, at = dim, labels = dim) lines(dim, l1, type="b", lty = 2, pch = 22, cex = 1.5) lines(dim, l2, type="b", lty = 2, pch = 2, cex = 1.5) if(legend) { legend("topright", lty = c(1,2,2), pch = c(16,22,2), legend = c("Eigenvalues", "Means contrib.", "Vars contrib."), bg = ifelse(par("bg")=="transparent", "white", par("bg")), inset = 0.01, pt.cex = 1.5) } } out <- list(dim = dim, evalues = l, mean.contrib = l1, var.contrib = l2) if(plot) invisible(out) else return(out) } # Auxiliary functions ----------------------------------------------------- mvdnorm <- function(x, mu, sigma, log = FALSE, tol = sqrt(.Machine$double.eps)) { if(is.vector(x)) { x <- matrix(x, ncol = length(x)) } else { x <- as.matrix(x) } SVD <- svd(sigma) pos <- (SVD$d > max(tol*SVD$d[1], 0)) # in case of not full rank covar matrix inv.sigma <- SVD$v[,pos,drop=FALSE] %*% (1/SVD$d[pos] * t(SVD$u[,pos,drop=FALSE])) z <- mahalanobis(x, center = mu, cov = inv.sigma, inverted = TRUE) # logdet <- sum(log(eigen(sigma, symmetric = TRUE, only.values = TRUE)$values)) logdet <- sum(log(SVD$d[pos])) logdens <- -(ncol(x) * log(2 * pi) + logdet + z)/2 if(log) return(logdens) else return(exp(logdens)) } ellipse <- function(c, M, r, npoints = 100) { # Returns the cartesian coordinates of points x on the ellipse # (x-c)'M(x-c) = r^2, # where x = x(theta) and theta varies from 0 to 2*pi radians in npoints steps. 
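  # Illustrative check for ellipse() (a sketch, not from the package docs):
  # with c = c(0,0), M = diag(2) and r = 1 the returned points lie on the
  # unit circle, e.g.
  #   xy <- ellipse(c = c(0,0), M = diag(2), r = 1)
  #   range(rowSums(xy^2))   # approximately 1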
  # local functions
  circle <- function(theta, r) r*c(cos(theta), sin(theta))
  ellip  <- function(theta, r, lambda) lambda*circle(theta, r)
  point  <- function(theta) c + c(gamma %*% ellip(theta, r, lam))
  #
  SVD <- svd(M)
  lam <- 1/sqrt(SVD$d)
  gamma <- SVD$v
  coord <- t(sapply(seq(0, 2*pi, length = npoints), function(th) point(th)))
  return(coord)
}

eigen.decomp <- function(A, B, invsqrt = FALSE)
{
#
# Generalized eigenvalue decomposition of A with respect to B.
#
# A generalized eigenvalue problem AV = BLV is said to be symmetric positive
# definite if A is symmetric and B is positive definite. V is the matrix of
# generalized eigenvectors, and L is the diagonal matrix of generalized
# eigenvalues (Stewart, 2001, pp. 229-230).
#
# Properties:
# V'AV = L
# V'BV = I
#
# The algorithm implemented is described in Stewart (2001, p. 234) and used
# by Li (2000).
#
# References:
# Li, K.C., 2000. High dimensional data analysis via the SIR-PHD approach.
# Stewart, G.W., 2001. Matrix Algorithms: Vol. II Eigensystems, SIAM.

  if(!invsqrt)
    { SVD <- svd(B, nu = 0)
      # in case the covariance matrix is not of full rank
      tol <- .Machine$double.eps
      pos <- which(SVD$d > max(tol*SVD$d[1], 0))
      SVD$d <- SVD$d[pos]
      SVD$v <- SVD$v[,pos,drop=FALSE]
      # Computes the inverse square root matrix such that:
      # t(inv.sqrt.B) %*% inv.sqrt.B = inv.sqrt.B %*% t(inv.sqrt.B) = solve(B)
      inv.sqrt.B <- SVD$v %*% (1/sqrt(SVD$d) * t(SVD$v))
  } else
    { inv.sqrt.B <- B }
  # Compute B^(-1/2)' A B^(-1/2) = UDU'
  # evectors = B^(-1/2) U
  # evalues  = D
  A <- t(inv.sqrt.B) %*% A %*% inv.sqrt.B
  SVD <- svd(A, nu = 0)
  list(l = SVD$d, v = inv.sqrt.B %*% SVD$v)
}

# Subset selection of GMMDR/GMMDRC directions -----------------------------

MclustDRsubsel <- function(object, G = 1:9,
                           modelNames = mclust.options("emModelNames"), ...,
                           bic.stop = 0, bic.cutoff = 0,
                           mindir = 1, verbose = interactive())
{
# Subset selection for GMMDR directions based on Bayes factors.
#
# object     = a MclustDR object
# G          = a vector of cluster sizes for searching
# modelNames = a vector of models for searching
# ...        = further arguments passed through Mclust/MclustDA
# bic.stop   = criterion to stop the search. If the maximal BIC difference is
#              less than bic.stop, the algorithm stops.
#              Two typical values are:
#                 0   = stops when the BIC difference becomes negative (default)
#              -Inf   = stops when all directions have been selected
# bic.cutoff = select the simplest ``best'' model within bic.cutoff from the
#              maximum value achieved. Setting this to 0 (default) simply
#              selects the model with the largest BIC difference.
# mindir     = the minimum number of directions to be estimated
# verbose    = if 0 no trace info is shown; if 1 a trace of each step
#              of the search is printed; if 2 detailed trace info
#              is shown.
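#
# Illustrative usage sketch (assumes the base R 'iris' data; not taken from
# the package documentation):
#   mod <- Mclust(iris[, 1:4])
#   dr  <- MclustDR(mod)
#   drs <- MclustDRsubsel(dr, verbose = TRUE)
#   summary(drs)   # basis coefficients reported on the original variables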
  if(!inherits(object, "MclustDR"))
    stop("Not a 'MclustDR' object")
  hcUse <- mclust.options("hcUse")
  mclust.options("hcUse" = "VARS")
  on.exit(mclust.options("hcUse" = hcUse))
  mc <- match.call(expand.dots = TRUE)
  mc[[1]] <- switch(object$type,
                    "Mclust"   = as.name("MclustDRsubsel_cluster"),
                    "EDDA"     = as.name("MclustDRsubsel_classif"),
                    "MclustDA" = as.name("MclustDRsubsel_classif"),
                    stop("'MclustDR' type not allowed!"))
  eval(mc, parent.frame())
}

MclustDRsubsel_cluster <- function(object, G = 1:9,
                                   modelNames = mclust.options("emModelNames"), ...,
                                   bic.stop = 0, bic.cutoff = 0,
                                   mindir = 1, verbose = interactive())
{
  drmodel <- object
  mclustType <- drmodel$type
  lambda <- drmodel$lambda
  numdir <- drmodel$numdir
  numdir0 <- numdir+1
  dir <- drmodel$dir[,seq(numdir),drop=FALSE]
  mindir <- max(1, as.numeric(mindir), na.rm = TRUE)
  verbose <- as.numeric(verbose)
  ncycle <- 0
  while(numdir < numdir0)
  { ncycle <- ncycle+1
    if(verbose > 0) cat("\nCycle", ncycle, "...\n")
    out <- MclustDRsubsel1cycle(drmodel, G, modelNames,
                                bic.stop = bic.stop, bic.cutoff = bic.cutoff,
                                verbose = if(verbose > 1) TRUE else FALSE)
    if(verbose > 0) { cat("\n"); print(out$tab) }
    mod <- do.call("Mclust", list(data = dir[,out$subset,drop=FALSE], G = G,
                                  modelNames = if(length(out$subset) > 1) modelNames else c("E", "V"),
                                  verbose = FALSE, ...))
    numdir0 <- numdir
    drmodel0 <- MclustDR(mod, lambda = lambda)
    if(drmodel0$numdir < mindir) break
    drmodel <- drmodel0
    numdir <- drmodel$numdir
    dir <- drmodel$dir[,seq(numdir),drop=FALSE]
  }
  # format object using original data
  obj <- drmodel
  obj$basisx <- MclustDRrecoverdir(obj, data = object$x, std = FALSE)
  obj$std.basisx <- MclustDRrecoverdir(obj, data = object$x, std = TRUE)
  class(obj) <- c("MclustDRsubsel", class(obj))
  return(obj)
}

MclustDRsubsel1cycle <- function(object, G = 1:9,
                                 modelNames = mclust.options("emModelNames"),
                                 bic.stop = 0, bic.cutoff = 0,
                                 verbose = interactive())
{
  # Single cycle of subset selection for GMMDR directions based on Bayes factors.
  if(!inherits(object, "MclustDR"))
    stop("Not a 'MclustDR' object")
  d <- object$numdir
  dir <- object$dir[,seq(d),drop=FALSE]
  n <- nrow(dir)
  if(is.null(colnames(dir)))
    colnames(dir) <- paste("[,", 1:d, "]", sep="")
  dir.names <- colnames(dir)
  BIC <- Model1 <- Model2 <- tab <- NULL; Model1$bic <- 0
  bic.stop <- bic.stop + sqrt(.Machine$double.eps)
  bic.cutoff <- bic.cutoff + sqrt(.Machine$double.eps)
  inc <- NULL; excl <- seq(1,d)
  model1D <- if(any(grep("V", modelNames))) c("E", "V") else "E"
  #
  hskip <- paste(rep(" ",4), collapse="")
  if(verbose) cat("\n", hskip, "Start greedy search...\n", sep="")
  while(length(excl) > 0)
  { if(verbose)
    { cat(hskip, rep("-",15), "\n", sep="")
      cat(paste(hskip, "Step", length(inc)+1, "\n")) }
    for(j in excl)
    { # Select simplest model with smallest num.
of components # within bic.cutoff bic <- mclustBIC(dir[,sort(c(inc, j)),drop=FALSE], G = G, modelNames = if(length(inc)>0) modelNames else model1D, verbose = FALSE) bic.tab <- (as.matrix(max(bic, na.rm=TRUE) - bic) < bic.cutoff)*1 bestG <- which(rowSums(bic.tab, na.rm=TRUE) > 0)[1] bestmod <- which(bic.tab[bestG,,drop=FALSE] > 0)[1] out <- data.frame(Variable = dir.names[j], model = colnames(bic.tab)[bestmod], G = G[bestG], bic = c(bic[bestG,bestmod]), bic.diff = c(bic[bestG,bestmod] - Model1$bic - MclustDRBICreg(dir[,j], dir[,inc])) ) # Model2 <- rbind(Model2, out) } if(verbose) print(cbind(" " = hskip, Model2), row.names = FALSE) # stop if max BIC difference is < than cut-off bic.stop if(max(Model2$bic.diff) < bic.stop & length(inc) > 0) { break } # otherwise keep selecting i <- which.max(Model2$bic.diff) inc <- append(inc, excl[i]) excl <- setdiff(excl, excl[i]) tab <- rbind(tab, Model2[i,]) Model1 <- Model2[i,] Model2 <- NULL } rownames(tab) <- 1:nrow(tab) colnames(tab) <- c("Variable", "Model", "G", "BIC", "BIC.dif") subsets <- sapply(1:nrow(tab), function(x) list(inc[1:x])) # return(list(subset = subsets[[length(subsets)]], tab = tab)) } MclustDRsubsel_classif <- function(object, G = 1:9, modelNames = mclust.options("emModelNames"), ..., bic.stop = 0, bic.cutoff = 0, mindir = 1, verbose = interactive()) { drmodel <- object mclustType <- drmodel$type lambda <- drmodel$lambda numdir <- drmodel$numdir numdir0 <- numdir+1 dir <- drmodel$dir[,seq(numdir),drop=FALSE] mindir <- max(1, as.numeric(mindir), na.rm = TRUE) verbose <- as.numeric(verbose) ncycle <- 0 while(numdir < numdir0) { ncycle <- ncycle+1 if(verbose > 0) cat("\nCycle", ncycle, "...\n") out <- MclustDRCsubsel1cycle(drmodel, G, modelNames, bic.stop = bic.stop, bic.cutoff = bic.cutoff, verbose = if(verbose > 1) TRUE else FALSE) if(verbose > 0) { cat("\n"); print(out$tab) } mod <- do.call("MclustDA", list(data = dir[,out$subset,drop=FALSE], class = object$classification, G = G, modelNames = if(length(out$subset) > 1) modelNames else if(any(grep("V", modelNames))) c("E", "V") else "E", modelType = mclustType, verbose = FALSE, ...)) numdir0 <- numdir drmodel0 <- MclustDR(mod, lambda = lambda) if(drmodel0$numdir < mindir) break drmodel <- drmodel0 numdir <- drmodel$numdir dir <- drmodel$dir[,seq(numdir),drop=FALSE] } # format object using original data obj <- drmodel obj$basisx <- MclustDRrecoverdir(obj, data = object$x, std = FALSE) obj$std.basisx <- MclustDRrecoverdir(obj, data = object$x, std = TRUE) class(obj) <- c("MclustDRsubsel", class(obj)) return(obj) } MclustDRCsubsel1cycle <- function(object, G = 1:9, modelNames = mclust.options("emModelNames"), bic.stop = 0, bic.cutoff = 0, verbose = TRUE) { # Single cycle of subset selection for GMMDRC directions based on bayes factors. 
  if(!inherits(object, "MclustDR"))
    stop("Not a 'MclustDR' object")
  d <- object$numdir
  dir <- object$dir[,seq(d),drop=FALSE]
  n <- nrow(dir)
  if(is.null(colnames(dir)))
    colnames(dir) <- paste("[,", seq(d), "]", sep="")
  dir.names <- colnames(dir)
  BIC <- Model1 <- Model2 <- tab <- NULL; Model1$bic <- 0
  bic.stop <- bic.stop + sqrt(.Machine$double.eps)
  bic.cutoff <- bic.cutoff + sqrt(.Machine$double.eps)
  inc <- NULL; excl <- seq(d)
  model1D <- if(any(grep("V", modelNames))) c("E", "V") else "E"
  #
  hskip <- paste(rep(" ",4), collapse="")
  if(verbose) cat("\n", hskip, "Start greedy search...\n", sep="")
  while(length(excl) > 0)
  { if(verbose)
    { cat(hskip, rep("-",15), "\n", sep="")
      cat(paste(hskip, "Step", length(inc)+1, "\n")) }
    for(j in excl)
    { # Select simplest model with smallest num. of components
      # within bic.cutoff
      mod <- MclustDA(dir[,sort(c(inc, j)),drop=FALSE],
                      class = object$classification, G = G,
                      modelNames = if(length(inc) > 0) modelNames else model1D,
                      modelType = object$type, verbose = FALSE)
      out <- data.frame(Variable = dir.names[j],
                        model = paste(sapply(mod$models, function(m) m$modelName), collapse="|"),
                        G = paste(sapply(mod$models, function(m) m$G), collapse="|"),
                        bic = mod$bic,
                        bic.diff = c(mod$bic -
                                     # (Model1$bic + bic.reg(z2, z1))
                                     Model1$bic -
                                     MclustDRBICreg(dir[,j], dir[,inc])))
      #
      Model2 <- rbind(Model2, out)
    }
    if(verbose) print(cbind(" " = hskip, Model2), row.names = FALSE)
    # stop if the max BIC difference is less than the cut-off bic.stop
    if(max(Model2$bic.diff) < bic.stop & length(inc) > 0)
      { break }
    # otherwise keep selecting
    i <- which.max(Model2$bic.diff)
    inc <- append(inc, excl[i])
    excl <- setdiff(excl, excl[i])
    tab <- rbind(tab, Model2[i,])
    Model1 <- Model2[i,]
    Model2 <- NULL
  }
  rownames(tab) <- 1:nrow(tab)
  colnames(tab) <- c("Variable", "Model", "G", "BIC", "BIC.dif")
  subsets <- sapply(1:nrow(tab), function(x) list(inc[1:x]))
  #
  return(list(subset = subsets[[length(subsets)]], tab = tab))
}

# BICreg <- function(y, x)
# { n <- length(y)
#   mod <- lm.fit(cbind(rep(1,n), x), y)
#   rss <- sum(mod$residuals^2)
#   -n*log(2*pi) - n*log(rss/n) - n - (n - mod$df.residual + 1) * log(n)
# }

MclustDRBICreg <- function(y, x, stepwise = TRUE)
{
  x <- as.matrix(x)
  y <- as.vector(y)
  n <- length(y)
  mod0 <- lm(y ~ 1)
  if(ncol(x) >= 1)
    { mod1 <- lm(y ~ 1+x)
      if(stepwise)
        { mod <- step(mod0, k = log(n), trace = 0,
                      scope = list(lower = formula(mod0),
                                   upper = formula(mod1)),
                      direction = "forward")
        } else mod <- mod1
  } else mod <- mod0
  rss <- sum(mod$residuals^2)
  p <- (n - mod$df.residual + 1)
  -n*log(2*pi) - n*log(rss/n) - n - p*log(n)
}

normalize <- function(x)
{
  # Normalize the vector x to have unit length
  x <- as.vector(x)
  x <- x/sqrt(as.vector(crossprod(x)))
  return(x)
}

MclustDRrecoverdir <- function(object, data, normalized = TRUE, std = FALSE)
{
  # Recover the coefficients of the linear combination of the original
  # variables defining the MclustDR directions. This is useful if the
  # directions are obtained from other directions.
  if(!inherits(object, "MclustDR"))
    stop("object must be of class 'MclustDR'")
  if(missing(data)) x <- object$x else x <- as.matrix(data)
  x <- scale(x, center = TRUE, scale = FALSE)
  numdir <- object$numdir
  dir <- object$dir[,seq(numdir),drop=FALSE]
  # B <- as.matrix(coef(lm(dir ~ x)))[-1,,drop=FALSE] # ok but old
  B <- qr.solve(x, dir)
  if(std)
    { sdx <- apply(x, 2, sd)
      B <- apply(B, 2, function(x) x*sdx) }
  if(normalized)
    { B <- as.matrix(apply(B, 2, normalize)) }
  rownames(B) <- colnames(x)
  return(B)
}

## Define print and summary methods for showing basis coefs
## in the original scale of variables

print.MclustDRsubsel <- function(x, ...)
{
  x$basis <- x$basisx
  class(x) <- class(x)[2]
  NextMethod()
}

summary.MclustDRsubsel <- function(object, ...)
{
  object$basis <- object$basisx
  object$std.basis <- object$std.basisx
  class(object) <- class(object)[2]
  NextMethod()
}
mclust/R/bootstrap.R0000644000176200001440000004220213370553350014126 0ustar liggesusers##
## Resampling methods
##

#
# Bootstrap Likelihood Ratio Test
#

mclustBootstrapLRT <- function(data, modelName = NULL, nboot = 999, level = 0.05,
                               maxG = NULL, verbose = interactive(), ...)
{
  if(is.null(modelName))
    stop("A 'modelName' must be provided. Please see help(mclustModelNames) which describes the available models.")
  modelName <- modelName[1]
  if(is.null(maxG)) G <- seq.int(1, 9) else { maxG <- as.numeric(maxG); G <- seq.int(1, maxG+1) }
  Bic <- mclustBIC(data, G = G, modelNames = modelName,
                   warn = FALSE, verbose = FALSE, ...)
  if(!(modelName %in% attr(Bic, "modelNames")))
    stop("'modelName' not compatible with data. Please see help(mclustModelNames) which describes the available models.")
  if(all(is.na(Bic)))
    stop(paste("no model", modelName, "can be fitted."))
  # select only models that can be fit
  G <- which(!is.na(Bic[, attr(Bic, "modelNames") == modelName]))
  if(verbose)
  { cat("bootstrapping LRTS ...\n")
    flush.console()
    pbar <- txtProgressBar(min = 0, max = (max(G)-1)*nboot, style = 3)
    on.exit(close(pbar)) }
  obsLRTS <- p.value <- vector("numeric", length = max(G)-1)
  bootLRTS <- matrix(as.double(NA), nrow = nboot, ncol = max(G)-1)
  g <- 0; continue <- TRUE
  while(g < (max(G)-1) & continue)
  { g <- g + 1
    # fit model under H0
    Mod0 <- summary(Bic, data, G = g, modelNames = modelName)
    # fit model under H1
    Mod1 <- summary(Bic, data, G = g+1, modelNames = modelName)
    # observed LRTS
    obsLRTS[g] <- 2*(Mod1$loglik - Mod0$loglik)
    # bootstrap
    b <- 0
    while(b < nboot)
    { b <- b + 1
      # generate 'parametric' bootstrap sample under H0
      bootSample <- sim(Mod0$modelName, Mod0$parameters, n = Mod0$n)
      # fit model under H0
      bootMod0 <- em(data = bootSample[,-1], modelName = Mod0$modelName,
                     parameters = Mod0$parameters, warn = FALSE, ...)
      # fit model under H1
      bootMod1 <- em(data = bootSample[,-1], modelName = Mod1$modelName,
                     parameters = Mod1$parameters, warn = FALSE, ...)
      # compute bootstrap LRT
      LRTS <- 2*(bootMod1$loglik - bootMod0$loglik)
      if(is.na(LRTS)) { b <- b - 1; next() }
      bootLRTS[b,g] <- LRTS
      if(verbose) setTxtProgressBar(pbar, (g-1)*nboot+b)
    }
    p.value[g] <- (1 + sum(bootLRTS[,g] >= obsLRTS[g]))/(nboot+1)
    # stop early if the test is not significant when no maxG is provided
    if(is.null(maxG) & p.value[g] > level)
    { continue <- FALSE
      if(verbose) setTxtProgressBar(pbar, (max(G)-1)*nboot) }
  }
  out <- list(G = 1:g, modelName = modelName,
              obs = obsLRTS[1:g],
              boot = bootLRTS[,1:g,drop=FALSE],
              p.value = p.value[1:g])
  class(out) <- "mclustBootstrapLRT"
  return(out)
}

print.mclustBootstrapLRT <- function(x, ...)
{
  txt <- paste(rep("-", min(61, getOption("width"))), collapse = "")
  catwrap(txt)
  catwrap("Bootstrap sequential LRT for the number of mixture components")
  catwrap(txt)
  cat(formatC("Model", flag = "-", width = 12), "=", x$modelName, "\n")
  cat(formatC("Replications", flag = "-", width = 12), "=", nrow(x$boot), "\n")
  df <- data.frame(x$obs, x$p.value)
  colnames(df) <- c("LRTS", "bootstrap p-value")
  rownames(df) <- formatC(paste(x$G, "vs", x$G+1), flag = "-", width = 8)
  print(df, ...)
}

plot.mclustBootstrapLRT <- function(x, G = 1, hist.col = "grey",
                                    hist.border = "lightgrey", breaks = "Scott",
                                    col = "forestgreen", lwd = 2, lty = 3,
                                    main = NULL, ...)
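## Illustrative usage sketch (assumes the base R 'faithful' data; a small
## nboot is used only to keep the example short):
##   LRT <- mclustBootstrapLRT(faithful, modelName = "VVV", nboot = 199)
##   LRT              # LRTS and bootstrap p-values for G vs G+1 components
##   plot(LRT, G = 1) # bootstrap distribution of the LRTS for 1 vs 2 components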
{ if(!any(G == x$G)) { warning(paste("bootstrap LRT not available for G =", G)) return() } G <- as.numeric(G)[1] h <- hist(x$boot[,G], breaks = breaks, plot = FALSE) xlim <- range(h$breaks, x$boot[,G], x$obs[G], na.rm = TRUE) xlim <- extendrange(xlim, f = 0.05) plot(h, xlab = "LRTS", freq = FALSE, xlim = xlim, col = hist.col, border = hist.border, main = NULL) box() abline(v = x$obs[G], lty = lty, lwd = lwd, col = col) if(is.null(main) | is.character(main)) { if(is.null(main)) main <- paste("Bootstrap LRT for model", x$modelName, "with", G, "vs", G+1, "components") title(main = main, cex.main = 1) } invisible() } # # Bootstrap inference (standard errors and percentile confidence intervals) # MclustBootstrap <- function(object, nboot = 999, type = c("bs", "wlbs", "pb", "jk"), max.nonfit = 10*nboot, verbose = interactive(), ...) { if(!any(class(object) %in% c("Mclust", "densityMclust"))) stop("object must be of class 'Mclust' or 'densityMclust'") if(any(type %in% c("nonpara", "wlb"))) { type <- gsub("nonpara", "bs", type) type <- gsub("wlb", "wlbs", type) warning("resampling type converted to \"", type, "\"") } type <- match.arg(type, choices = eval(formals(MclustBootstrap)$type)) # data <- object$data n <- object$n d <- object$d G <- object$G if(type == "jk") nboot <- n varnames <- rownames(object$parameters$mean) # model parameters par <- summary(object)[c("pro", "mean", "variance")] if(d == 1) { par$mean <- array(par$mean, dim = c(d, G)) par$variance <- array(par$variance, dim = c(d, d, G)) } # bootstrapped parameters pro.boot <- array(NA, c(nboot,G), dimnames = list(NULL, seq.int(G))) mean.boot <- array(NA, c(nboot,d,G), dimnames = list(NULL, varnames, seq.int(G))) var.boot <- array(NA, c(nboot,d,d,G), dimnames = list(NULL, varnames, varnames, seq.int(G))) if(verbose) { cat("resampling ...\n") flush.console() pbar <- txtProgressBar(min = 0, max = nboot, style = 3) on.exit(close(pbar)) } b <- nonfit <- 0 while(b < nboot & nonfit < max.nonfit) { b <- b + 1 obj <- object switch(type, "bs" = { idx <- sample(seq_len(n), size = n, replace = TRUE) obj$data <- object$data[idx,] obj$z <- object$z[idx,] obj$warn <- FALSE mod.boot <- try(do.call("me", obj), silent = TRUE) }, "wlbs" = { w <- rexp(n) # w <- w/mean(w) w <- w/max(w) mod.boot <- try(do.call("me.weighted", c(list(weights = w, warn = FALSE), obj)), silent = TRUE) }, "pb" = { obj$data <- do.call("sim", object)[,-1,drop=FALSE] obj$z <- predict(obj)$z obj$warn <- FALSE mod.boot <- try(do.call("me", obj), silent = TRUE) }, "jk" = { idx <- seq_len(n)[-b] obj$data <- object$data[idx,] obj$z <- object$z[idx,] obj$warn <- FALSE mod.boot <- try(do.call("me", obj), silent = TRUE) } ) # check model convergence if(inherits(mod.boot, "try-error")) { if(type != "jk") b <- b - 1 nonfit <- nonfit + 1 next() } if(is.na(mod.boot$loglik)) { if(type != "jk") b <- b - 1 nonfit <- nonfit + 1 next() } if(type == "jk") { # pseudovalues ... 
# pro.boot[b,] <- n*par$pro - (n-1)*mod.boot$parameters$pro # mean.boot[b,,] <- n*par$mean - (n-1)*mod.boot$parameters$mean # var.boot[b,,,] <- n*par$variance - (n-1)*mod.boot$parameters$variance$sigma pro.boot[b,] <- mod.boot$parameters$pro mean.boot[b,,] <- mod.boot$parameters$mean var.boot[b,,,] <- mod.boot$parameters$variance$sigma } else { # bootstrap values pro.boot[b,] <- mod.boot$parameters$pro mean.boot[b,,] <- mod.boot$parameters$mean var.boot[b,,,] <- mod.boot$parameters$variance$sigma } if(verbose) setTxtProgressBar(pbar, b) } out <- list(G = G, modelName = object$modelName, parameters = par, nboot = nboot, type = type, nonfit = nonfit, pro = pro.boot, mean = mean.boot, variance = var.boot) class(out) <- "MclustBootstrap" return(out) } print.MclustBootstrap <- function(x, digits = getOption("digits"), ...) { cat("\'", class(x)[1], "\' object:\n", sep = "") str(x, max.level = 1, give.attr = FALSE, strict.width = "wrap") invisible() } summary.MclustBootstrap <- function(object, what = c("se", "ci", "ave"), conf.level = 0.95, ...) { what <- match.arg(what, choices = eval(formals(summary.MclustBootstrap)$what)) dims <- dim(object$mean) # varnames <- dimnames(object$mean)[[2]] nboot <- dims[1] d <- dims[2] G <- dims[3] switch(what, "se" = { out <- list(pro = apply(object$pro, 2, sd, na.rm=TRUE), mean = apply(object$mean, c(2,3), sd, na.rm=TRUE), variance = apply(object$variance, c(2,3,4), sd, na.rm=TRUE)) if(object$type == "jk") out <- lapply(out, function(x) sqrt(x^2*(nboot-object$nonfit-1)^2/(nboot-object$nonfit))) }, "ave" = { out <- list(pro = apply(object$pro, 2, mean, na.rm=TRUE), mean = apply(object$mean, c(2,3), mean, na.rm=TRUE), variance = apply(object$variance, c(2,3,4), mean, na.rm=TRUE)) }, "ci" = { levels <- c((1-conf.level)/2, (1 + conf.level)/2) if(object$type == "jk") { # bias-corrected ci based on normal-approximation ave <- list(pro = apply(object$pro, 2, mean, na.rm=TRUE), mean = apply(object$mean, c(2,3), mean, na.rm=TRUE), variance = t(sapply(seq.int(d), function(j) apply(object$variance[,j,j,], 2, mean, na.rm=TRUE), simplify = "array"))) se <- list(pro = apply(object$pro, 2, sd, na.rm=TRUE), mean = apply(object$mean, c(2,3), sd, na.rm=TRUE), variance = t(sapply(seq.int(d), function(j) apply(object$variance[,j,j,], 2, sd, na.rm=TRUE), simplify = "array"))) se <- lapply(se, function(x) sqrt(x^2*(nboot-object$nonfit-1)^2/(nboot-object$nonfit))) zq <- qnorm(max(levels)) lnames <- paste0(formatC(levels * 100, format = "fg", width = 1, digits = getOption("digits")), "%") # the code above mimic stats:::format_perc(levels) which can't be used # because format_perc is not exported from stats out <- list(pro = array(as.double(NA), c(2,G), dimnames = list(lnames, 1:G)), mean = array(as.double(NA), dim = c(2,d,G), dimnames = list(lnames, 1:d, 1:G)), variance = array(as.double(NA), dim = c(2,d,G), dimnames = list(lnames, 1:d, 1:G))) out$pro[1,] <- ave$pro - zq*se$pro out$pro[2,] <- ave$pro + zq*se$pro out$mean[1,,] <- ave$mean - zq*se$mean out$mean[2,,] <- ave$mean + zq*se$mean out$variance[1,,] <- ave$variance - zq*se$variance out$variance[2,,] <- ave$variance + zq*se$variance } else { # percentile-based ci out <- list(pro = apply(object$pro, 2, quantile, probs = levels, na.rm=TRUE), mean = apply(object$mean, c(2,3), quantile, probs = levels, na.rm=TRUE)) v <- array(as.double(NA), dim = c(2,d,G), dimnames = dimnames(out$mean)) for(j in seq.int(d)) v[,j,] <- apply(object$variance[,j,j,], 2, quantile, probs = levels, na.rm=TRUE) out$variance <- v } } ) obj <- 
append(object[c("modelName", "G", "nboot", "type")], list(d = d, what = what)) if(what == "ci") obj$conf.level <- conf.level obj <- append(obj, out) class(obj) <- "summary.MclustBootstrap" return(obj) } print.summary.MclustBootstrap <- function(x, digits = getOption("digits"), ...) { txt <- paste(rep("-", min(58, getOption("width"))), collapse = "") catwrap(txt) catwrap(paste("Resampling", switch(x$what, "se" = "standard errors", "ave" = "averages", "ci" = "confidence intervals"))) catwrap(txt) # cat(formatC("Model", flag = "-", width = 26), "=", x$modelName, "\n") cat(formatC("Num. of mixture components", flag = "-", width = 26), "=", x$G, "\n") cat(formatC("Replications", flag = "-", width = 26), "=", x$nboot, "\n") cat(formatC("Type", flag = "-", width = 26), "=", switch(x$type, "bs" = "nonparametric bootstrap", "wlbs" = "weighted likelihood bootstrap", "pb" = "parametric bootstrap", "jk" = "jackknife"), "\n") if(x$what == "ci") cat(formatC("Confidence level", flag = "-", width = 26), "=", x$conf.level, "\n") # cat("\nMixing probabilities:\n") print(x$pro, digits = digits) # cat("\nMeans:\n") if(x$d == 1) { if(x$what == "se" | x$what == "ave") print(x$mean[1,], digits = digits) else print(x$mean[,1,], digits = digits) } else if(x$what == "se" | x$what == "ave") print(x$mean, digits = digits) else { for(g in seq.int(x$G)) { cat("[,,", g, "]\n", sep = "") print(x$mean[,,g], digits = digits) } } # cat("\nVariances:\n") if(x$d == 1) { print(x$variance[,1,], digits = digits) } else { for(g in seq.int(x$G)) { cat("[,,", g, "]\n", sep = "") print(x$variance[,,g], digits = digits) } } invisible(x) } plot.MclustBootstrap <- function(x, what = c("pro", "mean", "var"), show.parest = TRUE, show.confint = TRUE, hist.col = "grey", hist.border = "lightgrey", breaks = "Sturges", col = "forestgreen", lwd = 2, lty = 3, xlab = NULL, xlim = NULL, ylim = NULL, ...) { object <- x # Argh. Really want to use object anyway what <- match.arg(what, choices = eval(formals(plot.MclustBootstrap)$what)) par <- object$parameters d <- dim(object$mean)[2] varnames <- rownames(par$mean) if(show.confint) { ci <- summary(object, what = "ci", ...) ave <- summary(object, what = "ave", ...) } histBoot <- function(boot, stat, ci, ave, breaks, xlim, ylim, xlab, ...) { hist(boot, breaks = breaks, xlim = xlim, ylim = ylim, main = "", xlab = xlab, ylab = "", border = hist.border, col = hist.col) box() if(show.parest) abline(v = stat, col = col, lwd = lwd, lty = lty) if(show.confint) { lines(ci, rep(par("usr")[3]/2,2), lwd = lwd, col = col) points(ave, par("usr")[3]/2, pch = 15, col = col) } } switch(what, "pro" = { xlim <- range(if(is.null(xlim)) pretty(object$pro) else xlim) for(k in 1:object$G) histBoot(object$pro[,k], breaks = breaks, stat = par$pro[k], ci = ci$pro[,k], ave = ave$pro[k], xlim = xlim, ylim = ylim, xlab = ifelse(is.null(xlab), paste("Mix. prop. 
for comp.",k), xlab)) }, "mean" = { isNull_xlim <- is.null(xlim) for(j in 1:d) { xlim <- range(if(isNull_xlim) pretty(object$mean[,j,]) else xlim) for(k in 1:object$G) histBoot(object$mean[,j,k], breaks = breaks, stat = par$mean[j,k], ci = ci$mean[,j,k], ave = ave$mean[j,k], xlim = xlim, ylim = ylim, xlab = ifelse(is.null(xlab), paste(varnames[j], "mean for comp.",k), xlab)) } }, "var" = { isNull_xlim <- is.null(xlim) for(j in 1:d) { xlim <- range(if(isNull_xlim) pretty(object$variance[,j,j,]) else xlim) for(k in 1:object$G) histBoot(object$variance[,j,j,k], breaks = breaks, stat = par$variance[j,j,k], ci = ci$variance[,j,k], ave = ave$variance[j,k], xlim = xlim, ylim = ylim, xlab = ifelse(is.null(xlab), paste(varnames[j], "var. for comp.",k), xlab)) } } ) invisible() } mclust/R/graphics.R0000644000176200001440000013047213476473206013730 0ustar liggesusersmclust1Dplot <- function(data, parameters = NULL, z = NULL, classification = NULL, truth = NULL, uncertainty = NULL, what = c("classification", "density", "error", "uncertainty"), symbols = NULL, colors = NULL, ngrid = length(data), xlab = NULL, ylab = NULL, xlim = NULL, ylim = NULL, CEX = 1, main = FALSE, ...) { p <- ncol(as.matrix(data)) if (p != 1) stop("for one-dimensional data only") data <- as.vector(data) n <- length(data) if(is.null(classification) && !is.null(z)) classification <- map(z) if(is.null(uncertainty) && !is.null(z)) uncertainty <- 1 - apply(z, 1, max) if (!is.null(parameters)) { mu <- parameters$mean L <- ncol(mu) sigmasq <- parameters$variance$sigmasq haveParams <- !is.null(mu) && !is.null(sigmasq) && !any(is.na(mu)) && !any(is.na(sigmasq)) } else haveParams <- FALSE if (is.null(xlim)) xlim <- range(data) if (haveParams) { G <- length(mu) if ((l <- length(sigmasq)) == 1) { sigmasq <- rep(sigmasq, G) } else if (l != G) { params <- FALSE warning("mu and sigma are incompatible") } } if (!is.null(truth)) { if (is.null(classification)) { classification <- truth truth <- NULL } else { if (length(unique(truth)) != length(unique(classification))) truth <- NULL else truth <- as.character(truth) } } if(!is.null(classification)) { classification <- as.character(classification) U <- sort(unique(classification)) L <- length(U) if(is.null(symbols)) { symbols <- rep("|", L) } else if(length(symbols) == 1) { symbols <- rep(symbols, L) } else if(length(symbols) < L) { warning("more symbols needed to show classification") symbols <- rep("|", L) } if(is.null(colors)) { colors <- mclust.options("classPlotColors")[1:L] } else if(length(colors) == 1) { colors <- rep(colors, L) } else if(length(colors) < L) { warning("more colors needed to show classification") colors <- rep("black", L) } } main <- if(is.null(main) || is.character(main)) FALSE else as.logical(main) what <- match.arg(what, choices = eval(formals(mclust1Dplot)$what)) bad <- what == "classification" && is.null(classification) bad <- bad || (what == "uncertainty" && is.null(uncertainty)) bad <- bad || (what == "error" && (is.null(classification) || is.null(truth))) if(bad) stop("insufficient input for specified plot") M <- L switch(what, "classification" = { plot(data, seq(from = 0, to = M, length = n), type = "n", xlab = if(is.null(xlab)) "" else xlab, ylab = if(is.null(ylab)) "Classification" else ylab, xlim = xlim, ylim = if(is.null(ylim)) grDevices::extendrange(r = c(0,M), f = 0.1) else ylim, yaxt = "n", main = "", ...) 
axis(side = 2, at = 0:M, labels = c("", sort(unique(classification)))) if(main) title("Classification") for(k in 1:L) { I <- classification == U[k] if(symbols[k] == "|") { vpoints(data[I], rep(0, length(data[I])), cex = CEX) vpoints(data[I], rep(k, length(data[I])), col = colors[k], cex = CEX) } else { points(data[I], rep(0, length(data[I])), pch = symbols[k], cex = CEX) points(data[I], rep(k, length(data[I])), pch = symbols[k], col = colors[k], cex = CEX) } } }, "error" = { ERRORS <- classError(classification, truth)$misclassified plot(data, seq(from = 0, to = M, length = n), type = "n", xlab = xlab, ylab = if(is.null(ylab)) "Class errors" else ylab, xlim = xlim, ylim = if(is.null(ylim)) grDevices::extendrange(r = c(0,M), f = 0.1) else ylim, yaxt = "n", ...) axis(side = 2, at = 0:M, labels = c("", unique(classification))) if(main) title("Classification error") good <- rep(TRUE, length(classification)) good[ERRORS] <- FALSE sym <- "|" for(k in 1:L) { K <- classification == U[k] I <- (K & good) if(any(I)) { if(FALSE) { sym <- if (L > 4) 1 else if (k == 4) 5 else k - 1 } l <- sum(as.numeric(I)) if(sym == "|") vpoints(data[I], rep(0, l), col = colors[k], cex = CEX) else points(data[I], rep(0, l), pch = sym, col = colors[k], cex = CEX) } I <- K & !good if(any(I)) { if(FALSE) { sym <- if (L > 5) 16 else k + 14 } l <- sum(as.numeric(I)) if(sym == "|") vpoints(data[I], rep(k, l), col = colors[k], cex = CEX) else points(data[I], rep(k, l), pch = sym, col = colors[k], cex = CEX) } } }, "uncertainty" = { u <- (uncertainty - min(uncertainty))/ (max(uncertainty) - min(uncertainty) + sqrt(.Machine$double.eps)) b <- bubble(u, cex = CEX*c(0.3, 2), alpha = c(0.3, 1)) if(is.null(classification)) { classification <- rep(1, length(u)) U <- 1 } if(is.null(colors)) colors <- palette()[1] cl <- sapply(classification, function(cl) which(cl == U)) plot(data, uncertainty, type = "h", xlab = xlab, ylab = if(is.null(ylab)) "Uncertainty" else ylab, xlim = xlim, ylim = if(is.null(ylim)) c(0,1) else ylim, col = mapply(adjustcolor, col = colors[cl], alpha.f = b$alpha), ...) rug(data, lwd = 1, col = adjustcolor(par("fg"), alpha.f = 0.8)) if(main) title("Uncertainty") }, "density" = { if(is.null(parameters$pro) && parameters$variance$G != 1) stop("mixing proportions missing") x <- grid1(n = ngrid, range = xlim, edge = TRUE) plot(x, dens("V", data = x, parameters = parameters), xlab = xlab, ylab = if(is.null(ylab)) "Density" else ylab, xlim = xlim, type = "l", main = "", ...) if(main) title("Density") }, { plot(data, rep(0, n), type = "n", xlab = "", ylab = "", xlim = xlim, main = "", ...) vpoints(data, rep(0, n), cex = CEX) if(main) title("Point Plot") } ) invisible() } mclust2Dplot <- function(data, parameters = NULL, z = NULL, classification = NULL, truth = NULL, uncertainty = NULL, what = c("classification", "uncertainty", "error"), addEllipses = TRUE, fillEllipses = mclust.options("fillEllipses"), symbols = NULL, colors = NULL, xlim = NULL, ylim = NULL, xlab = NULL, ylab = NULL, scale = FALSE, CEX = 1, PCH = ".", main = FALSE, swapAxes = FALSE, ...) 
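## Illustrative usage sketch (assumes the base R 'faithful' data; not taken
## from the package documentation):
##   mod <- Mclust(faithful, G = 2)
##   mclust2Dplot(faithful, parameters = mod$parameters, z = mod$z,
##                what = "classification")
##   mclust2Dplot(faithful, parameters = mod$parameters, z = mod$z,
##                what = "uncertainty")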
{ if(dim(data)[2] != 2) stop("data must be two dimensional") if(is.null(classification) && !is.null(z)) classification <- map(z) if(is.null(uncertainty) && !is.null(z)) uncertainty <- 1 - apply(z, 1, max) if(!is.null(parameters)) { mu <- parameters$mean L <- ncol(mu) sigma <- parameters$variance$sigma haveParams <- !is.null(mu) && !is.null(sigma) && !any(is.na(mu)) && !any(is.na(sigma)) } else haveParams <- FALSE main <- if(is.null(main) || is.character(main)) FALSE else as.logical(main) if(is.null(xlim)) xlim <- range(data[, 1]) if(is.null(ylim)) ylim <- range(data[, 2]) if(scale) { par(pty = "s") d <- diff(xlim) - diff(ylim) if(d > 0) { ylim <- c(ylim[1] - d/2, ylim[2] + d/2.) } else { xlim <- c(xlim[1] + d/2, xlim[2] - d/2) } } dnames <- dimnames(data)[[2]] if(is.null(xlab)) { xlab <- if(is.null(dnames)) "" else dnames[1] } if(is.null(ylab)) { ylab <- if(is.null(dnames)) "" else dnames[2] } if(haveParams) { G <- ncol(mu) dimpar <- dim(sigma) if(length(dimpar) != 3) { haveParams <- FALSE warning("covariance must be a 3D matrix") } if(G != dimpar[3]) { haveParams <- FALSE warning("means and variance parameters are incompatible") } mu <- array(mu, c(2, G)) sigma <- array(sigma, c(2, 2, G)) } if(swapAxes) { if(haveParams) { mu <- mu[2:1,] sigma <- sigma[2:1, 2:1,] } data <- data[, 2:1] } if(!is.null(truth)) { if(is.null(classification)) { classification <- truth truth <- NULL } else { if(length(unique(truth)) != length(unique(classification))) truth <- NULL else truth <- as.character(truth) } } if(charmatch("classification", what, nomatch = 0) && is.null(classification) && !is.null(z)) { classification <- map(z) } if(!is.null(classification)) { classification <- as.character(classification) U <- sort(unique(classification)) L <- length(U) noise <- (U[1] == "0") if(is.null(symbols)) { if(L <= length(mclust.options("classPlotSymbols"))) { symbols <- mclust.options("classPlotSymbols")[1:L] if(noise) { symbols <- c(16,symbols)[1:L] } } else if(L <= 9) { symbols <- as.character(1:9) } else if(L <= 26) { symbols <- LETTERS } } if(is.null(colors)) { if(L <= length(mclust.options("classPlotColors"))) { colors <- mclust.options("classPlotColors")[1:L] if(noise) { colors <- unique(c("black", colors))[1:L] } } } else if(length(colors) == 1) colors <- rep(colors, L) if(length(symbols) < L) { warning("more symbols needed to show classification ") symbols <- rep(16,L) } if(length(colors) < L) { warning("more colors needed to show classification ") colors <- rep("black",L) } } what <- match.arg(what, choices = eval(formals(mclust2Dplot)$what)) bad <- what == "classification" && is.null(classification) bad <- bad || (what == "uncertainty" && is.null(uncertainty)) bad <- bad || (what == "error" && (is.null(classification) || is.null(truth))) if(bad) stop("insufficient input for specified plot") switch(EXPR = what, "classification" = { plot(data[, 1], data[, 2], type = "n", xlab = xlab, ylab = ylab, xlim = xlim, ylim = ylim, main = "", ...) if(main) title("Classification") for(k in 1:L) { I <- classification == U[k] points(data[I, 1], data[I, 2], pch = symbols[k], col = colors[k], cex = if(U[k] == "0") CEX/2 else CEX) } }, "error" = { ERRORS <- classError(classification, truth)$misclassified plot(data[, 1], data[, 2], type = "n", xlab = xlab, ylab = ylab, xlim = xlim, ylim = ylim, main = "", ...) 
if(main) title("Classification Errors") CLASSES <- unique(as.character(truth)) symOpen <- c(2, 0, 1, 5) symFill <- c(17, 15, 16, 18) good <- rep(TRUE,length(classification)) good[ERRORS] <- FALSE if(L > 4) { points(data[good, 1], data[good, 2], pch = 1, col = colors, cex = CEX) points(data[!good, 1], data[!good, 2], pch = 16, cex = CEX) } else { for(k in 1:L) { K <- truth == CLASSES[k] points(data[K, 1], data[K, 2], pch = symOpen[k], col = colors[k], cex = CEX) if(any(I <- (K & !good))) { points(data[I, 1], data[I, 2], pch = symFill[k], cex = CEX) } } } }, "uncertainty" = { u <- (uncertainty - min(uncertainty))/ (max(uncertainty) - min(uncertainty) + sqrt(.Machine$double.eps)) b <- bubble(u, cex = CEX*c(0.3, 2), alpha = c(0.3, 0.9)) cl <- sapply(classification, function(cl) which(cl == U)) plot(data[, 1], data[, 2], pch = 19, xlab = xlab, ylab = ylab, xlim = xlim, ylim = ylim, main = "", cex = b$cex, col = mapply(adjustcolor, col = colors[cl], alpha.f = b$alpha), ...) if(main) title("Uncertainty") fillEllipses <- FALSE }, { plot(data[, 1], data[, 2], type = "n", xlab = xlab, ylab = ylab, xlim = xlim, ylim = ylim, main = "", ...) if(main) title("Point Plot") points(data[, 1], data[, 2], pch = PCH, cex = CEX) } ) if(haveParams && addEllipses) { ## plot ellipsoids for(g in 1:G) mvn2plot(mu = mu[,g], sigma = sigma[,,g], k = 15, fillEllipse = fillEllipses, col = if(fillEllipses) colors[g] else rep("grey30",3)) } invisible() } # old version mvn2plot <- function(mu, sigma, k = 15, alone = FALSE, col = rep("grey30",3), pch = 8, lty = c(1,2), lwd = c(1,1)) { p <- length(mu) if (p != 2) stop("only two-dimensional case is available") if (any(unique(dim(sigma)) != p)) stop("mu and sigma are incompatible") ev <- eigen(sigma, symmetric = TRUE) s <- sqrt(rev(sort(ev$values))) V <- t(ev$vectors[, rev(order(ev$values))]) theta <- (0:k) * (pi/(2 * k)) x <- s[1] * cos(theta) y <- s[2] * sin(theta) xy <- cbind(c(x, -x, -x, x), c(y, y, -y, -y)) xy <- xy %*% V xy <- sweep(xy, MARGIN = 2, STATS = mu, FUN = "+") if(alone) { xymin <- apply(xy, 2, FUN = "min") xymax <- apply(xy, 2, FUN = "max") r <- ceiling(max(xymax - xymin)/2) xymid <- (xymin + xymax)/2 plot(xy[, 1], xy[, 2], type = "n", xlab = "x", ylab = "y", xlim = c(-r, r) + xymid[1], ylim = c(-r, r) + xymid[2]) } l <- length(x) i <- 1:l for(k in 1:4) { lines(xy[i,], col = col[1], lty = lty[1], lwd = lwd[1]) i <- i + l } x <- s[1] y <- s[2] xy <- cbind(c(x, -x, 0, 0), c(0, 0, y, -y)) xy <- xy %*% V xy <- sweep(xy, MARGIN = 2, STATS = mu, FUN = "+") lines(xy[1:2,], col = col[2], lty = lty[2], lwd = lwd[2]) lines(xy[3:4,], col = col[2], lty = lty[2], lwd = lwd[2]) points(mu[1], mu[2], col = col[3], pch = pch) invisible() } mvn2plot <- function(mu, sigma, k = 15, alone = FALSE, fillEllipse = FALSE, alpha = 0.3, col = rep("grey30", 3), pch = 8, lty = c(1,2), lwd = c(1,1), ...) 
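## Illustrative usage sketch (not from the package documentation): draw a
## single covariance ellipse with its principal axes and centroid
##   mvn2plot(mu = c(0, 0), sigma = matrix(c(1, 0.7, 0.7, 2), 2, 2),
##            alone = TRUE)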
{ p <- length(mu) if(p != 2) stop("only two-dimensional case is available") if(any(unique(dim(sigma)) != p)) stop("mu and sigma are incompatible") ev <- eigen(sigma, symmetric = TRUE) s <- sqrt(rev(sort(ev$values))) V <- t(ev$vectors[, rev(order(ev$values))]) theta <- (0:k) * (pi/(2 * k)) x <- s[1] * cos(theta) y <- s[2] * sin(theta) xy <- cbind(c(x, -x, -x, x), c(y, y, -y, -y)) xy <- xy %*% V xy <- sweep(xy, MARGIN = 2, STATS = mu, FUN = "+") # if(alone) { xymin <- apply(xy, 2, FUN = "min") xymax <- apply(xy, 2, FUN = "max") r <- ceiling(max(xymax - xymin)/2) xymid <- (xymin + xymax)/2 plot(xy[, 1], xy[, 2], type = "n", xlab = "x", ylab = "y", xlim = c(-r, r) + xymid[1], ylim = c(-r, r) + xymid[2]) } # draw ellipses if(fillEllipse) { col <- rep(col, 3) polygon(xy[chull(xy),], border = NA, col = adjustcolor(col[1], alpha.f = alpha)) } else { l <- length(x) i <- 1:l for(k in 1:4) { lines(xy[i,], col = col[1], lty = lty[1], lwd = lwd[1]) i <- i + l } } # draw principal axes and centroid x <- s[1] y <- s[2] xy <- cbind(c(x, -x, 0, 0), c(0, 0, y, -y)) xy <- xy %*% V xy <- sweep(xy, MARGIN = 2, STATS = mu, FUN = "+") lines(xy[1:2,], col = col[2], lty = lty[2], lwd = lwd[2]) lines(xy[3:4,], col = col[2], lty = lty[2], lwd = lwd[2]) points(mu[1], mu[2], col = col[3], pch = pch) # invisible() } clPairs <- function (data, classification, symbols = NULL, colors = NULL, labels = dimnames(data)[[2]], cex.labels = 1.5, gap = 0.2, ...) { data <- as.matrix(data) n <- nrow(data) d <- ncol(data) if(missing(classification)) classification <- rep(1, n) if(!is.factor(classification)) classification <- as.factor(classification) l <- length(levels(classification)) if(length(classification) != n) stop("classification variable must have the same length as nrows of data!") if(missing(symbols)) { if(l == 1) { symbols <- "." } if(l <= length(mclust.options("classPlotSymbols"))) { symbols <- mclust.options("classPlotSymbols") } else { if(l <= 9) { symbols <- as.character(1:9) } else if(l <= 26) { symbols <- LETTERS[1:l] } else symbols <- rep(16,l) } } if(length(symbols) == 1) symbols <- rep(symbols, l) if(length(symbols) < l) { symbols <- rep(16, l) warning("more symbols needed") } if(is.null(colors)) { if(l <= length(mclust.options("classPlotColors"))) colors <- mclust.options("classPlotColors")[1:l] } if(length(colors) == 1) colors <- rep(colors, l) if(length(colors) < l) { colors <- rep( "black", l) warning("more colors needed") } if(d > 2) { pairs(x = data, labels = labels, pch = symbols[classification], col = colors[classification], gap = gap, cex.labels = cex.labels, ...) } else if(d == 2) { plot(data, pch = symbols[classification], col = colors[classification], ...) } invisible(list(d = d, class = levels(classification), col = colors, pch = symbols[seq(l)])) } clPairsLegend <- function(x, y, class, col, pch, box = TRUE, ...) { usr <- par("usr") if(box & all(usr == c(0,1,0,1))) { oldpar <- par(mar = rep(0.2, 4), no.readonly = TRUE) on.exit(par(oldpar)) box(which = "plot", lwd = 0.8) } if(!all(usr == c(0,1,0,1))) { x <- x*(usr[2]-usr[1])+usr[1] y <- y*(usr[4]-usr[3])+usr[3] } dots <- list(...) 
dots$x <- x dots$y <- y dots$legend <- class dots$text.width <- max(strwidth(dots$title, units = "user"), strwidth(dots$legend, units = "user")) dots$col <- col dots$text.col <- col dots$pch <- pch dots$title.col <- par("fg") dots$title.adj <- 0.1 dots$xpd <- NA do.call("legend", dots) } coordProj <- function(data, dimens = c(1,2), parameters = NULL, z = NULL, classification = NULL, truth = NULL, uncertainty = NULL, what = c("classification", "error", "uncertainty"), addEllipses = TRUE, fillEllipses = mclust.options("fillEllipses"), symbols = NULL, colors = NULL, scale = FALSE, xlim = NULL, ylim = NULL, CEX = 1, PCH = ".", main = FALSE, ...) { if(is.null(dimens)) dimens <- c(1, 2) if(is.null(classification) && !is.null(z)) classification <- map(z) if(is.null(uncertainty) && !is.null(z)) uncertainty <- 1 - apply(z, 1, max) if(!is.null(parameters)) { mu <- parameters$mean L <- ncol(mu) sigma <- parameters$variance$sigma haveParams <- !is.null(mu) && !is.null(sigma) && !any(is.na(mu)) && !any( is.na(sigma)) } else haveParams <- FALSE data <- data[, dimens, drop = FALSE] if(dim(data)[2] != 2) stop("need two dimensions") if(is.null(xlim)) xlim <- range(data[, 1]) if(is.null(ylim)) ylim <- range(data[, 2]) if(scale) { oldpar <- par(no.readonly = TRUE) on.exit(par(oldpar)) par(pty = "s") d <- diff(xlim) - diff(ylim) if(d > 0) { ylim <- c(ylim[1] - d/2, ylim[2] + d/2.) } else { xlim <- c(xlim[1] + d/2, xlim[2] - d/2) } } if(is.null(dnames <- dimnames(data)[[2]])) xlab <- ylab <- "" else { xlab <- dnames[1] ylab <- dnames[2] } main <- if(is.null(main) || is.character(main)) FALSE else as.logical(main) if(haveParams) { G <- ncol(mu) dimpar <- dim(sigma) if(length(dimpar) != 3) { haveParams <- FALSE warning("covariance must be a 3D matrix") } if(G != dimpar[3]) { haveParams <- FALSE warning("means and variance parameters are incompatible") } mu <- array(mu[dimens, ], c(2, G)) sigma <- array(sigma[dimens, dimens, ], c(2, 2, G)) } if(!is.null(truth)) { truth <- as.factor(truth) if(is.null(classification)) { classification <- truth truth <- NULL } } if(!is.null(classification)) { classification <- as.factor(classification) U <- levels(classification) L <- nlevels(classification) noise <- (U[1] == "0") if(is.null(symbols)) { if(L <= length(mclust.options("classPlotSymbols"))) { symbols <- mclust.options("classPlotSymbols")[1:L] if(noise) { symbols <- c(16,symbols)[1:L] } } else if(L <= 9) { symbols <- as.character(1:9) } else if(L <= 26) { symbols <- LETTERS } } else if(length(symbols) == 1) symbols <- rep(symbols, L) if(is.null(colors)) { if(L <= length(mclust.options("classPlotColors"))) { colors <- mclust.options("classPlotColors")[1:L] if(noise) { colors <- unique(c("black", colors))[1:L] } } } else if(length(colors) == 1) colors <- rep(colors, L) if(length(symbols) < L) { warning("more symbols needed to show classification ") symbols <- rep(16,L) } if(length(colors) < L) { warning("more colors needed to show classification ") colors <- rep("black",L) } } if(length(what) > 1) what <- what[1] choices <- c("classification", "error", "uncertainty") m <- charmatch(what, choices, nomatch = 0) if(m) { what <- choices[m] bad <- what == "classification" && is.null(classification) bad <- bad || (what == "uncertainty" && is.null(uncertainty)) bad <- bad || (what == "error" && (is.null(classification) || is.null( truth))) if(bad) warning("insufficient input for specified plot") badClass <- (what == "error" && (length(unique(classification)) != length( unique(truth)))) if(badClass && !bad) 
warning("classification and truth differ in number of groups") bad <- bad && badClass } else { bad <- !m warning("what improperly specified") } if(bad) what <- "bad" switch(EXPR = what, "classification" = { plot(data[, 1], data[, 2], type = "n", xlab = xlab, ylab = ylab, xlim = xlim, ylim = ylim, main = "", ...) if(main) { TITLE <- paste(paste(dimens, collapse = ","), "Coordinate Projection showing Classification") title(main = TITLE) } for(k in 1:L) { I <- classification == U[k] points(data[I, 1], data[I, 2], pch = symbols[k], col = colors[k], cex = if(U[k] == "0") CEX/3 else CEX) } }, "error" = { ERRORS <- classError(classification, truth)$misclassified plot(data[, 1], data[, 2], type = "n", xlab = xlab, ylab = ylab, xlim = xlim, ylim = ylim, main = "", ...) if(main) { TITLE <- paste(paste(dimens, collapse = ","), "Coordinate Projection showing Errors") title(main = TITLE) } CLASSES <- levels(truth) symOpen <- symb2open(mclust.options("classPlotSymbols")) symFill <- symb2fill(mclust.options("classPlotSymbols")) good <- rep(TRUE, length(classification)) good[ERRORS] <- FALSE if(L > length(symOpen)) { points(data[good, 1], data[good, 2], pch = 1, col = colors, cex = CEX) points(data[!good, 1], data[!good, 2], pch = 16, cex = CEX) } else { for(k in 1:L) { K <- truth == CLASSES[k] if(any(I <- (K & good))) { points(data[I, 1], data[I, 2], pch = symOpen[k], col = colors[k], cex = CEX) } if(any(I <- (K & !good))) { points(data[I, 1], data[I, 2], cex = CEX, pch = symFill[k], col = "black", bg = "black") } } } }, "uncertainty" = { u <- (uncertainty - min(uncertainty)) / (max(uncertainty) - min(uncertainty) + sqrt(.Machine$double.eps)) b <- bubble(u, cex = CEX * c(0.3, 2), alpha = c(0.3, 0.9)) cl <- sapply(classification, function(cl) which(cl == U)) plot(data[, 1], data[, 2], pch = 19, main = "", xlab = xlab, ylab = ylab, xlim = xlim, ylim = ylim, cex = b$cex, col = mapply(adjustcolor, col = colors[cl], alpha.f = b$alpha), ...) if(main) { TITLE <- paste(paste(dimens, collapse = ","), "Coordinate Projection showing Uncertainty") title(main = TITLE) } fillEllipses <- FALSE }, { plot(data[, 1], data[, 2], type = "n", xlab = xlab, ylab = ylab, xlim = xlim, ylim = ylim, main = "", ...) if(main) { TITLE <- paste(paste(dimens, collapse = ","), "Coordinate Projection") title(main = TITLE) } points(data[, 1], data[, 2], pch = PCH, cex = CEX) } ) if(haveParams && addEllipses) { ## plot ellipsoids for(g in 1:G) mvn2plot(mu = mu[,g], sigma = sigma[,,g], k = 15, fillEllipse = fillEllipses, col = if(fillEllipses) colors[g] else rep("grey30",3)) } invisible() } symb2open <- function(x) { symb <- 0:18 open <- c(0:14,0,1,2,5) open[sapply(x, function(x) which(symb == x))] } symb2fill <- function(x) { symb <- 0:18 fill <- c(15:17, 3:4, 23, 25, 7:9, 20, 11:18) fill[sapply(x, function(x) which(symb == x))] } # x <- c(16, 0, 17, 3, 15, 4, 1, 8, 2, 7, 5, 9, 6, 10, 11, 18, 12, 13, 14) # plot(seq(x), rep(1,length(x)), pch = x, cex = 2, ylim = c(0.8, 3.8), yaxt = "n") # points(seq(x), rep(2,length(x)), pch = symb2open(x), cex = 2) # points(seq(x), rep(3,length(x)), pch = symb2fill(x), cex = 2, bg = "black") randProj <- function(data, seeds = NULL, parameters = NULL, z = NULL, classification = NULL, truth = NULL, uncertainty = NULL, what = c("classification", "error", "uncertainty"), quantiles = c(0.75, 0.95), addEllipses = TRUE, fillEllipses = mclust.options("fillEllipses"), symbols = NULL, colors = NULL, scale = FALSE, xlim = NULL, ylim = NULL, xlab = NULL, ylab = NULL, CEX = 1, PCH = ".", main = FALSE, ...) 
{ if(is.null(classification) && !is.null(z)) classification <- map(z) if(is.null(uncertainty) && !is.null(z)) uncertainty <- 1 - apply(z, 1, max) if(!is.null(parameters)) { mu <- parameters$mean L <- ncol(mu) sigma <- parameters$variance$sigma haveParams <- !is.null(mu) && !is.null(sigma) && !any(is.na(mu)) && !any(is.na(sigma)) } else haveParams <- FALSE d <- ncol(data) if(haveParams) { G <- ncol(mu) dimpar <- dim(sigma) if(length(dimpar) != 3) { haveParams <- FALSE warning("covariance must be a 3D matrix") } if(G != dimpar[3]) { haveParams <- FALSE warning("means and variance parameters are incompatible") } cho <- array(apply(sigma, 3, chol), c(d, d, G)) } if(!is.null(truth)) { truth <- as.factor(truth) if(is.null(classification)) { classification <- truth truth <- NULL } else { if(length(unique(truth)) != length(unique(classification))) truth <- NULL else truth <- as.character(truth) } } if(!is.null(classification)) { classification <- as.factor(classification) U <- levels(classification) L <- nlevels(classification) noise <- (U[1] == "0") if(is.null(symbols)) { if(L <= length(mclust.options("classPlotSymbols"))) { symbols <- mclust.options("classPlotSymbols")[1:L] if(noise) { symbols <- c(16,symbols)[1:L] } } else if(L <= 9) { symbols <- as.character(1:9) } else if(L <= 26) { symbols <- LETTERS } } else if(length(symbols) == 1) symbols <- rep(symbols, L) if(is.null(colors)) { if(L <= length(mclust.options("classPlotColors"))) { colors <- mclust.options("classPlotColors")[1:L] if(noise) colors <- unique(c("black", colors))[1:L] } } else if(length(colors) == 1) colors <- rep(colors, L) if(length(symbols) < L) { warning("more symbols needed to show classification ") symbols <- rep(16,L) } if (length(colors) < L) { warning("more colors needed to show classification ") colors <- rep("black",L) } } if(is.null(xlab)) xlab <- "randProj1" if(is.null(ylab)) ylab <- "randProj2" what <- match.arg(what, choices = eval(formals(randProj)$what)) bad <- what == "classification" && is.null(classification) bad <- bad || (what == "uncertainty" && is.null(uncertainty)) bad <- bad || (what == "error" && (is.null(classification) || is.null(truth))) if(bad) stop("insufficient input for specified plot") main <- if(is.null(main) || is.character(main)) FALSE else as.logical(main) nullXlim <- is.null(xlim) nullYlim <- is.null(ylim) if(scale || length(seeds) > 1) { oldpar <- par(no.readonly = TRUE) on.exit(par(oldpar)) if(scale) par(pty = "s") if(length(seeds) > 1) par(ask = TRUE) } # if not provided get a seed at random if(length(seeds) == 0) { seeds <- as.numeric(Sys.time()) seeds <- (seeds - floor(seeds))*1e8 } for(seed in seeds) { set.seed(seed) # B <- orth2(d) B <- randomOrthogonalMatrix(d, 2) dataProj <- as.matrix(data) %*% B if(dim(dataProj)[2] != 2) stop("need two dimensions") if(nullXlim) xlim <- range(dataProj[,1]) if(nullYlim) ylim <- range(dataProj[,2]) if(scale) { d <- diff(xlim) - diff(ylim) if(d > 0) { ylim <- c(ylim[1] - d/2, ylim[2] + d/2.) } else { xlim <- c(xlim[1] + d/2, xlim[2] - d/2) } } switch(what, "classification" = { plot(dataProj[,1:2], type = "n", xlab = xlab, ylab = ylab, xlim = xlim, ylim = ylim, ...) for(k in 1:L) { I <- classification == U[k] points(dataProj[I,1:2], pch = symbols[k], col = colors[k], cex = CEX) } if(main) { TITLE <- paste("Random Projection showing Classification: seed = ", seed) title(TITLE) } }, "error" = { ERRORS <- classError(classification, truth)$misclassified plot(dataProj[, 1:2], type = "n", xlab = xlab, ylab = ylab, xlim = xlim, ylim = ylim, ...) 
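# overlay the projected points, distinguishing correctly classified
# observations (open symbols) from misclassified ones (filled symbols)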
if(main) { TITLE <- paste("Random Projection showing Errors: seed = ", seed) title(TITLE) } CLASSES <- unique(as.character(truth)) symOpen <- c(2, 0, 1, 5) symFill <- c(17, 15, 16, 18) good <- !ERRORS if(L > 4) { points(dataProj[good, 1:2], pch = 1, col = colors, cex = CEX) points(dataProj[!good, 1:2], pch = 16, cex = CEX) } else { for(k in 1:L) { K <- which(truth == CLASSES[k]) points(dataProj[K, 1:2], pch = symOpen[k], col = colors[k], cex = CEX) if(any(I <- intersect(K, ERRORS))) points(dataProj[I,1:2], pch = symFill[k], cex = CEX) } } }, "uncertainty" = { plot(dataProj[, 1:2], type = "n", xlab = xlab, ylab = ylab, xlim = xlim, ylim = ylim, main = "", ...) if(main) { TITLE <- paste("Random Projection showing Uncertainty: seed = ", seed) title(TITLE) } breaks <- quantile(uncertainty, probs = sort(quantiles)) I <- uncertainty <= breaks[1] points(dataProj[I, 1:2], pch = 16, col = "gray75", cex = 0.5 * CEX) I <- uncertainty <= breaks[2] & !I points(dataProj[I, 1:2], pch = 16, col = "gray50", cex = 1 * CEX) I <- uncertainty > breaks[2] & !I points(dataProj[I, 1:2], pch = 16, col = "black", cex = 1.5 * CEX) fillEllipses <- FALSE }, { plot(dataProj[, 1:2], type = "n", xlab = xlab, ylab = ylab, xlim = xlim, ylim = ylim, ...) if(main) { TITLE <- paste("Random Projection: seed = ", seed) title(TITLE) } points(dataProj[, 1:2], pch = PCH, cex = CEX) } ) muProj <- crossprod(B, mu) sigmaProj <- array(apply(cho, 3, function(R) crossprod(R %*% B)), c(2, 2, G)) if(haveParams && addEllipses) { ## plot ellipsoids for(g in 1:G) mvn2plot(mu = muProj[,g], sigma = sigmaProj[,,g], k = 15, fillEllipse = fillEllipses, col = if(fillEllipses) colors[g] else rep("grey30",3)) } } invisible(list(basis = B, data = dataProj, mu = muProj, sigma = sigmaProj)) } surfacePlot <- function(data, parameters, what = c("density", "uncertainty"), type = c("contour", "hdr", "image", "persp"), transformation = c("none", "log", "sqrt"), grid = 200, nlevels = 11, levels = NULL, prob = c(0.25, 0.5, 0.75), col = gray(0.7), col.palette = function(...) hcl.colors(..., "blues", rev = TRUE), hdr.palette = blue2grey.colors, xlim = NULL, ylim = NULL, xlab = NULL, ylab = NULL, main = FALSE, scale = FALSE, swapAxes = FALSE, verbose = FALSE, ...) 
{ data <- as.matrix(data) if(dim(data)[2] != 2) stop("data must be two dimensional") if(any(type == "level")) type[type == "level"] <- "hdr" # TODO: to be removed type <- match.arg(type, choices = eval(formals(surfacePlot)$type)) what <- match.arg(what, choices = eval(formals(surfacePlot)$what)) transformation <- match.arg(transformation, choices = eval(formals(surfacePlot)$transformation)) # densNuncer <- function(modelName, data, parameters) { if(is.null(parameters$variance$cholsigma)) { parameters$variance$cholsigma <- parameters$variance$sigma G <- dim(parameters$variance$sigma)[3] for(k in 1:G) parameters$variance$cholsigma[,,k] <- chol(parameters$variance$sigma[,,k]) } cden <- cdensVVV(data = data, parameters = parameters, logarithm = TRUE) pro <- parameters$pro if(!is.null(parameters$Vinv)) pro <- pro[-length(pro)] z <- sweep(cden, MARGIN = 2, FUN = "+", STATS = log(pro)) logden <- apply(z, 1, logsumexp) z <- sweep(z, MARGIN = 1, FUN = "-", STATS = logden) z <- exp(z) data.frame(density = exp(logden), uncertainty = 1 - apply(z, 1, max)) } pro <- parameters$pro mu <- parameters$mean sigma <- parameters$variance$sigma haveParams <- (!is.null(mu) && !is.null(sigma) && !is.null(pro) && !any(is.na(mu)) && !any(is.na(sigma)) && !(any(is.na(pro)))) if(haveParams) { G <- ncol(mu) dimpar <- dim(sigma) if(length(dimpar) != 3) { haveParams <- FALSE warning("covariance must be a 3D matrix") } if(G != dimpar[3]) { haveParams <- FALSE warning("means and variance parameters are incompatible") } mu <- array(mu, c(2, G)) sigma <- array(sigma, c(2, 2, G)) } else stop("need parameters to compute density") if(swapAxes) { if(haveParams) { parameters$pro <- pro[2:1] parameters$mean <- mu[2:1,] parameters$variance$sigma <- sigma[2:1, 2:1,] } data <- data[, 2:1] } main <- if(is.null(main) || is.character(main)) FALSE else as.logical(main) if(is.null(xlim)) xlim <- range(data[, 1]) if(is.null(ylim)) ylim <- range(data[, 2]) if(scale) { par(pty = "s") d <- diff(xlim) - diff(ylim) if(d > 0) { ylim <- c(ylim[1] - d/2, ylim[2] + d/2) } else { xlim <- c(xlim[1] + d/2, xlim[2] - d/2) } } dnames <- dimnames(data)[[2]] if(is.null(xlab)) { xlab <- if(is.null(dnames)) "" else dnames[1] } if(is.null(ylab)) { ylab <- if(is.null(dnames)) "" else dnames[2] } if(length(grid) == 1) grid <- c(grid, grid) x <- grid1(n = grid[1], range = xlim, edge = TRUE) y <- grid1(n = grid[2], range = ylim, edge = TRUE) xy <- grid2(x, y) if(verbose) message("computing density and uncertainty over grid ...") Z <- densNuncer(modelName = "VVV", data = xy, parameters = parameters) lx <- length(x) ly <- length(y) # switch(what, "density" = { zz <- matrix(Z$density, lx, ly) title2 <- "Density" }, "uncertainty" = { zz <- matrix(Z$uncertainty, lx, ly) title2 <- "Uncertainty" }, stop("what improperly specified")) # switch(transformation, "none" = { title1 <- "" }, "log" = { zz <- log(zz) title1 <- "log" }, "sqrt" = { zz <- sqrt(zz) title1 <- "sqrt" }, stop("transformation improperly specified")) # switch(type, "contour" = { title3 <- "Contour" if(is.null(levels)) levels <- pretty(zz, nlevels) contour(x = x, y = y, z = zz, levels = levels, xlab = xlab, ylab = ylab, col = col, main = "", ...) }, "hdr" = { title3 <- "HDR level" z <- densNuncer(modelName = "VVV", data = data, parameters = parameters)$density levels <- c(sort(hdrlevels(z, prob)), 1.1*max(z)) plot(x, y, type = "n", xlab = xlab, ylab = ylab, ...) 
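# keep only the arguments accepted by .filled.contour() before drawing the
# highest density regions with the hdr.palette colours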
fargs <- formals(".filled.contour") dargs <- c(list(x = x, y = y, z = zz, levels = levels, col = hdr.palette(length(levels))), args) dargs <- dargs[names(dargs) %in% names(fargs)] fargs[names(dargs)] <- dargs do.call(".filled.contour", fargs) }, "image" = { title3 <- "Image" col <- col.palette(nlevels) if(length(col) == 1) { if(!is.null(levels)) nlevels <- length(levels) col <- mapply(adjustcolor, col = col, alpha.f = seq(0.1, 1, length = nlevels)) } image(x = x, y = y, z = zz, xlab = xlab, ylab = ylab, col = col, main = "", ...) }, "persp" = { title3 <- "Perspective" dots <- list(...) if(is.null(dots$theta)) dots$theta <- -30 if(is.null(dots$phi)) dots$phi <- 20 if(is.null(dots$expand)) dots$expand <- 0.6 p3d <- do.call("persp", c(list(x = x, y = y, z = zz, border = NA, xlab = xlab, ylab = ylab, col = col, zlab = "Density", main = ""), dots)) ii <- floor(seq(1, length(y), length.out = 2*nlevels)) for(i in ii[-c(1,length(ii))]) lines(trans3d(x, y[i], zz[,i], pmat = p3d)) ii <- floor(seq(1, length(x), length.out = 2*nlevels)) for(i in ii[-c(1,length(ii))]) lines(trans3d(x[i], y, zz[i,], pmat = p3d)) } ) if(main) { TITLE <- paste(c(title1, title2, title3, "Plot"), collapse = " ") title(TITLE) } invisible(list(x = x, y = y, z = zz)) } uncerPlot <- function (z, truth=NULL, ...) { oldpar <- par(no.readonly = TRUE) on.exit(par(oldpar)) par(pty = "m") uncer <- 1 - apply(z, 1, max) ord <- order(uncer) M <- max(uncer) plot(uncer[ord], type = "n", xaxt = "n", ylim = c(-(M/32), M), ylab = "uncertainty", xlab = "observations in order of increasing uncertainty") points(uncer[ord], pch = 15, cex = 0.5) lines(uncer[ord]) abline(h = c(0, 0), lty = 3) if (!is.null(truth)) { truth <- as.numeric(as.factor(truth)) n <- length(truth) result <- map(z) bad <- classError(result, truth)$misclassified if(length(bad)) { for(i in bad) { x <- (1:n)[ord == i] lines(c(x, x), c(-(0.5/32), uncer[i]), lty = 1) } } } invisible() } blue2grey.colors <- function(n) { # manually selected basecol <- c("#E6E6E6", "#bcc9d1", "#6c7f97", "#3e5264") # selected using colorspace::sequential_hcl(5, palette = "blues2") # basecol <- c("#023FA5", "#6A76B2", "#A1A6C8", "#CBCDD9", "#E2E2E2") palette <- grDevices::colorRampPalette(basecol, space = "Lab") palette(n) } bubble <- function(x, cex = c(0.2, 3), alpha = c(0.1, 1)) { x <- as.vector(x) cex <- cex[!is.na(cex)] alpha <- alpha[!is.na(alpha)] x <- (x - min(x))/(max(x) - min(x) + sqrt(.Machine$double.eps)) n <- length(x) r <- sqrt(x/pi) r <- (r - min(r, na.rm = TRUE))/ (max(r, na.rm = TRUE) - min(r, na.rm = TRUE) + sqrt(.Machine$double.eps)) cex <- r * diff(range(cex)) + min(cex) alpha <- x * diff(range(alpha)) + min(alpha) return(list(cex = cex, alpha = alpha)) } grid1 <- function(n, range = c(0, 1), edge = TRUE) { if(any(n < 0 | round(n) != n)) stop("n must be nonpositive and integer") G <- rep(0, n) if(edge) { G <- seq(from = min(range), to = max(range), by = abs(diff(range))/(n-1)) } else { lj <- abs(diff(range)) incr <- lj/(2 * n) G <- seq(from = min(range) + incr, to = max(range) - incr, by = 2 * incr) } return(G) } grid2 <- function(x, y) { lx <- length(x) ly <- length(y) xy <- matrix(0, nrow = lx * ly, ncol = 2) l <- 0 for(j in 1:ly) { for(i in 1:lx) { l <- l + 1 xy[l,] <- c(x[i], y[j]) } } return(xy) } vpoints <- function(x, y, col, cex = 1, ...) { xy <- xy.coords(x, y) symbols(xy$x, xy$y, add = TRUE, inches = 0.2*cex, fg = if(missing(col)) par("col") else col, rectangles = matrix(c(0,1), nrow = length(xy$x), ncol = 2, byrow = TRUE), ...) 
} mclust/R/mclustaddson.R0000644000176200001440000023620613507676642014636 0ustar liggesusers############################################################################## ### EVV model #### ############################################################################## emEVV <- function(data, parameters, prior = NULL, control = emControl(), warn = NULL, ...) { z <- estepEVV(data, parameters = parameters, warn = warn)$z meEVV(data, z = z, prior = prior, control = control, Vinv = parameters$Vinv, warn = warn) } #### meEVV <- function(data, z, prior = NULL, control = emControl(), Vinv = NULL, warn = NULL, ...) { if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- (is.null(dimdat) || NCOL(data) == 1) if(oneD || length(dimdat) != 2) stop("data should in the form of a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) z <- as.matrix(z) dimz <- dim(z) if(dimz[1] != n) stop("data and z should have the same row dimension") K <- dimz[2] if (!is.null(Vinv)) { G <- K - 1 if(Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE) } else G <- K # if(all(is.na(z))) { WARNING <- "z is missing" if(warn) warning(WARNING) variance <- list(modelName = "EVV", d = p, G = G, scale = NA, shape = rep(NA,p), orientation = array(NA,c(p,p,G))) parameters <- list(pro=rep(NA,G), mean=matrix(NA,p,G), variance=variance, Vinv=Vinv) return(structure(list(modelName="EVV", prior=prior, n=n, d=p, G=G, z=z, parameters=parameters, control=control, loglik=NA), WARNING = WARNING, returnCode = 9)) } if(any(is.na(z)) || any(z < 0) || any(z > 1)) stop("improper specification of z") lwork <- max(3 * min(n, p) + max(n, p), 5 * min(n, p)) storage.mode(z) <- "double" # # # MICHAEL from here------------------------------------------ # # without prior specification if(is.null(prior)) { temp <- .Fortran( "meevv", x = as.double(data), z = as.double(z), n = as.integer(n), p = as.integer(p), G = as.integer(G), Gnoise = as.integer(K), mu = double(p*G), O = double(p*p*G), shape.o = double(p*p*G), scale = double(G), shape = double(p*G), pro = double(K), Vinv = as.double( if (is.null(Vinv)) -1 else Vinv ), loglik = double(1), eqpro = as.logical(control$equalPro), itmax = as.integer(control$itmax[1]), tol = as.double(control$tol[1]), eps = as.double(control$eps), niterout = integer(1), errout = double(1), lwork = as.integer(lwork), info = as.integer(0), PACKAGE = "mclust") } else { # with prior priorParams <- do.call(prior$functionName, c(list(data = data, G = G, modelName = "EVV"), prior[names(prior) != "functionName"])) # temp <- .Fortran("meevvp", ...) 
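# a conjugate prior is not currently implemented for the EVV model: build a
# placeholder result and return it with a warning and returnCode -1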
temp <- list(x = data, z = z, n = n, p = p, G = G, mu = double(p*G), O = double(p*p*G), shape.o = double(p*p*G), scale = double(G), shape = double(p*G), pro = double(K), loglik = NA, eqpro = as.logical(control$equalPro), itmax = as.integer(control$itmax[1]), tol = as.double(control$tol[1]), eps = as.double(control$eps), niterout = integer(1), errout = double(1), lwork = as.integer(lwork), info = FALSE) WARNING <- "EVV model is not available with prior" if(warn) warning(WARNING) temp <- structure(temp, info = NA, WARNING = WARNING, returnCode = -1) return(temp) } # z <- matrix(temp$z, n,K) loglik <- temp$loglik mu <- matrix(temp$mu, p,G) dimnames(mu) <- list(NULL, as.character(1:G)) scale <- temp$scale[1] shape <- matrix(temp$shape, p,G) O <- aperm( array(temp$O, c(p,p,G)), c(2,1,3) ) shape.o <- array( temp$shape.o, c(p,p,G) ) pro <- temp$pro niterout <- temp$niterout errout <- temp$errout lapackSVDinfo <- temp$info if( !is.finite(loglik) | any(scale > signif(.Machine$double.xmax, 6)) | any(shape > signif(.Machine$double.xmax, 6)) | any(O > signif(.Machine$double.xmax, 6)) | any(is.nan(scale)) | any(is.nan(shape)) | any(is.nan(O)) ) { loglik <- .Machine$double.xmax } # WARNING <- NULL if(lapackSVDinfo) { if(lapackSVDinfo > 0) { WARNING <- "LAPACK DGESVD fails to converge" } else { WARNING <- "input error for LAPACK DGESVD" } if(warn) warning(WARNING) z[] <- O[] <- shape[] <- NA scale <- loglik <- NA sigma <- array(NA, c(p, p, G)) ret <- -9 # } else if( loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "singular covariance" if(warn) warning(WARNING) shape[] <- NA mu[] <- pro[] <- z[] <- loglik <- NA sigma <- array(NA, c(p, p, G)) ret <- -1 } else if(loglik < - signif(.Machine$double.xmax, 6)) { if(control$equalPro) { WARNING <- "a z column sum fell below threshold" if(warn) warning(WARNING) } else { WARNING <- "mixing proportion fell below threshold" if(warn) warning(WARNING) } mu[] <- pro[] <- z[] <- loglik <- NA sigma <- array(NA, c(p, p, G)) ret <- if(control$equalPro) -2 else -3 } else { # scale <- sum(scale)/n sigma <- scale * shape.o if(niterout >= control$itmax[1]) { WARNING <- "iteration limit reached" if(warn) warning(WARNING) niterout <- - niterout ret <- 1 } else ret <- 0 } info <- list(iterations = niterout, error = errout) # info <- c(iterations = its, error = err) dimnames(z) <- list(dimnames(data)[[1]], NULL) dimnames(mu) <- list(dimnames(data)[[2]], NULL) dimnames(O) <- list(dimnames(data)[[2]], dimnames(data)[[2]], NULL) ## Sigma = scale * O %*% diag(shape) %*% t(O) variance <- list(modelName = "EVV", d = p, G = G, sigma = sigma, scale = scale, shape = shape, orientation = O) parameters <- list(pro=pro, mean=mu, variance=variance, Vinv=Vinv) structure(list(modelName = "EVV", prior = prior, n = n, d = p, G = G, z = z, parameters = parameters, control = control, loglik = loglik), info = info, WARNING = WARNING, returnCode = ret) } #### mstepEVV <- function(data, z, prior = NULL, warn = NULL, ...) 
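# M-step for the EVV model (equal volume, varying shape and orientation):
# given the data and the conditional probabilities z, estimate the mixing
# proportions, means, and the decomposition Sigma_k = lambda * D_k A_k D_k'
# via the Fortran routine 'msevv'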
{ if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- (is.null(dimdat) || NCOL(data) == 1) if(oneD || length(dimdat) != 2) stop("data should be a matrix or a vector") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) ## z <- as.matrix(z) dimz <- dim(z) if(dimz[1] != n) stop("row dimension of z should equal data length") G <- dimz[2] if(all(is.na(z))) { WARNING <- "z is missing" if(warn) warning(WARNING) variance <- list(modelName = "EVV", d = p, G = G, scale = NA, shape = rep(NA,p), orientation=array(NA,c(p,p,G))) parameters <- list(pro=rep(NA,G), mean=matrix(NA,p,G), variance=variance) return(structure(list(modelName="EVV", prior=prior, n=n, d=p, G=G, z=z, parameters=parameters), WARNING = WARNING, returnCode = 9)) } if(any(is.na(z)) || any(z < 0) || any(z > 1)) stop("improper specification of z") lwork <- max(3 * min(n, p) + max(n, p), 5 * min(n, p), G) # # MICHAEL from here------------------------------------------------------- # # without prior specification if(is.null(prior)) { temp <- .Fortran( "msevv", x = as.double(data), z = as.double(z), n = as.integer(n), p = as.integer(p), G = as.integer(G), mu = double(p*G), O = double(p*p*G), shape.o = double(p*p*G), scale = double(G), shape = double(p*G), pro = double(G), lwork = as.integer(lwork), info = as.integer(0), eps = as.double(.Machine$double.eps), PACKAGE = "mclust") } else { # with prior priorParams <- do.call(prior$functionName, c(list(data = data, G = G, modelName = "EVV"), prior[names(prior) != "functionName"])) # temp <- .Fortran("meevvp", ...) temp <- list(x = data, z = z, n = n, p = p, G = G, mu = double(p*G), O = double(p*p*G), shape.o = double(p*p*G), scale = double(G), shape = double(p*G), pro = double(G), lwork = as.integer(lwork), info = FALSE, eps = as.double(.Machine$double.eps)) WARNING <- "EVV model is not available with prior" if(warn) warning(WARNING) } # lapackSVDinfo <- temp$info mu <- matrix(temp$mu, p,G) dimnames(mu) <- list(NULL, as.character(1:G)) scale <- temp$scale[1] # lambda O <- aperm( array(temp$O, c(p,p,G)), c(2,1,3) ) shape.o <- array( temp$shape.o, c(p,p,G) ) shape <- matrix(temp$shape, p,G) pro <- temp$pro WARNING <- NULL # if(lapackSVDinfo) { if(lapackSVDinfo > 0) { WARNING <- "LAPACK DGESVD fails to converge" ret <- -4 } else { WARNING <- "input error for LAPACK DGESVD" ret <- -5 } if(warn) warning(WARNING) O[] <- shape[] <- scale[] <- NA sigma <- array(NA, c(p, p, G)) # } else if( any(abs(c(scale, shape)) > signif(.Machine$double.xmax, 6)) ) { WARNING <- "cannot compute M-step" if(warn) warning(WARNING) mu[] <- pro[] <- scale <- shape[] <- O[] <- NA sigma <- array(NA, c(p, p, G)) ret <- -1 } else { # scale <- sum(scale)/n # scale <- sum(scale)/sum(z) # lambda --> if noise, see help(mstep) sigma <- scale * shape.o ret <- 0 } dimnames(z) <- list(dimnames(data)[[1]], NULL) dimnames(mu) <- dimnames(shape) <- list(dimnames(data)[[2]], NULL) dimnames(sigma) <- dimnames(O) <- list(dimnames(data)[[2]], dimnames(data)[[2]], NULL) variance <- list(modelName = "EVV", d = p, G = G, sigma = sigma, scale = scale, shape = shape, orientation = O) parameters <- list(pro=pro, mean=mu, variance=variance) structure(list(modelName = "EVV", prior = prior, n = n, d = p, G = G, z = z, parameters = parameters), WARNING = WARNING, returnCode = ret) } #### estepEVV <- function(data, parameters, warn = NULL, ...) 
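# E-step for the EVV model: compute the conditional membership probabilities z
# and the log-likelihood from the current parameters, adding a uniform noise
# component (with density Vinv) when parameters$pro has one extra element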
{ if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) if(is.null(dimdat) || length(dimdat) != 2) stop("data must be a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) pro <- parameters$pro pro <- pro/sum(pro) l <- length(pro) mu <- as.matrix(parameters$mean) scale <- parameters$variance$scale shape <- parameters$variance$shape O <- parameters$variance$orientation G <- ncol(mu) noise <- l == G + 1 if(!noise) { if(l != G) stop("pro improperly specified") K <- G Vinv <- NULL } else { K <- G + 1 Vinv <- parameters$Vinv if(is.null(Vinv) || Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE) } if(any(is.na(unlist(parameters[c("pro", "mean", "variance")]))) || any(is.null(parameters[c("pro", "mean", "variance")]))) { WARNING <- "parameters are missing" if(warn) warning(WARNING) z <- matrix(NA,n,K) dimnames(z) <- list(dimnames(data)[[1]], NULL) return(structure(list(modelName = "EVV", n=n, d=p, G=G, z=z, parameters=parameters, loglik=NA), WARNING = WARNING, returnCode = 9)) } if (is.null(parameters$variance$scale) || is.null(parameters$variance$shape) || is.null(parameters$variance$orientation)) stop("variance parameters are missing") # # MICHAEL from here---------------------------------------------- # temp <- .Fortran( "esevv", x = as.double(data), z = double(n*K), n = as.integer(n), p = as.integer(p), G = as.integer(G), Gnoise = as.integer(K), mu = as.double(mu), O = as.double( aperm(O, c(2,1,3)) ), scale = as.double(scale), shape = as.double(shape), pro = as.double(pro), Vinv = as.double( if (is.null(Vinv)) -1 else Vinv ), loglik = double(1), eps = as.double(.Machine$double.eps), PACKAGE = "mclust") # loglik <- temp$loglik z <- matrix(temp$z, n,K) WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "singular covariance" if(warn) warning(WARNING) z[] <- loglik <- NA ret <- -1 } else ret <- 0 dimnames(z) <- list(dimnames(data)[[1]],NULL) structure(list(modelName = "EVV", n = n, d = p, G = G, z = z, parameters = parameters, loglik = loglik), WARNING = WARNING, returnCode = ret) } #### cdensEVV <- function(data, logarithm = FALSE, parameters, warn = NULL, ...) 
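# per-component (log-)densities for the EVV model: calls the 'esevv' routine
# with pro = -1 and Vinv = -1 so that unweighted component log-densities are
# returned; these are exponentiated unless logarithm = TRUE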
{ if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) if(is.null(dimdat) || length(dimdat) != 2) stop("data must be a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) pro <- parameters$pro pro <- pro/sum(pro) l <- length(pro) mu <- as.matrix(parameters$mean) scale <- parameters$variance$scale shape <- parameters$variance$shape O <- parameters$variance$orientation p <- ncol(data) G <- ncol(mu) if(any(is.na(unlist(parameters[c("pro", "mean", "variance")]))) || any(is.null(parameters[c("pro", "mean", "variance")]))) { WARNING <- "parameters are missing" if(warn) warning(WARNING) z <- matrix(NA,n,G) dimnames(z) <- list(dimnames(data)[[1]], NULL) return(structure(z, logarithm = logarithm, modelName = "EVV", WARNING = WARNING, returnCode = 9)) } if (is.null(parameters$variance$scale) || is.null(parameters$variance$shape) || is.null(parameters$variance$orientation)) stop("variance parameters are missing") # temp <- .Fortran( "esevv", x = as.double(data), z = double(n*G), n = as.integer(n), p = as.integer(p), G = as.integer(G), Gnoise = as.integer(G), mu = as.double(mu), O = as.double( aperm(O, c(2,1,3)) ), scale = as.double(scale), shape = as.double(shape), pro = as.double(-1), Vinv = as.double(-1), loglik = double(1), eps = as.double(.Machine$double.eps), PACKAGE = "mclust") # loglik <- temp$loglik z <- matrix(temp$z, n,G) WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "singular covariance" if(warn) warning(WARNING) z[] <- NA ret <- -1 } else { if (!logarithm) z <- exp(z) ret <- 0 } dimnames(z) <- list(dimnames(data)[[1]],NULL) structure(z, logarithm = logarithm, modelName = "EVV", WARNING = WARNING, returnCode = ret) } ### simEVV <- function(parameters, n, seed = NULL, ...) { if (!is.null(seed)) set.seed(seed) mu <- as.matrix(parameters$mean) d <- nrow(mu) G <- ncol(mu) if (any(is.na(parameters[c("mean", "variance")])) || any(is.null(parameters[c("mean", "variance")]))) { warn <- "parameters are missing" warning("parameters are missing") return(structure(matrix(as.double(NA), n, d + 1), modelName = "EVV")) } pro <- parameters$pro if (is.null(pro)) pro <- rep(1/G, G) clabels <- sample(1:G, size = n, replace = TRUE, prob = pro) ctabel <- tabulate(clabels, nbins = G) x <- matrix(0, n, d) rtshape <- sqrt(parameters$variance$shape) if (dim(rtshape)[1] != d | dim(rtshape)[2] != G) stop("shape incompatible with mean") rtscale <- sqrt(parameters$variance$scale) for (k in 1:G) { m <- ctabel[k] sss <- rtscale * rtshape[,k] cholSigma <- t(parameters$variance$orientation[,,k]) * sss x[clabels == k, ] <- sweep( matrix(rnorm(m*d), nrow = m, ncol = d) %*% cholSigma, MARGIN = 2, STATS = mu[,k], FUN = "+" ) } dimnames(x) <- list(NULL, 1:d) structure(cbind(group = clabels, x), modelName = "EVV") } ############################################################################## ### VEE model #### ############################################################################## emVEE <- function(data, parameters, prior = NULL, control = emControl(), warn = NULL, ...) { z <- estepVEE(data, parameters = parameters, warn = warn)$z meVEE(data, z = z, prior = prior, control = control, Vinv = parameters$Vinv, warn = warn) } #### meVEE <- function(data, z, prior = NULL, control = emControl(), Vinv = NULL, warn = NULL, ...) 
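# EM for the VEE model (varying volume, equal shape and orientation):
# Sigma_k = lambda_k * D A D'; the common matrix C = D A D' is estimated
# iteratively, so both the inner (itmax[2], tol[2]) and the outer
# (itmax[1], tol[1]) control settings are used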
{ if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- (is.null(dimdat) || NCOL(data) == 1) if(oneD || length(dimdat) != 2) stop("data should in the form of a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) z <- as.matrix(z) dimz <- dim(z) if(dimz[1] != n) stop("data and z should have the same row dimension") K <- dimz[2] if (!is.null(Vinv)) { G <- K - 1 if(Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE) } else G <- K if(all(is.na(z))) { WARNING <- "z is missing" if(warn) warning(WARNING) variance <- list(modelName = "VEE", d = p, G = G, scale=rep(NA,G), shape=rep(NA,p), orientation=array(NA,c(p,p,G))) parameters <- list(pro=rep(NA,G), mean=matrix(NA,p,G), variance=variance) return(structure(list(modelName="VEE", prior=prior, n=n, d=p, G=G, z=z, parameters=parameters, control=control, loglik=NA), WARNING = WARNING, returnCode = 9)) } if(any(is.na(z)) || any(z < 0) || any(z > 1)) stop("improper specification of z") lwork <- max(3 * min(n, p) + max(n, p), 5 * min(n, p), p + G) storage.mode(z) <- "double" # # MICHAEL from here------------------------------------------------------- # # without prior specification if(is.null(prior)) { temp <- .Fortran("mevee", x = as.double(data), z = as.double(z), n = as.integer(n), p = as.integer(p), G = as.integer(G), Gnoise = as.integer(K), mu = double(p*G), C = double(p*p), U = double(p*p*G), scale = double(G), shape = double(p), pro = double(K), Vinv = as.double( if (is.null(Vinv)) -1 else Vinv ), loglik = double(1), eqpro = as.logical(control$equalPro), itmaxin = as.integer(control$itmax[2]), tolin = as.double(control$tol[2]), itmaxout = as.integer(control$itmax[1]), tolout = as.double(control$tol[1]), eps = as.double(control$eps), niterin = integer(1), errin = double(1), niterout = integer(1), errout = double(1), lwork = as.integer(lwork), info = as.integer(0), PACKAGE = "mclust") # } else { # with prior priorParams <- do.call(prior$functionName, c(list(data = data, G = G, modelName = "VEE"), prior[names(prior) != "functionName"])) # temp <- .Fortran("meveep", ...) 
temp <- list(x = data, z = z, n = n, p = p, G = G, mu = double(p*G), C = double(p*p), U = double(p*p*G), scale = double(G), shape = double(p), pro = double(K), loglik = NA, eqpro = as.logical(control$equalPro), itmaxin = as.integer(control$itmax[2]), tolin = as.double(control$tol[2]), itmaxout = as.integer(control$itmax[1]), tolout = as.double(control$tol[1]), eps = as.double(control$eps), niterin = integer(1), errin = double(1), niterout = integer(1), errout = double(1), lwork = as.integer(lwork), info = FALSE) WARNING <- "VEE model is not available with prior" if(warn) warning(WARNING) temp <- structure(temp, info = NA, WARNING = WARNING, returnCode = -1) return(temp) } z <- matrix(temp$z, n,K) niterin <- temp$niterin errin <- temp$errin niterout <- temp$niterout errout <- temp$errout loglik <- temp$loglik lapackSVDinfo <- temp$info mu <- matrix(temp$mu, p,G) dimnames(mu) <- list(NULL, as.character(1:G)) scale <- temp$scale shape <- temp$shape shape.o <- matrix(temp$C, p,p) O <- if(any(is.nan(shape.o))) shape.o else svd(shape.o, nu = 0)$v pro <- temp$pro if( !is.finite(loglik) | any(scale > signif(.Machine$double.xmax, 6)) | any(shape > signif(.Machine$double.xmax, 6)) | any(O > signif(.Machine$double.xmax, 6)) | any(is.nan(scale)) | any(is.nan(shape)) | any(is.nan(O)) ) { loglik <- .Machine$double.xmax } # WARNING <- NULL if(lapackSVDinfo) { if(lapackSVDinfo > 0) { WARNING <- "LAPACK DSYEV or DPOTRI fails to converge" } else { WARNING <- "input error for LAPACK DPOTRF, DSYEV or DPOTRI" } if(warn) warning(WARNING) O[] <- shape[] <- scale[] <- NA mu[] <- pro[] <- z[] <- loglik <- NA sigma <- array(NA, c(p, p, G)) ret <- -9 # } else if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "singular covariance" if(warn) warning(WARNING) O[] <- shape[] <- scale[] <- NA mu[] <- pro[] <- z[] <- loglik <- NA sigma <- array(NA, c(p, p, G)) ret <- -1 } else if(loglik < - signif(.Machine$double.xmax, 6)) { if(control$equalPro) { WARNING <- "z column sum fell below threshold" if(warn) warning(WARNING) } else { WARNING <- "mixing proportion fell below threshold" if(warn) warning(WARNING) } mu[] <- pro[] <- z[] <- loglik <- NA sigma <- array(NA, c(p, p, G)) ret <- if(control$equalPro) -2 else -3 } else { sigma <- sweep( array(shape.o, c(p,p,G)), 3, FUN = "*", STATS = scale ) if(niterin >= control$itmax[2]) { WARNING <- "inner iteration limit reached" if(warn) warning(WARNING) niterin <- - niterin ret <- 2 } else if(niterout >= control$itmax[1]) { WARNING <- "iteration limit reached" if(warn) warning(WARNING) niterout <- - niterout ret <- 1 } else ret <- 0 } info <- structure(c(niterout = niterout, errout = errout), inner = c(niterin = niterin, errin = errin)) # info <- structure(c(iterations = its, error = err), # inner = c(iterations = inner, error = inerr)) dimnames(z) <- list(dimnames(data)[[1]],NULL) dimnames(mu) <- list(dimnames(data)[[2]], NULL) dimnames(sigma) <- dimnames(O) <- list(dimnames(data)[[2]], dimnames(data)[[2]]) ## Sigma = scale * O %*% diag(shape) %*% t(O) variance <- list(modelName = "VEE", d = p, G = G, sigma = sigma, scale = scale, shape = shape, orientation = O) parameters <- list(pro=pro, mean=mu, variance=variance, Vinv=Vinv) structure(list(modelName = "VEE", prior = prior, n = n, d = p, G = G, z = z, parameters = parameters, control = control, loglik = loglik), info = info, WARNING = WARNING, returnCode = ret) } #### mstepVEE <- function(data, z, prior = NULL, warn = NULL, control = NULL, ...) 
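# M-step for the VEE model, calling the Fortran routine 'msvee': proportions,
# means, the component scales lambda_k, and the common shape/orientation
# matrix C are estimated, the latter by an inner iterative algorithm
# controlled by control$itmax[2] and control$tol[2]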
{ if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- (is.null(dimdat) || NCOL(data) == 1) if(oneD || length(dimdat) != 2) stop("data should be a matrix or a vector") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) ## z <- as.matrix(z) dimz <- dim(z) if(dimz[1] != n) stop("row dimension of z should equal data length") G <- dimz[2] if(all(is.na(z))) { WARNING <- "z is missing" if(warn) warning(WARNING) variance <- list(modelName = "VEE", d = p, G = G, scale = rep(NA,G), shape = rep(NA,p), orientation = array(NA,c(p,p,G))) parameters <- list(pro=rep(NA,G), mean=matrix(NA,p,G), variance=variance) return(structure(list(modelName="VEE", prior=prior, n=n, d=p, G=G, z=z, parameters=parameters, control=control, loglik=NA), WARNING = WARNING, returnCode = 9)) WARNING <- "z is missing" if(warn) warning(WARNING) return(structure(list( n = n, d = p, G = G, mu = matrix(NA,p, G), sigma = array(NA, c(p, p, G)), decomp = list(d = p, G = G, scale = rep(NA, G), shape = rep(NA, p), orientation = array(NA, c(p, p, G))), pro = rep(NA,G), modelName = "VEE", prior = prior), WARNING = WARNING)) } # shape <- sqrt(rev(sort(shape/exp(sum(log(shape))/p)))) if(any(is.na(z)) || any(z < 0) || any(z > 1)) stop("improper specification of z") if (is.null(control)) control <- emControl() itmax <- if(length(control$itmax) == 1) control$itmax else control$itmax[2] tol <- if(length(control$tol) == 1) control$tol else control$tol[2] lwork <- max(3 * min(n, p) + max(n, p), 5 * min(n, p), p + G) # # MICHAEL from here------------------------------------------------------- # # without prior specification if(is.null(prior)) { temp <- .Fortran( "msvee", x = as.double(data), z = as.double(z), n = as.integer(n), p = as.integer(p), G = as.integer(G), mu = double(p*G), U = double(p*p*G), C = double(p*p), scale = as.double( rep(1,G) ), pro = double(G), lwork = as.integer(lwork), info = as.integer(0), itmax = as.integer(itmax), tol = as.double(tol), niterin = integer(1), errin = double(1), # eps = as.double(.Machine$double.eps), PACKAGE = "mclust") } else { # with prior priorParams <- do.call(prior$functionName, c(list(data = data, G = G, modelName = "VEE"), prior[names(prior) != "functionName"])) # # temp <- .Fortran("msveep", ...) 
temp <- list(x = data, z = z, n = n, p = p, G = G, mu = double(p*G), U = double(p*p*G), C = double(p*p), scale = double(G), pro = double(G), lwork = as.integer(lwork), info = FALSE, itmax = as.integer(itmax), tol = as.double(tol), niterin = integer(1), errin = double(1), eps = as.double(control$eps)) WARNING <- "VEE model is not available with prior" if(warn) warning(WARNING) } lapackSVDinfo <- temp$info errin <- temp$errin niterin <- temp$niterin mu <- matrix(temp$mu, p,G) dimnames(mu) <- list(NULL, as.character(1:G)) scale <- temp$scale shape.o <- matrix(temp$C, p,p) SVD <- svd(shape.o, nu = 0) shape <- SVD$d O <- SVD$v pro <- temp$pro WARNING <- NULL # if(lapackSVDinfo) { if(lapackSVDinfo > 0) { WARNING <- "LAPACK DSYEV or DPOTRI fails to converge" } else { WARNING <- "input error for LAPACK DPOTRF, DSYEV or DPOTRI" } if(warn) warning(WARNING) O[] <- shape[] <- scale[] <- NA sigma <- array(NA, c(p, p, G)) ret <- -9 # } else if(any(c(scale, shape) > signif(.Machine$double.xmax, 6))) { WARNING <- "cannot compute M-step" if(warn) warning(WARNING) mu[] <- pro[] <- O[] <- shape[] <- scale[] <- NA sigma <- array(NA, c(p, p, G)) ret <- -1 } else { sigma <- sweep( array(shape.o, c(p,p,G)), 3, FUN = "*", STATS = scale ) if(niterin >= itmax) { WARNING <- "inner iteration limit reached" if(warn) warning(WARNING) niterin <- - niterin } ret <- 2 } info <- c(iteration = niterin, error = errin) dimnames(z) <- list(dimnames(data)[[1]], NULL) dimnames(mu) <- list(dimnames(data)[[2]], NULL) dimnames(sigma) <- list(dimnames(data)[[2]], dimnames(data)[[2]]) dimnames(O) <- list(dimnames(data)[[2]], dimnames(data)[[2]]) variance <- list(modelName = "VEE", d = p, G = G, sigma = sigma, scale = scale, shape = shape, orientation = O) parameters <- list(pro=pro, mean=mu, variance=variance) structure(list(modelName = "VEE", prior = prior, n = n, d = p, G = G, z = z, parameters = parameters, control = control), info = info, WARNING = WARNING, returnCode = ret) } ### estepVEE <- function(data, parameters, warn = NULL, ...) 
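# E-step for the VEE model via the Fortran routine 'esvee'; an optional noise
# component is handled through Vinv, mirroring estepEVV()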
{ if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) if(is.null(dimdat) || length(dimdat) != 2) stop("data must be a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) pro <- parameters$pro pro <- pro/sum(pro) l <- length(pro) mu <- as.matrix(parameters$mean) scale <- parameters$variance$scale shape <- parameters$variance$shape O <- parameters$variance$orientation G <- ncol(mu) noise <- l == G + 1 if(!noise) { if(l != G) stop("pro improperly specified") K <- G Vinv <- NULL } else { K <- G + 1 Vinv <- parameters$Vinv if(is.null(Vinv) || Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE) } if(any(is.na(unlist(parameters[c("pro", "mean", "variance")]))) || any(is.null(parameters[c("pro", "mean", "variance")]))) { WARNING <- "parameters are missing" if(warn) warning(WARNING) z <- matrix(NA,n,K) dimnames(z) <- list(dimnames(data)[[1]], NULL) return(structure(list(modelName = "VEE", n=n, d=p, G=G, z=z, parameters=parameters, loglik=NA), WARNING = WARNING, returnCode = 9)) } if (is.null(parameters$variance$scale) || is.null(parameters$variance$shape) || is.null(parameters$variance$orientation)) stop("variance parameters are missing") # # MICHAEL from here------------------------------------------------------- # temp <- .Fortran( "esvee", x = as.double(data), z = double(n*K), n = as.integer(n), p = as.integer(p), G = as.integer(G), Gnoise = as.integer(K), mu = as.double(mu), O = as.double( t(O) ), scale = as.double(scale), shape = as.double(shape), pro = as.double(pro), Vinv = as.double( if (is.null(Vinv)) -1 else Vinv ), loglik = double(1), eps = as.double(.Machine$double.eps), PACKAGE = "mclust") # loglik <- temp$loglik z <- matrix(temp$z, n,K) WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "cannot compute E-step" if(warn) warning(WARNING) z[] <- loglik <- NA ret <- -1 } else ret <- 0 dimnames(z) <- list(dimnames(data)[[1]],NULL) structure(list(modelName = "VEE", n = n, d = p, G = G, z = z, parameters = parameters, loglik = loglik), WARNING = WARNING, returnCode = ret) } #### cdensVEE <- function(data, logarithm = FALSE, parameters, warn = NULL, ...) 
{ if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) if(is.null(dimdat) || length(dimdat) != 2) stop("data must be a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) pro <- parameters$pro pro <- pro/sum(pro) l <- length(pro) mu <- as.matrix(parameters$mean) scale <- parameters$variance$scale shape <- parameters$variance$shape O <- parameters$variance$orientation p <- ncol(data) G <- ncol(mu) if(any(is.na(unlist(parameters[c("pro", "mean", "variance")]))) || any(is.null(parameters[c("pro", "mean", "variance")]))) { WARNING <- "parameters are missing" if(warn) warning(WARNING) z <- matrix(NA,n,G) dimnames(z) <- list(dimnames(data)[[1]], NULL) return(structure(z, logarithm = logarithm, modelName = "VEE", WARNING = WARNING, returnCode = 9)) } if (is.null(parameters$variance$scale) || is.null(parameters$variance$shape) || is.null(parameters$variance$orientation)) stop("variance parameters are missing") # # MICHAEL from here------------------------------------------------------- # temp <- .Fortran( "esvee", x = as.double(data), z = double(n*G), n = as.integer(n), p = as.integer(p), G = as.integer(G), Gnoise = as.integer(G), mu = as.double(mu), O = as.double( t(O) ), scale = as.double(scale), shape = as.double(shape), pro = as.double(-1), Vinv = as.double(-1), loglik = double(1), eps = as.double(.Machine$double.eps), PACKAGE = "mclust") # loglik <- temp$loglik z <- matrix(temp$z, n,G) WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "cannot compute E-step" if(warn) warning(WARNING) z[] <- NA ret <- -1 } else { if (!logarithm) z <- exp(z) ret <- 0 } dimnames(z) <- list(dimnames(data)[[1]],NULL) structure(z, logarithm = logarithm, modelName = "VEE", WARNING = WARNING, returnCode = ret) } ### simVEE <- function(parameters, n, seed = NULL, ...) { if (!is.null(seed)) set.seed(seed) mu <- as.matrix(parameters$mean) d <- nrow(mu) G <- ncol(mu) if (any(is.na(parameters[c("mean", "variance")])) || any(is.null(parameters[c("mean", "variance")]))) { warn <- "parameters are missing" warning("parameters are missing") return(structure(matrix(as.double(NA), n, d + 1), modelName = "VEE")) } pro <- parameters$pro if (is.null(pro)) pro <- rep(1/G, G) clabels <- sample(1:G, size = n, replace = TRUE, prob = pro) ctabel <- tabulate(clabels, nbins = G) x <- matrix(0, n, d) rtshape <- sqrt(parameters$variance$shape) if (length(rtshape) != d) stop("shape incompatible with mean") rtscale <- sqrt(parameters$variance$scale) if (length(rtscale) != G) stop("scale incompatible with mean") for (k in 1:G) { m <- ctabel[k] sss <- rtscale[k] * rtshape cholSigma <- t(parameters$variance$orientation) * sss x[clabels == k, ] <- sweep( matrix(rnorm(m*d), nrow = m, ncol = d) %*% cholSigma, MARGIN = 2, STATS = mu[,k], FUN = "+" ) } dimnames(x) <- list(NULL, 1:d) structure(cbind(group = clabels, x), modelName = "VEE") } ############################################################################## ### EVE model #### ############################################################################## emEVE <- function(data, parameters, prior = NULL, control = emControl(), warn = NULL, ...) { z <- estepEVE(data, parameters = parameters, warn = warn)$z meEVE(data, z = z, prior = prior, control = control, Vinv = parameters$Vinv, warn = warn) } #### meEVE <- function(data, z, prior = NULL, control = emControl(), Vinv = NULL, warn = NULL, ...) 
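# EM for the EVE model (equal volume and orientation, varying shape):
# Sigma_k = lambda * D A_k D'; as for VEE, the common orientation D is
# updated by an inner iterative step, hence the separate inner/outer controls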
{ if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- (is.null(dimdat) || NCOL(data) == 1) if(oneD || length(dimdat) != 2) stop("data should in the form of a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) z <- as.matrix(z) dimz <- dim(z) if(dimz[1] != n) stop("data and z should have the same row dimension") K <- dimz[2] if (!is.null(Vinv)) { G <- K - 1 if(Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE) } else G <- K if(all(is.na(z))) { WARNING <- "z is missing" if(warn) warning(WARNING) variance <- list(modelName = "EVE", d = p, G = G, scale=rep(NA,G), shape=rep(NA,p), orientation=array(NA,c(p,p,G))) parameters <- list(pro=rep(NA,G), mean=matrix(NA,p,G), variance=variance) return(structure(list(modelName="EVE", prior=prior, n=n, d=p, G=G, z=z, parameters=parameters, control=control, loglik=NA), WARNING = WARNING, returnCode = 9)) } if(any(is.na(z)) || any(z < 0) || any(z > 1)) stop("improper specification of z") lwork <- max(3 * min(n, p) + max(n, p), 5 * min(n, p), p + G) storage.mode(z) <- "double" # # MICHAEL from here------------------------------------------------------- # # without prior specification if(is.null(prior)) { temp <- .Fortran("meeve", x = as.double(data), z = as.double(z), n = as.integer(n), p = as.integer(p), G = as.integer(G), Gnoise = as.integer(K), mu = double(p*G), O = as.double( diag(p) ), U = double(p*p*G), scale = double(1), shape = as.double( matrix(1, p,G) ), pro = double(K), Vinv = as.double( if (is.null(Vinv)) -1 else Vinv ), loglik = double(1), eqpro = as.logical(control$equalPro), itmaxin = as.integer(control$itmax[2]), tolin = as.double(control$tol[2]), itmaxout = as.integer(control$itmax[1]), tolout = as.double(control$tol[1]), eps = as.double(control$eps), niterin = integer(1), errin = double(1), niterout = integer(1), errout = double(1), lwork = as.integer(lwork), info = as.integer(0), PACKAGE = "mclust") # } else { # with prior priorParams <- do.call(prior$functionName, c(list(data = data, G = G, modelName = "EVE"), prior[names(prior) != "functionName"])) # temp <- .Fortran("meevep", ...) 
temp <- list(x = data, z = z, n = n, p = p, G = G, mu = double(p*G), O = double(p*p), U = double(p*p*G), scale = double(1), shape = double(p*G), pro = double(G), loglik = NA, eqpro = as.logical(control$equalPro), itmaxin = as.integer(control$itmax[2]), tolin = as.double(control$tol[2]), itmaxout = as.integer(control$itmax[1]), tolout = as.double(control$tol[1]), eps = as.double(control$eps), niterin = integer(1), errin = double(1), niterout = integer(1), errout = double(1), lwork = as.integer(lwork), info = FALSE) WARNING <- "EVE model is not available with prior" if(warn) warning(WARNING) temp <- structure(temp, info = NA, WARNING = WARNING, returnCode = -1) return(temp) } z <- matrix(temp$z, n,K) niterin <- temp$niterin errin <- temp$errin niterout <- temp$niterout errout <- temp$errout loglik <- temp$loglik lapackSVDinfo <- temp$info mu <- matrix(temp$mu, p,G) dimnames(mu) <- list(NULL, as.character(1:G)) scale <- temp$scale shape <- matrix(temp$shape, p,G) O <- t( matrix(temp$O, p,p) ) pro <- temp$pro if( !is.finite(loglik) | any(scale > signif(.Machine$double.xmax, 6)) | any(shape > signif(.Machine$double.xmax, 6)) | any(O > signif(.Machine$double.xmax, 6)) | any(is.nan(scale)) | any(is.nan(shape)) | any(is.nan(O)) ) { loglik <- .Machine$double.xmax } # WARNING <- NULL if(lapackSVDinfo) { if(lapackSVDinfo > 0) { WARNING <- "LAPACK DSYEV or DGESVD fails to converge" } else { WARNING <- "input error for LAPACK DSYEV or DGESVD" } if(warn) warning(WARNING) O[] <- shape[] <- scale[] <- NA mu[] <- pro[] <- z[] <- loglik <- NA sigma <- array(NA, c(p, p, G)) ret <- -9 # } else if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "singular covariance" if(warn) warning(WARNING) O[] <- shape[] <- scale[] <- NA mu[] <- pro[] <- z[] <- loglik <- NA sigma <- array(NA, c(p, p, G)) ret <- -1 } else if(loglik < - signif(.Machine$double.xmax, 6)) { if(control$equalPro) { WARNING <- "z column sum fell below threshold" if(warn) warning(WARNING) } else { WARNING <- "mixing proportion fell below threshold" if(warn) warning(WARNING) } mu[] <- pro[] <- z[] <- loglik <- NA sigma <- array(NA, c(p, p, G)) ret <- if(control$equalPro) -2 else -3 } else { sigma <- array( apply(shape, 2, function(sh) scale * O%*%diag(sh)%*%t(O)), c(p,p,G) ) if(niterin >= control$itmax[2]) { WARNING <- "inner iteration limit reached" if(warn) warning(WARNING) niterin <- - niterin ret <- 2 } else if(niterout >= control$itmax[1]) { WARNING <- "iteration limit reached" if(warn) warning(WARNING) niterout <- - niterout ret <- 1 } else ret <- 0 } info <- structure(c(niterout = niterout, errout = errout), inner = c(niterin = niterin, errin = errin)) # info <- structure(c(iterations = its, error = err), # inner = c(iterations = inner, error = inerr)) dimnames(z) <- list(dimnames(data)[[1]],NULL) dimnames(mu) <- list(dimnames(data)[[2]], NULL) dimnames(sigma) <- dimnames(O) <- list(dimnames(data)[[2]], dimnames(data)[[2]]) ## Sigma = scale * O %*% diag(shape) %*% t(O) variance <- list(modelName = "EVE", d = p, G = G, sigma = sigma, scale = scale, shape = shape, orientation = O) parameters <- list(pro=pro, mean=mu, variance=variance, Vinv=Vinv) structure(list(modelName = "EVE", prior = prior, n = n, d = p, G = G, z = z, parameters = parameters, control = control, loglik = loglik), info = info, WARNING = WARNING, returnCode = ret) } #### mstepEVE <- function(data, z, prior = NULL, warn = NULL, control = NULL, ...) 
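# M-step for the EVE model, calling the Fortran routine 'mseve'; the common
# orientation matrix D is updated by an inner iterative algorithm controlled
# by control$itmax[2] and control$tol[2]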
{ if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- (is.null(dimdat) || NCOL(data) == 1) if(oneD || length(dimdat) != 2) stop("data should be a matrix or a vector") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) ## z <- as.matrix(z) dimz <- dim(z) if(dimz[1] != n) stop("row dimension of z should equal data length") G <- dimz[2] if(all(is.na(z))) { WARNING <- "z is missing" if(warn) warning(WARNING) variance <- list(modelName = "EVE", d = p, G = G, scale = rep(NA,G), shape = rep(NA,p), orientation = array(NA,c(p,p,G))) parameters <- list(pro=rep(NA,G), mean=matrix(NA,p,G), variance=variance) return(structure(list(modelName="EVE", prior=prior, n=n, d=p, G=G, z=z, parameters=parameters, control=control, loglik=NA), WARNING = WARNING, returnCode = 9)) WARNING <- "z is missing" if(warn) warning(WARNING) return(structure(list( n = n, d = p, G = G, mu = matrix(NA,p, G), sigma = array(NA, c(p, p, G)), decomp = list(d = p, G = G, scale = rep(NA, G), shape = rep(NA, p), orientation = array(NA, c(p, p, G))), pro = rep(NA,G), modelName = "EVE", prior = prior), WARNING = WARNING)) } if(any(is.na(z)) || any(z < 0) || any(z > 1)) stop("improper specification of z") if (is.null(control)) control <- emControl() itmax <- if(length(control$itmax) == 1) control$itmax else control$itmax[2] tol <- if(length(control$tol) == 1) control$tol else control$tol[2] lwork <- max(3 * min(n, p) + max(n, p), 5 * min(n, p), p + G) # # MICHAEL from here------------------------------------------------------- # # without prior specification if(is.null(prior)) { temp <- .Fortran("mseve", x = as.double(data), z = as.double(z), n = as.integer(n), p = as.integer(p), G = as.integer(G), mu = double(p*G), U = double(p*p*G), O = as.double( diag(p) ), scale = as.double(1), shape = as.double( matrix(1, p,G) ), pro = double(G), lwork = as.integer(lwork), info = as.integer(0), itmax = as.integer(itmax), tol = as.double(tol), niterin = integer(1), errin = double(1), eps = as.double(.Machine$double.eps), # d = 100000, # trgtvec = as.double(100000), PACKAGE = "mclust") } else { # with prior priorParams <- do.call(prior$functionName, c(list(data = data, G = G, modelName = "EVE"), prior[names(prior) != "functionName"])) # # temp <- .Fortran("msevep", ...) 
temp <- list(x = data, z = z, n = n, p = p, G = G, mu = double(p*G), U = double(p*p*G), O = double(p*p), scale = double(1), pro = double(G), shape = double(p*G), lwork = as.integer(lwork), info = FALSE, itmax = as.integer(itmax), tol = as.double(tol), niterin = integer(1), errin = double(1), eps = as.double(.Machine$double.eps)) WARNING <- "EVE model is not available with prior" if(warn) warning(WARNING) } lapackSVDinfo <- temp$info errin <- temp$errin niterin <- temp$niterin mu <- matrix(temp$mu, p,G) dimnames(mu) <- list(NULL, as.character(1:G)) scale <- temp$scale O <- t( matrix(temp$O, p,p) ) shape <- matrix(temp$shape, p,G) pro <- temp$pro WARNING <- NULL # if(lapackSVDinfo) { if(lapackSVDinfo > 0) { WARNING <- "LAPACK DSYEV or DGESVD fails to converge" } else { WARNING <- "input error for LAPACK DSYEV or DGESVD" } if(warn) warning(WARNING) O[] <- shape[] <- scale[] <- NA sigma <- array(NA, c(p, p, G)) ret <- -9 # } else if( any(c(scale, shape) > signif(.Machine$double.xmax, 6)) ) { WARNING <- "cannot compute M-step" if(warn) warning(WARNING) mu[] <- pro[] <- O[] <- shape[] <- scale[] <- NA sigma <- array(NA, c(p, p, G)) ret <- -1 } else { sigma <- array( apply(shape, 2, function(sh) scale * O%*%diag(sh)%*%t(O)), c(p,p,G) ) if(niterin >= itmax) { WARNING <- "inner iteration limit reached" if(warn) warning(WARNING) niterin <- - niterin } ret <- 2 } info <- c(iteration = niterin, error = errin) dimnames(z) <- list(dimnames(data)[[1]], NULL) dimnames(mu) <- list(dimnames(data)[[2]], NULL) dimnames(sigma) <- list(dimnames(data)[[2]], dimnames(data)[[2]]) dimnames(O) <- list(dimnames(data)[[2]], dimnames(data)[[2]]) variance <- list(modelName = "EVE", d = p, G = G, sigma = sigma, scale = scale, shape = shape, orientation = O) parameters <- list(pro=pro, mean=mu, variance=variance) structure(list(modelName = "EVE", prior = prior, n = n, d = p, G = G, z = z, parameters = parameters, control = control), info = info, WARNING = WARNING, returnCode = ret) } ### estepEVE <- function(data, parameters, warn = NULL, ...) 
{ if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) if(is.null(dimdat) || length(dimdat) != 2) stop("data must be a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) pro <- parameters$pro pro <- pro/sum(pro) l <- length(pro) mu <- as.matrix(parameters$mean) scale <- parameters$variance$scale shape <- parameters$variance$shape O <- parameters$variance$orientation G <- ncol(mu) noise <- l == G + 1 if(!noise) { if(l != G) stop("pro improperly specified") K <- G Vinv <- NULL } else { K <- G + 1 Vinv <- parameters$Vinv if(is.null(Vinv) || Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE) } if(any(is.na(unlist(parameters[c("pro", "mean", "variance")]))) || any(is.null(parameters[c("pro", "mean", "variance")]))) { WARNING <- "parameters are missing" if(warn) warning(WARNING) z <- matrix(NA,n,K) dimnames(z) <- list(dimnames(data)[[1]], NULL) return(structure(list(modelName = "EVE", n=n, d=p, G=G, z=z, parameters=parameters, loglik=NA), WARNING = WARNING, returnCode = 9)) } if (is.null(parameters$variance$scale) || is.null(parameters$variance$shape) || is.null(parameters$variance$orientation)) stop("variance parameters are missing") # # MICHAEL from here------------------------------------------------------- # temp <- .Fortran( "eseve", x = as.double(data), z = double(n*K), n = as.integer(n), p = as.integer(p), G = as.integer(G), Gnoise = as.integer(K), mu = as.double(mu), O = as.double( t(O) ), scale = as.double(scale), shape = as.double(shape), pro = as.double(pro), Vinv = as.double( if (is.null(Vinv)) -1 else Vinv ), loglik = double(1), eps = as.double(.Machine$double.eps), PACKAGE = "mclust") # loglik <- temp$loglik z <- matrix(temp$z, n,K) WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "cannot compute E-step" if(warn) warning(WARNING) z[] <- loglik <- NA ret <- -1 } else ret <- 0 dimnames(z) <- list(dimnames(data)[[1]],NULL) structure(list(modelName = "EVE", n = n, d = p, G = G, z = z, parameters = parameters, loglik = loglik), WARNING = WARNING, returnCode = ret) } #### cdensEVE <- function(data, logarithm = FALSE, parameters, warn = NULL, ...) 
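# Note (hypothetical sketch, not run): unlike estepEVE(), cdensEVE() returns
# the G component densities without weighting by the mixing proportions (the
# Fortran routine 'eseve' is called with pro = -1 and Vinv = -1):
#   ms <- mstepEVE(data = iris[, 1:4], z = unmap(iris$Species))
#   cd <- cdensEVE(data = iris[, 1:4], parameters = ms$parameters, logarithm = TRUE)
#   dim(cd)   # n x G matrix of log component densities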
{ if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) if(is.null(dimdat) || length(dimdat) != 2) stop("data must be a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) pro <- parameters$pro pro <- pro/sum(pro) l <- length(pro) mu <- as.matrix(parameters$mean) scale <- parameters$variance$scale shape <- parameters$variance$shape O <- parameters$variance$orientation p <- ncol(data) G <- ncol(mu) if(any(is.na(unlist(parameters[c("pro", "mean", "variance")]))) || any(is.null(parameters[c("pro", "mean", "variance")]))) { WARNING <- "parameters are missing" if(warn) warning(WARNING) z <- matrix(NA,n,G) dimnames(z) <- list(dimnames(data)[[1]], NULL) return(structure(z, logarithm = logarithm, modelName = "EVE", WARNING = WARNING, returnCode = 9)) } if (is.null(parameters$variance$scale) || is.null(parameters$variance$shape) || is.null(parameters$variance$orientation)) stop("variance parameters are missing") # # MICHAEL from here------------------------------------------------------- # temp <- .Fortran( "eseve", x = as.double(data), z = double(n*G), n = as.integer(n), p = as.integer(p), G = as.integer(G), Gnoise = as.integer(G), mu = as.double(mu), O = as.double( t(O) ), scale = as.double(scale), shape = as.double(shape), pro = as.double(-1), Vinv = as.double(-1), loglik = double(1), eps = as.double(.Machine$double.eps), PACKAGE = "mclust") # loglik <- temp$loglik z <- matrix(temp$z, n,G) WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "cannot compute E-step" if(warn) warning(WARNING) z[] <- NA ret <- -1 } else { if (!logarithm) z <- exp(z) ret <- 0 } dimnames(z) <- list(dimnames(data)[[1]],NULL) structure(z, logarithm = logarithm, modelName = "EVE", WARNING = WARNING, returnCode = ret) } ### simEVE <- function(parameters, n, seed = NULL, ...) { if (!is.null(seed)) set.seed(seed) mu <- as.matrix(parameters$mean) d <- nrow(mu) G <- ncol(mu) if (any(is.na(parameters[c("mean", "variance")])) || any(is.null(parameters[c("mean", "variance")]))) { warn <- "parameters are missing" warning("parameters are missing") return(structure(matrix(as.double(NA), n, d + 1), modelName = "EVE")) } pro <- parameters$pro if (is.null(pro)) pro <- rep(1/G, G) clabels <- sample(1:G, size = n, replace = TRUE, prob = pro) ctabel <- tabulate(clabels, nbins = G) x <- matrix(0, n, d) rtshape <- sqrt(parameters$variance$shape) if (dim(rtshape)[1] != d | dim(rtshape)[2] != G) stop("shape incompatible with mean") rtscale <- sqrt(parameters$variance$scale) for (k in 1:G) { m <- ctabel[k] sss <- rtscale * rtshape[,k] cholSigma <- t(parameters$variance$orientation) * sss x[clabels == k, ] <- sweep( matrix(rnorm(m*d), nrow = m, ncol = d) %*% cholSigma, MARGIN = 2, STATS = mu[,k], FUN = "+" ) } dimnames(x) <- list(NULL, 1:d) structure(cbind(group = clabels, x), modelName = "EVE") } ############################################################################## ### VVE model #### ############################################################################## emVVE <- function(data, parameters, prior = NULL, control = emControl(), warn = NULL, ...) { z <- estepVVE(data, parameters = parameters, warn = warn)$z meVVE(data, z = z, prior = prior, control = control, Vinv = parameters$Vinv, warn = warn) } #### meVVE <- function(data, z, prior = NULL, control = emControl(), Vinv = NULL, warn = NULL, ...) 
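# Usage sketch (hypothetical, not run): meVVE() runs the full EM algorithm for
# the VVE model from an initial n x K membership matrix z (K = G, or G + 1
# when a noise component is present and Vinv is supplied), for instance
# starting from a model-based hierarchical partition:
#   cl  <- hclass(hc(faithful), G = 3)[, 1]
#   fit <- meVVE(data = faithful, z = unmap(cl))
#   fit$loglik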
{ if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- (is.null(dimdat) || NCOL(data) == 1) if(oneD || length(dimdat) != 2) stop("data should in the form of a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) z <- as.matrix(z) dimz <- dim(z) if(dimz[1] != n) stop("data and z should have the same row dimension") K <- dimz[2] if (!is.null(Vinv)) { G <- K - 1 if(Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE) } else G <- K if(all(is.na(z))) { WARNING <- "z is missing" if(warn) warning(WARNING) variance <- list(modelName = "VVE", d = p, G = G, scale=rep(NA,G), shape=rep(NA,p), orientation=array(NA,c(p,p,G))) parameters <- list(pro=rep(NA,G), mean=matrix(NA,p,G), variance=variance) return(structure(list(modelName="VVE", prior=prior, n=n, d=p, G=G, z=z, parameters=parameters, control=control, loglik=NA), WARNING = WARNING, returnCode = 9)) } if(any(is.na(z)) || any(z < 0) || any(z > 1)) stop("improper specification of z") lwork <- max(3 * min(n, p) + max(n, p), 5 * min(n, p), p + G) storage.mode(z) <- "double" # MICHAEL from here------------------------------------------------------- # # without prior specification if(is.null(prior)) { temp <- .Fortran("mevve", x = as.double(data), z = as.double(z), n = as.integer(n), p = as.integer(p), G = as.integer(G), Gnoise = as.integer(K), mu = double(p*G), O = as.double( diag(p) ), U = double(p*p*G), scale = as.double( rep(1, G) ), shape = as.double( matrix(1, p,G) ), pro = double(K), Vinv = as.double( if (is.null(Vinv)) -1 else Vinv ), loglik = double(1), eqpro = as.logical(control$equalPro), itmaxin = as.integer(control$itmax[2]), tolin = as.double(control$tol[2]), itmaxout = as.integer(control$itmax[1]), tolout = as.double(control$tol[1]), eps = as.double(control$eps), niterin = integer(1), errin = double(1), niterout = integer(1), errout = double(1), lwork = as.integer(lwork), info = as.integer(0), PACKAGE = "mclust") # } else { # with prior priorParams <- do.call(prior$functionName, c(list(data = data, G = G, modelName = "VVE"), prior[names(prior) != "functionName"])) # temp <- .Fortran("mevvep", ...) 
temp <- list(x = data, z = z, n = n, p = p, G = G, mu = double(p*G), O = double(p*p), U = double(p*p*G), scale = as.double(rep(1, G)), shape = double(p*G), pro = double(G), loglik = NA, eqpro = as.logical(control$equalPro), itmaxin = as.integer(control$itmax[2]), tolin = as.double(control$tol[2]), itmaxout = as.integer(control$itmax[1]), tolout = as.double(control$tol[1]), eps = as.double(control$eps), niterin = integer(1), errin = double(1), niterout = integer(1), errout = double(1), lwork = as.integer(lwork), info = FALSE) WARNING <- "VVE model is not available with prior" if(warn) warning(WARNING) temp <- structure(temp, info = NA, WARNING = WARNING, returnCode = -1) return(temp) } z <- matrix(temp$z, n,K) niterin <- temp$niterin errin <- temp$errin niterout <- temp$niterout errout <- temp$errout loglik <- temp$loglik lapackSVDinfo <- temp$info mu <- matrix(temp$mu, p,G) dimnames(mu) <- list(NULL, as.character(1:G)) scale <- temp$scale shape <- matrix(temp$shape, p,G) O <- t( matrix(temp$O, p,p) ) pro <- temp$pro if( !is.finite(loglik) | any(scale > signif(.Machine$double.xmax, 6)) | any(shape > signif(.Machine$double.xmax, 6)) | any(O > signif(.Machine$double.xmax, 6)) | any(is.nan(scale)) | any(is.nan(shape)) | any(is.nan(O)) ) { loglik <- .Machine$double.xmax } # WARNING <- NULL if(lapackSVDinfo) { if(lapackSVDinfo > 0) { WARNING <- "LAPACK DSYEV or DGESVD fails to converge" } else { WARNING <- "input error for LAPACK DSYEV or DGESVD" } if(warn) warning(WARNING) O[] <- shape[] <- scale[] <- NA mu[] <- pro[] <- z[] <- loglik <- NA sigma <- array(NA, c(p, p, G)) ret <- -9 # } else if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "singular covariance" if(warn) warning(WARNING) O[] <- shape[] <- scale[] <- NA mu[] <- pro[] <- z[] <- loglik <- NA sigma <- array(NA, c(p, p, G)) ret <- -1 } else if(loglik < - signif(.Machine$double.xmax, 6)) { if(control$equalPro) { WARNING <- "z column sum fell below threshold" if(warn) warning(WARNING) } else { WARNING <- "mixing proportion fell below threshold" if(warn) warning(WARNING) } mu[] <- pro[] <- z[] <- loglik <- NA sigma <- array(NA, c(p, p, G)) ret <- if(control$equalPro) -2 else -3 } else { sigma <- array(NA, c(p,p,G)) for ( g in 1:G ) sigma[,,g] <- scale[g] * O %*% diag(shape[,g]) %*% t(O) if(niterin >= control$itmax[2]) { WARNING <- "inner iteration limit reached" if(warn) warning(WARNING) niterin <- - niterin ret <- 2 } else if(niterout >= control$itmax[1]) { WARNING <- "iteration limit reached" if(warn) warning(WARNING) niterout <- - niterout ret <- 1 } else ret <- 0 } info <- structure(c(niterout = niterout, errout = errout), inner = c(niterin = niterin, errin = errin)) # info <- structure(c(iterations = its, error = err), # inner = c(iterations = inner, error = inerr)) dimnames(z) <- list(dimnames(data)[[1]],NULL) dimnames(mu) <- list(dimnames(data)[[2]], NULL) dimnames(sigma) <- dimnames(O) <- list(dimnames(data)[[2]], dimnames(data)[[2]]) ## Sigma = scale * O %*% diag(shape) %*% t(O) variance <- list(modelName = "VVE", d = p, G = G, sigma = sigma, scale = scale, shape = shape, orientation = O) parameters <- list(pro=pro, mean=mu, variance=variance, Vinv=Vinv) structure(list(modelName = "VVE", prior = prior, n = n, d = p, G = G, z = z, parameters = parameters, control = control, loglik = loglik), info = info, WARNING = WARNING, returnCode = ret) } #### mstepVVE <- function(data, z, prior = NULL, warn = NULL, control = NULL, ...) 
{ if (is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) oneD <- (is.null(dimdat) || NCOL(data) == 1) if(oneD || length(dimdat) != 2) stop("data should be a matrix or a vector") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) ## z <- as.matrix(z) dimz <- dim(z) if(dimz[1] != n) stop("row dimension of z should equal data length") G <- dimz[2] if(all(is.na(z))) { WARNING <- "z is missing" if(warn) warning(WARNING) variance <- list(modelName = "VVE", d = p, G = G, scale = rep(NA,G), shape = rep(NA,p), orientation = array(NA,c(p,p,G))) parameters <- list(pro=rep(NA,G), mean=matrix(NA,p,G), variance=variance) return(structure(list(modelName="VVE", prior=prior, n=n, d=p, G=G, z=z, parameters=parameters, control=control, loglik=NA), WARNING = WARNING, returnCode = 9)) WARNING <- "z is missing" if(warn) warning(WARNING) return(structure(list( n = n, d = p, G = G, mu = matrix(NA,p, G), sigma = array(NA, c(p, p, G)), decomp = list(d = p, G = G, scale = rep(NA, G), shape = rep(NA, p), orientation = array(NA, c(p, p, G))), pro = rep(NA,G), modelName = "VVE", prior = prior), WARNING = WARNING)) } if(any(is.na(z)) || any(z < 0) || any(z > 1)) stop("improper specification of z") if (is.null(control)) control <- emControl() itmax <- if(length(control$itmax) == 1) control$itmax else control$itmax[2] tol <- if(length(control$tol) == 1) control$tol else control$tol[2] lwork <- max(3 * min(n, p) + max(n, p), 5 * min(n, p), p + G) # # MICHAEL from here------------------------------------------------------- # # without prior specification if(is.null(prior)) { temp <- .Fortran("msvve", x = as.double(data), z = as.double(z), n = as.integer(n), p = as.integer(p), G = as.integer(G), mu = double(p*G), U = double(p*p*G), O = as.double( diag(p) ), scale = as.double( rep(1, G) ), shape = as.double( matrix(1, p,G) ), pro = double(G), lwork = as.integer(lwork), info = as.integer(0), itmax = as.integer(itmax), tol = as.double(tol), niterin = integer(1), errin = double(1), eps = as.double(.Machine$double.eps), PACKAGE = "mclust") } else { # with prior priorParams <- do.call(prior$functionName, c(list(data = data, G = G, modelName = "VVE"), prior[names(prior) != "functionName"])) # # temp <- .Fortran("msvvep", ...) 
temp <- list(x = data, z = z, n = n, p = p, G = G, mu = double(p*G), U = double(p*p*G), O = double(p*p), scale = double(1), pro = double(G), shape = double(p*G), lwork = as.integer(lwork), info = FALSE, itmax = as.integer(itmax), tol = as.double(tol), niterin = integer(1), errin = double(1), eps = as.double(.Machine$double.eps)) WARNING <- "VVE model is not available with prior" if(warn) warning(WARNING) } lapackSVDinfo <- temp$info errin <- temp$errin niterin <- temp$niterin mu <- matrix(temp$mu, p,G) dimnames(mu) <- list(NULL, as.character(1:G)) O <- t( matrix(temp$O, p,p) ) shape <- matrix(temp$shape, p,G) scale <- temp$scale pro <- temp$pro WARNING <- NULL # if(lapackSVDinfo) { if(lapackSVDinfo > 0) { WARNING <- "LAPACK DSYEV or DGESVD fails to converge" } else { WARNING <- "input error for LAPACK DSYEV or DGESVD" } if(warn) warning(WARNING) O[] <- shape[] <- scale[] <- NA sigma <- array(NA, c(p, p, G)) ret <- -9 # } else if(any(c(scale, shape) > signif(.Machine$double.xmax, 6))) { WARNING <- "cannot compute M-step" if(warn) warning(WARNING) mu[] <- pro[] <- O[] <- shape[] <- scale[] <- NA sigma <- array(NA, c(p, p, G)) ret <- -1 } else { # sigma <- array( apply(shape, 2, function(sh) O%*%diag(sh)%*%t(O)), c(p,p,G) ) sigma <- array(NA, c(p,p,G)) for ( g in 1:G ) sigma[,,g] <- scale[g] * O %*% diag(shape[,g]) %*% t(O) if(niterin >= itmax) { WARNING <- "inner iteration limit reached" if(warn) warning(WARNING) niterin <- - niterin } ret <- 2 } info <- c(iteration = niterin, error = errin) dimnames(z) <- list(dimnames(data)[[1]], NULL) dimnames(mu) <- list(dimnames(data)[[2]], NULL) dimnames(sigma) <- list(dimnames(data)[[2]], dimnames(data)[[2]]) dimnames(O) <- list(dimnames(data)[[2]], dimnames(data)[[2]]) variance <- list(modelName = "VVE", d = p, G = G, sigma = sigma, scale = scale, shape = shape, orientation = O) parameters <- list(pro=pro, mean=mu, variance=variance) structure(list(modelName = "VVE", prior = prior, n = n, d = p, G = G, z = z, parameters = parameters, control = control), info = info, WARNING = WARNING, returnCode = ret) } ### estepVVE <- function(data, parameters, warn = NULL, ...) 
{ if (is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) if(is.null(dimdat) || length(dimdat) != 2) stop("data must be a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) pro <- parameters$pro pro <- pro/sum(pro) l <- length(pro) mu <- as.matrix(parameters$mean) scale <- parameters$variance$scale shape <- parameters$variance$shape O <- parameters$variance$orientation G <- ncol(mu) noise <- l == G + 1 if(!noise) { if(l != G) stop("pro improperly specified") K <- G Vinv <- NULL } else { K <- G + 1 Vinv <- parameters$Vinv if(is.null(Vinv) || Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE) } if(any(is.na(unlist(parameters[c("pro", "mean", "variance")]))) || any(is.null(parameters[c("pro", "mean", "variance")]))) { WARNING <- "parameters are missing" if(warn) warning(WARNING) z <- matrix(NA,n,K) dimnames(z) <- list(dimnames(data)[[1]], NULL) return(structure(list(modelName = "VVE", n=n, d=p, G=G, z=z, parameters=parameters, loglik=NA), WARNING = WARNING, returnCode = 9)) } if (is.null(parameters$variance$scale) || is.null(parameters$variance$shape) || is.null(parameters$variance$orientation)) stop("variance parameters are missing") # # MICHAEL from here------------------------------------------------------- # temp <- .Fortran( "esvve", x = as.double(data), z = double(n*K), n = as.integer(n), p = as.integer(p), G = as.integer(G), Gnoise = as.integer(K), mu = as.double(mu), O = as.double( t(O) ), scale = as.double(scale), shape = as.double(shape), pro = as.double(pro), Vinv = as.double( if (is.null(Vinv)) -1 else Vinv ), loglik = double(1), eps = as.double(.Machine$double.eps), PACKAGE = "mclust") # loglik <- temp$loglik z <- matrix(temp$z, n,K) WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "cannot compute E-step" if(warn) warning(WARNING) z[] <- loglik <- NA ret <- -1 } else ret <- 0 dimnames(z) <- list(dimnames(data)[[1]],NULL) structure(list(modelName = "VVE", n = n, d = p, G = G, z = z, parameters = parameters, loglik = loglik), WARNING = WARNING, returnCode = ret) } #### cdensVVE <- function(data, logarithm = FALSE, parameters, warn = NULL, ...) 
{ if(is.null(warn)) warn <- mclust.options("warn") dimdat <- dim(data) if(is.null(dimdat) || length(dimdat) != 2) stop("data must be a matrix") data <- as.matrix(data) n <- nrow(data) p <- ncol(data) pro <- parameters$pro pro <- pro/sum(pro) l <- length(pro) mu <- as.matrix(parameters$mean) scale <- parameters$variance$scale shape <- parameters$variance$shape O <- parameters$variance$orientation p <- ncol(data) G <- ncol(mu) if(any(is.na(unlist(parameters[c("pro", "mean", "variance")]))) || any(is.null(parameters[c("pro", "mean", "variance")]))) { WARNING <- "parameters are missing" if(warn) warning(WARNING) z <- matrix(NA,n,G) dimnames(z) <- list(dimnames(data)[[1]], NULL) return(structure(z, logarithm = logarithm, modelName = "VVE", WARNING = WARNING, returnCode = 9)) } if (is.null(parameters$variance$scale) || is.null(parameters$variance$shape) || is.null(parameters$variance$orientation)) stop("variance parameters are missing") # # MICHAEL from here------------------------------------------------------- # temp <- .Fortran( "esvve", x = as.double(data), z = double(n*G), n = as.integer(n), p = as.integer(p), G = as.integer(G), Gnoise = as.integer(G), mu = as.double(mu), O = as.double( t(O) ), scale = as.double(scale), shape = as.double(shape), pro = as.double(-1), Vinv = as.double(-1), loglik = double(1), eps = as.double(.Machine$double.eps), PACKAGE = "mclust") # loglik <- temp$loglik z <- matrix(temp$z, n,G) WARNING <- NULL if(loglik > signif(.Machine$double.xmax, 6)) { WARNING <- "cannot compute E-step" if(warn) warning(WARNING) z[] <- NA ret <- -1 } else { if (!logarithm) z <- exp(z) ret <- 0 } dimnames(z) <- list(dimnames(data)[[1]],NULL) structure(z, logarithm = logarithm, modelName = "VVE", WARNING = WARNING, returnCode = ret) } ### simVVE <- function(parameters, n, seed = NULL, ...) { if (!is.null(seed)) set.seed(seed) mu <- as.matrix(parameters$mean) d <- nrow(mu) G <- ncol(mu) if (any(is.na(parameters[c("mean", "variance")])) || any(is.null(parameters[c("mean", "variance")]))) { warn <- "parameters are missing" warning("parameters are missing") return(structure(matrix(as.double(NA), n, d + 1), modelName = "VVE")) } pro <- parameters$pro if (is.null(pro)) pro <- rep(1/G, G) clabels <- sample(1:G, size = n, replace = TRUE, prob = pro) ctabel <- tabulate(clabels, nbins = G) x <- matrix(0, n, d) rtshape <- sqrt(parameters$variance$shape) if (dim(rtshape)[1] != d | dim(rtshape)[2] != G) stop("shape incompatible with mean") rtscale <- sqrt(parameters$variance$scale) if (length(rtscale) != G) stop("scale incompatible with mean") for (k in 1:G) { m <- ctabel[k] sss <- rtscale[k] * rtshape[,k] cholSigma <- t(parameters$variance$orientation) * sss x[clabels == k, ] <- sweep( matrix(rnorm(m*d), nrow = m, ncol = d) %*% cholSigma, MARGIN = 2, STATS = mu[,k], FUN = "+" ) } dimnames(x) <- list(NULL, 1:d) structure(cbind(group = clabels, x), modelName = "VVE") } ############################################################################# # Examples of some simple R wrapper functions fcrossprod <- function(X, Y, ...) 
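# fcrossprod() computes crossprod(X, Y) = t(X) %*% Y through the Fortran
# routine 'crossprodf'. A quick sanity check against base R (hypothetical,
# not run):
#   X <- matrix(rnorm(20), 5, 4); Y <- matrix(rnorm(15), 5, 3)
#   all.equal(fcrossprod(X, Y), crossprod(X, Y))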
{ out <- .Fortran("crossprodf", X = as.matrix(X), Y = as.matrix(Y), n = as.integer(nrow(X)), p = as.integer(ncol(X)), q = as.integer(ncol(Y)), XTY = matrix(0, ncol(X), ncol(Y)), PACKAGE = "mclust") return(out$XTY) }mclust/R/impute.R0000644000176200001440000002030713477457724013436 0ustar liggesusersimputeData <- function(data, categorical = NULL, seed = NULL, verbose = interactive()) { if(!requireNamespace("mix", quietly = TRUE)) stop("imputeData function require 'mix' package to be installed!") fac <- apply(data, 2, is.factor) if(is.null(categorical)) { categorical <- fac } else { if(any(!categorical & fac)) { stop("data has a factor that is not designated as categorical") } if(any(categorical | !fac)) { warning("a categorical is not designated as a factor") for(i in which(categorical | !fac)) data[[i]] <- as.factor(data[[i]]) } } # remove categorical variables and add a dummy variable if(nocat <- !any(categorical)) { data <- cbind(as.factor(1), data) categorical <- c(TRUE, categorical) } ord <- c(which(categorical), which(!categorical)) # do the imputations s <- mix::prelim.mix(data[,ord], p = sum(categorical)) if(is.null(seed)) seed <- runif(1, min = .Machine$integer.max/1024, max = .Machine$integer.max) # find ML estimate thetahat <- mix::em.mix(s, showits = verbose) # set random number generator seed mix::rngseed(seed) # data augmentation from posterior newtheta <- mix::da.mix(s, thetahat, steps = 100, showits = verbose) # impute under newtheta dataImp <- mix::imp.mix(s, newtheta) # there is a bug, so it needs to refix the seed and impute again mix::rngseed(seed) dataImp <- mix::imp.mix(s, newtheta) if(nocat) dataImp[,-1] else dataImp[,order(ord)] } imputePairs <- function(data, dataImp, symbols = c(1, 16), colors = c("black", "red"), labels, panel = points, ..., lower.panel = panel, upper.panel = panel, diag.panel = NULL, text.panel = textPanel, label.pos = 0.5 + has.diag/3, cex.labels = NULL, font.labels = 1, row1attop = TRUE, gap = 0.2) { textPanel <- function(x = 0.5, y = 0.5, txt, cex, font) text(x, y, txt, cex = cex, font = font) localAxis <- function(side, x, y, xpd, bg, col = NULL, main, oma, ...) { if (side%%2 == 1) Axis(x, side = side, xpd = NA, ...) else Axis(y, side = side, xpd = NA, ...) } localPlot <- function(..., main, oma, font.main, cex.main) plot(...) localLowerPanel <- function(..., main, oma, font.main, cex.main) lower.panel(...) localUpperPanel <- function(..., main, oma, font.main, cex.main) upper.panel(...) localDiagPanel <- function(..., main, oma, font.main, cex.main) diag.panel(...) dots <- list(...) 
nmdots <- names(dots) if (!is.matrix(data)) { data <- as.data.frame(data) for (i in seq_along(names(data))) { if (is.factor(data[[i]]) || is.logical(data[[i]])) data[[i]] <- as.numeric(data[[i]]) if (!is.numeric(unclass(data[[i]]))) stop("non-numeric argument to 'pairs'") } } else if (!is.numeric(data)) stop("non-numeric argument to 'pairs'") panel <- match.fun(panel) if ((has.lower <- !is.null(lower.panel)) && !missing(lower.panel)) lower.panel <- match.fun(lower.panel) if ((has.upper <- !is.null(upper.panel)) && !missing(upper.panel)) upper.panel <- match.fun(upper.panel) if ((has.diag <- !is.null(diag.panel)) && !missing(diag.panel)) diag.panel <- match.fun(diag.panel) if (row1attop) { tmp <- lower.panel lower.panel <- upper.panel upper.panel <- tmp tmp <- has.lower has.lower <- has.upper has.upper <- tmp } nc <- ncol(data) if (nc < 2) stop("only one column in the argument to 'pairs'") has.labs <- TRUE if (missing(labels)) { labels <- colnames(data) if (is.null(labels)) labels <- paste("var", 1:nc) } else if (is.null(labels)) has.labs <- FALSE oma <- if ("oma" %in% nmdots) dots$oma else NULL main <- if ("main" %in% nmdots) dots$main else NULL if (is.null(oma)) { oma <- c(4, 4, 4, 4) if (!is.null(main)) oma[3] <- 6 } opar <- par(mfrow = c(nc, nc), mar = rep(gap/2, 4), oma = oma) on.exit(par(opar)) for (i in if (row1attop) 1:nc else nc:1) for (j in 1:nc) { localPlot(dataImp[, j], dataImp[, i], xlab = "", ylab = "", axes = FALSE, type = "n", ...) if (i == j || (i < j && has.lower) || (i > j && has.upper)) { box() if (i == 1 && (!(j%%2) || !has.upper || !has.lower)) localAxis(1 + 2 * row1attop, dataImp[, j], dataImp[, i], ...) if (i == nc && (j%%2 || !has.upper || !has.lower)) localAxis(3 - 2 * row1attop, dataImp[, j], dataImp[, i], ...) if (j == 1 && (!(i%%2) || !has.upper || !has.lower)) localAxis(2, dataImp[, j], dataImp[, i], ...) if (j == nc && (i%%2 || !has.upper || !has.lower)) localAxis(4, dataImp[, j], dataImp[, i], ...) mfg <- par("mfg") if (i == j) { if (has.diag) localDiagPanel(as.vector(dataImp[, i]), ...) if (has.labs) { par(usr = c(0, 1, 0, 1)) if (is.null(cex.labels)) { l.wid <- strwidth(labels, "user") cex.labels <- max(0.8, min(2, 0.9/max(l.wid))) } text.panel(0.5, label.pos, labels[i], cex = cex.labels, font = font.labels) } } else if (i < j) { classification <- as.numeric(apply(data[,c(i,j)], 1, function(x) any(is.na(x)))) + 1 localLowerPanel(as.vector(dataImp[, j]), as.vector(dataImp[,i]), pch = symbols[classification], col = colors[classification], ...) } else { classification <- as.numeric(apply(data[,c(i,j)], 1, function(x) any(is.na(x)))) + 1 localUpperPanel(as.vector(dataImp[, j]), as.vector(dataImp[, i]), pch = symbols[classification], col = colors[classification], ...) 
} if (any(par("mfg") != mfg)) stop("the 'panel' function made a new plot") } else par(new = FALSE) } if (!is.null(main)) { font.main <- if ("font.main" %in% nmdots) dots$font.main else par("font.main") cex.main <- if ("cex.main" %in% nmdots) dots$cex.main else par("cex.main") mtext(main, 3, 3, TRUE, 0.5, cex = cex.main, font = font.main) } invisible(NULL) } # LS: old to be removed # matchCluster <- function(group, cluster) # { # if(length(group) != length(cluster)) # stop("arguments must be vector of the same length") # group <- as.factor(group) # cluster <- as.factor(cluster) # tab <- table(group,cluster) # j <- apply(tab,2,which.max) # cluster <- factor(cluster, labels = levels(group)[j]) # cluster <- as.character(cluster) # group <- as.character(group) # misclassified <- !(cluster == group) # out <- list(cluster = cluster, misclassified = misclassified, ord = j) # return(out) # } matchCluster <- function(group, cluster) { if(length(group) != length(cluster)) stop("arguments must be vector of the same length") group <- as.factor(group) cluster <- as.factor(cluster) map <- mapClass(as.numeric(group), as.numeric(cluster)) map1 <- unlist(map[[1]]); names(map1) <- NULL map2 <- unlist(map[[2]]); names(map2) <- NULL cl <- cluster levels(cl) <- map2 cl <- as.character(levels(cl)[as.numeric(cl)]) cl <- as.character(cl) group <- as.character(group) misclassified <- !(cluster == group) out <- list(cluster = cl, misclassified = misclassified, ord = map1) return(out) } majorityVote <- function(x) { # local function to find the maximum position in a vector, # breaking ties at random whichMax <- function (x) { m <- seq_along(x)[x == max(x, na.rm = TRUE)] if(length(m) > 1) sample(m, size = 1) else m } x <- as.vector(x) tab <- table(x) m <- whichMax(tab) out <- list(table = tab, ind = m, majority = names(tab)[m]) return(out) } mclust/R/gmmhd.R0000644000176200001440000004372613213716012013211 0ustar liggesusers###################################################### ## ## ## Identifying Connected Components in Gaussian ## ## Finite Mixture Models for Clustering ## ## ## ## Author: Luca Scrucca ## ###################################################### gmmhd <- function(object, ngrid = min(round((log(nrow(data)))*10), nrow(data)), dr = list(d = 3, lambda = 1, cumEvalues = NULL, mindir = 2), classify = list(G = 1:5, modelNames = mclust.options("emModelNames")[-c(8,10)]), ...) { if(!inherits(object, "Mclust")) stop("first argument must be an object of class 'Mclust'") if(!requireNamespace("geometry", quietly = TRUE)) stop("Package 'geometry' is required. 
Please install it.") data <- object$data n <- nrow(data) if(ngrid > n) { warning("ngrid too large, set equal to n") n.grid <- n } mNames <- attr(object$BIC, "modelNames") if(is.null(dr$d)) dr$d <- 2 if(is.null(dr$lambda)) dr$lambda <- 1 if(is.null(classify$G)) classify$G <- 1:5 if(is.null(classify$modelNames)) classify$modelNames <- mNames classify$modelNames <- intersect(classify$modelNames, mNames) if(is.null(dr$mindir)) dr$mindir <- 2 if(ncol(data) >= dr$d) { # compute GMMDR directions DR <- MclustDR(object, lambda = dr$lambda) # subset selection of GMMDR directions evalues <- DR$evalues[seq(DR$numdir)] if(is.null(dr$cumEvalues)) { # if dr$cumEvalues not provided # perform suset selection of GMMDR directions DR <- MclustDRsubsel(DR, G = attr(object$BIC, "G"), modelNames = mNames, mindir = dr$mindir, verbose = FALSE) dims <- seq(DR$numdir) } else { # select the smallest subset with cumsum eigenvalues > dr$cumEvalues dims <- min(which(cumsum(evalues/sum(evalues)) > dr$cumEvalues)) dims <- seq(min(dr$mindir, dims)) } # estimate the density from Mclust model on the selected directions x <- DR$dir[,dims,drop=FALSE] colnames(x) <- paste("GMMDR dir", 1:ncol(x), sep = "") mc <- object$call mc$data <- x mc$modelNames <- mNames mc$verbose <- FALSE obj <- eval(mc, parent.frame()) DR$parameters <- obj$parameters fdens <- dens(modelName = obj$modelName, data = x, parameters = obj$parameters) } else { x <- data DR <- NULL fdens <- dens(modelName = object$modelName, data = x, parameters = object$parameters) } p <- ncol(x) xscaled <- scale(x, colMeans(x), apply(x,2,sd)) # if to add vertices of convex envelope # xrange <- apply(x, 2, range) # xbound <- do.call("expand.grid", matrix2list(xrange)) # x <- rbind(as.matrix(x), as.matrix(xbound*1.1)) # fdens <- c(fdens, rep(0,nrow(xbound))) # uniform grid of proportions for which quantiles are calculated pn <- seq(0, 1, length = ngrid) qn <- as.numeric(quantile(fdens[1:n], 1-pn)) nc <- pc <- rep(0, length(qn)) con <- vector("list", length = length(qn)) # Delaunay triangulation matrix of dim (m x p+1), where each row provides a # set of indices to the points describing a simplex of dimension p mode(xscaled) <- "double" # delaunayn requires a real matrix DT <- suppressMessages(geometry::delaunayn(xscaled, options="QJ")) # plot(x); for(l in 1:nrow(DT)) polygon(x[DT[l,],], border = grey(.8)) on.exit(unlink("qhull_out.txt")) # Graph of neighborhood for each point NB <- vector(mode = "list", length = n) for(i in seq(n)) { NB[[i]] <- sort(unique(as.vector(DT[rowSums(DT==i)>0,]))) } for(i in seq(length(qn))) { c <- qn[i] Sc <- which(fdens[1:n] > c); names(Sc) <- NULL if(length(Sc) < 1) next() pc[i] <- length(Sc)/n # select neighborhoods of edges with density > c level nb <- NB[Sc] # select within neighborhoods those edges whose density > c level nb <- lapply(nb, function(nb) sort(intersect(nb, Sc))) nb <- nb[!duplicated(nb)] # table(sapply(nb,length)) # remove neighborhoods which do not share any facet, i.e. having # less than p edges/obs # nb <- nb[sapply(nb, length) >= p] # remove neighborhoods which are not simplices of dim (p+1) nb <- nb[sapply(nb, length) > p] # get connected components ConComp <- ConnectComp(nb) # sapply(ConComp,length); ConComp if(length(ConComp) < 1) next() nc[i] <- length(ConComp) con[[i]] <- ConComp # lapply(ConComp, sort) } # obj <- list(Mclust = object, MclustDA = NULL, MclustDR = DR, x = x, # i.e. 
the input data or GMMDR directions density = fdens[1:n], con = con, nc = structure(nc, names = format(pn, digit = 3)), pc = pc, pn = pn, qn = structure(qn, names = format(pn, digit = 3)), clusterCores = NULL, cluster = NULL, numClusters = NULL) class(obj) <- "gmmhd" # cluster cores obj$clusterCores <- gmmhdClusterCores(obj) # semi-supervised classification modClass <- gmmhdClassify(obj, G = classify$G, modelNames = classify$modelNames, verbose = FALSE) obj$MclustDA <- modClass$model obj$cluster <- modClass$cluster obj$numClusters <- length(tabulate(obj$cluster)) return(obj) } print.gmmhd <- function(x, digits = getOption("digits"), ...) { cat("\'", class(x)[1], "\' model object:\n", sep = "") cat(paste0(" Mclust initial model = (", x$Mclust$modelName, ",", x$Mclust$G, ")\n")) if(!is.null(x$MclustDR)) cat(paste0(" MclustDR projection = (", x$MclustDR$modelName, ",", x$MclustDR$G, ")\n")) cat(paste0(" GMMHD final number of clusters = ", x$numClusters, "\n")) invisible() } summary.gmmhd <- function(object, ...) { title <- paste("GMM with high-density connected components for clustering") out <- with(object, list(title = title, "Mclust" = list("G" = Mclust$G, "modelName" = Mclust$modelName), "MclustDR" = list("G" = MclustDR$G, "modelName" = MclustDR$modelName), "clusterCores" = table(clusterCores, useNA = "ifany", dnn = NULL), "cluster" = table(cluster, useNA = "ifany", dnn = NULL))) if(is.null(object$MclustDR)) out$MclustDR <- NULL class(out) <- "summary.gmmhd" return(out) } print.summary.gmmhd <- function(x, digits = getOption("digits"), ...) { cat(rep("-", nchar(x$title)),"\n",sep="") cat(x$title, "\n") cat(rep("-", nchar(x$title)),"\n",sep="") # cat("\nInitial model: Mclust (", x$Mclust$modelName, ",", x$Mclust$G, ")", "\n", sep = "") # if(!is.null(x$MclustDR)) cat("\nModel on projection subspace: (", x$MclustDR$modelName, ",", x$MclustDR$G, ")", "\n", sep = "") # cat("\nCluster cores:\n") print(x$clusterCores) # cat("\nFinal clustering:\n") print(x$cluster) # invisible() } plot.gmmhd <- function(x, what = c("mode", "cores", "clusters"), ...) { object <- x what <- match.arg(what, choices = eval(formals(plot.gmmhd)$what), several.ok = TRUE) if(interactive() & length(what) > 1) { title <- "GMM high-density connected components:" # present menu waiting user choice choice <- menu(what, graphics = FALSE, title = title) while(choice != 0) { if(what[choice] == "mode") plot.gmmhd.mode(object, ...) if(what[choice] == "cores") plot.gmmhd.cores(object, ...) if(what[choice] == "clusters") plot.gmmhd.clusters(object, ...) # re-present menu waiting user choice choice <- menu(what, graphics = FALSE, title = title) } } else { if(any(what == "mode")) plot.gmmhd.mode(object, ...) if(any(what == "cores")) plot.gmmhd.cores(object, ...) if(any(what == "clusters")) plot.gmmhd.clusters(object, ...) } invisible() } plot.gmmhd.mode <- function(object, ...) { plot(c(object$pc,1), c(object$nc,0), type = "S", xlab = "Proportion of observed data", ylab = "Mode function", yaxt = "n") axis(side = 2, at = seq(0, max(object$nc, na.rm = TRUE))) } plot.gmmhd.cores <- function(object, col = c("grey50", mclust.options("classPlotColors")), pch = c(1, mclust.options("classPlotSymbols")), ...) 
{ x <- object$x p <- ncol(x) n <- nrow(x) clCores <- object$clusterCores numClusters <- object$numClusters colCores <- col[1] col <- col[-1] col <- col[clCores] col[is.na(col)] <- colCores pch <- unique(pch) pchCores <- pch[1] pch <- pch[-1] pch <- pch[clCores] pch[is.na(pch)] <- pchCores cex <- rep(par("cex"), length(pch)) cex[is.na(clCores)] <- par("cex")/2 if(p == 1) { plot(x, object$density, col = col, pch = pch, cex = cex, ylim = range(0,object$density), xlab = colnames(x)[1], ylab = "Density", ...) } else if(p == 2) { plot(x[,1:2,drop=FALSE], col = col, pch = pch, cex = cex, ...) } else if(p > 2) { pairs(x, col = col, pch = pch, cex = cex, gap = 0, ...) } invisible() } plot.gmmhd.clusters <- function(object, col = mclust.options("classPlotColors"), pch = mclust.options("classPlotSymbols"), ...) { x <- object$x p <- ncol(x) n <- nrow(x) cluster <- object$cluster numClusters <- object$numClusters col <- col[cluster] pch <- setdiff(pch,22)[cluster] if(p == 1) { plot(x, object$density, col = col, pch = pch, ylim = range(0,object$density), xlab = colnames(x)[1], ylab = "Density", ...) } else if(p == 2) { plot(x[,1:2,drop=FALSE], col = col, pch = pch, ...) } else if(p > 2) { pairs(x, col = col, pch = pch, cex = 0.8, gap = 0, ...) } invisible() } gmmhdClusterCores <- function(object, tails = FALSE, ...) { # Identify cluster cores as the first subset of connected components # corresponding to the largest local mode n <- nrow(object$x) nc <- object$nc pc <- object$pc conComp <- object$con # select the subset with largest number of modes ... i <- which(diff(c(nc,0)) < 0) # i <- i[which(nc[i] == max(nc[i]))] # no to consider only the highest mode # remove spurius local modes, i.e. those not identified by at least # two consecutive density level # LS:20150107 okmode <- which(nc[i] == nc[i-1])[1] # LS:20150107 i <- if(length(okmode) > 0) i[okmode] else length(nc) # plot(pc, nc); abline(v = pc[i]) # ... and consider multiplicity of modes # LS: 20150107 i <- which(nc == max(nc[i])) # cc <- conComp[i] clusterCores <- matrix(as.double(NA), n, length(i)) for(j in 1:ncol(clusterCores)) for(cl in 1:length(cc[[j]])) { clusterCores[cc[[j]][[cl]],j] <- cl } while(ncol(clusterCores) > 1) { ncl <- length(unique(na.omit(clusterCores[,2]))) tmp <- rep(NA, n) for(cl in 1:ncl) { l <- which(clusterCores[,2] == cl) if(all(is.na(clusterCores[l,1]))) { tmp[l] <- paste(clusterCores[l,2],"*",sep="") } else { if(length(unique(na.omit(clusterCores[l,1]))) > 1) tmp[l] <- clusterCores[l,1] else tmp[l] <- paste(clusterCores[l,2],"*",sep="") } } clusterCores[,2] <- unclass(as.factor(tmp)) clusterCores <- clusterCores[,-1,drop=FALSE] } clusterCores <- as.vector(clusterCores) return(clusterCores) # select the last subset with largest number of modes # i <- max(which(nc == max(nc))) # select the first subset with largest number of modes i <- which(diff(c(nc,0)) < 0) i <- i[which(nc[i] == max(nc[i]))[1]] # select the largest subset with the largest number of modes # i <- i[max(which(nc[i] == max(nc[i])))] conComp <- object$con[[i]] clusterCores <- rep(NA, n) for(cl in 1:length(conComp)) { clusterCores[conComp[[cl]]] <- cl } return(clusterCores) } gmmhdClassify <- function(object, G = 1:5, modelNames = mclust.options("emModelNames"), verbose = TRUE, ...) 
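# Usage sketch (hypothetical, not run; assumes the 'geometry' package is
# installed): gmmhdClassify() is normally called from gmmhd() itself, but it
# can be re-run on a fitted 'gmmhd' object to allocate the non-core
# observations with a different family of classifiers:
#   mod <- gmmhd(Mclust(faithful))
#   cls <- gmmhdClassify(mod, G = 1:3, modelNames = "VVV")
#   table(cls$cluster)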
{ if(!inherits(object, "gmmhd")) stop("object is not of class 'gmmhd'") x <- object$x n <- nrow(x) p <- ncol(x) if(p == 1) modelNames <- unique(substr(modelNames, 1, 1)) clusterCores <- object$clusterCores numClusters <- length(tabulate(clusterCores)) con <- object$con # classify unclustered obs based on training cluster cores isCore <- (!is.na(clusterCores)) logRatio <- function(p) { p <- pmax(pmin(p, 1-sqrt(.Machine$double.eps)),sqrt(.Machine$double.eps)) log(p)-log(1-p) } # select num. components G to guarantee at least minSize obs per class numCompClass <- function(class, G, minSize = 10) { classSize <- tabulate(class) Gin <- as.vector(G) Gmax <- classSize %/% minSize Gmax <- pmin(Gmax, max(G)) G <- vector(length = length(Gmax), mode = "list") for(k in 1:length(G)) { G[[k]] <- intersect(Gin, seq(Gmax[k])) } return(G) } inc <- isCore cluster <- clusterCores while(sum(inc) < n) { mod <- MclustDA(data = x[inc,,drop=FALSE], class = as.character(cluster[inc]), G = numCompClass(cluster[inc], G), modelNames = modelNames, verbose = verbose) unallocated <- which(!inc) # remove those obs with density ~ 0 dens <- density.MclustDA(mod, newdata=x[unallocated,,drop=FALSE]) dens <- pmax(dens, .Machine$double.eps) i <- (dens/max(dens) > sqrt(.Machine$double.eps)) if(sum(i) > 0) unallocated <- unallocated[i] # pred <- predict(mod, newdata = x[unallocated,,drop=FALSE]) # questa versione puo' non allocare obs ai clusterCores piccoli # zmax <- apply(pred$z,1,max) # zclass <- apply(pred$z,1,which.max) # log.ratio <- logRatio(zmax) # alloc <- (log.ratio >= quantile(log.ratio, prob = sum(inc)/n)) # questa versione cerca di ctr per dim clusters e alloca alla classe # predicted iff logRatio is larger than sqrt(sum(inc)/n) quantile z <- pred$z zclass <- apply(z,1,which.max) alloc <- matrix(NA, nrow(z), ncol(z)) for(k in seq(ncol(z))) { log.ratio <- logRatio(z[,k]) alloc[,k] <- (log.ratio >= quantile(log.ratio, prob = sqrt(sum(inc)/n))) & (zclass == k) } alloc <- apply(alloc, 1, any) toclass <- unallocated[alloc] cluster[toclass] <- zclass[alloc] inc <- (!is.na(cluster)) } mod <- MclustDA(data = x, class = cluster, G = numCompClass(cluster[inc], G), modelNames = modelNames, verbose = verbose) cluster <- predict(mod, x)$classification out <- list(model = mod, clusterCores = clusterCores, cluster = cluster) return(out) } density.MclustDA <- function(object, newdata, prior, logarithm = FALSE, ...) 
{ # Compute the density based on a MclustDA model # (later it may be included in the 'mclust' package) # or it can be obtained from predict.MclustDA if(!inherits(object, "MclustDA")) stop("object not of class \"MclustDA\"") models <- object$models nclass <- length(models) n <- sapply(1:nclass, function(i) models[[i]]$n) if(missing(newdata)) { newdata <- object$data } if(object$d == 1) newdata <- as.vector(newdata) if(missing(prior)) { prior <- n/sum(n) } else { if(length(prior) != nclass) stop("wrong number of prior probabilities") if(any(prior < 0)) stop("prior must be nonnegative") } # compute on log scale for stability densfun <- function(mod, data) { do.call("dens", c(list(data = data, logarithm = TRUE), mod)) } # cden <- as.matrix(data.frame(lapply(models, densfun, data = newdata))) cden <- sweep(cden, 2, FUN = "+", STATS = log(prior)) maxlog <- apply(cden, 1, max) cden <- sweep(cden, 1, FUN = "-", STATS = maxlog) den <- log(apply(exp(cden), 1, sum)) + maxlog if(!logarithm) den <- exp(den) return(den) } # old version ConnectComp_old <- function(nb) { # Get connected components # Example: # nb <- list(c(1,2,3), c(2,3,4), c(9,10,11), c(9,11,12), c(1,6,5)) # if(length(nb) < 1 | !is.list(nb)) return(NULL) nb <- lapply(nb, function(x) as.integer(x)) n <- length(nb) u <- sort(unique(unlist(nb))) nu <- length(u) cnb <- cnb.old <- nb stable <- FALSE # merge the neighbors until the configuration is stable while(!stable) { i <- 0 while(i < length(cnb)) { i <- i + 1 j <- which(sapply(cnb, function(nbb) any(intersect(cnb[[i]], nbb)))) cnb[[i]] <- sort(unique(unlist(cnb[j]))) cnb[setdiff(j, i)] <- NULL } if(identical(cnb, cnb.old)) stable <- TRUE cnb.old <- cnb } return(cnb) } ConnectComp <- function(nb) { # Get connected components # Example: # nb <- list(c(1,2,3), c(2,3,4), c(9,10,11), c(9,11,12), c(1,6,5)) # ConnectComp(nb) if(length(nb) < 1 | !is.list(nb)) return(NULL) nb <- lapply(nb, function(x) as.integer(x)) n <- length(nb) u <- sort(unique(unlist(nb))) nu <- length(u) cnb <- cnb.old <- nb stable <- FALSE # merge the neighbors until the configuration is stable while(!stable) { i <- 0 while(i < length(cnb)) { i <- i + 1 j <- which(sapply(cnb, function(nbb) any(is.element(cnb[[i]], nbb)))) cnb[[i]] <- sort(unique(unlist(cnb[j]))) cnb[setdiff(j, i)] <- NULL } if(identical(cnb, cnb.old)) stable <- TRUE cnb.old <- cnb } return(cnb) } mclust/R/mbahc.R0000644000176200001440000005167413477456351013213 0ustar liggesusers## ## Model-based Agglomerative Hierarchical Clustering (MBAHC) ## # MBAHC used for EM initialization for d-dim data ---- hc <- function(data, modelName = mclust.options("hcModelName"), use = mclust.options("hcUse"), ...) { if(!any(modelName == c("E", "V", "EII", "VII", "EEE", "VVV"))) stop("invalid 'modelName' argument for model-based hierarchical clustering. See help(mclust.options)") if(!any(use == c("VARS", "STD", "SPH", "PCS", "PCR", "SVD", "RND"))) stop("invalid 'use' argument for model-based hierarchical clustering. 
See help(mclust.options)") funcName <- paste("hc", modelName, sep = "") mc <- match.call(expand.dots = TRUE) mc$use <- mc$modelName <- NULL data <- data.matrix(data) dropCols <- function(x) { # select only those columns of matrix x with all finite numeric values x[,apply(x, 2, function(x) all(is.finite(x))), drop = FALSE] } use <- toupper(use[1]) switch(use, "VARS" = { Z <- data }, "STD" = { Z <- scale(data, center = TRUE, scale = TRUE) Z <- dropCols(Z) }, "PCR" = { data <- scale(data, center = TRUE, scale = TRUE) data <- dropCols(data) SVD <- svd(data, nu=0) # evalues <- sqrt(SVD$d^2/(nrow(data)-1)) Z <- data %*% SVD$v }, "PCS" = { data <- scale(data, center = TRUE, scale = FALSE) SVD <- svd(data, nu=0) # evalues <- sqrt(SVD$d^2/(nrow(data)-1)) Z <- data %*% SVD$v Z <- dropCols(Z) }, "SPH" = { data <- scale(data, center = TRUE, scale = FALSE) n <- nrow(data); p <- ncol(data) Sigma <- var(data) * (n - 1)/n SVD <- svd(Sigma, nu = 0) Z <- data %*% SVD$v %*% diag(1/sqrt(SVD$d), p, p) Z <- dropCols(Z) }, "SVD" = { data <- scale(data, center = TRUE, scale = TRUE) data <- dropCols(data) p <- min(dim(data)) SVD <- svd(data, nu=0) Z <- data %*% SVD$v %*% diag(1/sqrt(SVD$d), p, p) }, "RND" = { out <- randomPairs(data, ...) attr(out, "dimensions") <- dim(data) attr(out, "use") <- use attr(out, "call") <- match.call() class(out) <- "hc" return(out) } ) # call the proper hc function mc$data <- Z mc[[1]] <- as.name(funcName) out <- eval(mc, parent.frame()) attr(out, "use") <- use attr(out, "call") <- match.call() attr(out, "data") <- mc$data class(out) <- "hc" return(out) } print.hc <- function(x, ...) { if(!is.null(attr(x, "call"))) { cat("Call:\n") catwrap(paste0(deparse(attr(x, "call")))) cat("\n") } catwrap("Model-Based Agglomerative Hierarchical Clustering") if(!is.null(attr(x, "modelName"))) cat(paste("Model name =", attr(x, "modelName"), "\n")) if(!is.null(attr(x, "use"))) cat(paste("Use =", attr(x, "use"), "\n")) if(!is.null(attr(x, "dimensions"))) cat(paste("Number of objects =", attr(x, "dimensions")[1], "\n")) invisible(x) } randomPairs <- function(data, seed, ...) { if(!missing(seed)) set.seed(seed) data <- as.matrix(data) n <- nrow(data) m <- if(n%%2 == 1) n-1 else n tree <- matrix(sample(1:n, m, replace = FALSE), nrow = 2, ncol = ceiling(m/2)) tree <- apply(tree, 2, sort) ind <- unique(tree[1,]) while(ncol(tree) < (m-1)) { addtree <- sort(sample(ind, size = 2, replace = FALSE)) ind <- setdiff(ind, addtree[2]) tree <- cbind(tree, addtree) } dimnames(tree) <- NULL structure(tree, initialPartition = 1:n, dimensions = c(n,2)) } hclass <- function(hcPairs, G) { initial <- attributes(hcPairs)$init n <- length(initial) k <- length(unique(initial)) G <- if(missing(G)) k:2 else rev(sort(unique(G))) select <- k - G if(length(select) == 1 && !select) return(matrix(initial, ncol = 1, dimnames = list(NULL, as.character(G)))) bad <- select < 0 | select >= k if(all(bad)) stop("No classification with the specified number of clusters") if(any(bad) & mclust.options("warn")) { warning("Some selected classifications are inconsistent with mclust object") } L <- length(select) cl <- matrix(as.double(NA), nrow = n, ncol = L, dimnames = list(NULL, as.character(G))) if(select[1]) m <- 1 else { cl[, 1] <- initial m <- 2 } for(l in 1:max(select)) { ij <- hcPairs[, l] i <- min(ij) j <- max(ij) initial[initial == j] <- i if(select[m] == l) { cl[, m] <- initial m <- m + 1 } } apply(cl[, L:1, drop = FALSE], 2, partconv, consec = TRUE) } hcEII <- function(data, partition, minclus = 1, ...) 
{ if(minclus < 1) stop("minclus must be positive") if(any(is.na(data))) stop("missing values not allowed in data") #==================================================================== dimdat <- dim(data) oneD <- (is.null(dimdat) || NCOL(data) == 1) #if(oneD || length(dimdat) > 2) # stop("data should in the form of a matrix") data <- as.matrix(data) dimnames(data) <- NULL n <- nrow(data) p <- ncol(data) if(missing(partition)) partition <- 1:n else if(length(partition) != n) stop("partition must assign a class to each observation") partition <- partconv(partition, consec = TRUE) l <- length(unique(partition)) attr(partition, "unique") <- l m <- l - minclus if(m <= 0) { stop("initial number of clusters is not greater than minclus") } if(n <= p & mclust.options("warn")) { warning("# of observations <= data dimension") } #============================================================= storage.mode(data) <- "double" ld <- max(c((l * (l - 1))/2, 3 * m)) temp <- .Fortran("hceii", data, as.integer(n), as.integer(p), as.integer(partition), as.integer(l), as.integer(m), double(p), as.integer(ld), double(ld), PACKAGE = "mclust")[c(1, 9)] temp[[1]] <- temp[[1]][1:m, 1:2, drop = FALSE] temp[[2]] <- temp[[2]][1:m] structure(t(temp[[1]]), initialPartition = partition, dimensions = dimdat, modelName = "EII", call = match.call()) } hcEEE <- function(data, partition, minclus = 1, ...) { if(minclus < 1) stop("minclus must be positive") if(any(is.na(data))) stop("missing values not allowed in data") #===================================================================== dimdat <- dim(data) oneD <- (is.null(dimdat) || NCOL(data) == 1) #if(oneD || length(dimdat) > 2) # stop("data should in the form of a matrix") data <- as.matrix(data) dimnames(data) <- NULL n <- nrow(data) p <- ncol(data) if(n <= p & mclust.options("warn")) warning("# of observations <= data dimension") if(missing(partition)) partition <- 1:n else if(length(partition) != n) stop("partition must assign a class to each observation") partition <- partconv(partition, consec = TRUE) l <- length(unique(partition)) attr(partition, "unique") <- l m <- l - minclus if(m <= 0) stop("initial number of clusters is not greater than minclus") storage.mode(data) <- "double" ## R 2.12.0: 32 bit Windows build fails due to compiler bug ## workaround: removal (hopefully temporary) of hc functionality for EEE # Luca: commented the next line and uncommented below # stop("hc for EEE model is not currently supported") temp <- .Fortran("hceee", data, as.integer(n), as.integer(p), as.integer(partition), as.integer(l), as.integer(m), if(p < 3) integer(m) else integer(1), if(p < 4) integer(m) else integer(1), double(p), double(p * p), double(p * p), double(p * p), PACKAGE = "mclust")[c(1, 7:10)] # # currently temp[[5]] is not output temp[[4]] <- temp[[4]][1:2] temp[[5]] <- temp[[5]][1:2] names(temp[[5]]) <- c("determinant", "trace") temp[[1]] <- temp[[1]][1:(m + 1), ] if(p < 3) tree <- rbind(temp[[2]], temp[[3]]) else if(p < 4) tree <- rbind(temp[[1]][-1, 3], temp[[3]]) else tree <- t(temp[[1]][-1, 3:4, drop = FALSE]) determinant <- temp[[1]][, 1] attr(determinant, "breakpoints") <- temp[[4]] trace <- temp[[1]][, 2] structure(tree, initialPartition = partition, dimensions = dimdat, modelName = "EEE", call = match.call()) } hcVII <- function(data, partition, minclus = 1, alpha = 1, ...) 
{ if(minclus < 1) stop("minclus must be positive") if(any(is.na(data))) stop("missing values not allowed in data") #===================================================================== dimdat <- dim(data) oneD <- (is.null(dimdat) || NCOL(data) == 1) #if(oneD || length(dimdat) > 2) # stop("data should in the form of a matrix") data <- as.matrix(data) dimnames(data) <- NULL n <- nrow(data) p <- ncol(data) if(n <= p & mclust.options("warn")) warning("# of observations <= data dimension") if(missing(partition)) partition <- 1:n else if(length(partition) != n) stop("partition must assign a class to each observation") partition <- partconv(partition, consec = TRUE) l <- length(unique(partition)) attr(partition, "unique") <- l m <- l - minclus if(m <= 0) stop("initial number of clusters is not greater than minclus") storage.mode(data) <- "double" ll <- (l * (l - 1))/2 ld <- max(n, ll, 3 * m) alpha <- alpha * traceW(data/sqrt(n * p)) alpha <- max(alpha, .Machine$double.eps) temp <- .Fortran("hcvii", data, as.integer(n), as.integer(p), as.integer(partition), as.integer(l), as.integer(m), as.double(alpha), double(p), as.integer(ld), double(ld), PACKAGE = "mclust")[c(1, 10)] temp[[1]] <- temp[[1]][1:m, 1:2, drop = FALSE] temp[[2]] <- temp[[2]][1:m] structure(t(temp[[1]]), initialPartition = partition, dimensions = dimdat, modelName = "VII", call = match.call()) } hcVVV <- function(data, partition, minclus = 1, alpha = 1, beta = 1, ...) { if(minclus < 1) stop("minclus must be positive") if(any(is.na(data))) stop("missing values not allowed in data") dimdat <- dim(data) oneD <- (is.null(dimdat) || NCOL(data) == 1) #if(oneD || length(dimdat) > 2) # stop("data should in the form of a matrix") data <- as.matrix(data) dimnames(data) <- NULL n <- nrow(data) p <- ncol(data) if(n <= p & mclust.options("warn")) warning("# of observations <= data dimension") if(missing(partition)) partition <- 1:n else if(length(partition) != n) stop("partition must assign a class to each observation") partition <- partconv(partition, consec = TRUE) l <- length(unique(partition)) attr(partition, "unique") <- l m <- l - minclus if(m <= 0) stop("initial number of clusters is not greater than minclus") storage.mode(data) <- "double" ll <- (l * (l - 1))/2 # dp <- duplicated(partition) #x[c((1:n)[!dp],(1:n)[dp]),], #as.integer(c(partition[!dp], partition[dp])), ld <- max(n, ll + 1, 3 * m) alpha <- alpha * traceW(data/sqrt(n * p)) alpha <- max(alpha, .Machine$double.eps) temp <- .Fortran("hcvvv", cbind(data, 0.), as.integer(n), as.integer(p), as.integer(partition), as.integer(l), as.integer(m), as.double(alpha), as.double(beta), double(p), double(p * p), double(p * p), double(p * p), as.integer(ld), double(ld), PACKAGE = "mclust")[c(1, 14)] temp[[1]] <- temp[[1]][1:m, 1:2, drop = FALSE] temp[[2]] <- temp[[2]][1:m] structure(t(temp[[1]]), initialPartition = partition, dimensions = dimdat, modelName = "VVV", call = match.call()) } ## ## Dendrogram for model-based hierarchical agglomeration ---- ## as.hclust.hc <- function(x, ...) 
{ # Convert 'hc' objects to class 'hclust' stopifnot(inherits(x, "hc")) data <- as.matrix(attr(x, "data")) labels <- rownames(data) # convert a 'hc' hierarchical clustering structure to 'hclust' structure HC <- matrix(as.vector(x), ncol(x), nrow(x), byrow = TRUE) HCm <- matrix(NA, nrow(HC), ncol(HC)) merged <- list(as.vector(HC[1,])) HCm[1,] <- -HC[1,] for(i in 2:nrow(HC)) { lmerged <- lapply(merged, function(m) HC[i,] %in% m) lm <- which(sapply(lmerged, function(lm) any(lm))) if(length(lm) == 0) { merged <- append(merged, list(HC[i,])) HCm[i,] <- sort(-HC[i,]) } else if(length(lm) == 1) { merged <- append(merged, list(c(merged[[lm]], HC[i,!lmerged[[lm]]]))) merged[[lm]] <- list() HCm[i,] <- sort(c(-HC[i,!lmerged[[lm]]], lm)) } else { merged <- append(merged, list(unlist(merged[lm]))) merged[[lm[1]]] <- merged[[lm[2]]] <- list() HCm[i,] <- lm } } # compute heights height <- attr(x, "deviance") if(is.null(height)) height <- hcCriterion(x, ...) # create 'hclust' object obj <- structure(list(merge = HCm, height = rev(height), order = merged[[length(merged)]], labels = labels, method = attr(x, "model"), dist.method = NULL, call = attr(x, "call")), class = "hclust") return(obj) } as.dendrogram.hc <- function(object, ...) { # Convert 'hc' objects to class 'dendrogram' stopifnot(inherits(object, "hc")) as.dendrogram(as.hclust(object)) } plot.hc <- function(x, ...) { stopifnot(inherits(x, "hc")) # dots <- list(...) # if(is.null(dots$hang)) dots$hang <- -1 # if(is.null(dots$sub)) dots$sub <- NA dendro <- as.dendrogram(x) # do.call("plot", c(list(hcl), dots)) plot(dendro) invisible(dendro) } # Auxiliary functions ---- hcCriterion <- function(hcPairs, Gmax, what = c("deviance", "loglik"), ...) { stopifnot(inherits(hcPairs, "hc")) hcPairsName <- deparse(substitute(hcPairs)) what <- match.arg(what, choices = eval(formals(hcCriterion)$what)) data <- as.matrix(attr(hcPairs, "data")) N <- nrow(data) p <- ncol(data) model <- attr(hcPairs, "model") m <- ifelse(missing(Gmax), ncol(hcPairs), as.integer(Gmax)) hc <- hclass(hcPairs, seq_len(m)) Wdata <- var(data)*(N-1) trWnp <- tr(Wdata)/(N*p) # detS <- det(Wdata/N) loglik <- rep(as.double(NA), length = m) # loglik[1] <- mvn(model, data)$loglik switch(model, "EII" = { for(k in 1:m) { n <- tabulate(hc[,k], k) # mu <- by(data, as.factor(hc[,k]), FUN = colMeans, # simplify = FALSE) W <- WSS(data, hc[,k]) sigmasq <- sum(apply(W, 3, tr), na.rm=TRUE)/(N*p) loglik[k] <- -0.5*p*N*log(2*pi) -0.5*N*p + -0.5*sum(n*log(sigmasq^p + apply(W, 3, tr)/n + trWnp)) } }, "VII" = { for(k in 1:m) { n <- tabulate(hc[,k], k) W <- WSS(data, hc[,k]) sigmasq <- apply(W, 3, tr)/(n*p) loglik[k] <- -0.5*p*N*log(2*pi) -0.5*N*p + -0.5*sum(n*log(sigmasq^p + apply(W, 3, tr)/n + trWnp)) } }, "EEE" = { for(k in 1:m) { n <- tabulate(hc[,k], k) W <- WSS(data, hc[,k]) Sigma <- apply(W, 1:2, sum)/N loglik[k] <- -0.5*p*N*log(2*pi) -0.5*N*p + -0.5*sum(n*log(det(Sigma) + apply(W, 3, tr)/n + trWnp)) } }, "VVV" = { for(k in 1:m) { n <- tabulate(hc[,k], k) W <- WSS(data, hc[,k]) Sigma <- sapply(1:k, function(k) W[,,k]/n[k], simplify = "array") loglik[k] <- -0.5*p*N*log(2*pi) -0.5*N*p + -0.5*sum(n*log(apply(Sigma, 3, det) + apply(W, 3, tr)/n + trWnp)) } } ) deviance <- -2*(loglik - max(loglik, na.rm = TRUE)) # attr(hcPairs, "loglik") <- loglik # attr(hcPairs, "deviance") <- deviance # assign(hcPairsName, hcPairs, envir = parent.frame()) out <- switch(what, "deviance" = deviance, "loglik" = loglik, NULL) return(out) } WSS <- function(X, group, ...) 
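# WSS() returns the p x p x G array of within-group scatter matrices
#   W_g = sum_{i: group_i = g} (x_i - xbar_g) (x_i - xbar_g)'
# computed by the Fortran routine 'covwf'. A hypothetical equivalence check
# in plain R (not run):
#   W  <- WSS(iris[, 1:4], iris$Species)
#   W1 <- crossprod(scale(iris[iris$Species == "setosa", 1:4], scale = FALSE))
#   all.equal(W[,,1], W1, check.attributes = FALSE)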
{ X <- as.matrix(X) Z <- unmap(as.vector(group)) n <- nrow(X) p <- ncol(X) G <- ncol(Z) tmp <- .Fortran("covwf", X = as.double(X), Z = as.double(Z), n = as.integer(n), p = as.integer(p), G = as.integer(G), mean = double(p * G), S = double(p * p * G), W = double(p * p * G) ) array(tmp$W, c(p,p,G)) } tr <- function(x) { sum(diag(as.matrix(x))) } ## Initialization for 1-dim data ---- qclass <- function (x, k) { x <- as.vector(x) # eps <- sqrt(.Machine$double.eps) # numerical accuracy problem if scale of x is large, so make tolerance # scale dependent eps <- sd(x)*sqrt(.Machine$double.eps) q <- NA n <- k while(length(q) < (k+1)) { n <- n + 1 q <- unique(quantile(x, seq(from = 0, to = 1, length = n))) } if(length(q) > (k+1)) { dq <- diff(q) nr <- length(q)-k-1 q <- q[-order(dq)[1:nr]] } q[1] <- min(x) - eps q[length(q)] <- max(x) + eps cl <- rep(0, length(x)) for(i in 1:k) { cl[ x >= q[i] & x < q[i+1] ] <- i } return(cl) } hcE <- function(data, partition, minclus = 1, ...) { if(minclus < 1) stop("minclus must be positive") if(any(is.na(data))) stop("missing values not allowed in data") #==================================================================== dimdat <- dim(data) oneD <- (is.null(dimdat) || NCOL(data) == 1) if(!oneD) stop("data must be one-dimensional") data <- as.vector(data) n <- length(data) if(missing(partition)) partition <- 1:n else if(length(partition) != n) stop("partition must assign a class to each observation") partition <- partconv(partition, consec = TRUE) l <- length(unique(partition)) attr(partition, "unique") <- l m <- l - minclus if(m <= 0) stop("initial number of clusters is not greater than minclus") storage.mode(data) <- "double" ld <- max(c((l * (l - 1))/2, 3 * m)) temp <- .Fortran("hc1e", data, as.integer(n), as.integer(partition), as.integer(l), as.integer(m), as.integer(ld), double(ld), PACKAGE = "mclust")[c(1, 3, 7)] temp[[1]] <- temp[[1]][1:m] temp[[2]] <- temp[[2]][1:m] temp[[3]] <- temp[[3]][1:m] structure(rbind(temp[[1]], temp[[2]]), initialPartition = partition, dimensions = n, modelName = "E", call = match.call()) } hcV <- function(data, partition, minclus = 1, alpha = 1, ...) { if(minclus < 1) stop("minclus must be positive") if(any(is.na(data))) stop("missing values not allowed in data") #===================================================================== dimdat <- dim(data) oneD <- (is.null(dimdat) || NCOL(data) == 1) if(!oneD) stop("data must be one-dimensional") data <- as.vector(data) n <- length(data) if(missing(partition)) partition <- 1:n else if(length(partition) != n) stop("partition must assign a class to each observation") partition <- partconv(partition, consec = TRUE) l <- length(unique(partition)) attr(partition, "unique") <- l m <- l - minclus if(m <= 0) stop("initial number of clusters is not greater than minclus") storage.mode(data) <- "double" alpha <- alpha * (vecnorm(data - mean(data))^2/n) alpha <- min(alpha, .Machine$double.eps) ld <- max(c((l * (l - 1))/2, 3 * m)) temp <- .Fortran("hc1v", data, as.integer(n), as.integer(partition), as.integer(l), as.integer(m), as.double(alpha), as.integer(ld), double(ld), PACKAGE = "mclust")[c(1, 3, 8)] temp[[1]] <- temp[[1]][1:m] temp[[2]] <- temp[[2]][1:m] temp[[3]] <- temp[[3]][1:m] structure(rbind(temp[[1]], temp[[2]]), initialPartition = partition, dimensions = n, modelName = "V", call = match.call()) } mclust/R/densityMclust.R0000644000176200001440000003734013477460047014777 0ustar liggesusersdensityMclust <- function(data, ...) { mc <- match.call() obj <- Mclust(data, ...) 
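## Illustrative usage sketch for densityMclust() and predict.densityMclust():
## not part of the original source and wrapped in if (FALSE), so it has no
## effect where it sits in this file. It assumes the 'faithful' data set that
## ships with base R; all object names below are hypothetical.
if (FALSE) {
  library(mclust)
  dens <- densityMclust(faithful$waiting)               # univariate density estimate
  summary(dens)                                         # selected model and components
  predict(dens, newdata = c(50, 80))                    # density at new points
  predict(dens, newdata = c(50, 80), logarithm = TRUE)  # same, on the log scale
  plot(dens, what = "density", data = faithful$waiting) # density over a histogram
}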
obj$call <- mc d <- dens(modelName = obj$modelName, data = data, parameters = obj$parameters, logarithm = FALSE) obj$density <- d class(obj) <- c("densityMclust", "Mclust") return(obj) } predict.densityMclust <- function(object, newdata, what = c("dens", "cdens", "z"), logarithm = FALSE, ...) { if(!inherits(object, "densityMclust")) stop("object not of class \"densityMclust\"") if(missing(newdata)) { newdata <- object$data } newdata <- as.matrix(newdata) if(ncol(object$data) != ncol(newdata)) { stop("newdata must match ncol of object data") } what <- match.arg(what, choices = eval(formals(predict.densityMclust)$what)) pro <- object$parameters$pro; pro <- pro/sum(pro) noise <- (!is.na(object$hypvol)) cl <- c(seq(object$G), if(noise) 0) switch(what, "dens" = { out <- dens(modelName = object$modelName, data = newdata, parameters = object$parameters, logarithm = logarithm) }, "cdens" = { z <- cdens(modelName = object$modelName, data = newdata, parameters = object$parameters, logarithm = TRUE) z <- if(noise) cbind(z, log(object$parameters$Vinv)) else cbind(z) # drop redundant attributes colnames(z) <- cl out <- if(!logarithm) exp(z) else z }, "z" = { z <- cdens(modelName = object$modelName, data = newdata, parameters = object$parameters, logarithm = TRUE) z <- if(noise) cbind(z, log(object$parameters$Vinv)) else cbind(z) # drop redundant attributes z <- sweep(z, MARGIN = 2, FUN = "+", STATS = log(pro)) z <- sweep(z, MARGIN = 1, FUN = "-", STATS = apply(z, 1, logsumexp)) colnames(z) <- cl out <- if(!logarithm) exp(z) else z } ) return(out) } plot.densityMclust <- function(x, data = NULL, what = c("BIC", "density", "diagnostic"), ...) { object <- x # Argh. Really want to use object anyway what <- match.arg(what, several.ok = TRUE) if(object$d > 1) what <- setdiff(what, "diagnostic") oldpar <- par(no.readonly = TRUE) # on.exit(par(oldpar)) plot.densityMclust.density <- function(...) { if(object$d == 1) plotDensityMclust1(object, data = data, ...) else if(object$d == 2) plotDensityMclust2(object, data = data, ...) else plotDensityMclustd(object, data = data, ...) } plot.densityMclust.bic <- function(...) { plot.mclustBIC(object$BIC, ...) } plot.densityMclust.diagnostic <- function(...) { densityMclust.diagnostic(object, ...) } if(interactive() & length(what) > 1) { title <- "Model-based density estimation plots:" # present menu waiting user choice choice <- menu(what, graphics = FALSE, title = title) while(choice != 0) { if(what[choice] == "BIC") plot.densityMclust.bic(...) if(what[choice] == "density") plot.densityMclust.density(...) if(what[choice] == "diagnostic") plot.densityMclust.diagnostic(...) # re-present menu waiting user choice choice <- menu(what, graphics = FALSE, title = title) } } else { if(any(what == "BIC")) plot.densityMclust.bic(...) if(any(what == "density")) plot.densityMclust.density(...) if(any(what == "diagnostic")) plot.densityMclust.diagnostic(...) } invisible() } plotDensityMclust1 <- function(x, data = NULL, hist.col = "lightgrey", hist.border = "white", breaks = "Sturges", ...) { object <- x # Argh. 
Really want to use object anyway mc <- match.call(expand.dots = TRUE) mc$x <- mc$data <- mc$hist.col <- mc$hist.border <- mc$breaks <- NULL xlab <- mc$xlab if(is.null(xlab)) xlab <- deparse(object$call$data) ylab <- mc$ylab if(is.null(ylab)) ylab <- "Density" # xrange <- extendrange(object$data, f = 0.1) xlim <- eval(mc$xlim, parent.frame()) if(!is.null(xlim)) xrange <- range(xlim) ylim <- eval(mc$ylim, parent.frame()) # eval.points <- seq(from = xrange[1], to = xrange[2], length = 1000) d <- predict.densityMclust(object, eval.points) # if(!is.null(data)) { h <- hist(data, breaks = breaks, plot = FALSE) plot(h, freq = FALSE, col = hist.col, border = hist.border, main = "", xlim = range(h$breaks, xrange), # ylim = range(0, ylim, h$density, max(d)+diff(range(d))*0.1), ylim = if(!is.null(ylim)) range(ylim) else range(0, h$density, d), xlab = xlab, ylab = ylab) box() mc[[1]] <- as.name("lines") mc$x <- eval.points mc$y <- d mc$type <- "l" eval(mc, parent.frame()) } else { mc[[1]] <- as.name("plot") mc$x <- eval.points mc$y <- d mc$type <- "l" mc$xlim <- xlim mc$ylim <- if(!is.null(ylim)) range(ylim) else range(0, d) mc$ylab <- ylab mc$xlab <- xlab eval(mc, parent.frame()) } invisible() } plotDensityMclust2 <- function(x, data = NULL, nlevels = 11, levels = NULL, prob = c(0.25, 0.5, 0.75), points.pch = 1, points.col = 1, points.cex = 0.8, ...) { # This function call surfacePlot() with a suitable modification of arguments object <- x # Argh. Really want to use object anyway mc <- match.call(expand.dots = TRUE) mc$x <- mc$points.pch <- mc$points.col <- mc$points.cex <- NULL mc$nlevels <- nlevels mc$levels <- levels if(!is.null(mc$type)) if(mc$type == "level") mc$type <- "hdr" # TODO: to be removed if(isTRUE(mc$type == "hdr")) { mc$levels <- c(sort(hdrlevels(object$density, prob)), 1.1*max(object$density)) mc$nlevels <- length(mc$levels) } if(is.null(data)) { addPoints <- FALSE mc$data <- object$data } else { data <- as.matrix(data) stopifnot(ncol(data) == ncol(object$data)) addPoints <- TRUE } # set mixture parameters par <- object$parameters # these parameters should be missing par$variance$cholSigma <- par$Sigma <- NULL if(is.null(par$pro)) par$pro <- 1 # LS: bug? par$variance$cholsigma <- par$variance$sigma for(k in seq(par$variance$G)) { par$variance$cholsigma[,,k] <- chol(par$variance$sigma[,,k]) } mc$parameters <- par # now surfacePlot() is called mc[[1]] <- as.name("surfacePlot") out <- eval(mc, parent.frame()) if(addPoints) points(data, pch = points.pch, col = points.col, cex = points.cex) # invisible(out) } plotDensityMclustd <- function(x, data = NULL, nlevels = 11, levels = NULL, prob = c(0.25, 0.5, 0.75), points.pch = 1, points.col = 1, points.cex = 0.8, gap = 0.2, ...) { # This function call surfacePlot() with a suitable modification of arguments object <- x # Argh. 
Really want to use object anyway mc <- match.call(expand.dots = TRUE) mc$x <- mc$points.pch <- mc$points.col <- mc$points.cex <- mc$gap <- NULL mc$nlevels <- nlevels mc$levels <- levels mc$prob <- prob if(!is.null(mc$type)) if(mc$type == "level") mc$type <- "hdr" # TODO: to be removed if(is.null(data)) { data <- mc$data <- object$data addPoints <- FALSE } else { data <- as.matrix(data) stopifnot(ncol(data) == ncol(object$data)) addPoints <- TRUE } nc <- object$d oldpar <- par(mfrow = c(nc, nc), mar = rep(gap/2,4), oma = rep(3, 4), no.readonly = TRUE) on.exit(par(oldpar)) for(i in seq(nc)) { for(j in seq(nc)) { if(i == j) { plot(data[,c(i,j)], type="n", xlab = "", ylab = "", axes=FALSE) text(mean(par("usr")[1:2]), mean(par("usr")[3:4]), colnames(data)[i], cex = 1.5, adj = 0.5) box() } else { # set mixture parameters par <- object$parameters if(is.null(par$pro)) par$pro <- 1 par$mean <- par$mean[c(j,i),,drop=FALSE] par$variance$d <- 2 sigma <- array(dim = c(2, 2, par$variance$G)) for(g in seq(par$variance$G)) sigma[,,g] <- par$variance$sigma[c(j,i),c(j,i),g] par$variance$sigma <- sigma par$variance$Sigma <- NULL par$variance$cholSigma <- NULL par$variance$cholsigma <- NULL mc$parameters <- par mc$data <- object$data[,c(j,i)] mc$axes <- FALSE mc[[1]] <- as.name("surfacePlot") eval(mc, parent.frame()) box() if(addPoints & (j > i)) points(data[,c(j,i)], pch = points.pch, col = points.col, cex = points.cex) } if(i == 1 && (!(j%%2))) axis(3) if(i == nc && (j%%2)) axis(1) if(j == 1 && (!(i%%2))) axis(2) if(j == nc && (i%%2)) axis(4) } } # invisible() } dens <- function(modelName, data, logarithm = FALSE, parameters, warn = NULL, ...) { if(is.null(warn)) warn <- mclust.options("warn") # aux <- list(...) cden <- cdens(modelName = modelName, data = data, logarithm = TRUE, parameters = parameters, warn = warn) dimdat <- dim(data) oneD <- is.null(dimdat) || NCOL(data) == 1 G <- if(oneD) { length(parameters$mean) } else { ncol(as.matrix(parameters$mean)) } pro <- parameters$pro if(is.null(pro)) stop("mixing proportions must be supplied") noise <- (!is.null(parameters$Vinv)) if(G > 1) { if(noise) { # proN <- pro[length(pro)] pro <- pro[-length(pro)] # pro <- pro/sum(pro) } if(any(proz <- pro == 0)) { pro <- pro[!proz] cden <- cden[, !proz, drop = FALSE] } cden <- sweep(cden, 2, FUN = "+", STATS = log(pro)) } # logsumexp maxlog <- apply(cden, 1, max) cden <- sweep(cden, 1, FUN = "-", STATS = maxlog) den <- log(apply(exp(cden), 1, sum)) + maxlog if(noise) den <- den + parameters$pro[G+1]*parameters$Vinv if(!logarithm) den <- exp(den) den } cdens <- function(modelName, data, logarithm = FALSE, parameters, warn = NULL, ...) { modelName <- switch(EXPR = modelName, X = "E", XII = "EII", XXI = "EEI", XXX = "EEE", modelName) checkModelName(modelName) funcName <- paste("cdens", modelName, sep = "") mc <- match.call(expand.dots = TRUE) mc[[1]] <- as.name(funcName) mc$modelName <- NULL eval(mc, parent.frame()) } densityMclust.diagnostic <- function(object, type = c("cdf", "qq"), col = c("black", "black"), lwd = c(2,1), lty = c(1,1), legend = TRUE, grid = TRUE, ...) { # Diagnostic plots for density estimation # (only available for the one-dimensional case) # # Arguments: # object = a 'densityMclust' object # data = the data vector # type = type of diagnostic plot: # cdf = the fitted distribution function vs the empirical distribution function; # qq = the fitted distribution function evaluated over the observed points vs # the quantile from a uniform distribution. # # Reference: # Loader C. 
(1999), Local Regression and Likelihood. New York, Springer, # pp. 87-90) if(!any(class(object) == "densityMclust")) { stop("first argument must be an object of class 'densityMclust'") } if(object$d > 1) { warning("only available for one-dimensional data") return() } type <- match.arg(type, c("cdf", "qq"), several.ok = TRUE) # main <- if(is.null(main) || is.character(main)) FALSE else as.logical(main) data <- as.numeric(object$data) n <- length(data) cdf <- cdfMclust(object, data = data, ngrid = min(n*10,1000), ...) oldpar <- par(no.readonly = TRUE) if(interactive() & length(type) > 1) { par(ask = TRUE) on.exit(par(oldpar)) } if(any(type == "cdf")) { # Fitted CDF vs Emprical CDF empcdf <- ecdf(data) plot(empcdf, do.points = FALSE, verticals = TRUE, col = col[2], lwd = lwd[2], lty = lty[2], xlab = deparse(object$call$data), ylab = "Cumulative Distribution Function", panel.first = if(grid) grid(equilogs=FALSE) else NULL, main = NULL, ...) # if(main) title(main = "CDF plot", cex.main = 1.1) lines(cdf, col = col[1], lwd = lwd[1], lty = lty[1]) rug(data) if(legend) { legend("bottomright", legend = c("Estimated CDF", "Empirical CDF"), ncol = 1, inset = 0.05, cex = 0.8, col = col, lwd = lwd, lty = lty) } } if(any(type == "qq")) { # Q-Q plot q <- quantileMclust(object, p = ppoints(n)) plot(q, sort(data), xlab = "Quantiles from estimated density", ylab = "Sample Quantiles", panel.first = if(grid) grid(equilogs=FALSE) else NULL, main = NULL, ...) # if(main) title(main = "Q-Q plot", cex.main = 1.1) with(list(y = sort(data), x = q), { i <- (y > quantile(y, 0.25) & y < quantile(y, 0.75)) abline(lm(y ~ x, subset = i), lty = 2) }) # P-P plot # cdf <- cdfMclust(object, data, ...) # plot(seq(1,n)/(n+1), cdf$y, xlab = "Uniform quantiles", # ylab = "Cumulative Distribution Function", # main = "Diagnostic: P-P plot") # abline(0, 1, lty = 2) } invisible() } cdfMclust <- function(object, data, ngrid = 100, ...) { # Cumulative Density Function # (only available for the one-dimensional case) # # Returns the estimated CDF evaluated at points given by the optional # argument data. If not provided, a regular grid of ngrid points is used. # # Arguments: # object = a 'densityMclust' object # data = the data vector # ngrid = the length of rectangular grid if(!any(class(object) == "densityMclust")) { stop("first argument must be an object of class 'densityMclust'") } if(missing(data)) { eval.points <- extendrange(object$data, f = 0.1) eval.points <- seq(eval.points[1], eval.points[2], length.out = ngrid) } else { eval.points <- sort(as.vector(data)) ngrid <- length(eval.points) } G <- object$G pro <- object$parameters$pro mean <- object$parameters$mean var <- object$parameters$variance$sigmasq if(length(var) < G) var <- rep(var, G) noise <- (!is.null(object$parameters$Vinv)) cdf <- rep(0, ngrid) for(k in seq(G)) { cdf <- cdf + pro[k]*pnorm(eval.points, mean[k], sqrt(var[k])) } if(noise) cdf <- cdf/sum(pro[seq(G)]) out <- list(x = eval.points, y = cdf) return(out) } quantileMclust <- function(object, p, ...) 
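## Illustrative usage sketch (comments only, so nothing is executed between the
## function signature above and its body below). It shows the CDF/quantile
## helpers defined here for univariate density estimates. The 'faithful' data
## set ships with base R; object names are hypothetical.
##
##   dens <- densityMclust(faithful$waiting)
##   cdf  <- cdfMclust(dens, data = seq(40, 100, by = 1))  # estimated P(X <= x)
##   qs   <- quantileMclust(dens, p = c(0.1, 0.5, 0.9))    # inverse of the CDF
##   densityMclust.diagnostic(dens, type = "qq")           # Q-Q diagnostic plot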
{ # Calculate the quantile of a univariate mixture corresponding to cdf equal to p # # Arguments: # object = a 'densityMclust' object # p = vector of probabilities (0 <= p <= 1) if(!any(class(object) == "densityMclust")) { stop("first argument must be an object of class 'densityMclust'") } eval.points <- extendrange(object$data, f = 1) eval.points <- seq(eval.points[1], eval.points[2], length.out = 10000) cdf <- cdfMclust(object, data = eval.points) q <- spline(cdf$y, cdf$x, method = "fmm", xmin = 0, xmax = 1, xout = p)$y q[ p < 0 | p > 1] <- NaN q[ p == 0 ] <- -Inf q[ p == 1 ] <- Inf return(q) }mclust/R/mclustda.R0000644000176200001440000011305613477460237013744 0ustar liggesusersMclustDA <- function(data, class, G = NULL, modelNames = NULL, modelType = c("MclustDA", "EDDA"), prior = NULL, control = emControl(), initialization = NULL, warn = mclust.options("warn"), verbose = interactive(), ...) { call <- match.call() mc <- match.call(expand.dots = TRUE) # if(missing(data)) stop("no training data provided!") data <- data.matrix(data) n <- nrow(data) p <- ncol(data) oneD <- if(p==1) TRUE else FALSE # if(missing(class)) stop("class labels for training data must be provided!") class <- as.factor(class) classLabel <- levels(class) ncl <- nlevels(class) if(ncl == 1) G <- 1 prop <- as.vector(table(class))/n names(prop) <- classLabel # modelType <- match.arg(modelType, choices = eval(formals(MclustDA)$modelType), several.ok = FALSE) # if(is.null(G)) { G <- rep(list(1:5), ncl) } else if(is.list(G)) { G <- lapply(G, sort) } else { G <- rep(list(sort(G)), ncl) } if(any(unlist(G) <= 0)) stop("G must be positive") # if(is.null(modelNames)) { if(oneD) modelNames <- c("E", "V") else modelNames <- mclust.options("emModelNames") } if(n <= p) { m <- match(c("EEE","EEV","VEV","VVV"), mclust.options("emModelNames"), nomatch=0) modelNames <- modelNames[-m] } if(!is.list(modelNames)) { modelNames <- rep(list(modelNames), ncl) } # hcUse <- mclust.options("hcUse") mclust.options("hcUse" = "VARS") on.exit(mclust.options("hcUse" = hcUse)) # if(modelType == "EDDA") { mc[[1]] <- as.name("mstep") mc$class <- mc$G <- mc$modelNames <- mc$modelType <- NULL mc$warn <- FALSE mc$z <- unmap(as.numeric(class)) G <- 1 modelNames <- unique(unlist(modelNames)) BIC <- rep(NA, length(modelNames)) Model <- NULL if(verbose) { cat("fitting ...\n") flush.console() pbar <- txtProgressBar(min = 0, max = length(modelNames), style = 3) on.exit(close(pbar)) ipbar <- 0 } for(i in seq(modelNames)) { mc$modelName <- as.character(modelNames[i]) mStep <- eval(mc, parent.frame()) eStep <- do.call("estep", c(mStep, list(data = data, warn = FALSE))) BIC[i] <- do.call("bic", c(eStep, list(equalPro = TRUE))) if(!is.na(BIC[i]) && BIC[i] >= max(BIC, na.rm = TRUE)) Model <- eStep if(verbose) { ipbar <- ipbar+1; setTxtProgressBar(pbar, ipbar) } } if(all(is.na(BIC))) { warning("No model(s) can be estimated!!") return() } names(BIC) <- modelNames bic <- max(BIC, na.rm = TRUE) loglik <- Model$loglik df <- (2*loglik - bic)/log(Model$n) # there are (nclass-1) more df than real needed # equal to logLik(object) but faster Model <- c(Model, list("BIC" = BIC)) Models <- rep(list(Model), ncl) names(Models) <- classLabel for(l in 1:ncl) { I <- (class == classLabel[l]) Models[[l]]$n <- sum(I) Models[[l]]$G <- 1 Models[[l]]$bic <- Models[[l]]$loglik <- NULL par <- Models[[l]]$parameters par$pro <- 1 par$mean <- if(oneD) par$mean[l] else par$mean[,l,drop=FALSE] par$variance$G <- 1 if(oneD) { # par$variance$sigma <- par$variance$sigma[l] 
if(length(par$variance$sigmasq) > 1) par$variance$sigmasq <- par$variance$sigmasq[l] else par$variance$sigmasq <- par$variance$sigmasq } else { par$variance$sigma <- par$variance$sigma[,,l,drop=FALSE] if(length(par$variance$sigmasq) > 1) par$variance$sigmasq <- par$variance$sigmasq[l] if(length(par$variance$scale) > 1) par$variance$scale <- par$variance$scale[l] if(length(dim(par$variance$shape)) > 1) par$variance$shape <- par$variance$shape[,l] if(length(dim(par$variance$orientation)) > 2) # LS was > 1 par$variance$orientation <- par$variance$orientation[,,l,drop=FALSE] if(length(dim(par$variance$cholSigma)) > 2) par$variance$cholSigma <- par$variance$cholSigma[,,l,drop=FALSE] if(length(dim(par$variance$cholsigma)) > 2) par$variance$cholsigma <- par$variance$cholsigma[,,l,drop=FALSE] } Models[[l]]$parameters <- par Models[[l]]$z <- NULL # z[I,,drop=FALSE] Models[[l]]$classification <- rep(1, sum(I)) # apply(z[I,,drop=FALSE], 1, which.max) Models[[l]]$uncertainty <- NULL # 1 - apply(z[I,], 1, max) Models[[l]]$observations <- which(I) } } else { # modelType == "MclustDA" i.e. different covariance structures for each class Models <- rep(list(NULL), ncl) mc[[1]] <- as.name("mclustBIC") mc$class <- NULL for(l in 1:ncl) { I <- (class == classLabel[l]) mc[[2]] <- data[I,] mc$G <- G[[l]] mc$modelNames <- as.character(modelNames[[l]]) if(verbose) cat(paste0("Class ", classLabel[l], ": ")) BIC <- eval(mc, parent.frame()) # slightly adjust parameters if none of the models can be fitted while(all(is.na(BIC))) { if(length(mc$modelNames) == 1) { j <- which(mc$modelNames == mclust.options("emModelNames")) if(j == 1) mc$G <- mc$G - 1 else mc$modelNames <- mclust.options("emModelNames")[j-1] } else { mc$G <- mc$G - 1 } BIC <- eval(mc, parent.frame()) } SUMMARY <- summary(BIC, data[I,]) SUMMARY$bic <- BIC; names(SUMMARY)[which(names(SUMMARY) == "bic")] <- "BIC" Models[[l]] <- c(SUMMARY, list(observations = which(I))) } bic <- loglik <- df <- NULL } names(Models) <- classLabel Models$Vinv <- NULL out <- list(call = call, data = data, class = class, type = modelType, n = n, d = p, prop = prop, models = Models, bic = bic, loglik = loglik, df = df) out <- structure(out, prior = prior, control = control, class = "MclustDA") if(modelType == "MclustDA") { l <- logLik.MclustDA(out, data) out$loglik <- as.numeric(l) out$df <- attr(l, "df") out$bic <- 2*out$loglik - log(n)*out$df } return(out) } print.MclustDA <- function(x, ...) { cat("\'", class(x)[1], "\' model object:\n", sep = "") models <- x$models nclass <- length(models) n <- sapply(1:nclass, function(i) models[[i]]$n) M <- sapply(1:nclass, function(i) models[[i]]$modelName) G <- sapply(1:nclass, function(i) models[[i]]$G) out <- data.frame(n = n, Model = M, G = G) rownames(out) <- names(models) out <- as.matrix(out) names(dimnames(out)) <- c("Classes", "") print(out, quote = FALSE, right = TRUE) cat("\n") catwrap("\nAvailable components:\n") print(names(x)) # str(x, max.level = 2, give.attr = FALSE, strict.width = "wrap") invisible(x) } summary.MclustDA <- function(object, parameters = FALSE, newdata, newclass, ...) { # collect info models <- object$models nclass <- length(models) classes <- names(models) n <- sapply(1:nclass, function(i) models[[i]]$n) G <- sapply(1:nclass, function(i) models[[i]]$G) modelName <- sapply(1:nclass, function(i) models[[i]]$modelName) prior <- attr(object, "prior") printParameters <- parameters par <- getParameters.MclustDA(object) class <- object$class data <- object$data pred <- predict(object, newdata = data, ...) 
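## Illustrative usage sketch for MclustDA() and summary.MclustDA(): not part of
## the original source and wrapped in if (FALSE), so it is never run where it
## sits inside this function. It assumes the 'iris' data set shipped with base
## R; the train/test split and object names are hypothetical.
if (FALSE) {
  library(mclust)
  idx  <- sample(nrow(iris), 100)                      # random training subset
  mod  <- MclustDA(iris[idx, 1:4], iris$Species[idx])  # class-specific mixtures
  summary(mod, newdata = iris[-idx, 1:4], newclass = iris$Species[-idx])
  modE <- MclustDA(iris[idx, 1:4], iris$Species[idx], modelType = "EDDA")
  summary(modE, parameters = TRUE)                     # one component per class (EDDA)
}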
err <- mean(class != pred$classification) brier <- BrierScore(pred$z, class) tab <- try(table(class, pred$classification)) if(class(tab) == "try-error") { err <- tab <- NA } else { names(dimnames(tab)) <- c("Class", "Predicted") } tab.newdata <- err.newdata <- brier.newdata <- NULL if(!missing(newdata) & !missing(newclass)) { pred.newdata <- predict(object, newdata = newdata, ...) if(missing(newclass)) { tab.newdata <- table(pred.newdata$classification) names(dimnames(tab.newdata)) <- "Predicted" } else { tab.newdata <- table(newclass, pred.newdata$classification) names(dimnames(tab.newdata)) <- c("Class", "Predicted") err.newdata <- mean(newclass != pred.newdata$classification) brier.newdata <- BrierScore(pred.newdata$z, newclass) } } obj <- list(type = object$type, n = n, d = object$d, loglik = object$loglik, df = object$df, bic = object$bic, nclass = nclass, classes = classes, G = G, modelName = modelName, prop = object$prop, parameters = par, prior = prior, tab = tab, err = err, brier = brier, tab.newdata = tab.newdata, err.newdata = err.newdata, brier.newdata = brier.newdata, printParameters = printParameters) class(obj) <- "summary.MclustDA" return(obj) } print.summary.MclustDA <- function(x, digits = getOption("digits"), ...) { title <- paste("Gaussian finite mixture model for classification") txt <- paste(rep("-", min(nchar(title), getOption("width"))), collapse = "") catwrap(txt) catwrap(title) catwrap(txt) cat("\n") catwrap(paste(x$type, "model summary:")) cat("\n") # tab <- data.frame("log-likelihood" = x$loglik, "n" = sum(x$n), "df" = x$df, "BIC" = x$bic, row.names = "", check.names = FALSE) print(tab, digits = digits) tab <- data.frame("n" = x$n, "%" = round(x$n/sum(x$n)*100,2), "Model" = x$modelName, "G" = x$G, check.names = FALSE, row.names = x$classes) tab <- as.matrix(tab) names(dimnames(tab)) <- c("Classes", "") print(tab, digits = digits, quote = FALSE, right = TRUE) if(!is.null(x$prior)) { cat("\nPrior: ") cat(x$prior$functionName, "(", paste(names(x$prior[-1]), x$prior[-1], sep = " = ", collapse = ", "), ")", sep = "") cat("\n") } if(x$printParameters) { cat("\nClass prior probabilities:\n") print(x$prop, digits = digits) for(i in seq(x$nclass)) { cat("\nClass = ", x$class[i], "\n", sep = "") par <- x$parameters[[i]] if(x$type == "MclustDA") { cat("\nMixing probabilities: ") cat(round(par$pro, digits = digits), "\n") } cat("\nMeans:\n") print(par$mean, digits = digits) cat("\nVariances:\n") if(x$d > 1) { for(g in seq(x$G[i])) { cat("[,,", g, "]\n", sep = "") print(par$variance[,,g], digits = digits) } } else print(par$variance, digits = digits) } } cat("\nTraining confusion matrix:\n") print(x$tab) cat("Classification error =", round(x$err, min(digits,4)), "\n") cat("Brier score =", round(x$brier, min(digits,4)), "\n") if(!is.null(x$tab.newdata)) { cat("\nTest confusion matrix:\n") print(x$tab.newdata) if(!is.null(x$err.newdata)) { cat("Classification error =", round(x$err.newdata, min(digits,4)), "\n") cat("Brier score =", round(x$brier.newdata, min(digits,4)), "\n") } } invisible(x) } getParameters.MclustDA <- function(object) { # collect info models <- object$models nclass <- length(models) classes <- names(models) n <- sapply(1:nclass, function(i) models[[i]]$n) G <- sapply(1:nclass, function(i) models[[i]]$G) modelName <- sapply(1:nclass, function(i) models[[i]]$modelName) # prior <- attr(object, "prior") par <- vector(mode = "list", length = nclass) for(i in seq(nclass)) { par[[i]] <- models[[i]]$parameters if(is.null(par[[i]]$pro)) par$pro <- 1 
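## Illustrative sketch of logLik.MclustDA() and the class-prior adjustment in
## predict.MclustDA(), both defined in this file: not part of the original
## source, guarded by if (FALSE). The 'iris' data set ships with base R.
if (FALSE) {
  mod <- MclustDA(iris[, 1:4], iris$Species)
  logLik(mod)                                     # log-likelihood, with df and nobs attributes
  pred <- predict(mod, newdata = iris[, 1:4], prop = c(1, 1, 1)/3)  # equal class priors
  table(iris$Species, pred$classification)        # confusion matrix
}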
if(par[[i]]$variance$d < 2) { sigma <- rep(par[[i]]$variance$sigma, models[[i]]$G)[1:models[[i]]$G] names(sigma) <- names(par[[i]]$mean) par[[i]]$variance$sigma <- sigma } par[[i]]$variance <- par[[i]]$variance$sigma } return(par) } logLik.MclustDA <- function (object, data, ...) { if(missing(data)) data <- object$data n <- object$n d <- object$d par <- getParameters.MclustDA(object) nclass <- length(par) fclass <- sapply(object$models, function(m) m$n)/n logfclass <- log(fclass) G <- sapply(par, function(x) length(x$pro)) if(object$type == "EDDA") { df <- d * nclass + nVarParams(object$models[[1]]$modelName, d = d, G = nclass) } else { df <- sum(sapply(object$models, function(mod) with(mod, (G - 1) + G * d + nVarParams(modelName, d = d, G = G)))) } # ll <- sapply(object$models, function(mod) # { do.call("dens", c(list(data = data, logarithm = FALSE), mod)) }) # l <- sum(log(apply(ll, 1, function(l) sum(fclass*l)))) ll <- sapply(object$models, function(mod) { do.call("dens", c(list(data = data, logarithm = TRUE), mod)) }) l <- sum(apply(ll, 1, function(l) logsumexp(logfclass+l))) attr(l, "nobs") <- n attr(l, "df") <- df class(l) <- "logLik" return(l) } predict.MclustDA <- function(object, newdata, prop = object$prop, ...) { if(!inherits(object, "MclustDA")) stop("object not of class \"MclustDA\"") models <- object$models nclass <- length(models) classNames <- levels(object$class) n <- sapply(1:nclass, function(i) models[[i]]$n) if(missing(newdata)) { newdata <- object$data } if(object$d == 1) newdata <- as.vector(newdata) if(is.numeric(prop)) { if(any(prop < 0)) stop("'prop' must be nonnegative") if(length(prop) != nclass) stop("'prop' is of incorrect length") prop <- prop/sum(prop) } else { prop <- n/sum(n) } # class density computed on log scale densfun <- function(mod, data) { do.call("dens", c(list(data = data, logarithm = TRUE), mod)) } # z <- as.matrix(data.frame(lapply(models, densfun, data = newdata))) z <- sweep(z, MARGIN = 2, FUN = "+", STATS = log(prop)) z <- sweep(z, MARGIN = 1, FUN = "-", STATS = apply(z, 1, logsumexp)) z <- exp(z) colnames(z) <- classNames cl <- apply(z, 1, which.max) class <- factor(classNames[cl], levels = classNames) # out <- list(classification = class, z = z) return(out) } plot.MclustDA <- function(x, what = c("scatterplot", "classification", "train&test", "error"), newdata, newclass, dimens = NULL, symbols, colors, main = NULL, ...) { object <- x # Argh. 
Really want to use object anyway if(!inherits(object, "MclustDA")) stop("object not of class \"MclustDA\"") data <- object$data if(object$d > 1) dataNames <- colnames(data) else dataNames <- deparse(object$call$data) n <- nrow(data) p <- ncol(data) dimens <- if(is.null(dimens)) seq(p) else dimens[dimens <= p] d <- length(dimens) if(d == 0) { warning("dimens larger than data dimensionality...") return(invisible()) } if(missing(newdata)) { newdata <- matrix(as.double(NA), 0, p) } else { newdata <- as.matrix(newdata) } if(ncol(newdata) != p) stop("incompatible newdata dimensionality") if(missing(newclass)) { newclass <- vector(length = 0) } else { if(nrow(newdata) != length(newclass)) stop("incompatible newdata and newclass") } if(object$d > 1) newdataNames <- colnames(newdata) else newdataNames <- deparse(match.call()$newdata) what <- match.arg(what, several.ok = TRUE) models <- object$models M <- length(models) if(missing(dimens)) dimens <- 1:p trainClass <- object$class nclass <- length(unique(trainClass)) Data <- rbind(data, newdata) predClass <- predict(object, Data)$classification if(missing(symbols)) { if(M <= length(mclust.options("classPlotSymbols"))) { symbols <- mclust.options("classPlotSymbols") } else if(M <= 26) { symbols <- LETTERS } } if(length(symbols) == 1) symbols <- rep(symbols,M) if(length(symbols) < M & !any(what == "train&test")) { warning("more symbols needed to show classification") symbols <- rep(16, M) } if(missing(colors)) { colors <- mclust.options("classPlotColors") } if(length(colors) == 1) colors <- rep(colors,M) if(length(colors) < M & !any(what == "train&test")) { warning("more colors needed to show classification") colors <- rep("black", M) } oldpar <- par(no.readonly = TRUE) plot.MclustDA.scatterplot <- function(...) { if(d == 1) { mclust1Dplot(data = if(nrow(newdata) == 0) data[,dimens[1],drop=FALSE] else newdata[,dimens[1],drop=FALSE], what = "classification", classification = if(nrow(newdata) == 0) trainClass else newclass, xlab = if(nrow(newdata) == 0) dataNames[dimens] else newdataNames[dimens], ylab = "Classes", main = NULL, ...) if(!is.null(main)) title(if(is.character(main)) main else if(as.logical(main)) if(nrow(newdata) == 0) "Training data" else "Test data" else NULL, cex.main = oldpar$cex.lab) } scatellipses <- function(data, dimens, nclass, symbols, colors, ...) { m <- lapply(models, function(m) { m$parameters$mean <- array(m$parameters$mean[dimens,], c(2,m$G)) m$parameters$variance$sigma <- array(m$parameters$variance$sigma[dimens,dimens,], c(2,2,m$G)) m }) plot(data[,dimens], type = "n", ...) for(l in 1:nclass) { I <- m[[l]]$observations points(data[I,dimens[1]], data[I,dimens[2]], pch = symbols[l], col = colors[l]) for(g in 1:(m[[l]]$G)) { mvn2plot(mu = m[[l]]$parameters$mean[,g], sigma = m[[l]]$parameters$variance$sigma[,,g], k = 15, fillEllipse = mclust.options("fillEllipses"), col = if(mclust.options("fillEllipses")) colors[l] else rep("grey30",3)) } } } if(d == 2) { scatellipses(if(nrow(newdata) == 0) data else newdata, dimens = dimens[1:2], nclass = nclass, symbols = symbols, colors = colors, xlab = if(nrow(newdata) == 0) dataNames[dimens[1]] else newdataNames[dimens[1]], ylab = if(nrow(newdata) == 0) dataNames[dimens[2]] else newdataNames[dimens[2]], ...) 
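## Illustrative sketch of the plotting methods handled below: not part of the
## original source, guarded by if (FALSE) so it does not run inside this
## function. The 'iris' data set ships with base R; object names are
## hypothetical.
if (FALSE) {
  mod <- MclustDA(iris[, 1:4], iris$Species, modelType = "EDDA")
  plot(mod, what = "scatterplot", dimens = c(1, 2))  # pairwise plot with class ellipses
  plot(mod, what = "classification")                 # predicted classification
}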
if(!is.null(main)) title(if(is.character(main)) main else if(as.logical(main)) if(nrow(newdata) == 0) "Training data" else "Test data" else NULL, cex.main = oldpar$cex.lab) } if(d > 2) { on.exit(par(oldpar)) par(mfrow = c(d, d), mar = rep(0.2/2,4), oma = rep(4,4)+c(0,0,1*(!is.null(main)),0)) for(i in seq(d)) { for(j in seq(d)) { if(i == j) { plot(if(nrow(newdata) == 0) data[,dimens[c(i,j)]] else newdata[,dimens[c(i,j)]], type="n", xlab = "", ylab = "", axes=FALSE) text(mean(par("usr")[1:2]), mean(par("usr")[3:4]), labels = if(nrow(newdata) == 0) dataNames[dimens][i] else newdataNames[dimens][i], cex = 1.5, adj = 0.5) box() } else { scatellipses(if(nrow(newdata) == 0) data else newdata, dimens = dimens[c(j,i)], nclass = nclass, symbols = symbols, colors = colors, xaxt = "n", yaxt = "n") } if(i == 1 && (!(j%%2))) axis(3) if(i == d && (j%%2)) axis(1) if(j == 1 && (!(i%%2))) axis(2) if(j == d && (i%%2)) axis(4) } } if(!is.null(main)) title(if(is.character(main)) main else if(as.logical(main)) if(nrow(newdata) == 0) "Training data" else "Test data" else NULL, cex.main = 1.2*oldpar$cex.main, outer = TRUE, line = 3) } } plot.MclustDA.classification <- function(...) { if(nrow(newdata) == 0 && d == 1) { mclust1Dplot(data = data[,dimens[1],drop=FALSE], what = "classification", classification = predClass[1:n], colors = colors[1:nclass], xlab = dataNames[dimens], main = FALSE, ...) if(!is.null(main)) title(if(is.character(main)) main else if(as.logical(main)) "Training data classification" else NULL, cex.main = oldpar$cex.lab) } if(nrow(newdata) == 0 && d == 2) { coordProj(data = data[,dimens], what = "classification", classification = predClass[1:n], main = FALSE, colors = colors[1:nclass], symbols = symbols[1:nclass], ...) if(!is.null(main)) title(if(is.character(main)) main else if(as.logical(main)) "Training data classification" else NULL, cex.main = oldpar$cex.lab) } if(nrow(newdata) == 0 && d > 2) { clPairs(data[,dimens], classification = predClass[1:n], colors = colors[1:nclass], symbols = symbols[1:nclass], cex.labels = 1.5, main = if(!is.null(main)) if(is.character(main)) main else if(as.logical(main)) "Training data classification" else NULL, cex.main = oldpar$cex.lab) } if(nrow(newdata) > 0 && d == 1) { mclust1Dplot(data = newdata[,dimens], what = "classification", classification = predClass[-(1:n)], xlab = newdataNames[dimens], main = FALSE, ...) if(!is.null(main)) title(if(is.character(main)) main else if(as.logical(main)) "Test data classification" else NULL, cex.main = oldpar$cex.lab) } if(nrow(newdata) > 0 && d == 2) { coordProj(data = newdata[,dimens], what ="classification", classification = predClass[-(1:n)], colors = colors[1:nclass], symbols = symbols[1:nclass], main = FALSE, ...) if(!is.null(main)) title(if(is.character(main)) main else if(as.logical(main)) "Test data classification" else NULL, cex.main = oldpar$cex.lab) } if(nrow(newdata) > 0 & length(dimens) > 2) { on.exit(par(oldpar)) # par(oma = c(0,0,10,0)) clPairs(data = newdata[,dimens], classification = predClass[-(1:n)], colors = colors[1:nclass], symbols = symbols[1:nclass], cex.labels = 1.5, main = if(!is.null(main)) if(is.character(main)) main else if(as.logical(main)) "Test data classification" else NULL, cex.main = oldpar$cex.lab) } } plot.MclustDA.traintest <- function(...) 
{ cl <- factor(rep(c("Train","Test"), times = c(nrow(data), nrow(newdata))), levels = c("Train", "Test")) if(d == 1) { mclust1Dplot(data = Data[,dimens], what = "classification", classification = cl, xlab = dataNames[dimens], ylab = "", colors = c("grey20", "grey80"), main = FALSE, ...) if(!is.null(main)) title(if(is.character(main)) main else if(as.logical(main)) "Training and Test data" else NULL, cex.main = oldpar$cex.lab) } if(d == 2) { coordProj(Data, dimens = dimens[1:2], what = "classification", classification = cl, CEX = 0.8, symbols = c(19,3), colors = c("grey80", "grey20"), main = FALSE, ...) if(!is.null(main)) title(if(is.character(main)) main else if(as.logical(main)) "Training (o) and Test (+) data" else NULL, cex.main = oldpar$cex.lab) } if(d > 2) { clPairs(Data[,dimens], classification = cl, symbols = c(19,3), colors = c("grey80", "grey20"), main = if(!is.null(main)) if(is.character(main)) main else if(as.logical(main)) "Training (o) and Test (+) data" else NULL, cex.main = oldpar$cex.lab) } } plot.MclustDA.error <- function(...) { if(nrow(newdata) == 0 && d == 1) { mclust1Dplot(data = data[,dimens], what = "error", classification = predClass[1:n], truth = trainClass, xlab = dataNames[dimens], main = FALSE, ...) if(!is.null(main)) title(if(is.character(main)) main else if(as.logical(main)) "Training data error" else NULL, cex.main = oldpar$cex.lab) } if(nrow(newdata) == 0 && d == 2) { coordProj(data = data[,dimens[1:2]], what = "error", classification = predClass[1:n], truth = trainClass, main = FALSE, ...) if(!is.null(main)) title(if(is.character(main)) main else if(as.logical(main)) "Training data error" else NULL, cex.main = oldpar$cex.lab) } if(nrow(newdata) == 0 && d > 2) { on.exit(par(oldpar)) par(mfrow = c(d, d), mar = rep(0.2/2,4), oma = rep(4,4)+c(0,0,1*(!is.null(main)),0)) for(i in seq(d)) { for(j in seq(d)) { if(i == j) { plot(data[,dimens[c(i,j)]], type="n", xlab = "", ylab = "", axes=FALSE) text(mean(par("usr")[1:2]), mean(par("usr")[3:4]), dataNames[dimens][i], cex = 1.5, adj = 0.5) box() } else { coordProj(data = data[,dimens[c(j,i)]], what = "error", classification = predClass[1:n], truth = trainClass, main = FALSE, xaxt = "n", yaxt = "n") } if(i == 1 && (!(j%%2))) axis(3) if(i == d && (j%%2)) axis(1) if(j == 1 && (!(i%%2))) axis(2) if(j == d && (i%%2)) axis(4) } } if(!is.null(main)) title(if(is.character(main)) main else if(as.logical(main)) "Training data error" else NULL, cex.main = 1.2*oldpar$cex.main, outer = TRUE, line = 3) } if(nrow(newdata) > 0 && d == 1) { mclust1Dplot(data = newdata[,dimens], what = "error", classification = predClass[-(1:n)], truth = newclass, xlab = newdataNames[dimens], main = FALSE, ...) if(!is.null(main)) title(if(is.character(main)) main else if(as.logical(main)) "Test data error" else NULL, cex.main = oldpar$cex.lab) } if(nrow(newdata) > 0 && d == 2) { coordProj(data = newdata[,dimens[1:2]], what = "error", classification = predClass[-(1:n)], truth = newclass, main = FALSE, ...) 
if(!is.null(main)) title(if(is.character(main)) main else if(as.logical(main)) "Test data error" else NULL, cex.main = oldpar$cex.lab) } if(nrow(newdata) > 0 && d > 2) { on.exit(par(oldpar)) par(mfrow = c(d, d), mar = rep(0.2/2,4), oma = rep(4,4)+c(0,0,1*(!is.null(main)),0)) for(i in seq(d)) { for(j in seq(d)) { if(i == j) { plot(newdata[,dimens[c(i,j)]], type="n", xlab = "", ylab = "", axes=FALSE) text(mean(par("usr")[1:2]), mean(par("usr")[3:4]), newdataNames[dimens][i], cex = 1.5, adj = 0.5) box() } else { coordProj(data = newdata[,dimens[c(j,i)]], what = "error", classification = predClass[-(1:n)], truth = newclass, main = FALSE, xaxt = "n", yaxt = "n") } if(i == 1 && (!(j%%2))) axis(3) if(i == d && (j%%2)) axis(1) if(j == 1 && (!(i%%2))) axis(2) if(j == d && (i%%2)) axis(4) } } if(!is.null(main)) title(if(is.character(main)) main else if(as.logical(main)) "Test data error" else NULL, cex.main = 1.2*oldpar$cex.main, outer = TRUE, line = 3) } } if(interactive() & length(what) > 1) { title <- "Model-based discriminant analysis plots:" # present menu waiting user choice choice <- menu(what, graphics = FALSE, title = title) while(choice != 0) { if(what[choice] == "scatterplot") plot.MclustDA.scatterplot(...) if(what[choice] == "classification") plot.MclustDA.classification(...) if(what[choice] == "train&test") plot.MclustDA.traintest(...) if(what[choice] == "error") plot.MclustDA.error(...) # re-present menu waiting user choice choice <- menu(what, graphics = FALSE, title = title) } } else { if(any(what == "scatterplot")) plot.MclustDA.scatterplot(...) if(any(what == "classification")) plot.MclustDA.classification(...) if(any(what == "train&test")) plot.MclustDA.traintest(...) if(any(what == "error")) plot.MclustDA.error(...) } invisible() } cvMclustDA <- function(object, nfold = 10, metric = c("error", "brier"), prop = object$prop, verbose = interactive(), ...) 
{ call <- object$call nfold <- as.numeric(nfold) metric <- match.arg(metric, choices = eval(formals(cvMclustDA)$metric), several.ok = FALSE) # data <- object$data class <- as.factor(object$class) n <- length(class) G <- lapply(object$models, function(mod) mod$G) modelName <- lapply(object$models, function(mod) mod$modelName) # ce <- function(pred, class) { 1 - sum(class == pred, na.rm = TRUE)/length(class) } # folds <- if(nfold == n) lapply(1:n, function(x) x) else balanced.folds(class, nfolds = nfold) nfold <- length(folds) folds.size <- sapply(folds, length) # cvmetric <- rep(NA, nfold) cvclass <- factor(rep(NA, n), levels = levels(class)) cvprob <- matrix(as.double(NA), nrow = n, ncol = nlevels(class), dimnames = list(NULL, levels(class))) if(verbose) { cat("cross-validating ...\n") flush.console() pbar <- txtProgressBar(min = 0, max = nfold, style = 3) on.exit(close(pbar)) } for(i in seq(nfold)) { x <- data[-folds[[i]],,drop=FALSE] y <- class[-folds[[i]]] call$data <- x call$class <- y call$G <- G call$modelNames <- modelName call$verbose <- FALSE mod <- eval(call, parent.frame()) # predTest <- predict(mod, data[folds[[i]],,drop=FALSE], prop = prop) cvmetric[i] <- if(metric == "error") ce(predTest$classification, class[folds[[i]]]) else BrierScore(predTest$z, class[folds[[i]]]) cvclass[folds[[i]]] <- predTest$classification cvprob[folds[[i]],] <- predTest$z # if(verbose) setTxtProgressBar(pbar, i) } # cv <- sum(cvmetric*folds.size)/sum(folds.size) se <- sqrt(var(cvmetric)/nfold) # out <- list(classification = cvclass, z = cvprob, error = if(metric == "error") cv else NA, brier = if(metric == "brier") cv else NA, se = se) return(out) } balanced.folds <- function(y, nfolds = min(min(table(y)), 10)) { # Create 'nfolds' balanced folds conditional on grouping variable 'y'. # Function useful in evaluating a classifier by balanced cross-validation. # Returns a list with 'nfolds' elements containing indexes of each fold. # # From package 'pamr' by T. Hastie, R. Tibshirani, Balasubramanian # Narasimhan, Gil Chu. 
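## Illustrative sketch of cvMclustDA() defined here: not part of the original
## source, guarded by if (FALSE) so it has no effect at this point. The 'iris'
## data set ships with base R.
if (FALSE) {
  mod <- MclustDA(iris[, 1:4], iris$Species, modelType = "EDDA")
  cv  <- cvMclustDA(mod, nfold = 10)             # 10-fold cross-validation
  cv$error                                       # cross-validated error rate
  cv$se                                          # its standard error
  cvMclustDA(mod, nfold = 10, metric = "brier")  # Brier score instead of error
}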
totals <- table(y) fmax <- max(totals) nfolds <- min(nfolds, fmax) # makes no sense to have more folds than the max class size folds <- as.list(seq(nfolds)) yids <- split(seq(y), y) # nice we to get the ids in a list, split by class ### create a big matrix, with enough rows to get in all the folds per class bigmat <- matrix(as.double(NA), ceiling(fmax/nfolds) * nfolds, length(totals)) for(i in seq(totals)) # { bigmat[seq(totals[i]), i] <- sample(yids[[i]]) } # Luca: this version has a bug if a class has only 1 obs { if (totals[i]==1) bigmat[seq(totals[i]), i] <- yids[[i]] else bigmat[seq(totals[i]), i] <- sample(yids[[i]]) } smallmat <- matrix(bigmat, nrow = nfolds) # reshape the matrix ### Now do a clever sort to mix up the NAs smallmat <- permute.rows(t(smallmat)) res <-vector("list", nfolds) for(j in 1:nfolds) { jj <- !is.na(smallmat[, j]) res[[j]] <- smallmat[jj, j] } return(res) } permute.rows <- function(x) { dd <- dim(x) n <- dd[1] p <- dd[2] mm <- runif(length(x)) + rep(seq(n) * 10, rep(p, n)) matrix(t(x)[order(mm)], n, p, byrow = TRUE) } # Deprecated functions cv1EMtrain <- function(data, labels, modelNames=NULL) { .Deprecated("cvMclustDA", package = "mclust") z <- unmap(as.numeric(labels)) G <- ncol(z) dimDataset <- dim(data) oneD <- is.null(dimDataset) || length(dimDataset[dimDataset > 1]) == 1 if (oneD || length(dimDataset) != 2) { if (is.null(modelNames)) modelNames <- c("E", "V") if (any(!match(modelNames, c("E", "V"), nomatch = 0))) stop("modelNames E or V for one-dimensional data") n <- length(data) cv <- matrix(1, nrow = n, ncol = length(modelNames)) dimnames(cv) <- list(NULL, modelNames) for (m in modelNames) { for (i in 1:n) { mStep <- mstep(modelName = m, data = data[-i], z = z[-i,], warn = FALSE) eStep <- do.call("estep", c(mStep, list(data = data[i], warn = FALSE))) if (is.null(attr(eStep, "warn"))) { k <- (1:G)[eStep$z == max(eStep$z)] l <- (1:G)[z[i,] == max(z[i,])] cv[i, m] <- as.numeric(!any(k == l)) } } } } else { if (is.null(modelNames)) modelNames <- mclust.options("emModelNames") n <- nrow(data) cv <- matrix(1, nrow = n, ncol = length(modelNames)) dimnames(cv) <- list(NULL, modelNames) for (m in modelNames) { for (i in 1:n) { mStep <- mstep(modelName = m, data = data[-i,], z = z[-i,], warn = FALSE) eStep <- do.call("estep", c(mStep, list(data = data[i, , drop = FALSE], warn = FALSE))) if (is.null(attr(eStep, "warn"))) { k <- (1:G)[eStep$z == max(eStep$z)] l <- (1:G)[z[i,] == max(z[i,])] cv[i, m] <- as.numeric(!any(k == l)) } } } } errorRate <- apply(cv, 2, sum) errorRate/n } bicEMtrain <- function(data, labels, modelNames=NULL) { .Deprecated("MclustDA", package = "mclust") z <- unmap(as.numeric(labels)) G <- ncol(z) dimData <- dim(data) oneD <- is.null(dimData) || length(dimData[dimData > 1]) == 1 if (oneD || length(dimData) != 2) { if (is.null(modelNames)) modelNames <- c("E", "V") if (any(!match(modelNames, c("E", "V"), nomatch = 0))) stop("modelNames E or V for one-dimensional data") } else { if (is.null(modelNames)) modelNames <- mclust.options("emModelNames") } BIC <- rep(NA, length(modelNames)) names(BIC) <- modelNames for (m in modelNames) { mStep <- mstep(modelName = m, data = data, z = z, warn = FALSE) eStep <- do.call("estep", c(mStep, list(data=data, warn=FALSE))) if (is.null(attr(eStep, "warn"))) BIC[m] <- do.call("bic", eStep) } BIC } cv.MclustDA <- function(...) { .Deprecated("cvMclustDA", package = "mclust") cvMclustDA(...) 
} # "[.mclustDAtest" <- function (x, i, j, drop = FALSE) # { # clx <- oldClass(x) # oldClass(x) <- NULL # NextMethod("[") # } classPriorProbs <- function(object, newdata = object$data, itmax = 1e3, eps = sqrt(.Machine$double.eps)) { if(!inherits(object, "MclustDA")) stop("object not of class \"MclustDA\"") z <- predict(object, newdata = newdata)$z prop <- object$prop p <- colMeans(z) p0 <- p+1 it <- 0 while(max(abs(p-p0)/abs(p)) > eps & it < itmax) { it <- it+1 p0 <- p # z_upd <- t(apply(z, 1, function(z) { z <- z*p/prop; z/sum(z) })) z_upd <- sweep(z, 2, FUN = "*", STATS = p/prop) z_upd <- sweep(z_upd, MARGIN = 1, FUN = "/", STATS = rowSums(z_upd)) p <- colMeans(z_upd) } return(p) } mclust/R/icl.R0000644000176200001440000000625013417640466012672 0ustar liggesusers## ## Integrated Complete-data Likelihood (ICL) Criterion ## icl <- function(object, ...) UseMethod("icl") icl.Mclust <- function(object, ...) { n <- object$n # G <- object$G + ifelse(is.na(object$hypvol),0,1) z <- object$z if(is.null(z)) z <- matrix(1, nrow = n, ncol = 1) C <- matrix(0, n, ncol(z)) for(i in 1:n) C[i, which.max(z[i,])] <- 1 object$bic + 2*sum(C * ifelse(z > 0, log(z), 0)) } icl.MclustDA <- function(object, ...) { n <- object$n z <- predict(object)$z df <- object$df if(is.null(z)) z <- matrix(1, nrow = n, ncol = 1) C <- matrix(0, n, ncol(z)) for(i in 1:n) C[i, which.max(z[i,])] <- 1 object$bic + 2*sum(C * ifelse(z > 0, log(z), 0)) } mclustICL <- function(data, G = NULL, modelNames = NULL, initialization = list(hcPairs=NULL, subset=NULL, noise=NULL), x = NULL, ...) { call <- match.call() data <- data.matrix(data) n <- nrow(data) d <- ncol(data) mc <- match.call(expand.dots = TRUE) mc[[1]] <- as.name("mclustBIC") mc[[2]] <- data BIC <- eval(mc, parent.frame()) class(BIC) <- "mclustBIC" G <- attr(BIC, "G") modelNames <- attr(BIC, "modelNames") ICL <- matrix(NA, nrow = length(G), ncol = length(modelNames)) mostattributes(ICL) <- attributes(BIC) if(!is.null(x)) { r <- match(as.character(G), rownames(x), nomatch = 0) c <- match(modelNames, colnames(x), nomatch = 0) ICL[r,c] <- BIC[r,c] } for(i in 1:nrow(ICL)) { for(j in 1:ncol(ICL)) { if(is.na(BIC[i,j])) next() # not fitted if(!is.na(ICL[i,j])) next() # already available Sumry <- summary(BIC, data, G = G[i], modelNames = modelNames[j]) ICL[i,j] <- icl.Mclust(Sumry) } } class(ICL) <- "mclustICL" attr(ICL, "criterion") <- "ICL" return(ICL) } print.mclustICL <- function (x, pick = 3, ...) { subset <- !is.null(attr(x, "subset")) oldClass(x) <- attr(x, "args") <- NULL attr(x, "criterion") <- NULL attr(x, "control") <- attr(x, "initialization") <- NULL attr(x, "oneD") <- attr(x, "warn") <- attr(x, "Vinv") <- NULL attr(x, "prior") <- attr(x, "G") <- attr(x, "modelNames") <- NULL ret <- attr(x, "returnCodes") == -3 n <- attr(x, "n") d <- attr(x, "d") attr(x, "returnCodes") <- attr(x, "n") <- attr(x, "d") <- NULL oldClass(x) <- attr(x, "args") <- attr(x, "criterion") <- NULL catwrap("Integrated Complete-data Likelihood (ICL) criterion:") print(x, ...) cat("\n") catwrap(paste("Top", pick, "models based on the ICL criterion:")) print(pickBIC(x, pick), ...) invisible() } summary.mclustICL <- function(object, G, modelNames, ...) { if(!missing(G)) object <- object[rownames(object) %in% G,,drop=FALSE] if(!missing(modelNames)) object <- object[,colnames(object) %in% modelNames,drop=FALSE] structure(pickBIC(object, ...), class = "summary.mclustICL") } print.summary.mclustICL <- function(x, digits = getOption("digits"), ...) 
{ cat("Best ICL values:\n") x <- drop(as.matrix(x)) x <- rbind(ICL = x, "ICL diff" = x - max(x)) print(x, digits = digits) invisible() } plot.mclustICL <- function(x, ylab = "ICL", ...) { plot.mclustBIC(x, ylab = ylab, ...) } mclust/R/weights.R0000644000176200001440000000336512542512526013572 0ustar liggesusers############################################################################### ## Weights for MCLUST ## ## Written by Thomas Brendan Murphy ## Bugs fix by Luca Scrucca ############################################################################# me.weighted <- function(modelName, data, z, weights = NULL, prior = NULL, control = emControl(), Vinv = NULL, warn = NULL, ...) { data <- as.matrix(data) N <- nrow(data) if(is.null(warn)) warn <- mclust.options("warn") if(is.null(weights)) { weights <- rep(1,N) } if(any(weights<0)|any(!is.finite(weights))) { stop("Weights must be positive and finite") } if(!is.vector(weights)) { stop("Weights must be a vector") } if(max(weights)>1) { if(warn) warning("Weights rescaled to have maximum equal to 1") weights <- weights/max(weights) } zw <- z*weights llold <- -Inf eps <- .Machine$double.eps criterion <- TRUE iter <- 0 while(criterion) { iter <- iter+1 fit.m <- do.call("mstep",list(data=data, z=zw, modelName=modelName, prior=prior, control=control, Vinv=Vinv, warn=warn)) fit.m$parameters$pro <- fit.m$parameters$pro/mean(weights) fit.e <- do.call("estep", c(list(data=data, control=control, Vinv=Vinv, warn=warn), fit.m)) zw <- pmax(fit.e$z*weights, eps) criterion <- criterion & (iter < control$itmax[1]) ldens <- do.call("dens", c(list(data=data, logarithm=TRUE, warn=warn), fit.m)) ll <- sum(weights*ldens) criterion <- criterion & (ll-llold > control$tol[1]) llold <- ll } fit <- fit.m fit$z <- fit.e$z fit$weights <- weights fit$loglik <- ll fit } mclust/R/clustCombi.R0000644000176200001440000004610113456651211014217 0ustar liggesusersclustCombi <- function(object = NULL, data = NULL, ...) { if(is.null(object) & is.null(data)) stop("An object or class 'Mclust' or data as matrix/data.frame must be provided!") if(is.null(object)) { object <- Mclust(data, ...) 
} else { if(!inherits(object, "Mclust")) stop("object not of class \"Mclust\"") data <- object$data } combiRes <- combi(data, object) return(combiRes) } combMat <- function(K,l1,l2) { l=c(min(l1,l2), max(l1,l2)) if(any(length(l1) == 0, length(l2) == 0)){ l1 = numeric(0) l2 = l[2]} else { l1 = l[1] l2 = l[2]} M <- rbind(cbind(diag(l2-1), matrix(rep(0,(K-l2+1)*(l2-1)), nrow=l2-1, ncol=K-l2+1)), cbind(matrix(rep(0,l2*(K-l2)), nrow=K-l2, ncol=l2), diag(K-l2))) M[l1,l2] <- 1 return(M) } ## Define xlog to handle x*log(x) as x=0 xlog <- function(x) { xlog1d <- function (xi) if (xi == 0) 0 else (xi*log(xi)) if (is.null(dim(x))) { return(sapply(x,xlog1d)) } else { return(matrix(sapply(x,xlog1d),dim(x))) } } combi <- function(data, MclustOutput, n = nrow(data), d = ncol(data)) { combiM <- list() combiM[[MclustOutput$G]] <- diag(MclustOutput$G) tau <- list() tau[[MclustOutput$G]] = MclustOutput$z classif <- list() classif[[MclustOutput$G]] = map(tau[[MclustOutput$G]]) for (K in MclustOutput$G:2) { dEnt <- matrix(0,nrow=K-1, ncol=K) preCombiTau <- tau[[K]] for (l1 in 1:(K-1)) { for (l2 in (l1+1):K) { postCombiTau <- t(combMat(K,l1,l2) %*% t(preCombiTau)) dEnt[l1,l2] <- sum(xlog(postCombiTau[,l1])) - sum(xlog(preCombiTau[,l1])+xlog(preCombiTau[,l2])) } } l1=which(dEnt==max(dEnt),arr.ind=TRUE)[1] l2=which(dEnt==max(dEnt),arr.ind=TRUE)[2] combiM[[K-1]] <- combMat(K,l1,l2) tau[[K-1]] = t(combiM[[K-1]] %*% t(tau[[K]])) classif[[K-1]] = map(tau[[K-1]]) } output <- list(classification = classif, combiM = combiM, combiz = tau, MclustOutput = MclustOutput) class(output) <- "clustCombi" return(output) } plot.clustCombi <- function(x, what = c("classification", "entropy", "tree"), ...) { object <- x # Argh. Really want to use object anyway if(!inherits(object, "clustCombi")) stop("object not of class \"clustCombi\"") data <- object$MclustOutput$data what <- match.arg(what, several.ok = TRUE) oldpar <- par(no.readonly = TRUE) on.exit(par(oldpar)) plot.clustCombi.classification <- function(...) { # Sort z columns so that one of the two combined column is the last one at # each step (prevents the colors and symbols to be mixed as K -> K-1) curr <- 1:object$MclustOutput$G i <- numeric() j <- numeric() for(K in (object$MclustOutput$G):2) { l1 <- which(!object$combiM[[K-1]] %*% rep(1,K) == 1) l2 <- (object$combiM[[K-1]] %*% curr)[l1] - curr[l1] i <- c(curr[l1],i) j <- c(l2,j) curr <- object$combiM[[K-1]] %*% curr - l2*c(rep(0,(l1-1)),1,rep(0,(K-1-l1))) } permutMat <- function(j,K) { M <- diag(K) M[j,j] <- 0 M[K,K] <- 0 M[j,K] <- 1 M[K,j] <- 1 return(M) } combiM <- diag(object$MclustOutput$G) j <- c(1,j) i <- c(0,i) permutz <- object$MclustOutput$z[,j] par(ask=TRUE) for(K in object$MclustOutput$G:1) { curr_title <- if(K == object$MclustOutput$G) paste0("BIC solution (", as.character(K), " clusters)") else paste0("Combined solution with ", as.character(K), " clusters") if(ncol(as.matrix(data)) > 2) { par(oma = c(0,0,2,0), mar = { mar <- oldpar$mar; mar[3] <- 0.1; mar }) } else { par(mar = { mar <- oldpar$mar; mar[3] <- 2.1; mar }) } combiPlot(data = data, z = permutz, combiM = combiM, ...) 
if(ncol(as.matrix(data)) > 2) { title(curr_title, outer = TRUE, cex.main = 1) } else { title(curr_title, cex.main = 1) } combiM <- combMat(K,which(j==i[K]),K) %*% combiM } par(ask=FALSE) } if(interactive() & length(what) > 1) { title <- "Combined clusterings plots:" # present menu waiting user choice choice <- menu(what, graphics = FALSE, title = title) while(choice != 0) { if(what[choice] == "classification") plot.clustCombi.classification(...) if(what[choice] == "entropy") entPlot(z = object$MclustOutput$z, combiM = object$combiM, ...) if(what[choice] == "tree") combiTree(object, ...) # re-present menu waiting user choice choice <- menu(what, graphics = FALSE, title = title) } } else { if(any(what == "classification")) plot.clustCombi.classification(...) if(any(what == "entropy")) entPlot(z = object$MclustOutput$z, combiM = object$combiM, ...) if(any(what == "tree")) combiTree(object, ...) } invisible() } combiPlot <- function(data, z, combiM, ...) { p <- ncol(as.matrix(data)) if (p > 2) { clPairs(data[,1:min(5,p)], classification = map(t(combiM %*% t(z))), ...) } else if (p == 2) { mclust2Dplot(data = data, parameters = NULL, classification = map(t(combiM %*% t(z))), what = "classification", ...) } else { mclust1Dplot(data = as.matrix(data), parameters = NULL, classification = map(t(combiM %*% t(z))), what = "classification", ...) } } entPlot <- function(z, combiM, abc = c("standard", "normalized"), reg = 2, ...) { oldpar <- par(no.readonly = TRUE) on.exit(par(oldpar)) if(length(abc) > 1) par(ask=TRUE) ent <- numeric() Kmax <- ncol(z) z0 <- z for(K in Kmax:1) { z0 <- t(combiM[[K]] %*% t(z0)) ent[K] <- -sum(xlog(z0)) } if(any(abc == "normalized")) { mergedn <- numeric() z0 <- z for(K in (Kmax-1):1) { z0 <- t(combiM[[K+1]] %*% t(z0)) mergedn[K] = sum(sapply(map(z0), function(x) any(which(as.logical(combiM[[K]][rowSums(combiM[[K]])==2,]))==x))) } } if(Kmax == 2) reg <- NULL if(any(abc == "standard")) { par(mfrow=c(1,2), oma=c(0,0,3,0), mar = { mar <- oldpar$mar; mar[3] <- 0.1; mar }) plot(1:Kmax, ent, xlab = "Number of clusters", ylab = "Entropy", xaxt = "n", ...) axis(side = 1, at = 1:Kmax) if(any(reg == 2)) { pcwsreg <- pcws2_reg(1:Kmax,ent) lines(1:pcwsreg$c, pcwsreg$a1*(1:pcwsreg$c) + pcwsreg$b1, lty = 2, col = "red") lines(pcwsreg$c:Kmax, pcwsreg$a2*(pcwsreg$c:Kmax) + pcwsreg$b2, lty = 2, col = "red") } if(any(reg == 3)) { pcwsreg <- pcws3_reg(1:Kmax,ent) lines(1:pcwsreg$c1, pcwsreg$a1*(1:pcwsreg$c1) + pcwsreg$b1, lty = 2, col = "blue") lines(pcwsreg$c1:pcwsreg$c2, pcwsreg$a2*(pcwsreg$c1:pcwsreg$c2) + pcwsreg$b2, lty = 2, col = "blue") lines(pcwsreg$c2:Kmax, pcwsreg$a3*(pcwsreg$c2:Kmax) + pcwsreg$b3, lty = 2, col = "blue") } plot(1:(Kmax-1), ent[2:Kmax]-ent[1:(Kmax-1)], xlab = "Number of clusters", ylab = "Difference in entropy", xaxt = "n", ...) axis(side = 1, at = 1:(Kmax-1)) title("Entropy plot", outer=TRUE, cex.main = 1) } if(any(abc == "normalized")) { par(mfrow=c(1,2), oma=c(0,0,3,0), mar = { mar <- oldpar$mar; mar[3] <- 0.1; mar }) plot(cumsum(c(0,mergedn)), ent, xlab = "Cumul. count of merged obs.", ylab = "Entropy", ...) 
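## Illustrative sketch of combiTree() (defined just below): not part of the
## original source, guarded by if (FALSE). The 'faithful' data set ships with
## base R; object names are hypothetical.
if (FALSE) {
  cc <- clustCombi(data = faithful)
  combiTree(cc, type = "rectangle", yaxis = "entropy")  # merge tree, entropy heights
  combiTree(cc, yaxis = "step")                         # same tree, one unit per merge
}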
if(any(reg == 2)) { X <- cumsum(c(0,mergedn)) pcwsreg <- pcws2_reg(X,ent) lines(X[1:pcwsreg$c], pcwsreg$a1*(X[1:pcwsreg$c]) + pcwsreg$b1, lty = 2, col = "red") lines(X[pcwsreg$c:Kmax], pcwsreg$a2*(X[pcwsreg$c:Kmax]) + pcwsreg$b2, lty = 2, col = "red") } if(any(reg == 3)) { X <- cumsum(c(0,mergedn)) pcwsreg <- pcws3_reg(X,ent) lines(X[1:pcwsreg$c1], pcwsreg$a1*(X[1:pcwsreg$c1]) + pcwsreg$b1, lty = 2, col = "blue") lines(X[pcwsreg$c1:pcwsreg$c2], pcwsreg$a2*(X[pcwsreg$c1:pcwsreg$c2]) + pcwsreg$b2, lty = 2, col = "blue") lines(X[pcwsreg$c2:Kmax], pcwsreg$a3*(X[pcwsreg$c2:Kmax]) + pcwsreg$b3, lty = 2, col = "blue") } plot(1:(Kmax-1), (ent[2:Kmax]-ent[1:(Kmax-1)])/mergedn, xlab = "Number of clusters", ylab = "Normalized difference in entropy", xaxt = "n", ...) axis(side = 1, at = 1:(Kmax-1)) title("Normalized entropy plot", outer=TRUE, cex.main = 1) } invisible() } combiTree <- function(object, type = c("triangle", "rectangle"), yaxis = c("entropy", "step"), edgePar = list(col = "darkgray", lwd = 2), ...) { if(!inherits(object, "clustCombi")) stop("object not of class \"clustCombi\"") yaxis <- match.arg(yaxis, eval(formals(combiTree)$yaxis), several.ok = FALSE) type <- match.arg(type, eval(formals(combiTree)$type), several.ok = FALSE) G <- object$MclustOutput$G combiM <- object$combiM # combiZ <- object$combiz # define merging pattern: # - negative numbers are leaves, # - positive are merged clusters (defined by row number in merge) curr <- 1:G merged <- -(1:G) merge <- matrix(NA, G-1, 2) for(k in 1:(G-1)) { Kp <- G - k + 1 l1 <- which(!combiM[[Kp-1]] %*% rep(1,Kp) == 1) l2 <- (combiM[[Kp-1]] %*% curr)[l1] - curr[l1] curr <- setdiff(curr, max(l1, l2)) merge[k,] <- merged[c(l1,l2)] merged[merged == merged[l1] | merged == merged[l2]] <- k } # order along the x-axis sel <- function(x) { if(x < 0) return(abs(x)) else return(c(sel(merge[x,1]), sel(merge[x,2]))) } ord <- abs(c(sel(merge[nrow(merge),1]), sel(merge[nrow(merge),2]))) if(yaxis == "step") { # step h <- 1:(G-1) ylab <- "Steps" } else { # entropy entropy <- sapply(rev(object$combiz), function(z) -sum(xlog(z))) # normalized negentropy h <- entropy; h <- 1 - (h - min(h))/(max(h)-min(h)); h <- h[-1] ylab <- "1 - normalised entropy" } # hclust object (see help(hclust)) hc <- list(merge = merge, # mergin matrix height = h, # define merge heights order = ord, # order of leaves labels = 1:G) # labels of leaves class(hc) <- "hclust" # make it an hclust object # plot(hc, hang = -1) # look at the result # convert to a dendrogram object dendro <- as.dendrogram(hc) plot(dendro, type = type, edgePar = edgePar, ylab = ylab, ...) invisible(dendro) } # pcws2_reg computes the piecewise linear regression -- with two pieces -- to (x,y), for any possible change point and chooses the one leading to the smallest least-square error. 
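# In formula terms the fit is y ~ a1*x + b1 on x[1:c] and y ~ a2*x + b2 on x[c:C],
# with the change point c chosen by exhaustive search over c = 2, ..., C-1.
# A minimal usage sketch (the entropy values below are made up for illustration):
#   ent <- c(30, 19, 12, 10.5, 10, 9.8)
#   pcws2_reg(x = 1:6, y = ent)$c   # index of the estimated change point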
pcws2_reg <- function(x, y) { C <- length(x) ssBest = Inf for (c in 2:(C-1)) { x1 <- x[1:c] y1 <- y[1:c] x2 <- x[c:C] y2 <- y[c:C] a1 <- sum((x1-mean(x1))*(y1-mean(y1)))/sum((x1-mean(x1))^2) b1 <- -a1 * mean(x1) + mean(y1) a2 <- sum((x2-mean(x2))*(y2-mean(y2)))/sum((x2-mean(x2))^2) b2 <- -a2 * mean(x2) + mean(y2) ss <- sum((a1*x1+b1-y1)^2) + sum((a2*x2+b2-y2)^2) if (ss < ssBest) { ssBest <- ss cBest <- c a1Best <- a1 a2Best <- a2 b1Best <- b1 b2Best <- b2 } } return(list(c=cBest, a1=a1Best, b1=b1Best, a2=a2Best, b2=b2Best, residuals = c(a1*x1+b1-y1,a2*x2+b2-y2))) } # pcws3_reg computes the piecewise linear regression -- with three pieces -- to (x,y), for any possible change points and chooses the ones leading to the smallest least-square error. pcws3_reg <- function(x, y) { C <- length(x) ssBest = Inf for (c1 in 2:(C-2)) { for (c2 in (c1+1):(C-1)) { x1 <- x[1:c1] y1 <- y[1:c1] x2 <- x[c1:c2] y2 <- y[c1:c2] x3 <- x[c2:C] y3 <- y[c2:C] a1 <- sum((x1-mean(x1))*(y1-mean(y1)))/sum((x1-mean(x1))^2) b1 <- -a1 * mean(x1) + mean(y1) a2 <- sum((x2-mean(x2))*(y2-mean(y2)))/sum((x2-mean(x2))^2) b2 <- -a2 * mean(x2) + mean(y2) a3 <- sum((x3-mean(x3))*(y3-mean(y3)))/sum((x3-mean(x3))^2) b3 <- -a3 * mean(x3) + mean(y3) ss <- sum((a1*x1+b1-y1)^2) + sum((a2*x2+b2-y2)^2) + sum((a3*x3+b3-y3)^2) if (ss < ssBest) { ssBest <- ss c1Best <- c1 c2Best <- c2 a1Best <- a1 b1Best <- b1 a2Best <- a2 b2Best <- b2 a3Best <- a3 b3Best <- b3 } } } return(list(c1=c1Best, c2=c2Best, a1=a1Best, b1=b1Best, a2=a2Best, b2=b2Best, a3=a3Best, b3=b3Best, residuals = c(a1*x1+b1-y1,a2*x2+b2-y2,a3*x3+b3-y3))) } # print.clustCombi <- function(x, ...) # { # output <- x # Argh. Really want to use 'output' # cat("\n EM/BIC Solution\n") # cat(" --------------- \n\n") # cat("Number of components: ", as.character(output$MclustOutput$G), "\n", sep = "") # # cat("Model name: ", output$MclustOutput$parameters$var$modelName, "\n\n", sep="") # for (K in 1:output$MclustOutput$G) # { # cat("Component num.", as.character(K),": ", "\n", sep="") # cat(" proportion: ", sprintf(fmt = "%4.2f ", output$MclustOutput$parameters$pro[K]), "\n", sep="") # if (output$Mclust$d == 1) cat(" mean: ", sprintf(fmt = "%4.2f ", output$MclustOutput$parameters$mean[K]), "\n", sep="") else cat(" mean: ", sprintf(fmt = "%4.2f ", output$MclustOutput$parameters$mean[,K]), "\n", sep="") # } # # cat("\n Combining steps \n") # cat(" --------------- \n\n") # # cl = paste(rep(" ", max(output$MclustOutput$G-4,0)), "Classes labels after this step", rep(" ", max(output$MclustOutput$G-4,0)), sep="") # # if (output$MclustOutput$G>4) for (K in 5:output$MclustOutput$G) cl = paste(" ", cl, " ", sep="") # # cat(" Step | Classes combined at this step | Classes labels after this step", "\n", sep="") # cat("-------|-------------------------------|-------------------------------", "\n", sep="") # curr = 1:output$MclustOutput$G # # cat(" 0 | --- |", sprintf(fmt = "%2d ", curr), "\n", sep="") # # for (K in 1:(output$MclustOutput$G-1)) # { # Kp = output$MclustOutput$G - K + 1 # l1 = which(!output$combiM[[Kp-1]] %*% rep(1,Kp) == 1) # l2 = (output$combiM[[Kp-1]] %*% curr)[l1] - curr[l1] # # nc1 = floor((7-nchar(as.character(K)))/2) # nc2 = (7-nchar(as.character(K))) - nc1 # nc3 = floor((33-nchar(paste(as.character(c(l1)), " & ", as.character(l2))))/2) # nc4 = 33-nchar(paste(as.character(c(l1)), " & ", as.character(l2))) - nc3 # # curr <- output$combiM[[Kp-1]] %*% curr - l2*c(rep(0,(l1-1)),1,rep(0,(Kp-1-l1))) # # cat(rep(" ", nc1), as.character(K), rep(" ", nc2), "|", rep(" ", nc3), 
as.character(l1), " & ", as.character(l2), rep(" ", nc4), "|", sprintf(fmt = "%2d ", curr), "\n", sep="") # # } # # cat("\n Classification for K classes: output$classification[[K]]\n") # cat(" Combining matrix (K classes -> (K-1) classes): output$combiM[[K]]\n\n") # } print.clustCombi <- function(x, digits = getOption("digits"), ...) { cat("\'", class(x)[1], "\' object:\n", sep = "") cat(paste0(" Mclust model: (", x$MclustOutput$modelName, ",", x$MclustOutput$G, ")\n")) cat(" Available object components: ") cat(names(x), "\n") cat(" Combining matrix (K+1 classes -> K classes): $combiM[[K]]\n") cat(" Classification for K classes: $classification[[K]]\n") invisible() } summary.clustCombi <- function(object, ...) { title <- paste("Combining Gaussian mixture components for clustering") out <- with(object, list(title = title, MclustModelName = object$MclustOutput$modelName, MclustG = object$MclustOutput$G, combiM = object$combiM)) class(out) <- "summary.clustCombi" return(out) } print.summary.clustCombi <- function(x, digits = getOption("digits"), ...) { cat(rep("-", nchar(x$title)),"\n",sep="") cat(x$title, "\n") cat(rep("-", nchar(x$title)),"\n",sep="") # cat("\nMclust model name:", x$MclustModelName, "\n") cat("Number of components:", x$MclustG, "\n") # cat("\nCombining steps:\n\n") # cl <- paste(rep(" ", max(x$MclustG-4,0)), # "Class labels after this step", # rep(" ", max(x$MclustG-4,0)), sep="") # # if(x$MclustG>4) # for(K in 5:x$MclustG) # cl <- paste(" ", cl, " ", sep="") cat(" Step | Classes combined at this step | Class labels after this step", "\n", sep="") cat("-------|-------------------------------|-----------------------------", "\n", sep="") curr <- 1:x$MclustG cat(" 0 | --- | ", sprintf(fmt = "%d ", curr), "\n", sep="") for(K in 1:(x$MclustG-1)) { Kp = x$MclustG - K + 1 l1 = which(!x$combiM[[Kp-1]] %*% rep(1,Kp) == 1) l2 = (x$combiM[[Kp-1]] %*% curr)[l1] - curr[l1] nc1 = floor((7-nchar(as.character(K)))/2) nc2 = (7-nchar(as.character(K))) - nc1 nc3 = floor((33-nchar(paste(as.character(c(l1)), " & ", as.character(l2))))/2) nc4 = 33-nchar(paste(as.character(c(l1)), " & ", as.character(l2))) - nc3 curr <- x$combiM[[Kp-1]] %*% curr - l2*c(rep(0,(l1-1)),1,rep(0,(Kp-1-l1))) cat(rep(" ", nc1), as.character(K), rep(" ", nc2), "|", rep(" ", nc3), as.character(l1), " & ", as.character(l2), rep(" ", nc4), "| ", sprintf(fmt = "%d ", curr), "\n", sep="") } invisible() } clustCombiOptim <- function(object, reg = 2, plot = FALSE, ...) { # Return the optimal number of clusters suggested by the method based on the # entropy and discussed in reference given in help(clustCombi). # # object = "clustCombi" object # reg = see help(entPlot) z <- object$MclustOutput$z combiM <- object$combiM ent <- rep(as.double(NA, nrow(z))) Kmax <- ncol(z) z0 <- z for(K in Kmax:1) { z0 <- t(combiM[[K]] %*% t(z0)) ent[K] <- -sum(xlog(z0)) } if(Kmax == 2) { # reg <- NULL # in the original code # my modification to get however a result reg <- 1 pcwsreg <- list(K = Kmax) } if(reg == 2) { pcwsreg <- pcws2_reg(1:Kmax, ent) } if(reg == 3) { pcwsreg <- pcws3_reg(1:Kmax, ent) } if(plot) { plot(1:Kmax, ent, xlab = "Number of clusters", ylab = "Entropy", panel.first = grid(), xaxt = "n", ...) 
axis(side = 1, at = 1:Kmax) if(reg == 2) { lines(1:pcwsreg$c, pcwsreg$a1 * (1:pcwsreg$c) + pcwsreg$b1, lty = 2, col = "red") lines(pcwsreg$c:Kmax, pcwsreg$a2 * (pcwsreg$c:Kmax) + pcwsreg$b2, lty = 2, col = "red") } if(reg == 3) { lines(1:pcwsreg$c1, pcwsreg$a1 * (1:pcwsreg$c1) + pcwsreg$b1, lty = 2, col = "blue") lines(pcwsreg$c1:pcwsreg$c2, pcwsreg$a2 * (pcwsreg$c1:pcwsreg$c2) + pcwsreg$b2, lty = 2, col = "blue") lines(pcwsreg$c2:Kmax, pcwsreg$a3 * (pcwsreg$c2:Kmax) + pcwsreg$b3, lty = 2, col = "blue") } } K <- pcwsreg[[1]] z0 <- z for(K in Kmax:K) { z0 <- t(combiM[[K]] %*% t(z0)) } out <- list(numClusters.combi = K, z.combi = z0, cluster.combi = map(z0)) return(out) } mclust/R/options.R0000644000176200001440000000513013473254161013605 0ustar liggesusers############################################################################# .mclust <- structure(list( emModelNames = c("EII", "VII", "EEI", "VEI", "EVI", "VVI", "EEE", "EVE", "VEE", "VVE", "EEV", "VEV", "EVV", "VVV"), # in mclust version <= 4.x # emModelNames = c("EII", "VII", "EEI", "VEI", "EVI", "VVI", "EEE", "EEV", "VEV", "VVV"), hcModelName = "VVV", hcUse = "SVD", subset = 2000, fillEllipses = FALSE, bicPlotSymbols = structure(c(17, 2, 16, 10, 13, 1, 15, 5, 8, 9, 12, 7, 14, 0, 17, 2), .Names = c("EII", "VII", "EEI", "EVI", "VEI", "VVI", "EEE", "EVE", "VEE", "VVE", "EEV", "VEV", "EVV", "VVV", "E", "V")), bicPlotColors = structure( { pal <- grDevices::colorRampPalette(c("forestgreen", "royalblue1", "red3"), space = "Lab") c("gray", "black", pal(12), "gray", "black") }, .Names = c("EII", "VII", "EEI", "EVI", "VEI", "VVI", "EEE", "EVE", "VEE", "VVE", "EEV", "VEV", "EVV", "VVV", "E", "V")), classPlotSymbols = c(16, 0, 17, 3, 15, 4, 1, 8, 2, 7, 5, 9, 6, 10, 11, 18, 12, 13, 14), classPlotColors = c("dodgerblue2", "red3", "green3", "slateblue", "darkorange", "skyblue1", "violetred4", "forestgreen", "steelblue4", "slategrey", "brown", "black", "darkseagreen", "darkgoldenrod3", "olivedrab", "royalblue", "tomato4", "cyan2", "springgreen2"), warn = FALSE)) mclust.options <- function(...) { current <- get(".mclust", envir = asNamespace("mclust")) if(nargs() == 0) return(current) args <- list(...) if(length(args) == 1 && is.null(names(args))) { arg <- args[[1]] switch(mode(arg), list = args <- arg, character = return(.mclust[[arg]]), stop("invalid argument: ", dQuote(arg))) } if(length(args) == 0) return(current) n <- names(args) if (is.null(n)) stop("options must be given by name") # changed <- current[n] current[n] <- args assign(".mclust", current, envir = asNamespace("mclust")) # da provare # assignInNamespace(".mclust", current, ns = asNamespace("mclust")) invisible(current) } mclust/R/toremove.R0000644000176200001440000002435413324447600013760 0ustar liggesusers# functions to be removed?? EMclust <- function(data, G = NULL, modelNames = NULL, prior = NULL, control = emControl(), initialization = list(hcPairs=NULL, subset=NULL, noise=NULL), Vinv = NULL, warn = FALSE, x = NULL, ...) 
{ if (!is.null(x)) { if (!missing(prior) || !missing(control) || !missing(initialization) || !missing(Vinv)) stop("only G and modelNames may be specified as arguments when x is supplied") prior <- attr(x,"prior") control <- attr(x,"control") initialization <- attr(x,"initialization") Vinv <- attr(x,"Vinv") warn <- attr(x,"warn") } dimData <- dim(data) oneD <- is.null(dimData) || length(dimData[dimData > 1]) == 1 if(!oneD && length(dimData) != 2) stop("data must be a vector or a matrix") if(oneD) { data <- drop(as.matrix(data)) n <- length(data) d <- 1 } else { data <- as.matrix(data) n <- nrow(data) d <- ncol(data) } if (is.null(x)) { if (is.null(modelNames)) { if (d == 1) { modelNames <- c("E", "V") } else { modelNames <- mclust.options("emModelNames") if (n <= d) { # select only spherical and diagonal models m <- match(modelNames, c("EII", "VII", "EEI", "VEI", "EVI", "VVI"), nomatch = 0) modelNames <- modelNames[m] } } } if (is.null(G)) { G <- if (is.null(initialization$noise)) 1:9 else 0:9 } else { G <- sort(as.numeric(G)) } Gall <- G Mall <- modelNames } else { Glabels <- dimnames(x)[[1]] Mlabels <- dimnames(x)[[2]] if (is.null(G)) G <- Glabels if (is.null(modelNames)) modelNames <- Mlabels Gmatch <- match(as.character(G), Glabels, nomatch = 0) Mmatch <- match(modelNames, Mlabels, nomatch = 0) if (all(Gmatch) && all(Mmatch)) { attr( x, "G") <- as.numeric(G) attr( x, "modelNames") <- modelNames attr( x, "returnCodes") <- attr(x, "returnCodes")[as.character(G),modelNames,drop=FALSE] return(x[as.character(G),modelNames,drop=FALSE]) } Gall <- sort(as.numeric(unique(c(as.character(G), Glabels)))) Mall <- unique(c(modelNames, Mlabels)) } if (any(as.logical(as.numeric(G))) < 0) { if (is.null(initialization$noise)) { stop("G must be positive") } else { stop("G must be nonnegative") } } if (d == 1 && any(nchar(modelNames) > 1)) { Emodel <- any(sapply(modelNames, function(x) charmatch("E", x, nomatch = 0)[1]) == 1) Vmodel <- any(sapply(modelNames, function(x) charmatch("V", x, nomatch = 0)[1]) == 1) modelNames <- c("E", "V")[c(Emodel, Vmodel)] } l <- length(Gall) m <- length(Mall) EMPTY <- -.Machine$double.xmax BIC <- RET <- matrix(EMPTY, nrow = l, ncol = m, dimnames = list(as.character(Gall), as.character(Mall))) if (!is.null(x)) { BIC[dimnames(x)[[1]],dimnames(x)[[2]]] <- x RET[dimnames(x)[[1]],dimnames(x)[[2]]] <- attr(x, "returnCodes") BIC <- BIC[as.character(G),modelNames,drop=FALSE] RET <- RET[as.character(G),modelNames,drop=FALSE] } G <- as.numeric(G) Glabels <- as.character(G) Gout <- G if (is.null(initialization$noise)) { if (G[1] == 1) { for (mdl in modelNames[BIC["1",] == EMPTY]) { out <- mvn(modelName = mdl, data = data, prior = prior) BIC["1", mdl] <- bic(modelName = mdl, loglik = out$loglik, n = n, d = d, G = 1, equalPro = FALSE) RET["1", mdl] <- attr(out, "returnCode") } if (l == 1) { BIC[BIC == EMPTY] <- NA return(structure(BIC, G = G, modelNames = modelNames, prior = prior, control = control, initialization = initialization, warn = warn, n = n, d = d, oneD = oneD, returnCodes = RET, class = "mclustBIC")) } G <- G[-1] Glabels <- Glabels[-1] } if (is.null(initialization$subset)) { ####################################################### # all data in initial hierarchical clustering phase ####################################################### if (is.null(initialization$hcPairs)) { if (d != 1) { if (n > d) { hcPairs <- hc(modelName = mclust.options("hcModelName"), data = data) } else { hcPairs <- hc(modelName = "EII", data = data) } } else { hcPairs <- NULL # hcPairs <- hc(modelName 
= "E", data = data) } } else hcPairs <- initialization$hcPairs if (d > 1 || !is.null(hcPairs)) clss <- hclass(hcPairs, G) for (g in Glabels) { if (d > 1 || !is.null(hcPairs)) { z <- unmap(clss[, g]) } else { z <- unmap(qclass( data, as.numeric(g))) } for (modelName in modelNames[BIC[g,] == EMPTY]) { out <- me(modelName = modelName, data = data, z = z, prior = prior, control = control, warn = warn) BIC[g, modelName] <- bic(modelName = modelName, loglik = out$loglik, n = n, d = d, G = as.numeric(g), equalPro = control$equalPro) RET[g, modelName] <- attr(out, "returnCode") } } } else { ###################################################### # initial hierarchical clustering phase on a subset ###################################################### if (is.logical(initialization$subset)) initialization$subset <- (1:n)[initialization$subset] if (is.null(initialization$hcPairs)) { if (d != 1) { if (n > d) { hcPairs <- hc(data = data[initialization$subset, ], modelName = mclust.options("hcModelName")) } else { hcPairs <- hc(data = data[initialization$subset,], modelName = "EII") } } else { hcPairs <- NULL # hcPairs <- hc(data = data[initialization$subset], # modelName = "E") } } else hcPairs <- initialization$hcPairs if (d > 1 || !is.null(hcPairs)) clss <- hclass(hcPairs, G) for (g in Glabels) { if (d > 1 || !is.null(hcPairs)) { z <- unmap(clss[, g]) } else { z <- unmap(qclass(data[initialization$subset], as.numeric(g))) } dimnames(z) <- list(as.character(initialization$subset), NULL) for (modelName in modelNames[!is.na(BIC[g,])]) { ms <- mstep(modelName = modelName, z = z, data = as.matrix(data)[initialization$subset, ], prior = prior, control = control, warn = warn) # # ctrl <- control # ctrl$itmax[1] <- 1 # ms <- me(modelName = modelName, data = as.matrix(data)[ # initialization$subset, ], z = z, prior = prior, control = ctrl) # es <- do.call("estep", c(list(data = data, warn = warn), ms)) out <- me(modelName = modelName, data = data, z = es$z, prior = prior, control = control, warn = warn) BIC[g, modelName] <- bic(modelName = modelName, loglik = out$loglik, n = n, d = d, G = as.numeric(g), equalPro = control$equalPro) RET[g, modelName] <- attr(out, "returnCode") } } } } else { ###################################################### # noise case ###################################################### if (!is.null(initialization$subset)) stop("subset option not implemented with noise") if (is.null(Vinv) || Vinv <= 0) Vinv <- hypvol(data, reciprocal = TRUE) noise <- initialization$noise if (!is.logical(noise)) noise <- as.logical(match(1:n, noise, nomatch = 0)) if (!G[1]) { hood <- n * log(Vinv) BIC["0", ] <- 2 * hood - log(n) if (l == 1) { return(structure(BIC, G = G, modelNames = modelNames, prior = prior, control = control, initialization = list(hcPairs = hcPairs, subset = initialization$subset), warn = warn, n = n, d = d, oneD = oneD, returnCodes = RET, class = "mclustBIC")) } G <- G[-1] Glabels <- Glabels[-1] } if (is.null(initialization$hcPairs)) { if (d != 1) { if (n > d) { hcPairs <- hc(data = data[!noise,], modelName = mclust.options("hcModelName")) } else { hcPairs <- hc(data = data[!noise,], modelName = "EII") } } else { hcPairs <- NULL # hcPairs <- hc(data = data[!noise], modelName = "E") } } else hcPairs <- initialization$hcPairs if (d > 1 || !is.null(hcPairs)) clss <- hclass(hcPairs, G) z <- matrix(0, n, max(G) + 1) for (g in Glabels) { z[] <- 0 k <- as.numeric(g) if (d > 1 || !is.null(hcPairs)) { z[!noise, 1:k] <- unmap(clss[, g]) } else { z[!noise, 1:k] <- unmap(qclass(data[!noise])) } 
z[noise, k+1] <- 1 K <- 1:(k+1) for (modelName in modelNames[BIC[g,] == EMPTY]) { out <- me(modelName = modelName, data = data, z = z[, K], prior = prior, control = control, Vinv = Vinv, warn = warn) BIC[g, modelName] <- bic(modelName = modelName, loglik = out$loglik, n = n, d = d, G = k, noise = TRUE, equalPro = control$equalPro) RET[g, modelName] <- attr(out, "returnCode") } } } structure(BIC, G = Gout, modelNames = modelNames, prior = prior, control = control, initialization = list(hcPairs = hcPairs, subset = initialization$subset, noise = initialization$noise), Vinv = Vinv, warn = warn, n = n, d = d, oneD = oneD, returnCodes = RET, class = "mclustBIC") } # EMclust <- function(...) .Defunct("mclustBIC", PACKAGE = "mclust") mclust/R/util.R0000644000176200001440000003150613477632314013101 0ustar liggesusers adjustedRandIndex <- function (x, y) { x <- as.vector(x) y <- as.vector(y) if(length(x) != length(y)) stop("arguments must be vectors of the same length") tab <- table(x,y) if(all(dim(tab)==c(1,1))) return(1) a <- sum(choose(tab, 2)) b <- sum(choose(rowSums(tab), 2)) - a c <- sum(choose(colSums(tab), 2)) - a d <- choose(sum(tab), 2) - a - b - c ARI <- (a - (a + b) * (a + c)/(a + b + c + d)) / ((a + b + a + c)/2 - (a + b) * (a + c)/(a + b + c + d)) return(ARI) } classError <- function(classification, class) { q <- function(map, len, x) { x <- as.character(x) map <- lapply(map, as.character) y <- sapply(map, function(x) x[1]) best <- y != x if(all(len) == 1) return(best) errmin <- sum(as.numeric(best)) z <- sapply(map, function(x) x[length(x)]) mask <- len != 1 counter <- rep(0, length(len)) k <- sum(as.numeric(mask)) j <- 0 while(y != z) { i <- k - j m <- mask[i] counter[m] <- (counter[m] %% len[m]) + 1 y[x == names(map)[m]] <- map[[m]][counter[m]] temp <- y != x err <- sum(as.numeric(temp)) if(err < errmin) { errmin <- err best <- temp } j <- (j + 1) %% k } best } if (any(isNA <- is.na(classification))) { classification <- as.character(classification) nachar <- paste(unique(classification[!isNA]),collapse="") classification[isNA] <- nachar } MAP <- mapClass(classification, class) len <- sapply(MAP[[1]], length) if(all(len) == 1) { CtoT <- unlist(MAP[[1]]) I <- match(as.character(classification), names(CtoT), nomatch= 0) one <- CtoT[I] != class } else { one <- q(MAP[[1]], len, class) } len <- sapply(MAP[[2]], length) if(all(len) == 1) { TtoC <- unlist(MAP[[2]]) I <- match(as.character(class), names(TtoC), nomatch = 0) two <- TtoC[I] != classification } else { two <- q(MAP[[2]], len, classification) } err <- if(sum(as.numeric(one)) > sum(as.numeric(two))) as.vector(one) else as.vector(two) bad <- seq(along = classification)[err] list(misclassified = bad, errorRate = length(bad)/length(class)) } mapClass <- function(a, b) { l <- length(a) x <- y <- rep(NA, l) if(l != length(b)) { warning("unequal lengths") return(x) } aChar <- as.character(a) bChar <- as.character(b) Tab <- table(a, b) Ua <- dimnames(Tab)[[1]] Ub <- dimnames(Tab)[[2]] aTOb <- rep(list(Ub), length(Ua)) names(aTOb) <- Ua bTOa <- rep(list(Ua), length(Ub)) names(bTOa) <- Ub # k <- nrow(Tab) Map <- rep(0, k) Max <- apply(Tab, 1, max) for(i in 1:k) { I <- match(Max[i], Tab[i, ], nomatch = 0) aTOb[[i]] <- Ub[I] } if(is.numeric(b)) aTOb <- lapply(aTOb, as.numeric) k <- ncol(Tab) Map <- rep(0, k) Max <- apply(Tab, 2, max) for(j in (1:k)) { J <- match(Max[j], Tab[, j]) bTOa[[j]] <- Ua[J] } if(is.numeric(a)) bTOa <- lapply(bTOa, as.numeric) # return(list(aTOb = aTOb, bTOa = bTOa)) } map <- function(z, warn = mclust.options("warn"), 
...) { nrowz <- nrow(z) cl <- numeric(nrowz) I <- 1:nrowz J <- 1:ncol(z) for(i in I) { cl[i] <- (J[z[i, ] == max(z[i, ])])[1] } if(warn) { K <- as.logical(match(J, sort(unique(cl)), nomatch = 0)) if(any(!K)) warning(paste("no assignment to", paste(J[!K], collapse = ","))) } return(cl) } unmap <- function(classification, groups=NULL, noise=NULL, ...) { # converts a classification to conditional probabilities # classes are arranged in sorted order unless groups is specified # if a noise indicator is specified, that column is placed last n <- length(classification) u <- sort(unique(classification)) if(is.null(groups)) { groups <- u } else { if(any(match( u, groups, nomatch = 0) == 0)) stop("groups incompatible with classification") miss <- match( groups, u, nomatch = 0) == 0 } cgroups <- as.character(groups) if(!is.null(noise)) { noiz <- match( noise, groups, nomatch = 0) if(any(noiz == 0)) stop("noise incompatible with classification") groups <- c(groups[groups != noise],groups[groups==noise]) noise <- as.numeric(factor(as.character(noise), levels = unique(groups))) } groups <- as.numeric(factor(cgroups, levels = unique(cgroups))) classification <- as.numeric(factor(as.character(classification), levels = unique(cgroups))) k <- length(groups) - length(noise) nam <- levels(groups) if(!is.null(noise)) { k <- k + 1 nam <- nam[1:k] nam[k] <- "noise" } z <- matrix(0, n, k, dimnames = c(names(classification),nam)) for(j in 1:k) { z[classification == groups[j], j] <- 1 } return(z) } BrierScore <- function(z, class) { z <- as.matrix(z) z <- sweep(z, 1, STATS = rowSums(z), FUN = "/") cl <- unmap(class, groups = if(is.factor(class)) levels(class) else NULL) if(any(dim(cl) != dim(z))) stop("input arguments do not match!") sum((cl-z)^2)/(2*nrow(cl)) } orth2 <- function (n) { u <- rnorm(n) u <- u/vecnorm(u) v <- rnorm(n) v <- v/vecnorm(v) Q <- cbind(u, v - sum(u * v) * u) dimnames(Q) <- NULL Q } randomOrthogonalMatrix <- function(n, d) { # Generate a random orthogonal basis matrix of dimension (n x d) using # the method in # Heiberger R. (1978) Generation of random orthogonal matrices. JRSS C, 27, # 199-206. 
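# The Q factor of the QR decomposition of an (n x d) matrix of iid N(0,1) draws
# has orthonormal columns, so crossprod(Q) equals the d x d identity up to
# numerical error. A sanity check with arbitrary illustrative dimensions:
#   Q <- randomOrthogonalMatrix(10, 3)
#   all.equal(crossprod(Q), diag(3))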
Q <- qr.Q(qr(matrix(rnorm(n*d), nrow = n, ncol = d))) return(Q) } logsumexp <- function(x) { # Numerically efficient implementation of log(sum(exp(x))) max <- max(x) max + log(sum(exp(x-max))) } partconv <- function(x, consec = TRUE) { n <- length(x) y <- numeric(n) u <- unique(x) if(consec) { # number groups in order of first row appearance l <- length(u) for(i in 1:l) y[x == u[i]] <- i } else { # represent each group by its lowest-numbered member for(i in u) { l <- x == i y[l] <- (1:n)[l][1] } } y } partuniq <- function(x) { # finds the classification that removes duplicates from x charconv <- function(x, sep = "001") { if(!is.data.frame(x)) x <- data.frame(x) do.call("paste", c(as.list(x), sep = sep)) } n <- nrow(x) x <- charconv(x) k <- duplicated(x) partition <- 1.:n partition[k] <- match(x[k], x) partition } dmvnorm <- function(data, mean, sigma, log = FALSE) { data <- as.matrix(data) n <- nrow(data) d <- ncol(data) if(missing(mean)) mean <- rep(0, length = d) mean <- as.vector(mean) if(length(mean) != d) stop("data and mean have non-conforming size") if(missing(sigma)) sigma <- diag(d) sigma <- as.matrix(sigma) if(ncol(sigma) != d) stop("data and sigma have non-conforming size") if(max(abs(sigma - t(sigma))) > .Machine$double.eps) stop("sigma must be a symmetric matrix") # - 1st approach # cholsigma <- chol(sigma) # logdet <- 2 * sum(log(diag(cholsigma))) # md <- mahalanobis(data, center = mean, # cov = chol2inv(cholsigma), inverted = TRUE) # logdens <- -(ncol(data) * log(2 * pi) + logdet + md)/2 # # - 2nd approach # cholsigma <- chol(sigma) # logdet <- 2 * sum(log(diag(cholsigma))) # mean <- outer(rep(1, nrow(data)), as.vector(matrix(mean,d))) # data <- t(data - mean) # conc <- chol2inv(cholsigma) # Q <- colSums((conc %*% data)* data) # logdens <- as.vector(Q + d*log(2*pi) + logdet)/(-2) # # - 3rd approach (via Fortran code) logdens <- .Fortran("dmvnorm", as.double(data), # x as.double(mean), # mu as.double(sigma), # Sigma as.integer(n), # n as.integer(d), # p double(d), # w double(1), # hood double(n), # logdens PACKAGE = "mclust")[[8]] # if(log) logdens else exp(logdens) } shapeO <- function(shape, O, transpose = FALSE) { dimO <- dim(O) if(dimO[1] != dimO[2]) stop("leading dimensions of O are unequal") if((ldO <- length(dimO)) != 3) { if(ldO == 2) { dimO <- c(dimO, 1) O <- array(O, dimO) } else stop("O must be a matrix or an array") } l <- length(shape) if(l != dimO[1]) stop("dimension of O and length s are unequal") storage.mode(O) <- "double" .Fortran("shapeo", as.logical(transpose), as.double(shape), O, as.integer(l), as.integer(dimO[3]), double(l * l), integer(1), PACKAGE = "mclust")[[3]] } traceW <- function(x) { # sum(as.vector(sweep(x, 2, apply(x, 2, mean)))^2) dimx <- dim(x) n <- dimx[1] p <- dimx[2] .Fortran("mcltrw", as.double(x), as.integer(n), as.integer(p), double(p), double(1), PACKAGE = "mclust")[[5]] } unchol <- function(x, upper = NULL) { if(is.null(upper)) { upper <- any(x[row(x) < col(x)]) lower <- any(x[row(x) > col(x)]) if(upper && lower) stop("not a triangular matrix") if(!(upper || lower)) { x <- diag(x) return(diag(x * x)) } } dimx <- dim(x) storage.mode(x) <- "double" .Fortran("uncholf", as.logical(upper), x, as.integer(nrow(x)), as.integer(ncol(x)), integer(1), PACKAGE = "mclust")[[2]] } vecnorm <- function (x, p = 2) { if (is.character(p)) { if (charmatch(p, "maximum", nomatch = 0) == 1) p <- Inf else if (charmatch(p, "euclidean", nomatch = 0) == 1) p <- 2 else stop("improper specification of p") } if (!is.numeric(x) && !is.complex(x)) stop("mode of x 
must be either numeric or complex") if (!is.numeric(p)) stop("improper specification of p") if (p < 1) stop("p must be greater than or equal to 1") if (is.numeric(x)) x <- abs(x) else x <- Mod(x) if (p == 2) return(.Fortran("d2norm", as.integer(length(x)), as.double(x), as.integer(1), double(1), PACKAGE = "mclust")[[4]]) if (p == Inf) return(max(x)) if (p == 1) return(sum(x)) xmax <- max(x) if (!xmax) xmax <- max(x) if (!xmax) return(xmax) x <- x/xmax xmax * sum(x^p)^(1/p) } errorBars <- function(x, upper, lower, width = 0.1, code = 3, angle = 90, horizontal = FALSE, ...) { # Draw error bars at x from upper to lower. If horizontal = FALSE (default) # bars are drawn vertically, otherwise horizontally. if(horizontal) arrows(upper, x, lower, x, length = width, angle = angle, code = code, ...) else arrows(x, upper, x, lower, length = width, angle = angle, code = code, ...) } covw <- function(X, Z, normalize = TRUE) # Given data matrix X(n x p) and weight matrix Z(n x G) computes # weighted means(p x G), weighted covariance matrices S(p x p x G) and # weighted scattering matrices W(p x p x G) { X <- as.matrix(X) Z <- as.matrix(Z) n <- nrow(X) p <- ncol(X) nZ <- nrow(Z) G <- ncol(Z) if(n != nZ) stop("X and Z must have same number of rows") if(normalize) Z <- t( apply(Z, 1, function(z) z/sum(z)) ) tmp <- .Fortran("covwf", X = as.double(X), Z = as.double(Z), n = as.integer(n), p = as.integer(p), G = as.integer(G), mean = double(p*G), S = double(p*p*G), W = double(p*p*G), PACKAGE = "mclust") out <- list(mean = matrix(tmp$mean, p,G), S = array(tmp$S, c(p,p,G)), W = array(tmp$W, c(p,p,G)) ) return(out) } hdrlevels <- function(density, prob) { if(missing(density) | missing(prob)) stop("Please provide both 'density' and 'prob' arguments to function call!") density <- as.vector(density) prob <- pmin(pmax(as.numeric(prob), 0), 1) alpha <- 1-prob lev <- quantile(density, alpha) names(lev) <- paste0(round(prob*100),"%") return(lev) } catwrap <- function(x, width = getOption("width"), ...) { # version of cat with wrapping at specified width cat(paste(strwrap(x, width = width, ...), collapse = "\n"), "\n") } ## ## Convert to a from classes 'Mclust' and 'densityMclust' ## as.Mclust <- function(x, ...) { UseMethod("as.Mclust") } as.Mclust.default <- function(x, ...) { if(inherits(x, "Mclust")) x else stop("argument 'x' cannot be coerced to class 'Mclust'") } as.Mclust.densityMclust <- function(x, ...) { class(x) <- c("Mclust", class(x)[1]) return(x) } as.densityMclust <- function(x, ...) { UseMethod("as.densityMclust") } as.densityMclust.default <- function(x, ...) { if(inherits(x, "densityMclust")) x else stop("argument 'x' cannot be coerced to class 'densityMclust'") } as.densityMclust.Mclust <- function(x, ...) 
{ class(x) <- c("densityMclust", class(x)) x$density <- dens(modelName = x$modelName, data = x$data, parameters = x$parameters, logarithm = FALSE) return(x) }mclust/R/zzz.R0000644000176200001440000000156713377256700012765 0ustar liggesusers# .onLoad <- function(libname, pkgname) # { # library.dynam("mclust", pkgname, libname) # } mclustStartupMessage <- function() { # Startup message obtained as # > figlet -f slant MCLUST msg <- c(paste0( " __ ___________ __ _____________ / |/ / ____/ / / / / / ___/_ __/ / /|_/ / / / / / / / /\\__ \\ / / / / / / /___/ /___/ /_/ /___/ // / /_/ /_/\\____/_____/\\____//____//_/ version ", packageVersion("mclust")), "\nType 'citation(\"mclust\")' for citing this R package in publications.") return(msg) } .onAttach <- function(lib, pkg) { # unlock .mclust variable allowing its modification unlockBinding(".mclust", asNamespace("mclust")) # startup message msg <- mclustStartupMessage() if(!interactive()) msg[1] <- paste("Package 'mclust' version", packageVersion("mclust")) packageStartupMessage(msg) invisible() } mclust/vignettes/0000755000176200001440000000000013510412700013562 5ustar liggesusersmclust/vignettes/mclust.Rmd0000644000176200001440000001726713427502265015567 0ustar liggesusers--- title: "A quick tour of mclust" author: "Luca Scrucca" date: "`r format(Sys.time(), '%d %b %Y')`" output: rmarkdown::html_vignette: toc: true number_sections: false css: "vignette.css" vignette: > %\VignetteIndexEntry{A quick tour of mclust} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- ```{r setup, include=FALSE} library(knitr) opts_chunk$set(fig.align = "center", out.width = "90%", fig.width = 6, fig.height = 5.5, dev.args=list(pointsize=10), par = TRUE, # needed for setting hook collapse = TRUE, # collapse input & ouput code in chunks warning = FALSE) knit_hooks$set(par = function(before, options, envir) { if(before && options$fig.show != "none") par(family = "sans", mar=c(4.1,4.1,1.1,1.1), mgp=c(3,1,0), tcl=-0.5) }) set.seed(1) # for exact reproducibility ``` # Introduction **mclust** is a contributed R package for model-based clustering, classification, and density estimation based on finite normal mixture modelling. It provides functions for parameter estimation via the EM algorithm for normal mixture models with a variety of covariance structures, and functions for simulation from these models. Also included are functions that combine model-based hierarchical clustering, EM for mixture estimation and the Bayesian Information Criterion (BIC) in comprehensive strategies for clustering, density estimation and discriminant analysis. Additional functionalities are available for displaying and visualizing fitted models along with clustering, classification, and density estimation results. This document gives a quick tour of **mclust** (version `r packageVersion("mclust")`) functionalities. It was written in R Markdown, using the [knitr](https://cran.r-project.org/package=knitr) package for production. See `help(package="mclust")` for further details and references provided by `citation("mclust")`. 
```{r, message = FALSE, echo=-2} library(mclust) cat(mclust:::mclustStartupMessage(), sep="") ``` # Clustering ```{r} data(diabetes) class <- diabetes$class table(class) X <- diabetes[,-1] head(X) clPairs(X, class) BIC <- mclustBIC(X) plot(BIC) summary(BIC) mod1 <- Mclust(X, x = BIC) summary(mod1, parameters = TRUE) plot(mod1, what = "classification") table(class, mod1$classification) plot(mod1, what = "uncertainty") ICL <- mclustICL(X) summary(ICL) plot(ICL) LRT <- mclustBootstrapLRT(X, modelName = "VVV") LRT ``` ## Initialisation EM algorithm is used by **mclust** for maximum likelihood estimation. Initialisation of EM is performed using the partitions obtained from agglomerative hierarchical clustering. For details see `help(mclustBIC)` or `help(Mclust)`, and `help(hc)`. ```{r} (hc1 <- hc(X, modelName = "VVV", use = "SVD")) BIC1 <- mclustBIC(X, initialization = list(hcPairs = hc1)) # default summary(BIC1) (hc2 <- hc(X, modelName = "VVV", use = "VARS")) BIC2 <- mclustBIC(X, initialization = list(hcPairs = hc2)) summary(BIC2) (hc3 <- hc(X, modelName = "EEE", use = "SVD")) BIC3 <- mclustBIC(X, initialization = list(hcPairs = hc3)) summary(BIC3) ``` Update BIC by merging the best results: ```{r} BIC <- mclustBICupdate(BIC1, BIC2, BIC3) summary(BIC) plot(BIC) ``` Univariate fit using random starting points obtained by creating random agglomerations (see `help(randomPairs)`) and merging best results: ```{r, echo=-1} set.seed(20181116) data(galaxies, package = "MASS") galaxies <- galaxies / 1000 BIC <- NULL for(j in 1:20) { rBIC <- mclustBIC(galaxies, verbose = FALSE, initialization = list(hcPairs = randomPairs(galaxies))) BIC <- mclustBICupdate(BIC, rBIC) } summary(BIC) plot(BIC) mod <- Mclust(galaxies, x = BIC) summary(mod) ``` # Classification ## EDDA ```{r} data(iris) class <- iris$Species table(class) X <- iris[,1:4] head(X) mod2 <- MclustDA(X, class, modelType = "EDDA") summary(mod2) plot(mod2, what = "scatterplot") plot(mod2, what = "classification") ``` ## MclustDA ```{r} data(banknote) class <- banknote$Status table(class) X <- banknote[,-1] head(X) mod3 <- MclustDA(X, class) summary(mod3) plot(mod3, what = "scatterplot") plot(mod3, what = "classification") ``` ## Cross-validation error ```{r} cv <- cvMclustDA(mod2, nfold = 10) str(cv) unlist(cv[3:4]) cv <- cvMclustDA(mod3, nfold = 10) str(cv) unlist(cv[3:4]) ``` # Density estimation ## Univariate ```{r} data(acidity) mod4 <- densityMclust(acidity) summary(mod4) plot(mod4, what = "BIC") plot(mod4, what = "density", data = acidity, breaks = 15) plot(mod4, what = "diagnostic", type = "cdf") plot(mod4, what = "diagnostic", type = "qq") ``` ## Multivariate ```{r} data(faithful) mod5 <- densityMclust(faithful) summary(mod5) plot(mod5, what = "BIC") plot(mod5, what = "density") plot(mod5, what = "density", type = "hdr") plot(mod5, what = "density", type = "hdr", data = faithful, points.cex = 0.5) plot(mod5, what = "density", type = "persp") ``` # Bootstrap inference ```{r} boot1 <- MclustBootstrap(mod1, nboot = 999, type = "bs") summary(boot1, what = "se") summary(boot1, what = "ci") par(mfrow=c(4,3)) plot(boot1, what = "pro") plot(boot1, what = "mean") par(mfrow=c(1,1)) ``` ```{r} boot4 <- MclustBootstrap(mod4, nboot = 999, type = "bs") summary(boot4, what = "se") summary(boot4, what = "ci") par(mfrow=c(2,2)) plot(boot4, what = "pro") plot(boot4, what = "mean") par(mfrow=c(1,1)) ``` # Dimension reduction ## Clustering ```{r} mod1dr <- MclustDR(mod1) summary(mod1dr) plot(mod1dr, what = "pairs") plot(mod1dr, what = "boundaries", ngrid = 200) 
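# Refit the projection with lambda = 1: this tuning parameter (see help(MclustDR))
# makes the estimated directions depend only on the variation of the cluster
# means, rather than also on differences among the cluster covariances.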
mod1dr <- MclustDR(mod1, lambda = 1) summary(mod1dr) plot(mod1dr, what = "scatterplot") plot(mod1dr, what = "boundaries", ngrid = 200) ``` ## Classification ```{r} mod2dr <- MclustDR(mod2) summary(mod2dr) plot(mod2dr, what = "scatterplot") plot(mod2dr, what = "boundaries", ngrid = 200) mod3dr <- MclustDR(mod3) summary(mod3dr) plot(mod3dr, what = "scatterplot") plot(mod3dr, what = "boundaries", ngrid = 200) ``` # Using colorblind-friendly palettes Most of the graphs produced by **mclust** use colors that by default are defined in the following options: ```{r} mclust.options("bicPlotColors") mclust.options("classPlotColors") ``` The first option controls colors used for plotting BIC, ICL, etc. curves, whereas the second option is used to assign colors for indicating clusters or classes when plotting data. Color-blind-friendly palettes can be defined and assigned to the above options as follows: ```{r} cbPalette <- c("#E69F00", "#56B4E9", "#009E73", "#999999", "#F0E442", "#0072B2", "#D55E00", "#CC79A7") bicPlotColors <- mclust.options("bicPlotColors") bicPlotColors[1:14] <- c(cbPalette, cbPalette[1:6]) mclust.options("bicPlotColors" = bicPlotColors) mclust.options("classPlotColors" = cbPalette) clPairs(iris[,-5], iris$Species) mod <- Mclust(iris[,-5]) plot(mod, what = "BIC") plot(mod, what = "classification") ``` The above color definitions are adapted from http://www.cookbook-r.com/Graphs/Colors_(ggplot2)/, but users can easily define their own palettes if needed. # References Scrucca L., Fop M., Murphy T. B. and Raftery A. E. (2016) mclust 5: clustering, classification and density estimation using Gaussian finite mixture models, *The R Journal*, 8/1, pp. 205-233. https://journal.r-project.org/archive/2016/RJ-2016-021/RJ-2016-021.pdf Fraley C. and Raftery A. E. (2002) Model-based clustering, discriminant analysis and density estimation, *Journal of the American Statistical Association*, 97/458, pp. 611-631. Fraley C., Raftery A. E., Murphy T. B. and Scrucca L. (2012) mclust Version 4 for R: Normal Mixture Modeling for Model-Based Clustering, Classification, and Density Estimation. *Technical Report* No. 597, Department of Statistics, University of Washington. 
---- ```{r} sessionInfo() ```mclust/vignettes/vignette.css0000644000176200001440000001125313407464551016143 0ustar liggesusers@charset "UTF-8"; body { background-color: #fff; margin: 1em auto; max-width: 700px; overflow: visible; padding-left: 2em; padding-right: 2em; font-family: "Open Sans", Helvetica, sans-serif; font-size: 14px; line-height: 1.35; } #header { text-align: center; } #TOC { clear: both; margin: 10px 10px 20px 10px; padding: 4px; width: 400px; border: 1px solid #CCCCCC; border-radius: 5px; background-color: #f6f6f6; font-size: 12px; line-height: 1.3; } #TOC .toctitle { font-weight: bold; font-size: 15px; margin-left: 5px; } #TOC ul { padding-left: 40px; margin-left: -1.5em; margin-top: 5px; margin-bottom: 5px; } #TOC ul ul { margin-left: -2em; } #TOC li { line-height: 16px; list-style: square outside; } table { margin: 1em auto; border-width: 1px; border-color: #DDDDDD; border-top: 1px solid #111; border-bottom: 1px solid #111; } th { border-bottom: 1px solid #111; } table th { border-width: 1px; padding: 5px; } table td { border-width: 1px; line-height: 16px; padding: 5px 5px; } table thead, table tr.even { background-color: #f7f7f7; } p { margin: 0.5em 0; } blockquote { background-color: #f6f6f6; padding: 0.25em 0.75em; } hr { border-style: solid; border: none; border-top: 1px solid #777; margin: 28px 0; } dl { margin-left: 0; } dl dd { margin-bottom: 13px; margin-left: 13px; } dl dt { font-weight: bold; } ul { margin-top: 0; } ul li { list-style: circle outside; } ul ul { margin-bottom: 0; } figure { margin: 0; text-align: center; } div.figure { text-align: center; } img { background-color: #FFFFFF; padding: 2px; border: 1px solid #DDDDDD; border-radius: 3px; /* border: 1px solid #CCCCCC; */ margin: 0 5px; } h1 { padding-top: 10px; padding-bottom: 10px; border-bottom: 3px solid #f7f7f7; margin-top: 0; font-size: 120%; line-height: 10px; color: rgb(33,33,33); } h1.title { font-size: 200%; line-height: 40px; } h2 { padding-top: 10px; padding-bottom: 5px; border-bottom: 3px solid #f7f7f7; margin-left: 4px; font-size: 110%; color: rgb(33,33,33); } h2.title { font-size: 110%; line-height: 10px; } h3 { border-bottom: 2px solid #f7f7f7; padding-top: 10px; margin-left: 8px; font-size: 105%; color: rgb(33,33,33); } h4 { border-bottom: 1px solid #f7f7f7; margin-left: 12px; font-size: 100%; color: rgb(33,33,33); } h4.author { border-bottom: 0; color: rgb(77,77,77); } h4.date { border-bottom: 1px solid #f7f7f7; font-size: 100%; color: rgb(77,77,77); } h5, h6 { border-bottom: 1px solid #ccc; font-size: 105%; color: rgb(33,33,33); } address{ font-weight: bold; color: rgb(77,77,77); margin-left: 8px; font-size: 100%; } a { color: rgb(24,116,205); text-decoration: none; } a:hover { color: rgb(28,134,238); } a:visited { color: rgb(24,116,205); } a:visited:hover { color: rgb(28,134,238); } a[href^="http:"] { text-decoration: underline; } a[href^="https:"] { text-decoration: underline; } code { font-family: 'DejaVu Sans Mono', Consolas, Monaco, monospace; color: rgb(77,77,77); font-size: 85%; } p > code, li > code { padding: 2px 0; } pre, code { background-color: #F8F8F8; border-radius: 3px; color: #333333; } pre { white-space: pre-wrap; /* Wrap long lines */ border-radius: 3px; margin: 5px 0px 10px 0px; padding: 10px; } pre:not([class]) { background-color: #F8F8F8; } strong { font-weight: bold; } hi { font-weight: bold; color: rgb(28,134,238); } /* Class described in https://benjeffrey.com/posts/pandoc-syntax-highlighting-css Colours from https://gist.github.com/robsimmons/1172277 */ 
code span.al { color: rgb(255,255,255); font-weight: bold; } /* Alert */ code span.an { font-style: italic; } /* Annotation */ code span.cf { font-weight: bold; } /* ControlFlow */ code span.co { color: rgb(112,112,112); font-style: normal; } /* Comment */ code span.cv { font-style: italic; } /* CommentVar */ code span.do { font-style: italic; } /* Documentation */ code span.dt { color: #4075AD; } /* DataType */ /* code span.dt { text-decoration: underline; } */ code span.dv { color: rgb(85,85,85); } /* DecVal (decimal values) */ code span.er { color: rgb(166,23,23); font-weight: bold; } /* Error */ code span.in { font-style: italic; } /* Information */ code span.kw { font-weight: bold; color: rgb(23,74,133);} /* Keyword */ /* code span.ot { color: #007020; } /* OtherToken */ code span.pp { font-weight: bold; } /* Preprocessor */ code span.st { color: rgb(85,85,85); font-style: italic; } /* String */ code span.wa { font-style: italic; } /* Warning */ mclust/MD50000644000176200001440000001771713510710262012103 0ustar liggesusers30dfe17ce10c3d506b6c7813e0949c9b *DESCRIPTION 6d6add2a10e03c1d9a1675092d5eee19 *NAMESPACE 9f73c6a77f15384190d683246eefa94e *NEWS.md abc4ce05ea40fed290801fb7674bf4ae *R/bootstrap.R 7e0361ad8167e56f59687168450e6d22 *R/clustCombi.R 36c696a442ec492df315af4e31f63f06 *R/densityMclust.R b76a3d8060cc93cec384445d98105a21 *R/gmmhd.R dd2a53f1cfb281a514b01e4ab665026f *R/graphics.R ba84097ee321529784cd9c6db5143e77 *R/icl.R 4b9155a000a45bb1f37fd1507c15f172 *R/impute.R 29cac94c1eb03a4cab12325989b68d6d *R/mbahc.R 6296fc5fd74388259d4497fcd9b6c105 *R/mclust.R 72011d2ed8066986461cafc912d65e27 *R/mclustaddson.R dfac435b338172c60572467cf10a487e *R/mclustda.R 3134343f0d9576ac66c85d66b0fc2256 *R/mclustdr.R 18ffd65e4c69c99e8ea4cb19ff98e0c9 *R/options.R 781ff8bea2efa10350791cd781eb4a2d *R/toremove.R 59bbe86bfdaa0874f2ddf73f9c23b23a *R/util.R f4e0dc159cb386c756eeb3d91e833a4d *R/weights.R ea54da7ad04ea6cf47aad932539d5945 *R/zzz.R 53fd8f7a70dd35554c276e69fb397c36 *build/vignette.rds be633fe4c8d2fe6d9ffb8d1f7775d7d4 *data/Baudry_etal_2010_JCGS_examples.rda dc855c15e4d0ad6aada18089f334ec2d *data/EuroUnemployment.rda 5f1af2c25a1b0220d60caef83a9a8933 *data/GvHD.rda 70cf5c7497a7502e1608152e06cb10ad *data/acidity.rda dcf0404be80a56cd040ed8cb8da1b428 *data/banknote.txt.gz 8ce8f5711f6fedbd7fbb72c2d4d4877b *data/chevron.rda 9fa1c815b04f041a760d73de6855af6a *data/cross.rda 48f91d7dfca0225f74cf8f91306e4ad1 *data/diabetes.rda a6f4195e02f623d477012b738a367682 *data/thyroid.rda b2a529e265721bcd2cb632d42ec6cc11 *data/wdbc.txt.gz 6e79f1abe6d1f90149c6032203d0b6e3 *data/wreath.rda 675c74d0cdc5bfaf2d7f6a8fa9d55f15 *inst/CITATION 6fce37b68f6b4a2ce7ade42cba984c99 *inst/NEWS 00cba8b48cf416b20705afd5ac6d2e04 *inst/doc/mclust.R 7278b7f759063e3034995f25eff22def *inst/doc/mclust.Rmd e829e4f01fa661e98d1154072aec8f0f *inst/doc/mclust.html 0910b5ac9a610600d59154919b1b5eaf *man/Baudry_etal_2010_JCGS_examples.Rd 226b01ed5c258d406de2af5cfade2ba1 *man/BrierScore.Rd 46173e31ba6dcd5e32de0ab995a3936f *man/EuroUnemployment.Rd d76b599944ca21988801f20fa43badc2 *man/GvHD.Rd 0da1a6b97eb72846c97a614f71954fc3 *man/Mclust.Rd 3d737638443b8ad5332f5bb1851d99e3 *man/MclustBootstrap.Rd 2db38d20a186dae8e7675520c1be90ec *man/MclustDA.Rd 35a685d7dc0ca9ef4041cedc7a2ed9a4 *man/MclustDR.Rd 490426167429f8a5032a76f13780d122 *man/MclustDRsubsel.Rd be7d9c84a562490de0d98f27345b349a *man/acidity.Rd 25e4f71de95b8fff43de166f3245d8ba *man/adjustedRandIndex.Rd 40bf7f2db2b9677c5b9bf7733da4aeac *man/banknote.Rd eb92ce9bfd46fcf3eb40fb062bd82071 *man/bic.Rd 
0b4fc7fd19284abad76b17b8a947851b *man/cdens.Rd a0f106c2c8ffeb166a9bbe85fdceb392 *man/cdensE.Rd 3cb1c9457f1b0f03056e314304e78274 *man/cdfMclust.Rd b0cfe540c4eb91f99f49c2127562cd49 *man/chevron.Rd 8d03d846594d737682ed289316a492f6 *man/clPairs.Rd 63d108fe436518c400eebb43430a5958 *man/classError.Rd b13b8ac3eafe2084cdb1174e08cb6cdb *man/classPriorProbs.Rd fd20be5ef505890b5fb82337022b9f0a *man/clustCombi-internals.Rd a8283e0b37782c013e43e7abc96b55e2 *man/clustCombi.Rd bdb14988bc5193739060f58cca67fd5f *man/clustCombiOptim.Rd fd9027722a416f5955196a5cdc985c53 *man/combMat.Rd 249a123191e271ba5d8f0577ee249276 *man/combiPlot.Rd 0ad6fd1b3b38c99e3dbb74292cdea0ae *man/combiTree.Rd 31e6b4816725c39bea7a28a3c7d5adfe *man/coordProj.Rd 0ad10b7d97e45bebe7a10f69a7f919b9 *man/covw.Rd 2e85d0cc4be9094e9d19aeffe69a296d *man/cross.Rd 9bb56179fd11ee4fb5381b6a684a9feb *man/cvMclustDA.Rd 281a582da51a0e7f85e7a014e915478c *man/decomp2sigma.Rd c2e2f3160f5343cf2e9d94aac026e0e7 *man/defaultPrior.Rd b1195aa852e01046fee1d9484e2fafc9 *man/dens.Rd edabd93e8b4e5142fbd8a300f7bfdc83 *man/densityMclust.Rd 802c511a30a7cff09b45442526a03cd2 *man/densityMclust.diagnostic.Rd 9555c9ce741dcf600b6458e3c90e72c6 *man/diabetes.Rd 2ed521a037dca0f419d2c7458aaeb2d0 *man/dmvnorm.Rd 70afb44eac912ad73a029d1b342976df *man/em.Rd 9bd81085828b4bc9badee8e6e5f147ea *man/emControl.Rd 7fd10b6eb296938380d664874f096808 *man/emE.Rd 391b1379ae424c5177fd9b939c434d3e *man/entPlot.Rd 6e7e4d7ec91567f07c07d95891720b0b *man/errorBars.Rd 2b3e5919d2445368ee9d40f5498e5ed6 *man/estep.Rd ed4de5adcefd398a587e3fe176be5ef0 *man/estepE.Rd 246f72baa397cd928f97c9cb6c3ff940 *man/figures/logo.png df6daeac190195cfd0f6cde35e18365c *man/gmmhd.Rd 769d775fe51705de16d09b6073fc48ba *man/hc.Rd 3b505711416a599b4a71586ad9c6ee66 *man/hcE.Rd d63ca89381715f823a343ab10d8eb590 *man/hclass.Rd 2ddfa6fedfaf2a8b469024689cad3e32 *man/hdrlevels.Rd f5c538311fa12c3298e7524d9eb62817 *man/hypvol.Rd 918939d5858154bd5aa2a789e8adda3a *man/icl.Rd cfaaf677496a141f288bd91068a8c94a *man/imputeData.Rd fafd046c110aa10b51f1b45f040fdf6b *man/imputePairs.Rd cc916d8c4282eb5e20899b25d0b741ea *man/logLik.Mclust.Rd db9dd261d97ef515e5008f7f9351bc0b *man/logLik.MclustDA.Rd 699915f3a4cf8bfd6718f71f2a754c48 *man/majorityVote.Rd 7d8989594ce3a00a8633781ff84658f0 *man/map.Rd 2333fd41fe7388bb3347ac831801d654 *man/mapClass.Rd f1edaf4e08180fa4edad91f6023b58c3 *man/mclust-deprecated.Rd 46bd2e7c7536abe228d0ba00a6e99dd4 *man/mclust-internal.Rd 5d5923f8b95ecd940b8f5eb381cbb790 *man/mclust-package.Rd c3d60b764e4508528ac7a8571d4c1a46 *man/mclust.options.Rd 90d8845ae0707c8c43b3b733622b1abc *man/mclust1Dplot.Rd 2082bd9ac1a5a190f78d66495d7d8691 *man/mclust2Dplot.Rd 75e74f6e0bc744d968243a732ebac875 *man/mclustBIC.Rd d472e0d59f19113c24ffdde7fdc67955 *man/mclustBICupdate.Rd 8b53ade82e8b688643dbd9e3f8a4bc08 *man/mclustBootstrapLRT.Rd ace09f6b111e8fc300d75adce419134b *man/mclustICL.Rd 0a070441cfab2c8827cae182a38b3c66 *man/mclustLoglik.Rd 9d69c87d5861e6de08a57854ed4bf324 *man/mclustModel.Rd ea6c97d710bff8c2d30aa915949e711d *man/mclustModelNames.Rd 9c49fb43d204333caacaa786cac65af9 *man/mclustVariance.Rd 27e609cedd39d670de86885456340a2a *man/me.Rd 601d4fddd6bde3efc311aa4c09157b26 *man/me.weighted.Rd 4bdf047772528fdc2c663778faad37f6 *man/meE.Rd fcf5cf7cf0a3e468832036c88a92feb6 *man/mstep.Rd 1df45500c5a8ac583684bea4d3e08260 *man/mstepE.Rd f6baae6868d0519aa222797656ffee37 *man/mvn.Rd 95f6a5c67782e061d7e922dd937ff7bc *man/mvnX.Rd c3796dae5e43a2c26a9ee9b73fcf2276 *man/nMclustParams.Rd 2099a0c748ce32038c52f4aedf1886c5 *man/nVarParams.Rd 
36355172607046a0c37f10dee1197ed5 *man/partconv.Rd cde8fd0e95c3ca5e84846d10a851fd76 *man/partuniq.Rd 8c79fcccb82baeadfb8a61b38ae2b2a3 *man/plot.Mclust.Rd 2cf7265cf9b4fa52c6208dc5fe27072b *man/plot.MclustBoostrap.Rd aaccb40a44639012fc0789461389d642 *man/plot.MclustDA.Rd 038d53a4fd4ca42d090b19d5fb54d6df *man/plot.MclustDR.Rd dbd8bdd0d07f7f7d6f8fe3b107091e23 *man/plot.clustCombi.Rd fc6801f93f4dfb1559632c1c37cb5bb8 *man/plot.densityMclust.Rd 12951e89daa1f9b1ac1ab80cb05b93f5 *man/plot.mclustBIC.Rd 73cbe302b1095c8a9069163414501214 *man/plot.mclustICL.Rd f640c08bd9098247a97a44f30c89a4cf *man/predict.Mclust.Rd eb5eee0ce8ec4a4894ca618c703f8555 *man/predict.MclustDA.Rd e84b696c5b8eff056814c3cba07bee9f *man/predict.MclustDR.Rd 7c321a1a8eecfcb89e3ed806a4b9a2d2 *man/predict.densityMclust.Rd c59592f03a637487c2b2e8473ac427e0 *man/priorControl.Rd d29459dadd315c784743284fa3f6f40a *man/randProj.Rd 9acae98acfaf6b565e9233957a144d22 *man/randomOrthogonalMatrix.Rd 164e38869f2f45ca98a8ad38912e15bf *man/randomPairs.Rd db6e45328c8d98198da2b16c881403b4 *man/sigma2decomp.Rd 5a37417ceffee43ee448dbbf3d7259bb *man/sim.Rd a28f1d4193a31f36ba654fcf05836c22 *man/simE.Rd 77fd1a835bb490319a118d67fe022de4 *man/summary.Mclust.Rd 674c885bc9505c6906125ec08141e60a *man/summary.MclustBootstrap.Rd 4a8c675b46da86ca7075999c22e635f4 *man/summary.MclustDA.Rd 103fff818063adef9f3262398342fb0a *man/summary.MclustDR.Rd 986ecadd060b62810d73fa37ec72dc19 *man/summary.mclustBIC.Rd e9bb8170d08268e1369bb90f9f91e0cb *man/surfacePlot.Rd db0b51f96c35a65bea5efa26a19ee50d *man/thyroid.Rd 16f301b425aebb1ac6b6f0175427dabc *man/uncerPlot.Rd 6310a244a9397d432360fd51766bfe29 *man/unmap.Rd 784f834ca6c27d28b787bc0813995f62 *man/wdbc.Rd c1b81d23059192faf2b0fdb40b0bc0d2 *man/wreath.Rd 2a6f9e9e044a78154d3cfda5936d6f48 *src/Makevars d784799104d2c2350f9350b268761a2b *src/dmvnorm.f 394a717e565a6647678c758462d11eb2 *src/init.c d2b48fc7997f081f8a1704d8882a56a3 *src/mclust.f 3e070d829bea6cd31852ae8dc6e7f6bc *src/mclustaddson.f 7278b7f759063e3034995f25eff22def *vignettes/mclust.Rmd b336c79647f3679eee6aff1702d422f5 *vignettes/vignette.css mclust/build/0000755000176200001440000000000013510412700012651 5ustar liggesusersmclust/build/vignette.rds0000644000176200001440000000032113510412700015204 0ustar liggesusers‹‹àb```b`f’Ì@&³0r€˜‘…Hså&ç”—è妠Ɉ9*–f&g+”ä—)ä§)@T¢©â†êÏ(ÉÍA“†´_,‘&$u¬y‰¹©ÅhšÙ]R RóR@Âÿ°ëgünŸwjey~LŠ6¨·ÌœT˜½!™%ps€‹”É„î óQÜÏY”_®ó/(Ì€Ä @÷hrNb1ºG¹RKõÒŠ€úAîL¥ümµmclust/DESCRIPTION0000644000176200001440000000334613510710262013272 0ustar liggesusersPackage: mclust Version: 5.4.5 Date: 2019-07-07 Title: Gaussian Mixture Modelling for Model-Based Clustering, Classification, and Density Estimation Description: Gaussian finite mixture models fitted via EM algorithm for model-based clustering, classification, and density estimation, including Bayesian regularization, dimension reduction for visualisation, and resampling-based inference. 
Authors@R: c(person("Chris", "Fraley", role = "aut"), person("Adrian E.", "Raftery", role = "aut", comment = c(ORCID = "0000-0002-6589-301X")), person("Luca", "Scrucca", role = c("aut", "cre"), email = "luca.scrucca@unipg.it", comment = c(ORCID = "0000-0003-3826-0484")), person("Thomas Brendan", "Murphy", role = "ctb", comment = c(ORCID = "0000-0002-5668-7046")), person("Michael", "Fop", role = "ctb", comment = c(ORCID = "0000-0003-3936-2757"))) Depends: R (>= 3.0) Imports: stats, utils, graphics, grDevices Suggests: knitr (>= 1.12), rmarkdown (>= 0.9), mix (>= 1.0), geometry (>= 0.3-6), MASS License: GPL (>= 2) URL: https://mclust-org.github.io/mclust/ VignetteBuilder: knitr Repository: CRAN ByteCompile: true LazyData: yes RoxygenNote: 6.1.1 NeedsCompilation: yes Packaged: 2019-07-07 15:54:41 UTC; luca Author: Chris Fraley [aut], Adrian E. Raftery [aut] (), Luca Scrucca [aut, cre] (), Thomas Brendan Murphy [ctb] (), Michael Fop [ctb] () Maintainer: Luca Scrucca Date/Publication: 2019-07-08 18:51:30 UTC mclust/man/0000755000176200001440000000000013474243604012343 5ustar liggesusersmclust/man/em.Rd0000644000176200001440000001203413465001217013223 0ustar liggesusers\name{em} \alias{em} \title{EM algorithm starting with E-step for parameterized Gaussian mixture models} \description{ Implements the EM algorithm for parameterized Gaussian mixture models, starting with the expectation step. } \usage{ em(modelName, data, parameters, prior = NULL, control = emControl(), warn = NULL, \dots) } \arguments{ \item{modelName}{ A character string indicating the model. The help file for \code{\link{mclustModelNames}} describes the available models. } \item{data}{ A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables. } \item{parameters}{ A names list giving the parameters of the model. The components are as follows: \describe{ \item{\code{pro}}{ Mixing proportions for the components of the mixture. If the model includes a Poisson term for noise, there should be one more mixing proportion than the number of Gaussian components. } \item{\code{mean}}{ The mean for each component. If there is more than one component, this is a matrix whose kth column is the mean of the \emph{k}th component of the mixture model. } \item{\code{variance}}{ A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for \code{\link{mclustVariance}} for details. } \item{\code{Vinv}}{ An estimate of the reciprocal hypervolume of the data region. If set to NULL or a negative value, the default is determined by applying function \code{hypvol} to the data. Used only when \code{pro} includes an additional mixing proportion for a noise component. } } } \item{prior}{ Specification of a conjugate prior on the means and variances. The default assumes no prior. } \item{control}{ A list of control parameters for EM. The defaults are set by the call \code{emControl()}. } \item{warn}{ A logical value indicating whether or not a warning should be issued when computations fail. The default is \code{warn=FALSE}. } \item{\dots}{ Catches unused arguments in indirect or list calls via \code{do.call}. } } \value{ A list including the following components: \item{modelName}{ A character string identifying the model (same as the input argument). } \item{n}{ The number of observations in the data. } \item{d}{ The dimension of the data. 
} \item{G}{ The number of mixture components. } \item{z}{ A matrix whose \code{[i,k]}th entry is the conditional probability of the \emph{i}th observation belonging to the \emph{k}th component of the mixture. } \item{parameters}{ \describe{ \item{\code{pro}}{ A vector whose \emph{k}th component is the mixing proportion for the \emph{k}th component of the mixture model. If the model includes a Poisson term for noise, there should be one more mixing proportion than the number of Gaussian components. } \item{\code{mean}}{ The mean for each component. If there is more than one component, this is a matrix whose kth column is the mean of the \emph{k}th component of the mixture model. } \item{\code{variance}}{ A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for \code{\link{mclustVariance}} for details. } \item{\code{Vinv}}{ The estimate of the reciprocal hypervolume of the data region used in the computation when the input indicates the addition of a noise component to the model. } } } \item{loglik}{ The log likelihood for the data in the mixture model. } \item{control}{ The list of control parameters for EM used. } \item{prior}{ The specification of a conjugate prior on the means and variances used, \code{NULL} if no prior is used. } \item{Attributes:}{ \code{"info"} Information on the iteration.\cr \code{"WARNING"} An appropriate warning if problems are encountered in the computations. } } \seealso{ \code{\link{emE}}, \dots, \code{\link{emVVV}}, \code{\link{estep}}, \code{\link{me}}, \code{\link{mstep}}, \code{\link{mclust.options}}, \code{\link{do.call}} } \examples{ \dontrun{ msEst <- mstep(modelName = "EEE", data = iris[,-5], z = unmap(iris[,5])) names(msEst) em(modelName = msEst$modelName, data = iris[,-5], parameters = msEst$parameters) do.call("em", c(list(data = iris[,-5]), msEst)) ## alternative call } } \keyword{cluster} mclust/man/simE.Rd0000644000176200001440000000724213205036651013526 0ustar liggesusers\name{simE} \alias{simE} \alias{simV} \alias{simEII} \alias{simVII} \alias{simEEI} \alias{simVEI} \alias{simEVI} \alias{simVVI} \alias{simEEE} \alias{simEEV} \alias{simVEV} \alias{simVVV} \alias{simEVE} \alias{simEVV} \alias{simVEE} \alias{simVVE} \title{ Simulate from a Parameterized MVN Mixture Model } \description{ Simulate data from a parameterized MVN mixture model. } \usage{ simE(parameters, n, seed = NULL, \dots) simV(parameters, n, seed = NULL, \dots) simEII(parameters, n, seed = NULL, \dots) simVII(parameters, n, seed = NULL, \dots) simEEI(parameters, n, seed = NULL, \dots) simVEI(parameters, n, seed = NULL, \dots) simEVI(parameters, n, seed = NULL, \dots) simVVI(parameters, n, seed = NULL, \dots) simEEE(parameters, n, seed = NULL, \dots) simEEV(parameters, n, seed = NULL, \dots) simVEV(parameters, n, seed = NULL, \dots) simVVV(parameters, n, seed = NULL, \dots) simEVE(parameters, n, seed = NULL, \dots) simEVV(parameters, n, seed = NULL, \dots) simVEE(parameters, n, seed = NULL, \dots) simVVE(parameters, n, seed = NULL, \dots) } \arguments{ \item{parameters}{ A list with the following components: \describe{ \item{\code{pro}}{ A vector whose \emph{k}th component is the mixing proportion for the \emph{k}th component of the mixture model. If missing, equal proportions are assumed. } \item{\code{mean}}{ The mean for each component. If there is more than one component, this is a matrix whose kth column is the mean of the \emph{k}th component of the mixture model. 
} \item{\code{variance}}{ A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for \code{\link{mclustVariance}} for details. } } } \item{n}{ An integer specifying the number of data points to be simulated. } \item{seed}{ An optional integer argument to \code{set.seed} for reproducible random class assignment. By default the current seed will be used. Reproducibility can also be achieved by calling \code{set.seed} before calling \code{sim}. } \item{\dots }{ Catches unused arguments in indirect or list calls via \code{do.call}. } } \value{ A matrix in which first column is the classification and the remaining columns are the \code{n} observations simulated from the specified MVN mixture model. \item{Attributes:}{ \code{"modelName"} A character string indicating the variance model used for the simulation. } } \details{ This function can be used with an indirect or list call using \code{do.call}, allowing the output of e.g. \code{mstep}, \code{em} \code{me}, \code{Mclust}, to be passed directly without the need to specify individual parameters as arguments. } \seealso{ \code{\link{sim}}, \code{\link{Mclust}}, \code{\link{mstepE}}, \code{\link{mclustVariance}}. } \examples{ \dontrun{ d <- 2 G <- 2 scale <- 1 shape <- c(1, 9) O1 <- diag(2) O2 <- diag(2)[,c(2,1)] O <- array(cbind(O1,O2), c(2, 2, 2)) O variance <- list(d= d, G = G, scale = scale, shape = shape, orientation = O) mu <- matrix(0, d, G) ## center at the origin simdat <- simEEV( n = 200, parameters = list(pro=c(1,1),mean=mu,variance=variance), seed = NULL) cl <- simdat[,1] sigma <- array(apply(O, 3, function(x,y) crossprod(x*y), y = sqrt(scale*shape)), c(2,2,2)) paramList <- list(mu = mu, sigma = sigma) coordProj( simdat, paramList = paramList, classification = cl) } } \keyword{cluster} % docclass is function % Converted by Sd2Rd version 1.21. mclust/man/acidity.Rd0000644000176200001440000000221512502264254014253 0ustar liggesusers\name{acidity} \alias{acidity} \docType{data} \title{Acidity data} \description{ Acidity index measured in a sample of 155 lakes in the Northeastern United States. The data are on the log scale, as analysed by Crawford et al. (1992, 1994). The data were also used to fit mixture of gaussian distributions by Richardson and Green (1997), and by McLachlan and Peel (2000, Sec. 6.6.2). } \usage{data(acidity)} \source{\url{http://www.stats.bris.ac.uk/~peter/mixdata}} \references{ Crawford, S. L. (1994) An application of the Laplace method to finite mixture distribution. \emph{Journal of the American Statistical Association}, 89, 259--267. Crawford, S. L., DeGroot, M. H., Kadane, J. B., and Small, M. J. (1994) Modeling lake chemistry distributions: Approximate Bayesian methods for estimating a finite mixture model. \emph{Technometrics}, 34, 441--453. McLachlan, G. and Peel, D. (2000) \emph{Finite Mixture Models}. Wiley, New York. Richardson, S. and Green, P. J. (1997) On Bayesian analysis of mixtures with unknown number of components (with discussion). \emph{Journal of the Royal Statistical Society, Series B}, 59, 731--792. } \keyword{datasets} mclust/man/surfacePlot.Rd0000644000176200001440000001326113473211462015120 0ustar liggesusers\name{surfacePlot} \alias{surfacePlot} \title{Density or uncertainty surface for bivariate mixtures} \description{ Plots a density or uncertainty surface given bivariate data and parameters of a MVN mixture model for the data. 
}
\usage{
surfacePlot(data, parameters,
            what = c("density", "uncertainty"),
            type = c("contour", "hdr", "image", "persp"),
            transformation = c("none", "log", "sqrt"),
            grid = 200, nlevels = 11, levels = NULL,
            prob = c(0.25, 0.5, 0.75),
            col = gray(0.7),
            col.palette = function(...) hcl.colors(..., "blues", rev = TRUE),
            hdr.palette = blue2grey.colors,
            xlim = NULL, ylim = NULL, xlab = NULL, ylab = NULL,
            main = FALSE, scale = FALSE, swapAxes = FALSE,
            verbose = FALSE, \dots)
}
\arguments{
\item{data}{ A matrix or data frame of bivariate observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables. }
\item{parameters}{ A named list giving the parameters of an \emph{MCLUST} model, used to produce superimposing ellipses on the plot. The relevant components are as follows:
\describe{
\item{\code{mean}}{ The mean for each component. If there is more than one component, this is a matrix whose \emph{k}th column is the mean of the \emph{k}th component of the mixture model. }
\item{\code{variance}}{ A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for \code{\link{mclustVariance}} for details. }
}
}
\item{what}{ Choose from one of the following options indicating what to plot: \code{"density"} (default) or \code{"uncertainty"}. }
\item{type}{ Choose from one of the following four options indicating the plot type: \code{"contour"} (default), \code{"hdr"}, \code{"image"}, or \code{"persp"}. }
\item{transformation}{ Choose from one of the following three options indicating a transformation to be applied before plotting: \code{"none"} (default), \code{"log"}, or \code{"sqrt"}. }
\item{grid}{ The number of grid points (evenly spaced on each axis). The mixture density and uncertainty are computed at \code{grid x grid} points to produce the surface plot. Default: \code{200}. }
\item{nlevels}{ The number of levels to use for a contour plot. Default: \code{11}. }
\item{levels}{ A vector of levels at which to draw the lines in a contour plot. }
\item{prob}{ A vector of probability levels for computing HDR. Only used if \code{type = "hdr"}; it supersedes the \code{nlevels} and \code{levels} arguments. }
\item{col}{ A string specifying the colour to be used for \code{type = "contour"} and \code{type = "persp"} plots. }
\item{col.palette}{ A function which defines a palette of colours to be used for \code{type = "image"} plots. }
\item{hdr.palette}{ A function which defines a palette of colours to be used for \code{type = "hdr"} plots. }
\item{xlim, ylim}{ Optional arguments specifying bounds for the abscissa and the ordinate of the plot. These may be useful when comparing plots. }
\item{xlab, ylab}{ Optional arguments specifying labels for the x-axis and y-axis. }
\item{main}{ A logical variable or \code{NULL} indicating whether or not to add a title to the plot identifying the dimensions used. }
\item{scale}{ A logical variable indicating whether or not the two dimensions should be plotted on the same scale, and thus preserve the shape of the distribution. The default is not to scale. }
\item{swapAxes}{ A logical variable indicating whether or not the axes should be swapped for the plot. }
\item{verbose}{ A logical variable indicating whether or not to print a message that the function is computing values at the grid points, which typically takes some time to complete. }
\item{\dots}{ Other graphics parameters.
} }
\value{
A plot showing (a transformation of) the density or uncertainty for the given mixture model and data.

The function also returns an invisible list with components \code{x}, \code{y}, and \code{z} in which \code{x} and \code{y} are the values used to define the grid and \code{z} is the transformed density or uncertainty at the grid points.
}
\details{
For an image plot, a color scheme may need to be selected on the display device in order to view the plot.
}
\references{
C. Fraley and A. E. Raftery (2002). Model-based clustering, discriminant analysis, and density estimation. \emph{Journal of the American Statistical Association}, 97:611-631.

C. Fraley, A. E. Raftery, T. B. Murphy and L. Scrucca (2012). mclust Version 4 for R: Normal Mixture Modeling for Model-Based Clustering, Classification, and Density Estimation. Technical Report No. 597, Department of Statistics, University of Washington.
}
\seealso{
\code{\link{mclust2Dplot}}
}
\examples{
\dontrun{
faithfulModel <- Mclust(faithful)

surfacePlot(faithful, parameters = faithfulModel$parameters,
            type = "contour", what = "density", transformation = "none",
            drawlabels = FALSE)

surfacePlot(faithful, parameters = faithfulModel$parameters,
            type = "persp", what = "density", transformation = "log")

surfacePlot(faithful, parameters = faithfulModel$parameters,
            type = "contour", what = "uncertainty", transformation = "log")
}
}
\keyword{cluster}
mclust/man/me.Rd0000644000176200001440000001073513465001556013235 0ustar liggesusers\name{me}
\alias{me}
\title{EM algorithm starting with M-step for parameterized MVN mixture models}
\description{
Implements the EM algorithm for MVN mixture models parameterized by eigenvalue decomposition, starting with the maximization step.
}
\usage{
me(modelName, data, z, prior = NULL, control = emControl(), Vinv = NULL, warn = NULL, \dots)
}
\arguments{
\item{modelName}{ A character string indicating the model. The help file for \code{\link{mclustModelNames}} describes the available models. }
\item{data}{ A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables. }
\item{z}{ A matrix whose \code{[i,k]}th entry is an initial estimate of the conditional probability of the \emph{i}th observation belonging to the \emph{k}th component of the mixture. }
\item{prior}{ Specification of a conjugate prior on the means and variances. See the help file for \code{priorControl} for further information. The default assumes no prior. }
\item{control}{ A list of control parameters for EM. The defaults are set by the call \code{emControl()}. }
\item{Vinv}{ If the model is to include a noise term, \code{Vinv} is an estimate of the reciprocal hypervolume of the data region. If set to a negative value or 0, the model will include a noise term with the reciprocal hypervolume estimated by the function \code{hypvol}. The default is not to assume a noise term in the model through the setting \code{Vinv=NULL}. }
\item{warn}{ A logical value indicating whether or not certain warnings (usually related to singularity) should be issued when the estimation fails. The default is set in \code{mclust.options("warn")}. }
\item{\dots}{ Catches unused arguments in indirect or list calls via \code{do.call}. }
}
\value{
A list including the following components:
\item{modelName}{ A character string identifying the model (same as the input argument). }
\item{n}{ The number of observations in the data.
} \item{d}{ The dimension of the data. } \item{G}{ The number of mixture components. } \item{z}{ A matrix whose \code{[i,k]}th entry is the conditional probability of the \emph{i}th observation belonging to the \emph{k}th component of the mixture. } \item{parameters}{ \describe{ \item{\code{pro}}{ A vector whose \emph{k}th component is the mixing proportion for the \emph{k}th component of the mixture model. If the model includes a Poisson term for noise, there should be one more mixing proportion than the number of Gaussian components. } \item{\code{mean}}{ The mean for each component. If there is more than one component, this is a matrix whose kth column is the mean of the \emph{k}th component of the mixture model. } \item{\code{variance}}{ A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for \code{\link{mclustVariance}} for details. } \item{\code{Vinv}}{ The estimate of the reciprocal hypervolume of the data region used in the computation when the input indicates the addition of a noise component to the model. } } } \item{loglik}{ The log likelihood for the data in the mixture model. } \item{control}{ The list of control parameters for EM used. } \item{prior}{ The specification of a conjugate prior on the means and variances used, \code{NULL} if no prior is used. } \item{Attributes:}{ \code{"info"} Information on the iteration.\cr \code{"WARNING"} An appropriate warning if problems are encountered in the computations. } } \seealso{ \code{\link{meE}},..., \code{\link{meVVV}}, \code{\link{em}}, \code{\link{mstep}}, \code{\link{estep}}, \code{\link{priorControl}}, \code{\link{mclustModelNames}}, \code{\link{mclustVariance}}, \code{\link{mclust.options}} } \examples{ \dontrun{ me(modelName = "VVV", data = iris[,-5], z = unmap(iris[,5]))} } \keyword{cluster} mclust/man/BrierScore.Rd0000644000176200001440000001152413467346535014705 0ustar liggesusers\name{BrierScore} \alias{BrierScore} % R CMD Rd2pdf BrierScore.Rd \title{Brier score to assess the accuracy of probabilistic predictions} \description{ The Brier score is a proper score function that measures the accuracy of probabilistic predictions.} \usage{ BrierScore(z, class) } \arguments{ \item{z}{ a matrix containing the predicted probabilities of each observation to be classified in one of the classes. Thus, the number of rows must match the length of \code{class}, and the number of columns the number of known classes. } \item{class}{ a numeric, character vector or factor containing the known class labels for each observation. If \code{class} is a factor, the number of classes is \code{nlevels(class)} with classes \code{levels(class)}. If \code{class} is a numeric or character vector, the number of classes is equal to the number of classes obtained via \code{unique(class)}. } } \details{ The Brier Score is the mean square difference between the true classes and the predicted probabilities. This function implements the original multi-class definition by Brier (1950), normalized to \eqn{[0,1]} as in Kruppa et al (2014). The formula is the following: \deqn{ BS = \frac{1}{2n} \sum_{i=1}^n \sum_{k=1}^K (C_{ik} - p_{ik})^2 } where \eqn{n} is the number of observations, \eqn{K} the number of classes, \eqn{C_{ik} = \{0,1\}} the indicator of class \eqn{k} for observation \eqn{i}, and \eqn{p_{ik}} is the predicted probability of observation \eqn{i} to belong to class \eqn{k}. The above formulation is applicable to multi-class predictions, including the binary case. 
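For example, as a small worked case of the formula above: with a single observation (\eqn{n = 1}) known to belong to class 1 of \eqn{K = 2} classes, and predicted probabilities \eqn{p_{11} = 0.8} and \eqn{p_{12} = 0.2}, the score is
\deqn{ BS = \frac{1}{2 \cdot 1} \left[ (1 - 0.8)^2 + (0 - 0.2)^2 \right] = 0.04. }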
A small value of the Brier Score indicates high prediction accuracy. The Brier Score is a strictly proper score (Gneiting and Raftery, 2007), which means that it takes its minimal value only when the predicted probabilities match the empirical probabilities. } \references{ Brier, G.W. (1950) Verification of forecasts expressed in terms of probability. \emph{Monthly Weather Review}, 78 (1): 1-3. Gneiting, G. and Raftery, A. E. (2007) Strictly proper scoring rules, prediction, and estimation. \emph{Journal of the American Statistical Association} 102 (477): 359-378. Kruppa, J., Liu, Y., Diener, H.-C., Holste, T., Weimar, C., Koonig, I. R., and Ziegler, A. (2014) Probability estimation with machine learning methods for dichotomous and multicategory outcome: Applications. \emph{Biometrical Journal}, 56 (4): 564-583. } \seealso{\code{\link{cvMclustDA}}} \examples{ # multi-class case class <- factor(c(5,5,5,2,5,3,1,2,1,1), levels = 1:5) probs <- matrix(c(0.15, 0.01, 0.08, 0.23, 0.01, 0.23, 0.59, 0.02, 0.38, 0.45, 0.36, 0.05, 0.30, 0.46, 0.15, 0.13, 0.06, 0.19, 0.27, 0.17, 0.40, 0.34, 0.18, 0.04, 0.47, 0.34, 0.32, 0.01, 0.03, 0.11, 0.04, 0.04, 0.09, 0.05, 0.28, 0.27, 0.02, 0.03, 0.12, 0.25, 0.05, 0.56, 0.35, 0.22, 0.09, 0.03, 0.01, 0.75, 0.20, 0.02), nrow = 10, ncol = 5) cbind(class, probs, map = map(probs)) BrierScore(probs, class) # two-class case class <- factor(c(1,1,1,2,2,1,1,2,1,1), levels = 1:2) probs <- matrix(c(0.91, 0.4, 0.56, 0.27, 0.37, 0.7, 0.97, 0.22, 0.68, 0.43, 0.09, 0.6, 0.44, 0.73, 0.63, 0.3, 0.03, 0.78, 0.32, 0.57), nrow = 10, ncol = 2) cbind(class, probs, map = map(probs)) BrierScore(probs, class) # two-class case when predicted probabilities are constrained to be equal to # 0 or 1, then the (normalized) Brier Score is equal to the classification # error rate probs <- ifelse(probs > 0.5, 1, 0) cbind(class, probs, map = map(probs)) BrierScore(probs, class) classError(map(probs), class)$errorRate # plot Brier score for predicted probabilities in range [0,1] class <- factor(rep(1, each = 100), levels = 0:1) prob <- seq(0, 1, by = 0.01) brier <- sapply(prob, function(p) { z <- matrix(c(1-p,p), nrow = length(class), ncol = 2, byrow = TRUE) BrierScore(z, class) }) plot(prob, brier, type = "l", main = "Scoring all one class", xlab = "Predicted probability", ylab = "Brier score") # brier score for predicting balanced data with constant prob class <- factor(rep(c(1,0), each = 50), levels = 0:1) prob <- seq(0, 1, by = 0.01) brier <- sapply(prob, function(p) { z <- matrix(c(1-p,p), nrow = length(class), ncol = 2, byrow = TRUE) BrierScore(z, class) }) plot(prob, brier, type = "l", main = "Scoring balanced classes", xlab = "Predicted probability", ylab = "Brier score") # brier score for predicting unbalanced data with constant prob class <- factor(rep(c(0,1), times = c(90,10)), levels = 0:1) prob <- seq(0, 1, by = 0.01) brier <- sapply(prob, function(p) { z <- matrix(c(1-p,p), nrow = length(class), ncol = 2, byrow = TRUE) BrierScore(z, class) }) plot(prob, brier, type = "l", main = "Scoring unbalanced classes", xlab = "Predicted probability", ylab = "Brier score") } \keyword{classif} mclust/man/logLik.MclustDA.Rd0000644000176200001440000000200713175052652015524 0ustar liggesusers\name{logLik.MclustDA} \alias{logLik.MclustDA} \title{Log-Likelihood of a \code{MclustDA} object} \description{ Returns the log-likelihood for a \code{MclustDA} object.} \usage{ \method{logLik}{MclustDA}(object, data, \dots) } \arguments{ \item{object}{an object of class \code{'MclustDA'} resulting from a call to 
\code{\link{MclustDA}}.}
\item{data}{the data for which the log-likelihood must be computed. If missing, the observed data from the \code{'MclustDA'} object is used.}
\item{\dots}{further arguments passed to or from other methods.}
}
% \details{}
\value{Returns an object of class \code{'logLik'} with an element providing the maximized log-likelihood, and further arguments giving the number of (estimated) parameters in the model (\code{"df"}) and the sample size (\code{"nobs"}).}
\author{Luca Scrucca}
% \note{}
\seealso{\code{\link{MclustDA}}.}
\examples{
\dontrun{
irisMclustDA <- MclustDA(iris[,1:4], iris$Species)
summary(irisMclustDA)
logLik(irisMclustDA)
}
}
\keyword{multivariate}
mclust/man/figures/0000755000176200001440000000000013510412701013772 5ustar liggesusersmclust/man/figures/logo.png0000644000176200001440000020501213376734350015461 0ustar liggesusers[binary PNG image data for the package logo omitted]
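A minimal companion sketch for the logLik.MclustDA method documented above. It assumes only the documented behaviour: the returned 'logLik' object carries "df" and "nobs" attributes, so a BIC-type value (-2*logLik + df*log(nobs), the convention used by stats::BIC) can be reconstructed by hand; since the default BIC method dispatches through logLik(), the last two lines should agree. The model irisMclustDA is refitted here exactly as in the example above.

## Illustrative sketch: combine the documented "df" and "nobs" attributes
## of the 'logLik' object into a BIC-type criterion and compare it with
## the value returned by the default stats::BIC method.
library(mclust)
irisMclustDA <- MclustDA(iris[, 1:4], iris$Species)
ll <- logLik(irisMclustDA)
as.numeric(ll)                       # maximized log-likelihood
attr(ll, "df")                       # number of estimated parameters
attr(ll, "nobs")                     # sample size
-2 * as.numeric(ll) + attr(ll, "df") * log(attr(ll, "nobs"))
BIC(irisMclustDA)                    # stats::BIC via logLik(); should match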
"ÃbŒàjL‡,…‰â@Ñ )ûaíø(Q"Y('@é§|±6ì—Þ—7æâ †I§áÌ Nï\Žš?x^€ÎM*¤.cݘŒÉŽêˆbÒ¨V5ȸ±P~.LÀÝþÜrf}“‡Oç k ×Aì•eóØÌéC¾&”1 Ø0Ћ«Š!Ð9c~ªŒäGÐEŒD€j˜yhŸ–8uÐ("&à)%¥føhùø_Ó;ãôÂo¥ÅæñkߺÙS}áv˜à_P¾ÖÖ¸¼ñ8.o|,súžEœÞÙ%Þ®…Zhq]«RÒõ6a:.riiµ‚cÕ•ý:Õwâh>„ è—€ÙLé÷bzç­p6W\zgåe“yÜêYCÿÔoïY3_ À†/œ%ê˜öÀ¼¦Rhð›h\'JEJ|û•MaHï–œYT–£KyJzç-ðÕß{„¦wÆè‰‹Ì¥ÅO­™5ü.;ÎJ鞺?EúS0-cQ5œ.xC¸>ŒSÂÒ!÷h[Æ`:äÄØýuš5bn"é§~µ2wÙ.…_Ê>zâÍì%Ä-‹©!ËÒ-6 t{jô¨Ø$C—Œ¦CñÓ!KÂb7« ãnêmÔÐc§Y'&àkvSœÈÖfoÄQy:'~&L( ò{løý)ÓÁÔŒ…Ý)Žt#p…A( ïß®ÇÈ…8ò J,Ëa>K Ôd†Ï1½ó‚ß¶A®ÀôÎèð—,Iã0þÁ:Ÿ…Ê{Œÿ{ µo6Ôéþõ‚à ¯ ÁÝ¢zd””œÃú¶‚ˆ0a3¢Ôc9LÀëÎç)Á‘¾Ë܇+Ũƒ¾HAzòó'®~ï>¡ób4d)z!À†^΄Îôètÿ;A!KÓ¨Z¸(õº´¨…é;@Œ^È… 0ûö;ӿ΂wŸ°_QÓ^9Çl–žË;¾kæ–%“Š5Ê•‚qšµu2eìÂÛ ²4‡÷“µi»6å3È@ÇÂÎÍjÚ®Ä{˜°J`YV6¼óý&8zF\zgœ^Ø-›qyãÌ¡ß[m”7,6 öÔ_ÚñÔ1 :$ò#.¿t¯s[¢Ãƒáž>­à–+8²sù(&PF ¸Ô/ß .ÛÅ&aXp¦â4Æ£ƒâVaBYO`ÃÀ§OŸåSFÏ«e0?ó#E¥C6à•u-:Þu[¨ÆéÅœ(–ÂÀé …0ëÛ,øu½Ð0¥æéy%†6Ít–A66 øü·8)$ªV“ 4žC£ ZŠNM•iƒ†µbD‰d9L€ T!°íÐô?Ø[œ©²Ç¥¯g̲ù©ÌcK߃%KÄ K¸¤ìilxš¸NÚÃå‡×þ·ü°±(•jÇG¨kÚBvÂ\D©Ær˜€ßøqí~eÃ)IWäÍ&Œ_3sðoâd²$_!À†¯œ)Az¦¤ÐÚh~ ãô$Â)r¯æpgæœYT–Ã4(,.UÒ;/Y¹Khzgô?ø¼T.y|Ýô{÷hP‡«ú86 |üªU¿õˆ÷ªGE†O”dƒFQíqŽêõ»¬Œº¶ÄWãtÈŽXñ~&ànÇÎæÁ Lï¼róqMÉPd–åÿ;Ÿ_ôÚŽ¹#Ä…e§!KL€ Á@u'Ó!wiÛø> /¢nÕEé׺~¼Ƹy2§CÅ”å0Q²öžTÒ;ï>r^”H\¸ ÃKaôDLR¸ø-6 üöÔ¤¥/ì Ãt.l-ª› 1˜ùê6ПÓ!‹BÊr˜€[Pz祫öÂÜŸ·Àùæ`xéȱ“S8½³§Ï¨ûÚcÃÀ}l=)YÂå‡C$Éð:žPa1‡›ÖŽ…ŒÛCûF žì ·Å˜€ ¬Þ~LqP<ˆ†‚¸"YzhÕôA_ˆ“É’¼E€ o‘Ônç1‹º ÒT<‘‰„ØÈPé¯ÃÈ… aÈ… 0¿"@é?ÃôÎ ÅôÎ…%Âú†#+M`·vú=ë… eA'À¿úG.¦Á”Ñsë‚B^–@,F"¥C¾éòÆ0¼_kˆätÈ¢°²& [ç0½óLÎôÚýBÓ;ãeN~±ù¹o™R·ýM16 |ìŒ&|3<¹V͇q^ïIt.6éŸÖœÒ!·‡äa‘‘}Œ,«Ë—Àž#ç`:Æ?X¿ç¤0蜈k%åI¥Ey3×Íy@ܰ„0 Y-lØ"£Ãíi‹nÃ¥‡“1¯A=QêÕOŒVò¤ aÀ… 0À&ð{Ö!xç»Mpìl¾0h ìüoyㄲ ·`ÃÀ­xÅ𲠣ü®# Ó!íÝnÅtÈ([”X–Ø€ ôÎ.Ûá«°D\%4~’L¦‡VͺÍÇù½úlèø§|·&DD`:dy$ú¹{+éS*AŠbÐÉ `LÀSç `öw…§wÆP ÓŠsŠ_ØðÁ°sÖÚåmÞ'À†÷ÏÁ%P:äÈšMÓ%Iž„~Õ.©à䆎pùahœÄéDȇ1€#°õài˜öUPšgQã)Ÿ’dù©UǾžËéEQ'‡ q,…HJK_p oá‰i*D IªŽé¯m =9²(¤,‡ \†?®=ïþ¸N LïŒÂF9ó/, ( :ï,:9AèXØ UÁ¼R?Q*…‡áî^-àÎžÍ $HXBEQê±&À|Œ@¥wþeÆ@›Þ1,1›MgκÏÇø¥ºlxù´¶}pq\D< ‚1¨Š°$}1òƒ8J_-ÜË=äæ™ð7GÏ”¥wþs‹Øôθ û\Óù×¶ÌJÏõ7f¾Ô6 ¼u¶4¦Öºœ _B£ ^”­êU‡±7u€–u…eX¥ËaLÀϬßsÓ;gÁÞ£Ó;ƒ|Ô,Ë®™>äCÄ…³ \_\zg4 2qafo\eWÞ)œßA„#µ-0åÁÅ Á@‰Žnµ]KÛJ‡<°{SÒ§%„s:dmð¸6`Âä+é¿þg¯°ôÎèó(ãê¬E梧ÖÏ!pÞBX·ýR8­­GÏŒŠ4Ä+®S2œ4Kæ§2§í™ 0É,N°KbÃ@åùM»p€A6LEƒ ™ÊCV«`¢£«Ú×uX—+0&Àü¥wþ~Í~xïÇ-p&G\#”›eó¸5Ó‡þáï Eô ;™×"X ž‚Ž…TU½—3b:äæ˜¹9„âg.L€ 0&p‘@~¦wþu+|¶r7”`,aE–?…’’ÇW½=l¿0™~(ˆ 'µÃ½óbC¢CžÅc±Š°tÈ}:ÖÅtÈí F §C¶ž73&ÀGNç*ÑÿÚzTY.Dÿ°× Ι_߸phž8Áþ#‰ ƒªçÓ!§%Ý47¿„pjTÝíì÷uã`¦CnUOX†egUáã˜`>EàßÝ'ÿƒ½Ç.Ô[>l6Ãc™3„B…]¨žE±aP~—1 {ʘ¡´«°Ù¥ñ”y@й>ÎF0n—`òÁL€ ,Jïüõé/LïŒþ«L`·vú=™ ·JÇùN…@RÇ,hh0_ëðqúk°ñb:äˆPa3NëÃ&èá¹ó8œÄ@2çóŠæniÎÖ€sd˜¦;:"ªG‡AƒšÕ OÇz ‰{íSrÐ(xï§-ðÍ*±é1ªÏ¥…æ§×Í"pÞ§Ж+ІA»! "Ãc O L‡,…•SqñíËÒ!'UçtÈ.¢äÃ]$@ëÃ?Y±Ë¡”¸¨Pøê¹ëÖã L@/ö¿Ó—fÁZ4|…(ä‹§àô[»§-&×Ç Îãc}&u¥”Œ…ƒ"b;qxÿQFA£¤˜òÀ•ðÒ½—>xU°ÊL€ ø åš|_wxéž®\#JŒÞDáÈÁ«ñRüÖ´±‹o#Ô÷¤Üw—ôi²‘ü¤4Q§+&"†õo ”™2!raL€ 0ϸ¢MHÜ2–ôÎ4]æjÁ_ñF(ã«´ŒÅ¿—Êòøu3orU¦/0†AÇ1ï×5„¾Œi:‡âIr÷&#àÆ®`8Ñá!¾tÞYW&À˜€ß2(±a®N©ï|· ~\w@Hzgô¿*¤õi‹Þ–ͦç2gÞsÚo ÙéˆßGסtÈͯø¨Q Z‚£)x¢…1ò+Ã.‡þp";ïò.58ÿºåÀ‡J„£#"ÜâÂ|™]Ç4‚pyË$ „ç \rN•$Ã}µ»Ü’8¼Ý¿°¹À¨K®«(Z‚_©é‹nŽ 3.Å“zÚBéi.ë©;S`Žp²#Ñ—#ËM€ ÑDYž/ˆ¯פ6„º Q°ýÐÈ+t}z)Ãñ>rurÍêk§Ü´ûpæ—{|…3:úåTBÚƒóÛAóH=bí˜È° Ò»% ìÞEªÏ¦52¼ 0& O´·;Ž ,þ};|üÇN TÏ®©%.oÿ§–BqÉ#«ßæxÙëzT‚aujl§1Ì~XÃõ<Þ¶ïGëNÈhÅ$ºº3¥CnqQÂV4ÚéïbâðrEq,Y’o8±rN ÈÞˆÏF™î†ï#ê¤Ýræpf»,€å>^ÙçG 0jaƒd| ¢.¨š±0ŠÒ!wàtÈ¢˜²ïàïp¤VéI|æ0HkQËgºmIïüî›ál®À8F2¬§ôΙӇ¬ôVõÙÇàÔôùÍ$ƒñM´Ö®µÒ/§6Q ä»Ð3›R"s:d§òAL€ ÊYððœ•J¶X_2 ð¾×¢sb/Aøà×mðùŸ» Ô$àA_‚Ž0¬ÀàH—š_7gèA_¼|Î0@?‚˜ ÐÈgöX<¹Á¢ ÷ÆÑ%HÄÑ.L€ 0&`Ÿ9ò=1÷/8x2ÇgÓÈGàôÂèëÚ)Áéfbxå¿·‰I“€Cñw‡oì2váë%…ù¯¯›ó@¾}šúÚ+d.Þ3]šdH»pdphÔ.4e4OŽƒ™é½`â 46 ÈîHK_p§ ™ã#)鋯4e cÜ} „”êÑ¡0Ó!_“Ò€Ó! 
!ÊB˜S¾øþÚ*æéZOÌ(pÝûõ¯0½ó?o…œ‚—ÕÇØd0?ê’±(Ã,£ÿÁŒ¡k]êfº6 :¦/®j_GcàvAQŒÒ!ßÖ½ Ř4ŒÄ… 0&ÀÔ˜‡7ÌoVïS€Õ¤85·]Ñú]VÞÿq3,ÅôÎ8@âz‘¤Ëq}&Æ?˜g6•<½fÖ°c® u]èGÃ/K ?&*ó!áëÖ* Ò¯ouDeârÏ9a©L€ 0]ø ‚~ÙªKÝD+U “ã=tËepÓå•ôÎëvp¹‰²üÒpcPðÀÔŒ…/äÛ3uË’IÅ. ,@o>Rê˜Ew‡ER:䉢Œ‚†8gôæýÝáåaÝØ(|±8&ÀƒÀ_[ÀäÏ×Fg+ô²a­¼\©Ä´©Ya+¥hƒdx=ºVÓ­]Æ,¼ÁIî8V7#)£¥Œ†1†®¢:Jß°~­”EœYU–Ø@ ؼÿ4LZ¸J̺£ÐÊ]0½ó§+vÂB ±\ ½3N“7Éðu—ŒÅ¿šÌ%ã×̼w‹ðxÝ0ètÿ¤ Péeô!¸§l˜Åu,dÜÐ¥‘’è(.L€ 0&àƒ'ràÉyAq©_'T‡Ò;ºª À0ùs~ÀôÎk¨:Îa% ú ÁYh ÌÎÉ/xnËû#§Du(Ôù ^3 šdL ­ñãQõgÐjŠr¾ •¤p—7vKN*7Àߘ`Bàô…xäÝ•@Œ¸\$_- ž¼#nFÿƒi_g‰Y¶Y–ßgLtdøÝicM\½q÷;°|’€´õVûÉ+>©cÞ/ÅoEƒàUQFAœûy óLƹ 6 Ôž~®Ç˜°N ¯°}ïO8Ž!¹X'Тnu˜5¦Ù^ßË(½smX¼l|¼|‡iü‰h FÃϸ¼ñë“üÈ¿³†ìö˜× óü­ QØ1! èTÓ!·…êÑ‘™~(Nž/€s˜ô£ ¸l„‰n*Ô²Viý­ÚBó†+7† {Ob¦±ópÆJ¦±È° ¨²SӚХe-!Y&ÏçÁ›ÃÚÇaGöYå‰ÿ*•ü1¤>ÑèOËzÕ±ýDh‰Ö9®V©TOo_("ÜNìÓ¦ý§`çás@œÉƒóv†bÉ'&&2jàðd­ê‘P/!'Å(ýNÂï¾^r°ïgr ãnzTèü’ñ ±ØwOœW3žJ˜s¯¿||.Áÿ'ºœÂ‚ƒ :<âðÿˆtòT¡äCöœ„‡ÏÂ~LBt¯úÿ¶5—OÌâ«…]õ£¡ ^#Íð³zÍ‹|(¢C/}œ ëQ7.ê á6¢k¸s0Pzçå³Õl§&þoÜ$]š±xÊù¼Â—vÌ‘c§º]îý•ÅtÈ©íšÄ—æ¿ú{³"œ>®–CµN7ƺL€þÿ¦}½¾ôµƒS}ZF Ö ¿Ï+Ÿ¬q¹ÏZC¾SžúÿÕªŸ¨úä»2ÃñS|é˜ÞyÕv1iÐÿà.ƒ1äF¼ÿ¾vøØ‰7²—L(¡³zo5›­a:ä1‹Gà‚Â?Š–Œ£ YX˜1º'LÜ…›ì/ÝqüìE üÍ/ÝgXZžùMÐJÕBOÔ´þÙ]F¥=j›|/µmY«G+/¥$£cñk#®€7F^õÑTDÁûn¾þ—œ”¸½Ë˜Ewˆé’a6zÁ]2š¬1à=eîC€Fqè¥üØÀNðÎØÞжa K„ehn Î5~³Ê=#‰Òÿ4=iT4²Oå*‘ÒhÞÑ…úI+,ÜYhwÔ´ßa/z{«Ðê.¾Aà“?vºm¤N ZÅ£¦ÐJˆÇ0Vð¾j¬R‡V…PÆÛ@+©è2wB_sC{ˆB‡W1Eª‡ë ?Fÿƒ•Òj›Gª¢€Sg¤Óý êaãIAÆ•8Rà’}ȳöŽÍ`ñãàZ\îAs3\´ ''z˜ýÝFí;yÄiœÂXðëVåhj§§Š†‚“b56õ« Ê4ЦƒTV&£`Â;+ì.9T)Êéj5c#fY®Ótràþã”p¹ÞT'¥YM‡Í“3êÑNo•j¸ºŸv½Õ¼Wۥ隆ýïy4Å ê–‡4¯Àûé ¯ün»Q ÔYˆUHhò1 tÈÁaðÂÂ<]ÞÓ!£å”Ì髜í_i½üëKÖy|X†LïìÙ>B¯fwyèÛ£Aë¿¿Y½WÉ£n¯žÖ}‡qôƒž¨Ôx×Û“MËIkàô¸èPÅWë(Kç 19SRl ZWo«tnæÔÿ·-q¼Ýf¡ï3f´˜–b7ÂøäSEKré‰dåãª$ºSL 2˦îúª\ÎM¿»ô¤j¯ü³í¨tÉ^ÚGÁ¹.oUÛQµKöóôØE$Ôì­Q=0ø[6Ìþv“ðêâ^ç>áh - x#*©éý©é‹&dÎü­I ´6:I²<m»©¨¦Í© ï×Ó!7ò{oT5<ÔÔQ;ÚfÏ( ¹º(¹xŽ…Q¥$SkǪj”nNÓqUÈè¿c¸d×ç {àÅ#·uR1‘†Q)Ep{ÕŽ–PÇÈ Q„a@~4¡¥Ð2Ñ ·\††U}-‡Ù­[CRߨUXòQ»mñN× ¬V¹ }gkZRHd jD/.¾O€b³ ÆH¶WãoÉ;ßo r…t Gú¡ ,4få—J“6ÍtÖš`+7|\~˜±ðQ)"| ®ÌUX;Rã¶Ë0ôåûè…ùþp²Q ž‹Õë&DÁÌô^ªKsdL- …Ò¥åµ"5zrý ?”'ìOãY´%¿*F·Ðàw[(7… ;ã*h¥a ª-yÿmÂ{û؈ yWê˜E£¬Õ­d$|3¼ËØ&?$úHBY’#Ì‹ø¤:å+•dÖ”àmî#@MoŒì® Q;ÓJZ‹Z. 
c“Åû ¨r¶ÜÕ£¹’PHíñ4•BI{\)o=‚¹$ΨA‘9߸¯{@åîP '€*’£ªZc²ccž  KÃå®R\ Jïü4†¢'ƒxôi™îŸæ5ª(³’aP§VâTQ)‘ÉÁíþkÚÀüGú) `*6ÊŸ=G€”¸š¡ïÖ+š8¥0…4BäJ!g,JÚ¤¥¸š*öCLª¥PB/šëåØÎbIµ…²JraZàhu.oÒ»…’CËñ¶êâ¬Ö­]Ú6ý_Åý ´pÚ`xÅÎ~¦áÔÅ €A˜ÛÀqìÕÛߎk‡¢(F·«…V/P f­e&:²„gÖzlÅú”眆êÕÊêçlÙ™Ð6í?­úp $£ÖÙKµP®è“h‹Úâ­€Bjõãzú%@«¸Fh íä»%¢È’L© ÊK¹aкUB®{t)Mëúñ±ð*xòާ‡®Ë5ã.Ö¯•Ë2HâhßP›§-¸RÐEKíÓ”†ÚrìlžÚª—Ô£ìwj 9e`ü .L€ÐJµ…–ra® ‘`òÝšŠKi©£+"+N'”ÁùÚ ÿÓ„Ò!?sWª2Ò‚‡T]9?ÂŽm€qõÉáSTiQW[8Þþ8äEC_¢ŠÄ¢?:S(ò²ÙªíÞ¦Žª@2ªrEŸ& Å©š®³œ×|a|+/Œ­Âzo|e¹(- w¶tj–Tþƒ]n8#Œœ®†öi ‹ë}/«çŒ>ÆMèÆ,²hMàãL°{ú6ŤZjË'³9îÄhrdHm¹¥[Y¬µõ¹ž¨£!r+… ëËõþ „{ç1ä‹E¡>|üj%̲–Ñ+‹’…g–Nôˆn£®k µâ8¤¥¬žÞEÍ=YúTSCÊßt•`,(–!ØyÇ…ŠBù³w 4¬Ey3Ä– ÷«¶ÄG K¼Y©IZ¦ã®rSÐj ýLG¸0ªêc΀èßóÃÚUw9üNAµè5“‘‘ÎUí“¡ FHT;Ræ°®ÀTpß/­J¸šx__t1hx ¢y.wŠï®r“©-Ô?ÊçÀ… X#0憅™1)©—3…2•.ÇU ô 1M[QvÍ4Œ'âÎÿgtåcü“~x^kĈ ™Y»Àm?^ZŒ“Šº«ù|ü¬ã r9µÑÁŒ °E€ G C>þí?Te&´%‡¶S*erR¤£%²”V™k‰\lOÞxÜ÷x,uÓãjêÝ¡´/þ`ÓΖV$paöÐÜ.Ŷo¦a™­=y´/¯°(­óØ ù®WÄ´¼;7qa¢ °a š¨äÑrA.Úähˆ}@+$¸0Ghºi&”]Tð"eIäÌo6Â//ß%8ýÀ… ˆ"À†(’:’ãLp ©ïU´Ä¹Óàˆé•Îp£º!@¹b(»èœq} £V²\ÀL¢³¿Û#¦ü»0@& ‚"(êL†;çâuÖUaêhYâH¹¸0-(rç[Ó~2¦çvG`¬˜8,}æ2øgÛQ-jq]&`•ÿÂYÅ€Ѩ> S©“q)÷÷R›Õ„©ö€wÇ÷†kR(«.­åÜõzvþ?ðïn–äA>ÊB€  ~hZÒƒ—”ªO¯ÐP¹ó6 4«ßÞ¾˜x=<ï¢FJ0êç3óÿÆ¥’ê—ßÚT’w,6 öÔsÇ+Ð<‰àpK€2[b!Gá«qä€F>~òjÑ¿54r1@­^xã³'÷Yv_’Åø6X ¹ Îa”D.Ž hÉ—aâé%e«¥× “ümæ>øiÝe™¢cÚ•kÐtÂÚǦ.¸0­xÄ@+1®ï—´$‰:~6ß/ˆîT°'Í" äÃå"Ƶcaf]ü§&ÜÒâ«iZöÙŸ»/ äOL@6 4ÀâªþK ¶†ÄO'Î僖åþKÍ~ÏB5ä¶È-,¶/,@÷Rž„»6†Eö‡¾—ÕÓDaõö£˜ÿ£PÓ1\™ 6 ø:`H f\$©\™@£Þ{žgnD†©Ï™q6—§gìáŒ@–”îþö+›Ú«Vi]§kp:Á%P}D<ÁÖm°aà êܦîPP¨†µbT뵓äp±O€ëBT&¾âéû,-{¼¶hI«¾ãÐYË¡n}7¨ó¨[¡zQ8^„ÏMë‹@+ ©”3=ô$¦/BÚµ©­nnüàÉíÂðrè¼ïêÖª{~äŒsU7ð_ÅR\&ÉŰaà?ç’{â"v k¨–°½¾yu‚c\I*3QRŠb^ê˜'ÕèØ8QÉ´¨¦öy­ )dçQ5§Ãgê°aà3§Šu7TÌw¯6Ù -¯û—“q±O ~b´ý önÜw²Â7þh‹$×PÇ•y¢p–GOPö\lxŽ5·¤sÕ0kb[ £Ÿ­Ü Åþ‘öÖ]7ÆI±ªÏúŸ[ލ®èƒUúnhY2ê ÓLæÄŰaà?ç’{"€@ÿNõUK¡¥`‹ß®º¾ž+•‚;<Ë[Õ¯®ºÛ˲²Ÿ<Õá:u¾@UE2v])j†å–ð^W@ëìX6 tvBXïèÕ.È›^mùpÙؼÿ´Úꯧ6ú Mœ¾ ~Í{㤈Qys"£à‹¿8(£‹$5© ²U'>Ê‘8»ûµ¤ßü‚]Y¼Ów°aà;çŠ5õZ/~c×Fª[¢*%­9|JŸIk´ä€Ø…axEI’ k«$Õbý¶ ŽzÈ“^µR:«¸HÃ(U³ä8—´‰T?â gÙ%x0xÒ¹Ëö ìÞLS:\ Î3fÖrØ™í™5ãöµ¯¼7.JÝrA:꯭î™ãïÛQ}ľônvÁ?CÓ\.%°róaøaíKwØØrY“{ÔmNŒ‹PWk¹ëúQ­WF€ a(Y¿ ¸ôw÷j¡©;äo>s,Y¹ ÌrN'%IC¨ç_×w,oëÔ4½èÕiï:|š³§6ÔÍ£ëµÛÕX±é0<¿xµêvÚ4ˆ‡ÄXõ7vk‚ë'V³¶Ùê6JÜtàO'X…ãcÙ0ð±Æêz†À=š–¥v¤Uq©f,Í‚‘oý ôd'Â@ y÷_þ=¨dÛs¦çMj«æHˆS¾\/DÒtÂݽšWÜäðóvŒØ7lò/ðíê}àKÁs>^¾F£æG|ª—ázüÿßç딑ºÆÔ–›0Ç™|c@IDAT‚«EK”E ~øÂ‡™Âúíªî|¼óÔ{Y9ßÉ|Ž%¯yêÎ=chM ¼ó(<3ÿHˆ ‡^í“!Sß¶ªQáŽsœÍ-„}Ç.À¶ƒgàß='! _´”°kË$xux7Íi* œ'ÔE¤•äñ>¬_+hÓ ‡Š…F(šž–O«=>^¾´D8<7Ä7>[ó~Þ½;ÔUR7Ç9ó˜ÈЊ*YýLçìê¹ûè9؂ΡžZ I©’Ì…µ»N…Ù¦ YtþÛ7J€¦ub/áiUyÜHúÓ5ðû†C8u°_ój5«ÁUÈÌÕB£gjUƒ½xMª)4ÚsßÔ_1:cHø ¶òe±³ýÐØ„çæ–nAË”—=¸ŽkØ0píÇZÔ­7´‡·¾ÚàT/Oâ öÓ»”  Y2b"BÁ²½Ÿó‹Jp¿NâÐ9=µ[+«0S娩aÎ×"§®´Xðë6ËW‡ïôc=aÎJ¥^43d”šd  ˆôNåˉ×ÚpÇTŸ–½=„éƒzg}ÕTNáj‰O#½¨C ‘ÓjÊàhÄ “Ù¬,—£§tòù8…,µtš”²RyëÁÓŠQ`ÙEí¯GÃŽ^TPM¨…ɺ(“gjá‹ý gWåZ@¬ƒV†d£#+9‚Úº,òm½S;”ª™ ¥WûºhlQ-Š¢XþoQÙ”]#±Q¡JÎ Q(Ä>Òù!ÃÀRjctÌ«SX¾ò»°a ƒ“À*è—ÀÍÝšÀ~|Úþêï=.+I?úÎ. ¤Õe³áNœâÐZ®Ok¨Ä[pæF™ƒ7ZzU-;ÐÑ’F1´”Ëš$­Èós—$’E/½•_ÿ=dW%:‡´âÂÝ«.†ön©ŒPØUFÃÎëðúYˆ«E´LcXÄ“ï£ÔÏ”Œ  1}¼³>Îk¡cãoêtsõvYµí¨S*ÐÓõ ]9u¬­ƒös.íôèëÛAV÷ÇòǦl¯w«OǺÊ4HE詟|nÜUØaÑ]d—ˆóìøÈ!@Îsßz Çywo–-N;íˆ7r@~¦> ;ShJáåa—Ck iÇÓÇìCC‰¦<¼YÈã©;SqÊBÌBž ÁQ-ެuôÙÙQ4Gry¿óØ0pž@èÇöž¾­@-së"ÑP.Í?;SÈññ¥{/×ÕÑ^;jCòZ“AioÞ%P”I)[œñZWÈ—àkÚ*β¢ü ªv†üL^Ö Ôfˬz¼½ïîX"k¯=Þ瘎q &PN€æÕç?Ò®ïÒP˜sW¹pŽœvÎ0 ÑÍêÄÁ[£z ã›ú G¶TruŽ? 
' é ìQh(øzÙ‹+Q¼QZÔƒ·Ç^¥y9¨3ºÒ”ÔìŒ^˜öYìTµ"'\.ú!À·ú9¬‰ Ä4ÜÚ î¸²™âÔ÷.)sÆ1KkwE< Òr¿¹ú¬o7ÂOë`â$­Z”Õ§•"ʵ© ¡[ëÚ°è·íðͪ½PXb!V³ z¦^g\Z®!W†¥!‘¡L†îÊRiiÇòÞ´v, ºªôlWÇ-S–vª¾Ó²Â)\‰)Ç÷Ã\\F*j€®§˜ ÇËP«êÃßÝC€ ÷p.Õ²¼Í‘`ºy¸cŽ‘Ú 2JåËÕìé¢2%¬=ÖöU^So­m ©²öÞV=W·×Mˆ†'îH¯k¿ãúÿ?pÕ@ÖÞ“ 2ð!ݤț¿æ¸o jÖð;êÉxõ¦ CKÿÙ `D=ZZ©¥8só´%?õƒËBïÅ©šß6TôÙ¸÷”Ûn²¤;ÝX[âÓvKŒ/Ai©+exÿÖÊŠ‘5;½6àu@ñ DâteÛ:JLÒÙ[…~_h¥B¿Nõ0ÎB6à: ,ÉtfÕ ¹C4ÀèŠ7øEx‹?´[î¥ÒáÞy±aÕBT{§¡+­Nü–·û@QðNžÏ‡s¸˜>—šÍ 1BtxÄE‡*ë£Õ¦JÕÚúǧ¹åÓ9Jt³"|º3ã#gÞ°iÞØ²¾\Kâ­:ÐÒ'ÒÖAÓšh¶o4”¹sz’WÖ‡ãºioŠñON‚›qΙ²Íı'Î!/±ÿéi³Þœjaœ HÔ8)h˜¸1ÞÀDŒ8âA¾ ;Ÿ…l¼™FÆøGç›®%Ê6Y c/ÐzôêxQðœzh¹Ë%]éÚÞkù)`|:„úQÀ" “LCÏŽ §ôt›€—4PC2“ÞijÆp§îÝèÝ…L)àM5P(]ÃŽn¢d¼Ðúþ†\ˆFy(@R3 ä ½-úky§ ‰ÎÙatL=‹}ÌÃߨRœ" {>ù(ÐoD^C 8%Aް”ù±^ëîü½ÐÒ¬û®dzbî_ªº–stWè–%“”uÀ*Ñ4Å.¯Ÿè}èEvœ ²#Jcrüó–óŸš>PÀšŒøF¯Š…ntpí=S4ÜLU4dÔÑ;çÍB¹ ´ä3p·®dlR´HzU-Ä0· X1J+ÐP,Q¢4Á‘8¬¯‡› ªšÖT^Uû‡†"½,×MéÐÿéÇiIý]U¶7¾“Þ4²E/.¾O€ ß?‡Ü @7:àâ:zú öm–ôôL/.L@xU‚Ï ëĘ`LÀKØ0ðxn– 0&À˜€ °a Ç³Â:1&À˜ð6 ¼ž›eL€ 0& Glèñ¬°NL€ 0&À¼D€ /çf™`L€ è‘z<+¬`L€ 0/`ÃÀKà¹Y&À˜`z$À†Ï ëĘ`LÀKØ0ðxn– 0&À˜€ °a Ç³Â:1&À˜ð6 ¼ž›eL€ 0& Glèñ¬°NL€ 0&À¼D€ /çf™`L€ è‘z<+¬`L€ 0/`ÃÀKà¹Y&À˜`z$À†Ï ëĘ`LÀKØ0ðxn– 0&À˜€ °a Ç³Â:1&À˜ð6 ¼ž›eL€ 0& Glèñ¬°NL€ 0&À¼D€ /çf™`L€ è‘z<+¬`L€ 0/`ÃÀKà¹Y&À˜`z$¤G¥X'&À˜@ (.•áÇM%pð´šÖ4@ŸÖÁ`4HÒ}î§ °a Ã“Â*1&Îä™á®Ù¹°ó¸¹¼ÃaÁ}QÂÆA9þàQ<•àQÜÜ`Là"׿/¬dОµûMðö²Â‹•øð06 < œ›cL€ X¬ØQbùXé}ÅŽÒJßù ð$6 ¶Ùð(ܽǡF `L€ 8O€G œgÇGz‘@ÂÝwA9b)>v Nþ¥µá¦™`þC€ ÿ9—Õ“ä§ž€ð¦M”>_øguö¹³L€ ¸“O%¸“.ËfL€ 0&àcxÄÀÇN˜?©›ƒA{Ž¿?Ì……V»T-j aYÝÏ™`L@<6 Ä3e‰* l¿í(Aÿ{%ë6hóëO—TÙ>ð0„„*ÛMùy—ìç L€ 0&à6 œãÆG¹H€ò82 ¨‰¢ìì‹-Uÿ[°mûÅí>Q` .L€ 0&à<6 œgÇGºB J’ Úct´"±pß~8‰Œª–jWv‡ó¿þVus¥ïÕºw¯ô¿0&À˜€6lhãŵE¨òdŸ”>Bë&+ÒÏ-[nÕ0hùù§@ŒÌÖ}Œ‘Ó«§( …È‘KKÁ\l=¤± |qaL€ è‰z:¬‹]†°0ˆëßÏn=í<úöØÿÈc ÛÈÅ@i”ë<2ê¿ð?=©Íº&°û¸ Þ^^ûO™ q‚¼*Ô0 n…Å1qx¹¢8–,‰ T"ptÆLÛFÕD?‹#oM«t ñ/[˜à†©9ðùÚbX·ßŸ®)V¾ï9aò¯ŽroüŠøÕéô¡ÎTñ1ØxEŒevª¹ÈúлõNQU.*r¨²lcšÁá\Á'Lù© J*«šƒ3aS)„iƒ"+ïàoL@'Ø0Ðɉ85ªøØZ¡ ýcȵŒO ºkšrš‹eÃÆ®W\rÊi¥†l²þ$)¯¸¸™î7l;j¶ªãŽ£ÖϳÕʼ‘ x˜O%x87WF@Âùõ¸«8ÄwÍÕëøBcl '$(¯ L§\µ|þEX«¢b­¾VÇÖ€cï¾_õ0þ®sÉqÖb“«[ß®óî°zB€G äDë±›->ûr7dl#ò¡#F¶k§GÕ…ëtdò  ¶ E‡<2uÔºo„­*¼]‡Ò{‡Bæ¾R¨8sF3f£z…ÙÕvïI,ú»Žž—¡M# í ÑaÎÅèÈ/–áÏ%‹3[i‚ Ž cÅ®B¼3 °aP§[_¥áñèÎô¥”›´ÉY æ¼²Å'N\ÒŠ­°Ð+ª©S±>ö>îÍ‚áýa‘ŠOÁ¾Sfhœh€‡û‡AJCÛ?½ë”ÂÝïäBá¾ ?n*/ÿ-†/ÆDCµpmÆÁÖÃ¥0lnœ¸ +0‚Ð(yö†pÅÐð>Ö@¯l_zÕ˜õb>Hàà3UkÝvÅ2I.‹é‹ÅŽ»©>–+ê@¯–Á@/µå…o ÊË1{N˜aîÊBß/ܲÉỌËóË: ]&}]©8rÐ"É?üw‚à š °a ÀÔ0V‹ÁЇìV¶D{¬X)81BëÔV6&Ô¨¸‹?{ÀÁÓ&øys ”àÍô*¼±7¯å™éÆCÖ³ll·…„Œ‰½'/u~¤iß¶–°a` o6 ø"`n"ÐàWáà³Ïé¿)„ªÍBàö„qU7ówø6«&|”%ÿÝ£ßø¡&âü½W”%îr§Š‰Ñ’â[PµÄhm‹R‹TE±µ¸0›Ø0°‰†w0×Ä^Õ è¥µ\øûo(ܳG9,ó­‡s} \(á‰%GOÙ/âïVAP·º{GF\ŠmUûŒMÞÓ-DSÏa”ÅIØ^eÉ$9?ök­~jCS£\Ù/°aà§‘;á*ì7&+Qm%¢ ßœ ñ7Ý9kÖ§žÓù V›•BC éÁ áîw´q÷ðûUáýn$°þ`)äY‰MeÂQù¿w—©î6 €ښƒa”OçÉЬ¦AqlUGûÏõŒÁ‘0œ.›RC{àÅ["Ð Ò½}pãéaÑ  ýJó€RÜð$ *thÒóv— šrr û•×Ã`ÿcO@ÎßÿØUq×ú Ѷ-Fs¼øŒþËÀZ!_jÃ^)=}¶\{$¥?Õý$¾ƒ½þzk_DˆmÏ{ûDê{Ï0 W‰I†`£m}µIÀ/FÃZ\2IË;70B\$Ï#8âèûÙ0ô+€û¯ä3°CÀ‚È\¯|4;oÙdû'eu.‹tX^ 'v›-üjÜvkù&ˇ†Sß„#“ß‚‚Ý»@¶ÚœŸ¯¤Î]»Ò޶Êï‚ t¬g„†5 @Ë +–ê‘ôláÙ!xWŒ‹î$£kçõ..••å’[› )Ö·§„@|¾þøÎg×Ï*÷É%­~üÚ¯]¥¼M{Ë®¬:=Í?ùüßëvë);1äñ©Ï>·Z/qÐÝÐáßLIJ²º¿âFÓ¹s8ç]¶.½âvþ,†@ÞH߉«.þ85îȾÖàhØÖ¤æE ‡y¼jN¡ ÷¼—§r/Ž$eî3Á}äÁ7㢜NÀ•¹×z˜îÕ6¶{¼ãÜ [°aà¬,Ôœúü 8ûרZrübÈâ’£Ç.Ù_qƒ¹ J/\SnY¸ãŠû¼ýÙ”›ëm¸}øU4 ,Í’_½Ú$;÷S‹~ÖJl„õíÖêò6ß#àÜÕâ{ýd™À%Ìye΄—ìаaç ¡6k{{Ðt€³…ü,¾–÷ª²Bë׃ˆvmË7çüõ”ž9Sþ? 
8b`«œÇ¸ Ζ»ÒB`éúÿ’6T2¨«û=UhŽ?z˜ÎÍ釀­›­- áÊ.)TÝâ§Ÿ…¢£G¡`ûv["ínß:àZ»ûi'MЋK` ¬‰ÖJÆDjëähÉëÒ8Þ¼3^ÂàNS!›¡èéW©û°¦oÓ?ëW“þõf ™€Ë$éâphx‹æPzö˜‹Ê"Û(aŒÑiŠ!‰ P÷é'•ïµî{£|vôçØŒYŽªTÚ/±³`%üEf˜Ça\ßPÌâXvýÒQä+IÁŒ´fd¬ÚâÍBà†ŽÁp S@ÇGI|ñÿ¦j]þî.®Çñþp/˜€SÂ7‚”ƒ{•ø# úµW—Ë©ÿ¿ç Ó®íPý†ë”m5‡ƒ¨´Ôòý áf’X¶ÚáâÁ•?%RyƒƒodÐ$Žñ7ßä &ïöw”yqÉè(x°W¨b$ü0!è¦.¢ÑʨƒK6Ù(ASÿ2xÄ@ÿçˆ5tŠS äoP°kwyK¦ûŽ{Gèsù,nþK•^êÉM#"¹î³O•·aíC2ÆE¨‰7z³ gÆ ©]Átölù¡ÛwàTÅŽòïü!° tn­k•À±cÇàøþãðÇŽ"0bôͨS§$aœ gP ì‹DEïÙ0P‰«øSK>ƒCÏ¿XÞ±óËÿ€õmÚ—wöCû¿WBÞ¼«cµjÐdÎ쪛­~®^€^V Å(P[ŒaeqÔÖçz¾I §¼Ö¬YëÖ­S^‡²Ù‘PôiÑ¢ôìÙzôè11”œ ¨L€ ƒÊ<ø[€8ˆFé¼ŠÐÆÈƒÃŽ;ààÁƒpøða؉÷ç¢MÁ$€BÙW+,„ e\”¹Ó×…R/–jWtƒ˜Þ½¼¨7ínt~óÍ7ðÓO?Á\2«¦¡MVV–òš1c\sÍ50tèP´C­¢jdrÿ#À†ÿSî‘ ”wÀQ)2ÂŽË:ÂÒÛaãM7ý¨–òìª_þÕòá§?C—¨1Úž< Nœ‚+S –ú®¼‡6¨tŒ¤BN“Eûö+Ÿ Ðæ·Ÿ•ÏüÇ3.˜aæoE°jO)Äàÿ;RCàÚöbæ÷«öàĉ0þ|øùçŸÁŒa¶­•`œºJLL„( °UŠ×_>^ï4½PqúÌ„ÉÃȰ W»ví`Ê”)ÖDñ¶$À†Ažtîreß™Q;(‹ß…j'Áf¼ñ—Ròú •##V>úÒofô9ÈÆl‰ôú¹A=Å@èçÀgáR)Ö·TœJ(ÚÀz% Ó ÖðV-ŠJd¸cv.l?zñ&½rg)¿ Ãðîâ–õÑ”ÁÂ… áÓO?ú\±ÐA—.] S§NбcGÅ— ¢ Õ-ÄQ­={öÀÊ•+aÙ²epêÔ©r1¤÷sÏ==öDFF–oçI€ ƒÀ<ïÜë ›4Èöí•)‚w?üVvÀ€Axs¯ZÂðé¼qãÆœœ ±˜añÂç_‚¡¨ŠÑ¹ë:w …Ch _ü·2áMzCÍÈJˆ‡‡ ƒ7ÞxjÔ¨QU´êï1½z©O—Ø­ËSvùˆÞùå¿Å•Œ‹ü7*€A]Cpíÿ¥×’¥ŽÚ÷]˜ãµ×^ƒ}ûöU:„®Å›o¾úô飌TÚYå ]¿­[·V^ÃðZ¼ë®»à|…é´?ÿüŽ?ÿ÷ÿçPVÑüÕÏ\üó³Žqw˜€Z%8ÔúþûïÃ'Ÿ|4¼ZÑ(ˆ…¾}ûBZZ´iÓW&^LJ$¿øÈè{°¾cg(Ú[öƒM«ŽEF(£ ÿ¢A‹ òO4hÌš5K10”ÿ4ýà}¨yß°5BþÑ—_êü¨±®®€%iQÕCòpæéè934¨áZ¾…/¾øfÏž]iÚ€ŒKò 0`€²ê€ÚÞÊ”àZ˜=0=ôß»KaÑßEpÑj…1½C!.²ÌyuÅŠ•Œ‹îd€<ñÄŠþ_0Ë>~lι¨ž^øg•ÍÐÀºY›Ë‹‡wgL‡ƒ–¨$\.Ø=ûŒ8°‚h ¢•BÃúü¨ù‡ áèÌ·ËoÖ4Ð_1¤ñšÂXt'ýw<Í÷¦§§Ãøñã•u+bín’pt"æÊîvëðNϨ_v³­Ú*E ¬YÍú¾ªu­}§é‚©S§Â?üPi÷Mèïrß}÷XÊÛË á -—5¦‡È)´ìؘm‚åÛK`é¸h 2+S–½døFãH T¶mÛ¦'N´Tá÷#`ý/À pwý‹ÀÁÿ½Ù/¿j·SÞì7¢Á—ÍcÄËqbq¶ïþCÐöÔi%#-£ ¢ð(œÓmŠ)Ž­• «Vùé§+í¢|šRÈË˃[o½µÒ>þâ{nÅ BïþQ‡ÏZÌ¿²>Üß3ÂCœ›F kãi¼n6mÚT$!!AyšïСÌƲ#ëP)¼ö}+wT4 ,õöž4Ã'™Å\ð‡2mFÛ)¦Á!C”d´.]ºT©þÇÀ÷߯¬Z°ÏïCÀys6pqO}ŒÀùeËjü[RMX‚Ë Kñ‡Q)øtŸzädü»Ú‘Q€É»ßÕ²`Á‚r-[¶„¸¸¸òï4¥@Ó\|›@Ý–7] µb$h^ËÏÝõ»øD¯¥‡¹¸¢…œ+4EÓ U’ûfVT[v-†E‹•W'ß |D%##£’ü™3gÂÉ“'Ëëò‡À!À#s®¦§r…%\!ujCPL¬ÒwZª•äK7„µhX -'¼ãàahJÑqyš«¯ƒQ])«p´€bXÊØ±c•!ÛGyDY:FÛçÌ™µk׆îÝyzÀÂÉßkÅ`Ê]®{óS<2 h®ßRú÷ï=ôP%ÿË>z§pŪËÑ•F \~(<õÔS0räH%.­b˜7ož¢Oy%L8e7we|ŸU&œ½ëÝ*FaÈf™Uáê°a Wõ=õ_y î¸]QÜŒŽ‚Ï¶ïXÉ(¨[·.¼üòËÊÍYTïLøÄw§2ÞY·¶\dÛP'½†fœ,ÁTœÓ=–“£ìõÕW•öiÅ—À%@A´ž}öÙJFùГ¼½ruÛ`˜öËEÿ[u£ÍphÍÇå»É·À2Z`Ù¯o¾ù¦²‰b%Ü~ûíРAK•KÞ³q …l“Ú±—@?±¤>[[\~Ì&ôuÈ:X sG”àå;øƒ®°a «ÓÁÊ8K ôÜ98öλ@ïE/†„=ùñ'·! 
(WÁ·AXU»Vy­š6…Wܰ4ëÈ”©ðûÜyp°u‹ò¶ºe®…ÓòÜŽcï^Ñòñ©ŒžÌ^zñE˜‰ºÐª#Na—À!@£Yd nÞ¼¹¼Ó„Q£F•·õ2+¾60žý² ÿ›UhZÓ×w†6–À\•‚«ºE¬@_ˆ#Ѝ8ZPQ6­tøì³Ï”U4¤×ø—?‚íñB(fU¼§Kž¸6\I¦´û¸ Æ~˜ÛŽàJ,íëaêÝPÿ¿U{Oš*–6–m/…5ûJ,Ûø]_ø×G_ç# µÉY»öŽÎ€’SÖç5•DDƒî†zŸ¹„Ó¾‡“~tÉösßÿôÚP¾hÞ¤|2>­¿€C¶Nt)>z –ÕO.Ûâô¨SÁ( 5иmízX@ÆÆL8€K'w¹®:˜­ø64[8ª_wM¹ þàßæÎ äðg)=zPeXêß– }ZÃú&%Ír‡zFeŠ!£OY¾ Z†{ï½Zª+Kpi ËZ¡¤Kd4ÐH•óûVBqÔ=PTóÿ*†S92L¾3î}/Ÿ+[ÝCõ²™`øÜ<øñáhÆ8á{NTvƤ:–²ç„éÀ–[.ü»Ó;›¡M# ¹<ȃ‹ç °aàyæÜ¢ ‡_{ò0Ž»½’ýÒ+Pëþ‘Rëâ“?E+<ýu™7µµcGEþ?{×Eõü¿wé•BèjèÁ†(* *REŠ"þì¨ð·""* XÅ‚ibDQz‘&½‡B*é=¹Ü]þ3{Ù½½–FzÞðÙìÛ÷æ•ýÞq;;o Öµn¡4Õ§‡ô˜Sað,¦Ÿ6k!®/ý ñèóÉF!eë6È‘OÑoY´j¾;’àA†‡AŽ@ÔÜy0ÐxL­)HÒM1q8P ÅØÕ¸!:Så@ _G !(Y.8p+(°–LY>]ñcÒS8ðy:æòDËzÅ‹ƒàç©Eÿv¶ê|wóæÍˆŽ.Z[ ¯m^Ü=}“•­1ÞI»‘t¯Ôü'i!no«³ ä~ìùpˆ´}[¹Püûka^ëØÇh{áño2] ñø‹æøí°¿¾è# :òøâ\1Á bp®Ñ³D¹Ik×öȵA}4ÿxÜš6±×¬Ô¬Þª•«‚5_üÒe0’{—=Ê¥Æ+CÛ(Þžä*8êì9xp £bÒÅgžEÒº?ŠÉ lëÚQáemAƒødÓáѶ-œêø*‚3Ý0ÿºH¥¨‰%q{ÓÆqþ"å>0ïË*ƒ‰BC€­þy A&{ĵ £WΊžßÀ7Oöë DOdmÚm i äupšæ&ïÂ…ÿÖJU^)ÿ)‚WD&9Ö¤e›´­ƒ0¸ má‘ᡚú¶tF:ÔôÁúlE(ë/’Æ _)¥w‡<Ž8—ËO§äýEZŽ@ÅëŸôE¢àDž­¾ùªH>™¡á¤Wá?ȬJ?5€ÞVTÞ2Ÿ ÙÙêK‹òÆæÒC—+µ´_:òìyøÑCWKñàÉЪ8”u6¬8lO˜¿¢UÛýik@&C–•ðBv·E†CO±ëgQ ¦“:ùŽk‘ÉpeÔªß9#'_ŠBØ€Œñ¼¿5óþýG}¤dF4j\2 ùNætÙ‰F)1ÓímÍ7KŠk bbb¤n…ÙXëߢP ¸gœ†Æ¨£žâ¸D-Øž ë$¢®¤ÜèÞÜüXùì1ÖxW Ä;€¼ì=èyÂqŒA€ù¬ø¹ÅŒ5}rR±î¢¸|ò`­ZÂ÷–›åK‹0ÅæJ%zóf!â‚_®_Oaâ‡tHšÉ€ÓK•Öâ8WkÁÞ¬‘¢&R¾5mSiEX[Ð0³ð,ŽÎ×à®Áƒ±â÷ßMÆ^do°›¶Z©åjƒÀ×ð‹M9´oúÊ>v“+ÞâgÎÍmEœÕP« ¡é3Èó°Õª¥“ QZ²ÖÜsÏ=Ej x®µ”ÿaå¹4׺ÓVB Õy`á Û·& tG»Nxó~w|´!GY}u1•â7ù˜…!¶5`AÀž0 t¤B= bRmïS=–š_”Ë!”/¾µjtÔrÁ|åž“7þƒ„Õ?+×V ¡@O¿R´ Q¦ä4È·]‹R®9Ç=â·¸ O>Müß/o7Ègêºc'œ|}QoܸP¼zµ`p–¶bÈžA&µ¶€ë®¼2 ìÊhMüÇ9fΜ)5qVÇ\Z‡ ê…¿ÏVE äpÅ~êQXä‰w»[Ü o!|÷ÝwJ]Ÿ[úáçÜþ$Ð*URÁ…ÞÀÙ£ ´d­-àïYqhþzàkœ‘íÝ^i‡¥.n™qSï^˜ÐÏt/OÓ¹w gl>'¹+ÞÛÑí•n­œ…R-dð„äD„17»g¹‚§Œ(ݧXÆ‹ÃÕ ´”ú5豑ÊÍäÅ]/µ`G{ò9—.+cQ"ys¹¥d×\K^K‚ÂÐó—@¿±ERÎù HX¹Ê1­Ã@éb¾0 @23ÛÈdO[`(ˆ[ ó°V#é?¥ËNdŸáIÖàY$„äÑùˆAî £(T~9h_ØäzkÁ`þüùÈ"#S&oŠiÑòöÿ¡Ýi-NFš%é üAI°(íýïÞûŸÒõö;‹¶-™y ƒI碮Ù6»y›8ƒ¥ñ·»AOS~»ƒ’>eå£U=-¦Ð½wl|ãcßèÚjczmüÔ«Á=GL>JCñdÈ÷wHS¥kïØëÎrl‡ 0RÁHi”KCÖÚ‚;UÚ ‡ã‘`6ü¥¹c‹æø¯ÀCá°Q§•Q¨déì ¯ÖõÇÉófÏž=Ê-9·{üãFצ‡1 ƒ:»à¹;ÝѾaqÄYe(‹¿Éÿß~š½0:×Áê„a¸éTî¡7û¢¨5ÅFàø:f «kÎUp}y†núÙþîÒ‘«Ï‘ËäŒiÞ *A'Á*°‡«áÙ3A9èÁWÒª²ÅÖÃF?SuêäL2òËrv‚–¶Ê”T)˜÷©'µcO;ßy ¡ë-mÔëéBZ™.vøHPõAàÖÖö¸·´¶|ÿâ0Ø25hÞGôýäKéÌʱÿÈå/”r.”–R³Œxue&R¼{áj—¥ˆèø52µxmU&¶À0gc6lËAD¢}£¿7È~€z s7kÁ\rã0ÞìË›D¸äòF¸èñ-¿±Eó €CØ?ŸWéíé¼ø<2Aí¿Ú#9À‘å?P““JPȣجIVÆÉz¸ìç‹QM›£É/¦Ô²êþ¥.“ë£L.B{>XÛÈ<žÚÃÿÞ{Èus&âW®¶qïlCŒîÔ?‡ÞžØ{ðÕ™çÇá³yUp.ß›ûª«‹]¾úÞûHú}­ÙŽÂª§Kp0Ù‹| ÏжV-âÒO÷sÃîóyôP7?lÙ—ÿÁfAöŠn¦ ÜspY¹T ×ÓòMÆxë’ú ´ÿ’™¹¦ŽìI ×MXlöŽaCÉc¼¤2Ÿû‘Äêç¼ñÍ?A¸|ÖÔ¢É×ÃÛ™5of;S‹ø[Ó‚AMûD+ø~4äï\Ò8Û›R÷õ»ëNôаó+©f²Sý’6ü‰ìsçq¨®tr8a~õb½,Q&­óÛØh„¶kƒ.])sw@¶a`ˆÆ¤)h`½mAkð½ý6Ô%¡€©á+/K‡taõ§ÇÔ©ŠšùäÅ‹hHqQÎåËèv⨣f‡õÙ/!jÖl‡íÜÃEëÌo·…vR˜àÏzKÆxa1z¨k1¸««TÏð°Fs¨)'æí 4%_ZËÿü¶^׳tBïdÇ B=¯\Ö‘ óÆÏYØ;Å×&VB;ÚÆÈÌ÷B>´”mÔ¤ý{}Y4æ=Óªd‰›äÉÄ¹Ú ƒjóQUÍ…zRП¾}¾o¿Ã²ðô¸Ù(Ñ!£)êàO+`´6Ü+àÕv pØPtÞ³KªY8f (}œT樂~¹¹ØFFlÐÇà+dhH¾í釚÷6‹M¼u Ò8ÓXzÚ6`ò¢-”Û#£íåÖ¼:nÚh·Íº’ÓëÊûϱ^žÖÍ×ys[TãÂΩœŠ&C†Éµ³hNÁ!#À™vr¥C®1Ÿ9Â!»ª)lÏ*4õظ&Ï#×;Tiz´·+åÖ0R®½”üNòÿgA£¸ÄA„X°`C¾¢(‘r)\ ¼¬< >¡í†}—Œhæä'ƒé»°õD –RØâ'o-ÿ-…¢Ö-ÚË!”¶µbdNöÓiÇVèbcI-mz«°¾qgz‹çÄ@¥¡ O?ƒäË}Gýã(aQ÷“Çp™Þ ¯ÌÛ':VÊIО ëZµ íÓ6G.­yµ^‡³¯½†I“&¡qcó>ª£9¤z•PÀׯìâÙÔ)AJåe q˜56œ>š‰JqXè²"-c ]½B.‘¶â¾_¤\‹BÙ!°zõje07òàÉ%¡•É9; ÎOAjÐ@d7…GúÖA(Å ¸kvºd©Ï<þ‘ 4˜ ‡¼Ü4X8Î Ï/ÍDB†I8ð¢g¹¼½`=†œÿ¡"S>Ea4S>þ9™'3 5²$ƒù±VüM©s”åìÙg 68 ”] 9æÀÖ­[®6M›¢ÿÌYÊu?*m9t?þõ2 "%ž8qBJ1;vìX)µ,'ÑzÿÔXànÆ“øÐ^ÃW&¨ËEêÖíJ^Ö–¸4l€ˆ÷?Ö‚”ÿƒàÕ¹³tmý§yóæJUº›+²iMþ=Ïöí¤zN4U¦‚iAüܥ̙uê´R…²Càüùóàï›LR6Í}ûöáw lÅ[ ôøEøh¥9Œ¾'â嵡ŠPÀ}òHÑðúê,ÜJ†Œœ¡8Äñv¿í‹ÃádLS4 Ðâ~Ê¿Q`{ Ñ+Ä MÌÿ˜#¾ÜšƒxJ–dJ¢d³×_ÔU}„`Põ?£Z½BÓ¦ Ž:ÈFqLÆì,ËüôË·}ûv#ý½ûàCÁXÔô0]÷9óæÍÃn CÌ”GZ€ï¿ÿ;vìÀäɓѺuk?ýT‰ßšë¼=€Ë¯¼†ë?.–Ævô'š)õ¼JqìhQ‚‚‚à¬ÑBO[Li$œ÷”Âk/@’£yn´ž…„C!­ÈHÒRý-ëR¯Z}÷ ¼»w“«ÄÙ7š·’Ú·oîÝ»KGÿþýÁ‰W¯^•zÆ_¿Ž?|¾þ·#±ñ“äfè«ŒÈ †˜›îrÃÈ›Š§Êw§TÉ·¨¼%ØrÎFKM 
y†|)#b|ºC槃ÑÀN–öŽøD}õE@Õ÷³«+WÛ4ú¿I`E&Ψ¨Nlt-2qqq &wÜq‡RVüýý1mÚ4I0`!)Éd#p‘ ýžþyIs0òµW‹%°í@ýg&HjþÀÃ¥iز(â,ºèpØgkJÛ³žj9TÍLY¤aÈ:n~Ó´æ¿Ñë|ò¦àµÈ¤WedÄÂ(bQŸ|†¶+–ÆVëÛtesÛ¶m ƒ RÊ,$,\¸PÊ®Èõ‰È|’vÁ3í?…LÿÛþÈd#Þú%©”¨èw¸Sèå|œŠ4#¤0ÅZ²qpD,d/Ûk¥. æc)òJݼhW®P 5f+CÞÕÑ £ûš¶3–îÉÅ÷¿;áí<Э©x¤(`Uã‚ø«ñ‡'–nFàèQ³u~‹-À@at3½±uùê+,¤ôÆS’&#=$W­Z…];wb8½Í©¶ ¸=ƒÔnN.úÆÄÒv@'4¢dOjâmq(~ùJ4}oŠ «>)^yzE0Ȥ9Ë“8Š#kJKêí”ÒŽQÓû±1iFA(lw2–íׯŸrËI™Fdäh1†Œf¹žµgÏš¶ÏœôiŸ‹,â›þ×@¥ßW¤êoKq&¯¢|äÜJv­ƒµX8ÖKò‚`û€œ¼|JLaŠ ‚$± ÖN>”…ƒ¤`;OFˆjâÄIœvY¦÷  ?þ›‹éëÌápFQÚä ¯ú E“Ì.ÎÕ!TÓN,ۣǎ)ݺ9Vms˜åÓïGnÄ5‰ÿúÛ€¼Ø81ÉÃ]ª‹¦LtG†<€½PG[ ±T˦ëik÷`ÁYóNòÆë‡Ú.ÀX`¿ TÈŸ´½{¶º©,×uÞö˜5.¥ûï«u3ݧ½1íÕiH{ÑqË?JSâo¿#úsS6H¥R"ÀÛT2ñÃ߃ânp¢Éd3°å G¬€äq0çÑÆÒ6×Úµk)Â÷Б½ “gÚ493IF#-ð^É7vž_šŸós’ü/Rë Â<‰'¾Í A!_JâÄ}?¦Ü ÿwŸ;ž§-?òRàì‡ì¢hMœÏ‰íÔ¤Õ[z¦Ô©c2àý†’EYou°aÚÒ['®+ËoAå­CÌ,°‹€ÚÆàêÛïâô}ƒ¥ãüè±ügUAc  (9’,È´HMËGãVr7ÔH†`@ë&ÑaóßXHQ?uÖà‡ÎÓÅl0˜ôà`<óþ4Œ?jksyLùìNÛò!×ñÙ‘‡Ô¦bÔQšC¢¦Ô]»6;jð  EþdQ9xp0†„¶×·fÍ ë*ÚTð6¡C‡”y‹ëµUf¡€y‹`ü¢ Ä‘'éСC±xñèÑ£§Ò3^û ÎOEݨ•ð\‚¼ôhx'ï&wF2\¼þœt‰È=ùÜ£Ö+ý¸À6çb p#{ƒQ7›¶¨Ô >$'>ÒË´=ð$%3âk™œóL[m|íEAµ\Hƒe0æÛl7ÈüÑ)…|_e&q®ò”î•£Êß–X`MA@mc©Ò ¨ï-÷Ìá„[µr¬Ï/pS÷ç² ýØÝKÁ„zSìƒl Ÿ|ûØqÈÌÌÛ09sž*cÁ_~ùEÙæØ÷>ú¨Ä§þÑ»Ÿ6Ûìõðáý 5‹EÙ@óEù•E¦ßÙ‰Ûá²OÍfQN `qíM»ö2*º#l2ÿejJ±Adº­ ~É‹÷èh[Áä•0rEÔ)fŒyq®šX~ «æŪj1üð–]Á²d‰ÒT|ëË‚v]ºˆƒª½aGc&''ƒeË–áþ;ï”’eì?àˆ]©/,“¤ÚÁ›ö¨+ƒ´îÅšV[`°Y,æZÆÄžlœ*‡»fbUüÇ#<ñ¿%™Š‘ ×sd~àªé·Ã:D$™Tö~qkᛸUÝl·ì›¸…ø¶ ¹þp¸„>޹11ÐÛ´U`·C!•²‡³¨lùºccg|ò¨x„05ħZÓ>ÑZx?ü`–‰Ý_‡¸ÅKo硚såŠÌ …ONúk£r-ç=àŠVA”ˆÆÉ&¶½ÂlU¤ÄIçÚ´‡Sª¹ïóÛ£šSÎëÀíÞVa˜¹Ž5ùvê¹­¬È½Eê=9 ?¯±É)ÏáR/ ^|A¾g+Ø.E&_Jù­VÅ÷ uÁ¦É>Xw4Üóqs+gpšôtèÃõf—@·,“½‹š§°rŸ€Ë˜G‘yû 4t¶ŒÔ¶;:˜·¸J3žèS}‚Aõù¬ÄJ   üèøüØ'áÈÈP=DêöàùΜɴMpÚ×)m³ÁÁƒÿÜS¥ øŽOŸ>­Üxƒ ŠŒ ¢0‹BµG@Õþ#7À.a2¹ÑÞ}q„™ßÑÙHîôÐßÒÄ¡PÀ}ÈSáT`NÓ‘îê‚~û iºÉ‚\›´õÆŽQ.ùM»Ñ«•ku!N•Q±^f¹¢™H øª"„‚‚iÅ©˜$P&Ä…ÛsÉ’_à:l„ç&Ù§ÈÝ[¶l)‹} õ¿3)–ô¦dûvAŒg…½iW(à9Ä7{?Ì™3G:{2;Œê aö;]DU A@5䃬ͷÁ eR»7r]ÈÜÏ(tòz‡š¹Ÿ½³‘„ ½MA‹äT\®[GêïéŽk¤©`º@™$o¦·ü¦OÇ¥ƒ¤:-jõõ—R¹¨?jÁ ¾UäE{}5®¶~éöøD]ù!LÑ ‡ÌË¿á›È€ ÇòÐ;.B™T½ TÚ)déòñÉß9ØtÒ$ì¶¡è†g¢åqik)é_¸èÌ¡¿í K—.½!î¸ã{ÍŪãø!ÖÅê(˜j$ ToÔA´ô0WS½Ñ£àÙ¡½ºJ*{´ E—#ÿÁ½Mk›6-©ó;ïû=÷ìÂÃ÷ÞknWÙt§ý×—Ÿ{·&&ãΫ‘6ÖôãOóï!ÍÁį¾Ä9ŠKœ2ò6„ûR¬ƒj‘%ížÙ€±áKÏÛmãJÎÛA±ìÅÉéàp`Ñ`À¢Ý¹*¡ÀÔd¤¸±Ñ‘ _qƒggâGñÁBA ü½4’ö ´] Ilû!¹Þi>OÚkÛ¶­T¾@ñ(^xáL:Qª”äÊÂ8dbb¢Ôª%á¸wïÞ8EuMD@h jâ§ZËî‰eʳrïK%wAN¸dMluïEÆTvÃÓxœ1Ž_ÎWº6£C±ô£Ë¹Z'§ ÿÀhðòkÐSäħOœÁÚÖ-p½`+ ž "êŠÎ×0˜‚&Eg‡ CԶ툽©‡ÂBy ¬‰7qDFW €ÃFˆŽŒ/½4qß~oÝÝâš … $¥º‹±Ýë×R@"uŽzõ9öJ¼ç‚mà£kIùøõ/toîB[^uÄOyhçƒbQÀ½K]$7ºmôžYº@IDATïÅŠI àãc09÷‡lëpðàAŒ7Ç— k [çw‰·äPÈr8×l„Æ f¾µâîÔ?ZÉ×ã-î9ìÁ‡‘ôûZ‹:¾È!uë•IÿG™m…™y'%S’Óár¨ä/†cÊþCxáØIxZ¥"nB®iÏQ}ÿ«×,ŒOáâÜm±u«c73åWHÙ´—ýh;¢@ãá“«CP¶e1êq{Ì ï<àa!üy\‡>¦á¶ixþŸÎ±÷ÒÖ—Ll°»råJL˜0êèŒÜþ÷ßËl’mAQ)ÌfQ¨1Á Æ|”µëFRþÙ¤Üp’)N•èH®+é™Í·7m¤tSk \ÈŸ»(r©_Þ¤ö¿«®?ÞÈ3¢«*a6 óçÏÇĉ-ÜÙÎÔUy“ÃM7ÝTÔ4¢½ !Љ¢þö¢Þ¸¿`Ë@c È×)9Qqè“G=íç0æÁ/ö7eˆûï'mƒœ G#ôxyyÒMZŠ<’M—rE~ûñÉ'Ÿ€cÈ)i±æÎ+% K"XÞB“i J˜ëÊãÌÚ N.ÅgA•€ØJ¨üÏ@¬ °¿L U.{)înÒÛ·k—@‡m[(r_Â_›Œ¬³aŠÚž÷ò뼡ÓÛº–ûEš´§(Î@|€ÁÚ‚þÑq’¡ŸÖÓuúÝ¿{îoûn§:¾ùxwÐÁ{¶ ,@ma0qð˜gžyO<ñF Œƒõƒ¥zþs+ üF(¨ú!ðlwÒ8cù:œ¾lZ€e>£BoŠÃ%¿q¿~?’‡”,óC’C!;發ø»ñØc9gõÚ~2wUø–ï×áºá‡~À?þˆ_ý•’}š¼iþøãìÛ·íI åXL”G¤oß¾Jÿò*,Ý“‹ÏþÉ'òõÐà•{Ü¥8å5Ÿ·h„`P4F‚£Š#Рi}>ò(/=» Æg@³‚ C¹”¦Þc¢ëáÿpúÁ!Hýg³t7®  çå Ê5‰¿Ð%Õìhl~£bmAÙ0ðï¬!5 ´m‘°z º:V-•þEî¾ûnôêÕ _~ù%¶oß.±ó0ÿ@ÿCeÑ$PÈÔ3% QŸ!]f³0sƒ¿r5Òvý+¢+عÁ!E÷B¸©…3ô½Ý0uC!L…4¹¹h0‡4/.+È¥`¥-4h¤-`—ÈŒ\~˜Z*~Õ…zæå7r72\}öÙgÑ¿Iƒpù²I‚áðÇ»víRº :ÎdSžÄi¦ß[kÖø¥‘p0}]6êzj0¤»¥íCy®CŒm‰@ù~ê–s‰+@¹ õßA4éØŽ¬úM‡.QüY0¸8~²Ïž…–Þ²r.\*r~Žbh¡-ˆˆ´íCZ„\JŽSÁ€ñóóÔ)S$c.VÝÊqè£Uë&ZÙmÞ—¸j;k©kb¾0»\–zѱD¨ úr(;eIi@{lyÝÿP £3¶áØ1³¶`äÈ‘øz{¾¡H‹ü mè§§AÔÅô ½‰24þsÊôÖ¯ž·Ù9°FB&Žuðõ×_cõêÕR0YSÀílä:˜4YåM?í͵;Å2ª‚]h*¤R³˜¤¼h™B‘ ƒ‹t¾S~ Ó–CÔÇsl¦Ï‹»Žc=û(õ¬Y´M+uÝ ”NöÉqÊuüO+ÀÞ 7J¬žíÒ¥‹ä:Æ*\5Ým2xT×ÉeÒrXÓõe?!üÍwϹÝH”E‘kƒúE±ˆöR"ÀB L)é´åëâž“äøÛÝ1÷HŽtbmÁ_ç}1û/ó[v4Bz‰l 
Ø3Ó6?ÞÇU(\6o·Õ!ýû{ÚLÍQ£F¡{÷îx饗£XηÀ)ÅË›xkÇ‹£¼×V›Ç‚AmþôoàÞuqqHݱÓa–?—€ø ¼·øîu7°îÚìL›@¤7ÒICàS°WjohvÕË"—D5™´¦tÃl[0¬Wo´ød¶Â±ÊB0à=Ɇá•W^«qåd5lUÐïÖ[`½‘ù}ë ñÿÇE º6cô¤.”èÇßÐ},ZO4xé‹:qQv¨ƒ, mÍ1ÔZ„’Ì4lØ0ÉŃq€¢‡Ø }ì ÃoÙ,¸9kðÓ3ÞXOa™O\£Ü ¾Z ïéŠ :;¢;v(B ûâ ÓV–#þ²ªïâŒK×U~ž÷¤zA•‡€@¿ò°¯¶3é{¢ï­”¶8ºÐ{hüÖh:mj¡·^²^4oœ¥¨‚*öhÔ¨ø¡ËtåÊ•R Öwмvž³ NÅÞ%%\>ûì3Å·~¸â¾7õH‹ñç«>Xu —ã Òâ1 ÔdmPYÒûü7†@É¿I76Ÿè]`½¥Š|Iý-o1ä\¸hº´â“yJræ·ülÏj¬À'G Y÷ëÉk€É‡Â¶ÞÕ¤1¶GšÌöȵ0Ìß¡I–oWî-BÀ‡=šøà`´÷4*ØB__¼)^ ÌŸqͦ[ÎÕ) ²‘ Ìøíà ç©Ã2Ó[£:§W·®]³ ß.^Œ˜“Fƒ·¦°¹W§N³_KQéG ‡GÛ66mÖn/Á“’C1iÊÙšÜznqm‹@HHˆ"ð–Ñm·ÝfËTŠft,"Crï“»· ê³w¸É—Å>ÿõ×_8~ü¸ÂÿüóÏ—zËC¤„vQ|æ¡!(!låÊ.ƒr…· NµŽ[MÁ†øí#ú³¹¸úöéÆ9@öE³'€3ºotq)'ü*Ž÷îKn‚©EvɤPÈZ ‘Ú¬1 Bo#7FkÁ °Xõêå8 ˜òHÈáÃ.Q즰¡#HqÊ.‹\É‘eJÛ¹ [I-¼^•;ávί°{LQdNó9’Œ'½zt—TÓœù±á+/£î=w›D©J"ТE ìÞ½[ZÛYòŒ)+jì„¿'ù`Å~SVÇÐNô–íowÖ›Ÿâ軽páB¥Ã­·ÞZf‹2¨(TK„`P-?¶ª¹è°‡‡#yãßÊâ2ü‡£:+×\h1o.êÿo‚E£‹4úQ-ŽPÀýdàÅÔÿRº™æŒñöÂÉ@ôtrlt%uRýq/Fl­¯/¼:u”zeSzÚ’PÙ@|§òo@ɘî ü …Q>®eìÛ¯°d>ƒ^WÍ—ÜÀn™²¦@w-R®çJB€·‡dâ·ò yÝÝ…‚ ‘gAq¨~-íÛ e‹ÃoÍc í²?üPŠvÈmìðòË/[³‰ëZŠ@ñ¾…µqÛÅG B©ª…G=ãW®rÔdS¯ŽnÈêxÚÿ”z}¶á犆™YhG‰•dú«MkäÓíŠ=¢>ý\îêðìOÞ.vÂÑz ÜÑY¥q¦äH>·ÜŒ²ö^Ѿ-² ¼%ÜIHx"3[ÉÆèw÷hH#PéMÛ'Ö|—ž{'oí'çeÝ,®+NWìQðy²P VÙËKa¯»f›rôz? £¿ÍBËíåuþŽ„9sFž…‚ò$$`„` ¾7†m\œð,.O|Õþ8ô0T?Ä9saiȳ};´^ôr8Ñ[»LŠäæJ–Ûò1ðÊUÉ€Û3(  G,åÅÇíúGå\5Ù1XÉv™<;´—‹ô﹚-íJÁ“Ìþ7ß|õ)V‚LõŸûœë˜‚4Éuò¹É{ïÊE‹³S1„m1x,e‚Çàø2qvM5¥d1öûLÉèN®ÿ÷‚ÏQÄÃò¤mÛ¶aÍš5Ê÷Ýw 0w•Q¨µÁ Ö~ôewã×—.CâÏ¿Ø XçŽ~¸9+ m~ZbÓVÒŠÌã'pfðCÊ¡Þbà=üúÆ£çÅ0éðÏÉ58¢‰ø™3–”‰ Ê`Ç¿ñ—„ò z…]G•d¦CNÃÌ £G.Ñ~nÝûïSÆSšNŸ·æ’ƒµÖ‡ÙY4Ÿù¡º‹(W ê”ÅœœHΕÁKàè„öÂ7àbœ¡\VÉ™gÏ6ÇæhÞ¼¹ب\&ƒV[œ«íÊÅÂ+ Š˜ÇoìEûñm[akLÙ²Õ"ÐÍÃà,©ð# ÙœýÈ{öìYì5µúök°WSüꟑ¾go¡}ã~4 @éÿšx9ÐÒòúõ•oÖ ÜÙ¢%ÆQ š² ÀaCÁ‡ ª‰À-ä-S—‚o%''K.þù§’‰Ã;"N(TÖŽwß}WqMô¥ÿÃÓ§O—r'”õ\b¼ê€ ª÷çwëϣýêìólÜåÝCšÃU•¦•ëhß´ã¶MHúãOÊ\hz36df"fî<¹š}<õŸyZ¹.«‚sÁ>¨žl,\­‚±*ì±³çñýÀH¤eÎ"Ç?Šï¿ÿ¾”c¾¬Ö£Ç@ó¨)–$ýD6©”¹Q¦î±×ñä°ò¥Å™½9Ù¨í-,:•à‘´½û¯³¿Ã^#¬aï Aeƒo'°ª~ÅŠÒ€«V­’rpCN¶d|È» }CÚ‚+CbwÉÿû¿ÿC}˜8Îr¬Akì3­¹ÄuD çJ8Ž÷êetDzcî¸m3|zY¾i{uêD–ù”n:Šì¦ XxHXµZjÏ8|Dá»Ñ‚£g½ ]2Õbxo²g˜úâKxcÎlp"6cá`òäÉàl‡eBv¢ü®··a}liÞú·Fž‹ó Ü9N”ýQ!Uÿì°sJµuáäÍ·YW•øúdÿ»‘Eªä¨á¤WÑ|Ƈ…±ˆ¶"0|øp)²`& Ïü`^¶l8^@ç&Îx¦Ÿ¾Ýivgu&‰vÆpx¸–pvžñç1êþ®èج䟯í̦š={ö`ÆŒ’`Ì5Zú>ß6l2R}é{§Ë‡g ŽÖ ê«B0¨~ŸY™­X­žæ‡ ¥–IŸœ"Eòãk5ŸÜn}v¦ýJv),ÊëÀž›ŸõXòuÝ{ïAã·ÞDäÌYrüi?ßx2B®ï™™ƒ¹¢sHsÌ™3ì^ 9HoQOŸ ÃÃ”Š¹EO´÷~–¶û_y(‹³ZÕúÛ¤lÞŠ‹nÀš½{)GC”«Aîàçé…1tí9 UÕ=ÊÂó ÅŸ£î û{ Žœ˜´v½ÜÝî9øÉ±vë «”cöË<íÚAS ôèÓR¡Ä=0–ýÞ¶}û ã&J0T@žš€øå¦}Q¹®°³–Þ¼[/ù×—,u(8×õCÓ÷,Õý…ÉmÁO³ <8œ1­›S'ÿõ7r.]’†p© ÿÁƒ”áXH !ÂÉÛ[ª •¢¼±Á¹s&•½ž­ m3¢Wl:Ç'ÀMå>¨ fUðîÝKªaíÃ!RÍo8xû/_B>ågPS¿~ý¤ Šq¯LB¼ºAUæ€Dþ*ƒ ‘‚ßî Ái=ÜÁ™+o”:ïÞ'ÚçfJü}Î|üF‡ýU¬Ø¯S„®Î׺âJÃWÑ$óM´õH$ÛÞÒúüóÏ%¯9<62ŠÔ“¦ÁˆfZto^²ŸéØØXI>vì˜4ÿqóðDxÃ×]§›R—GŽ'# tdÓ\L·“–Ya…Z…@ɾqµ q³%E pØÃd!ÿpI»•ˆßÞvàѪZ~5¿Ð±‚ƒƒ1þ|»téR‹V‘½e/ý¿3v%+AJœ ¢M ÉZ0àí‡M›63$&$$¨¸ŽQðÒK/áÎ;ï”êïï¢ÇBò~p ¯HF^!Bó±GG×Ã&¹_eœ9ƒ³¿?¤ÀP…,Àì,JCú|J@¥‹@3·¥é^ãú8;i0ù>wl=“§<ˆÙ®uÁ¶\¸:k0ñnwpTD2ĈlŒ(““! ÞÉ»¥C®ãó»ÕW–eþ^²±+ ,¸Ú£Çú¸á¾Î¤É¸¤—lŽE襵 îₘÝvíõuµ ¡1¨]Ÿ·ÅÝrÒŸÏ?Eúô‹ãàÇÉ‹ê=1Ê¢_MºHüõ7‡·Ão_u Sý«0ó¡xÎþ¦74#.¦îØépÜÊhÐЃ£ýŸë‘@¡dDiØà±%r* -_ˆÍ©°²µÙ³¥4ãÔ¤>ìªhïíü‡]¹xyiHàäxõZöÆýïƒ_Ü:x¦‚ÖX¸-€£¦d[sÏ=÷`àÀRèeu›½²Ÿ§÷vr•{í¢N ÀÁ –<ÿ,ø¨MäÞ¢tQ¶jX5îìYj>k|nê-õÈ>wG;w+AïŠaõîÞ |”5%ëñEì ¤RðwÊ: ô3p*빪Óx×ÓHE`‡Òsò‘C; qŒN‘ˠγ%®‡¼!žiGá–y®9WQÏù:<õ’MÛ Ô¯_ (Dy;ŠCÁ™ƒì¤ÿ¶3¥M•ÞôÕÁùº5sB ·°/°©WÁ øµõÖ[ÿøb¿ûÆ‚¸ñÖ8°]@Ðè'¬«Åµ>™. 
ÜüQÔ[èï;n´/^Û©K' ·CÖÁZ ×Ãà:&·YnÏwrGfݾÒÁ×óž÷F¯²ý™ŽH4à©E™¸tÝ$¸¸ÑðSòÀã´Õ H À”í7N`*¨¸5iŒfÓ§Uƒ•Vý%†eŸÂÊ„EÊB#uWñýõ/ðBýוºÚZÕ× ¿ÉÙh³µŸ+™¨¼Ga5õ¦çÆN8Aš5umꄞÍÍ6-ê¶)O\ž¥°°Ð¶¾åÃ^«Õàǧ½ð…PÞ|ÚäÅpo'L#‚íÊ’bRŒ8vÍRáñÙ\染: ,…–²œ[ŒU}‚Aõù¬ÄJ«0Žò*Tá%ßðÒþIY}¶F–YÆLÌŽžŠO›}wÃsT÷8IÑÿî(ÚâßßK‹ùOxIyøžËZ q,,™HÁ@|„€17WJ†dÌv£ÀÝ~wßÎ !“W—ÎHÙ´Y¾´{fžšH¹d9ÿaÔ›oí·¤ø?tõ²L×í°ƒh(/@†·…_ I¬Š‹%õfƒÃ­gõ¨HU¾»‹ww[%øøj« jÕÇ]ñ7»ð;\~ù•B'¶VÑÊ|ƒÙ.JQ 9T±šØÆ@¦–?,„aÊ5ç7§VÎ;¯,˳cG4ûè庲 º˜\7ޔᦈE´\0¿Ñ\èÔÈþƒ¿®§ ý„â·:¶UqíB0¨ŠŸJ Z“sPц räìë[¡w¯#Õ쯿Wèœ%,/!¡XB..¶¤C þj†À]]°ò€^±Œ¿ñîƒÂø°š}–Õa¹B0¨ŸR5^cÈìYˆ (‚úä»wÁBAðØ1ɉì2–A¥{‹h==)š¡ýDIò^:ÊÅ*sÖ¸¹¡ÙŒ•õ¤íÞ¤µë•kQ¨Ù°M² ¦Îû/éQ‡4öv£pÆâ'¼fò•swâ[U9¸×šYy¿¾íŠŸªÄýº ÓîíHݲ Ö‘åº6l€ÀGFÈ—ezæX Y'Oµöȉr/x¶ µ×ÝðÅç-Ú„``G¿p#†§û¹ÓQãoUÜ`%# ƒJþÄô‹€ÙðQtfЃHÛ¹«Ð©›LyMÞ}§PÑ(ÊaµRžèб22Š ˜5iÃ_3€@@ P©A¥Â/&¯êd_¼„¨9ŸR(eûÙ!5®n~jêÜ~[¡·b½uáR¯ 5ÉåF ¬dH1Ù`XóɃr~‡õåKlG( ¢ J‰€ J œèV;¸L¡‘9þBaÄI zÇF–(ut·“GÁÑ™’Öo@؈Gm¦Ð8Yþ÷t”üIãlÉg3¨%@@l%”,ÁZûàXE¿íó[YGZôìܹða)1UÀÃC ç­€@ ˆW€%Xk7õÈ­’½,˜ ™™ˆühf¹¢¥DO÷îBÎ…‹äEa?ò¡K` \4(×uÜèà¿%­À`¿ápÕºÞèP¢¿@@ PÁ @SÔ †AÝ÷J7£»~ý†ƒè/æ+±²Î†9ˆ…Ïöí¶Wõ†]Þ‰x y×ñLð+U}¹%Z_\ª$ëøzOñ‘h ¬ŠyHÉ2¢Wˆ3îëäBf&ÅK¢T¢… fÀ " ƒPt”ȳJÓ­Úõ™ýrò³1?v†ú?Ž@2º¬ætâš“WgáB%* º»ƒ3>Qxêä}óðÔ¢Lä¤Y²GGý\ðÍO!TóïCM\~ñDÝšxç➈€“·7œŠöÙµQà \UùNu8c?þH^#M’aLÇœ˜iå;aŒÎéÇ|—©<åæÓz¼¼¼ðhšoý’­ò27ŸÎÃ_'«V21ymâ\»ƒÚýù‹»/çG‡p–Èhz[,nw“Bý×—þ„üÜ\»Ýœüê ÑäIvÛª[e~~>Þœl±ì5‰K1:p:zv³¨¯NOä!5Û6Óá¿ô¸–d@ÛdG¼åp5Ñþ÷å¿Ëz îbi{ÁØÅ§ç£Ž‡íP@ ¢‚AE#.æ«Vh´æz9e´½ÐÄ$°×&×q¬ƒ¢âȼÕýüKÒ2œÌ>jqù`aáÿ°¦Í‹úêt‘’e+ÈëOÎÌ'Á@¾2Ÿ½Ý5p"ݬÁŽlàG9Ô´åLÞû= Ñ)ùp¥¯Þã}\ñöp¡\ ‚…€ØJ¨(¤Å<Õÿ‡*rÝ~d¨¥$G‚LdÒ1;zš]8eî£í…_ì¶U‡JGI‹øí¾m}³©¾/7 †tsQWIe7z-Úì-8¥ÇsK2%¡€t”Hq1Ù"|²±ì]am#**„Æ@†( ¬h:u ‚F> }šýȇZ7WòhoÝ­Ú_óÐjJ÷ÞðUìl$èãb0+êÜ]gܵyªj{ð[üŠýæDX¬ øh˜G¡jÿ†zÒ-eaíÑ'±ÊJDÀ£MëJœ½r¦^7½½o¥ã–/à¥úoâ…ú¯ÚÏUSr ‹!ß'ùAZè娸Ñ0O hï‚ÝçóàIÚ€»º¢m¼ W >é…iCò‘ž“úu46zÞŠ°G™d’ÂÚÖ0Tâ«V(‹9Õ¸¼,ˆûSÖá¶ÿ–XsàéäU.w;+z ¨;=»—Ëø%´;ðQRb{>ìk#6·õRèÜØ‰„û}ì#ê7Š@ét…7:«è/TYfEMA¶1 g²ãçÄ%Ub—rÎcñõ6žUbqe´ˆGorÅM-,5"Þ¤X™>´úm¹”$b˜JB@h * x1­@ *"p4ó Ö&¯R–öIÌûTw|œ|•ºÊ(|õ&ôôïHæ¬KZ‡üm“NUƺJ2g,¹-n%¯öNèßÎÙÆµ‘µËžñÆÚ#:‹0 ÈG‹Gz»¢¡Ÿx+ ΂÷Æ‚Ác(FÔL±,ã($ê㥨…o7šQi÷¸#õìHûG™Ÿ·îñ{Z6è«´é”N ‚”[òâƒõŒ‡÷²´µ`·ÄT7¢Wõ¸/±Êš‰€Ekæç*îJ Pb8ÙÑñ¬Ã6ýÇ/À•œ‹6õQ‘—Ÿ‡¢Þ°˜*6/ßÄ}fQW•/Ò( Ò¤UY…^+{¼ýk6X‹ H PÕ‚AUûDÄzå„ÀδÍGÎ4dPì©vÛùá̪üÊ ¥ñßàrÆ}Ž(Ý5›úªXq8\ ;Á.óÈÓ`ïEûY3+ã>~=¤Ãà¹éèóA*ž[š‰+ñ´@Aµ±•P+?vqÓµ 3YÇ1þÒ0üÖv‡]«þí¤ªqo…´² M&å:`À–îmì¶—Ge’>_Äδ;tn~fF½/C–Ùm¯J•n…8/TÄe{s1õ÷l¶¿)‡‡kÞøšêùŠ÷G˜ZR‚A-ù ÅmÖn8±þqþ‚_Ûl³c0òQ•h[êßèæéx³=݆h]$º6®J˶YKæÎhä§A…9V‡C¾­M!Rƒš¹Ël[2o³mtÅ$Š«ÀäÂ+¢ᯒC Á J~,bQ²CàÏäßð_æiÀʲêçHŠYÆLx;ùûƆ<>ª;±·Á·OzKêùˆ‚dJÁ¾|ù„|)”reSqLȰZä59Jþ$·‹sÍD@5ósw%È5šTîj8JbÕÿmÜ\ôó½m=:¨‡°)Gæ^Ec·f6õrgV Ë9…÷"WÕªsû†NØöºNEÀ‰9;RТª’‰s94$'n²¦6Á–q¬ÛÅuÍD@lÕÌÏUÜÕ ÀªÕœ+áÈ ;g÷ÈŠ¾Ñ+¶«d¤—gi¤ÇVý_Ç}ZäBâób1öø§GÞ8YŸˆ‡Ï÷GDî»c²ÊŸã!,‹ÿsÂìòÔ†J'­]š8£[3ç*#ȸ¿q¿ívA#ÊçðÄÍæ$O2¯8×|„Æ æÆâKˆÀ…Ñã°¦ð €M§MEã·,ÝèJ8M¹³Çê ¬ x4`¹6q¸Ž£ßC¦1{3v`SÊRì{ÌŸÆL—’&±ç·-VÛ°|û1µ_—ê?ˆ|KZ­³á•‹ÀƒÝ\Q‡l–ü›‹øô|^œðÒ]îðóûÉTÎìâS¯ÜŬU|ƒ ¿üZäêâWÿ\$Oe3ð–AN¾ÙÒ\½Ùª_]§.ŸÈ:‚_“~Rª>Šz ¹F[Ÿ»°ìSX™°HâÛœº{Ò·+}¸ž{ ?Æ¥ÔíJß‚­©)×¢Puè×Ö‹Æ{ãW|0ýaO o„ªóáTðJ„Æ ‚ÓUmX0m%Ȥq%Uª¦À@Œ6‡óóLInòõUÇÿ\^«õùÙà×ð¿àW­«•kÊï§”­ ì½ ¦Ý,ŠÿÏ[FFäm#ý“‰¯ÿ ݯdAü4AM¬Y¸ì\4•o‘¯^—( &„` ¾ Bèqþ \48R6oÁ™ÁÂ]µšB=:–jAœ‹€½¬é«ØÙî? 
A.õ¥¦RÖc_ÆN ¶ó9g°"áŒz»ÒH;¶Ñ¢/$-Âõ¯ðLð+6m¢B ¨|ÄVBåb*ƒgUä-{ÄölwÀÄÛ ¼½`>‹ù‰yñ6¡ŒÕ¼ócg!!Ïdw ®e€@ òƒÊÿ Ä jìñ ‘·&ªà}oHþ…2)Ö‘{Ë;•u×rÃñGò\Ó…ÛcAŠ!I îð½|8"Ö.ºÔsÔ,êJB@•¼˜¶z pý§p®SGZlÖÙ³7´h ^ ƒO›}7­eV½¸ ;>dŠÇÅ‹‘ššŠ´´4899áR\8´n˜2Z­}¥£3œðHÀX‡Bç9ø’6Ýìs‡<•8 U!T‘B,£j"1ej™-lMÒRü™ò:xtÁsõ-ûÊl’ˆ…—'N`Ó¦M8zô(âââŽèíí^½zaÈ!èØ±dö œç`KêŸRwd¡‘N.rC@å­¸:" qq“Ÿ ))….ß%(¨ÐvëÆ C:æDO“ª¿Œ›a£PÏÅdÔhÍ[×:ëׯÇÚµkS¬%ddd`ûöíÒáááW^y (²ï{$‰ÙNáë+Šì#ŠC@‡µ˜© ÀûÿmZ‚Øo¿ƒ1ÇÖoŸoÁÉÇß.Yp#uÎ0›Œø>iöm¥#b ÷Ì7bÙ²eHHH°»žàà`ðáëë‹g΄…Y†)f»ž={Joÿ]ºtA`` ÍºX8|ø0fÍš.Ë4mÚ4Lœ8>ø \eqþ9q Îd·¨›õ:6´Ý ­Æ¾½‚³¸Ê!”;Äb‚ò@€e:c.¥†³CüVïÙ.ÔNKÅW}ù¦M^…œ¹¢½ØŽ`åÊ•X²d ôª@M.´Âô#F ¨ˆ­Þ:` ‚Z(‘7oüýýqë­·ÊUÒYΙ`QIg³Obeâ8ÞºI\ •€€ *t1å#pvè¤üýO¡5z}2š}ð~¡<åݸ;m+¶¤Ù|4ë?¬K^!þ#Ë{Êø™™™’Êß¾}Jî¾ûnŒ?¾H@ÝiñâÅÊehh(._¿]RŽÌÇŒ3ðý÷ߣaÆ 'dJÔÇ+×êÂ'QÓð€ß0ø:û©«•r´. ]+×¢ ”BwW~ØŠ‘Ë #ís%ðÔIl(§Ø)kÑëÃóðƒ´<ˆ÷ó9£¡Lxþùç¡ x›€âo¾ù¦]¡` ­Og´ÕÊ?~|ÈtïØ¸òÔaU丬žŸ9 ½¼o†&ÂEzøs,™n¾ùf¼ñÆ`·C{”¦OÁëWŸÅ„à‰69xB&¶CXY÷[äºd!aøe/k#5± ÂÎ;ѯ_? £ŒMíáRÎyÜs¶‡E~fΧÃFËCZœçÄLÃu},E]|Ÿ7ÿÁ¢M\e€Ð”=¦bÄ F ëáèuíŠt´Y¶¸‚g¯Øév¦mÂÛ×^"KþÝ…NœfHÅg”™MÙ>“'O–ÉÆŽ‹éÓ§; ˜onì $Á9âóbå®’¦@­-Öû3vIí™]’Ù.Yáe[5M|ÍF(àö$}¾ˆ™¡f•ʧ³ŽaMâR©¼6yŽfþgÃ#*²E@e‹§M Pnèóõ˜NÙ ™¦Gýx«ÀñC–¶î—|øE.²²²$VŽ\8uêTŒ3ÆaTBfä7ûeñ ¥>œ#avô4©ÌÔ¶»tÆ2?sZenO¾?‚O]¸pÇŽ“ÊÛSÿÆ^«´ÌlÒiiü7Ò¼ê:vmdm‚Ll°©Þz‰Ì½ŠLC†Ü,Î@ ƒ2Q !¨øÁy9÷¼4•lÉoo^~¨3¯k”'ê/j ­ÞôßÜ•RHøá‡’jß^?uÝQo@Oÿdú%iNd‘òQ&¿Ü©»*_Jg]ƒldµ1ˆâ8 œzyº*~Ü¡µ{(îô½O:83kDdúƒò6ÌÜ+_JçãY‡ñ…R–‰¤qŸÈ—â,”ÂÆ  @CT.1 ¾‘‚ñ*r.\¬ÜÅ”ÓìüöϪ}5ñVÁu‡Ã—’©‰êHpBƒïÚA›kú/nt1 Ïë]Ñ»wo5«Ý2¿Ù«Ð2¿­7]ÒE¾D‡.íñ{ÀOp1Zêó HïsžçMû÷ïÇâØ×]Vúʅؼ¬jýü-c%ä³1+ê™ÍâüqôT ô{Dzb3…Uf£Æ‘ãÐÄ­¹Ÿ¸J‡€ J‡›èU…ˆúxNZMù,åÓèéH'»5Éûòï6þX©æ‡úhô}G8e˜ØùÚ|Ĺ€ŸýÏ๼— ÅÌoöF½©Œ§.œ9~)'4JÕÓã&`pÓ{єȡ¦\ Ç3cÈùÛ‘Õ6ùNFh ZpèäoöÎZ(]•ßÓ§Ñïã#+#Í…qŸ#:/RáSâÉq~ìÇØžö·T­Ë×I¡•¿iaiÏ î#Ê@ñ‚Añ±œU ©Ä OL,tE® ªN.‚BZD#o¬"{Ä[S` –îm$uý‡×ÞD½å­à’à!±óþ|ü£—ÝŽTûd’ÀoÛŸ6ûÎÞPRi»â‚Ýöº›Ìqºv튶Ûâų¡±k3¬hmŠÕðVÄ ¦yÝŒÈn ϰºÒu½ð&hÚ±‰ÝqÏQúe6n r©¯´×wiˆ÷;Þ"8’qœ¶Y¦R×c_úNôõé'W‰³@@ PJ„`PJàD·ÊC€#†®Y…¸E‹)ŸAŽÝ…8QT¾F“^µÛVÝ*§G¾n×’Ÿïƒíxë`qËßá¢qÁc^ÄÊ0ó›ó¸qãðø+·¬ù_©T¬û©é‡(5üæ¾,a!Ü/úÂ㲯RÏãþp}®éÂ¥ãï”u`õÿÙœ“ ONÓ E0Œk€emÍëR˜ ç HÕ'cnÌG6íŒÓ†Ð½pÒ8Ù´‰ €@ øÁ øX Î*„€ï-7ƒÚ@ß·XS¬Ûù©³K¤Ü=,甤Yø´\%Î@)^ ¥MtT$^NÞ(êHMMÅÇ›m š5k&4º‘< œ™÷ïÝ/¶àŠY[À18;$g‰”):ïR¬Öº†&IæqJqŬð©2{©Ï²Ïâ§Ç[!l‹Á™ ¥G@h Jè)¨2|öÙgR:d^'CâXœè¨´Äûõ¼oÏä»ß¬EÈ"»õà×Èåv‡p‚–þIäoŠh(o_Lö~ßnŸ’TrÀ¦ù!K í’nL‡/ü å€c„`àÑ"¨lÞ¼ÿþû¯²Ö & yóæÊuI 3öJ“x¿^&½¿ÉŽ#_“ä×ðQô[r“͹Å"P7­3¬Í`JJJBƒ4mëÑ|Ê!”¶bd@¹#žžŽo¾ùF™§G:t¨r]’GRäHƒl+PϹúxß.uÏ‘]Û|hIàܰ E'<7´däg ˜|¸j]%^6Bô"Ú{šâøùù)‚, ”dM‚W ¨x„`Pñ˜‹e†§6NI1í©»»»K9JkWÀ9 Ng“Öölðkx$`¬ymÍE¹Ä¡‰:wõ9Ž_ÛlC;Nr“ræh‹2éõæHŠr£³îY«fPŽ0õòB@ü¯+/dŸrFàÌ™3ذÁœZšëÕ«WªYÓ iø$Æl0'zTJ+t,“|2û¨äJ©ÞzPwÒd½ä:$Ø¥‘3E,¤ÈÉ“Šâí@Ù" ƒ²ÅSŒ&¨0¾þúke® 6L¹æ'ú+ùw‹:G_R$Áýu¥™Ë\çˆ2 鉕8»"o#X“Z{a¤tÙßÄ}: #YHaA¥(ᤰqD›@@ P:„`P:ÜD/@¥"ÀƆ¬1éå—_gNTÓ—q³1åÚÄ"Ý÷Âs/áÇxË ‰<×q›=2 qMìÞ˜k̵¨ÓétÊuº& _Ç}J±ãLÖq¥Þº ),œÌeÝ,®rF@å °^ Pذ¯(2 øâ‹/¶¾}û¢sçÎÊ5"r¯`ÑõùH6$Rò%Û(jæ)8çH°&®û(òMëj\ͽŒEñ_ÚÔs–Åﯛ×Å l)ÓÚœ•!WJ£<5ò5¹Úâl-¤,Ž_€+9531–Å‹ @B@UèÃK©ÝìNÛŠ±¢·nûažetÞ~ûm%f×=òÈ#r“r–ƒqÅÒø…’'Ò¨*ìMßÝé[áªqµ{ìJß"å à.8h^ÌL|H‚=A‚y8reL”I-ì1n‘«q8s?þLþM¹– ÖBŠ$œ8Hê$÷g€@ l^ e‹§M P*ôùzLzQAðÛësñR}Û7u˜÷éOž4ç#à:fôÜsÏ¡sÿŽhàÚü°ß”ú7ID…˜NZ%­ÖÊUÊùfŸ;p®«mxa…AUà´ÏKâ¿1º·Îƒspp¤!þ#ÁÛ9ª\ƒj4Pއ×1 ÎýpÓºKõ»Ò¶`kÚF ¾à:n»Ýw€M›¨Ê!”=¦bD@‰à0¿s¤~¼? 
mclust/man/dens.Rd0000644000176200001440000000451213175051432013557 0ustar liggesusers\name{dens}
\alias{dens} \title{ Density for Parameterized MVN Mixtures } \description{ Computes densities of observations in parameterized MVN mixtures. } \usage{ dens(modelName, data, logarithm = FALSE, parameters, warn=NULL, \dots) } \arguments{ \item{modelName}{ A character string indicating the model. The help file for \code{\link{mclustModelNames}} describes the available models. } \item{data}{ A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables. } \item{logarithm}{ A logical value indicating whether or not the logarithm of the component densities should be returned. The default is to return the component densities, obtained from the log component densities by exponentiation. } \item{parameters}{ The parameters of the model: \describe{ \item{\code{pro}}{ The vector of mixing proportions for the components of the mixture. } \item{\code{mean}}{ The mean for each component. If there is more than one component, this is a matrix whose kth column is the mean of the \emph{k}th component of the mixture model. } \item{\code{variance}}{ A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for \code{\link{mclustVariance}} for details. } } } \item{warn}{ A logical value indicating whether or not a warning should be issued when computations fail. The default is \code{warn=FALSE}. } \item{\dots}{ Catches unused arguments in indirect or list calls via \code{do.call}. } } \value{ A numeric vector whose \emph{i}th component is the density of the \emph{ith} observation in \code{data} in the MVN mixture specified by \code{parameters}. } \seealso{ \code{\link{cdens}}, \code{\link{mclust.options}}, \code{\link{do.call}} } \examples{ \dontrun{ faithfulModel <- Mclust(faithful) Dens <- dens(modelName = faithfulModel$modelName, data = faithful, parameters = faithfulModel$parameters) Dens ## alternative call do.call("dens", faithfulModel)} } \keyword{cluster} mclust/man/meE.Rd0000644000176200001440000001216113465001575013340 0ustar liggesusers\name{meE} \alias{meE} \alias{meV} \alias{meX} \alias{meEII} \alias{meVII} \alias{meEEI} \alias{meVEI} \alias{meEVI} \alias{meVVI} \alias{meEEE} \alias{meEVE} \alias{meVEE} \alias{meVVE} \alias{meEEV} \alias{meVEV} \alias{meEVV} \alias{meVVV} \alias{meXII} \alias{meXXI} \alias{meXXX} \title{EM algorithm starting with M-step for a parameterized Gaussian mixture model} \description{ Implements the EM algorithm for a parameterized Gaussian mixture model, starting with the maximization step. 
} \usage{ meE(data, z, prior=NULL, control=emControl(), Vinv=NULL, warn=NULL, \dots) meV(data, z, prior=NULL, control=emControl(), Vinv=NULL, warn=NULL, \dots) meX(data, prior = NULL, warn = NULL, \dots) meEII(data, z, prior=NULL, control=emControl(), Vinv=NULL, warn=NULL, \dots) meVII(data, z, prior=NULL, control=emControl(), Vinv=NULL, warn=NULL, \dots) meEEI(data, z, prior=NULL, control=emControl(), Vinv=NULL, warn=NULL, \dots) meVEI(data, z, prior=NULL, control=emControl(), Vinv=NULL, warn=NULL, \dots) meEVI(data, z, prior=NULL, control=emControl(), Vinv=NULL, warn=NULL, \dots) meVVI(data, z, prior=NULL, control=emControl(), Vinv=NULL, warn=NULL, \dots) meEEE(data, z, prior=NULL, control=emControl(), Vinv=NULL, warn=NULL, \dots) meEVE(data, z, prior=NULL, control=emControl(), Vinv=NULL, warn=NULL, \dots) meVEE(data, z, prior=NULL, control=emControl(), Vinv=NULL, warn=NULL, \dots) meVVE(data, z, prior=NULL, control=emControl(), Vinv=NULL, warn=NULL, \dots) meEEV(data, z, prior=NULL, control=emControl(), Vinv=NULL, warn=NULL, \dots) meVEV(data, z, prior=NULL, control=emControl(), Vinv=NULL, warn=NULL, \dots) meEVV(data, z, prior=NULL, control=emControl(), Vinv=NULL, warn=NULL, \dots) meVVV(data, z, prior=NULL, control=emControl(), Vinv=NULL, warn=NULL, \dots) meXII(data, prior = NULL, warn = NULL, \dots) meXXI(data, prior = NULL, warn = NULL, \dots) meXXX(data, prior = NULL, warn = NULL, \dots) } \arguments{ \item{data}{ A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables. } \item{z}{ A matrix whose \code{[i,k]}th entry is the conditional probability of the ith observation belonging to the \emph{k}th component of the mixture. } \item{prior}{ Specification of a conjugate prior on the means and variances. The default assumes no prior. } \item{control}{ A list of control parameters for EM. The defaults are set by the call \code{emControl()}. } \item{Vinv}{ An estimate of the reciprocal hypervolume of the data region, when the model is to include a noise term. Set to a negative value or zero if a noise term is desired, but an estimate is unavailable --- in that case function \code{hypvol} will be used to obtain the estimate. The default is not to assume a noise term in the model through the setting \code{Vinv=NULL}. } \item{warn}{ A logical value indicating whether or not certain warnings (usually related to singularity) should be issued when the estimation fails. The default is given by \code{mclust.options("warn")}. } \item{\dots }{ Catches unused arguments in indirect or list calls via \code{do.call}. } } \value{ A list including the following components: \item{modelName}{ A character string identifying the model (same as the input argument). } \item{z}{ A matrix whose \code{[i,k]}th entry is the conditional probability of the \emph{i}th observation belonging to the \emph{k}th component of the mixture. } \item{parameters}{ \describe{ \item{\code{pro}}{ A vector whose \emph{k}th component is the mixing proportion for the \emph{k}th component of the mixture model. If the model includes a Poisson term for noise, there should be one more mixing proportion than the number of Gaussian components. } \item{\code{mean}}{ The mean for each component. If there is more than one component, this is a matrix whose kth column is the mean of the \emph{k}th component of the mixture model. } \item{\code{variance}}{ A list of variance parameters for the model. 
The components of this list depend on the model specification. See the help file for \code{\link{mclustVariance}} for details. } \item{\code{Vinv}}{ The estimate of the reciprocal hypervolume of the data region used in the computation when the input indicates the addition of a noise component to the model. } } } \item{loglik}{ The log likelihood for the data in the mixture model. } \item{Attributes:}{ \code{"info"} Information on the iteration.\cr \code{"WARNING"} An appropriate warning if problems are encountered in the computations.\cr } } \seealso{ \code{\link{em}}, \code{\link{me}}, \code{\link{estep}}, \code{\link{mclust.options}} } \examples{ meVVV(data = iris[,-5], z = unmap(iris[,5])) } \keyword{cluster} mclust/man/densityMclust.Rd0000644000176200001440000000560713504361631015504 0ustar liggesusers\name{densityMclust} \alias{densityMclust} \title{Density Estimation via Model-Based Clustering} \description{ Produces a density estimate for each data point using a Gaussian finite mixture model from \code{Mclust}. } \usage{ densityMclust(data, \dots) } \arguments{ \item{data}{ A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables. } \item{\dots }{ Additional arguments for the \code{\link{Mclust}} function. In particular, setting the arguments \code{G} and \code{modelNames} allow to specify the number of mixture components and the type of model to be fitted. By default an "optimal" model is selected based on the BIC criterion. } } \value{ An object of class \code{densityMclust}, which inherits from \code{Mclust}, is returned with the following slot added: \item{density}{The density evaluated at the input \code{data} computed from the estimated model.} } %\details{} \references{ Scrucca L., Fop M., Murphy T. B. and Raftery A. E. (2016) mclust 5: clustering, classification and density estimation using Gaussian finite mixture models, \emph{The R Journal}, 8/1, pp. 205-233. Fraley C. and Raftery A. E. (2002) Model-based clustering, discriminant analysis and density estimation, \emph{Journal of the American Statistical Association}, 97/458, pp. 611-631. Fraley C., Raftery A. E., Murphy T. B. and Scrucca L. (2012) mclust Version 4 for R: Normal Mixture Modeling for Model-Based Clustering, Classification, and Density Estimation. \emph{Technical Report} No. 597, Department of Statistics, University of Washington. } \author{Revised version by Luca Scrucca based on the original code by C. Fraley and A.E. Raftery.} \seealso{ \code{\link{plot.densityMclust}}, \code{\link{Mclust}}, \code{\link{summary.Mclust}}, \code{\link{predict.densityMclust}}. 
}
\examples{
dens <- densityMclust(faithful$waiting)
summary(dens)
summary(dens, parameters = TRUE)
plot(dens, what = "BIC", legendArgs = list(x = "topright"))
plot(dens, what = "density", data = faithful$waiting)

dens <- densityMclust(faithful, modelNames = "EEE", G = 3)
summary(dens)
summary(dens, parameters = TRUE)
plot(dens, what = "density", data = faithful,
     drawlabels = FALSE, points.pch = 20)
plot(dens, what = "density", type = "hdr")
plot(dens, what = "density", type = "hdr", prob = c(0.1, 0.9))
plot(dens, what = "density", type = "hdr", data = faithful)
plot(dens, what = "density", type = "persp")

\dontrun{
dens <- densityMclust(iris[,1:4], G = 2)
summary(dens, parameters = TRUE)
plot(dens, what = "density", data = iris[,1:4],
     col = "slategrey", drawlabels = FALSE, nlevels = 7)
plot(dens, what = "density", type = "hdr", data = iris[,1:4])
plot(dens, what = "density", type = "persp", col = grey(0.9))
}
}
\keyword{cluster}
mclust/man/hypvol.Rd0000644000176200001440000000301313175052576014154 0ustar liggesusers\name{hypvol}
\alias{hypvol}
\title{
Approximate Hypervolume for Multivariate Data
}
\description{
Computes a simple approximation to the hypervolume of a multivariate data set.
}
\usage{
hypvol(data, reciprocal=FALSE)
}
\arguments{
\item{data}{
A numeric vector, matrix, or data frame of observations. Categorical
variables are not allowed. If a matrix or data frame, rows
correspond to observations and columns correspond to variables.
}
\item{reciprocal}{
A logical variable indicating whether or not the reciprocal hypervolume
is desired rather than the hypervolume itself. The default is to return
the hypervolume.
}
}
\value{
Returns the minimum of the hypervolume computed from simple variable bounds
and that computed from variable bounds of the principal component scores.
Used for the default hypervolume parameter for the noise
component when observations are designated as noise in
\code{Mclust} and \code{mclustBIC}.
}
\references{
A. Dasgupta and A. E. Raftery (1998).
Detecting features in spatial point processes with clutter via model-based
clustering.
\emph{Journal of the American Statistical Association 93:294-302}.

C. Fraley and A.E. Raftery (1998).
\emph{Computer Journal 41:578-588}.

C. Fraley and A. E. Raftery (2002).
Model-based clustering, discriminant analysis, and density estimation.
\emph{Journal of the American Statistical Association 97:611-631}.
}
\seealso{
\code{\link{mclustBIC}}
}
\examples{
hypvol(iris[,-5])
}
\keyword{cluster}
mclust/man/mclustLoglik.Rd0000644000176200001440000000160013317451770015301 0ustar liggesusers\name{mclustLoglik}
\alias{mclustLoglik}
\alias{print.mclustLoglik}
\title{Log-likelihood from a table of BIC values for parameterized Gaussian mixture models}
\description{
Compute the maximal log-likelihood from a table of BIC values contained in a
\code{'mclustBIC'} object as returned by function \code{\link{mclustBIC}}.
}
\usage{
mclustLoglik(object, \dots)
}
\arguments{
\item{object}{An object of class \code{'mclustBIC'} containing the BIC values
as returned by a call to \code{\link{mclustBIC}}.
}
\item{\dots}{
Catches unused arguments in an indirect or list call via \code{do.call}.
}
}
\value{
An object of class \code{'mclustLoglik'} containing the maximal log-likelihood
values for the Gaussian mixture models provided as input.
}
\seealso{
\code{\link{mclustBIC}}.
} \examples{ \dontrun{ BIC <- mclustBIC(iris[,1:4]) mclustLoglik(BIC) } } \keyword{cluster} mclust/man/clustCombi.Rd0000644000176200001440000001217613475242121014737 0ustar liggesusers\name{clustCombi} \alias{clustCombi} \alias{print.clustCombi} \alias{summary.clustCombi} \alias{print.summary.clustCombi} \title{ Combining Gaussian Mixture Components for Clustering } \description{ Provides a hierarchy of combined clusterings from the EM/BIC Gaussian mixture solution to one class, following the methodology proposed in the article cited in the references. } \usage{ clustCombi(object = NULL, data = NULL, \dots) } \arguments{ \item{object}{ An object returned by \code{\link{Mclust}} giving the optimal (according to BIC) parameters, conditional probabilities, and log-likelihood, together with the associated classification and its uncertainty. If not provided, the \code{data} argument must be specified. } \item{data}{ A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables. If the \code{object} argument is not provided, the function \code{\link{Mclust}} is applied to the given \code{data} to fit a mixture model.} \item{\dots}{ Optional arguments to be passed to called functions. Notably, any argument (such as the numbers of components for which the BIC is computed; the models to be fitted by EM; initialization parameters for the EM algorithm, ...) to be passed to \code{\link{Mclust}} in case \code{object = NULL}. Please see the \code{\link{Mclust}} documentation for more details. } } \details{ Mclust provides a Gaussian mixture fitted to the data by maximum likelihood through the EM algorithm, for the model and number of components selected according to BIC. The corresponding components are hierarchically combined according to an entropy criterion, following the methodology described in the article cited in the references section. The solutions with numbers of classes between the one selected by BIC and one are returned as a \code{clustCombi} class object. } \value{ A list of class \code{clustCombi} giving the hierarchy of combined solutions from the number of components selected by BIC to one. The details of the output components are as follows: \item{classification}{A list of the data classifications obtained for each combined solution of the hierarchy through a MAP assignment} \item{combiM}{A list of matrices. \code{combiM[[K]]} is the matrix used to combine the components of the (K+1)-classes solution to get the K-classes solution. Please see the examples.} \item{combiz}{A list of matrices. \code{combiz[[K]]} is a matrix whose [i,k]th entry is the probability that observation i in the data belongs to the kth class according to the K-classes combined solution.} \item{MclustOutput}{A list of class \code{Mclust}. Output of a call to the Mclust function (as provided by the user or the result of a call to the Mclust function) used to initiate the combined solutions hierarchy: please see the \code{\link{Mclust}} function documentation for details.} } \references{ J.-P. Baudry, A. E. Raftery, G. Celeux, K. Lo and R. Gottardo (2010). Combining mixture components for clustering. \emph{Journal of Computational and Graphical Statistics, 19(2):332-353.} } \author{ J.-P. Baudry, A. E. Raftery, L. 
Scrucca } \seealso{ \code{\link{plot.clustCombi}} } \examples{ data(Baudry_etal_2010_JCGS_examples) # run Mclust using provided data output <- clustCombi(data = ex4.1) \dontrun{ # or run Mclust and then clustcombi on the returned object mod <- Mclust(ex4.1) output <- clustCombi(mod) } output summary(output) \dontrun{ # run Mclust using provided data and any further optional argument provided output <- clustCombi(data = ex4.1, modelName = "EEV", G = 1:15) } # plot the hierarchy of combined solutions plot(output, what = "classification") # plot some "entropy plots" which may help one to select the number of classes plot(output, what = "entropy") # plot the tree structure obtained from combining mixture components plot(output, what = "tree") # the selected model and number of components obtained from Mclust using BIC output$MclustOutput # the matrix whose [i,k]th entry is the probability that i-th observation in # the data belongs to the k-th class according to the BIC solution head( output$combiz[[output$MclustOutput$G]] ) # the matrix whose [i,k]th entry is the probability that i-th observation in # the data belongs to the k-th class according to the first combined solution head( output$combiz[[output$MclustOutput$G-1]] ) # the matrix describing how to merge the 6-classes solution to get the # 5-classes solution output$combiM[[5]] # for example the following code returns the label of the class (in the # 5-classes combined solution) to which the 4th class (in the 6-classes # solution) is assigned. Only two classes in the (K+1)-classes solution # are assigned the same class in the K-classes solution: the two which # are merged at this step... output$combiM[[5]] %*% c(0,0,0,1,0,0) # recover the 5-classes soft clustering from the 6-classes soft clustering # and the 6 -> 5 combining matrix all( output$combiz[[5]] == t( output$combiM[[5]] \%*\% t(output$combiz[[6]]) ) ) # the hard clustering under the 5-classes solution head( output$classification[[5]] ) } \keyword{ cluster } mclust/man/plot.densityMclust.Rd0000644000176200001440000001132413427502250016450 0ustar liggesusers\name{plot.densityMclust} \alias{plot.densityMclust} \alias{plotDensityMclust1} \alias{plotDensityMclust2} \alias{plotDensityMclustd} \title{Plots for Mixture-Based Density Estimate} \description{ Plotting methods for an object of class \code{'mclustDensity'}. Available graphs are plot of BIC values and density for univariate and bivariate data. For higher data dimensionality a scatterplot matrix of pairwise densities is drawn. 
}
\usage{
\method{plot}{densityMclust}(x, data = NULL, what = c("BIC", "density", "diagnostic"), \dots)

plotDensityMclust1(x, data = NULL, hist.col = "lightgrey", hist.border = "white", 
                   breaks = "Sturges", \dots)

plotDensityMclust2(x, data = NULL, nlevels = 11, levels = NULL, 
                   prob = c(0.25, 0.5, 0.75),
                   points.pch = 1, points.col = 1, points.cex = 0.8, \dots)

plotDensityMclustd(x, data = NULL, nlevels = 11, levels = NULL, 
                   prob = c(0.25, 0.5, 0.75),
                   points.pch = 1, points.col = 1, points.cex = 0.8, 
                   gap = 0.2, \dots)
}
\arguments{
  \item{x}{An object of class \code{'mclustDensity'} obtained from a call to
    the \code{\link{densityMclust}} function.}

  \item{data}{Optional data points.}

  \item{what}{The type of graph requested:
    \describe{
      \item{\code{"density"} =}{a plot of estimated density; if \code{data} is
        also provided the density is plotted over data points (see Details
        section).}
      \item{\code{"BIC"} =}{a plot of BIC values for the estimated models
        versus the number of components.}
      \item{\code{"diagnostic"} =}{diagnostic plots (only available for the
        one-dimensional case, see \code{\link{densityMclust.diagnostic}})}
    }
  }

  \item{hist.col}{The color to be used to fill the bars of the histogram.}

  \item{hist.border}{The color of the border around the bars of the histogram.}

  \item{breaks}{See the argument in function \code{\link[graphics]{hist}}.}

  \item{points.pch, points.col, points.cex}{The character symbols, colors, 
    and magnification to be used for plotting \code{data} points.}

  \item{nlevels}{An integer, the number of levels to be used in plotting
    contour densities.}

  \item{levels}{A vector of density levels at which to draw the contour lines.}

  \item{prob}{A vector of probability levels for computing HDR. Only used if
    \code{type = "hdr"} and supersedes the previous \code{nlevels} and
    \code{levels} arguments.}

  \item{gap}{Distance between subplots, in margin lines, for the matrix of
    pairwise scatterplots.}

  \item{\dots}{Additional arguments passed to \code{\link{surfacePlot}}.}
}
\details{The function \code{plot.densityMclust} allows one to obtain a plot of
the estimated density or a graph of BIC values for the evaluated models.

If \code{what = "density"} the produced plot depends on the dimensionality of
the data.

For one-dimensional data a call with no \code{data} provided produces a plot
of the estimated density over a sensible range of values. If \code{data} is
provided the density is over-plotted on a histogram for the observed data.

For two-dimensional data further arguments available are those accepted by
the \code{\link{surfacePlot}} function. In particular, the density can be
represented through \code{"contour"}, \code{"hdr"}, \code{"image"}, and
\code{"persp"} types of graph.
For \code{type = "hdr"} Highest Density Regions (HDRs) are plotted for
probability levels \code{prob}. See \code{\link{hdrlevels}} for details.

For higher dimensionality a scatterplot matrix of pairwise projected
densities is drawn.
}
% \value{}
\author{Luca Scrucca}
\seealso{
  \code{\link{densityMclust}},
  \code{\link{surfacePlot}},
  \code{\link{densityMclust.diagnostic}},
  \code{\link{Mclust}}.
} \examples{ \dontrun{ dens <- densityMclust(faithful$waiting) summary(dens) summary(dens, parameters = TRUE) plot(dens, what = "BIC", legendArgs = list(x = "topright")) plot(dens, what = "density", data = faithful$waiting) dens <- densityMclust(faithful) summary(dens) summary(dens, parameters = TRUE) plot(dens, what = "density", data = faithful, drawlabels = FALSE, points.pch = 20) plot(dens, what = "density", type = "hdr") plot(dens, what = "density", type = "hdr", prob = seq(0.1, 0.9, by = 0.1)) plot(dens, what = "density", type = "hdr", data = faithful) plot(dens, what = "density", type = "persp") dens <- densityMclust(iris[,1:4]) summary(dens, parameters = TRUE) plot(dens, what = "density", data = iris[,1:4], col = "slategrey", drawlabels = FALSE, nlevels = 7) plot(dens, what = "density", type = "hdr", data = iris[,1:4]) plot(dens, what = "density", type = "persp", col = grey(0.9)) } } \keyword{cluster} \keyword{dplot} mclust/man/mclustVariance.Rd0000644000176200001440000000743613175053400015613 0ustar liggesusers\name{mclustVariance} \alias{mclustVariance} \title{ Template for variance specification for parameterized Gaussian mixture models } \description{ Specification of variance parameters for the various types of Gaussian mixture models. } \usage{ mclustVariance(modelName, d = NULL, G = 2) } \arguments{ \item{modelName}{A character string specifying the model.} \item{d}{A integer specifying the dimension of the data.} \item{G}{An integer specifying the number of components in the mixture model.} } \details{The \code{variance} component in the {parameters} list from the output to e.g. \code{me} or \code{mstep} or input to e.g. \code{estep} may contain one or more of the following arguments, depending on the model: \describe{ \item{\code{modelName}}{ A character string indicating the model. } \item{\code{d}}{ The dimension of the data. } \item{\code{G}}{ The number of components in the mixture model. } \item{\code{sigmasq}}{ for the one-dimensional models (\code{"E"}, \code{"V"}) and spherical models (\code{"EII"}, \code{"VII"}). This is either a vector whose \emph{k}th component is the variance for the \emph{k}th component in the mixture model (\code{"V"} and \code{"VII"}), or a scalar giving the common variance for all components in the mixture model (\code{"E"} and \code{"EII"}). } \item{\code{Sigma}}{ For the equal variance models \code{"EII"}, \code{"EEI"}, and \code{"EEE"}. A \emph{d} by \emph{d} matrix giving the common covariance for all components of the mixture model. } \item{\code{cholSigma}}{ For the equal variance model {"EEE"}. A \emph{d} by \emph{d} upper triangular matrix giving the Cholesky factor of the common covariance for all components of the mixture model. } \item{\code{sigma}}{ For all multidimensional mixture models. A \emph{d} by \emph{d} by \emph{G} matrix array whose \code{[,,k]}th entry is the covariance matrix for the \emph{k}th component of the mixture model. } \item{\code{cholsigma}}{ For the unconstrained covariance mixture model \code{"VVV"}. A \emph{d} by \emph{d} by \emph{G} matrix array whose \code{[,,k]}th entry is the upper triangular Cholesky factor of the covariance matrix for the \emph{k}th component of the mixture model. } \item{\code{scale}}{ For diagonal models \code{"EEI"}, \code{"EVI"}, \code{"VEI"}, \code{"VVI"} and constant-shape models \code{"EEV"} and \code{"VEV"}. 
Either a \emph{G}-vector giving the scale of the covariance (the \emph{d}th root of its determinant) for each component in the mixture model, or a single numeric value if the scale is the same for each component. } \item{\code{shape}}{ For diagonal models \code{"EEI"}, \code{"EVI"}, \code{"VEI"}, \code{"VVI"} and constant-shape models \code{"EEV"} and \code{"VEV"}. Either a \emph{G} by \emph{d} matrix in which the \emph{k}th column is the shape of the covariance matrix (normalized to have determinant 1) for the \emph{k}th component, or a \emph{d}-vector giving a common shape for all components. } \item{\code{orientation}}{ For the constant-shape models \code{"EEV"} and \code{"VEV"}. Either a \emph{d} by \emph{d} by \emph{G} array whose \code{[,,k]}th entry is the orthonomal matrix whose columns are the eigenvectors of the covariance matrix of the \emph{k}th component, or a \emph{d} by \emph{d} orthonormal matrix if the mixture components have a common orientation. The \code{orientation} component is not needed in spherical and diagonal models, since the principal components are parallel to the coordinate axes so that the orientation matrix is the identity. } } In all cases, the value \code{-1} is used as a placeholder for unknown nonzero entries. } \keyword{cluster} mclust/man/plot.MclustDR.Rd0000644000176200001440000001146213504362015015300 0ustar liggesusers\name{plot.MclustDR} \alias{plot.MclustDR} \alias{plotEvalues.MclustDR} \title{Plotting method for dimension reduction for model-based clustering and classification} \description{ Graphs data projected onto the estimated subspace for model-based clustering and classification. } \usage{ \method{plot}{MclustDR}(x, dimens, what = c("scatterplot", "pairs", "contour", "classification", "boundaries", "density", "evalues"), symbols, colors, col.contour = gray(0.7), col.sep = grey(0.4), ngrid = 200, nlevels = 5, asp = NULL, \dots) } \arguments{ \item{x}{ An object of class \code{'MclustDR'} resulting from a call to \code{\link{MclustDR}}. } \item{dimens}{ A vector of integers giving the dimensions of the desired coordinate projections for multivariate data. } \item{what}{ The type of graph requested: \describe{ \item{\code{"scatterplot"} =}{a two-dimensional plot of data projected onto the first two directions specified by \code{dimens} and with data points marked according to the corresponding mixture component. By default, the first two directions are selected for plotting.} \item{\code{"pairs"} =}{a scatterplot matrix of data projected onto the estimated subspace and with data points marked according to the corresponding mixture component. 
By default, all the available directions are used, unless they have been specified by \code{dimens}.} \item{\code{"contour"} =}{a two-dimensional plot of data projected onto the first two directions specified by \code{dimens} (by default, the first two directions) with density contours for classes or clusters and data points marked according to the corresponding mixture component.} \item{\code{"classification"} =}{a two-dimensional plot of data projected onto the first two directions specified by \code{dimens} (by default, the first two directions) with classification region and data points marked according to the corresponding mixture component.} \item{\code{"boundaries"} =}{a two-dimensional plot of data projected onto the first two directions specified by \code{dimens} (by default, the first two directions) with uncertainty boundaries and data points marked according to the corresponding mixture component. The uncertainty is shown using a greyscale with darker regions indicating higher uncertainty. } \item{\code{"density"} =}{a one-dimensional plot of estimated density for the first direction specified by \code{dimens} (by default, the first one). A set of box-plots for each estimated cluster or known class are also shown at the bottom of the graph. } } } \item{symbols}{ Either an integer or character vector assigning a plotting symbol to each unique mixture component. Elements in \code{colors} correspond to classes in order of appearance in the sequence of observations (the order used by the function \code{factor}). The default is given by \code{mclust.options("classPlotSymbols")}. } \item{colors}{ Either an integer or character vector assigning a color to each unique cluster or known class. Elements in \code{colors} correspond to classes in order of appearance in the sequence of observations (the order used by the function \code{factor}). The default is given by \code{mclust.options("classPlotColors")}. } \item{col.contour}{ The color of contours in case \code{what = "contour"}. } \item{col.sep}{ The color of classification boundaries in case \code{what = "classification"}. } \item{ngrid}{ An integer specifying the number of grid points to use in evaluating the classification regions. } \item{nlevels}{ The number of levels to use in case \code{what = "contour"}. } \item{asp}{For scatterplots the \eqn{y/x} aspect ratio, see \code{\link{plot.window}}. } \item{\dots}{further arguments passed to or from other methods.} } %\details{} %\value{} \references{ Scrucca, L. (2010) Dimension reduction for model-based clustering. \emph{Statistics and Computing}, 20(4), pp. 471-484. 
} \author{Luca Scrucca} %\note{} \seealso{\link{MclustDR}} \examples{ \dontrun{ mod <- Mclust(iris[,1:4], G = 3) dr <- MclustDR(mod) plot(dr, what = "evalues") plot(dr, what = "pairs") plot(dr, what = "scatterplot", dimens = c(1,3)) plot(dr, what = "contour") plot(dr, what = "classification", ngrid = 200) plot(dr, what = "boundaries", ngrid = 200) plot(dr, what = "density") plot(dr, what = "density", dimens = 2) data(banknote) da <- MclustDA(banknote[,2:7], banknote$Status, G = 1:3) dr <- MclustDR(da) plot(dr, what = "evalues") plot(dr, what = "pairs") plot(dr, what = "contour") plot(dr, what = "contour", dimens = c(1,3)) plot(dr, what = "classification", ngrid = 200) plot(dr, what = "boundaries", ngrid = 200) plot(dr, what = "density") plot(dr, what = "density", dimens = 2) } } \keyword{multivariate} mclust/man/estep.Rd0000644000176200001440000000620513175052007013746 0ustar liggesusers\name{estep} \alias{estep} \title{ E-step for parameterized Gaussian mixture models. } \description{ Implements the expectation step of EM algorithm for parameterized Gaussian mixture models. } \usage{ estep( modelName, data, parameters, warn = NULL, \dots) } \arguments{ \item{modelName}{ A character string indicating the model. The help file for \code{\link{mclustModelNames}} describes the available models. } \item{data}{ A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables. } \item{parameters}{ A names list giving the parameters of the model. The components are as follows: \describe{ \item{\code{pro}}{ Mixing proportions for the components of the mixture. If the model includes a Poisson term for noise, there should be one more mixing proportion than the number of Gaussian components. } \item{\code{mean}}{ The mean for each component. If there is more than one component, this is a matrix whose kth column is the mean of the \emph{k}th component of the mixture model. } \item{\code{variance}}{ A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for \code{\link{mclustVariance}} for details. } \item{\code{Vinv}}{ An estimate of the reciprocal hypervolume of the data region. If set to NULL or a negative value, the default is determined by applying function \code{hypvol} to the data. Used only when \code{pro} includes an additional mixing proportion for a noise component. } } } \item{warn}{ A logical value indicating whether or not a warning should be issued when computations fail. The default is \code{warn=FALSE}. } \item{\dots}{ Catches unused arguments in indirect or list calls via \code{do.call}. } } \value{ A list including the following components: \item{modelName}{ A character string identifying the model (same as the input argument). } \item{z}{ A matrix whose \code{[i,k]}th entry is the conditional probability of the \emph{i}th observation belonging to the \emph{k}th component of the mixture. } \item{parameters}{ The input parameters. } \item{loglik}{ The log-likelihood for the data in the mixture model. } \item{Attributes}{ \code{"WARNING"}: an appropriate warning if problems are encountered in the computations. 
} } \seealso{ \code{\link{estepE}}, \dots, \code{\link{estepVVV}}, \code{\link{em}}, \code{\link{mstep}}, \code{\link{mclust.options}} \code{\link{mclustVariance}} } \examples{ \dontrun{ msEst <- mstep(modelName = "VVV", data = iris[,-5], z = unmap(iris[,5])) names(msEst) estep(modelName = msEst$modelName, data = iris[,-5], parameters = msEst$parameters)} } \keyword{cluster} mclust/man/chevron.Rd0000644000176200001440000000107612460535131014273 0ustar liggesusers\name{chevron} \alias{chevron} \title{Simulated minefield data} \usage{data(chevron)} \description{A set of simulated bivariate minefield data (1104 observations).} \references{ A. Dasgupta and A. E. Raftery (1998). Detecting features in spatial point processes with clutter via model-based clustering. \emph{Journal of the American Statistical Association 93:294-302}. C. Fraley and A.E. Raftery (1998). \emph{Computer Journal 41:578-588}. G. J. McLachlan and D. Peel (2000). \emph{Finite Mixture Models}, Wiley, pages 110-112. } \keyword{datasets} mclust/man/map.Rd0000644000176200001440000000225413175052667013417 0ustar liggesusers\name{map} \alias{map} \title{Classification given Probabilities} \description{ Converts a matrix in which each row sums to 1 to an integer vector specifying for each row the column index of the maximum. } \usage{ map(z, warn = mclust.options("warn"), \dots) } \arguments{ \item{z}{ A matrix (for example a matrix of conditional probabilities in which each row sums to 1 as produced by the E-step of the EM algorithm). } \item{warn}{ A logical variable indicating whether or not a warning should be issued when there are some columns of \code{z} for which no row attains a maximum. } \item{\dots }{ Provided to allow lists with elements other than the arguments can be passed in indirect or list calls with \code{do.call}. } } \value{ A integer vector with one entry for each row of z, in which the \emph{i}-th value is the column index at which the \emph{i}-th row of \code{z} attains a maximum. } \seealso{ \code{\link{unmap}}, \code{\link{estep}}, \code{\link{em}}, \code{\link{me}}. } \examples{ emEst <- me(modelName = "VVV", data = iris[,-5], z = unmap(iris[,5])) map(emEst$z) } \keyword{cluster} % docclass is function mclust/man/mclust-package.Rd0000644000176200001440000000312713314421047015525 0ustar liggesusers\name{mclust-package} \alias{mclust-package} \alias{mclust} \docType{package} \title{Gaussian Mixture Modelling for Model-Based Clustering, Classification, and Density Estimation} \description{Finite Gaussian mixture modelling fitted via EM algorithm for model-based clustering, classification, and density estimation, including Bayesian regularization and dimension reduction.} \details{For a quick introduction to \pkg{mclust} see the vignette \href{../doc/mclust.html}{A quick tour of mclust}.} \author{ Chris Fraley, Adrian Raftery and Luca Scrucca. Maintainer: Luca Scrucca \email{luca.scrucca@unipg.it} } \references{ Scrucca L., Fop M., Murphy T. B. and Raftery A. E. (2016) mclust 5: clustering, classification and density estimation using Gaussian finite mixture models, \emph{The R Journal}, 8/1, pp. 205-233. Fraley C. and Raftery A. E. (2002) Model-based clustering, discriminant analysis and density estimation, \emph{Journal of the American Statistical Association}, 97/458, pp. 611-631. Fraley C., Raftery A. E., Murphy T. B. and Scrucca L. (2012) mclust Version 4 for R: Normal Mixture Modeling for Model-Based Clustering, Classification, and Density Estimation. \emph{Technical Report} No. 
597, Department of Statistics, University of Washington. } \examples{ # Clustering mod1 <- Mclust(iris[,1:4]) summary(mod1) plot(mod1, what = c("BIC", "classification")) # Classification data(banknote) mod2 <- MclustDA(banknote[,2:7], banknote$Status) summary(mod2) plot(mod2) # Density estimation mod3 <- densityMclust(faithful$waiting) summary(mod3) plot(mod3, faithful$waiting) } \keyword{package} mclust/man/mclust2Dplot.Rd0000644000176200001440000001316513473255145015236 0ustar liggesusers\name{mclust2Dplot} \alias{mclust2Dplot} \title{Plot two-dimensional data modelled by an MVN mixture} \description{ Plot two-dimensional data given parameters of an MVN mixture model for the data. } \usage{ mclust2Dplot(data, parameters = NULL, z = NULL, classification = NULL, truth = NULL, uncertainty = NULL, what = c("classification", "uncertainty", "error"), addEllipses = TRUE, fillEllipses = mclust.options("fillEllipses"), symbols = NULL, colors = NULL, xlim = NULL, ylim = NULL, xlab = NULL, ylab = NULL, scale = FALSE, CEX = 1, PCH = ".", main = FALSE, swapAxes = FALSE, \dots) } \arguments{ \item{data}{ A numeric matrix or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables. In this case the data are two dimensional, so there are two columns. } \item{parameters}{ A named list giving the parameters of an \emph{MCLUST} model, used to produce superimposing ellipses on the plot. The relevant components are as follows: \describe{ \item{\code{pro}}{ Mixing proportions for the components of the mixture. There should one more mixing proportion than the number of Gaussian components if the mixture model includes a Poisson noise term. } \item{\code{mean}}{ The mean for each component. If there is more than one component, this is a matrix whose kth column is the mean of the \emph{k}th component of the mixture model. } \item{\code{variance}}{ A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for \code{\link{mclustVariance}} for details. } } } \item{z}{ A matrix in which the \code{[i,k]}th entry gives the probability of observation \emph{i} belonging to the \emph{k}th class. Used to compute \code{classification} and \code{uncertainty} if those arguments aren't available. } \item{classification}{ A numeric or character vector representing a classification of observations (rows) of \code{data}. If present argument \code{z} will be ignored. } \item{truth}{ A numeric or character vector giving a known classification of each data point. If \code{classification} or \code{z} is also present, this is used for displaying classification errors. } \item{uncertainty}{ A numeric vector of values in \emph{(0,1)} giving the uncertainty of each data point. If present argument \code{z} will be ignored. } \item{what}{ Choose from one of the following three options: \code{"classification"} (default), \code{"error"}, \code{"uncertainty"}. } \item{addEllipses}{ A logical indicating whether or not to add ellipses with axes corresponding to the within-cluster covariances. } \item{fillEllipses}{ A logical specifying whether or not to fill ellipses with transparent colors when \code{addEllipses = TRUE}. } \item{symbols}{ Either an integer or character vector assigning a plotting symbol to each unique class in \code{classification}. 
Elements in \code{symbols} correspond to classes in order of appearance in the
sequence of observations (the order used by the function \code{unique}).
The default is given by \code{mclust.options("classPlotSymbols")}.
}
\item{colors}{
Either an integer or character vector assigning a color to each unique
class in \code{classification}. Elements in \code{colors}
correspond to classes in order of appearance in the
sequence of observations (the order used by the function \code{unique}).
The default is given by \code{mclust.options("classPlotColors")}.
}
\item{xlim, ylim}{
Optional arguments specifying bounds for the ordinate, abscissa of the plot.
This may be useful when comparing plots.
}
\item{xlab, ylab}{
Optional argument specifying labels for the x-axis and y-axis.
}
\item{scale}{
A logical variable indicating whether or not the two chosen
dimensions should be plotted on the same scale, and
thus preserve the shape of the distribution.
Default: \code{scale=FALSE}
}
\item{CEX}{
An argument specifying the size of the plotting symbols.
The default value is 1.
}
\item{PCH}{
An argument specifying the symbol to be used when a classification
has not been specified for the data. The default value is a small dot ".".
}
\item{main}{
A logical variable or \code{NULL} indicating whether or not to add a title
to the plot identifying the dimensions used.
}
\item{swapAxes}{
A logical variable indicating whether or not the axes should be swapped
for the plot.
}
\item{\dots}{
Other graphics parameters.
}
}
\value{
A plot showing the data, together with the location of the mixture
components, classification, uncertainty, and/or classification errors.
}
\seealso{
\code{\link{surfacePlot}},
\code{\link{clPairs}},
\code{\link{coordProj}},
\code{\link{mclust.options}}
}
\examples{
\dontrun{
faithfulModel <- Mclust(faithful)

mclust2Dplot(faithful, parameters=faithfulModel$parameters, 
             z=faithfulModel$z, what = "classification", main = TRUE)

mclust2Dplot(faithful, parameters=faithfulModel$parameters, 
             z=faithfulModel$z, what = "uncertainty", main = TRUE)
}
}
\keyword{cluster}
mclust/man/partuniq.Rd0000644000176200001440000000141212460535131014464 0ustar liggesusers\name{partuniq}
\alias{partuniq}
\title{
Classifies Data According to Unique Observations
}
\description{
Gives a one-to-one mapping from unique observations to rows of a data matrix.
}
\usage{
partuniq(x)
}
\arguments{
\item{x}{Matrix of observations.}
}
\value{
A vector of length \code{nrow(x)} with integer entries. An observation
\code{k} is assigned an integer \code{i} whenever observation \code{i}
is the first row of \code{x} that is identical to observation \code{k}
(note that \code{i <= k}).
}
\seealso{
\code{\link{partconv}}
}
\examples{
set.seed(0)
mat <- data.frame(lets = sample(LETTERS[1:2],9,TRUE), nums = sample(1:2,9,TRUE))
mat

ans <- partuniq(mat)
ans

partconv(ans,consec=TRUE)
}
\keyword{cluster}
% Converted by Sd2Rd version 0.3-2.
mclust/man/MclustDRsubsel.Rd0000644000176200001440000001225313465001547015546 0ustar liggesusers\name{MclustDRsubsel}
\alias{MclustDRsubsel}
\alias{print.MclustDRsubsel}
\alias{MclustDRsubsel_classif}
\alias{MclustDRsubsel_cluster}
\alias{MclustDRrecoverdir}
\alias{MclustDRsubsel1cycle}
\alias{print.MclustDRsubsel}
\alias{summary.MclustDRsubsel}

\title{Subset selection for GMMDR directions based on BIC}

\description{
Implements a subset selection method for selecting the relevant directions 
spanning the dimension reduction subspace for visualizing the clustering or 
classification structure obtained from a finite mixture of Gaussian densities.}

\usage{
MclustDRsubsel(object, G = 1:9,
               modelNames = mclust.options("emModelNames"), 
               \dots,
               bic.stop = 0, bic.cutoff = 0,
               mindir = 1,
               verbose = interactive())
}

\arguments{
  \item{object}{An object of class \code{'MclustDR'} resulting from a call to \code{\link{MclustDR}}.}

  \item{G}{An integer vector specifying the numbers of mixture components or clusters.}

  \item{modelNames}{A vector of character strings indicating the models to be fitted. See \code{\link{mclustModelNames}} for a description of the available models.}

  \item{\dots}{Further arguments passed through \code{\link{Mclust}} or \code{\link{MclustDA}}.}

  \item{bic.stop}{A criterion to terminate the search. If the maximal BIC difference is less than \code{bic.stop} then the algorithm stops. \cr
  Two typical values are:
  \describe{
    \item{}{\code{0}: the algorithm stops when the BIC difference becomes negative (default);}
    \item{}{\code{-Inf}: the algorithm continues until all directions have been selected.}
  }}

  \item{bic.cutoff}{A value specifying how to select the simplest ``best'' model within \code{bic.cutoff} from the maximum value achieved. Setting this to \code{0} (default) simply selects the model with the largest BIC difference.}

  \item{mindir}{An integer value specifying the minimum number of directions to be estimated.}

  \item{verbose}{A logical or integer value specifying if and how much detailed information should be reported during the iterations of the algorithm. \cr
  Possible values are:
  \describe{
    \item{}{\code{0} or \code{FALSE}: no trace info is shown;}
    \item{}{\code{1} or \code{TRUE}: a trace info is shown at each step of the search;}
    \item{}{\code{2}: a more detailed trace info is shown.}
  }
  }
}

\details{
The GMMDR method aims at reducing the dimensionality by identifying a set of linear combinations, ordered by importance as quantified by the associated eigenvalues, of the original features which capture most of the clustering or classification structure contained in the data. This is implemented in \code{\link{MclustDR}}.

The \code{MclustDRsubsel} function implements the greedy forward search algorithm discussed in Scrucca (2010) to prune the set of all GMMDR directions. The criterion used to select the relevant directions is based on the BIC difference between a clustering model and a model in which the feature proposal has no clustering relevance. 
The steps are the following:

1. Select the first feature to be the one which maximizes the BIC difference between the best clustering model and the model which assumes no clustering, i.e. a single component.

2. Select the next feature amongst those not previously included, to be the one which maximizes the BIC difference.

3. Iterate the previous step until all the BIC differences for the inclusion of a feature become less than \code{bic.stop}.
At each step, the search over the model space is performed with respect to the model parametrisation and the number of clusters. } \value{ An object of class \code{'MclustDRsubsel'} which inherits from \code{'MclustDR'}, so it has the same components of the latter plus the following: \item{basisx}{The basis of the estimated dimension reduction subspace expressed in terms of the original variables.} \item{std.basisx}{The basis of the estimated dimension reduction subspace expressed in terms of the original variables standardized to have unit standard deviation.} } \references{ Scrucca, L. (2010) Dimension reduction for model-based clustering. \emph{Statistics and Computing}, 20(4), pp. 471-484. Scrucca, L. (2014) Graphical Tools for Model-based Mixture Discriminant Analysis. \emph{Advances in Data Analysis and Classification}, 8(2), pp. 147-165 } \author{Luca Scrucca} %\note{} \seealso{ \code{\link{MclustDR}}, \code{\link{Mclust}}, \code{\link{MclustDA}}. } \examples{ \dontrun{ # clustering data(crabs, package = "MASS") x <- crabs[,4:8] class <- paste(crabs$sp, crabs$sex, sep = "|") mod <- Mclust(x) table(class, mod$classification) dr <- MclustDR(mod) summary(dr) plot(dr) drs <- MclustDRsubsel(dr) summary(drs) table(class, drs$classification) plot(drs, what = "scatterplot") plot(drs, what = "pairs") plot(drs, what = "contour") plot(drs, what = "boundaries") plot(drs, what = "evalues") # classification data(banknote) da <- MclustDA(banknote[,2:7], banknote$Status) table(banknote$Status, predict(da)$class) dr <- MclustDR(da) summary(dr) drs <- MclustDRsubsel(dr) summary(drs) table(banknote$Status, predict(drs)$class) plot(drs, what = "scatterplot") plot(drs, what = "classification") plot(drs, what = "boundaries")} } \keyword{multivariate}mclust/man/plot.clustCombi.Rd0000644000176200001440000000443513475242100015710 0ustar liggesusers\name{plot.clustCombi} \alias{plot.clustCombi} \title{ Plot Combined Clusterings Results } \description{ Plot combined clusterings results: classifications corresponding to \code{Mclust}/BIC and to the hierarchically combined classes, "entropy plots" to help to select a number of classes, and the tree structure obtained from combining mixture components. } \usage{ \method{plot}{clustCombi}(x, what = c("classification", "entropy", "tree"), \dots) } \arguments{ \item{x}{ Object returned by \code{\link{clustCombi}} function. } \item{what}{ Type of plot. } \item{\dots}{ Other arguments to be passed to other functions: \code{\link{combiPlot}}, \code{\link{entPlot}}, \code{\link{combiTree}}. Please see the corresponding documentations. } } \value{ Classifications are plotted with \code{\link{combiPlot}}, which relies on the \code{Mclust} plot functions. Entropy plots are plotted with \code{\link{entPlot}} and may help to select a number of classes: please see the article cited in the references. Tree plots are produced by \code{\link{combiTree}} and graph the tree structure implied by the clusters combining process. } \references{ J.-P. Baudry, A. E. Raftery, G. Celeux, K. Lo and R. Gottardo (2010). Combining mixture components for clustering. \emph{Journal of Computational and Graphical Statistics, 19(2):332-353.} } \author{ J.-P. Baudry, A. E. Raftery, L. Scrucca } \seealso{ \code{\link{combiPlot}}, \code{\link{entPlot}}, \code{\link{combiTree}}, \code{\link{clustCombi}}. 
} \examples{ \dontrun{ data(Baudry_etal_2010_JCGS_examples) ## 1D Example output <- clustCombi(data = Test1D, G=1:15) # plots the hierarchy of combined solutions, then some "entropy plots" which # may help one to select the number of classes (please see the article cited # in the references) plot(output) ## 2D Example output <- clustCombi(data = ex4.1) # plots the hierarchy of combined solutions, then some "entropy plots" which # may help one to select the number of classes (please see the article cited # in the references) plot(output) ## 3D Example output <- clustCombi(data = ex4.4.2) # plots the hierarchy of combined solutions, then some "entropy plots" which # may help one to select the number of classes (please see the article cited # in the references) plot(output) } } \keyword{ cluster } mclust/man/classError.Rd0000644000176200001440000000375113467317376014761 0ustar liggesusers\name{classError} \alias{classError} \title{Classification error} \description{ Computes the error rate of a given classification relative to the known classes, and the location of misclassified data points.} \usage{ classError(classification, class) } \arguments{ \item{classification}{ A numeric, character vector or factor specifying the predicted class labels. Must have the same length as \code{class}. } \item{class}{ A numeric, character vector or factor of known true class labels. Must have the same length as \code{classification}. } } \value{ A list with the following two components: \item{misclassified}{ The indexes of the misclassified data points in a minimum error mapping between the predicted classification and the known true classes. } \item{errorRate}{ The error rate corresponding to a minimum error mapping between the predicted classification and the known true classes. } } \details{ If more than one mapping between predicted classification and the known truth corresponds to the minimum number of classification errors, only one possible set of misclassified observations is returned. } \seealso{ \code{\link{map}}, \code{\link{mapClass}}, \code{\link{table}} } \examples{ (a <- rep(1:3, 3)) (b <- rep(c("A", "B", "C"), 3)) classError(a, b) (a <- sample(1:3, 9, replace = TRUE)) (b <- sample(c("A", "B", "C"), 9, replace = TRUE)) classError(a, b) class <- factor(c(5,5,5,2,5,3,1,2,1,1), levels = 1:5) probs <- matrix(c(0.15, 0.01, 0.08, 0.23, 0.01, 0.23, 0.59, 0.02, 0.38, 0.45, 0.36, 0.05, 0.30, 0.46, 0.15, 0.13, 0.06, 0.19, 0.27, 0.17, 0.40, 0.34, 0.18, 0.04, 0.47, 0.34, 0.32, 0.01, 0.03, 0.11, 0.04, 0.04, 0.09, 0.05, 0.28, 0.27, 0.02, 0.03, 0.12, 0.25, 0.05, 0.56, 0.35, 0.22, 0.09, 0.03, 0.01, 0.75, 0.20, 0.02), nrow = 10, ncol = 5) cbind(class, probs, map = map(probs)) classError(map(probs), class) } \keyword{cluster} mclust/man/wreath.Rd0000644000176200001440000000103113175055360014115 0ustar liggesusers\name{wreath} \alias{wreath} \title{Data Simulated from a 14-Component Mixture} \usage{data(wreath)} \description{ A dataset consisting of 1000 observations drawn from a 14-component normal mixture in which the covariances of the components have the same size and shape but differ in orientation. } \references{ C. Fraley, A. E. Raftery and R. Wehrens (2005). Incremental model-based clustering for large datasets with small clusters. \emph{Journal of Computational and Graphical Statistics 14:1:18}.
} \keyword{datasets} mclust/man/logLik.Mclust.Rd0000644000176200001440000000151413175052642015320 0ustar liggesusers\name{logLik.Mclust} \alias{logLik.Mclust} \title{Log-Likelihood of a \code{Mclust} object} \description{ Returns the log-likelihood for a \code{'Mclust'} object.} \usage{ \method{logLik}{Mclust}(object, \dots) } \arguments{ \item{object}{an object of class \code{'Mclust'} resulting from a call to \code{\link{Mclust}}.} \item{\dots}{further arguments passed to or from other methods.} } % \details{} \value{Returns an object of class \code{'logLik'} with an element providing the maximized log-likelihood, and further arguments giving the number of (estimated) parameters in the model (\code{"df"}) and the sample size (\code{"nobs"}).} \author{Luca Scrucca} % \note{} \seealso{\code{\link{Mclust}}.} \examples{ \dontrun{ irisMclust <- Mclust(iris[,1:4]) summary(irisMclust) logLik(irisMclust) } } \keyword{multivariate} mclust/man/mclustBIC.Rd0000644000176200001440000001650613424536031014461 0ustar liggesusers\name{mclustBIC} \alias{mclustBIC} \alias{EMclust} \alias{print.mclustBIC} \title{BIC for Model-Based Clustering} \description{ BIC for parameterized Gaussian mixture models fitted by EM algorithm initialized by model-based hierarchical clustering.} \usage{ mclustBIC(data, G = NULL, modelNames = NULL, prior = NULL, control = emControl(), initialization = list(hcPairs = NULL, subset = NULL, noise = NULL), Vinv = NULL, warn = mclust.options("warn"), x = NULL, verbose = interactive(), \dots) } \arguments{ \item{data}{ A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables. } \item{G}{ An integer vector specifying the numbers of mixture components (clusters) for which the BIC is to be calculated. The default is \code{G=1:9}, unless the argument \code{x} is specified, in which case the default is taken from the values associated with \code{x}. } \item{modelNames}{ A vector of character strings indicating the models to be fitted in the EM phase of clustering. The help file for \code{\link{mclustModelNames}} describes the available models. The default is: \describe{ \item{\code{c("E", "V")}}{for univariate data} \item{\code{mclust.options("emModelNames")}}{for multivariate data (n > d)} \item{\code{c("EII", "VII", "EEI", "EVI", "VEI", "VVI")}}{the spherical and diagonal models for multivariate data (n <= d)} } unless the argument \code{x} is specified, in which case the default is taken from the values associated with \code{x}. } \item{prior}{ The default assumes no prior, but this argument allows specification of a conjugate prior on the means and variances through the function \code{priorControl}. } \item{control}{ A list of control parameters for EM. The defaults are set by the call \code{emControl()}. } \item{initialization}{ A list containing zero or more of the following components: \describe{ \item{\code{hcPairs}}{ A matrix of merge pairs for hierarchical clustering such as produced by function \code{\link{hc}}. 
\cr For multivariate data, the default is to compute a hierarchical agglomerative clustering tree by applying function \code{\link{hc}} with model specified by \code{mclust.options("hcModelName")}, and data transformation set by \code{mclust.options("hcUse")}.\cr All the input or a subset as indicated by the \code{subset} argument is used for initial clustering.\cr The hierarchical clustering results are then used to start the EM algorithm from a given partition.\cr For univariate data, the default is to use quantiles to start the EM algorithm. However, hierarchical clustering could also be used by calling \code{\link{hc}} with model specified as \code{"V"} or \code{"E"}. } \item{\code{subset}}{ A logical or numeric vector specifying a subset of the data to be used in the initial hierarchical clustering phase. By default no subset is used unless the number of observations exceeds the value specified by \code{mclust.options("subset")}. The \code{subset} argument is ignored if \code{hcPairs} are provided. Note that to guarantee exact reproducibility of results a seed must be specified (see \code{\link{set.seed}}). } \item{\code{noise}}{ A logical or numeric vector indicating an initial guess as to which observations are noise in the data. If numeric the entries should correspond to row indexes of the data. If supplied, a noise term will be added to the model in the estimation. } } } \item{Vinv}{ An estimate of the reciprocal hypervolume of the data region. The default is determined by applying function \code{hypvol} to the data. Used only if an initial guess as to which observations are noise is supplied. } \item{warn}{ A logical value indicating whether or not certain warnings (usually related to singularity) should be issued when estimation fails. The default is controlled by \code{\link{mclust.options}}. } \item{x}{ An object of class \code{'mclustBIC'}. If supplied, \code{mclustBIC} will use the settings in \code{x} to produce another object of class \code{'mclustBIC'}, but with \code{G} and \code{modelNames} as specified in the arguments. Models that have already been computed in \code{x} are not recomputed. All arguments to \code{mclustBIC} except \code{data}, \code{G} and \code{modelNames} are ignored and their values are set as specified in the attributes of \code{x}. Defaults for \code{G} and \code{modelNames} are taken from \code{x}. } \item{verbose}{ A logical controlling if a text progress bar is displayed during the fitting procedure. By default it is \code{TRUE} if the session is interactive, and \code{FALSE} otherwise. } \item{\dots}{ Catches unused arguments in indirect or list calls via \code{do.call}. } } \value{ Returns an object of class \code{'mclustBIC'} containing the Bayesian Information Criterion for the specified mixture models and numbers of clusters. Auxiliary information is returned as attributes. The corresponding \code{print} method shows the matrix of values and the top models according to the BIC criterion. } \references{ Scrucca L., Fop M., Murphy T. B. and Raftery A. E. (2016) mclust 5: clustering, classification and density estimation using Gaussian finite mixture models, \emph{The R Journal}, 8/1, pp. 205-233. Fraley C. and Raftery A. E. (2002) Model-based clustering, discriminant analysis and density estimation, \emph{Journal of the American Statistical Association}, 97/458, pp. 611-631. Fraley C., Raftery A. E., Murphy T. B. and Scrucca L. (2012) mclust Version 4 for R: Normal Mixture Modeling for Model-Based Clustering, Classification, and Density Estimation.
\emph{Technical Report} No. 597, Department of Statistics, University of Washington. } \seealso{ \code{\link{priorControl}}, \code{\link{emControl}}, \code{\link{mclustModel}}, \code{\link{summary.mclustBIC}}, \code{\link{hc}}, \code{\link{me}}, \code{\link{mclustModelNames}}, \code{\link{mclust.options}} } \examples{ irisBIC <- mclustBIC(iris[,-5]) irisBIC plot(irisBIC) \dontrun{ subset <- sample(1:nrow(iris), 100) irisBIC <- mclustBIC(iris[,-5], initialization=list(subset = subset)) irisBIC plot(irisBIC) irisBIC1 <- mclustBIC(iris[,-5], G=seq(from=1,to=9,by=2), modelNames=c("EII", "EEI", "EEE")) irisBIC1 plot(irisBIC1) irisBIC2 <- mclustBIC(iris[,-5], G=seq(from=2,to=8,by=2), modelNames=c("VII", "VVI", "VVV"), x= irisBIC1) irisBIC2 plot(irisBIC2) } nNoise <- 450 set.seed(0) poissonNoise <- apply(apply( iris[,-5], 2, range), 2, function(x, n) runif(n, min = x[1]-.1, max = x[2]+.1), n = nNoise) set.seed(0) noiseInit <- sample(c(TRUE,FALSE),size=nrow(iris)+nNoise,replace=TRUE, prob=c(3,1)) irisNdata <- rbind(iris[,-5], poissonNoise) irisNbic <- mclustBIC(data = irisNdata, G = 1:5, initialization = list(noise = noiseInit)) irisNbic plot(irisNbic) } \keyword{cluster} % docclass is function mclust/man/combMat.Rd0000644000176200001440000000173713475242100014213 0ustar liggesusers\name{combMat} \alias{combMat} \title{ Combining Matrix } \description{ Creates a combining matrix. } \usage{ combMat(K, l1, l2) } \arguments{ \item{K}{ The original number of classes: the matrix will define a combining from K to (K-1) classes. } \item{l1}{ Label of one of the two classes to be combined. } \item{l2}{ Label of the other class to be combined. } } \value{ If \code{z} is a vector (length \emph{K}) whose \emph{k}th entry is the probability that an observation belongs to the \emph{k}th class in a \emph{K}-classes classification, then \code{combiM \%*\% z}, where \code{combiM} is the matrix returned by \code{combMat(K, l1, l2)}, is the vector (length \emph{K-1}) whose \emph{k}th entry is the probability that the observation belongs to the \emph{k}th class in the \emph{K-1}-classes classification obtained by merging classes \code{l1} and \code{l2} in the initial classification. } \author{ J.-P. Baudry, A. E. Raftery, L. Scrucca } \seealso{ \code{\link{clustCombi}}, \code{\link{combiPlot}} } %\examples{} \keyword{ cluster } mclust/man/Baudry_etal_2010_JCGS_examples.Rd0000644000176200001440000000371113314163075020231 0ustar liggesusers\name{Baudry_etal_2010_JCGS_examples} \alias{Baudry_etal_2010_JCGS_examples} \alias{ex4.1} \alias{ex4.2} \alias{ex4.3} \alias{ex4.4.1} \alias{ex4.4.2} \alias{Test1D} \docType{data} \title{Simulated Example Datasets From Baudry et al. (2010)} \description{ Simulated datasets used in Baudry et al. (2010) to illustrate the proposed mixture components combining method for clustering. Please see the cited article for a detailed presentation of these datasets. The data frame with name exN.M is presented in Section N.M in the paper. Test1D (not in the article) has been simulated from a Gaussian mixture distribution in R. ex4.1 and ex4.2 have been simulated from a Gaussian mixture distribution in R^2. ex4.3 has been simulated from a mixture of a uniform distribution on a square and a spherical Gaussian distribution in R^2. ex4.4.1 has been simulated from a Gaussian mixture model in R^2. ex4.4.2 has been simulated from a mixture of two uniform distributions in R^3. } \usage{data(Baudry_etal_2010_JCGS_examples)} \format{ \code{ex4.1} is a data frame with 600 observations on 2 real variables. \code{ex4.2} is a data frame with 600 observations on 2 real variables.
\code{ex4.3} is a data frame with 200 observations on 2 real variables. \code{ex4.4.1} is a data frame with 800 observations on 2 real variables. \code{ex4.4.2} is a data frame with 300 observations on 3 real variables. \code{Test1D} is a data frame with 200 observations on 1 real variable. } \references{ J.-P. Baudry, A. E. Raftery, G. Celeux, K. Lo and R. Gottardo (2010). Combining mixture components for clustering. \emph{Journal of Computational and Graphical Statistics, 19(2):332-353.} } \examples{ \dontrun{ data(Baudry_etal_2010_JCGS_examples) output <- clustCombi(data = ex4.4.1) output # is of class clustCombi # plots the hierarchy of combined solutions, then some "entropy plots" which # may help one to select the number of classes plot(output) } } \keyword{datasets} mclust/man/MclustBootstrap.Rd0000644000176200001440000001064413376462046016004 0ustar liggesusers\name{MclustBootstrap} \alias{MclustBootstrap} \alias{print.MclustBootstrap} \title{Resampling-based Inference for Gaussian finite mixture models} \description{Bootstrap or jackknife estimation of standard errors and percentile bootstrap confidence intervals for the parameters of a Gaussian mixture model.} \usage{ MclustBootstrap(object, nboot = 999, type = c("bs", "wlbs", "pb", "jk"), max.nonfit = 10*nboot, verbose = interactive(), \dots) } \arguments{ \item{object}{An object of class \code{'Mclust'} or \code{'densityMclust'} providing an estimated Gaussian mixture model.} \item{nboot}{The number of bootstrap replications.} \item{type}{A character string specifying the type of resampling to use: \describe{ \item{\code{"bs"}}{nonparametric bootstrap} \item{\code{"wlbs"}}{weighted likelihood bootstrap} \item{\code{"pb"}}{parametric bootstrap} \item{\code{"jk"}}{jackknife} } } \item{max.nonfit}{The maximum number of non-estimable models allowed.} \item{verbose}{A logical controlling if a text progress bar is displayed during the resampling procedure. By default it is \code{TRUE} if the session is interactive, and \code{FALSE} otherwise.} \item{\dots}{Further arguments passed to or from other methods.} } \details{For a fitted Gaussian mixture model with \code{object$G} mixture components and covariances parameterisation \code{object$modelName}, this function returns either the bootstrap distribution or the jackknife distribution of mixture parameters. In the former case, the nonparametric bootstrap or the weighted likelihood bootstrap approach could be used, so the bootstrap procedure generates \code{nboot} bootstrap samples of the same size as the original data by resampling with replacement from the observed data. In the jackknife case, the procedure considers all the samples obtained by omitting one observation at a time.
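As a rough, hedged illustration of what a single nonparametric bootstrap replicate amounts to (the function itself also handles the weighted likelihood, parametric and jackknife schemes, discards non-estimable resamples, and stores the complete resampling distributions; the dataset below is only a placeholder):

library(mclust)
data(faithful)
fit <- Mclust(faithful)                        # reference fit on the observed data
idx <- sample(nrow(faithful), replace = TRUE)  # resample rows with replacement
refit <- Mclust(faithful[idx, ], G = fit$G, modelNames = fit$modelName)
refit$parameters$pro                           # one bootstrap draw of the mixing proportions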
The resulting resampling distribution can then be used to obtain standard errors and percentile confidence intervals by the use of the \code{\link{summary.MclustBootstrap}} function.} \value{An object of class \code{'MclustBootstrap'} with the following components: \item{n}{The number of observations in the data.} \item{d}{The dimension of the data.} \item{G}{A value specifying the number of mixture components.} \item{modelName}{A character string specifying the mixture model covariances parameterisation (see \code{\link{mclustModelNames}}).} \item{parameters}{A list of estimated parameters for the mixture components with the following components: \describe{ \item{\code{pro}}{a vector of mixing proportions.} \item{\code{mean}}{a matrix of means for each component.} \item{\code{variance}}{an array of covariance matrices for each component.} } } \item{nboot}{The number of bootstrap replications if \code{type = "bs"} or \code{type = "wlbs"}. The sample size if \code{type = "jk"}.} \item{type}{The type of resampling approach used.} \item{nonfit}{The number of resamples that did not converge during the procedure.} \item{pro}{A matrix of dimension (\code{nboot} x \code{G}) containing the bootstrap distribution for the mixing proportion.} \item{mean}{An array of dimension (\code{nboot} x \code{d} x \code{G}), where \code{d} is the dimension of the data, containing the bootstrap distribution for the component means.} \item{variance}{An array of dimension (\code{nboot} x \code{d} x \code{d} x \code{G}), where \code{d} is the dimension of the data, containing the bootstrap distribution for the component covariances.} } \references{ Davison, A. and Hinkley, D. (1997) \emph{Bootstrap Methods and Their Applications}. Cambridge University Press. McLachlan, G.J. and Peel, D. (2000) \emph{Finite Mixture Models}. Wiley. O'Hagan A., Murphy T. B., Gormley I. C. and Scrucca L. (2015) On Estimation of Parameter Uncertainty in Model-Based Clustering. Submitted to \emph{Computational Statistics}. } \seealso{\code{\link{summary.MclustBootstrap}}, \code{\link{plot.MclustBootstrap}}, \code{\link{Mclust}}, \code{\link{densityMclust}}.} \examples{ \dontrun{ data(diabetes) X <- diabetes[,-1] modClust <- Mclust(X) bootClust <- MclustBootstrap(modClust) summary(bootClust, what = "se") summary(bootClust, what = "ci") data(acidity) modDens <- densityMclust(acidity) modDens <- MclustBootstrap(modDens) summary(modDens, what = "se") summary(modDens, what = "ci") } } \keyword{htest} \keyword{cluster} mclust/man/mclustICL.Rd0000644000176200001440000001044113314421446014464 0ustar liggesusers\name{mclustICL} \alias{mclustICL} \alias{print.mclustICL} \alias{summary.mclustICL} \alias{print.summary.mclustICL} \title{ICL Criterion for Model-Based Clustering} \description{ ICL (Integrated Complete-data Likelihood) for parameterized Gaussian mixture models fitted by EM algorithm initialized by model-based hierarchical clustering. } \usage{ mclustICL(data, G = NULL, modelNames = NULL, initialization = list(hcPairs = NULL, subset = NULL, noise = NULL), x = NULL, \dots) \method{summary}{mclustICL}(object, G, modelNames, \dots) } \arguments{ \item{data}{ A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables. } \item{G}{ An integer vector specifying the numbers of mixture components (clusters) for which the criteria should be calculated. The default is \code{G = 1:9}.
} \item{modelNames}{ A vector of character strings indicating the models to be fitted in the EM phase of clustering. The help file for \code{\link{mclustModelNames}} describes the available models. The default is: \describe{ \item{\code{c("E", "V")}}{for univariate data} \item{\code{mclust.options("emModelNames")}}{for multivariate data (n > d)} \item{\code{c("EII", "VII", "EEI", "EVI", "VEI", "VVI")}}{the spherical and diagonal models for multivariate data (n <= d)} } } \item{initialization}{ A list containing zero or more of the following components: \describe{ \item{\code{hcPairs}}{ A matrix of merge pairs for hierarchical clustering such as produced by function \code{hc}. For multivariate data, the default is to compute a hierarchical clustering tree by applying function \code{hc} with \code{modelName = "VVV"} to the data or a subset as indicated by the \code{subset} argument. The hierarchical clustering results are then used to start EM. For univariate data, the default is to use quantiles to start EM. } \item{\code{subset}}{ A logical or numeric vector specifying a subset of the data to be used in the initial hierarchical clustering phase. } } } \item{x}{ An object of class \code{'mclustICL'}. If supplied, \code{mclustICL} will use the settings in \code{x} to produce another object of class \code{'mclustICL'}, but with \code{G} and \code{modelNames} as specified in the arguments. Models that have already been computed in \code{x} are not recomputed. All arguments to \code{mclustICL} except \code{data}, \code{G} and \code{modelNames} are ignored and their values are set as specified in the attributes of \code{x}. Defaults for \code{G} and \code{modelNames} are taken from \code{x}. } \item{\dots}{ Further arguments used in the call to \code{\link{Mclust}}. See also \code{\link{mclustBIC}}. } \item{object}{ An object of class \code{'mclustICL'} resulting from a call to \code{mclustICL}. } } \value{ Returns an object of class \code{'mclustICL'} containing the ICL criterion for the specified mixture models and numbers of clusters. The corresponding \code{print} method shows the matrix of values and the top models according to the ICL criterion. The \code{summary} method shows only the top models. } \references{ Biernacki, C., Celeux, G., Govaert, G. (2000). Assessing a mixture model for clustering with the integrated completed likelihood. \emph{IEEE Trans. Pattern Analysis and Machine Intelligence}, 22 (7), 719-725. Scrucca L., Fop M., Murphy T. B. and Raftery A. E. (2016) mclust 5: clustering, classification and density estimation using Gaussian finite mixture models, \emph{The R Journal}, 8/1, pp. 205-233. } \seealso{ \code{\link{plot.mclustICL}}, \code{\link{Mclust}}, \code{\link{mclustBIC}}, \code{\link{mclustBootstrapLRT}}, \code{\link{bic}}, \code{\link{icl}} } \examples{ data(faithful) faithful.ICL <- mclustICL(faithful) faithful.ICL summary(faithful.ICL) plot(faithful.ICL) \dontrun{ # compare with faithful.BIC <- mclustBIC(faithful) faithful.BIC plot(faithful.BIC) } } \keyword{cluster} mclust/man/mvnX.Rd0000644000176200001440000000601613205036667013566 0ustar liggesusers\name{mvnX} \alias{mvnX} \alias{mvnXII} \alias{mvnXXI} \alias{mvnXXX} \title{ Univariate or Multivariate Normal Fit } \description{ Computes the mean, covariance, and log-likelihood from fitting a single Gaussian (univariate or multivariate normal).
} \usage{ mvnX(data, prior = NULL, warn = NULL, \dots) mvnXII(data, prior = NULL, warn = NULL, \dots) mvnXXI(data, prior = NULL, warn = NULL, \dots) mvnXXX(data, prior = NULL, warn = NULL, \dots) } \arguments{ \item{data}{ A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables. } \item{prior}{ Specification of a conjugate prior on the means and variances. The default assumes no prior. } \item{warn}{ A logical value indicating whether or not a warning should be issued whenever a singularity is encountered. The default is given by \code{mclust.options("warn")}. } \item{\dots }{ Catches unused arguments in indirect or list calls via \code{do.call}. } } \details{ \describe{ \item{\code{mvnXII}}{computes the best fitting Gaussian with the covariance restricted to be a multiple of the identity.} \item{\code{mvnXXI}}{computes the best fitting Gaussian with the covariance restricted to be diagonal.} \item{\code{mvnXXX}}{computes the best fitting Gaussian with ellipsoidal (unrestricted) covariance.} } } \value{ A list including the following components: \item{modelName}{ A character string identifying the model (same as the input argument). } \item{parameters}{ \describe{ \item{\code{mean}}{ The mean for each component. If there is more than one component, this is a matrix whose kth column is the mean of the \emph{k}th component of the mixture model. } \item{\code{variance}}{ A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for \code{\link{mclustVariance}} for details. } } } \item{loglik}{ The log likelihood for the data in the mixture model. } \item{Attributes:}{ \code{"WARNING"} An appropriate warning if problems are encountered in the computations. } } \seealso{ \code{\link{mvn}}, \code{\link{mstepE}} } \examples{ \dontrun{ n <- 1000 set.seed(0) x <- rnorm(n, mean = -1, sd = 2) mvnX(x) mu <- c(-1, 0, 1) set.seed(0) x <- sweep(matrix(rnorm(n*3), n, 3) \%*\% (2*diag(3)), MARGIN = 2, STATS = mu, FUN = "+") mvnXII(x) set.seed(0) x <- sweep(matrix(rnorm(n*3), n, 3) \%*\% diag(1:3), MARGIN = 2, STATS = mu, FUN = "+") mvnXXI(x) Sigma <- matrix(c(9,-4,1,-4,9,4,1,4,9), 3, 3) set.seed(0) x <- sweep(matrix(rnorm(n*3), n, 3) \%*\% chol(Sigma), MARGIN = 2, STATS = mu, FUN = "+") mvnXXX(x) } } \keyword{cluster} % docclass is function % Converted by Sd2Rd version 1.21. mclust/man/plot.mclustBIC.Rd0000644000176200001440000000421513175053654015440 0ustar liggesusers\name{plot.mclustBIC} \alias{plot.mclustBIC} \title{BIC Plot for Model-Based Clustering} \description{ Plots the BIC values returned by the \code{\link{mclustBIC}} function. } \usage{ \method{plot}{mclustBIC}(x, G = NULL, modelNames = NULL, symbols = NULL, colors = NULL, xlab = NULL, ylab = "BIC", ylim = NULL, legendArgs = list(x = "bottomright", ncol = 2, cex = 1, inset = 0.01), \dots) } \arguments{ \item{x}{ Output from \code{mclustBIC}. } \item{G}{ One or more numbers of components corresponding to models fit in \code{x}. The default is to plot the BIC for all of the numbers of components fit. } \item{modelNames}{ One or more model names corresponding to models fit in \code{x}. The default is to plot the BIC for all of the models fit. } \item{symbols}{ Either an integer or character vector assigning a plotting symbol to each unique class in \code{classification}. 
Elements in \code{symbols} correspond to classes in order of appearance in the sequence of observations (the order used by the function \code{unique}). The default is given by \code{mclust.options("classPlotSymbols")}. } \item{colors}{ Either an integer or character vector assigning a color to each unique class in \code{classification}. Elements in \code{colors} correspond to classes in order of appearance in the sequence of observations (the order used by the function \code{unique}). The default is given by \code{mclust.options("classPlotColors")}. } \item{xlab}{ Optional label for the horizontal axis of the BIC plot. } \item{ylab}{ Label for the vertical axis of the BIC plot. } \item{ylim}{ Optional limits for the vertical axis of the BIC plot. } \item{legendArgs}{ Arguments to pass to the \code{legend} function. Set to \code{NULL} for no legend. } \item{\dots}{ Other graphics parameters. } } \value{ A plot of the BIC values. } \seealso{ \code{\link{mclustBIC}} } \examples{ \dontrun{ plot(mclustBIC(precip), legendArgs = list(x = "bottomleft")) plot(mclustBIC(faithful)) plot(mclustBIC(iris[,-5])) } } \keyword{cluster} % docclass is function mclust/man/errorBars.Rd0000644000176200001440000000317212542725574014574 0ustar liggesusers\name{errorBars} \alias{errorBars} \title{Draw error bars on a plot} \description{ Draw error bars at x from upper to lower. If \code{horizontal = FALSE} (default) bars are drawn vertically, otherwise horizontally. } \usage{ errorBars(x, upper, lower, width = 0.1, code = 3, angle = 90, horizontal = FALSE, \dots) } \arguments{ \item{x}{A vector of values where the bars must be drawn.} \item{upper}{A vector of upper values where the bars must end.} \item{lower}{A vector of lower values where the bars must start.} \item{width}{A value specifying the width of the end-point segment.} \item{code}{An integer code specifying the kind of arrows to be drawn. For details see \code{\link[graphics]{arrows}}.} \item{angle}{A value specifying the angle at the arrow edge. For details see \code{\link[graphics]{arrows}}.} \item{horizontal}{A logical specifying if bars should be drawn vertically (default) or horizontally.} \item{\dots}{Further arguments are passed to \code{\link[graphics]{arrows}}.} } %\value{} \examples{ par(mfrow=c(2,2)) # Create a simple example dataset x <- 1:5 n <- c(10, 15, 12, 6, 3) se <- c(1, 1.2, 2, 1, .5) # upper and lower bars b <- barplot(n, ylim = c(0, max(n)*1.5)) errorBars(b, lower = n-se, upper = n+se, lwd = 2, col = "red3") # one side bars b <- barplot(n, ylim = c(0, max(n)*1.5)) errorBars(b, lower = n, upper = n+se, lwd = 2, col = "red3", code = 1) # plot(x, n, ylim = c(0, max(n)*1.5), pch = 0) errorBars(x, lower = n-se, upper = n+se, lwd = 2, col = "red3") # dotchart(n, labels = x, pch = 19, xlim = c(0, max(n)*1.5)) errorBars(x, lower = n-se, upper = n+se, col = "red3", horizontal = TRUE) } mclust/man/priorControl.Rd0000644000176200001440000000330713175055124015325 0ustar liggesusers\name{priorControl} \alias{priorControl} \title{ Conjugate Prior for Gaussian Mixtures. } \description{ Specify a conjugate prior for Gaussian mixtures. } \usage{ priorControl(functionName = "defaultPrior", \dots) } \arguments{ \item{functionName}{ The name of the function specifying the conjugate prior. By default the function \code{\link{defaultPrior}} is used, and this can also be used as a template for alternative specification. } \item{\dots}{ Optional named arguments to the function specified in \code{functionName} together with their values.
} } \value{ A list with the function name as the first component. The remaining components (if any) consist of a list of arguments to the function with assigned values. } \details{ The function \code{priorControl} is used to specify a conjugate prior for EM within \emph{MCLUST}.\cr Note that, as described in \code{\link{defaultPrior}}, in the multivariate case only 10 out of 14 models may be used in conjunction with a prior, i.e. those available in \emph{MCLUST} up to version 4.4. } \references{ C. Fraley and A. E. Raftery (2007). Bayesian regularization for normal mixture estimation and model-based clustering. \emph{Journal of Classification 24:155-181}. } \seealso{ \code{\link{mclustBIC}}, \code{\link{me}}, \code{\link{mstep}}, \code{\link{defaultPrior}} } \examples{ # default prior irisBIC <- mclustBIC(iris[,-5], prior = priorControl()) summary(irisBIC, iris[,-5]) # no prior on the mean; default prior on variance irisBIC <- mclustBIC(iris[,-5], prior = priorControl(shrinkage = 0)) summary(irisBIC, iris[,-5]) } \keyword{cluster} % docclass is function % Converted by Sd2Rd version 1.21. mclust/man/mstepE.Rd0000644000176200001440000001171413465001622014063 0ustar liggesusers\name{mstepE} \alias{mstepE} \alias{mstepV} \alias{mstepEII} \alias{mstepVII} \alias{mstepEEI} \alias{mstepVEI} \alias{mstepEVI} \alias{mstepVVI} \alias{mstepEEE} \alias{mstepEEV} \alias{mstepVEV} \alias{mstepVVV} \alias{mstepEVE} \alias{mstepEVV} \alias{mstepVEE} \alias{mstepVVE} \title{M-step for a parameterized Gaussian mixture model} \description{ Maximization step in the EM algorithm for a parameterized Gaussian mixture model. } \usage{ mstepE( data, z, prior = NULL, warn = NULL, \dots) mstepV( data, z, prior = NULL, warn = NULL, \dots) mstepEII( data, z, prior = NULL, warn = NULL, \dots) mstepVII( data, z, prior = NULL, warn = NULL, \dots) mstepEEI( data, z, prior = NULL, warn = NULL, \dots) mstepVEI( data, z, prior = NULL, warn = NULL, control = NULL, \dots) mstepEVI( data, z, prior = NULL, warn = NULL, \dots) mstepVVI( data, z, prior = NULL, warn = NULL, \dots) mstepEEE( data, z, prior = NULL, warn = NULL, \dots) mstepEEV( data, z, prior = NULL, warn = NULL, \dots) mstepVEV( data, z, prior = NULL, warn = NULL, control = NULL,\dots) mstepVVV( data, z, prior = NULL, warn = NULL, \dots) mstepEVE( data, z, prior = NULL, warn = NULL, control = NULL, \dots) mstepEVV( data, z, prior = NULL, warn = NULL, \dots) mstepVEE( data, z, prior = NULL, warn = NULL, control = NULL, \dots) mstepVVE( data, z, prior = NULL, warn = NULL, control = NULL, \dots) } \arguments{ \item{data}{ A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables. } \item{z}{ A matrix whose \code{[i,k]}th entry is the conditional probability of the ith observation belonging to the \emph{k}th component of the mixture. In analyses involving noise, this should not include the conditional probabilities for the noise component. } \item{prior}{ Specification of a conjugate prior on the means and variances. The default assumes no prior. } \item{warn}{ A logical value indicating whether or not certain warnings (usually related to singularity) should be issued when the estimation fails. The default is given by \code{mclust.options("warn")}. } \item{control}{ Values controlling termination for models \code{"VEI"} and \code{"VEV"} that have an iterative M-step. This should be a list with components named \emph{itmax} and \emph{tol}. 
These components can be of length 1 or 2; in the latter case, \code{mstep} will use the second value, under the assumption that the first applies to an outer iteration (as in the function \code{me}). The default uses the default values from the function \code{emControl}, which sets no limit on the number of iterations, and a relative tolerance of \code{sqrt(.Machine$double.eps)} on successive iterates. } \item{\dots }{ Catches unused arguments in indirect or list calls via \code{do.call}. } } \value{ A list including the following components: \item{modelName}{ A character string identifying the model (same as the input argument). } \item{parameters}{ \describe{ \item{\code{pro}}{ A vector whose \emph{k}th component is the mixing proportion for the \emph{k}th component of the mixture model. If the model includes a Poisson term for noise, there should be one more mixing proportion than the number of Gaussian components. } \item{\code{mean}}{ The mean for each component. If there is more than one component, this is a matrix whose kth column is the mean of the \emph{k}th component of the mixture model. } \item{\code{variance}}{ A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for \code{\link{mclustVariance}} for details. } } } \item{Attributes:}{ \code{"info"} For those models with iterative M-steps (\code{"VEI"} and \code{"VEV"}), information on the iteration.\cr \code{"WARNING"} An appropriate warning if problems are encountered in the computations. } } \note{ This function computes the M-step only for MVN mixtures, so in analyses involving noise, the conditional probabilities input should exclude those for the noise component. \cr In contrast to \code{me} for the EM algorithm, computations in \code{mstep} are carried out unless failure due to overflow would occur. To impose stricter tolerances on a single \code{mstep}, use \code{me} with the \emph{itmax} component of the \code{control} argument set to 1. } \seealso{ \code{\link{mstep}}, \code{\link{me}}, \code{\link{estep}}, \code{\link{mclustVariance}}, \code{\link{priorControl}}, \code{\link{emControl}}. } \examples{ \dontrun{ mstepVII(data = iris[,-5], z = unmap(iris[,5]))} } \keyword{cluster} mclust/man/estepE.Rd0000644000176200001440000000752613175052017014063 0ustar liggesusers\name{estepE} \alias{estepE} \alias{estepV} \alias{estepEII} \alias{estepVII} \alias{estepEEI} \alias{estepVEI} \alias{estepEVI} \alias{estepVVI} \alias{estepEEE} \alias{estepEEV} \alias{estepVEV} \alias{estepVVV} \alias{estepEVE} \alias{estepEVV} \alias{estepVEE} \alias{estepVVE} \title{ E-step in the EM algorithm for a parameterized Gaussian mixture model. } \description{ Implements the expectation step in the EM algorithm for a parameterized Gaussian mixture model. 
} \usage{ estepE(data, parameters, warn = NULL, \dots) estepV(data, parameters, warn = NULL, \dots) estepEII(data, parameters, warn = NULL, \dots) estepVII(data, parameters, warn = NULL, \dots) estepEEI(data, parameters, warn = NULL, \dots) estepVEI(data, parameters, warn = NULL, \dots) estepEVI(data, parameters, warn = NULL, \dots) estepVVI(data, parameters, warn = NULL, \dots) estepEEE(data, parameters, warn = NULL, \dots) estepEEV(data, parameters, warn = NULL, \dots) estepVEV(data, parameters, warn = NULL, \dots) estepVVV(data, parameters, warn = NULL, \dots) estepEVE(data, parameters, warn = NULL, \dots) estepEVV(data, parameters, warn = NULL, \dots) estepVEE(data, parameters, warn = NULL, \dots) estepVVE(data, parameters, warn = NULL, \dots) } \arguments{ \item{data}{ A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables. } \item{parameters}{ The parameters of the model: %\itemize{ %\item An argument describing the variance (depends on the model): \describe{ \item{\code{pro}}{ Mixing proportions for the components of the mixture. If the model includes a Poisson term for noise, there should be one more mixing proportion than the number of Gaussian components. } \item{mu}{ The mean for each component. If there is more than one component, this is a matrix whose columns are the means of the components. } \item{\code{variance}}{ A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for \code{\link{mclustVariance}} for details. } \item{\code{Vinv}}{ An estimate of the reciprocal hypervolume of the data region. If not supplied or set to a negative value, the default is determined by applying function \code{hypvol} to the data. Used only when \code{pro} includes an additional mixing proportion for a noise component. } } %} } \item{warn}{ A logical value indicating whether or not certain warnings should be issued. The default is given by \code{mclust.options("warn")}. } \item{\dots}{ Catches unused arguments in indirect or list calls via \code{do.call}. } } \value{ A list including the following components: \item{modelName}{ Character string identifying the model. } \item{z}{ A matrix whose \code{[i,k]}th entry is the conditional probability of the \emph{i}th observation belonging to the \emph{k}th component of the mixture. } \item{parameters}{ The input parameters. } \item{loglik}{ The log-likelihood for the data in the mixture model. } \item{Attribute}{ \code{"WARNING"}: An appropriate warning if problems are encountered in the computations. } } \seealso{ \code{\link{estep}}, \code{\link{em}}, \code{\link{mstep}}, \code{\link{do.call}}, \code{\link{mclustVariance}}, \code{\link{mclust.options}}. } \examples{ \dontrun{ msEst <- mstepEII(data = iris[,-5], z = unmap(iris[,5])) names(msEst) estepEII(data = iris[,-5], parameters = msEst$parameters)} } \keyword{cluster} mclust/man/cross.Rd0000644000176200001440000000102413107075407013755 0ustar liggesusers\name{cross} \alias{cross} \title{Simulated Cross Data} \usage{data(cross)} \description{ A 500 by 3 matrix in which the first column is the classification and the remaining two columns are data from a simulation of two crossed elliptical Gaussians.
} \examples{ # This dataset was created as follows \dontrun{ n <- 250 set.seed(0) cross <- rbind(matrix(rnorm(n*2), n, 2) \%*\% diag(c(1,9)), matrix(rnorm(n*2), n, 2) \%*\% diag(c(1,9))[,2:1]) cross <- cbind(c(rep(1,n),rep(2,n)), cross) } } \keyword{datasets} mclust/man/clustCombi-internals.Rd0000644000176200001440000000034612460535131016727 0ustar liggesusers\name{clustCombi-internal} \title{Internal clustCombi functions} \alias{combi} \alias{pcws2_reg} \alias{pcws3_reg} \alias{xlog} \description{ Internal functions not intended to be called directly by users. } \keyword{internal} mclust/man/wdbc.Rd0000644000176200001440000000634413467344551013565 0ustar liggesusers\name{wdbc} \alias{wdbc} \docType{data} \title{Wisconsin diagnostic breast cancer (WDBC) data} \description{ The data set provides data for 569 patients on 30 features of the cell nuclei obtained from a digitized image of a fine needle aspirate (FNA) of a breast mass. For each patient the cancer was diagnosed as malignant or benign.} \usage{data(wdbc)} \format{A data frame with 569 observations on the following variables: \describe{ \item{\code{ID}}{ID number} \item{\code{Diagnosis}}{cancer diagnosis: \code{M} = malignant, \code{B} = benign} \item{\code{Radius_mean}}{a numeric vector} \item{\code{Texture_mean}}{a numeric vector} \item{\code{Perimeter_mean}}{a numeric vector} \item{\code{Area_mean}}{a numeric vector} \item{\code{Smoothness_mean}}{a numeric vector} \item{\code{Compactness_mean}}{a numeric vector} \item{\code{Concavity_mean}}{a numeric vector} \item{\code{Nconcave_mean}}{a numeric vector} \item{\code{Symmetry_mean}}{a numeric vector} \item{\code{Fractaldim_mean}}{a numeric vector} \item{\code{Radius_se}}{a numeric vector} \item{\code{Texture_se}}{a numeric vector} \item{\code{Perimeter_se}}{a numeric vector} \item{\code{Area_se}}{a numeric vector} \item{\code{Smoothness_se}}{a numeric vector} \item{\code{Compactness_se}}{a numeric vector} \item{\code{Concavity_se}}{a numeric vector} \item{\code{Nconcave_se}}{a numeric vector} \item{\code{Symmetry_se}}{a numeric vector} \item{\code{Fractaldim_se}}{a numeric vector} \item{\code{Radius_extreme}}{a numeric vector} \item{\code{Texture_extreme}}{a numeric vector} \item{\code{Perimeter_extreme}}{a numeric vector} \item{\code{Area_extreme}}{a numeric vector} \item{\code{Smoothness_extreme}}{a numeric vector} \item{\code{Compactness_extreme}}{a numeric vector} \item{\code{Concavity_extreme}}{a numeric vector} \item{\code{Nconcave_extreme}}{a numeric vector} \item{\code{Symmetry_extreme}}{a numeric vector} \item{\code{Fractaldim_extreme}}{a numeric vector} } } \details{ The recorded features are: \itemize{ \item \code{Radius} as mean of distances from center to points on the perimeter \item \code{Texture} as standard deviation of gray-scale values \item \code{Perimeter} as cell nucleus perimeter \item \code{Area} as cell nucleus area \item \code{Smoothness} as local variation in radius lengths \item \code{Compactness} as cell nucleus compactness, perimeter^2 / area - 1 \item \code{Concavity} as severity of concave portions of the contour \item \code{Nconcave} as number of concave portions of the contour \item \code{Symmetry} as cell nucleus shape \item \code{Fractaldim} as fractal dimension, "coastline approximation" - 1 } For each feature the recorded values are computed from each image as \code{_mean}, \code{_se}, and \code{_extreme}, for the mean, the standard error, and the mean of the three largest values. 
} \source{UCI \url{http://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+(Diagnostic)}} \references{ Mangasarian, O. L., Street, W. N., and Wolberg, W. H. (1995) Breast cancer diagnosis and prognosis via linear programming. \emph{Operations Research}, 43(4), pp. 570-577. } \keyword{datasets} mclust/man/hclass.Rd0000644000176200001440000000221313175052541014101 0ustar liggesusers\name{hclass} \alias{hclass} \title{ Classifications from Hierarchical Agglomeration } \description{ Determines the classifications corresponding to different numbers of groups given merge pairs from hierarchical agglomeration. } \usage{ hclass(hcPairs, G) } \arguments{ \item{hcPairs}{ A numeric two-column matrix in which the \emph{i}th row gives the minimum index for observations in each of the two clusters merged at the \emph{i}th stage of agglomerative hierarchical clustering. } \item{G}{ An integer or vector of integers giving the number of clusters for which the corresponding classifications are wanted. } } \value{ A matrix with \code{length(G)} columns, each column corresponding to a classification. Columns are indexed by the character representation of the integers in \code{G}. } \seealso{ \code{\link{hc}}, \code{\link{hcE}} } \examples{ hcTree <- hc(modelName="VVV", data = iris[,-5]) cl <- hclass(hcTree,c(2,3)) \dontrun{ par(pty = "s", mfrow = c(1,1)) clPairs(iris[,-5],cl=cl[,"2"]) clPairs(iris[,-5],cl=cl[,"3"]) } } \keyword{cluster} % docclass is function % Converted by Sd2Rd version 1.21. mclust/man/mclustBootstrapLRT.Rd0000644000176200001440000001111613214434103016405 0ustar liggesusers\name{mclustBootstrapLRT} \alias{mclustBootstrapLRT} \alias{print.mclustBootstrapLRT} \alias{plot.mclustBootstrapLRT} \title{Bootstrap Likelihood Ratio Test for the Number of Mixture Components} \description{Performs the likelihood ratio test (LRT) for assessing the number of mixture components in a specific finite mixture model parameterisation. The observed significance is approximated by using the (parametric) bootstrap for the likelihood ratio test statistic (LRTS).} \usage{ mclustBootstrapLRT(data, modelName = NULL, nboot = 999, level = 0.05, maxG = NULL, verbose = interactive(), \dots) \method{print}{mclustBootstrapLRT}(x, \dots) \method{plot}{mclustBootstrapLRT}(x, G = 1, hist.col = "grey", hist.border = "lightgrey", breaks = "Scott", col = "forestgreen", lwd = 2, lty = 3, main = NULL, \dots) } \arguments{ \item{data}{A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables.} \item{modelName}{A character string indicating the mixture model to be fitted. The help file for \code{\link{mclustModelNames}} describes the available models.} \item{nboot}{The number of bootstrap replications to use (by default 999).} \item{level}{The significance level to be used to terminate the sequential bootstrap procedure.} \item{maxG}{The maximum number of mixture components \eqn{G} to test. If not provided the procedure is stopped when a test is not significant at the specified \code{level}.} \item{verbose}{A logical controlling if a text progress bar is displayed during the bootstrap procedure. By default it is \code{TRUE} if the session is interactive, and \code{FALSE} otherwise.} \item{\dots}{Further arguments passed to or from other methods.
In particular, see the optional arguments in \code{\link{mclustBIC}}.} \item{x}{An \code{'mclustBootstrapLRT'} object.} \item{G}{A value specifying the number of components for which to plot the bootstrap distribution.} \item{hist.col}{The colour to be used to fill the bars of the histogram.} \item{hist.border}{The color of the border around the bars of the histogram.} \item{breaks}{See the argument in function \code{\link[graphics]{hist}}.} \item{col, lwd, lty}{The color, line width and line type to be used to represent the observed LRT statistic.} \item{main}{The title for the graph.} } \details{The implemented algorithm for computing the LRT observed significance using the bootstrap is the following. Let \eqn{G_0} be the number of mixture components under the null hypothesis versus \eqn{G_1 = G_0+1} under the alternative. Bootstrap samples are drawn by simulating data under the null hypothesis. Then, the p-value may be approximated using eq. (13) on McLachlan and Rathnayake (2014). Equivalently, using the notation of Davison and Hinkley (1997) it may be computed as \deqn{\textnormal{p-value} = \frac{1 + \#\{LRT^*_b \ge LRTS_{obs}\}}{B+1}}{% p-value = (1 + #{LRTS*_b \ge LRT_obs}) / (B+1)} where \cr \eqn{B} = number of bootstrap samples \cr \eqn{LRT_{obs}}{LRT_obs} = LRTS computed on the observed data\cr \eqn{LRT^*_b}{LRT*_b} = LRTS computed on the \eqn{b}th bootstrap sample. } \value{An object of class \code{'mclustBootstrapLRT'} with the following components: \item{G}{A vector of number of components tested under the null hypothesis.} \item{modelName}{A character string specifying the mixture model as provided in the function call (see above).} \item{obs}{The observed values of the LRTS.} \item{boot}{A matrix of dimension \code{nboot} x the number of components tested containing the bootstrap values of LRTS.} \item{p.value}{A vector of p-values.} } \references{ Davison, A. and Hinkley, D. (1997) \emph{Bootstrap Methods and Their Applications}. Cambridge University Press. McLachlan G.J. (1987) On bootstrapping the likelihood ratio test statistic for the number of components in a normal mixture. \emph{Applied Statistics}, 36, 318-324. McLachlan, G.J. and Peel, D. (2000) \emph{Finite Mixture Models}. Wiley. McLachlan, G.J. and Rathnayake, S. (2014) On the number of components in a Gaussian mixture model. \emph{Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery}, 4(5), pp. 341-355. } \seealso{\code{\link{mclustBIC}}, \code{\link{mclustICL}}, \code{\link{Mclust}}} \examples{ \dontrun{ data(faithful) faithful.boot = mclustBootstrapLRT(faithful, model = "VVV") faithful.boot plot(faithful.boot, G = 1) plot(faithful.boot, G = 2) } } \keyword{htest} \keyword{cluster} mclust/man/adjustedRandIndex.Rd0000644000176200001440000000275313175052444016237 0ustar liggesusers\name{adjustedRandIndex} \alias{adjustedRandIndex} \title{ Adjusted Rand Index } \description{ Computes the adjusted Rand index comparing two classifications. } \usage{ adjustedRandIndex(x, y) } \arguments{ \item{x}{ A numeric or character vector of class labels. } \item{y}{ A numeric or character vector of class labels. The length of \code{y} should be the same as that of \code{x}. } } \value{ The adjusted Rand index comparing the two partitions (a scalar). This index has zero expected value in the case of random partition, and it is bounded above by 1 in the case of perfect agreement between two partitions. } \references{ L. Hubert and P. 
Arabie (1985) Comparing Partitions, \emph{Journal of Classification}, 2, pp. 193-218. } \seealso{ \code{\link{classError}}, \code{\link{mapClass}}, \code{\link{table}} } \examples{ a <- rep(1:3, 3) a b <- rep(c("A", "B", "C"), 3) b adjustedRandIndex(a, b) a <- sample(1:3, 9, replace = TRUE) a b <- sample(c("A", "B", "C"), 9, replace = TRUE) b adjustedRandIndex(a, b) a <- rep(1:3, 4) a b <- rep(c("A", "B", "C", "D"), 3) b adjustedRandIndex(a, b) irisHCvvv <- hc(modelName = "VVV", data = iris[,-5]) cl3 <- hclass(irisHCvvv, 3) adjustedRandIndex(cl3,iris[,5]) irisBIC <- mclustBIC(iris[,-5]) adjustedRandIndex(summary(irisBIC,iris[,-5])$classification,iris[,5]) adjustedRandIndex(summary(irisBIC,iris[,-5],G=3)$classification,iris[,5]) } \keyword{cluster} % docclass is function % Converted by Sd2Rd version 1.21. mclust/man/imputePairs.Rd0000644000176200001440000000517213175055571015137 0ustar liggesusers\name{imputePairs} \alias{imputePairs} \title{ Pairwise Scatter Plots showing Missing Data Imputations } \description{ Creates a scatter plot for each pair of variables in given data, allowing display of imputations for missing values in different colors and symbols than nonmissing values. } \usage{ imputePairs(data, dataImp, symbols = c(1,16), colors = c("black", "red"), labels, panel = points, ..., lower.panel = panel, upper.panel = panel, diag.panel = NULL, text.panel = textPanel, label.pos = 0.5 + has.diag/3, cex.labels = NULL, font.labels = 1, row1attop = TRUE, gap = 0.2) } \arguments{ \item{data}{ A numeric vector, matrix, or data frame of observations containing missing values. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables. } \item{dataImp}{ The dataset \code{data} with missing values imputed. } \item{symbols}{ Either an integer or character vector assigning plotting symbols to the nonmissing data and imputed values, respectively. The default is a closed circle for the nonmissing data and an open circle for the imputed values. } \item{colors}{ Either an integer or character vector assigning colors to the nonmissing data and imputed values, respectively. The default is black for the nonmissing data and red for the imputed values. } \item{labels}{ As in function \code{pairs}. } \item{panel}{ As in function \code{pairs}. } \item{\dots}{ As in function \code{pairs}. } \item{lower.panel}{ As in function \code{pairs}. } \item{upper.panel}{ As in function \code{pairs}. } \item{diag.panel}{ As in function \code{pairs}. } \item{text.panel}{ As in function \code{pairs}. } \item{label.pos}{ As in function \code{pairs}. } \item{cex.labels}{ As in function \code{pairs}. } \item{font.labels}{ As in function \code{pairs}. } \item{row1attop}{ As in function \code{pairs}. } \item{gap}{ As in function \code{pairs}. } } \value{ A pairs plot displaying the location of missing and nonmissing values. } \references{ Schafer J. L. (1997). Analysis of Incomplete Multivariate Data, Chapman and Hall.
} \seealso{ \code{\link{pairs}}, \code{\link{imputeData}} } \examples{ \dontrun{ # Note that package 'mix' must be installed data(stlouis, package = "mix") # impute the continuous variables in the stlouis data stlimp <- imputeData(stlouis[,-(1:3)]) # plot imputed values imputePairs(stlouis[,-(1:3)], stlimp) } } \keyword{cluster} mclust/man/mclustModelNames.Rd0000644000176200001440000000373713376464311016115 0ustar liggesusers\name{mclustModelNames} \alias{mclustModelNames} \title{ MCLUST Model Names } \description{ Description of model names used in the \emph{MCLUST} package. } \usage{ mclustModelNames(model) } \arguments{ \item{model}{A string specifying the model.} } \details{ The following models are available in package \pkg{mclust}:\cr \bold{univariate mixture} \cr \describe{ \item{\code{"E"}}{equal variance (one-dimensional)} \item{\code{"V"}}{variable/unequal variance (one-dimensional)} } \bold{multivariate mixture}\cr \describe{ \item{\code{"EII"}}{spherical, equal volume} \item{\code{"VII"}}{spherical, unequal volume} \item{\code{"EEI"}}{diagonal, equal volume and shape} \item{\code{"VEI"}}{diagonal, varying volume, equal shape} \item{\code{"EVI"}}{diagonal, equal volume, varying shape} \item{\code{"VVI"}}{diagonal, varying volume and shape} \item{\code{"EEE"}}{ellipsoidal, equal volume, shape, and orientation} \item{\code{"EVE"}}{ellipsoidal, equal volume and orientation (*)} \item{\code{"VEE"}}{ellipsoidal, equal shape and orientation (*)} \item{\code{"VVE"}}{ellipsoidal, equal orientation (*)} \item{\code{"EEV"}}{ellipsoidal, equal volume and equal shape} \item{\code{"VEV"}}{ellipsoidal, equal shape} \item{\code{"EVV"}}{ellipsoidal, equal volume (*)} \item{\code{"VVV"}}{ellipsoidal, varying volume, shape, and orientation} } \bold{single component}\cr \describe{ \item{\code{"X"}}{univariate normal} \item{\code{"XII"}}{spherical multivariate normal} \item{\code{"XXI"}}{diagonal multivariate normal} \item{\code{"XXX"}}{ellipsoidal multivariate normal} } (*) new models in \pkg{mclust} version >= 5.0.0. } \value{Returns a list with the following components: \item{model}{a character string indicating the model (as in input).} \item{type}{the description of the indicated model (see Details section).} } \seealso{ \code{\link{Mclust}}, \code{\link{mclustBIC}} } \examples{ mclustModelNames("E") mclustModelNames("EEE") mclustModelNames("VVV") mclustModelNames("XXI") } \keyword{cluster} mclust/man/randomOrthogonalMatrix.Rd0000644000176200001440000000146113473257701017340 0ustar liggesusers\name{randomOrthogonalMatrix} \alias{randomOrthogonalMatrix} \title{Random orthogonal matrix} \description{ Generate a random orthogonal basis matrix of dimension \eqn{(n x d)} using the method in Heiberger (1978). } \usage{ randomOrthogonalMatrix(n, d) } \arguments{ \item{n}{the number of rows of the resulting orthogonal matrix.} \item{d}{the number of columns of the resulting orthogonal matrix.} } \value{ An orthogonal matrix of dimension \eqn{n x d} such that each column is orthogonal to the others and has unit length.} \seealso{\code{\link{coordProj}}} \references{ Heiberger R. (1978) Generation of random orthogonal matrices. \emph{Journal of the Royal Statistical Society. Series C (Applied Statistics)}, 27(2), 199-206.
} \examples{ B <- randomOrthogonalMatrix(10,3) zapsmall(crossprod(B)) } mclust/man/mclustBICupdate.Rd0000644000176200001440000000244613373756760015672 0ustar liggesusers\name{mclustBICupdate} \alias{mclustBICupdate} \title{Update BIC values for parameterized Gaussian mixture models} \description{ Update the BIC (Bayesian Information Criterion) for parameterized Gaussian mixture models by taking the best values from BIC results as returned by \code{\link{mclustBIC}}. } \usage{ mclustBICupdate(BIC, \dots) } \arguments{ \item{BIC}{Object of class \code{'mclustBIC'} containing the BIC values as returned by a call to \code{\link{mclustBIC}}. } \item{\dots}{Further objects of class \code{'mclustBIC'} to be merged.} } \value{ An object of class \code{'mclustBIC'} containing the best values obtained from merging the input arguments. Attributes are also updated according to the best BIC found, so calling \code{\link{Mclust}} on the resulting output will return the corresponding best model (see example). } \seealso{ \code{\link{mclustBIC}}, \code{\link{Mclust}}. } \examples{ \dontrun{ data(galaxies, package = "MASS") galaxies <- galaxies / 1000 # use several random starting points BIC <- NULL for(j in 1:100) { rBIC <- mclustBIC(galaxies, verbose = FALSE, initialization = list(hcPairs = randomPairs(galaxies))) BIC <- mclustBICupdate(BIC, rBIC) } pickBIC(BIC) plot(BIC) mod <- Mclust(galaxies, x = BIC) summary(mod) } } \keyword{cluster} mclust/man/randProj.Rd0000644000176200001440000001457613475242100014413 0ustar liggesusers\name{randProj} \alias{randProj} \title{Random projections of multidimensional data modeled by an MVN mixture} \description{ Plots random projections given multidimensional data and parameters of an MVN mixture model for the data. } \usage{ randProj(data, seeds = NULL, parameters = NULL, z = NULL, classification = NULL, truth = NULL, uncertainty = NULL, what = c("classification", "error", "uncertainty"), quantiles = c(0.75, 0.95), addEllipses = TRUE, fillEllipses = mclust.options("fillEllipses"), symbols = NULL, colors = NULL, scale = FALSE, xlim = NULL, ylim = NULL, xlab = NULL, ylab = NULL, CEX = 1, PCH = ".", main = FALSE, \dots) } \arguments{ \item{data}{ A numeric matrix or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables. } \item{seeds}{ An integer value or a vector of integer values to be used as seed for random number generation. If multiple values are provided, then each seed should produce a different projection. By default, a single seed is drawn randomly, so each call of \code{randProj()} produces different projections. } \item{parameters}{ A named list giving the parameters of an \emph{MCLUST} model, used to produce superimposing ellipses on the plot. The relevant components are as follows: \describe{ \item{\code{mean}}{ The mean for each component. If there is more than one component, this is a matrix whose kth column is the mean of the \emph{k}th component of the mixture model. } \item{\code{variance}}{ A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for \code{\link{mclustVariance}} for details. } } } \item{z}{ A matrix in which the \code{[i,k]}th entry gives the probability of observation \emph{i} belonging to the \emph{k}th class. Used to compute \code{classification} and \code{uncertainty} if those arguments aren't available.
} \item{classification}{ A numeric or character vector representing a classification of observations (rows) of \code{data}. If present argument \code{z} will be ignored. } \item{truth}{ A numeric or character vector giving a known classification of each data point. If \code{classification} or \code{z} is also present, this is used for displaying classification errors. } \item{uncertainty}{ A numeric vector of values in \emph{(0,1)} giving the uncertainty of each data point. If present argument \code{z} will be ignored. } \item{what}{ Choose from one of the following three options: \code{"classification"} (default), \code{"error"}, \code{"uncertainty"}. } \item{quantiles}{ A vector of length 2 giving quantiles used in plotting uncertainty. The smallest symbols correspond to the smallest quantile (lowest uncertainty), medium-sized (open) symbols to points falling between the given quantiles, and large (filled) symbols to those in the largest quantile (highest uncertainty). The default is \emph{(0.75,0.95)}. } \item{addEllipses}{ A logical indicating whether or not to add ellipses with axes corresponding to the within-cluster covariances in case of \code{"classification"} or \code{"uncertainty"} plots. } \item{fillEllipses}{ A logical specifying whether or not to fill ellipses with transparent colors when \code{addEllipses = TRUE}. } \item{symbols}{ Either an integer or character vector assigning a plotting symbol to each unique class in \code{classification}. Elements in \code{symbols} correspond to classes in order of appearance in the sequence of observations (the order used by the function \code{unique}). The default is given by \code{mclust.options("classPlotSymbols")}. } \item{colors}{ Either an integer or character vector assigning a color to each unique class in \code{classification}. Elements in \code{colors} correspond to classes in order of appearance in the sequence of observations (the order used by the function \code{unique}). The default is given by \code{mclust.options("classPlotColors")}. } \item{scale}{ A logical variable indicating whether or not the two chosen dimensions should be plotted on the same scale, and thus preserve the shape of the distribution. Default: \code{scale=FALSE} } \item{xlim, ylim}{ Optional arguments specifying bounds for the abscissa and ordinate of the plot. This may be useful when comparing plots. } \item{xlab, ylab}{ Optional arguments specifying the labels for, respectively, the horizontal and vertical axis. } \item{CEX}{ An argument specifying the size of the plotting symbols. The default value is 1. } \item{PCH}{ An argument specifying the symbol to be used when a classification has not been specified for the data. The default value is a small dot ".". } \item{main}{ A logical variable or \code{NULL} indicating whether or not to add a title to the plot identifying the dimensions used. } \item{\dots}{ Other graphics parameters. } } \value{ A plot showing a random two-dimensional projection of the data, together with the location of the mixture components, classification, uncertainty, and/or classification errors. The function also returns an invisible list with components \code{basis}, the randomly generated basis of the projection subspace, \code{data}, a matrix of projected data, and \code{mu} and \code{sigma} the component parameters transformed to the projection subspace.
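A brief sketch, not taken from this help page, of how the invisible result can be captured and inspected; it assumes the est model fitted in the examples further below.
# rough sketch using the 'est' object from the examples below
rp <- randProj(iris[,-5], seeds = 1, parameters = est$parameters, z = est$z)
str(rp, max.level = 1)  # invisible list: basis, data, mu, sigma
dim(rp$basis)           # basis of the two-dimensional projection subspace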
} \seealso{ \code{\link{clPairs}}, \code{\link{coordProj}}, \code{\link{mclust2Dplot}}, \code{\link{mclust.options}} } \examples{ \dontrun{ est <- meVVV(iris[,-5], unmap(iris[,5])) par(pty = "s", mfrow = c(1,1)) randProj(iris[,-5], seeds=1:3, parameters = est$parameters, z = est$z, what = "classification", main = TRUE) randProj(iris[,-5], seeds=1:3, parameters = est$parameters, z = est$z, truth = iris[,5], what = "error", main = TRUE) randProj(iris[,-5], seeds=1:3, parameters = est$parameters, z = est$z, what = "uncertainty", main = TRUE) } } \keyword{cluster} mclust/man/summary.mclustBIC.Rd0000644000176200001440000000760413175055217016161 0ustar liggesusers\name{summary.mclustBIC} \alias{summary.mclustBIC} \alias{print.summary.mclustBIC} \alias{summary.mclustBIC} \alias{summaryMclustBIC} \alias{summaryMclustBICn} \alias{printSummaryMclustBIC} \alias{printSummaryMclustBICn} \title{Summary function for model-based clustering via BIC} \description{ Optimal model characteristics and classification for model-based clustering via \code{mclustBIC}. } \usage{ \method{summary}{mclustBIC}(object, data, G, modelNames, \dots) } \arguments{ \item{object}{ An \code{'mclustBIC'} object, which is the result of applying \code{mclustBIC} to \code{data}. } \item{data}{ The matrix or vector of observations used to generate `object'. } \item{G}{ A vector of integers giving the numbers of mixture components (clusters) from which the best model according to BIC will be selected (\code{as.character(G)} must be a subset of the row names of \code{object}). The default is to select the best model for all numbers of mixture components used to obtain \code{object}. } \item{modelNames}{ A vector of integers giving the model parameterizations from which the best model according to BIC will be selected (\code{as.character(model)} must be a subset of the column names of \code{object}). The default is to select the best model for parameterizations used to obtain \code{object}. } \item{\dots}{ Not used. For generic/method consistency. } } \value{ A list giving the optimal (according to BIC) parameters, conditional probabilities \code{z}, and log-likelihood, together with the associated classification and its uncertainty. The details of the output components are as follows: \item{modelName}{ A character string denoting the model corresponding to the optimal BIC. } \item{n}{ The number of observations in the data. } \item{d}{ The dimension of the data. } \item{G}{ The number of mixture components in the model corresponding to the optimal BIC. } \item{bic}{ The optimal BIC value. } \item{loglik}{ The log-likelihood corresponding to the optimal BIC. } \item{parameters}{ A list with the following components: \describe{ \item{\code{pro}}{ A vector whose \emph{k}th component is the mixing proportion for the \emph{k}th component of the mixture model. If missing, equal proportions are assumed. } \item{\code{mean}}{ The mean for each component. If there is more than one component, this is a matrix whose kth column is the mean of the \emph{k}th component of the mixture model. } \item{\code{variance}}{ A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for \code{\link{mclustVariance}} for details. } } } \item{z}{ A matrix whose \emph{[i,k]}th entry is the probability that observation \emph{i} in the data belongs to the \emph{k}th class. } \item{classification}{ \code{map(z)}: The classification corresponding to \code{z}. 
} \item{uncertainty}{ The uncertainty associated with the classification. } \item{Attributes:}{ \code{"bestBICvalues"} Some of the best bic values for the analysis.\cr \code{"prior"} The prior as specified in the input.\cr \code{"control"} The control parameters for EM as specified in the input.\cr \code{"initialization"} The parameters used to initial EM for computing the maximum likelihood values used to obtain the BIC. } } \seealso{ \code{\link{mclustBIC}} \code{\link{mclustModel}} } \examples{ irisBIC <- mclustBIC(iris[,-5]) summary(irisBIC, iris[,-5]) summary(irisBIC, iris[,-5], G = 1:6, modelNames = c("VII", "VVI", "VVV")) } \keyword{cluster} % docclass is function mclust/man/mapClass.Rd0000644000176200001440000000240013465001365014365 0ustar liggesusers\name{mapClass} \alias{mapClass} \title{Correspondence between classifications} \description{ Best correspondence between classes given two vectors viewed as alternative classifications of the same object. } \usage{ mapClass(a, b) } \arguments{ \item{a}{ A numeric or character vector of class labels. } \item{b}{ A numeric or character vector of class labels. Must have the same length as \code{a}. } } \value{ A list with two named elements, \code{aTOb} and \code{bTOa} which are themselves lists. The \code{aTOb} list has a component corresponding to each unique element of \code{a}, which gives the element or elements of \code{b} that result in the closest class correspondence. The \code{bTOa} list has a component corresponding to each unique element of \code{b}, which gives the element or elements of \code{a} that result in the closest class correspondence. } \seealso{ \code{\link{mapClass}}, \code{\link{classError}}, \code{\link{table}} } \examples{ a <- rep(1:3, 3) a b <- rep(c("A", "B", "C"), 3) b mapClass(a, b) a <- sample(1:3, 9, replace = TRUE) a b <- sample(c("A", "B", "C"), 9, replace = TRUE) b mapClass(a, b) } \keyword{cluster} % docclass is function % Converted by Sd2Rd version 1.21. mclust/man/emE.Rd0000644000176200001440000001374413465001255013343 0ustar liggesusers\name{emE} \alias{emE} \alias{emV} \alias{emX} \alias{emEII} \alias{emVII} \alias{emEEI} \alias{emVEI} \alias{emEVI} \alias{emVVI} \alias{emEEE} \alias{emEEV} \alias{emVEV} \alias{emVVV} \alias{emEVE} \alias{emEVV} \alias{emVEE} \alias{emVVE} \alias{emXII} \alias{emXXI} \alias{emXXX} \title{EM algorithm starting with E-step for a parameterized Gaussian mixture model} \description{ Implements the EM algorithm for a parameterized Gaussian mixture model, starting with the expectation step. 
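A minimal sketch (assuming the iris data) of the E-step/M-step cycle that these functions iterate; the single steps are written out with estepEEE() and mstepEEE().
# one hand-rolled EM cycle for the "EEE" model: emEEE() repeats this
# alternation until convergence, starting from the E-step
ms  <- mstepEEE(data = iris[,-5], z = unmap(iris[,5]))         # M-step from class labels
es  <- estepEEE(data = iris[,-5], parameters = ms$parameters)  # E-step
ms2 <- mstepEEE(data = iris[,-5], z = es$z)                    # next M-step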
} \usage{ emE(data, parameters, prior = NULL, control = emControl(), warn = NULL, \dots) emV(data, parameters, prior = NULL, control = emControl(), warn = NULL, \dots) emX(data, prior = NULL, warn = NULL, \dots) emEII(data, parameters, prior = NULL, control = emControl(), warn = NULL, \dots) emVII(data, parameters, prior = NULL, control = emControl(), warn = NULL, \dots) emEEI(data, parameters, prior = NULL, control = emControl(), warn = NULL, \dots) emVEI(data, parameters, prior = NULL, control = emControl(), warn = NULL, \dots) emEVI(data, parameters, prior = NULL, control = emControl(), warn = NULL, \dots) emVVI(data, parameters, prior = NULL, control = emControl(), warn = NULL, \dots) emEEE(data, parameters, prior = NULL, control = emControl(), warn = NULL, \dots) emEEV(data, parameters, prior = NULL, control = emControl(), warn = NULL, \dots) emVEV(data, parameters, prior = NULL, control = emControl(), warn = NULL, \dots) emVVV(data, parameters, prior = NULL, control = emControl(), warn = NULL, \dots) emEVE(data, parameters, prior = NULL, control = emControl(), warn = NULL, \dots) emEVV(data, parameters, prior = NULL, control = emControl(), warn = NULL, \dots) emVEE(data, parameters, prior = NULL, control = emControl(), warn = NULL, \dots) emVVE(data, parameters, prior = NULL, control = emControl(), warn = NULL, \dots) emXII(data, prior = NULL, warn = NULL, \dots) emXXI(data, prior = NULL, warn = NULL, \dots) emXXX(data, prior = NULL, warn = NULL, \dots) } \arguments{ \item{data}{ A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables. } \item{parameters}{ The parameters of the model: \describe{ \item{\code{pro}}{ Mixing proportions for the components of the mixture. There should be one more mixing proportion than the number of Gaussian components if the mixture model includes a Poisson noise term. } \item{\code{mean}}{ The mean for each component. If there is more than one component, this is a matrix whose kth column is the mean of the \emph{k}th component of the mixture model. } \item{\code{variance}}{ A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for \code{\link{mclustVariance}} for details. } \item{\code{Vinv}}{ An estimate of the reciprocal hypervolume of the data region. The default is determined by applying function \code{hypvol} to the data. Used only when \code{pro} includes an additional mixing proportion for a noise component. } } } \item{prior}{ The default assumes no prior, but this argument allows specification of a conjugate prior on the means and variances through the function \code{priorControl}. } \item{control}{ A list of control parameters for EM. The defaults are set by the call \code{emControl()}. } \item{warn}{ A logical value indicating whether or not a warning should be issued whenever a singularity is encountered. The default is given in \code{mclust.options("warn")}. } \item{\dots}{ Catches unused arguments in indirect or list calls via \code{do.call}. } } \value{ A list including the following components: \item{modelName}{ A character string identifying the model (same as the input argument). } \item{z}{ A matrix whose \code{[i,k]}th entry is the conditional probability of the \emph{i}th observation belonging to the \emph{k}th component of the mixture.
} \item{parameters}{ \describe{ \item{\code{pro}}{ A vector whose \emph{k}th component is the mixing proportion for the \emph{k}th component of the mixture model. If the model includes a Poisson term for noise, there should be one more mixing proportion than the number of Gaussian components. } \item{\code{mean}}{ The mean for each component. If there is more than one component, this is a matrix whose kth column is the mean of the \emph{k}th component of the mixture model. } \item{\code{variance}}{ A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for \code{\link{mclustVariance}} for details. } \item{\code{Vinv}}{ The estimate of the reciprocal hypervolume of the data region used in the computation when the input indicates the addition of a noise component to the model. } } } \item{loglik}{ The log likelihood for the data in the mixture model. } \item{Attributes:}{ \code{"info"} Information on the iteration.\cr \code{"WARNING"} An appropriate warning if problems are encountered in the computations. } } \seealso{ \code{\link{me}}, \code{\link{mstep}}, \code{\link{mclustVariance}}, \code{\link{mclust.options}}. } \examples{ \dontrun{ msEst <- mstepEEE(data = iris[,-5], z = unmap(iris[,5])) names(msEst) emEEE(data = iris[,-5], parameters = msEst$parameters)} } \keyword{cluster} % docclass is function % Converted by Sd2Rd version 1.21. mclust/man/summary.Mclust.Rd0000644000176200001440000000275113175055207015600 0ustar liggesusers\name{summary.Mclust} \alias{summary.Mclust} \alias{print.summary.Mclust} \title{Summarizing Gaussian Finite Mixture Model Fits} \description{Summary method for class \code{"Mclust"}.} \usage{ \method{summary}{Mclust}(object, parameters = FALSE, classification = FALSE, \dots) \method{print}{summary.Mclust}(x, digits = getOption("digits"), \dots) } \arguments{ \item{object}{An object of class \code{'Mclust'} resulting from a call to \code{\link{Mclust}} or \code{\link{densityMclust}}.} \item{x}{An object of class \code{'summary.Mclust'}, usually, a result of a call to \code{summary.Mclust}.} \item{parameters}{Logical; if \code{TRUE}, the parameters of mixture components are printed.} \item{classification}{Logical; if \code{TRUE}, the MAP classification/clustering of observations is printed.} \item{digits}{The number of significant digits to use when printing.} \item{\dots}{Further arguments passed to or from other methods.} } % \details{} % \value{} \author{Luca Scrucca} % \note{} \seealso{\code{\link{Mclust}}, \code{\link{densityMclust}}.} \examples{ mod1 = Mclust(iris[,1:4]) summary(mod1) summary(mod1, parameters = TRUE, classification = TRUE) mod2 = Mclust(iris[,1:4], G = 1) summary(mod2, parameters = TRUE, classification = TRUE) mod3 = Mclust(iris[,1:4], prior = priorControl()) summary(mod3) mod4 = Mclust(iris[,1:4], prior = priorControl(functionName="defaultPrior", shrinkage=0.1)) summary(mod4, parameters = TRUE, classification = TRUE) } \keyword{cluster} mclust/man/bic.Rd0000644000176200001440000000355113205037164013365 0ustar liggesusers\name{bic} \alias{bic} \title{ BIC for Parameterized Gaussian Mixture Models } \description{ Computes the BIC (Bayesian Information Criterion) for parameterized mixture models given the loglikelihood, the dimension of the data, and the number of mixture components in the model. } \usage{ bic(modelName, loglik, n, d, G, noise=FALSE, equalPro=FALSE, ...) } \arguments{ \item{modelName}{ A character string indicating the model.
The help file for \code{\link{mclustModelNames}} describes the available models. } \item{loglik}{ The log-likelihood for a data set with respect to the Gaussian mixture model specified in the \code{modelName} argument. } \item{n}{ The number of observations in the data used to compute \code{loglik}. } \item{d}{ The dimension of the data used to compute \code{loglik}. } \item{G}{ The number of components in the Gaussian mixture model used to compute \code{loglik}. } \item{noise}{ A logical variable indicating whether or not the model includes an optional Poisson noise component. The default is to assume no noise component. } \item{equalPro}{ A logical variable indicating whether or not the components in the model are assumed to be present in equal proportion. The default is to assume unequal mixing proportions. } \item{\dots}{ Catches unused arguments in an indirect or list call via \code{do.call}. } } \value{ The BIC or Bayesian Information Criterion for the given input arguments. } \seealso{ \code{\link{mclustBIC}}, \code{\link{nVarParams}}, \code{\link{mclustModelNames}}. } \examples{ \dontrun{ n <- nrow(iris) d <- ncol(iris)-1 G <- 3 emEst <- me(modelName="VVI", data=iris[,-5], unmap(iris[,5])) names(emEst) args(bic) bic(modelName="VVI", loglik=emEst$loglik, n=n, d=d, G=G) # do.call("bic", emEst) ## alternative call } } \keyword{cluster} mclust/man/combiTree.Rd0000644000176200001440000000306113314162461014535 0ustar liggesusers\name{combiTree} \alias{combiTree} \title{Tree structure obtained from combining mixture components} \description{The method implemented in \code{\link{clustCombi}} can be used for combining Gaussian mixture components for clustering. This provides a hierarchical structure which can be graphically represented as a tree.} \usage{ combiTree(object, type = c("triangle", "rectangle"), yaxis = c("entropy", "step"), edgePar = list(col = "darkgray", lwd = 2), \dots) } \arguments{ \item{object}{ An object of class \code{'clustCombi'} resulting from a call to \code{\link{clustCombi}}. } \item{type}{ A string specifying the dendrogram's type. Possible values are \code{"triangle"} (default), and \code{"rectangle"}. } \item{yaxis}{ A string specifying the quantity used to draw the vertical axis. Possible values are \code{"entropy"} (default), and \code{"step"}. } \item{edgePar}{ A list of plotting parameters. See \code{\link[stats]{dendrogram}}. } \item{\dots}{Further arguments passed to or from other methods.} } %\details{} \value{ The function always draws a tree and invisibly returns an object of class \code{'dendrogram'} for fine tuning. } %\references{} \author{L. Scrucca} %\note{} \seealso{\code{\link{clustCombi}}} \examples{ \dontrun{ data(Baudry_etal_2010_JCGS_examples) output <- clustCombi(data = ex4.1) combiTree(output) combiTree(output, type = "rectangle") combiTree(output, yaxis = "step") combiTree(output, type = "rectangle", yaxis = "step") } } \keyword{cluster} \keyword{hplot} mclust/man/emControl.Rd0000644000176200001440000000442113465001266014571 0ustar liggesusers\name{emControl} \alias{emControl} \title{Set control values for use with the EM algorithm} \description{ Supplies a list of values including tolerances for singularity and convergence assessment, for use in functions involving EM within \emph{MCLUST}. } \usage{ emControl(eps, tol, itmax, equalPro) } \arguments{ \item{eps}{ A scalar tolerance associated with deciding when to terminate computations due to computational singularity in covariances.
Smaller values of \code{eps} allow computations to proceed nearer to singularity. The default is the relative machine precision \code{.Machine$double.eps}, which is approximately \eqn{2e-16} on IEEE-compliant machines. } \item{tol}{ A vector of length two giving relative convergence tolerances for the log-likelihood and for parameter convergence in the inner loop for models with iterative M-step ("VEI", "EVE", "VEE", "VVE", "VEV"), respectively. The default is \code{c(1.e-5,sqrt(.Machine$double.eps))}. If only one number is supplied, it is used as the tolerance for the outer iterations and the tolerance for the inner iterations is as in the default. } \item{itmax}{ A vector of length two giving integer limits on the number of EM iterations and on the number of iterations in the inner loop for models with iterative M-step ("VEI", "EVE", "VEE", "VVE", "VEV"), respectively. The default is \code{c(.Machine$integer.max, .Machine$integer.max)} allowing termination to be completely governed by \code{tol}. If only one number is supplied, it is used as the iteration limit for the outer iteration only. } \item{equalPro}{ Logical variable indicating whether or not the mixing proportions are equal in the model. Default: \code{equalPro = FALSE}. } } \value{ A named list in which the names are the names of the arguments and the values are the values supplied to the arguments. } \details{ \code{emControl} is provided for assigning values and defaults for EM within \emph{MCLUST}. } \seealso{ \code{\link{em}}, \code{\link{estep}}, \code{\link{me}}, \code{\link{mstep}}, \code{\link{mclustBIC}} } \examples{ irisBIC <- mclustBIC(iris[,-5], control = emControl(tol = 1.e-6)) summary(irisBIC, iris[,-5]) } \keyword{cluster} mclust/man/cdensE.Rd0000644000176200001440000001053313175050476014036 0ustar liggesusers\name{cdensE} \alias{cdensE} \alias{cdensV} \alias{cdensX} \alias{cdensEII} \alias{cdensVII} \alias{cdensEEI} \alias{cdensVEI} \alias{cdensEVI} \alias{cdensVVI} \alias{cdensEEE} \alias{cdensEEV} \alias{cdensVEV} \alias{cdensVVV} \alias{cdensEVE} \alias{cdensEVV} \alias{cdensVEE} \alias{cdensVVE} \alias{cdensXII} \alias{cdensXXI} \alias{cdensXXX} \title{ Component Density for a Parameterized MVN Mixture Model } \description{ Computes component densities for points in a parameterized MVN mixture model. 
} \usage{ cdensE(data, logarithm = FALSE, parameters, warn = NULL, \dots) cdensV(data, logarithm = FALSE, parameters, warn = NULL, \dots) cdensX(data, logarithm = FALSE, parameters, warn = NULL, \dots) cdensEII(data, logarithm = FALSE, parameters, warn = NULL, \dots) cdensVII(data, logarithm = FALSE, parameters, warn = NULL, \dots) cdensEEI(data, logarithm = FALSE, parameters, warn = NULL, \dots) cdensVEI(data, logarithm = FALSE, parameters, warn = NULL, \dots) cdensEVI(data, logarithm = FALSE, parameters, warn = NULL, \dots) cdensVVI(data, logarithm = FALSE, parameters, warn = NULL, \dots) cdensEEE(data, logarithm = FALSE, parameters, warn = NULL, \dots) cdensEEV(data, logarithm = FALSE, parameters, warn = NULL, \dots) cdensVEV(data, logarithm = FALSE, parameters, warn = NULL, \dots) cdensVVV(data, logarithm = FALSE, parameters, warn = NULL, \dots) cdensEVE(data, logarithm = FALSE, parameters, warn = NULL, \dots) cdensEVV(data, logarithm = FALSE, parameters, warn = NULL, \dots) cdensVEE(data, logarithm = FALSE, parameters, warn = NULL, \dots) cdensVVE(data, logarithm = FALSE, parameters, warn = NULL, \dots) cdensXII(data, logarithm = FALSE, parameters, warn = NULL, \dots) cdensXXI(data, logarithm = FALSE, parameters, warn = NULL, \dots) cdensXXX(data, logarithm = FALSE, parameters, warn = NULL, \dots) } \arguments{ \item{data}{ A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables. } \item{logarithm}{ A logical value indicating whether or not the logarithm of the component densities should be returned. The default is to return the component densities, obtained from the log component densities by exponentiation. } \item{parameters}{ The parameters of the model: \describe{ \item{\code{mean}}{ The mean for each component. If there is more than one component, this is a matrix whose kth column is the mean of the \emph{k}th component of the mixture model. } \item{\code{variance}}{ A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for \code{\link{mclustVariance}} for details. } \item{\code{pro}}{ Mixing proportions for the components of the mixture. If the model includes a Poisson term for noise, there should be one more mixing proportion than the number of Gaussian components. } } } \item{warn}{ A logical value indicating whether or not a warning should be issued when computations fail. The default is \code{warn=FALSE}. } \item{\dots}{ Catches unused arguments in indirect or list calls via \code{do.call}. } } \value{ A numeric matrix whose \code{[i,j]}th entry is the density of observation \emph{i} in component \emph{j}. The densities are not scaled by mixing proportions. } \note{ When one or more component densities are very large in magnitude, then it may be possible to compute the logarithm of the component densities but not the component densities themselves due to overflow. } \seealso{ \code{\link{cdens}}, \code{\link{dens}}, \code{\link{mclustVariance}}, \code{\link{mstep}}, \code{\link{mclust.options}}, \code{\link{do.call}}. 
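A small sanity check, added here as a sketch rather than taken from the original page: for the univariate "E" model the unscaled component densities should agree with dnorm(); the faithful data shipped with R is assumed.
# rough sketch: component densities for the "E" model versus dnorm()
z0   <- unmap(hclass(hcE(faithful$waiting), 2))
pars <- mstepE(data = faithful$waiting, z = z0)$parameters
d1   <- cdensE(data = faithful$waiting, parameters = pars)
d2   <- dnorm(faithful$waiting, mean = pars$mean[1], sd = sqrt(pars$variance$sigmasq))
all.equal(unname(d1[,1]), d2)  # expected TRUE up to numerical tolerance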
} \examples{ \dontrun{ z2 <- unmap(hclass(hcVVV(faithful),2)) # initial value for 2 class case model <- meVVV(data=faithful, z=z2) cdensVVV(data=faithful, logarithm = TRUE, parameters = model$parameters) data(cross) z2 <- unmap(cross[,1]) model <- meEEV(data = cross[,-1], z = z2) EEVdensities <- cdensEEV( data = cross[,-1], parameters = model$parameters) cbind(cross[,-1],map(EEVdensities))} } \keyword{cluster} mclust/man/randomPairs.Rd0000644000176200001440000000216113175055134015106 0ustar liggesusers\name{randomPairs} \alias{randomPairs} \title{Random hierarchical structure} \description{Create a hierarchical structure using a random partition of the data.} \usage{ randomPairs(data, seed, \dots) } \arguments{ \item{data}{ A numeric matrix or data frame of observations. If a matrix or data frame, rows correspond to observations and columns correspond to variables. } \item{seed}{ Optional single value, interpreted as an integer, specifying the seed for random partition. } \item{\dots}{ Catches unused arguments in indirect or list calls via \code{do.call}. } } \value{ A numeric two-column matrix in which the \emph{i}th row gives the minimum index for observations in each of the two clusters merged at the \emph{i}th stage of a random agglomerative hierarchical clustering. } \seealso{ \code{\link{hc}}, \code{\link{hclass}} \code{\link{hcVVV}} } \examples{ data <- iris[,1:4] randPairs <- randomPairs(data) str(randPairs) # start model-based clustering from a random partition mod <- Mclust(data, initialization = list(hcPairs = randPairs)) summary(mod) } \keyword{cluster} mclust/man/partconv.Rd0000644000176200001440000000164012460535131014460 0ustar liggesusers\name{partconv} \alias{partconv} \title{Numeric Encoding of a Partitioning} \description{ Converts a vector interpreted as a classification or partitioning into a numeric vector. } \usage{ partconv(x, consec=TRUE) } \arguments{ \item{x}{ A vector interpreted as a classification or partitioning. } \item{consec}{ Logical value indicating whether or not to consecutive class numbers should be used . } } \value{ Numeric encoding of \code{x}. When \code{consec = TRUE}, the distinct values in \code{x} are numbered by the order in which they appear. When \code{consec = FALSE}, each distinct value in \code{x} is numbered by the index corresponding to its first appearance in \code{x}. } \seealso{ \code{\link{partuniq}} } \examples{ partconv(iris[,5]) set.seed(0) cl <- sample(LETTERS[1:9], 25, replace=TRUE) partconv(cl, consec=FALSE) partconv(cl, consec=TRUE) } \keyword{cluster} mclust/man/mclust-internal.Rd0000644000176200001440000000133513463255725015762 0ustar liggesusers\name{mclust-internal} \title{Internal MCLUST functions} \alias{pickBIC} \alias{bicFill} \alias{grid1} \alias{grid2} \alias{mvn2plot} \alias{vecnorm} \alias{traceW} \alias{qclass} \alias{unchol} \alias{shapeO} \alias{orth2} \alias{charconv} \alias{[.mclustBIC} \alias{checkModelName} \alias{balanced.folds} \alias{permute.rows} \alias{projpar.MclustDR} \alias{projdir.MclustDR} \alias{mvdnorm} \alias{ellipse} \alias{eigen.decomp} \alias{getParameters.MclustDA} \alias{as.Mclust} \alias{as.Mclust.default} \alias{as.Mclust.densityMclust} \alias{as.densityMclust} \alias{as.densityMclust.default} \alias{as.densityMclust.Mclust} \description{ Internal functions not intended to be called directly by users. 
} \keyword{internal} mclust/man/entPlot.Rd0000644000176200001440000000537713475242100014262 0ustar liggesusers\name{entPlot} \alias{entPlot} \title{ Plot Entropy Plots } \description{ Plot "entropy plots" to help select the number of classes from a hierarchy of combined clusterings. } \usage{ entPlot(z, combiM, abc = c("standard", "normalized"), reg = 2, ...) } \arguments{ \item{z}{ A matrix whose \code{[i,k]}th entry is the probability that observation \emph{i} in the data belongs to the \emph{k}th class, for the initial solution (ie before any combining). Typically, the one returned by \code{Mclust}/BIC. } \item{combiM}{ A list of "combining matrices" (as provided by \code{clustCombi}), ie \code{combiM[[K]]} is the matrix whose \emph{k}th row contains only zeros, but in columns corresponding to the labels of the classes in the \emph{(K+1)}-classes solution to be merged to get the \emph{K}-classes combined solution. \code{combiM} must contain matrices from \code{K} = number of classes in \code{z} to one. } \item{abc}{ Choose one or more of: "standard", "normalized", to specify whether the number of observations involved in each combining step should be taken into account to scale the plots or not. } \item{reg}{ The number of parts of the piecewise linear regression for the entropy plots. Choose one or more of: 2 (for 1 change-point), 3 (for 2 change-points). } \item{\dots}{ Other graphical arguments to be passed to the plot functions. } } \details{ Please see the article cited in the references for more details. A clear elbow in the "entropy plot" should suggest the user to consider the corresponding number(s) of class(es). } \value{ if \code{abc = "standard"}, plots the entropy against the number of clusters and the difference between the entropy of successive combined solutions against the number of clusters. if \code{abc = "normalized"}, plots the entropy against the cumulated number of observations involved in the successive combining steps and the difference between the entropy of successive combined solutions divided by the number of observations involved in the corresponding combining step against the number of clusters. } \references{ J.-P. Baudry, A. E. Raftery, G. Celeux, K. Lo and R. Gottardo (2010). Combining mixture components for clustering. \emph{Journal of Computational and Graphical Statistics, 19(2):332-353.} } \author{ J.-P. Baudry, A. E. Raftery, L. Scrucca } \seealso{ \code{\link{plot.clustCombi}}, \code{\link{combiPlot}}, \code{\link{clustCombi}} } \examples{ \dontrun{ data(Baudry_etal_2010_JCGS_examples) # run Mclust to get the MclustOutput output <- clustCombi(data = ex4.2, modelNames = "VII") entPlot(output$MclustOutput$z, output$combiM, reg = c(2,3)) # legend: in red, the single-change-point piecewise linear regression; # in blue, the two-change-point piecewise linear regression. } } \keyword{ cluster } mclust/man/GvHD.Rd0000644000176200001440000000434413475242100013416 0ustar liggesusers\name{GvHD} \alias{GvHD} \alias{GvHD.pos} \alias{GvHD.control} \docType{data} \title{GvHD Dataset} \description{ GvHD (Graft-versus-Host Disease) data of Brinkman et al. (2007). Two samples of this flow cytometry data, one from a patient with the GvHD, and the other from a control patient. The GvHD positive and control samples consist of 9083 and 6809 observations, respectively. Both samples include four biomarker variables, namely, CD4, CD8b, CD3, and CD8. The objective of the analysis is to identify CD3+ CD4+ CD8b+ cell sub-populations present in the GvHD positive sample. 
A treatment of this data by combining mixtures is proposed in Baudry et al. (2010). } \usage{data(GvHD)} \format{ GvHD.pos (positive patient) is a data frame with 9083 observations on the following 4 variables, which are biomarker measurements. \describe{ \item{CD4}{} \item{CD8b}{} \item{CD3}{} \item{CD8}{} } GvHD.control (control patient) is a data frame with 6809 observations on the following 4 variables, which are biomarker measurements. \describe{ \item{CD4}{} \item{CD8b}{} \item{CD3}{} \item{CD8}{} } } \references{ R. R. Brinkman, M. Gasparetto, S.-J. J. Lee, A. J. Ribickas, J. Perkins, W. Janssen, R. Smiley and C. Smith (2007). High-content flow cytometry and temporal data analysis for defining a cellular signature of Graft-versus-Host Disease. \emph{Biology of Blood and Marrow Transplantation, 13: 691-700.} K. Lo, R. R. Brinkman, R. Gottardo (2008). Automated gating of flow cytometry data via robust model-based clustering. \emph{Cytometry A, 73: 321-332.} J.-P. Baudry, A. E. Raftery, G. Celeux, K. Lo and R. Gottardo (2010). Combining mixture components for clustering. \emph{Journal of Computational and Graphical Statistics, 19(2):332-353.} } \examples{ \dontrun{ data(GvHD) dat <- GvHD.pos[1:500,] # only a few lines for a quick example output <- clustCombi(data = dat) output # is of class clustCombi # plot the hierarchy of combined solutions plot(output, what = "classification") # plot some "entropy plots" which may help one to select the number of classes plot(output, what = "entropy") # plot the tree structure obtained from combining mixture components plot(output, what = "tree") } } \keyword{datasets} mclust/man/majorityVote.Rd0000644000176200001440000000125613107132441015317 0ustar liggesusers\name{majorityVote} \alias{majorityVote} \title{Majority vote} \description{ A function to compute the majority vote (some would say plurality) label in a vector of labels, breaking ties at random.} \usage{ majorityVote(x) } \arguments{ \item{x}{A vector of values, either numerical or not.} } \value{A list with the following components: \item{table}{A table of votes for each unique value of \code{x}.} \item{ind}{An integer specifying which unique value of \code{x} corresponds to the majority vote.} \item{majority}{A string specifying the majority vote label.} } %\seealso{} \author{L. Scrucca} \examples{ x <- c("A", "C", "A", "B", "C", "B", "A") majorityVote(x) } mclust/man/predict.Mclust.Rd0000644000176200001440000000317113175055063015532 0ustar liggesusers\name{predict.Mclust} \alias{predict.Mclust} \title{Cluster multivariate observations by Gaussian finite mixture modeling} \description{Cluster prediction for multivariate observations based on Gaussian finite mixture models estimated by \code{\link{Mclust}}.} \usage{ \method{predict}{Mclust}(object, newdata, \dots) } \arguments{ \item{object}{an object of class \code{'Mclust'} resulting from a call to \code{\link{Mclust}}.} \item{newdata}{a data frame or matrix giving the data. 
If missing, the clustering data obtained from the call to \code{\link{Mclust}} are classified.} \item{\dots}{further arguments passed to or from other methods.} } % \details{} \value{ Returns a list with the following components: \item{classification}{a factor of predicted cluster labels for \code{newdata}.} \item{z}{a matrix whose \emph{[i,k]}th entry is the probability that observation \emph{i} in \code{newdata} belongs to the \emph{k}th cluster.} } \author{Luca Scrucca} % \note{} \seealso{\code{\link{Mclust}}.} \examples{ model <- Mclust(faithful) # predict cluster for the observed data pred <- predict(model) str(pred) pred$z # equal to model$z pred$classification # equal to plot(faithful, col = pred$classification, pch = pred$classification) # predict cluster over a grid grid <- apply(faithful, 2, function(x) seq(min(x), max(x), length = 50)) grid <- expand.grid(eruptions = grid[,1], waiting = grid[,2]) pred <- predict(model, grid) plot(grid, col = mclust.options("classPlotColors")[pred$classification], pch = 15, cex = 0.5) points(faithful, pch = model$classification) } \keyword{multivariate} mclust/man/decomp2sigma.Rd0000644000176200001440000000422313465001117015176 0ustar liggesusers\name{decomp2sigma} \alias{decomp2sigma} \title{ Convert mixture component covariances to matrix form } \description{ Converts covariances from a parameterization by eigenvalue decomposition or Cholesky factorization to representation as a 3-D array. } \usage{ decomp2sigma(d, G, scale, shape, orientation, \dots) } \arguments{ \item{d}{ The dimension of the data. } \item{G}{ The number of components in the mixture model. } \item{scale}{ Either a \emph{G}-vector giving the scale of the covariance (the \emph{d}th root of its determinant) for each component in the mixture model, or a single numeric value if the scale is the same for each component. } \item{shape}{ Either a \emph{G} by \emph{d} matrix in which the \emph{k}th column is the shape of the covariance matrix (normalized to have determinant 1) for the \emph{k}th component, or a \emph{d}-vector giving a common shape for all components. } \item{orientation}{ Either a \emph{d} by \emph{d} by \emph{G} array whose \code{[,,k]}th entry is the orthonormal matrix whose columns are the eigenvectors of the covariance matrix of the \emph{k}th component, or a \emph{d} by \emph{d} orthonormal matrix if the mixture components have a common orientation. The \code{orientation} component of \code{decomp} can be omitted in spherical and diagonal models, for which the principal components are parallel to the coordinate axes so that the orientation matrix is the identity. } \item{\dots}{ Catches unused arguments from an indirect or list call via \code{do.call}. } } \value{ A 3-D array whose \code{[,,k]}th component is the covariance matrix of the \emph{k}th component in an MVN mixture model. } \seealso{ \code{\link{sigma2decomp}} } \examples{ meEst <- meVEV(iris[,-5], unmap(iris[,5])) names(meEst) meEst$parameters$variance dec <- meEst$parameters$variance decomp2sigma(d=dec$d, G=dec$G, shape=dec$shape, scale=dec$scale, orientation = dec$orientation) \dontrun{ do.call("decomp2sigma", dec) ## alternative call } } \keyword{cluster} % docclass is function % Converted by Sd2Rd version 1.21. mclust/man/MclustDA.Rd0000644000176200001440000002064313175053251014306 0ustar liggesusers\name{MclustDA} \alias{MclustDA} \alias{print.MclustDA} \title{MclustDA discriminant analysis} \description{ Discriminant analysis based on Gaussian finite mixture modeling.
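As a hedged sketch, not part of the original help page: a fitted 'MclustDA' object can also be used for out-of-sample prediction with predict(); the X.train/Class.train/X.test/Class.test split is assumed to be the one constructed in the examples further below.
# rough sketch: class prediction for held-out data from a fitted MclustDA model
fit  <- MclustDA(X.train, Class.train)
pred <- predict(fit, newdata = X.test)   # list with components classification and z
table(Class.test, pred$classification)   # confusion matrix on the test set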
} \usage{ MclustDA(data, class, G = NULL, modelNames = NULL, modelType = c("MclustDA", "EDDA"), prior = NULL, control = emControl(), initialization = NULL, warn = mclust.options("warn"), verbose = interactive(), \dots) } \arguments{ \item{data}{ A data frame or matrix giving the training data. } \item{class}{ A vector giving the class labels for the observations in the training data.} \item{G}{ An integer vector specifying the numbers of mixture components (clusters) for which the BIC is to be calculated within each class. The default is \code{G = 1:5}.\cr A different set of mixture components for each class can be specified by providing this argument with a list of integers for each class. See the examples below. } \item{modelNames}{ A vector of character strings indicating the models to be fitted by EM within each class (see the description in \code{\link{mclustModelNames}}). A different set of mixture models for each class can be specified by providing this argument with a list of character strings. See the examples below. } \item{modelType}{ A character string specifying whether the models given in \code{modelNames} should fit a different number of mixture components and covariance structures for each class (\code{"MclustDA"}, the default) or should be constrained to have a single component for each class with the same covariance structure among classes (\code{"EDDA"}). See Details section and the examples below. } \item{prior}{ The default assumes no prior, but this argument allows specification of a conjugate prior on the means and variances through the function \code{\link{priorControl}}. } \item{control}{ A list of control parameters for EM. The defaults are set by the call \code{emControl()}. } \item{initialization}{ A list containing zero or more of the following components: \describe{ \item{\code{hcPairs}}{ A matrix of merge pairs for hierarchical clustering such as produced by function \code{hc}. The default is to compute a hierarchical clustering tree by applying function \code{hc} with \code{modelName = "E"} to univariate data and \code{modelName = "VVV"} to multivariate data or a subset as indicated by the \code{subset} argument. The hierarchical clustering results are used as starting values for EM.} \item{\code{subset}}{ A logical or numeric vector specifying a subset of the data to be used in the initial hierarchical clustering phase.} } } \item{warn}{ A logical value indicating whether or not certain warnings (usually related to singularity) should be issued when estimation fails. The default is controlled by \code{\link{mclust.options}}. } \item{verbose}{ A logical controlling if a text progress bar is displayed during the fitting procedure. By default is \code{TRUE} if the session is interactive, and \code{FALSE} otherwise.. } \item{\dots }{Further arguments passed to or from other methods.} } \value{ An object of class \code{'MclustDA'} providing the optimal (according to BIC) mixture model. 
The details of the output components are as follows: \item{call}{The matched call.} \item{data}{The input data matrix.} \item{class}{The input class labels.} \item{type}{A character string specifying the \code{modelType} estimated.} \item{models}{A list of \code{\link{Mclust}} objects containing information on fitted model for each class.} \item{n}{The total number of observations in the data.} \item{d}{The dimension of the data.} % \item{BIC}{All BIC values.} \item{bic}{Optimal BIC value.} \item{loglik}{Log-likelihood for the selected model.} \item{df}{Number of estimated parameters.} } \details{ The \code{"EDDA"} method for discriminant analysis is described in Bensmail and Celeux (1996), while \code{"MclustDA"} in Fraley and Raftery (2002). } \references{ Scrucca L., Fop M., Murphy T. B. and Raftery A. E. (2016) mclust 5: clustering, classification and density estimation using Gaussian finite mixture models, \emph{The R Journal}, 8/1, pp. 205-233. Fraley C. and Raftery A. E. (2002) Model-based clustering, discriminant analysis and density estimation, \emph{Journal of the American Statistical Association}, 97/458, pp. 611-631. Fraley C., Raftery A. E., Murphy T. B. and Scrucca L. (2012) mclust Version 4 for R: Normal Mixture Modeling for Model-Based Clustering, Classification, and Density Estimation. \emph{Technical Report} No. 597, Department of Statistics, University of Washington. Bensmail, H., and Celeux, G. (1996) Regularized Gaussian Discriminant Analysis Through Eigenvalue Decomposition.\emph{Journal of the American Statistical Association}, 91, 1743-1748. } \author{Luca Scrucca} \seealso{ \code{\link{summary.MclustDA}}, \code{\link{plot.MclustDA}}, \code{\link{predict.MclustDA}}, \code{\link{classError}} } \examples{ odd <- seq(from = 1, to = nrow(iris), by = 2) even <- odd + 1 X.train <- iris[odd,-5] Class.train <- iris[odd,5] X.test <- iris[even,-5] Class.test <- iris[even,5] # common EEE covariance structure (which is essentially equivalent to linear discriminant analysis) irisMclustDA <- MclustDA(X.train, Class.train, modelType = "EDDA", modelNames = "EEE") summary(irisMclustDA, parameters = TRUE) summary(irisMclustDA, newdata = X.test, newclass = Class.test) # common covariance structure selected by BIC irisMclustDA <- MclustDA(X.train, Class.train, modelType = "EDDA") summary(irisMclustDA, parameters = TRUE) summary(irisMclustDA, newdata = X.test, newclass = Class.test) # general covariance structure selected by BIC irisMclustDA <- MclustDA(X.train, Class.train) summary(irisMclustDA, parameters = TRUE) summary(irisMclustDA, newdata = X.test, newclass = Class.test) plot(irisMclustDA) plot(irisMclustDA, dimens = 3:4) plot(irisMclustDA, dimens = 4) plot(irisMclustDA, what = "classification") plot(irisMclustDA, what = "classification", newdata = X.test) plot(irisMclustDA, what = "classification", dimens = 3:4) plot(irisMclustDA, what = "classification", newdata = X.test, dimens = 3:4) plot(irisMclustDA, what = "classification", dimens = 4) plot(irisMclustDA, what = "classification", dimens = 4, newdata = X.test) plot(irisMclustDA, what = "train&test", newdata = X.test) plot(irisMclustDA, what = "train&test", newdata = X.test, dimens = 3:4) plot(irisMclustDA, what = "train&test", newdata = X.test, dimens = 4) plot(irisMclustDA, what = "error") plot(irisMclustDA, what = "error", dimens = 3:4) plot(irisMclustDA, what = "error", dimens = 4) plot(irisMclustDA, what = "error", newdata = X.test, newclass = Class.test) plot(irisMclustDA, what = "error", newdata = X.test, newclass = 
Class.test, dimens = 3:4) plot(irisMclustDA, what = "error", newdata = X.test, newclass = Class.test, dimens = 4) \dontrun{ # simulated 1D data n <- 250 set.seed(1) triModal <- c(rnorm(n,-5), rnorm(n,0), rnorm(n,5)) triClass <- c(rep(1,n), rep(2,n), rep(3,n)) odd <- seq(from = 1, to = length(triModal), by = 2) even <- odd + 1 triMclustDA <- MclustDA(triModal[odd], triClass[odd]) summary(triMclustDA, parameters = TRUE) summary(triMclustDA, newdata = triModal[even], newclass = triClass[even]) plot(triMclustDA, what = "scatterplot") plot(triMclustDA, what = "classification") plot(triMclustDA, what = "classification", newdata = triModal[even]) plot(triMclustDA, what = "train&test", newdata = triModal[even]) plot(triMclustDA, what = "error") plot(triMclustDA, what = "error", newdata = triModal[even], newclass = triClass[even]) # simulated 2D cross data data(cross) odd <- seq(from = 1, to = nrow(cross), by = 2) even <- odd + 1 crossMclustDA <- MclustDA(cross[odd,-1], cross[odd,1]) summary(crossMclustDA, parameters = TRUE) summary(crossMclustDA, newdata = cross[even,-1], newclass = cross[even,1]) plot(crossMclustDA, what = "scatterplot") plot(crossMclustDA, what = "classification") plot(crossMclustDA, what = "classification", newdata = cross[even,-1]) plot(crossMclustDA, what = "train&test", newdata = cross[even,-1]) plot(crossMclustDA, what = "error") plot(crossMclustDA, what = "error", newdata =cross[even,-1], newclass = cross[even,1]) } } \keyword{multivariate} mclust/man/mstep.Rd0000644000176200001440000000671213465001611013756 0ustar liggesusers\name{mstep} \alias{mstep} \title{M-step for parameterized Gaussian mixture models} \description{ Maximization step in the EM algorithm for parameterized Gaussian mixture models. } \usage{ mstep(modelName, data, z, prior = NULL, warn = NULL, \dots) } \arguments{ \item{modelName}{ A character string indicating the model. The help file for \code{\link{mclustModelNames}} describes the available models. } \item{data}{ A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables. } \item{z}{ A matrix whose \code{[i,k]}th entry is the conditional probability of the ith observation belonging to the \emph{k}th component of the mixture. In analyses involving noise, this should not include the conditional probabilities for the noise component. } \item{prior}{ Specification of a conjugate prior on the means and variances. The default assumes no prior. } \item{warn}{ A logical value indicating whether or not certain warnings (usually related to singularity) should be issued when the estimation fails. The default is given by \code{mclust.options("warn")}. } \item{\dots}{ Catches unused arguments in indirect or list calls via \code{do.call}. } } \value{ A list including the following components: \item{modelName}{ A character string identifying the model (same as the input argument). } \item{parameters}{ \describe{ \item{\code{pro}}{ A vector whose \emph{k}th component is the mixing proportion for the \emph{k}th component of the mixture model. If the model includes a Poisson term for noise, there should be one more mixing proportion than the number of Gaussian components. } \item{\code{mean}}{ The mean for each component. If there is more than one component, this is a matrix whose kth column is the mean of the \emph{k}th component of the mixture model. } \item{\code{variance}}{ A list of variance parameters for the model. 
The components of this list depend on the model specification. See the help file for \code{\link{mclustVariance}} for details. } } } \item{Attributes:}{ \code{"info"} For those models with iterative M-steps (\code{"VEI"} and \code{"VEV"}), information on the iteration.\cr \code{"WARNING"} An appropriate warning if problems are encountered in the computations. } } \note{ This function computes the M-step only for MVN mixtures, so in analyses involving noise, the conditional probabilities input should exclude those for the noise component. \cr In contrast to \code{me} for the EM algorithm, computations in \code{mstep} are carried out unless failure due to overflow would occur. To impose stricter tolerances on a single \code{mstep}, use \code{me} with the \emph{itmax} component of the \code{control} argument set to 1. } \seealso{ \code{\link{mstepE}}, \dots, \code{\link{mstepVVV}}, \code{\link{emControl}}, \code{\link{me}}, \code{\link{estep}}, \code{\link{mclust.options}}. } \examples{ \dontrun{ mstep(modelName = "VII", data = iris[,-5], z = unmap(iris[,5]))} } \keyword{cluster} mclust/man/nVarParams.Rd0000644000176200001440000000317313175052444014706 0ustar liggesusers\name{nVarParams} \alias{nVarParams} \title{ Number of Variance Parameters in Gaussian Mixture Models } \description{ Gives the number of variance parameters for parameterizations of the Gaussian mixture model that are used in MCLUST. } \usage{ nVarParams(modelName, d, G, \dots) } \arguments{ \item{modelName}{ A character string indicating the model. The help file for \code{\link{mclustModelNames}} describes the available models. } \item{d}{ The dimension of the data. Not used for models in which neither the shape nor the orientation varies. } \item{G}{ The number of components in the Gaussian mixture model used to compute \code{loglik}. } \item{\dots }{ Catches unused arguments in indirect or list calls via \code{do.call}. } } \value{ The number of variance parameters in the corresponding Gaussian mixture model. } \details{ To get the total number of parameters in model, add \code{G*d} for the means and \code{G-1} for the mixing proportions if they are unequal. } \references{ C. Fraley and A. E. Raftery (2002). Model-based clustering, discriminant analysis, and density estimation. \emph{Journal of the American Statistical Association 97:611:631}. C. Fraley, A. E. Raftery, T. B. Murphy and L. Scrucca (2012). mclust Version 4 for R: Normal Mixture Modeling for Model-Based Clustering, Classification, and Density Estimation. Technical Report No. 597, Department of Statistics, University of Washington. } \seealso{ \code{\link{bic}}, \code{\link{nMclustParams}}. } \examples{ mapply(nVarParams, mclust.options("emModelNames"), d = 2, G = 3) } \keyword{cluster} mclust/man/predict.MclustDR.Rd0000644000176200001440000000363413175055104015760 0ustar liggesusers\name{predict.MclustDR} \alias{predict.MclustDR} \alias{predict2D.MclustDR} \title{Classify multivariate observations on a dimension reduced subspace by Gaussian finite mixture modeling} \description{Classify multivariate observations on a dimension reduced subspace estimated from a Gaussian finite mixture model.} \usage{ \method{predict}{MclustDR}(object, dim = 1:object$numdir, newdata, eval.points, \dots) } \arguments{ \item{object}{an object of class \code{'MclustDR'} resulting from a call to \code{\link{MclustDR}.}} \item{dim}{the dimensions of the reduced subspace used for prediction.} \item{newdata}{a data frame or matrix giving the data. 
If missing, the data obtained from the call to \code{\link{MclustDR}} are used.} \item{eval.points}{a data frame or matrix giving the data projected on the reduced subspace. If provided, \code{newdata} is not used.} \item{\dots}{further arguments passed to or from other methods.} } % \details{} \value{ Returns a list with the following components: \item{dir}{a matrix containing the data projected onto the \code{dim} dimensions of the reduced subspace.} \item{density}{densities from the mixture model for each data point.} \item{z}{a matrix whose \emph{[i,k]}th entry is the probability that observation \emph{i} in \code{newdata} belongs to the \emph{k}th class.} \item{uncertainty}{The uncertainty associated with the classification.} \item{classification}{A vector of values giving the MAP classification.} } \references{ Scrucca, L. (2010) Dimension reduction for model-based clustering. \emph{Statistics and Computing}, 20(4), pp. 471-484. } \author{Luca Scrucca} % \note{} \seealso{\code{\link{MclustDR}}.} \examples{ mod = Mclust(iris[,1:4]) dr = MclustDR(mod) pred = predict(dr) str(pred) data(banknote) mod = MclustDA(banknote[,2:7], banknote$Status) dr = MclustDR(mod) pred = predict(dr) str(pred) } \keyword{multivariate} mclust/man/cdens.Rd0000644000176200001440000000600213375223444013724 0ustar liggesusers\name{cdens} \alias{cdens} \title{ Component Density for Parameterized MVN Mixture Models } \description{ Computes component densities for observations in MVN mixture models parameterized by eigenvalue decomposition. } \usage{ cdens(modelName, data, logarithm = FALSE, parameters, warn = NULL, \dots) } \arguments{ \item{modelName}{ A character string indicating the model. The help file for \code{\link{mclustModelNames}} describes the available models. } \item{data}{ A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables. } \item{logarithm}{ A logical value indicating whether or not the logarithm of the component densities should be returned. The default is to return the component densities, obtained from the log component densities by exponentiation. } \item{parameters}{ The parameters of the model: \describe{ \item{\code{mean}}{ The mean for each component. If there is more than one component, this is a matrix whose kth column is the mean of the \emph{k}th component of the mixture model. } \item{\code{variance}}{ A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for \code{\link{mclustVariance}} for details. } } } \item{warn}{ A logical value indicating whether or not a warning should be issued when computations fail. The default is \code{warn=FALSE}. } \item{\dots}{ Catches unused arguments in indirect or list calls via \code{do.call}. } } \value{ A numeric matrix whose \code{[i,k]}th entry is the density or log density of observation \emph{i} in component \emph{k}. The densities are not scaled by mixing proportions. } \note{ When one or more component densities are very large in magnitude, it may be possible to compute the logarithm of the component densities but not the component densities themselves due to overflow.
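One way to act on the note above, sketched here rather than quoted from the package documentation: keep everything on the log scale and combine the log component densities with the log mixing proportions via log-sum-exp; the "EEE" model object is assumed to be the one fitted in the examples below.
# rough sketch: mixture log-density from log component densities (log-sum-exp)
lcd  <- cdens(modelName = "EEE", data = faithful, logarithm = TRUE,
              parameters = model$parameters)
lpro <- log(model$parameters$pro)
lse  <- function(v) max(v) + log(sum(exp(v - max(v))))
ldens <- apply(sweep(lcd, 2, lpro, "+"), 1, lse)  # log mixture density per observation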
} \seealso{ \code{\link{cdensE}}, \dots, \code{\link{cdensVVV}}, \code{\link{dens}}, \code{\link{estep}}, \code{\link{mclustModelNames}}, \code{\link{mclustVariance}}, \code{\link{mclust.options}}, \code{\link{do.call}} } \examples{ z2 <- unmap(hclass(hcVVV(faithful),2)) # initial value for 2 class case model <- me(modelName = "EEE", data = faithful, z = z2) cdens(modelName = "EEE", data = faithful, logarithm = TRUE, parameters = model$parameters)[1:5,] data(cross) odd <- seq(1, nrow(cross), by = 2) oddBIC <- mclustBIC(cross[odd,-1]) oddModel <- mclustModel(cross[odd,-1], oddBIC) ## best parameter estimates names(oddModel) even <- odd + 1 densities <- cdens(modelName = oddModel$modelName, data = cross[even,-1], parameters = oddModel$parameters) cbind(class = cross[even,1], densities)[1:5,] } \keyword{cluster} mclust/man/unmap.Rd0000644000176200001440000000331013175055342013745 0ustar liggesusers\name{unmap} \alias{unmap} \title{ Indicator Variables given Classification } \description{ Converts a classification into a matrix of indicator variables. } \usage{ unmap(classification, groups=NULL, noise=NULL, \dots) } \arguments{ \item{classification}{ A numeric or character vector. Typically the distinct entries of this vector would represent a classification of observations in a data set. } \item{groups}{ A numeric or character vector indicating the groups from which \code{classification} is drawn. If not supplied, the default is to assumed to be the unique entries of classification. } \item{noise}{ A single numeric or character value used to indicate the value of \code{groups} corresponding to noise. } \item{\dots }{ Catches unused arguments in indirect or list calls via \code{do.call}. } } \value{ An \emph{n} by \emph{m} matrix of \emph{(0,1)} indicator variables, where \emph{n} is the length of \code{classification} and \emph{m} is the number of unique values or symbols in \code{classification}. Columns are labeled by the unique values in \code{classification}, and the \code{[i,j]}th entry is \emph{1} if \code{classification[i]} is the \emph{j}th unique value or symbol in sorted order \code{classification}. If a \code{noise} value of symbol is designated, the corresponding indicator variables are relocated to the last column of the matrix. } \seealso{ \code{\link{map}}, \code{\link{estep}}, \code{\link{me}} } \examples{ z <- unmap(iris[,5]) z[1:5, ] emEst <- me(modelName = "VVV", data = iris[,-5], z = z) emEst$z[1:5,] map(emEst$z) } \keyword{cluster} % docclass is function % Converted by Sd2Rd version 1.21. mclust/man/mclustModel.Rd0000644000176200001440000000667513475242100015127 0ustar liggesusers\name{mclustModel} \alias{mclustModel} \title{ Best model based on BIC } \description{ Determines the best model from clustering via \code{mclustBIC} for a given set of model parameterizations and numbers of components. } \usage{ mclustModel(data, BICvalues, G, modelNames, \dots) } \arguments{ \item{data}{ The matrix or vector of observations used to generate `object'. } \item{BICvalues}{ An \code{'mclustBIC'} object, which is the result of applying \code{mclustBIC} to \code{data}. } \item{G}{ A vector of integers giving the numbers of mixture components (clusters) from which the best model according to BIC will be selected (\code{as.character(G)} must be a subset of the row names of \code{BICvalues}). The default is to select the best model for all numbers of mixture components used to obtain \code{BICvalues}. 
}
\item{modelNames}{
A vector of character strings giving the model parameterizations from
which the best model according to BIC will be selected
(\code{as.character(modelNames)} must be a subset of the column names of
\code{BICvalues}).
The default is to select the best model for all parameterizations used to
obtain \code{BICvalues}.
}
\item{\dots}{
Not used. For generic/method consistency.
}
}
\value{
A list giving the optimal (according to BIC) parameters,
conditional probabilities \code{z}, and log-likelihood,
together with the associated classification and its uncertainty.
The details of the output components are as follows:
\item{modelName}{
A character string indicating the model. The help file for
\code{\link{mclustModelNames}} describes the available models.
}
\item{n}{
The number of observations in the data.
}
\item{d}{
The dimension of the data.
}
\item{G}{
The number of components in the Gaussian mixture model corresponding to
the optimal BIC.
}
\item{bic}{
The optimal BIC value.
}
\item{loglik}{
The log-likelihood corresponding to the optimal BIC.
}
\item{parameters}{
A list with the following components:
\describe{
\item{\code{pro}}{
A vector whose \emph{k}th component is the mixing proportion for the
\emph{k}th component of the mixture model.
If missing, equal proportions are assumed.
}
\item{\code{mean}}{
The mean for each component.
If there is more than one component, this is a matrix whose kth column is
the mean of the \emph{k}th component of the mixture model.
}
\item{\code{variance}}{
A list of variance parameters for the model.
The components of this list depend on the model specification.
See the help file for \code{\link{mclustVariance}} for details.
}
\item{\code{Vinv}}{
The estimate of the reciprocal hypervolume of the data region used in the
computation when the input indicates the addition of a noise component to
the model.
}
}
}
\item{z}{
A matrix whose \emph{[i,k]}th entry is the probability that observation
\emph{i} in the test data belongs to the \emph{k}th class.
}
}
\seealso{
\code{\link{mclustBIC}}
}
\examples{
irisBIC <- mclustBIC(iris[,-5])
mclustModel(iris[,-5], irisBIC)
mclustModel(iris[,-5], irisBIC, G = 1:6, modelNames = c("VII", "VVI", "VVV"))
}
\keyword{cluster}
% docclass is function
mclust/man/hcE.Rd0000644000176200001440000000606113175052531013327 0ustar liggesusers\name{hcE}
\alias{hcE}
\alias{hcV}
\alias{hcEII}
\alias{hcVII}
\alias{hcEEE}
\alias{hcVVV}
\title{Model-based Hierarchical Clustering}
\description{
Agglomerative hierarchical clustering based on maximum likelihood for a
Gaussian mixture model parameterized by eigenvalue decomposition.
}
\usage{
hcE(data, partition, minclus=1, \dots)
hcV(data, partition, minclus = 1, alpha = 1, \dots)
hcEII(data, partition, minclus = 1, \dots)
hcVII(data, partition, minclus = 1, alpha = 1, \dots)
hcEEE(data, partition, minclus = 1, \dots)
hcVVV(data, partition, minclus = 1, alpha = 1, beta = 1, \dots)
}
\arguments{
\item{data}{
A numeric vector, matrix, or data frame of observations.
Categorical variables are not allowed.
If a matrix or data frame, rows correspond to observations and columns
correspond to variables.
}
\item{partition}{
A numeric or character vector representing a partition of observations
(rows) of \code{data}. If provided, group merges will start with this
partition. Otherwise, each observation is assumed to be in a cluster by
itself at the start of agglomeration.
}
\item{minclus}{
A number indicating the number of clusters at which to stop the
agglomeration.
The default is to stop when all observations have been merged into a
single cluster.
}
\item{alpha, beta}{
Additional tuning parameters needed for initialization in some models.
For details, see Fraley 1998. The defaults provided are usually adequate.
}
\item{\dots}{
Catch unused arguments from a \code{do.call} call.
}
}
\value{
A numeric two-column matrix in which the \emph{i}th row gives the minimum
index for observations in each of the two clusters merged at the
\emph{i}th stage of agglomerative hierarchical clustering.
}
\details{
Most models have memory usage of the order of the square of the number of
groups in the initial partition for fast execution.
Some models, such as equal variance or \code{"EEE"},
do not admit a fast algorithm under the usual agglomerative hierarchical
clustering paradigm. These use less memory but are much slower to execute.
}
\references{
J. D. Banfield and A. E. Raftery (1993).
Model-based Gaussian and non-Gaussian Clustering.
\emph{Biometrics 49:803-821}.

C. Fraley (1998).
Algorithms for model-based Gaussian hierarchical clustering.
\emph{SIAM Journal on Scientific Computing 20:270-281}.

C. Fraley and A. E. Raftery (2002).
Model-based clustering, discriminant analysis, and density estimation.
\emph{Journal of the American Statistical Association 97:611-631}.
}
\seealso{
\code{\link{hc}},
\code{\link{hclass}},
\code{\link{randomPairs}}
}
\examples{
hcTree <- hcEII(data = iris[,-5])
cl <- hclass(hcTree,c(2,3))

\dontrun{
par(pty = "s", mfrow = c(1,1))
clPairs(iris[,-5],cl=cl[,"2"])
clPairs(iris[,-5],cl=cl[,"3"])

par(mfrow = c(1,2))
dimens <- c(1,2)
coordProj(iris[,-5], classification=cl[,"2"], dimens=dimens)
coordProj(iris[,-5], classification=cl[,"3"], dimens=dimens)
}
}
\keyword{cluster}
% docclass is function
% Converted by Sd2Rd version 1.21.
mclust/man/hc.Rd0000644000176200001440000001023613477632451013231 0ustar liggesusers\name{hc}
\alias{hc}
\alias{print.hc}
\alias{plot.hc}
\alias{as.dendrogram.hc}
\alias{as.hclust.hc}
\title{Model-based Agglomerative Hierarchical Clustering}
\description{
Agglomerative hierarchical clustering based on maximum likelihood criteria
for Gaussian mixture models parameterized by eigenvalue decomposition.
}
\usage{
hc(data, modelName = mclust.options("hcModelName"),
   use = mclust.options("hcUse"), \dots)

\method{plot}{hc}(x, \dots)

\method{as.dendrogram}{hc}(object, \dots)

\method{as.hclust}{hc}(x, \dots)
}
\arguments{
\item{data}{
A numeric vector, matrix, or data frame of observations.
Categorical variables are not allowed.
If a matrix or data frame, rows correspond to observations (\eqn{n}) and
columns correspond to variables (\eqn{d}).
}
\item{modelName}{
A character string indicating the model to be used.\cr
Possible models are:
\describe{
\item{\code{"E"}}{equal variance (one-dimensional)}
\item{\code{"V"}}{variable variance (one-dimensional)}
\item{\code{"EII"}}{spherical, equal volume}
\item{\code{"VII"}}{spherical, unequal volume}
\item{\code{"EEE"}}{ellipsoidal, equal volume, shape, and orientation}
\item{\code{"VVV"}}{ellipsoidal, varying volume, shape, and orientation.}
}
By default the model provided by \code{mclust.options("hcModelName")} is
used. See \code{\link{mclust.options}}.
}
\item{use}{
A string or a vector of character strings specifying the type of input
variables/data transformation to be used for model-based hierarchical
clustering.\cr
By default the method specified in \code{mclust.options("hcUse")} is used.
See \code{\link{mclust.options}}.
} \item{\dots}{ Arguments for the method-specific \code{hc} functions. See for example \code{\link{hcE}}. } \item{object, x}{ An object of class \code{'hc'} resulting from a call to \code{hc()}. } } \value{ The function \code{hc()} returns a numeric two-column matrix in which the \emph{i}th row gives the minimum index for observations in each of the two clusters merged at the \emph{i}th stage of agglomerative hierarchical clustering. Several other informations are also returned as attributes. The plotting function \code{plot.hc()} draws a dendrogram by first converting the input object from class \code{'hc'} to class \code{'dendrogram'} and then plot the transformed object using \code{\link{plot.dendrogram}}. The functions \code{as.dendrogram.hc()} and \code{as.hclust.hc()} are used to convert the input object from class \code{'hc'} to class, respectively, \code{'dendrogram'} and \code{'hclust'}. } \details{ Most models have memory usage of the order of the square of the number groups in the initial partition for fast execution. Some models, such as equal variance or \code{"EEE"}, do not admit a fast algorithm under the usual agglomerative hierarchical clustering paradigm. These use less memory but are much slower to execute. } \note{ If \code{modelName = "E"} (univariate with equal variances) or \code{modelName = "EII"} (multivariate with equal spherical covariances), then the method is equivalent to Ward's method for hierarchical clustering. } \references{ J. D. Banfield and A. E. Raftery (1993). Model-based Gaussian and non-Gaussian Clustering. \emph{Biometrics 49:803-821}. C. Fraley (1998). Algorithms for model-based Gaussian hierarchical clustering. \emph{SIAM Journal on Scientific Computing 20:270-281}. C. Fraley and A. E. Raftery (2002). Model-based clustering, discriminant analysis, and density estimation. \emph{Journal of the American Statistical Association 97:611-631}. } \seealso{ \code{\link{hcE}},..., \code{\link{hcVVV}}, \code{\link{hclass}}, \code{\link{mclust.options}} } \examples{ hcTree <- hc(modelName = "VVV", data = iris[,-5]) cl <- hclass(hcTree,c(2,3)) \dontrun{ par(pty = "s", mfrow = c(1,1)) clPairs(iris[,-5],cl=cl[,"2"]) clPairs(iris[,-5],cl=cl[,"3"]) par(mfrow = c(1,2)) dimens <- c(1,2) coordProj(iris[,-5], dimens = dimens, classification=cl[,"2"]) coordProj(iris[,-5], dimens = dimens, classification=cl[,"3"]) } } \keyword{cluster} mclust/man/coordProj.Rd0000644000176200001440000001270213475242100014564 0ustar liggesusers\name{coordProj} \alias{coordProj} \title{ Coordinate projections of multidimensional data modeled by an MVN mixture. } \description{ Plots coordinate projections given multidimensional data and parameters of an MVN mixture model for the data. } \usage{ coordProj(data, dimens = c(1,2), parameters = NULL, z = NULL, classification = NULL, truth = NULL, uncertainty = NULL, what = c("classification", "error", "uncertainty"), addEllipses = TRUE, fillEllipses = mclust.options("fillEllipses"), symbols = NULL, colors = NULL, scale = FALSE, xlim = NULL, ylim = NULL, CEX = 1, PCH = ".", main = FALSE, \dots) } \arguments{ \item{data}{ A numeric matrix or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables. } \item{dimens}{ A vector of length 2 giving the integer dimensions of the desired coordinate projections. The default is \code{c(1,2)}, in which the first dimension is plotted against the second. 
}
\item{parameters}{
A named list giving the parameters of an \emph{MCLUST} model,
used to produce superimposing ellipses on the plot.
The relevant components are as follows:
\describe{
\item{\code{mean}}{
The mean for each component.
If there is more than one component, this is a matrix whose kth column is
the mean of the \emph{k}th component of the mixture model.
}
\item{\code{variance}}{
A list of variance parameters for the model.
The components of this list depend on the model specification.
See the help file for \code{\link{mclustVariance}} for details.
}
}
}
\item{z}{
A matrix in which the \code{[i,k]}th entry gives the probability of
observation \emph{i} belonging to the \emph{k}th class.
Used to compute \code{classification} and \code{uncertainty} if those
arguments aren't available.
}
\item{classification}{
A numeric or character vector representing a classification of
observations (rows) of \code{data}. If present, argument \code{z} will be
ignored.
}
\item{truth}{
A numeric or character vector giving a known classification of each data
point. If \code{classification} or \code{z} is also present, this is used
for displaying classification errors.
}
\item{uncertainty}{
A numeric vector of values in \emph{(0,1)} giving the uncertainty of each
data point. If present, argument \code{z} will be ignored.
}
\item{what}{
Choose from one of the following three options: \code{"classification"}
(default), \code{"error"}, \code{"uncertainty"}.
}
\item{addEllipses}{
A logical indicating whether or not to add ellipses with axes
corresponding to the within-cluster covariances in case of
\code{"classification"} or \code{"uncertainty"} plots.
}
\item{fillEllipses}{
A logical specifying whether or not to fill ellipses with transparent
colors when \code{addEllipses = TRUE}.
}
\item{symbols}{
Either an integer or character vector assigning a plotting symbol to each
unique class in \code{classification}. Elements in \code{symbols}
correspond to classes in order of appearance in the sequence of
observations (the order used by the function \code{unique}).
The default is given by \code{mclust.options("classPlotSymbols")}.
}
\item{colors}{
Either an integer or character vector assigning a color to each unique
class in \code{classification}. Elements in \code{colors} correspond to
classes in order of appearance in the sequence of observations (the order
used by the function \code{unique}).
The default is given by \code{mclust.options("classPlotColors")}.
}
\item{scale}{
A logical variable indicating whether or not the two chosen dimensions
should be plotted on the same scale, and thus preserve the shape of the
distribution. Default: \code{scale=FALSE}
}
\item{xlim, ylim}{
Arguments specifying bounds for the ordinate, abscissa of the plot.
This may be useful when comparing plots.
}
\item{CEX}{
An argument specifying the size of the plotting symbols.
The default value is 1.
}
\item{PCH}{
An argument specifying the symbol to be used when a classification has not
been specified for the data. The default value is a small dot ".".
}
\item{main}{
A logical variable or \code{NULL} indicating whether or not to add a title
to the plot identifying the dimensions used.
}
\item{\dots}{
Other graphics parameters.
}
}
\value{
A plot showing a two-dimensional coordinate projection of the data,
together with the location of the mixture components, classification,
uncertainty, and/or classification errors.
} \seealso{ \code{\link{clPairs}}, \code{\link{randProj}}, \code{\link{mclust2Dplot}}, \code{\link{mclust.options}} } \examples{ \dontrun{ est <- meVVV(iris[,-5], unmap(iris[,5])) par(pty = "s", mfrow = c(1,1)) coordProj(iris[,-5], dimens=c(2,3), parameters = est$parameters, z = est$z, what = "classification", main = TRUE) coordProj(iris[,-5], dimens=c(2,3), parameters = est$parameters, z = est$z, truth = iris[,5], what = "error", main = TRUE) coordProj(iris[,-5], dimens=c(2,3), parameters = est$parameters, z = est$z, what = "uncertainty", main = TRUE) } } \keyword{cluster} mclust/man/cdfMclust.Rd0000644000176200001440000000405213175050523014551 0ustar liggesusers\name{cdfMclust} \alias{cdfMclust} \alias{quantileMclust} \title{ Cumulative Distribution and Quantiles for a univariate Gaussian mixture distribution } \description{ Compute the cumulative density function (cdf) or quantiles from an estimated one-dimensional Gaussian mixture fitted using \code{\link{densityMclust}}.} \usage{ cdfMclust(object, data, ngrid = 100, \dots) quantileMclust(object, p, ...) } \arguments{ \item{object}{a \code{densityMclust} model object.} \item{data}{a numeric vector of evaluation points.} \item{ngrid}{the number of points in a regular grid to be used as evaluation points if no \code{data} are provided.} \item{p}{a numeric vector of probabilities.} \item{\dots}{further arguments passed to or from other methods.} } \details{The cdf is evaluated at points given by the optional argument \code{data}. If not provided, a regular grid of length \code{ngrid} for the evaluation points is used. The quantiles are computed using interpolating splines on an adaptive finer grid. } \value{ \code{cdfMclust} returns a list of \code{x} and \code{y} values providing, respectively, the evaluation points and the estimated cdf. \code{quantileMclust} returns a vector of quantiles. } \author{Luca Scrucca} \seealso{ \code{\link{densityMclust}}, \code{\link{plot.densityMclust}}. } \examples{ x <- c(rnorm(100), rnorm(100, 3, 2)) dens <- densityMclust(x) summary(dens, parameters = TRUE) cdf <- cdfMclust(dens) str(cdf) q <- quantileMclust(dens, p = c(0.01, 0.1, 0.5, 0.9, 0.99)) cbind(quantile = q, cdf = cdfMclust(dens, q)$y) plot(cdf, type = "l", xlab = "x", ylab = "CDF") points(q, cdfMclust(dens, q)$y, pch = 20, col = "red3") par(mfrow = c(2,2)) dens.waiting <- densityMclust(faithful$waiting) plot(dens.waiting) plot(cdfMclust(dens.waiting), type = "l", xlab = dens.waiting$varname, ylab = "CDF") dens.eruptions <- densityMclust(faithful$eruptions) plot(dens.eruptions) plot(cdfMclust(dens.eruptions), type = "l", xlab = dens.eruptions$varname, ylab = "CDF") par(mfrow = c(1,1)) } \keyword{cluster} \keyword{dplot} mclust/man/densityMclust.diagnostic.Rd0000644000176200001440000000453513225107560017625 0ustar liggesusers\name{densityMclust.diagnostic} \alias{densityMclust.diagnostic} \title{Diagnostic plots for \code{mclustDensity} estimation} \description{ Diagnostic plots for density estimation. Only available for the one-dimensional case. 
} \usage{ densityMclust.diagnostic(object, type = c("cdf", "qq"), col = c("black", "black"), lwd = c(2,1), lty = c(1,1), legend = TRUE, grid = TRUE, \dots) } \arguments{ \item{object}{An object of class \code{'mclustDensity'} obtained from a call to \code{\link{densityMclust}} function.} \item{type}{The type of graph requested: \describe{ \item{\code{"cdf"} =}{a plot of the estimated CDF versus the empirical distribution function.} \item{\code{"qq"} =}{a Q-Q plot of sample quantiles versus the quantiles obtained from the inverse of the estimated cdf.} } } \item{col}{A pair of values for the color to be used for plotting, respectively, the estimated CDF and the empirical cdf.} \item{lwd}{A pair of values for the line width to be used for plotting, respectively, the estimated CDF and the empirical cdf.} \item{lty}{A pair of values for the line type to be used for plotting, respectively, the estimated CDF and the empirical cdf.} \item{legend}{A logical indicating if a legend must be added to the plot of fitted CDF vs the empirical CDF.} \item{grid}{A logical indicating if a \code{\link{grid}} should be added to the plot.} \item{\dots}{Additional arguments.} } \details{ The two diagnostic plots for density estimation in the one-dimensional case are discussed in Loader (1999, pp- 87-90). } % \value{} \references{ Loader C. (1999), Local Regression and Likelihood. New York, Springer. C. Fraley, A. E. Raftery, T. B. Murphy and L. Scrucca (2012). mclust Version 4 for R: Normal Mixture Modeling for Model-Based Clustering, Classification, and Density Estimation. Technical Report No. 597, Department of Statistics, University of Washington. } \author{Luca Scrucca} \seealso{ \code{\link{densityMclust}}, \code{\link{plot.densityMclust}}. } \examples{ \dontrun{ x <- faithful$waiting dens <- densityMclust(x) plot(dens, x, what = "diagnostic") # or densityMclust.diagnostic(dens, type = "cdf") densityMclust.diagnostic(dens, type = "qq") } } \keyword{cluster} \keyword{dplot} mclust/man/cvMclustDA.Rd0000644000176200001440000000523113473260044014634 0ustar liggesusers\name{cvMclustDA} \alias{cvMclustDA} \title{MclustDA cross-validation} \description{ K-fold cross-validation for discriminant analysis based on Gaussian finite mixture modeling. } \usage{ cvMclustDA(object, nfold = 10, metric = c("error", "brier"), prop = object$prop, verbose = interactive(), \dots) } \arguments{ \item{object}{ An object of class \code{'MclustDA'} resulting from a call to \code{\link{MclustDA}}. } \item{nfold}{ An integer specifying the number of folds. } \item{metric}{ A character string specifying the statistic to be used in the cross-validation resampling process. Possible values are \code{"error"} for the classification error, and \code{"brier"} for the Brier score. } \item{prop}{ A vector of class prior probabilities, which if not provided default to the class proportions in the training data. } \item{verbose}{ A logical controlling if a text progress bar is displayed during the cross-validation procedure. By default is \code{TRUE} if the session is interactive, and \code{FALSE} otherwise. 
}
\item{\dots }{Further arguments passed to or from other methods.}
}
\value{
The function returns a list with the following components:
\item{classification}{a factor of cross-validated class labels.}
\item{z}{a matrix containing the cross-validated probabilities for class
assignment.}
\item{error}{the cross-validation classification error if
\code{metric = "error"}, \code{NA} otherwise.}
\item{brier}{the cross-validation Brier score if \code{metric = "brier"},
\code{NA} otherwise.}
\item{se}{the standard error of the cross-validated statistic.}
}
%\details{}
\author{Luca Scrucca}
\seealso{
\code{\link{summary.MclustDA}},
\code{\link{plot.MclustDA}},
\code{\link{predict.MclustDA}},
\code{\link{classError}},
\code{\link{BrierScore}}
}
\examples{
\dontrun{
X <- iris[,-5]
Class <- iris[,5]

# common EEE covariance structure (which is essentially equivalent to
# linear discriminant analysis)
irisMclustDA <- MclustDA(X, Class, modelType = "EDDA", modelNames = "EEE")

cv <- cvMclustDA(irisMclustDA)                         # default 10-fold CV
cv[c("error", "se")]

cv <- cvMclustDA(irisMclustDA, nfold = length(Class))  # LOO-CV
cv[c("error", "se")]

cv <- cvMclustDA(irisMclustDA, metric = "brier")  # 10-fold CV with Brier score metric
cv[c("brier", "se")]

# general covariance structure selected by BIC
irisMclustDA <- MclustDA(X, Class)

cv <- cvMclustDA(irisMclustDA)                    # default 10-fold CV
cv[c("error", "se")]

cv <- cvMclustDA(irisMclustDA, metric = "brier")  # 10-fold CV with Brier score metric
cv[c("brier", "se")]
}
}
\keyword{multivariate}
mclust/man/combiPlot.Rd0000644000176200001440000000474113475242100014557 0ustar liggesusers\name{combiPlot}
\alias{combiPlot}
\title{Plot Classifications Corresponding to Successive Combined Solutions}
\description{
Plot classifications corresponding to successive combined solutions.
}
\usage{
combiPlot(data, z, combiM, \dots)
}
\arguments{
\item{data}{
The data.
}
\item{z}{
A matrix whose [i,k]th entry is the probability that observation i in the
data belongs to the kth class, for the initial solution (i.e. before any
combining). Typically, the one returned by \code{Mclust}/BIC.
}
\item{combiM}{
A "combining matrix" (as provided by \code{\link{clustCombi}}), i.e. a
matrix whose kth row contains only zeros, except in the columns
corresponding to the labels of the classes in the initial solution to be
merged together to get the combined solution.
}
\item{\dots}{
Other arguments to be passed to the \code{\link{Mclust}} plot functions.
}
}
\value{
Plot the classifications obtained by MAP from the matrix
\code{t(combiM \%*\% t(z))}, which is the matrix whose [i,k]th entry is the
probability that observation i in the data belongs to the kth class,
according to the combined solution obtained by merging (according to
\code{combiM}) the initial solution described by \code{z}.
}
\references{
J.-P. Baudry, A. E. Raftery, G. Celeux, K. Lo and R. Gottardo (2010).
Combining mixture components for clustering.
\emph{Journal of Computational and Graphical Statistics, 19(2):332-353.}
}
\author{
J.-P. Baudry, A. E. Raftery, L.
Scrucca
}
\seealso{
\code{\link{clustCombi}},
\code{\link{combMat}}
}
\examples{
\dontrun{
data(Baudry_etal_2010_JCGS_examples)

MclustOutput <- Mclust(ex4.1)
MclustOutput$G # Mclust/BIC selected 6 classes

par(mfrow=c(2,2))

combiM0 <- diag(6) # is the identity matrix
# no merging: plot the initial solution, given by z
combiPlot(ex4.1, MclustOutput$z, combiM0, cex = 3)
title("No combining")

combiM1 <- combMat(6, 1, 2) # let's merge classes labeled 1 and 2
combiM1
combiPlot(ex4.1, MclustOutput$z, combiM1)
title("Combine 1 and 2")

# let's merge classes labeled 1 and 2, and then components labeled (in this
# new 5-classes combined solution...) 1 and 2
combiM2 <- combMat(5, 1, 2) \%*\% combMat(6, 1, 2)
combiM2
combiPlot(ex4.1, MclustOutput$z, combiM2)
title("Combine 1, 2 and then 1 and 2 again")

plot(0,0,type="n", xlab = "", ylab = "", axes = FALSE)
legend("center", legend = 1:6,
       col = mclust.options("classPlotColors"),
       pch = mclust.options("classPlotSymbols"),
       title = "Class labels:")}
}
\keyword{cluster}
mclust/man/mclust1Dplot.Rd0000644000176200001440000001205413475242121015221 0ustar liggesusers\name{mclust1Dplot}
\alias{mclust1Dplot}
\title{Plot one-dimensional data modeled by an MVN mixture.}
\description{
Plot one-dimensional data given parameters of an MVN mixture model for the
data.
}
\usage{
mclust1Dplot(data, parameters = NULL, z = NULL,
             classification = NULL, truth = NULL, uncertainty = NULL,
             what = c("classification", "density", "error", "uncertainty"),
             symbols = NULL, colors = NULL, ngrid = length(data),
             xlab = NULL, ylab = NULL, xlim = NULL, ylim = NULL,
             CEX = 1, main = FALSE, \dots)
}
\arguments{
\item{data}{
A numeric vector of observations. Categorical variables are not allowed.
}
\item{parameters}{
A named list giving the parameters of an \emph{MCLUST} model,
used to produce superimposing ellipses on the plot.
The relevant components are as follows:
\describe{
\item{\code{pro}}{
Mixing proportions for the components of the mixture.
There should be one more mixing proportion than the number of Gaussian
components if the mixture model includes a Poisson noise term.
}
\item{\code{mean}}{
The mean for each component.
If there is more than one component, this is a matrix whose kth column is
the mean of the \emph{k}th component of the mixture model.
}
\item{\code{variance}}{
A list of variance parameters for the model.
The components of this list depend on the model specification.
See the help file for \code{\link{mclustVariance}} for details.
}
}
}
\item{z}{
A matrix in which the \code{[i,k]}th entry gives the probability of
observation \emph{i} belonging to the \emph{k}th class.
Used to compute \code{classification} and \code{uncertainty} if those
arguments aren't available.
}
\item{classification}{
A numeric or character vector representing a classification of
observations (rows) of \code{data}. If present, argument \code{z} will be
ignored.
}
\item{truth}{
A numeric or character vector giving a known classification of each data
point. If \code{classification} or \code{z} is also present, this is used
for displaying classification errors.
}
\item{uncertainty}{
A numeric vector of values in \emph{(0,1)} giving the uncertainty of each
data point. If present, argument \code{z} will be ignored.
}
\item{what}{
Choose from one of the following options: \code{"classification"}
(default), \code{"density"}, \code{"error"}, \code{"uncertainty"}.
}
\item{symbols}{
Either an integer or character vector assigning a plotting symbol to each
unique class in \code{classification}. Elements in \code{symbols}
correspond to classes in \code{classification} in order of appearance in
the observations (the order used by the function \code{unique}).
The default is to use a single plotting symbol \emph{|}.
Classes are delineated by showing them in separate lines above the whole
of the data.
}
\item{colors}{
Either an integer or character vector assigning a color to each unique
class in \code{classification}. Elements in \code{colors} correspond to
classes in order of appearance in the observations (the order used by the
function \code{unique}).
The default is given by \code{mclust.options("classPlotColors")}.
}
\item{ngrid}{
Number of grid points to use for density computation over the interval
spanned by the data. The default is the length of the data set.
}
\item{xlab, ylab}{
An argument specifying a label for the axes.
}
\item{xlim, ylim}{
An argument specifying bounds of the plot.
This may be useful when comparing plots.
}
\item{CEX}{
An argument specifying the size of the plotting symbols.
The default value is 1.
}
\item{main}{
A logical variable or \code{NULL} indicating whether or not to add a title
to the plot identifying the dimensions used.
}
\item{\dots}{
Other graphics parameters.
}
}
\value{
A plot showing location of the mixture components, classification,
uncertainty, density and/or classification errors.
Points in the different classes are shown in separate levels above the
whole of the data.
}
\seealso{
\code{\link{mclust2Dplot}},
\code{\link{clPairs}},
\code{\link{coordProj}}
}
\examples{
\dontrun{
n <- 250 ## create artificial data
set.seed(1)
y <- c(rnorm(n,-5), rnorm(n,0), rnorm(n,5))
yclass <- c(rep(1,n), rep(2,n), rep(3,n))

yModel <- Mclust(y)

mclust1Dplot(y, parameters = yModel$parameters, z = yModel$z,
             what = "classification")

mclust1Dplot(y, parameters = yModel$parameters, z = yModel$z,
             what = "error", truth = yclass)

mclust1Dplot(y, parameters = yModel$parameters, z = yModel$z,
             what = "density")

mclust1Dplot(y, z = yModel$z, parameters = yModel$parameters,
             what = "uncertainty")
}
}
\keyword{cluster}
mclust/man/clPairs.Rd0000644000176200001440000000660213475241702014231 0ustar liggesusers\name{clPairs}
\alias{clPairs}
\alias{clPairsLegend}
\title{Pairwise Scatter Plots showing Classification}
\description{
Creates a scatter plot for each pair of variables in given data.
Observations in different classes are represented by different colors and
symbols.
}
\usage{
clPairs(data, classification,
        symbols, colors, labels = dimnames(data)[[2]],
        cex.labels = 1.5, gap = 0.2, \dots)

clPairsLegend(x, y, class, col, pch, box = TRUE, \dots)
}
\arguments{
\item{data}{
A numeric vector, matrix, or data frame of observations. Categorical
variables are not allowed. If a matrix or data frame, rows correspond to
observations and columns correspond to variables.
}
\item{classification}{
A numeric or character vector representing a classification of
observations (rows) of \code{data}.
}
\item{symbols}{
Either an integer or character vector assigning a plotting symbol to each
unique class in \code{classification}. Elements in \code{symbols}
correspond to classes in order of appearance in the sequence of
observations (the order used by the function \code{unique}).
The default is given by \code{mclust.options("classPlotSymbols")}.
} \item{colors}{ Either an integer or character vector assigning a color to each unique class in \code{classification}. Elements in \code{colors} correspond to classes in order of appearance in the sequence of observations (the order used by the function \code{unique}). The default is given by \code{mclust.options("classPlotColors")}. } \item{labels}{ A vector of character strings for labelling the variables. The default is to use the column dimension names of \code{data}. } \item{cex.labels}{ An argument specifying the size of the text labels. } \item{gap}{ An argument specifying the distance between subplots (see \code{\link{pairs}}). } \item{x,y}{ The x and y co-ordinates with respect to a graphic device having plotting region coordinates \code{par("usr" = c(0,1,0,1))}. } \item{class}{ The class labels. } \item{box}{ A logical, if \code{TRUE} then a box is drawn around the current plot figure. } \item{col, pch}{ The colors and plotting symbols appearing in the legend. } \item{\dots}{ For a \code{clPairs} call may be additional arguments to be passed to \code{\link{pairs}}. For a \code{clPairsLegend} call may be additional arguments to be passed to \code{\link{legend}}. } } \details{ The function \code{clPairs()} draws scatter plots on the current graphics device for each combination of variables in \code{data}. Observations of different classifications are labeled with different symbols. The function \code{clPairsLegend()} can be used to add a legend. See examples below. } \value{ The function \code{clPairs()} invisibly returns a list with the following components: \item{class}{A character vector of class labels.} \item{col}{A vector of colors used for each class.} \item{pch}{A vector of plotting symbols used for each class.} } \seealso{ \code{\link{pairs}}, \code{\link{coordProj}}, \code{\link{mclust.options}} } \examples{ clPairs(iris[,1:4], cl = iris$Species) clp <- clPairs(iris[,1:4], cl = iris$Species, lower.panel = NULL) clPairsLegend(0.1, 0.4, class = clp$class, col = clp$col, pch = clp$pch, title = "Iris data") } \keyword{cluster} mclust/man/classPriorProbs.Rd0000644000176200001440000001050113470201364015747 0ustar liggesusers\name{classPriorProbs} \alias{classPriorProbs} % R CMD Rd2pdf classPriorProbs.Rd \title{Estimation of class prior probabilities by EM algorithm} \description{ A simple procedure to improve the estimation of class prior probabilities when the training data does not reflect the true a priori probabilities of the target classes. The EM algorithm used is described in Saerens et al (2002).} \usage{ classPriorProbs(object, newdata = object$data, itmax = 1e3, eps = sqrt(.Machine$double.eps)) } \arguments{ \item{object}{ an object of class \code{'MclustDA'} resulting from a call to \code{\link{MclustDA}}. } \item{newdata}{ a data frame or matrix giving the data. If missing the train data obtained from the call to \code{\link{MclustDA}} are used. } \item{itmax}{ an integer value specifying the maximal number of EM iterations. } \item{eps}{ a scalar specifying the tolerance associated with deciding when to terminate the EM iterations. } } \details{ The estimation procedure employes an EM algorithm as described in Saerens et al (2002). } \value{A vector of class prior estimates which can then be used in the \code{\link{predict.MclustDA}} to improve predictions.} \references{ Saerens, M., Latinne, P. and Decaestecker, C. (2002) Adjusting the outputs of a classifier to new a priori probabilities: a simple procedure, \emph{Neural computation}, 14 (1), 21--41. 
} \seealso{\code{\link{MclustDA}}, \code{\link{predict.MclustDA}}} \examples{ \dontrun{ # generate data from a mixture f(x) = 0.9 * N(0,1) + 0.1 * N(3,1) n <- 10000 mixpro <- c(0.9, 0.1) class <- factor(sample(0:1, size = n, prob = mixpro, replace = TRUE)) x <- ifelse(class == 1, rnorm(n, mean = 3, sd = 1), rnorm(n, mean = 0, sd = 1)) hist(x[class==0], breaks = 11, xlim = range(x), main = "", xlab = "x", col = adjustcolor("dodgerblue2", alpha.f = 0.5), border = "white") hist(x[class==1], breaks = 11, add = TRUE, col = adjustcolor("red3", alpha.f = 0.5), border = "white") box() # generate training data from a balanced case-control sample, i.e. # f(x) = 0.5 * N(0,1) + 0.5 * N(3,1) n_train <- 1000 class_train <- factor(sample(0:1, size = n_train, prob = c(0.5, 0.5), replace = TRUE)) x_train <- ifelse(class_train == 1, rnorm(n_train, mean = 3, sd = 1), rnorm(n_train, mean = 0, sd = 1)) hist(x_train[class_train==0], breaks = 11, xlim = range(x_train), main = "", xlab = "x", col = adjustcolor("dodgerblue2", alpha.f = 0.5), border = "white") hist(x_train[class_train==1], breaks = 11, add = TRUE, col = adjustcolor("red3", alpha.f = 0.5), border = "white") box() # fit a MclustDA model mod <- MclustDA(x_train, class_train) summary(mod, parameters = TRUE) # test set performance pred <- predict(mod, newdata = x) classError(pred$classification, class)$error BrierScore(pred$z, class) # compute performance over a grid of prior probs priorProp <- seq(0.01, 0.99, by = 0.01) CE <- BS <- rep(as.double(NA), length(priorProp)) for(i in seq(priorProp)) { pred <- predict(mod, newdata = x, prop = c(1-priorProp[i], priorProp[i])) CE[i] <- classError(pred$classification, class = class)$error BS[i] <- BrierScore(pred$z, class) } # estimate the optimal class prior probs (priorProbs <- classPriorProbs(mod, x)) pred <- predict(mod, newdata = x, prop = priorProbs) # compute performance at the estimated class prior probs classError(pred$classification, class = class)$error BrierScore(pred$z, class) matplot(priorProp, cbind(CE,BS), type = "l", lty = 1, lwd = 2, xlab = "Class prior probability", ylab = "", ylim = c(0,max(CE,BS)), panel.first = { abline(h = seq(0,1,by=0.05), col = "grey", lty = 3) abline(v = seq(0,1,by=0.05), col = "grey", lty = 3) }) abline(v = mod$prop[2], lty = 2) # training prop abline(v = mean(class==1), lty = 4) # test prop (usually unknown) abline(v = priorProbs[2], lty = 3, lwd = 2) # estimated prior probs legend("topleft", legend = c("ClassError", "BrierScore"), col = 1:2, lty = 1, lwd = 2, inset = 0.02) # Summary of results: priorProp[which.min(CE)] # best prior of class 1 according to classification error priorProp[which.min(BS)] # best prior of class 1 according to Brier score priorProbs # optimal estimated class prior probabilities } } \keyword{classif} mclust/man/banknote.Rd0000644000176200001440000000141012501077123014416 0ustar liggesusers\name{banknote} \alias{banknote} \docType{data} \title{Swiss banknotes data} \description{ The data set contains six measurements made on 100 genuine and 100 counterfeit old-Swiss 1000-franc bank notes.} \usage{data(banknote)} \format{A data frame with the following variables: \describe{ \item{Status}{the status of the banknote: \code{genuine} or \code{counterfeit}} \item{Length}{Length of bill (mm)} \item{Left}{Width of left edge (mm)} \item{Right}{Width of right edge (mm)} \item{Bottom}{Bottom margin width (mm)} \item{Top}{Top margin width (mm)} \item{Diagonal}{Length of diagonal (mm)} } } \source{Flury, B. and Riedwyl, H. (1988). 
\emph{Multivariate Statistics: A practical approach.} London: Chapman & Hall, Tables 1.1 and 1.2, pp. 5-8.} \keyword{datasets} mclust/man/plot.mclustICL.Rd0000644000176200001440000000126113175055043015442 0ustar liggesusers\name{plot.mclustICL} \alias{plot.mclustICL} \title{ICL Plot for Model-Based Clustering} \description{ Plots the ICL values returned by the \code{\link{mclustICL}} function. } \usage{ \method{plot}{mclustICL}(x, ylab = "ICL", \dots) } \arguments{ \item{x}{ Output from \code{\link{mclustICL}}. } \item{ylab}{ Label for the vertical axis of the plot. } \item{\dots}{ Further arguments passed to the \code{\link{plot.mclustBIC}} function. } } \value{ A plot of the ICL values. } \seealso{ \code{\link{mclustICL}} } \examples{ \dontrun{ data(faithful) faithful.ICL = mclustICL(faithful) plot(faithful.ICL) } } \keyword{cluster} % docclass is function mclust/man/covw.Rd0000644000176200001440000000210513465001006013572 0ustar liggesusers\name{covw} \alias{covw} \title{Weighted means, covariance and scattering matrices conditioning on a weighted matrix} \description{ Compute efficiently (via Fortran code) the means, covariance and scattering matrices conditioning on a weighted or indicator matrix } \usage{ covw(X, Z, normalize = TRUE) } \arguments{ \item{X}{A \eqn{(n x p)} data matrix, with \eqn{n} observations on \eqn{p} variables.} \item{Z}{A \eqn{(n x G)} matrix of weights, with \eqn{G} number of groups.} \item{normalize}{A logical indicating if rows of \code{Z} should be normalized to sum to one.} } \value{A list with the following components: \item{mean}{A \eqn{(p x G)} matrix of weighted means.} \item{S}{A \eqn{(p x p x G)} array of weighted covariance matrices.} \item{W}{A \eqn{(p x p x G)} array of weighted scattering matrices.} } %\seealso{} \author{M. Fop and L. 
Scrucca} \examples{ # Z as an indicator matrix X <- iris[,1:4] Z <- unmap(iris$Species) str(covw(X, Z)) # Z as a matrix of weights mod <- Mclust(X, G = 3, modelNames = "VVV") str(covw(X, mod$z)) } \keyword{multivariate} mclust/man/summary.MclustDR.Rd0000644000176200001440000000214713175055251016024 0ustar liggesusers\name{summary.MclustDR} \alias{summary.MclustDR} \alias{print.summary.MclustDR} \title{Summarizing dimension reduction method for model-based clustering and classification} \description{Summary method for class \code{"MclustDR"}.} \usage{ \method{summary}{MclustDR}(object, numdir, std = FALSE, \dots) \method{print}{summary.MclustDR}(x, digits = max(5, getOption("digits") - 3), \dots) } \arguments{ \item{object}{An object of class \code{'MclustDR'} resulting from a call to \code{\link{MclustDR}}.} \item{x}{An object of class \code{'summary.MclustDR'}, usually, a result of a call to \code{summary.MclustDR}.} \item{numdir}{An integer providing the number of basis directions to be printed.} \item{std}{if \code{TRUE} the coefficients basis are scaled such that all predictors have unit standard deviation.} \item{digits}{The number of significant digits to use when printing.} \item{\dots}{Further arguments passed to or from other methods.} } %\details{} %\value{} \author{Luca Scrucca} %\note{} \seealso{ \code{\link{MclustDR}}, \code{\link{plot.MclustDR}} } %\examples{} %\keyword{} mclust/man/predict.MclustDA.Rd0000644000176200001440000000305413467322461015742 0ustar liggesusers\name{predict.MclustDA} \alias{predict.MclustDA} \title{Classify multivariate observations by Gaussian finite mixture modeling} \description{Classify multivariate observations based on Gaussian finite mixture models estimated by \code{\link{MclustDA}}.} \usage{ \method{predict}{MclustDA}(object, newdata, prop = object$prop, \dots) } \arguments{ \item{object}{an object of class \code{'MclustDA'} resulting from a call to \code{\link{MclustDA}}.} \item{newdata}{a data frame or matrix giving the data. If missing the train data obtained from the call to \code{\link{MclustDA}} are classified.} \item{prop}{the class proportions or prior class probabilities to belong to each class; by default, this is set at the class proportions in the training data.} \item{\dots}{further arguments passed to or from other methods.} } % \details{} \value{ Returns a list of with the following components: \item{classification}{a factor of predicted class labels for \code{newdata}.} \item{z}{a matrix whose \emph{[i,k]}th entry is the probability that observation \emph{i} in \code{newdata} belongs to the \emph{k}th class.} } \author{Luca Scrucca} % \note{} \seealso{\code{\link{MclustDA}}.} \examples{ \dontrun{ odd <- seq(from = 1, to = nrow(iris), by = 2) even <- odd + 1 X.train <- iris[odd,-5] Class.train <- iris[odd,5] X.test <- iris[even,-5] Class.test <- iris[even,5] irisMclustDA <- MclustDA(X.train, Class.train) predTrain <- predict(irisMclustDA) predTrain predTest <- predict(irisMclustDA, X.test) predTest } } \keyword{multivariate} mclust/man/thyroid.Rd0000644000176200001440000000331512504554764014323 0ustar liggesusers\name{thyroid} \alias{thyroid} \docType{data} \title{Thyroid gland data} \description{ Data on five laboratory tests administered to a sample of 215 patients. 
The tests are used to predict whether a patient's thyroid can be classified as euthyroidism (normal thyroid gland function), hypothyroidism (underactive thyroid not producing enough thyroid hormone) or hyperthyroidism (overactive thyroid producing and secreting excessive amounts of the free thyroid hormones T3 and/or thyroxine T4). Diagnosis of thyroid operation was based on a complete medical record, including anamnesis, scan, etc..} \usage{data(thyroid)} \format{A data frame with the following variables: \describe{ \item{Diagnosis}{Diagnosis of thyroid operation: \code{Hypo}, \code{Normal}, and \code{Hyper}.} \item{RT3U}{T3-resin uptake test (percentage).} \item{T4}{Total Serum thyroxin as measured by the isotopic displacement method.} \item{T3}{Total serum triiodothyronine as measured by radioimmuno assay.} \item{TSH}{Basal thyroid-stimulating hormone (TSH) as measured by radioimmuno assay.} \item{DTSH}{Maximal absolute difference of TSH value after injection of 200 micro grams of thyrotropin-releasing hormone as compared to the basal value.} } } \source{UCI \url{ftp://ftp.ics.uci.edu/pub/machine-learning-databases/thyroid-disease/}} \references{ Coomans, D., Broeckaert, M. Jonckheer M. and Massart D.L. (1983) Comparison of Multivariate Discriminant Techniques for Clinical Data - Application to the Thyroid Functional State, \emph{Meth. Inform. Med.} 22, pp. 93-101. Coomans, D. and I. Broeckaert (1986) \emph{Potential Pattern Recognition in Cemical and Medical Decision Making}, Research Studies Press, Letchworth, England. } \keyword{datasets} mclust/man/Mclust.Rd0000644000176200001440000002046213376460657014117 0ustar liggesusers\name{Mclust} \alias{Mclust} \alias{print.Mclust} \title{Model-Based Clustering} \description{ Model-based clustering based on parameterized finite Gaussian mixture models. Models are estimated by EM algorithm initialized by hierarchical model-based agglomerative clustering. The optimal model is then selected according to BIC. } \usage{ Mclust(data, G = NULL, modelNames = NULL, prior = NULL, control = emControl(), initialization = NULL, warn = mclust.options("warn"), x = NULL, verbose = interactive(), \dots) } \arguments{ \item{data}{ A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations (\eqn{n}) and columns correspond to variables (\eqn{d}). } \item{G}{ An integer vector specifying the numbers of mixture components (clusters) for which the BIC is to be calculated. The default is \code{G=1:9}. } \item{modelNames}{ A vector of character strings indicating the models to be fitted in the EM phase of clustering. The default is: \itemize{ \item for univariate data (\eqn{d = 1}): \code{c("E", "V")} \item for multivariate data (\eqn{n > d}): all the models available in \code{mclust.options("emModelNames")} \item for multivariate data (\eqn{n <= d}): the spherical and diagonal models, i.e. \code{c("EII", "VII", "EEI", "EVI", "VEI", "VVI")} } The help file for \code{\link{mclustModelNames}} describes the available models. } \item{prior}{ The default assumes no prior, but this argument allows specification of a conjugate prior on the means and variances through the function \code{\link{priorControl}}. \cr Note that, as described in \code{\link{defaultPrior}}, in the multivariate case only 10 out of 14 models may be used in conjunction with a prior, i.e. those available in \emph{MCLUST} up to version 4.4. } \item{control}{ A list of control parameters for EM. 
The defaults are set by the call \code{emControl()}. } \item{initialization}{ A list containing zero or more of the following components: \describe{ \item{\code{hcPairs}}{ A matrix of merge pairs for hierarchical clustering such as produced by function \code{\link{hc}}. \cr For multivariate data, the default is to compute a hierarchical agglomerative clustering tree by applying function \code{\link{hc}} with model specified by \code{mclust.options("hcModelName")}, and data transformation set by \code{mclust.options("hcUse")}.\cr All the input or a subset as indicated by the \code{subset} argument is used for initial clustering.\cr The hierarchical clustering results are then used to start the EM algorithm from a given partition.\cr For univariate data, the default is to use quantiles to start the EM algorithm. However, hierarchical clustering could also be used by calling \code{\link{hc}} with model specified as \code{"V"} or \code{"E"}. } \item{\code{subset}}{ A logical or numeric vector specifying a subset of the data to be used in the initial hierarchical clustering phase. By default no subset is used unless the number of observations exceeds the value specified by \code{mclust.options("subset")}. Note that to guarantee exact reproducibility of results a seed must be specified (see \code{\link{set.seed}}).} \item{\code{noise}}{ A logical or numeric vector indicating an initial guess as to which observations are noise in the data. If numeric the entries should correspond to row indexes of the data. If supplied, a noise term will be added to the model in the estimation.} } } \item{warn}{ A logical value indicating whether or not certain warnings (usually related to singularity) should be issued. The default is controlled by \code{\link{mclust.options}}. } \item{x}{ An object of class \code{'mclustBIC'}. If supplied, BIC values for models that have already been computed and are available in \code{x} are not recomputed. All arguments, with the exception of \code{data}, \code{G} and \code{modelName}, are ignored and their values are set as specified in the attributes of \code{x}. Defaults for \code{G} and \code{modelNames} are taken from \code{x}. } \item{verbose}{ A logical controlling if a text progress bar is displayed during the fitting procedure. By default is \code{TRUE} if the session is interactive, and \code{FALSE} otherwise.. } \item{\dots}{ Catches unused arguments in indirect or list calls via \code{do.call}. } } \value{ An object of class \code{'Mclust'} providing the optimal (according to BIC) mixture model estimation. The details of the output components are as follows: \item{call}{The matched call} \item{data}{The input data matrix.} \item{modelName}{ A character string denoting the model at which the optimal BIC occurs. } \item{n}{ The number of observations in the data. } \item{d}{ The dimension of the data. } \item{G}{ The optimal number of mixture components. } \item{BIC}{ All BIC values. } \item{bic}{ Optimal BIC value. } \item{loglik}{ The log-likelihood corresponding to the optimal BIC. } \item{df}{ The number of estimated parameters. } \item{hypvol}{ The hypervolume parameter for the noise component if required, otherwise set to \code{NULL} (see \code{\link{hypvol}}). } \item{parameters}{ A list with the following components: \describe{ \item{\code{pro}}{ A vector whose \emph{k}th component is the mixing proportion for the \emph{k}th component of the mixture model. If missing, equal proportions are assumed. } \item{\code{mean}}{ The mean for each component. 
If there is more than one component, this is a matrix whose kth column is the mean of the \emph{k}th component of the mixture model. } \item{\code{variance}}{ A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for \code{\link{mclustVariance}} for details. } } } \item{z}{ A matrix whose \emph{[i,k]}th entry is the probability that observation \emph{i} in the test data belongs to the \emph{k}th class. } \item{classification}{ The classification corresponding to \code{z}, i.e. \code{map(z)}. } \item{uncertainty}{ The uncertainty associated with the classification. } } \references{ Scrucca L., Fop M., Murphy T. B. and Raftery A. E. (2016) mclust 5: clustering, classification and density estimation using Gaussian finite mixture models, \emph{The R Journal}, 8/1, pp. 205-233. Fraley C. and Raftery A. E. (2002) Model-based clustering, discriminant analysis and density estimation, \emph{Journal of the American Statistical Association}, 97/458, pp. 611-631. Fraley C., Raftery A. E., Murphy T. B. and Scrucca L. (2012) mclust Version 4 for R: Normal Mixture Modeling for Model-Based Clustering, Classification, and Density Estimation. \emph{Technical Report} No. 597, Department of Statistics, University of Washington. C. Fraley and A. E. Raftery (2007) Bayesian regularization for normal mixture estimation and model-based clustering. \emph{Journal of Classification}, 24, 155-181. } \seealso{ \code{\link{summary.Mclust}}, \code{\link{plot.Mclust}}, \code{\link{priorControl}}, \code{\link{emControl}}, \code{\link{hc}}, \code{\link{mclustBIC}}, \code{\link{mclustModelNames}}, \code{\link{mclust.options}} } \examples{ mod1 <- Mclust(iris[,1:4]) summary(mod1) mod2 <- Mclust(iris[,1:4], G = 3) summary(mod2, parameters = TRUE) # Using prior mod3 <- Mclust(iris[,1:4], prior = priorControl()) summary(mod3) mod4 <- Mclust(iris[,1:4], prior = priorControl(functionName="defaultPrior", shrinkage=0.1)) summary(mod4) # Clustering of faithful data with some artificial noise added nNoise <- 100 set.seed(0) # to make it reproducible Noise <- apply(faithful, 2, function(x) runif(nNoise, min = min(x)-.1, max = max(x)+.1)) data <- rbind(faithful, Noise) plot(faithful) points(Noise, pch = 20, cex = 0.5, col = "lightgrey") set.seed(0) NoiseInit <- sample(c(TRUE,FALSE), size = nrow(faithful)+nNoise, replace = TRUE, prob = c(3,1)/4) mod5 <- Mclust(data, initialization = list(noise = NoiseInit)) summary(mod5, parameter = TRUE) plot(mod5, what = "classification") } \keyword{cluster} mclust/man/plot.MclustDA.Rd0000644000176200001440000001453313475242100015260 0ustar liggesusers\name{plot.MclustDA} \alias{plot.MclustDA} \title{Plotting method for MclustDA discriminant analysis} \description{ Plots for model-based mixture discriminant analysis results, such as scatterplot of training and test data, classification of train and test data, and errors. } \usage{ \method{plot}{MclustDA}(x, what = c("scatterplot", "classification", "train&test", "error"), newdata, newclass, dimens = NULL, symbols, colors, main = NULL, \dots) } \arguments{ \item{x}{ An object of class \code{'MclustDA'} resulting from a call to \code{\link{MclustDA}}. } \item{what}{ A string specifying the type of graph requested. Available choices are: \describe{ \item{\code{"scatterplot"} =}{a plot of training data with points marked based on the known classification. 
Ellipses corresponding to covariances of mixture components are also
drawn.}
\item{\code{"classification"} =}{a plot of data with points marked based
on the predicted classification; if \code{newdata} is provided then the
test set is shown, otherwise the training set.}
\item{\code{"train&test"} =}{a plot of training and test data with points
marked according to the type of set.}
\item{\code{"error"} =}{a plot of training set (or test set if
\code{newdata} and \code{newclass} are provided) with misclassified points
marked.}
}
If not specified, in interactive sessions a menu of choices is proposed.
}
\item{newdata}{
A data frame or matrix for test data.
}
\item{newclass}{
A vector giving the class labels for the observations in the test data
(if known).
}
\item{dimens}{
A vector of integers giving the dimensions of the desired coordinate
projections for multivariate data. The default is to take all the
available dimensions for plotting.
}
\item{symbols}{
Either an integer or character vector assigning a plotting symbol to each
unique class. Elements in \code{symbols} correspond to classes in order of
appearance in the sequence of observations (the order used by the function
\code{factor}).
The default is given by \code{mclust.options("classPlotSymbols")}.
}
\item{colors}{
Either an integer or character vector assigning a color to each unique
class in \code{classification}. Elements in \code{colors} correspond to
classes in order of appearance in the sequence of observations (the order
used by the function \code{factor}).
The default is given by \code{mclust.options("classPlotColors")}.
}
\item{main}{
A logical, a character string, or \code{NULL} (default) for the main
title. If \code{NULL} or \code{FALSE} no title is added to a plot.
If \code{TRUE} a default title is added identifying the type of plot
drawn. If a character string is provided, this is used for the title.
}
\item{\dots}{further arguments passed to or from other methods.}
}
%\value{}
\details{
For more flexibility in plotting, use \code{mclust1Dplot},
\code{mclust2Dplot}, \code{surfacePlot}, \code{coordProj}, or
\code{randProj}.
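As an illustration, the sketch below (not run by the package; the object
names are arbitrary and only exported functions are used) plots the
training data of an \code{'MclustDA'} fit directly with
\code{\link{coordProj}}, which gives full control over the dimensions,
symbols and colors used:
\preformatted{
mod <- MclustDA(iris[,-5], iris[,5], modelType = "EDDA")
coordProj(iris[,-5], dimens = c(1, 3),
          classification = predict(mod)$classification,
          symbols = 15:17, main = TRUE)
}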
} \author{Luca Scrucca} \seealso{ \code{\link{MclustDA}}, \code{\link{surfacePlot}}, \code{\link{coordProj}}, \code{\link{randProj}} } \examples{ \dontrun{ odd <- seq(from = 1, to = nrow(iris), by = 2) even <- odd + 1 X.train <- iris[odd,-5] Class.train <- iris[odd,5] X.test <- iris[even,-5] Class.test <- iris[even,5] # common EEE covariance structure (which is essentially equivalent to linear discriminant analysis) irisMclustDA <- MclustDA(X.train, Class.train, modelType = "EDDA", modelNames = "EEE") summary(irisMclustDA, parameters = TRUE) summary(irisMclustDA, newdata = X.test, newclass = Class.test) # common covariance structure selected by BIC irisMclustDA <- MclustDA(X.train, Class.train, modelType = "EDDA") summary(irisMclustDA, parameters = TRUE) summary(irisMclustDA, newdata = X.test, newclass = Class.test) # general covariance structure selected by BIC irisMclustDA <- MclustDA(X.train, Class.train) summary(irisMclustDA, parameters = TRUE) summary(irisMclustDA, newdata = X.test, newclass = Class.test) plot(irisMclustDA) plot(irisMclustDA, dimens = 3:4) plot(irisMclustDA, dimens = 4) plot(irisMclustDA, what = "classification") plot(irisMclustDA, what = "classification", newdata = X.test) plot(irisMclustDA, what = "classification", dimens = 3:4) plot(irisMclustDA, what = "classification", newdata = X.test, dimens = 3:4) plot(irisMclustDA, what = "classification", dimens = 4) plot(irisMclustDA, what = "classification", dimens = 4, newdata = X.test) plot(irisMclustDA, what = "train&test", newdata = X.test) plot(irisMclustDA, what = "train&test", newdata = X.test, dimens = 3:4) plot(irisMclustDA, what = "train&test", newdata = X.test, dimens = 4) plot(irisMclustDA, what = "error") plot(irisMclustDA, what = "error", dimens = 3:4) plot(irisMclustDA, what = "error", dimens = 4) plot(irisMclustDA, what = "error", newdata = X.test, newclass = Class.test) plot(irisMclustDA, what = "error", newdata = X.test, newclass = Class.test, dimens = 3:4) plot(irisMclustDA, what = "error", newdata = X.test, newclass = Class.test, dimens = 4) # simulated 1D data n <- 250 set.seed(1) triModal <- c(rnorm(n,-5), rnorm(n,0), rnorm(n,5)) triClass <- c(rep(1,n), rep(2,n), rep(3,n)) odd <- seq(from = 1, to = length(triModal), by = 2) even <- odd + 1 triMclustDA <- MclustDA(triModal[odd], triClass[odd]) summary(triMclustDA, parameters = TRUE) summary(triMclustDA, newdata = triModal[even], newclass = triClass[even]) plot(triMclustDA) plot(triMclustDA, what = "classification") plot(triMclustDA, what = "classification", newdata = triModal[even]) plot(triMclustDA, what = "train&test", newdata = triModal[even]) plot(triMclustDA, what = "error") plot(triMclustDA, what = "error", newdata = triModal[even], newclass = triClass[even]) # simulated 2D cross data data(cross) odd <- seq(from = 1, to = nrow(cross), by = 2) even <- odd + 1 crossMclustDA <- MclustDA(cross[odd,-1], cross[odd,1]) summary(crossMclustDA, parameters = TRUE) summary(crossMclustDA, newdata = cross[even,-1], newclass = cross[even,1]) plot(crossMclustDA) plot(crossMclustDA, what = "classification") plot(crossMclustDA, what = "classification", newdata = cross[even,-1]) plot(crossMclustDA, what = "train&test", newdata = cross[even,-1]) plot(crossMclustDA, what = "error") plot(crossMclustDA, what = "error", newdata =cross[even,-1], newclass = cross[even,1]) } } \keyword{multivariate} mclust/man/mvn.Rd0000644000176200001440000000606513375223424013437 0ustar liggesusers\name{mvn} \alias{mvn} \title{ Univariate or Multivariate Normal Fit } \description{ 
Computes the mean, covariance, and log-likelihood from fitting a single Gaussian to given data (univariate or multivariate normal). } \usage{ mvn( modelName, data, prior = NULL, warn = NULL, \dots) } \arguments{ \item{modelName}{ A character string representing a model name. This can be either \code{"Spherical"}, \code{"Diagonal"}, or \code{"Ellipsoidal"} or else \cr \code{"X"} for one-dimensional data,\cr \code{"XII"} for a spherical Gaussian, \cr \code{"XXI"} for a diagonal Gaussian \cr \code{"XXX"} for a general ellipsoidal Gaussian } \item{data}{ A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables. } \item{prior}{ Specification of a conjugate prior on the means and variances. The default assumes no prior. } \item{warn}{ A logical value indicating whether or not a warning should be issued whenever a singularity is encountered. The default is given by \code{mclust.options("warn")}. } \item{\dots }{ Catches unused arguments in indirect or list calls via \code{do.call}. } } \value{ A list including the following components: \item{modelName}{ A character string identifying the model (same as the input argument). } \item{parameters}{ \describe{ \item{\code{mean}}{ The mean for each component. If there is more than one component, this is a matrix whose kth column is the mean of the \emph{k}th component of the mixture model. } \item{\code{variance}}{ A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for \code{\link{mclustVariance}} for details. } } } \item{loglik}{ The log likelihood for the data in the mixture model. } \item{Attributes:}{ \code{"WARNING"} An appropriate warning if problems are encountered in the computations. } } \seealso{ \code{\link{mvnX}}, \code{\link{mvnXII}}, \code{\link{mvnXXI}}, \code{\link{mvnXXX}}, \code{\link{mclustModelNames}} } \examples{ n <- 1000 set.seed(0) x <- rnorm(n, mean = -1, sd = 2) mvn(modelName = "X", x) mu <- c(-1, 0, 1) set.seed(0) x <- sweep(matrix(rnorm(n*3), n, 3) \%*\% (2*diag(3)), MARGIN = 2, STATS = mu, FUN = "+") mvn(modelName = "XII", x) mvn(modelName = "Spherical", x) set.seed(0) x <- sweep(matrix(rnorm(n*3), n, 3) \%*\% diag(1:3), MARGIN = 2, STATS = mu, FUN = "+") mvn(modelName = "XXI", x) mvn(modelName = "Diagonal", x) Sigma <- matrix(c(9,-4,1,-4,9,4,1,4,9), 3, 3) set.seed(0) x <- sweep(matrix(rnorm(n*3), n, 3) \%*\% chol(Sigma), MARGIN = 2, STATS = mu, FUN = "+") mvn(modelName = "XXX", x) mvn(modelName = "Ellipsoidal", x) } \keyword{cluster} % docclass is function % Converted by Sd2Rd version 1.21. mclust/man/imputeData.Rd0000644000176200001440000000325713205036743014732 0ustar liggesusers\name{imputeData} \alias{imputeData} \alias{matchCluster} \title{Missing data imputation via the \pkg{mix} package} \description{ Imputes missing data using the \pkg{mix} package. } \usage{ imputeData(data, categorical = NULL, seed = NULL, verbose = interactive()) } \arguments{ \item{data}{ A numeric vector, matrix, or data frame of observations containing missing values. Categorical variables are allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables. } \item{categorical}{ A logical vectors whose \emph{i}th entry is \code{TRUE} if the \emph{i}th variable or column of \code{data} is to be interpreted as categorical and \code{FALSE} otherwise. 
The default is to assume that a variable is to be interpreted as categorical only if it is a factor. } \item{seed}{ A seed for the function \code{rngseed} that is used to initialize the random number generator in \pkg{mix}. By default, a seed is chosen uniformly in the interval \code{(.Machine$integer.max/1024, .Machine$integer.max)}. } \item{verbose}{ A logical, if \code{TRUE} reports info about iterations of the algorithm. } } \value{ A dataset of the same dimensions as \code{data} with missing values filled in. } \references{ Schafer J. L. (1997). Analysis of Incomplete Multivariate Data, Chapman and Hall. } \seealso{ \code{\link{imputePairs}} } \examples{ \dontrun{ # Note that package 'mix' must be installed data(stlouis, package = "mix") # impute the continuous variables in the stlouis data stlimp <- imputeData(stlouis[,-(1:3)]) # plot imputed values imputePairs(stlouis[,-(1:3)], stlimp) } } \keyword{cluster} mclust/man/diabetes.Rd0000644000176200001440000000144612577017527014415 0ustar liggesusers\name{diabetes} \alias{diabetes} \docType{data} \title{Diabetes data} \description{The data set contains three measurements made on 145 non-obese adult patients classified into three groups.} \usage{data(diabetes)} \format{A data frame with the following variables: \describe{ \item{class}{The type of diabetes: \code{Normal}, \code{Overt}, and \code{Chemical}.} \item{glucose}{Area under plasma glucose curve after a three hour oral glucose tolerance test (OGTT).} \item{insulin}{Area under plasma insulin curve after a three hour oral glucose tolerance test (OGTT).} \item{sspg}{Steady state plasma glucose.} } } \source{Reaven, G. M. and Miller, R. G. (1979). An attempt to define the nature of chemical diabetes using a multidimensional analysis. \emph{Diabetologia} 16:17-24.} \keyword{datasets} mclust/man/hdrlevels.Rd0000644000176200001440000000431313314423527014620 0ustar liggesusers\name{hdrlevels} \alias{hdrlevels} \title{Highest Density Region (HDR) Levels} \description{ Compute the levels of Highest Density Regions (HDRs) for any density and probability levels. } \usage{ hdrlevels(density, prob) } \arguments{ \item{density}{A vector of density values computed on a set of (observed) evaluation points.} \item{prob}{A vector of probability levels in the range \eqn{[0,1]}.} } \value{ The function returns a vector of density values corresponding to HDRs at given probability levels. } \details{ From Hyndman (1996), let \eqn{f(x)} be the density function of a random variable \eqn{X}. Then the \eqn{100(1-\alpha)\%} HDR is the subset \eqn{R(f_\alpha)} of the sample space of \eqn{X} such that \deqn{ R(f_\alpha) = \{x : f(x) \ge f_\alpha \} } where \eqn{f_\alpha} is the largest constant such that \eqn{ Pr( X \in R(f_\alpha)) \ge 1-\alpha } } \seealso{ \code{\link{plot.densityMclust}} } \references{ Rob J. Hyndman (1996) Computing and Graphing Highest Density Regions. \emph{The American Statistician}, 50(2):120-126. } \author{L.
Scrucca} \examples{ # Example: univariate Gaussian x <- rnorm(1000) f <- dnorm(x) a <- c(0.5, 0.25, 0.1) (f_a <- hdrlevels(f, prob = 1-a)) plot(x, f) abline(h = f_a, lty = 2) text(max(x), f_a, labels = paste0("f_", a), pos = 3) mean(f > f_a[1]) range(x[which(f > f_a[1])]) qnorm(1-a[1]/2) mean(f > f_a[2]) range(x[which(f > f_a[2])]) qnorm(1-a[2]/2) mean(f > f_a[3]) range(x[which(f > f_a[3])]) qnorm(1-a[3]/2) # Example 2: univariate Gaussian mixture set.seed(1) cl <- sample(1:2, size = 1000, prob = c(0.7, 0.3), replace = TRUE) x <- ifelse(cl == 1, rnorm(1000, mean = 0, sd = 1), rnorm(1000, mean = 4, sd = 1)) f <- 0.7*dnorm(x, mean = 0, sd = 1) + 0.3*dnorm(x, mean = 4, sd = 1) a <- 0.25 (f_a <- hdrlevels(f, prob = 1-a)) plot(x, f) abline(h = f_a, lty = 2) text(max(x), f_a, labels = paste0("f_", a), pos = 3) mean(f > f_a) # find the regions of HDR ord <- order(x) f <- f[ord] x <- x[ord] x_a <- x[f > f_a] j <- which.max(diff(x_a)) region1 <- x_a[c(1,j)] region2 <- x_a[c(j+1,length(x_a))] plot(x, f, type = "l") abline(h = f_a, lty = 2) abline(v = region1, lty = 3, col = 2) abline(v = region2, lty = 3, col = 3) } \keyword{density} mclust/man/clustCombiOptim.Rd0000644000176200001440000000415313475242100015741 0ustar liggesusers\name{clustCombiOptim} \alias{clustCombiOptim} \title{Optimal number of clusters obtained by combining mixture components} \description{ Return the optimal number of clusters by combining mixture components based on the entropy method discussed in the reference given below. } \usage{ clustCombiOptim(object, reg = 2, plot = FALSE, \dots) } \arguments{ \item{object}{ An object of class \code{'clustCombi'} resulting from a call to \code{\link{clustCombi}}. } \item{reg}{ The number of parts of the piecewise linear regression for the entropy plots. Choose 2 for a two-segment piecewise linear regression model (i.e. 1 change-point), and 3 for a three-segment piecewise linear regression model (i.e. 3 change-points). } \item{plot}{ Logical, if \code{TRUE} an entropy plot is also produced. } \item{\dots}{Further arguments passed to or from other methods.} } \value{ The function returns a list with the following components: \item{numClusters.combi}{The estimated number of clusters.} \item{z.combi}{A matrix whose \emph{[i,k]}th entry is the probability that observation \emph{i} in the data belongs to the \emph{k}th cluster.} \item{cluster.combi}{The clustering labels.} } \references{ J.-P. Baudry, A. E. Raftery, G. Celeux, K. Lo and R. Gottardo (2010). Combining mixture components for clustering. \emph{Journal of Computational and Graphical Statistics, 19(2):332-353.} } \author{ J.-P. Baudry, A. E. Raftery, L. 
Scrucca } \seealso{ \code{\link{combiPlot}}, \code{\link{entPlot}}, \code{\link{clustCombi}} } \examples{ data(Baudry_etal_2010_JCGS_examples) output <- clustCombi(data = ex4.1) combiOptim <- clustCombiOptim(output) str(combiOptim) # plot optimal clustering with alpha color transparency proportional to uncertainty zmax <- apply(combiOptim$z.combi, 1, max) col <- mclust.options("classPlotColors")[combiOptim$cluster.combi] vadjustcolor <- Vectorize(adjustcolor) alphacol = (zmax - 1/combiOptim$numClusters.combi)/(1-1/combiOptim$numClusters.combi) col <- vadjustcolor(col, alpha.f = alphacol) plot(ex4.1, col = col, pch = mclust.options("classPlotSymbols")[combiOptim$cluster.combi]) } \keyword{ cluster } mclust/man/MclustDR.Rd0000644000176200001440000001152013461340767014332 0ustar liggesusers\name{MclustDR} \alias{MclustDR} \alias{print.MclustDR} \title{Dimension reduction for model-based clustering and classification} \description{ A dimension reduction method for visualizing the clustering or classification structure obtained from a finite mixture of Gaussian densities. } \usage{ MclustDR(object, lambda = 0.5, normalized = TRUE, Sigma, tol = sqrt(.Machine$double.eps)) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{object}{An object of class \code{'Mclust'} or \code{'MclustDA'} resulting from a call to, respectively, \code{\link{Mclust}} or \code{\link{MclustDA}}. } \item{lambda}{A tuning parameter in the range [0,1] described in Scrucca (2014). The default 0.5 gives equal importance to differences in means and covariances among clusters/classes. To recover the directions that mostly separate the estimated clusters or classes set this parameter to 1. } \item{normalized}{Logical. If \code{TRUE} directions are normalized to unit norm. } \item{Sigma}{Marginal covariance matrix of data. If not provided is estimated by the MLE of observed data. } \item{tol}{A tolerance value.} } \details{ The method aims at reducing the dimensionality by identifying a set of linear combinations, ordered by importance as quantified by the associated eigenvalues, of the original features which capture most of the clustering or classification structure contained in the data. Information on the dimension reduction subspace is obtained from the variation on group means and, depending on the estimated mixture model, on the variation on group covariances (see Scrucca, 2010). Observations may then be projected onto such a reduced subspace, thus providing summary plots which help to visualize the underlying structure. The method has been extended to the supervised case, i.e. when the true classification is known (see Scrucca, 2013). This implementation doesn't provide a formal procedure for the selection of dimensionality. A future release will include one or more methods. } \value{ An object of class \code{'MclustDR'} with the following components: \item{call}{The matched call} \item{type}{A character string specifying the type of model for which the dimension reduction is computed. Currently, possible values are \code{"Mclust"} for clustering, and \code{"MclustDA"} or \code{"EDDA"} for classification.} \item{x}{The data matrix.} \item{Sigma}{The covariance matrix of the data.} \item{mixcomp}{A numeric vector specifying the mixture component of each data observation.} \item{class}{A factor specifying the classification of each data observation. For model-based clustering this is equivalent to the corresponding mixture component. 
For model-based classification this is the known classification.} \item{G}{The number of mixture components.} \item{modelName}{The name of the parameterization of the estimated mixture model(s). See \code{\link{mclustModelNames}}.} \item{mu}{A matrix of means for each mixture component.} \item{sigma}{An array of covariance matrices for each mixture component.} \item{pro}{The estimated prior for each mixture component.} \item{M}{The kernel matrix.} \item{lambda}{The tuning parameter.} \item{evalues}{The eigenvalues from the generalized eigen-decomposition of the kernel matrix.} \item{raw.evectors}{The raw eigenvectors from the generalized eigen-decomposition of the kernel matrix, ordered according to the eigenvalues.} \item{basis}{The basis of the estimated dimension reduction subspace.} \item{std.basis}{The basis of the estimated dimension reduction subspace standardized to variables having unit standard deviation.} \item{numdir}{The dimension of the projection subspace.} \item{dir}{The estimated directions, i.e. the data projected onto the estimated dimension reduction subspace.} } \references{ Scrucca, L. (2010) Dimension reduction for model-based clustering. \emph{Statistics and Computing}, 20(4), pp. 471-484. Scrucca, L. (2014) Graphical Tools for Model-based Mixture Discriminant Analysis. \emph{Advances in Data Analysis and Classification}, 8(2), pp. 147-165. } \author{Luca Scrucca} %\note{} \seealso{ \code{\link{summary.MclustDR}}, \code{\link{plot.MclustDR}}, \code{\link{Mclust}}, \code{\link{MclustDA}}. } \examples{ # clustering data(diabetes) mod <- Mclust(diabetes[,-1]) summary(mod) dr <- MclustDR(mod) summary(dr) plot(dr, what = "scatterplot") plot(dr, what = "evalues") # adjust the tuning parameter to show the most separating directions dr1 <- MclustDR(mod, lambda = 1) summary(dr1) plot(dr1, what = "scatterplot") plot(dr1, what = "evalues") # classification data(banknote) da <- MclustDA(banknote[,2:7], banknote$Status, modelType = "EDDA") dr <- MclustDR(da) summary(dr) da <- MclustDA(banknote[,2:7], banknote$Status) dr <- MclustDR(da) summary(dr) } \keyword{multivariate} mclust/man/summary.MclustDA.Rd0000644000176200001440000000262313465000766016005 0ustar liggesusers\name{summary.MclustDA} \alias{summary.MclustDA} \alias{print.summary.MclustDA} \title{Summarizing discriminant analysis based on Gaussian finite mixture modeling} \description{Summary method for class \code{"MclustDA"}.} \usage{ \method{summary}{MclustDA}(object, parameters = FALSE, newdata, newclass, \dots) \method{print}{summary.MclustDA}(x, digits = getOption("digits"), \dots) } \arguments{ \item{object}{An object of class \code{'MclustDA'} resulting from a call to \code{\link{MclustDA}}.} \item{x}{An object of class \code{'summary.MclustDA'}, usually, a result of a call to \code{summary.MclustDA}.} \item{parameters}{Logical; if \code{TRUE}, the parameters of mixture components are printed.} \item{newdata}{A data frame or matrix giving the test data.} \item{newclass}{A vector giving the class labels for the observations in the test data.} \item{digits}{The number of significant digits to use when printing.} \item{\dots}{Further arguments passed to or from other methods.} } % \details{} \value{The function \code{summary.MclustDA} computes and returns a list of summary statistics of the estimated MclustDA or EDDA model for classification.} \author{Luca Scrucca} % \note{} \seealso{\code{\link{MclustDA}}, \code{\link{plot.MclustDA}}.} \examples{ mod = MclustDA(data = iris[,1:4], class = iris$Species) summary(mod) 
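# Illustrative sketch (not part of the original example): a simple odd/even
# train-test split of iris, assumed here only to show how the 'newdata' and
# 'newclass' arguments report the classification summary on held-out data.
odd <- seq(from = 1, to = nrow(iris), by = 2)
modTrain <- MclustDA(data = iris[odd, 1:4], class = iris$Species[odd])
summary(modTrain, newdata = iris[-odd, 1:4], newclass = iris$Species[-odd])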
summary(mod, parameters = TRUE) } \keyword{multivariate} mclust/man/summary.MclustBootstrap.Rd0000644000176200001440000000260713225105712017467 0ustar liggesusers\name{summary.MclustBootstrap} \alias{summary.MclustBootstrap} \alias{print.summary.MclustBootstrap} \title{Summary Function for Bootstrap Inference for Gaussian Finite Mixture Models} \description{Summary of bootstrap distribution for the parameters of a Gaussian mixture model providing either standard errors or percentile bootstrap confidence intervals.} \usage{ \method{summary}{MclustBootstrap}(object, what = c("se", "ci", "ave"), conf.level = 0.95, \dots) } \arguments{ \item{object}{An object of class \code{'MclustBootstrap'} as returned by \code{\link{MclustBootstrap}}.} \item{what}{A character string: \code{"se"} for the standard errors; \code{"ci"} for the confidence intervals; \code{"ave"} for the averages.} \item{conf.level}{A value specifying the confidence level of the interval.} \item{\dots}{Further arguments passed to or from other methods.} } \details{For details about the procedure used to obtain the bootstrap distribution see \code{\link{MclustBootstrap}}.} %\value{} \seealso{\code{\link{MclustBootstrap}}.} \examples{ \dontrun{ data(diabetes) X = diabetes[,-1] modClust = Mclust(X) bootClust = MclustBootstrap(modClust) summary(bootClust, what = "se") summary(bootClust, what = "ci") data(acidity) modDens = densityMclust(acidity) modDens = MclustBootstrap(modDens) summary(modDens, what = "se") summary(modDens, what = "ci") } } \keyword{htest} \keyword{cluster} mclust/man/plot.Mclust.Rd0000644000176200001440000000534013504361766015064 0ustar liggesusers\name{plot.Mclust} \alias{plot.Mclust} \title{Plotting method for Mclust model-based clustering} \description{ Plots for model-based clustering results, such as BIC, classification, uncertainty and density. } \usage{ \method{plot}{Mclust}(x, what = c("BIC", "classification", "uncertainty", "density"), dimens = NULL, xlab = NULL, ylab = NULL, ylim = NULL, addEllipses = TRUE, main = FALSE, \dots) } \arguments{ \item{x}{ Output from \code{Mclust}. } \item{what}{ A string specifying the type of graph requested. Available choices are: \describe{ \item{\code{"BIC"}}{plot of BIC values used for choosing the number of clusters.} \item{\code{"classification"} =}{a plot showing the clustering. For data in more than two dimensions a pairs plot is produced, followed by a coordinate projection plot using specified \code{dimens}. Ellipses corresponding to covariances of mixture components are also drawn if \code{addEllipses = TRUE}.} \item{\code{"uncertainty"}}{a plot of classification uncertainty. For data in more than two dimensions a coordinate projection plot is drawn using specified \code{dimens}.} \item{\code{"density"}}{a plot of estimated density. For data in more than two dimensions a matrix of contours for coordinate projection plot is drawn using specified \code{dimens}.} } If not specified, in interactive sessions a menu of choices is proposed. } \item{dimens}{ A vector of integers specifying the dimensions of the coordinate projections in case of \code{"classification"}, \code{"uncertainty"}, or \code{"density"} plots. } \item{xlab, ylab}{ Optional labels for the x-axis and the y-axis. } \item{ylim}{ Optional limits for the vertical axis of the BIC plot. } \item{addEllipses}{ A logical indicating whether or not to add ellipses with axes corresponding to the within-cluster covariances in case of \code{"classification"} or \code{"uncertainty"} plots. 
} \item{main}{ A logical or \code{NULL} indicating whether or not to add a title to the plot identifying the type of plot drawn. } \item{\dots}{ Other graphics parameters. } } \details{ For more flexibility in plotting, use \code{mclust1Dplot}, \code{mclust2Dplot}, \code{surfacePlot}, \code{coordProj}, or \code{randProj}. } \seealso{ \code{\link{Mclust}}, \code{\link{plot.mclustBIC}}, \code{\link{plot.mclustICL}}, \code{\link{mclust1Dplot}}, \code{\link{mclust2Dplot}}, \code{\link{surfacePlot}}, \code{\link{coordProj}}, \code{\link{randProj}}. } \examples{ \dontrun{ precipMclust <- Mclust(precip) plot(precipMclust) faithfulMclust <- Mclust(faithful) plot(faithfulMclust) irisMclust <- Mclust(iris[,-5]) plot(irisMclust) } } \keyword{cluster} mclust/man/uncerPlot.Rd0000644000176200001440000000275413175055331014605 0ustar liggesusers\name{uncerPlot} \alias{uncerPlot} \title{ Uncertainty Plot for Model-Based Clustering } \description{ Displays the uncertainty in converting a conditional probability from EM to a classification in model-based clustering. } \usage{ uncerPlot(z, truth, \dots) } \arguments{ \item{z}{ A matrix whose \emph{[i,k]}th entry is the conditional probability of the \emph{i}th observation belonging to the \emph{k}th component of the mixture. } \item{truth}{ A numeric or character vector giving the true classification of the data. } \item{\dots }{ Provided to allow lists with elements other than the arguments to be passed in indirect or list calls with \code{do.call}. } } \value{ A plot of the uncertainty profile of the data, with uncertainties in increasing order of magnitude. If \code{truth} is supplied and the number of classes is the same as the number of columns of \code{z}, the uncertainty of the misclassified data is marked by vertical lines on the plot. } \details{ When \code{truth} is provided and the number of classes is compatible with \code{z}, the function \code{compareClass} is used to find the best correspondence between classes in \code{truth} and \code{z}. } \seealso{ \code{\link{mclustBIC}}, \code{\link{em}}, \code{\link{me}}, \code{\link{mapClass}} } \examples{ irisModel3 <- Mclust(iris[,-5], G = 3) uncerPlot(z = irisModel3$z) uncerPlot(z = irisModel3$z, truth = iris[,5]) } \keyword{cluster} % docclass is function mclust/man/sigma2decomp.Rd0000644000176200001440000000555613175055163015206 0ustar liggesusers\name{sigma2decomp} \alias{sigma2decomp} \title{ Convert mixture component covariances to decomposition form. } \description{ Converts a set of covariance matrices from representation as a 3-D array to a parameterization by eigenvalue decomposition. } \usage{ sigma2decomp(sigma, G = NULL, tol = sqrt(.Machine$double.eps), \dots) } \arguments{ \item{sigma}{ Either a 3-D array whose [,,k]th component is the covariance matrix for the kth component in an MVN mixture model, or a single covariance matrix in the case that all components have the same covariance. } \item{G}{ The number of components in the mixture. When \code{sigma} is a 3-D array, the number of components can be inferred from its dimensions. } \item{tol}{ Tolerance for determining whether or not the covariances have equal volume, shape, and/or orientation. The default is the square root of the relative machine precision, \code{sqrt(.Machine$double.eps)}, which is about \code{1.e-8}. } \item{\dots}{ Catches unused arguments from an indirect or list call via \code{do.call}.
} } \value{ The covariance matrices for the mixture components in decomposition form, including the following components: \item{modelName}{ A character string indicating the inferred model. The help file for \code{\link{mclustModelNames}} describes the available models. } \item{d}{ The dimension of the data. } \item{G}{ The number of components in the mixture model. } \item{scale}{ Either a \emph{G}-vector giving the scale of the covariance (the \emph{d}th root of its determinant) for each component in the mixture model, or a single numeric value if the scale is the same for each component. } \item{shape}{ Either a \emph{G} by \emph{d} matrix in which the \emph{k}th column is the shape of the covariance matrix (normalized to have determinant 1) for the \emph{k}th component, or a \emph{d}-vector giving a common shape for all components. } \item{orientation}{ Either a \emph{d} by \emph{d} by \emph{G} array whose \code{[,,k]}th entry is the orthonormal matrix whose columns are the eigenvectors of the covariance matrix of the \emph{k}th component, or a \emph{d} by \emph{d} orthonormal matrix if the mixture components have a common orientation. The \code{orientation} component of \code{decomp} can be omitted in spherical and diagonal models, for which the principal components are parallel to the coordinate axes so that the orientation matrix is the identity. } } \seealso{ \code{\link{decomp2sigma}} } \examples{ meEst <- meEEE(iris[,-5], unmap(iris[,5])) names(meEst$parameters$variance) meEst$parameters$variance$Sigma sigma2decomp(meEst$parameters$variance$Sigma, G = length(unique(iris[,5]))) } \keyword{cluster} % docclass is function % Converted by Sd2Rd version 1.21. mclust/man/nMclustParams.Rd0000644000176200001440000000275613175053603015427 0ustar liggesusers\name{nMclustParams} \alias{nMclustParams} \title{Number of Estimated Parameters in Gaussian Mixture Models} \description{ Gives the number of estimated parameters for parameterizations of the Gaussian mixture model that are used in MCLUST. } \usage{ nMclustParams(modelName, d, G, noise = FALSE, equalPro = FALSE, \dots) } \arguments{ \item{modelName}{ A character string indicating the model. The help file for \code{\link{mclustModelNames}} describes the available models.
} \examples{ mapply(nMclustParams, mclust.options("emModelNames"), d = 2, G = 3) } \keyword{cluster} mclust/man/dmvnorm.Rd0000644000176200001440000000252613463276036014324 0ustar liggesusers\name{dmvnorm} \alias{dmvnorm} \title{Density of multivariate Gaussian distribution} \description{ Efficiently computes the densities of observations for a generic multivariate Gaussian distribution } \usage{ dmvnorm(data, mean, sigma, log = FALSE) } \arguments{ \item{data}{ A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables. } \item{mean}{ A vector of means for each variable. } \item{sigma}{ A positive definite covariance matrix. } \item{log}{ A logical value indicating whether or not the logarithm of the densities should be returned. } } \value{ A numeric vector whose \emph{i}th component is the density of the \emph{ith} observation in \code{data} in the MVN mixture specified by \code{parameters}. } \seealso{ \code{\link{dnorm}}, \code{\link{dens}} } \examples{ # univariate ngrid <- 101 x <- seq(-5, 5, length = ngrid) dens <- dmvnorm(x, mean = 1, sigma = 5) plot(x, dens, type = "l") # bivariate ngrid <- 101 x1 <- x2 <- seq(-5, 5, length = ngrid) mu <- c(1,0) sigma <- matrix(c(1,0.5,0.5,2), 2, 2) dens <- dmvnorm(as.matrix(expand.grid(x1, x2)), mu, sigma) dens <- matrix(dens, ngrid, ngrid) image(x1, x2, dens) contour(x1, x2, dens, add = TRUE) } mclust/man/mclust.options.Rd0000644000176200001440000001415113473250450015631 0ustar liggesusers\name{mclust.options} \alias{mclust.options} \title{Default values for use with MCLUST package} \description{Set or retrieve default values for use with MCLUST package.} \usage{ mclust.options(\dots) } \arguments{ \item{\dots}{ one or more arguments provided in the \code{name = value} form, or no argument at all may be given. \cr Available arguments are described in the Details section below.} } \details{ \code{mclust.options} is provided for assigning or retrieving default values used by various functions in \code{MCLUST}.\cr Available options are: \describe{ \item{\code{emModelNames}}{ A vector of 3-character strings that are associated with multivariate models for which EM estimation is available in MCLUST. \cr The current default is all of the multivariate mixture models supported in MCLUST. The help file for \code{\link{mclustModelNames}} describes the available models. } \item{\code{hcModelName}}{ A string associated with multivariate models for which model-based hierarchical clustering is available in MCLUST. \cr The available models are the following: \describe{ \item{\code{"EII"}}{spherical, equal volume} \item{\code{"EEE"}}{ellipsoidal, equal volume, shape, and orientation} \item{\code{"VII"}}{spherical, unequal volume} \item{\code{"VVV"}}{ellipsoidal, varying volume, shape, and orientation.} } The \code{"VVV"} is used as default for initialization of EM algorithm. } \item{\code{hcUse}}{ A string or a vector of character strings specifying the type of input variables to be used in model-based hierarchical clustering to start the EM algorithm. Possible values are: \describe{ \item{\code{"VARS"}}{original variables;} \item{\code{"STD"}}{standardized variables;} \item{\code{"SPH"}}{sphered variables (centered, scaled, uncorrelated) computed using SVD;} \item{\code{"PCS"}}{principal components computed using SVD on centered variables (i.e. 
using the covariance matrix);} \item{\code{"PCR"}}{principal components computed using SVD on standardized (center and scaled) variables (i.e. using the correlation matrix);} \item{\code{"SVD"}}{scaled SVD transformation;} \item{\code{"RND"}}{no transformation is applied but a random hierarchical structure is returned (see \code{\link{randomPairs}}).} } For further details see Scrucca and Raftery (2015), Scrucca et al. (2016). } \item{\code{subset}}{ A value specifying the maximal sample size to be used in the model-based hierarchical clustering to start the EM algorithm. If data sample size exceeds this value, a random sample is drawn of size specified by \code{subset}. } \item{\code{fillEllipses}}{ A logical value specifying whether or not to fill with transparent colors ellipses corresponding to the within-cluster covariances in case of \code{"classification"} plot for \code{'Mclust'} objects, or \code{"scatterplot"} graphs for \code{'MclustDA'} objects. } \item{\code{bicPlotSymbols}}{ A vector whose entries correspond to graphics symbols for plotting the BIC values output from \code{\link{Mclust}} and \code{\link{mclustBIC}}. These are displayed in the legend which appears at the lower right of the BIC plots. } \item{\code{bicPlotColors}}{ A vector whose entries correspond to colors for plotting the BIC curves from output from \code{\link{Mclust}} and \code{\link{mclustBIC}}. These are displayed in the legend which appears at the lower right of the BIC plots. } \item{\code{classPlotSymbols}}{ A vector whose entries are either integers corresponding to graphics symbols or single characters for indicating classifications when plotting data. Classes are assigned symbols in the given order. } \item{\code{classPlotColors}}{ A vector whose entries correspond to colors for indicating classifications when plotting data. Classes are assigned colors in the given order. } \item{\code{warn}}{ A logical value indicating whether or not to issue certain warnings. Most of these warnings have to do with situations in which singularities are encountered. The default is \code{warn = FALSE}. } } The parameter values set via a call to this function will remain in effect for the rest of the session, affecting the subsequent behaviour of the functions for which the given parameters are relevant. } \value{ If the argument list is empty the function returns the current list of values. If the argument list is not empty, the returned list is invisible. } \seealso{ \code{\link{Mclust}}, \code{\link{MclustDA}}, \code{\link{densityMclust}}, \code{\link{emControl}} } \references{ Scrucca L. and Raftery A. E. (2015) Improved initialisation of model-based clustering using Gaussian hierarchical partitions. \emph{Advances in Data Analysis and Classification}, 9/4, pp. 447-460. Scrucca L., Fop M., Murphy T. B. and Raftery A. E. (2016) mclust 5: clustering, classification and density estimation using Gaussian finite mixture models, \emph{The R Journal}, 8/1, pp. 205-233. 
} \examples{ opt <- mclust.options() # save default values irisBIC <- mclustBIC(iris[,-5]) summary(irisBIC, iris[,-5]) mclust.options(emModelNames = c("EII", "EEI", "EEE")) irisBIC <- mclustBIC(iris[,-5]) summary(irisBIC, iris[,-5]) mclust.options(opt) # restore default values mclust.options() oldpar <- par(mfrow = c(2,1), no.readonly = TRUE) n <- with(mclust.options(), max(sapply(list(bicPlotSymbols, bicPlotColors),length))) plot(seq(n), rep(1,n), ylab = "", xlab = "", yaxt = "n", pch = mclust.options("bicPlotSymbols"), col = mclust.options("bicPlotColors")) title("mclust.options(\"bicPlotSymbols\") \n mclust.options(\"bicPlotColors\")") n <- with(mclust.options(), max(sapply(list(classPlotSymbols, classPlotColors),length))) plot(seq(n), rep(1,n), ylab = "", xlab = "", yaxt = "n", pch = mclust.options("classPlotSymbols"), col = mclust.options("classPlotColors")) title("mclust.options(\"classPlotSymbols\") \n mclust.options(\"classPlotColors\")") par(oldpar) } \keyword{cluster} mclust/man/icl.Rd0000644000176200001440000000161613205036712013375 0ustar liggesusers\name{icl} \alias{icl} \title{ ICL for an estimated Gaussian Mixture Model } \description{ Computes the ICL (Integrated Complete-data Likelihood) criterion for a Gaussian Mixture Model fitted by \code{\link{Mclust}}. } \usage{ icl(object, \dots) } \arguments{ \item{object}{ An object of class \code{'Mclust'} resulting from a call to \code{\link{Mclust}}. } \item{\dots}{Further arguments passed to or from other methods.} } \value{ The ICL for the given input MCLUST model. } \references{ Biernacki, C., Celeux, G., Govaert, G. (2000). Assessing a mixture model for clustering with the integrated completed likelihood. \emph{IEEE Trans. Pattern Analysis and Machine Intelligence}, 22 (7), 719-725. } \seealso{ \code{\link{Mclust}}, \code{\link{mclustBIC}}, \code{\link{mclustICL}}, \code{\link{bic}}. } \examples{ mod <- Mclust(iris[,1:4]) icl(mod) } \keyword{cluster} mclust/man/me.weighted.Rd0000644000176200001440000001105513175053427015034 0ustar liggesusers\name{me.weighted} \alias{me.weighted} \title{EM algorithm with weights starting with M-step for parameterized MVN mixture models} \description{ Implements the EM algorithm for fitting MVN mixture models parameterized by eigenvalue decomposition, when observations have weights, starting with the maximization step. } \usage{ me.weighted(modelName, data, z, weights = NULL, prior = NULL, control = emControl(), Vinv = NULL, warn = NULL, ...) } \arguments{ \item{modelName}{ A character string indicating the model. The help file for \code{\link{mclustModelNames}} describes the available models. } \item{data}{ A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed. If a matrix or data frame, rows correspond to observations and columns correspond to variables. } \item{z}{ A matrix whose \code{[i,k]}th entry is an initial estimate of the conditional probability of the \emph{i}th observation belonging to the \emph{k}th component of the mixture. } \item{weights}{ A vector of positive weights, where the \code{[i]}th entry is the weight for the \emph{i}th observation. If any of the weights are greater than one, then they are scaled so that the maximum weight is one. } \item{prior}{ Specification of a conjugate prior on the means and variances. See the help file for \code{priorControl} for further information. The default assumes no prior. } \item{control}{ A list of control parameters for EM. The defaults are set by the call \code{\link{emControl}}.
} \item{Vinv}{ If the model is to include a noise term, \code{Vinv} is an estimate of the reciprocal hypervolume of the data region. If set to a negative value or 0, the model will include a noise term with the reciprocal hypervolume estimated by the function \code{hypvol}. The default is not to assume a noise term in the model through the setting \code{Vinv=NULL}. } \item{warn}{ A logical value indicating whether or not certain warnings (usually related to singularity) should be issued when the estimation fails. The default is set by \code{warn} using \code{\link{mclust.options}}. } \item{\dots}{ Catches unused arguments in indirect or list calls via \code{do.call}. } } \value{ A list including the following components: \item{modelName}{ A character string identifying the model (same as the input argument). } \item{z}{ A matrix whose \code{[i,k]}th entry is the conditional probability of the \emph{i}th observation belonging to the \emph{k}th component of the mixture. } \item{parameters}{ \describe{ \item{\code{pro}}{ A vector whose \emph{k}th component is the mixing proportion for the \emph{k}th component of the mixture model. If the model includes a Poisson term for noise, there should be one more mixing proportion than the number of Gaussian components. } \item{\code{mean}}{ The mean for each component. If there is more than one component, this is a matrix whose kth column is the mean of the \emph{k}th component of the mixture model. } \item{\code{variance}}{ A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for \code{\link{mclustVariance}} for details. } \item{\code{Vinv}}{ The estimate of the reciprocal hypervolume of the data region used in the computation when the input indicates the addition of a noise component to the model. } } } \item{loglik}{ The log likelihood for the data in the mixture model. } \item{Attributes:}{ \code{"info"} Information on the iteration.\cr \code{"WARNING"} An appropriate warning if problems are encountered in the computations. } } \author{Thomas Brendan Murphy} \seealso{ \code{\link{me}}, \code{\link{meE}},..., \code{\link{meVVV}}, \code{\link{em}}, \code{\link{mstep}}, \code{\link{estep}}, \code{\link{priorControl}}, \code{\link{mclustModelNames}}, \code{\link{mclustVariance}}, \code{\link{mclust.options}} } \examples{ \dontrun{ w <- rep(1,150) w[1] <- 0 me.weighted(modelName = "VVV", data = iris[,-5], z = unmap(iris[,5]),weights=w)} } \keyword{cluster} mclust/man/mclust-deprecated.Rd0000644000176200001440000000135513405515075016241 0ustar liggesusers\name{mclust-deprecated} \alias{cv.MclustDA} \alias{cv1EMtrain} \alias{bicEMtrain} \title{Deprecated Functions in mclust package} \description{ These functions are provided for compatibility with older versions of the \pkg{mclust} package only, and may be removed eventually. } \usage{ cv.MclustDA(\dots) cv1EMtrain(data, labels, modelNames=NULL) bicEMtrain(data, labels, modelNames=NULL) } \arguments{ \item{\dots}{pass arguments down.} \item{data}{A numeric vector or matrix of observations.} \item{labels}{Labels for each element or row in the dataset.} \item{modelNames}{Vector of model names that should be tested. 
The default is to select all available model names.} } \seealso{\code{\link{deprecated}}} mclust/man/plot.MclustBoostrap.Rd0000644000176200001440000000417713370553447016605 0ustar liggesusers\name{plot.MclustBootstrap} \alias{plot.MclustBootstrap} \title{Plot of bootstrap distributions for mixture model parameters} \description{ Plots the bootstrap distribution of parameters as returned by the \code{\link{MclustBootstrap}} function. } \usage{ \method{plot}{MclustBootstrap}(x, what = c("pro", "mean", "var"), show.parest = TRUE, show.confint = TRUE, hist.col = "grey", hist.border = "lightgrey", breaks = "Sturges", col = "forestgreen", lwd = 2, lty = 3, xlab = NULL, xlim = NULL, ylim = NULL, \dots) } \arguments{ \item{x}{Object returned by \code{MclustBootstrap}.} \item{what}{Character string specifying if mixing proportions (\code{"pro"}), component means (\code{"mean"}) or component variances (\code{"var"}) should be drawn.} \item{show.parest}{A logical specifying if the parameter estimate should be drawn as vertical line.} \item{show.confint}{A logical specifying if the resampling-based confidence interval should be drawn at the bottom of the graph. Confidence level can be provided as further argument \code{conf.level}; see \code{\link{summary.MclustBootstrap}}.} \item{hist.col}{The color to be used to fill the bars of the histograms.} \item{hist.border}{The color of the border around the bars of the histograms.} \item{breaks}{See the argument in function \code{\link[graphics]{hist}}.} \item{col, lwd, lty}{The color, line width and line type to be used to represent the estimated parameters and confidence intervals.} \item{xlab}{Optional label for the horizontal axis.} \item{xlim, ylim}{A two-values vector of axis range for, respectively, horizontal and vertical axis.} \item{\dots}{Other graphics parameters.} } \value{ A plot for each variable/component of the selected parameters. } \seealso{ \code{\link{MclustBootstrap}} } \examples{ \dontrun{ data(diabetes) X <- diabetes[,-1] modClust <- Mclust(X, G = 3, modelNames = "VVV") bootClust <- MclustBootstrap(modClust, nboot = 99) par(mfrow = c(1,3), mar = c(4,2,2,0.5)) plot(bootClust, what = "pro") par(mfrow = c(3,3), mar = c(4,2,2,0.5)) plot(bootClust, what = "mean") } } \keyword{cluster} mclust/man/EuroUnemployment.Rd0000644000176200001440000000143313474250025016155 0ustar liggesusers\name{EuroUnemployment} \alias{EuroUnemployment} \docType{data} \title{Unemployment data for European countries in 2014} \description{ The data set contains unemployment rates for 31 European countries for the year 2014.} \usage{data(EuroUnemployment)} \format{A data frame with the following variables: \describe{ \item{TUR}{Total unemployment rate, i.e. percentage of unemployed persons aged 15-74 in the economically active population.} \item{YUR}{Youth unemployment rate, i.e. percentage of unemployed persons aged 15-24 in the economically active population.} \item{LUR}{Long-term unemployment rate, i.e. percentage of unemployed persons who have been unemployed for 12 months or more.} } } \source{EUROSTAT (http://ec.europa.eu/eurostat/web/lfs/data/database)} \keyword{datasets} mclust/man/sim.Rd0000644000176200001440000000711113375223464013424 0ustar liggesusers\name{sim} \alias{sim} \title{ Simulate from Parameterized MVN Mixture Models } \description{ Simulate data from parameterized MVN mixture models. } \usage{ sim(modelName, parameters, n, seed = NULL, ...) } \arguments{ \item{modelName}{ A character string indicating the model. 
The help file for \code{\link{mclustModelNames}} describes the available models. } \item{parameters}{ A list with the following components: \describe{ \item{\code{pro}}{ A vector whose \emph{k}th component is the mixing proportion for the \emph{k}th component of the mixture model. If missing, equal proportions are assumed. } \item{\code{mean}}{ The mean for each component. If there is more than one component, this is a matrix whose kth column is the mean of the \emph{k}th component of the mixture model. } \item{\code{variance}}{ A list of variance parameters for the model. The components of this list depend on the model specification. See the help file for \code{\link{mclustVariance}} for details. } } } \item{n}{ An integer specifying the number of data points to be simulated. } \item{seed}{ An optional integer argument to \code{set.seed} for reproducible random class assignment. By default the current seed will be used. Reproducibility can also be achieved by calling \code{set.seed} before calling \code{sim}. } \item{\dots }{ Catches unused arguments in indirect or list calls via \code{do.call}. } } \value{ A matrix in which first column is the classification and the remaining columns are the \code{n} observations simulated from the specified MVN mixture model. \item{Attributes:}{ \code{"modelName"} A character string indicating the variance model used for the simulation. } } \details{ This function can be used with an indirect or list call using \code{do.call}, allowing the output of e.g. \code{mstep}, \code{em}, \code{me}, \code{Mclust} to be passed directly without the need to specify individual parameters as arguments. } \seealso{ \code{\link{simE}}, \dots, \code{\link{simVVV}}, \code{\link{Mclust}}, \code{\link{mstep}}, \code{\link{do.call}} } \examples{ irisBIC <- mclustBIC(iris[,-5]) irisModel <- mclustModel(iris[,-5], irisBIC) names(irisModel) irisSim <- sim(modelName = irisModel$modelName, parameters = irisModel$parameters, n = nrow(iris)) \dontrun{ do.call("sim", irisModel) # alternative call } par(pty = "s", mfrow = c(1,2)) dimnames(irisSim) <- list(NULL, c("dummy", (dimnames(iris)[[2]])[-5])) dimens <- c(1,2) lim1 <- apply(iris[,dimens],2,range) lim2 <- apply(irisSim[,dimens+1],2,range) lims <- apply(rbind(lim1,lim2),2,range) xlim <- lims[,1] ylim <- lims[,2] coordProj(iris[,-5], parameters=irisModel$parameters, classification=map(irisModel$z), dimens=dimens, xlim=xlim, ylim=ylim) coordProj(iris[,-5], parameters=irisModel$parameters, classification=map(irisModel$z), truth = irisSim[,-1], dimens=dimens, xlim=xlim, ylim=ylim) irisModel3 <- mclustModel(iris[,-5], irisBIC, G=3) irisSim3 <- sim(modelName = irisModel3$modelName, parameters = irisModel3$parameters, n = 500, seed = 1) \dontrun{ irisModel3$n <- NULL irisSim3 <- do.call("sim",c(list(n=500,seed=1),irisModel3)) # alternative call } clPairs(irisSim3[,-1], cl = irisSim3[,1]) } \keyword{cluster} % docclass is function % Converted by Sd2Rd version 1.21. mclust/man/gmmhd.Rd0000644000176200001440000001202013107075206013713 0ustar liggesusers\name{gmmhd} \alias{gmmhd} \alias{print.gmmhd} \alias{summary.gmmhd} \alias{print.summary.gmmhd} \alias{plot.gmmhd} \alias{gmmhdClusterCores} \alias{gmmhdClassify} \title{Identifying Connected Components in Gaussian Finite Mixture Models for Clustering} \description{ Starting with the density estimate obtained from a fitted Gaussian finite mixture model, cluster cores are identified from the connected components at a given density level. 
Once cluster cores are identified, the remaining observations are allocated to those cluster cores for which the probability of cluster membership is the highest. } \usage{ gmmhd(object, ngrid = min(round((log(nrow(data)))*10), nrow(data)), dr = list(d = 3, lambda = 1, cumEvalues = NULL, mindir = 2), classify = list(G = 1:5, modelNames = mclust.options("emModelNames")[-c(8, 10)]), \dots) \method{plot}{gmmhd}(x, what = c("mode", "cores", "clusters"), \dots) } \arguments{ \item{object}{An object returned by \code{\link{Mclust}}.} \item{ngrid}{An integer specifying the number of grid points used to compute the density levels.} \item{dr}{A list of parameters used in the dimension reduction step.} \item{classify}{A list of parameters used in the classification step.} \item{x}{An object of class \code{'gmmhd'} as returned by the function \code{gmmhd}.} \item{what}{A string specifying the type of plot to be produced. See Examples section.} \item{\dots}{further arguments passed to or from other methods.} } \details{ Model-based clustering associates each component of a finite mixture distribution to a group or cluster. An underlying implicit assumption is that a one-to-one correspondence exists between mixture components and clusters. However, a single Gaussian density may not be sufficient, and two or more mixture components could be needed to reasonably approximate the distribution within a homogeneous group of observations. This function implements the methodology proposed by Scrucca (2016) based on the identification of high density regions of the underlying density function. Starting with an estimated Gaussian finite mixture model, the corresponding density estimate is used to identify the cluster cores, i.e. those data points which form the core of the clusters. These cluster cores are obtained from the connected components at a given density level \eqn{c}. A mode function gives the number of connected components as the level \eqn{c} is varied. Once cluster cores are identified, the remaining observations are allocated to those cluster cores for which the probability of cluster membership is the highest. The method usually improves the identification of non-Gaussian clusters compared to a fully parametric approach. Furthermore, it enables the identification of clusters which cannot be obtained by merging mixture components, and it can be straightforwardly extended to cases of higher dimensionality. } \value{ A list of class \code{gmmhd} with the following components: \item{Mclust}{The input object of class \code{"Mclust"} representing an estimated Gaussian finite mixture model.} \item{MclustDA}{An object of class \code{"MclustDA"} containing the model used for the classification step.} \item{MclustDR}{An object of class \code{"MclustDR"} containing the dimension reduction step if performed, otherwise \code{NULL}.} \item{x}{The data used in the algorithm. This can be the input data or a projection if a preliminary dimension reduction step is performed.} \item{density}{The density estimated from the input Gaussian finite mixture model evaluated at the input data.} \item{con}{A list of connected components at each step.} \item{nc}{A vector giving the number of connected components (i.e. 
modes) at each step.} \item{pn}{Vector of values over a uniform grid of proportions of length \code{ngrid}.} \item{qn}{Vector of density quantiles corresponding to proportions \code{pn}.} \item{pc}{Vector of empirical proportions corresponding to quantiles \code{qn}.} \item{clusterCores}{Vector of numerical labels for the cluster cores; \code{NA}s indicate that an observation does not belong to any cluster core.} \item{cluster}{Vector of numerical labels giving the final clustering.} \item{numClusters}{An integer giving the number of clusters.} } \references{ Scrucca, L. (2016) Identifying connected components in Gaussian finite mixture models for clustering. \emph{Computational Statistics & Data Analysis}, 93, 5-17. } \author{ Luca Scrucca \email{luca.scrucca@unipg.it} } %\note{} \seealso{\code{\link{Mclust}}} \examples{ \dontrun{ data(faithful) mod <- Mclust(faithful) summary(mod) plot(as.densityMclust(mod), faithful, what = "density", points.pch = mclust.options("classPlotSymbols")[mod$classification], points.col = mclust.options("classPlotColors")[mod$classification]) GMMHD <- gmmhd(mod) summary(GMMHD) plot(GMMHD, what = "mode") plot(GMMHD, what = "cores") plot(GMMHD, what = "clusters") } } % Add one or more standard keywords, see file 'KEYWORDS' in the % R documentation directory. Use one of RShowDoc("KEYWORDS") \keyword{cluster} mclust/man/predict.densityMclust.Rd0000644000176200001440000000321713461341200017120 0ustar liggesusers\name{predict.densityMclust} \alias{predict.densityMclust} \title{Density estimate of multivariate observations by Gaussian finite mixture modeling} \description{Compute density estimation for multivariate observations based on Gaussian finite mixture models estimated by \code{\link{densityMclust}}.} \usage{ \method{predict}{densityMclust}(object, newdata, what = c("dens", "cdens", "z"), logarithm = FALSE, \dots) } \arguments{ \item{object}{an object of class \code{'densityMclust'} resulting from a call to \code{\link{densityMclust}}.} \item{newdata}{a vector, a data frame or matrix giving the data. If missing the density is computed for the input data obtained from the call to \code{\link{densityMclust}}.} \item{what}{a character string specifying what to retrieve: \code{"dens"} returns a vector of values for the mixture density; \code{"cdens"} returns a matrix of component densities for each mixture component (along the columns); \code{"z"} returns a matrix of conditional probabilities of each data point to belong to a mixture component.} \item{logarithm}{A logical value indicating whether or not the logarithm of the density or component densities should be returned.} \item{\dots}{further arguments passed to or from other methods.} } % \details{} \value{ Returns a vector or a matrix of densities evaluated at \code{newdata} depending on the argument \code{what} (see above). } \author{Luca Scrucca} % \note{} \seealso{\code{\link{Mclust}}.} \examples{ \dontrun{ x <- faithful$waiting dens <- densityMclust(x) x0 <- seq(50, 100, by = 10) d0 <- predict(dens, x0) plot(dens) points(x0, d0, pch = 20) } } \keyword{multivariate} mclust/man/defaultPrior.Rd0000644000176200001440000001003013465001173015257 0ustar liggesusers\name{defaultPrior} \alias{defaultPrior} \title{ Default conjugate prior for Gaussian mixtures } \description{ Default conjugate prior specification for Gaussian mixtures. } \usage{ defaultPrior(data, G, modelName, \dots) } \arguments{ \item{data}{ A numeric vector, matrix, or data frame of observations. Categorical variables are not allowed.
If a matrix or data frame, rows correspond to observations and columns correspond to variables. } \item{G}{ The number of mixture components. } \item{modelName}{ A character string indicating the model: \cr\cr \code{"E"}: equal variance (univariate) \cr \code{"V"}: variable variance (univariate)\cr \code{"EII"}: spherical, equal volume \cr \code{"VII"}: spherical, unequal volume \cr \code{"EEI"}: diagonal, equal volume and shape\cr \code{"VEI"}: diagonal, varying volume, equal shape\cr \code{"EVI"}: diagonal, equal volume, varying shape \cr \code{"VVI"}: diagonal, varying volume and shape \cr \code{"EEE"}: ellipsoidal, equal volume, shape, and orientation \cr \code{"EEV"}: ellipsoidal, equal volume and equal shape\cr \code{"VEV"}: ellipsoidal, equal shape \cr \code{"VVV"}: ellipsoidal, varying volume, shape, and orientation. \cr\cr A description of the models above is provided in the help of \code{\link{mclustModelNames}}. Note that in the multivariate case only 10 out of 14 models may be used in conjunction with a prior, i.e. those available in \emph{MCLUST} up to version 4.4. } \item{\dots}{ One or more of the following: \describe{ \item{\code{dof}}{ The degrees of freedom for the prior on the variance. The default is \code{d + 2}, where \code{d} is the dimension of the data. } \item{\code{scale}}{ The scale parameter for the prior on the variance. The default is \code{var(data)/G^(2/d)}, where \code{d} is the dimension of the data. } \item{\code{shrinkage}}{ The shrinkage parameter for the prior on the mean. The default value is 0.01. If 0 or NA, no prior is assumed for the mean. } \item{\code{mean}}{ The mean parameter for the prior. The default value is \code{colMeans(data)}. } } } } \value{ A list giving the prior degrees of freedom, scale, shrinkage, and mean. } \details{ \code{defaultPrior} is a function whose default is to output the default prior specification for EM within \emph{MCLUST}.\cr Furthermore, \code{defaultPrior} can be used as a template to specify alternative parameters for a conjugate prior. } \references{ C. Fraley and A. E. Raftery (2002). Model-based clustering, discriminant analysis, and density estimation. \emph{Journal of the American Statistical Association} 97:611-631. C. Fraley and A. E. Raftery (2005, revised 2009). Bayesian regularization for normal mixture estimation and model-based clustering. Technical Report, Department of Statistics, University of Washington. C. Fraley and A. E. Raftery (2007). Bayesian regularization for normal mixture estimation and model-based clustering. \emph{Journal of Classification} 24:155-181. } \seealso{ \code{\link{mclustBIC}}, \code{\link{me}}, \code{\link{mstep}}, \code{\link{priorControl}} } \examples{ # default prior irisBIC <- mclustBIC(iris[,-5], prior = priorControl()) summary(irisBIC, iris[,-5]) # equivalent to previous example irisBIC <- mclustBIC(iris[,-5], prior = priorControl(functionName = "defaultPrior")) summary(irisBIC, iris[,-5]) # no prior on the mean; default prior on variance irisBIC <- mclustBIC(iris[,-5], prior = priorControl(shrinkage = 0)) summary(irisBIC, iris[,-5]) # equivalent to previous example irisBIC <- mclustBIC(iris[,-5], prior = priorControl(functionName="defaultPrior", shrinkage=0)) summary(irisBIC, iris[,-5]) defaultPrior( iris[-5], G = 3, modelName = "VVV") } \keyword{cluster} % docclass is function % Converted by Sd2Rd version 1.21.