WGCNA/MD5
38e93ffda442445800e1a0640c7a1da2 *Changelog 7d75aaa6826b0c238b797fd0c3d02709 *DESCRIPTION d2b6fd915680eb5af195de145344e193 *NAMESPACE 2640fb7286cab45b6de425c01109f80e *R/AFcorMI.R 35609243f0e3ea642a162a0d737b60e7 *R/Functions-fromSimilarity.R f0a23a7cf2d4b05f06b2b1dbe29eeb21 *R/Functions-multiData.R bb52fe6efa10c874f3d2f58815004185 *R/Functions.R b042b30d8f5d4721298fb105a49bd369 *R/GOenrichmentAnalysis.R a741dd19796b763560aba5d13f00fc48 *R/TrueTrait.R 3747f2e4a71105d8e3ac4c54914b6bd4 *R/accuracyMeasures.R 6694f89a89e904eb1bc90d28cf28fc37 *R/adjacency.polyReg.R 3532a8d0c9e1ea87935747498e952c75 *R/adjacency.splineReg.R 76c1679828a24d7006946cf8a449817a *R/blockwiseData.R 0d941ba5c6ba0b31da881c80a2603b39 *R/blockwiseModulesC.R d8acf25a73395f5ac0c55052943763b3 *R/branchSplit.R 15468ef5e298f2fdb6320a53fe144051 *R/coClustering.R ceee787204a986fedae3e8b3ce86d121 *R/collapseRows.R 93b1234770b325950074537bc2aad873 *R/collapseRowsUsingKME.R 46a73fe3580e01ceab1bdcb55c316c3c *R/conformityDecomposition.R e7454fc0b69cec7a1e8b7eff45dff948 *R/consensusCalculations.R 4a71bdff2bb1de3856a796d14b8bc13b *R/consensusDissTOMandTree.R 6a3752172cc4c31d53eec0cdc1463ea7 *R/consensusRepresentatives.R 11e8d886f68ff5c20661ac7a15aee28a *R/consensusTOM.R 239273bfb01626e6a7a6ea7eb966bca7 *R/corAndPvalue.R ae38c4a3af8cf6f039c093f158c8be75 *R/corFunctions.R f5dfb236765806a3eb39bd028bff071b *R/coxRegressionResiduals.R aa9c7ff1a228742079e2cbd4a1f4c28e *R/dendrogramAdjustmentFunctions.R 28f6dd6709cd85e3359c6eb2fbaaa9e4 *R/empiricalBayesLM.R 51e3134584bb27345f534fb55be4ebab *R/exportFunctions.R c6befb2db32212367a81356658135a2b *R/heatmapWithLegend.R 2b5d78677c5e881ad05280b28b287632 *R/hierarchicalConsensusModules.R c0d7b4995fc8bbebcaa6bd93766e9c8e *R/internalConstants.R da2729744a42bda91cc02ddf3714a612 *R/kMEcomparisonScatterplot.R 
9ad0513b5a21e56e67fba7c9b9647d3d *R/labelPoints.R a29ac61ecd7adcb9e8953ae90a5d61e3 *R/labeledHeatmap.R 65bec5bef84215c7f0abfa1a41345d1e *R/matchLabels.R de5a35825d9b736b61c515ebee717d25 *R/moduleMergeUsingKME.R a64927df257a214dea9e1474dc938e1b *R/modulePreservation.R dbd00cc0838fdeb1aa724e950b205b03 *R/multiData.R d96c4db72edd76ecee257f03f3743c3e *R/mutualInfoAdjacency.R f01e9b205225218592759842ec81a61a *R/nearestCentroidPredictor.R ff247c78f289e887d88ea719e482e2ee *R/networkConcepts.R cbbf15f7e1af46c97769349028dcc197 *R/overlapTableUsingKME.R dd248a6dbff1e165b81e4a55f5ba35e9 *R/plotDendrogram.R 1f84c828572b06bc6baf3ad5df077b99 *R/populationMeansInAdmixture.R 1c35495264dc245f0c417f2797d649c1 *R/proportionsInAdmixture.R f17cfef92906bb48ef2d09a25a8b57c4 *R/quantileC.R 0f6a00ada5b99112e8dadea807aa2a00 *R/qvalue.R 69f368afdd00a44008445ba5d90cdd6b *R/returnGeneSetsAsList.R 55a085c6e69e20a8e639f91d6c817ed7 *R/sampledModules.R 9150985fe60317f9ad20f66cacaf7170 *R/smaFunctions.R e7af15046afab56942ec17ef97e4661c *R/standardScreeningBinaryTrait.R f20cae9009e6bca9e4f19328f17a229f *R/stratifiedBarplot.R 838bed5c4cbe28e4c942be6e9ca537ac *R/transposeBigData.R 8603738eadb272fa798eb71c9cb173df *R/useNThreads.R 518eaed6fa2e07262e907b29641e6467 *R/userListEnrichment.R 3aae1993670b9e78f0d03398091ffebf *R/verboseIplot.R 9fb8ae431795a46628ae79049344a17d *R/votingLinearPredictor.R 9ca7256d86ad4bb124368a831d9f66ae *R/zzz.R e23903adc990d9515ba1ebd23e3bff59 *build/partial.rdb f403f840e0fc7f8a0c352107793e0b46 *data/BloodLists.rda 51c2c6e860d1506d66e9b65c524453be *data/BrainLists.rda 7bdbaf29df66b5302497c33719d384ae *data/BrainRegionMarkers.rda ad0ed9fc9a81c1b69c0b2866967a112b *data/ImmunePathwayLists.rda fbe597b58e5ece1cfb445339988bd936 *data/PWLists.rda c6e9fa3b41671b6ad31df708e1a013ff *data/SCsLists.rda 3d2609f8ec6df9c0617383bcbc410fec *inst/CITATION 1395f9a0fa5ae4ae4d33450dd978bf20 *man/AFcorMI.Rd 6d8f05dde0ca15ee026da0f8c32b791e *man/BD.getData.Rd 90e62d285af393b1c3598dca8b5e4901 
*man/BloodLists.Rd 2692449a2600292b3b10f07bfa70010d *man/BrainLists.Rd 4da45c33a54f0997f73aa42f00a0f960 *man/BrainRegionMarkers.Rd c8562c092b61e998696d2eda7e89ae33 *man/GOenrichmentAnalysis.Rd 4de45938151062d0afaa35fbebd71074 *man/GTOMdist.Rd 71782d3e2e09a394a6a512b4069e1434 *man/ImmunePathwayLists.Rd 21b4631827ab613c08045a398a0145f7 *man/PWLists.Rd a1c383c7c833d5d3e336f02b5fead2f6 *man/SCsLists.Rd 25b979522bb3b52ee08f7c016eaf9b57 *man/TOMplot.Rd 55eb05f83c134a51da654a2274f6d0bb *man/TOMsimilarity.Rd a7a43269be5e56b3086ffd22e10f6728 *man/TOMsimilarityFromExpr.Rd 43b1061be0b841b2d60c05ba5a5a4d2a *man/TrueTrait.Rd f3daf1abc96634906122136454ee140b *man/accuracyMeasures.Rd 051b9a0b6e3d224367101ae392fdee22 *man/addErrorBars.Rd fa45e0ea9ab324fdd8aec5e8576ecdad *man/addGrid.Rd cae8eaec92ed9b1e44b74b887bb1f2d5 *man/addGuideLines.Rd 8d7d42ffe2f393ffc0639eb753269012 *man/addTraitToMEs.Rd ef869aa22f5579c0721fc58bcdce0782 *man/adjacency.Rd 00f123bfb5d44dce62796916b1935ca3 *man/adjacency.polyReg.Rd 2ce0f3f36981f2e361fee7ae835ce24e *man/adjacency.splineReg.Rd c7e701229b5800c1413850f3adf86a3c *man/alignExpr.Rd f73c92a0f512782324e4b56eed5cbe65 *man/allocateJobs.Rd b7ea7d2217d2af07fb52698036093d99 *man/allowWGCNAThreads.Rd f0af93e682c1e4f9ed671d5dc26b5c47 *man/automaticNetworkScreening.Rd 37f7ceee377056bc41dfc56146e44c22 *man/automaticNetworkScreeningGS.Rd 29dc88d5f37dff098532b14b6b43bbc9 *man/bicor.Rd eb2b6b1088d8ac9f4d3d584633730c09 *man/bicorAndPvalue.Rd d3e975d3323a46bfaba290ebf3958e52 *man/bicovWeights.Rd 70d15038c9a7316d4e05d0098fa5d763 *man/binarizeCategoricalColumns.Rd 7e1f6831ec4f0bfeceb47e61d6bb6692 *man/binarizeCategoricalVariable.Rd 65ac3edef958fd60295b57734824b37f *man/blockSize.Rd c9d24f89c854d764561b49d0e05c4679 *man/blockwiseConsensusModules.Rd c4e93e50ee0d8182c6dc3654b5421603 *man/blockwiseIndividualTOMs.Rd 3556474c6935be79672b16daffa8af4e *man/blockwiseModules.Rd 8f558a48678902df6e4ca83f822edd7b *man/blueWhiteRed.Rd c5db45284538a7acbd1ac63d47eb4f8c 
*man/branchEigengeneDissim.Rd bffaf7af5a6432a63c4d5513d71aef5d *man/branchSplit.Rd 816a353ce3eab35ade1dab4755fb16d6 *man/branchSplit.dissim.Rd 6b66247bf96b0179e9f96b2e36e08b76 *man/branchSplitFromStabilityLabels.Rd a670b576a031c5f51483aedd991e2cb3 *man/checkAdjMat.Rd 77c7d5fb58a8ec83f99ffbd83d4481dd *man/checkSets.Rd a86af95333958160c5a6f1e95a73162b *man/chooseOneHubInEachModule.Rd dfac4810548b3f2fd176be22cc08e3a3 *man/chooseTopHubInEachModule.Rd 5e958f0875508a71d357444f14b6179e *man/clusterCoef.Rd a4dee3d59f71f9271fef45dbc884e50c *man/coClustering.Rd 39432228084e0ab40e2bdfba1971ef2d *man/coClustering.permutationTest.Rd 8f27170733d4c3702ef327e8093b0a4b *man/colQuantileC.Rd 4c084cc6c7c2586c1668bd5efe05ccf2 *man/collapseRows.Rd 36c92b8ae3dddd0ce0cf42916b20b47c *man/collapseRowsUsingKME.Rd e3f8541b1a35d25d0767fca0ecd2a2b3 *man/collectGarbage.Rd 161399055c32a9010d1311b4987249b8 *man/conformityBasedNetworkConcepts.Rd 9fa74e5ac5a227d887bb262b936b2e78 *man/conformityDecomposition.Rd 27be3fca8fefe9cd6912bdf03d271791 *man/consensusCalculation.Rd 9f16701b048bd097c7b3702d4cadd15c *man/consensusDissTOMandTree.Rd bdb934e0d83055b9761c0241ca0180cd *man/consensusKME.Rd 4ceee9e0b3e10f94f427290042ec11e5 *man/consensusMEDissimilarity.Rd 171634ec94492cd2a68a284c88329b68 *man/consensusOrderMEs.Rd 95b1b6e256a3f74f55c682502d317941 *man/consensusProjectiveKMeans.Rd 48c16095ea58df4111408b497f51fe30 *man/consensusRepresentatives.Rd 9c2fe04c2d7fc5580f2240ae665c3c45 *man/consensusTOM.Rd d1e60bfaf7b14be5fb1bc4733b7c605a *man/consensusTreeInputs.Rd cfe0f218bfeb5025973dd853ae48782e *man/convertNumericColumnsToNumeric.Rd 72e05051fc949fe8de36176b53b21c01 *man/cor.Rd c07afb24ae1a9566cd610d640d221b5c *man/corAndPvalue.Rd 19cd6ac7f5c9891b6f6be8fd4034f484 *man/corPredictionSuccess.Rd ddd90d6b54aac459bd8af75db5f5e8a6 *man/corPvalueFisher.Rd 087446ede74deb76c5952eab930bcc3d *man/corPvalueStudent.Rd c52e6fc1e3b1b5bb616681de0984d7b5 *man/correlationPreservation.Rd 06d6988c8a2b30e720ed769496a5573a 
*man/coxRegressionResiduals.Rd ebd1b5154444b4b1ddb110a91453596d *man/cutreeStatic.Rd 76d1215fee27707480cb5b2bb4436754 *man/cutreeStaticColor.Rd aa9a53d1eeea331b86022a2f85238b9d *man/displayColors.Rd ec2017b186d25a4707f66d4069f3f18c *man/dynamicMergeCut.Rd 9b870611471adefdde87a8a7a6a5caec *man/empiricalBayesLM.Rd 46e29dcd87c583e3d449493600141016 *man/exportNetworkToCytoscape.Rd b2f81e61e7ab3417929b10dac119c86a *man/exportNetworkToVisANT.Rd 65e62fed0cdf47bc3d30d014841cd5a9 *man/factorizeNonNumericColumns.Rd 4f2aee2616cd6c8a62a69753510b8911 *man/fixDataStructure.Rd 3764fd1e17fc01c8d80a17b109d43360 *man/formatLabels.Rd a0878ec86150188ec97abfc2a3533b2e *man/fundamentalNetworkConcepts.Rd d5eccfa6aed03b219ad13c2e018c910e *man/goodGenes.Rd 7ada67f812f019f45d24d4a1137d5fa3 *man/goodGenesMS.Rd 67f904ccc5437f12602cd98dd0d1833a *man/goodSamples.Rd 6f8140f19f31df1dc87875b1c371a8d7 *man/goodSamplesGenes.Rd f5343336c0f7b68051210e843b369d45 *man/goodSamplesGenesMS.Rd 7e615387771ecd5750522c0067184f15 *man/goodSamplesMS.Rd c32af802f17f6a13a8d1f9022269b151 *man/greenBlackRed.Rd 557c9bef232a943268f060c32c6eb87f *man/greenWhiteRed.Rd 3dbab7906b875d026684c208e5cc0b0d *man/hierarchicalConsensusCalculation.Rd ea9b3909ee86b68b5564081ef5dbbdeb *man/hierarchicalConsensusKME.Rd d45876fcffb99e345b1353b829cb157f *man/hierarchicalConsensusMEDissimilarity.Rd b24d03073612390a5093f565a244ecd2 *man/hierarchicalConsensusModules.Rd f078c8b4cec40f3a04d3716bb2c4a499 *man/hierarchicalConsensusTOM.Rd fb7a0a67cac69b516b5a5ed51ed343b9 *man/hierarchicalMergeCloseModules.Rd 1282dbf2291264f8c714ba10c3c93b33 *man/hubGeneSignificance.Rd f1054aaa6b9e7e9ba8c4d5d12d01291a *man/imputeByModule.Rd e10c3bbf2cf8c9f5337bb10644b20365 *man/individualTOMs.Rd b2ddece70167d37c6ca795c56c6bbf51 *man/initProgInd.Rd 4fec782aa9797a7638471e0da6491fc2 *man/intramodularConnectivity.Rd 500667bd49d01613aaaf03cd9831b042 *man/isMultiData.Rd 54a0f9b8a8a814ea9da629e01a25ca11 *man/kMEcomparisonScatterplot.Rd 367c8efa7aff57c91f508f4d6993ee0c 
*man/keepCommonProbes.Rd 8e09108aa6a9aa781d9be2d546e838bd *man/labelPoints.Rd 4ed4ad8fb3eefb8fb3ebc018d50f24c1 *man/labeledBarplot.Rd cd18550352fb5a3d39bf6e63fd0fac55 *man/labeledHeatmap.Rd 157f94fcf57d814ab7891106fcc7da0e *man/labeledHeatmap.multiPage.Rd 4bbe76c64e465dd5d480f649370d8c31 *man/labels2colors.Rd 7fc163ef52818f5539648901398c4449 *man/list2multiData.Rd 5ed84fabbd891e85b12619f3530492c8 *man/lowerTri2matrix.Rd 14556752ea276b68851b24234c39bda4 *man/matchLabels.Rd caad2f4cd29ed55a6c2ff090d7090ecb *man/matrixToNetwork.Rd db4368efdc27de132013ce1c5ee8a952 *man/mergeCloseModules.Rd 50529471afd7336184fa7963507eaa5f *man/metaAnalysis.Rd ffd97d40240349925bc44f5b1906d3e5 *man/metaZfunction.Rd 9282f0ac4a84e49c5e30f2cea1648eec *man/minWhichMin.Rd 9073bb0dae1e815032222b1b00619cbb *man/modifiedBisquareWeights.Rd 7edcdce8f159ca0dfa75c6381c252900 *man/moduleColor.getMEprefix.Rd 5fe15072f66d4ca7175539b748f46396 *man/moduleEigengenes.Rd b991b62e1ec4036a42ef5aa27713c17f *man/moduleMergeUsingKME.Rd 271eded401e01f0e1e330f34d33afe22 *man/moduleNumber.Rd b580649a78012cf2e0abb0945f29b30d *man/modulePreservation.Rd c6f0d7ed6ba44a899c2caba17db3d06f *man/mtd.apply.Rd ae9dfd2181ad34cedaf894a1778b30ff *man/mtd.mapply.Rd 84f704fc9b1530f928f8dd2860df7be1 *man/mtd.rbindSelf.Rd a7f9d8df53a7f26a59b90e79abd259a7 *man/mtd.setAttr.Rd deebafdb1fe89306161853efc52e5e1d *man/mtd.setColnames.Rd d47de5e40e5018d567ad4b086b0b4229 *man/mtd.simplify.Rd 6e335fdb204eab8ea890c2fa4406b4a7 *man/mtd.subset.Rd 2728b2005265bff56411d862d9960e81 *man/multiData.Rd 76676f3fdba6e9ca1d4ddccc3cc70fe9 *man/multiData.eigengeneSignificance.Rd 3d65c3f353aef786735f1902e643faff *man/multiGSub.Rd 620889218d4d5ed5e3aa6bb214005152 *man/multiSetMEs.Rd 9006d4cf6446d835d0c8c3bd9d3c72e7 *man/multiUnion.Rd 67f393ca0f70989e332b083e14673a64 *man/mutualInfoAdjacency.Rd 423948deb1875655f76b5aab3bd18ff3 *man/nPresent.Rd 5a9944c4bcd3dd9111df6d32dca279c1 *man/nSets.Rd f254d52b87268800ff55ac2372a1a308 *man/nearestCentroidPredictor.Rd 
f3528eca336b9c278d84c2226ac5e946 *man/nearestNeighborConnectivity.Rd 9de47712cf569e3de94cfff5f17a72ea *man/nearestNeighborConnectivityMS.Rd 3982e0f77862c8c9993a49f878258fda *man/networkConcepts.Rd 9b0d99621400d82fee9c9a34dde5c5eb *man/networkScreening.Rd 49cd57856511c6a7c0acd9014c10150e *man/networkScreeningGS.Rd e59f9efc10e15a4eda2b89794842a426 *man/newBlockInformation.Rd 1a885dd7884e647bbe27bef91832ad10 *man/newBlockwiseData.Rd c89b2cdeda46be905bc189e609eb52b9 *man/newConsensusOptions.Rd b5f90a4ff1b26edcb08d87f010245b01 *man/newConsensusTree.Rd 30457eb40c5df6c1a920b5ef441bae1f *man/newCorrelationOptions.Rd bb9d834a0eaaf813761a8b52eb06eac5 *man/newNetworkOptions.Rd 00521be977eaaa72457d4ee52ab407cb *man/normalizeLabels.Rd 3487d374af60f914a7674bdca219b1d6 *man/numbers2colors.Rd 810c3c5526efbb4eed7ed7d06295c906 *man/orderBranchesUsingHubGenes.Rd 6dec40376ce19b7cc0d0d8d2ac3d25a6 *man/orderMEs.Rd 117bfb67b2ad096a621b54cd4fa3cd8d *man/orderMEsByHierarchicalConsensus.Rd 83c877691112a6c2be887c9406eff374 *man/overlapTable.Rd f74547f3ebfc1ed7142e0d052b4cb2a7 *man/overlapTableUsingKME.Rd a00873ae7623437f433140b20db1e64d *man/pickHardThreshold.Rd 4238d25f597551527f75cf2974c83c43 *man/pickSoftThreshold.Rd 3293cecd4d3f5f802361270b9dc23be9 *man/plotClusterTreeSamples.Rd 7c426af238b2b3464cd3d410d2da94b1 *man/plotColorUnderTree.Rd b541802cfa67c468d0afb34862a8f881 *man/plotCor.Rd 18f1effd86c7bbc2f1db2e80dd8949eb *man/plotDendroAndColors.Rd 8ee0e2fdd85ddb8c17db4aa1730bd38a *man/plotEigengeneNetworks.Rd f66707ae89f83b6b3813ade2a347c305 *man/plotMEpairs.Rd 763c6cd109acff2070e1b3029961f87c *man/plotMat.Rd f2fa34c2378f9fc80f160eb71192ca58 *man/plotModuleSignificance.Rd b3b63887750827169ae97ae4d11e6063 *man/plotMultiHist.Rd 5044c7fe416ebf8a24e4e71bd3f66094 *man/plotNetworkHeatmap.Rd eceea04cd1a4c5da64cf47ce67682eb5 *man/populationMeansInAdmixture.Rd 84def9c15318640a89e377a9cbf8bc1e *man/pquantile.Rd 98a05e761090162822158e0c290ffd30 *man/prepComma.Rd cd9706045136f37ae6b872545f694455 
*man/prependZeros.Rd 638576053aa6de4e1d9bd76f17cd417c *man/preservationNetworkConnectivity.Rd a70c85a8caa0ced94361fe891f2edbd9 *man/projectiveKMeans.Rd 6cd6c33ac81666fee9ab2934c15f14d1 *man/propVarExplained.Rd 9a5477827bbbd20d9cfbddc7f153bb94 *man/proportionsInAdmixture.Rd ae7dad9cfe8d1629e82f9b0c1edcf723 *man/pruneAndMergeConsensusModules.Rd d9ea8f492deb814d528c569639f2f98f *man/pruneConsensusModules.Rd 948c0413d52381f83d79c8846b946966 *man/qvalue.Rd 33f922166bfa3232d2cfd31dd88e4098 *man/qvalue.restricted.Rd bfb6acf64524a1f6952301cdf9d7e692 *man/randIndex.Rd e8c0717cae5f64d23fcc6d7ab07908d7 *man/rankPvalue.Rd ddc32efe3dcf7578e8d17051cdd18ea7 *man/recutBlockwiseTrees.Rd 7b24ba1f32d3d50e7c77ae2f592381d3 *man/recutConsensusTrees.Rd adec15fbabdc64fe49ca0d7c4629f723 *man/redWhiteGreen.Rd 0a9c3ee7c033293b7796df9b7c1af2d0 *man/relativeCorPredictionSuccess.Rd 09f53652e9013b5e922ae51635c0bf45 *man/removeGreyME.Rd 89bfa864baec9426f6b31c481245e27a *man/removePrincipalComponents.Rd b758a9ed501844eaf72fba9b0b095de8 *man/replaceMissing.Rd 0448a3f0db9ec9aa0f9df0d195aee809 *man/returnGeneSetsAsList.Rd 67326216442516e2de8187ef222005d3 *man/rgcolors.func.Rd eb2114305fe9dc74f572128b86194b02 *man/sampledBlockwiseModules.Rd 52925f8edf7faa1101480222f063c8ea *man/sampledHierarchicalConsensusModules.Rd 6a79360f6ec29c84aeb933bc46b4ff91 *man/scaleFreeFitIndex.Rd ea6d8d82152fae0924bc92d24c8ae02f *man/scaleFreePlot.Rd 499b4ea1d3aaf9831d6970e138af69dc *man/selectFewestConsensusMissing.Rd 3eb76ea602de17259220fbfe2f99d748 *man/setCorrelationPreservation.Rd 6e5711f9ad7cc86add003a1e29e8db9f *man/shortenStrings.Rd 9cce33e7a075bc301deb614f4f23b8ed *man/sigmoidAdjacencyFunction.Rd f449b7526814b57f49f5f772f2a6cc7c *man/signedKME.Rd 5a66f319627490da0d4d48171b01e4f8 *man/signifNumeric.Rd dede43eacc71262d01c19b6f9db9dc51 *man/signumAdjacencyFunction.Rd af422cc8798f418caca3be40a52d31a6 *man/simpleConsensusCalculation.Rd c13b97b9ac1177650cbeb1c1b36b7f3a *man/simpleHierarchicalConsensusCalculation.Rd 
0e39f7efd042fb089459a9cd424c7194 *man/simulateDatExpr.Rd a46f3274eb7bb0626ac9fe27beb808a9 *man/simulateDatExpr5Modules.Rd 61a727f8871174ca50d62e4263ad97ce *man/simulateEigengeneNetwork.Rd 91f5470b5fcc0427c6713267ecf2fd4a *man/simulateModule.Rd 97a36e1f39fa8a5c08f9e08307490643 *man/simulateMultiExpr.Rd 6496e932e9256cac6d44aa19e80ed191 *man/simulateSmallLayer.Rd 232f6821bf891056fa52c4d9e602ce11 *man/sizeGrWindow.Rd bb9444ca32db2c3d66bd070907621c7c *man/sizeRestrictedClusterMerge.Rd fd3d4ac3bbdab1aa528d20a926e36d95 *man/softConnectivity.Rd fe4f33b0118a94102b47510875ed615a *man/spaste.Rd 7ecad95060f87ef52377475bc0b16424 *man/standardColors.Rd d200230688dd8d8af081fab208e967a2 *man/standardScreeningBinaryTrait.Rd 08f1a4fccc3aefb0682181775f6f2718 *man/standardScreeningCensoredTime.Rd 734427fdd35f94ab3977ff5654d8d054 *man/standardScreeningNumericTrait.Rd ee8e76f68bf0082b629605fc36df945b *man/stdErr.Rd 9725ef92a629b364c5fd35f910a1d3cb *man/stratifiedBarplot.Rd ec5fb6e8d42733952b3d0cc28ea4203d *man/subsetTOM.Rd 39b2d7cd00137268542b3c65ac5917f8 *man/swapTwoBranches.Rd 28fedfc9d88f99715180802aad1bc919 *man/transposeBigData.Rd 041fbe19d3d0d0a3f1f23deb7af27e8f *man/unsignedAdjacency.Rd fb29898d75e15f33c90a01c3fbf12880 *man/userListEnrichment.Rd 17da3078480f388b98a1fcde5baeb206 *man/vectorTOM.Rd 4090284c54c0ffaf7a93e5824491b021 *man/vectorizeMatrix.Rd 36b156febaf10e4caac7ff28c2255b7d *man/verboseBarplot.Rd 72069790ca6e8b3072971fe266b15459 *man/verboseBoxplot.Rd b9a05f2797a5a2004afd7be8197a54cc *man/verboseIplot.Rd 4059850cbd7fbf32016980ce22c4adaf *man/verboseScatterplot.Rd 5b60605817816b91df09d8a00118afff *man/votingLinearPredictor.Rd 79b4e6437c1c28ffca58a9127550d161 *src/Makevars 22d54f2b9dc56c83bb08fc083227e3fd *src/Makevars.win 85d55be1be399ea0dd0bb31ea0953851 *src/array.h c8de6e0086d891a82305d064cb7214f0 *src/arrayGeneric.h 9325eb75f2dbcd496f707012da9a7451 *src/compiling.h 204f39820e48253e79bd0b33052cc947 *src/conditionalThreading.h ed5c4ec8102de78d3404695fe2b51143 
*src/corFunctions-typeDefs.h 1d4220b7b95451da9be03ab055aec5eb *src/corFunctions-utils.c db35a97d45c6268d49c6a3a7b8e396ac *src/corFunctions-utils.h 907bdf307a0951f8177d1564399fec2b *src/corFunctions.c 9c6f16982bc36cc5f787e3401af74aca *src/corFunctions.h 5ce06b070466f09b11f2745166ab226a *src/exceptions.h 8685af37af7b142facb727e97d74b3ec *src/myMatrixMultiplication.c 364e2ccaa651126865bafa7177613cf6 *src/myMatrixMultiplication.h 88a529f1a6ec577d8a9f5525f4eba15c *src/networkFunctions.c 785bb06185332922ef0d0c3ed1b919a1 *src/networkFunctions.h 2c64d1f22ee1dcac8ed74bb29566d52c *src/parallelQuantile.cc 73219e59ee796dc44d848434cfb818c4 *src/parallelQuantile.h 7d69c97a3259a124c1b80b1b452dba29 *src/parallelQuantile_stdC.h c6bbf2c6f3fb39cf93e2091713c573f0 *src/pivot.c 203d262a31d418e6a688df4eb1b4abc1 *src/pivot.h c6601848ab159000caeda22b81aa92cd *src/pivot_declarations.h

WGCNA/Changelog

Note: as of 2009/03/10, changes that lead to different calculational results are marked as DIFF.

2024/09/18: 1.73
. Version bump for release

2024/09/03: 1.72-90
. New function modifiedBisquareWeights for calculating bisquare weights for columns in a matrix
. Typos in documentation (help files) of chooseTopHubInEachModule and chooseOneHubInEachModule fixed. Thanks to Dominic Owens for reporting these.
. Minor changes in compiled C code to comply with R's updated API

2023/12/06: 1.72-5
. Minor fixes in code and help files

2023/12/06: 1.72-4
. Minor fixes in code and help files

2023/11/27: 1.72-3
. Several minor fixes in help files.
. Minor changes in internal code for labeledHeatmap
. Fixes in internal code to remove compile warnings and use of deprecated functions

2023/08/14: 1.72-2
. Bugfix in function plotEigengeneNetworks reported by Ramón Fallon
. Minor changes in internal code for labeledHeatmap

2023/01/18: 1.72-1
. Minor changes to comply with CRAN requirements

2023/01/07: 1.72
. Version bump for release

2022/12/29: 1.71-7
. Small changes in the default position of the legend label in labeledHeatmap
. C code reorganized to conform to the requirement of declaring all function prototypes before definition
. Fixed version numbers in changelog.

2022/11/01: 1.71-6
. Bugfix in modulePreservation fixes checking for testNetworks out of range. The bug made no difference for valid inputs.

2022/10/24: 1.71-5
. Function overlapTable now also returns a matrix of expected overlap counts and a matrix of ratios of observed and expected overlap counts.
. Bugfix in individualTOMs that fixes a crash in hierarchicalConsensusModules when networkOptions are given as a single object of class networkOptions.
. Internal functions for creating heatmaps with legend are improved.

2022/09/14: 1.71-4
. Function overlapTable now optionally returns logarithms of the overlap p-values.

2022/07/03: 1.71-3
. Function prependZeros now gets the number of characters correctly even for numbers like 100000 which were previously formatted to 1e05.

2022/07/02: 1.71-2
. Functions matchLabels and overlapTable now work also in the case where one set of labels consists entirely of ignored labels.
. Function imputeByModule only imputes data in those modules that actually contain missing data.

2022/05/22: 1.71-1
. empiricalBayesLM is a bit more informative about failed initial regressions.

2022/04/22: 1.71
. Version bump for release

2022/04/08: 1.70-80
. Bugfix in blueWhiteRed and greenWhiteRed that makes the palettes symmetric for small odd values of n.
. Function sampleModules now runs nRuns sampled calculations (and optionally also the unsampled calculation) rather than always running precisely nRuns calculations.
. Function blueWhiteRed now accepts arguments giving the blue end, red end and middle colors.
. Fixed bug in positioning the color legend label for labeledHeatmap and related functions.
. verboseIplot now accepts argument showMSE that can suppress printing of MSE in the title of the plot.
. verboseScatterplot now accepts argument showPValue that can suppress printing of the p-value in the title of the plot.
. addGrid now accepts arguments linesPerTick.horiz and linesPerTick.vert, allowing the user to set the number of lines per tick separately for the vertical and horizontal lines.
. empiricalBayesLM now accepts argument scaleMeansOfSamples that works in conjunction with scaleMeansToSamples to fine-tune how the adjustment is applied.

2021/03/01: 1.70-4
. plotColorUnderTree and plotDendroAndColors now work also when the 'colors' argument has length zero, producing an empty color plot.
. Improvements in internal code plotting legends for labeledHeatmap.

2021/02/17: 1.70-3
. Bugfixes of bugs introduced in 1.70-2

2021/02/13: 1.70-2
. Fixed problem in moduleEigengenes with argument 'expr' not having 'rownames'.

2021/02/13: 1.70-1
. Fixed several old web links in help files

2021/02/10: 1.69-86, 1.70
. Internal code for generating axis ticks now works when the limits have zero range.
. Help files for blockwiseIndividualTOMs and blockwiseConsensusModules updated.

2020/12/18: 1.69-85
. Internal code for plotting legends for heatmaps expanded to be able to plot several color columns per legend.

2020/11/03: 1.69-84
. Internal code for plotting the color scale/axis for heatmaps can now add an axis label.
. Corrected help files.

2020/07/12: 1.69-83
. Code for labeledHeatmap.multiPage was simplified by taking advantage of arguments showRows and showCols to labeledHeatmap. This also fixes incorrect font size and colors for the row and column labels.

2020/05/08: 1.69-82
. blockwiseConsensusModules and friends now use make.unique before assigning rownames, to guard against duplicate rownames errors.

2020/04/30: 1.69-81
. Bugfix in internal code fixes failure of labeledHeatmap with only one column or row.

2020/03/03: 1.69-80
. Minor enhancements in internal code (function .plotOrderedColorSubplot).

2020/02/28: 1.68-86, 1.69
. Argument nameOut to userListEnrichment now accepts value NULL to suppress generating a file with the results.
. lmRob has been removed as an automatically supported optional initial fit function in empiricalBayesLM (due to the package 'robust' having been orphaned and archived on CRAN).
. labeledHeatmap and internal function heatmapWithLegend now take optional argument colorMat allowing the user to specify the cell colors explicitly.
. Internal function heatmapWithLegend should be able to handle 'transparent' as the color for missing data.
. Bugfix in internal code used in plotColorUnderTree and plotOrderedColors.

2020/02/08: 1.68-84
. Further bugfixes in plotOrderedColors.

2020/02/06: 1.68-83
. plotOrderedColors, plotColorUnderTree and plotDendroAndColors gain argument separatorLine.col to set (or omit) separator lines between color rows.
. plotOrderedColors also gains argument align to provide some degree of control over alignment of color rectangles.

2020/02/03: 1.68-82
. Bugfixes in prependZeros and internal code (.boxDimensionsForHeatmapWithLegend).
. Internal change: plotOrderedColors is now split into a user-facing wrapper and a workhorse internal function that can be called from other functions as well.
. Internal change: labeledHeatmap split off into its own file.
. prependZeros now handles fractional numbers better.
. goodSamplesMS and goodSamplesGenesMS now copy the 'names' attribute of the input 'multiExpr' to the output.
. Functions hierarchicalConsensusModules and pruneAndMergeConsensusModules now calculate eigengenes with more regard to resource utilization, skipping some unneeded calculations.
. Bugfix in labeledHeatmap and related functions; showing two separators at the same position now works.
. metaAnalysis now works with a single data set.
. Example for chooseTopHubInEachModule has been simplified.
. DIFF: Argument scaleMeanToSamples for function empiricalBayesLM now defaults to fitToSamples.
This will produce adjusted data with different means than before if a non-trivial fitToSamples is used. (The variation of the adjusted data is unchanged, i.e., correlations of the adjusted data will be the same as before.) To reproduce old results, use scaleMeanToSamples = NULL when calling empiricalBayesLM.

2019/06/18: 1.68-80
. binarizeCategoricalColumns.forPlots now accepts argument checkNames.

2019/06/02: 1.68-2
. Internal code checking and scaling multiData weights is now more general.

2019/05/30: 1.68-1
. Function matchLabels is now nearly two orders of magnitude faster.
. Bugfix in sizeRestrictedClusterMerge fixes an error that occurred when some clusters were of size 1.

2019/05/22: 1.68
. Bugfix in sizeRestrictedClusterMerge, which also benefits projectiveKMeans and blockwiseModules, fixes an occasional crash. Bug reports and assistance of several users, especially Luis Revilla, are gratefully acknowledged.
. Bugfixes in blockwiseModules, blockwiseConsensusModules and individualTOMs fix non-propagation of randomSeed to projectiveKMeans. The default randomSeed for the 3 functions has changed to match that of projectiveKMeans and consensusProjectiveKMeans to make previous results reproducible.
. plotOrderedColors now returns some of the plotting information, useful for adding to the plot.
. modulePreservation gains arguments multiWeights, representing optional observation weights of expression data, and goldName, which can be used to change the label used for the all-network sample.
. Added arguments horizontalSeparator.interval and verticalSeparator.interval to labeledHeatmap.
. Change in sampledHierarchicalConsensusModules: saveIndividualTOMs and saveConsensusTOM arguments for the underlying hierarchicalConsensusModules can be set as arguments rather than being hardcoded.

2019/05/05: 1.67-90
. Bugfix in sizeRestrictedClusterMerge and projectiveKMeans that calls it fixes an occasional crash (reported on Stackexchange by user holly).
. Bugfix in modulePreservation: empty rownames of certain matrices do not cause a crash anymore.
. New (experimental) versions of Topological Overlap (TOM) are available.
. Bugfix in exportNetworkToVisANT, thanks to Guangjian Du.
. consensusProjectiveKMeans is a bit more efficient, especially when the number of sets is 1.
. goodGenes should be faster when weights are not supplied.

2019/04/11: 1.67
. moduleEigengenes and derived functions (multiSetMEs) now copy row names of the input data as row names of the output matrices.
. blockwiseModules, blockwiseConsensusModules, their 'recut' versions and blockwiseHierarchicalModules now copy the column names of input data into names of output module labels and colors. Row names of input data are copied into row names of the corresponding module eigengene matrices.
. Startup message regarding setting up multi-threading is now disabled.

2019/04/09: 1.66-92
. Bugfix in labeledHeatmap makes vertical separator lines better aligned with the standard position of labels.
. exportNetworkToCytoscape does not check column names in nodeAttr for being valid language names.

2019/02/25: 1.66-90
. Bugfix in consensusTOM: saving calibrated TOMs should work now.
. Function formatLabels now accepts optional argument font.
. Function hierarchicalConsensusKME now accepts argument getMetaColsFor1Set that controls whether meta-statistics are returned when the consensus only has 1 input set.
. One more bugfix in internal code (.interleave).
. Bugfix in internal code (.interleave) fixes crashes in hierarchicalConsensusKME.
. Bugfix in multiGrep fixes sorting order when returning values rather than indices.
. Functions consensusKME and hierarchicalConsensusKME now don't throw an error on duplicate column names in input data.
. Changed default for argument saveConsensusData from TRUE to NULL (automatically determined from input data) in function consensusCalculation.
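As background to the consensus-related entries in this changelog: a consensus at a given quantile is an element-wise quantile across the (calibrated) input sets, and the "origin count" of a set (see the 1.65 entry) counts how many of its values fall at or below the consensus. A toy sketch in plain Python (not WGCNA's R/C code; the numbers below are made up):

```python
# Toy data: two "calibrated" input sets of network values (made-up numbers).
set1 = [1.0, 5.0, 3.0]
set2 = [2.0, 4.0, 6.0]

# Consensus at quantile 0 is the element-wise minimum across the sets.
consensus = [min(a, b) for a, b in zip(set1, set2)]

# "Origin count" per set: how many of its values are <= the consensus.
origin = [sum(v <= c for v, c in zip(s, consensus)) for s in (set1, set2)]
print(consensus, origin)  # -> [1.0, 4.0, 3.0] [2, 1]
```

Other quantiles (e.g. the mean consensus) replace the element-wise minimum with the corresponding element-wise summary.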
. Signed TOM described by Nowick et al (PNAS, 2009) is implemented and can be accessed using TOMType = "signed Nowick" for all TOM calculation and module identification functions that take argument TOMType. Argument suppressNegativeTOM can be used to automatically set all negative TOM values to zero.
. New argument suppressNegativeResults for newConsensusOptions.
. New function sizeRestrictedClusterMerge for merging clusters such that the resulting clusters do not exceed a specified size.
. hierarchicalConsensusKME is now more robust and works with a single module as well.

2018/10/22: 1.66
. Function consensusDissTOMandTree is now a bit faster because of more efficient garbage collection.
. Function verboseBarplot can now optionally add text labels to the (also optional) scatterplot.
. In internal functions plotting heatmap legends, widths are now specified in inches rather than in user coordinates.
. Hierarchical consensus module merging is now more robust to unusual cases with 0 or only 1 module.
. Bugfix in hierarchicalConsensusModules: supplying a single networkOptions does not produce an error.
. pruneAndMergeConsensusModules now checks for presence of at least one module; if no modules are present, it returns the input labels rather than throwing an error.
. DIFF: Bugfix in hierarchicalConsensusModules changes the order of imputation and removal of genes and samples with too many missing values. This may lead to slightly different module eigengenes.

2018/10/02: 1.65
. Bugfix in modulePreservation fixes a duplicate rownames error.
. Function signedKME now checks that colnames of the input data are unique and makes them unique if not.
. Consensus calculations now return the origin count for all consensus quantiles and the mean consensus. The origin count for each set is the number of (calibrated) values from the set that are less than or equal to the consensus.
. New function consensusTreeInputs for getting inputs of a consensus tree.

2018/09/24: 1.64-81
. Fixed crash in softConnectivity when weights are used.
. Fixed bug in labeledHeatmap that transposed a 1-row matrix into a 1-column one.
. Fixed minor typos in errors emitted by labeledHeatmap.

2018/09/14: 1.64-80
. empiricalBayesLM now accepts optional argument fitToSamples that can restrict the fitting process to a subset of samples. The argument order was also rearranged to make it more logical.

2018/09/09: 1.64-1
. The binarizeCategoricalColumns.for... functions have re-ordered and in some cases added arguments to make them more useful.

2018/09/05: 1.64
. Web site link updated.
. Documentation for cor has been improved by making more precise what nThreads actually affects.
. Documentation for pickSoftThreshold.fromSimilarity has been improved thanks to a suggestion from Paolo Inglese.
. Function signedKME has been streamlined, and a bug in which gene names in the output were passed through check.names when coercing the input to a data frame is now fixed.
. New functions binarizeCategoricalVariable, binarizeCategoricalColumns, and related wrappers for binarizing categorical covariates into sets of binary indicators.
. labeledHeatmap now accepts arguments showRows and showCols that allow one to show a subset of the heatmap without having to explicitly subset all row- and column-specific arguments.
. branchSplitFromStabilityLabels.individualFraction can now work with missing entries in cluster labels.
. New function signifNumeric for rounding numeric columns of a data frame.
. Bugfix in corFast: argument 'use' is now interpreted properly (thanks to Thomas Mohr for the report).
. Argument networkType has been removed from the function pickSoftThreshold.fromSimilarity, where it was not used; the help file has been adjusted accordingly (thanks to Max Moldovan for reporting the issue).
. mtd.mapply now prints a more informative error message.
. Bugfix in GOenrichmentAnalysis, which now works again.

2018/03/21: 1.63-2
. TOM calculation now gives 0 for completely unconnected nodes, instead of returning NaN.
2018/02/26: 1.63
. Bugfix in blockwiseModules fixes a crash caused by supplying weights to bicor.
. DIFF: correlation options such as maxPOutliers are now by default used throughout the function blockwiseModules. The new argument useCorOptionsThroughout can be used to switch to the old behaviour, where the arguments were used only for network construction.
2018/02/10: 1.62
. New functions sampledBlockwiseModules and sampledHierarchicalConsensusModules carry out network analysis repeatedly on resampled data.
. TOM calculations can optionally use internal matrix algebra rather than R- (or system-)provided BLAS, controlled by argument useInternalMatrixAlgebra.
. Pearson correlation (function cor) now accepts individual sample weights for arguments x and y. When both weights are supplied (and for correlation of columns in a single matrix), the weights are a product of the weights for the two vectors being correlated. Denominators are calculated separately using the separate weights, which leads to slightly different results than standard weighted correlation.
. Most network construction functions now also accept optional weights.
. hierarchicalConsensusModules can now optionally perform gene/module pruning and merging iteratively.
. New functions pruneConsensusModules and pruneAndMergeConsensusModules implement the pruning and iterative pruning/merging of hierarchical consensus modules.
. Network construction functions now accept argument suppressTOMForZeroAdjacencies, whose effect is to set TOM to zero for all node pairs with zero adjacency.
. Bugfix in individualTOMs: the function now works with the default useDiskCache.
. Module eigengenes returned by blockwiseConsensusModules now carry names copied from the names of multiExpr.
. New function imputeByModule.
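The weighted-correlation entry above describes a specific weighting scheme. The base-R sketch below is one reading of that description (not WGCNA's actual implementation): the covariance term uses the product of the two weight vectors, while each standard-deviation term uses only that vector's own weights.

```r
# Hedged sketch of the described weighting scheme. With unit weights it
# reduces to plain Pearson correlation; with differing weight vectors it
# deviates slightly from textbook weighted correlation, as the entry notes.
weightedCorSketch <- function(x, y, wx, wy) {
  w <- wx * wy                                   # product weights for the covariance
  mxw <- sum(w * x) / sum(w)
  myw <- sum(w * y) / sum(w)
  num <- sum(w * (x - mxw) * (y - myw))
  mx <- sum(wx * x) / sum(wx)                    # each denominator uses its own weights
  my <- sum(wy * y) / sum(wy)
  den <- sqrt(sum(wx * (x - mx)^2)) * sqrt(sum(wy * (y - my)^2))
  num / den
}

set.seed(2)
x <- rnorm(50); y <- x + rnorm(50)
# With unit weights this reduces to plain Pearson correlation:
r1 <- weightedCorSketch(x, y, rep(1, 50), rep(1, 50))
```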
. Bugfix in verboseBoxplot and verboseBarplot: the functions now honor the setting of addScatterplot and correctly interpret point color and other arguments when they are vectors.
. Enhancements and bugfixes in TOMplot, which should now work with all data sizes and dendrogram complexities.
. Function formatLabels now avoids over-splitting labels that already contain a newline character.
. Function labeledHeatmap can plot row labels on the right, specified via argument yLabPosition.
. Bugfix in internal function .networkCalculation.
2017/08/04: 1.61
. Bugfix in consensusCalculation: the function now works with mean consensus.
. New arguments pch and plotPriority for verboseScatterplot.
. Function pmin has been removed since it incorrectly duplicated base::pmin.
. Function GOenrichmentAnalysis is deprecated. Please use function enrichmentAnalysis from the R package anRichment, available from https://labs.genetics.ucla.edu/horvath/htdocs/CoexpressionNetwork/GeneAnnotation/
2017/07/07: 1.60
. empiricalBayesLM expanded with additional arguments allowing specification of the initial fit function.
. DIFF: New branch split function that works with stability labels, branchSplitFromStabilityLabels.individualFraction, which tweaks the branch dissimilarity measure to work better when one branch is large and one small. To reproduce old results, use argument stabilityCriterion = "Common fraction" in functions hierarchicalConsensusModules and blockwiseConsensusModules.
. verboseBoxplot and verboseBarplot can now optionally overlay scatterplots of the underlying data (thanks to Zhijin (Jean) Wu for the suggestion and code).
. Bugfix in enrichmentAnalysis: the function now works when the number of reported terms is zero. (Thanks to Zhijie Cao for the bug report and fix.)
. New functionality that allows a hierarchical calculation of consensus modules, with multiple new functions.
. New arguments colorHeightBase and colorHeightMax control the layout in function plotDendroAndColors.
. Arguments controlling the legend size in heatmaps with legends have been tweaked to make the legend width independent of the width of the plotting region.
. The horizontal adjustment of row labels in labeledHeatmap can now be set using the new argument x.adj.lab.y.
. Bugfix in mtd.subset: the function now works properly when invert is TRUE.
. formatLabels can now format strings to a maximum width in user coordinates and shorten the results to a given number of lines.
. The limitation to block sizes less than the integer addressing limit (sqrt(2^31)) has been removed.
. Multiple mtd.... functions now return NULL when the input multiData argument has length zero.
. Function adjacency now accepts corOptions in both character and list formats.
. New function minWhichMin that calculates the row- or column-wise minimum and the index of the minimum.
. New functions pmin.fromList, pmean.fromList, pquantile.fromList for parallel minimum, mean and quantile whose input is a list of identically sized arrays; functions pmin, pmean and pquantile now use compiled code and should be substantially faster and more memory efficient.
. Bugfix in function blockwiseModules: corrected conditional module removal if too few genes remain.
. pickSoftThreshold is now faster thanks to Alexey Sergushichev; the new argument gcInterval allows the user to fine-tune the frequency of garbage collection to suit the size of the data.
. verboseBarplot now takes argument ylim, which defaults to incorporating all bar heights plus error bars (if requested).
. Cleanup in function labeledHeatmap: incorrect text and color label offsets and widths fixed. The corresponding arguments xColorWidth and yColorWidth are now measured in user units rather than fractions of overall width and height.
. New functions bicovWeightFactors and bicovWeightsFromFactors.
2016/05/30: 1.52
. New function plotMultiHist for plotting multiple histograms in one plot.
. New functions multiGrep, multiGrepl, multiSub, multiGSub.
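The pmin.fromList/pmean.fromList/pquantile.fromList entry above can be illustrated with minimal base-R sketches: element-wise minimum, mean, and quantile over a list of identically sized arrays. WGCNA's versions use compiled code; these one-liners only show what is being computed.

```r
# Didactic sketches of parallel minimum / mean / quantile over a list of
# identically sized arrays (not WGCNA's compiled implementations).
pminFromListSketch  <- function(lst) Reduce(pmin, lst)
pmeanFromListSketch <- function(lst) Reduce(`+`, lst) / length(lst)
pquantileFromListSketch <- function(lst, prob) {
  out <- lst[[1]]
  # stack the arrays and take the element-wise quantile across the list
  out[] <- apply(simplify2array(lst), seq_along(dim(lst[[1]])), quantile, probs = prob)
  out
}

lst <- list(matrix(1:4, 2, 2), matrix(4:1, 2, 2))
pm <- pminFromListSketch(lst)
```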
. New argument invert in mtd.subset allows consistent excluding of rows and columns from multiData structures.
. Bugfix in modulePreservation: accuracy statistics now work also with a single module. Thanks to Victor Hanson-Smith for pointing this out.
. mtd.rbindSelf now warns when colnames of the individual data sets do not agree.
2016/03/08: 1.51
. DIFF: Defaults for the number of pre-clustering centers in blockwise[Consensus]Modules, projectiveKMeans and consensusProjectiveKMeans have changed to prevent the pre-clustering from taking too long. To reproduce old results, use (nPreclusteringCenters or nCenters) = min(nGenes/20, maxBlockSize^2/nGenes), where nGenes is the number of genes (variables) in the input data.
2016/02/23: 1.50
. Bugfix in function cor: the function now correctly handles cases where columns have missing data placed such that the remaining entries (after removing missing values) have zero variance. Thanks to Pasha Mazin for pointing this out.
. New function consensusRepresentatives for selecting consensus representatives from multi-data.
. Cleanup in plotOrderedColors should result in better placement of 'rowText' under blocks of colors.
. Minor change in spacing for row text in plotOrderedColors that also affects plotColorUnderTree and plotDendroAndColors.
. TOM calculation functions (TOMsimilarity, blockwiseIndividualTOMs, blockwiseModules) now accept argument replaceMissingAdjacencies that allows the underlying code to replace missing adjacencies with a value appropriate for a zero-strength link.
. moduleEigengenes is now more resistant to problems with missing data and zero-variance variables.
2015/12/27: 1.49
. DIFF: goodGenes, goodGenesMS, goodSamplesGenes, goodSamplesGenesMS get an argument `tol' to compare the variance against, rather than against zero. This prevents erroneous retaining of zero-variance genes because of numeric under/overflow errors in the fast calculation of variance.
This may possibly result in removal of genes that were retained in WGCNA versions 1.47-2 and newer, but should remove the same genes as 1.47-1 and older.
2015/12/16: 1.48-2
. Bugfix in our local copy of heatmap, called from TOMplot, based on the bug report https://bugs.r-project.org/bugzilla3/show_bug.cgi?id=16583 (thanks to Duncan Murdoch for pointing it out).
. Function goodSamplesGenesMS can now work with data frames within the multiExpr structure, rather than just matrices.
2015/11/03: 1.48-1
. TOMplot now uses my own dendrogram drawing routine, which should not run into the stack/memory allocation problems of the standard plot.dendrogram function.
2015/10/29: 1.48
. Bugfix in pickHardThreshold (and pickHardThreshold.fromSimilarity): when the input is a similarity matrix rather than expression data, the function no longer crashes.
2015/10/09: 1.47-6
. Bug in consensusTOM fixed that caused a spurious error "File names to save blocks of consensus TOM are not unique." when calculating with more than one block.
. DIFF: Function consensusProjectiveKMeans can (and will by default) impute missing data. This may cause differences in preclustering when missing data are present. To reproduce old results, use argument imputeMissing = FALSE.
2015/09/30: 1.47-5
. Function consensusKME now uses internal code for re-arranging rather than relying on reshape's melt-cast combination, due to performance issues with larger data sets.
. Imports from/dependence on reshape have been dropped.
. blockwiseConsensusModules accepts argument cacheBase, which replaces the default "." that had been used.
. labelPoints now returns a data frame with the label positions, and accepts argument doPlot that optionally turns off the plotting of the labels.
. bicor now produces better-worded warnings about zero or missing MAD.
2015/08/07: 1.47-4
. DIFF: projectiveKMeans now by default imputes missing data in datExpr at the start of the function.
This can lead to somewhat different output; the old behaviour can be forced by setting imputeMissing = FALSE. This change does not affect blockwiseModules, since blockwiseModules imputed missing data before calling projectiveKMeans.
. Internal code cleanup: several ifelse calls changed to if-else constructs to avoid issues with ifelse generics defined by Bioconductor.
. pickHardThreshold is now more resistant to genes (columns) that have zero variance. However, their connectivity is considered 0.
. DIFF: Bugfix in internal code fixes an error in blockwiseModules and blockwiseConsensusModules where blocks determined by pre-clustering were incorrectly merged. In rare cases this may cause differences to previous results. Thanks to Elmar Tobi for reporting this.
2015/06/27: 1.47-2
. goodGenes and goodSamples, along with their MS versions, should now run faster thanks to a code clean-up.
2015/06/18: 1.47-1
. Bugfix in empiricalBayesLM fixes a crash in the function.
2015/06/13: 1.47
. Functions plotCor, plotMat and rgcolors.func from sma have been re-introduced.
. empiricalBayesLM is now more robust to missing values, particularly when OLS regression fails due to missing values making some of the covariates constant.
. Bugfix in bicovWeights fixes a crash when there is one variable with MAD = 0.
. Function blockwiseModules can now optionally use module merging criteria derived from stability studies.
. Functions blockwiseModules, blockwiseIndividualTOMs and consensusBlockwiseModules now accept argument blockSizePenaltyPower that lets the user specify the severity of the penalty for blocks exceeding the maximum size.
. Functions projectiveKMeans and consensusProjectiveKMeans now accept Inf for sizePenaltyPower.
. Function labeledHeatmap.multiPage now accepts more explicit arguments (rather than relying on ... as before) and correctly handles separator lines.
2015/03/28: 1.46
. Functions inherited from the now-defunct package sma have been removed.
2015/03/27: 1.45
. New function empiricalBayesLM for removing unwanted variation due to given covariates from high-dimensional data.
. New function bicovWeights for assigning weights to observations based on whether they are suspected outliers.
. labeledHeatmap and friends now accept argument keepLegendSpace.
. labeledHeatmap now returns information about the positions of individual boxes in the heatmap.
. labeledHeatmap now accepts arguments xColorOffset and yColorOffset controlling the gap between color labels and the heatmap itself.
2015/02/06: 1.43-11
. Bugfix in C code fixes memory allocation errors with large data sets. The C code was also made more consistent in the use of size_t whenever needed and possible.
. Bugfix in labeledHeatmap fixes the direction of extended separator lines for x-axis labels with an angle other than the default 45 degrees.
2015/01/23: 1.43-10
. Bugfix in function blockwiseIndividualTOMs and the functions that use it (blockwiseConsensusModules, consensusTOM) fixes a bug that caused crashes when more than one block was used.
2015/01/15: 1.43
. The package now imports AnnotationDbi and GO.db.
. Function blueWhiteRed gains argument endSaturation that allows the user to lighten the colors at the ends of the range, at the expense of saturation.
. Function labeledHeatmap gains additional arguments allowing the user to place horizontal and vertical divider lines into the heatmap.
. Bugs introduced into GOenrichmentAnalysis in the previous release are now fixed.
2014/11/25: 1.42
. Function TrueTrait has been re-worked and its argument Strata is no longer accepted.
. labeledHeatmap.multiPage now works with 1-column or 1-row matrices.
. The textMatrix argument to labeledHeatmap and labeledHeatmap.multiPage is now allowed to be a (dimensionless) vector as long as its length is consistent with the dimensions of the input Matrix.
. Bugfix in function formatLabels fixes the sometimes incorrect start of formatted labels.
. Bugfixes in function TrueTrait.
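The bicovWeights idea mentioned above (down-weighting suspected outliers) can be sketched with the Tukey biweight commonly used in biweight midcorrelation. The constants below follow the usual convention (deviations scaled by 9 times the MAD); WGCNA's actual bicovWeights has more options and may differ in detail.

```r
# Hedged sketch of Tukey-biweight observation weights: values near the median
# get weight close to 1, gross outliers get weight 0. Not WGCNA's exact code.
bicovWeightsSketch <- function(x, cc = 9) {
  u <- (x - median(x)) / (cc * mad(x, constant = 1))  # raw MAD, no 1.4826 factor
  ifelse(abs(u) < 1, (1 - u^2)^2, 0)
}

x <- c(1:10, 100)              # one gross outlier
w <- bicovWeightsSketch(x)
```

The median observation receives weight exactly 1, and the outlier at 100 is excluded entirely.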
. DIFF: Bugfix in mtd.branchEigengeneDissim that primarily affects blockwiseConsensusModules. The consensus quantile was incorrectly applied, which led to overly aggressive module merging. The bugfix will lead to different final modules if useBranchEigengeneDissim was TRUE. To reproduce old behavior, use argument reproduceBranchEigennodeQuantileError = TRUE.
. New function consensusTOM implements the calculation of consensus TOM in a stand-alone function; this calculation was previously hidden inside blockwiseConsensusModules.
. Network calibration methods in consensusTOM now include full quantile normalization.
. DIFF: bugfix in blockwiseIndividualTOMs that affects blockwiseIndividualTOMs as well as blockwiseConsensusModules: the soft-thresholding power for set 1 was used for all sets. This bug has now been fixed; to reproduce old calculations, change the old code to use the same power for all sets. I apologize for this rather serious omission.
. blockwiseIndividualTOMs gains additional output components.
. Internal C code changed some index variables from int to size_t.
. Function individualTOMs now uses a more memory-efficient way to call the internal compiled code.
. Bugfix in mtd.apply fixes occasional spurious crashes.
. Function mtd.apply now copies the 'names' attribute from the input to the output.
. Function mtd.subset can now subset columns based on column names, not just numeric indices.
. Function votingLinearPredictor can now tolerate missing data in the predictor variables (features).
. blockwiseModules gains a new argument loadTOM.
2014/06/13: 1.41-1
. DUP = FALSE removed from all .C calls since the argument is deprecated. This means that memory requirements may increase in some situations.
2014/06/12: 1.41
. New function shortenStrings.
. Changed several packages we depend on from Depends: to Imports:.
. New functions prependZeros and formatLabels.
. Bugfix in userListEnrichment: the function no longer crashes when there are no "significant" overlaps.
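Full quantile normalization, mentioned above as a calibration method in consensusTOM, can be sketched in base R: each set's values are replaced by the mean of the corresponding sorted values across sets, so all sets end up with identical value distributions. This only illustrates the idea; consensusTOM applies it to (large) TOM matrices.

```r
# Didactic sketch of full quantile normalization across a list of equal-length
# numeric vectors (not consensusTOM's block-wise implementation).
quantileNormalizeSketch <- function(vecs) {
  ref <- rowMeans(sapply(vecs, sort))   # reference distribution: mean of order statistics
  lapply(vecs, function(v) ref[rank(v, ties.method = "first")])
}

set.seed(3)
vecs <- list(rnorm(100), rnorm(100, mean = 2, sd = 3))
norm <- quantileNormalizeSketch(vecs)
```

After normalization the two sets have exactly the same sorted values, while each set's internal ranking is preserved.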
. New function intramodularConnectivity.fromExpr.
2014/05/07: 1.40
. Functions blockwise[Consensus]Modules and recut[Blockwise,Consensus]Trees get new arguments minSplitHeight and minAbsSplitHeight that are passed to cutreeDynamic.
. Arguments for blockwiseModules have been re-ordered to group them and make it easier to find the right setting.
2014/04/28: 1.39
. Function exportNetworkToVisANT can now restrict connections to a given number of top connections.
. Function exportNetworkToVisANT returns the resulting data frame invisibly.
. Function modulePreservation gains the ability to run the calculations in parallel, controlled by argument 'parallelCalculation'.
. Function modulePreservation gains the ability to explicitly specify test sets for each reference network, via parameter testNetworks.
. Functions matchLabels and overlapTable have been expanded with new functionality and arguments. matchLabels now works with any labels (not just numeric or color labels) and can handle missing labels by removing them.
. Bug in function multiData2list fixed: the function now returns a simpler (and correct) list.
. Crash-causing bug in mtd.rbindSelf fixed.
2014/01/08: 1.37
. New functions multiUnion and multiIntersect.
. New function labeledHeatmap.multiPage produces labeled heatmaps divided into multiple plots (pages).
. plotColorUnderTree and plotOrderedColors now produce more precise "center" and "right" text alignment.
. Maximum block size in blockwiseModules, blockwiseConsensusModules and blockwiseIndividualTOMs (argument 'maxBlockSize') is now limited to be less than sqrt(2^31) to prevent problems with C and Fortran routines. The same limitation also applies to the 'preferredSize' argument of projectiveKMeans and consensusProjectiveKMeans.
. Occasional bug when saving and loading permutation test results in modulePreservation fixed.
2013/11/22: 1.36
. Needs dynamicTreeCut 1.61 or higher.
. Bugfix in mtd.mapply fixes a crash.
. Warning message in mtd.subset cleaned up.
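The multiUnion and multiIntersect entry above can be captured in base-R one-liners: the union or intersection of all elements of a list of vectors. For simple inputs, WGCNA's functions behave like these sketches.

```r
# Didactic equivalents of multiUnion / multiIntersect for a list of vectors.
multiUnionSketch     <- function(lst) Reduce(union, lst)
multiIntersectSketch <- function(lst) Reduce(intersect, lst)

sets <- list(c("a", "b", "c"), c("b", "c", "d"), c("c", "d", "e"))
u <- multiUnionSketch(sets)      # genes seen in any set
i <- multiIntersectSketch(sets)  # genes present in every set
```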
. New function multiData to conveniently create multiData structures.
. The color palette of blueWhiteRed has been tweaked to provide lighter extreme colors, which makes it easier to read black text superimposed on the strongest colors.
. More descriptive error message in cor() and bicor() when the input contains missing data and 'use = "all.observations"'.
2013/11/04: 1.35
. Function accuracyMeasures now works also with factors.
. Function moduleEigengenes calls impute.knn only if there are any missing data.
. Functions mtd.apply, mtd.applyToSubset, and mtd.mapply gain arguments controlling whether and how progress should be displayed; they also gain arguments that let the user specify calculation on only a subset of the input sets.
2013/10/10: 1.34
. Function verboseBarplot gains argument addCellCounts that enables the display of counts above each bar.
. Function mtd.subset gains the argument 'permissive' that allows subsetting of "loose" multiData structures, as well as 'drop' that controls dropping of dimensions with extent 1.
. Internal: Depends and Imports fields were cleaned up as per new CRAN requirements. Users should be aware that loading and attaching WGCNA now does not automatically attach all packages from which WGCNA imports.
. A potentially serious bug has been fixed in the internal C code for calculations of correlation (function cor()). This bug seems to have led to correlation sometimes returning 0 where it should have returned an NA.
2013/08/29: 1.33
. mtd.apply and mtd.applyToSubset gain argument mdaCopyNonData that controls whether non-data components of the input should be copied to the output.
. Bug in matchLabels fixed that led to non-integer color labels if two reference modules had the same size.
. New function plotOrderedColors that extends plotColorUnderTree to the case where the plot 'above' is not a dendrogram but a general plot (a useful example is a barplot).
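The blueWhiteRed tweak above (lighter extreme colors so black text stays readable) can be approximated with base R's colorRampPalette. The endpoint hex values below are illustrative stand-ins, not WGCNA's actual colors.

```r
# A base-R approximation of the blueWhiteRed idea: a blue-white-red ramp with
# lightened endpoints. The hex endpoints are hypothetical, chosen only to show
# the effect; WGCNA's blueWhiteRed uses its own tuned colors.
blueWhiteRedSketch <- colorRampPalette(c("#6666FF", "#FFFFFF", "#FF6666"))
pal <- blueWhiteRedSketch(11)   # 11 colors from light blue through white to light red
```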
. DIFF: bugfix in blockwiseModules, recutBlockwiseTrees, blockwiseConsensusModules, recutConsensusTrees: gene reassignment by kME now reassigns the correct genes. Previously, the reassigned genes were incorrectly chosen. This may make a small difference in the module assignments if reassignThreshold was above zero (as it was by default). The default was also changed to not perform module re-assignment.
. Switching to dynamicTreeCut 1.60: this dynamicTreeCut uses external criteria that are supplied by the blockwise* functions. The criteria are implemented in functions branchEigengeneDissim, mtd.branchEigengeneDissim, branchSplit, branchSplit.dissim. To preserve results of older calculations, the new features are disabled by default.
. New functions for convenient handling of multiData structures: mtd.apply, mtd.applyToSubset, mtd.mapply, mtd.rbindSelf, mtd.setAttr, mtd.colnames, mtd.setColnames, mtd.simplify, mtd.subset, list2multiData, multiData2list, isMultiData.
. Function randomGLMPredictor has been removed. Please use the package randomGLM and the function randomGLM in that package.
. Internal C code has been simplified.
2013/07/20: 1.30
. Functions blockwiseModules and TOMsimilarityFromExpr are now more memory-efficient: only 2 copies of the potentially large TOM matrix are needed, down from 3 before.
. Function mergeCloseModules can now optionally perform quantile equalization (normalization) to make eigengene (dis-)similarities comparable across the data sets.
. The argument order in mergeCloseModules has changed to a more logical one.
2013/07/18: 1.29
. Functions determining the number of available cores are now more robust.
. labeledHeatmap gains argument naColor that controls the color for missing values.
2013/07/09: 1.28
. Help files formatted to a narrower line width.
. Bugfix in pickSoftThreshold when dataIsExpr is FALSE: the function now correctly checks the similarity (thanks to Lourdes Pena Castillo for reporting it).
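The branchEigengeneDissim criteria listed above rest on a simple quantity: each branch is summarized by its eigengene (first principal component of the standardized branch expression), and two branches are compared via one minus the correlation of their eigengenes. The sketch below is a hedged base-R illustration; WGCNA's functions add sign conventions and further options.

```r
# Didactic sketch of an eigengene-based branch dissimilarity (not WGCNA's
# exact branchEigengeneDissim; sign handling here uses abs() for simplicity).
eigengeneSketch <- function(datExpr) {          # samples x genes
  svd(scale(datExpr), nu = 1, nv = 0)$u[, 1]   # first left singular vector
}
branchDissimSketch <- function(expr1, expr2) {
  1 - abs(cor(eigengeneSketch(expr1), eigengeneSketch(expr2)))
}

set.seed(4)
base <- rnorm(20)                               # shared driving signal
expr1 <- sapply(1:5, function(i) base + rnorm(20, sd = 0.1))
expr2 <- sapply(1:5, function(i) base + rnorm(20, sd = 0.1))
d <- branchDissimSketch(expr1, expr2)           # near 0: branches should merge
```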
. updateProgInd now only updates the display if the value to be displayed is actually different from the one already displayed. This can lead to somewhat better performance if updateProgInd is the performance bottleneck.
. DIFF: Bug fix in GOenrichmentAnalysis: all evidence codes listed on http://www.geneontology.org/GO.evidence.shtml are now recognized. This will likely change GO enrichment results. We apologize for this omission.
. Bug fix in userListEnrichment: p-values of comparisons with no overlapping genes are now 1, instead of 0.
. plotColorUnderTree and plotDendroAndColors gain new arguments rowTextAlignment and rowTextIgnore that afford more flexibility in how the row text is formatted.
2013/04/01: 1.27-1 ("This one's no April 1 joke!")
. userListEnrichment now contains a new set of gene lists compiled mostly by Mike Palazzolo and Jim Wang at CHDI.
. userListEnrichment now runs faster.
. Bug fix in verboseBarplot: p-values are now consistently displayed with two significant digits.
. Bug fix in blockwiseConsensusModules: the function now works correctly (and faster) when not using the disk cache.
2013/03/06: 1.26 ("Spring growth")
. Bug fix in blockwiseModules, blockwiseConsensusModules and blockwiseIndividualTOMs: the functions now use consistent block labels [thanks to Austin Hilliard for pointing it out].
. Function accuracyMeasures has been re-written to accept vectors of predicted and observed values directly, and to work also for prediction of continuous outcomes. The new function is backwards-compatible with older versions except for the name of the first argument.
. Function verboseScatterplot gains a new argument displayAsZero that allows displaying small correlations as 0 (rather than, say, 1e-13).
. New function returnGeneSetsAsList returns the gene sets used by function userListEnrichment.
2012/12/01: 1.25-2 ("We ain't done fixin' yet")
. Bugfix in blockwise[Consensus]Modules fixes a crash under certain circumstances (when some of the initial modules are removed).
[Thanks to Nicola Soranzo for pointing this out.]
. A bad choice of return value in WGCNAnThreads() that caused a crash in pickSoftThreshold is now fixed. [Thanks to multiple users who noticed this.]
2012/11/09: 1.25-1 ("The Election Fix")
. Internal C++ code is now compliant with older compilers that don't implement vector::data().
2012/11/06: 1.25 ("The Election Issue", pardon the pun.)
. DIFF: bugfix in function modulePreservation: the function now respects the corFnc and corOptions arguments throughout the calculations. This bug affected calculations that used correlation functions other than the default "cor" and options other than the default "use = 'p'".
. Introducing parallelization to user-level R functions through the use of packages parallel, foreach and the doParallel backend. The parallelization should work on all R-supported platforms. This change does not affect C-level workhorse functions for calculating correlations.
. New function blockSize that attempts to choose a suitable block size for most functions that use a block-wise approach to fit calculations into memory.
. Function labels2colors is now more robust and should work for data.frames as well.
. Function verboseBoxplot gains arguments notch and varwidth that are passed to the underlying boxplot call.
. Functions plotColorUnderTree and plotDendroAndColors now display missing color values as grey.
. Function verboseBarplot can now handle binary variables as well and will print the Fisher exact test p-value.
. Function colQuantileC now ignores missing data.
. Internal C code cleaned up and small memory leaks plugged.
. Function labelPoints has a new argument protectEdges that can prevent labels from going outside of the plot area.
. verboseScatterplot now has explicit arguments col and bg to specify the colors and fill (background) of the plotting symbols. This makes specifying col and bg work also when only a sample of all points is plotted.
. GOenrichmentAnalysis can now optionally shorten output by omitting details of the highest-enriched GO terms.
. Defaults in randomGLMpredictor have been slightly tweaked for better performance.
. New function transposeBigData for transposing big matrices.
2012/08/02: 1.23-1
. Examples in help files are now shorter to speed up execution, as per CRAN requests.
2012/07/27: 1.23
. Function userListEnrichment is now improved with more categories of gene lists.
. DIFF: bugfix in functions corAndPvalue and bicorAndPvalue: the calculation of the Z statistic is now corrected. This bug did not affect the calculation of Student t statistics nor the p-values.
. Function standardScreeningBinaryTrait gains argument areaUnderROC (default TRUE) that allows the user to turn off the AUC calculation and thus achieve a substantial speedup for large data sets.
. Function metaAnalysis now by default turns off the AUC calculation, making it much faster on large data sets. The AUC calculation can be enabled using the new argument getAreaUnderROC.
. New function randomGLMpredictor that implements an ensemble predictor based on bootstrap aggregation (bagging) of generalized linear models whose covariates are selected using forward stepwise regression according to AIC criteria.
2012/06/18: 1.22
. DIFF: Bugfix in function blockwiseConsensusModules: the module colors are now correct (and not scrambled) when some genes are excluded due to too many missing entries. Apologies for any nonsense results this error may have generated.
. Bugfix in the function modulePreservation when used on adjacency input: clustering coefficient calculations are now corrected.
. Function modulePreservation gains an additional argument calculateClusterCoeff, with default value FALSE, that can be used to enable/disable clustering coefficient calculations. These tend to be slow, and for efficiency purposes the default is FALSE.
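The corAndPvalue entry above distinguishes the Student t statistic (unaffected by the bug) from the Z statistic (which was fixed). The t statistic for a correlation is standard and can be checked against cor.test; the Fisher-transform Z shown alongside is the usual large-sample definition and may differ in detail from what corAndPvalue computes.

```r
# Student t statistic for a correlation r on n samples (this is what
# cor.test reports); fisherZ is the textbook Fisher-transformation Z,
# shown for comparison -- possibly not identical to corAndPvalue's Z.
corT    <- function(r, n) r * sqrt(n - 2) / sqrt(1 - r^2)
fisherZ <- function(r, n) atanh(r) * sqrt(n - 3)

set.seed(5)
x <- rnorm(50); y <- x + rnorm(50)
r <- cor(x, y)
```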
. DIFF: because of the above, clustering coefficient calculations in function modulePreservation are disabled by default.
. The error message in modulePreservation about input containing genes with too many missing values is now more informative.
. Bugfix in function numbers2colors: the function now properly accounts for the presence of positive and negative entries when automatically deciding whether the input should be considered signed or unsigned.
. Function rankPvalue now doesn't warn about weights not summing to 1.
. DIFF: bugfix in function userListEnrichment corrects the output. Previously, UNcorrected p-values were returned when corrected p-values were requested, and no p-values were returned when uncorrected p-values were requested.
. The p-value display in function verboseBarplot is now cleaned up by properly arranging spaces.
. Bugfix in mergeCloseModules: the function now completes correctly when all modules are merged into a single module.
. Internal bugfix in C function minWhichMin: the function now works correctly when the first entry is NA or NaN.
2012/04/23: 1.20
. New function blueWhiteRed that produces a color sequence distinguishable by people with the most common kind of color blindness (green-red color blindness). The currently much-used greenWhiteRed now outputs a warning to that effect. We recommend that users switch to blueWhiteRed.
. The default color palette for numbers2colors is now blueWhiteRed.
. Function TOMsimilarity now forces the diagonal of the input adjacency matrix to be 1. The diagonal is not really meaningful in a network analysis, but the underlying C code assumes the diagonal to be 1 since it simplifies calculations. This change also applies to TOMdist (which calls TOMsimilarity).
. Function blockwiseConsensusModules now accepts argument trimmingConsensusQuantile and makes the consensus in module trimming consistent with the consensus in network construction and module identification.
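The TOMsimilarity entry above notes that the code assumes a unit diagonal on the adjacency matrix. A base-R sketch of the standard unsigned topological overlap formula shows why: setting a_ii = 1 is equivalent to excluding self-connections from the shared-neighbor sums. This is didactic code, not WGCNA's compiled implementation.

```r
# Didactic sketch of the unsigned TOM: TOM_ij = (L_ij + a_ij) /
# (min(k_i, k_j) + 1 - a_ij), with self-connections excluded from L and k.
tomSketch <- function(adj) {
  a <- adj; diag(a) <- 0             # off-diagonal part (unit diagonal assumed)
  L <- a %*% a                       # shared-neighbor term
  k <- rowSums(a)                    # node connectivities
  tom <- (L + a) / (outer(k, k, pmin) + 1 - a)
  diag(tom) <- 1
  tom
}

set.seed(6)
adj <- abs(cor(matrix(rnorm(60), 12, 5)))^6   # soft-thresholded toy adjacency
tom <- tomSketch(adj)
```

For an adjacency with entries in [0, 1], the resulting TOM is symmetric with values in [0, 1].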
. Function recutConsensusTrees now accepts arguments trimmingConsensusQuantile and mergeConsensusQuantile with the same effect as in blockwiseConsensusModules.
. Removed argument minKMEtoJoin from the blockwise* and recut* functions, as it had not been used by the actual code.
. Function verboseBarplot now accepts argument horiz and is capable of creating horizontal barplots.
. Internal: C function minWhichMin now ignores missing values.
2012/02/28: 1.19
. Argument name change in function TOMplot: colors and colorsLeft are now Colors and ColorsLeft to avoid conflict with the argument col of functions heatmap and image.
. Bugfix in internal function .consensusMEDissimilarity that caused crashes in mergeCloseModules.
. DIFF: blockwiseConsensusModules now passes the argument mergeConsensusQuantile to mergeCloseModules, which can lead to different module merging compared to previous versions. Use mergeConsensusQuantile = 0 to reproduce results from older versions of this package.
. Function mergeCloseModules now takes arguments consensusQuantile, corFnc and corOptions.
. New function blockwiseIndividualTOMs to calculate individual TOMs across multi-set expression data.
. New functionality in blockwiseConsensusModules, which can now utilize pre-calculated individual TOMs, giving more flexibility.
. New utility function lowerTri2matrix.
. Internal change: the default chunk size in the consensus module calculation is now 1e8 instead of 1e7. Most computers with at least 2GB of memory should be able to handle as much; the size can be lowered (or increased) if necessary.
. Function sizeGrWindow now calls dev.new rather than X11.
. New function nSets to directly return the number of sets in a multi-set variable.
. Function verboseBarplot now returns the midpoints of the drawn bars (as the function barplot does), augmented by the bar heights as attribute "height".
. Minor changes and cleanup in code.
. New citation added for our JSS article.
2012/01/12: 1.18-2
. Bugfix in function consensusKME fixes the column names of the output.
2012/01/04: 1.18-1
. Bugfix in function GOenrichmentAnalysis fixes an occasional crash of the function.
2011/12/22: 1.18
. DIFF: the p-value calculation in userListEnrichment is now corrected to give the probability of overlap **as strong or stronger** than observed, rather than that of overlap **stronger** than observed. The resulting p-values are more conservative; the difference can be large for very small categories.
. metaAnalysis and consensusKME now also calculate meta-analysis statistics using weights proportional to the square root of the number of samples.
. Added citation information for the package. The citation is available by typing the command citation("WGCNA"). Please cite the package if you use it in published research. (citation("WGCNA") gives somewhat mis-formatted results on older R versions, sorry.)
2011/12/20: 1.17-1
. Minor clean-up of the greeting message.
. Functions goodSamples and goodGenes, as well as their combined and multi-set versions, now properly respect the settings of minNSamples and minNGenes.
2011/12/06: 1.17
. Function consensusKME now returns additional statistics based on 3 different sets of weights.
. Bugfix in function TOMplot: the heatmap is now drawn correctly.
. GOenrichmentAnalysis is now about 40% faster.
. nearestCentroidPredictor is now more robust to constant variables.
. Bugfix in function standardScreeningNumericTrait: the default value for argument corOptions is now correct.
2011/11/16: 1.16
. Bugfix in standardScreeningBinaryTrait: the sign of the `signed' Kruskal test statistic is now calculated correctly.
. Function metaAnalysis now (optionally) also calculates meta-analysis statistics using the rankPvalue function.
. Function consensusKME now also optionally uses rankPvalue for meta-analysis of kME results across input data sets.
2011/11/14: 1.15
. Due to CRAN requirements, R versions below 2.10 are not supported anymore.
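The "as strong or stronger" versus "stronger" distinction in the userListEnrichment fix above can be made concrete with the hypergeometric distribution, the usual model for list-overlap p-values (the numbers below are hypothetical). With k genes overlapping between a list of size n and a category of size K in a universe of size N:

```r
# Corrected rule: P(overlap >= k); old (buggy) rule: P(overlap > k).
# For a maximal overlap in a tiny category the old rule gives (essentially) 0
# while the corrected rule gives a small positive p-value -- exactly the
# "large difference for very small categories" the entry mentions.
N <- 1000; K <- 5; n <- 5; k <- 5                 # hypothetical counts
pCorrect <- phyper(k - 1, K, N - K, n, lower.tail = FALSE)  # P(X >= k)
pOld     <- phyper(k,     K, N - K, n, lower.tail = FALSE)  # P(X >  k)
```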
New function multiData.eigengeneSignificance for calculating module eigengene significances in multi-set situations. . DIFF: bugfix in function pickSoftThreshold, subtraction of self-connectivity is now corrected which may lead to slightly different scale-free analysis results (thanks to Michael Linderman for reporting it). 2011/11/08: 1.14 . The default value of argument calculateCor.KIMall in function modulePreservation is now FALSE to speed up execution. . New function consensusKME to calculate "consensus" kME across several data sets. . Function standardScreeningBinaryTrait now takes arguments corFnc and corOptions. . Argument 'signed' to function 'numbers2colors' now has a default value that is TRUE if the input contains data with both positive and negative signs. . labels2colors now accepts missing data in input, and the input labels can be non-numeric . Bugfix in standardScreeningNumericTrait: setting qvalues=FALSE now works as intended. . Bugfix in votingLinearPredictor: function now works when xTest is NULL. . DIFF bugfix in rankPvalue: function now returns correct values also when column weights are specified . Default value of argument var.equal in function standardScreeningBinaryTrait has been changed to FALSE to make it consistent with the defaults of function t.test. . Function pickSoftThreshold now also accepts a similarity matrix as input. . Function nearestCentroidPredictor now works also without specifying test data (xtest). . Function nearestCentroidPredictor lost multiple input arguments that related to un-loved sample network options and removal of PCs. . Function accuracyMeasures now also outputs the naive accuracy rate. . New data set SCsLists containing stem cell marker genes that can be used with userListEnrichment. . New function metaAnalysis to calculate meta-analysis significance statistics 2011/08/17: 1.13 . 
Multi-threading is now disabled by default but can be enabled by setting the environment variable ALLOW_WGCNA_THREADS= or using the function allowWGCNAThreads(). 2011/08/16: 1.12 . Code cleanup to satisfy CRAN package check . Functions plot.cor and plot.mat renamed to plotCor and plotMat to avoid confusion with methods for the plot generic . Function userListEnrichment now returns a slightly more informative output in which the type of the reference category is also indicated. 2011/07/06: 1.11-3 . bugfix in modulePreservation fixes an additional bug that was introduced in 1.11-1 . bugfix in votingLinearPredictor: function now produces valid output when some (but not all!) features have zero variance . fixed names of output in votingLinearPredictor 2011/06/06: 1.11-2 . bugfix in blockwiseConsensusModules fixes a crash of the function 2011/05/31: 1.11-1 . bugfix in modulePreservation fixes internal errors that appeared under certain circumstances . small changes in compiled code should make the code compatible with older gcc versions 2011/05/11: 1.11 . bugfix in conformityDecomposition: calculation corrected . bugfix in userListEnrichment: internal data sets are now used correctly . bugfix in collapseRows that affected the group2row component in the returned value. The bug caused some of the returned representative rows to be incorrect but had no effect on the collapsed data. . bicor now issues a warning when any of the input variables (or their columns) have zero MAD. . numbers2colors gains the argument commonLim to specify whether limits should be column-specific or universal . bug fix in numbers2colors: having missing data and out of range data at the same time will not cause an error anymore 2011/04/20: 1.10-2 . Bug in function cor fixed that caused an unnecessary slowdown. Results were not affected. . Typos in several help files fixed: use = 'Spearman' is now corrected to use = 'spearman' (thanks to Eric L. Du for pointing it out). 2011/04/19: 1.10 . 
New function accuracyMeasures for summarizing 2x2 confusion tables . New prediction functions votingLinearPredictor and nearestCentroidPredictor . New functions proportionsInAdmixture and populationMeansInAdmixture for estimating proportions and population means in admixtures . Several additional functions by Jeremy Miller: chooseOneHubInEachModule, consensusDissTOMandTree, kMEcomparisonScatterplot, moduleMergeUsingKME, orderBranchesUsingHubGenes, overlapTableUsingKME, stratifiedBarplot, swapTwoBranches, reflectBranches, selectBranch, userListEnrichment . New function mutualInfoAdjacency by Lin Song . New functions coClustering and coClustering.permutationTest . Due to excessive problems with Tcl/Tk, we dropped the dependency on package qvalue and incorporated the necessary R code directly into WGCNA . plotColorUnderTree and plotDendroAndColors gain additional flexibility when working with rows of text labels. . The argument 'colorText' to the above functions has been renamed to 'rowText'. Sorry for any code that becomes broken because of this! . verboseScatterplot gains arguments to set the regression line color and type. . Colors for text labels in labeledHeatmap can now be set. 2011/02/05: 1.00 . Functions standardScreeningBinaryTrait and standardScreeningNumericTrait can now optionally turn off the calculation of q-values as the latter sometimes leads to problems. . Character expansion for axis labels in labeledHeatmap can be set separately for x and y axis. . The ubiquitous corOptions argument can now take the value of an empty string (""). . New utility function prepComma. . New set of functions that calculate adjacency from a given similarity matrix. 2010/12/17: 0.99 . Functions [bi]cor gain the ability to calculate cosine [bi]correlations, invoked using the argument cosine = TRUE. Similar functionality added to blockwise[Consensus]Modules and TOMsimilarityFromExpr. 
For most other functions using correlations, the argument corOptions can be used to request cosine correlations. . Corrected error message in function adjacency . Functions [bi]corAndPvalue now also return the number of observations for each p-value 2010/11/12: 0.98 . Bugfix in functions [bi]corAndPvalue that gave incorrect p-values when input y = NULL . Functions [bi]corAndPvalue now also return the Student t statistic from which the p-values are calculated. . Function standardScreeningNumericTrait now accepts arguments corFnc and corOptions 2010/10/29: 0.97 . Functions [bi]corAndPvalue now also return the Fisher Z statistics. . Function adjacency can now calculate distance-based networks as well, via type="distance". Various distance functions and options can be used. . Functions signedKME and networkScreening now accept arguments cor and corOptions that can be used to specify the correlation function to be used in the calculations. . Argument 'y' in [bi]corAndPvalue now has a default value NULL . DIFF: displayed value of correlation p-value in function verboseScatterplot is now based on Student distribution instead of Fisher normal approximation. This change only affects plots. 2010/09/23: 0.96 . New functions pquantile, pmean, pmedian for calculation of "parallel" quantiles, means, and medians. . Functions cor and bicor now give a more descriptive error message when an argument with one dimension equal zero is passed to the functions. 2010/09/16: 0.95 . More changes to function collapseRows: renamed arguments, changed defaults, code cleaned up and bugs fixed. The function collapseRows should still be considered experimental. 2010/09/05: 0.94 . DIFF: bug fixed in modulePreservation that put incorrect input into Zconnectivity.preservation which also affected Zsummary. We apologize for this error. . Bug fixes in function collapseRows 2010/08/25: 0.93 . 
modulePreservation now also calculates density and connectivity statistics based on clustering coefficient and maximum adjacency ratio. . New functions mutualInfoAdjacency and AUV2predicted provide basic interfacing between mutual information methods and weighted networks . Minor cleanup of help files 2010/07/26: 0.92-3 . Package file cleanup, nothing changed from user perspective 2010/07/14: 0.92-1 . Bug in compiled calculation of adjacency fixed. This may have affected calculations of TOM when TOMType = "signed". Apologies if important results were affected. 2010/06/24: 0.92 . DIFF: in modulePreservation, by default Zsummary is only calculated from meanAdj, cor.kIM and cor.Adj. . DIFF: in modulePreservation, a new option includekMEallInSummary (default FALSE) controls whether cor.kMEall should be included in summary statistics . output of modulePreservation now has names in permutationDetails to facilitate detailed studies of permutation results 2010/06/10: 0.91 . DIFF: in modulePreservation, values of cor.kME, cor.kMEall and meanSignAwarekME are now stripped of their sign, both in observed values and in permutations. This is done because the eigengene may flip sign between data sets, leading to artificially low preservation scores. . DIFF: p-values calculated by modulePreservation are now returned as logarithms in base 10 (previously they were returned as natural logarithms). Names of the returned components also changed slightly to emphasize that the p-values are returned as logarithms. . Additional summary columns added to the output of modulePreservation for observed and permutation preservation statistics . Function collapseRows now selects a single probe to represent each gene 2010/06/05: 0.90 . modulePreservation now also returns p-values and optionally the q-values corresponding to the permutation Z scores . simulateDatExpr now sets (artificial) gene and sample names, for easier integration with some analysis functions (notably modulePreservation) . 
incompatibilities with new versions of R in exportNetworkTo[Visant,Cytoscape] fixed . new function subsetTOM to efficiently calculate TOM among a subset of network nodes . bugs in vectorTOM fixed . DIFF: modulePreservation now limits the size of the gold module to at most half the total number of genes in the comparison, to make sampling and permutation calculations meaningful. This may slightly affect calculated results for the gold module, but will not affect results of any other modules. 2010/05/21: 0.89 . DIFF: definition of cluster coefficient has been slightly changed, which affects approximate conformity-based network concepts . new functions rankPvalue and metaZfunction . new function standardScreeningNumericTrait . bug in automatic labeling of axes in verboseBoxplot has been fixed . several help files added and updated . minor changes in standardScreeningBinaryTrait, standardScreeningCensoredTime . scaleFreeFitIndex now does not calculate a log(log(..)) based fit that often led to errors . new function collapseRows by Jeremy Miller to convert probe-level expression to gene-level expression 2010/05/07: 0.88-2 . unnecessary dependence on package fields removed. 2010/05/07: 0.88-1 . DIFF: bug in cor(x) and bicor(x) fixed that produced incorrect results when a column of x was all missing (NA). We apologize for any inconvenience this error may have caused. 2010/03/17: 0.88 . new functions corAndPvalue and bicorAndPvalue that calculate (bi)correlation p-values efficiently for matrices of correlations 2010/02/19: 0.87 . bicor gets a new argument pearsonFallback that lets the user specify the handling of cases with zero median absolute deviation that would normally lead to NA values. . mergeCloseModules (and blockwise[Consensus]Modules) now more robust: works also in the case of a single module . 
DIFF: A bug was fixed that affected bicor(x) calculation with missing data on Windows (calculations of 2-variable bicor, single-variable bicor without missing data, and any calculations where multi-threading is available [most platforms except Windows] were not affected by this bug). The bug caused some calculations to be a bit off. We apologize for any problems this may have caused. 2010/02/10: 0.86 . bug in colQuantileC fixed. . EXPERIMENTAL: a major addition to modulePreservation now allows module preservation calculations based on general adjacency matrices. . bugfix in labelPoints fixes a warning message 2009/12/26: 0.85 . DIFF: in modulePreservation, separability now correctly works for signed and signed hybrid networks. . modulePreservation gains argument to specify correlation function 2009/12/05: 0.84 . new function spaste . additions and corrections to help files . bugfix in standardScreeningCensoredTime . bugfix in blockwiseModules: module colors are now not scrambled when some genes are excluded because they don't pass goodSamplesGenes . improved labelPoints: more consistent label offsets and fixed meaning of 'offs' in help file 2009/11/24: 0.83-1 . help cleaned up . Windows package now compatible with R-2.10 2009/11/22: 0.83 . new function modulePreservation calculates module preservation statistics between independent data sets . new dependence on package splines . new function labelPoints to semi-intelligently label points in a scatterplot. . extended functions pickSoftThreshold, pickHardThreshold . minor bugfix in scaleFreePlot . new function standardScreeningCensoredTime . new function scaleFreeFitIndex 2009/11/17: 0.82-1 . bugfix in plotEigengeneNetworks (thanks to Ronnie Alves for pointing it out) . bugfix in heatmapWithLegend when used with mixed color and text labels 2009/11/10: 0.82 . new function standardScreeningBinaryTrait . bugfix in verboseBarplot: the function now passes the ... arguments to barplot . 
verboseScatterplot gets argument sample for better handling of large vectors . labeledHeatmap gets arguments xColorWidth and yColorWidth that control the width of the color labels 2009/10/29: 0.81-2 . bugfix in blockwise[Consensus]Modules: function now works when number of modules exceeds number of available colors . bugfix in exportNetworkToCytoscape: all nodes are now included in the node file (thanks to Tim Gernat for pointing it out and suggesting the fix) 2009/10/22: 0.81-1 . addTraitToMEs now works with colnames instead of names . bugfix in bicor(x): when x has missing data, bicor now does not crash . minor bugfix in plotEigengeneNetworks: printAdjacency is now respected 2009/10/19: 0.81 . several functions from the now defunct package sma were added, mainly to allow our users to run all tutorials smoothly. Note that if the sma package is used concurrently, identical copies of the duplicated functions will be available. Warning messages about this fact can be disregarded safely. . labeledHeatmap re-written to use an internal heatmapWithLegend function. This means labeledHeatmap now works also in complicated sectionings using layout(), which was not the case before. 2009/09/18: 0.80-1 . crash in blockwise[Consensus]Modules and TOM calculations when using bicor fixed; blockwise[Consensus]Modules and TOM calculations using compiled code get a new argument maxPOutliers corresponding to the bicor argument of the same name (see news in 0.80). 2009/09/13: 0.80 . new functions conformityBasedNetworkConcepts and fundamentalNetworkConcepts calculate network concepts (indices) . bicor gains the argument maxPOutliers specifying the maximum fraction of data that can be considered outliers . new function verboseBarplot to produce annotated barplots . verboseBarplot, verboseBoxplot and verboseScatterplot now have complete help files 2009/09/01: 0.79-4 . bug in networkConcepts fixed . goodGenesMS and goodSamplesMS now a bit more robust . 
new function vectorizeMatrix to turn a matrix into a vector of non-redundant components . orderMEs now works also on the single set output of moduleEigengenes. . plotColorUnderTree, plotDendroAndColors, and plotClusterTreeSamples are now capable of displaying text labels that identify the displayed colors by name. . overlapTable now works when the arguments are 1xn or nx1 matrices. 2009/08/07: 0.79-3 . GOenrichmentAnalysis now works for yeast data as well. Yeast GO mappings are given in a slightly different database (ORF identifiers instead of Entrez) and previous versions of the function did not handle this difference (thanks to Maryam Anwar for pointing this out). 2009/08/05: 0.79-2 . Default color in addGuideLines now darker (grey30 instead of grey70). . Default color in addGrid now darker (grey30 instead of grey70). . Error in bicor help file corrected. . moduleEigengenes gets the option to turn off scaling expression data before calculating the singular value decomposition. This should only be used to speed up calculation if the data has been scaled previously. . Bugfix in simulateMultiExpr: the function now works when leaveOut=NULL . Minor changes to diagnostic messages in TOM calculations . Minor changes to printed values in verboseBoxplot . GOenrichmentAnalysis a bit more stable . GOenrichmentAnalysis can now analyze several sets of labels at the same time; the idea is to give the user the option to quickly calculate enrichments for a collection of competing module assignments. . Enrichment calculations can optionally be restricted to a subset of modules 2009/06/05: 0.79-1 . GOenrichmentAnalysis now faster and a bug in returned genePositions fixed. . function plotEigengeneNetworks gained two additional arguments controlling the printing of numerical values into the adjacency heatmaps. 2009/06/01: 0.79 . Added (experimental) function GOenrichmentAnalysis for automatic GO enrichment analysis 2009/05/26: 0.78 . Package qvalue becomes optional. 
If it is not installed, q values will not be calculated in the network screening functions. 2009/05/18: 0.77-1 . Hubgene calculation in moduleEigengenes now works even if only one gene has valid data. . Bug in labeledBarplot fixed that crashed the function if no error bars were given. 2009/05/05: 0.77 . DIFF: reverting to default TOM denominator method "min" . Bug fix in .clustOrder affecting consensusOrderMEs and mergeCloseModules when there is only one proper module present. 2009/04/02: 0.76 . Display of p-value in verboseScatterplot improved: small p values now shown more accurately . Spurious warning message in matchLabels is now suppressed . additional TOM option added: use mean denominator instead of minimum . DIFF: default TOM denominator method is mean. Use TOMDenom = "min" where necessary to reproduce older results . help files for [unsigned]adjacency corrected 2009/03/18: 0.75 . fixed a bug in matchLabels that caused unrelated modules to be matched . faster correlation calculations are now implemented both for [bi]cor(x) as well as [bi]cor(x,y). On platforms where POSIX threads are available, certain parts of the [bi]cor calculations are threaded . functions [bi]cor, blockwise[Consensus]Modules, TOMsimilarityFromExpr now have additional parameters providing control over threading and over the tradeoff between speed and precision in handling missing data in the correlation calculations . fixed a bug in bicor(x,y) that was present when any of the columns consisted of only missing data and that caused incorrect NAs and 0s in the result. . DIFF: softConnectivity now subtracts 1 from the sum of adjacencies to remove the adjacency of a gene with itself. . DIFF: order of parameters in softConnectivity has changed. . DIFF: default number of centers in [consensus]projectiveKMeans is now an attempt to fully utilize resources estimated from preferredSize. . softConnectivity now accepts parameter 'type' that can specify the network type to be used; . 
fixed a bug in function adjacency that would crash the function in certain circumstances. 2009/03/04: 0.73 . default p value threshold in matchLabels relaxed to 5e-2 . DIFF: [consensus]ProjectiveKMeans now defaults to a much higher number since that seems to lead to better results. . Faster versions of correlation and bicorrelation implemented. Fast correlation available for general use in function cor1. . bugfix in labeledHeatmap: xLabels and xSymbols do not overlap anymore . goodGenes, goodGenesMS, etc now resistant to whole gene profiles containing only NAs . minor changes in verbosity of [consensus]projectiveKMeans 2009/02/18: 0.72 . corPvalueStudent now correctly returns small values instead of zero . minor changes in default sectioning in plotDendroAndColors . plotDendroAndColors and plotColorUnderTree now plot colors top to bottom (previously was bottom to top) . faster median calculation in C implementation of bicor . added function overlapTable for computing significance of module overlaps . added function matchLabels for relabeling modules in a source partition to best approximate module labels in a given reference partition 2009/02/06: 0.71-1 . minor changes in verbosity of TOM calculations . blockwise[Consensus]Modules: random seed is now set only if it is non-NULL; this allows the user to force the functions to run without setting a seed, which was not possible before. 2009/02/05: 0.71 . package flashClust that implements fast hierarchical clustering is now required and used throughout. 2009/01/29: 0.70 . adapted for dynamicTreeCut-1.20: PAM stage in the dynamic tree cut used in blockwise module detection functions can now optionally respect the dendrogram in the sense that an object can be PAM-assigned only to clusters that lie below it on the branch that the object is merged into. Note that this requires dynamicTreeCut 1.20 or higher. . TOM calculation: "Rough guide to max array..." now respects verbose parameter 2009/01/25: 0.67-2 . 
bug in recutBlockwiseTrees fixed (only visible if some genes or samples were bad) . bug in clusterCoef fixed . typos in help files fixed (cutreeStatic[Color]) . drawing of color label rectangles slightly modified in labeledHeatmap to prevent excessively large rectangles 2008/12/23: 0.67-1 . help file for intramodularConnectivity corrected . corPvalueFisher corrected when twoSided=TRUE: used to give p-values inflated by a factor of 2 2008/12/09: 0.67 . addTraitToMEs now works also when the multiMEs only contain the eigengenes but no average expression . labeledBarplot now accepts xLabelsAngle . fixed interpretation of grey name in orderMEs, consensusOrderMEs . scaleTOMs in consensusBlockwiseModules now actually has an effect 2008/12/04: 0.66 . new function exportNetworkToCytoscape 2008/12/04: 0.65-3 . new function networkConcepts . Bug fix in plotEigengeneNetworks when only one set is given . Minor changes and cleanup in help files and printed diagnostics 2008/11/05: 0.65-2 . Bugfix in exportNetworkToVisANT: no more spurious errors when probe to gene name translation table is given. 2008/11/02: 0.65-1 . Bugfix in reassigning genes between modules when KME difference too significant: an empty set of reassign candidates doesn't throw an error anymore. . Several help files added 2008/10/25: 0.65 . added function preservationNetworkAdjacency 2008/10/09: 0.64 . different versions of package impute are now supported. . help files expanded . help files' syntax corrected so all item descriptions are now displayed properly. 2008/10/01: 0.63-1 . bug in [consensus]ProjectiveKMeans fixed that would cause a crash if the center with highest index became empty . numbers2colors resistant to values out of range of the given min and max. . p-value in verboseScatterplot now corresponds to the actual correlation printed instead of the Pearson correlation value. However, the p-values are always calculated assuming the correlation value is actually Pearson. 2008/09/18: 0.63 . 
[automaticN|n]etworkScreening now accept option getQValue that can be used to turn off q value calculations; they also report counts of module eigengenes with gene significances in specific intervals. 2008/09/17: 0.62-1 . C implementation of column quantile (i.e., apply(data, 2, quantile)) giving immense speedup of consensus quantile calculations in new function colQuantileC 2008/09/17: 0.62 . Fixed bugs in blockwiseConsensusModules when disk cache not in use, and when consensusQuantile>0 . Added function exportNetworkToVisANT to export a network to VisANT. . Help files for several functions added 2008/09/13: 0.61 . TOMdist function added, defined simply as 1-TOMsimilarity, for easier migration of older code. 2008/09/10: 0.60-3 . [consensus]projectiveKMeans now resistant to an svd failure, using a weighted mean if svd returns an error. 2008/09/05: 0.60-2 . Bug fix in C implementation of bicor (bad bug!!) 2008/09/04: 0.60-1 . Bug fixes in blockwiseConsensusModules related to TOM sampling 2008/09/02: 0.60 . minCoreKME default now 0.5 (was 0.7 for Consensus functions, which didn't make much sense) . blockwise functions now complete all blocks even if there is a block with no modules detected or with an error during ME calculation. . consensusProjectiveKMeans can now optionally use mean as definition of consensus distance, improving convergence in multiple set situations . several help files added . bicor now implemented consistently in R code and in compiled code: both call the same set of C functions and treat NAs correctly and consistently. Functions bicov and bicorNAy are removed. . pickSoftThreshold now accepts networkType argument . blockwiseConsensusModules can now optionally save samples used for TOM scaling 2008/08/29: 0.55 . Changes of input arguments and output for blockwise[Consensus]Modules . Errors in pre-clustering and calculation of signed networks fixed . New functions for re-cutting dendrograms produced in blockwise[Consensus]Modules . 
Bug in the return value of blockwise[Consensus]Modules fixed: blockGenes now correctly refer to all genes instead of goodGenes; . New function numbers2colors 2008/08/26: 0.50-2 . Fixed bugs in blockwiseConsensusModules causing a crash when module eigengene calculation fails 2008/08/20: 0.50-1 . fixed bug in compiled code bicor1 function that used quick calculation whenever at least one of the x,y had the correct NAs instead of both. . fixed crash in consensusBlockwiseModules when data not checked for missing entries . added info printing when merging clusters in consensusProjectiveKMeans 2008/08/15: 0.50 . [consensus]projectiveKMeans improved: no crash when some clusters become empty, faster . blockwise[Consensus]Modules: additional parameter checkMissingData can optionally turn off checking for missing data when it's not necessary. . a few help files added 2008/08/05 . Added random seed setting to blockwise[Consensus]Modules and [consensus]ProjectiveKMeans for repeatability . Lowered reassignThreshold to 1e-6 to prevent too many reassigned genes. 2008/08/03: 0.42-2 . Fixed bug causing a crash when no modules are detected. 2008/08/02: 0.42-1 . More complete help pages, bug in blockwiseModules fixed (reported by Anatole Ghazalpour) 2008/07/23: 0.41 . Many enhancements, improvements and bugfixes 2008/06/02: . Changes to TOMplot to make the dendrograms on the side more informative . Changes to scaleFreePlot to pass more of ... to the plotting function (removed separate title() command). 2008/05/29: 0.12 . Performance improvements, extra parameters in blockwiseConsensusModules 2008/05/16: 0.11-1 . nearestNeighborConnectivity (and softConnectivity) now work even when there is 1 gene in the last block . More bug fixes. 2008/05/15: 0.11-0 . Numerous bug fixes . blockwise[Consensus]Modules fixed (un-broken) from the change of batch to block. . blockwise[Consensus]Modules get an option to choose the TOM function. 0.1.0: Basic functions in place. 
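The 1.18 DIFF above (p-value in userListEnrichment now gives the probability of overlap as strong or stronger than observed) amounts to using the correct upper tail of the hypergeometric distribution. A minimal sketch of the two conventions, not the package's internal code; the function names and numbers here are illustrative only:

```r
# Corrected tail: P(overlap >= observed). Note "observed - 1" together with
# lower.tail = FALSE, so the observed overlap itself is counted in the tail.
enrichmentP <- function(observed, nList, nCategory, nBackground)
  phyper(observed - 1, nCategory, nBackground - nCategory, nList,
         lower.tail = FALSE)

# Pre-1.18 behavior corresponded to P(overlap > observed), which is smaller
# (anti-conservative), noticeably so for very small categories.
oldEnrichmentP <- function(observed, nList, nCategory, nBackground)
  phyper(observed, nCategory, nBackground - nCategory, nList,
         lower.tail = FALSE)
```

The two differ by exactly dhyper(observed, ...), the probability mass of the observed overlap itself, which dominates the tail when the category is tiny.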
WGCNA/R/branchSplit.R
#====================================================================================================== # # Aligned svd: svd plus aligning the result along the average expression of the input data. # #====================================================================================================== # CAUTION: this function assumes normalized x and no missing values. .alignedFirstPC = function(x, power = 6, verbose = 2, indent = 0) { x = as.matrix(x); #printFlush(paste(".alignedFirstPC: dim(x) = ", paste(dim(x), collapse = ", "))); pc = try( svd(x, nu = 1, nv = 0)$u[,1] , silent = TRUE); if (inherits(pc, 'try-error')) { #file = "alignedFirstPC-svdError-inputData-x.RData"; #save(x, file = file); #stop(paste("Error in .alignedFirstPC: svd failed with following message: \n ", # pc, "\n. Saving the offending data into file", file)); if (verbose > 0) { spaces = indentSpaces(indent); printFlush(paste(spaces, ".alignedFirstPC: FYI: svd failed, using a weighted mean instead.\n", spaces, " ...svd reported:", pc)) } pc = rowMeans(x, na.rm = TRUE); weight = matrix(abs(cor(x, pc, use = 'p'))^power, nrow(x), ncol(x), byrow = TRUE); pc = scale(rowMeans(x * weight, na.rm = TRUE)); } else { weight = abs(cor(x, pc))^power; meanX = rowWeightedMeans(x, weight, na.rm = TRUE); cov1 = cov(pc, meanX); if (!is.finite(cov1)) cov1 = 0; if (cov1 < 0) pc = -pc; } pc; } #========================================================================================================== # # Branch eigengene split (dissimilarity) calculation # #========================================================================================================== # Assumes correct input: multiExpr is scaled to mean 0 and variance 1, branch1 and branch2 are numeric # indices that have no overlap. 
branchEigengeneDissim = function(expr, branch1, branch2, corFnc = cor, corOptions = list(use = 'p'), signed = TRUE, ...) { expr.branch1 = expr[ , branch1]; expr.branch2 = expr[ , branch2]; corOptions$x = .alignedFirstPC(expr.branch1, verbose = 0); corOptions$y = .alignedFirstPC(expr.branch2, verbose = 0); corFnc = match.fun(corFnc); cor0 = as.numeric(do.call(corFnc, corOptions)); if (length(cor0) != 1) stop ("Internal error in branchEigengeneDissim: cor has length ", length(cor0)); if (signed) 1-cor0 else 1-abs(cor0); } mtd.branchEigengeneDissim = function(multiExpr, branch1, branch2, corFnc = cor, corOptions = list(use = 'p'), consensusQuantile = 0, signed = TRUE, reproduceQuantileError = FALSE, ...) { setSplits.list = mtd.apply(multiExpr, branchEigengeneDissim, branch1 = branch1, branch2 = branch2, corFnc = corFnc, corOptions = corOptions, signed = signed, returnList = TRUE); setSplits = unlist(setSplits.list); quantile(setSplits, prob = if (reproduceQuantileError) consensusQuantile else 1-consensusQuantile, na.rm = TRUE, names = FALSE); } branchEigengeneSimilarity = function(expr, branch1, branch2, networkOptions, returnDissim = TRUE, ...) { corFnc = match.fun(networkOptions$corFnc); cor0 = as.numeric(do.call(corFnc, c(list(x = .alignedFirstPC(expr[, branch1], verbose = 0), y = .alignedFirstPC(expr[, branch2], verbose = 0)), networkOptions$corOptions))); if (length(cor0) != 1) stop ("Internal error in branchEigengeneSimilarity: cor has length ", length(cor0)); if (grepl("signed", networkOptions$networkType)) cor0 = abs(cor0); if (returnDissim) 1-cor0 else cor0; } hierarchicalBranchEigengeneDissim = function( multiExpr, branch1, branch2, networkOptions, consensusTree, ...) 
{ setSplits.list = mtd.mapply(branchEigengeneSimilarity, expr = multiExpr, networkOptions = networkOptions, MoreArgs = list( branch1 = branch1, branch2 = branch2, returnDissim = FALSE), returnList = TRUE) 1 - simpleHierarchicalConsensusCalculation(individualData = setSplits.list, consensusTree = consensusTree) } #========================================================================================================== # # Branch split calculation # #========================================================================================================== # Caution: highly experimental! # Define a function that's supposed to decide whether two given groups of expression data are part of a # single module, or truly two independent modules. # assumes no missing values for now. # assumes all data is scaled to mean zero and equal variance. # return: criterion is zero or near zero if it looks like a single module, and is near 1 if looks like two # modules. # Careful: in the interest of speedy execution, the function doesn't check arguments for validity. For # example, it assumes that expr is already scaled to the same mean and variance, branch1 and branch2 are valid # indices, nConsideredPCs does not exceed any of the dimensions of expr etc. .histogramsWithCommonBreaks = function(data, groups, discardProp = 0.08) { if (is.list(data)) { lengths = sapply(data, length); data = data[lengths>0]; lengths = sapply(data, length); nGroups = length(lengths) groups = rep( c(1:nGroups), lengths); data = unlist(data); } if (discardProp > 0) { # get rid of outliers on either side - those are most likely not interesting. # The code is somewhat involved because I want to get rid of outliers that are defined with respect to # the combined data, but no more than a certain proportion of either of the groups. 
sizes = table(groups); nAll = length(data); order = order(data) ordGrp = groups[order]; cs = rep(0, nAll); nGroups = length(sizes); for (g in 1:nGroups) cs[ordGrp==g] = ((1:sizes[g])-0.5)/sizes[g]; firstKeep = min(which(cs > discardProp)); first = data[order[firstKeep]]; # Analogous upper quantile lastKeep = max(which(cs < 1-discardProp)); last = data[order[lastKeep]]; keep = ( (data >= first) & (data <= last) ); data = data[keep]; groups = groups[keep]; } else { last = max(data, na.rm = TRUE); first = min(data, na.rm = TRUE); } # Find the smaller of the two groups and define histogram bin size from the number of elements in that # group; the aim is to prevent the group getting splintered into too many bins of the histogram. sizes = table(groups); smallerInd = which.min(sizes); smallerSize = sizes[smallerInd]; nBins = ceiling(5 + ifelse(smallerSize > 25, sqrt(smallerSize)-4, 1 )); smaller = data[groups==smallerInd] binSize = (max(smaller) - min(smaller))/nBins; nAllBins = ceiling((last-first)/binSize); breaks = first + c(0:nAllBins) * (last - first)/nAllBins tapply(data, groups, hist, breaks = breaks, plot = FALSE); } branchSplit = function(expr, branch1, branch2, discardProp = 0.05, minCentralProp = 0.75, nConsideredPCs = 3, signed = FALSE, getDetails = TRUE, ...) 
{ nGenes = c(length(branch1), length(branch2)); #combined = cbind(expr[, branch1], expr[, branch2]); combinedScaled = cbind(expr[, branch1]/sqrt(length(branch1)), expr[, branch2]/sqrt(length(branch2))); groups = c(rep(1, nGenes[1]), rep(2, nGenes[2]) ); # get the combination of PCs that best approximates the groups vector svd = svd(combinedScaled, nu = 0, nv = nConsideredPCs); v2 = svd$v * c( rep(sqrt(length(branch1)), length(branch1)), rep(sqrt(length(branch2)), length(branch2))); #svd = svd(combinedScaled, nu = nConsideredPCs, nv = 0); #v2 = cor(combinedScaled, svd$u); if (!signed) v2 = v2 * sign(v2[, 1]); cor2 = predict(lm(groups~., data = as.data.frame(v2))); # get the histograms of the projections in both groups, but make sure the binning is the same for both. # get rid of outliers on either side - those are most likely not interesting. # The code is somewhat involved because I want to get rid of outliers that are defined with respect to # the combined data, but no more than a certain proportion of either of the groups. h = .histogramsWithCommonBreaks(cor2, groups, discardProp); maxAll = max(c(h[[1]]$counts, h[[2]]$counts)); h[[1]]$counts = h[[1]]$counts/maxAll h[[2]]$counts = h[[2]]$counts/maxAll; max1 = max(h[[1]]$counts) max2 = max(h[[2]]$counts) minMax = min(max1, max2) if (FALSE) { plot(h[[1]]$mids, h[[1]]$counts, type = "l"); lines(h[[2]]$mids, h[[2]]$counts, type = "l", col = "red") lines(h[[2]]$mids, h[[1]]$counts + h[[2]]$counts, type = "l", col = "blue") } # Locate "central" bins: those whose scaled counts exceed a threshold. central = list(); central[[1]] = h[[1]]$counts > minCentralProp * minMax; central[[2]] = h[[2]]$counts > minCentralProp * minMax; # Do central bins overlap? 
overlap = (min(h[[1]]$mids[central[[1]]]) <= max(h[[2]]$mids[central[[2]]])) & (min(h[[2]]$mids[central[[2]]]) <= max(h[[1]]$mids[central[[1]]])); if (overlap) { result = list(middleCounts = NULL, criterion = minCentralProp, split = -1, histograms = h); } else { # Locate the region between the two central regions and check whether the gap is deep enough. if (min(h[[1]]$mids[central[[1]]]) > max(h[[2]]$mids[central[[2]]])) { left = 2; right = 1; } else { left = 1; right = 2; } leftEdge = max(h[[left]]$mids[central[[left]]]); rightEdge = min(h[[right]]$mids[central[[right]]]); middle = ( (h[[left]]$mids > leftEdge) & (h[[left]]$mids < rightEdge) ); nMiddle = sum(middle); if (nMiddle==0) { result = list(middleCounts = NULL, criterion = minCentralProp, split = -1, histograms = h); } else { # Reference level: 75th percentile of the central region of the smaller branch #refLevel1 = quantile(h[[1]]$counts [ central[[1]] ], prob = 0.75); #refLevel2 = quantile(h[[2]]$counts [ central[[2]] ], prob = 0.75) refLevel1 = mean(h[[1]]$counts [ central[[1]] ], na.rm = TRUE); refLevel2 = mean(h[[2]]$counts [ central[[2]] ], na.rm = TRUE) peakRefLevel = min(refLevel1, refLevel2); middleCounts = h[[left]]$counts[middle] + h[[right]]$counts[middle]; #troughRefLevel = quantile(middleCounts, prob = 0.25) troughRefLevel = mean(middleCounts, na.rm = TRUE) meanCorrFactor = sqrt(min(nMiddle + 1, 3) / min(nMiddle, 3)) # =sqrt(2, 3/2, 1), for nMiddle=1,2,3,.. 
result = list(middleCounts = middleCounts, criterion = troughRefLevel * meanCorrFactor, split = (peakRefLevel - troughRefLevel * meanCorrFactor)/peakRefLevel, histograms = h); } } if (getDetails) result else result$split } #========================================================================================================== # # Dissimilarity-based branch split # #========================================================================================================== .meanInRange = function(mat, rangeMat) { nc = ncol(mat); means = rep(0, nc); for (c in 1:nc) { col = mat[, c]; means[c] = mean( col[col >=rangeMat[c, 1] & col <= rangeMat[c, 2]], na.rm= TRUE); } means; } .sizeDependentQuantile = function(p, sizes, minNumber = 5) { refSize = minNumber/p; correctionFactor = pmin( rep(1, length(sizes)), sizes/refSize); pmin(rep(1, length(sizes)), p/correctionFactor); } branchSplit.dissim = function(dissimMat, branch1, branch2, upperP, minNumberInSplit = 5, getDetails = FALSE, ...) { lowerP = 0; sizes = c(length(branch1), length(branch2)); upperP = .sizeDependentQuantile(upperP, sizes, minNumber = minNumberInSplit); multiP = as.data.frame(rbind(rep(0, 2), upperP)); outDissim = list(list(data = dissimMat[branch2, branch1]), list(data = dissimMat[branch1, branch2])); quantiles = mtd.mapply(colQuantiles, outDissim, probs = multiP, MoreArgs = list(drop = FALSE)); averages = mtd.mapply(.meanInRange, outDissim, quantiles); averageQuantiles = mtd.mapply(quantile, averages, prob = multiP, MoreArgs = list(drop = FALSE)); betweenQuantiles = mtd.mapply(function(x, quantiles) { x>=quantiles[1] & x <=quantiles[2]}, averages, averageQuantiles); selectedDissim = list(list(data = dissimMat[branch1, branch1[betweenQuantiles[[1]]$data] ]), list(data = dissimMat[branch2, branch1[ betweenQuantiles[[1]]$data] ]), list(data = dissimMat[branch2, branch2[betweenQuantiles[[2]]$data]]), list(data = dissimMat[branch1, branch2[betweenQuantiles[[2]]$data]])); #n1 = length(branch1); #m1 = 
sum(betweenQuantiles[[1]]$data); #indexMat = cbind((1:n1)[betweenQuantiles[[1]]$data], 1:m1); # Remove the points nearest to branch 2 from the distances in branch 1 selectedDissim[[1]]$data[ betweenQuantiles[[1]]$data, ] = NA; #n2 = length(branch2); #m2 = sum(betweenQuantiles[[2]]$data); #indexMat = cbind((1:n2)[betweenQuantiles[[2]]$data], 1:m2); selectedDissim[[3]]$data[ betweenQuantiles[[2]]$data, ] = NA; multiP.ext = cbind(multiP, multiP[, c(2,1)]); selectedDissimQuantiles = mtd.mapply(colQuantiles, selectedDissim, probs = multiP.ext, MoreArgs = list( drop = FALSE, na.rm = TRUE)); selectedAverages = mtd.mapply(.meanInRange, selectedDissim, selectedDissimQuantiles); if (FALSE) { par(mfrow = c(1,2)) verboseBoxplot(c(selectedAverages[[1]]$data, selectedAverages[[2]]$data), c( rep("in", length(selectedAverages[[1]]$data)), rep("out", length(selectedAverages[[2]]$data))), main = "branch 1", xlab = "", ylab = "mean distance") verboseBoxplot(c(selectedAverages[[3]]$data, selectedAverages[[4]]$data), c( rep("in", length(selectedAverages[[3]]$data)), rep("out", length(selectedAverages[[4]]$data))), main = "branch 2", xlab = "", ylab = "mean distance") } separation = function(x, y) { nx = length(x); ny = length(y); if (nx*ny==0) return(0); mx = mean(x, na.rm = TRUE); my = mean(y, na.rm = TRUE); if (!is.finite(mx) | !is.finite(my)) return(0); if (nx > 1) varx = var(x, na.rm = TRUE) else varx = 0; if (ny > 0) vary = var(y, na.rm = TRUE) else vary = 0; if (is.na(varx)) varx = 0; if (is.na(vary)) vary = 0; if (varx + vary == 0) { if (my==mx) return(0) else return(Inf); } out = abs(my-mx)/(sqrt(varx) + sqrt(vary)); if (is.na(out)) out = 0; out } separations = c(separation(selectedAverages[[1]]$data, selectedAverages[[2]]$data), separation(selectedAverages[[3]]$data, selectedAverages[[4]]$data)); out = max(separations, na.rm = TRUE); if (is.na(out)) out = 0; if (out < 0) out = 0; if (getDetails) { return(list(split = out, distances = list(within1 = selectedAverages[[1]]$data, 
from1to2 = selectedAverages[[2]]$data, within2 = selectedAverages[[3]]$data, from2to1 = selectedAverages[[4]]$data))); } out } #======================================================================================================== # # Branch dissimilarity based on a series of alternate branch labels # #======================================================================================================== # this function measures the split of branch1 and branch2 based on alternate labels, typically derived from # resampled or otherwise perturbed data (but could also be derived from an independent data set). # Basic idea: if two branches are separate, their membership should be predictable from the alternate # labels. # This function takes the l-th stability labels, finds ones that overlap with both branches, and for each # label calculates the contribution to similarity as # r1 = sum(lab1==cl)/n1; # r2 = sum(lab2==cl)/n2; # sim = sim + min(r1, r2) # This will penalize similarity of a small and large module if the large module is a composite of several # branches, only few of which overlap with the small module. # This method is invariant under splitting of alternate module as long as the branch to which the modules are # assigned does not change. So in this sense the splitting settings in the resampling study shouldn't # matter too much but to some degree they still do. # stabilityLabels: a matrix of dimensions (nGenes) x (number of alternate labels) branchSplitFromStabilityLabels = function( branch1, branch2, stabilityLabels, ignoreLabels = 0, ...) 
{
  nLabels = ncol(stabilityLabels);
  n1 = length(branch1);
  n2 = length(branch2);
  sim = 0;
  for (l in 1:nLabels)
  {
    lab1 = stabilityLabels[branch1, l];
    lab2 = stabilityLabels[branch2, l];
    commonLevels = intersect(unique(lab1), unique(lab2));
    commonLevels = setdiff(commonLevels, ignoreLabels);
    if (length(commonLevels) > 0) for (cl in commonLevels)
    {
      r1 = sum(lab1==cl)/n1;
      r2 = sum(lab2==cl)/n2;
      sim = sim + min(r1, r2)
    }
  }
  1-sim/nLabels
}

# Resurrect the old idea of prediction accuracy. For each overlap module, count the genes in the branch
# with which the module has the smaller overlap and add the count to the score for that branch. The final
# counts divided by the number of genes on the branch give an "indistinctness" score; take the larger of
# the two indistinctness scores and call this the similarity.
# This method is still more or less invariant under splitting of the stability modules, as long as the
# splitting is random with respect to the two branches.
# Note that one could in principle run a chisq.test on the table of labels corresponding to branch1 and
# branch2 vs. stabilityLabels restricted to branch1 and branch2.
# The problem here is that small changes in stability labels could make a big difference in the final
# (dis)similarity when one module is large and the other small. Assume a few of the stability labels cover
# the small module and a part of the large module, while other stability labels cover the rest of the
# large module. If the common stability labels cover a bit more of the large than the small module,
# similarity will be high; if they switch more to the smaller module, similarity could be near zero.
# In summary, this function may be used for experiments but should not be used in a production setting.
branchSplitFromStabilityLabels.prediction = function(
  branch1, branch2, stabilityLabels, ignoreLabels = 0, ...)
{ nLabels = ncol(stabilityLabels); n1 = length(branch1); n2 = length(branch2); s1 = s2 = 0; for (l in 1:nLabels) { lab1 = stabilityLabels[branch1, l]; lab2 = stabilityLabels[branch2, l]; commonLevels = intersect(lab1, lab2); commonLevels = setdiff(commonLevels, ignoreLabels); if (length(commonLevels) > 0) for (cl in commonLevels) { #printFlush(spaste("Common level ", cl, " in clustering ", l)) o1 = sum(lab1==cl); o2 = sum(lab2==cl); if (o1 > o2) { s2 = s2 + o2; } else s1 = s1 + o1; } } indist1 = s1/(n1 * nLabels); indist2 = s2/(n2 * nLabels); sim = min(1, 2*max(indist1, indist2)); dissim = 1-sim #printFlush(spaste("branchSplitFromStabilityLabels.prediction: returning ", dissim)) dissim } # Third idea: for each branch, for each gene sum the fraction of the stability label (restricted to the two # branches) that belongs to the branch. If this is relatively low, around 0.5, it means most elements are in # non-discriminative stability labels. branchSplitFromStabilityLabels.individualFraction= function( branch1, branch2, stabilityLabels, ignoreLabels = 0, verbose = 1, indent = 0, ...) 
{
  nLabels = ncol(stabilityLabels);
  n1 = length(branch1);
  n2 = length(branch2);
  s1 = s2 = 0;
  for (l in 1:nLabels)
  {
    lab1 = stabilityLabels[branch1, l];
    lab2 = stabilityLabels[branch2, l];
    commonLevels = intersect(lab1, lab2);
    commonLevels = setdiff(commonLevels, ignoreLabels);
    s1.all = n1; s2.all = n2;
    for (cl in commonLevels)
    {
      o1 = sum(lab1==cl, na.rm = TRUE);
      o2 = sum(lab2==cl, na.rm = TRUE);
      o12 = o1 + o2;
      coef1 = max(0.5, o1/o12);
      coef2 = max(0.5, o2/o12);
      s2 = s2 + o2*coef2;
      s1 = s1 + o1*coef1;
      s1.all = s1.all - o1;
      s2.all = s2.all - o2;
    }
    s1 = s1 + s1.all;
    s2 = s2 + s2.all;
  }
  distinctness1 = 2*s1/(n1 * nLabels) - 1;
  distinctness2 = 2*s2/(n2 * nLabels) - 1;
  dissim = min(distinctness1, distinctness2)
  if (verbose > 0)
  {
    spaces = indentSpaces(indent);
    printFlush(spaste(spaces, "branchSplitFromStabilityLabels.individualFraction: returning ", dissim))
  }
  dissim
}

#==================================================================================================================
#
# Criteria for calling a branch a cluster when a basic (non-composite) branch ends at the cut height
#
#==================================================================================================================

# Very simple criterion: count the proportion of grey (unassigned) genes on the branch.
.branchCritFromStabilityLabels = function(branch, stabilityLabels, unassignedLabel = 0, ...)
{
  lab1 = stabilityLabels[branch, ];
  mean(lab1==unassignedLabel, na.rm = TRUE)
}

#==========================================================================================================
# File: WGCNA/R/returnGeneSetsAsList.R
#==========================================================================================================

returnGeneSetsAsList <- function (fnIn = NULL, catNmIn = fnIn, useBrainLists = FALSE,
    useBloodAtlases = FALSE, useStemCellLists = FALSE, useBrainRegionMarkers = FALSE,
    useImmunePathwayLists = FALSE, geneSubset=NULL)
{
  if (length(catNmIn) < length(fnIn)) {
    catNmIn = c(catNmIn, fnIn[(length(catNmIn) + 1):length(fnIn)])
    write("WARNING: not enough category names. \n\t\t\t Naming remaining categories with file names.", "")
  }
  if (is.null(fnIn) & (!(useBrainLists | useBloodAtlases | useStemCellLists |
      useBrainRegionMarkers | useImmunePathwayLists)))
    stop("Either enter user-defined lists or set one of the use_____ parameters to TRUE.")
  glIn = NULL
  if (length(fnIn) > 0) {
    for (i in 1:length(fnIn)) {
      ext = substr(fnIn[i], nchar(fnIn[i]) - 2, nchar(fnIn[i]))
      if (ext == "csv") {
        datIn = read.csv(fnIn[i])
        if (colnames(datIn)[2] == "Gene") {
          datIn = datIn[, 2:3]
        } else {
          datIn = datIn[, 1:2]
        }
      } else {
        datIn = scan(fnIn[i], what = "character", sep = "\n")
        datIn = cbind(datIn[2:length(datIn)], datIn[1])
      }
      colnames(datIn) = c("Gene", "Category")
      datIn[, 2] = paste(datIn[, 2], catNmIn[i], sep = "__")
      glIn = rbind(glIn, datIn)
    }
    glIn = cbind(glIn, Type = rep("User", nrow(glIn)))
  }
  if (useBrainLists) {
    if (!(exists("BrainLists"))) BrainLists = NULL
    data("BrainLists", envir = sys.frame(sys.nframe()))
    write("See userListEnrichment help file for details regarding brain list references.", "")
    glIn = rbind(glIn, cbind(BrainLists, Type = rep("Brain", nrow(BrainLists))))
  }
  if (useBloodAtlases) {
    if (!(exists("BloodLists"))) BloodLists = NULL
    data("BloodLists", envir = sys.frame(sys.nframe()))
    write("See userListEnrichment help file for details regarding blood atlas references.", "")
    glIn = rbind(glIn, cbind(BloodLists, Type = rep("Blood", nrow(BloodLists))))
  }
  if (useStemCellLists) {
    if (!(exists("SCsLists")))
      SCsLists = NULL
    data("SCsLists", envir = sys.frame(sys.nframe()))
    write("See userListEnrichment help file for details regarding stem cell list references.", "")
    glIn = rbind(glIn, cbind(SCsLists, Type = rep("StemCells", nrow(SCsLists))))
  }
  if (useBrainRegionMarkers) {
    if (!(exists("BrainRegionMarkers"))) BrainRegionMarkers = NULL
    data("BrainRegionMarkers", envir = sys.frame(sys.nframe()))
    write("Brain region markers from http://human.brain-map.org/ -- See userListEnrichment help file for details.", "")
    glIn = rbind(glIn, cbind(BrainRegionMarkers, Type = rep("HumanBrainRegions", nrow(BrainRegionMarkers))))
  }
  if (useImmunePathwayLists) {
    if (!(exists("ImmunePathwayLists"))) ImmunePathwayLists = NULL
    data("ImmunePathwayLists", envir = sys.frame(sys.nframe()))
    write("See userListEnrichment help file for details regarding immune pathways.", "")
    glIn = rbind(glIn, cbind(ImmunePathwayLists, Type = rep("Immune", nrow(ImmunePathwayLists))))
  }
  removeDups = unique(paste(as.character(glIn[, 1]), as.character(glIn[, 2]),
                            as.character(glIn[, 3]), sep = "@#$%"))
  if (length(removeDups) < length(glIn[, 1]))
    glIn = t(as.matrix(as.data.frame(strsplit(removeDups, "@#$%", fixed = TRUE))))
  geneIn = as.character(glIn[, 1])
  labelIn = paste(as.character(glIn[, 2]), as.character(glIn[, 3]), sep="__")
  if (!is.null(geneSubset)) {
    keep = is.element(geneIn, geneSubset)
    geneIn = geneIn[keep]
    labelIn = labelIn[keep]
  }
  if (length(geneIn) < 2)
    stop("Please include a larger geneSubset, or set geneSubset=NULL.")
  allLabels <- sort(unique(labelIn))
  geneSet <- list()
  for (i in 1:length(allLabels)) geneSet[[i]] = geneIn[labelIn==allLabels[i]]
  names(geneSet) = allLabels
  return(geneSet)
}

#==========================================================================================================
# File: WGCNA/R/adjacency.splineReg.R
#==========================================================================================================

adjacency.splineReg = function(datExpr, df = 6-(nrow(datExpr)<100)-(nrow(datExpr)<30),
                               symmetrizationMethod = "mean", ...)
{
  if (!is.element(symmetrizationMethod, c("none", "min", "max", "mean"))) {
    stop("Unrecognized symmetrization method.")
  }
  datExpr = matrix(as.numeric(as.matrix(datExpr)), nrow(datExpr), ncol(datExpr))
  n = ncol(datExpr)
  splineRsquare = matrix(NA, n, n)
  for (i in 2:n) {
    for (j in 1:(i-1)) {
      del = is.na(datExpr[, i]+datExpr[, j])
      if (sum(del)>=(n-1) | var(datExpr[, i], na.rm=T)==0 | var(datExpr[, j], na.rm=T)==0) {
        splineRsquare[i, j] = splineRsquare[j, i] = NA
      } else {
        dati = datExpr[!del, i];
        datj = datExpr[!del, j];
        lmSij = glm( dati ~ ns( datj, df = df, ...))
        splineRsquare[i, j] = cor( dati, predict(lmSij))^2
        lmSji = glm( datj ~ ns( dati, df = df, ...))
        splineRsquare[j, i] = cor( datj, predict(lmSji))^2
        rm(dati, datj, lmSij, lmSji)
      }
    }
  }
  diag(splineRsquare) = rep(1, n)
  if (symmetrizationMethod == "none") {
    adj = splineRsquare
  } else {
    adj = switch(symmetrizationMethod,
                 min = pmin(splineRsquare, t(splineRsquare)),
                 max = pmax(splineRsquare, t(splineRsquare)),
                 mean = (splineRsquare + t(splineRsquare))/2)
  }
  adj
}

#==========================================================================================================
# File: WGCNA/R/consensusDissTOMandTree.R
#==========================================================================================================

# Makes a consensus network using all of the default values in the WGCNA library.
consensusDissTOMandTree <- function (multiExpr, softPower, TOM=NULL){ nGenes = dim(multiExpr[[1]]$data)[2] nSets = length(multiExpr) if(is.null(TOM)){ adjacencies <- TOM <- list() for (set in 1:nSets){ adjacencies[[set]] = adjacency(multiExpr[[set]]$data,power=softPower,type="signed"); diag(adjacencies[[set]])=0 write(paste("Adjacency, set",set),"") TOM[[set]] = TOMsimilarity(adjacencies[[set]], TOMType="signed"); write(paste("Similarity, set",set),"") gc(); } } nSets = length(TOM) set.seed(12345); scaleP = 0.95; nSamples = as.integer(1/(1-scaleP) * 1000); scaleSample = sample(nGenes*(nGenes-1)/2, size = nSamples) TOMScalingSamples = list(); scaleQuant <- scalePowers <- rep(1, nSets) for (set in 1:nSets){ TOMScalingSamples[[set]] = as.dist(TOM[[set]])[scaleSample] scaleQuant[set] = quantile(TOMScalingSamples[[set]],probs = scaleP, type = 8); if (set>1){ scalePowers[set] = log(scaleQuant[1])/log(scaleQuant[set]); TOM[[set]] = TOM[[set]]^scalePowers[set]; } write(paste("Scaling, set",set),"") } half = round(nGenes/2); haP1 = half+1 kp = list(list(c(1:half),c(1:half)),list(c(1:half),c(haP1:nGenes)), list(c(haP1:nGenes),c(1:half)),list(c(haP1:nGenes),c(haP1:nGenes))) consensusTOMi = list() for (i in 1:4){ a = kp[[i]][[1]]; b = kp[[i]][[2]] consensusTOMi[[i]] = TOM[[1]][a,b] for (j in 2:nSets) consensusTOMi[[i]] = pmin(consensusTOMi[[i]], TOM[[j]][a,b]); write(paste(i,"of 4 iterations in pMin"),"") } consensusTOM = rbind(cbind(consensusTOMi[[1]],consensusTOMi[[2]]), cbind(consensusTOMi[[3]],consensusTOMi[[4]])) rownames(consensusTOM) <- colnames(consensusTOM) <- colnames(multiExpr[[1]]$data) consensusTOM = 1-consensusTOM write("Starting dendrogram tree.","") consTree = fastcluster::hclust(as.dist(consensusTOM), method = "average"); write("DONE!!!!","") out = list(consensusTOM,consTree) names(out) = c("consensusTOM","consTree") return(out) } .collect_garbage <- function(){while (gc()[2,4] != gc()[2,4] | gc()[1,4] != gc()[1,4]){}} 
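# A minimal usage sketch for consensusDissTOMandTree above, assuming the WGCNA package (and its
# fastcluster dependency) is installed and attached. The set, sample, and gene counts and the soft
# power are arbitrary illustrative choices, not recommendations; note that the internal quantile
# calibration samples 20000 gene pairs, so the example uses enough genes for that to be possible.

```r
library(WGCNA)

# Simulate two small expression data sets in the multiExpr format the function expects:
# a list of components, each with a $data matrix of samples (rows) x genes (columns).
set.seed(1)
nSamples <- 20
nGenes   <- 300   # needs nGenes*(nGenes-1)/2 >= 20000 for the internal calibration sampling
geneNames <- paste0("gene", 1:nGenes)
multiExpr <- list(
  set1 = list(data = matrix(rnorm(nSamples * nGenes), nSamples, nGenes,
                            dimnames = list(NULL, geneNames))),
  set2 = list(data = matrix(rnorm(nSamples * nGenes), nSamples, nGenes,
                            dimnames = list(NULL, geneNames))))

# Build the consensus dissimilarity TOM and the corresponding average-linkage dendrogram.
res <- consensusDissTOMandTree(multiExpr, softPower = 6)

dim(res$consensusTOM)                      # nGenes x nGenes dissimilarity (1 - consensus TOM)
plot(res$consTree, labels = FALSE, main = "Consensus gene dendrogram")
```

The function computes signed adjacencies and TOMs per set (unless a precomputed TOM list is
supplied), quantile-scales sets 2..n to match set 1 at the 95th percentile, and takes the
element-wise minimum across sets as the consensus, in four blocks to limit memory use.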
#==========================================================================================================
# File: WGCNA/R/collapseRowsUsingKME.R
#==========================================================================================================

# This function chooses a single probe per gene based on a kME table.
collapseRowsUsingKME <- function (MM, Gin, Pin=NULL, kMEcols = 1:dim(MM)[2])
{
  if (is.null(Pin)) Pin = rownames(MM)
  rownames(MM) = Pin
  Gout = as.character(sort(unique(Gin)))
  cors = MM[, kMEcols]
  maxC = apply(cors, 1, max)
  MMout = matrix(0, nrow = length(Gout), ncol = dim(MM)[2])
  colnames(MMout) = colnames(MM)
  rownames(MMout) = Gout
  MM = as.matrix(MM)
  keepThese = NULL
  for (g in 1:length(Gout)) {
    maxCg = maxC
    maxCg[Gin!=Gout[g]] = -1000
    keep = which(maxCg==max(maxCg))[1]
    MMout[g, ] = MM[keep, ]
    keepThese = c(keepThese, keep)
  }
  group2Row = cbind(Gout, Pin[keepThese])
  colnames(group2Row) = c("group", "selectedRowID")
  selectedRow = is.element(1:length(Pin), keepThese)
  out = list(MMout, group2Row, selectedRow)
  names(out) = c("MMcollapsed", "group2Row", "selectedRow")
  return(out)
}

#==========================================================================================================
# File: WGCNA/R/populationMeansInAdmixture.R
#==========================================================================================================

populationMeansInAdmixture <- function(datProportions, datE.Admixture, scaleProportionsTo1 = TRUE,
    scaleProportionsInCelltype = TRUE, setMissingProportionsToZero = FALSE)
{
  datProportions = as.matrix(datProportions)
  if (dim(datE.Admixture)[[1]] != dim(as.matrix(datProportions))[[1]])
    stop("Input error. The numbers of samples are not congruent: dim(datE.Admixture)[[1]] is unequal to dim(datProportions)[[1]]. Hint: Consider transposing one of the matrices.")
  noMissing = apply(is.na(datProportions), 1, sum)
  if (max(noMissing) > 0) {
    warning(paste("Urgent Warning: datProportions contains missing proportions in the following row(s):",
      paste(which(noMissing > 0), collapse = ","),
      "\nCheck these rows in datProportions.
But for your convenience, I will proceed"))
  }
  if (setMissingProportionsToZero) {
    datProportions[is.na(datProportions)] = 0
  }
  noNegative = apply((datProportions) < 0, 1, sum, na.rm = T)
  if (max(noNegative) > 0) {
    stop(paste("datProportions contains negative numbers. Negative proportions can be found in the following row(s):",
      paste(which(noNegative > 0), collapse = ","),
      "\nCheck these rows in datProportions."))
  }
  sumsTo1 = TRUE
  sumProp = apply(datProportions, 1, sum, na.rm = TRUE)
  if (max(sumProp, na.rm = T) > 1.0000001) {
    sumsTo1 = FALSE
    if (scaleProportionsTo1) {
      warning(paste("The sum of proportions is larger than 1 for some rows including:",
        paste(which(sumProp > 1.0000001)[1:5], collapse = ","),
        "\nCheck these rows in datProportions. By default, I will scale them so that they sum to 1.\nBut if you do not want this scaling, please set scaleProportionsTo1=FALSE .\n"))
    }
  }
  if (min(sumProp, na.rm = T) < 0.99999) {
    sumsTo1 = FALSE
    if (scaleProportionsTo1) {
      warning(paste("The sum of proportions is smaller than 1 for some rows including:",
        paste(which(sumProp < 0.99999)[1:5], collapse = ","),
        "\nCheck these rows in datProportions. By default, I will scale them so that they sum to 1.\nBut if you do not want this scaling, please set scaleProportionsTo1=FALSE .\n"))
    }
  }
  if (scaleProportionsTo1) {
    sumsTo1 = TRUE
    for (i in 1:dim(datProportions)[1]) {
      datProportions[i, ] = datProportions[i, ]/sum(datProportions[i, ], na.rm = T)
    }
  }
  if (sumsTo1) {
    if (scaleProportionsInCelltype) {
      # Center each cell-type column of the proportions.
      for (ci in 1:dim(datProportions)[2])
        datProportions[, ci] = datProportions[, ci] - mean(datProportions[, ci])
    }
    fit1 = lm(as.matrix(datE.Admixture) ~ .
- 1, data = data.frame(datProportions))
    datPredictedMeans = t(as.matrix(fit1$coefficients))
  }
  if (!sumsTo1) {
    if (scaleProportionsInCelltype) {
      # Center each cell-type column of the proportions.
      for (ci in 1:dim(datProportions)[2])
        datProportions[, ci] = datProportions[, ci] - mean(datProportions[, ci])
    }
    fit1 = lm(as.matrix(datE.Admixture) ~ ., data = data.frame(datProportions))
    if (dim(as.matrix(datProportions))[[2]] == 1) {
      datPredictedMeans = (matrix(fit1$coefficients[-1, ], ncol = 1))
    } else {
      datPredictedMeans = t(as.matrix(fit1$coefficients[-1, ]))
    }
  }
  dimnames(datPredictedMeans)[[1]] = dimnames(datE.Admixture)[[2]]
  if (is.null(dimnames(datPredictedMeans)[[2]])) {
    dimnames(datPredictedMeans)[[2]] = paste("Mean", 1:dim(datPredictedMeans)[[2]], sep = ".")
  } else {
    dimnames(datPredictedMeans)[[2]] = paste("Mean", dimnames(datPredictedMeans)[[2]], sep = ".")
  }
  datPredictedMeans
}

#==========================================================================================================
# File: WGCNA/R/coClustering.R
#==========================================================================================================

# coClustering and a permutation test for it
coClustering = function(clusters.ref, clusters.test, tupletSize=2, unassignedLabel=0)
{
  overlap = overlapTable(clusters.test, clusters.ref)
  greyRow = rownames(overlap$countTable)==unassignedLabel;
  greyCol = colnames(overlap$countTable)==unassignedLabel;
  refModSizes = table(clusters.ref);
  ccNumer = apply(overlap$countTable[!greyRow, !greyCol, drop = FALSE], 2, choose, tupletSize);
  ccDenom = choose(refModSizes[!greyCol], tupletSize)
  apply(ccNumer, 2, sum)/ccDenom;
}

coClustering.permutationTest = function(clusters.ref, clusters.test, tupletSize=2, nPermutations = 100,
    unassignedLabel=0, randomSeed = 12345, verbose = 0, indent = 0)
{
  spaces = indentSpaces(indent);
  if (!is.null(randomSeed)) set.seed(randomSeed);
  observed = coClustering(clusters.ref, clusters.test, tupletSize, unassignedLabel);
  nModules = length(observed)
  permValues = matrix(NA, nPermutations, nModules);
  if (verbose > 0)
    pind = initProgInd(spaste(spaces, "Running permutations: "), " done");
  for (p in 1:nPermutations)
  {
    ctPerm = sample(clusters.test);
    permValues[p, ] = as.numeric(coClustering(clusters.ref, ctPerm, tupletSize, unassignedLabel));
    if (verbose > 0) pind = updateProgInd(p/nPermutations, pind);
  }
  if (verbose > 0) printFlush("");
  means = colMeans(permValues);
  sds = apply(permValues, 2, sd, na.rm = TRUE);
  list(observed = observed,
       Z = (observed-means)/sds,
       permuted.mean = means,
       permuted.sd = sds,
       permuted.cc = permValues);
}

#==========================================================================================================
# File: WGCNA/R/consensusTOM.R
#==========================================================================================================

# New multilevel specification of consensus: a hierarchical list.

# The consensus calculation needs to be general enough so it can be used for module merging (consensus
# eigengene network) as well as for calculation of consensus kMEs, and possibly for other purposes as well.

# Consensus specification for a single operation:
# - inputs: a list specifying the input of the consensus. This should be general enough to handle
#   blockwise data but also not specific to adjacencies. The input should be a multiData structure, with each
# - calibrationMethod: currently one of "none", "simple quantile", "full quantile"
# - consensusQuantile
# - saveCalibratedIndividualTOMs (= FALSE)
# - calibratedIndividualTOMFilePattern (= "calibratedIndividualTOM-Set%s-Block%b.RData")

# The consensus calculation itself does not need information about correlation type etc.; the consensus
# calculation is very general.

# For network analysis applications of the consensus, we will also have to keep information about how the
# individual network adjacencies (TOMs) are to be created or were created - correlation type, network type,
# TOM type, correlation options etc.

# So we will keep 2 different types of information around:
# 1. Network construction options. A simple list giving the necessary network construction options.
# 2. ConsensusOptions: This will also be a list giving the options but not holding the actual consensus
# outputs of network construction and consensus construction should be held in separate lists.

# Output of individualTOMs:
# - adjacency data, a list of blockwiseData instances
# - block information
# - network construction options, separately for each adjacency (network construction could be different)

# For consensusTOM:
# Inputs:
# - adjacency data: either from individual TOMs or from other consensus TOMs
#   . note that constructing a complicated consensus manually (i.e., using consensus TOMs
#     as inputs to a higher-level consensus) is of limited use
#     since consensus modules would need the full consensus tree anyway.
# - consensus tree
# Not really needed: block information
# Outputs:
# - consensus adjacency data
# - copy of consensus options
# - other (diagnostic etc.) information

# For consensus modules:
# Inputs:
# - underlying expression data
# - optional consensus TOM
# - block information
# - network options for each expression data set
# - consensus tree

.checkPower = function(power)
{
  if (any(!is.finite(power)) | (sum(power<1)>0) | (sum(power>50)>0) )
    stop("power must be between 1 and 50.");
}

#==========================================================================================================
#
# Defining a single consensus operation
#
#==========================================================================================================

newConsensusTree = function(consensusOptions = newConsensusOptions(), inputs, analysisName = NULL)
{
  if (!inherits(consensusOptions, "ConsensusOptions"))
    stop("'consensusOptions' must be of class ConsensusOptions.");
  out = list(consensusOptions = consensusOptions,
             inputs = inputs,
             analysisName = analysisName);
  class(out) = c("ConsensusTree", class(out));
  out;
}

consensusTreeInputs = function(consensusTree, flatten = TRUE)
{
  out = lapply(consensusTree$inputs, function(inp)
    if (inherits(inp, "ConsensusTree")) consensusTreeInputs(inp) else inp)
  if (flatten) out = unlist(out);
  out;
}

newConsensusOptions = function(
calibration = c("full quantile", "single quantile", "none"), # Simple quantile scaling options calibrationQuantile = 0.95, sampleForCalibration = TRUE, sampleForCalibrationFactor = 1000, # Consensus definition consensusQuantile = 0, useMean = FALSE, setWeights = NULL, # Consensus post-processing suppressNegativeResults = FALSE, # Potentially useful for Nowick-type signed TOM # Name to prevent clashing of files analysisName = "") { calibration = match.arg(calibration); if (any(!is.finite(setWeights))) stop("Entries of 'setWeights' must all be finite."); if ( (consensusQuantile < 0) | (consensusQuantile > 1) ) stop("'consensusQuantile' must be between 0 and 1."); out = list( calibration = calibration, calibrationQuantile = calibrationQuantile, sampleForCalibration = sampleForCalibration, sampleForCalibrationFactor = sampleForCalibrationFactor, consensusQuantile = consensusQuantile, useMean = useMean, setWeights = setWeights, suppressNegativeResults = suppressNegativeResults, analysisName = analysisName); class(out) = c("ConsensusOptions", class(out)) out; } newCorrelationOptions = function( corType = c("pearson", "bicor"), maxPOutliers = 0.05, quickCor = 0, pearsonFallback = "individual", cosineCorrelation = FALSE, nThreads = 0, corFnc = if (corType=="bicor") "bicor" else "cor", corOptions = c( list(use = 'p', cosine = cosineCorrelation, quick = quickCor, nThreads = nThreads), if (corType=="bicor") list(maxPOutliers = maxPOutliers, pearsonFallback = pearsonFallback) else NULL)) { if ( (maxPOutliers < 0) | (maxPOutliers > 1)) stop("maxPOutliers must be between 0 and 1."); if (quickCor < 0) stop("quickCor must be positive."); if (nThreads < 0) stop("nThreads must be positive."); corType = match.arg(corType); if ( (maxPOutliers < 0) | (maxPOutliers > 1)) stop("maxPOutliers must be between 0 and 1."); if (quickCor < 0) stop("quickCor must be positive."); fallback = pmatch(pearsonFallback, .pearsonFallbacks) if (is.na(fallback)) stop(paste("Unrecognized 'pearsonFallback'. 
Recognized values are (unique abbreviations of)\n", paste(.pearsonFallbacks, collapse = ", "))) out = list( corType = corType, maxPOutliers = maxPOutliers, quickCor = quickCor, pearsonFallback = pearsonFallback, pearsonFallback.code = match(pearsonFallback, .pearsonFallbacks), cosineCorrelation = cosineCorrelation, corFnc = corFnc, corOptions = corOptions, corType.code = match(corType, .corTypes), nThreads = nThreads); class(out) = c("CorrelationOptions", class(out)); out; } newNetworkOptions = function( correlationOptions = newCorrelationOptions(), # Adjacency options replaceMissingAdjacencies = TRUE, power = 6, networkType = c("signed hybrid", "signed", "unsigned"), checkPower = TRUE, # Topological overlap options TOMType = c("signed", "signed Nowick", "unsigned", "none", "signed 2", "signed Nowick 2", "unsigned 2"), TOMDenom = c("mean", "min"), suppressTOMForZeroAdjacencies = FALSE, suppressNegativeTOM = FALSE, # Internal behavior options useInternalMatrixAlgebra = FALSE) { if (checkPower) .checkPower(power); networkType = match.arg(networkType); TOMType = match.arg(TOMType); TOMDenom = match.arg(TOMDenom); out = c(correlationOptions, list(replaceMissingAdjacencies = replaceMissingAdjacencies, power = power, networkType = networkType, TOMType = TOMType, TOMDenom = TOMDenom, networkType.code = match(networkType, .networkTypes), TOMType.code = match(TOMType, .TOMTypes), TOMDenom.code = match(TOMDenom, .TOMDenoms), suppressTOMForZeroAdjacencies = suppressTOMForZeroAdjacencies, suppressNegativeTOM = suppressNegativeTOM, useInternalMatrixAlgebra = useInternalMatrixAlgebra)); class(out) = c("NetworkOptions", class(correlationOptions)); out; } #==================================================================================================== # # cor, network, and consensus calculations # #==================================================================================================== .corCalculation = function(x, y = NULL, weights.x = NULL, weights.y = NULL, 
correlationOptions) { if (!inherits(correlationOptions, "CorrelationOptions")) stop(".corCalculation: 'correlationOptions' does not have correct type."); do.call(match.fun(correlationOptions$corFnc), c(list(x = x, y = y, weights.x = weights.x, weights.y = weights.y), correlationOptions$corOptions)); } # Network calculation. Returns the resulting topological overlap or adjacency matrix. .networkCalculation = function(data, networkOptions, weights = NULL, verbose = 1, indent = 0) { if (is.data.frame(data)) data = as.matrix(data); if (!inherits(networkOptions, "NetworkOptions")) stop(".networkCalculation: 'networkOptions' does not have correct type."); .checkAndScaleWeights(weights, data, scaleByMax = FALSE); callVerb = max(0, verbose - 1); callInd = indent + 2; CcorType = networkOptions$corType.code - 1; CnetworkType = networkOptions$networkType.code - 1; CTOMType = networkOptions$TOMType.code -1; CTOMDenom = networkOptions$TOMDenom.code -1; warn = as.integer(0); # For now return the full matrix; eventually we may return just the dissimilarity. # To make full use of the lower triangle space saving we'd have to modify the underlying C code # dramatically; otherwise it will still need to allocate the full matrix for the matrix multiplication. 
.Call("tomSimilarity_call", data, weights, as.integer(CcorType), as.integer(CnetworkType), as.double(networkOptions$power), as.integer(CTOMType), as.integer(CTOMDenom), as.double(networkOptions$maxPOutliers), as.double(networkOptions$quickCor), as.integer(networkOptions$pearsonFallback.code), as.integer(networkOptions$cosineCorrelation), as.integer(networkOptions$replaceMissingAdjacencies), as.integer(networkOptions$suppressTOMForZeroAdjacencies), as.integer(networkOptions$suppressNegativeTOM), as.integer(networkOptions$useInternalMatrixAlgebra), warn, as.integer(min(1, networkOptions$nThreads)), as.integer(callVerb), as.integer(callInd), PACKAGE = "WGCNA"); } # the following is contained in consensusOptions: # out = list( calibration = calibration, # calibrationQuantile = calibrationQuantile, # sampleForCalibration = sampleForCalibration, # sampleForCalibrationFactor = sampleForCalibrationFactor, # consensusQuantile = consensusQuantile, # useMean = useMean, # setWeights = setWeights); #============================================================================================== # # general utility functions # #============================================================================================== # Try to guess whether disk cache should be used # this should work for both multiData as well as for simple lists of arrays. 
.useDiskCache = function(multiExpr, blocks = NULL, chunkSize = NULL, nSets = if (isMultiData(multiExpr)) length(multiExpr) else 1, nGenes = checkSets(multiExpr)$nGenes) { if (is.null(chunkSize)) chunkSize = as.integer(.largestBlockSize/(2*nSets)) if (length(blocks) == 0) { blockLengths = nGenes; } else blockLengths = as.numeric(table(blocks)); max(blockLengths) > chunkSize; } .dimensions = function(x) { if (is.null(dim(x))) return(length(x)) return(dim(x)) } .shiftList = function(c0, lst) { if (length(lst)==0) return(list(c0)); ll = length(lst) out = list(); out[[1]] = c0; for (i in 1:ll) out[[i+1]] = lst[[i]]; out; } .checkListDimConsistencyAndGetDimnames = function(dataList) { nPars = length(dataList) dn = NULL for (p in 1:nPars) { if (mode(dataList[[p]])!="numeric") stop(paste("Argument number", p, " is not numeric.")) if (p==1) { dim = .dimensions(dataList[[p]]) } else { if (!isTRUE(all.equal(.dimensions(dataList[[p]]), dim))) stop("Argument dimensions are not consistent."); } if (prod(dim)==0) stop(paste("Argument has zero dimension.")); if (is.null(dn)) dn = dimnames(dataList[[p]]); } dn; } .mtd.checkDimConsistencyAndGetDimnames = function(mtd) { nPars = length(mtd) dn = NULL for (p in 1:nPars) { if (mode(mtd[[p]]$data)!="numeric") stop(paste("Argument number", p, " is not numeric.")) if (p==1) { dim = .dimensions(mtd[[p]]$data) } else { if (!isTRUE(all.equal(.dimensions(mtd[[p]]$data), dim))) stop("Argument dimensions are not consistent."); } if (prod(dim)==0) stop(paste("Argument has zero dimension.")); if (is.null(dn)) dn = dimnames(mtd[[p]]$data); } dn; } #============================================================================================== # # general utility functions for working with disk cache # #============================================================================================== .saveChunks = function(data, chunkSize, fileBase, cacheDir, fileNames = NULL) { ld = length(data); nChunks = ceiling(ld/chunkSize); if (is.null(fileNames)) 
{ if (length(fileBase)!=1) stop("Internal error: length of 'fileBase' must be 1."); fileNames = rep("", nChunks); x = 1; for (c in 1:nChunks) { fileNames[c] = tempfile(pattern = fileBase, tmpdir = cacheDir); # This effectively reserves the file name save(x, file = fileNames[c]); } } else { if (length(fileNames)!=nChunks) stop("Internal error: length of 'fileNames' must equal the number of chunks."); } chunkLengths = rep(0, nChunks); start = 1; for (c in 1:nChunks) { end = min(start + chunkSize-1, ld); chunkLengths[c] = end - start + 1; temp = data[start:end]; save(temp, file = fileNames[c]); start = end + 1; } rm(temp); #gc(); list(files = fileNames, chunkLengths = chunkLengths); } .loadAsList = function(file) { env = new.env(); load(file = file, envir = env); as.list(env); }; .loadObject = function(file, name = NULL, size = NULL) { x = .loadAsList(file); if (!is.null(name) && (names(x)[1]!=name)) stop("File ", file, " does not contain object '", name, "'.") obj = x[[1]]; if (!is.null(size) && (length(obj)!=size)) stop("Object '", name, "' does not have the correct length."); obj; } .vector2dist = function(x) { n = length(x); n1 = (1 + sqrt(1 + 8*n))/2 if (floor(n1)!=n1) stop("Input length not consistent with a distance structure."); attributes(x) = list(Size = as.integer(n1), Diag = FALSE, Upper = FALSE); class(x) = "dist"; x; } .emptyDist = function(nObjects, fill = 0) { n = (nObjects * (nObjects-1))/2; .vector2dist(rep(fill, n)); } .checkAndDelete = function(files) { if (length(files)>0) lapply(as.character(files), function(file) if (file.exists(file)) file.remove(file)); NULL; } .qorder = function(data) { data = as.numeric(data); .Call("qorder", data, PACKAGE = "WGCNA") } # Actual consensus calculation distilled into one function. data is assumed to have sets in columns # and samples/observations/whatever in rows. setWeightMat should be a matrix of dimensions (nSets, 1) # and be normalized to sum=1. 
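# A minimal sketch of the convention described in the comment above (illustrative
# values only; not run). With useMean = TRUE the consensus is the set-weighted mean
# of the finite entries; with consensusQuantile = 0 it is the parallel minimum;
# otherwise it is the row-wise quantile.
#
#   data = cbind(Set1 = c(0.2, 0.8), Set2 = c(0.4, 0.6));
#   setWeightMat = as.matrix(c(0.5, 0.5));   # dimensions (nSets, 1), normalized to sum = 1
#   .consensusCalculation.base(data, useMean = TRUE, setWeightMat = setWeightMat,
#                              consensusQuantile = 0)$consensus;  # weighted means 0.3, 0.7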
.consensusCalculation.base = function(data, useMean, setWeightMat, consensusQuantile) { nSets = ncol(data); if (nSets==1) { out.list = list(consensus = as.vector(data)); } else { if (useMean) { if (any(is.na(data))) { finiteMat = 1-is.na(data); data[is.na(data)] = 0; out = data %*% setWeightMat / finiteMat%*%setWeightMat; } else { out = data %*% setWeightMat; } out.list = list(consensus = out); } else if (consensusQuantile == 0) { whichmin = .Call("minWhich_call", data, 1L, PACKAGE = "WGCNA"); out.list = list(consensus = whichmin$min); } else { out.list = list(consensus = rowQuantileC(data, p = consensusQuantile)); } } if (is.null(out.list$originCount)) { out.list$originCount = apply(data, 2, function(x) sum(x <= out.list$consensus, na.rm = TRUE)); names(out.list$originCount) = 1:nSets } out.list; } .consensusCalculation.base.FromList = function(dataList, useMean, setWeights, consensusQuantile) { nSets = length(dataList); if (nSets==1) { out.list = list(consensus = dataList[[1]]); } else { if (useMean) { out.list = list(consensus = pmean(dataList, setWeights)); } else if (consensusQuantile == 0) { whichmin = pminWhich.fromList(dataList); out.list = list(consensus = whichmin$min); } else { out.list = list(consensus = pquantile.fromList(dataList, prob = consensusQuantile)); } } if (is.null(out.list$originCount)) { out.list$originCount = sapply(dataList, function(x) sum(x <= out.list$consensus, na.rm = TRUE)); names(out.list$originCount) = 1:nSets } out.list; } #============================================================================================== # # utility functions for working with multiple blockwise adjacencies. 
# #===================================================================================================== newBlockInformation = function( blocks, goodSamplesAndGenes) { blockGenes = tapply(1:length(blocks), blocks, identity) names(blockGenes) = sort(unique(blocks)); nGGenes = sum(goodSamplesAndGenes$goodGenes); gBlocks = blocks[goodSamplesAndGenes$goodGenes]; out = list(blocks = blocks, blockGenes = blockGenes, goodSamplesAndGenes = goodSamplesAndGenes, nGGenes = nGGenes, gBlocks = gBlocks); class(out) = c("BlockInformation", class(out)); out; } #======================================================================================================= # # individualTOMs # This is essentially a re-badged blockwiseIndividualTOMs with a different input and output format. # #======================================================================================================= # The following is contained in networkOptions: #corType = corType, #maxPOutliers = maxPOutliers, #quickCor = quickCor, #pearsonFallback = pearsonFallback, #cosineCorrelation = cosineCorrelation, #corFnc = corFnc, #corOptions = corOptions, #corType.code = match(corType, .corTypes), # Adjacency options #replaceMissingAdjacencies = TRUE, #power = 6, #networkType = c("signed hybrid", "signed", "unsigned"), # Topological overlap options #TOMType = c("signed", "unsigned"), #TOMDenom = c("mean", "min")) individualTOMs = function( multiExpr, multiWeights = NULL, multiExpr.imputed = NULL, ## Optional; used for pre-clustering when it is needed. # Data checking options checkMissingData = TRUE, # Blocking options blocks = NULL, maxBlockSize = 5000, blockSizePenaltyPower = 5, nPreclusteringCenters = NULL, randomSeed = 54321, # Network construction options. This can be a single object of class NetworkOptions, or a multiData # structure of NetworkOptions objects, one per element of multiExpr. networkOptions, # Save individual TOMs? 
This is equivalent to using external = TRUE in blockwiseData saveTOMs = TRUE, individualTOMFileNames = "individualTOM-Set%s-Block%b.RData", # Behaviour options collectGarbage = TRUE, verbose = 2, indent = 0) { spaces = indentSpaces(indent); dataSize = checkSets(multiExpr, checkStructure = TRUE); if (dataSize$structureOK) { nSets = dataSize$nSets; nGenes = dataSize$nGenes; multiFormat = TRUE; } else { multiExpr = multiData(multiExpr); if (!is.null(multiWeights)) multiWeights = multiData(multiWeights); nSets = dataSize$nSets; nGenes = dataSize$nGenes; multiFormat = FALSE; } .checkAndScaleMultiWeights(multiWeights, multiExpr, scaleByMax = FALSE); if (is.null(names(multiExpr))) names(multiExpr) = spaste("Set", 1:nSets); if (inherits(networkOptions, "NetworkOptions")) { networkOptions = setNames(list2multiData(.listRep(networkOptions, nSets)), names(multiExpr)); } else { if (!all(mtd.apply(networkOptions, inherits, "NetworkOptions", mdaSimplify = TRUE))) stop("'networkOptions' must be of class 'NetworkOptions' or a multiData structure\n", " of objects of the class.\n", " See newNetworkOptions for creating valid network options."); } if (!is.null(randomSeed)) { if (exists(".Random.seed")) { savedSeed = .Random.seed on.exit(.Random.seed <<-savedSeed); } set.seed(randomSeed); } #if (maxBlockSize >= floor(sqrt(2^31)) ) # stop("'maxBlockSize must be less than ", floor(sqrt(2^31)), ". Please decrease it and try again.") if (!is.null(blocks) && (length(blocks)!=nGenes)) stop("Input error: length of 'blocks' must equal number of genes in 'multiExpr'."); if (verbose>0) printFlush(paste(spaces, "Calculating topological overlaps block-wise from all genes")); nSamples = dataSize$nSamples; # Check data for genes and samples that have too many missing values # Check that multiExpr has valid (mtd)column names. If column names are missing, generate them. 
colIDs = mtd.colnames(multiExpr); if (is.null(colIDs)) colIDs = c(1:dataSize$nGenes); if (checkMissingData) { gsg = goodSamplesGenesMS(multiExpr, multiWeights = multiWeights, verbose = verbose - 1, indent = indent + 1) if (!gsg$allOK) { multiExpr = mtd.subset(multiExpr, gsg$goodSamples, gsg$goodGenes); if (!is.null(multiWeights)) multiWeights = mtd.subset(multiWeights, gsg$goodSamples, gsg$goodGenes); if (!is.null(multiExpr.imputed)) multiExpr.imputed = mtd.subset(multiExpr.imputed, gsg$goodSamples, gsg$goodGenes); colIDs = colIDs[gsg$goodGenes]; } } else { gsg = list(goodGenes = rep(TRUE, nGenes), goodSamples = lapply(nSamples, function(n) rep(TRUE, n))); gsg$allOK = TRUE; } nGGenes = sum(gsg$goodGenes); nGSamples = rep(0, nSets); for (set in 1:nSets) nGSamples[set] = sum(gsg$goodSamples[[set]]); if (is.null(blocks)) { if (nGGenes > maxBlockSize) { if (verbose>1) printFlush(paste(spaces, "....pre-clustering genes to determine blocks..")); clustering = consensusProjectiveKMeans( if (!is.null(multiExpr.imputed)) multiExpr.imputed else multiExpr, preferredSize = maxBlockSize, sizePenaltyPower = blockSizePenaltyPower, checkData = FALSE, nCenters = nPreclusteringCenters, randomSeed = randomSeed, verbose = verbose-2, indent = indent + 1); gBlocks = .orderLabelsBySize(clustering$clusters); } else gBlocks = rep(1, nGGenes); blocks = rep(NA, nGenes); blocks[gsg$goodGenes] = gBlocks; } else { gBlocks = blocks[gsg$goodGenes]; } blockLevels = as.numeric(levels(factor(gBlocks))); blockSizes = table(gBlocks) nBlocks = length(blockLevels); if (any(blockSizes > sqrt(2^31)-1)) printFlush(spaste(spaces, "Found block(s) with size(s) larger than limit of 'int' indexing.\n", spaces, " Support for such large blocks is experimental; please report\n", spaces, " any issues to Peter.Langfelder@gmail.com.")); # check file names for uniqueness actualFileNames = matrix("", nSets, nBlocks); if (saveTOMs) { for (set in 1:nSets) for (b in 1:nBlocks) actualFileNames[set, b] = 
.processFileName(individualTOMFileNames, set, names(multiExpr), b); rownames(actualFileNames) = spaste("Set.", c(1:nSets)); colnames(actualFileNames) = spaste("Block.", c(1:nBlocks)); if (length(unique(as.vector(actualFileNames))) < nSets * nBlocks) { printFlush("Error: File names for (some) set/block combinations are not unique:"); print(actualFileNames); stop("File names must be unique."); } } # Initialize various variables blockGenes = list(); blockNo = 1; setTomDS = replicate(nSets, list()); # Here's where the analysis starts for (blockNo in 1:nBlocks) { if (verbose>1 && nBlocks > 1) printFlush(paste(spaces, "..Working on block", blockNo, ".")); # Select the block genes block = c(1:nGGenes)[gBlocks==blockLevels[blockNo]]; #nBlockGenes = length(block); #blockGenes[[blockNo]] = c(1:nGenes)[gsg$goodGenes][gBlocks==blockLevels[blockNo]]; #errorOccurred = FALSE; # For each set: calculate and save TOM for (set in 1:nSets) { if (verbose>2) printFlush(paste(spaces, "....Working on set", set)) selExpr = as.matrix(multiExpr[[set]]$data[, block]); if (!is.null(multiWeights)) { selWeights = as.matrix(multiWeights[[set]]$data[, block]) } else selWeights = NULL; tomDS = as.dist(.networkCalculation(selExpr, networkOptions[[set]]$data, weights = selWeights, verbose = verbose -2, indent = indent+2)); setTomDS[[set]]$data = addBlockToBlockwiseData(if (blockNo==1) NULL else setTomDS[[set]]$data, external = saveTOMs, blockData = tomDS, blockFile = actualFileNames[set, blockNo], recordAttributes = TRUE, metaData = list(IDs = colIDs[block])) } if (collectGarbage) { rm(tomDS); gc(); } } names(setTomDS) = names(multiExpr); if (!multiFormat) { gsg$goodSamples = gsg$goodSamples[[1]]; setTomDS = setTomDS[[1]]$data; } blockInformation = newBlockInformation(blocks, gsg); list(blockwiseAdjacencies = setTomDS, setNames = names(multiExpr), nSets = length(multiExpr), blockInfo = blockInformation, networkOptions = networkOptions ) } 
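# Example (not run): a typical call to individualTOMs, assuming 'expr1' and 'expr2'
# are sample x gene matrices with matching columns. The object names are
# illustrative only.
#
#   multiExpr = multiData(Set1 = expr1, Set2 = expr2);
#   netOpt = newNetworkOptions(power = 6, networkType = "signed hybrid");
#   indTOMs = individualTOMs(multiExpr, networkOptions = netOpt,
#                            maxBlockSize = 5000, saveTOMs = FALSE);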
#===================================================================================================== # # hierarchical consensus TOM # #===================================================================================================== hierarchicalConsensusTOM = function( # Supply either ... # ... information needed to calculate individual TOMs multiExpr, multiWeights = NULL, # Data checking options checkMissingData = TRUE, # Blocking options blocks = NULL, maxBlockSize = 20000, blockSizePenaltyPower = 5, nPreclusteringCenters = NULL, randomSeed = 12345, # Network construction options. This can be a single object of class NetworkOptions, or a multiData # structure of NetworkOptions objects, one per element of multiExpr. networkOptions, # Save individual TOMs? keepIndividualTOMs = TRUE, individualTOMFileNames = "individualTOM-Set%s-Block%b.RData", # ... or information about individual (more precisely, input) TOMs individualTOMInfo = NULL, # Consensus calculation options consensusTree, useBlocks = NULL, # Save calibrated TOMs? saveCalibratedIndividualTOMs = FALSE, calibratedIndividualTOMFilePattern = "calibratedIndividualTOM-Set%s-Block%b.RData", # Return options saveConsensusTOM = TRUE, consensusTOMFilePattern = "consensusTOM-%a-Block%b.RData", getCalibrationSamples = FALSE, # Return the intermediate results as well? 
keepIntermediateResults = saveConsensusTOM, # Internal handling of TOMs useDiskCache = NULL, chunkSize = NULL, cacheDir = ".", cacheBase = ".blockConsModsCache", # Behavior collectGarbage = TRUE, verbose = 1, indent = 0) { if (!is.null(randomSeed)) { if (exists(".Random.seed")) { savedSeed = .Random.seed on.exit(.Random.seed <<-savedSeed); } set.seed(randomSeed); } localIndividualTOMCalculation = is.null(individualTOMInfo); if (is.null(individualTOMInfo)) { if (missing(multiExpr)) stop("Either 'individualTOMInfo' or 'multiExpr' must be given."); if (is.null(useDiskCache)) useDiskCache = .useDiskCache(multiExpr, blocks, chunkSize); time = system.time({individualTOMInfo = individualTOMs(multiExpr = multiExpr, multiWeights = multiWeights, checkMissingData = checkMissingData, blocks = blocks, maxBlockSize = maxBlockSize, blockSizePenaltyPower = blockSizePenaltyPower, nPreclusteringCenters = nPreclusteringCenters, randomSeed = NULL, networkOptions = networkOptions, saveTOMs = useDiskCache | keepIndividualTOMs, individualTOMFileNames = individualTOMFileNames, collectGarbage = collectGarbage, verbose = verbose, indent = indent);}); if (verbose > 1) { printFlush("Timing for individual TOMs:"); print(time); } } consensus = hierarchicalConsensusCalculation(individualTOMInfo$blockwiseAdjacencies, consensusTree, level = 1, useBlocks = useBlocks, randomSeed = NULL, saveCalibratedIndividualData = saveCalibratedIndividualTOMs, calibratedIndividualDataFilePattern = calibratedIndividualTOMFilePattern, saveConsensusData = saveConsensusTOM, consensusDataFileNames = consensusTOMFilePattern, getCalibrationSamples= getCalibrationSamples, keepIntermediateResults = keepIntermediateResults, useDiskCache = useDiskCache, chunkSize = chunkSize, cacheDir = cacheDir, cacheBase = cacheBase, collectGarbage = collectGarbage, verbose = verbose, indent = indent); if (localIndividualTOMCalculation) { if (!keepIndividualTOMs) { # individual TOMs are contained in the individualTOMInfo list; remove 
them. mtd.apply(individualTOMInfo$blockwiseAdjacencies, BD.checkAndDeleteFiles); individualTOMInfo$blockwiseAdjacencies = NULL } } c( consensus, list(individualTOMInfo = individualTOMInfo, consensusTree = consensusTree) ); } #======================================================================================================== # # Merge consensusTOMInfo lists # #======================================================================================================== # # Caution: at present the function does not check that the inputs are compatible with one another and with the # supplied blocks. .mergeConsensusTOMInformationLists = function(blockInformation, consensusTOMInfoList) { blocks = blockInformation$blocks; blockLevels = unique(blocks); nBlocks = length(blockLevels); out = consensusTOMInfoList[[1]]; # Merging consensus information out$consensusData = do.call(mergeBlockwiseData, lapply(consensusTOMInfoList, getElement, "consensusData")); if (out$saveCalibratedIndividualData) { out$calibratedIndividualData = lapply(1:out$nSets, function(set) do.call(mergeBlockwiseData, lapply(consensusTOMInfoList, function(ct) ct$calibratedIndividualData[[set]]))); } if (!is.null(out$calibrationSamples)) { out$calibrationSamples = do.call(c, lapply(consensusTOMInfoList, getElement, "calibrationSamples")); } out$originCount = rowSums(do.call(cbind, lapply(consensusTOMInfoList, getElement, "originCount"))) # Merging information in individualTOMInfo out$individualTOMInfo$blockwiseAdjacencies = lapply(1:out$nSets, function(set) list(data = do.call(mergeBlockwiseData, lapply(consensusTOMInfoList, function(ct) ct$individualTOMInfo$blockwiseAdjacencies[[set]]$data)))); out$individualTOMInfo$blockInfo = blockInformation; out; } #======================================================================================================== # # consensusTOM: old, single-layer consensus. 
# #======================================================================================================== consensusTOM = function( # Supply either ... # ... information needed to calculate individual TOMs multiExpr, # Data checking options checkMissingData = TRUE, # Blocking options blocks = NULL, maxBlockSize = 5000, blockSizePenaltyPower = 5, nPreclusteringCenters = NULL, randomSeed = 54321, # Network construction arguments: correlation options corType = "pearson", maxPOutliers = 1, quickCor = 0, pearsonFallback = "individual", cosineCorrelation = FALSE, replaceMissingAdjacencies = FALSE, # Adjacency function options power = 6, networkType = "unsigned", checkPower = TRUE, # Topological overlap options TOMType = "unsigned", TOMDenom = "min", suppressNegativeTOM = FALSE, # Save individual TOMs? saveIndividualTOMs = TRUE, individualTOMFileNames = "individualTOM-Set%s-Block%b.RData", # ... or individual TOM information individualTOMInfo = NULL, useIndivTOMSubset = NULL, ##### Consensus calculation options useBlocks = NULL, networkCalibration = c("single quantile", "full quantile", "none"), # Save calibrated TOMs? 
saveCalibratedIndividualTOMs = FALSE, calibratedIndividualTOMFilePattern = "calibratedIndividualTOM-Set%s-Block%b.RData", # Simple quantile scaling options calibrationQuantile = 0.95, sampleForCalibration = TRUE, sampleForCalibrationFactor = 1000, getNetworkCalibrationSamples = FALSE, # Consensus definition consensusQuantile = 0, useMean = FALSE, setWeights = NULL, # Return options saveConsensusTOMs = TRUE, consensusTOMFilePattern = "consensusTOM-Block%b.RData", returnTOMs = FALSE, # Internal handling of TOMs useDiskCache = NULL, chunkSize = NULL, cacheDir = ".", cacheBase = ".blockConsModsCache", nThreads = 1, # Diagnostic messages verbose = 1, indent = 0) { spaces = indentSpaces(indent); networkCalibration = match.arg(networkCalibration); seedSaved = FALSE; if (!is.null(randomSeed)) { if (exists(".Random.seed")) { savedSeed = .Random.seed on.exit(.Random.seed <<-savedSeed); } set.seed(randomSeed); } if (any(!is.finite(setWeights))) stop("Entries of 'setWeights' must all be finite."); localIndividualTOMCalculation = is.null(individualTOMInfo); if (is.null(individualTOMInfo)) { if (missing(multiExpr)) stop("Either 'individualTOMInfo' or 'multiExpr' must be given."); dataSize = checkSets(multiExpr); nSets.all = dataSize$nSets; nGenes = dataSize$nGenes; if (is.null(useDiskCache)) useDiskCache = .useDiskCache(multiExpr, blocks, chunkSize); if (length(power)!=1) { if (length(power)!=nSets.all) stop("Invalid arguments: Length of 'power' must equal number of sets given in 'multiExpr'."); } else { power = rep(power, nSets.all); } if ( (consensusQuantile < 0) | (consensusQuantile > 1) ) stop("'consensusQuantile' must be between 0 and 1."); time = system.time({individualTOMInfo = blockwiseIndividualTOMs(multiExpr = multiExpr, checkMissingData = checkMissingData, blocks = blocks, maxBlockSize = maxBlockSize, blockSizePenaltyPower = blockSizePenaltyPower, nPreclusteringCenters = nPreclusteringCenters, randomSeed = randomSeed, corType = corType, maxPOutliers = maxPOutliers, 
quickCor = quickCor, pearsonFallback = pearsonFallback, cosineCorrelation = cosineCorrelation, replaceMissingAdjacencies = replaceMissingAdjacencies, power = power, networkType = networkType, TOMType = TOMType, TOMDenom = TOMDenom, suppressNegativeTOM = suppressNegativeTOM, saveTOMs = useDiskCache | saveIndividualTOMs, individualTOMFileNames = individualTOMFileNames, nThreads = nThreads, verbose = verbose, indent = indent);}); if (verbose > 1) { printFlush("Timing for individual TOMs:"); print(time); } if (!saveIndividualTOMs & useDiskCache) on.exit(.checkAndDelete(individualTOMInfo$actualTOMFileNames), add = TRUE) } else { nSets.all = if (individualTOMInfo$saveTOMs) nrow(individualTOMInfo$actualTOMFileNames) else ncol(individualTOMInfo$TOMSimilarities[[1]]); nGenes = length(individualTOMInfo$blocks); if (is.null(useDiskCache)) useDiskCache = .useDiskCache(NULL, blocks, chunkSize, nSets = nSets.all, nGenes = nGenes); } setNames = individualTOMInfo$setNames; if (is.null(setNames)) setNames = spaste("Set.", 1:individualTOMInfo$nSets) nGoodGenes = length(individualTOMInfo$gBlocks); if (is.null(setWeights)) setWeights = rep(1, nSets.all); if (length(setWeights)!=nSets.all) stop("Length of 'setWeights' must equal the number of sets."); setWeightMat = as.matrix(setWeights/sum(setWeights)); if (is.null(useIndivTOMSubset)) { if (individualTOMInfo$nSets != nSets.all) stop(paste("Number of sets in individualTOMInfo and in multiExpr do not agree.\n", " To use a subset of individualTOMInfo, set useIndivTOMSubset appropriately.")); useIndivTOMSubset = c(1:nSets.all); } nSets = length(useIndivTOMSubset); if (length(unique(useIndivTOMSubset))!=nSets) stop("Entries of 'useIndivTOMSubset' must be unique."); if (any(useIndivTOMSubset<1) | any(useIndivTOMSubset>individualTOMInfo$nSets)) stop("All entries of 'useIndivTOMSubset' must be between 1 and the number of sets in individualTOMInfo."); # if ( (minKMEtoJoin >1) | (minKMEtoJoin <0) ) stop("minKMEtoJoin must be between 0 and 1."); 
gsg = individualTOMInfo$goodSamplesAndGenes; # Restrict gsg to used sets gsg$goodSamples = gsg$goodSamples[useIndivTOMSubset]; if (is.null(chunkSize)) chunkSize = as.integer(.largestBlockSize/(2*nSets)) # Initialize various variables if (getNetworkCalibrationSamples) { if (!sampleForCalibration) stop(paste("Incompatible input options: networkCalibrationSamples can only be returned", "if sampleForCalibration is TRUE.")); networkCalibrationSamples = list(); } blockLevels = sort(unique(individualTOMInfo$gBlocks)); nBlocks = length(blockLevels); if (is.null(useBlocks)) useBlocks = blockLevels; useBlockIndex = match(useBlocks, blockLevels); if (!all(useBlocks %in% blockLevels)) stop("All entries of 'useBlocks' must be valid block levels."); if (any(duplicated(useBlocks))) stop("Entries of 'useBlocks' must be unique."); nUseBlocks = length(useBlocks); if (nUseBlocks==0) stop("'useBlocks' cannot be non-NULL and empty at the same time."); consensusTOM.out = list(); TOMFiles = rep("", nUseBlocks); originCount = rep(0, nSets); calibratedIndividualTOMFileNames = NULL; if (saveCalibratedIndividualTOMs) { calibratedIndividualTOMFileNames = matrix("", nSets, nBlocks); for (set in 1:nSets) for (b in 1:nBlocks) calibratedIndividualTOMFileNames[set, b] = .processFileName(calibratedIndividualTOMFilePattern, setNumber = set, setNames = setNames, blockNumber = b); } gc(); # Here's where the analysis starts for (blockIndex in 1:nUseBlocks) { blockNo = useBlockIndex[blockIndex]; if (verbose>1) printFlush(paste(spaces, "..Working on block", blockNo, ".")); # Select block genes block = c(1:nGoodGenes)[individualTOMInfo$gBlocks==blockLevels[blockNo]]; nBlockGenes = length(block); # blockGenes[[blockNo]] = c(1:nGenes)[gsg$goodGenes][gBlocks==blockLevels[blockNo]]; scaleQuant = rep(1, nSets); scalePowers = rep(1, nSets); # Set up file names or memory space to hold the set TOMs if (useDiskCache) { nChunks = ceiling(nBlockGenes * (nBlockGenes-1)/2/chunkSize); chunkFileNames = array("", dim = 
c(nChunks, nSets)); on.exit(.checkAndDelete(chunkFileNames), add = TRUE); } else nChunks = 1; if (nChunks==1) useDiskCache = FALSE; if (!useDiskCache) { # Note: setTomDS will contain the scaled set TOM matrices. setTomDS = matrix(0, nBlockGenes*(nBlockGenes-1)/2, nSets); colnames(setTomDS) = setNames; } # create an empty consTomDS distance structure. consTomDS = .emptyDist(nBlockGenes); # sample entry indices from the distance structure for TOM scaling, if requested if (networkCalibration=="single quantile" && sampleForCalibration) { qx = min(calibrationQuantile, 1-calibrationQuantile); nScGenes = min(sampleForCalibrationFactor * 1/qx, length(consTomDS)); nTOMEntries = length(consTomDS) scaleSample = sample(nTOMEntries, nScGenes); if (getNetworkCalibrationSamples) networkCalibrationSamples[[blockIndex]] = list(sampleIndex = scaleSample, TOMSamples = matrix(NA, nScGenes, nSets)); } if (networkCalibration %in% c("single quantile", "none")) { for (set in 1:nSets) { if (verbose>2) printFlush(paste(spaces, "....Working on set", useIndivTOMSubset[set])) if (individualTOMInfo$saveTOMs) { tomDS = .loadObject(individualTOMInfo$ actualTOMFileNames[useIndivTOMSubset[set], blockNo], name = "tomDS", size = nBlockGenes*(nBlockGenes-1)/2); } else { tomDS = consTomDS; tomDS[] = individualTOMInfo$TOMSimilarities[[blockNo]] [, useIndivTOMSubset[set]] } if (networkCalibration=="single quantile") { # Scale TOMs so that the calibrationQuantile quantile agrees across sets if (sampleForCalibration) { if (getNetworkCalibrationSamples) { networkCalibrationSamples[[blockIndex]]$TOMSamples[, set] = tomDS[scaleSample]; scaleQuant[set] = quantile(networkCalibrationSamples[[blockIndex]]$TOMSamples[, set], probs = calibrationQuantile, type = 8); } else { scaleQuant[set] = quantile(tomDS[scaleSample], probs = calibrationQuantile, type = 8); } } else scaleQuant[set] = quantile(x = tomDS, probs = calibrationQuantile, type = 8); if (set>1) { scalePowers[set] = log(scaleQuant[1])/log(scaleQuant[set]); tomDS = 
tomDS^scalePowers[set]; } if (saveCalibratedIndividualTOMs) save(tomDS, file = calibratedIndividualTOMFileNames[set, blockNo]); } # Save the calculated TOM either to disk in chunks or to memory. if (useDiskCache) { if (verbose > 3) printFlush(paste(spaces, "......saving TOM similarity to disk cache..")); sc = .saveChunks(tomDS, chunkSize, cacheBase, cacheDir = cacheDir); chunkFileNames[, set] = sc$files; chunkLengths = sc$chunkLengths; } else { setTomDS[, set] = tomDS[]; } rm(tomDS); gc(); } } else if (networkCalibration=="full quantile") { # Step 1: load each TOM, get order, split TOM into chunks according to order, and save. if (verbose>1) printFlush(spaste(spaces, "..working on quantile normalization")) if (useDiskCache) { orderFiles = rep("", nSets); on.exit(.checkAndDelete(orderFiles),add = TRUE); } for (set in 1:nSets) { if (verbose>2) printFlush(paste(spaces, "....Working on set", useIndivTOMSubset[set])) if (individualTOMInfo$saveTOMs) { tomDS = .loadObject(individualTOMInfo$ actualTOMFileNames[useIndivTOMSubset[set], blockNo], name = "tomDS", size = nBlockGenes*(nBlockGenes-1)/2); } else { tomDS = consTomDS; tomDS[] = individualTOMInfo$TOMSimilarities[[blockNo]] [, useIndivTOMSubset[set]] } if (useDiskCache) { # Order TOM (this may take a long time...) 
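# The disk-cache branch below performs full quantile normalization in chunks:
# each set's TOM is sorted, the sorted values are normalized chunk by chunk, and
# the saved order is used to scatter the results back. A minimal sketch of the
# idea, wrapped in if (FALSE) so it never executes (this file's convention for
# illustration blocks); it uses base R rowMeans in place of preprocessCore's
# normalize.quantiles:
if (FALSE)
{
  x = matrix(rnorm(12), 6, 2);
  ord = apply(x, 2, order);
  sorted = sapply(1:2, function(s) x[ord[, s], s]);
  qn = rowMeans(sorted);  # each rank receives the cross-set mean
  out = x;
  for (s in 1:2) out[ord[, s], s] = qn;
  # sort(out[, 1]) and sort(out[, 2]) are now identical; splitting the sorted
  # vectors into chunks before averaging does not change the result.
}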
if (verbose > 3) printFlush(spaste(spaces, "......ordering TOM")); time = system.time({order1 = .qorder(tomDS)}); if (verbose > 3) { printFlush("Time to order TOM:"); print(time); } # save the order orderFiles[set] = tempfile(pattern = spaste(".orderForSet", set), tmpdir = cacheDir); if (verbose > 3) printFlush(spaste(spaces, "......saving order and ordered TOM")); save(order1, file = orderFiles[set]); # Save ordered tomDS into chunks tomDS.ordered = tomDS[order1]; sc = .saveChunks(tomDS.ordered, chunkSize, cacheBase, cacheDir = cacheDir); chunkFileNames[, set] = sc$files; chunkLengths = sc$chunkLengths; } else { setTomDS[, set] = tomDS[] } } if (useDiskCache) { # Step 2: Load chunks one by one and quantile normalize if (verbose > 2) printFlush(spaste(spaces, "....quantile normalizing chunks")); for (c in 1:nChunks) { if (verbose > 3) printFlush(spaste(spaces, "......QN for chunk ", c, " of ", nChunks)); chunkData = matrix(NA, chunkLengths[c], nSets); for (set in 1:nSets) chunkData[, set] = .loadObject(chunkFileNames[c, set]); time = system.time({ chunk.norm = normalize.quantiles(chunkData, copy = FALSE);}); if (verbose > 1) { printFlush("Time to QN chunk:"); print(time); } # Save quantile normalized chunks for (set in 1:nSets) { temp = chunk.norm[, set]; save(temp, file = chunkFileNames[c, set]); } } if (verbose > 2) printFlush(spaste(spaces, "....putting together full QN'ed TOMs")); # Put together full TOMs for (set in 1:nSets) { load(orderFiles[set]); start = 1; for (c in 1:nChunks) { end = start + chunkLengths[c] - 1; tomDS[order1[start:end]] = .loadObject(chunkFileNames[c, set], size = chunkLengths[c]); start = start + chunkLengths[c]; } if (saveCalibratedIndividualTOMs) save(tomDS, file = calibratedIndividualTOMFileNames[set, blockNo]); .saveChunks(tomDS, chunkSize, fileNames = chunkFileNames[, set]); unlink(orderFiles[set]); } } else { # If disk cache is not being used, simply call normalize.quantiles on the full set. 
setTomDS = normalize.quantiles(setTomDS); if (saveCalibratedIndividualTOMs) for (set in 1:nSets) { tomDS = .vector2dist(setTomDS[, set]); save(tomDS, file = calibratedIndividualTOMFileNames[set, blockNo]); } } } else stop("Unrecognized value of 'networkCalibration': ", networkCalibration); # Calculate consensus network if (verbose > 2) printFlush(paste(spaces, "....Calculating consensus network")); if (useDiskCache) { start = 1; for (chunk in 1:nChunks) { if (verbose > 3) printFlush(paste(spaces, "......working on chunk", chunk)); end = start + chunkLengths[chunk] - 1; setChunks = array(0, dim = c(chunkLengths[chunk], nSets)); for (set in 1:nSets) { load(file = chunkFileNames[chunk, set]); setChunks[, set] = temp; file.remove(chunkFileNames[chunk, set]); } colnames(setChunks) = setNames; tmp = .consensusCalculation.base(setChunks, useMean = useMean, setWeightMat = setWeightMat, consensusQuantile = consensusQuantile); consTomDS[start:end] = tmp$consensus; countIndex = as.numeric(names(tmp$originCount)); originCount[countIndex] = originCount[countIndex] + tmp$originCount; rm(tmp); start = end + 1; } } else { tmp = .consensusCalculation.base(setTomDS, useMean = useMean, setWeightMat = setWeightMat, consensusQuantile = consensusQuantile); consTomDS[] = tmp$consensus; countIndex = as.numeric(names(tmp$originCount)); originCount[countIndex] = originCount[countIndex] + tmp$originCount; rm(tmp); } # Save the consensus TOM if requested if (saveConsensusTOMs) { TOMFiles[blockIndex] = .substituteTags(consensusTOMFilePattern, "%b", blockNo); if (TOMFiles[blockIndex]==consensusTOMFilePattern) stop(paste("File name for consensus TOM must contain the tag %b somewhere in the file name -\n", " - this tag will be replaced by the block number. 
"));
save(consTomDS, file = TOMFiles[blockIndex]);
}
if (returnTOMs) consensusTOM.out[[blockIndex]] = consTomDS;
gc();
}
if (!saveConsensusTOMs) TOMFiles = NULL;
if (!returnTOMs) consensusTOM.out = NULL;
if (localIndividualTOMCalculation)
{
if (!individualTOMInfo$saveTOMs)
{
# individual TOMs are contained in the individualTOMInfo list; remove them.
individualTOMInfo$TOMSimilarities = NULL;
}
}
list(consensusTOM = consensusTOM.out,
TOMFiles = TOMFiles,
saveConsensusTOMs = saveConsensusTOMs,
individualTOMInfo = individualTOMInfo,
useIndivTOMSubset = useIndivTOMSubset,
goodSamplesAndGenes = gsg,
nGGenes = nGoodGenes,
nSets = nSets,
saveCalibratedIndividualTOMs = saveCalibratedIndividualTOMs,
calibratedIndividualTOMFileNames = calibratedIndividualTOMFileNames,
networkCalibrationSamples = if (getNetworkCalibrationSamples) networkCalibrationSamples else NULL,
consensusQuantile = consensusQuantile,
originCount = originCount
)
}

# File: WGCNA/R/empiricalBayesLM.R
#===================================================================================================
#
# Multi-variate empirical Bayes with proper accounting for sigma
#
#===================================================================================================
.colWeightedMeans.x = function(data, weights, na.rm)
{
nc = ncol(data);
means = rep(NA, nc);
for (c in 1:nc)
means[c] = weighted.mean(data[, c], weights[, c], na.rm = na.rm);
names(means) = colnames(data);
means;
}
.weightedScale = function(data, weights, replaceZeroScale = FALSE)
{
weightSums = colSums(weights);
means = .colWeightedMeans.x(data, weights, na.rm = TRUE);
V1 = colSums(weights);
V2 = colSums(weights^2);
centered = data - matrix(means, nrow(data), ncol(data), byrow = TRUE);
scale = sqrt( colSums(centered^2* weights, na.rm = TRUE) / ( V1 - V2/V1));
if (replaceZeroScale) scale[scale == 0] = 1;
scaled = centered/ matrix(scale, nrow(data), ncol(data), byrow = TRUE);
attr(scaled,
"scaled:center") = means; attr(scaled, "scaled:scale") = scale; scaled; } .weightedVar = function(x, weights) { V1 = sum(weights); V2 = sum(weights^2); mean = sum(x * weights)/V1; centered = x-mean; sum(centered^2 * weights)/(V1-V2/V1); } # Defaults for certain fitting functions .initialFit.defaultWeightName = function(fitFnc) { wNames = c(rlm = "w", lmrob = "rweights"); if (fitFnc %in% names(wNames)) return(wNames[ match(fitFnc, names(wNames))]); NULL; } .initialFit.defaultOptions = function(fitFnc) { defOpt = list( rlm = list()) if (fitFnc %in% names(defOpt)) return(defOpt[[ match(fitFnc, names(defOpt))]]); list(); } .initialFit.requiresFormula = function(fitFnc) { reqForm = c(rlm = FALSE, lmrob = TRUE); if (fitFnc %in% names(reqForm)) return(reqForm[ match(fitFnc, names(reqForm))]); FALSE; } empiricalBayesLM = function( data, removedCovariates, retainedCovariates = NULL, initialFitFunction = NULL, initialFitOptions = NULL, initialFitRequiresFormula = NULL, initialFit.returnWeightName = NULL, fitToSamples = NULL, weights = NULL, automaticWeights = c("none", "bicov"), aw.maxPOutliers = 0.1, weightType = c("apriori", "empirical"), stopOnSmallWeights = TRUE, minDesignDeviation = 1e-10, robustPriors = FALSE, tol = 1e-4, maxIterations = 1000, garbageCollectInterval = 50000, scaleMeanToSamples = fitToSamples, scaleMeanOfSamples = NULL, getOLSAdjustedData = TRUE, getResiduals = TRUE, getFittedValues = TRUE, getWeights = TRUE, getEBadjustedData = TRUE, verbose = 0, indent = 0) { spaces = indentSpaces(indent); nSamples = nrow(data); designMat = NULL; #mean.x = NULL; #scale.x = NULL; automaticWeights = match.arg(automaticWeights); if (automaticWeights=="bicov") { weightType = "empirical"; if (verbose > 0) printFlush(paste(spaces, "..calculating weights..")); weights = bicovWeights(data, maxPOutliers = aw.maxPOutliers); } weightType = match.arg(weightType); wtype = match(weightType, c("apriori", "empirical")); if (!is.null(retainedCovariates)) { if 
(is.null(dim(retainedCovariates))) retainedCovariates = data.frame(retainedCovariate = retainedCovariates) if (any(is.na(retainedCovariates))) stop("All elements of 'retainedCovariates' must be finite."); if (nrow(retainedCovariates)!=nSamples) stop("Numbers of rows in 'data' and 'retainedCovariates' differ."); retainedCovariates = as.data.frame(retainedCovariates); mm = model.matrix(~., data = retainedCovariates)[, -1, drop = FALSE]; colSDs = colSds(mm); if (any(colSDs==0)) stop("Some columns in 'retainedCovariates' have zero variance."); designMat = mm; } if (is.null(removedCovariates)) stop("'removedCovariates' must be supplied."); if (is.null(dim(removedCovariates))) removedCovariates = data.frame(removedCovariate = removedCovariates) if (any(is.na(removedCovariates))) stop("All elements of 'removedCovariates' must be finite."); if (nrow(removedCovariates)!=nSamples) stop("Numbers of rows in 'data' and 'removedCovariates' differ."); removedCovariates = as.data.frame(removedCovariates); mm = model.matrix(~., data = removedCovariates)[, -1, drop = FALSE]; colSDs = colSds(mm); if (any(colSDs==0)) stop("Some columns in 'removedCovariates' have zero variance."); designMat = cbind(designMat, mm) removedColumns = (ncol(designMat)-ncol(mm) + 1):ncol(designMat); y.original = as.matrix(data); N.original = ncol(y.original); if (any(!is.finite(y.original))) { warning(immediate. = TRUE, "Found missing and/or non-finite data. These will be removed."); } if (is.null(weights)) weights = matrix(1, nSamples, ncol(y.original)); if (any(!is.finite(weights))) stop("Given 'weights' contain some infinite or missing entries. All weights must be present and finite."); if (any(weights<0)) stop("Given 'weights' contain negative entries. 
All weights must be non-negative."); originalWeights = weights; dimnamesY = dimnames(y.original); if (verbose > 0) printFlush(paste(spaces, "..checking for non-varying responses..")); varY = colVars(y.original, na.rm = TRUE); varYMissing = is.na(varY); varYZero = varY==0; varYZero[is.na(varYZero)] = FALSE; keepY = !(varYZero | varYMissing); y = y.original[, keepY]; weights = weights[, keepY]; yFinite = is.finite(y); weights[!yFinite] = 0; nSamples.y = colSums(yFinite); if (weightType == "apriori") { # Check weights if (any(weights[yFinite]<1)) { if (stopOnSmallWeights) { stop("When weights are determined 'apriori', small weights are not allowed.\n", "Weights must be at least 1. Use weightType='empirical' if weights were determined from data.") } else warning(immediate. = TRUE, "Small weights found. This can lead to unreliable fit with weight type 'apriori'.\n", "Proceed with caution."); } } N = ncol(y); nc = ncol(designMat); if (verbose > 0) printFlush(paste(spaces, "..standardizing responses..")); if (length(scaleMeanToSamples)==0) scaleMeanToSamples = c(1:nSamples); if (length(scaleMeanOfSamples)==0) scaleMeanOfSamples = c(1:nSamples); if (length(fitToSamples)==0) fitToSamples = c(1:nSamples); mean.y.target = .colWeightedMeans.x(y[scaleMeanToSamples, ], weights[scaleMeanToSamples, ], na.rm = TRUE); if (is.logical(fitToSamples)) fitToSamples = which(fitToSamples); if (length(fitToSamples) < 3) stop("'fitToSamples' must specify at least 3 samples for the fit."); nRefSamples = length(fitToSamples); yFinite.refSamples = yFinite[fitToSamples, ]; weights.refSamples = weights[fitToSamples, ]; # Scale y to mean zero and variance 1. This is needed for the prior to make sense. y.refSamples = .weightedScale(y[fitToSamples, ], weights.refSamples, replaceZeroScale = FALSE); ### Do not replace zero scale; at this point I would rather have the regression fail for these samples. 
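# The scaling above relies on the estimator in .weightedVar, i.e.
# sum(w*(x-m)^2)/(V1 - V2/V1) with V1 = sum(w) and V2 = sum(w^2). Sanity check,
# wrapped in if (FALSE) so it never executes: with unit weights V1 = n and
# V2 = n, so the denominator is n - 1 and the estimator reduces to the usual
# unbiased variance.
if (FALSE)
{
  x = rnorm(10);
  stopifnot(isTRUE(all.equal(.weightedVar(x, rep(1, 10)), var(x))));
}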
mean.y = attr(y.refSamples, "scaled:center");
scale.y = attr(y.refSamples, "scaled:scale");
y.refSamples[!yFinite.refSamples] = 0;
## Note to self: original code above used y = .weightedScale(y, weights.refSamples);
## We now need to scale y according to the mean and scale calculated above.
mean.y.mat = matrix(mean.y, nrow = nrow(y), ncol = ncol(y), byrow = TRUE);
scale.y.mat = matrix(scale.y, nrow = nrow(y), ncol = ncol(y), byrow = TRUE);
y.scaled = (y-mean.y.mat)/scale.y.mat;
designMat.refSamples = designMat[fitToSamples, ];
if (any(is.na(designMat.refSamples)))
stop("The design matrix contains missing values. Please check that all variables\n",
" entering the design matrix have all values present.");
if (verbose > 0) printFlush(paste(spaces, "..initial model fitting.."));
# Prepare ordinary regression to get starting points for beta and sigma
beta.OLS = matrix(NA, nc, N);
betaValid = matrix(TRUE, nc, N);
sigma.OLS = rep(NA, N);
regressionValid = rep(TRUE, N);
# Get the means of the design matrix with respect to all weight vectors.
V1 = colSums(weights.refSamples);
V2 = colSums(weights.refSamples^2);
means.dm = t(designMat.refSamples) %*% weights.refSamples / matrix(V1, nrow = nc, ncol = N, byrow = TRUE);
i = 0;
on.exit(printFlush(spaste("Error occurred at i = ", i)));
#on.exit(browser());
pindStep = max(1, floor(N/100));
lastInvalidFitError = "(no error, just a placeholder)";
if (is.null(initialFitFunction))
{
# Ordinary weighted least squares, in two varieties.
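# Sketch of the closed-form fit used in this branch, wrapped in if (FALSE) so
# it never executes: on centered predictors the weighted least squares solution
# is beta = (X'WX)^{-1} X'Wy, which agrees with lm fitted with weights.
if (FALSE)
{
  X = scale(matrix(rnorm(40), 20, 2), scale = FALSE);
  yy = as.vector(X %*% c(1, -2)) + rnorm(20);
  ww = runif(20, 0.5, 1);
  b = solve(t(X) %*% (X * ww), t(X) %*% (yy * ww));
  stopifnot(isTRUE(all.equal(c(b), unname(coef(lm(yy ~ 0 + X, weights = ww))))));
}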
oldWeights = rep(-1, nRefSamples);
if (verbose > 1) pind = initProgInd();
for (i in 1:N)
{
w1 = weights.refSamples[, i];
y1 = y.refSamples[, i];
if (any(w1!=oldWeights))
{
centeredDM = designMat.refSamples - matrix(means.dm[, i], nRefSamples, nc, byrow = TRUE);
#dmVar = colSds(centeredDM);
dmVar.w = apply(centeredDM, 2, .weightedVar, weights = w1);
#keepDM = dmVar > 0 & dmVar.w > 0;
keepDM = dmVar.w > minDesignDeviation^2;
xtxInv = try(
{
centeredDM.keep = centeredDM[, keepDM, drop = FALSE];
xtx = t(centeredDM.keep) %*% (centeredDM.keep * w1);
solve(xtx);
}, silent = TRUE);
}
if (!inherits(xtxInv, "try-error"))
{
oldWeights = w1;
# The following is really (xtxInv %*% xy1), where xy1 = t(centeredDM) %*% (y[,i]*weights[,i])
beta.OLS[keepDM, i] = colSums(xtxInv * colSums(centeredDM.keep * y1 * w1));
## Caution here: because of the scaling based only on fitToSamples samples, y1 could be invalid.
betaValid[!keepDM, i] = FALSE;
betaValid[keepDM, i] = is.finite(beta.OLS[keepDM, i]);
if (!any(betaValid[, i])) regressionValid[i] = FALSE;
y.pred = centeredDM.keep %*% beta.OLS[keepDM, i, drop = FALSE];
if (weightType=="apriori")
{
# Standard calculation of sigma^2 in weighted regression
sigma.OLS[i] = sum((w1>0) * (y1 - y.pred)^2)/(sum(w1>0)-nc-1);
} else {
xtxw2 = t(centeredDM.keep) %*% (centeredDM.keep *w1^2);
sigma.OLS[i] = sum( w1* (y1-y.pred)^2) / (V1[i] - V2[i]/V1[i] - sum(xtxw2 * xtxInv));
}
} else {
regressionValid[i] = FALSE;
betaValid[, i] = FALSE;
lastInvalidFitError = xtxInv;
}
if (i%%garbageCollectInterval ==0) gc();
if (verbose > 1 && i%%pindStep==0) pind = updateProgInd(i/N, pind);
}
if (verbose > 1) {pind = updateProgInd(i/N, pind); printFlush()}
rweights = weights.refSamples;
} else {
fitFnc = match.fun(initialFitFunction);
if (is.null(initialFit.returnWeightName))
initialFit.returnWeightName = .initialFit.defaultWeightName(initialFitFunction);
if (is.null(initialFitOptions))
initialFitOptions = .initialFit.defaultOptions(initialFitFunction);
if
(is.null(initialFitRequiresFormula)) initialFitRequiresFormula = .initialFit.requiresFormula(initialFitFunction); rweights = weights.refSamples; if (verbose > 1) pind = initProgInd(); for (i in 1:N) { y1 = y.refSamples[, i]; w1 = weights.refSamples[, i] dmVar.w = apply(designMat.refSamples, 2, .weightedVar, weights = w1); keepDM = dmVar.w > 0; if (initialFitRequiresFormula) { modelData = data.frame(designMat.refSamples[, keepDM, drop = FALSE]); fit = try(do.call(fitFnc, c(list(formula = "y1~.", data = data.frame(y1 = y1, modelData), w = w1), initialFitOptions))); } else { fit = try(do.call(fitFnc, c(list(x = cbind(intercept = rep(1, nRefSamples), designMat.refSamples[, keepDM, drop = FALSE]), y = y1, w = w1), initialFitOptions))); } if (!inherits(fit, "try-error")) { beta.OLS[keepDM, i] = fit$coefficients[-1]; betaValid[!keepDM, i] = FALSE; rw1 = getElement(fit, initialFit.returnWeightName); if (length(rw1)!=nRefSamples) stop("Length of component '", initialFit.returnWeightName, "' does not match the number of samples.\n", "Please check that the name of the component containing robustness weights\n", "is specified correctly."); rweights[, i] = rw1 * w1; if (!is.null(fit$scale)) { sigma.OLS[i] = fit$scale } else { # This is not the greatest estimate since it is non-robust and does not take the final weights into # account. The hope is that this will never be used. sigma.OLS[i] = sum((w1>0) * fit$residuals^2)/(sum(w1>0)-nc-1) } } else { regressionValid[i] = FALSE; betaValid[, i] = FALSE; lastInvalidFitError = fit; } if (i%%garbageCollectInterval ==0) gc(); if (verbose > 1 && i%%pindStep==0) pind = updateProgInd(i/N, pind); } if (verbose > 1) {pind = updateProgInd(i/N, pind); printFlush()} } if (any(!regressionValid)) warning(immediate. 
= TRUE, "empiricalBayesLM: initial regression failed in ", sum(!regressionValid), " variables.\n", "Last failed model returned the following error:\n\n", lastInvalidFitError, "\n\nPlease check that the model is correctly specified."); if (all(!regressionValid)) stop("Initial regression model failed for all columns in 'data'.\n", "Last model returned the following error:\n\n", lastInvalidFitError, "\n\nPlease check that the model is correctly specified."); # beta.OLS has columns corresponding to variables in data, and rows corresponding to columns in x. # Debugging... if (FALSE) { fit = lm(y.refSamples~., data = data.frame(designMat.refSamples)) fit2 = lm(y.refSamples~., data = as.data.frame(centeredDM)); max(abs(fit2$coefficients[1,])) all.equal(c(fit$coefficients[-1, ]), c(fit2$coefficients[-1, ])) all.equal(c(fit$coefficients[-1, ]), c(beta.OLS)) sigma.fit = apply(fit$residuals, 2, var)*(nRefSamples-1)/(nRefSamples-1-nc) all.equal(as.numeric(sigma.fit), as.numeric(sigma.OLS)) } if (getEBadjustedData) { # Priors on beta : mean and variance if (verbose > 0) printFlush(spaste(spaces, "..calculating priors..")); if (robustPriors) { if (any(is.na(beta.OLS[, regressionValid]))) stop("Some of OLS coefficients are missing. 
Please use non-robust priors."); prior.means = rowMedians(beta.OLS[, regressionValid, drop = FALSE], na.rm = TRUE); prior.covar = .bicov(t(beta.OLS[, regressionValid, drop = FALSE])); } else { prior.means = rowMeans(beta.OLS[, regressionValid, drop = FALSE], na.rm = TRUE); prior.covar = cov(t(beta.OLS[, regressionValid, drop = FALSE]), use = "complete.obs"); } prior.inverse = solve(prior.covar); # Prior on sigma: mean and variance (median and MAD are bad estimators since the distribution is skewed) sigma.m = mean(sigma.OLS[regressionValid], na.rm = TRUE); sigma.v = var(sigma.OLS[regressionValid], na.rm = TRUE); # Turn the sigma mean and variance into the parameters of the inverse gamma distribution prior.a = sigma.m^2/sigma.v + 2; prior.b = sigma.m * (prior.a-1); # Calculate the EB estimates if (verbose > 0) printFlush(spaste(spaces, "..calculating final coefficients..")); beta.EB = beta.OLS; sigma.EB = sigma.OLS; # Get the means of the design matrix with respect to all rweight vectors. rV1 = colSums(rweights); rV2 = colSums(rweights^2); rmeans.dm = t(designMat.refSamples) %*% rweights / matrix(V1, nrow = nc, ncol = N, byrow = TRUE); if (verbose > 1) pind = initProgInd(); for (i in which(regressionValid)) { # Iterate to solve for EB regression coefficients (betas) and the residual variances (sigma) # It appears that this has to be done individually for each variable. 
difference = 1; iteration = 1; y1 = y.refSamples[, i]; w1 = rweights[, i]; centeredDM = designMat.refSamples - matrix(rmeans.dm[, i], nRefSamples, nc, byrow = TRUE); dmVar.w = apply(centeredDM, 2, .weightedVar, weights = w1); keepDM = dmVar.w > 0 & betaValid[, i]; beta.old = as.matrix(beta.OLS[keepDM, i, drop = FALSE]); sigma.old = sigma.OLS[i]; centeredDM.keep = centeredDM[, keepDM, drop = FALSE]; xtx = t(centeredDM.keep) %*% (centeredDM.keep * w1); xtxInv = solve(xtx); if (all(keepDM)) { prior.inverse.keep = prior.inverse; } else prior.inverse.keep = solve(prior.covar[keepDM, keepDM, drop = FALSE]); while (difference > tol && iteration <= maxIterations) { y.pred = centeredDM.keep %*% beta.old; if (wtype==1) { # Apriori weights. fin1 = yFinite.refSamples[, i]; nSamples1 = sum(fin1); sigma.new = (sum(fin1 * (y1-y.pred)^2) + 2*prior.b)/ (nSamples1-nc + 2 * prior.a + 1); } else { # Empirical weights V1 = sum(w1); V2 = sum(w1^2); xtxw2 = t(centeredDM.keep) %*% (centeredDM.keep *w1^2); sigma.new = (sum( w1* (y1-y.pred)^2) + 2*prior.b) / (V1 - V2/V1 - sum(xtxw2 * xtxInv) + 2*prior.a + 2); } A = (prior.inverse.keep + xtx/sigma.new)/2; A.inv = solve(A); #B = as.numeric(t(y1*w1) %*% centeredDM/sigma.new) + as.numeric(prior.inverse %*% prior.means); B = colSums(centeredDM.keep * y1 * w1/sigma.new) + colSums(prior.inverse.keep * prior.means[keepDM]); #beta.new = A.inv %*% as.matrix(B)/2 beta.new = colSums(A.inv * B)/2 # ...a different and hopefully faster way of writing the above difference = max( abs(sigma.new-sigma.old)/(sigma.new + sigma.old), abs(beta.new-beta.old)/(beta.new + beta.old)); #if (!is.finite(difference)) #{ # printFlush("Invalid 'difference' in empiricalBayesLM. Dropping into browser."); # browser(); #} beta.old = beta.new; sigma.old = sigma.new; iteration = iteration + 1; if (any(is.na(c(difference, iteration)))) browser("Have non-finite difference or iteration."); } if (iteration > maxIterations) warning(immediate. 
= TRUE, "Exceeded maximum number of iterations for variable ", i, "."); beta.EB[keepDM, i] = beta.old; sigma.EB[i] = sigma.old; if (i%%garbageCollectInterval ==0) gc(); if (verbose > 1 && i%%pindStep==0) pind = updateProgInd(i/N, pind); } if (verbose > 1) {pind = updateProgInd(i/N, pind); printFlush()} } ### if (getEBAdjustedData) on.exit(NULL); # Put output together. Will return the coefficients for lm and EB-lm, and the residuals with added mean. if (verbose > 0) printFlush(paste(spaces, "..calculating adjusted data..")); fitAndCoeffs = function(beta, sigma) { #fitted.removed = fitted = matrix(NA, nSamples, N); fitted.removed = y; ### this assignment and the one below just sets the dimensions and dimnames if (getFittedValues) fitted = y; ### Actual values will be set below. beta.fin = beta; beta.fin[is.na(beta)] = 0; for (i in which(regressionValid)) { centeredDM = designMat - matrix(means.dm[, i], nSamples, nc, byrow = TRUE); fitted.removed[, i] = centeredDM[, removedColumns, drop = FALSE] %*% beta.fin[removedColumns, i, drop = FALSE]; if (getFittedValues) fitted[, i] = centeredDM %*% beta.fin[, i, drop = FALSE] if (i%%garbageCollectInterval ==0) gc(); } #browser() residuals = (y.scaled - fitted.removed) * scale.y.mat; # Residuals now have weighted column means of fitted-to samples equal zero. 
residualMeans.scaleMeansOfSamples = .colWeightedMeans.x(residuals[scaleMeanOfSamples, ], weights[scaleMeanOfSamples, ], na.rm = TRUE);
meanShift = matrix(mean.y.target, nSamples, N, byrow = TRUE) - matrix(residualMeans.scaleMeansOfSamples, nSamples, N, byrow = TRUE);
if (getFittedValues)
{
fitted.all = matrix(NA, nSamples, N.original);
fitted.all[, keepY] = fitted * scale.y.mat + meanShift;
dimnames(fitted.all) = dimnamesY;
}
residualsWithMean.all = matrix(NA, nSamples, N.original);
if (getResiduals)
{
residuals.all = residualsWithMean.all;
residuals.all[, keepY] = residuals;
residuals.all[, varYZero] = 0;
residuals.all[is.na(y.original)] = NA;
}
residualsWithMean.all[, keepY] = residuals + meanShift;
residualsWithMean.all[, varYZero] = 0;
residualsWithMean.all[is.na(y.original)] = NA;
beta.all = beta.all.scaled = matrix(NA, nc+1, N.original);
sigma.all = sigma.all.scaled = rep(NA, N.original);
sigma.all.scaled[keepY] = sigma;
sigma.all[keepY] = sigma * scale.y^2;
beta.all[-1, keepY] = beta.fin * matrix(scale.y, nrow = nc, ncol = N, byrow = TRUE);
beta.all.scaled[-1, keepY] = beta.fin;
beta.all[-1, varYZero] = beta.all.scaled[-1, varYZero] = 0;
#alpha = mean.y - t(as.matrix(mean.x)) %*% beta * scale.y
alpha = mean.y - colSums(beta * means.dm, na.rm = TRUE) * scale.y;
beta.all[1, keepY] = alpha;
beta.all.scaled[1, keepY] = 0;
dimnames(residualsWithMean.all) = dimnamesY;
colnames(beta.all) = colnames(beta.all.scaled) = colnames(y.original);
rownames(beta.all) = rownames(beta.all.scaled) = c("(Intercept)", colnames(designMat));
list(residuals = if (getResiduals) residuals.all else NULL,
residualsWithMean = residualsWithMean.all,
beta = beta.all,
beta.scaled = beta.all.scaled,
sigmaSq = sigma.all,
sigmaSq.scaled = sigma.all.scaled,
fittedValues = if (getFittedValues) fitted.all else NULL);
}
fc.OLS = fitAndCoeffs(beta.OLS, sigma.OLS);
if (getEBadjustedData) fc.EB = fitAndCoeffs(beta.EB, sigma.EB);
betaValid.all = matrix(FALSE, nc+1, N.original);
betaValid.all[-1, keepY] =
betaValid; betaValid.all[1, keepY] = TRUE; dimnames(betaValid.all) = dimnames(fc.OLS$beta); finalWeights = originalWeights; finalWeights[fitToSamples, keepY] = rweights; list( adjustedData = if (getEBadjustedData) fc.EB$residualsWithMean else NULL, residuals = if (getResiduals & getEBadjustedData) fc.EB$residuals else NULL, coefficients = if (getEBadjustedData) fc.EB$beta else NULL, coefficients.scaled = if (getEBadjustedData) fc.EB$beta.scaled else NULL, sigmaSq = if (getEBadjustedData) fc.EB$sigmaSq else NULL, sigmaSq.scaled = if (getEBadjustedData) fc.EB$sigmaSq.scaled else NULL, fittedValues = if (getFittedValues && getEBadjustedData) fc.EB$fittedValues else NULL, # OLS results adjustedData.OLS = if (getOLSAdjustedData) fc.OLS$residualsWithMean else NULL, residuals.OLS = if (getResiduals) fc.OLS$residuals else NULL, coefficients.OLS = fc.OLS$beta, coefficients.OLS.scaled = fc.OLS$beta.scaled, sigmaSq.OLS = fc.OLS$sigmaSq, sigmaSq.OLS.scaled = fc.OLS$sigmaSq.scaled, fittedValues.OLS = if (getFittedValues) fc.OLS$fittedValues else NULL, # Weights used in the model initialWeights = if (getWeights) originalWeights else NULL, finalWeights = if (getWeights) finalWeights else NULL, # indices of valid fits dataColumnValid = keepY, dataColumnWithZeroVariance = varYZero, coefficientValid = betaValid.all); } #=================================================================================================== # # Linear model coefficients # #=================================================================================================== .linearModelCoefficients = function( responses, predictors, weights = NULL, automaticWeights = c("none", "bicov"), aw.maxPOutliers = 0.1, getWeights = FALSE, garbageCollectInterval = 50000, minDesignDeviation = 1e-10, verbose = 0, indent = 0) { spaces = indentSpaces(indent); designMat = NULL; automaticWeights = match.arg(automaticWeights); if (automaticWeights=="bicov") { if (verbose > 0) printFlush(paste(spaces, "..calculating 
weights.."));
weights = bicovWeights(responses, maxPOutliers = aw.maxPOutliers);
}
if (is.null(dim(predictors)))
predictors = data.frame(retainedCovariate = predictors);
keepSamples = rowSums(is.na(predictors))==0;
responses = responses[keepSamples, , drop = FALSE];
predictors = predictors[keepSamples, , drop = FALSE];
nSamples = nrow(responses);
if (nrow(predictors)!=nSamples)
stop("Numbers of rows in 'responses' and 'predictors' differ.");
predictors = as.data.frame(predictors);
mm = model.matrix(~., data = predictors);
colSDs = colSds(mm[, -1, drop = FALSE], na.rm = TRUE);
if (any(colSDs==0))
stop("Some columns in 'predictors' have zero variance.");
designMat = mm;
y.original = as.matrix(responses);
N.original = ncol(y.original);
if (any(!is.finite(y.original)))
{
warning(immediate. = TRUE, "Found missing and/or non-finite data. These will be removed.");
}
if (is.null(weights)) weights = matrix(1, nSamples, ncol(y.original));
if (any(!is.finite(weights)))
stop("Given 'weights' contain some infinite or missing entries. All weights must be present and finite.");
if (any(weights<0))
stop("Given 'weights' contain negative entries. All weights must be non-negative.");
originalWeights = weights;
dimnamesY = dimnames(y.original);
if (verbose > 0) printFlush(paste(spaces, "..checking for non-varying responses.."));
varY = colVars(y.original, na.rm = TRUE);
varYMissing = is.na(varY);
varYZero = varY==0;
varYZero[is.na(varYZero)] = FALSE;
keepY = !(varYZero | varYMissing);
y = y.original[, keepY];
weights = weights[, keepY];
yFinite = is.finite(y);
weights[!yFinite] = 0;
nSamples.y = colSums(yFinite);
y[!yFinite] = 0;
N = ncol(y);
nc = ncol(designMat);
if (verbose > 0) printFlush(paste(spaces, "..model fitting.."));
# Prepare ordinary regression to get starting points for beta and sigma
beta.OLS = matrix(NA, nc, N);
betaValid = matrix(TRUE, nc, N);
sigma.OLS = rep(NA, N);
regressionValid = rep(TRUE, N);
# Get the means of the design matrix with respect to all weight vectors.
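# Equivalent formulation, wrapped in if (FALSE) so it never executes: column c
# of the means.dm cross-product computed below holds the weighted column means
# of the design matrix under the weight vector weights[, c], since
# t(X) %*% w equals colSums(X * w).
if (FALSE)
{
  X = matrix(rnorm(12), 6, 2); ww = runif(6);
  m1 = t(X) %*% ww / sum(ww);
  m2 = apply(X, 2, weighted.mean, w = ww);
  stopifnot(isTRUE(all.equal(c(m1), m2)));
}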
V1 = colSums(weights);
V2 = colSums(weights^2);
means.dm = t(designMat) %*% weights / matrix(V1, nrow = nc, ncol = N, byrow = TRUE);
i = 0;
on.exit(printFlush(spaste("Error occurred at i = ", i)));
#on.exit(browser());
pindStep = max(1, floor(N/100));
# Ordinary weighted least squares
oldWeights = rep(-1, nSamples);
if (verbose > 1) pind = initProgInd();
for (i in 1:N)
{
w1 = weights[, i];
y1 = y[, i];
if (any(w1!=oldWeights))
{
#centeredDM = designMat - matrix(means.dm[, i], nSamples, nc, byrow = TRUE);
centeredDM = designMat; # Do not center the design matrix.
#dmVar = colSds(centeredDM);
dmVar.w = apply(centeredDM, 2, .weightedVar, weights = w1);
#keepDM = dmVar > 0 & dmVar.w > 0;
keepDM = dmVar.w > minDesignDeviation^2;
keepDM[1] = TRUE; # For the intercept
centeredDM.keep = centeredDM[, keepDM, drop = FALSE];
xtx = t(centeredDM.keep) %*% (centeredDM.keep * w1);
xtxInv = try(solve(xtx), silent = TRUE);
oldWeights = w1;
}
if (!inherits(xtxInv, "try-error"))
{
# The following is really (xtxInv %*% xy1), where xy1 = t(centeredDM) %*% (y[,i]*weights[,i])
beta.OLS[keepDM, i] = colSums(xtxInv * colSums(centeredDM.keep * y1 * w1));
betaValid[!keepDM, i] = FALSE;
y.pred = centeredDM.keep %*% beta.OLS[keepDM, i, drop = FALSE];
#if (weightType=="apriori")
#{
# Standard calculation of sigma^2 in weighted regression
sigma.OLS[i] = sum((w1>0) * (y1 - y.pred)^2)/(sum(w1>0)-nc-1);
#} else {
#  xtxw2 = t(centeredDM.keep) %*% (centeredDM.keep *w1^2);
#  sigma.OLS[i] = sum( w1* (y1-y.pred)^2) / (V1[i] - V2[i]/V1[i] - sum(xtxw2 * xtxInv));
#}
} else {
regressionValid[i] = FALSE;
betaValid[, i] = FALSE;
}
if (i%%garbageCollectInterval ==0) gc();
if (verbose > 1 && i%%pindStep==0) pind = updateProgInd(i/N, pind);
}
if (verbose > 1) {pind = updateProgInd(i/N, pind); printFlush()}
on.exit(NULL);
if (any(!regressionValid)) warning(immediate.
= TRUE, "linearModelCoefficients: initial regression failed in ", sum(!regressionValid), " variables.");
if (all(!regressionValid))
stop("Initial regression model failed for all columns in 'data'.\n",
"Last model returned the following error:\n\n", xtxInv,
"\n\nPlease check that the model is correctly specified.");
# beta.OLS has columns corresponding to variables in responses, and rows corresponding to columns in x.
# Extend results to all variables.
beta.all = matrix(NA, nc, N.original);
beta.all[, keepY] = beta.OLS;
dimnames(beta.all) = list(colnames(designMat), dimnamesY[[2]]);
sigma.all = rep(NA, N.original);
sigma.all[keepY] = sigma.OLS;
betaValid.all = matrix(FALSE, nc, N.original);
betaValid.all[, keepY] = betaValid;
dimnames(betaValid.all) = dimnames(beta.all);
list(coefficients = beta.all,
sigmaSq = sigma.all,
# Weights used in the model
weights = if (getWeights) weights else NULL,
# indices of valid fits
dataColumnValid = keepY,
dataColumnWithZeroVariance = varYZero,
coefficientValid = betaValid.all);
}
#===================================================================================================
#
# Robust functions
#
#===================================================================================================
.bicov = function(x)
{
x = as.matrix(x);
if (any(!is.finite(x))) stop("All entries of 'x' must be finite.");
nc = ncol(x);
nr = nrow(x);
mx = colMedians(x);
mx.mat = matrix(mx, nr, nc, byrow = TRUE);
mads = colMads(x, constant = 1);
mad.mat = matrix(mads, nr, nc, byrow = TRUE);
u = abs((x - mx.mat)/(9 * mad.mat));
a = matrix(as.numeric(u<1), nr, nc);
topMat = a * (x-mx.mat) * (1-u^2)^2;
top = nr * t(topMat) %*% topMat;
botVec = colSums(a * (1-u^2) * (1-5*u^2));
bot = botVec %o% botVec;
out = top/bot;
dimnames(out) = list(colnames(x), colnames(x));
out;
}
# Argument refWeight:
# w = (1-u^2)^2
# u^2 = 1-sqrt(w)
# referenceU = sqrt(1-sqrt(referenceW))
bicovWeightFactors = function(x, pearsonFallback = TRUE, maxPOutliers = 1,
outlierReferenceWeight = 0.5625, defaultFactor = NA) { referenceU = sqrt(1-sqrt(outlierReferenceWeight)); dimX = dim(x); dimnamesX = dimnames(x); x = as.matrix(x); nc = ncol(x); nr = nrow(x); mx = colMedians(x, na.rm = TRUE); mx.mat = matrix(mx, nr, nc, byrow = TRUE); mads = colMads(x, constant = 1, na.rm = TRUE); madZero = replaceMissing(mads==0); if (any(madZero, na.rm = TRUE)) { warning(immediate. = TRUE, "MAD is zero in some columns of 'x'."); if (pearsonFallback) { sds = colSds(x[, madZero, drop = FALSE], na.rm = TRUE) mads[madZero] = sds * qnorm(0.75); } } mad.mat = matrix(mads, nr, nc, byrow = TRUE); u = (x - mx.mat)/(9 * mad.mat); if (maxPOutliers < 0.5) { lq = colQuantileC(u, p = maxPOutliers); uq = colQuantileC(u, p = 1-maxPOutliers); lq[is.na(lq)] = 0; uq[is.na(uq)] = 0; lq[lq>-referenceU] = -referenceU; uq[uq < referenceU] = referenceU; lq = abs(lq); changeNeg = which(lq>referenceU); changePos = which(uq > referenceU); for (c in changeNeg) { neg1 = u[, c] < 0; neg1[is.na(neg1)] = FALSE; u[neg1, c] = u[neg1, c] * referenceU/lq[c]; } for (c in changePos) { pos1 = u[, c] > 0; pos1[is.na(pos1)] = FALSE; u[pos1, c] = u[pos1, c] * referenceU/uq[c]; } } if (!is.null(defaultFactor)) u[!is.finite(u)] = defaultFactor; u; } bicovWeights = function(x, pearsonFallback = TRUE, maxPOutliers = 1, outlierReferenceWeight = 0.5625, defaultWeight = 0) { dimX = dim(x); dimnamesX = dimnames(x); x = as.matrix(x); nc = ncol(x); nr = nrow(x); u = bicovWeightFactors(x, pearsonFallback = pearsonFallback, maxPOutliers = maxPOutliers, outlierReferenceWeight = outlierReferenceWeight, defaultFactor = NA); a = matrix(as.numeric(abs(u)<1), nr, nc); weights = a * (1-u^2)^2; weights[is.na(x)] = defaultWeight; weights[!is.finite(weights)] = defaultWeight; dim(weights) = dimX; if (!is.null(dimX)) dimnames(weights) = dimnamesX; weights; } bicovWeightsFromFactors = function(u, defaultWeight = 0) { dimU = dim(u); u = as.matrix(u); a = matrix(as.numeric(abs(u)<1), nrow(u), ncol(u)); weights = 
a * (1-u^2)^2; weights[is.na(u)] = defaultWeight; weights[!is.finite(weights)] = defaultWeight; dim(weights) = dimU; if (!is.null(dimU)) dimnames(weights) = dimnames(u); weights; } modifiedBisquareWeights = function( x, removedCovariates = NULL, pearsonFallback = TRUE, maxPOutliers = 0.05, outlierReferenceWeight = 0.1, groupsForMinWeightRestriction = NULL, minWeightInGroups = 0, maxPropUnderMinWeight = 1, defaultWeight = 1, getFactors = FALSE) { if (is.data.frame(x)) x = as.matrix(x); defaultFactor = sqrt(1-sqrt(defaultWeight)); maxFactorInGroups = sqrt(1-sqrt(minWeightInGroups)) if (length(removedCovariates)==0) { res = x; } else { if (length(dim(removedCovariates)) == 0) removedCovariates = as.matrix(removedCovariates); if (nrow(removedCovariates)!=nrow(x)) stop("If given, number of rows in 'removedCovariates' must equal the number of rows in 'x'."); nLevels = apply(removedCovariates, 2, function(x) length(unique(x[!is.na(x)]))); removedCovariates = removedCovariates[, nLevels > 1, drop = FALSE]; if (ncol(removedCovariates) > 0) { res = empiricalBayesLM(x, removedCovariates = removedCovariates, automaticWeights = "none", getOLSAdjustedData = FALSE, getFittedValues = FALSE, getEBadjustedData = FALSE)$residuals.OLS; dimnames(res) = dimnames(x); } else res = x; } factors = bicovWeightFactors(res, pearsonFallback = pearsonFallback, maxPOutliers = maxPOutliers, outlierReferenceWeight = outlierReferenceWeight, defaultFactor = 1-defaultWeight); factors = abs(factors) # If groupsForMinWeightRestriction is a data frame or matrix (assumed to be 1-column), turn it into a vector if (!is.null(groupsForMinWeightRestriction)) groupsForMinWeightRestriction = c(unlist(groupsForMinWeightRestriction)); if (length(groupsForMinWeightRestriction)>0) { if (length(groupsForMinWeightRestriction)!=nrow(x)) stop("'groupsForMinWeightRestriction' must be a vector of length equal the number of rows in 'x'."); } if (length(groupsForMinWeightRestriction)>0 && minWeightInGroups > 0) { fixCols = 
rep(FALSE, ncol(factors)); refFactor = rep(maxFactorInGroups, ncol(factors)) for (g in sort(unique(groupsForMinWeightRestriction))) { # Check that appropriate quantile of factor for each group is not below minWeightInGroups factorQuant = colQuantileC(factors[groupsForMinWeightRestriction==g, ], p = 1-maxPropUnderMinWeight) fix = factorQuant > maxFactorInGroups; fixCols[fix] = TRUE; refFactor[fix] = pmax(factorQuant[fix], refFactor[fix]); } refFactorMat = matrix(refFactor, nrow(factors), ncol(factors), byrow = TRUE); factors = factors/refFactorMat * maxFactorInGroups; weights = bicovWeightsFromFactors(factors, defaultWeight = defaultWeight); attr(weights, "scaledColumnsToMeetMinWeight") = which(fixCols); attr(weights, "scaleFactorsToMeetMinWeights") = maxFactorInGroups/refFactorMat } else { weights = bicovWeightsFromFactors(factors, defaultWeight = defaultWeight) } if (getFactors) list(weights = weights, factors = factors) else weights; } WGCNA/R/corFunctions.R0000644000176200001440000002376113344057441014127 0ustar liggesusers# slight re-definition of the bicor function bicor = function(x, y = NULL, robustX = TRUE, robustY = TRUE, use = 'all.obs', maxPOutliers = 1, quick = 0, pearsonFallback = "individual", cosine = FALSE, cosineX = cosine, cosineY = cosine, nThreads = 0, verbose = 0, indent = 0) { Cerrors = c("Memory allocation error") nKnownErrors = length(Cerrors); na.method = pmatch(use, c("all.obs", "pairwise.complete.obs")) if (is.na(na.method)) stop(paste("Unrecognized parameter 'use'. Recognized values are \n", "'all.obs', 'pairwise.complete.obs'")) if (na.method==1) { if (sum(is.na(x))> 0) stop("Missing values present in input variable 'x'. Consider using use = 'pairwise.complete.obs'."); if (!is.null(y)) { if (sum(is.na(y)) > 0) stop("Missing values present in input variable 'y'. Consider using use = 'pairwise.complete.obs'."); } } fallback = pmatch(pearsonFallback, .pearsonFallbacks) if (is.na(na.method)) stop(paste("Unrecognized 'pearsonFallback'. 
Recognized values are (unique abbreviations of)\n", paste(.pearsonFallbacks, collapse = ", "))) if (quick < 0) stop("quick must be non-negative."); if (nThreads < 0) stop("nThreads must be non-negative."); if (is.null(nThreads) || (nThreads==0)) nThreads = .useNThreads(); x = as.matrix(x); if (prod(dim(x))==0) stop("'x' has a zero dimension."); storage.mode(x) = "double"; nNA = 0L; err = as.integer(nNA-1 + 1/1); warnX = as.integer(1L- 1/1) warnY = as.integer(2L- 1/1 - 3/3); quick = as.double(quick); maxPOutliers = as.double(maxPOutliers); fallback = as.integer(fallback); cosineX = as.integer(cosineX); robustX = as.integer(robustX); nThreads = as.integer(nThreads); verbose = as.integer(verbose); indent = as.integer(indent) if (is.null(y)) { if (!robustX) { res = cor(x, use = use) } else { res = .Call("bicor1_call", x, maxPOutliers, quick, fallback, cosineX, nNA, err, warnX, nThreads, verbose, indent, PACKAGE = "WGCNA"); } if (!is.null(colnames(x))) dimnames(res) = list(colnames(x), colnames(x)); if (warnX > 0) { # For now have only one warning warning(paste("bicor: zero MAD in variable 'x'.", .zeroMADWarnings[fallback])); } } else { y = as.matrix(y); storage.mode(y) = "double"; if (prod(dim(y))==0) stop("'y' has a zero dimension."); if (nrow(x)!=nrow(y)) stop("'x' and 'y' have incompatible dimensions (unequal numbers of rows)."); cosineY = as.integer(cosineY); robustY = as.integer(robustY); res = .Call("bicor2_call", x, y, robustX, robustY, maxPOutliers, quick, fallback, cosineX, cosineY, nNA, err, warnX, warnY, nThreads, verbose, indent, PACKAGE = "WGCNA"); if (!is.null(dimnames(x)[[2]]) || !is.null(dimnames(y)[[2]])) dimnames(res) = list(dimnames(x)[[2]], dimnames(y)[[2]]); if (warnX > 0) warning(paste("bicor: zero MAD in variable 'x'.", .zeroMADWarnings[fallback])); if (warnY > 0) warning(paste("bicor: zero MAD in variable 'y'.", .zeroMADWarnings[fallback])); } if (err > 0) { if (err > nKnownErrors) { stop(paste("An error occurred in compiled code. 
Error code is", err));
    } else {
      stop(paste(Cerrors[err], "occurred in compiled code. "));
    }
  }
  if (nNA > 0)
  {
    warning(paste("Missing values generated in calculation of bicor.",
                  "Likely cause: too many missing entries, zero median absolute deviation, or zero variance."));
  }
  res;
}

cor = function(x, y = NULL, use = "all.obs",
               method = c("pearson", "kendall", "spearman"),
               weights.x = NULL, weights.y = NULL,
               quick = 0, cosine = FALSE, cosineX = cosine, cosineY = cosine,
               drop = FALSE, nThreads = 0, verbose = 0, indent = 0)
{
  na.method <- pmatch(use, c("all.obs", "complete.obs", "pairwise.complete.obs", "everything",
                             "na.or.complete"), nomatch = 0)
  method <- match.arg(method)
  if (length(weights.x)==0) weights.x = NULL;
  if (length(weights.y)==0) weights.y = NULL;
  x = as.matrix(x);
  nx = ncol(x);
  if (!is.null(y))
  {
    y = as.matrix(y);
    ny = ncol(y);
  } else ny = nx;
  if ((method=="pearson") && ( (na.method==1) || (na.method==3) ))
  {
    Cerrors = c("Memory allocation error")
    nKnownErrors = length(Cerrors);
    na.method = pmatch(use, c("all.obs", "pairwise.complete.obs"))
    if (is.na(na.method))
      stop(paste("Unrecognized parameter 'use'. Recognized values are \n",
                 "'all.obs', 'pairwise.complete.obs'"))
    if (na.method==1)
    {
      if (sum(is.na(x))> 0)
        stop("Missing values present in input variable 'x'. Consider using use = 'pairwise.complete.obs'.");
      if (!is.null(y))
      {
        if (sum(is.na(y)) > 0)
          stop("Missing values present in input variable 'y'. Consider using use = 'pairwise.complete.obs'.");
      }
    }
    if (quick < 0) stop("quick must be non-negative.");
    if (nThreads < 0) stop("nThreads must be non-negative.");
    if (is.null(nThreads) || (nThreads==0)) nThreads = .useNThreads();
    if (prod(dim(x))==0) stop("'x' has a zero dimension.");
    if (!is.null(weights.x))
    {
      if (is.null(dim(weights.x)))
      {
        if (length(weights.x)!=nrow(x))
          stop("When 'weights.x' are given, they must be a vector of length 'nrow(x)' or a matrix\n",
               "of the same dimensions as 'x'.");
        weights.x = matrix(weights.x, nrow(x), ncol(x));
      } else if (!isTRUE(all.equal(dim(weights.x), dim(x))))
        stop("When 'weights.x' are given, they must be a vector of length 'nrow(x)' or a matrix\n",
             "of the same dimensions as 'x'.");
      if (any(!is.finite(weights.x)))
      {
        if (verbose > 0)
          warning("cor: found non-finite weights. These will be removed (set to missing), ",
                  "and the corresponding entries in 'x' will be treated as missing.");
        weights.x[!is.finite(weights.x)] = NA;
      }
      if (any(weights.x < 0, na.rm = TRUE)) stop("All weights must be non-negative.");
      if (!is.null(y) && is.null(weights.y)) weights.y = matrix(1, nrow(y), ncol(y));
    }
    if (!is.null(weights.y))
    {
      if (is.null(y)) stop("'weights.y' can only be used if 'y' is non-NULL.");
      if (is.null(dim(weights.y)))
      {
        if (length(weights.y)!=nrow(y))
          stop("When 'weights.y' are given, they must be a vector of length 'nrow(y)' or a matrix\n",
               "of the same dimensions as 'y'.");
        weights.y = matrix(weights.y, nrow(y), ncol(y));
      } else if (!isTRUE(all.equal(dim(weights.y), dim(y))))
        stop("When 'weights.y' are given, they must be a vector of length 'nrow(y)' or a matrix\n",
             "of the same dimensions as 'y'.");
      if (any(!is.finite(weights.y)))
      {
        if (verbose > 0)
          warning("cor: found non-finite weights. These will be removed (set to missing), ",
                  "and the corresponding entries in 'y' will be treated as missing.");
        weights.y[!is.finite(weights.y)] = NA;
      }
      if (any(weights.y < 0, na.rm = TRUE)) stop("All weights must be non-negative.");
      if (is.null(weights.x)) weights.x = matrix(1, nrow(x), ncol(x));
    }
    storage.mode(x)= "double";
    if (!is.null(weights.x)) storage.mode(weights.x) = "double";
    if (!is.null(weights.y)) storage.mode(weights.y) = "double";
    nNA = 0L
    err = as.integer(nNA-1 + 1/1);
    cosineX = as.integer(cosineX);
    nThreads = as.integer(nThreads);
    verbose = as.integer(verbose);
    indent = as.integer(indent);
    if (is.null(y))
    {
      res = .Call("cor1Fast_call", x, weights.x, quick, cosine,
                  nNA, err, nThreads, verbose, indent, PACKAGE = "WGCNA");
      if (!is.null(dimnames(x)[[2]]))
        dimnames(res) = list(dimnames(x)[[2]], dimnames(x)[[2]] );
    } else {
      y = as.matrix(y);
      storage.mode(y)= "double";
      cosineY = as.integer(cosineY);
      if (prod(dim(y))==0) stop("'y' has a zero dimension.");
      if (nrow(x)!=nrow(y))
        stop("'x' and 'y' have incompatible dimensions (unequal numbers of rows).");
      res = .Call("corFast_call", x, y, weights.x, weights.y, quick, cosineX, cosineY,
                  nNA, err, nThreads, verbose, indent, PACKAGE = "WGCNA");
      if (!is.null(dimnames(x)[[2]]) || !is.null(dimnames(y)[[2]]))
        dimnames(res) = list(dimnames(x)[[2]], dimnames(y)[[2]]);
    }
    if (err > 0)
    {
      if (err > nKnownErrors)
      {
        stop(paste("An error occurred in compiled code. Error code is", err));
      } else {
        stop(paste(Cerrors[err], "occurred in compiled code. "));
      }
    }
    if (nNA > 0)
    {
      warning(paste("Missing values generated in calculation of cor.",
                    "Likely cause: too many missing entries or zero variance."));
    }
    if (drop) res[, , drop = TRUE] else res;
  } else {
    stats::cor(x,y, use, method);
  }
}

# Wrappers for compatibility with older scripts
cor1 = function(x, use = "all.obs", verbose = 0, indent = 0)
{
  cor(x, use = use, verbose = verbose, indent = indent);
}

corFast = function(x, y = NULL, use = "all.obs", quick = 0, nThreads = 0, verbose = 0, indent = 0)
{
  # Arguments after 'method' must be passed by name; passed positionally they would land in
  # 'weights.x', 'weights.y', 'quick' and 'cosine' of cor() above.
  cor(x, y, use = use, method = "pearson", quick = quick, nThreads = nThreads,
      verbose = verbose, indent = indent)
}

#===================================================================================================
# WGCNA/R/TrueTrait.R
#===================================================================================================

TrueTrait=function(datX, y, datXtest=NULL, corFnc = "bicor",
                   corOptions = "use = 'pairwise.complete.obs'",
                   LeaveOneOut.CV=FALSE, skipMissingVariables=TRUE, addLinearModel=FALSE)
{
  datX=as.matrix(datX)
  no.variables=dim(as.matrix(datX))[[2]]
  datVariableInfo=data.frame(matrix(NA, nrow=no.variables, ncol=4))
  names(datVariableInfo)=c("Variable","center", "scale", "weights.y.true2")
  if ( is.null(colnames(datX))) {datVariableInfo$Variable=1:no.variables
  } else {datVariableInfo$Variable=colnames(datX)}
  no.observations=dim(as.matrix(datX))[[1]]
  if (no.observations != length(y) )
  {
    stop("The number of rows of datX does not correspond to the length of y. Consider transposing your input matrix or use a different input.")
  }
  if (no.observations==1 )
  { warning("Only 1 observation, i.e. the length of y is 1. The function cannot be used. 
For your convenience, the estimated true values will be set to the input value of y.")
    y.true1=y; y.true2=y; y.true3=y
  }
  if (no.observations>1 )
  {
    y.true1=rep(NA, length(y) )
    y.true2=rep(NA, length(y) )
    y.true3=rep(NA, length(y) )
    y.lm=rep(NA, length(y) )
    restNonMissingY= !is.na(y)
    r.characteristic=NA
    SD.ytrue2=NA
    SD.ytrue3=NA
    SsquaredBE=NA
    if (sum(restNonMissingY,na.rm=TRUE) >3 )
    {
      corX = parse(text = paste(corFnc, "(datX,y ",prepComma(corOptions), ")"))
      rVector= as.numeric(eval(corX))
      datCoef=t(coef(lm(datX~ y,na.action="na.exclude")))
      datVariableInfo$center=datCoef[,1] # intercept
      datVariableInfo$scale=datCoef[,2] # slope
      datXscaled=scale(datX,center=datCoef[,1],scale=datCoef[,2] )
      # Steve, this is where I made the one change
      weights0=rVector^2/((1-rVector^2)*var(y,na.rm=TRUE))
      weights=weights0/sum(weights0)
      datVariableInfo$weights.y.true2=weights
      y.true1=as.numeric(apply(as.matrix(datXscaled),1,mean))
      y.true2=as.numeric(as.matrix(datXscaled)%*%weights)
      if (skipMissingVariables )
      {
        y.true1= as.numeric(apply(as.matrix(datXscaled),1,mean,na.rm=TRUE))
        weightsMatrix=matrix(weights,byrow=TRUE,nrow=dim(as.matrix(datXscaled))[[1]],
                             ncol=length(weights) )
        weightsMatrix[is.na(datXscaled)]=0
        rowsum.weightsMatrix=apply(as.matrix(weightsMatrix),1,sum)
        weightsMatrix=t(scale(t(as.matrix(weightsMatrix)),center=F,scale= rowsum.weightsMatrix))
        # this corresponds to formula 25 in Klemera et al 2006
        datXscaledweighted= as.matrix(datXscaled* weightsMatrix)
        y.true2=as.numeric(apply(datXscaledweighted,1,sum,na.rm=TRUE))
      } # end of if (skipMissingVariables )
      # the following is different from Klemera in that it has an absolute value
      r.characteristic=sum(rVector^2/sqrt(1-rVector^2) )/sum(abs(rVector)/sqrt(1-rVector^2) )
      no.missing=sum(apply( as.matrix( is.na(datX)),1,sum))
      if (sum(no.missing)>0)
      {
        warning("The input datX contains missing values.\n I recommend you impute missing values in datX before running this function.")
      } # end of if (sum(no.missing)>0)
      # formula 37 from Klemera
      SsquaredBE=var( y.true2-y,na.rm=TRUE) -
          (1- r.characteristic^2)/r.characteristic^2*var(y,na.rm=TRUE)/no.variables
      # this corresponds to formula 34 in Klemera
      y.true3=(as.numeric( as.matrix(datXscaled)%*% weights0)+y/SsquaredBE )/( sum(weights0)+ 1/SsquaredBE)
      y.true3[is.na(y.true3) ]=y[is.na(y.true3)]
    } # end of if (sum(restNonMissingY,na.rm=TRUE) >3 )
    SD.ytrue2=sqrt(1-r.characteristic^2)/r.characteristic*sqrt(var(y,na.rm=TRUE)/no.variables)
    # now formula 42
    SD.ytrue3=SD.ytrue2/sqrt(1+SD.ytrue2^2/SsquaredBE )
  } # end of if (no.observations>1 )
  datEstimates=data.frame(y, y.true1,y.true2,y.true3)
  if (!is.null(datXtest))
  {
    datXtest=as.matrix(datXtest)
    no.variablestest=dim(as.matrix(datXtest))[[2]]
    if (no.variablestest != no.variables)
      {stop("the number of variables in the test data is not the same as in the training data")}
    y.true1test=rep(NA, length(y) )
    y.true2test=rep(NA, length(y) )
    y.true3test=rep(NA, length(y) )
    restNonMissingY= !is.na(y)
    if (sum(restNonMissingY,na.rm=TRUE) >3 )
    {
      datXtestscaled=scale(datXtest,center=datCoef[,1],scale=datCoef[,2] )
      y.true1test=as.numeric(apply(as.matrix(datXtestscaled),1,mean) )
      y.true2test=as.numeric( as.matrix(datXtestscaled)%*%weights)
      if (skipMissingVariables )
      {
        y.true1test= as.numeric(apply(as.matrix(datXtestscaled),1,mean,na.rm=TRUE))
        weightsMatrixtest=matrix(weights,byrow=TRUE,nrow=dim(as.matrix(datXtestscaled))[[1]],
                                 ncol=length(weights) )
        weightsMatrixtest[is.na(datXtestscaled)]=0
        rowsum.weightsMatrixtest=apply(as.matrix(weightsMatrixtest),1,sum)
        weightsMatrixtest=t(scale(t(as.matrix(weightsMatrixtest)),center=F,
                                  scale= rowsum.weightsMatrixtest))
        # this corresponds to formula 25 in Klemera et al 2006
        datXscaledweightedtest= as.matrix(datXtestscaled* weightsMatrixtest)
        y.true2test=as.numeric(apply(datXscaledweightedtest,1,sum,na.rm=TRUE))
      } # end of if (skipMissingVariables )
    } # end of if (sum(restNonMissingY,na.rm=TRUE) >3 )
    datEstimatestest=data.frame(y.true1= y.true1test, y.true2= y.true2test)
  } # end of if (!is.null(datXtest))
  if ( LeaveOneOut.CV )
  {
    y.true1test.LOO=rep(NA,no.observations)
    y.true2test.LOO=rep(NA,no.observations)
    y.lmLOO= rep(NA,no.observations)
    for ( i in 1:no.observations )
    {
      rm(datCoef); rm(corX);
      datX.LOO=datX[-i,]
      datXtest.LOO= matrix(datX[i,],nrow=1)
      y.LOO=y[-i]
      no.variables=dim(as.matrix(datX.LOO))[[2]]
      no.observations=dim(as.matrix(datX.LOO))[[1]]
      if (no.observations==1 )
      { warning("When dealing with leave one out cross validation, there is only 1 observation in the training data")}
      if (no.observations>1 )
      {
        if (addLinearModel)
        {
          lmLOO=lm(y.LOO~., data=data.frame(datX.LOO),na.action=na.exclude)
          y.lmLOO[i]= sum(datXtest.LOO*lmLOO$coeff[-1])+lmLOO$coeff[[1]]
        }
        corX = parse(text = paste(corFnc, "(datX.LOO,y.LOO ", prepComma(corOptions), ")"))
        rVector= as.numeric(eval(corX))
        datCoef=t(coef(lm(datX.LOO~ y.LOO,na.action="na.exclude")))
        datX.LOOscaled=scale(datX.LOO,center=datCoef[,1],scale=datCoef[,2] )
        weights0=rVector^2/(1-rVector^2)
        weights=weights0/sum(weights0)
        datXtest.LOOscaled=(datXtest.LOO-datCoef[,1])/datCoef[,2]
        y.true1test.LOO[i]= mean(datXtest.LOOscaled)
        y.true2test.LOO[i]=sum(datXtest.LOOscaled*weights)
        if (skipMissingVariables )
        {
          y.true1test.LOO[i]= mean(datXtest.LOOscaled,na.rm=TRUE)
          weightsMatrixLOO= weights
          weightsMatrixLOO[is.na(datXtest.LOOscaled)]=0
          rowsum.weightsMatrixLOO=sum(weightsMatrixLOO)
          weightsMatrixLOO=weightsMatrixLOO/rowsum.weightsMatrixLOO
          datXscaledweightedLOO= datXtest.LOOscaled* weightsMatrixLOO
          y.true2test.LOO[i]=sum(datXscaledweightedLOO,na.rm=TRUE)
        } # end of if (skipMissingVariables )
      } # end of if (no.observations>1 )
    } # end of for loop
    datEstimates.LeaveOneOut.CV=data.frame(y.true1= y.true1test.LOO, y.true2= y.true2test.LOO)
  } # end of if ( LeaveOneOut.CV )
  if (addLinearModel)
  {
    y.lmTest=rep(NA, dim(as.matrix(datXtest))[[1]] )
    restNonMissingY= !is.na(y)
    if (sum(restNonMissingY,na.rm=TRUE) >3 )
    {
      lm1=lm(y~., data=data.frame(datX),na.action=na.exclude)
      y.lmTraining=predict(lm1)
      if( !is.null(datXtest)) {y.lmTest=predict(lm1,newdata=data.frame(datXtest))}
    }
  }
  if ( !is.null(datXtest) & LeaveOneOut.CV & !addLinearModel )
  {out= list( datEstimates=datEstimates, datEstimatestest= datEstimatestest,
              datEstimates.LeaveOneOut.CV= datEstimates.LeaveOneOut.CV,
              SD.ytrue2=SD.ytrue2, SD.ytrue3=SD.ytrue3, datVariableInfo= datVariableInfo ) }
  if ( !is.null(datXtest) & !LeaveOneOut.CV & !addLinearModel)
  {out= list( datEstimates=datEstimates, datEstimatestest= datEstimatestest,
              SD.ytrue2=SD.ytrue2, SD.ytrue3=SD.ytrue3, datVariableInfo= datVariableInfo ) }
  if ( is.null(datXtest) & LeaveOneOut.CV & !addLinearModel)
  {out= list( datEstimates=datEstimates,
              datEstimates.LeaveOneOut.CV= datEstimates.LeaveOneOut.CV,
              SD.ytrue2=SD.ytrue2, SD.ytrue3=SD.ytrue3, datVariableInfo= datVariableInfo ) }
  if ( is.null(datXtest) & !LeaveOneOut.CV & !addLinearModel)
  {out= list( datEstimates=datEstimates, SD.ytrue2=SD.ytrue2, SD.ytrue3=SD.ytrue3,
              datVariableInfo= datVariableInfo ) }
  if ( !is.null(datXtest) & LeaveOneOut.CV & addLinearModel )
  {out= list( datEstimates=data.frame(datEstimates, y.lm=y.lmTraining),
              datEstimatestest= data.frame(datEstimatestest, y.lm=y.lmTest),
              datEstimates.LeaveOneOut.CV= data.frame(datEstimates.LeaveOneOut.CV,y.lm=y.lmLOO),
              SD.ytrue2=SD.ytrue2, SD.ytrue3=SD.ytrue3, datVariableInfo= datVariableInfo ) }
  if ( !is.null(datXtest) & !LeaveOneOut.CV & addLinearModel)
  {out= list( datEstimates=data.frame(datEstimates,y.lm=y.lmTraining),
              datEstimatestest= data.frame(datEstimatestest,y.lm=y.lmTest),
              SD.ytrue2=SD.ytrue2, SD.ytrue3=SD.ytrue3, datVariableInfo= datVariableInfo ) }
  if ( is.null(datXtest) & LeaveOneOut.CV & addLinearModel)
  {out= list( datEstimates=data.frame(datEstimates,y.lm=y.lmTraining),
              datEstimates.LeaveOneOut.CV= data.frame(datEstimates.LeaveOneOut.CV,y.lm=y.lmLOO),
              SD.ytrue2=SD.ytrue2, SD.ytrue3=SD.ytrue3, datVariableInfo= datVariableInfo ) }
  if ( is.null(datXtest) & !LeaveOneOut.CV & 
addLinearModel)
  {out= list( datEstimates=data.frame(datEstimates,y.lm=y.lmTraining),
              SD.ytrue2=SD.ytrue2, SD.ytrue3=SD.ytrue3, datVariableInfo= datVariableInfo ) }
  out
} # end of function

#===================================================================================================
# WGCNA/R/useNThreads.R
#===================================================================================================

# Function to control the number of threads to use in threaded calculations.
.useNThreads = function(nThreads = 0)
{
  if (nThreads==0)
  {
    nt.env = Sys.getenv(.threadAllowVar, unset = NA);
    if (is.na(nt.env)) return(1);
    if (nt.env=="") return(1);
    if (nt.env=="ALL_PROCESSORS") return (.nProcessorsOnline());
    nt = suppressWarnings(as.numeric(nt.env));
    if (!is.finite(nt)) return(2);
    return(nt);
  } else
    return (nThreads);
}

.nProcessorsOnline = function()
{
  n = detectCores();
  if (!is.numeric(n)) n = 2;
  if (!is.finite(n)) n = 2;
  if (n<1) n = 2;
  n;
}

allowWGCNAThreads = function(nThreads = NULL)
{
  # Stop any clusters that may be still running
  disableWGCNAThreads()
  # Enable WGCNA threads
  if (is.null(nThreads)) nThreads = .nProcessorsOnline();
  if (!is.numeric(nThreads) || nThreads < 2)
    stop("nThreads must be numeric and at least 2.");
  if (nThreads > .nProcessorsOnline())
    printFlush(paste("Warning in allowWGCNAThreads: Requested number of threads is higher than number\n",
                     "of available processors (or cores). Using too many threads may degrade code",
                     "performance. It is recommended that the number of threads is no more than number\n",
                     "of available processors.\n"))
  printFlush(paste("Allowing multi-threading with up to", nThreads, "threads."));
  pars = list(nThreads);
  names(pars) = .threadAllowVar;
  do.call(Sys.setenv, pars);
  invisible(nThreads);
}

disableWGCNAThreads = function()
{
  Sys.unsetenv(.threadAllowVar);
  pars = list(1)
  names(pars) = .threadAllowVar
  do.call(Sys.setenv, pars)
  if (exists(".revoDoParCluster", where = ".GlobalEnv"))
  {
    stopCluster(get(".revoDoParCluster", pos = ".GlobalEnv"));
  }
  registerDoSEQ();
}

.checkAvailableMemory = function()
{
  size = 0;
  res = .C("checkAvailableMemoryForR", size = as.double(size), PACKAGE = "WGCNA")
  res$size;
}

# Function to calculate an appropriate blocksize
blockSize = function(matrixSize, rectangularBlocks = TRUE, maxMemoryAllocation = NULL,
                     overheadFactor = 3)
{
  if (is.null(maxMemoryAllocation))
  {
    maxAlloc = .checkAvailableMemory();
  } else {
    maxAlloc = maxMemoryAllocation/8;
  }
  maxAlloc = maxAlloc/overheadFactor;
  if (rectangularBlocks)
  {
    blockSz = floor(maxAlloc/matrixSize);
  } else
    blockSz = floor(sqrt(maxAlloc));
  return( min (matrixSize, blockSz) )
}

#======================================================================================================
#
# enableWGCNAThreads
#
#======================================================================================================

enableWGCNAThreads = function(nThreads = NULL)
{
  nCores = detectCores();
  if (is.null(nThreads))
  {
    if (nCores < 4) nThreads = nCores else nThreads = nCores - 1;
  }
  if (!is.numeric(nThreads) || nThreads < 2)
    stop("nThreads must be numeric and at least 2.")
  if (nThreads > nCores)
    printFlush(paste("Warning in enableWGCNAThreads: Requested number of threads is higher than number\n",
                     "of available processors (or cores). Using too many threads may degrade code",
                     "performance. It is recommended that the number of threads is no more than number\n",
                     "of available processors.\n"))
  printFlush(paste("Allowing parallel execution with up to", nThreads, "working processes."))
  pars = list(nThreads)
  names(pars) = .threadAllowVar
  do.call(Sys.setenv, pars)
  # Register a parallel backend for foreach
  registerDoParallel(nThreads);
  # Return the number of threads invisibly
  invisible(nThreads)
}

WGCNAnThreads = function()
{
  n = suppressWarnings(as.numeric(as.character(Sys.getenv(.threadAllowVar, unset = 1))));
  if (is.na(n)) n = 1;
  if (length(n)==0) n = 1;
  n;
}

#========================================================================================================
#
# allocateJobs
#
#========================================================================================================
# Facilitates multi-threading by producing an even allocation of jobs
# Works even when number of jobs is less than number of threads in which case some components of the
# returned allocation will have length 0.

allocateJobs = function(nTasks, nWorkers)
{
  if (is.na(nWorkers))
  {
    warning("In function allocateJobs: 'nWorkers' is NA. Will use 1 worker.");
    nWorkers = 1;
  }
  n1 = floor(nTasks/nWorkers);
  n2 = nTasks - nWorkers*n1;
  allocation = list();
  start = 1;
  for (t in 1:nWorkers)
  {
    end = start + n1 - 1 + as.numeric(t<=n2);
    if (start > end)
    {
      allocation[[t]] = numeric(0);
    } else
      allocation[[t]] = c(start:end);
    start = end+1;
  }
  allocation;
}

#===================================================================================================
# WGCNA/R/collapseRows.R
#===================================================================================================

.filterSimilarPS <- function(datOut, rowGroup, rowID, thresh=0.8){
  ## Collapse groups (i.e. probes) together using the following algorithm:
  # 1) If there is one id/group = keep
  # 2) If there are 2 ids/group = take the maximum mean expression, if their correlation is > thresh
  # 3) If there are 3+ ids/group = iteratively repeat (2) for the id with the highest
  #    correlation until all ids remaining have correlation < thresh for each group
  # datOut is an expression matrix with rows=ids (NOT group) and cols=samples
  # rowGroup and rowID are vectors of corresponding group and id names for the rows in datOut
  # (note: all ids in datOut only need to be a subset of these vectors, not necessarily identical)
  # thresh is the Pearson correlation threshold to combine probes of similar expression.
  names(rowGroup) = rowID
  ids = rownames(datOut);
  idsIn = ids; # For later
  group = rowGroup[ids]
  tGroup = table(group)
  twos = sort(names(tGroup)[tGroup==2])
  more = sort(names(tGroup)[tGroup>2])
  len = dim(datOut)[2]
  testTwoAndCombine <- function (datIn, datTmp, thresh){
    # Internal function for removing one of two genes if they have high enough correlation
    if (cor(as.numeric(datTmp[1,]),as.numeric(datTmp[2,]))>thresh){
      rMean = rowMeans(datTmp)
      omit = as.numeric(which(rMean==min(rMean)))
      datIn = datIn[rownames(datIn)!=rownames(datTmp)[omit],]
    }
    return(datIn)
  } # End internal function "testTwoAndCombine"
  for (g in twos){
    datTmp = datOut[ids[group==g],]
    datOut = testTwoAndCombine(datOut, datTmp, thresh)
  };
  write("Done combining genes with 2 probes!","")
  for (g in more){
    go = TRUE
    while(go){
      ids = rownames(datOut)
      group = rowGroup[ids]
      datTmp = datOut[ids[group==g],]
      corDat = cor(t(datTmp));
      if(length(datTmp)==len) { go=FALSE } else {
        diag(corDat)=-2
        if (max(corDat)1])
        missingData = rowSums(is.na(datET))
  # Omit all probes with at least omitPercent genes missing
  keep = missingData<(omitPercent*dim(datET)[2]/100);
  # Omit relevant genes and return results
  if (omitGroups)
    for (g in checkGenes){
      gn = (genes==g)
      keepGenes[gn] = (missingData[gn] == min(missingData[gn]))
    }
  keep = keep & 
keepGenes; return (keep); } # ----------------- Main Function ------------------- # collapseRows <- function(datET, rowGroup, rowID, method="MaxMean", connectivityBasedCollapsing=FALSE, methodFunction=NULL, connectivityPower=1, selectFewestMissing=TRUE, thresholdCombine=NA) { # datET = as.matrix(as.data.frame(datET)); methodAverage = FALSE if (method=="Average") methodAverage = TRUE # Required for later if (method!="function") methodFunction = NULL # Required for later if ( sum(rowGroup=="",na.rm=TRUE)>0 ){ warning(paste("rowGroup contains blanks. It is strongly recommended that you remove", "these rows before calling the function.\n", " But for your convenience, the collapseRow function will remove these rows")); rowGroup[rowGroup==""]=NA } # datET is a numeric matrix whose rows correspond to variables # e.g. probes of a microarray and whose columns to observations # e.g. microarrays if ( sum(is.na(rowGroup))>0 ){ warning(paste("The argument rowGroup contains missing data. It is strongly recommended\n", " that you remove these rows before calling the function. Or redefine rowGroup\n", " so that it has no missing data. But for convenience, we remove these data.")) } ## Test to make sure the variables are the right length. # if not, fix it if possible, or return 0 if not possible rowID = as.character(rowID) rowGroup = as.character(rowGroup) rnDat = rownames(datET) if (length(rowID)!=length(rowGroup)){ write("Error: rowGroup and rowID not the same length... exiting.","") return(0) } if (length(unique(rowID)) !=length(rowID) ){stop("rowID contains duplicate entries. Make sure that the argument rowID contains unique entries")} names(rowGroup) = rowID if ( sum(is.na(rowID))>0 ){warning("The argument rowID contains missing data. I recommend you choose non-missing, unique values for rowID, e.g. character strings.")} if ((is.null(rnDat))&(dim(datET)[1]==length(rowID))){ write("Warning: *datET* does not have row names. 
Assigning *rowID* as row names.","") rnDat <- rownames(datET) <- rowID } if (is.null(rnDat)){ write("Error: *datET* does not have row names and length of *rowID*...","") write("... is not the same as # rows in *datET*... exiting.","") return(0) } if (sum(is.element(rnDat,rowID))!=length(rowID)){ write("Warning: row names of input data and probes not identical...","") write("... Attempting to proceed anyway. Check results carefully.","") keepProbes = is.element(rowID, rownames(datET)) rowID = rowID[keepProbes] datET= datET[rowID,] rowGroup = rowGroup[rowID] } restRows = (rowGroup!="" & !is.na(rowGroup)) datET= datET[restRows,] rowGroup = rowGroup[restRows] rowID = rowID[restRows] rnDat = rnDat[restRows] ## For each group, select the row with the fewest missing values (if selectFewestMissing==TRUE) ## Also, remove all rows with more than 90% missing data datET_in = datET # This will be used as a reference later ## For each gene, select the gene with the fewest missing probes (if selectFewestMissing==TRUE) ## Also, remove all probes with more than 90% missing data keep = .selectFewestMissing(datET, rowID, rowGroup, selectFewestMissing) datET = datET[keep, ]; rowGroup = rowGroup[keep]; rowID = rowID[keep]; rnDat = rownames(datET) ## If 0 < thresholdCombine < 1, only combine ids into their corresponding group if their # correlation is greater than thresholdCombine. This parameter supercedes all remaining # parameters. 
if(!is.na(thresholdCombine)){
  if(!is.numeric(thresholdCombine)){
    write("thresholdCombine is not between -1 and 1 and is therefore being treated as NA","")
  } else if((thresholdCombine<(-1))|(thresholdCombine>1)){
    write("thresholdCombine is not between -1 and 1 and is therefore being treated as NA","")
  } else {
    output = .filterSimilarPS(datET, rowGroup, rowID, thresholdCombine)
    return(output)
  }
}

## If method="function", use the function "methodFunction" as a way of combining genes.
## Alternatively, use one of the built-in functions.
## Note: methodFunction must be a function that takes a vector of numbers as input and
## outputs a single number. This function will return(0) or crash otherwise.
recMethods = c("function","ME","MaxMean","maxRowVariance","MinMean","absMinMean","absMaxMean","Average");
imethod = pmatch(method, recMethods);
if (is.na(imethod)) {
  printFlush("Error: entered method is not a legal option. Recognized options are");
  printFlush("  *MaxMean*, *maxRowVariance*, *MinMean*, *absMaxMean*, *absMinMean*, *ME*,");
  printFlush("  *Average* or *function* for a user-defined function.")
  return(0)
}
if (imethod > 2) method = spaste(".", method);
if (method=="function") {
  method = methodFunction
  if((!is.function(methodFunction))&(!is.null(methodFunction))){
    write("Error: *methodFunction* must be a function... please read the help file","")
    return(0)
  }
}
if (!is.function(method))
  if (method!="ME") method = get(method, mode = "function")

## Format the variables for use by this function
rowID[is.na(rowID)] = rowGroup[is.na(rowID)] # Use group if row is missing
rownames(datET)[is.na(rnDat)] = rowGroup[is.na(rnDat)]
remove = (is.na(rowID))|(is.na(rowGroup)) # Omit if both gene and probe are missing
rowID = rowID[!remove];
rowGroup = rowGroup[!remove];
names(rowGroup) = rowID
rowID = sort(intersect(rnDat,rowID))
if (length(rowID)<=1){
  write("Error: none of the *datET* rownames are in *rowID*...","")
  write("... please add rownames and try again... exiting.","")
  return(0)
}
rowGroup = rowGroup[rowID]
datET = as.matrix(datET)
datET = datET[rowID,]
probes = rownames(datET)
genes = rowGroup[probes]
tGenes = table(genes)
datETOut = matrix(0,nrow=length(tGenes),ncol=ncol(datET))
colnames(datETOut) = colnames(datET)
rownames(datETOut) = sort(names(tGenes))
rowsOut = rownames(datETOut)
names(rowsOut) = rowsOut

## If !is.null(connectivityPower), default to the connectivity method with power = connectivityPower.
## Collapse genes with multiple probe sets together using the following algorithm:
## 1) If there is one ps/g = keep
## 2) If there are 2 ps/g = (use "method" or "methodFunction")
## 3) If there are 3+ ps/g = take the max connectivity
## Otherwise, use "method" if there are 3+ ps/g as well.
if(!is.null(connectivityPower)){
  if(!is.numeric(connectivityPower)){
    write("Error: if entered, connectivityPower must be numeric... exiting.","")
    return(0)
  }
  if(connectivityPower<=0){
    write("Warning: connectivityPower must be positive. Defaulting to a power of 2.","")
    connectivityPower = 2
  }
  if(dim(datET)[2]<=5){
    write("Warning: 5 or fewer samples, this method of probe collapse is unreliable...","")
    write("...Running anyway, but we suggest trying another method (for example, *mean*).","")
  }
}
whichTestFn <- function(x){
  d = datETOut[g,]
  test = (!is.na(x))&(!is.na(d))
  return(sum(x[test]==d[test]))
}
# If method=ME, this function acts as the function moduleEigengenes from the WGCNA library
if (!is.function(method)) if (method=="ME"){
  datETOut = t(moduleEigengenes(t(datET),genes)$eigengenes)
  colnames(datETOut) = colnames(datET)
  rownames(datETOut) = substr(rownames(datETOut),3,nchar(rownames(datETOut)))
  out2 = cbind(rownames(datETOut),paste("ME",rownames(datETOut),sep="."))
  colnames(out2) = c("group","selectedRowID")
  out3 = is.element(rownames(datET_in),"@#$%^&*")
  names(out3) = rownames(datET_in)
  return(list(datETcollapsed = datETOut, group2row = out2, selectedRow = out3))
}
# Actually run the collapse now!!!
if (!is.null(methodFunction))
  write("Comment: make sure methodFunction takes a matrix as input.","")
ones = sort(names(tGenes)[tGenes==1])
if(connectivityBasedCollapsing){
  twos = sort(names(tGenes)[tGenes==2]) # use "method" and connectivity
  more = sort(names(tGenes)[tGenes>2])
} else {
  twos = sort(names(tGenes)[tGenes>1]) # only use "method"
  more = character(0)
}
for (g in ones){
  datETOut[g,] = as.numeric(datET[probes[genes==g],])
  rowsOut[g] = probes[genes==g]
}
count = 0;
for (g in twos){
  datETTmp = datET[probes[genes==g],]
  datETOut[g,] = as.numeric(method(datETTmp))
  whichTest = apply(datETTmp,1,whichTestFn)
  rowsOut[g] = (names(whichTest)[whichTest==max(whichTest)])[1]
  count = count + 1;
  if (count %% 1000 == 0) collectGarbage();
}
for (g in more){
  datETTmp = datET[probes[genes==g],]
  adj = (0.5+0.5*cor(t(datETTmp),use="p"))^connectivityPower
  datETOut[g,] = as.numeric(datETTmp[which.max(rowSums(adj,na.rm=TRUE)),])
  whichTest = apply(datETTmp,1,whichTestFn)
  rowsOut[g] = (names(whichTest)[whichTest==max(whichTest)])[1]
  count = count + 1;
  if (count %% 1000 == 0) collectGarbage();
}
if (!is.null(methodFunction))
  write("...Ignore previous comment. Function completed properly!","")
# Retrieve the information about which probes were saved, and include that information
# as part of the output. If method="function" or "Average", output placeholder values.
if (!is.null(methodFunction)) {
  out2 = cbind(rownames(datETOut),paste("function",rownames(datETOut),sep="."))
  colnames(out2) = c("group","selectedRowID")
  out3 = is.element(rownames(datET_in),"@#$%^&*")
  names(out3) = rownames(datET_in)
  return(list(datETcollapsed = datETOut, group2row = out2, selectedRow = out3))
}
if (methodAverage) {
  out2 = cbind(rownames(datETOut),paste("Average",rownames(datETOut),sep="."))
  colnames(out2) = c("group","selectedRowID")
  out3 = is.element(rownames(datET_in),"@#$%^&*")
  names(out3) = rownames(datET_in)
  return(list(datETcollapsed = datETOut, group2row = out2, selectedRow = out3))
}
out2 = cbind(rownames(datETOut),rowsOut)
colnames(out2) = c("group","selectedRowID")
out3 = is.element(rownames(datET_in),rowsOut)
names(out3) = rownames(datET_in)
output = list(datETcollapsed = datETOut, group2row = out2, selectedRow = out3)
return(output)
} # End of function

# File: WGCNA/R/accuracyMeasures.R
# Accuracy measures, modified from the WGCNA version.

# Helper function: contingency table of 2 variables that will also include rows/columns for levels that do
.table2.allLevels = function(x, y, levels.x = sort(unique(x)), levels.y = sort(unique(y)), setNames = FALSE) { nx = length(levels.x); ny = length(levels.y); t = table(x, y); out = matrix(0, nx, ny); if (setNames) { rownames(out) = levels.x; colnames(out) = levels.y; } out[ match(rownames(t), levels.x), match(colnames(t), levels.y) ] = t; out; } # accuracy measures accuracyMeasures = function(predicted, observed = NULL, type = c("auto", "binary", "quantitative"), levels = if (isTRUE(all.equal(dim(predicted), c(2,2)))) colnames(predicted) else if (is.factor(predicted)) sort(unique(c(as.character(predicted), as.character(observed)))) else sort(unique(c(observed, predicted))), negativeLevel = levels[2], positiveLevel = levels[1] ) { type = match.arg(type); if (type=="auto") { if (!is.null(dim(predicted))) { if (isTRUE(all.equal(dim(predicted), c(2,2)))) { type = "binary" } else stop("If supplying a matrix in 'predicted', it must be a 2x2 contingency table."); } else { if (is.null(observed)) stop("When 'predicted' is a vector, 'observed' must be given and have the same length as 'predicted'."); if (length(levels)==2) { type = "binary" } else type = "quantitative" } } if (type=="binary") { if (is.null(dim(predicted))) { if (is.null(observed)) stop("When 'predicted' is a vector, 'observed' must be given and have the same length as 'predicted'."); if ( length(predicted)!=length(observed) ) stop("When both 'predicted' and 'observed' are given, they must be vectors of the same length."); if (length(levels)!=2) stop("'levels' must contain 2 entries (the possible values of the binary variables\n", " 'predicted' and 'observed')."); tab = .table2.allLevels(predicted, observed, levels.x = levels, levels.y = levels, setNames = TRUE); } else { tab = predicted; if (is.null(colnames(tab)) | is.null(rownames(tab))) stop("When 'predicted' is a contingency table, it must have valid colnames and rownames."); } if ( ncol(tab) !=2 | nrow(tab) !=2 ) stop("The input table must be a 2x2 
table. ") if (negativeLevel==positiveLevel) stop("'negativeLevel' and 'positiveLevel' cannot be the same."); neg = match(negativeLevel, colnames(tab)); if (is.na(neg)) stop(spaste("Cannot find the negative level ", negativeLevel, " among the colnames of the contingency table.\n Please check the input and try again.")) pos = match(positiveLevel, colnames(tab)); if (is.na(pos)) stop(spaste("Cannot find the positive level ", positiveLevel, " among the colnames of the contingency table.\n Please check the input and try again.")) if ( sum(is.na(tab) ) ) warning("Missing data should not be present in input.\n", " Suggestion: check whether NA should be coded as 0.") is.wholenumber =function(x, tol = .Machine$double.eps^0.5) { abs(x - round(x)) < tol } if ( sum( !is.wholenumber(tab), na.rm=T ) >0) warning("STRONG WARNING: The input table contains non-integers, which does not make sense.") if ( sum( tab<0, na.rm=T ) >0) stop("The input table cannot contain negative numbers."); num1=sum(diag(tab),na.rm=T) denom1=sum(tab,na.rm=T) if (denom1==0) warning("The input table has zero observations (sum of all cells is zero).") TP=tab[pos, pos] FP=tab[pos, neg] FN=tab[neg, pos] TN=tab[neg, neg] error.rate= ifelse(denom1==0,NA, 1-num1/denom1) Accuracy= ifelse(denom1==0,NA, num1/denom1 ) Specificity= ifelse(FP + TN==0, NA, TN / (FP + TN) ) Sensitivity= ifelse(TP + FN==0, NA, TP / (TP + FN) ) NegativePredictiveValue= ifelse(FN + TN==0,NA, TN / (FN + TN) ) PositivePredictiveValue=ifelse(TP + FP==0,NA, TP / (TP + FP) ) FalsePositiveRate = 1 - Specificity FalseNegativeRate = 1 - Sensitivity Power = Sensitivity LikelihoodRatioPositive = ifelse(1 - Specificity==0,NA, Sensitivity / (1 - Specificity) ) LikelihoodRatioNegative = ifelse(Specificity==0, NA, (1 - Sensitivity) / Specificity ) NaiveErrorRate = ifelse(denom1==0,NA, min(c(tab[pos, pos]+ tab[neg, pos] , tab[pos, neg]+ tab[neg, neg] ))/denom1 ) out=data.frame( Measure= c("Error.Rate","Accuracy", 
"Specificity","Sensitivity","NegativePredictiveValue",
             "PositivePredictiveValue","FalsePositiveRate","FalseNegativeRate","Power",
             "LikelihoodRatioPositive","LikelihoodRatioNegative",
             "NaiveErrorRate", "NegativeLevel", "PositiveLevel"),
    Value = c(error.rate,Accuracy, Specificity,Sensitivity,NegativePredictiveValue,
             PositivePredictiveValue,FalsePositiveRate,FalseNegativeRate,Power,
             LikelihoodRatioPositive,LikelihoodRatioNegative,NaiveErrorRate,
             negativeLevel, positiveLevel));
} else if (type=="quantitative") {
  if (!is.null(dim(predicted)))
    stop("When 'type' is \"quantitative\", 'predicted' cannot be a 2-dimensional matrix.");
  if (length(predicted)!=length(observed))
    stop("'predicted' and 'observed' must be vectors of the same length.");
  cr = cor(predicted, observed, use = 'p');
  out = data.frame(
    Measure = c("Cor", "R.squared", "MeanSquareError", "MedianAbsoluteError", "Cindex"),
    Value = c(cr, cr^2,
              mean( (predicted-observed)^2, na.rm=TRUE),
              # Median absolute error; the original code squared the differences here,
              # which computes a median squared error despite the measure's name.
              median( abs(predicted-observed), na.rm=TRUE),
              rcorr.cens(predicted,observed,outx=TRUE)[[1]]));
}
out;
}

# File: WGCNA/R/consensusCalculations.R
consensusCalculation = function(
  # a list or multiData structure of either numeric vectors (possibly arrays) or blockwiseAdj objects
  individualData,
  consensusOptions,
  useBlocks = NULL,
  randomSeed = NULL,
  saveCalibratedIndividualData = FALSE,
  calibratedIndividualDataFilePattern = "calibratedIndividualData-%a-Set%s-Block%b.RData",
  # Return options: the data can be either saved or returned but not both.
  # If NULL, data will be saved only if input data were blockwise data saved on disk rather than held in memory.
  saveConsensusData = NULL,
  consensusDataFileNames = "consensusData-%a-Block%b.RData",
  getCalibrationSamples = FALSE,
  # Internal handling of data
  useDiskCache = NULL,
  chunkSize = NULL,
  cacheDir = ".",
  cacheBase = ".blockConsModsCache",
  # Behaviour
  collectGarbage = FALSE,
  verbose = 1, indent = 0)
{
  nSets = length(individualData);
  if (!
isMultiData(individualData)) individualData = list2multiData(individualData); setNames = names(individualData); if (is.null(setNames)) setNames = rep("", nSets); blockwise = inherits(individualData[[1]]$data, "BlockwiseData"); if (!blockwise) { blockDimnames = .mtd.checkDimConsistencyAndGetDimnames(individualData); blockLengths = length(individualData[[1]]$data); blockAttributes = list(attributes(individualData[[1]]$data)) # in this list each element corresponds to 1 block metaData = list(); } else { blockLengths = BD.blockLengths(individualData[[1]]$data); blockAttributes = individualData[[1]]$data$attributes; # List of attributes of all blocks from the 1st indiv. data metaData = BD.getMetaData(individualData[[1]]$data, blocks = 1); } nBlocks = length(blockLengths); if (is.null(saveConsensusData)) saveConsensusData = if (blockwise) individualData[[1]]$data$external else FALSE spaces = indentSpaces(indent); if (!is.null(randomSeed)) { if (exists(".Random.seed")) { savedSeed = .Random.seed on.exit(.Random.seed <<-savedSeed); } set.seed(randomSeed); } setWeights = consensusOptions$setWeights; if (is.null(setWeights)) setWeights = rep(1, nSets); if (length(setWeights)!=nSets) stop("Length of 'setWeights' must equal the number of sets."); if (any(!is.finite(setWeights))) stop("setWeights must all be finite."); setWeightMat = as.matrix(setWeights)/sum(setWeights) if (is.null(chunkSize)) chunkSize = as.integer(.largestBlockSize/(2*nSets)) if (is.null(useDiskCache)) useDiskCache = .useDiskCache(individualData, chunkSize = chunkSize); # Initialize various variables if (getCalibrationSamples) { if (!consensusOptions$sampleForCalibration) stop(paste("Incompatible input options: calibrationSamples can only be returned", "if sampleForCalibration is TRUE.")); calibrationSamples = list(); } blockLevels = 1:nBlocks; if (is.null(useBlocks)) useBlocks = blockLevels; useBlockIndex = match(useBlocks, blockLevels); if (!all(useBlocks %in% blockLevels)) stop("All entries of 'useBlocks' 
must be valid block levels.");
if (any(duplicated(useBlocks))) stop("Entries of 'useBlocks' must be unique.");
nUseBlocks = length(useBlocks);
if (nUseBlocks==0) stop("'useBlocks' cannot be both non-NULL and empty.");
consensus.out = list();
consensusFiles = rep("", nUseBlocks);
originCount = rep(0, nSets);
calibratedIndividualDataFileNames = NULL;
if (saveCalibratedIndividualData)
{
  calibratedIndividualDataFileNames = matrix("", nSets, nBlocks);
  for (set in 1:nSets) for (b in 1:nBlocks)
    calibratedIndividualDataFileNames[set, b] = .processFileName(calibratedIndividualDataFilePattern,
          setNumber = set, setNames = setNames, blockNumber = b,
          analysisName = consensusOptions$analysisName);
}
if (collectGarbage) gc();
calibratedIndividualData.saved = vector(mode = "list", length = nSets);
consensusData = NULL;
dataFiles = character(nUseBlocks);
# Here's where the analysis starts
for (blockIndex in 1:nUseBlocks)
{
  block = useBlockIndex[blockIndex];
  if (verbose>1) printFlush(spaste(spaces, "..Working on block ", block, "."));
  scaleQuant = rep(1, nSets);
  scalePowers = rep(1, nSets);
  useDiskCache1 = useDiskCache && nSets > 1;  ### No need to use disk cache when there is only 1 set.
  # Set up file names or memory space to hold the set Data
  if (useDiskCache1)
  {
    nChunks = ceiling(blockLengths[block]/chunkSize);
    chunkFileNames = array("", dim = c(nChunks, nSets));
    on.exit(.checkAndDelete(chunkFileNames), add = TRUE);
  } else nChunks = 1;
  if (nChunks==1) useDiskCache1 = FALSE;
  if (!useDiskCache1)
  {
    # Note: calibratedData will contain the scaled set Data matrices.
calibratedData = array(0, dim = c(blockLengths[block], nSets)); } # sample entry indices from the distance structure for Data scaling, if requested if (consensusOptions$calibration=="single quantile" && consensusOptions$sampleForCalibration) { qx = min(consensusOptions$calibrationQuantile, 1-consensusOptions$calibrationQuantile); nScGenes = min(consensusOptions$sampleForCalibrationFactor * 1/qx, blockLengths[block]); scaleSample = sample(blockLengths[block], nScGenes); if (getCalibrationSamples) calibrationSamples[[blockIndex]] = list(sampleIndex = scaleSample, TOMSamples = matrix(NA, nScGenes, nSets)); } if (consensusOptions$calibration %in% c("single quantile", "none")) { for (set in 1:nSets) { if (verbose>2) printFlush(spaste(spaces, "....Working on set ", set, " (", setNames[set], ")")) # We need to drop dimensions here but we may need them later. Keep that in mind. tomDS = as.numeric(.getBDorPlainData(individualData[[set]]$data, block, simplify = TRUE)); if (consensusOptions$calibration=="single quantile") { # Scale Data so that calibrationQuantile agree in each set if (consensusOptions$sampleForCalibration) { if (getCalibrationSamples) { calibrationSamples[[blockIndex]]$dataSamples[, set] = tomDS[scaleSample]; scaleQuant[set] = quantile(calibrationSamples[[blockIndex]]$dataSamples[, set], probs = consensusOptions$calibrationQuantile, type = 8); } else { scaleQuant[set] = quantile(tomDS[scaleSample], probs = consensusOptions$calibrationQuantile, type = 8); } } else scaleQuant[set] = quantile(x = tomDS, probs = consensusOptions$calibrationQuantile, type = 8); if (set>1) { scalePowers[set] = log(scaleQuant[1])/log(scaleQuant[set]); tomDS = tomDS^scalePowers[set]; } if (saveCalibratedIndividualData) calibratedIndividualData.saved[[set]] = addBlockToBlockwiseData( calibratedIndividualData.saved[[set]], .setAttrFromList(tomDS, blockAttributes[[blockIndex]]), external = TRUE, recordAttributes = TRUE, metaData = metaData, blockFile = 
calibratedIndividualDataFileNames[set, block]) } # Save the calculated Data either to disk in chunks or to memory. if (useDiskCache1) { if (verbose > 3) printFlush(paste(spaces, "......saving Data similarity to disk cache..")); sc = .saveChunks(tomDS, chunkSize, cacheBase, cacheDir = cacheDir); chunkFileNames[, set] = sc$files; chunkLengths = sc$chunkLengths; } else { calibratedData[, set] = tomDS; } } if (collectGarbage) gc(); } else if (consensusOptions$calibration=="full quantile") { # Step 1: load each data set, get order, split Data into chunks according to order, and save. if (verbose>1) printFlush(spaste(spaces, "..working on quantile normalization")) if (useDiskCache1) { orderFiles = rep("", nSets); on.exit(.checkAndDelete(orderFiles),add = TRUE); } for (set in 1:nSets) { if (verbose>2) printFlush(spaste(spaces, "....Working on set ", set, " (", setNames[set], ")")) tomDS = as.numeric(.getBDorPlainData(individualData[[set]]$data, block, simplify = TRUE)); if (useDiskCache1) { # Order Data (this may take a long time...) 
if (verbose > 3) printFlush(spaste(spaces, "......ordering Data")); time = system.time({order1 = .qorder(tomDS)}); if (verbose > 1) { printFlush("Time to order Data:"); print(time); } # save the order orderFiles[set] = tempfile(pattern = spaste(".orderForSet", set), tmpdir = cacheDir); if (verbose > 3) printFlush(spaste(spaces, "......saving order and ordered Data")); save(order1, file = orderFiles[set]); # Save ordered tomDS into chunks tomDS.ordered = tomDS[order1]; sc = .saveChunks(tomDS.ordered, chunkSize, cacheBase, cacheDir = cacheDir); chunkFileNames[, set] = sc$files; chunkLengths = sc$chunkLengths; } else { calibratedData[, set] = tomDS } } if (useDiskCache1) { # Step 2: Load chunks one by one and quantile normalize if (verbose > 2) printFlush(spaste(spaces, "....quantile normalizing chunks")); for (c in 1:nChunks) { if (verbose > 3) printFlush(spaste(spaces, "......QN for chunk ", c, " of ", nChunks)); chunkData = matrix(NA, chunkLengths[c], nSets); for (set in 1:nSets) chunkData[, set] = .loadObject(chunkFileNames[c, set]); time = system.time({ chunk.norm = normalize.quantiles(chunkData, copy = FALSE);}); if (verbose > 1) { printFlush("Time to QN chunk:"); print(time); } # Save quantile normalized chunks for (set in 1:nSets) { temp = chunk.norm[, set]; save(temp, file = chunkFileNames[c, set]); } } if (verbose > 2) printFlush(spaste(spaces, "....putting together full QN'ed Data")); # Put together full Data for (set in 1:nSets) { load(orderFiles[set]); start = 1; for (c in 1:nChunks) { end = start + chunkLengths[c] - 1; tomDS[order1[start:end]] = .loadObject(chunkFileNames[c, set], size = chunkLengths[c]); start = start + chunkLengths[c]; } if (saveCalibratedIndividualData) calibratedIndividualData.saved[[set]] = addBlockToBlockwiseData( calibratedIndividualData.saved[[set]], .setAttrFromList(tomDS, blockAttributes[[blockIndex]]), external = TRUE, recordAttributes = TRUE, metaData = metaData, blockFile = calibratedIndividualDataFileNames[set, 
blockIndex]); .saveChunks(tomDS, chunkSize, fileNames = chunkFileNames[, set]); unlink(orderFiles[set]); } } else { # If disk cache is not being used, simply call normalize.quantiles on the full set. if (nSets > 1) calibratedData = normalize.quantiles(calibratedData); if (saveCalibratedIndividualData) for (set in 1:nSets) { calibratedIndividualData.saved[[set]] = addBlockToBlockwiseData( calibratedIndividualData.saved[[set]], .setAttrFromList(calibratedData[, set], blockAttributes[[blockIndex]]), external = TRUE, recordAttributes = TRUE, metaData = metaData, blockFile = calibratedIndividualDataFileNames[set, blockIndex]); } } } else stop("Unrecognized value of 'calibration' in consensusOptions: ", consensusOptions$calibration); # Calculate consensus if (verbose > 2) printFlush(paste(spaces, "....Calculating consensus")); # create an empty consTomDS distance structure. consTomDS = numeric(blockLengths[block]); if (useDiskCache1) { start = 1; for (chunk in 1:nChunks) { if (verbose > 3) printFlush(paste(spaces, "......working on chunk", chunk)); end = start + chunkLengths[chunk] - 1; setChunks = array(0, dim = c(chunkLengths[chunk], nSets)); for (set in 1:nSets) { load(file = chunkFileNames[chunk, set]); setChunks[, set] = temp; file.remove(chunkFileNames[chunk, set]); } tmp = .consensusCalculation.base(setChunks, useMean = consensusOptions$useMean, setWeightMat = setWeightMat, consensusQuantile = consensusOptions$consensusQuantile); consTomDS[start:end] = tmp$consensus; if (!is.null(tmp$originCount)) { countIndex = as.numeric(names(tmp$originCount)); originCount[countIndex] = originCount[countIndex] + tmp$originCount; } start = end + 1; } } else { tmp = .consensusCalculation.base(calibratedData, useMean = consensusOptions$useMean, setWeightMat = setWeightMat, consensusQuantile = consensusOptions$consensusQuantile); consTomDS[] = tmp$consensus; if (!is.null(tmp$originCount)) { countIndex = as.numeric(names(tmp$originCount)); originCount[countIndex] = 
originCount[countIndex] + tmp$originCount;
    }
  }
  # If requested, suppress negative values of output
  if (consensusOptions$suppressNegativeResults)
    consTomDS[ consTomDS < 0 ] = 0;
  # Save the consensus Data if requested
  if (saveConsensusData)
  {
    if (!grepl("%b", consensusDataFileNames))
      stop(paste("File name for consensus data must contain the tag %b somewhere in the file name -\n",
                 " - this tag will be replaced by the block number. "));
    dataFiles[blockIndex] = .substituteTags(consensusDataFileNames, c("%b", "%a"),
                                            c(block, consensusOptions$analysisName[1]));
  }
  consensusData = addBlockToBlockwiseData(consensusData,
                    .setAttrFromList(consTomDS, blockAttributes[[blockIndex]]),
                    external = saveConsensusData,
                    recordAttributes = TRUE,
                    metaData = metaData,
                    blockFile = if (saveConsensusData) dataFiles[blockIndex] else NULL)
  if (collectGarbage) gc();
}
list(
  # blockwiseData
  consensusData = consensusData,
  # int
  nSets = nSets,
  # Logical
  saveCalibratedIndividualData = saveCalibratedIndividualData,
  # List of blockwise data of length nSets
  calibratedIndividualData = calibratedIndividualData.saved,
  # List with one component per block
  calibrationSamples = if (getCalibrationSamples) calibrationSamples else NULL,
  # Numeric vector with nSets components
  originCount = originCount,
  consensusOptions = consensusOptions
)
}

#==========================================================================================================
#
# Hierarchical consensus calculation
#
#==========================================================================================================

# hierarchical consensus tree: a list with the following components:
# inputs: either an atomic character vector whose entries match names of individualData, or a list in
#   which each component can either be a single character string giving a name in individualData, or
#   another hierarchical consensus tree.
# consensusOptions: a list of class ConsensusOptions # analysisName: optional, analysis name used for naming files when saving to disk. # Here individualData is a list or multiData in which every component is either a blockwiseData instance or # a numeric object (matrix, vector etc). Function consensusCalculation handles both. hierarchicalConsensusCalculation = function( individualData, consensusTree, level = 1, useBlocks = NULL, randomSeed = NULL, saveCalibratedIndividualData = FALSE, calibratedIndividualDataFilePattern = "calibratedIndividualData-%a-Set%s-Block%b.RData", # Return options: the data can be either saved or returned but not both. saveConsensusData = TRUE, consensusDataFileNames = "consensusData-%a-Block%b.RData", getCalibrationSamples= FALSE, # Return the intermediate results as well? keepIntermediateResults = FALSE, # Internal handling of data useDiskCache = NULL, chunkSize = NULL, cacheDir = ".", cacheBase = ".blockConsModsCache", # Behaviour collectGarbage = FALSE, verbose = 1, indent = 0) { spaces = indentSpaces(indent); individualNames = names(individualData); if (is.null(individualNames)) stop("'individualData' must be a named list."); if (!isMultiData(individualData)) individualData = list2multiData(individualData); if (!"inputs" %in% names(consensusTree)) stop("'consensusTree' must contain component 'inputs'."); if (!"consensusOptions" %in% names(consensusTree)) stop("'consensusTree' must contain component 'consensusOptions'."); if (!is.null(randomSeed)) { if (exists(".Random.seed")) { savedSeed = .Random.seed on.exit(.Random.seed <<-savedSeed); } set.seed(randomSeed); } # Set names for consensusTree$inputs so that the names are informative. 
if (is.null(names(consensusTree$inputs))) { names(consensusTree$inputs) = spaste("Level.", level, ".Input.", 1:length(consensusTree$inputs)); validInputNames = FALSE; } else validInputNames = TRUE; isChar = sapply(consensusTree$inputs, is.character); names(consensusTree$inputs)[isChar] = consensusTree$inputs[isChar]; if (!is.null(consensusTree$analysisName)) consensusTree$consensusOptions$analysisName = consensusTree$analysisName; # Recursive step if necessary if (verbose > 0) printFlush(spaste(spaces, "------------------------------------------------------------------\n", spaces, " Working on ", consensusTree$consensusOptions$analysisName, "\n", spaces, "------------------------------------------------------------------")); names(consensusTree$inputs) = make.unique(make.names(names(consensusTree$inputs))); inputs0 = mtd.mapply(function(inp1, name) { if (is.character(inp1)) { if (!inp1 %in% names(individualData)) stop("Element '", inp1, "' is not among names of 'individualData'."); inp1; } else { if ("analysisName" %in% names(inp1)) name1 = inp1$analysisName else name1 = name; inp1$consensusOptions$analysisName = name1; hierarchicalConsensusCalculation(individualData, inp1, useBlocks = useBlocks, level = level + 1, randomSeed = NULL, saveCalibratedIndividualData = saveCalibratedIndividualData, calibratedIndividualDataFilePattern =calibratedIndividualDataFilePattern, saveConsensusData = saveConsensusData, consensusDataFileNames = consensusDataFileNames, getCalibrationSamples = getCalibrationSamples, keepIntermediateResults = keepIntermediateResults, useDiskCache = useDiskCache, chunkSize = chunkSize, cacheDir = cacheDir, cacheBase = cacheBase, collectGarbage = collectGarbage, verbose = verbose -2, indent = indent + 2); } }, consensusTree$inputs, names(consensusTree$inputs)); names(inputs0) = names(consensusTree$inputs) inputData = mtd.apply(inputs0, function(inp1) { if (is.character(inp1)) { individualData[[inp1]]$data } else inp1$consensusData; }); 
inputIsIntermediate = !sapply(consensusTree$inputs, is.character); # Need to check that all inputData have the same format. In particular, some could be plain numeric data and # some could be BlockwiseData. nInputs1 = length(inputData); isBlockwise = mtd.apply(inputData, inherits, "BlockwiseData", mdaSimplify = TRUE); if (any(!isBlockwise)) for (i in which(!isBlockwise)) inputData[[i]]$data = newBlockwiseData(list(inputData[[i]]$data), external = FALSE) names(inputData) = names(consensusTree$inputs) # Calculate the consensus if (verbose > 0) printFlush(spaste(spaces, "..Final consensus calculation..")); consensus = consensusCalculation( individualData = inputData, consensusOptions = consensusTree$consensusOptions, randomSeed = NULL, saveCalibratedIndividualData = saveCalibratedIndividualData, calibratedIndividualDataFilePattern =calibratedIndividualDataFilePattern, saveConsensusData = saveConsensusData, consensusDataFileNames = consensusDataFileNames, getCalibrationSamples = getCalibrationSamples, useDiskCache = useDiskCache, chunkSize = chunkSize, cacheDir = cacheDir, cacheBase = cacheBase, collectGarbage = collectGarbage, verbose = verbose-1, indent = indent+1); if (saveConsensusData && !keepIntermediateResults && any(inputIsIntermediate)) mtd.apply(inputData[inputIsIntermediate], BD.checkAndDeleteFiles); out = c(consensus, if (keepIntermediateResults) list(inputs = inputs0) else NULL); out; } #========================================================================================================== # # Simple hierarchical consensus calculation from numeric data, with minimum checking and no calibration. # #========================================================================================================== # Simpler version of consensus calculation, suitable for small data where calibration is not # necessary. 
simpleConsensusCalculation = function(
  # multiData or list of numeric vectors
  individualData,
  consensusOptions,
  verbose = 1, indent = 0)
{
  nSets = length(individualData);
  if (isMultiData(individualData)) individualData = multiData2list(individualData);
  if (consensusOptions$useMean)
  {
    setWeights = consensusOptions$setWeights;
    if (is.null(setWeights)) setWeights = rep(1, nSets);
    if (length(setWeights)!=nSets)
      stop("Length of 'setWeights' must equal the number of sets.");
  } else setWeights = NULL;
  out = .consensusCalculation.base.FromList(individualData, useMean = consensusOptions$useMean,
                  setWeights = setWeights,
                  consensusQuantile = consensusOptions$consensusQuantile)$consensus;
  if (consensusOptions$suppressNegativeResults)
    out[out<0] = 0;
  out;
}

# Simple hierarchical consensus
simpleHierarchicalConsensusCalculation = function(
  # multiData or list of numeric vectors
  individualData,
  consensusTree,
  level = 1)
{
  individualNames = names(individualData);
  if (is.null(individualNames))
    stop("'individualData' must be named.");
  if (is.null(names(consensusTree$inputs)))
    names(consensusTree$inputs) = spaste("Level.", level, ".Input.", 1:length(consensusTree$inputs));
  if (isMultiData(individualData)) individualData = multiData2list(individualData);
  isChar = sapply(consensusTree$inputs, is.character);
  names(consensusTree$inputs)[isChar] = consensusTree$inputs[isChar];
  # Recursive step if necessary
  names(consensusTree$inputs) = make.unique(make.names(names(consensusTree$inputs)));
  inputData = mapply(function(inp1, name)
  {
    if (is.character(inp1))
    {
      if (!inp1 %in% names(individualData))
        stop("Element '", inp1, "' is not among names of 'individualData'.");
      individualData[[inp1]];
    } else {
      if ("analysisName" %in% names(inp1)) name1 = inp1$analysisName else name1 = name;
      inp1$consensusOptions$analysisName = name1;
      simpleHierarchicalConsensusCalculation(individualData, inp1, level = level + 1)
    }
  }, consensusTree$inputs, names(consensusTree$inputs), SIMPLIFY = FALSE);
  # Calculate the consensus
  simpleConsensusCalculation(
    individualData = inputData,
    consensusOptions = consensusTree$consensusOptions)
}

# File: WGCNA/R/Functions.R

# Categories of functions:
# . network construction (including connectivity calculation)
# . module detection
# . gene screening
# . data simulation
# . general statistical functions
# . visualization

#-----------------------------------------------------------------------------------------------
#
# Overall options and settings for the package
#
#-----------------------------------------------------------------------------------------------

.moduleColorOptions = list(MEprefix = "ME")

moduleColor.getMEprefix = function()
{
  .moduleColorOptions$MEprefix;
}

# ===================================================
# The function moduleEigengenes finds the first principal component (eigengene) in each
# module defined by the colors of the input vector "colors".
# The theoretical underpinnings are described in Horvath, Dong, Yip (2005)
# http://www.genetics.ucla.edu/labs/horvath/ModuleConformity/
# This requires the R library impute

moduleEigengenes = function(expr, colors, impute = TRUE, nPC = 1, align = "along average",
                            excludeGrey = FALSE, grey = if (is.numeric(colors)) 0 else "grey",
                            subHubs = TRUE, trapErrors = FALSE, returnValidOnly = trapErrors,
                            softPower = 6, scale = TRUE, verbose = 0, indent = 0)
{
  spaces = indentSpaces(indent);
  if (verbose==1)
    printFlush(paste(spaces, "moduleEigengenes: Calculating", nlevels(as.factor(colors)),
                     "module eigengenes in given set."));
  if (is.null(expr))
  {
    stop("moduleEigengenes: Error: expr is NULL. ");
  }
  if (is.null(colors))
  {
    stop("moduleEigengenes: Error: colors is NULL.
"); }
  if (is.null(dim(expr)) || length(dim(expr))!=2)
    stop("moduleEigengenes: Error: expr must be two-dimensional.");
  if (dim(expr)[2]!=length(colors))
    stop("moduleEigengenes: Error: ncol(expr) and length(colors) must be equal (one color per gene).");
  if (is.factor(colors))
  {
    nl = nlevels(colors);
    nlDrop = nlevels(colors[, drop = TRUE]);
    if (nl > nlDrop)
      stop(paste("Argument 'colors' contains unused levels (empty modules). ",
                 "Use colors[, drop=TRUE] to get rid of them."));
  }
  if (softPower < 0) stop("softPower must be non-negative");
  alignRecognizedValues = c("", "along average");
  if (!is.element(align, alignRecognizedValues))
  {
    printFlush(paste("moduleEigengenes: Error:",
                     "parameter align has an unrecognised value:", align,
                     "; Recognized values are ", alignRecognizedValues));
    stop()
  }
  maxVarExplained = 10;
  if (nPC>maxVarExplained)
    warning(paste("Given nPC is too large. Will use value", maxVarExplained));
  nVarExplained = min(nPC, maxVarExplained);
  modlevels=levels(factor(colors))
  if (excludeGrey)
    if (sum(as.character(modlevels)!=as.character(grey))>0)
    {
      modlevels = modlevels[as.character(modlevels)!=as.character(grey)]
    } else {
      stop(paste("Color levels are empty.
Possible reason: the only color is grey", "and grey module is excluded from the calculation.")); } PrinComps = data.frame(matrix(NA,nrow=dim(expr)[[1]], ncol= length(modlevels))) averExpr = data.frame(matrix(NA,nrow=dim(expr)[[1]], ncol= length(modlevels))) varExpl= data.frame(matrix(NA, nrow= nVarExplained, ncol= length(modlevels))) validMEs = rep(TRUE, length(modlevels)); validAEs = rep(FALSE, length(modlevels)); isPC = rep(TRUE, length(modlevels)); isHub = rep(FALSE, length(modlevels)); validColors = colors; names(PrinComps)=paste(moduleColor.getMEprefix(), modlevels, sep="") names(averExpr)=paste("AE",modlevels,sep="") if (!is.null(rownames(expr))) rownames(PrinComps) = rownames(averExpr) = make.unique(rownames(expr)) for(i in c(1:length(modlevels)) ) { if (verbose>1) printFlush(paste(spaces, "moduleEigengenes : Working on ME for module", modlevels[i])); modulename = modlevels[i] restrict1 = as.character(colors)== as.character(modulename) if (verbose > 2) printFlush(paste(spaces, " ...", sum(restrict1), "genes")); datModule = as.matrix(t(expr[, restrict1])); n = dim(datModule)[1]; p = dim(datModule)[2]; pc = try( { if (nrow(datModule)>1 && impute) { seedSaved = FALSE; if (exists(".Random.seed")) { saved.seed = .Random.seed; seedSaved = TRUE; } if (any(is.na(datModule))) { if (verbose > 5) printFlush(paste(spaces, " ...imputing missing data")); datModule = impute.knn(datModule, k = min(10, nrow(datModule)-1)) # some versions of impute.knn return a list and we need the data component: try( { if (!is.null(datModule$data)) datModule = datModule$data; }, silent = TRUE ) } # The <<- in the next line is extremely important. Using = or <- will create a local variable of # the name .Random.seed and will leave the important global .Random.seed untouched. 
if (seedSaved) .Random.seed <<- saved.seed; } if (verbose > 5) printFlush(paste(spaces, " ...scaling")); if (scale) datModule=t(scale(t(datModule))); if (verbose > 5) printFlush(paste(spaces, " ...calculating SVD")); svd1 = svd(datModule, nu = min(n, p, nPC), nv = min(n, p, nPC)); # varExpl[,i]= (svd1$d[1:min(n,p,nVarExplained)])^2/sum(svd1$d^2) if (verbose > 5) printFlush(paste(spaces, " ...calculating PVE")); veMat = cor(svd1$v[, c(1:min(n,p,nVarExplained))], t(datModule), use = "p") varExpl[c(1:min(n,p,nVarExplained)),i]= rowMeans(veMat^2, na.rm = TRUE) # this is the first principal component svd1$v[,1] }, silent = TRUE); if (inherits(pc, 'try-error')) { if ( (!subHubs) && (!trapErrors) ) stop(pc); if (subHubs) { if (verbose>0) { printFlush(paste(spaces, " ..principal component calculation for module", modulename, "failed with the following error:")); printFlush(paste(spaces, " ", pc, spaces, " ..hub genes will be used instead of principal components.")); } isPC[i] = FALSE; pc = try( { scaledExpr = scale(t(datModule)); covEx = cov(scaledExpr, use = "p"); covEx[!is.finite(covEx)] = 0; modAdj = abs(covEx)^softPower; kIM = (rowMeans(modAdj, na.rm = TRUE))^3; if (max(kIM, na.rm = TRUE) > 1) kIM = kIM-1; kIM[is.na(kIM)] = 0; hub = which.max(kIM) alignSign = sign(covEx[, hub]); alignSign[is.na(alignSign)] = 0; isHub[i] = TRUE; pcxMat = scaledExpr * matrix(kIM * alignSign, nrow = nrow(scaledExpr), ncol = ncol(scaledExpr), byrow = TRUE) / sum(kIM); pcx = rowMeans(pcxMat, na.rm = TRUE); varExpl[1, i] = mean(cor(pcx, t(datModule), use = "p")^2, na.rm = TRUE) pcx }, silent = TRUE); } } if (inherits(pc, 'try-error')) { if (!trapErrors) stop(pc); if (verbose>0) { printFlush(paste(spaces, " ..ME calculation of module", modulename, "failed with the following error:")); printFlush(paste(spaces, " ", pc, spaces, " ..the offending module has been removed.")); } warning(paste("Eigengene calculation of module", modulename, "failed with the following error \n ", pc, "The offending 
module has been removed.\n"));
      validMEs[i] = FALSE;
      isPC[i] = FALSE;
      isHub[i] = FALSE;
      validColors[restrict1] = grey;
    } else {
      PrinComps[, i] = pc;
      ae = try(
      {
        if (isPC[i]) scaledExpr = scale(t(datModule));
        averExpr[, i] = rowMeans(scaledExpr, na.rm = TRUE);
        if (align == "along average")
        {
          if (verbose>4) printFlush(paste(spaces,
                  " .. aligning module eigengene with average expression."))
          corAve = cor(averExpr[,i], PrinComps[,i], use = "p");
          if (!is.finite(corAve)) corAve = 0;
          if (corAve<0) PrinComps[,i] = -PrinComps[,i]
        }
        0;
      }, silent = TRUE);
      if (inherits(ae, 'try-error'))
      {
        if (!trapErrors) stop(ae);
        if (verbose>0)
        {
          printFlush(paste(spaces, " ..Average expression calculation of module",
                           modulename, "failed with the following error:"));
          printFlush(paste(spaces, "     ", ae, spaces,
                           " ..the returned average expression vector will be invalid."));
        }
        warning(paste("Average expression calculation of module", modulename,
                      "failed with the following error \n     ", ae,
                      "The returned average expression vector will be invalid.\n"));
      }
      validAEs[i] = !inherits(ae, 'try-error')
    }
  }
  allOK = (sum(!validMEs)==0)
  if (returnValidOnly && sum(!validMEs)>0)
  {
    PrinComps = PrinComps[, validMEs, drop = FALSE]
    averExpr = averExpr[, validMEs, drop = FALSE];
    varExpl = varExpl[, validMEs, drop = FALSE];
    # Subset the per-module status vectors by the validity mask *before* resetting it;
    # indexing them with the re-created all-TRUE mask would keep the wrong elements.
    isPC = isPC[validMEs];
    isHub = isHub[validMEs];
    validAEs = validAEs[validMEs];
    validMEs = rep(TRUE, times = ncol(PrinComps));
  }
  allPC = (sum(!isPC)==0);
  allAEOK = (sum(!validAEs)==0)
  list(eigengenes = PrinComps, averageExpr = averExpr, varExplained = varExpl, nPC = nPC,
       validMEs = validMEs, validColors = validColors, allOK = allOK, allPC = allPC,
       isPC = isPC, isHub = isHub, validAEs = validAEs, allAEOK = allAEOK)
}

#---------------------------------------------------------------------------------------------
#
# removeGrey
#
#---------------------------------------------------------------------------------------------
# This function removes the grey eigengene from supplied module eigengenes.
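A minimal usage sketch for `moduleEigengenes` on simulated toy data (the objects `expr0` and `colors0` are illustrative and not part of the package source):

```r
library(WGCNA)

# Toy data: 20 samples x 30 genes, with genes pre-assigned to two modules plus grey.
set.seed(1)
expr0 = matrix(rnorm(20 * 30), nrow = 20, ncol = 30)
colors0 = rep(c("blue", "turquoise", "grey"), each = 10)  # one label per gene

me = moduleEigengenes(expr0, colors0, excludeGrey = TRUE)
# me$eigengenes is a data frame with one sample per row and one column per
# non-grey module, named with the "ME" prefix: MEblue, MEturquoise.
colnames(me$eigengenes)
```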
removeGreyME = function(MEs, greyMEName = paste(moduleColor.getMEprefix(), "grey", sep="")) { newMEs = MEs; if (is.vector(MEs) & mode(MEs)=="list") { warned = 0; newMEs = vector(mode = "list", length = length(MEs)); for (set in 1:length(MEs)) { if (!is.data.frame(MEs[[set]]$data)) stop("MEs is a vector list but the list structure is missing the correct 'data' component."); newMEs[[set]] = MEs[[set]]; if (greyMEName %in% names(MEs[[set]]$data)) { newMEs[[set]]$data = MEs[[set]]$data[, names(MEs[[set]]$data)!=greyMEName]; } else { if (warned==0) { warning("removeGreyME: The given grey ME name was not found among the names of given MEs."); warned = 1; } } } } else { if (length(dim(MEs))!=2) stop("Argument 'MEs' has incorrect dimensions.") MEs = as.data.frame(MEs); if (greyMEName %in% names(MEs)) { newMEs = MEs[, names(MEs)!=greyMEName]; } else { warning("removeGreyME: The given grey ME name was not found among the names of given MEs."); } } newMEs; } #------------------------------------------------------------------------------------- # # ModulePrincipalComponents # #------------------------------------------------------------------------------------- # Has been superseded by moduleEigengenes above. # =================================================== # This function collects garbage collectGarbage=function(){while (gc()[2,4] != gc()[2,4] | gc()[1,4] != gc()[1,4]){}} #-------------------------------------------------------------------------------------- # # orderMEs # #-------------------------------------------------------------------------------------- # # performs hierarchical clustering on MEs and returns the order suitable for plotting. 
orderMEs = function(MEs, greyLast = TRUE, greyName = paste(moduleColor.getMEprefix(), "grey", sep=""), orderBy = 1, order = NULL, useSets = NULL, verbose = 0, indent = 0) { spaces = indentSpaces(indent); if ("eigengenes" %in% names(MEs)) { if (is.null(order)) { if (verbose>0) printFlush(paste(spaces, "orderMEs: order not given, calculating using given set", orderBy)); corPC = cor(MEs$eigengenes, use="p") disPC = 1-corPC; order = .clustOrder(disPC, greyLast = greyLast, greyName = greyName); } if (length(order)!=dim(MEs$eigengenes)[2]) stop("orderMEs: given MEs and order have incompatible dimensions."); orderedMEs = MEs; orderedMEs$eigengenes = as.data.frame(MEs$eigengenes[,order]); colnames(orderedMEs$eigengenes) = colnames(MEs$eigengenes)[order]; if (!is.null(MEs$averageExpr)) { orderedMEs$averageExpr = as.data.frame(MEs$averageExpr[, order]) colnames(orderedMEs$averageExpr) = colnames(MEs$data)[order]; } if (!is.null(MEs$varExplained)) { orderedMEs$varExplained = as.data.frame(MEs$varExplained[, order]) colnames(orderedMEs$varExplained) = colnames(MEs$data)[order]; } return(orderedMEs); } else { check = checkSets(MEs, checkStructure = TRUE, useSets = useSets); if (check$structureOK) { multiSet = TRUE; } else { multiSet = FALSE; MEs = fixDataStructure(MEs); useSets = NULL; orderBy = 1; } if (!is.null(useSets)) if (is.na(match(orderBy, useSets))) orderBy = useSets[1]; if (is.null(order)) { if (verbose>0) printFlush(paste(spaces, "orderMEs: order not given, calculating using given set", orderBy)); corPC = cor(MEs[[orderBy]]$data, use="p") disPC = 1-corPC; order = .clustOrder(disPC, greyLast = greyLast, greyName = greyName); } if (length(order)!=dim(MEs[[orderBy]]$data)[2]) stop("orderMEs: given MEs and order have incompatible dimensions."); nSets = length(MEs); orderedMEs = MEs; if (is.null(useSets)) useSets = c(1:nSets); for (set in useSets) { orderedMEs[[set]]$data = as.data.frame(MEs[[set]]$data[,order]); colnames(orderedMEs[[set]]$data) = 
colnames(MEs[[set]]$data)[order]; if (!is.null(MEs[[set]]$averageExpr)) { orderedMEs[[set]]$averageExpr = as.data.frame(MEs[[set]]$averageExpr[, order]) colnames(orderedMEs[[set]]$averageExpr) = colnames(MEs[[set]]$data)[order]; } if (!is.null(MEs[[set]]$varExplained)) { orderedMEs[[set]]$varExplained = as.data.frame(MEs[[set]]$varExplained[, order]) colnames(orderedMEs[[set]]$varExplained) = colnames(MEs[[set]]$data)[order]; } } if (multiSet) { return(orderedMEs); } else { return(orderedMEs[[1]]$data); } } } #--------------------------------------------------------------------------------------------- # # .clustOrder # #--------------------------------------------------------------------------------------------- .clustOrder = function(distM, greyLast = TRUE, greyName = paste(moduleColor.getMEprefix(), "grey", sep="")) { distM = as.matrix(distM); distNames = dimnames(distM)[[1]]; greyInd = match(greyName, distNames); if (greyLast && !is.na(greyInd)) { clusterMEs = (greyName!=distNames); if (sum(clusterMEs)>1) { h = fastcluster::hclust(as.dist(distM[clusterMEs, clusterMEs]), method = "average"); order = h$order; if (sum(order>=greyInd)>0) order[order>=greyInd] = order[order>=greyInd]+1; order = c(order, greyInd); } else if (ncol(distM)>1) { if (greyInd==1) { order = c(2, 1) } else order = c(1, 2); } else order = 1; } else { if (length(distM)>1) { h = fastcluster::hclust(as.dist(distM), method = "average"); order = h$order; } else order = 1; } order; # print(paste("names:", names(distM), collapse = ", ")); # print(paste("order:", order, collapse=", ")) } #--------------------------------------------------------------------------------------------- # # consensusOrderMEs # #--------------------------------------------------------------------------------------------- # Orders MEs by the dendrogram of their consensus dissimilarity. 
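A small usage sketch for `orderMEs` defined above, using a hypothetical eigengene data frame `me0` (not part of the package source):

```r
library(WGCNA)

set.seed(2)
me0 = as.data.frame(matrix(rnorm(20 * 3), ncol = 3))
names(me0) = c("MEblue", "MEturquoise", "MEgrey")

# Reorders eigengenes by average-linkage clustering of 1 - correlation;
# with the default greyLast = TRUE, MEgrey is placed last.
ordered = orderMEs(me0)
names(ordered)
```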
consensusOrderMEs = function(MEs, useAbs = FALSE, useSets = NULL, greyLast = TRUE,
                             greyName = paste(moduleColor.getMEprefix(), "grey", sep=""),
                             method = "consensus")
{
  # Debugging code:
  #printFlush("consensusOrderMEs:");
  #size = checkSets(MEs);
  #print(size);
  # end debugging code
  Diss = consensusMEDissimilarity(MEs, useAbs = useAbs, useSets = useSets, method = method);
  order = .clustOrder(Diss, greyLast, greyName);
  #print(order)
  orderMEs(MEs, greyLast = greyLast, greyName = greyName, order = order, useSets = useSets);
}

orderMEsByHierarchicalConsensus = function(MEs, networkOptions, consensusTree,
                                           greyName = "ME0", calibrate = FALSE)
{
  Diss = .hierarchicalConsensusMEDissimilarity(MEs, networkOptions, consensusTree,
                                               greyName = greyName, calibrate = calibrate);
  order = .clustOrder(Diss, greyLast = TRUE, greyName = greyName);
  mtd.subset(MEs, , order);
}

#---------------------------------------------------------------------------------------------
#
# consensusMEDissimilarity
#
#---------------------------------------------------------------------------------------------
# This function calculates a consensus dissimilarity (based on correlation) among sets of MEs
# (more generally, any sets of vectors).
# CAUTION: when not using absolute value, the minimum similarity will favor the large negative values!

consensusMEDissimilarity = function(MEs, useAbs = FALSE, useSets = NULL, method = "consensus")
{
  methods = c("consensus", "majority");
  m = charmatch(method, methods);
  if (is.na(m))
    stop("Unrecognized method given.
Recognized values are", paste(methods, collapse =", ")); nSets = length(MEs); MEDiss = vector(mode="list", length = nSets); if (is.null(useSets)) useSets = c(1:nSets); for (set in useSets) { if (useAbs) { diss = 1-abs(cor(MEs[[set]]$data, use="p")); } else { diss = 1-cor(MEs[[set]]$data, use="p"); } MEDiss[[set]] = list(Diss = diss); } for (set in useSets) if (set==useSets[1]) { ConsDiss = MEDiss[[set]]$Diss; } else { if (m==1) { ConsDiss = pmax(ConsDiss, MEDiss[[set]]$Diss); } else { ConsDiss = ConsDiss + MEDiss[[set]]$Diss; } } if (m==2) ConsDiss = ConsDiss/nSets; ConsDiss = as.data.frame(ConsDiss); names(ConsDiss) = names(MEs[[useSets[1]]]$data); rownames(ConsDiss) = make.unique(names(MEs[[useSets[1]]]$data)); ConsDiss; } hierarchicalConsensusMEDissimilarity = function(MEs, networkOptions, consensusTree, greyName = "ME0", calibrate = FALSE) { nSets = checkSets(MEs)$nSets; if (inherits(networkOptions, "NetworkOptions")) networkOptions = list2multiData(.listRep(networkOptions, nSets)); .hierarchicalConsensusMEDissimilarity(MEs, networkOptions, consensusTree, greyName = greyName, calibrate = calibrate) } # Quantile normalization # normalize each column such that (column) quantiles are the same # The final value for each quantile is the 'summaryType' of the corresponding quantiles across the columns .equalizeQuantiles = function(data, summaryType = c("median", "mean")) { summaryType = match.arg(summaryType); data.sorted = apply(data, 2, sort); if (summaryType == "median") { refSample = rowMedians(data.sorted, na.rm = TRUE) } else if (summaryType == "mean") refSample = rowMeans(data.sorted, na.rm = TRUE); ranks = round(colRanks(data, ties.method = "average", preserveShape = TRUE)) out = refSample [ ranks ]; dim(out) = dim(data); dimnames(out) = dimnames(data); out; } .turnVectorIntoDist = function(x, size, Diag, Upper) { attr(x, "Size") = size; attr(x, "Diag") = FALSE; attr(x, "Upper") = FALSE; class(x) = c("dist", class(x)) x; } .turnDistVectorIntoMatrix = 
function(x, size, Diag, Upper, diagValue) { mat = as.matrix(.turnVectorIntoDist(x, size, Diag, Upper)); if (!Diag) diag(mat) = diagValue; mat; } # This function calculates consensus dissimilarity of module eigengenes .consensusMEDissimilarity = function(multiMEs, useSets = NULL, corFnc = cor, corOptions = list(use = 'p'), equalizeQuantiles = FALSE, quantileSummary = "mean", consensusQuantile = 0, useAbs = FALSE, greyName = "ME0") { nSets = checkSets(multiMEs)$nSets; useMEs = c(1:ncol(multiMEs[[1]]$data))[names(multiMEs[[1]]$data)!=greyName] useNames = names(multiMEs[[1]]$data)[useMEs]; nUseMEs = length(useMEs); # if (nUseMEs<2) # stop("Something is wrong: there are two or more proper modules, but less than two proper", # "eigengenes. Please check that the grey color label and module eigengene label", # "are correct."); if (is.null(useSets)) useSets = c(1:nSets); nUseSets = length(useSets); MEDiss = array(NA, dim = c(nUseMEs, nUseMEs, nUseSets)); for (set in useSets) { corOptions$x = multiMEs[[set]]$data[, useMEs]; if (useAbs) { diss = 1-abs(do.call(corFnc, corOptions)); } else { diss = 1-do.call(corFnc, corOptions); } MEDiss[, , set] = diss; } if (equalizeQuantiles) { distMat = apply(MEDiss, 3, function(x) {as.numeric(as.dist(x))} ) dim(distMat) = c( nUseMEs * (nUseMEs-1)/2, nUseSets); normalized = .equalizeQuantiles(distMat, summaryType = quantileSummary); MEDiss = apply(normalized, 2, .turnDistVectorIntoMatrix, size = nUseMEs, Diag = FALSE, Upper = FALSE, diagValue = 0); } ConsDiss = apply(MEDiss, c(1:2), quantile, probs = 1-consensusQuantile, names = FALSE, na.rm = TRUE); colnames(ConsDiss) = rownames(ConsDiss) = make.unique(useNames); ConsDiss; } .hierarchicalConsensusMEDissimilarity = function(multiMEs, networkOptions, consensusTree, greyName, calibrate) { nSets = checkSets(multiMEs)$nSets; useMEs = which(mtd.colnames(multiMEs)!=greyName); useNames = mtd.colnames(multiMEs)[useMEs]; nUseMEs = length(useMEs); if (nUseMEs == 0) return(matrix(numeric(0), 0, 0)); # 
if (nUseMEs<2) # stop("Something is wrong: there are two or more proper modules, but less than two proper", # "eigengenes. Please check that the grey color label and module eigengene label", # "are correct."); if (!isMultiData(networkOptions, strict = FALSE)) stop("'networkOptions' must be either a single list of class 'NetworkOptions'\n", "or a MultiData structure containing one such list per input set. "); if (length(networkOptions)!=nSets) stop("Number of sets in 'multiMEs' and 'networkOptions' must be the same."); MEDiss = mtd.mapply(function(me, netOpt) { cor.me = do.call(netOpt$corFnc, c(list(x = me), netOpt$corOptions)); if (!grepl("signed", netOpt$networkType)) cor.me = abs(cor.me); cor.me; }, mtd.subset(multiMEs, , useMEs), networkOptions, returnList = TRUE); if (calibrate) { cons = hierarchicalConsensusCalculation(MEDiss, consensusTree = consensusTree, level = 1, # Return options: the data can be either saved or returned but not both. saveConsensusData = FALSE, keepIntermediateResults = FALSE, # Internal handling of data useDiskCache = FALSE, # Behaviour collectGarbage = FALSE, verbose = 0, indent = 0)$consensusData cons = BD.getData(cons, blocks = 1); } else cons = simpleHierarchicalConsensusCalculation(MEDiss, consensusTree) consDiss = 1-cons; colnames(consDiss) = rownames(consDiss) = make.unique(useNames); consDiss; } #====================================================================================================== # ColorHandler.R #====================================================================================================== # A set of global variables and functions that should help handling color names for some 400+ modules. # A vector called .GlobalStandardColors is defined that holds color names with first few entries # being the well-known and -loved colors. The rest is randomly chosen from the color names of R, # excluding grey colors. 
#---------------------------------------------------------------------------------------------------------
#
# .GlobalStandardColors
#
#---------------------------------------------------------------------------------------------------------
# This code forms a vector of color names in which the first entries are given by BaseColors and the rest
# is "randomly" chosen from the remaining R color names that contain neither "grey" nor "gray".

BaseColors = c("turquoise","blue","brown","yellow","green","red","black","pink","magenta",
               "purple","greenyellow","tan","salmon","cyan", "midnightblue", "lightcyan",
               "grey60", "lightgreen", "lightyellow", "royalblue", "darkred", "darkgreen",
               "darkturquoise", "darkgrey",
               "orange", "darkorange", "white", "skyblue", "saddlebrown", "steelblue",
               "paleturquoise", "violet", "darkolivegreen", "darkmagenta" );

RColors = colors()[-grep("grey", colors())];
RColors = RColors[-grep("gray", RColors)];
InBase = match(BaseColors, RColors);
ExtraColors = RColors[-c(InBase[!is.na(InBase)])];
nExtras = length(ExtraColors);

# Here is the vector of colors that should be used by all functions:
.GlobalStandardColors = c(BaseColors, ExtraColors[rank(sin(13*c(1:nExtras) +sin(13*c(1:nExtras))) )] );

standardColors = function(n = NULL)
{
  if (is.null(n)) return(.GlobalStandardColors);
  if ((n>0) && (n<=length(.GlobalStandardColors)))
  {
    return(.GlobalStandardColors[c(1:n)]);
  } else {
    stop("Invalid number of standard colors requested.");
  }
}

rm(BaseColors, RColors, ExtraColors, nExtras, InBase);

#---------------------------------------------------------------------------------------------------------
#
# normalizeLabels
#
#---------------------------------------------------------------------------------------------------------
# "Normalizes" numerical labels such that the largest group is labeled 1, the next largest 2 etc.
# If keepZero == TRUE, label zero is preserved.
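A brief illustration of the `standardColors` palette defined above (illustrative calls, not part of the package source):

```r
library(WGCNA)

# The first entries of the palette are the base colors in the order listed above:
standardColors(5)   # "turquoise" "blue" "brown" "yellow" "green"

# With no argument, the full palette (several hundred non-grey color names) is returned:
length(standardColors())
```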
normalizeLabels = function(labels, keepZero = TRUE)
{
  if (keepZero)
  {
    NonZero = (labels!=0);
  } else {
    NonZero = rep(TRUE, length(labels));
  }
  f = as.numeric(factor(labels[NonZero]));
  t = table(labels[NonZero]);
  # print(t)
  r = rank(-as.vector(t), ties.method = "first");
  norm_labs = rep(0, times = length(labels));
  norm_labs[NonZero] = r[f];
  norm_labs;
}

#---------------------------------------------------------------------------------------------------------
#
# labels2colors
#
#---------------------------------------------------------------------------------------------------------
# This function converts integer numerical labels into color names in the order either given by colorSeq,
# or (if colorSeq==NULL) by standardColors(). If zeroIsGrey == TRUE, label 0 will be assigned
# the color grey; otherwise presence of labels below 1 will trigger an error.
# Dimensions of labels (if present) are preserved.

labels2colors = function(labels, zeroIsGrey = TRUE, colorSeq = NULL, naColor = "grey",
                         commonColorCode = TRUE)
{
  if (is.null(colorSeq)) colorSeq = standardColors();
  if (is.numeric(labels))
  {
    if (zeroIsGrey) minLabel = 0 else minLabel = 1
    if (any(labels<0, na.rm = TRUE)) minLabel = min(c(labels), na.rm = TRUE)
    nLabels = labels;
  } else {
    if (commonColorCode)
    {
      factors = factor(c(as.matrix(as.data.frame(labels))))
      nLabels = as.numeric(factors)
      dim(nLabels)= dim(labels);
    } else {
      labels = as.matrix(as.data.frame(labels));
      factors = list();
      for (c in 1:ncol(labels))
        factors[[c]] = factor(labels[, c]);
      nLabels = sapply(factors, as.numeric)
    }
  }
  if (max(nLabels, na.rm = TRUE) > length(colorSeq))
  {
    # Use nLabels (always numeric) rather than labels here; labels may be character.
    nRepeats = as.integer((max(nLabels, na.rm = TRUE)-1)/length(colorSeq)) + 1;
    warning(paste("labels2colors: Number of labels exceeds number of available colors.",
                  "Some colors will be repeated", nRepeats, "times."))
    extColorSeq = colorSeq;
    for (rep in 1:nRepeats)
      extColorSeq = c(extColorSeq, paste(colorSeq, ".", rep, sep=""));
  } else {
    nRepeats = 1;
    extColorSeq = colorSeq;
  }
  colors =
rep("grey", length(nLabels));
  fin = !is.na(nLabels);
  colors[!fin] = naColor;
  finLabels = nLabels[fin];
  colors[fin][finLabels!=0] = extColorSeq[finLabels[finLabels!=0]];
  if (!is.null(dim(labels)))
    dim(colors) = dim(labels);
  colors;
}

#========================================================================================
#
# MergeCloseModules
#
#========================================================================================

#---------------------------------------------------------------------------------
#
# moduleNumber
#
#---------------------------------------------------------------------------------
# Similar to modulecolor2 above, but returns numbers instead of colors, which is oftentimes more useful.
# 0 means unassigned.
# Return value is a simple vector, not a factor.
# Caution: the module numbers are neither sorted nor sequential; the only guaranteed facts are that grey
# probes are labeled by 0 and that all probes belonging to the same module have the same number.

moduleNumber = function(dendro, cutHeight = 0.9, minSize = 50)
{
  Branches = cutree(dendro, h = cutHeight);
  NOnBranches = table(Branches);
  TrueBranch = NOnBranches >= minSize;
  Branches[!TrueBranch[Branches]] = 0;
  Branches;
}

#--------------------------------------------------------------------------------------
#
# fixDataStructure
#
#--------------------------------------------------------------------------------------
# Check input data: if they are not a vector of lists, put them into the form of a vector of lists.
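A usage sketch for `normalizeLabels` and `labels2colors` defined above (illustrative values, not part of the package source):

```r
library(WGCNA)

# The largest group is relabeled 1, the next largest 2, and zero is preserved:
normalizeLabels(c(7, 7, 5, 5, 5, 0))   # 2 2 1 1 1 0

# Label 0 maps to grey; positive labels follow standardColors():
labels2colors(c(0, 1, 2, 1))   # "grey" "turquoise" "blue" "turquoise"
```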
fixDataStructure = function(data, verbose = 0, indent = 0) { spaces = indentSpaces(indent); if (!inherits(data, "list")) { if (verbose>0) printFlush(paste(spaces, "fixDataStructure: data is not a vector of lists: converting it into one.")); x = data; data = vector(mode = "list", length = 1); data[[1]] = list(data = x); rm(x); } data; } #------------------------------------------------------------------------------------------- # # checkSets # #------------------------------------------------------------------------------------------- # Checks sets for consistency and returns some diagnostics. .permissiveDim = function(x) { d = dim(x); if (is.null(d)) return( c(length(x), 1)) return(d) } checkSets = function(data, checkStructure = FALSE, useSets = NULL) { nSets = length(data); if (is.null(useSets)) useSets = c(1:nSets); if (nSets<=0) stop("No data given."); structureOK = TRUE; if (!inherits(data, "list")) { if (checkStructure) { structureOK = FALSE; nGenes = 0; nSamples = 0; } else { stop("data does not appear to have the correct format. 
Consider using fixDataStructure", "or setting checkStructure = TRUE when calling this function."); } } else { nSamples = vector(length = nSets); nGenes = .permissiveDim(data[[useSets[1]]]$data)[2]; for (set in useSets) { if (nGenes!=.permissiveDim(data[[set]]$data)[2]) { if (checkStructure) { structureOK = FALSE; } else { stop(paste("Incompatible number of genes in set 1 and", set)); } } nSamples[set] = .permissiveDim(data[[set]]$data)[1]; } } list(nSets = nSets, nGenes = nGenes, nSamples = nSamples, structureOK = structureOK); } #-------------------------------------------------------------------------------------- # # multiSetMEs # #-------------------------------------------------------------------------------------- multiSetMEs = function(exprData, colors, universalColors = NULL, useSets = NULL, useGenes = NULL, impute = TRUE, nPC = 1, align = "along average", excludeGrey = FALSE, grey = if (is.null(universalColors)) {if(is.numeric(colors)) 0 else "grey"} else if (is.numeric(universalColors)) 0 else "grey", subHubs = TRUE, trapErrors = FALSE, returnValidOnly = trapErrors, softPower = 6, verbose = 1, indent = 0) { spaces = indentSpaces(indent); nSets = length(exprData); setsize = checkSets(exprData, useSets = useSets); nGenes = setsize$nGenes; nSamples = setsize$nSamples; if (verbose>0) printFlush(paste(spaces,"multiSetMEs: Calculating module MEs.")); MEs = vector(mode="list", length=nSets); consValidMEs = NULL; if (!is.null(universalColors)) consValidColors = universalColors; if (is.null(useSets)) useSets = c(1:nSets); if (is.null(useGenes)) { for (set in useSets) { if (verbose>0) printFlush(paste(spaces," Working on set", as.character(set), "...")); if (is.null(universalColors)) { setColors = colors[,set]; } else { setColors = universalColors; } setMEs = moduleEigengenes(expr = exprData[[set]]$data, colors = setColors, impute = impute, nPC = nPC, align = align, excludeGrey = excludeGrey, grey = grey, trapErrors = trapErrors, subHubs = subHubs, returnValidOnly 
= FALSE, softPower = softPower, verbose = verbose-1, indent = indent+1); if (!is.null(universalColors) && (!setMEs$allOK)) { if (is.null(consValidMEs)) { consValidMEs = setMEs$validMEs; } else { consValidMEs = consValidMEs * setMEs$validMEs; } consValidColors[setMEs$validColors!=universalColors] = setMEs$validColors[setMEs$validColors!=universalColors] } MEs[[set]] = setMEs; names(MEs[[set]])[names(setMEs)=='eigengenes'] = 'data'; # Here's what moduleEigengenes returns: # # list(eigengenes = PrinComps, averageExpr = averExpr, varExplained = varExpl, nPC = nPC, # validMEs = validMEs, validColors = validColors, allOK = allOK, allPC = allPC, isPC = isPC, # isHub = isHub, validAEs = validAEs, allAEOK = allAEOK) } } else { for (set in useSets) { if (verbose>0) printFlush(paste(spaces," Working on set", as.character(set), "...")); if (is.null(universalColors)) { setColors = colors[useGenes ,set]; } else { setColors = universalColors[useGenes]; } setMEs = moduleEigengenes(expr = exprData[[set]]$data[, useGenes], colors = setColors, impute = impute, nPC = nPC, align = align, excludeGrey = excludeGrey, grey = grey, trapErrors = trapErrors, subHubs = subHubs, returnValidOnly = FALSE, softPower = softPower, verbose = verbose-1, indent = indent+1); if (!is.null(universalColors) && (!setMEs$allOK)) { if (is.null(consValidMEs)) { consValidMEs = setMEs$validMEs; } else { consValidMEs = consValidMEs * setMEs$validMEs; } consValidColors[setMEs$validColors!=universalColors[useGenes]] = setMEs$validColors[setMEs$validColors!=universalColors[useGenes]] } MEs[[set]] = setMEs; names(MEs[[set]])[names(setMEs)=='eigengenes'] = 'data'; } } if (!is.null(universalColors)) { for (set in 1:nSets) { if (!is.null(consValidMEs)) MEs[[set]]$validMEs = consValidMEs; MEs[[set]]$validColors = consValidColors; } } for (set in 1:nSets) { MEs[[set]]$allOK = (sum(!MEs[[set]]$validMEs)==0); if (returnValidOnly) { valid = (MEs[[set]]$validMEs > 0); MEs[[set]]$data = MEs[[set]]$data[, valid, drop = FALSE]; 
MEs[[set]]$averageExpr = MEs[[set]]$averageExpr[, valid, drop = FALSE]; MEs[[set]]$varExplained = MEs[[set]]$varExplained[, valid, drop = FALSE]; MEs[[set]]$isPC = MEs[[set]]$isPC[valid]; MEs[[set]]$allPC = (sum(!MEs[[set]]$isPC)==0) MEs[[set]]$isHub = MEs[[set]]$isHub[valid]; MEs[[set]]$validAEs = MEs[[set]]$validAEs[valid]; MEs[[set]]$allAEOK = (sum(!MEs[[set]]$validAEs)==0) MEs[[set]]$validMEs = rep(TRUE, times = ncol(MEs[[set]]$data)); } } names(MEs) = names(exprData); MEs; } #--------------------------------------------------------------------------------------------- # # MergeCloseModules # #--------------------------------------------------------------------------------------------- mergeCloseModules = function( # input data exprData, colors, # Optional starting eigengenes MEs = NULL, # Optional restriction to a subset of all sets useSets = NULL, # If missing data are present, impute them? impute = TRUE, # Input handling options checkDataFormat = TRUE, unassdColor = if (is.numeric(colors)) 0 else "grey", # Options for eigengene network construction corFnc = cor, corOptions = list(use = 'p'), useAbs = FALSE, # Options for constructing the consensus equalizeQuantiles = FALSE, quantileSummary = "mean", consensusQuantile = 0, # Merging options cutHeight = 0.2, iterate = TRUE, # Output options relabel = FALSE, colorSeq = NULL, getNewMEs = TRUE, getNewUnassdME = TRUE, # Options controlling behaviour of the function trapErrors = FALSE, verbose = 1, indent = 0) { MEsInSingleFrame = FALSE; spaces = indentSpaces(indent); #numCols = is.numeric(colors); #facCols = is.factor(colors); #charCols = is.character(colors); origColors = colors; colors = colors[, drop = TRUE]; greyName = paste(moduleColor.getMEprefix(), unassdColor, sep=""); if (verbose>0) printFlush(paste(spaces, "mergeCloseModules: Merging modules whose distance is less than", cutHeight)); if (verbose>3) printFlush(paste(spaces, " .. 
will look for grey label", greyName)); if (!checkSets(exprData, checkStructure = TRUE, useSets = useSets)$structureOK) { if (checkDataFormat) { exprData = fixDataStructure(exprData); MEsInSingleFrame = TRUE; } else { stop("Given exprData appear to be misformatted."); } } setsize = checkSets(exprData, useSets = useSets); nSets = setsize$nSets; if (!is.null(MEs)) { checkMEs = checkSets(MEs, checkStructure = TRUE, useSets = useSets); if (checkMEs$structureOK) { if (nSets!=checkMEs$nSets) stop("Input error: numbers of sets in exprData and MEs differ.") for (set in 1:nSets) { if (checkMEs$nSamples[set]!=setsize$nSamples[set]) stop(paste("Number of samples in MEs is incompatible with subset length for set", set)); } } else { if (MEsInSingleFrame) { MEs = fixDataStructure(MEs); checkMEs = checkSets(MEs); } else { stop("MEs do not have the appropriate structure (same as exprData). "); } } } if (setsize$nGenes!=length(colors)) stop("Number of genes in exprData is different from the length of original colors. 
They must equal."); if ((cutHeight <0) | (cutHeight>(1+as.integer(useAbs)))) stop(paste("Given cutHeight is out of sensible range between 0 and", 1+as.integer(useAbs) )); done = FALSE; iteration = 1; MergedColors = colors; ok = try( { while (!done) { if (is.null(MEs)) { MEs = multiSetMEs(exprData, colors = NULL, universalColors = colors, useSets = useSets, impute = impute, subHubs = TRUE, trapErrors = FALSE, excludeGrey = TRUE, grey = unassdColor, verbose = verbose-1, indent = indent+1); MEs = consensusOrderMEs(MEs, useAbs = useAbs, useSets = useSets, greyLast = FALSE); } else if (nlevels(as.factor(colors))!=checkMEs$nGenes) { if ((iteration==1) & (verbose>0)) printFlush(paste(spaces, " Number of given module colors", "does not match number of given MEs => recalculating the MEs.")) MEs = multiSetMEs(exprData, colors = NULL, universalColors = colors, useSets = useSets, impute = impute, subHubs = TRUE, trapErrors = FALSE, excludeGrey = TRUE, grey = unassdColor, verbose = verbose-1, indent = indent+1); MEs = consensusOrderMEs(MEs, useAbs = useAbs, useSets = useSets, greyLast = FALSE); } if (iteration==1) oldMEs = MEs; # Check colors for number of distinct colors that are not grey colLevs = as.character(levels(as.factor(colors))); if ( length(colLevs[colLevs!=as.character(unassdColor)])<2 ) { printFlush(paste(spaces, "mergeCloseModules: less than two proper modules.")); printFlush(paste(spaces, " ..color levels are", paste(colLevs, collapse = ", "))); printFlush(paste(spaces, " ..there is nothing to merge.")); MergedNewColors = colors; MergedColors = colors; nOldMods = 1; nNewMods = 1; oldTree = NULL; Tree = NULL; break; } # Cluster the found module eigengenes and merge ones that are too close according to the specified # quantile. 
nOldMods = nlevels(as.factor(colors));
ConsDiss = .consensusMEDissimilarity(MEs,
                equalizeQuantiles = equalizeQuantiles,
                quantileSummary = quantileSummary,
                consensusQuantile = consensusQuantile, useAbs = useAbs,
                corFnc = corFnc, corOptions = corOptions,
                useSets = useSets, greyName = greyName);
Tree = fastcluster::hclust(as.dist(ConsDiss), method = "average");
if (iteration==1) oldTree = Tree;
TreeBranches = as.factor(moduleNumber(dendro = Tree, cutHeight = cutHeight, minSize = 1));
UniqueBranches = levels(TreeBranches);
nBranches = nlevels(TreeBranches)
NumberOnBranch = table(TreeBranches);
MergedColors = colors;
# Merge modules on the same branch
for (branch in 1:nBranches) if (NumberOnBranch[branch]>1)
{
  ModulesOnThisBranch = names(TreeBranches)[TreeBranches==UniqueBranches[branch]];
  ColorsOnThisBranch = substring(ModulesOnThisBranch, 3);
  if (is.numeric(origColors)) ColorsOnThisBranch = as.numeric(ColorsOnThisBranch);
  if (verbose>3)
    printFlush(paste(spaces, "  Merging original colors",
                     paste(ColorsOnThisBranch, collapse=", ")));
  for (color in 2:length(ColorsOnThisBranch))
    MergedColors[MergedColors==ColorsOnThisBranch[color]] = ColorsOnThisBranch[1];
}
MergedColors = MergedColors[, drop = TRUE];
nNewMods = nlevels(as.factor(MergedColors));
if (nNewMods<nOldMods & iterate)
{
  colors = MergedColors;
  MEs = NULL;
} else {
  done = TRUE;
}
iteration = iteration + 1;
}
if (relabel)
{
  RawModuleColors = levels(as.factor(MergedColors));
  # Relabel the merged colors to the standard order based on the number of genes in each module
  if (is.null(colorSeq))
  {
    if (is.numeric(origColors)) {
      colorSeq = c(1:length(table(origColors)));
    } else {
      nNewColors = length(RawModuleColors);
      colorSeq = labels2colors(c(1:nNewColors))
    }
  }
  nGenesInModule = rep(0, nNewMods);
  for (mod in 1:nNewMods) nGenesInModule[mod] = sum(MergedColors==RawModuleColors[mod]);
  SortedRawModuleColors = RawModuleColors[order(-nGenesInModule)]
  # Change the color names to the standard sequence, but leave the unassigned (grey) label alone
  # (that is why rank below starts at 0).
  MergedNewColors = MergedColors;
  if (is.factor(MergedNewColors)) MergedNewColors = as.character(MergedNewColors);
  if (verbose>3) printFlush(paste(spaces, "   Changing original colors:"));
  rank = 0;
  for (color in 1:length(SortedRawModuleColors)) if (SortedRawModuleColors[color]!=unassdColor)
  {
    rank = rank + 1;
    if (verbose>3) printFlush(paste(spaces, "      ", SortedRawModuleColors[color],
                                    "to ", colorSeq[rank]));
    MergedNewColors[MergedColors==SortedRawModuleColors[color]] = colorSeq[rank];
  }
  if (is.factor(MergedColors)) MergedNewColors = as.factor(MergedNewColors);
} else {
  MergedNewColors = MergedColors;
}
MergedNewColors = MergedNewColors[, drop = TRUE];
if (getNewMEs)
{
  if (nNewMods<nOldMods)
  {
    if (verbose>0) printFlush(paste(spaces, "  Calculating new MEs..."));
    NewMEs = multiSetMEs(exprData, colors = NULL, universalColors = MergedNewColors,
                         useSets = useSets, impute =
impute, subHubs = TRUE, trapErrors = FALSE, excludeGrey = !getNewUnassdME, grey = unassdColor, verbose = verbose-1, indent = indent+1); newMEs = consensusOrderMEs(NewMEs, useAbs = useAbs, useSets = useSets, greyLast = TRUE, greyName = greyName); ConsDiss = .consensusMEDissimilarity(newMEs, equalizeQuantiles = equalizeQuantiles, quantileSummary = quantileSummary, consensusQuantile = consensusQuantile, useAbs = useAbs, corFnc = corFnc, corOptions = corOptions, useSets = useSets, greyName = greyName); if (length(ConsDiss) > 1) { Tree = fastcluster::hclust(as.dist(ConsDiss), method = "average"); } else Tree = NULL; } else { newMEs = MEs; } } else { newMEs = NULL; } if (MEsInSingleFrame) { newMEs = newMEs[[1]]$data; oldMEs = oldMEs[[1]]$data; } }, silent = TRUE); if (inherits(ok, 'try-error')) { if (!trapErrors) stop(ok); if (verbose>0) { printFlush(paste(spaces, "Warning: merging of modules failed with the following error:")); printFlush(paste(' ', spaces, ok)); printFlush(paste(spaces, " --> returning unmerged modules and *no* eigengenes.")); } warning(paste("mergeCloseModules: merging of modules failed with the following error:\n", " ", ok, " --> returning unmerged modules and *no* eigengenes.\n")); list(colors = origColors, allOK = FALSE); } else { list(colors = MergedNewColors, dendro = Tree, oldDendro = oldTree, cutHeight = cutHeight, oldMEs = oldMEs, newMEs = newMEs, allOK = TRUE); } } #--------------------------------------------------------------------------------------------- # # hierarchicalMergeCloseModules # #--------------------------------------------------------------------------------------------- hierarchicalMergeCloseModules = function( # input data multiExpr, multiExpr.imputed = NULL, labels, # Optional starting eigengenes MEs = NULL, unassdColor = if (is.numeric(labels)) 0 else "grey", # If missing data are present, impute them? 
impute = TRUE, # Options for eigengene network construction networkOptions, # Options for constructing the consensus consensusTree, calibrateMESimilarities = FALSE, # Merging options cutHeight = 0.2, iterate = TRUE, # Output options relabel = FALSE, colorSeq = NULL, getNewMEs = TRUE, getNewUnassdME = TRUE, # Options controlling behaviour of the function trapErrors = FALSE, verbose = 1, indent = 0) { MEsInSingleFrame = FALSE; spaces = indentSpaces(indent); #numCols = is.numeric(labels); #facCols = is.factor(labels); #charCols = is.character(labels); origColors = labels; useSets = consensusTreeInputs(consensusTree); labels = labels[, drop = TRUE]; if (all(replaceMissing(labels==unassdColor, TRUE))) return( list(labels = labels, allOK = FALSE)); greyName = paste(moduleColor.getMEprefix(), unassdColor, sep=""); if (verbose>0) printFlush(paste(spaces, "mergeCloseModules: Merging modules whose distance is less than", cutHeight)); if (verbose>3) printFlush(paste(spaces, " .. will use unassigned ME label", greyName)); setsize = checkSets(multiExpr[useSets]); nUseSets = setsize$nSets; if (is.null(multiExpr.imputed)) { if (impute) { multiExpr.imputed = mtd.apply(multiExpr[useSets], imputeByModule, labels = labels, excludeUnassigned = FALSE, unassignedLabel = unassdColor, scale = TRUE) } else multiExpr.imputed = multiExpr[useSets]; } else stopifnot(isTRUE(all.equal(checkSets(multiExpr.imputed), setsize))); if (!is.null(MEs)) { checkMEs = checkSets(MEs[useSets], checkStructure = TRUE); if (checkMEs$structureOK) { if (nUseSets!=checkMEs$nSets) stop("Input error: numbers of sets in multiExpr and MEs differ.") for (set in 1:nUseSets) { if (checkMEs$nSamples[set]!=setsize$nSamples[set]) stop(paste("Number of samples in MEs is incompatible with subset length for set", set)); } } else { if (MEsInSingleFrame) { MEs = fixDataStructure(MEs); checkMEs = checkSets(MEs); } else { stop("MEs do not have the appropriate structure (same as multiExpr). 
"); } } } if (inherits(networkOptions, "NetworkOptions")) networkOptions = list2multiData(.listRep(networkOptions, nUseSets)); if (setsize$nGenes!=length(labels)) stop("Number of genes in multiExpr is different from the length of original labels. They must equal."); done = FALSE; iteration = 1; MergedColors = labels; #ok = try( #{ while (!done) { if (is.null(MEs)) { MEs = multiSetMEs(multiExpr.imputed, colors = NULL, universalColors = labels, impute = impute, subHubs = TRUE, trapErrors = FALSE, excludeGrey = TRUE, grey = unassdColor, verbose = verbose-1, indent = indent+1); #MEs = consensusOrderMEs(MEs, useAbs = useAbs, greyLast = FALSE); #collectGarbage(); } else if (nlevels(as.factor(labels))!=checkMEs$nGenes) { if ((iteration==1) & (verbose>0)) printFlush(paste(spaces, " Number of given module labels", "does not match number of given MEs => recalculating the MEs.")) MEs = multiSetMEs(multiExpr.imputed, colors = NULL, universalColors = labels, impute = impute, subHubs = TRUE, trapErrors = FALSE, excludeGrey = TRUE, grey = unassdColor, verbose = verbose-1, indent = indent+1); #MEs = consensusOrderMEs(MEs, useAbs = useAbs, greyLast = FALSE); #collectGarbage(); } if (iteration==1) oldMEs = MEs; # Check labels for number of distinct labels that are not grey colLevs = as.character(levels(as.factor(labels))); if ( length(colLevs[colLevs!=as.character(unassdColor)])<2 ) { printFlush(paste(spaces, "mergeCloseModules: less than two proper modules.")); printFlush(paste(spaces, " ..color levels are", paste(colLevs, collapse = ", "))); printFlush(paste(spaces, " ..there is nothing to merge.")); MergedNewColors = labels; MergedColors = labels; nOldMods = 1; nNewMods = 1; oldTree = NULL; Tree = NULL; break; } # Cluster the found module eigengenes and merge ones that are too close according to the specified # quantile. 
nOldMods = nlevels(as.factor(labels));
ConsDiss = .hierarchicalConsensusMEDissimilarity(MEs,
                networkOptions = networkOptions,
                consensusTree = consensusTree,
                greyName = greyName,
                calibrate = calibrateMESimilarities);
Tree = fastcluster::hclust(as.dist(ConsDiss), method = "average");
if (iteration==1) oldTree = Tree;
TreeBranches = as.factor(moduleNumber(dendro = Tree, cutHeight = cutHeight, minSize = 1));
UniqueBranches = levels(TreeBranches);
nBranches = nlevels(TreeBranches)
NumberOnBranch = table(TreeBranches);
MergedColors = labels;
# Merge modules on the same branch
for (branch in 1:nBranches) if (NumberOnBranch[branch]>1)
{
  ModulesOnThisBranch = names(TreeBranches)[TreeBranches==UniqueBranches[branch]];
  ColorsOnThisBranch = substring(ModulesOnThisBranch, 3);
  if (is.numeric(origColors)) ColorsOnThisBranch = as.numeric(ColorsOnThisBranch);
  if (verbose>3)
    printFlush(paste(spaces, "  Merging original labels",
                     paste(ColorsOnThisBranch, collapse=", ")));
  for (color in 2:length(ColorsOnThisBranch))
    MergedColors[MergedColors==ColorsOnThisBranch[color]] = ColorsOnThisBranch[1];
}
MergedColors = MergedColors[, drop = TRUE];
nNewMods = nlevels(as.factor(MergedColors));
if (nNewMods<nOldMods & iterate)
{
  labels = MergedColors;
  MEs = NULL;
} else {
  done = TRUE;
}
iteration = iteration + 1;
}
if (relabel)
{
  RawModuleColors = levels(as.factor(MergedColors));
  # Relabel the merged labels to the standard order based on the number of genes in each module
  if (is.null(colorSeq))
  {
    if (is.numeric(origColors)) {
      colorSeq = c(1:length(table(origColors)));
    } else {
      nNewColors = length(RawModuleColors);
      colorSeq = labels2colors(c(1:nNewColors))
    }
  }
  nGenesInModule = rep(0, nNewMods);
  for (mod in 1:nNewMods) nGenesInModule[mod] = sum(MergedColors==RawModuleColors[mod]);
  SortedRawModuleColors = RawModuleColors[order(-nGenesInModule)]
  # Change the labels to the standard sequence, but leave the unassigned label alone
  # (that is why rank below starts at 0).
  MergedNewColors = MergedColors;
  if (is.factor(MergedNewColors)) MergedNewColors = as.character(MergedNewColors);
  if (verbose>3) printFlush(paste(spaces, "   Changing original labels:"));
  rank = 0;
  for (color in 1:length(SortedRawModuleColors)) if (SortedRawModuleColors[color]!=unassdColor)
  {
    rank = rank + 1;
    if (verbose>3) printFlush(paste(spaces, "      ", SortedRawModuleColors[color],
                                    "to ", colorSeq[rank]));
    MergedNewColors[MergedColors==SortedRawModuleColors[color]] = colorSeq[rank];
  }
  if (is.factor(MergedColors)) MergedNewColors = as.factor(MergedNewColors);
} else {
  MergedNewColors = MergedColors;
}
MergedNewColors = MergedNewColors[, drop = TRUE];
if (getNewMEs)
{
  if (nNewMods<nOldMods)
  {
    if (verbose>0) printFlush(paste(spaces, "  Calculating new MEs..."));
    NewMEs = multiSetMEs(multiExpr.imputed, colors = NULL, universalColors = MergedNewColors,
                         impute = impute, subHubs = TRUE, trapErrors = FALSE,
                         excludeGrey = !getNewUnassdME, grey =
unassdColor, verbose = verbose-1, indent = indent+1);
newMEs = orderMEsByHierarchicalConsensus(NewMEs, networkOptions, consensusTree,
                                         greyName = greyName, calibrate = calibrateMESimilarities);
ConsDiss = .hierarchicalConsensusMEDissimilarity(newMEs,
                networkOptions = networkOptions,
                consensusTree = consensusTree,
                greyName = greyName,
                calibrate = calibrateMESimilarities);
if (length(ConsDiss) > 1)
{
  Tree = fastcluster::hclust(as.dist(ConsDiss), method = "average");
} else Tree = NULL;
} else {
  newMEs = MEs;
}
} else {
  newMEs = NULL;
}
#}, silent = TRUE);
#if (class(ok)=='try-error')
#{
#  if (!trapErrors) stop(ok);
#  if (verbose>0)
#  {
#    printFlush(paste(spaces, "Warning: merging of modules failed with the following error:"));
#    printFlush(paste('   ', spaces, ok));
#    printFlush(paste(spaces, " --> returning unmerged modules and *no* eigengenes."));
#  }
#  warning(paste("mergeCloseModules: merging of modules failed with the following error:\n",
#                "    ", ok, " --> returning unmerged modules and *no* eigengenes.\n"));
#  list(labels = origColors, allOK = FALSE);
#} else {
list(labels = MergedNewColors, dendro = Tree, oldDendro = oldTree, cutHeight = cutHeight,
     oldMEs = oldMEs, newMEs = newMEs, allOK = TRUE);
#}
}

# ===================================================
# For hard thresholding, we use the signum (step) function

signumAdjacencyFunction = function(corMat, threshold)
{
  adjmat = as.matrix(abs(corMat)>=threshold)
  dimnames(adjmat) <- dimnames(corMat)
  diag(adjmat) <- 0
  adjmat
}

# ===================================================
# For soft thresholding, one can use the sigmoid function
# But we have focused on the power adjacency function in the tutorial...

sigmoidAdjacencyFunction = function(ss, mu=0.8, alpha=20)
{
  1/(1+exp(-alpha*(ss-mu)))
}

# This function is useful for speeding up the connectivity calculation.
# The idea is to partition the adjacency matrix into consecutive batches of a
# given size.
# In principle, the larger the block size, the faster the calculation. But
# smaller blockSizes require less memory...
# Input: gene expression data set where *rows* correspond to microarray samples
# and columns correspond to genes.
# If fewer than minNSamples contain gene expression information for a given
# gene, then its connectivity is returned as missing (NA).

softConnectivity = function(datExpr,
                            corFnc = "cor", corOptions = "use = 'p'",
                            weights = NULL,
                            type = "unsigned",
                            power = if (type == "signed") 15 else 6,
                            blockSize = 1500,
                            minNSamples = NULL,
                            verbose = 2, indent = 0)
{
  spaces = indentSpaces(indent);
  nGenes = dim(datExpr)[[2]]
  if (blockSize * nGenes > .largestBlockSize) blockSize = as.integer(.largestBlockSize/nGenes);
  nSamples = dim(datExpr)[[1]]
  if (is.null(minNSamples))
  {
    minNSamples = max(..minNSamples, nSamples/3);
  }
  if (nGenes<..minNGenes | nSamples<..minNSamples)
    stop("Something seems to be wrong: the data have too few genes or samples.\n",
         "  Make sure the input data frame has samples as rows and genes as columns.");
  k = rep(NA, nGenes)
  start = 1;
  if (verbose>0)
  {
    printFlush(paste(spaces, "softConnectivity: FYI: connectivity of genes with less than",
                     ceiling(minNSamples), "valid samples will be returned as NA."));
    cat(paste(spaces, "..calculating connectivities.."));
    pind = initProgInd();
  }
  while (start < nGenes)
  {
    end = min(start + blockSize-1, nGenes);
    index1 = start:end;
    ad1 = adjacency(datExpr, weights = weights, selectCols = index1, power = power, type = type,
                    corFnc = corFnc, corOptions = corOptions);
    k[index1] = colSums(ad1, na.rm = TRUE)-1;
    # If fewer than minNSamples contain gene expression information for a given
    # gene, then we set its connectivity to NA.
    NoSamplesAvailable = colSums(!is.na(datExpr[,index1]))
    k[index1][NoSamplesAvailable < minNSamples] = NA
    if (verbose>0) pind = updateProgInd(end/nGenes, pind);
    start = end + 1;
  }
  if (verbose > 0) printFlush("");
  k
} # end of function

# ==============================================================================
# The function pickHardThreshold can help one to estimate the cut-off value
# when using the signum (step) function.
# The first column lists the threshold ("cut"),
# the second column lists the corresponding p-value based on the Fisher transform
# of the correlation.
# The third column reports the resulting scale free topology fitting index R^2.
# The fourth column reports the slope of the fitting line; it should be negative for
# biologically meaningful networks.
# The fifth column reports the fitting index for the truncated exponential model.
# Usually we ignore it.
# The remaining columns list the mean, median and maximum resulting connectivity.
# To pick a hard threshold (cut) with the scale free topology criterion:
# aim for high scale free R^2 (column 3), high connectivity (col 6) and negative slope
# (around -1, col 4).
# The output is a list with 2 components. The first component lists a suggested cut-off
# while the second component contains the whole table.
# The removeFirst option removes the first point (k=0, P(k=0)) from the regression fit.
# nBreaks specifies how many intervals are used to estimate the frequency p(k), i.e. the number
# of points in the scale free topology plot.
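# The scale free topology fit described above can be illustrated outside of R.
# The following is a minimal Python sketch (NOT part of WGCNA; the function name
# scale_free_fit is made up for illustration) of the regression behind the
# SFT.R.sq and slope columns: bin the connectivities k, estimate the frequency
# p(k) per bin, and fit log10 p(k) ~ log10 k by least squares.

```python
import numpy as np

def scale_free_fit(k, n_breaks=10, remove_first=False):
    """Return (R^2, slope) of the linear fit of log10 p(k) on log10 k.

    Illustrative re-implementation of the idea behind scaleFreeFitIndex;
    details (bin placement, empty-bin handling) may differ from the R code.
    """
    k = np.asarray(k, dtype=float)
    # Discretize k into n_breaks equal-width bins over [min(k), max(k)].
    edges = np.linspace(k.min(), k.max(), n_breaks + 1)
    idx = np.clip(np.digitize(k, edges) - 1, 0, n_breaks - 1)
    # Mean connectivity and empirical frequency p(k) in each bin.
    dk = np.array([k[idx == b].mean() if np.any(idx == b) else np.nan
                   for b in range(n_breaks)])
    p_dk = np.array([(idx == b).mean() for b in range(n_breaks)])
    keep = ~np.isnan(dk) & (dk > 0)
    dk, p_dk = dk[keep], p_dk[keep]
    if remove_first:
        dk, p_dk = dk[1:], p_dk[1:]
    x = np.log10(dk)
    y = np.log10(p_dk + 1e-9)   # small offset guards against log10(0)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    r2 = 1.0 - resid.var() / y.var()
    return r2, slope
```

# For a network with (approximately) scale free topology the fit is good
# (R^2 close to 1) and the slope is negative, around -1; that is exactly the
# criterion used to pick the cut (or power) in the functions below.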
pickHardThreshold=function (data, dataIsExpr = TRUE, RsquaredCut = 0.85, cutVector = seq(0.1, 0.9, by = 0.05), moreNetworkConcepts=FALSE , removeFirst = FALSE, nBreaks = 10, corFnc = "cor", corOptions = "use = 'p'") { nGenes = dim(data)[[2]] colname1 = c("Cut", "p-value", "SFT.R.sq", "slope=", "truncated R^2", "mean(k)", "median(k)", "max(k)") if(moreNetworkConcepts) { colname1=c(colname1,"Density", "Centralization", "Heterogeneity") } if (!dataIsExpr) { checkAdjMat(data); if (any(diag(data)!=1)) diag(data) = 1; } else nSamples = dim(data)[[1]] datout = data.frame(matrix(NA, nrow = length(cutVector), ncol = length(colname1))) names(datout) = colname1 datout[, 1] = cutVector if (dataIsExpr) { for (i in 1:length(cutVector)) { cut1 = cutVector[i] datout[i, 2] = 2 * (1 - pt(sqrt(nSamples - 1) * cut1/sqrt(1 - cut1^2), nSamples - 1)) } } else datout[, 2] = NA; fun1 = function(x, dataIsExpr) { if (dataIsExpr) { corExpr = parse(text = paste(corFnc, "(x, data", prepComma(corOptions), ")")) corx = abs(eval(corExpr)) } else corx = x; out1 = rep(NA, length(cutVector)) for (j in c(1:length(cutVector))) { out1[j] = sum(corx > cutVector[j], na.rm = TRUE) } out1 } datk = t(apply(data, 2, fun1, dataIsExpr)) for (i in c(1:length(cutVector))) { khelp= datk[, i] - 1 SFT1=scaleFreeFitIndex(k=khelp,nBreaks=nBreaks,removeFirst=removeFirst) datout[i, 3] = SFT1$Rsquared.SFT datout[i, 4] = SFT1$slope.SFT datout[i, 5] = SFT1$truncatedExponentialAdjRsquared datout[i, 6] = mean(khelp,na.rm = TRUE) datout[i, 7] = median(khelp,na.rm = TRUE) datout[i, 8] = max(khelp,na.rm = TRUE) if(moreNetworkConcepts) { Density = sum(khelp)/(nGenes * (nGenes - 1)) datout[i, 9] =Density Centralization = nGenes*(max(khelp)-mean(khelp))/((nGenes-1)*(nGenes-2)) datout[i, 10] = Centralization Heterogeneity = sqrt(nGenes * sum(khelp^2)/sum(khelp)^2 - 1) datout[i, 11] = Heterogeneity } } datout = as.data.frame(lapply(datout, as.numeric)); print(signif(data.frame(datout),3)) ind1 = datout[, 3] > RsquaredCut indcut = NA 
indcut = if (sum(ind1) > 0) min(c(1:length(ind1))[ind1]) else indcut;
cutEstimate = cutVector[indcut][[1]]
list(cutEstimate = cutEstimate, fitIndices = data.frame(datout))
} # end of function pickHardThreshold

#==============================================================================================
#
# pickSoftThreshold
#
#===============================================================================================
# The function pickSoftThreshold allows one to estimate the power parameter when using
# a soft thresholding approach with the use of the power function AF(s)=s^Power
# The removeFirst option removes the first point (k=1, P(k=1)) from the regression fit.
# PL: a rewrite that splits the data into a few blocks.
# SH: more network concepts added.
# PL: re-written for parallel processing
# Alexey Sergushichev: speed up by pre-calculating correlation powers

pickSoftThreshold = function (
  data,
  dataIsExpr = TRUE,
  weights = NULL,
  RsquaredCut = 0.85,
  powerVector = c(seq(1, 10, by = 1), seq(12, 20, by = 2)),
  removeFirst = FALSE, nBreaks = 10, blockSize = NULL,
  corFnc = cor, corOptions = list(use = 'p'),
  networkType = "unsigned",
  moreNetworkConcepts = FALSE,
  gcInterval = NULL,
  verbose = 0, indent = 0)
{
  powerVector = sort(powerVector)
  intType = charmatch(networkType, .networkTypes)
  if (is.na(intType))
    stop(paste("Unrecognized 'networkType'. Recognized values are",
               paste(.networkTypes, collapse = ", ")))
  nGenes = ncol(data);
  if (nGenes<3)
  {
    stop("The input data contain fewer than 3 columns (nodes).",
         "\nThis would result in a trivial correlation network."
) } if (!dataIsExpr) { checkSimilarity(data); if (any(diag(data)!=1)) diag(data) = 1; } if (is.null(blockSize)) { blockSize = blockSize(nGenes, rectangularBlocks = TRUE, maxMemoryAllocation = 2^30); if (verbose > 0) printFlush(spaste("pickSoftThreshold: will use block size ", blockSize, ".")) } if (length(gcInterval)==0) gcInterval = 4*blockSize; colname1 = c("Power", "SFT.R.sq", "slope", "truncated R.sq", "mean(k)", "median(k)", "max(k)") if(moreNetworkConcepts) { colname1=c(colname1,"Density", "Centralization", "Heterogeneity") } datout = data.frame(matrix(666, nrow = length(powerVector), ncol = length(colname1))) names(datout) = colname1 datout[, 1] = powerVector spaces = indentSpaces(indent) if (verbose > 0) { cat(paste(spaces, "pickSoftThreshold: calculating connectivity for given powers...")) if (verbose == 1) pind = initProgInd() else cat("\n") } # if we're using one of WGNCA's own correlation functions, set the number of threads to 1. corFnc = match.fun(corFnc); corFormals = formals(corFnc); if ("nThreads" %in% names(corFormals)) corOptions$nThreads = 1; # Resulting connectivities datk = matrix(0, nrow = nGenes, ncol = length(powerVector)) # Number of threads. In this case I need this explicitly. 
nThreads = WGCNAnThreads(); nPowers = length(powerVector); # Main loop startG = 1 lastGC = 0; corOptions$x = data; if (!is.null(weights)) { if (!dataIsExpr) stop("Weights can only be used when 'data' represents expression data ('dataIsExpr' must be TRUE)."); if (!isTRUE(all.equal(dim(data), dim(weights)))) stop("When 'weights' are given, dimensions of 'data' and 'weights' must be the same."); corOptions$weights.x = weights; } while (startG <= nGenes) { endG = min (startG + blockSize - 1, nGenes) if (verbose > 1) printFlush(paste(spaces, " ..working on genes", startG, "through", endG, "of", nGenes)) nBlockGenes = endG - startG + 1; jobs = allocateJobs(nBlockGenes, nThreads); # This assumes that the non-zero length allocations # precede the zero-length ones actualThreads = which(sapply(jobs, length) > 0); datk[ c(startG:endG), ] = foreach(t = actualThreads, .combine = rbind) %dopar% { useGenes = c(startG:endG)[ jobs[[t]] ] nGenes1 = length(useGenes); if (dataIsExpr) { corOptions$y = data[ , useGenes]; if (!is.null(weights)) corOptions$weights.y = weights[ , useGenes]; corx = do.call(corFnc, corOptions); if (intType == 1) { corx = abs(corx) } else if (intType == 2) { corx = (1 + corx)/2 } else if (intType == 3) { corx[corx < 0] = 0 } if (sum(is.na(corx)) != 0) warning(paste("Some correlations are NA in block", startG, ":", endG, ".")); } else { corx = data[, useGenes]; } # Set the diagonal elements of corx to exactly 1. Possible small numeric errors can in extreme cases lead to # negative connectivities. 
ind = cbind(useGenes, 1:length(useGenes)); corx[ind] = 1; datk.local = matrix(NA, nGenes1, nPowers); corxPrev = matrix(1, nrow=nrow(corx), ncol=ncol(corx)) powerVector1 <- c(0, head(powerVector, -1)) powerSteps <- powerVector - powerVector1 uniquePowerSteps <- unique(powerSteps) corxPowers <- lapply(uniquePowerSteps, function(p) corx^p) names(corxPowers) <- uniquePowerSteps for (j in 1:nPowers) { corxCur <- corxPrev * corxPowers[[as.character(powerSteps[j])]] datk.local[, j] = colSums(corxCur, na.rm = TRUE) - 1 corxPrev <- corxCur }; datk.local } # End of %dopar% evaluation # Move to the next block of genes. startG = endG + 1 if ((gcInterval > 0) && (startG - lastGC > gcInterval)) { gc(); lastGC = startG; } if (verbose == 1) pind = updateProgInd(endG/nGenes, pind) } if (verbose == 1) printFlush(""); for (i in c(1:length(powerVector))) { khelp= datk[, i] if (any(khelp < 0)) browser(); SFT1=scaleFreeFitIndex(k=khelp,nBreaks=nBreaks,removeFirst=removeFirst) datout[i, 2] = SFT1$Rsquared.SFT datout[i, 3] = SFT1$slope.SFT datout[i, 4] = SFT1$truncatedExponentialAdjRsquared datout[i, 5] = mean(khelp,na.rm = TRUE) datout[i, 6] = median(khelp,na.rm = TRUE) datout[i, 7] = max(khelp,na.rm = TRUE) if(moreNetworkConcepts) { Density = sum(khelp)/(nGenes * (nGenes - 1)) datout[i, 8] =Density Centralization = nGenes*(max(khelp)-mean(khelp))/((nGenes-1)*(nGenes-2)) datout[i, 9] = Centralization Heterogeneity = sqrt(nGenes * sum(khelp^2)/sum(khelp)^2 - 1) datout[i, 10] = Heterogeneity } } print(signif(data.frame(datout),3)) ind1 = datout[, 2] > RsquaredCut indcut = NA indcut = if (sum(ind1) > 0) min(c(1:length(ind1))[ind1]) else indcut; powerEstimate = powerVector[indcut][[1]] gc(); list(powerEstimate = powerEstimate, fitIndices = data.frame(datout)) } # =================================================== # The function ScaleFreePlot1 creates a plot for checking scale free topology # when truncated1 = TRUE is specificed, it provides the R^2 measures for the following # degree 
distributions: a) scale free topology, b) log-log R^2 and c) truncated exponential R^2 # The function ScaleFreePlot1 creates a plot for checking scale free topology scaleFreePlot = function(connectivity, nBreaks=10, truncated = FALSE, removeFirst = FALSE, main = "", ...) { k = connectivity discretized.k = cut(k, nBreaks) dk = tapply(k, discretized.k, mean) p.dk = as.vector(tapply(k, discretized.k, length)/length(k)) breaks1 = seq(from = min(k), to = max(k), length = nBreaks + 1) hist1 = suppressWarnings(hist(k, breaks = breaks1, equidist = FALSE, plot = FALSE, right = TRUE, ...)) dk2 = hist1$mids dk = ifelse(is.na(dk), dk2, dk) dk = ifelse(dk == 0, dk2, dk) p.dk = ifelse(is.na(p.dk), 0, p.dk) log.dk = as.vector(log10(dk)) if (removeFirst) { p.dk = p.dk[-1] log.dk = log.dk[-1] } log.p.dk= as.numeric(log10(p.dk + 1e-09)) lm1 = lm(log.p.dk ~ log.dk) if (truncated==TRUE) { lm2 = lm(log.p.dk ~ log.dk + I(10^log.dk)) OUTPUT=data.frame(scaleFreeRsquared=round(summary(lm1)$adj.r.squared,2), slope=round(lm1$coefficients[[2]],2), TruncatedRsquared=round(summary(lm2)$adj.r.squared,2)) printFlush("the red line corresponds to the truncated exponential fit") title = paste(main, " scale free R^2=",as.character(round(summary(lm1)$adj.r.squared,2)), ", slope=", round(lm1$coefficients[[2]],2), ", trunc.R^2=",as.character(round(summary(lm2)$adj.r.squared,2))) } else { title = paste(main, " scale R^2=",as.character(round(summary(lm1)$adj.r.squared,2)), ", slope=", round(lm1$coefficients[[2]],2)) OUTPUT=data.frame(scaleFreeRsquared=round(summary(lm1)$adj.r.squared,2), slope=round(lm1$coefficients[[2]],2)) } suppressWarnings(plot(log.dk, log.p.dk, xlab="log10(k)", ylab="log10(p(k))", main = title, ... 
)) lines(log.dk,predict(lm1),col=1) if (truncated) lines(log.dk, predict(lm2), col = 2) OUTPUT } # end of function ############################################################################################## ############################################################################################## # B) Computing the topological overlap matrix ############################################################################################## ############################################################################################## # =================================================== #The function TOMdist computes a dissimilarity # based on the topological overlap matrix (Ravasz et al) # Input: an Adjacency matrix with entries in [0,1] # # ************* Removed: use 1-TOMsimilarity(adjMat). *********************** # #TOMdist=function(adjMat, useActualMax = FALSE) #{ #diag(adjMat)=0; #adjMat[is.na(adjMat)]=0; #maxh1=max(as.dist(adjMat) ); minh1=min(as.dist(adjMat) ); #if (maxh1>1 | minh1 < 0 ) #stop(paste("The adjacency matrix contains entries that are larger than 1 or", #"smaller than 0: max =",maxh1,", min =",minh1)) #if ( max(c(as.dist(abs(adjMat-t(adjMat)))))>10^(-12) ) #stop("Non-symmetric adjacency matrix. ") #adjMat= (adjMat+ t(adjMat) )/2 #connectivity=apply(adjMat,2,sum) #maxADJconst=1 #if (useActualMax==TRUE) maxADJconst=max(c(as.dist(adjMat ))) #Dhelp1=matrix(connectivity,ncol=length(connectivity),nrow=length(connectivity)) #denomTOM= pmin(as.dist(Dhelp1),as.dist(t(Dhelp1))) +as.dist(maxADJconst-adjMat); #gc();gc(); #numTOM=as.dist(adjMat %*% adjMat +adjMat); ##TOMmatrix=numTOM/denomTOM ## this turns the TOM matrix into a dissimilarity #out1=1-as.matrix(numTOM/denomTOM) #diag(out1)=1 ## setting the diagonal to 1 is unconventional (it should be 0) ## but it leads to nicer looking TOM plots... 
#out1 #} ##--------------------------------------------------------------------------- ## This is a somewhat modified TOMdist - most checks are left out as they are ## often not necessary. # # ******* This function is not necessary anymore. Left out. *********** # #TOMdistNoChecks = function(adjMat, useActualMax = FALSE) #{ #diag(adjMat)=0; #adjMat[is.na(adjMat)]=0; #connectivity=apply(adjMat,2,sum) #maxADJconst=1 #if (useActualMax==TRUE) maxADJconst=max(c(as.dist(adjMat ))) #Dhelp1 = matrix(connectivity,ncol=length(connectivity),nrow=length(connectivity)) #denomTOM = pmin(as.dist(Dhelp1),as.dist(t(Dhelp1))) + as.dist(maxADJconst-adjMat); #rm(Dhelp1); #numTOM=as.dist(adjMat %*% adjMat +adjMat); ##TOMmatrix=numTOM/denomTOM ## this turns the TOM matrix into a dissimilarity #out1=1-as.matrix(numTOM/denomTOM) #rm(numTOM); rm(denomTOM); #collectGarbage(); #diag(out1)=1 ## setting the diagonal to 1 is unconventional (it should be 0) ## but it leads to nicer looking TOM plots... #out1 #} #--------------------------------------------------------------------------- # exact equivalent of TOMdistNoChecks above, but returns similarity. # This function works with a generalized adjacency that can be signed. # If the adjacency is signed, returned TOM will be signed as well (use abs(TOM) to get the usual unsigned # topological overlap) # If checkDiag and na.rm are turned both off, the function saves a bit of memory overhead. # ************* this function is replaced by TOMsimilarity that calls compiled code. 
#TOMsimilarity = function(adjMat, useActualMax = FALSE, checkDiag = TRUE, na.rm = TRUE) #{ #if (checkDiag) diag(adjMat) = 1; #if (na.rm) adjMat[is.na(adjMat)]=0; #absAdj = abs(adjMat); #connectivity=apply(absAdj,2,sum)-1; #maxADJconst=1 #if (useActualMax==TRUE) maxADJconst=max(c(as.dist(absAdj ))) #Dhelp1 = matrix(connectivity,ncol=length(connectivity),nrow=length(connectivity)) #denomTOM = pmin(as.dist(Dhelp1),as.dist(t(Dhelp1))) + as.dist(maxADJconst-absAdj); #rm(Dhelp1); #numTOM=as.dist(adjMat %*% adjMat - adjMat); ##TOMmatrix=numTOM/denomTOM ## this turns the TOM matrix into a dissimilarity #out1=as.matrix(numTOM/denomTOM) #rm(numTOM); rm(denomTOM); #collectGarbage(); #diag(out1)=1 #out1 #} # =================================================== # This function computes a TOMk dissimilarity # which generalizes the topological overlap matrix (Ravasz et al) # Input: an Adjacency matrix with entries in [0,1] # WARNING: ONLY FOR UNWEIGHTED NETWORKS, i.e. the adjacency matrix contains binary entries... 
# This function is explained in Yip and Horvath (2005)
# http://www.genetics.ucla.edu/labs/horvath/GTOM/
GTOMdist = function(adjMat, degree = 1)
{
  maxh1=max(as.dist(adjMat) ); minh1=min(as.dist(adjMat) );
  if (degree!=round(abs(degree)))
    stop("'degree' must be a positive integer.");
  if (maxh1>1 | minh1 < 0 )
    stop(paste("Entries of the adjacency matrix are not between 0 and 1: max =",
               maxh1,", min =",minh1))
  if ( max(c(as.dist(abs(adjMat-t(adjMat)))))>0 )
    stop("Given adjacency matrix is not symmetric.")
  B <- adjMat;
  if (degree>=2) for (i in 2:degree)
  {
    diag(B) <- diag(B) + 1;
    B = B %*% adjMat; # this gives the number of paths of length at most 'degree' connecting each pair of nodes
  }
  B <- (B>0);   # this gives the degree-step reachability from one node to another
  diag(B) <- 0; # exclude each node being its own neighbor
  B <- B %*% B  # this gives the number of common degree-step neighbors that a pair of nodes shares
  Nk <- diag(B);
  B <- B +adjMat; # numerator
  diag(B) <- 1;
  denomTOM=outer(Nk,Nk,FUN="pmin")+1-adjMat;
  diag(denomTOM) <- 1;
  1 - B/denomTOM # this turns the TOM matrix into a dissimilarity
}

#=============================================================================================
#
# vectorTOM: calculate TOM of a vector (or a 'small' matrix) with expression
# data. If the number of columns in vect is small (or 1), the number of columns in
# datExpr can be large.
#
#============================================================================================

vectorTOM = function(datExpr, vect, subtract1 = FALSE, blockSize = 2000, corFnc = "cor",
                     corOptions = "use = 'p'", networkType = "unsigned", power = 6,
                     verbose = 1, indent = 0)
{
  spaces = indentSpaces(indent);
  intType = charmatch(networkType, .networkTypes)
  if (is.na(intType))
    stop(paste("Unrecognized 'networkType'.
Recognized values are", paste(.networkTypes, collapse = ", "))); if (is.null(dim(vect))) { vect = as.matrix(vect) vectIsVector = TRUE; } else vectIsVector = FALSE; if (nrow(vect)!=nrow(datExpr)) stop("Input error: numbers of samples in 'vect' and 'datExpr' must be the same."); if (ncol(vect)>blockSize) stop(paste("Input error: number of columns in 'vect' is too large. ", "If you are certain you want to try anyway, increase 'blockSize' to at least", "the number of columns in 'vect'.")); corEval = parse(text = paste(corFnc, "(datExpr, vect ", prepComma(corOptions), ")")); corVE = eval(corEval); if (intType==1) { corVE = abs(corVE); } else if (intType==2) { corVE = (1+corVE)/2; } else if (intType==3) { corVE[corVE < 0] = 0; } else stop("Unrecognized networkType argument. Recognized values are 'unsigned', 'signed', and 'signed hybrid'."); corVE = corVE^power; subtract1 = as.numeric(subtract1); nVect = ncol(vect); nGenes = ncol(datExpr); TOM = matrix(NA, nrow = nGenes, ncol = nVect); if (verbose > 0) { if (verbose > 1) cat(paste(spaces, "Calculating TOM of a set of vectors with genes")); pind = initProgInd(); } start = 1; denomArr = array(0, dim = c(2, blockSize, nVect)); while (start <= nGenes) { end = min(start + blockSize-1, nGenes); blockInd = c(start:end); corEval = parse(text = paste(corFnc, "(datExpr[, blockInd], datExpr ", prepComma(corOptions), ")")); corEE = eval(corEval); if (intType==1) { corEE = abs(corEE); } else if (intType==2) { corEE = (1+corEE)/2; } else if (intType==3) { corEE[corEE < 0] = 0; } corEE = corEE^power; num = corEE %*% corVE -subtract1 * corVE[blockInd, ] kV = apply(corVE, 2, sum, na.rm = TRUE) - subtract1 kE = apply(corEE, 1, sum, na.rm = TRUE) - 1; denomArr[1, 1:(end-start+1), ] = matrix(kV, nrow = end-start+1, ncol = nVect, byrow = TRUE); denomArr[2, 1:(end-start+1), ] = matrix(kE, nrow = end-start+1, ncol = nVect); denom = apply(denomArr[, 1:(end-start+1), ], c(2,3), min) + 1 - corVE[blockInd, ]; TOM[blockInd, ] = num/denom; if 
(verbose > 0) pind = updateProgInd(end/nGenes, pind); start = end + 1; gc() } if (verbose>0) printFlush(" "); TOM; } #============================================================================================= # # subsetTOM: calculate TOM of a subset of vectors with respect to a full set of vectors. # #============================================================================================ subsetTOM = function(datExpr, subset, corFnc = "cor", corOptions = "use = 'p'", weights = NULL, networkType = "unsigned", power = 6, verbose = 1, indent = 0) { spaces = indentSpaces(indent); if (!is.null(dim(subset))) stop("'subset' must be a dimensionless vector."); if (is.null(dim(datExpr))) stop("'datExpr' must be a matrix or data frame."); if (length(dim(datExpr))!=2) stop("'datExpr' must be two-dimensional."); nGenes = ncol(datExpr); if (is.logical(subset)) subset = c(1:nGenes)[subset]; nBlock = length(subset); if (any(!is.finite(subset))) stop("Entries of 'subset' must all be finite."); if (min(subset) < 1 | max(subset) > nGenes) stop(paste("Some entries of 'subset' are out of range.", "\nNote: 'subset' must contain indices of the subset for which the TOM is calculated.")); intType = charmatch(networkType, .networkTypes) if (is.na(intType)) stop(paste("Unrecognized 'networkType'. Recognized values are", paste(.networkTypes, collapse = ", "))); adj = adjacency(datExpr, weights = weights, selectCols = subset, power = power, type = networkType, corFnc = corFnc, corOptions = corOptions); adj[is.na(adj)] = 0; num = t(adj) %*% adj - adj[subset, ]; k = apply(adj, 2, sum); kMat = matrix(k, nBlock, nBlock); denom = pmin(kMat, t(kMat)) - adj[subset, ]; TOM = num/denom; diag(TOM) = 1; TOM; } #--------------------------------------------------------------------- # # adjacency # #--------------------------------------------------------------------- # Computes the adjacency from the expression data: takes cor, transforms it as appropriate and possibly # adds a sign if requested. 
No subselection on datExpr is performed.
# A slightly reworked version that assumes one wants the adjacency matrix of data with itself or a
# subset. The data are given only once, and an additional selection index for columns is given.
# Caution: no checking of selectCols validity is performed.
# The probability method is removed as it's not used.

adjacency = function(datExpr, selectCols=NULL, type = "unsigned", power = if (type=="distance") 1 else 6,
                     corFnc = "cor", corOptions = list(use = 'p'), weights = NULL,
                     distFnc = "dist", distOptions = "method = 'euclidean'",
                     weightArgNames = c("weights.x", "weights.y"))
{
  intType = charmatch(type, .adjacencyTypes)
  if (is.na(intType))
    stop(paste("Unrecognized 'type'. Recognized values are", paste(.adjacencyTypes, collapse = ", ")));
  corFnc.fnc = match.fun(corFnc);
  .checkAndScaleWeights(weights, datExpr, scaleByMax = FALSE);
  if (length(weights) > 0)
  {
    if (is.null(selectCols))
    {
      if (is.list(corOptions))
      {
        weightOpt = list(weights.x = weights);
        names(weightOpt) = weightArgNames[1];
      } else weightOpt = spaste(weightArgNames[1], " = weights");
    } else {
      if (is.list(corOptions))
      {
        weightOpt = list(weights.x = weights, weights.y = weights[, selectCols]);
        names(weightOpt) = weightArgNames[c(1,2)];
      } else weightOpt = spaste(weightArgNames[1], " = weights, ", weightArgNames[2], " = weights[, selectCols]");
    }
  } else {
    weightOpt = if (is.list(corOptions)) list() else ""
  }
  if (intType < 4)
  {
    if (is.null(selectCols))
    {
      if (is.list(corOptions))
      {
        cor_mat = do.call(corFnc.fnc, c(list(x = datExpr), weightOpt, corOptions))
      } else {
        corExpr = parse(text = paste(corFnc, "(datExpr ", prepComma(weightOpt), prepComma(corOptions), ")"));
        # cor_mat = cor(datExpr, use = "p");
        cor_mat = eval(corExpr);
      }
    } else {
      if (is.list(corOptions))
      {
        cor_mat = do.call(corFnc.fnc, c(list(x = datExpr, y = datExpr[, selectCols]), weightOpt, corOptions))
      } else {
        corExpr = parse(text = paste(corFnc, "(datExpr, datExpr[, selectCols] ", prepComma(weightOpt),
prepComma(corOptions), ")")); #cor_mat = cor(datExpr, datExpr[, selectCols], use="p"); cor_mat = eval(corExpr); } } } else { if (!is.null(selectCols)) stop("The argument 'selectCols' cannot be used for distance adjacency."); if (is.list(distOptions)) { d = do.call(distFnc, c(list(x = t(datExpr)), distOptions)); } else { corExpr = parse(text = paste(distFnc, "(t(datExpr) ", prepComma(distOptions), ")")); # cor_mat = cor(datExpr, use = "p"); d = eval(corExpr); } if (any(d<0)) warning("Function WGCNA::adjacency: Distance function returned (some) negative values."); cor_mat = 1-as.matrix( (d/max(d, na.rm = TRUE))^2 ); } if (intType==1) { cor_mat = abs(cor_mat); } else if (intType==2) { cor_mat = (1+cor_mat)/2; } else if (intType==3) { cor_mat[cor_mat < 0] = 0; } cor_mat^power; } # A presumably faster and less memory-intensive version, only for "unsigned" networks. unsignedAdjacency = function(datExpr, datExpr2 = NULL, power = 6, corFnc = "cor", corOptions = "use = 'p'") { corExpr = parse(text = paste(corFnc, "(datExpr, datExpr2 ", prepComma(corOptions), ")")); # abs(cor(datExpr, datExpr2, use="p"))^power; abs(eval(corExpr))^power; } ##################################################################################################### ##################################################################################################### # C) Defining gene modules using clustering procedures ##################################################################################################### ##################################################################################################### cutreeStatic = function(dendro, cutHeight = 0.9, minSize = 50) { normalizeLabels(moduleNumber(dendro, cutHeight, minSize)); } cutreeStaticColor = function(dendro, cutHeight = 0.9, minSize = 50) { labels2colors(normalizeLabels(moduleNumber(dendro, cutHeight, minSize))); } plotColorUnderTree = function( dendro, colors, rowLabels = NULL, rowWidths = NULL, rowText = NULL, rowTextAlignment = 
c("left", "center", "right"), rowTextIgnore = NULL, textPositions = NULL, addTextGuide = TRUE, cex.rowLabels = 1, cex.rowText = 0.8, separatorLine.col = "black", ...) { plotOrderedColors( dendro$order, colors = colors, rowLabels = rowLabels, rowWidths = rowWidths, rowText = rowText, rowTextAlignment = rowTextAlignment, rowTextIgnore = rowTextIgnore, textPositions = textPositions, addTextGuide = addTextGuide, cex.rowLabels = cex.rowLabels, cex.rowText = cex.rowText, startAt = 0, align = "center", separatorLine.col = separatorLine.col, ...); } plotOrderedColors = function( order, colors, main = "", rowLabels = NULL, rowWidths = NULL, rowText = NULL, rowTextAlignment = c("left", "center", "right"), rowTextIgnore = NULL, textPositions = NULL, addTextGuide = TRUE, cex.rowLabels = 1, cex.rowText = 0.8, startAt = 0, align = c("center", "edge"), separatorLine.col = "black", ...) { sAF = options("stringsAsFactors") options(stringsAsFactors = FALSE); on.exit(options(stringsAsFactors = sAF[[1]]), TRUE) barplot(height=1, col = "white", border=FALSE, space=0, axes=FALSE, main = main) align = match.arg(align); .plotOrderedColorSubplot( order = order, colors = colors, rowLabels = rowLabels, rowWidths = rowWidths, rowText = rowText, rowTextAlignment = rowTextAlignment, rowTextIgnore = rowTextIgnore, textPositions = textPositions, addTextGuide = addTextGuide, cex.rowLabels = cex.rowLabels, cex.rowText = cex.rowText, startAt = startAt, horizontal = TRUE, align = align, separatorLine.col = separatorLine.col, ...); } .transformCoordinates = function(x, y, angle, oldBox = c(0, 1, 0, 1), newBox = c(0, 1, 0, 1)) { xt0 = x * cos(angle) - y * sin(angle); yt0 = x * sin(angle) + y * cos(angle); trBox.x = oldBox[c(1, 2)] * cos(angle) - oldBox[c(3,4)] * sin(angle) trBox.y = oldBox[c(1, 2)] * sin(angle) + oldBox[c(3,4)] * cos(angle); # the shift calculation basically assumes rotations only in multiples of 90 degrees... 
scale.x = (newBox[2] - newBox[1])/(trBox.x[2] - trBox.x[1]) scale.y = (newBox[4] - newBox[3])/(trBox.y[2] - trBox.y[1]); list(x = (xt0 - trBox.x[1]) * scale.x + newBox[1], y = (yt0 - trBox.y[1]) * scale.y + newBox[3]); } .plotOrderedColorSubplot = function( order, colors, rowLabels = NULL, rowWidths = NULL, rowText = NULL, rowTextAlignment = c("left", "center", "right"), rowTextIgnore = NULL, textPositions = NULL, addTextGuide = TRUE, textGuide.col = "darkgrey", textGuide.lty = 3, cex.rowLabels = 1, cex.rowText = 0.8, startAt = 0, plotBox = NULL, # Defaults to user-coordinate limits rotated according to "horizontal" horizontal = TRUE, rowLabelsAngle = NULL, ## Defaults to the angle of the colors rowLabelsPosition = "left", align = c("center", "edge"), limExpansionFactor.x = if (align=="center") 0.04 else 0, limExpansionFactor.y = limExpansionFactor.x, separatorLine.col = "black", checkOrderLength = TRUE, ...) { if (length(colors)==0) return(NULL); align = match.arg(align); colors = as.matrix(colors); dimC = dim(colors) if (is.null(rowLabels) & (length(dimnames(colors)[[2]])==dimC[2])) rowLabels = colnames(colors); nColorRows = dimC[2]; if (checkOrderLength && (length(order) != dimC[1]) ) stop("Length of colors vector not compatible with number of objects in 'order'."); C = colors[order, , drop = FALSE]; nColumns = dimC[1]; # Old plot box. could in principle be anything but the current value allows me to also get scaling of inches to user # coordinates and the character width and height. 
plotBox.full = par("usr"); pin = par("pin"); inchToUsr.x = (plotBox.full[2] - plotBox.full[1])/pin[1]; inchToUsr.y = (plotBox.full[4] - plotBox.full[3])/pin[2]; charWidth = strwidth("W", units = "inches") * (if (horizontal) inchToUsr.x else inchToUsr.y); plotBox.contracted = plotBox.full; fullRange.x = plotBox.full[2] - plotBox.full[1]; fullRange.y = plotBox.full[4] - plotBox.full[3]; limContractionFactor.x = limExpansionFactor.x/(1+2*limExpansionFactor.x); plotBox.contracted[1] = plotBox.contracted[1] + limContractionFactor.x * fullRange.x; plotBox.contracted[2] = plotBox.contracted[2] - limContractionFactor.x * fullRange.x; range.x = plotBox.contracted[2] - plotBox.contracted[1]; limContractionFactor.y = limExpansionFactor.y/(1+2*limExpansionFactor.y); plotBox.contracted[3] = plotBox.contracted[3] + limContractionFactor.y * fullRange.y; plotBox.contracted[4] = plotBox.contracted[4] - limContractionFactor.y * fullRange.y; range.x = plotBox.contracted[2] - plotBox.contracted[1]; range.y = plotBox.contracted[4] - plotBox.contracted[3]; step = range.x/(dimC[1] - (align=="center") + 2*startAt); if (is.null(plotBox)) { plotBox = par("usr"); if (!horizontal) plotBox = plotBox[c(3,4,1,2)]; } if (!is.null(rowText)) { if (is.null(textPositions)) textPositions = c(1:nColorRows); if (is.logical(textPositions)) textPositions = c(1:nColorRows)[textPositions]; nTextRows = length(textPositions); } else nTextRows = 0; nRows = nColorRows + nTextRows; if (is.null(rowWidths)) { ystep = range.y/nRows; rowWidths = rep(ystep, nColorRows + nTextRows) } else { if (length(rowWidths)!=nRows) stop("plotOrderedColors: Length of 'rowWidths' must equal the total number of rows.") rowWidths = range.y * rowWidths/sum(rowWidths); } hasText = rep(0, nColorRows); hasText[textPositions] = 1; csPosition = cumsum(c(0, hasText[-nColorRows])); colorRows = c(1:nColorRows) + csPosition; rowType = rep(2, nRows); rowType[colorRows] = 1; physicalTextRow = c(1:nRows)[rowType==2]; yBottom = 
c(plotBox.contracted[3], plotBox.contracted[3] + cumsum(rowWidths[nRows:1])) ; ## Has one extra entry but that shouldn't hurt yTop = plotBox.contracted[3] + cumsum(rowWidths[nRows:1]) if (!is.null(rowText)) { rowTextAlignment = match.arg(rowTextAlignment); rowText = as.matrix(rowText) textPos = list(); textPosY = list(); textLevs = list(); for (tr in 1:nTextRows) { charHeight.in = max(strheight(rowText[, tr], units = "inches", cex = cex.rowText)); charHeight.scaled = charHeight.in * (if (horizontal) 1/pin[2] else dimC[1]/pin[1]) charHeight.scaled = charHeight.scaled * ( if (horizontal) range.y / abs(plotBox[4] - plotBox[3]) else range.x / abs(plotBox[2] - plotBox[1])); width1 = rowWidths[ physicalTextRow[tr] ]; nCharFit = floor(width1/charHeight.scaled/1.7/par("lheight")); if (nCharFit<1) stop("Rows are too narrow to fit text. Consider decreasing cex.rowText."); set = textPositions[tr]; #colLevs = sort(unique(colors[, set])); #textLevs[[tr]] = rowText[match(colLevs, colors[, set]), tr]; textLevs[[tr]] = sort(unique(rowText[, tr])); textLevs[[tr]] = textLevs[[tr]] [ !textLevs[[tr]] %in% rowTextIgnore ]; nLevs = length(textLevs[[tr]]); textPos[[tr]] = rep(0, nLevs); orderedText = rowText[order, tr] for (cl in 1:nLevs) { ind = orderedText == textLevs[[tr]][cl]; sind = ind[-1]; ind1 = ind[-length(ind)]; starts = c( if (ind[1]) 1 else NULL, which(!ind1 & sind)+1) ends = which(c(ind1 & !sind, ind[length(ind)] )); if (length(starts)==0) starts = 1; if (length(ends)==0) ends = length(ind); if (ends[1] < starts[1]) starts = c(1, starts); if (ends[length(ends)] < starts[length(starts)]) ends = c(ends, length(ind)); lengths = ends - starts; long = which.max(lengths); textPos[[tr]][cl] = switch(rowTextAlignment, left = starts[long], center = (starts[long] + ends[long])/2 + 0.5, right = ends[long]+1); } if (rowTextAlignment=="left") { yPos = seq(from = 1, to=nCharFit, by=1) / (nCharFit+1); } else { yPos = seq(from = nCharFit, to=1, by=-1) / (nCharFit+1); } textPosY[[tr]] = 
rep(yPos, ceiling(nLevs/nCharFit)+5)[1:nLevs][rank(textPos[[tr]])];
    }
  }
  jIndex = nRows;
  colorRectangles = list();
  if (is.null(rowLabels)) rowLabels = c(1:nColorRows);
  C[is.na(C)] = "grey"
  if (align=="edge") alignShift = 0 else alignShift = 0.5;
  angle.deg = if (horizontal) 0 else 90;
  angle = angle.deg * pi/180;
  if (is.null(rowLabelsAngle)) rowLabelsAngle = angle.deg;
  for (j in 1:nColorRows)
  {
    jj = jIndex;
    ind = 1:nColumns;
    xl = plotBox.contracted[1] + (ind- 1 - alignShift + startAt) * step;
    xr = xl + step;
    xl[xl < plotBox.full[1]] = plotBox.full[1];
    xr[xr > plotBox.full[2]] = plotBox.full[2];
    yb = rep(yBottom[jj], dimC[1]);
    yt = rep(yTop[jj], dimC[1]);
    trafo1 = .transformCoordinates(xl, yb, angle = angle, oldBox = plotBox.full, newBox = plotBox)
    trafo2 = .transformCoordinates(xr, yt, angle = angle, oldBox = plotBox.full, newBox = plotBox)
    if (is.null(dim(C)))
    {
      rect(trafo1$x, trafo1$y, trafo2$x, trafo2$y, col = as.character(C), border = as.character(C),
           xpd = TRUE);
    } else {
      rect(trafo1$x, trafo1$y, trafo2$x, trafo2$y, col = as.character(C[,j]), border = as.character(C[,j]),
           xpd = TRUE);
    }
    colorRectangles[[j]] = list(xl = trafo1$x, yb = trafo1$y, xr = trafo2$x, yt = trafo2$y);
    rowLabelPos = .transformCoordinates(
       x= if (rowLabelsPosition=="left") xl[1] else xr[nColumns],
       y= (yBottom[jj] + yTop[jj])/2,
       angle = angle, oldBox = plotBox.full, newBox = plotBox);
    xs1 = if (horizontal) charWidth/2 else 0;
    ys1 = if (horizontal) 0 else charWidth/2;
    if (rowLabelsPosition!="left") { xs1 = -xs1; ys1 = -ys1; }
    text(rowLabels[j], adj = c(if (rowLabelsPosition=="left") 1 else 0, 0.5),
         x= rowLabelPos$x-xs1, y= rowLabelPos$y-ys1,
         srt = rowLabelsAngle, cex=cex.rowLabels, xpd = TRUE);
    textRow = match(j, textPositions);
    if (is.finite(textRow))
    {
      jIndex = jIndex - 1;
      xt = (textPos[[textRow]] - 1 - alignShift + startAt) * step + plotBox.contracted[1];
      xt[xt < plotBox.full[1]] = plotBox.full[1];
      xt[xt > plotBox.full[2]] = plotBox.full[2];
      yt = yBottom[jIndex] + (yTop[jIndex]-yBottom[jIndex]) * (textPosY[[textRow]] + 1/(2*nCharFit+2));
nt = length(textLevs[[textRow]]);
      # Add guide lines
      trafo1 = .transformCoordinates(xt, yt, angle = angle, oldBox = plotBox.full, newBox = plotBox);
      trafo2 = .transformCoordinates(xt, yTop[jIndex], angle = angle, oldBox = plotBox.full, newBox = plotBox);
      if (addTextGuide)
        for (l in 1:nt)
          lines(c(trafo1$x[l], trafo2$x[l]), c(trafo1$y[l], trafo2$y[l]),
                col = textGuide.col, lty = textGuide.lty);
      textAdj = c(0, 0.5, 1)[ match(rowTextAlignment, c("left", "center", "right")) ];
      text(textLevs[[textRow]], x = trafo1$x, y = trafo1$y, adj = c(textAdj, 1), xpd = TRUE,
           cex = cex.rowText)
      # printFlush("ok");
    }
    jIndex = jIndex - 1;
  }
  if (!is.na(separatorLine.col))
  {
    trafo1 = .transformCoordinates(min(xl), yBottom, angle = angle, oldBox = plotBox.full, newBox = plotBox);
    trafo2 = .transformCoordinates(max(xr), yBottom, angle = angle, oldBox = plotBox.full, newBox = plotBox);
    for (j in 1:(nColorRows + nTextRows+1))
      lines(x=c(trafo1$x[j], trafo2$x[j]), y=c(trafo1$y[j], trafo2$y[j]), col = separatorLine.col);
  }
  invisible(list(colorRectangles = colorRectangles));
}

#========================================================================================================
# This function can be used to create an average linkage hierarchical
# clustering tree for the microarray samples.
# The rows of datExpr correspond to the samples and the columns to the genes.
# You can optionally input a quantitative microarray sample trait.

plotClusterTreeSamples=function(datExpr, y = NULL, traitLabels = NULL, yLabels = NULL,
            main = if (is.null(y)) "Sample dendrogram" else "Sample dendrogram and trait indicator",
            setLayout = TRUE, autoColorHeight = TRUE, colorHeight = 0.3,
            dendroLabels = NULL, addGuide = FALSE, guideAll = TRUE, guideCount = NULL,
            guideHang = 0.20, cex.traitLabels = 0.8,
            cex.dendroLabels = 0.9, marAll = c(1,5,3,1), saveMar = TRUE,
            abHeight = NULL, abCol = "red", ...)
{
  dendro = fastcluster::hclust( dist( datExpr ), method="average" )
  if (is.null(y) )
  {
    oldMar = par("mar");
    par(mar = marAll);
    plot(dendro, main=main, sub="", xlab = "", labels = dendroLabels, cex = cex.dendroLabels)
    if (saveMar) par(oldMar);
  } else {
    if (is.null(traitLabels)) traitLabels = names(as.data.frame(y));
    y = as.matrix(y);
    if (!is.numeric(y) )
    {
      warning(paste("The microarray sample trait y will be transformed to numeric."));
      dimy = dim(y)
      y=as.numeric(y)
      dim(y) = dimy;
    } # end of if (!is.numeric(y) )
    if ( nrow(as.matrix(datExpr)) != nrow(y) )
      stop(paste("Input Error: dim(as.matrix(datExpr))[[1]] != length(y)\n",
                 "  In plain English: The number of microarray sample arrays does not match the number",
                 "of samples for the trait.\n",
                 "  Hint: Make sure rows of 'datExpr' (and 'y', if it is a matrix) correspond to samples."))
    if (is.integer(y))
    {
      y = y-min(0, min(y, na.rm = TRUE)) + 1;
    } else {
      y = (y>=median(y, na.rm = TRUE)) + 1;
    }
    plotDendroAndColors(dendro, colors = y, groupLabels = traitLabels, rowText = yLabels,
                        setLayout = setLayout, autoColorHeight = autoColorHeight,
                        colorHeight = colorHeight, addGuide = addGuide, guideAll = guideAll,
                        guideCount = guideCount, guideHang = guideHang,
                        cex.colorLabels = cex.traitLabels, cex.dendroLabels = cex.dendroLabels,
                        marAll = marAll, saveMar = saveMar, abHeight = abHeight, abCol = abCol,
                        main = main, ...);
  }
} # end of function plotClusterTreeSamples

# ===================================================
# The function TOMplot creates a TOM plot
# Inputs: distance measure, hierarchical (hclust) object, color label=colors

TOMplot = function(dissim, dendro, Colors=NULL, ColorsLeft = Colors, terrainColors=FALSE,
                   setLayout = TRUE, ...)
{
  if ( is.null(Colors) ) Colors=rep("white", dim(as.matrix(dissim))[[1]] )
  if ( is.null(ColorsLeft)) ColorsLeft = Colors;
  nNodes=length(Colors)
  if (nNodes<2)
  {
    warning("You have fewer than 2 genes in TOMplot.
No plot will be produced") } else { if (nNodes != length(ColorsLeft)) stop("ERROR: number of (top) color labels does not equal number of left color labels") if (nNodes != dim(dissim)[[1]] ) stop(paste("ERROR: number of color labels does not equal number of nodes in dissim.\n", " nNodes != dim(dissim)[[1]] ")) labeltree = as.character(Colors) labelrow = as.character(ColorsLeft) #labelrow[dendro$order[length(labeltree):1]]=labelrow[dendro$order] options(expressions = 10000) dendro$height = (dendro$height - min(dendro$height))/(1.15 * (max(dendro$height)-min(dendro$height))) if (terrainColors) { .heatmap(as.matrix(dissim), Rowv=dendro, Colv= dendro, scale="none", revC = TRUE, ColSideColors=as.character(labeltree), RowSideColors=as.character(labelrow), labRow=FALSE, labCol=FALSE, col = terrain.colors(100), setLayout = setLayout, ...) } else { .heatmap(as.matrix(dissim), Rowv=dendro, Colv= dendro, scale="none",revC = TRUE, ColSideColors=as.character(labeltree), RowSideColors=as.character(labelrow), labRow=FALSE, labCol=FALSE, setLayout = setLayout, ...) } #end of if } } #end of function plotNetworkHeatmap = function(datExpr, plotGenes, weights = NULL, useTOM = TRUE, power = 6 , networkType = "unsigned", main = "Heatmap of the network") { match1=match( plotGenes ,colnames(datExpr) ) match1=match1[ !is.na(match1)] nGenes=length(match1) if ( sum( !is.na(match1) ) != length(plotGenes) ) { printFlush(paste("Warning: Not all gene names were recognized.", "Only the following genes were recognized. 
")); printFlush(paste(" ", colnames(datExpr)[match1], collapse = ", " )) } if (nGenes< 3 ) { warning(paste("Since you have fewer than 3 genes, the network will not be visualized.\n", " Hint: please input more genes.")); plot(1,1) } else { datErest=datExpr[, match1 ] if (!is.null(weights)) weights = weights[, match1]; ADJ1 = adjacency(datErest, weights = weights, power = power, type = networkType) if (useTOM) { diss1= 1-TOMsimilarity(ADJ1) } else { diss1 = 1-ADJ1; } diag(diss1)=NA hier1=fastcluster::hclust(as.dist(diss1), method="average" ) colors1=rep("white", nGenes) labeltree = names(data.frame(datErest)) labelrow = names(data.frame(datErest)) labelrow[hier1$order[length(labeltree):1]]=labelrow[hier1$order] options(expressions = 10000) heatmap(as.matrix(diss1),Rowv=as.dendrogram(hier1),Colv= as.dendrogram(hier1), scale="none", revC = TRUE, labRow= labeltree, labCol= labeltree,main=main) } # end of if (nGenes> 2 ) } # end of function ##################################################################################################### ##################################################################################################### # E) Relating a measure of gene significance to the modules ##################################################################################################### ##################################################################################################### # =================================================== # The function ModuleEnrichment1 creates a bar plot that shows whether modules are enriched with # significant genes. # More specifically, it reports the mean gene significance for each module. # The gene significance can be a binary variable or a quantitative variable. # It also plots the 95% confidence interval of the mean (CI=mean +/- 1.96* standard error). # It also reports a Kruskal Wallis P-value. 
plotModuleSignificance = function(geneSignificance, colors, boxplot = FALSE,
                                  main = "Gene significance across modules,",
                                  ylab = "Gene Significance", ...)
{
  if (length(geneSignificance) != length(colors) )
    stop("Error: 'geneSignificance' and 'colors' do not have the same lengths")
  no.colors=length(names(table(colors) ))
  if (no.colors==1) pp=NA
  if (no.colors>1)
  {
    pp=try(kruskal.test(geneSignificance,factor(colors))$p.value)
    if (inherits(pp, "try-error")) pp=NA
  }
  title = paste(main," p-value=", signif(pp,2), sep = "")
  if (boxplot != TRUE)
  {
    means1=as.vector(tapply(geneSignificance,colors,mean, na.rm = TRUE));
    se1= as.vector(tapply(geneSignificance,colors,stdErr))
    # par(mfrow=c(1,1))
    barplot(means1, names.arg=names(table(colors) ),col= names(table(colors) ) ,ylab=ylab,
            main = title, ...)
    addErrorBars(as.vector(means1), as.vector(1.96*se1), two.side=TRUE)
  } else {
    boxplot(split(geneSignificance,colors),notch = TRUE,varwidth = TRUE,
            col= names(table(colors) ),ylab=ylab, main = title, ...)
  }
} # end of function

#####################################################################################################
#####################################################################################################
# F) Carrying out a within module analysis (computing intramodular connectivity etc)
#####################################################################################################
#####################################################################################################

# ===================================================
# The function intramodularConnectivity computes for each gene
# a) the total number of connections,
# b) the number of connections with genes within its module,
# c) the number of connections with genes outside its module
# When scaleByMax=TRUE, the within module connectivities are scaled to 1, i.e.
the max(K.Within)=1 for each module intramodularConnectivity = function(adjMat, colors, scaleByMax = FALSE) { if (nrow(adjMat)!=ncol(adjMat)) stop("'adjMat' is not a square matrix."); if (nrow(adjMat)!=length(colors)) stop("Dimensions of 'adjMat' and length of 'colors' differ."); nNodes=length(colors) colorLevels=levels(factor(colors)) nLevels=length(colorLevels) kWithin=rep(-666,nNodes ) diag(adjMat)=0 for (i in c(1:nLevels) ) { rest1=colors==colorLevels[i]; if (sum(rest1) <3 ) { kWithin[rest1]=0 } else { kWithin[rest1]=apply(adjMat[rest1,rest1], 2, sum, na.rm = TRUE) if (scaleByMax) kWithin[rest1]=kWithin[rest1]/max(kWithin[rest1]) } } kTotal= apply(adjMat, 2, sum, na.rm = TRUE) kOut=kTotal-kWithin if (scaleByMax) kOut=rep(NA, nNodes); kDiff=kWithin-kOut data.frame(kTotal,kWithin,kOut,kDiff) } intramodularConnectivity.fromExpr = function(datExpr, colors, corFnc = "cor", corOptions = "use = 'p'", weights = NULL, distFnc = "dist", distOptions = "method = 'euclidean'", networkType = "unsigned", power = if (networkType=="distance") 1 else 6, scaleByMax = FALSE, ignoreColors = if (is.numeric(colors)) 0 else "grey", getWholeNetworkConnectivity = TRUE) { if (ncol(datExpr) !=length(colors)) stop("Number of columns (genes) in 'datExpr' and length of 'colors' differ."); nNodes=length(colors) colorLevels=levels(factor(colors)) colorLevels = colorLevels[!colorLevels %in% ignoreColors]; nLevels=length(colorLevels) kWithin=rep(NA,nNodes ) for (i in c(1:nLevels) ) { rest1=colors==colorLevels[i]; weights1 = if (is.null(weights)) weights else weights[, rest1]; if (sum(rest1) <3 ) { kWithin[rest1]=0 } else { adjMat = adjacency(datExpr[, rest1], weights = weights1, type = networkType, power = power, corFnc = corFnc, corOptions = corOptions, distFnc = distFnc, distOptions = distOptions); kWithin[rest1]=colSums(adjMat, na.rm = TRUE)-1; if (scaleByMax) kWithin[rest1]=kWithin[rest1]/max(kWithin[rest1], na.rm = TRUE) } } if (getWholeNetworkConnectivity) { kTotal= 
softConnectivity(datExpr, weights = weights, corFnc = corFnc, corOptions = corOptions,
                     type = networkType, power = power);
    kOut=kTotal-kWithin
    if (scaleByMax) kOut=rep(NA, nNodes);
    kDiff=kWithin-kOut
    data.frame(kTotal,kWithin,kOut,kDiff)
  } else kWithin;
}

nPresent = function(x)
{
  sum(!is.na(x))
}

checkAdjMat = function(adjMat, min = 0, max = 1)
{
  dim = dim(adjMat)
  if (is.null(dim) || length(dim)!=2 )
    stop("adjacency is not two-dimensional");
  if (!is.numeric(adjMat))
    stop("adjacency is not numeric");
  if (dim[1]!=dim[2])
    stop("adjacency is not square");
  if (max(abs(adjMat - t(adjMat)), na.rm = TRUE) > 1e-12)
    stop("adjacency is not symmetric");
  if (min(adjMat, na.rm = TRUE) < min || max(adjMat, na.rm = TRUE) > max)
    stop("some entries are not between ", min, " and ", max)
}

#####################################################################################################
#####################################################################################################
# G) Miscellaneous other functions, e.g. for computing the cluster coefficient.
#####################################################################################################
#####################################################################################################

# The function signedKME computes the module eigengene based connectivity.
# Input: datExpr = a possibly very large gene expression data set where the rows
# correspond to samples and the columns represent genes.
# datME = data frame of module eigengenes (columns correspond to module eigengenes or MEs).
# A module eigengene based connectivity (KME) value will be computed if the gene has
# a non-missing expression value in at least minNSamples arrays.
# Output: a data frame whose columns are the KME values
# corresponding to different modules.
# By splitting the expression data into blocks, the function can handle expression
# data sets with tens of thousands of genes.
# If there are many eigengenes (say hundreds), consider decreasing the block size.
signedKME = function(datExpr, datME, exprWeights = NULL, MEWeights = NULL,
                     outputColumnName = "kME", corFnc = "cor", corOptions = "use = 'p'")
{
  if (dim(datME)[[1]] != dim(datExpr)[[1]])
    stop("Number of samples (rows) in 'datExpr' and 'datME' must be the same.")
  datExpr = as.matrix(datExpr)
  datME = as.matrix(datME)
  if (is.null(colnames(datExpr))) colnames(datExpr) = spaste("Gene.", 1:ncol(datExpr));
  if (any(duplicated(colnames(datExpr)))) colnames(datExpr) = make.unique(colnames(datExpr));
  if (!is.null(exprWeights))
    exprWeights = .checkAndScaleWeights(exprWeights, datExpr, scaleByMax = FALSE, verbose = 0);
  # Bug fix: check and scale the ME weights (previously exprWeights was passed here by mistake).
  if (!is.null(MEWeights))
    MEWeights = .checkAndScaleWeights(MEWeights, datME, scaleByMax = FALSE, verbose = 0);
  output = list()
  varianceZeroIndicatordatExpr = colVars(datExpr, na.rm = TRUE)==0
  varianceZeroIndicatordatME = colVars(datME, na.rm = TRUE)==0
  if (sum(varianceZeroIndicatordatExpr, na.rm = TRUE)>0)
    warning("Some genes are constant. Hint: consider removing constant columns from datExpr.")
  if (sum(varianceZeroIndicatordatME, na.rm = TRUE)>0)
    warning(paste("Some module eigengenes are constant, which is suspicious.\n",
                  " Hint: consider removing constant columns from datME."
)) no.presentdatExpr=colSums(!is.na(datExpr)) if (min(no.presentdatExpr)<..minNSamples ) warning(paste("Some gene expressions have fewer than 4 observations.\n", " Hint: consider removing genes with too many missing values or collect more arrays.")) if (!is.null(MEWeights)) corOptions = spaste("weights.y = MEWeights, ", corOptions); if (!is.null(exprWeights)) corOptions = spaste("weights.x = exprWeights, ", corOptions); #output=data.frame(cor(datExpr, datME, use="p")) corExpr = parse(text = paste("data.frame(", corFnc, "(datExpr, datME, ", prepComma(corOptions), "))" )); output = eval(corExpr); output[no.presentdatExpr<..minNSamples, ]=NA names(output)=paste(outputColumnName, substring(colnames(datME), first=3), sep="") rownames(output) = make.unique(colnames(datExpr)); output } # end of function signedKME # =================================================== # The function clusterCoef computes the cluster coefficients. # Input is an adjacency matrix clusterCoef=function(adjMat) { checkAdjMat(adjMat); diag(adjMat)=0 nNodes=dim(adjMat)[[1]] computeLinksInNeighbors <- function(x, imatrix){x %*% imatrix %*% x} nolinksNeighbors <- c(rep(-666,nNodes)) total.edge <- c(rep(-666,nNodes)) maxh1=max(as.dist(adjMat) ); minh1=min(as.dist(adjMat) ); if (maxh1>1 | minh1 < 0 ) stop(paste("The adjacency matrix contains entries that are larger than 1 or smaller than 0: max =", maxh1,", min =",minh1)) nolinksNeighbors <- apply(adjMat, 1, computeLinksInNeighbors, imatrix=adjMat) plainsum <- apply(adjMat, 1, sum) squaresum <- apply(adjMat^2, 1, sum) total.edge = plainsum^2 - squaresum CChelp=rep(-666, nNodes) CChelp=ifelse(total.edge==0,0, nolinksNeighbors/total.edge) CChelp } # end of function # =================================================== # The function addErrorBars is used to create error bars in a barplot # usage: addErrorBars(as.vector(means), as.vector(stderrs), two.side=FALSE) addErrorBars<-function(means, errors, two.side=FALSE) { if(!is.numeric(means)) { stop("All 
arguments must be numeric")} if(is.null(dim(means)) || length(dim(means))==1){ xval<-(cumsum(c(0.7,rep(1.2,length(means)-1)))) }else{ if (length(dim(means))==2){ xval<-cumsum(array(c(1,rep(0,dim(means)[1]-1)), dim=c(1,length(means))))+0:(length(means)-1)+.5 }else{ stop("First argument must either be a vector or a matrix") } } MW<-0.25*(max(xval)/length(xval)) ERR1<-means+errors ERR2<-means-errors for(i in 1:length(means)){ segments(xval[i],means[i],xval[i],ERR1[i]) segments(xval[i]-MW,ERR1[i],xval[i]+MW,ERR1[i]) if(two.side){ segments(xval[i],means[i],xval[i],ERR2[i]) segments(xval[i]-MW,ERR2[i],xval[i]+MW,ERR2[i]) } } } # =================================================== # this function computes the standard error stdErr <- function(x){ sqrt( var(x,na.rm = TRUE)/sum(!is.na(x)) ) } # =================================================== # The following two functions are for displaying the pair-wise correlation in a panel when using the command "pairs()" # Typically, we use "pairs(DATA, upper.panel=panel.smooth, lower.panel=.panel.cor, diag.panel=panel.hist)" to # put the correlation coefficients on the lower panel. .panel.hist <- function(x, ...){ usr <- par("usr"); on.exit(par(usr)) par(usr = c(usr[1:2], 0, 1.5) ) h <- hist(x, plot = FALSE) breaks <- h$breaks; nB <- length(breaks) y <- h$counts; y <- y/max(y) rect(breaks[-nB], 0, breaks[-1], y, col="cyan", ...) } # =================================================== # This function is used in "pairs()" function. The problem of the original panel.cor is that # when the correlation coefficient is very small, the lower panel will have a large font # instead of a mini-font in a saved .ps file. This new function uses a format for corr=0.2 # when corr<0.2, but it still reports the original value of corr, with a minimum format. 
.panel.cor = function(x, y, digits=2, prefix="", cex.cor)
{
  usr <- par("usr"); on.exit(par(usr))
  par(usr = c(0, 1, 0, 1))
  r <- abs(cor(x, y))
  txt <- format(c(r, 0.123456789), digits=digits)[1]
  txt <- paste(prefix, txt, sep="")
  txt1 = txt
  r1 = r
  if (r<0.2)
  {
    r1 = 0.2
    txt1 <- format(c(r1, 0.123456789), digits=digits)[1]
    txt1 <- paste(prefix, txt1, sep="")
  }
  # Bug fix: define 'cex' in both branches; previously 'cex' was left undefined
  # when 'cex.cor' was supplied, causing an error on the next line.
  cex <- if (missing(cex.cor)) 0.8/strwidth(txt1) else cex.cor
  cex = cex * r1
  r <- round(r, digits)
  txt <- format(c(r, 0.123456789), digits=digits)[1]
  txt <- paste(prefix, txt, sep="")
  text(0.5, 0.5, txt, cex=cex)
}

# ===================================================
# This function collects garbage
# collect_garbage=function(){collectGarbage()}

#---------------------------------------------------------------------------------------------------------
# This function plots a barplot with all colors given. If Colors are not given, GlobalStandardColors are
# used, i.e. if you want to see the GlobalStandardColors, just call this function without parameters.

displayColors = function(colors = NULL)
{
  if (is.null(colors)) colors = standardColors();
  barplot(rep(1, length(colors)), col = colors, border = colors);
}

###############################################################################
# I) Functions for merging modules based on a high correlation of the module eigengenes
###############################################################################

#---------------------------------------------------------------------------------------------
#
# dynamicMergeCut
#
#---------------------------------------------------------------------------------------------

dynamicMergeCut = function(n, mergeCor=.9, Zquantile=2.35)
{
  if (mergeCor>1 | mergeCor<0) stop("'mergeCor' must be between 0 and 1.")
  if (mergeCor==1)
  {
    printFlush("dynamicMergeCut: given mergeCor=1 will be set to .999.");
    mergeCor=.999
  }
  if (n<4)
  {
    printFlush(paste("Warning in function dynamicMergeCut: too few observations for the dynamic",
                     "assignment of the merge threshold.\n
Will set the threshold to .35")); mergethreshold=.35 } else { # Fisher transform of the true merge correlation FishermergeCor=.5*log((1+mergeCor)/(1-mergeCor)) E=exp(2*( FishermergeCor -Zquantile/sqrt(n-3))) LowerBoundCIcor=(E-1)/(E+1) mergethreshold=1- LowerBoundCIcor } if (mergethreshold>1) 1 else mergethreshold }# end of function dynamicMergeCut #====================================================================================================== # # print.flush # # ===================================================================================================== #print.flush = function(...) #{ # printFlush(...); #} ############################################################################################## # I) GENERAL STATISTICAL FUNCTIONS ############################################################################################## verboseScatterplot = function(x, y, sample = NULL, corFnc = "cor", corOptions = "use = 'p'", main ="", xlab = NA, ylab = NA, cex=1, cex.axis = 1.5, cex.lab = 1.5, cex.main = 1.5, abline = FALSE, abline.color = 1, abline.lty = 1, corLabel = corFnc, displayAsZero = 1e-5, col = 1, bg = 0, pch = 1, lmFnc = lm, plotPriority = NULL, showPValue = TRUE, ...) 
{ if ( is.na(xlab) ) xlab= as.character(match.call(expand.dots = FALSE)$x) if ( is.na(ylab) ) ylab= as.character(match.call(expand.dots = FALSE)$y) x= as.numeric(as.character(x)) y= as.numeric(as.character(y)) corExpr = parse(text = paste(corFnc, "(x, y ", prepComma(corOptions), ")")); #cor=signif(cor(x,y,use="p",method=correlationmethod),2) cor=signif(eval(corExpr),2) if (is.finite(cor)) if (abs(cor) < displayAsZero) cor = 0; corp = signif(corPvalueStudent(cor, sum(is.finite(x) & is.finite(y))), 2); #corpExpr = parse(text = paste("cor.test(x, y, ", corOptions, ")")); #corp=signif(cor.test(x,y,use="p",method=correlationmethod)$p.value,2) #corp=signif(eval(corpExpr)$p.value,2) if (is.finite(corp) && corp<10^(-200) ) corp="<1e-200" else corp = paste("=", corp, sep=""); if (!is.na(corLabel)) { mainX = paste(main, " ", corLabel, "=", cor, if(is.finite(cor) && showPValue) spaste(", p",corp) else "", sep=""); } else mainX = main; if (length(col) 2) { p1 = signif(kruskal.test(x ~ g.factor)$p.value, 2) if (AnovaTest) p1 = signif(anova(lm(x ~ g.factor))$Pr[[1]], 2) } else { p1 = tryCatch(signif(fisher.test(x, g, alternative = "two.sided")$p.value, 2), error = function(e) { NA }) } if (AnovaTest | KruskalTest) main = paste(main, "p =", p1) maxSE = max(as.vector(SE), na.rm = TRUE); if (is.null(ylim)) { if (addScatterplot && adjustYLim) { ylim = range(x, na.rm = TRUE); d = ylim[2] -ylim[1]; ylim = ylim + c(-d/50, d/50); } else { ylim = range(Means1,na.rm = TRUE) + c(-maxSE, maxSE) * numberStandardErrors * (numberStandardErrors>0); if (ylim[1] > 0) ylim[1] = 0; if (ylim[2] <0) ylim[2] = 0; } } ret = barplot(Means1, main = main, col = color, xlab = xlab, ylab = ylab, cex = cex, cex.axis = cex.axis, cex.lab = cex.lab, cex.main = cex.main, horiz = horiz, ylim = ylim, ...) if (addCellCounts) { cellCountsF = function(x) { sum(!is.na(x)) } cellCounts=tapply(x, g.factor, cellCountsF) mtext(text=cellCounts,side=if(horiz) 2 else 1,outer=FALSE,at=ret, col="darkgrey",las=2,cex=.8,...) 
} # end of if (addCellCounts) abline(h = 0) if (numberStandardErrors > 0) { err.bp(as.vector(Means1), as.vector(SE), two.sided = two.sided, numberStandardErrors = numberStandardErrors, horiz = horiz) } if (addScatterplot) { if (exists(".Random.seed")) { savedSeed = .Random.seed; set.seed(randomSeed); on.exit(.Random.seed <<- savedSeed); } x.list = tapply(x, g, identity); nPerGroup = sapply(x.list, length); set.seed(randomSeed) # so we can make the identical plot again n = length(g); pch = unlist(tapply(.extend(pch, n), g, identity)); pt.col = unlist(tapply(.extend(pt.col, n), g, identity)); pt.bg = unlist(tapply(.extend(pt.bg, n), g, identity)); pt.cex = unlist(tapply(.extend(pt.cex, n), g, identity)); x = jitter(rep(ret, nPerGroup), jitter); y = unlist(x.list); points(x, y, pch=pch, col=pt.col, bg = pt.bg, cex = pt.cex); if (!is.null(pointLabels)) { labels.lst = tapply(pointLabels, g, identity); labelPoints(x, y, unlist(labels.lst), offs = label.offs, cex = label.cex); } } attr(ret, "height") = as.vector(Means1) attr(ret, "stdErr") = as.vector(SE) invisible(ret) } #============================================================================================= # # Correlation p-value for multiple correlation values # #============================================================================================= corPvalueFisher = function(cor, nSamples, twoSided = TRUE) { if (sum(abs(cor)>1, na.rm = TRUE)>0) stop("Some entries in 'cor' are out of normal range -1 to 1."); if (twoSided) { z = abs(0.5 * log((1+cor)/(1-cor)) * sqrt(nSamples-3)); 2 * pnorm(-z); } else { # return a small p-value for positive correlations z = -0.5 * log((1+cor)/(1-cor)) * sqrt(nSamples-3); pnorm(-z); } } # this function compute an asymptotic p-value for a given correlation (r) and sample size (n) # Needs a new name before we commit it to the package. 
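# A quick sanity-check sketch for the two correlation p-value helpers defined
# around here. The numeric values are illustrative assumptions, not package
# output; the block is wrapped in if (FALSE) so it never runs at load time.
if (FALSE)
{
  r = 0.5; n = 20;
  # Student-t based: T = r*sqrt(n-2)/sqrt(1-r^2) with n-2 degrees of freedom, two-sided.
  pStudent = corPvalueStudent(r, n);
  # Fisher-z based: z = atanh(r)*sqrt(n-3), two-sided by default.
  pFisher = corPvalueFisher(r, n);
  # Both p-values should be small (roughly 0.02-0.03 here) and close to
  # cor.test(x, y)$p.value for bivariate normal data with this r and n.
}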
corPvalueStudent = function(cor, nSamples)
{
  T = sqrt(nSamples-2) * cor/sqrt(1-cor^2)
  2*pt(abs(T), nSamples-2, lower.tail = FALSE)
}

#########################################################################################

propVarExplained = function(datExpr, colors, MEs, corFnc = "cor", corOptions = "use = 'p'")
{
  fc = as.factor(colors);
  mods = levels(fc);
  nMods = nlevels(fc);
  nGenes = ncol(datExpr);
  if (nMods!=ncol(MEs))
    stop(paste("Input error: number of distinct 'colors' differs from\n",
               " the number of module eigengenes given in 'MEs'."));
  if (ncol(datExpr)!=length(colors))
    stop("Input error: number of probes (columns) in 'datExpr' differs from the length of given 'colors'.");
  if (nrow(datExpr)!=nrow(MEs))
    stop("Input error: number of observations (rows) in 'datExpr' and 'MEs' differ.");
  PVE = rep(0, nMods);
  col2MEs = match(mods, substring(names(MEs), 3));
  if (sum(is.na(col2MEs))>0)
    stop("Input error: not all given colors could be matched to names of module eigengenes.");
  for (mod in 1:nMods)
  {
    modGenes = c(1:nGenes)[as.character(colors)==mods[mod]];
    corExpr = parse(text = paste(corFnc, "(datExpr[, modGenes], MEs[, col2MEs[mod]]",
                                 prepComma(corOptions), ")"));
    PVE[mod] = mean(as.vector(eval(corExpr)^2));
  }
  names(PVE) = paste("PVE", mods, sep = "");
  PVE
}

#===================================================================================
#
# addGrid
#
#===================================================================================
# This function adds a horizontal grid to a plot

addGrid = function(linesPerTick = NULL, linesPerTick.horiz = linesPerTick,
                   linesPerTick.vert = linesPerTick, horiz = TRUE, vert = FALSE,
                   col = "grey30", lty = 3)
{
  box = par("usr");
  if (horiz)
  {
    ticks = par("yaxp");
    nTicks = ticks[3];
    if (is.null(linesPerTick.horiz))
    {
      if (nTicks < 6) linesPerTick.horiz = 5 else linesPerTick.horiz = 2;
    }
    spacing = (ticks[2]-ticks[1])/(linesPerTick.horiz*nTicks);
    first = ceiling((box[3] - ticks[1])/spacing);
    last = floor((box[4] - ticks[1])/spacing);
#print(paste("addGrid: first=", first, ", last =", last, "box = ", paste(signif(box,2), collapse = ", "), #"ticks = ", paste(signif(ticks, 2), collapse = ", "), "spacing =", spacing )); for (k in first:last) lines(x = box[c(1,2)], y = rep(ticks[1] + spacing * k, 2), col = col, lty = lty); } if (vert) { ticks = par("xaxp"); nTicks = ticks[3]; if (is.null(linesPerTick.vert)) { if (nTicks < 6) linesPerTick.vert = 5 else linesPerTick.vert = 2; } spacing = (ticks[2]-ticks[1])/(linesPerTick.vert*ticks[3]); first = ceiling((box[1] - ticks[1])/spacing); last = floor((box[2] - ticks[1])/spacing); #print(paste("addGrid: first=", first, ", last =", last, "box = ", paste(signif(box,2), collapse = ", "), # "ticks = ", paste(signif(ticks, 2), collapse = ", "), "spacing =", spacing )); for (l in first:last) lines(x = rep(ticks[1] + spacing * l, 2), y = box[c(3,4)], col = col, lty = lty); } } #----------------------------------------------------------------------------------------------- # # Add vertical "guide" lines to a dendrogram to facilitate identification of clusters with color bars # #----------------------------------------------------------------------------------------------- addGuideLines = function(dendro, all = FALSE, count = 50, positions = NULL, col = "grey30", lty = 3, hang = 0) { if (all) { positions = 1:(length(dendro$height)+1); } else { if (is.null(positions)) { lineSpacing = (length(dendro$height)+1)/count; positions = (1:count)* lineSpacing; } } objHeights = rep(0, length(dendro$height+1)); objHeights[-dendro$merge[dendro$merge[,1]<0,1]] = dendro$height[dendro$merge[,1]<0]; objHeights[-dendro$merge[dendro$merge[,2]<0,2]] = dendro$height[dendro$merge[,2]<0]; box = par("usr"); ymin = box[3]; ymax = box[4]; objHeights = objHeights - hang*(ymax - ymin); objHeights[objHeights nLinks); } if (sampleLinks) nLinks = min(nLinks, nGenes) else nLinks = nGenes; #printFlush(paste("blockSize =", blockSize)); #printFlush(paste("nGenes =", nGenes)); 
#printFlush(paste(".largestBlockSize =", .largestBlockSize)); if (blockSize * nLinks>.largestBlockSize) blockSize = as.integer(.largestBlockSize/nLinks); intNetworkType = charmatch(type, .networkTypes); if (is.na(intNetworkType)) stop(paste("Unrecognized networkType argument. Recognized values are (unique abbreviations of)", paste(.networkTypes, collapse = ", "))); subtract = rep(1, nGenes); if (sampleLinks) { if (verbose > 0) printFlush(paste(spaces, "nearestNeighborConnectivity: selecting sample pool of size", nLinks, "..")) sd = apply(datExpr, 2, sd, na.rm = TRUE); order = order(-sd); saved = FALSE; if (exists(".Random.seed")) { saved = TRUE; savedSeed = .Random.seed if (is.numeric(setSeed)) set.seed(setSeed); } samplePool = order[sample(x = nGenes, size = nLinks)] if (saved) .Random.seed <<- savedSeed; poolExpr = datExpr[, samplePool]; subtract[-samplePool] = 0; } if (verbose>0) { printFlush(paste(spaces, "nearestNeighborConnectivity: received", "dataset with nGenes =", as.character(nGenes))); cat(paste(spaces, "..using nNeighbors =", nNeighbors, "and blockSize =", blockSize, " ")); pind = initProgInd(trailStr = " done"); } nearestNeighborConn = rep(0, nGenes); nBlocks = as.integer((nGenes-1)/blockSize); SetRestrConn = NULL; start = 1; if (sampleLinks) { corEval = parse(text = paste(corFnc, "(poolExpr, datExpr[, blockIndex] ", prepComma(corOptions), ")")) } else { corEval = parse(text = paste(corFnc, "(datExpr, datExpr[, blockIndex] ", prepComma(corOptions), ")")) } while (start <= nGenes) { end = start + blockSize-1; if (end>nGenes) end = nGenes; blockIndex = c(start:end); #if (verbose>1) printFlush(paste(spaces, "..working on genes", start, "through", end, "of", nGenes)) c = eval(corEval); if (intNetworkType==1) { c = abs(c); } else if (intNetworkType==2) { c = (1+c)/2; } else if (intNetworkType==3) { c[c < 0] = 0; } else stop("Internal error: intNetworkType has wrong value:", intNetworkType, ". 
Sorry!"); adj_mat = as.matrix(c^power); adj_mat[is.na(adj_mat)] = 0; sortedAdj = as.matrix(apply(adj_mat, 2, sort, decreasing = TRUE)[1:(nNeighbors+1), ]); nearestNeighborConn[blockIndex] = apply(sortedAdj, 2, sum)-subtract[blockIndex]; start = end+1; if (verbose>0) pind = updateProgInd(end/nGenes, pind); gc(); } if (verbose>0) printFlush(" "); nearestNeighborConn; } #Try to merge this with the single-set function. #------------------------------------------------------------------------------------------- # # nearestNeighborConnectivityMS # #------------------------------------------------------------------------------------------- # This function takes expression data (rows=samples, colummns=genes) in the multi-set format # and the power exponent used in weighting the # correlations to get the network adjacency matrix, and returns an array of dimensions # nGenes * nSets containing the connectivities of each gene in each subset. nearestNeighborConnectivityMS = function(multiExpr, nNeighbors = 50, power=6, type = "unsigned", corFnc = "cor", corOptions = "use = 'p'", blockSize = 1000, sampleLinks = NULL, nLinks = 5000, setSeed = 36492, verbose=1, indent=0) { spaces = indentSpaces(indent); setsize = checkSets(multiExpr); nGenes = setsize$nGenes; nSamples = setsize$nSamples; nSets = setsize$nSets; if (is.null(sampleLinks)) { sampleLinks = (nGenes > nLinks); } if (sampleLinks) nLinks = min(nLinks, nGenes) else nLinks = nGenes; #printFlush(paste("blockSize =", blockSize)); #printFlush(paste("nGenes =", nGenes)); #printFlush(paste(".largestBlockSize =", .largestBlockSize)); if (blockSize * nLinks>.largestBlockSize) blockSize = as.integer(.largestBlockSize/nLinks); if (length(power)==1) { power = rep(power, nSets); } else if (length(power)!=nSets) stop("Invalid arguments: length of 'power' must equal number sets in 'multiExpr'"); intNetworkType = charmatch(type, .networkTypes); if (is.na(intNetworkType)) stop(paste("Unrecognized networkType argument. 
Recognized values are (unique abbreviations of)", paste(.networkTypes, collapse = ", "))); subtract = rep(1, nGenes); if (sampleLinks) { if (verbose > 0) printFlush(paste(spaces, "nearestNeighborConnectivityMS: selecting sample pool of size", nLinks, "..")) sd = apply(multiExpr[[1]]$data, 2, sd, na.rm = TRUE); order = order(-sd); saved = FALSE; if (exists(".Random.seed")) { saved = TRUE; savedSeed = .Random.seed if (is.numeric(setSeed)) set.seed(setSeed); } samplePool = order[sample(x = nGenes, size = nLinks)] if (saved) .Random.seed <<- savedSeed; subtract[-samplePool] = 0; } if (verbose>0) printFlush(paste(spaces, "nearestNeighborConnectivityMS: received", nSets, "datasets with nGenes =", as.character(nGenes))); if (verbose>0) printFlush(paste(spaces, " Using nNeighbors =", nNeighbors)); nearestNeighborConn = matrix(nrow = nGenes, ncol = nSets); if (sampleLinks) { corEval = parse(text = paste(corFnc, "(multiExpr[[set]]$data[, samplePool], multiExpr[[set]]$data[, blockIndex] ", prepComma(corOptions), ")")) } else { corEval = parse(text = paste(corFnc, "(multiExpr[[set]]$data, multiExpr[[set]]$data[, blockIndex] ", prepComma(corOptions), ")")) } for (set in 1:nSets) { if (verbose>0) { cat(paste(spaces, " Working on set", set)); pind = initProgInd(trailStr = " done"); } nBlocks = as.integer((nGenes-1)/blockSize); SetRestrConn = NULL; start = 1; while (start <= nGenes) { end = start + blockSize-1; if (end>nGenes) end = nGenes; blockIndex = c(start:end); #if (verbose>1) printFlush(paste(spaces, " .. working on genes", start, "through", end, "of", nGenes)) c = eval(corEval); if (intNetworkType==1) { c = abs(c); } else if (intNetworkType==2) { c = (1+c)/2; } else if (intNetworkType==3) { c[c < 0] = 0; } else stop("Internal error: intNetworkType has wrong value:", intNetworkType, ". 
Sorry!"); adj_mat = as.matrix(c^power[set]); adj_mat[is.na(adj_mat)] = 0; sortedAdj = as.matrix(apply(adj_mat, 2, sort, decreasing = TRUE)[1:(nNeighbors+1), ]); nearestNeighborConn[blockIndex, set] = apply(sortedAdj, 2, sum)-subtract[blockIndex]; gc(); start = end + 1; if (verbose > 0) pind = updateProgInd(end/nGenes, pind); } if (verbose>0) printFlush(" "); } nearestNeighborConn; } #====================================================================================================== # # Nifty display of progress. # # ===================================================================================================== initProgInd = function( leadStr = "..", trailStr = "", quiet = !interactive()) { oldStr = " "; cat(oldStr); progInd = list(oldStr = oldStr, leadStr = leadStr, trailStr = trailStr); class(progInd) = "progressIndicator"; updateProgInd(0, progInd, quiet); } updateProgInd = function(newFrac, progInd, quiet = !interactive()) { if (!inherits(progInd, "progressIndicator") ) stop("Parameter progInd is not of class 'progressIndicator'. 
Use initProgInd() to initialize it prior to use.");
  newStr = paste(progInd$leadStr, as.integer(newFrac*100), "% ", progInd$trailStr, sep = "");
  if (newStr!=progInd$oldStr)
  {
    if (quiet)
    {
      progInd$oldStr = newStr;
    } else {
      cat(paste(rep("\b", nchar(progInd$oldStr)), collapse=""));
      cat(newStr);
      if (exists("flush.console")) flush.console();
      progInd$oldStr = newStr;
    }
  }
  progInd;
}

#======================================================================================================
#
# Plot a dendrogram and a set of labels underneath
#
# =====================================================================================================
#

plotDendroAndColors = function(dendro, colors, groupLabels = NULL, rowText = NULL,
                               rowTextAlignment = c("left", "center", "right"),
                               rowTextIgnore = NULL, textPositions = NULL, setLayout = TRUE,
                               autoColorHeight = TRUE, colorHeight = 0.2, colorHeightBase = 0.2,
                               colorHeightMax = 0.6, rowWidths = NULL, dendroLabels = NULL,
                               addGuide = FALSE, guideAll = FALSE, guideCount = 50,
                               guideHang = 0.20, addTextGuide = FALSE, cex.colorLabels = 0.8,
                               cex.dendroLabels = 0.9, cex.rowText = 0.8,
                               marAll = c(1,5,3,1), saveMar = TRUE, abHeight = NULL, abCol = "red",
                               ...)
{ oldMar = par("mar"); if (!is.null(dim(colors))) { nRows = dim(colors)[2]; } else nRows = as.numeric(length(colors) > 0); if (!is.null(rowText)) nRows = nRows + if (is.null(textPositions)) nRows else length(textPositions); if (autoColorHeight) colorHeight = colorHeightBase + (colorHeightMax - colorHeightBase) * (1-exp(-(nRows-1)/6)) if (setLayout) layout(matrix(c(1:2), 2, 1), heights = c(1-colorHeight, colorHeight)); par(mar = c(0, marAll[2], marAll[3], marAll[4])); plot(dendro, labels = dendroLabels, cex = cex.dendroLabels, ...); if (addGuide) addGuideLines(dendro, count = if(guideAll) length(dendro$height)+1 else guideCount, hang = guideHang); if (!is.null(abHeight)) abline(h=abHeight, col = abCol); par(mar = c(marAll[1], marAll[2], 0, marAll[4])); plotColorUnderTree(dendro, colors, groupLabels, cex.rowLabels = cex.colorLabels, rowText = rowText, rowTextAlignment = rowTextAlignment, rowTextIgnore = rowTextIgnore, textPositions = textPositions, cex.rowText = cex.rowText, rowWidths = rowWidths, addTextGuide = addTextGuide) if (saveMar) par(mar = oldMar); } #################################################################################################### # # Functions included from NetworkScreeningFunctions # #################################################################################################### # this function creates pairwise scatter plots between module eigengenes (above the diagonal) # Below the diagonal are the absolute values of the Pearson correlation coefficients. # The diagonal contains histograms of the module eigengene expressions. plotMEpairs=function(datME, y=NULL, main="Relationship between module eigengenes", clusterMEs=TRUE, ...) { if ( dim(as.matrix(datME))[[2]]==1 & is.null(y) ) { hist( datME, ...) 
} else {
    datMEordered = datME
    if (clusterMEs & dim(as.matrix(datME))[[1]] > 1)
    {
      dissimME = (1-t(cor(datME, method="p", use="p")))/2
      hclustdatME = fastcluster::hclust(as.dist(dissimME), method="average")
      datMEordered = datME[, hclustdatME$order]
    } # end of if
    if (!is.null(y))
    {
      if (length(y) != dim(as.matrix(datMEordered))[[1]])
        stop(paste("The length of the outcome vector 'y' does not match the number of rows of 'datME'.\n",
                   " The columns of datME should correspond to the module eigengenes.\n",
                   " The rows correspond to the array samples. Hint: consider transposing datME."));
      datMEordered = data.frame(y, datMEordered)
    } # end of if
    pairs(datMEordered, upper.panel = panel.smooth, lower.panel = .panel.cor,
          diag.panel = .panel.hist, main = main, ...)
  } # end if
} # end of function

#--------------------------------------------------------------------------------------------------
#
# corPredictionSuccess
#
#--------------------------------------------------------------------------------------------------
# The function corPredictionSuccess can be used to determine which method is best for predicting correlations
# in a new test set. corTestSet should be a vector of correlations in the test set.
# The parameter topNumber specifies how many of the most positively and most negatively
# predicted correlations are considered; topNumber may be a vector of integers.
# corPrediction should be a data frame of predictions for the correlations.
# Output: a list with the following components:
# meancorTestSetPositive = mean test set correlation among the topNumber genes
# which are predicted to have positive correlations.
# meancorTestSetNegative = mean test set correlation among the topNumber genes
# which are predicted to have negative correlations.
# meancorTestSetOverall=(meancorTestSetPositive-meancorTestSetNegative)/2 corPredictionSuccess=function( corPrediction, corTestSet, topNumber=100 ) { nPredictors=dim(as.matrix(corPrediction))[[2]] nGenes=dim(as.matrix(corPrediction))[[1]] if (length(as.numeric(corTestSet))!=nGenes ) stop("non-compatible dimensions of 'corPrediction' and 'corTestSet'") out1=rep(NA, nPredictors) meancorTestSetPositive=matrix(NA, ncol=nPredictors, nrow=length(topNumber) ) meancorTestSetNegative=matrix(NA, ncol=nPredictors, nrow=length(topNumber) ) for (i in c(1:nPredictors) ) { rankpositive=rank(-as.matrix(corPrediction)[,i], ties.method="first") ranknegative=rank(as.matrix(corPrediction)[,i], ties.method="first") for (j in c(1:length(topNumber) ) ) { meancorTestSetPositive[j,i]=mean(corTestSet[rankpositive<= topNumber[j]],na.rm = TRUE) meancorTestSetNegative[j,i]= mean(corTestSet[ranknegative<=topNumber[j]],na.rm = TRUE) } # end of j loop over topNumber } # end of i loop over predictors meancorTestSetOverall=data.frame((meancorTestSetPositive-meancorTestSetNegative)/2) dimnames(meancorTestSetOverall)[[2]]=names(data.frame(corPrediction)) meancorTestSetOverall=data.frame(topNumber=topNumber, meancorTestSetOverall) meancorTestSetPositive=data.frame(meancorTestSetPositive) dimnames(meancorTestSetPositive)[[2]]=names(data.frame(corPrediction)) meancorTestSetPositive=data.frame(topNumber=topNumber, meancorTestSetPositive) meancorTestSetNegative=data.frame(meancorTestSetNegative) dimnames(meancorTestSetNegative)[[2]]=names(data.frame(corPrediction)) meancorTestSetNegative=data.frame(topNumber=topNumber, meancorTestSetNegative) datout=list(meancorTestSetOverall=meancorTestSetOverall, meancorTestSetPositive=meancorTestSetPositive, meancorTestSetNegative =meancorTestSetNegative) datout } # end of function corPredictionSuccess #-------------------------------------------------------------------------------------------------- # # relativeCorPredictionSuccess # 
#-------------------------------------------------------------------------------------------------- # The function relativeCorPredictionSuccess can be used to test whether a gene screening method # is significantly better than a standard method. # For each gene screening method (column of corPredictionNew) it provides a Kruskal Wallis # test p-value for comparison with the vector corPredictionStandard, # TopNumber is a vector of integers. # corTestSet should be a vector of correlations in the test set. # corPredictionNew should be a data frame of predictions for the # correlations. corPredictionStandard should be the standard prediction (correlation in the training data). # The function outputs a p-value for the Kruskal test that # the new correlation prediction methods outperform the standard correlation prediction method. relativeCorPredictionSuccess=function(corPredictionNew, corPredictionStandard, corTestSet, topNumber=100 ) { nPredictors=dim(as.matrix(corPredictionNew))[[2]] nGenes=dim(as.matrix(corPredictionNew))[[1]] if (length(as.numeric(corTestSet))!=nGenes ) stop("non-compatible dimensions of 'corPrediction' and 'corTestSet'.") if (length(as.numeric(corTestSet))!=length(corPredictionStandard) ) stop("non-compatible dimensions of 'corTestSet' and 'corPredictionStandard'.") kruskalp=matrix(NA,nrow=length(topNumber), ncol=nPredictors) for (i in c(1:nPredictors) ) { rankhighNew=rank(-as.matrix(corPredictionNew)[,i], ties.method="first") ranklowNew=rank(as.matrix(corPredictionNew)[,i],ties.method="first") for (j in c(1:length(topNumber)) ){ highCorNew=as.numeric(corTestSet[rankhighNew <= topNumber[j] ]) lowCorNew=as.numeric(corTestSet[ranklowNew <= topNumber[j] ]) highCorStandard=as.numeric(corTestSet[rank(-as.numeric(corPredictionStandard), ties.method="first") <= topNumber[j]]) lowCorStandard=as.numeric(corTestSet[rank(as.numeric(corPredictionStandard), ties.method="first") <= topNumber[j]]) signedCorNew=c(highCorNew,-lowCorNew) 
      signedCorStandard=c(highCorStandard,-lowCorStandard)
      x1=c(signedCorNew,signedCorStandard)
      Grouping=rep(c(2,1), c(length(signedCorNew), length(signedCorStandard)))
      sign1=sign(cor(Grouping,x1, use="p"))
      if (sign1==0) sign1=1
      kruskalp[j,i]=kruskal.test(x=x1, g=Grouping)$p.value*sign1
      #print(names(data.frame(corPredictionNew))[[i]])
      #print(paste("This correlation is positive if the new method is better than the old method" ,
      #            signif(cor(Grouping,x1, use="p"),3)))
    } # end of j loop
  } # end of i loop
  kruskalp[kruskalp<0]=1
  kruskalp=data.frame(kruskalp)
  dimnames(kruskalp)[[2]]= paste(names(data.frame(corPredictionNew)),".kruskalP", sep="")
  kruskalp=data.frame(topNumber=topNumber, kruskalp)
  kruskalp
} # end of function relativeCorPredictionSuccess

#--------------------------------------------------------------------------------------------------
#
# alignExpr
#
#--------------------------------------------------------------------------------------------------
# If y is supplied, it multiplies columns of datExpr by +/-1 to make all correlations with y positive.
# If y is not supplied, the first column of datExpr is used as the reference direction.

alignExpr=function(datExpr, y = NULL)
{
  if ( !is.null(y) & dim(as.matrix(datExpr))[[1]] != length(y) )
    stop("Incompatible number of samples in 'datExpr' and 'y'.")
  if (is.null(y) ) y=as.numeric(datExpr[,1])
  sign1=sign(as.numeric(cor(y, datExpr, use="p" )))
  as.data.frame(scale(t(t(datExpr)*sign1)))
} # end of function alignExpr

# This function can be used to rank the values in x; ties are broken by ties.method="first".
# It does not appear to be used anywhere in these functions.
#rank1=function(x){
#  rank(x, ties.method="first")
#}

##############################################################################################
#
# Gene expression simulations (functions by P.L.)
#
##############################################################################################

#----------------------------------------------------------------------------
#
# .causalChildren
#
#----------------------------------------------------------------------------
# Note: The returned vector may contain multiple occurrences of the same child.

.causalChildren = function(parents, causeMat)
{
  nNodes = dim(causeMat)[[1]];
  # print(paste("Length of parents: ",length(parents)));
  if (length(parents)==0) return(NULL);
  Child_ind = apply(as.matrix(abs(causeMat[, parents])), 1, sum)>0;
  if (sum(Child_ind)>0)
  {
    children = c(1:nNodes)[Child_ind]
  } else {
    children = NULL;
  }
  children;
}

#----------------------------------------------------------------------------
#
# simulateEigengeneNetwork
#
#----------------------------------------------------------------------------
#
# Given a set of causal anchors, this function creates a network of vectors that should satisfy the
# causal relations encoded in the causal matrix causeMat, i.e. causeMat[j,i] is the causal effect of
# vector i on vector j.
# The function starts by initializing all vectors to noise given in the noise specification. (The noise
# can be specified for each vector separately.) Then it runs the standard causal network signal
# propagation and returns the resulting vectors.
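# Illustrative usage sketch (not run; all object names and values below are made up):
# a 3-node causal chain 1 -> 2 -> 3 driven by a single anchor vector at node 1.
# set.seed(1)
# causeMat = matrix(0, 3, 3)
# causeMat[2, 1] = 0.8; causeMat[3, 2] = 0.6     # causeMat[j,i]: effect of node i on node j
# anchors = matrix(rnorm(50), ncol = 1)          # 50 samples, one anchor vector
# net = simulateEigengeneNetwork(causeMat, anchorIndex = 1, anchorVectors = anchors)
# net$levels                                     # anchor at level 0, descendants at levels 1 and 2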
simulateEigengeneNetwork = function(causeMat, anchorIndex, anchorVectors, noise = 1, verbose = 0, indent = 0) { spaces = indentSpaces(indent); if (verbose>0) printFlush(paste(spaces, "Creating seed vectors...")); nNodes = dim(causeMat)[[1]]; nSamples = dim(anchorVectors)[[1]]; if (length(anchorIndex)!=dim(anchorVectors)[[2]]) stop(paste("Length of anchorIndex must equal the number of vectors in anchorVectors.")); if (length(noise)==1) noise = rep(noise, nNodes); if (length(noise)!=nNodes) stop(paste("Length of noise must equal", "the number of nodes as given by the dimension of the causeMat matrix.")); # Initialize all node vectors to noise with given standard deviation NodeVectors = matrix(0, nrow = nSamples, ncol = nNodes); for (i in 1:nNodes) NodeVectors[,i] = rnorm(n=nSamples, mean=0, sd=noise[i]); Levels = rep(0, times = nNodes); # Calculate levels for all nodes: start from anchors and go through each successive level of children level = 0; parents = anchorIndex; Children = .causalChildren(parents = parents, causeMat = causeMat); if (verbose>1) printFlush(paste(spaces, "..Determining level structure...")); while (!is.null(Children)) { # print(paste("level:", level)); # print(paste(" parents:", parents)); # print(paste(" Children:", Children)); level = level + 1; if ((verbose>1) & (level/10 == as.integer(level/10))) printFlush(paste(spaces, " ..Detected level", level)); #printFlush(paste("Detected level", level)); Levels[Children] = level; parents = Children; Children = .causalChildren(parents = parents, causeMat = causeMat); } HighestLevel = level; # Generate the whole network if (verbose>1) printFlush(paste(spaces, "..Calculating network...")); NodeVectors[,anchorIndex] = NodeVectors[,anchorIndex] + anchorVectors; for (level in (1:HighestLevel)) { if ( (verbose>1) & (level/10 == as.integer(level/10)) ) printFlush(paste(spaces, " .Working on level", level)); #printFlush(paste("Working on level", level)); LevelChildren = c(1:nNodes)[Levels==level] for (child in 
LevelChildren) { LevelParents = c(1:nNodes)[causeMat[child, ]!=0] for (parent in LevelParents) NodeVectors[, child] = scale(NodeVectors[, child] + causeMat[child, parent]*NodeVectors[,parent]); } } Nodes = list(eigengenes = NodeVectors, causeMat = causeMat, levels = Levels, anchorIndex = anchorIndex); Nodes; } #-------------------------------------------------------------------------------------------- # # simulateModule # #-------------------------------------------------------------------------------------------- # The resulting data is normalized. # Attributes contain the component trueKME giving simulated correlation with module eigengene for # both module genes and near-module genes. # corPower controls how fast the correlation drops with index i in the module; the curve is roughly # x^{1/corPower} with x<1 and x~0 near the "center", so the higher the power, the faster the curve rises. simulateModule = function(ME, nGenes, nNearGenes = 0, minCor = 0.3, maxCor = 1, corPower = 1, signed = FALSE, propNegativeCor = 0.3, geneMeans = NULL, verbose = 0, indent = 0) { nSamples = length(ME); datExpr = matrix(rnorm((nGenes+nNearGenes)*nSamples), nrow = nSamples, ncol = nGenes+nNearGenes) VarME = var(ME) # generate the in-module genes CorME = maxCor - (c(1:nGenes)/nGenes)^(1/corPower) * (maxCor-minCor); noise = sqrt(VarME * (1-CorME^2)/CorME^2); sign = rep(1, nGenes); if (!signed) { negGenes = as.integer(seq(from = 1/propNegativeCor, by = 1/propNegativeCor, length.out = nGenes * propNegativeCor)) negGenes = negGenes[negGenes <=nGenes]; sign[negGenes] = -1; } for (gene in 1:nGenes) { datExpr[, gene] = sign[gene] * (ME + rnorm(nSamples, sd = noise[gene])); } trueKME = CorME; # generate the near-module genes if (nNearGenes>0) { CorME = c(1:nNearGenes)/nNearGenes * minCor; noise = sqrt(VarME * (1-CorME^2)/CorME^2); sign = rep(1, nNearGenes); if (!signed) { negGenes = as.integer(seq(from = 1/propNegativeCor, by = 1/propNegativeCor, length.out = nNearGenes * propNegativeCor)) 
      negGenes = negGenes[negGenes <=nNearGenes];
      sign[negGenes] = -1;
    }
    for (gene in 1:nNearGenes)
      datExpr[, nGenes + gene] = ME + sign[gene] * rnorm(nSamples, sd = noise[gene]);
    trueKME = c(trueKME, CorME);
  }
  datExpr = scale(datExpr);
  if (!is.null(geneMeans))
  {
    if (any(is.na(geneMeans)))
      stop("All entries of 'geneMeans' must be finite.");
    if (length(geneMeans)!=nGenes + nNearGenes)
      stop("The length of 'geneMeans' must equal nGenes + nNearGenes.");
    datExpr = datExpr + matrix(geneMeans, nSamples, nGenes + nNearGenes, byrow = TRUE);
  }
  attributes(datExpr)$trueKME = trueKME;
  datExpr;
}

#.SimulateModule=function(ME, size,minimumCor=.3) {
#if (size<3) print("WARNING: module size smaller than 3")
#if(minimumCor==0) minimumCor=0.0001;
#maxnoisevariance=var(ME,na.rm = TRUE)*(1/minimumCor^2-1)
#SDvector=sqrt(c(1:size)/size*maxnoisevariance)
#datSignal=suppressWarnings(matrix(c(ME, ME ,-ME),nrow=size ,ncol=length(ME) ,byrow = TRUE))
#datNoise=SDvector* matrix(rnorm(size*length(ME)),nrow=size ,ncol=length(ME))
#datModule=datSignal+datNoise
#t(datModule)
#} # end of function

#--------------------------------------------------------------------------------------------
#
# simulateSmallLayer
#
#--------------------------------------------------------------------------------------------
# Simulates a bunch of small and weakly expressed modules.
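# Illustrative usage sketch for simulateSmallLayer (not run; argument values are arbitrary):
# scatter small, weak modules of ~10 genes over 500 gene slots for 30 samples.
# weakLayer = simulateSmallLayer(order = sample(500), nSamples = 30,
#                                averageModuleSize = 10, averageExpr = 0.2,
#                                moduleSpacing = 2, verbose = 0)
# dim(weakLayer)   # 30 rows (samples) by 500 columns (genes)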
simulateSmallLayer = function(order, nSamples,
                              minCor = 0.3, maxCor = 0.5, corPower = 1,
                              averageModuleSize, averageExpr, moduleSpacing,
                              verbose = 4, indent = 0)
{
  spaces = indentSpaces(indent);
  nGenes = length(order)
  datExpr = matrix(0, nrow = nSamples, ncol = nGenes);
  maxCorN0 = averageModuleSize;
  if (verbose>0)
    printFlush(paste(spaces, "simulateSmallLayer: simulating modules with min corr", minCor,
                     ", average expression", averageExpr,
                     ", average module size", averageModuleSize,
                     ", inverse density", moduleSpacing));
  index = 0;
  while (index < nGenes)
  {
    ModSize = as.integer(rexp(1, 1/averageModuleSize));
    if (ModSize<3) ModSize = 3;
    if (index + ModSize>nGenes) ModSize = nGenes - index;
    if (ModSize>2)  # Otherwise don't bother :)
    {
      ModuleExpr = rexp(1, 1/averageExpr);
      if (verbose>4)
        printFlush(paste(spaces, "  Module of size", ModSize, ", expression", ModuleExpr,
                         ", min corr", minCor, "inserted at index", index+1));
      ME = rnorm(nSamples, sd = ModuleExpr);
      NInModule = as.integer(ModSize*2/3);
      nNearModule = ModSize - NInModule;
      EffMinCor = minCor * maxCor;
      datExpr[, order[(index+1):(index + ModSize)]] =
        ModuleExpr * simulateModule(ME, NInModule, nNearModule, EffMinCor, maxCor, corPower);
    }
    index = index + ModSize * moduleSpacing;
  }
  datExpr;
}

#--------------------------------------------------------------------------------------------
#
# simulateDatExpr
#
#--------------------------------------------------------------------------------------------
#
# Caution: the last modProportions entry gives the proportion of "true grey" (background) genes;
# the corresponding minCor entry must be absent (i.e. length(minCor) = length(modProportions)-1).
# SubmoduleLayers: layers of small modules with weaker correlation, ordered in the same order as the
# genes in the big modules. Needs average number of genes in a module (exponential distribution),
# average expression strength (exponential density) and inverse density.
# ScatteredModuleLayers: Layers of small modules whose order is random.
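# Illustrative usage sketch for simulateDatExpr (not run; the seed eigengenes and
# proportions below are arbitrary):
# set.seed(1)
# MEs = matrix(rnorm(50*3), 50, 3)                  # 3 seed eigengenes, 50 samples
# sim = simulateDatExpr(MEs, nGenes = 1000,
#                       modProportions = c(0.20, 0.15, 0.10, 0.55))  # last entry: grey genes
# dim(sim$datExpr)      # 50 samples by 1000 genes
# table(sim$setLabels)  # per-module gene counts; label 0 = grey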
simulateDatExpr=function(eigengenes, nGenes, modProportions,
                         minCor = 0.3, maxCor = 1, corPower = 1,
                         signed = FALSE, propNegativeCor = 0.3, geneMeans = NULL,
                         backgroundNoise = 0.1, leaveOut = NULL,
                         nSubmoduleLayers = 0, nScatteredModuleLayers = 0,
                         averageNGenesInSubmodule = 10, averageExprInSubmodule = 0.2,
                         submoduleSpacing = 2,
                         verbose = 1, indent = 0)
{
  spaces = indentSpaces(indent);
  nMods=length(modProportions)-1;
  nSamples = dim(eigengenes)[[1]];
  if (length(minCor)==1) minCor = rep(minCor, nMods);
  if (length(maxCor)==1) maxCor = rep(maxCor, nMods);
  if (length(minCor)!=nMods)
    stop(paste("Input error: minCor is an array of different length than",
               "the length-1 of modProportions array."));
  if (length(maxCor)!=nMods)
    stop(paste("Input error: maxCor is an array of different length than",
               "the length-1 of modProportions array."));
  if (dim(eigengenes)[[2]]!=nMods)
    stop(paste("Input error: Number of seed vectors must equal the",
               "length of modProportions minus 1."));
  if (is.null(geneMeans)) geneMeans = rep(0, nGenes);
  if (length(geneMeans)!=nGenes)
    stop("Length of 'geneMeans' must equal 'nGenes'.");
  if (any(is.na(geneMeans)))
    stop("All entries of 'geneMeans' must be finite.");
  grey = 0;
  moduleLabels = c(1:nMods);
  if(sum(modProportions)>1)
    stop("Input error: the sum of modProportions must be less than 1");
  #if(sum(modProportions[c(1:(length(modProportions)-1))])>=0.5)
  #  print(paste("SimulateExprData: Input warning: the sum of modProportions for proper modules",
  #              "should ideally be less than 0.5."));
  no.in.modules = as.integer(nGenes*modProportions);
  no.in.proper.modules = no.in.modules[c(1:(length(modProportions)-1))];
  no.near.modules = as.integer((nGenes - sum(no.in.modules)) *
                               no.in.proper.modules/sum(no.in.proper.modules));
  simulate.module = rep(TRUE, times = nMods);
  if (!is.null(leaveOut)) simulate.module[leaveOut] = FALSE;
  no.in.modules[nMods+1] = nGenes - sum(no.in.proper.modules[simulate.module]) -
                           sum(no.near.modules[simulate.module]);
  labelOrder =
moduleLabels[rank(-modProportions[-length(modProportions)], ties.method = "first")];
  labelOrder = c(labelOrder, grey);
  if (verbose>0)
    printFlush(paste(spaces, "simulateDatExpr: simulating", nGenes, "genes in", nMods, "modules."));
  if (verbose>1)
  {
    # printFlush(paste(spaces, "  Minimum correlation in a module is", minCor,
    #                  " and its dropoff is characterized by power", corPower));
    printFlush(paste(spaces, "    Simulated labels:", paste(labelOrder[1:nMods], collapse = ", "),
                     " and ", grey));
    printFlush(paste(spaces, "    Module sizes:", paste(no.in.modules, collapse = ", ")));
    printFlush(paste(spaces, "    near module sizes:", paste(no.near.modules, collapse = ", ")));
    printFlush(paste(spaces, "    Min correlation:", paste(minCor, collapse = ", ")));
    if (!is.null(leaveOut))
      printFlush(paste(spaces, "    _leaving out_ modules",
                       paste(labelOrder[leaveOut], collapse = ", ")));
  }
  truemodule=rep(grey, nGenes);
  allLabels=rep(grey, nGenes);  # These have the colors for left-out modules as well.
  # This matrix contains the simulated expression values (rows are samples, columns are genes)
  # Each simulated cluster has a distinct mean expression across the samples
  datExpr = matrix(rnorm(nGenes*nSamples), nrow = nSamples, ncol = nGenes)
  trueKME = rep(NA, nGenes);
  trueKME.whichMod = rep(0, nGenes);
  gene.index = 0;   # Where to put the current gene into datExpr
  for(mod in c(1:nMods))
  {
    nModGenes = no.in.modules[mod];
    nNearGenes = no.near.modules[mod];
    if (simulate.module[mod])
    {
      ME = eigengenes[, mod];
      EffMaxCor = maxCor[mod];
      EffMinCor = minCor[mod];
      range = (gene.index+1):(gene.index+nModGenes+nNearGenes);
      temp = simulateModule(ME, nModGenes, nNearGenes, minCor[mod], maxCor[mod], corPower,
                            signed = signed, propNegativeCor = propNegativeCor,
                            geneMeans = NULL, verbose = verbose-2, indent = indent+2);
      datExpr[, range] = temp;
      truemodule[(gene.index+1):(gene.index+nModGenes)] = labelOrder[mod];
      trueKME[range] = attributes(temp)$trueKME;
      trueKME.whichMod[range] = mod;
    }
    allLabels[(gene.index+1):(gene.index+nModGenes)] = labelOrder[mod];
    gene.index = gene.index + nModGenes + nNearGenes;
  }
  if (nSubmoduleLayers>0)
  {
    OrderVector = c(1:nGenes)
    for (layer in 1:nSubmoduleLayers)
    {
      if (verbose>1) printFlush(paste(spaces, "Simulating ordered extra layer", layer));
      datExpr = datExpr + simulateSmallLayer(OrderVector, nSamples, minCor[1], maxCor[1], corPower,
                                             averageNGenesInSubmodule, averageExprInSubmodule,
                                             submoduleSpacing, verbose-1, indent+1);
    }
  }
  if (nScatteredModuleLayers>0) for (layer in 1:nScatteredModuleLayers)
  {
    if (verbose>1) printFlush(paste(spaces, "Simulating unordered extra layer", layer));
    OrderVector = sample(nGenes)
    datExpr = datExpr + simulateSmallLayer(OrderVector, nSamples, minCor[1], maxCor[1], corPower,
                                           averageNGenesInSubmodule, averageExprInSubmodule,
                                           submoduleSpacing, verbose = verbose-1, indent = indent+1);
  }
  gc();
  if (verbose>1)
    printFlush(paste(spaces, "  Adding background noise with amplitude", backgroundNoise));
  datExpr = datExpr + rnorm(n = nGenes*nSamples, sd = backgroundNoise);
  means = colMeans(datExpr);
  datExpr = datExpr + matrix(geneMeans - means, nSamples, nGenes, byrow = TRUE);
  colnames(datExpr) = spaste("Gene.", c(1:nGenes));
  rownames(datExpr) = spaste("Sample.", c(1:nSamples));
  list(datExpr = datExpr, setLabels = truemodule, allLabels = allLabels, labelOrder = labelOrder,
       trueKME = trueKME, trueKME.whichMod = trueKME.whichMod)
} # end of function

#--------------------------------------------------------------------------------------
#
# simulateMultiExpr
#
#--------------------------------------------------------------------------------------
# Simulates several sets with some of the modules left out.
# eigengenes are specified in a standard multi-set data format.
# leaveOut must be a matrix of No.Modules x No.Sets of TRUE/FALSE values;
# minCor must be a single number here; modProportions are a single vector, since the proportions should be the
# same for all sets.
# nSamples is a vector specifying the number of samples in each set; this must be compatible with the # dimensions of the eigengenes. simulateMultiExpr = function(eigengenes, nGenes, modProportions, minCor = 0.5, maxCor = 1, corPower = 1, backgroundNoise = 0.1, leaveOut = NULL, signed = FALSE, propNegativeCor = 0.3, geneMeans = NULL, nSubmoduleLayers = 0, nScatteredModuleLayers = 0, averageNGenesInSubmodule = 10, averageExprInSubmodule = 0.2, submoduleSpacing = 2, verbose = 1, indent = 0) { MEsize = checkSets(eigengenes); nSets = MEsize$nSets; nMods = MEsize$nGenes; nSamples = MEsize$nSamples; nAllSamples = sum(nSamples); if (is.null(geneMeans)) { geneMeans = matrix(0, nGenes, nSets); } else { geneMeans = as.matrix(geneMeans); if (nrow(geneMeans)!=nGenes) { stop("Number of rows (or entries) in 'geneMeans' must equal 'nGenes'."); } else if (ncol(geneMeans)==1) { geneMeans = matrix(geneMeans, nGenes, nSets); } else if (ncol(geneMeans)!=nSets) stop("Number of columns in geneMeans must either equal the number of sets or be 1."); } if (any(is.na(geneMeans))) stop("All entries of 'geneMeans' must be finite."); d2 = length(modProportions)-1; if (d2 != nMods) stop(paste("Incompatible numbers of modules in 'eigengenes' and 'modProportions'")); if (is.null(leaveOut)) { leaveOut = matrix(FALSE, nMods, nSets); } else { d3 = dim(leaveOut); if ( (d3[1] != nMods) | (d3[2] != nSets) ) stop(paste("Incompatible dimensions of 'leaveOut' and set eigengenes.")) } multiExpr = vector(mode="list", length = nSets); setLabels = NULL; allLabels = NULL; labelOrder = NULL; for (set in 1:nSets) { SetEigengenes = scale(eigengenes[[set]]$data); setLeaveOut = leaveOut[, set]; # Convert setLeaveOut from boolean to a list of indices where it's TRUE # SetMinCor = rep(minCor, nMods); # SetMaxCor = rep(maxCor, nMods); SetLO = c(1:nMods)[setLeaveOut]; setData = simulateDatExpr(SetEigengenes, nGenes, modProportions, minCor = minCor, maxCor = maxCor, corPower = corPower, signed = signed, propNegativeCor = 
propNegativeCor, backgroundNoise = backgroundNoise, leaveOut = SetLO, nSubmoduleLayers = nSubmoduleLayers, nScatteredModuleLayers = nScatteredModuleLayers , averageNGenesInSubmodule = averageNGenesInSubmodule, averageExprInSubmodule = averageExprInSubmodule, submoduleSpacing = submoduleSpacing, verbose = verbose-1, indent = indent+1); multiExpr[[set]] = list(data = setData$datExpr); setLabels = cbind(setLabels, setData$setLabels); allLabels = cbind(allLabels, setData$allLabels); labelOrder = cbind(labelOrder, setData$labelOrder); } list(multiExpr = multiExpr, setLabels = setLabels, allLabels = allLabels, labelOrder = labelOrder); } #-------------------------------------------------------------------------------------------------- # # simulateDatExpr5Modules # #-------------------------------------------------------------------------------------------------- simulateDatExpr5Modules = function( nGenes=2000, colorLabels=c("turquoise","blue", "brown", "yellow", "green"), simulateProportions=c(0.10,0.08, 0.06, 0.04, 0.02), MEturquoise, MEblue, MEbrown, MEyellow, MEgreen, SDnoise=1, backgroundCor=0.3) { nSamples=length(MEturquoise) if( length(MEturquoise) != length(MEblue) | length(MEturquoise) != length(MEbrown) | length(MEturquoise) != length(MEyellow) | length(MEturquoise) != length(MEgreen) ) stop("Numbers of samples in module eigengenes (MEs) are not consistent" ); if ( sum(simulateProportions)>1 ) { stop("Sum of module proportions is larger than 1. Please ensure sum(simulateProportions)<=1. 
" ); # simulateProportions=rep(1/10,5) } modulesizes=round(nGenes*c(simulateProportions, 1-sum(simulateProportions))) truemodule=rep(c( as.character(colorLabels),"grey") , modulesizes ) ModuleEigengenes = data.frame(MEturquoise,MEblue,MEbrown,MEyellow,MEgreen) no.MEs=dim(ModuleEigengenes)[[2]] # This matrix contains the simulated expression values #(rows are samples, columns genes) # it contains some background noise datExpr=matrix(rnorm(nSamples*nGenes,mean=0,sd=SDnoise),nrow=nSamples,ncol=nGenes) if (is.logical(backgroundCor)) backgroundCor = 0.3 * as.numeric(backgroundCor); if (as.numeric(backgroundCor) > 0) { MEbackground=MEturquoise datSignal= (matrix(MEbackground,nrow=length(MEturquoise) ,ncol=nGenes,byrow=FALSE)) datExpr= datExpr+ as.numeric(backgroundCor)*datSignal }# end of if backgroundCor for (i in c(1:no.MEs) ) { restrict1= truemodule== colorLabels[i] datModule = simulateModule(ModuleEigengenes[,i] , nGenes = modulesizes[i], corPower = 2.5) datExpr[,restrict1]= datModule } # end of for loop # this is the output of the function list(datExpr =datExpr, truemodule =truemodule, datME = ModuleEigengenes ) } # end of simulation function #-------------------------------------------------------------------------------------------------- # # automaticNetworkScreening # #-------------------------------------------------------------------------------------------------- automaticNetworkScreening = function( datExpr, y, power=6, networkType="unsigned", detectCutHeight = 0.995, minModuleSize = min(20, ncol(as.matrix(datExpr))/2 ), datME=NULL, getQValues = TRUE, ...) 
{ y = as.numeric(as.character(y)) if (length(y) != dim(as.matrix(datExpr))[[1]] ) stop("Number of samples in 'y' and 'datExpr' disagree: length(y) != dim(as.matrix(datExpr))[[1]] ") nAvailable=apply(as.matrix(!is.na(datExpr)), 2,sum) ExprVariance=apply(as.matrix(datExpr),2,var, na.rm = TRUE ) restrictGenes = (nAvailable>=..minNSamples) & (ExprVariance>0) numberUsefulGenes=sum(restrictGenes,na.rm = TRUE) if ( numberUsefulGenes<3 ) { stop(paste("IMPORTANT: there are not enough useful genes. \n", " Your input genes have fewer than 4 observations or they are constant.\n", " WGCNA cannot be used for these data. Hint: collect more arrays or input genes that vary.")); #warning(paste("IMPORTANT: there are not enough useful genes. \n", # " Your input genes have fewer than 4 observations or they are constant.\n", # " WGCNA cannot be used for these data. Hint: collect more arrays or input genes that vary.")); #output=list(NetworkScreening=data.frame(NS1=rep(NA, dim(as.matrix(datExpr))[[2]] )), # datME=rep(NA, dim(as.matrix(datExpr))[[1]] ), EigengeneSignificance=NA , AAcriterion=NA) #return(output); } datExprUsefulGenes=as.matrix(datExpr)[,restrictGenes & !is.na(restrictGenes)] if (is.null(datME) ) { mergeCutHeight1 = dynamicMergeCut(n= dim(as.matrix(datExprUsefulGenes))[[1]]) B = blockwiseModules(datExprUsefulGenes, mergeCutHeight = mergeCutHeight1, TOMType = "none", power = power, networkType=networkType, detectCutHeight = detectCutHeight, minModuleSize = minModuleSize ); datME=data.frame(B$MEs) } if (dim(as.matrix(datME))[[1]] != dim(as.matrix(datExpr))[[1]] ) stop(paste("Numbers of samples in 'datME' and 'datExpr' are incompatible:", "dim(as.matrix(datME))[[1]] != dim(as.matrix(datExpr))[[1]]")) MMdata=signedKME(datExpr=datExpr, datME=datME, outputColumnName="MM.") MMdataPvalue=as.matrix(corPvalueStudent(as.matrix(MMdata), nSamples= dim(as.matrix(datExpr))[[1]])) dimnames( MMdataPvalue)[[2]]=paste("Pvalue",names(MMdata), sep=".") NS1=networkScreening(y= y,datME=datME, 
                      datExpr=datExpr, getQValues = getQValues)
  # here we compute the eigengene significance measures
  ES=data.frame(cor(y, datME, use="p"))
  ESvector = as.vector(as.matrix(ES));
  EScounts = tapply(abs(ESvector), cut(abs(ESvector), seq(from=0, to=1, by=.1)), length)
  EScounts[is.na(EScounts)] = 0;
  rr=max(abs(ES),na.rm = TRUE)
  AAcriterion=sqrt(length(y)-2) * rr/sqrt(1-rr^2)
  ESy=(1+max(abs(ES), na.rm = TRUE))/2
  ES=data.frame(ES, ESy=ESy)
  # to avoid dividing by zero, we set correlations that are 1 equal to .9999
  ES.999=as.numeric(as.vector(ES))
  ES.999[!is.na(ES) & ES>0.9999]=.9999
  ES.pvalue=corPvalueStudent(cor=abs(ES.999), nSamples=sum(!is.na(y) ))
  ES.pvalue[length(ES.999)]=0
  EigengeneSignificance.pvalue=data.frame(matrix(ES.pvalue, nrow=1) )
  names(EigengeneSignificance.pvalue)=names(ES)
  datME=data.frame(datME,y=y)
  names(ES)=paste("ES", substr(names(ES),3,100), sep="")
  print(signif(ES,2))
  output=list(networkScreening=data.frame(NS1, MMdata, MMdataPvalue), datME=data.frame(datME),
              eigengeneSignificance=data.frame(ES),
              EScounts = EScounts,
              eigengeneSignificance.pvalue=EigengeneSignificance.pvalue,
              AAcriterion=AAcriterion)
  output
} # end of function automaticNetworkScreening

#--------------------------------------------------------------------------------------------------
#
# automaticNetworkScreeningGS
#
#--------------------------------------------------------------------------------------------------

automaticNetworkScreeningGS = function(datExpr, GS, power=6, networkType="unsigned",
                                       detectCutHeight = 0.995,
                                       minModuleSize = min(20, ncol(as.matrix(datExpr))/2 ),
                                       datME=NULL)
{
  if (!is.numeric(GS) )
    stop("Gene significance 'GS' is not numeric.")
  if ( dim(as.matrix(datExpr))[[2]] != length(GS) )
    stop("length of gene significance variable GS does not equal the number of columns of datExpr.");
  mergeCutHeight1 = dynamicMergeCut(n= dim(as.matrix(datExpr))[[1]])
  nAvailable=apply(as.matrix(!is.na(datExpr)), 2,sum)
  ExprVariance=apply(as.matrix(datExpr),2,var, na.rm = TRUE )
  restrictGenes=nAvailable>=4 & ExprVariance>0
  numberUsefulGenes=sum(restrictGenes,na.rm = TRUE)
  if ( numberUsefulGenes<3 )
  {
    stop(paste("IMPORTANT: there are not enough useful genes. \n",
               "    Your input genes have fewer than 4 observations or they are constant.\n",
               "    WGCNA cannot be used for these data. Hint: collect more arrays or input genes that vary."));
    #output=list(NetworkScreening=data.frame(NS1=rep(NA, dim(as.matrix(datExpr))[[2]])) , datME=rep(NA,
    #dim(as.matrix(datExpr))[[1]]) , hubGeneSignificance=NA);
  } # end of if
  datExprUsefulGenes=as.matrix(datExpr)[,restrictGenes & !is.na(restrictGenes)]
  if (is.null(datME) )
  {
    B = blockwiseModules(datExprUsefulGenes, mergeCutHeight = mergeCutHeight1,
                         TOMType = "none", power = power, networkType = networkType,
                         detectCutHeight = detectCutHeight, minModuleSize= minModuleSize );
    datME = data.frame(B$MEs)
  } #end of if
  MMdata=signedKME(datExpr=datExpr, datME=datME, outputColumnName="MM.")
  MMdataPvalue=as.matrix(corPvalueStudent(as.matrix(MMdata), nSamples= dim(as.matrix(datExpr))[[1]]))
  dimnames( MMdataPvalue)[[2]]=paste("Pvalue",names(MMdata), sep=".")
  NS1= networkScreeningGS(datExpr=datExpr, datME=datME, GS=GS )
  # here we compute the eigengene significance measures
  HGS1=data.frame(as.matrix(t(hubGeneSignificance(MMdata ^3,GS^3)),nrow=1))
  datME=data.frame(datME)
  names(HGS1)=paste("HGS", substr(names(MMdata),4,100), sep="")
  # now we compute the AA criterion
  print(signif(HGS1,2))
  output = list(networkScreening=data.frame(NS1, MMdata, MMdataPvalue),
                datME=data.frame(datME), hubGeneSignificance=data.frame(HGS1))
  output
} # end of function automaticNetworkScreeningGS

#--------------------------------------------------------------------------------------------
#
# hubGeneSignificance
#
#--------------------------------------------------------------------------------------------
# The following function computes the hub gene significance as defined
# in the paper Horvath and Dong.
# Input: a data frame with possibly signed module membership measures (also known as module
# eigengene-based connectivity, kME). Further, it requires a possibly signed gene significance measure.
# GS=0 means that the gene is not significant; high positive or negative values mean
# that it is significant.
# The input to this function can include the sign of the correlation.

hubGeneSignificance=function(datKME, GS )
{
  nMEs=dim(as.matrix(datKME))[[2]]
  nGenes= dim(as.matrix(datKME))[[1]]
  if ( length(GS) != nGenes )
    stop("Numbers of genes in 'datKME' and 'GS' are not compatible. ")
  Kmax=as.numeric(apply(as.matrix(abs(datKME)),2,max, na.rm = TRUE))
  Kmax[Kmax==0]=1
  datKME=scale(datKME, center=FALSE, scale=Kmax)
  sumKsq=as.numeric(apply(as.matrix(datKME^2) , 2, sum, na.rm = TRUE))
  sumKsq[sumKsq==0]=1
  HGS=as.numeric(apply(I(GS)*datKME, 2, sum,na.rm = TRUE))/ sumKsq
  as.numeric(HGS)
} #end of function hubGeneSignificance

#--------------------------------------------------------------------------------------------
#
# networkScreeningGS
#
#--------------------------------------------------------------------------------------------

networkScreeningGS = function(datExpr , datME, GS ,
                              oddPower = 3,
                              blockSize = 1000,
                              minimumSampleSize = ..minNSamples,
                              addGS=TRUE)
{
  oddPower=as.integer(oddPower)
  if (as.integer(oddPower/2)==oddPower/2 ) {oddPower=oddPower+1}
  nMEs=dim(as.matrix(datME))[[2]]
  nGenes=dim(as.matrix(datExpr))[[2]]
  GS.Weighted=rep(0,nGenes)
  if ( dim(as.matrix(datExpr))[[1]] != dim(as.matrix(datME))[[1]])
    stop(paste("Expression data and the module eigengenes have different\n",
               "      numbers of observations (arrays). Specifically:\n",
               "      dim(as.matrix(datExpr))[[1]] != dim(as.matrix(datME))[[1]] "))
  if ( dim(as.matrix(datExpr))[[2]] != length(GS) )
    stop(paste("The number of genes in the expression data does not match\n",
               " the length of the gene significance variable. 
Specifically:\n", " dim(as.matrix(datExpr))[[2]] != length(GS) ")); nAvailable=apply(as.matrix(!is.na(datExpr)), 2,sum) ExprVariance=apply(as.matrix(datExpr),2,var, na.rm = TRUE ) restrictGenes=nAvailable>=4 & ExprVariance>0 numberUsefulGenes=sum(restrictGenes,na.rm = TRUE) if ( numberUsefulGenes<3 ) { stop(paste("IMPORTANT: there are fewer than 3 useful genes. \n", " Violations: either fewer than 4 observations or they are constant.\n", " WGCNA cannot be used for these data. Hint: collect more arrays or input genes that vary.")); # datout=data.frame(GS.Weighted=rep(NA, dim(as.matrix(datExpr))[[2]]), GS=GS) } # end of if nBlocks=as.integer(nMEs/blockSize) if (nBlocks>0) for (i in 1:nBlocks) { printFlush(paste("block number = ", i)) index1=c(1:blockSize)+(i-1)* blockSize datMEBatch= datME[,index1] datKMEBatch=as.matrix(signedKME(datExpr,datMEBatch, outputColumnName="MM.")) ESBatch= hubGeneSignificance(datKMEBatch ^oddPower,GS^oddPower) # the following omits the diagonal when datME=datExpr if (nGenes==nMEs) {diag(datKMEBatch[index1,])=0 # missing values will not be used datKMEBatch[is.na(datKMEBatch)]=0 ESBatch[is.na(ESBatch)]=0 } # end of if GS.WeightedBatch= as.matrix(datKMEBatch)^oddPower %*% as.matrix(ESBatch) GS.Weighted=GS.Weighted+GS.WeightedBatch } # end of for (i in 1:nBlocks if (nMEs-nBlocks*blockSize>0 ) { restindex=c((nBlocks*blockSize+1):nMEs) datMEBatch= datME[,restindex] datKMEBatch=as.matrix(signedKME(datExpr,datMEBatch, outputColumnName="MM.")) ESBatch= hubGeneSignificance(datKMEBatch ^oddPower,GS^oddPower) # the following omits the diagonal when datME=datExpr if (nGenes==nMEs) {diag(datKMEBatch[restindex,])=0 # missing values will not be used datKMEBatch[is.na(datKMEBatch)]=0 ESBatch[is.na(ESBatch)]=0 } # end of if (nGenes==nMEs) GS.WeightedBatch= as.matrix(datKMEBatch)^oddPower %*% ESBatch GS.Weighted=GS.Weighted+GS.WeightedBatch } # end of if (nMEs-nBlocks*blockSize>0 ) GS.Weighted=GS.Weighted/nMEs GS.Weighted[nAvailable< minimumSampleSize]=NA 
rankGS.Weighted=rank(-GS.Weighted, ties.method="first") rankGS=rank(-GS, ties.method="first") printFlush(paste("Proportion of agreement between GS.Weighted and GS:")) for (i in c(10,20,50,100,200,500,1000)) { printFlush(paste("Top ", i, " list of genes: prop. of agreement = ", signif(sum(rankGS.Weighted<=i & rankGS<=i,na.rm = TRUE)/i,3) )) } # end of for loop if (mean(abs(GS.Weighted),na.rm = TRUE)>0) { GS.Weighted=GS.Weighted/mean(abs(GS.Weighted),na.rm = TRUE)*mean(abs(GS),na.rm = TRUE) } if (addGS ) GS.Weighted=apply(data.frame(GS.Weighted, GS), 1,mean, na.rm = TRUE) datout=data.frame(GS.Weighted, GS) datout } # end of function #-------------------------------------------------------------------------------------------------- # # networkScreening # #-------------------------------------------------------------------------------------------------- networkScreening = function( y, datME, datExpr, corFnc = "cor", corOptions = "use = 'p'", oddPower = 3, blockSize = 1000, minimumSampleSize = ..minNSamples, addMEy = TRUE, removeDiag = FALSE, weightESy=0.5, getQValues = TRUE) { oddPower=as.integer(oddPower) if (as.integer(oddPower/2)==oddPower/2 ) {oddPower=oddPower+1} nMEs=dim(as.matrix(datME))[[2]] nGenes=dim(as.matrix(datExpr))[[2]] # Here we add y as extra ME if (nGenes>nMEs & addMEy) { datME=data.frame(y,datME) } nMEs=dim(as.matrix(datME))[[2]] RawCor.Weighted=rep(0,nGenes) #Cor.Standard= as.numeric(cor(y,datExpr,use= "p") ) corExpr = parse(text = paste("as.numeric( ", corFnc, "(y,datExpr ", prepComma(corOptions), "))")); Cor.Standard= eval(corExpr) NoAvailable=apply(!is.na(datExpr), 2,sum) Cor.Standard[NoAvailable< minimumSampleSize]=NA if (nGenes==1) { #RawCor.Weighted=as.numeric(cor(y,datExpr,use= "p") ) corExpr = parse(text = paste("as.numeric(" , corFnc, "(y,datExpr ", prepComma(corOptions), "))")); RawCor.Weighted = eval(corExpr); } start = 1; i = 1; while (start <= nMEs) { end = min(start + blockSize -1, nMEs); if (i>1 || end < nMEs) printFlush(paste("block 
number = ", i)) index1=c(start:end) datMEBatch= datME[,index1] datKMEBatch=as.matrix(signedKME(datExpr,datMEBatch, outputColumnName="MM.", corFnc = corFnc, corOptions = corOptions)) # ES.CorBatch= as.vector(cor( as.numeric(as.character(y)) ,datMEBatch, use="p")) corExpr = parse(text = paste("as.vector( ", corFnc, "( as.numeric(as.character(y)) ,datMEBatch", prepComma(corOptions), "))" )); ES.CorBatch = eval(corExpr); #weightESy ES.CorBatch[ES.CorBatch>.999]= weightESy*1+ (1- weightESy)* max(abs(ES.CorBatch[ES.CorBatch <.999 ]),na.rm = TRUE) # the following omits the diagonal when datME=datExpr if (nGenes==nMEs & removeDiag) {diag(datKMEBatch[index1,])=0} if (nGenes==nMEs ) { # missing values will not be used datKMEBatch[is.na(datKMEBatch)]=0 ES.CorBatch[is.na(ES.CorBatch)]=0 } # end of if RawCor.WeightedBatch= as.matrix(datKMEBatch)^oddPower %*% as.matrix(ES.CorBatch^oddPower) RawCor.Weighted=RawCor.Weighted+RawCor.WeightedBatch start = end + 1; } # end of while (start <= nMEs) RawCor.Weighted=RawCor.Weighted/nMEs RawCor.Weighted[NoAvailable< minimumSampleSize]=NA #to avoid dividing by zero we scale it as follows if (max(abs(RawCor.Weighted),na.rm = TRUE)==1) RawCor.Weighted=RawCor.Weighted/1.0000001 if (max(abs( Cor.Standard),na.rm = TRUE)==1) Cor.Standard=Cor.Standard/1.0000001 RawZ.Weighted=sqrt(NoAvailable -2)*RawCor.Weighted/sqrt(1-RawCor.Weighted^2) Z.Standard= sqrt(NoAvailable -2)* Cor.Standard/sqrt(1-Cor.Standard^2) if (sum(abs(Z.Standard),na.rm = TRUE) >0 ) { Z.Weighted=RawZ.Weighted/sum(abs(RawZ.Weighted),na.rm = TRUE)*sum(abs(Z.Standard),na.rm = TRUE) } # end of if h1=Z.Weighted/sqrt(NoAvailable-2) Cor.Weighted=h1/sqrt(1+h1^2) p.Weighted=as.numeric(2*(1-pt(abs(Z.Weighted),NoAvailable-2))) p.Standard=2*(1-pt(abs(Z.Standard),NoAvailable-2)) if (getQValues) { # since the function qvalue cannot handle missing data, we set missing p-values to 1. 
p.Weighted2=p.Weighted p.Standard2=p.Standard p.Weighted2[is.na(p.Weighted)]=1 p.Standard2[is.na(p.Standard)]=1 q.Weighted=try(qvalue(p.Weighted2)$qvalues, silent = TRUE) q.Standard=try(qvalue(p.Standard2)$qvalues, silent = TRUE) if (inherits(q.Weighted, "try-error") ) { warning("Calculation of weighted q-values failed; the q-values will be returned as NAs."); q.Weighted=rep(NA, length(p.Weighted) ) } if (inherits(q.Standard, "try-error")) { warning("Calculation of standard q-values failed; the q-values will be returned as NAs."); q.Standard=rep(NA, length(p.Standard) ) } } else { q.Weighted=rep(NA, length(p.Weighted) ) q.Standard=rep(NA, length(p.Standard) ) if (getQValues) printFlush("networkScreening: Warning: package qvalue not found. q-values will not be calculated."); } rankCor.Weighted=rank(-abs(Cor.Weighted), ties.method="first") rankCor.Standard=rank(-abs(Cor.Standard), ties.method="first") printFlush(paste("Proportion of agreement between lists based on abs(Cor.Weighted) and abs(Cor.Standard):")) for (i in c(10,20,50,100,200,500,1000)) { printFlush(paste("Top ", i, " list of genes: prop. agree = ", signif(sum(rankCor.Weighted<=i & rankCor.Standard<=i,na.rm = TRUE)/i,3))) } # end of for loop datout=data.frame(p.Weighted, q.Weighted, Cor.Weighted, Z.Weighted, p.Standard, q.Standard, Cor.Standard, Z.Standard) names(datout) = sub("Cor", corFnc, names(datout), fixed = TRUE); datout } # end of function ############################################################################################## # # Functions included from NetworkFunctions-PL-07.R # Selected ones only # ############################################################################################## #-------------------------------------------------------------------------- # # labeledBarplot = function ( Matrix, labels, ... 
) { # #-------------------------------------------------------------------------- # # Plots a barplot of the Matrix and writes the labels underneath such that they are readable. labeledBarplot = function ( Matrix, labels, colorLabels = FALSE, colored = TRUE, setStdMargins = TRUE, stdErrors = NULL, cex.lab = NULL, xLabelsAngle = 45, ... ) { if (setStdMargins) par(mar=c(3,3,2,2)+0.2) if (colored) { colors = substring(labels, 3); } else { colors = rep("grey", times = ifelse(length(dim(Matrix))<2, length(Matrix), dim(Matrix)[[2]])); } ValidColors = !is.na(match(substring(labels, 3), colors())); if (sum(ValidColors)>0) ColorLabInd = c(1:length(labels))[ValidColors] if (sum(!ValidColors)>0) TextLabInd = c(1:length(labels))[!ValidColors] colors[!ValidColors] = "grey"; mp = barplot(Matrix, col = colors, xaxt = "n", xlab="", yaxt="n", ...) if (length(dim(Matrix))==2) { means = apply(Matrix, 2, sum); } else { means = Matrix; } if (!is.null(stdErrors)) addErrorBars(means, 1.96*stdErrors, two.side = TRUE); # axis(1, labels = FALSE) nlabels = length(labels) plotbox = par("usr"); xmin = plotbox[1]; xmax = plotbox[2]; ymin = plotbox[3]; yrange = plotbox[4]-ymin; ymax = plotbox[4]; # print(paste("yrange:", yrange)); if (nlabels>1) { spacing = (mp[length(mp)] - mp[1])/(nlabels-1); } else { spacing = (xmax-xmin); } yoffset = yrange/30 xshift = spacing/2; xrange = spacing * nlabels; if (is.null(cex.lab)) cex.lab = 1; if (colorLabels) { #rect(xshift + ((1:nlabels)-1)*spacing - spacing/2.1, ymin - spacing/2.1 - spacing/8, # xshift + ((1:nlabels)-1)*spacing + spacing/2.1, ymin - spacing/8, # density = -1, col = substring(labels, 3), border = substring(labels, 3), xpd = TRUE) if (sum(!ValidColors)>0) { text( mp[!ValidColors] , ymin - 0.02, srt = 45, adj = 1, labels = labels[TextLabInd], xpd = TRUE, cex = cex.lab, srt = xLabelsAngle) } if (sum(ValidColors)>0) { rect(mp[ValidColors] - spacing/2.1, ymin - 2*spacing/2.1 * yrange/xrange - yoffset, mp[ValidColors] + spacing/2.1, ymin - 
yoffset, density = -1, col = substring(labels[ValidColors], 3), border = substring(labels[ValidColors], 3), xpd = TRUE) } } else { text(((1:nlabels)-1)*spacing +spacing/2 , ymin - 0.02*yrange, srt = 45, adj = 1, labels = labels, xpd = TRUE, cex = cex.lab, srt = xLabelsAngle) } axis(2, labels = TRUE) } #-------------------------------------------------------------------------- # # sizeGrWindow # #-------------------------------------------------------------------------- # if the current device isn't of the required dimensions, close it and open a new one. sizeGrWindow = function(width, height) { din = par("din"); if ( (din[1]!=width) | (din[2]!=height) ) { dev.off(); dev.new(width = width, height=height); } } #====================================================================================================== # GreenToRed.R #====================================================================================================== greenBlackRed = function(n, gamma = 1) { half = as.integer(n/2); red = c(rep(0, times = half), 0, seq(from=0, to=1, length.out = half)^(1/gamma)); green = c(seq(from=1, to=0, length.out = half)^(1/gamma), rep(0, times = half+1)); blue = rep(0, times = 2*half+1); col = rgb(red, green, blue, maxColorValue = 1); col; } greenWhiteRed = function(n, gamma = 1, warn = TRUE) { if (warn) warning(spaste("WGCNA::greenWhiteRed: this palette is not suitable for people\n", "with green-red color blindness (the most common kind of color blindness).\n", "Consider using the function blueWhiteRed instead.")); half = as.integer(n/2); red = c(seq(from=0, to=1, length.out = half)^(1/gamma), rep(1, times = half+1)); green = c(rep(1, times = half+1), seq(from=1, to=0, length.out = half)^(1/gamma)); blue = c(seq(from=0, to=1, length.out = half)^(1/gamma), 1, seq(from=1, to=0, length.out = half)^(1/gamma)); col = rgb(red, green, blue, maxColorValue = 1); col; } redWhiteGreen = function(n, gamma = 1) { half = as.integer(n/2); green = c(seq(from=0, to=1, length.out = 
half)^(1/gamma), rep(1, times = half+1)); red = c(rep(1, times = half+1), seq(from=1, to=0, length.out = half)^(1/gamma)); blue = c(seq(from=0, to=1, length.out = half)^(1/gamma), 1, seq(from=1, to=0, length.out = half)^(1/gamma)); col = rgb(red, green, blue, maxColorValue = 1); col; } #====================================================================================================== # # Color pallettes that are more friendly to people with common color blindness # #====================================================================================================== blueWhiteRed = function(n, gamma = 1, endSaturation = 1, blueEnd = c(0.05 + (1-endSaturation) * 0.45 , 0.55 + (1-endSaturation) * 0.25, 1.00), redEnd = c(1.0, 0.2 + (1-endSaturation) * 0.6, 0.6*(1-endSaturation)), middle = c(1,1,1) ) { if (endSaturation >1 | endSaturation < 0) stop("'endSaturation' must be between 0 and 1."); half = as.integer(n/2); if (n%%2 == 0) { index1 = c(1:half); index2 = c(1:half)+half; frac1 = ((index1-1)/(half-1))^(1/gamma); frac2 = rev(frac1); } else { index1 = c(1:(half + 1)) index2 = c(1:half) + half + 1 frac1 = (c(0:half)/half)^(1/gamma); frac2 = rev((c(0:(half-1))/half)^(1/gamma)); } cols = matrix(0, n, 3); for (c in 1:3) { cols[ index1, c] = blueEnd[c] + (middle[c] - blueEnd[c]) * frac1; cols[ index2, c] = redEnd[c] + (middle[c] - redEnd[c]) * frac2; } rgb(cols[, 1], cols[, 2], cols[, 3], maxColorValue = 1); } #========================================================================================================= # # KeepCommonProbes # #------------------------------------------------------------------------------------------- # Filters out probes that are not common to all datasets, and puts probes into the same order in each # set. Works by creating dataframes of probe names and their indices and merging them all. 
keepCommonProbes = function(multiExpr, orderBy = 1) { size = checkSets(multiExpr); nSets = size$nSets; if (nSets<=0) stop("No expression data given!"); Names = data.frame(Names = names(multiExpr[[orderBy]]$data)); if (nSets>1) for (set in (1:nSets)) { SetNames = data.frame(Names = names(multiExpr[[set]]$data), index = c(1:dim(multiExpr[[set]]$data)[2])); Names = merge(Names, SetNames, by.x = "Names", by.y = "Names", all = FALSE, sort = FALSE); } for (set in 1:nSets) multiExpr[[set]]$data = multiExpr[[set]]$data[, Names[, set+1]]; multiExpr; } #-------------------------------------------------------------------------------------- # # addTraitToPCs # #-------------------------------------------------------------------------------------- # Adds a trait vector to a set of eigenvectors. # Caution: multiTraits is assumed to be a vector of lists with each list having an entry data which is # a nSamples x nTraits data frame with an appropriate column name, not a vector. addTraitToMEs = function(multiME, multiTraits) { nSets = length(multiTraits); setsize = checkSets(multiTraits); nTraits = setsize$nGenes; nSamples = setsize$nSamples; if (length(multiME)!=nSets) stop("Numbers of sets in multiME and multiTraits parameters differ - must be the same."); multiMETs = vector(mode="list", length=nSets); for (set in 1:nSets) { trait.subs = multiTraits[[set]]$data; multiMET = as.data.frame(cbind(multiME[[set]]$data, trait.subs)); colnames(multiMET) = c(colnames(multiME[[set]]$data), colnames(trait.subs)); if (!is.null(multiME[[set]]$AET)) { AET = as.data.frame(cbind(multiME[[set]]$averageExpr, trait.subs)); colnames(AET) = c(colnames(multiME[[set]]$averageExpr), colnames(trait.subs)); } multiMETs[[set]] = list(data=multiMET); } multiMETs; } #-------------------------------------------------------------------------------------- # # CorrelationPreservation # #-------------------------------------------------------------------------------------- # # Given a set of multiME (or 
OrderedMEs), calculate the preservation values for each module in each pair # of datasets and return them as a matrix correlationPreservation = function(multiME, setLabels, excludeGrey = TRUE, greyLabel = "grey") { nSets = length(multiME); if (nSets!=length(setLabels)) stop("The lengths of multiME and setLabels must equal."); if (nSets<=1) stop("Something is wrong with argument multiME: its length is 0 or 1"); Names = names(multiME[[1]]$data); if (excludeGrey) { Use = substring(Names, 3)!=greyLabel; } else { Use = rep(TRUE, times = length(Names)); } No.Mods = ncol(multiME[[1]]$data[, Use]); CP = matrix(0, nrow = No.Mods, ncol = nSets*(nSets-1)/2); diag(CP) = 1; CPInd = 1; CPNames = NULL; for (i in 1:(nSets-1)) for (j in (i+1):nSets) { corME1 = cor(multiME[[i]]$data[, Use], use="p"); corME2 = cor(multiME[[j]]$data[, Use], use="p"); d = 1-abs(tanh((corME1 - corME2) / (abs(corME1) + abs(corME2))^2)); CP[ ,CPInd] = apply(d, 1, sum)-1; CPNames = c(CPNames, paste(setLabels[i], "::", setLabels[j], collapse = "")); CPInd = CPInd + 1; } CPx = as.data.frame(CP); names(CPx) = CPNames; rownames(CPx) = make.unique(Names[Use]); CPx; } #-------------------------------------------------------------------------------------- # # setCorrelationPreservation # #-------------------------------------------------------------------------------------- # # Given a set of multiME (or OrderedMEs), calculate the preservation values for each each pair # of datasets and return them as a matrix. setCorrelationPreservation = function(multiME, setLabels, excludeGrey = TRUE, greyLabel = "grey", method = "absolute") { m = charmatch(method, c("absolute", "hyperbolic")); if (is.na(m)) { stop("Unrecognized method given. Recognized methods are absolute, hyperbolic. 
"); } nSets = length(multiME); if (nSets!=length(setLabels)) stop("The lengths of multiME and setLabels must equal."); if (nSets<=1) stop("Something is wrong with argument multiME: its length is 0 or 1"); Names = names(multiME[[1]]$data); if (excludeGrey) { Use = substring(Names, 3)!=greyLabel; } else { Use = rep(TRUE, times = length(Names)); } No.Mods = ncol(multiME[[1]]$data[, Use]); SCP = matrix(0, nrow = nSets, ncol = nSets); diag(SCP) = 0; for (i in 1:(nSets-1)) for (j in (i+1):nSets) { corME1 = cor(multiME[[i]]$data[, Use], use="p"); corME2 = cor(multiME[[j]]$data[, Use], use="p"); if (m==1) { d = 1 - abs(corME1 - corME2)/2; } else { d = 1-abs(tanh((corME1 - corME2) / (abs(corME1) + abs(corME2))^2)); } SCP[i,j] = sum(d[upper.tri(d)])/sum(upper.tri(d)); SCP[j,i] = SCP[i,j]; } SCPx = as.data.frame(SCP); names(SCPx) = setLabels; rownames(SCPx) = make.unique(setLabels); SCPx; } #--------------------------------------------------------------------------------------- # # preservationNetworkDensity # #--------------------------------------------------------------------------------------- #--------------------------------------------------------------------------------------- # # preservationNetworkConnectivity # #--------------------------------------------------------------------------------------- # This function returns connectivities of nodes in preservation networks preservationNetworkConnectivity = function( multiExpr, useSets = NULL, useGenes = NULL, corFnc = "cor", corOptions = "use='p'", networkType = "unsigned", power = 6, sampleLinks = NULL, nLinks = 5000, blockSize = 1000, setSeed = 12345, weightPower = 2, verbose = 2, indent = 0) { spaces = indentSpaces(indent) size = checkSets(multiExpr); nGenes = size$nGenes; nSets = size$nSets; if (!is.null(useSets) || !is.null(useGenes)) { if (is.null(useSets)) useSets = c(1:nSets) if (is.null(useGenes)) useGenes = c(1:nGenes) useExpr = vector(mode = "list", length = length(useSets)); for (set in 1:length(useSets)) 
useExpr[[set]] = list(data = multiExpr[[useSets[set]]]$data[, useGenes]); multiExpr = useExpr; rm(useExpr); gc(); } size = checkSets(multiExpr); nGenes = size$nGenes; nSets = size$nSets; if (is.null(sampleLinks)) { sampleLinks = (nGenes > nLinks); } if (sampleLinks) nLinks = min(nLinks, nGenes) else nLinks = nGenes; if (blockSize * nLinks > .largestBlockSize) blockSize = as.integer(.largestBlockSize/nLinks); intNetworkType = charmatch(networkType, .networkTypes); if (is.na(intNetworkType)) stop(paste("Unrecognized networkType argument. Recognized values are (unique abbreviations of)", paste(.networkTypes, collapse = ", "))); subtract = rep(1, nGenes); if (sampleLinks) { if (verbose > 0) printFlush(paste(spaces, "preservationNetworkConnectivity: selecting sample pool of size", nLinks, "..")) sd = apply(multiExpr[[1]]$data, 2, sd, na.rm = TRUE); order = order(-sd); saved = FALSE; if (exists(".Random.seed")) { saved = TRUE; savedSeed = .Random.seed if (is.numeric(setSeed)) set.seed(setSeed); } samplePool = order[sample(x = nGenes, size = nLinks)] if (saved) .Random.seed <<- savedSeed; subtract[-samplePool] = 0; } nPairComps = nSets * (nSets -1)/2; allPres = rep(NA, nGenes); allPresW = rep(NA, nGenes); allPresH = rep(NA, nGenes); allPresWH = rep(NA, nGenes); pairPres = matrix(NA, nGenes, nPairComps); pairPresW = matrix(NA, nGenes, nPairComps); pairPresH = matrix(NA, nGenes, nPairComps); pairPresWH = matrix(NA, nGenes, nPairComps); compNames = NULL; for (set1 in 1:(nSets-1)) for (set2 in (set1+1):nSets) compNames = c(compNames, paste(set1, "vs", set2)); dimnames(pairPres) = list(names(multiExpr[[1]]$data), compNames); dimnames(pairPresW) = list(names(multiExpr[[1]]$data), compNames); dimnames(pairPresH) = list(names(multiExpr[[1]]$data), compNames); dimnames(pairPresWH) = list(names(multiExpr[[1]]$data), compNames); if (verbose>0) { pind = initProgInd(trailStr = " done"); } nBlocks = as.integer((nGenes-1)/blockSize); SetRestrConn = NULL; start = 1; if (sampleLinks) { 
corEval = parse(text = paste(corFnc, "(multiExpr[[set]]$data[, samplePool], multiExpr[[set]]$data[, blockIndex] ", prepComma(corOptions), ")")) } else { corEval = parse(text = paste(corFnc, "(multiExpr[[set]]$data, multiExpr[[set]]$data[, blockIndex] ", prepComma(corOptions), ")")) } while (start <= nGenes) { end = start + blockSize-1; if (end>nGenes) end = nGenes; blockIndex = c(start:end); nBlockGenes = end-start+1; blockAdj = array(0, dim = c(nSets, nLinks, nBlockGenes)); #if (verbose>1) printFlush(paste(spaces, "..working on genes", start, "through", end, "of", nGenes)) for (set in 1:nSets) { c = eval(corEval); if (intNetworkType==1) { c = abs(c); } else if (intNetworkType==2) { c = (1+c)/2; } else if (intNetworkType==3) { c[c < 0] = 0; } else stop("Internal error: intNetworkType has wrong value:", intNetworkType, ". Sorry!"); adj_mat = as.matrix(c^power); if (sum(is.na(adj_mat)) > 0) stop("NA values present in adjacency - this function cannot handle them yet. Sorry!"); adj_mat[is.na(adj_mat)] = 0; blockAdj[set, , ] = adj_mat } blockAdj2 = blockAdj; dim(blockAdj2) = c(nSets, nLinks * nBlockGenes); min = matrix(0, nLinks, nBlockGenes) max = matrix(0, nLinks, nBlockGenes); #which = matrix(0, nLinks, nBlockGenes) #res = .C("minWhichMin", as.double(blockAdj), as.integer(nSets), as.integer(nLinks * nBlockGenes), # min = as.double(min), as.double(which)) #min[, ] = res$min; #res = .C("minWhichMin", as.double(-blockAdj), as.integer(nSets), as.integer(nLinks * nBlockGenes), # min = as.double(min), as.double(which)) #max[, ] = -res$min; #rm(res); min[, ] = colMins(blockAdj2); max[, ] = colMaxs(blockAdj2); diff = max - min; allPres[blockIndex] = (apply(1-diff, 2, sum) - subtract[blockIndex])/(nLinks - subtract[blockIndex]); weight = ((max + min)/2)^weightPower allPresW[blockIndex] = (apply((1-diff) * weight, 2, sum) - subtract[blockIndex])/ (apply(weight, 2, sum) - subtract[blockIndex]); hyp = 1-tanh(diff/(max+min)^2); allPresH[blockIndex] = (apply(hyp, 2, sum) - 
subtract[blockIndex])/(nLinks - subtract[blockIndex]); allPresWH[blockIndex] = (apply(hyp * weight, 2, sum) - subtract[blockIndex])/ (apply(weight, 2, sum) - subtract[blockIndex]); compNames = NULL; compInd = 1; for (set1 in 1:(nSets-1)) for (set2 in (set1+1):nSets) { diff = abs(blockAdj[set1, , ] - blockAdj[set2, , ]) compNames = c(compNames, paste(set1, "vs", set2)); pairPres[blockIndex, compInd] = (apply(1-diff, 2, sum) - subtract[blockIndex]) / (nLinks - subtract[blockIndex]); weight = ((blockAdj[set1, , ] + blockAdj[set2, , ])/2)^weightPower pairPresW[blockIndex, compInd] = (apply((1-diff) * weight, 2, sum) - subtract[blockIndex]) / (apply(weight, 2, sum) - subtract[blockIndex]); hyp = 1-tanh(diff/(blockAdj[set1, , ] + blockAdj[set2, , ])^2) pairPresH[blockIndex, compInd] = (apply(hyp, 2, sum) - subtract[blockIndex]) / (nLinks - subtract[blockIndex]); pairPresWH[blockIndex, compInd] = (apply(hyp * weight, 2, sum) - subtract[blockIndex]) / (apply(weight, 2, sum) - subtract[blockIndex]); compInd = compInd + 1; } start = end+1; if (verbose>0) pind = updateProgInd(end/nGenes, pind); gc(); } if (verbose>0) printFlush(" "); list(pairwise = pairPres, complete = allPres, pairwiseWeighted = pairPresW, completeWeighted = allPresW, pairwiseHyperbolic = pairPresH, completeHyperbolic = allPresH, pairwiseWeightedHyperbolic = pairPresWH, completeWeightedHyperbolic = allPresWH) } #-------------------------------------------------------------------------------------- # # plotEigengeneNetworks # #-------------------------------------------------------------------------------------- # Plots a matrix plot of the ME(T)s. On the diagonal the heatmaps show correlation of MEs in the # particular subset; off-diagonal are differences in the correlation matrix. # setLabels is a vector of titles for the diagonal diagrams; the off-diagonal will have no title # for now. 
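# correlationPreservation, setCorrelationPreservation and preservationNetworkConnectivity above
# all compare two networks link by link using one of two measures: an "absolute" (standard)
# measure and a "hyperbolic" measure that down-weights differences between strong links.
# A minimal stand-alone illustration follows; the names presStandard and presHyperbolic are
# hypothetical helpers for this sketch, not package functions.

```r
# Standard preservation: 1 minus half the absolute difference of the two link strengths
# (mirrors d = 1 - abs(corME1 - corME2)/2 in setCorrelationPreservation).
presStandard = function(a1, a2) 1 - abs(a1 - a2)/2;

# Hyperbolic preservation: the difference is scaled by the squared total strength, so the
# same absolute difference is penalized much less for a pair of strong links.
presHyperbolic = function(a1, a2) 1 - tanh(abs(a1 - a2)/(a1 + a2)^2);

# Identical links are perfectly preserved under both measures; a difference of 0.2 is
# penalized far less between strong links (0.9 vs 0.7) than between weak ones (0.3 vs 0.1).
```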
plotEigengeneNetworks = function( multiME, setLabels, letterSubPlots = FALSE, Letters = NULL, excludeGrey = TRUE, greyLabel = "grey", plotDendrograms = TRUE, plotHeatmaps = TRUE, setMargins = TRUE, marDendro = NULL, marHeatmap = NULL, colorLabels = TRUE, signed = TRUE, heatmapColors = NULL, plotAdjacency = TRUE, printAdjacency = FALSE, cex.adjacency = 0.9, coloredBarplot = TRUE, barplotMeans = TRUE, barplotErrors = FALSE, plotPreservation = "standard", zlimPreservation = c(0,1), printPreservation = FALSE, cex.preservation = 0.9, ...) { # invertColors = FALSE; size = checkSets(multiME, checkStructure = TRUE); if (!size$structureOK) { #printFlush(paste( # "plotEigengeneNetworks: Given multiME does not appear to be a multi-set structure.\n", # "Will attempt to convert it into a multi-set structure containing 1 set.")); multiME = fixDataStructure(multiME); } if (is.null(Letters)) Letters = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"; if (is.null(heatmapColors)) if (signed) { heatmapColors = blueWhiteRed(50); } else { heatmapColors = heat.colors(30); } nSets = length(multiME); cex = par("cex"); mar = par("mar"); nPlotCols = nSets; nPlotRows = as.numeric(plotDendrograms) + nSets * as.numeric(plotHeatmaps); if (nPlotRows==0) stop("Nothing to plot: neither dendrograms nor heatmaps requested.") par(mfrow = c(nPlotRows, nPlotCols)); par(cex = cex); if (excludeGrey) for (set in 1:nSets) multiME[[set]]$data = multiME[[set]]$data[ , substring(names(multiME[[set]]$data),3)!=greyLabel] plotPresTypes = c("standard", "hyperbolic", "both") ipp = pmatch(plotPreservation, plotPresTypes); if (is.na(ipp)) stop(paste("Invalid 'plotPreservation'. 
Available choices are", paste(plotPresTypes, sep = ", "))); letter.ind = 1; if (plotDendrograms) for (set in 1:nSets) { #par(cex = StandardCex/1.4); par(mar = marDendro); labels = names(multiME[[set]]$data); uselabels = labels[substring(labels,3)!=greyLabel]; corME = cor(multiME[[set]]$data[substring(labels,3)!=greyLabel, substring(labels,3)!=greyLabel], use="p"); disME = as.dist(1-corME); clust = fastcluster::hclust(disME, method = "average"); if (letterSubPlots) { main = paste(substring(Letters, letter.ind, letter.ind), ". ", setLabels[set], sep=""); } else { main = setLabels[set]; } #validColors = is.na(match(uselabels, colors())); #plotLabels = ifelse(validColors, substring(uselabels[validColors], 3), uselabels[!validColors]); plotLabels = uselabels; plot(clust, main = main, sub="", xlab="", labels = plotLabels, ylab="", ylim=c(0,1)); letter.ind = letter.ind + 1; } if (plotHeatmaps) for (i.row in (1:nSets)) for (i.col in (1:nSets)) { letter.ind = i.row * nSets + i.col; if (letterSubPlots) { #letter = paste("(", substring(Letters, first = letter.ind, last = letter.ind), ")", sep = ""); letter = paste( substring(Letters, first = letter.ind, last = letter.ind), ". 
", sep = ""); } else { letter = NULL; } par(cex = cex); if (setMargins) { if (is.null(marHeatmap)) { if (colorLabels) { par(mar = c(1,2,3,4)+0.2); } else { par(mar = c(6,7,3,5)+0.2); } } else { par(mar = marHeatmap); } } nModules = dim(multiME[[i.col]]$data)[2] textMat = NULL; if (i.row==i.col) { corME = cor(multiME[[i.col]]$data, use="p") pME = corPvalueFisher(corME, nrow(multiME[[i.col]]$data)); if (printAdjacency) { textMat = paste(signif(corME, 2), "\n", signif(pME, 1)); dim(textMat) = dim(corME) } if (signed) { if (plotAdjacency) { if (printAdjacency) { textMat = paste(signif((1+corME)/2, 2), "\n", signif(pME, 1)); dim(textMat) = dim(corME) } labeledHeatmap((1+corME)/2, names(multiME[[i.col]]$data), names(multiME[[i.col]]$data), main=paste(letter, setLabels[[i.col]]), invertColors=FALSE, zlim=c(0,1.0), colorLabels = colorLabels, colors = heatmapColors, setStdMargins = FALSE, textMatrix = textMat, cex.text = cex.adjacency, ...); } else { labeledHeatmap(corME, names(multiME[[i.col]]$data), names(multiME[[i.col]]$data), main=paste(letter, setLabels[[i.col]]), invertColors=FALSE, zlim=c(-1,1.0), colorLabels = colorLabels, colors = heatmapColors, setStdMargins = FALSE, textMatrix = textMat, cex.text = cex.adjacency, ...); } } else { labeledHeatmap(abs(corME), names(multiME[[i.col]]$data), names(multiME[[i.col]]$data), main=paste(letter, setLabels[[i.col]]), invertColors=FALSE, zlim=c(0,1.0), colorLabels = colorLabels, colors = heatmapColors, setStdMargins = FALSE, textMatrix = textMat, cex.text = cex.adjacency, ...); } } else { corME1 = cor(multiME[[i.col]]$data, use="p"); corME2 = cor(multiME[[i.row]]$data, use="p"); cor.dif = (corME1 - corME2)/2; d = tanh((corME1 - corME2) / (abs(corME1) + abs(corME2))^2); # d = abs(corME1 - corME2) / (abs(corME1) + abs(corME2)); if (ipp==1 | ipp==3) { dispd = cor.dif; main = paste(letter, "Preservation"); if (ipp==3) { dispd[upper.tri(d)] = d[upper.tri(d)]; main=paste(letter, "Hyperbolic preservation (UT)\nStandard preservation 
(LT)") } } else { dispd = d; main = paste(letter, "Hyperbolic preservation"); } if (i.row>i.col) { if (signed) { half = as.integer(length(heatmapColors)/2); range = c(half:length(heatmapColors)); halfColors = heatmapColors[range]; } else { halfColors = heatmapColors; } if (printPreservation) { printMtx = matrix(paste(".", as.integer((1-abs(dispd))*100), sep = ""), nrow = nrow(dispd), ncol = ncol(dispd)); printMtx[printMtx==".100"] = "1"; } else { printMtx = NULL; } if (sum( ((1-abs(dispd)) < zlimPreservation[1]) | ((1-abs(dispd)) > zlimPreservation[2]))>0) warning("plotEigengeneNetworks: Correlation preservation data out of zlim range."); labeledHeatmap(1-abs(dispd), names(multiME[[i.col]]$data), names(multiME[[i.col]]$data), main = main, invertColors=FALSE, colorLabels = colorLabels, zlim = zlimPreservation, colors = halfColors, setStdMargins = FALSE, textMatrix = printMtx, cex.text = cex.preservation, ...); } else { if (ipp==2) { dp = 1-abs(d); method = "Hyperbolic:"; } else { dp = 1-abs(cor.dif); method = "Preservation:"; } diag(dp) = 0; if (barplotMeans) { sum_dp = mean(dp[upper.tri(dp)]); means = apply(dp, 2, sum)/(ncol(dp)-1); if (barplotErrors) { errors = sqrt( (apply(dp^2, 2, sum)/(ncol(dp)-1) - means^2)/(ncol(dp)-2)); } else { errors = NULL; } labeledBarplot(means, names(multiME[[i.col]]$data), main=paste(letter, "D=", signif(sum_dp,2)), ylim=c(0,1), colorLabels = colorLabels, colored = coloredBarplot, setStdMargins = FALSE, stdErrors = errors, ... ) } else { sum_dp = sum(dp[upper.tri(dp)]); labeledBarplot(dp, names(multiME[[i.col]]$data), main=paste(letter, method, "sum = ", signif(sum_dp,3)), ylim=c(0,dim(dp)[[1]]), colorLabels = colorLabels, colored = coloredBarplot, setStdMargins = FALSE, ...
) } } } } } #==================================================================================================== # # numbers2colors: convert a vector of numbers to colors # #==================================================================================================== # Turn a numerical variable into a color indicator. x can be a matrix or a vector. # For discrete variables, consider also labels2colors. numbers2colors = function(x, signed = NULL, centered = signed, lim = NULL, commonLim = FALSE, colors = if (signed) blueWhiteRed(100) else blueWhiteRed(100)[51:100], naColor = "grey") { x = as.matrix(x); if (!is.numeric(x)) stop("'x' must be numeric. For a factor, please use as.numeric(x) in the call."); if (is.null(signed)) { if (any(x<0, na.rm = TRUE) & any(x>0, na.rm = TRUE)) { signed = TRUE; } else signed = FALSE; } if (is.null(centered)) centered = signed; if (is.null(lim)) { if (signed & centered) { max = apply(abs(x), 2, max, na.rm = TRUE); lim = as.matrix(cbind(-max, max)); } else { lim = as.matrix(cbind(apply(x, 2, min, na.rm = TRUE), apply(x, 2, max, na.rm = TRUE))); } if (commonLim) lim = c(min(lim[, 1], na.rm = TRUE), max(lim[, 2], na.rm = TRUE)); } if (is.null(dim(lim))) { if (length(lim)!=2) stop("'lim' must be a vector of length 2 or a matrix with 2 columns."); if (!is.numeric(lim)) stop("'lim' must be numeric"); if (sum(is.finite(lim))!=2) stop("'lim' must be finite."); lim = t(as.matrix(lim)); } else { if (ncol(x)!=nrow(lim)) stop("Incompatible numbers of columns in 'x' and rows in 'lim'.") if (!is.numeric(lim)) stop("'lim' must be numeric"); if (sum(is.finite(lim))!=length(lim)) stop("'lim' must be finite."); } xMin = matrix(lim[,1], nrow = nrow(x), ncol = ncol(x), byrow = TRUE) xMax = matrix(lim[,2], nrow = nrow(x), ncol = ncol(x), byrow = TRUE) if (sum(xMin==xMax)>0) warning("(some columns in) 'x' are constant. 
Their color will be the color of NA.");
  xx = x;
  xx[is.na(xx)] = ((xMin+xMax)[is.na(xx)])/2;
  if (sum(x < xMin, na.rm = TRUE) > 0)
  {
    warning("Some values of 'x' are below given minimum and will be truncated to the minimum.");
    x[xx<xMin] = xMin[xx<xMin];
  }
  if (sum(x > xMax, na.rm = TRUE) > 0)
  {
    warning("Some values of 'x' are above given maximum and will be truncated to the maximum.");
    x[xx>xMax] = xMax[xx>xMax];
  }
  mmEq = xMin==xMax;
  nColors = length(colors);
  xCol = array(naColor, dim = dim(x));
  xInd = (x - xMin)/(xMax-xMin);
  xInd[xInd==1] = 1-0.5/nColors;
  xCol[!mmEq] = colors[as.integer(xInd[!mmEq] * nColors) + 1];
  xCol[is.na(xCol)] = naColor;
  xCol;
}

#====================================================================================================
#
# Rand index calculation
#
#====================================================================================================

# this function is used for computing the Rand index below...

.choosenew <- function(n,k){
  n <- c(n)
  out1 <- rep(0,length(n))
  for (i in c(1:length(n)) ){
    if (n[i]<k) { out1[i] <- 0 } else { out1[i] <- choose(n[i], k) }
  }
  out1
}

randIndex <- function(tab, adjust = TRUE)
{
  a <- 0; b <- 0; c <- 0; d <- 0; nn <- 0
  m <- nrow(tab);
  n <- ncol(tab);
  for (i in 1:m) {
    c <- 0
    for (j in 1:n) {
      a <- a + .choosenew(tab[i,j], 2)
      nj <- sum(tab[,j])
      c <- c + .choosenew(nj, 2)
    }
    ni <- sum(tab[i,])
    b <- b + .choosenew(ni, 2)
    nn <- nn + ni
  }
  if (adjust) {
    d <- .choosenew(nn, 2)
    adrand <- (a - (b*c)/d)/(0.5*(b+c) - (b*c)/d)
    adrand
  } else {
    b <- b - a
    c <- c - a
    d <- .choosenew(nn, 2) - a - b - c
    rand <- (a+d)/(a+b+c+d)
    rand
  }
}

goodGenes = function(datExpr, weights = NULL, useSamples = NULL, useGenes = NULL,
                     minFraction = 1/2, minNSamples = ..minNSamples, minNGenes = ..minNGenes,
                     tol = NULL, minRelativeWeight = 0.1,
                     verbose = 1, indent = 0)
{
  datExpr = as.matrix(datExpr);
  weights = .checkAndScaleWeights(weights, datExpr, scaleByMax = TRUE);
  if (is.null(tol)) tol = 1e-10 * max(abs(datExpr), na.rm = TRUE);
  if (is.null(useGenes)) useGenes = rep(TRUE, ncol(datExpr));
  if (is.null(useSamples)) useSamples = rep(TRUE, nrow(datExpr));
  if (length(useGenes)!= ncol(datExpr))
    stop("Length of nGenes is not compatible with number of columns in datExpr.");
  if (length(useSamples)!= nrow(datExpr))
    stop("Length of nSamples is not compatible with number of rows in datExpr.");
  nSamples = sum(useSamples);
  nGenes = sum(useGenes);
  if (length(weights)==0)
  {
    nPresent = colSums(!is.na(datExpr[useSamples, useGenes, drop = FALSE]))
  } else
    nPresent = colSums(!is.na(datExpr[useSamples, useGenes, drop = FALSE]) &
                         weights[useSamples, useGenes] > minRelativeWeight, na.rm = TRUE)
  gg = useGenes;
  gg[useGenes][nPresent < minNSamples] = FALSE;
  var = colWeightedVars(datExpr, w = if (length(weights) > 0) weights else NULL,
                        rows = which(useSamples), cols = which(gg), na.rm = TRUE);
  var[is.na(var)] = 0;
  nNAsGenes = colSums(is.na(datExpr[useSamples, gg]));
  gg[gg] = (nNAsGenes < (1-minFraction) * nSamples & var>tol^2 & (nSamples-nNAsGenes >= minNSamples));
  if (sum(gg) < minNGenes)
    stop("Too few genes with valid expression levels in the required number of samples.");
  if (verbose>0 & (nGenes - sum(gg) > 0))
    printFlush(paste(" ..Excluding", nGenes - sum(gg),
        "genes from the calculation due to too many missing samples or zero variance."));
  gg;
}

goodSamples = function(datExpr, weights = NULL, useSamples = NULL, useGenes = NULL,
                       minFraction = 1/2, minNSamples = ..minNSamples, minNGenes = ..minNGenes,
                       minRelativeWeight = 0.1,
                       verbose = 1, indent = 0)
{
  if (is.null(useGenes)) useGenes = rep(TRUE, ncol(datExpr));
  if (is.null(useSamples)) useSamples = rep(TRUE, nrow(datExpr));
  if (length(useGenes)!= ncol(datExpr))
    stop("Length of nGenes is not compatible with number of columns in datExpr.");
  if (length(useSamples)!= nrow(datExpr))
    stop("Length of nSamples is not compatible with number of rows in datExpr.");
  weights = .checkAndScaleWeights(weights, datExpr, scaleByMax = TRUE);
  nSamples = sum(useSamples);
  nGenes = sum(useGenes);
  if (length(weights)==0)
  {
    nNAsSamples = rowSums(is.na(datExpr[useSamples, useGenes, drop = FALSE]))
  } else
    nNAsSamples = rowSums(is.na(datExpr[useSamples, useGenes, drop = FALSE]) |
                            replaceMissing(weights[useSamples, useGenes] < minRelativeWeight, TRUE));
  goodSamples = useSamples;
  goodSamples[useSamples] = ((nNAsSamples < (1-minFraction) * nGenes) &
                             (nGenes - nNAsSamples >= minNGenes));
  if (sum(goodSamples) < minNSamples)
    stop("Too few samples with valid expression levels for the required number of genes.");
  if (verbose>0 & (nSamples - sum(goodSamples)>0))
    printFlush(paste(" ..Excluding", nSamples - sum(goodSamples),
        "samples from the calculation due to too many missing genes."));
  goodSamples;
}

.checkAndScaleWeights = function(weights, expr, scaleByMax = TRUE, verbose = 1)
{
  if (length(weights)==0) return(weights);
  weights = as.matrix(weights);
  if (!isTRUE(all.equal(dim(expr), dim(weights))))
    stop("When 'weights' are given, they must have the same dimensions as 'expr'.")
  if (any(weights<0, na.rm = TRUE))
    stop("Found negative weights. All weights must be non-negative.");
  nf = !is.finite(weights);
  if (any(nf))
  {
    if (verbose > 0)
      warning("Found non-finite weights. 
The corresponding data points will be removed."); weights[nf] = NA; } if (scaleByMax) { maxw = colMaxs(weights, na.rm = TRUE); maxw[maxw==0] = 1; weights = weights/matrix(maxw, nrow(weights), ncol(weights), byrow = TRUE); } weights; } .checkAndScaleMultiWeights = function(multiWeights, multiExpr, scaleByMax = TRUE) { if (is.null(multiWeights)) return(NULL); if (!isMultiData(multiExpr, strict = FALSE) || !isMultiData(multiWeights, strict = FALSE)) stop("Both 'multiWeights' and 'multiExpr' must be 'MultiData'."); wOK = checkSets(multiWeights, checkStructure = TRUE); eOK = checkSets(multiExpr, checkStructure = TRUE); if (wOK$nSets!=eOK$nSets) stop("'multiWeights' and 'multiExpr' must have the same length (number of data sets)."); #wSize = mtd.apply(multiWeights, dim); #eSize = mtd.apply(multiExpr, dim) #sameSize = all(mtd.mapply(function(d1, d2) isTRUE(all.equal(d1, d2)), eSize, wSize, mdmaSimplify = TRUE)); #if (!sameSize) # stop(".checkAndScaleMultiWeights: 'multiWeights' and 'multiExpr' ", # "do not have the same sizes across all sets."); mtd.mapply(.checkAndScaleWeights, multiWeights, multiExpr, MoreArgs = list(scaleByMax = scaleByMax)); } .colWeightedVars = function(x, w = NULL) { if (is.null(w)) return(colVars(x, na.rm = TRUE)); missing = !is.finite(x); w[missing] = 0; x[missing] = 0; means = colMeans(x*w)/colMeans(w); means[!is.finite(means)] = NA; x.centered = x - matrix(means, nrow(x), ncol(x), byrow = TRUE); out = colMeans(w*x.centered^2)/colMeans(w); out[!is.finite(out)] = NA; out; } goodGenesMS = function(multiExpr, multiWeights = NULL, useSamples = NULL, useGenes = NULL, minFraction = 1/2, minNSamples = ..minNSamples, minNGenes = ..minNGenes, tol = NULL, minRelativeWeight = 0.1, verbose = 1, indent = 0) { dataSize = checkSets(multiExpr); nSets = dataSize$nSets; multiWeights = .checkAndScaleMultiWeights(multiWeights, multiExpr, scaleByMax = TRUE); if (is.null(useGenes)) useGenes = rep(TRUE, dataSize$nGenes); if (is.null(useSamples)) { useSamples = list(); 
for (set in 1:nSets) useSamples[[set]] = rep(TRUE, dataSize$nSamples[set]); }
if (length(useGenes)!= dataSize$nGenes) stop("Length of 'useGenes' is not compatible with number of genes in multiExpr.");
if (length(useSamples)!= nSets) stop("Length of 'useSamples' is not compatible with number of sets in multiExpr.");
for (set in 1:nSets) if (length(useSamples[[set]])!=dataSize$nSamples[set]) stop(paste("Number of samples in useSamples[[", set, "]] incompatible\n ", "with number of samples in the corresponding set of multiExpr."))
nSamples = sapply(useSamples, sum);
nGenes = sum(useGenes);
goodGenes = useGenes;
for (set in 1:nSets) { if (is.null(tol)) tol1 = 1e-10 * max(abs(multiExpr[[set]]$data), na.rm = TRUE) else tol1 = tol;
if (sum(goodGenes)==0) break;
if (sum(useSamples[[set]])==0) next;
expr1 = multiExpr[[set]]$data[useSamples[[set]], goodGenes, drop = FALSE];
if (mode(expr1)=="list") expr1 = as.matrix(expr1);
if (is.null(multiWeights)) { nPresent = colSums(!is.na(expr1))
w1 = NULL; } else { w1 = multiWeights[[set]]$data[useSamples[[set]], goodGenes, drop = FALSE];
nPresent = colSums(!is.na(expr1) & w1 > minRelativeWeight, na.rm = TRUE); }
keep = nPresent >= minNGenes & nPresent >=minFraction*nSamples[set]
goodGenes[goodGenes] = keep
expr1 = expr1[, keep, drop = FALSE];
if (!is.null(multiWeights)) w1 = w1[, keep, drop = FALSE];
if (any(goodGenes)) { var = .colWeightedVars(expr1, w1);
goodGenes[goodGenes][var <= tol1^2] = FALSE; } }
if (sum(goodGenes) < minNGenes) stop("Too few genes with valid expression levels in the required number of samples in all sets.");
if (verbose>0 & (nGenes - sum(goodGenes) > 0)) printFlush(paste(" ..Excluding", nGenes - sum(goodGenes), "genes from the calculation due to too many missing samples or zero variance."));
goodGenes; }
goodSamplesMS = function(multiExpr, multiWeights = NULL, useSamples = NULL, useGenes = NULL, minFraction = 1/2, minNSamples = ..minNSamples, minNGenes = ..minNGenes, minRelativeWeight = 0.1, verbose = 1, indent = 0)
{ dataSize = checkSets(multiExpr);
nSets = dataSize$nSets;
multiWeights = .checkAndScaleMultiWeights(multiWeights, multiExpr, scaleByMax = TRUE);
if (is.null(useGenes)) useGenes = rep(TRUE, dataSize$nGenes);
if (is.null(useSamples)) { useSamples = list();
for (set in 1:nSets) useSamples[[set]] = rep(TRUE, dataSize$nSamples[set]); }
names(useSamples) = names(multiExpr);
if (length(useGenes)!= dataSize$nGenes) stop("Length of 'useGenes' is not compatible with number of genes in multiExpr.");
if (length(useSamples)!= dataSize$nSets) stop("Length of 'useSamples' is not compatible with number of sets in multiExpr.");
for (set in 1:nSets) if (length(useSamples[[set]])!=dataSize$nSamples[set]) stop(paste("Number of samples in useSamples[[", set, "]] incompatible\n ", "with number of samples in the corresponding set of multiExpr."))
nSamples = sapply(useSamples, sum);
nGenes = sum(useGenes);
goodSamples = useSamples;
for (set in 1:nSets) { if (sum(useGenes)==0) break;
if (sum(goodSamples[[set]])==0) next;
if (is.null(multiWeights)) { nGoodSamples = rowSums(!is.na(multiExpr[[set]]$data[useSamples[[set]], useGenes, drop = FALSE])) } else { nGoodSamples = rowSums(!is.na(multiExpr[[set]]$data[useSamples[[set]], useGenes, drop = FALSE]) & multiWeights[[set]]$data[useSamples[[set]], useGenes, drop = FALSE] > minRelativeWeight, na.rm = TRUE); }
goodSamples[[set]][useSamples[[set]]] = ((nGoodSamples >= minFraction * nGenes) & (nGoodSamples >= minNGenes));
if (sum(goodSamples[[set]]) < minNSamples) stop("Too few samples with valid expression levels for the required number of genes in set ", set);
if (verbose>0 & (nSamples[set] - sum(goodSamples[[set]])>0)) printFlush(paste(" ..Set", set,": Excluding", nSamples[set] - sum(goodSamples[[set]]), "samples from the calculation due to too many missing genes.")); }
goodSamples; }
goodSamplesGenes = function(datExpr, weights = NULL, minFraction = 1/2, minNSamples = ..minNSamples, minNGenes = ..minNGenes, tol = NULL, minRelativeWeight = 0.1, verbose
= 1, indent = 0) { spaces = indentSpaces(indent) goodGenes = NULL; goodSamples = NULL; nBadGenes = 0; nBadSamples = 0; changed = TRUE; iter = 1; if (verbose>0) printFlush(paste(spaces, "Flagging genes and samples with too many missing values...")); while (changed) { if (verbose>0) printFlush(paste(spaces, " ..step", iter)); goodGenes = goodGenes(datExpr, weights, goodSamples, goodGenes, minFraction = minFraction, minNSamples = minNSamples, minNGenes = minNGenes, minRelativeWeight = minRelativeWeight, tol = tol, verbose = verbose - 1, indent = indent + 1); goodSamples = goodSamples(datExpr, weights, goodSamples, goodGenes, minFraction = minFraction, minNSamples = minNSamples, minNGenes = minNGenes, minRelativeWeight = minRelativeWeight, verbose = verbose - 1, indent = indent + 1); changed = ( (sum(!goodGenes)>nBadGenes) | (sum(!goodSamples)>nBadSamples) ) nBadGenes = sum(!goodGenes); nBadSamples = sum(!goodSamples); iter = iter + 1; } allOK = (sum(c(nBadGenes, nBadSamples)) == 0) list(goodGenes = goodGenes, goodSamples = goodSamples, allOK = allOK); } goodSamplesGenesMS = function(multiExpr, multiWeights = NULL, minFraction = 1/2, minNSamples = ..minNSamples, minNGenes = ..minNGenes, tol = NULL, minRelativeWeight = 0.1, verbose = 2, indent = 0) { spaces = indentSpaces(indent) size = checkSets(multiExpr) nSets = size$nSets; goodGenes = NULL; goodSamples = NULL; nBadGenes = 0; nBadSamples = rep(0, nSets); changed = TRUE; iter = 1; if (verbose>0) printFlush(paste(spaces, "Flagging genes and samples with too many missing values...")); while (changed) { if (verbose>0) printFlush(paste(spaces, " ..step", iter)); goodGenes = goodGenesMS(multiExpr, multiWeights, goodSamples, goodGenes, minFraction = minFraction, minNSamples = minNSamples, minNGenes = minNGenes, tol = tol, minRelativeWeight = minRelativeWeight, verbose = verbose - 1, indent = indent + 1); goodSamples = goodSamplesMS(multiExpr, multiWeights, goodSamples, goodGenes, minFraction = minFraction, minNSamples = 
minNSamples, minNGenes = minNGenes, minRelativeWeight = minRelativeWeight, verbose = verbose - 1, indent = indent + 1);
changed = FALSE;
for (set in 1:nSets) changed = ( changed | (sum(!goodGenes)>nBadGenes) | (sum(!goodSamples[[set]])>nBadSamples[set]) )
nBadGenes = sum(!goodGenes);
for (set in 1:nSets) nBadSamples[set] = sum(!goodSamples[[set]]);
iter = iter + 1;
if (verbose > 2) printFlush(paste(spaces, " ..bad gene count: ", nBadGenes, ", bad sample counts: ", paste(nBadSamples, collapse = ", "), sep=""));
}
allOK = (sum(c(nBadGenes, nBadSamples)) == 0)
list(goodGenes = goodGenes, goodSamples = goodSamples, allOK = allOK);
}

#============================================================================================
#
# modified heatmap plot: allow specifying the hang parameter for both side and top dendrograms
#
#============================================================================================

# Important change: work with dendrograms of class hclust, not dendrogram.

.heatmap = function (x, Rowv = NULL, Colv = if (symm) "Rowv" else NULL, distfun = dist, hclustfun = fastcluster::hclust, reorderfun = function(d, w) reorder(d, w), add.expr, symm = FALSE, revC = identical(Colv, "Rowv"), scale = c("row", "column", "none"), na.rm = TRUE, margins = c(1.2, 1.2), ColSideColors, RowSideColors, cexRow = 0.2 + 1/log10(nr), cexCol = 0.2 + 1/log10(nc), labRow = NULL, labCol = NULL, main = NULL, xlab = NULL, ylab = NULL, keep.dendro = FALSE, verbose = getOption("verbose"), setLayout = TRUE, hang = 0.04, ...)
{ scale <- if(symm && missing(scale)) "none" else match.arg(scale)
if(length(di <- dim(x)) != 2 || !is.numeric(x)) stop("'x' must be a numeric matrix")
nr <- di[1L]
nc <- di[2L]
if(nr <= 1 || nc <= 1) stop("'x' must have at least 2 rows and 2 columns")
if(!is.numeric(margins) || length(margins) != 2L) stop("'margins' must be a numeric vector of length 2")
doRdend <- !identical(Rowv,NA)
doCdend <- !identical(Colv,NA)
if(!doRdend && identical(Colv, "Rowv")) doCdend <- FALSE
## by default order by row/col means
if(is.null(Rowv)) Rowv <- rowMeans(x, na.rm = na.rm)
if(is.null(Colv)) Colv <- colMeans(x, na.rm = na.rm)
## get the dendrograms and reordering indices
if (doRdend) { if (inherits(Rowv, "hclust")) ddr <- Rowv else { hcr <- hclustfun(distfun(x))
if (inherits(hcr, 'hclust')) { hcr$height = hcr$height-min(hcr$height) + hang * (max(hcr$height)-min(hcr$height)); }
ddr = hcr;
#ddr <- as.dendrogram(hcr, hang = hang)
#if (!is.logical(Rowv) || Rowv)
#  ddr <- reorderfun(ddr, Rowv)
}
#if (nr != length(rowInd <- order.dendrogram(ddr)))
#  stop("row dendrogram ordering gave index of wrong length")
rowInd = ddr$order;
} else rowInd <- 1:nr
if (doCdend) { if (inherits(Colv, "hclust")) ddc <- Colv else if (identical(Colv, "Rowv")) { if (nr != nc) stop("Colv = \"Rowv\" but nrow(x) != ncol(x)")
ddc <- ddr
} else { hcc <- hclustfun(distfun(if (symm) x else t(x)))
if (inherits(hcc, 'hclust')) { hcc$height = hcc$height-min(hcc$height) + hang * (max(hcc$height)-min(hcc$height)); }
ddc = hcc;
#ddc <- as.dendrogram(hcc, hang = hang)
#if (!is.logical(Colv) || Colv) ddc <- reorderfun(ddc, Colv)
}
#if (nc != length(colInd <- order.dendrogram(ddc)))
#  stop("column dendrogram ordering gave index of wrong length")
colInd = ddc$order;
} else colInd <- 1:nc
## reorder x
x <- x[rowInd, colInd];
labRow <- if (is.null(labRow)) if (is.null(rownames(x))) (1:nr)[rowInd] else rownames(x) else labRow[rowInd]
labCol <- if (is.null(labCol)) if (is.null(colnames(x))) (1:nc)[colInd] else colnames(x) else
labCol[colInd] if (scale == "row") { x <- sweep(x, 1, rowMeans(x, na.rm = na.rm)) sx <- apply(x, 1, sd, na.rm = na.rm) x <- sweep(x, 1, sx, "/") } else if (scale == "column") { x <- sweep(x, 2, colMeans(x, na.rm = na.rm)) sx <- apply(x, 2, sd, na.rm = na.rm) x <- sweep(x, 2, sx, "/") } ## Calculate the plot layout lmat <- rbind(c(NA, 3), 2:1) lwid <- c(if (doRdend) 1 else 0.05, 4) lhei <- c((if (doCdend) 1 else 0.05) + if (!is.null(main)) 0.5 else 0, 4) if (!missing(ColSideColors)) { if (!is.character(ColSideColors) || length(ColSideColors) != nc) stop("'ColSideColors' must be a character vector of length ncol(x)") lmat <- rbind(lmat[1, ] + 1, c(NA, 1), lmat[2, ] + 1) lhei <- c(lhei[1], 0.2, lhei[2]) } if (!missing(RowSideColors)) { if (!is.character(RowSideColors) || length(RowSideColors) != nr) stop("'RowSideColors' must be a character vector of length nrow(x)") lmat <- cbind(lmat[, 1] + 1, c(rep(NA, nrow(lmat) - 1), 1), lmat[, 2] + 1) lwid <- c(lwid[1], 0.2, lwid[2]) } lmat[is.na(lmat)] <- 0 if (verbose) { cat("layout: widths = ", lwid, ", heights = ", lhei, "; lmat=\n") print(lmat) } if (!symm || scale != "none") x <- t(x) op <- par(no.readonly = TRUE) if (revC) { iy <- nc:1 #ddr <- rev(ddr) ddr$order = rev(ddr$order); rowInd.colors = rev(rowInd) x <- x[, iy] } else { iy <- 1:nr; rowInd.colors = rowInd} #on.exit(par(op)) # print(paste("main:", main)); if (setLayout) layout(lmat, widths = lwid, heights = lhei, respect = TRUE) if (!missing(RowSideColors)) { par(mar = c(margins[1], 0, 0, 0.5)) image(rbind(1:nr), col = RowSideColors[rowInd.colors], axes = FALSE) } if (!missing(ColSideColors)) { par(mar = c(0.5, 0, 0, margins[2])) image(cbind(1:nc), col = ColSideColors[colInd], axes = FALSE) } par(mar = c(margins[1], 0, 0, margins[2])) image(x = 1:nc, y = 1:nr, x, xlim = 0.5 + c(0, nc), ylim = 0.5 + c(0, nr), axes = FALSE, xlab = "", ylab = "", ...) 
axis(1, 1:nc, labels = labCol, las = 2, line = -0.5, tick = 0, cex.axis = cexCol)
if (!is.null(xlab)) mtext(xlab, side = 1, line = margins[1] - 1.25)
axis(4, iy, labels = labRow, las = 2, line = -0.5, tick = 0, cex.axis = cexRow)
if (!is.null(ylab)) mtext(ylab, side = 4, line = margins[2] - 1.25)
if (!missing(add.expr)) eval.parent(substitute(add.expr))
par(mar = c(margins[1], 0, 0, 0))
if (doRdend) { .plotDendrogram(ddr, horiz = TRUE, labels = FALSE, axes = FALSE, adjustRange = TRUE);
# plot(ddr, horiz = TRUE, axes = FALSE, yaxs = "i", leaflab = "none" )
} else frame()
par(mar = c(0, 0, if (!is.null(main)) 1.8 else 0, margins[2]))
if (doCdend) { .plotDendrogram(ddc, horiz = FALSE, labels = FALSE, axes = FALSE, adjustRange = TRUE);
# plot(ddc, axes = FALSE, xaxs = "i", leaflab = "none" )
} else if (!is.null(main)) frame()
if (!is.null(main)) title(main, cex.main = 1.2 * op[["cex.main"]])
invisible(list(rowInd = rowInd, colInd = colInd, Rowv = if (keep.dendro && doRdend) ddr, Colv = if (keep.dendro && doCdend) ddc))
}

#===================================================================================================
# The vectorizeMatrix function turns a matrix or data frame into a vector. If the matrix is not
# symmetric, the number of entries of the vector equals the number of rows times the number of
# columns of the matrix. But if the matrix is symmetric, it only uses the entries in the upper
# triangular matrix. If the option diag = TRUE, it also includes the diagonal elements of the
# symmetric matrix. By default it excludes the diagonal elements of a symmetric matrix.
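The symmetry rule described in the comment above can be sketched with base R's `upper.tri`; the snippet below is illustrative only (the matrix `M` is made up) and is not part of the package code.

```r
# Illustrative only: the vectorization rule for a symmetric matrix.
# A symmetric matrix contributes just its upper triangle (in column-major
# order); with diag = TRUE the diagonal entries are included as well.
M <- matrix(c(1, 2, 3,
              2, 5, 6,
              3, 6, 9), nrow = 3)            # a symmetric 3x3 matrix
stopifnot(max(abs(M - t(M))) < 1e-14)        # the symmetry test used above
v.noDiag <- M[upper.tri(M, diag = FALSE)]    # 3 entries: 2, 3, 6
v.diag   <- M[upper.tri(M, diag = TRUE)]     # 6 entries: 1, 2, 5, 3, 6, 9
```

A non-symmetric (or non-square) input would instead be flattened whole with `as.vector`, giving `nrow * ncol` entries.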
vectorizeMatrix=function(M, diag=FALSE)
{ if ( is.null(dim(M)) ) stop("The input of the vectorize function is not a matrix or data frame.")
if ( length(dim(M))!=2 ) stop("The input of the vectorize function is not a matrix or data frame.")
# now we check whether the matrix is symmetrical
if (dim(M)[[1]]==dim(M)[[2]]) { M=as.matrix(M)
Mtranspose=t(M)
abs.difference=max( abs(M-Mtranspose),na.rm = TRUE)
if (abs.difference<10^(-14) ) { out=M[upper.tri(M,diag)] } else out=as.vector(M);
} else out=as.vector(M)
out
} # end

#========================================================================================================

scaleFreeFitIndex=function(k,nBreaks=10, removeFirst = FALSE)
{ discretized.k = cut(k, nBreaks)
dk = tapply(k, discretized.k, mean)
p.dk = as.vector(tapply(k, discretized.k, length)/length(k))
breaks1 = seq(from = min(k), to = max(k), length = nBreaks + 1)
hist1 = hist(k, breaks = breaks1, plot = FALSE, right = TRUE)
dk2 = hist1$mids
dk = ifelse(is.na(dk), dk2, dk)
dk = ifelse(dk == 0, dk2, dk)
p.dk = ifelse(is.na(p.dk), 0, p.dk)
log.dk = as.vector(log10(dk))
if (removeFirst) { p.dk = p.dk[-1]
log.dk = log.dk[-1] }
log.p.dk= as.numeric(log10(p.dk + 1e-09))
lm1 = try(lm(log.p.dk ~ log.dk));
if (inherits(lm1, "try-error")) stop("scaleFreeFitIndex: the linear model fit of log(p(k)) on log(k) failed.");
lm2 = lm(log.p.dk ~ log.dk + I(10^log.dk))
datout=data.frame(Rsquared.SFT=summary(lm1)$r.squared, slope.SFT=summary(lm1)$coefficients[2, 1], truncatedExponentialAdjRsquared= summary(lm2)$adj.r.squared)
datout
} # end of function scaleFreeFitIndex

#========================================================================================================

standardScreeningCensoredTime= function ( time, event, datExpr, percentiles = seq(from = 0.1, to = 0.9, by = 0.2), dichotomizationResults = FALSE, qValues = TRUE, fastCalculation = TRUE)
{ datExpr=data.frame(datExpr, check.names = FALSE)
no.Columns = dim(as.matrix(datExpr))[[2]]
m = dim(as.matrix(datExpr))[[1]]
if (length(time) != m) stop("The length of the time variable does
not equal the number of rows of datExpr.\nConsider transposing datExpr.") if (length(event) != m) stop("The length of the event variable does not equal the number of rows of datExpr.\nConsider transposing datExpr.") if (fastCalculation) { fittemp = summary(coxph(Surv(time, event) ~ 1, na.action = na.exclude)) CumHazard = predict(fittemp, type = "expected") martingale1 = event - CumHazard deviance0 = ifelse(event == 0, 2 * CumHazard, -2 * log(CumHazard) + 2 * CumHazard - 2) devianceresidual = sign(martingale1) * sqrt(deviance0) corDeviance = as.numeric(cor(devianceresidual, datExpr, use = "p")) no.nonMissing = sum(!is.na(time)) pvalueDeviance = corPvalueFisher(cor = corDeviance, nSamples = no.nonMissing) qvalueDeviance=rep(NA, length(pvalueDeviance) ) rest1= ! is.na( pvalueDeviance) qvalueDeviance [rest1] = qvalue(pvalueDeviance [rest1])$qvalues datout = data.frame(ID = dimnames(datExpr)[[2]], pvalueDeviance, qvalueDeviance, corDeviance) } if (!fastCalculation) { pvalueWald = rep(NA, no.Columns) HazardRatio = rep(NA, no.Columns) CI.UpperLimitHR = rep(NA, no.Columns) CI.LowerLimitHR = rep(NA, no.Columns) C.index = rep(NA, no.Columns) pvalueLogrank = rep(NA, no.Columns) pValuesDichotomized = data.frame(matrix(NA, nrow = no.Columns, ncol = length(percentiles))) names(pValuesDichotomized) = paste("pValueDichotPercentile", as.character(percentiles), sep = "") fittemp = summary(coxph(Surv(time, event) ~ 1, na.action = na.exclude)) CumHazard = predict(fittemp, type = "expected") martingale1 = event - CumHazard deviance0 = ifelse(event == 0, 2 * CumHazard, -2 * log(CumHazard) + 2 * CumHazard - 2) devianceresidual = sign(martingale1) * sqrt(deviance0) corDeviance = as.numeric(cor(devianceresidual, datExpr, use = "p")) no.nonMissing = sum(!is.na(time)) pvalueDeviance = corPvalueFisher(cor = corDeviance, nSamples = no.nonMissing) for (i in 1:no.Columns) { Column = as.numeric(as.matrix(datExpr[, i])) var1 = var(Column, na.rm = TRUE) if (var1 == 0 | is.na(var1)) { pvalueWald[i] 
= NA pvalueLogrank[i] = NA HazardRatio[i] = NA CI.UpperLimitHR[i] = NA CI.LowerLimitHR[i] = NA C.index[i] = NA } # end of if (var1 == 0 | is.na(var1)) if (var1 != 0 & !is.na(var1)) { cox1 = summary(coxph(Surv(time, event) ~ Column, na.action = na.exclude)) pvalueWald[i] = cox1$coef[5] pvalueLogrank[i] = cox1$sctest[[3]] HazardRatio[i] = exp(cox1$coef[1]) CI.UpperLimitHR[i] = exp(cox1$coef[1] + 1.96 * cox1$coef[3]) CI.LowerLimitHR[i] = exp(cox1$coef[1] - 1.96 * cox1$coef[3]) C.index[i] = rcorr.cens(Column, Surv(time, event), outx = TRUE)[[1]] } # end of if (var1 != 0 & !is.na(var1)) if (dichotomizationResults) { quantilesE = as.numeric(quantile(Column, prob = percentiles)) for (j in 1:length(quantilesE)) { ColumnDichot = I(Column > quantilesE[j]) var1 = var(ColumnDichot, na.rm = TRUE) if (var1 == 0 | is.na(var1)) { pValuesDichotomized[i, j] = NA } # end of if if (var1 != 0 & !is.na(var1)) { coxh = summary(coxph(Surv(time, event) ~ ColumnDichot, na.action = na.exclude)) pValuesDichotomized[i, j] = coxh$coef[5] } # end of if } # end of for (j) MinimumDichotPvalue = apply(pValuesDichotomized, 1, min, na.rm = TRUE) } # end of if (dichotomizationResults) if (!qValues) { datout = data.frame(ID = dimnames(datExpr)[[2]], pvalueWald, pvalueLogrank, pvalueDeviance, corDeviance, HazardRatio, CI.LowerLimitHR, CI.UpperLimitHR, C.index) } # end of if (!qValues) } # end of for (i in 1:no.Columns) if (qValues) { qvalueWald=rep(NA, length(pvalueWald) ) rest1= ! is.na( pvalueWald) qvalueWald [rest1] = qvalue(pvalueWald[rest1])$qvalues qvalueLogrank=rep(NA, length(pvalueLogrank) ) rest1= ! is.na( pvalueLogrank) qvalueLogrank [rest1] = qvalue(pvalueLogrank[rest1])$qvalues qvalueDeviance=rep(NA, length(pvalueDeviance) ) rest1= ! 
is.na( pvalueDeviance) qvalueDeviance [rest1] = qvalue(pvalueDeviance[rest1])$qvalues datout = data.frame(ID = dimnames(datExpr)[[2]], pvalueWald, qvalueWald, pvalueLogrank, qvalueLogrank, pvalueDeviance, qvalueDeviance , corDeviance, HazardRatio, CI.LowerLimitHR, CI.UpperLimitHR, C.index) } # end of if (qValues) if (dichotomizationResults) { datout = data.frame(datout, MinimumDichotPvalue, pValuesDichotomized) } } datout } # end of function standardScreeningCensoredTime #================================================================================ # # standardScreeningNumericTrait # #================================================================================ standardScreeningNumericTrait= function (datExpr, yNumeric, corFnc = cor, corOptions = list(use = 'p'), alternative = c("two.sided", "less", "greater"), qValues = TRUE, areaUnderROC = TRUE) { datExpr=as.matrix(datExpr) nGenes = ncol(datExpr); nSamples = nrow(datExpr); if (length(yNumeric) != nSamples) stop("the length of the sample trait y does not equal the number of rows of datExpr") corPearson = rep(NA, nGenes) pvalueStudent = rep(NA, nGenes); AreaUnderROC = rep(NA, nGenes); nPresent = Z = rep(NA, nGenes); corFnc = match.fun(corFnc); corOptions$y = yNumeric; corOptions$x = as.matrix(datExpr); cp = do.call(corFnc, corOptions); corPearson = as.numeric(cp); finMat = !is.na(datExpr) np = t(finMat) %*% (!is.na(as.matrix(yNumeric))) nPresent = as.numeric(np) ia = match.arg(alternative) T = sqrt(np - 2) * corPearson/sqrt(1 - corPearson^2) if (ia == "two.sided") { p = 2 * pt(abs(T), np - 2, lower.tail = FALSE) } else if (ia == "less") { p = pt(T, np - 2, lower.tail = TRUE) } else if (ia == "greater") { p = pt(T, np - 2, lower.tail = FALSE) } pvalueStudent = as.numeric(p); Z = 0.5 * log( (1+corPearson)/(1-corPearson) ) * sqrt(nPresent -2 ); if (areaUnderROC) for (i in 1:dim(datExpr)[[2]]) { AreaUnderROC[i] = rcorr.cens(datExpr[, i], yNumeric, outx = TRUE)[[1]] } q.Student=rep(NA, length(pvalueStudent) ) 
rest1= ! is.na(pvalueStudent) if (qValues) { x = try({ q.Student[rest1] = qvalue(pvalueStudent[rest1])$qvalues }, silent = TRUE) if (inherits(x, "try-error")) printFlush(paste("Warning in standardScreeningNumericTrait: function qvalue returned an error.\n", "The returned qvalues will be invalid. The qvalue error: ", x, "\n")); } if (is.null(colnames(datExpr))) { ID = spaste("Variable.", 1:ncol(datExpr)); } else ID = colnames(datExpr); output = data.frame(ID = ID, cor = corPearson, Z = Z, pvalueStudent = pvalueStudent); if (qValues) output$qvalueStudent = q.Student; if (areaUnderROC) output$AreaUnderROC = AreaUnderROC; output$nPresentSamples = nPresent; output } #================================================================================ # # spaste # #================================================================================ spaste = function(...) { paste(..., sep = "") } #================================================================================ # # metaZfunction # #================================================================================ metaZfunction=function(datZ, columnweights=NULL ) { if ( ! is.null(columnweights) ) {datZ= t(t(datZ)* columnweights) } datZpresent= !is.na(datZ)+0.0 if ( ! is.null(columnweights) ) {datZpresent= t(t(datZpresent)* columnweights) } sumZ=as.numeric(rowSums(datZ, na.rm=TRUE)) variance= as.numeric(rowSums(datZpresent^2)) sumZ/sqrt(variance) } #================================================================================ # # rankPvalue # #================================================================================ rankPvalue=function(datS, columnweights = NULL, na.last = "keep", ties.method = "average", calculateQvalue = TRUE, pValueMethod = "all") { no.rows = dim(datS)[[1]] no.cols = dim(datS)[[2]] if (!is.null(columnweights) & no.cols != length(columnweights)) stop("The number of components of the vector columnweights is unequal to the number of columns of datS. Hint: consider transposing datS. 
") if (!is.null(columnweights) ) { if ( min(columnweights,na.rm=TRUE)<0 ) stop("At least one component of columnweights is negative, which makes no sense. The entries should be positive numbers") if ( sum(is.na(columnweights))>0 ) stop("At least one component of columnweights is missing, which makes no sense. The entries should be positive numbers") if ( sum( columnweights)!= 1 ) { # warning("The entries of columnweights do not sum to 1. Therefore, they will divided by the sum. Then the resulting weights sum to 1."); columnweights= columnweights/sum( columnweights) } } if (pValueMethod != "scale") { percentilerank1 = function(x) { R1 = rank(x, ties.method = ties.method, na.last = na.last) (R1-.5)/max(R1, na.rm = TRUE) } datrankslow = apply(datS, 2, percentilerank1) if (!is.null(columnweights)) { datrankslow = t(t(datrankslow) * columnweights) } datSpresent = !is.na(datS) + 0 if (!is.null(columnweights)) { datSpresent = t(t(datSpresent) * columnweights) } expectedsum = rowSums(datSpresent, na.rm = TRUE) * 0.5 varsum = rowSums(datSpresent^2, na.rm = TRUE) * 1/12 observed.sumPercentileslow = as.numeric(rowSums(datrankslow, na.rm = TRUE)) Zstatisticlow = (observed.sumPercentileslow - expectedsum)/sqrt(varsum) datrankshigh = apply(-datS, 2, percentilerank1) if (!is.null(columnweights)) { datrankshigh = t(t(datrankshigh) * columnweights) } observed.sumPercentileshigh = as.numeric(rowSums(datrankshigh, na.rm = TRUE)) Zstatistichigh = (observed.sumPercentileshigh - expectedsum)/sqrt(varsum) pValueLow = pnorm((Zstatisticlow)) pValueHigh = pnorm((Zstatistichigh)) pValueExtreme = pmin(pValueLow, pValueHigh) datoutrank = data.frame(pValueExtreme, pValueLow, pValueHigh) if (calculateQvalue) { qValueLow = rep(NA, dim(datS)[[1]]) qValueHigh = rep(NA, dim(datS)[[1]]) qValueExtreme = rep(NA, dim(datS)[[1]]) rest1 = !is.na(pValueLow) qValueLow[rest1] = qvalue(pValueLow[rest1])$qvalues rest1 = !is.na(pValueHigh) qValueHigh[rest1] = qvalue(pValueHigh[rest1])$qvalues rest1 = 
!is.na(pValueExtreme) qValueExtreme = pmin(qValueLow, qValueHigh) datq = data.frame(qValueExtreme, qValueLow, qValueHigh) datoutrank = data.frame(datoutrank, datq) names(datoutrank) = paste(names(datoutrank), "Rank", sep = "") } } if (pValueMethod != "rank") { datSpresent = !is.na(datS) + 0 scaled.datS = scale(datS) if (!is.null(columnweights)) { scaled.datS = t(t(scaled.datS) * columnweights) datSpresent = t(t(datSpresent) * columnweights) } expected.value = rep(0, no.rows) varsum = rowSums(datSpresent^2) * 1 observed.sumScaleddatS = as.numeric(rowSums(scaled.datS, na.rm = TRUE)) Zstatisticlow = (observed.sumScaleddatS - expected.value)/sqrt(varsum) scaled.minusdatS = scale(-datS) if (!is.null(columnweights)) { scaled.minusdatS = t(t(scaled.minusdatS) * columnweights) } observed.sumScaledminusdatS = as.numeric(rowSums(scaled.minusdatS, na.rm = TRUE)) Zstatistichigh = (observed.sumScaledminusdatS - expected.value)/sqrt(varsum) pValueLow = pnorm((Zstatisticlow)) pValueHigh = pnorm((Zstatistichigh)) pValueExtreme = 2 * pnorm(-abs(Zstatisticlow)) datoutscale = data.frame(pValueExtreme, pValueLow, pValueHigh) if (calculateQvalue) { qValueLow = rep(NA, dim(datS)[[1]]) qValueHigh = rep(NA, dim(datS)[[1]]) qValueExtreme = rep(NA, dim(datS)[[1]]) rest1 = !is.na(pValueLow) qValueLow[rest1] = qvalue(pValueLow[rest1])$qvalues rest1 = !is.na(pValueHigh) qValueHigh[rest1] = qvalue(pValueHigh[rest1])$qvalues rest1 = !is.na(pValueExtreme) qValueExtreme[rest1] = qvalue(pValueExtreme[rest1])$qvalues datq = data.frame(qValueExtreme, qValueLow, qValueHigh) datoutscale = data.frame(datoutscale, datq) } names(datoutscale) = paste(names(datoutscale), "Scale", sep = "") } if (pValueMethod == "rank") { datout = datoutrank } if (pValueMethod == "scale") { datout = datoutscale } if (pValueMethod != "rank" & pValueMethod != "scale") datout = data.frame(datoutrank, datoutscale) datout } # End of function 
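The rank branch of `rankPvalue` above reduces each row of `datS` to a sum of percentile ranks and standardizes it against the null mean 1/2 and variance 1/12 per column. A minimal sketch of that calculation, assuming equal column weights and no missing values (the toy matrix `datS` below is made up for illustration):

```r
# Illustrative only: the percentile-rank transform and equal-weight Z statistic
# from the rank branch of rankPvalue (no column weights, no missing values).
percentilerank1 <- function(x) {
  R1 <- rank(x, ties.method = "average", na.last = "keep")
  (R1 - 0.5) / max(R1, na.rm = TRUE)        # ranks mapped into (0, 1)
}
datS <- matrix(c(1, 2, 3,
                 1, 2, 3), nrow = 3)        # 3 objects scored consistently in 2 columns
ranks    <- apply(datS, 2, percentilerank1)
m        <- ncol(datS)
observed <- rowSums(ranks)                  # observed sum of percentile ranks per row
expected <- m * 0.5                         # each percentile rank has null mean 1/2
varsum   <- m / 12                          # and null variance 1/12
Zlow     <- (observed - expected) / sqrt(varsum)
pLow     <- pnorm(Zlow)                     # small for rows ranking low in all columns
```

The first row ranks lowest in both columns and gets a negative Z (small `pLow`); the middle row sits exactly at the null expectation, so its Z is zero.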
#======================================================================================================== # # utility function: add a comma to string if the string is non-empty # #======================================================================================================== prepComma = function(s) { ifelse (s=="", s, paste(",", s)); } #======================================================================================================== # # "restricted" q-value calculation # #======================================================================================================== qvalue.restricted = function(p, trapErrors = TRUE, ...) { fin = is.finite(p); qx = try(qvalue(p[fin], ...)$qvalues, silent = TRUE); q = rep(NA, length(p)); if (inherits(qx, "try-error")) { if (!trapErrors) stop(qx); } else q[fin] = qx; q; } #======================================================================================================== # # consensusKME # #======================================================================================================== .interleave = function(matrices, nameBase = names(matrices), sep = ".", baseFirst = TRUE) { # Drop null entries in the list keep = sapply(matrices, function(x) !is.null(x)); nameBase = nameBase[keep]; matrices = matrices[keep]; nMats = length(matrices) matrices = lapply(matrices, function(x) if (length(dim(x)) < 2) as.matrix(x) else x); nCols = ncol(matrices[[1]]); dims = lapply(matrices, dim); if (baseFirst) { for (m in 1:nMats) colnames(matrices[[m]]) = spaste(nameBase[m], sep, colnames(matrices[[m]])); } else { for (m in 1:nMats) colnames(matrices[[m]]) = spaste(colnames(matrices[[m]]), sep, nameBase[m]); } out = as.data.frame(lapply(1:nCols, function(index, matrices) as.data.frame(lapply(matrices, function(x, i) x[, i, drop = FALSE], index)), matrices)); #xx = try({rownames(out) = rownames(matrices[[1]])}) #if (inherits(xx, "try-error")) browser() if (!is.null(rownames(matrices[[1]]))) rownames(out) = 
make.unique(rownames(matrices[[1]])); out; } consensusKME = function(multiExpr, moduleLabels, multiEigengenes = NULL, consensusQuantile = 0, signed = TRUE, useModules = NULL, metaAnalysisWeights = NULL, corAndPvalueFnc = corAndPvalue, corOptions = list(), corComponent = "cor", getQvalues = FALSE, useRankPvalue = TRUE, rankPvalueOptions = list(calculateQvalue = getQvalues, pValueMethod = "scale"), setNames = NULL, excludeGrey = TRUE, greyLabel = if (is.numeric(moduleLabels)) 0 else "grey") { corAndPvalueFnc = match.fun(corAndPvalueFnc); size = checkSets(multiExpr); nSets = size$nSets; nGenes = size$nGenes; nSamples = size$nSamples; if (!is.null(metaAnalysisWeights)) if (length(metaAnalysisWeights)!=nSets) stop("Length of 'metaAnalysisWeights' must equal number of input sets."); if (!is.null(useModules)) { if (greyLabel %in% useModules) stop(paste("Grey module (or module 0) cannot be used with 'useModules'.\n", " Use 'excludeGrey = FALSE' to obtain results for the grey module as well. ")); keep = moduleLabels %in% useModules; if (sum(keep)==0) stop("Incorrectly specified 'useModules': no such module(s)."); moduleLabels [ !keep ] = greyLabel; } if (is.null(multiEigengenes)) multiEigengenes = multiSetMEs(multiExpr, universalColors = moduleLabels, verbose = 0, excludeGrey = excludeGrey, grey = greyLabel); modLevels = substring(colnames(multiEigengenes[[1]]$data), 3); nModules = length(modLevels); kME = p = Z = nObs = array(NA, dim = c(nGenes, nModules, nSets)); corOptions$alternative = c("two.sided", "greater")[signed+1]; haveZs = FALSE; for (set in 1:nSets) { corOptions$x = multiExpr[[set]]$data; corOptions$y = multiEigengenes[[set]]$data; cp = do.call(corAndPvalueFnc, args = corOptions); corComp = grep(corComponent, names(cp)); pComp = match("p", names(cp)); if (is.na(pComp)) pComp = match("p.value", names(cp)); if (is.na(pComp)) stop("Function `corAndPvalueFnc' did not return a p-value."); kME[, , set] = cp[[corComp]] p[, , set] = cp[[pComp]]; if (!is.null(cp$Z)) { 
Z[, , set] = cp$Z;
haveZs = TRUE}
if (!is.null(cp$nObs)) { nObs[, , set] = cp$nObs; } else nObs[, , set] = t(!is.na(multiExpr[[set]]$data)) %*% (!is.na(multiEigengenes[[set]]$data));
}
if (getQvalues) { q = apply(p, c(2:3), qvalue.restricted); } else q = NULL;
# kME.average = rowMeans(kME, dims = 2); <-- not necessary since weighted average also contains it
powers = c(0, 0.5, 1);
nPowers = length(powers)
nWeights = nPowers + !is.null(metaAnalysisWeights)
weightNames = c("equalWeights", "RootDoFWeights", "DoFWeights", "userWeights") [1:nWeights];
kME.weightedAverage = array(NA, dim = c(nGenes, nWeights, nModules));
for (m in 1:nWeights) { if (m<=nPowers) { weights = nObs^powers[m] } else weights = array( rep(metaAnalysisWeights, rep(nGenes*nModules, nSets)), dim = c(nGenes, nModules, nSets));
kME.weightedAverage[, m, ] = rowSums( kME * weights, na.rm = TRUE, dims = 2) / rowSums(weights, dims = 2, na.rm = TRUE) }
dim(kME.weightedAverage) = c(nGenes * nWeights, nModules);
if (any(is.na(kME))) { kME.consensus.1 = apply(kME, c(1,2), quantile, prob = consensusQuantile, na.rm = TRUE);
kME.consensus.2 = apply(kME, c(1,2), quantile, prob = 1-consensusQuantile, na.rm = TRUE);
kME.median = apply(kME, c(1,2), median, na.rm = TRUE);
} else { kME.consensus.1 = matrix( colQuantileC(t(matrix(kME, nGenes * nModules, nSets)), p = consensusQuantile), nGenes, nModules);
kME.consensus.2 = matrix( colQuantileC(t(matrix(kME, nGenes * nModules, nSets)), p = 1-consensusQuantile), nGenes, nModules);
kME.median = matrix(colQuantileC(t(matrix(kME, nGenes * nModules, nSets)), p = 0.5), nGenes, nModules);
}
kME.consensus = ifelse(kME.median > 0, kME.consensus.1, kME.consensus.2);
kME.consensus[ kME.consensus * kME.median < 0 ] = 0;
# Prepare identifiers for the variables (genes)
if (is.null(colnames(multiExpr[[1]]$data))) { ID = spaste("Variable.", 1:nGenes); } else ID = colnames(multiExpr[[1]]$data);
# Get meta-Z, -p, -q values
if (haveZs) { Z.kME.meta = p.kME.meta = array(0, dim = c(nGenes,
nWeights, nModules)) if (getQvalues) q.kME.meta = array(0, dim = c(nGenes, nWeights, nModules)); for (m in 1:nWeights) { if (m<=nPowers) { weights = nObs^powers[m] } else weights = array( rep(metaAnalysisWeights, rep(nGenes*nModules, nSets)), dim = c(nGenes, nModules, nSets)); Z1 = rowSums( Z * weights, na.rm = TRUE, dims = 2) / sqrt(rowSums(weights^2, na.rm = TRUE, dims = 2)) if (signed) { p1 = pnorm(Z1, lower.tail = FALSE); } else p1 = 2*pnorm(abs(Z1), lower.tail = FALSE); Z.kME.meta[, m, ] = Z1; p.kME.meta[, m, ] = p1; if (getQvalues) { q1 = apply(p1, 2, qvalue.restricted); q.kME.meta[, m, ] = q1; } } dim(Z.kME.meta) = dim(p.kME.meta) = c(nGenes* nWeights, nModules); if (getQvalues) { dim(q.kME.meta) = c(nGenes * nWeights, nModules); } else q.kME.meta = NULL; } else { Z.kME.meta = p.kME.meta = q.kME.meta = NULL; } # Call rankPvalue if (useRankPvalue) { for (mod in 1:nModules) for (m in 1:nWeights) { if (m<=nPowers) { weights = nObs[, mod, ]^powers[m] } else weights = matrix( metaAnalysisWeights, nGenes, nSets, byrow = TRUE); # rankPvalue requires a vector of weights... so compress the weights to a vector. # Output a warning if the compression loses information. nDifferent = apply(weights, 2, function(x) {length(unique(x)) }); if (any(nDifferent > 1)) printFlush(paste("Warning in consensusKME: rankPvalue requires compressed weights.\n", "Some weights may not be entirely accurate.")); cw = colMeans(weights, na.rm = TRUE); rankPvalueOptions$columnweights = cw / sum(cw); rankPvalueOptions$datS = kME[, mod, ]; rp1 = do.call(rankPvalue, rankPvalueOptions); colnames(rp1) = spaste(colnames(rp1), ".ME", modLevels[mod], ".", weightNames[m]); if (mod==1 && m==1) { rp = rp1; } else rp = cbind(rp, rp1); } } # Format the output... this will entail some rearranging of the individual set results. 
if (is.null(setNames)) setNames = names(multiExpr); if (is.null(setNames)) setNames = spaste("Set_", c(1:nSets)); if (!haveZs) Z = NULL; keep = c(TRUE, TRUE, getQvalues, haveZs); varNames = c("kME", "p.kME", "q.kME", "Z.kME")[keep]; nVars = sum(keep); dimnames(kME) = list( mtd.colnames(multiExpr), spaste("k", mtd.colnames(multiEigengenes)), setNames); dimnames(p) = list( mtd.colnames(multiExpr), spaste("p.k", mtd.colnames(multiEigengenes)), setNames); if (getQvalues) dimnames(q) = list( mtd.colnames(multiExpr), spaste("q.k", mtd.colnames(multiEigengenes)), setNames); if (haveZs) dimnames(Z) = list( mtd.colnames(multiExpr), spaste("Z.k", mtd.colnames(multiEigengenes)), setNames); varList = list(kME = kME, p = p, q = if (getQvalues) q else NULL, Z = if (haveZs) Z else NULL); varList.interleaved = lapply(varList, function(arr) { if (!is.null(dim(arr))) { split = lapply(1:dim(arr)[3], function(i) arr[, , i]); .interleave(split, nameBase = setNames, baseFirst = FALSE) } else NULL; }) # the following seems to choke on larger data sets, at least in R 3.2.1 # combined = array(c (kME, p, q, Z), dim = c(nGenes, nModules, nSets, nVars)); # recast = matrix( c(cast(melt(combined), X1~X4~X3~X2)), nGenes, nSets * nModules * nVars); # ... so I will replace it with more cumbersome but hopefully workable code. 
recast = .interleave(varList.interleaved, nameBase = rep("", 4), sep = ""); combinedMeta.0 = rbind( kME.consensus, kME.weightedAverage, Z.kME.meta, p.kME.meta, q.kME.meta); combinedMeta = matrix(combinedMeta.0, nGenes, (1 + nWeights + (2*haveZs + haveZs*getQvalues)*nWeights) * nModules); metaNames = c("consensus.kME", spaste("weightedAverage.", weightNames, ".kME"), spaste("meta.Z.", weightNames, ".kME"), spaste("meta.p.", weightNames, ".kME"), spaste("meta.q.", weightNames, ".kME") )[ c(rep(TRUE, nWeights+1), rep(haveZs, nWeights), rep(haveZs, nWeights), rep(haveZs && getQvalues, nWeights))]; nMetaVars = length(metaNames); colnames(combinedMeta) = spaste (rep(metaNames, nModules), rep(modLevels, rep(nMetaVars, nModules))); if (useRankPvalue) { out = data.frame(ID = ID, combinedMeta, rp, recast); } else out = data.frame(ID = ID, combinedMeta, recast); out } hierarchicalConsensusKME = function( multiExpr, moduleLabels, multiWeights = NULL, multiEigengenes = NULL, consensusTree, signed = TRUE, useModules = NULL, metaAnalysisWeights = NULL, corAndPvalueFnc = corAndPvalue, corOptions = list(), corComponent = "cor", getFDR = FALSE, useRankPvalue = TRUE, rankPvalueOptions = list(calculateQvalue = getFDR, pValueMethod = "scale"), setNames = names(multiExpr), excludeGrey = TRUE, greyLabel = if (is.numeric(moduleLabels)) 0 else "grey", reportWeightType = NULL, getOwnModuleZ = TRUE, getBestModuleZ = TRUE, getOwnConsensusKME = TRUE, getBestConsensusKME = TRUE, getAverageKME = FALSE, getConsensusKME = TRUE, getMetaColsFor1Set = FALSE, getMetaP = FALSE, getMetaFDR = getMetaP && getFDR, getSetKME = TRUE, getSetZ = FALSE, getSetP = FALSE, getSetFDR = getSetP && getFDR, includeID = TRUE, additionalGeneInfo = NULL, includeWeightTypeInColnames = TRUE ) { corAndPvalueFnc = match.fun(corAndPvalueFnc); size = checkSets(multiExpr); nSets = size$nSets; nGenes = size$nGenes; nSamples = size$nSamples; nSets.effective = length(consensusTreeInputs(consensusTree)); getMetaCols = 
nSets.effective > 1 || getMetaColsFor1Set; .checkAndScaleMultiWeights(multiWeights, multiExpr, scaleByMax = FALSE); if (!is.null(metaAnalysisWeights)) if (length(metaAnalysisWeights)!=nSets) stop("Length of 'metaAnalysisWeights' must equal number of input sets."); if (!is.null(useModules)) { if (greyLabel %in% useModules) stop(paste("Grey module (or module 0) cannot be used with 'useModules'.\n", " Use 'excludeGrey = FALSE' to obtain results for the grey module as well. ")); keep = moduleLabels %in% useModules; if (sum(keep)==0) stop("Incorrectly specified 'useModules': no such module(s)."); moduleLabels [ !keep ] = greyLabel; } if (!is.null(additionalGeneInfo)) { if (nrow(additionalGeneInfo)!=nGenes) stop("If given, 'additionalGeneInfo' must be a data frame with one row per gene."); } if (is.null(multiEigengenes)) multiEigengenes = multiSetMEs(multiExpr, universalColors = moduleLabels, verbose = 0, excludeGrey = excludeGrey, grey = greyLabel); modLevels = substring(colnames(multiEigengenes[[1]]$data), 3); nModules = length(modLevels); kME = p = Z = nObs = array(NA, dim = c(nGenes, nModules, nSets)); corOptions$alternative = c("two.sided", "greater")[signed+1]; haveZs = FALSE; kME.lst = list(); for (set in 1:nSets) { corOptions$x = multiExpr[[set]]$data; corOptions$y = multiEigengenes[[set]]$data; if (!is.null(multiWeights)) corOptions$weights.x = multiWeights[[set]]$data; cp = do.call(corAndPvalueFnc, args = corOptions); corComp = grep(corComponent, names(cp)); pComp = match("p", names(cp)); if (is.na(pComp)) pComp = match("p.value", names(cp)); if (is.na(pComp)) stop("Function `corAndPvalueFnc' did not return a p-value."); kME[, , set] = kME.lst[[set]] = cp[[corComp]] p[, , set] = cp[[pComp]]; if (!is.null(cp$Z)) { Z[, , set] = cp$Z; haveZs = TRUE} if (!is.null(cp$nObs)) { nObs[, , set] = cp$nObs; } else nObs[, , set] = t(!is.na(multiExpr[[set]]$data)) %*% (!is.na(multiEigengenes[[set]]$data)); } names(kME.lst) = setNames; if (getFDR) { q = apply(p, c(2:3), 
p.adjust, method = "fdr"); } else q = NULL; # kME.average = rowMeans(kME, dims = 2); <-- not necessary since weighted average also contains it if (is.null(reportWeightType)) { if (is.null(metaAnalysisWeights)) { reportWeightType = "rootDoF" } else reportWeightType = "user"; } knownWeightTypes = c("equal", "rootDoF", "DoF", "user"); reportWeightType.num = pmatch(reportWeightType, knownWeightTypes); if (length(reportWeightType.num)==0 || any(is.na(reportWeightType.num))) stop("If given, 'reportWeightType' must be one of:\n ", paste(knownWeightTypes, collapse = ", ")); powers = c(0, 0.5, 1); nPowers = length(powers) nWeights.all = nPowers + !is.null(metaAnalysisWeights) weightNames = c("equalWeights", "rootDoFWeights", "DoFWeights", "userWeights")[reportWeightType.num]; nWeights = length(reportWeightType.num); if (nWeights > 1) includeWeightTypeInColnames = TRUE; kME.weightedAverage = array(NA, dim = c(nGenes, nWeights, nModules)); for (m in 1:nWeights) { mm = reportWeightType.num[m]; if (mm<=nPowers) { weights = (nObs-2)^powers[mm] } else weights = array( rep(metaAnalysisWeights, rep(nGenes*nModules, nSets)), dim = c(nGenes, nModules, nSets)); kME.weightedAverage[, m, ] = rowSums( kME * weights, na.rm = TRUE, dims = 2) / rowSums(weights, dims = 2, na.rm = TRUE) } dim(kME.weightedAverage) = c(nGenes * nWeights, nModules); kME.consensus = simpleHierarchicalConsensusCalculation( individualData = kME.lst, consensusTree = consensusTree); # Prepare identifiers for the variables (genes) if (is.null(colnames(multiExpr[[1]]$data))) { ID = spaste("Variable.", 1:nGenes); } else ID = colnames(multiExpr[[1]]$data); # Get meta-Z, -p, -q values if (haveZs) { Z.kME.meta = p.kME.meta = array(0, dim = c(nGenes, nWeights, nModules)) if (getFDR) q.kME.meta = array(0, dim = c(nGenes, nWeights, nModules)); for (m in 1:nWeights) { mm = reportWeightType.num[m]; if (mm<=nPowers) { weights = (nObs-2)^powers[mm] } else weights = array( rep(metaAnalysisWeights, rep(nGenes*nModules, nSets)), dim = c(nGenes, 
nModules, nSets)); Z1 = rowSums( Z * weights, na.rm = TRUE, dims = 2) / sqrt(rowSums(weights^2, na.rm = TRUE, dims = 2)) if (signed) { p1 = pnorm(Z1, lower.tail = FALSE); } else p1 = 2*pnorm(abs(Z1), lower.tail = FALSE); Z.kME.meta[, m, ] = Z1; p.kME.meta[, m, ] = p1; if (getFDR) { q1 = apply(p1, 2, p.adjust, method = "fdr"); q.kME.meta[, m, ] = q1; } } dim(Z.kME.meta) = dim(p.kME.meta) = c(nGenes* nWeights, nModules); if (getFDR) { dim(q.kME.meta) = c(nGenes * nWeights, nModules); } else q.kME.meta = NULL; } else { Z.kME.meta = p.kME.meta = q.kME.meta = NULL; } # Call rankPvalue if (useRankPvalue && getMetaCols) { for (mod in 1:nModules) for (m in 1:nWeights) { mm = reportWeightType.num[m]; if (mm<=nPowers) { weights = nObs[, mod, ]^powers[mm] } else weights = matrix( metaAnalysisWeights, nGenes, nSets, byrow = TRUE); # rankPvalue requires a vector of weights... so compress the weights to a vector. # Output a warning if the compression loses information. nDifferent = apply(weights, 2, function(x) {length(unique(x)) }); if (any(nDifferent > 1)) printFlush(paste("Warning in consensusKME: rankPvalue requires compressed weights.\n", "Some weights may not be entirely accurate.")); cw = colMeans(weights, na.rm = TRUE); rankPvalueOptions$columnweights = cw / sum(cw); rankPvalueOptions$datS = kME[, mod, ]; rp1 = do.call(rankPvalue, rankPvalueOptions); colnames(rp1) = spaste(colnames(rp1), ".ME", modLevels[mod], ".", weightNames[m]); if (mod==1 && m==1) { rp = rp1; } else rp = cbind(rp, rp1); } } else rp = NULL; # Format the output... this will entail some rearranging of the individual set results. 
if (is.null(setNames)) setNames = names(multiExpr); if (is.null(setNames)) setNames = spaste("Set_", c(1:nSets)); if (!haveZs) Z = NULL; keep = c(TRUE, TRUE, getFDR, haveZs); varNames = c("kME", "p.kME", "FDR.kME", "Z.kME")[keep]; nVars = sum(keep); dimnames(kME) = list( mtd.colnames(multiExpr), spaste("k", mtd.colnames(multiEigengenes)), setNames); dimnames(p) = list( mtd.colnames(multiExpr), spaste("p.k", mtd.colnames(multiEigengenes)), setNames); if (getFDR) dimnames(q) = list( mtd.colnames(multiExpr), spaste("FDR.k", mtd.colnames(multiEigengenes)), setNames); if (haveZs) dimnames(Z) = list( mtd.colnames(multiExpr), spaste("Z.k", mtd.colnames(multiEigengenes)), setNames); varList = list(kME = if (getSetKME) kME else NULL, p = if (getSetP) p else NULL, q = if (getSetFDR) q else NULL, Z = if (getSetZ && haveZs) Z else NULL); varList.interleaved = lapply(varList, function(arr) { if (!is.null(dim(arr))) { split = lapply(1:dim(arr)[3], function(i) { out = arr[, , i]; if (is.null(dim(out))) { dim(out) = dim(arr)[1:2]; dimnames(out) = dimnames(arr)[1:2]; } out } ); .interleave(split, nameBase = setNames, baseFirst = FALSE) } else NULL; }) recast = .interleave(varList.interleaved, nameBase = rep("", 4), sep = ""); out = data.frame(ID = ID); if (!is.null(additionalGeneInfo)) out = data.frame(out, additionalGeneInfo); out = data.frame(out, module = moduleLabels); index = cbind(1:nGenes, match(moduleLabels, modLevels)); if (getOwnModuleZ) out = cbind(out, Z.kME.inOwnModule= Z.kME.meta[index]); if (getBestModuleZ && getMetaCols) { maxData = minWhichMin(-Z.kME.meta, byRow = TRUE) maxMMmodule = modLevels[maxData$which]; out = cbind(out, maxZ.kME = -maxData$min, moduleOfMaxZ.kME = maxMMmodule); } if (getOwnConsensusKME) out = cbind(out, consKME.inOwnModule= kME.consensus[index]); if (getBestConsensusKME) { maxData = minWhichMin(-kME.consensus, byRow = TRUE) out = cbind(out, maxConsKME = -maxData$min, moduleOfMaxConsKME = modLevels[maxData$which]); } if (!includeID) out = out[, 
-1, drop = FALSE]; if (getMetaCols) { combinedMeta.0 = rbind( if (getConsensusKME) kME.consensus else NULL, if (getAverageKME) kME.weightedAverage else NULL, Z.kME.meta, if (getMetaP) p.kME.meta else NULL, if (getMetaFDR) q.kME.meta else NULL); combinedMeta = matrix(combinedMeta.0, nGenes, (getConsensusKME + getAverageKME * nWeights + (haveZs* (1 + getMetaP + getMetaFDR)*nWeights)) * nModules); metaNames = c("consensus.kME", spaste("weightedAverage.", if (includeWeightTypeInColnames) spaste(weightNames, ".") else "", "kME"), spaste("meta.Z.", if (includeWeightTypeInColnames) spaste(weightNames, ".") else "", "kME"), spaste("meta.p.", if (includeWeightTypeInColnames) spaste(weightNames, ".") else "", "kME"), spaste("meta.FDR.", if (includeWeightTypeInColnames) spaste(weightNames, ".") else "", "kME") )[ c(getConsensusKME, rep(getAverageKME, nWeights), rep(haveZs, nWeights), rep(haveZs && getMetaP, nWeights), rep(haveZs && getMetaFDR, nWeights))]; nMetaVars = length(metaNames); colnames(combinedMeta) = spaste (rep(metaNames, nModules), rep(modLevels, rep(nMetaVars, nModules))); } else combinedMeta = NULL; if (getMetaCols) { if (useRankPvalue) { out = data.frame(out, combinedMeta, rp, recast); } else out = data.frame(out, combinedMeta, recast); } else out = data.frame(out, recast); out } #====================================================================================================== # # Meta-analysis # #====================================================================================================== .isBinary = function(multiTrait) { bin = TRUE; for (set in 1:length(multiTrait)) if (length(sort(unique(multiTrait[[set]]$data))) > 2) bin = FALSE; bin; } metaAnalysis = function(multiExpr, multiTrait, binary = NULL, #consensusQuantile = 0, metaAnalysisWeights = NULL, corFnc = cor, corOptions = list(use = 'p'), getQvalues = FALSE, getAreaUnderROC = FALSE, useRankPvalue = TRUE, rankPvalueOptions = list(), setNames = NULL, kruskalTest = FALSE, var.equal = FALSE, 
metaKruskal = kruskalTest, na.action = "na.exclude") { size = checkSets(multiExpr); nSets = size$nSets; for (set in 1:nSets) multiTrait[[set]] $ data = as.matrix(multiTrait[[set]] $ data); tSize = checkSets(multiTrait); if (tSize$nGenes!=1) stop("This function only works for a single trait. "); if (size$nSets!=tSize$nSets) stop("The number of sets in 'multiExpr' and 'multiTrait' must be the same."); if (!isTRUE(all.equal(size$nSamples, tSize$nSamples))) stop("Numbers of samples in each set of 'multiExpr' and 'multiTrait' must be the same."); #if (!is.finite(consensusQuantile) || consensusQuantile < 0 || consensusQuantile > 1) # stop("'consensusQuantile' must be between 0 and 1."); if (is.null(setNames)) setNames = names(multiExpr); if (is.null(setNames)) setNames = spaste("Set_", c(1:nSets)); if (metaKruskal && !kruskalTest) stop("Kruskal statistic meta-analysis requires kruskal test. Use kruskalTest=TRUE."); if (is.null(binary)) binary = .isBinary(multiTrait); if (!is.null(metaAnalysisWeights)) { if (length(metaAnalysisWeights)!=nSets) stop("Length of 'metaAnalysisWeights' must equal the number of sets in 'multiExpr'.") if (any (!is.finite(metaAnalysisWeights)) || any(metaAnalysisWeights < 0)) stop("All weights in 'metaAnalysisWeights' must be finite and non-negative."); } setResults = list(); for (set in 1:size$nSets) { if (binary) { setResults[[set]] = standardScreeningBinaryTrait(multiExpr[[set]]$data, as.vector(multiTrait[[set]]$data), kruskalTest = kruskalTest, qValues = getQvalues, var.equal = var.equal, na.action = na.action, corFnc = corFnc, corOptions = corOptions); trafo = TRUE; if (metaKruskal) { metaStat = "stat.Kruskal.signed"; metaP = "pvaluekruskal"; } else { metaStat = "t.Student"; metaP = "pvalueStudent" } } else { setResults[[set]] = standardScreeningNumericTrait(multiExpr[[set]]$data, as.vector(multiTrait[[set]]$data), qValues = getQvalues, corFnc = corFnc, corOptions = corOptions, areaUnderROC = getAreaUnderROC); metaStat = "Z"; trafo = FALSE; } } comb = NULL; for 
(set in 1:nSets) { if (set==1) { comb = setResults[[set]] [, -1]; ID = setResults[[set]] [, 1]; colNames= colnames(comb); nColumns = ncol(comb); colnames(comb) = spaste("X", c(1:nColumns)); } else { xx = setResults[[set]][, -1]; colnames(xx) = spaste("X", c(1:nColumns)); comb = rbind(comb, xx); } } # Re-arrange comb: comb = matrix(as.matrix(as.data.frame(comb)), size$nGenes, nColumns * nSets); colnames(comb) = spaste( rep( colNames, rep(nSets, nColumns)), ".", rep(setNames, nColumns)); # Find the columns from which to do meta-analysis statCols = grep(spaste("^", metaStat), colnames(comb)); if (length(statCols)==0) stop("Internal error: no columns for meta-analysis found. Sorry!"); setStats = comb[, statCols, drop = FALSE]; if (trafo) { # transform p-values to Z statistics # Find the pvalue columns pCols = grep(spaste("^", metaP), colnames(comb)); if (length(pCols)==0) stop("Internal error: no columns for meta-analysis found. Sorry!"); setP = comb[, pCols, drop = FALSE]; # Caution: I assume here that the returned p-values are two-sided. 
setZ = sign(setStats) * qnorm(setP/2, lower.tail = FALSE); } else { setZ = setStats; } colnames(setZ) = spaste("Z.", setNames); nObsCols = grep("nPresentSamples", colnames(comb)); nObs = comb[, nObsCols, drop = FALSE]; powers = c(0, 0.5, 1); nPowers = 3; metaNames = c("equalWeights", "RootDoFWeights", "DoFWeights") if (is.null(metaAnalysisWeights)) { nMeta = nPowers; } else { nMeta = nPowers + 1; metaNames = c(metaNames, "userWeights"); } metaResults = NULL; for (m in 1:nMeta) { if (m<=nPowers) { weights = nObs^powers[m] } else weights = matrix( metaAnalysisWeights, size$nGenes, nSets, byrow = TRUE); metaZ = rowSums( setZ * weights, na.rm = TRUE) / sqrt(rowSums(weights^2, na.rm = TRUE)) p.meta = 2*pnorm(abs(metaZ), lower.tail = FALSE); if (getQvalues) { q.meta = qvalue.restricted(p.meta); meta1 = cbind(metaZ, p.meta, q.meta) } else { q.meta = NULL; meta1 = cbind(metaZ, p.meta); } colnames(meta1) = spaste(c("Z.", "p.", "q.")[1:ncol(meta1)], metaNames[m]); metaResults = cbind(metaResults, meta1); } # Use rankPvalue to produce yet another meta-analysis rankMetaResults = NULL; if (useRankPvalue) { rankPvalueOptions$datS = as.data.frame(setZ); if (is.na(match("calculateQvalue", names(rankPvalueOptions)))) rankPvalueOptions$calculateQvalue = getQvalues; for (m in 1:nMeta) { if (m<=nPowers) { weights = nObs^powers[m] } else weights = matrix( metaAnalysisWeights, size$nGenes, nSets, byrow = TRUE); # rankPvalue requires a vector of weights... so compress the weights to a vector. # Output a warning if the compression loses information. 
nDifferent = apply(weights, 2, function(x) {length(unique(x)) }); if (any(nDifferent > 1)) printFlush(paste("Warning in metaAnalysis: rankPvalue requires compressed weights.\n", "Some weights may not be entirely accurate.")); rankPvalueOptions$columnweights = colMeans(weights, na.rm = TRUE); rankPvalueOptions$columnweights = rankPvalueOptions$columnweights / sum(rankPvalueOptions$columnweights) rp = do.call(rankPvalue, rankPvalueOptions); colnames(rp) = spaste(colnames(rp), ".", metaNames[m]); rankMetaResults = cbind(rankMetaResults, as.matrix(rp)); } } # Put together the output out = list(ID = ID, metaResults, rankMetaResults, comb, if (trafo) setZ else NULL, NULL); # The last NULL is necessary so the line below works even if nothing else is NULL out = as.data.frame(out[ -(which(sapply(out,is.null),arr.ind=TRUE))]) out; } #=============================================================================================== # # multiUnion and multiIntersect # #=============================================================================================== multiUnion = function(setList) { len = length(setList); if (len==0) return(NULL); if (len==1) return(setList[[1]]); out = setList[[1]]; for (elem in 2:len) out = union(out, setList[[elem]]); out; } multiIntersect = function(setList) { len = length(setList); if (len==0) return(NULL); if (len==1) return(setList[[1]]); out = setList[[1]]; for (elem in 2:len) out = intersect(out, setList[[elem]]); out; } #===================================================================================================== # # prependZeros # #===================================================================================================== # Prepend as many zeros as necessary to pad each number to a given width. Assumes integer input. 
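# The zero-padding rule described in the banner above can be illustrated with a
# minimal base-R sketch; the helper name prependZeros0 is hypothetical, and base
# R's formatC does the padding here (the package implementation below handles
# more edge cases, e.g. non-numeric input).

```r
# Hypothetical minimal sketch of the zero-padding behavior described above;
# formatC left-pads integers with zeros up to the requested width.
prependZeros0 = function(x, width = max(nchar(as.integer(x))))
  formatC(as.integer(x), width = width, flag = "0")

prependZeros0(c(1, 10, 100))        # "001" "010" "100"
prependZeros0(c(7, 42), width = 4)  # "0007" "0042"
```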
prependZeros = function(x, width = max(nchar(x))) { if (is.numeric(x)) xr = as.integer(x) else xr = x; lengths = nchar(xr); if (width < max(lengths)) stop("Some entries of 'x' are too long."); out = as.character(x); n = length(x); for (i in 1:n) if (lengths[i] < width) out[i] = spaste( paste(rep("0", width-lengths[i]), collapse = ""), x[i]); out; } prependZeros.int = function(x, width = max(nchar(as.integer(x)))) { if (!is.numeric(x)) stop("This function needs numeric, preferably integer input."); xr = as.integer(x); lengths = nchar(as.integer(xr)); if (width < max(lengths)) stop("Some entries of 'x' are too long."); out = as.character(xr); n = length(x); for (i in 1:n) if (lengths[i] < width) out[i] = spaste( paste(rep("0", width-lengths[i]), collapse = ""), xr[i]); out; } #=========================================================================================================== # # Text formatting # #=========================================================================================================== .effectiveNChar = function(s, capitalMultiplier = 1.4) { ss = gsub("[^A-Z]", "", s); nchar(s) + (capitalMultiplier-1) * nchar(ss); } formatLabels = function(labels, maxCharPerLine = 14, maxWidth = NULL, maxLines = Inf, cex = 1, font = 1, split = " ", fixed = TRUE, newsplit = split, keepSplitAtEOL = TRUE, capitalMultiplier = 1.4, eol = "\n", ellipsis = "...") { n = length(labels); labels2 = strsplit(labels, split = eol, fixed = TRUE); index = unlist(mapply(function(l, i) rep(i, length(l)), labels2, 1:n)); labels3 = unlist(labels2); n3 = length(labels3); splitX = strsplit(labels3, split = split, fixed = fixed); newLabels= rep("", n3); width.newsplit = if (is.null(maxWidth)) .effectiveNChar(newsplit) else strwidth(newsplit, cex = cex, font = font); for (l in 1:n3) { nl = ""; line = ""; nLines = 1; if (nchar(labels3[l]) > 0) for (s in 1:length(splitX[[l]])) { newLen = .effectiveNChar(line) + width.newsplit + .effectiveNChar(splitX [[l]] [s]); cond = if 
(is.null(maxWidth)) { newLen <= maxCharPerLine - (maxLines==nLines) * (width.newsplit + 2) } else strwidth(spaste(line, newsplit, splitX [[l]] [s]), cex = cex) <= maxWidth - (maxLines==nLines) * (width.newsplit + strwidth(ellipsis, cex = cex)); if (nchar(line) < 5 | cond) { nl = paste(nl, splitX[[l]] [s], sep = newsplit) line = paste(line, splitX[[l]] [s], sep = newsplit); } else { if (nLines < maxLines) { nl = paste(nl, splitX[[l]] [s], sep = paste0(if(keepSplitAtEOL) newsplit else "", eol)); } else { # If this is the last line, add ellipsis and move on to next label. nl = spaste(nl, newsplit, ellipsis) break; } nLines = nLines +1; line = splitX[[l]] [s]; } } newLabels[l] = nl; } newLabels = substring(newLabels, nchar(newsplit)+1); unlist(tapply(newLabels, index, base::paste, collapse = eol)); } #================================================================================================== # # shortenStrings # #================================================================================================= .listRep = function(data, n) { out = list(); if (n> 0) for (i in 1:n) out[[i]] = data; out; } # Truncate labels at the last 'split' before given maximum length, add ... if the label is shortened. 
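# The truncation rule in the comment above can be sketched in a few lines of
# base R. The name shortenStrings0 is hypothetical; this sketch assumes the
# default split = " " and a plain character vector, whereas the package
# implementation below also handles matrices, data frames and custom splits.

```r
# Hypothetical sketch: keep strings of at most maxLength characters; otherwise
# cut at the last space whose position lies in (minLength, maxLength] and
# append an ellipsis; if no such space exists, cut hard at maxLength.
shortenStrings0 = function(strings, maxLength = 25, minLength = 10, ellipsis = "...") {
  sapply(strings, function(s) {
    if (nchar(s) <= maxLength) return(s)
    pos = gregexpr(" ", s, fixed = TRUE)[[1]]
    ok = pos[pos > minLength & pos <= maxLength]
    cut = if (length(ok) > 0) max(ok) else maxLength + 1
    paste0(substr(s, 1, cut - 1), ellipsis)
  }, USE.NAMES = FALSE)
}

shortenStrings0("A fairly long module label indeed")  # "A fairly long module..."
shortenStrings0("short label")                        # unchanged
```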
shortenStrings = function(strings, maxLength = 25, minLength = 10, split = " ", fixed = TRUE, ellipsis = "...", countEllipsisInLength = FALSE) { dims = dim(strings); dnames = dimnames(strings); if (is.data.frame(strings)) { strings = as.matrix(strings); outputDF = TRUE; } else { outputDF = FALSE; } strings = as.character(strings); n = length(strings); if (n==0) return(character(0)); newLabels= rep("", n); if (length(split) > 0) { splitPositions = gregexpr(pattern = split, text = strings, fixed = fixed); } else { splitPositions = .listRep(numeric(0), n); } if (countEllipsisInLength) { maxLength = maxLength - nchar(ellipsis); minLength = minLength - nchar(ellipsis); } for (l in 1:n) { if (nchar(strings[l]) <= maxLength) { newLabels[l] = strings[l]; } else { splits.1 = splitPositions[[l]]; suitableSplits = which(splits.1 > minLength & splits.1 <= maxLength); if (length(suitableSplits) > 0) { splitPosition = max(splits.1[suitableSplits]); } else { splitPosition = maxLength+1; } newLabels[l] = spaste(substring(strings[l], 1, splitPosition-1), ellipsis) } } dim(newLabels) = dims; dimnames(newLabels) = dnames; if (outputDF) as.data.frame(newLabels) else newLabels; } #======================================================================================================== # # multiGSub, multiSub # #======================================================================================================== multiGSub = function(patterns, replacements, x, ...) { n = length(patterns); if (n!=length(replacements)) stop("Lengths of 'patterns' and 'replacements' must be the same."); for (i in 1:n) x = gsub(patterns[i], replacements[i], x, ...); x; } multiSub = function(patterns, replacements, x, ...) 
{ n = length(patterns); if (n!=length(replacements)) stop("Lengths of 'patterns' and 'replacements' must be the same."); for (i in 1:n) x = sub(patterns[i], replacements[i], x, ...); x; } multiGrep = function(patterns, x, ..., sort = TRUE, value = FALSE, invert = FALSE) { if (invert) { out = multiIntersect(lapply(patterns, grep, x, ..., value = FALSE, invert = TRUE)) } else out = unique(unlist(lapply(patterns, grep, x, ..., value = FALSE, invert = FALSE))); if (sort) out = sort(out); if (value) out = x[out]; out; } multiGrepl = function(patterns, x, ...) { if (length(patterns)==0) return(rep(FALSE, length(x))) if (length(x)==0) return(logical(0)); mat = as.matrix(do.call(cbind, lapply(patterns, function(p) as.numeric(grepl(p, x, ...))))); rowSums(mat)>0; } #======================================================================================================== # # plotMultiHist, multiPlot # #======================================================================================================== .addErrorBars.2sided = function (x, means, upper, lower, width = strwidth("II"), ...) { if (!is.numeric(means) | !is.numeric(x) || !is.numeric(upper) || !is.numeric(lower)) { stop("All arguments must be numeric") } ERR1 <- upper ERR2 <- lower for (i in 1:length(means)) { segments(x[i], means[i], x[i], ERR1[i], ...) segments(x[i] - width/2, ERR1[i], x[i] + width/2, ERR1[i], ...) segments(x[i], means[i], x[i], ERR2[i], ...) segments(x[i] - width/2, ERR2[i], x[i] + width/2, ERR2[i], ...) } } .multiPlot = function( x = NULL, y = NULL, data = NULL, columnX = NULL, columnY = NULL, barHigh = NULL, barLow = NULL, type = "p", xlim = NULL, ylim = NULL, pch = 1, col = 1, bg = 0, lwd = 1, lty = 1, cex = 1, barColor = 1, addGrid = FALSE, linesPerTick = NULL, horiz = TRUE, vert = FALSE, gridColor = "grey30", gridLty = 3, errBar.lwd = 1, plotBg = NULL, newPlot = TRUE, dropMissing = TRUE, ...) 
{ getColumn = function(data, column) { if (!is.numeric(column)) column = match(column, colnames(data)); data[, column]; } expand = function(x, n) { if (length(x) < n) x = rep(x, ceiling(n/length(x))); x[1:n]; } if (!is.null(data)) { if (is.null(columnX)) stop("'columnX' must be given."); if (is.null(columnY)) stop("'columnY' must be given."); x = lapply(data, getColumn, columnX); y = lapply(data, getColumn, columnY); } if (is.null(x) | is.null(y)) stop("'x' and 'y' or 'data' must be given."); if (mode(x)=="numeric") x = as.list(as.data.frame(as.matrix(x))); if (mode(y)=="numeric") y = as.list(as.data.frame(as.matrix(y))); if (!is.null(barHigh) && mode(barHigh)=="numeric") barHigh = as.list(as.data.frame(as.matrix(barHigh))); if (!is.null(barLow) && mode(barLow)=="numeric") barLow = as.list(as.data.frame(as.matrix(barLow))); nx = length(x); ny = length(y); if (nx==1 && ny>1) { for (c in 2:ny) x[[c]] = x[[1]]; nx = length(x); } if (nx!=ny) stop("Length of 'x' and 'y' must be the same."); if (length(barHigh)>0 && length(barHigh)!=ny) stop("If given, 'barHigh' must have the same length as 'y'."); if (length(barLow)>0 && length(barLow)!=ny) stop("If given, 'barLow' must have the same length as 'y'."); if (!is.null(barHigh) && is.null(barLow)) barLow = mapply(function(m, u) 2*m-u, y, barHigh, SIMPLIFY = FALSE); pch = expand(pch, nx); col = expand(col, nx); bg = expand(bg, nx); lwd = expand(lwd, nx); lty = expand(lty, nx); cex = expand(cex, nx); barColor = expand(barColor, nx); if (is.null(xlim)) xlim = range(x, na.rm = TRUE) if (is.null(ylim)) ylim = range(c(y, barLow, barHigh), na.rm = TRUE) if (newPlot) plot(x[[1]], y[[1]], xlim = xlim, ylim = ylim, pch = pch[1], col = col[1], bg = bg[[1]], lwd = lwd[1], lty = lty[1], cex = cex[1], ..., type = "n"); if (!is.null(plotBg)) { if (length(plotBg)==1) plotBg = expand(plotBg, nx); box = par("usr"); for (i in 1:nx) { if (i==1) xl = box[1] else xl = (x[[1]] [i] + x[[1]] [i-1])/2; if (i==nx) xr = box[2] else xr = (x[[1]] 
[i+1]+x[[1]] [i])/2; rect(xl, box[3], xr, box[4], border = plotBg[i], col = plotBg[i]); } } if (addGrid) addGrid(linesPerTick = linesPerTick, horiz = horiz, vert = vert, col = gridColor, lty = gridLty); if (!is.null(barHigh)) for (p in 1:nx) .addErrorBars.2sided(x[[p]], y[[p]], barHigh[[p]], barLow[[p]], col = barColor[p], lwd = errBar.lwd); if (type %in% c("l", "b")) for (p in 1:nx) { if (dropMissing) present = is.finite(x[[p]]) & is.finite(y[[p]]) else present = rep(TRUE, length(x[[p]])); lines(x[[p]][present], y[[p]][present], lwd = lwd[p], lty = lty[p], cex = cex[p], col = bg[p]); } if (type %in% c("p", "b")) for (p in 1:nx) points(x[[p]], y[[p]], pch = pch[p], col = col[p], bg = bg[p], cex = cex[p]) } plotMultiHist = function(data, nBreaks = 100, col = 1:length(data), scaleBy = c("area", "max", "none"), cumulative = FALSE, ...) { if (is.atomic(data)) data = list(data); range = range(data, na.rm = TRUE); breaks = seq(from = range[1], to = range[2], length.out = nBreaks + 1); breaks[nBreaks + 1] = range[2] + 0.001 * (range[2] - range[1]); hists = lapply(data, hist, breaks = breaks, plot = FALSE); scaleBy = match.arg(scaleBy); if (cumulative) { hists = lapply(hists, function(h) {h$counts = cumsum(h$counts)/sum(h$counts); h}) } else { if (scaleBy=="max") { scale = lapply(hists, function(h1) max(h1$counts)); hists = mapply(function(h1, s1) {h1$counts = h1$counts/s1; h1}, hists, scale, SIMPLIFY = FALSE); } else if (scaleBy=="area") { scale = lapply(hists, function(h1) sum(h1$counts)); hists = mapply(function(h1, s1) {h1$counts = h1$counts/s1; h1}, hists, scale, SIMPLIFY = FALSE); } } n = length(data); .multiPlot(x = lapply(hists, getElement, "mids"), y = lapply(hists, getElement, "counts"), type = "l", col = col, bg = col, ...); invisible(list(x = lapply(hists, getElement, "mids"), y = lapply(hists, getElement, "counts"))) } replaceMissing = function(x, replaceWith) { if (missing(replaceWith)) { if (is.logical(x)) { replaceWith = FALSE } else if (is.numeric(x)) { 
replaceWith = 0; } else if (is.character(x)) { replaceWith = "" } else stop("Need 'replaceWith'."); } x[is.na(x)] = replaceWith; x; } imputeByModule = function( data, labels, excludeUnassigned = FALSE, unassignedLabel = if (is.numeric(labels)) 0 else "grey", scale = TRUE, ...) { labels = replaceMissing(labels, unassignedLabel); labelLevels = unique(labels); if (excludeUnassigned) labelLevels = setdiff(labelLevels, unassignedLabel); if (scale) data = scale(data); for (ll in labelLevels) { inMod = labels==ll; if (any(is.na(data[, inMod]))) data[, inMod] = t(impute.knn(t(data[, inMod]), ...)$data); } data; } signifNumeric = function(x, digits, fnc = "signif") { x = as.data.frame(x); isNumeric = sapply(x, is.numeric); isDecimal = isNumeric; if (any(isNumeric)) { isDecimal[isNumeric] = sapply(x[, isNumeric, drop = FALSE], function(xx) { any(round(xx)!=xx, na.rm = TRUE)}); } else browser() fnc = match.fun(fnc); x[, isDecimal] = do.call(fnc, list(x = x[, isDecimal], digits = digits)); x; } #======================================================================================================= # # binarizeCategoricalVar # #======================================================================================================= # Assumes x is a vector but can easily be modified to also work with matrices. binarizeCategoricalVariable = function( x, levelOrder = NULL, ignore = NULL, minCount = 3, val1 = 0, val2 = 1, includePairwise = TRUE, includeLevelVsAll = FALSE, dropFirstLevelVsAll = FALSE, dropUninformative = TRUE, namePrefix = "", levelSep = NULL, nameForAll = "all", levelSep.pairwise = if (length(levelSep)==0) ".vs." 
else levelSep,
  levelSep.vsAll = if (length(levelSep)==0) (if (nameForAll=="") "" else ".vs.") else levelSep,
  checkNames = FALSE,
  includeLevelInformation = TRUE)
{
  tab = table(x);
  levels0 = names(tab);
  tab = tab[ tab >= minCount & !(levels0 %in% ignore) ];
  levels = names(tab);
  if (!is.null(levelOrder))
  {
    order = match(levelOrder, levels);
    order = order[is.finite(order)];
    levels0 = levels[order];
    levels1 = levels[ !levels %in% levels0];
    levels = c(levels0, levels1);
  }
  nSamples = length(x);
  nLevels = length(levels)
  if (!is.logical(dropFirstLevelVsAll))
  {
    dropFirstLevelVsAll.num = pmatch(dropFirstLevelVsAll, c("none", "binary", "all"));
    if (is.na(dropFirstLevelVsAll.num))
      stop("If 'dropFirstLevelVsAll' is not logical, it must be one of\n",
           "   'none', 'binary', 'all'.");
    dropFirstLevelVsAll = dropFirstLevelVsAll.num == 3 | (dropFirstLevelVsAll.num == 2 && nLevels==2)
  }
  nBinaryVars = includePairwise * nLevels * (nLevels - 1)/2 +
                  includeLevelVsAll * (nLevels - dropFirstLevelVsAll)
  if (nBinaryVars==0)
  {
    if (dropUninformative)
    {
      return(NULL)
    } else {
      out = as.matrix(rep(val2, nSamples));
      colnames(out) = levels[1];
      return(out);
    }
  }
  out = matrix(NA, nSamples, nBinaryVars)
  levelTable = matrix("", 2, nBinaryVars);
  ind = 1;
  names = rep("", nBinaryVars);
  if (includePairwise)
  {
    for (v1 in 1:(nLevels-1)) for (v2 in (v1+1):nLevels)
    {
      out[ x==levels[v1], ind] = val1;
      out[ x==levels[v2], ind] = val2;
      names[ind] = spaste(namePrefix, levels[v2], levelSep.pairwise, levels[v1]);
      levelTable[, ind] = levels[ c(v1, v2)];
      ind = ind + 1;
    }
  }
  if (includeLevelVsAll)
  {
    for (v1 in (1 + as.numeric(dropFirstLevelVsAll)):nLevels)
    {
      out[, ind] = c(val1, val2) [ as.numeric(x==levels[v1])+1 ];
      names[ind] = spaste(namePrefix, levels[v1], levelSep.vsAll, nameForAll);
      levelTable[, ind] = c(nameForAll, levels[v1]);
      ind = ind+1;
    }
  }
  colnames(out) = names;
  if (includeLevelInformation)
  {
    colnames(levelTable) = names;
    rownames(levelTable) = spaste("Value.", c(val1, val2));
    attr(out, "includedLevels") =
levelTable;
  }
  out;
}

# This function attempts to determine whether a vector is numeric in the sense that coercing it to
# numeric will not lead to information loss.
.isNumericVector = function(x, naStrings = c("NA", "NULL", "NO DATA"))
{
  if (is.numeric(x)) return(TRUE)
  x[x %in% naStrings] = NA
  x.num = suppressWarnings(as.numeric(x));
  missing = is.na(x.num);
  t = table(x[missing])
  # Coercion is lossless only if no non-NA value failed to convert.
  return(length(t)==0);
}

convertNumericColumnsToNumeric = function(data, naStrings = c("NA", "NULL", "NO DATA"),
                                          unFactor = TRUE)
{
  data = as.data.frame(data);
  if (unFactor)
    data = as.data.frame(lapply(data, function(x) if (is.factor(x)) as.character(x) else x));
  num = sapply(data, .isNumericVector);
  for (i in which(num)) data[, i] = as.numeric(data[, i]);
  data;
}

# This function turns all non-numeric columns into factors.
factorizeNonNumericColumns = function(data)
{
  data = as.data.frame(data);
  isNumeric = sapply(data, is.numeric);
  nonNumeric = (1:ncol(data))[!isNumeric];
  for (c in nonNumeric) if (!is.factor(data[[c]])) data[, c] = factor(data[, c]);
  data;
}

binarizeCategoricalColumns = function(
  data,
  convertColumns = NULL,
  considerColumns = NULL,
  maxOrdinalLevels = 3,
  levelOrder = NULL,
  minCount = 3,
  val1 = 0, val2 = 1,
  includePairwise = FALSE,
  includeLevelVsAll = TRUE,
  dropFirstLevelVsAll = TRUE,
  dropUninformative = TRUE,
  includePrefix = TRUE,
  prefixSep = ".",
  nameForAll = "all",
  levelSep = NULL,
  levelSep.pairwise = if (length(levelSep)==0) ".vs."
else levelSep, levelSep.vsAll = if (length(levelSep)==0) (if (nameForAll=="") "" else ".vs.") else levelSep, checkNames = FALSE, includeLevelInformation = FALSE) { data = as.data.frame(data); index = c(1:ncol(data)); if (is.null(convertColumns)) { isNumeric = sapply(data, is.numeric); nLevels = sapply(data, function(x) length(unique(x)) ); convertColumns = !isNumeric | nLevels <= maxOrdinalLevels | index %in% convertColumns; } if (is.character(convertColumns)) convertColumns = match(convertColumns, colnames(data)); if (is.numeric(convertColumns)) { if (any(!is.finite(convertColumns))) stop("All entries in 'convertColumns' must correspond to columns or column names in 'data'."); cc = rep(FALSE, ncol(data)); cc[convertColumns] = TRUE; convertColumns = cc; } if (!is.null(considerColumns)) { if (is.character(considerColumns)) considerColumns = match(considerColumns, colnames(data)); if (is.numeric(considerColumns)) { if (any(!is.finite(considerColumns))) stop("All entries in 'considerColumns' must correspond to columns or column names in 'data'."); cc = rep(FALSE, ncol(data)); cc[considerColumns] = TRUE; considerColumns = cc; } convertColumns = convertColumns & considerColumns; } out = data.frame(hgfdouroio3r9384r93yu9289283yr92owihfiw = rep(NA, nrow(data))); levelInfo = NULL; for (c in index) { if (convertColumns[c]) { nonMissing = !is.na(data[, c]); if (!any(nonMissing) || all(data[which(nonMissing)[1], c]==data[nonMissing, c])) { if (!dropUninformative) { df1 = data.frame(rep(1, nrow(data))); names(df1) = spaste(names(data)[c], ".", data[1, c]); out = cbind(out, df1); } } else { out1 = binarizeCategoricalVariable(data[, c], minCount = minCount, val1 = val1, val2 = val2, namePrefix = if (includePrefix) spaste(names(data)[c], prefixSep) else "", levelSep = levelSep, levelSep.pairwise = levelSep.pairwise, levelSep.vsAll = levelSep.vsAll, nameForAll = nameForAll, includePairwise = includePairwise, includeLevelVsAll = includeLevelVsAll, dropFirstLevelVsAll = 
dropFirstLevelVsAll, dropUninformative = dropUninformative, levelOrder = levelOrder[[c]], includeLevelInformation = includeLevelInformation); if (!is.null(out1)) { out = as.data.frame(cbind(out, out1)); if (includeLevelInformation) levelInfo = if (is.null(levelInfo)) attr(out1, "includedLevels") else cbind(levelInfo, attr(out1, "includedLevels")); } } } else { out = as.data.frame(cbind(out, data[, c, drop = FALSE])); } } out = out[, -1, drop = FALSE]; if (checkNames) names(out) = make.unique(make.names(names(out))); if (includeLevelInformation) attr(out, "includedLevels") = levelInfo; out } # Convenience wrappers binarizeCategoricalColumns.forRegression = function(data, maxOrdinalLevels = 3, convertColumns = NULL, considerColumns = NULL, levelOrder = NULL, val1 = 0, val2 = 1, includePrefix = TRUE, prefixSep = ".", checkNames = TRUE) { binarizeCategoricalColumns(data, maxOrdinalLevels = maxOrdinalLevels, convertColumns = convertColumns, considerColumns = considerColumns, val1 = val1, val2 = val2, levelOrder = levelOrder, minCount = 1, includePairwise = FALSE, includeLevelVsAll = TRUE, dropFirstLevelVsAll = TRUE, dropUninformative = TRUE, includePrefix = includePrefix, prefixSep = prefixSep, includeLevelInformation = FALSE, checkNames = checkNames); } binarizeCategoricalColumns.forPlots = function(data, maxOrdinalLevels = 3, convertColumns = NULL, considerColumns = NULL, levelOrder = NULL, val1 = 0, val2 = 1, includePrefix = TRUE, prefixSep = ".", checkNames = TRUE) { binarizeCategoricalColumns(data, maxOrdinalLevels = maxOrdinalLevels, convertColumns = convertColumns, considerColumns = considerColumns, val1 = val1, val2 = val2, levelOrder = levelOrder, minCount = 1, includePairwise = FALSE, includeLevelVsAll = TRUE, dropFirstLevelVsAll = FALSE, dropUninformative = TRUE, includePrefix = includePrefix, includeLevelInformation = FALSE, prefixSep = prefixSep, nameForAll = "", checkNames = checkNames); } binarizeCategoricalColumns.pairwise = function(data, 
maxOrdinalLevels = 3,
  convertColumns = NULL,
  considerColumns = NULL,
  levelOrder = NULL,
  val1 = 0, val2 = 1,
  includePrefix = TRUE,
  prefixSep = ".",
  levelSep = ".vs.",
  checkNames = FALSE)
{
  binarizeCategoricalColumns(data,
    maxOrdinalLevels = maxOrdinalLevels,
    convertColumns = convertColumns,
    considerColumns = considerColumns,
    val1 = val1, val2 = val2,
    levelOrder = levelOrder,
    minCount = 1,
    includePairwise = TRUE,
    includeLevelVsAll = FALSE,
    dropFirstLevelVsAll = FALSE,
    dropUninformative = TRUE,
    levelSep = levelSep,
    includePrefix = includePrefix,
    prefixSep = prefixSep,
    includeLevelInformation = FALSE,
    checkNames = checkNames);
}

WGCNA/R/blockwiseModulesC.R

#Copyright (C) 2008 Peter Langfelder
#This program is free software; you can redistribute it and/or
#modify it under the terms of the GNU General Public License
#as published by the Free Software Foundation; either version 2
#of the License, or (at your option) any later version.
#This program is distributed in the hope that it will be useful,
#but WITHOUT ANY WARRANTY; without even the implied warranty of
#MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
#GNU General Public License for more details.
#You should have received a copy of the GNU General Public License
#along with this program; if not, write to the Free Software
#Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.

# In this version the blocks are chosen by pre-clustering.

#==========================================================================================================
#
# TOM similarity via a call to a compiled code.
#
#==========================================================================================================
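For orientation, the quantity that the compiled `tomSimilarity_call` routine computes can be sketched in plain R for the simplest case: an unsigned network with the default `"min"` denominator. This is an illustrative reimplementation only, not the code path the package uses (the compiled code additionally handles signed TOM variants, missing data, and multi-threading):

```r
# Illustrative pure-R TOM for an unsigned network with the "min" denominator:
#   TOM[i,j] = (l[i,j] + a[i,j]) / (min(k[i], k[j]) + 1 - a[i,j]),
# where l[i,j] = sum_u a[i,u]*a[u,j] and k[i] is the connectivity of node i.
tomFromAdjacency = function(A) {
  diag(A) = 0
  k = colSums(A)                      # node connectivities
  shared = A %*% A                    # shared-neighbor term l[i,j]
  denom = outer(k, k, pmin) + 1 - A   # min(k_i, k_j) + 1 - a_ij
  tom = (shared + A) / denom
  diag(tom) = 1
  tom
}

# Example: soft-thresholded correlation network on random data
set.seed(42)
A = abs(cor(matrix(rnorm(200), 20, 10)))^6
tom = tomFromAdjacency(A)
```

For adjacencies in [0, 1] the resulting overlaps also lie in [0, 1] and the matrix stays symmetric, which is why the compiled routine can store only one triangle.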
TOMsimilarityFromExpr = function(
  datExpr,
  weights = NULL,
  corType = "pearson",
  networkType = "unsigned",
  power = 6,
  TOMType = "signed",
  TOMDenom = "min",
  maxPOutliers = 1,
  quickCor = 0,
  pearsonFallback = "individual",
  cosineCorrelation = FALSE,
  replaceMissingAdjacencies = FALSE,
  suppressTOMForZeroAdjacencies = FALSE,
  suppressNegativeTOM = FALSE,
  useInternalMatrixAlgebra = FALSE,
  nThreads = 0,
  verbose = 1, indent = 0)
{
  corTypeC = as.integer(pmatch(corType, .corTypes)-1);
  if (is.na(corTypeC))
    stop(paste("Invalid 'corType'. Recognized values are", paste(.corTypes, collapse = ", ")))
  TOMTypeC = as.integer(pmatch(TOMType, .TOMTypes)-1);
  if (is.na(TOMTypeC))
    stop(paste("Invalid 'TOMType'. Recognized values are", paste(.TOMTypes, collapse = ", ")))
  TOMDenomC = as.integer(pmatch(TOMDenom, .TOMDenoms)-1);
  if (is.na(TOMDenomC))
    stop(paste("Invalid 'TOMDenom'. Recognized values are", paste(.TOMDenoms, collapse = ", ")))
  if ( (maxPOutliers < 0) | (maxPOutliers > 1)) stop("maxPOutliers must be between 0 and 1.");
  if (quickCor < 0) stop("quickCor must be positive.");
  fallback = as.integer(pmatch(pearsonFallback, .pearsonFallbacks));
  if (is.na(fallback))
    stop(paste("Unrecognized 'pearsonFallback'.
Recognized values are (unique abbreviations of)\n", paste(.pearsonFallbacks, collapse = ", "))) if (nThreads < 0) stop("nThreads must be positive."); if (is.null(nThreads) || (nThreads==0)) nThreads = .useNThreads(); if ( (power<1) | (power>30) ) stop("power must be between 1 and 30."); networkTypeC = as.integer(charmatch(networkType, .networkTypes)-1); if (is.na(networkTypeC)) stop(paste("Unrecognized networkType argument.", "Recognized values are (unique abbreviations of)", paste(.networkTypes, collapse = ", "))); dimEx = dim(datExpr); if (length(dimEx)!=2) stop("datExpr has incorrect dimensions.") if (length(weights) > 0) { if (!isTRUE(all.equal(dimEx, dim(weights)))) stop("When 'weights' are given, they must have the same dimensions as 'datExpr'.") if (any(!is.finite(weights))) { if (verbose > 0) warning("Found non-finite weights. The corresponding data points will be removed."); weights[!is.finite(weights)] = NA; } } nGenes = dimEx[2]; nSamples = dimEx[1]; warn = as.integer(0); datExpr = as.matrix(datExpr); if (length(weights) > 0) weights = as.matrix(weights) else weights = NULL; tom = .Call("tomSimilarity_call", datExpr, weights, as.integer(corTypeC), as.integer(networkTypeC), as.double(power), as.integer(TOMTypeC), as.integer(TOMDenomC), as.double(maxPOutliers), as.double(quickCor), as.integer(fallback), as.integer(cosineCorrelation), as.integer(replaceMissingAdjacencies), as.integer(suppressTOMForZeroAdjacencies), as.integer(suppressNegativeTOM), as.integer(useInternalMatrixAlgebra), warn, as.integer(nThreads), as.integer(verbose), as.integer(indent), PACKAGE = "WGCNA"); diag(tom) = 1; return (tom); } #====================================================================================================== # # TOMsimilarity (from adjacency) # #=================================================================================================== TOMsimilarity = function(adjMat, TOMType = "unsigned", TOMDenom = "min", suppressTOMForZeroAdjacencies = FALSE, 
suppressNegativeTOM = FALSE, useInternalMatrixAlgebra = FALSE, verbose = 1, indent = 0) { TOMTypeC = pmatch(TOMType, .TOMTypes)-1; if (is.na(TOMTypeC)) stop(paste("Invalid 'TOMType'. Recognized values are", paste(.TOMTypes, collapse = ", "))) if (TOMTypeC == 0) stop("'TOMType' cannot be 'none' for this function."); TOMDenomC = pmatch(TOMDenom, .TOMDenoms)-1; if (is.na(TOMDenomC)) stop(paste("Invalid 'TOMDenom'. Recognized values are", paste(.TOMDenoms, collapse = ", "))) checkAdjMat(adjMat, min = if (TOMTypeC==2) -1 else 0, max = 1); if (any(is.na(adjMat))) adjMat[is.na(adjMat)] = 0; #if (any(diag(adjMat)!=1)) diag(adjMat) = 1; <--- this is now done in compiled code #nGenes = dim(adjMat)[1]; #tom = matrix(0, nGenes, nGenes); #tomResult = .C("tomSimilarityFromAdj", as.double(as.matrix(adjMat)), as.integer(nGenes), # as.integer(TOMTypeC), # as.integer(TOMDenomC), # tom = as.double(tom), as.integer(verbose), as.integer(indent), PACKAGE = "WGCNA") #tom[,] = tomResult$tom; #diag(tom) = 1; #rm(tomResult); collectGarbage(); tom = .Call("tomSimilarityFromAdj_call", as.matrix(adjMat), as.integer(TOMTypeC), as.integer(TOMDenomC), as.integer(suppressTOMForZeroAdjacencies), as.integer(suppressNegativeTOM), as.integer(useInternalMatrixAlgebra), as.integer(verbose), as.integer(indent), PACKAGE = "WGCNA") gc(); tom; } #========================================================================================================== # # TOMdist (from adjacency) # #========================================================================================================== TOMdist = function(adjMat, TOMType = "unsigned", TOMDenom = "min", suppressTOMForZeroAdjacencies = FALSE, suppressNegativeTOM = FALSE, useInternalMatrixAlgebra = FALSE, verbose = 1, indent = 0) { 1-TOMsimilarity(adjMat, TOMType, TOMDenom, suppressTOMForZeroAdjacencies = suppressTOMForZeroAdjacencies, suppressNegativeTOM = suppressNegativeTOM, useInternalMatrixAlgebra = useInternalMatrixAlgebra, verbose = verbose, indent = 
indent) } #========================================================================================================== # # blockwiseModules # #========================================================================================================== # Function to calculate modules and eigengenes from all genes. blockwiseModules = function( # Input data datExpr, weights = NULL, # Data checking options checkMissingData = TRUE, # Options for splitting data into blocks blocks = NULL, maxBlockSize = 5000, blockSizePenaltyPower = 5, nPreclusteringCenters = as.integer(min(ncol(datExpr)/20, 100*ncol(datExpr)/maxBlockSize)), randomSeed = 54321, # load TOM from previously saved file? loadTOM = FALSE, # Network construction arguments: correlation options corType = "pearson", maxPOutliers = 1, quickCor = 0, pearsonFallback = "individual", cosineCorrelation = FALSE, # Adjacency function options power = 6, networkType = "unsigned", replaceMissingAdjacencies = FALSE, # Topological overlap options TOMType = "signed", TOMDenom = "min", suppressTOMForZeroAdjacencies = FALSE, suppressNegativeTOM = FALSE, # Saving or returning TOM getTOMs = NULL, saveTOMs = FALSE, saveTOMFileBase = "blockwiseTOM", # Basic tree cut options deepSplit = 2, detectCutHeight = 0.995, minModuleSize = min(20, ncol(datExpr)/2 ), # Advanced tree cut options maxCoreScatter = NULL, minGap = NULL, maxAbsCoreScatter = NULL, minAbsGap = NULL, minSplitHeight = NULL, minAbsSplitHeight = NULL, useBranchEigennodeDissim = FALSE, minBranchEigennodeDissim = mergeCutHeight, stabilityLabels = NULL, stabilityCriterion = c("Individual fraction", "Common fraction"), minStabilityDissim = NULL, pamStage = TRUE, pamRespectsDendro = TRUE, # Gene reassignment, module trimming, and module "significance" criteria reassignThreshold = 1e-6, minCoreKME = 0.5, minCoreKMESize = minModuleSize/3, minKMEtoStay = 0.3, # Module merging options mergeCutHeight = 0.15, impute = TRUE, trapErrors = FALSE, # Output options numericLabels = FALSE, # 
Options controlling behaviour nThreads = 0, useInternalMatrixAlgebra = FALSE, useCorOptionsThroughout = TRUE, verbose = 0, indent = 0, ...) { spaces = indentSpaces(indent); if (verbose>0) printFlush(paste(spaces, "Calculating module eigengenes block-wise from all genes")); seedSaved = FALSE; if (!is.null(randomSeed)) { if (exists(".Random.seed")) { seedSaved = TRUE; savedSeed = .Random.seed } set.seed(randomSeed); } intCorType = pmatch(corType, .corTypes); if (is.na(intCorType)) stop(paste("Invalid 'corType'. Recognized values are", paste(.corTypes, collapse = ", "))) intTOMType = pmatch(TOMType, .TOMTypes); if (is.na(intTOMType)) stop(paste("Invalid 'TOMType'. Recognized values are", paste(.TOMTypes, collapse = ", "))) TOMDenomC = pmatch(TOMDenom, .TOMDenoms)-1; if (is.na(TOMDenomC)) stop(paste("Invalid 'TOMDenom'. Recognized values are", paste(.TOMDenoms, collapse = ", "))) if ( (maxPOutliers < 0) | (maxPOutliers > 1)) stop("maxPOutliers must be between 0 and 1."); if (quickCor < 0) stop("quickCor must be positive."); if (nThreads < 0) stop("nThreads must be positive."); if (is.null(nThreads) || (nThreads==0)) nThreads = .useNThreads(); if ( (power<1) | (power>30) ) stop("power must be between 1 and 30."); # if ( (minKMEtoJoin >1) | (minKMEtoJoin <0) ) stop("minKMEtoJoin must be between 0 and 1."); intNetworkType = charmatch(networkType, .networkTypes); if (is.na(intNetworkType)) stop(paste("Unrecognized networkType argument.", "Recognized values are (unique abbreviations of)", paste(.networkTypes, collapse = ", "))); fallback = pmatch(pearsonFallback, .pearsonFallbacks) if (is.na(fallback)) stop(spaste("Unrecognized value '", pearsonFallback, "' of argument 'pearsonFallback'.", "Recognized values are (unique abbreviations of)\n", paste(.pearsonFallbacks, collapse = ", "))) datExpr = as.matrix(datExpr); dimEx = dim(datExpr); if (length(dimEx)!=2) stop("datExpr has incorrect dimensions.") weights = .checkAndScaleWeights(weights, datExpr, scaleByMax = FALSE); 
haveWeights = length(weights)>0;
  nGenes = dimEx[2];
  nSamples = dimEx[1];
  allLabels = rep(0, nGenes);
  AllMEs = NULL;
  allLabelIndex = NULL;
  originalSampleNames = rownames(datExpr);
  if (is.null(originalSampleNames)) originalSampleNames = spaste("Row.", 1:nrow(datExpr));
  originalGeneNames = colnames(datExpr);
  if (is.null(originalGeneNames)) originalGeneNames = spaste("Column.", 1:ncol(datExpr));
  #if (maxBlockSize >= floor(sqrt(2^31)) )
  #  stop("'maxBlockSize must be less than ", floor(sqrt(2^31)), ". Please decrease it and try again.")
  if (!is.null(blocks) && (length(blocks)!=nGenes))
    stop("Input error: the length of 'blocks' does not equal the number of genes in given 'datExpr'.");
  if (!is.null(getTOMs))
    warning("getTOMs is deprecated, please use saveTOMs instead.");

  # Check data for genes and samples that have too many missing values
  if (checkMissingData)
  {
    gsg = goodSamplesGenes(datExpr, weights = weights, verbose = verbose - 1, indent = indent + 1)
    if (!gsg$allOK)
    {
      datExpr = datExpr[gsg$goodSamples, gsg$goodGenes];
      if (haveWeights) weights = weights[gsg$goodSamples, gsg$goodGenes];
    }
    nGGenes = sum(gsg$goodGenes);
    nGSamples = sum(gsg$goodSamples);
  } else {
    nGGenes = nGenes;
    nGSamples = nSamples;
    gsg = list(goodSamples = rep(TRUE, nSamples), goodGenes = rep(TRUE, nGenes), allOK = TRUE);
  }

  if (any(is.na(datExpr)))
  {
    datExpr.scaled.imputed = t(impute.knn(t(scale(datExpr)))$data)
  } else
    datExpr.scaled.imputed = scale(datExpr);

  corFnc = .corFnc[intCorType];
  corOptions = .corOptionList[[intCorType]];
  corFncAcceptsWeights = intCorType==1;
  if (useCorOptionsThroughout)
    corOptions = c(corOptions, list(cosine = cosineCorrelation));
  # Do not add the quick argument here. The calculations carried out here will not be slow anyway.
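The step above scales each gene and fills remaining gaps with `impute::impute.knn` (which expects genes in rows, hence the transposes). The same scale-then-impute pattern can be illustrated in base R, with a column-mean fill standing in for the kNN step — an assumption made only to keep the sketch dependency-free, not what the package does:

```r
# Sketch of the scale-then-impute step; mean fill stands in for impute.knn here.
set.seed(1)
expr = matrix(rnorm(50 * 30), 50, 30)   # samples x genes, like datExpr
expr[sample(length(expr), 15)] = NA     # scatter some missing entries
scaled = scale(expr)                    # standardize each gene (column)
scaled[is.na(scaled)] = 0               # after scaling, each column mean is 0
```

In the real code the kNN imputation borrows information from the most similar genes instead of collapsing missing entries to the mean, which preserves correlation structure better.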
if (intCorType==2 && useCorOptionsThroughout )
    corOptions = c(corOptions,
                   list(maxPOutliers = maxPOutliers, pearsonFallback = pearsonFallback));
  signed = networkType %in% c("signed", "signed hybrid");

  # Set up advanced tree cut methods
  otherArgs = list(...);
  if (useBranchEigennodeDissim)
  {
    branchSplitFnc = list("branchEigengeneDissim");
    externalSplitOptions = list(list(corFnc = corFnc, corOptions = corOptions, signed = signed));
    nExternalBranchSplitFnc = 1;
    externalSplitFncNeedsDistance = FALSE;
    minExternalSplit = minBranchEigennodeDissim;
  } else {
    branchSplitFnc = list();
    externalSplitOptions = list()
    externalSplitFncNeedsDistance = logical(0);
    nExternalBranchSplitFnc = 0;
    minExternalSplit = numeric(0);
  }

  if (!is.null(stabilityLabels))
  {
    stabilityCriterion = match.arg(stabilityCriterion);
    branchSplitFnc = c(branchSplitFnc,
          if (stabilityCriterion=="Individual fraction")
               "branchSplitFromStabilityLabels.individualFraction" else
               "branchSplitFromStabilityLabels");
    minExternalSplit = c(minExternalSplit, minStabilityDissim);
    externalSplitFncNeedsDistance = c(externalSplitFncNeedsDistance, FALSE);
    externalSplitOptions = c(externalSplitOptions, list(list(stabilityLabels = stabilityLabels)))
  }

  if ("useBranchSplit" %in% names(otherArgs))
  {
    if (otherArgs$useBranchSplit)
    {
      nExternalBranchSplitFnc = nExternalBranchSplitFnc + 1;
      branchSplitFnc[[nExternalBranchSplitFnc]] = "branchSplit"
      externalSplitOptions[[nExternalBranchSplitFnc]] = list(discardProp = 0.08,
              minCentralProp = 0.75, nConsideredPCs = 3, signed = signed, getDetails = FALSE);
      externalSplitFncNeedsDistance[nExternalBranchSplitFnc] = FALSE;
      minExternalSplit[ nExternalBranchSplitFnc] = otherArgs$minBranchSplit;
    }
  }

  # Split data into blocks if needed
  if (is.null(blocks))
  {
    if (nGGenes > maxBlockSize)
    {
      if (verbose>1)
        printFlush(paste(spaces, "....pre-clustering genes to determine blocks.."));
      clustering = projectiveKMeans(datExpr, preferredSize = maxBlockSize, checkData = FALSE,
sizePenaltyPower = blockSizePenaltyPower, nCenters = nPreclusteringCenters, randomSeed = randomSeed, verbose = verbose-2, indent = indent + 1); gBlocks = .orderLabelsBySize(clustering$clusters) if (verbose > 2) { printFlush("Block sizes:"); print(table(gBlocks)); } } else gBlocks = rep(1, nGGenes); blocks = rep(NA, nGenes); blocks[gsg$goodGenes] = gBlocks; } else { gBlocks = blocks[gsg$goodGenes]; } blockLevels = as.numeric(levels(factor(gBlocks))); blockSizes = table(gBlocks) if (any(blockSizes > sqrt(2^31)-1)) printFlush(spaste(spaces, "Found block(s) with size(s) larger than limit of 'int' indexing.\n", spaces, " Support for such large blocks is experimental; please report\n", spaces, " any issues to Peter.Langfelder@gmail.com.")); nBlocks = length(blockLevels); # Initialize various variables dendros = list(); TOMFiles = rep("", nBlocks); blockGenes = list(); maxUsedLabel = 0; for (blockNo in 1:nBlocks) { if (verbose>1) printFlush(paste(spaces, "..Working on block", blockNo, ".")); blockGenes[[blockNo]] = c(1:nGenes)[gsg$goodGenes][gBlocks==blockLevels[blockNo]]; block = c(1:nGGenes)[gBlocks==blockLevels[blockNo]]; selExpr = as.matrix(datExpr[, block]); if (haveWeights) selWeights = weights[, block]; nBlockGenes = length(block); TOMFiles[blockNo] = spaste(saveTOMFileBase, "-block.", blockNo, ".RData"); if (loadTOM) { if (verbose > 2) printFlush(paste(spaces, " ..loading TOM for block", blockNo, "from file", TOMFiles[blockNo])); x = try(load(file =TOMFiles[blockNo]), silent = TRUE); if (x!="TOM") { loadTOM = FALSE printFlush(spaste("Loading of TOM in block ", blockNo, " failed:\n file ", TOMFiles[blockNo], "\n either does not exist or does not contain the object 'TOM'.\n", " Will recalculate TOM.")); } else if (!inherits(TOM, "dist")) { printFlush(spaste("TOM file ", TOMFiles[blockNo], " does not contain object of the right type or size.\n", " Will recalculate TOM.")) } else { size.1 = attr(TOM, "Size"); if (length(size.1)!=1 || size.1!=nBlockGenes) { 
printFlush(spaste("TOM file ", TOMFiles[blockNo], " does not contain object of the right type or size.\n", " Will recalculate TOM.")) loadTOM = FALSE } else { tom = as.matrix(TOM); rm(TOM); collectGarbage(); } } } if (!loadTOM) { # Calculate TOM by calling a custom C function: callVerb = max(0, verbose - 1); callInd = indent + 2; CcorType = intCorType - 1; CnetworkType = intNetworkType - 1; CTOMType = intTOMType -1; warn = as.integer(0); tom = .Call("tomSimilarity_call", selExpr, weights, as.integer(CcorType), as.integer(CnetworkType), as.double(power), as.integer(CTOMType), as.integer(TOMDenomC), as.double(maxPOutliers), as.double(quickCor), as.integer(fallback), as.integer(cosineCorrelation), as.integer(replaceMissingAdjacencies), as.integer(suppressTOMForZeroAdjacencies), as.integer(suppressNegativeTOM), as.integer(useInternalMatrixAlgebra), warn, as.integer(nThreads), as.integer(callVerb), as.integer(callInd), PACKAGE = "WGCNA"); # FIXME: warn if necessary if (saveTOMs) { TOM = as.dist(tom); TOMFiles[blockNo] = paste(saveTOMFileBase, "-block.", blockNo, ".RData", sep=""); if (verbose > 2) printFlush(paste(spaces, " ..saving TOM for block", blockNo, "into file", TOMFiles[blockNo])); save(TOM, file =TOMFiles[blockNo]); rm (TOM) collectGarbage(); } } dissTom = 1-tom; rm(tom); collectGarbage(); if (verbose>2) printFlush(paste(spaces, "....clustering..")); dendros[[blockNo]] = fastcluster::hclust(as.dist(dissTom), method = "average") if (verbose>2) printFlush(paste(spaces, "....detecting modules..")); datExpr.scaled.imputed.block = datExpr.scaled.imputed[, block]; if (nExternalBranchSplitFnc > 0) for (extBSFnc in 1:nExternalBranchSplitFnc) externalSplitOptions[[extBSFnc]]$expr = datExpr.scaled.imputed.block; collectGarbage(); blockLabels = try(cutreeDynamic(dendro = dendros[[blockNo]], deepSplit = deepSplit, cutHeight = detectCutHeight, minClusterSize = minModuleSize, method = "hybrid", distM = dissTom, maxCoreScatter = maxCoreScatter, minGap = minGap, 
maxAbsCoreScatter = maxAbsCoreScatter, minAbsGap = minAbsGap, minSplitHeight = minSplitHeight, minAbsSplitHeight = minAbsSplitHeight, externalBranchSplitFnc = branchSplitFnc, minExternalSplit = minExternalSplit, externalSplitOptions = externalSplitOptions, externalSplitFncNeedsDistance = externalSplitFncNeedsDistance, assumeSimpleExternalSpecification = FALSE, pamStage = pamStage, pamRespectsDendro = pamRespectsDendro, verbose = verbose-3, indent = indent + 2), silent = FALSE); collectGarbage(); if (verbose > 8) { labels0 = blockLabels if (interactive()) plotDendroAndColors(dendros[[blockNo]], labels2colors(blockLabels), dendroLabels = FALSE, main = paste("Block", blockNo), rowText = blockLabels, textPositions = 1, rowTextAlignment = "center"); if (FALSE) plotDendroAndColors(dendros[[blockNo]], labels2colors(allLabels), dendroLabels = FALSE, main = paste("Block", blockNo)); } if (inherits(blockLabels, 'try-error')) { if (verbose>0) { printFlush(paste(spaces, "*** cutreeDynamic returned the following error:\n", spaces, blockLabels, spaces, "Stopping the module detection here.")); } else warning(paste("blockwiseModules: cutreeDynamic returned the following error:\n", " ", blockLabels, "---> Continuing with next block. 
")); next; } if (sum(blockLabels>0)==0) { if (verbose>1) { printFlush(paste(spaces, "No modules detected in block", blockNo)); } blockNo = blockNo + 1; next; } blockLabels[blockLabels>0] = blockLabels[blockLabels>0] + maxUsedLabel; maxUsedLabel = max(blockLabels); if (verbose>2) printFlush(paste(spaces, "....calculating module eigengenes..")); MEs = try(moduleEigengenes(selExpr[, blockLabels!=0], blockLabels[blockLabels!=0], impute = impute, # subHubs = TRUE, trapErrors = FALSE, verbose = verbose - 3, indent = indent + 2), silent = TRUE); if (inherits(MEs, 'try-error')) { if (trapErrors) { if (verbose>0) { printFlush(paste(spaces, "*** moduleEigengenes failed with the following message:")); printFlush(paste(spaces, " ", MEs)); printFlush(paste(spaces, " ---> Stopping module detection here.")); } else warning(paste("blockwiseModules: moduleEigengenes failed with the following message:", "\n ", MEs, "---> Continuing with next block. ")); next; } else stop(MEs); } #propMEs = as.data.frame(MEs$eigengenes[, names(MEs$eigengenes)!="ME0"]); propMEs = MEs$eigengenes; blockLabelIndex = as.numeric(substring(names(propMEs), 3)); deleteModules = NULL; changedModules = NULL; # Check modules: make sure that of the genes present in the module, at least a minimum number # have a correlation with the eigengene higher than a given cutoff. 
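The check that follows correlates every gene in a module with the module eigengene; this correlation is the gene's module membership, usually called kME. A self-contained sketch of the same quantity on synthetic data (the 0.5 cutoff plays the role of `minCoreKME`, and the eigengene is taken as the first left singular vector, as `moduleEigengenes` does):

```r
# Sketch of a kME (module membership) check on one synthetic module.
set.seed(2)
modExpr = matrix(rnorm(30 * 40), 30, 40)          # samples x module genes
ME = svd(scale(modExpr), nu = 1, nv = 0)$u[, 1]   # module eigengene
kME = abs(as.vector(cor(modExpr, ME)))            # membership of each gene
nCore = sum(kME > 0.5)                            # genes passing a minCoreKME-style cutoff
```

A module whose `nCore` falls below a `minCoreKMESize`-style threshold has no tightly correlated core and is dropped, which is exactly what the loop below implements.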
if (verbose>2)
      printFlush(paste(spaces, "....checking kME in modules.."));
    for (mod in 1:ncol(propMEs))
    {
      modGenes = (blockLabels==blockLabelIndex[mod]);
      KME = do.call(corFnc,
                c(list(selExpr[, modGenes], propMEs[, mod]),
                  if (corFncAcceptsWeights) list(
                       weights.x = if (haveWeights) weights[, modGenes] else NULL,
                       weights.y = NULL) else NULL,
                  corOptions));
      if (intNetworkType==1) KME = abs(KME);
      if (sum(KME>minCoreKME) < minCoreKMESize)
      {
        blockLabels[modGenes] = 0;
        deleteModules = c(deleteModules, mod);
        if (verbose>3)
          printFlush(paste(spaces, "    ..deleting module ", mod, ": of ", sum(modGenes),
                           " total genes in the module\n       only ", sum(KME>minCoreKME),
                           " have the requisite high correlation with the eigengene.", sep=""));
      } else if (sum(KME<minKMEtoStay)>0)
      {
        # Remove genes whose KME is too low:
        if (verbose > 2)
          printFlush(paste(spaces, "    ..removing", sum(KME<minKMEtoStay),
                           "genes from module", mod, "because their KME is too low."));
        blockLabels[modGenes][KME < minKMEtoStay] = 0;
        if (sum(blockLabels==blockLabelIndex[mod]) < minModuleSize)
        {
          deleteModules = c(deleteModules, mod);
          blockLabels[modGenes] = 0;
          if (verbose>3)
            printFlush(paste(spaces, "    ..deleting module ", blockLabelIndex[mod],
                             ": not enough genes in the module after removal of low KME genes.",
                             sep=""));
        } else {
          changedModules = union(changedModules, blockLabelIndex[mod]);
        }
      }
    }
    # Remove modules that are to be removed
    if (!is.null(deleteModules))
    {
      propMEs = propMEs[, -deleteModules, drop = FALSE];
      modGenes = is.finite(match(blockLabels, blockLabelIndex[deleteModules]));
      blockLabels[modGenes] = 0;
      modAllGenes = is.finite(match(allLabels, blockLabelIndex[deleteModules]));
      allLabels[modAllGenes] = 0;
      blockLabelIndex = blockLabelIndex[-deleteModules];
    }
    # Check if any modules are left
    if (sum(blockLabels>0)==0)
    {
      if (verbose>1)
        printFlush(paste(spaces, "No significant modules detected in block", blockNo));
      blockNo = blockNo + 1;
      next;
    }
    # Update allMEs and allLabels
    if (is.null(AllMEs))
    {
      AllMEs = propMEs;
    } else
      AllMEs = cbind(AllMEs, propMEs);
    allLabelIndex = c(allLabelIndex, blockLabelIndex);
    assigned = block[blockLabels!=0];
    allLabels[gsg$goodGenes][assigned] =
blockLabels[blockLabels!=0]; rm(dissTom); collectGarbage(); #if (blockNo < nBlocks) #{ # leftoverBlockGenes = block[allLabels[block]==0]; # nLeftBG = length(leftoverBlockGenes); # blocksLeft = c((blockNo+1):nBlocks); # blockSizes = as.vector(table(blocks))[blocksLeft]; # blocksOpen = blocksLeft[blockSizes < maxBlockSize]; # nBlocksOpen = length(blocksOpen); # if ((nLeftBG>0) && (nBlocksOpen>0)) # { # openSizes = blockSizes[blocksOpen]; # centers = matrix(0, nGSamples, nBlocksOpen); # for (cen in 1:nBlocksOpen) # centers[, cen] = svd(datExpr[, blocks==blocksOpen[cen]], nu=1, nv=0)$u[,1]; # dst = geneCenterDist(datExpr[, block], centers); # rowMat = matrix(c(1:nrow(dst)), nrow(dst), ncol(dst)); # colMat = matrix(c(1:ncol(dst)), nrow(dst), ncol(dst), byrow = TRUE); # dstOrder = order(as.vector(dst)); # while ((nLeftBG>0) && (nBlocksOpen>0)) # { # gene = colMat[dstOrder[1]]; # rowMatV = as.vector(rowMat); # colMatV = as.vector(colMat); blockNo = blockNo + 1; } # Check whether any of the already assigned genes should be re-assigned deleteModules = NULL; goodLabels = allLabels[gsg$goodGenes]; reassignIndex = rep(FALSE, length(goodLabels)); if (sum(goodLabels!=0) > 0) { propLabels = goodLabels[goodLabels!=0]; assGenes = (c(1:nGenes)[gsg$goodGenes])[goodLabels!=0]; KME = do.call(match.fun(corFnc), c(list(datExpr[, goodLabels!=0], AllMEs), if (corFncAcceptsWeights) list( weights.x = if(haveWeights) weights[, goodLabels!=0] else NULL, weights.y = NULL) else NULL, corOptions)); if (intNetworkType == 1) KME = abs(KME) nMods = ncol(AllMEs); for (mod in 1:nMods) { modGenes = c(1:length(propLabels))[propLabels==allLabelIndex[mod]]; KMEmodule = KME[modGenes, mod]; KMEbest = apply(KME[modGenes, , drop = FALSE], 1, max); candidates = (KMEmodule < KMEbest); candidates[!is.finite(candidates)] = FALSE; if (FALSE) { modDiss = dissTom[goodLabels==allLabelIndex[mod], goodLabels==allLabelIndex[mod]]; mod.k = colSums(modDiss); boxplot(mod.k~candidates) } if (sum(candidates) > 0) { pModule = 
corPvalueFisher(KMEmodule[candidates], nSamples); whichBest = apply(KME[modGenes[candidates], , drop = FALSE], 1, which.max); pBest = corPvalueFisher(KMEbest[candidates], nSamples); reassign = ifelse(is.finite(pBest/pModule), (pBest/pModule < reassignThreshold), FALSE); if (sum(reassign)>0) { if (verbose > 2) printFlush(paste(spaces, " ..reassigning", sum(reassign), "genes from module", mod, "to modules with higher KME.")); allLabels[assGenes[modGenes[candidates][reassign]]] = whichBest[reassign]; changedModules = union(changedModules, whichBest[reassign]); if (length(modGenes)-sum(reassign) < minModuleSize) { deleteModules = c(deleteModules, mod); } else changedModules = union(changedModules, mod); } } } } # Remove modules that are to be removed if (!is.null(deleteModules)) { AllMEs = AllMEs[, -deleteModules, drop = FALSE]; genes = is.finite(match(allLabels, allLabelIndex[deleteModules])); allLabels[genes] = 0; allLabelIndex = allLabelIndex[-deleteModules]; goodLabels = allLabels[gsg$goodGenes]; } if (verbose>1) printFlush(paste(spaces, "..merging modules that are too close..")); if (numericLabels) { colors = allLabels } else { colors = labels2colors(allLabels) } mergedAllColors = colors; MEsOK = TRUE; mergedMods = try(mergeCloseModules(datExpr, colors[gsg$goodGenes], cutHeight = mergeCutHeight, relabel = TRUE, # trapErrors = FALSE, corFnc = corFnc, corOptions = corOptions, impute = impute, verbose = verbose-2, indent = indent + 2), silent = TRUE); if (inherits(mergedMods, 'try-error')) { warning(paste("blockwiseModules: mergeCloseModules failed with the following error message:\n ", mergedMods, "\n--> returning unmerged colors.\n")); MEs = try(moduleEigengenes(datExpr, colors[gsg$goodGenes], # subHubs = TRUE, trapErrors = FALSE, impute = impute, verbose = verbose-3, indent = indent+3), silent = TRUE); if (inherits(MEs, 'try-error')) { if (!trapErrors) stop(MEs); if (verbose>0) { printFlush(paste(spaces, "*** moduleEigengenes failed with the following error 
message:")); printFlush(paste(spaces, " ", MEs)); printFlush(paste(spaces, "*** returning no module eigengenes.\n")); } else warning(paste("blockwiseModules: moduleEigengenes failed with the following error message:\n ", MEs, "\n--> returning no module eigengenes.\n")); allSampleMEs = NULL; MEsOK = FALSE; } else { if (sum(!MEs$validMEs)>0) { mergedAllColors[gsg$goodGenes] = MEs$validColors; MEs = MEs$eigengenes[, MEs$validMEs]; } else MEs = MEs$eigengenes; allSampleMEs = as.data.frame(matrix(NA, nrow = nSamples, ncol = ncol(MEs))); allSampleMEs[gsg$goodSamples, ] = MEs[,]; names(allSampleMEs) = names(MEs); rownames(allSampleMEs) = make.unique(originalSampleNames) } } else { mergedAllColors[gsg$goodGenes] = mergedMods$colors; allSampleMEs = as.data.frame(matrix(NA, nrow = nSamples, ncol = ncol(mergedMods$newMEs))); allSampleMEs[gsg$goodSamples, ] = mergedMods$newMEs[,]; names(allSampleMEs) = names(mergedMods$newMEs); rownames(allSampleMEs) = make.unique(originalSampleNames); } if (seedSaved) .Random.seed <<- savedSeed; if (!saveTOMs) TOMFiles = NULL; names(colors) = originalGeneNames; names(mergedAllColors) = originalGeneNames; list(colors = mergedAllColors, unmergedColors = colors, MEs = allSampleMEs, goodSamples = gsg$goodSamples, goodGenes = gsg$goodGenes, dendrograms = dendros, TOMFiles = TOMFiles, blockGenes = blockGenes, blocks = blocks, MEsOK = MEsOK); } #================================================================================== # # Helper functions # #================================================================================== # order labels by size .orderLabelsBySize = function(labels, exclude = NULL) { levels.0 = sort(unique(labels)); levels = levels.0[ !levels.0 %in% exclude] levels.excl = levels.0 [levels.0 %in% exclude] rearrange = labels %in% levels; tab = table(labels [ rearrange ]); rank = rank(-tab, ties.method = "first"); oldOrder = c(levels.excl, names(tab)); newOrder = c(levels.excl, names(tab)[rank]); if (is.numeric(labels)) 
newOrder = as.numeric(newOrder); newOrder[ match(labels, oldOrder) ] } #====================================================================================================== # # Re-cut trees for blockwiseModules # #====================================================================================================== recutBlockwiseTrees = function(datExpr, goodSamples, goodGenes, blocks, TOMFiles, dendrograms, corType = "pearson", networkType = "unsigned", deepSplit = 2, detectCutHeight = 0.995, minModuleSize = min(20, ncol(datExpr)/2 ), maxCoreScatter = NULL, minGap = NULL, maxAbsCoreScatter = NULL, minAbsGap = NULL, minSplitHeight = NULL, minAbsSplitHeight = NULL, useBranchEigennodeDissim = FALSE, minBranchEigennodeDissim = mergeCutHeight, pamStage = TRUE, pamRespectsDendro = TRUE, # minKMEtoJoin =0.7, minCoreKME = 0.5, minCoreKMESize = minModuleSize/3, minKMEtoStay = 0.3, reassignThreshold = 1e-6, mergeCutHeight = 0.15, impute = TRUE, trapErrors = FALSE, numericLabels = FALSE, verbose = 0, indent = 0, ...) { spaces = indentSpaces(indent); #if (verbose>0) # printFlush(paste(spaces, "Calculating module eigengenes block-wise from all genes")); cutreeLabels = list() intCorType = pmatch(corType, .corTypes); if (is.na(intCorType)) stop(paste("Invalid 'corType'. 
Recognized values are", paste(.corTypes, collapse = ", ")))
# if ( (minKMEtoJoin >1) | (minKMEtoJoin <0) ) stop("minKMEtoJoin must be between 0 and 1.");
intNetworkType = charmatch(networkType, .networkTypes);
if (is.na(intNetworkType))
  stop(paste("Unrecognized networkType argument.",
             "Recognized values are (unique abbreviations of)",
             paste(.networkTypes, collapse = ", ")));
dimEx = dim(datExpr);
if (length(dimEx)!=2) stop("datExpr has incorrect dimensions.")
nGenes = dimEx[2];
nSamples = dimEx[1];
allLabels = rep(0, nGenes);
AllMEs = NULL;
allLabelIndex = NULL;
originalSampleNames = rownames(datExpr);
if (is.null(originalSampleNames)) originalSampleNames = spaste("Row.", 1:nrow(datExpr));
originalGeneNames = colnames(datExpr);
if (is.null(originalGeneNames)) originalGeneNames = spaste("Column.", 1:ncol(datExpr));
if (length(blocks)!=nGenes)
  stop("Input error: the length of 'blocks' does not equal the number of genes in the given 'datExpr'.");
# Check data for genes and samples that have too many missing values
nGGenes = sum(goodGenes)
nGSamples = sum(goodSamples);
gsg = list(goodSamples = goodSamples, goodGenes = goodGenes,
           allOK = (sum(!goodSamples) + sum(!goodGenes) == 0));
if (!gsg$allOK)
  datExpr = datExpr[goodSamples, goodGenes];
gBlocks = blocks[gsg$goodGenes];
blockLevels = as.numeric(levels(factor(gBlocks)));
blockSizes = table(gBlocks)
nBlocks = length(blockLevels);
# Impute missing data only if necessary
datExpr.scaled.imputed = if (any(is.na(datExpr)))
     t(impute.knn(t(scale(datExpr)))$data) else scale(datExpr);
corFnc = .corFnc[intCorType];
corOptions = list(use = 'p');
signed = networkType %in% c("signed", "signed hybrid");
# Set up advanced tree cut methods
otherArgs = list(...);
if (useBranchEigennodeDissim)
{
  branchSplitFnc = list("branchEigengeneDissim");
  externalSplitOptions = list(list(corFnc = corFnc, corOptions = corOptions, signed = signed));
  externalSplitFncNeedsDistance = FALSE;
  nExternalBranchSplitFnc = 1;
  minExternalSplit =
minBranchEigennodeDissim;
} else {
  branchSplitFnc = list();
  externalSplitOptions = list(list())
  externalSplitFncNeedsDistance = logical(0);
  nExternalBranchSplitFnc = 0;
  minExternalSplit = numeric(0);
}
if ("useBranchSplit" %in% names(otherArgs))
{
  if (otherArgs$useBranchSplit)
  {
    nExternalBranchSplitFnc = nExternalBranchSplitFnc + 1;
    branchSplitFnc[[nExternalBranchSplitFnc]] = "branchSplit"
    externalSplitOptions[[nExternalBranchSplitFnc]] = list(discardProp = 0.08, minCentralProp = 0.75,
                            nConsideredPCs = 3, signed = signed, getDetails = FALSE);
    minExternalSplit[ nExternalBranchSplitFnc] = otherArgs$minBranchSplit;
    externalSplitFncNeedsDistance[ nExternalBranchSplitFnc] = FALSE;
  }
}
# Initialize various variables
blockNo = 1;
maxUsedLabel = 0;
while (blockNo <= nBlocks)
{
  if (verbose>1) printFlush(paste(spaces, "..Working on block", blockNo, "."));
  block = c(1:nGGenes)[gBlocks==blockLevels[blockNo]];
  selExpr = as.matrix(datExpr[, block]);
  nBlockGenes = length(block);
  TOM = NULL; # Will be loaded below; this gets rid of warnings from R CMD check.
  xx = try(load(TOMFiles[blockNo]), silent = TRUE);
  if (inherits(xx, 'try-error'))
  {
    printFlush(paste("************\n File name", TOMFiles[blockNo],
                     "appears invalid: the load function returned the following error:\n     ", xx));
    stop();
  }
  if (xx!='TOM')
    stop(paste("The file", TOMFiles[blockNo], "does not contain the appropriate variable."));
  if (!inherits(TOM, "dist"))
    stop(paste("The file", TOMFiles[blockNo], "does not contain the appropriate distance structure."));
  dissTom = as.matrix(1-TOM);
  if (verbose>2) printFlush(paste(spaces, "....detecting modules.."));
  datExpr.scaled.imputed.block = datExpr.scaled.imputed[, block];
  if (nExternalBranchSplitFnc > 0)
    for (extBSFnc in 1:nExternalBranchSplitFnc)
      externalSplitOptions[[extBSFnc]]$expr = datExpr.scaled.imputed.block;
  blockLabels = try(cutreeDynamic(dendro = dendrograms[[blockNo]],
                     deepSplit = deepSplit,
                     cutHeight = detectCutHeight, minClusterSize = minModuleSize,
                     method = "hybrid", maxCoreScatter = maxCoreScatter, minGap = minGap,
                     maxAbsCoreScatter = maxAbsCoreScatter, minAbsGap = minAbsGap,
                     minSplitHeight = minSplitHeight, minAbsSplitHeight = minAbsSplitHeight,
                     externalBranchSplitFnc = branchSplitFnc,
                     minExternalSplit = minExternalSplit,
                     externalSplitOptions = externalSplitOptions,
                     externalSplitFncNeedsDistance = externalSplitFncNeedsDistance,
                     assumeSimpleExternalSpecification = FALSE,
                     pamStage = pamStage, pamRespectsDendro = pamRespectsDendro,
                     distM = dissTom,
                     verbose = verbose-3, indent = indent + 2), silent = TRUE);
  collectGarbage();
  cutreeLabels[[blockNo]] = blockLabels;
  if (inherits(blockLabels, 'try-error'))
  {
    if (verbose>0)
    {
      printFlush(paste(spaces, "*** cutreeDynamic returned the following error:\n",
                       spaces, blockLabels, spaces, "Stopping the module detection here."));
    } else
      warning(paste("blockwiseModules: cutreeDynamic returned the following error:\n",
                    "  ", blockLabels, "---> Continuing with next block. 
")); next; } if (sum(blockLabels>0)==0) { if (verbose>1) { printFlush(paste(spaces, "No modules detected in block", blockNo)); } blockNo = blockNo + 1; next; } blockLabels[blockLabels>0] = blockLabels[blockLabels>0] + maxUsedLabel; maxUsedLabel = max(blockLabels); if (verbose>2) printFlush(paste(spaces, "....calculating module eigengenes..")); MEs = try(moduleEigengenes(selExpr[, blockLabels!=0], blockLabels[blockLabels!=0], impute = impute, # subHubs = TRUE, trapErrors = FALSE, verbose = verbose - 3, indent = indent + 2), silent = TRUE); if (inherits(MEs, 'try-error')) { if (trapErrors) { if (verbose>0) { printFlush(paste(spaces, "*** moduleEigengenes failed with the following message:")); printFlush(paste(spaces, " ", MEs)); printFlush(paste(spaces, " ---> Stopping module detection here.")); } else warning(paste("blockwiseModules: moduleEigengenes failed with the following message:", "\n ", MEs, "---> Continuing with next block. ")); next; } else stop(MEs); } #propMEs = as.data.frame(MEs$eigengenes[, names(MEs$eigengenes)!="ME0"]); propMEs = MEs$eigengenes; blockLabelIndex = as.numeric(substring(names(propMEs), 3)); deleteModules = NULL; changedModules = NULL; # Check modules: make sure that of the genes present in the module, at least a minimum number # have a correlation with the eigengene higher than a given cutoff. 
  if (verbose>2)
    printFlush(paste(spaces, "....checking modules for statistical meaningfulness.."));
  for (mod in 1:ncol(propMEs))
  {
    modGenes = (blockLabels==blockLabelIndex[mod]);
    corEval = parse(text = paste(corFnc, "(selExpr[, modGenes], propMEs[, mod]",
                                 prepComma(.corOptions[intCorType]), ")"));
    KME = as.vector(eval(corEval));
    if (intNetworkType==1) KME = abs(KME);
    if (sum(KME>minCoreKME) < minCoreKMESize)
    {
      blockLabels[modGenes] = 0;
      deleteModules = c(deleteModules, mod);
      if (verbose>3)
        printFlush(paste(spaces, "  ..deleting module ", mod, ": of ", sum(modGenes),
                         " total genes in the module\n only ", sum(KME>minCoreKME),
                         " have the requisite high correlation with the eigengene.", sep=""));
    } else if (sum(KME<minKMEtoStay)>0)
    {
      # Remove genes whose KME is too low:
      if (verbose > 2)
        printFlush(paste(spaces, "  ..removing", sum(KME<minKMEtoStay),
                         "genes from module", mod, "because their KME is too low."));
      blockLabels[modGenes][KME<minKMEtoStay] = 0;
      if (sum(blockLabels==blockLabelIndex[mod]) < minModuleSize)
      {
        deleteModules = c(deleteModules, mod);
        blockLabels[modGenes] = 0;
        if (verbose>3)
          printFlush(paste(spaces, "  ..deleting module ", blockLabelIndex[mod],
                           ": not enough genes in the module after removal of low KME genes.", sep=""));
      } else {
        changedModules = union(changedModules, blockLabelIndex[mod]);
      }
    }
  }
  # Remove modules that are to be removed
  if (!is.null(deleteModules))
  {
    propMEs = propMEs[, -deleteModules, drop = FALSE];
    modGenes = is.finite(match(blockLabels, blockLabelIndex[deleteModules]));
    blockLabels[modGenes] = 0;
    modAllGenes = is.finite(match(allLabels, blockLabelIndex[deleteModules]));
    allLabels[modAllGenes] = 0;
    blockLabelIndex = blockLabelIndex[-deleteModules];
  }
  # Check if any modules are left
  if (sum(blockLabels>0)==0)
  {
    if (verbose>1) printFlush(paste(spaces, "No significant modules detected in block", blockNo));
    blockNo = blockNo + 1;
    next;
  }
  # Update allMEs and allLabels
  if (is.null(AllMEs))
  {
    AllMEs = propMEs;
  } else
    AllMEs = cbind(AllMEs, propMEs);
  allLabelIndex = c(allLabelIndex, blockLabelIndex);
  assigned = block[blockLabels!=0];
  allLabels[assigned] = blockLabels[blockLabels!=0];
  rm(dissTom); collectGarbage(); 
blockNo = blockNo + 1; } # Check whether any of the already assigned genes should be re-assigned deleteModules = NULL; goodLabels = allLabels[gsg$goodGenes]; if (sum(goodLabels!=0) > 0) { propLabels = goodLabels[goodLabels!=0]; assGenes = (c(1:nGenes)[gsg$goodGenes])[goodLabels!=0]; corEval = parse(text = paste(corFnc, "(datExpr[, goodLabels!=0], AllMEs", prepComma(.corOptions[intCorType]), ")")); KME = eval(corEval); if (intNetworkType == 1) KME = abs(KME) nMods = ncol(AllMEs); for (mod in 1:nMods) { modGenes = c(1:length(propLabels))[propLabels==allLabelIndex[mod]]; KMEmodule = KME[modGenes, mod]; KMEbest = apply(KME[modGenes, , drop = FALSE], 1, max); candidates = (KMEmodule < KMEbest); candidates[!is.finite(candidates)] = FALSE; if (sum(candidates) > 0) { pModule = corPvalueFisher(KMEmodule[candidates], nSamples); whichBest = apply(KME[modGenes[candidates], , drop = FALSE], 1, which.max); pBest = corPvalueFisher(KMEbest[candidates], nSamples); reassign = ifelse(is.finite(pBest/pModule), (pBest/pModule < reassignThreshold), FALSE); if (sum(reassign)>0) { if (verbose > 2) printFlush(paste(spaces, " ..reassigning", sum(reassign), "genes from module", mod, "to modules with higher KME.")); allLabels[assGenes[modGenes[candidates][reassign]]] = whichBest[reassign]; changedModules = union(changedModules, whichBest[reassign]); if (length(modGenes)-sum(reassign) < minModuleSize) { deleteModules = c(deleteModules, mod); } else changedModules = union(changedModules, mod); } } } } # Remove modules that are to be removed if (!is.null(deleteModules)) { AllMEs = AllMEs[, -deleteModules, drop = FALSE]; genes = is.finite(match(allLabels, allLabelIndex[deleteModules])); allLabels[genes] = 0; allLabelIndex = allLabelIndex[-deleteModules]; goodLabels = allLabels[gsg$goodGenes]; } if (verbose>1) printFlush(paste(spaces, "..merging modules that are too close..")); if (numericLabels) { colors = allLabels } else { colors = labels2colors(allLabels) } mergedAllColors = colors; MEsOK = 
TRUE; mergedMods = try(mergeCloseModules(datExpr, colors[gsg$goodGenes], cutHeight = mergeCutHeight, relabel = TRUE, # trapErrors = FALSE, impute = impute, verbose = verbose-2, indent = indent + 2), silent = TRUE); if (inherits(mergedMods, 'try-error')) { warning(paste("blockwiseModules: mergeCloseModules failed with the following error message:\n ", mergedMods, "\n--> returning unmerged colors.\n")); MEs = try(moduleEigengenes(datExpr, colors[gsg$goodGenes], # subHubs = TRUE, trapErrors = FALSE, impute = impute, verbose = verbose-3, indent = indent+3), silent = TRUE); if (inherits(MEs, 'try-error')) { if (!trapErrors) stop(MEs); if (verbose>0) { printFlush(paste(spaces, "*** moduleEigengenes failed with the following error message:")); printFlush(paste(spaces, " ", MEs)); printFlush(paste(spaces, "*** returning no module eigengenes.\n")); } else warning(paste("blockwiseModules: moduleEigengenes failed with the following error message:\n ", MEs, "\n--> returning no module eigengenes.\n")); allSampleMEs = NULL; MEsOK = FALSE; } else { if (sum(!MEs$validMEs)>0) { colors[gsg$goodGenes] = MEs$validColors; MEs = MEs$eigengenes[, MEs$validMEs]; } else MEs = MEs$eigengenes; allSampleMEs = as.data.frame(matrix(NA, nrow = nSamples, ncol = ncol(MEs))); allSampleMEs[gsg$goodSamples, ] = MEs[,]; names(allSampleMEs) = names(MEs); } } else { mergedAllColors[gsg$goodGenes] = mergedMods$colors; allSampleMEs = as.data.frame(matrix(NA, nrow = nSamples, ncol = ncol(mergedMods$newMEs))); allSampleMEs[gsg$goodSamples, ] = mergedMods$newMEs[,]; names(allSampleMEs) = names(mergedMods$newMEs); rownames(allSampleMEs) = make.unique(originalSampleNames); } names(colors) = originalGeneNames; names(mergedAllColors) = originalGeneNames; list(colors = mergedAllColors, unmergedColors = colors, cutreeLabels = cutreeLabels, MEs = allSampleMEs, #goodSamples = gsg$goodSamples, #goodGenes = gsg$goodGenes, #dendrograms = dendrograms, #TOMFiles = TOMFiles, #blockGenes = blockGenes, MEsOK = MEsOK); } 
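# The helper .orderLabelsBySize defined above is easiest to understand from a
# small standalone sketch. The copy below (renamed orderLabelsBySizeDemo; a
# hypothetical name for illustration only) mirrors its logic: the most frequent
# label is renamed to 1, the next most frequent to 2, and so on, while excluded
# labels (typically 0, the "unassigned" label) are left untouched.

```r
# Standalone copy of the .orderLabelsBySize relabeling logic, for illustration.
orderLabelsBySizeDemo = function(labels, exclude = NULL)
{
  levels.0 = sort(unique(labels));
  levels = levels.0[ !levels.0 %in% exclude];
  levels.excl = levels.0[ levels.0 %in% exclude];
  rearrange = labels %in% levels;
  tab = table(labels[rearrange]);            # size of each non-excluded label class
  rank = rank(-tab, ties.method = "first");  # 1 = largest class
  oldOrder = c(levels.excl, names(tab));
  newOrder = c(levels.excl, names(tab)[rank]);
  if (is.numeric(labels)) newOrder = as.numeric(newOrder);
  newOrder[ match(labels, oldOrder) ];
}

# Label 2 occurs three times, label 3 twice, label 1 once; 0 is excluded:
orderLabelsBySizeDemo(c(1, 2, 2, 2, 0, 3, 3), exclude = 0)
# -> 3 1 1 1 0 2 2
```

# The same convention is applied in blockwiseIndividualTOMs below, where the
# pre-clustering block labels are ordered by decreasing block size.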
#========================================================================================================== # # blockwiseIndividualTOMs # #========================================================================================================== # This function calculates and saves blockwise topological overlaps for a given multi expression data. The # argument blocks can be given to specify blocks, or the blocks can be omitted and will be calculated if # necessary. # Note on naming of output files: %s will translate into set number, %N into set name (if given in # multiExpr), %b into block number. .substituteTags = function(format, tags, replacements) { nTags = length(tags); if (length(replacements)!= nTags) stop("Length of tags and replacements must be the same."); for (t in 1:nTags) format = gsub(as.character(tags[t]), as.character(replacements[t]), format, fixed = TRUE); format; } .processFileName = function(format, setNumber, setNames, blockNumber, analysisName = "") { # The following is a workaround around empty (NULL) setNames. Replaces the name with the setNumber. if (is.null(setNames)) setNames = rep(setNumber, setNumber) .substituteTags(format, c("%s", "%N", "%b", "%a"), c(setNumber, setNames[setNumber], blockNumber, analysisName)); } blockwiseIndividualTOMs = function( multiExpr, multiWeights = NULL, # Data checking options checkMissingData = TRUE, # Blocking options blocks = NULL, maxBlockSize = 5000, blockSizePenaltyPower = 5, nPreclusteringCenters = NULL, randomSeed = 54321, # Network construction arguments: correlation options corType = "pearson", maxPOutliers = 1, quickCor = 0, pearsonFallback = "individual", cosineCorrelation = FALSE, # Adjacency function options power = 6, networkType = "unsigned", checkPower = TRUE, replaceMissingAdjacencies = FALSE, # Topological overlap options TOMType = "unsigned", TOMDenom = "min", suppressTOMForZeroAdjacencies = FALSE, suppressNegativeTOM = FALSE, # Save individual TOMs? 
# If not, they will be returned in the session.
saveTOMs = TRUE,
individualTOMFileNames = "individualTOM-Set%s-Block%b.RData",
# General options
nThreads = 0,
useInternalMatrixAlgebra = FALSE,
verbose = 2, indent = 0)
{
  spaces = indentSpaces(indent);
  dataSize = checkSets(multiExpr, checkStructure = TRUE);
  if (dataSize$structureOK)
  {
    nSets = dataSize$nSets;
    nGenes = dataSize$nGenes;
    multiFormat = TRUE;
  } else {
    multiExpr = multiData(multiExpr);
    multiWeights = multiData(multiWeights);
    nSets = dataSize$nSets;
    nGenes = dataSize$nGenes;
    multiFormat = FALSE;
  }
  .checkAndScaleMultiWeights(multiWeights, multiExpr, scaleByMax = FALSE);
  if (length(power)!=1)
  {
    if (length(power)!=nSets)
      stop("Invalid arguments: Length of 'power' must equal number of sets given in 'multiExpr'.");
  } else {
    power = rep(power, nSets);
  }
  #if (maxBlockSize >= floor(sqrt(2^31)) )
  #  stop("'maxBlockSize must be less than ", floor(sqrt(2^31)), ". Please decrease it and try again.")
  if (!is.null(blocks) && (length(blocks)!=nGenes))
    stop("Input error: length of 'blocks' must equal number of genes in 'multiExpr'.");
  if (verbose>0)
    printFlush(paste(spaces, "Calculating topological overlaps block-wise from all genes"));
  intCorType = pmatch(corType, .corTypes);
  if (is.na(intCorType))
    stop(paste("Invalid 'corType'. Recognized values are", paste(.corTypes, collapse = ", ")))
  intTOMType = pmatch(TOMType, .TOMTypes);
  if (is.na(intTOMType))
    stop(paste("Invalid 'TOMType'. Recognized values are", paste(.TOMTypes, collapse = ", ")))
  TOMDenomC = pmatch(TOMDenom, .TOMDenoms)-1;
  if (is.na(TOMDenomC))
    stop(paste("Invalid 'TOMDenom'. 
Recognized values are", paste(.TOMDenoms, collapse = ", "))) if ( checkPower & ((sum(power<1)>0) | (sum(power>50)>0) ) ) stop("power must be between 1 and 50."); intNetworkType = charmatch(networkType, .networkTypes); if (is.na(intNetworkType)) stop(paste("Unrecognized networkType argument.", "Recognized values are (unique abbreviations of)", paste(.networkTypes, collapse = ", "))); if ( (maxPOutliers < 0) | (maxPOutliers > 1)) stop("maxPOutliers must be between 0 and 1."); if (quickCor < 0) stop("quickCor must be positive."); fallback = pmatch(pearsonFallback, .pearsonFallbacks) if (is.na(fallback)) stop(paste("Unrecognized 'pearsonFallback'. Recognized values are (unique abbreviations of)\n", paste(.pearsonFallbacks, collapse = ", "))) if (nThreads < 0) stop("nThreads must be positive."); if (is.null(nThreads) || (nThreads==0)) nThreads = .useNThreads(); nSamples = dataSize$nSamples; # Check data for genes and samples that have too many missing values if (checkMissingData) { gsg = goodSamplesGenesMS(multiExpr, verbose = verbose - 1, indent = indent + 1) if (!gsg$allOK) { multiExpr = mtd.subset(multiExpr, gsg$goodSamples, gsg$goodGenes); if (!is.null(multiWeights)) multiWeights = mtd.subset(multiWeights, gsg$goodSamples, gsg$goodGenes); } } else { gsg = list(goodGenes = rep(TRUE, nGenes), goodSamples = lapply(nSamples, function(n) rep(TRUE, n)), allOK = TRUE); } nGGenes = sum(gsg$goodGenes); nGSamples = rep(0, nSets); for (set in 1:nSets) nGSamples[set] = sum(gsg$goodSamples[[set]]); if (is.null(blocks)) { if (nGGenes > maxBlockSize) { if (verbose>1) printFlush(paste(spaces, "....pre-clustering genes to determine blocks..")); clustering = consensusProjectiveKMeans(multiExpr, preferredSize = maxBlockSize, sizePenaltyPower = blockSizePenaltyPower, checkData = FALSE, nCenters = nPreclusteringCenters, randomSeed = randomSeed, verbose = verbose-2, indent = indent + 1); gBlocks = .orderLabelsBySize(clustering$clusters); } else gBlocks = rep(1, nGGenes); blocks = rep(NA, 
nGenes); blocks[gsg$goodGenes] = gBlocks; } else { gBlocks = blocks[gsg$goodGenes]; } blockLevels = as.numeric(levels(factor(gBlocks))); blockSizes = table(gBlocks) nBlocks = length(blockLevels); if (any(blockSizes > sqrt(2^31)-1)) printFlush(spaste(spaces, "Found block(s) with size(s) larger than limit of 'int' indexing.\n", spaces, " Support for such large blocks is experimental; please report\n", spaces, " any issues to Peter.Langfelder@gmail.com.")); # check file names for uniqueness actualFileNames = NULL; if (saveTOMs) { actualFileNames = matrix("", nSets, nBlocks); for (set in 1:nSets) for (b in 1:nBlocks) actualFileNames[set, b] = .processFileName(individualTOMFileNames, set, names(multiExpr), b); rownames(actualFileNames) = spaste("Set.", c(1:nSets)); colnames(actualFileNames) = spaste("Block.", c(1:nBlocks)); if (length(unique(as.vector(actualFileNames))) < nSets * nBlocks) { printFlush("Error: File names for (some) set/block combinations are not unique:"); print(actualFileNames); stop("File names must be unique."); } } # Initialize various variables blockGenes = list(); blockNo = 1; collectGarbage(); setTomDS = list(); # Here's where the analysis starts for (blockNo in 1:nBlocks) { if (verbose>1 && nBlocks > 1) printFlush(paste(spaces, "..Working on block", blockNo, ".")); # Select the block genes block = c(1:nGGenes)[gBlocks==blockLevels[blockNo]]; nBlockGenes = length(block); blockGenes[[blockNo]] = c(1:nGenes)[gsg$goodGenes][gBlocks==blockLevels[blockNo]]; errorOccurred = FALSE; # Set up file names or memory space to hold the set TOMs if (!saveTOMs) { setTomDS[[blockNo]] = array(0, dim = c(nBlockGenes*(nBlockGenes-1)/2, nSets)); } # For each set: calculate and save TOM for (set in 1:nSets) { if (verbose>2) printFlush(paste(spaces, "....Working on set", set)) selExpr = as.matrix(multiExpr[[set]]$data[, block]); if (!is.null(multiWeights)) selWeights = as.matrix(multiWeights[[set]]$data[, block]) else selWeights = NULL; # Calculate TOM by calling a 
custom C function: callVerb = max(0, verbose - 1); callInd = indent + 2; CcorType = intCorType - 1; CnetworkType = intNetworkType - 1; CTOMType = intTOMType - 1; # tempExpr = as.double(as.matrix(selExpr)); warn = 0L; tom = .Call("tomSimilarity_call", selExpr, selWeights, as.integer(CcorType), as.integer(CnetworkType), as.double(power[set]), as.integer(CTOMType), as.integer(TOMDenomC), as.double(maxPOutliers), as.double(quickCor), as.integer(fallback), as.integer(cosineCorrelation), as.integer(replaceMissingAdjacencies), as.integer(suppressTOMForZeroAdjacencies), as.integer(suppressNegativeTOM), as.integer(useInternalMatrixAlgebra), warn, as.integer(nThreads), as.integer(callVerb), as.integer(callInd), PACKAGE = "WGCNA"); # FIXME: warn if necessary tomDS = as.dist(tom); # dim(tom) = c(nBlockGenes, nBlockGenes); rm(tom); # Save the calculated TOM either to disk in chunks or to memory. if (saveTOMs) { save(tomDS, file = actualFileNames[set, blockNo]); } else { setTomDS[[blockNo]] [, set] = tomDS[]; } } rm(tomDS); collectGarbage(); } if (!multiFormat) { gsg$goodSamples = gsg$goodSamples[[1]]; } list(actualTOMFileNames = actualFileNames, TOMSimilarities = if(!saveTOMs) setTomDS else NULL, blocks = blocks, blockGenes = blockGenes, goodSamplesAndGenes = gsg, nGGenes = nGGenes, gBlocks = gBlocks, nThreads = nThreads, saveTOMs = saveTOMs, intNetworkType = intNetworkType, intCorType = intCorType, nSets = nSets, setNames = names(multiExpr) ) } #========================================================================================================== # # lowerTri2matrix # #========================================================================================================== lowerTri2matrix = function(x, diag = 1) { if (inherits(x, "dist")) { mat = as.matrix(x) } else { n = length(x); n1 = (1 + sqrt(1 + 8*n))/2 if (floor(n1)!=n1) stop("Input length does not translate into matrix"); mat = matrix(0, n1, n1); mat[lower.tri(mat)] = x; mat = mat + t(mat); } diag(mat) = diag; 
mat; } #========================================================================================================== # # blockwiseConsensusModules # #========================================================================================================== .checkComponents = function(object, names) { objNames = names(object); inObj = names %in% objNames; if (!all(inObj)) stop(".checkComponents: object is missing the following components:\n", paste(names[!inObj], collapse = ", ")); } # Function to calculate consensus modules and eigengenes from all genes. blockwiseConsensusModules = function( multiExpr, # Data checking options checkMissingData = TRUE, # Blocking options blocks = NULL, maxBlockSize = 5000, blockSizePenaltyPower = 5, nPreclusteringCenters = NULL, randomSeed = 54321, # individual TOM information individualTOMInfo = NULL, useIndivTOMSubset = NULL, # Network construction arguments: correlation options corType = "pearson", maxPOutliers = 1, quickCor = 0, pearsonFallback = "individual", cosineCorrelation = FALSE, # Adjacency function options power = 6, networkType = "unsigned", checkPower = TRUE, replaceMissingAdjacencies = FALSE, # Topological overlap options TOMType = "unsigned", TOMDenom = "min", suppressNegativeTOM = FALSE, # Save individual TOMs? saveIndividualTOMs = TRUE, individualTOMFileNames = "individualTOM-Set%s-Block%b.RData", # Consensus calculation options: network calibration networkCalibration = c("single quantile", "full quantile", "none"), ## Save scaled TOMs? 
# <-- leave this option for users willing to run consensusTOM on its own
#saveScaledIndividualTOMs = FALSE,
#scaledIndividualTOMFilePattern = "scaledIndividualTOM-Set%s-Block%b.RData",
# Simple quantile calibration options
calibrationQuantile = 0.95,
sampleForCalibration = TRUE, sampleForCalibrationFactor = 1000,
getNetworkCalibrationSamples = FALSE,
# Consensus definition
consensusQuantile = 0,
useMean = FALSE,
setWeights = NULL,
# Saving the consensus TOM
saveConsensusTOMs = FALSE,
consensusTOMFilePattern = "consensusTOM-block.%b.RData",
# Internal handling of TOMs
useDiskCache = TRUE, chunkSize = NULL,
cacheBase = ".blockConsModsCache",
cacheDir = ".",
# Alternative consensus TOM input from a previous calculation
consensusTOMInfo = NULL,
# Basic tree cut options
deepSplit = 2,
detectCutHeight = 0.995, minModuleSize = 20,
checkMinModuleSize = TRUE,
# Advanced tree cut options
maxCoreScatter = NULL, minGap = NULL,
maxAbsCoreScatter = NULL, minAbsGap = NULL,
minSplitHeight = NULL, minAbsSplitHeight = NULL,
useBranchEigennodeDissim = FALSE,
minBranchEigennodeDissim = mergeCutHeight,
stabilityLabels = NULL,
minStabilityDissim = NULL,
pamStage = TRUE, pamRespectsDendro = TRUE,
# Gene joining and removal from a module, and module "significance" criteria
reassignThresholdPS = 1e-4,
trimmingConsensusQuantile = consensusQuantile,
# minKMEtoJoin =0.7,
minCoreKME = 0.5,
minCoreKMESize = minModuleSize/3,
minKMEtoStay = 0.2,
# Module eigengene calculation options
impute = TRUE,
trapErrors = FALSE,
# Module merging options
equalizeQuantilesForModuleMerging = FALSE,
quantileSummaryForModuleMerging = "mean",
mergeCutHeight = 0.15,
mergeConsensusQuantile = consensusQuantile,
# Output options
numericLabels = FALSE,
# General options
nThreads = 0,
verbose = 2, indent = 0, ...)
{ spaces = indentSpaces(indent); dataSize = checkSets(multiExpr); nSets = dataSize$nSets; nGenes = dataSize$nGenes; if (length(power)!=1) { if (length(power)!=nSets) stop("Invalid arguments: Length of 'power' must equal number of sets given in 'multiExpr'."); } else { power = rep(power, nSets); } seedSaved = FALSE; if (!is.null(randomSeed)) { if (exists(".Random.seed")) { seedSaved = TRUE; savedSeed = .Random.seed } set.seed(randomSeed); } if ( (consensusQuantile < 0) | (consensusQuantile > 1) ) stop("'consensusQuantile' must be between 0 and 1."); if (checkMinModuleSize & (minModuleSize > nGenes/2)) { minModuleSize = nGenes/2; warning("blockwiseConsensusModules: minModuleSize appeared too large and was lowered to", minModuleSize, ". If you insist on keeping the original minModuleSize, set checkMinModuleSize = FALSE."); } if (verbose>0) printFlush(paste(spaces, "Calculating consensus modules and module eigengenes", "block-wise from all genes")); originalGeneNames = mtd.colnames(multiExpr); if (is.null(originalGeneNames)) originalGeneNames = spaste("Column.", 1:nGenes) originalSampleNames = mtd.apply(multiExpr, function(x) { out = rownames(x); if (is.null(out)) out = spaste("Row.", 1:nrow(x)); out; }); branchSplitFnc = NULL; minBranchDissimilarities = numeric(0); externalSplitFncNeedsDistance = logical(0); if (useBranchEigennodeDissim) { branchSplitFnc = "mtd.branchEigengeneDissim"; minBranchDissimilarities = minBranchEigennodeDissim; externalSplitFncNeedsDistance = FALSE; } if (!is.null(stabilityLabels)) { branchSplitFnc = c(branchSplitFnc, "branchSplitFromStabilityLabels"); minBranchDissimilarities = c(minBranchDissimilarities, minStabilityDissim); externalSplitFncNeedsDistance = c(externalSplitFncNeedsDistance, FALSE); } # Basic checks on consensusTOMInfo if (!is.null(consensusTOMInfo)) { .checkComponents(consensusTOMInfo, c("saveConsensusTOMs", "individualTOMInfo", "goodSamplesAndGenes")); if (length(consensusTOMInfo$individualTOMInfo$blocks)!=nGenes) 
stop("Inconsistent number of genes in 'consensusTOMInfo$individualTOMInfo$blocks'."); if (!is.null(consensusTOMInfo$consensusQuantile) && (consensusQuantile!=consensusTOMInfo$consensusQuantile) ) warning(immediate. = TRUE, "blockwiseConsensusModules: given (possibly default) 'consensusQuantile' is different\n", "from the value recorded in 'consensusTOMInfo'. This is normally undesirable and may\n", "indicate a mistake in the function call."); } # Handle "other arguments" args = list(...); if (is.null(args$reproduceBranchEigennodeQuantileError)) { reproduceBranchEigennodeQuantileError = FALSE; } else reproduceBranchEigennodeQuantileError = args$reproduceBranchEigennodeQuantileError; # If topological overlaps weren't calculated yet, calculate them. removeIndividualTOMsOnExit = FALSE; nBlocks.0 = length(unique(blocks)); if (is.null(individualTOMInfo)) { if (is.null(consensusTOMInfo)) { individualTOMInfo = blockwiseIndividualTOMs(multiExpr = multiExpr, checkMissingData = checkMissingData, blocks = blocks, maxBlockSize = maxBlockSize, blockSizePenaltyPower = blockSizePenaltyPower, nPreclusteringCenters = nPreclusteringCenters, randomSeed = randomSeed, corType = corType, maxPOutliers = maxPOutliers, quickCor = quickCor, pearsonFallback = pearsonFallback, cosineCorrelation = cosineCorrelation, power = power, networkType = networkType, replaceMissingAdjacencies= replaceMissingAdjacencies, TOMType = TOMType, TOMDenom = TOMDenom, suppressNegativeTOM = suppressNegativeTOM, saveTOMs = useDiskCache | nBlocks.0>1, individualTOMFileNames = individualTOMFileNames, nThreads = nThreads, verbose = verbose, indent = indent); removeIndividualTOMsOnExit = TRUE; } else individualTOMInfo = consensusTOMInfo$individualTOMInfo; } if (is.null(useIndivTOMSubset)) { if (individualTOMInfo$nSets != nSets) stop(paste("Number of sets in individualTOMInfo and in multiExpr do not agree.\n", " To use a subset of individualTOMInfo, set useIndivTOMSubset appropriately.")); useIndivTOMSubset = 
c(1:nSets); } if (length(useIndivTOMSubset)!=nSets) stop("Length of 'useIndivTOMSubset' must equal the number of sets in 'multiExpr'"); if (length(unique(useIndivTOMSubset))!=nSets) stop("Entries of 'useIndivTOMSubset' must be unique"); if (any(useIndivTOMSubset<1) | any(useIndivTOMSubset>individualTOMInfo$nSets)) stop("All entries of 'useIndivTOMSubset' must be between 1 and the number of sets in individualTOMInfo"); # if ( (minKMEtoJoin >1) | (minKMEtoJoin <0) ) stop("minKMEtoJoin must be between 0 and 1."); intNetworkType = individualTOMInfo$intNetworkType; intCorType = individualTOMInfo$intCorType; corFnc = match.fun(.corFnc[intCorType]); corOptions = list(use = 'p'); fallback = pmatch(pearsonFallback, .pearsonFallbacks) nSamples = dataSize$nSamples; allLabels = rep(0, nGenes); allLabelIndex = NULL; # Restrict data to goodSamples and goodGenes gsg = individualTOMInfo$goodSamplesAndGenes; # Restrict gsg to used sets gsg$goodSamples = gsg$goodSamples[useIndivTOMSubset]; if (!gsg$allOK) multiExpr = mtd.subset(multiExpr, gsg$goodSamples, gsg$goodGenes); # prepare scaled and imputed multiExpr. 
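# A minimal sketch (hypothetical toy data) of the scaling and imputation performed below:
#   x = matrix(rnorm(60), nrow = 10); x[1, 2] = NA   # samples in rows, genes in columns
#   x.scaled = scale(x)                              # standardize each gene (column)
#   x.imputed = t(impute.knn(t(x.scaled))$data)      # impute.knn expects genes in rows
# Only sets that actually contain missing values are passed through impute.knn.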
multiExpr.scaled = mtd.apply(multiExpr, scale); hasMissing = unlist(multiData2list(mtd.apply(multiExpr, function(x) { any(is.na(x)) }))); # Impute those that have missing data multiExpr.scaled.imputed = mtd.mapply(function(x, doImpute) { if (doImpute) t(impute.knn(t(x))$data) else x }, multiExpr.scaled, hasMissing); nGGenes = sum(gsg$goodGenes); nGSamples = rep(0, nSets); for (set in 1:nSets) nGSamples[set] = sum(gsg$goodSamples[[ set ]]); blocks = individualTOMInfo$blocks; gBlocks = individualTOMInfo$gBlocks; blockLevels = sort(unique(gBlocks)); blockSizes = table(gBlocks) nBlocks = length(blockLevels); if (is.null(chunkSize)) chunkSize = as.integer(.largestBlockSize/nSets) reassignThreshold = reassignThresholdPS^nSets; consMEs = vector(mode = "list", length = nSets); dendros = list(); maxUsedLabel = 0; collectGarbage(); # Here's where the analysis starts removeConsensusTOMOnExit = FALSE; if (is.null(consensusTOMInfo) && (nBlocks==1 || saveConsensusTOMs || getNetworkCalibrationSamples)) { consensusTOMInfo = consensusTOM( individualTOMInfo = individualTOMInfo, useIndivTOMSubset = useIndivTOMSubset, networkCalibration = networkCalibration, saveCalibratedIndividualTOMs = FALSE, calibrationQuantile = calibrationQuantile, sampleForCalibration = sampleForCalibration, sampleForCalibrationFactor = sampleForCalibrationFactor, getNetworkCalibrationSamples = getNetworkCalibrationSamples, consensusQuantile = consensusQuantile, useMean = useMean, setWeights = setWeights, # Return options saveConsensusTOMs = saveConsensusTOMs, consensusTOMFilePattern = consensusTOMFilePattern, returnTOMs = nBlocks==1, # Internal handling of TOMs useDiskCache = useDiskCache, chunkSize = chunkSize, cacheBase = cacheBase, cacheDir = cacheDir, verbose = verbose, indent = indent); removeConsensusTOMOnExit = !saveConsensusTOMs; } blockwiseConsensusCalculation = is.null(consensusTOMInfo); for (blockNo in 1:nBlocks) { if (verbose>1) printFlush(paste(spaces, "..Working on block", blockNo, ".")); # 
Select block genes block = c(1:nGGenes)[gBlocks==blockLevels[blockNo]]; nBlockGenes = length(block); selExpr = mtd.subset(multiExpr, , block); errorOccurred = FALSE; if (blockwiseConsensusCalculation) { # This code is only reached if input saveConsensusTOMs is FALSE and there are at least 2 blocks. consensusTOMInfo = consensusTOM( individualTOMInfo = individualTOMInfo, useIndivTOMSubset = useIndivTOMSubset, useBlocks = blockNo, networkCalibration = networkCalibration, saveCalibratedIndividualTOMs = FALSE, calibrationQuantile = calibrationQuantile, sampleForCalibration = sampleForCalibration, sampleForCalibrationFactor = sampleForCalibrationFactor, getNetworkCalibrationSamples = FALSE, consensusQuantile = consensusQuantile, useMean = useMean, setWeights = setWeights, saveConsensusTOMs = FALSE, returnTOMs = TRUE, useDiskCache = useDiskCache, chunkSize = chunkSize, cacheBase = cacheBase, cacheDir = cacheDir); consTomDS = consensusTOMInfo$consensusTOM[[1]]; # Remove the consensus TOM from the structure. consensusTOMInfo$consensusTOM[[1]] = NULL; consensusTOMInfo$consensusTOM = NULL; } else { if (consensusTOMInfo$saveConsensusTOMs) { consTomDS = .loadObject(file = consensusTOMInfo$TOMFiles[blockNo], size = nBlockGenes * (nBlockGenes-1)/2); } else consTomDS = consensusTOMInfo$consensusTOM[[blockNo]]; } # Temporary "cast" so fastcluster::hclust doesn't complain about non-integer size. 
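# Sketch of the clustering step below, for a hypothetical TOM similarity matrix 'tom':
#   dissTom = as.dist(1 - tom)                       # TOM similarity -> dissimilarity
#   dendro = fastcluster::hclust(dissTom, method = "average")
# Here consTomDS is already a 'dist' object, so only the 1 - consTomDS flip is needed.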
attr(consTomDS, "Size") = as.integer(attr(consTomDS, "Size")); consTomDS = 1-consTomDS; collectGarbage(); if (verbose>2) printFlush(paste(spaces, "....clustering and detecting modules..")); errorOccured = FALSE; dendros[[blockNo]] = fastcluster::hclust(consTomDS, method = "average"); if (verbose > 8) { if (interactive()) plot(dendros[[blockNo]], labels = FALSE, main = paste("Block", blockNo)); } externalSplitOptions = list(); e.index = 1; if (useBranchEigennodeDissim) { externalSplitOptions[[e.index]] = list(multiExpr = mtd.subset(multiExpr.scaled.imputed,, block), corFnc = corFnc, corOptions = corOptions, consensusQuantile = consensusQuantile, signed = networkType %in% c("signed", "signed hybrid"), reproduceQuantileError = reproduceBranchEigennodeQuantileError); e.index = e.index +1; } if (!is.null(stabilityLabels)) { externalSplitOptions[[e.index]] = list(stabilityLabels = stabilityLabels); e.index = e.index + 1; } collectGarbage(); #blockLabels = try(cutreeDynamic(dendro = dendros[[blockNo]], blockLabels = cutreeDynamic(dendro = dendros[[blockNo]], distM = as.matrix(consTomDS), deepSplit = deepSplit, cutHeight = detectCutHeight, minClusterSize = minModuleSize, method ="hybrid", maxCoreScatter = maxCoreScatter, minGap = minGap, maxAbsCoreScatter = maxAbsCoreScatter, minAbsGap = minAbsGap, minSplitHeight = minSplitHeight, minAbsSplitHeight = minAbsSplitHeight, externalBranchSplitFnc = branchSplitFnc, minExternalSplit = minBranchDissimilarities, externalSplitOptions = externalSplitOptions, externalSplitFncNeedsDistance = externalSplitFncNeedsDistance, assumeSimpleExternalSpecification = FALSE, pamStage = pamStage, pamRespectsDendro = pamRespectsDendro, verbose = verbose-3, indent = indent + 2) #verbose = verbose-3, indent = indent + 2), silent = TRUE); if (verbose > 8) { print(table(blockLabels)); if (interactive()) plotDendroAndColors(dendros[[blockNo]], labels2colors(blockLabels), dendroLabels = FALSE, main = paste("Block", blockNo)); } if (inherits(blockLabels, 
'try-error'))
    {
      (if (verbose>0) printFlush else warning)
        (paste(spaces, "blockwiseConsensusModules: cutreeDynamic failed:\n ",
               spaces, blockLabels, "\n", spaces, " Error occurred in block", blockNo, "\n",
               spaces, " Continuing with next block. "));
      next;
    } else {
      blockLabels[blockLabels>0] = blockLabels[blockLabels>0] + maxUsedLabel;
      maxUsedLabel = max(blockLabels);
    }
    if (sum(blockLabels>0)==0)
    {
      if (verbose>1)
      {
        printFlush(paste(spaces, "No modules detected in block", blockNo,
                         "--> continuing with next block."))
      }
      next;
    }
    # Calculate eigengenes for this batch
    if (verbose>2)
      printFlush(paste(spaces, "....calculating eigengenes.."));
    blockAssigned = c(1:nBlockGenes)[blockLabels!=0];
    blockLabelIndex = as.numeric(levels(as.factor(blockLabels[blockAssigned])));
    blockConsMEs = try(multiSetMEs(selExpr, universalColors = blockLabels,
                         excludeGrey = TRUE, grey = 0, impute = impute,
                         # trapErrors = TRUE, returnValidOnly = TRUE,
                         verbose = verbose-4, indent = indent + 3), silent = TRUE);
    if (inherits(blockConsMEs, 'try-error'))
    {
      if (verbose>0)
      {
        printFlush(paste(spaces, "*** multiSetMEs failed with the message:"));
        printFlush(paste(spaces, " ", blockConsMEs));
        printFlush(paste(spaces, "*** --> Ending module detection here"));
      } else
        warning(paste("blockwiseConsensusModules: multiSetMEs failed with the message: \n",
                      " ", blockConsMEs, "\n--> continuing with next block."));
      next;
    }
    deleteModules = NULL;
    changedModules = NULL;
    # find genes whose closest module eigengene has cor higher than minKMEtoJoin and assign them
    # Removed - should not change blocks before clustering them
    #unassGenes = c(c(1:nGGenes)[-block][allLabels[-block]==0], block[blockLabels==0]);
    #if (length(unassGenes) > 0)
    #{
    #blockKME = array(0, dim = c(length(unassGenes), ncol(blockConsMEs[[1]]$data), nSets));
    #corEval = parse(text = paste(.corFnc[intCorType],
    #"( multiExpr[[set]]$data[, unassGenes], blockConsMEs[[set]]$data,",
    #.corOptions[intCorType], ")"))
    #for (set in 1:nSets) blockKME[, , set] = eval(corEval);
#if (intNetworkType==1) blockKME = abs(blockKME);
    #consKME = as.matrix(apply(blockKME, c(1,2), min));
    #consKMEmax = apply(consKME, 1, max);
    #closestModule = blockLabelIndex[apply(consKME, 1, which.max)];
    #assign = (consKMEmax >= minKMEtoJoin );
    #if (sum(assign>0))
    #{
    #allLabels[unassGenes[assign]] = closestModule[assign];
    #changedModules = union(changedModules, closestModule[assign]);
    #}
    #rm(blockKME, consKME, consKMEmax);
    #}
    collectGarbage();
    # Check modules: make sure that of the genes present in the module, at least a minimum number
    # have a correlation with the eigengene higher than a given cutoff, and that all member genes have
    # the required minimum consensus KME
    if (verbose>2)
      printFlush(paste(spaces, "....checking consensus modules for statistical meaningfulness.."));
    for (mod in 1:ncol(blockConsMEs[[1]]$data))
    {
      modGenes = (blockLabels==blockLabelIndex[mod]);
      KME = matrix(0, nrow = sum(modGenes), ncol = nSets);
      corEval = parse(text = paste(.corFnc[intCorType],
                      "( selExpr[[set]]$data[, modGenes], blockConsMEs[[set]]$data[, mod]",
                      prepComma(.corOptions[intCorType]), ")"))
      for (set in 1:nSets) KME[, set] = as.vector(eval(corEval));
      if (intNetworkType==1) KME = abs(KME);
      consKME = apply(KME, 1, quantile, probs = trimmingConsensusQuantile, names = FALSE, na.rm = TRUE);
      if (sum(consKME>minCoreKME) < minCoreKMESize)
      {
        blockLabels[modGenes] = 0;
        deleteModules = union(deleteModules, mod);
        if (verbose>3)
          printFlush(paste(spaces, " ..deleting module ", blockLabelIndex[mod],
                     ": of ", sum(modGenes), " total genes in the module only ",
                     sum(consKME>minCoreKME),
                     " have the requisite high correlation with the eigengene in all sets.", sep=""));
      } else if (sum(consKME<minKMEtoStay)>0)
      {
        if (verbose > 3)
          printFlush(paste(spaces, " ..removing", sum(consKME<minKMEtoStay),
                     "genes from module", blockLabelIndex[mod], "because their KME is too low."));
        blockLabels[modGenes][consKME < minKMEtoStay] = 0;
        if (sum(blockLabels[modGenes]>0) < minModuleSize)
        {
          deleteModules = union(deleteModules, mod);
          blockLabels[modGenes] = 0;
          if (verbose>3)
            printFlush(paste(spaces, " ..deleting module ", blockLabelIndex[mod],
                       ": not enough genes in the module after removal of low KME
genes.", sep="")); } else { changedModules = union(changedModules, blockLabelIndex[mod]); } } } # Remove marked modules if (!is.null(deleteModules)) { for (set in 1:nSets) blockConsMEs[[set]]$data = blockConsMEs[[set]]$data[, -deleteModules]; modGenes = is.finite(match(blockLabels, blockLabelIndex[deleteModules])); blockLabels[modGenes] = 0; modAllGenes = is.finite(match(allLabels, blockLabelIndex[deleteModules])); allLabels[modAllGenes] = 0; blockLabelIndex = blockLabelIndex[-deleteModules]; } # Check whether there's anything left if (sum(blockLabels>0)==0) { if (verbose>1) { printFlush(paste(spaces, " ..No significant modules detected in block", blockNo)) printFlush(paste(spaces, " ..continuing with next block.")); } next; } # Update module eigengenes for (set in 1:nSets) if (is.null(dim(blockConsMEs[[set]]$data))) dim(blockConsMEs[[set]]$data) = c(length(blockConsMEs[[set]]$data), 1); if (is.null(consMEs[[1]])) { for (set in 1:nSets) consMEs[[set]] = list(data = blockConsMEs[[set]]$data); } else for (set in 1:nSets) consMEs[[set]]$data = cbind(consMEs[[set]]$data, blockConsMEs[[set]]$data); # Update allLabels allLabelIndex = c(allLabelIndex, blockLabelIndex); allLabels[gsg$goodGenes][block[blockAssigned]] = blockLabels[blockAssigned]; collectGarbage(); } # Check whether any of the already assigned genes (in this or previous blocks) should be re-assigned if (verbose>2) printFlush(paste(spaces, "....checking for genes that should be reassigned..")); deleteModules = NULL; goodLabels = allLabels[gsg$goodGenes]; if (sum(goodLabels!=0) > 0) { propLabels = goodLabels[goodLabels!=0]; assGenes = (c(1:nGenes)[gsg$goodGenes])[goodLabels!=0]; corEval = parse(text = paste(.corFnc[intCorType], "(multiExpr[[set]]$data[, goodLabels!=0], consMEs[[set]]$data", prepComma(.corOptions[intCorType]), ")")); nMods = ncol(consMEs[[1]]$data); lpValues = array(0, dim = c(length(propLabels), nMods, nSets)); sumSign = array(0, dim = c(length(propLabels), nMods)); if (verbose>3) 
printFlush(paste(spaces, "......module membership p-values..")); for (set in 1:nSets) { KME = eval(corEval); if (intNetworkType == 1) KME = abs(KME) lpValues[,,set] = -2*log(corPvalueFisher(KME, nGSamples[set], twoSided = FALSE)); sumSign = sumSign + sign(KME); } if (verbose>3) printFlush(paste(spaces, "......module membership scores..")); scoreAll = as.matrix(apply(lpValues, c(1,2), sum)) * (nSets + sumSign)/(2*nSets); scoreAll[!is.finite(scoreAll)] = 0.001 # This low should be enough bestScore = apply(scoreAll, 1, max); if (intNetworkType==1) sumSign = abs(sumSign); if (verbose>3) { cat(paste(spaces, "......individual modules..")); pind = initProgInd(); } for (mod in 1:nMods) { modGenes = c(1:length(propLabels))[propLabels==allLabelIndex[mod]]; scoreMod = scoreAll[modGenes, mod]; candidates = (bestScore[modGenes] > scoreMod); candidates[!is.finite(candidates)] = FALSE; if (sum(candidates) > 0) { pModule = pchisq(scoreMod[candidates], nSets, log.p = TRUE) whichBest = apply(scoreAll[modGenes[candidates], ], 1, which.max); pBest = pchisq(bestScore[modGenes[candidates]], nSets, log.p = TRUE); reassign = ifelse(is.finite(pBest - pModule), ( (pBest - pModule) < log(reassignThreshold) ), FALSE); if (sum(reassign)>0) { allLabels[assGenes[modGenes[candidates][reassign]]] = whichBest[reassign]; changedModules = union(changedModules, whichBest[reassign]); if (length(modGenes)-sum(reassign) < minModuleSize) { deleteModules = union(deleteModules, mod); } else changedModules = union(changedModules, mod); } } if (verbose > 3) pind = updateProgInd(mod/nMods, pind); } rm(lpValues, sumSign, scoreAll); if (verbose > 3) printFlush(""); } # Remove marked modules if (!is.null(deleteModules)) { # for (set in 1:nSets) consMEs[[set]]$data = consMEs[[set]]$data[, -deleteModules]; modGenes = is.finite(match(allLabels, allLabelIndex[deleteModules])); allLabels[modGenes] = 0; # allLabelIndex = allLabelIndex[-deleteModules]; } if (verbose>1) printFlush(paste(spaces, "..merging consensus 
modules that are too close.."));
  #print(table(allLabels));
  #print(is.numeric(allLabels))
  if (numericLabels) { colors = allLabels } else { colors = labels2colors(allLabels) }
  mergedColors = colors;
  mergedMods = try(mergeCloseModules(multiExpr, colors[gsg$goodGenes],
                     equalizeQuantiles = equalizeQuantilesForModuleMerging,
                     quantileSummary = quantileSummaryForModuleMerging,
                     consensusQuantile = mergeConsensusQuantile,
                     cutHeight = mergeCutHeight, relabel = TRUE,
                     impute = impute, verbose = verbose-2, indent = indent + 2), silent = TRUE);
  if (inherits(mergedMods, 'try-error'))
  {
    if (verbose>0)
    {
      printFlush(paste(spaces, 'blockwiseConsensusModules: mergeCloseModules failed with this message:\n',
                       spaces, ' ', mergedMods, spaces,
                       '---> returning unmerged consensus modules'));
    } else
      warning(paste('blockwiseConsensusModules: mergeCloseModules failed with this message:\n ',
                    mergedMods, '---> returning unmerged consensus modules'));
    MEs = try(multiSetMEs(multiExpr, universalColors = colors[gsg$goodGenes]
                          # trapErrors = TRUE, returnValidOnly = TRUE
                          ), silent = TRUE);
    if (inherits(MEs, 'try-error'))
    {
      warning(paste('blockwiseConsensusModules: ME calculation failed with this message:\n ',
                    MEs, '---> returning empty module eigengenes'));
      allSampleMEs = NULL;
    } else {
      if (!MEs[[1]]$allOK) mergedColors[gsg$goodGenes] = MEs[[1]]$validColors;
      allSampleMEs = vector(mode = "list", length = nSets);
      names(allSampleMEs) = names(multiExpr);
      for (set in 1:nSets)
      {
        allSampleMEs[[set]] = list(data =
            as.data.frame(matrix(NA, nrow = nSamples[set], ncol = ncol(MEs[[set]]$data))));
        allSampleMEs[[set]]$data[gsg$goodSamples[[set]], ] = MEs[[set]]$data[,];
        names(allSampleMEs[[set]]$data) = names(MEs[[set]]$data);
        rownames(allSampleMEs[[set]]$data) = make.unique(originalSampleNames[[set]]$data)
      }
    }
  } else {
    mergedColors[gsg$goodGenes] = mergedMods$colors;
    allSampleMEs = vector(mode = "list", length = nSets);
    names(allSampleMEs) = names(multiExpr);
    for (set in 1:nSets)
    {
      allSampleMEs[[set]] = list(data =
as.data.frame(matrix(NA, nrow = nSamples[set], ncol = ncol(mergedMods$newMEs[[1]]$data)))); allSampleMEs[[set]]$data[gsg$goodSamples[[set]], ] = mergedMods$newMEs[[set]]$data[,]; names(allSampleMEs[[set]]$data) = names(mergedMods$newMEs[[set]]$data); rownames(allSampleMEs[[set]]$data) = make.unique(originalSampleNames[[set]]$data); } } if (seedSaved) .Random.seed <<- savedSeed; if (removeConsensusTOMOnExit) { .checkAndDelete(consensusTOMInfo$TOMFiles); consensusTOMInfo$TOMFiles = NULL; } if (removeIndividualTOMsOnExit) { .checkAndDelete(individualTOMInfo$actualTOMFileNames); individualTOMInfo$actualTOMFileNames = NULL; } # Under no circumstances return consensus TOM or individual TOM similarities within the returned list. consensusTOMInfo$consensusTOM = NULL; individualTOMInfo$TOMSimilarities = NULL names(mergedColors) = names(colors) = originalGeneNames; list(colors = mergedColors, unmergedColors = colors, multiMEs = allSampleMEs, goodSamples = gsg$goodSamples, goodGenes = gsg$goodGenes, dendrograms = dendros, TOMFiles = consensusTOMInfo$TOMFiles, blockGenes = individualTOMInfo$blockGenes, blocks = blocks, originCount = consensusTOMInfo$originCount, networkCalibrationSamples = consensusTOMInfo$networkCalibrationSamples, individualTOMInfo = individualTOMInfo, consensusTOMInfo = if (saveConsensusTOMs) consensusTOMInfo else NULL, consensusQuantile = consensusQuantile ) } #========================================================================================================== # # recutConsensusTrees # #========================================================================================================== recutConsensusTrees = function(multiExpr, goodSamples, goodGenes, blocks, TOMFiles, dendrograms, corType = "pearson", networkType = "unsigned", deepSplit = 2, detectCutHeight = 0.995, minModuleSize = 20, checkMinModuleSize = TRUE, maxCoreScatter = NULL, minGap = NULL, maxAbsCoreScatter = NULL, minAbsGap = NULL, minSplitHeight = NULL, minAbsSplitHeight = NULL, 
useBranchEigennodeDissim = FALSE, minBranchEigennodeDissim = mergeCutHeight, pamStage = TRUE, pamRespectsDendro = TRUE, # minKMEtoJoin =0.7, trimmingConsensusQuantile = 0, minCoreKME = 0.5, minCoreKMESize = minModuleSize/3, minKMEtoStay = 0.2, reassignThresholdPS = 1e-4, mergeCutHeight = 0.15, mergeConsensusQuantile = trimmingConsensusQuantile, impute = TRUE, trapErrors = FALSE, numericLabels = FALSE, verbose = 2, indent = 0) { spaces = indentSpaces(indent); dataSize = checkSets(multiExpr); nSets = dataSize$nSets; nGenes = dataSize$nGenes; nSamples = dataSize$nSamples; originalGeneNames = mtd.colnames(multiExpr); if (is.null(originalGeneNames)) originalGeneNames = spaste("Column.", 1:nGenes); originalSampleNames = mtd.apply(multiExpr, function(x) { out = rownames(x); if (is.null(out)) out = spaste("Row.", 1:nrow(x)); out; }); if (length(blocks)!=nGenes) stop("Input error: length of 'blocks' must equal number of genes in 'multiExpr'."); #if (verbose>0) # printFlush(paste(spaces, "Calculating consensus modules and module eigengenes", # "block-wise from all genes")); intCorType = pmatch(corType, .corTypes); if (is.na(intCorType)) stop(paste("Invalid 'corType'. 
Recognized values are", paste(.corTypes, collapse = ", "))) # if ( (minKMEtoJoin >1) | (minKMEtoJoin <0) ) stop("minKMEtoJoin must be between 0 and 1."); intNetworkType = charmatch(networkType, .networkTypes); if (is.na(intNetworkType)) stop(paste("Unrecognized networkType argument.", "Recognized values are (unique abbreviations of)", paste(.networkTypes, collapse = ", "))); allLabels = rep(0, nGenes); allLabelIndex = NULL; corFnc = match.fun(.corFnc[intCorType]); corOptions = list(use = 'p'); # Get rid of bad genes and bad samples gsg = list(goodGenes = goodGenes, goodSamples = goodSamples, allOK = TRUE); gsg$allOK = (sum(!gsg$goodGenes)==0); nGGenes = sum(gsg$goodGenes); nGSamples = rep(0, nSets); for (set in 1:nSets) { nGSamples[set] = sum(gsg$goodSamples[[set]]); gsg$allOK = gsg$allOK && (sum(!gsg$goodSamples[[set]])==0); } if (!gsg$allOK) for (set in 1:nSets) multiExpr[[set]]$data = multiExpr[[set]]$data[gsg$goodSamples[[set]], gsg$goodGenes]; gBlocks = blocks[gsg$goodGenes]; blockLevels = as.numeric(levels(factor(gBlocks))); blockSizes = table(gBlocks) nBlocks = length(blockLevels); reassignThreshold = reassignThresholdPS^nSets; # prepare scaled and imputed multiExpr. 
multiExpr.scaled = mtd.apply(multiExpr, scale);
  hasMissing = unlist(multiData2list(mtd.apply(multiExpr, function(x) { any(is.na(x)) })));
  # Impute those that have missing data
  multiExpr.scaled.imputed = mtd.mapply(function(x, doImpute)
     { if (doImpute) t(impute.knn(t(x))$data) else x },
     multiExpr.scaled, hasMissing);
  if (useBranchEigennodeDissim)
  {
    branchSplitFnc = "mtd.branchEigengeneDissim";
  } else branchSplitFnc = NULL;
  # Initialize various variables
  consMEs = vector(mode = "list", length = nSets);
  blockNo = 1;
  maxUsedLabel = 0;
  collectGarbage();
  # Here's where the analysis starts
  while (blockNo <= nBlocks)
  {
    if (verbose>1) printFlush(paste(spaces, "..Working on block", blockNo, "."));
    # Select most connected genes
    block = c(1:nGGenes)[gBlocks==blockLevels[blockNo]];
    nBlockGenes = length(block);
    selExpr = vector(mode = "list", length = nSets);
    for (set in 1:nSets) selExpr[[set]] = list(data = multiExpr[[set]]$data[, block]);
    # This is how TOMs are saved:
    #if (saveTOMs)
    #{
    # TOMFiles[blockNo] = paste(saveTOMFileBase, "-block.", blockNo, ".RData", sep="");
    # save(consTomDS, file = TOMFiles[blockNo]);
    #}
    #consTomDS = 1-consTomDS;
    #collectGarbage();
    xx = try(load(TOMFiles[blockNo]), silent = TRUE);
    if (inherits(xx, 'try-error'))
    {
      printFlush(paste("************\n File name", TOMFiles[blockNo],
                       "appears invalid: the load function returned the following error:\n ", xx));
      stop();
    }
    if (xx!='consTomDS')
      stop(paste("The file", TOMFiles[blockNo], "does not contain the appropriate variable."));
    if (!inherits(consTomDS, "dist"))
      stop(paste("The file", TOMFiles[blockNo], "does not contain the appropriate distance structure."));
    consTomDS = 1-consTomDS;
    collectGarbage();
    if (verbose>2) printFlush(paste(spaces, "....clustering and detecting modules.."));
    errorOccurred = FALSE;
    blockLabels = try(cutreeDynamic(dendro = dendrograms[[blockNo]],
                        deepSplit = deepSplit,
                        cutHeight = detectCutHeight, minClusterSize = minModuleSize,
                        method ="hybrid", maxCoreScatter = maxCoreScatter, minGap
= minGap,
                        maxAbsCoreScatter = maxAbsCoreScatter, minAbsGap = minAbsGap,
                        minSplitHeight = minSplitHeight, minAbsSplitHeight = minAbsSplitHeight,
                        externalBranchSplitFnc = if (useBranchEigennodeDissim) branchSplitFnc else NULL,
                        minExternalSplit = minBranchEigennodeDissim,
                        externalSplitOptions = list(multiExpr = mtd.subset(multiExpr.scaled.imputed, , block),
                               corFnc = corFnc, corOptions = corOptions,
                               consensusQuantile = mergeConsensusQuantile,
                               signed = networkType %in% c("signed", "signed hybrid")),
                        externalSplitFncNeedsDistance = FALSE,
                        pamStage = pamStage, pamRespectsDendro = pamRespectsDendro,
                        distM = as.matrix(consTomDS),
                        verbose = verbose-3, indent = indent + 2), silent = TRUE);
    if (inherits(blockLabels, 'try-error'))
    {
      (if (verbose>0) printFlush else warning)
        (paste(spaces, "recutConsensusTrees: cutreeDynamic failed:\n ",
               blockLabels, "\nError occurred in block", blockNo, "\nContinuing with next block."));
      blockNo = blockNo + 1;  # advance the counter: a bare 'next' would retry this block forever
      next;
    } else {
      blockLabels[blockLabels>0] = blockLabels[blockLabels>0] + maxUsedLabel;
      maxUsedLabel = max(blockLabels);
    }
    if (sum(blockLabels>0)==0)
    {
      if (verbose>1)
      {
        printFlush(paste(spaces, "No modules detected in block", blockNo))
        printFlush(paste(spaces, " Continuing with next block."))
      }
      blockNo = blockNo + 1;  # advance the counter before 'next' (see above)
      next;
    }
    # Calculate eigengenes for this batch
    if (verbose>2)
      printFlush(paste(spaces, "....calculating eigengenes.."));
    blockAssigned = c(1:nBlockGenes)[blockLabels!=0];
    blockLabelIndex = as.numeric(levels(as.factor(blockLabels[blockAssigned])));
    blockConsMEs = try(multiSetMEs(selExpr, universalColors = blockLabels,
                         excludeGrey = TRUE, grey = 0, impute = impute,
                         # trapErrors = TRUE, returnValidOnly = TRUE,
                         verbose = verbose-4, indent = indent + 3), silent = TRUE);
    if (inherits(blockConsMEs, 'try-error'))
    {
      if (verbose>0)
      {
        printFlush(paste(spaces, "*** multiSetMEs failed with the message:"));
        printFlush(paste(spaces, " ", blockConsMEs));
        printFlush(paste(spaces, "*** --> Ending module detection here"));
      } else warning(paste("recutConsensusTrees: multiSetMEs
failed with the message: \n",
                           " ", blockConsMEs, "\n--> Continuing with next block."));
      blockNo = blockNo + 1;  # advance the counter before 'next' so the while loop cannot stall
      next;
    }
    deleteModules = NULL;
    changedModules = NULL;
    # find genes whose closest module eigengene has cor higher than minKMEtoJoin and assign them
    # Removed - should not change blocks before clustering them
    #unassGenes = c(c(1:nGGenes)[-block][allLabels[-block]==0], block[blockLabels==0]);
    #if (length(unassGenes) > 0)
    #{
    #blockKME = array(0, dim = c(length(unassGenes), ncol(blockConsMEs[[1]]$data), nSets));
    #corEval = parse(text = paste(.corFnc[intCorType],
    #"( multiExpr[[set]]$data[, unassGenes], blockConsMEs[[set]]$data,",
    #.corOptions[intCorType], ")"))
    #for (set in 1:nSets) blockKME[, , set] = eval(corEval);
    #if (intNetworkType==1) blockKME = abs(blockKME);
    #consKME = as.matrix(apply(blockKME, c(1,2), min));
    #consKMEmax = apply(consKME, 1, max);
    #closestModule = blockLabelIndex[apply(consKME, 1, which.max)];
    #assign = (consKMEmax >= minKMEtoJoin );
    #if (sum(assign>0))
    #{
    #allLabels[unassGenes[assign]] = closestModule[assign];
    #changedModules = union(changedModules, closestModule[assign]);
    #}
    #rm(blockKME, consKME, consKMEmax);
    #}
    collectGarbage();
    # Check modules: make sure that of the genes present in the module, at least a minimum number
    # have a correlation with the eigengene higher than a given cutoff, and that all member genes have
    # the required minimum consensus KME
    if (verbose>2)
      printFlush(paste(spaces, "....checking consensus modules for statistical meaningfulness.."));
    for (mod in 1:ncol(blockConsMEs[[1]]$data))
    {
      modGenes = (blockLabels==blockLabelIndex[mod]);
      KME = matrix(0, nrow = sum(modGenes), ncol = nSets);
      corEval = parse(text = paste(.corFnc[intCorType],
                      "( selExpr[[set]]$data[, modGenes], blockConsMEs[[set]]$data[, mod]",
                      prepComma(.corOptions[intCorType]), ")"))
      for (set in 1:nSets) KME[, set] = as.vector(eval(corEval));
      if (intNetworkType==1) KME = abs(KME);
      consKME = apply(KME, 1, quantile, probs = trimmingConsensusQuantile, na.rm = TRUE);
      if
(sum(consKME>minCoreKME) < minCoreKMESize)
      {
        blockLabels[modGenes] = 0;
        deleteModules = union(deleteModules, mod);
        if (verbose>3)
          printFlush(paste(spaces, " ..deleting module ", blockLabelIndex[mod],
                     ": of ", sum(modGenes), " total genes in the module only ",
                     sum(consKME>minCoreKME),
                     " have the requisite high correlation with the eigengene in all sets.", sep=""));
      } else if (sum(consKME<minKMEtoStay)>0)
      {
        if (verbose > 3)
          printFlush(paste(spaces, " ..removing", sum(consKME<minKMEtoStay),
                     "genes from module", blockLabelIndex[mod], "because their KME is too low."));
        blockLabels[modGenes][consKME < minKMEtoStay] = 0;
        if (sum(blockLabels[modGenes]>0) < minModuleSize)
        {
          deleteModules = union(deleteModules, mod);
          blockLabels[modGenes] = 0;
          if (verbose>3)
            printFlush(paste(spaces, " ..deleting module ", blockLabelIndex[mod],
                       ": not enough genes in the module after removal of low KME genes.", sep=""));
        } else {
          changedModules = union(changedModules, blockLabelIndex[mod]);
        }
      }
    }
    # Remove marked modules
    if (!is.null(deleteModules))
    {
      for (set in 1:nSets) blockConsMEs[[set]]$data = blockConsMEs[[set]]$data[, -deleteModules];
      modGenes = is.finite(match(blockLabels, blockLabelIndex[deleteModules]));
      blockLabels[modGenes] = 0;
      modAllGenes = is.finite(match(allLabels, blockLabelIndex[deleteModules]));
      allLabels[modAllGenes] = 0;
      blockLabelIndex = blockLabelIndex[-deleteModules];
    }
    # Check whether there's anything left
    if (sum(blockLabels>0)==0)
    {
      if (verbose>1)
      {
        printFlush(paste(spaces, " No significant modules detected in block", blockNo))
        printFlush(paste(spaces, " Continuing with next block."));
      }
      blockNo = blockNo + 1;  # advance the counter before 'next' so the while loop cannot stall
      next;
    }
    # Update module eigengenes
    for (set in 1:nSets)
      if (is.null(dim(blockConsMEs[[set]]$data)))
        dim(blockConsMEs[[set]]$data) = c(length(blockConsMEs[[set]]$data), 1);
    if (is.null(consMEs[[1]]))
    {
      for (set in 1:nSets) consMEs[[set]] = list(data = blockConsMEs[[set]]$data);
    } else
      for (set in 1:nSets) consMEs[[set]]$data = cbind(consMEs[[set]]$data, blockConsMEs[[set]]$data);
    # Update allLabels
    allLabelIndex = c(allLabelIndex, blockLabelIndex);
    allLabels[block[blockAssigned]] = blockLabels[blockAssigned];
    collectGarbage();
    blockNo = blockNo + 1;
  }
  # Check whether any
of the already assigned genes (in this or previous blocks) should be re-assigned if (verbose>2) printFlush(paste(spaces, "....checking for genes that should be reassigned..")); deleteModules = NULL; goodLabels = allLabels[gsg$goodGenes]; if (sum(goodLabels!=0) > 0) { propLabels = goodLabels[goodLabels!=0]; assGenes = (c(1:nGenes)[gsg$goodGenes])[goodLabels!=0]; corEval = parse(text = paste(.corFnc[intCorType], "(multiExpr[[set]]$data[, goodLabels!=0], consMEs[[set]]$data", prepComma(.corOptions[intCorType]), ")")); nMods = ncol(consMEs[[1]]$data); lpValues = array(0, dim = c(length(propLabels), nMods, nSets)); sumSign = array(0, dim = c(length(propLabels), nMods)); if (verbose>3) printFlush(paste(spaces, "......module membership p-values..")); for (set in 1:nSets) { KME = eval(corEval); if (intNetworkType == 1) KME = abs(KME) lpValues[,,set] = -2*log(corPvalueFisher(KME, nGSamples[set], twoSided = FALSE)); sumSign = sumSign + sign(KME); } if (verbose>3) printFlush(paste(spaces, "......module membership scores..")); scoreAll = as.matrix(apply(lpValues, c(1,2), sum)) * (nSets + sumSign)/(2*nSets); scoreAll[!is.finite(scoreAll)] = 0.001 # This low should be enough bestScore = apply(scoreAll, 1, max); if (intNetworkType==1) sumSign = abs(sumSign); if (verbose>3) { cat(paste(spaces, "......individual modules..")); pind = initProgInd(); } for (mod in 1:nMods) { modGenes = c(1:length(propLabels))[propLabels==allLabelIndex[mod]]; scoreMod = scoreAll[modGenes, mod]; candidates = (bestScore[modGenes] > scoreMod); candidates[!is.finite(candidates)] = FALSE; if (sum(candidates) > 0) { pModule = pchisq(scoreMod[candidates], nSets, log.p = TRUE) whichBest = apply(scoreAll[modGenes[candidates], ], 1, which.max); pBest = pchisq(bestScore[modGenes[candidates]], nSets, log.p = TRUE); reassign = ifelse(is.finite(pBest - pModule), ( (pBest - pModule) < log(reassignThreshold) ), FALSE); if (sum(reassign)>0) { allLabels[assGenes[modGenes[candidates][reassign]]] = whichBest[reassign]; 
changedModules = union(changedModules, whichBest[reassign]); if (length(modGenes)-sum(reassign) < minModuleSize) { deleteModules = union(deleteModules, mod); } else changedModules = union(changedModules, mod); } } if (verbose > 3) pind = updateProgInd(mod/nMods, pind); } rm(lpValues, sumSign, scoreAll); if (verbose > 3) printFlush(""); } # Remove marked modules if (!is.null(deleteModules)) { # for (set in 1:nSets) consMEs[[set]]$data = consMEs[[set]]$data[, -deleteModules]; modGenes = is.finite(match(allLabels, allLabelIndex[deleteModules])); allLabels[modGenes] = 0; # allLabelIndex = allLabelIndex[-deleteModules]; } if (verbose>1) printFlush(paste(spaces, "..merging consensus modules that are too close..")); #print(table(allLabels)); #print(is.numeric(allLabels)) if (numericLabels) { colors = allLabels } else { colors = labels2colors(allLabels) } mergedColors = colors; mergedMods = try(mergeCloseModules(multiExpr, colors[gsg$goodGenes], consensusQuantile = mergeConsensusQuantile, cutHeight = mergeCutHeight, relabel = TRUE, impute = impute, verbose = verbose-2, indent = indent + 2), silent = TRUE); if (inherits(mergedMods, 'try-error')) { if (verbose>0) { printFlush(paste(spaces, 'blockwiseConsensusModules: mergeCloseModule failed with this message:\n', spaces, ' ', mergedMods, spaces, '---> returning unmerged consensus modules')); } else warning(paste('blockwiseConsensusModules: mergeCloseModule failed with this message:\n ', mergedMods, '---> returning unmerged consensus modules')); MEs = try(multiSetMEs(multiExpr, universalColors = colors[gsg$goodGenes] # trapErrors = TRUE, returnValidOnly = TRUE ), silent = TRUE); if (inherits(MEs, 'try-error')) { warning(paste('blockwiseConsensusModules: ME calculation failed with this message:\n ', MEs, '---> returning empty module eigengenes')); allSampleMEs = NULL; } else { if (!MEs[[1]]$allOK) mergedColors[gsg$goodGenes] = MEs[[1]]$validColors; allSampleMEs = vector(mode = "list", length = nSets); names(allSampleMEs) = 
names(multiExpr); for (set in 1:nSets) { allSampleMEs[[set]] = list(data = as.data.frame(matrix(NA, nrow = nSamples[set], ncol = ncol(MEs[[set]]$data)))); allSampleMEs[[set]]$data[gsg$goodSamples[[set]], ] = MEs[[set]]$data[,]; names(allSampleMEs[[set]]$data) = names(MEs[[set]]$data); rownames(allSampleMEs[[set]]$data) = make.unique(originalSampleNames[[set]]$data); } } } else { mergedColors[gsg$goodGenes] = mergedMods$colors; allSampleMEs = vector(mode = "list", length = nSets); names(allSampleMEs) = names(multiExpr); for (set in 1:nSets) { allSampleMEs[[set]] = list(data = as.data.frame(matrix(NA, nrow = nSamples[set], ncol = ncol(mergedMods$newMEs[[1]]$data)))); allSampleMEs[[set]]$data[gsg$goodSamples[[set]], ] = mergedMods$newMEs[[set]]$data[,]; names(allSampleMEs[[set]]$data) = names(mergedMods$newMEs[[set]]$data); rownames(allSampleMEs[[set]]$data) = make.unique(originalSampleNames[[set]]$data); } } names(mergedColors) = names(colors) = originalGeneNames; list(colors = mergedColors, unmergedColors = colors, multiMEs = allSampleMEs # goodSamples = gsg$goodSamples, # goodGenes = gsg$goodGenes, # dendrograms = dendros, # blockGenes = blockGenes, # originCount = originCount, # TOMFiles = TOMFiles ); } #=================================================================================================================== # # Preliminary clustering via projective k-means # #=================================================================================================================== projectiveKMeans = function ( datExpr, preferredSize = 5000, nCenters = as.integer(min(ncol(datExpr)/20, preferredSize^2/ncol(datExpr))), sizePenaltyPower = 4, networkType = "unsigned", randomSeed = 54321, checkData = TRUE, imputeMissing = TRUE, maxIterations = 1000, verbose = 0, indent = 0 ) { centerGeneDist = function(centers, oldDst = NULL, changed = c(1:nCenters), blockSize = 50000, verbose = 0, spaces = "") { if (is.null(oldDst)) { oldDst = array(0, c(nCenters, nGenes)); changed 
= c(1:nCenters); } dstAll = oldDst; nChanged = length(changed) nBlocks = ceiling(ncol(datExpr)/blockSize); blocks = allocateJobs(ncol(datExpr), nBlocks); if (verbose > 5) pind = initProgInd(spaste(spaces, " ..centerGeneDist: ")); for (b in 1:nBlocks) { if (intNetworkType==1) { dst = 1-abs(cor(centers[, changed], datExpr[, blocks[[b]] ])); } else { dst = 1-cor(centers[, changed], datExpr[, blocks[[b]] ]); } dstAll[changed, blocks[[b]]] = dst; if (verbose > 5) pind = updateProgInd(b/nBlocks, pind); } dstAll; } memberCenterDist = function(dst, membership, sizes = NULL, changed = c(1:nCenters), oldMCDist = NULL) { if (is.null(oldMCDist)) { changed = c(1:nCenters); oldMCDist = rep(0, nGenes); } centerDist = oldMCDist; if (!is.null(changed)) { if (is.null(sizes)) sizes = table(membership) if (length(sizes)!=nCenters) { sizes2 = rep(0, nCenters); sizes2[as.numeric(names(sizes))] = sizes; sizes = sizes2; } if (is.finite(sizePenaltyPower)) { sizeCorrections = (sizes/preferredSize)^sizePenaltyPower; sizeCorrections[sizeCorrections < 1] = 1; } else { sizeCorrections = rep(1, length(sizes)); sizeCorrections[sizes > preferredSize] = Inf; } for (cen in changed) if (sizes[cen]!=0) { if (is.finite(sizeCorrections[cen])) { centerDist[membership==cen] = dst[cen, membership==cen] * sizeCorrections[cen]; } else centerDist[membership==cen] = 10 + dst[cen, membership==cen]; } } centerDist; } spaces = indentSpaces(indent); if (verbose > 0) printFlush(paste(spaces, "Projective K-means:")); datExpr = scale(as.matrix(datExpr)); if (any(is.na(datExpr))) { if (imputeMissing) { printFlush(spaste(spaces, "projectiveKMeans: imputing missing data in 'datExpr'.\n", "To reproduce older results, use 'imputeMissing = FALSE'. 
")); datExpr = t(impute.knn(t(datExpr))$data); } else { printFlush(spaste(spaces, "projectiveKMeans: there are missing data in 'datExpr'.\n", "SVD will not work; will use a weighted mean approximation.")); } } #if (preferredSize >= floor(sqrt(2^31)) ) # stop("'preferredSize must be less than ", floor(sqrt(2^31)), ". Please decrease it and try again.") if (preferredSize >= sqrt(2^31)-1) printFlush(spaste( spaces, " Support for blocks larger than sqrt(2^31) is experimental; please report\n", spaces, " any issues to Peter.Langfelder@gmail.com.")); if (exists(".Random.seed")) { seedSaved = TRUE; savedSeed = .Random.seed } else seedSaved = FALSE; set.seed(randomSeed); nAllSamples = nrow(datExpr); nAllGenes = ncol(datExpr); intNetworkType = charmatch(networkType, .networkTypes); if (is.na(intNetworkType)) stop(paste("Unrecognized networkType argument.", "Recognized values are (unique abbreviations of)", paste(.networkTypes, collapse = ", "))); if (verbose > 1) printFlush(paste(spaces, "..using", nCenters, "centers.")); # Check data for genes and samples that have too many missing values if (checkData) { if (verbose > 0) printFlush(paste(spaces, "..checking data for excessive number of missing values..")); gsg = goodSamplesGenes(datExpr, verbose = verbose -1, indent = indent + 1) if (!gsg$allOK) datExpr = datExpr[gsg$goodSamples, gsg$goodGenes]; } nGenes = ncol(datExpr); nSamples = nrow(datExpr); datExpr[is.na(datExpr)] = 0; centers = matrix(0, nSamples, nCenters); randGeneIndex = sample(nGenes, size = nGenes); temp = rep(c(1:nCenters), times = ceiling(nGenes/nCenters)); membership = temp[randGeneIndex]; if (verbose > 0) printFlush(paste(spaces, "..k-means clustering..")); changed = c(1:nCenters); dst = NULL; centerDist = NULL; iteration = 0; while (!is.null(changed) && (iteration < maxIterations)) { iteration = iteration + 1; if (verbose > 1) printFlush(paste(spaces, " ..iteration", iteration)); clusterSizes = table(membership); if (verbose > 5) pind = 
initProgInd(paste(spaces, " ....calculating centers: ")) for (cen in sort(changed)) { centers[, cen] = .alignedFirstPC(datExpr[, membership==cen], verbose = verbose-2, indent = indent+2); if (verbose > 5) pind = updateProgInd(cen/nCenters, pind); } if (verbose > 5) { pind = updateProgInd(1, pind); printFlush("")} if (verbose > 5) printFlush(paste(spaces, " ....calculating center to gene distances")); dst = centerGeneDist(centers, dst, changed, verbose = verbose, spaces = spaces); centerDist = memberCenterDist(dst, membership, clusterSizes, changed, centerDist); nearestDist = rep(0, nGenes); nearest = rep(0, nGenes); if (verbose > 5) printFlush(paste(spaces, " ....finding nearest center for each gene")); minRes = .Call("minWhich_call", dst, 0L, PACKAGE = "WGCNA") nearestDist = minRes$min; nearest = minRes$which; if (sum(centerDist>nearestDist)>0) { proposedMemb = nearest; accepted = FALSE; while (!accepted && (sum(proposedMemb!=membership)>0)) { if (verbose > 2) cat(paste(spaces, " ..proposing to move", sum(proposedMemb!=membership), "genes")); moved = c(1:nGenes)[proposedMemb!=membership]; newCentDist = memberCenterDist(dst, proposedMemb); gotWorse = newCentDist[moved] > centerDist[moved] if (sum(!is.finite(gotWorse))>0) warning("Have NAs in gotWorse."); if (sum(gotWorse)==0) { accepted = TRUE; if (verbose > 2) printFlush(paste("..move accepted.")); } else { if (verbose > 2) printFlush(paste("..some genes got worse. 
Trying again.")); ord = order(centerDist[moved[gotWorse]] - newCentDist[moved[gotWorse]]) n = ceiling(length(ord)*3/5); proposedMemb[moved[gotWorse][ord[c(1:n)]]] = membership[moved[gotWorse][ord[c(1:n)]]]; } } if (accepted) { propSizes = table(proposedMemb); keep = as.numeric(names(propSizes)); centers = centers[, keep]; dst = dst[keep, ]; changedAll = union(membership[moved], proposedMemb[moved]); changedKeep = changedAll[is.finite(match(changedAll, keep))]; changed = rank(changedKeep); # This is a way to make say 1,3,4 into 1,2,3 membership = as.numeric(as.factor(proposedMemb)); if ( (verbose > 1) && (length(keep) < nCenters)) printFlush(paste(spaces, " ..dropping", nCenters - length(keep), "centers because their clusters are empty.")); nCenters = length(keep); } else { changed = NULL; if (verbose > 2) printFlush(paste("Could not find a suitable move to improve the clustering.")); } } else { changed = NULL; if (verbose > 2) printFlush(paste("Clustering is stable: all genes are closest to their assigned center.")); } if (verbose > 5) { printFlush("Sizes of biggest preliminary clusters:"); order = order(-as.numeric(clusterSizes)); print(as.numeric(clusterSizes)[order[1:min(100, length(order))]]); } } if (verbose > 2 & verbose <6) { printFlush("Sizes of preliminary clusters:"); print(clusterSizes); } merged = sizeRestrictedClusterMerge( datExpr, clusters = membership, clusterSizes = clusterSizes, centers = centers, maxSize = preferredSize, networkType = networkType, verbose = verbose, indent = indent); list(clusters = merged$clusters, centers = merged$centers); } sizeRestrictedClusterMerge = function( datExpr, clusters, clusterSizes = NULL, centers = NULL, maxSize, networkType = "unsigned", verbose = 0, indent = 0) { if (ncol(datExpr)!=length(clusters)) stop("Number of variables (columns) in 'datExpr' and length of 'clusters' are not the same."); spaces = indentSpaces(indent); intNetworkType = charmatch(networkType, .networkTypes); if (is.null(clusterSizes)) {
clusterSizes = table(clusters); centers = NULL; } nCenters = length(clusterSizes); nSamples = nrow(datExpr); if (is.null(centers)) { centers = matrix(0, nSamples, nCenters); for (i in 1:nCenters) { if (clusterSizes[i] > 1) { centers[, i] = .alignedFirstPC(datExpr[, clusters==i], verbose = verbose-2, indent = indent+2) } else { #xx = try({centers[, i] = scale(datExpr[, clusters==i, drop = FALSE])/(sum(is.finite(datExpr[, clusters==i]))-1)}); #if (inherits(xx, "try-error")) browser(); centers[, i] = scale(datExpr[, clusters==i, drop = FALSE])/(sum(is.finite(datExpr[, clusters==i]))-1); } } } if (verbose > 0) printFlush(paste(spaces, "..merging smaller clusters...")); small = (clusterSizes < maxSize); # typically all clusters will be below the max size, but is not a requirement. if (intNetworkType==1) { clustDist = 1-abs(cor(centers[, small])); } else { clustDist = 1-cor(centers[, small]); } diag(clustDist) = 10; # merge nearby clusters if their sizes allow merging done = FALSE; while (!done & (sum(small)>1)) { smallIndex = c(1:nCenters)[small] nSmall = sum(small); distOrder = order(as.vector(clustDist))[seq(from=2, to = nSmall * (nSmall-1), by=2)]; i = 1; canMerge = FALSE; while (i <= length(distOrder) && (!canMerge)) { jj = as.integer( (distOrder[i]-1)/nSmall + 1); ii = distOrder[i] - (jj-1) * nSmall whichI = smallIndex[ii] whichJ = smallIndex[jj]; canMerge = sum(clusterSizes[c(whichI, whichJ)]) <= maxSize; i = i + 1; } if (canMerge) { clusters[clusters==whichJ] = whichI; clusterSizes[whichI] = clusterSizes[whichI] + clusterSizes[whichJ]; centers[, whichI] = .alignedFirstPC(datExpr[, clusters==whichI], verbose = verbose-2, indent = indent+2); nCenters = nCenters -1; if (verbose > 3) printFlush(paste(spaces, " ..merged clusters", whichI, "and", whichJ, "whose combined size is", clusterSizes[whichI])); clusters[clusters>whichJ] = clusters[clusters>whichJ] - 1; centers = centers[ , -whichJ, drop = FALSE]; clusterSizes = clusterSizes[-whichJ]; small = clusterSizes < 
maxSize; if (sum(small) > 1) # Only update if there are at least 2 small clusters left { clustDist = clustDist[-jj, -jj, drop = FALSE]; if (clusterSizes[whichI] < maxSize) { cr1 = c(cor(centers[, small], centers[, whichI])); clustDist[, ii] = clustDist[ii, ] = if (intNetworkType==1) 1-abs(cr1) else 1-cr1; clustDist[ii, ii] = 10; } else { clustDist = clustDist[-ii, -ii, drop = FALSE] } } } else done = TRUE; } list(clusters = clusters, centers = centers); } #====================================================================================================== # # Consensus preliminary partitioning # #====================================================================================================== consensusProjectiveKMeans = function ( multiExpr, preferredSize = 5000, nCenters = NULL, sizePenaltyPower = 4, networkType = "unsigned", randomSeed = 54321, checkData = TRUE, imputeMissing = TRUE, useMean = (length(multiExpr) > 3), maxIterations = 1000, verbose = 0, indent = 0 ) { centerGeneDist = function(centers, oldDst = NULL, changed = c(1:nCenters)) { if (is.null(oldDst)) { oldDst = array(0, c(nCenters, nGenes)); changed = c(1:nCenters); } dstAll = oldDst; nChanged = length(changed) if (nChanged!=0) { dstX = array(0, c(nSets, nChanged, nGenes)); for (set in 1:nSets) if (intNetworkType==1) { dstX[set, , ] = 1-abs(cor(centers[[set]]$data[, changed], multiExpr[[set]]$data)); } else { dstX[set, , ] = 1-cor(centers[[set]]$data[, changed], multiExpr[[set]]$data); } if (nSets==1) { dstAll[changed, ] = dstX[1, , ]; } else { #dst = array(0, c(nChanged, nGenes)); if (useMean) { dstAll[changed, ] = base::colMeans(dstX, dims = 1); } else { dstAll[changed, ] = -minWhichMin(-dstX, byRow = FALSE, dims = 1)$min; } } } dstAll; } memberCenterDist = function(dst, membership, sizes = NULL, changed = c(1:nCenters), oldMCDist = NULL) { if (is.null(oldMCDist)) { changed = c(1:nCenters); oldMCDist = rep(0, nGenes); } centerDist = oldMCDist; if (!is.null(changed)) { if (is.null(sizes)) 
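# In the consensus variant above, the gene-to-center dissimilarity is computed
# per set as 1 - cor (1 - |cor| for unsigned networks) and then aggregated
# across sets: the mean when useMean is TRUE (the default for more than 3
# sets), otherwise the maximum, obtained as -minWhichMin(-dstX, ...)$min.
# A sketch with hypothetical per-set dissimilarities for one gene:
#   d <- c(0.30, 0.45, 0.40)
#   mean(d)   # useMean = TRUE: an outlying set is averaged out
#   max(d)    # useMean = FALSE: the least-agreeing set decides
# The maximum is the stricter consensus; the mean scales better to many sets.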
sizes = table(membership) if (length(sizes)!=nCenters) { sizes2 = rep(0, nCenters); sizes2[as.numeric(names(sizes))] = sizes; sizes = sizes2; } if (is.finite(sizePenaltyPower)) { sizeCorrections = (sizes/preferredSize)^sizePenaltyPower; sizeCorrections[sizeCorrections < 1] = 1; } else { sizeCorrections = rep(1, length(sizes)); sizeCorrections[sizes > preferredSize] = Inf; } for (cen in changed) if (sizes[cen]!=0) { if (is.finite(sizeCorrections[cen])) { centerDist[membership==cen] = dst[cen, membership==cen] * sizeCorrections[cen]; } else centerDist[membership==cen] = 10 + dst[cen, membership==cen]; } } centerDist; } spaces = indentSpaces(indent); if (verbose > 0) printFlush(paste(spaces, "Consensus projective K-means:")); allSize = checkSets(multiExpr); nSamples = allSize$nSamples; nGenes = allSize$nGenes; nSets = allSize$nSets; #if (preferredSize >= floor(sqrt(2^31)) ) # stop("'preferredSize must be less than ", floor(sqrt(2^31)), ". Please decrease it and try again.") if (preferredSize >= sqrt(2^31)-1) printFlush(spaste( spaces, " Support for blocks larger than sqrt(2^31) is experimental; please report\n", spaces, " any issues to Peter.Langfelder@gmail.com.")); if (exists(".Random.seed")) { seedSaved = TRUE; savedSeed = .Random.seed } else seedSaved = FALSE; set.seed(randomSeed); if (is.null(nCenters)) nCenters = as.integer(min(nGenes/20, 100 * nGenes/preferredSize)); if (verbose > 1) printFlush(paste(spaces, "..using", nCenters, "centers.")); intNetworkType = charmatch(networkType, .networkTypes); if (is.na(intNetworkType)) stop(paste("Unrecognized networkType argument.", "Recognized values are (unique abbreviations of)", paste(.networkTypes, collapse = ", "))); for (set in 1:nSets) multiExpr[[set]]$data = scale(as.matrix(multiExpr[[set]]$data)); # Check data for genes and samples that have too many missing values if (checkData) { if (verbose > 0) printFlush(paste(spaces, "..checking data for excessive number of missing values..")); gsg = 
goodSamplesGenesMS(multiExpr, verbose = verbose - 1, indent = indent + 1); for (set in 1:nSets) { if (!gsg$allOK) multiExpr[[set]]$data = scale(multiExpr[[set]]$data[gsg$goodSamples[[set]], gsg$goodGenes]); } } anyNA = mtd.apply(multiExpr, function(x) any(is.na(x)), mdaSimplify = TRUE); if (imputeMissing && any(anyNA)) { if (verbose > 0) printFlush(paste(spaces, "..imputing missing data..")); multiExpr[anyNA] = mtd.apply(multiExpr[anyNA], function(x) t(impute.knn(t(x))$data), mdaVerbose = verbose>1) } else if (any(anyNA)) { printFlush(paste(spaces, "Found missing data. These will be replaced by zeros;\n", spaces, " for a better replacement, use 'imputeMissing=TRUE'.")); for (set in 1:nSets) multiExpr[[set]]$data[is.na(multiExpr[[set]]$data)] = 0; } setSize = checkSets(multiExpr); nGenes = setSize$nGenes; nSamples = setSize$nSamples; centers = vector(mode="list", length = nSets); for (set in 1:nSets) centers[[set]] = list(data = matrix(0, nSamples[set], nCenters)); randGeneIndex = sample(nGenes, size = nGenes); temp = rep(c(1:nCenters), times = ceiling(nGenes/nCenters)); membership = temp[randGeneIndex]; if (verbose > 0) printFlush(paste(spaces, "..consensus k-means clustering..")); changed = c(1:nCenters); dst = NULL; iteration = 0; centerDist = NULL; while (!is.null(changed) && (iteration < maxIterations)) { iteration = iteration + 1; if (verbose > 1) printFlush(paste(spaces, " ..iteration", iteration)); clusterSizes = table(membership); for (set in 1:nSets) for (cen in changed) centers[[set]]$data[, cen] = .alignedFirstPC(multiExpr[[set]]$data[, membership==cen], verbose = verbose-2, indent = indent+2); dst = centerGeneDist(centers, dst, changed); centerDist = memberCenterDist(dst, membership, clusterSizes, changed, centerDist); changed = NULL; minRes = minWhichMin(dst); nearestDist = minRes$min; nearest = minRes$which; if (sum(centerDist>nearestDist)>0) { proposedMemb = nearest; accepted = FALSE; while (!accepted && (sum(proposedMemb!=membership)>0)) { if 
(verbose > 2) cat(paste(spaces, " ..proposing to move", sum(proposedMemb!=membership), "genes")); moved = c(1:nGenes)[proposedMemb!=membership]; newCentDist = memberCenterDist(dst, proposedMemb); gotWorse = newCentDist[moved] > centerDist[moved] if (sum(gotWorse)==0) { accepted = TRUE; if (verbose > 2) printFlush(paste("..move accepted.")); } else { if (verbose > 2) printFlush(paste("..some genes got worse. Trying again.")); ord = order(centerDist[moved[gotWorse]] - newCentDist[moved[gotWorse]]) n = ceiling(length(ord)*3/5); proposedMemb[moved[gotWorse][ord[c(1:n)]]] = membership[moved[gotWorse][ord[c(1:n)]]]; } } if (accepted) { propSizes = table(proposedMemb); keep = as.numeric(names(propSizes)); for (set in 1:nSets) centers[[set]]$data = centers[[set]]$data[, keep]; dst = dst[keep, ]; changedAll = union(membership[moved], proposedMemb[moved]); changedKeep = changedAll[is.finite(match(changedAll, keep))]; changed = rank(changedKeep); # This is a way to make say 1,3,4 into 1,2,3 membership = as.numeric(as.factor(proposedMemb)); if ( (verbose > 1) && (length(keep) < nCenters)) printFlush(paste(spaces, " ..dropping", nCenters - length(keep), "centers because their clusters are empty.")); nCenters = length(keep); } else { changed = NULL; if (verbose > 2) printFlush(paste("Could not find a suitable move to improve the clustering.")); } } else { changed = NULL; if (verbose > 2) printFlush(paste("Clustering is stable: all genes are closest to their assigned center.")); } } consCenterDist = function(centers, select) { if (is.logical(select)) { nC = sum(select); } else { nC = length(select); } distX = array(0, dim=c(nSets, nC, nC)); for (set in 1:nSets) { if (intNetworkType==1) { distX[set, , ] = 1-abs(cor(centers[[set]]$data[, select])); } else { distX[set, , ] = 1-cor(centers[[set]]$data[, select]); } } if (nSets==1) { distX[1, , ] } else -minWhichMin(-distX, byRow = FALSE, dims = 1)$min; } unmergedMembership = membership; unmergedCenters = centers; # merge nearby clusters if
their sizes allow merging if (verbose > 0) printFlush(paste(spaces, "..merging smaller clusters...")); clusterSizes = as.vector(table(membership)); small = (clusterSizes < preferredSize); done = FALSE; while (!done & (sum(small)>1)) { smallIndex = c(1:nCenters)[small] nSmall = sum(small); clustDist = consCenterDist(centers, smallIndex); diag(clustDist) = 10; distOrder = order(as.vector(clustDist))[seq(from=2, to = nSmall * (nSmall-1), by=2)]; i = 1; canMerge = FALSE; while (i <= length(distOrder) && (!canMerge)) { whichJ = smallIndex[as.integer( (distOrder[i]-1)/nSmall + 1)]; whichI = smallIndex[distOrder[i] - (whichJ-1) * nSmall]; canMerge = sum(clusterSizes[c(whichI, whichJ)]) < preferredSize; i = i + 1; } if (canMerge) { membership[membership==whichJ] = whichI; clusterSizes[whichI] = sum(clusterSizes[c(whichI, whichJ)]); #if (verbose > 1) # printFlush(paste(spaces, " ..merged clusters", whichI, "and", whichJ, # "whose combined size is", clusterSizes[whichI])); for (set in 1:nSets) centers[[set]]$data[, whichI] = .alignedFirstPC(multiExpr[[set]]$data[, membership==whichI], verbose = verbose-2, indent = indent+2); for (set in 1:nSets) centers[[set]]$data = centers[[set]]$data[,-whichJ]; membership[membership>whichJ] = membership[membership>whichJ] - 1; nCenters = nCenters -1; clusterSizes = as.vector(table(membership)); small = (clusterSizes < preferredSize); if (verbose > 3) printFlush(paste(spaces, " ..merged clusters", whichI, "and", whichJ, "whose combined size is", clusterSizes[whichI])); } else done = TRUE; } if (checkData) { membershipAll = rep(NA, allSize$nGenes); membershipAll[gsg$goodGenes] = membership; } else membershipAll = membership; if (seedSaved) .Random.seed <<- savedSeed; return( list(clusters = membershipAll, centers = centers, unmergedClusters = unmergedMembership, unmergedCenters = unmergedCenters) ); } WGCNA/R/overlapTableUsingKME.R0000644000176200001440000001241513103416622015421 0ustar liggesusers# Determines significant overlap between 
modules in two networks based on kME tables. overlapTableUsingKME <- function(dat1, dat2, colorh1, colorh2, MEs1=NULL, MEs2=NULL, name1="MM1", name2="MM2", cutoffMethod="assigned", cutoff=0.5, omitGrey=TRUE, datIsExpression=TRUE){ # Run a few tests on the input data formatting if (is.null(dim(dat1))|is.null(dim(dat2))) { write("Error: dat1 and dat2 must be matrices.",""); return(0) } if ((dim(dat1)[datIsExpression+1]!=length(colorh1))| (dim(dat2)[datIsExpression+1]!=length(colorh2))){ write("Error: Both sets of input data and color vectors must have same length.",""); return(0) } if ((cutoffMethod=="pvalue")&(datIsExpression==FALSE)){ write("Error: Pvalues are not calculated if datIsExpression=FALSE. Choose other cutoffMethod.", ""); return(0) } # Find and format the kME values and other variables for both inputs G1 = dimnames(dat1)[[datIsExpression+1]]; G2 = dimnames(dat2)[[datIsExpression+1]]; if(datIsExpression){ if(is.null(MEs1)) MEs1 = (moduleEigengenes(dat1, colors=as.character(colorh1), excludeGrey=omitGrey))$eigengenes if(is.null(MEs2)) MEs2 = (moduleEigengenes(dat2, colors=as.character(colorh2), excludeGrey=omitGrey))$eigengenes mods1 = colnames(MEs1); mods2 = colnames(MEs2) if (length(grep("ME",mods1))==length(mods1)) mods1 = substr(mods1,3,nchar(mods1)) if (length(grep("PC",mods1))==length(mods1)) mods1 = substr(mods1,3,nchar(mods1)) if (length(grep("ME",mods2))==length(mods2)) mods2 = substr(mods2,3,nchar(mods2)) if (length(grep("PC",mods2))==length(mods2)) mods2 = substr(mods2,3,nchar(mods2)) out = corAndPvalue(dat1,MEs1); MM1 = out$cor; PV1 = out$p; rm(out); out = corAndPvalue(dat2,MEs2); MM2 = out$cor; PV2 = out$p; rm(out); colnames(MM1) <- colnames(PV1) <- mods1; colnames(MM2) <- colnames(PV2) <- mods2; rownames(MM1) <- rownames(PV1) <- G1; rownames(MM2) <- rownames(PV2) <- G2; } else { MM1 = dat1[,sort(colnames(dat1))]; mods1 = colnames(MM1) MM2 = dat2[,sort(colnames(dat2))]; mods2 = colnames(MM2) if (length(grep("ME",mods1))==length(mods1)) mods1 =
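# cutoffMethod is matched by its first letter in the dispatch below:
# "a"(ssigned) keeps each module's assigned genes from colorh; "p"(value)
# keeps genes with membership p-value <= cutoff (expression input only);
# "k"(ME) keeps genes with kME >= cutoff; "n"(umber) keeps the top `cutoff`
# genes by kME rank. A hypothetical call selecting genes by kME:
#   overlapTableUsingKME(dat1, dat2, colorh1, colorh2,
#                        cutoffMethod = "kME", cutoff = 0.5)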
substr(mods1,3,nchar(mods1)) if (length(grep("PC",mods1))==length(mods1)) mods1 = substr(mods1,3,nchar(mods1)) if (length(grep("ME",mods2))==length(mods2)) mods2 = substr(mods2,3,nchar(mods2)) if (length(grep("PC",mods2))==length(mods2)) mods2 = substr(mods2,3,nchar(mods2)) colnames(MM1) = mods1; colnames(MM2) = mods2; rownames(MM1) = G1; rownames(MM2) = G2; if(omitGrey){ MM1 = MM1[,!is.element(mods1,"grey")]; mods1 = colnames(MM1) MM2 = MM2[,!is.element(mods2,"grey")]; mods2 = colnames(MM2) } } if ((length(setdiff(mods1,as.character(colorh1)))>omitGrey)| (length(setdiff(mods2,as.character(colorh2)))>omitGrey)){ write("MEs cannot include colors with no genes assigned.",""); return(0) } l1 = length(mods1); l2 = length(mods2) cutoffMethod = substr(cutoffMethod,1,1) names=c(name1,name2) comGenes = sort(unique(intersect(G1,G2))); total = length(comGenes) MM1 = MM1[comGenes,]; MM2 = MM2[comGenes,] if (datIsExpression){ PV1 = PV1[comGenes,]; PV2 = PV2[comGenes,] } names(colorh1) = G1; colorh1 = colorh1[comGenes] names(colorh2) = G2; colorh2 = colorh2[comGenes] # Assign each gene in each module to a vector corresponding to the modules genes1 <- genes2 <- list() if (cutoffMethod=="a"){ for (i in 1:l1) genes1[[i]] = comGenes[colorh1==mods1[i]] for (i in 1:l2) genes2[[i]] = comGenes[colorh2==mods2[i]] } else if (cutoffMethod=="p") { for (i in 1:l1) genes1[[i]] = comGenes[PV1[,mods1[i]]<=cutoff] for (i in 1:l2) genes2[[i]] = comGenes[PV2[,mods2[i]]<=cutoff] } else if (cutoffMethod=="k") { for (i in 1:l1) genes1[[i]] = comGenes[MM1[,mods1[i]]>=cutoff] for (i in 1:l2) genes2[[i]] = comGenes[MM2[,mods2[i]]>=cutoff] } else if (cutoffMethod=="n") { for (i in 1:l1) genes1[[i]] = comGenes[rank(-MM1[,mods1[i]])<=cutoff] for (i in 1:l2) genes2[[i]] = comGenes[rank(-MM2[,mods2[i]])<=cutoff] } else { write("ERROR: cutoffMethod entered is not supported.",""); return(0) } names(genes1) = paste(names[1],mods1,sep="_") names(genes2) = paste(names[2],mods2,sep="_") # Determine signficance of 
each comparison and write out all of the gene lists ovGenes = list() ovNames = rep("",l1*l2) pVals = matrix(1, nrow=l1, ncol=l2) rownames(pVals) = paste(names[1],mods1,sep="_") colnames(pVals) = paste(names[2],mods2,sep="_") i = 0 for (m1 in 1:l1) for (m2 in 1:l2) { i = i+1 ovGenes[[i]] = sort(unique(intersect(genes1[[m1]],genes2[[m2]]))) pVals[m1,m2] = .phyper2(total,length(genes1[[m1]]), length(genes2[[m2]]),length(ovGenes[[i]])) if (pVals[m1,m2]>10^(-10)) pVals[m1,m2] = .phyper2(total,length(genes1[[m1]]), length(genes2[[m2]]),length(ovGenes[[i]]),FALSE) ovNames[i] = paste(names[1],mods1[m1],names[2],mods2[m2],sep="_") } names(ovGenes) = ovNames out = list(pVals,comGenes,genes1,genes2,ovGenes) names(out) = c("PvaluesHypergeo","AllCommonGenes",paste("Genes",names,sep=""),"OverlappingGenes") return(out) } .phyper2 <- function (total, group1, group2, overlap, verySig=TRUE ,lt=TRUE){ # This function is the same is phyper, just allows for more sensible input values q = overlap m = group1 n = total-group1 k = group2 prob = phyper(q, m, n, k, log.p = verySig, lower.tail=lt) if (verySig) return(-prob) return(1-prob) }WGCNA/R/matchLabels.R0000644000176200001440000001070014536055333013661 0ustar liggesusers# Relabel the labels in source such that modules with high overlap with those in reference will have the # same labels overlapTable = function(labels1, labels2, na.rm = TRUE, ignore = NULL, levels1 = NULL, levels2 = NULL, log.p = FALSE) { labels1 = as.vector(labels1); labels2 = as.vector(labels2); if (length(labels1)!=length(labels2)) stop("Lengths of 'labels1' and 'labels2' must be the same.") if (na.rm) { keep = !is.na(labels1) & !is.na(labels2); labels1 = labels1[keep]; labels2 = labels2[keep]; } if (is.null(levels1)) { levels1 = sort(unique(labels1)); levels1 = levels1[!levels1 %in%ignore]; } if (is.null(levels2)) { levels2 = sort(unique(labels2)); levels2 = levels2[!levels2 %in%ignore]; } n1 = length(levels1); n2 = length(levels2); countMat = matrix(0, n1, n2); 
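# The p-values computed below come from the hypergeometric upper tail: given
# .n1 genes in one module, .n2 in the other, and nAll genes in total, phyper
# gives the chance of observing .n12 or more shared genes under random
# overlap. A sketch with hypothetical counts (100- and 80-gene modules among
# 5000 genes, 20 shared):
#   phyper(20 - 1, 100, 5000 - 100, 80, lower.tail = FALSE)
# The "- 1" makes the upper tail include the observed overlap itself.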
pMat = matrix(0, n1, n2); expected = matrix(0, n1, n2); nAll = length(labels1); if (n1 > 0 && n2 > 0) for (m1 in 1:n1) for (m2 in 1:n2) { m1Members = (labels1 == levels1[m1]); m2Members = (labels2 == levels2[m2]); .n1 = sum(m1Members); .n2 = sum(m2Members); .n12 = sum(m1Members & m2Members); #tab = .table2.allLevels(m1Members, m2Members, levels.x = c(FALSE, TRUE), levels.y = c(FALSE, TRUE)); #print(paste("table for levels", levels1[m1], levels2[m2])); #print(table(m1Members, m2Members)); #pMat[m1, m2] = fisher.test(tab, alternative = "greater")$p.value; pMat[m1, m2] = if (.n12 > 0) phyper(.n12-1, .n1, nAll - .n1, .n2, lower.tail = FALSE, log.p = log.p) else 1-log.p; countMat[m1, m2] = .n12; expected[m1, m2] = .n1 * .n2/nAll } dimnames(pMat) = dimnames(countMat) = dimnames(expected) = list(levels1, levels2); fraction = countMat/expected; fraction[expected==0] = 0; pMat[is.na(pMat)] = 1; list(countTable = countMat, pTable = pMat, expected = expected, fraction = fraction); } matchLabels = function(source, reference, pThreshold = 5e-2, na.rm = TRUE, ignoreLabels = if (is.numeric(reference)) 0 else "grey", extraLabels = if (is.numeric(reference)) c(1:1000) else standardColors()) { source = as.matrix(source); if (nrow(source)!=length(reference)) stop("Number of rows of 'source' must equal the length of 'reference'."); result = array(NA, dim = dim(source)); #refMods = as.numeric(sort(unique(reference))); #refMods = refMods[!refMods %in% ignoreLabels]; for (col in 1:ncol(source)) { src = source[, col] tab = overlapTable(src, reference, na.rm = na.rm, ignore = ignoreLabels); if (any(dim(tab$countTable)==0)) { result[, col] = src; next; } pTab = tab$pTable; pOrder = apply(pTab, 2, order); bestOrder = order(apply(pTab, 2, min)); refMods = colnames(pTab); if (is.numeric(reference)) refMods = as.numeric(refMods); sourceMods = rownames(pTab); newLabels = rep(NA, length(sourceMods)); names(newLabels) = sourceMods; for (rm in 1:length(bestOrder)) { bestInd = 1; done = FALSE; 
#printFlush(paste("Looking for best match for reference module ", refMods[bestOrder[rm]])); while (!done && bestInd < length(sourceMods)) { bm = pOrder[bestInd, bestOrder[rm]]; bp = pTab[bm, bestOrder[rm]]; if (bp > pThreshold) { done = TRUE; } else if (is.na(newLabels[bm])) { #newLabels[bm] = as.numeric(refMods[bestOrder[rm]]); #printFlush(paste("Labeling old module ", sourceMods[bm], "as new module", # refMods[bestOrder[rm]], "with p=", bp)); newLabels[bm] = refMods[bestOrder[rm]]; done = TRUE; } bestInd = bestInd + 1; } } if (length(ignoreLabels) > 0) { newLabels.ignore = ignoreLabels; names(newLabels.ignore) = ignoreLabels; newLabels = c(newLabels.ignore, newLabels); } unassigned = src %in% names(newLabels)[is.na(newLabels)]; if (any(unassigned)) { unassdSrcTab = table(src[!src %in% names(newLabels)]); unassdRank = rank(-unassdSrcTab, ties.method = "first"); nExtra = sum(is.na(newLabels)); newLabels[is.na(newLabels)] = extraLabels[ !extraLabels %in% c(refMods, ignoreLabels, names(newLabels))] [1:nExtra]; } result[, col] = newLabels[match(src, names(newLabels))]; } result; } WGCNA/R/hierarchicalConsensusModules.R0000644000176200001440000005756414230552654017334 0ustar liggesusers# Hierarchical consensus modules hierarchicalConsensusModules = function( multiExpr, multiWeights = NULL, # Optional: multiExpr wigth imputed missing data multiExpr.imputed = NULL, # Data checking options checkMissingData = TRUE, # Blocking options blocks = NULL, maxBlockSize = 5000, blockSizePenaltyPower = 5, nPreclusteringCenters = NULL, randomSeed = 12345, # ...or information needed to construct individual networks # Network construction options. This can be a single object of class NetworkOptions, or a multiData # structure of NetworkOptions objects, one per element of multiExpr. networkOptions, # Save individual TOMs? 
saveIndividualTOMs = TRUE, individualTOMFileNames = "individualTOM-Set%s-Block%b.RData", keepIndividualTOMs = FALSE, # Consensus calculation options consensusTree = NULL, # if not given, the one in consensusTOMInfo will be used. # Return options saveConsensusTOM = TRUE, consensusTOMFilePattern = "consensusTOM-%a-Block%b.RData", # Keep the consensus? Note: I will not have an option to keep intermediate results here. keepConsensusTOM = saveConsensusTOM, # Internal handling of TOMs useDiskCache = NULL, chunkSize = NULL, cacheBase = ".blockConsModsCache", cacheDir = ".", # Alternative consensus TOM input from a previous calculation consensusTOMInfo = NULL, # Basic tree cut options deepSplit = 2, detectCutHeight = 0.995, minModuleSize = 20, checkMinModuleSize = TRUE, # Advanced tree cut options maxCoreScatter = NULL, minGap = NULL, maxAbsCoreScatter = NULL, minAbsGap = NULL, minSplitHeight = NULL, minAbsSplitHeight = NULL, useBranchEigennodeDissim = FALSE, minBranchEigennodeDissim = mergeCutHeight, stabilityLabels = NULL, stabilityCriterion = c("Individual fraction", "Common fraction"), minStabilityDissim = NULL, pamStage = TRUE, pamRespectsDendro = TRUE, # Gene joining and removal from a module, and module "significance" criteria # reassignThresholdPS = 1e-4, ## For now do not do gene reassignment - have to think more about how # to do it. iteratePruningAndMerging = FALSE, minCoreKME = 0.5, minCoreKMESize = minModuleSize/3, minKMEtoStay = 0.2, # Module eigengene calculation options impute = TRUE, trapErrors = FALSE, excludeGrey = FALSE, # Module merging options calibrateMergingSimilarities = FALSE, mergeCutHeight = 0.15, # General options collectGarbage = TRUE, verbose = 2, indent = 0, ...)
{ spaces = indentSpaces(indent); dataSize = checkSets(multiExpr); nSets = dataSize$nSets; nGenes = dataSize$nGenes; # nSamples = dataSize$nSamples; originalGeneNames = mtd.colnames(multiExpr); if (is.null(originalGeneNames)) originalGeneNames = spaste("Column.", 1:nGenes); originalSampleNames = mtd.apply(multiExpr, function(x) { out = rownames(x); if (is.null(out)) out = spaste("Row.", 1:nrow(x)); out; }); haveWeights = !is.null(multiWeights); .checkAndScaleMultiWeights(multiWeights, multiExpr, scaleByMax = FALSE); if (!is.null(randomSeed)) { if (exists(".Random.seed")) { savedSeed = .Random.seed on.exit(.Random.seed <<-savedSeed, add = FALSE); } set.seed(randomSeed); } if (checkMinModuleSize & (minModuleSize > nGenes/5)) { minModuleSize = nGenes/5; warning("hierarchicalConsensusModules: minModuleSize appeared too large and was lowered to ", minModuleSize, ". If you insist on keeping the original minModuleSize, set checkMinModuleSize = FALSE."); } if (verbose>0) printFlush(paste(spaces, "Calculating consensus modules and module eigengenes", "block-wise from all genes")); if (!is.null(multiExpr.imputed)) { size.imp = checkSets(multiExpr.imputed); if (!isTRUE(all.equal(size.imp, dataSize))) stop("If given, 'multiExpr.imputed' must have the same dimensions in each set as 'multiExpr'."); } branchSplitFnc = NULL; minBranchDissimilarities = numeric(0); externalSplitFncNeedsDistance = logical(0); if (useBranchEigennodeDissim) { branchSplitFnc = "hierarchicalBranchEigengeneDissim"; minBranchDissimilarities = minBranchEigennodeDissim; externalSplitFncNeedsDistance = FALSE; } if (!is.null(stabilityLabels)) { stabilityCriterion = match.arg(stabilityCriterion); branchSplitFnc = c(branchSplitFnc, if (stabilityCriterion=="Individual fraction") "branchSplitFromStabilityLabels.individualFraction" else "branchSplitFromStabilityLabels"); minBranchDissimilarities = c(minBranchDissimilarities, minStabilityDissim); externalSplitFncNeedsDistance = c(externalSplitFncNeedsDistance, FALSE); }
otherArgs = list(...); getDetails = FALSE; if ("getDetails" %in% names(otherArgs)) getDetails = otherArgs$getDetails; if (is.null(consensusTOMInfo)) { if (is.null(consensusTree)) stop("Either 'consensusTOMInfo' or 'consensusTree' must be given."); consensusTOMInfo = hierarchicalConsensusTOM( multiExpr = multiExpr, multiWeights = multiWeights, checkMissingData = checkMissingData, blocks = blocks, maxBlockSize = maxBlockSize, blockSizePenaltyPower = blockSizePenaltyPower, nPreclusteringCenters = nPreclusteringCenters, randomSeed = NULL, networkOptions = networkOptions, keepIndividualTOMs = keepIndividualTOMs, individualTOMFileNames = individualTOMFileNames, consensusTree = consensusTree, saveCalibratedIndividualTOMs = FALSE, getCalibrationSamples = FALSE, # Return options saveConsensusTOM = saveConsensusTOM, consensusTOMFilePattern = consensusTOMFilePattern, keepIntermediateResults = FALSE, # Internal handling of TOMs useDiskCache = useDiskCache, chunkSize = chunkSize, cacheBase = cacheBase, cacheDir = cacheDir, collectGarbage = collectGarbage, verbose = verbose, indent = indent); removeConsensusTOMOnExit = !keepConsensusTOM; } else { # Basic checks on consensusTOMInfo .checkComponents(consensusTOMInfo, c("individualTOMInfo", "consensusData", "consensusTree")); if (length(consensusTOMInfo$individualTOMInfo$blockInfo$blocks)!=nGenes) stop("Inconsistent number of genes in 'consensusTOMInfo$individualTOMInfo$blockInfo$blocks'."); if (!is.null(consensusTree) && !isTRUE(all.equal(consensusTree, consensusTOMInfo$consensusTree))) warning(immediate. = TRUE, "hierarchicalConsensusModules: given 'consensusTree' is different\n", "from the 'consensusTree' component in 'consensusTOMInfo'. 
\n", "This is normally undesirable and may\n", "indicate a mistake in the function call."); if (is.null(consensusTree)) consensusTree = consensusTOMInfo$consensusTree; removeConsensusTOMOnExit = FALSE; } useSets = consensusTreeInputs(consensusTree) nUseSets = length(useSets) ## Re-set network options to make them a multiData structure with 1 entry per input set. networkOptions = consensusTOMInfo$individualTOMInfo$networkOptions; allLabels = mergedLabels = rep(0, nGenes); allLabelIndex = NULL; # Restrict data to goodSamples and goodGenes gsg = consensusTOMInfo$individualTOMInfo$blockInfo$goodSamplesAndGenes; if (!gsg$allOK) { multiExpr = mtd.subset(multiExpr, gsg$goodSamples, gsg$goodGenes); if (!is.null(multiExpr.imputed)) multiExpr.imputed = mtd.subset(multiExpr.imputed, gsg$goodSamples, gsg$goodGenes); if (haveWeights) multiWeights = mtd.subset(multiWeights, gsg$goodSamples, gsg$goodGenes); } hasMissing = unlist(multiData2list(mtd.apply(multiExpr[useSets], function(x) { any(is.na(x)) }))); # prepare scaled and imputed multiExpr. 
multiExpr.scaled = mtd.apply(multiExpr, scale); if (is.null(multiExpr.imputed)) { multiExpr.imputed = mtd.mapply(function(x, doImpute) { if (doImpute) t(impute.knn(t(x))$data) else x }, multiExpr.scaled[useSets], hasMissing); } else { if (is.null(names(multiExpr.imputed))) names(multiExpr.imputed) = names(multiExpr) multiExpr.imputed = multiExpr.imputed[useSets]; } nGGenes = sum(gsg$goodGenes); nGSamples = sapply(gsg$goodSamples, sum); blocks = consensusTOMInfo$individualTOMInfo$blockInfo$blocks; gBlocks = consensusTOMInfo$individualTOMInfo$blockInfo$gBlocks; blockLevels = sort(unique(gBlocks)); blockSizes = table(gBlocks) nBlocks = length(blockLevels); # reassignThreshold = reassignThresholdPS^nSets; dendros = list(); cutreeLabels = list(); maxUsedLabel = 0; goodGeneLabels = rep(0, nGGenes); # Here's where the analysis starts for (blockNo in 1:nBlocks) { if (verbose>1) printFlush(paste(spaces, "..Working on block", blockNo, ".")); # Select block genes block = c(1:nGGenes)[gBlocks==blockLevels[blockNo]]; nBlockGenes = length(block); selExpr = mtd.subset(multiExpr[useSets], , block); if (haveWeights) selWeights = mtd.subset(multiWeights[useSets], , block); errorOccurred = FALSE; consTomDS = BD.getData(consensusTOMInfo$consensusData, blockNo); consTomDS = 1-consTomDS; if (collectGarbage) gc(); if (verbose>2) printFlush(paste(spaces, "....clustering and detecting modules..")); errorOccured = FALSE; dendros[[blockNo]] = fastcluster::hclust(consTomDS, method = "average"); if (verbose > 8) { if (interactive()) plot(dendros[[blockNo]], labels = FALSE, main = paste("Block", blockNo)); } externalSplitOptions = list(); e.index = 1; if (useBranchEigennodeDissim) { externalSplitOptions[[e.index]] = list(multiExpr = mtd.subset(multiExpr.imputed,, block), networkOptions = networkOptions[useSets], consensusTree = consensusTree); e.index = e.index +1; } if (!is.null(stabilityLabels)) { externalSplitOptions[[e.index]] = list(stabilityLabels = stabilityLabels); e.index = e.index + 
1; } #blockLabels = try(cutreeDynamic(dendro = dendros[[blockNo]], blockLabels = cutreeDynamic(dendro = dendros[[blockNo]], distM = as.matrix(consTomDS), deepSplit = deepSplit, cutHeight = detectCutHeight, minClusterSize = minModuleSize, method ="hybrid", maxCoreScatter = maxCoreScatter, minGap = minGap, maxAbsCoreScatter = maxAbsCoreScatter, minAbsGap = minAbsGap, minSplitHeight = minSplitHeight, minAbsSplitHeight = minAbsSplitHeight, externalBranchSplitFnc = branchSplitFnc, minExternalSplit = minBranchDissimilarities, externalSplitOptions = externalSplitOptions, externalSplitFncNeedsDistance = externalSplitFncNeedsDistance, assumeSimpleExternalSpecification = FALSE, pamStage = pamStage, pamRespectsDendro = pamRespectsDendro, verbose = verbose-3, indent = indent + 2) #verbose = verbose-3, indent = indent + 2), silent = TRUE); if (verbose > 8) { print(table(blockLabels)); if (interactive()) plotDendroAndColors(dendros[[blockNo]], labels2colors(blockLabels), dendroLabels = FALSE, main = paste("Block", blockNo)); } if (getDetails) { cutreeLabels[[blockNo]] = blockLabels; } if (inherits(blockLabels, 'try-error')) { (if (verbose>0) printFlush else warning) (paste(spaces, "hierarchicalConsensusModules: cutreeDynamic failed:\n ", spaces, blockLabels, "\n", spaces, " Error occurred in block", blockNo, "\n", spaces, " Continuing with next block.
")); } else { blockLabels[blockLabels>0] = blockLabels[blockLabels>0] + maxUsedLabel; maxUsedLabel = max(blockLabels); goodGeneLabels[block] = blockLabels; } } #prune = try(pruneAndMergeConsensusModules( prune = pruneAndMergeConsensusModules( multiExpr = multiExpr[useSets], multiWeights = multiWeights[useSets], multiExpr.imputed = multiExpr.imputed, labels = goodGeneLabels, networkOptions = networkOptions[useSets], consensusTree = consensusTree, minModuleSize = minModuleSize, minCoreKME = minCoreKME, minCoreKMESize = minCoreKMESize, minKMEtoStay = minKMEtoStay, # Module eigengene calculation options impute = impute, trapErrors = trapErrors, # Module merging options calibrateMergingSimilarities = calibrateMergingSimilarities, mergeCutHeight = mergeCutHeight, iterate = iteratePruningAndMerging, collectGarbage = collectGarbage, getDetails = TRUE, verbose = verbose, indent=indent + 1)#, silent = TRUE); if (inherits(prune, "try-error")) { printFlush(paste(spaces, "'pruneAndMergeConsensusModules' failed with the following error message:\n", spaces, prune, "\n", spaces, "--> returning unpruned module labels.")); mergedLabels = goodGeneLabels; } else mergedLabels = prune$labels; allLabels[gsg$goodGenes] = goodGeneLabels; MEs = try(multiSetMEs(multiExpr[useSets], universalColors = mergedLabels, excludeGrey = excludeGrey, grey = 0, # trapErrors = TRUE, returnValidOnly = TRUE verbose = verbose-1, indent = indent + 1 ), silent = TRUE); if (inherits(MEs, 'try-error')) { warning(paste('hierarchicalConsensusModules: ME calculation failed with this message:\n ', MEs, '---> returning empty module eigengenes')); allSampleMEs = NULL; } else { mergedLabels[gsg$goodGenes] = mergedLabels; index = lapply(gsg$goodSamples, function(gs) { out = rep(NA, length(gs)); out[gs] = 1:sum(gs); out; }); names(index) = names(multiExpr); allSampleMEs = mtd.subset(MEs, index[useSets]); for (set in 1:nUseSets) rownames(allSampleMEs[[set]]$data) = make.unique(originalSampleNames[[ useSets[set] ]]$data); }
if (removeConsensusTOMOnExit) { BD.checkAndDeleteFiles(consensusTOMInfo$consensusData); consensusTOMInfo$consensusData = NULL; } names(mergedLabels) = names(allLabels) = originalGeneNames; list(labels = mergedLabels, unmergedLabels = allLabels, colors = labels2colors(mergedLabels), unmergedColors = labels2colors(allLabels), multiMEs = allSampleMEs, dendrograms = dendros, consensusTOMInfo = consensusTOMInfo, blockInfo = consensusTOMInfo$individualTOMInfo$blockInfo, moduleIdentificationArguments = list( deepSplit = deepSplit, detectCutHeight = detectCutHeight, minModuleSize = minModuleSize, maxCoreScatter = maxCoreScatter, minGap = minGap, maxAbsCoreScatter = maxAbsCoreScatter, minAbsGap = minAbsGap, minSplitHeight = minAbsSplitHeight, useBranchEigennodeDissim = useBranchEigennodeDissim, minBranchEigennodeDissim = minBranchEigennodeDissim, minStabilityDissim = minStabilityDissim, pamStage = pamStage, pamRespectsDendro = pamRespectsDendro, minCoreKME = minCoreKME, minCoreKMESize = minCoreKMESize, minKMEtoStay = minKMEtoStay, calibrateMergingSimilarities = calibrateMergingSimilarities, mergeCutHeight = mergeCutHeight), details = if(getDetails) list(cutreeLabels = cutreeLabels) else NULL ); } #===================================================================================================== # # pruneAndMergeConsensusModules # #===================================================================================================== pruneConsensusModules = function( multiExpr, multiWeights = NULL, multiExpr.imputed = NULL, MEs = NULL, labels, unassignedLabel = if (is.numeric(labels)) 0 else "grey", networkOptions, consensusTree, minModuleSize, minCoreKMESize = minModuleSize/3, minCoreKME = 0.5, minKMEtoStay = 0.2, # Module eigengene calculation options impute = TRUE, collectGarbage = FALSE, checkWeights = TRUE, verbose = 1, indent=0) { spaces = indentSpaces(indent); if (checkWeights) .checkAndScaleMultiWeights(multiWeights, multiExpr, scaleByMax = FALSE) oldLabels = 
labels; moduleIndex = sort(unique(labels)); moduleIndex = moduleIndex[moduleIndex!=0]; useInputs = consensusTreeInputs(consensusTree, flatten = TRUE) if (is.null(MEs)) { # If multiExpr.imputed were not given, do not impute here, let moduleEigengenes do it since the imputation # should be faster there. if (is.null(multiExpr.imputed)) multiExpr.imputed = multiExpr MEs = multiSetMEs(multiExpr.imputed[useInputs], universalColors = labels, excludeGrey = TRUE, grey = unassignedLabel, impute = impute, verbose = verbose-1, indent = indent + 1); } else { meSize = checkSets(MEs); } deleteModules = numeric(0); changedModules = numeric(0); # Check modules: make sure that of the genes present in the module, at least a minimum number # have a correlation with the eigengene higher than a given cutoff, and that all member genes have # the required minimum consensus KME if (verbose>0) printFlush(paste(spaces, "..checking kME in consensus modules")); nSets = nSets(multiExpr); if (is.null(multiWeights)) multiWeights = .listRep(numeric(0), nSets) KME = mtd.mapply(function(expr, weights, me, netOpt) { haveWeights = length(dim(weights))==2; kme = do.call(netOpt$corFnc, c(if (haveWeights) list(x = expr, y = me, weights.x = weights) else list(x = expr, y = me), netOpt$corOptions)); if (!grepl("signed", netOpt$networkType)) kme = abs(kme); kme; }, multiExpr[useInputs], multiWeights[useInputs], MEs, networkOptions, returnList = TRUE); consKME = simpleHierarchicalConsensusCalculation(KME, consensusTree); if (collectGarbage) gc(); nMEs = checkSets(MEs)$nGenes; for (mod in 1:nMEs) { modGenes = (labels==moduleIndex[mod]); consKME1 = consKME[modGenes, mod]; if (sum(consKME1>minCoreKME) < minCoreKMESize) { labels[modGenes] = 0; deleteModules = union(deleteModules, mod); if (verbose>1) printFlush(paste(spaces, " ..deleting module ",moduleIndex[mod], ": of ", sum(modGenes), " total genes in the module only ", sum(consKME1>minCoreKME), " have the requisite high correlation with the eigengene in all 
sets.", sep="")); } else if (sum(consKME1<minKMEtoStay)>0) { if (verbose > 1) printFlush(paste(spaces, " ..removing", sum(consKME1<minKMEtoStay), "genes from module", moduleIndex[mod], "because their KME is too low.")); labels[modGenes][consKME1<minKMEtoStay] = unassignedLabel; if (sum(labels[modGenes]>0) < minModuleSize) { deleteModules = union(deleteModules, mod); labels[modGenes] = unassignedLabel; if (verbose>1) printFlush(paste(spaces, " ..deleting module ",moduleIndex[mod], ": not enough genes in the module after removal of low KME genes.", sep="")); } else { changedModules = union(changedModules, moduleIndex[mod]); } } } # Remove marked modules if (length(deleteModules) > 0) { for (set in 1:nSets) MEs[[set]]$data = MEs[[set]]$data[, -deleteModules, drop = FALSE]; modGenes = labels %in% moduleIndex[deleteModules]; labels[modGenes] = unassignedLabel; moduleIndex = moduleIndex[-deleteModules]; } labels; } pruneAndMergeConsensusModules = function( multiExpr, multiWeights = NULL, multiExpr.imputed = NULL, labels, unassignedLabel = if (is.numeric(labels)) 0 else "grey", networkOptions, consensusTree, minModuleSize, minCoreKMESize = minModuleSize/3, minCoreKME = 0.5, minKMEtoStay = 0.2, # Module eigengene calculation options impute = TRUE, trapErrors = FALSE, # Module merging options calibrateMergingSimilarities = FALSE, mergeCutHeight = 0.15, iterate = TRUE, collectGarbage = FALSE, getDetails = TRUE, verbose = 1, indent=0) { # Check that there is at least 1 module if (all(labels==unassignedLabel, na.rm = TRUE)) return(if (getDetails) list(labels = labels, lastMergeInfo = NULL, details = NULL) else labels); spaces = indentSpaces(indent); useSets = consensusTreeInputs(consensusTree); if (is.null(multiExpr.imputed)) multiExpr.imputed = mtd.apply(multiExpr[useSets], function(x) t(impute.knn(t(scale(x)))$data)); .checkAndScaleMultiWeights(multiWeights[useSets], multiExpr[useSets], scaleByMax = FALSE); changed = TRUE; if (getDetails) details = list(originalLabels = labels); step = 0; while (changed) { step = step + 1; if (verbose > 0) printFlush(spaste(spaces, "prune and merge consensusModules step ", step)); stepDetails = list(); oldLabels = labels; if
(verbose>1) printFlush(paste(spaces, "..pruning genes with low KME..")); labels = pruneConsensusModules( multiExpr[useSets], multiWeights = multiWeights[useSets], multiExpr.imputed = multiExpr.imputed, MEs = NULL, labels = labels, unassignedLabel = unassignedLabel, networkOptions = networkOptions, consensusTree = consensusTree, minModuleSize = minModuleSize, minCoreKME = minCoreKME, minCoreKMESize = minCoreKMESize, minKMEtoStay = minKMEtoStay, impute = impute, collectGarbage = collectGarbage, checkWeights = FALSE, verbose = verbose-1, indent = indent + 1) if (getDetails) stepDetails = c(stepDetails, list(prunedLabels = labels)) #if (sum(labels>0)==0) #{ # if (verbose>1) # printFlush(paste(spaces, " ..No significant modules left.")); # if (getDetails) details = c(details, stepDetails); # break; #} # Merging needs to be called only if we're either in first iteration or if pruning actually changed # modules. if (step==1 || any(labels!=oldLabels)) { if (verbose>1) printFlush(paste(spaces, "..merging consensus modules that are too close..")); mergedMods = hierarchicalMergeCloseModules(multiExpr[useSets], labels = labels, networkOptions = networkOptions, consensusTree = consensusTree, calibrateMESimilarities = calibrateMergingSimilarities, cutHeight = mergeCutHeight, relabel = TRUE, getNewUnassdME = FALSE, getNewMEs = FALSE, verbose = verbose-1, indent = indent + 1); if (getDetails) stepDetails = c(stepDetails, list(mergeInfo = mergedMods)); labels = mergedMods$labels; } changed = !all(labels==oldLabels) & iterate; if (getDetails) details = c(details, list(stepDetails)); } if (getDetails) { names(details)[-1] = spaste("Iteration.", prependZeros(1:step)); list(labels = labels, lastMergeInfo = mergedMods, details = details); } else labels; } WGCNA/R/mutualInfoAdjacency.R0000644000176200001440000000541213103416622015362 0ustar liggesusers#read in mutual information fx mutualInfoAdjacency=function( datE, discretizeColumns=TRUE, entropyEstimationMethod="MM", numberBins=NULL)
{ reqPackages = c("infotheo", "minet", "entropy") nReq = length(reqPackages) for (r in 1:nReq) { expression = spaste('require("', reqPackages[r], '", quietly = TRUE)'); ok = eval(parse(text = expression)) if (!ok) stop("The function requires the R packages infotheo, minet and entropy. Please install these packages first.") } if ( !is.element( discretizeColumns, c(TRUE, FALSE) ) ) stop("The input parameter discretizeColumns contains a value that is not logical. It needs to be set to TRUE or FALSE.") if ( !is.element( entropyEstimationMethod, c("MM", "ML", "shrink", "SG" ) ) ){warning("The entropy estimation method does not correspond to any of the following: MM, ML, shrink, SG. MM will be used."); entropyEstimationMethod="MM"} datE=data.frame(datE) if ( ! (dim(datE)[[2]]>1) ) stop("The number of columns of datE must be larger than 1") entropyOfCountData=function( counts ) { express='entropy::entropy(table(counts) , unit="log", method= entropyEstimationMethod)' eval(parse(text=express)) } if (is.null(numberBins) ) numberBins=sqrt(nrow(datE)) if (!is.null(numberBins) ) numberBins=as.integer(numberBins) if ( !( numberBins>1 ) ) stop("Something is wrong with the input parameter numberBins, which is used for discretizing the quantitative variables. numberBins should be larger than 1. Recommendation: choose the default value numberBins=NULL") if (discretizeColumns) { express = 'infotheo::discretize(datE, disc ="equalwidth", nbins=numberBins)' discretized.datE= eval(parse(text = express)); } if (!
discretizeColumns) { discretized.datE= datE } ENTROPY=as.numeric( apply(discretized.datE, 2, entropyOfCountData )) if (entropyEstimationMethod =="MM") entropyEstimationMethodRenamed="mi.mm" ; if (entropyEstimationMethod =="ML") entropyEstimationMethodRenamed="mi.empirical" ; if (entropyEstimationMethod =="shrink") entropyEstimationMethodRenamed="mi.shrink" ; if (entropyEstimationMethod =="SG") entropyEstimationMethodRenamed="mi.sg" ; express="minet::build.mim(discretized.datE , estimator= entropyEstimationMethodRenamed)" MIxy = eval(parse(text=express)) MIxy[MIxy<0]=0 diag(MIxy)=ENTROPY AdjacencySymmetricUncertainty=2*MIxy/ outer(ENTROPY,ENTROPY, FUN="+") AdjacencyUniversal= AdjacencySymmetricUncertainty/(2- AdjacencySymmetricUncertainty) AdjacencyUniversalVersion2= MIxy/outer(ENTROPY,ENTROPY, FUN="pmax", na.rm=T) #output list(Entropy=ENTROPY, MutualInformation=MIxy, AdjacencySymmetricUncertainty= AdjacencySymmetricUncertainty, AdjacencyUniversalVersion1= AdjacencyUniversal, AdjacencyUniversalVersion2= AdjacencyUniversalVersion2) } # end of function mutualInfoAdjacency WGCNA/R/corAndPvalue.R0000644000176200001440000000376313103416622014027 0ustar liggesusers# Functions to calculate correlation and corresponding p-values. Geared towards cor p-values of large # matrices where varying numbers of missing data make the number of observations vary for each pair of # columns. corAndPvalue = function(x, y = NULL, use = "pairwise.complete.obs", alternative = c("two.sided", "less", "greater"), ...) 
{ ia = match.arg(alternative); cor = cor(x, y, use = use, ...); x = as.matrix(x); finMat = !is.na(x) if (is.null(y)) { np = t(finMat) %*% finMat; } else { y = as.matrix(y); np = t(finMat) %*% (!is.na(y)); } Z = 0.5 * log( (1+cor)/(1-cor) ) * sqrt(np-2); if (ia=="two.sided") { T = sqrt(np - 2) * abs(cor)/sqrt(1 - cor^2) p = 2*pt(T, np - 2, lower.tail = FALSE); } else if (ia=="less") { T = sqrt(np - 2) * cor/sqrt(1 - cor^2) p = pt(T, np - 2, lower.tail = TRUE) } else if (ia=="greater") { T = sqrt(np - 2) * cor/sqrt(1 - cor^2) p = pt(T, np - 2, lower.tail = FALSE) } list(cor = cor, p = p, Z = Z, t = T, nObs = np); } bicorAndPvalue = function(x, y = NULL, use = "pairwise.complete.obs", alternative = c("two.sided", "less", "greater"), ...) { ia = match.arg(alternative); cor = bicor(x, y, use = use, ...); x = as.matrix(x); finMat = !is.na(x) if (is.null(y)) { np = t(finMat) %*% finMat; } else { y = as.matrix(y); np = t(finMat) %*% (!is.na(y)); } Z = 0.5 * log( (1+cor)/(1-cor) ) * sqrt(np-2); if (ia=="two.sided") { T = sqrt(np - 2) * abs(cor)/sqrt(1 - cor^2) p = 2*pt(T, np - 2, lower.tail = FALSE); } else if (ia=="less") { T = sqrt(np - 2) * cor/sqrt(1 - cor^2) p = pt(T, np - 2, lower.tail = TRUE) } else if (ia=="greater") { #Z = 0.5 * log( (1+cor)/(1-cor) ) * sqrt(np-3); T = sqrt(np - 2) * cor/sqrt(1 - cor^2) p = pt(T, np - 2, lower.tail = FALSE) } list(bicor = cor, p = p, Z = Z, t = T, nObs = np); } WGCNA/R/AFcorMI.R0000644000176200001440000000033713103416622012656 0ustar liggesusers#read in prediction fx. It was named version C, now renamed because we'll only use this fx later. AFcorMI=function(r,m) { D=0.43*m^(-0.3) epsilon=D^2.2 out=log(1+epsilon-r^2)/log(epsilon)*(1-D)+D out } WGCNA/R/nearestCentroidPredictor.R0000644000176200001440000004101113103416622016435 0ustar liggesusers# nearest centroid predictor # 09: Add weighted measures of sample-centroid similarity. # Fix CVfold such that leave one out cross-validation works. 
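The two-sided branch of corAndPvalue above converts a Pearson correlation into a Student t statistic, t = |r| * sqrt(n - 2) / sqrt(1 - r^2), and takes the upper tail of the t distribution with n - 2 degrees of freedom twice. A minimal standalone sanity check of that formula against base R's cor.test (hypothetical random data; base R only, no dependence on this package):

```r
# Verify the t-transform p-value used by corAndPvalue against cor.test().
set.seed(1)
x <- rnorm(30)
y <- x + rnorm(30)

r <- cor(x, y)
n <- length(x)

# Two-sided p-value from the t statistic, as in the "two.sided" branch above
t.stat <- sqrt(n - 2) * abs(r) / sqrt(1 - r^2)
p <- 2 * pt(t.stat, df = n - 2, lower.tail = FALSE)

# cor.test() uses the same transform internally; the p-values should agree
# to machine precision.
stopifnot(abs(p - cor.test(x, y)$p.value) < 1e-12)
```

For full matrices, corAndPvalue applies the same transform entry-wise, with np holding the per-pair count of complete observations so that columns with different missing-data patterns get the correct degrees of freedom.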
# 08: change the way sample clustering is called. Use a do.call. # 07: modify bagging and add boosting # - revert the predictor to state where it can only take a single trait and a single number of features. # - make sure the predictor respects nFeatures=0 # -06: # - add new arguments assocCut.hi and assocCut.lo # - make nNetworkFeatures equal nFeatures # = Bagging and boosting is broken. # 05: add bagging # 04: add a self-tuning version # version 03: try to build a sample network predictor. # Cluster the samples in each class separately and identify clusters. It's a bit of a question whether we # can automate the cluster identification completely. Anyway, use the clusters as additional centroids and # as prediction use the class of each centroid. # version 02: add the option to use a quantile of class distances instead of distance from centroid # Inverse distance between columns of x and y # assume that y has few columns and compute the distance matrix between columns of x and columns of y. .euclideanDist.forNCP = function(x, y, use = 'p') { x = as.matrix(x); y = as.matrix(y); ny = ncol(y) diff = matrix(NA, ncol(x), ny); for (cy in 1:ny) diff[, cy] = apply( (x - y[, cy])^2, 2, sum, na.rm = TRUE); -diff; } #===================================================================================================== # # Main predictor function. # #===================================================================================================== # best suited to prediction of factors. # Classification: for each level find nFeatures.eachSide that best distinguish the level from all other # levels. Actually this doesn't make much sense since I will have to put all the distinguishing sets # together to form the profiles, so features that may have no relationship to a level will be added if # there are more than two levels. I can fix that using some roundabout analysis but for now forget it. # To do: check that the CVfold validation split makes sense, i.e.
none of the bins contains all observations # of any class. # Could also add robust standardization # Output a measure of similarity to class centroids # Same thing for the gene voting predictor # prediction for heterogeneous cases: sample network in training cases get modules and eigensamples and # similarly in the controls then use all centroids for classification between k nearest neighbor and # nearest centroid # CAUTION: the function standardizes each gene (unless standardization is turned off), so the sample # networks may be different from what would be expected from the supplied data. # Work internally with just numeric entries. Corresponding levels of the response are saved and restored at
ySaved = y; #if (classify) #{ originalYLevels = sort(unique(y)) ; y = as.numeric(as.factor(y)); #} x = as.matrix(x); doTest = !is.null(xtest) if (doTest) { xtest = as.matrix(xtest); nTestSamples = nrow(xtest); if (ncol(x)!=ncol(xtest)) stop("Number of learning and testing predictors (columns of x, xtest) must equal."); } else { if (weighSimByPrediction > 0) stop("weighting similarity by prediction is not possible when xtest = NULL."); nTestSamples = 0; } numYLevels = sort(unique(y)); minY = min(y); maxY = max(y); nSamples = length(y); nVars = ncol(x); if (!is.null(assocCut.hi)) { if (is.null(assocCut.lo)) assocCut.lo = -assocCut.hi; } spaces = indentSpaces(indent); if (!is.null(useQuantile)) { if ( (useQuantile < 0) | (useQuantile > 1) ) stop("If 'useQuantile' is given, it must be between 0 and 1."); } if (is.null(sampleWeights)) sampleWeights = rep(1, nSamples); # If cross-validation is requested, change the whole flow and use a recursive call. if (CVfold > 0) { if (CVfold > nSamples ) { printFlush("'CVfold' is larger than number of samples. 
Will perform leave-one-out cross-validation."); CVfold = nSamples; } ratio = nSamples/CVfold; if (floor(ratio)!=ratio) { smaller = floor(ratio); nLarger = nSamples - CVfold * smaller binSizes = c(rep(smaller, CVfold-nLarger), rep(smaller +1, nLarger)); } else binSizes = rep(ratio, CVfold); if (!is.null(randomSeed)) { if (exists(".Random.seed")) { saved.seed = .Random.seed; seedSaved = TRUE; } else seedSaved = FALSE; set.seed(randomSeed); } sampleOrder = sample(1:nSamples); CVpredicted = rep(NA, nSamples); CVbin = rep(0, nSamples); if (verbose > 0) { cat(paste(spaces, "Running cross-validation: ")); if (verbose==1) pind = initProgInd() else printFlush(""); } if (!is.null(featureSignificance)) printFlush(paste("Warning in nearestCentroidPredictor: \n", " cross-validation will be biased if featureSignificance was derived", "from training data.")); ind = 1; for (cv in 1:CVfold) { if (verbose > 1) printFlush(paste("..cross validation bin", cv, "of", CVfold)); end = ind + binSizes[cv] - 1; samples = sampleOrder[ind:end]; CVbin[samples] = cv; xCVtrain = x[-samples, , drop = FALSE]; xCVtest = x[samples, , drop = FALSE]; yCVtrain = y[-samples]; yCVtest = y[samples]; CVsampleWeights = sampleWeights[-samples]; pr = nearestCentroidPredictor(xCVtrain, yCVtrain, xCVtest, #classify = classify, featureSignificance = featureSignificance, assocCut.hi = assocCut.hi, assocCut.lo = assocCut.lo, nFeatures.hi = nFeatures.hi, nFeatures.lo = nFeatures.lo, useQuantile = useQuantile, sampleWeights = CVsampleWeights, CVfold = 0, returnFactor = FALSE, randomSeed = randomSeed, centroidMethod = centroidMethod, assocFnc = assocFnc, assocOptions = assocOptions, scaleFeatureMean = scaleFeatureMean, scaleFeatureVar = scaleFeatureVar, simFnc = simFnc, simOptions = simOptions, weighFeaturesByAssociation = weighFeaturesByAssociation, weighSimByPrediction = weighSimByPrediction, verbose = verbose - 2, indent = indent + 1) CVpredicted[samples] = pr$predictedTest; ind = end + 1; if (verbose==1) pind = 
updateProgInd(cv/CVfold, pind); } if (verbose==1) printFlush(""); } if (nrow(x)!=length(y)) stop("Number of observations in x and y must be equal."); # Feature selection: xWeighted = x * sampleWeights; yWeighted = y * sampleWeights; if (is.null(featureSignificance)) { corEval = parse(text = paste(assocFnc, "(xWeighted, yWeighted, ", assocOptions, ")")); featureSignificance = as.vector(eval(corEval)); } else { if (length(featureSignificance)!=nVars) stop("Given 'featureSignificance' has incorrect length (it must equal the number of features)."); } nGood = nVars; nNA = sum(is.na(featureSignificance)); testCentroidSimilarities = list(); xSD = apply(x, 2, sd, na.rm = TRUE); keep = is.finite(featureSignificance) & (xSD>0); nKeep = sum(keep); keepInd = c(1:nVars)[keep]; order = order(featureSignificance[keep]); levels = sort(unique(y)); nLevels = length(levels); if (is.null(assocCut.hi)) { nf = c(nFeatures.hi, nFeatures.lo); if (nf[2] > 0) ind1 = c(1:nf[2]) else ind1 = c(); if (nf[1] > 0) ind2 = c((nKeep-nf[1] + 1):nKeep) else ind2 = c(); indexSelect = unique(c(ind1, ind2)); if (length(indexSelect) < 1) stop("No features were selected. At least one of 'nFeatures.hi', 'nFeatures.lo' must be nonzero."); indexSelect = indexSelect[indexSelect > 0]; select = keepInd[order[indexSelect]]; } else { indexSelect = (1:nKeep)[ featureSignificance[keep] >= assocCut.hi | featureSignificance[keep] <= assocCut.lo ] if (length(indexSelect)<2) stop(paste("'assocCut.hi'", assocCut.hi, "and assocCut.lo", assocCut.lo, "are too stringent, fewer than 2 features were selected.\n", "Please relax the cutoffs.")); select = keepInd[indexSelect]; } if ((length(select) < 3) && (simFnc!='dist')) { stop(paste("Less than 3 features were selected.
Please either relax", "the selection criteria or use simFnc = 'dist'.")); } selectedFeatures = select; nSelect = length(select); xSel = x[, select]; selectSignif = featureSignificance[select]; if (scaleFeatureMean) { if (scaleFeatureVar) { xSD = apply(xSel, 2, sd, na.rm = TRUE); } else xSD = rep(1, nSelect); xMean = apply(xSel, 2, mean, na.rm = TRUE); } else { if (scaleFeatureVar) { xSD = sqrt(apply(xSel^2, 2, sum, na.rm = TRUE)) / pmax(apply(!is.na(xSel), 2, sum) - 1, rep(1, nSelect)); } else xSD = rep(1, nSelect); xMean = rep(0, nSelect); } xSel = scale(xSel, center = scaleFeatureMean, scale = scaleFeatureVar); if (doTest) { xtestSel = xtest[, select]; xtestSel = (xtestSel - matrix(xMean, nTestSamples, nSelect, byrow = TRUE) ) / matrix(xSD, nTestSamples, nSelect, byrow = TRUE); } else xtestSel = NULL; xWeighted = xSel * sampleWeights; if (weighSimByPrediction > 0) { pr = .quickGeneVotingPredictor.CV(xSel, xtestSel, c(1:nSelect)) dCV = sqrt(colMeans( (pr$CVpredicted - xSel)^2, na.rm = TRUE)); dTS = sqrt(colMeans( (pr$predictedTest - xtestSel)^2, na.rm = TRUE)); dTS[dTS==0] = min(dTS[dTS>0]); validationWeight = (dCV/dTS)^weighSimByPrediction; validationWeight[validationWeight > 1] = 1; } else validationWeight = rep(1, nSelect); nTestSamples = if (doTest) nrow(xtest) else 0; predicted = rep(0, nSamples); predictedTest = rep(0, nTestSamples); clusterLabels = list(); clusterNumbers = list(); if ( (centroidMethod=="eigensample") ) { if (sum(is.na(xSel)) > 0) { xImp = t(impute.knn(t(xSel), k = min(10, nSelect - 1))$data); } else xImp = xSel; if (doTest && sum(is.na(xtestSel))>0) { xtestImp = t(impute.knn(t(xtestSel), k = min(10, nSelect - 1))$data); } else xtestImp = xtestSel; } clusterNumbers = rep(1, nLevels); sampleModules = list(); # Trivial cluster labels: clusters equal case classes for (l in 1:nLevels) clusterLabels[[l]] = rep(l, sum(y==levels[l])) nClusters = sum(clusterNumbers); centroidSimilarities = array(NA, dim = c(nSamples, nClusters)); 
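# Illustration (hypothetical toy data, not part of the package source): the
# centroid classification implemented below amounts to averaging the selected
# features within each class and assigning every sample to the class whose
# profile it is most similar to, e.g.
#   xToy = matrix(rnorm(40), 10, 4); yToy = rep(1:2, each = 5)
#   centroids = sapply(split(1:10, yToy),
#                      function(s) colMeans(xToy[s, , drop = FALSE]))
#   predToy = apply(cor(t(xToy), centroids), 1, which.max)  # correlation as similarity
# The package code generalizes this by allowing eigensample centroids,
# quantile-based similarities, and feature/sample weights.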
testCentroidSimilarities = array(NA, dim = c(nTestSamples, nClusters)); #if (classify) #{ cluster2level = rep(c(1:nLevels), clusterNumbers); featureWeight = validationWeight; if (is.null(useQuantile)) { # Form centroid profiles for each cluster and class centroidProfiles = array(0, dim = c(nSelect, nClusters)); for (cl in 1:nClusters) { l = cluster2level[cl]; clusterSamples = c(1:nSamples)[ y==l ] [ clusterLabels[[l]]==cl ]; if (centroidMethod=="mean") { centroidProfiles[, cl] = apply(xSel[clusterSamples, , drop = FALSE], 2, mean, na.rm = TRUE); } else if (centroidMethod=="eigensample") { cp = svd(xSel[clusterSamples,], nu = 0, nv = 1)$v[, 1]; cor = cor(t(xSel[clusterSamples,]), cp); if (sum(cor, na.rm = TRUE) < 0) cp = -cp; centroidProfiles[, cl] = cp; } } if (weighFeaturesByAssociation > 0) featureWeight = featureWeight * sqrt(abs(selectSignif)^weighFeaturesByAssociation); # Back-substitution prediction: wcps = centroidProfiles * featureWeight; wxSel = t(xSel) * featureWeight; distExpr = spaste( simFnc, "( wcps, wxSel, ", simOptions, ")"); sample.centroidSim = eval(parse(text = distExpr)); # Actual prediction: for each sample, calculate distances to centroid profiles if (doTest) { wxtestSel = t(xtestSel) * featureWeight distExpr = spaste( simFnc, "( wcps, wxtestSel, ", simOptions, ")"); testSample.centroidSim = eval(parse(text = distExpr)); } } else { labelVector = y; for (l in 1:nLevels) labelVector[y==l] = clusterLabels[[l]]; keepSamples = labelVector!=0; nKeepSamples = sum(keepSamples); keepLabels = labelVector[keepSamples]; if (weighFeaturesByAssociation > 0) featureWeight = featureWeight * sqrt(abs(selectSignif)^weighFeaturesByAssociation); wxSel = t(xSel) * featureWeight; wxSel.keepSamples = t(xSel[keepSamples, ]) * featureWeight; # Back-substitution prediction: distExpr = spaste( simFnc, "( wxSel.keepSamples, wxSel, ", simOptions, ")"); dst = eval(parse(text = distExpr)); # Test prediction: if (doTest) { wxtestSel = t(xtestSel) * featureWeight distExpr = 
spaste( simFnc, "( wxSel.keepSamples, wxtestSel, ", simOptions, ")"); dst.test = eval(parse(text = distExpr)); } sample.centroidSim = matrix(0, nClusters, nSamples); testSample.centroidSim = matrix(0, nClusters, nTestSamples); for (l in 1:nClusters) { #x = try ( { lSamples = c(1:nKeepSamples)[keepLabels==l]; sample.centroidSim[l, ] = colQuantileC(dst[lSamples, ], 1-useQuantile); if (doTest) testSample.centroidSim[l, ] = colQuantileC(dst.test[lSamples, ], 1-useQuantile); #} ) #if (class(x) == 'try-error') browser(text = "zastavka."); } } centroidSimilarities = t(sample.centroidSim); prediction = cluster2level[apply(sample.centroidSim, 2, which.max)]; # Save predictions predicted = prediction; if (doTest) { testCentroidSimilarities = t(testSample.centroidSim); testprediction = cluster2level[apply(testSample.centroidSim, 2, which.max)]; predictedTest = testprediction; } #} else # stop("Prediction for continuous variables is not implemented yet. Sorry!"); # Reformat output if factors are to be returned if (returnFactor) { predicted.out = factor(originalYLevels[predicted]) if (doTest) predictedTest.out = factor(originalYLevels[predictedTest]); if (CVfold > 0) CVpredicted.out = factor(originalYLevels[CVpredicted]); } else { # Turn ordinal predictions into levels of input traits predicted.out = originalYLevels[predicted]; if (doTest) predictedTest.out = originalYLevels[predictedTest]; if (CVfold > 0) CVpredicted.out = originalYLevels[CVpredicted]; } out = list(predicted = predicted.out, predictedTest = if (doTest) predictedTest.out else NULL, featureSignificance = featureSignificance, selectedFeatures = selectedFeatures, centroidProfiles = if (is.null(useQuantile)) centroidProfiles else NULL, testSample2centroidSimilarities = if (doTest) testCentroidSimilarities else NULL, featureValidationWeights = validationWeight ) if (CVfold > 0) out$CVpredicted = CVpredicted.out; out; } WGCNA/R/zzz.R0000644000176200001440000000466713454432720012313 0ustar liggesusers# first and 
last lib functions .onAttach = function(libname, pkgname) { ourVer = try( gsub("[^0-9_.-]", "", packageVersion("WGCNA"), fixed = FALSE) ); if (inherits(ourVer, "try-error")) ourVer = ""; # printFlush("==========================================================================\n*"); # printFlush(paste("* Package WGCNA", ourVer, "loaded.\n*")) # # if (.useNThreads()==1 && .nProcessorsOnline() > 1) # { # printFlush(spaste( # "* Important note: It appears that your system supports multi-threading,\n", # "* but it is not enabled within WGCNA in R. \n", # "* To allow multi-threading within WGCNA with all available cores, use \n", # "*\n", # "* allowWGCNAThreads()\n", # "*\n", # "* within R. Use disableWGCNAThreads() to disable threading if necessary.\n", # "* Alternatively, set the following environment variable on your system:\n", # "*\n", # "* ", .threadAllowVar, "=\n", # "*\n", # "* for example \n", # "*\n", # "* ", .threadAllowVar, "=", .nProcessorsOnline(), "\n", # "*\n", # "* To set the environment variable in linux bash shell, type \n", # "*\n", # "* export ", .threadAllowVar, "=", .nProcessorsOnline(), # "\n*", # "\n* before running R. Other operating systems or shells will", # "\n* have a similar command to achieve the same aim.\n*")); # } # printFlush("==========================================================================\n\n"); imputeVer = try( gsub("[^0-9_.-]", "", packageVersion("impute"), fixed = FALSE) ); if (!inherits(imputeVer, "try-error")) { if (compareVersion(imputeVer, "1.12")< 0) { printFlush(paste("*!*!*!*!*!*!* Caution: installed package 'impute' is too old.\n", "Old versions of this package can occasionally crash the code or the entire R session.\n", "If you already have the newest version available from CRAN, \n", "and you still see this warning, please download the impute package \n", "from Bioconductor at \n", "http://www.bioconductor.org/packages/release/bioc/html/impute.html . 
\n", "If the above link is dead, search for package 'impute' \n", "in the Downloads -> Software section of http://www.bioconductor.org .\n")); } } } WGCNA/R/stratifiedBarplot.R0000644000176200001440000001011713103416622015115 0ustar liggesusers# This function barplots data across two splitting parameters stratifiedBarplot = function (expAll, groups, split, subset, genes=NA, scale="N", graph=TRUE, las1=2, cex1=1.5, ...){ ## Code to take care of array formatting expAll = t(expAll) if (length(subset)>1){ subsetName = subset[1]; subset = subset[2:length(subset)] } else { subsetName=subset } groupNames = as.character(names(.tableOrd(groups))) splitNames = as.character(names(.tableOrd(split))) if (is.na(genes)[1]) genes = rownames(expAll) keep = !is.na(groups) groups = groups[keep] split = split[keep] expAll = expAll[,keep] scale = substr(scale,1,1) ## Collect and scale the expression data expSubset = expAll[is.element(genes,subset),] if(length(subset)>1){ if(scale=="A") expSubset = t(apply(expSubset,1,function(x) return(x/mean(x)))) if(scale=="Z") expSubset = t(apply(expSubset,1,function(x) return((x-mean(x))/sd(x)))) if(scale=="H") { AdjMat = adjacency(t(expSubset),type="signed",power=2) diag(AdjMat) = 0 Degree = rowSums(AdjMat) keep = which(Degree == max(Degree)) expSubset = expSubset[keep,] } if(scale=="M") { me = moduleEigengenes(as.matrix(t(expSubset)), rep("blue",dim(expSubset)[1])) expSubset = me$eigengenes$MEblue } expSubset = rbind(expSubset,expSubset) expSubset = apply(expSubset,2,mean) } ## Now average the data and output it in a meaningful way exp <- std <- matrix(0,ncol=length(splitNames),nrow=length(groupNames)) splitPvals = rep(1,length(splitNames)) names(splitPvals) = splitNames groupPvals = rep(1,length(groupNames)) names(groupPvals) = groupNames for (c in 1:length(splitNames)){ expTmp = expSubset[split==splitNames[c]] grpTmp = groups[split==splitNames[c]] splitPvals[c] = kruskal.test(expTmp,as.factor(grpTmp))$p.value for (r in 1:length(groupNames)){ 
exp[r,c] = mean(expSubset[(groups==groupNames[r])&(split==splitNames[c])]) std[r,c] = sd(expSubset[(groups==groupNames[r])&(split==splitNames[c])]) if(c==1){ expTmp = expSubset[groups==groupNames[r]] splTmp = split[groups==groupNames[r]] groupPvals[r] = kruskal.test(expTmp,as.factor(splTmp))$p.value } } } colnames(exp) <- colnames(std) <- splitNames rownames(exp) <- rownames(std) <- groupNames ## Now plot the results, if requested if(graph){ ylim = c(min(0,min(min(exp-std))),max(max(exp+std))*(1+0.08*length(groupNames))) barplot(exp, beside=TRUE, legend.text=TRUE, main=subsetName, las=las1, ylim=ylim, cex.axis=cex1, cex.names=cex1, ...) .err.bp(exp,std,TRUE) } # Now collect the output and return it. out = list(splitGroupMeans = exp, splitGroupSDs = std, splitPvals = splitPvals, groupPvals = groupPvals) return(out) } # -------------------------------- .err.bp<-function(daten,error,two.side=F){ ## This function was written by Steve Horvath # The function err.bp is used to create error bars in a barplot # usage: err.bp(as.vector(means), as.vector(stderrs), two.side=F) if(!is.numeric(daten)) { stop("All arguments must be numeric")} if(is.vector(daten)){ xval<-(cumsum(c(0.7,rep(1.2,length(daten)-1)))) }else{ if (is.matrix(daten)){ xval<-cumsum(array(c(1,rep(0,dim(daten)[1]-1)), dim=c(1,length(daten))))+0:(length(daten)-1)+.5 }else{ stop("First argument must either be a vector or a matrix") } } MW<-0.25*(max(xval)/length(xval)) ERR1<-daten+error ERR2<-daten-error for(i in 1:length(daten)){ segments(xval[i],daten[i],xval[i],ERR1[i]) segments(xval[i]-MW,ERR1[i],xval[i]+MW,ERR1[i]) if(two.side){ segments(xval[i],daten[i],xval[i],ERR2[i]) segments(xval[i]-MW,ERR2[i],xval[i]+MW,ERR2[i]) } } } .tableOrd = function (input){ ## This is the same as the "table" function but retains the order ## This internal function collects the order tableOrd2 = function(input, output=NULL){ input = input[!is.na(input)] if (length(input)==0) return (output) outTmp = input[1] output = c(output, 
outTmp) input = input[input!=outTmp] output = tableOrd2(input, output) return(output) } ## Get the results tableOut = table(input) tableOrder = tableOrd2(input) return(tableOut[tableOrder]) } WGCNA/R/coxRegressionResiduals.R0000644000176200001440000000230513103416622016141 0ustar liggesusers## The function is currently defined as coxRegressionResiduals = function(time,event,datCovariates=NULL) { if (eval(parse(text= '!require("survival")'))) stop("This function requires package survival. Please install it first."); if ( length(time) != length(event) ) { stop("Error: The length of the vector event is unequal to the length of the time vector. In R language: length(time) != length(event)") } if ( is.null(datCovariates) ){ coxmodel=eval(parse(text = "survival:::coxph(Surv(time, event) ~ 1 , na.action = na.exclude)")); } if ( !is.null(datCovariates) ){ if ( dim(as.matrix(datCovariates))[[1]] !=length(event) ) stop("Error: the number of rows of the input matrix datCovariates is unequal to the number of observations specified in the vector event. In R language: dim(as.matrix(datCovariates))[[1]] !=length(event)") coxmodel=eval(parse( text = paste("survival:::coxph(Surv(time, event) ~ . , data=datCovariates,", "na.action = na.exclude, model = TRUE)"))); } # end of if datResiduals=data.frame(martingale=residuals(coxmodel,type="martingale"), deviance=residuals(coxmodel,type="deviance")) datResiduals } # end of function WGCNA/R/internalConstants.R0000644000176200001440000000164513571361236015163 0ustar liggesusers# Definitions of internal constants. 
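# Illustrative note (not part of the original source): the parallel constant
# vectors defined below are indexed by the position of the requested
# correlation type, so that calling code can assemble a call text such as
#   i = match("bicor", .corTypes)
#   exprText = paste(.corFnc[i], "(x, y, ", .corOptions[i], ")")
# and then evaluate it via eval(parse(text = exprText)), as done elsewhere in
# this package.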
.pearsonFallbacks = c("none", "individual", "all"); .threadAllowVar = "ALLOW_WGCNA_THREADS" .zeroMADWarnings = c("Some results will be NA.", "Pearson correlation was used for individual columns with zero (or missing) MAD.", "Pearson correlation was used for entire variable."); ..minNGenes = 4; ..minNSamples = 4; .largestBlockSize = 1e8; .networkTypes = c("unsigned", "signed", "signed hybrid"); .adjacencyTypes = c(.networkTypes, "distance"); .TOMTypes = c("none", "unsigned", "signed", "signed Nowick", "unsigned 2", "signed 2", "signed Nowick 2"); .TOMDenoms = c("min", "mean"); .corTypes = c("pearson", "bicor"); .corFnc = c("cor", "bicor", "cor"); .corOptions = c("use = 'p'", "use = 'p'", "use = 'p', method = 'spearman'"); .corOptionList = list( list(use = 'p'), list(use = 'p'), list(use = 'p', method = "spearman")); WGCNA/R/Functions-fromSimilarity.R0000644000176200001440000000566714230552654016433 0ustar liggesusers# Functions to perform WGCNA from similarity input. matrixToNetwork = function(mat, symmetrizeMethod = c("average", "min", "max"), signed = TRUE, min = NULL, max = NULL, power = 12, diagEntry = 1) { sm = pmatch(symmetrizeMethod[1], c("average", "min", "max")); if (is.na(sm)) stop("Unrecognized or non-unique 'symmetrizeMethod'."); mat = as.matrix(mat); nd = 0 x = try({nd = length(dim(mat))}); if ( inherits(x, 'try-error') | (nd!=2) ) stop("'mat' appears to have incorrect type; must be a 2-dimensional square matrix."); if (ncol(mat)!=nrow(mat)) stop("'mat' must be a square matrix."); if (!signed) mat = abs(mat); if (sm==1) { mat = (mat + t(mat))/2; } else if (sm==2) { mat = pmin(mat, t(mat), na.rm = TRUE); } else mat = pmax(mat, t(mat), na.rm = TRUE); if (is.null(min)) { min = min(mat, na.rm = TRUE); } else mat[mat < min] = min; if (is.null(max)) { max = max(mat, na.rm = TRUE); } else mat[mat > max] = max; adj = ( (mat-min)/(max-min) )^power; diag(adj) = diagEntry adj; } checkSimilarity = function(similarity, min=-1, max=1) { checkAdjMat(similarity, min, max); } adjacency.fromSimilarity = 
function(similarity, type = "unsigned", power = if (type=="distance") 1 else 6) { checkSimilarity(similarity); adjacency(similarity, type = type, power = power, corFnc = "I", corOptions="", distFnc = "I", distOptions = ""); } softConnectivity.fromSimilarity=function(similarity, type = "unsigned", power = if (type == "signed") 15 else 6, blockSize = 1500, verbose = 2, indent = 0) { checkSimilarity(similarity) softConnectivity(similarity, corFnc = "I", corOptions = "", type = type, power = power, blockSize = blockSize, verbose = verbose, indent = indent) } pickHardThreshold.fromSimilarity=function (similarity, RsquaredCut = 0.85, cutVector = seq(0.1, 0.9, by = 0.05), moreNetworkConcepts=FALSE , removeFirst = FALSE, nBreaks = 10) { checkSimilarity(similarity) pickHardThreshold(similarity, dataIsExpr = FALSE, RsquaredCut = RsquaredCut, cutVector = cutVector, moreNetworkConcepts = moreNetworkConcepts, removeFirst = removeFirst, nBreaks = nBreaks, corFnc = "I", corOptions = ""); } pickSoftThreshold.fromSimilarity = function (similarity, RsquaredCut = 0.85, powerVector = c(seq(1, 10, by = 1), seq(12, 20, by = 2)), removeFirst = FALSE, nBreaks = 10, blockSize = 1000, moreNetworkConcepts=FALSE, verbose = 0, indent = 0) { checkSimilarity(similarity) pickSoftThreshold(similarity, dataIsExpr = FALSE, RsquaredCut = RsquaredCut, powerVector = powerVector, removeFirst = removeFirst, nBreaks = nBreaks, blockSize = blockSize, networkType = "signed", moreNetworkConcepts = moreNetworkConcepts, verbose = verbose, indent = indent); } WGCNA/R/GOenrichmentAnalysis.R0000644000176200001440000005130513344057441015534 0ustar liggesusers# Function that returns GO terms with best enrichment and highest number of genes. 
# So that I don't forget: information about GO categories is contained in the GO.db package GOenrichmentAnalysis = function(labels, entrezCodes, yeastORFs = NULL, organism = "human", ontologies = c("BP", "CC", "MF"), evidence = "all", includeOffspring = TRUE, backgroundType = "givenInGO", removeDuplicates = TRUE, leaveOutLabel = NULL, nBestP = 10, pCut = NULL, nBiggest = 0, getTermDetails = TRUE, verbose = 2, indent = 0 ) { warning(immediate. = TRUE, spaste("This function is deprecated and will be removed in the near future. \n", "We suggest using the replacement function enrichmentAnalysis \n", "in R package anRichment, available from the following URL:\n", "https://labs.genetics.ucla.edu/horvath/htdocs/CoexpressionNetwork/GeneAnnotation/")); sAF = options("stringsAsFactors") options(stringsAsFactors = FALSE); on.exit(options(stringsAsFactors = sAF[[1]]), TRUE) organisms = c("human", "mouse", "rat", "malaria", "yeast", "fly", "bovine", "worm", "canine", "zebrafish", "chicken"); allEvidence = c("EXP", "IDA", "IPI", "IMP", "IGI", "IEP", "ISS", "ISO", "ISA", "ISM", "IGC", "IBA", "IBD", "IKR", "IRD", "RCA", "TAS", "NAS", "IC", "ND", "IEA", "NR"); allOntologies = c("BP", "CC", "MF"); backgroundTypes = c("allGiven", "allInGO", "givenInGO"); spaces = indentSpaces(indent); orgInd = pmatch(organism, organisms); if (is.na(orgInd)) stop(paste("Unrecognized 'organism' given. Recognized values are ", paste(organisms, collapse = ", "))); if (length(evidence)==0) stop("At least one valid evidence code must be given in 'evidence'."); if (length(ontologies)==0) stop("At least one valid ontology code must be given in 'ontology'."); if (evidence=="all") evidence = allEvidence; evidInd = pmatch(evidence, allEvidence); if (sum(is.na(evidInd))!=0) stop(paste("Unrecognized 'evidence' given. Recognized values are ", paste(allEvidence, collapse = ", "))); ontoInd = pmatch(ontologies, allOntologies); if (sum(is.na(ontoInd))!=0) stop(paste("Unrecognized 'ontologies' given. 
Recognized values are ", paste(allOntologies, collapse = ", "))); backT = pmatch(backgroundType, backgroundTypes); if (is.na(backT)) stop(paste("Unrecognized 'backgroundType' given. Recognized values are ", paste(backgroundTypes, collapse = ", "))); orgCodes = c("Hs", "Mm", "Rn", "Pf", "Sc", "Dm", "Bt", "Ce", "Cf", "Dr", "Gg"); orgExtensions = c(rep(".eg", 4), ".sgd", rep(".eg", 6)); reverseMap = c(rep(".egGO2EG", 4), ".sgdGO2ORF", rep(".egGO2EG", 6)) missingPacks = NULL; packageName = paste("org.", orgCodes[orgInd], orgExtensions[orgInd], ".db", sep=""); if (!eval(parse(text = "require(packageName, character.only = TRUE)"))) missingPacks = c(missingPacks, packageName); if (!eval(parse(text="require(GO.db)"))) missingPacks = c(missingPacks, "GO.db"); if (!is.null(missingPacks)) stop(paste("Could not load the requisite package(s)", paste(missingPacks, collapse = ", "), ". Please install the package(s).")) if (verbose > 0) { printFlush(paste(spaces, "GOenrichmentAnalysis: loading annotation data...")); } if (orgInd==5) { # Yeast needs special care. 
if (!is.null(yeastORFs)) { entrezCodes = yeastORFs } else { # Map the entrez IDs to yeast ORFs x = eval(parse(text = "org.Sc.sgd::org.Sc.sgdENTREZID")) # x = org.Sc.sgd::org.Sc.sgdENTREZID xx = as.list(x[mapped_genes]) allORFs = names(xx); mappedECs = as.character(sapply(xx, as.character)) entrez2orf = match(entrezCodes, mappedECs); fin = is.finite(entrez2orf); newCodes = paste("InvalidCode", c(1:length(entrezCodes)), sep = "."); newCodes[fin] = allORFs[entrez2orf[fin]]; entrezCodes = newCodes; } } labels = as.matrix(labels); nSets = ncol(labels); nGivenRaw = nrow(labels); if (removeDuplicates) { # Restrict given entrezCodes such that each code is unique ECtab = table(entrezCodes); uniqueEC = names(ECtab); keepEC = match(uniqueEC, entrezCodes); entrezCodes = entrezCodes[keepEC] labels = labels[keepEC, , drop = FALSE]; } else keepEC = c(1:nGivenRaw); egGO = eval(parse(text = paste(packageName, "::org.", orgCodes[orgInd], orgExtensions[orgInd], "GO", sep = ""))); if (orgInd==5) { mapped_genes = as.character(do.call(match.fun("mappedkeys"), list(egGO))); encodes2mapped = match(as.character(entrezCodes), mapped_genes); } else { mapped_genes = as.numeric(as.character(do.call(match.fun("mappedkeys"), list(egGO)))); encodes2mapped = match(as.numeric(entrezCodes), mapped_genes); } encMapped = is.finite(encodes2mapped); nAllIDsInGO = sum(encMapped); mapECodes = entrezCodes[encMapped]; mapLabels = labels[encMapped, , drop = FALSE]; nMappedGenes = nrow(mapLabels); if (nMappedGenes==0) stop(paste("None of the supplied gene identifiers map to the GO database.\n", "Please make sure you have specified the correct organism (default is human).")) Go2eg = eval(parse(text = paste("AnnotationDbi::as.list(", packageName, "::org.", orgCodes[orgInd], reverseMap[orgInd],")", sep = ""))); nTerms = length(Go2eg); goInfo = as.list(GO.db::GOTERM); if (length(goInfo) > 0) { orgGoNames = names(Go2eg); dbGoNames = as.character(sapply(goInfo, GOID)); dbGoOntologies = as.character(sapply(goInfo, 
Ontology)); } else { dbGoNames = ""; } goOffSpr = list(); if (includeOffspring) { goOffSpr[[1]] = as.list(GOBPOFFSPRING); goOffSpr[[2]] = as.list(GOCCOFFSPRING); goOffSpr[[3]] = as.list(GOMFOFFSPRING); } term2info = match(names(Go2eg), names(goInfo)); termOntologies = dbGoOntologies[term2info]; if (backT==1) { nBackgroundGenes = sum(!is.na(entrezCodes)); } else { if (backT==2) { nBackgroundGenes = length(mapped_genes) } else nBackgroundGenes = nMappedGenes; } termCodes = vector(mode="list", length = nTerms); collectGarbage(); nExpandLength = 0; blockSize = 3000; # For a more efficient concatenating of offspring genes nAllInTerm = rep(0, nTerms); if (verbose > 0) { printFlush(paste(spaces, " ..of the", length(entrezCodes), " Entrez identifiers submitted,", sum(encMapped), "are mapped in current GO categories.")); printFlush(paste(spaces, " ..will use", nBackgroundGenes, "background genes for enrichment calculations.")); cat(paste(spaces, " ..preparing term lists (this may take a while)..")); pind = initProgInd(); } for (c in 1:nTerms) if (!is.na(Go2eg[[c]][[1]])) { te = as.character(names(Go2eg[[c]])); # Term evidence codes tc = Go2eg[[c]]; if (includeOffspring) { termOffspring = NULL; for (ont in 1:length(goOffSpr)) { term2off = match(names(Go2eg)[c], names(goOffSpr[[ont]])) if (!is.na(term2off)) termOffspring = c(termOffspring, goOffSpr[[ont]][[term2off]]); } if (length(termOffspring)>0) { maxLen = blockSize; tex = rep("", maxLen); tcx = rep("", maxLen); ind = 1; len = length(te); tex[ ind:len ] = te; tcx[ ind:len ] = tc; ind = len + 1; o2go = match(termOffspring, as.character(names(Go2eg))); o2go = o2go[is.finite(o2go)] if (length(o2go)>0) for (o in 1:length(o2go)) if (!is.na(Go2eg[[o2go[o]]][[1]])) { #printFlush(paste("Have offspring for term", c, ": ", names(Go2eg)[c], # Term(goInfo[[term2info[c]]]))); newc = Go2eg[[o2go[o]]]; newe = names(newc); newl = length(newe); if ((len + newl) > maxLen) { nExpand = ceiling( (len + newl - maxLen)/blockSize); maxLen = 
maxLen + blockSize * nExpand; tex = c(tex, rep("", maxLen - length(tex))); tcx = c(tcx, rep("", maxLen - length(tex))); nExpandLength = nExpandLength + 1; } tex[ind:(len + newl)] = newe; tcx[ind:(len + newl)] = newc; ind = ind + newl; len = len + newl; } te = tex[1:len]; tc = tcx[1:len]; } } use = is.finite(match(te, evidence)); if (orgInd==5) { if (backT==2) { termCodes[[c]] = unique(as.character(tc[use])); } else termCodes[[c]] = as.character(intersect(tc[use], mapECodes)); } else { if (backT==2) { termCodes[[c]] = unique(as.character(tc[use])); } else termCodes[[c]] = as.numeric(as.character(intersect(tc[use], mapECodes))); } nAllInTerm[c] = length(termCodes[[c]]); if ( (c %%50 ==0) & (verbose > 0)) pind = updateProgInd(c/nTerms, pind); } if (verbose > 0) { pind = updateProgInd(1, pind); printFlush(""); } if ((verbose > 5) & (includeOffspring)) printFlush(paste(spaces, " ..diagnostic for the developer: offspring buffer was expanded", nExpandLength, "times.")); ftp = function(...) { fisher.test(...)$p.value } setResults = list(); for (set in 1:nSets) { if (verbose > 0) printFlush(paste(spaces, " ..working on label set", set, "..")); labelLevels = levels(factor(labels[, set])); if (!is.null(leaveOutLabel)) { keep = !(labelLevels %in% as.character(leaveOutLabel)); if (sum(keep)==0) stop("No labels were kept after removing labels that are supposed to be ignored."); labelLevels = labelLevels[keep] } nLabelLevels = length(labelLevels); modCodes = list(); nModCodes = rep(0, nLabelLevels); if (backT==1) { for (ll in 1:nLabelLevels) { modCodes[[ll]] = entrezCodes[labels[, set]==labelLevels[ll]]; nModCodes[ll] = length(modCodes[[ll]]); } } else { for (ll in 1:nLabelLevels) { modCodes[[ll]] = mapECodes[mapLabels[, set]==labelLevels[ll]]; nModCodes[ll] = length(modCodes[[ll]]); } } countsInTerm = matrix(0, nLabelLevels, nTerms); enrichment = matrix(1, nLabelLevels, nTerms); for (ll in 1:nLabelLevels) countsInTerm[ll, ] = sapply(lapply(termCodes, intersect, modCodes[[ll]]), 
length) nAllInTermMat = matrix(nAllInTerm, nLabelLevels, nTerms, byrow = TRUE); nModCodesMat = matrix(nModCodes, nLabelLevels, nTerms); tabArr = array(c(countsInTerm, nAllInTermMat - countsInTerm, nModCodesMat - countsInTerm, nBackgroundGenes - nModCodesMat - nAllInTermMat + countsInTerm), dim = c(nLabelLevels * nTerms, 2, 2)); if (verbose > 0) printFlush(paste(spaces, " ..calculating enrichments (this may also take a while)..")); calculate = c(countsInTerm) > 0; enrichment[calculate] = apply(tabArr[calculate, , ], 1, ftp, alternative = "g"); dimnames(enrichment) = list (labelLevels, names(Go2eg)); dimnames(countsInTerm) = list (labelLevels, names(Go2eg)); bestPTerms = list(); modSizes = table(labels[ !(labels[, set] %in% leaveOutLabel), set]); if (!is.null(pCut) || nBestP > 0) { printFlush(paste(spaces, " ..putting together terms with highest enrichment significance..")); nConsideredOntologies = length(ontologies)+1; for (ont in 1:nConsideredOntologies) { if (ont==nConsideredOntologies) { ontTerms = is.finite(match(termOntologies, ontologies)) bestPTerms[[ont]] = list(ontology = ontologies); names(bestPTerms)[ont] = paste(ontologies, collapse = ", "); } else { ontTerms = termOntologies==ontologies[ont]; bestPTerms[[ont]] = list(ontology = ontologies[ont]); names(bestPTerms)[ont] = ontologies[ont]; } bestPTerms[[ont]]$enrichment = NULL; bestPTerms[[ont]]$termInfo = list(); nOntTerms = sum(ontTerms) ontEnr = enrichment[, ontTerms, drop = FALSE]; order = apply(ontEnr, 1, order); for (ll in 1:nLabelLevels) { if (!is.null(pCut)) { reportTerms = c(1:nTerms)[ontTerms][ontEnr[ll, ] < pCut]; reportTerms = reportTerms[order(ontEnr[ll, ][reportTerms])]; } else reportTerms = c(1:nTerms)[ontTerms][order[1:nBestP, ll]]; nRepTerms = length(reportTerms); enrTab = data.frame(module = rep(labelLevels[ll], nRepTerms), modSize = rep(modSizes[ll], nRepTerms), bkgrModSize = rep(nModCodes[ll], nRepTerms), rank = seq(length.out = nRepTerms), enrichmentP = enrichment[ll, reportTerms], 
BonferoniP = pmin(rep(1, nRepTerms), enrichment[ll, reportTerms] * nOntTerms), nModGenesInTerm = countsInTerm[ll, reportTerms], fracOfBkgrModSize = countsInTerm[ll, reportTerms]/nModCodes[ll], fracOfBkgrTermSize = countsInTerm[ll, reportTerms]/nAllInTerm[reportTerms], bkgrTermSize = nAllInTerm[reportTerms], termID = names(Go2eg)[reportTerms], termOntology = rep("NA", nRepTerms), termName = rep("NA", nRepTerms), termDefinition = rep("NA", nRepTerms)) bestPTerms[[ont]]$forModule[[ll]] = list(); for (rci in seq(length.out = nRepTerms)) { term = reportTerms[rci]; termID = names(Go2eg)[term]; dbind = match(termID, dbGoNames); if (is.finite(dbind)) { enrTab$termName[rci] = eval(parse(text = "AnnotationDbi::Term(goInfo[[dbind]])")); enrTab$termDefinition[rci] = eval(parse(text = "AnnotationDbi::Definition(goInfo[[dbind]])")); enrTab$termOntology[rci] = eval(parse(text = "AnnotationDbi::Ontology(goInfo[[dbind]])")); } if (getTermDetails) { geneCodes = intersect(modCodes[[ll]], termCodes[[term]]) bestPTerms[[ont]]$forModule[[ll]][[rci]] = list(termID = termID, termName = enrTab$termName[rci], enrichmentP = enrTab$enrichmentP[rci], termDefinition = enrTab$termDefinition[rci], termOntology = enrTab$termOntology[rci], geneCodes = geneCodes, genePositions = keepEC[match(geneCodes, entrezCodes)]); } } if (ll==1) { bestPTerms[[ont]]$enrichment = enrTab; } else bestPTerms[[ont]]$enrichment = rbind(bestPTerms[[ont]]$enrichment, enrTab); } } } biggestTerms = list(); if (nBiggest > 0) { printFlush(paste(spaces, " ..putting together terms with largest number of genes in modules..")); nConsideredOntologies = length(ontologies)+1; for (ont in 1:nConsideredOntologies) { if (ont==nConsideredOntologies) { ontTerms = is.finite(match(termOntologies, ontologies)) biggestTerms[[ont]] = list(ontology = ontologies); names(biggestTerms)[ont] = paste(ontologies, collapse = ", "); } else { ontTerms = termOntologies==ontologies[ont]; biggestTerms[[ont]] = list(ontology = ontologies[ont]); 
names(biggestTerms)[ont] = ontologies[ont]; } biggestTerms[[ont]]$enrichment = NULL; biggestTerms[[ont]]$termInfo = list(); nOntTerms = sum(ontTerms) ontNGenes = countsInTerm[, ontTerms, drop = FALSE]; order = apply(-ontNGenes, 1, order); for (ll in 1:nLabelLevels) { reportTerms = c(1:nTerms)[ontTerms][order[1:nBiggest, ll]]; nRepTerms = length(reportTerms); enrTab = data.frame(module = rep(labelLevels[ll], nRepTerms), modSize = rep(modSizes[ll], nRepTerms), bkgrModSize = rep(nModCodes[ll], nRepTerms), rank = seq(length.out = nRepTerms), enrichmentP = enrichment[ll, reportTerms], BonferoniP = pmin(rep(1, nRepTerms), enrichment[ll, reportTerms] * nOntTerms), nModGenesInTerm = countsInTerm[ll, reportTerms], fracOfModSize = countsInTerm[ll, reportTerms]/nModCodes[ll], fracOfBkgrTermSize = countsInTerm[ll, reportTerms]/nAllInTerm[reportTerms], bkgrTermSize = nAllInTerm[reportTerms], termID = names(Go2eg)[reportTerms], termOntology = rep("NA", nRepTerms), termName = rep("NA", nRepTerms), termDefinition = rep("NA", nRepTerms)) biggestTerms[[ont]]$forModule[[ll]] = list(); for (rci in seq(length.out = nRepTerms)) { term = reportTerms[rci]; termID = names(Go2eg)[term]; dbind = match(termID, dbGoNames); if (is.finite(dbind)) { enrTab$termName[rci] = eval(parse(text="AnnotationDbi::Term(goInfo[[dbind]])")); enrTab$termDefinition[rci] = eval(parse(text="AnnotationDbi::Definition(goInfo[[dbind]])")); enrTab$termOntology[rci] = eval(parse(text="AnnotationDbi::Ontology(goInfo[[dbind]])")); } if (getTermDetails) { geneCodes = intersect(modCodes[[ll]], termCodes[[term]]) biggestTerms[[ont]]$forModule[[ll]][[rci]] = list(termID = termID, termName = enrTab$termName[rci], enrichmentP = enrTab$enrichmentP[rci], termDefinition = enrTab$termDefinition[rci], termOntology = enrTab$termOntology[rci], geneCodes = geneCodes, genePositions = keepEC[match(geneCodes, entrezCodes)]); } } if (ll==1) { biggestTerms[[ont]]$enrichment = enrTab; } else biggestTerms[[ont]]$enrichment = 
          rbind(biggestTerms[[ont]]$enrichment, enrTab);
      }
    }
  }
  setResults[[set]] = list(countsInTerm = countsInTerm,
                           enrichmentP = enrichment,
                           bestPTerms = bestPTerms,
                           biggestTerms = biggestTerms);
}

inGO = rep(FALSE, nGivenRaw);
inGO[keepEC] = encMapped;
kept = rep(FALSE, nGivenRaw);
kept[keepEC] = TRUE;

if (nSets==1)
{
  list(keptForAnalysis = kept,
       inGO = inGO,
       countsInTerms = setResults[[1]]$countsInTerm,
       enrichmentP = setResults[[1]]$enrichmentP,
       bestPTerms = setResults[[1]]$bestPTerms,
       biggestTerms = setResults[[1]]$biggestTerms);
} else {
  list(keptForAnalysis = kept,
       inGO = inGO,
       setResults = setResults);
}
}

#-----------------------------------------------------------------------------------------------------
# File: WGCNA/R/votingLinearPredictor.R
#-----------------------------------------------------------------------------------------------------

.devianceResidual = function(y)
{
  event = y[, ncol(y)];
  fit = summary(coxph(y~1, na.action = na.exclude))
  CumHazard = predict(fit, type = "expected")
  martingale1 = event - CumHazard
  deviance0 = ifelse(event == 0, 2 * CumHazard,
                     -2 * log(CumHazard) + 2 * CumHazard - 2)
  sign(martingale1) * sqrt(deviance0);
}

.dropThirdDim = function(x)
{
  d = dim(x);
  dim(x) = c(d[1], d[2]*d[3]);
  x;
}

#-----------------------------------------------------------------------------------------------------
#
# Voting linear predictor: given a set of vectors y and a set of vectors x,
# the predictor is given by the correlations of y with x[,i]
#
#-----------------------------------------------------------------------------------------------------

votingLinearPredictor = function(x, y, xtest = NULL,
                                 classify = FALSE,
                                 CVfold = 0,
                                 randomSeed = 12345,
                                 assocFnc = "cor", assocOptions = "use = 'p'",
                                 featureWeightPowers = NULL, priorWeights = NULL,
                                 weighByPrediction = 0,
                                 nFeatures.hi = NULL, nFeatures.lo = NULL,
                                 dropUnusedDimensions = TRUE,
                                 verbose = 2, indent = 0)
{
  # Special handling of a survival response
  if (is.Surv(y))
  {
    pr = votingLinearPredictor(x, .devianceResidual(y), xtest, classify = FALSE,
                               CVfold = CVfold, randomSeed = randomSeed,
                               assocFnc = assocFnc, assocOptions = assocOptions,
featureWeightPowers = featureWeightPowers, priorWeights = priorWeights, nFeatures.hi = nFeatures.hi, nFeatures.lo = nFeatures.lo, dropUnusedDimensions = dropUnusedDimensions, verbose = verbose, indent = indent); # may possibly need completion here. return(pr) } # Standard code for a numeric response ySaved = y; if (classify) { # Convert factors to numeric variables. if (is.null(dim(y))) { ySaved = list(y); if (classify) { originalYLevels = list( sort(unique(y)) ); y = as.factor(y); } y = as.matrix(as.numeric(y)); } else { ySaved = y; if (classify) { y = as.data.frame(lapply(as.data.frame(y), as.factor)); originalYLevels = lapply(as.data.frame(apply(ySaved, 2, unique)), sort); } else y = as.data.frame(y); y = as.matrix(sapply(lapply(y, as.numeric), I)); } numYLevels = lapply(as.data.frame(apply(y, 2, unique)), sort); minY = apply(y, 2, min); maxY = apply(y, 2, max); } else y = as.matrix(y); doTest = !is.null(xtest) x = as.matrix(x); nSamples = nrow(y); nTraits = ncol(y); nVars = ncol(x); if (is.null(featureWeightPowers)) featureWeightPowers = 0; nPredWPowers = length(featureWeightPowers); if (is.null(rownames(x))) { sampleNames = spaste("Sample.", c(1:nSamples)); } else sampleNames = rownames(x); if (doTest) { xtest = as.matrix(xtest); nTestSamples = nrow(xtest); if (is.null(rownames(xtest))) { testSampleNames = spaste("testSample.", c(1:nTestSamples)); } else testSampleNames = rownames(xtest); if (ncol(x)!=ncol(xtest)) stop("Number of learning and testing predictors (columns of x, xtest) must equal."); } if (is.null(colnames(y))) { traitNames = spaste("Response.", c(1:nTraits)); } else traitNames = colnames(y); if (is.null(colnames(x))) { featureNames = spaste("Feature.", c(1:nVars)); } else featureNames = colnames(x); powerNames = spaste("Power.", featureWeightPowers); spaces = indentSpaces(indent); # If cross-validation is requested, change the whole flow and use a recursive call. 
if (CVfold > 0) { if (CVfold > nSamples ) { printFlush("CVfold is larger than nSamples. Will perform leave-one-out cross-validation."); CVfold = nSamples; } ratio = nSamples/CVfold; if (floor(ratio)!=ratio) { smaller =floor(ratio); nLarger = nSamples - CVfold * smaller binSizes = c(rep(smaller, CVfold-nLarger), rep(smaller +1, nLarger)); } else binSizes = rep(ratio, CVfold); if (!is.null(randomSeed)) { if (exists(".Random.seed")) { saved.seed = .Random.seed; seedSaved = TRUE; } else seedSaved = FALSE; set.seed(randomSeed); } sampleOrder = sample(1:nSamples); CVpredicted = array(NA, dim = c(nSamples, nTraits, nPredWPowers)); CVbin = rep(0, nSamples); if (verbose > 0) { cat(paste(spaces, "Running cross-validation: ")); if (verbose==1) pind = initProgInd() else printFlush(""); } ind = 1; for (cv in 1:CVfold) { if (verbose > 1) printFlush(paste("..cross validation bin", cv, "of", CVfold)); end = ind + binSizes[cv] - 1; samples = sampleOrder[ind:end]; CVbin[samples] = cv; xCVtrain = x[-samples, , drop = FALSE]; xCVtest = x[samples, , drop = FALSE]; yCVtrain = y[-samples, , drop = FALSE]; yCVtest = y[samples,, drop = FALSE ]; pr = votingLinearPredictor(xCVtrain, yCVtrain, xCVtest, classify = FALSE, CVfold = 0, assocFnc = assocFnc, assocOptions = assocOptions, featureWeightPowers = featureWeightPowers, priorWeights = priorWeights, nFeatures.hi = nFeatures.hi, nFeatures.lo = nFeatures.lo, dropUnusedDimensions = dropUnusedDimensions, verbose = verbose - 1, indent = indent + 1); CVpredicted[samples, , ] = pr$predictedTest; ind = end + 1; if (verbose==1) pind = updateProgInd(cv/CVfold, pind); } if (verbose==1) printFlush(""); collectGarbage(); } if (nrow(x)!=nrow(y)) stop("Number of observations in x and y must equal."); xSD = apply(x, 2, sd, na.rm = TRUE); validFeatures = xSD > 0; xMean = apply(x, 2, mean, na.rm = TRUE); x = scale(x); if (doTest) { xtest = (xtest - matrix(xMean, nTestSamples, nVars, byrow = TRUE) ) / matrix(xSD, nTestSamples, nVars, byrow = TRUE); xtest[, 
            !validFeatures] = 0
  }
  # This prevents NAs generated from zero xSD from contaminating the results
  x[, !validFeatures] = 0
  xSD[!validFeatures] = 1;
  obsMean = apply(y, 2, mean, na.rm = TRUE);
  obsSD = apply(y, 2, sd, na.rm = TRUE);
  if (sum(!is.finite(obsSD)) > 0)
    stop("Something is wrong with given trait: not all standard deviations are finite.");
  if (sum(obsSD==0) > 0)
    stop("Some of given traits have variance zero. Prediction on such traits will not work.");
  y = scale(y);
  if (is.null(priorWeights))
  {
    priorWeights = array(1, dim = c(nTraits, nPredWPowers, nVars));
  } else {
    dimPW = dim(priorWeights);
    if (length(dimPW) <= 1)
    {
      # Note: the original code tested length(priorWeights!=nVars), which is always the
      # length of a logical vector; the intended test is on the length of priorWeights.
      if (length(priorWeights) != nVars)
        stop("priorWeights are a vector - must have length = number of variables (ncol(x)).");
      priorWeights = matrix(priorWeights, nrow = nSamples * nPredWPowers, ncol = nVars,
                            byrow = TRUE);
      dim(priorWeights) = c(nTraits, nPredWPowers, nVars);
    } else if (length(dimPW)==2)
    {
      if ((dimPW[1]!=nPredWPowers) | (dimPW[2]!=nVars))
        stop(paste("If priorWeights is two-dimensional, 1st dimension must equal",
                   "number of predictor weight powers, 2nd must equal number of variables"));
      # if (verbose>0) printFlush("..converting dimensions of priorWeights..");
      newWeights = array(0, dim = c(nTraits, nPredWPowers, nVars));
      for (trait in 1:nTraits)
        newWeights[trait, , ] = priorWeights[,];
      priorWeights = newWeights;
      collectGarbage();
    } else if ((dimPW[1] != nTraits) | (dimPW[3] != nVars) | (dimPW[2] != nPredWPowers))
      stop(paste("priorWeights have incorrect dimensions.
Dimensions must be", "(ncol(y), length(featureWeightPowers), ncol(x)).")); } varImportance = array(0, dim = c(nTraits, nPredWPowers, nVars)); predictMat = array(0, dim = c(nSamples, nTraits, nPredWPowers)); if (doTest) predictTestMat = array(0, dim = c(nTestSamples, nTraits, nPredWPowers)); corEval = parse(text = paste(assocFnc, "(x, y ", prepComma(assocOptions), ")")); r = eval(corEval); if (!is.null(nFeatures.hi)) { if (is.null(nFeatures.lo)) nFeatures.lo = nFeatures.hi; # Zero out associations (and therefore weights) for features that do not make the cut rank = apply(r, 2, rank, na.last = TRUE); nFinite = colSums(!is.na(r)); for (t in 1:nTraits) r[ rank[, t]>nFeatures.lo & rank[, t] <= nFinite[t]-nFeatures.hi, t] = 0; } r[is.na(r)] = 0; validationWeights = rep(1, nVars); if (weighByPrediction > 0) { select = c(1:nVars)[rowSums(r!=0) > 0]; nSelect = length(select); pr = .quickGeneVotingPredictor.CV(x[, select], xtest[, select], c(1:nSelect)) dCV = sqrt(colMeans( (pr$CVpredicted - x[, select])^2, na.rm = TRUE)); dTS = sqrt(colMeans( (pr$predictedTest - xtest[, select])^2, na.rm = TRUE)); dTS[dTS==0] = min(dTS[dTS>0]); w = (dCV/dTS)^weighByPrediction; w[w>1] = 1; validationWeights[select] = w; } finiteX = (is.finite(x) + 1)-1 x.fin = x; x.fin[!is.finite(x)] = 0; if (doTest) { finiteXTest = (is.finite(xtest) + 1)-1; xtest.fin = xtest; xtest.fin[!is.finite(xtest)] = 0; } for (power in 1:nPredWPowers) { prWM = priorWeights[, power, ]; dim(prWM) = c(nTraits, nVars); weights = abs(r)^featureWeightPowers[power] * t(prWM) * validationWeights; #weightSum = apply(weights, 2, sum); weightSum = finiteX %*% weights; RWeights = sign(r) * weights; predictMat[ , , power] = x.fin %*% RWeights / weightSum; predMean = apply(.dropThirdDim(predictMat[ , , power, drop = FALSE]), 2, mean, na.rm = TRUE); predSD = apply(.dropThirdDim(predictMat[, , power, drop = FALSE]), 2, sd, na.rm = TRUE); predictMat[, , power] = scale(predictMat[, , power]) * matrix(obsSD, nrow = nSamples, ncol = 
nTraits, byrow = TRUE) + matrix(obsMean, nrow = nSamples, ncol = nTraits, byrow = TRUE); if (doTest) { weightSum.test = finiteXTest %*% weights; predictTestMat[, , power] = (xtest.fin %*% RWeights / weightSum.test - matrix(predMean, nrow = nTestSamples, ncol = nTraits, byrow = TRUE) ) / matrix(predSD, nrow = nTestSamples, ncol = nTraits, byrow = TRUE) * matrix(obsSD, nrow = nTestSamples, ncol = nTraits, byrow = TRUE) + matrix(obsMean, nrow = nTestSamples, ncol = nTraits, byrow = TRUE); } varImportance[ , power, ] = RWeights; } dimnames(predictMat) = list(sampleNames, traitNames, powerNames); if (doTest) dimnames(predictTestMat) = list(testSampleNames, traitNames, powerNames); dimnames(r) = list(featureNames, traitNames); dimnames(varImportance) = list(traitNames, powerNames, featureNames); discretize = function(x, min, max) { fac = round(x); fac[fac < min] = min; fac[fac > max] = max; # dim(fac) = dim(x); fac; } trafo = function(x, drop) { if (classify) { for (power in 1:nPredWPowers) for (t in 1:nTraits) { disc = discretize(x[, t, power], minY[t], maxY[t]) x[, t, power] = originalYLevels[[t]] [disc]; } } x = x[ , , , drop = drop]; x; } out = list(predicted = trafo(predictMat, drop = dropUnusedDimensions), weightBase = abs(r), variableImportance = varImportance) if (doTest) out$predictedTest = trafo(predictTestMat, drop = dropUnusedDimensions) if (CVfold > 0) { dimnames(CVpredicted) = list(sampleNames, traitNames, powerNames); CVpredFac = trafo(CVpredicted, drop = dropUnusedDimensions) out$CVpredicted = CVpredFac; } out; } #======================================================================================================= # # Quick linear predictor for genes. # #======================================================================================================= # Assume we have expression data (training and test). Run the prediction # in a CV-like way. 
# Keep the predictions for CV samples and for the test samples; at the end average the
# test predictions to get one final test prediction.
# In each CV run:
#   scale the training data and keep the scale
#   scale the test data using the training scale
#   get correlations of predicted vectors and predictors (note: the number of predicted
#     vectors should be relatively small, but all genes may be predictors - however, here
#     we can also make some restrictions)
#   Form the ensemble of predictors for each gene
#   Predict the CV samples and test samples

.quickGeneVotingPredictor = function(x, xtest, predictedIndex,
                                     nPredictorGenes = 20, power = 3,
                                     corFnc = "bicor", corOptions = "use = 'p'",
                                     verbose = 0)
{
  nSamples = nrow(x)
  nTestSamples = nrow(xtest);
  nGenes = ncol(x)
  nPredicted = length(predictedIndex);
  if (nPredictorGenes >= nGenes) nPredictorGenes = nGenes - 1;
  geneMeans = as.numeric(colMeans(x, na.rm = TRUE));
  geneSD = as.numeric(sqrt(colMeans(x^2, na.rm = TRUE) - geneMeans^2));
  xScaled = (x - matrix(geneMeans, nSamples, nGenes, byrow = TRUE)) /
            matrix(geneSD, nSamples, nGenes, byrow = TRUE);
  xTestScaled = (xtest - matrix(geneMeans, nTestSamples, nGenes, byrow = TRUE)) /
                matrix(geneSD, nTestSamples, nGenes, byrow = TRUE);
  corExpr = parse(text = paste(corFnc, " (xScaled, xScaled[, predictedIndex] ",
                               prepComma(corOptions), ")"));
  significance = eval(corExpr);
  predictedTest = matrix(NA, nTestSamples, nPredicted)
  if (verbose > 0) pind = initProgInd();
  for (g in 1:nPredicted)
  {
    gg = predictedIndex[g];
    sigOrder = order(-significance[, g]);
    useGenes = sigOrder[2:(nPredictorGenes+1)];
    w.base = significance[useGenes, g]
    w = w.base * abs(w.base)^(power - 1);
    bsub = rowSums(xScaled[, useGenes, drop = FALSE] *
                   matrix(w, nSamples, nPredictorGenes, byrow = TRUE));
    bsub.scale = sqrt(mean(bsub^2))
    predictedTest[, g] = rowSums(xTestScaled[, useGenes, drop = FALSE] *
                                 matrix(w, nTestSamples, nPredictorGenes, byrow = TRUE)) /
                         bsub.scale * geneSD[gg] + geneMeans[gg]
    if (verbose > 0) pind =
updateProgInd(g/nPredicted, pind) } predictedTest; } .quickGeneVotingPredictor.CV = function(x, xtest = NULL, predictedIndex, nPredictorGenes = 20, power = 3, CVfold = 10, corFnc = "bicor", corOptions = "use = 'p'") { nSamples = nrow(x) nTestSamples = if (is.null(xtest)) 0 else nrow(xtest); nGenes = ncol(x) nPredicted = length(predictedIndex); ratio = nSamples/CVfold; if (floor(ratio)!=ratio) { smaller =floor(ratio); nLarger = nSamples - CVfold * smaller binSizes = c(rep(smaller, CVfold-nLarger), rep(smaller +1, nLarger)); } else binSizes = rep(ratio, CVfold); sampleOrder = sample(1:nSamples); CVpredicted = matrix(NA, nSamples, nPredicted); if (!is.null(xtest)) predictedTest = matrix(0, nTestSamples, nPredicted) else predictedTest = NULL; cvStart = 1; for (cv in 1:CVfold) { end = cvStart + binSizes[cv] - 1; oob = sampleOrder[cvStart:end]; CVx = x[-oob, , drop = FALSE]; if (is.null(xtest)) CVxTest = x[oob, , drop = FALSE] else CVxTest = rbind( x[oob, , drop = FALSE], xtest); pred = .quickGeneVotingPredictor(CVx, CVxTest, predictedIndex, nPredictorGenes, power, corFnc, corOptions); CVpredicted[oob, ] = pred[c(1:binSizes[cv]), ]; if (!is.null(xtest)) predictedTest = predictedTest + pred[-c(1:binSizes[cv]), ]; cvStart = end + 1; } if (!is.null(xtest)) predictedTest = predictedTest/CVfold; list(CVpredicted = CVpredicted, predictedTest= predictedTest); } removePrincipalComponents = function(x, n) { if (sum(is.na(x)) > 0) x = t(impute.knn(t(x))$data); svd = svd(x, nu = n, nv = 0); PCs = as.data.frame(svd$u); names(PCs) = spaste("PC", c(1:n)); fit = lm(x~., data = PCs); res = residuals(fit) res; } #======================================================================================================== # # Lin's network screening correlation functions # #======================================================================================================== .corWeighted = function(expr, y, ...) { modules = blockwiseModules(expr, ...) 
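  # Editor's note - usage sketch for removePrincipalComponents above (illustration only;
  # `datExpr0`, a hypothetical samples-by-genes numeric matrix, is not part of the package):
  #
  #   if (FALSE) {
  #     datExpr0 = matrix(rnorm(50*200), 50, 200)
  #     # Regress the first 2 principal components out of every column; the result has the
  #     # same dimensions as the input and contains the residuals from lm().
  #     datAdjusted = removePrincipalComponents(datExpr0, n = 2)
  #   }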
  MEs = modules$MEs
  MEs = MEs[, colnames(MEs)!="MEgrey"]
  ns = networkScreening(y, MEs, expr, getQValues = FALSE)
  ns$cor.Weighted
}

.corWeighted.new = function(expr, y, ...)
{
  modules = blockwiseModules(expr, ...)
  MEs = modules$MEs
  MEs = MEs[, colnames(MEs)!="MEgrey"]
  scaledMEs = scale(MEs)
  ES = t(as.matrix(cor(y, MEs, use = "p")))
  weightedAverageME = as.numeric(as.matrix(scaledMEs) %*% ES)/ncol(MEs)
  w = 0.25
  y.new = w*scale(y) + (1-w)*weightedAverageME
  GS.new = as.numeric(cor(y.new, expr, use = "p"))
  GS.new
}

#-----------------------------------------------------------------------------------------------------
# File: WGCNA/R/standardScreeningBinaryTrait.R
#-----------------------------------------------------------------------------------------------------

# The function standardScreeningBinaryTrait computes widely used statistics for relating the columns of
# the input data frame (argument datExpr) to a binary sample trait (argument y). The statistics include
# the Student t-test p-value and the corresponding local false discovery rate (known as the q-value,
# Storey et al 2004), the fold change, the area under the ROC curve (also known as the C-index), mean
# values, etc. If the input option kruskalTest is set to TRUE, it also computes the Kruskal-Wallis test
# p-value and the corresponding q-value. The Kruskal-Wallis test is a non-parametric, rank-based group
# comparison test.

standardScreeningBinaryTrait = function(datExpr, y,
                                        corFnc = cor, corOptions = list(use = 'p'),
                                        kruskalTest = FALSE, qValues = FALSE,
                                        var.equal = FALSE, na.action = "na.exclude",
                                        getAreaUnderROC = TRUE)
{
  datExpr = data.frame(datExpr, check.names = FALSE)
  levelsy = levels(factor(y))
  if (length(levelsy) > 2)
    stop("The sample trait y contains more than 2 levels. Please input a binary variable y")
  if (length(levelsy)==1)
    stop("The sample trait y is constant.
Please input a binary sample trait with some variation.")
  yNumeric = as.numeric(factor(y));
  if (length(yNumeric) != dim(datExpr)[[1]])
    stop("The length of the sample trait y does not equal the number of rows of datExpr")
  pvalueStudent = t.Student = Z.Student = rep(NA, dim(datExpr)[[2]])
  pvaluekruskal = stat.Kruskal = Z.Kruskal = sign.Kruskal = rep(NA, dim(datExpr)[[2]])
  nPresent = rep(0, dim(datExpr)[[2]])
  AreaUnderROC = rep(NA, dim(datExpr)[[2]])
  if (var.equal)
    printFlush(paste("Warning: T-test that assumes equal variances in each group is requested.\n",
                     "This is not the default option for t.test. We recommend to use var.equal=FALSE."));
  corFnc = match.fun(corFnc);
  corOptions$y = yNumeric;
  corOptions$x = datExpr;
  corPearson = as.numeric(do.call(corFnc, corOptions));
  nGenes = dim(datExpr)[[2]];
  nPresent1 = as.numeric(t(as.matrix(!is.na(yNumeric) & yNumeric==1)) %*% !is.na(datExpr));
  nPresent2 = as.numeric(t(as.matrix(!is.na(yNumeric) & yNumeric==2)) %*% !is.na(datExpr));
  nPresent = nPresent1 + nPresent2;
  for (i in 1:nGenes)
  {
    no.present1 = nPresent1[i];
    no.present2 = nPresent2[i];
    no.present = nPresent[i];
    if (no.present1 < 2 | no.present2 < 2)
    {
      pvalueStudent[i] = t.Student[i] = NA
    } else {
      tst = try(t.test(as.numeric(datExpr[,i]) ~ yNumeric, var.equal = var.equal,
                       na.action = na.action),
                silent = TRUE)
      if (!inherits(tst, "try-error"))
      {
        pvalueStudent[i] = tst$p.value;
        t.Student[i] = -tst$statistic
        # The - sign above is intentional to make the sign of t consistent with correlation
      } else {
        printFlush(paste("standardScreeningBinaryTrait: An error occurred in t.test for variable",
                         i, ":\n", tst));
        printFlush(paste("Will return missing value(s) for this variable.\n\n"));
      }
    }
    if (getAreaUnderROC)
      AreaUnderROC[i] = rcorr.cens(datExpr[, i], yNumeric, outx = TRUE)[[1]]
    if (kruskalTest)
    {
      if (no.present < 5)
      {
        pvaluekruskal[i] = stat.Kruskal[i] = NA
      } else {
        kt = try(kruskal.test(datExpr[, i] ~ factor(yNumeric), na.action = "na.exclude"),
                 silent = TRUE)
        if (!inherits(kt,
"try-error")) { pvaluekruskal[i] = kt$p.value; stat.Kruskal[i] = kt$statistic; # Find which side is higher r = rank(datExpr[, i]); means = tapply(r, factor(yNumeric), mean, na.rm = TRUE); sign.Kruskal[i] = 2 * ( (means[1] < means[2]) - 0.5); # sign.Kruskal is 1 if the ranks in group 1 are smaller than in group 2 } else { printFlush(paste("standardScreeningBinaryTrait: An error ocurred in kruskal.test for variable", i, ":\n", kt)); printFlush(paste("Will return missing value(s) for this variable.\n\n")); } } } } q.Student=rep(NA, length(pvalueStudent) ) rest1= ! is.na(pvalueStudent) if (qValues) { x = try({ q.Student[rest1] = qvalue(pvalueStudent[rest1])$qvalues }, silent = TRUE); if (inherits(x, "try-error")) printFlush(paste("Warning in standardScreeningBinaryTrait: function qvalue returned an error.\n", "calculated q-values will be invalid. qvalue error:\n\n", x, "\n")) if (kruskalTest) { q.kruskal=rep(NA, length(pvaluekruskal) ) rest1= ! is.na(pvaluekruskal) xx = try( { q.kruskal[rest1] = qvalue(pvaluekruskal[rest1])$qvalues} , silent = TRUE); if (inherits(xx, "try-error")) printFlush(paste("Warning in standardScreeningBinaryTrait: function qvalue returned an error.\n", "calculated q-values will be invalid. 
qvalue error:\n\n", xx, "\n"))
    }
  }
  meanLevel1 = as.numeric(apply(datExpr[!is.na(y) & y == levelsy[[1]], ], 2, mean, na.rm = TRUE));
  meanLevel2 = as.numeric(apply(datExpr[!is.na(y) & y == levelsy[[2]], ], 2, mean, na.rm = TRUE));
  Z.Student = qnorm(pvalueStudent/2, lower.tail = FALSE) * sign(t.Student);
  if (kruskalTest)
    Z.Kruskal = qnorm(pvaluekruskal/2, lower.tail = FALSE) * sign(stat.Kruskal);
  stderr1 = function(x)
  {
    no.present = sum(!is.na(x));
    if (no.present < 2) out = NA else
    {
      out = sqrt(var(x, na.rm = TRUE)/no.present)
    }
    out
  } # end of function stderr1
  SE.Level1 = as.numeric(apply(datExpr[y == levelsy[[1]] & !is.na(y), ], 2, stderr1))
  SE.Level2 = as.numeric(apply(datExpr[y == levelsy[[2]] & !is.na(y), ], 2, stderr1))
  FoldChangeLevel1vsLevel2 = ifelse(meanLevel1/meanLevel2 > 1,
                                    meanLevel1/meanLevel2, -meanLevel2/meanLevel1)
  output = data.frame(ID = dimnames(datExpr)[[2]],
                      corPearson = corPearson,
                      t.Student = t.Student,
                      pvalueStudent = pvalueStudent,
                      FoldChange = FoldChangeLevel1vsLevel2,
                      meanFirstGroup = meanLevel1,
                      meanSecondGroup = meanLevel2,
                      SE.FirstGroup = SE.Level1,
                      SE.SecondGroup = SE.Level2);
  if (getAreaUnderROC) output$AreaUnderROC = AreaUnderROC;
  if (kruskalTest)
  {
    output = data.frame(output,
                        stat.Kruskal = stat.Kruskal,
                        stat.Kruskal.signed = sign.Kruskal * stat.Kruskal,
                        pvaluekruskal = pvaluekruskal);
  }
  if (qValues && !inherits(x, "try-error")) output = data.frame(output, q.Student)
  if (qValues & kruskalTest)
  {
    if (!inherits(xx, "try-error")) output = data.frame(output, q.kruskal)
  }
  names(output)[3:5] = paste(names(output)[3:5], levelsy[[1]], "vs", levelsy[[2]], sep = ".")
  output = data.frame(output, nPresentSamples = nPresent);
  output
}

#-----------------------------------------------------------------------------------------------------
# File: WGCNA/R/verboseIplot.R
#-----------------------------------------------------------------------------------------------------

verboseIplot <- function(x, y,
                         xlim = NA, ylim = NA,
                         nBinsX = 150, nBinsY = 150,
                         ztransf = function(x) {x}, gamma = 1,
                         sample = NULL,
                         corFnc = "cor", corOptions = "use = 'p'",
                         main = "", xlab = NA, ylab = NA,
                         cex = 1, cex.axis = 1.5, cex.lab = 1.5, cex.main
= 1.5, abline = FALSE, abline.color = 1, abline.lty = 1, corLabel = corFnc, showMSE = TRUE, ...) { if (is.na(xlab)) xlab = deparse(substitute(x)) if (is.na(ylab)) ylab = deparse(substitute(y)) x = as.numeric(as.character(x)) y = as.numeric(as.character(y)) xy <- data.frame(x,y) xy=xy[!is.na(x)&!is.na(y),] if (sum(is.na(xlim))!=0) xlim=c(min(xy[,1])-10^-10*diff(range(xy[,1])),max(xy[,1])) if (sum(is.na(ylim))!=0) ylim=c(min(xy[,2])-10^-10*diff(range(xy[,2])),max(xy[,2])) corExpr = parse(text = paste(corFnc, "(x, y ", prepComma(corOptions),")")) cor = signif(eval(corExpr), 2) corp = signif(corPvalueStudent(cor, sum(is.finite(x) & is.finite(y))),2) if (corp < 10^(-200)) corp = "<1e-200" else corp = paste("=", corp, sep = "") resid=lm(y~x)$residuals MSE=round(mean(resid^2),2) if (!is.na(corLabel)) { mainX = paste(main, " ", corLabel, "=", cor, if (showMSE) paste0(" MSE = ", MSE) else "", sep = "") } else mainX = main if (!is.null(sample)) { if (length(sample) == 1) { sample = sample(length(x), sample) } xy=xy[sample,] } sx <- seq(xlim[1], xlim[2], by = diff(xlim)/nBinsX) sy <- seq(ylim[1], ylim[2], by = diff(ylim)/nBinsY) den <- ztransf(table(cut(xy[, 1], breaks = sx), cut(xy[, 2], breaks = sy))) lsx <- length(sx) lsy <- length(sy) xx <- 0.5 * (sx[-1] + sx[-lsx]) yy <- 0.5 * (sy[-1] + sy[-lsy]) whiteBlueGreenRedBlack=function (n){ quarter = as.integer(n/5) red= c(seq(from=1,to=0,length.out=quarter)^(1/gamma),seq(from=0,to=0,length.out=quarter)^(1/gamma),seq(from=0,to=1,length.out=quarter)^(1/gamma),seq(from=1,to=1,length.out=quarter)^(1/gamma),seq(from=1,to=0,length.out=quarter)^(1/gamma)) green=c(seq(from=1,to=1,length.out=quarter)^(1/gamma),seq(from=1,to=1,length.out=quarter)^(1/gamma),seq(from=1,to=1,length.out=quarter)^(1/gamma),seq(from=1,to=0,length.out=quarter)^(1/gamma),seq(from=0,to=0,length.out=quarter)^(1/gamma)) blue= 
       c(seq(from=1, to=1, length.out=quarter)^(1/gamma),
         seq(from=1, to=0, length.out=quarter)^(1/gamma),
         seq(from=0, to=0, length.out=quarter)^(1/gamma),
         seq(from=0, to=0, length.out=quarter)^(1/gamma),
         seq(from=0, to=0, length.out=quarter)^(1/gamma))
    col = rgb(red, green, blue, maxColorValue = 1)
    col
  }
  image(x = xx, y = yy, den, xaxs = "r", yaxs = "r", xlab = xlab, ylab = ylab,
        cex = cex, main = mainX, cex.axis = cex.axis, cex.lab = cex.lab,
        cex.main = cex.main, col = whiteBlueGreenRedBlack(50))
  if (abline)
  {
    fit = lm(y ~ x)
    abline(reg = fit, col = abline.color, lty = abline.lty)
  }
  invisible(sample)
}

#-----------------------------------------------------------------------------------------------------
# File: WGCNA/R/qvalue.R
#-----------------------------------------------------------------------------------------------------

# qvalue function by John D. Storey, modified by Peter Langfelder for use in WGCNA

qvalue <- function(p, lambda = seq(0, 0.90, 0.05), pi0.method = "smoother", fdr.level = NULL,
                   robust = FALSE, smooth.df = 3, smooth.log.pi0 = FALSE)
{
  # Input
  #=============================================================================
  # p: a vector of p-values (only necessary input)
  # fdr.level: a level at which to control the FDR (optional)
  # lambda: the value of the tuning parameter to estimate pi0 (optional)
  # pi0.method: either "smoother" or "bootstrap"; the method for automatically
  #   choosing the tuning parameter in the estimation of pi0, the proportion
  #   of true null hypotheses
  # robust: an indicator of whether it is desired to make the estimate more robust
  #   for small p-values and a direct finite sample estimate of pFDR (optional)
  # gui: A flag to indicate to 'qvalue' that it should communicate with the gui. ## change by Alan
  #   Should not be specified on the command line.
  # smooth.df: degrees of freedom to use in the smoother (optional)
  # smooth.log.pi0: should smoothing be done on log scale?
(optional) # #Output #============================================================================= #call: gives the function call #pi0: an estimate of the proportion of null p-values #qvalues: a vector of the estimated q-values (the main quantity of interest) #pvalues: a vector of the original p-values #significant: if fdr.level is specified, an indicator of whether the q-value # fell below fdr.level (taking all such q-values to be significant controls # FDR at level fdr.level) #This is just some pre-processing if(min(p)<0 || max(p)>1) stop("qvalue: p-values not in valid range.") if(length(lambda)>1 && length(lambda)<4) stop("qvalue: If length of lambda greater than 1, you need at least 4 values.") if(length(lambda)>1 && (min(lambda) < 0 || max(lambda) >= 1)) stop("qvalue: Lambda must be within [0, 1).") m <- length(p) #These next few functions are the various ways to estimate pi0 if(length(lambda)==1) { if(lambda<0 || lambda>=1) stop("qvalue: Lambda must be within [0, 1).") pi0 <- mean(p >= lambda)/(1-lambda) pi0 <- min(pi0,1) } else { pi0 <- rep(0,length(lambda)) for(i in 1:length(lambda)) { pi0[i] <- mean(p >= lambda[i])/(1-lambda[i]) } if(pi0.method=="smoother") { if(smooth.log.pi0) pi0 <- log(pi0) spi0 <- smooth.spline(lambda,pi0,df=smooth.df) pi0 <- predict(spi0,x=max(lambda))$y if(smooth.log.pi0) pi0 <- exp(pi0) pi0 <- min(pi0,1) } else if(pi0.method=="bootstrap") { minpi0 <- min(pi0) mse <- rep(0,length(lambda)) pi0.boot <- rep(0,length(lambda)) for(i in 1:100) { p.boot <- sample(p,size=m,replace=TRUE) for(i in 1:length(lambda)) { pi0.boot[i] <- mean(p.boot>lambda[i])/(1-lambda[i]) } mse <- mse + (pi0.boot-minpi0)^2 } pi0 <- min(pi0[mse==min(mse)]) pi0 <- min(pi0,1) } else { ## change by Alan: check for valid choice of 'pi0.method' (only necessary on command line) stop("qvalue:: 'pi0.method' must be one of 'smoother' or 'bootstrap'.") return(0) } } if(pi0 <= 0) stop("qvalue:: The estimated pi0 <= 0. 
Check that you have valid p-values or use another lambda method.")
  if(!is.null(fdr.level) && (fdr.level <= 0 || fdr.level > 1)) ## change by Alan: check for valid fdr.level
    stop("qvalue:: 'fdr.level' must be within (0, 1].")

  # The estimated q-values are calculated here
  u <- order(p)

  # change by Alan
  # ranking function which returns number of observations less than or equal
  qvalue.rank <- function(x)
  {
    idx <- sort.list(x)
    fc <- factor(x)
    nl <- length(levels(fc))
    bin <- as.integer(fc)
    tbl <- tabulate(bin)
    cs <- cumsum(tbl)
    tbl <- rep(cs, tbl)
    tbl[idx] <- tbl
    return(tbl)
  }

  v <- qvalue.rank(p)
  qvalue <- pi0*m*p/v
  if(robust)
  {
    qvalue <- pi0*m*p/(v*(1-(1-p)^m))
  }
  qvalue[u[m]] <- min(qvalue[u[m]], 1)
  for(i in (m-1):1)
  {
    qvalue[u[i]] <- min(qvalue[u[i]], qvalue[u[i+1]], 1)
  }

  # The results are returned
  if(!is.null(fdr.level))
  {
    retval <- list(call = match.call(), pi0 = pi0, qvalues = qvalue, pvalues = p,
                   fdr.level = fdr.level, ## change by Alan
                   significant = (qvalue <= fdr.level), lambda = lambda)
  } else {
    retval <- list(call = match.call(), pi0 = pi0, qvalues = qvalue, pvalues = p,
                   lambda = lambda)
  }
  class(retval) <- "qvalue"
  return(retval)
}

#-----------------------------------------------------------------------------------------------------
# File: WGCNA/R/smaFunctions.R
#-----------------------------------------------------------------------------------------------------

###########################################################################
# Statistics for Microarray Analysis for R
# Discriminant analysis
#
# Date : August 21, 2000
# Last update : April 13, 2001
#
# Authors: Sandrine Dudoit, Yee Hwa (Jean) Yang, and Jane Fridlyand.
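# Editor's note - usage sketch for the qvalue() function defined above (illustration only,
# not part of the original sources; the p-value vector `p` is made up):
#
#   if (FALSE) {
#     set.seed(1)
#     p = c(runif(900), rbeta(100, 0.5, 10))   # mostly null p-values plus some signal near 0
#     qobj = qvalue(p, fdr.level = 0.05)
#     qobj$pi0                 # estimated proportion of true null hypotheses
#     sum(qobj$significant)    # tests called significant at estimated FDR <= 0.05
#   }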
########################################################################## ########################################################################## # A Red-Green Color Map ########################################################################## ########################################################################/** # # \name{rgcolors.func} # # \alias{rgcolors.func} # # \title{Red and Green Color Specification} # # \description{ # This function creates a vector of n ``contiguous'' colors, # corresponding to n intensities (between 0 and 1) of the red, green # and blue primaries, with the blue intensities set to zero. The # values returned by \code{rgcolors.func} can be used with a # \code{col=} specification in graphics functions or in # \code{\link{par}}. # } # # \usage{ # rgcolors.func(n=50) # } # # \arguments{ # \item{n}{the number of colors (>= 1) to be used in the red and # green palette. } # # } # \value{a character vector of color names. Colors are specified # directly in terms of their RGB components with a string of the form # "\#RRGGBB", where each of the pairs RR, GG, BB consist of two # hexadecimal digits giving a value in the range 00 to FF. 
# } # # # \author{ # Sandrine Dudoit, \email{sandrine@stat.berkeley.edu} \cr # Jane Fridlyand, \email{janef@stat.berkeley.edu} # } # # \seealso{\code{\link{plotCor}}, \code{\link{plotMat}}, # \code{\link{colors}}, \code{\link{rgb}}, \code{\link{image}}.} # # \examples{ # rgcolors.func(n=5) # ## The following vector is returned: # ## "#00FF00" "#40BF00" "#808000" "#BF4000" "#FF0000" # } # # \keyword{Microarray, RGB image.} # #*/####################################################################### rgcolors.func<-function(n = 50) { k <- round(n/2) r <- c(rep(0, k), seq(0, 1, length = k)) g <- c(rev(seq(0, 1, length = k)), rep(0, k)) res <- rgb(r, g, rep(0, 2 * k)) res } ########################################################################## # Images of data matrices and correlation matrices ########################################################################## ########################################################################/** # \name{plotCor} # # \alias{plotCor} # # \title{Red and Green Color Image of Correlation Matrix} # # \description{ # This function produces a red and green color image of a correlation # matrix using an RGB color specification. Increasingly positive # correlations are represented with reds of increasing intensity, and # increasingly negative correlations are represented with greens of # increasing intensity. # } # # \usage{ # plotCor(X, new=F, nrgcols=50, labels=FALSE, labcols=1, title="", ...) # } # # \arguments{ # \item{X}{a matrix of numerical values.} # \item{new}{If \code{new=F}, \code{X} must already be a correlation # matrix. If \code{new=T}, the correlation matrix for the columns of # \code{X} is computed and displayed in the image.} # \item{nrgcols}{the number of colors (>= 1) to be used in the red # and green palette.} # \item{labels}{vector of character strings to be placed at the # tickpoints, labels for the columns of \code{X}.} # \item{labcols}{colors to be used for the labels of the columns of # \code{X}. 
\code{labcols} can have either length 1, in which case # all the labels are displayed using the same color, or the same # length as \code{labels}, in which case a color is specified for the # label of each column of \code{X}.} # \item{title}{character string, overall title for the plot.} # \item{\dots}{graphical parameters may also be supplied as arguments to # the function (see \code{\link{par}}). For comparison purposes, # it is good to set \code{zlim=c(-1,1)}.} # } # } # # # \author{ # Sandrine Dudoit, \email{sandrine@stat.berkeley.edu} # } # # \seealso{\code{\link{plotMat}},\code{\link{rgcolors.func}}, # \code{\link{cor.na}}, \code{\link{cor}}, \code{\link{image}}, # \code{\link{rgb}}.} # # # \keyword{Microarray, correlation matrix, image.} # # #*/####################################################################### plotCor<-function(x, new=FALSE, nrgcols=50, labels=FALSE, labcols=1, title="", ...) { # X <- x n<-ncol(x) corr<-x if(new) corr<-cor(x, use = 'p') image(1:n,1:n,corr[,n:1],col=rgcolors.func(nrgcols),axes=FALSE, xlab="", ylab="",... ) if(length(labcols)==1){ axis(2,at=n:1,labels=labels,las=2,cex.axis=0.6,col.axis=labcols) axis(3,at=1:n,labels=labels,las=2,cex.axis=0.6,col.axis=labcols) } if(length(labcols)==n){ cols<-unique(labcols) for(i in 1:length(cols)){ which<-(1:n)[labcols==cols[i]] axis(2,at=(n:1)[which],labels=labels[which],las=2,cex.axis=0.6,col.axis=cols[i]) axis(3,at=which,labels=labels[which],las=2,cex.axis=0.6,col.axis=cols[i]) } } mtext(title,side=3,line=3) box() } ########################################################################/** # \name{plotMat} # # \alias{plotMat} # # \title{Red and Green Color Image of Data Matrix} # # \description{This function produces a red and green color image of a # data matrix using an RGB color specification. Larger entries are # represented with reds of increasing intensity, and smaller entries # are represented with greens of increasing intensity. 
# } # # \usage{ # plotMat(X, nrgcols=50, rlabels=FALSE, clabels=FALSE, rcols=1, ccols=1, title="",...) # } # # %- maybe also `usage' for other objects documented here. # # \arguments{ # \item{X}{a matrix of numbers.} # \item{nrgcols}{the number of colors (>= 1) to be used in the red # and green palette.} # \item{rlabels}{vector of character strings to be placed at the row # tickpoints, labels for the rows of \code{X}.} # \item{clabels}{vector of character strings to be placed at the # column tickpoints, labels for the columns of \code{X}.} # \item{rcols}{colors to be used for the labels of the rows of # \code{X}. \code{rcols} can have either length 1, in which case # all the labels are displayed using the same color, or the same # length as \code{rlabels}, in which case a color is specified for the # label of each row of \code{X}.} # \item{ccols}{colors to be used for the labels of the columns of # \code{X}. \code{ccols} can have either length 1, in which case # all the labels are displayed using the same color, or the same # length as \code{clabels}, in which case a color is specified for the # label of each column of \code{X}.} # \item{title}{character string, overall title for the plot.} # \item{\dots}{graphical parameters may also be supplied as arguments to # the function (see \code{\link{par}}). E.g. 
\code{zlim=c(-3,3)}} # } # # %\references{ ~put references to the literature/web site here ~ } # # # \author{ # Sandrine Dudoit, \email{sandrine@stat.berkeley.edu} # } # # \seealso{\code{\link{plotCor}}, \code{\link{rgcolors.func}}, # \code{\link{cor.na}}, \code{\link{cor}}, \code{\link{image}}, # \code{\link{rgb}}.} # # \examples{ # data(MouseArray) # ##mouse.setup <- init.grid() # ##mouse.data <- init.data() ## see \emph{init.data} # mouse.lratio <- stat.ma(mouse.data, mouse.setup) # # ## Looking at log ratios of mouse1 # plotMat(spatial.func(mouse.lratio$M[,1], mouse.setup)) # } # # \keyword{Microarray, image of data matrix.} # # #*/####################################################################### plotMat<-function(x, nrgcols=50, rlabels=FALSE, clabels=FALSE, rcols=1, ccols=1, title="", ...) { # X <-x n<-nrow(x) p<-ncol(x) image(1:p,1:n,t(x[n:1,]),col=rgcolors.func(nrgcols),axes=FALSE, xlab="", ylab="", ... ) if(length(ccols)==1){ axis(3,at=1:p,labels=clabels,las=2,cex.axis=0.6,col.axis=ccols) } if(length(ccols)==p){ cols<-unique(ccols) for(i in 1:length(cols)){ which<-(1:p)[ccols==cols[i]] axis(3,at=which,labels=clabels[which],las=2,cex.axis=0.6,col.axis=cols[i]) } } if(length(rcols)==1){ axis(2,at=n:1,labels=rlabels,las=2,cex.axis=0.6,col.axis=rcols) } if(length(rcols)==n){ cols<-unique(rcols) for(i in 1:length(cols)){ which<-(1:n)[rcols==cols[i]] axis(2,at=(n:1)[which],labels=rlabels[which],las=2,cex.axis=0.6,col.axis=cols[i]) } } mtext(title,side=3,line=3) box() } WGCNA/R/adjacency.polyReg.R0000644000176200001440000000226713103416622015003 0ustar liggesusersadjacency.polyReg = function(datExpr, degree=3, symmetrizationMethod = "mean") { if (!is.element(symmetrizationMethod, c("none", "min" ,"max", "mean"))) { stop("Unrecognized symmetrization method.") } datExpr = matrix(as.numeric(as.matrix(datExpr)), nrow(datExpr), ncol(datExpr)) n = ncol(datExpr) polyRsquare = matrix(NA, n,n) for (i in 2:n) { for (j in 1:(i-1)) { del = is.na(datExpr[, 
i]+datExpr[,j]) if (sum(del)>=(n-1) | var(datExpr[, i], na.rm=T)==0 | var(datExpr[, j], na.rm=T)==0) { polyRsquare[i, j] = polyRsquare[j, i]=NA }else{ dati = datExpr[!del, i]; datj = datExpr[!del, j]; lmPij=glm( dati ~ poly( datj, degree)) polyRsquare[i, j] = cor( dati, predict(lmPij))^2 lmPji=glm( datj ~ poly( dati, degree)) polyRsquare[j, i] = cor( datj, predict(lmPji))^2 rm(dati, datj, lmPij, lmPji) } } } diag(polyRsquare) = rep(1,n) if (symmetrizationMethod =="none") {adj= polyRsquare} else { adj = switch(symmetrizationMethod, min = pmin(polyRsquare, t(polyRsquare)), max = pmax(polyRsquare, t(polyRsquare)), mean = (polyRsquare + t(polyRsquare))/2) } adj } WGCNA/R/heatmapWithLegend.R0000644000176200001440000003562714533632240015046 0ustar liggesusers# Replacement for the function image.plot .autoTicks = function(min, max, maxTicks = 6, tickPos = c(1,2,5)) { if (max < min) { x = max; max = min; min = x } range = max - min; if (range==0) return(max); tick0 = range/(maxTicks+1-1e-6) maxTick = max(tickPos); # Ticks can only be multiples of tickPos mult = 1; if (tick0 < maxTick/10) { while (tick0 < maxTick/10) {tick0 = 10*tick0; mult = mult*10; } } else while (tick0 >=maxTick ) {tick0 = tick0/10; mult = mult/10; } ind = sum(tick0 > tickPos) + 1; tickStep = tickPos[ind] / mult; lowTick = min/tickStep; if (floor(lowTick)!=lowTick) lowTick = lowTick + 1; lowTick = floor(lowTick); ticks = tickStep * (lowTick:(lowTick + maxTicks+1)); ticks = ticks[ticks <= max]; ticks; } .plotStandaloneLegend = function( colors, lim, ## These dimensions are in inches tickLen = 0.09, tickGap = 0.04, minBarWidth = 0.09, maxBarWidth = Inf, mar = c(0.5, 0.2, 0.5, 0.1), lab = "", horizontal = FALSE, ...) 
{ par(mar = mar); plot(c(0, 1), c(0, 1), type = "n", axes = FALSE, xlab = "", ylab = ""); box = par("usr"); if (horizontal) box.eff = box[c(3,4,1,2)] else box.eff = box; tickVal = .autoTicks(lim[1], lim[2]); pin = par("pin"); pin.eff = if (horizontal) pin[c(2,1)] else pin; wrange = box.eff[2] - box.eff[1]; tickLen.usr = tickLen/pin.eff[1] * wrange tickGap.usr = tickGap/pin.eff[1] * wrange minBarWidth.usr = minBarWidth/pin.eff[1] * wrange maxBarWidth.usr = maxBarWidth/pin.eff[1] * wrange sizeFnc = if (horizontal) strheight else strwidth; maxTickWidth = max(sizeFnc(tickVal)); if (maxTickWidth + tickLen.usr + tickGap.usr > box.eff[2]-box.eff[1]-minBarWidth.usr) warning("Some tick labels will be truncated."); haveLab = length(lab) > 0 if (haveLab && is.character(lab)) haveLab = lab!=""; width = max(box.eff[2]-box.eff[1]-maxTickWidth - tickLen.usr - tickGap.usr- haveLab * 3*sizeFnc("M"), minBarWidth.usr); if (width > maxBarWidth.usr) width = maxBarWidth.usr; .plotColorLegend(box[1], if (horizontal) box[2] else box[1] + width, if (horizontal) box[4]-width else box[3], box[4], colors = colors, lim = lim, tickLen.usr = tickLen.usr, horizontal = horizontal, tickGap.usr = tickGap.usr, lab = lab, ...); } if (FALSE) { source("~/Work/RLibs/WGCNA/R/heatmapWithLegend.R") .plotStandaloneLegend(colors = blueWhiteRed(10), lim = c(-25, 25)) d = matrix(rnorm(100), 10, 10); par(mar = c(2,2,2,0)); .heatmapWithLegend(d, signed = TRUE, colors = blueWhiteRed(20), plotLegend = TRUE, cex.legendAxis = 1, legendShrink = 0.94, legendLabel = "", cex.legendLabel = 1) ## The following arguments are now in inches #legendSpace = 0.5 + (legendLabel!="") * 1.5*strheight("M",units = "inch", cex = cex.legendLabel), #legendWidth = 0.13, #legendGap = 0.09, #frame = TRUE, #frameTicks = FALSE, tickLen = 0.09); } .plotColorLegend = function(xmin, xmax, ymin, ymax, # colors can be a vector or a matrix (in which case a matrix of colors will be plotted) colors, horizontal = FALSE, ### FIXME: it would be good if 
these could respect settings in par("mgp") tickLen.usr = 0.5* (if (horizontal) strheight("M") else strwidth("M")), tickGap.usr = 0.5 * (if (horizontal) strheight("M") else strwidth("M")), lim, cex.axis = 1, tickLabelAngle = if (horizontal) 0 else -90, lab = "", cex.lab = 1, labAngle = 0, labGap = 0.6 * (if (horizontal) strheight("M") else strwidth("M")) ) { tickVal = .autoTicks(lim[1], lim[2]); nTicks = length(tickVal); if (horizontal) { lmin = xmin; lmax = xmax; tmin = ymin; tmax = ymax; } else { tmin = xmin; tmax = xmax; lmin = ymin; lmax = ymax; } tickPos = (tickVal - lim[1]) / (lim[2] - lim[1]) * (lmax - lmin) + lmin; pin = par("pin"); box = par("usr"); asp = pin[2]/pin[1] * ( box[2]-box[1])/(box[4] - box[3]); # Ticks: if (horizontal) { angle0 = 0; angle = angle0 + tickLabelAngle; if (angle==0) adj = c(0.5, 1) else adj = c(1, 0.5); for (t in 1:nTicks) lines(c(tickPos[t], tickPos[t]), c(ymin, ymin - tickLen.usr), xpd = TRUE); text(tickPos, rep(ymin - tickLen.usr - tickGap.usr), tickVal, adj = adj, cex = cex.axis, xpd = TRUE, srt = angle); tickLabelWidth = if (angle==0) max(strheight(tickVal)) else max(strwidth(tickVal))/asp; } else { angle0 = 90; angle = angle0 + tickLabelAngle; if (angle==0) adj = c(0, 0.5) else adj = c(0.5, 1); for (t in 1:nTicks) lines(c(xmax, xmax + tickLen.usr), c(tickPos[t], tickPos[t]), xpd = TRUE); text(rep(xmax + tickLen.usr + tickGap.usr), tickPos, tickVal, adj = adj, cex = cex.axis, xpd = TRUE, srt = angle); tickLabelWidth = if (angle==0) max(strwidth(tickVal)) else max(strheight(tickVal)) * asp; } # Fill with color: colors = as.matrix(colors); nColumns = ncol(colors); nColors = nrow(colors); bl = (lmax-lmin)/nColors * (0:(nColors-1)) + lmin; tl = (lmax-lmin)/nColors * (1:nColors) + lmin; wi.all = tmax - tmin; wi1 = wi.all/nColumns if (horizontal) { for (col in 1:nColumns) rect(xleft = bl, xright = tl, ybottom = rep(tmin + (col-1) * wi1, nColors), ytop = rep(tmin + wi1*col, nColors), col = colors[, col], border = colors[, col], xpd = 
TRUE); } else { for (col in 1:nColumns) rect(xleft = rep(tmin + (col-1) * wi1, nColors), xright = rep(tmin + wi1*col, nColors), ybottom = bl, ytop = tl, col = colors[, col], border = colors[, col], xpd = TRUE); } # frame for the legend lines(c(xmin, xmax, xmax, xmin, xmin), c(ymin, ymin, ymax, ymax, ymin), xpd = TRUE ); if (nColumns > 1) for (col in 2:nColumns) if (horizontal) lines(c(xmin, xmax), c(tmin + (col-1) * wi1, tmin + (col-1) * wi1)) else lines(c(tmin + (col-1) * wi1, tmin + (col-1) * wi1), c(ymin, ymax)); # Axis label if (length(lab)>0 && as.character(lab) != "") { if (horizontal) { y = ymin - tickLen.usr - tickGap.usr - tickLabelWidth - labGap; x = (xmin + xmax)/2; adj = if (labAngle==0) c(0.5, 1) else c(1, 0.5) angle = labAngle; text(x, y, lab, cex = cex.lab, srt = labAngle, xpd = TRUE, adj = adj); } else { y = (ymin + ymax)/2; x = xmax + tickLen.usr + tickGap.usr + tickLabelWidth + labGap; adj = if (labAngle==0) c(0.5, 1) else c(0, 0.5); angle = labAngle+90; text(x, y, lab, cex = cex.lab, srt = labAngle+90, xpd = TRUE, adj = adj); } height = strheight(lab); if (!horizontal) height = height * asp; labelInfo = list(x = x, y = y, angle = angle, adj = adj, space.usr = height, gap.usr = labGap); } else labelInfo = list(space.usr = 0, gap.usr = 0); #### FIXME: also include a component named box that gives the outer coordinates of the area used by the legend, to the ###best approximation. Maybe include the padding around the color bar. 
invisible(list(bar = list(xmin = xmin, xmax = xmax, ymin = ymin, ymax = ymax, space.usr = tmax - tmin), ticks = list(length.usr = tickLen.usr, gap.usr = tickGap.usr, labelSpace.usr = tickLabelWidth), label = labelInfo)); } .boxDimensionsForHeatmapWithLegend = function( data, plotLegend = TRUE, keepLegendSpace = plotLegend, cex.legend = 1, legendShrink = 0.94, ## The following arguments are now in inches legendSpace = 0.5, legendWidth = 0.13, legendGap = 0.09, startTempPlot = TRUE, plotDevice = "pdf", plotDeviceOptions = list(), width = 7, height = 7,...) { data = as.matrix(data); nCols = ncol(data); nRows = nrow(data); if (startTempPlot) { if (!is.null(plotDevice)) { if (plotDevice == "x11") { do.call(match.fun(plotDevice), c(list(width = width, height = height), plotDeviceOptions)); on.exit(dev.off()); } else { file = tempfile(); do.call(match.fun(plotDevice), c(list(file = file, width = width, height = height), plotDeviceOptions)) on.exit({ dev.off(); unlink(file)}); } par(mar = par("mar")); } barplot(1, col = "white", border = "white", axisnames = FALSE, axes = FALSE, ...); } pin = par("pin"); box = par("usr"); xminAll = box[1]; xmaxAll = box[2]; yminAll = box[3]; ymaxAll = box[4]; legendSpace.usr = legendSpace/pin[1] * (xmaxAll-xminAll); legendWidth.usr = legendWidth/pin[1] * (xmaxAll-xminAll); legendGap.usr = legendGap/pin[1] * (xmaxAll-xminAll); if (!keepLegendSpace && !plotLegend) { legendSpace.usr = 0; legendWidth.usr = 0; legendGap.usr = 0; } ymin = yminAll; ymax = ymaxAll; xmin = xminAll; xmax = xmaxAll - legendSpace.usr; if (xmax < xmin) stop("'legendSpace is too large, not enough space for the heatmap."); xStep = (xmax - xmin)/nCols; xLeft = xmin + c(0:(nCols-1)) * xStep; xRight = xLeft + xStep; xMid = (xLeft + xRight)/2; yStep = (ymax - ymin)/nRows; yBot = ymin + c(0:(nRows-1)) * yStep; yTop = yBot + yStep; yMid = c(yTop+ yBot)/2; list(xMin = xmin, xMax = xmax, yMin = ymin, yMax = ymax, xLeft = xLeft, xRight = xMid, xMid = xMid, yTop = yTop, yMid = 
yMid, yBottom = yBot); } .heatmapWithLegend = function(data, signed, colorMatrix = NULL, colors, naColor = "grey", zlim = NULL, reverseRows = TRUE, plotLegend = TRUE, keepLegendSpace = plotLegend, cex.legendAxis = 1, legendShrink = 0.94, legendPosition = 0.5, ## center; 1 means at the top, 0 means at the bottom legendLabel = "", cex.legendLabel = 1, ## The following arguments are now in inches legendSpace = 0.5 + (as.character(legendLabel)!="") * 1.5* strheight("M",units = "inch", cex = cex.legendLabel), legendWidth = 0.13, legendGap = 0.09, maxLegendSize = 4, legendLengthGap = 0.15, frame = TRUE, frameTicks = FALSE, tickLen = 0.09, tickLabelAngle = 0, ...) { if (length(naColor)==0) naColor = 0; ### Means transparent (as opposed to white) color. data = as.matrix(data); nCols = ncol(data); nRows = nrow(data); if (is.null(zlim)) { zlim = range(data, na.rm = TRUE); if (signed) zlim = c(-max(abs(zlim)), max(abs(zlim))); } barplot(1, col = "white", border = "white", axisnames = FALSE, axes = FALSE, ...); pin = par("pin"); box = par("usr"); xminAll = box[1]; xmaxAll = box[2]; yminAll = box[3]; ymaxAll = box[4]; legendSpace.usr = legendSpace/pin[1] * (xmaxAll-xminAll); legendWidth.usr = legendWidth/pin[1] * (xmaxAll-xminAll); legendGap.usr = legendGap/pin[1] * (xmaxAll-xminAll); tickLen.usr = tickLen/pin[1] * (xmaxAll-xminAll); maxLegendSize.usr = maxLegendSize/pin[2] * (ymaxAll-yminAll); legendLengthGap.usr = legendLengthGap/pin[2] * (ymaxAll-yminAll) if (!keepLegendSpace && !plotLegend) { legendSpace.usr = 0; legendWidth.usr = 0; legendGap.usr = 0; } ymin = yminAll; ymax = ymaxAll; xmin = xminAll; xmax = xmaxAll - legendSpace.usr; if (xmax < xmin) stop("'legendSpace is too large, not enough space for the heatmap."); xStep = (xmax - xmin)/nCols; xLeft = xmin + c(0:(nCols-1)) * xStep; xRight = xLeft + xStep; xMid = (xLeft + xRight)/2; yStep = (ymax - ymin)/nRows; yBot = ymin + c(0:(nRows-1)) * yStep; yTop = yBot + yStep; yMid = c(yTop+ yBot)/2; if (is.null(colorMatrix)) 
colorMatrix = numbers2colors(data, signed, colors = colors, lim = zlim, naColor = naColor) dim(colorMatrix) = dim(data); if (reverseRows) colorMatrix = .reverseRows(colorMatrix); for (c in 1:nCols) { rect(xleft = rep(xLeft[c], nRows), xright = rep(xRight[c], nRows), ybottom = yBot, ytop = yTop, col = ifelse(colorMatrix[, c]==0, 0, colorMatrix[, c]), border = ifelse(colorMatrix[, c]==0, 0, colorMatrix[, c])); ## Note: the ifelse seems superfluous here but it essentially converts a potentially character "0" to the number 0 ## which the plotting system should understand as transparent color. } if (frame) lines( c(xmin, xmax, xmax, xmin, xmin), c(ymin, ymin, ymax, ymax, ymin) ); if (plotLegend) { # Now plot the legend. legendSize.usr = legendShrink * (ymaxAll - yminAll); if (legendSize.usr > maxLegendSize.usr) legendSize.usr = maxLegendSize.usr if (legendLengthGap.usr > 0.5*(ymaxAll - yminAll)*(1-legendShrink)) legendLengthGap.usr = 0.5*(ymaxAll - yminAll)*(1-legendShrink); y0 = yminAll + legendLengthGap.usr; y1 = ymaxAll - legendLengthGap.usr; movementRange = (y1-y0 - legendSize.usr); if (movementRange < -1e-10) {browser(".heatmapWithLegend: movementRange is negative."); movementRange = 0;} ymin.leg = y0 + legendPosition * movementRange; ymax.leg = y0 + legendPosition * movementRange + legendSize.usr legendPosition = .plotColorLegend(xmin = xmaxAll - (legendSpace.usr - legendGap.usr), xmax = xmaxAll - (legendSpace.usr - legendGap.usr - legendWidth.usr), ymin = ymin.leg, ymax = ymax.leg, lim = zlim, colors = colors, tickLen.usr = tickLen.usr, cex.axis = cex.legendAxis, lab = legendLabel, cex.lab = cex.legendLabel, tickLabelAngle = tickLabelAngle ); } else legendPosition = NULL invisible(list(xMid = xMid, yMid = if (reverseRows) rev(yMid) else yMid, box = c(xmin, xmax, ymin, ymax), xLeft = xLeft, xRight = xRight, yTop = yTop, yBot = yBot, legendPosition = legendPosition)); } WGCNA/R/dendrogramAdjustmentFunctions.R0000644000176200001440000002341313103416622017510 0ustar 
liggesusers## This file contains several functions which can be used to adjust the dendrogram # in ways which keep the dendrogram mathematically identical (ie, branch swapping, # branch reflection, etc). The goal is to biologically optimize the dendrogram. # ----------------------------------------------------------------------------- # orderBranchesUsingHubGenes <- function(hierTOM, datExpr=NULL, colorh=NULL, type="signed", adj=NULL, iter=NULL, useReflections=FALSE, allowNonoptimalSwaps=FALSE){ # First read in and format all of the variables hGenes = hierTOM$labels if(is.null(adj)){ genes = chooseOneHubInEachModule(datExpr, colorh, type=type) adj = adjacency(datExpr[,genes],type=type, power=(2-(type=="unsigned"))) colnames(adj) <- rownames(adj) <- genes } genes = rownames(adj) if(length(genes)!=length(intersect(genes,hGenes))){ write("All genes in the adjacency must also be in the gene tree. Check to make sure","") write("that names(hierTOM$labels) is set to the proper gene or probe names and that","") write("these correspond to the expression / adjacency gene names.","") return(0) } genes = hGenes[is.element(hGenes,genes)] adj = adj[genes,genes] if (is.null(iter)) iter = length(genes)^2 iters=(1:iter)/iter swapAnyway = rep(0,length(iters)) # Quickly decreasing chance of random swap if (allowNonoptimalSwaps) swapAnyway = ((1-iters)^3)/3+0.001 # Iterate random swaps in the branch, only accepting the new result # if it produces a higher correlation than the old result OR if the # random variable says to swap (which gets less likely each iteration) changes=NULL for (i in 1:iter){ swap = 1; if (useReflections) swap = sample(0:1,1) gInd = sample(1:length(genes),2) g = genes[gInd] if (swap==1) { hierTOMnew = swapTwoBranches(hierTOM, g[1], g[2]) } else hierTOMnew = reflectBranch(hierTOM, g[1], g[2], TRUE) oldSum = .offDiagonalMatrixSum(adj) oGenesNew = hGenes[hierTOMnew$order] oGenesNew = oGenesNew[oGenesNew%in%genes] adjNew = adj[oGenesNew,oGenesNew] newSum = 
.offDiagonalMatrixSum(adjNew)
  if ((newSum>oldSum)|((sample(1:1000,1)/1000)<swapAnyway[i])){
   adj = adjNew
   hierTOM = hierTOMnew
   changes = rbind(changes, c(g, swap, oldSum, newSum))
  }
 }
 return(list(geneTree = hierTOM, changeLog = changes))
}
# ----------------------------------------------------------------------------- #
selectBranch <- function (hierTOM, g1, g2){
## This function selects all genes in the branch of the clustering tree defined
##  by the minimal branch possible that contains both g1 and g2 (as either
##  ORDERED index or gene names)

 # Convert genes to indices (ORDERED AS ON THE PLOT)
 if(is.numeric(g1)) g1 = hierTOM$order[g1]
 if(is.numeric(g2)) g2 = hierTOM$order[g2]
 if(!is.numeric(g1)) g1 = which(hierTOM$labels==g1)
 if(!is.numeric(g2)) g2 = which(hierTOM$labels==g2)
 if((length(g1)==0)|(length(g2)==0)|(max(c(g1,g2))>length(hierTOM$labels))){
  write("Input genes are not both legal indices","")
  return(hierTOM);
 }
 # Now determine which branch is the correct one, and find the genes
 len = length(hierTOM$height)
 tree1 = which(hierTOM$merge==(-g1))%%len
 continue=length(which(hierTOM$merge==tree1))>0
 while(continue){
  nextInd = which(hierTOM$merge==tree1[length(tree1)])%%len
  tree1 = c(tree1,nextInd)
  continue=length(which(hierTOM$merge==nextInd))>0
 }
 branchIndex = which(hierTOM$height==.minTreeHeight(hierTOM,g1,g2))
 branch=hierTOM$merge[branchIndex,]
 b1 <- NULL
 if(is.element(branch[1],tree1)){
  b1 = .getBranchMembers(hierTOM,branch[1],b1)
 } else b1 = .getBranchMembers(hierTOM,branch[2],b1)
 collectGarbage()
 return(b1)
}
# ----------------------------------------------------------------------------- #
reflectBranch <- function (hierTOM, g1, g2, both=FALSE){
## This function reverses the ordering of all genes in a branch of the
##  clustering tree defined by the minimal branch possible that contains
##  both g1 and g2 (as either ORDERED index or gene names), or just by
##  the genes in g1
 b1 = selectBranch(hierTOM, g1, g2)
 if (both) b1 = c(b1,selectBranch(hierTOM, g2, g1))
 # Now reorder the hierTOM correctly
 ord = hierTOM$order
 i1 = which(ord%in%b1)
 b=1:(min(i1)-1); if(b[length(b)]<1) b = NULL
 e=(max(i1)+1):length(ord); if(e[1]>length(ord)) e = NULL
 ord = ord[c(b,i1[order(i1,decreasing=T)],e)]
 hierTOM$order = ord
 return(hierTOM)
}
# ----------------------------------------------------------------------------- #
swapTwoBranches <- function (hierTOM, g1, g2){
## This function re-arranges two branches in a hierarchical clustering tree
##  at the nearest branch point of two given genes (or indices)

 # Convert genes to indices (ORDERED AS ON THE PLOT)
 if(is.numeric(g1)) g1 = hierTOM$order[g1]
 if(is.numeric(g2)) g2 = hierTOM$order[g2]
 if(!is.numeric(g1)) g1 = which(hierTOM$labels==g1)
 if(!is.numeric(g2)) g2 = which(hierTOM$labels==g2)
 if((length(g1)==0)|(length(g2)==0)|(max(c(g1,g2))>length(hierTOM$labels))){
  write("Input genes are not both legal indices","")
  return(hierTOM);
 }
 # Now determine the genes in each branch
 branchIndex = which(hierTOM$height==.minTreeHeight(hierTOM,g1,g2))
 b1 <- b2 <- NULL
 b1 = .getBranchMembers(hierTOM,hierTOM$merge[branchIndex,1],b1)
 b2 = .getBranchMembers(hierTOM,hierTOM$merge[branchIndex,2],b2)
 # Now reorder the hierTOM correctly
 ord = hierTOM$order
 i1 = which(ord%in%b1)
 i2 = which(ord%in%b2)
 if(min(i1)>min(i2)) {tmp = i1; i1=i2; i2=tmp; rm(tmp)}
 b=1:(min(i1)-1); if(b[length(b)]<1) b = NULL
 e=(max(i2)+1):length(ord); if(e[1]>length(ord)) e = NULL
 ord = ord[c(b,i2,i1,e)]
 hierTOM$order = ord
 return(hierTOM)
}
# ----------------------------------------------------------------------------- #
chooseOneHubInEachModule <- function(datExpr, colorh, numGenes=100, omitColors="grey", power=2, type="signed",...){
## This function returns the gene in each module with the highest connectivity, given
#   a number of randomly selected genes to test.
 numGenes = max(round(numGenes),2)
 keep = NULL
 isIndex = FALSE
 modules = names(table(colorh)); numCols = table(colorh)
 if(!(is.na(omitColors)[1])) modules = modules[!is.element(modules,omitColors)]
 if(is.null(colnames(datExpr))){
  colnames(datExpr) = 1:dim(datExpr)[2]
  isIndex = TRUE
 }
 for (m in modules){
  num = min(numGenes,numCols[m])
  inMod = which(is.element(colorh,m))
  keep = c(keep, sample(inMod,num))
 }
 colorh = colorh[keep]
 datExpr = datExpr[,keep]
 return(chooseTopHubInEachModule(datExpr, colorh, omitColors, power, type,...))
}
# ----------------------------------------------------------------------------- #
chooseTopHubInEachModule <- function(datExpr, colorh, omitColors="grey", power=2, type="signed",...){
## This function returns the gene in each module with the highest connectivity.
isIndex = FALSE modules = names(table(colorh)); if(!(is.na(omitColors)[1])) modules = modules[!is.element(modules,omitColors)] if(is.null(colnames(datExpr))){ colnames(datExpr) = 1:dim(datExpr)[2] isIndex = TRUE } hubs = rep(NA,length(modules)) names(hubs) = modules for (m in modules){ adj = adjacency(datExpr[,colorh==m],power=power,type=type,...) hub = which.max(rowSums(adj)) hubs[m] = colnames(adj)[hub] } if (isIndex){ hubs = as.numeric(hubs) names(hubs) = modules } return(hubs) } ################################################################################# # Internal functions............................................................. options(expressions=50000) # Required for .getBranchMembers .getBranchMembers <- function(hierTOM, ind, members){ # This is a recursive function that gets all the indices of members of # a branch in an hClust tree. if(ind<0) return(c(members,-ind)) m1 = hierTOM$merge[ind,1] m2 = hierTOM$merge[ind,2] if (m1>0) { members = .getBranchMembers(hierTOM,m1,members) } else members = c(members,-m1) if (m2>0) { members = .getBranchMembers(hierTOM,m2,members) } else members = c(members,-m2) return(members) } # ----------------------------------------------------------------------------- # .minTreeHeight <- function(hierTOM,l1,l2) { ## This function finds the minimum height at which two leafs ## in a hierarchical clustering tree are connected. l1 and ## l2 are the UNORDERED indices for the two leafs. ## Return 2 (larger than 1, if l1 or l2 is negative). This represents ## positions that are off the edge of the tree. 
if((l1<0)|(l2<0)) return(2) ## Get the tree for l1 len = length(hierTOM$height) tree1 = which(hierTOM$merge==(-l1))%%len continue=length(which(hierTOM$merge==tree1))>0 while(continue){ nextInd = which(hierTOM$merge==tree1[length(tree1)])%%len tree1 = c(tree1,nextInd) continue=length(which(hierTOM$merge==nextInd))>0 } ## Get the tree for l2 tree2 = which(hierTOM$merge==(-l2))%%len continue=length(which(hierTOM$merge==tree2))>0 while(continue){ nextInd = which(hierTOM$merge==tree2[length(tree2)])%%len tree2 = c(tree2,nextInd) continue=length(which(hierTOM$merge==nextInd))>0 } ## Now find the index where the two trees first agree minTreeLen = min(c(length(tree1),length(tree2))) tree1 = tree1[(length(tree1)-minTreeLen+1):length(tree1)] tree2 = tree2[(length(tree2)-minTreeLen+1):length(tree2)] treeInd = tree1[min(which(tree1==tree2))] ## Now find and return the minimum tree height return(hierTOM$height[ifelse(treeInd==0,len,treeInd)]) } # ----------------------------------------------------------------------------- # .offDiagonalMatrixSum <- function(adj){ len = dim(adj)[1] output=sum(diag(adj[1:(len-1),2:len])) return(output) } WGCNA/R/networkConcepts.R0000644000176200001440000004453113103416622014632 0ustar liggesusers# =================================================== # This code was written by Jun Dong, modified by Peter Langfelder # datExpr: expression profiles with rows=samples and cols=genes/probesets # power: for contruction of the weighted network # trait: the quantitative external trait # networkConcepts = function(datExpr, power=1, trait=NULL, networkType = "unsigned") { networkTypeC = charmatch(networkType, .networkTypes); if (is.na(networkTypeC)) stop(paste("Unrecognized networkType argument.", "Recognized values are (unique abbreviations of)", paste(.networkTypes, collapse = ", "))); if(networkTypeC==1) { adj <- abs(cor(datExpr,use="p"))^power } else if (networkTypeC==2) { adj <- abs((cor(datExpr,use="p")+1)/2)^power } else { cor = cor(datExpr,use="p"); 
cor[cor < 0] = 0; adj <- cor^power } diag(adj)=0 # Therefore adj=A-I. ### Fundamental Network Concepts Size=dim(adj)[1] Connectivity=apply(adj, 2, sum) # Within Module Connectivities Density=sum(Connectivity)/(Size*(Size-1)) Centralization=Size*(max(Connectivity)-mean(Connectivity))/((Size-1)*(Size-2)) Heterogeneity=sqrt(Size*sum(Connectivity^2)/sum(Connectivity)^2-1) ClusterCoef=.ClusterCoef.fun(adj) fMAR=function(v) sum(v^2)/sum(v) MAR=apply(adj, 1, fMAR) #CONNECTIVITY=Connectivity/max(Connectivity) ### Conformity-Based Network Concepts ### Dong J, Horvath S (2007) Understanding Network Concepts in Modules, BMC Systems Biology 2007, 1:24 Conformity=.NPC.iterate(adj)$v1 Factorizability=1- sum( (adj-outer(Conformity,Conformity)+ diag(Conformity^2))^2 )/sum(adj^2) Connectivity.CF=sum(Conformity)*Conformity-Conformity^2 Density.CF=sum(Connectivity.CF)/(Size*(Size-1)) Centralization.CF=Size*(max(Connectivity.CF)-mean(Connectivity.CF))/((Size-1)*(Size-2)) Heterogeneity.CF=sqrt(Size*sum(Connectivity.CF^2)/sum(Connectivity.CF)^2-1) #ClusterCoef.CF=.ClusterCoef.fun(outer(Conformity,Conformity)-diag(Conformity^2) ) ClusterCoef.CF=c(NA, Size) for(i in 1:Size ) ClusterCoef.CF[i]=( sum(Conformity[-i]^2)^2 - sum(Conformity[-i]^4) )/ ( sum(Conformity[-i])^2 - sum(Conformity[-i]^2) ) ### Approximate Conformity-Based Network Concepts Connectivity.CF.App=sum(Conformity)*Conformity Density.CF.App=sum(Connectivity.CF.App)/(Size*(Size-1)) Centralization.CF.App=Size*(max(Connectivity.CF.App)-mean(Connectivity.CF.App))/((Size-1)*(Size-2)) Heterogeneity.CF.App=sqrt(Size*sum(Connectivity.CF.App^2)/sum(Connectivity.CF.App)^2-1) ClusterCoef.CF.App=(sum(Conformity^2)/sum(Conformity))^2 ### Eigengene-based Network Concepts m1=moduleEigengenes(datExpr, colors = rep(1, Size)); # Weighted Expression Conformity ConformityE=cor(datExpr,m1[[1]][,1],use="pairwise.complete.obs"); ConformityE=abs(ConformityE)^power; ConnectivityE=sum(ConformityE)*ConformityE; #Expression Connectivity 
DensityE=sum(ConnectivityE)/(Size*(Size-1)); #Expression Density CentralizationE=Size*(max(ConnectivityE)-mean(ConnectivityE))/((Size-1)*(Size-2)); #Expression Centralization HeterogeneityE=sqrt(Size*sum(ConnectivityE^2)/sum(ConnectivityE)^2-1); #Expression Heterogeneity ClusterCoefE=(sum(ConformityE^2)/sum(ConformityE))^2; ##Expression ClusterCoef MARE=ConformityE* sum(ConformityE^2)/sum(ConformityE) ### Significance measure only when trait is available. if(!is.null(trait)){ EigengeneSignificance = abs(cor(trait, m1[[1]], use="pairwise.complete.obs") )^power; EigengeneSignificance = EigengeneSignificance[1,1] GS= abs(cor(datExpr, trait, use="pairwise.complete.obs") )^power; GS=GS[,1] GSE=ConformityE * EigengeneSignificance; GSE=GSE[,1] ModuleSignificance=mean(GS) ModuleSignificanceE=mean(GSE) K=Connectivity/max(Connectivity) HubGeneSignificance=sum(GS*K)/sum(K^2) KE=ConnectivityE/max(ConnectivityE) HubGeneSignificanceE= sum(GSE*KE)/sum(KE^2) } Summary=cbind( c(Density, Centralization, Heterogeneity, mean(ClusterCoef), mean(Connectivity)), c(DensityE, CentralizationE, HeterogeneityE, mean(ClusterCoefE), mean(ConnectivityE)), c(Density.CF, Centralization.CF, Heterogeneity.CF, mean(ClusterCoef.CF), mean(Connectivity.CF)), c(Density.CF.App, Centralization.CF.App, Heterogeneity.CF.App, mean(ClusterCoef.CF.App), mean(Connectivity.CF.App) ) ) colnames(Summary)=c("Fundamental", "Eigengene-based", "Conformity-Based", "Approximate Conformity-based") rownames(Summary)=c("Density", "Centralization", "Heterogeneity", "Mean ClusterCoef", "Mean Connectivity") output=list(Summary=Summary, Size=Size, Factorizability=Factorizability, Eigengene=m1[[1]], VarExplained=m1[[2]][,1], Conformity=Conformity, ClusterCoef=ClusterCoef, Connectivity=Connectivity, MAR=MAR, ConformityE=ConformityE) if(!is.null(trait)){ output$GS=GS; output$GSE=GSE; Significance=cbind(c(ModuleSignificance, HubGeneSignificance, EigengeneSignificance), c(ModuleSignificanceE, HubGeneSignificanceE, NA)) 
colnames(Significance)=c("Fundamental", "Eigengene-based")
  rownames(Significance)=c("ModuleSignificance", "HubGeneSignificance", "EigengeneSignificance")
  output$Significance=Significance
 }
 output
}
#====================================================================================================
#
# Network functions for network concepts
#
#====================================================================================================
#=========================================
# Function definitions
#=========================================
# ================================================================================
# Cohesiveness/Conformity/Factorizability etc
# ================================================================================
# ===================================================
# Check if adj is a valid adjacency matrix: square matrix, non-negative entries, symmetric and no missing entries.
# Parameters:
#   adj - the input adjacency matrix
#   tol - the tolerance level to measure the difference from 0 (symmetric matrix: upper diagonal minus lower diagonal)
# Remarks:
# 1. This function is not supposed to be used directly. Instead, it should appear in function definitions.
# 2. We relax the requirement that the diagonal elements be 1 or 0. Users should assign appropriate values
#    at the beginning of their function definitions.
# Usage:
#   if(!.is.adjmat(adj)) stop("The input matrix is not a valid adjacency matrix!")

.is.adjmat = function(adj, tol=10^(-15)){
  n=dim(adj)
  is.adj=1
  if (n[1] != n[2]){ message("The adjacency matrix is not a square matrix!"); is.adj=0;}
  if ( sum(is.na(adj))>0 ){ message("There are missing values in the adjacency matrix!"); is.adj=0;}
  if ( sum(adj<0)>0 ){ message("There are negative entries in the adjacency matrix!"); is.adj=0;}
  if ( max(abs(adj-t(adj))) > tol){ message("The adjacency matrix is not symmetric!"); is.adj=0;}
  #if ( max(abs(diag(adj)-1)) > tol){ message("The diagonal elements are not all one!"); is.adj=0;}
  #The last criterion is removed because of different definitions on diagonals in other papers.
  #Always let "diagonal=1" INSIDE the function calls when using functions for the Factorizability paper.
  is.adj
}

# ===================================================
# .NPC.direct=function(adj)
# Calculates the square root of Normalized Product Connectivity (.NPC), by way of the definition of .NPC. ( \sqrt{t})
# Parameters:
#   adj - the input adjacency matrix
#   tol - the tolerance level to measure the difference from 0 (zero off-diagonal elements)
# Output:
#   v1 - vector, the square root of .NPC
# Remarks:
#   1. The function requires that the off-diagonal elements of the adjacency matrix are all non-zero.
#   2. If any of the off-diagonal elements is zero, use the function .NPC.iterate().
#   3. If the adjacency matrix is 2 by 2, then a warning message is issued and a vector of sqrt(adj[1,2]) is returned.
#   4. If the adjacency matrix is a ZERO matrix, then a warning message is issued and a vector of 0 is returned.

.NPC.direct=function(adj){
  if(!.is.adjmat(adj)) stop("The input matrix is not a valid adjacency matrix!")
  n=dim(adj)[1]
  if(n==2)
  {
    warning("The adjacency matrix is only 2 by 2.
.NPC may not be unique!")
    return(rep(sqrt(adj[1,2]),2))
  }
  diag(adj)=0
  if(!sum(adj>0)){
    warning("The adjacency matrix is a ZERO matrix!")
    return(rep(0,n))
  }
  diag(adj)=1
  if(sum(adj==0))
    stop("There is a zero off-diagonal element! Please use the function .NPC.iterate().")
  log10.prod.vec=function(vec){
    prod=0
    for(i in 1:length(vec) ) prod=prod+log10(vec[i])
    prod
  }
  off.diag=as.vector(as.dist(adj))
  prod1=log10.prod.vec(off.diag)
  v1=rep(-666, n)
  for(i in 1:n){
    prod2=prod1-log10.prod.vec(adj[i,])
    v1[i]=10^(prod1/(n-1)-prod2/(n-2))
  }
  v1
}

# ===================================================
# .NPC.iterate=function(adj, loop=10^(10), tol=10^(-10))
# Calculates the square root of Normalized Product Connectivity, by way of an iteration algorithm. ( \sqrt{t})
# Parameters:
#   adj - the input adjacency matrix
#   loop - the maximum number of iterations before stopping the algorithm
#   tol - the tolerance level to measure the difference from 0 (zero off-diagonal elements)
# Output:
#   v1 - vector, the square root of .NPC
#   loop - integer, the number of iterations taken before the convergence criterion is met
#   diff - scalar, the maximum difference between the estimates of 'v1' in the last two iterations
# Remarks:
#   1. Whenever possible, use .NPC.direct().
#   2. If the adjacency matrix is 2 by 2, then a warning message is issued.
#   3. If the adjacency matrix is a ZERO matrix, then a warning message is issued and a vector of 0 is returned.

.NPC.iterate=function(adj, loop=10^(10), tol=10^(-10)){
  if(!.is.adjmat(adj)) stop("The input matrix is not a valid adjacency matrix!")
  n=dim(adj)[1]
  if(n==2)
    warning("The adjacency matrix is only 2 by 2.
.NPC may not be unique!")
  diag(adj)=0
  if(max(abs(adj))==0){
    warning("The adjacency matrix is a ZERO matrix!")
    return(list(v1=rep(0,n), loop=NA, diff=NA))
  }
  svd1=svd(adj)                        # Spectral Decomposition
  v1=sqrt(svd1$d[1])*abs(svd1$u[,1])   # initial estimate of sqrt(NPC)
  diff=1
  i=0
  while( loop>i && diff>tol ){
    i=i+1
    diag(adj)=v1^2
    svd1=svd(adj)                      # Spectral Decomposition
    v2=sqrt(svd1$d[1])*abs(svd1$u[,1])
    diff=max(abs(v1-v2))
    v1=v2
  }
  list(v1=v1,loop=i,diff=diff)
}

# ===================================================
# The function .ClusterCoef.fun computes the cluster coefficients.
# Input is an adjacency matrix.
.ClusterCoef.fun=function(adjmat1)
{
  # diag(adjmat1)=0
  no.nodes=dim(adjmat1)[[1]]
  computeLinksInNeighbors <- function(x, imatrix){x %*% imatrix %*% x}
  computeSqDiagSum = function(x, vec) { sum(x^2 * vec) };
  nolinksNeighbors <- c(rep(-666,no.nodes))
  total.edge <- c(rep(-666,no.nodes))
  maxh1=max(as.dist(adjmat1) ); minh1=min(as.dist(adjmat1) );
  if (maxh1>1 | minh1 < 0 )
  {
    stop(paste("ERROR: the adjacency matrix contains entries that are larger",
               "than 1 or smaller than 0: max=",maxh1,", min=",minh1))
  } else {
    nolinksNeighbors <- apply(adjmat1, 1, computeLinksInNeighbors, imatrix=adjmat1)
    subTerm = apply(adjmat1, 1, computeSqDiagSum, vec = diag(adjmat1));
    plainsum  <- apply(adjmat1, 1, sum)
    squaresum <- apply(adjmat1^2, 1, sum)
    total.edge = plainsum^2 - squaresum
    CChelp=rep(-666, no.nodes)
    CChelp=ifelse(total.edge==0,0, (nolinksNeighbors-subTerm)/total.edge)
    CChelp
  }
} # end of function

# ===================================================
# The function .err.bp is used to create error bars in a barplot
# usage: .err.bp(as.vector(means), as.vector(stderrs), two.side=F)
.err.bp<-function(daten,error,two.side=F)
{
  if(!is.numeric(daten)) {
    stop("All arguments must be numeric")}
  if(is.vector(daten)){
    xval<-(cumsum(c(0.7,rep(1.2,length(daten)-1))))
  }else{
    if (is.matrix(daten)){
      xval<-cumsum(array(c(1,rep(0,dim(daten)[1]-1)),
                         dim=c(1,length(daten))))+0:(length(daten)-1)+.5
    }else{
      stop("First argument must either be a vector or a matrix")
    }
  }
  MW<-0.25*(max(xval)/length(xval))
  ERR1<-daten+error
  ERR2<-daten-error
  for(i in 1:length(daten)){
    segments(xval[i],daten[i],xval[i],ERR1[i])
segments(xval[i]-MW,ERR1[i],xval[i]+MW,ERR1[i])
    if(two.side){
      segments(xval[i],daten[i],xval[i],ERR2[i])
      segments(xval[i]-MW,ERR2[i],xval[i]+MW,ERR2[i])
    }
  }
}

#========================================================================================
conformityBasedNetworkConcepts = function(adj, GS=NULL)
{
  if(!.is.adjmat(adj)) stop("The input matrix is not a valid adjacency matrix!")
  diag(adj)=0 # Therefore adj=A-I.
  if (dim(adj)[[1]]<3)
    stop("The adjacency matrix has fewer than 3 rows. This network is trivial and will not be evaluated.")
  if (!is.null(GS))
  {
    if( length(GS) !=dim(adj)[[1]])
    {
      stop(paste("The length of the node significance GS does not equal the number",
                 "of rows of the adjacency matrix. length(GS) != dim(adj)[[1]]. \n",
                 "Something is wrong with your input"))
    }
  }
  ### Fundamental Network Concepts
  Size=dim(adj)[1]
  Connectivity=apply(adj, 2, sum)
  Density=sum(Connectivity)/(Size*(Size-1))
  Centralization=Size*(max(Connectivity)-mean(Connectivity))/((Size-1)*(Size-2))
  Heterogeneity=sqrt(Size*sum(Connectivity^2)/sum(Connectivity)^2-1)
  ClusterCoef=.ClusterCoef.fun(adj)
  fMAR=function(v) sum(v^2)/sum(v)
  MAR=apply(adj, 1, fMAR)
  ### Conformity-Based Network Concepts
  Conformity=.NPC.iterate(adj)$v1
  Factorizability=1- sum( (adj-outer(Conformity,Conformity)+ diag(Conformity^2))^2 )/sum(adj^2)
  Connectivity.CF=sum(Conformity)*Conformity-Conformity^2
  Density.CF=sum(Connectivity.CF)/(Size*(Size-1))
  Centralization.CF=Size*(max(Connectivity.CF)-mean(Connectivity.CF))/((Size-1)*(Size-2))
  Heterogeneity.CF=sqrt(Size*sum(Connectivity.CF^2)/sum(Connectivity.CF)^2-1)
  #ClusterCoef.CF=.ClusterCoef.fun(outer(Conformity,Conformity)-diag(Conformity^2) )
  ClusterCoef.CF=rep(NA, Size)
  for(i in 1:Size )
    ClusterCoef.CF[i]=( sum(Conformity[-i]^2)^2 - sum(Conformity[-i]^4) )/
                      ( sum(Conformity[-i])^2 - sum(Conformity[-i]^2) )
  MAR.CF=ifelse(sum(Conformity,na.rm=T)-Conformity==0, NA,
                Conformity*(sum(Conformity^2,na.rm=T)-Conformity^2)/(sum(Conformity,na.rm=T)-Conformity))
  ### Approximate
Conformity-Based Network Concepts
  Connectivity.CF.App=sum(Conformity)*Conformity
  Density.CF.App=sum(Connectivity.CF.App)/(Size*(Size-1))
  Centralization.CF.App=Size*(max(Connectivity.CF.App)-mean(Connectivity.CF.App))/((Size-1)*(Size-2))
  Heterogeneity.CF.App=sqrt(Size*sum(Connectivity.CF.App^2)/sum(Connectivity.CF.App)^2-1)
  if(sum(Conformity,na.rm=T)==0)
  {
    warning(paste("The sum of conformities equals zero.\n",
                  "Maybe you used an input adjacency matrix with lots of zeroes?\n",
                  "Specifically, sum(Conformity,na.rm=T)==0."));
    MAR.CF.App= rep(NA,Size)
    ClusterCoef.CF.App= rep(NA,Size)
  } #end of if
  if(sum(Conformity,na.rm=T) !=0)
  {
    MAR.CF.App=Conformity*sum(Conformity^2,na.rm=T) /sum(Conformity,na.rm=T)
    ClusterCoef.CF.App=rep((sum(Conformity^2)/sum(Conformity))^2,Size)
  }# end of if
  output=list(
    Factorizability =Factorizability,
    fundamentalNCs=list(
      ScaledConnectivity=Connectivity/max(Connectivity,na.rm=T),
      Connectivity=Connectivity,
      ClusterCoef=ClusterCoef,
      MAR=MAR,
      Density=Density,
      Centralization =Centralization,
      Heterogeneity= Heterogeneity),
    conformityBasedNCs=list(
      Conformity=Conformity,
      Connectivity.CF=Connectivity.CF,
      ClusterCoef.CF=ClusterCoef.CF,
      MAR.CF=MAR.CF,
      Density.CF=Density.CF,
      Centralization.CF =Centralization.CF,
      Heterogeneity.CF= Heterogeneity.CF),
    approximateConformityBasedNCs=list(
      Conformity=Conformity,
      Connectivity.CF.App= Connectivity.CF.App,
      ClusterCoef.CF.App=ClusterCoef.CF.App,
      MAR.CF.App=MAR.CF.App,
      Density.CF.App= Density.CF.App,
      Centralization.CF.App =Centralization.CF.App,
      Heterogeneity.CF.App= Heterogeneity.CF.App))
  if ( !is.null(GS) )
  {
    # These must be added to the 'fundamentalNCs' component created above;
    # a differently capitalized name would silently create a new component.
    output$fundamentalNCs$NetworkSignificance = mean(GS,na.rm=T)
    K = Connectivity/max(Connectivity)
    output$fundamentalNCs$HubNodeSignificance = sum(GS * K,na.rm=T)/sum(K^2,na.rm=T)
  }
  output
} # end of function

#===================================================================================================
fundamentalNetworkConcepts=function(adj,GS=NULL)
{
  if(!.is.adjmat(adj)) stop("The input matrix is not a
valid adjacency matrix!")
  diag(adj)=0 # Therefore adj=A-I.
  if (dim(adj)[[1]]<3)
    stop("The adjacency matrix has fewer than 3 rows. This network is trivial and will not be evaluated.")
  if (!is.null(GS))
  {
    if( length(GS) !=dim(adj)[[1]]){
      stop("The length of the node significance GS does not equal the number of rows of the adjacency matrix. length(GS) != dim(adj)[[1]]. GS should be a vector whose components correspond to the nodes.")}}
  Size=dim(adj)[1]
  ### Fundamental Network Concepts
  Connectivity=apply(adj, 2, sum) # Within Module Connectivities
  Density=sum(Connectivity)/(Size*(Size-1))
  Centralization=Size*(max(Connectivity)-mean(Connectivity))/((Size-1)*(Size-2))
  Heterogeneity=sqrt(Size*sum(Connectivity^2)/sum(Connectivity)^2-1)
  ClusterCoef=.ClusterCoef.fun(adj)
  fMAR=function(v) sum(v^2)/sum(v)
  MAR=apply(adj, 1, fMAR)
  ScaledConnectivity=Connectivity/max(Connectivity,na.rm=T)
  output=list(
    Connectivity=Connectivity,
    ScaledConnectivity=ScaledConnectivity,
    ClusterCoef=ClusterCoef,
    MAR=MAR,
    Density=Density,
    Centralization =Centralization,
    Heterogeneity= Heterogeneity)
  if ( !is.null(GS) )
  {
    output$NetworkSignificance = mean(GS,na.rm=T)
    output$HubNodeSignificance = sum(GS * ScaledConnectivity,na.rm=T)/sum(ScaledConnectivity^2,na.rm=T)
  }
  output
} # end of function

#==========================================================================================================
#
# Density function
#
#==========================================================================================================

# File: WGCNA/R/conformityDecomposition.R
#
conformityDecomposition = function (adj, Cl = NULL)
{
  if ( is.null(dim(adj) )) stop("Input adj is not a matrix or data frame. ")
  if ( dim(adj)[[1]] < 3)
    stop("The adjacency matrix has fewer than 3 rows.
This network is trivial and will not be evaluated.")
  if (!.is.adjmat(adj)) stop("The input matrix is not a valid adjacency matrix!")
  diag(adj) = 0
  if (!is.null(Cl))
  {
    if (length(Cl) != dim(adj)[[1]])
    {
      stop(paste("The length of the class assignment Cl does not equal the number",
                 "of rows of the adjacency matrix. length(Cl) != dim(adj)[[1]]. \n",
                 "Something is wrong with your input"))
    }
    if (sum(is.na(Cl))>0 ) stop("Cl must not contain missing values (NA)." )
  }
  A.CF=matrix(0, nrow=dim(adj)[[1]], ncol= dim(adj)[[2]] )
  diag(A.CF)=1
  if ( is.null(Cl) )
  {
    Conformity = .NPC.iterate(adj)$v1
    if (sum(adj^2,na.rm=T)==0) {Factorizability=NA} else {
      A.CF=outer(Conformity, Conformity) - diag(Conformity^2)
      Factorizability = 1 - sum((adj - A.CF)^2)/sum(adj^2)}
    diag(A.CF)=1
    output = list(A.CF=A.CF, Conformity=data.frame( Conformity),
                  IntermodularAdjacency=1, Factorizability = Factorizability)
  }
  if ( !is.null(Cl) )
  {
    Cl=factor(Cl)
    Cl.level=levels( Cl )
    if ( length(Cl.level)>100 )
      warning(paste("Your class assignment variable Cl contains", length(Cl.level),
                    "different classes. I assume this is a proper class assignment variable. But if not, stop the calculation, e.g.
by using the Esc key on your keyboard."))
    Conformity=rep(NA, length(Cl) )
    listConformity=list()
    IntramodularFactorizability=rep(NA, length(Cl.level) )
    IntermodularAdjacency= matrix(0,nrow=length(Cl.level),ncol=length(Cl.level) )
    diag(IntermodularAdjacency)=1
    IntermodularAdjacency=data.frame(IntermodularAdjacency)
    dimnames(IntermodularAdjacency)[[1]]=as.character(Cl.level)
    dimnames(IntermodularAdjacency)[[2]]=as.character(Cl.level)
    numeratorFactorizability=0
    for (i in 1:length(Cl.level) )
    {
      restclass= Cl== Cl.level[i]
      if (sum(restclass)==1)
      {
        A.help=0; CF.help =0;
        Conformity[restclass]=CF.help;
        A.CF[restclass,restclass]=CF.help*CF.help - CF.help^2
      }
      if (sum(restclass)==2)
      {
        A.help=adj[restclass,restclass];diag(A.help)=0
        CFvalue=sqrt(adj[restclass,restclass][1,2]);
        CF.help= c(CFvalue , CFvalue )
        Conformity[restclass]=CF.help
        A.CF[restclass,restclass]=outer(CF.help, CF.help) - diag(CF.help^2)
      }
      if (sum(restclass)>2)
      {
        A.help=adj[restclass,restclass];diag(A.help)=0 ;
        CF.help = .NPC.iterate(A.help )$v1
        Conformity[restclass]=CF.help
        A.CF[restclass,restclass]=outer(CF.help, CF.help) - diag(CF.help^2)
      }
      if (length(CF.help)>1)
        {numeratorFactorizability= numeratorFactorizability+sum( (A.help-outer(CF.help,CF.help)+diag(CF.help^2) )^2 )}
      listConformity[[i]] =CF.help
      if (sum(A.help^2,na.rm=T)==0 | length(CF.help)==1 )
        {IntramodularFactorizability[i]=NA} else {
        IntramodularFactorizability[i] = 1 - sum((A.help - outer(CF.help, CF.help) + diag(CF.help^2))^2)/sum(A.help^2,na.rm=T) }
    } # end of for loop over i
    if ( length(Cl.level)==1) {IntermodularAdjacency[1,1]=1} else {
      for (i in 1:(length(Cl.level)-1) )
      {
        for (j in (i+1):length(Cl.level) )
        {
          restclass1= Cl== Cl.level[i]
          restclass2= Cl== Cl.level[j]
          A.inter=adj[restclass1,restclass2]
          mean.CF1=mean(listConformity[[i]], na.rm=T)
          mean.CF2=mean(listConformity[[j]], na.rm=T)
          if ( mean.CF1* mean.CF2 != 0 )
          {
            IntermodularAdjacency[i,j]= mean(A.inter,na.rm=T)/(mean.CF1* mean.CF2)
            IntermodularAdjacency[j,i]= IntermodularAdjacency[i,j]
          }
if ( length(listConformity[[i]])==1 | length(listConformity[[j]])==1 )
          {
            numeratorFactorizability= numeratorFactorizability+
              2*sum( (A.inter- IntermodularAdjacency[i,j]* listConformity[[i]] * listConformity[[j]])^2 )
            A.CF[restclass1,restclass2]= IntermodularAdjacency[i,j]* listConformity[[i]] *listConformity[[j]]
            A.CF[restclass2,restclass1]= IntermodularAdjacency[j,i]* listConformity[[j]] *listConformity[[i]]
          } else {
            numeratorFactorizability= numeratorFactorizability+
              2*sum( (A.inter- IntermodularAdjacency[i,j]*outer( listConformity[[i]] , listConformity[[j]]) )^2 )
            A.CF[restclass1,restclass2]= IntermodularAdjacency[i,j]* outer(listConformity[[i]], listConformity[[j]] )
            A.CF[restclass2,restclass1]= IntermodularAdjacency[j,i]* outer(listConformity[[j]], listConformity[[i]] )
          } # end of else
        } # end of for (j in
      } # end of for (i in
      diag(adj)=NA
      Factorizability= 1-numeratorFactorizability/ sum(adj^2,na.rm=T)
    } # end of if else statement
    diag(A.CF)=1
    output = list(A.CF=A.CF, Conformity=Conformity,
                  IntermodularAdjacency= IntermodularAdjacency,
                  Factorizability = Factorizability, Cl.level =Cl.level,
                  IntramodularFactorizability= IntramodularFactorizability,
                  listConformity=listConformity )
  } # end of if ( !is.null(Cl) )
  output
}

# File: WGCNA/R/quantileC.R
# This function calls the C++ implementation of column quantile.
pmedian = function(...) { pquantile(prob = 0.5, ...)}

colQuantileC = function(data, p)
{
  data = as.matrix(data)
  storage.mode(data) = "double";
  #if (sum(is.na(data))>0)
  #  stop("Missing values are not handled correctly yet. Sorry!");
  p = as.double(as.character(p));
  if (length(p) > 1)
    stop("This function only calculates one quantile at a time, for now.
Sorry!"); if ( (p<0) || (p>1) ) stop(paste("Probability", p, "is out of the allowed range between 0 and 1.")); .Call("quantileC_call", data, p, PACKAGE = "WGCNA"); } rowQuantileC = function(data, p) { data = as.matrix(data) storage.mode(data) = "double"; #if (sum(is.na(data))>0) # stop("Missing values are not handled correctly yet. Sorry!"); ncol = ncol(data); nrow = nrow(data); quantiles = rep(0, nrow); p = as.double(as.character(p)); if (length(p) > 1) stop("This function only calculates one quantile at a time, for now. Sorry!"); if ( (p<0) || (p>1) ) stop(paste("Probability", p, "is out of the allowed range between 0 and 1.")); .Call("rowQuantileC_call", data, p, PACKAGE = "WGCNA"); } pquantile = function(prob, ...) { pars = list(...) pquantile.fromList(pars, prob); } pquantile.fromList = function(dataList, prob) { dn = .checkListDimConsistencyAndGetDimnames(dataList); if (length(prob) > 1) warning("pquantile2: only the first element of 'prob' will be used."); q = .Call("parallelQuantile", dataList, as.numeric(prob[1])); dimnames(q) = dn; q } pmean = function(..., weights = NULL) { pmean.fromList(dataList = list(...), weights = weights) } pmean.fromList = function(dataList, weights = NULL) { dn = .checkListDimConsistencyAndGetDimnames(dataList); if (is.null(weights)) weights = rep(1, length(dataList)) q = .Call("parallelMean", dataList, as.numeric(weights)); dimnames(q) = dn; q } #pmin.wgcna = function(...) #{ # pminWhich.fromList(dataList = list(...))$min #} pminWhich.fromList = function(dataList) { dn = .checkListDimConsistencyAndGetDimnames(dataList); q = .Call("parallelMin", dataList); dimnames(q$min) = dimnames(q$which) = dn; q } minWhichMin = function(x, byRow = FALSE, dims = 1) { d = dim(x); if (length(d) <= 2 && dims==1) { x = as.matrix(x); .Call("minWhich_call", x, as.integer(byRow), PACKAGE = "WGCNA") } else { if (dims < 1 || dims >= length(d)) stop("Invalid 'dims'. 
Must be between 1 and length(dim(x))-1.");
    d1 = d[1:dims]; d2 = d[(dims+1):length(d)];
    dim(x) = c(prod(d1), prod(d2));
    out = .Call("minWhich_call", x, as.integer(byRow));
    if (byRow && length(d1) > 1)
    {
      dim(out$min) = d1; dim(out$which) = d1;
    } else if (!byRow && length(d2) > 1)
    {
      dim(out$min) = d2; dim(out$which) = d2;
    }
    out;
  }
}

# File: WGCNA/R/kMEcomparisonScatterplot.R
# Plots the kME values of genes in two groups of expression data for each module in a supplied color vector
kMEcomparisonScatterplot <- function (datExpr1, datExpr2, colorh, inA=NULL, inB=NULL,
   MEsA=NULL, MEsB=NULL, nameA="A", nameB="B", plotAll=FALSE, noGrey=TRUE, maxPlot=1000, pch=19,
   fileName = if (plotAll) paste("kME_correlations_between_",nameA,"_and_",nameB,"_all.pdf",sep="") else
              paste("kME_correlations_between_",nameA,"_and_",nameB,"_inMod.pdf",sep=""), ...){
  # First, get the data
  if (is.null(dim(datExpr1))) {
    write ("Error: datExpr1 must be a matrix",""); return(0) }
  if (is.null(datExpr2)){
    datA = datExpr1[inA,]
    datB = datExpr1[inB,]
    if ((is.null(dim(datA)))|(is.null(dim(datB)))) {
      write ("Error: Check input for inA and inB.",""); return(0) }
  } else {
    if (is.null(dim(datExpr2))) {
      write ("Error: datExpr2 must be a matrix",""); return(0) }
    datA = datExpr1
    datB = datExpr2
  }
  if ((dim(datA)[2]!=length(colorh))|(dim(datB)[2]!=length(colorh))){
    write ("Error: Both sets of input data and color vector must all have same length.",""); return(0) }
  if(is.null(MEsA))
    MEsA = (moduleEigengenes(datA, colors=as.character(colorh), excludeGrey=noGrey))$eigengenes
  if(is.null(MEsB))
    MEsB = (moduleEigengenes(datB, colors=as.character(colorh), excludeGrey=noGrey))$eigengenes
  mods = substring(names(MEsA),3)
  kMEsA = as.data.frame(cor(datA,MEsA,use="p"))
  kMEsB = as.data.frame(cor(datB,MEsB,use="p"))
  # Second, make the plots
  xlab = paste("kME values in",nameA)
  ylab = paste("kME values in",nameB)
  printFlush(paste("Plotting kME scatterplots into file",
fileName));
  if (plotAll){
    pdf(file=fileName)
    numPlot = min(maxPlot,length(colorh));
    these = sample(1:length(colorh),numPlot)
    for (i in 1:length(mods)){
      plotCol = mods[i]; if(mods[i]=="white") plotCol="black"
      verboseScatterplot(kMEsA[these,i],kMEsB[these,i],main=mods[i],
                         xlab=xlab,ylab=ylab,pch=pch,col=plotCol,...)
    }
    dev.off()
    return("DONE - Plotted All")
  }
  pdf(file=fileName)
  for (i in 1:length(mods)){
    these = colorh==mods[i]
    plotCol = mods[i]; if(mods[i]=="white") plotCol="black"
    verboseScatterplot(kMEsA[these,i],kMEsB[these,i],main=mods[i],
                       xlab=xlab,ylab=ylab,pch=pch,col=plotCol,...)
  }
  dev.off()
  return("DONE - Plotted only in module")
}

# File: WGCNA/R/labeledHeatmap.R
#---------------------------------------------------------------------------------------------------------
# labeledHeatmap.R
#---------------------------------------------------------------------------------------------------------

#--------------------------------------------------------------------------
#
# .reverseRows = function(Matrix)
#
#--------------------------------------------------------------------------
#

.reverseRows = function(Matrix)
{
  ind = seq(from=dim(Matrix)[1], to=1, by=-1);
  Matrix[ind,, drop = FALSE];
  #Matrix
}

.extend = function(x, n)
{
  nRep = ceiling(n/length(x));
  rep(x, nRep)[1:n];
}

# Adapt a numeric index to a subset
# Aim: if 'index' is a numeric index of special entries of a vector,
# create a new index that references 'subset' elements of the vector

.restrictIndex = function(index, subset)
{
  out = match(index, subset);
  out[!is.na(out)];
}

#--------------------------------------------------------------------------
#
# labeledHeatmap
#
#--------------------------------------------------------------------------
# This function plots a heatmap of the specified matrix
# and labels the x and y axes with the given labels.
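#
# Illustrative call (hypothetical data; not part of the original source):
#   mat = matrix(rnorm(12), 3, 4)
#   labeledHeatmap(Matrix = mat,
#                  xLabels = paste("Trait", 1:4),
#                  yLabels = paste0("ME", c("blue", "brown", "turquoise")),
#                  colorLabels = TRUE,
#                  colors = blueWhiteRed(50),
#                  textMatrix = signif(mat, 2),
#                  setStdMargins = TRUE)
#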
# It is assumed that the number of entries in xLabels and yLabels is consistent # with the dimensions in. # If colorLabels==TRUE, the labels are not printed and instead interpreted as colors -- # -- a simple symbol with the appropriate color is printed instead of the label. # The x,yLabels are expected to have the form "..color" as in "MEgrey" or "PCturquoise". # xSymbol, ySymbols are additional markers that can be placed next to color labels labeledHeatmap = function ( Matrix, xLabels, yLabels = NULL, xSymbols = NULL, ySymbols = NULL, colorLabels = NULL, xColorLabels = FALSE, yColorLabels = FALSE, checkColorsValid = TRUE, invertColors = FALSE, setStdMargins = TRUE, xLabelsPosition = "bottom", xLabelsAngle = 45, xLabelsAdj = 1, yLabelsPosition = "left", xColorWidth = 2*strheight("M"), yColorWidth = 2*strwidth("M"), xColorOffset = strheight("M")/3, yColorOffset = strwidth("M")/3, # Content of heatmap colorMatrix = NULL, colors = NULL, naColor = "grey", textMatrix = NULL, cex.text = NULL, textAdj = c(0.5, 0.5), # labeling of rows and columns cex.lab = NULL, cex.lab.x = cex.lab, cex.lab.y = cex.lab, colors.lab.x = 1, colors.lab.y = 1, font.lab.x = 1, font.lab.y = 1, bg.lab.x = NULL, bg.lab.y = NULL, x.adj.lab.y = 1, plotLegend = TRUE, keepLegendSpace = plotLegend, legendLabel = "", cex.legendLabel = 1, # Separator line specification verticalSeparator.x = NULL, verticalSeparator.col = 1, verticalSeparator.lty = 1, verticalSeparator.lwd = 1, verticalSeparator.ext = 0, verticalSeparator.interval = 0, horizontalSeparator.y = NULL, horizontalSeparator.col = 1, horizontalSeparator.lty = 1, horizontalSeparator.lwd = 1, horizontalSeparator.ext = 0, horizontalSeparator.interval = 0, # optional restrictions on which rows and columns to actually show showRows = NULL, showCols = NULL, # Other arguments... ... 
) { textFnc = match.fun("text"); if (!is.null(colorLabels)) {xColorLabels = colorLabels; yColorLabels = colorLabels; } if (is.null(yLabels) & (!is.null(xLabels)) & (dim(Matrix)[1]==dim(Matrix)[2])) yLabels = xLabels; nCols = ncol(Matrix); nRows = nrow(Matrix); if (length(xLabels)!=nCols) stop("Length of 'xLabels' must equal the number of columns in 'Matrix.'"); if (length(yLabels)!=nRows) stop("Length of 'yLabels' must equal the number of rows in 'Matrix.'"); if (is.null(showRows)) showRows = c(1:nRows); if (is.null(showCols)) showCols = c(1:nCols); nShowCols = length(showCols); nShowRows = length(showRows); if (nShowCols==0) stop("'showCols' is empty."); if (nShowRows==0) stop("'showRows' is empty."); if (checkColorsValid) { xValidColors = !is.na(match(substring(xLabels, 3), colors())); yValidColors = !is.na(match(substring(yLabels, 3), colors())); } else { xValidColors = rep(TRUE, length(xLabels)); yValidColors = rep(TRUE, length(yLabels)); } if (sum(xValidColors)>0) xColorLabInd = xValidColors[showCols] if (sum(!xValidColors)>0) xTextLabInd = !xValidColors[showCols] if (sum(yValidColors)>0) yColorLabInd = yValidColors[showRows] if (sum(!yValidColors)>0) yTextLabInd = !yValidColors[showRows] if (setStdMargins) { if (xColorLabels & yColorLabels) { par(mar=c(2,2,3,5)+0.2); } else { par(mar = c(7,7,3,5)+0.2); } } xLabels.show = xLabels[showCols]; yLabels.show = yLabels[showRows]; if (!is.null(xSymbols)) { if (length(xSymbols)!=nCols) stop("When 'xSymbols' are given, their length must equal the number of columns in 'Matrix.'"); xSymbols.show = xSymbols[showCols]; } else xSymbols.show = NULL; if (!is.null(ySymbols)) { if (length(ySymbols)!=nRows) stop("When 'ySymbols' are given, their length must equal the number of rows in 'Matrix.'"); ySymbols.show = ySymbols[showRows]; } else ySymbols.show = NULL; xLabPos = charmatch(xLabelsPosition, c("bottom", "top")); if (is.na(xLabPos)) stop("Argument 'xLabelsPosition' must be (a unique abbreviation of) 'bottom', 'top'"); 
yLabPos = charmatch(yLabelsPosition, c("left", "right")); if (is.na(yLabPos)) stop("Argument 'yLabelsPosition' must be (a unique abbreviation of) 'left', 'right'"); if (is.null(colors)) colors = heat.colors(30); if (invertColors) colors = rev(colors); labPos = .heatmapWithLegend(Matrix[showRows, showCols, drop = FALSE], signed = FALSE, colorMatrix = colorMatrix, colors = colors, naColor = naColor, cex.legendAxis = cex.lab, plotLegend = plotLegend, keepLegendSpace = keepLegendSpace, legendLabel = legendLabel, cex.legendLabel = cex.legendLabel, ...) plotbox = labPos$box; xmin = plotbox[1]; xmax = plotbox[2]; ymin = plotbox[3]; yrange = plotbox[4]-ymin; ymax = plotbox[4]; xrange = xmax - xmin; # The positions below are for showCols/showRows-restriceted data xLeft = labPos$xLeft; xRight = labPos$xRight; yTop = labPos$yTop; yBot = labPos$yBot; xspacing = labPos$xMid[2] - labPos$xMid[1]; yspacing = abs(labPos$yMid[2] - labPos$yMid[1]); offsetx = .extend(xColorOffset, nCols)[showCols] offsety = .extend(yColorOffset, nRows)[showRows] xColW = xColorWidth; yColW = yColorWidth; # Additional angle-dependent offsets for x axis labels textOffsetY = strheight("M") * cos(xLabelsAngle/180 * pi); if (any(xValidColors)) offsetx = offsetx + xColW; if (any(yValidColors)) offsety = offsety + yColW; # Create the background for column and row labels. 
extension.left = par("mai")[2] * # left margin width in inches par("cxy")[1] / par("cin")[1] # character size in user corrdinates/character size in inches extension.right = par("mai")[4] * # right margin width in inches par("cxy")[1] / par("cin")[1] # character size in user corrdinates/character size in inches extension.bottom = par("mai")[1] * par("cxy")[2] / par("cin")[2]- # character size in user corrdinates/character size in inches offsetx extension.top = par("mai")[3] * par("cxy")[2] / par("cin")[2]- # character size in user corrdinates/character size in inches offsetx figureBox = par("usr"); figXrange = figureBox[2] - figureBox[1]; figYrange = figureBox[4] - figureBox[3]; if (!is.null(bg.lab.x)) { bg.lab.x = .extend(bg.lab.x, nCols)[showCols]; if (xLabPos==1) { y0 = ymin; ext = extension.bottom; sign = 1; } else { y0 = ymax; ext = extension.top; sign = -1; } figureDims = par("pin"); angle = xLabelsAngle/180*pi; ratio = figureDims[1]/figureDims[2] * figYrange/figXrange; ext.x = -sign * ext * 1/tan(angle)/ratio; ext.y = sign * ext * sign(sin(angle)) #offset = (sum(xValidColors)>0) * xColW + offsetx + textOffsetY; offset = offsetx + textOffsetY; for (cc in 1:nShowCols) polygon(x = c(xLeft[cc], xLeft[cc], xLeft[cc] + ext.x, xRight[cc] + ext.x, xRight[cc], xRight[cc]), y = c(y0, y0-sign*offset[cc], y0-sign*offset[cc] - ext.y, y0-sign*offset[cc] - ext.y, y0-sign*offset[cc], y0), border = bg.lab.x[cc], col = bg.lab.x[cc], xpd = TRUE); } if (!is.null(bg.lab.y)) { bg.lab.y = .extend(bg.lab.y, nRows) reverseRows = TRUE; if (reverseRows) bg.lab.y = rev(bg.lab.y); bg.lab.y = bg.lab.y[showRows]; if (yLabPos==1) { xl = xmin-extension.left; xr = xmin; } else { xl = xmax; xr = xmax + extension.right; } for (r in 1:nShowRows) rect(xl, yBot[r], xr, yTop[r], col = bg.lab.y[r], border = bg.lab.y[r], xpd = TRUE); } colors.lab.x = .extend(colors.lab.x, nCols)[showCols]; font.lab.x = .extend(font.lab.x, nCols)[showCols]; # Write out labels if (sum(!xValidColors)>0) { xLabYPos = 
if(xLabPos==1) ymin - offsetx- textOffsetY else ymax + offsetx + textOffsetY; if (is.null(cex.lab)) cex.lab = 1; mapply(textFnc, x = labPos$xMid[xTextLabInd], y = xLabYPos, labels = xLabels.show[xTextLabInd], col = colors.lab.x[xTextLabInd], font = font.lab.x[xTextLabInd], MoreArgs = list(srt = xLabelsAngle, adj = xLabelsAdj, xpd = TRUE, cex = cex.lab.x)); } if (sum(xValidColors)>0) { baseY = if (xLabPos==1) ymin-offsetx else ymax + offsetx; deltaY = if (xLabPos==1) xColW else -xColW; rect(xleft = labPos$xMid[xColorLabInd] - xspacing/2, ybottom = baseY[xColorLabInd], xright = labPos$xMid[xColorLabInd] + xspacing/2, ytop = baseY[xColorLabInd] + deltaY, density = -1, col = substring(xLabels.show[xColorLabInd], 3), border = substring(xLabels.show[xColorLabInd], 3), xpd = TRUE) if (!is.null(xSymbols)) mapply(textFnc, x = labPos$xMid[xColorLabInd], y = baseY[xColorLabInd] -textOffsetY - sign(deltaY)* strwidth("M")/3, labels = xSymbols.show[xColorLabInd], col = colors.lab.x[xColorLabInd], font = font.lab.x[xColorLabInd], MoreArgs = list( adj = xLabelsAdj, xpd = TRUE, srt = xLabelsAngle, cex = cex.lab.x)); } x.adj.lab.y = .extend(x.adj.lab.y, nRows)[showRows] if (yLabPos==1) { marginWidth = par("mai")[2] / par("pin")[1] * xrange } else { marginWidth = par("mai")[4] / par("pin")[1] * xrange } xSpaceForYLabels = marginWidth-2*strwidth("M")/3 - ifelse(yValidColors[showRows], yColW, 0); xPosOfYLabels.relative = xSpaceForYLabels * (1-x.adj.lab.y) + offsety colors.lab.y = .extend(colors.lab.y, nRows)[showRows]; font.lab.y = .extend(font.lab.y, nRows)[showRows]; if (sum(!yValidColors)>0) { if (is.null(cex.lab)) cex.lab = 1; if (yLabPos==1) { x = xmin - strwidth("M")/3 - xPosOfYLabels.relative[yTextLabInd] adj = x.adj.lab.y[yTextLabInd] } else { x = xmax + strwidth("M")/3 + xPosOfYLabels.relative[yTextLabInd]; adj = 1-x.adj.lab.y[yTextLabInd]; } mapply(textFnc, y = labPos$yMid[yTextLabInd], labels = yLabels.show[yTextLabInd], adj = lapply(adj, c, 0.5), x = x, col = 
colors.lab.y[yTextLabInd], font = font.lab.y[yTextLabInd], MoreArgs = list(srt = 0, xpd = TRUE, cex = cex.lab.y)); } if (sum(yValidColors)>0) { if (yLabPos==1) { xl = xmin-offsety; xr = xmin-offsety + yColW; xtext = xmin - strwidth("M")/3 - xPosOfYLabels.relative[yColorLabInd]; adj = x.adj.lab.y[yColorLabInd] } else { xl = xmax + offsety - yColW; xr = xmax + offsety; xtext = xmin + strwidth("M")/3 + xPosOfYLabels.relative[yColorLabInd] adj = 1-x.adj.lab.y[yColorLabInd]; } rect(xleft = xl[yColorLabInd], ybottom = rev(labPos$yMid[yColorLabInd]) - yspacing/2, xright = xr[yColorLabInd], ytop = rev(labPos$yMid[yColorLabInd]) + yspacing/2, density = -1, col = substring(rev(yLabels.show[yColorLabInd]), 3), border = substring(rev(yLabels.show[yColorLabInd]), 3), xpd = TRUE) #for (i in yColorLabInd) #{ # lines(c(xmin- offsetx, xmin- offsetx+yColW), y = rep(labPos$yMid[i] - yspacing/2, 2), col = i, xpd = TRUE) # lines(c(xmin- offsetx, xmin- offsetx+yColW), y = rep(labPos$yMid[i] + yspacing/2, 2), col = i, xpd = TRUE) #} if (!is.null(ySymbols)) mapply(textFnc, y = labPos$yMid[yColorLabInd], labels = ySymbols.show[yColorLabInd], adj = lapply(adj, c, 0.5), x = xtext, col = colors.lab.y[yColorLabInd], font = font.lab.y[yColorLabInd], MoreArgs = list(srt = 0, xpd = TRUE, cex = cex.lab.y)); } # Draw separator lines, if requested showCols.ext = c(if (1 %in% showCols) 0 else NULL, showCols); showCols.shift = if (0 %in% showCols.ext) 1 else 0; if (length(verticalSeparator.x) > 0) { if (any(verticalSeparator.x < 0 | verticalSeparator.x > nCols)) stop("If given. 
'verticalSeparator.x' must all be between 0 and the number of columns."); colSepShowIndex = which(verticalSeparator.x %in% showCols.ext); verticalSeparator.x.show = .restrictIndex(verticalSeparator.x, showCols.ext)-showCols.shift; } else if (verticalSeparator.interval > 0) { verticalSeparator.x.show = verticalSeparator.x = seq(from = verticalSeparator.interval, by = verticalSeparator.interval, length.out = floor(length(showCols)/verticalSeparator.interval)); colSepShowIndex = 1:length(verticalSeparator.x); } else verticalSeparator.x.show = NULL; if (length(verticalSeparator.x.show) > 0) { nLines = length(verticalSeparator.x); vs.col = .extend(verticalSeparator.col, nLines)[colSepShowIndex]; vs.lty = .extend(verticalSeparator.lty, nLines)[colSepShowIndex]; vs.lwd = .extend(verticalSeparator.lwd, nLines)[colSepShowIndex]; vs.ext = .extend(verticalSeparator.ext, nLines)[colSepShowIndex]; x.lines = ifelse(verticalSeparator.x.show>0, labPos$xRight[verticalSeparator.x.show], labPos$xLeft[1]); nLines.show = length(verticalSeparator.x.show); for (l in 1:nLines.show) lines(rep(x.lines[l], 2), c(ymin, ymax), col = vs.col[l], lty = vs.lty[l], lwd = vs.lwd[l]); angle = xLabelsAngle/180*pi; if (angle==0) angle = pi/2; if (xLabelsPosition =="bottom") { sign = 1; y0 = ymin; ext = extension.bottom; } else { sign = -1; y0 = ymax; ext = extension.top; } figureDims = par("pin"); ratio = figureDims[1]/figureDims[2] * figYrange/figXrange; ext.x = -sign * ext * 1/tan(angle)/ratio; ext.y = sign * ext * sign(sin(angle)) #offset = (sum(xValidColors)>0) * xColW + offsetx + textOffsetY; offset = offsetx + textOffsetY; for (l in 1:nLines.show) lines(c(x.lines[l], x.lines[l], x.lines[l] + vs.ext[l] * ext.x[l]), c(y0, y0-sign*offset[l], y0-sign*offset[l] - vs.ext[l] * ext.y[l]), col = vs.col[l], lty = vs.lty[l], lwd = vs.lwd[l], xpd = TRUE); } showRows.ext = c(if (1 %in% showRows) 0 else NULL, showRows); showRows.shift = if (0 %in% showRows.ext) 1 else 0; if (length(horizontalSeparator.y) >0) { 
if (any(horizontalSeparator.y < 0 | horizontalSeparator.y > nRows)) stop("If given, 'horizontalSeparator.y' must all be between 0 and the number of rows."); rowSepShowIndex = which( horizontalSeparator.y %in% showRows.ext); horizontalSeparator.y.show = .restrictIndex(horizontalSeparator.y, showRows.ext)-showRows.shift; } else if (horizontalSeparator.interval > 0) { horizontalSeparator.y.show = horizontalSeparator.y = seq(from = horizontalSeparator.interval, by = horizontalSeparator.interval, length.out = floor(length(showRows)/horizontalSeparator.interval)); rowSepShowIndex = 1:length(horizontalSeparator.y); } else horizontalSeparator.y.show = NULL; if (length(horizontalSeparator.y.show) > 0) { reverseRows = TRUE; if (reverseRows) { horizontalSeparator.y.show = nShowRows - horizontalSeparator.y.show+1; y.lines = ifelse( horizontalSeparator.y.show <=nShowRows, labPos$yBot[horizontalSeparator.y.show], labPos$yTop[nShowRows]); } else { y.lines = ifelse( horizontalSeparator.y.show > 0, labPos$yBot[horizontalSeparator.y.show], labPos$yTop[1]); } nLines = length(horizontalSeparator.y); vs.col = .extend(horizontalSeparator.col, nLines)[rowSepShowIndex]; vs.lty = .extend(horizontalSeparator.lty, nLines)[rowSepShowIndex]; vs.lwd = .extend(horizontalSeparator.lwd, nLines)[rowSepShowIndex]; vs.ext = .extend(horizontalSeparator.ext, nLines)[rowSepShowIndex]; nLines.show = length(horizontalSeparator.y.show); for (l in 1:nLines.show) { if (yLabPos==1) { xl = xmin-vs.ext[l]*extension.left; xr = xmax; } else { xl = xmin; xr = xmax + vs.ext[l]*extension.right; } lines(c(xl, xr), rep(y.lines[l], 2), col = vs.col[l], lty = vs.lty[l], lwd = vs.lwd[l], xpd = TRUE); } } if (!is.null(textMatrix)) { if (is.null(cex.text)) cex.text = par("cex"); if (is.null(dim(textMatrix))) if (length(textMatrix)==prod(dim(Matrix))) dim(textMatrix)=dim(Matrix); if (!isTRUE(all.equal(dim(textMatrix), dim(Matrix)))) stop("labeledHeatmap: textMatrix was given, but has dimensions incompatible with Matrix."); 
for (rw in 1:nShowRows) for (cl in 1:nShowCols) { text(labPos$xMid[cl], labPos$yMid[rw], as.character(textMatrix[showRows[rw],showCols[cl]]), xpd = TRUE, cex = cex.text, adj = textAdj); } } axis(1, labels = FALSE, tick = FALSE) axis(2, labels = FALSE, tick = FALSE) axis(3, labels = FALSE, tick = FALSE) axis(4, labels = FALSE, tick = FALSE) invisible(labPos) } #=================================================================================================== # # multi-page labeled heatmap # #=================================================================================================== labeledHeatmap.multiPage = function( # Input data and ornaments Matrix, xLabels, yLabels = NULL, xSymbols = NULL, ySymbols = NULL, textMatrix = NULL, # Paging options rowsPerPage = NULL, maxRowsPerPage = 20, colsPerPage = NULL, maxColsPerPage = 10, addPageNumberToMain = TRUE, # Further arguments to labeledHeatmap zlim = NULL, signed = TRUE, main = "", ...) { nr = nrow(Matrix); nc = ncol(Matrix); if (is.null(rowsPerPage)) { nPages.rows = ceiling(nr/maxRowsPerPage); rowsPerPage = allocateJobs(nr, nPages.rows); } else nPages.rows = length(rowsPerPage); if (is.null(colsPerPage)) { nPages.cols = ceiling(nc/maxColsPerPage); colsPerPage = allocateJobs(nc, nPages.cols); } else nPages.cols = length(colsPerPage); if (is.null(zlim)) { zlim = range(Matrix, na.rm = TRUE) if (signed) zlim = c(-max(abs(zlim)), max(abs(zlim))); } page = 1; multiPage = (nPages.cols > 1 | nPages.rows > 1) for (page.col in 1:nPages.cols) for (page.row in 1:nPages.rows) { rows = rowsPerPage[[page.row]]; cols = colsPerPage[[page.col]]; main.1 = main; if (addPageNumberToMain & multiPage) main.1 = spaste(main, "(page ", page, ")"); labeledHeatmap(Matrix = Matrix, xLabels = xLabels, xSymbols = xSymbols, yLabels = yLabels, ySymbols = ySymbols, textMatrix = textMatrix, zlim = zlim, main = main.1, showRows = rows, showCols = cols, ...); page = page + 1; } } 
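# The multi-page heatmap above splits the nr rows into ceiling(nr/maxRowsPerPage)
# pages (and the columns likewise) using allocateJobs. A minimal stand-alone
# sketch of that contiguous-block allocation follows; the helper .pagingSketch
# is a hypothetical name used only for illustration (it is not part of the WGCNA
# API) and assumes near-equal page sizes with earlier pages taking the remainder.
# .pagingSketch = function(n, maxPerPage)
# {
#   nPages = ceiling(n/maxPerPage);
#   sizes = rep(n %/% nPages, nPages);
#   extra = n %% nPages;
#   if (extra > 0) sizes[1:extra] = sizes[1:extra] + 1;
#   # Contiguous index blocks; e.g. n = 25, maxPerPage = 10 gives 1:9, 10:17, 18:25.
#   split(seq_len(n), rep(seq_len(nPages), sizes));
# }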
WGCNA/R/modulePreservation.R0000644000176200001440000025777314510207710015346 0ustar liggesusers# module preservation for networks specified by adjacency matrices, not expression data. # this version can handle both expression and adjacency matrices. # # Changelog: # 2010/08/04: # . Adding clusterCoeff and MAR to density preservation statistics # . Adding cor.clusterCoeff and cor.MAR to connectivity preservation statistics # . Adding silhouette width to separability statistics # Adding p-values to output. For each Z score we can add a corresponding p-value, Bonferroni-corrected p-value, # and q-value (local FDR) #===================================================================================================== # # p-value functions # #===================================================================================================== # Assumes Z in the form of a list, Z[[ref]][[test]] is a matrix of Z scores except the first column is # assumed to be moduleSize. # Returned value is log10 of the p-value .pValueFromZ = function(Z, bonf = FALSE, summaryCols = NULL, summaryInd = NULL) { p = Z # This carries over all necessary names and missing values when ref==test nRef = length(Z); for (ref in 1:nRef) { nTest = length(Z[[ref]]) for (test in 1:nTest) { zz = Z[[ref]][[test]]; if (length(zz) > 1) { names = colnames(zz); # Try to be a bit intelligent about whether to ignore the first column if (names[1]=="moduleSize") ignoreFirst = TRUE else ignoreFirst = FALSE; ncol = ncol(zz); range = c( (1+as.numeric(ignoreFirst)):ncol); p[[ref]][[test]][, range] = pnorm(as.matrix(zz[, range]), lower.tail = FALSE, log.p = TRUE)/log(10); Znames = names[range] if (bonf) { pnames = sub("Z", "log.p.Bonf", Znames, fixed= TRUE); p[[ref]][[test]][, range] = p[[ref]][[test]][, range] + log10( nrow(p[[ref]][[test]])); biggerThan1 = p[[ref]][[test]][, range] > 0; p[[ref]][[test]][, range][biggerThan1] = 0; # Remember that the p-values are stored in log form } else pnames = sub("Z", "log.p", 
Znames, fixed= TRUE); if (!is.null(summaryCols)) for (c in 1:length(summaryCols)) { medians = apply(p[[ref]][[test]][, summaryInd[[c]], drop = FALSE], 1, median, na.rm = TRUE) p[[ref]][[test]][,summaryCols[c]] = medians } colnames(p[[ref]][[test]])[range] = pnames; } } } p; } .qValueFromP = function(p, summaryCols = NULL, summaryInd = NULL) { q = p # This carries over all necessary names and missing values when ref==test nRef = length(p); for (ref in 1:nRef) { nTest = length(p[[ref]]) for (test in 1:nTest) { pp = p[[ref]][[test]]; if (length(pp) > 1) { names = colnames(pp); # Try to be a bit intelligent about whether to ignore the first column if (names[1]=="moduleSize") { ignoreFirst = TRUE } else ignoreFirst = FALSE; ncol = ncol(pp); nrow = nrow(pp); range = c( (1+as.numeric(ignoreFirst)):ncol); for (col in range) { xx = try(qvalue(10^pp[,col]), silent = TRUE) if (inherits(xx, "try-error") || length(xx)==1) { q[[ref]][[test]][, col] = rep(NA, nrow); printFlush(paste("Warning in modulePreservation: qvalue calculation failed for", "column", col, "for reference set", ref, "and test set", test)) } else q[[ref]][[test]][, col] = xx$qvalues; } colnames(q[[ref]][[test]])[range] = sub("log.p", "q", names[range], fixed= TRUE); if (!is.null(summaryCols)) for (c in 1:length(summaryCols)) { xx = log10(as.matrix(q[[ref]][[test]][, summaryInd[[c]], drop = FALSE])); # Restrict all -logs > 1000 to 1000: xx[!is.na(xx) & !is.finite(xx)] = -1000; xx[!is.na(xx) & (xx < -1000)] = -1000; medians = apply(xx, 1, median, na.rm = TRUE) q[[ref]][[test]][, summaryCols[c]] = 10^medians; } } } } q; } modulePreservation = function( multiData, multiColor, multiWeights = NULL, dataIsExpr = TRUE, networkType = "unsigned", corFnc = "cor", corOptions = "use = 'p'", referenceNetworks = 1, testNetworks = NULL, nPermutations = 100, includekMEallInSummary = FALSE, restrictSummaryForGeneralNetworks = TRUE, calculateQvalue = FALSE, randomSeed = 12345, maxGoldModuleSize = 1000, maxModuleSize = 1000, 
quickCor = 1, ccTupletSize = 2, calculateCor.kIMall = FALSE, calculateClusterCoeff = FALSE, useInterpolation = FALSE, checkData = TRUE, greyName = NULL, goldName = NULL, savePermutedStatistics = TRUE, loadPermutedStatistics = FALSE, permutedStatisticsFile = if (useInterpolation) "permutedStats-intrModules.RData" else "permutedStats-actualModules.RData", plotInterpolation = TRUE, interpolationPlotFile = "modulePreservationInterpolationPlots.pdf", discardInvalidOutput = TRUE, parallelCalculation = FALSE, verbose = 1, indent = 0) { sAF = options("stringsAsFactors") options(stringsAsFactors = FALSE); on.exit(options(stringsAsFactors = sAF[[1]]), TRUE) if (!is.null(randomSeed)) { if (exists(".Random.seed")) { savedSeed = .Random.seed on.exit({.Random.seed <<- savedSeed;}, TRUE); } set.seed(randomSeed); } spaces = indentSpaces(indent); nType = charmatch(networkType, .networkTypes); if (is.na(nType)) stop(paste("Unrecognized networkType argument.", "Recognized values are (unique abbreviations of)", paste(.networkTypes, collapse = ", "))); # Check that the multiData/multiAdj has correct structure nNets = length(multiData); nGenes = sapply(multiData, sapply, ncol); if (checkData) { if (dataIsExpr) { .checkExpr(multiData, verbose, indent) } else { .checkAdj(multiData, verbose, indent) } } if (!is.null(multiWeights)) { if (dataIsExpr) { multiWeights = .checkAndScaleMultiWeights(multiWeights, multiData, scaleByMax = FALSE); } else stop("Weights cannot be supplied when 'dataIsExpr' is FALSE."); } # Check for presence of dimnames, assign if none, and make them unique. multiData = mtd.apply(multiData, function(.data) { if (is.null(colnames(.data))) colnames(.data) = spaste("Column.", 1:ncol(.data)); colnames(.data) = make.unique(colnames(.data)); .data; }); if (!dataIsExpr) { multiData = mtd.apply(multiData, function(.data) { rownames(.data) = colnames(.data); .data; }); } # Check for names; if there are none, create artificial labels. 
setNames = names(multiData); if (is.null(setNames)) { setNames = paste("Set", c(1:nNets), sep=""); } # Check that referenceNetworks is valid referenceNetworks = as.numeric(referenceNetworks); if (any(is.na(referenceNetworks))) stop("All elements of referenceNetworks must be numeric and present."); if (any(referenceNetworks < 1) | any(referenceNetworks > nNets)) stop("referenceNetworks contains elements outside of the allowed range. "); nRefNets = length(referenceNetworks); # Check testNetworks if (is.null(testNetworks)) { testNetworks = list(); for (ref in 1:nRefNets) testNetworks[[ref]] = c(1:nNets)[ -referenceNetworks[ref] ]; } # Check that testNetworks was specified correctly if (!is.list(testNetworks) && nRefNets > 1) stop("When there are more than 1 reference networks, 'testNetworks' must\n", " be a list with one component per reference network."); if (!is.list(testNetworks)) testNetworks = list(testNetworks); if (length(testNetworks)!=nRefNets) stop("Length of 'testNetworks' must be the same as length of 'referenceNetworks'."); for (ref in 1:nRefNets) { if (any(testNetworks[[ref]] < 1 | testNetworks[[ref]] > nNets)) stop("Some entries of testNetworks[[", ref, "]] are out of range."); } # Check multiColor if (!inherits(multiColor, "list")) { stop("multiColor does not appear to have the correct format.") } if (length(multiColor)!=nNets) { multiColor2=list() if (length(names(multiColor))!=length(multiColor)) stop("Each entry of 'multiColor' must have a name."); color2expr = match(names(multiColor), setNames); if (any(is.na(color2expr))) stop("Entries of 'multiColor' must name-match entries in 'multiData'."); for(s in 1:nNets) { #multiData[[s]]$data = as.matrix(multiData[[s]]$data); loc = which(names(multiColor) %in% setNames[s]) if (length(loc)==0) { multiColor2[[s]] = NA } else multiColor2[[s]]=multiColor[[loc]] } multiColor = multiColor2 rm(multiColor2); } s = 1; while ( (s <= nNets) & !.cvPresent(multiColor[[s]])) s = s+1; if (s==nNets+1) stop("No valid color 
lists found.") if (is.null(greyName)) { if (is.numeric(multiColor[[s]])) # Caution: need a valid s here. { greyName = 0 } else { greyName = "grey" } } else { if (is.null(goldName)) goldName = if (is.numeric(greyName)) 0.1 else "gold" } if (is.null(goldName)) { if (is.numeric(multiColor[[s]])) # Caution: need a valid s here. { goldName = 0.1; } else { goldName = "gold" } } if (verbose > 2) printFlush(paste(" ..unassigned 'module' name:", greyName, "\n ..all network sample 'module' name:", goldName)); MEgrey = paste("ME", greyName, sep=""); MEgold = paste("ME", goldName, sep=""); for (s in 1:nNets) { if ( .cvPresent(multiColor[[s]]) & nGenes[s]!=length(multiColor[[s]])) stop(paste("Color vector for set", s, "does not have the correct number of entries.")); if (is.factor(multiColor[[s]])) multiColor[[s]] = as.character(multiColor[[s]]); } keepGenes = list(); nNAs = sapply(multiColor, .nNAColors) if (any(nNAs > 0)) { if (verbose > 0) printFlush(paste(spaces, " ..removing genes with missing color...")) for (s in 1:nNets) if (.cvPresent(multiColor[[s]])) { keepGenes[[s]] = !is.na(multiColor[[s]]); if (dataIsExpr) { multiData[[s]]$data = multiData[[s]]$data[, keepGenes[[s]]]; } else multiData[[s]]$data = multiData[[s]]$data[keepGenes[[s]], keepGenes[[s]]]; multiColor[[s]] = multiColor[[s]][keepGenes[[s]]]; } } # Check for set names; if there are none, create artificial labels. if (is.null(names(multiData))) { setNames = paste("Set_", c(1:nNets), sep=""); } else { setNames = names(multiData); } # Check that multiData has valid colnames for (s in 1:nNets) if (is.null(colnames(multiData[[s]]$data))) stop(paste("Matrix of data in set", names(multiData)[s], "has no colnames. Colnames are needed to match variables.")); # For now we use numeric labels for permutations. 
permGoldName = 0.1; permGreyName = 0; gc(); if (verbose > 0) printFlush(paste(spaces, " ..calculating observed preservation values")) observed = .modulePreservationInternal(multiData, multiColor, multiWeights = multiWeights, dataIsExpr = dataIsExpr, calculatePermutation = FALSE, networkType = networkType, referenceNetworks = referenceNetworks, testNetworks = testNetworks, densityOnly = useInterpolation, maxGoldModuleSize = maxGoldModuleSize, maxModuleSize = maxModuleSize, corFnc = corFnc, corOptions = corOptions, quickCor = quickCor, # calculateQuality = calculateQuality, ccTupletSize = ccTupletSize, calculateCor.kIMall = calculateCor.kIMall, calculateClusterCoeff = calculateClusterCoeff, checkData = FALSE, greyName = greyName, goldName = goldName, verbose = verbose -3, indent = indent + 2); if (nPermutations==0) return(list(observed = observed)) # Calculate preservation scores in permuted data. psLoaded = FALSE; if (loadPermutedStatistics) { cat(paste(spaces, "..attempting to load permutation statistics..")); x = try(load(file=permutedStatisticsFile), silent = TRUE); if (inherits(x, "try-error")) { printFlush(paste("failed. 
Error message returned by system:\n", x)); } else { expectVars = c("regModuleSizes", "regStatNames", "regModuleNames", "fixStatNames", "fixModuleNames", "permutationsPresent", "permOut", "interpolationUsed"); e2x = match(expectVars, x); if (any(is.na(e2x)) | length(x)!=length(expectVars)) { printFlush(paste("the file does not contain (all) expected variables.")); } else if (length(permOut) != length(referenceNetworks)) { printFlush("the loaded permutation statistics have incorrect number of reference sets."); } else if (length(permOut[[1]]) != nNets) { printFlush("the loaded permutation statistics have incorrect number of test sets."); } else { psLoaded = TRUE; printFlush("success."); } } if (!psLoaded) printFlush(paste(spaces, "\n ..will recalculate permutations.", "Hit Ctrl-C (Esc in Windows) to stop the calculation.")); } nRegStats = 20; nFixStats = 3; if (!psLoaded) { permOut=list() regModuleSizes = list(); regModuleNames = list(); permutationsPresent = matrix(FALSE, nNets, nRefNets); interpolationUsed = matrix(FALSE, nNets, nRefNets); if (verbose > 0) printFlush(paste(spaces, " ..calculating permutation Z scores")) for(iref in 1:nRefNets) # Loop over reference networks { ref = referenceNetworks[iref] if (verbose > 0) printFlush(paste(spaces, "..Working with set", ref, "as reference set")); permOut[[iref]] = list() regModuleSizes[[iref]] = list(); regModuleNames[[iref]] = list(); nRefMods = length(unique(multiColor[[ref]])); if (nRefMods==1) { printFlush(paste(spaces, "*+*+*+*+* Reference set contains a single module.\n", spaces, "A permutation analysis is not meaningful for a single module; skipping.")) next; } for (tnet in 1:nNets) if (tnet %in% testNetworks[[iref]]) { # Retain only genes that are shared between the reference and test networks if (verbose > 1) printFlush(paste(spaces, "....working with set", tnet, "as test set")); overlap=intersect(colnames(multiData[[ref]]$data),colnames(multiData[[tnet]]$data)) loc1=match(overlap, 
colnames(multiData[[ref]]$data)) loc2=match(overlap, colnames(multiData[[tnet]]$data)) refName = paste("ref_", setNames[ref],sep="") colorRef = multiColor[[ref]][loc1] if (dataIsExpr) { datRef=multiData[[ref]]$data[ , loc1] datTest=multiData[[tnet]]$data[ , loc2] if (!is.null(multiWeights)) { weightsRef = multiWeights[[ref]]$data[, loc1]; weightsTest = multiWeights[[tnet]]$data[, loc2]; } else { weightsRef = weightsTest = NULL; } } else { datRef=multiData[[ref]]$data[loc1, loc1] datTest=multiData[[tnet]]$data[loc2, loc2] } testName=setNames[tnet] nRefGenes = ncol(datRef); #if(!is.na(multiColor[[tnet]][1])||length(multiColor[[tnet]])!=1) if (.cvPresent(multiColor[[tnet]])) { colorTest=multiColor[[tnet]][loc2] } else { colorTest=NA } name=paste(refName,"vs",testName,sep="") obsModSizes=list() nObsMods = rep(0, 2); tab = table(colorRef); nObsMods[1] = length(tab); obsModSizes[[1]]=tab[names(tab)!=greyName] if ( !useInterpolation | (nObsMods[1] <= 5) | (sum(obsModSizes[[1]]) < 1000)) { # Do not use interpolation: simply use original colors permRefColors = colorRef; interpolationUsed[tnet, iref] = FALSE; nPermMods = nObsMods[1]; if (useInterpolation && (verbose > 1)) printFlush(paste(spaces, " FYI: interpolation will not be used for this comparison.")) } else { obsModSizes[[1]][obsModSizes[[1]]<3]=3 if(length(colorTest)>1) { tab = table(colorTest); obsModSizes[[2]]=tab[names(tab)!=greyName] obsModSizes[[2]][obsModSizes[[2]]<3]=3 nObsMods[2] = length(tab); } else { obsModSizes[[2]]=NA } # Note we only need permColors for the reference set. 
nPermMods = 10 minNMods = 5; OMS=obsModSizes[[1]] logmin=log(min(OMS)); logmax= log(min(maxModuleSize,max( OMS))) ok = FALSE; skip = FALSE; while (!ok) { if (logmin>= logmax) logmax=logmin+nPermMods/2; permModSizes=as.integer(exp(seq(from=logmin, to=logmax, length=nPermMods ))) nNeededGenes = sum(permModSizes) # Check that the data has enough genes to fit the module sizes: if (nNeededGenes < nRefGenes) { ok = TRUE; skip = FALSE; } else if (nPermMods > minNMods) { # Drop one module nPermMods = nPermMods -1; logStep = (logmax - logmin)/nPermMods; # If the modules are spaced far enough apart, also decrease the max module size if (logStep > log(2)) { logmax = logmax - logStep; } } else { # It appears we don't have enough genes to form meaningful modules for interpolation. # Decrease logmin as well, but only to a certain degree. logminFloor = 3; if (logmin > logminFloor) { logmin = min(logmin-1, logminFloor); } else { # For now we give up, but in the future may have to add a non-interpolation approach here # as well. 
printFlush(paste(spaces, "*+*+*+*+*+ There are not enough genes and/or modules for", "reference set", ref, "and test set", tnet, ".\n", spaces, "Will skip this combination.")) ok = TRUE; skip = TRUE; } } } if (skip) { # Instead of skipping, use the actual colors permRefColors = colorRef; interpolationUsed[tnet, iref] = FALSE; nPermMods = nObsMods[1]; } else { # Create a base label sequence for the permuted reference data set permRefColors = rep(c(1:nPermMods), permModSizes); nGrey = nRefGenes - length(permRefColors); permRefColors = c(permRefColors, rep(greyName, nGrey)); interpolationUsed[tnet, iref] = TRUE; } } #permExpr=list() #permExpr[[1]]=list(data=datRef) #permExpr[[2]]=list(data=datTest) permExpr = multiData(datRef, datTest); names(permExpr) = setNames[c(ref, tnet)]; if (!is.null(multiWeights)) permWeights = multiData(weightsRef, weightsTest) else permWeights = NULL; permOut[[iref]][[tnet]]=list( regStats = array(NA, dim = c(nPermMods+2-(!interpolationUsed[tnet, iref]), nRegStats, nPermutations)), fixStats = array(NA, dim = c(nObsMods[[1]], nFixStats, nPermutations))); # Perform actual permutations #oldRNG = NULL; permColors = list(); permColors[[1]] = permRefColors; permColors[[2]] = NA; permColorsForAcc = list(); permColorsForAcc[[1]] = colorRef; permColorsForAcc[[2]] = colorTest; # For reproducibility of previous results, write separate code for threaded and unthreaded # calculations if (parallelCalculation) { combineCalculations = function(...) 
{ list(...); } seed = sample(1e8, 1); if (verbose > 2) printFlush(paste(spaces, " ......parallel calculation of permuted statistics..")); datout = foreach(perm = 1:nPermutations, .combine = combineCalculations, .multicombine = TRUE, .maxcombine = nPermutations+10)%dopar% { set.seed(seed + perm + perm^2); gc(); .modulePreservationInternal(permExpr, permColors, multiWeights = permWeights, dataIsExpr = dataIsExpr, calculatePermutation = TRUE, multiColorForAccuracy = permColorsForAcc, networkType = networkType, corFnc = corFnc, corOptions = corOptions, referenceNetworks = 1, testNetworks = list(2), densityOnly = useInterpolation, maxGoldModuleSize = maxGoldModuleSize, maxModuleSize = maxModuleSize, quickCor = quickCor, ccTupletSize = ccTupletSize, calculateCor.kIMall = calculateCor.kIMall, calculateClusterCoeff = calculateClusterCoeff, # calculateQuality = calculateQuality, greyName = greyName, goldName = goldName, checkData = FALSE, verbose = verbose -3, indent = indent + 3) } for (perm in 1:nPermutations) { if (!datout[[perm]] [[1]]$netPresent[2]) stop(paste("Internal error: no data in permuted set preservation measures. \n", "Please contact the package maintainers. Sorry!")) permOut[[iref]][[tnet]]$regStats[, , perm] = as.matrix( cbind(datout[[perm]] [[1]]$quality[[2]][, -1], datout[[perm]] [[1]]$intra[[2]], datout[[perm]] [[1]]$inter[[2]])); permOut[[iref]][[tnet]]$fixStats[, , perm] = as.matrix(datout[[perm]] [[1]]$accuracy[[2]]); } datout = datout[[1]] # For the name setting procedures that follow... } else for (perm in 1:nPermutations ) { if (verbose > 2) printFlush(paste(spaces, " ......working on permutation", perm)); #newRNG = .Random.seed; #if (!is.null(oldRNG)) # if (isTRUE(all.equal(newRNG, oldRNG))) # printFlush("WARNING: something's wrong with the RNG... 
old and new RNG equal."); #oldRNG = .Random.seed #set.seed(perm*2); datout= .modulePreservationInternal(permExpr, permColors, multiWeights = permWeights, dataIsExpr = dataIsExpr, calculatePermutation = TRUE, multiColorForAccuracy = permColorsForAcc, networkType = networkType, corFnc = corFnc, corOptions = corOptions, referenceNetworks=1, testNetworks = list(2), densityOnly = useInterpolation, maxGoldModuleSize = maxGoldModuleSize, maxModuleSize = maxModuleSize, quickCor = quickCor, ccTupletSize = ccTupletSize, calculateCor.kIMall = calculateCor.kIMall, calculateClusterCoeff = calculateClusterCoeff, # calculateQuality = calculateQuality, greyName = greyName, goldName = goldName, checkData = FALSE, verbose = verbose -3, indent = indent + 3) if (!datout[[1]]$netPresent[2]) stop(paste("Internal error: no data in permuted set preservation measures. \n", "Please contact the package maintainers. Sorry!")) permOut[[iref]][[tnet]]$regStats[, , perm] = as.matrix(cbind(datout[[1]]$quality[[2]][, -1], datout[[1]]$intra[[2]], datout[[1]]$inter[[2]])); permOut[[iref]][[tnet]]$fixStats[, , perm] = as.matrix(datout[[1]]$accuracy[[2]]); gc(); } regStatNames = c(colnames(datout[[1]]$quality[[2]])[-1], colnames(datout[[1]]$intra[[2]]), colnames(datout[[1]]$inter[[2]])); regModuleNames[[iref]][[tnet]] = rownames(datout[[1]]$quality[[2]]); regModuleSizes[[iref]][[tnet]] = datout[[1]]$quality[[2]][, 1] fixStatNames = colnames(datout[[1]]$accuracy[[2]]); fixModuleNames = rownames(datout[[1]]$accuracy[[2]]); dimnames(permOut[[iref]][[tnet]]$regStats) = list(regModuleNames[[iref]][[tnet]], regStatNames, spaste("Permutation.", c(1:nPermutations))); dimnames(permOut[[iref]][[tnet]]$fixStats) = list(fixModuleNames, fixStatNames, spaste("Permutation.", c(1:nPermutations))); permutationsPresent[tnet, iref] = TRUE } else { regModuleNames[[iref]][[tnet]] = NA; regModuleSizes[[iref]][[tnet]] = NA; permOut[[iref]][[tnet]] = NA; } } if (savePermutedStatistics) save(regModuleSizes, regStatNames, 
regModuleNames, fixStatNames, fixModuleNames, permutationsPresent, interpolationUsed, permOut, file=permutedStatisticsFile) } # if (!psLoaded) gc(); if (any(interpolationUsed, na.rm = TRUE)) { if (verbose > 0) printFlush(paste(spaces, "..Calculating interpolation approximations..")); if (plotInterpolation) pdf(file = interpolationPlotFile, width = 12, height = 6) } # Define a "class" indicating that no valid fit was obtained invalidFit = "invalidFit"; LogIndex = c(rep(c(1, 0, 0, 0, 0, 0, 0), 2), 0, 0, 0, 0, 0, 0) dfMean = c( rep(c(2, 2, 2, 1, 1, 1, 1), 2), 3, 3, 3, 2, 2, 2) dfSD = c( rep(c(2, 2, 2, 2, 2, 2, 2), 2), 2, 2, 2, 2, 2, 2) epsilon=0.00001 meanLM=list() seLM=list() OmitModule=c(permGoldName,permGreyName) for (iref in 1:nRefNets) { ref = referenceNetworks[iref]; meanLM[[iref]]=list() seLM[[iref]]=list() for (tnet in 1:nNets) if (observed[[iref]]$netPresent[tnet] & permutationsPresent[tnet, iref] & interpolationUsed[tnet, iref]) { NotGold = !is.element(regModuleNames[[iref]][[tnet]], OmitModule) meanLM[[iref]][[tnet]] = list() seLM[[iref]][[tnet]] = list() logName=matrix("", sum(NotGold), 1) for (stat in 1:nRegStats) { means = c(apply(permOut[[iref]][[tnet]]$regStats[NotGold, stat, , drop = FALSE], c(1:2), mean, na.rm=TRUE)); SD=try( c(apply(permOut[[iref]][[tnet]]$regStats[NotGold, stat, , drop = FALSE], c(1:2), sd, na.rm = TRUE)), silent = TRUE); if (inherits(SD, 'try-error')) SD = NA; if (any(is.na(c(means, SD))) ) { meanLM[[iref]][[tnet]][[stat]] = NA; class(meanLM[[iref]][[tnet]][[stat]]) = invalidFit; seLM[[iref]][[tnet]][[stat]] = NA; class(seLM[[iref]][[tnet]][[stat]]) = invalidFit; } else { modSizes = regModuleSizes[[iref]][[tnet]][NotGold]; xx = log(modSizes) if(LogIndex[stat]==1) { means[means1) { par(mfrow=c(1,2)) par(mar = c(5, 5, 4, 1)); SE = SD/sqrt(nPermutations); ymin = min(yy-SE, na.rm = TRUE) ymax = max(yy+SE, na.rm = TRUE) verboseScatterplot(xx,yy,xlab="Log module size", ylab=paste(logName[stat],"Permutation median"), main = paste("mean", 
name1, "\n", name2, "; "), cex.axis = 1, cex.lab = 1, cex.main = 1.2, ylim = c(ymin, ymax)) try(errbar(xx,yy,yy+SE, yy-SE, add = TRUE), silent = TRUE) lines(xx[order(xx)], PredictedMedian[order(xx)] , col="red") verboseScatterplot(yy, PredictedMedian,xlab="Observed permutation median", ylab="Predicted permutation median", main = paste("mean", name1, "\n", name2, "\n"), cex.axis = 1, cex.lab = 1, cex.main = 1.2) abline(0,1,col="green") } epsilon2=0.0000000001 SD[SD1) { par(mfrow=c(1,2)) verboseScatterplot(xx2,yy2,xlab="Log Module Size",ylab="Log observed permutation SD", main = paste("SD", name1, "\n", name2, "\n"), cex.axis = 1, cex.lab = 1, cex.main = 1.2) lines(xx2[order(xx2)], PredictedSD[order(xx2)] , col="red") verboseScatterplot(yy2, PredictedSD,xlab="Log observed permutation SD", ylab="Log predicted permutation SD", main = paste("SD", name1, "\n", name2, "\n"), cex.axis = 1, cex.lab = 1, cex.main = 1.2) abline(0,1,col="green") } } } } } if (any(interpolationUsed)) { if (plotInterpolation) dev.off(); } observedQuality = list(); observedPreservation = list(); observedReferenceSeparability = list(); observedTestSeparability = list(); observedOverlapCounts = list(); observedOverlapPvalues = list(); observedAccuracy = list(); Z.quality = list(); Z.preservation = list(); Z.referenceSeparability = list(); Z.testSeparability = list(); Z.accuracy = list(); interpolationStat = c(rep(TRUE, 14), FALSE, FALSE, FALSE, rep(TRUE, 6)); for(iref in 1:nRefNets) { observedQuality[[iref]] = list(); observedPreservation[[iref]] = list(); observedReferenceSeparability[[iref]] = list(); observedTestSeparability[[iref]] = list(); observedOverlapCounts[[iref]] = list(); observedOverlapPvalues[[iref]] = list(); observedAccuracy[[iref]] = list(); Z.quality[[iref]] = list(); Z.preservation[[iref]] = list(); Z.referenceSeparability[[iref]] = list(); Z.testSeparability[[iref]] = list(); Z.accuracy[[iref]] = list(); for (tnet in 1:nNets) if (observed[[iref]]$netPresent[tnet] & 
permutationsPresent[tnet, iref]) { nModules = nrow(observed[[iref]]$intra[[tnet]]); nQualiStats = ncol(observed[[iref]]$quality[[tnet]])-1; nIntraStats = ncol(observed[[iref]]$intra[[tnet]]); accuracy = observed[[iref]]$accuracy[[tnet]] inter = observed[[iref]]$inter[[tnet]]; nInterStats = ncol(inter) + ncol(accuracy); a2i = match(rownames(accuracy), rownames(inter)); accuracy2 = matrix(NA, nrow(inter), ncol(accuracy)); accuracy2[a2i, ] = accuracy; rownames(accuracy2) = rownames(inter); colnames(accuracy2) = colnames(accuracy); modSizes = observed[[iref]]$quality[[tnet]][, 1] allObsStats = cbind(observed[[iref]]$quality[[tnet]][, -1], observed[[iref]]$intra[[tnet]], accuracy2, inter); sepCol = match("separability.qual", colnames(observed[[iref]]$quality[[tnet]])); sepCol2 = match("separability.pres", colnames(observed[[iref]]$intra[[tnet]])); quality = observed[[iref]]$quality[[tnet]][, -sepCol]; if (dataIsExpr | (!restrictSummaryForGeneralNetworks)) { rankColsQuality = c(2,3,4,5) rankColsDensity = c(1,2,3,4) } else { rankColsQuality = c(5) rankColsDensity = 4; } ranks = apply(-quality[, rankColsQuality, drop = FALSE], 2, rank, na.last = "keep"); medRank = apply(as.matrix(ranks), 1, median, na.rm = TRUE); observedQuality[[iref]][[tnet]] = cbind(moduleSize = modSizes, medianRank.qual = medRank, quality[, -1]); preservation = cbind(observed[[iref]]$intra[[tnet]][, -sepCol2], inter ); ranksDensity = apply(-preservation[, rankColsDensity, drop = FALSE], 2, rank, na.last = "keep"); medRankDensity = apply(as.matrix(ranksDensity), 1, median, na.rm = TRUE); if (dataIsExpr | (!restrictSummaryForGeneralNetworks)) { if (includekMEallInSummary) { connSummaryInd = c(7:10) } else { connSummaryInd = c(7,8,10); } } else { connSummaryInd = c(7,10) # in this case only cor.kIM and cor.Adj which sits in the cor.cor slot } ranksConnectivity = apply(-preservation[, connSummaryInd, drop = FALSE], 2, rank, na.last = "keep"); medRankConnectivity = apply(as.matrix(ranksConnectivity), 1, 
median, na.rm = TRUE); medRank = apply(cbind(ranksDensity, ranksConnectivity), 1, median, na.rm = TRUE); observedPreservation[[iref]][[tnet]] = cbind(moduleSize = modSizes, medianRank.pres = medRank, medianRankDensity.pres = medRankDensity, medianRankConnectivity.pres = medRankConnectivity, preservation); observedAccuracy[[iref]][[tnet]] = cbind(moduleSize = modSizes[a2i], accuracy); observedReferenceSeparability[[iref]][[tnet]] = cbind(moduleSize = modSizes, observed[[iref]]$quality[[tnet]][sepCol]); observedTestSeparability[[iref]][[tnet]] = cbind(moduleSize = modSizes, observed[[iref]]$intra[[tnet]][sepCol2]); observedOverlapCounts[[iref]][[tnet]] = observed[[iref]]$overlapTables[[tnet]]$countTable; observedOverlapPvalues[[iref]][[tnet]] = observed[[iref]]$overlapTables[[tnet]]$pTable; nAllStats = ncol(allObsStats); zAll = matrix(NA, nModules, nAllStats); rownames(zAll) = rownames(observed[[iref]]$intra[[tnet]]) colnames(zAll) = paste("Z.", colnames(allObsStats), sep=""); logModSizes = log(modSizes); goldRowPerm = match(goldName, regModuleNames[[iref]][[tnet]]); goldRowObs = match(goldName, rownames(inter)); fixInd = 1; regInd = 1; for (stat in 1:nAllStats) if (interpolationStat[stat]) { if (!inherits(meanLM[[iref]][[tnet]][[regInd]], invalidFit)) { #print(paste(regInd, stat))
prediction = predict(meanLM[[iref]][[tnet]][[regInd]], newdata = data.frame(xx = logModSizes), se.fit = TRUE) predictedMean = as.numeric(prediction$fit); if (LogIndex[regInd]==1) predictedMean = exp(predictedMean); predictedSD = exp(as.numeric(predict(seLM[[iref]][[tnet]][[regInd]], newdata = data.frame(xx2 = logModSizes)))); zAll[, stat] = (allObsStats[, stat] - predictedMean)/predictedSD # For the gold module : take the direct observations.
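# Illustration (hypothetical values, not part of the package): each observed
# statistic is standardized against its permutation null distribution,
# Z = (observed - permutation mean) / permutation SD; when interpolation is used,
# the mean and SD are predicted from module size by the fits above.  Sketch:
#   permuted = c(0.12, 0.10, 0.15, 0.11)   # hypothetical permutation statistics
#   observed = 0.45
#   (observed - mean(permuted)) / sd(permuted)  # large Z => preserved beyond chance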
goldMean = mean(permOut[[iref]][[tnet]]$regStats[goldRowPerm, regInd, ], na.rm = TRUE); goldSD = sd(permOut[[iref]][[tnet]]$regStats[goldRowPerm, regInd, ], na.rm = TRUE); zAll[goldRowObs, stat] = (allObsStats[goldRowObs, stat] - goldMean)/goldSD } } else { means = c(apply(permOut[[iref]][[tnet]]$regStats[, regInd, , drop = FALSE], c(1:2), mean, na.rm = TRUE)); SDs = c(apply(permOut[[iref]][[tnet]]$regStats[, regInd, , drop = FALSE], c(1:2), sd, na.rm = TRUE)); z = ( allObsStats[, stat] - means) / SDs; if (any(is.finite(z))) { finite = is.finite(z) z[finite][SDs[finite]==0] = max(abs(z[finite]), na.rm = TRUE) * sign(allObsStats[, stat] - means)[SDs[finite]==0] } zAll[, stat] = z; } regInd = regInd + 1; } else { if (.cvPresent(multiColor[[tnet]]) ) { means = c(apply(permOut[[iref]][[tnet]]$fixStats[, fixInd, , drop = FALSE], c(1:2), mean, na.rm = TRUE)); SDs = c(apply(permOut[[iref]][[tnet]]$fixStats[, fixInd, , drop = FALSE], c(1:2), sd, na.rm = TRUE)); z = ( allObsStats[a2i, stat] - means) / SDs; if (any(is.finite(z))) { finite = is.finite(z) z[finite][SDs[finite]==0] = max(abs(z[finite]), na.rm = TRUE) * sign(allObsStats[a2i, stat] - means)[SDs[finite]==0] } zAll[a2i, stat] = z; } fixInd = fixInd + 1 } zAll = as.data.frame(zAll); sepCol = match("Z.separability.qual", colnames(zAll)[1:nQualiStats]); zQual = zAll[, c(1:nQualiStats)][ , -sepCol]; summaryColsQuality = rankColsQuality -1 # quality also contains module sizes, Z does not
summZ = apply(zQual[, summaryColsQuality, drop = FALSE], 1, median, na.rm = TRUE); Z.quality[[iref]][[tnet]] = data.frame(cbind(moduleSize = modSizes, Zsummary.qual = summZ, zQual)); Z.referenceSeparability[[iref]][[tnet]] = data.frame(cbind(moduleSize = modSizes, zAll[sepCol])); st = nQualiStats + 1; en = nQualiStats + nIntraStats + nInterStats; sepCol2 = match("Z.separability.pres", colnames(zAll)[st:en]); accuracyCols = match(c("Z.accuracy", "Z.minusLogFisherP", "Z.coClustering"), colnames(zAll)[st:en]); zPres = zAll[,
st:en][, -c(sepCol2, accuracyCols)]; summaryColsPreservation = list(summaryColsQuality, connSummaryInd); nGroups = length(summaryColsPreservation) summZMat = matrix(0, nrow(zPres), nGroups); for (g in 1:nGroups) summZMat[, g] = apply(zPres[, summaryColsPreservation[[g]], drop = FALSE], 1, median, na.rm = TRUE); colnames(summZMat) = c("Zdensity.pres", "Zconnectivity.pres"); summZ = apply(summZMat, 1, mean, na.rm = TRUE); Z.preservation[[iref]][[tnet]] = data.frame(cbind(moduleSize = modSizes, Zsummary.pres = summZ, summZMat, zPres)); Z.testSeparability[[iref]][[tnet]] = data.frame(cbind(moduleSize = modSizes, zAll[nQualiStats + sepCol2])); Z.accuracy[[iref]][[tnet]] = data.frame(cbind(moduleSize = modSizes, zAll[st:en][accuracyCols])); } else { observedQuality[[iref]][[tnet]] = NA; observedPreservation[[iref]][[tnet]] = NA; observedReferenceSeparability[[iref]][[tnet]] = NA; observedTestSeparability[[iref]][[tnet]] = NA; observedAccuracy[[iref]][[tnet]] = NA; observedOverlapCounts[[iref]][[tnet]] = NA; observedOverlapPvalues[[iref]][[tnet]] = NA; Z.quality[[iref]][[tnet]] = NA; Z.preservation[[iref]][[tnet]]= NA; Z.referenceSeparability[[iref]][[tnet]]= NA; Z.testSeparability[[iref]][[tnet]]= NA; Z.accuracy[[iref]][[tnet]] = NA; } names(observedQuality[[iref]]) = paste("inColumnsAlsoPresentIn", sep=".", setNames); names(observedPreservation[[iref]]) = paste("inColumnsAlsoPresentIn", sep=".", setNames); names(observedReferenceSeparability[[iref]]) = paste("inColumnsAlsoPresentIn", sep=".", setNames); names(observedTestSeparability[[iref]]) = paste("inColumnsAlsoPresentIn", sep=".", setNames); names(observedAccuracy[[iref]]) = paste("inColumnsAlsoPresentIn", sep=".", setNames); names(observedOverlapCounts[[iref]]) = paste("inColumnsAlsoPresentIn", sep=".", setNames); names(observedOverlapPvalues[[iref]]) = paste("inColumnsAlsoPresentIn", sep=".", setNames); names(Z.quality[[iref]]) = paste("inColumnsAlsoPresentIn", sep = ".", setNames); names(Z.preservation[[iref]]) = 
paste("inColumnsAlsoPresentIn", sep = ".", setNames); names(Z.referenceSeparability[[iref]]) = paste("inColumnsAlsoPresentIn", sep = ".", setNames); names(Z.testSeparability[[iref]]) = paste("inColumnsAlsoPresentIn", sep = ".", setNames); names(Z.accuracy[[iref]]) = paste("inColumnsAlsoPresentIn", sep = ".", setNames); } names(observedQuality) = paste("ref", setNames[referenceNetworks], sep="."); names(observedPreservation) = paste("ref", setNames[referenceNetworks], sep="."); names(observedReferenceSeparability) = paste("ref", setNames[referenceNetworks], sep="."); names(observedTestSeparability) = paste("ref", setNames[referenceNetworks], sep="."); names(observedAccuracy) = paste("ref", setNames[referenceNetworks], sep="."); names(observedOverlapCounts) = paste("ref", setNames[referenceNetworks], sep="."); names(observedOverlapPvalues) = paste("ref", setNames[referenceNetworks], sep="."); names(Z.quality) = paste("ref", setNames[referenceNetworks], sep="."); names(Z.preservation) = paste("ref", setNames[referenceNetworks], sep="."); names(Z.referenceSeparability) = paste("ref", setNames[referenceNetworks], sep="."); names(Z.testSeparability) = paste("ref", setNames[referenceNetworks], sep="."); names(Z.accuracy) = paste("ref", setNames[referenceNetworks], sep="."); summaryIndQuality = list(c(3:6)); summaryIndPreservation = list(c(5:8), connSummaryInd + 4, c(3,4)); #4 = 1 module size + 3 summary indices p.quality = .pValueFromZ(Z.quality, summaryCols = 2, summaryInd = summaryIndQuality); p.preservation = .pValueFromZ(Z.preservation, summaryCols = c(3,4,2), summaryInd = summaryIndPreservation); p.referenceSeparability = .pValueFromZ(Z.referenceSeparability); p.testSeparability = .pValueFromZ(Z.testSeparability); p.accuracy = .pValueFromZ(Z.accuracy); pBonf.quality = .pValueFromZ(Z.quality, bonf = TRUE, summaryCols = 2, summaryInd = summaryIndQuality); pBonf.preservation = .pValueFromZ(Z.preservation, bonf = TRUE, summaryCols = c(3,4,2), summaryInd = 
summaryIndPreservation) pBonf.referenceSeparability = .pValueFromZ(Z.referenceSeparability, bonf = TRUE); pBonf.testSeparability = .pValueFromZ(Z.testSeparability, bonf = TRUE); pBonf.accuracy = .pValueFromZ(Z.accuracy, bonf = TRUE); if (calculateQvalue) { q.quality = .qValueFromP(p.quality, summaryCols = 2, summaryInd = summaryIndQuality) q.preservation = .qValueFromP(p.preservation, summaryCols = c(3,4,2), summaryInd = summaryIndPreservation); q.referenceSeparability = .qValueFromP(p.referenceSeparability); q.testSeparability = .qValueFromP(p.testSeparability); q.accuracy = .qValueFromP(p.accuracy); } else { q.quality = NULL; q.preservation = NULL; q.referenceSeparability = NULL; q.testSeparability = NULL; q.accuracy = NULL; } output=list(quality = list(observed = observedQuality, Z = Z.quality, log.p = p.quality, log.pBonf = pBonf.quality, q = q.quality ), preservation = list(observed = observedPreservation, Z = Z.preservation, log.p = p.preservation, log.pBonf = pBonf.preservation, q = q.preservation), accuracy = list(observed = observedAccuracy, Z = Z.accuracy, log.p = p.accuracy, log.pBonf = pBonf.accuracy, q = q.accuracy, observedCounts = observedOverlapCounts, observedFisherPvalues = observedOverlapPvalues), referenceSeparability = list(observed = observedReferenceSeparability, Z = Z.referenceSeparability, log.p = p.referenceSeparability, log.pBonf = pBonf.referenceSeparability, q = q.referenceSeparability), testSeparability = list(observed = observedTestSeparability, Z = Z.testSeparability, log.p = p.testSeparability, log.pBonf = pBonf.testSeparability, q = q.testSeparability), permutationDetails = list(permutedStatistics = permOut, interpolationModuleSizes = regModuleSizes, interpolationStatNames = regStatNames, permutationsPresent = permutationsPresent, interpolationUsed = interpolationUsed) ); checkComps = list(c(1,2), c(1,2), c(1,2), c(1,2), c(1,2)); nCheckComps = length(checkComps); if (discardInvalidOutput) { for(iref in 1:nRefNets) { for (tnet in 
1:nNets) if (observed[[iref]]$netPresent[tnet] & permutationsPresent[tnet, iref]) { for (oc in 1:nCheckComps) { for (ic in checkComps[[oc]]) { data = output[[oc]][[ic]][[iref]][[tnet]] keep = apply(!is.na(data), 2, sum) > 0; output[[oc]][[ic]][[iref]][[tnet]] = data[, keep, drop = FALSE]; } } } } } return(output) } #===================================================================================================== # # .modulePreservationInternal # #===================================================================================================== # Calculate module preservation scores for a given multi-expression data set. # color vector present? .cvPresent = function(cv) { if (is.null(cv)) return(FALSE); if (length(cv)==1 && (is.na(cv[1]))) return(FALSE); return(TRUE); } .nNAColors = function(cv) {if (.cvPresent(cv)) sum(is.na(cv)) else 0} .accuracyStatistics = function(colorRef, colorTest, ccTupletSize, greyName, pEpsilon) { colorRefLevels = sort(unique(colorRef)); nRefMods = length(colorRefLevels); refModSizes = table(colorRef); accuracy=matrix(NA,nRefMods ,3) colnames(accuracy)=c("accuracy", "minusLogFisherP", "coClustering"); rownames(accuracy)=colorRefLevels; nRefGenes = length(colorRef); # also equals nTestGenes if(.cvPresent(colorTest)) { #if (verbose > 1) printFlush(paste(spaces, "....calculating color label accuracy...")); overlap = overlapTable(colorTest, colorRef) greyRow = rownames(overlap$countTable)==greyName; greyCol = colnames(overlap$countTable)==greyName; #bestCount = apply(overlap$countTable, 2, max); overlap$pTable[!is.finite(overlap$pTable)] = pEpsilon; overlap$pTable[overlap$pTable < pEpsilon] = pEpsilon; bestCount = rep(0, ncol(overlap$countTable)); if (sum(!greyRow) > 0 & sum(!greyCol)>0) { bestCount[!greyCol] = apply(overlap$countTable[!greyRow, !greyCol, drop = FALSE], 2, max) accuracy[!greyCol, 2] = -log10(apply(overlap$pTable[!greyRow, !greyCol, drop = FALSE], 2, min)); ccNumer = apply(overlap$countTable[!greyRow, !greyCol, drop = 
FALSE], 2, choose, ccTupletSize); dim(ccNumer) = c(sum(!greyRow), sum(!greyCol)); ccDenom = choose(refModSizes[!greyCol], ccTupletSize) accuracy[!greyCol, 3] = apply(ccNumer, 2, sum)/ccDenom; } if (sum(greyRow)==1 & sum(greyCol) ==1) { bestCount[greyCol] = overlap$countTable[greyRow, greyCol]; accuracy[greyCol, 2] = -log10(overlap$pTable[greyRow, greyCol]); accuracy[greyCol, 3] = choose(bestCount[greyCol], ccTupletSize)/ choose(refModSizes[greyCol], ccTupletSize); } accuracy[, 1] = bestCount/refModSizes; } else { overlap = list(countTable = NA, pTable = NA); } list(accuracy = accuracy, overlapTable = overlap); } #================================================================================================= # # .modulePreservationInternal # #================================================================================================= # multiData contains either expression data or adjacencies; which one is indicated in dataIsExpr .modulePreservationInternal = function(multiData, multiColor, multiWeights, dataIsExpr, calculatePermutation, multiColorForAccuracy = NULL, networkType = "signed", corFnc = "cor", corOptions = "use = 'p'", referenceNetworks=1, testNetworks, densityOnly = FALSE, maxGoldModuleSize = 1000, maxModuleSize = 1000, quickCor = 1, ccTupletSize, calculateCor.kIMall, calculateClusterCoeff, # calculateQuality = FALSE, checkData = TRUE, greyName, goldName, pEpsilon = 1e-200, verbose = 1, indent = 0) { spaces = indentSpaces(indent); #size = checkSets(multiData); nNets = length(multiData); nGenes = sapply(multiData, sapply, ncol); nType = charmatch(networkType, .networkTypes); if (is.na(networkType)) stop(paste("Unrecognized networkType argument.", "Recognized values are (unique abbreviations of)", paste(.networkTypes, collapse = ", "))); setNames = names(multiData); # Check multiColor. 
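# Illustration (hypothetical objects, not part of the package): 'multiColor' is a
# named list whose names must match those of 'multiData'; sets without a module
# assignment are filled in with NA by the check below.  For example:
#   multiData  = list(A = list(data = exprA), B = list(data = exprB))
#   multiColor = list(A = moduleColorsA)   # set B then receives multiColor[[2]] = NA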
if (length(multiColor)!=nNets) { multiColor2=list() if (length(names(multiColor))!=length(multiColor)) stop("Each entry of 'multiColor' must have a name."); color2expr = match(names(multiColor), names(multiData)); if (any(is.na(color2expr))) stop("Entries of 'multiColor' must name-match entries in 'multiData'."); for (s in 1:nNets) { multiData[[s]]$data = as.matrix(multiData[[s]]$data); loc = which(names(multiColor) %in% names(multiData)[s]) if (length(loc)==0) { multiColor2[[s]] = NA } else multiColor2[[s]]=multiColor[[loc]] } multiColor = multiColor2 rm(multiColor2); } MEgrey = paste("ME", greyName, sep=""); MEgold = paste("ME", goldName, sep=""); gc(); for (s in 1:nNets) { if ( .cvPresent(multiColor[[s]]) & nGenes[s]!=length(multiColor[[s]])) stop(paste("Color vector for set", s, "does not have the correct number of entries.")); } keepGenes = list(); for (s in 1:nNets) keepGenes[[s]] = rep(TRUE, nGenes[s]) nNAs = sapply(multiColor, .nNAColors) if (any(nNAs > 0)) { if (verbose > 0) printFlush(paste(spaces, " ..removing genes with missing color...")) for (s in 1:nNets) if (.cvPresent(multiColor[[s]])) { keepGenes[[s]] = !is.na(multiColor[[s]]); if (dataIsExpr) { multiData[[s]]$data = multiData[[s]]$data[, keepGenes[[s]]]; if (!is.null(multiWeights)) multiWeights[[s]]$data = multiWeights[[s]]$data[, keepGenes[[s]]]; } else { multiData[[s]]$data = multiData[[s]]$data[keepGenes[[s]], keepGenes[[s]]]; } multiColor[[s]] = multiColor[[s]][keepGenes[[s]]]; } } #if (verbose > 0) printFlush(paste(spaces, " ..preservation tests based on different reference networks"))
datout=list() if (nNets==1) # && length(multiColor)==1)
{ # For now stop; in the future we'll bring this up to speed as well.
stop("Calculation of quality in an individual network is not supported at this time.
Sorry!"); } else { for (iref in 1:length(referenceNetworks)) { ref = referenceNetworks[iref] if (!.cvPresent(multiColor[[ref]])) stop(paste("Network", ref, "does not have a color vector and cannot be used as reference network.")) if (verbose > 0) printFlush(paste(spaces, "..working on reference network",setNames[ref])) accuracy=list() quality = list(); interPres=list() intraPres=list() overlapTables = list(); netPresent = rep(FALSE, nNets) for (tnet in testNetworks[[iref]]) { if (verbose > 1) printFlush(paste(spaces, " ..working on test network",setNames[tnet])) overlap=intersect(colnames(multiData[[ref]]$data),colnames(multiData[[tnet]]$data)) if (length(overlap)==0) { printFlush(paste(spaces, "WARNING: sets", ref, "and", tnet, "have no overlapping genes with valid colors.\n", spaces, "No preservation measures can be calculated.")); next; } loc1=match(overlap, colnames(multiData[[ref]]$data)) loc2=match(overlap, colnames(multiData[[tnet]]$data)) if(length(multiColor[[tnet]])>1) { colorTest=multiColor[[tnet]][loc2] } else { colorTest=NA } if (dataIsExpr) { datRef=multiData[[ref]]$data[,loc1] datTest=multiData[[tnet]]$data[,loc2] if (!is.null(multiWeights)) { weightsRef = multiWeights[[ref]]$data[, loc1]; weightsTest = multiWeights[[tnet]]$data[, loc2]; } else { weightsRef = weightsTest = NULL; } } else { datTest=multiData[[tnet]]$data[loc2, loc2] datRef=multiData[[ref]]$data[loc1, loc1] } colorRef=multiColor[[ref]][loc1] # if multiColorForAccuracy is present, use the colors in this list to calculate accuracy and # Fisher p values. if (!is.null(multiColorForAccuracy)) { colorRefAcc = multiColorForAccuracy[[ref]][loc1]; if (.cvPresent(multiColorForAccuracy[[tnet]])) { colorTestAcc = multiColorForAccuracy[[tnet]][loc2]; } else colorTestAcc = NA; } else { colorRefAcc = colorRef; colorTestAcc = colorTest; #This is the only place where colorTest is used (?) 
} # Accuracy measures if (calculatePermutation) { colorRefAcc = sample(colorRefAcc); colorTestAcc = sample(colorRefAcc); } x = .accuracyStatistics(colorRefAcc, colorTestAcc, ccTupletSize = ccTupletSize, greyName = greyName, pEpsilon = pEpsilon); accuracy[[tnet]] = x$accuracy; overlapTables[[tnet]] = x$overlapTable; # From now on we work with colorRef; colorTest is not needed anymore. # Restrict each module to at most maxModuleSize genes.. colorRefLevels = sort(unique(colorRef)); nRefMods = length(colorRefLevels); nRefGenes = length(colorRef); # Check that the gold module is not too big. In particular, the gold module must not contain all valid # genes, since in such a case the random sampling makes no sense for density-based statistics. goldModSize = maxGoldModuleSize if (goldModSize > nRefGenes/2) goldModSize = nRefGenes/2; # ..step 1: gold module. Note that because of the above the gold module size is always smaller than # nRefGenes. goldModR = sample(nRefGenes, goldModSize) if (calculatePermutation) { goldModT = sample(nRefGenes, goldModSize) } else goldModT = goldModR if (dataIsExpr) { goldRef = datRef[, goldModR]; goldRefP = datRef[, goldModT]; goldTest = datTest[, goldModT]; if (!is.null(multiWeights)) { goldRefW = weightsRef[, goldModR]; goldRefPW = weightsRef[, goldModT]; goldTestW = weightsTest[, goldModT]; } else goldRefW = goldRefPW = goldTestW = NULL; } else { goldRef = datRef[goldModR, goldModR]; goldRefP = datRef[goldModT, goldModT]; goldTest = datTest[goldModT, goldModT]; } # ..step 2: proper modules and grey keepGenes = rep(TRUE, nRefGenes); for (m in 1:nRefMods) { inModule = colorRef == colorRefLevels[m] nInMod = sum(inModule) if(nInMod > maxModuleSize) { sam = sample(nInMod, maxModuleSize) keepGenes[inModule] = FALSE; keepGenes[inModule][sam] = TRUE; } } # Create the permuted data sets if (sum(keepGenes) < nRefGenes) { colorRef = colorRef[keepGenes] if (dataIsExpr) { datRef = datRef[, keepGenes] if (!is.null(multiWeights)) weightsRef = 
weightsRef[, keepGenes] else weightsRef = NULL; } else datRef = datRef[keepGenes, keepGenes] nRefGenes = length(colorRef); if (calculatePermutation) { keepPerm = sample(nRefGenes, sum(keepGenes)); if (dataIsExpr) { datTest = datTest[, keepPerm]; datRefP = datRef[, keepPerm]; if (!is.null(multiWeights)) { weightsTest = weightsTest[, keepPerm]; weightsRefP = weightsRef[, keepPerm]; } else weightsTest = weightsRefP = NULL; } else { datTest = datTest[keepPerm, keepPerm]; datRefP = datRef[keepPerm, keepPerm]; } } else { if (dataIsExpr) { datTest = datTest[, keepGenes]; datRefP = datRef; if (!is.null(multiWeights)) { weightsTest = weightsTest[, keepGenes]; weightsRefP = weightsRef; } else weightsTest = weightsRefP = NULL; } else { datTest = datTest[keepGenes, keepGenes]; datRefP = datRef; } } } else { if (calculatePermutation) { perm = sample(c(1:nRefGenes)); if (dataIsExpr) { datRefP = datRef[, perm]; datTest = datTest[, perm]; if (!is.null(multiWeights)) { weightsTest = weightsTest[, perm]; weightsRefP = weightsRef[, perm]; } else weightsTest = weightsRefP = NULL; } else { datRefP = datRef[perm, perm]; datTest = datTest[perm, perm]; } } else { datRefP = datRef; weightsRefP = weightsRef; } } if (dataIsExpr) { datRef = cbind(datRef, goldRef) datRefP = cbind(datRefP, goldRefP) datTest = cbind(datTest, goldTest) if (!is.null(multiWeights)) { weightsRef = cbind(weightsRef, goldRefW) weightsRefP = cbind(weightsRefP, goldRefPW) weightsTest = cbind(weightsTest, goldTestW) } } else { datRef = .combineAdj(datRef, goldRef); datRefP = .combineAdj(datRefP, goldRefP); datTest = .combineAdj(datTest, goldTest); if (!is.null(rownames(datRef))) rownames(datRef) = make.unique(rownames(datRef)); if (!is.null(rownames(datRefP))) rownames(datRefP) = make.unique(rownames(datRefP)); if (!is.null(rownames(datTest))) rownames(datTest) = make.unique(rownames(datTest)); gc(); } if (!is.null(colnames(datRef))) colnames(datRef) = make.unique(colnames(datRef)); if (!is.null(colnames(datRefP))) 
colnames(datRefP) = make.unique(colnames(datRefP)); if (!is.null(colnames(datTest))) colnames(datTest) = make.unique(colnames(datTest)); gold = rep(goldName, goldModSize) colorRef_2 = c(as.character(colorRef),gold) colorLevels = sort(unique(colorRef_2)); opt = list(corFnc = corFnc, corOptions = corOptions, quickCor = quickCor, nType = nType, MEgold = MEgold, MEgrey = MEgrey, densityOnly = densityOnly, calculatePermutation = calculatePermutation, calculateCor.kIMall = calculateCor.kIMall, calculateClusterCoeff = calculateClusterCoeff); if (dataIsExpr) { stats = .coreCalcForExpr(datRef, datRefP, datTest, colorRef_2, weightsRef, weightsRefP, weightsTest, opt); interPresNames = spaste(corFnc, c(".kIM", ".kME", ".kMEall", spaste(".", corFnc), ".clusterCoeff", ".MAR")); measureNames = c("propVarExplained", "meanSignAwareKME", "separability", "meanSignAwareCorDat", "meanAdj", "meanClusterCoeff", "meanMAR"); } else { stats = .coreCalcForAdj(datRef, datRefP, datTest, colorRef_2, opt); interPresNames = spaste(corFnc, c(".kIM", ".kME", ".kIMall", ".adj", ".clusterCoeff", ".MAR")); measureNames = c("propVarExplained", "meanKIM", "separability", "meanSignAwareCorDat", "meanAdj", "meanClusterCoeff", "meanMAR"); } name1=paste(setNames[[ref]],"_vs_",setNames[[tnet]],sep="") quality[[tnet]] = cbind(stats$modSizes, stats$proVar[, 1], if (dataIsExpr) stats$meanSignAwareKME[, 1] else stats$meankIM[, 1], stats$Separability[, 1], stats$MeanSignAwareCorDat[,1], stats$MeanAdj[, 1], stats$meanClusterCoeff[, 1], stats$meanMAR[, 1]); intraPres[[tnet]]=cbind(stats$proVar[, 2], if (dataIsExpr) stats$meanSignAwareKME[, 2] else stats$meankIM[, 2], stats$Separability[, 2], stats$MeanSignAwareCorDat[, 2], stats$MeanAdj[, 2], stats$meanClusterCoeff[, 2], stats$meanMAR[, 2]) #colnames(quality[[tnet]]) = paste(c("moduleSize", measureNames), setNames[ref], sep = "_"); #colnames(intraPres[[tnet]]) = paste(measureNames, name1, sep = "_"); colnames(quality[[tnet]]) = c("moduleSize", paste(measureNames, 
"qual", sep=".")); rownames(quality[[tnet]]) = colorLevels colnames(intraPres[[tnet]]) = paste(measureNames, "pres", sep="."); rownames(intraPres[[tnet]]) = colorLevels names(intraPres)[tnet]=paste(name1,sep="") quality[[tnet]] = as.data.frame(quality[[tnet]]); intraPres[[tnet]] = as.data.frame(intraPres[[tnet]]); interPres[[tnet]]= as.data.frame(cbind(stats$corkIM, stats$corkME, stats$corkMEall, stats$ICORdat, stats$corCC, stats$corMAR)) colnames(interPres[[tnet]])=interPresNames; rownames(interPres[[tnet]])=colorLevels names(interPres)[[tnet]]=paste(name1,sep="") netPresent[tnet] = TRUE; } # of for (test in testNetworks[[iref]]) datout[[iref]]=list(netPresent = netPresent, quality = quality, intra = intraPres, inter = interPres, accuracy = accuracy, overlapTables = overlapTables) } # of for (iref in 1:length(referenceNetworkss)) names(datout)=setNames[referenceNetworks] } # of else for if (nNets==1) return(datout) } .checkExpr = function(multiExpr, verbose, indent) { spaces = indentSpaces(indent); nNets = length(multiExpr); if (verbose > 0) printFlush(paste(spaces, " ..checking data for excessive amounts of missing data..")); for (set in 1:nNets) { gsg = goodSamplesGenes(multiExpr[[set]]$data, verbose= verbose -2, indent = indent + 2); if (!gsg$allOK) { stop(paste("The submitted 'multiExpr' data contain genes or samples\n", " with zero variance or excessive counts of missing entries.\n", " Please use the function goodSamplesGenes on each set to identify the problematic\n", " genes and samples, and remove them before running modulePreservation.")) } } } .checkAdj = function(multiAdj, verbose, indent) { spaces = indentSpaces(indent); nNets = length(multiAdj); if (verbose > 0) printFlush(paste(spaces, " ..checking adjacencies for excessive amounts of missing data")); for (set in 1:nNets) { checkAdjMat(multiAdj[[set]]$data); gsg = goodSamplesGenes(multiAdj[[set]]$data, verbose= verbose -2, indent = indent + 2); if (!gsg$allOK) { stop(paste("The submitted 'multiAdj' 
contains rows or columns\n", " with zero variance or excessive counts of missing entries. Please remove\n", " offending rows and columns before running modulePreservation.")) } } }

.combineAdj = function(block1, block2)
{
  n1 = ncol(block1); n2 = ncol(block2);
  comb = matrix(0, n1+n2, n1+n2);
  comb[1:n1, 1:n1] = block1;
  comb[(n1+1):(n1+n2), (n1+1):(n1+n2)] = block2;
  try( {colnames(comb) = c(colnames(block1), colnames(block2)) }, silent = TRUE);
  comb;
}

# This function is basically copied from the file networkConcepts.R
.computeLinksInNeighbors = function(x, imatrix) { x %*% imatrix %*% x }
.computeSqDiagSum = function(x, vec) { sum(x^2 * vec) };

.clusterCoeff = function(adjmat1)
{
  # diag(adjmat1)=0
  no.nodes = dim(adjmat1)[[1]]
  nolinksNeighbors <- c(rep(-666,no.nodes))
  total.edge <- c(rep(-666,no.nodes))
  maxh1 = max(as.dist(adjmat1) );
  minh1 = min(as.dist(adjmat1) );
  nolinksNeighbors <- apply(adjmat1, 1, .computeLinksInNeighbors, imatrix = adjmat1)
  subTerm = apply(adjmat1, 1, .computeSqDiagSum, vec = diag(adjmat1));
  plainsum <- colSums(adjmat1)
  squaresum <- colSums(adjmat1^2)
  total.edge = plainsum^2 - squaresum
  #CChelp=rep(-666, no.nodes)
  CChelp = ifelse(total.edge==0, 0, (nolinksNeighbors-subTerm)/total.edge)
  CChelp
}

# This function assumes that the diagonal of the adjacency matrix is 1
.MAR = function(adjacency)
{
  denom = apply(adjacency, 2, sum)-1;
  mar = (apply(adjacency^2, 2, sum) - 1)/denom;
  mar[denom==0] = NA;
  mar;
}

#=======================================================================================
#
# Core calculation for expression data
#
#=======================================================================================

.coreCalcForExpr = function(datRef, datRefP, datTest, colors, weightsRef, weightsRefP, weightsTest, opt)
{
  colorLevels = levels(factor(colors))
  nMods = length(colorLevels)
  nGenes = length(colors)
  # Flag modules whose size is greater than 1
  modSizes = table(colors)
  act = (modSizes>1)
  kME = list()
  ME = list()
  if (!opt$densityOnly)
  {
    ME[[1]]=moduleEigengenes(datRef,
colors)$eigengenes kME[[1]]=as.matrix(signedKME(datRef,ME[[1]], exprWeights = weightsRef, corFnc = opt$corFnc, corOptions = opt$corOptions)) } if (opt$calculatePermutation | opt$densityOnly) { #printFlush("Calculating ME[[2]]"); ME[[2]]=moduleEigengenes(datRefP, colors)$eigengenes kME[[2]]=as.matrix(signedKME(datRefP,ME[[2]], exprWeights = weightsRefP, corFnc = opt$corFnc, corOptions = opt$corOptions)) } else { ME[[2]] = ME[[1]] kME[[2]] = kME[[1]] } ME[[3]]=moduleEigengenes(datTest, colors)$eigengenes kME[[3]]=as.matrix(signedKME(datTest,ME[[3]], exprWeights = weightsTest, corFnc = opt$corFnc, corOptions = opt$corOptions)) modGenes = list(); for (m in 1:nMods) modGenes[[m]] = c(1:nGenes)[colors==colorLevels[m]]; #kME correlation #if (verbose > 1) printFlush(paste(spaces, "....calculating kME...")); corkME = rep(NA, nMods); corkMEall = rep(NA, nMods); if (!opt$densityOnly) { for(j in 1:nMods ) if(act[j]) { nModGenes=length(modGenes[[j]]); names1=substring(colnames(kME[[1]]),4) j1=which(names1==colorLevels[j]) #corkME[j]=cor(kME[[1]][loc,j1],kME[[3]][loc,j1], use = "p") corExpr = parse(text=paste(opt$corFnc, "(kME[[1]][modGenes[[j]],j1], kME[[3]][modGenes[[j]],j1]", prepComma(opt$corOptions), ")")); corkME[j] = abs(eval(corExpr)); #corkMEall[j]=cor(kME[[1]][,j1],kME[[3]][,j1], use = "p") corExpr = parse(text=paste(opt$corFnc, "(kME[[1]][,j1], kME[[3]][,j1] ", prepComma(opt$corOptions), ")")); corkMEall[j] = abs(eval(corExpr)); # covkME[j]=cov(kME[[1]][loc,j1],kME[[3]][loc,j1], use = "p") # meanProductkME[j] = scalarProduct(kME[[1]][loc,j1],kME[[3]][loc,j1]) } } #proportion of variance explained #if (verbose > 1) # printFlush(paste(spaces, "....calculating proprotion of variance explained...")); proVar=matrix(NA, nMods ,2) meanSignAwareKME=matrix(NA, nMods ,2) names1=substring(colnames(kME[[2]]),4) for(j in 1:nMods ) if(act[j]) { j1=which(names1==colorLevels[j]) proVar[j,1]=mean((kME[[2]][modGenes[[j]],j1])^2,na.rm=TRUE) 
proVar[j,2]=mean((kME[[3]][modGenes[[j]],j1])^2,na.rm=TRUE) if (opt$densityOnly) { if (opt$nType==1) { meanSignAwareKME[j,1]=mean(abs(kME[[2]][modGenes[[j]],j1]),na.rm = TRUE) meanSignAwareKME[j,2]=mean(abs(kME[[3]][modGenes[[j]],j1]),na.rm = TRUE) } else { meanSignAwareKME[j,1]=abs(mean(kME[[2]][modGenes[[j]],j1],na.rm = TRUE)) meanSignAwareKME[j,2]=abs(mean(kME[[3]][modGenes[[j]],j1],na.rm = TRUE)) } } else { meanSignAwareKME[j,1]=abs(mean(abs(kME[[2]][modGenes[[j]],j1]),na.rm = TRUE)); meanSignAwareKME[j,2]=abs(mean(sign(kME[[1]][modGenes[[j]],j1]) * kME[[3]][modGenes[[j]],j1],na.rm = TRUE)) } } # if (verbose > 1) printFlush(paste(spaces, "....calculating separability...")); Separability=matrix(NA, nMods ,2) # for(k in (2-calculateQuality):2) for(k in 1:2) { Gold=which(colnames(ME[[k+1]]) %in% c(opt$MEgold, opt$MEgrey)) #corME=cor(ME[[k+1]],use="p") corExpr = parse(text=paste(opt$corFnc, "(ME[[k+1]]", prepComma(opt$corOptions), ")")); corME= eval(corExpr); if (opt$nType==0) corME = abs(corME); diag(corME) = 0; Separability[,k]=1-apply(corME[, -Gold, drop = FALSE], 1, max, na.rm = TRUE) } #mean signed correlation&inter array correlation # if (verbose > 1) printFlush(paste(spaces, "....calculating MeanSignAwareCorDat...")); MeanSignAwareCorDat=matrix(NA,nMods ,2) ICORdat=rep(NA,nMods) corkIM = rep(NA, nMods); corCC = rep(NA, nMods); corMAR = rep(NA, nMods); MeanAdj = matrix(NA,nMods ,2) meanCC = matrix(NA,nMods ,2) meanMAR = matrix(NA,nMods ,2) for(j in 1:nMods ) if(act[j]) { if (!opt$densityOnly) { #ModuleCorData1=cor(datRef[,modGenes[[j]]],use="p", quick = as.numeric(opt$quickCor)) corExpr = parse(text=paste(opt$corFnc, "(datRef[,modGenes[[j]]]", prepComma(opt$corOptions), if (!is.null(weightsRef)) ", weights.x = weightsRef[, modGenes[[j]]]" else "", ", quick = as.numeric(opt$quickCor))")); ModuleCorData1=eval(corExpr); } if (opt$calculatePermutation | opt$densityOnly) { #ModuleCorData2=cor(datRefP[,modGenes[[j]]],use="p", quick = as.numeric(opt$quickCor)) 
corExpr = parse(text=paste(opt$corFnc, "(datRefP[,modGenes[[j]]]", prepComma(opt$corOptions), if (!is.null(weightsRefP)) ", weights.x = weightsRefP[, modGenes[[j]]]" else "", ", quick = as.numeric(opt$quickCor))")); ModuleCorData2 = eval(corExpr); } else ModuleCorData2 = ModuleCorData1; #ModuleCorData3=cor(datTest[,modGenes[[j]]],use="p", quick = as.numeric(opt$quickCor)) corExpr = parse(text=paste(opt$corFnc, "(datTest[,modGenes[[j]]]", prepComma(opt$corOptions), if (!is.null(weightsTest)) ", weights.x = weightsTest[, modGenes[[j]]]" else "", ", quick = as.numeric(opt$quickCor))")); #printFlush(j); ModuleCorData3 = try(eval(corExpr)); if (inherits(ModuleCorData3, "try-error")) browser(); if (opt$nType==1) { SignedModuleCorData2 = abs(ModuleCorData2) } else SignedModuleCorData2 = ModuleCorData2; if (opt$densityOnly) { if (opt$nType==1) SignedModuleCorData3 = abs(ModuleCorData3) else SignedModuleCorData3 = ModuleCorData3 } else SignedModuleCorData3 = sign(ModuleCorData1)*ModuleCorData3 MeanSignAwareCorDat[j,1]=mean(as.dist(SignedModuleCorData2),na.rm = TRUE) MeanSignAwareCorDat[j,2]=mean(as.dist(SignedModuleCorData3),na.rm = TRUE) if (!opt$densityOnly) { #ICORdat[j]=cor(c(as.dist(ModuleCorData1)),c(as.dist(ModuleCorData3)),use="p") corExpr = parse(text=paste(opt$corFnc, "(c(as.dist(ModuleCorData1)),c(as.dist(ModuleCorData3))", prepComma(opt$corOptions), ")")); ICORdat[j] = eval(corExpr); # ICOVdat[j]=cov(c(as.dist(ModuleCorData1)),c(as.dist(ModuleCorData3)),use="p") # spdat[j]=scalarProduct(c(as.dist(ModuleCorData1)),c(as.dist(ModuleCorData3))) } if (opt$nType==1) { if (!opt$densityOnly) adjacency1 = ModuleCorData1^6; if (opt$calculatePermutation | opt$densityOnly) { adjacency2 = ModuleCorData2^6; } else adjacency2 = adjacency1; adjacency3 = ModuleCorData3^6; } else if (opt$nType==2) { if (!opt$densityOnly) adjacency1 = ( (1+ModuleCorData1)/2 ) ^12; if (opt$calculatePermutation | opt$densityOnly) { adjacency2 = ( (1+ModuleCorData2)/2 ) ^12; } else adjacency2 = 
adjacency1; adjacency3 = ( (1+ModuleCorData3)/2 ) ^12; } else { if (!opt$densityOnly) adjacency1 = ModuleCorData1^6; adjacency1[ModuleCorData1 < 0] = 0; if (opt$calculatePermutation | opt$densityOnly) { adjacency2 = ModuleCorData2^6; adjacency2[ModuleCorData2 < 0] = 0; } else adjacency2 = adjacency1; adjacency3 = ModuleCorData3^6; adjacency3[ModuleCorData3 < 0] = 0; } if (opt$calculateClusterCoeff) { ccRef = .clusterCoeff(adjacency1); ccRefP = .clusterCoeff(adjacency2); ccTest = .clusterCoeff(adjacency3); meanCC[j, 1] = mean(ccRefP); meanCC[j, 2] = mean(ccTest); if (!opt$densityOnly) { corExpr = parse(text=paste(opt$corFnc, "(ccRef, ccTest ", prepComma(opt$corOptions), ")")); corCC[j] = eval(corExpr); } } marRef = .MAR(adjacency1); marRefP = .MAR(adjacency2); marTest = .MAR(adjacency3); meanMAR[j, 1] = mean(marRefP); meanMAR[j, 2] = mean(marTest); if (!opt$densityOnly) { kIMref = apply(adjacency1, 2, sum, na.rm = TRUE) kIMtest = apply(adjacency3, 2, sum, na.rm = TRUE) corExpr = parse(text=paste(opt$corFnc, "(kIMref, kIMtest ", prepComma(opt$corOptions), ")")); corkIM[j] = eval(corExpr); corExpr = parse(text=paste(opt$corFnc, "(marRef, marTest ", prepComma(opt$corOptions), ")")); corMAR[j] = eval(corExpr); } MeanAdj[j,1]=mean(as.dist(adjacency2), na.rm=TRUE) MeanAdj[j,2]=mean(as.dist(adjacency3), na.rm=TRUE) } list(modSizes = modSizes, corkIM = corkIM, corkME = corkME, corkMEall = corkMEall, proVar = proVar, meanSignAwareKME = meanSignAwareKME, Separability = Separability, MeanSignAwareCorDat = MeanSignAwareCorDat, ICORdat = ICORdat, MeanAdj = MeanAdj, meanClusterCoeff = meanCC, meanMAR = meanMAR, corCC = corCC, corMAR = corMAR) } #=================================================================================================== # # Core calculation for adjacency # #=================================================================================================== # A few supporting functions first: .getSVDs = function(data, colors) { colorLevels = 
levels(factor(colors)) nMods =length(colorLevels) svds = list(); for (m in 1:nMods) { modGenes = (colors==colorLevels[m]) modAdj = data[modGenes, modGenes]; if (sum(is.na(modAdj))>0) { seed = .Random.seed; modAdj = impute.knn(modAdj)$data; .Random.seed <<- seed; } svds[[m]] = svd(modAdj, nu=1, nv=0); svds[[m]]$u = c(svds[[m]]$u); if (sum(svds[[m]]$u, na.rm = TRUE) < 0) svds[[m]]$u = -svds[[m]]$u; } svds; } .kIM = function(adj, colors, calculateAll = TRUE) { colorLevels = levels(factor(colors)) nMods =length(colorLevels) nGenes = length(colors); kIM = matrix(NA, nGenes, nMods); if (calculateAll) { for (m in 1:nMods) { modGenes = colors==colorLevels[m]; kIM[, m] = apply(adj[, modGenes, drop = FALSE], 1, sum, na.rm = TRUE); kIM[modGenes, m] = kIM[modGenes, m] - 1; } } else { for (m in 1:nMods) { modGenes = colors==colorLevels[m]; kIM[modGenes, m] = apply(adj[modGenes, modGenes, drop = FALSE], 1, sum, na.rm = TRUE) - 1; } } kIM; } # Here is the main function # Summary: # PVE: from svd$d # kME: from svd$u (to make it different from kIM) # kIM: as usual # kMEall: from kIMall # meanSignAwareKME: from svd$u # Separability: as in the paper # MeanSignAwareCorDat: ?? 
# meanAdj: mean adjacency # cor.cor: replace by cor.adj .coreCalcForAdj = function(datRef, datRefP, datTest, colors, opt) { # printFlush(".coreCalcForAdj:entering"); colorLevels = levels(factor(colors)) nMods =length(colorLevels) nGenes = length(colors); gold = substring(opt$MEgold, 3); grey = substring(opt$MEgrey, 3); # Flag modules whose size is 1 modSizes = table(colors) act = (modSizes>1) svds=list() kIM = list(); #printFlush(".coreCalcForAdj:getting svds and kIM"); if (!opt$densityOnly) { svds[[1]] = .getSVDs(datRef, colors); kIM[[1]] = .kIM(datRef, colors, calculateAll = opt$calculateCor.kIMall); } if (opt$calculatePermutation | opt$densityOnly) { svds[[2]] = .getSVDs(datRefP, colors); kIM[[2]] = .kIM(datRefP, colors, calculateAll = opt$calculateCor.kIMall); } else { svds[[2]] = svds[[1]]; kIM[[2]] = kIM[[1]]; } svds[[3]] = .getSVDs(datTest, colors); kIM[[3]] = .kIM(datTest, colors, calculateAll = opt$calculateCor.kIMall); proVar=matrix(NA, nMods ,2) modGenes = list(); for (m in 1:nMods) { modGenes[[m]] = c(1:nGenes)[colors==colorLevels[m]]; proVar[m, 1] = svds[[2]][[m]]$d[1]/sum(svds[[2]][[m]]$d); proVar[m, 2] = svds[[3]][[m]]$d[1]/sum(svds[[3]][[m]]$d); } #printFlush(".coreCalcForAdj:getting corkME and ICOR"); corkME = rep(NA, nMods); corkMEall = rep(NA, nMods); corkIM = rep(NA, nMods); ICORdat = rep(NA,nMods) if (!opt$densityOnly) { for(m in 1:nMods ) if(act[m]) { nModGenes=modSizes[m]; corExpr = parse(text=paste(opt$corFnc, "(svds[[1]][[m]]$u,svds[[3]][[m]]$u", prepComma(opt$corOptions), ")")); corkME[m] = abs(eval(corExpr)); if (opt$calculateCor.kIMall) { corExpr = parse(text=paste(opt$corFnc, "(kIM[[1]][,m],kIM[[3]][,m] ", prepComma(opt$corOptions), ")")); corkMEall[m] = eval(corExpr); } corExpr = parse(text=paste(opt$corFnc, "(kIM[[1]][modGenes[[m]],m],kIM[[3]][modGenes[[m]],m] ", prepComma(opt$corOptions), ")")); corkIM[m] = eval(corExpr); adj1 = datRef[modGenes[[m]], modGenes[[m]]]; adj2 = datTest[modGenes[[m]], modGenes[[m]]]; corExpr = 
parse(text=paste(opt$corFnc, "(c(as.dist(adj1)), c(as.dist(adj2))", prepComma(opt$corOptions), ")")); ICORdat[m] = eval(corExpr); } } meanSignAwareKME=matrix(NA, nMods ,2) meankIM=matrix(NA, nMods ,2) for(m in 1:nMods ) if(act[m]) { meankIM[m, 1] = mean(kIM[[2]][modGenes[[m]], m], na.rm = TRUE) meankIM[m, 2] = mean(kIM[[3]][modGenes[[m]], m], na.rm = TRUE) if (opt$densityOnly) { if (opt$nType==1) { meanSignAwareKME[m,1]=mean(abs(svds[[2]][[m]]$u),na.rm = TRUE) meanSignAwareKME[m,2]=mean(abs(svds[[3]][[m]]$u),na.rm = TRUE) } else { meanSignAwareKME[m,1]=abs(mean(svds[[2]][[m]]$u,na.rm = TRUE)) meanSignAwareKME[m,2]=abs(mean(svds[[3]][[m]]$u,na.rm = TRUE)) } } else { meanSignAwareKME[m,1]=mean(abs(svds[[2]][[m]]$u),na.rm = TRUE) meanSignAwareKME[m,2]=abs(mean(sign(svds[[1]][[m]]$u) * svds[[3]][[m]]$u,na.rm = TRUE)) } } MeanAdj = matrix(NA, nMods, 2); sepMat = array(NA, dim = c(nMods, nMods, 2)); corCC = rep(NA, nMods); corMAR = rep(NA, nMods); meanCC = matrix(NA,nMods ,2) meanMAR = matrix(NA,nMods ,2) for (m in 1:nMods) if (act[m]) { modAdj = datRefP[modGenes[[m]], modGenes[[m]]]; if (opt$calculateClusterCoeff) ccRefP = .clusterCoeff(modAdj); marRefP = .MAR(modAdj); meanMAR[m, 1] = mean(marRefP); MeanAdj[m,1]=mean(as.dist(modAdj), na.rm = TRUE); modAdj = datRef[modGenes[[m]], modGenes[[m]]]; if (opt$calculateClusterCoeff) ccRef = .clusterCoeff(modAdj); marRef = .MAR(modAdj); modAdj = datTest[modGenes[[m]], modGenes[[m]]]; if (opt$calculateClusterCoeff) ccTest = .clusterCoeff(modAdj); marTest = .MAR(modAdj); MeanAdj[m,2] = mean(as.dist(modAdj), na.rm = TRUE); if (opt$calculateClusterCoeff) { meanCC[m, 1] = mean(ccRefP); meanCC[m, 2] = mean(ccTest); corExpr = parse(text=paste(opt$corFnc, "(ccRef, ccTest ", prepComma(opt$corOptions), ")")); corCC[m] = eval(corExpr); } meanMAR[m, 2] = mean(marTest); corExpr = parse(text=paste(opt$corFnc, "(marRef, marTest ", prepComma(opt$corOptions), ")")); corMAR[m] = eval(corExpr); if ((m > 1) && (colorLevels[m]!=gold)) { for (m2 in 
1:(m-1)) if (colorLevels[m2]!=gold) { interAdj = datRefP[modGenes[[m]], modGenes[[m2]]]; tmp = mean(interAdj, na.rm = TRUE); if (tmp!=0) { sepMat[m, m2, 1] = mean(interAdj, na.rm = TRUE)/sqrt(MeanAdj[m, 1] * MeanAdj[m2, 1]); } else sepMat[m, m2, 1] = 0; sepMat[m2, m, 1] = sepMat[m, m2, 1]; interAdj = datTest[modGenes[[m]], modGenes[[m2]]]; tmp = mean(interAdj, na.rm = TRUE); if (tmp!=0) { sepMat[m, m2, 2] = mean(interAdj, na.rm = TRUE)/sqrt(MeanAdj[m, 2] * MeanAdj[m2, 2]); } else sepMat[m, m2, 2] = 0; sepMat[m2, m, 2] = sepMat[m, m2, 2]; } } } Separability=matrix(NA, nMods ,2) notGold = colorLevels!=gold; for(k in 1:2) Separability[notGold, k]=1-apply(sepMat[notGold, notGold, k, drop = FALSE], 1, max, na.rm = TRUE) MeanSignAwareCorDat=matrix(NA,nMods ,2) list(modSizes = modSizes, corkIM = corkIM, corkME = corkME, corkMEall = corkMEall, proVar = proVar, meanSignAwareKME = meanSignAwareKME, meankIM = meankIM, Separability = Separability, MeanSignAwareCorDat = MeanSignAwareCorDat, ICORdat = ICORdat, MeanAdj = MeanAdj, meanClusterCoeff = meanCC, meanMAR = meanMAR, corCC = corCC, corMAR = corMAR) } WGCNA/R/multiData.R0000644000176200001440000003103313344057441013366 0ustar liggesusers#======================================================================================================== # # Convenience functions for manipulating multiData structures # #======================================================================================================== # Note: many of these function would be simpler to use if I used some sort of class/method technique to keep # track of the class of each object internally. For example, I could then write a generic function "subset" that # would work consistently on lists and multiData objects. Similarly, multiData2list would simply become a # method of as.list, and as.list would be safe to use both on lists and on multiData objects. 
mtd.subset = function(multiData, rowIndex = NULL, colIndex = NULL, invert = FALSE, permissive = FALSE, drop = FALSE) { if (length(multiData)==0) return(NULL); size = checkSets(multiData, checkStructure = permissive); if (!size$structureOK && !is.null(colIndex)) warning(immediate. = TRUE, paste("mtd.subset: applying column selection on data sets that do not have\n", " the same number of columns. This is treacherous territory; proceed with caution.")); if (is.null(colIndex)) { if (!invert) colIndex.1 = c(1:size$nGenes)} else colIndex.1 = colIndex; if (is.null(rowIndex)) rowIndex = lapply(size$nSamples, function(n) if (invert) numeric(0) else c(1:n)) if (length(rowIndex)!=size$nSets) stop("If given, 'rowIndex' must be a list of the same length as 'multiData'."); out = list(); if (invert) { rowIndexAll = lapply(size$nSamples, function(n) c(1:n)) if (any(sapply(rowIndex, function(i) any(i<0)))) stop("Negative 'rowIndex' indices cannot be used with 'invert=TRUE'."); rowIndex = mapply(setdiff, rowIndexAll, rowIndex, SIMPLIFY = FALSE); } for (set in 1:size$nSets) { if (permissive) if (is.null(colIndex) && !invert) colIndex.1 = c(1:ncol(multiData[[set]]$data)) else colIndex.1 = colIndex; if (is.character(colIndex.1)) { colIndex.1 = match(colIndex.1, colnames(multiData[[set]]$data)); n1 = length(colIndex.1) if (any(is.na(colIndex.1))) stop("Cannot match the following entries in 'colIndex' to column names in set ", set, ":\n", paste( colIndex[is.na(colIndex.1)] [1:min(n1, 5)], collapse = ", "), if (n1>5) ", ... 
[output truncated]" else ""); } if (invert) { if (any(colIndex.1<0)) stop("Negative indices cannot be used with 'invert=TRUE'."); colIndex.2 = setdiff(1:ncol(multiData[[set]]$data), colIndex.1); } else colIndex.2 = colIndex.1; out[[set]] = list(data = multiData[[set]]$data[rowIndex[[set]], colIndex.2, drop = drop]); } names(out) = names(multiData); out; } multiData2list = function(multiData) { lapply(multiData, getElement, 'data'); } list2multiData = function(data) { out = list(); for (set in 1:length(data)) out[[set]] = list(data = data[[set]]); names(out) = names(data); out; } mtd.colnames = function(multiData) { if (length(multiData)==0) return(NULL); colnames(multiData[[1]]$data); } .calculateIndicator = function(nSets, mdaExistingResults, mdaUpdateIndex) { if (length(mdaUpdateIndex)==0) mdaUpdateIndex = NULL; calculate = rep(TRUE, nSets); if (!is.null(mdaExistingResults)) { nSets.existing = length(mdaExistingResults); if (nSets.existing>nSets) stop("Number of sets in 'mdaExistingResults' is higher than the number of sets in 'multiData'.\n", " Please supply a valid 'mdaExistingResults' or NULL to recalculate all results."); if (nSets.existing==0) stop("Number of sets in 'mdaExistingResults' is zero.\n", " Please supply a valid 'mdaExistingResults' or NULL to recalculate all results."); if (is.null(mdaUpdateIndex)) { calculate[1:length(mdaExistingResults)] = FALSE; } else { if (any(! 
mdaUpdateIndex %in% c(1:nSets)))
      stop("All entries in 'mdaUpdateIndex' must be between 1 and the number of sets in 'multiData'.");
    calculateIndex = sort(unique(c(mdaUpdateIndex,
                       if (nSets.existing < nSets) c((nSets.existing+1):nSets) else NULL)));
    calculate[-calculateIndex] = FALSE;
  }
}
calculate;
}

mtd.apply = function(
  # What to do
  multiData, FUN, ...,
  # Pre-existing results and update options
  mdaExistingResults = NULL, mdaUpdateIndex = NULL,
  # Output formatting options
  mdaSimplify = FALSE, returnList = FALSE,
  # Internal behaviour options
  mdaVerbose = 0, mdaIndent = 0)
{
  if (length(multiData)==0) return(NULL);
  printSpaces = indentSpaces(mdaIndent);
  nSets = length(multiData);
  calculate = .calculateIndicator(nSets, mdaExistingResults, mdaUpdateIndex);
  out = list();
  for (set in 1:nSets)
  {
    if (calculate[set])
    {
      if (mdaVerbose > 0) printFlush(spaste(printSpaces, "mtd.apply: working on set ", set));
      out[[set]] = list(data = FUN(multiData[[set]]$data, ...))
    } else
      out[set] = mdaExistingResults[set];
  }
  names(out) = names(multiData);
  if (mdaSimplify)
  {
    if (mdaVerbose > 0) printFlush(spaste(printSpaces, "mtd.apply: attempting to simplify..."));
    return (mtd.simplify(out));
  } else if (returnList)
  {
    return (multiData2list(out));
  }
  out;
}

mtd.applyToSubset = function(
  # What to do
  multiData, FUN, ...,
  # Which rows and cols to keep
  mdaRowIndex = NULL, mdaColIndex = NULL,
  # Pre-existing results and update options
  mdaExistingResults = NULL, mdaUpdateIndex = NULL, mdaCopyNonData = FALSE,
  # Output formatting options
  mdaSimplify = FALSE, returnList = FALSE,
  # Internal behaviour options
  mdaVerbose = 0, mdaIndent = 0 )
{
  if (length(multiData)==0) return(NULL);
  printSpaces = indentSpaces(mdaIndent);
  size = checkSets(multiData);
  if (mdaSimplify && mdaCopyNonData)
    stop("Non-data copying is not compatible with simplification.");
  if (mdaCopyNonData) res = multiData else res = vector(mode = "list", length = size$nSets);
  doSelection = FALSE;
  if (!is.null(mdaColIndex))
  {
    doSelection = TRUE
    if (any(mdaColIndex < 0 | mdaColIndex > size$nGenes))
      stop("Some of the indices in 'mdaColIndex' are out of range.");
  } else {
    mdaColIndex = c(1:size$nGenes);
  }
  if (!is.null(mdaRowIndex))
  {
    if (!is.list(mdaRowIndex))
      stop("mdaRowIndex must be a list, with one component per set.");
    if (length(mdaRowIndex)!=size$nSets)
      stop("Number of components in 'mdaRowIndex' must equal number of sets.");
    doSelection = TRUE
  } else {
    mdaRowIndex = lapply(size$nSamples, function(n) { c(1:n) });
  }
  calculate = .calculateIndicator(size$nSets, mdaExistingResults, mdaUpdateIndex);
  fun = match.fun(FUN)
  for (set in 1:size$nSets)
  {
    if (calculate[set])
    {
      if
(mdaVerbose > 0) printFlush(spaste(printSpaces, "mtd.applyToSubset: working on set ", set)); res[[set]] = list(data = fun( if (doSelection) multiData[[set]] $ data[mdaRowIndex[[set]], mdaColIndex, drop = FALSE] else multiData[[set]] $ data, ...)); } else res[set] = mdaExistingResults[set]; } names(res) = names(multiData); if (mdaSimplify) { if (mdaVerbose > 0) printFlush(spaste(printSpaces, "mtd.applyToSubset: attempting to simplify...")); return (mtd.simplify(res)); } else if (returnList) { return (multiData2list(res)); } return(res); } mtd.simplify = function(multiData) { if (length(multiData)==0) return(NULL); len = length(multiData[[1]]$data); dim = dim(multiData[[1]]$data); simplifiable = TRUE; nSets = length(multiData); for (set in 1:nSets) { if (len!=length(multiData[[set]]$data)) simplifiable = FALSE; if (!isTRUE(all.equal( dim, dim(multiData[[set]]$data)))) simplifiable = FALSE; } if (simplifiable) { if (is.null(dim)) { innerDim = len; innerNames = names(multiData[[1]]$data); if (is.null(innerNames)) innerNames = spaste("X", c(1:len)); } else { innerDim = dim; innerNames = dimnames(multiData[[1]]$data); if (is.null(innerNames)) innerNames = lapply(innerDim, function(x) {spaste("X", 1:x)}) nullIN = sapply(innerNames, is.null); if (any(nullIN)) innerNames[nullIN] = lapply(innerDim[nullIN], function(x) {spaste("X", 1:x)}) } setNames = names(multiData); if (is.null(setNames)) setNames = spaste("Set_", 1:nSets); mtd.s = matrix(NA, prod(innerDim), nSets); for (set in 1:nSets) mtd.s[, set] = as.vector(multiData[[set]]$data); dim(mtd.s) = c(innerDim, nSets); if (!is.null(innerNames)) dimnames(mtd.s) = c (if (is.list(innerNames)) innerNames else list(innerNames), list(setNames)); return(mtd.s); } return(multiData); } isMultiData = function(x, strict = TRUE) { if (strict) { !inherits(try(checkSets(x), silent = TRUE), 'try-error'); } else { hasData = sapply(x, function(l) { "data" %in% names(l) }); all(hasData) } } mtd.mapply = function( # What to do FUN, ..., 
MoreArgs = NULL, # How to interpret the input mdma.argIsMultiData = NULL, # Copy previously known results? mdmaExistingResults = NULL, mdmaUpdateIndex = NULL, # How to format output mdmaSimplify = FALSE, returnList = FALSE, # Options controlling internal behaviour mdma.doCollectGarbage = FALSE, mdmaVerbose = 0, mdmaIndent = 0) { printSpaces = indentSpaces(mdmaIndent); dots = list(...); if (length(dots)==0) stop("No arguments were specified. Please type ?mtd.mapply to see the help page."); dotLengths = sapply(dots, length); if (any(dotLengths!=dotLengths[1])) { tmp = data.frame(name = names(dots), length = dotLengths); rownames(tmp) = NULL; on.exit(print(tmp)); stop(spaste("All arguments to vectorize over must have the same length.\n", "Scalar arguments should be put into the 'MoreArgs' argument.\n", "Note: lengths of '...' arguments are: ")); } nArgs = length(dots); res = list(); if (is.null(mdma.argIsMultiData)) mdma.argIsMultiData = sapply(dots, isMultiData, strict = FALSE); nSets = dotLengths[1]; calculate = .calculateIndicator(nSets, mdmaExistingResults, mdmaUpdateIndex); FUN = match.fun(FUN); for (set in 1:nSets) { if (calculate[set]) { if (mdmaVerbose > 0) printFlush(spaste(printSpaces, "mtd.mapply: working on set ", set)); localArgs = list(); for (arg in 1:nArgs) localArgs[[arg]] = if (mdma.argIsMultiData[arg]) dots[[arg]] [[set]] $ data else dots[[arg]] [[set]]; names(localArgs) = names(dots); res[[set]] = list(data = do.call(FUN, c(localArgs, MoreArgs))); if (mdma.doCollectGarbage) collectGarbage(); } else res[set] = mdmaExistingResults[set]; } names(res) = names(dots[[1]]); if (mdmaSimplify) { if (mdmaVerbose > 0) printFlush(spaste(printSpaces, "mtd.mapply: attempting to simplify...")); return (mtd.simplify(res)); } else if (returnList) { return (multiData2list(res)); } return(res); } mtd.rbindSelf = function(multiData) { if (length(multiData)==0) return(NULL); size = checkSets(multiData); out = NULL; colnames = mtd.colnames(multiData); for (set in 
1:size$nSets) { if (!is.null(colnames(multiData[[set]]$data)) && !isTRUE(all.equal(colnames, colnames(multiData[[set]]$data))) ) { warning("mtd.rbindSelf: 'colnames' of the first set and set ", set, " do not agree."); colnames(multiData[[set]]$data) = colnames; } } do.call(rbind, multiData2list(multiData)); } mtd.setAttr = function(multiData, attribute, valueList) { if (length(multiData)==0) return(NULL); size = checkSets(multiData); ind = 1; for (set in 1:size$nSets) { attr(multiData[[set]]$data, attribute) = valueList[[ind]]; ind = ind + 1; if (ind > length(valueList)) ind = 1; } multiData } mtd.setColnames = function(multiData, colnames) { if (length(multiData)==0) return(NULL); size = checkSets(multiData); for (set in 1:size$nSets) colnames(multiData[[set]]$data) = colnames multiData } multiData = function(...) { list2multiData(list(...)); } WGCNA/R/transposeBigData.R0000644000176200001440000000364313103416622014673 0ustar liggesuserstransposeBigData = function (x, blocksize = 20000) { isdataframe = is.data.frame(x) ismatrix = is.matrix(x) if (!(isdataframe | ismatrix)) stop("Input is neither a data frame nor a matrix") if (blocksize < 2) stop("This blocksize makes no sense. 
It should be a positive integer>1.") nrow1 = nrow(x) ncol1 = ncol(x) xTranspose = matrix(NA, nrow = ncol1, ncol = nrow1) if (nrow1 <= ncol1) { no.blocks = as.integer(ncol1/blocksize) if (no.blocks >= 1) { for (i in 1:no.blocks) { blockIndex = (i - 1) * blocksize + 1:blocksize xTranspose[blockIndex, ] = t(x[, blockIndex]) } } if (ncol1 - no.blocks * blocksize == 1) { xTranspose[ncol1, ] = t(x[, ncol1]) } if (ncol1 - no.blocks * blocksize > 1) { finalindex = (no.blocks * blocksize + 1):ncol1 xTranspose[finalindex, ] = t(x[, finalindex]) } } if (nrow1 > ncol1) { no.blocks = as.integer(nrow1/blocksize) if (no.blocks >= 1) { for (i in 1:no.blocks) { blockIndex = (i - 1) * blocksize + 1:blocksize xTranspose[, blockIndex] = t(x[blockIndex, ]) } } if (nrow1 - no.blocks * blocksize == 1) { xTranspose[, nrow1] = t(x[nrow1, ]) } if (nrow1 - no.blocks * blocksize > 1) { finalindex = (no.blocks * blocksize + 1):nrow1 xTranspose[, finalindex] = t(x[finalindex, ]) } } if (isdataframe) { xTranspose = data.frame(xTranspose) dimnames(xTranspose)[[1]] = dimnames(x)[[2]] dimnames(xTranspose)[[2]] = dimnames(x)[[1]] } if (ismatrix) { dimnames(xTranspose)[[1]] = dimnames(x)[[2]] dimnames(xTranspose)[[2]] = dimnames(x)[[1]] } xTranspose } WGCNA/R/proportionsInAdmixture.R0000644000176200001440000001113313103416622016202 0ustar liggesusersproportionsInAdmixture<-function (MarkerMeansPure, datE.Admixture, calculateConditionNumber = FALSE, coefToProportion = TRUE) { datE.Admixture = data.frame(datE.Admixture) if (sum(is.na(names(datE.Admixture))) > 0) { warning("Some of the column names of datE.Admixture are missing. Recommendation: check or assign column names. But for your convenience, we remove the corresponding columns of datE.Admixture from the analysis.") datE.Admixture = datE.Admixture[, !is.na(names(datE.Admixture))] } if (sum(names(datE.Admixture) == "") > 0) { warning("Some of the column names of datE.Admixture are missing. Recommendation: check or assign column names. 
But for your convenience, we remove the corresponding columns of datE.Admixture from the analysis.") datE.Admixture = datE.Admixture[, names(datE.Admixture) != ""] } MarkerID = MarkerMeansPure[, 1] if (sum(is.na(MarkerID)) > 0) { warning("Some of the marker are missing (NA). Recommendation: check the first column of the input MarkerMeansPure. It should contain marker names. But for your convenience, we remove the corresponding markers from the analysis.") MarkerMeansPure = MarkerMeansPure[!is.na(MarkerID), ] MarkerID = MarkerMeansPure[, 1] } if (sum(MarkerID == "", na.rm = T) > 0) { warning("Some of the marker names are empty strings. Recommendation: check the first column of the input MarkerMeansPure. It should contain marker names. But for your convenience, we remove the corresponding markers from the analysis.") MarkerMeansPure = MarkerMeansPure[MarkerID != "", ] MarkerID = MarkerMeansPure[, 1] } noMissingValuesMarker = as.numeric(apply(is.na(MarkerMeansPure[, -1]), 1, sum)) if (max(noMissingValuesMarker, na.rm = T) > 0) { warning("Some of the markers (rows of MarkerMeansPure) contain missing values. This is problematic.\nFor your convenience, we remove the corresponding markers (rows) from the analysis.") MarkerMeansPure = MarkerMeansPure[noMissingValuesMarker == 0, ] MarkerID = MarkerMeansPure[, 1] } match1 = match(MarkerID, names(datE.Admixture)) match1 = match1[!is.na(match1)] if (length(match1) == 0) stop("None of the marker names correspond to column names of the input datE.Admixture. Possible solutions: Transpose datE.Admixture or MarkerMeansPure. Or make sure to assign suitable names to the columns of datE.Admixture, e.g. as follows dimnames(datE.Admixture)[[2]]=GeneSymbols.") if (length(match1) < dim(MarkerMeansPure)[[1]]) { warning(paste("Only", length(match1), "out of ", dim(MarkerMeansPure)[[1]], "rows of MarkerMeansPure correspond to columns of datE.Admixture. 
\nIf this surprises you, check the first column of MarkerMeansPure or the column names of datE.Admixture. \nThe output contains a list of markers that could be identified."))
  }
  datE.MarkersAdmixtureTranspose = t(datE.Admixture[, match1])
  match2 = match(names(datE.Admixture)[match1], MarkerID)
  match2 = match2[!is.na(match2)]
  MarkerMeansPure = MarkerMeansPure[match2, ]
  if (sum(as.character(MarkerMeansPure[, 1]) != dimnames(datE.MarkersAdmixtureTranspose)[[1]], na.rm = TRUE) > 0)
    stop("The marker names do not line up. Specifically,\nas.character(MarkerMeansPure[, 1]) != dimnames(datE.MarkersAdmixtureTranspose)[[1]]")
  conditionNumber = NA
  if (dim(MarkerMeansPure)[[2]] == 2) {
    A = as.matrix(MarkerMeansPure[, -1], ncol = 1)
  } else {
    A = as.matrix(MarkerMeansPure[, -1])
  }
  if (dim(as.matrix(A))[[2]] > 1 & dim(as.matrix(A))[[1]] > 1 & calculateConditionNumber) {
    conditionNumber = kappa(A)
  }
  datCoef = t(lm(datE.MarkersAdmixtureTranspose ~ A)$coefficients[-1, ])
  coef2prop = function(coef) {
    prop = rep(NA, length(coef))
    coef[coef < 0] = 0
    if (sum(coef, na.rm = TRUE) > 0 & !is.na(sum(coef, na.rm = TRUE))) {
      prop = coef/sum(coef, na.rm = TRUE)
    }
    prop
  }
  if (coefToProportion) {
    PredictedProportions = data.frame(t(apply(datCoef, 1, coef2prop)))
  } else {
    PredictedProportions = datCoef
  }
  dimnames(PredictedProportions)[[1]] = dimnames(datE.Admixture)[[1]]
  out = list(PredictedProportions = PredictedProportions, datCoef = datCoef, conditionNumber = conditionNumber, markersUsed = as.character(MarkerMeansPure[, 1]))
  out
}
WGCNA/R/blockwiseData.R0000644000176200001440000001520113454432720014214 0ustar liggesusers
#==========================================================================================================
#
# Utility functions for handling possibly disk-backed blockwise data.
# #========================================================================================================== .getAttributesOrEmptyList = function(object) { att = attributes(object); if (is.null(att)) list() else att; } newBlockwiseData = function(data, external = FALSE, fileNames = NULL, doSave = external, recordAttributes = TRUE, metaData = list()) { if (length(external)==0) stop("'external' must be logical of length 1."); if (!is.null(dim(data)) || !is.list(data)) stop("'data' must be a list without dimensions."); if (recordAttributes) { attributes = lapply(data, .getAttributesOrEmptyList); } else attributes = NULL; nBlocks = length(data); if (length(metaData) > 0) { if (length(metaData)!=nBlocks) stop("If 'metaData' are given, it must be a list with one component per component of 'data'."); } else { metaData = .listRep(list(), nBlocks) } lengths = sapply(data, length); if (doSave && !external) warning("newBlockwiseData: Cannot save when 'external' is not TRUE. Data will not be written to disk.") if (external) { if (is.null(fileNames)) stop("When 'external' is TRUE, 'fileNames' must be given."); } else fileNames = NULL; out = list(external = external, data = if (external) list() else data, fileNames = fileNames, lengths = lengths, attributes = attributes, metaData = metaData); if (doSave && external) { if (nBlocks!=length(fileNames)) stop("Length of 'data' and 'fileNames' must be the same."); mapply(function(object, f) save(object, file = f), data, fileNames); } class(out) = "BlockwiseData"; out; } mergeBlockwiseData = function(...) 
{ args = list(...); args = args[ sapply(args, length) > 0]; if (!all(sapply(args, inherits, "BlockwiseData"))) stop("All arguments must be of class 'BlockwiseData'."); external1 = .checkLogicalConsistency(args, "external"); .checkListNamesConsistency(lapply(args, getElement, "attributes"), "attributes"); .checkListNamesConsistency(lapply(args, getElement, "metaData"), "metaData"); out = list(external = external1, data = do.call(c, lapply(args, .getElement, "data")), fileNames = do.call(c, lapply(args, .getElement, "fileNames")), lengths = do.call(c, lapply(args, .getElement, "lengths")), attributes = do.call(c, lapply(args, .getElement, "attributes")), metaData = do.call(c, lapply(args, .getElement, "metaData"))); class(out) = "BlockwiseData"; out; } # Under normal circumstance arguments external, dist and diag should not be set by the calling fnc, but this # function can also be used to start a new instance of blockwise data. addBlockToBlockwiseData = function(bwData, blockData, external = bwData$external, blockFile = NULL, doSave = external, recordAttributes = !is.null(bwData$attributes), metaData = NULL) { badj1 = newBlockwiseData(external = external, data = if (is.null(blockData)) NULL else list(blockData), fileNames = blockFile, recordAttributes = recordAttributes, metaData = list(metaData), doSave = doSave) mergeBlockwiseData(bwData, badj1); } BD.actualFileNames = function(bwData) { if (!inherits(bwData, "BlockwiseData")) stop("'bwData' is not a blockwise data structure."); if (bwData$external) bwData$fileNames else character(0); } BD.nBlocks = function(bwData) { if (!inherits(bwData, "BlockwiseData")) stop("'bwData' is not a blockwise data structure."); length(bwData$lengths); } BD.blockLengths = function(bwData) { if (!inherits(bwData, "BlockwiseData")) stop("'bwData' is not a blockwise structure."); bwData$lengths; } BD.getMetaData = function(bwData, blocks = NULL, simplify = TRUE) { if (!inherits(bwData, "BlockwiseData")) stop("'bwData' is not a blockwise 
structure.");
  if (is.null(blocks)) blocks = 1:BD.nBlocks(bwData);
  if ( (length(blocks)==0) | any(!is.finite(blocks)))
    stop("'blocks' must be present and finite.");
  if (any(blocks < 1) | any(blocks > BD.nBlocks(bwData)))
    stop("All entries in 'blocks' must be between 1 and ", BD.nBlocks(bwData))
  out = bwData$metaData[blocks];
  if (length(blocks)==1 && simplify) out = out[[1]];
  out;
}

.getBDorPlainData = function(data, blocks = NULL, simplify = TRUE)
{
  if (inherits(data, "BlockwiseData")) BD.getData(data, blocks, simplify) else data;
}

BD.getData = function(bwData, blocks = NULL, simplify = TRUE)
{
  if (!inherits(bwData, "BlockwiseData"))
    stop("'bwData' is not a blockwise data structure.");
  if (is.null(blocks)) blocks = 1:BD.nBlocks(bwData);
  if ( (length(blocks)==0) | any(!is.finite(blocks)))
    stop("'blocks' must be present and finite.");
  if (any(blocks < 1) | any(blocks > BD.nBlocks(bwData)))
    stop("All entries in 'blocks' must be between 1 and ", BD.nBlocks(bwData))
  if (bwData$external)
  {
    lengths = BD.blockLengths(bwData);
    out = mapply(.loadObject, bwData$fileNames[blocks], name = 'object', size = lengths[blocks],
                 SIMPLIFY = FALSE);
  } else out = bwData$data[blocks];
  if (length(blocks)==1 && simplify) out = out[[1]];
  out;
}

BD.checkAndDeleteFiles = function(bwData)
{
  if (!inherits(bwData, "BlockwiseData"))
    stop("'bwData' is not a blockwise data structure.");
  if (bwData$external) .checkAndDelete(bwData$fileNames)
}

.getData = function(x, ...)
{ if (inherits(x, "BlockwiseData")) return(BD.getData(x, ...)); x; } .setAttr = function(object, name, value) { attr(object, name) = value; object; } .setAttrFromList = function(object, valueList) { if (length(valueList) > 0) for (i in 1:length(valueList)) attr(object, names(valueList)[i]) = valueList[[i]]; object; } # A version of getElement that returns NULL if name does not name a valid object .getElement = function(lst, name) { if (name %in% names(lst)) lst[[name, exact = TRUE]] else NULL } .checkLogicalConsistency = function(objects, name) { vals = sapply(objects, getElement, name) if (!all(vals) && !all(!vals)) stop("All arguments must have the same value of '", name, "'."); vals[1]; } .checkListNamesConsistency = function(lst, name) { names = lapply(lst, names); if (!all(sapply(names, function(x) isTRUE(all.equal(x, names[[1]]))))) stop("Not all names agree in ", name); } WGCNA/R/userListEnrichment.R0000644000176200001440000001712413670231050015266 0ustar liggesusers# userListEnrichment <- function (geneR, labelR, fnIn = NULL, catNmIn = fnIn, nameOut = "enrichment.csv", useBrainLists = FALSE, useBloodAtlases = FALSE, omitCategories = "grey", outputCorrectedPvalues = TRUE, useStemCellLists = FALSE, outputGenes = FALSE, minGenesInCategory = 1, useBrainRegionMarkers = FALSE, useImmunePathwayLists = FALSE, usePalazzoloWang = FALSE) { if (length(geneR) != length(labelR)) stop("geneR and labelR must have same number of elements.") if (length(catNmIn) < length(fnIn)) { catNmIn = c(catNmIn, fnIn[(length(catNmIn) + 1):length(fnIn)]) write("WARNING: not enough category names. \n\t\t\t Naming remaining categories with file names.", "") } if (is.null(fnIn) & (! 
(useBrainLists | useBloodAtlases | useStemCellLists | useBrainRegionMarkers | useImmunePathwayLists | usePalazzoloWang)) ) stop("Either enter user-defined lists or set one of the use_____ parameters to TRUE.") glIn = NULL if (length(fnIn)>0) { for (i in 1:length(fnIn)) { ext = substr(fnIn[i], nchar(fnIn[i]) - 2, nchar(fnIn[i])) if (ext == "csv") { datIn = read.csv(fnIn[i]) if (colnames(datIn)[2] == "Gene") { datIn = datIn[, 2:3] } else { datIn = datIn[, 1:2] } } else { datIn = scan(fnIn[i], what = "character", sep = "\n") datIn = cbind(datIn[2:length(datIn)], datIn[1]) } colnames(datIn) = c("Gene", "Category") datIn[, 2] = paste(datIn[, 2], catNmIn[i], sep = "__") glIn = rbind(glIn, datIn) } glIn = cbind(glIn, Type = rep("User", nrow(glIn))); } if (useBrainLists) { if (!(exists("BrainLists"))) BrainLists = NULL; data("BrainLists",envir=sys.frame(sys.nframe())); write("See help file for details regarding brain list references.", "") glIn = rbind(glIn, cbind(BrainLists, Type = rep("Brain", nrow(BrainLists)))) } if (useBloodAtlases) { if (!(exists("BloodLists"))) BloodLists = NULL; data("BloodLists",envir=sys.frame(sys.nframe())); write("See help file for details regarding blood atlas references.", "") glIn = rbind(glIn, cbind(BloodLists, Type = rep("Blood", nrow(BloodLists)))) } if (useStemCellLists) { if (!(exists("SCsLists"))) SCsLists = NULL; data("SCsLists",envir=sys.frame(sys.nframe())); write("See help file for details regarding stem cell list references.", "") glIn = rbind(glIn, cbind(SCsLists, Type = rep("StemCells", nrow(SCsLists)))) } if (useBrainRegionMarkers) { if (!(exists("BrainRegionMarkers"))) BrainRegionMarkers = NULL; data("BrainRegionMarkers",envir=sys.frame(sys.nframe())); write("Brain region markers from http://human.brain-map.org/ -- see help file for details.", "") glIn = rbind(glIn, cbind(BrainRegionMarkers, Type = rep("HumanBrainRegions", nrow(BrainRegionMarkers)))) } if (useImmunePathwayLists) { if (!(exists("ImmunePathwayLists"))) 
ImmunePathwayLists = NULL; data("ImmunePathwayLists",envir=sys.frame(sys.nframe())); write("See help file for details regarding immune pathways.", "") glIn = rbind(glIn, cbind(ImmunePathwayLists, Type = rep("Immune", nrow(ImmunePathwayLists)))) } if (usePalazzoloWang) { if (!(exists("PWLists"))) PWLists = NULL; data("PWLists",envir=sys.frame(sys.nframe())); write("See help file for details regarding Palazzolo / Wang lists from CHDI.", "") write("---- there are many of these gene sets so the function may take several minutes to run.", "") glIn = rbind(glIn, cbind(PWLists, Type = rep("PW_Lists", nrow(PWLists)))) } #removeDups = unique(paste(as.character(glIn[, 1]), as.character(glIn[, # 2]), as.character(glIn[, 3]), sep = "@#$%")) #if (length(removeDups) < length(glIn[, 1])) # glIn = t(as.matrix(as.data.frame(strsplit(removeDups, "@#$%", fixed = TRUE)))) glIn.2 = glIn[!duplicated(as.data.frame(glIn)), ] geneIn = as.character(glIn.2[, 1]) labelIn = as.character(glIn.2[, 2]) geneAll = sort(unique(geneR)) keep = is.element(geneIn, geneAll) geneIn = geneIn[keep] labelIn = labelIn[keep] catsR = sort(unique(labelR)) omitCategories = c(omitCategories, "background") catsR = catsR[!is.element(catsR, omitCategories)] catsIn = sort(unique(labelIn)) typeIn = glIn.2[keep, ][match(catsIn, labelIn), 3]; lenAll = length(geneAll) nCols.pValues = 5; nComparisons = length(catsR) * length(catsIn); nIn = length(catsIn); nR = length(catsR); index = 1; nOverlap = rep(0, nComparisons); pValues = rep(1, nComparisons); ovGenes = vector(mode = "list", length = nComparisons); isI = matrix(FALSE, lenAll, nIn); for (i in 1:nIn) { isI[, i] = is.element(geneAll,geneIn[labelIn == catsIn[i]]); } for (r in 1:length(catsR)) { isR = is.element(geneAll,geneR[(labelR == catsR[r])]) for (i in 1:length(catsIn)) { isI.1 = isI[, i]; lyn = sum(isR&(!isI.1)) lny = sum(isI.1&(!isR)) lyy = sum(isR&isI.1) gyy = geneAll[isR&isI.1] lnn = lenAll - lyy - lyn - lny pv = fisher.test(matrix(c(lnn,lny,lyn,lyy), 2, 2), 
alternative = "greater")$p.value
      nOverlap[index] = lyy;
      pValues[index] = pv;
      ovGenes[[index]] = gyy
      index = index + 1
    }
  }
  results = list(pValues = data.frame(InputCategories = rep(catsR, rep(nIn, nR)),
                   UserDefinedCategories = rep(catsIn, nR),
                   Type = rep(typeIn, nR),
                   NumOverlap = nOverlap,
                   Pvalues = pValues,
                   CorrectedPvalues = ifelse(pValues * nComparisons > 1, 1, pValues * nComparisons)),
                 ovGenes = ovGenes);
  namesOv = paste(results$pValues$InputCategories, "--", results$pValues$UserDefinedCategories);
  names(results$ovGenes) = namesOv
  if (outputCorrectedPvalues) {
    results$sigOverlaps = results$pValues[results$pValues$CorrectedPvalues < 0.05, c(1, 2, 3, 6)]
  } else {
    results$sigOverlaps = results$pValues[results$pValues$Pvalues < 0.05, c(1, 2, 3, 5)]
    write("Note that the output p-values are not corrected for multiple comparisons.", "")
  }
  results$sigOverlaps = results$sigOverlaps[order(results$sigOverlaps[, 4]), ];
  row.names(results$sigOverlaps) = NULL;
  rSig = results$sigOverlaps
  nSig = nrow(rSig);
  if (nSig > 0) {
    rCats = paste(rSig$InputCategories,"--",rSig$UserDefinedCategories)
    rNums <- rep(0, nSig);
    rGenes <- rep("", nSig);
    for (i in 1:nSig) {
      rGn = results$ovGenes[[which(names(results$ovGenes)==rCats[i])]]
      rNums[i] = length(rGn);
      rGenes[i] = paste(rGn,collapse=", ");
    }
    rSig$NumGenes = rNums
    rSig$CategoryGenes = rGenes
    rSig = rSig[rSig$NumGenes>=minGenesInCategory,]
    if(!outputGenes) rSig = rSig[,1:4]
    results$sigOverlaps = rSig
    if (length(nameOut) > 0)
      write.csv(results$sigOverlaps, file = nameOut, row.names = FALSE)
  }
  write(paste(length(namesOv), "comparisons were successfully performed."), "")
  return(results)
}

# File: WGCNA/R/consensusRepresentatives.R

#============================================================================================================
#
# mdx version of collapseRows
#
#============================================================================================================
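# Usage sketch for userListEnrichment: each input category is tested against each
# user-supplied or built-in list with a one-sided Fisher exact test, and p-values are
# Bonferroni-corrected across all comparisons. The gene symbols and module labels below
# are made-up illustration data, not part of the package.

```r
# Hypothetical module assignment for a handful of genes (illustrative only).
genes   <- c("NRGN", "GFAP", "AQP4", "MBP", "PLP1", "CD68")
modules <- c("blue", "blue", "blue", "turquoise", "turquoise", "grey")

# Test each module against the built-in brain lists; categories in
# 'omitCategories' ("grey" by default) are skipped. nameOut = NULL suppresses
# the CSV output, since write.csv is only called when length(nameOut) > 0.
enr <- userListEnrichment(genes, modules, useBrainLists = TRUE, nameOut = NULL)

# Bonferroni-corrected significant overlaps and the genes behind each overlap:
enr$sigOverlaps
enr$ovGenes
```

Note that `CorrectedPvalues = pValues * nComparisons` capped at 1 is a plain Bonferroni correction over all category pairs, so with many lists (e.g. `usePalazzoloWang = TRUE`) the corrected values are conservative.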
.cr.absMaxMean <- function(x, robust)
{
  if (robust) { colMedians(abs(x), na.rm = TRUE) } else colMeans(abs(x), na.rm = TRUE)
}

.cr.absMinMean <- function(x, robust)
{
  if (robust) { -colMedians(abs(x), na.rm = TRUE) } else -colMeans(abs(x), na.rm = TRUE)
}

.cr.MaxMean <- function(x, robust)
{
  if (robust) { colMedians(x, na.rm = TRUE) } else colMeans(x, na.rm = TRUE)
}

.cr.MinMean <- function(x, robust)
{
  if (robust) { -colMedians(x, na.rm = TRUE) } else -colMeans(x, na.rm = TRUE)
}

.cr.maxVariance <- function(x, robust)
{
  if (robust) { colMads(x, na.rm = TRUE) } else colSds(x, na.rm = TRUE)
}

.checkConsistencyOfGroupAndColID = function(mdx, colID, group)
{
  colID = as.character(colID)
  group = as.character(group)
  if (length(colID)!=length(group))
    stop("'group' and 'colID' must have the same length.")
  if (any(duplicated(colID)))
    stop("'colID' contains duplicate entries.")
  rnDat = mtd.colnames(mdx);
  if ( sum(is.na(colID))>0 )
    warning(spaste("The argument colID contains missing data. It is recommended that you choose non-missing,\n",
                   "unique values for colID, e.g. character strings."))
  if ( sum(group=="",na.rm=TRUE)>0 ){
    warning(paste("group contains blanks. It is strongly recommended that you remove",
                  "these rows before calling the function.\n",
                  "   But for your convenience, this function will remove these rows"));
    group[group==""]=NA
  }
  if ( sum(is.na(group))>0 ){
    warning(paste("The argument group contains missing data. It is strongly recommended\n",
                  "   that you remove these rows before calling the function. Or redefine group\n",
                  "   so that it has no missing data. But for convenience, we remove these data."))
  }
  if ((is.null(rnDat))&(checkSets(mdx)$nGenes==length(colID))) {
    write("Warning: mdx does not have column names.
Using 'colID' as column names.","")
    rnDat = colID;
    mdx = mtd.setColnames(mdx, colID);
  }
  if (is.null(rnDat))
    stop("'mdx' does not have column names and \n",
         "length of 'colID' is not the same as # variables in mdx.");
  keepProbes = rep(TRUE, checkSets(mdx)$nGenes);
  if (sum(is.element(rnDat,colID))!=length(colID)){
    write("Warning: row names of input data and probes not identical...","")
    write("... Attempting to proceed anyway. Check results carefully.","")
    keepProbes = is.element(colID, rnDat);
    colID = colID[keepProbes]
    mdx = mtd.subset(mdx, , colID);
    group = group[keepProbes]
  }
  restCols = (group!="" & !is.na(group))
  if (any(!restCols)) {
    keepProbes[keepProbes] = restCols;
    mdx = mtd.subset(mdx, , restCols);
    group = group[restCols]
    colID = colID[restCols]
    rnDat = rnDat[restCols]
  }
  list(mdx = mdx, group = group, colID = colID, keepProbes = keepProbes);
}

selectFewestConsensusMissing <- function(mdx, colID, group, minProportionPresent = 1,
                                         consensusQuantile = 0, verbose = 0, ...)
{
  ## For each group, select the probe with the fewest missing values, and return the results.
  # If there is a tie, keep all probes involved in the tie.
  # The main part of this function is run only if omitGroups=TRUE
  otherArgs = list(...)
  nVars = checkSets(mdx)$nGenes;
  nSamples = checkSets(mdx)$nSamples;
  nSets = length(mdx);
  if ((!"checkConsistency" %in% names(otherArgs)) || otherArgs$checkConsistency) {
    cd = .checkConsistencyOfGroupAndColID(mdx, colID, group);
    mdx = cd$mdx;
    group = cd$group;
    colID = cd$colID;
    keep = cd$keepProbes;
  } else
    keep = rep(TRUE, nVars);
  # First, return datET if there is no missing data, otherwise run the function
  if (sum(mtd.apply(mdx, function(x) sum(is.na(x)), mdaSimplify = TRUE))==0)
    return(rep(TRUE, nVars));
  # Set up the variables.
names(group) = colID probes = mtd.colnames(mdx); genes = group[probes] keepGenes = rep(TRUE, nVars); tGenes = table(genes) checkGenes = sort(names(tGenes)[tGenes>1]) presentData = as.matrix(mtd.apply(mdx, function(x) colSums(is.finite(x)), mdaSimplify = TRUE)); presentFrac = presentData/matrix(nSamples, nVars, nSets, byrow = TRUE); consensusPresentFrac = .consensusCalculation.base(data = presentFrac, useMean = FALSE, setWeightMat = NULL, consensusQuantile = consensusQuantile)$consensus; # Omit all probes with at least omitFrac genes missing #keep = consensusPresentFrac > omitFraction minProportionPresent = as.numeric(minProportionPresent); # Omit relevant genes and return results if (minProportionPresent > 0) { if (verbose) pind = initProgInd(); for (gi in 1:length(checkGenes)) { g = checkGenes[gi]; gn = which(genes==g) keepGenes[gn] = (consensusPresentFrac[gn] >= minProportionPresent * max(consensusPresentFrac[gn])) if (verbose) pind = updateProgInd(gi/length(checkGenes), pind); } if (verbose) printFlush(""); } keep[keep] = keepGenes; return (keep); } # ----------------- Main Function ------------------- # consensusRepresentatives = function(mdx, group, colID, consensusQuantile = 0, method = "MaxMean", useGroupHubs = TRUE, calibration = c("none", "full quantile"), selectionStatisticFnc = NULL, connectivityPower=1, minProportionPresent=1, getRepresentativeData = TRUE, statisticFncArguments = list(), adjacencyArguments = list(), verbose = 2, indent = 0) # Change in methodFunction: if the methodFunction picks a single representative, it should return it in # attribute "selectedRepresentative". # minProportionPresent now gives the fraction of the maximum of present values that will still be included. # minProportionPresent=1 corresponds to minProportionPresent=TRUE in original collapseRows. # In connectivity-based collapsing, use simple connectivity, do not normalize. 
This way the connectivities
# retain a larger spread which should prevent quantile normalization from making big changes and potentially
# spurious changes.
{
  if (!is.null(dim(mdx))) {
    warning("consensusRepresentatives: wrapping matrix-like input into a mdx structure.");
    mdx = multiData(mdx);
  }
  spaces = indentSpaces(indent);
  nSamples= checkSets(mdx)$nSamples
  nSets = length(mdx);
  colnames.in = mtd.colnames(mdx);
  calibration = match.arg(calibration);
  ## Test to make sure the variables are the right length.
  # if not, fix it if possible, or stop.
  cd = .checkConsistencyOfGroupAndColID(mdx, colID, group);
  colID = cd$colID;
  group = cd$group;
  mdx = cd$mdx;
  keepVars = cd$keepProbes
  rnDat = mtd.colnames(mdx);
  ## For each group, select the probe with the fewest missing values (if minProportionPresent==TRUE)
  ## Also, remove all probes with more than 90% missing data
  if (verbose > 0)
    printFlush(spaste(spaces, "..selecting variables with lowest numbers of missing data.."));
  keep = selectFewestConsensusMissing(mdx, colID, group, minProportionPresent,
                                      consensusQuantile = consensusQuantile, verbose = verbose -1)
  mdx = mtd.subset(mdx, , keep);
  keepVars[keepVars] = keep;
  group = group[keep];
  colID = colID[keep];
  rnDat = mtd.colnames(mdx);
  ## If method="function", use the function "methodFunction" as a way of combining genes
  # Alternatively, use one of the built-in functions
  # Note: methodFunction must be a function that takes a vector of numbers as input and
  # outputs a single number. This function will return(0) or crash otherwise.
  recMethods = c("function","MaxMean","maxVariance","MinMean","absMinMean","absMaxMean");
  imethod = pmatch(method, recMethods);
  if (is.na(imethod))
    stop("Error: entered method is not a legal option.
Recognized options are\n",
         "   *maxVariance*, *MaxMean*, *MinMean*, *absMaxMean*, *absMinMean*\n",
         "   or *function* for a user-defined function.")
  if (imethod > 1) {
    selectionStatisticFnc = spaste(".cr.", method);
    selStatFnc = get(selectionStatisticFnc, mode = "function")
  } else {
    selStatFnc = match.fun(selectionStatisticFnc);
    if((!is.function(selStatFnc))&(!is.null(selStatFnc)))
      stop("Error: 'selectionStatisticFnc' must be a function... please read the help file.")
  }
  ## Format the variables for use by this function
  colID[is.na(colID)] = group[is.na(colID)]  # Use group if row is missing
  rnDat[is.na(rnDat)] = group[is.na(rnDat)];
  mdx = mtd.setColnames(mdx, rnDat);
  remove = (is.na(colID))|(is.na(group))  # Omit if both gene and probe are missing
  colID = colID[!remove];
  group = group[!remove];
  names(group) = colID
  colID = sort(intersect(rnDat,colID))
  if (length(colID)<=1)
    stop("None of the variable names in 'mdx' are in 'colID'.")
  group = group[colID]
  mdx = mtd.apply(mdx, as.matrix);
  keepVars[keepVars] = mtd.colnames(mdx) %in% colID;
  mdx = mtd.subset(mdx, , colID);
  probes = mtd.colnames(mdx)
  genes = group[probes]
  tGenes = table(genes)
  colnames.out = sort(names(tGenes));
  if (getRepresentativeData) {
    mdxOut = mtd.apply(mdx, function(x) {
      out = matrix(0, nrow(x), length(tGenes));
      rownames(out) = rownames(x);
      colnames(out) = colnames.out
      out;
    });
    names(mdxOut) = names(mdx);
  }
  representatives = rep("", length(colnames.out))
  names(representatives) = colnames.out;
  ## If !is.null(connectivityPower), default to the connectivity method with power=method
  # Collapse genes with multiple probe sets together using the following algorithm:
  # 1) If there is one ps/g = keep
  # 2) If there are 2 ps/g = (use "method" or "methodFunction")
  # 3) If there are 3+ ps/g = take the max connectivity
  # Otherwise, use "method" if there are 3+ ps/g as well.
if(!is.null(connectivityPower)){
    if(!is.numeric(connectivityPower))
      stop("Error: if entered, connectivityPower must be numeric.")
    if(connectivityPower<=0)
      stop("Error: connectivityPower must be greater than 0.");
    if(any(nSamples<=5)){
      write("Warning: 5 or fewer samples, this method of probe collapse is unreliable...","")
      write("...Running anyway, but we suggest trying another method (for example, *mean*).","")
    }
  }
  # Run selectionStatisticFnc on all data; if quantile normalization is requested, normalize the selection
  # statistics across data sets.
  selectionStatistics = mtd.apply(mdx, function(x)
                  do.call(selStatFnc, c(list(x), statisticFncArguments)), mdaSimplify = TRUE);
  #if (FALSE) xxx = selectionStatistics;
  if (is.null(dim(selectionStatistics)))
    stop("Calculation of selection statistics produced results of zero or unequal lengths.");
  if (calibration=="full quantile")
    selectionStatistics = normalize.quantiles(selectionStatistics);
  #if (FALSE)
  #{
  #  sizeGrWindow(14, 5);
  #  par(mfrow = c(1,3));
  #  for (set in 1:nSets)
  #    hist(xxx[, set], breaks = 200);
  #
  #  ## for (set in 1:nSets)
  #  #  verboseScatterplot(xxx[, set], selectionStatistics[, set], samples = 10000)
  #}
  consensusSelStat = .consensusCalculation.base(selectionStatistics, useMean = FALSE,
                          setWeightMat = NULL,
                          consensusQuantile = consensusQuantile)$consensus;
  # Actually run the summarization.
ones = sort(names(tGenes)[tGenes==1]) if(useGroupHubs){ twos = sort(names(tGenes)[tGenes==2]) # use "method" and connectivity more = sort(names(tGenes)[tGenes>2]) } else { twos = sort(names(tGenes)[tGenes>1]) # only use "method" more = character(0) } ones2genes = match(ones, genes); if (getRepresentativeData) for (set in 1:nSets) mdxOut[[set]]$data[,ones] = mdx[[set]]$data[, ones2genes]; representatives[ones] = probes[ones2genes]; count = 0; if (length(twos) > 0) { if (verbose > 0) printFlush(spaste(spaces, "..selecting representatives for 2-variable groups..")); if (verbose > 1) pind = initProgInd(paste(spaces, "..")); repres = rep(NA, length(twos)); for (ig in 1:length(twos)) { g = twos[ig]; probeIndex = which(genes==g); repres[ig] = probeIndex[which.max(consensusSelStat[probeIndex])]; if (verbose > 1) pind = updateProgInd(ig/length(twos), pind); } if (verbose > 1) printFlush(""); if (getRepresentativeData) for (set in 1:nSets) mdxOut[[set]]$data[, twos] = mdx[[set]]$data[, repres]; representatives[twos] = probes[repres]; } if (length(more) > 0) { if (verbose > 0) printFlush(spaste(spaces, "..selecting representatives for 3-variable groups..")); if (verbose > 1) pind = initProgInd(paste(spaces, "..")); genes.more = genes[genes %in% more]; nAll = length(genes.more); connectivities = matrix(NA, nAll, nSets); for (ig in 1:length(more)) { g = more[ig]; keepProbes1 = which(genes==g); keep.inMore = which(genes.more==g); mdxTmp = mtd.subset(mdx, , keepProbes1); adj = mtd.apply(mdxTmp, function(x) do.call(adjacency, c(list(x, type = "signed", power = connectivityPower), adjacencyArguments))); connectivities[keep.inMore, ] = mtd.apply(adj, colSums, mdaSimplify = TRUE); count = count + 1; if (count %% 50000 == 0) collectGarbage(); if (verbose > 1) pind = updateProgInd(ig/(2*length(more)), pind); } if (calibration=="full quantile") connectivities = normalize.quantiles(connectivities); consConn = .consensusCalculation.base(connectivities, useMean = FALSE, setWeightMat = 
NULL, consensusQuantile = consensusQuantile)$consensus;
    repres.inMore = rep(0, length(more));
    for (ig in 1:length(more)) {
      probeIndex = which(genes.more==more[ig]);
      repres.inMore[ig] = probeIndex[which.max(consConn[probeIndex])];
      if (verbose > 1) pind = updateProgInd(ig/(2*length(more)) + 0.5, pind);
    }
    repres = which(genes %in% more)[repres.inMore];
    if (verbose > 1) printFlush("");
    if (getRepresentativeData) for (set in 1:nSets)
      mdxOut[[set]]$data[, more] = as.numeric(mdx[[set]]$data[, repres]);
    representatives[more] = probes[repres];
  }
  # Retrieve the information about which probes were saved, and include that information
  # as part of the output.
  out2 = cbind(colnames.out, representatives)
  colnames(out2) = c("group","selectedColID")
  reprIndicator = keepVars;
  reprIndicator[keepVars] [match(representatives, mtd.colnames(mdx))] = TRUE
  names(reprIndicator) = colnames.in
  out = list(representatives = out2,
             varSelected = reprIndicator,
             representativeData = if (getRepresentativeData) mdxOut else NULL)
  return(out)
}

# File: WGCNA/R/Functions-multiData.R

#======================================================================================================
#
# multiData.eigengeneSignificance
#
#======================================================================================================

multiData.eigengeneSignificance = function(multiData, multiTrait, moduleLabels, multiEigengenes = NULL,
             useModules = NULL, corAndPvalueFnc = corAndPvalue, corOptions = list(),
             corComponent = "cor", getQvalues = FALSE, setNames = NULL, excludeGrey = TRUE,
             greyLabel = ifelse(is.numeric(moduleLabels), 0, "grey"))
{
  corAndPvalueFnc = match.fun(corAndPvalueFnc);
  size = checkSets(multiData);
  nSets = size$nSets;
  nGenes = size$nGenes;
  nSamples = size$nSamples;
  if (is.null(multiEigengenes)) {
    multiEigengenes = multiSetMEs(multiData, universalColors = moduleLabels, verbose = 0,
                                  excludeGrey = excludeGrey, grey = greyLabel);
  } else {
    eSize =
checkSets(multiEigengenes); if (!isTRUE(all.equal(eSize$nSamples, nSamples))) stop("Numbers of samples in multiData and multiEigengenes must agree."); } if (!is.null(useModules)) { keep = substring(colnames(multiEigengenes[[1]]$data), 3) %in% useModules; if (sum(keep)==0) stop("Incorrectly specified 'useModules': no such module(s)."); if (any( ! (useModules %in% substring(colnames(multiEigengenes[[1]]$data), 3)))) stop("Some entries in 'useModules' do not exist in the module labels or eigengenes."); for (set in 1:nSets) multiEigengenes[[set]]$data = multiEigengenes[[set]]$data[, keep, drop = FALSE]; } modLevels = substring(colnames(multiEigengenes[[1]]$data), 3); nModules = length(modLevels); MES = p = Z = nObs = array(NA, dim = c(nModules, nSets)); haveZs = FALSE; for (set in 1:nSets) { corOptions$x = multiEigengenes[[set]]$data; corOptions$y = multiTrait[[set]]$data; cp = do.call(corAndPvalueFnc, args = corOptions); corComp = grep(corComponent, names(cp)); pComp = match("p", names(cp)); if (is.na(pComp)) pComp = match("p.value", names(cp)); if (is.na(pComp)) stop("Function `corAndPvalueFnc' did not return a p-value."); MES[, set] = cp[[corComp]] p[, set] = cp[[pComp]]; if (!is.null(cp$Z)) { Z[, set] = cp$Z; haveZs = TRUE} if (!is.null(cp$nObs)) { nObs[, set] = cp$nObs; } else nObs[, set] = t(is.na(multiEigengenes[[set]]$data)) %*% (!is.na(multiTrait[[set]]$data)); } if (is.null(setNames)) setNames = names(multiData); if (is.null(setNames)) setNames = spaste("Set_", c(1:nSets)); colnames(MES) = colnames(p) = colnames(Z) = colnames(nObs) = setNames; rownames(MES) = rownames(p) = rownames(Z) = rownames(nObs) = colnames(multiEigengenes[[1]]$data); if (getQvalues) { q = apply(p, 2, qvalue.restricted); dim(q) = dim(p); dimnames(q) = dimnames(p); } else q = NULL; if (!haveZs) Z = NULL; list(eigengeneSignificance = MES, p.value = p, q.value = q, Z = Z, nObservations = nObs) } #============================================================================================== 
#
# nSets
#
#==============================================================================================

nSets = function(multiData, ...)
{
  size = checkSets(multiData, ...);
  size$nSets;
}

# File: WGCNA/R/plotDendrogram.R

#=======================================================================================================
#
# Plot dendrogram
#
#=======================================================================================================

.plotDendrogram = function(tree, labels = NULL, horiz = FALSE, reverseDirection = FALSE,
                           hang = 0.1, xlab = "", ylab = "", axes = TRUE, cex.labels = 1, ...,
                           adjustRange = FALSE)
{
  hang.gr = hang;
  if (hang < 0) hang.gr = 0.1;
  n = length(tree$order);
  heights = tree$height;
  range = range(heights);
  hang.scaled = hang.gr * (max(heights) - min(heights));
  range[1] = range[1] - hang.scaled;
  indexLim = c(0.5, n+0.5);
  if (adjustRange) {
    ctr = mean(indexLim);
    indexLim = ctr + (indexLim - ctr)/1.08;
  }
  nMerge = n-1;
  if (is.null(labels)) labels = tree$labels;
  if (is.null(labels)) labels = rep("", n);
  if (is.na(labels[1])) labels = rep("", n);
  if (is.logical(labels) && labels[1]=="FALSE") labels = rep("", n);
  if (horiz) {
    plot(NA, NA, xlim = if (reverseDirection) range else rev(range), ylim = indexLim,
         axes = axes, yaxt = "none", frame = FALSE, type ="n", xlab = xlab, ylab = ylab, ...);
  } else {
    plot(NA, NA, ylim = if (reverseDirection) rev(range) else range, xlim = indexLim,
         axes = axes, xaxt = "none", frame = FALSE, type ="n", xlab = xlab, ylab = ylab, ...);
  }
  singleton.x = rep(NA, n);
  singleton.x[tree$order] = c(1:n);
  cluster.x = rep(NA, n);
  for (m in 1:nMerge) {
    o1 = tree$merge[m, 1]
    o2 = tree$merge[m, 2]
    h = heights[m];
    hh = if (hang>0) h-hang.scaled else range[1];
    h1 = if (o1 < 0) hh else heights[o1];
    h2 = if (o2 < 0) hh else heights[o2];
    x1 = if (o1 < 0) singleton.x[-o1] else cluster.x[o1]
    x2 = if (o2 < 0) singleton.x[-o2] else cluster.x[o2]
    cluster.x[m] = mean(c(x1,
x2));
    if (!is.null(labels)) {
      if (horiz) {
        if (o1 < 0) text(h1, x1, spaste(labels[-o1], " "), adj = c(0, 0.5), srt = 0,
                         cex = cex.labels, xpd = TRUE)
        if (o2 < 0) text(h2, x2, spaste(labels[-o2], " "), adj = c(0, 0.5), srt = 0,
                         cex = cex.labels, xpd = TRUE)
      } else {
        if (o1 < 0) text(x1, h1, spaste(labels[-o1], " "), adj = c(1, 0.5), srt = 90,
                         cex = cex.labels, xpd = TRUE)
        if (o2 < 0) text(x2, h2, spaste(labels[-o2], " "), adj = c(1, 0.5), srt = 90,
                         cex = cex.labels, xpd = TRUE)
      }
    }
    if (horiz) {
      lines(c(h1, h, h, h2), c(x1, x1, x2, x2));
    } else {
      lines(c(x1, x1, x2, x2), c(h1, h, h, h2));
    }
  }
}

# File: WGCNA/R/moduleMergeUsingKME.R

## Merge modules and reassign module genes based on kME values

moduleMergeUsingKME <- function (datExpr, colorh, ME=NULL, threshPercent=50, mergePercent = 25,
                                 reassignModules=TRUE, convertGrey=TRUE, omitColors="grey",
                                 reassignScale=1, threshNumber=NULL)
{
  ## First assign all of the variables and put everything in the correct format.
if (length(colorh)!=dim(datExpr)[2]){
    write("Error: color vector must match the input datExpr columns.","");
    return(0)
  }
  if (is.null(ME))
    ME = (moduleEigengenes(datExpr, colors=as.character(colorh), excludeGrey=TRUE))$eigengenes
  if (dim(ME)[1]!=dim(datExpr)[1]){
    write("Error: ME rows must match the input datExpr rows (samples).","");
    return(0)
  }
  modules = colnames(ME)
  if (length(grep("ME",modules))==length(modules)) modules = substr(modules,3,nchar(modules))
  if (length(grep("PC",modules))==length(modules)) modules = substr(modules,3,nchar(modules))
  if (length(setdiff(modules,as.character(colorh)))>0){
    write("ME cannot include colors with no genes assigned.","");
    return(0)
  }
  names(ME) = modules
  datCorrs = as.data.frame(cor(datExpr,ME,use="p"))
  colnames(datCorrs) = modules
  modules = sort(modules[!is.element(modules,omitColors)])
  modulesI = modules  # To test whether merging occurs
  datCorrs = datCorrs[,modules]
  iteration = 1
  if(is.null(colnames(datExpr))) colnames(datExpr) = as.character(1:length(colorh))
  rownames(datCorrs) <- colnames(datExpr)
  colorOut = colorh
  colorOut[is.element(colorOut,omitColors)] = "grey"
  mergeLog = NULL
  datExpr = t(datExpr)  # For consistency with how the function was originally written.

  ## Iteratively run this function until no further changes need to be made
  while (!is.na(iteration)){
    write("","");
    write("__________________________________________________","")
    write(paste("This is iteration #",iteration,". There are ",length(modules)," modules.",sep=""),"")
    iteration = iteration+1

    ## Reassign modules if requested by reassignModules and convertGrey
    colorMax = NULL
    whichMod = apply(datCorrs,1,which.max)
    cutNumber = round(table(colorOut)*threshPercent/100)
    cutNumber = cutNumber[names(cutNumber)!="grey"]
    if(!is.null(threshNumber)) cutNumber = rep(threshNumber,length(cutNumber))
    cutNumber = apply(cbind(cutNumber,10),1,max)
    for (i in 1:length(whichMod)) colorMax = c(colorMax,modules[whichMod[i]])
    for (i in 1:length(modules)){
      corrs = as.numeric(datCorrs[,i])
      cutValue = sort(corrs[colorOut==modules[i]],decreasing=TRUE)[cutNumber[i]]
      inModule = corrs>(cutValue*reassignScale)
      if(convertGrey) colorOut[inModule&(colorOut=="grey")&(colorMax==modules[i])]=modules[i]
      if(reassignModules) colorOut[inModule&(colorOut!="grey")&(colorMax==modules[i])]=modules[i]
    }

    ## Merge all modules meeting the mergePercent and threshPercent criteria
    for (i in 1:length(modules)){
      cutNumber = round(table(colorOut)*threshPercent/100)
      cutNumber = cutNumber[names(cutNumber)!="grey"]
      if(!is.null(threshNumber)) cutNumber = rep(threshNumber,length(cutNumber))
      cutNumber = apply(cbind(cutNumber,10),1,max)
      corrs = as.numeric(datCorrs[,i])
      # Make sure you do not include more genes than are in the module
      numInMod = sum(colorOut==modules[i])
      cutValue = sort(corrs[colorOut==modules[i]],decreasing=TRUE)[min(numInMod,cutNumber[modules[i]])]
      colorMod = colorOut[corrs>=cutValue]
      colorMod = colorMod[colorMod!="grey"]
      modPercent = 100*table(colorMod)/length(colorMod)
      modPercent = modPercent[names(modPercent)!=modules[i]]
      if(length(modPercent)>1) if(max(modPercent)>mergePercent){
        whichModuleMerge = names(modPercent)[which.max(modPercent)]
        colorOut[colorOut==modules[i]] = whichModuleMerge
        write(paste(modules[i],"has been merged into",whichModuleMerge,"."),"")
        mergeLog = rbind(mergeLog,c(modules[i],whichModuleMerge))
      }
    }

    ## If no modules were merged, then set iteration to NA
    modules = sort(unique(colorOut))
    modules =
modules[modules!="grey"]
    if (length(modules)==length(modulesI)) iteration=NA
    modulesI = modules

    ## Recalculate the new module membership values
    MEs = (moduleEigengenes(t(datExpr), colors=as.character(colorOut)))$eigengenes
    MEs = MEs[,colnames(MEs)!="MEgrey"]
    datCorrs = as.data.frame(cor(t(datExpr),MEs,use="p"));
    colnames(datCorrs) = modules
  }
  if(!is.null(dim(mergeLog))){
    colnames(mergeLog) = c("Old Module","Merged into New Module")
    rownames(mergeLog) = paste("Merge #",1:dim(mergeLog)[1],sep="")
  }
  return(list(moduleColors=colorOut,mergeLog=mergeLog))
}

# File: WGCNA/R/labelPoints.R

#=================================================================================================
#
# labelPoints: label points in a scatterplot while trying to avoid labels overlapping with one another
# and with points.
#
#=================================================================================================

labelPoints = function(x, y, labels, cex = 0.7, offs = 0.01, xpd = TRUE, jiggle = 0,
                       protectEdges = TRUE, doPlot = TRUE, ...)
{
  nPts = length(labels);
  box = par("usr");
  dims = par("pin");
  scaleX = dims[1]/(box[2] - box[1]);
  scaleY = dims[2]/(box[4] - box[3]);
  #ish = charmatch(shape, .shapes);
  #if (is.na(ish))
  #  stop(paste("Unrecognized 'shape'.
Recognized values are", paste(.shapes, collapse = ", "))); if (par("xlog")) { xx = log10(x); } else xx = x; if (par("ylog")) { yy = log10(y); } else yy = y; xx = xx * scaleX; yy = yy * scaleY; if (jiggle > 0) { rangeX = max(xx, na.rm = TRUE) - min(xx, na.rm = TRUE) jx = xx + jiggle * rangeX * (runif(nPts) - 0.5); rangeY = max(yy, na.rm = TRUE) - min(yy, na.rm = TRUE) jy = yy + jiggle * rangeY * (runif(nPts) - 0.5); } else { jx = xx; jy = yy; } dx = offs; dy = offs; labWidth = strwidth(labels, cex=cex) * scaleX; labHeight = strheight(labels, cex=cex) * scaleY; if (nPts==0) return(0); if (nPts==1) { if (protectEdges) { shift = ifelse(x - labWidth/2/scaleX < box[1], box[1] - x + labWidth/2/scaleX, ifelse(x + labWidth/2/scaleX > box[2], box[2] - x - labWidth/2/scaleX, 0)); x = x + shift; # Also check the top and bottom edges yShift = if (y + labHeight/scaleY + offs/scaleY > box[4]) -(labHeight + 2*offs)/scaleY else 0; y = y + yShift } text(x, y + labHeight/2/scaleY + offs/scaleY, labels, cex = cex, xpd = xpd, adj = c(0.5, 0.5), ...) 
return (0); } xMat = cbind(xx,yy); jxMat = cbind(jx, jy); distX = as.matrix(dist(jx)); distY = as.matrix(dist(jy)); dir = matrix(0, nPts, 2); d0SqX = (labWidth+2*offs)^2 d0SqY = (labHeight + 2*offs)^2; for (p in 1:nPts) { difs = matrix(jxMat[p, ], nPts, 2, byrow = TRUE) - jxMat; difSc = difs / sqrt(matrix(apply(difs^2, 1, sum, na.rm = TRUE), nPts, 2)); difSx = rbind(difSc, c(0,1)); difSx[p, ] = 0; w = c(exp(-distX[,p]^4 / d0SqX[p]^2 - distY[,p]^4/d0SqY^2)); w[distX[, p]==0 & distY[,p]==0] = 0; w = c(w, 0.01); dir[p, ] = apply(difSx * matrix(w, (nPts+1), 2), 2, sum, na.rm = TRUE) / sum(w, na.rm = TRUE) if (sum(abs(dir[p, ]))==0) dir[p, ] = runif(2); } scDir = dir / sqrt(matrix(apply(dir^2, 1, sum, na.rm = TRUE), nPts, 2)); offsMat = cbind(labWidth/2 + offs, labHeight/2 + offs) Rmat = abs(scDir / offsMat); ind = Rmat[, 1] > Rmat[, 2]; # This is an indicator of whether the labels touch the vertical (TRUE ) or # horizontal (FALSE) edge of the square around the point # These are preliminary text coordinates relative to their points. 
dx = offsMat[, 1] * sign(scDir[, 1]) dx[!ind] = scDir[!ind, 1] * offsMat[!ind, 2]/abs(scDir[!ind,2]); dy = offsMat[, 2] * sign(scDir[, 2]); dy[ind] = scDir[ind, 2] * offsMat[ind, 1]/abs(scDir[ind,1]); # Absolute coordinates xt = (xx + dx)/scaleX; yt = (yy + dy)/scaleY; # Check if any of the points overlap with a label (of a different point) pointMaxx = matrix(xx + offs, nPts, nPts); pointMinx = matrix(xx - offs, nPts, nPts); pointMiny = matrix(yy - offs, nPts, nPts); pointMaxy = matrix(yy + offs, nPts, nPts); labelMinx = matrix(xt - labWidth/2, nPts, nPts, byrow = TRUE); labelMaxx = matrix(xt + labWidth/2, nPts, nPts, byrow = TRUE); labelMiny = matrix(yt - labHeight/2, nPts, nPts, byrow = TRUE); labelMaxy = matrix(yt + labHeight/2, nPts, nPts, byrow = TRUE); overlapF = function(x1min, x1max, x2min, x2max) { overlap = matrix(0, nPts, nPts); overlap[ x1max > x2min & x1max < x2max & x1min < x2min ] = 1; overlap[ x1max > x2min & x1max < x2max & x1min > x2min ] = 2; overlap[ x1max > x2max & x1min > x2min ] = 3; overlap; } overlapX = overlapF(pointMinx, pointMaxx, labelMinx, labelMaxx); overlapY = overlapF(pointMiny, pointMaxy, labelMiny, labelMaxy); indOvr = overlapX > 0 & overlapY >0; overlap = matrix(0, nPts, nPts); overlap[indOvr] = (overlapY[indOvr] - 1) * 3 + overlapX[indOvr]; # For now try to fix cases of a single overlap. nOvrPerLabel = apply(overlap>0, 1, sum); #for (p in 1:nPts) if (nOverPerLabel[p]==1) #{ # Check if any of the labels extend past the left or right edge of the plot if (protectEdges) { shift = ifelse(xt - labWidth/2/scaleX < box[1], box[1] - xt + labWidth/2/scaleX, ifelse(xt + labWidth/2/scaleX > box[2], box[2] - xt - labWidth/2/scaleX, 0)); xt = xt + shift; # Also check the top and bottom edges # Do labels overlap with points along the x coordinate? 
    xOverlap = abs(xt - x) < (labWidth/2 + offs)/scaleX;
    yShift = ifelse(yt - labHeight/2/scaleY < box[3],
                    ifelse(xOverlap, (labHeight + 2*offs)/scaleY, box[3] - yt + labHeight/2/scaleY),
                    ifelse(yt + labHeight/2/scaleY > box[4], -(labHeight + 2*offs)/scaleY, 0));
    yt = yt + yShift
  }
  if (par("xlog")) xt = 10^xt;
  if (par("ylog")) yt = 10^yt;
  if (doPlot)
    text(xt, yt, labels, cex = cex, xpd = xpd, adj = c(0.5, 0.5), ...)
  invisible(data.frame(x = xt, y = yt, label = labels));
}
WGCNA/R/sampledModules.R0000644000176200001440000002011314230552654014426 0ustar liggesusers# Call blockwise modules several times with sampled data, collect the results
# -02: first entry in the result will be the base modules.
# Also: add functions for determining module stability and whether modules should be merged.

#===================================================================================================
#
# sampledBlockwiseModules
#
#===================================================================================================

sampledBlockwiseModules = function(
  datExpr,
  nRuns,
  startRunIndex = 1,
  endRunIndex = startRunIndex + nRuns - skipUnsampledCalculation,
  replace = FALSE,
  fraction = if (replace) 1.0 else 0.63,
  randomSeed = 12345,
  checkSoftPower = TRUE,
  nPowerCheckSamples = 2000,
  skipUnsampledCalculation = FALSE,
  corType = "pearson",
  power = 6,
  networkType = "unsigned",
  saveTOMs = FALSE,
  saveTOMFileBase = "TOM",
  ...,
  verbose = 2, indent = 0)
{
  spaces = indentSpaces(indent);
  result = list();
  runTOMFileBase = saveTOMFileBase;
  nSamples = nrow(datExpr);
  nGenes = ncol(datExpr);

  corTypeI = pmatch(corType, .corTypes);
  if (is.na(corTypeI))
    stop(paste("Invalid 'corType'. Recognized values are", paste(.corTypes, collapse = ", ")))
  corFnc = .corFnc[corTypeI];

  seedSaved = FALSE;
  if (!is.null(randomSeed))
  {
    if (exists(".Random.seed"))
    {
      savedSeed = .Random.seed
      on.exit(.Random.seed <<- savedSeed)
    }
    set.seed(randomSeed);
  }

  if (checkSoftPower)
  {
    if (verbose > 0)
      printFlush(paste(spaces, "...calculating reference mean adjacencies.."));
    useGenes = sample(nGenes, nPowerCheckSamples, replace = FALSE);
    adj = adjacency(datExpr[, useGenes], power = power, type = networkType, corFnc = corFnc)
    refAdjMeans = mean(as.dist(adj));
  }

  for (run in startRunIndex:endRunIndex)
  {
    if (!is.null(randomSeed)) set.seed(randomSeed + 2*run + 1);
    if (verbose > 0) printFlush(paste(spaces, "...working on run", run, ".."));
    if (saveTOMs)
      runTOMFileBase = paste(saveTOMFileBase, "-run-", run, sep = "");
    if (run > startRunIndex || skipUnsampledCalculation)
    {
      useSamples = sample(nSamples, as.integer(nSamples * fraction), replace = replace)
    } else
      useSamples = c(1:nSamples)
    if (verbose > 2)
    {
      printFlush(paste(spaces, "Using the following samples: "))
      print(useSamples);
    }
    samExpr = as.matrix(datExpr[useSamples, ]);
    samPowers = power;
    if (checkSoftPower)
    {
      if (verbose > 1)
        printFlush(paste(spaces, "  ...calculating mean adjacencies in sampled data.."));
      adj = adjacency(samExpr[, useGenes], power = power, type = networkType, corFnc = corFnc)
      sampledAdjMeans = mean(as.dist(adj));
      samPowers = power * log(refAdjMeans) / log(sampledAdjMeans);
      if (!is.finite(samPowers)) samPowers = power;
    }
    mods = blockwiseModules(
      datExpr = samExpr,
      randomSeed = NULL,
      power = samPowers,
      corType = corType,
      networkType = networkType,
      saveTOMs = saveTOMs,
      saveTOMFileBase = runTOMFileBase,
      ...,
      verbose = verbose - 2, indent = indent + 2)
    result[[run]] = list(mods = mods, samples = useSamples, powers = samPowers)
  }
  result;
}

#===================================================================================================
#
# sampledHierarchicalConsensusModules
#
#===================================================================================================

sampledHierarchicalConsensusModules = function(
  multiExpr,
  multiWeights = NULL,
  networkOptions,
  consensusTree,
  nRuns,
  startRunIndex = 1,
  endRunIndex = startRunIndex + nRuns - 1,
  replace = FALSE,
  fraction = if (replace) 1.0 else 0.63,
  randomSeed = 12345,
  checkSoftPower = TRUE,
  nPowerCheckSamples = 2000,
  individualTOMFilePattern = "individualTOM-Run.%r-Set%s-Block.%b.RData",
  keepConsensusTOMs = FALSE,
  consensusTOMFilePattern = "consensusTOM-Run.%r-%a-Block.%b.RData",
  skipUnsampledCalculation = FALSE,
  ...,
  verbose = 2, indent = 0,
  saveRunningResults = TRUE,
  runningResultsFile = "results.tmp.RData")
{
  spaces = indentSpaces(indent);
  result = list();
  exprSize = checkSets(multiExpr);
  nSets = exprSize$nSets;
  nSamples = exprSize$nSamples;

  .checkAndScaleMultiWeights(multiWeights, multiExpr, scaleByMax = FALSE);

  if (inherits(networkOptions, "NetworkOptions"))
    networkOptions = list2multiData(replicate(nSets, networkOptions, simplify = FALSE));

  if (!is.null(randomSeed))
  {
    if (exists(".Random.seed"))
    {
      savedSeed = .Random.seed
      on.exit({ .Random.seed <<- savedSeed }, add = FALSE);
    }
    set.seed(randomSeed);
  }

  powers = unlist(mtd.apply(networkOptions, getElement, "power"));
  if (nPowerCheckSamples > exprSize$nGenes) nPowerCheckSamples = exprSize$nGenes;

  if (checkSoftPower)
  {
    if (verbose > 0)
      printFlush(paste(spaces, "...calculating reference mean adjacencies.."));
    useGenes = sample(exprSize$nGenes, nPowerCheckSamples, replace = FALSE);
    refAdjMeans = rep(0, nSets);
    for (set in 1:nSets)
    {
      adj = adjacency(multiExpr[[set]]$data[, useGenes],
                      weights = if (is.null(multiWeights)) NULL else multiWeights[[set]]$data[, useGenes],
                      power = networkOptions[[set]]$data$power,
                      type = networkOptions[[set]]$data$networkType,
                      corFnc = networkOptions[[set]]$data$corFnc,
                      corOptions = networkOptions[[set]]$data$corOptions)
      refAdjMeans[set] = mean(as.dist(adj));
    }
  }

  for (run in startRunIndex:endRunIndex)
  {
    runTOMFileBase = .substituteTags(consensusTOMFilePattern, "%r", run);
    individualTOMFiles1 = .substituteTags(individualTOMFilePattern, "%r", run);
    set.seed(randomSeed + 2*run + 1);
    if (verbose > 0) printFlush(paste(spaces, "Working on run", run, ".."));
    useSamples = list()
    for (set in 1:nSets)
    {
      if (run > startRunIndex - skipUnsampledCalculation)
      {
        printFlush("This run will be on sampled data.");
        useSamples[[set]] = sample(nSamples[set], as.integer(nSamples[set] * fraction),
                                   replace = replace)
      } else
        useSamples[[set]] = c(1:nSamples[set]);
    }
    samExpr = mtd.subset(multiExpr, useSamples);
    if (!is.null(multiWeights))
    {
      samWeights = mtd.subset(multiWeights, useSamples)
    } else {
      samWeights = NULL;
    }
    samPowers = powers;
    if (checkSoftPower)
    {
      if (verbose > 1)
        printFlush(paste(spaces, "  ...calculating mean adjacencies in sampled data.."));
      sampledAdjMeans = rep(0, nSets);
      for (set in 1:nSets)
      {
        adj = adjacency(samExpr[[set]]$data[, useGenes],
                        weights = if (is.null(multiWeights)) NULL else samWeights[[set]]$data[, useGenes],
                        power = networkOptions[[set]]$data$power,
                        type = networkOptions[[set]]$data$networkType,
                        corFnc = networkOptions[[set]]$data$corFnc,
                        corOptions = networkOptions[[set]]$data$corOptions)
        sampledAdjMeans[set] = mean(as.dist(adj));
      }
      samPowers = powers * log(refAdjMeans) / log(sampledAdjMeans);
      samPowers[!is.finite(samPowers)] = powers[!is.finite(samPowers)];
    }
    networkOptions1 = mtd.mapply(function(x, power) { x$power = power; x; },
                                 networkOptions, samPowers);
    collectGarbage();
    mods = hierarchicalConsensusModules(
      multiExpr = samExpr,
      multiWeights = samWeights,
      randomSeed = NULL,
      networkOptions = networkOptions1,
      consensusTree = consensusTree,
      consensusTOMFilePattern = runTOMFileBase,
      individualTOMFileNames = individualTOMFiles1,
      keepIndividualTOMs = FALSE,
      keepConsensusTOM = keepConsensusTOMs,
      ...,
      verbose = verbose - 2, indent = indent + 2)
    result[[run - startRunIndex + 1]] = list(mods = mods, samples = useSamples, powers = samPowers)
    if (saveRunningResults)
      save(result, file = runningResultsFile);
    print(lapply(mods, object.size));
    print(gc());
  }
  result;
}
WGCNA/R/exportFunctions.R0000644000176200001440000000701613571361236014662 0ustar liggesusers# Functions for exporting networks to various network visualization software

exportNetworkToVisANT = function(
  adjMat,
  file = NULL,
  weighted = TRUE,
  threshold = 0.5,
  maxNConnections = NULL,
  probeToGene = NULL)
{
  adjMat = as.matrix(adjMat)
  adjMat[is.na(adjMat)] = 0;
  nRow = nrow(adjMat);
  checkAdjMat(adjMat, min = -1, max = 1);
  probes = dimnames(adjMat)[[1]]
  if (!is.null(probeToGene))
  {
    probes2genes = match(probes, probeToGene[, 1]);
    if (sum(is.na(probes2genes)) > 0)
      stop("Error translating probe names to gene names: some probe names could not be translated.");
    probes = probeToGene[probes2genes, 2]
  }
  rowMat = matrix(c(1:nRow), nRow, nRow, byrow = TRUE);
  colMat = matrix(c(1:nRow), nRow, nRow);
  adjDst = as.dist(adjMat);
  dstRows = as.dist(rowMat);
  dstCols = as.dist(colMat);
  if (is.null(maxNConnections)) maxNConnections = length(adjDst);
  ranks = rank(-abs(adjDst), na.last = TRUE, ties.method = "first")
  edges = abs(adjDst) > threshold & ranks <= maxNConnections
  nEdges = sum(edges)
  visAntData = data.frame(
    from = probes[dstRows[edges]],
    to = probes[dstCols[edges]],
    direction = rep(0, nEdges),
    method = rep("M0039", nEdges),
    weight = if (weighted) adjDst[edges] else rep(1, nEdges));
  if (!is.null(file))
    write.table(visAntData, file = file, quote = FALSE, row.names = FALSE, col.names = FALSE)
  invisible(visAntData);
}

exportNetworkToCytoscape = function(
  adjMat,
  edgeFile = NULL,
  nodeFile = NULL,
  weighted = TRUE,
  threshold = 0.5,
  nodeNames = NULL,
  altNodeNames = NULL,
  nodeAttr = NULL,
  includeColNames = TRUE)
{
  adjMat = as.matrix(adjMat)
  adjMat[is.na(adjMat)] = 0;
  nRow = nrow(adjMat);
  checkAdjMat(adjMat, min = -1, max = 1);
  if (is.null(nodeNames)) nodeNames = dimnames(adjMat)[[1]]
  if (is.null(nodeNames))
    stop("Cannot determine node names: nodeNames is NULL and adjMat has no dimnames.")
  rowMat = matrix(c(1:nRow), nRow, nRow, byrow = TRUE);
  colMat = matrix(c(1:nRow), nRow, nRow);
  if (!is.null(nodeAttr))
  {
    if (is.null(dim(nodeAttr))) nodeAttr = data.frame(nodeAttribute = nodeAttr);
    nodeAttr = as.data.frame(nodeAttr);
  } else
    nodeAttr = data.frame(nodeAttribute = rep(NA, ncol(adjMat)));
  adjDst = as.dist(adjMat);
  dstRows = as.dist(rowMat);
  dstCols = as.dist(colMat);
  edges = abs(adjDst) > threshold
  nEdges = sum(edges)
  edgeData = data.frame(
    fromNode = nodeNames[dstRows[edges]],
    toNode = nodeNames[dstCols[edges]],
    weight = if (weighted) adjDst[edges] else rep(1, nEdges),
    direction = rep("undirected", nEdges),
    fromAltName = if (is.null(altNodeNames)) rep("NA", nEdges) else altNodeNames[dstRows[edges]],
    toAltName = if (is.null(altNodeNames)) rep("NA", nEdges) else altNodeNames[dstCols[edges]]);
  nodesPresent = rep(FALSE, ncol(adjMat));
  nodesPresent[dstRows[edges]] = TRUE;
  nodesPresent[dstCols[edges]] = TRUE;
  nNodes = sum(nodesPresent);
  nodeData = data.frame(
    nodeName = nodeNames[nodesPresent],
    altName = if (is.null(altNodeNames)) rep("NA", nNodes) else altNodeNames[nodesPresent],
    nodeAttr[nodesPresent, ],
    check.names = FALSE);
  if (!is.null(edgeFile))
    write.table(edgeData, file = edgeFile, quote = FALSE, row.names = FALSE,
                col.names = includeColNames, sep = "\t");
  if (!is.null(nodeFile))
    write.table(nodeData, file = nodeFile, quote = FALSE, row.names = FALSE,
                col.names = includeColNames, sep = "\t");
  list(edgeData = edgeData, nodeData = nodeData);
}
WGCNA/data/0000755000176200001440000000000013103416623012021 5ustar liggesusers
WGCNA/data/PWLists.rda0000644000176200001440000135511013103416623014064 0ustar liggesusers
¿n$ÈlzT5ùDµQ§lA2—õ.íOúLTGBdz'Ò·Uvâ•üŒ`£¥˜Âó˜•uWõØûr$¼­Œ{«=ôæ}rðŒÛiªk—7þðEW@ª]-ù”:-e~€€#Í€S²!|“_fØÙ€jŒ;]ëˆfò’3®:Œo*ÐðÁ+îŽ`§¾óFT®Ý€´ýÔê _ÎÐ C‚ž¹Ø¿×BµÐõeXÞd›]Ð;tûKr¿fùp†é‹ö©ú¾NðWwYðÒ¡TYKiÌmæ4¿!¦ è—-ðò7Ý!˜ýÜwÁdMQMÆ×lý½LŸ¿âÖS¶Oláo/ßä+çêÑ#-yFѹ6öú¨ÏŸ¨…¶¶°/€-¤s-~ýª›ø)-ɲ¡ï½² Ë‚DJŽô$—”ïæÌeñÑŠúaÎ(„Jh pÁ9d1n¬SšùKåHÉË€¾û‹m¥KÕÌô_È•5@›xßèzÖžj"L´F.^aëTä°d¹NA‘Kç4Ãã²ýlg2œ¾1 QÜÚ7Ñm¡ï ¾À9øq‹‘ÑtÝF“¼×õ0R)ª=ò3 p­ Ô¸¿%8´ÏÕ%ŸeýE/µ °KŠz­[×ò)zªIºÚš< ŠóŠlºvŠY £™ ìZÑP„[蘒ƒ+Æ™’‚؈m‡OZ”êv=6uE·Q.ˆ¹¢=euÖÛ°3ŠÍÊÕD‘©Y” ÉÅ”9„2bB08)eé¿XB—q£ß=¾2YuÝt§”J•-³†‚ÄÍLÎ2uêx\ÕéScvÕ˜êÉM µ] )Vº(Q¼Èo?ÒËm>a$ÚRp«±ytfµT‚Tq¯!}«WV»³k 1µá¿ØSº#waY§L_°¤1sºQ²ŒŠ[ÉíAšr Y¤þ‹WËhw‰£¡B¤r}êŠ æCŸ‘á ÕÇ­>cÁË|‹qã­DöÑzMkïŠ-õã¨Ýÿ!‚12.yX t‹õbn:tÍ"lùöо„.O?½¼5E~&ªßüßw",ŠlÚóâ7ú¢?êŠ1zñÕÁœ/‹¡ceAËHñ ¨  À2Ì µèGåTŒñ£{NºEÞYÔk´ r›r¢Ðfã 'Ѐ»_àÐʶù!Û| ªÞ+VÜÑ·׬›il/*ö”pQ³š³lz}É`Ž£QÔtuªwøá€z’²‚,#RÏ‹:¦þèÐR-§JÓ"«±X…Ï@ÿ”;ÂÜå—ºÃL»UÚÿ•(¤ß¶@ìƒÙ©=Ù™ûè+Òâ×Q[”ÈF?Ï ½zë)¹UÌ=Ïec ?XøˆFúè½ð)­0¨dªaŸ_믕6Ì+™-à.u õg_÷Èfq}Uè"÷Â4ŽL·l@ôrÕg£ªbÁð¦•}l(ÅH„ [ôj]{Ò áä]ma MùC ¨ÌÉP«]ºªYa‰€GOLÀ¾¬À´*¦î‹fÌ”©.Np>CHWñbëì51à죜ñûBÚE²ø‚E­}¦ë-,V2gVu‘z3AêòNÃy•$T‚ÖrǕʄ¼`f9pÒ %A'bXvXÎ<¢Å©ë”Ý%Ìê®»UÕ«cë?lmú8AŸ€–;x˜Ÿ’ÇMTTÓ”$üp²ªNû4…›ZæRe›œ*ã!ÕðÆEÛë°©ÚÐ ®q°Ê*ïìBL•òˆ~èÐI‘lVOëh=#ô­‚ 4¨©zÞ‰TÇȲ<»Õ>Žø3ý±?™¯¡ï(kÀ6g‡ÉpØÜke°óA+©×N¾?xÎ#XåŽÏTxÉÚŸ2@•ׂÑ™Qº5ÒÎHúhßSCï ê íDב}l}4a¯P…6è\¬÷X.ŽH¬Þ&ëMÕN#>“è´;bV=Ï^<õ˜ ¹þÅÔxèûñGHZ9€?£iÍŸ—ÉXû>üPs®¡hÆ=…¼¯%“nb7ç¡ÌŸ‘€‡Â}‚QÇ›¾IÙ(VV]ß0ÛÃP—Žéps­©í€åôÄ+tU,èæ<âe¦JùKÙ–~3BÛàêœÕ Yv…î³/› Å2Ý(N¨úvýha‰}n¦càgèU}ñÞõ&h^~ó‘XðÆÖˆ¤ÊÃAárd Õ>ÂfÄ´ÀùžÁŽÐg—f7644–ùAU‰óÜCL°èÒÂÜðxéÊ…C¬P’ŽõåÄy§ž°²ˆÖ‚VBÑŠ<;‘F¾ Éè\Ä4‚¶µ %Û fÚHß5; H¸èíÙ- «l$ç;+€35^£C«S‹tpqÙMêýpv%a Í—¶ƒkzZ±8¶¾w¤ÕùyöH_ ‹ ldžw¶FR7Mˇ3ÛiÂXL$UÛ¨ïjêˆá*€cä°zj()á{fTþ3ëÕ™´UUvÀù6XBÀšœu2œµä‡M”òëö÷üãºÓ¥ùC^ÝDœ¡c4«u—°¹.;2¤`‹Ös€ ±r•ä p’´phïù©}Õ¡õ56Á <ó«±;À ÝÈi”Q™cásMs8ŒU(ÜM3”aù×>–\ÒñÓkG¿Øìz¸2mP–"ú¶wR§DÍ€"[ àð&uFÝLiô]½›Ô¼"Ψ(šo°3Oiç·97GgEúÄ ü£v·1’1â¶FyÛwÏÖLô­}~–omÊ¡Õ;œ‡WZÖÁ›aÐA¢ºIÁç0 vöíQZMêÏ^ºñ™û©ýD™Lç…Š”×%ÈÚ£GKGŽ`}0:üáÓM*Ÿá"éÐc\ ‡~qqA2 ñÅó 8î×ÆÏ–vÝ÷mÆ=6n·v›GžÛHô[G8ÙWY7u€uùn£iѼ»5+‹n†Õ}ªö|Ù¢t[ÊÚ"ü ;@^Á?J¡ç[ûK@_ÇI+–½Óú3|}ÿz^þå ˸l5˜ÒuZíí&­Û™±ø 
¥Y{ÒdéÚÚêûôÅ×¢Ÿ–‹: h òžh4!7¥­S{üaxHÑ4ÔÞˆìÈ0w‡ î£&u·°‹eaø4[±±Ùˆudf7žØ+:A_öõ„ž®Ý=`F¼$#&FÞ€‘ &ó»¶,Tð!\Æb¨q£ã.(BpêŒ(Î~sF¢jÏJ]è‡óªeT¿t¦?áÔ{bÅ_Û`àß}Ün +‰.æ,aöK¦ÞÛ¥»nÆS“ ^ù!\Xg’ÀÕfg+]iDÀ&šLÚû¯ü¶ˆb bêÞ¾;ùFš?©~6¯=ýr÷ùØ™°í‹âYcN̼£zѲúáûÄZ•Ú¥}AÝ{Ð#™]ù#.{–ãO+ ÝXk³Þ%;»˜'·¸øãôôäËpøÐ·÷²‰ó*Š«ÄméÞ'æðÆTÊ|°Ox¯œrΓ׽q7ëËd&›O±ûdŽË6Ðr0̣ݙ‹|;“·3y3““™Ìˆ½Öx`‚Úsùz2¶»‡OŠ^÷]à¾ù|U|0]õqö0#KWÚ½5ŸéöÁ‹¸µâ½¬È#@4«‰ØqÖnn­œ|œÉ‰y´vkog§o“™|ûJNfòv&Û1“Øã»ÙõvŸf=ûÑ^?ÈÛ™l®»éP_ÉfØÍºKd3̤ÙÎäÛ™|7“/Ÿëeh!¸òò‡ôõ^ÍÃWÃÓÜ4ÙØ‡L6³µgój8Ï7“ïg²Yu’d6¼?Íæâö•<×ýìÜŸu2 š¸ªÝÙEîÎÌž ÉËhN¶ãM¾÷ü{Î_e>þõcÈÊqûéacæ_À¨'7‚5ìLO¡J¯Pêu°$3,‚zWkiÅ=gY«þ݇j=¾ì³º•ßd‘¥G_?ºmñ:*èôKÇ×yøôIsÄÏÜŸ€M‰~¢c÷¼õÞú™¦@áO1§(á*ìUð!Š% Ö½lD ƒH[eNŽHcFü·—mPOÝh5ýQعõãGê-r|;uFG(†²G’[,è­:Y•—·Ê uöY.TÅZ¾ˆÐèEÜšƒê<‹jæŸEtmøw»d ûE¬¨­9ºÛ"‚p†®ê„\¹ýîHîózË×j‘ñ}o·w\öZrü>¹ÿ¤JÞ:ç0ÀÚ@í?M}-ƒ}éÀ‡¿ª>;5çÿ{¸ÇæãÇÍöãÃ'…ZE[O3þµG~QÙ®4ówª óO]5f‘ºMæ ¥]Ÿ£¯—¤=ÂoËiTrÇ7J²_)ö4[UmùéwÒår©ß-‘úíê‰\DÊ–râr7\ˆ‚™ý-i<3OÏhFвÏЇH¼,ÈŒ¿ƒ: çÅÜô¶””æ>˽æTX“zi3ã^¥qúÄ,Á ¸ (Ù‚z-ÀÕàE9ù&Ò` øÎ`Ä VŠCó(ãÊÆLdˆ±sÊŒbØŒ³sKU*0õܹNñ[µ0mmSV†Æ3ï® y”¡?™“øù òëLŠ#ä–Gà6ë·nä8ÔOÎ\®9ÔÛ+ׂšä%H½³â­9îvK‰ñAjX¸Š­Ò °-A8Î(pní‘—uŽÓòGÐñ·>»KŒôíÁ#?»ËH¸gáÁ‘@ø ZÂÃè#º É8TêØ¬æó’+g|†“ßÖ/bЂuYcÆ0¯Q&Ái¿VöÜñΨÐ7¦'—z~o`ùµ%pßš€¦°«©ºÊµ!¹DálºR×ïªærE]Q…¶´ßÕÏþŪŸñšN%±PŠK:Q±$®ÔÃÕq^UâøÁú¦"€eöŸ1ò3¾¡ùíý;ˆ¤glÍs&Þ9§ëœVuNÅ7ç65¼†sÎ9c$®¾9Ý÷(¼f¤[¯Ü†$ê(ž,«QþXRf¼1ô/†aŲ}~Š«t 3"…_ >\M’·ùЯòœ9—w–M;KŒýN*ç« GÎU³Idó¤®yn’MÎAÎ<»c®µ2U aï¿…˜ØÛ¢™çPß9èÕÂL lÓhÊsl¤(ÎQ‰säá\8CZ A2\ïDÏ€ò,nŽú2Êü+Ø•EKYtãƒæ`‹¨yEÃp˜9œÄê¾sDÅ Ca` °â€¿ UÏãÅóhì<®:o¾Š`þ`LñÍ ¡‰¡½áí;GçPö9μp¯ünÑXذ@fÔ·3›4—׆‚ÚíÂ~¨ûéYÒ•L…U¶ù;+sØX27~ѳúa«+Ó° ±ÈSÝMIÝ^æ-xÊóVÀy;N”óíšôY TrGm}… o¨­š:i䫪=bõú9é꤅ p¦Ÿ•kPЛªØ“nmê|SinV³©–ñ(¬¦®)ÖG[™¹ý¬“}».èߦ'ס4èLu«k¥ŠfUyX=£Âœ66O›1¶&lÃ}mHœ Å|ª†AÔ{]ÈRS¶Fâ84„tF«yÍ$6Sx1S—¡à2¤+tOo2¯4‹‡]JªÄH[¾È7q/sa¶>¥³14«Z¦õ÷.ƒfvza'QGäl :s:óG“ùŠëÀàNÈW`\ç!ŠC„@ƒ›'ȯE£Ä)g~†úÎ’qõß§,\‡±˜-B_a›)kúfEëáR"pnó_ÅžþÃñ¦ð¶ô7°ã_ رßO£cýðäˆ Œbì‚‹gå‡}¦@ãÆpQª\5¦F¼)ß~ ©aJrÿ@Ñí_ÁÂp´ƒÞ&ØŠà’Ô(75tä¬Þ;jë>mIs_Q`UøUh§Ô®¨­ƒ÷¬H—£,ª÷«ƒŒç¼9ÿJ'}‹’qk´5ÀsÏ$§©{å¹£ˆÚ-1 ·9UÔâ—ú´½£‘¯ugC_²ê=>Fè²góð)úI•f?“vAm€r¼ó 
ÄRf$˜"ÈI‘+4ônºbFVbôŠ•›êä¶þ¸-8þFYîY^."úFN®¶9+þ8«&gëĽQÅ̳0®àbŠ˜¼UÂÔ‰˜Ñå[¢ç9ió+B^C±i9™#/ôùD€\ˆ-tDOšÀ«E‰"/9•v•²o,\ôc5Á^q-Ýžjc3cíÚ°äã²þPèeÕæ^‡Eæ‰dÀƒÈæœÐèš¼F_9p±LsÀ¾òL ‘çjÿ¯ºÊÑ´m&X*O_Ÿ OàÂsYvãUæZ`š²ÅR²¼&·ÖÙ0@¼¸BOTè"íŒtϾEF6bÖO…‡ö ?9è3èì¨8žyGfCº¨À› µÞe‘éÇ_f#0ËÙÔê/³Ig¥0 n(‚s ŸìŽ%Ø»´U ÁEžªßÀÁëœX‚sTs•ö–Ú;jë²Q€0p5n<ú°©== Ò”x^–üաЉ” Öo[¹tÀ.J”WàDþ£Rõˆ]I¡Wl޽ZñRÛ™j»@Oƒ³’Ó;–Õ¨~˜ßºVãà•j$ßJE˜eONÔԀo2—†¯§à±˜Ô£³Pðoþîg"(üfõG4¹Ÿ¦Â Õ´HQmõç4JE!Ö)-eŒ0|¦N»tÑ"ÿ#4µÛ:«=Gá%˶S—EçôÏÖoª»é²Ë’m-èa[AÝ&»UÉ_Ÿ‹=ëO‚¤OÔ{UõTa,¼¥ö5çu,‡å€üX š¤¾ª}¸{]oÂD.ÜU8‚Ô9ø•ûJ»Ajë9MÿvÐ@YxZW}ÑøíTn©­p? ˜"$,è ê‡ÕˆhvXX LÖÊç…ª‹KŸŸP2)EIú -B» í#Ú A÷íSO¹9ý°š–'HrPΨbú°×Ù¸ð¨LéAѳTO~rôë°uÁ½£œÜ©‚b88LøÐÆß ¤= Å¥y3n>~„,nl}OYK>¾¢Õ'X¯šc› J[#™XF„ —·†‹t ›œ®ß‹¬“+Z²˜´gþ˜àv…§P[¡<¯¦’æâTzÅ‚LbÔQÀY•Î…?&03Õ©’(MäðEUŸÅ4êПN:UŽY Â1ÓQøÈsÔ_¿ø:èužK5µŸûr£å4e2êXO¨rý'%z wD‘uâ.</NP‰b­A˜%¢ì‘QG¯u¬ïÚãyÖêþØ£6Ø€Z&A›müAmÏÒQÖýàFGÀ²†&tÖ4L±[/‚v°ø×¹Ô_…d*¬¬|•‚\)–ã†ÊâtÙXáÓ¨Á¸Î\—¯)¡° `X2سa’©ëÔV¸‰¦)ÖÈÚ¨oL÷:ÙäÜ nXJK_, Vr¿Uó4Lµä P…ߨÙùR¿Å;ÂkÓVTPX{\æÐ!₸ƒ¿©k†- bÈ×Ô­NL%* p†ÞÞjùj…^úãÂã÷ð@0C©aIT¨/¢>çk3Ô˜šÆ¦ó(½*¶Qþødàm}}eU¥|íŽïJ®±vrÔÖøšµÅö•1ªæf”@CS·èk†LAº‹RÕ ¶%‚ w m¤F¼2tê‰m€%ysS¥ŸFBÍF;g%yÙô£>þ>ë?ë´8×Þ¹¼L«…ÖRÚJ9iÄFÛ½‰rÐ+ºö;Ö1ÃÔà9“«UxEßü“úãr Åœµ¿µoöÛ¬b 'ú6CQœ«Ys7aê¿ÝöµpÑB‚^dpúT³åPQ|R\aÐÅêŒoó>õk1XU'7¡*U”tsŸ2ô0)5¢ôiµ˜ÅQË„¦n@¬¼È—MèUt• úwKºÞ2]U®£)¨ˆJWè:+;(æPC†Q9Àþ()Ó˜÷ðueº†é„Ùú5á»`/æ¹B¯, äÜÈϱ.´ˆÿåÍýë¾ ̘ó’öiqµ)eD˜Ž'ݹ¡A½N§e~̶˜ûÓæSû­íF†Ã'*ÑžíÁ•x“=üÇÿúªU~È6·RÞ`°Uá–¤?'¨ÉK•œëa[9ŠÇ»PB çh¼ƒîYx`A ßÙöÈ?{^¯ì‡/bjÎýa/­JÉ)-ïvïÚcáè]~†½÷£N_iã6$yÝLµ4]O¼ßv/JwùÚ8]ŇJê*WÀ«u+/Ý@$±ûºøZº‰^ö‡ýQa’»·ß+_µf Ì ûÆ®§ûfPü‰À%ÖÈ$øÂ’?*Ù1kFiÕ*ÐjŒîA5igü§´(S)KÚßv«/Œ\”a]U¾CX¬òVÜŸpÃ˳Ã4 +™>{: °å2à]Û·å'j«&•ªò±L´&»Ðo—s2@ÒÏþgÅo†¢³(ØÈPBƒ+J©‘r#¼€·Ì)^k5{›/²é÷…‘ãp v•,ßP;±ŒÐ¾ùC~ÿ o˜g;Œñ ¢˜Ð†×‡UópDù?WY‘Âï(BÊBÎBÁB¥/Uáº&® u:«ùTÿ51™ú¨½ývEFš¤:Ò“£‰~#rÖÜÝ"ëâ7 –KÑá×5~ý]C‡ÂH–d™Xe©ph'žƒ§…Zd‚Rëlê{â@þ™ð”H[8_SÏnµ‹Öê«q­(è£3Èus.ò m•¹Ë<ÒÑ‘7ò]æm¥“)ïÁ ·Aã"gIÕñ?Ms‚Jùq„¢›?ª†ôñ`9éˆ/|åëÁ¸× me, md/ÙÚ µu$Ha`ÐyiWî¾€+qQtºÖ~ûñ²Þ>Ýmàa-À”S ©®‹¾yTdš3øÓ‹ œ«x”¶êßųjK`©Ahòò^[ºÃQ¸rU:dÍFVŒSÓðjPsY¢š´UabIJlQ˜ l¡–ú (x¦Ãˆ«Vì_åQK'Çâ+Ê í¦tPþñ×p¤=G¸„'H 
Ì+£1ÛE¥fý²¢É-”»¨èœz0^¹b–U«©Ê«ªGºqÌ‘[>M/poŽáíz×Ò€º‚ð…Ú#4ñ¥ˆ»œº¢%·¢ý‘‚Ø¡]q{Gí[jßãf)Ump‡˜’ö=k€û÷ª[ìfp¾*/™Lž=k9ÿ&çß\Çïùj{¾Þ‡ÃöoùéÎRf$]Äæ>rtÓOà ÖUÐ9è )åC…9±´’¾LM€×:CV¦!ób@M€þE­†Ì5tÃMݺ˱-Õ­Nu9¢›¾Lç*qfܽ/‘Þ^{Äg—õ sï5¢#Ð:cÿÁX‹UCþW¼(QŠÆ€0°ï¬C{Ê*; J »jö)Ð¥W€«ÐFL³1TMø\na¡¹CóM‚ÜéÛyCwUÍ ´ò4Èd—R-¸éae'ù¢«ýå å/.›A} ÒÖu±™ 7E!aAßóóQ7ÒÐL©­c`/v, Ÿ·,ܲpÇlê“e»j]«R jFJØêGù6‚ÆÆ Î’®5ïA×´Õ[Ø ²ÁÚ)ÅE®Ä!VcçÔ °ˆއ?ƒßÇyî¨ý=P­ d2× ÎY ‚Κ Ù Ej€ËXeM£ï½râä³pÊ¢k50Õ¡î´ï© ÿM8G}Â-» Ö[8$Ê™û›C’Vå˜Á¼Ø†}[·˜á¾ÐÉt#Ö˜V×õ¡+Œ~|‹‡Ë;x4òLÓŒnÎ5¦a8`”€.v ìrþÐrJ!¢Vj|EªA½ú¢;é¦Å±°î”?hÚÂÙ¦Lq¬&rý1*öÿ,{Ò€¯EÖÂci%Ã1ão08ÿ’¤sõ(ôØú,=Ò¦jZ*#àö@'.<ñÁ¾¨Yc)õD¤ÔH™‘r#F*dxi“ÚH‘>io$à#=É‘ìŠ å¾^veFy®áçŒDù6~K íC†_ {G¿)5RŸèD(³)Š©{ê¼p¹8Ì‘oR¾D¡b¡f¡!†Z8ò……gÞ|ˆs#ï)‹4ìò{+%FÚéÖHwFÒú¼Ùëebz¦jÒW`µ¢{`>ŒÝ2Á͸Ì¡êÏÁTǤ;6ŽJÝÔÇuüîîº8ix›a½ËéðÄKʸ³y*tkò½ÁA¨[:r ‡r–°2`LÅrÉ:-æÒN¨MXKê¸o#’ŵ³©ùdL8æàeaHS…Dˆj‰s üô›@çåPÒS—ØúB[µPBÜH;¡6 (‡I\Ô¬þ!Ù»™˜XqkD8L%AùRé!ôWð©Åm¯DÔÕat-a…H?“B5Ò+HP¸5d(ð-µwÔ†áŸÁ„Ž"nsÐxAm]Mã|Ö.e8ùj,' ,.Ç kÇUÔ«Òc‰A@mgè(+}+%±ÖEàŒ~M™âwV|ÀÇÑžL˜÷ÕÈ®ßQŠœÂK4Þö*8>¹M?xÒ\ DzÀ ü#ìWãàQ†àö¡‰0§p7é ‰”Q@¿½,ú±ì,r¾'Z-ãåxÿ³Ù¬£èqÏ!û–Ðk«ÐV Ã[9ÎH:¯e „¶öäõì êæ»©«©>PõZƒW¡ {=´ñ¦­Ï(áà å=NóŠâzç3B–…[ÔË2!6ÂùŠ‹YÖ Öð°ðh“2"RTh¿)<¦…jŒ«ñH4ÖWÒ), åƒqšÅâ1ׇ0Jêºxl:4u°=ŽàòY<ª]È,3«¯=æ€ FBÇêôµvªµ=—„OHõX>7$>÷%È@LÈê¹;3C0ëXERöe:ÁrèœHÇ‘G®Ü›kÁ2 Æ(ò ²é¤k݋ޢ gÉ ®ç•”[OBY›+«öœE|ѨW!¢)îw5±èå¡”ÌO¬‡®zWÞÑò0Ô~Šì ÷f$‰"ˆ­xz"k¨*(ú˜— Û‰‘q|ÊiŠ0}†žÐËŒ–ÕaÉž¨Ë♽”3S UØn*^‰È ÷À°LN½27>Ïò;„¿§¥ˆ›ššô~uC"ƒ¹×®= …äF?Ý+”k}Ÿ‡¹~ “óç±jÓ6,ö$Ö,7ƒ8ôfyNLóû ¶èpÐzkgì,V<á/A¡sýŽ×rÑ „=®¸÷ÚUÑ'?ö·áíË:·—ñ?KlŒ˜ÚÍçØ-¸Cá+]Gt)äÅ^®È”¶iù\Ã9Xúé—Oþ )¥a\ë vÝàˆó†Ì™ðh°!FbG¸¶¼¼ý{ε”½¥?TêÁ0é{âSƒ¯ÙädÃDJó[‹i«”³¤?)‡™j7ÍT¥[訜fpãúÊOXHÆã3ˆõ⾦Kľ­‘h3 û£¿Ó#m~ù99j{…ŠIq<¯ñE“¦h²®¦,®§&Œ05ä$,¢ÜlótFù¨úb.,ËÚâ§Çöq36U˜¶:ÿò=|æœzñ¢Ü¢ÖÌÔJ®DO4¾[Xy¡îÖ´Þø0y·º Ýˆ*³ù¨5c1L޼cDkjr?·Qõ•øa9;-ŽrÓÇáVúòð„+ø~îƒ/ØâËÞ?¯oE>\q>ò¢ÆQ>vñû²KÇœ­k»æ r]ÂÖÕ¨*(§‹v1 %j†«±•¼…Ñá“´•¥ài‚¢ U–qÐ×O%â!áÆdÅ.;°³®›Î»ì²P]L%Õ±RJXÕîˆr²ŠÂ˜žZEJ¬cvÊE¥äTŸµ¸á–J bQ ŠÙw2TìE2L>¯”3èôÍå ^¥  Z¯öMð‹Êfø¬5Tº&ßPòSèöz‚Ò—ß™IœUñ”ÇÕÝåZ’1ç;É.Hoá“Oú>7ÍŠ²IÊ¢ümÂÀÒç“I¤Â¾-AR]ȺǼ‘Û¼î÷Owÿ¥2g9Yç~Rß~Ô<³áKøì[ 
üסO+pF™üéîAÀ½®¥SËUé[†OÛÛçÀºšvJ½E×r°ÃMÉO‡Qgÿyi6]{U6~ w{æïPéõF&÷è§S‚oe»Ï ´ÞߦyÏRè^:ódÆÁ8„p–.åp“¨øîòI‡ê«äñ°ŒÀb`qš´rA%$„L{O–ùjG[¾”RÎMJàªw“nÑ6?PRÎ0¢ÞÌ´iìKßtXS·åV­Äs–뤻¯ºc¦ßë]¹ïë4 V)¹Èî¡,›`ßgÝâg˜r¦Ä^ÅCa,¼¢¢ß.£³®û°Nêî4€éŒ4¯–Y6!¥'aáŸ)ásáN~ÁµÞ ó·Ž]y„xL¿²øÈª+ݘ7dW¼:èOŒ^ÿ6Þy~kFXÏ › >=¿WëúŒœ_>A_#˜7\íŽëWQ±a9rÆ^;:e}_ôšî¸:ø¬S›¯L±ÜÛEA[‹LC¥óW÷­ß“ŸC(ØÇà5qz"ðÞ2øWל(RúöýãéÚ;Œ§é§Æ¼$Â*tâU=,D‘qì}›ÞÁ{õUç…–¾]LiÞ9¦$“)~4{Ô¢¿6²¿³`̈^—M§è‘?µZ²´ýÉ5ÍVlù~ÑŸYIž+ñ ‚}ä”~ãTxœþLM5‚(ü©ëLè઎¶öÎü=l)žÙ%óƒ°(ÊMþ¾'Éò[[’ôWl6FÑyõî5ðÎ7H6™ˆUü@ërvÄÞÚ`¦º¶³š7\ QÜ™ƒHü‘(•:o¹ƪÛ}ô”¾ìÇPË`å‚"ˆQSJƒ0A“¯…p9<£I¶Ù¸«¢¦Õj*¤¼ñƒcx1­À¾ÈUW””èd¨Ô,>V@´ ÅGø ZXG?iXi=eR¢©Ïõ»{œÞÝš0êŸ7X¯c®éEº¡Ø@a™\0Ñ8 Š˜ðMŠq• 81HHþ¤M0Õ¿¾*{ÌêÕ×|Ð'x±²ôÌs¹iò6dyö€r@‡¼D5ÞéI cØ*n„S‚HÀ}uªj<—§Çöé4ÖÏôd.×…@T BБÅ%Á '1&)Í´R€ühÚ°¨q;¡6ÅñˆLåMúTã2\*r‡yO¢î§S(Û|á¶N ciœ%Ú#*suª,.S‚ ;Dß.ûzŽ€éµ!C™ srrÇc;–R>Ý%â`£ò®¶¢ë%\m¡Ö+EV+_­¢:/“jê¢Æà~œ:`,“°ŽèßÓQÝ·ß%ú jžø4=vÕ äb,Ájðí(3®¤wf/*íÝU5¢àÄñ‡éŸÍäb&ëä«•3Ó­ªr‹ÛJt8ê‡ùëY¤ˆ®qHžÏ®íÁ½‘tç—´fJ¢¤_L¤„ÎÖbJ²«Úñè_ùŒvWSyIò÷ŒÀö9Uxò%XrC»£¶GJJOÎJ´/} Š¿§ÚMûv@›<ðLG5ç–Ý·½Gé«Ü¯ÄHÛ™„¼°ëÙ_ßÉÒúvb–I¥4ÊŽì2" wPùR²xM¬ðGÄý©J©ÝR[œÌ#BÂÂÖ) ·|ä–3¸<¡QvÌ’+U†öµÕü“¶£t*×¥\q!Êð“žî§«ÀJÚŽ„ÝEO2ÒeG,m"ðµà#ˆ5I „vZñF7!ykÔ”¤Õ8Œœ„1|"ÂŽl% F'¯É6;K[#qBKbZÞN’Ù Шx 9ij{Ä“Ö"€Œ^$$õ.ôÝ+ø€AJéHJ«)Xª8ŽdcEË)ã| `L…þD?Ó„µUÚ µ)k´ÖQ%!c$CA1ãL_ís(ˆËG †Å£æ”^¯Yk¢ Ë¯Øë¬ÇФp¬4 5 Їti–¡î©ó;p^FÒýÑ®ƒb:¦àÔ”tX¤-ưó[x Œ³º¤3ÓtèÚ7‚+{‘.ñi}ˆˆ¸FoÃoÿ5T…£oüH²À„Ió²!¦ÞýpOEdlHܖܜЖV·N@ñ¶VÔ†Ò6Ád{«®ð*¼4´ -(YÁ"…àÆZyoàåWâ‘Õ7,Ãÿˆiqꌗ3N¼FªJŸ ÕàÊ ®^ô±Èî7aÜd`ÿa­»§6å…ÂgP¼«Gžóº†>±”®«žŒ — %6aÊl½lá‹Ïp>„ [.b.²N^ &Ð}baq „>ò1¡Ì¹Áנ܀ú0ñ„^ uY‰ 4wDß2õÒbáÚâ4}éFª"V {nP‚J€ª­'7äÀ©Ë~ªƒOè«© ¸dC)Û)*×eàflÄ þ†¸j– vŒ‚¶É2}Òlê…v6º˜t‰ðmÕû ¨1Ê š/‡´ÓíÇg­¶©„À‰€ª©pÐâ†äÃãˆhÆâUÇ%®eI׳JÐA-Ý„5×âVYÆð79¸”ëýa¬‡]NI¿Âl(žn[:\(‡Ö–¾Q†øE¯ÌgÑ=À« Š% L1Iè<„§CÛ64p¨ ™ð–«ÏHü,L€a"‡\á\©®Ç§Mˆs$.’Ø.é:0Ó°‹‘N,#Z±xç¥\j; ŠfÜÜ’ èå4îuô þ”™šûo]÷ÇlϽo‘1+§ëDôà.å—Mö Á”VE èÄ Z9h4µ 8dTJr,‘ôGŒ>°l»jª]¶+P·u´y 1Ä-_Ñ\GÆw*ÝÖWy=Œ¼0+ 4>¢˜cV¥[“K¿¥ˆÖ¢²¡Ãt :=h å=±ƒFwç–ÆÚ¼ƒ—jUøZ7}‰#™­Ì$§ÃFiaϧÊùØR‰NÕ“Î:G§p ƒÑã’(à¼ÈÜ¥—’HmE!Gné^åEÕê¶Îø ÀN1Êê×ûÃ+ây”À¾‘`)B‹‰?ÌÛlP–ÿIïÍÌ‘†ÊÉRͨ‰06Žª U!¡bí1Pç·Ä6^:è 
]‹¥å¸×Ë2ÅÆŸööÈœ²9‰[:µŽs#m’ÿwÒëmz;'ǤQ#èoLг©ÓŘeꔕbkÞ„3¾ôl›cz~“;¾"­¬Wr_%×O½£SikÐýà ZHÚÍçUÙþÐô©?T7]dˆ×eª›‡f¥§*˜ÌTW : ç?Çfup8ö%¶W­ë87š+DnÃ~ý÷æ]_ÞZ.–á0fš*!Q‚òÈÉ5¿µ¶ËY¿µ¶—öo­í·Öö[kcé·ÖviÿÖÚ~kmçöo­í»ëÏßLk[ Øß ßÎ øÉ«@Ô³ÆKnX@PÅ¥ÍWLê>.CS—S£ñÝu SjgèFa+©×§û_N¼ÂW€ÌWŠ·­¢—\…!Ç(è<*µZF,N¶e!akªÕ¥Œ°¥f@ÑJª+%mDk[÷oŽ…”…Œ…‚…o–2~Ì|ü1ôë êe—*;Úk\' ) P ©ÈMôûí•Ó’pªË6Wð\¬’’°°eáŽÐAæ!°K¶RiNkŸË ®“u2,W½æ¤¿"&Wúñ…v¡°À›h4 u‰ý¹¢œA¨˜Vø,é„cî»9_KŠÈJ^”p") ëi¬6Í7ÝÑû7N ®•Y¢oüíž‚lQÔ¢(€±2ATÏV í³®Fó{õ1IL³($0~9¤a_Nö~Î.Cµ2˜”çp–öMóEhnôÅ ¡ÑÄœ“´YÚ7h‹ 顳ŠÒáùõ+wûò_ëvþZÚkeŇÅÎÊsË0–·öûdÜ3 hEÚj LBÁHÔo¾?’ŠeÊS_‡ÍŒ10ò:Î\ÕÑgØð=OÞR¯æÈžKs‚÷ŒãÒÆ®FqñµwCŽä ‰ßÑ¥«s+ÓzÇ&¾|¤úðr€ra€«‘Kßgü [Û'0Nª©¾¼µ¯Ì¯¸3Ÿ¦2Òåö1á4ÁJ+zÏ}´Ëì™Xd2zu釄(;átj“²,Kù½é »ìaeu=Bß3NT<Óo½íßzåo½ò·^ù[¯ÄŠú[¯ü­WÆËÿÖ+Ï7ú­WþÖ+ß¡W.]i±T×a¿JkÅ–ÝMãRY®9î‹ µ¼Â”Äì )Q |;|-ïXÚ:ˆ;7qÚ1ÊÂ.;ú„nÒÄE§…^c7jsТ)“Æœ“^b-}‚*c>š ?Bj^xœ—k(y]ô ÿ—Ý¢«3Ù\jER‰8ýû“ý§ûd¹ e‘‘ƒPâÂ!û9m°|jÐ=%‚ß 9 ðÀ2Eü `}NšÐ“”òyt„Uæ@k…;îYx`ÄzÂ+pÙ„Õkš¡ðÊ9H“Sf‘kró2/µþ‘´¡ Œ(ÉÚ µq™úÊr.Ë6ešV"ÀXVÏÞ©Þ”——¹Æ…‘Ïä97F:“a©I :`…pmCÌt*PnØL}pšã¯X á÷Pi|[¨Þå;G,Ú³tó QÕŠy±Ï/˜rîK²8eþ!aaËg›#7B„;(ßâ”/ò¥S¾tÊ—NùÒ©¹4¿‚2y®ãM?Ò¡Œ!ãGÈøL.}Æñ#p‡+)÷–£Q¾ZÎïó;äü9PQÚ)1߸àç+0œˆ€óëmÙH“rnóªWS™SbôgXc¯‹Èé&܃­÷ý nTGÍ"#Æð&ª8™²Z¦|Ö·jdéÊWâ¼K- ”ÁÌÊVKÊ‹ò7†ªm 4ÉgP«á¼”cý$«pF'.Ÿ@ù”:l§ãÈyz$'HFØ­@ø¸LS(‘yÚë>Ú”vyÊt#Ηj‚Û*kG¯ÀϤ˜˜GÇ>S :lIꜮAŒ)jü6f1y૩ѲÝn³ÓzögÖöû‡Û+4ºs1gŽ=hÝ+û³Ù&íÎøÆ~Uòð¸ù§å‹›Óþük\½´}‚¿íÛþ ºüuß«j±QšûZ·ý¾:kTGÖ ÙŒz¿¾d´"»ÛÑ6Æ{‘Ý~fë¾Y¯®ºW×R³z¾Z#ç=÷ðnàOi¾> ]’;öŸ£²›~ìµß|9£ƒð#\שׂ}ðú9éWÅýñê °ûV·š3©_mW^é­«ó×¾ù›_RqféÀŸ¯k+×Ö£ï,$Ÿ~ìÊ?qµ×¿ßY?v—Ÿxeûšÿ Ì5GËŸô~\wqü*ïÅ;<ÿ½?ᙸîo¸j|¾an “LA¶þŒqö†•ŦÔ5»ÈX?ßµ@¬ÉñÈ¿|ÀþÔùgÿÀqðSùÊ÷`ï!»MßPð»™çøe£è­¯O߈¿Å÷úÒôÑÕ!h_çÚþO¶âþ <4­ÿ¯G6èÐß6àKýö=ÿå_ß÷çÂ&Ú§-Í¢÷—W®Í#È5k¤P[7B7K;¥vFm­î™¹,l`™s+–VT³_DhšQJŒ´5B(ù}‚6v<Ns™”eVš¢\ïTëØà:)ëìÐR(%$%æØÖH»™”ik¤‘nIº5W¹5W¹3ÇîfÇøŸÌ™ŸÌ™Ÿˆum(H!¡ª0o×YG~Ppï‘"BÅ^¤^ˆÚ[jëW¿SºôeÞ*ëdAì¨ñqëº>”ZE&¢ÓU½G~ï²çÄJ°ÍtV`»d"‡—®ì‰W™¾[V£™*g}ï(¤|è–…O,<’`®kŸõ_´K‰.bÕn°ÔH3.ÆÍë°ˆ5˜€¡½7‚~`Ë™b§ZŸþj‹yµ%œõÿgïÏvgšnAøÃ#É®w÷Þý_ɘƒr¥E±HÚe×ôIßhôíÿ™”±mÊ”,»ì*>ÈeRÉ"cÊÈHwà3ß›O•6ÈMý›l&•û¯ ƒ¦È[Õ{8ì Z;©Ä¬nb*3åìï§­‘~†ªªï¨2œ ¨O 
Ýr‘Ù.33fYÄ(˜àâîšZšMÌ}†}ÆE×§¢3I¡óó=>ùÉ*FoÎyƒÑ®áåQå\ÃïàÎGF&ÜÙ¶)˜á/³AâCïÄzÎã'y 1pî$ˆa»Ð‚˜&øPžAYT?8 ×ä†UÊ¢{ÒÇÑNÎÆ]Exýxƒ\D!>s¡£…'; ×Í3Í?i¡lh4#çF B·ü*=Œñ“nBÍYúîRã|BSø„–YH}œÓh,4Ï¡ '`ØzªI[-œóË·]'À—]ŸŸÎÎ[¾;ÕV—Û–](Ë*LS8ØêQ¦¬y4"9pnËrà0•tË“´‚£×¸<>€-þfKOÀ3Z4Cšÿuè6…)Ëbé<\9ÙšÚ.Ù(׳–ã<—õ1.v[Ý+ªÓu@u) ”ëÜeµ#„º‡? ZO=‚½ ©6¨ôY‡ºƒêB[}§WÍš²ZIw©©Î'$k¸A«¿‡S†sšµfƒ¢ªÑ’ qÎÕá—fr~u¹ÏÛ ‰ÜadŸL²åfo ™Æ4b"{Wdbi?µOïu†Ó±vÑUØVsÓXIO0®¶»xäÀ5È \šššº%ärþ)=ŠòÎÕG:c{ŽC<Å_ÒènêñS5‡Ò¨ÆJ‰%ß9ðˆVþ™µ«™U ÙV2’ ™R{agÁÔ-sõ´¢|¨ðdí1ù:ÈÔ¿N¦öRgæñ-¶K¬µÍ$‰-e`ȱZ,ãR\âgh­Xóíe>,«?iYµ–L×ÒøˆÀDÿ:?FÚ/‘óg˜N~ÌÉÑ¡~a8Ô³cËöS/Ô®¥ùB7¸–ï8Ä‹øêÑ%qÒ2 ¸Ç ;hýU‚u™‰ÿì·HxUô‘§Ý±S¶‚C!rêû¤#’d%éØå(¢LGÏ<8 Ž%ãþ½Ìö³Á?Öê>bgŸgY³)ý>ÆsË>þÍ61YÁmãõe mR²C{žïN'Më§š®¡y‰?Ä©vlÑJâÕq|A´iãM<¾›iv91IÑÒürÒ¢—L`Vˆ\«{ñ÷Zï´¤/4t½šG^1thŸGƬéâïõ¬‚«úAV•…¤#mn"à5Ê>”å÷†¤7@†­zÔy1¿˜ã'f7ð“”é.‚>øòÇñÿÒóèm÷ç¹_£ ÞjñÀ”&á^h?‰"<ôåÂÝ)|"Šõ\’)à ah Xƒ"|qµ©©o4"x´=«¢6îÔÔ±ÉÒõ0Î6’X5[ö ìCy å”_ʧ>Þ•f¹ƒl—¦ìC™Î¢9\Ãa¶?â0Û¸Ôëg'¦¬AåöÐQ¹„0û¥f>2eÊBe¡ÖK©·ز†ã1:J"ôú‘º«»0q5×"]Ù?\åµÚœà‚K)cG‚íZö(_‹ ʦa ÒÃgp|O>áwf)ŸŒC½uÔvHáHO .×”å;MÍ.!ŸÐ”М[¤ÚO´Õƒ©&åògÌõ«¢.5¯¯©aï:pk /Ĺ8ôjí:½YøôùlÊÿ¸ Ï~%ŸàQøìôäuÓQ®µmÓ,*€H¶°Ø8O6ç8ñßÀ‰ßĉ3wÞ¸“Ëðbï}ùÄ¥Õµ–ú®žaÁ æc̰F†52¬‘O¼FrÈýrÿ"Î;ÙýV'dt ‹äï[$ï¶0:í‰Ë-Œªª¨z êª¿UãfîÏumÉÛ…CÓ·ssíbÛi'n¡ý¾Í•^{ ´·AÛH]› Gvº¶ÈÏÿ̵œ N¤Ÿö^-RŒ S>¶ûxkÞþÝ.ºo¿ä>ïô°6Þ¶6ºÖ.ßEõ­èˆE x8¶Úw£·3[;“´y¡½ÅÖ¦áó=¿S¶øxW®56ÔÿcÏ.>ôÃ`ƒvÖ ­!ÎÆpyå„ašÉ½+¶ÃãiæÆg!N1>Qå–7 ¢\bYzmÝ.8%,*$ôj”-ûPžAy­éÚÊhKÏ\~Š¥AI^èÏ €)6(A°T‰‡¢MC´+5€¶Ê7NLHª‹£n§XI®Øq¬3y?æ‘#é÷p-„?–uîáíh¦ìBÙƒ²e‘sË$§NÒ“LV±¦Ölgˆ½ZçöÝë›õRï–ÎÖ)òT¸Cj«Å]Hјl!°y²}Ô´Õ£í£¨>9¨uùXÊù]!ùèÆ»x‰zÊ)úË·"X+¾ÑÊ’Õ–5ÑCöû. 
Ääó$ð@-â(ºÎÌB½¶¨Êy‚üÔå§Óä—eu?Á˜¡6·„\¼hÛËMé7 *5¸wplüŽ «»RE“TØ ARrTꔕaX¨ÚÉëKåÖp |ióaÒx6¤å´áT–ºáepW`•D2(ã îÓ1e5?ïÌB8¼'°Âå _½+TÖw!äÉ4H³ž×[9 pÁàʇB@ØLk ¹Ç4ȶæl Ù’ÍZÁëª9Žî´¼[B>V£*Bå 2]ó=ÒGc=uúúTîºÙbÐ=¬ÓîðÐ_?5´CEí,+¯«ëׇºî¬|Õ}½¤p4§»AS©M^ÃéÙ¨pÀ:Ý…Pvµ\KÝé:¯ð«zTÙ~õF(*ˆêΩšênáW]ÎĪRœïЉ0iéðy-D¶äéEDV}RÖfä£,¦ü.¡oÄ+µB8JGQ.¸ò=_vTyÍëRR뢉qÝd÷܉2ôëç:¯a²Ü©L}sj{As:»d¥.¬|­×—¿ÖþF/g'f·­ÑrœÙ ½(je“}_sbw9:¬~¢FÔ*ªUßÌ+]MËða§©·ÍÀº<Êâçh”Í:LFšå¯l¢é Çm.éí–¿äÜø)aÌ–ÜåKy¬®‘¤Œ¢™\+^p+ÏLi­pàåNØÉ*¸ÓsÕæWÈl5äÀ®¢¤{Ešéñ#ý­Éö­i†›J¾#0í•·²uîáEs:¶ö=¥c†Y'(Ø‘ªZýΆ†S&Ý•@?Í›ršl»®j¸¨™V@ûÌ“CR÷«m¶tÄSfß[ k› 5Æ›)´ÂnqdgV l÷Võ!3ï CbH}Ÿ¬ôjI³üI²oxËÒÅŠeš-E¡Í",ûþ¶_3ì‚p3{S¬ÛwøW2"y¢×¶vcl¤K3ìœ+'Lwp€ª¨ø7Šˆ*Ýèë0¶]~ /UUhò… X×1MÖa~÷wßÜŠK÷ÙåÆ×V'ôõÌCÚHöMƒgx¶1µ÷y¨â`º( ô[nòÔtzZÚΕßãæ@'Å|dm_ÌʾKoiÖ]:s§šÜCÏEU²%žžé3²ˆa "†n‡ì«n×î ½¶Ë´™ðt£¯¦þ½sè‹;ug¼gä%;ê ;»}Tƒêþd7ñž“Ϭ‚»§O×͉.™^ñÈÂÈ—Áþ‹·‡õ÷Ú(Gû‘¬ÄwEÛL&ÔÒQÉ~ÎŽ_ÙS]¿È¡/̉ãvqÊçüe<§§^s*ø 6ð–eÖ÷H4, auü «ãrK¢k| á¿H븗E­è“™¦%:{Ò¶;5iܨ꣡´ÃFu0Q²£Ä~&¥±+$LSb?Ó¢Û@xÿPCÔ ÎÒ’H):=Ò÷óî=¬8Òï/EóoWyûj¶8Î9VìIÁn½æy%0:€a;‰h:†œœïmÜë*þk‘Ô0a‘kg¸d´eŠ…QT_Ñ=sFÓà~9€¿SÙ#öü¾a;ïbjõs¿Q8;#1ŽhŸÿмwš&»è´5NûÁ´EÚµÝÙÿtØE\0Çyòà¸ÐsÈ\À ‹ù¤%×eMÔCqùmÇÉ­dà—ç«•ôÊÉÔ鉋g‹ÿ¬%ŽFü›Yæosn Œê¯bTg0’NÞÑ›]ôàȺÜ^—2#†:,Ða~®Úµ&úÿèÿƒÒ!¿…V?Ò¾=Ý^=¶ùÎGÉè`Ó‚LT;-€N­ÿb;ÝÍ¥Fu¶ã³ÌÍ‘Ñ%“ꫵ·Ï:û,}êµ>>m¿ß¿íŸ´½´Œ[¶ƒâqƒâñYXbýRÄŠ.Ý„m,óð/þº,Ô''UÿÔS¸'L œzçi:ž©‰ö;óaÊ¡×RÕRÖ´A­à®£™tŽe¹dLÃq4ñÆÑXôPç§3ÛEÏGWMHq$éæ–8žN‚²F´EpnˆwÉ!Cz$%A8–Úr¼–¥@Ïí÷=žOGꟖïw$ϰ?;jÎ'Ç[½ÏaDç°b °’;—`ÏeòŒ¿XÁyh­Gê[G^hiG€›_þì¬ø=YîÀJVú1¬tXìÃbûß¼Ø9Í*}‹>Úó~,i÷îh‡HúXN30—÷c.}¹É'aŸEO8— ËkX^Ãòz·åÅ7ô¼€.[8#­=&¯§<ðGR½szwʦþ|iê¼l¶òˤ'/ûf ¦ŒÏGò:£3·ÃÖF¿$Á˜ ˜òý3àܽ”…—RëbÆ\ʒ˹o1¥-&±å5Ì9hOH;Û3£ìyéZ»s²ræÕîäªgåSŪ˜4µ3OjgþÓ¾9O1›)§/í™`’Šž“;”’q’DçÑ=òh`¬cíf¬]¼ôërÏ/Ë0yQI‹ïYÀÐ ¢s"g’ÂX1]L@:åâïÊöÓ'ýìÍ4DëÀŸ­ó.Ç:»ïh='Ô‡n´(N>Ç ©¦ðú…^i§Žœ¦“xȧæúÿó×ýoˆÒë<³…—¢¼Ã™­K%ç:uý¾CR0jð®‡uý¯ëžwÈòVO×¾tÀš||Ëå°„þ¶%Ôï~Ã7^C)•Z7ÿ~½ß9×óõ\öÇî©&–@öö±¬Ï¼¯ô:ÂDÞxÑõ+,èøíáG8Ò{s¤ 2ž·²š¡\ž¡ôbÄÈщ)¬ˆÑ \c௹ðàbbXòo7:Iô’¶À‰Âv°ÃR9Éj>&T»)ËËÓÆÉË?\Fj‚28kîŽö» ðGñÓäM r`ŠS˜â—eŠ/¯ð™í/ÆF_è®ñxäÀ#/¶j¾x)¾ØÉþŽÌK_—ü¹¼í">«Ëª‹ÝnéÁ6þ«YÜ{má}¿ië/ò2jË_½¦ßQm¹ðýìRôÄ…ÉÃŒ‚z«ÅòðJXFº-0¯V^'!#{LBâÅâXÍ@Svñ%=ñø¦d2—A.Çt óe® 
‹!‚%‚ÃadÌÎ:y€^¥õÒ•ó×{äòI7Vn°”Û(1W­){PÖÛt+ÇŸøfµâÐÊ«m™@R‰í]ýx˜›QJÈq^¯t“a—¾˜&×Fê«´;ù¶'û@î›3åRd5($Š %„–„RBü½5¡ï„6„ätIQnä¨Å•=ê!7²ÚƒSS™Ü{rVäc/6ª£D~2ªãÃh\×ÅÔ OÚe»ÓCQ£û¨8äž)Švò€‹ö—æ,Íñ6Ô<ÇÛ8¢Ñ“=cÃ=ô¨‘õíªèÂŒÄÕöN¨ôêWX‡ž«½K\9yd&C¾c—o†à@ãZ¯á?!ñ…-~_¥eÉkß"gþ¿ÿŸÿKÏ­‡¥±3oÍC_¤dmBe R¹˜†ÒEËÓ½Åíí0é£<ýzÏhsùœÅóëÄâ;³dç¦Bl䂼(–HŠ裛"h®Nn9´£óÓøµ®Pz‡}jCãÿr㮾­KÚ“Û9ú—jÆ|ù]¨ìc)ë¢Äô'ôõ@R•XÚ£à>ç'güÔécBúÜg`GuOdIµÄD>*õ¨¼v««}´Ì¶þxDÅëRë@Ã"õèD-§ß v[÷X]rx::þ×w û-Æüæª:Gµï¸´uî^•óÈžþ“^·»U½‰Fhî/Yï›~Ýæ–ĺfð@ :ä–Ý øØk¬‹yù³úÏ©òùÖÎusÑlpÒv¾âèDÿÒWñ)±éݯÌ"ÐËÞÔÛÈûÁpM´| ]Tùn„øéhïCç´Ç4ÒÔµf‹ 5|™i¤©ký¹ã÷UiúCãÈÍy§n|ô×7~†÷úƒpGô»„û‡Šó÷ÜŸ²˜Í}r éâ&åkMÊ cÃO AyϸÓsν¼•~dÓº{£ºµ!}dº½í|b@ÐË#Σ÷¦‘hu;ybSÏJ=Ìwº©—†Ž¯c¢ SÎ5!nƒ´¶7æ.¦KŸû$ðÂУžyÙ[L‡Š/u<æR>5Ø ÷MÀFv÷ºûúˆ"ùùò8‰NÿëoyíŸÇ{TêúxoYŒ)Ã[Œ¾3xЕc¼33+'çŒ/'g"§°ß~÷=LÒu^a‰ÖÌ7Atw§9g®+ç9å™}sþs>KÐ?z¯¤¶}ï›èÈ}K‰aìÀË\8ß:%Ù}×äëd‹™«+0-bß$ív¾PaïÌÞ~ì ÉÇ$v?9Y2g‚²¯JÉ©” Ù×Û\𽲯wÚ¢}ÍÏÏÍlfñç3 °KNÏòŒ ýY| ×ÚïT2߸Ü/~·Â‰ ìtÜ#ö;ð¨ÿ ügà?Ï®‚(ˆïõúW¸5u“m¤uA•WR®•™²ô,ŒV‰44È{ãºPUTè+u%‡ð~âI¯kOÁƒ¶51vb¥²Ôß$uvøÍ?Â8G©ìÊLÒe¡ç_SÙËœ¤ÕÆku§²§³ª YåãuªœÙ”…M9È*@ô»G.!OhJè@#£u¶¢Ìµ˜Å™<ÊqÁd[ñŠ_Åyð]Ãx»…pä­:{¶;ñRóå#–C(kYPÈfƸJ=NUàÕ0@¦¡Èds,yØUÑ¡Q-¯xÓbYWeBöŽ(ÒƒÕJSQTyPˆge\•…Ð]Ué*ªêõÔWwYí(o¿t8Æu¦e7Wa«GõçRiï!ä*½N¸0&D*Ln³2-zÿëûö{<ÿ·yAî}¾Ú†u!õ~ êbvƒkPÖ×µ¥þ@–ä$ÝU¥îéÇÉVÛ”ÅÜvmæBȰêXî2ixTÇêe¢NÑx­"nõ¸»ƒe·Yqm×EÎÁz ¥Á륞:·òßÄIVzø¡ˆ†x„éUIä©{8^WÈ•fô›9V7§G.Tá+3†4‚Úçr'ήIôð ¤ØÔÇU«fqÛ7KSuÀt%Wzïæc{¥+Ë^yôs]ÈUÝ×Uy72I×A¹Äë=m-`SØn«°m¾.òÚ®¦ç㈨´³A)¬*e©>5Lk|¼ÏqÙjÛd«¸5z3®…fQƒÙ#e²VZ¸…üªYçz*ܵæÈ1’K¨³1ʧÞtvøâ7ÃkÿFÍ«äXW&ÆÚÍ™Ë17#®ElŠ­¢p¦U¤¥ÁtSM{k6MK{è}t‹}Ⲋéýù ¬HâBåê "³KHv‰Å–$|³ð{«¸;WÄ]Xªõ”MoG,Xæ\D²°49&@Xbœ ˜ß3ƒ?ÂÒû³mbÔ->ÜÅ#˜&ßcõµiéè—f–æ[}„ùafÊ¿.ɱºÓqÖò6vÒ¼È2ÚL¢›+ü&6@ ¿{·—æÙ‹ñÌåwl‘=[V¯‹dZb,`{JÈ7ÉAu$vZ2¤-)ˆgŸÃ¥òbf·Ä4°?âäv‰z5~Ÿ3¢Ó‰€>´öÉ2˜qô8E0C ‚4ˆË@ÃÄ$œÙhœî4ðNŠŽÖu0ŒÆ %ÊÜBqØî‘ÎJü ¡«RœÕ†Ëä¼<.dP'iPK«Á+5J5#Í GäWjÚâ ˜X㓵 @B§Æü€r¥åÞ‘Ú²º"óË>”§Pžë‡SeN@ûÒš€4v¹ DnŒRY½`M×´®‹’€‹@ƒ´ y¼ºj øîà#Yøà>»Úe´‘ÁƒºwÉ€Ë"pæ ¦p¹ ^Ô¢d\³£Gtù©†È¨Ñ륑äƒv9ˆëzÔÛÑih^©<‚•«¬z¬…MÔÑrž){ùAâË®,W7  ž3´M^ÑdMWÒd „(#O|œv*Èx˜‰N]¬©ÑA*wñ.‹—}ä»°¿ö•Jud/cç¡ÛÞŠÌlô û_Oð_\M¯ú%&Aï!i`’+k0eʪKE,¹S/–›¥Žoƒ\B!=L²ÜdPÞBY˜^‘Ed€êh©¬·q±Që¿ØlõëoŒÎ>0£… 
O½j†¡Â ðÝÛ™L¨}U…`ÉXþ~4Š3©9ΪC,Ý?K9µµŽWÊÀâ•hí hR%ýÉx—ÊD«x@9„re9LUÝEYêr ‹å9jGÁ]xžDïÛ¿©ê…ÑѧºKfQˆ"z#Hð½%U˜Ò¯Ö¾#Çh÷à«p4H[aÀLy¡E!¾8‹ÌCøDã£G5‡’z@Y=¦õRçÉZ1PÖ{Ñwš¾Ë”](«ÂñX‹ç1]Öu®dûÕešb¦ÿžjQÝ<¾[ê°4È%äò M qsB Bbpî¿~C0¤Ö´µ-¤¶…Ô¶ÚRÛBj[Hm ©m!DQc"jLEÏ*~F ½ç7©Ù5»¢fWÔÐ{ú½šé§±ƒñ]¤ÌìNm[Áp÷#R{Óˆ‰È¶HG¢ùÝþ¯íRÒÀØfqªŽoW´Š4ƒV2üšCÕÿ GLI¬áÍOóêð›.Cž¡9~cuhÌ·mm–TA¢zîÎoý9öê·óÄ.¾ÓwÌ»¶£vÚQ[ë˜qÕß¾ùvL[¹E5õtõÈ>ê!YŸÝ6#ƒlât[2lc–ŽêvO›ž1ÀÎý”¦é[mM².?È"ì6É#Ûë ‹Ì¨÷'kcšŠU0޲:€re‘h±ÒǹþÖ”µAß5`Ý”=(ûPžBy¡e Íï@šß4¿Ëð·; à©ôÒñú»j̶ùTáfðøXŒ¹•aÿ®:”} [=‹¡¬‘„ÒP’1ýÐî’Nv5úäw^úÿûi'½û§ŸÑ¬&ã{™qôy%Jݾ‰ƒƒ´½²e•Yy {œÁF¤áuPìÂDÈêý½M“â‰Çüײðfp >qe=nå*t-Ps2¨™òÊ·:TyñãeÊÙS{žu£*JTéÓ¸„Îcmò$Žwâ©ÇñƒîÔ¬¤I²ttê,‚ünÅÄuc€îÙMÅ@ºmÖ ªùCH¿lPˆà`?|kìY¬ij ÒÝ©,¸×¹L ˜¼t—'t4‘dÐMa[Ö_Te è€Áì-Í~2ZwÚ¬)æ,ÕÒ-ïT¥„ Þµ¨Oæê 2ÀS‹Ö¦õ ©‚Së9ÕæQB/j ‹–ôlIÏÎI¥dîî‡%UÏí÷dŠþ³ZWµ³rƒ¤ý¡TÌÜ€_­Wøý™´•§Mj€pÄU€¤”;YåK½ÒQWy™ë™Âÿì!¬‡æ¡˜Ð’j’!~‚.½»!”ºT™öhg ZPÞ¹ÎW†…Øáf}ysMr0. ¡ïÉ6Hwâ1ÜÒê¤rR­¦Éâ[Ïñ7¬ü²ÈIE0Ηk,‡PŽ ü eqR^å«îSòZNxÔ¹µ«#%ú¹„m9„r eá¶w…z¹ƒßîà·;U’ ”ÆÚr|(C’lå9üX¦°€ò-¼¤”`ƒÀE€­ÒX ð“Žxoà#üª‹r]ø‘ëá4†3øYïk˜S ØÛàag=ªó¦X$.iÑ—<ü’‡_òQÈy3üQÃ_`‹€æ R4eø…e˜¡ ±òôó•ÿˆrȨ2’[Ê© WRN…”S!åTH9•ã#À#ÀsåÌÌ,`{\lXåb¤*wŠ?Ú•:Ê"l‚Ô¸|€åýËûAYC¥¬aT%8©ÊJí \ÁÁ L`µUâ1eÝW§ë’+ºûµA! 
h³ÚœÞh &l€ƒB|ºˆb‡?ó°’ÁŽkYÜj÷!jÕ|2Å'³€¹ ••2I•nP‚ÉX©ÉhË”õ•ÆÙ^WwEí«~i¡lgë@…„)K ¶ƒ)”gP¾…誴ÀAà"Ðh ÉZÜœ¦Ù€€‹@â…~åKo*ÆÓ¸^šó©¯^éJ1å^é<ªeìÿUG±@ƾ;üÎ ôHåÖK­ÔÑà•{‘ûXv‚~‚åbÝâ3ú¹Òà[Ê÷øs„–ë¤ñL!õjÆÐ…²,SÖ©µÀE Ë€ùÍÍГ[.ÖíâW•+Z°€y"eöHv‚~mª(Ð(oûP¨A"€Óœyöc.¾ Òì#ãbÿ5ÚM Íaw½à´A!ªsJߛҦsDsz¶ /,¨–öϹ¥þÝÒ°ÜÜr ñ›Ø?×ñù„xàéM]×å7g„´ïËÂq¨e¢£«L3ò•±²œ`§¾ë¤\R Á—æCÔÏ]¡]ìÖ^A¨ŠÓUž‚F¨jY’I,æsü¶÷&ùsW!òoŠH–õzcCû0J/ÂÏžëcþ?󡑪ØÏñÉ (4Ð[èyÒ-»¥oÝ…zîÞ¶AõvC j:ئ$~}5ä<ën½±¼U%ýˆWjºc€ÿdlÕljP@‰ul¯|múÖ½ÅX\é3.‚\Ç?ܪsÄ$Ñ}]#·ó¾ãøH7ÐM 3}à&4azÔôM‰ÖîaáÔûÜ+ó¥gb&ÃXÛœš¤AÝ¢îãNÕ£J^¬« {,“% dGR¢f6UíjvÄ"¨ÊR£ò0‡3† ²YKúÊLhѪóêý+·zT$Š!È8XÒJth¥«{2ДÒûmhr¨{äPWn¼ÄV˜¡ž’nç‘æ5#]kNÏæ¸býÕ"fîUýg*~ŠÀ¦“”å²Úê>‰ë ÇWïSÝïzIýUÍ4GZ.,;+ºƒr×.-%åÈÍŽVÆc'æaVIµIö«–F5µ3 e™ºÒ„_Ž£PÂÔQE´l2#Ë61_HÙS C™Í–úòŸÕ­‰B #Æßp[Õá-‹@v¦¸´Ÿù´ ç¯Îá/H½kޛɈ%±¦¼3-±}{ûQ£T«¢†•Z,qx.Vîb5õq‚Z—¯Ë¥™×h–°·n/2ë×GëfJ6™ç”$ÖI¢ÑPpòÔ¶Ý2®æ©¯Žÿp+;áV ;h®ö°®(…xˆÌ UFLW•& u9^7s‰á°8¸IBM8³÷öD4X¨)x¼vßÃÏêX+ i82ýF±¶d—çA³ð ëqYæ4—3¤ßS±…ë1ð47Êe¶©jc,°:Ow4¬t–¹ÿûÿù¿\t/ø.!ß»nφûøôL6[Ùç#=P¸ÿš¬!C§š$Œ |Í™áªñn°Ëäû½çj›Ôšj¢•¤Ä®pªû{|dkô%ø·Y¯3äî‚›Eˉ†ñVPÐú€žKÛsrõx~mªVÕ]Z ‡€ȵР±"½‡O‡NêeèŠsÖ035-¥€RõÿŒË9¶Î'çG.²éT9 ©8­‘ñõ jÃ@¨÷âÁÝ?Ó+Ž Ýà [öÔLŒY•Ð_ËŸÌÔk”hŠIÝn ø­žÄº+I?ó=²toîh¤+µ–œ€Þ--0µ=-'“vìõÉ {©[Rvå3Sài¹¥)£é¼ÕmüÕrŠk`JZ¹· ž©‚Vi&P±¬Å»Rç›Êh—f5QÛhYÌȲe§‘,74ìÄÙB“œ¼­YwÔù­²5çÓñ%¹Ê|\§$YÉ»:Uþšå·H'‘)«•n|Ûoƒé–‘@_->Πš´›,«ïº ó$ÎêûŒ„’š½¹lìÉ?%ÆI3…†ÓYv Ú¥™mL³¸0TCÑÌ“[š/r‚Oy 9Ä'™ú¤“‰3‘ùãò³)Mn[ž’#a:Cb™ê­tM`Eá2#ä%-{kÖ Nˈ›9åFóŽ‘8ïoÐÒœ² ]Л¤£NU—‚L/Í ÐÖÃŒ?Ífšš_‘èiö‰D—ßÛ‹4=Ì®uúÌë-êB°*å‹lÄ[/÷BTuìšv¦>¬R9¾ÿ­‰ƒ)X)wÑõÒØ@SÚëòeIÖ qKѧ®ÿjAÍ¢þÖØ(ºW}Ý«D„Âéø6˜†æhbxãŒy>qÒ¹˜>OòL>oÃ< =hj#?ôðèÙ ¯¿ÞÙ­ÌÖ>6¶ÀÚWà·&¼‚«sMgÝöïMýÿ=ùˆÿe×ñ·}µ´û7½™a­N!#óôUõÚèa1nîøšìÞÿX¸Ú~º‚±}®f}&…Óú¹ðÚf]óyîC^4ÍõÓìÏøçž.åpDص¯˜l"±Š¿'m}úõìÖÁ¾.pz2ÏÕðôx•Ï4Ž¢ù–«s÷4—ªwÙø4ˆÿ<›H÷ç-·ßºÅ¦øÊ__¥Ù‚Os‚á`†¦e¡Ø(•¯|=nÓ©[%ègyMžzŒ½6ÓíPÃÍ?ú5\w}þ½GLiªõ†Ùi Mž» o5˜&Êô³Ôê‡>x­eªÜ»‰±B¢m¦Ò¥åØâ† E>üÀ›s“œY«SsâÓbÚz.ò}ßÉÛ[‘͔љ!=Äç}ôôŸÑ¡“$o<"Ò}ü£ç9ž§(žƒðÏhoŪ¿…~Fœv¿ÈÙ>²¯‡Ùµ‘N>êŽ8¢H!ˆ‚x|Ö ž!lGÐéa;ûdó«3¡{sŸw¼Ñ÷¯wÛuÞ›ê/o&ÿ¢ícÚüÅ-Þsökß¾C‹›²ûñ9}vo)p-ôf+æà»¿h ¶Ëp‡¨µù³7”ÉG@ë¹ñò‹¶QpS÷:ŽíoÐŽï,üêöëïõ¢[~ïáxÙ¿ÿ{Fˆ#u¸Ê÷ÓM|Žk{oÿÒÀ¾Ù½oÚË®é½=ÛÇ™¼ 
òÌLiäOvþîçë>Þ½[ãeO.ùn÷crQgí~YõòÏþzgë¯në¾%nÕ ¸R¡ tO ½|žì¸üÃ|•û|eßdo$9÷=íð2þB×ážcÑ8¼è!Ü×AGÊäÔçåHÄÀ‹çÒ~»½ƒºíhY¸h{¢éx¶µHúêL¬¿ ¸'1H<Sq`Â7ÿ"ç´ëI$8}ÔòÍ d1Å@Y d1ÅG“Å{Î('Àù"ÓÛ:»Üuæ‡b£LEÿŒk·Ã„5aý¢Ÿsè›O?ÃqN¡ô&Gè‘H°3ǧUhå0 Z¥cúx žŸÓñ]PÙT|4¨8ƒŠó'ª8=t·>ÒS”ŽCõƒÒqºÒ1hŸJËxÁ¥#2¢2È—Ï%_~›¡\‘TñÀ þ8Cc·9ô‰ùDŽ{”§>ç¡Èõ˜«=ã"¯06h­ð³ô{xe&ñ%™Ä'p­Á­ÆÝ>­¿”´ÄÿAìd`ûØÇÀ>.Â>NÝXÉÀJÛåï±]†ÃöʶWþTyò #HÞ+d؆Ù5là Û0ǵ’Ϲ Ó?–öíqhoOBÓ $ JÒ JÒŸ $u+%=õ–êñ¶ãÙƒê1¨ƒêññGªQ7d><dþ ó™xÔ!ó!?ùAÈ.ÿBGj¾AŠRü¿)þ×JñΜnƒH§® "}éŸM¤OòòFöþ Jý·«N%?3Ñü™S.!‚TëšjX…Ž‚/ “U¾Ôk€,Hñ7.þFœ E*})Š@›iAŒ@¶%¶I¬2FaË!”åýrïìà]IùÞy€wpxôfªq½Ò˜r/%08nD†À‘[G÷q‰Cã)¸ ñk±ƒO<¬ØÇ×\l½Ž—¡Þ£Ø´9(4Ú¸’ÞBF!ÊêPŽ C9)¤Ã)CA©vqèn`íwÛý$Š]ÿ ÞGQ,fq/B-‹.iÊ›9”Å8/£ûµÅ~Uø®_‹J±$â]´Ñ±4•80õñ2‘\ê£åc~xçÁ¼#ùdìõŠÐ ©Ì¢%ôbâ"긱±A+z¶âg¡ÃŽV‘ä”^eÁ¿ªSlS¥–M‘eòóq¶ÖPSѸÍf¾Ek[k¹ÐÙ鼉c\2!_å"JÃÖªjU[H \ÁÁ Á@ˆU‡XuˆU‡XuˆU‡XuHU/ÈŽVóÑxa"lB„Mˆ° 6!Â&ÄX[ŒµÅX[ŒµÅX[Œ}ˆ±1ö!Ö»Pär á‡lŸ¬S#ÿdFep`ûkdBj¹yÀCôTŒ« ܹk3ÿ Ͻ+j_W½…Ó½i :t wÊÙr¦PžAù~ slƒÀEpœQ-WNMê(Qš7 $à"Пë;u•:7òtÍŒîÂP‹[úsU áŠüÖ2dPJô•ê{»®ËŠîÔ+ —ýŒ’L¸zïϸ4üíð…2Okaë5:V¡$Ö;ðÂZ¬¦ITÇÄ7q-Ð}}nš$ÙvGKKóžußA4.áZtÛPlƒØ;ûEâÃg;n'*ª5²3Í ûæ¢æ[>MÏ ¶¶Ï5FWÑ.,•$ïÁh L¹vè:¨‹Ù½³Â®È—ͼ¹ÌNèm Ç.<ú»0‰4À/@Ú×£—¯CÚ3]ñ·KÑqέ‹uNaž4±ùÉ¢=Š[ÍT±hDXc)70ÙÏ{‹ÛÛlïßLçsøÇÌ÷noÅ%Õð=ðsôòàëæÓâ¼Þ+6!Ö=õfB£ÏÜQýNÖ1_Õ%IIì¥~ e”¾·ésGnSû‚zÛ%¤ø ¹£ä¾˜´î’׸äïâÂò}åãØûÒ¿¾2︔cÁƲìTÙuЍ:.›Ú¢ˆ„Œ N1Ñ‹“uÙo™0A#e=e¼ãut€º»€ íÑþ:ëª:^ÕqÂ;‘Ø~}õëĹ §–µÂÞ²ÎjZ¿ê§›örnv*¡ŸW×üKÔÈAuü­ªã;é„¿G ü5¿ÞŠ©vmÅíÕìmœøâ¾_…¯þ<‘ùð2âX}¸ÔQÞAËž| G×.\_}A|•5ðY({ ‹,Þ,†}&кþ‘'­æLEfÐAÎÖAÞKöV?J‹TŽSG×LÓôöÏ Làý˜À@C CCÇ&þKÎÙÇqtòôw:ÄÏs4ƒÓݦݮRô¾¯·ó"MvUöuNötG’Ë·-Ø}²7›üÐ Ûòµ’ õ˜oô™ÃóShyZOùCœ?ÅöCÔþãáOŽh"Åÿkˆ?‘‰ïT€ç÷aFÝb¡#EÄ9±ÎDHóåÖÅ#_&ùbŽ)ÊøÄ¸â÷ #~=nø$ùvD8tÇ/Ъƽï^ª­« >übJY› †Yýg•Ô?ÔÆú©`ƒáô{gó«¿ô|ž½‡i:6Ma‰¶Kgd[ûx¦H×Nl¨¿À†j…Jv»D{˜:`ÐB GìTùÛ„ÏúôQ%ù¸žûܳÓöÏ‹Oþ­.ŸN/ÎiŽ›wrРç}*`aw›Ÿl=¢!Ø6ºÐÎ"Cç™IAãÒ£7}:Ðjæ»Ìç§°c~ŽÌÃe†þh[_ $hê9û5ü"96‰:o4U:?í¾éÐÃh·‡±çaê»?Óª›6ÈÞ¿Nö’÷±‡´íòù‘´í+aHÔ×Dæ(ÐüDA¨.Å ªs=‘1â2ÐýÊ…„¤ÃÁF%³)Ëð™.µ,Þi]yZèžRƒ`±îTù¶*ƒTVöÇåJê²åÊâÿ Ê8¢¢ŠWIY¬óH¿_óÜ#—¢9ä¢7©N—êt¹NYΣ°’DÎÙ]}j³põI¤™pï(È”Yà"Pád‡4ÎWQPüÒž4~TüRé܃vµ({TQ,),MY4[†=­)”g JD’]ÔêtXÖ3žI^$ò`™i{ ?» n§jî?—I¨¡Q)ÛÞÛ 
ØÈ¢E•Èñè!|Šã8ÝP¡œBù‡Ô³”“V9zQïk€D–&©ÞXx7Fö\£áÊÆ†)Š¥ýÏÒ“ÒTJs)-¤t+5 ŸiÔëb^—ާëÏYòË()åb¯Ñ2 ´j^\®¥ƒãe–ë_J/6NÉóßi6HSÆÿk¬6x¼Ët* ·|@ )úŸ4“¤…Át‰'à"ð„fød–¸Å'ü&]¨6JmWEb¦DF°Ð%¦(jϸP9õÔ;%·b½ñ¢€PH(ETÒï„¶¯‹,pSI{=6°†ò´Ê”K‘ªƒÝYª¿N6ÜteE!J:¹¯Š¢p#F¡4Ô R¦ó òÓ)?¶žF c†¢æe$«ÊTö=Šr@ã, ¥R¡AØÔ†]~ªLÑÀ(@@Ob ‚Á Á¿À «¦QMx:2;5.KXÒ6›¾<¨Óµ¨_E­zgQjíY Ñ$ö‰<*ƒP&=u¶¬FrOˆšbzTËTV( °@ުധ)ÚeËJÅÕJµ*‹<•e-ZÈ!$zŠAŽ& W»ÊêÔ2ªƒØî£ªwf¥}U¯5¼§ªDæ›r(îìQõ(vF>¢ÀjY’ÿÔ¢ÊÔËB®¯ ïr¸ÄpÌ’eý(«¹V¥jbÊJõªØÉH5=®3¡„«z[…r˸.ÔW—ED`&ý¨eUÖwa ìë^MF[¾t‰qŸ³¶ D pÿSèg¢Šâc¢öøsÖÓ8+eKÍŠL™3ùsÕe‚îv[­«ÚYéE$O8nᨅ“^·pÚÂß[xÕÂ$4TÏ1vå/('PV.'ºT£xá·ªÛÒZŽZónãÒðÌ É…!ûuƒ :µúT@/ £@U‡ôâÆîÓg»gÓøg¬÷N›ZªªVà+±‘ §®ª’Âé‚=$µPWNBnœmy/ÖÞÕj›º7¡VR¤‰*ц|¥%Û¤¬EO‹”„š&’Y vžÞ¦m•½,+Y©cÔö*âø8ÆÍŒ8=2 ñ¾¨|±#·©žØnK]禺@XÔd„…~)Þ“Žv:vÎd³íײØ2á¨c'|ÐÞZ2Æø#CD €s0Ëò! »l—Ðy½ ¤Zçª-Ú!ÇN¹¢.ŽTwÚ“€eY,=z$„XEb›› ˆ ÆN”eumªD6–@)ý6³,ÃhLm¯«¥+ö.Oz»¡¦î cqÃMTÖ˜'º…T„¡³.tpÊB4Ÿ¬,íp¾Õ8¡nd|U™m>åò¤ÜÀØ«{Äòý¹¶Ö™ÓöŸ˜¥ß! KúâAøµ|¼O䣱©Z=÷Ö”Ps)Óý[3%¬ò‰‚0KôbzÛR¹®Í¶Áƒ>¨N·+ÁÅcë›ÃW…’Jl´§ïàð‹él×Q@2ÄUñ,øÃrÍ Mù´4^ÅÌ"Ùå Úˆÿò.spEw¹ŸÙ¡Ûv¼’´Û¼‘'zÁ=xÔ}wÔÖÃsöáβKº¾ÐÓôa> NOO§w§Ó¡Óå¨éïŒQŸ›º\zúJô"¼Ó}"ê9ËùñV‡Çgqr3ã Oø`©wÚã]f÷qÛš¬é#&3YÉ]†qËâí´]ÉZeûl>²òЮŒL¬cVÚGd¡uÓ2aÀJ8n°~:=*î-ÝüTíV®ïÖ ”}²—~ˆ*ÙYJùëb­%”.!_H «îb¯ÀÖºØBk1ÃJÃõÒIÕäH@ºy6¿£ _„µ œ˜Y Î`×lFPCP®Ò ÿèÞI¤T²LcZÿ¨å}ío¼–4EL=rUdÐþªØóÌ%³PÍ Rbhк®Tˆƒˆµeù*IL#ÙTÿ×åú òúQ`3ƒ‰l‚¬Ìù™ÃÐe8Ã~ˆÝ÷ôK§……mæImÓø.§iããl5ZLp;yYëSí¦ˆs27Ç®ýâŒ?â3„–¾,:Ø—fuÏ•P©a"É’»ÏïI»°ŠÄ 3k?´€ÿûP–oö ϸÅuC…z.°7ǰ(Ü(ÓIEþš¢ªAþð’waÜRÑ}QBµÉ'Ãx£†=’•–zÝ Fç¨d«-ÒÐ|SÔ¯ ¬JðGà!IÌêЗ-Õ=“¬PlœÄ9–](˧M®Û6àð‰n÷ƒ@Y°Ú’Ö²W‰"ÂplŠÒ±t%£÷Üø¶û ù¼\9è‰v°Ðku"%Õ»ºÒá4/ ÓjæN·Ž MHw·Y®3°ÛŽJ¹Q¯BYè¿õÐí**eÖU 5HªZAWµ34 D .á~n†Éf]©ŸÃÈtøPÌ·Bô¦(+.¯%ÞÉåc£ÝƒD´˜¢Œº"à"PǾvDÍ;¢Ê…<ê³Ø²¡ÞmÜwØóu°Ô :M nÊX­2³eªAWU a‰Q”ÉÞ©!JYɶ,?¨! 
Ò”K|àázKTËV B©IÄaq€ÑmÛÕ1[ à]¿SLÖq^Àv(!ݶ¬]Žä…CmÖæÅ?Sí+­Ñ#òí]‰,L,Óè±Êtù’O²m6èƒån£5XN"ŒYKrÔ8]»R´2Këæwe9‘§ÜcUfÚõšˆ½Ã[s±×.÷ôU[׺¤4l ‡TŽzÚ„3ØMüQ.55­D=^†¡J ãjÜ9J Ù|køyKs§4"#ó­„3‚ñÝmn›ñŽ4ôÐN¥ªíƒ ^QQÏ%œ·62¿~*þv~þ%öÆÚÌ©ŠFO—ëd½:,áãÎ%Z3Dˆ}&U1TëC£ímÜ'>}«îÔ¶g[eï²;Ö±!ÖµAõÊöÓ%6œp—érû($ý·9ÎÚÙø¸}Ø]èØ /¾zîÉ[ßí¢¿¤'¾§÷½ÓÇþÆ€ÁìPðW“SºÓÁÜéG~媷—¼¯}­=§þÈþÊÀ®võáìê |hà6¿ÛœTq2ƒN©WÏ…Büáù±GV…:ÀöP÷9ˆ#[â´ ®Y(*ØøÖÍØyFÄ^܉=®·öU[Û¤­­Êק蔉é9Ð=º:¢ï0^4DGFåÅ‘èè~»Õ0÷'tàÍ~½¡Ô:W„§‡èÀÜÁS8|Ôæy‚v>3ÓNªúìäÊ1é(<•Eà*ĵtîÊh“v‡ê ˆôD3÷l`€-¾ÀÙ)o‘—0sh¥(è\§¨<ÂRyÖÌ7KžãŒ6çiv°G̘pŒk§àµÂ¿Ð²n-ÅW| ¯ºZWáDAVj9—ÃäQVP^¹T %-«ÖZŒ¤ÖÇ&6Ç7ƒno`½óòšñºØ Ë-‹J&®,2=“{$ùýW¹¹+±éntÕ7›²&º²‡›‘ö‰-gðÀ‡2 a6ƒò~¬9 X@ù^Ri`ƒÀE€­ÒÃFà'e×¹™N9 Ú<¯ºø!eúxø#×G0C€Ÿõn°†9Õ°@€mð° vÖó¡:oŠÕIvóæ}ÉÃ/yø%ßÁÍðG-Bš¨° 3Qá`UâWµ3}‰å±¿ÁäRÍ¡ RAë+¤ )¤B ©B*ÇG0…v98œ•3C0G°@€íq±=HH•‹M@Ò©Ü)ü¨ ZéTH:•’zÇ»n9êw[ÃU„tb(¡õG´ïAm—½. **5èöކú±g\$Ð0ùúŒêר »À5pò=¯øÖ0ÇB¦þ[à ;^ÅiªyHšvt9“^beX¡áÆv!i*ˆ§/Ͱ—æK·ð¶ãÊÖ÷†8ù׆êVÔþu×ÅçN!¤ùô{éçuC·²%ùôXÈzß/Õ2šêfT}ó¾()ûå«9mŸ^Rjß”ÐH‡b†õôüL³Â°ï† ÃìÏ¡.32·Xµ+óŸ§òQ’L…þç©£kÝT~‹ßö™ì8‚T1sLß¾øŸý$8Õfþ1yTž{0ˆF ê뇾8íiñyÖçS˜J5|h‘×¢+éÂQº8 ‹„â´¨ØL¤¬Õü97É™µú0÷ñ‹ÓbÚz>ÃUëbP`ý.vÐXuÐXÿõmJêïRKMô¿M´õþ ‰šèGk¢/kœÝúá»ÝÛ9\Ìî¿­ã÷Ó俦¾>¨åƒZ~ƒZ®à©å¤‹wkÖ¬û²{Dm}£f9(“ƒ29(“V&) G]•_DéRÊŽ(WƒÒ2(-0(- ¥E8ô ´ü× ´ J 6ñ#•Þ'íönuî‚£ëô‡ÁéN¯a*>ÍTtG(¼üÿV„n+:”bÊ8*ŒÂ³ÚA¥Ýoö¯¤Õ2>üDzÃ[Ä7¢ìQÄ‘f³ÀEà!˜"˜!˜#X Ð[vžŸ’ޏ>ÃiÇ蛎äš@ñ F c†bEY¨¡î R¢ˆô8d/¼Šê"HâBDíxTzï\Û‘p”BJd£2ËÆng‚Àãøô÷ébR?H×7ZçÒÔu¥»¶}ÓëE°sc€^¨gÀw‰é1Ê’Ãׯ¶Am¤Eð 4]DdC^ntU‹ðˆî´xo¾ûQ½å&ÐÏ…„>øÇ΃Ëö0\éÞ ih•ª4Mªîí©êÍZÚ$aQfF‘ð’{Èê1®k6µM·"ß­£å\ÄÒh›i¶Šä‚\ÿ€{©¾5•ëmôξr­2T¡æ ¿ Ò"¨q©ude§äuWUé%W–L5ùœý˜¼gú±–\ ÍÔr–:\¿ô<õã&Dð{ÍgÖ Z¯–—‹|ÚyK΃¯ùîûfÏzS1æ¾6ðï:]’äù™Ô™»õwqÐZÑ'“ ÒQB¯<ú4›8m4´(HLôo@kq;C®uDG=Æ,Úm¼”¢OöÓ;QwB™ <›™ê ‘øß½†¨Þµ¨¨•­Z -ˆô$ê«¢U†mà q&‡:‹x¥Û„Z†t.±hë¶ìAÙ‡ß>hÎFÆjò™²ÔƒIjð„f,‰Œ^aKÏür y„|B’ááÅ\9ãe)ÎË3ªsOë°zd-px(q[uH¯Ÿ%q雇`Ž€n¸£ƒÀ ‚÷É]Ðlhˆ qBlu(¬5 ¶š‚)8âÀå°h™îk®.˜âh\²\^NwÔºÓHsêLî1ÿ½d3ä@[Ãáõ¥†0T®Y$ÃÕ?{R3Jê¯;–JiÿAiŒ«XFtŠ?ÂÌ©&TÀ×v0Ôþ¹Ê‹ÈQË;W±j  5õMˆÃ2ØêG Y¨H€éƒ);:BíÖt|“t vgEàG°ÀCð’çæŸTÄUZHzÅ'¯“ S’íîåç“8k¯N"ñdÅn%ý¥A.¾HíchUÅÎ<Ô JûpJÍàL ±Eª 
ÝúèËù¾L3XW÷òÇ9±³Ÿ9ÁŸ]õñ“\Ü•£ÂÌÞÌ"’ËîN|ÙÂ5{2ÿíP†âx鴱ɰ9kDNXÔýÆR%x aFšÒû6´—™r?8]ùÝÿâ¾UMP`*R^UÝÙ·Õ™ûØžù%ݽ(ªiŽ(|=s?ᙧγ:f/áŒÿÁëò!¥NŽW©ˆä_¢ës÷¹TäÞéŸf?Äu|—ÑPÞ“‡­>_Gž>ÈüwĪÂÚÓÁò¾¿âBϯîëßkœ/Œ,1›“ô‹ø¶KÌÓˆ+RüŒfDˆÙà‡ÿBú}Kb’R#iâœbµKЬAÜ¿çÄG?K.oúó7M=YÝæ“¦Ê@™]»¤²Ü #º; Eû“m÷hWž›Ý¡tgÓJÃõ¹¾ùŸÉ,Fv‡Œ®G$‡b‰B-.ã×Í .;éË>‹¿£0UšÎ çˆoË=&BNµ »Ý·…ÖûÙ\¡eÅe.åa©éxó½G%¶ ߯ºêT˜:_Ï“%s¥ ax‘¸ )mÀ¥c,%†ó¢:©é›ŠõŽ˜}>.x?säá µ•Æ|ÿXy Ź@ºÛxgŸÇ§ «rãB¶‚o¯Îüÿå…ò¥‡Ì¡{…ägS–¥ó&´ÀBÆcK¨³Âý÷JQe–É/”ԢŸ +[)öæ¶:U–ìØ³Có0ñ‘¤¥0ØKñ{i…Ñ-0Ç\÷÷ú>ÂT?M·’G³u¬‡C£;ªcn㈱ ¶®}úÿoR¸x‚%ø-§>á‰Q:(ߎ¨m©-ô¤;ع6<¦érì΀ ;R§XYì·²ÆÞB`&,¸¦ƒÉyœŒ þÿ´ÛsÉ@Zx—½Ï<ÑqmæÍžÀX¸ö÷G0ˆr¬ž 1¥ßó^ëñؤëúJþ4_1ŒÊ¾<€òb§ïWe%ÀD|/ë%j28­¡8„oËhR-†)^yFÚ &30mëZ;~&ýVNÒÌ8ÚØñ“„ìh¯ž°u¾o__¸ö†J6™ e Ÿ±¬’ôeó¾ë)>]bă¦¢Tð›ÊªŽõqcàfÄT[µ±DËlCí-ݳ¢ÿ/©-îJþÿ0Æ*g€ŽÛ!ÆŽœ i0 y¸:ПŒk¾ŸÇKO^¬i^íIM›5yÍÈ<ûsHð”Ï¢ ±¢±d‚nä$šÛ½éN5Žƒ²€ÓcY@ó¨À¬–´–>lÄ |ÍjSÁòTûÒh¡BK…à„^ Ø=ׇ³ ¾4Rm„ÅŽ›ÚÄŽ^ž}çܤBT¦Ÿ~ñ‰—òiÈ)Š¢7öá?þóß6× Ç[]IUÅaÏ ò…¸éŸêØ.qðdÌ‘FYÖ¯NÀ{jÜùÁ¾)…½÷÷©x£«†oo¨}÷ºÛþßüxa¡Ýç´Pw(T/üq'Gë ÄÈ’œ=ÏÌ+aVþDúÒõá­¬ˆø ´¥nÎ(X¼|>î œÕR.„'ÍK!3º>ÀY…„RÅíÔ²i(ÃIÖpq F¹”Õ<^Òh¥-6Sñ»B+îR¨é=uwQEûñB¹»Ÿü'Ç—¥ãËñ˜²±èãñ°lÛgüé¯þ.R›°©DÝ)›ë ##÷qFrf¬Úᯙšñ˜_9Ê—F5°9Tì^Wà ©O=bîžýMÒ×°.xV–fbb”•³Âßnįô†XxŠÐò³|}u‡\²?Á;ᣟoïçû<,.|Á_ÐßBÌÏô¾p¢xÝ&¥xß ÓPÈdØF/hú¦-|{Em!-|­”óÆ–Ú²üýBwýËDü}å½8î»;ó‘<(¿kjÀ¨ŸjÀþ…HŸV{ªÊ0²2trÊú@W9Í•štîíO«×f/˜ ä1“55 “4ß™‘7zêLöëpÅByeÚÞÞ/Jãp W‚©>ºµƒ—(ù«ûñ•ÝÇý³ágñ¡oß§6¥{ü›6å0ðïk÷YÊ=Ò†õÑjq5r“^Zó¦Å´ßxíðyß+GD~4¿wИÚ<†%’]¬vê/@=\7,§ûûÈ{cÊÙgä-ÄZì¡K+mþ€×¶GÙRŽíCáÀêŠG²FÿŒ8óKO›²£=ýˆß2YÀ¤H%ëˆû÷æò½1ÌYŒ.²ÆR…)iÃ`ýúÃÿ¡}(åø¾ds¼9^»•×ÖM ?àLrŽ,Lçø¬º)Õ/³¿×?Òw²Ñ`Þ™KtZ=ÐA̶…y\A¿9o R ¹_”¯?†ºiÞ§œiØ…á²™•ýU†‡o16†kÍÈæC¦­…éÚŒ¢Â; Ÿ²IMÙ´­Ê‰l²ÎØ>åι~j6W¥&qFzž´GRÎC»·zñ1NëÒç]D—(F~ ä»ßî0‘XJä”ä¹AäÈhî¯òhòšìÛ¬þƳÿ%¨õ¤oÑø” W¡Ñ%½¥LnŸåySúø…¿ µßÄZ|ÄlÓã.°²1Â$:±ü&æc²»¤_œUàD+Ê>ö.¸øqŠ0©/­l"öBò¥RîSÞ¯JMmkÐA&pA†#ð Û‚v\"#Á¨ÌÂŽôcoø ~RLWF;ú×lbµiGä}žª-øÞIZž·ZGÙÚw¥Ð”` K¤:cÇŽ6áÎ}º§ÉCt~ ßqÖMìBÞCß¿%(‹ÆRìuíЉ5òN2-q‚»kȰ›•r$yµãRÿF©$·£Ç ‡p®3…ªzÌ‹ ƒ7v@žÚW‘ @/0¥A÷¯ÙêÞê.Y}uKÏÏùe¿@´$ìz‰zýJå==R‡w†K íÐ*àV"›Ÿ߸c°‡‘îÿåã\Rý×C²N5nŒý|›¶†rè31;, œ(ÌÈ °“´E×3~ü‡Dfw`ÝÝã£L½lË`£î›b™Š+ߢ1 yªb2¹ùõÂdN»Ú `Å`M âßT|[¥nÛhø7=,¸Ê!BB.aðTy 
ÿ`õš½úî %ägŸÂuêÚцê‹Þ‡!µWÔI^‰#A~ÔåŠø¾ý¬tûVNbš:Ê1éÚ!µee°ný)Ù…20¶„› 91–Ø–7þÿåveÿI¾bMŠ÷®]Q[¾ä*g±fW¶hWü2Š“žcÿäŸ7k…Õ˜õ;¨¥v=äæ¶A¡'L×Ìn#,°ë8JNÜòÆ·òu~Ü£w^øCAÄ ÕD˜ÏŸåìŸ+Æ'wü­6ií³[RÝÃ|ßžk¯¨-LÅÅÌ÷3ó¢ñH^œ¨rSI ›ø¹¶Ë\ÃbOù4À’kÑ!íxÝIH„ª'wD‘B2I‰w;× @ÄóÑ‹‹ÓüÑTØ3®sÐMèoGñhl¬xhBÛѧ«A\óÅo†gÊ¢F1ÌÕO7úÖ•†KõM¡¾B÷±ª>VÕÏ^U~Y@ýäE£Q¨ÐJ¡5£Hý.R¿ÃФO(ß3³lq‘a8fÞFp#-Ó‰æ˜ú7dCl`Œ<£^ZËK~¯?/‚Â~CžS¸æRÛçHN*¸:TÆbê+´ÒH¶# µÞä¥ù#®Œ†‘‚{}U¸¡O„¹íG1cj,÷/A|ÇËíèq­Ìü ¯GX\V‹´–„C¾ÑÌtQ‹²UsÝÌõwç®My÷ÐŽÏ ËyPˆø6TíøŒß«ZKTÁ¾zŽïQ0%f|[^wm¥f_Ð*›q½·%YÈ*cíAÑC êU­¥qÝÏJ qqqqdŸO”AÈžê%æÒµ…Ï¥Dß\ô¸O„n÷TAŵå~2eiù¤ú;(;ü`ŸÚÿ­ÜÛåíír>¬Gÿ –RhçcŸìó}þÏ¿Ï?ØŸa_ËþÝg†ZH®¢)›ï(ËÓ{8w}Æ„ :°}"ñéA˜^p¨n¿µú²ÝžŒ°hø†SKH¥­‘v~Yý˜ÛÑcoe_>"½USo hÁý49*w7êÒ¸‹kýìxô…áè]«QWG‹G÷ÃWócfþ²3[T~å4ú®-n¢NÃ"ŠwTšÎ.¿jGÔ>Mx¥ÀÑuªü¾VLí„ÚRÑÀÆá©QïA³]3cE5 Žéuqð7Ö÷Zˆ’.LÖ—7:vgÔ•\*P1bkB4·hÞÓÍodv˜Aé*d5€[f].°(Ì”|ê’¶)jÎe÷'$–½d –ÃéL‘eyõ± eê ¾ê«á誄LWA}wphÓ|˜¯zÎm9¡}›çP]—Oº’Y[x€‰3oßaä>½ÔF—%©;¶—k˜L—-™ªuï;Àî‘^<(GOl¢ÿÜ:N«,Ǿ šRÇAUÖ"úAD?ˆèý ¢ï%¢-TEª¾í•¥ àq¶&%ÍdUÈÈpÿƒJDÎ[ûØJ æó5"/Ô‚Ôõð¡GÜ©+[þrXül/½qmxSqÁ>×–¥d©¤¹¥‚È—¨>vàšò|®Fï$ªærµ¾Q[ü–m‹Ún­¸] å¹0  a^¹¢ ¯ÕÍmYWñž4#¶ïerR”¤ ÝóÜ–ð(+JÊþšÁkŠÕ†³*'â?š âÀfÍ“T£ºËºT6êÉ=JõÄÙW9|~ Jý.Ùiî6”ãj̪®uRRd¯wÄ´ Šy°fM“8Õ¢ÂÜЙnò ?þJ .¨W‹ö+ÔÒÐÈHº®B¸>WEáîÈýJÝiP“pèÆ¦¤;é?ÈÞÙû {dï¿Ùó‚èíµü·Kl·øëq€gZÛIä 62¡Aö,9æ|;¢¶Ü“GDsm¢M„ebŽb6–eÑ‘ˆù¹’kh¨$€CaW⸨°5í,%b.š„ûÊòMÓ®ˆ-ÿ_¦©KPÃõOp¥œ3ïó9»<¨ÛX|øž5Ë{ûî®ht@ÀHùXnËí.7¥€²·(½á†rÓ¼cpÏ@”&nÍ2V(RÈàgÌfÆÄfrɘ›ê™~ã@Ä@V²ùÒ 0ç—šÂý¥Pý,Œôµ•B2ÏEƼ‡³v,žONP2:o‘¨Õâ-¾R›ïYS[@\E^ zãm ð \)ÇŠ~·¾I«@È@¾ygë 2ƒCât:+‘Á°|þvzGu°¢D®÷Tdõ*}bÐ}iÚœÛ2­W»÷.0†AÄ€o£PEÀ@Â?8hŸ1,ƒ=8$ô ƒôbÐ…¶·7®†µM9Nhée%¹Õ·ÍãVݧ¨¤üÆM ¼™?ˆÑ1ú FÄè¯@Œ&5.®#ÐË푾<é4ÜŽ,U3»=ɧÔïµRî© –!9á/üf—Y¼Þ%ÿƒ›8^ùà&>¸‰n⃛øà&øÎn‚øÁM¼…›øUJÓ?]a9{EÑOïRB:Tí@¹)+Çs»“’MQdIi/3CiÖ®ö)ÿBmá|rèï®T=ze# þz £;\¹Uh£Ð–~†J4|ÁIø g™k¯²Þ‘µ‹—4гÓàš2\³ä[Kß–¨O¸F±è Ús?$!Ñ#{·¯­Ì]Ð:;?æíü±?¶ó?ÍvÎ’Sg¦õ K¾<I7ú3ÊìÐ"5ƒ Á€ŒÛ¥¼ ArÓ9FÂP>徎ҿr#P̆ŠÒ˜· ‚8¦É!'2ø{Bú?¼[{8(Ú˜ª»Pr.SH=Dà×<>t" @]Ù2Š’›p´Øäð«ÌIJ•U郎6â˜|9¬èŽÁ»bŒÂÝR¡P¡•B[…¤ÎXÛtÛtGGj¨´&4±À>ùžòçN.¸—7lF?üëñÂ&äÿîß½õëÿœí5Ú-“÷:áÊ¥•ðÃø3gŒ)ȯ§ÿD´áM»ú{[prì§ÆîçÆRÆPfS°CLÁ:œx%¯™ˆdð~µÐz¸¶hS‹‹‡ƒ›Ö4Y" w–S>ñ©uL$…¢„ 
µÁB”KŠ2v@œè‡K+Èb÷|Ç"~m]®äîÿÐè–þÿܶ¸g+¼»õ Ë»–>H&¼¨›F–ÿ€¨ÚVݕߧÛSù®|[hÈ=+1¥ÌÝq`QNH+î,¨ñØ7†’Lh&ür­Ëç<"f)ƒÖÂÙ_£íhëpdœi¬t;è¬;òmâü[€Ÿ¢&?"Ù:ݳޟR¼O §(èŸq\¢$”ÙóAQ õœ —Ï_s18ùl ›GfÕ¦Âm&¼E÷…êQ~··¢¨÷KЀFÈÊ>q> }EäÇQYïø&hù÷% |Ì›f²ðaK¨‹FÝů”ÁW XÞŸèñ¨¡ÿ½ÜÛÕÃòîêwÓÆÐnN’äŸJþ.óA×lósûKóÊØØ—Ø¿éå:¹øÞ³Ü~hUý•VÒÌôb=l:òÀô d°b°f°a°epG âGGüèˆñ£#~tÄŽÔ£ï+3¼ô–.Yî‚å.Xî‚å.Xî‚å.Äü´˜ŸóÓb~ZÌO‹ùbþ†˜¿!†L4 ¥B¡Büâ„ûÓŽ¯û Ccßlä7‡ÈlDX‡ÕÆ5-=)UŸÍ"ÅS}áï–Zªó6ý‡j†Ô’ëØÿü÷ÿçßÿãýûyøûñ#ȆÇÂtmm3 u©¥HY_ÁÄ9€±H±ÅùƲ ®Êìï ©MnBß|š±xWü9 ˆQ¨®…úšUH„»ëä!Aœ§'Æp?·;þ$wâÚ!µWÔ^SÞ 1¬î³Êˆ'KeQеWÔ^S{Cí-µ!€q{õÌnÏBŠ÷äv°‡}Ú·×Ô–—5n9È’Û—0´;Ðl™Ÿ•gÞ´tõ( ûA{ÿÉiïŒBaÚ(u‰ã&¨€ VyвíÛü¡µšžsyj%EìÀ0ôi Jéô!õó~ß €” ]_qJ_“„i$êŽÇv-Û90O¡1´»ê¹çK Ï ŽR¦[jcÊöTÃ%žöRCaØÁâ±æG^Ö”£TXâeQ¯ÀǨqwÝÓM2vŽ9pZwž¹_¯—ka™o†µ‚Ráîúv½¹½?ýø{‡ãô8u²ù“dÉ'mG¤¯YzŸ{˜ùµ/¶«wËX!¡»ß¯Â\p¹ëš‡[ªŠÇ‡HP'd:€æûX޾OIM­;1¨ˆ¦­{iC—9:´ÍïÆ#r‰T.bÞÏÉHxë<5H'áÅè„,¦¬P L¹”{”4žwm1UÎä;»F2›4ËRK$Íü‹uË ZÑ0½ KE†YÛ·æçn{ žO¹‚Ýø²¤i¦B*~dt±µ&»×?ïûVJ3¥YÚ.÷ }Z¸¹å¬ø«·lyh1CuQ£èà7|§ÿÍãV’u˜îPw‘¨Õ><ãA:Ÿ›œÝ¥;Y¬n›mZCŸ™ÔÍg¾sËùpK—¶+¾¯ÜqK×ıê嚬ˆ:‰IÆÑp$h·ð¾¦Åub#«"*À«Í£¬Dþ’è€ãÓ·…(îϘðvj\@ÃGâQÂ@4ÇvßÊn¿‚Q~|*¥»(jO—vÌÇ .|Düv¹>4’ âlñk¢âv=›¼´?ÍX£¯nFW­†±†èßÃý_™áž·nŸÉ#ZSƒêÜ8”‘kø¬ö|`¬äiÌwÏmY€Äw‰Eç®à—ù ÷Gj„ó̳§KКÒàÛWAO0áKvhá"£øhO˜àÄ–ï’'~}è›ÜõíkA¼¾@ÊÇã–W¨(ŸŽ@.··êF9̆ ˆH›k$……É-ùÝÔTI™e‚¬ªÅÉÇ}ÒZ»E²77Û²/܇—(õî.Á£ÜŸ0_N€’á[Wê`BŽ4 z`M'»~øÍ­¬¸®q¬ý ôfO^Eîi'ÃM¶o™¿ñפ¯UÑ-Õ ,Êf÷ªÕ_9…5,ª2]B!îî»—Œ»Z½GVCôšäF•‚¢b2˜%¹¤dó+xy/­>¤|Ëðúõñå÷ÓªËaܰ²=ŒÄHn¸MbI³†ëhSô±úæÓȼ] ô/Aæ:b . 
ˆìKF¬ÕHÞeÓ³6Š‹ø8rÿùÜ‘>jJå#9?¾D¦<‘Љ ÉnY.!ZûEº"_–y•aoúÌìûŒ¿øÍЍS'tÜ⫝̸m|L©Ãc1ü*TH“ôì´Ãý«;ýf}+Oó%óãwú²#½¹ÛŠÎêø‡­Ð„·Úl†§cy.ÊORBPBh•|[Ò°%|¨y “õNðZ*$7zñ¥D݈UíQª®¥êZ®®åúšêJý![7j‹*ã ÇdP<í€"…¬B±B‰B©B™B¹B…BŸÚ)$‡H×!MàMwhú5ÆÁÃ({òqmߎè†Ú[j?а…â[.…ÍœW}ÒJɉ㯋ј›ÏãØñ?Nùÿ·¦Œpu¿Ÿþ3þM”ÿQŽÿQÿQÿ‘Žÿ1^kQ>þÇøë¢WÛº.} Yv'°áñï‹ÎFøóç#&lÔŸõ¨?ëñïFxô¾õ¨ÿëÑû×ã÷¾o]2Ñ¿ùÎôR!øåÃ^M§è³=Y· "rbh£ÐZ!ph>ñ€œ¦â¡’«výJŸ2Áó$¯„C9^’o-¿õÈr`£'ýfuÆ/Õ9¾B¨“[à'ókжbšðX¿UŸ‚){oVê­Ð¬_øjû‰ ïÿ±–ÃàeVWj߬ˆ4;ñjãAk”E"ÈM¯éOˆíu3xä}ô%Ü8®­Hß|:éF§èéѧZ«NËÉôžïvÖÐéòs}XüêÝuaC]Ú6ïÙ£- WðÜv"Ó!4c²”ãü RI‘ Yœ¥‡Ž†iJœ¸É붆2ú÷#¤Ô£Ã?¬B±B©zìçGª{w • N_Q™lŸ«KPã‘Øâµ8Å.ê¼!5Ð3ÜeÞ=h&èaPó÷wˆšž¤8eo˜w-jH @]Y1X3Ø0Ø2ñ¾óSû­ƒ°t`É—"¾…Œâ¥BúÚŠ’0HH¨K/ ƒæ‡Ä¿®aqÓR52Cq&j ìuD]ØDÒšDªl½Æõ áÅ~ð¡kO{TÉ>Žž” ˜Çñ“茅øn·Xá…?­xS0þZŠÕiÞF•è™ÏJá.AaS>·‰¬Ía“µø®¶â¡€Ó é¾B¬· ~" • :,ú§º„ÛEc¼yCz^7š‘M4 fö²QÝ ñ¹qòÆ/Áê ¿Däwqm>[t˜(Á§a¶Â[6s„·lÀµ d{Ë=[1Xò}Š$mC¹-Í—ˆ©¶¡íìmsP[ZjÃ{Öü£σºrÈtC5fUÒ÷¥Zžd¯9d Åe[÷¡!±X^ejQÆ’a´ YÎS1Î:Ò‰†çX\„^– °0/ÇÚR|_äðzóËoìõ¶}ã좶ñÊÃZ_¯©ã[iž¢mŠ‚é­Å;f¼IøØs|×únÅ“EÇ#©¾Ôºâ¥¤'Ø­ rXtà¬öñçÃ¥úUýy_Þ±‚~œÿð,/ϹW|BðËZ¢k‚ò°€bV±±Ã}b ™¥æ@w«n›NŽ•ÖŽâ-à® ÜÔ·Kº°¦6™ÏëÚwôã;~ê=µè&̯K!îj{À¯DPí _â·†ü¢Ìce¸âÁTìÁ–¿vuËO¸SO¸gÀ}XqVü±«5=nµáÇ­îø’zÓŠß´â7‰Sàð£-ÿ¨ááo¸G`V¼mˆÚìqÃխŸðšnBÜÑréh¹twÔ¦ÒQï;^!¯ŽWHÇ+¤[®l¨_KÎn¹epÇàž÷'äþðBêBî/.Ü0à—Nù,yÄ]ÀÒaƒì°ùs˜˜FB̘AÊ c ]0‰k Zj«ª=£¨ M„ÛÑjñ"<ÛQiØêý«DÇ5ìAáJ? 
$¦‘ü4P“F¦ô&5•ãçES5|)ri/m`ö4±‘åá= è½¼k»¥®¸w=ÐÝËPØ¥OÃ$ë_;º>ùx{òõe#Süò{ðjÃüs×ýeYÇO1 c=ÝêÎïÇmcïË "xŒ¹·Ê6[þر‚~¥ò·;bF³p¼LiuJ72üèPæ÷—Z3EÞ€Í|ùÐÍR?ü߽Ɗ}Ç˸0±îL£ºò ð÷ãœ,—êáîî.¯xu.7¸ýôiËñ,ñÀ¹Þmh–”IþÔ£Õh™…¼L–X&§Ñ yÝ,G‹ÚÍ«l½ÓVwºKËíèîxӸ܌®‹ëúp¶‰}n¨pìïDÒû-lÙâô)³J|ãìAìM ÿcdœû…µ•"Ęš]Ïmq“‰ =¯Ê8÷™ü ošNÆlù3‰ªò¯©Ai–r;¤öŠÚkjo¨½¥öµï©-g“×-L+!x×ìŸànëüíûýIçµo9áÌ<ëv8U<€`ÝbÝé”3Wå-ää)Ac}!BÏmâaXp\Š‘ÝäLút­)-ò9_´ðÜDÎüŒ õU¸T¾Jx3HÓð®l-AÓ'²¯'Ó³Ó1!ý‡ÊÉBÙÊ{8„èl'~ñó‹çÁƒÓ¢ÿM# ÿ–Mé6;öNÛqjñ¼l^ÿ!ï 9ç}[@õU•]/'ë1ᔌ™§6œ =‘”Ùå­UæÆ¶°Ø6¯“ŒŽ0>ÿw¢Lxs«]Ë›ó}9…¦q ?kÉM,³é…E‹âÕÌÿcÎË÷†Û³ìÊ”¸×žàç­ø«V¶"w«åñV¢5uÞý꿎éûÁ´žæûǘÖ)®ójršËsVÌAýyLÓ?˜Qzo´°û§5r¥;DYöO[2ô:ñ[K4<à+ˆw²ÏØèIŸjC¡×§1œŒQ8Å·Cj¯ðPIÂ?Cú¥R,Í`É5)ê]\Ý¿7hŠÂ®Ž)>@¡B+…Ö mÒϼSè^!Q)ß~«`¤z3Bªo‘ê[¤ú©¾Eªo‘ê[¤ú©¾EdñvЪÎXÕ™N]ëô5ÕÑG}§êv§ºÝ©nwª£ê ²Tº87ÔŽ¨m©-ÔþðÅJBÕ…øÈ‰NÕ#|åð»Ó¾ñÛ"àü3ü”¥úÙuIÏþ’Ö¼íô\,õ¡†zTïø¹øËV½ÛP¢yÂqÞ=¬å˜9XúHqáÛ«Ÿ|¯µ'ÍYŒŠ6^n@âœÛ!µWÔ^S{Cmʉ–StO~Oí¼8§T#,„ V 6 „®Û}¹¼£*Pe\¯¬ ßI_SÎ"…#-%Û˜oŸV÷¬@ _µmK@Wm“såó¡9ý¿4/é@ÄkY 5Œ;ÍJ)ÿ½ð9³)‘ÖsÚ'+̬y$ׄ³ŽBÔ›'wœ‰I·SÃ*þèX7Õ‘P¡[þÑ=ƒ-ƒz‘GÓæYB©ák—êUB¹Ò¤%îá©Jãj©.mxVêB.Mûñ%d%I—Rrán‹ÅZz|Ľú•t*JÊeá§6äw”ÆÐõœbÓð€õÐÆZõ(S¼4*{§?þ_ iŸÈ•Û!Rß)Ô犕Ƌk4ÿz’õ]9¸SݤžéÉàoó·Œ×/÷Xwk‚,‰Û+B#+à:‚¢ET>"á¹Òµ¼³8œÎY¿'µà½Bw mÚ(ô ÐZ¡¥BPƒ4•ÎÐEi¯< EêZ¤®!_W›'òÝ]¤ú^À9§.>þœtR‘¸Q/Á/¿oM,Ïn‹áW¥ºñôYG7¹õš<úG«ýåôó'Ò_õ bdÆÐRw Î|VÔ¢eqM‘Ø‹–NhÒOš Hµþn‹Ö½à™‘ˆŸ¹‰,è±Fhâ<ˆȹ9€Šxl{ðÈ÷‘ǖϸ@Ô¡)‹ÝÌ„Š™Ò€TRÅE>êf×"_•û‰ãç4{ú2Bp8oö]°|Rê8 ŠHÓÖ=R¾µâã6ëzQý›Î€ŽŒA–â’#ÚýäËdm ¥-£ÄúÖÊ!y©(tLU7TÚHúFþ8$°e#]·ME±8úŠÁoå›^¥&Ξ7C2G¤ò±½áðÆô¥²Ë¯IrðÎ0ºøÖ¥þår9ºŠz Þé›´7ªôõ˜rðpƒn‰…ª&N,zN‚måó|Vbsìö¤Y‰û<éZ¦©AjØ,Àu±¬._@rykF0$èÎM¾º'êÞùbé*P5” ·É º/’Ðrѵ1žxD2°þؤ9wð¥ች“”ûG¡Æ7Ý¡ì‘fîV¢r)þÍ}ôhÑWwÔ?Õ-&î¡2(y¯Ík»7t$@ €\ÉÜÆªc2ðq‚¹vËÏ4€é‘UH–˜ßQÈAŒXœQÂOED6h’'JëË `Þšz¤.£ÂU6‘`æÑHŒˆoËú+Á«Œ««ùчšÇO…âG"5K+þÑ_Ù2í¥rW–Q ¿ëtî߸óuÕõXEM_‚¥ï9¾æ5-½¹u2ÑŽF¸åá†Ñ0d¶²§=ô?|©†dEß@ÜÆ^ŒaÃ]R®Ðî˵AæÄncäÕJ­l¨¬Á‚8¢P …ÞÚ”›gÊ,õ·IK+¶Ï¸³¸ÓFNiÏ{šöi ͈(¨Ê(^ ô+ó§_†@þÓ( ñê0?ö<3͵!¤·¨¢>Ì„©Ê`)øûˆC²ò¸Ï3JõP˜ c]ï@3>õuê#GãËœ?mŽÐVŸ5™6`BÂhv YÝÅbÍð‹eE»>GŽº–•÷;˜:sóTG®‹÷ÕƒÖ §öñfÞ¢PØúGri‹ã謚r"Õ¾çË«‡ã#¤‰‘‡;D(è°{è;“V|L¾çᯠ@¤Ã‰°<Ö{V¿î& 
ò9i!¤¶¼Ÿ-ìÇ“NÇMÒ(âµa[P"ª—ãÓϦ¼å-l˜Ã±.ä`8va;üŽ*©aÀÁ@V?ã±”ÝÁ„B™F8©a¸7êÑ+o>`÷ðÇ$fÜ&‰m±n4maCìúmvËŸ+ö'màLÁ™îÌ`Ë/Ž&€]höXÓÚ Þ³ÄI$ †ŽÆ®yò&*ž!Vú+%ÛM¦•îJÓ~¥"V+Ç*/­QטñÄ?ØûWº7-왚t¤Ì%£å,–®³X*‡q!"¼o‡Ô–c£©‘ƒ×­Õ¬øßzä§Fž¼Y/µ(‚¾SÕ#wÞ¬xŽ/÷‹Þ‰·Ðç\ÓqÝÙ‰nQW®yæôPSçi0ÔWÓlØLÆ%¥1*jp¤®)1³"Œ[óÆ’#™Rø{‡ËþrÝØPûþõôÍ Lö„‘6§‰ô§¸¤ys\;¬#þ~Ì_üâôã9{æ¶Þƒ”{°bs˜‘Rï‹äißY¤(ùbcõ´{Õ!ì<¯1…ÊN¶U–Š ÊOä¯Üñ{–ê’ PÃÁ=zCm¸¼úŸ £R€øû¸-ý-¹F¸ßß©×ë~ÊžÚZwSdþÇþ´oîé`>‡;7ü²-õˆ¿TÜ|úÚWxnô˶Ô÷ßñ@H¥!?E§¾¤X[Š×8@å³ö~xµé•sa¨9ÏOÉ«¡ÿ1åuí^-Z¦—V¦^zÅ(ïÕk6Úäf¢}¡–þÔúž\ÁA¹äyT”;ÍuO5ŠC€?7Å,†0mЈWný$•“¿crQÍñ_=ˆêèá/ßaÈ1]=#е!Ò>ï¡èq@tu‚s½¦Ò¤Cç—„B©.¼µ¯E£kCo©fFטŒr‚ìOpí•âr½ºƒ‹ñË·LÍ_}jÔ Ìbñ%™Ç{+\×,}–#ß(Ÿx÷Ñ(,Xã<‹¦èñïWáäC\É!ýtÊPãv‘»H§Ê`lÊ‚j¼9 ÂZ‡²NN°¦«o‰ä{°b°f°apšÀYòM¤ÑCmQýú«éJ8«Ð¸6« ¿ãŽ~²V? ù•ò“Qé_E–rž&êBD¯<µ_…¤¾ZD?¶jÔB¡Içiž˜%ÌéñÓCóª÷Ôaêãôò£¾è™¥èñç!WÃüjh©/—öÉ.NÕU^QsïÀ³d`?j +®©)Uùsm²†?—Ê]”½¡¶ ­¥>.áAvaM°¦ÆpžTèA\¯ߵWÔÒ×X•®-/ˆ›çêÄ$Kek'M_~³ïãh§ âK  è?Ú­¿„ –³Ä, ;KØ[UGî`†tä-Mrct¦øD›L®ÄÛ¹OórŒùÇÔa¨!¼,R8 "ò•¯¬øÊšÁ†Áƒ{±Eöæ._eT¸³éÉ+ЊèJT/ÚI¤Å¯»G#AÒt°Üø{¨{ÃStêù§]aÌr#Ž·Ê1W>o3¡ŸCHÚ’;Kà•„ì­té:¡¶o0Þ:ïÝzÙÑš¡¥ÁkáòœOÎßhÊÆó¢†ûª3àJBÅd篰áxW«íraåë•þ§­à7¦|Öññö³1½èú¹Ïì3¦Á×SÔ1ÝSä‹wÍ4¿ Fß#kSÄMÅíçó^¿£o5ÜÐÏJjã Ml(•+çIœ ²oÁÝo•ž&þK~-R¯2n@Ñ*C·OVÝ‘ õ/Çi­ Ḃ߆귧»;NÏÉV¡ ߉F¿±Ðƒ ÜK ÿ¶8°ÌÔ’TÔA­`Õ‹ï_}Ðiuün—qÁv#%ìCV5¥3Cwéoƒwɰ4A (LÀw³‘ÇÙšÌG…8°'z5· ™ dÔy$¾·tIØ¿  VË2,ƒSr1ÐP½vöððgK¦ÄÃ4ÂZ㦠>åÇ>ásÏæò¥¿rëר³-’ÈguC•»}«fÎ@î ôÂ\ë7ÀEÏÔ‰Ñ×xø‰´1*³EzÏŒX?ª`+öy®zÁ{~¯®¨ÝR0ö6E _ WÛãïïèɵ;f”8~èJÏÏ2òÜ3ü „‚Y¼u¸BàÜ'È”Ý+G³¿Qßòǯé™â¤‰íÉKþëi«–{ï/¶–b/[M/\²†—Þó{–܃Q_O²¥_ Ãõ*5ôù/Ís1÷¤XŸ¿:»£ø’«X‹÷sBßÁ¼‰ ˜:ù/ê?ãðž8£/œÊOÝŸ}²êÓôí‡ãÅãζë*u]>~Ô!2yl¨“aò(Pôþ´[‘JEþ¾GÓ¥ mL4Þ¸AÕ~šXúz–£±rÞ¾ CÃ<È»æozʦfé½£fâ׎ýÛÇ›Æøò¸2P7jiœ?‹º¯º9z´•ó#qyÁˆÚʧ3Ç;¢þ›!¨laâ} Ú€Éý#u/ì²±ûa¬P¤aj¤î„§†Oô¿d2¥Ç®e=Õïq€tBe-ÚOÇl|~ÓH󱑞<6-õÙ#¬óGäÝ_ìÝáÜX¬EïÐç#’.Ø'Û k!hñ7˜|í 1¢¿‘wà†BG­7Ø2ˆG%Y}f:®)©*<@2?ÜÉ2¼!§PB+kðŠEŒo¯¨½¦6ß¿¥öµ/È4Ë>Zã+jz[-Ó4CZà¹kÒh:}ùÓŠ;K%è ¦ü®R{Eí5µaXR”QÙÐTú+òóYM};¤6£5Âà<€r¿ÂÂA"¸4„Æ?¤|®È RÊobeû¶tŽ=©][†;6pæH¿¡€xf¢ßéx>Ê^âš§' û¨ú*WrÌ—o·¸€eI^žÀ5åK³²YâþÕ28ùvHmhWëŒÛkjo¨ ù®Æ19€%^íºÍ— ÔÚHCd 
vËé¢n¢¯#¤Ž,¦-ñèCßae¸G“¼ê€l3¿X!hZl×D†¸"Êê~Ž2¡)G¼{°b€áñmþÿŠÚkjo¨½¥öµïi(……<@ˆôÉZ¬¦–œôm+¨oc°$½Í1§¾-ïÈû–7{:F<ˆ$ R!? äÈa8°¦ bŽð¶Ô–!*èà+D3+J¸–Ž ¥¥*»E¨®à³!+q{O°«QD˜b@ÊD ¨ì’Ú!µWÔ^S{ÓR¥ ðÌ­ÉS_«¾¦“|@§—Ïjr²¨ä(œÕ„Ì»¦tɃÁ¿@6H×ĨíŠõu`Ç d {`¶—È‘À5å}MØ¢¼’!ƒ5>õ¿ë“K\RKF©ÛåҮŢ¡‚sÏ) §ÁH¹&ˆ|óŒÿ÷v禷 }MŸÅBÿ}øöÖ€B…V ­é!üø¬9izØsú¼d°¤_ä2¥ÃTžÎ Y+ërÖæâ¦Ø–8š;˜>f•YsMð# 7¾-áÛP:D¼PI¼ª_»6e¾w`Ë@Ýö@…O<à >݃pžnSÉjv½¢¤ÒQ"±ÅÈ¢ìö¬¡éª½4‘ÎÚ5‰?ꈃ÷`Å`Í€{"Þ“s±k·|aÅÔ]2G>²RǬŽ$è‰!QƒàŒfœÌ9ÌÜéÇW˜o}›ÿ/ýsmó¯¤[um8½x¸ç+÷"`ùÞò}˜Oÿ~¾òÀOx€ñD|»cù6Ô–>;6`{O–Ôæo©}Gmþí?²»c)åÌ©›TŽKOa°f–`›†ŸàX¶`G/ž.xÖ˜Ö†‘•5pC!_Q€¿}£xpweZ3pÈ pžPKOýª^‹-èE<ñ9oM1ȃ%éQ]h…"Êéï7'vþ‰^Ž)íuK5Už»²W"6"Aºwž^âg‡¾´Ýïð{ψÊÈ‹)NËŸ Ì\­xÝãV‘ òj{‰(rKÉr̘—@ÄÜâØ¨>²èÁÄ`Í¿— M#ÞÑ‹AV]Ø8?ýJ&@ÄÛ0™š«ÛÒ˜“ôƒT,þ«q„»Ù”ß½T=# D‰pd½é–èÀˆú—dð¾´)D>ß ù‘Täy *ÄÚM6O˜˜ü:bQOègdÝÔ¤+ð|ÀЇD/pYRuÕñÇ<0¡Êšˆ$¥ýåØrÛà¶ÖJíǤ Mö»(Æÿ¿òKdMzŠ­ÕŠe3ßù|?°²tð²G–Eì?qm>[aþõþÁò°ëÊéãÿ¿VÿPìèÍQâ'uçtW©*ˆ)ÓŒ oéI™…ù;M‡ÕnçM0¹Ô¸^) f&6”Z›“Oú PHÚž—;D©Ä ƒ4©ÕN^‘¿çE×/óÐ$ãÀ©"4éƒo£[£xü9Eò®AÊ‹«¬Éŵ)œ—4\¥t/ûŸì@¬´ù{©Jå/ ™OÓRZ&ÈäCÙ¾Ûˆªz µ6ÅȺvHíµ×Ô–ÅÙ6ˆîkiD\vŒgKEhJôî4}‰xÒÌQòûŠªñø‰X‰µçÓq^Ö¯&ýóøvüÓLþ‹ÿGèþ!þýôŸño¢lürüzüjütüñú”ü±òñ×E¯:¶u]z 3Âñ§#¼áÏ#lG8áh„Çý)‡èß1Á> ¤;T¸\´‡G¤““o?u-pëë5‘´‰ó®çZÅÏ”EÐo°ߦ@¨Þ+?Š‹9Þöj¨wÏ5v¥rWw@~£!jTùÝÒ}´ßAÏû¢“í·CÄŸƒîP‹ *º%Œü~Ýøù›a׉ûÁâkkš ˜8GB¿œ&gxYEòÏxÁÕ9eós¿yŸ<-é¦4¦~¼×ü›±¡óÿ%Ư*Ó¥ô[æGc!C>ЉµX(?’ Ðoùµá_«ï‘•{»^ßn`/öG…ÐõÓ óÁ@ha>ÐQm 6j ƒ_s‚ñy4}8|Ønû=šJômD¹®¤6×MDݘ$o¡o' ? 
|lûmÿ±í?¶½w9¡”<ÀÁ%Þ‰ãBÖ§!†y@p Ú—-•Þõ(RÈ*+”(”*”)”+T(ôY¡BÈxÛQæ¬îиI/òPjõÎ{›@v R d÷¬œž·1\Î8ÁØ>Ö=Xnå«ì>B(cÐçR=ðfè˜j7G9Ï×ääÝQ[4±Ik¥›•ò¥)ö(W×r}M=ù²+ ×–Ò,šŸ°h†!…MøÊtsœ?Ù>(À_b2¯ž?5asô1Gsô®9šY#¶‘Ø=ðôì8ßKuÒYŒzjý³øF‡J\…gHêš =JóÄ–Z¶'Qi–Ipp¨dj˜úvHm(½) µEê¬wV]¹Óõ ®Áî~ÿŠÌâ¾×§Gýî»ñ½_W¼˜gµasAmF ¾"å@¶óí¿†qŠ£ošÒ„™ˆ,”íX‡Z ¯†å¬ëÍv«QHÂ[m§WÏÂ_bäÿ‘CuíÂþ¯XµFl¬ß³ÿ>ãùk¶ö›Çдœ[¦Eš–éìáäø‡—ɘκÍK[…@þE&Ì]®¨N÷C ×®4Ü0ÜêGmõÍ[ñC÷WÈø›mø¾¥îÐR½cu;†¡†ÜƒÕƒô ÈÑ‹}:^Z®õ­[ —ŽžËo]ߎo^ë>mG/¾Õwßë»y„×Ëñ÷Ýix¯oõcóFîgÖÏžËãñCp«¿ðïýƒwÍha|U\ìÖ# Z)´UèN!.a½¡†Š"Ükzq¯¾- G=Úè'­ oõÕÑÍ£>Šc~›› mKm1¯|:öey¯û¶Ôpu¾ç/p­áF톣Ýú±áåoG8üó¿cªkÛùÎñw a$F“rË Ñ['Ô{e¯¿u?"¥µÚ6º³£•©×íJŸ îIú=‘êÔ&w›X÷1ÒŠGOæéÙ¯õ‹Vã÷®ô{·êª •¢2}õ~t5Zëßê/ŠÔ­íèó—çGýÌÐçòhŒú<ê$õ*0•ä¶ L4=ž\ú’ÈHåˆϜϴ<·¦!ßJC©…ÝË$^vnÛgcˆ­Y§jF%F·9)R*e“PYXÎͼðl"þ<Õ÷Þ˜Hâ4 H!Ñd±ßæ6 ¶ N»ú“·Áð–z&¾>‡t³‚ŽÈôO¨Z§IˆéiÖTÝ´T«Ó@ìú=ƒzÓN ß¡«nüàC6| lTÞy‘$sÛÖ(:•ÚÉM0úr:jJ™“ Û!X¾CFß©½¢öšžº•­5L é<[e¯w)Eø÷€ôµ£DÝ™¨;3u ²Ì!å›fÂú_S¬î'J2•h;°äGª¢¾ÁÏ›RvÚËÌÉ"8,ûFT5ÓI´oò¤m•G;h£'¼i´‡‡ 9²°¢¥Àè»ÁAnìa»’b³µùV½6Rã½âkT5‹ótÏóçýA÷—º&×Û.¤·Èð¶ïÄTë·*™j*õ½3Zp+~t¤¨Ðº¤!é·'ÏHO Àý"­–ªs=ËåSÓø¬¼’ëéø|yL=íCrF׉¼"iË„ÎÿBC ú ò[ÑC²—å}¼ržúƒS—¯Ú=Ÿõ”£íHD— -D€ãÕPw[ˆUc^ÔwÙàåŽ JiÏçuV>ƒÒ$e'èÒri1\î«#þ6ÃÓ#'{n”:}øyWÄ~~àä¶á¬(¹çêìƒ_Ì!z’š_ÞêãR÷üXë1DEÿ2ö:]ޣ؄?}J5MzÀåN79:d1{yÙií¾¬:™t§À’AÈ@V‡ú¦Í¸-,´ÿ#׌Рèaþ Ì„ùƒ0æ¿aþÐíЇnïC·÷¡ÛûÐíÑöýÐí©uò¡ÛûÐíý#u{ã”\!%5n¨œ^w î ]aõ ¹ÃœWùÞàsZô Z)´f©ßEêwà¸Ó§ç˜/…êu¡Ñ×øu¡zA¨^FúN0|ŠþYð7\©Í Ê«{¹'‘ÃצȬ£Ë±•H³ØFµCjãØ"1ҵы¾¥ ½d²,äp½™V‹žj_x2ÙÙšøi‡¾0s-š†£?®P¨# õUTvà÷“s) kÞÔ¥ì‹Oh‰Œ1s•åê d@[ÞaÍSñ’Ö4 ]à®gb³PwÆ;†¬•›ˆv8Yb»C¬¹Æl«‡ûV=ô4Rÿú¹úßýqóˆ¾aÉ]VHõQZÒÌ8ÁÜ€¿t^î“§ût­z'Q;/0Q_²âçPJ?ô²‘ŸOÇ–Ë'¬f$ 2ncå~y®Õ¿‹øwTõƒª~PÕªúAUU]Û—R#e¾3=h_Öíä›g•hÓfÕ³¨3 8l(«ßB‹/sˆ ,v;YºAoR*ñ’B§îLÓ·¨`¡é[·ö仟èµ>s%q˜*¥ÇpíNH–ûPúCz”S’;qõÔ¥à3eM³¶Fd£•ô, ïj,<[“ùÈö¤‰kªÃÝ‹îa^‘Ê+HjÉ«6OšÕí-@GÁަ‡š5 å.ŒšS8í¬@Á¼Ï¤<©R#“A»¨÷TxðˆB…V ÉÊKm!¹K|î1˜(½VÀìùY·ê ûAKÔ‡êüFâ1¬Q:9 ¤Tm÷8 <ÀM…·Ä%êÝtM"µ‹+Lpm»^®¸áC;?õ[z8—&ç—ºèé+*puI-ï ª"R¿º[ô6çÞËÒ)ë'ZkR{ÖW{“‚È3#À¸#*“CûéÚ2vñÞ"Pʬ⦓EPYØþ8ȘN™Oþ`Éb9©.gغ=δ¾0T3§-¤å£y\R;äÏLR!,;S¶µ¾6ºUxɤ¬óÐD?žNÒ’¶|í~Ï= 
ñE–Æ>ˆñšO/ÓrÇ+}hÚäIÍçw×B~Šû„&4Žd@=Ýð¿Ù3m}²ZÌMG&Ô(+…þêdßóöPËkgVô]G²ˆs¯M"Ú)Dç­5TÕ‰ ý›e0Áxê·d°f –Û µœ™DÎó"ÅnÌYK³5‰ŒúÊWò};)Æê(§°\•ÒÏ1Q%“FŸ[S¬EAoS2íA5ß-ˆCÕô$Ì0¸†¸M47궬QOA÷·t#µåƒ“ì1žC™­%²›@’ÒGiTÁh§ 5ÀÐ2³­äÀbª$=¦/EåÞ ±ª¾-Æ GgÜ®²sÜ@+9V½é^G˜H®÷5û|ÀaP5wnH%ýÚŽ s<'XŒW¡§“ö®FO*&Ö2'€ÊŽî(E1ÊŸKJ¾ÊhÁ€ÌRä+,##sðŒ4¢±ÁWÓ¡$óÈ¢Œ[ýˆzÁÿƒIï¢éÝŸíÊôžæ>HHé0‚¤¸Q ²“Õ Î¼ ±óW”¯ dþYleê™T ·ó¼Ú Á>ïV2,'±àbaÃÊ \3¢¶Ð^v"™•…ÿ«ìÔÿ ªÈ ‡R¨¾-ë©NMAl@¡BBê=ŠÌ9©âT¿tîÛà`zŸux{ŸKFÆà‡ yºïÈ`Põ”û©gJ®ï\†Ûµ…B'Ùi@‚°ƒ –m&³\nŸõi‡®)ÇÈ«ªòœÏØí\)‘΋ù®£“ä D6ôY•ˆß@»^Ë ú=ÙI#“÷Ü¨)&vøp6-ÀZéè<ŠøF9 Ý€ µo #ï ŸQÑŠm‡¼7‹U¯×”cC䣕û†Gp¥›{”ð%ø¥LJrŽDI®-}ç­¯ÁØz¢d# h!ß:/Zd8ºÙn¨@ÏË8—7u•¡¤ß‚ÐÏ$‚äÄY-‰Hg5èÿdþ Êø¿q<5éîä%÷↺A2ÿES™‚ª ÷•|CÛP¥õ.¯`«ôBŒUÇ9iz“Š"ä:MçäúT¶É¬Ç–ï‘<<8D½+Š´±ã’ÚbÇOi®ÀOi}Æïêé1»ä°*dMèÝC&#Ý]õ Ÿ&'´ˆ1ûÈêƒ'-zù9+q†­$äå˜M÷4ÿ»޽[ÊBCœh³¦¶(;|»<¿Å ùvHmyÒSÚúü5‡³‰ ƒŒAÎà›ôw¬[Ä2‚êÆÇŠë «š< *˰Jb¨³¼˜&Ã¥+êŽ6©ïÐYr;¢,u!YÜlD‚+wØUœÒ±’daU4•z¨À汄ïÓÜ­ q:N›ŒÉ7ks‹Hþù’;l ÷µ…!P©››oi³ÚB{«ôªcu‚C”N.‰8xÔ€A(òæÝU¥¯›ÌÄñ®¼ìɹˆëŠj Çu/ë\«gtOíÓ^΃´-ÊRp2*£­" ÇX±ÔE+Ž:ê3º% åS™±23%ÔeG=Óˆ:}6ÔŽ¨-ªâgêuý`ªX¿¼E./À'åtŽ*Ë ]#6ëÁžõÈÖèkÔƒ%ÕVÛÅ Ö`ZâLñà •ËgØÕê˜B9ª',Þ”‹¾JÒTý$ÄwÁÍõ“»¯Z†Âí*çÓÒ¥õü»²a»z‡±ñÇ·CjcÅ•rFé70ÂJ° ZR5i•W¼Ç+nµÍ44G/`™ž,Vø~" MÒÇ8€{ˆk󨰶×@v«KPÉO²}ÞÂð¨Ø0„Â[ó )FÙôÈv5”Nß7ð £8ûj‹y4GeQFL:}D`­I›N*&¶Êi{ów´H+…¶ ¡8Ñ&¥1º Þ4E³Ù-GPhFñ…ôßó¯†dx¥"=»Û2 =X3€ƒ·*ËÙàäMD8xòBzOȽ‘M™æ²Eí]e¥Cõþ §×ñ…DÚç´oVC 4ÖýGíŽ=×0DмXŒ+Yµ^•(‡>1§–ôÂí¸«~3'½Èƒ™aX¸×ÙÁ”õ,Ï|qòZ4\Ýmm±k“3A.-a'*о{æD )â´ƒÉùŒ2T®)?†#%O× ?~^9:K¾‰Õíâ¶PêÑñŠr4:ÆYRíh>w=·±ÂËý“l¯j ÈÒz*„ÿEFttħ-|*Ç*àž„x}:¾] ÷lóèµEüЫœ¨W$“õ@6ÍM^š?bѼÀHÁ½¾º?]ý4@x,½àHã/fŒC#Ý1½/·£Çµˆ{Áë–Áù ³a? 
¹ðm0ý§¹‹vBG(ráÛò}G$+ùB€d‹skš‹«T¡-ýGDQ–±DÍmQôòeÕs|/«Ÿã+wqDà¥{4ñ%CÑGMþˆãÚd&ÔWTÊw"³vKM®'ˆ7ÅLj"}A9­9ÍŸIùÉ5}½™ru%íŸ)\í’›µô[Jîno…>ëÛæ‰:Ì.ØyT¦b”9`“e¯”Ϻ¿×ù´ [ʃgÛé_Iÿü1#‰¿ãÝ6Ü|¯W7%uqVéNxº}ŒãÇÁøq0*ôq0~Œ¸ðq0ꃱz&_."–Àùx™_#ób•‘“9ÛŠqH‹ì¼³S9´9\5J Ç‹?=y lüÖI5¤Án*ľ)ùŽÚ4A…Œ3¼#tЂ^ás?T8 w°Ì/zÓ!_È1K ÷±ˆ>Ñ.¢yE4~rêT¤P™õÒ êTß2ßV'c_p JU¶?pÊüÊÑåCmþ¿¸{X8–Ù剮ÿ&Ÿÿ²åØØ3‹Ê8VèïoTßuÃ7§² ãv¨ÔŠåj‘Ú !ú 5Š­Ô5„ö¤™ˆ)Âä|;¤ö ý:™Îÿ‰åˆyaa]rŸ|ÚQ:ï—cËKçS}y ±U0ôÏ‘õ?Cç¼é <»¤<É ß–ßgØòNO¾6§Ò¯Žbý˦¬óh«®mG×V„ÔêNʶ1 ÞC}u£!òéÿ)±³A܈/Ç,)¥jßÎÖŸâÒ!ªˆØ*¡hP„ TÕ‰% 8ÑM+i‚®EõÎÊÒ²Tç×J][©“î~„– ñt„ñ,P%†§¦n¸¸ÑØqoîÄ»Á_BZdz¼ñA¡{u§â|"^Ù£ž‰XPÿñøÞÀXäpÒ1¦þƒ0¾ü†/ìtKïØêŠ5ºìU[ñ3»¢6œŽ>Îð3œñÇþq†œág8¡3ütç_ò ŸRæ!™Ê†/:w´ÑRï±ÍØ…—[s[ÆXŸ¬¡Sv¯‰€®Q~U éRé,‚¼¥ä0iey‘Âwo!ƒ5ƒ ƒ-ƒ;pè÷¾Uh¥ÐZ¡­BwŒ$âq@­FÜaò&óºñZÉN/gïÛ+j‹ÿ[œvEeü™6ñ5ýdP0ß©MâV<ÿd3ÐàûŠGr“#‚È»9“çÖõÕP_ GWéÐÿíG£ø?V棶6:µOeXâ(Aßž68©+lHR{[Âý¦-IO¥’Ü@]9ooš6)q,¢@Œ<ÍIGBÉðf¿8oùƒîl„ – ¤€¥Nr;ôU¶&iaçu»ÌÑ¡®Jx£>pOõ6Úª1Ù¨kBÙâøis¯.éGÉ”e5ô¶æð½·êg÷Ü­µºQØ©ãÆc¥2ºQ¾¦gÎþø|Ù 5t'å`Û[á‘>9‡ÜÇ!'èãû8ä>9îíßös?«;h“$%Q|@2͉τôDHç9 œ¢Ãµ!Òë HVGƒÓLEÀªt G$ÛÑ!Ñ?¤²å0U×5*“í%¾·¶”_ª>4ä‰vhd4&BÜЬ©/"¡á ׌DwpãÇîåpù‡ «ÊâÚ ‡J"C Jž‘SƧ'/ø&yÅ=/Üz^=ÈÁà¾+Dò!cp¥æuTÉ¥²¤B¨ó¼Ï÷ÐgûQ6 xBW–AÌ e1È Ni¶Ž `¥¦fM×¶öˆV … muz¹?óû]îF)¤±\nx0äÄ/½ Vt ¯Í¾Z 33o£JÉŸ¾âqÇr×dá™ò¹MÀ 8ßš}#ŒX×ÂJ2÷阂ãMÓV’bÞK}ž(MTÓ¬n€¸Qÿ¥›[êÌš~²Æïk®ë±W?G¸û Ht×S$¾[ßBjý—¸kkPJdØéò0c)1¨ß· ñ>O]cágd«¢R~å¸e„.¹ÍÃØ/×¥zYˆîõ¦“mZW1óuÏÃõkŽ´8~äy´%¿ßñÁÄS9bˆM[Rü¦I^ßóÇ®h-ôÔkP`׿õ¢6Ê-KÝR–Ÿ» C¬wRcŽ|g¸!mñÀЗ®Ô“·êKïÚò(º¿áGÈ œ-”|HrÔêùZDÿHÎãš9Ý÷é¸Ã)…EÖñ›%©ÜñeR5åÆ>üÇþ[¨éœÜ;Ð$Ùµ¿¿ô3äQY«ãc=¾”ÊÍ8ó¡³âÕö?ÿýÿù÷ÿø_ÿ.Ëõƒaø`>Y¶ ÃÃðÁ0œ.}0  ƒf“Ivñ™iù7Rwú4‹h³Jœò.©<:Â1>P ¥êÊ–(pd‚çÔy•×âíZÒ&’^X%øMR ­–z¿"eçTìÛ°¦á~TG»‘cö²öŽÁ»T²¡Ê¦îB…V M¥—¾ö¦mºíÎ Gj(u0”b¶7C²)ЇsPÌ"rgŽÒ|¨à8¥ÇeÍ«c±ÌFX¯’ø"‹[’Pc$ýˆ˜‡^ik½Nvô…›ÑAd‘hœÃÚ‚þ+ÒQT›ŽGs'óØ® ”{í=µ¡iMJ„{°¤~ÈD+¥åpÛŠº"äpH†uzö¨úý8£ #ÿì¤æƒ¼|—òò#/L/¦7ìhª¦æad„›ú:õ Wõ[Q ½,ÞÓN‘ÑjxÌ_ §5f¢4ɺr8yÕM®´©ÅõóÇuý“Ç•cÌç®{ÈZ89àÇ+áÔðÿMV󤲲ÓH±9ö¨é‘úà"ðÁ*üo+Vá½çûüÀ’Á3WдéýgзW”ãìÇ^Ipu'þÙŽˆó”paMó¢c<’ˆã½Dî)âtÌ- ÍT»³•¦›Øí×îŽ!eÎŒŽS®Qûû¢ò³¦ÅÙ‡ê8RrßÀ5Qé–’Áú6<È$Óy¾üÔ¨äÔ,·òrÕœPp»¦¼bƒ ¿Àö.W(ðW ǶkRªºi„iËŠºCöè‰ôRª¬M)ßúFoÅ ÿD¿v 
•°í›>]¾|ô¨ËQB„|ö.U¦™uϧ1Ÿõ‰ì9.H™¨Um€QŠŒŒ¶fTÖ§§þ^îíêaywBuÝšQuÊþ¾Ëáç¬5ë“ó<9!ßßÉoÞ¡?}™ÓJœZ~£¤U6AºUšÁÅ@3oåãq^Žkf=—0Ärϩ˫9Ñ_ê@þv—ñi·ð±û7«ùÏ "#uÛ»ØæyCvF®“¬½“ßÃ6Þôuj¨ªÝu¾Ã×°!î¿PÔr@¡Bà8Ú¢ 7<ðçÙ¯~È7½yZ’Ç…é³¥zºÊs"Òî]¶T/ ±75 D’£ï,6²XrÌÔ[´uºÄìûCŒ"9pÕõáH”çxɼ.[”sîŠ Øp±^w—~§˜ü^±i£r©j‡ò2ŸZÁjœ™ äÜXñM8žþŠ+)‘ÚšïY<ºÓSÕ}û΀þ}Û[iÙkê°úïn§]"SŠ^“êŸA–•„­ ò–!ÌÎÛu‹TÊ|óäN,£ÃäÌ!±Â³ÅQð³‰üäJüqjÎü½4;0±dx™Eá)§ˆ®ísD‘B±BH¯EáP âïùRž/G©Ý%Ïd H–(6ëFI&)£’c­A×–•åÓ±ñ'ù % 9áP½îç%¤ñ8a2r¡ô9Tñ˯Ó—‚ªU§¨˜<ŽõD³¿üñvu¨çµi¹¯="×ÔÞP{Km|w—Á¹p*·¯ê»$0Qï׃ƒ5°S”N;¨P&vV¡¢t%uç5›Ië$–ÒeÓþÌתò£Š•ÏšJ⚦b×ÑUJÑü%£ÂÄ]Áa”’'é’ÅC½oçUCÚñZ~ê³qî9Saÿ&#<ÉÚ/ßsz¢ ÛÛ”«öüQé ù ½à«ÍþÙ~ד¡¼Ðs çEJÖ¦}¨U@¨>¡µóóÈáYØïrxÖ>Îíyü*"fÅêV] ééâd¬]†¼œG^¾×ûõ2ó¡œ|ç¦%N‚Y‘yÑlwª2‡OWÖ.µ#YåÕªüS•ª;‚×ÔæÉÃ≭׫ûÝ©ç—gjÍ·ÝÑ3”¦s Xp“õ;U˜„W’¤2[=܉ل¸&fsÆœ‡ ãuÆ•ÇîÇI;qÒþÂôüqvÍÑ4>@FgÆÔ1ÁÃ4õW±öŠ´NRSPÏTqš¢ˆ“*E>˜b¼Úɯ·å?ÏR¤•rí_7©¦ŽgáÕØþdÙî¢(ö!rýo¿^äúGÊ“¢Å”0 Øþkˆ¦bÛ™Sÿaü[=Åè^ǦªÝ©r0?8Å1Ÿs‘™ñ/gÙ•kׯÔ<¾o`'ÇRez™²ï}éÑünŠ‚`»CNcJº3o ¥õ@h‹Âôº”d0¶G ¾áW]F%Âpd³]i(áå26¾bT›™r©Î3Ó£ÒG&Ÿ<Ë ÑSg5eÈM)é³ë›¬N„œY™È¿ó$‡kc;*&nÓ;È*]ƒìÙ !ŒAõ—dà²:C²We¥~$‘YÓ.‘Cf¿¢N«:~‰l7«Ívû¦A£qºfpþÖ2½Vd¨ïæ•0ùµWßµŸôªßN>­{ñ0§¾Ž=Úç%µÉ}¥Þ-c…Ngë,>oQû LÝ®­\78’>uIÛ5,¥M‚yàleA•Àí°ÝÃE©t8³WÐí@7ºÊ´–‚f£r)Á±W]ÔvO™š2Ÿ7Q^æ b—ý}ÐÄáÒE}‹'·Kù®¹u“¶£ga(jS5ÈL±Ûþ•2š~Ä2‹¥ã/"¶mk‰'§Ñ“x÷,†µê’o ËÙæ±x×7maCQFp,ë»Hd$$3˜L׆TOåÝ驼˯’`¨!vImðìÑ'°ÙeêÑ ²LR÷,gí)ã|C‰M ;[TУŵùŒ¨\Ÿ9§>ú>ÇtÜIÛgB+[w îéÌLûq¨5… ­Z+´QhË(RÏŒÔ3#õÌL]ËÔ5ÝOHš×Ru'›ªBשM÷gÜ^S{Cí;¼ õ=è— `7°õgV 8Ûûç ƒ-™¿!_ü­B¡B…xV"Ñ4mî8¯úª¿Š†'SËH-ñw»ö‘ìÙ}ðýu—Úh0þ1;ñcMÿèš~û"¾fÙ~¬†ÕðëWÃ;æÿyÆjúXo_Wi<Êè€ôõ!Ìe/8T·ßZ}ÙŽnEÜË푾,ŸzÄ·Bó“Ú`AÚ‹Ì,ã<¯öPšxÀG UVû­—Ü…}ÚêîÞ&º»·±îîR,|&²£¾.a¯È“æäÙ»èmÑf£§BŽLËGÛòɵÆ'Ÿn^ë.ÄÊ‹º>1²Ù•Ý—pÙ¶ÏQg‚Çg½½v)ûê42£™‰G_oõ:ø«,›Ÿ1Ùãù½<§?oÚÆõ£3óÆrŽp8ø)ÅÝ¥P½<4ú¿˜?WZôÍDLAù§vœþÝ _‘ Á®Äº KêŠÔT *“ЇNõ,OªSD)üÑ[«å«‚–g·&ÞÀIc@ÒMªý|M°²ÜW‰€Ù¡pÀ§¼èúe.+ö„ã¶#œŒp1ÂÙá|„£Nin˜ÛO”lA¿‡…¾T)•6Èé°ó?Ái‘WY('Ðp£Ì{’ÇjAðê"®Ê‹Ç¦[oèM`ì绸"·5ö¼ðýÊú-mMÎFORšòàˆîŠôÅÁ’?u+«É­.á>oEmÑÖq¶•< ±V˶õÅ OZzKíöÕ†?N´ÆßÒçǾùîFѹ©ˆ±z€¨¸­;6x1ù7ÝóFÇfñ?»£‡¯õ#Ôr±œŸ¥èý†ž²âi’qþ\}Ž—¼<ÖÑèñ2ˆÃa š=ïBµ^/7òÂÁ§j½ºÛ,õ¶£ç‰/-v¿ˆ Nï@½·.lšÑΠÕ|qͪ•5µz., ^j6yÊÓˆ?úŸÏmJ²¦Y{êÆ_Pz“ì3YpŽp¥ Äwnƒ5ÝÙº2 
á‹jªˆY£"¦oËÚŸ4qRê@È`¥@D`ËW¶¾²T¿Á9ž7ùMSš0êýÑÔ•úêftÕjk(ÛŸ=¦ ;¶é³¢˜ŒXêJKrÕ€ô5ø½öûµŠ ÖŠ²mîi4‘7mXÒ¿ª},xËê§ãVÿh¥ÐšïÄ¢¸J>tõŽ^+œò¾5ÜŸl2Dç*íú_p'«ñó,× Ç?’ˆüèÆ¥]y݆x÷&¸vìþ‹Ë/P¡ñiõÓ‰ª"[ŠTývåêQlÔ“ów8T§3„Òáö:1gZÖ8še¤€¨}2\S´å*±#Ç>Í:±`Í¿Š8ÈáLA?I6§¦nö²u8fËȤ3kòšoZÄá-›N¨.¶ÔRèhÙŠçø¼sœJ‘¥ºM nãõÆ®³jNdêïùþ<Ñ -úçvu`·CèÖ¢+ZšÎ=ÊœæïõL1²l•ažÅbÙ3qž"kFÌ×|5rY÷N˜\m¡U»eœ\©4¸3¯ÛºÕ«m±UàÛúº¡ñ7­¤MžöEÑŒv'»OpŽr¥7–„ V Ö 6 ¶ îDüèˆñ£#~tÄŽøÑ‘zô=ì>ÿÒ[ºd¹ –»`¹ –»`¹ –»óÓb~ZÌO‹ùi1?-æoˆùbþ†jÛ- â'Ü?¨8UÔ¬Kdk_‘¤†jƒNÇa·]pzJD³0u³"ÍFÞIœŸÏ>×¥ò/ðÔŸqfyÿI©žÉJrì@¿-Ä‘+.¨Ú¬¿ÂÓ‘èa”UÆáš~ˆìǧ¶Ö÷É:ˆmCµsý¥B[ê}¤·':o…àýŸ/CõÇhu¾|?T“§÷¤ ò_Ž‚\ùžÍª7¡ÞvÓûâ‹þÒá­ÏP¦vcbt-]‘’?—x0…PŒÂ…S”ûo÷•<™šõšæF:Æøtúù¯?øÚoÔlã4gGçº*è}á4¾|O25e‹v-ÒM\@FUe>‚jŸ‰Ì‘Ä R‘T¥”ZpóR/ÉÇÉÔX¤þ7²²éÂ8 ÝÞÒ<ú1Ñ}]ó¼¬ôîéLXu»ÌÕ¯Öª›*ÌÕ=äŽá\†‡55ú©,±Ú&¹îç†û¢€œÁû®Éï§"Å>¹kÿãÿýÿþïÿ!;ÆR"„$ßÉ=êQ¿Ô”Xr> ¥l½©*§#qËv÷}¬wß¡ÌÖ³˜R|;B[*ZœItj1Z©kbá Ò\è,M \S˜xצ{ÎýúéIçG§ÎW+œ»6 ‹>ƒw'ÄE1×À5#j‹DX¢Œ`IBsU ®ª¶pâum¡!ujz¹®¦ÊïNnlÝR×û¶òhB¦.׃+ië ®ä¾¦Y)$¿Í#9a/ל}W™Yœ¾çŠÎN–Å9Í%h|[›ceʺy¦SÔ£ œ®FJ…ç}–’•É™T0ú’'»)>Q —Ùò4YGåLÖF=¿'õ<Ù„Êæ¼(*ð±d•UíK‡´Ì7Çò@kþ©ÌÔ@_ÄòÁuÙ”»Ê¢$¡Lœu”˵£ÑF¡5#£ïTo6…ô¶Pœz>b7jÄÙªºîÿä±@ ¶¦Ÿ%ÞÞsA|Ÿ¬XGJ>ÉFòƒ‘ü`$ei0’ }0’Œ¤\ú`$?ÉF÷e$m×k2Ú–Ð]YׂÝd{˜$;,øÚF”åÞûˆNº©QÜ»i–Š ,X øQç×P_ GWµóëJßüSgƒCƒÂSqü´B8½èò»çª‘‰oë >£uÖ÷jD•O$<» ±mTIì×”Ïm"^²]K9­;ƒÂ+Aý„ÙΣNª]ÍmÓájL³2èz×ÀkÖ–˜ûú÷ˆ|öO%j ÌúÂ!Õívß#O–hOW¶ÔFІ[¢! 
¼l¾Ø|3£e)Äç8õrÐwÝá§ïS¯>A@c‰Ôù+2 MRîùB¨ßC&z·ƒä­_ãV*’½¬Í˜:ABE×^Q;T)$.¼nPð©îëuç†K 1iÑ,‘fÿk‰äâè€þѺTC.áž/O¤Í9l…r|]zê?OÀíú ]´ëƒvýi—û d‹M¡^rM|h¶èdV›/§v¡Æk Ç6Ê,—jª£Ê[‹YÄ(îþ}+¿-‘»i,+£„W ßöÒƒrõ£¦§ ÉóØ8¯Žó"Òù.)%>©Ï÷²IHÒµ¯ürZòeLú²} O9õñ¥"Õr>H•©Rµ|XB•Y*aÉk‡g|P "2,°1|þ•sfÅB…¥†Êix Û,ÛcËU}’ãŽQetÄxH«xsæ!|¥)¶éÓP(ê„CõÓ­~ëi1îÖTntÛz¤Y…N}ýWŸXâîce)Äì²e²fŒ”æ’28?\´µß­K}1Qßµú¤ðø3ù²jO¾IŸŽC _zÂêCB=‰zÒJ¿Æê{?hËmù -´åO¥-]Ò|ÅxaÈ5‘O÷ˆduV©×†Kðñ¬ê3Oã&·Ï§[ŒHé QF9þ稥; r‹°H ˆÌ ebhZ‘[*7áò›ªÏSÄytzÓ¬“œÛ-ÔnyEZ_ ´0öÏUÇOíÔ eÖ|µ)bZÓy?Fp‚<ö“+{¬ƒªO!•±-iž8™Jèan‘ì(¨‘ náˆ[+˜u°÷©8TŽk¢¹%-\ŸFËŠ@8þ¡P,+w``Ø·å\‚ i`º$ML9®)®ž–½ŒV“ÛÒ£¬¢ –ïB‰_ä%03± ži[2|·œn †ñíµÙ `2Cƒ5–Ê\z›¨!@ýLY)2 Ç½³€t ÒDÖe~õß„¡îÓ-–b›ã9 ‰eCÕ+“^r}tž¹X:ìeÎùL5û™v–z,ü”l¶0X»ö†ÚÈþz°³1 a"tBœE‘¥¤P¨¾s­ÐF!fEÏp¨*aÌB¢$co2ìÛp„ «ì85Ò]ùÂ@r~´¨ŠéÛ!µWÔ^SZG÷Û[²JZ$¢Úç–Û!µeý©l0tV¾ŽùOd rü}²âl÷ºª¾ó( 2U×ïù‚¬÷–ªq`É`Å–‡yº§®Ñ@ԸݷEÍŸ"$Ê­31`úrº÷|`œMöé_"VyÛJŒ+“œÙ'ÛžMrÖ>¥ôÁî%rع¼#z‰Â"þ'êLÅI·~CN>³ÈÈÉÙ2DÉj•ôé²aUÆæ‘cORÂ5‚“4û#€Dxóž4Þ§ò%—¡¸i8b(ïrÍIºqôrE†ð¢n±E‹Í7QŒÝ·…ì•T€—Édc¾ž^xÎ9E¹£\r@QôBœ‚l¢I‡ícre¶·u:Îä8rÓ6Ý–>·ËÁqwtB eîQ% ÏÔ£sþÃÒ—ùtw ׯ>~þB¾ [*ÞïÔ… Ÿˆtã²ãŒ¯ãŒ­¢ý¸½½íûµ)¯¬ª_ÓÑb)Í᪠„·òâéT®Ö^(3“‰Àäš`<]Å;r$‹Ý"›‡÷¸GûìóûC‘0ØÕu*ƒ4U¾¨7Ì(Ä:´‰PWöJ« í·¨qãÞRA9ïFÃKþ§³ÇX¶¥¿ ÅzL¼o‰‹ ü}cG0d÷yòˆeéd5ávlÞÇî€,?k[z¤Cpm!»Ã ‚ðà%ñoI\£„¹MÊõ€Àr‡"˧].íØ`óey+ÜéBÀo1yÿÁ1ò¸Íõ厨53Ø2}›þåjî^J@†9pàŽß¾d #Y÷ÕEpo_K/÷îÉKT\¸É}¦!ne‰Â£«Ä@ØuØþaÉ’¿&S¯‘áß·›x%§xoŒÌóB+ÁÍ–Á[¶K90‡¹]Ò²<†a¸“g”MiÎø#7‰…íÐõ=äñ ‘ÇjM_•H7Ü¥û,d »ÆÝ·4èï¾½KÂ5.Þ%ˇ†ºwÇÝßbk—Ù–¯¬äJRa=Ã"ß?L/èýš¸V–}´Æ#k8ÀnœåŸ‚2kuw64H†ß³åÉyÏåö>÷S¾ÚïÔâC<²·â¸=Ê`3ti©;¸R? 
Vfݨù¸WpµiøÓ6¸šð–æ542‘Ã{´ïo©½E{}úÿ·Åv³°û’¢,Z‘õ YdæE–Ã瀿Bä×4 "ßaÝ¿QZ£ÜàAÞn³!m‘!ËWäØcv±÷ „ ù»ž°ù½œ—ßLs'2S—Dû§3 8’¤ÉÑQfO^y$ÛW»È&m…T Ò‚— "¤O‚MÄýÀpfû'KP@>ýÚE^9Å+×õWÎàó&v\Äé)*t„9¥a/Š}Bûý¾rõ׌ói[Ï…AÌM¹—27\Nzz’¦DHè䘉:ÆÁïjÒ&Cªrr㮤‰¢ü’¶K±Ô¶¯IoרEÁ²'眧û§=ÄPð97­i²„}ÑvTiÈY9–xnËŽ¨-Œ¹lmœ±i«NnC7ý`ÿýZ`ÑìœÈÆZ®ú1+E¤T ¹Ž6$ò½>|`d/¼ =º#Àá52›öõGºY?ƒËLãQ‚!íb+‚úHœ{‚Oû8ê"âŽH¦AÁ£yTŠ$‘[Yˆ[j»„’FÕqq=Ã6ˆ-7Ï+[q{Eí5µ7ÔÞR[ô+û£àÚ!µ×Ô–ç4Y‚½‡Þ’@¬€<Ê„.$˜ß–²G{%·\›ÿ¿¢öšÚÒö!i¾EƵÁ÷ë\~–F¬zl×þ È3QÊc³¥öŠÇ†_ ¥[U#_±­(ë¢{Âb:;ÚZ<3uu°ÔñûÐ8Ng;ˆ€J33ë9{Œ}l)ó ¥×þ q c ì;ŽŒƒÃy¯­†¶J{¨¥þ­:*?¹cžÁ@n†B(U<º£Ûh íøç[üÜ=†¾íÞø¼¡(—Ø< g#y]„X.®-ù³°{Z5Ø)K“|®£×Hè1œÔþDSöûZB§Bùt™¶š 5åû'#´Ù¬VÎzE0Ý|×kÓUά¬lÈÞ¬ªäåÇ\}Kqõ,û¼NŸ µ#jÃùäÒ¹7òú¾doùŽMEÖÀ˜5 GZ±ÓHÌ(rºEMB߆Iewê ­ëv…ü{W”hC0ìê'nã§à*ÝܳžS§`K|ƒÅuÞ§q£´S±sA¸HwDÀÊ X²¹œ\ßÜÉãQ‚€ (Tkò€ìï¤[›Î797O¥þr´è@®g+-9ˆ%E*å躶¢ ‘·ìØmÄÝ%â²ßð–‰zÌ•%ÓXè6sqjX 5ÏDÓëæ0¤¯\b4ÛŽúžï7¦6…·×3壨>'YQ˜e|åçvß@uÝùGãóÅ·i1¼%THÈl .>H¿µú7…„¦p*Ö±üçÖ;d–®:Àßæ“×ÒGMÉ[f«^°T¿Ù« ŒØ°‡P” )àT4le¹Ó'ó¿[݆ê´Þd*Z]³þ>~Za'+dø Åòê½Y='ä5¡\_Æ~,¢%b&ÍÌwmAqìÃSÍ·CjC1ÕTå´EbÀ¬ðo ~ò€<<в ‰õl¤ gÜ^–møÚt~¥íc$ ÕE'6öà[.TùÆ;na¡Ùg¤û¶IÔàdûÕ#AfKKm9ó ¡®³ÊˆW£¯"*f.µ+ŸgsSáz—€"Ìl,'±E•ÃÁp ’c…<ƒ¸yŽ©- Nÿ\‰è]?ˆº¶H1¤ ¼¹ØaÐO§8{P,“o ÑìcQ²õ8oÕ‰ÏÁà@Uƒÿ“¾æ©Eš •sÃ޼7ËrºjkÿvMU¹7(Ù†c¦°_ÈOé@|ƒëÈ«“HO3‘cÊ•ý¢Ï™í(‡í jeÅù@. 
!ƒþjîÜ3€“Š& ÒŸ§R¸ùÈÕÜSŸå‹m‹ôS + n2Çk-q\€ØÆúPÖ 6 ¶ $!A’É&0±",p"£ô‹*þl0Ë ÜÊjC%ÔÈ"X·°L«&… \8´ap¾ ‚g5¶W¥ù BH æŽ5rYvó#.²•(,•$qœtÍØ® èñNK %DŽY–ÏâY9«ÑÝ ÆÆ˜×}‹ ¤\¨¾ e +„=€ŠtäÆßPY_pßµ)?4qDèHÓlp㫃ð IhìPv…çÇÖ ¶ îÜ3xo’P¡Bçó“ êLâ…>Òf’º šÍ¹g° ƒþ+ }¶çìm¨v–Ñßß$œu­¬PǧòÏÉzòãBÅëð: åeß‘9l9ÒL«ù\c±°©çTþAvsiª~/ÂäÈ×äøÕ£¬FBoÍ^-”5˜R¢§é¬Œtü‘ á€¹ª"rAùK†^Ü8«¢&'4ÛBúêR 4ßÌŽ{ !Õ-Úî(\èÔ.ÌþjƒìGH§6vÙƒó[gý¢G?YˆñÊ0”u¢ÜeŸ=mJ ˆêSy°‚"ÃÞ÷mÑçžq“–qž8,g…dÜÖñ}[ù$×îz܃ˆ>rt*Èc³¨sÉF6+jYˆ$IßÞP[ØœÖÞp –§$ïÖ ŠÁgW%beÜý§©úÔ%mS@Qü‚u&pÐŽ$PØP3"èáG½8n©º†äKÖÖ¦£ð<+c­ôd쨉%ÙÈá;<¶âjLöðà@vÍ(±Ð49¹ TÞ&dz °V¯HG£ð†n¤Õ?—ÌDŠ˜ß_ ìjùF¥+—"Å{ùcì®4ÚÝå« E¦ŠÚñO×Ô_&¿KwUFwM¨åé¢a¨1ɱà“[иuêÑ9Èß½šÃ=ƒÜ¹(  £`?8ÄYäIUf¥Ê¨r_ ²^fYÙ%p´9Wnq”rþX!=Ly¿mIQS"âiðµZ2¬ècäBRö°ªñªeG"ù¦MiRaѵ‰!Cº¶4„9“|2ܶ]ó¬„7gO¥˜ºJÜ›Y¸wxc©²j˜;YÖ[$IŠÓiZŠ£Œ9$ѹ6ή¦îÖµ—`úÇþ8â`ÊH9«–H`wmøH¶T4΂\rÉ(ëRCçÉGâàŸêRžk÷wÔ–Ápmøvz W&mQA…äŽç„AÍ˧³•‰”)$ÛÍ›ñQ,ÏÝ ç´BÂG‡Cl* É–X· ó©vUSvðfÅ ]½¯¼üƒfÒò*èJ!ð¬¨÷} NËzZž“åv*úM:§QH7Ф7k$„ÚIj‘è¦üY\9v\;‹;üœ¥¾L  ‰yÀ¼À ‰Ð¼Q&t1œÃ¡B0D±÷LUÚÕmƒÕ¢Â$X¤„þw"bB¥à¼Ú Ç¡'ʈ  ä–ãÛòlëÕå{¿o| Ä*—O¦Š9ó]™.õ“Ûr ûhÓ'#I"•ÿϼ-ˆýV§áF2*ºs)ᦠ÷h¹ïܧX‚ÈRJ^¯H¬åpT;#Ôhïò0>)èkŠ1iZD,÷dÐò¬é£,#¿¢qŸ;1ýE w;ÃBC‘;âP¼Ç-ÉM·éÈYLJÔ@…“´mŠüqþé2BÃAI¹¦XZn0IAÑ“VŠÄâaì!ƒ €±G°#˜LHuTH¿áìÏLO–BêÝÙM2ð„X®ÝŒ•4]Ò©²EH³kÃ;ïY41ÇlÑqÉ åˆmÀ£61Ñ’?k¥‰Q |.i_û`6Þ ÉýNôk2H³î•Üi´¢tÚZÎ釾Ö8°üš;r(µ¤ÏºF#Ú;¤Ì%b^öìöƒ5Yp¤cÔf}×õÙ¯Q¨,•WÂÜÄ9yÒÅrrL‰öãaÓœÊw´°-e6:"™]o Ï·…ðT6BV[ƒ’ ®Q[´l-E-¥Oj‘— h㤞¯yÛ$[¸R׎¨-Q¾ÆP™! 
ŽÌDœ·È­­òù‰üYx'-Ú¤ïä¸9ªrq>qQÙ=98IסH{æ+¸³ß=Ÿ´ÌCÉ»vúÄ@x÷¼4Ë a2Èä > ™¦Xš:‰UeñÊíÿì•Ûƒ¶¡D•ìê} ,z®-Ýô¾r^^í$({ â/<[þý•Ç6ßoVø½ÊêY,õ=Ü¿…¯ø-‡ õ6ç® šíAÄYTÄäÛ2rû,>=Œ£c0(ÍèbI/ëÚdš&l×^QFF÷ jGÔ¶ÔŽé]'ª²è¶DΔ-qIÙ·SïIÞ¯VhÔõYÚ‘QÊŽÁ@Q M#vºoÅ#kfd5oMîó€·îœ4"yàF¾çÄ»vHm¸3ƒË†*Ò¥QY$úÐOIMŽz=À>7µÊ3­úyGF•âŠô2,ûÌãÍ»ÖL'÷A"ríÚBq#çÀ–.ÜBBüco#º„xmÿYôn¦6Ô–WrQåXÌŒ¤oo©}Gí{j g˜¶¤jñ’#$ž¾†ÜÝ-}ïZ}ð†?QK}Ÿò®y¸5ä4jjm.ÉL‡¼o6Ç=vÄvq”K3†ãy)y–ã–ÊÈ&œ5’½'h ý·^þÛ7B%g–¢s£Û·CjÃmÄ@­ä•X˜ÅOGHqŸi½ü¿L˜‡'-½!šk47hNæwèŠÚ8Ïã,•·ö­ù¿ÐޤËBÐ8#%Á£BÙ“þËQà´HÙˆDÖ·Oï»1mž·)âÃì¦0‹D"9#k1aqòm'u–¶+”_爑º=ªk+…Ö#*´VhKh£~·Q¿ÛB™èû2ò£@ßüþ”c–²yLÒó.ŠóÁ÷þt*FÑ -H»×E×Fî&8lL¤R:ã+›a½³\ÞåÖ°"êiOh‚ÜCjœ>ƒ°àŒë"¾´b°f°apÇàAÌ aP2à bóÈà‰€ŽCpG#îhÄø»#îµH2Ä„PÀ†d°b°fp]ž‚ˆ0‘´ J䘩žA"I1÷É— i‘ÁwÊÓ·!¢Å<ã¢i:«Ï›5²ÿ” p¯q?êEùãZ9Ÿ÷Iö.;àPB/¶Å‘Uí‚+%E_çV9RúqݸÉL`çìZA×"}ð«Œ!ƒa7ú’r0×BûúI(WÿDáç Qdóš=J¢Šùc‚…8‘nàf4{ü*ðÛ³È]žN×àÐ=úƒ…¤”ls𧦥tWð‚dÃ’ßB¼Ï¡¹è“n§h¨pÓ7MÑlvpGTUÜ|×£Ú*-Ÿ…*o““ÃYéúgrŠŒ1d¾¯[j£: â¼ýýH…@¹ "q„\FÖŸ\×àëlÞÖ”y×!K=ޚуÕjhéO³TO…xz „qšð †*ˆ¨È&r0k¥³*Ø®BhXTͪ” {*·zu@ÈšÁ£CÃ{n±w©2gÅ)¸b×6:ûVA½™”%œ$'X‡€|ÎFQÚ­2¤öŠÚkj³ëå–ÚgÜ0g¥‰ÄöÁÔ0¤'¸Täˆ"…T_ŸI•!ëqCùÌGÒ;2jCÌ…ZšãÜBS!Áƒsù’AÈଙÆý~Ë¿%ÐËKCXÈÞ?·tç,VùÅS¾6Å]§l=P%Ø´”4 7Ç-A&’Í|ê•gÑ2.âÂDBã´ÏöXM™„Œ|ŠÖާ„ºä¦ëï;DcÞD+wYtîªÇ}ó þK×oýÃŒ¨?™áç¡bœ›uB›6`öJ¾£Íý®”*MÅëlWP¡Ú¶Ïhn•‰[Æ %|+Uò>Âï¥ðž,Sà¾éŒPšú6ÿÁkß×2:°Å‘¦rYP%éyÝBʼF1éš²JŠ˜0¢_H$Q_çHls{¸4ÓÚP˜°* x­WÛu%‘ë“lM®1E³Þq#ª2êZ§2C°¤9N:Ýö;¡‡_ÉaúKFY5ØaÐ~¢]WdTÙwχö®Ôã„·!æ>)5D›Ã–:Pf¨Ï²ˆªÁâJʹ˜tð–rº/SFÃØ|¯Ž$£ìHH Ì™€õI“Å(*ž}FäHQ‹õq^æ|¡À©îPÕ‰#Ȭ^²iÅ~¶hú,n¥«¤Bæ˜Ù7Œõ7È+uKT¥¶{|gLZ«î¹FŠè¦Âé*ê "ë‘l¨z÷þÐ årç±°$f©œ[®M£ýzç–˜ò¨s€Ô@m\˽ØÐ›=øÈ- Z)$Qï{ òÑ=2Ž©ˆ0åÏ=ër±5Q,lô7¾R[Î,¯[1 Îj]n¢¼Ì“5(e5d(ö@^”'¨½äÚå+ -Z$ ò½¼Åà>óA QÈü´G1‘„x‘yçµ¥Èþƒ+2{ Žm ÁÍi(5¼Tov(bpb“? 
}»ÈÃh+¾êù¿H(âB›B×,È•e] ‡ª¢TÄö÷­T±Á½M#É;aá¼ähI™Øb¿P?Çò({ñK3Íç[†ÆU;6‘¦Æ•!bð(¡çpòe0°=YÉ °‡e­à,“ø8- ž#ÍiŽŒ#yIQ—(²ýoHЙ§såêG¹ú‘„%²ÿ¯0i$ rü{¯ŠïºE±ÅÿÆ Vk~|ƒÇ7žÙ šn)rXûÌ–š¼Ç¿ï;”ièm+Ð#>ë–T³äž8ãnÀá†ÔbDUvP–ò“kz7Æ#øJÇæW²òÿ +ÊØ¢ŒGUŠtÔåßµÄ+›ÕÂòν(\8.ƒ¶ýùàÑ9k;Ämì䦪Ôã ¯8K§ŽTMÄq‚,€ó«G ?ÃCДáoŸQ#…#§-)ovÏY‰zÎJÔs°<÷iÚÎAÇO!å5izú¶{V`É€gƒå«îù`¾ª×ýÊìôŒ!ûsÞš¡Úš!¨ÂNnG7®el*÷î|]µ ydp÷bïõÒ€ššwU5Û¼_Þ û8C.©¹-Â$=†—ò,‘È`#çåQ-Ü{h{®°=±¯æÎ;NîØn;,s ¼“3àˆ ì?ëä?D"|¥XùµìU¾—•®ì¶ÿ§ßF6èðNþ·‘ ³’|V!‡reÈœ°é@®KŽš]Í% ²ÍQW‰˜eBÀ!² ²šæ7p/çm­¬«‰¹ }­û_{vß´Uæäèã 9ú8bá{;uCüпœQÄ÷EÅŒMÞscœ¦ nFN½}™ÌówÆÿõµþšnÿyœQ9‹rØ$LÑ‹Ñæ?®9¥p–¿Ôdªk‘®<}Ï—cOe¡°^‡«LÿúDòÏÑ‹ýaçÉÇ/r~ÅwtUâ¾6ö‹† ÷Uþ•šž}Ø%Ú<‹zÀ•±IÔz¥‡Ë«‚Z:wOÆ¢8~‚.gWbäsIÉ€mÁ'z!¡“Α¶Ô¶¬ÝÏ@"9ØÄŸ+]»R¦é &v^N ¼®67TNE‘‘6oã%Mô%쪃¡€–™½)gÃñ=o€SÁ1Ø5<ûÑIóOFlÀFÑÝ>9*ÆtD8òŸåÜŽý¦]+Gy5ù}‡×dƒ@ú×{M6eÿÊPS~Ž ËÅÀ•p*¤ÅŰoÓnŠnóʘ+WD ™=[» ‘ð!›c`ëP×زˆú=8a󅃎r효,¯KDµ™Œqt &v—<};dÔ':ZõkMѽ‘/ˆÄ&-ÞÏVL”®J ²pÊ]»÷X¹Ðp(Q(U‘aØBPò±ëå<ïJòã´ $ ›zázáQ¢QH˜ø N}µ··å«0­ÊûòJ?Êkc¢Þ_¦¯Ø?{áþ|7Î]Ú§<2/ELmûÍ6vGjHb2&ì¬ý`7Ê—_!’Û ˆ©Ë±­A6Ú¯÷ T˜Ì®ÀR{-s~Ä«ÂcX€Œ›“þÌM,‘¾.n6;ÀÈè³} 'ÃS`ª»8]Óh;þä½ùÊü±h¿ÐCúâ¦S‚¯g¢î¨go¥~ÕFôq–NmyŒô¥OÈòŸwÝ6U˜ƒ…‹Rî0îHü‘só£G¥3Ê¤Ì Ý¸,À¶t@Î|;ò ßn6ò-ô(!ôŒØË«„°]hÀ{×¹õµ’qÞ¦…:~Ïqr~1óYàºSpG¹;K¤_¶ 㚌“«Oˆ(F¯,ÃA-¬)Y˜­‘ okBÕ!¹%·ªFœZûØ×ÕÔ *nÄC·…eë?·Æ^·mß~¥×ÞåôÜ¢Dºõ#¤Ì²(LéŸØ?Yð“·¹F¡zR#je›…ôÑ[CÜÆk®¡oYíV hNW<õ«”‡ý;ƒÁWé‚'±ŠpüF³ ®…ž?Ví u±aÀ]E¦w[³äY/“•BÔü:lÔ°p7b#^kHü¤òt¬v°:ó?£ï¾FªÜµåV-uxe¸ Š®í Þ<¢º0Ît·hä6 ½l³çÙË#\c¨fý¨ö#F˜òG†™l=•š¤x˨Wu;ÕJÇfVôa«'Å„Ûk…ø£×­ôw%@Êc“ª.}çÆ7ŠÄmôžG’LO+RM:ôÚáȃe—4EÛ”©“îeÍ Ô„HQÞ5͘Ýô!u~e¢ WF+õè-µ2è*McyÑ&ºnPdïIÕý ÏÙìÕꦚ{ÞÅᳪ¡ÆÃ˜›’Vº³G‘#³VèN¡H¡£"‹tz,y3.µJá KM0ôÊQ[Ÿç}“0 þF5ÛTÓ`šÍõ&¡ñ¸{P;N³dÚóþ3Ü`ʼV[?]Ò(®É/ȾyO‹q=Pëw…zŒ7-ŸÚëŽAÍ4¬æ·Ö¼ð(ÚÌÂõK~oÎ à6¿SèžȆÌs^ž’û’ªùÆË¤W@­„oü! 
ŸFë$äÁP,Z¬ºÇœòÈlÌ–÷ Ÿ áÀlY¥Vn¢–³#:¯m¹$û΋Ik0RHí>ªîRCGßuŸóÖ½Õ'3åm˜¸üPÓPð†$ßFG8c>ŒÃGM_x Eº¿¼ƒÔPQLW7ܼ»,1Pu ÓìxÐÊlQ€©Z× (6ý»Zw §Û1Äí˵÷¨>„·¡¾8ðŽ_ò±¿Œ˜9[+¾ь âðÂR­ÄTÕÅŠÕä]/—±T{q©ÛTÔ<ÔoPµ—šµBª•l£ÐBš)‰R4-V•¦Ô‘Bôà]¡ˆ3ÚAE¶q¦•oéñ-W1!T¼}Ì»u¢¶~̇©¥cÔâ*U”šiÄæ‰A­6µ¦ 9Ý"ïx="ü‘d®Ú(ÄbN`ŶÚ+ ·çûëŠÄRÝ*,ŸÈ“?PW5¿lÇÇŽ-_¶|GÝ’cŽî°E<Ìwmö¼ûÕ¥0¬hņ±º˜ðM2¬ɈT ¤p¾½P±n_™`j·¬ø–©„üUa¬n‹êú2¨­®îZ°1(À\1+°æÁµ7SM2ùÖ¯™•Z{ˆ&¯™K¼ç‘_}ãóZŠáå:äöÔ½o¹R#kÔ2y@PԾĽ ¤*ÉQµÂàŠÞRKóáåeËÊ9ÊÄI,9Ÿï …râ»XIº¯—å±Äîfˆû°7ÞEoš2I…D 8t­ÜÜÇVZ©7€‡?B…îºSh«›‰4¼ÕØM¾ÔÝá®ÂYóˆ– É¸s€Ì—qºWp¥_O™‹=Ô}Fµk 7úKô‹Bý¢µ†›[|6"+¼´+bÚÏ4Κŭ¨äY›F6[—â²Ö½ÔGÆ\; d¸ƒÔÔ”hÚU1X1€ýùä´*íIËW$§=£3Øå0ÛAƒæÊ!•#*¯¨¼¦ò†Ê[*ßQY˜Y÷®[K!ƒˆÁŠ@Ç¿éø±Ž®²Þ‘^tlÑ6ËÅŽì¦1û¸…“˜²n êoúŽôU»”AÆàºŒPñ 顸£?•>O'\(k°Èz­á1¦LüÎgIøPBÜ÷ûW} vÐbº2ŽÆ_aØ]¶+R^Ÿ+÷ƒŸ’ÒvÖ¥²UÒZ•¸¡2÷”ôÍ Ûó¡äÈguŸH®¦+ÒÍ’!¥¡q„ELÓ^´Æ&z&Ô„^•vÖÑ;¶ àÚÏIp¼)r&ôw2'Îqžõ¯Flx&„°µ£ï¢ów-BT;°bõ¸°Up<@çé bzˆÒ¦Rëê¼>þ²öuÚ]65X©½Xéç¸墤Œ ó¼+¥‰//ãÊ›r2“¯_¦|p…|@"\ƒƒzN†&/!N`#Ï— …ïÅųeØe6±ÄèKàÆíÙUyã{s«´/›ÁpcBÜÁCγr'&-_<€¯|UèíÏÌw‡'änÜU|`=H £ ßÿ=Ë ÒáìÌ!¹V½ØÓa¹±·dþ únËB¡ 0¼Åžþ{Î8lä,.ŒÛÈ5‹S¼•^«\¶Êö¼S6V-‘K{¦ e$tζŒ=¹+ö Uäü¹K~>´{ °÷^³Rùä8ªEgì½ú`”¼ Qu%x$cvãÛÍy˜(êZ°è¯Q‰ýãu®ko8«]cËú*' dþ ÚgI{àÊ•%¶¿2Ð+2yÞÙ©ÊVt6«#V\Œ)ä‰cÃ`2éöÞwY‚·½á²´kwÛyÇ O3ä‹90¢û ®¨ÿc2`âbèìÑ,cîÉè3…Itt Ÿ©’ÚƒGFÃ^"â.gá;ÁW°8|SŸt6""…D9.U°Ž4!5•²¦ )Ç_I×»dxîÈ0X¿ód5{¤½U± µîûÆ„† åµGI*1ÕÏd}bŸ„‘ L5LË!ðç_’¨hf$;‰öç›ÉD*:‹•“^°ë ™C¶ôž2Ô¶u0 p d€#û¼§ó•ÖÓ“Á ¤³ÇøK…"…ι÷ÍmÞܪýuÐVA¥JÕ6ý ‘Ö\—êoä=á]ñ1wgä}·ø…Vï± ×ÄõçìÄ5õÔ"ú·MÈÿjúöEÉ¿“²7ËÐNØÊÉøR"J²G‰B0Œ¶ˆÂAªµÈáj®sL~3AÍ„‡]˜x-\bg¼‡rš§Ÿµ98æ_Á‘ãTxú#+L‘†w¹W½¢#)™ÆjîÐ- °ÀpK?Ú×ϯâ³Anâ&n g £ùQ„㣑þ!—¾6TZ“˜]_³Vh«ß€N·H¨u|R}ÏY¥à¼oSIXäÇjÃ5†ó§ž¹)þ\fŒ²‘iR5a–šáÔÙôê=[© yù‡Û2²(¿ÔÝT¬·IÙU.x“A¥(~°]ÔÈS¨â u)é8>àr´‡gøßÚÿs:)ýlx–±±Ež7—C*ËhØ2‚TZ°¡ŠmÌ !pÇ5žqá^xËϸ÷sÍ=·€ ev½­©QyIe¢Hé†Ê[*ßQYäüná®üƒ ɘ *’è¹Ï˜ðƒµU¸Ý¹–ArýPð7‚ð“·Ô¿ãù€êX7zôüeWO-ÖÍ»VÊ” C`q–t8žÏ ¢\ˆÎ‹VGé¶H`IÉ>&||0ܬiO:£)‰'“ƒ¹“(jðQaŸòíÓ‘!¯‘¯ÂþRT¶î©ãrBeaɼ¤ë¼Ükœ,Eoúòl7aY.Õ9Y˜ õÆ/÷ d¢RB®8-[‘<1{ãÏD™¦±ðjaW\’«Å(W1ßd§fVîɵªÂ)Æò§c#¡z]¨ºF¼Î0ñÄ&^–‰¼žö©%ÞÔ$Li”¾ MÅN–·âboš¯Xè׊ém|Ôt¨µO^û *"šñ âxfÓß)¯’ë¾ÌlÎuçwÀHÇ:)ö êX :&ÓPèä©4‘3ýPw6W–{J‚ß™n].ZŒf Ïv =•|D§$›HÈE;{V!竊S?´D_õü#Ò 
6äKDt&(ŠL˜ÒºYýî𩊲Šf·ŠH‚ÝC“™°!EF{&nÛןÄê¦Éº\,šýSøÏÐæŸ8Ùã§•¡ì'å "®ŸKî4ˆµ-2¸EðÑå h·pLqK!\â©åƒ¸=kBŸý»ˆ‡–ÎnB…"BäöåPÂUFU¥ 2~nÇ/[ç… EÜÈW’Ϧ”_åOׂØ*Ò;:@?âô.lÜ1 8ÌðN7‡'²ì·ˆd…‡'rö(d”ðÏ6†ª¶êg[’YZ”00(\þ3Ô”YGV£q›ŠÌ(ÜÔO•R÷ù/EŸ.· ­OåXQY•®SÏT’·/¨èrkÏñ“¤€“ Ù¸¡2Žê'U·{Õí^u»W}To³:†ôŽ(FŸÇ{šImûŒ,",UÙ=#¸SŸ1•**‹%Pß Ïþ,BàþÉ[ÝüᛑûãÂL©G¼:„9õ¿ªB†ŽN"þ`ƒ7Gª&f]Žl ÷—•»,•ÝqRp?pCEÜ2§cGÒ’âÌÓ\1‰ ââ$”eÝÃ7õ½ñ`ÅïœdlG!ÑžMÏ¿¢ènf³F7) ê‘z@D«B«¹óE¡nf©:³¡G· w²Rý†Ùò×>¬™ùôãúÀ%R>ýX'UÆïÌ%uÈ"}¡„_E>Oã'yaÞ‚ÙüµYDÛÍv5• *#ïhÊ'Å ¼÷’’Ûrú­DÔëâ"]2bŸ»‡ô›øæ gÆ ƒýQP`w+l;e;,†C]‹;o\Ú3A\õçöÀ<Í:@-Ñz%×ñ˜¸cpçs,þZNè¸ÖbTÚW¶»1ÅÍíÕ>;Ë¿º‘©yÌp÷2tq´å„ÊÌŸþí8Þ 2f†ç)¹ß¹¤N"ç™ Ä:¦=# #4rVH¾Ôy‘Ò6t@Öz‹{Åÿfd3² ‘e¡Î•Œì¡êÈÊ¡D!£PªP¦ÐN¡\¡B¡R¡¯ í‚îåú§ïl[$eí2É×÷~~‡™+‹t 3D¹ðu¡B‘z2̵͆¯+žuql?=»Òß‘jXè–F_–¿4 Úh™¤£çÉ`Ž¥ÿç=NnÎÑ!tÊTÜÌg¦Ë)óÿ¹Ï§"¤Tg:½£‡p8 Ç2¬ñ)”ù¬Î„—ž¶þ`ï÷ Yj>ÃrTexuš„ŒÓ~㓹6é27g³r%¿ãͬ¬+Æ;{”2ó>ÞÜz? اñ®dŸÝñ.ä ,—w¤ÊÁB™VÞÚšó2GŠ–÷îÓ NÀ"M[såB,•‘M”Ef`?Rb¾ya*°{*ZžÄkž3öΙÏg`¨$\ˆ+#*ÏCßÄ!ñit(‘›ÁD¸€çï…äKð?‡žšò¹²‡¾Ê>ôâÀrŸ!¨ùY¿ûc‡¤Ê}Oħ><û…ò«÷Ÿ(üdõ|¿¡öpµuzE×Þ’›ƒÃ·«Ú¨5ü™BrSq?ã&îÔ,ÉE‡RMf•D¡yRv2ÄS…ÎK{u6îN$ú¹~Ox_Ù'ŸºD™fµ!pÝK0ä ¯¬y_ÃŃˆ,b*D ¨Éã¾FÄ€Y-Vä¿$MÚ'{Ö¾;cWy*òÕIëáùCãªÁf²€¢y9tî<ŸJŠreA¨†wÊØÈÚæ)nq+e«\¾¾iƒÚ«œ„\¦­³k™»%NF% ᡈxdƒ”ÿ€ú`W‚kÊÉÁg:¤À¬ÈaUTBü®Ìæ'|Á6}(Jpnm¹ÏŸ¹U.:ý cÕö¸"m’6rò›&£íaV=…QºnÍ3ú ¡c‡½0q4Mót¿Ž!ÓÙX™#{?viùýy–²öQÇÉÇíu>pŤ··^ç)Ä&zq ¼ãß2Qôçn#>,(ÐôîÂ0ò~šÞ5#²Ìáq¦Càèô™¢f¶“xv–"^´×¹ª­ò²‚aÆ_m·R28kÄ2eÝàƒRJ)k—j`Lr<œ´´žµÄY5b4 =©¡Xìär@¤X(G|ðr¨ö«\qOFeKZ 2yEtp_ñ$&0YÒÆ bmL«öžH‰AˆÊyš– ýiúÕoûyúÐˤÍ-·ÕÈKì¡.Ë6ƒ? 
Ô–WT^Sm~ïaÿ’=cîv&Gä3ÇÏÞCJ*\J¼{¢¹{@öNðªýš¢”РfÚŽ‡d† vCÅáI¬a€[öH:\´Ø×”kvê(XX@îVV¡ÐrËE(nP”/GR{™Ùׇöôæª>H/ªg¨tÜ] øÊvit4Íël‡E[g%ˆm ëgTÔvs]!ˆfE³ï0*®B¦±®[.‹­iÝŠýñ„¶(Ry¾u Ýùš¡ci ¬¬lyMe„¢»¸-+‰ÊÙƒyî"ÐPì&±‹uå‡S¿ÛzÂN¦Á*G2¶GŸœõ3{9’¿Ï8èKù6™é[7¶L‘úþ‰Ë!ÊÐmX^Q†ñä+Á°Ëô½ bþ¾$,¨Û®§£šØÝ…?û´ÞÈ < 'ÿïù÷Áüfß0ïG2$pö÷Ó?aUÈs0oº¹½Õˆ“îl‰ÄGì¦(û!éó—…ЙNGØŒp6Âåç#üu„‹†4졤ֲa5V)f¨Í:>öè|ua‹Ç_¿B¶øaÉ•¶øiÈÒ’QY$û ™M9 ƒ;?d†ìR÷”Itø¤nap–*ŠÓëÕÇ'ÝÖ½ôö¤–Nƒ8»ŠÕ:Vßbvæí`ÀY»x_K3zT(ãJ¸ÃÛ sš’ÿ¬fy{»ŒÂ{Iñò¿ð~y•9u×Ú#Šð±üJŽð„Ór zŸ{ú…¨ŸU4¬ 4qe˜›Ÿ’Þ)Y½>Œ32(±N¬ów7%¢pÌ:È$Ýê>ãkʇ_¥n˜º^ –±Šà-ûÓwÈ1±Ê„ððB4ØtM†UÓ²—ó÷Æ?݉þ²³r@¿™@dÉSEzeãËÑÕ“Ã×pÈó‘M¢Ž`~),ÍåH4¿oc¿ocÇÇ~߯ŽU¿ocÇÿ¾Ê¿oc¿oc¿ocÿnc“Z­ †¢¬ûQæ;çÕ—Lõùƒl鑸khÐUjG[EÝlØ}jß“"·]“b<<vò¶iã>•f–5Ê{¢€CÝ‹Gà38ó@?qÕŠÁš_rú ‰{8/ýÚ®º7…ïìwl«–W|„™™X6FZLæ¡neÅ’ÍA‚Pø® 7$µô¯ UKAhƒœ8L[æÿË'¨ë¸¥°è‹Bе(£®[Ðÿ>ÑT§M>-Ϊui”¬ tÌÕ~Ø#ìèTÌÕ Ø=ИJŸŠBB2_nUÅWUQ¡quxB¦0‡1ÛW%êI])¯£P^Saé?!,ät€,Øbߘç64K\f¼Sè^?ª*e¾=ì,Må˜uWP¡dyœPNZB…°=?/Žô8ùsHe\&9t#~ÿ®¼¦ò/Fù .œt<]H¤ 2;!7r¸¾Y K9°`Ek®ØP™r™^}Ëù7B²â‚rÍ×5ƒ ƒ{ /JHùƖ…hý©™Rþ² ß£ÔX` < k#¸ËônT“åœûŽ.xNî¿?Q‰_f2GÓtÚN‰𥞩TÃó)LTBq·C®à½ÒÍ]›æÄµyGªfV¼#µ¹`0öÜ®Hˆd«2Ý£›”7ÿ„eôÔÕá϶Wû—ͬM~?e\§¤aSÙþ66w¿LEÂRN q‰×¸¨$ñÊ…HÛCŸËM®ìD]J2*óÿ!œ=+t G„®±óëj>HQø{ú—ô5ÂÅÛ#k´ŸÝä]æÉ×¹¬Ì#„µã¬­šHtûJ>R; ]' ú”ƒš(®¼ ßZ¨ó:3 {Ã*ï º—‰Üµ¯\a"ðÜdÊ‹@*ÆZAØŸ>1YÈT㭜Ǿ ¤/Tþq—Ì´˜t¤¨»lÙó‚ŒhðCõûs¶,#Q^UÖdnT=qÒ<‡°›ÚRàXÇW‹ªIš¥B¡Bk…Vÿ²ýø›txŠ:éœ Î¶(d k橌O4aÒbQ3k×Ù,^æË4'^Žm÷==v¨VàÔ¢P¡•Bk… X½âR±+„0Îvï3õ¶&ÒÔsƒG`3ª¸¦eeAÄ`Å`M@5p 7`ø1YdÚr-ƒHÔ/q¢ÆíYKïF r˜Mp€W€ìÁÀOZpN.Ôck×Pí¶ŒIúÜíœå‚mË+*ClJi!Üœ¬Šþq´ã ãlü(•Âx÷ÄàÇèÑ$ÿ‚~u¥ï[Ê¢¥-׃â eãºxúÙÞ%p8=V•¥VÕsl¤z”©šÌ É­LQyEå5•7T–å®Ì Q?'È¡ÊR¯ û£iáˆÞ<yiÚ{·RLð~]™ÿ¿¢²®³¼N\,ɤËÔT–Ase2D$ûŸC3HÀQt¨2ªªíò,Un‚Pe1åx¼qšÍ¨kC]»Aý°§˜jH)ÿp3ó}gåÝC¤+m{š.Bƒƒ5ƒ ƒ-„›N¸é„›N¸é„›N¸éD5}Ç@ô÷þ¥·Te¸ †»`¸ †»`¸ †»rk)·–rk)·–rk)CÊßò7¤”ÙÔ¡¥B¡BüâŒû'ËײUè¸-ÃÆUl ˆê`ÑÅ ¤ 7‘…%Ó]’ˆÕ’BR:’$w "A‡ò®Ìÿ¨¼¢²¯Y]9¤² ƒ½ÝË1¾ðübGA2*J*ÐgÈõî¸Ç%•qãz°{G~ñLȧŒÂÍ;î3“cv>ìRú‰BWÁχC‹³E*@¥‹|ˆccx®…ñ³O­¥]„J´E!_IBPï–C§“4øž {jÅÎÄ‚ÌW5=sî¶¢P¡³†Å+y±åÄ`)ššÕiþå(ÿ:þ‡ÿãôáÿq4W cY ÿ8ýgü›$ÿ£ÿ£ÿ£ÿc7þǘ IŠñ?ÆœKòªc‡Q×¥ÑÓÈÅ#<2É^íFx?Â#“ë•á‘ ¶èîOxÜŸŠqˆþ9Ff ²`à-e{Ÿ€RsŸ øÅa‡õj×Ãíu ùyô$¥±œgEB ËÜG eUâ¶dè^>²2ìëgr 
ÄÝkÎÞ÷‚ß…”»;¬ -1Á)5&[˜—7É‘ëœÌnš '†(Ü%›¡)·#ª°,S¡¤7Þã ƒßË·²¬ טłî–VˆûtùÚCÙ®é÷ËÈ ê¸í@Í›²eküƒ²bØßÈñ±0UÜ·Ìäd¾g¬ñ…eó—›[õ«%?ˆ-ó€€ÇND!sˆåö7TSc9÷~// J˹cc6=¶ý.y:@”S6ø¸óHÖ›dYLJ•c´átâW%3+UC#„õäu#¦#»‘ùŸ¬‡¡ãòW»¯…ú~Q—ýÆÑîË@ÏŠˆýoФ2‹¬3fsZ çdp/ ¨]†•‘<5nMÜ˰ïb¨ñɹRÃmÁÞw¿NßEÒ°_:Yôö ÙPV3ûâ­Zc‘0 >qbÌ6íW0Ô1`ú«²‰°##Cf÷˜÷ª[y®·³Ô¥Œ¾©zy©ïº»Ý÷«ÖOêi~òC§²ÚŸ‰é}ŒÜînøÔ§Š#¤í*¾ÈÕœ8……sI”™wÏÝñÁ±•½–öZŒ¢ÈÏý-ýfq8Ôîq² Œ÷”ÂKf…ø%¹ ÖhìÉxèÐg÷8¾ÚÕÏ ©pT÷´7åÄ>ýËVÞð U¶«a jðL‰/í›Å{é¥ ¼&K‘#À½oU@6ë.ŸKîLÄ¢#÷Bu‰–pw2œvÁŠäx¢;¦̽½¥àïÍx@øÚÞÄvèžšÀ4>´+9Éã¶RµDN3ä4é«Õíz»Õÿ€›×ñËÕpNÚF™äÜ¡H<üer¤©_ßÞÉ»o–M¬ÞndLÿ¸ßˆì÷æÿý¯ÿç¿þûÿû¯@ܤùã]²¥|¢N¡Ý#kÔßô”SÊM 6ºs©P d1X3Ø0Ø2¸c „=êÎ?Ž› §ÔËbý3VUzßnøc#y›ïÜ»%¥4ìÇê-µµÁfðM‡Ð±¾ Ԋσ54´/*¾Ô¯;êÞuÏ]Yéá†•ŽŠe ·{[^“Ë{©ôºÞu´–ðã”-—êeöÛó]{©ŽhÈí!ŠÇO_¾O"«}ávM“¨XS¢Ñ* y-±ŠNƒò²ZŽÖ¼öh3úA´Õ]ZnFß°å=e?r=ªË_]´Èã ñ(TH(WNáäƒúÁp9¤2ìÑÌZ,p]͆Ê2gý°±ï!¢×0Ôp¥a¤ú»âþÊW'+[9uuÝ—ÏL¢OþDZ? yëõ lZ+t÷ª%1/0K׋¥Ñã}Ï=^©*¥ã4ÝrÿåÛ^ZUÂ<Î>аeÇfäÑ,¼UºÒÏŽ ”µB÷ m¹¯ÝJ¤ûÒ£ªãàAÜШe‡ù—U£¯×ÛÐ…"czF½Y©ßÞñoGoÙpïø%pðÑéo¹mÔfx½ü_/cµúÔúRs}a6yߘ5î—Æv'ðUÇ&Nk€¢Öµ¼Îâ\\7⊜{ãö+}Ng'þ4!Æ4$ CW“"M´iÜ -¤’ûf@‚kóܧæ‚îÅd‘ôÜnÑ7ʧïÄqz¾‹É36l0›î(#EíWÔ¹qZ8y¬G Å)ÆÊÓlÍj2@Þ=ÇTN¨,Ⲧ…uЇLrkôCg¨3D;èÊküT>z¤``í°S,I‡°OªÉYæ¶uÄV’j)Wºá©é`1çBvG 'ËžÈ|ÌõvÅ«7EüØ–k6 ¶ôíê)YM·$k>ËEíù1DÔ2¿’ü(â®V_-쮟xÿ,òª$µ¡{”\·÷y×å².¾b»9”1ãfñþ24»®G <×_œú]Êü¶ç/Nð&nò”Þ¤(¹Ý°pÎóðN¡­B…Ö Ý+´Rh©šý;ú²{ª€$…e¡7~iâ¶j ý£57-÷¶+MxÇu¤†ÕuºhK á°{œò„[—;F_™p«v¸ï}á/Å„½PD§Å2VÁâí›áAwDF!ººÐ#ôy‹¥ñKj"WÆåšÑñÁGàÂðÑÀô¶.â.…ü#2n[ÕÜ!ëÝo ¶kk1Á¶5õ5áùÝcKnBùNÄ¥³~ÕŒŠçoÏêÓÈÏr1Ažåˆå‘WdlRÃï±Èž¸ŒÓ!{XRYzcéMà®ÒÿºÅ*q@~ñÃ~À±;ýÀ GüVü¥Òšý:. 
­´#¶§f!ønZ¹€/|KðQêÅ`Þy ÐǤk*Ì)1Lµ´M"37£/níA¶ffˆ¹A:[FÃň:§âU2سZb†ýƒèfƒˆÛçÙPdÜ(Å-¼TL§òÿîj¦Çì¼ÿÖ¥âP5DrêN~CãÁß|åÐ\ñÜYÕ§q»W 7`w؀πÎ߉TîS-D V Tz4n1&€ƒ1'+òê ŠóÝIøâ»ÅuÒcaOŽÒ”ãÓò¾—§·Þë§Eý‚7#¼á;ÝoGji¸Òp­áè³îu¿—ê–CÝr¨[!âFºÏ‘î3¢^< Éh¸Gx3Âk=`‘îZ¤?:ÒßñBŠI—møÔ8·¦~O.Ö×}¹bíÿ^á¿WøÅÎkzáx‚]÷H–¢îÉU¹zlî®j„--w%ú²C$¤¯1ØÕ} §uU2ˆ¬Hä¯Q˜käe¼÷ ¼#€¸>-¥Éa NÏqâ@v:7™ €„(À*ÖÈØ_§}£×BµÊ>‡lËbÄ¿"9Án#‹öPcÃäá#î~%³çÒÎÜñWÝêo¼å7 a¢]bUžõ ‰ºJ$Ï(2Äjò]Zqs²‡GžµÊyUy¨²»¨l­—¿~¥–7ioF¾z¿·éïmú{›þÝ·)¥#ñùOwôY)fT-Êò*›¾:·ñ\†‚ÓQI/…rÖÁ>íMòr.†(vôzùw+@[º}O1§.†—¦ˆ9«&èÀwޡЧ'î—Û­´î”ruö¦²2ÿ$O{­G<íµâdw  -âÿ¹KOXäÞïã+ ׊Š&nIõ•¶G|æ(‰Ý¾Bj“šŠ î!Lñ€3–ázJØU˜ ‰)ß3®à¤r_O`A‡Šš˜ÆÍ*`¬òo äêt¢ø¹IÕdáfEz‡õ𠻝¬Ä9?ÇQ½6³¼‹ëA2m˜KÛ•QóÊ`µ]È9T`/jª!¦2”vFíÎ E[e')Ó#ŸH;e3ôbÍTUÚzG¶É0õ`@3Î{´8²JßÐ`2²Ý›gôÄéWÖ¡+ÙÓ ´óQs±M,…XírYïu Ÿ?¹X3§gÈÁp!ì<ßµ` r"³ƒ …½™6_Ÿå½0 ù°=9"´6 rˆ¯ÊÔƒà Ú©ºª+T]¡ë"…ÄØã©v:y³ŒM!ú¾ HqÖUüOü Î)„ë„»Òž¥¤‰ògÍÀbðÐ%‘MÊ2æÊ2r¶W=!h«(T(Rh¥ÐZ¡Ó*¤4P³RHôÕéË2ÊL²øAÙJè)åËB¦æ³JæcVe2.*&p wŠ<«$ÉÚ¬z>­EÚÄ_%ªŽê»Ë3äã ù®Æq€0n Mˆ¶–¿B±45i[;Ø–‘#E¥òž¾ÔÄä—ÓÏÛ¸C´yK¤— ð“_cÖVb†›=zÉ#k/ UGJ‡…ŒB©B™B;…r… …J…¾*´WÜj+ÙF¡‰ÿÆÎ¹rßtÝ.’C“øßá’O€»ðo.™œÔô5mÖ>IE'YQáu³˜~Ð0Õ¼_ù·ËÑo—£Eù‹\‹Æ^˜¿=>æiD‰Œ¡}Y ar_D9bÇ*'dÿ41|‚¾‘#p*†Ô„cÒ¼w^N†'XšZQyEeˆD*²<³°V¸é8¸Ác Ò|$&õÉ{@Nœ¹+'T±¦ò†Ê÷ô¬F– Br¤°$L†#Ý‚D¬¤WY$%rùPáÊ 1ø¿Qš`¸:/†>_Þâ7H¼s×ëâïùO?QFAŽÜînâ+²Øõý˜ªqEŸL‡S'…Á›Ïã-[E‹äû!>‰µŽi…*#Áý¢xUβJŧèð¿ƒÀŸÕø NGØŒðø÷åç#< ‚‹á_-(½#{8ó >šï‹aŠÝ0P¤õD€þ××úkºý§Îp¿¨“¡…Ü®kÄ`oÞ5ù ‰M‰»"Ïv¦·K²òl—¸5“à4¯ÖŸÄFfßÓ/ñÐno…IŸ¹\¹Í€\Á¾Ú)LÎû!$!©åæörŸâ3†ywâà<;ÒÁ±Úmž¥™|ðM‘uíÙu²Óì)"$°~ÎÑͬªA9ôý%-+YTÈ$7Ò;rš´]Fb„RÝý&¢å»Ñ"/ÝîyÉœÅøyñ|ËØyötº~ãÄj8 ^`¤!Ô”ÏåÂÓöXîC„}Û§YјÒmrøHûÁȷ⹉åFøåÜìº}«“œÈu¢ÏL.&-Á;‰” z< cŸo@Q·|Á”ˆö#m«.âºÄm"Ö·RÉ[)iG¾ØJKë(ˆºz ›æ!éÀžTƒ–µdö޼¹ƒ¬%)wíe@÷œèüé Þ†þCô¸Ëtiðù#±ÁGÚ&C?@#|ãÉËò4É7^4 g^t¬;†>G°?öcÅó€,)8 —-D6ÐÂö/ÚŠ(ï{‰<¾NÑÝÒ.ŒÉ >îvt ×:qÿjµ°Å‘Ûâ‹YÞÿ÷ÿùßK½:&Ë«ˆ5ìSã•9•zhv{Hì.y—ò¿÷"Pâ,ÛgrŸä˜8Ça`Z+u/ÇfBOž×ÿâêôϲ_…e×qüê—£r_)DMžç8˜ro½{ý儎Ç]CNá4âËð) ^jgSðFj0 ’¤î/CºåIc×qŽ„plFT½©¡S[ð4×"äÕñ¹Æ±Ýߊ¯ð2&Ú¶¥Q ¢/xRË+DL8~*2ƒ7H¸»¿¿%â¡^,¼7vÃÂiUÖŸ¦H¯bÖ¿ íp¼÷ʺº°Þ¿C §èŸ‹ ±pFˆ0NSÿòûûQÏíÞ_^9œ€*ñ2¥ýóƘ8çᯠ9q9ÆÄ™°Ó!'nüž¹9êsîè…§–ê6ëˆ?í1*^ç øR6ý?ISú;ŠÅÛQ,Ækut7S>-¹FÃW 
Å)&DŸ¢úѧÁkËÕ¿{Yûר“½Ã† fcWÄnüéʦjÂúéê@#Ë&2š0`š²TšŒ—Ë6D¿(Zø¯Mñ;íůK{1© ½:Ôï%ýÇYÍÆ'†öGäl$S"ž+å0Ó 3ÔÝoè£K9_'µ#ÿèVø±§ê‚1ºPŒ9ï×,è[!Aî&hÌd. ”`*ðõ»R­ÂÉà¿W,)žÞ3±ÑaÔ«Ìo•Í'Ê]c,5V±f‡å\êò ¼ÙÜš _Ç–©$ÒfÛPeš©(;›²mš¶BÛ)ã=ç™#U$ëØXÆC£5Xg…òcé·’E+³*³ùZa¯Ï*¹ê´(U ;µ´w#¹š…iá—p…UJ5±8G‹hôu{t{ñÚטëݢM?Éhó*3C²ÔxJCÏJž‘jAKŸYÞ|AĬ¤ªZ¶9*YKˆôFŸbbƒØʧac†Ö4  žy•©Œù÷F[Þ±2E»rHeѳ_‘C$Ø¥hÀ£9q>LÆ…ƒe#NÌ¡T¡œÑ„³ÐÈmÎÔŒ92Å«ý$Æ‘ë»X¶è¬Ã©@!­\9¡²„â³Î•Á‡ ˜í!ßI;œ|^»Bc´°ãO¹hýskU‡|ÄœMXg þuvå®DÊãÊ0Çݹ 92]uWÐc•0éÎ,>­c ºö%´ƒ­p1/8Ñø[<Æ¡Æòü±y™åcu7j®nñ„W#,-¥œï¶¸B‘zPÂ>Еe´ˆ$¿èš¸‡ÇŸ%°¸* 2µ´'ë´ÿtbøÛÛeÞ‹aÜËÿÂû%Rˆñãòòl"`‹2Ñ#ï#ÉîÊŽob™Ì \^ÌPÉ¡ä?d Œ',P5`âþm’ÿÎŒr*ÒWîÈMÂÁŠÁšÁ†Á–²|Ó· E ­Ú(´e$Š»úCÏlµ'Zxüý1Èò$³½¹ •Oçàµ{`[F}Å`É d ›Ó,¿ÅTN¨|Z–3³ìPV³8•ŽÇewúìY2 –E*´Ë•åå#¦2îô‡i¾*°f°a K¹­ yÜ[1€™ñµ?­ðVöî1õܽªºUˆ$´{‘=´âÄã5!ó]çây7·ièÇÝ8ð/yé`RíþÆOÉ×Wf¹Þ ÅIŠtCýᵨl¬Ù¶‹±;Z$Ô¦‰Û¬­Rý.EÃ~“Çßäñ7yüM“Ç×ä‘òA½äjõéìV„Mâ“d&H‹VÈSz($¥BÚ2’”…Ó(ÌÚ˜N©,M>àâÈ©0Χ{˜N_±p¿EWB~Pît¹è‚&† ‹©åþ’DPÔII”ÇÐ’{$óô\iªç`Iñã[–®9ë¬%U#;¤I‘Õăô­è7xîKÑG¶.a‡ U0>u¯X©W„ AU\ô8ŒMÑŠ!ɘþ˜ímxý,ûn‰›•wÌpƒ_·p-5ZO¾ýX?ÝÿÊ홎ë&õl–öÙÙž‘lø±E1Ú¼F>¶G.­,µ}^¯³s bÔ?½>.tðÒÚyc¿k!M„Zbú˯ZpýòÉ…úîÝ{å·©õ¯¿TïùúÁÊM“>ì`lü%Ž{{(µ=KÉÉŠ4o‹†’Ah\f@]i×{rêókWûP½6Õ;„yž›´FeÑlj;_š4ªµBÅ”úméžUúû64 »çÇLw|ID_'måë¡îø7?–0H\7CÕ!ר•®ÅåJ÷×p¯V~,u+ ?(Ó‘Är.é¤ÚÝ)âH©línlu–ñxÙ$2Užn©•5ÎÜY•t2X=8†ÅPo{a°çehe] º4èNcPÿy’PŽó¡®$µÄÌÈâµEñ´Å®° ½IÚ\R§·9ea®Ë†\Ë\•pJ”Ås6< .- GDID–Ô6"éu\ñyyV—0·?'ûâº%ûbê×Â,…RF¤•ZìyN¿ †ò»À]vÞ?7_— ÎÔñ­;…¾*´§ïê;!>¸"þ®kû|ߧׯ«X¾U¡_Ĺ.n ɯÚÍ~©ëB®Ãµ´m÷\iü§Æø}þÖ9?¬ú+®âvæ¡ ¦;[•öÿr4ús†®'yÓ¦@qm{ÈXâ|¯ž£Û¿o1B6í:²€`÷$̘ú³Dw˘®™¢ ò W΃‘üßÃÝœmJ=ŽÏôþâÏé¶;ñsÏM€Ïˆ! 
˜e½¬¹þÁTОz¹VÕ‰®+©ÅtÎéeQ§àÏ’ôÌþ²Ã©ÿ³}qšå?z‰ŠœÇ’X|nÚÃJ`fÝB^½oMQ©c{ÒËÉí¤ÒñUÌ÷¼­[£ûw~ŽùS_)2SÍ­Ž˜PN+Ì ¯ª–çž³7^â¼bçÁÈ5!×Lµm1µÔpmwn¦ELôOÞrå?ºÂZâ7¼µãÏm€×ñÈÙÑüúæ‰L½*h‰Ú¥å!¤rtv .}ùä[¦vúõ-Oôÿÿ2NÂ~ÃôóJö;°·ÜrùµâÊ=æÑ‰åêj°ÝíÛ¡TЬ<ÞHo¹FìÇ·"ÓA£¤æétÛrˆ•\ù¬ÂS¯ÿ“Ö Ë…<º¨\kÜøÝ©ù‚ yÛ#‡Š¯ÿaki…=ÖFŽ~KWïß«è÷*úÉU˜gÊçþ<È =|3Jèâçy$/wˆÒ¡9Xð“KUõMUÝ2¥ª*Å ”¼L–§¤ê„»-o¨ ‡¡×â)Ͷ$bbýú–­ô ¾®œg ìøm9¤2¬ˆeÌ7­èåò¸MO³4Ë‘ ;oO'þ¬±‡-BàõŒípLå„Ê2E­¾Ã–ePìÛ)Ÿ¯Òû–ZjwûŠÊ5•[ùq#:7+ƃ6§wç[9‡O»—p±¶Œß2»ÃiÏ•6`áG±§v‰'`CQ;ðŠ} ƒ§8mE9ã#‹È nµõËõRó7c˜êe¬[:d¤Ç +­äÊøÇNˆV ÎÆ•åƒÊ”"y80 ZÓÏk’ÖrYˆ´Ÿ´nzZ™½þ¿¡ò5ê°™]oA¦®ÚÙã=(ö»Øk×,ôY›ËÑ ž·[!S²Ú™½ñ>bp’í<ìåòg‹Õ¹…>ër9V ô9èšÜö¢ÂJÊ:ú?öQ—ì{~Ž,×myµ‹x½ ïo‘Mö­çH[c’Oá"Ma‚,3[Ɔ~ȇnt‰1G¸èáZÃÓÃ`Ÿžáðu|hÉíƒÆÛ(7Ö)ñù±‰*âà’‚îyx§ÐV¡Bk…îZ)´T¡V<ùï‰:ìÙ)½b»Œ¦SmžZ¬Uß<è;«e£-ì†Bô # ŠºzÅ ښΕ{ ?ˆ.Ÿ+ó¡àgê•“g6b°kó l ã_ÁqÃëV¢·ypš‚Ñ\Þ˜eøßÿç#0 S³dÃï6ð³\âA¹ãJô¶ Š!=5 Ü !áG^$–Ò¸uy•Ð ¸¼æU\IlªYeä­WM+ˆ-ÉCˆ†¼tR>«Îãš~žïK\Á´o’×3‘Dô>ÐÑÉp‘ºý½WP—b“r‰j9aÜ£æL¹ƒÚÀ{‚츃C%>´óÌ8˜‹,‰Ò¬²Z°oFÚ0Z,}ûÕL|ˆ\wùC`)îH#küäË‹{B¨ÞªÑUFämR6ëêC®aÙ[Ì ­Ô Ä[ă„Ñ#ÿê¼`ϽVÛGüwÍîá×f;²r –Ž®bdÄÈ Ž^‡‹¬ õvÑÏ ‡ˆ 7øÉU Ù0µ…eézÆí•ÆÁZ=òõÝ!ïÞ·…¹ÞñN¼ý.Iv'¸®BΓUÔòy™—ÏííCé…’ .¨J9—gU/,3[¡Ï{³Æžè¤™Ôöé®J:—!k¼YÛó]6â+voûn„Mò½ÃÒŠž»Ò@ÿå ò½“öú~é`®‹p]ðÛØGæ€4;G—°LFÉÇrôÛ{Ã¥¾oAèQ¨†S®¼Ä-êªÚIG¹ˆ+úÎyäþá̰2‘ŠîÓ˜¢—þÝi¸QcĪ1úÐK<Ôøx{›ü­X¬›ãÚ<¯/½Äs-â¡¥8­Hè¹5±™©:`TUÊ ãç5á\£P¡ˆ)œ P_’mŸ:|Ln„U~¡züÖèj3zëõøx¢«¥“/©”bõxܪq8‚^XººW`FÏ‚ чBF ÿlc¨j«~gD÷ ’W;`HKp#r€ß$»M))Ü7RñVÁÔt†Äа‚c†„<¶¢¡(ÄÚ+ü{¢l!ïÁ¹T(T(Rh¥ÐZ!ÝæV¡;…$Ó×ñí· &ª7#¤ú–¨¾%ªo‰ê[¢ú–¨¾%ªo‰ê‰Œ4ª3Fu¦Wu½®S}ÔOªn÷ªÛ½êv¯:ú¨Þð(K¥… Ò{åÉŸ¢;{ÙÛ™& H›vsÜoÂÂILüìþ%Mé¾òVE6ŽÁ”]®HÁ âïXŽúFÂxrÙšÖú~&6ùÇeéOJ5,øaœ/¿ÊxÜeHzY-aG 5 ]fšà…ºÙ}–sá4Iüc°)§)½ÊðÈß 5õKsË%üùŒÌ„þ¤Š3˜Ù{mO) åv2‹!v‹!v‹‘’7æ1‡ˆ9¤€‹Ì"·<ž†ׇª“ØÈ‹¬( …G¡B…"…„~§ˆfœæù`ÊŒä1ú“×’'ñ&°ó€[ûF ‚®:(N‘G$ù¬Í}¶»ä5”ƒÜƒ¦’áÊ!•#*‹Q<{ Š¡+ã§ðD¸œÅ7äšH„ÀŠkVª†[“lø7~l£Ë¶ÜÀƒ{õ=ÜÑ¿úpK8' þÍòÜ®cÒ`ªTPÐ$è® -…ݰ‘ñ(QHíYLG”)´S(W¨P¨Tè«B{… NoabÚv%+,K)íeÐQ¬Í¶?³'ˆKí2ç&•§N?·iå ÞÆ-ôòG´äîÃ'Î Ù$äc_*¤´KÓóï#5¤‘Ò‹ÖN4ôLvÍݪùÄÆq„Q¾P9þ¸U¤–„lJKyP„ w)†këôЖ]s7Ò wm©º¶L¹7°MªA-ŽŸ°çO¸§^ƒ‘jäøµŒ#\ÙÜwb‚¸ªx³}ÒIFüÅ,·NÔuÇ t÷Ш}°ÔèÔ›ãVøâẴ{•¥¡gÌU%˜->¤'q«‰›A­½ŒÅ)Êo†ïe?4ª!X¹"¥±~ ßé'#4[ž¤¸SãŸê¸ 
›Ö† ƺ!î+qlÌí[uÞës}fâKGüU‡:Âúf­Og}šAÖqZ@P&¼Þæéð,îÙ³ôAøûo9°'¨õÕú8…øz¢Ê~(°lè^òçnMº/“ç‹ù5 Žó'Š0¢‘‚*ñ’…±9iÇÈ´ô|àæY¤(fó>î@*szo¢çhSöØL6ûVN‰ÞvyÉ ¤ï”c¥epâ8 ²÷ÊáV>OkTÌ[Š`1oKd Ó¬Yßu2ò9üUyõl$sÏÈ Þ9¯õÚW¨²/)¯?ªX¶Ãö¤ª°«%ì?› ÏÛêÀ‘²8…ûl¨cÓôÑ=Tÿß‹šÛ´|»ŠÔæÚ‹¨½Ç®ýÍŠkÈ¡&†Þ>#rul@¦&¯†pM±5m¥ÈmgWámq~Þ˜>ÝøïÀe‚CËl”ûB⨸ί© ^´‡XùØ¿¥¿?lÉ}ߪÁçÒà ¯=}–úfQÛp´£˜E›§s5˜ÀÂlWÆ©IM²º‘Åwé3fÐ!:uXÎ`@=Ï«¬g¤M ¤¯9ì¥\„d"ÔIs€K´-‡T–ô@¦ldòyÀ­®˜ætÚ…ÏFD†_ü$ݑք<È{áÅíqZóp¯¨é‘3;6!¿’ü¸»Ö¤àÄLÜ5$ÎqýT-`Oä5¶•k‚»Œ%F¦‰óž; U¾[ 2ò±rZþ |ìZ&ó^½–áÖ7+$;2rT¡Æ¦ÖÓn@hÜ"«a×[¢ <+7u6 îj’ôàÇP_Ü|Oz;å2}Ý*4ß"l+\°3 û/±æ;;WÎsåÌ‚©ÈhiÞÛ࿘²r³LÐçê-Ñg…8‹\"mV—¢ LdßÅÙÐÊÝîès[¤‰>½JF c4ãz’sôY“ÁŽ®cú¾±£:l=Æ£Ñvé“|”½?€ )öOGggªòk¯ þÕYH$Ó$¸î¿¶]›pà§ñþY¦0()…Ñ­"n1R¸GRXûÙ¾–3zÿÜ~âi‰üÇqù4±´tóÒ´z¾ÒfÁ^¬°5¾Ò0Ô0Ò-ëZx%&Qޱx©Ô ‹üº•·zxe©T»ñR¨Ü¡<¢J/S©k>ÅËû Q.Ó7­+ñ;o¥êP+Z¯ßuþô)HíÁþR£%¿ Tþ]NÚÛµœ6`{°×Šaôz&æ Z ¢õTÄp‘s@Õ‘[U®M›¢ŒÈú¬()°-'W{ìbçù®¥ Åë ^r‚º¾Ò£¦€9Ì0æ8Æ&bpÌ»¬GØà\¤8Ç3©bp¸/ÄsÞ NVmÇWŽŸÉ:QÇq4Â!Bɯ™‰£ä*)¶í[ðM1q³¦…G‡ðs*Sˆ{ýiô(H`¯ïø0Kñ`ŠAÁ9rú7Ý"&VÁ¼xu›•°î(ÓQûÕ•ëú¼‡ÍHK–%®¥ˆ|NÑÊQë+pÖÚ ™"ïz~n4éW1•— ý…¥Uðåê³Qß ßñ§ËVºÀÚ»_ɹy‘3nþ—KÝêJéSÊýR}ñP-’›à_·Þ¢ð_/ŠýåE㻵V?”1ú°C‡%z¥» AšK(WN¨œSùÛKùÒœI†¨*!]©Z˜´RÐð/@?vÂ!µ±£wÿ rFeØç¶qKv½…„˜¦4•®‰’^)ž2U[6j)“TÞR˜ ¤×u.–9¾ P¶XÆŒROùG ÉÞÕ‘‘[WsË0[µÇ„)¯*ßàýh„q×J(&†6Ü;þb8úÄãÏxüÄÔ¹2 L}`áÂ;²×².i”0zoF³‘"$¨!Éå¼MAŠ×OÊ“(·œ+“&r†*Ãö¾Ü±W¦¶%å§`œÊº¨s[©Œ‹Ò¶«Å3•ŽðBzFKeÇâltŸ’Äq”ÍV¥ÁË[ø2m)’W6TÞQ¹ÀcV#’ñp™£rEå„Ê{*7Tn©õý__ë¯éöŸÇOIh&Bj¶T£ƒ—ãæ¥ÚvHIÏZµœi¦j '&£;u›Á–]¥«®žÛꪃûxŠnÔï ×³ÓöTmL])– /S>‚™š4|LWõú;Á-Í^Ýÿó¼Ã+_?S°œÄK·vüqš?تn-ÕDúõå8 ²lN8Q“ *4s/;!S¯ÔìËY&Än RL1$_ÚâÐÛ? 
­bPüÁ¥Æ-€­£»™H»ñ.&.ü„ÃSà0f±§Ì/ ǧ¶ M‡ÀްøZüÜÎWò=‹´Þy¥ŽV§^‚o-º ËÌÏÀC—œ_—ÁÂeà)ƒÊÇss¬ÒOêÊ5(f–-CjülG\æMçyñ•æN¡{ý¨ªÔ3…!HœgmA×m\gœqÎ C¥&:fŸ‰P…DE?±Aæí‹¨Ì’yØ[ ›gHžb²—¾°ytÊÎìü±/ʰ݈ôgQ ôüâîñS¸[Níÿ’«É´Êî9Ú3l¯q³‹ëp)*FÞAl½1ŠCyÍvâ°}Úd2Ò>ÔŸ¾.ü‰5xq]%ï¡£é›{k¾Ç3LÓxy®ÆÞ{äB÷»•ã¾Øæ»¬çOŒá/4§KN½2F€¸*¤ $þ÷8Ð, ó}‹VªêÔ#íÁ1ƒ´Ø_-jtVë¹C ^âtÁw†b(ù_÷ªa½»(¼„oŽ=¨ÐÈN{.qêÛµ §óf•çP Ò– Êf);C¿nËk*ŸÈÍ<©%Lï´s‚ʺlD˜6•IÜ=C¾Ÿµé‚Åo|Ð6í:uï$Ò Ï5–à†˜4’h ¹ïND›üÐy×PSª”ÉMM1à‚o…4káÃÊ!O–”Šœ£Rqˆ©sÿßÿçß ÕXúñ |Šu|/j“vábÈ!mŠö4 ðÔÄFÕ·(_<geÚ?qÒ‡å±åŽ+K¯UÔ¬Z  VmXy‰ã>-ZRÛ'aâØ Iñ¸eâþ¤cœ%â.(*ÿU‰Ô»UIiÂÀJ«`iêŕOKjáʰ¨c´k¡ÒÌ ¨~}mö@˜V’dßR0X=Š9ô¿¿ÐÅìȬËϦ¢ž‘ß!âg º'xZÐíIf FbÞµÙ†íœ?Åpü¨\’emß·”ÖY1”qÒËðËzª:0iAÒ&ìÈ}úõÝwÜ'ÚôȰó÷2!_tÙÐË('R%pádÊ‚ÜaÈ@.#ñwи›>í–£WdØ~4q:û÷›¥z!Œ·“©Puþ¼@£6NqêÔFÎ0CáÙ²Vüšƒ]—íÝßþÝ€°9lwl@ ‹ZÂßY>’@9ްթã>Å­ð¡;¤Tîõcz˜ÿ¦ëSû î&dä×Ô5üÒ^åÚ›¸,ìÒ–&&GÜÝ ~P¦bÔ5«%þ›-Âî(G|¦ ¥Š–l1<>Úk7Û KÚʼn÷bâ>ƒ§ÙŒüÌŒŒ“-ʱdb±ø³ÿÞ ¸EñÅ{ünwqŠ"8¾7²ÅrÝJèUð7‡/¼û?„‘¯S,Q`ü#ժ߉.è°Õ.„%˜'-I5µù¼¶…SöñÚ^ý¥+`+Á~Ê)•‰îÌw¤åp—.P*—TÞS™3COl[Ž©œR9£òŽÊ9• *—T–…nË5΂žª©ÜP¹¥ò7*wÔPÏ­öÜê@åG*§²˜Æ54.ߨÜQ¹ç±£ò#•¿Sù9Ö“&§Îw"S¾F@Å `Ð2x&0zQÑ[{þQÊ dÐ1xd30 ®Áá4Ô|á~B ꥿XZS$¯Ò^’èoÁɇ©ê€VÚï–ª;çyþKù¼¥ Èÿß7È(=à•±?ŽˆâP>çcq«µ†?utg˜`ØŽ­‹DùÞýäöº‘¿ cwùnÁ‘MT4ÁdÌéýýH IÓþ†×ûKDaŠ ;\e]9¤rDå•yòþ®”a2Žøu$âÓhÂõûþß}?OeQ¹7²ží‡wÔäjŸÐª¤tñÿP6­ó‰&‚¡õy·sTpò#kGNqt92É¢ÿ^kT¥¸¸`9„JÄðJ¶pÞnCôé¨óάNFº‡*U%3wùëÕ/°(2SøäÔ!ƒÓØ3ú.²êðˆKX‹6Ž5k…ä´½ccBoqÝK¸cPBY¿¾fE åRn@ŒY³ƒbÑU¨§ø÷‰ê5Ü›”käµ?èúlœšk˜[6ÜXνÉÕckî͆kîpo°#üàÞrÿHö‘oûNu•”¨qˆÕOHúã¿‚{”óWäü&’àÑ­B¡j^-É«´‹Å9­k!ùè\ÉlYZêÚ yÍì/àïÊU¬¨LA¹°ÚmyK?Þr«wT¾§‡0°,„ ¸Wˆºà¿¼™1Wñ[C~QÈcFü£pÅ`À_Ýr [ÕÂîCÄ}ˆøc£5‰ìñÆ¡Èÿü“[~R½8âGüâÕ’´áµ<-wZz•ü¯íibz»~õ„×ô«ç%ý†VOO«G²ªº2-˜žzßó‚éyÁô¼`z^0ýrÅ`MýZ®¹ËË ƒ-ƒ;ÜŸûÃ몹 ¼’úpÍ€_Ò„öj%õ¼’z¬¤‡$^«¢–¬ÚÌôávC©Å"\BÍ›íØ ñA÷{§¿H­R^LË[õ,ÂþÇ˳írɃ/M²Æþ‹$D¥»:­˜Î$¦ó…©¼`:ÿ—Îòu1ãõéß“k” +0‚l“JÅyÖ;ာ§|Ø‘ÔÅ#Üi,J¸*S"ÞC;U·Su…ª+têJ±â—ŸÀ\³ ñŽË!•å×®œPÅšÊ*o©|O?Ms`É dS' ’åÿ/äÄþ_öÜÿM”dÎuFöçq&*ª[3¸OÈ6uýލ|Ǩ¸§nÿ£:˜è~¹Ý¬ÆÿØŒÿ±¥¬nÃðv«¦jÍ=/„¸§·áýöv3ú‡ ãéB_Z8m«ÿ´õËÛÛðîîV¨üÍÿû_ÿÏý÷ÿ÷_§oMÜ©wl†ªikjƒ¶QyEå5•7T–µÑQ-ýlfÉKwȯdËüÿˆÊ+*Ó©M›âÊpré‘áK×Ä}V„üY¸;pîD×ÄŠ¿„ŸÊ¸F®§Æ“SÞ#–—ˆÑ8oûš†×‚AÄ`Å`Í`Ã`K 
á¦n:á¦n:á¦n:QMß1k¼é-Uî‚á.î‚á.î‚á.¤ÜZÊ­¥ÜZÊ­¥ÜmÉ&µð­- â3~/w_ìj®Ù©îqÇeÛæþ!³e{yÔq Õʘ  ó¢o±O ©AßµBãm™báZÀ5k®ÙÜ؆ԘÜÖûÌl0-,+®"{‡"ª¢ÑþÁœÛ®ÎWÅ#R§` ä~'/(ëºgä_g¸nI¾£D‹\Ãdè·‡X(VüMÓœ¹-ò.´„â '`N92ÉsbLx8t§ÌºBNÚ”÷¨vqÒ®,„ "käxGYªˆœ^z60Ú8 £D§¦Ðh乸é;•/)[fÒy‡ìÀÛ ÑI‚¶„siÛff¤ T÷Ižrå„ʧgH ô0(Ÿ=$¢²z0ìb«1íã§e{ø†áNIß-Á<ö‡.¡-S»Ö®4„äÆôI®æm£Ÿ7y‹PÅþÑ5¾2]«y?€ ¹¥.áÈnðø6LSŒÈwàÅm©,q?Îâ &AÌ Ù²°ª5ØÕÃ3’‹<yN ¦5ö jó~ög6¤²Bpê?upuBÇUŒ`0÷Áþ»4vp¸#wD~Òdeõ6+ã[2Éñ£S7/“ bÜîo^"EI)fJ 3Œé‹›+kˆ4þ±ë/}û1Ãï\‹.N!Ð;"Ð_ËÏž zǹy¨(@ mÉ“úºˆ}'O|ñÆ4ºþŒ–Öx¥Õ=Túê¶S‘b¥œ-KÇÕN&7óé‰ü3o%Aöt8?àöÛ¦_†x4N³DòÍ“J›EÝeà»fMÒ¼9s»Lד9`JJùgáGc>ZHŸ˜<1`žÛ¿Uq wæöÜà”¼5¼ö¬èï0SÄ‹¤Í –Š|=r+uuvì%xôvOLÕž˜ªqÿ™¬»²D*KMeÛ–Öš"“(“A¼¸,GÒ~=¶XÉ¡w)Ú,€Ì ‚¦ IaRšt1¢¬Äv«!Ôß& S 3 G-ŸúüÅÃìv-N4 ŠZ1É7qÿ,ƒ Ú¤&DF‹V2,¦øæ‹:n£ý  :„]ï$-ûÁÓiü$ ›ó8MWP ¤i /´ô¡Ç†q d [1X3Ø0Ø2¸cpº Î2D×´ÅNÊøïÊØ7™9ˆ}ð.‡ìÒ–C*GT^QyMå •·T¾£²\dÝ»n,„ "+ÿ¦ãÇ:n C‡+œ0¶œPù´Üg;qÚ˜ç édë}Ci°á,§\ÇAÑɆ]¸»>îÐ% -t(d°apG›¾Ü'ûiùÀ˜Ê²lYiöõA¬¿mQn@{tyÞ5._vÕ‹?h?É@d;(TüVZ*)´RHxŒãï¤kµ ú¬~çá7×écf ¤øÍ.ÈÕ!pÅ»=FÉÙÝ öFƒšË»õ´WŽ×RÀ@Æt‘ù"›ïbзX ÷Ò#‚Öä'âõ5Uà…mËQœÒu†‘,’#µívû˜št(Q‚Òn $ pÁ‹éÞï@Ä`Ë@$þ7êG÷ªêV!^‹¡Z§!¤é'× m‚ä½Û˦h‡¶Ùè:úÍ +§‹‰c ÿ<$TH–±h1ÁËÔÇKrù³ 契ˆ7eˆüc˲ìœn}AÞ†¢‹Dæü@ëÁ‚ÛS¢ú‰êg}';±àmËš^Èv/-„ dòÀZö·{á…¾Øb×ï@2‡v-Koè µó@,Gš%u޶žÀ7¼€Ýñ )ò¥Œ~}ÈïUÏGÊnwGLP×ÃA½‰ˆröÀ¼c°e“fÏ´•J¢ªãJXéAѳ6±Hÿôô9'æp<†¤Â<¨Mº’Åá½+[ZãmžaÚÜ׉[e§Ñ·Z½58°å2Š0óCœècN¶Æñ”Õ½çúVªÛKUwšù™)D¶éöîÄt6Ù׆£ŽFXï{¤1ät+’.Øž8 Ç`­ÐJÏŸ^ÈS¡3Ó—béfR ÄÇ?$¥Äi1ÈïóIý¸…j]¨;,â<Ú* Æb³Æ~^N¾?ÑÓúñ"^éIX©I¬Å·â+"Û³+ãJwþrĸÒÙr&ÂË$æŽn}³d0™ÙŽ(Q(UHZ6 ÜI“ ?I÷ 8]„k;¥š n*—–/Y¹[e,¡¬eY—°T2ŽÔ9€2 Kàè婜c;+f>ï÷ù5XÄØ3#MÌ3q®ACšf5'GWÆiñ±üçø¸©ôŸÃªÝ1˜æÇ.s`à«æ—vš%IFpŠñ( 3Lˆì‡~ˆ¤iñHVÁžÞÍk¨F‡.çý ¡fŒNÙgÔ,wr{¶›ÉP×C*sK*o©UþlèaR#V”„ "[w Ä—)N]Œ[ÉʺŒÓ"¢#Òu1eÁUJ[Ïô*d-{ÕÓZkÈÖZ¶†ßB<´ì©çP¨PÄ(QO&êÉD?)gzüØÂïˆäc“tï;"™L îô§Iàæ&®`õà@È€˜ª¦„l–è“ìíCgÅܘ}ÜH„øRd«¹):ê_ÑQÿ,ˆ€+)ºFLg‚6k® MSÚ@_ì€ìh:c®ÍHÂ’’üÞkä;ó¶Ø“Þ¼À­:h‘ÑÇUäüÑŸ™/kÖVÈŽÔ"xç§È‹>]D4’±Ä²o}^ê…C–$ßÝ39²f¦2;>5¶êÔØŽF:dúO/¸ÅÂ%¿9½™Ï!2vMt'ù^õ’N=¸k;ºix)½Ò@]LQ¯’ÒkéÎÕ)ê?;-=gIÖÂQ†ú8YOÜÒ…Íë®dTÞRlY=еj.BÔ“øÜ(Ä<Àcm%!{/馿ÉÒº¨¨B`~{77QLr†5ƒ¸B”¬IÅÏÌHB¹÷è}V ´h©P¨P¤ÐJ¡óª$Õf¢ÚLT› wLìi=ÈÈ 
Í‹BeŸS€%ÓÐÙÖ4®]A¢»*3ô8¨´,¤rDå•…·ë—tˆT-M©«.lJ—ölçIEf‹8Î>íð:WæÿÃļG’-[–µÇ ]J,Â.>¢P¡H¡•Bk…6ŒÕf¢ÚÄvt(eñs¹j#Wm䪖«ôÇaIä19ÑæP“½-%´EA^P–ÖâyÕ. `T=)²@HIÞ ÝEÓ&ßÑuè|­Ð›‡¡Çq‹ }C‹>·ka/@Üœº”úfAÈ b°"òoRþÍžköª†[Ûsk0Ç|K{›(t^—;oºê ‘b7¤rDå•Iù‹ø ¶¼¥òÊPç?jéna:œ.G$ï¾ H†bÔ„AÊ c°#ð+4Ñ“ò6äÓªf¤5 jƒëamB*GT^Qy-» :y=*ÖÔrΛöyÉ d° öüÐR\ ö *Äëª<ìž¹Ø9À < 1õ¿úÔBPÙ¶PÉNÈ-ÝàÌeîp€4½P ;µ¯`Ïé‡C…"…VÔˆj±$P,`yô†’îõħ92ˆ¬ &Ó4ÌÁP è-ËÂ#ý²åäp¬8Vç%±àí]û YuDb퉄o±œÂŠk4†p– EzÜ©µ&¹ƒ¤¸íàê~ò[ñXÕÊ`ùL¶[†+n²,ǵŠEjS?@r¦•àöPÛíT§î„[š¨vSȯR*ºK·ü!kU¥9WäJ¶LH$\ü yNëØÝ‡„ò!žM¾ÕßÅÄ]†¦{Úµúƒ—<ð°ñ4àIûÒîSŒ¼=Ú"u’‹¯m>äïോq­²Ž‡Ö‰îWˆËéY<Ô(Ò=¤—œ<àÊåÊ =¦Â¯ªÿ=×ÀRݸ"-àÿ)B#9/U){—ÕX¡D!Ã?£h5”ù=…‰©-o¨Œ›Í´…öF¡z2TO†úIºödd•ße‰èFN²A†p|®R9¢òŠÊ0êk£ÛÓØÍú¸Ç³Ýö¼`㙂Œguœ"Ê伯)­yÿgEj«SigÚË18tàm9¤òŠÊò65 ä•çzg?T6®8AGyô{XÝÛ2öí.¥ æ*Å{ZÍÉ]|Ï´ëù{!™Å<Èä 'ÉÚâG2PtÕLÈî!¨óR³"?¿”2ÝšÒ¥¥f¹JŠ© RèÙ*R’p¢Ý×Ï0á`j†à=! æ›ÃñeÖ·ZfI ŠHa <¸cpÏ=}ç1ã@®tè¼yHœ9p)c’ÎQ€FèpP椰³Ÿã4ÜŠ¦5—WºÝÓ"ùÒg][â«_pf©ŽáÈ2ú #yb_ÆHV;ÄÅÄÈ–·$󤨇m½|¥Ÿe~abÿf¥è5î=e¤?2çA–&:S òu’¡gÖ#v‰û:ÙI}Qö”Ücè‰yF–ò~ÈbLF[¦¢pèì­ñ4®§jL3‘PÍ»Ýw,Í5¥[vS\“ª‘—Æ¥HÉæI‰£Íyê©m¶#;ë¼ï™â¦䀄F¿.ºÕC’Iìf^¤:ÌAPC¤´º²–°êó&KU|0™ÙY[ËEƒäôIk2:O- [E ~H( HÃ;÷%1Y/ û… EŒ2õd¦žÌU]®F±`Pòs¸:Ô›Ãì±ÀT?ÀÞm:–•C*ƒ!*)mÏI«§ñ"ÃÊô d€:ˆ+™o9¢)QŸ[sgÔ`ÂâÇ®¨vɃI1¿šϰ(åçûŒòfØ–3²+Ž’À/¸¾ß‹q¶ßú®J®/§ñqþƒôÊ‘Aøþ}³eµN ÀAdOÇMêoÑÚ%Oß)DÞAë·S ([ŠŽ7bL˜èúyEŒ™ä æî½w\³âBÛv-™ƒ\ÉçØÉè4—sžçñ#+Jàäç`ÅO¦ áÚŸfH ³‹S¡׸Ÿ|çWø& ì«Bì}ý€Ðž°«öD•ȇ‰æÜŽM¨EòƒÛƒë ¢çýócÃËî]ï–ja~SçÎYY"þÁŠ›oqN9Sk¹hŠ TÁ·ØÕO:WûAäD‹Þ~=Œÿl ®+DT^QY~ð#Úª!ƒ-ýdËw Vüö ƒ%ƒ[Õåˆ[ˆTÕ=½U^” ÑvCò¢;@:æ~#Ç[B õq(!QÞ~¿ ‡ûEžc€¬\}¦>-áþ¯¶üΘŸƒåð¿”<›¿ÏÊÄJÛõtû˜n­ɬÊÃó®¸A—]øç.1!.g½á2¨ÚU¶Â¼^µÖÂO¯k¬RøíüóiÕ¢ Ø»|‚tI.þê--‚ºê Sbƒàç–£/£EWüìÆwdobÁ*9Æuw«)÷3}$qŠâIÜ$0'_¸2$ïJöÑ‚D8€ûáX*²a°epV^²p®O×Q;{¥;Äm°eìá&=ÈíûÝQô^¹ûœuÞ™—]L6œň«ŸÓ;ù0ŽU·p ”;‚Õ©@t¿†"ü¹Î¿7udBéa¢`¨k…W``Žeö‘ð^š0¨Ø[î9‘Ìí n¶Ú' Øá¹Ï0蟛]%Ös£èZó¸D8]¦V~…‰\»šŠù§ „pˆ=o­8äY,a©íap@‡IüùÁ"&äž#U’é›='w0ãJÊÖ{„!? 
–Ó ÍØ<74ä7˜>V\© ë8RJÙ[™‘VŠ£UP=Wó¥B |=)’Ža0— … E ±íUÕ”rÁíüPQŒÐ`ÿZÄ»ÛC]»ÖµëQ­Ñ0Õ0£×þ c\IÇ6è)MÔA"çÝ!¦æýŽ’182X1ÀZ›BŽ/&ÕRèí`Dþ´pel¥oÀ¤ØrHeé¾-C!lÁ†*ÆÐ„À׈;ɽð–ŸÃµÕ½Ÿkî¹ÉZæ6íšÊÒÍÝ“¦<•!ýdIe’ðH^!WÞRùŽÊr]f½c÷ðHÊY’0†M~`}6p×QZ’AÍÝÞ—oý^‚mtqZ›t†é -cÊ¡DÎ#©œ{{È?uÍsÂm¯¨,_Re;žB¹ãRð@·jÄõÎÓlÈWq““¸H!Èš–*h§ ˜·‡ÎˆÃ‡ßì`ÿ$´ó¸ôxMÁzÃ÷Azä·‹LH³O¹R«ïÞû,_x<øN—†/­½'ì+èÆ^°|ÚW(ƒõ:}òééð{ϽùÞθt}Ò#(?Þ®×ö+õ³aEP“ß~s Ò…x*)„“‹ˆ«Ø1Ôº½ƒÕ5uŸ&ÀU@Ä\b]4æ¬:®iü*Ê’l¿9¿(Ö@ ùžmªú5¨N[€­µ+~î®öEê«xб/‡'IØï áÃ× OÛW‡òѶy¡|„ŠàÏR9Ê®3«ƒv{w [ŒPDdJy]Ì çEÌj ÷Á+¯?¨+Ó†Á–âaÈ5iÞR$EoKÒÄ|øRsIlzÞÂNñï—Ţ̜Ïzm~p]\ËW b»)ʃ2”ñ@ÕD $hD0¹ZËqÉrÖˆnñÃ9lÞóØÁ÷U ÄãÎvÙæøaò)J 9R(²×û1Š—¡N«-SÅ9Sc-QÙfÒ=>GŸ¿“`·¥д}î±42V-e¬Zr b°b°&p 7@.Û‚š¶`É@=¶b°f°epG áÖ~êAÂMKÛÙ.[¼Ý3¬|Xày28{Èsàæy9PBÎ{R-M|{/ì Sd[Þ+ ˆ@<ïù£b˜QJ˜M]3ºzb{S‡BF‰ª=øœ,!ÓÔO‡r¾BªAI-‚.é ²¯9p–êÀ7‹>«ãH¤è}UB™ì´Jæø#Y=ðÙÕ€{hŒÈ©ìÜáîéCF(÷"ùT;­goîÇ— Uê»¶z~Æ ¯ÝÕ˜ÑÆ(”0ZíJRO®ˆ… Ò~®—~¿œºû%îŠ<ÛÑ$¸Ñ‹ãaŸ”Ù&ÿ KWU÷ÖúC ·?_OE×êCr¶sÁ³½SS·‘­ºÎ¬Ô€®¤=;¤CŠp|J&ß-ÔH5žªA“Eì“ZY)=9Rå(.Ž¥‰rÐᬎé¸å7cŠÉ)§(¾Ï<ðeâ÷+4\?•ci*xÈgåXúÉü:%Òõyœw#¢)¥žµO0¤~_¾" ˆ"ì+µý8[Îå9JIw>Ï ‡Á]ôE˜†ãWIÇYk6AÝFºŠÇ?Ê©r9 ‡º½|ä­t#ïL!b$h ½Þœ9ËU2ˆh¬ÎÑ”ë ˜9¾q‡B,÷`E””_Ed6 LŒ(ƒ¶Ìÿ¸Õ›÷rRÈR§CÅ)‡BB¡ª‹ZP¨ÐJ¡µBBkÕÊZµ²Qu›Q„'¶×KR€zˆ¯…„`*Ñ¥Ò}.ÒÒÉO#—£uEšIY ÷Í,Œãˆ¹rÅÜɺ¥{gpyIuzêËqåI< -DÜ[•ße@?ʤ—M¦[sW>}õ¬¥„»a‰ŸK2m$\1X1X3Ø2¸gðH á·&ÜtÂM'ÜtÂÝNø=‰X@…M¼À¼ŠkÜ#ˆ¬¬  “p 7`ø1!;|ùªŸA5lyMeI/eÜe"{Í¥êcQÿ\K>ûUŠY*d) È_”æÇè Ûùg}·1q”Q]¯É ô-ÉÜÀÅ|@* hÖ Ál‚¡yÊ÷ƒÊ<Òñöˆãmö(VúWš@ÎE‡4{ü.¹¶~<ŸT#»HO­NóèIÊ|wý©TG~¯J…݈÷ôI`NŸ9ô³Ûø+Õ9È.E®cþ̹\=B(û\µ²ýü‘Ño"þÚæ†>èlÆ%Û°ØßØ“ʹ!q±³K ¤^íˆÐ>z¦HØÇc~C#ój&ÇNѼ¸øýô1iÿŸËó)TâÄGÌsK6bÂäàXs¹9E-QtÝõ™Uγ ÿÞ,Á_rø«c¥O‰š§”:-ž jþAÅÆ9²}žÀ^ £®O AÐÝÙ2“AÃC0AÝo„ *âf×êoè™)籸‚$|f þCLN[$"%‰-'(‡!•)©ÈÝ-•©ˆ‰•PÁqâ‘5µDÁÓ‚Þ&V ®L­ð#µ¿`¡¬©RÜ’ˆ }4± ˜*YæA^áÆ]Tñ?c ¹À;J(¶˜(xе‡Sí™ø'‹Ç8ÔŠG…±=Vw£æñ„W#¼æïÌì@olȨ̢‚ŠŽ*’¨_Å f0MC´Ž´D³:¦uyO‰Ä[±áïj軂&ƒ2£éHõäVùs¦†bëfÜœK1nìH'ÛG9ªl‘|¡)&çU4«2*WT±™+Ëf½é˼ʤáYßbÇ4$pr_ _FúÄ›:Nº_AT¯ùS;è°Åá-·Ç`Ã@uCQ{ÌϨ€KI ãVÜ2æ°·Ÿr…,å=ÙÄ©p´$¬ÔÊ'—=Èn©C8Ÿ ¦Mä@òï7«æ^V ­ÚÒWœ§½#ç$#çhñFˆ(u~Ä{Õ€hÒŽ«‘úvÔLÜg&oè{Å0¹=–™ú¡=há“æFkE¿ŽTÿs… k °ü,B_É´g$O½ÀF–y/¶«‹¸> Døì ’åLæÜåY2D Ö TBÚ-ƒ;÷¼“BýiŠ©Œ*ÁÔ 7ÿó 
¢²›_q¶Lå:gíTó›üÁ#‚Ga¤ B‰*3¥«bÉ_›2iÏ–˜ô–*¯r¯ý|7T Èn©£„â—µ|´^°:"Ä eŸ|è禴kgÍbxH`<ç‘X":”„ªn©‘ ‚{2⟩&˜:zv]ÍÌ’Ú€tAé„ò¯ˆ[åž‚gk||'¸5¥;kÔGf àR’êÀµÁíÝ©‘t,!æ^¢eÂ0ïBÆŸ™‘¥üò3xà˜å·˜Ê •EÓˆ´ ¶xÚ.ÈMêP ¡™‘™±Eù:[Äaî@"Í@hKɇþcHà‚™¹CÓ"–¶Å\…ÓmºF¹¥1¡Žrþì5LÆd¾Ï¦ÁùCnOÈò‡Ü ÿTÛHJÉâJÌÂrkÚùn 5\@©g;I­d‹ôßÓ¿Ì+\{Ì®@QøË}µ' yZ5E‡6éj£›Çh]Ï K;mT'0W–E1âËï$¼íÄgƒ¾XÒߘ`ص¥”‹D\N ¦ñÏˤ%²m,ûJjãæI NOÓZ-.Œ]Æ·¹Ë{è+® Ó…rØüV\§ƒÚÕ–šDFöê¾C™†Þ–±=â³nI5Kþè‰3îÆ\^`H-FTQÊæ#ä'×ônŒGð•ŽÍ¯dåÿ•¢%N¥‹,E:j‹òﺕ,BJ̽(\8.ƒnå=:'ó%’z|áço áÔq㪉ÀãØd+ó(áaxš2²dJÒ®»–±©Üc¸óuf¸‘ÁÜ‹½×KjjVÜU9Ôl YJä!¼C_}á“Ë;þŠ{ú ¹Ié[XÊ©¾6Ù– •S*Ÿ"[WgJD•sB’G¾taeÔýŠn“Ê”6"ãV6™e»ÿˆúÑ ýDä9%æ„%‰ã(°'Y’ LD¦AÚ rç;æ;C©Zw&‡RÞyÊÑïÊÈ)û^#þüYdÌŸ,¸º*Š´â ~íé?qàOù©Óû;/°q® ñl_ŠüÅ›œ"–g†©•D·:ƒüØŸ:ûŸÕÁ,oo—Qx¿â]¶Òõáýò^ØÇ©héA /e >r^æ€çö=´=WØžØWsç'wl·–9PÞ!ä‡GöŸuˆu‰¾•ðøm±òkÙ«|¯Š¦3m·ý?ý60²A‡wò¿\˜•ä³Êˆ%PeÈœ°é@®KŽ>#F¹ðþ!Õ/ŽûaÙw±ŸäBœÉ»Á{Ã{ôEDñb¯p¤õÏ"ÿi^ ÏÝbŽ —×ù¸ÝުɑծƒðFàý !ö§WÄ«:xrap«¦ØHv—ŒRœsUAê…=#¬ÆHM¡â?TH®éª¡ Á/º8„å=¢Š¾9R59uit8ªÃóÜév»tû 3Ü qÚ5¬¸:S¯SÓ†ði`³I[àëÕtŠ+ǰ´"ßSуLÙ“ìt¸MÒ5ȉ¸FâY3+%ñ2'%x› F˜»Kª…dÂ'(ÿ…à ð ÚôÏ;*>1*Ô·«èþEÉõJÄ>{ZŸÍ¦]Ó*‘‰“@k4fC&Ò3 zÍI bò]Ð[$¤A'RÉ;h–Y@tŒÑòêZQΞvÔ¬f:Œ‰ˆö³#Öd¼0gb˃15=GYg R«“`a$KØ‚T¥[Äv€håîH ‚$*ð— ¸]EÑ[”MÛJè»IÚ$4ó/¤A¿ŒŒ|.“©yÉŸgßË02™¸‚EœÚÿ¶¹½y•®¡¼é•NòªÝ»¨Ê ’žÉ½üB%¤½4†7Hª{ojãÞ¸ö`èr5Wõö~ŸfHŽ’ÙæNȨ«DÌ2!HàYYMó¸—ó¶VÖÕÄ\оևý¯=»IdÈyýËE¼7äØôá=7ÆiÊðs¡#ƒ6 Sôb´ùu H:Ë/†ƒüÔ¨çèÅ(äyòñ‹œŸGñ]•¸¯ý¢aÃ}•¥¦gv‰6Ï¢pelGµ^éáò*¤ –ƒŽÅÝ“±(·Ÿ ËÇ•ù\R® [Fð‰^Hè¤s¤-µ-k÷3H6ñçJ×®”…]ŠzËáå´ÀëŠ`sÓAåt¨qMÚTàñy_®:(»<{SΆ'â{Þ §‚c°kxö£“æ9¹ŽîöÉQ1¦#‘ÿäK,γ4õ‚kåËÏô‡hÏÈ«¼&›Ã“¦8P§½&í¤¿W†Â˜òs\X.®„S!-.†}›vSt›WÆ\¹"vÍQ,\Ëu:áÒ¼iɪ¢E®¹ ;@ÙkË •ED@¹J8ïÇMRTEóaÛlDeyeš7Ý’¬Ðíq&»…r€ŒóÎ%OPŸ߉“lì£)º7Râñ€ 9¤Å»1±ià·57qšï@ÈSf”».å)VîÇ2¥Ž’¹©Ô£äz9Ï»’ü8- Ȧ^8€^x”(d&~ƒs”mã½Þ–¯Â´*ïË+ý(¯‰zs|™¾bÿì…û×d}÷¥}Ê#óRÄÔwåqW„µìF9J•~Ü ˆ©Ë±­A.Ú¯÷ T˜Ì®ÀR{-ã\¾6L°º§ü™›X"}]Ül8v,€‘Ñgû@O†§þÀ>Twqº¦ÑvüÉ{ó•;ùƒé¸~‡ôÅM§_ÏD!.ÜQÏ'4Èɬá…NmyŒô¥ïº„¿2ï€ s°pQ ÀÆÉ?rn~<©xe°-MC7. 
¿f/Dw`[¢ü’Kí‰%J”(Q¢D‰%J”(Q¢D‰%Jô‰GMˆÖçvvö»µžò²r†Ã‡¼öJ•*UªT©R¥J•*UªT©R¥J•*UªT©R¥ú°ê ÕÆ‚Q• œ6å_ê·¯ãôQÇFDfc¼b Ö` Ö` Ö` Ö` Ö` Ö`Ç[#\ä}«9´];ùËÛ¹Wy§M›6mÚ´iÓ¦M›6mÚ´iÓ¦M›6mÚ´iÓ¦M›6mÚ´iÓ¦M›6mÚ÷l»Ð^Ÿ?úAçC™\H6Þ!Ž¿FtmœS¦L™2eÊ”)S¦ü¤Ëÿå͸U’ 鄳‰ÊËú¶Œ8ùYxûY„ &L˜0a„gÃÿ-„·ævlbk‰þX~,×”¿0ý¸îÕЯ>µ.{¢ä~Øe™g±Ì/a™íù£Ys»ÒOU¬¥W±èÄI«sÔê\íx¿çFlÀlÀlÀlÀlð7¸ì,xÐz|ÔÒÄå߬_?N}úµ¸¬m/¸Ô^k謓âmå?1-Z´hÑ¢E‹-Z´hÑ¢E‹Ö¼V¸öÍö‚kßì·ÎÄ›öîÎÆyŪT©R¥J•*UªT©R¥J•ê“©~ Õ^j'2ÏÅá[Ñ9l cM-Ñ™Ž¯~ÀA¯;ß ù§H9HF™Ž¤yÐóK,_H[ðAÛI©ù ÊТõÚZÿ1´6Ü8Z§úÃD†_ Ÿ/Ší+#œŠTVžÇ©œ¸Ôþ~?ãìÀìÀìÀìÀìÀìÀìÀìÀìÀìÀìÀìÀìÀìÀÏe‡_;Ô›;âíáûö_Úb0ʆ¹6JôÊ/4~ r‹r,E_ømI¬7¯ ™HsK8<üÈï²5A\_ß¡|@ùŽÊoJecc¢ä™t^ú•¾ãÞ—ÆæíMbÕw2ÿ´Š7á×ËŸžlü£õìbö÷Ü ô“¡—é­?Ÿ…?ŠÄüT2[ÍÛLyN¥ñ?­xkõÄð“¶µ>+ÞþQ[â&µQo\+‘ön˜$÷¸qyªÈo2ùÖËœõJ›)6¾ÀXÝØª·ÿTýè=†QÞ4Û׆9{û‘Ù»ÛòƩќüY¯ FÅCÐÏ£l Ì(ÉFnx¹âoØkzsw°Û©7P‘ÍŸ­RtÜØÞ¼Vòa¦ÜÕ½eù =×2ÑQñH]Çá&½}Ò©UíI–þR–vê×¥R:­ÉHùQbÔЕw'·ïÅ5ü(ݨø©É=Nøê³"ЬoLïGQxêtóÄ› gSñ§â~·v0þÇ~:ôKº1¹Ÿ+g¯¿O ¤_<Ñ‹¼Íïñðf¯„×'0e4JD$ÓÊMnøSU<.n{ótöO%¶1Ù2U—2±áøúÇsÑ:+•ÂæäÀik÷ ?Þ2o³4‹~Ÿy^’[ÓG>åõí~ÿ¥v!Qyñ£ël`ó‚[íÑ öný·'ߪ2Ijc8V‘Ì\ù@ñ ÒÇž<Æî«Â%W¹§iƒ¹ó¥ÇgK6ŠïøK-+?ò‹¾áÉ’%K–ì—îBv¦ïP©ô†] ͬË#§MùÔ¥vX<ðXæ1ÚúZýËôÂ×§Ë;¡õõ‰0ìé ‹’Z}mVš<ñÕÑ¢áçbî³ò¤qè‹ïm‹'rNÅösñÿã·1—øïþ$¦ï-õešÊÚøuïÑäÒa¯@ha{©ÿ¦PF½+©É.^Rà ¥ÅÏFæUq›²ÚkOÝ; ÞÆòÞó8ùÙ¬ÿØØ¨ÅúêÕðKÞ`»ÍUž-|²|¹~}gò+g½5S¾œªDŽÿiñm§ÿ!‹{H«c1°.µ«½Ùô‡ M^¼ŽŠûîh`“•ßÝ|,§üÏ¿±9yny!½Ýñ:¹6Úë«·Ã}“tJr«1{ž”w”êŽW³Ëw×Ne²ðµlܱ»ÜÉ}Á¡-£½ Úä®.“Iªo4³¼wPzÛ“‡ƒÅoI÷]¸òmŒÍµµk®Y?o²Å[^ÜpÅ+Ý$‚­öTbsúBàn-ÙÚX²Im£6}-sé# !Lv9yvó%÷k\LéȦ™šxÑw<22222222222222222òK•[AždIk[^•gÕcŠÊS0lÖ'Ðõ&:%ʤ¹ó –å΅²Ù˜|ªõ—Qj‹ßvÅ¿ë r?dr,TRü ­Žˆ€L_ØDÝ¥^éÜX¯6M.:.€ 2¯=ó‡ÙZœÁÁyEÎyp¶—øñ»úù‹Už.ù3‡ýôìÓ`ï|é á[¨¨¨¨¨¨¨ßC Çê6§çºtJzmTåˆH°¯Š•+Ýœ~T_ÆÊŒO&i†Q¢ly¸t>{âàeNÌûÔŃ NžlŽËÿ‚ÿþ~ÿ;›~oätDátÏV·Ôå)Ü#k.•ËŸp÷yºáµŒÉK€*³ŸGù?†2Q«ú1œòTþ›[“ODäÅM÷ª©~â÷¨ÏñÊgw|$&¼C7=“öÍû³Ì&£;ߢƒ]ÌþØÍéÍëÍÛS}u>sƒ[½ÇˆÞÆÞ™žìzèÊ£{¼ßn §§ˆ¸yŠÖpž×û|ƒ|-÷Ï…»³>¹•šý xSÿqýÇjÞéü^÷óðððð3¼ |ýK÷¡Kå–þØ1bĈ#Fì)Ç>„Xc¥GË?ÃCGGúøùGóËzóÇõ?½‡‡‡‡_Ìÿkà׿Ä7Ë›û7‹@yjéO?²1÷Ž‹ÈÌ;ë“÷]¦9;sDÄ»Ä^'G@Õo|Ôç9ߘåQÍiy²ìD/¾Ì%J”(Qzþ¥_×›;k•c¸ÆŸþŸ=\qqˆÉÑŠö³./ÕÝXØNGÎæ¶Øp‰åx-ïúo?~Xøàë÷¥²YŸ¯,þ }$&ü;mNÞW(s'jxû¾å;*¿ÊôLlÏœ˜Üe&£üö7ì×'Ê®õéõï 
>,ÞaòàæQSwˆxUï¸ô¶¿x¶ÊòI¤¶·µ`~K3<¼~)(3Jî|0)ÞHU^jxá5Пù·@n-A~¨¼æù£(å奴¡×TÙ •ég†MVû^{0pQµÉ¿§w£ÌÛ≨™1Šç¤ñØZ³I׳ŸGé0²á*½¹JµL³¶E×ɵ¿†Ú΂ÚÏ·¾[†‡Ô‹¾[þ¹,Ô×o%>ŽÌÐ)C‚Älâ$$Ö$“X5ü(ÝÈúÅw ÏýM@§·#go?'ÀXÉø§`Lî.å­ûã×!”Ïß×7&/²Œ_Ý%µ¶m­ðèäq”AP&¯%,yoã1ÂL{~¯|ór}srÃ]žŠ`|Ý€É_Âküድ·A[Ÿno?—/#Jco½Øøª¥ÍRÚžü)TºÊãà‡M¿ÓÓ7˜µ·ÑÀšØßNâ¸ü>3º<­–ÈCï%®ŸcÎgw;ytÇŇ—_ ëq­ßkòPIiââñÅJÆiiìÔ«ìíWአ3 -q"ëg£¶‚:¹û½ÞƇAôŠOµ«£ý–xðô ò}—µéÉ¾|Iy…ïˆÙ￵É÷Ÿ±ÅœSfü ¥<:­ò¨ß[-’بOÞ_)nœ-~„Ö¥å­Få“õkkìeŽÀøjx?àÍeð+½x~÷÷aü_¥øf]õ¿¹o˜[ ¹ÉýûЩEÉ¢—ð™ûòËA›“¹|˜\èâÙyå>pÑûþåÛiÓG\ÙÐÍž7¿uÚåâDùÛ¹ªÕ+§­| •?–•Ê;§ãÊœÓG/ûþ)0ðÝD7צ',Ï#ëzåwixE2Ÿ –ú)~4è0@“ãd+ãµTź Ë£—}qóp¡£e%<:X>ÎÜœÞßô†™³Y9¶ü#—ßbò’žÑ‘õ+0rù]@¶ ÏI):oNï¥Ë•¿z=ó-[üÈßDPqñË Ä<'5|cìL~€6µ‰Œ¼ŽÄ…r©2~韸‡"åû [ÍéMÉ(*/#RkVîÞ¯_„ÿVRùѽ­É ïy»uu3{)¾×;·ˆˆˆˆ‹Å?××õFå©Û쮹rׯï”7p+_øâ+ó¿ üôÅõ«ã/gÅwòÒ¬/@–SšÓ;ξ¾ƒYô<åk*¿Êä`‚ðy7Ù_ù»n¼ÌöôžÿîOv=#åmP&?Gጻý'>+¥éG £ò7o*å#·Éo,ÿŒë+Òïݸõíuó8÷ò×Ë^Ý„-ºq…½þ¡[_¿ýºÚ­ô#1áçez.–éÇW¿¨wv×}×r»Tï`ËÞÅ ù¤”§Ÿï½þ4ÙÍ{à7Φ“.è\t«D·×*H"kTyv“¥îugœñ×3þ§0>s1ËÝÚçQbóñËhã_(Õ¡he…G=_Ïvv&ÿâ…5¾öæ+WÂÝõô€ÙR9®]ÜÁLîZ®{©_Eþäæò|¾sRiÌw×Çj›Uýjrõ?û—¤œJczìm\<¸‰£¤뫣ߋ_èÛ\FÑ@ºòøullìÉKÓµ™×ÏÊ{ô/ÜÊŽïF’¥nj¡ïx™º±6¹wˆlù>EtõÑ‹¡Nbmú¢—ØèãJ\€……ýÚì‡ÀNª¤6õbe†iQ¸‘·îÖ#¼ÌÙñÁ *_æãsÕÿ)è•s_Êdüëúx‰Å'e@@xLá§ L.8«Éâû½øš¤x>Vym…Ÿÿç þ6ˆ;Ó›>e~¥J´ž'nɧ§‰Jtfo½rS¾¸£Md]fÝ’‡Œ|%v?°õ›ìõÛ´¥˜ÚxÙãüÇKN†ž¶vÅÕƒ}7J&±ºù«/‰ ·ÓsC•ì‡áKÝ’<{á(s„«§#Ó³g-ºÓ†„|%d/[_$÷('ʤôkã‹ï¡h©ó <ŽÐÂÖRßPPßÚ Ôä.½T®>õ¼äY:¿?ð.“בÂeíÃ5ÔŠ[ÍÌ—ŸÁ[íPÿ§î–^smyïùƒoXŸ³‘s6Yõ2/[ !¦¼°IxAÜ&£ëû­åÎ;ýÒ”ð0­yÇÕêœJí¥LÆ&Çø2Ç#5š›76 Õ/¬Kår·»(_S9ÊôÅóƒ8\YxãßWÏ_qã¹ë·7ÿÌÉ}_TÑ øW¾ý™½oã„ûâé'eetuŽöpã]^®ë«§%°>¹ë(v÷Ú\¿ŽS<ê–Îiå&Ñ‘y®Ò^²è¿ósRýÁôƒªÞéò¿âõ XVùÓDBBBBBzºRxÄ·>=opçç—z܈‚²¤Ò*•éU‘ŠÇj±. lå—&4yjÙÞíÔâ9? 
ppppߎ /?oT®å;~©BÇ£äîÛN¼gå…W!6wyʤ¹óe %Îê‹ú|Ô¿uý.õ 7‡â±¦–%2/¾lñ™4)P @Âs,„77')“â~¦øí_¨™žz®x°¢îR®/bÔXÿ±±Qµ‚Nu,)?Ÿ4½¼¸¿uVÑ¥žm=C;œ”kkrìXáæÚ¨ÚÌç@n<¼ûæb'ˆÓO< 娼qMnaì¥]åû÷e¹Ó£±†n)tÉÛ_\\ÜÛt&Rǧ3½M¯puWDÄû‹{¥8½`ÑgY|'__´;üÃØ¿þX÷|,¹½=9‘Cñ3ÏÿM¹ð#½==ôtäʧCå¿Ýì“¡¼xþQ¸Óë·Žl-FæW;Þïi1ç™Ü~ÙϺü.¾Tâê\)7Þë;oR9›ÛT.õÙ ì—aÿìÉçlî?ˆ7Å-~‘E|Öâ~wæŠÅ7O4òÅ÷ÚWÒÊ3Š4§Èœ§äÆì«B~ú|ãF+“7èÊk€‡³…+Q¬µ´¾´ð¡çÓ¤þk &÷<7?sý »òˆûÇÉ/¦Ò˜ë‡áå‘`׿nÔЕ¿ZyU݆MØäᛄM&÷Ðw”Wú Ú\¸!kÜz¥§¼¼I}}è_Ô¼ùý]ýQ®þˆWôŸXäŸBdr'Û áù åkøÍéðÊoìšL¤Yõ÷±œ¿gò.ZÏI T\‹²¼ÈU’ dí£òVÈHÇ"\kÝ•×f–¹‘M³D}¦ð…£P˜¼ÄÚ\¸²¡?ìx©än '/$®×ñy”¹ÕßíÏû§λóça©+’4§W$‘yVüËõW]å1•õÅß|ºä¿ÊÝÊä»ù¸•¿½êŸQxsd}k‘²Ô úúöBf‘SóÛܨ¼öu}qÔ[/È—g¨ZŸI¡7ô£ò'xÑë„_?½ùøÏß ¤“oŒçe^ùæ«3áGxcòS£s›¨á­wðQPPž‹òë L^š‚x1ÄôJ9åÅ£W}„ôDxþ^¹rÂmbÑ»‚Â!C[“òM‘bìöÏ7ÅÓÉMíõãÀEŸpŸ§{ÜÉ3Å›ïX|¡ƒ‰‰‰‰‰ù¸f8®ekòÚZ¤½&·^Šo„׉§çwËÊŒ’»ßžx#•³½Å½zäN §'ÚºãEì%_nU^Ì•ûÖBx7=Õ‰8uÇEµ/‚2ùÞôn”y› ¤™aŠKñË„µf-’®g?ÒadÛ§¹JµL» ‹NÖDíyÖNBmcA-¼¤¬†¥Y¿ø§ô9 á®vúÉC?röök+áîoú!ÉK¹ò»È_C(?lÛÜ™<´l:>FîêïkõÊÙ0—Ä <û*ùÎäéÛnm|Ôá*Äi &/÷÷‹§D‘¿IwtÇ*±Ú]P~ÑjylÂúÚäè‰;Í¥Nš»>ýıwj “ÛŒx3þ™/|úñÅßq§òÍí‹GDI­m[+> }*Ê T¦Ÿ([ò©Ï'L3íù½ýЛ|¸¨<~¸Ô+g †Ï™kí$hªv5×+¿ôú³ào.œMË3g /—8øç« oÚœþ©ÙÏåÑHÒØ[Ç,!½Xé×AšL.Âi'"¯/ÇÖ…_._Ÿ^üçóØÞ?o»êé«|ã-ú¼Äy& Ê#¯·Šuž9›YWJ•#0ÜÀÆÒ¨ÅoT–öôsýáúpȤª¥*ÖE£ü4¿/¶½ÐÑ2ÿº€€€€€¯ oôLϑҖwQåØjÇp¼"ñ0=¹‹Ñ‘õ+Hò¼pÌÉô¡¼hùúúôb(ÃñéÓc% ¢¸ÏIònqÁVÇN6=öÈåêê\¯3wûÅËñ\*.~Ù,s áã«ÇA<\™œTùÙ½%“ ‡‡mL^$º>ºJ\(—*ã'ǯ•,–¿Ž6ÖA«øqÞn]½~)®¼É»üÇìgÄæf³.ŽöÄ™ÛÌ©<צæT˜„÷}?ÿï„S²—[—_I>Rý>""""""""""""â ·ÖÖ_©xX___klÅNùi“T&Â{ÃD‰ž'ðp*JÊkN^ƒÿ!€ÛcðXé|J£eyØ9ùÐ$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$ä7#›õõíÍëóiuÖ[ï¤ÉËÓv,:U£Œ2Ê(£Œ~«Ñwaôêroeä‹û<™m.‹»ºbÖ^•i?PIùË‘J’ù§gÆÃÃÃÃÃÃÃÃÃÃÃÃÃÃÃÃÃÃÃÃû6^³¾¾³½¶yÏW•e”QFeô›Œ¾ £[ywЇ‡‡‡‡‡‡‡‡‡‡‡‡‡‡‡‡‡‡‡÷ ¼êÍ+ïD&²à`€`€xú› 0À 0À <ŸõúF³¹ÑœÉD_J7¹ŠœZx„&³Ì2Ë,³Ì2Ë,³Ì2Ë,³Ì2Ë,³Ì2ËìwÝÚd–Yf™e–Yf™e–Yf™e–Yf™e–Yf™}V³¿*f·7®.WÛÈ~yMZÅ#Œ0Â#Œ0Â#Œ0Â#Œ0Â#Œ0Â#Œ0Â#Œ0Â#_{d§¾Æ#Œ0Â#Œ0Â#Œ0Â#Œ0Â#Œ0Â#Œ0Â#Œ0Â#ßpd»¾±ÞÜnŽGdŒt¼äg³™gžyæ™gžyæ™gžyæ™gžyæ™g~ÑüúÚóÌ3Ï<óÌ3Ï<óÌ3Ï<óÌ3Ï<óÌ•y[ß\«7×Ew¯Õ>²'Î[GõðúRz•‹ÓýoEo$²Í‹ÿ¹Q"ƒ)M|ýUå?Ú qxt”‹È¦Y¢> ÑUÅï–_‘%K–,Y²dÉ’%û ³§!»1Íž´:â£62WUVq~Ü‘Ì#+TTTTTTTTTTTTTTTÔ…êêæTýéôH¼‰j? 
Ížÿpç+Õfªb-½ŠËWE®_'‰Å`˜J3~ñ„-Ø‚-Ø‚-Ø‚-^Ó [lM·˜*aþ*‹¬¹]>¢9ªÖÁÁÁÁÁ_þ¯ßžyŠ{$¼t}åsq:Œ%P—ÊÿX½kš>Ή!B„"Dˆ!ò=#ÿ";ÓHkæ Y?P¢Õ©ÕÅ…Lu2 ¿ä¤É#§³ð5¥îèСó:í²³¾6íœé¾‘‰6ýò–åÐxå5,n]                                ^5Õ T}JU¦ku‘O`  g ý¥€vfŽãz«¾“ñÌÁÜ{M1ìé µ×F$º_žH¬ý¾³øœ @€ @€ ð¸ß—êë£'Ú[¯#qP¯ÔÏD6Xêl400000000000000000000000000000000000000Ä4î`µƒÆ Ì×f~˜æmæ¤vrPGAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAyiÊOAYŸ*]Õ&“ËE¦Wf¤’DD£(Qˆˆˆˆˆˆˆˆˆˆˆˆˆˆˆˆˆˆˆˆˆˆˆˆˆˆˆ/F<âVE`_›~Í;{kÔ$šS¥J•곬þ5T›7ªºàîîR @áiŽCaýKŸîxà„‰‰‰ùuÍÁܘšÇõ¦¬¥*ÖÒOè|y¦ó$<¤‘‰Èu¢Ltý”O¹ab³Dæ©6B}Î\¹Êç=#I’$I’$I’$I’$I’$I’$I’$I’$I’$I’$I’$I’$I’$I’$I’$I’$I’$I’$I’$I’$I’$I’$I’$I’$I’$I’$I’|6ɣܜ&?h/KcW¼‰dé ålòƒH•—=›è<…„¼Av¹5%wí'“{§dZ|›ö‹ïÖ«oÓBÑ—á{yÿ`¿‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹ûî=u÷»géqg‘Ì#+˜'Êü˜)ÓØ<ØDg2W¢ˆˆˆˆˆˆˆˆˆˆˆˆˆˆˆˆø¼Äß•âæÚT<{‡‚‚‚‚‚‚ò¢”­ Ô+kÍ#ÆgœqÆgœqÆgœqÆÿ ãïÂxc:ÞUy¦ôÖ„JTä5ãS6gÖy<<<<<<<<<<<<<<<<<<<<<<<¼G÷Âeÿ6›SïD{ ¬‰–‰ø“‰ì0 çë8ë•6ù !÷¹>%ÏJ\HïGBF:Ñ( `jc•ªuZ݇§"²i–¨ÏÂ[qQ~½ñ@å¿Oð?†`å|“]Õ/¾!®sÑÈÛü£J”/T§Âqå ÒÄ¢üÞy攌Ë_êÄa§3Ù¥ü«5Êxv`vx®;”;lUÎJw>Ê”8•êXå©ÍËY™ç*í%#8¸'ÉýSà*'´x§Š¯UßÉx¹'›·…¿¡r,öžQ®?*ŒL™¸¸)÷Zéùûnyt|ô¶^ktŽ(P @…'Z8 …Ê'ofŸˆ£|ÑìRû‘¸Ô2Ш¨¨¨/J=êú—Ôî@õÄÁy§¼TÑ/nj—º•ÅÆÆ¾Ÿý›`W>Fz¦û¦|#È{e†Ë=“ÇÀÀø¶Æû`T‡:Q^öl¢ó´¼MHt¦ã<¼£Sü­Í–ý\?,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,ìËce`·¦ì¾ô~$d¤ã…wZF£¤ŸŒ"ålòcà?*o=D:Ý!B„"Dˆ!B„"Dˆ!B„"Dˆ!B„"Dˆ!B„"Dˆ!B„"Dˆ!B„"Dˆ!B„"Dˆ!B„ZúBÛÓÐÙÈøÊu^žJçR¹‘H¬é×¢ÔF\\m1JjmÛZ|Jtttôç­¿ úÎTï8›ikÆó"’KßÚ¾lé7¥´½6•N[»w©¾2ÊI_  ߣ>³M:^ån æ 3)s¬†mñ˜¶¸‘ùävæ¥0LsÊ´RmT-VN_ªX ¬K‹g.8888ïìg}ê´¥WÑÀ&²ôD¯üúeoË^ƒ5þï¾1µÎ#g?ãà,åt‚S9=wWõ‡Éøé_ñlÜ—ªžüüÏý¾ÔÅò—æž·òî™rÖ[£MxÁ6UE#üÓJÿuQ¿ŽºÔÊKH•y«ÉŸ;ÚÂ'Ì•—R’Qt}Šå°ÂÀÀT™µÆ~÷ñjåWaîdŽSy¢½«d¬Ìhü¸xÂ)Sü³i÷´%bÉÅ~111111111111111Ÿ¾9~iaý fù¾XÀ``````ž3ŽÙ©û¦vÃ]eo$šÿ^x+6þ½ØûlîY3Œ%s…‹‹ûäÜÍ/¹Ámââââ>ž»ÜÊ1‰'#[~êq™£†˜e–Yf™eö5Ïþ:ÌVŽÅmï¾ÚˆâuáÜB;•Oœ*ï´©ÕE®ûF&Úô10000000000¾“qT߬¯­U>šºÛn‹TÅZz 齓åyT$$$ä3$ë7o §ªSÙ0ÉËÇèâ×7 !!ïKþ%•3Õvm¢ÊƒsÙpN°Ëv\o·N Ì«p?mÍRï````````````````````````|w£ŒÊyŒº*Eá-“LúÁ'9*ße9®½ è$@•“¡N9ˆžòŸ”2¥S^ݰe>Žœ^â(((((((èƒÐv@+§M< t 
ï¤Éãá>ZæÁÔJÔ¯U9Åhû]1µÒ›m/xˆÊIæNÕÐÙ ™GÚ ½iü^|õÜ%îžïçc<5#ÜHÔ+grØ·.\ç©Óêœ} "¼ÀR¯||ýú#°åy%ŠoW§úÃòêS¦/df3o—9½%(è«EÿÐʇÊ;6÷µðÐp|·âbæ¬WåI"l¬/t4þ‘~=ðQ€+—7iå™t²¯M1QK´ù¨bÑOFQ1—@BBBBBBBBBBBBBBBBBBBBBBBBBB~gòÿÈÊ•ŒÃW?Påé»í…(þ®°KT‘9 ]nxÛDG›ˆDg:¾Ž¿Ø·¹Œ¢t:V?Šãã÷?„ †w#/”Þ )ŒÌ#eüä]IÖdMÖdMÖdMÖdMÖdMÖdMÖdMÖdMÖdMÖ|Jkþ%¬¹1]ó¬ºc>ìå…ïU^^p²m¯T®¾Š¼ˆÀal~!°î›FI-Ø<¥o¾àÖÀ?ívj§5)_ ýdwšiùÆì÷cÃÅáëÛ_` 6•ÆØ鱤ñGIw¾ Å·~ê®þ8ÿ-áFåòëv·x¨Ú/Šö´‰Ë Gƒ€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€|}Ĥ>Eö­KG‰ÈTæu¬„SQñwÖåA3¿›‡«b§ÒŒ„õå®â‹/}J•*ÕûTªiµ«ùY›Jâ>D;ÍÊ7´SJ\HïGBF:†šR?j}JµœŒ:¶¦¼®væl<Œ¼¶F\8›ŠÝÖÁ‹Çÿ«—DOmqó%N†y”(ѶƉ`BAAAAA½8ê¿jsJ½½z¶m/D÷´%z#qhòa¢M-Ñ•8pöSQÚ/QWkˆ´ü¢ë™Ž³^i“‹7‡û·üð¤“8lý5ÿMî±É_Â&[•Ç=ÅŸê¥ ²Å2§:²^¿®ü(‰¶XJê²þ @€ÀcþÛÓ@Ç©|ddæ ÔLx9Ã/ÿL~†¿3åßéþ ‰H&‘¦"S.U²W<- E‹ÖSnEe«¹v«•Ûx65˜†I‘z.©ŸCªò¦sÇæþïñññññïçÿ#ø%ød„ &Lø;…}W»ñÊöÉnk¢‰aVüj?z™–GO]^]Å9³–ºTÆÓ¦M›6mÚ´iÓ¦M›6mÚ´iÓ¦ýmÚýЮ|ðO¦—ØèãÕù3í¿Ð•ÓuÉ‘#GŽ9rÿî߆\å3…åá7îÑ+Ü’wÚ¨¨¨¨¨¨¨¨¨¨¨_Gý— VN[Ñîî½ÙÀæÅÿÜ(c~àì°?(þªªòÔ’ 4hРAƒ×Ýø[hTNTU˜¡ÍŒhËÆ¯„6ÃÏ“Úìcy*T¨P¡B…ÊÃ*Û¡R9V÷ìh•÷¹™ØüßÃüνuµbeʯR¢=Š™+Н¤(ÊâúÚ½‹myrtxH† 2dÈ<‰LxØ»^Pæˆ •9•߇JåÄ2­[ˆ8üð:™ Ì?“t$u¸HXwr2/|.‰:tèСC‡:tè|ÛN7t*Ÿ{=´Æzg³p~Ô"©ýèî 22òk•?yãK·O§²6=ýòSÓÿôÍ/éíy:888ø—ð³€W˪^ðbh¢›àÜÒ7[ÐÐÐÐÐÐÐ/þC +G·Φ6/'N Í+#M´øXOœåß§r6·Óa”¨±#ó\¥½dòp^þÊY‹wUfs=9Ñ„ú$Ú{§V-²ÆKmÊSêL;¹>œe¶­ŒwÅ/¸Å¯Q¤H‘"EŠ)R¤H‘"EŠ)R¤H‘"EŠ)R¤H‘âó-ž†bå¢ï?ÔêB%ÖôÇgÝ–Î©Ü iâòä™öR¹ÅïÜ¢¢¢¢¢¢¢¢¢¢~õcP+WNß·.\.æV"²i–¨ÏB›ñè{¹*?S:ùÒóâ4I’$I’$I’$I’$I’$I’$I’$1Ù Éæ4ّü|k{æ•À»êÈÈÈÈÈÈÈÈÈȯOþç ¯OåâAw-U±–^Åó6; $H A‚ $žBB„ÄÆâ§óÓdÈ!C† 2dÈ!óåL+d6+™‘ө޵Q¢W~±ñ•ëè)Aãû¶¦ÐÙd´üö(/³é"kŠo‘üê/Î/âu¬D¬k᫼ÓÙÀæÙ øÎ!I’$I’$I’\6ù»¬\!êL÷L„L³D_èh¹¡((((((((ÏM9/•ÍÊ _[»‘))_²óg‡ýè4þ"²¡ÓÆF*óÖ‰zûÛgÁ®ßƆ†††††††~Tzü ¯r*¥óÂJ{ö³4ê®Äy§<–þWm"ˆ¯Cô±}ƒPÉðsx’V^ðP:Ÿ*Sܬé_ÆóìÂÕ‹¸æš¸©NŸ¸™¥ïUúK(íÜ(iJÚx+"•$yÕÞ}ežPàç2°Uù`á©üU;©‰XeÊÄÅ …0jèl¸Lµ/îW»?ÄÇÇÇÇÇÿþ_ƒ_ùøUw òa.úÉ(²Wo¢æâê,ÕB¦©5z˜Nÿ·WQøc(TŽÔ~ïúÒèHŒO3Tü­5¿| ÿù˧Ë¿› ¼ ü6ÀÍ/ÁHH+Iáš[ë·¥ð- ô¨P Ê1ç“Q{!.µ—iñ åÇ«ó:ç:VÅ?Œ/’„c×R›¨h˜,qfgR¤H‘"EŠ)R¤H‘úö©w!µù…Tå6ù°/—x±o菉¹UßYG{á(J›ò“a‘ÊÃ9‹R&¶Y"ó´xNï”×Å7ö0âHõû                                        
              ßý1 õ1ÚJ­êi™ëœf˜a†f˜a†f˜a†f˜a†f˜a†™g8Ól¬3à 3Ì0à 3Ì0à 3Ì0à 3Ì0à 3Ì0óÌgNÂÌÆxæ1Dýjèf}s§±sJ9%‹9‘«¨¬aši¦™fši¦™fši¦™fši¦™fši¦™fši¦™^zº¹¶É4ÓL3Í4ÓL3Í4ÓL3Í4ÓL3Í4ÓL3Í4ÓL3ýÀiQN¯¯ƒ~ðIŽÄ¡ñÊÉ(ÌíJ/{2W¢];ùK[dW_2ÿkÉ!C† 2d¾[F‡ÌÆ‚Që£ÈußȤü¼(A‚ $H A‚ $H A‚ $H Á{ÓÜœ?ÚUýa"ï٠Ñmµë¢ü’Kí‰%J”(Q¢D‰%J”(Q¢D‰~û¨ Ñ­%^Cª¥*ÖÒ«X¼Û;{·÷a”ÿdÝG²dÉ’%K–,Y²dÉ’%K–,Y²dÉ’%û²ÿ[ÈnÏ=›¨.•ñ¹˜lЉ ÉDÈx òrè£6å»° »° »° »° »° »° »° »° »° »° »° »° »° »°ËkÝåÿvÙ™?ú¡X$mï¤É#§³ò мt}U,g/D«S2ÕɨX5í)—‹}'ëBš¸ü›ë±ë±ë±ë±ë±ë±ë±ë±ë±ë±ÞSZÏ—ëm¬=à•Úóvç¼C›6mÚ´iÓ¦M›6mÚ´iÓ¦MûªýßC»¾ôËAÓ—Úµ“¿´o½>TŽ^ŽÏ‚Í:¬Ã:¬Ã:¬Ã:¬Ã:¬Ã:¬Ã:¬Ã:¬Ã:¬Ã:¬Ã:¬Ã:¬Ã:¬Ã:¬Ã:¬Ã:¬Ã:¬Ã:¬Ã:¬Ã:¬Ã:¬Ã:¯|ÿ_X§1´«úÃdÜ(öø³ñÓ )ö”—"*þÖh#òÉ1ÕågÖÇ«‹¾2jvkeQeQeQeQeQeQeQeQeQeQeQeQeQeQeQeQeQeQeQeQeQeQeQeQeQ}ÔEÇŸZjÎ]áSKNeNå9ë°ë°ë°ë°ë°ë°ë°ë°ë°ë°ë°ë°ë°ë°ë°ë°ë|¿u|Xg}Á¨5‘4Öè¨þÙøÊA*ÙÕmÚ´iÓ¦M›6mÚ´iÓ¦½L{ÚóG»­v=zôèÑ£G=zôèÑ£G=zôèÑ£÷ȽͽmÑ£G=zôèÑ£G=zôèÑ{½ÿPôÖwšâhïà@œÛÏ6KdžÚ\çB©~Ÿ1ÆcŒ1ÆcŒ1ÆcŒ1ÆcŒ1ÆcŒ1ÆcŒ1ÆcìûŒm®7cŒ1ÆcŒ1ÆcŒ1ÆcŒ1ÆcŒ1ÆcŒ1ÆcŒ1ÆžÐØA}kmsc}<Ö–®g£ØI¯D¬û*ë”&²—[® ÷˜ÜVc î;põ­úöææ˜;&C'œJäHŒ¿Ža†6Ü Ã[WÃ^fƒQb#EÃ\È¡SÅ_´¹PÑÂoU  — ½ Ðöê8ë•6÷{± é5Jõ­ÆöúÆÕKö«$È0à -†¯$}Ø»K@@@@@@@@@@@@@@@/z ­ÇyÙ éÕIµúVsgíJz§2éµ×¹h3ÄC 1ÄC 1ÄC 1ÄC 1ÄC 1ÄC 1ÄCßeh«ÉC 1ÄC 1ÄC 1ÄC 1ÄC 1ÄC 1ÄC }÷¡ÿPßÚh4vÆCÝÓ–ðNš<³Î3ÆcŒ1ÆcŒ1ÆcŒ1ÆcŒ1ÆcŒ1ÆcŒ1ÆcŒ1ÆcŒ1ödǶ¶·6cŒ1ÆcŒ1ÆcŒ1ÆcŒ1ÆcŒ1ÆcŒ1ÆcŒ1ÆcŒ1Æ{6c®oo­m×Ew¯Õ>²'NF»ÛÛ"V™2±2^D2d¬„6ÚkéU,¬ÅïÙܦJˆ®’‘/ÿ¾ä‡oLùónkS¤*[ÚÄÃÈë‚´âtÿã[!M,NZñQ™«\ ³â7Ï»[ÛÂ:±#Šœ¾”a„8qâĉ'Nœ8qâĉ'Nœ8qâĉ'Nœ8qâĉ'Nœ8qâĉ'Nü+Åÿ-ěӸa·uTNEn¨}.ŽDdÓ,QŸoer¯ÓaB‡:O³cCg}¶Ó˜Þ¸Vn*‹[×ór²dÉ’%û²²ã»œ/>Î>ìîoUÛz¶¤ûF&ÚôéСCçÕvÎBgó>7ßÐÐÐÐÐßÞ ôÖR/µ`aa}]ë_‚µ=µZ3?ÛÅôfx¨6ùÙïÄùÛ£zÙPY®“%žëÒ AcÑKP;ÓÆï«VîtgbÇÝõÊó%§2å¼ÕnñgÈ’%K–,Y²dÉ’%K–,Y²dÉ’%K–,Y²dÉ’%K–,Y²dÉÞ/Û-³;k•÷*Çk]Ä•–ªæº‡:2ò+”+—Üšw2Š•CFFFFF~ˆü· WNÔŠ/¥‰ ³ŸŒ¢â‹¯> ­Lœ9[VÊû•yëVøˆ*T¨<÷ÊÛP©žeÈxå5,žiÖê¢#•çHHËI~·ÔüQl„—2Njí³ý¾MaaaaaaaaaaaaaaaŸ{Ø;Ùé SgïÚ˜˜˜˜˜˜˜˜˜f¾ få™]Õ¿>÷»½¨@½‘h¿=ÆÃÃÃÃûF^8ÿüNå©ÝÃÚáÉnkã ¾îŸÖd’ dOy)2éŸä(§C‡:tèСC‡:tèСC‡:tèСC‡:t^D§:•ë–'iÖb•)—'õ:ìîWO‘wãâââ>9×w期µò¤ŸSÉœöb¿µ»Û=ì”ç串«ç/ŒdžÉ\Õ¶ÃGxjõ5Ê”)S¦üUÊåñ¡õµµÙûƒÊI ÃE¸+{àááááááááÍ÷ƒ÷å“Dßz ølÁóVNü}ªúÅä¥nüñSëòòͳé›kËŸ!ûeØ¿ 
öÍ‹¸\(W<ç8CAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAyEÊøëw*ÓSùæxxxxxxxxxxxxxxxxxxxxîíocêuÇ'V™^q¬…††††††öŒµÝ mÞùØ¡/Ót•Ç XXXXXXXXXXXXXXXXXXXXXXó_ÝÞšóêöÚ³Ôv‚¶]ÑdV¿šx]À_J`k­úØ)rÊk#ÎFÆT®óÅ™ ¿fÍBš¸ø¹ü¥  @€žI ú4°¤[<#;H†‘ì[SKôG%:*ó:VµºxspÜ©Õ Mš4iÒ¤I“&ýŒÒÿ{H7î›¶¹ªÅ*S&VÆ_‘m¬w6Ó‘èØd”7*ö9ì° Û° Û° Û°ÍãmÞ3ÛªœD·­]$c-h'6úpoàŸP9?ÞAëmKäÓç@N%Jæªü›aæåGž Ūïd¼ÜÛÕ$H A‚ÄëIl‡ÄÆ—_}þŸÂ|僇Ýë“ò•ËÒzÂoƒPùˆÒnåñ›"o²}ãOÔ©HeÞ:Qym éiKá„O[;7$,,¬ïfm¯Ý°Þbaaaaaaaaaa=Cë X•\·f?£UÈSxñappppppppppp¯ûcà*ç…84ÝÓלŒ•%Ò+¢DæJdÒ>É000ð£ÀI€›wÂ¥|i/ûJô <mÙø•ˆÒ•äâRKqÐSÅï/35ì ö‹oøhÒ¤I“&Mš4iÒ¤ùüš?…æÍϸ·Ä7°?Üë­Öç †¹·+›?wÒä™u¾üï|!½œŽÿ÷{ŠÒ^6¿ Y×—FG¢ø?kÀÀÀæcáòµÛ•Ó:´þ1”…¥M-U±7Æ~⣡¡¡¡¡¡¡¡¡¡¡¡¡=í0h•stdžëKUy ÝM‹Ÿ>.†K‚oWNhÐUýa296ñÏÅO²oe"M¤Jº«ŒL @€ @€O4ð÷2°S9µÉÌñ)É(RÎ&âÂÙT´bÙhäU.¼~ Ä±¾,ú«½¶ðí‹íP¬|ÒòÐÿù†Óëä,ÿš$T•z¨Ê¹ÂÃû 劇êB™¸|8oË B–¬Så´éþö÷­~κxØuõIêsÁÀÀÀÀÀÀÀÀÀÀÀÀÀÀÀ<&|Fmg}–™Ž÷F¢Só£L‰ÖyGæjɽ¾^ð8€•÷ë¾4qm|šf}¯Ç혘˜˜˜˜˜/Ï´Á¬œ |¿ür— m.mrYºF¤ª/?J7 ŸÅ±ºT‰ÍRe|8Þ)+¯—(/2gãa´ÔuÈ’%K–,Y²dÉ’%K–,Y²dÉ’%K–,Y²dÉ’%K–,Y²dÉ’%K–,Y²dÉ’%K–,Y²dÉ’%K–,Y²dÉ’%K–,Y²Ï%ÛÙ­iöH•/sÍg&™d’I&ŸúäïÂäöt²s}G1(†lîey]-”/+"(;Å•£Ñ(J´¹î™×~àì°?(ß«œþ¾S‘ʼudÈ!C† 2s3gE¦±¶6Íœjït$ìg+‘{ËG2¹è¥•+Ê„Ì44444444ô£Ò›®Oéèà¤#ÔÅ…ŠüâW’˜fši¦_Úô~˜nÜñ:{$“HÓ•^oGCCC{:ÚqКS­«®?Œh/ŠÇŽÞæ6)`¶å¯~%’ò#Œ˜˜˜7̓`®ßñS™+“k¯‘!ЉãÝc8888¸¯ÃncÊtê½Úáϵ¢˜Ó—c¬òž& à«ÿÀʉL>h/Óòsêo7«¤·"9ÛÏ2ÙÀæÙ |qù;ó?¾ò)û›Œz±2ų È^”³NôÊ)ãj™çCˆOUëoßúV-1̼ü¨ÊSî(ã• çÛ,Á¿°rXÓY¸qNõ˷ʬ•2yU°oλÄEÑKFE§ˆÈð´ *T¨w)-Z´hÑ·vC«ra…k+ÂÂÂZÆz¬ÊÙñ‹Ë?eølOÛïlRü5ͤóå'yÄ›½îÛ‹_ƒ}vì¿”l£rþs'MžYçËKY•·Ø6éëp£{¹úǰÔSë <Ó Aƒ 4hРAƒç×øçШœäö+y%¯kúcñê…½HçµqžÄóIü¨|¢ãlß©<×—jZ±B&Ù@Öê?6j‰6U,RiŒÍ•(¾XÇC•‹ gSq"ÍÎöÖA¶¢F¸¶³ñ0Råol\ÿ2K=ÖR",Õ¼ã;D¯Œ‘êŽïŽTÅZ&¾¼íXò»„ 2dÈ!óD3*d*‡X‡¯×åe¬óLEåS€ë§ ƒQÏéxºG>2Å/ç:™ôƒOr´0>]ÔØX&Ù4KÔgjO¡ƒklV¿¾ü½ˆ†††ö5µß­rXË©ÊN—w„^¹ë›B”/)¿ Jå¹Ý³vëãåkõæÚΆ8Ú;8(~7Q"WQypùˆëHõûÌ1ÇsÌ1ÇsÌ1ÇsÌ1ÇÜ£Íý&ÌmŽçÞ¿c£Dæ^ÄúâB9e¼–OØ(©¯_+oÂsÌ1ÇsÌ1ÇsÌ1ÇsÌ­6÷›0·õà×000000000000000000000000000000000¾ñëÒØ^i÷´%ò¡»T:I¤‰Ôôòˆ!šmˆ¯FìÔwÖ›æ˜h]8I#¼eÒØÜ¦Zæ:Xø]®ŽšØ—å§û*Ÿ[”&²—[—-ü# Êý”ð¤æúHî{?+¸ýPm{ã1ð¡           
¬¬üº¾³ÝÜØÖug|„p.r•(£‡©¿,&¿ñ<ˆ½’ØnÎû6Ñ‘hE:§Ê²îãwÀÞØÖÆæ,öÞ(Ñ–®W|«ž(/{…/ùoЇ‡‡‡‡‡‡wÓëokÖ;ÑÞFkb§e"TÁï}ΜÊóðh÷u¸ÿÜíY÷ONI¢D…gÒéô6Ç^™jcEßÙa–Ó Aƒ 4^KC†ÆhOF¶pÑ®Jäg9y!¾mw2 ÿ<ùrB„"Dˆ!B„"Dˆ!B„"Dˆ!B„"Dˆ!B„"Dˆ!B„"Dˆ!B„"Dˆ!B„"Dè†ËÐæÚlèð¸¶.rÝ72ѦM¾20\v`³> ¾/¾ÂfÖxmÄÙÄÅÃÃ{ïÆ5ÅZí=qhº§½u÷ùùÅÃÃÃ{}Þ~ðÖg½Ú—'7»"?P¹^òÝT´§ ý>h³ÚA§ÝÍï ÈÁÀÀÀÀÀÀÀÀÀÀÀÀÀÀÀÀÀÀÀÀÀÀÀÀÀÀ|¦˜ÍYæÐ\$2M¥·n$º*ϬÉÕjïóââââââââââ¾&wüð|kÖ==îŠÌY¯´YéÐ:˜WÃük`¶g™³Éñ˜åɇvUßÉx|2"{!Ž”·F‰·6֊ȳ„È3P…®D‰ÌýªŸ[ƒƒ{Õ\øìéÖíÏžnT •Ó~=ðmoœßl÷´%º*Kt4~ éÕHáPĭƬÔ2}mûʬxZG˜gÀ|ÌÓyžì×zÊËâû'RYy¢Ö{>åDGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGŸ§« ß¸êáù~^Àýa"½©îž¶ráTùO±ðVDÒÅZFb0Ê”óÎfbĈ͋ý5Än\éÝ(³~ ¼Žd"N•ÿdÝGqQü0ïºa_´âXGËH (P @ /¥° 7®†y¤dV¯º‹ÆjOúÁÀÀž&v°×FÝO†ÖÙlätªcm”h7—Úcbbbbbbbbb¾s|!˜×ö>iuŽD>y'<[å7 à“Jp{m$$$$$$$$$$$$$$$$$$$$$$$$$$$$$ämò} ë³äA§ÝÍíDæ¹x["§¼6µDT°°°°°°°Oý)°Yvï`¿[¿ïu‰.ž±9+¶i´Ÿ$Aæ™t^z%RåeÏ&:O¡¡¡¡¯è¿z}–>Ñ‘³ÝÓV.´‘t±¶éÈF£ÂŒ2å¼³Ù`É) @€ @€ @€ @€À+ |ÙÀíeZØ­pÈCQ°^«ã•y@GGGGGGGGGGGG¹úaÐ7gõNÉÉܦJ쪾“±ôÚ@@@@@@@@@@@@@@@@@@À§ þ&€[³ày»%Ú£(Qoìc{ÖØ=m‰X¦²¯„SyfMŽöjµwAÛ™Õþlü}O‰‡‡‡‡‡‡‡‡‡‡‡‡‡‡‡‡‡÷˜Þ^éí¬Íz{o?ˆã“N]äôÛcG»qµ„â«úÊèH´Òò¢ g#ã*×9$$$äs!çävnœz¥=òöcåáSr‡æ"‘i*½u#Ñ]éM ttttttttôyz+è7ÎX°o#™ˆV\>¦[öhe  ¯ý ×o8Õ‘õåGÜ[‘×—ÚDÁµΦòâB«’dÉO½ãÏ÷Ã)«wnœ²º•ÙÌÛ¥Ï,€±ŒÑ ÆËåHÓO´gÙ@›¾Mt¦cq¢¼ìŸ§ÈÈOUîy©èŸ*_þ-.....................................î¿ûwï w{mmÖ=¨7ïpñðððððððððððä¯>뽿”NK#Í…r^'åYÌ”QK~F óe˜»ÁlÌšï”t^ìªK•Ø,UÆcaaaaaaaaaaa½;HF‘MFåéS¥‰‹FÖ(z8À­oÌÂBµ‰ÈÆÿ¼Ô Ðæ,t}-«Zªb^ ‰ÊÏÞʸ¼ä˲‡u!#################################?ù_ƒ¼5+Ÿhuâ#-qÜ®íKïG¢é8\@·öþ³ŽW8"Dˆyñ‘ÝÙž¼S©©ñƒåÅÂúšÖ‡`íÌZgºoŠŸ„s'M£ðo/ÄY½#º*R™·ý©è?•zcmVoEÊ’â!~¢g«=BB|=âÛ Þ¸6eù!¾¡³f¥ç€HHHH/_úç Ý8åZÛ¦Y¢Re|qƒìõeð~íD湎Š{ú«¯$A‚ $žUâ0$nýcè‹¿/\%Råe¯x’§€€€€€€ßlðÆ™¤ÎT¢Œ¦«ÝßAAAAAAAAAAAAAAAAAAAAAAAA}K*\‰£qãÌGÚO^®Mø5ÂãOÎÝ8?âîiKÄ2•}%œÊ3kr%ÞX“ŒDëüDÄ*S&VÆÿ@‚ $H A‚ $H˜›§toÜ<¥û(SÎ;› FâÄÆ*ÁšZÇÁºqBíNqßc‹»¤ÎÀæÙ@zµÚëÀ˜˜e”fóæiºÝ(‘©¾ÇI™ —lÞ8n»øé+1ib‘Z— ‘Ÿ‡üû 7nüxÄ:³}e–?> Ì×`ÂÁ³Í—÷Ý·IydâÉŠ‡Kaaaaaa}Ok|×xãÚ¿'­Î‘hË<’±‚yÉÌI`n\çŠûj`û÷=±(((((è«Be@oœ*wjÙ‹¢Io£Qñ¬ýÀÙO~ öeTžCwr2]B„"Dˆ!B„"´B¨B7΢ZžÝ(»z:×ÓÖ;iò ëRéµ5¸¸¸¸sÿܧÚl9 tlMy¬Æ^f?úÊÈ\‰ð:VâÝ(v6 
¿D…ÊüJ*7N›yÐiwóE»`rѾ:"¨<½«ŽD?zY|«E6PΦÖ"EŠ©Wš:,Së7Nlü6±6.JÖûòéØJ’ÞüKoœ«½¼Ø€2Ò±ˆ¬ùû°?~EÂ^ˆž2¿Øâ@ø½¥áŠë7>ù¾ø Û+aøÊ«±ppp/{¸%K»§-‘9©ëXceª*diòÌ:††††††††öµVгڙ—^›ëQ  ÛÐÔœ…Ú#o³D橎DW÷lnS™\m±ä±ÀÀÀÀÀÀÀÀÀÀÀÀÀÀÀÀÀÀÀÀÀÀÀÀÀÀÀÀÀÀÀUøÏÞ˜…Ïm’ÔýQ wuñ‘O΂¿ þ+ñãóÁmÎòÉ(²}eÄɽÎŒ†††††††††öԴàmÍj‡Çµ­ÛWr|&à¿p{<Õ‘õå%oZ‘×—Ú„5b×fá:8®¯#qª†Î.û y4hРAƒÆóoœ„ÆÎlãÜi’~2Дӱg#ã*×       Ï=*Ñ͵Yt/Q‘/;T.ÛHm !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!_ù6õYòÄŽRvwÉãTV“΃Ԙ•ÎE[%‰è^ŸSõžŸ7ÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆ~9v8ùfsÖÞ˽ W´Î'h úBÐp‰ËÍõYôð¸¶yß»@@@@@@@@@@@@@@@@@@@@@@@@@@@@@À—pcÜ—ÞD+Ò±x«¼ï?ëXzm—<]?&&&&&&&&&æs7‚¹9kžyål –_mü@åzÉë(ÀÁ}®¸íY®uÒ9ª’ ô\ v€vf¡Óa”(ëu¼Ò%®   îA…§][7.ü{õ4ëR‰l`óân”¬úTó…›ámà­ú¬yß÷¨_88888888888888¸'ÎýKàn\[áL9ë­Ñfzy…õÍ-!M,N»Ív¥Eƒ 4h<ëÆO¡qãÕù•+}: ñy‰gA¼q^¨®ê¯Þd¶…ì‹{ÞöÈÛü£J”_ö­ghhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhèL¿ ôKV.1s¢¼ìÙDç)ÞÔëokÖke6+îˆt.Nl|}'Õ‰wg­5\\\\\ÜÇs{Á½q±Ú;ÎiÔg4Ú;>ªu÷Zç­õU¯fK‰%J”(-_ßïݸ”ó®J¼¬Z î{¾Y\\\\\\\\\\\\\\\\\\\\\\\\\\\Ü×î~(ÝíµY·mÓ,Q©2>¼ßÛ¶rò‹¶Ì#«%¯ø‚ŽŽŽŽŽŽŽŽŽŽŽŽŽŽŽŽŽ~_ý0èõYýð¸Ö¼ïKa€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€ßlÌ‚{÷ÓSNa@@@@@@@ÀlÞz(²ó°‡"€€€€€€€÷Ã5=·o\Óóüt¿&“l ‹¿~| ýéñ™,6féÃãZð.°ÀÍYð Óîæ?Šv"ó\´Dw`c›åÚÔýQáââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââ>WwkÖ=F‰’N8©Ì[— mD¢3‹TyÙ³‰ÎS!M,¼ý¬#í—<©51bĈ#FŒ1bĈ{±ƒÛžíéÈæÒØ??P¹Îáààààž·¸÷$úBÅŇQ¢y}¹ô]ÚSÐ~*µú Íúhpßë>¾¸ÄƬx–Ê$Ǻ_>H /cƒÝ; XsËšÛâ¤Õ9ºïÍ7(((((((((è‹Gû.;ë³è~ëLdWJùÈüÌ;•—o½ÄÃÈkk„½ïÎ:©þ0‘áWˆ#FŒ1bĈ#FŒ1bÄî¿Þ±1;µ¦6°©Mlßs¡ŠÞß­6åëK¡¿èOžw6šÂ(ÿɺ000߃ùC`nÞQ™×±Zåàœ—ç„œîÜüqîí+S=]ÚŒöç Ýø îÙ0¹¿WS|½wÒäÖ¥ã_é*-ÿ6ÎqÁשּׂÍò'Ò;ýYœ«&‰ÍœõJ™«%ÿ¨1111111111¿d¾æó¼œîžqóò¬rº§“ò<4“ß„………………………………………………………½ÉF½qšÛƒdÙÈ:¯£òtöÿ“L³_‹m”“Éô—OVað<€;³àÛñ @]Už‘Þ:lllllllllllllllllllllllllllllllllllllllllllllllllllllllì×dwJ»¾vÃ>éˆÔC¨æÖ6Ô7¡ÞÔFóŠú ½L‹ÙX÷UÎëQ^@HörëÆgü@BBBBBzÆÒÆ&Ò‹–6kkÍ«§Š5L¥·ºw~à´×óŸ2Í4ÓL3Í4ÓL3Í4ÓL3Í4ÓL3Í4ÓL3Íô¢éæÓL3Í4ÓL3Í4ÓL3Í4ÓL3Í4ÓL3Í4ÓLéóÆÚF}mKt÷Zíó÷'{âd´»½-"™G2VB/g¯baÈ™§R¤*í9i”]%#_~Ì{j·J»¾6µW.QÃÚÔ"×}#mú@@@@@@@@@@@@¨~7´ 
ôXÐÏÔX¿ñCÛw2œÛ^ˆKíd"vO[BoÅÀæ^ô•±©ŽÂ¯â?Ìÿ}ð•—b”¶¾îªÈ¦=mÆ%˜[L?0Í)sî¤É#§Ã•VeRÌ÷‡Éäm?´W¢ëÌF£âïvõÅ…rʔDž’#GŽ9räÈ‘#GŽ9räÈ‘#GŽ9räÈ‘#GŽ9räȽÔ\/äÖ§¹=ÛܦEèÌ:_~·mÓ,QŸ éCíT,ö­o•_!Þìµ»ç?P¢D‰%J”(Q¢Dé”ÖõF£±6¾Šë‰6ª<7ŒìåÖeãgs®áÊ,³Ì2Ë,³Ì2Ë,³Ì>“Ùæ³Ì2Ë,³Ì2Ë,³Ì2ûBgÿ¹Qo6¶6Ç³å» ‘+!M,¬('.Ê· \Í/R±ˆdéa*œ"A‚ ÿÿöþt¹mÎGѸق—EÞ*i^‡²Ùž}Ê*-òoÎâp8‡Ãápáþ¼¹×Ÿ%þ°œ'§Þ4ÎóöÏÓPÇGE–VS …B¡P(Ê•V›{›KåQœÅKF*•J¥Ò+–þus¯´í~•åQ»£}׋³QÜSÂáp8ç‹9Û›{Ã~c4®ëE·÷ù8j—ÊëÉ…—V¨I=ÜÜÛz¯Y‘ã:-r­V«Õþºvû½vÅ÷D­V«ÕjµZ­öÉæ^Š—§Ûøíþ–Q{‹ÍÑr}´8Ž^‡ºÈCtTŒÒðñ@àU›{ýÕÝ/ëP—Ù%•J¯Oú§ÍÓÇ•é4.Mš…K@ O…¼¬÷—;}”þ×<ÍÛõ–v¦hâ2j^ãf!o^æÖN>G&ÿjùƒõ·ßE'E9E @ â뻃õþ4ïï~JÛÃ;oNî>X”‹ìâÓàôz½^¯×ëõz½^¯×ëõz½^¯×ëõz½^¯×ë¿v¿>Xï/w=œ—i¾òmít:N§Óét:N§Óét:N§Óét:N§Óét:ÝÕí6ëý]°âãã ¼åp°~rCàE™NÓ‘V«ÕjµZ­V«ÕjµZ­V«ÕjµÚ/Ú>lôWf?ÊIުЂi°VOÊPä8‡Ãáp8Ümàž 6¶—ÜqÖiY˜''\Z'ŽÂ¸ŒG߯@ _üÛ`cw%ð¨-Wyf/ñÓˆ›ƒþèÖÓEõ…ÊÁæú[åª?FB¡P( …BáUÿ<Øì˜í—ã4?YOŸ•E»ò¾ê2 …B¡P( …B¡P(—WƒÍáRyœVõ¥®ç’J¥R©ôK¥›ƒÍ­eújQ•R©T*•Jå5-w›'÷¤Ÿ„|‘ÅY|™‹Går¹|Å|Ølõ¯SåbV³IœkµZ­V«Õjyûb0|`õìnTŸlv·:sZa•kÉ_FÞ ú-ù£æŸ|mÿrë×jµZ­V_çz0èo|ñ*žŸÞÚp²˜u|©G§‚@W Ú ú;‡¼ YÈ‹¤˜ÎŠycéõzýýÞ`Ðßíç`çE}xv{Xt‘éh‘¥½{çÑá“oÖâ<™åê7Òa³Ùl6›Í¾jöŸÛ÷Žr,éËì€@ @ @ äk![ƒíþN•º‡ãˆÅb±X,ø÷ƒíþœûeœLÒQ‘§I'@ @ ®‚ð×FØØ^^<Ø?xõݳÑþ(žÕé›=™Nçyˆ^v‹GÑ‹'u1 ‡Ãáp8‡Ãáp8‡Ãáp8‡Ãáp8‡Ãáp8‡Ãáp8‡Ãáp8‡Ãáp8‡Ãáp8‡Ãáp8‡Ãáp8‡Ãáp8‡Ãáp8‡Ãáp8‡Ãáp8‡Ãáp8çW;ÿgçlœ9Y\UÑ“èÙãƒhFi\‡Qçu:y4+‹$TUš£ÿ£ù T!¯ã:-rsÌ1ÇsÌ1ÇsÌ1ÇsÌ1ÇsÌ1ÇsÌ1ÇsÌ1ÇsÌ1ÇsÌ1ÇsÌ1ÇsÌ1ÇsÌ1ÇsÌ1ÇsÌ1ÇsÌ1ÇsÌ1ÇsÌ1ÇsÌ1ÇsÌ1ÇsÌ1ÇsÌ1ÇsÌ1ÇsÌ1ÇsÌ1ÇsÌ1ÇsÌ1Çs>ÓœÐÍÙ<›³ÿ>}”þ×<­Ó|iþÑaYÔ!®ZvÆe<2Ì0à 3Ì0à 3Ì0à 3Ì0à 3Ì0à 3Ì0à 3Ì0à 3Ì0à 3Ì0à 3Ì0à 3Ì0à 3Ì0à 3Ì0à 3Ì0à 3Ì0à 3Ì0à 3Ì0à 3Ì0à 3Ì0à 3Ì0à 3Ì0à 3Ì0à 3Ì0à 3̰Ï:¬î† ÞvøÖ]úÙ¨™{7Š«*L²Eç£hfu: QVÄíg£â8JÎ=Àl³Í6Ûl³Í6Ûl³Í¾ÜìÿO7{x6ûù<ÉBÑ’kGiÞ‘£b§ùÝ( ó$ÍCT¦É$*Ã,Äu”Í×—æíbeHš¯¥(£;ÏŸ¾ø&ªÒqgí'fq=ù1^T¾_Œ/Æã‹ñÅøb|1¾_Œ/Æã‹ñÅøb|1¾_Œ/Æã‹ñÅøb|1¾_Ìíùbö»/fëÜóÝýÍèå©t¸”@ @_ Úé í3èI~œÅÓiw•ÄÅ+|r¹\¾bþ×.ß9Ë_MšEŸ¾8Dé9ˆÃá|ug÷}gãz;霽·'‡û¯=³³þ6³ÿäÙ&æ7Ï:fãŒ9XÔÅëvèÙªršGO¦ÓyówË …B¡P( …B¡P( …B¡P( …B¡P( …B¡P( …B¡P( …B¡P( …B¡P( …B¡P( …B¡P( …B¡P( …B¡P(ô³ lÐÍÍsèaQÇU•ΧÑÁ$Îó]üôƒÁ`0 ƒÁ`0 ƒÁ`0 ãs÷;cóܲxó·Q“¤oâ:Œ¢o%,ÖU°¶;kpf=>x~ê']=<«Ÿä?Æå([Deh¾‘Ç‹öhöe¾‘@ 
@àyðin¢YYÔ!Í£q·Ö~¶¡Éd2™L&“yåÍÿìÌí3s¹/2-ò¨86ÁL0ÁL0Á„U&<ì&ì|èTŒºŒójV”õÍÐþÖi»ç¾—¯£*äUÚ|KÃ/ùÞÝNñ¯¸wîĈ¬%’8‹¾MKN÷¬Ÿ9?YÃ/ÿ%‰D"‘H$‰Dâç¿ëÄs—¨¼ŠóQ˜F³¢ Ѩ˜ÆiÞü‹ÅÞvÖ±›ï±Åñ[róqôcˆ_7úøéŒw^ýû“o¿1÷ƒÿ|ç.:j³ÆÌºó÷þáöŃÇþ½c‡o³k :7NÒÑÚÙ~ܳïrãþÿìü­÷ý%Ÿ½nïe¢Ùã^S×Nyze¦|ßMÙþ™)Õ,Íã,JŠòûŃU¾QG¾óÑ«Iœuㆵ4Ÿ¤Gé{ÿ3¯ò{µÙM:,`ºÈŠttñ+ŸR©T*•J¥R©T*•J¥RyÊîŒäÁ¹sDŸNw-VÑþ·¯¢êô9M4F£ÑhWUÛoµá¹³§žy{è7¬= £´;Æy?$ñ@ @ @ @ @ @ ›;hã}(zzOâŠòP,?¨B™†Ê 0À 0À€«0àé`gcÅã¬8b2™ÌÏn>ìl®jÆù8K©T*õÆ©›ƒÁR=\”ó7qVÝk§T*•J¥R©Üìîö‡A¿zúÃÚ“(ÍÃòÂ6¥R©T*•J¥R©T*•J¥R©T*•J¥R©T*•J¥R©T*•J¥R©T*•J¥R©T*•J¥Rù›ßì¶å^—»—‹<žÕi½ Ušd!Jíêõz½^¯×ëõz½^¯¿þýÞº^¯×ëõz½^¯×ëõú›Üov}ÿ©KŸs T*•J¥R©T*•J¥R©T*•J¥R©T*•J¥R©T*•J¥R©T*•J¥R©T*•J¥R©TÞ†r{°³µ1ìïçý2ΦE²,V«ÕjµZ­V«ÕjµZ­V«Õê«[ïlªÕjµZ­V«ÕjµZ­V«Õjõµ¨w;;›»Ëúaœ'EžFq¦iÍâzòc¼Ðëõz½^¯×ëõúѺ¾?#äÑþ½ýPŽÓ$ªÚçPVð¤T*•J¥R©T*•J¥R©T*•^”¶»07·7Õ.P½^¯×ëõz½^¯¿ý ë¿t«K*•J¥R©T*•J¥R©T*•J?–þ¾Iw··–éãPÎBUét–…ŸV¼@ @ @ @ @ @ ŸAØ@ @ @ @ @ pe„‡ƒÝ>”-ŠŸY\‡(ÎGÑ(Mâòèäo¦¡ŽŠ,­¦4FûÕÚp°»³ÔËbç…V«ÕjµÚÏÞ{ËöÞ¼–J¥R©ôj¦»ƒ½~ãó»Í€ÿìl¼;àn”eˆ¦Åhž…(ÍßÙ›4Gõ¤ a­¿ª<)¦³bž>¾bd‚ &ܬ O» ›'šœ<ã‡nLñSœqêîÅ縜'uó‚´¶}Èd2™ÌÏ`~ß™ýuQ‡!o›h6)ªÙ¤uúÄèÎûŸJI¾Ãáðá÷;¼¿üìðÅáá[ûÅïFezÔš[Í+T÷Ùëh…ÎÚºàؾ:§£¸Nß´ŸŒ«p¶Áݾ>7çÙÉHÃ>÷°I7lûÂay‘¯½7ðäõd¢yæ]óyË_¾þd‡ƒ´.O_ß.-«úä†ý°"wms³ù:ŠæW±ŽË‹vfØ-vÜ Ûýà°*4[s£L{×{ºi¦™fši×dÚߺiý}dË"Kß9]¬Û¤©ãiÍúˆïŠß¶âVˆûe(?.’Ðþë¬ šÕßj¹Èõ'ïvd˜îû2ÄËŸUÍÍëš~çùÓEÕ~ŸGa\Æ'¿ÒÙòï¢*N’I\³î£öwºûÅ^dkí}ÕŒ0ˆÛ8b·Ñï*êIã¾3F¯¿²}wRàVÔãi˜'ïÿŽôyökÑ®Žý6ZñÄ 0`…ݹ†[['¯eY\·¯eïlÕåbV³IœwFÅr™hzº¸ WƒnB,ëùZ»¹¿; ={4 ?Eõböñm8 »öºÃNŽñ5ïSɤȖOý{ç…nQË­ÄQ1[.åEó¦W†<^î‘<ÿg#oÅȤyr­÷¤qÓQ4)Êé»zŸZ—iZŒŠº]¼ÝJëþðÓE£6»Qýnòƒ24+[ùÙIxʯSþ÷¶Üîwâ.ÊtšŽÚ¶=¹&Ÿ'Y(êö¹Aoÿ(|ÿì0о¿øý«Ã»÷^}ü3Ì0ÃŒO1c¿›±ñÎÊÓ¬ ɼ¬Šrõ›“€@ /=î þÖwÞâ™¶ûÜ–Û;Ý5¹<Çãñn¶×’Ûîþwwç¨Ú½ZÅá,Í_ÇãÕ¡.ãþŒ—v³Â>ï€ÿÙ ®6 ™ù¨,Òöé±Õ<;n!ŸÞBÓS>rÐz{kµ)“0‹ËØL¸Â¾ë&ô|>Y‹óæõ¡|÷%£]K*²t\¬øv„Å~Vö/Û~·WÞ/²ö‚žé»»ñ¿³Üè8¹”-”á¨=Ýís –¿_Õ[®ì™MÒ|Ü|ÇgííÓßú·gÅQ±Ö^ššoJóDÑ££a÷ß_}Êÿh§ì¬¯8%ÎÇYZDïÍy5¸%Cº”Õ†díoÀ{ÿ(O“ÕþéM¹Ä”£nÊæjSòP|xPnÒ™ô‡nR¿à»sï…‹¬nÖëªãæm©úø»"âªêˆáϧ¬Ët:moë ù ÈŸ;¤ß¾ºÊi\ŸÛ€Zõ/ å2JwîÂN¿Itð½q Ð Ðò%°ßx}üÎÞäÓ#ý.‰oÛ;§@> Ò]r³Óï5y×õ"Š“÷¶,BVäã/Œ½l±Ýõ•±»Ñ4­‹åá¥4Fß|z½£û]%G¡Ž×¾;¹U—N÷Õ»î†ã»›êî6¿ýó«íCgÝ 
kÚYýNƒ½ƒÝµ—uøÈÕ…³2ŒóYw05n××0£´û‹PÕåE‡Y ý¬C»íòÝ~gЫæ]«ù‘XÞà¯ÈV?JF¡P(ŸKyÕ)'ÏÚè+ŽY{-}{1âÃoÞ?Yoy¡üáû–ØÝèÝí÷íPOâüä¶ w¼;à­ÏG‡L1å«N9즜Ü5d+J«¢YÛË‹÷ö>LÛ¸Yÿk÷›­rg.—û¾ûûÎíwHïŸ_IYuo4ði…½NèI¼\ž;õîm¢¯8Ш¹·~zÄ6 UýÞªÿݨú¯yœ…ÆÝ¼;X ³ö©Í[drnyC 1Äs{ö6Î Iâ,IÁ¾‰_­t÷ñßë÷°?É—¯Ó"/Îøöÿôî­¿íŸÓm ]˜Ïçóù|>ÿ:ùÿÖù'÷ãØ¿÷îÚT˜¿ŽËEQ·ªÏæuªå£ÅºEÁà›·O ®¯ŸÛ Z7›®g»@§¡ŽŠ,­¦w£Ã'ÍÿnFÍrջû[ßN>,?šfši¦™fši¦]‹i¯»i+M{—=ý¸:]ké `¤‘FyÁÈQ7²?å°Èz¨h\ŽÓ|ùìÐñtùÉ·¶ôªY(ûº™õygýG7k°ê¬¢ÌÓzòÞ°=`»Пó"=*ªbÎï¦P«ÕjµZ­V«ÕjµZ­V«ÕjuW?èê“k¹6ÖæyZGi^‡2)ò7¡¬ºÛ€Â®ö¼Ãú ÊžïßÍÒÿš§y{iX{}Q†Ñ<©ã*\âNÔTê¬SwΩ£0YŒÚKóö9MwΆ|= k/ÛËëæÇYü¦èÿ%Åt–…ŸÌ5×\soàÜWÝÜÝÕænDq6›ÄQ5?ê'±Ùl6û#öËÎÞ[ÕnŸiF£Ñ_‚~ØÒý/çI’æËgÖý¬Okµ'vr³ÅE»Â]Ó%'+âQªYZÆuQ.¢nOÙõ7;ðäVÓé<_nzÄí>\¥RyMË?te"ükõb¢ýW‡ÝN×wæ#â»þ¶ŸzÿzªI1-’“Oý6ªBÙ]EuúW_ÓÿKç÷‡·_<ßfE¶˜†²Ý2xòdåoƒ¹’Ì_;fûg‡Ã¹úΣÎéϸÿ¶³<<ÍÊtÚ~¸Ê^à[Åý¹ãv?ÈBVÇ·TYþlî}P ³*ÍŠüV;Ý*ÈfáðtÏÚÝhsýeû¤ˆöÉŒíƒK²620Ì eÖ:¦?òöìàÙJ¯.¢>ú×.êc48=ðô’˜dù*\­p7wµZ­V_z§«ûÛ/gYš„þ†rßo¬Uù‹ç‡rùçÉ·?”oÊårùMÉ÷»|çùa9ÛØ;¸°õt¥Ý’ èšAÝ%¡ƒÝ@ƒ­—Ñ÷[+¼€ @ ~†èn×6èOJéŽlÅÉ"[«Û‹Dß><¸â%¢@ @ ðŠƒß¶à°?ëûù<ÉBQ§£ö:†q\"‘7‡ÜíÈÍ÷Ný9r½þªöìúþ´äïÊtœæQ’bœ§í匯dt·Aö§¾cs¸ÑoA?O벇<š6o¾GE–VJ¥R©|·\nœ< kžÏKî6w¿nö?rÏb@ @ ·Xøv¸ÙŸúìlwCqýò¢Aê4©¢£E”œ=Rã°½}‰D"‘H$‰D"‘H$‰D"‘H$òŠnög™Þ/çãó瘮q8‡Ãáp8‡Ãáp8œ[îüy¸¹ûsNQOB…üŸ‹é…w±£P( …B¡P(ÊUS7û›–wŸ:¹¨¸8Žæy×ó²{zçñé-Ri4F[]ûÝpÐßvÿÞAT·OˆžeJ™L&“Éd2Ù'Ëþe8ìô"=*ªöø¯Å-nq‹[Üâ·¸Å-nq‹[Üâ·øµ[|m8ìâúÎ#×E"‘H$ú4Ñï‡ÃÁ2ºWq¶ÜŸ”é¬}>xt'uqÁy@ @ \5á_‡ÃþÁ‹÷› Ä2̲4‰»Z¥R©T*Õ—®¾û;æ–Eâ  @ |Î`m8Ü>¤y~j¯Ó‰D¢kmµ÷^F‡û/¢*çq–æãhדã…X,‹Åb±X,‹Åbñ5ŠíþBÓ{í•?á§$­ÚÃÏe˜Åi)•J¥R©ôú¤{íþÊÞçó$ EŽ þu¸Õ_*ú,­¦qLT*•êvV;íþ²¼ÇÅ´ÈŠq1¯š.)¦Gi~ñõyr¹\.—_Ù¼Ùhè/ìz^äk“3"䣵iÞžB.¶†[ýeFÏö¿½ü d2™Læg0¿ïÌþxØaÈÛ&šMŠj6i“›eÝyÿSÝ]³¿Ãáðá÷;¼ß;~øâð0:jÉëÉò«LZs«y…ê>{­ÐY{|ÛWçtwÏn>Q…³ îöõ¹8ÏNFö¹‡Mša›ëëË‹|í½'o¬'Í3ïšÏ ݼ“'¢¤uyúúÞpiYÕQ¿™ß+òw×67›¯£h~븼hסa†ÝÒaÇݰá‡U¡Ùš}`Ú»þÛÓM3Í4ÓL»&Óþ³›ÖïÃ{ÖDy[TašÆÙ(L£sëw³²˜Åy±ÈÖŠý(Úoø“?›`˜ð·nÂÉž’²ÈÒ<¼³³¤û%˜v?ö³~â»â·­¸Ñ¯f¾ å‡ÀEÚ×’µA³±V-¹þä½–Üì7eíßÛîŒãé4^ÛŸ¦yq4¯­öMTMæùǯ²¹ÙÒÝNÚ\Jß—!^ᡤšßüæ¿uMÒÂÓEÕþ<ŽÂ¸ŒOÞ¨³åßEUœ$“¸yì>jß©»·ë^0ˆ›:â݈þسPO·Eª8{ÏŽ€1×ÝØíŒÝ÷Œs¿Rzý•íÿ£ëûƒÄOÃÿŒ—ÝŒ~#æa\׋(Nší˜·Ñù¸_ž¦uÑl\å£2Ñ7ŸÞïè~ûè +Šî!!—]7Ýjèqõ»Ó¿[kOŒßÝÉ:m÷§7›Û³°¼c 
Çãñx¼›íu‡Û·NN„oïWµ;_ŠÂYš¿nOY«C]Æý™›í¶|Þÿ³°±Ú€äëËGqòú¨¸à€Ÿ)·yJwBÊÖæjS&a—± &˜p…'¬wú¹G¡Ž×¾;¹\§ûê]wÌVbý£Ã'kqÞ¼•ï¾¥µkñE–Ž‹W—°ØÏÊþ¥cOÎÒŽÛû–-²öóé»ÇƾÓmo÷›¡ Gí™cç£þýâ«zÝúëö[믳Iš›ïøì½ÃEã¬8*ÖÚ+ôCóQóMi¾€(zt4ìþû«OéÎçØÞ\qJœ³´ˆÞ›ójpK†,ÿQ« ÉÚ߀÷þQž&+þÓ›²ú”£nÊpµ)y(><(7éÆLúS7©?˜ÿüdÏê¬ É¼¬Šö‘ÂétÚ>?òi‡ôë1{»k/ëð‘³ÿ›ã<äEÖm#Æí>¡ÌÃ(íþ"TuyÑÖ£¡Ÿuèò£_ûû¶½;Ñ/8÷ò3ÈN¿þúøý&ȧDºK(vúÍŽWe'‹ly·±"[}‰B¡P>—òûNé7ö÷Ï«¾ÎŸVxÐ «Ÿ>þ¥±Ýw6ÙÞÝgßJ.¿vùa—ŸâÞŠÒª˜•!/ÞÛG6 oNžÈ´Êý,¹\îÏìßëWÝ_.OÚªÞ½3(@¸•Â^' ÞV=|öõîÐôÞÉM°Û½ Iœ%éñ/8Àý«•îVÌ{'·bÞܸðô½b¼Ü7Òn.$EY§IÑg£âôS·~äónd¿Úy/ÍÂ‡Ž«¾µ«©» ö7*¦R©×Iý{£Öûw‹ƒöSïßæ¾½Qyrò©ßž>fõô¯¾¦ÿ¢óûïÊ£ö&»Ýcœ>ò|Üñ¹…nš¼ün÷‡%÷«æ­áèý[Çy:³å€æ}#¬mFQ|º0ŸÏçóù|þuòG­¿Ñ_úsXd‹=µ).Çi¾|8Óxº|ÂW4›×e¨–à«f¡ìïŽnÖçõ´›ur"Lò.¦!ÚwGœ×EóA·“>ºèIˆÌÛiþ[gž\S¿ï]-Ì_Ç墨Û+èÏÿ¨v‹‚Á7nog:ØÜXõ…½(ó´ž¼÷Ê~£wú½UÝ3°ê"‹Îžé4 u|Tdi5½>iþïp3j–«îlÜÞÝúæppòÑ`ùñÐ4ÓL3Í4ÓL3íZL{ÝM®4í]öôãnètQ¬¥'€‘Fiä#Ûë­ƒ“Ç6ÖæÍF\”æu(“"ʪ»Æöl+v•±'-vò äƒEûœÈ²˜6ÛìÉÆéUóÍ–ù,-ãº(Q7ìzƒÛ¸õ˜Díœqõñ]jµZ­¾*u÷qrßüýöI.Ý >êÏ~×<@ ¼Îà·ØŸ²ö|žd¡è—UÍÇñ»C"¯1Ùm YÊ4_ñØÒÒ¿vi÷ñþÁ~T¦GE>O²PÌÊ¢·ÜyÐ:[ýíËyù&}gÑ´¨‹2Êü,òèÎËgÏ¿Án&¶ÝaýÃÕŸ·¿@qÍŠ2¨ÕjµZ­V«?Qý»®Þ\ÖáÉÃáCÙuÈþÔeƒeöà§f-ûó<©ÓâlSó΃¿|@ ×wú7³ÏëèrO+×jµZ­V«ÕjµZ­V«ÕjµZ­V«ÕjµZ­V«Õj¯Kû‡áNÖàý¢¬Šµ7!¯Ë8‹âŸÒ*:.ÊiÜxƒ@ Ä‰ã†ØÞì/è>(òÌÇqFÑQš…(NÒöOM’דP¥ÕÝ(™Y³@Õñ¼,úÆÙ")N?se¦mwú³QŸu2¹ä;¬Z­V«ÕjµZ­~»Þî —õã0‡I1à-`k ¼zôpí(Ô1~¸;ÜÙ^ûí½—ÇótçÉÇÏ×h4F£Ñh4F£Ñh4F£Ñܬfk¸³³l~xðèá%8‰Åb±X,‹Åb±X,‹¯u|ØÄ›Ñ‹û¯¾{ö Ú¿w°ÕeœW³¢¬CYEieé,E“bŠªŽ«´Š¢!Nêæ/¸\.—{+ÜÇ;8sŸ¥uû˜†|T¦qÖNy{ïmïO7<ó^vo×ѫ͓å£< @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ ÈWCþÒ![gÈýýGQœ¢'‡ƒ¨êÀ4c0 ƒÁÜ:æqÇlŸ1ßfE=)Ö¦a”ÆuEY:nÕ£4ñ®Ÿ÷´óvμ£PÇ·Äü®3wÏÌça×雕a<Ïân;¢8Ž>zøâ¿2X,‹Åb±X,‹½‘ìß:vïÜѨY¹xŸ$‰DâÿÜŠ›ëïžQЦG‹èåÁõoŸ¼¢P( …B¡P( …B¡P(Ê•StÊÆ™òâ­Å-±úÎb ûŠXw*Ãææ¶Ÿ×é8äѬ,’PU±vPUÕüE¨B^Ç+]‹Åb±X,‹Åb±X,‹Åb±X,‹½úì뎜±±ö–QG³¸¬Ó¤= ¢ðSÑÌ-æU/¿„*º3›Äã¢jÆTßùµF.O†8wãÛ/ÖOúfR=ù1^P( …B¡P( …B¡Pn„rÐ)çîïÿ µBœý'ó"‹Kê«RÇuîYÞCZÙü(ûðÞѰ¾ÊÎQÓL3Í4ÓL3Í4ÓL3Í4ÓL3Í4ÓL3Í´Û1ía7íÜóR_½xò0:}Rê«§/.ó`F£Ñh4F£Ñh4F£Ñ®—öm§{(ü£²ø±žD“¢œyûhø$Ìê¢D"‘H$yõÈ¿täÞy?¼ Y1›¶ÇÖ³vù¬_|ɃÁ`0 ƒÁ`0 ƒÁ`0 ƒÁ`0 ƒÁ`0 ƒÁ`0 ƒÁ`0 ƒÁ`0 ƒÁ`0 ƒÁ`0 ƒÁ`0 ƒÁ`0 
™`§;§xíVöK!Ü•&©£Áªs•5›_ÛvDöq;V0ëƒìšSå©O}Y!mVÄü÷R‚=½Y7is’^JKÁgøG8Þ{z¾ØÓIU•H¿¯˜ìŃ ¶e;Máðj!øòùBÏWDì½, k\Y…#"~<±(ƒ4˜ÄÌRóGAK½á“¸¼âÊÍ ˆìZp_·¾}N—€ÇœÆ„ i<Ê©”7´9TRT¼'$Œì»4¥¬·º‘A«_ººéP¥’2Gõu£½0iõ˘ÈëRýã3óD }yc0» Ô €>µ[ÊÜ=ÑÅ”?=j®E’µÛÒÝËï{‚¸7@ÖÝZÊ¿_f3ìP(ûùhdʉ‡§ …7P±bz‚ïïá_n¤Vù1ª ÷„V:¯Z{Þà*‘aD ¬W0—È/Ó!–•;8=˜Z—µ|²¤ƒÌ•Ö„½›·ÎMºòkžf¬¨ß´R2V~$2Ð>óã:¾Ü,Éß¡qƒTÒ(dI&Dº—cÁÞ{¥ø,LÜ‘¬ZÝÚ8 c—‘Zp)O£V.óZ¦®ô/´#‹ÆÆ±BÈB00Äfß­êé@xú3«ùVÌcWD7&"¯ƒi`ò4?Á‰•‡ .¸G¥&‡oÝ;aYÎúõ‡ƒŒb[f«òäÛF®óùfº¥™öø;ùϯO:Ò<¡Î²ØÎ¨,êçã×8]9×Ú­«®¾8n®Ýüº:êÏ :|ÜíÜ8¬} ÷Ö]ë¶X–À 'h6à^½ú.\9;çú“Ú|ßñ‘]¹+Ä2¶2 ÄdmNîkXœ!08ÀÐÀB¾Ÿš¸AÅ“Â*ˆ¡ý¢lDÔO¯}â7Úýœk‘¤>ERãG>§4b¿4–ÅÚ"ôÍ”pÚâ<¾{›Vélj Áó* C¥Ÿ‹€-!ñåG˜Õæöáˆãv‡û¤üO`‚å} ¡ŽeQ"¦eàG ÙÖòÎ(„àƒ´‘(ØC©™í`í[ÄSr ójBávøfÂxGØ–:7íª¸ágâÐÛç×­ÞžÆw…ù;áÎ%ë?¦»ù­ÍCÆÕ†ÎP/%M5.VÖÐ@áy4£«(*CÔ E:—æÀpE¢ñh­ótF(Ñuèö›ZD½±^–èAÆÅ¥>=‚Ï Ùÿð½"w¸xY{ÏSÞSE¯¶{ :’ë៖a›ø¼YÆÍåP«Ñïj:Üg‹ÎG„ÛÀ#•ºidÈtÆz¦èÓå­b% M¬_¾Jõfçwa|ïÒY_Ñžå °Ÿa²ÇµÆïð ºéL“1 "OèÖ•Ó¬½+‡Át*¼xžìåof¤.L&:R ^”FV-œ,Ôѻ}¨ßLüÝ¿dKyZOÎÛ·Tg¬â’l¼wÛ›"}œÃE-2hfWÁ§yF’g“6¤p“âÊwîBäã¡/¨\-‡Âhs:éÃျ'Ëè¥qÆ$7“‘M0ôÇ{ô¿‰Š‰yDüèƒâþÇ%³Œ}Yò€|›!_•¢¢N£ÿ|}eƒsh3…‚σXò^‹áñ;-}Ä 1´™Ð»Êð+³bܪµÊè—2QÏâYª( 9`e@°èHPÖàÕì‡bòóèiøû¿6ùoNÅPDVÚ'+a` UÈúsnk³Ü|9¡r„ þG”$Èsö«ÑôÛÃ匥º+š‰ár«ÔŸ›C7‡¥„&ði€Û>3` i¼¢€ã•k³l%¢!èrÜ÷nÕß0­6[jŽÄåÀ¼¤ø(îÀú^Œ¶®+Ø¥GŽ]å¼QÀüKK®4a`|—zÜìÃa <dzAìÖf˜Ü·È{_®žé—X²ßR’ >·ÑÓ“ù$0J!¹ç Q˜PÀ™Ž˜ƒcòªÀí¶âÀ®¶ÓƒìÑ£·`àɧÆc8êš*Ž4œŠ[Éôük­dœ…+¢ŒmôálȘvq1ï;d—$ÖºÛk8½ö#ð¬JIãÊíñaîcÊ5?±Œv·]•>T¥ö¤³6œ¼dÞúCƒž%åÚ})w Ú A̘ œLAA…Qü§íީ”¾,ê;…àà€2ð`|MÓì]tô@âfa{š–@¡Ün|¨î)©ƒ%¸`ÜY'j] ‚ä[…Yé¸RÛbc“†´¨–ÞOš'¸*€q6 -9²©ÁiuzŸÙ’~g!ðà’ô3¦ 9Tnt}Àé«—SN6 1˜°a ™L$oÌo¤Yâ£{" çb ¦Û+°!UéËÇy #Ùèx'.Êôƒ~¶_äÈWAö™ŸCÎr½žýı¿Æ:ÑÞ*!=/ÙÅ0IíRäÙ^¡®;cj±L`×i5‘àr™ƒ¤ î\òʾ¡¨l$RF࣒¾gbGtFú•îA'Mvc‰krÍ…2§/³/aŸ¦åèIïzm·zèa½f•+Å4”a4GÙ{êÈ‚»f |!¢wâêO Ϥõ­]jl_Ò¡ë„}ö²Ù&b#²V™ßºõm»v{r³äzáªJ+9 Ïô—0VÇ‚­35Ø@ŽvyNè"{5 A ~Lƒ˜²g`¡/»v¬R³v4J¹È{,¤p4§`Ó•l,a4±t«ÉÂ#…C#5($!Ú{ƒÙƒž ½ÛpAàËdÂH­ò ²r…/„áòûÙ±Òý´'!ôa=²Æ°~…\÷³Õvz‚ÇÙ¨±ûã~‹é‰Âò‚I…‡SéýIvîÒmä7úOï( —›ÁG›½|SGfö_ñÖ[#"Lòb¦< œ{ñómUE†Æ•³;¬¢§Éa"@'Áw*Œð¶ïÔŽªDƒ¤ù…—ñ)›ÞŸ[ªê¼Ó‘Ä¿hÁEçLŒïµcÒrIý°´(<“óë¦ö“ òÛŽª„L]Òýz… íšõ ¾ípñeOÀÛÓÉ8ÝÈ)máC*Œ'Ö“–ÚVZì÷Ît•$`ø²Û 
H›Ì&™Á]R211šÌ迦³›Ø¤ôàÄÝu 1;AÌîÉãÖOHÀ}¥ûœV1ç“ðÝ£Ó]}ýUºßå/O3>‡>á ì=eæK–GêÃt­nJÀ±ØýNÍÓÆ/…7v€¾PãÏ‘t¦®?c£znØO¥ >×s¡õ2c†G±Õ?16U‚=d~ߘ§?’…ÆÎR$(A}á]ÌÚ‡æÆÄÈÇ´\ÈŽ̪’J=T3Ûß—µ)<óo/ÙWw¶ïµn{°ÛYl0¾’H©¬!É ^ÚhuT Q€²Y:5瓵~æ ñ²¥3 fæêœ:oQe7f¯ÜJË2zIÝs(äQa#øX¼ŒWÙhâüU"iA()qKRº§^8–øô§h„'[mè]°Â=˜ ÑÄ™h—rå¾ © ª[6ŠÞ /T ¸a ƒ\m«**­g !G“&¸.ÝìÞ[€ºÚW‹ø5Å/¶Î¾`:{²¹óüì ”@ú”Lï¶£§Âo(í ÛW™(x°+ª'n&è´¼ç6Ù±l~¤ÉË\¶Ãì[úñQ]¼ZÊùöŸ¶³ ²_ ˜^8[0¿bªõS ÙÖˆ] 5ø!ƒ}È0%bÜÔ.¸·óÓtŸ>úïç­Vµ¢» 髤¸A#Ï–þË>Èk€-ÒZuµRH¼†/$j$!˜âDzÍõyüß_‰œ´Ù³ùËH&¥OÂõ—õï³C­xÛ8¥çCìñç7á¸|ÝR3"t÷HwÇÂu~¹2ëö¼myIQ-‰wk‘ƒú9‚Q Æ…n¶dDv"è|šØðÕ‡'ý”9¶¢žˆ¶<™÷LIOr8÷•Þäc¬“RYwP'#þpΪiÞAœx{ž-Š[ A;î€ë$ÂÃÒHq ‘•*ù¶òù¾. Pí¦ÃrÛi,Åb-Lã[á™hmTcY#Û«Íïp^Ö.®YxÏŠÕCá¸{«IGÊŽ‚Ì l+bur…r#‘¥!enËœ9, ]Œ±|I𢣽’ΦïTÀ®XT1ÕŒìDô…‰Ý½V]oÎuu@©oc±;}ã|¡“wÚ —5K¢]HԂ⚟«ìº´!øVÝ8pÜCÕ÷’ ¼"ƒ{»æä¼=K“?’p·EÑDU‹p“R,ȱóº@:%Ð} œÊ©õ‘ÚïF”)VÜXÁ=~wmï$Ä(¨ G}-M«˜£‘ôEÝãQ.Ê\Úç'¯n¶Ü ¯¶Õ ñàªõ›*|› P<¶’ß³€§LIXµÂÌ€8áõYÓß&Y=3‚µ‚+9Øý€Gá&ЭüfÃÕ¹#ò[2T!zÂ2³OÏpº¶ÝÎÓ„s—®¸I¹q•‰amˆ x%Ò› Š/bhÓ,²©CQ`øpUÊÈ÷×>åC}í²pxØx^(Ä‘Ú-£³˜LŠÅœ-Çq‡0ÑœÉRnJ@G8¡h?²‚î!ÈÂJ(ˆ´QîÒüLfm†×$?OÛÂZ¡æ·tR,L©H¼bϽ?˜ï²Xr0ê=1$YL{;tÅ f¬0Åt#ÆFœÉ¯_2¹²Óö8ö[ŽKd `AŠr-ˆX…žòÁ±Ý±}cª¯ÙY칎}µ,‚Y²}'ׯ¨¸ ÎèA žlÐX‰oÍÊXkEÄ:×µä•k憩PÏdÂr¢O(WµúúÁq: c± ci…".ýB˜J[k“H·"Ù»L¨¢³‰Ñ\{ FËj†©HúÕíö÷ ëF#me퇋…4|5ƒÂ•ä”FD¯Ò¹×R£˜‚¡È^X 7‰6&d7àÜDäbè`&Ñ(fæÛõ(Ð6+ÇZ´‡ÚÁÃz®u¢yÆ8†Ð‘`I¢×j³CÆK¥YKÁw=ÒÃPH %AŒÛ ™é,ru´"¶êiHWF'ј†â ¸Ë¾(ç‹r|¹š–1X1á ’† VQõËó1åòcØ% K– ]â“×Û j/a(੉XD{•ñ—‹8äXŠÌáÆ6LîÊê÷àÃϬªÜk‘À€¥Ãvtñp/;e5W¯¥³îЖüê¶ïÈ[NiHÔÎc…¡å3;pé*¡6mD¨úi_KRÖ #µ‘Û8Õ†µTCk&Y–•>r3L"Hx¾»ØyrÑ¥–0É ¢7ÆñÒ†1¹×‡mþɹ­çaÅm%9NZEÙ}.‚×° •WÃ7Ð2vv´òã$ï§ÑØ•¢…¤RÒ™ºûÃ.éùï†9t½a 4@ãäöÈÁÄfQ¬@…õ&)lÁÎ /Ô ¤UdW­^YÃg8É£§cícö€W½¢Ù©ó{‹'ïÍÔ]kàˆrôp/s /À²Û9´,(ý™’ì”3ó«Î³%P;Jñ õPj|k½ ¢{Æ+̸œj#K¶o 7k6D)âtª"@ ü·Þøç6¼ë*†²„i¸¥ÆÌfÁÃi‹b‘ ×q´µþlaqõkl¸¬%übޛՕé/ÂÖ©œn†*izv6œX /›åw™,ã0  ÅøÕäcÀ—I´=Œ½sÒ>‡.Ñ´‡éx4΄Víw0Yëß•"aûº(#jccIÚ ˆSÁ´~‘ˤd— þO̸b–G¹Ýý®îu´Þÿ²–aš ºpä¹Ù¾úûžä(â! 
õÒ*á,ërÜ× ó:Ûqr\.¥¼A  ÊPÙ ãìôÀõ‹3¦‚=o¬´í·³P;7 §eL-æ žñà@Û%)#Ÿ%¾[·}Ëv½,0*†ìG¸ 3½=hÖcÒ ì½p¾9޵“òñtœú~|(óFŽ—³Ã¹`“à”?,;÷@)ö}¥ àzx=È Ùß,;"^çÚíˆhÑèãu–00DOhLV~iÆ|éiMÒ,ƒ*ÑT4ŠAHÀR›a–8‹V„sùTteZâªXÆf–Pg™Á™]6äí–˜kS‡*&v\˜[{ryµêW&9ѽ®Õ UëzòîÇ*KbìÙ£$òÑ¿[ÌØ¨-eeª·;• í{ ˆ÷ù%è¿g²‡uAúüÏIüâfñ ðh7Ú3JŽåÄ>½ŒnÒu º]D?[#,|OД™‰åȸþU@Õ¾eñI Ô’&¥¢9ò{"æd°¬S E‹\>ý‘ëήĮ̀ X‰Õžä´… w“Û•‚è7½“§|YKl9ÐŽ “0½Ÿ•œb’à]§ª©)øsAîp+¢çösD%Û:Ÿ-xîr}æ„ÈTBÿ´”QŽO²æÜHªã²M!ˆ‡?±Æõ¨»¸é[~µ«3op;‘.ùÅ! ï”*—“c§ó}—e‹F NãxÄ&PÌßÛH»‰Š¥™ñð[Ï–~#fݰ-D#éèCÎtç•CË·g"üãfðB2_V·œÊ’èÔƒ%¸]ÁƒÀ±rŒ¬45¿¤­äG7›ã iw7¹Tü*A)ÉùDÛåxÓ?€ˆuAÛ´Ð&÷ºózæÝñØ…[%üDÖK÷vJSÔ¾1›•ÛgNZ€^æ'«˜¨úÒ˜4ó®ZÀÙ/ÊWÍ«èT¼R óåw£ÚÇ!ôRaù„€%8/*ݠÕD·¿(”chNßP`'Hš¸nW¹¾mX55!øã„ÑoÚv”ø´Y¡þŒ:»¤^ð ˜lrÆÃÈþg¿Ôm; ¥œb4Ðbf¸*…ðå·[Ö|X1$C¥#»¸„/#½Fʯa_›$e}$p” âçö:÷,Žs¶U·r Š ¨ÔƒÜíJ¢‚=·†Ž+õ¬p«~<èÑ| úË.ß°}œW“cT:“³àFbéŒ@òŠ%ð*¯k&QN jÒLù.—³¹8ñg\#‚ÙPãÐíùGn)7NÚP‹ &Ócò ‘¹‡à@‘(w^§ã¨vHéhxnÜb%Ì 8ݸÚïË;©þr‡Ûi Õ(˜w˜"=8ˆiƒg¬úYâG±u9NÖ §B%Ñöv±é2ÃEÊ´cñ0's|YDMHÿT‡÷i>k\®1rÏžK’ŠøîQ0z'Á'<Ç(ùða ÄhJ{¹PõòòhÝôן”ãjµqt0ÚdPW_ ¸¹óg©ãÑ¿š °hÿŽT[Àô+w!œô\£3σë ¦]9b ¢ ‡2A¥(‚1ýkk™¢¶qg‡"Ëx"#€ÈfDŸÆ²£««*ºÇyxám¦Ù:c4m¼`P{±¡ƒ*ŒîB£u”¥²Ê§ËKâlG19æ=w°aØâ-|{¢ ž×ñØÏŽü2¸²nèms„@?‰—`ÛK·|S¡d˹dFó š¢¥ªeÐào²÷‹n€^á5¿ßMöf¡¤òq›ûñßLBãü±˜?³!#! 
{»®»Ê2žôBøKÛj–‚íèiP'ÂþµüIòE¡ñ;°g÷@nhèDw»¶ ¤ Ùœˆ¯Ãj:~ÚûØŽþjnEapéY4¦úé0½òòŒE;¹B7§‚++9˜ÎˆÆH€¯e,Y÷â,†lË©ÚÙXø%;fî!8CU»)ƒ·Ä+(×ó:Lí7ž„PÖÂ@B™5R210ͼBŒ~V({µ’7þ1+.ïÂwMæP‡ÂIûëì·Š'±ZÌp0$Š-¦.ÓßmÉ«yz;Vceœeu—GÕ×6ïHaËÝI~~L-´’ ^ÇÅ2¾ž1§ ë"ºª× z!}ø—ßuGw’ûïu×l÷½¶´æ‡½ÜC*ÌŠ5øB\±Yav”OÏ—Öî݃x¢ ¾êyýA€ hˆL¡øÞ¡¤SyØÒVqº¥Aôj™³-ûLª‘›—!sêúÝ£$AÜ8Ý‚dþ`ø™EðŸt˜áRmjRÃ÷gë¿ÔI]ÃàR·ñ:§&¹äò¡ÝìŒä=E‡Ùz–ÊeIòæcÙ_'Úít£´þÇ;â^úöìÞ„.b@'°í ¹$§Ý$Æ/E#f³5Ö8Þgˆ~éZfÇÅˈ͈(ÍS3A ŠÍ RKIØ‹0ïv˜›\ت2`Jb sVvÎ])ú“Ý‹i`1¥ZR÷—βSKØtÛ&¡@²Ûá,>EõAQ Ø‚ÀNgÙÉ·} –ûäe˜dÃ“ŠØÂ/¹€õr…;[KÄ–oº&TÔxïü/ÕŠDÁ«n¥áLuBAу¦ÀvׇáT¼ëº¨ZôT:ü0~ùYüì×L'¥êý‰íº§#ÝÉ]]YØAfÂJÝñu­½¹«¿á«!s¨÷käàT²&#¡Ç²ºü ©WŽ(º_?g`]Pt€XC«|sÓ:hù)_²QñnvÍÛü¹W‚,¹WH ckãuùô¥Ä¨oDЕ…œÀ'[kÀ Y"¥ 4õ&$3·ì“>/Õ­Í´iۚݴ4é7fV«w!÷t~Uÿ9í}¬°#es|1ÉlÌðQZ*¿ Ú˜B×Ó#&ÈÏš›FÚî¡‚³øã¦ÃoBºpü\),cð§ ,âãoßÖÇïbnD5ºÿe\ê…å’ |11²º>å¶ý›$ËJKrk tgÆæÛ8ÇíÝuòlÀY\ŒÓ[e´Rg ‘¹gÁ>ä_Ð;)gj%¾Wd¼ Žñ#«ØØñd—¤)/©Ûb÷¬Ñœ¥@Ákg#ök ¨Lˆ}pæç÷‡¾Ã–¡…àÐÂØü.† ìå=aÌ\ˆ‰ðjý '¹??* ÑvORý\Í m*It·`àˆîBcð¦/áXíã@i(HY¶h¼eµÔ²U¬¸K1J‡‰uŸ $’è™®ŠjZÆŒ…!ÛÂ!KøZÂsLÉÝÀÄ€‡T'a UX^¦ëÖÖ´Z5ÓCèæïµNdwŽÝ¨F/•Hw–O;WÉý‹&Ðßž O[È(†U|αÑV'ëeY][†0Ü’/áo ÇOLËé‚ÇSíȲؔGŽZüÞÕæÀâdyB¤3!ÃÜúW|VÐÙIJUÝ8ëEˆŒÉ0o“( k!¢ŒˆÈ)©~þhþˆæõ•Ú…Ì,8ÏV \ ШJ®‰2 ^Tÿ-I*WøÚ›•D!®{¬)ZèÕ±.åõ _‘ÜÖ×eË[ §œðÖBº´Î²/ m$*jñìê€T»åa¡F4ÀV¼õ—ÉŽx/Zrý_/>†Z#ØúSá­¾ô«3ÞÄDÇ%§Ï²v1r™¯æ.D€i{eÕ|(¤<ÙŒƒÑÍ9v!—½ÆÜc0È—áëþÃVÀkd2U_“!f!ª٥çÑó©íÞe9ï/²¶¶#ÓÊP`žð›s–à]É6#B·å‹MLû)QÁ!™qi ‰íü7ùk™Xe¬g,Óá{Ð*FÕÏZûl䦑×ÒÃ3ÑÑ\®G…±ûs½OÆÖ˜ãÓJ¤ÂË@SuìÆcÊI¾ñ™wr33ìkíSMGÛ"y lQ¨ ¼#®_Ec '3Ô¸7k›]oN2öx±1}>,áMP`'h rŒ|Qµg›…sw½òX  *¸a…ׂÃhXR„‚¿V ¶U`/ï?Ñ\98!)™Ì°!)©jôh¯ëåÿ±W{+ŒÓt °!z¯+½hÔÞœÅ@¨EæôšÇüŠ Xª…x 6,^¦M.*„ÜTÕm®Î&@søÆB#Šãª~floô¿Œ7cÐ L‹÷x*˜ ý*p³2»xª­‚hh!^.A™ŸÁ¹,ÙS]¼tŠ#'aFõ³»Ã ý«K«¸Ïîß§@òY íu›ÄhŒ#Ìökx ª!¹`¤Á"çªK]N},ü µ žÇpŒ{·/ûi¢ZÖCöF…Ù®H¼>‚Hµ('¡Ò#Ÿ¤€=AH°¦·úv’ŒÍ=Z|A8Ì•ÅkÏgûÜ=i²X>ÒÝÉãν§bª:Õp®=:椑¯•ì·Ýê7@홣RÚEσ“àµóK(¹K[&d…œU?V·:tbñbB†?а‚À^Ob ¢Áš_™%l%5s¿âq˜zßhYW#L„Xí.b±3‡-3°³ªç,°l¬EùÓŒ÷C; QŒw]S˹±‰‹Z´Û7ŸŠ¡×ݼ²‰¡Y–éÊDØ1¦fH¨LDÓÂÃú›x*{+î­kàžlï™ÁšÅÝW¶ÏÎWé·½îèF£%ÝœÄYÛÕˆ7û/tî¾xë”`›B)‹ð–?­å¢Iý÷ ßqpr.™ÂÁ´dZ[·ÄwwÁžëî>²œÎó’T‰U1åþ`ð¶í©e…ò!"˜9µÕ†ôò‘6“#@™.…qµø‰n 0öf ÖLscƒ^>‰ ×Yª„´Âs~H$‰Š´H)Ë)|ØÂûP{Læ6 
fTÉÊ@}9¾Ecc!пŒx”Ýé<ÒÙ硲³!kvå@ÀK.Ž;Œ‘Äw —lÜ[n’8¶Óððä˜Þ”±BÞQ¹ ÆÐlAáË8cÖ[tàjÑxúþʼÚ'ô4ûpÌɆ;D ‰i䪓£ý™w ÄFO#<“’ÙÝ2Å”S'X3_ŒÚ‘=vm‘ ¶aM`CrRdŒ/×劑­=ÌÇìcßÊ=ÍRÇ£Hÿ?¿WF›\×ô„øÈ:/TV–u³‚‘0 w#O¡%PÎpòô ã¸•U°ð¯(iD°d¶ø¥ò'ã…ÅŠ¶$”˜€£0gÄí5Ö\ûv ¥yôqõ8Þ8ûÄjIû ©aXÅÀØÌ¢UUÇ6]Â:{þ™{/nîhí´v 2—:ê¡¡WìáÇÏI¡„)n¸³¶X õaÉ/ Oj븀~ "ÜÁí¦âøK]ârL´{aKLªLê‹ì‹ jTtbE3´ð©fX¹'wõ~:K4¡ùÍ‹ýßzªXbÃ+ B'ƒ`°Vf 28.Û† Ù ô@Ö"@-9vŸçÐÕ„1ímFJmÞ ÐþøÎµSj0!œÃç”a #„@•›* YŸ¤#ðéNM¬ºh/²ÒHŸb0àáWÞ€ÕnekSÅââ%ö4Ê+¾×öúÚFˆV}.Ó¼†êâv۾꿃 F°œq=‡[Å…À]Èo#Æ‹/"w3@åLiNåƒ ¢Ôf.ÓÉŸÇ•Báïv>v‹šŒÝ=·“ÝÓÃíñéÛ››·¦´¤Î ¾£bÞËužGÈ‚màÏUs€,›eº@p]€ã¢BJ×Ën2ŠÕ–Èï6|–|s-â´|nWÝŒn{^Uxcà°`QÑîaöo8*LGqŒ®qªÈ¨ÁZþ£?'ÁŸIHãL„³˜çF¬åq²û>ô›²í¸–Gë Û?ª÷²m8e>Ä7F¾ÈºÒÎi|D4 ip*ªyÏ”uym5ãÆº  S+Ù$‰ÉñÆæ#Û}@Wõä“PÞã)äï#q ò°Š$w‡]£¿ݳ*E#‘¿‚”ë(Š=º¶ö‚Îʘ(Tb.NâMéßPGÈ•ú®k/ÚþÏjäGâ½Ø#ö(LFQGäwÀO¢Ú0A¨}ÊnHßt€›Öu €Ç³ ÚÝ3ü¨¼c’AÛÇ=~»ìë 7"VÓõYR‘0¡+×& ?rÊøñ2‚§(‘)ØpÀ€¥¸´ ¾ bä÷·Ï`HCy›#˜3 iU,.cë ð04î”Td ÃwéÑJzí~ˇäX'’>GUøT_˜8Qè‘Úz(áórØs÷ŸœZ%_€ø{2 óeQ߯À×$Úʼ~ Ò·'±wÿUD1 㱄VÆQ«“ ~¨²;í§uE¤*ðƒ #ù]ÆÏ7rÊ̲ TØ`Už •B{z—ËÔ·¥Ö&´‘OÃöƒ{ ±¼™üÞ3ñžWâ™Dß>Ê;ØŒXBPS)ÊdQ9ÜÃÎ𑣄Èj ðÙ˜ §1æäàÍÇ¡‡ÚÇ0«\mÄÓ'‹ù =Î`')ù½U+˜9¥ìÖ~é!1ŠtWE7g-O¨Ý‚m”í¦–÷²p0"aq”µíîcјg÷ C{ñX+8r {1 Ï¿û.g´VÒ¬ss£ññzN+‚(©+sZÔyPœâuw×’]Û±ö‘SX†”vç¬&”@‡ù¹ˆðµ¥¾¬ÒÎ×-¿SVY­Á–™Ð’"ÀV9c~­¬E*•H@¤¨‰Ö}=ƒ,zo¡¯x@2ÀæåéKäNÐÓŠpQÇG¿-„ºÍáû*\Lư¥`ø>ÊjüÅï˜ùë6k¾ÍÖïh€ÊJQbNÉfoÒ½à™0…ÉyE)Ù4ðâó!˜ècj õeíIkøNƪcyl°Þ^Ë=”—Ù)ÕÑfÖj±H úÚ d³”}‘}}µÏUc\ŸîDtµþŸ1OÆ ÃÞ7,ë`d4BŒcÚƒ»÷ÒG®jJD¼Å.vý@$ö÷·µ½rWÊ•$ÝEQ‚Èè$P›}ÌS9‡©L ?‹¤¸ª ‘ßú}ÕlÆ¢†CVäê¬ `>~±  Cd‚9Ïø€ÁkTäÊì&8ËeË!ÑZU(0bˆÉêd­˜‡’aÌ™uUòåjB6µâ¥ •5±‚LV3…Æûx€°•Ä`N]kÚ¶ôrº ÕòS@r³Â@®‹ZY3/Ÿå¯š·3½.Cåi³¬_!çrŠejð'S~â’-6ŽëÚ Ÿ¼x5>ÖS< g㾡çÎK¤ðP†šz`̈IJW;4~ÁNB‹¯»‘³‚݈uO §± ‚ºP2ì™etB-%þZn–·×¥k¶04w31Me°å;«¨ÓžYy+ŒXÙ¦äxôKü …ËÝÙ`T28/oÁ‹’Ái-(aB§~!X^H¡“«Æx‚ù- ÄĹ {Øv;V¬?´>@†wÌAdiˆgÐÜ>Œ–!ß[¬Å‰’…TAØP_‡Âz)‰ŒŽ+@3—±‚†ík¤jãÉÀì¶“ä?>UÆÒïcUç’ñ,yÄbÅ#JÎq¹ã|RmAB:DÖ˜R„ ‘¯@¿zÛZô8´Î¯”òÿzÇÄ­ülà["§¡)kÍÌeèÆ`ËînëI~þO^Q9’ÎcíÒØzU¢ž!#i©š57—>lÅÒ%§ï)ðµõ?SÙ15V1vr#ìäz7Q˜ÔòJÉÜ­#bD{+™VÓœ0’)Û™¨QÕÈ “CC ¢Kð8BU˜]­ò|DDÐ}ÂimaµƒÀ°õj³åœ*žH’f§ L)$ÍÄÒ¬Ãv=o ‰+è×çô{ÇqªÈ·gåƈƌþ’à <ì|^ XܹÜÃc Ê.Sš«ÆOÀ¥lM·Ò±¨|#ðRø@ü+ÿ=g¼£›ú\ðW ­\"õ 
ôj'×ǧr:4¦ðÑø¢ývX‚ò*C DrÃuPÔVX"ÚÖdoƒßô¨ñÍ\øJ–ZÝÍHÈA>ŽísÚ©ÌRyÿbNC#íeÖß+§f!¥ùdóùñÌ=d<Q>ø{Ö³%¢lâÀéUí"­(æ8¨ ‰°Ö}g&­TG¢!cƒA¿å-e'.Í µiÊêÒËB;¬›ûsðüƒ×^S¾þ+=ˆ­t‹†ð^—§pˆ¼Flª”yòf Ý‘L’ú?Ê9ë»ÁÉ]Ã*!j’^Õ´[ 6æAȤíõår窔ØìeŠÚêbsjúŠ%„„4êªÂj%ÐLJ«c¨¸䉿Øñ¦ñ¹Ú`X¹Žálµ^£jb†¨&Fôhº1¨ÄÞ AË©å;Žg‰ïï7Lç[íRÏ_[Ä‚\TPM‘S|ça0âä?Ý;^XèÌMD|k$à¦ð)Ðãm‘e›¿J”Z;sX̧yãÎ%~wôÆwÓ-cŒE›õÑËMŽ•…`‹°Š¹(K¯Ú²êT˜NëM‡yX[‰‡#{h°‰\‰E"©/WcØz9w{óòÞ„óÊy°²² Ù1Ž·Ýkl+®¯´Éê;PBÇ`´¶rŒn2ê1•MtU^ÄÞkE~|{Ä2/]Ï›—z—úRµ@T¶Z“±(Ž/áýn–^Î ~‡Cqm|y[Ó€Ò^KBü\¼WIF<}nþÈu±ëgDìb ì÷Ê£º^©ñš ²’_M`C”:wŸÏJ¶k¬È‡¹³|ÃÅ:¨};<*‚Ðn -%’é(¶œGå Þù/]ÀñLŽII+q&O“d&üwY…l0–PûþmÎe’F û`èBF?"éKC³ₘf`nÿ_Rµ;;N*6…îZº ƒl^LÖOìáta¦Àt?›Ç¡ðV£E0Q)ú$ðˆèxÝá–LˆYKñÊš’¨ÂÛb6c~à7<5Û«Ü]»<ßáÉð†`X`p¸-‹ÇëlY^?$Xo¼%`†¬ðàÂ(߃«k̆ûˆÖ^*µëyoSßI‘ua?ƒ‘¥äÖ#¢Ú¤æG»ÝýèÌ òJ±^dÑ Éë€VÃH± (€¶Ã¿Híeï‚lˆ,f­’\–2!s­¹4‚_”G¿É>ë«ìÉr®,J)r_¶dÎÉ•œ-×h²¢yN¦a‰Â[^D;Œ‰¹ïÒ°˜®y}"‡å`Âà P_‰·ó 2[ w‘) ˆÂ÷6ô6^&‰SSÓó‘\ßèÀ£ûÙ¾iØå&éÊ—ðG!òoâT•tÀbßWžë‰&qÛ¤0øÎ>e»\Aí·¸Rp“qvºhOKaO¯FpÅ;T¤bÈZ!ìHJ\E—Q GÕåþ–Ppçì/â½¾5ÛK㜠—ÒïXy±òh. í”{Zˆê>QÇ—‡°£J”öVf¹ûu¶UYµÂoF6¹‘³ßw•¬g~?]-ŸÍ8óË÷¦Øñg¯_÷nƒÊµ%A~Ê¢jŠ¢ñ{×à‹¿R…ZR»p¨þ\¨âÐ'l¤—:R¶ýÂ/ÿ#ðhR?x‹T.‚.‚..D_œEÔEÀ‹°‹õˆº¿¬EÈ"àR9¿€‹ÈEÌAÐEá \\»ˆ°‹\·j…ªÿèEû\ûPµBÝ p"ê"ÿ‹Ê€‹ÄE„]„^*» À‹Þƒ/h„ #Ï+÷ýOZbIK?dé”%oè~øú8•ï`¦I.1ÁüÅ1F•PQü™"4 ûD¨ÔC½!õs(i~7vBêÀ®ãjÉ´ ÌòۗÚÕÞH² U¨ÉF•æ¢e,"#Sl]!1@Å™ÓbÕÎtÃÒÕ-JÖn—e³˜DY)\¨² “P´4Æ $JVT€êLŒ¦“ Ö…dÈ‘—yœQ¦ã$P1 Ì(†Ž*ñf²¦Ì Â#&eä‹‘LÌP)ˆŒç:™QõÊ&43o:]šq™t¬å+%T"%(ITUƒ-Á™ržL&‘9¨·6eÝosï>yá×üúãœ3úçi³¶áÔ®íÈ–ln¸«Ñüí±²Œ+´P$Åú/ø™7@còu;ÚØBN:†Uz‹mgm¿á+W×Õc–ÒLrÑþ¢ü¹Œ7ϬÃr8Ÿßϰõþªñ¹LAìÇÍþeš¸‰?˜\÷¡µ@@ÚYæ¢=ÉÈrßÇ×ß-ó©È”MuxÑírNKû”›n|èŠù²iûQÑ©mTiЩ×êœU&Þ×ü£!/…Î ><ׇž.ü±*lû„sBΨ¤×~Ï ”Þ;5·ã,vKœ)¥™NÀ¦ö›k7|•CžæEKl„„I]è+‰"ã·U!®ÑÃ7Y*AÕŠïö»YR>1$ùï·²@Á—ß“|©àë5’Dÿ×rí᪺¦.[ _5í*œUWoÑmQâçöp6~nëë™KŒñsÁá397†V¦š\|Á²¤Ì:¹{^Ψ8ô™ïÍVèÓ‘úÄ6DÎã?mŸ© îßž¾˜uGý,¹êÛyaÔx ›z²•µ~nµq³IL´ĸ§Ò(õüŒ×ƒ³¤ukÃtý¸{¯Ab)ŽÃö¥ "õYªÊ>»ÈÐ&Dr…M˜cüRYæ¿ÙÚÓXPIU1XËD¦4Eœ…æÊbZ_K­Iéðaêš ÆkËŒÞè1p }”G2â6ÔêâñÒ“)96S0½B]}¦¹^u_ë1a^@÷ns*åžÿÙGB¨ÝrnÝÅ‹t˜%“_ÚSX}ï÷nvm1_Üa%²XYĬ԰À°¯|@ÉFëp¶´Šb!ºRùžïºõ5K²àUÃbÖʦíÕy¨ 
Éï—Rç­ÜPR©Í¸×¬¹!lª§®ûBžÂYéqþÎP¿L­tϹD)¦Õ÷,ÁCý+ЖsBwÚ³^žÕŠGk1cP]ÅÊÜ"ˆ±”Ô´ÏHp|Ã#øoöã^²k¨«°a;¨w•7íû·v·¹Ò~ßÂG†4¶*u)éÆwÞ-yLnéÆûK­RŠs™¶Ê®Øû¼}±f¯±›ÇZÞ#P¼ÙÛÇáeD®jm°'åjÅaޢبÁ}”F°©r­s''†äæŽGÂ)ùEÉ>ë|vÖ…%7ÛÆ}RŸ¾÷Wda°3¥ÍŠçÑP©º™€?WŠ·ªãi ÜÔk¶1. ºžq»éZË+ðó>©T%U½35]ˆ³b£¸Œ Û>Z¼ª4tGùû]çƒ gmbr¿ìAhA§îdh$Ð'H;¸+&Ÿúg}Zßd¼¢ùÄ¥tç®ðrÕï*Ó³ÍÔɹn¢µWòŸœF®›\QõÆÆ×€:-Õ[ ‘™Mÿ’Â^ëõÀâ ØÞÌÁïkGg»éV_Œý«×þb¬¬,m•l;–q˜Œ7Ÿ"Œ˜){êÎÞòªß]¸c´%˜‰iÜ[Æ‚áO†zѼb½ØV3Ö9aQq¤¹G7?´ºÿ¬pX4óþgæZàš3çP¢F…Ú½âHPcÑ—8FÐÄ~f ÷þEïßì$ìùBë7ùóm=@À²~h>¬ ¬î M,9‡ßÖÙÁúór|Ý™.è§ áKYUÊZ3A\HÐTÉïA~– جzÅVÜ>[b©öPÝpton¶á:’ßæïlª1¹é¹83CgÄ›a”ÉŸØô*P:Ólõë¤ø.†Ö—å©VC½–k`^ 7/ŸåâØîàcTg¹ÿGkv¶Ûù9y•ÃWÙ “¶QXÝÓù×FZm¿ÚªA¦|Éõ’ Dh.á_°F´vá†<…¦lÏߟ˜õú/éœÏÌUSv·_Ä ½®t¾Å¿B Þ'IBKùµ0÷šê“ÊÈ“qn—æ’‚}ðáËëñ`ÖU<Õc!¯ õu ÂÕG!ªÄ½ìôXWR-¦{¯…(æ±pŽH& Æ“+Í⊑“Îæð(ÿLýœKm£‡Gع× k«Ü?»>ƒ¾®¢CÞ vÄf³“xDfXí·9x`ذ‘ÿy¿›pÉ£hËòAü\“3pÜ_ìW/Ö©9ÒðãéL”@ÂL?œPtË”â¡bzG€HßåÖÒl…|Ü>k.ÚgKKÜ=¹ìÀ„lU;|¤¿®T<öñ¼WÞ jw‡Å,í~>Ú¥±ürä²—Ÿ«§ÙÓ¦ùue¬×\']Pùïm¾“©–Í‹š+6ýFô»;[M_Äg £Ùâ*A¶„…ùVéh¾Íkl÷”xáê4[ËÀ—mEû–´$Š%‘­Ûö¶šµÞ9×½Ñ?x÷1Ï5ɺü3:~’Þ6X©ÍZ{ÛÀýyWͬ.»}uÃÚm°Ïoåpôy=ì0fÁ~AÄèçÞË?¯ ÓuIZˆ‹I°˜ª Dl§ib[^³Ô†¸™¢ ºz…qr$¥¾Žîž øAziw6{[SôÉ!„…l?ȸÌÛ_àËX‹bÎq_–ݬ­·àüªƒÄÔ÷ï":é>¨õRøL(¦"Ó 1KÅc‡ÎòñFú1¿²ÎÛ­ò^ç¨Ó¶ìù²ÛS ñ‹œŠ-q«0½ôÇžû§ÚßÛì%^8™ªäädÎÍXáèšI;|^}DµsÈm¸¶!Ïohl¦¨ÜYÑÜ·Uî2•¾LÒ› äœÅS¹-˜Á²Bn¢1\430'~¨÷²õ+¸'hдxU÷Sñ&Š) x]è’mm%Å#1&ª¾þ~Íß¼Áõ ÆJùÎõÝëuÇÉœUðIóÚ¿8kc9›E^F–y³9¿ 0í×5ñmbå9P`ÝŸåƒÉ'~þ[îÛÜoHÔíÍ«®+ÉKbîúf¾Ö ä`M{õÕh‹³[½êt–¦›QYí c^£óùX« o¶ØÿoI. 
þÞb¯DZw¢¢qZ(9‚!=ã~9•”¯ñ•{`sO‹S9Œc¿WÔ"`ßçc²3#^7{úh%yrqž"¯É=ûKåºÛ»>…¸­ÎaÁÆç•Ö®Ýr[[% ‡íÞ–WçTâ,]E~p¼–ÕYÃx’ë‘Rv[ÕÛ¿xoJ7¨Z—¶:¥r‰-ÎáYJ+ÝC¦A-VíW YñœI_ŸµkìÖYPR“iã²­·'6*eDvÙ›Q@O#Ûž8SX¿Ã«Ê¶&ãèntàž(.ÆŒ;‚ìèî4eÙ,:W]½P“oq=·ˆÂMG…'•Æ_”|üäP/ï&Ý)¹Ù„=c܃;Ô£ZØ·ÙK'´ ógîøb(M5'‘Ò…Qº CÆ.‹ì¦Ô¬¿Ÿ5ží ù·gÅ BÒ8ÊeV¯†Ý‡ë†ïÍÂ|V ÿ.¦#úÛ âæ`·;¦®{ð¿sgËʘžíŠáÊá,Ú(½pœ>+ÀõÄ%*ÜÙ:»Fö)œc÷]›/ ¼6Í63­ÁX-mÉênl): u‰{kâ5iÚ0÷ÒÚì™ý´|Šñëv·yóuÍ….½C)RQ„Ýó²Ãc÷.WŸC{7ó­t¶a“„¾íI7˜#_“³1çÖ¥VÚûßobÇŠfö·àîÙØ<ù“ ñ!eG'!ñ°.$Ïdèy¹B:u›ú+¨tÞÓ|—€þS©Ì!ñ¤lŒ…Yê<ÐÃïë.Ĉ}‹T'&' û)Ä»žk–B-™Œ¡¼ƒ‡yò«ò™ª-m& >¶Vz™-®él z{Œ™Ùý»Iv+Ix»ûsC¯KâÌÓ:áEm«#_[Ù•¬*oŸÏ knt¯æ—Eiåƒð¿'¦=Heˆè,\†Ø¶ÁUAR€²b„æ‡ʶrݧ’19þ1®ÐÀÉ÷„½"ˆq¡ö­ÎüµÜº•ŠíÉ{÷™¾~ó¶FÞÍ{ ‹ú‹l#ÍqˆÝB Ýöáú§m'dÒ³÷ËàÅí¦{ýÓÙ£æ®4.¿yã/0{o8JÔ-™Ør™ø¶~/ÆÉ3¹Rêvšêz¨¨*$Ü"}v;…ãù·[:k€×;%ª»ŽÎÝ(a9^Ö{_üݰlÎêé1†°ýyö…ÜïÆÙâ”9ë{*[È”¥¥ˆEoxÒ¿Ë1=ꩇ+^Û–ÿR«Úf¯®XÔP‹†ÛÂܯ›ômê[`§‰{²ø|:P[ëX¸ñ#Ôa®)]:âRÁ¾RÚ™#;/éë-å¨y¾lL=‚ ßVâôáûyÌh¡á€&µ7MNÕÇ…]à¾[‹;•y¿«êº@ÿdD9BÈ׷ϸj0Q˜0OÀwù#¥DL-éCD¡‰¡Þ ÚXá¾ty¤ý²‚Dät?4(J9 ÿ>}mxŒ¼Ê0&cG‘Ü(c&œÕ-Ø%„bÚýåû5ŽçJû~‡ÐÞ7X–0f\÷_|¨FtÈo&Ý ˆ"|µ ø ö†ñx NÄK›ÿÑÛKóúî–=Q>" >ß±ó™ÇÊ¡âæu+!øåûŽ}7æöKÚêæ{t͘f³6›líÒm˜ž!¼ ¦ûj¡‘c‘]ÇI÷î8ŭǬ×Kpd?s$—•£@“Ü+V>¤X÷Ù9p‰Ÿï…KCëj’ìn¯¬‘û}KÑR!p°Öqv?feøñ®ÞØÝóüžk‰Œ!™(hˆx줺Á¯Âky/Ö J´æ üØÚ.™GÄ©¢j¨¢„4“Ì"Û«ä¯0œíATŒ ³.æܸ^wSÍ”[¡Úî,Ø9gh¨ø6%"’vVD¼Æ>gÐâ‡@箼:¢8ç°ª…î.¤˜²e'Hc#ŸÑ1ÁÜv„„°öFh=ˆã¼½g=3Þ?CßÇÑRˆ8îàëôØàÅÒÁ”ÞêKŸˆúI››Q©“ÓàÚ6ôÌ^>‚ã‹ðyÌöšŽýI!WÃmCM%ÌA¶Ýp_*0i°I#> ¯\ÛXÀEóâä`"¼Å}©øyn¥BMC¼ól¦U%$d$q-úuíõËê ¸¡§÷OÂz`¡ ›Ú |ªzó¿v§ï JØBàVÐÍÑ›´'RR¿¿ÕýgrÛˆb¾Ÿ¶”(Ÿ 7Êæ/ÜÇm×mhÇŒxŸ½Aò"õ҆ѩ4Ö6*Ö8j°ÊÂÆ¥,¬# Ž ¾>b”-צÉyŽùž.qÈïgÄ‹2@Óm>Þ]}ÿCÒõöÑv#ÏÇêÊR¢e ²Päb!¹ñxX }³Þt¾b~1[õ÷ýq  Îó4[–¡|ðL}I…Ò.GIU´v/sÂKVñ¬²e¶Ä¦t/Ø›xÑÆÙ‰ˆݱZs2ÔÏ|Äo“´@ãIB™!ÄÒÅœaÚK³¸Ð„pi ’ FÉí·[UKKC/YBB WÁå¡ö.|#ó’3EÛëé×ìiÍø™ßçíßßÏ8·mçP…£1´Z9ªÌútx³ÍŒÁ‚øP…p̪»˜…õ"¼'<—Ý¡0åYŒojÏÅÞ3œt¼‰÷{^¼ª!BmäéDòÕ;ãdPv’MTEV$%Ë“/F=k¶Õ£å¿›z㉸@oóééßøzŠ8^`D\ŠjL±(Õ‹îJ‰î›4}s 0 › Œ•fÄ™ÚèúR’&¹QM¸>À€ÕZ9ñˆÕ{Ì›ÇÔ ¼iâFShWjzë¾ÚìiƒõüêV’kEðìdxÛOL%»@×{)*,Y– £PÒ~Á@¯^€=_;?k@¥]´©@ɉʾµÐkÞ-X’×õተVz%Æ·• UOÄíØ«)€ÜXè 
úO?2mò¾W#´Ññê¼ÕycSßÏ(-á¸hbBu%;þAÖøºe²Ûqã7ÛpŸ:fjõän$eXsÚ°û•ÍpO¬êíJuBŸ¬—™˜jxôÙù¯;’ÔTÊT•Aõ»Ç>2ê}G3IªxÓXu±#°y>œXˆwò›…-»ZÇ@×a|/Hé u%†gïAId2‚[ׯAëãtwòfuO˜·¹û:7v¤?…´i(ÓÅc™D»[½ºüù©ê­=.|$?ä@dãÝYœ#¸¦‘kÉŸz®F7Á^q¦õ,áÆùxà´@G‘kmÞºë­é]¢P¨L4k³g¤z¬ÎÙ¹¸6@ éŠ0…½Öãó<žâòôúפ¬c¯·ì9)|Ûý8ªçö½ÍdŒIWy Þ7BA¢£¦'1ˆ#›ëw€À¡,¬Í›¯­´÷·/sBˆ~Ÿ|ë´q!¨÷6c®…¬Dq­ÏCïúi¹7ës­³ÊšfxÕR@,meíPaéÈ쥦=Õ ”ØÅ ꂸœ+c{ÅÃwvSøÚǼìqpæÄånì®ü2g垘{ÿ!5Ä|›s¦ÖlW;‚²¨/Ö˜wŽ›¤#a­ÆÆ¨™um¥™we(%§¸z´aÉž­¹'¬Üùô=7؇@û?P0±$}¢”‚§3—ÿ$9Dû8Ác¾@y®›êî?±¡Ë:ñŸYÐßA1~¹ô±÷våA_¿sgÛb {–fmÏ^€=è¼þÑÌý1=¢Ù´÷Ü-¢þŠÔ¶É‘1r~7ÕF’y°ùŃ[Ú;@X4EÊhÂ|à¹Â»²‰ñtÙÜ ç8¢^ Jd®PÈ×*\›ËÒ”+â,Ø»Bmlo{i|ß]þóæ$ž{ëaég@Ÿ¹uï/¥ccw«#‡,Ÿ§T¶ZcS;8§ç ¾ùjþ6"¦ûé%p2M!´ã›Ó3so^RnRIeˆ>C±…œ€>9íòwûvûúw玜¼mÆ¥¶ÆÜ-¾ô0µ† &c\T# 2 ß9*h}ÏwÈ4µŸøplý–µu‹Ô¦.dŠ[Na ŸG{jÕœRŠdOnZøŒl%#bÈÙ1t¹¬O|Ë:7=pˆ–Äõ-ÜÀ ¹å âCëð˜ÍüÑûcLŸÜÓƒ’MëäLµ£L9_ô ¢ß^twì˜|¾ÚÂEÞßfz€ P Ùxö 8Å $=xn4Ú /¸¹£ ÅHb¥uç|ÃÌŸµ!×yʆ?w͸áK;ǚܶ€x¹×CÂñ|e¡T÷±Î>1¶af­³xº¶î¼UH@qÇcºBEur%÷þV¯P7úíM.4ýÛ%¼'Ö,(ó󺢖\ @’–8Ž»3ÍUé9r™°d¶ª†ÝÁ¢,Š×vRÇoAô.Ž›F™ôHi% 6á¶Ñ ‹~°wDOŸŸu+ºõnß²‡ý‹¾×vÖ>˜¤Yeæ]þû†ôT¤7·ÔËG–ê‹Qu"õÀ˺ډõË"”£ˆ>MÑH€6TÕ^fŸÚQá¤ö" a) ïµ"ßTÄÒc!KŒ„VB(h&bUFâÝáLˆ“±ÕN;éc#E¯º‰AH“0ªªÌH cŒuŽÜxן4Œåy‘Æ\ ¥'_ Á¡ÁdÚ$ÿ´Šˆ¥áAX£ÍåökXhÌÙˆâr*ìGP7HgJDºn|›,*ékǹ½HN;zL«’hS`Ñ^^ïsåî9Á…e’–T3öï¸~6Ô•¹ˆçõ>æ¨4|É §ôð}ÌŽÝNceاÞf ¸oÎooŠe/†oz‡37ÊŠï»—hµGº¨f‘Ÿ}fÈÝC• Ý6(ÔÛæ’Weå¡d¬®,‰ûY϶¥¡e²W*’ôN­=ÿoT) $+嘊)E1—bH˜ûçÃܨi—©¾Ó ê GÔӸļ셩:GƒŠg²Ÿ§ÖÞÜõÚµÕÙëk9ÎÐN“&š‰9®¦50,ÜEï µ@xufÏ9¸%a¼ý`£´(ì–Så„!ꑉAyêÇÕ̦}ðÞÄu‹KK¹°C Ö59nþ¿jOß®yNe…ÀàQ>ÖW¤ÊÖ–*œr“S•𥲄$t¹Æ4@Â{ŸLÇw|q£iw«è2ÛœN4´˜&Ð% `Ó†Œ˜ &Ä… !m B!-p´çÄóë­½¯|pË´çëÎùï3¸%5Äý¬ÕU]caÜÆ|"µ¡¨…ºà7owzܽi†|/8¶‚{ƒhõ<\²Ã ÏôObÙÍ‘6Sàã£÷–a ìã®þÖ­HDéUYß­þG‘WYþK¨Á¡ÜCÍ™ŒlìüøÇüÏß—v›Ì(N%Ñgã¶Ë^ÿ0¢±@&qØà¯‘Ô7ÏöÞ°éZ‡4 æÔ‡9¬XbܰѢXù*KÁX<¹2”+a-“àºq›C7ûiȺޙN*gW‰ÇÝvþK›aZkãå)ÜK(ÃÕpï{xØïÕl Õ«»ƒX÷îRüïð7ÝüƒmwžÃ,QM÷’ˆ Â…™*8BrÞžªkYÏêy^˜75ÔæY}͘î„CǘPQmÀlAè÷n”µ¹ü»wÏPhÏî1Ó«úO…Sô–öKöû/œG1¹|‹!–|>ºƒæàs8\ñŸ* ö™T÷¾ðr¨ ÁÄoø@m·ˆÜ%<'9ïÚ ’jP{ä–æ·å_wÔø ÇSåG”ñí—Ü”ÉÇñe 0Ò$çÂÑn!#pƒox+ýôÌÊož%~{mùÂÊÓÃGî2/‘þWöåg©ó†®Ð÷IrîŒt#˜›´‹\sanx1»©/ëáX€¿ÒO–î‰ÇtüàUrÁ¹š5YQ8~ÂJgU†÷vÌzo¼C|)ão­‘ŽXpÏqžîìg}Î'öCEY«hÛéèí§äù9ýœw{" 
ëI3bÒÖ(¯¿¿ýïwÍ{‰štŸ0Ò§9¶k·¿á=°Ýb3ùÞCR¿Æƒ‡â{=¹éöéO;xžfŒþåñÞ,Ÿ€T:Î?“`'ábçiåy=b¸þÌjáÔ™­”k;þ,¶‡ß«Þ‘’Ixå{‹×÷Ì4UÞÁeÃŽù)ЙNsMÛÉ×g~ןũ©Åœ¸s¹²Q‘S÷Ù2tí¤˜,?{•xoŽomf¢£-OåDN¾Æ‰X[ýðÜíȱ!Œç ö‹®W÷Ø¿šÓ‹‚àWó_)Ëi,vT²ýÞËBtÍp¥yÝ;CÜï«3—­î¯ &Ñ[ ;Â{)XÔ»½ ÕÏo;/¡_ù%vø¹ÿ?½¢xD‘üŒgÕ½®8üëãïH¡TÅCRÈñçK}ãÈa.Ý´ÌÜ„äù‘Õ•Î:Þ1Ó‚½ÑòeðÓOÛ:Àý÷€×ÑÙï,™®ö+{„­–·Hú´˜Ñ†ù˜ˆm0Ä™w\«GÔË¢ãõGe.@?Ÿ¶®C c;Ö¥Äë Úœ_¹,{«xsZ0c±»yå‘·Ñ«VO:K,†8d×ÉÅXe±yV™àMC¦9Rl•/ïš,¢›wm_¦Qýz×|ËÉà··j°=»665gŒG<ß{x°,¯ÝNÚˆσâ6îÏ£·qËÕ'nJ 4¼d¾,ŒÚæž*ßn¶?»¤Q®ýH xïvû]ÝÓ×ÏÆ-[Ý­P‹oµ†x!i3øSÇX½Ü€k‡¤Z½Ð'~WÊYñÔÎñ F­•¨³íª¼xä˜jXȆn+ßã£[,†'ø„vn·Ã¤ÀË­"EÈ%€ª¤é?û¹¬6ç37ŸÞñß²´Ü6‹”œ+¤¯"P„ªR!‡•l6{:~ ªËŠOÛ½ãü—¾d>䨕G'Óãox´²S’Ÿ#J6\(—Z#†kf1üÜ¥ž'þ/ý^l€¦HÇ­g¿Ä!€ôëì¬ìäK×bÿ\ããCo7?£†-î  n«Àk­ù=»Æ~;c_e»ÿÇVÒö6Wå‚ÿªÃÙ.¥ú0Õ¥° °¶lêý´^)‡ö@¥+P•{|•#Lcž˜š·*‡4ùõ™[%Å —ÏoörlAŒ›Ã»kázx~J»,Ʀ·¡mÇ 8·Íœ Ô‘r€¹3»#“ø•æÔ ø> ZcUÛ‹Ÿ/Œ4g\S~Úã½ð÷èsworJÞêüëi´˜0GAzÞ|7!Qº ’Œ#µvÁCŒÈɲӿ¯²iêÞãEËDRÉÌÚÐf{jÖ}ymĬ) uò¿åÃ^ÄÕø‡µ– ,-‹œ š2¡ÅÍ™‚uÊæjÛÆßa¯,Žç–.Çœ=ð½$îÞM¦Üݰî›"]mXö<yôÃØ x£¬Õ ŠÌ»g®±;ÇåÞ¹<ûÒLägûï~i/Ú~}oIΖý'ËÉM©#·»ûw>×¼|^' šv:SSÅÞÖy$8ƬW+Ÿ(úÄ;d;Çœµ»ÒB¯¹ø0dWµËõ±í%Fï·sõïå±XZjýÆö¿CLn+¶Öyoäô¬lÌÊýâ |È?%ñLè%)™—PÇeãÓLé.%¥â ê»r«uù»$øCSa*í;Y¶öçà CŠàˆ Â3Q2}½þëWPpÞ C²™9Îjþ€¯5h:L|9-³ú¶Ÿ¨çðg³ÓyûJœJ —ÑÔ•õöéÊ/‰øÍ§(‚í=Ì#„Ínþ2´[¨·ñø¥q´|‡ä7VNÓĤ@\ÃÍœÊ,ÈÃipN£à1¯Ñnbͺžá Ø*³ž%'[-ë–ã{lð-ã×I Ãù«­Z4Ôqæ:›r@]þµYSã£X„vøŸ¤èúé1Ÿš>8z¯]¸½¢(Cñ¤#/Õ7¸”û‚ž¥Æ¾aŸ^ÐF1¼Ç®M)Q¨2Þ.ß•ÑY±8b¿GŠ)ÞMFXÒ̰ F[mº?.¿–¦nî7Gì*ðbém~ƒuŸu°ÌyÒ/Ü …§&®‡;"!Ïú›ùÖ]{nãóÆQÞu3+|V3*bË6ëe‹—¹® «Ûß:ž|¤¿Œ‹Q¡˜ ŽŠ›Øß«‡ÆœÞWu<‰ÞΉžå»ÝeKñ¨Z|§´Mþ¶•W|ïRÜ;½–þ„-»{—¼ï׌æ®ö“W5»NpÉð:Œ'»¾¨¯üQ~^+^ì?S$ÖÆ ]õI-Aö_ɪJÛñ¸ƒéFÐYçÛe³ï ç¦a8˜ûzá¿:ÂtFÂ÷øóõUÈÀ‚²á.0ÝBi ¤KÛ¬b!Ci¸Ùç~Û‰½«(rÁ˜Y »‚û™Ï´ƒˆ[³¥ÌýçBÖX£i}ä?È€Û„kß•ò”Ëóy:r`¶k¬ãùŠC1ŽH-æ¤ÞKñÆí¡$ `'õâ-+ûèùB}qÏá$"5L¾{$!gܘCGŤ†Ð×I=jÝOåÀïs«Aþ‡ç=Î wô'Ù+m:³B–ÏNAz’à¯ú‰ƒZ5pÔ+ÀÑu¸0É<Ü!ÂÆt쾈·¦ÌÈAx¸YÁæüd&?€Ðæ'N±ÚȉhO¿xž¦‘lôMå P¦ÙbgizÛç×Ⱥ¾ÀgêÉ§Û n½ðø^7Ú¹/l!qì«ðßRýZ˜òýF5•FÆÕðà°5hQÆ;}eÂBáÑœlD™ƒced:ZŒâŒŒmK ŒaºÎ' ëî\$ÓY-Ø%@¶“ڋƘÄôïs3žßÙÁ½ì4¸Ð«ƒX»£^üøÑ¸œâï;t“•Ž‹¬Nmoùˆ<y¦X]l£ëÚNgÚÖénn'~}pÛ= ó;hé|"2\ œŸ ¾²>¥B½%é0\¨õíÝAä¸Çôær&NjSƒW²éŸ#Lí7=Y£©vÖ;ž0\èÆjæ·*èˆÐÒŒ}½|U æãÓ×_­ý;ÍÊ'¯ï>“;PYQ¡I‰ 
"Cü|dáÙ/ÎȲ §[J¾G³§š®!ßÛçp,lbPʘzNÕßÎkc›®ê玶ì¥lŸlØ:£&×€™3 fÄ.3¥ÃAXÒZˆ;\ªŒ‚%zä[„Ã3ÂI<àüÃÍàuO½“ɤ?ƒ-qhdÌÕÄñF}MDXÉη kÇd þØ%2ÄRÈà 2l›‹#®f1 ]ƒÈλADö»1 äŠ/'°+Ã[£DÍ=C¯b¸~'æÌÊìóíÐEÇ"Û0”׿€À$01´¼Ow^z,U¿kn´Ó¢1žÊXIJÓÀkX ZÀáÊ‘i³h÷™KKöÞ:–Õ…X¶¶ù ÚØ‘Áåx¢7y”Ò%‹U:BPÚw¤dI-Pä™BÍì@¡‡VGî,©50ƒè\(Ì—#Z˜¥‹„¾„ŒûiâÅ!ø1µý¾PJóúµóŸ‘oIÌ-71ÓnN×ZQ›SXé¦ tÀ}D–‹øBŒ0e„ºüEȳA%JUÙʼnh!Î-âX8ay¥ª`§‚ÂÚÍX#Aj&â›(¾¼ãx‚×4P²‰3 !Ê«¢bð;MèÉhŠ "Ñh"ƒ¡AÛI㲩íôÏŒú^Wàž‘á®(î Eð>¾cÍ?dº­  Æ(bÈ`^‘FÊå?ÉÛfÄ„lF?ò2†ë·:pZã2ÊVè ïU¥J«•fØ8˜$¿CøûzZ±•Ÿk!w¦ÙŽNÝ;¾|מš%ôm!ÌÊ‘\!Ý…œÏ…‰¢Øå÷—%í¦D$„¡ ‰¡i 5†¡¶¦EHeSP$#²-‡õˆPÏÇCԓߘ{0]7RrÛ¢ ÆSjÇýæOFH}FR’s«dÕg”â+Ì«*[mCÒùt©¾O2¨Êƒ^ACqÒä\IÝ×´\®šs¶,ÖzrM5÷d¾OwU§Ô÷Ÿl>â>š÷÷[K­NêÌLøõ2[Ó¨$úéHx ’;£êe,ÖÖ Î@)ÞøàPÆ>™ÙcÞ!Ù\”i"y3 Œ¥ µ¹šázº)(äî’¹hØGeêµI(¬©"ipÕSÀx>™ÿ<;™SP~²4KN ø8 "¡"Þà§W¿Ö3æ§¾“‘Fšz|T÷Fãrî ‚N‰Øj&Xq3ÍM:nˆ‰ÒK^Í)M/´j*Ú…e6}HD³]õÎVA\Ü^ˆuþzsÌÂ! Bz׬Zy}»ÊÒ¨„™\¹!ÇTn€Ýú™²]Á.¯(D”ÑIDˆ‘J!$’2@‹¶ûý¿NÝí=~œ€•îýXG“&Ù J¶¬µju­æ:í™ß BψðËïŠl8‡ÃT¶gšÓªf ´¹Ÿ<“Wh‡»6ù©ã‚|O›'ZpÕ½ËÜCžàøÐé¨^ß–¬ÒLB)|å·=Ö®ö`ƒl@öfEüø¿•%3xl}¯-¦½½÷“¿¾:ƒö‹wŽ;F͵—À1Oe‰VŸ/V1‚‡§ùEc§N¢PÜÌ–øºøˆ¸c–ñ!erGœ‘Ó°]ïó3Äø pÊÎ"«ªVÎÎk¸ÖT ºÖ·<(z›.)«¸>^A} øéŸw{mÜ%ÂÇdA«0˜Ûê-õ˜˜ÚŽVùgMk¤jìÓ›ýñ59gڤŷºbÒîDÙ jPEù6®ŽÇ´_¶XÍ{|®WY –ÅÙW»<Öl÷3(‚-‘Í*!IxÝ–{'øœ/f`8žeSê5ªoÄM#¸k1®cmœÇÛ*•ïµT¢“Q¬b÷#DkB·LËCÅ´äœR¸†ÆeD[hf)Š^ç7Ò›å¶i_‰(Œ|¥B¦°þ 1Êûº·Ìb÷ "çïâ–b(ç`Á¦Æ+¤+i¶VÓBTóìÊÕ”pÑU&¿q˜jRE½¥Ö¹5¶ç}¸ž6léAõŸ6Ù`§laŒ6Œ¦û·WÃÎÊ{ïŠä²Î/Þ\ð­K‘íf£;Bã­¦6}ñ5F4–;ú78Þ—ñ®á¹„0°“vŸñ'‘.zí¯µüßzXKÂé…ÏT­B`ê”Å+y\³½UìgÁ=2&ÍR««0ûž+U1rÇVâ¬6‰¼–ž—$6\Œ?(¤u[l‰væÜ ˜YÑËSŽ|ÜÛe!Ý÷eîÒJ[(OôÜ:}%ƒï)ï¸Z¸ŒMãd÷=É)Yï ’ŒkaÚµÃ=gg÷µ­™JIò¡s_çÔ>»Škzò¿¢@àQõ"öGNì¥úN”?¤“N”Ò÷˜j>ª÷`‘J=Ö¤ÞL±öúÑÐ h@ÉN¨þ@Y?™ñ1A¥‰‰¦ýÏÛ¥«šàçglád‘~¦xI {ŒÀÙ`IÐ€ë •Èiø_?.oŽ=–ǺIö( RGeL ¦ë>/rɨ£¬ÛÈs™šáíZUÆ£¶›01èC‡¬˜XúÄÖþ™ÛÚ7uR“5/D¸y8¾áа{rŸ„hŽAîð¢cMjðXœÞ ã¢1<Ôo0$qî§OŸ~=îðRÆoÚ_è×Â@¼xÓÏ]éÚàM|þ¹ÍšèèyÈñðÏRò@Ô^È+Yˆè"ä°f õ*ßÊý“ [†@ûWò’&¬½Ù1FÖ~>µB?*UKPQTREÇó]½¯Þ¾]³›á”Éüí|V·dã…ï/Ùú™ºù¸IòÕõú=y¾úˆ .<º„iêºH(ù0D_„X0{}0V?9ˆ.úilH×½ºŒ ­üeZ“s± z|{¸F÷ÜUê•~ŸŒ¾œb³«èÞÝÙyzŽI,ŸÎ»¾ŽMd¡ ®›LnYOS3¹‰z!áȾÓ?o)ãYÉ9B× ‰$ÎY 
‰…Y‹çHÈcs±ÎaâYôË@ÉÄÖ.,‡´Š5·"“|C<€4£>GÛ\Ö¼–¯°¸zHÔÉBñû%j­iÌ¢=aŸ|ÈI>_n{”{–[êg›è†Õ(†"jFFKúH:çù=DËWp,hÇGiE„¹‰°kvØ’c€‡¶‚ˆˆü‡Ÿs¶_‰×ŠÏ³×ñ3ÛãBL,üñ¬÷³W«týÕsl7™õÃ4^IÀ g*$ hj},÷_Eð;&ß1cYÄïeeu÷áÇßàÜ·ÛdbŒk¹Eι²3QÁë&¯€Ûé°þ¾2•ÑêŸu{K½ÚW»¨í}&š¨ïC$‰Gi\”õ Û§4ѧEºÉߨ –Atk=‡Hƒ`×ÁX»ôj,Ž!Xáž‘|Ž;g]ñ•{®’ìÍ3› o™+Ä1‘i;5zxA5x ˆ3{RÇ BwѦÎܧpÌÅs0=®W«(üpq1ÕÕŽA¹¦wNŠf”Þi†Ïî1(¢ˆ³„uÝ!¶'Éï#;â÷³sïêûXÉÏÌgH›‚ #Yn Ô!ÙL ™‚ D’e»U©—T¥Ò«ZV?NÇQPyQ©>EÌw[›±LBÇ£Ç3 FÅ’ÓLÇm·5zT]ÛDªåš2#y¨õ‹ÿS „®7x§8'¬Pý|>±Ïøþ“¿¯<ÿ•¹áœÂûLε'†e‚ÿèˆ;J0"Ò•TÈ w´>¶ùU1- J%ƒêÏðn?8%mïŽsn6+¹o%¼Ø½ºÓü]ZËsnØ›³&ë '9¸¢£Ô•~Êöž×Ø¥2%HÁÝÅ Òž¶®îíÄ’LÆ!…¸1•žÍrÙy!³½Iv% Õ”­èk¥uñF{ÜØ±7RÂW-ƒM«^•Qhö]¬˜¬ë]™éˆÃí½ 7C.{û7mð3öøóÍcPOL"uqú`_Ù|@Ìå¬ÝGÄá½<Úû÷ٳĂùõ°KyJs½Ù\>Zt7<6±(Ê,·p¥–“b/ßž ß»§Ž™°8˜£!nô~õçs#Ô<¬÷ ¥Ë^)~Žÿço¿k«*>S潬/€¨* ‹g™¹ðŸÆ|0¨ìMŠ÷k•ókÃ1œ6=ýî?Œ ®eAòòñ"¾¿yfïËWQ¼cm^fÊÞû-•-ø,=p[TúvÌç]Ì÷ Ýaú #XðoÖ}S3y¥Þ7¯Ç\yJófu¿^ÞµÎãø»"Îû2ø!˜ül¦£NŽ9å¬~cËõ º»bÞ¿t|µE´66‚¦ù–.¸fÏÌ>:æo. ªúÊY,Ì)œGlp³ýš—.wÏlÕRî_ˆ`¼R•„ërÌ6>wEKJl‘eæñB ðôp¶c1Ù:|¬8ÓžÂô*%Žè‚7½ˆF o㊼ÅQõuˆÒ_¿óo® *·–f)ôËSŠ-‹Þ,:  m­ÙXãdÅ5›Òz)ă ¿î E|w”Ùä˜ Áy9B¯õmë‚w*ιŸVå…žvn€ƒ"c7ùeg/|Pø`ô2Á…½w5ê.•"Ÿ-àßw‹Ì¢a³´9}‚ ™Ã ]r8êûIŽóO…j¾–3ÞÆE@ª¢³³l¥dwVBÇj{£xOL{Ø6·s^ä­Ë±ƒ¤xNÝ"õw,ŠÒ½©±ÐÜÔ¤èhæ…•Õ˜ÐËÏ£W{ÕŒ ,Þ]²ÁiãU¦¶¬dÜÖQÑÌa&õz_gîè0¼{mfåÐæí#1Úl÷Ëó»>¤]f·9ÃUµ˜º~Ñ¡¥~$g·œòçm×…xƒ~Iy¥êöQb&Kž® 9'+‹”Sþcnšw£ShûDg<,ræÒ½¿[“ÙçÁ“dË'¾o˜a]Tš!äkš±Ùvñ¬m­…+ÉbÜû+›6P#’|¤e¶? 
Úý‹ŸïŽk¥©T›nöZ­׃ò6]ÕÕ¤`C \ôó̇µ^OÐÛ3°Y¢.²³2{¶zÏ aûPÚæ]Á®z“RQ‘Œ<ÛC$s¤d¶ð3½×²ºl$*H¯(jú¡¿8{‚¶ô’ÝÎ 9ÇyÄ7‰*G\†KÚàÙ‘åÌ×.™d–ñÙ 72o½÷@m×n/ ôšorŽ"œEÀ(bµ‚¦É– ÷ô¯èÇ×&þó‡«4A„ÐÕ.-˜©ØÆ–.~ÄvƉýƈmó/ðmb¿oº7½Œ¶ÓM›{ç®=‡ÄÔÔ,-®Ñ‡×ˆøÛ‡“±¾©‰4ä1ðEüz™sq$Ô‰ú8z!õ17¢±5PìÉ¡ÌÍEF‹Hù“7Þ FIŸÊæ0ÓØšÒ @LB ÁˆE8vV'|ôºØ§´i=aâ8qà°v:Ä@õíÊž4™Ë]‹4q+13¶í»ïÂ~z¸ýÒ®‚IÝq77¶˜«t¢oxÁSðˆ-´V¹:¡úæuws12j„¾[ù÷lŒƒ„±Á¤d1m£¸c%‘ö¨ îkã·ufÙõWëBÐÑ]IŽ ïi0dQ ¤ ü)•œÆÏ_xS4-çòÜ<ÙSí0¨Á°½%)zz7¾‹Óm¬`ó )<@lŽifÍù <Ç‚ +ÍtÚŠšSé.­ë«ÔVÍL§‡#‚;f eE\¨²:c¾ÖÃ4èª3ÄmˆÛmŽp=cJWú±]ÉïŒÒâg~åV—]Ë®ib¦Þ“¬Zã¼£R`ôT4F¦˜u4Æüð¹ɺñÂ`PtÁ”Æ2ìÌc{dÅLͦ¤j?ˆðF'@âᛌu’Ó TÀ‚°xÃm†dïƒ>¥z-vBº£ðy"&D¿ô¶ëÇ›2{´¡ßo­.òŒyÄù×ÕÎTê=&ÚdF•˜¹qæq8Üúà?FFü¼F†/Îøs‡$&¦­5í’jl`‡G­³:™M´é‹.ÕÔÞ뢦­6’”ÂDë'ê8RAÀs‘cV½‰ Ät1b¡Ü %æÚÅ•S õ©2Ä$¾¼`"ùä}6è¦z¯lwi¾–¦ŽETT˜º»f­E+>lVbò”bÅXu¬-ídQÔÄC4‹(R”nVfðÃ4ÉïòòžžDÝ¡JLPáøDb †¨]}:nÿ"”×…dJ¡ í`Õ› ‹a20LT­ÊrÒpÚ´9‰ Hæcy$a ÑRc#¿ÇMqŸ ºóÔŽÙw¦³€LšG*@+—…ºµ(W…“ìŽ6±U¡»­üeRë|ìmuqb'½ó•“#R€ŽšWݧtyšI³,Ÿf;ß*1oH5=Œ{§¨±è¾à]¯ÀG°e´‚ Aé Z 4#¢®ú]@ ë¯¨ÄŽ‚ó3J¥´ÔÏnøˆ>q\3ô!!6 ÂròEŠ{ö.hCmÒœ—‹ºëÏ~=µÇMhÔÖ£bÚõè}€  éµk}™ýH¾( ‚÷V -77•êèÇ«¥¤(äò«ÜÖ¶±s*km.-±)&€åÕŽã._žè,MQ°"NÚä~T8X>P3/ŸÏr·Jä ÕÓxN³¡9„}+4Źÿ·Kço¸l7½–Mý&Ž6Ã5ÉœÉy«=™>ýÿÚÕ{ñÔOJ°üرTS\KîI­ßã.V´ce-d`¶usßJßž˜á€´–¼3Uü^½N㡽Ÿß5B Í{=µ öQðžnwôÙ<]εç˜^¬›¥iØ•¯TÔÅfqŽÑ¥`*dª¥õ€Î„dÕ·~­3{B/yªƒÂhdî¢r~Qꢔ޶àôx¦SòÚ tî[Sf2øCá¹h¢$éÄ.ßѳ†°õ{¨n7ùÆø8µR§Vö2è^k’s ±#eÓva>²wåüøµMMÁ ¿#¿•ÆÀy7ÍÎã7‘q¯Øç¦á¡ÐT2qÉs‹#«ñAÂìxiWz80î>eMÉú–NÞÓÙ…Ÿ¨)N™ê¬£ý¢*­YÂGIý‹CŒ2oƒ„Ê­@’Df¼qjϦ2ˆÏtpM;…tÂØÚ°ÆlO1Ãð}•ÿåÕ*PùF-lZU‡×u S(xµëŽíAZZ/dΉkÁz‚© /4-Aô£ÄdÆJLeEw›¨†SÏžë9@0”¶ÒòDëÝònT8^ú„(ºÏz?ݶsö®O¶êÔºBƒǽ; 1Úo)ÐÒíawI æ ÆìyTôÈ.Ø B|Ëj@-ÁÍ/?”´ŸÓ#žñ®Å¤ea‚:rŸì¸s7¡ŽÝûꊲ™®ý×´Æc»chWìùò`ÇV[À‹ë«vÅ}>ïë®To²™pL÷ÏZFZç©õe±P+^PáÇW€y`fßlƒì‚â?®èiÛ_/,u­âÜâÈ×¶-'‡Ï~V+ãÂÁî‹ï*Mû9/LcV;ưȺ^%tƳ4^üàñ]ò‹ŸILÕ(˜¬Š÷ÆwÖˆ¹³Û<>-žœÞñ®+8·v¤/Üça¯¾6k·äšCÖ£wîSã– âþìC­Û. 
Øþl‡ÌLám16AÇ'g; Ý>ØÐt ŸÊÇÇs _ŠUt¨®W¦þ>\ëFŃ¥Ëf-Go\ß:©¼ÖÿžÏcâFöå¡>s2Ј‡ Íz;ºV“éáÀ‘N1[FÞ4|Õƒ€I†瑟c ôÛ7FKíˆHá¶0yÞî®O3,°Àw[6XÆ:'u¸€‰s1" K–aÁ…6 Bj´dH(ƒI áZ£]à©Ù´{zêçÇ샖°¡-ÔoÈ`ò¼‘ˆ4ë·xç¾EŠÒš1D6 ÁàâÏNç.I¥œSR/.Û‘˜ŒáâléVR1¿#ßX.n8ë–`zp3¤#÷Çc<Ž©¶UªQmå]0N?\5è½Q§S´*¸©•ݧJu̱^ø™T‘²u±À‘\©A$e[BÆg82 Ü»ÑÏVØdÍÂÞÜ Â¤éSÇ!Öö:«zV1½Å«àäœ'Ì´$˜rýšC'hgàÿ‹è¹g8Ó$å8)Ä xˆºŠ*Áfõ ãƒÓ8Y€×—â¢å0w7 _Œ8 { ôÑæÖuWS–ÈÔ"Qº¨ ,Lu˜‰2 [ÎÇþ˜ç!‰Ï˲pÕ£qölº:ÊÄÌåƒ0qƒ©${È'Ž{pGƒ„ÑRBSºkUŒå諜Uû9ºuÄoûT/ ÅÂðÛ:& Ý?©Þ µˆæñ9T‘ZžÄÛ 1³›”Ê_H¼V‡~:¬V¦©ã¹ƒ¸X@]&k(A¢p²°‚ŠÄÌÙ.-¤š˜3°wfðXÔBJÏoÖ,I“Á±“d¤’J~ó^¹?ZPëϨÞgÛƒMŽÌ´°½4nµ6†Síë¶Ç ¬†ŒIÐo„¸ Ü4„pðk¡«mé½)VmKà‘¶“M%%*•¶}D¬{c0nˆ8&$ã†b JÞyM…†ikQ™¿ïZÀJÆöN Ó8†: ŠÌIþ¯ã­ÖGWUû±Tÿ€H¦ÂÆ+=/{ÁÄ YÿQP¬Œ2S7 ­Æ³±lkÇ1²#èÇts£mŸ²«½À±9°ÔÇœ²œ¹Pé–BUêðÔ‰BbŒhœ`œd!Žý‡­µë¼ö}ÕùmǕ̕&²ƒ•e©ÙX‡@0[¾HŸø-d¦¯nÁi(l¤Ä2¼Ótržz¡sU.u“N‡œT BA$!R½Ì;eÉK~–$W'Þ"éˆÄŒNryªb|~o`À4?µÙ##!û8sOŽø_K1!DÌ8Ömüwõß{aïn¸ïRM;§¹Ò¯z€u3gRZN‡]ïãCCz´–øp:ƒ•­Õä² /³‘Ì~¤É-ø”ÿ g¾ lY:M­ÛýXçç^Ìo§<&ÏMØÈrËÎŽÝš.´ü²/0¶:ÆLÊØÌoò@k%ìcË•èÆOÄî>ã|5Bü”´YäÚ¾È.1­é (z÷ù9žë«ß–X_Æ]«69#:©£Ò=Q}c}©a|é‹K.ÕÔŒ ¸æÖ=ªò ‹gÕCîÛѢѰ>™8Ts*kò»a=<2W†æ×ÓÕpD7‹$²XÍ%Þ赦¸'azp-8´ªË•ã;Þá¬/¡£ ~ßÚ¥J71«ðo 5ï70϶T{ˆ†ð‚å‡Nî´û^⥎†ö ‘ÒN9–Óž®ß^j»ë|3X­²#¸ÙÄ¢g´1â °‹}qŒnT ™c¡Y U˜éf„¼¯ÚYWf±¤Ú…³âmü™b›'ÎÅØkß<Ö{÷NÂÝ@õ æRP­ÆÇ¨CǪT' `Â8ù£Z*;SíÚK‡ƒ°¾$+¯8`tÐØš’ôfROwÎ ùIJ½ÈÇ»ûÌɆG¯—ùÚTceQˆºÎ»n½Íd<9ã"áâÕRÛØK¬æñm—#·å®XdõxHtâî— TÖI±¢-Ô`kf@j4v K¨’§÷Ù‰Â~è¨bÓȹ)znQµNfýîØ åkü{æOš~±ìÇF9q‚ªÑ°y?><À‚‹Bà×&E°ÃO ÁÛܱ“)jœ$V¹ Û)ðö Z¢© ô*ŸM¡p‰*÷Q5±IÂì áÄŸj¯®‘j)Ë©O–šé±qKZÑ^2‘æ÷| m~íÙO³g4”¥qëÖÝ:‚¦Y&–øÙPŒ¥þ¯iÒôÚü¤»²=Ó‚¥y qBfª*G¥ r+»‰lìUL &öópñ6Í8í;âLõ<)yQ †LîAî—æ0¶^º”+:ŸBpæ† §Í;¬X®õNwËÓ¢6´gJÚàdçµ¶÷ÉnRã,°6RùKmŠ&Eû+¯Ÿ.½}<]4¯Nž(“ÑöùÞq¬âGÎ}ÅòÜ>^ùœFÇ$DC0 »—Ô¦­Oœ?ŽÂÆ6ä`›BRºµ°»:™Þ°Ô0&„1Á“L(Ÿ^xµŸÐû;#Bla±ÛvLŸ»,[,ç]«x“B:Ú(š%hÌŒ8dH…sv!Ä\á'Œ]ÁÎlŽ/ƒ`²þp\ë4t.r›ÉÁHl÷,È,nn;dЋ(K…K=ãJ¡Té&ë6Ó]„àgdiæ.n–µAC2âݞǣµwQÖ¹51ôã3»ñ·l“ðN‰„BCs°LÁÉÔÀð|hòuäö ¡j…Ÿ/êaÀòc@R7‡‡1ç…#¾Ç–Áô„ ‹0Õ‚Ôº”:”I)¨Õck@f£õ;X!HÌdR‚ÑÁ“…„­n±[%:Ú³‚ʺ¸œ¶:S JßJ¯-ûÐç·”±`ÉàŒ+ æ`… XDCiG- ,­Òô?Ê÷Šâ˜:´¡Y6ìrèu‡‘„yÆÅøæÜÌVkø¡péZk 4J°qJeúÒoÒý\éà,Ù¿I¶Ë>ÃH :mA7Ó‡ÅĬÀÝ Á´4yÆžïI1 
Ô[ÒtõN=¦ïÅåÑÇCéöÞ%yVâQWö€ÖÒ329#ß`Ó3Eö½FhxÃäÆä ϳô¦˜+ë´Ù¼–‚Ó&t,ŸUë]_µV±êzüï"L¢Ìö=ûzÓeѽa^»K7#KǤ¡qŸ—&¶ò\»Þ#§=Ð÷T_Eê3êžÏ÷ë7´“Zj ëPçž»iô‰ЙÊnfŸ«è샷¾šÞ*iY0ÿž{ŒqEkY\v¨gò &©Ñ;1 <›è1)ŒùhfÆÄ_kÙÑšBA%I24™5§ªÝŽÓdoSuÓ° !~ÌÖÂ:©÷d”…b"IóÚÀ—Jˆ¯èµOñ{á lNvHlsÊíCÉ3ö<§%¡è±¡äý^½‡¬Ïñ½°ÿ IoV÷©ƒÛê¯Rz~ ÷_ƒaìOx^޶ˇrg{©(’×îŸÁ¸÷2ü}õŸ®‡­±ÄY5:ÈŒó*¥ÐÛ­“<éÅõÂÇe’ï–ŸAS—Ò±‰UŒ*äAY¸Ö0Ø|% E0‚CI¸<ü%Û"–(<7ºÕ„{4hL¸MC…Ó¿µ•‰šºÓ󾮮_”—™ľH]Ij:Èm 0ȰV“(5išzM\Mˆþ˜ó­,ö˜ÓíÑ¢ïz]d1r £T2¼}Nm¶ð+ ÓwÍõo4ñ/Ë×C+y¾¦Â¿ò›x^xäÙ öNJ“m0þ©ÎéÏ!¾¯ÃS²KšäÃ4eýó–OÅ¢FUóâ¤|.ûÕ±ôŸÃÂÜ·FMÜg+Ž˜QœeÚ­%pf'9ù_i­àî÷<Ÿõ¬àC=aIòˆg©ÀDs%ÊÑ5b¬>¶hœF‰UH°~ƒû¸~dR3‹Þ˜çñUeíG£ç¥|]d¿?H«©! Ÿr¤1iïtc÷”Þ”ybƒQ‘/DgtôÉøœÝ÷»£KÌne—iY;ùUyCŽ[õ$ï›-K!¨CÝèi.£Ü©&Ñ¢Ö2¦\³uÊ<çºZlŠa®f]¿—ÐJÃá±¹Ö[™o·y&Ð’3AI)ÑС鿦Qôýí3ýHûùaeÕÛîp¯ícß-²çìhd\÷$.w(Üd—(’æG‰8ö­E 2$`dYœ  Ẋnü²ã¢V­*òŒ~óˆÏ?QPºQ%Ö§6”šJìµ¢‡ÃÎŒâ2uç'tTæA9vL{bÇRÓ°›Tñ0È[é;¤c9Ÿ3'T••’BP¯†6è<ÙKï/(ªÐÈt~aÎÄ MB©dâS¯äRjtjhyÊ›(MS…iì²UÓÒ–±¿Øa¶kºáÖ–‰²L=ºIÅ¡¦8âGÉFÎëÓ×{\lé7žÌ~êdõõño(íwŠ__Õ­M†Åx¹kY?-ߢjWI$BRý¦»›ø•mß*óôÓ‡ñ¿]ßc2pQ&¥{eNÜÞôõ3Ùܾ8Ê~˜$iXÃ#:uP<vˆaîLœI(Ô®›:3.´Ì†Z‘àöúÍ^$qX˜±* Ã6…êÐÃåB%glÌŽl¦}«óknV ôêE¨Y~¶L†ì†\´Ù’RÇI¹^$À‰HJñõ4%»xWD«Ìâ˜$Œ›±ÂwjYhÆö¥Ù8BêôïoÃQxá…Ú3{IÜÖ]ÌË/)Õ&)LÑUhã]Ѱ­;ݱÌN‹ Æç‹ƒ#â½ÌúÕu°¢7žª*ÝPÔ‡Öq‰µnˆR—c\WÖ¥–ÀÒ_9qb´W: E~ù²óÆ«ñ¬äÆ+5/ê|¡ßfúMLäóO“·”z‚Y“m(&-d¢$‰ÖÎsR¥n&´sgZ¢ú;œ»Ä¨nÍG¸¡/» ;Måô}ú=1C?7L;‡ò Ñ®»ƒÌÜüÒôT­˜ëiß+ätðÀàw´(Ü‚ë×O5Žv"RîSvÉêò˪X#g`²%k—RbHÇòRâö‚EB!÷˜ˆtÂSÞÆÛ\bÇæ!¾"LöU¦ÈÂWÞ™`3ë2Uù¶£'V~'Z‰üd]¡±[Þ=t¢Ž!ïvn"qcCVÿåuÂWýÓÜ%ÎëJ0ßðÀJ)õïY Ê&~幦ãçÂ.ʰF }å÷:­œ™µ´wÓB7;é„&}câÔo²\ÿu$Mu®]Q2fÙç¶ OærIFqC¿ä<&[Ub팿¥½ÅT?“83D æwm·õ7BS‰Ÿ+Š÷U}o@™ - §bz[åŠrÕ¦ƒT üÕ‘HëÖÖ7ySK+O¤w©¹sßô/’Ó1G\¾·µÑÂ1­¿£„•„ÝîѦ--q_m MˆFƹ-Z&¿'ä½í§Š­õôÒf ¢² D_/œ;þÌVÏ鶸“oЀÆtxó+@:Âéaá÷O¥„?J€ú'qÆ+ÓÎÕòºGEø6—ŽçäTu\ç^”l¼+ÓMÞìžTR‚”j£Eþ™¬èoóÊ@²`ƒ%a—\Ø=o‹z#=/0Æ} ]:8Oˆaô½™>]B™Ü–³™“£‘ëÓ¿®,§Ôï®ÛdH¢µ™xW’¥2?¾gz¡‘÷Ö¾¨¶údÈDŽª~ܵŒØ}ILêgí› m¤ÅÇŒu xË1¥Õ¢ù­[õ |Si^Të2Z=ƒø¶ýX@› ¦…ŒïÌZÜ;ëü͇çÒà–ˆ™DùÝ•¼FÑç´DË#4l}Å'2Œ•š—gÖ¢M µ©‰ &YŒÀî¸#ÈÅ Ñ ¥À|\ÚNÙeR*IÈ„DÄ äǪÍ‘™u_»Yâ/<JãÚ/£ššÉ<ž2ìN²Gð ›…E3ß4i®¨ÝµŸ§®‹+7”úõ‹¬å5a~5ÖöFÓ™²h,%f{Ì’Š—[?W¶ò÷I¤whÉUyC¤T\œœNiÍ1I|/ ›“œ=!Ì–‘Yºí3{½uÐ.X´ÕÛlÜ][2uŠ”“ÕFÆug%)Yš7Nø-iÁM6±Br 
ÚÛRØ¢Tç)w‘½Kóa½ÁøÚ´wS#ľ“<~ ®ø¹yûoS蔯fc†…oW½‡¼YY„D~F?nÒ¹KnãÜPs²J£iz‹ÈI žæHèCùZ6FAäJR+–óV2tøá—všm3$D„%>iÜ gMŸ2{õêæXû„kÜ®g“¹C#ô7_©r×0¦`Áˆî›ˆvúWŒþ|›é= äYâÇ´Ž:Mmšù°â~‚‰¦Lã¨ÒŸME³àààÌFÑiýÁ°ðü³÷%NK•I-spµ£Sžp†‘ß7!Ìë¤Ô–ä“’ç VKøÏ­˜8Çi¶FM´Ÿ£•9µŽ“w|;tU ßh+¤‚‚™¨·µ„L-YÆiAµr?©Zs:AtŠ)¾–ÊåGëáV’ùü¿£WqjÞ|"½jÆ8‚™ùáqãÇâ4yc$@ÈÚ2`Ú¼¬"™½îÉåÎ×0„f®VâvÌ’r¼"6?Gnœ*Èg0µE`•ñ²ªQc„Ì“) ±@Ú†ælbŠ;˜fœÜh‰œNù×ñ¥ (ô_²;ùí|]ÍWk 6óêL|Eñ8›˜/ò.”R=w[â–îb¦õ…c0ñáF)" ¤kÔG¡¥] Õpl:öåtóMÅB©÷Uê|×m„c²LÈ…8 ã 8TT³âŸ†…纟|ZØ™ætÃѨ(*Õ"„©™I%I–¹ÏSÚÝøc$›çU¯zµ„Òým¾§ªßl h$hå ýp >\ºšzæƒG>º5Ÿ¸ü-ä·G(A;ç?ã¶þ7õÆ“,Ó…_d©þÔ¹‰E˜Ø„"â!â!*DHÌ¥^ô©VJ¬ÐZ}î±mO|¯ÔÀí½I f’…¬øC_…úç/ã7•"2yÎøäª¦*ÿ•:ß!ü#%5ú¨óX>…oV·ìüPþõå”4/.´µ§5œ#Õ=F“W¨ƒ*FöÛ÷@oh½¶þ~l­Â Ÿ®˜ë‰¨SpcFÎjR*PvzQ"‚51Žs¿9Zþ’õ£á=£]zù¥j똡Ðï1dø.uˆg9~ÂCnüØïyìav)Hô’I¢¥„$Â^$øì±ýšÂöYéÔÁGœ¸‰?ÝÆ1! èf,Aá“2x­#T¡Z:V×QŒ6ž1TrÉ©Pþ9‰t«cLUCž€‡véûr‰HïŒ|U›}¯7R¨ž\¹ìÿ‹k„õÊ4Z²J¤Ök¦ØÍÅ;°BžßNÔ+íq%< Èz öÚ‘¨0¢ÿwƾÄAÈœ¡™¸JT7|5ʾ _ëÊnu×tL¶‚ Ã"ÎgLQð7\©„WJ^ý+DV„ ¾!q/—5a‰ê'ds?Έ¨[0Àd}AÎÒžwÃŒýIÞI¹Œ©`L¹<÷E|k üݸtcþ~&«5.‹™iuf .ųCƒj!‰ÐuÍÿ‰)(ú© |ZyZɤðçÖmž>œ®V2 )µØ…$“ Ækj“,Ù ßÂ6Dt›µ¹[4æLuBta*œ\;m"ÐT’lo…5mÕÄÜfÊ€¥”e@“JȰ94hdÔºTUÝd\ÂTK-,ºÃqnwb‘qNÃ|Q·Ê¹±œÔé°E ”†eKÈ¢.2?H>s4ÑÁ¿›Z•™½éb%•ÃÞùªò̵ ¥Ù—ï¹7.uÓKC‹d>º;aoI>Ù;;õÖ{]£7Æ®4]µ*s^—­3böã\xÁˆ˜1­Q[m$±1/é‰âcè.çê¸aª$ë„©ëoµU‚a”¤‚¹Êê@˜.‘«ÃµäôNꕱ\PázÏM.~ä2ÛSdž]œt¼ #éH#Lj6u3Ï|*`\±.hppr"÷è~—Åíš*ø¢¦ÝýU’E¸f¶v´7´É+Võ¶÷%R,óR¢þ\¡CZ".¡)™» H›H¬#­\î’ Žãšé=+özâx4ç{¬žr!`áBaQy1Ek¸~¨t´¯íÈxø(Z+¦”í<ƒaò°v”¯#LõìÉ®‰j&0—@fî‚ÔÉ µ"Võ–ö2¥6–JDÜ€Ê"st[o{^¶©Ö)o}7ÈÛ8Ã…w¤wE†f lü"âCÐ<¨ujö• @iÕF]ì$fŽÁ…<¿Ö\wý_v8zÑïa]¦*vÒ‰©©´õ§È v¹,|m;‘˜©Ù„¶òãpqc oT{s”Aâ× ÄøA+0ê%²ƒ´îòuÏò"S5LÁ Áåö_f÷úI!Áe žÀ÷\åE‡Ä ¼W8ƆFhèg9ÚMÇãqÉÉëOŸ(é»±ß˵µ0üØ;¹ÂŸU+ªP͈_½V:÷ã’84(M»Œ 0¢]ÅHsº¸YIôÔwü>ëç;í­Ö7=ÖrnÚ[ÝÞ߇GGUœ«dåWv5¯ªz¯2ët¢úcÍËZ>²ÄŽP5|¤ HI@‘”!¨ñ§Êʘϩ.Hø‘')qó5w–zXr9hm¬˜ÖNãÔ³o=mlMï§>1ë,“j¤ÞW&´æTp ü'~è+¼["1þšþHA­ï6eœÙV¡þï'E„T?:s´ÝŠä‚&ì9ᱨƒöz±É¦ëjc ¹A£Öh=í±“œ½þ·ÒÊÓù4SS2Ôt¡ŠÌï.ÒöVEÒ»‰4¡ˆHJp…»kF>ºuTþÝögº_=´ñašq;*èØŒ`d”ßtÅ£ÏÝÖ¼[çÞ­®5†)J¿‹2Æ-19Ê|ßz=ÿrØy¯ÕªÆg;_ÙÛ‰™ólуœÚ }ð"®®°øT6ƧªÀ†Å¤­;zL³Òæ+ fø‘ äé"£GN1Œ3ƒñt“”Dœ† "sPMÂe5;ù]cYSl.˜bSd|£xpÉ4—³áôzÌé 
Ã"®v•-Ž`Z%1íª3Ì#dÔ!ÀD·yÁ¡™Ý\iÞÁ„|2‰9C†Þõ5 dÆ\ãפsGcÛ ¬q!V²¾êL"«êëd…’ŠB1#Ì$4WæÁ„©Û]’{eàîF´?«Š„O™é+,QÞ{ªæ‹»²s.¿=óSRtxÖ²Ó´F˜àTã{¯ ÁDü7l¤m=Oí²[×àù\¾3îܽäT4F ý9!Aé#kή`èTÖ?5|?ßDeBûsBÉçîuýPÌ™Î]o2߆§ò̤7DŠ:±¤d—‰ÍP²üNŠ)-]®¡æe:ýH¶43>Qǯªµ]ãÌ-Ñí FfgÓ§DgkéÕdôÎðÇ‘Ê}ÓwIÏ„×ÕÉîŽèmôbŒ´˜X˜¦¿sT£è&+²/íSz²_Á§ÅüVNÓ.PoÉïiŽ-å„CÝ|ÖÝ5Ê,¾™¢œgÖ›íviiAæPÏs/lÐÔ13B蘠YÖ’Q(kó¯¾üï–2Í…m¯›×¸ËË,M¯žë§oÇIcœÔPÊæ|¬Er6šç¸èñ¥žª£4¿áÚþ`ûtu}¸?—kåCᑜ§ÄÞ|#”ì­¤œþf„GØÚüx²I êí¨º9ÒC÷IØ”œ11“ìý¬w^è…ãÙÚ¦VÜxŽõÐÝ·—Wp–Èö Ï?iœH†Ý·,[?G*òlz˜;!LC_ [AÖ@†qK´Qtd?ô= ±=IZWæh"Ò»-ªÁÖ®0G½8õj¯*¶aèŠz¬ÏxÅ3‹ŸC‘ŒêˆwšÝ¤¬/2†÷\(kKTOë.¼^pËò;©Ïãy-a“o»™†šçi—ÖôUSW|Ëý]è×WªìªÐÒÙE…Á™ÌŸdÚ¶¸§õ¼%•Z;½Cš 7ÂÌ$j,FëƒÅàU¤Îá¬Û{åÔƒà•LÐVÌ(þU Xn£©þßt^‡°&~y¥Zõ¿«¾·1Mxc%Ú{^d”‘?>U!©œ:ÉxŽÜüÛH³X ƒFü0Uå ï2lñÛ0qIvÿ‰Á€çA×!m`"D*þG¯^:“»œ{Guvk´¹ªUDýZ…;¤X+Ãù®ÆŒÚ(øµ¶ä2Mͽµò•íº±†w¦Î&8R,,~”“ÚRY“bR§âÓn~iò¼R7®­àîy šnÑœƒ£y‡àæfkÛ½;a.R<µÞÖ±tÛ:ŽF×DÆØ¸EÚÓG6ïÛ[Ô¶!Ö¦Ú¨ó»ªaš‡ÁµlåaEQj„’^L)S›à‚”`‘Ž1rަH¹«b™%Tí5Qj7K4•MæHr‚›$;E‹Å9t+NÉÕ5“eÌU!:[Þ™ÆD1c(È‘3:NgÓv¤5v9Œ#B’ À³õ8PaŠv£SÀHCÍvg:JÖd© É Ê–EÉ=­0*¬b$é2AÑ“"…”»ß\`°]ø’uÊÁ{L‚öyŠñ‰4k1%Bª â,fÍÅ›DL¸¹DU`] Õ©bÓ)Ù@x@µ‹þ3b’hÀT‡(‰å¡) #>1ÜÍ~-Ž/òÏ£ô:´‚èø#YÔí–Ìbç†Ü}+¤¥N–P#ëRM× »Üò1}&r'°×Dª&yý1éÍ"%vKm÷<ñŒA„?“nxHx»|…„ ”ŽÍ-:eûAf2eÄáttdƒÖÎÛàß#gtAÆjeµjrÕ¥@g.ïY ‚ g²eõ2#+µeÿ1¶M±¹‹O‚×\ë˜|.cDÓ°6j0˶+Ò_‘Ö¼ia9Ê 5MYhñˆÂÞÌXß'F~Š×ß$«„zÁ4Tè™ßëLÖYuÛ õÃáMØRw$;luìèo/»æõ aèWÑÒò*×Wͣȩ¿Ì²Y™IÍ^.›=]W0b©öÎpË!‘è%Ù»_K@½¡•b¸·ƒr5V †$®SdrÖ+’xšmLåd¡ÔÅ%;>iÍ‘Ötéq#¬ðr$ûf<zkÑ¿lõQj+}± xD€Í“ŠÈŒž]íñèç‚e?(ƒ åxY2ÁºÀŽ:&ŽwAt¹£wÓÒDŸ«Ñ™†ÂE íÂcåT ç&ÕÈæÎ2¥kJ”.ÿ¼ ¦™´ ’r_>(°`ÚOOæ˜vÅ'Žzû‘Ù«™0jžæð¶Ž“în‹—í°N†ÆGÉÖß;í.qÆ’>£žB4;ç‡wiu€Òu“wøéruóÎß7¼0Ù­s?’ÇS’ù]ø|›K¸£ß oÄ!…‡¼ÂèťÕ+#A±Ó†X˜hVÞ³%½œó·y°üTµÒ“\¹›Okq­¾ù¦`r’M^ReGƒ678úôurƒaµ ø|œ±ïÄ=Üz4úCTêøë/¹öŒê· «ÞØ÷ j9ønóÌ Ã¥étÊh§¯/Ö“=Ì7¾ 6Wº&H¬:–ôoS8Ïñ}=XhcžR2x¾´è,ƒ{“¢B³»Vd‰£VåTB˜´#ir¬É ÀWvÄÒlÎNHC’¨iz̺E‚›¦pF„+z½$\¨Ê¤fÏpªyŸáÝ^T|JŸ¼ÿɦ9¤˜Q¢˜Ú;(~³Áþ¼AFG†0É 2u ð#7õHŽpŸF(oJ°„|ëÊêÃŒW§ïˆ}8ÛKÙXºŠ³×êxž—šÆ“ñ¢<%6Ƥb6ûÎnSÖÚSªï-›Éí¼ºNCôç-¥uç44êtÝó©Ù_ ]¢ïLLC_§ìŠary&+ô¤±ºç' Z†X6—jð=£hñ6SN°o®<[®u|`ŽfMàøšÕÎ8ð¯gnjô« ð¢>¦šëŠœÌ=)´šŒCÊ8`¦àæ+s_SMe*¬ âÍfÞ°é&"u’§Ò¦™–í¶\|+Ë 
¤†¨’”pƨé+urÈt¸¬óŠÌƒ:Λ8Ó1龨«Â`(€á@@x*Ö‰;yg5vxJeGÿxv¸¸Íé"Ãæ +H„O—9=Ö'Êr}äŒËè÷a×)Ÿ£ Âúht#ú„þ¯²FÁð,¯ž× û^¤‘gþß GàÅÉ0t3Ã|þÖùÕ¯‹]’ÊßIËŸV#­˜~¾ÿN>»õÎŽÉg³3QᇸÝÛYNz«÷WF)åý˜ÌÇe“yYRÚÁÎ2öæ¾èÿˆ[tºëªS6sÊï&ê#ç…ö cïZTaï)z{]šµÃ£Ù‚Ý…Ÿ†dÒ\ØŒØ6/f³j®Ñ  (mvXñˆå7hK®p’—ŒÀ–ºÌ¶Q¬™,w•ŽÑ^mÇk^C5lêRÉAtri~uÅÖ ¬?ƒçšdL!!=Îg2Û$Ó蘫‹_™ºÂ~”ðÔfsRv ýmZ²ùŒör§ öÈyßû¼‘ÁÀC>tÝÈäGÿ5s ÷þx“ߺ -M˜Ü¬ø§ŒùAïuÍòóUƒ§þ%ÓÅÚc’_r×õ)ªEÃ-Õ-Rú×^˜+ñÇ[þ^ªŽ§Ÿµ|¿ä[¬+ŸýT~ß%§<õÇ’Æ ˜û-Ë​߅(¡Î¾ùˆ‚ðãÁyóT*—¯Ì ¨"'úEŠŠ¨¼}¡R!¦ Xˆ ÿ¤dPTPÝS@/$›ò=³U‚~‹A?Mß÷‘º?Ù¿Õyä䃑iû5ðšÁ!‰ç@)érþj¿ºé]C¢cðoÙ.x qЍýÓòô,$FŸÌüÁ¬j„z÷h—[wqrUgsàÐu¡<›öD|ÆÁùf[CC¤5Œ^µY´™vä¡ä.¨Ì•+(cÇÀTëo·ðÐ @"9w£ü}h}ÑV£éàè)ÿÂ*Šþ~/…°‚·à‚îçåÏÿ$Aú@ÿÉž@¡û»ü=½Ÿ®âß s5&Ÿ³/?ÌŸ¢(Y#YÇLŽÄÚ ´¤ìd üèºý‹EdݯÍúËØ2*—Ü*ï#^y›.oí¡¢éj÷tÖŃŒpÖ94Ûóú•êªk'y|‘lŽ#NQ/ ÷)Ï)N£ºþÏ_ó¿à”à22G|¡x²!Dtúë‰õýžö(S4ŸÛûhZê×DW‘8Ã"ØÑ3+Mt$„÷_[_L¬~qTOÅTMÚ~'?æ‚|~Ÿæ€§Dç8ÅÝØ¨BÓ¥\¹Cˆ¢Ob¨‹Y#ˆšd"ˆŽ*d€(^«®™%£ˆ}4ãdÄÒ![0éC4¿ØÅkS `]Y³b뺈‹‚YˆÈÙ$„.Ùs»Ah¹ÛR÷\ˆZf3 3Úv¹©¦lÊ „ P%H3µH‰@Ö$d¨‚ÊÔ2Q²©V‰–‡d“‘J#A%¨(°†‘LC—0¡$Ýå—°’ií! $a”¬((ÀOD;†&&$A¨‚-]¢í¤›EL$I)ä 4ÍÞŠÞ9ÓxZõPuŠ_)£,HbôBÂ`YHS$­¹iç  nDŒVŒ”íAIÐE6¡Œ¦„Z12 «@È=Ú‰ ˜& o0KÉ¥NQƒŠ˜¡"äYH2•Ì‹Ù+Ìbö6t¦R0еM±,'7SDÌCpÐìLÁT±¸ˆ¹EˆÂ˜îNÙ™ºUÖ¡Šm®Ñ’. ‹„G*lãA³‰ÑÁª˜.:Ô–U¬U”Y‘¦ˆ “Šeé8xÿ^íƒM‚ÁL(¢Tå„„H†H ¨©†HÂd1D¡LNªJM M!%3 bÌ,Jd[Q©#,”UVý±kÁ C5 nLȘÁ¤“lLÐà‡•Õ»N*‹6©b,$•(¢ a„I"PÒ Á1HiDˆbP…J‘ýr!ˆiŒb†&†Æe‘  •e%Ãb‹¦M)µ$NàÈzÌU›(ÒZ‰ˆPÔ 4Óp²e•#FI°Ø¨–’\8‚F5hÖ†Œ˜I Ù[}êuÀô˜$"ú7TL’•j%»¨ícK)?ßË ¥ð.–¹­Í,';×4#,`ÈÂJC EÞÖUp¼+"€^²€£U• x¶sáýþYŽ_ùR®/ù»\±G3Èã~¯WÇpââbMáÐÏü:2Å9± c QÈ2JC,†-„˜!iŒñkFæfXJ‰‘Þlçw P˜! ¶!Up˜±{¶²¶,à»KA‰­š„´5mûFü½&ž#¯Å)Qœö}v!£wF¶¤%SÌGjm¨òd4ØY#Gaâ êK ºŒ‘L]fõ4‹Y@Ý x½‘^²JV|²¬”c‚<àŠ—0! BCRE)Ö„• ˆHàW]´EÖ~˜aÝÜ!ãúíÙØg² "Wt%JÝå ?™–7.$‹ Ë‚È T¤D¼ÚƒÚ‰…Pè‡1dx‘’˜ZTÚÐæ ¡ãg³‹¡¦ö…6ú´AA(b–›»8ÍoÃËLHŒ‡Ó|X±¦Lô 7—-€l–ã,Zä\ÚæÖDÕ$RlïÆÝ·vr摼?”W3"bùæƒc\#˜>D&ë¤2¥¬9¢ãRªW£ài[NÒ¥D^šrá„ò9Š!yá”P53}œBåÓ ^KM )¶U„ÌUTBàl}üàåíÛ½¼zŽ3qÉ9‚GjŒnzžˆ¿ï2á¢ÌQ ¼! 
qG'•ý¢L ,´×Òc$Ú†p(1"²ª{­{«ÅèpØdFl³n¤ý{øLôþ=í迵ºº›’Q>ßéñôΧ3@Áó¨è)ln'nφ{÷^¡ƒSPõ”"œhH1ai1Ö…Qè‘ ˆf£šíœ¸>:M…Ï}Áœ:§>’´IEÍ*ë­Ï¾ãÙ ú0>/†sìœ#l‘‰ëäƒH'#£Ý­»ÎÏ¡£ÃY±ÄÐPHF1ƒà¸iPIA,íÎâv÷ÏÍ7~¨ùªÐjê«ÿ€"éÏO>×÷TûñîX$_ÔÀy…„W¡–]`˜ö’0dò±‡%U­#vª€`¡¢„)º¡™(D=0zÓŽ{¿Ž¡ÏtðåóHoÓ¯c¿´H'‘¸ô^ÍT³Û×ÄÞ奅à½ñ™s< ÕT!8ùKŠ¡Ó˜S‰¡TM"‚/!QT’èrHæ`p&'ƒD´Ô–ÎBèÄX–‚ê¢sc´e’-‘qÞñ‹O!µ€ Ã5vEØ<ˆÖ£J( < šÕîÄ&ô˜qšgu«{±èNF“¨rroF› ] ªr ‰0F’HÈ1a2a!„ HHÂI$I# BI 1‚I"I!!ÜÿÓÇ·¯ocÛ3­»÷tûË?´×ÓÞ“+þ8é>ìß乺›W6E—êà›×é7¹%S„qzJaó×+g}m8JÈøCÄBqO~6›Syç ¨~âš¾‡–y+z~ÏÍcPÄîTÐT·çphg–>Þ é UÒìIM7=¿Oâ##|¨ð¶¬R XñzAÉÓ.›‘¯g CeÔ½ÁÞFf‡u+ùB±Ö…y¡];À:gãÖC¾ Ø`þOÎ|úswL¹Uú?ÚíãƒÖàíNy%WÒÝz°Í¹¤ŒÂÀÄ-c€ ýì"è|"Œp†XQ»Ñ“ñ¦½—ñO¥®ïHè4 ±Š£ k¹íß¡À×Ñ(Ó¢óªžHìëIÕ…39pŒéå¢eŸÚ?ÈÌñ”Ëtc>q|05ƒ˜ç5a+O½ò,î.ªáAATýH'g?^oÆ™ØÐi k.e=ÿ¤w¡©ŽóžbCÆy¾Ÿ•ƒ„²“'ˆ2ãÖ&Ïy¯ô’Ô0ä33ÏÇkr>’ÔŠ•Š!ÔßmB¸EÞÑvÙð3gÎSp0¬+Rôù§¶èaÔЖuÌ[›Œþ‘”h©‹-ô4ÍrLíô­ƒÔl c6üÆÎ 9Ôþà Ý Æ²|^;*ñÁ[ž°£&†(0(ÏìjÞøñÈ=¯Ë7Êãx=ðÈ‚zÆñš+*-Ùíã„\€¦8»^Ý“4»!6?¾á àºècWL+{QÒTÂÓÒÏÈ)…¾Tx¾˜ä" ª~)ßøáD½Â.ÔjñÓw“®ûƉÙwŽÎçŸövJ—G]Îm~”>ìv [8 µ©ô&®iSOž!¡Ü8¦F¼dšƒiPO7ƒ‡Áü3á…V]¢Ù„9 m~29ö Ï¡Ó|_~—¹ ½8§ïûã9уW3¿¼~^îÑÁü­TùϬ/ V–-vá+pbâkÊo1,ë´mâãÂ!Å£ƒAA¢Q¦ÔU3”*aÆó¨5Žô–.øø‘¿cY²Ïi0jšÀÞðÖA€.ÒBÏ“BQþk_ÕjxäPwŒ•Ž rÑUº–©˜ñ¥|÷%<äðÙ›©ŽbìäÉZʇËÞÙ‡D~mlÜ:PÉF\ÃJˆuYO!D:V<¤¥³’’{iTùrÝ€Øä0ë¯ð·BY?~ú ›¸³ž¦¦\­$Üá°‰ÌhC •¦H‚BȆs4=Uo0ôº>‘Ë9Îϸ_½NoÖ"`(eØ6‘¶iMւ޼àÀôc¤_;‡³ÚÞ Ψu¦fã1üž†Á Ó.¢œï㨕¼v2UBÉÅÙ™äª[ǃ²¿>ož®2`æÝñn¦nŽÂ·šPaõFLu®iÃ!YÆQAc1žm¬[ùQhA˜ÂIs%¡ÚÃ8©a4r¼h‚h¨õîTXºØ­*é¿ äC°¢5Èé~C†¼/XÖ§®¨ot8™5œp7hhÃdì=]“®21*‡n<è¯1#<_+ÜMí'A¬jöÖC[K/Þf(ä#´äNÎ „Ë1m'+UŒ×‘¯åqDÈÖ‘²&XW¹Ï¾Öö;1Ï+s¼æ{íV¶ü‹VCŒ‘w²¢õYYíc*½WÂÖl¦n>‡…;lí¿YÑWÐï€s¿ôÆÀ°YŽ DÁà„pzb…_ò­êãž•–\¥)ž}wG–ø6¿”ûoà% mEÁ"+P“Šn çð @¤Î'6Ù£<¥TCcÑÓ†û–ËœŸeL³ÈB*º †‰¸ÕX1QÊM*%ÖyøíŸ¨ó×:g„Lq™†Ñ®‚m†¤Ì›c|ãÏ­Ýz/;>v2{xw^ó(ìÓW†ÑËv8‚r/|_¹7—þE¹±¹“Bõ¾få?p»ãßñŽs ³þ]ð~sÅCÏÓÍ^ÏjRý‡‰›\yƒ½%~?¼oÚ-Ŧ›=2 MP+÷ L†ÂÜÔð{9)"ö6Ö@%m~‘;â(ÿHyƒ1U›‡ /´ÿm8LTÁŒŽ,ÌÕªeÂÚ hg¼hVD<š¯EC(â]ýdCÙˆq>e¶Ä Ï­TôUb‰\ƒmÔÕðÄc©s¾øšzRs¸H`‹ágC®Fú ¿WBcº”}`ÌÍ$ÌXFì"Œ™#²Œ7Á&õ*溱kR§T+Sª|^DBæ FÈÇoÑ)iÎ/²þs3å›)Mšž’¬yÞÄ ø+0ÔððäÆòâ_Nð#AiiÑ„ž©,.¸Ñ_DÏ äšË¥c qTFŠô ˜hLl¹à1 Ö*‚.üþ×t ±ñÚIÄ(}Äs·…”·‡7,] 
Á<´=hžKú°:yÎçñ|䆃,‚V„áC¦²ç°Ôuâc-k[Âi¼Î^Ü‚jyÃ:©ÎPëèÓ+OÆ”-bgn‡aùÝãs}`ó·&›Œ…õ™¨ Ó0}G´ª Xn¡  ó9}k±¶w©Â…xêa™îrJ®¡HÐdpXÖ2t… Ø“¼_^Œ`cÉoˆÍÃűît?‘N»œ}mÙ k# ?ô©œ¢ŸP8â@ÃìP×Ñ;–ÄÛ ÆPTgý5‹z#KÉãX²½Í„¼ÇO´£Ÿ¼ol*šbÛá_½Æ²ÌÊ[ÄŸ}-²×›‹õÉ‘Ða¡æ :r)9žé? ±lUœ>·½îÕø®–ý½¼£n䪞aÕeIëÄ5èøÃê±V§OõçnÜÎÚ !ÀŽW:ð´–¼(‘~aÇ©L¶.?C€gO Dr¶²" ¥³9)vžH/|¨Hq@µ,¢u¢)È9>Ýá³~ÎÍ¡ÚÓˆýŒÖÑÞI1fè6ãJi\Åäbfy-Ÿ#¸ÙbØ9~%˜à¶`î‚DuÖÆ[¦_\6K7.—iš šäS4–æ"ÈÇ‘¡Ú»B¯f¾ÎȘÀ%— 5~ƒ¯[vÓÎ:³4Ïy7˜M™™ué<÷ßÿ%Âvó6ß3”hx4ìï%C&\Þ_x ̺ox4⹺ !½ÍÀ“äÏæy(„¦„§ ZÑŒ‰ÿ+¨î]W‹säë¥]‘°âýÅ/‚zÏa)ŠÑÝMÂܳkÙ£Øâr'øóÞ·*'‚83µ+÷"sÃL¾¹TUN=w;%N<×!Ó|zC¨|.Wùn=Ïת["Ô_g¯ì(fstàŽ€1ÝÂÍòFã†Ai“/Å'Óè²4Åã=.ü¿s#]õtÈý >æª_„²z Þ¦„†Æ1™W6[8ØuÞ{ °\£&OÛö½”åb%j*9Ê@‘p àQ“Åg`%Ž/zOkáÚ: |ñˆUZÞã7ø7í?é#n¨Q,Ã<Á¤IÊ2}Y™Б ÞOœ@ê}#Õ9ŽBûꇩ,„.‰åÀ’ð0PAžÇýûo\]×ó•é¡ÄÝ䜎#DÓ_0±ÑH0«U„@Ç `óß|ÿTЯÊ?¿*þq°Iƒ2¤5d§‘Þ5ðù#Ý}©»&S¿œ1Ó7¸2åáÉæ;¾rÝfÿ‡r÷×/¶íJõš¬{cž¢0̰ə ᎴËù4´wÀJ1¦w(ýß­G*éåµÄ‹$&l©Ä$ù‘D=üƒZƒ&´˜U–µ3: ºDw)¨žVF>‹qäúŒÐ;àèìÒP®P˜M¡sðœü+åñͼ¬ÅÔ|5Ga{ñ9iVI†Fe¬vA_ Z×Kûc2ËÑ¢üvç±pyŒÒ”yÉßÈÎécž3÷V71²2‰fuÚ¹×C~T6ÞzÊXì²kå´ uáŽ;ã€laÕ4Î:ª!ãÙx|õ˾W¤.Úaú>­’Âû[)‡[Ù°ñ™Ëxs®OƒhWP­Ð/L¶ TæÆ=w†2ïÍÔ´ŽÉßôÔݸêööM$Xo´Þœ³ÆÕà~ÛªËxÈ·tº”“•³xÀ°È‡žyq¡c[DŠDEGÆR1¾K»#®Ø À.Û¾·1£Éª­Ö¿¨ˆPäÓ|Ó|µí÷#LqÝvÍ}?EhñÁáo½}±|7ÿ[n¹«àÙÇ$Žþ]–‰¶—ZyõªŒ9Á@–Ÿñ‰uß `,p ·i¾p„- &äMs"| 6L ÃÀ=¡ôbCÈïY)áD—¶ôò¸)”kqŽ ¾xqœ´D7 uO]!NA™.“D,ÑS#—`mb{F6Â^·‡õùwܪÐüs­”R?< | ”PZ\ÊÆ’í1(ýmÓn´›Ìq~“3£1ÕÐbÇ8à{ÐÁâ·¬0 Ã´jÄ ßnf—âUûPæÜˆIÛ3;v[3ï‹d±¢åt*^­BU;X¬ øÍöŠ-þ;62''Å -«FÍÑžÂX°tk$#8dÉg°©9êlˆBt½!¬LOŨaLZ'›&vU'­LÔ ¶P¶Îmie0Ï„p*ÏÉDmª ¹·!Ò£*ÑPí¬UÔ1Kq,Ì=,ÏÉé-( ò-­ŽI»»ý·ŠdóÞ‡6R1g'^ääN0» T½ö ‘¦EkªR ‘ÐLõn–®«Zv–UÕˆ{ÜÇ"LdsÈ_ ¹ˆaRl©ÓxÙÁl³”±ÕûìG šã¼ómO¡‘‡ÅK-F3©_nŸ3´Ð]§Xôˆ Ñx¨T æhoa«…³ïµc™¼; Ãg378ÓñkÔK1)½™*ÎÁ6²Ä'Œƒ¡QAê\­±Ï’}:†lvÊGCKó¸âz¥1fÈAÂáÇ >º…1!=$÷ÀÀÄ¥füŸ„æì's*¾¨"9§éëà;¼>ûH9/øn0Ç=§+ùá—?ßÓŒýu ã.>LȘÜUBÉÜ(Øô—Ox`Ƙü7|"|ý$›ù£òåC¨c£7·Fåp‰”1BŽïUTĬÚùwúG5Û·|õ*³›ùŸ¡–ÖÈi(®éZƒúwɽuÉR#zÅí†}m}`ýM±õÚœøœh¾Ãê82äÖФ¤iDZ¤¼sLÜz>»ÉmX$àæêõ ut¦–’Ô¥‡íh9ŽJŠˆ‚ª (Ú8ÎDÈi'ã\nÙ6»‹ç­R<Åâ¦èÍÛ0qÕN{gœ Nš?‡f’ ŠV:_Î!Mv^´I(RÛÐ4Žø-±ñÀW„$’†£eg-97¨ü[±"^7œƒ%Ê9ì¯+Ï_×±©—w¡wä@Vö gýÍ´s¿ñ2Þ^ÿ2çJ(·: 
õ*žãàŽ•B×X?LžOC$f—óäê–¢ÿŸ¥~(ÝŸÈdMœ¿gÁãhVm‹—¨9jú«Ó±^UBC‚éŸÎ÷Ûû|ãÁZóAƒ\9ž8öçqY{@.%Uá˜ôc>tÆÞw êßä ±ö:Ï-o둟ۙ…ã´§h©O$ߘ»&qÙøÝ#ùµ2?1ѯÚ™mÙB…»yc˦ó÷¾/²àsT0y¥ÍÜmqÞYcôq²5(ÖF´öqß§w:•¯áÍm‚8BB6oÊ‚†W8UÐsòàáÆ$RÐ'ôr¬Œ{>[ìÅÙc¥¤òÓcɺãG6oê}Ý!ܹæ¿õ1¯zqÂ9a |ùJÄl:»Î?®}>ú}¶ïáÀ7Ó¬ø¡ CÌt9ž]?ªõÂëa÷–7~’ïör¯ëðÜ{ìC¿ZÁg÷¦Þ>Ç@ßÒáþTû½FæçJ›ùþIh™.ï…i~»ÄǹCñÕ¸Ë,±,™Ü%Q;_1‡.óÅ?Ï/ÏT±Éò>¡êºßcÝYi£¿‰Ï!ôdEéú¨ÉTÑb Aâ0ߥèiù î‰a˜ä–âª@Üã+ ÏLˆ 6F´•™÷‡^«ƒƒtlRÃäÚÜËɱÑÜ¿]LqâÆí3x19%´†zFG5Žs¢']Ñ·ê`ó¾Á£ª˜7:æD7b-¶‹zƒV7Á›(Е8¥«mð´sm‰Fa ó'¨jR¡Í=Λ,2¢ßK` 71‘úâhT:¼¯ºƒyf µîs+šN˜Í «e‚tl¾d™¥e«È…§)î{¾§[ßhH¶ŸEš;©k˜ 6üEÐ2!ÌNÂG=‚~®BÑ0 Xþ§C“°¥?žß—ízÓôp¬/¨Ïû©o˜ÅoÄuÚ‡}ˆ1 b<0„a€) ´A †á:´€Ãý3ìïÞelÞAäÚ)|?$=Õ‹tfFÁï†&p´#Æ5>‡Ýï^pP [C :/oIu—`"0*?}Š›¦àGpæÒ»HÉ7J=­ÈÀÌÀ±S‡;{P3XQ û¶f/Ö¶?œM{Þ§0þxÎlÙú%ÈÃó5(-Ó¹R raÀU4éö”lÍŸqÓ¿×ò%î'Ù‡¿ÔH8¼ÃÃç_ÉJf d¥qN.r®“‰£Â^ ÙÐo¦72VÆÐŒ$RjƤýÛ%7ý¯ª!’KqñTŸAr(À„XV%âƒvK$ª)b?Û†f¡QÓ¶t(Ð>¯¯Ð÷ÕõÊQS{ÂΊ"b9É ÄBè"§ßðPþáåd!1¿0xî¥ü„7ŸôÔCW—(n­vù± µáb¦/x=»tE·ìF\Õ}5­MÃTLÖ6Дfî:^ y–:óë Ë=*bnÛS¥¯ª“Lw›’RDw%Ë0už‘ÌɆY*fg‡ô÷‡Ïÿ>Ç”£‰kÓ†ÆbÖaŸÜ؈ gÑ:%]3Ð= ¸>úëDœèg¡Á7Y(×µ’÷¦$zYjÖ£ÜýìhÈJÙìÍ™ˆº­ ~ð6.‡eo±$Œ›c ÖˆêœÅ¬êÇ^ά«q£U6köcîrËïàzð8zoç/‹KŸ’À ý? 
‰ð~ êº qóQ®QÚ; ¬oPsÂQ£¹°È=¸òË™`±:%ÅQjšo›6_}–=0XhF$bÑQa˜öÁŘÀó@çhaÂYTh6A“´{3†fÌÑZ¯<ðTl|×Ç:IåâßéÓIûšw‚`]OÆÙ{k:¥®'‹ûA—í_yb§å'ÁÏ&2ÔJ®M×ÒË Ñ®upqÍ.Ú%·R7[ÜÔfж.ܶЇ“½Ò>33?ž‰¸` èÒ¬z›wŠÚ{èþú›v>w¯Ù“L-ä2ˆƒïÐí]eöâqÁý@¬'-81f1ø §ÞÅi Zú ÂêK±æøž9™£\’'ØÓ"‡oS™ö:i†…:]–…Þ$¢EÅ»+ñ̳ ;yž¢µ¥|žî.<Ì2ùÞèbÙÈÖ™²vÅö=ZÐ}LŒ!ñâS!®\úßëb ð…Å¿vƒÙF·ŒtÒiPÞl<”3£a.ËÝ~ÿF•µ+Á?¡°Cl$ïÌe[ ¿Š‰'ôRâŽxÙ~zÍ5Þ—Ž:Ö3Æž ò¼³¹‚I $}~šöþÛZ}w½Ù”'Ežã¨üñ7¥õÞ0t€ÚÃ9ƒ+ hvzÖšÅч?b²—älôû€ÌX_A_€LÌ:ÅËVö#ìFK¶A!>?á¶9fe·Ý!fSu£ë%ñ¾-3ƒ‹Ÿ—;)É39¸©¥x?jלŠBÁ¾_*¸`gpµ\…Qõ @Ѩý—U¸ ÚõxăÌT82BûdÏÐø'KgøåÒ/‡Æ 2‹¡<È??ãl‡ô,©Ý?]œ¶ˆi‰á±ÍoûØÁž%Iš‚AŶ‚B>²Pm‡/ tµ.9**(ríÜ">22ÐÈ7¯%é‡>¾ŽÖ.ø8m±löls@±”ux(E>á°Þ:ì{§¨–¥…Ý6iÌRûªô2€Á>ƒž¶y÷ï€`5[sêmõ;™.0Êd1»ÕŠâÖ6á0 ™ Ç‚÷¤ð='YràTf@û­- -QÛˆëzu4@óºÖÜÝF Œ°c ´=Î1·Ýâ#7¼ŸOÔG:LdTÈ™dU=U4>ºš‹Éª×AfŒ¬žû˜HèûdvƘütçÛOP6÷Ž‚â„(îÙ䊪Šl$˜ê +×ì¨ AÛIªì‹SH „z@À@]Oé^cç?§A¡ZÔ=ó›z·°h\Žºq‡¨A› z‹Ü8•kÖLYüÃt]^¤›mVt õ*.F,llŒƒªRŽ dé G`à¡¡²ÈTG%‰núŠ6'Î]w¢ÓôsMFÚ,5•äàŸ®Å¹ÎB{ ¤+y¿R¨zðŠî×Kg+…:×€ûfC!lÔÌxâ-LTà¾S¡AP”‡Hšt€•‹­ºÓ 1Ñç a8‡F5ú"Of< !ºQÎyE@ôù è=ŒF~ެìíÛ•ÇLò2ô`×ÐÅÝLPÈÖ&IÍÔÌÓ³CDàoÐÚ9Uf}+¶¡è|åÈÓDzì9žj ¥²„Sуo¬ Â]˜—‘wÑw13øy×Öa©—žüA Ë¼ÔIx‰«SqðwɇI¤G¯Á¡çRQwdû8Œ:‹ï|à> ¾Ì¶£±"èc–ê+-1FoF_ HÉdª±Mrˆ6CämœŠÈD3?Ù%A¾ÞÈ[láÀÓý¨Á‚€efIŠY’4-ÀÀ/Á ø×Ñâ£ÌDz!¯Ð|¶L!©èÛûA˜¦­š,‡ŽFý› =ý[¶ ×m¡ó*ÄÁ!S:j¤jE‚ Ží‘ .¢ln›hI 1&Ø!¶’†’I@˜ÄÓ15APB"¥¦MÊÇŠ¢²îá…„sWX ãì „Q±P݈-E€ ¡À46Ó‡HnmŒhCnQDª Ê)T‹"Œ$…0 ÝçZü3Žß~›ëYêàµù¨è%4†Y”^›gD‘°øaLø;™lÇGëWŠg¸(ª~ꂆ åüg¬î÷ƒ\ËÛ†Œø0´¥ ,õÒ%IHêĠ9LßÐ1Q ÅšNË6'Âï áÉÍê¦0rb„9ãAýØ`â 3Lœ%ÎÆ™¸½þXRÚ ÊïÅ0«>;ûÄN›ò ,=ˆç·™ÕÅö_çO­ F*Z_Âînˆ4ÖÓ'6Bï i˜èzp¬õ­ ”h¨rÓz©ÅШšSÑՇܿ«:Ö".†1S=‚QÔ`‰  •ÂF»5ÕÓé¦ü„õG„;H.bÛ‹'ÿi¬êcC Ó§J'‚ \ŠSêýƒ¦¹2eŒØt~cbšd*àdLÇÓäçá†Yˆƒ ˆq¡:ž¶ŒRI*?áoaí¼éRØñ°Èökèݘ{È)¸ý_ü“º_Ô‹¯ÙüLñ‹ ž¼\ÜÙ{ßZŠ ç_•¼æÒ;Ü M‰beÓGxµ^Ú¾½†4Ù cîµ\_íA˜ 7PŠ;–ã``Ö¨q@O±u„CŠ'¬+©ÂcHáÑ ‚zí™8üÙ7[ç}÷›°€8õNÏ`¢èý¡]ÄeÕŲ;&q)5ŒùÁJsƒ9Ø,…0Ácx'Dˆ2É6j£ö»³½?n—+[] žK-ÒEáu1Ìñê½v¢²Žv|õÏï,˜°ã0hØS5ŸÙç0mâ53Àa»µXíä,IÛªJ×x ûg#9þÞ5¥ÆÀu4]îü¢º#B8ú¾ôð´§W“/H¶Zð­ CÓLäÕXмa5÷%1êã1Y¼ÓQÑ¡¬HÄT1¶wÆøY~‹c˜q×n-;I-`|ÚŒ²´M ËñLJ<·rq‘’×Að¹ÖþÆsûšø#j?!Çá˜PÇicÎtèðÌ^]åè¾o iŽG®Çù þ“·0-ØÚ³ƒÏÑ蓲ÝÙJ|þ 
±ÆE¾#Í»qtIbìäÚJàn#Ëw\Øß1øÌ¹¸L³8Jí¬$ÐÄô•¬ÿÒE4ÌA’Âçõ6Ž~«¿SÕѾ.[}×ìàYŰwí?f¾›+áë•!µ¡OU”’}læ,ðO…s,á­bd£C«¥¡#D»šßtïƒhÙw¼/È3­ òуÔQ^%¯|)Ëžˆó®5Ñ’HÐÒ%‰-$ÿ•¸nÖ‹„Ò±å+í Ò80r‘¤ÓE)‘–±¯Å ²ëZÆpäÜVG JÆ—U¥WìtÏdêc¸0¼jôTÕ—}zXZ48õxR7\§–EwÖÐ7eòxÌ^Ú1`ÝÄþ59V»4ŸèÁŸswO~\žäIciɳizp/íÑå €o˳¾ûHè`ïҘαW×0€ÌXÍ¢Ƈ¯ö¯^Œ’Ñ—£5d(„Ñ‘£ºæ›˜e¶|¤ô´æ&ÓDµFüf¯3‚ÜSaéÒ:k§•OgË'"º3bà\)k\Žc-܃@²€àç%®¶zèÁàó6Þ²óÁO5ÂlÃ>cûøuࡱYÉêüü>Ð6æäÀú‘8ÌÏ!dÉá;¿MLqË4u¢:–’½.&Äšî&M ’9i9ò«ŸZÎ)FÛgœ9àÇœŠ±©¸ÑÉ ¥ƒòr3>2üî‹ðY(¦ît&V>{YB¸>'ª2ï¬úLÇlj~<}¦‚Ü‚.g÷ŸÇêX1ô$Œ/|£4»*\Ô"Ê?Np~æÂëÚú ÛHùKºžâBcùýÿlÍ­Éí§cïB…ãruÆ©o‰À¶,§SjGy¹Ñ ¬ËÌ¥ÄÌ¢ÇÉòs±o9ß2мkeb¨–Ym©¸?Q€yÎb&ýö7š Ù&5),ôù«B'’îcC0s4ºázûäØzL°†º/—n ï5ò¾pg¦ÉùÈDéÞM/ÊRóóàÅ ¨&çt$<¥XcÄvxíµ¹Ð«†fÙ‡ð=AÅùntømàÎôPo;-jm'LÝGŒL&çnC¾Õs¾Ø†Î.ìäIÈDØÑvÄw^EòÎqO`]ÐXzsè#è»Ww@ÕÀ ©"µ?PÁ'{­Ù?¯N㮆ó°éãÈ>‚¶:n~<Ç}„Q{µ™N\§ã§´õͨž9â2¬ôìaJÌB Ý;hnC\ƨaƒFK‰^§‹„J.~™Î¬jçMlYs.Ù ¼È\î<´Æw.÷/·W]l’¿¶85•ÁÐ]ŽnÍÕvrÛn"ù¾-#%FÑ3¢çŽ.s3[„á)ÇC>™îøäôG(šQ¨SBªdQ9Ø´uÝÂ4Þû¨Ü$9…¾Šó2‘û·€¶€ÒV/ÚéD½LÞ,fÇX_v¹u—ðý+e¦P|‘„btc®ÀÀà%Ó†¥$rÉÔPi‚©ðüÔlñʼnæäN˜›T4Í ãsUZ¾y|Š;mÜìRY± ëS4,Ž_ý"¹‘è©îsÜh›á·¯ªÿ,ˆ¦üªŽ}Fss«xÂÆÊöM{Ïeïõs%ÕãÁú,šœÐéªV1ƒ¿U¸xÛ²¬p‚kiÄZ~bM·6v縵i2£äVuœ¹„RáÜÆáÜŠHf²àCÒ cûÕZBP€Lú> ‘@ây¿;_yߌ¶seðü];ñƒö¢:tÍZW”³é¦t64× äjDÌ“&m^€LTfu:H}´¾nl«!ߦÞÕò›9Ɔh6‡ë™CŒãûÜgä=BïÂvï<> jžj{š¨˜ƒoƒ=ÂýóBºOÉÀHSL¡P-C„z]ÛÉšÌJHJ5nÞ ¦ÓÈϱ’ê$pÀ‚š×J\I?_Ÿ®Ý(;²‚U23ÄU«$F¢:‹¤ dïÇ!?:ßÙŽÇèe¯eðxædçÅé¡è:EáM¸73 ½iL’ Ì~ˆß8Ý$R—ÑúŒn!ÛéP¢T–f}~™Im¢)†£D ?°U[º.)SƒÉ“úw—aâ(’Í%[_‡âÜè‚!\f„ÉW6³W þéÆ@&s…q“óWe³.ôv™NÁ¹;Þ"³=M6NB\J9B®¹›Â%ÆÁË©ÑÊ‹‚]Ôxà‹ðÝg §ÎµuÌr¼":¨ä/ìÄ&#g=}›tJ(ºªv·¿' õ]£!»O)´ ÷ÛJMRp!®m ¤Å$§Lož›ÉàäŽÍû8{g¶x2£‚jÈÂóOí9wYóñ’4x¯Ç>I»^¸äh²!êԻ䯺Ö'ÝÅ6Î?œ1ãÈŽ'#$M.áÀúw¾ó䥓„šV¼*ç×:ÕŒÆÃ‰g7Ô[iÚÇMÄ鎾I rÈß»ÜïÖ›3-{–dìùhôï¶6Žz›pŽBhäÔz>ù1¼ïÆyî4ÌJb„ÃC[íˆ <ÁËFйuGêú8X $ªÆbgx¯Ü8®uÄ ™Ý2«m8CôÛäÙÂGÖo}{N]›®kºÚ·‚.´AªÐÄJVŒ8½sÜ  0ÜÜ ³½Ð=d4ä)[»à˜þ4„°*ÙÚ`‹ƒßc £,î§‹X—9²®†%fˆ ŸÃÝq\²Ó%½’½Òìã?ÃØjÒ…‹±Zü4ÊIžÉôˆZ–†L³V*aÓó0‡ ÷P(Ú¼ÑÃñ°pÏ'ï§¹¸)î(€éø{´œ.ÐT–_ÉÉïóÏ?ìWxx‘€`x1~CÄ/fš[™Ö{ÐÏÀ6 |r¥Q$ÎçlßÞo‘x$AŸÌ,D3Ê¿Áoá“×”èððnD=P`Ih½hÑÓmz©àm9õ°3¨Ý/rÜVA]$ è·4 Îþ©¤žÑÈ}V¯%¸\SËTLË7募ЩAZ°…F èÑj0òwÏ›>I^ÆöûîáÎî 
­~†OبG?ƒç!c-õ¸œÛqCƒ³™1„³™ïÙíTdÊ==.,hÉé/YýFÅÒâóXàÐ?õ¶^8îW²Aíz îTÄøy¦ ,Ùöï+íµµš³è°¥DYdŠ["¨”$vÈp•—RxÍî¨eR3–|¦Ìlø"þ™Ž¶÷,£¸{ß­FUø²‰¾¨; ¦ úvbž»H…á1©»qÖ{uø×ó€êãSXΊwy˜|B©‘n¦Íýóª5-?ëAqúž«ÇÍ~Äy(Ÿ•jë’wÅ; *ýéé3‰ñ;< bÛßWLèÌW„ÑÜõ7M3ºžÇˆ¦˜»3ç&vcØ¡ßÇÉÐÍ||Î|[ISŒ¿ЙÜà¦þÕÑZùq«t )$™XÐ3rô?-øñà'ߎ‚oAãžÙŸ×Áôö3öõ¼±ÔþÎD´“&˜$¾½ ¨„g.|=Ž)o¶ñZˆÔw€ìΦ» GOXR™­;eúNÌþÑÞeÅsÔׯ®ÛÞØ'„ágù iÑî÷ÄYHS•sŸèß”ñ/[ÄjkÍs† øþªø6 ÓÓïÚqëœÀõF§—‹Þ¯åvOŽ™†ßèäÁ|è@ÜËL²P5+]êñvšµ¢yC +¼:ŽŠ\…Z/Îåùæ–*«¢­Í9A¾Ëm« Û|¶ZêP…}‹œb03…cÓ†š:3+,cö#ÕHƒÀ£¥_ÁkSùIT+,-º̰ Øj×㮄ª³0ì`²Jý÷üFDàb’áÛLd©Þ6[égO­ÂÚR÷TUQXN§DÍ;ô9R•(דØé“îotdvQ6 “¢™g‘gKGùÛÞ/òpgé2œãðÉIõ¯ÜìÙõàÊßY¹J Œ«”ÔÐ3¾’÷Z‹f ÂPŠóaY¸`fs˜kRtòÑùÓ¤X÷Ê4ÇI²ÇÅ6kÍ&Ìü3µÿ¹úõú­[@¬z¹mÍ‹®ª¡ŸÏc]T§éüÆþ§ë÷‚»Ô y›JÁ‚¡G ev¸gY£U ÉÚ¡ ™R(}ýá §0×z|(P|¨Ò°>ÃÇãM\}í•>óXü­Ž}0@¹g'²8èWÁÖ¬ÆöïÜ'‚˜sÔ 3‘j¥kP©Ÿìë@l?bbaè‹Y+…16yÃäw)UéÒBçïoÂëó%‡JÜå¤X,·µÕOÊLZ¤gõTÝ­<`N Ä@û_|O¶Ë™ô ãçÓ q‰TÜECŸåG_}Šú·ÏoÇáð)‚eà¹qf@íQFoÆþæ¿Xà™ ñ4ÿ™ùÊ{…„ñi‡§#(“.ë‚Áž¿RO¹•uƒ9yûA™¾7a~½QjæWœ5°E㹉T¥Ã\çå8ð1ò·çó9û>pàÝœ›X¼1+ü%µå`pìQå[R1îyXÃ0:i€gä`X3¾ŠF™¨Â¤A,åB˜#Ïx@Í^#çÇßÓùt!ê,É€ÒõÆ8ûFGاð[;Q/®‚åõôÓŠ†fŠèÐÝ_ä|²ù2€%¾ä@IJí–UÐDaÞ¼`á,KŠz·¡ùtvH|wú¾!X©O(í–zhÇã2ù}ˆÃ«ë)˜ULú7ð)Y½S瘖±È¯ÏÁ× ÏŸfÁ!E?%8góæoöDðc…“¦’᯼|üúÀP•ú9%ÀR–/,LLzavò™ 9‘D 2(8LgÛÛÞ2c>GiajëSõ…c °½üÞU¨ƒ2ÅpÛ{žM°yk’ 1¦_òKB=Ó{P~KtÀüÕÂè;C¨ˆ”¼Àé½I<ûµ±)&{Ì\-'›UÓ™Ðê/]5~áêwV%!S>Ù^ú“M/cÙä(…$‰è¶YÜø£D)ˆˆi´N:v?à@I ÃQùâ¶ UrÙ|[§v> Rˆˆ”²„1"çAÑ™žƒÈ`š¥]Ù²îhNPsÞÏꂊ¹jÑ ß¬U‡¥³ˆiwŸÙ$!ÈÛpùÉk¬?CÒŠ‚Ü;"ç\ºt¹òë9ŽCQ×/¸aØpÊå¸qXø. 
„æV<‘8a·ßÀ/µˆoǹBIñâÁÌy‡@à§@æ¸3a ¼”¨C­v¬Ä‡c¡+T ç#œÈPr çß#ºã¡Èî|ØÈœ‡BB@Xol^G]ê¬`)˜›D½>Êý¼¤}cëv3dB‡F#°Pª; ؾ®”q‡<« ETXÈ¡U¹äWí#© u(R $r½‰Ìð3ES‘ÌËØÂç$b‹ Ä[Äž¾CÉ%ËÏΘk[àl«‘h¦ÃÀØL°^XIBÄAÔ3¹ 1ý“n-—SU7gaè sQÔöûú;Zg¤™¢ßLk{ÔPLC ‚D4e€:vb!\[^²åÀæ©Ôc½â„=¨¢Øpb’ÍI2U”… Å(.ÃG"²8Ó{‘Ìþ†Oà #™þkÀ¹Áˆp{§)ƒ24sìãGªú+“I Õ܃…®ËíNáGÿ´£ ²ª}˜.œB¯^=p¯CâÎE³(Q8!F—ÇA„Sß]PTþ †(ƒŒpÃè N†sÒ/ÒQ5@Ó ¹XyZ.àÇ]ê¥fð*LÉJç½rcb7Ô–‡0Œ—"N¡AAE5íËÈ®"U}' äãÏÞ•áèäúéñZÛ“£Û\Žþ¦p«#ܦDtYØ’c@ "è[Æä8[Ó8mŠl@šCm¥ C`“M±¶Ä™x"6…dÂ>±$L;ÓRІ{ÓØA†I5‹DjU $’Š QHQ"Ôd`!ùºñ–Q£úرóÍ’ãÌ`¦tÓ®'ÎBC&ѱNþ_"ÀÓw^÷ÂØ i!‚8%ÿ7+Éœx*n&.ÅNxk¨•Äî4 k¤'\µ´B¾Áãñ’ËndûŒL@ÜØÌ-3¿›dMú¿c‰Ri1‹Á¿ÀŽ˜=[„ ¢AŒI¤ï®sY?*>Œƒð6êz±[sùAñýt,{ž(y“ß5tŽâè t¥Ða²¸y…DHÈdƒ7šõ—eDÂip—¿úóÛµñcÒ­AZ<~ JdcØsÅcc…„¾VâÙG#!A—Æ ˆ„èûtí$\Ÿ‡âì„ óðƒ+bs¬×½½(ýÐ×Fé¶ñ8’š#n0))¯âÌpöa¡®µŽÈ0<œoM”LòŽ‚¸ ça{óÔ6’ðâ :ô‡’“oåË ô¢ü´'}ØìÄÌ‘ÖØˆK~ŒJ#ðaþ :µB‚áÇ ³›=‘‘hô”F’S(±Ô– Ì[A³ïÈÒ8·cR[Lí¨GØ*òêÞ£!“ ~0‘~Æ^°] îäÌ<8[ñCp¨€õívÌû˯|ôqŒ¼/;6@µ>„(zˆŠ êháÞ ‚ÜD!–aü§O¢ †U(TÌÒ»…Æt¥CåÎÝ©HdŒ>#¿;Ω°Û¿nÜ£Û£Ê UÖ-ì½Ñ®Ð+al[¿Ä£â»z‰¼s L…zä]Ž¡W„äH¾C:À×P Ñ)–„¨·‘ÆåóYðhºG1YmßÖ?#U1—“Éj`À5((¿Æä [,Ë]Q=¶1ÔgdÅï; œ²~Ф–[$†2¹,þÆyþP3D†þù—qö©Þ¼X„ wå˜ ²TݾØM.jíÑ]½ë‹áöœFÆ¥ŠÉ¡‘¹¹Ò–/§Ÿ_Rxk™ä•ÍÖ;Íñ”…¡5aØYTx)Ñj‡£’—XP“-ÚõeBþ OJªõ ±.ø.¾ºvõËCáîΣ´A:æ'>]N„ !Ÿ;0ÎŽì}ŸñË/äRÇøìcPL9ïPÕW­?Í™™ôH'¦GJ4;Uôa¬–¥å‹ª8Dí‘¡HCåz/8µ¢âŒÖ­ÏèáuàÑê'¥3¾å%[¥‡çͺm¹¬èjÊY# ! .ï¿P}Ò³¾Þõœ°¦ïÀt½†DÉ2"ußÉ·¤¯ZpEÝŸæ­ØìÎî(±Úÿ? `ZǘM5ܰŒT  ¹¤#>tPiûq½°mïœÚ/ó·½èN3ô“<`Íå–¼Püóšó¸ž¹MN ꌬc¨ò6ûCù wµF‹êëÖãS!ÓÌÇ’€¥øÈòŽý²˜¾™D&v¾zBfr÷»#¦sïy?å‘6ë/7k0g 1Ÿ†;àOÔ‡v'šÛ¾Æû0ž²}}ÎçrY†[ÜÇÇ+ÕÚ^ÍCLh:¸TöeÏV+åcDøV,Þ‘^K? w£¢#ë«Py:l ¶T߯<½ƒõç3±QÕðë®{iéfà3œc%¨{š »#+°Œ‰anùÔ:OM›6EG`écXÜ2žDi·á|¯žš×6l >?'˜Ì*@œø–ë§WMÊçtz_&_KÀÏ…Á9¿c§>'šÈ”PÄ4lšä PÀ·†PþcæÊ²°¹ûŠ5ßLÝŠòê1Ílsê7f¯H}VŠÍ7GI0e°p‘¾î×õÆ3γ5>o0nbîy®hÿ¯=Ïg¼5P䉖Ra@þÑ.â€ï`Í2wÿu×eóuCèϧ_g‚Áļr2Xô´`L1_êB—ÙÒc¿ÊÎ&#óê‘lÛ+KÑ“7åÐ,¨• Í‚Wy"o"E×äСŒ¬‹gXb°’ª¢m™ÙA# Ej–X»yŠd‚þ+3cD9–œ<a)6ô²‡¡k"Y”`¥ºB·KÈ~hÚÁ#O8Ÿ2ü omÇZ©‰âQ‹õpJ›ZLJce±h*]Aa¶,‹Êª/0îÆ6âaF áŒÌÌåRr¡7‰hµhòù\š%–È«*š”LfÁ^;»–”—xÄH"¿­«aä@…ìRÙœê=ö:Þ™»Eƒ·Øý[:võ^¢K %B{ŤŽrûúCZNF3È&`úÐw³>$  ¡û›Þ`÷wŽ]eø'~š¥'Øõ·ä4×¹ƒy ÌÌ[4˜hB ©í­«H‹j. 
Ç_‰„$¡GÍœ¤Ñ³Áºšû¾°aùb+ÓÂ;Œn‚ûJÀ0p`_}HÉÓ§L8×M„ÊqÅ÷Ø%`ð$»Ucm˺<ÆÇ˼Ó{IÝàû„ z64WXÇ8#öQ…”æ(Ï3ï3J»ãœ“˜ý7 „¡ w˜öôàó'Þ_p)˜þñ-{j møpI~½—Eœ_µ 0=G•Œ†.T¥…ñÀXðˆWi7!”, ƒÝyí®ÁÏê>=5]ð®X}ÀOê!#jâ(<·È?™4¿<ê÷A:ÀŸµùw„ÄAó‘Œ(úÌÔ¹&HoG?PŒrd1yˆJ!ýoG¯“2Ù’ÃÆt~M±sÃ÷yP(t,ŸA Ô]`wÜfk¼’a·4$ ¡µ•\¶RUBߨ|äEˆf–ÀÂüÛN4ÁÜU‡kÌžü~g>È Y¼|¬Î, 7²`pl(uyö‚èèef¶;Ðåÿ×ãÕ—%`ÎågbMÑu„Zƒ’O&èœ=ÐÃâÏw–=³z†Œò~xÒ@Vyb…ÎH2ÒâEBùðŽ3à´}Páx–y:¢ù•*‰dž/%·À×:kaÍŽQj´%DæP‹Ì‚8 DÙŠÆbJ¡mþs„v¯AØ!éHzýЦÀã‘]ÉŠƒ«¹Dù¾NMöøì‡†F*«€ÒE{m¢ä›xrÑHçÛ™õ·àp ƒAT3æàs>M”\„xiìfZ‡‡ Ìóú² ´D,Ž#§X5¸N&P¼`ûÑŒP= ³Y&‘“ŽkbÞSÄgsa¸(`ã"näÓãYH:ŸpH#븨Ëù(gÕ ÄíÉ1ŽÀŽ\%†¤E®×¢P$P¦cµ –ˆ1– hl\˰ªúŠ‹šT†U$ ¶ŽÞ“X‰=ãG¨˜2Ë/¯ÃB6Q)ïTZ€ùƒ‹ÄGßÓqìœø†Æ†Ç cAÓè[ç3ßÖH±¤w¶UhFÑCù–ð¢ BÐÜsðâP\}|„4vŒa‘Å„I£@õt.äo£µŽE¸ÍµA&f÷Å(O_†úuuòKÏ4½úk2n±»w¸aºŽÇb‡-ji(§—š½ €x=¸œæhR8^Ž“Mi2 Ó<ÞfÜçÖ«ò’(º€)kÛ?ÍmEÁ™9üP™·¥‘PÏãÛtg¡Ð™pVåG|Årk"Mh|"5Ô=ާOô{å_ãn§ß“Pì| ’®r!ØØáI´œ!ÉàŒ-ÚC`ÓØ’m¶›„Ó’Лi&Ò$˜úQàÊ Ð !C®É# É!¼s¬²ñÔó»ÈˆÁ˜0D ŽèíO4] |Pñ{Œq§·6‰æžÓ-Æ,uúpõLsŸ“PƒEÑñ*šR[¤õ¸ûD@&zv‡WɼgÖîS;¥æ2°ß;÷dlÜÓâ{ÖgOǯ1Ê•>^ƒ®ÂV)ïa¡ò©·$æC/wA¶£+fä°Ìß„sΦ LmO"fo(Ê!^5²é:C0ÖÌ%!)¥·³z¸Ýq a¶Õ©–Úsšäˆ dfYó"ã%†­\ØËªŠeýMx:ï"AýÞʺÁaã`€ø  }.|ØíÛÁ&p tÙV˜a c&Æaf|u¯ÈíQ‰Ç\Œ ‚à¸ÖJ«ŽÐÖ6¹8¹ƒbc•äÈeOË´gÌeiÈ>Q)8%ä£qÿ×+X½ Fú=5yŠÍ?¿õ³]|dQ·ò"UV»~Í´2B\¶H£h£áùÝ~Ç÷ ir8PLüƒ½Ïpy%8‘¡ˆhñEdç”R±Ô>1›‘ËÞšc{–’~Á\‹±ß*‹e,Ä@š…48}f_|ãÀ¹«o³rP‚ìú˨v3ßÄhDzg¶¯= B¡¤¿Ü0y‰‘øQŸñ¡}Gï\“¬z¶Ú˜Á³NÆ]wÉ\ÑÈí—E•©|Iz-ü˳Ǒ13:gáê†ø#Ìܦ£#T[¶O ¢‹¡, E@ÎýìaÃJä’˜‰~ä´àf)â{:þ=M yÀÚ1mî{Ä/7àtèqØfe!ì³:yf†úßlYÎVŸØ²ž&>UãÓ–Ñ:i üäp÷s4ûo=ŽM„fÃc:§ xä—[B*¡ŒÍÁ`³—ÑʾvfÈ¡0ÈõXWaQï<'žS/ã‚iL'ª5,=Mã’ÙÎg«üX½š¤•²Ï!T²Î ®,[¼9Ȳ‘ÖM ;=Ùób öÁkHî¼k?W/ÆÑ¥ì`†úÚ‡$éAÌÌ›çíO!X[BWn:È·÷Œ«ôd߾ܩÎöu=~P%ü—úÖQû\Œ |˜ä[°• öA¿F«=©øŒMc(D2Íè†úk¼È Hyi¨æª«nNÛ™»Ù~=¯š¼¹¬rí¿°ü âÄ7Üë äic¨ûÓ¹1‚̲çÊ9¿µ^.™:h;òÒË„#?6rmß’ÐìlàÇ©]3c‹É®Õ幉›%ŒgïàAæX`h< r1Ñê%]"L8©Uà òcH_&÷O] çZWo»ù† Á‚§?{’hÛã¡ûM%qŸxóæ]Â]WºRÛ<–ûúÔ>¦k7öêºy?¿Ɉ2§AQmwJÓ›f³ÎQbð p}h(ç!3['ų•K`ªÁ±'â!ª5±k˲¹ÌËÖe÷¼ì  û_ÜîÞŒÛaK±¤a˜þwW~#€šQ‰ƒï åP;}š$EÇâ§õ ©.ŸQcc”'o¼šW0Úó@Å|N÷O´0¨y6cù©ªEèLó‹™c »u |½ ×Zh QûJ|9ã­Îhœ9Œâ/³¶/¯þ¯ÍOë@t;xèߊ½øíû(kp¯Ð1 0k‚Q’‹²VL`µ¾ó’­zŽ|ꆎ^’g\Œé‰iO^è¢×Pbx>b>ÏO[TX{4º 
….¯ñ.¤YÓ–%2öÅŽ;}ʬƩÂ^ÐnNè{£Ê­´DšŽ“×wà°âQÕ´5¼-–ˆÄky’TMìŒXÊBS­óõÓžg3ýùðSï^½p;錄¢h`£³„ZK·€ kýŸ^6dð˜\e8;ÃgÄ£h¡N«M94ò[‰†ÃO9ž֚ã^>'¼R‡ÎR~óþ?ú¦GÁ˜°ÕþžCmcö ×”ßï%ò™Õ FK—êÀÅPYÂkÃýž ô4þ/IP=ÏU ÇëÐÚŘ_[—Lµkft°@P0q9;:"z¸7eðg$k0=TjKͺ9ƒ”j2½P1€i°G¶Æ|Ú`pG¯Ü`Q-²œ„åY혟çxÎU ™ ÅfÀ$ãK™œs¡k"!ÃÄMÆÇɱ?¯yeþù?‚ÄS³Òé†'–½ 5VÆk¥Y$»›Myg4¨mKõžKÁrLÅ26nö.W·‰,¾«„t]°x;ÌvÅò }ÕG!]f2žr± ¨ƒ"v© Ôò˜oë‚”¢¨àRXâë -^Dh“ÜǿbCÍ·½û’ÎâQáðá=œŸ|#m­Õ×f&ž*ü™â!ÆÎ¦cµ‘C¢ŠÐÅ(OPþ Ò¬(¢˜U¬²L”0¨èx‰6é¬Åʹ*U\ ’â`¢Î¨”x¦_—XIáÛÞAÕ1®¾¦W+Ǫ‘«ùؼ+ͩӵ-ÙU0»s»;x [ð901‡?'/”ÿƒT>€Á­-ÓÁ¼^‡®ºU­>žNÕôV¹h¨ªÛF”mµÅÆê§yŽ4·A`ƒ‘Y <ªc¢`‘]ä'R¤„œŸô¤•ï@nººƒJ*BOz&†½½NúŒ8¿˜s^G¡³¾Éf&F?¨ð tÀÏ7Ýß#EØÅš]p€õkÅ~‰»\EïŠ÷fă P•Š uÈA‰™ÇSrÚzg‹ aP®^RýÎäø¥ÂÝùËj·#jÇ:©vš À·+)éèóŠÞïFšÉ…Rr¬‡{„ƾtžÆ˜ÔOvQ-Žq'Ä2q"Kmœ./š¥\#Ä„O!Ó•—n©’ŠÚ h%†ö…·E§>{ÜçܸUŽÁ8_ò:K&vßXÜ8ÖÏx |ÒB’qùEiÉZˆJˆœó™ÅÖMðó…L3/9êxiýíƒÌ(º¾‘ºžœPÃÁö0l|YaNt2Ü#ulCÈ¿sÓSK5©óÏ-yÁ̵wâëƒN8·"ˆæO ŵ ÙÊi܈?ŸÏè¹þáÁ}úÐ;¥ÕT3¸(Ñ`UVåä,ç¹Lg0l$$ŒYß¾‚o‚cµâ…`‡QÆTD¹(}+“ò3å€À£Ë"åE„xAemf¯‹b¢úÔª,,Tàd¸î"P[Ür‡HÚR–Î/„ç!Q§>b0B²<øÅ³¸°Ñ$ðç8…óÕê†;GCçý*!‹ÉcXűõê0_°@a"f~õ–è ¥Ì䯰hc–˨Q‹ÝƒNÎ[!¹€q°F–‹ò{‡-.{€ñÞpOt ZÜÏDû‡ƒ-„I:=Àöš.Ùr}Gƒ³Óe¾Ç®CÀÁ@‚ “9„ŽIö'éo }VÃÔQL[>]_ßôgÌ–3/Àd=¼8úN¶†°Ç/|-“;ÚR9^Á_~„^ácwŽ9-°¾t=‘e`kOb3œŽ AM¿#WÄuºÇT¶"òºñPÍ@äÝá˜ïÌêsÀ0fzz”d‹˜™ñØÇæÿræyJ?<'¼è6ðd; ¼Ã÷ðD¦åßq'z.ÀÁ ˆM”$í½ ÁÈ ÇÕ\‘' PÆLt%L¼°ûYõ7B«JYM(­?½~ëò³³ÉqÒè0'¾Mêï’@>FVx†“n Òš±¤z~r2p(ŸÀ›œ¦†8;j$*.h•võ’.SÙN;‡°ÓPAšôšíè·ÍÆ*¦iÐCÐØãME¿¾ZTœæƒ„ü58îhaÚÏ4öÎbL*qïÛÊøN¼¸–ÿOò]ý 8…»Èà&Ô‘Ÿ,&n}1*0 Œ] Céù^K)¦çÚ9—¼OÂÇIï/ªeÅñjÖ{Z5‰Ð`ûE0à®)äfntfÍé«WVïuÃÇë$½h §"å®h45ÒÆK!Á.°×ƒÛ¯ Ð·Zõ·èžÍì· ]ÑÌRKJÛ!wž¶Æz"Å0}¡LàmòÐ~¹<¹Ô´`Añ,ÁèÊ85ÐÐÖ÷›n*ÃÛºòõãpe˜>Cј#£ì;y†²²ë5cüP o)oc ò¤œé`®þ˜ØÞËÆ~I·OfÉ#9P!Âe6D»@ê LŒÝÞ÷lTdKÜÞPYqãÖek•ÛÄæ‹³L‚s:ƒ£Uþ"Th ߆ÿ@}8,9‰ Ü}½˜h~j0£NÝòíÙ÷IDdÁ–›)ØÑÆÆ ñ ³¬ÿ“ #ÃÞ£0Z½ƒ`T_M‚©È¥y0ñžÂgÛeÓk_Ýj¹åàðXmŒ¸EË‘%Rz"ˆ¥÷œh¬±Mµ¹*Ãvó$Ó.¼ÐÀv=åãZ{Ý{y+P©>N<ŒbfÛÃy¿9œÉí[P·ñ‡\²^ú0·aÖ¤b¡câtN­_™w;d`]ôÿ+qñkä|¼˜_T:E‘<Öù/><ø…%/pyPÆ…eNÅS"æ ì Ã+êPµ2›a€ PøH6uY1È ¬#Vó¢!.f¼_ÀÞ<ï6,CÛØN”}˜—Xb9nns›®ÕJGȘ' FÆhi—$³cdûê*ÑÃŽÂ Uaf«E²ª(P¶E¼*”mg£µ’Ò ”(7J2¡ÁP,jàm<ñ7’4æ`ÏÈÖY$5æX‚\É<ËÖŒÃE¬¦>ÚÏc©N1S $²¥:Kë£?P˜ÈB$GZ\Š HI Œd„t"~" ¸4Ò ‰$HÀcm~‘šÇǃÔö[\ÐçN†T> 
]ó¢÷\í©µì!µcßd°ã‘Pݨt¢1¢3ÒZYùéRÜc¹Ë@˜LXðdÓ'x Á¤M+Æ„#èù‘Pó|‘þ%2Æå¹ûÓüõàÒ‡o³k¡í˜3ô7¼Ï’p6”ü*‡ä8}ÆE¹äJ”×mÂÆ¥ÙÏ-‹>RZMy«óÈ`¾ „`…Èp¿2¥(&`kÛ†Y×8ù)T< ˜Õ_[§q­Rš÷"t´h°ýyaéh4ÊçX@Nê;}ä9}G¿zJû÷ëSwÁn#×BFaÓªTmuP,¨÷äÞ÷õ¡ÀR L}ú,iP9vâ‡1 qìÅ éÌþªŽ˜{!DJv0Á˜Åüt`çƒèÛ~ó·cص÷Ð=ǃ¨u#³ºˆŒ&dr7ØWRøfeS÷’aõ;¹ë… ˜d7µÔm!N¿ Ÿ*cUØJÍÙ#hÛÇ\Ü=é*Œƒ‡A¶(aÜW§ˆ€* ‰çܘ«' Ž[ïpÁU†´­°’B$$‹†{2³Á·š’…:Ö´Æ\ŠØr¶5° ê¶8N ŽÑ…ü g~6’èg°YP»‡FLY§4 ­eĆ ¢ô&ñ:$©#ë±ì>ø4 ÖN´7ÐJs`LZc‰#QK49-­¨ÀÔùšNTË:èt,à×!þ€1m1 ;FrxÔ¯pÒ „BOédÑÍJ<§<ˆðÓù½ê¹);µæI º–Ãf®ºEÎxF²•!ºÕm>pE Ç|ô×oò §Pù4Åõ ½ é=¦Åâ½-Ê’CÚrShll,Õ®îÙ!Ìi,ØÌ<»F†žÁ_àÀ„i¸ÇŽ¡ |š(1›&Æ›´q½NXOõãsRþœÈ(Uv§ôX˜qjªBäš±ülÏ2¦O ¶qjŽeqFʱRЃaJÃVÚUì(+Aœñ翱%µ¤«<±o#ìÕ™¸Lô=„À)ëÿ Âv…e|ÚGM-0ƒwj:ÔõîOɈÈéÒÔÆè&»áX ŒPÔž ª›~X7léä DC(R%OC[ç¹¢ûg–o,î £‡ºÏÀK .×ßg©&„>«1KÔcU£%Šcë@ɾ. ÍNP¡‰ÀiÂlhj„‘¹è/f×£‘ž¥z†š9hÛG§]³a¡¸ÐCé( âÕÚ›kÁæpyöqÑß ]P ‡¯WY˜‰%i}3þÃòŽëÝê{ßQ·û¢W¥:d¼ˆX…‡Jì 4!yî×Ìy6Æ–Æ»ã8</Ï_˜=Y©†LïêÈCygÓÜ—wBA¤EÏ)Ö„aý=}}®âf½|7!zÞBB‰#20d À§%‘ò^‚ fxÓ)UQè)ÑFˆ D€@„HIH,ÌÈÌÌŸc×=Õ‡`åÔ×/yGG;=Gs÷Ǹ‡F³ÈEÀQÕþnbUÔß:Û-FÄa;ÐW^ö-~øâñY˜Í“=8^<Ýê~ûšeÔáÌ7öO,C3§ðKý¹¶:ËÄäré¡c<4 ôãÏS ÐÎ L¾IAìÑ*Còb™…=é”ÉãV³Ê?õ6z¼†ó Á²rÔd«B<Ðç6üüùô¿jÄ3Ö⹟ÖícÆxÌk|þ¥¸Ød; O gË*ôš2v‚ë1ÏÝÇ Ã*öã¶c†ÝVÅTDx`\:ÀòBú0zh%1àÇQÅ”pGTö/:ýp£^ö‰1Þ‡çr fAµ³^œ ›‘qžs˜{•58tî¤u¶¹ùO§LÏ _]¡m£Ys½T`6¶'uˆ6¾!Srì|v¯ÔÉê;t17驸 [þác ni„ˆIÙˆµ\‹ƒxŽ^3z6ªav~„=@4 ¡ú'0ô/ˆÜj|DkÈ™jÕ\áÞ/¥—ÍÀqf zæAÀ;‘x`ðÂAFr7æñéŠÜg—?²\è`u™Ý…ùÊ4ÆPë­49Þ|H½ä€“=t˜Rb)üô_d÷+¥Ìw”FØx~e¦HX&·®ÐÊÏ?B¥2ŽÙbUÜ3JYÄ4ã—‰•‡øú=þ£½¹/‹3*j~tÛíß—¿äÑ]lúU¤Õtàxׯ•Úv{®å?o?Ÿ9%¥ÉxÒ†~Сôwk*™Ä鯎WípöÀús :ÒåÄ‚µ ÐáD’ZkáÑr M%píöÔØ3ÈR8ô’¢ˆtË‘EÌÆš^ó÷õzd(y±he2rš®°ÕÒ«Ú–ôb±Ò:=4Bs“:;‹YQÔw‘ZÕæïÔñ}S®§•ç/Ÿzõ£>Ük/O$P´ÅQZµh·-öà/ÆM‰¸’p ¾÷5æÅªÓ§ŽP«z;éµqQ)ÍWD (_áÓ€™vàãQİÎòĸ8ò”دkü—·ÑpûøE„\k¤TI;ýšƒÇ<å<ó’´1³´ñ ¾¼mL.o-Ю¶éÜÁf>Ì(‘ΚtMu$Ñ =WQÖÆ¦žï·M°ÔÄuÀ×g¤‘¸èJa7'çˆ~´óý(ÎD:]¨‡ÆùÎO©Î:ic‡`œ&_¤²Â›ñ¸¸S)öEÊAÿ,Ÿþ@}çŸXÍœåD:à ‚`èÕ¤ž ¿ÈKÛ_õÓ)\ô¥ËŠðv-Pô<Óž;X$U¤j°*OÑ 9KÐÀƒÌk-…C¶eœñ€yˆ¸¬PrÓp¥È¤~¸Ç2ö_ìqrtŠ—Cr1„p Çtáï!€­M¶ËrMåŸ|m¦ÊP»ï"å.Ø›~ê?2 þºÈ3ž†A™ØÌÇòØÓ¸jeÓ†­Ë\Œ»blJÞQs”6â?5kYéÚkhýE8D…¢`ºDG‚òÄ—J”2¤¢¢œ*c v…„€I(@„ÀÙ’ˆPuê"쫦öÀÆ;X¾š˜éÈ}²ç˜7É@thG…ã¦8ƒ™­¤[öÐË—/iþ‡[s€°M¯ˆW¸õ ®a  
¼õR—y!#ÒÀI÷¥EU‚ƒæ"žµRžÑ%±(;j(±œ)UFFT“59ÐÁ’§BEr:¨³½C ×Mu÷Ä·Òö¤Rÿ’\b*‡!¢£(¢ƒŒ¤Ž2£—$²E(™&!Š.2“D3J-Bª¯)Jû¾‚1K–"¶N!öEW2YÈ´½"ž¨ØUÁ„YŸU¾žÕX¬Ø¡=I`–ß´³!uj•š¢oʆQ®²³v—Jh£ÓZX…k¦¨ qЬ•˜J<Vf‚Kl¨dU`9‰l”ÇÉC\z¸äna(i!RÏ¥AÓæ¡K$“·ð¡LTc &Q4¨­5Ì)ŠŒ@º•TwD²BjRúe4º¢ªŒ"žåId‰VñaC…J»ô¢Ö‰ÁýÐýT[EW‚C¸…F )>UÍI0Š`‰„…/šUp øéB®ºU5†Å¹P_‰ ñ“tKºQÂÄC‚ñJá „ìPX ª†ä“š¥G˜”¹¤õ oª;*Vð¥ÒUL*£D”\²º°EPÔŠe¶Õq¬òù©TÖ‘Ý¥ñ ‡lUp¡>v”ª½u$úÒ¥í ªÝTv"±UErGDKHÑmJ½ ’S`£Ò™\Š¹è·¢©i"Úˆ\²­/‚öIËTVà hªïÈíHHßBj)n‘Uí ûáìF´Uø"]ÀJ.±xå=©KªT¦‘c´A QI­J±%ý¥Sb nC4”ž +—‘@Wì…$6E|Š!×WŽ‘ðŽ ºAWB+±‘Uõ¢¢7]![(©Š–QÊU]Jh•‚W14ü’I€–2¤²I•U.q{(¥¼C–*¨íÄ«°«²CÌ…G\‡å*DÞ–!+®©Q  Jà”ÖGr”]ò uZ)U8“äU誣l’o)<´RÝWQAÞ¥TÕî·(>ÒWÇTWò¤Ny=Š'º%ÚŸ¹¾Õ'X_­#ÍTyˆ§–$¿uQæ‹ÂECÏw!Ô„ÉDòQá0Q¹ªªFhŸ™J—ŽC'Ç!ãI²ô¦ê¨®r xhN Õ^%$ßJ/-!_˜ÍEW0ˆvRܤڕø¥ÑDÉ)?ò1A ¾I*JýòN’—ÆE:@®e]DT©t©8„uɤ9¨¥¾%’x"”lŠU‚Tû`!ÙWxRìƒ4ªyiEºµ>d܉Á¬‹qIàB7J>2ŸÎQ÷(6ÉÈ‚–Øs(£ˆ¸¡LJ´%wA.–  KàŠª6Ä»d£¬ÂWí¢^r–€„9ªŽ1Tu¢–ŒÒ¬…b*¼ÊÞ•l ¼Õ? WÜ!°¥ä‘-”“db”[H"ó¤UøÔ*Z:/‚ª©õÄ£™I½@W$÷‘ª“YX#E¬Ò©h ±!*4BEº‘„ŠÀQŠÑ(h aM¤Sa!ˆVŒ„‰€+T©j)iFˆê­e5Tp”©¼õÈj‹}UŠƒ×’…+± T­ª ˆªÀòP™¸”˜ˆÒ¤ÜJ©š[’MàVæ•3R+Ä|C <(ðFÈk('zƒ¢üõxH/IÑ'¦¨Ô‡¡AÓ) ·Ð` ʪ0Še) d Â’§EtÈx⫈Šó‘[%=:œÐ!Ê.IOЧZƒÿH¥ò¨T½"ºâŠÙH®Å&ó ”å@‡9TI|*%÷J¼(„ÂJzQÞ>ŸRSTl¤Ö*µG•<‚ÆV„!Uz”*X Å –å(Ò%G Ã$»êI^BDºŠ´’J÷À+’¨«2§:mZàQbƒŸô“I˜–ª,HKò%CøQEô‰kDØ¡÷E9ˆ§¡*—yJ‹™(þ Sõ‘%e@*ùHŒHVäUq‚-¡>¨œª¯Ø”«(ЩêM%Nd¡OL¡9TLÔyêNeRœäC¤ˆo'°Afx¢­e‘TŒ"V@'Ê#ëˆS®#ß"%ðÒL‚1ˆBç”Wl ‚–ŽÔ¥š•rT,’„ª2A7ˆSø m±wÑ’“梮ꕬ°„1GTBš hSͦQÖ%”“Þ%ŸRF‰ßJ¦ä ²Ië!…*÷Ô QTœä%k M ƒÊ%¬BŸ¸ÖP­õRÕ X(ú¡/¤ÜþJEL—¾)á,‚j)s,…Rz L¹…;BšR¦E,*E]û`4%#JRù"ž8*_Ž‰Ë©Š)T2”–I2*•2ª& `Ž hTÄ‚=A’ª¿±Q´j“ Uz*MQQs…KäHпÂJ#ë>ÉZ”>¨Uó*Eñ‰sJ+Òªìí%rTœªK¼Þ)úÊ”ê wRn!±E7ªEÐ…ëJYÂ^TF(Y¥ í$§$È¥ø¤(Ð:Ÿª—Φõ8SÝ„‡¼@tSÓèPóE=eNQÁ/¦QF‘ l•‰ÚMêi"TWªY!ÿÅ&Ñ)m%‘KYE%¢¤ýù’“=@Ï/ë•7Ú*%ç%²Mä<ér¶*$¨ìˆµ”H!S&hY"¢Ü «àK ¼¤o-’++$ºD8…HèW房B¥¬¥:T˜¡+ö¶PaRá(ꪑ좮&Åá o¡d ¦JWª‘sôR¢°1ª%6ÎÔ«šhRÊ’©s)´ˆ»’~¯Œ´ß‰6ÛB·UHÛ-h[³ÛT¤O¿bÀ+"‚RÌÎ$$šªTݤ•Ĺi 4ÅCLÁb¤Å&mÙ䊖 ñVY‘eãD\\hC ªÐuT5ÓÐᤠá´¨µTœÀ\ZÊL­ £,AG–Š¿t¢¹¨QÉ@˜@Ö ª¥Þ*—‘Rw2‰{Eq(~’‡ ƒÜ¥EÈ•K¸Báp‚Ž`«Ü ^좶)pHVÅ.â‰ÅTWeq#’)2¨ÚŠCN²²&uÂSQ.H£²‰wH'qC¢BÙ$·$+”*'ÚR[±BȈe')JšÕ ”Ô¡ò©I]'©OA¿(©=DO¼ˆà §ÅJ¢·Ô® NT»²RœÊ 
‡ñMD¹R¯–_|_DRô¯""Ž¢+ —º ò¨Rj”‚[!]d§t‚Ò„âÔš—j(øj‚y"›ÀÈ(ÉT'ýJ—ÏD³ â*y`ƒœ‰Å ó‰kÄb 3R§R%pª© —’òÀM$STŽÒCë=”Þ•{"$}\bRöÕ :„Œ+!I‘3; ¤@êB²•%Â*´;(«)E~jaf¦ÂYP-JNL¥(¯‰ ^ü%Ø”º`£j¥*Þ*†²¦}‘S ÉS%J¹a-â)65R¬Ä”à%ñ‰s$4‹t è¢àªÉ|„Ö &P“) Z•SÏ)OHS(“(5Š[Å2-CÆ j†m%ä*§Bo(SBµT´­•'Žb&B+z£0’‹ )&pQ”ª´4DÂRÈ) ´’£Xª,‚ j¬à£¡*ÈK&ja%L*¦"ž„ ¥êˆSaòÀDB»`OuHœµ\¡WiKŒ¢™Ê+ÚOuBZARe «D©w(EãT‹¡'–j…X¡v”œ(«”Rç.ÕH~&„ôÄ¥¤*‘GŽ –â\¢ðš¦)žx¥¤„æŠ:$z*é ‰Ú”«®ò¤GøäRäÁ, eA¬•S…ëÅQ^‰òHШz¡Pô nPç p–H/])eÝlR»ò(sÒŽÙ!^Šf¶¨7QU\åÈ’Q ­I¢'bU<ŠT]ôªY"¡ç*Sùʘ ¹QÀª. “á%ÇB.ÕP(‹‚¡WŽ‘q޹ÒQ^øKyp"±9Ñ/HD=BZEì…CdQ¢Šª÷<± zeT6O: „¢å9¾æ)2J£µy¢©ÂD6¢‡ ’žòT¹^Z*Ä ËD‡9’¦*ŠÉûª "§Z‡òÁÑT^ePu%ª]éU~Hv".©CªœŠð*SÂ¥ˆ¥a*+48â•VJ ½È[”LIðÊQ;âUKÍDòU "¢Y*¥© P-(ï)0’uAF‰×r¤ °$Ô¤Ð\„Söª„·Tžâ†ÑKЬ%Xˆ£®”¨ê&TDçP|R§qIæˆS¡C²%.úRV(RhTæEb!Ô…`¥;j‡–ŠuÉQkRnJ4 \) *Ù j*²J¥ÅRo(§‰5ˆSiU¼¤Èˆ¸7EiI•%…KIi+iÊŠª†@e(˜•B´)`Bñ…RÂRà…I½U±E7µDX“IQ”f ¦qD¹@ZIQ±)‚DÁPÈ…©R™Q'ªEIµIꊡÆ[ –iKeH°”f•,’eeI¨'I jTÖ ‘F‰¥R<â¤vÉØ9*5å)6(kDñDHð¨X”Vh‹¢n”¢³Eº¨Ká*§„‰©4U CÚ„»ÄSr‡Â ÍRœÂMj"â‚Â]%\”†6Å( „-ê*àª+J°HÒ¨ç"s”¦*ŠÐDµ•€šÅ”'’„_rO¢Â%Ópª–ôœ…Žð)UqÁFÒ!áT‹îŠŒÔ¤âŠ‘ÖB£rq¡'ž*†”¤wʨx!ðTß-dª½ª(÷G­Jz½•@ªöÊ] ‡YIÜX3‘æNªOMAô*ŠÙ(¸¡ñÕõ÷Ê_0—}AÀK­ *ÙE¹DÄ ‚‘à1è%ëB¼´&Ò'å¡&ð,ij¡d¤O¶¤þ¤/b*¾ÄJìJ^dðP‹r¡Wv¾J“B+â”§z(ñ…BÔ¼DÑG\\B^Ò0D>’†€)ø5Š¡ÀUÆ›Q9"QwªŠãÂLºð˜UUŒmJ¾H¥¤RÔKôD)’k%°–”+BS*N²¦BS×"a2ŒÍ’ªr*d³•KB“Ãè«4%ÿ²¸8Á(Jå)$¥ˆ”œ‰6J[¢·¨Y*Eœ ¢‚šÊŒÒhJU ‰iH.’“&um(­ŒAF´ à!¢¤YAF"£b ù"†õ&H®T«Œ„·‰d©”UËo‚ŒåEjJUv”7èE~êO€©}TNQ Ú€: hHi%Ù’E‚EÑB´(œÀ^ N ’­!PéICj…„LÑo š># JºÐ°•K•AL¨J·‰f¨ÚŠ·¢/µ.hDœ M`Nt¥ü¨\Ì%yT‰÷(˜EEõÕHûàL$+‚s¨)ëHWïH-%_¢—¶QL@š©SÌ•KÅ@³R'®*%âBéR¡üP”2Dî*d•ô ùËý€>Ô©ûàÉø öB¯• C)V‚XY(SÛ¢¯z*G5(ñ¨)Ý"?« ' %ùbáH! 
&Rb¨¬ £Ý%Ĥê‘'Šç¢^’‚¯ü¥SíE´@ô‰xêOZ¤Y ÂR󨣾KÚ©8$б y ”ÿó’e5™öe‰0·ÿxÆ$€€È€ ê=ÂH€ÕM4 MU$~©“OSM42hÄzƒ14Âi‰€š`bt÷¥·ÞF!ðkÂF° QD…C7cREÞ·(‹("ÊÊ’•xr‰þq"Ç‘i,D‹dH³‰a"ãÆ$]¢EËq"øÛ/lD‹­ª$^q"òë–½q"ÇtH²‰n±"Ò$\"E²$ZD‹MbE¾‰¨‘g,j‰"E¶$[7D‹"Ò$__Ù?ÅÜ‘N$<ªÂ°€WGCNA/data/BloodLists.rda0000644000176200001440000001757513103416622014605 0ustar liggesusersBZh91AY&SYâx¤a÷€ÿþËáBcÿàÿÿð¿ïÿ `"?}¾{ã¶k‚¶Êß|›ï»àÕ)éM5ª-aÖ…Ûm‘Mîâ®ÃÖ¾Þâ÷Ü[Ñ®^÷ŒËqÞÝÞ´ç9=¸­ofêyí凳ÙyíË“¦Ow7¯^ø ‚44 a#j€SE2Щ€BÁ zSÒž©ú“50ƒ&€Lˆd'¤Ê£ò§©4h4dMCM 4ÕH„§“(lSÔò4ž†£ÔÓLšy!êh 4Fƒj hOI¡M¦MLÒjzFÒ4d5H4KœV°ÏrH½€€I{˜÷ÿ {ôlU»«"w[9 1…<UH`"ðùýíçø?Ïí?ØùÍOá/ˆ¡Ÿ&¡Eu'Ëñ¸5í1JŸqÑNWOÉL½õkÀßg•\V¦¤—’E‘L>ýñŸµ?Ù ä(ù~¥ŽsfµøgN&³àB%èBEÕŽ|˜à¾%|´"ø]ÚV»„ž© $ë{Z€Ãïv¶ÛþŸèèç‚ Q±)sãÑsjå6SêY±?D †eLÓu2ž/Ó­GuÜÂö1èirÁ©™IšðiêÍ=ÞcÉ¿fãÆ“;?éNãÎþÝÆyFiAúfVáfâ1@kÚ”Z‘~Ûfº´™'í–ëHjO¹Ñ»³«"Á<|ÐÔ[ÐòhK¬`Ì2ñþ|µÕ#ØçË#½Õý« Ò™êÃ4v”ØéoƒXöéƒ Ÿ¢·RW¯nÎ?ÉØbPL8/–L˜¢&ëݬÁÜ·/½UþÇ܉ùzZ¢ìVb;‘„åaóVSrzÕÂa”)ë)¯|¬£B¶'$Ý8Ɉœ›Ç^Ðòg(ò½øÞ½µÞ°}Ä‘vÜÞðGÀ‰ÁMKtue -:µ¯ìF“ 5ŽUe ݉èýàÆs½µš 9ó§gt«0÷Ä¢O'=mNqrC¬‚,A<ÎÖÎ<% aΈ÷Æc¡ýÑ?Téžv½â÷Ô˜EVY’©cŠ“9>ž…´oætúgÛÏÏ1ñFÅäüjç šÖ¶<ã·VÉe­>WÞÊø]³‡K³õÃŒ²úñœ¢ŽDî)ÈG}öŽ3¬Ð–Žâ,ìæMg’p?HÔT THhrKÈœ‰ÎŠïyœü^Ð]M±É+†/¢º¬$hìˆv]M¦Ý<¥ÛµÎ›WÁ'˜µõøÛsxG{`¹£[ÛirO%çýÏ®Êt5!q¡Õº±-‰rt8C¹™`S­ïVÌ äÜÙ.D›•|){RØŸXÞ{8郻qà¢ä2/Dò„ùO$nÓIÔ?Ôöï=™-à%Õ“¼¼½y#<7 YzÜãÕÔŤnüðÝú<æÆB×?$1ÄÜÜèÊ¢Û.g§cFÈØý—¯:Åctí«2é‹%•ù;‚rú¨z–¢(Õçwœã»ixëÌÎkL©ËdJfQx"£b\ô0ñl®PäZB ðý›`æäÊ|3+;©–Ên_=­5שlÌ_âYc"Ò¾0Mš>k9=SÒ|¶-¯S^ì°Á“+ädÐPýº%™#„‘XAsRMnÔ]3z9RÄÜ ñFãÏG‰`—\ÙoV5¼›—S ¨ÈšWTÐÒÅužSà¡O;œm¤R®÷ÊÆGµÅëX@AA]Ì|:Ä>€Ú¿Ù¯bÛDJp\éЦå;WCt~+ÌØâ™XsfˤÍ|: €G}Þ!V¤¡.h‹7nù¿LŠT#mÊAŽU‹t›Q“µ™¡6VUÁóÉI©ó„;\1!¼r¦óþb=© &’&Î$¬eý· ;ùaU;¨(çJª«EV§{¯  ˆ$Zþž~™s}»iß2t¤y]Îv…g…3«…ËŸ´)]’HBúÒÒ½¼ÎMǽ©d®®s"!×£®r r¥'6*B ‹ôLÈ´Ï7£Òû=R3]%74„"Ħ>æiD ¦õu’6µÃ›,Òö™7çs¡Gy«Ay8Ë)Qªü–JxFm»¢NVQGpñçìñ–ì2"ûAƒöaïÜãøœiøòEæÍôcÊïù0 qV¢è=Isçš^X'‚Y_ž?-ÄÓûæ.ªÉ;ˆÑ0Bû5ëC/†éÑi‡‡2,8êZ„é× Âùiµ¤éhœ—fÆ÷*¯!k‹NÔ`¬:ÅÚÆåLnÔsZ½3CšÄà “]ŒihLMôYJ·µšlªÉy†$â¢AwcЬ6:‡‘þeêßÃWƒÒN5ýÿ.„ÞW—8á·«g¾YwÈûÚ*¨€Š#êËéÓ1Œàø¡íÚ_]Uðo„ž®<È£tÛe¸6X6 €Ä–²@ò¼^Ø.²÷*ˆ(¾ü£‚n7ŽL´;VZ# Øþ¬?_tõ_ÄŠ’ ×§´ßÊ/ˆ*ÏI›Ói»-¹N¾úpô^£~þ5g–±Ú+wm›Ê]…$ôÿU–íã4\ÈžÚí 
Èoô‡ÐJUsﳚxºÓ¦:ë¢úwe'°××ÑÚ[d¸´ËLl0À¨7 Ö8§Sáü“sNÔ2!ÈîG¬ï›ieëó\‹U½˜ÐþM©w/}V›¹ì­ÒZfsGYúÐ…nT‹ìí"7-‹9Ø’d:ÖØ+œgxÛÎQÍ’;LY2þzÊØdËFÉɼêÏ5tÙãooËÚSŽÑôß¶Yw|*Ú¶ÞÙáàœUXµëPH9%ØÂ·Er½Fþ{hÁÐ 0õ–¨ ]½A{¢°q+JÓGx‰ù_0Ëé}çš¿w#$Wó½°½räÀ,¬ÌœsfÚº®Ñ8ëÇZ6&ŵ<‡IYÄŽæçpFBÞ5îíÛ]»Ý¼Ù­,`°NfŠ¡®ôÀ‹üy­(ODb³Ý|Íó#‚ˆpLà÷.Èd0´…QÑpÀa¸«ªõó³+Ü7|²•ñø),cÛ”ÎTòi¤Gë@–•Ë,†ÌÄz zkUøf¸`uC(ék=âô]¼Bû3Í S+G !j‚¹§HÞgSñÇî½ùžÛiS'…é¼\9~Òy_Ñà¿ø"¨†Ub0œÌ ¢éRb¹ðÏ"¯€}h!Îs¥.ݵkn¿CKN©Dø©Â7$䎭ÚÍ!^ãI‡ B‰åJUòB¬okëå–±†åˆžyê›èÛqég3Ì$oÉ| {nc4_fÕŠ1ålGäìû±7fïW“œ;v²w„¢„ÆAà 2âëgìUz]@Ï2Q•t°Rúc3zàª%±³¨ªYfZN&“I¾K£1¶é£sÉb½j}/d”ºuâþXlm¬µŒ•r&eí†ézFñîË‘p¡6çZÕbð¯q–Q1r¸°œ×ÉÄÀÀŸ<CÖ~^øØR`9ÍxR©LåM¼3X²ªÚëz_ ·J7EIw鿞3¾´^ܽšµ²Ç"áel¦³ ðêðÍ®±Ò§R)„2¶®YVðWDa~›$û· JPävªêQ¢Õ3È[Vþú3ÎqÁêoÞØmŒÜ.¶ -¾7zÓ‹ÄÞP” БŒÊ:Ф ²œô`]8ì¥K»µƒ.ÙRlãÚÎÍH&ñ6zá¾5ë^2n)c¾ƒ†N¦pɕס3ÜÖ0uckVÄùÀ8ÊÀ¯:óÐ딌I7|K¬»™Æ[£¦å:§`ÖῘ›ÒÿO©¦™°ˆëB¤8À¿9Â;« Ô‹ƒé¸³‰F È¯¯§¸ +ª—òIŒ­³X«¾–¤(½£ÒYR™g} ‹E¦nÄokaµŒÍÞ[ÃTw.íÓ‹¶Ø Ñ©Ãæ¡Í™UÒf¯¾‡ìßM çÖ0æóF&ê'u9ª\q']b íxdÑ–{EÇ,PY?Ê’HJWóÞ4xž0ÛÆ|œžÊÿ 9ޛ丄c (0K›B¨Z1dçMŒä¦<}&’læTÅ%ˆ„Ðd ãá7õöÈxxÜ_,ö(s–5ÃNh±g —z}Lgög í„w7ô¿0`„âEúR °´Ü ¿JBŒÞn8Œ…Ε.¦Õe'Mµt-èë”4X{w4ïÓLã8‚´ƒÐy»qMê»gç5:?/;u<ê@hT¡uhà .@fíºoÁ€ÉfÌ&러iÃØ 'kÒÚEdÑàQn©—Š8aª¼Çëßoµ!mÒ–™™Où¬8ôö=Y,ŽÖ>gE ‚± 1¥N±£x)¬SÆ<±­ç–U”Àm>!o6}+GÄëü+!pb(Ù3-ZšX¿Æ ®~(¾:qŒëë h’•QÕDž¸Mj]»Ç±¹Î|—0–2uäù÷À!~/uÊÎ’0ðãï~Õs Ê9¨¶Á¦þ^@ž5 ÎÚ×!3ÂÆ/ÌH‚¿g¡²^°I .AoCË—QN™c¤r¯g–”¬_ôecûɤõ«$–÷ï—½þqºípŒ<Ëe²©(çÉ^nöê úæß˜á®;Ü¡!ìŸ$.mÎPMxè#v–#¬ºãÖI ^=„;mc­w©b †_»úÆŽÚ8™Z'Ìßx ú»‘4ûTôÙü¦= °íºð6ࣼÁ+…$/23,à(ð€1 Î* žB#ß· )e‰Xøó…øÜ†üÅüª¸œ ’`ÃiÄ-÷Jñ.ha-HÐŒ¥Zp]»”½ÑýIëA ðßtd@„Ô¬|Ùq>‡ªLG6rDo:úÏQwŸÒŠzfâÒâœk,Eè!ÈHDOËõ RäÑßžwZÿNˆüz¿3˘èYó1…A\ ›K¶ÓÐdNsÄùÌ2 ²Ú„m=Ü vr!™’ÿ"¨sèÀò`E³”Ä%†Äd4NmNY“Õ3³‰|„;­Ô  nÜ¥º¸2ãA䃘µf‰|˜ý X£Õ“äàÓâj¯¸Ó çqÉê"2Æzf¨ÓV0éP¶ÁÊ1@“>KyrkQ¸E‘˜¦µÅó¿(1wžuŽÙcˆâ,˜x˜ÇQÏ¢úVÙÆ=4‚C&}md‹oѤL…¨H‰«¦#‡/“ÈJÛ¾äd§4×cœp(·LåMâ’¬)fT ˆ`¡ù™¬ŠS« z¶E¨Â•u‰Ë‘«µ¥„'•z äOgÂç\Ø!ùJq3 ¸¯LÏòû°8nþCeåöN‹ (6YQþ™JnfH`¼¨dˆHAœùÅã¡âÆõ‰Pò@ƒ×ð¢™;¹é2Ùf˜žÝÌ4È ã¥¨éßVÁÑÐ’ 71ALÈM‡'Œàøì€_p0ŰiÁÜ81;·‡wÃì1ò?n8¼0Ð{(š›ë+Ó…Þ\«Òdåq¯ðØÔ*¿~¡JSã+ßÑ~žù0Xƒn™Æ-.eÁ‘‹…©ÕkºÉc%ÖÎvS³‹ÍÒ°`žTÉÄî L\Êv‰Õß,á7 ”®¸DÚ,Ñ›eLˆsyOÏ+ä 
WGCNA/src/parallelQuantile_stdC.h:

// "Parallel" quantile: for a list of numeric vectors or arrays, calculate a given quantile of each vector
// containing element 'i' of each component of the input list.
// NA's are treated as last.
//

#ifndef __parallelQuantile_stdC_h__
#define __parallelQuantile_stdC_h__

SEXP parallelQuantile(SEXP data_s, SEXP prob_s);
SEXP parallelMean(SEXP data_s, SEXP weight_s);
SEXP parallelMin(SEXP data_s);
SEXP minWhich_call(SEXP matrix_s, SEXP rowWise_s);
SEXP quantileC_call(SEXP data_s, SEXP q_s);
SEXP rowQuantileC_call(SEXP data_s, SEXP q_s);

#endif

WGCNA/src/corFunctions-utils.c:

/*
 * Common functions for fast calculations of correlations
 */

/*
 * General notes about handling missing data, zero MAD etc:
 * The idea is that bicor should switch to cor whenever it is feasible, it helps, and it is requested:
 * (1) if median is NA, the mean would be NA as well, so there's no point in switching to Pearson
 * (2) In the results, columns and rows corresponding to input with NA means/medians are NA'd out.
 * (3) The convention is that if zeroMAD is set to non-zero, it is the index of the column in which MAD is
 *     zero (plus one for C indexing)
 */

// NOTE: the bracketed header names were stripped in extraction; the set below is a best-guess
// reconstruction covering malloc/free, memcpy, fabs/sqrtl and the R API (Rprintf, ISNAN, NA_REAL).
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <math.h>
#include <R.h>
#include <Rinternals.h>

#include "pivot.h"
#include "conditionalThreading.h"
#include "corFunctions-typeDefs.h"
#include "corFunctions-utils.h"

#define RefUX 0.5

/*===================================================================================
 *
 * median
 *
 * ==================================================================================*/

// Here I first put all NAs to the end, then call the pivot function to find the median of the remaining
// (finite) entries.

double median(double * x, size_t n, int copy, int * err)
{
  double * xx, res;
  if (copy)
  {
    if ( (xx=(double *) malloc(n * sizeof(double)))==NULL )
    {
      Rprintf("Memory allocation error in median(). Could not allocate %d kB.\n",
              (int) (n * sizeof(double) / 1024 + 1));
      *err = 1;
      return NA_REAL;
    }
    memcpy((void *)xx, (void *)x, n * sizeof(double));
  } else
    xx = x;

  *err = 0;

  // Put all NA's at the end.
  size_t bound = n;
  for (size_t i=n; i>0; )
  {
    i--;
    if (ISNAN(xx[i]))
    {
      bound--;
      xx[i] = xx[bound];
      xx[bound] = NA_REAL;
    }
  }

  // Rprintf("Median: n: %d, bound: %d\n", n, bound);
  // Any non-NA's left?
  if (bound==0)
    res = NA_REAL;
  else
    // yes, return the appropriate pivot.
    res = pivot(xx, bound, ( 1.0 * (bound-1))/2);

  if (copy) free(xx);
  return res;
}

/*===================================================================================
 *
 * quantile
 *
 * ==================================================================================*/

// Here I first put all NAs to the end, then call the pivot function to find the appropriate
// quantile of the remaining (finite) entries.
// q is the quantile: 1/2 will give exactly the median above.

double quantile(double * x, size_t n, double q, int copy, int * err)
{
  double * xx;
  double res;

  if (copy)
  {
    if ( (xx=(double *) malloc(n * sizeof(double)))==NULL )
    {
      Rprintf("Memory allocation error in quantile(). Could not allocate %d kB.\n",
              (int) (n * sizeof(double) / 1024 + 1));
      *err = 1;
      return NA_REAL;
    }
    memcpy((void *)xx, (void *)x, n * sizeof(double));
  } else
    xx = x;

  *err = 0;

  // Put all NA's at the end.
  size_t bound = n;
  for (size_t i=n; i>0; )
  {
    i--;
    if (ISNAN(xx[i]))
    {
      bound--;
      xx[i] = xx[bound];
      xx[bound] = NA_REAL;
    }
  }

  // Rprintf("Quantile: q: %f, n: %d, bound: %d\n", q, n, bound);
  // Any non-NA's left?
  if (bound==0)
    res = NA_REAL;
  else
    // yes, return the appropriate pivot.
    res = pivot(xx, bound, ( 1.0 * (bound-1))*q);

  if (copy) free(xx);
  return res;
}

double quantile_noCopy(double * x, size_t n, double q)
{
  double res;

  // Put all NA's at the end.
  size_t bound = n;
  for (size_t i=n; i>0; )
  {
    i--;
    if (ISNAN(x[i]))
    {
      bound--;
      x[i] = x[bound];
      x[bound] = NA_REAL;
    }
  }

  // Rprintf("Quantile: q: %f, n: %d, bound: %d\n", q, n, bound);
  // Any non-NA's left?
  if (bound==0)
    res = NA_REAL;
  else
    // yes, return the appropriate pivot.
    res = pivot(x, bound, ( 1.0 * (bound-1))*q);
  return res;
}

/*==========================================================================================
 *
 * testMedian
 *
 * =========================================================================================*/

void testMedian(double *x, int * n, double * res)
{
  int err;
  *res = median(x, (size_t) *n, 0, &err);
}

/*==========================================================================================
 *
 * testQuantile
 *
 * =========================================================================================*/

void testQuantile(double *x, int *n, double *q, double *res)
{
  int err;
  *res = quantile(x, (size_t) *n, *q, 0, &err);
}

/*==========================================================================================
 *
 * prepareColBicor
 *
 * =========================================================================================*/

// prepareColBicor: calculate median and mad of x and put
// (1-u^2)^2 * (x - median(x))/(9.0 * qnorm75 * mad(x))/ appropriate normalization
// into res.
// res must have enough space allocated to hold the result;
// aux and aux2 each must also have enough space to hold a copy of x.
// maxPOutliers is the maximum allowed proportion of outliers on either side of the median.
// fallback: 1: none, 2: individual, 3: all, 4: force Pearson calculation. 4 is necessary for remedial
// calculations.
// In this case: Pearson pre-calculation entails normalizing columns by mean and variance.

void prepareColBicor(double * col, size_t nr, double maxPOutliers, int fallback, int cosine,
                     double * res, size_t * nNAentries, int * NAmed,
                     volatile int * zeroMAD,
                     double * aux, double * aux2)
{
  // const double asymptCorr = 1.4826, qnorm75 = 0.6744898;
  // Note to self: asymptCorr * qnorm75 is very close to 1 and should equal 1 theoretically. Should
  // probably leave them out completely.
  // NOTE: in this copy of the source, text between '<' and the following '>' was stripped during
  // extraction; the affected loop headers and statements below are reconstructed best-effort and
  // may differ in detail from the released WGCNA sources.

  if (fallback==4)
  {
    prepareColCor(col, nr, cosine, res, nNAentries, NAmed);
    return;
  }

  int err = 0;

  // Calculate the median of col

  memcpy((void *)res, (void *)col, nr * sizeof(double));
  double med = median(res, nr, 0, &err);

  // Create a conditional copy of the median
  double medX;
  if (cosine) medX = 0; else medX = med;

  *zeroMAD = 0;
  // calculate absolute deviations from the median
  if (ISNAN(med))
  {
    *NAmed = 1;
    for (size_t k=0; k<nr; k++) res[k] = 0;
  } else {
    *NAmed = 0;
    *nNAentries = 0;
    for (size_t k=0; k<nr; k++)
      if (ISNAN(col[k]))
      {
        (*nNAentries)++;
        res[k] = NA_REAL;
        aux[k] = NA_REAL;
      } else {
        res[k] = col[k] - medX;
        aux[k] = fabs(col[k] - med);
      }

    // Calculate the median absolute deviation (mad).
    double mad = median(aux, nr, 0, &err);
    if (mad==0)
    {
      // MAD is zero: record the fact and act according to the 'fallback' argument.
      *zeroMAD = 1;
      if (fallback==1)
      {
        // No fallback: zero out the result and flag the column as invalid.
        *NAmed = 1;
        for (size_t k=0; k<nr; k++) res[k] = 0;
      } else if (fallback==2)
        // Individual fallback: switch to Pearson pre-calculation for this column only.
        prepareColCor(col, nr, cosine, res, nNAentries, NAmed);
      // fallback==3: the caller checks zeroMAD and redoes the whole data set using Pearson.
      return;
    }

    // The weights u: deviation from the median scaled by 9 * qnorm75 * mad
    // (qnorm75 is expected to be defined in the included headers).
    double denom = 9.0 * qnorm75 * mad;
    for (size_t k=0; k<nr; k++)
      if (!ISNAN(res[k]))
        aux[k] = res[k] / denom;
      else
        aux[k] = NA_REAL;

    // Get the low and high quantiles of u and rescale u so that no more than maxPOutliers
    // of the observations fall outside of [-1, 1] on either side.
    memcpy((void *)aux2, (void *)aux, nr * sizeof(double));
    double lowQ = quantile(aux2, nr, maxPOutliers, 0, &err);
    memcpy((void *)aux2, (void *)aux, nr * sizeof(double));
    double hiQ = quantile(aux2, nr, 1-maxPOutliers, 0, &err);

    if (lowQ > -RefUX) lowQ = -RefUX;
    if (hiQ < RefUX) hiQ = RefUX;
    lowQ = fabs(lowQ);

    for (size_t k=0; k<nr; k++)
      if (!ISNAN(aux[k]))
      {
        if (aux[k] < 0) aux[k] = aux[k] / lowQ;
        else aux[k] = aux[k] / hiQ;
      }

    // Calculate (1-u^2)^2 * (x - median(x)) and the normalization.
    LDOUBLE sum = 0;
    for (size_t k=0; k<nr; k++)
      if (!ISNAN(res[k]))
      {
        double ux = aux[k];
        if (fabs(ux) > 1) ux = 1; // sign of ux doesn't matter.
        ux = 1-ux*ux;
        res[k] *= ux*ux ;
        sum += res[k]*res[k];
      } else
        res[k] = 0;
    sum = sqrtl(sum);
    if (sum==0)
    {
      for (size_t k=0; k<nr; k++) res[k] = 0;
      *NAmed = 1;
    } else
      for (size_t k=0; k<nr; k++) res[k] = res[k] / sum;
  }
}

/*======================================================================================
 *
 * prepareColCor
 *
 * =====================================================================================*/

// Center the column by its mean (unless cosine correlation is requested) and scale by the root of
// the sum of squares, so that the correlation of two prepared columns is their dot product.
// (The signature and the accumulation loop were garbled in extraction and are reconstructed here.)

void prepareColCor(double * col, size_t nr, int cosine,
                   double * res, size_t * nNAentries, int * NAmean)
{
  size_t count = 0;
  LDOUBLE mean = 0, sum = 0;
  for (size_t k=0; k<nr; k++)
    if (!ISNAN(col[k]))
    {
      count++;
      mean += col[k];
      sum += ((LDOUBLE) col[k]) * col[k];
    }
  if (count > 0)
  {
    *NAmean = 0;
    *nNAentries = nr-count;
    if (cosine) mean = 0; else mean = mean/count;
    sum = sqrtl(sum - count * mean*mean);
    if (sum > 0)
    {
      // Rprintf("sum: %Le\n", sum);
      for (size_t k=0; k<nr; k++)
        if (!ISNAN(col[k]))
          res[k] = (col[k] - mean)/sum;
        else
          res[k] = 0;
    } else {
      *NAmean = 1;
      for (size_t k=0; k<nr; k++) res[k] = 0;
    }
  } else {
    *NAmean = 1;
    *nNAentries = nr;
    for (size_t k=0; k<nr; k++) res[k] = 0;
  }
}

/*======================================================================================
 *
 * prepareColCor_weighted
 *
 * =====================================================================================*/

// Weighted analog of prepareColCor: the prepared column is w * (x - weighted mean) normalized by
// sqrt( sum( w^2 (x - mean)^2 ) ).
// (The signature and accumulation loop were garbled in extraction and are reconstructed so that the
// surviving quantities read wsum = sum(w), sumSq = sum((w x)^2), sumxwSq = sum(x w^2),
// wsumSq = sum(w^2).)

void prepareColCor_weighted(double * x, double * weights, size_t nr, int cosine,
                            double * res, size_t * nNAentries, int * NAmean)
{
  size_t count = 0;
  LDOUBLE mean = 0, wsum = 0, sumSq = 0, sumxwSq = 0, wsumSq = 0;
  for (size_t k=0; k<nr; k++)
    if (!ISNAN(x[k]) && !ISNAN(weights[k]))
    {
      count++;
      LDOUBLE w = weights[k], xk = x[k];
      mean += w * xk;
      wsum += w;
      sumSq += (w*xk) * (w*xk);
      sumxwSq += xk * w * w;
      wsumSq += w * w;
    }
  if (count > 0)
  {
    *NAmean = 0;
    *nNAentries = nr-count;
    if (cosine) mean = 0; else mean = mean/wsum;
    sumSq = sqrtl(sumSq - 2*mean * sumxwSq + mean*mean * wsumSq);
    //Rprintf("\nprepareColCor_weighted: \n");
    //Rprintf("  mean: %5.3Lf, sumSq: %5.3Lf\n", mean, sumSq);
    //Rprintf("  x: "); RprintV(x, nr);
    //Rprintf("  weights: "); RprintV(weights, nr);
    if ((wsum > 0) && (sumSq > 0))
    {
      // Rprintf("sum: %Le\n", sum);
      for (size_t k=0; k<nr; k++)
        if (!ISNAN(x[k]) && !ISNAN(weights[k]))
          res[k] = weights[k] * (x[k] - mean)/sumSq;
        else
          res[k] = 0;
    } else {
      *NAmean = 1;
      for (size_t k=0; k<nr; k++) res[k] = 0;
    }
  } else {
    *NAmean = 1;
    *nNAentries = nr;
    for (size_t k=0; k<nr; k++) res[k] = 0;
  }
}

/*======================================================================================
 *
 * threadPrepColBicor
 *
 * =====================================================================================*/

void * threadPrepColBicor(void * par)
{
  colPrepThreadData volatile * td = (colPrepThreadData *) par;
  cor1ThreadData volatile * x = td->x;
  // Rprintf("Preparing columns: nr = %d, nc = %d\n", x->nr, x->nc);
  while (td->pc->i < td->pc->n)
  {
    // Grab the next column that needs to be done
    pthread_mutex_lock_c( td->lock, x->threaded);
    if (td->pc->i < td->pc->n)
    {
      size_t col = td->pc->i;
      // Rprintf("...working on column %d in thread %d\n", col, td->x->id);
      td->pc->i++;
      pthread_mutex_unlock_c( td->lock, x->threaded );

      prepareColBicor(x->x + col * x->nr,
                      x->nr,
                      x->maxPOutliers,
                      x->fallback,
                      x->cosine,
                      x->multMat + col * x->nr,
                      x->nNAentries + col,
                      x->NAme + col,
                      &(x->zeroMAD),
                      x->aux,
                      x->aux + x->nr);
      // if (x->zeroMAD > 0) { Rprintf("threadPrepColBicor: mad was zero in column %d.\n", col); }
      if (x->zeroMAD > 0) *(x->warn) = warnZeroMAD;
      if ( (x->zeroMAD > 0) && (x->fallback==3))
      {
        pthread_mutex_lock_c( td->lock, x->threaded );
        // Rprintf("threadPrepColBicor: Moving counter from %d %d to end at %d in thread %d.\n",
        //          col, td->pc->i, td->pc->n, x->id);
        x->zeroMAD = col+1;
        td->pc->i = td->pc->n;
        pthread_mutex_unlock_c( td->lock, x->threaded );
      }
    } else
      pthread_mutex_unlock_c( td->lock, x->threaded );
  }
  return NULL;
}

/*======================================================================================
 *
 * prepareColCor
 *
 * =====================================================================================*/

// Used for the fast calculation of Pearson correlation
// and when bicor is called with robustX or robustY = 0

void * threadPrepColCor(void * par)
{
  colPrepThreadData volatile * td = (colPrepThreadData *) par;
  cor1ThreadData volatile * x = td->x;
  //Rprintf("threadPrepColCor: starting in thread %d: counter.i = %d, counter.n = %d, nc = %d.\n",
  //         td->x->id, td->pc->i, td->pc->n, td->x->nc);
  while (td->pc->i < td->pc->n)
  {
    // Grab the next column that needs to be done
    pthread_mutex_lock_c( td->lock, x->threaded );
    int col = td->pc->i;
    if (col < td->x->nc)
    {
      td->pc->i++;
      // Rprintf("threadPrepColCor: preparing column %d in thread %d.\n", col, td->x->id);
      pthread_mutex_unlock_c( td->lock, x->threaded );
      prepareColCor(x->x + col * x->nr,
                    x->nr,
                    x->cosine,
                    x->multMat + col * x->nr,
                    x->nNAentries + col,
                    x->NAme + col);
    } else
      pthread_mutex_unlock_c( td->lock, x->threaded );
  }
  return NULL;
}

/*===================================================================================================
 *
 * Threaded symmetrization and NA'ing out of rows and columns with NA means
 *
 *===================================================================================================
*/

void * threadSymmetrize(void * par)
{
  symmThreadData * td = (symmThreadData *) par;
  cor1ThreadData * x = td->x;

  size_t nc = x->nc;
  double * result = x->result;
  int * NAmean = x->NAme;
  size_t col = 0;
  while ( (col = td->pc->i) < nc)
  {
    // Symmetrize the column
    // point counter to the next column
    td->pc->i = col+1;
    // and update the matrix. Start at j=col to check for values greater than 1.
    if (NAmean[col] == 0)
    {
      double * resx = result + col*nc + col;
      // Rprintf("Symmetrizing row %d to the same column.\n", col);
      for (size_t j=col; j<nc; j++)
      {
        if (NAmean[j] == 0)
        {
          if (!ISNAN(*resx))
          {
            if (*resx > 1.0) *resx = 1.0;
            if (*resx < -1.0) *resx = -1.0;
          }
          result[j*nc + col] = *resx;
        }
        resx ++;
      }
    } else {
      // Rprintf("NA-ing out column and row %d\n", col);
      for (size_t j=0; j<nc; j++)
      {
        result[col*nc + j] = NA_REAL;
        result[j*nc + col] = NA_REAL;
      }
    }
  }
  return NULL;
}

/*===================================================================================================
 *
 * Threaded "slow" calculations for bicor
 *
 *===================================================================================================
*/

void * threadSlowCalcBicor(void * par)
{
  slowCalcThreadData * td = (slowCalcThreadData *) par;
  size_t * nSlow = td->nSlow;
  size_t * nNA = td->nNA;
  double * x = td->x->x;
  double * multMat = td->x->multMat;
  double * result = td->x->result;
  int fbx = td->x->fallback;
  int cosine = td->x->cosine;
  size_t nc = td->x->nc, nc1 = nc-1, nr = td->x->nr;
  int * NAmean = td->x->NAme;
  size_t * nNAentries = td->x->nNAentries;
  progressCounter * pci = td->pci, * pcj = td->pcj;

  double maxPOutliers = td->x->maxPOutliers;

  double * xx = td->x->aux, * yy = xx + nr;
  double * xxx = xx + 2*nr, * yyy = xx + 3*nr;
  double * xx2 = xx + 4*nr, * yy2 = xx + 5*nr;

  size_t maxDiffNA = (size_t) (td->x->quick * nr);

  if (fbx==3) fbx = 2; // For these calculations can't go back and redo everything

  // Rprintf("Checking %d rows and %d columns\n", nc1, nc);
  // Rprintf("starting at %d and %d\n", pci->i, pcj->i);
  while (pci->i < nc1)
  {
    pthread_mutex_lock_c( td->lock, td->x->threaded );
    size_t i = pci->i, ii = i;
    size_t j = pcj->i, jj = j;
    do
    {
      i = ii; j = jj;
      jj++;
      if (jj==nc) { ii++; jj = ii+1; }
    } while ((i<nc1) && ((NAmean[i] > 0) || (NAmean[j] > 0) ||
                         ( (nNAentries[i] <= maxDiffNA) && ( nNAentries[j] <= maxDiffNA))));
    pci->i = ii;
    pcj->i = jj;
    pthread_mutex_unlock_c( td->lock, td->x->threaded );

    if ((i < nc1) && (j < nc) )
    {
      // Rprintf("Recalculating row %d and column %d, column size %d\n", i, j, nr);
      memcpy((void *)xx, (void *)(x + i*nr), nr * sizeof(double));
      memcpy((void *)yy, (void *)(x + j*nr), nr * sizeof(double));

      size_t nNAx = 0, nNAy = 0;
      for (size_t k=0; k<nr; k++)
      {
        if (ISNAN(xx[k])) nNAx++;
        if (ISNAN(yy[k])) nNAy++;
      }
      int NAx = 0, NAy = 0;
      if ((nNAx - nNAentries[i] > maxDiffNA) || (nNAy-nNAentries[j] > maxDiffNA))
      {
        // must recalculate the auxiliary variables for both columns
        size_t temp = 0;
        int zeroMAD = 0;
        if (nNAx - nNAentries[i] > maxDiffNA)
        {
          prepareColBicor(xx, nr, maxPOutliers, fbx, cosine, xxx, &temp, &NAx, &zeroMAD, xx2, yy2);
          if (zeroMAD) *(td->x->warn) = warnZeroMAD;
        } else
          memcpy((void *) xxx, (void *) (multMat + i * nr), nr * sizeof(double));
        if (nNAy-nNAentries[j] > maxDiffNA)
        {
          prepareColBicor(yy, nr, maxPOutliers, fbx, cosine, yyy, &temp, &NAy, &zeroMAD, xx2, yy2);
          if (zeroMAD) *(td->x->warn) = warnZeroMAD;
        } else
          memcpy((void *) yyy, (void *) (multMat + j * nr), nr * sizeof(double));
        if (NAx + NAy==0)
        {
          LDOUBLE sumxy = 0;
          size_t count = 0;
          // NOTE: this dot-product loop was garbled in extraction and is reconstructed best-effort.
          for (size_t k=0; k<nr; k++)
          {
            double vx = *(xxx + k), vy = *(yyy + k);
            if (!ISNAN(vx) && !ISNAN(vy))
            {
              sumxy += vx * vy;
              count++;
            }
          }
          if (count==0)
          {
            result[i*nc + j] = NA_REAL;
            (*nNA)++;
          } else
            result[i*nc + j] = (double) sumxy;
        } else {
          result[i*nc + j] = NA_REAL;
          (*nNA)++;
        }
        (*nSlow)++;
      }
    }
  }
  return NULL;
}

/*===================================================================================================
 *
 * Threaded "slow" calculations for Pearson correlation
 *
 *===================================================================================================
*/

void * threadSlowCalcCor(void * par)
{
  slowCalcThreadData * td = (slowCalcThreadData *) par;
  size_t * nSlow = td->nSlow;
  size_t * nNA = td->nNA;
  double * x = td->x->x;
  double * result = td->x->result;
  size_t nc = td->x->nc, nc1 = nc-1, nr = td->x->nr;
  int cosine = td->x->cosine;
  int * NAmean = td->x->NAme;
  size_t * nNAentries = td->x->nNAentries;
  progressCounter * pci = td->pci, * pcj = td->pcj;

  size_t maxDiffNA = (size_t) (td->x->quick * nr);

  // Rprintf("quick:%f\n", td->x->quick);

  // Rprintf("Checking %d rows and %d columns\n", nc1, nc);
  // Rprintf("starting at %d and %d\n", pci->i, pcj->i);
  while (pci->i < nc1)
  {
    pthread_mutex_lock_c( td->lock, td->x->threaded );
    size_t i = pci->i, ii = i;
    size_t j = pcj->i, jj = j;
    do
    {
      i = ii; j = jj;
      jj++;
      if (jj==nc) { ii++; jj = ii+1; }
    } while ((i<nc1) && ((NAmean[i] > 0) || (NAmean[j] > 0) ||
                         ( (nNAentries[i] <= maxDiffNA) && ( nNAentries[j] <= maxDiffNA))));
    pci->i = ii;
    pcj->i = jj;
    pthread_mutex_unlock_c( td->lock, td->x->threaded );

    if ((i < nc1) && (j < nc))
    {
      // Rprintf("Recalculating column %d and row %d, column size %d\n", i, j, nr);
      *nNA += basic2variableCorrelation( x + i * nr, x + j * nr,
                                         nr, result + i*nc + j, cosine, cosine);
      (*nSlow)++;
    }
  }
  return NULL;
}

/*===================================================================================================
 *
 * Threaded NA-ing
 *
 *===================================================================================================
*/

void * threadNAing(void * par)
{
  NA2ThreadData * td = (NA2ThreadData *) par;

  double * result = td->x->x->result;
  size_t ncx = td->x->x->nc;
  int * NAmedX = td->x->x->NAme;

  size_t ncy = td->x->y->nc;
  int * NAmedY = td->x->y->NAme;

  progressCounter * pci = td->pci;
  progressCounter * pcj = td->pcj;

  // Go row by row
  size_t row = 0, col = 0;
  while ((row = pci->i) < ncx)
  {
    pci->i = row + 1;
    if (NAmedX[row])
    {
      // Rprintf("NA-ing out column and row %d\n", col);
      for (size_t j=0; j<ncy; j++) result[row + j*ncx] = NA_REAL;
    }
  }
  // ... and column by column; also cap the valid entries at +-1.
  while ((col = pcj->i) < ncy)
  {
    pcj->i = col + 1;
    if (NAmedY[col])
    {
      // Rprintf("NA-ing out column and row %d\n", col);
      for (size_t i=0; i<ncx; i++) result[i + col*ncx] = NA_REAL;
    } else {
      double * resx = result + col*ncx;
      for (size_t i=0; i<ncx; i++)
      {
        if (!ISNAN(*resx))
        {
          if (*resx > 1.0) *resx = 1.0;
          if (*resx < -1.0) *resx = -1.0;
        }
        resx++;
      }
    }
  }
  return NULL;
}

/*===================================================================================================
 *
 * Threaded "slow" calculations for bicor(x,y)
 *
 *===================================================================================================
*/

// This can actually be relatively slow, since the search for calculations that need to be done is not
// parallel, so one thread may have to traverse the whole matrix. I can imagine parallelizing even that
// part, but for now leave it as is as this will at best be a minuscule improvement.
(NA2ThreadData *) par; double * result = td->x->x->result; size_t ncx = td->x->x->nc; int * NAmedX = td->x->x->NAme; size_t ncy = td->x->y->nc; int * NAmedY = td->x->y->NAme; progressCounter * pci = td->pci; progressCounter * pcj = td->pcj; // Go row by row size_t row = 0, col = 0; while ((row = pci->i) < ncx) { pci->i = row + 1; if (NAmedX[row]) { // Rprintf("NA-ing out column and row %d\n", col); for (size_t j=0; ji) < ncy) { pcj->i = col + 1; if (NAmedY[col]) { // Rprintf("NA-ing out column and row %d\n", col); for (size_t i=0; i 1.0) *resx = 1.0; if (*resx < -1.0) *resx = -1.0; } resx++; } } } return NULL; } /*=================================================================================================== * * Threaded "slow" calculations for bicor(x,y) * *=================================================================================================== */ // This can actually be relatively slow, since the search for calculations that need to be done is not // parallel, so one thread may have to traverse the whole matrix. I can imagine parallelizing even that // part, but for now leave it as is as this will at best be a minuscule improvement. 
void * threadSlowCalcBicor2(void * par) { slowCalc2ThreadData * td = (slowCalc2ThreadData *) par; size_t * nSlow = td->nSlow; size_t * nNA = td->nNA; double * x = td->x->x->x; double * multMatX = td->x->x->multMat; double * result = td->x->x->result; size_t ncx = td->x->x->nc, nr = td->x->x->nr; int * NAmeanX = td->x->x->NAme; size_t * nNAentriesX = td->x->x->nNAentries; int robustX = td->x->x->robust; int fbx = td->x->x->fallback; int cosineX = td->x->x->cosine; double * y = td->x->y->x; double * multMatY = td->x->y->multMat; size_t ncy = td->x->y->nc; int * NAmeanY = td->x->y->NAme; size_t * nNAentriesY = td->x->y->nNAentries; int robustY = td->x->y->robust; int fby = td->x->y->fallback; int cosineY = td->x->y->cosine; double maxPOutliers = td->x->x->maxPOutliers; progressCounter * pci = td->pci, * pcj = td->pcj; double * xx = td->x->x->aux; double * xxx = xx + nr; double * xx2 = xx + 2*nr; double * yy = td->x->y->aux; double * yyy = yy + nr; double * yy2 = yy + 2*nr; double * xx3, *yy3; int maxDiffNA = (int) (td->x->x->quick * nr); if (fbx==3) fbx = 2; if (fby==3) fby = 2; if (!robustX) fbx = 4; if (!robustY) fby = 4; // Rprintf("Remedial calculation thread #%d: starting at %d and %d\n", td->x->x->id, // pci->i, pcj->i); // while (pci->i < ncx) { pthread_mutex_lock_c( td->lock, td->x->x->threaded ); size_t i = pci->i, ii = i; size_t j = pcj->i, jj = j; do { i = ii; j = jj; jj++; if (jj==ncy) { ii++; jj = 0; } } while ((i 0) || (NAmeanY[j] > 0) || ( (nNAentriesX[i] <= maxDiffNA) && ( nNAentriesY[j] <= maxDiffNA)))); pci->i = ii; pcj->i = jj; pthread_mutex_unlock_c( td->lock, td->x->x->threaded ); if ((i < ncx) && (j < ncy)) { memcpy((void *)xx, (void *)(x + i*nr), nr * sizeof(double)); memcpy((void *)yy, (void *)(y + j*nr), nr * sizeof(double)); size_t nNAx = 0, nNAy = 0; for (size_t k=0; k maxDiffNA) || (nNAy-nNAentriesY[j] > maxDiffNA)) { // Rprintf("Recalculating row %d and column %d, column size %d in thread %d\n", i, j, nr, // td->x->x->id); // must 
recalculate the auxiliary variables for both columns size_t temp = 0; int zeroMAD = 0; if (nNAx - nNAentriesX[i] > maxDiffNA) { // Rprintf("...Recalculating row... \n"); //if (robustX && (fbx!=4)) prepareColBicor(xx, nr, maxPOutliers, fbx, cosineX, xxx, &temp, &NAx, &zeroMAD, xx2, yy2); if (zeroMAD) *(td->x->x->warn) = warnZeroMAD; //else // prepareColCor(xx, nr, xxx, &temp, &NAx); xx3 = xxx; } else xx3 = multMatX + i * nr; if (nNAy-nNAentriesY[j] > maxDiffNA) { // Rprintf("...Recalculating column... \n"); //if (robustY && (fby!=4)) prepareColBicor(yy, nr, maxPOutliers, fby, cosineY, yyy, &temp, &NAy, &zeroMAD, xx2, yy2); if (zeroMAD) *(td->x->y->warn) = warnZeroMAD; //else // prepareColCor(yy, nr, yyy, &temp, &NAy); yy3 = yyy; } else yy3 = multMatY + j * nr; if (NAx + NAy==0) { // LDOUBLE sumxy = 0; double sumxy = 0; size_t count = 0; for (size_t k=0; kx->x->id, result[i + j*ncx]); } } else { result[i + j*ncx] = NA_REAL; (*nNA)++; } (*nSlow)++; } } } return NULL; } /*=================================================================================================== * * Threaded "slow" calculations for pearson correlation of 2 variables. * *=================================================================================================== */ // This can actually be relatively slow, since the search for calculations that need to be done is not // parallel, so one thread may have to traverse the whole matrix. I can imagine parallelizing even that // part, but for now leave it as is as this will at best be a minuscule improvement. 
void * threadSlowCalcCor2(void * par) { slowCalc2ThreadData * td = (slowCalc2ThreadData *) par; size_t * nSlow = td->nSlow; size_t * nNA = td->nNA; double * x = td->x->x->x; // double * multMatX = td->x->x->multMat; double * result = td->x->x->result; size_t ncx = td->x->x->nc, nr = td->x->x->nr; int * NAmeanX = td->x->x->NAme; size_t * nNAentriesX = td->x->x->nNAentries; int cosineX = td->x->x->cosine; double * y = td->x->y->x; // double * multMatY = td->x->y->multMat; size_t ncy = td->x->y->nc; int * NAmeanY = td->x->y->NAme; size_t * nNAentriesY = td->x->y->nNAentries; int cosineY = td->x->y->cosine; size_t maxDiffNA = (size_t) (td->x->x->quick * nr); progressCounter * pci = td->pci, * pcj = td->pcj; // Rprintf("Will tolerate %d additional NAs\n", maxDiffNA); // Rprintf("Checking %d rows and %d columns\n", nc1, nc); // Rprintf("starting at %d and %d\n", pci->i, pcj->i); // while (pci->i < ncx) { pthread_mutex_lock_c( td->lock, td->x->x->threaded ); size_t i = pci->i, ii = i; size_t j = pcj->i, jj = j; do { i = ii; j = jj; jj++; if (jj==ncy) { ii++; jj = 0; } } while ((i 0) || (NAmeanY[j] > 0) || ( (nNAentriesX[i] <= maxDiffNA) && ( nNAentriesY[j] <= maxDiffNA)))); pci->i = ii; pcj->i = jj; pthread_mutex_unlock_c( td->lock, td->x->x->threaded ); if ((i < ncx) && (j < ncy)) { // Rprintf("Recalculating row %d and column %d, column size %d; cosineX: %d, cosineY: %d\n", // i, j, nr, cosineX, cosineY); *nNA += basic2variableCorrelation( x + i * nr, y + j * nr, nr, result + i + j*ncx, cosineX, cosineY); (*nSlow)++; } } return NULL; } /*====================================================================================== * * threaded prepareColCor_weighted * * =====================================================================================*/ // Used for the fast calculation of Pearson correlation // and when bicor is called with robustsX or robustY = 0 void * threadPrepColCor_weighted(void * par) { colPrepThreadData volatile * td = (colPrepThreadData *) par; 
  cor1ThreadData volatile * x = td->x;

  //Rprintf("threadPrepColCor: starting in thread %d: counter.i = %d, counter.n = %d, nc = %d.\n",
  //        td->x->id, td->pc->i, td->pc->n, td->x->nc);
  while (td->pc->i < td->pc->n)
  {
    // Grab the next column that needs to be done
    pthread_mutex_lock_c( td->lock, x->threaded );
    int col = td->pc->i;
    if (col < td->x->nc)
    {
      td->pc->i++;
      // Rprintf("threadPrepColCor: preparing column %d in thread %d.\n", col, td->x->id);
      pthread_mutex_unlock_c( td->lock, x->threaded );

      prepareColCor_weighted(x->x + col * x->nr,
                             x->weights + col * x->nr,
                             x->nr,
                             x->cosine,
                             x->multMat + col * x->nr,
                             x->nNAentries + col,
                             x->NAme + col);
    } else
      pthread_mutex_unlock_c( td->lock, x->threaded );
  }
  return NULL;
}

void * threadSlowCalcCor_weighted(void * par)
{
  slowCalcThreadData * td = (slowCalcThreadData *) par;
  size_t * nSlow = td->nSlow;
  size_t * nNA = td->nNA;

  double * x = td->x->x;
  double * weights = td->x->weights;
  double * result = td->x->result;
  size_t nc = td->x->nc, nc1 = nc-1, nr = td->x->nr;
  int cosine = td->x->cosine;

  int * NAmean = td->x->NAme;
  size_t * nNAentries = td->x->nNAentries;
  progressCounter * pci = td->pci, * pcj = td->pcj;

  size_t maxDiffNA = (size_t) (td->x->quick * nr);

  // Rprintf("quick:%f\n", td->x->quick);
  // Rprintf("Checking %d rows and %d columns\n", nc1, nc);
  // Rprintf("starting at %d and %d\n", pci->i, pcj->i);

  while (pci->i < nc1)
  {
    pthread_mutex_lock_c( td->lock, td->x->threaded );
    size_t i = pci->i, ii = i;
    size_t j = pcj->i, jj = j;
    do
    {
      i = ii; j = jj;
      jj++;
      if (jj==nc) { ii++; jj = ii+1; }
    } while ((i<nc1) && (j<nc) &&
             ((NAmean[i] > 0) || (NAmean[j] > 0) ||
              ( (nNAentries[i] <= maxDiffNA) && ( nNAentries[j] <= maxDiffNA))));
    pci->i = ii;
    pcj->i = jj;
    pthread_mutex_unlock_c( td->lock, td->x->threaded );

    if ((i < nc1) && (j < nc))
    {
      // Rprintf("Recalculating column %d and row %d, column size %d\n", i, j, nr);
      *nNA += basic2variableCorrelation_weighted(x + i * nr, x + j * nr,
                                                 weights + i * nr, weights + j * nr,
                                                 nr, result + i*nc + j, cosine, cosine);
      (*nSlow)++;
    }
  }
  return NULL;
}

/*===================================================================================================
 *
 * Threaded "slow" calculations for weighted pearson correlation of 2 variables.
 *
 *===================================================================================================
*/
// The search for calculations that need to be done is not
// parallel, so one thread may have to traverse the whole matrix. I can imagine parallelizing even that
// part, but for now leave it as is as this will at best be a minuscule improvement.

void * threadSlowCalcCor2_weighted(void * par)
{
  slowCalc2ThreadData * td = (slowCalc2ThreadData *) par;
  size_t * nSlow = td->nSlow;
  size_t * nNA = td->nNA;

  double * x = td->x->x->x;
  double * weights_x = td->x->x->weights;
  // double * multMatX = td->x->x->multMat;
  double * result = td->x->x->result;
  size_t ncx = td->x->x->nc, nr = td->x->x->nr;
  int * NAmeanX = td->x->x->NAme;
  size_t * nNAentriesX = td->x->x->nNAentries;
  int cosineX = td->x->x->cosine;

  double * y = td->x->y->x;
  double * weights_y = td->x->y->weights;
  // double * multMatY = td->x->y->multMat;
  size_t ncy = td->x->y->nc;
  int * NAmeanY = td->x->y->NAme;
  size_t * nNAentriesY = td->x->y->nNAentries;
  int cosineY = td->x->y->cosine;

  size_t maxDiffNA = (size_t) (td->x->x->quick * nr);
  progressCounter * pci = td->pci, * pcj = td->pcj;

  while (pci->i < ncx)
  {
    pthread_mutex_lock_c( td->lock, td->x->x->threaded );
    size_t i = pci->i, ii = i;
    size_t j = pcj->i, jj = j;
    do
    {
      i = ii; j = jj;
      jj++;
      if (jj==ncy) { ii++; jj = 0; }
    } while ((i<ncx) && (j<ncy) &&
             ((NAmeanX[i] > 0) || (NAmeanY[j] > 0) ||
              ( (nNAentriesX[i] <= maxDiffNA) && ( nNAentriesY[j] <= maxDiffNA))));
    pci->i = ii;
    pcj->i = jj;
    pthread_mutex_unlock_c( td->lock, td->x->x->threaded );

    if ((i < ncx) && (j < ncy))
    {
      // Rprintf("Recalculating row %d and column %d, column size %d; cosineX: %d, cosineY: %d\n",
      //         i, j, nr, cosineX, cosineY);
      *nNA += basic2variableCorrelation_weighted(x + i * nr, y + j * nr,
                                                 weights_x + i * nr, weights_y + j * nr,
                                                 nr, result + i + j * ncx, cosineX, cosineY);
      (*nSlow)++;
    }
  }
  return NULL;
}

WGCNA/src/corFunctions.c

/*
  Calculation of unweighted Pearson and biweight midcorrelation.

  Copyright (C) 2008 Peter Langfelder; parts based on R by R Development team

  This program is free software; you can redistribute it and/or modify
  it under the terms of the GNU General Public License as published by
  the Free Software Foundation; either version 2 of the License, or
  (at your option) any later version.

  This program is distributed in the hope that it will be useful,
  but WITHOUT ANY WARRANTY; without even the implied warranty of
  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
  GNU General Public License for more details.

  You should have received a copy of the GNU General Public License
  along with this program; if not, write to the Free Software
  Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.

  Some notes on handling of zero MAD:

  (.) in the threaded calculations, each column has its own NAmed flag, but the zeroMAD flag is one
      flag per thread. Thus, it should be zeroed out before the threaded calculation starts and
      checked at the end.
*/

#include "corFunctions.h"
#include "conditionalThreading.h"

#include <math.h>
//#include <stdio.h>
#include <unistd.h>

#define USE_FC_LEN_T
#include <Rconfig.h>
#include <R_ext/BLAS.h>
#ifndef FCONE
# define FCONE
#endif
#include <R.h>
#include <Rinternals.h>
#include <Rmath.h>

#include "pivot.h"
#include "corFunctions-typeDefs.h"
#include "corFunctions-utils.h"

/*========================================================================
 *
 * Short test code to see whether parallel code can be incorporated into R
 *
 * =======================================================================
 */

int nProcessors(void)
{
#ifdef WITH_THREADS
#ifdef _SC_NPROCESSORS_ONLN
  long nProcessorsOnline = sysconf(_SC_NPROCESSORS_ONLN);
#else
  long nProcessorsOnline = 2;
#endif
#else
  long nProcessorsOnline = 1;
#endif
  return (int) nProcessorsOnline;
}

// Function to calculate suitable number of threads to use.

int useNThreads(size_t n, int nThreadsRequested)
{
#ifdef WITH_THREADS
  int nt = nThreadsRequested;
  if ((nt < 1) || (nt > MxThreads))
  {
    nt = nProcessors();
    if (nt > MxThreads) nt = MxThreads;
  }
  if (n < nt * minSizeForThreading) nt = (n/minSizeForThreading) + 1;
  return nt;
#else
  // Silence "unused argument" warning
  n = n+1;
  return 1;
#endif
}

//===================================================================================================
// Pearson correlation of a matrix with itself.
// This one uses matrix multiplication in BLAS to speed up calculation when there are no NA's
// and uses threading to speed up the rest of the calculation.
//===================================================================================================

// C-level correlation calculation

void cor1Fast(double * x, int * nrow, int * ncol, double * weights,
          double * quick, int * cosine,
          double * result, int *nNA, int * err,
          int * nThreads,
          int * verbose, int * indent)
{
  size_t nr = (size_t) *nrow, nc = (size_t) *ncol;

  char spaces[2* *indent+1];
  for (int i=0; i<2* *indent; i++) spaces[i] = ' ';
  spaces[2* *indent] = '\0';

  *err = 0;

  size_t nNA_ext = 0;

  // Allocate space for various variables

  double * multMat;
  size_t * nNAentries;
  int *NAmean;

  // This matrix will hold preprocessed entries that can be simply multiplied together to get the
  // numerator

  if ( (multMat = (double *) malloc(nc*nr * sizeof(double)))==NULL )
  {
    *err = 1;
    Rprintf("cor1: memory allocation error. If possible, please decrease block size.\n");
    return;
  }

  // Number of NA entries in each column

  if ( (nNAentries = (size_t *) malloc(nc * sizeof(size_t)))==NULL )
  {
    free(multMat);
    *err = 1;
    Rprintf("cor1: memory allocation error. The needed block is relatively small... suspicious.\n");
    return;
  }

  // Flag indicating whether the mean of each column is NA

  if ( (NAmean = (int *) malloc(nc * sizeof(int)))==NULL )
  {
    free(nNAentries); free(multMat);
    *err = 1;
    Rprintf("cor1: memory allocation error. The needed block is relatively small... suspicious.\n");
    return;
  }

  // Decide how many threads to use
  int nt = useNThreads( nc*nc, *nThreads);

  if (*verbose)
  {
    if (nt > 1)
      Rprintf("%s..will use %d parallel threads.\n", spaces, nt);
    else
      Rprintf("%s..will not use multithreading.\n", spaces);
  }

  // double * aux[MxThreads];
  // for (int t=0; t < nt; t++)
  // {
  //   if ( (aux[t] = (double *) malloc(6*nr * sizeof(double)))==NULL)
  //   {
  //     *err = 1;
  //     Rprintf("cor1: memory allocation error. The needed block is very small... suspicious.\n");
  //     for (int tt = t-1; tt>=0; tt--) free(aux[tt]);
  //     free(NAmean); free(nNAentries); free(multMat);
  //     return;
  //   }
  // }

  // Put the general data of the correlation calculation into a structure that can be passed on to
  // threads.

  cor1ThreadData thrdInfo[MxThreads];
  for (int t = 0; t < nt; t++)
  {
     thrdInfo[t].x = x;
     thrdInfo[t].weights = weights;
     thrdInfo[t].nr = nr;
     thrdInfo[t].nc = nc;
     thrdInfo[t].multMat = multMat;
     thrdInfo[t].result = result;
     thrdInfo[t].nNAentries = nNAentries;
     thrdInfo[t].NAme = NAmean;
     thrdInfo[t].quick = *quick;
     thrdInfo[t].cosine = *cosine;
     thrdInfo[t].id = t;
     thrdInfo[t].threaded = (nt > 1);
  }

  // Column preparation (calculation of the matrix to be multiplied) in a threaded form.

  colPrepThreadData cptd[MxThreads];
  pthread_t thr[MxThreads];
  int status[MxThreads];

  pthread_mutex_t mutex1 = PTHREAD_MUTEX_INITIALIZER;

  progressCounter pc;
  pc.i = 0;
  pc.n = nc;

  // Rprintf("Preparing columns...\n");
  for (int t=0; t<nt; t++)
  {
    cptd[t].x = thrdInfo + t;
    cptd[t].pc = &pc;
    cptd[t].lock = &mutex1;
    status[t] = pthread_create_c(&thr[t], NULL, threadPrepColCor, (void *) &cptd[t],
                                 thrdInfo[t].threaded);
  }
  for (int t=0; t<nt; t++) pthread_join_c(thr[t], NULL, thrdInfo[t].threaded);

  /* ... */

  // for (int t = nt-1; t >= 0; t--) free(aux[t]);

  //Rprintf("End of cor1Fast (1): err = %d\n", *err);
  //*nNA = 1234;
  //Rprintf("End of cor1Fast (2): err = %d\n", *err);
  *nNA = (int) nNA_ext;
  //Rprintf("End of cor1Fast (3): err = %d\n", *err);
  free(NAmean); free(nNAentries); free(multMat);
}

//===================================================================================================
// bicorrelation of a matrix with itself.
// This one uses matrix multiplication in BLAS to speed up calculation when there are no NA's
// and is threaded to speed up the rest of the calculation.
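Throughout these functions the `quick` argument controls how tolerant the fast path is of missing data: each thread computes `maxDiffNA = (size_t) (quick * nr)`, and a pair of columns is re-done the slow way only when one of them has more than `maxDiffNA` missing entries. A small sketch of that decision, with illustrative names:

```c
#include <math.h>
#include <stddef.h>

/* Number of missing entries a column may have while keeping its fast-path
   (BLAS crossproduct) result. */
size_t maxTolerated(double quick, size_t nr)
{
  return (size_t) (quick * nr);
}

/* Count missing entries in a column (the package uses R's ISNAN; plain
   isnan is used here to keep the sketch self-contained). */
size_t countMissing(const double * col, size_t nr)
{
  size_t n = 0;
  for (size_t k=0; k<nr; k++) if (isnan(col[k])) n++;
  return n;
}

/* A pair is recalculated the slow way if either column exceeds the tolerance. */
int pairNeedsSlowCalc(size_t nNAi, size_t nNAj, size_t maxDiffNA)
{
  return (nNAi > maxDiffNA) || (nNAj > maxDiffNA);
}
```

With `quick = 0` (the safest setting) every pair touching a missing value is recalculated exactly; larger values trade accuracy for speed.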
//===================================================================================================

void bicor1Fast(double * x, int * nrow, int * ncol, double * maxPOutliers,
            double * quick, int * fallback, int * cosine,
            double * result, int *nNA, int * err, int * warn,
            int * nThreads,
            int * verbose, int * indent)
{
  size_t nr = *nrow, nc = *ncol;

  char spaces[2* *indent+1];
  for (int i=0; i<2* *indent; i++) spaces[i] = ' ';
  spaces[2* *indent] = '\0';

  *nNA = 0;
  *warn = noWarning;
  *err = 0;

  size_t nNA_ext = 0;

  // Allocate space for various variables

  double * multMat;
  size_t * nNAentries;
  int *NAmed;

  if ( (multMat = (double *) malloc(nc*nr * sizeof(double)))==NULL )
  {
    *err = 1;
    Rprintf("cor1: memory allocation error. If possible, please decrease block size.\n");
    return;
  }

  // Number of NA entries in each column

  if ( (nNAentries = (size_t *) malloc(nc * sizeof(size_t)))==NULL )
  {
    free(multMat);
    *err = 1;
    Rprintf("cor1: memory allocation error. The needed block is relatively small... suspicious.\n");
    return;
  }

  // Flag indicating whether the mean of each column is NA

  if ( (NAmed = (int *) malloc(nc * sizeof(int)))==NULL )
  {
    free(nNAentries); free(multMat);
    *err = 1;
    Rprintf("cor1: memory allocation error. The needed block is relatively small... suspicious.\n");
    return;
  }

  // Decide how many threads to use
  int nt = useNThreads( nc*nc, *nThreads);

  if (*verbose)
  {
    if (nt > 1)
      Rprintf("%s..will use %d parallel threads.\n", spaces, nt);
    else
      Rprintf("%s..will not use multithreading.\n", spaces);
  }

  double * aux[MxThreads];
  for (int t=0; t < nt; t++)
  {
    if ( (aux[t] = (double *) malloc(6*nr * sizeof(double)))==NULL)
    {
      *err = 1;
      Rprintf("cor1: memory allocation error. The needed block is very small... suspicious.\n");
      for (int tt = t-1; tt>=0; tt--) free(aux[tt]);
      free(NAmed); free(nNAentries); free(multMat);
      return;
    }
  }

  // Put the general data of the correlation calculation into a structure that can be passed on to
  // threads.

  cor1ThreadData thrdInfo[MxThreads];
  for (int t = 0; t < nt; t++)
  {
     thrdInfo[t].x = x;
     thrdInfo[t].weights = NULL;
     thrdInfo[t].nr = nr;
     thrdInfo[t].nc = nc;
     thrdInfo[t].multMat = multMat;
     thrdInfo[t].result = result;
     thrdInfo[t].nNAentries = nNAentries;
     thrdInfo[t].NAme = NAmed;
     thrdInfo[t].zeroMAD = 0;
     thrdInfo[t].warn = warn;   // point the pointer
     thrdInfo[t].aux = aux[t];
     thrdInfo[t].robust = 0;
     thrdInfo[t].fallback = *fallback;
     thrdInfo[t].quick = *quick;
     thrdInfo[t].cosine = *cosine;
     thrdInfo[t].maxPOutliers = *maxPOutliers;
     thrdInfo[t].id = t;
     thrdInfo[t].threaded = (nt > 1);
  }

  // Column preparation (calculation of the matrix to be multiplied) in a threaded form.

  colPrepThreadData cptd[MxThreads];
  pthread_t thr[MxThreads];
  int status[MxThreads];

  pthread_mutex_t mutex1 = PTHREAD_MUTEX_INITIALIZER;

  progressCounter pc;
  pc.i = 0;
  pc.n = nc;

  // Rprintf("Preparing columns...\n");
  for (int t=0; t<nt; t++)
  {
    cptd[t].x = thrdInfo + t;
    cptd[t].pc = &pc;
    cptd[t].lock = &mutex1;
    status[t] = pthread_create_c(&thr[t], NULL, threadPrepColBicor, (void *) &cptd[t],
                                 thrdInfo[t].threaded);
  }
  for (int t=0; t<nt; t++) pthread_join_c(thr[t], NULL, thrdInfo[t].threaded);

  // Check for zero MAD's reported by the threads.
  int pearson = 0;
  for (int t=0; t<nt; t++) if (thrdInfo[t].zeroMAD > 0)
  {
    pearson = 1;
    if (*verbose)
      Rprintf("Warning in bicor(x): Thread %d (of %d) reported zero MAD in column %d. %s",
              t, nt, thrdInfo[t].zeroMAD, "Switching to Pearson correlation.\n");
  }

  if (pearson==1) // Re-do all column preparations using Pearson preparation.
  {
    // Set fallback to 4 for slow calculations below.
    for (int t = 0; t < nt; t++) thrdInfo[t].fallback = 4;

    pthread_mutex_t mutex2 = PTHREAD_MUTEX_INITIALIZER;
    pc.i = 0;
    pc.n = nc;
    for (int t=0; t<nt; t++)
    {
      cptd[t].x = thrdInfo + t;
      cptd[t].pc = &pc;
      cptd[t].lock = &mutex2;
      status[t] = pthread_create_c(&thr[t], NULL, threadPrepColCor, (void *) &cptd[t],
                                   thrdInfo[t].threaded);
    }
    for (int t=0; t<nt; t++) pthread_join_c(thr[t], NULL, thrdInfo[t].threaded);
  }

  /* ... */

  for (int t = nt-1; t >= 0; t--) free(aux[t]);

  *nNA = (int) nNA_ext;

  free(NAmed); free(nNAentries); free(multMat);
}

//===================================================================================================
//
// Two-variable bicorrelation. Basically the same as bicor1, just must calculate the whole matrix.
// If robustX,Y is zero, the corresponding variable will be treated as in pearson correlation.
//
//===================================================================================================

void bicorFast(double * x, int * nrow, int * ncolx, double * y, int * ncoly,
           int * robustX, int * robustY, double *maxPOutliers,
           double * quick, int * fallback,
           int * cosineX, int * cosineY,
           double * result, int *nNA, int * err,
           int * warnX, int * warnY,
           int * nThreads,
           int * verbose, int * indent)
{
  size_t nr = *nrow, ncx = *ncolx, ncy = *ncoly;

  char spaces[2* *indent+1];
  for (int i=0; i<2* *indent; i++) spaces[i] = ' ';
  spaces[2* *indent] = '\0';

  *warnX = noWarning;
  *warnY = noWarning;
  *err = 0;

  size_t nNA_ext = 0;

  double * multMatX, * multMatY;
  size_t * nNAentriesX, * nNAentriesY;
  int *NAmedX, *NAmedY;

  // Rprintf("nr: %d, ncx: %d, ncy: %d\n", nr, ncx, ncy);
  // Rprintf("robustX: %d, robustY: %d, cosineX: %d, cosineY: %d\n", *robustX, *robustY, *cosineX, *cosineY);
  // Rprintf("quick: %12.6f, maxPOutliers: %12.6f\n", *quick, *maxPOutliers);
  // Rprintf("Last few entries of x:\n");

  /* ... */

  // Decide how many threads to use
  int nt = useNThreads( ncx*ncy, *nThreads);

  if (*verbose)
  {
    if (nt > 1)
      Rprintf("%s..will use %d parallel threads.\n", spaces, nt);
    else
      Rprintf("%s..will not use multithreading.\n", spaces);
  }

  double * aux[MxThreads];
  for (int t=0; t < nt; t++)
  {
    if ( (aux[t] = (double *) malloc(6*nr * sizeof(double)))==NULL)
    {
      *err = 1;
      Rprintf("cor1: memory allocation error. The needed block is very small... suspicious.\n");
      for (int tt = t-1; tt>=0; tt--) free(aux[tt]);
      free(NAmedY); free(NAmedX); free(nNAentriesY); free(nNAentriesX); free(multMatY); free(multMatX);
      return;
    }
  }

  cor1ThreadData thrdInfoX[MxThreads];
  cor1ThreadData thrdInfoY[MxThreads];
  cor2ThreadData thrdInfo[MxThreads];

  for (int t = 0; t < nt; t++)
  {
     thrdInfoX[t].x = x;
     thrdInfoX[t].weights = NULL;
     thrdInfoX[t].nr = nr;
     thrdInfoX[t].nc = ncx;
     thrdInfoX[t].multMat = multMatX;
     thrdInfoX[t].result = result;
     thrdInfoX[t].nNAentries = nNAentriesX;
     thrdInfoX[t].NAme = NAmedX;
     thrdInfoX[t].zeroMAD = 0;
     thrdInfoX[t].aux = aux[t];
     thrdInfoX[t].robust = *robustX;
     thrdInfoX[t].fallback = *fallback;
     thrdInfoX[t].maxPOutliers = *maxPOutliers;
     thrdInfoX[t].quick = *quick;
     thrdInfoX[t].cosine = *cosineX;
     thrdInfoX[t].warn = warnX;
     thrdInfoX[t].id = t;
     thrdInfoX[t].threaded = (nt > 1);

     thrdInfoY[t].x = y;
     thrdInfoY[t].weights = NULL;
     thrdInfoY[t].nr = nr;
     thrdInfoY[t].nc = ncy;
     thrdInfoY[t].multMat = multMatY;
     thrdInfoY[t].result = result;
     thrdInfoY[t].nNAentries = nNAentriesY;
     thrdInfoY[t].NAme = NAmedY;
     thrdInfoY[t].zeroMAD = 0;
     thrdInfoY[t].aux = aux[t] + 3 * nr;
     thrdInfoY[t].robust = *robustY;
     thrdInfoY[t].fallback = *fallback;
     thrdInfoY[t].maxPOutliers = *maxPOutliers;
     thrdInfoY[t].quick = *quick;
     thrdInfoY[t].cosine = *cosineY;
     thrdInfoY[t].warn = warnY;
     thrdInfoY[t].id = t;
     thrdInfoY[t].threaded = (nt > 1);

     thrdInfo[t].x = thrdInfoX + t;
     thrdInfo[t].y = thrdInfoY + t;
  }

  // Prepare the multMat columns in X and Y

  // Rprintf(" ..preparing columns in x\n");

  colPrepThreadData cptd[MxThreads];
  pthread_t thr[MxThreads];
  int status[MxThreads];

  progressCounter pcX, pcY;
  int pearsonX = 0, pearsonY = 0;

  // Prepare columns in X

  pthread_mutex_t mutex1 = PTHREAD_MUTEX_INITIALIZER;

  pcX.i = 0;
  pcX.n = ncx;
  for (int t=0; t<nt; t++)
  {
    cptd[t].x = thrdInfoX + t;
    cptd[t].pc = &pcX;
    cptd[t].lock = &mutex1;
    status[t] = pthread_create_c(&thr[t], NULL, threadPrepColBicor, (void *) &cptd[t],
                                 thrdInfoX[t].threaded);
  }
  for (int t=0; t<nt; t++) pthread_join_c(thr[t], NULL, thrdInfoX[t].threaded);

  // Check for zero MAD's in x.
  for (int t=0; t<nt; t++) if (thrdInfoX[t].zeroMAD > 0)
  {
    pearsonX = 1;
    if (*verbose)
      Rprintf("Warning in bicor(x, y): thread %d of %d reported zero MAD in column %d of x. %s",
              t, nt, thrdInfoX[t].zeroMAD, "Switching to Pearson calculation for x.\n");
  }
  if (pearsonX==1) // Re-do all column preparations
  {
    for (int t = 0; t < nt; t++) thrdInfoX[t].fallback = 4;

    pthread_mutex_t mutex2 = PTHREAD_MUTEX_INITIALIZER;
    pcX.i = 0;
    pcX.n = ncx;
    for (int t=0; t<nt; t++)
    {
      cptd[t].x = thrdInfoX + t;
      cptd[t].pc = &pcX;
      cptd[t].lock = &mutex2;
      status[t] = pthread_create_c(&thr[t], NULL, threadPrepColCor, (void *) &cptd[t],
                                   thrdInfoX[t].threaded);
    }
    for (int t=0; t<nt; t++) pthread_join_c(thr[t], NULL, thrdInfoX[t].threaded);
  }

  // Prepare columns in Y

  // Rprintf(" ..preparing columns in y\n");
  pcY.i = 0;
  pcY.n = ncy;
  for (int t=0; t<nt; t++)
  {
    cptd[t].x = thrdInfoY + t;
    cptd[t].pc = &pcY;
    cptd[t].lock = &mutex1;
    status[t] = pthread_create_c(&thr[t], NULL, threadPrepColBicor, (void *) &cptd[t],
                                 thrdInfoY[t].threaded);
  }
  for (int t=0; t<nt; t++) pthread_join_c(thr[t], NULL, thrdInfoY[t].threaded);

  // Check for zero MAD's in y.
  for (int t=0; t<nt; t++) if (thrdInfoY[t].zeroMAD > 0)
  {
    pearsonY = 1;
    if (*verbose)
      Rprintf("Warning in bicor(x, y): thread %d of %d reported zero MAD in column %d of y. %s",
              t, nt, thrdInfoY[t].zeroMAD, "Switching to Pearson calculation for y.\n");
  }
  if (pearsonY==1) // Re-do all column preparations
  {
    for (int t = 0; t < nt; t++) thrdInfoY[t].fallback = 4;

    pthread_mutex_t mutex2Y = PTHREAD_MUTEX_INITIALIZER;
    pcY.i = 0;
    pcY.n = ncy;
    for (int t=0; t<nt; t++)
    {
      cptd[t].x = thrdInfoY + t;
      cptd[t].pc = &pcY;
      cptd[t].lock = &mutex2Y;
      status[t] = pthread_create_c(&thr[t], NULL, threadPrepColCor, (void *) &cptd[t],
                                   thrdInfoY[t].threaded);
    }
    for (int t=0; t<nt; t++) pthread_join_c(thr[t], NULL, thrdInfoY[t].threaded);
  }

  /* ... */

  for (int t = nt-1; t >= 0; t--) free(aux[t]);

  free(NAmedY); free(NAmedX); free(nNAentriesY); free(nNAentriesX); free(multMatY); free(multMatX);
}

/*======================================================================================================
 *
 * corFast: fast correlation of 2 matrices
 *
 *======================================================================================================

 One important note: if weights_x is not NULL, weights_y is also assumed to be valid.

*/

void corFast(double * x, int * nrow, int * ncolx, double * y, int * ncoly,
           double * weights_x, double * weights_y,
           double * quick, int * cosineX, int * cosineY,
           double * result, int *nNA, int * err,
           int * nThreads,
           int * verbose, int * indent)
{
  size_t nr = *nrow, ncx = *ncolx, ncy = *ncoly;

  char spaces[2* *indent+1];
  for (int i=0; i<2* *indent; i++) spaces[i] = ' ';
  spaces[2* *indent] = '\0';

  size_t nNA_ext = 0;
  *err = 0;

  double * multMatX, * multMatY;
  size_t * nNAentriesX, * nNAentriesY;
  int *NAmeanX, *NAmeanY;

  if ( (weights_x == NULL) != (weights_y == NULL))
  {
    *err = 2;
    error("corFast: weights_x and weights_y must both be either NULL or non-NULL.\n");
  }

  if ( (multMatX = (double *) malloc(ncx*nr * sizeof(double)))==NULL )
  {
    *err = 1;
    Rprintf("cor(x,y): memory allocation error. If possible, please decrease block size.\n");
    return;
  }

  if ( (multMatY = (double *) malloc(ncy*nr * sizeof(double)))==NULL )
  {
    free(multMatX);
    *err = 1;
    Rprintf("cor(x,y): memory allocation error. If possible, please decrease block size.\n");
    return;
  }

  if ( (nNAentriesX = (size_t *) malloc(ncx * sizeof(size_t)))==NULL )
  {
    free(multMatY); free(multMatX);
    *err = 1;
    Rprintf("cor(x,y): memory allocation error. The needed block is relatively small... suspicious.\n");
    return;
  }

  if ( (nNAentriesY = (size_t *) malloc(ncy * sizeof(size_t)))==NULL )
  {
    free(nNAentriesX); free(multMatY); free(multMatX);
    *err = 1;
    Rprintf("cor(x,y): memory allocation error. The needed block is relatively small... suspicious.\n");
    return;
  }

  if ( (NAmeanX = (int *) malloc(ncx * sizeof(int)))==NULL )
  {
    free(nNAentriesY); free(nNAentriesX); free(multMatY); free(multMatX);
    *err = 1;
    Rprintf("cor(x,y): memory allocation error. The needed block is relatively small... suspicious.\n");
    return;
  }

  if ( (NAmeanY = (int *) malloc(ncy * sizeof(int)))==NULL )
  {
    free(NAmeanX); free(nNAentriesY); free(nNAentriesX); free(multMatY); free(multMatX);
    *err = 1;
    Rprintf("cor(x,y): memory allocation error. The needed block is relatively small... suspicious.\n");
    return;
  }

  // Decide how many threads to use
  int nt = useNThreads( ncx* ncy, *nThreads);

  if (*verbose)
  {
    if (nt > 1)
      Rprintf("%s..will use %d parallel threads.\n", spaces, nt);
    else
      Rprintf("%s..will not use multithreading.\n", spaces);
  }

  cor1ThreadData thrdInfoX[MxThreads];
  cor1ThreadData thrdInfoY[MxThreads];
  cor2ThreadData thrdInfo[MxThreads];

  for (int t = 0; t < nt; t++)
  {
     thrdInfoX[t].x = x;
     thrdInfoX[t].weights = weights_x;
     thrdInfoX[t].nr = nr;
     thrdInfoX[t].nc = ncx;
     thrdInfoX[t].multMat = multMatX;
     thrdInfoX[t].result = result;
     thrdInfoX[t].nNAentries = nNAentriesX;
     thrdInfoX[t].NAme = NAmeanX;
     thrdInfoX[t].quick = *quick;
     thrdInfoX[t].cosine = *cosineX;
     thrdInfoX[t].maxPOutliers = 1;
     thrdInfoX[t].id = t;
     thrdInfoX[t].threaded = (nt > 1);

     thrdInfoY[t].x = y;
     thrdInfoY[t].weights = weights_y;
     thrdInfoY[t].nr = nr;
     thrdInfoY[t].nc = ncy;
     thrdInfoY[t].multMat = multMatY;
     thrdInfoY[t].result = result;
     thrdInfoY[t].nNAentries = nNAentriesY;
     thrdInfoY[t].NAme = NAmeanY;
     thrdInfoY[t].quick = *quick;
     thrdInfoY[t].cosine = *cosineY;
     thrdInfoY[t].maxPOutliers = 1;
     thrdInfoY[t].id = t;
     thrdInfoY[t].threaded = (nt > 1);

     thrdInfo[t].x = thrdInfoX + t;
     thrdInfo[t].y = thrdInfoY + t;
  }

  // Prepare the multMat columns in X and Y

  colPrepThreadData cptd[MxThreads];
  pthread_t thr[MxThreads];
  int status[MxThreads];

  progressCounter pcX, pcY;

  // Prepare columns in X

  pthread_mutex_t mutex1 = PTHREAD_MUTEX_INITIALIZER;
  pcX.i = 0;
  pcX.n = ncx;
  for (int t=0; t<nt; t++)
  {
    cptd[t].x = thrdInfoX + t;
    cptd[t].pc = &pcX;
    cptd[t].lock = &mutex1;
    status[t] = pthread_create_c(&thr[t], NULL,
                                 weights_x==NULL ? threadPrepColCor : threadPrepColCor_weighted,
                                 (void *) &cptd[t], thrdInfoX[t].threaded);
  }
  for (int t=0; t<nt; t++) pthread_join_c(thr[t], NULL, thrdInfoX[t].threaded);

  /* ... */
}

WGCNA/src/pivot.c

#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <R.h>
#include "pivot.h"

void RprintV(double * v, size_t l)
{
  for (size_t i=0; i<l; i++) Rprintf("%5.3f ", v[i]);
  Rprintf("\n");
}

double vMax(double * v, size_t len)
{
  double mx = v[0];
  for (size_t i=1; i<len; i++)
    if (v[i] > mx) mx = v[i];
  return mx;
}

double vMin(double * v, size_t len)
{
  double mn = v[0];
  for (size_t i=1; i<len; i++)
    if (v[i] < mn) mn = v[i];
  return mn;
}

double pivot(double * v, size_t len, double target)
{
  // Rprintf("Entering pivot with len=%d and target=%f\n  ", len, target);
  // RprintV(v, len);
  if (len > 2)
  {
    // pick the pivot, say as the median of the first, middle and last
    size_t i1 = 0, i2 = len-1, i3 = (len-1)/2, ip;
    if (v[i1] <= v[i2])
    {
      if (v[i2] <= v[i3]) ip = i2;
      else if (v[i3] >= v[i1]) ip = i3;
      else ip = i1;
    } else {
      if (v[i1] <= v[i3]) ip = i1;
      else if (v[i2] <= v[i3]) ip = i3;
      else ip = i2;
    }
    // put ip at the end
    double vp = v[ip];
    v[ip] = v[len-1];
    v[len-1] = vp;
    // Rprintf("  pivot value: %5.3f, index: %d\n", vp, ip);
    // pivot everything else
    size_t bound = 0;
    for (size_t i=0; i<len-1; i++)
      if (v[i] < vp)
      {
        double vt = v[i]; v[i] = v[bound]; v[bound] = vt;
        bound++;
      }
    // put the pivot back between the two groups
    v[len-1] = v[bound];
    v[bound] = vp;
    double crit = target - bound;
    if ((crit < -1.0) || (crit > 1.0))
    {
      if (crit < 0) return pivot(v, bound, target);
      else return pivot(v+bound+1, len-bound-1, target-bound-1);
    }
    // Rprintf("vMax(v, bound): %5.3f, vMin(v+bound+1, len-bound-1): %5.3f, vp: %5.3f\n", vMax(v, bound),
    //         vMin(v+bound+1, len-bound-1), vp);
    if (crit < 0)
    {
      double v1 = vMax(v, bound);
      return (v1 *(-crit) + vp * (1+crit));
    }
    // else
    double v2 = vMin(v+bound+1, len-bound-1);
    return (vp * (1-crit) + v2 * crit);
  } else if (len==2) {
    // Rprintf("  Short v, returning a direct value.\n");
    double v1 = vMin(v, 2);
    double v2 = vMax(v, 2);
    if (target < 0) return v1;
    else if (target > 1) return v2;
    else return (target * v2 + (1-target) * v1);
  } else {
    // Rprintf("  length 1 v, returning a direct value.\n");
    return v[0];
  }
}

/*====================================================================================================
 *
 * Weighted pivot.
 *
 * arguments:
 * v is the vector of values;
 * from, to: indices between which to pivot. v[from],..., v[to-1] will be worked on.
 * target the quantile which is to be calculated;
 * w are the weights, assumed to be of length len;
 * csw is the cumulative sum of weights, csw[i] = sum(w, from=0, to=i)
 *
 * NOT FINISHED
 *
 *===================================================================================================*/

#define swap(a, b, temp) { temp = a; a = b; b = temp; }

double pivot_weighted(double * v, size_t from, size_t to, double target, double * w, double * csw)
{
  // Rprintf("Entering pivot with len=%d and target=%f\n  ", len, target);
  // RprintV(v, len);
  size_t len = to-from;
  if (len > 2)
  {
    // pick the pivot, say as the median of the first, middle and last
    size_t i1 = from, i2 = to-1, i3 = (from + to)/2, ip;
    if (v[i1] <= v[i2])
    {
      if (v[i2] <= v[i3]) ip = i2;
      else if (v[i3] >= v[i1]) ip = i3;
      else ip = i1;
    } else {
      if (v[i1] <= v[i3]) ip = i1;
      else if (v[i2] <= v[i3]) ip = i3;
      else ip = i2;
    }
    // put v[ip] at the end
    double vp, wp;
    swap(v[ip], v[to-1], vp);
    swap(w[ip], w[to-1], wp);
    // Rprintf("  pivot value: %5.3f, index: %d\n", vp, ip);
    // pivot everything else
    size_t bound = from;
    double temp;
    for (size_t i=from; i<to-1; i++)
      if (v[i] < vp)
      {
        swap(v[i], v[bound], temp);
        swap(w[i], w[bound], temp);
        bound++;
      }
    swap(v[bound], v[to-1], temp);
    swap(w[bound], w[to-1], temp);
    // re-create the cumulative sum of weights
    double prevCS = from > 0 ? csw[from-1] : 0;
    for (size_t i=from; i<to; i++) { prevCS += w[i]; csw[i] = prevCS; }
    double crit = target - csw[bound];
    if ((crit < -1.0) || (crit > 1.0))
    {
      if (crit < 0) return pivot(v, bound, target);
      else return pivot(v+bound+1, len-bound-1, target-bound-1);
    }
    // Rprintf("vMax(v, bound): %5.3f, vMin(v+bound+1, len-bound-1): %5.3f, vp: %5.3f\n", vMax(v, bound),
    //         vMin(v+bound+1, len-bound-1), vp);
    if (crit < 0)
    {
      double v1 = vMax(v, bound);
      return (v1 *(-crit) + vp * (1+crit));
    }
    // else
    double v2 = vMin(v+bound+1, len-bound-1);
    return (vp * (1-crit) + v2 * crit);
  } else if (len==2) {
    // Rprintf("  Short v, returning a direct value.\n");
    double v1 = vMin(v, 2);
    double v2 = vMax(v, 2);
    if (target < 0) return v1;
    else if (target > 1) return v2;
    else return (target * v2 + (1-target) * v1);
  } else {
    // Rprintf("  length 1 v, returning a direct value.\n");
    return v[0];
  }
}

/*
 *
 * This isn't needed for now.
 *
 * void testPivot(double * v, size_t * len, double * target, double * result)
 * {
 *   *result = pivot(v, *len, *target);
 * }
*/

/*****************************************************************************************************
 *
 * Implement order via qsort.
 *
 *****************************************************************************************************/

int compareOrderStructure(const orderStructure * os1, const orderStructure * os2)
{
  if (ISNAN(os1->val)) return 1;
  if (ISNAN(os2->val)) return -1;
  if (os1->val < os2->val) return -1;
  if (os1->val > os2->val) return 1;
  return 0;
}

void qorder_internal(double * x, size_t n, orderStructure * os)
{
  for (R_xlen_t i = 0; i < (R_xlen_t) n; i++)
  {
    (os+i)->val = *(x+i);
    (os+i)->index = i;
  }
  // Rprintf("qorder: calling qsort..");
  qsort(os, (size_t) n, sizeof(orderStructure),
        ((int (*) (const void *, const void *)) compareOrderStructure));
}

SEXP qorder(SEXP data)
{
  R_xlen_t n = Rf_xlength(data);
  // Rprintf("qorder: length of input data is %ld.\n", n);
  double * x = REAL(data);
  orderStructure * os = R_Calloc((size_t) n, orderStructure);
  qorder_internal(x, (size_t) n, os);
  SEXP ans;
  if (n<(size_t) 0x80000000)
  {
    // Rprintf("..returning integer order.\n");
    PROTECT (ans = allocVector(INTSXP, n));
    int * ansp = INTEGER(ans);
    for (R_xlen_t i = 0; i < n; i++) ansp[i] = (int) ((os+i)->index+1);
  } else {
    // Rprintf("..returning floating point (double) order.\n");
    PROTECT (ans = allocVector(REALSXP, n));
    double * ansp = REAL(ans);
    for (R_xlen_t i = 0; i < n; i++) ansp[i] = (double) ((os+i)->index+1);
  }
  R_Free(os);
  UNPROTECT(1);
  return ans;
}

WGCNA/src/compiling.h

/* Compiling:

gcc --std=c99 -fPIC -O3 -o functionsThreaded.so -shared -lpthread -lm \
  -I/usr/local/lib/R-2.7.1-goto/include \
  -DWITH_THREADS \
  corFunctions-common.c corFunctions.c corFunctions-parallel.c
  networkFunctions.c pivot.c

Home:

gcc --std=c99 -fPIC -O3 -o functionsThreaded.so -shared -lpthread -lm \
  -I/usr/local/lib/R-2.8.0-patched-2008-12-06-Goto/include \
  -DWITH_THREADS \
  corFunctions-common.c corFunctions.c corFunctions-parallel.c networkFunctions.c pivot.c

Without threads:

gcc --std=c99 -fPIC -O3 -o functions.so -shared -lpthread -lm \
  -I/usr/local/lib/R-2.8.0-patched-2008-12-06-Goto/include \
  corFunctions-common.c corFunctions.c corFunctions-parallel.c networkFunctions.c pivot.c

gcc --std=gnu99 --shared -Ic:/PROGRA~1/R/R-27~0PA/include -o functions.dll functions.c \
  -Lc:/PROGRA~1/R/R-27~0PA/bin -lR -lRblas

"C:\Program Files\R\R-2.7.0pat\bin\R.exe" CMD SHLIB functions.c -lRblas

*/

WGCNA/src/parallelQuantile.h

// "Parallel" quantile: for a list of numeric vectors or arrays, calculate a given quantile of each
// vector containing element 'i' of each component of the input list.
// NA's are treated as last.
//

#ifndef __parallelQuantile_h__
#define __parallelQuantile_h__

#include <Rcpp.h>
#include <vector>
#include <string>

#include "array.h"
#include "corFunctions-utils.h"

using namespace std;
using namespace Rcpp;

RcppExport SEXP parallelQuantile(SEXP data_s, SEXP prob_s);
RcppExport SEXP parallelMean(SEXP data_s, SEXP weight_s);
RcppExport SEXP parallelMin(SEXP data_s);
RcppExport SEXP minWhich_call(SEXP matrix_s, SEXP rowWise_s);
RcppExport SEXP quantileC_call(SEXP data_s, SEXP q_s);
RcppExport SEXP rowQuantileC_call(SEXP data_s, SEXP q_s);

void quantileC(double * data, int *nrow, int * ncol, double * q, double * res);
void rowQuantileC(double * data, int *nrow, int * ncol, double * q, double * res);

#endif

WGCNA/src/Makevars.win

PKG_LIBS = -lRblas

WGCNA/src/myMatrixMultiplication.h

#ifndef __myMatrixMultiplication_h__
#define __myMatrixMultiplication_h__

#include <stddef.h>
#include <stdlib.h>

/* For a symmetric matrix A, calculate A'A
 * result[i,j] = sum_k A[k,i] A[k,j]
 */

void squareSymmetricMatrix(const double * A, const size_t ncol, double * result);

#endif

WGCNA/src/conditionalThreading.h

/*
  Copyright (C) 2008 Peter Langfelder; parts based on R by R Development team

  This program is free software; you can redistribute it and/or modify
  it under the terms of the GNU General Public License as published by
  the Free Software Foundation; either version 2 of the License, or
  (at your option) any later version.

  This program is distributed in the hope that it will be useful,
  but WITHOUT ANY WARRANTY; without even the implied warranty of
  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
  GNU General Public License for more details.

  You should have received a copy of the GNU General Public License
  along with this program; if not, write to the Free Software
  Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/

#ifndef __conditionalThreading_h__
#define __conditionalThreading_h__

#define MxThreads 128

#ifdef WITH_THREADS

  // #warning Including pthread headers.
#include #include #else // define fake pthread functions so we don't have to put a #ifdef everywhere // // This prevents competing definitions of pthread types to be included #define _BITS_PTHREADTYPES_H typedef int pthread_mutex_t; typedef int pthread_t; typedef int pthread_attr_t; #define PTHREAD_MUTEX_INITIALIZER 0 static inline void pthread_mutex_lock ( pthread_mutex_t * lock ) { } static inline void pthread_mutex_unlock ( pthread_mutex_t * lock ) { } static inline int pthread_join ( pthread_t t, void ** p) { return 0; } #endif // Conditional pthread routines static inline void pthread_mutex_lock_c( pthread_mutex_t * lock, int threaded) { if (threaded) pthread_mutex_lock(lock); } static inline void pthread_mutex_unlock_c(pthread_mutex_t * lock, int threaded) { if (threaded) pthread_mutex_unlock(lock); } static inline int pthread_create_c(pthread_t *thread, const pthread_attr_t *attr, void *(*start_routine)(void*), void *arg, int threaded) { #ifdef WITH_THREADS if (threaded) return pthread_create(thread, attr, start_routine, arg); else #endif (*start_routine)(arg); return 0; } static inline int pthread_join_c(pthread_t thread, void * * value_ptr, int threaded) { if (threaded) return pthread_join(thread, (void * *) value_ptr); return 0; } #endif WGCNA/src/myMatrixMultiplication.c0000644000176200001440000000071213240405177016577 0ustar liggesusers/* For a symmatrix matrix A, calculate A'A * result[i,j] = sum_k A[k,i] A[k,j] */ #include "myMatrixMultiplication.h" void squareSymmetricMatrix(const double * A, const size_t ncol, double * result) { for (size_t i=0; i dims; string name_; public: #ifdef CheckDimensions TYPE value(size_t i) { if (i dims, size_t start=0); size_t nDim() { return dims.size(); } vector dim() { return dims; } size_t size() { return size_; } size_t length() { if (dims.size()==0) return 0; size_t prod = 1; for (size_t i=0; i table(); // returns frequencies but no values vector table(vector & values); // returns frequencies and values void 
copy2vector(size_t start, size_t length, vector & result); void copy2vector(size_t start, size_t length, vector & result); void colMWM(CLASS_NAME & minVal, INT_CLASS & which); void colQuantile(double q, dArray & quantile); void rowQuantile(double q, dArray & quantile); void sample(size_t size, CLASS_NAME & values, int replace = 0); // void sort(); // vector order(); // vector rank(); CLASS_NAME() { allocated = 0; data_ = (TYPE *) NULL; dims.clear(); } CLASS_NAME(size_t size) { initData(size); setDim(size); } CLASS_NAME(size_t size, TYPE value) { initData(size, value); setDim(size); } // CLASS_NAME(CLASS_NAME arr); // This constructor will copy the data from arr into *this ~CLASS_NAME() { if (allocated) { delete[] data_; allocated = 0; } } }; void CLASS_NAME::initData(size_t size) { size_ = size; data_ = new TYPE[size]; allocated = 1; dims.clear(); dims.push_back(size_); } void CLASS_NAME::initData(size_t size, TYPE val) { initData(size); for (size_t i=0; i<size; i++) data_[i] = val; } void CLASS_NAME::setDim(size_t length) { if (length > size_) throw (Exception("attempt to set linear dimension " + NumberToString(length) + " higher than size " + NumberToString(size()) + " in variable " + name())); else { dims.clear(); dims.push_back(length); } } void CLASS_NAME::setDim(size_t nrow, size_t ncol) { if (nrow*ncol > size()) throw (Exception("attempt to set matrix dimensions " + NumberToString(nrow) + ", " + NumberToString(ncol) + " higher than size " + NumberToString(size()) + " in variable " + name())); else { dims.clear(); dims.push_back(nrow); dims.push_back(ncol); } } void CLASS_NAME::setDim(size_t nrow, size_t ncol, size_t k) { if (nrow*ncol*k > size_) throw (Exception("attempt to set 3-dim CLASS_NAME dimensions " + NumberToString(nrow) + ", " + NumberToString(ncol) + ", " + NumberToString(k) + " higher than size " + NumberToString(size()) + " in variable " + name())); else { dims.clear(); dims.push_back(nrow); dims.push_back(ncol); dims.push_back(k); } } /* void CLASS_NAME::copyData(CLASS_NAME arr, size_t start, size_t length) { if (start >=
arr.length()) throw(Exception("attempt to copy non-existent data from variable" + arr.name())); if (length==-1) length = arr.length() - start; if (length > size()) throw(Exception("attempt to copy data larger than target CLASS_NAME size.")); } */ TYPE CLASS_NAME::max() { if (length()==0) throw(Exception(string("attempt to calculate max of an empty array."))); TYPE max = linValue(0); for (size_t i=1; i<length(); i++) if (ISNAN(max) || (linValue(i) > max)) max = linValue(i); return max; } TYPE CLASS_NAME::min() { if (length()==0) throw(Exception(string("attempt to calculate min of an empty array."))); TYPE min = linValue(0); for (size_t i=1; i<length(); i++) if (ISNAN(min) || (linValue(i) < min)) min = linValue(i); return min; } vector<size_t> CLASS_NAME::table(vector<TYPE> & values) { vector<size_t> counts; counts.clear(); values.clear(); for (size_t i=0; i CLASS_NAME::table() { vector<TYPE> values; return table(values); } void CLASS_NAME::setDim(vector<size_t> dims, size_t start) { size_t len = 1; for (size_t i=start; i<dims.size(); i++) len *= dims[i]; if (len > size()) throw(Exception(string("setDim: not enough space to accommodate given dimensions."))); this->dims.clear(); this->dims.reserve(dims.size()-start); for (size_t i=start; i<dims.size(); i++) this->dims.push_back(dims[i]); } void CLASS_NAME::copy2vector(size_t start, size_t length, vector & result) { if (start + length > this->length()) throw(Exception(string("copy2vector: start+length exceed the actual length of the array."))); result.clear(); // result.reserve(length); for (size_t i=start; i & result) { if (start + length > this->length()) throw(Exception(string("copy2vector: start+length exceed the actual length of the array."))); result.clear(); // result.reserve(length); for (size_t i=start; i length()) throw(Exception(string("Attempt to sample too many samples without replacement."))); values.setDim(size); for (size_t i=0; i #include #include int nProcessors(void); void cor1Fast(double * x, int * nrow, int * ncol, double * weights, double * quick, int * cosine, double * result, int *nNA, int * err, int * nThreads, int * verbose, int * indent); void bicor1Fast(double * x, int * nrow, int * ncol, double * maxPOutliers, double * quick, int * fallback,
int * cosine, double * result, int *nNA, int * err, int * warn, int * nThreads, int * verbose, int * indent); void bicorFast(double * x, int * nrow, int * ncolx, double * y, int * ncoly, int * robustX, int * robustY, double * maxPOutliers, double * quick, int * fallback, int * cosineX, int * cosineY, double * result, int *nNA, int * err, int * warnX, int * warnY, int * nThreads, int * verbose, int * indent); void corFast(double * x, int * nrow, int * ncolx, double * y, int * ncoly, double * weights_x, double * weights_y, double * quick, int * cosineX, int * cosineY, double * result, int *nNA, int * err, int * nThreads, int * verbose, int * indent); SEXP cor1Fast_call(SEXP x_s, SEXP weights, SEXP quick_s, SEXP cosine_s, SEXP nNA_s, SEXP err_s, SEXP nThreads_s, SEXP verbose_s, SEXP indent_s); SEXP corFast_call(SEXP x_s, SEXP y_s, SEXP weights_x_s, SEXP weights_y_s, SEXP quick_s, SEXP cosineX_s, SEXP cosineY_s, SEXP nNA_s, SEXP err_s, SEXP nThreads_s, SEXP verbose_s, SEXP indent_s); SEXP bicor1_call(SEXP x_s, SEXP maxPOutliers_s, SEXP quick_s, SEXP fallback_s, SEXP cosine_s, SEXP nNA_s, SEXP err_s, SEXP warn_s, SEXP nThreads_s, SEXP verbose_s, SEXP indent_s); SEXP bicor2_call(SEXP x_s, SEXP y_s, SEXP robustX_s, SEXP robustY_s, SEXP maxPOutliers_s, SEXP quick_s, SEXP fallback_s, SEXP cosineX_s, SEXP cosineY_s, SEXP nNA_s, SEXP err_s, SEXP warnX_s, SEXP warnY_s, SEXP nThreads_s, SEXP verbose_s, SEXP indent_s); #endif WGCNA/src/exceptions.h0000644000176200001440000000063513151333656014244 0ustar liggesusers/* Exception handling. Just adding a bit more information to the standard exception. 
*/ #ifndef __exception_h__ #define __exception_h__ #include <string> using namespace std; class Exception { protected: string _what; public: virtual string what() const throw() { return _what; } Exception(string wht) throw() { _what = wht; } ~Exception() throw() {} }; #endif WGCNA/src/Makevars0000644000176200001440000000006313103416622013371 0ustar liggesusersPKG_LIBS = -lpthread PKG_CPPFLAGS = -DWITH_THREADS WGCNA/src/array.h0000644000176200001440000001566714361660376013212 0ustar liggesusers// The name of this file ends in h so R CMD install doesn't compile it twice. Not very clean but works // for now. #include #include #include #include #include #include #include "exceptions.h" extern "C" { #include "corFunctions-utils.h" } #ifndef __array_cc__ #define __array_cc__ // #define ISNAN(x) false using namespace std; // define a class that can conveniently hold a big vector, matrix etc. #define NoDim -1 #define CheckDimensions string NumberToString(int n) { char s[100]; string ss; snprintf(s, 100, "%d", n); ss = s; return ss; } class indArray { protected: size_t * data_; size_t size_; int allocated; string name_; size_t val32[2]; size_t mask[8*sizeof(size_t)]; size_t invMask[8*sizeof(size_t)]; public: #ifdef CheckDimensions bool value(size_t i) { size_t ii = (i/(8*sizeof(size_t))); if (ii >= size_) throw(Exception(string("indArray::value: index out of range in variable") + name())); size_t j = (i % (8*sizeof(size_t))); return ((data_[ii] & mask[j]) != 0); } void value(size_t i, bool v) { size_t ii = (i/(8*sizeof(size_t))); if (ii >= size_) throw(Exception(string("indArray::value: index out of range in variable") + name())); size_t j = (i % (8*sizeof(size_t))); //cout << "value: i: " << i << ", ii: " << ii << ", j: " << j << ", v:" << v << endl; if (v) data_[ii] |= mask[j]; else data_[ii] &= invMask[j]; } #else bool value(size_t i) { size_t ii = (i/(8*sizeof(size_t))); size_t j = (i % (8*sizeof(size_t))); return ((data_[ii] & mask[j]) != 0); } void value(size_t i, bool v) {
size_t ii = (i/(8*sizeof(size_t))); size_t j = (i % (8*sizeof(size_t))); if (v) data_[ii] |= mask[j]; else data_[ii] &= invMask[j]; } #endif void name(string n) {name_ = n; } string name() {return name_; } void init(size_t size); void init(size_t size, bool value); size_t size() { return size_ * 8 * sizeof(size_t); } size_t * data() { return data_; } void show() { cout << "data_:"; for (size_t i=0; i v) { if (v.size()==0) throw(Exception(string("attempt to calculate max of an empty vector."))); double max = v[0]; for (size_t i=1; i max)) max = v[i]; return max; } int max(vector v) { if (v.size()==0) throw(Exception(string("attempt to calculate max of an empty vector."))); int max = v[0]; for (size_t i=1; i max) max = v[i]; return max; } double min(vector v) { if (v.size()==0) throw(Exception(string("attempt to calculate min of an empty vector."))); double min = v[0]; for (size_t i=1; i v) { if (v.size()==0) throw(Exception(string("attempt to calculate min of an empty vector."))); int min = v[0]; for (size_t i=1; i column; column.reserve(colLen); int err; double val; for (size_t i=0, col=0; i2) throw(Exception(string( "Row-wise quantiles are only defined for 2-dimensional arrays."))); vector dim1 = dim(); dim1.pop_back(); quant.setDim(dim1); } size_t rowLen = dim()[1], nrow = dim()[0]; if (rowLen==0) throw(Exception(string("rowQuantile: Row length is zero in variable") + name())); vector rowData; rowData.reserve(rowLen); int err; double val; for (size_t row=0; row column; column.reserve(colLen); int err; double val; for (size_t i=0, col=0; i #include #include #include "array.h" using namespace std; using namespace Rcpp; /* * * Main function parallelQuantile. I will assume that the R code made the necessary checks on the data being * non-empty and all components having the same length. 
* */ RcppExport SEXP parallelQuantile(SEXP data_s, SEXP prob_s) { BEGIN_RCPP List data_lst = List(data_s); NumericVector prob_v = NumericVector(prob_s); double prob = prob_v[0]; size_t nSets = data_lst.size(); // cout << "nSets: " << nSets << endl; vector data(nSets); data.clear(); for (size_t i=0; i data(nSets); data.clear(); for (size_t i=0; i data(nSets); data.clear(); for (size_t i=0; i data[set][i])) { min1 = data[set][i]; index1 = set; } minv[i] = min1; which[i] = index1+1; } // cout << "nElements: " << nElements << endl; minv.attr("dim") = data[0].attr("dim"); which.attr("dim") = data[0].attr("dim"); List out; out["min"] = minv; out["which"] = which; return(out); END_RCPP } // if rowWise is non-zero, the min and which will be by rows, otherwise by columns. // RcppExport SEXP minWhich_call(SEXP matrix_s, SEXP rowWise_s) { BEGIN_RCPP NumericMatrix matrix(matrix_s); size_t nrows = matrix.nrow(), ncols = matrix.ncol(); IntegerVector rowWise(rowWise_s); size_t nouter, ninner, outerStride, innerStride; if (rowWise[0] == 0) { nouter = ncols; ninner = nrows; outerStride = nrows; innerStride = 1; } else { nouter = nrows; ninner = ncols; outerStride = 1; innerStride = nrows; } NumericVector min(nouter), which(nouter); for (size_t i=0; i1)) throw(Exception(string("quantileC: given quantile is out of range 0 to 1."))); dArray quant; quant.wrap(res, nc); d.colQuantile(*q, quant); } catch (Exception & err) { Rprintf("Error in (compiled code) quantileC: %s\n", err.what().c_str()); } } void rowQuantileC(double * data, int *nrow, int * ncol, double * q, double * res) { try { int nr = *nrow, nc = *ncol; dArray d; d.wrap(data, nr, nc); if ((*q<0) || (*q>1)) throw(Exception(string("quantileC: given quantile is out of range 0 to 1."))); dArray quant; quant.wrap(res, nr); d.rowQuantile(*q, quant); } catch (Exception & err) { Rprintf("Error in (compiled code) quantileC: %s\n", err.what().c_str()); } } } // extern "C" RcppExport SEXP quantileC_call(SEXP data_s, SEXP q_s) { 
BEGIN_RCPP NumericMatrix data(data_s); // The dimensions below are defined as int to preserve compatibility with the original .C functions. int nrows = data.nrow(), ncols = data.ncol(); NumericVector q(q_s); NumericVector res(ncols); quantileC(&data[0], &nrows, &ncols, &q[0], &res[0]); return(res); END_RCPP } RcppExport SEXP rowQuantileC_call(SEXP data_s, SEXP q_s) { BEGIN_RCPP NumericMatrix data(data_s); // The dimensions below are defined as int to preserve compatibility with the original .C functions. int nrows = data.nrow(), ncols = data.ncol(); NumericVector q(q_s); NumericVector res(nrows); rowQuantileC(&data[0], &nrows, &ncols, &q[0], &res[0]); return(res); END_RCPP } WGCNA/src/corFunctions-typeDefs.h0000644000176200001440000000636513240405177016323 0ustar liggesusers/* Calculation of unweighted Pearson and biweight midcorrelation. Copyright (C) 2008 Peter Langfelder; parts based on R by R Development team This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. Some notes on handling of zero MAD: (.) in the threaded calculations, each column has its own NAmed, but the zeroMAD flag is one flag per thread. Thus, it should be zeroed out before the threaded calculation starts and checked at the end.
*/ #ifndef __corFunctions_internal_h__ #define __corFunctions_internal_h__ #define LDOUBLE long double typedef struct { volatile size_t i, n; } progressCounter; /* For each parallel operation will presumably need a separate structure to hold its * information, but can define a common structure holding the general information that is needed to * calculate correlation. Can keep two versions, one for calculating cor(x), one for cor(x,y). * Each specific thread-task specific struct can contain a pointer to the general structure. */ // General information for a [bi]cor(x) calculation typedef struct { double * x, * weights; size_t nr, nc; double * multMat, * result; double * aux; size_t *nNAentries; int *NAme; int zeroMAD; int * warn; double maxPOutliers; double quick; int robust, fallback; int cosine; int id; int threaded; // This flag will be used to indicate whether the calculation really is threaded. // For small problems it doesn't make sense to use threading. } cor1ThreadData; // General information for a [bi]cor(x,y) calculation typedef struct { cor1ThreadData * x, * y; } cor2ThreadData; // Information for column preparation typedef struct { cor1ThreadData * x; progressCounter * pc; pthread_mutex_t * lock; } colPrepThreadData; // Information for symmetrization typedef struct { cor1ThreadData * x; progressCounter * pc; } symmThreadData; // Information for threaded slow calculations for cor1 typedef struct { cor1ThreadData * x; progressCounter * pci, * pcj; size_t * nSlow, * nNA; pthread_mutex_t * lock; } slowCalcThreadData; /*============================================================================================== * * Threaded 2-variable versions of the correlation functions * *============================================================================================== */ typedef struct { cor2ThreadData * x; progressCounter * pci, *pcj; size_t * nSlow, * nNA; pthread_mutex_t * lock; double quick; } slowCalc2ThreadData; // Data for NAing out appropriate rows 
and columns typedef struct { cor2ThreadData * x; progressCounter * pci, *pcj; } NA2ThreadData; #endif WGCNA/src/networkFunctions.c0000644000176200001440000005530714534356476015461 0ustar liggesusers/* * Copyright (C) 2008 Peter Langfelder; parts based on R by R Development team This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. */ #include "networkFunctions.h" /* ============================================================================================= * * tomSimilarity * * ============================================================================================= * expr: the expression matrix, array of nSamples * nGenes doubles (NA's allowed) corType: CorTypePearson, CorTypeBicor, CorTypeSpearman tomType: TomTypeNone, TomTypeUnsigned, TomTypeSigned */ enum { CorTypePearson = 0, CorTypeBicor = 1, CorTypeSpearman = 2 }; enum { TomTypeNone = 0, TomTypeUnsigned = 1, TomTypeSigned = 2, TomTypeSignedNowick = 3, TomTypeUnsigned2 = 4, TomTypeSigned2 = 5, TomTypeSignedNowick2 = 6 }; enum { TomDenomMin = 0, TomDenomMean = 1 }; enum { AdjTypeUnsigned = 0, AdjTypeSigned = 1, AdjTypeHybrid = 2, AdjTypeUnsignedKeepSign = 3 }; #define MxStr 200 typedef char plString[MxStr]; const plString AdjErrors[] = {"No error. 
Just a placeholder.", "Standard deviation of some genes is zero.", "Unrecognized correlation type.", "Unrecognized adjacency type."}; void tomSimilarityFromAdj(double * adj, int * nGenes, int * tomType, int * denomType, int * suppressTOMForZeroAdj, int * suppressNegativeTOM, int * useInternalMatrixAlgebra, double * tom, int * verbose, int * indent); //=============================================================================================== // adjacency //=============================================================================================== void adjacency(double * expr, double * weights, int nSamples, int nGenes, int corType, int adjType, double power, double maxPOutliers, double quick, int fallback, int cosine, int replaceMissing, double * adj, int * errCode, int *warn, int * nThreads, int verbose, int indent) { size_t nElems = ((size_t) nGenes) * ((size_t ) nGenes); int nNA = 0; double replacementValue = 0; // Rprintf("Received nGenes: %d, nSamples: %d\n", nGenes, nSamples); // Rprintf("adjacency: adjType: %d\n", adjType); // Rprintf("adjacency: replaceMissing: %d\n", replaceMissing); int err = 0; switch (corType) { case CorTypePearson : // Rprintf("Calling cor_pairwise1..."); cor1Fast(expr, &nSamples, &nGenes, weights, &quick, &cosine, adj, &nNA, &err, nThreads, &verbose, &indent); // Rprintf("..done.\n"); if ((nNA > 0) && (!replaceMissing)) { * errCode = 1; return; } break; case CorTypeBicor : // Rprintf("Calling bicor1..."); bicor1Fast(expr, &nSamples, &nGenes, &maxPOutliers, &quick, &fallback, &cosine, adj, &nNA, &err, warn, nThreads, &verbose, &indent); // Rprintf("..done.\n"); if ((nNA > 0) && (!replaceMissing)) { // Rprintf("nNA: %d\n", nNA); * errCode = 1; return; } if (err>0) { // Rprintf("bicor1 returned err: %d\n", err); * errCode = 3; return; } break; default : * errCode = 2; return; } if ((*errCode==1) && replaceMissing) { Rprintf("Replacing missing adjacency values.\n"); *errCode = 0; if (adjType==AdjTypeSigned) replacementValue = -1; 
for (size_t i=0; i < nElems; i++) if (ISNAN(adj[i])) adj[i] = replacementValue; } // Rprintf("ADJ 1\n"); switch (adjType) { case AdjTypeUnsigned : for (size_t i=0; i < nElems; i++) adj[i] = pow(fabs(adj[i]), power); break; case AdjTypeUnsignedKeepSign : for (size_t i=0; i < nElems; i++) adj[i] = (signbit(adj[i])? -1: 1) * pow(fabs(adj[i]), power); break; case AdjTypeSigned : for (size_t i=0; i < nElems; i++) adj[i] = pow((1+adj[i])/2, power); break; case AdjTypeHybrid : for (size_t i=0; i < nElems; i++) adj[i] = adj[i] > 0 ? pow(adj[i], power) : 0; break; default : * errCode = 3; } } void testAdjacency(double * expr, double * weights, int * nSamples, int * nGenes, int * corType, int * adjType, double * power, double * maxPOutliers, double * quick, int * fallback, int * cosine, double * adj, int * errCode, int * warn, int * nThreads) { adjacency(expr, weights, * nSamples, * nGenes, * corType, * adjType, * power, *maxPOutliers, *quick, *fallback, *cosine, 0, adj, errCode, warn, nThreads, 1, 0); } /******************************************************************************************** * * checkAvailableMemory * ********************************************************************************************/ size_t checkAvailableMemory(void) { size_t guess; if ( sizeof (size_t)==4 ) guess = 16384; // 2^14 else guess = 131072; // power of 2 nearest to 100k int tooLarge = 1; double * pt; while ( (tooLarge) && (guess > 1000)) { // Rprintf("trying matrix of size %d\n", guess); tooLarge = ( (pt=malloc(guess*guess*sizeof(double))) == NULL ); if (tooLarge) guess = (guess * 3) / 4; // Rprintf("next size will be %d\n", guess); } if (!tooLarge) free(pt); // Rprintf("Returning %d.\n", guess * guess); return guess*guess; } //==================================================================================================== // TOM similarity from adjacency //==================================================================================================== void 
tomSimilarityFromAdj(double * adj, int * nGenes, int * tomType, int * denomType, int * suppressTOMForZeroAdj, int * suppressNegativeTOM, int * useInternalMatrixAlgebra, double * tom, int * verbose, int * indent) { size_t ng = (size_t) *nGenes; // int err = 0; char spaces[2* *indent+1]; for (int i=0; i<2* *indent; i++) spaces[i] = ' '; spaces[2* *indent] = '\0'; double * conn; if ( (conn = malloc(ng * sizeof(double))) == NULL) error("Memory allocation error (connectivity)"); if (*verbose > 0) Rprintf("%s..connectivity..\n", spaces); double *tom2 = tom; for (size_t gene = 0; gene < ng; gene++) { double * adj2 = (adj + gene * ng); // set diagonal to 1. *(adj2 + gene) = 1; // Calculate connectivity double sum = 0.0; for (size_t g2 = 0; g2 < ng; g2++) { sum += fabs(adj2[g2]); *tom2 = 0; tom2++; } conn[gene] = sum; } if (*useInternalMatrixAlgebra > 0) { if (*verbose > 0) Rprintf("%s..matrix multiplication (custom code)..\n", spaces); squareSymmetricMatrix(adj, *nGenes, tom); } else { if (*verbose > 0) Rprintf("%s..matrix multiplication (system BLAS)..\n", spaces); double alpha = 1.0, beta = 0.0; F77_NAME(dsyrk)("L", "N", nGenes, nGenes, & alpha, adj, nGenes, & beta, tom, nGenes FCONE FCONE); } if (*verbose > 0) Rprintf("%s..normalization..\n", spaces); // Rprintf("Using denomType %d\n", *denomType); tom2 = tom; double * adj2; size_t ng1 = ng-1; size_t nAbove1 = 0, nSuppressed = 0; if (*suppressTOMForZeroAdj) { Rprintf("%s..will suppress TOM for pairs of nodes with zero adjacency.\n", spaces); } int form = *tomType > TomTypeSignedNowick; switch (* tomType) { case TomTypeUnsigned: case TomTypeUnsigned2: for (size_t j=0; j< ng1; j++) { tom2 = tom + (ng+1)*j + 1; adj2 = adj + (ng+1)*j + 1; for (size_t i=j+1; i< ng; i++) { double den1; if ((* denomType) == TomDenomMin) den1 = fmin(conn[i], conn[j]); else den1 = (conn[i] + conn[j])/2; double den = den1 - * adj2; if (form > 0) { double r; if (den <= 1) r = 0; else r = (*tom2 - *adj2 * 2)/(den-1); *tom2 = (*adj2 + r)/2; } else
{ if (den==0) *tom2 = 0; else *tom2 = ( *tom2 - *adj2) / den ; } *(tom + ng*i + j) = *tom2; if (*tom2 > 1) nAbove1++; tom2++; adj2++; } } break; case TomTypeSigned: case TomTypeSigned2: for (size_t j=0; j < ng1; j++) { tom2 = tom + (ng+1)*j + 1; adj2 = adj + (ng+1)*j + 1; for (size_t i=j+1; i< ng; i++) { if ((*suppressTOMForZeroAdj == 0) || (*adj2 > 0)) { double den1; if ((* denomType) == TomDenomMin) den1 = fmin(conn[i], conn[j]); else den1 = (conn[i] + conn[j])/2; double den = den1 - fabs(*adj2); if (form > 0) { double r; if (den <= 1) r = 0; else r = (*tom2 - *adj2 * 2)/(den-1); *tom2 = fabs(*adj2 + r)/2; } else { if (den==0) *tom2 = 0; else *tom2 = fabs( *tom2 - *adj2) / den ; } *(tom + ng*i + j) = *tom2; if (*tom2 > 1) { Rprintf("TOM greater than 1: actual value: %f, i: %lu, j: %lu\n", *tom2, (long unsigned int) i, (long unsigned int) j); nAbove1++; } } else { *tom2 = 0; *(tom + ng*i + j) = 0; nSuppressed++; } tom2++; adj2++; } } break; case TomTypeSignedNowick: case TomTypeSignedNowick2: // Differs from the above only in one missing fabs and potential suppression of negative values // Rprintf("Calculating Nowick-type TOM. 
SuppressNegativeTOM: %d\n", *suppressNegativeTOM); for (size_t j=0; j < ng1; j++) { tom2 = tom + (ng+1)*j + 1; adj2 = adj + (ng+1)*j + 1; for (size_t i=j+1; i< ng; i++) { if ((*suppressTOMForZeroAdj == 0) || (*adj2 > 0)) { double den1; if ((* denomType) == TomDenomMin) den1 = fmin(conn[i], conn[j]); else den1 = (conn[i] + conn[j])/2; double den = den1 - fabs(*adj2); if (form > 0) { double r; if (den <= 1) r = 0; else r = (*tom2 - *adj2 * 2)/(den-1); *tom2 = (*adj2 + r)/2; } else { if (den==0) *tom2 = 0; else *tom2 = ( *tom2 - *adj2) / den ; } *(tom + ng*i + j) = *tom2; if (fabs(*tom2) > 1) { Rprintf("TOM greater than 1: actual value: %f, i: %lu, j: %lu\n", *tom2, (long unsigned int) i, (long unsigned int) j); nAbove1++; } } else { *tom2 = 0; *(tom + ng*i + j) = 0; nSuppressed++; } if (*suppressNegativeTOM && (*tom2 < 0)) { *tom2 = 0; *(tom + ng*i + j) = 0; } tom2++; adj2++; } } break; } if (nSuppressed > 0) Rprintf("%s.. %lu TOM elements were set to zero because of zero adjacencies.\n", spaces, (long unsigned int) nSuppressed); // Set the diagonal of tom to 1 for (size_t i=0; i 0) Rprintf("problem: %lu TOM entries are larger than 1.\n", (long unsigned int) nAbove1); free(conn); if (*verbose > 0) Rprintf("%s..done.\n", spaces); } //=========================================================================================== // // tomSimilarity // //=========================================================================================== // void tomSimilarity(double * expr, double * weights, int * nSamples, int * nGenes, int * corType, int * adjType, double * power, int * tomType, int * denomType, double * maxPOutliers, double * quick, int * fallback, int * cosine, int * replaceMissing, int * suppressTOMForZeroAdj, int * suppressNegativeTOM, int * useInternalMatrixAlgebra, double * tom, int * warn, int * nThreads, int * verbose, int * indent) { // Rprintf("Starting tomSimilarity...\n"); double * adj; int ng = *nGenes, ns = *nSamples; size_t matSize = ( (size_t)ng) * 
( (size_t) ng); int err = 0; char spaces[2* *indent+1]; for (int i=0; i<2* *indent; i++) spaces[i] = ' '; spaces[2* *indent] = '\0'; /* int size=4000; int success = 0; double * pt; while ( (pt=malloc(size*size*sizeof(double)))!=NULL ) { size+=1000; success = 1; free(pt); } size -= 1000; if ((*verbose > 0) && success) Rprintf("%sRough guide to maximum array size: about %d x %d array of doubles..\n", spaces, size, size); */ if (*verbose > 0) Rprintf("%sTOM calculation: adjacency..\n", spaces); if (* tomType==TomTypeNone) // just calculate adjacency. { adjacency(expr, weights, ns, ng, *corType, *adjType, *power, *maxPOutliers, *quick, *fallback, *cosine, *replaceMissing, tom, &err, warn, nThreads, *verbose, *indent); if (*verbose > 0) Rprintf("\n"); if (err) error("%s\n", AdjErrors[err]); return; } if ((adj = malloc(matSize * sizeof(double))) == NULL) error("Memory allocation error."); if ((* tomType == TomTypeSigned) && (* adjType == AdjTypeUnsigned)) * adjType = AdjTypeUnsignedKeepSign; if (*tomType == TomTypeSignedNowick) *adjType = AdjTypeUnsignedKeepSign; if ((* tomType == TomTypeUnsigned) && (* adjType == AdjTypeUnsignedKeepSign)) * adjType = AdjTypeUnsigned; adjacency(expr, weights, ns, ng, * corType, * adjType, * power, * maxPOutliers, * quick, *fallback, *cosine, *replaceMissing, adj, & err, warn, nThreads, *verbose, *indent); // Rprintf("TOM 1\n"); if (err) { Rprintf("TOM: exit because 'adjacency' reported an error.\n"); free(adj); error("%s\n", AdjErrors[err]); } else { tomSimilarityFromAdj(adj, nGenes, tomType, denomType, suppressTOMForZeroAdj, suppressNegativeTOM, useInternalMatrixAlgebra, tom, verbose, indent); free(adj); } } /*====================================================================================================== Function returning the column-wise minimum and minimum index. For easier integration with R, the index will also be stored as a double. NA's are ignored.
========================================================================================================*/ void minWhichMin(double * matrix, int * nRows, int * nColumns, double * min, double * whichMin) { int nrows = *nRows, ncols = *nColumns; for (size_t i=0; i 0) mean[i] = sum/count; else mean[i] = NA_REAL; } } // Version to be called via .Call // SEXP tomSimilarityFromAdj_call(SEXP adj_s, SEXP tomType_s, SEXP denomType_s, SEXP suppressTOMForZeroAdj_s, SEXP suppressNegativeTOM_s, SEXP useInternalMatrixAlgebra_s, SEXP verbose_s, SEXP indent_s) { // Rprintf("Step 1\n"); SEXP dim, tom_s; int *nGenes, *verbose, *indent; int *tomType, *denomType, *suppressTOMForZeroAdj, *suppressNegativeTOM, *useInternalMatrixAlgebra; double *adj, *tom; // Rprintf("Step 2\n"); /* Get dimensions of 'expr'. */ PROTECT(dim = getAttrib(adj_s, R_DimSymbol)); nGenes = INTEGER(dim); if (*nGenes!= *(nGenes+1)) { UNPROTECT(1); error("Input adjacency is not symmetric."); } // nGenes = INTEGER(dim)+1; adj = REAL(adj_s); // Rprintf("Step 3\n"); tomType = INTEGER(tomType_s); denomType = INTEGER(denomType_s); suppressTOMForZeroAdj = INTEGER(suppressTOMForZeroAdj_s); suppressNegativeTOM = INTEGER(suppressNegativeTOM_s); useInternalMatrixAlgebra = INTEGER(useInternalMatrixAlgebra_s); verbose = INTEGER(verbose_s); indent = INTEGER(indent_s); // Rprintf("Step 4\n"); PROTECT(tom_s = allocMatrix(REALSXP, *nGenes, *nGenes)); tom = REAL(tom_s); // Rprintf("Calling tomSimilarity...\n"); tomSimilarityFromAdj(adj, nGenes, tomType, denomType, suppressTOMForZeroAdj, suppressNegativeTOM, useInternalMatrixAlgebra, tom, verbose, indent); // Rprintf("Returned from tomSimilarity...\n"); UNPROTECT(2); return tom_s; } // Version to be called via .Call // SEXP tomSimilarity_call(SEXP expr_s, SEXP weights_s, SEXP corType_s, SEXP adjType_s, SEXP power_s, SEXP tomType_s, SEXP denomType_s, SEXP maxPOutliers_s, SEXP quick_s, SEXP fallback_s, SEXP cosine_s, SEXP replaceMissing_s, SEXP suppressTOMForZeroAdj_s, SEXP 
suppressNegativeTOM_s, SEXP useInternalMatrixAlgebra_s, SEXP warn_s, // This is an "output" variable SEXP nThreads_s, SEXP verbose_s, SEXP indent_s) { // Rprintf("Step 1\n"); SEXP dim, tom_s; int *nSamples, *nGenes, *fallback, *cosine, *warn, *nThreads, *verbose, *indent; int *corType, *adjType, *tomType, *denomType, *replaceMissing, *suppressTOMForZeroAdj, *suppressNegativeTOM, *useInternalMatrixAlgebra; double *expr, *weights, *power, *quick, *tom, *maxPOutliers; // Rprintf("Step 2\n"); /* Get dimensions of 'expr'. */ PROTECT(dim = getAttrib(expr_s, R_DimSymbol)); nSamples = INTEGER(dim); nGenes = INTEGER(dim)+1; expr = REAL(expr_s); weights = isNull(weights_s)? NULL : REAL(weights_s); // Rprintf("Step 3\n"); corType = INTEGER(corType_s); adjType = INTEGER(adjType_s); tomType = INTEGER(tomType_s); denomType = INTEGER(denomType_s); fallback = INTEGER(fallback_s); cosine = INTEGER(cosine_s); replaceMissing = INTEGER(replaceMissing_s); suppressTOMForZeroAdj = INTEGER(suppressTOMForZeroAdj_s); suppressNegativeTOM = INTEGER(suppressNegativeTOM_s); useInternalMatrixAlgebra = INTEGER(useInternalMatrixAlgebra_s); warn = INTEGER(warn_s); nThreads = INTEGER(nThreads_s); verbose = INTEGER(verbose_s); indent = INTEGER(indent_s); power = REAL(power_s); quick = REAL(quick_s); maxPOutliers = REAL(maxPOutliers_s); // Rprintf("Step 4\n"); PROTECT(tom_s = allocMatrix(REALSXP, *nGenes, *nGenes)); tom = REAL(tom_s); // Rprintf("Calling tomSimilarity...\n"); tomSimilarity(expr, weights, nSamples, nGenes, corType, adjType, power, tomType, denomType, maxPOutliers, quick, fallback, cosine, replaceMissing, suppressTOMForZeroAdj, suppressNegativeTOM, useInternalMatrixAlgebra, tom, warn, nThreads, verbose, indent); // Rprintf("Returned from tomSimilarity...\n"); UNPROTECT(2); return tom_s; } void checkAvailableMemoryForR(double * size) { *size = 1.0 * checkAvailableMemory() ; } /* ============================================================================================= * * Register 
native routines here. * * =============================================================================================*/ void R_init_WGCNA(DllInfo * info) { static const R_CallMethodDef callMethods[] = { {"tomSimilarity_call", (DL_FUNC) &tomSimilarity_call, 19}, {"tomSimilarityFromAdj_call", (DL_FUNC) &tomSimilarityFromAdj_call, 8}, {"cor1Fast_call", (DL_FUNC) &cor1Fast_call, 9}, {"bicor1_call", (DL_FUNC) &bicor1_call, 11}, {"bicor2_call", (DL_FUNC) &bicor2_call, 16}, {"corFast_call", (DL_FUNC) &corFast_call, 12}, {"parallelQuantile", (DL_FUNC) &parallelQuantile, 2}, {"parallelMean", (DL_FUNC) &parallelMean, 2}, {"parallelMin", (DL_FUNC) &parallelMin, 1}, {"minWhich_call", (DL_FUNC) &minWhich_call, 2}, {"quantileC_call", (DL_FUNC) &quantileC_call, 2}, {"rowQuantileC_call", (DL_FUNC) &rowQuantileC_call, 2}, {"qorder", (DL_FUNC) &qorder, 1}, {NULL, NULL, 0} }; static R_NativePrimitiveArgType checkAvailableMemoryForR_t[] = { REALSXP }; static const R_CMethodDef CMethods[] = { {"checkAvailableMemoryForR", (DL_FUNC) &checkAvailableMemoryForR, 1, checkAvailableMemoryForR_t}, {NULL, NULL, 0} }; R_registerRoutines(info, CMethods, callMethods, NULL, NULL); R_useDynamicSymbols(info, FALSE); } WGCNA/src/networkFunctions.h0000644000176200001440000000235514533632240015442 0ustar liggesusers /* Copyright (C) 2008 Peter Langfelder; parts based on R by R Development team This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. */ #ifndef __networkFunctions_h__ #define __networkFunctions_h__ #include #include #include #include #include #include #include #include #define LDOUBLE long double #include "parallelQuantile_stdC.h" #include "pivot_declarations.h" #include "corFunctions-utils.h" #include "corFunctions.h" #include "myMatrixMultiplication.h" size_t checkAvailableMemory(void); #endif WGCNA/src/pivot.h0000644000176200001440000000075713240405177013226 0ustar liggesusers#ifndef __pivot_h__ #define __pivot_h__ #include #include void RprintV(double * v, size_t l); double vMax(double * v, size_t len); double vMin(double * v, size_t len); double pivot(double * v, size_t len, double target); typedef struct { double val; size_t index; } orderStructure; int compareOrderStructure(const orderStructure * os1, const orderStructure * os2); void qorder_internal(double * x, size_t n, orderStructure * os); SEXP qorder(SEXP data); #endif WGCNA/NAMESPACE0000644000176200001440000000303514356162617012344 0ustar liggesusersuseDynLib(WGCNA) exportPattern("^[^\\.]") importFrom(Rcpp, evalCpp) import(foreach, doParallel, dynamicTreeCut, fastcluster, GO.db, AnnotationDbi, matrixStats) importFrom(survival, is.Surv, coxph, Surv) importFrom(parallel, stopCluster) #importFrom(AnnotationDbi, mappedkeys) importFrom(grDevices, dev.new) importFrom(utils, packageVersion, compareVersion) importFrom(parallel, detectCores) importFrom(Hmisc, rcorr.cens, errbar) importFrom(impute, impute.knn) importFrom(splines, ns) importFrom(preprocessCore, normalize.quantiles) # Imports of previously automatically available functions... 
importFrom("grDevices", "dev.off", "heat.colors", "pdf", "rgb", "terrain.colors") importFrom("graphics", "abline", "axis", "barplot", "box", "boxplot", "frame", "hist", "image", "layout", "lines", "mtext", "pairs", "panel.smooth", "par", "plot", "polygon", "rect", "segments", "strheight", "strwidth", "text", "title", "points") importFrom("stats", "anova", "as.dendrogram", "as.dist", "as.hclust", "coef", "cov", "cutree", "dist", "fisher.test", "glm", "heatmap", "kruskal.test", "lm", "median", "model.matrix", "na.exclude", "order.dendrogram", "pchisq", "phyper", "pnorm", "predict", "pt", "qnorm", "quantile", "reorder", "residuals", "rexp", "rnorm", "runif", "sd", "smooth.spline", "t.test", "var", "weighted.mean", "p.adjust", "setNames") importFrom("utils", "data", "flush.console", "read.csv", "write.csv", "write.table", "object.size") WGCNA/inst/0000755000176200001440000000000014533632240012070 5ustar liggesusersWGCNA/inst/CITATION0000644000176200001440000000173214533632240013230 0ustar liggesuserscitHeader("To cite WGCNA in publications use:") citFooter("We have invested a lot of time and effort in creating the package,", "please cite it when using it for data analysis.") bibentry(bibtype = "Article", author = c(as.person("Peter Langfelder"), as.person("Steve Horvath")), title = "WGCNA: an R package for weighted correlation network analysis", journal = "BMC Bioinformatics", year = "2008", number = "1", pages = "559", PubMedID = "19114008", url = "https://bmcbioinformatics.biomedcentral.com/articles/10.1186/1471-2105-9-559" ) bibentry(bibtype = "Article", title = "Fast {R} Functions for Robust Correlations and Hierarchical Clustering", author = c(as.person("Peter Langfelder"), as.person("Steve Horvath")), journal = "Journal of Statistical Software", year = "2012", volume = "46", number = "11", pages = "1--17", url = "https://www.jstatsoft.org/v46/i11/" ) WGCNA/build/0000755000176200001440000000000014672545352012225 5ustar 
liggesusersWGCNA/build/partial.rdb0000644000176200001440000000010114672545352014342 0ustar liggesusers‹‹àb```b`aad`b1…À€…‰‘…“5/17µ˜A"Éðh¿eÍ7WGCNA/man/0000755000176200001440000000000014672545314011677 5ustar liggesusersWGCNA/man/coClustering.permutationTest.Rd0000644000176200001440000000746414012015545020032 0ustar liggesusers\name{coClustering.permutationTest} \alias{coClustering.permutationTest} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Permutation test for co-clustering } \description{ This function calculates permutation Z statistics that measure how different the co-clustering of modules in a reference and test clusterings is from random. } \usage{ coClustering.permutationTest( clusters.ref, clusters.test, tupletSize = 2, nPermutations = 100, unassignedLabel = 0, randomSeed = 12345, verbose = 0, indent = 0) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{clusters.ref}{ Reference input clustering. A vector in which each element gives the cluster label of an object. } \item{clusters.test}{ Test input clustering. Must be a vector of the same size as \code{cluster.ref}. } \item{tupletSize}{ Co-clutering tuplet size. } \item{nPermutations}{ Number of permutations to execute. Since the function calculates parametric p-values, a relatively small number of permutations (at least 50) should be sufficient. } \item{unassignedLabel}{ Optional specification of a clustering label that denotes unassigned objects. Objects with this label are excluded from the calculation. } \item{randomSeed}{ Random seed for initializing the random number generator. If \code{NULL}, the generator is not initialized (useful for calling the function sequentially). The default assures reproducibility. } \item{verbose}{ If non-zero, function will print out progress messages. } \item{indent}{ Indentation for progress messages. Each unit adds two spaces. 
} } \details{ This function performs a permutation test to determine whether observed co-clustering statistics are significantly different from those expected by chance. It returns the observed co-clustering as well as the permutation Z statistic, calculated as \code{(observed - mean)/sd}, where \code{mean} and \code{sd} are the mean and standard deviation of the co-clustering when the test clustering is repeatedly randomly permuted. } \value{ \item{observed }{the observed co-clustering measures for clusters in \code{clusters.ref} } \item{Z}{permutation Z statistics} \item{permuted.mean}{means of the co-clustering measures when the test clustering is permuted} \item{permuted.sd}{standard deviations of the co-clustering measures when the test clustering is permuted} \item{permuted.cc}{values of the co-clustering measure for each permutation of the test clustering. A matrix of dimensions (number of permutations)x(number of clusters in reference clustering). } } \references{ For example, see Langfelder P, Luo R, Oldham MC, Horvath S (2011) Is My Network Module Preserved and Reproducible? PLoS Comput Biol 7(1): e1001057. Co-clustering is discussed in the Methods Supplement (Supplementary text 1) of that article.
} \author{ Peter Langfelder } \seealso{ \code{\link{coClustering}} for calculation of the "observed" co-clustering measure \code{\link{modulePreservation}} for a large suite of module preservation statistics } \examples{ set.seed(1); nModules = 5; nGenes = 100; cl1 = sample(c(1:nModules), nGenes, replace = TRUE); cl2 = sample(c(1:nModules), nGenes, replace = TRUE); cc = coClustering(cl1, cl2) # Choose a low number of permutations to make the example fast ccPerm = coClustering.permutationTest(cl1, cl2, nPermutations = 20, verbose = 1); ccPerm$observed ccPerm$Z # Combine cl1 and cl2 to obtain clustering that is somewhat similar to cl1: cl3 = cl2; from1 = sample(c(TRUE, FALSE), nGenes, replace = TRUE); cl3[from1] = cl1[from1]; ccPerm = coClustering.permutationTest(cl1, cl3, nPermutations = 20, verbose = 1); # observed co-clustering is higher than before: ccPerm$observed # Note the high preservation Z statistics: ccPerm$Z } \keyword{misc} WGCNA/man/pickHardThreshold.Rd0000644000176200001440000000701314012015545015553 0ustar liggesusers\name{pickHardThreshold} \alias{pickHardThreshold} \alias{pickHardThreshold.fromSimilarity} \title{ Analysis of scale free topology for hard-thresholding. } \description{ Analysis of scale free topology for multiple hard thresholds. The aim is to help the user pick an appropriate threshold for network construction. } \usage{ pickHardThreshold( data, dataIsExpr, RsquaredCut = 0.85, cutVector = seq(0.1, 0.9, by = 0.05), moreNetworkConcepts = FALSE, removeFirst = FALSE, nBreaks = 10, corFnc = "cor", corOptions = "use = 'p'") pickHardThreshold.fromSimilarity( similarity, RsquaredCut = 0.85, cutVector = seq(0.1, 0.9, by = 0.05), moreNetworkConcepts=FALSE, removeFirst = FALSE, nBreaks = 10) } \arguments{ \item{data}{ expression data in a matrix or data frame. Rows correspond to samples and columns to genes. 
} \item{dataIsExpr}{ logical: should the data be interpreted as expression (or other numeric) data, or as a similarity matrix of network nodes? } \item{similarity}{ similarity matrix: a symmetric matrix with entries between -1 and 1 and unit diagonal.} \item{RsquaredCut}{ desired minimum scale free topology fitting index \eqn{R^2}. } \item{cutVector}{ a vector of hard threshold cuts for which the scale free topology fit indices are to be calculated. } \item{moreNetworkConcepts}{logical: should additional network concepts be calculated? If \code{TRUE}, the function will calculate how the network density, the network heterogeneity, and the network centralization depend on the cut. For the definition of these additional network concepts, see Horvath and Dong (2008). PloS Comp Biol. } \item{removeFirst}{ should the first bin be removed from the connectivity histogram? } \item{nBreaks}{ number of bins in connectivity histograms } \item{corFnc}{ a character string giving the correlation function to be used in adjacency calculation. } \item{corOptions}{ further options to the correlation function specified in \code{corFnc}. } } \details{ The function calculates unsigned networks by thresholding the correlation matrix using thresholds given in \code{cutVector}. For each cut the scale free topology fit index is calculated and returned along with other information on connectivity. } \value{ A list with the following components: \item{cutEstimate}{ estimate of an appropriate hard-thresholding cut: the lowest cut for which the scale free topology fit \eqn{R^2} exceeds \code{RsquaredCut}. If \eqn{R^2} is below \code{RsquaredCut} for all cuts, \code{NA} is returned. } \item{fitIndices}{ a data frame containing the fit indices for scale free topology.
The columns contain the hard threshold, Student p-value for the correlation threshold, adjusted \eqn{R^2} for the linear fit, the linear coefficient, adjusted \eqn{R^2} for a more complicated fit model, mean connectivity, median connectivity and maximum connectivity. If input \code{moreNetworkConcepts} is \code{TRUE}, 3 additional columns contain network density, centralization, and heterogeneity.} } \references{ Bin Zhang and Steve Horvath (2005) "A General Framework for Weighted Gene Co-Expression Network Analysis", Statistical Applications in Genetics and Molecular Biology: Vol. 4: No. 1, Article 17 Horvath S, Dong J (2008) Geometric Interpretation of Gene Coexpression Network Analysis. PLoS Comput Biol 4(8): e1000117 } \author{ Steve Horvath} \seealso{ \code{\link{signumAdjacencyFunction}} } \keyword{misc} WGCNA/man/isMultiData.Rd0000644000176200001440000000313114012015545014370 0ustar liggesusers\name{isMultiData} \alias{isMultiData} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Determine whether the supplied object is a valid multiData structure } \description{ Attempts to determine whether the supplied object is a valid multiData structure (see Details). } \usage{ isMultiData(x, strict = TRUE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x}{ An object. } \item{strict}{Logical: should the structure of multiData be checked for "strict" compliance?} } \details{ A multiData structure is intended to store (the same type of) data for multiple, possibly independent, realizations (for example, expression data for several independent experiments). It is a list where each component corresponds to an (independent) data set. Each component is in turn a list that can hold various types of information but must have a \code{data} component. In a "strict" multiData structure, the \code{data} components are required to each be a matrix or a data frame and have the same number of columns.
In a "loose" multiData structure, the \code{data} components can be anything (but for most purposes should be of comparable type and content). This function checks whether the supplied \code{x} is a multiData structure in the "strict" (when \code{strict = TRUE}) or "loose" (when \code{strict = FALSE}) sense. } \value{ Logical: \code{TRUE} if the input \code{x} is a multiData structure, \code{FALSE} otherwise. } \author{ Peter Langfelder } \seealso{ Other multiData handling functions whose names start with \code{mtd.} } \keyword{ misc}% __ONLY ONE__ keyword per line WGCNA/man/fixDataStructure.Rd0000644000176200001440000000277414012015545015463 0ustar liggesusers\name{fixDataStructure} \alias{fixDataStructure} \title{Put single-set data into a form useful for multiset calculations. } \description{ Encapsulates single-set data in a wrapper that makes the data suitable for functions working on multiset data collections. } \usage{ fixDataStructure(data, verbose = 0, indent = 0) } \arguments{ \item{data}{ A dataframe, matrix or array with two dimensions to be encapsulated. } \item{verbose}{Controls verbosity. 0 is silent. } \item{indent}{Controls indentation of printed progress messages. 0 means no indentation, every unit adds two spaces.} } \details{ For multiset calculations, many quantities (such as expression data, traits, module eigengenes etc) are represented by a common structure, a vector of lists (one list for each set) where each list has a component \code{data} that contains the actual (expression, trait, eigengene) data for the corresponding set in the form of a dataframe. This function creates a vector of lists of length 1 and fills the component \code{data} with the content of parameter \code{data}. } \value{ As described above, input data in a format suitable for functions operating on multiset data collections.
} \author{ Peter Langfelder, \email{Peter.Langfelder@gmail.com} } \seealso{ \code{\link{checkSets}}} \examples{ singleSetData = matrix(rnorm(100), 10,10); encapsData = fixDataStructure(singleSetData); length(encapsData) names(encapsData[[1]]) dim(encapsData[[1]]$data) all.equal(encapsData[[1]]$data, singleSetData); } \keyword{misc} WGCNA/man/labeledHeatmap.Rd0000644000176200001440000003017014022073754015050 0ustar liggesusers\name{labeledHeatmap} \alias{labeledHeatmap} \title{ Produce a labeled heatmap plot } \description{ Plots a heatmap with color legend, row and column annotation, and optional text within the heatmap. } \usage{ labeledHeatmap( Matrix, xLabels, yLabels = NULL, xSymbols = NULL, ySymbols = NULL, colorLabels = NULL, xColorLabels = FALSE, yColorLabels = FALSE, checkColorsValid = TRUE, invertColors = FALSE, setStdMargins = TRUE, xLabelsPosition = "bottom", xLabelsAngle = 45, xLabelsAdj = 1, yLabelsPosition = "left", xColorWidth = 2 * strheight("M"), yColorWidth = 2 * strwidth("M"), xColorOffset = strheight("M")/3, yColorOffset = strwidth("M")/3, colorMatrix = NULL, colors = NULL, naColor = "grey", textMatrix = NULL, cex.text = NULL, textAdj = c(0.5, 0.5), cex.lab = NULL, cex.lab.x = cex.lab, cex.lab.y = cex.lab, colors.lab.x = 1, colors.lab.y = 1, font.lab.x = 1, font.lab.y = 1, bg.lab.x = NULL, bg.lab.y = NULL, x.adj.lab.y = 1, plotLegend = TRUE, keepLegendSpace = plotLegend, legendLabel = "", cex.legendLabel = 1, # Separator line specification verticalSeparator.x = NULL, verticalSeparator.col = 1, verticalSeparator.lty = 1, verticalSeparator.lwd = 1, verticalSeparator.ext = 0, verticalSeparator.interval = 0, horizontalSeparator.y = NULL, horizontalSeparator.col = 1, horizontalSeparator.lty = 1, horizontalSeparator.lwd = 1, horizontalSeparator.ext = 0, horizontalSeparator.interval = 0, # optional restrictions on which rows and columns to actually show showRows = NULL, showCols = NULL, ...)
} \arguments{ \item{Matrix}{ numerical matrix to be plotted in the heatmap. } \item{xLabels}{ labels for the columns. See Details. } \item{yLabels}{ labels for the rows. See Details. } \item{xSymbols}{ additional labels used when \code{xLabels} are interpreted as colors. See Details. } \item{ySymbols}{ additional labels used when \code{yLabels} are interpreted as colors. See Details. } \item{colorLabels}{ logical: should \code{xLabels} and \code{yLabels} be interpreted as colors? If given, overrides \code{xColorLabels} and \code{yColorLabels} below.} \item{xColorLabels}{ logical: should \code{xLabels} be interpreted as colors? } \item{yColorLabels}{ logical: should \code{yLabels} be interpreted as colors? } \item{checkColorsValid}{ logical: should given colors be checked for validity against the output of \code{colors()} ? If this argument is \code{FALSE}, invalid color specification will trigger an error.} \item{invertColors}{ logical: should the color order be inverted? } \item{setStdMargins}{ logical: should standard margins be set before calling the plot function? Standard margins depend on \code{colorLabels}: they are wider for text labels and narrower for color labels. The defaults are static, that is the function does not attempt to guess the optimal margins. } \item{xLabelsPosition}{ a character string specifying the position of labels for the columns. Recognized values are (unique abbreviations of) \code{"top", "bottom"}. } \item{xLabelsAngle}{ angle by which the column labels should be rotated. } \item{xLabelsAdj}{ justification parameter for column labels. See \code{\link{par}} and the description of parameter \code{"adj"}. } \item{yLabelsPosition}{ a character string specifying the position of labels for the rows. Recognized values are (unique abbreviations of) \code{"left", "right"}.
} \item{xColorWidth}{ width of the color labels for the x axis expressed in user coordinates.} \item{yColorWidth}{ width of the color labels for the y axis expressed in user coordinates.} \item{xColorOffset}{ gap between the y axis and color labels, in user coordinates.} \item{yColorOffset}{ gap between the x axis and color labels, in user coordinates.} \item{colorMatrix}{ optional explicit specification for the color of the heatmap cells. If given, overrides values specified in \code{colors} and \code{naColor}.} \item{colors}{ color palette to be used in the heatmap. Defaults to \code{\link{heat.colors}}. Only used if \code{colorMatrix} is not given. } \item{naColor}{ color to be used for encoding missing data. Only used if \code{colorMatrix} is not used.} \item{textMatrix}{ optional text entries for each cell. Either a matrix of the same dimensions as \code{Matrix} or a vector of the same length as the number of entries in \code{Matrix}. } \item{cex.text}{ character expansion factor for \code{textMatrix}. } \item{textAdj}{Adjustment for the entries in the text matrix. See the \code{adj} argument to \code{\link{text}}.} \item{cex.lab}{ character expansion factor for text labels labeling the axes. } \item{cex.lab.x}{ character expansion factor for text labels labeling the x axis. Overrides \code{cex.lab} above. } \item{cex.lab.y}{ character expansion factor for text labels labeling the y axis. Overrides \code{cex.lab} above. } \item{colors.lab.x}{colors for character labels or symbols along x axis.} \item{colors.lab.y}{colors for character labels or symbols along y axis.} \item{font.lab.x}{integer specifying font for labels or symbols along x axis. See \code{\link{text}}.} \item{font.lab.y}{integer specifying font for labels or symbols along y axis.
See \code{\link{text}}.} \item{bg.lab.x}{background color for the margin along the x axis.} \item{bg.lab.y}{background color for the margin along the y axis.} \item{x.adj.lab.y}{Justification of labels for the y axis along the x direction. A value of 0 produces left-justified text, 0.5 (the default) centered text and 1 right-justified text. } \item{plotLegend}{ logical: should a color legend be plotted? } \item{keepLegendSpace}{ logical: if the color legend is not drawn, should the space be left empty (\code{TRUE}), or should the heatmap fill the space (\code{FALSE})?} \item{legendLabel}{character string to be shown next to the color legend, analogous to an axis label.} \item{cex.legendLabel}{character expansion factor for the legend label.} \item{verticalSeparator.x}{indices of columns in input \code{Matrix} after which separator lines (vertical lines between columns) should be drawn. \code{NULL} means no lines will be drawn.} \item{verticalSeparator.col}{color(s) of the vertical separator lines. Recycled if need be. } \item{verticalSeparator.lty}{line type of the vertical separator lines. Recycled if need be. } \item{verticalSeparator.lwd}{line width of the vertical separator lines. Recycled if need be. } \item{verticalSeparator.ext}{number giving the extension of the separator line into the margin as a fraction of the margin width. 0 means no extension, 1 means extend all the way through the margin. } \item{verticalSeparator.interval}{number giving the interval for vertical separators. If larger than zero, vertical separators will be drawn after every \code{verticalSeparator.interval} of displayed columns. Used only when length of \code{verticalSeparator.x} is zero. } \item{horizontalSeparator.y}{indices of rows in input \code{Matrix} after which separator lines (horizontal lines between rows) should be drawn. \code{NULL} means no lines will be drawn.} \item{horizontalSeparator.col}{ color(s) of the horizontal separator lines. Recycled if need be.
} \item{horizontalSeparator.lty}{line type of the horizontal separator lines. Recycled if need be. } \item{horizontalSeparator.lwd}{line width of the horizontal separator lines. Recycled if need be. } \item{horizontalSeparator.ext}{number giving the extension of the separator line into the margin as a fraction of the margin width. 0 means no extension, 1 means extend all the way through the margin. } \item{horizontalSeparator.interval}{number giving the interval for horizontal separators. If larger than zero, horizontal separators will be drawn after every \code{horizontalSeparator.interval} of displayed rows. Used only when length of \code{horizontalSeparator.y} is zero. } \item{showRows}{A numeric vector giving the indices of rows that are actually to be shown. Defaults to all rows.} \item{showCols}{A numeric vector giving the indices of columns that are actually to be shown. Defaults to all columns.} \item{\dots}{ other arguments to function \code{\link{heatmap}}. } } \details{ The function basically plots a standard heatmap plot of the given \code{Matrix} and embellishes it with row and column labels and/or with text within the heatmap entries. Row and column labels can be either character strings or color squares, or both. To get simple text labels, use \code{colorLabels=FALSE} and pass the desired row and column labels in \code{yLabels} and \code{xLabels}, respectively. To label rows and columns by color squares, use \code{colorLabels=TRUE}; \code{yLabels} and \code{xLabels} are then expected to represent valid colors. For reasons of compatibility with other functions, each entry in \code{yLabels} and \code{xLabels} is expected to consist of a color designation preceded by 2 characters: an example would be \code{MEturquoise}. The first two characters can be arbitrary, they are stripped. Any labels that do not represent valid colors will be considered text labels and printed in full, allowing the user to mix text and color labels. 
It is also possible to label rows and columns by both color squares and additional text annotation. To achieve this, use the above technique to get color labels and, additionally, pass the desired text annotation in the \code{xSymbols} and \code{ySymbols} arguments. } \value{ None. } \author{ Peter Langfelder} \seealso{ \code{\link{heatmap}}, \code{\link{colors}} } \examples{ # This example illustrates 4 main ways of annotating columns and rows of a heatmap. # Copy and paste the whole example into an R session with an interactive plot window; # alternatively, you may replace the command sizeGrWindow below by opening # another graphical device such as pdf. # Generate a matrix to be plotted nCol = 8; nRow = 7; mat = matrix(runif(nCol*nRow, min = -1, max = 1), nRow, nCol); rowColors = standardColors(nRow); colColors = standardColors(nRow + nCol)[(nRow+1):(nRow + nCol)]; rowColors; colColors; sizeGrWindow(9,7) par(mfrow = c(2,2)) par(mar = c(4, 5, 4, 6)); # Label rows and columns by text: labeledHeatmap(mat, xLabels = colColors, yLabels = rowColors, colors = greenWhiteRed(50), setStdMargins = FALSE, textMatrix = signif(mat, 2), main = "Text-labeled heatmap"); # Label rows and columns by colors: rowLabels = paste("ME", rowColors, sep=""); colLabels = paste("ME", colColors, sep=""); labeledHeatmap(mat, xLabels = colLabels, yLabels = rowLabels, colorLabels = TRUE, colors = greenWhiteRed(50), setStdMargins = FALSE, textMatrix = signif(mat, 2), main = "Color-labeled heatmap"); # Mix text and color labels: rowLabels[3] = "Row 3"; colLabels[1] = "Column 1"; labeledHeatmap(mat, xLabels = colLabels, yLabels = rowLabels, colorLabels = TRUE, colors = greenWhiteRed(50), setStdMargins = FALSE, textMatrix = signif(mat, 2), main = "Mix-labeled heatmap"); # Color labels and additional text labels rowLabels = paste("ME", rowColors, sep=""); colLabels = paste("ME", colColors, sep=""); extraRowLabels = paste("Row", c(1:nRow)); extraColLabels = paste("Column", c(1:nCol)); # Extend margins 
to fit all labels par(mar = c(6, 6, 4, 6)); labeledHeatmap(mat, xLabels = colLabels, yLabels = rowLabels, xSymbols = extraColLabels, ySymbols = extraRowLabels, colorLabels = TRUE, colors = greenWhiteRed(50), setStdMargins = FALSE, textMatrix = signif(mat, 2), main = "Text- + color-labeled heatmap"); } \keyword{ hplot }% __ONLY ONE__ keyword per line WGCNA/man/adjacency.splineReg.Rd0000644000176200001440000001001214012015545016012 0ustar liggesusers\name{adjacency.splineReg} \alias{adjacency.splineReg} %- Also NEED an '\alias' for EACH other topic documented here. \title{Calculate network adjacency based on natural cubic spline regression } \description{ adjacency.splineReg calculates a network adjacency matrix by fitting spline regression models to pairs of variables (i.e. pairs of columns from \code{datExpr}). Each spline regression model results in a fitting index R.squared. Thus, the n columns of \code{datExpr} result in an n x n dimensional matrix whose entries contain R.squared measures. This matrix is typically non-symmetric. To arrive at a (symmetric) adjacency matrix, one can specify different symmetrization methods with \code{symmetrizationMethod}. } \usage{ adjacency.splineReg( datExpr, df = 6-(nrow(datExpr)<100)-(nrow(datExpr)<30), symmetrizationMethod = "mean", ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{datExpr}{ data frame containing numeric variables. Example: Columns may correspond to genes and rows to observations (samples).} \item{df}{ degrees of freedom in generating natural cubic spline. 
The default is as follows: if nrow(datExpr)>100 use 6, if nrow(datExpr)>30 use 5, otherwise use 4.} \item{symmetrizationMethod}{ character string (eg "none", "min","max","mean") that specifies the method used to symmetrize the pairwise model fitting index matrix (see details).} \item{...}{ other arguments from function \code{\link[splines]{ns}}} } \details{ A network adjacency matrix is a symmetric matrix whose entries lie between 0 and 1. It is a special case of a similarity matrix. Each variable (column of \code{datExpr}) is regressed on every other variable, with each model fitting index recorded in a square matrix. Note that the model fitting index of regressing variable y on variable x is usually different from that of regressing x on y. From the spline regression model glm( y ~ ns( x, df)) one can calculate the model fitting index R.squared(y,x). R.squared(y,x) is a number between 0 and 1. The closer it is to 1, the better the spline regression model describes the relationship between x and y and the more significant is the pairwise relationship between the 2 variables. One can also reverse the roles of x and y to arrive at a model fitting index R.squared(x,y). R.squared(x,y) is typically different from R.squared(y,x). Assume a set of n variables x1,...,xn (corresponding to the columns of \code{datExpr}); then one can define R.squared(xi,xj). The model fitting indices form the elements of an n x n dimensional matrix (R.squared(ij)). \code{symmetrizationMethod} implements the following symmetrization methods: A.min(ij)=min(R.squared(ij),R.squared(ji)), A.ave(ij)=(R.squared(ij)+R.squared(ji))/2, A.max(ij)=max(R.squared(ij),R.squared(ji)). For more information about natural cubic spline regression, please refer to functions "ns" and "glm".} \value{ An adjacency matrix of dimensions ncol(datExpr) times ncol(datExpr).} \references{ Song L, Langfelder P, Horvath S Avoiding mutual information based co-expression measures (to appear).
Horvath S (2011) Weighted Network Analysis. Applications in Genomics and Systems Biology. Springer Book. ISBN: 978-1-4419-8818-8 } \author{ Lin Song, Steve Horvath } \seealso{ \code{\link[splines]{ns}}, \code{\link{glm}} } \examples{ #Simulate a data frame datE which contains 5 columns and 50 observations m=50 x1=rnorm(m) r=.5; x2=r*x1+sqrt(1-r^2)*rnorm(m) r=.3; x3=r*(x1-.5)^2+sqrt(1-r^2)*rnorm(m) x4=rnorm(m) r=.3; x5=r*x4+sqrt(1-r^2)*rnorm(m) datE=data.frame(x1,x2,x3,x4,x5) #calculate adjacency by symmetrizing using max A.max=adjacency.splineReg(datE, symmetrizationMethod="max") A.max #calculate adjacency by symmetrizing using mean A.mean=adjacency.splineReg(datE, symmetrizationMethod="mean") A.mean # output the unsymmetrized pairwise model fitting indices R.squared R.squared=adjacency.splineReg(datE, symmetrizationMethod="none") R.squared } \keyword{misc}% __ONLY ONE__ keyword per line WGCNA/man/addGuideLines.Rd0000644000176200001440000000215614012015545014655 0ustar liggesusers\name{addGuideLines} \alias{addGuideLines} \title{ Add vertical ``guide lines'' to a dendrogram plot} \description{ Adds vertical ``guide lines'' to a dendrogram plot. } \usage{ addGuideLines(dendro, all = FALSE, count = 50, positions = NULL, col = "grey30", lty = 3, hang = 0) } \arguments{ \item{dendro}{ The dendrogram (see \code{\link{hclust}}) to which the guide lines are to be added. } \item{all}{ Add a guide line to every object on the dendrogram? Useful if the number of objects is relatively low. } \item{count}{ Number of guide lines to be plotted. The lines will be equidistantly spaced. } \item{positions}{ Horizontal positions of the added guide lines. If given, overrides \code{count}. } \item{col}{ Color of the guide lines } \item{lty}{ Line type of the guide lines. See \code{\link{par}}. } \item{hang}{ Fraction of the figure height that will separate top ends of guide lines and the merge heights of the corresponding objects.
} } \author{ Peter Langfelder } \keyword{ hplot } WGCNA/man/signifNumeric.Rd0000644000176200001440000000231614012015545014754 0ustar liggesusers\name{signifNumeric} \alias{signifNumeric} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Round numeric columns to given significant digits. } \description{ This function applies \code{\link{signif}} (or possibly another rounding function) to numeric, non-integer columns of a given data frame. } \usage{ signifNumeric(x, digits, fnc = "signif") } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x}{ Input data frame, matrix or matrix-like object that can be coerced to a data frame. } \item{digits}{ Significant digits to retain. } \item{fnc}{ The rounding function. Typically either \code{\link{signif}} or \code{\link{round}}. } } \details{ The function \code{fnc} is applied to each numeric column that contains at least one non-integer (i.e., at least one element that does not equal its own \code{round}). } \value{ The transformed data frame. } \author{ Peter Langfelder } \seealso{ The rounding functions \code{\link{signif}} and \code{\link{round}}. } \examples{ df = data.frame(text = letters[1:3], ints = c(1:3)+234, nonints = c(0:2) + 0.02345); df; signifNumeric(df, 2); signifNumeric(df, 2, fnc = "round"); } \keyword{misc}% __ONLY ONE__ keyword per line WGCNA/man/greenBlackRed.Rd0000644000176200001440000000170714012015545014645 0ustar liggesusers\name{greenBlackRed} \alias{greenBlackRed} \title{ Green-black-red color sequence } \description{ Generate a green-black-red color sequence of a given length. } \usage{ greenBlackRed(n, gamma = 1) } \arguments{ \item{n}{ number of colors to be returned } \item{gamma}{ color correction power } } \details{ The function returns a color vector that starts with pure green, gradually turns into black and then to red.
The power \code{gamma} can be used to control the behaviour of the quarter- and three-quarter-values (between green and black, and black and red, respectively). Higher powers will make the mid-colors more green and red, respectively. } \value{ A vector of colors of length \code{n}. } \author{ Peter Langfelder } \examples{ par(mfrow = c(3, 1)) displayColors(greenBlackRed(50)); displayColors(greenBlackRed(50, 2)); displayColors(greenBlackRed(50, 0.5)); } \keyword{color}% __ONLY ONE__ keyword per line WGCNA/man/coxRegressionResiduals.Rd0000644000176200001440000001073614012015545016665 0ustar liggesusers\name{coxRegressionResiduals} \alias{coxRegressionResiduals} %- Also NEED an '\alias' for EACH other topic documented here. \title{Deviance and martingale residuals from a Cox regression model } \description{ The function inputs a censored time variable which is specified by two input variables \code{time} and \code{event}. It outputs i) the martingale residuals and ii) the deviance residuals corresponding to a Cox regression model. By default, the Cox regression model is an intercept-only Cox regression model. Optionally, the user can input covariates using the argument \code{datCovariates}. The function makes use of the coxph function in the survival library. See \code{help(residuals.coxph)} to learn more. } \usage{ coxRegressionResiduals(time, event, datCovariates = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{time}{ a numeric variable that contains follow-up time or time to event. } \item{event}{ a binary variable that takes on values 1 and 0. 1 means that the event took place (e.g. person died, or tumor recurred). 0 means censored, i.e. the event has not yet been observed or the subject was lost to follow-up. } \item{datCovariates}{ a data frame whose columns correspond to covariates that should be used in the Cox regression model. By default, the only covariate is the intercept term 1.
} } \details{ Residuals are often used to investigate the lack of fit of a model. For Cox regression, there is no easy analog to the usual "observed minus predicted" residual of linear regression. Instead, several specialized residuals have been proposed for Cox regression analysis. The function calculates residuals that are well defined for an intercept-only Cox regression model: the martingale and deviance residuals (Therneau et al 1990). The martingale residual of a subject (person) specifies excess failures beyond the expected baseline hazard. For example, a subject who was censored at 3 years, and whose predicted cumulative hazard at 3 years was 30\%, has a martingale residual of 0-.30 = -0.30. Another subject who had an event at 10 years, and whose predicted cumulative hazard at 10 years was 60\%, has a martingale residual of 1-.60 = 0.40. Since martingale residuals are not symmetrically distributed, even when the fitted model is correct, it is often advantageous to transform them into more symmetrically distributed residuals: deviance residuals. Thus, deviance residuals are defined as transformations of the martingale residual and the event variable. Like residuals from ordinary linear regression, deviance residuals are approximately symmetrically distributed around 0 and have a standard deviation of 1.0. A subject with a large deviance residual is poorly predicted by the model, i.e. is different from the baseline cumulative hazard. A negative value indicates a longer than expected survival time. When covariates are specified in \code{datCovariates}, one can plot deviance (or martingale) residuals against the covariates. Unusual patterns may indicate poor fit of the Cox model. Note: deviance (or martingale) residuals can sometimes be used as (uncensored) quantitative variables instead of the original censored time variable.
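The residual definitions above can be illustrated directly with the survival package, which this function wraps. This is a minimal sketch, not the package function itself; the simulated data and object names (time1, event1, fit) are hypothetical.

```r
# Sketch only: intercept-only Cox model residuals via the survival package.
library(survival)
set.seed(42)
time1  <- rexp(100, rate = 0.1)              # simulated follow-up times
event1 <- rbinom(100, size = 1, prob = 0.7)  # 1 = event observed, 0 = censored
fit  <- coxph(Surv(time1, event1) ~ 1)       # intercept-only (null) Cox model
mart <- residuals(fit, type = "martingale")  # observed minus expected events
dev  <- residuals(fit, type = "deviance")    # symmetrizing transform of mart
```

For a null model the martingale residuals sum to zero, and the deviance residuals are roughly symmetric around 0, matching the description above.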
For example, they could be used as outcome in a regression tree or regression forest predictor. } \value{ It outputs a data frame with 2 columns. The first and second column correspond to martingale and deviance residuals, respectively. } \references{ Therneau TM, Grambsch PM, Fleming TR (1990) Martingale-based residuals for survival models. Biometrika 77(1):147-160 } \author{ Steve Horvath } \note{ This function can be considered a wrapper of the coxph function. } \examples{ library(survival) # simulate time and event data time1=sample(1:100) event1=sample(c(1,0), 100,replace=TRUE) event1[1:5]=NA time1[1:5]=NA # no covariates datResiduals= coxRegressionResiduals(time=time1,event=event1) # now we simulate a covariate z= rnorm(100) cor(datResiduals,use="p") datResiduals=coxRegressionResiduals(time=time1,event=event1,datCovariates=data.frame(z)) cor(datResiduals,use="p") } \keyword{misc}% __ONLY ONE__ keyword per line WGCNA/man/consensusCalculation.Rd0000644000176200001440000001502114672545314016364 0ustar liggesusers\name{consensusCalculation} \alias{consensusCalculation} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Calculation of a (single) consensus with optional data calibration. } \description{ This function calculates a single consensus from given individual data, optionally first calibrating the individual data to make them comparable. } \usage{ consensusCalculation( individualData, consensusOptions, useBlocks = NULL, randomSeed = NULL, saveCalibratedIndividualData = FALSE, calibratedIndividualDataFilePattern = "calibratedIndividualData-\%a-Set\%s-Block\%b.RData", # Return options: the data can be either saved or returned but not both.
saveConsensusData = NULL, consensusDataFileNames = "consensusData-\%a-Block\%b.RData", getCalibrationSamples= FALSE, # Internal handling of data useDiskCache = NULL, chunkSize = NULL, cacheDir = ".", cacheBase = ".blockConsModsCache", # Behaviour collectGarbage = FALSE, verbose = 1, indent = 0) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{individualData}{ Individual data from which the consensus is to be calculated. It can be either a list or a \code{\link{multiData}} structure. Each element in \code{individualData} can in turn be either a numeric object (vector, matrix or array) or a \code{\link{BlockwiseData}} structure. } \item{consensusOptions}{ A list of class \code{ConsensusOptions} that contains options for the consensus calculation. A suitable list can be obtained by calling function \code{\link{newConsensusOptions}}. } \item{useBlocks}{ When \code{individualData} contains \code{\link{BlockwiseData}}, this argument can be an integer vector with indices of blocks for which the calculation should be performed. } \item{randomSeed}{ If non-\code{NULL}, the function will save the current state of the random generator, set the given seed, and restore the random seed to its original state upon exit. If \code{NULL}, the seed is not set nor is it restored on exit. } \item{saveCalibratedIndividualData}{ Logical: should calibrated individual data be saved? } \item{calibratedIndividualDataFilePattern}{ Pattern from which file names for saving calibrated individual data are determined. The conversions \code{\%a}, \code{\%s} and \code{\%b} will be replaced by analysis name, set number and block number, respectively.} \item{saveConsensusData}{ Logical: should final consensus be saved (\code{TRUE}) or returned in the return value (\code{FALSE})?
If \code{NULL}, data will be saved only if input data were blockwise data saved on disk rather than held in memory. } \item{consensusDataFileNames}{ Pattern from which file names for saving the final consensus are determined. The conversions \code{\%a} and \code{\%b} will be replaced by analysis name and block number, respectively.} \item{getCalibrationSamples}{ When the calibration method in the \code{consensusOptions} component of \code{ConsensusTree} is \code{"single quantile"}, this logical argument determines whether the calibration samples should be returned within the return value. } \item{useDiskCache}{ Logical: should disk cache be used for consensus calculations? The disk cache can be used to store chunks of calibrated data that are small enough to fit one chunk from each set into memory (blocks may be small enough to fit one block of one set into memory, but not small enough to fit one block from all sets in a consensus calculation into memory at the same time). Using disk cache is slower but lessens the memory footprint of the calculation. As a general guide, if individual data are split into blocks, we recommend setting this argument to \code{TRUE}. If this argument is \code{NULL}, the function will decide whether to use disk cache based on the number of sets and block sizes. } \item{chunkSize}{ Integer giving the chunk size. If left \code{NULL}, a suitable size will be chosen automatically. } \item{cacheDir}{ Directory in which to save cache files. The files are deleted on normal exit but persist if the function terminates abnormally. } \item{cacheBase}{ Base for the file names of cache files. } \item{collectGarbage}{ Logical: should garbage collection be forced after each major calculation? } \item{verbose}{Integer level of verbosity of diagnostic messages. Zero means silent, higher values make the output progressively more and more verbose. } \item{indent}{Indentation for diagnostic messages. Zero means no indentation, each unit adds two spaces.
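The core of the calculation (see the Details section) is an element-wise, or "parallel", quantile across the input sets. A base-R sketch of just that operation, with made-up data, no calibration, and illustrative variable names that are not the package internals:

```r
# Sketch of the element-wise ("parallel") consensus quantile, no calibration.
set.seed(1)
sets <- list(matrix(rnorm(20), 4, 5),
             matrix(rnorm(20), 4, 5),
             matrix(rnorm(20), 4, 5))
consensusQuantile <- 0        # 0 corresponds to the element-wise minimum
arr <- simplify2array(sets)   # 4 x 5 x 3 array; sets along the 3rd dimension
consensus <- apply(arr, c(1, 2), quantile, probs = consensusQuantile)
# With consensusQuantile = 0 this equals pmin() of the three matrices.
```

The traditional consensus (quantile 0) is thus the element-wise minimum; larger quantiles relax that requirement.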
} } \details{ Consensus is defined as the element-wise (also known as "parallel") quantile of the individual data at probability given by the \code{consensusQuantile} element of \code{consensusOptions}. Depending on the value of component \code{calibration} of \code{consensusOptions}, the individual data are first calibrated. For \code{consensusOptions$calibration="full quantile"}, the individual data are quantile normalized using \code{\link[preprocessCore]{normalize.quantiles}}. For \code{consensusOptions$calibration="single quantile"}, the individual data are raised to a power such that the quantiles at probability \code{consensusOptions$calibrationQuantile} are the same. For \code{consensusOptions$calibration="none"}, the individual data are not calibrated. } \value{ A list with the following components: \item{consensusData}{A \code{\link{BlockwiseData}} list containing the consensus.} \item{nSets}{Number of input data sets.} \item{saveCalibratedIndividualData}{Copy of the input \code{saveCalibratedIndividualData}.} \item{calibratedIndividualData}{If input \code{saveCalibratedIndividualData} is \code{TRUE}, a list in which each component is a \code{\link{BlockwiseData}} structure containing the calibrated individual data for the corresponding input individual data set.} \item{calibrationSamples}{If \code{consensusOptions$calibration} is \code{"single quantile"} and \code{getCalibrationSamples} is \code{TRUE}, a list in which each component contains the calibration samples for the corresponding input individual data set.} \item{originCount}{A vector of length \code{nSets} that contains, for each set, the number of (calibrated) elements that were less than or equal to the consensus for that element.} } \references{ Consensus network analysis was originally described in Langfelder P, Horvath S. Eigengene networks for studying the relationships between co-expression modules.
BMC Systems Biology 2007, 1:54 https://bmcsystbiol.biomedcentral.com/articles/10.1186/1752-0509-1-54 } \author{ Peter Langfelder } \seealso{ \code{\link[preprocessCore]{normalize.quantiles}} for quantile normalization. } \keyword{misc} WGCNA/man/colQuantileC.Rd0000644000176200001440000000206614012015545014537 0ustar liggesusers\name{colQuantileC} \alias{colQuantileC} \alias{rowQuantileC} \title{ Fast column- and row-wise quantile of a matrix. } \description{ Fast calculation of column- and row-wise quantiles of a matrix at a single probability. Implemented via compiled code, it is much faster than the equivalent \code{apply(data, 2, quantile, prob = p)}. } \usage{ colQuantileC(data, p) rowQuantileC(data, p) } \arguments{ \item{data}{ a numerical matrix whose column-wise quantiles are desired. Missing values are removed.} \item{p}{ a single probability at which the quantile is to be calculated. } } \details{ At present, only one quantile type is implemented, namely the default type 7 used by R. } \value{ A vector of length equal to the number of columns (for \code{colQuantileC}) or rows (for \code{rowQuantileC}) in \code{data} containing the column- or row-wise quantiles. } \author{ Peter Langfelder } \seealso{ \code{\link[stats]{quantile}}; \code{\link{pquantile}} for another way of calculating quantiles across structured data. } \keyword{misc }% __ONLY ONE__ keyword per line WGCNA/man/plotMultiHist.Rd0000644000176200001440000000316114012015545014772 0ustar liggesusers\name{plotMultiHist} \alias{plotMultiHist} \title{ Plot multiple histograms in a single plot } \description{ This function plots the density or cumulative distribution function of multiple histograms in a single plot, using lines. } \usage{ plotMultiHist( data, nBreaks = 100, col = 1:length(data), scaleBy = c("area", "max", "none"), cumulative = FALSE, ...) } %- maybe also 'usage' for other objects documented here.
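The equivalence stated in the colQuantileC entry above can be expressed in base R; the compiled package version is expected to agree with this reference calculation but is much faster. The data and names below are illustrative.

```r
# Reference computation for what colQuantileC(data, p) returns, per its
# documentation: per-column type-7 quantiles with missing values removed.
set.seed(1)
data <- matrix(rnorm(30), nrow = 6)   # 6 rows, 5 columns
data[2, 3] <- NA                      # a missing value, removed per column
p <- 0.25
ref <- apply(data, 2, quantile, probs = p, na.rm = TRUE, type = 7)
# colQuantileC(data, p) should match 'ref' (assumption based on the docs).
```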
\arguments{ \item{data}{ A list in which each component corresponds to a separate histogram and is a vector of values to be shown in each histogram. } \item{nBreaks}{ Number of breaks in the combined plot. } \item{col}{ Color of the lines. Should be a vector of the same length as \code{data}. } \item{scaleBy}{ Method to make the different histograms comparable. The counts are scaled such that either the total area or the maximum are the same for all histograms, or the histograms are shown without scaling. } \item{cumulative}{ Logical: should the cumulative distribution be shown instead of the density? } \item{\dots}{ Other graphical arguments. } } \value{ Invisibly, \item{x}{A list with one component per histogram (component of \code{data}), giving the bin midpoints} \item{y}{A list with one component per histogram (component of \code{data}), giving the scaled bin counts} } \author{ Peter Langfelder } \note{ This function is still experimental and behavior may change in the future. } \seealso{ \code{\link{hist}} } \examples{ data = list(rnorm(1000), rnorm(10000) + 2); plotMultiHist(data, xlab = "value", ylab = "scaled density") } \keyword{misc}% __ONLY ONE__ keyword per line WGCNA/man/formatLabels.Rd0000644000176200001440000000610514012015545014565 0ustar liggesusers\name{formatLabels} \alias{formatLabels} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Break long character strings into multiple lines } \description{ This function attempts to break long character strings into multiple lines by replacing a given pattern by a newline character. } \usage{ formatLabels( labels, maxCharPerLine = 14, maxWidth = NULL, maxLines = Inf, cex = 1, font = 1, split = " ", fixed = TRUE, newsplit = split, keepSplitAtEOL = TRUE, capitalMultiplier = 1.4, eol = "\n", ellipsis = "...") } %- maybe also 'usage' for other objects documented here. \arguments{ \item{labels}{Character strings to be formatted.
} \item{maxCharPerLine}{ Integer giving the maximum number of characters per line. } \item{maxWidth}{Maximum width in user coordinates. If given, overrides \code{maxCharPerLine} above and usually gives a much more efficient formatting.} \item{maxLines}{Maximum lines to retain. If a label extends past the maximum number of lines, \code{ellipsis} is added at the end of the last line.} \item{cex}{Character expansion factor that the user intends to use when adding \code{labels} to the current figure. Only used when \code{maxWidth} is specified.} \item{font}{Integer specifying the font. See \code{\link{par}} for details. } \item{split}{ Pattern to be replaced by newline ('\\n') characters. } \item{fixed}{ Logical: Should the pattern be interpreted literally (\code{TRUE}) or as a regular expression (\code{FALSE})? See \code{\link{strsplit}} and its argument \code{fixed}. } \item{newsplit}{ Character string to replace the occurrences of \code{split} above with. } \item{keepSplitAtEOL}{ When replacing an occurrence of \code{split} with a newline character, should the \code{newsplit} be added before the newline as well? } \item{capitalMultiplier}{A multiplier for capital letters which typically occupy more space than lowercase letters.} \item{eol}{Character string to separate lines in the output.} \item{ellipsis}{Character string to add to the last line if the input label is longer than fits on \code{maxLines} lines.} } \details{ Each given element of \code{labels} is processed independently. The character string is split using \code{strsplit}, with \code{split} as the splitting pattern. The resulting shorter character strings are then concatenated together with \code{newsplit} as the separator.
Whenever the length (adjusted using the capital letter multiplier) of the combined result from the start or the previous newline character exceeds \code{maxCharPerLine}, or \code{\link{strwidth}} exceeds \code{maxWidth}, the character specified by \code{eol} is inserted (at the previous split). Note that individual segments (i.e., sections of the input between occurrences of \code{split}) whose number of characters exceeds \code{maxCharPerLine} will not be split. } \value{ A character vector of the same length as input \code{labels}. } \author{ Peter Langfelder } \examples{ s = "A quick hare jumps over the brown fox"; formatLabels(s); } \keyword{misc}% __ONLY ONE__ keyword per line WGCNA/man/signumAdjacencyFunction.Rd0000644000176200001440000000201014012015545016753 0ustar liggesusers\name{signumAdjacencyFunction} \alias{signumAdjacencyFunction} \title{ Hard-thresholding adjacency function } \description{ This function transforms correlations or other measures of similarity into an unweighted network adjacency. } \usage{ signumAdjacencyFunction(corMat, threshold) } \arguments{ \item{corMat}{ a matrix of correlations or other measures of similarity. } \item{threshold}{ threshold for connecting nodes: all nodes whose \code{corMat} is above the threshold will be connected in the resulting network. } } \value{ An unweighted adjacency matrix of the same dimensions as the input \code{corMat}. } \references{ Bin Zhang and Steve Horvath (2005) "A General Framework for Weighted Gene Co-Expression Network Analysis", Statistical Applications in Genetics and Molecular Biology: Vol. 4: No. 1, Article 17 } \author{ Steve Horvath } \seealso{ \code{\link{adjacency}} for soft-thresholding and creating weighted networks.
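The hard-thresholding operation described for signumAdjacencyFunction above can be sketched in a few lines of base R. This is a minimal illustration, not the package implementation; in particular, whether the package function takes absolute values first or zeroes the diagonal is an assumption not confirmed here.

```r
# Sketch: dichotomize a similarity (correlation) matrix at 'threshold'.
set.seed(1)
x <- matrix(rnorm(60), nrow = 12)       # 12 samples, 5 variables
corMat <- cor(x)                        # similarity matrix
threshold <- 0.5
adjacency <- (corMat > threshold) + 0   # 1 = connected, 0 = not connected
diag(adjacency) <- 0                    # no self-connections (assumption)
```

The result is the unweighted counterpart of the soft-thresholded (weighted) adjacency produced by \code{adjacency}.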
} \keyword{ misc }% __ONLY ONE__ keyword per line WGCNA/man/standardScreeningNumericTrait.Rd0000644000176200001440000000406514012015545020142 0ustar liggesusers\name{standardScreeningNumericTrait} \alias{standardScreeningNumericTrait} \title{ Standard screening for numeric traits } \description{ Standard screening for numeric traits based on Pearson correlation. } \usage{ standardScreeningNumericTrait(datExpr, yNumeric, corFnc = cor, corOptions = list(use = 'p'), alternative = c("two.sided", "less", "greater"), qValues = TRUE, areaUnderROC = TRUE) } \arguments{ \item{datExpr}{ data frame containing expression data (or more generally variables to be screened), with rows corresponding to samples and columns to genes (variables) } \item{yNumeric}{ a numeric vector giving the trait measurements for each sample } \item{corFnc}{ correlation function. Defaults to Pearson correlation but can also be \code{\link{bicor}}. } \item{corOptions}{ list specifying additional arguments to be passed to the correlation function given by \code{corFnc}. } \item{alternative}{alternative hypothesis for the correlation test} \item{qValues}{ logical: should q-values be calculated?} \item{areaUnderROC}{ logical: should area under the receiver-operating curve be calculated?} } \details{ The function calculates the correlations, associated p-values, area under the ROC, and q-values. } \value{ Data frame with the following components: \item{ID }{Gene (or variable) identifiers copied from \code{colnames(datExpr)}} \item{cor}{correlations of all genes with the trait} \item{Z}{Fisher Z statistics corresponding to the correlations} \item{pvalueStudent }{Student p-values of the correlations} \item{qvalueStudent }{(if input \code{qValues==TRUE}) q-values of the correlations calculated from the p-values} \item{AreaUnderROC }{(if input \code{areaUnderROC==TRUE}) area under the ROC} \item{nPresentSamples}{number of samples present for the calculation of each association.
} } \author{ Steve Horvath } \seealso{ \code{\link{standardScreeningBinaryTrait}}, \code{\link{standardScreeningCensoredTime}} } \keyword{misc} WGCNA/man/nSets.Rd0000644000176200001440000000116314012015545013245 0ustar liggesusers\name{nSets} \alias{nSets} \title{ Number of sets in a multi-set variable } \description{ A convenience function that returns the number of sets in a multi-set variable. } \usage{ nSets(multiData, ...) } \arguments{ \item{multiData}{ vector of lists; in each list there must be a component named \code{data} whose content is a matrix or dataframe or array of dimension 2. } \item{\dots}{ Other arguments to function \code{\link{checkSets}}. } } \value{ A single integer that equals the number of sets given in the input \code{multiData}. } \author{ Peter Langfelder } \seealso{ \code{\link{checkSets}} } \keyword{misc} WGCNA/man/orderMEs.Rd0000644000176200001440000000455414012015545013700 0ustar liggesusers\name{orderMEs} \alias{orderMEs} \title{Put close eigenvectors next to each other} \description{ Reorder given (eigen-)vectors such that similar ones (as measured by correlation) are next to each other. } \usage{ orderMEs(MEs, greyLast = TRUE, greyName = paste(moduleColor.getMEprefix(), "grey", sep=""), orderBy = 1, order = NULL, useSets = NULL, verbose = 0, indent = 0) } \arguments{ \item{MEs}{Module eigengenes in a multi-set format (see \code{\link{checkSets}}). A vector of lists, with each list corresponding to one dataset and the module eigengenes in the component \code{data}, that is \code{MEs[[set]]$data[sample, module]} is the expression of the eigengene of module \code{module} in sample \code{sample} in dataset \code{set}. The number of samples can be different between the sets, but the modules must be the same. } \item{greyLast}{Normally the color grey is reserved for unassigned genes; hence the grey module is not a proper module and it is conventional to put it last. 
If this is not desired, set the parameter to \code{FALSE}.} \item{greyName}{Name of the grey module eigengene.} \item{orderBy}{Specifies the set by which the eigengenes are to be ordered (in all other sets as well). Defaults to the first set in \code{useSets} (or the first set, if \code{useSets} is not given).} \item{order}{Allows the user to specify a custom ordering.} \item{useSets}{Allows the user to specify for which sets the eigengene ordering is to be performed.} \item{verbose}{Controls verbosity of printed progress messages. 0 means silent, nonzero values mean verbose.} \item{indent}{A single non-negative integer controlling indentation of printed messages. 0 means no indentation, each unit above zero adds two spaces. } } \details{ Ordering module eigengenes is useful for plotting purposes. For this function the order can be specified explicitly, or a set can be given in which the correlations of the eigengenes will determine the order. For the latter, a hierarchical dendrogram is calculated and the order given by the dendrogram is used for the eigengenes in all other sets. } \value{ A vector of lists of the same type as \code{MEs} containing the re-ordered eigengenes. } \author{ Peter Langfelder, \email{Peter.Langfelder@gmail.com} } \seealso{\code{\link{moduleEigengenes}}, \code{\link{multiSetMEs}}, \code{\link{consensusOrderMEs}}} \keyword{misc} WGCNA/man/replaceMissing.Rd0000644000176200001440000000125714012015545015122 0ustar liggesusers\name{replaceMissing} \alias{replaceMissing} \title{ Replace missing values with a constant. } \description{ A convenience function for replacing missing values with a (non-missing) constant. } \usage{ replaceMissing(x, replaceWith) } \arguments{ \item{x}{ An atomic vector or array. } \item{replaceWith}{ Value to replace missing entries in \code{x}. The default is \code{FALSE} for logical vectors, 0 for numeric vectors, and empty string "" for character vectors. } } \value{ \code{x} with missing data replaced.
} \author{ Peter Langfelder } \examples{ logVec = c(TRUE, FALSE, NA, TRUE); replaceMissing(logVec) numVec = c(1,2,3,4,NA,2) replaceMissing(numVec) } \keyword{misc} WGCNA/man/consensusKME.Rd0000644000176200001440000003313214012015545014527 0ustar liggesusers\name{consensusKME} \alias{consensusKME} \title{ Calculate consensus kME (eigengene-based connectivities) across multiple data sets. } \description{ Calculate consensus kME (eigengene-based connectivities) across multiple data sets, typically following a consensus module analysis. } \usage{ consensusKME( multiExpr, moduleLabels, multiEigengenes = NULL, consensusQuantile = 0, signed = TRUE, useModules = NULL, metaAnalysisWeights = NULL, corAndPvalueFnc = corAndPvalue, corOptions = list(), corComponent = "cor", getQvalues = FALSE, useRankPvalue = TRUE, rankPvalueOptions = list(calculateQvalue = getQvalues, pValueMethod = "scale"), setNames = NULL, excludeGrey = TRUE, greyLabel = if (is.numeric(moduleLabels)) 0 else "grey") } \arguments{ \item{multiExpr}{ Expression (or other numeric) data in a multi-set format. A vector of lists; in each list there must be a component named `data' whose content is a matrix or dataframe or array of dimension 2. } \item{moduleLabels}{ Module labels: one label for each gene in \code{multiExpr}. } \item{multiEigengenes}{ Optional eigengenes of modules specified in \code{moduleLabels}. If not given, will be calculated from \code{multiExpr}. } \item{signed}{ logical: should the network be considered signed? In signed networks (\code{TRUE}), negative kME values are not considered significant and the corresponding p-values will be one-sided. In unsigned networks (\code{FALSE}), negative kME values are considered significant and the corresponding p-values will be two-sided. } \item{useModules}{ Optional specification of module labels to which the analysis should be restricted. This could be useful if there are many modules, most of which are not interesting. 
Note that the "grey" module cannot be used with \code{useModules}.} \item{consensusQuantile}{ Quantile for the consensus calculation. Should be a number between 0 (minimum) and 1. } \item{metaAnalysisWeights}{ Optional specification of meta-analysis weights for each input set. If given, must be a numeric vector of length equal the number of input data sets (i.e., \code{length(multiExpr)}). These weights will be used in addition to constant weights and weights proportional to number of samples (observations) in each set. } \item{corAndPvalueFnc}{ Function that calculates associations between expression profiles and eigengenes. See details. } \item{corOptions}{ List giving additional arguments to function \code{corAndPvalueFnc}. See details. } \item{corComponent}{ Name of the component of output of \code{corAndPvalueFnc} that contains the actual correlation. } \item{getQvalues}{ logical: should q-values (estimates of FDR) be calculated? } \item{useRankPvalue}{ Logical: should the \code{\link{rankPvalue}} function be used to obtain alternative meta-analysis statistics?} \item{rankPvalueOptions}{ Additional options for function \code{\link{rankPvalue}}. These include \code{na.last} (default \code{"keep"}), \code{ties.method} (default \code{"average"}), \code{calculateQvalue} (default copied from input \code{getQvalues}), and \code{pValueMethod} (default \code{"scale"}). See the help file for \code{\link{rankPvalue}} for full details.} \item{setNames}{ names for the input sets. If not given, will be taken from \code{names(multiExpr)}. If those are \code{NULL} as well, the names will be \code{"Set_1", "Set_2", ...}. } \item{excludeGrey}{ logical: should the grey module be excluded from the kME tables? Since the grey module is typically not a real module, it makes little sense to report kME values for it. } \item{greyLabel}{ label that labels the grey module. 
} } \details{ The function \code{corAndPvalueFnc} is currently expected to accept arguments \code{x} (gene expression profiles), \code{y} (eigengene expression profiles), and \code{alternative} with possibilities at least \code{"greater", "two.sided"}. Any additional arguments can be passed via \code{corOptions}. The function \code{corAndPvalueFnc} should return a list which at the least contains (1) a matrix of associations of genes and eigengenes (this component should have the name given by \code{corComponent}), and (2) a matrix of the corresponding p-values, named "p" or "p.value". Other components are optional but for full functionality should include (3) \code{nObs} giving the number of observations for each association (which is the number of samples less number of missing data - this can in principle vary from association to association), and (4) \code{Z} giving a Z statistic for each observation. If these are missing, \code{nObs} is calculated in the main function, and calculations using the Z statistic are skipped. } \value{ Data frame with the following components (for easier readability the order here is not the same as in the actual output): \item{ID}{Gene ID, taken from the column names of the first input data set} \item{consensus.kME.1, consensus.kME.2, ...}{Consensus kME (that is, the requested quantile of the kMEs in the individual data sets) in each module for each gene across the input data sets. The module labels (here 1, 2, etc.) correspond to those in \code{moduleLabels}.} \item{weightedAverage.equalWeights.kME1, weightedAverage.equalWeights.kME2, ...}{ Average kME in each module for each gene across the input data sets. } \item{weightedAverage.RootDoFWeights.kME1, weightedAverage.RootDoFWeights.kME2, ...}{ Weighted average kME in each module for each gene across the input data sets. The weight of each data set is proportional to the square root of the number of samples in the set.
} \item{weightedAverage.DoFWeights.kME1, weightedAverage.DoFWeights.kME2, ...}{ Weighted average kME in each module for each gene across the input data sets. The weight of each data set is proportional to the number of samples in the set. } \item{weightedAverage.userWeights.kME1, weightedAverage.userWeights.kME2, ...}{ (Only present if input \code{metaAnalysisWeights} is non-NULL.) Weighted average kME in each module for each gene across the input data sets. The weight of each data set is given in \code{metaAnalysisWeights}.} \item{meta.Z.equalWeights.kME1, meta.Z.equalWeights.kME2, ...}{Meta-analysis Z statistic for kME in each module, obtained by weighing the Z scores in each set equally. Only returned if the function \code{corAndPvalueFnc} returns the Z statistics corresponding to the correlations.} \item{meta.Z.RootDoFWeights.kME1, meta.Z.RootDoFWeights.kME2, ...}{ Meta-analysis Z statistic for kME in each module, obtained by weighing the Z scores in each set by the square root of the number of samples. Only returned if the function \code{corAndPvalueFnc} returns the Z statistics corresponding to the correlations.} \item{meta.Z.DoFWeights.kME1, meta.Z.DoFWeights.kME2, ...}{Meta-analysis Z statistic for kME in each module, obtained by weighing the Z scores in each set by the number of samples. Only returned if the function \code{corAndPvalueFnc} returns the Z statistics corresponding to the correlations.} \item{meta.Z.userWeights.kME1, meta.Z.userWeights.kME2, ...}{Meta-analysis Z statistic for kME in each module, obtained by weighing the Z scores in each set by \code{metaAnalysisWeights}. Only returned if \code{metaAnalysisWeights} is non-NULL and the function \code{corAndPvalueFnc} returns the Z statistics corresponding to the correlations.} \item{meta.p.equalWeights.kME1, meta.p.equalWeights.kME2, ...}{ p-values obtained from the equal-weight meta-analysis Z statistics. 
Only returned if the function \code{corAndPvalueFnc} returns the Z statistics corresponding to the correlations. } \item{meta.p.RootDoFWeights.kME1, meta.p.RootDoFWeights.kME2, ...}{ p-values obtained from the meta-analysis Z statistics with weights proportional to the square root of the number of samples. Only returned if the function \code{corAndPvalueFnc} returns the Z statistics corresponding to the correlations. } \item{meta.p.DoFWeights.kME1, meta.p.DoFWeights.kME2, ...}{ p-values obtained from the degree-of-freedom weight meta-analysis Z statistics. Only returned if the function \code{corAndPvalueFnc} returns the Z statistics corresponding to the correlations. } \item{meta.p.userWeights.kME1, meta.p.userWeights.kME2, ...}{ p-values obtained from the user-supplied weight meta-analysis Z statistics. Only returned if \code{metaAnalysisWeights} is non-NULL and the function \code{corAndPvalueFnc} returns the Z statistics corresponding to the correlations. } \item{meta.q.equalWeights.kME1, meta.q.equalWeights.kME2, ...}{ q-values obtained from the equal-weight meta-analysis p-values. Only present if \code{getQvalues} is \code{TRUE} and the function \code{corAndPvalueFnc} returns the Z statistics corresponding to the kME values.} \item{meta.q.RootDoFWeights.kME1, meta.q.RootDoFWeights.kME2, ...}{ q-values obtained from the meta-analysis p-values with weights proportional to the square root of the number of samples. Only present if \code{getQvalues} is \code{TRUE} and the function \code{corAndPvalueFnc} returns the Z statistics corresponding to the kME values.} \item{meta.q.DoFWeights.kME1, meta.q.DoFWeights.kME2, ...}{ q-values obtained from the degree-of-freedom weight meta-analysis p-values. 
Only present if \code{getQvalues} is \code{TRUE} and the function \code{corAndPvalueFnc} returns the Z statistics corresponding to the kME values.} \item{meta.q.userWeights.kME1, meta.q.userWeights.kME2, ...}{ q-values obtained from the user-specified weight meta-analysis p-values. Only present if \code{metaAnalysisWeights} is non-NULL, \code{getQvalues} is \code{TRUE}, and the function \code{corAndPvalueFnc} returns the Z statistics corresponding to the kME values.} The next set of columns contain the results of function \code{\link{rankPvalue}} and are only present if input \code{useRankPvalue} is \code{TRUE}. Some columns may be missing depending on the options specified in \code{rankPvalueOptions}. We explicitly list columns that are based on weighing each set equally; names of these columns carry the suffix \code{.equalWeights}. \item{pValueExtremeRank.ME1.equalWeights, pValueExtremeRank.ME2.equalWeights, ...}{ This is the minimum of pValueLowRank and pValueHighRank, i.e. min(pValueLow, pValueHigh)} \item{pValueLowRank.ME1.equalWeights, pValueLowRank.ME2.equalWeights, ...}{ Asymptotic p-value for observing a consistently low value across the columns of datS based on the rank method.} \item{pValueHighRank.ME1.equalWeights, pValueHighRank.ME2.equalWeights, ...}{ Asymptotic p-value for observing a consistently high value across the columns of datS based on the rank method.} \item{pValueExtremeScale.ME1.equalWeights, pValueExtremeScale.ME2.equalWeights, ...}{ This is the minimum of pValueLowScale and pValueHighScale, i.e. 
min(pValueLow, pValueHigh)} \item{pValueLowScale.ME1.equalWeights, pValueLowScale.ME2.equalWeights, ...}{ Asymptotic p-value for observing a consistently low value across the columns of datS based on the Scale method.} \item{pValueHighScale.ME1.equalWeights, pValueHighScale.ME2.equalWeights, ...}{ Asymptotic p-value for observing a consistently high value across the columns of datS based on the Scale method.} \item{qValueExtremeRank.ME1.equalWeights, qValueExtremeRank.ME2.equalWeights, ...}{ local false discovery rate (q-value) corresponding to the p-value pValueExtremeRank} \item{qValueLowRank.ME1.equalWeights, qValueLowRank.ME2.equalWeights, ...}{ local false discovery rate (q-value) corresponding to the p-value pValueLowRank} \item{qValueHighRank.ME1.equalWeights, qValueHighRank.ME2.equalWeights, ...}{ local false discovery rate (q-value) corresponding to the p-value pValueHighRank} \item{qValueExtremeScale.ME1.equalWeights, qValueExtremeScale.ME2.equalWeights, ...}{ local false discovery rate (q-value) corresponding to the p-value pValueExtremeScale} \item{qValueLowScale.ME1.equalWeights, qValueLowScale.ME2.equalWeights, ...}{ local false discovery rate (q-value) corresponding to the p-value pValueLowScale} \item{qValueHighScale.ME1.equalWeights, qValueHighScale.ME2.equalWeights, ...}{ local false discovery rate (q-value) corresponding to the p-value pValueHighScale} \item{...}{Analogous columns corresponding to weighing individual sets by the square root of the number of samples, by the number of samples, and by user weights (if given). The corresponding column name suffixes are \code{.RootDoFWeights}, \code{.DoFWeights}, and \code{.userWeights}.} The following set of columns summarize kME in individual input data sets. \item{kME1.Set_1, kME1.Set_2, ..., kME2.Set_1, kME2.Set_2, ...}{ kME values for each gene in each module in each given data set. 
} \item{p.kME1.Set_1, p.kME1.Set_2, ..., p.kME2.Set_1, p.kME2.Set_2, ...}{ p-values corresponding to kME values for each gene in each module in each given data set. } \item{q.kME1.Set_1, q.kME1.Set_2, ..., q.kME2.Set_1, q.kME2.Set_2, ...}{ q-values corresponding to kME values for each gene in each module in each given data set. Only returned if \code{getQvalues} is \code{TRUE}. } \item{Z.kME1.Set_1, Z.kME1.Set_2, ..., Z.kME2.Set_1, Z.kME2.Set_2, ...}{ Z statistics corresponding to kME values for each gene in each module in each given data set. Only present if the function \code{corAndPvalueFnc} returns the Z statistics corresponding to the kME values. } } \references{ Langfelder P, Horvath S., WGCNA: an R package for weighted correlation network analysis. BMC Bioinformatics. 2008 Dec 29; 9:559. } \author{ Peter Langfelder } \seealso{ \link{signedKME} for eigengene based connectivity in a single data set. \link{corAndPvalue}, \link{bicorAndPvalue} for two alternatives for calculating correlations and the corresponding p-values and Z scores. Both can be used with this function. } \keyword{misc} WGCNA/man/overlapTableUsingKME.Rd \name{overlapTableUsingKME} \alias{overlapTableUsingKME} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Determines significant overlap between modules in two networks based on kME tables. } \description{ Takes two sets of expression data (or kME tables) as input and returns a table listing the significant overlap between each module in each data set, as well as the actual genes in common for every module pair. Modules can be defined in several ways (generally involving kME) based on user input. 
} \usage{ overlapTableUsingKME( dat1, dat2, colorh1, colorh2, MEs1 = NULL, MEs2 = NULL, name1 = "MM1", name2 = "MM2", cutoffMethod = "assigned", cutoff = 0.5, omitGrey = TRUE, datIsExpression = TRUE) } \arguments{ \item{dat1,dat2}{ Either expression data sets (with samples as rows and genes as columns) or module membership (kME) tables (with genes as rows and modules as columns). The function interprets these inputs according to whether datIsExpression=TRUE or FALSE. \strong{Be sure that these inputs include relevant row and column names; otherwise the function will not work properly.} } \item{colorh1,colorh2}{ Color vector (module assignments) corresponding to the genes from dat1/2. This vector must be the same length as the gene dimension of dat1/2. } \item{MEs1,MEs2}{ If entered (default=NULL), these are the module eigengenes that will be used to form the kME tables. Rows are samples and columns are module assignments. Note that if datIsExpression=FALSE, these inputs are ignored. } \item{name1,name2}{ The names of the two data sets being compared. These names are used in the output. } \item{cutoffMethod}{ This variable is used to determine how modules are defined in each data set. Must be one of four options: (1) "assigned" -> use the module assignments in colorh (default); (2) "kME" -> any gene with kME > cutoff is in the module; (3) "numGenes" -> the top cutoff number of genes based on kME is in the module; and (4) "pvalue" -> any gene with correlation p-value < cutoff is in the module (this includes both positively and negatively-correlated genes). } \item{cutoff}{ For all cutoffMethods other than "assigned", this parameter is used as the described cutoff value. } \item{omitGrey}{ If TRUE, the grey modules (non-module genes) for both networks are not returned. } \item{datIsExpression}{ If TRUE (default), dat1/2 is assumed to be expression data. If FALSE, dat1/2 is assumed to be a table of kME values. 
} } \value{ \item{PvaluesHypergeo}{ A table of p-values showing significance of module overlap based on the hypergeometric test. Note that these p-values are not corrected for multiple comparisons. } \item{AllCommonGenes}{ A character vector of all genes in common between the two data sets. } \item{Genes}{ A list of character vectors of all genes in each module in both data sets. All genes in the MOD module in data set MM1 could be found using "$GenesMM1$MM1_MOD" } \item{OverlappingGenes}{ A list of character vectors of all genes for each between-set comparison from PvaluesHypergeo. All genes in MOD.A from MM1 that are also in MOD.B from MM2 could be found using "$OverlappingGenes$MM1_MOD.A_MM2_MOD.B" } } \author{ Jeremy Miller } \seealso{ \code{\link{overlapTable}} } \examples{ # Example: first generate simulated data. set.seed(100) ME.A = sample(1:100,50); ME.B = sample(1:100,50) ME.C = sample(1:100,50); ME.D = sample(1:100,50) ME.E = sample(1:100,50); ME.F = sample(1:100,50) ME.G = sample(1:100,50); ME.H = sample(1:100,50) ME1 = data.frame(ME.A, ME.B, ME.C, ME.D, ME.E) ME2 = data.frame(ME.A, ME.C, ME.D, ME.E, ME.F, ME.G, ME.H) simDat1 = simulateDatExpr(ME1,1000,c(0.2,0.1,0.08,0.05,0.04,0.3), signed=TRUE) simDat2 = simulateDatExpr(ME2,1000,c(0.2,0.1,0.08,0.05,0.04,0.03,0.02,0.3), signed=TRUE) # Now run the function using assigned genes results = overlapTableUsingKME(simDat1$datExpr, simDat2$datExpr, labels2colors(simDat1$allLabels), labels2colors(simDat2$allLabels), cutoffMethod="assigned") results$PvaluesHypergeo # Now run the function using a p-value cutoff, and inputting the original MEs colnames(ME1) = standardColors(5); colnames(ME2) = standardColors(7) results = overlapTableUsingKME(simDat1$datExpr, simDat2$datExpr, labels2colors(simDat1$allLabels), labels2colors(simDat2$allLabels), ME1, ME2, cutoffMethod="pvalue", cutoff=0.05) results$PvaluesHypergeo # Check which genes are in common between the green module from set 1 and # the black module from set 2 
results$OverlappingGenes$MM1_green_MM2_black } \keyword{misc} WGCNA/man/plotNetworkHeatmap.Rd \name{plotNetworkHeatmap} \alias{plotNetworkHeatmap} \title{ Network heatmap plot } \description{ Network heatmap plot. } \usage{ plotNetworkHeatmap( datExpr, plotGenes, weights = NULL, useTOM = TRUE, power = 6, networkType = "unsigned", main = "Heatmap of the network") } \arguments{ \item{datExpr}{ a data frame containing expression data, with rows corresponding to samples and columns to genes. Missing values are allowed and will be ignored. } \item{plotGenes}{ a character vector giving the names of genes to be included in the plot. The names will be matched against \code{names(datExpr)}. } \item{weights}{optional observation weights for \code{datExpr} to be used in correlation calculation. A matrix of the same dimensions as \code{datExpr}, containing non-negative weights. Only used with Pearson correlation.} \item{useTOM}{ logical: should TOM be plotted (\code{TRUE}), or correlation-based adjacency (\code{FALSE})? } \item{power}{ soft-thresholding power for network construction. } \item{networkType}{ a character string giving the network type. Recognized values are (unique abbreviations of) \code{"unsigned"}, \code{"signed"}, and \code{"signed hybrid"}. } \item{main}{ main title for the plot. } } \details{ The function constructs a network from the given expression data (selected by \code{plotGenes}) using the soft-thresholding procedure, optionally calculates Topological Overlap (TOM) and plots a heatmap of the network. Note that all network calculations are done in one block and may fail due to memory allocation issues for large numbers of genes. } \value{ None. } \references{ Bin Zhang and Steve Horvath (2005) "A General Framework for Weighted Gene Co-Expression Network Analysis", Statistical Applications in Genetics and Molecular Biology: Vol. 4: No. 
1, Article 17 } \author{ Steve Horvath } \seealso{ \code{\link{adjacency}}, \code{\link{TOMsimilarity}} } \keyword{ hplot }% __ONLY ONE__ keyword per line WGCNA/man/votingLinearPredictor.Rd \name{votingLinearPredictor} \alias{votingLinearPredictor} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Voting linear predictor } \description{ Predictor based on univariate regression on all or selected given features that pools all predictions using weights derived from the univariate linear models. } \usage{ votingLinearPredictor( x, y, xtest = NULL, classify = FALSE, CVfold = 0, randomSeed = 12345, assocFnc = "cor", assocOptions = "use = 'p'", featureWeightPowers = NULL, priorWeights = NULL, weighByPrediction = 0, nFeatures.hi = NULL, nFeatures.lo = NULL, dropUnusedDimensions = TRUE, verbose = 2, indent = 0) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x}{ Training features (predictive variables). Each column corresponds to a feature and each row to an observation. } \item{y}{ The response variable. Can be a single vector or a matrix with arbitrarily many columns. The number of rows (observations) must equal the number of rows (observations) in x. } \item{xtest}{ Optional test set data. A matrix of the same number of columns (i.e., features) as \code{x}. If test set data are not given, only the prediction on training data will be returned. } \item{classify}{ Should the response be treated as a categorical variable? Classification really only works with two classes. (The function will run for multiclass problems as well, but the results will be sub-optimal.) } \item{CVfold}{ Optional specification of cross-validation fold. If 0 (the default), no cross-validation is performed. } \item{randomSeed}{ Random seed, used for observation selection for cross-validation. If \code{NULL}, the random generator is not reset. 
} \item{assocFnc}{ Function to measure association. Usually a measure of correlation, for example Pearson correlation or \code{\link{bicor}}. } \item{assocOptions}{ Character string specifying the options to be passed to the association function. } \item{featureWeightPowers}{ Powers to which to raise the result of \code{assocFnc} to obtain weights. Can be a single number or a vector of arbitrary length; the returned value will contain one prediction per power. } \item{priorWeights}{ Prior weights for the features. If given, must be either (1) a vector of the same length as the number of features (columns in \code{x}); (2) a matrix of dimensions length(featureWeightPowers)x(number of features); or (3) an array of dimensions (number of response variables)xlength(featureWeightPowers)x(number of features). } \item{weighByPrediction}{ (Optional) power to downweigh features that are not well predicted between training and test sets. See details. } \item{nFeatures.hi}{ Optional restriction of the number of features to use. If given, this many features with the highest association and lowest association (if \code{nFeatures.lo} is not given) will be used for prediction. } \item{nFeatures.lo}{ Optional restriction of the number of lowest (i.e., most negatively) associated features to use. Only used if \code{nFeatures.hi} is also non-NULL. } \item{dropUnusedDimensions}{ Logical: should unused dimensions be dropped from the result? } \item{verbose}{ Integer controlling how verbose the diagnostic messages should be. Zero means silent. } \item{indent}{ Indentation for the diagnostic messages. Zero means no indentation; each unit adds two spaces. } } \details{ The predictor calculates the association of each (selected) feature with the response and uses the association to calculate the weight of the feature as \code{sign(association) * (association)^featureWeightPower}. Optionally, this weight is multiplied by \code{priorWeights}. 
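The weight construction just described can be sketched in a few lines of R. This is a hypothetical illustration, not the package's actual implementation: it assumes Pearson correlation as the association measure and a single weight power, and it applies the power to the absolute value of the association so that the sign is preserved regardless of the power chosen. The function name \code{votingWeightsSketch} is invented for this example.

```r
# Hypothetical sketch of the voting weight construction; not the
# internals of votingLinearPredictor().
votingWeightsSketch <- function(x, y, featureWeightPower = 3,
                                priorWeights = NULL) {
  # Association of each feature (column of x) with the response y
  assoc <- as.vector(cor(x, y, use = "pairwise.complete.obs"))
  # sign(association) * |association|^power, optionally times prior weights
  w <- sign(assoc) * abs(assoc)^featureWeightPower
  if (!is.null(priorWeights)) w <- w * priorWeights
  w
}

# Toy data: feature 1 tracks y, feature 3 is its exact negative,
# feature 2 is unrelated noise
set.seed(1)
y <- rnorm(10)
x <- cbind(f1 = y + rnorm(10, sd = 0.1), f2 = rnorm(10), f3 = -y)
w <- votingWeightsSketch(x, y)
# w[1] is large and positive, w[3] is negative, w[2] is comparatively small
```

Features with strong negative associations thus vote with negative weights, so their inverted signal still contributes to the pooled prediction rather than being discarded.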
Further, a feature prediction weight can be used to downweigh features that are not well predicted by other features (see below). For classification, the (continuous) result of the above calculation is turned into ordinal values essentially by rounding. If features exhibit non-trivial correlations among themselves (such as, for example, in gene expression data), one can attempt to down-weigh features that do not exhibit the same correlation in the test set. This is done by using essentially the same predictor to predict \emph{features} from all other features in the test data (using the training data to train the feature predictor). Because test features are known, the prediction accuracy can be evaluated. If a feature is predicted badly (meaning the error in the test set is much larger than the error in the cross-validation prediction in training data), it may mean that its quality in the training or test data is low (for example, due to excessive noise or outliers). Such features can be downweighed using the argument \code{weighByPrediction}. The extra factor is min(1, (root mean square prediction error in test set)/(root mean square cross-validation prediction error in the training data)^weighByPrediction), that is, it is never larger than 1. } \value{ A list with the following components: \item{predicted}{The back-substitution prediction on the training data. 
Normally an array of dimensions (number of observations) x (number of response variables) x length(featureWeightPowers), but unused dimensions are dropped unless \code{dropUnusedDimensions = FALSE}.} \item{weightBase}{Absolute value of the associations of each feature with each response.} \item{variableImportance}{The weight of each feature in the prediction (including the sign).} \item{predictedTest}{If input \code{xtest} is non-NULL, the predicted test response, in format analogous to \code{predicted} above.} \item{CVpredicted}{If input \code{CVfold} is non-zero, cross-validation prediction on the training data.} } \author{ Peter Langfelder } \note{ It makes little practical sense to supply neither \code{xtest} nor \code{CVfold} since the prediction accuracy on training data will be highly biased. } \seealso{ \code{\link{bicor}} for robust correlation that can be used as an association measure } \keyword{ misc } WGCNA/man/branchSplit.dissim.Rd \name{branchSplit.dissim} \alias{branchSplit.dissim} \title{ Branch split based on dissimilarity. } \description{ Calculation of branch split based on a dissimilarity matrix. This function is used as a plugin for the dynamicTreeCut package and the user should not call this function directly. This function is experimental and subject to change. } \usage{ branchSplit.dissim( dissimMat, branch1, branch2, upperP, minNumberInSplit = 5, getDetails = FALSE, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{dissimMat}{ Dissimilarity matrix.} \item{branch1}{ Branch 1. } \item{branch2}{ Branch 2. } \item{upperP}{ Percentile of (closest) objects to be considered. } \item{minNumberInSplit}{ Minimum number of objects to be considered. } \item{getDetails}{ Should details of the calculation be returned? } \item{\dots}{ Other arguments for compatibility; currently unused. } } \value{ A single number or a list containing details of the calculation. 
} \author{ Peter Langfelder } \keyword{ misc }% __ONLY ONE__ keyword per line WGCNA/man/mtd.simplify.Rd \name{mtd.simplify} \alias{mtd.simplify} \title{ If possible, simplify a multiData structure to a 3-dimensional array. } \description{ This function attempts to put all \code{data} components into a 3-dimensional array, with the last dimension corresponding to the sets. This is only possible if all \code{data} components are matrices or data frames with the same dimensions. } \usage{ mtd.simplify(multiData) } \arguments{ \item{multiData}{ A multiData structure in the "strict" sense (see below). } } \details{ A multiData structure is intended to store (the same type of) data for multiple, possibly independent, realizations (for example, expression data for several independent experiments). It is a list where each component corresponds to an (independent) data set. Each component is in turn a list that can hold various types of information but must have a \code{data} component. In a "strict" multiData structure, the \code{data} components are required to each be a matrix or a data frame and have the same number of columns. In a "loose" multiData structure, the \code{data} components can be anything (but for most purposes should be of comparable type and content). This function assumes a "strict" multiData structure. } \value{ A 3-dimensional array collecting all \code{data} components. } \author{ Peter Langfelder } \note{ The function is relatively fragile and may fail. Use at your own risk. } \seealso{ \code{\link{multiData}} to create a multiData structure; \code{\link{multiData2list}} for converting multiData structures to plain lists. } \keyword{ misc }% __ONLY ONE__ keyword per line WGCNA/man/cor.Rd \name{cor} \alias{cor1} \alias{corFast} \alias{cor} \title{ Fast calculations of Pearson correlation. 
} \description{ These functions implement a faster calculation of (weighted) Pearson correlation. The speedup over R's standard \code{\link[stats]{cor}} function will be substantial particularly if the input matrix only contains a small number of missing data. If there are no missing data, or the missing data are numerous, the speedup will be smaller. } \usage{ cor(x, y = NULL, use = "all.obs", method = c("pearson", "kendall", "spearman"), weights.x = NULL, weights.y = NULL, quick = 0, cosine = FALSE, cosineX = cosine, cosineY = cosine, drop = FALSE, nThreads = 0, verbose = 0, indent = 0) corFast(x, y = NULL, use = "all.obs", quick = 0, nThreads = 0, verbose = 0, indent = 0) cor1(x, use = "all.obs", verbose = 0, indent = 0) } \arguments{ \item{x}{ a numeric vector or a matrix. If \code{y} is null, \code{x} must be a matrix. } \item{y}{ a numeric vector or a matrix. If not given, correlations of columns of \code{x} will be calculated. } \item{use}{ a character string specifying the handling of missing data. The fast calculations currently support \code{"all.obs"} and \code{"pairwise.complete.obs"}; for other options, see R's standard correlation function \code{\link[stats]{cor}}. Abbreviations are allowed. } \item{method}{ a character string specifying the method to be used. Fast calculations are currently available only for \code{"pearson"}. } \item{weights.x}{optional observation weights for \code{x}. A matrix of the same dimensions as \code{x}, containing non-negative weights. Only used in fast calculations: \code{method} must be \code{"pearson"} and \code{use} must be one of \code{"all.obs", "pairwise.complete.obs"}.} \item{weights.y}{optional observation weights for \code{y}. A matrix of the same dimensions as \code{y}, containing non-negative weights. 
Only used in fast calculations: \code{method} must be \code{"pearson"} and \code{use} must be one of \code{"all.obs", "pairwise.complete.obs"}.} \item{quick}{ real number between 0 and 1 that controls the precision of handling of missing data in the calculation of correlations. See details. } \item{cosine}{ logical: calculate cosine correlation? Only valid for \code{method="pearson"}. Cosine correlation is similar to Pearson correlation but the mean subtraction is not performed. The result is the cosine of the angle(s) between (the columns of) \code{x} and \code{y}. } \item{cosineX}{ logical: use the cosine calculation for \code{x}? This setting does not affect \code{y} and can be used to give a hybrid cosine-standard correlation. } \item{cosineY}{ logical: use the cosine calculation for \code{y}? This setting does not affect \code{x} and can be used to give a hybrid cosine-standard correlation. } \item{drop}{logical: should the result be turned into a vector if it is effectively one-dimensional? } \item{nThreads}{ non-negative integer specifying the number of parallel threads to be used by certain parts of correlation calculations. This option only has an effect on systems on which a POSIX thread library is available (which currently includes Linux and Mac OSX, but excludes Windows). If zero, the number of online processors will be used if it can be determined dynamically, otherwise correlation calculations will use 2 threads. Note that this option does not affect what is usually the most expensive part of the calculation, namely the matrix multiplication. The matrix multiplication is carried out by BLAS routines provided by R; these can be sped up by installing a fast BLAS and making R use it. } \item{verbose}{ Controls the level of verbosity. Values above zero will cause a small amount of diagnostic messages to be printed. } \item{indent}{ Indentation of printed diagnostic messages. 
Each unit above zero adds two spaces.} } \details{ The fast calculations are currently implemented only for \code{method="pearson"} and \code{use} either \code{"all.obs"} or \code{"pairwise.complete.obs"}. The \code{corFast} function is a wrapper that calls the function \code{cor}. If the combination of \code{method} and \code{use} is implemented by the fast calculations, the fast code is executed; otherwise, R's own correlation \code{\link[stats]{cor}} is executed. The argument \code{quick} specifies the precision of handling of missing data. Zero will cause all calculations to be executed precisely, which may be significantly slower than calculations without missing data. Progressively higher values will speed up the calculations but introduce progressively larger errors. Without missing data, all column means and variances can be pre-calculated before the covariances are calculated. When missing data are present, exact calculations require the column means and variances to be calculated for each covariance. The approximate calculation uses the pre-calculated mean and variance and simply ignores missing data in the covariance calculation. If the number of missing data is high, the pre-calculated means and variances may be very different from the actual ones, thus potentially introducing large errors. The \code{quick} value times the number of rows specifies the maximum difference in the number of missing entries for mean and variance calculations on the one hand and covariance on the other hand that will be tolerated before a recalculation is triggered. The hope is that if only a few missing data are treated approximately, the error introduced will be small but the potential speedup can be significant. } \value{ The matrix of the Pearson correlations of the columns of \code{x} with columns of \code{y} if \code{y} is given, and the correlations of the columns of \code{x} if \code{y} is not given. 
} \author{ Peter Langfelder } \references{ Peter Langfelder, Steve Horvath (2012) Fast R Functions for Robust Correlations and Hierarchical Clustering. Journal of Statistical Software, 46(11), 1-17. \url{https://www.jstatsoft.org/v46/i11/} } \note{ The implementation uses the BLAS library matrix multiplication function for the most expensive step of the calculation. Using a tuned, architecture-specific BLAS may significantly improve the performance of this function. The values returned by the corFast function may differ from the values returned by R's function \code{\link[stats]{cor}} by rounding errors on the order of 1e-15. } \seealso{ R's standard Pearson correlation function \code{\link[stats]{cor}}. } \examples{ ## Test the speedup compared to standard function cor # Generate a random matrix with 100 rows and 500 columns set.seed(10) nrow = 100; ncol = 500; data = matrix(rnorm(nrow*ncol), nrow, ncol); ## First test: no missing data system.time( {corStd = stats::cor(data)} ); system.time( {corFast = cor(data)} ); all.equal(corStd, corFast) # Here R's standard correlation performs very well. # We now add a few missing entries. data[sample(nrow, 10), 1] = NA; # And test the correlations again... system.time( {corStd = stats::cor(data, use ='p')} ); system.time( {corFast = cor(data, use = 'p')} ); all.equal(corStd, corFast) # Here R's standard correlation slows down considerably # while corFast still retains its speed. Choosing # higher ncol above will make the difference more pronounced. } \keyword{ misc } WGCNA/man/modulePreservation.Rd \name{modulePreservation} \alias{modulePreservation} \title{ Calculation of module preservation statistics } \description{ Calculations of module preservation statistics between independent data sets. 
} \usage{ modulePreservation( multiData, multiColor, multiWeights = NULL, dataIsExpr = TRUE, networkType = "unsigned", corFnc = "cor", corOptions = "use = 'p'", referenceNetworks = 1, testNetworks = NULL, nPermutations = 100, includekMEallInSummary = FALSE, restrictSummaryForGeneralNetworks = TRUE, calculateQvalue = FALSE, randomSeed = 12345, maxGoldModuleSize = 1000, maxModuleSize = 1000, quickCor = 1, ccTupletSize = 2, calculateCor.kIMall = FALSE, calculateClusterCoeff = FALSE, useInterpolation = FALSE, checkData = TRUE, greyName = NULL, goldName = NULL, savePermutedStatistics = TRUE, loadPermutedStatistics = FALSE, permutedStatisticsFile = if (useInterpolation) "permutedStats-intrModules.RData" else "permutedStats-actualModules.RData", plotInterpolation = TRUE, interpolationPlotFile = "modulePreservationInterpolationPlots.pdf", discardInvalidOutput = TRUE, parallelCalculation = FALSE, verbose = 1, indent = 0) } \arguments{ \item{multiData}{ expression data or adjacency data in multi-set format (see \code{\link{checkSets}}). A vector of lists, one per set. Each set must contain a component \code{data} that contains the expression or adjacency data. If expression data are used, rows correspond to samples and columns to genes or probes. In case of adjacencies, each \code{data} matrix should be a symmetric matrix with entries between 0 and 1 and unit diagonal. Each component of the outermost list should be named. } \item{multiColor}{ a list in which every component is a vector giving the module labels of genes in \code{multiData}. The components must be named using the same names that are used in \code{multiData}; these names are used to match labels to expression data sets. See details. } \item{multiWeights}{optional weights, only when \code{multiData} contains expression data. If given, must be in the multi-set format (see \code{\link{checkSets}}) and weights for each set must have the same dimensions as the corresponding set in \code{multiData}. 
The weights are used in correlation calculations that involve \code{multiData}, and are supplied as argument \code{weights.x} and possibly \code{weights.y} (where appropriate) to the correlation function specified by \code{corFnc}.} \item{dataIsExpr}{ logical: if \code{TRUE}, \code{multiData} will be interpreted as expression data; if \code{FALSE}, \code{multiData} will be interpreted as adjacencies. } \item{networkType}{ network type. Allowed values are (unique abbreviations of) \code{"unsigned"}, \code{"signed"}, \code{"signed hybrid"}. See \code{\link{adjacency}}. } \item{corFnc}{ character string specifying the function to be used to calculate co-expression similarity. Defaults to Pearson correlation. Another useful choice is \code{\link{bicor}}. More generally, any function returning values between -1 and 1 can be used. } \item{corOptions}{ character string specifying additional arguments to be passed to the function given by \code{corFnc}. Use \code{"use = 'p', method = 'spearman'"} to obtain Spearman correlation. } \item{referenceNetworks}{ a vector giving the indices of expression data to be used as reference networks. Reference networks must have their module labels given in \code{multiColor}. } \item{testNetworks}{a list with one component per each entry in \code{referenceNetworks} above, giving the test networks in which to evaluate module preservation for the corresponding reference network. If not given, preservation will be evaluated in all networks (except each reference network). If \code{referenceNetworks} is of length 1, \code{testNetworks} can also be a vector (instead of a list containing the single vector).} \item{nPermutations}{ specifies the number of permutations that will be calculated in the permutation test. } \item{includekMEallInSummary}{ logical: should cor.kMEall be included in the calculated summary statistics? 
Because kMEall takes into account all genes in the network, this statistic measures preservation of the full network with respect to the eigengene of the module. This may be undesirable, hence the default is \code{FALSE}.} \item{restrictSummaryForGeneralNetworks}{ logical: should the summary statistics for general (not correlation) networks be restricted (density to meanAdj, connectivity to cor.kIM and cor.Adj)? The default \code{TRUE} corresponds to published work. } \item{calculateQvalue}{ logical: should q-values (local FDR estimates) be calculated? Package qvalue must be installed for this calculation. Note that q-values may not be meaningful when the number of modules is small and/or most modules are preserved. } \item{randomSeed}{ seed for the random number generator. If \code{NULL}, the seed will not be set. If non-\code{NULL} and the random generator has been initialized prior to the function call, the latter's state is saved and restored upon exit.} \item{maxGoldModuleSize}{ maximum size of the "gold" module, i.e., the random sample of all network genes. } \item{maxModuleSize}{ maximum module size used for calculations. Modules larger than \code{maxModuleSize} will be reduced by randomly sampling \code{maxModuleSize} genes. } \item{quickCor}{ number between 0 and 1 specifying the handling of missing data in calculation of correlation. Zero means exact but potentially slower calculations; one means potentially faster calculations, but with potentially inaccurate results if the proportion of missing data is large. See \code{\link{cor}} for more details. } \item{ccTupletSize}{ tuplet size for co-clustering calculations. } \item{calculateCor.kIMall}{ logical: should cor.kIMall be calculated? This option is only valid for adjacency input. If \code{FALSE}, cor.kIMall will not be calculated, potentially saving a significant amount of time if the input adjacencies are large and contain many modules.
} \item{calculateClusterCoeff}{ logical: should statistics based on the clustering coefficient be calculated? While these statistics may be interesting, the calculations are also computationally expensive.} \item{checkData}{ logical: should data be checked for excessive number of missing entries? See \code{\link{goodSamplesGenesMS}} for details. } \item{greyName}{ label used for unassigned genes. Traditionally such genes are labeled by grey color or numeric label 0. These values are the default when \code{multiColor} contains character or numeric vectors, respectively. } \item{goldName}{ label used for the "module" representing a random sample of the whole network. Traditionally such genes are labeled by gold color or numeric label 0.1. These values are the default when \code{greyName} is character and numeric, respectively. If these values conflict with the module labels in \code{multiColor}, they should be set to something not present in \code{multiColor}.} \item{savePermutedStatistics}{ logical: should calculated permutation statistics be saved? Saved statistics may be re-used if the calculation needs to be repeated.} \item{permutedStatisticsFile}{ file name to save the permutation statistics into. } \item{loadPermutedStatistics}{ logical: should permutation statistics be loaded? If a previously executed calculation needs to be repeated, loading permutation study results can cut the calculation time many-fold. } \item{useInterpolation}{ logical: should permutation statistics be calculated by interpolating an artificial set of evenly spaced modules? This option may potentially speed up the calculations, but it restricts calculations to density measures. } \item{plotInterpolation}{ logical: should interpolation plots be saved? If interpolation is used (see \code{useInterpolation} above), the function can optionally generate diagnostic plots that can be used to assess whether the interpolation makes sense. 
} \item{interpolationPlotFile}{ file name to save the interpolation plots into. } \item{discardInvalidOutput}{logical: should output columns containing no valid data be discarded? This option may be useful when input \code{dataIsExpr} is \code{FALSE} and some of the output statistics cannot be calculated. This option causes such statistics to be dropped from output.} \item{parallelCalculation}{logical: should calculations be done in parallel? Note that parallel calculations are turned off by default and will lead to somewhat DIFFERENT results than serial calculations because the random seed is set differently. For the calculation to actually run in parallel mode, a call to \code{\link{enableWGCNAThreads}} must be made before this function is called.} \item{verbose}{ integer level of verbosity. Zero means silent, higher values make the output progressively more and more verbose. } \item{indent}{ indentation for diagnostic messages. Zero means no indentation, each unit adds two spaces. } } \details{ This function calculates module preservation statistics pair-wise between given reference sets and all other sets in \code{multiData}. Reference sets must have their corresponding module assignment specified in \code{multiColor}; module assignment is optional for test sets. Individual expression sets and their module labels are matched using \code{names} of the corresponding components in \code{multiData} and \code{multiColor}. For each reference-test pair, the function calculates module preservation statistics that measure how well the modules of the reference set are preserved in the test set. If \code{multiColor} also contains module assignment for the test set, the calculated statistics also include cross-tabulation statistics that make use of the test module assignment. For each reference-test pair, the function only uses genes (columns of the \code{data} component of each component of \code{multiData}) that are in common between the reference and test set.
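As a concrete illustration of this setup, a minimal call might look as follows. This is a hedged sketch using hypothetical simulated data; the set names, module labels, and the tiny \code{nPermutations} are chosen for illustration only and are far too small for a real analysis.

```r
library(WGCNA)
# Two hypothetical data sets with 50 genes each; Set2 is a noisy copy of Set1.
set.seed(1)
nSamples <- 20; nGenes <- 50
expr1 <- matrix(rnorm(nSamples * nGenes), nSamples, nGenes)
expr2 <- expr1 + matrix(rnorm(nSamples * nGenes, sd = 0.5), nSamples, nGenes)
colnames(expr1) <- colnames(expr2) <- paste0("Gene", seq_len(nGenes))
multiData <- list(Set1 = list(data = expr1), Set2 = list(data = expr2))
# Module labels are required for the reference set (Set1) only.
multiColor <- list(Set1 = rep(c("blue", "turquoise"), each = nGenes/2))
mp <- modulePreservation(multiData, multiColor,
                         referenceNetworks = 1,
                         nPermutations = 20,
                         savePermutedStatistics = FALSE,
                         verbose = 0)
# Z statistics for set-1 modules evaluated in set 2 (reference index first,
# test index second):
mp$preservation$Z[[1]][[2]]
```

The positional indexing in the last line follows the reference/test nesting described in the Value section below.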
Columns are matched by column names, so column names must be valid. In addition to preservation statistics, the function also calculates several statistics of module quality, that is measures of how well-defined modules are in the reference set. The quality statistics are calculated with respect to genes in common with a test set; thus the function calculates a set of quality statistics for each reference-test pair. This may be somewhat counter-intuitive, but it allows a direct comparison of corresponding quality and preservation statistics. The calculated p-values are determined from the Z scores of individual measures under assumption of normality. No p-value is calculated for the Zsummary measures. The reported p-values include a Bonferroni correction to the number of tested modules. Because the p-values for strongly preserved modules are often extremely low, the function reports logarithms (in base 10) of the p-values. However, q-values are reported untransformed since they are calculated that way in package qvalue. Missing data are removed (but see \code{quickCor} above). } \value{ The function returns a nested list of preservation statistics. At the top level, the list components are: \item{quality}{observed values, Z scores, log p-values, Bonferroni-corrected log p-values, and (optionally) q-values of quality statistics. All logarithms are in base 10.} \item{preservation }{observed values, Z scores, log p-values, Bonferroni-corrected log p-values, and (optionally) q-values of density and connectivity preservation statistics. All logarithms are in base 10.} \item{accuracy}{observed values, Z scores, log p-values, Bonferroni-corrected log p-values, and (optionally) q-values of cross-tabulation statistics. All logarithms are in base 10.} \item{referenceSeparability}{observed values, Z scores, log p-values, Bonferroni-corrected log p-values, and (optionally) q-values of module separability in the reference network.
All logarithms are in base 10.} \item{testSeparability}{observed values, Z scores, log p-values, Bonferroni-corrected log p-values, and (optionally) q-values of module separability in the test network. All logarithms are in base 10.} \item{permutationDetails}{results of individual permutations, useful for diagnostics} All of the above are lists. The lists \code{quality}, \code{preservation}, \code{referenceSeparability}, and \code{testSeparability} each contain 4 or 5 components: \code{observed} contains observed values, \code{Z} contains the corresponding Z scores, \code{log.p} contains base 10 logarithms of the p-values, \code{log.pBonf} contains base 10 logarithms of the Bonferroni-corrected p-values, and optionally \code{q} contains the associated q-values. The list \code{accuracy} contains \code{observed}, \code{Z}, \code{log.p}, \code{log.pBonf}, optionally \code{q}, and additional components \code{observedOverlapCounts} and \code{observedFisherPvalues} that contain the observed matrices of overlap counts and Fisher test p-values. Each of the lists \code{observed}, \code{Z}, \code{log.p}, \code{log.pBonf}, optionally \code{q}, \code{observedOverlapCounts} and \code{observedFisherPvalues} is structured as a 2-level list where the outer components correspond to reference sets and the inner components to test sets. As an example, \code{preservation$observed[[1]][[2]]} contains the density and connectivity preservation statistics for the preservation of set 1 modules in set 2, that is set 1 is the reference set and set 2 is the test set. \code{preservation$observed[[1]][[2]]} is a data frame in which each row corresponds to a module in the reference network 1 plus one row for the unassigned objects, and one row for a "module" that contains randomly sampled objects and that represents a whole-network average. Each column corresponds to a statistic as indicated by the column name. } \references{ Peter Langfelder, Rui Luo, Michael C.
Oldham, and Steve Horvath, to appear } \author{ Rui Luo and Peter Langfelder } \note{ For large data sets, the permutation study may take a while (typically on the order of several hours). Use \code{verbose = 3} to get a detailed progress report as the calculations advance. } \seealso{ Network construction and module detection functions in the WGCNA package such as \code{\link{adjacency}}, \code{\link{blockwiseModules}}; rudimentary cleaning in \code{\link{goodSamplesGenesMS}}; the WGCNA implementation of correlation in \code{\link{cor}}. } \keyword{ misc }% __ONLY ONE__ keyword per line WGCNA/man/cutreeStaticColor.Rd0000644000176200001440000000246414012015545015613 0ustar liggesusers\name{cutreeStaticColor} \alias{cutreeStaticColor} \title{ Constant height tree cut using color labels} \description{ Cluster detection by a constant height cut of a hierarchical clustering dendrogram. } \usage{ cutreeStaticColor(dendro, cutHeight = 0.9, minSize = 50) } \arguments{ \item{dendro}{ a hierarchical clustering dendrogram such as returned by \code{\link{hclust}}. } \item{cutHeight}{ height at which branches are to be cut. } \item{minSize}{ minimum number of objects on a branch to be considered a cluster. } } \details{ This function performs a straightforward constant-height cut as implemented by \code{\link{cutree}}, then calculates the number of objects on each branch and only keeps branches that have at least \code{minSize} objects on them. } \value{ A character vector giving color labels of objects, with "grey" meaning unassigned. The largest cluster is conventionally labeled "turquoise", next "blue" etc. Run \code{standardColors()} to see the sequence of standard color labels.
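For instance, a small hedged sketch on simulated data (the choice of cut height as 99\% of the tree height is arbitrary and for illustration only):

```r
library(WGCNA)
# Cluster 40 random points and keep only branches with at least 5 objects.
set.seed(2)
x <- matrix(rnorm(40 * 10), 40, 10)
dendro <- hclust(dist(x), method = "average")
colors <- cutreeStaticColor(dendro,
                            cutHeight = 0.99 * max(dendro$height),
                            minSize = 5)
table(colors)  # branches smaller than minSize come out "grey" (unassigned)
```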
} \author{ Peter Langfelder } \seealso{ \code{\link{hclust}} for hierarchical clustering, \code{\link{cutree}} and \code{\link{cutreeStatic}} for other constant-height branch cuts, \code{\link{standardColors}} to see the sequence of color labels that can be assigned.} \keyword{misc} WGCNA/man/networkScreeningGS.Rd0000644000176200001440000000270614012015545015736 0ustar liggesusers\name{networkScreeningGS} \alias{networkScreeningGS} \title{ Network gene screening with an external gene significance measure } \description{ This function blends standard and network approaches to selecting genes (or variables in general) with high gene significance } \usage{ networkScreeningGS( datExpr, datME, GS, oddPower = 3, blockSize = 1000, minimumSampleSize = ..minNSamples, addGS = TRUE) } \arguments{ \item{datExpr}{ data frame of expression data } \item{datME}{ data frame of module eigengenes } \item{GS}{ numeric vector of gene significances } \item{oddPower}{ odd integer used as a power to raise module memberships and significances } \item{blockSize}{ block size to use for calculations with large data sets } \item{minimumSampleSize}{ minimum acceptable number of samples. Defaults to the default minimum number of samples used throughout the WGCNA package, currently 4.} \item{addGS}{ logical: should gene significances be added to the screening statistics?} } \details{ This function should be considered experimental. It takes into account both the "standard" and the network measures of gene importance for the trait. 
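A hedged sketch of typical inputs follows; the data, trait, and module assignment are hypothetical, with module eigengenes computed via \code{moduleEigengenes} and gene significance taken as correlation with a simulated trait:

```r
library(WGCNA)
set.seed(3)
# Hypothetical expression data: 30 samples, 100 genes.
datExpr <- as.data.frame(matrix(rnorm(30 * 100), 30, 100))
names(datExpr) <- paste0("Gene", 1:100)
trait <- rnorm(30)
# Gene significance: correlation of each gene with the trait.
GS <- as.numeric(cor(datExpr, trait, use = "p"))
# Eigengenes of two arbitrary 50-gene "modules".
moduleColors <- rep(c("blue", "brown"), each = 50)
datME <- moduleEigengenes(datExpr, colors = moduleColors)$eigengenes
screen <- networkScreeningGS(datExpr, datME, GS)
head(screen$GS.Weighted)
```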
} \value{ \item{GS.Weighted }{weighted gene significance } \item{GS }{copy of the input gene significances (only if \code{addGS=TRUE})} } \author{ Steve Horvath } \seealso{\code{\link{networkScreening}}, \code{\link{automaticNetworkScreeningGS}}} \keyword{ misc} WGCNA/man/checkAdjMat.Rd0000644000176200001440000000166614012015545014317 0ustar liggesusers\name{checkAdjMat} \alias{checkAdjMat} \alias{checkSimilarity} \title{ Check adjacency matrix } \description{ Checks a given matrix for properties that an adjacency matrix must satisfy. } \usage{ checkAdjMat(adjMat, min = 0, max = 1) checkSimilarity(similarity, min = -1, max = 1) } \arguments{ \item{adjMat}{ matrix to be checked } \item{similarity}{ matrix to be checked } \item{min}{minimum allowed value for entries of the input} \item{max}{maximum allowed value for entries of the input} } \details{ The function checks whether the given matrix really is a 2-dimensional numeric matrix, whether it is square, symmetric, and all finite entries are between \code{min} and \code{max}. If any of the conditions is not met, the function issues an error. } \value{ None. The function returns normally if all conditions are met. } \author{Peter Langfelder} \seealso{\code{\link{adjacency}}} \keyword{misc } WGCNA/man/removeGreyME.Rd0000644000176200001440000000212414012015545014515 0ustar liggesusers\name{removeGreyME} \alias{removeGreyME} \title{Removes the grey eigengene from a given collection of eigengenes. } \description{ Given module eigengenes either in a single data frame or in a multi-set format, removes the grey eigengenes from each set. If the grey eigengenes are not found, a warning is issued. } \usage{ removeGreyME(MEs, greyMEName = paste(moduleColor.getMEprefix(), "grey", sep="")) } \arguments{ \item{MEs}{Module eigengenes, either in a single data frame (typically for a single set), or in a multi-set format.
See \code{\link{checkSets}} for a description of the multi-set format.} \item{greyMEName}{Name of the module eigengene (in each corresponding data frame) that corresponds to the grey color. This will typically be "PCgrey" or "MEgrey". If the module eigengenes were calculated using standard functions in this library, the default should work.} } \value{ Module eigengenes in the same format as input (either a single data frame or a vector of lists) with the grey eigengene removed. } \author{ Peter Langfelder, \email{Peter.Langfelder@gmail.com} } \keyword{misc} WGCNA/man/simulateMultiExpr.Rd0000644000176200001440000001506114022073754015657 0ustar liggesusers\name{simulateMultiExpr} \alias{simulateMultiExpr} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Simulate multi-set expression data} \description{ Simulation of expression data in several sets with related module structure. } \usage{ simulateMultiExpr(eigengenes, nGenes, modProportions, minCor = 0.5, maxCor = 1, corPower = 1, backgroundNoise = 0.1, leaveOut = NULL, signed = FALSE, propNegativeCor = 0.3, geneMeans = NULL, nSubmoduleLayers = 0, nScatteredModuleLayers = 0, averageNGenesInSubmodule = 10, averageExprInSubmodule = 0.2, submoduleSpacing = 2, verbose = 1, indent = 0) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{eigengenes}{ the seed eigengenes for the simulated modules in a multi-set format. A list with one component per set. Each component is again a list that must contain a component \code{data}. This is a data frame of seed eigengenes for the corresponding data set. Columns correspond to modules, rows to samples. Number of samples in the simulated data is determined from the number of samples of the eigengenes. } \item{nGenes}{ integer specifying the number of simulated genes.
} \item{modProportions}{ a numeric vector with length equal to the number of eigengenes in \code{eigengenes} plus one, containing fractions of the total number of genes to be put into each of the modules and into the "grey module", which means genes not related to any of the modules. See details. } \item{minCor}{ minimum correlation of module genes with the corresponding eigengene. See details. } \item{maxCor}{ maximum correlation of module genes with the corresponding eigengene. See details. } \item{corPower}{ controls the dropoff of gene-eigengene correlation. See details. } \item{backgroundNoise}{ amount of background noise to be added to the simulated expression data. } \item{leaveOut}{ optional specification of modules that should be left out of the simulation, that is their genes will be simulated as unrelated ("grey"). A logical matrix in which columns correspond to sets and rows to modules. Wherever \code{TRUE}, the corresponding module in the corresponding data set will not be simulated, that is its genes will be simulated independently of the eigengene. } \item{signed}{ logical: should the genes be simulated as belonging to a signed network? If \code{TRUE}, all genes will be simulated to have positive correlation with the eigengene. If \code{FALSE}, a proportion given by \code{propNegativeCor} will be simulated with negative correlations of the same absolute values. } \item{propNegativeCor}{ proportion of genes to be simulated with negative gene-eigengene correlations. Only effective if \code{signed} is \code{FALSE}. } \item{geneMeans}{ optional vector of length \code{nGenes} giving desired mean expression for each gene. If not given, the returned expression profiles will have mean zero. } \item{nSubmoduleLayers}{ number of layers of ordered submodules to be added. See details. } \item{nScatteredModuleLayers}{ number of layers of scattered submodules to be added. See details. } \item{averageNGenesInSubmodule}{ average number of genes in a submodule.
See details. } \item{averageExprInSubmodule}{ average strength of submodule expression vectors. } \item{submoduleSpacing}{ a number giving submodule spacing: this multiple of the submodule size will lie between the submodule and the next one. } \item{verbose}{ integer level of verbosity. Zero means silent, higher values make the output progressively more and more verbose. } \item{indent}{ indentation for diagnostic messages. Zero means no indentation, each unit adds two spaces. } } \details{ For details of simulation of individual data sets and the meaning of individual set simulation arguments, see \code{\link{simulateDatExpr}}. This function simulates several data sets at a time and puts the result in a multi-set format. The number of genes is the same for all data sets. Module memberships are also the same, but modules can optionally be ``dissolved'', that is their genes will be simulated as unassigned. Such ``dissolved'', or left out, modules can be specified in the matrix \code{leaveOut}. } \value{ A list with the following components: \item{multiExpr }{simulated expression data in multi-set format analogous to that of the input \code{eigengenes}. A list with one component per set. Each component is again a list that must contain a component \code{data}. This is a data frame of expression data for the corresponding data set. Columns correspond to genes, rows to samples.} \item{setLabels}{a matrix of dimensions (number of genes) times (number of sets) that contains module labels for each gene in each simulated data set. } \item{allLabels}{a matrix of dimensions (number of genes) times (number of sets) that contains the module labels that would be simulated if no module were left out using \code{leaveOut}. This means that all columns of the matrix are equal; the columns are repeated for convenience so \code{allLabels} has the same dimensions as \code{setLabels}.
} \item{labelOrder}{a matrix of dimensions (number of modules) times (number of sets) that contains the order in which module labels were assigned to genes in each set. The first label is assigned to genes 1...(module size of module labeled by first label), the second label to the following batch of genes etc.} } \references{ A short description of the simulation method can also be found in the Supplementary Material to the article Langfelder P, Horvath S (2007) Eigengene networks for studying the relationships between co-expression modules. BMC Systems Biology 2007, 1:54. The material is posted at http://horvath.genetics.ucla.edu/html/CoexpressionNetwork/EigengeneNetwork/SupplementSimulations.pdf. } \author{ Peter Langfelder} \seealso{ \code{\link{simulateEigengeneNetwork}} for a simulation of eigengenes with a given causal structure; \code{\link{simulateDatExpr}} for simulation of individual data sets; \code{\link{simulateDatExpr5Modules}} for a simple simulation of a data set consisting of 5 modules; \code{\link{simulateModule}} for simulations of individual modules; } \keyword{ misc }% __ONLY ONE__ keyword per line WGCNA/man/corPvalueFisher.Rd0000644000176200001440000000137614012015545015260 0ustar liggesusers\name{corPvalueFisher} \alias{corPvalueFisher} \title{ Fisher's asymptotic p-value for correlation} \description{ Calculates Fisher's asymptotic p-value for given correlations. } \usage{ corPvalueFisher(cor, nSamples, twoSided = TRUE) } \arguments{ \item{cor}{ A vector of correlation values whose corresponding p-values are to be calculated } \item{nSamples}{ Number of samples from which the correlations were calculated } \item{twoSided}{ logical: should the calculated p-values be two sided? } } \value{ A vector of p-values of the same length as the input correlations. } \author{ Steve Horvath and Peter Langfelder } % Add one or more standard keywords, see file 'KEYWORDS' in the % R documentation directory. 
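For reference, the returned p-values can be checked against a direct base-R computation via the Fisher z-transform. This is a hedged sketch that assumes the standard asymptotic formula z = atanh(r) * sqrt(n - 3); the vector of correlations is hypothetical:

```r
# Direct base-R computation of Fisher's asymptotic two-sided p-value.
r <- c(0.3, 0.6, -0.8)
nSamples <- 50
z <- atanh(r) * sqrt(nSamples - 3)
p.direct <- 2 * pnorm(-abs(z))
p.direct
# corPvalueFisher(r, nSamples) is expected to closely match p.direct.
```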
\keyword{ misc } WGCNA/man/lowerTri2matrix.Rd0000644000176200001440000000304014012015545015263 0ustar liggesusers\name{lowerTri2matrix} \alias{lowerTri2matrix} \title{ Reconstruct a symmetric matrix from a distance (lower-triangular) representation } \description{ Assuming the input vector contains a vectorized form of the distance representation of a symmetric matrix, this function creates the corresponding matrix. This is useful when re-forming symmetric matrices that have been vectorized to save storage space. } \usage{ lowerTri2matrix(x, diag = 1) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x}{ a numeric vector } \item{diag}{ value to be put on the diagonal. Recycled if necessary. } } \details{ The function assumes that \code{x} contains the vectorized form of the distance representation of a symmetric matrix. In particular, \code{x} must have a length that can be expressed as n*(n-1)/2, with n an integer. The result of the function is then an n times n matrix. } \value{ A symmetric matrix whose lower triangle is given by \code{x}. } \author{ Peter Langfelder } \examples{ # Create a symmetric matrix m = matrix(c(1:16), 4,4) mat = (m + t(m)); diag(mat) = 0; # Print the matrix mat # Take the lower triangle and vectorize it (in two ways) x1 = mat[lower.tri(mat)] x2 = as.vector(as.dist(mat)) all.equal(x1, x2) # The vectors are equal # Turn the vectors back into matrices new.mat = lowerTri2matrix(x1, diag = 0); # Did we get back the same matrix? all.equal(mat, new.mat) } % Add one or more standard keywords, see file 'KEYWORDS' in the % R documentation directory. \keyword{misc} WGCNA/man/chooseTopHubInEachModule.Rd0000644000176200001440000000400714672545314017007 0ustar liggesusers\name{chooseTopHubInEachModule} \alias{chooseTopHubInEachModule} %- Also NEED an '\alias' for EACH other topic documented here. 
\title{ Chooses the top hub gene in each module } \description{ chooseTopHubInEachModule returns the gene in each module with the highest connectivity, looking at all genes in the expression file. } \usage{ chooseTopHubInEachModule( datExpr, colorh, omitColors = "grey", power = 2, type = "signed", ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{datExpr}{ Gene expression data with rows as samples and columns as genes. } \item{colorh}{ The module assignments (color vectors) corresponding to the columns in datExpr. } \item{omitColors}{ All colors in this character vector (default is "grey") are ignored by this function. } \item{power}{ Power to use for the adjacency network (default = 2). } \item{type}{ What type of network is being entered. Common choices are "signed" (default) and "unsigned". With "signed", negative correlations count against a gene's connectivity, whereas with "unsigned" negative correlations are treated identically to positive correlations. } \item{\dots}{ Any other parameters accepted by the *adjacency* function } } \value{ The function outputs a character vector of genes, where the genes are the hub gene picked for each module, and the names correspond to the module in which each gene is a hub. } \author{ Jeremy Miller } \examples{ ## Example: first simulate some data.
MEturquoise = sample(1:100,50) MEblue = sample(1:100,50) MEbrown = sample(1:100,50) MEyellow = sample(1:100,50) MEgreen = c(MEyellow[1:30], sample(1:100,20)) MEred = c(MEbrown [1:20], sample(1:100,30)) MEblack = c(MEblue [1:25], sample(1:100,25)) ME = data.frame(MEturquoise, MEblue, MEbrown, MEyellow, MEgreen, MEred, MEblack) dat1 = simulateDatExpr(ME,300,c(0.2,0.1,0.08,0.051,0.05,0.042,0.041,0.3), signed=TRUE) colorh = labels2colors(dat1$allLabels) hubs = chooseTopHubInEachModule(dat1$datExpr, colorh) hubs } \keyword{misc} WGCNA/man/addGrid.Rd0000644000176200001440000000261714230552654013525 0ustar liggesusers\name{addGrid} \alias{addGrid} \title{ Add grid lines to an existing plot. } \description{ This function adds horizontal and/or vertical grid lines to an existing plot. The grid lines are aligned with tick marks. } \usage{ addGrid( linesPerTick = NULL, linesPerTick.horiz = linesPerTick, linesPerTick.vert = linesPerTick, horiz = TRUE, vert = FALSE, col = "grey30", lty = 3) } \arguments{ \item{linesPerTick}{ Number of lines between successive tick marks (including the line on the tickmarks themselves). } \item{linesPerTick.horiz}{ Number of horizontal lines between successive tick marks (including the line on the tickmarks themselves). } \item{linesPerTick.vert}{ Number of vertical lines between successive tick marks (including the line on the tickmarks themselves). } \item{horiz}{ Draw horizontal grid lines? } \item{vert}{ Draw vertical grid lines? } \item{col}{ Specifies color of the grid lines } \item{lty}{ Specifies line type of grid lines. See \code{\link{par}}. } } \details{ If \code{linesPerTick} is not specified, it is set to 5 if the number of ticks is 5 or fewer, and it is set to 2 if the number of ticks is greater than 5. } \author{ Peter Langfelder } \note{ The function does not work whenever logarithmic scales are in use.
} \examples{ plot(c(1:10), c(1:10)) addGrid(); } \keyword{hplot}% __ONLY ONE__ keyword per line WGCNA/man/verboseBarplot.Rd0000644000176200001440000001255314012015545015147 0ustar liggesusers\name{verboseBarplot} \alias{verboseBarplot} \title{ Barplot with error bars, annotated by Kruskal-Wallis or ANOVA p-value} \description{ Produce a barplot with error bars, annotated by Kruskal-Wallis or ANOVA p-value. } \usage{ verboseBarplot(x, g, main = "", xlab = NA, ylab = NA, cex = 1, cex.axis = 1.5, cex.lab = 1.5, cex.main = 1.5, color = "grey", numberStandardErrors = 1, KruskalTest = TRUE, AnovaTest = FALSE, two.sided = TRUE, addCellCounts=FALSE, horiz = FALSE, ylim = NULL, ..., addScatterplot = FALSE, pt.cex = 0.8, pch = 21, pt.col = "blue", pt.bg = "skyblue", randomSeed = 31425, jitter = 0.6, pointLabels = NULL, label.cex = 0.8, label.offs = 0.06, adjustYLim = TRUE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x}{ numerical or binary vector of data whose group means are to be plotted } \item{g}{ a factor or an object coercible to a factor giving the groups whose means are to be calculated. } \item{main}{ main title for the plot.} \item{xlab}{ label for the x-axis. } \item{ylab}{ label for the y-axis. } \item{cex}{ character expansion factor for plot annotations. } \item{cex.axis}{ character expansion factor for axis annotations. } \item{cex.lab}{ character expansion factor for axis labels. } \item{cex.main}{ character expansion factor for the main title. } \item{color}{ a vector giving the colors of the bars in the barplot. } \item{numberStandardErrors}{ size of the error bars in terms of standard errors. See details. } \item{KruskalTest}{logical: should Kruskal-Wallis test be performed? See details. } \item{AnovaTest}{ logical: should ANOVA be performed? See details. } \item{two.sided}{ logical: should the printed p-value be two-sided? See details. } \item{addCellCounts}{ logical: should counts be printed above each bar?
} \item{horiz}{ logical: should the bars be drawn horizontally? } \item{ylim}{optional specification of the limits for the y axis. If not given, they will be determined automatically.} \item{\dots}{ other parameters to function \code{\link{barplot}}. } \item{addScatterplot}{logical: should a scatterplot of the data be overlaid? } \item{pt.cex}{character expansion factor for the points.} \item{pch}{shape code for the points.} \item{pt.col}{color for the points.} \item{pt.bg}{background color for the points.} \item{randomSeed}{integer random seed to make plots reproducible.} \item{jitter}{amount of random jitter to add to the position of the points along the x axis.} \item{pointLabels}{Optional text labels for the points displayed using the scatterplot. If given, should be a character vector of the same length as x. See \code{\link{labelPoints}}.} \item{label.cex}{Character expansion (size) factor for \code{pointLabels}.} \item{label.offs}{Offset for \code{pointLabels}, as a fraction of the plot width.} \item{adjustYLim}{logical: should the limits of the y axis be set so as to accommodate the individual points? The adjustment is only carried out if input \code{ylim} is \code{NULL} and \code{addScatterplot} is \code{TRUE}. In particular, if the user supplies \code{ylim}, it is not touched.} } \details{ This function creates a barplot of a numeric variable (input \code{x}) across the levels of a grouping variable (input \code{g}). The height of the bars equals the mean value of \code{x} across the observations with a given level of \code{g}. By default, the barplot also shows plus/minus one standard error. If you want only plus one standard error (not minus) choose \code{two.sided=FALSE}. The number of standard errors can be set with the input \code{numberStandardErrors}. For example, if you want a 95\% confidence interval around the mean, choose \code{numberStandardErrors=2}. If you don't want any standard errors set \code{numberStandardErrors=-1}.
The function also outputs the p-value of a Kruskal-Wallis test (Fisher test for binary input data), which is a non-parametric multi-group comparison test. Alternatively, one can use analysis of variance (ANOVA) to compute a p-value by setting \code{AnovaTest=TRUE}. ANOVA is a generalization of the Student t-test to multiple groups. In case of two groups, the ANOVA p-value equals the Student t-test p-value. ANOVA should only be used if \code{x} follows a normal distribution. ANOVA also assumes homoscedasticity (equal variances). The Kruskal-Wallis test is often advantageous since it makes no distributional assumptions. Since the Kruskal-Wallis test is based on the ranks of \code{x}, it is also more robust to outliers. All p-values are two-sided. } \value{ None. } \author{ Steve Horvath, with contributions from Zhijin (Jean) Wu and Peter Langfelder} \seealso{ \code{\link{barplot}} } \examples{ group=sample(c(1,2),100,replace=TRUE) height=rnorm(100,mean=group) par(mfrow=c(2,2)) verboseBarplot(height,group, main="1 SE, Kruskal Test") verboseBarplot(height,group,numberStandardErrors=2, main="2 SE, Kruskal Test") verboseBarplot(height,group,numberStandardErrors=2,AnovaTest=TRUE, main="2 SE, Anova") verboseBarplot(height,group,numberStandardErrors=2,AnovaTest=TRUE, main="2 SE, Anova, only plus SE", two.sided=FALSE) } \keyword{ misc } WGCNA/man/list2multiData.Rd0000644000176200001440000000164114012015545015054 0ustar liggesusers\name{list2multiData} \alias{list2multiData} \alias{multiData2list} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Convert a list to a multiData structure and vice-versa. } \description{ \code{list2multiData} converts a list to a multiData structure; \code{multiData2list} does the inverse. } \usage{ list2multiData(data) multiData2list(multiData) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{data}{ A list to be converted to a multiData structure. 
} \item{multiData}{ A multiData structure to be converted to a list. } } \details{ A multiData structure is a vector of lists (one list for each set) where each list has a component \code{data} containing some useful information. } \value{ For \code{list2multiData}, a multiData structure; for \code{multiData2list}, the corresponding list. } \author{ Peter Langfelder } \keyword{misc}% __ONLY ONE__ keyword per line WGCNA/man/recutBlockwiseTrees.Rd0000644000176200001440000002314614012015545016146 0ustar liggesusers\name{recutBlockwiseTrees} \alias{recutBlockwiseTrees} \title{ Repeat blockwise module detection from pre-calculated data } \description{ Given networks constructed for example using \code{\link{blockwiseModules}}, this function (re-)detects modules in them by branch cutting of the corresponding dendrograms. If repeated branch cuts of the same gene network dendrograms are desired, this function can save substantial time by re-using already calculated networks and dendrograms. } \usage{ recutBlockwiseTrees( datExpr, goodSamples, goodGenes, blocks, TOMFiles, dendrograms, corType = "pearson", networkType = "unsigned", deepSplit = 2, detectCutHeight = 0.995, minModuleSize = min(20, ncol(datExpr)/2 ), maxCoreScatter = NULL, minGap = NULL, maxAbsCoreScatter = NULL, minAbsGap = NULL, minSplitHeight = NULL, minAbsSplitHeight = NULL, useBranchEigennodeDissim = FALSE, minBranchEigennodeDissim = mergeCutHeight, pamStage = TRUE, pamRespectsDendro = TRUE, minCoreKME = 0.5, minCoreKMESize = minModuleSize/3, minKMEtoStay = 0.3, reassignThreshold = 1e-6, mergeCutHeight = 0.15, impute = TRUE, trapErrors = FALSE, numericLabels = FALSE, verbose = 0, indent = 0, ...) } \arguments{ \item{datExpr}{ expression data. A data frame in which columns are genes and rows are samples. NAs are allowed, but not too many. } \item{goodSamples}{ a logical vector specifying which samples are considered "good" for the analysis. See \code{\link{goodSamplesGenes}}. 
} \item{goodGenes}{ a logical vector with length equal to the number of genes in \code{datExpr} that specifies which genes are considered "good" for the analysis. See \code{\link{goodSamplesGenes}}. } \item{blocks}{ specification of blocks in which hierarchical clustering and module detection should be performed. A numeric vector with one entry per gene of \code{datExpr} giving the number of the block to which the corresponding gene belongs. } \item{TOMFiles}{ a vector of character strings specifying file names in which the block-wise topological overlaps are saved. } \item{dendrograms}{ a list of length equal to the number of blocks, in which each component is the hierarchical clustering dendrogram of the genes that belong to the block. } \item{corType}{ character string specifying the correlation to be used. Allowed values are (unique abbreviations of) \code{"pearson"} and \code{"bicor"}, corresponding to Pearson and biweight midcorrelation, respectively. Missing values are handled using the \code{pairwise.complete.obs} option. } \item{networkType}{ network type. Allowed values are (unique abbreviations of) \code{"unsigned"}, \code{"signed"}, \code{"signed hybrid"}. See \code{\link{adjacency}}. } \item{deepSplit}{ integer value between 0 and 4. Provides a simplified control over how sensitive module detection should be to module splitting, with 0 least and 4 most sensitive. See \code{\link[dynamicTreeCut]{cutreeDynamic}} for more details. } \item{detectCutHeight}{ dendrogram cut height for module detection. See \code{\link[dynamicTreeCut]{cutreeDynamic}} for more details. } \item{minModuleSize}{ minimum module size for module detection. See \code{\link[dynamicTreeCut]{cutreeDynamic}} for more details. } \item{maxCoreScatter}{ maximum scatter of the core for a branch to be a cluster, given as the fraction of \code{cutHeight} relative to the 5th percentile of joining heights. See \code{\link[dynamicTreeCut]{cutreeDynamic}} for more details. 
} \item{minGap}{ minimum cluster gap given as the fraction of the difference between \code{cutHeight} and the 5th percentile of joining heights. See \code{\link[dynamicTreeCut]{cutreeDynamic}} for more details. } \item{maxAbsCoreScatter}{ maximum scatter of the core for a branch to be a cluster given as absolute heights. If given, overrides \code{maxCoreScatter}. See \code{\link[dynamicTreeCut]{cutreeDynamic}} for more details. } \item{minAbsGap}{ minimum cluster gap given as absolute height difference. If given, overrides \code{minGap}. See \code{\link[dynamicTreeCut]{cutreeDynamic}} for more details. } \item{minSplitHeight}{Minimum split height given as the fraction of the difference between \code{cutHeight} and the 5th percentile of joining heights. Branches merging below this height will automatically be merged. Defaults to zero but is used only if \code{minAbsSplitHeight} below is \code{NULL}.} \item{minAbsSplitHeight}{Minimum split height given as an absolute height. Branches merging below this height will automatically be merged. If not given (default), will be determined from \code{minSplitHeight} above.} \item{useBranchEigennodeDissim}{Logical: should branch eigennode (eigengene) dissimilarity be considered when merging branches in Dynamic Tree Cut?} \item{minBranchEigennodeDissim}{Minimum consensus branch eigennode (eigengene) dissimilarity for branches to be considered separate. The branch eigennode dissimilarity in individual sets is simply 1 - correlation of the eigennodes; the consensus is defined as the quantile with probability \code{consensusQuantile}.} \item{pamStage}{ logical. If TRUE, the second (PAM-like) stage of module detection will be performed. See \code{\link[dynamicTreeCut]{cutreeDynamic}} for more details. } \item{pamRespectsDendro}{Logical, only used when \code{pamStage} is \code{TRUE}. 
If \code{TRUE}, the PAM stage will respect the dendrogram in the sense that an object can be PAM-assigned only to clusters that lie below it on the branch that the object is merged into. See \code{\link[dynamicTreeCut]{cutreeDynamic}} for more details. } \item{minCoreKME}{ a number between 0 and 1. If a detected module does not have at least \code{minCoreKMESize} genes with eigengene connectivity at least \code{minCoreKME}, the module is disbanded (its genes are unlabeled and returned to the pool of genes waiting for module detection). } \item{minCoreKMESize}{ see \code{minCoreKME} above. } \item{minKMEtoStay}{ genes whose eigengene connectivity to their module eigengene is lower than \code{minKMEtoStay} are removed from the module.} \item{reassignThreshold}{ p-value ratio threshold for reassigning genes between modules. See Details. } \item{mergeCutHeight}{ dendrogram cut height for module merging. } \item{impute}{ logical: should imputation be used for module eigengene calculation? See \code{\link{moduleEigengenes}} for more details. } \item{trapErrors}{ logical: should errors in calculations be trapped? } \item{numericLabels}{ logical: should the returned modules be labeled by colors (\code{FALSE}), or by numbers (\code{TRUE})? } \item{verbose}{ integer level of verbosity. Zero means silent, higher values make the output progressively more and more verbose. } \item{indent}{ indentation for diagnostic messages. Zero means no indentation, each unit adds two spaces. } \item{...}{Other arguments.} } \details{ For details on blockwise module detection, see \code{\link{blockwiseModules}}. This function implements the module detection subset of the functionality of \code{\link{blockwiseModules}}; network construction and clustering must be performed in advance. The primary use of this function is to experiment with module detection settings without having to re-execute long network and clustering calculations whose results are not affected by the cutting parameters. 
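The intended workflow can be sketched as follows; \code{datExpr} and \code{net} are hypothetical objects, with \code{net} assumed to be the result of an earlier \code{blockwiseModules} call made with \code{saveTOMs = TRUE}:

```r
# Hypothetical sketch (not a package example): 'net' is assumed to come from
#   net = blockwiseModules(datExpr, power = 6, saveTOMs = TRUE, ...)
library(WGCNA)
recut = recutBlockwiseTrees(
  datExpr,
  goodSamples = net$goodSamples,
  goodGenes   = net$goodGenes,
  blocks      = net$blocks,
  TOMFiles    = net$TOMFiles,
  dendrograms = net$dendrograms,
  deepSplit      = 4,      # more sensitive splitting than the default 2
  minModuleSize  = 30,
  mergeCutHeight = 0.2,
  numericLabels  = TRUE)
table(recut$colors)        # compare module sizes with table(net$colors)
```

Because the saved TOMs and dendrograms are re-used, only the (fast) cutting and merging steps are re-executed.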
This function takes as input the networks and dendrograms that are produced by \code{\link{blockwiseModules}}. Working block by block, modules are identified in the dendrogram by the Dynamic Hybrid Tree Cut algorithm. Found modules are trimmed of genes whose correlation with module eigengene (KME) is less than \code{minKMEtoStay}. Modules in which fewer than \code{minCoreKMESize} genes have KME higher than \code{minCoreKME} are disbanded, i.e., their constituent genes are pronounced unassigned. After all blocks have been processed, the function checks whether there are genes whose KME in their assigned module is lower than their KME in another module. If p-values of the higher correlations are smaller than those of the native module by the factor \code{reassignThreshold}, the gene is re-assigned to the closer module. In the last step, modules whose eigengenes are highly correlated are merged. This is achieved by clustering module eigengenes using the dissimilarity given by one minus their correlation, cutting the dendrogram at the height \code{mergeCutHeight} and merging all modules on each branch. The process is iterated until no modules are merged. See \code{\link{mergeCloseModules}} for more details on module merging. } \value{ A list with the following components: \item{colors }{ a vector of color or numeric module labels for all genes.} \item{unmergedColors }{ a vector of color or numeric module labels for all genes before module merging.} \item{MEs }{ a data frame containing module eigengenes of the found modules (given by \code{colors}).} \item{MEsOK}{logical indicating whether the module eigengenes were calculated without errors. } } \references{Bin Zhang and Steve Horvath (2005) "A General Framework for Weighted Gene Co-Expression Network Analysis", Statistical Applications in Genetics and Molecular Biology: Vol. 4: No. 
1, Article 17 } \author{ Peter Langfelder} \seealso{ \code{\link{blockwiseModules}} for full module calculation; \code{\link[dynamicTreeCut]{cutreeDynamic}} for adaptive branch cutting in hierarchical clustering dendrograms; \code{\link{mergeCloseModules}} for merging of close modules. } \keyword{ misc } WGCNA/man/overlapTable.Rd0000644000176200001440000000351514356162617014612 0ustar liggesusers\name{overlapTable} \alias{overlapTable} \title{ Calculate overlap of modules } \description{ The function calculates overlap counts and Fisher exact test p-values for the given two sets of module assignments. } \usage{ overlapTable( labels1, labels2, na.rm = TRUE, ignore = NULL, levels1 = NULL, levels2 = NULL, log.p = FALSE) } \arguments{ \item{labels1}{ a vector containing module labels. } \item{labels2}{ a vector containing module labels to be compared to \code{labels1}. } \item{na.rm}{logical: should entries missing in either \code{labels1} or \code{labels2} be removed?} \item{ignore}{an optional vector giving label levels that are to be ignored.} \item{levels1}{optional vector giving levels for \code{labels1}. Defaults to sorted unique non-missing values in \code{labels1} that are not present in \code{ignore}.} \item{levels2}{optional vector giving levels for \code{labels2}. Defaults to sorted unique non-missing values in \code{labels2} that are not present in \code{ignore}.} \item{log.p}{logical: should (natural) logarithms of the p-values be returned instead of the p-values?} } \value{ A list with the following components: \item{countTable}{a matrix whose rows correspond to modules (unique labels) in \code{labels1} and whose columns correspond to modules (unique labels) in \code{labels2}, giving the number of objects in the intersection of the two respective modules. 
} \item{pTable}{a matrix whose rows correspond to modules (unique labels) in \code{labels1} and whose columns correspond to modules (unique labels) in \code{labels2}, giving Fisher's exact test significance p-values (or their logarithms) for the overlap of the two respective modules. } } \author{ Peter Langfelder } \seealso{ \code{\link{fisher.test}}, \code{\link{matchLabels}} } \keyword{misc}% __ONLY ONE__ keyword per line WGCNA/man/blockwiseConsensusModules.Rd0000644000176200001440000007261414022073754017405 0ustar liggesusers\name{blockwiseConsensusModules} \alias{blockwiseConsensusModules} \title{Find consensus modules across several datasets.} \description{ Perform network construction and consensus module detection across several datasets. } \usage{ blockwiseConsensusModules( multiExpr, # Data checking options checkMissingData = TRUE, # Blocking options blocks = NULL, maxBlockSize = 5000, blockSizePenaltyPower = 5, nPreclusteringCenters = NULL, randomSeed = 54321, # TOM precalculation arguments, if available individualTOMInfo = NULL, useIndivTOMSubset = NULL, # Network construction arguments: correlation options corType = "pearson", maxPOutliers = 1, quickCor = 0, pearsonFallback = "individual", cosineCorrelation = FALSE, # Adjacency function options power = 6, networkType = "unsigned", checkPower = TRUE, replaceMissingAdjacencies = FALSE, # Topological overlap options TOMType = "unsigned", TOMDenom = "min", suppressNegativeTOM = FALSE, # Save individual TOMs? 
saveIndividualTOMs = TRUE, individualTOMFileNames = "individualTOM-Set\%s-Block\%b.RData", # Consensus calculation options: network calibration networkCalibration = c("single quantile", "full quantile", "none"), # Simple quantile calibration options calibrationQuantile = 0.95, sampleForCalibration = TRUE, sampleForCalibrationFactor = 1000, getNetworkCalibrationSamples = FALSE, # Consensus definition consensusQuantile = 0, useMean = FALSE, setWeights = NULL, # Saving the consensus TOM saveConsensusTOMs = FALSE, consensusTOMFilePattern = "consensusTOM-block.\%b.RData", # Internal handling of TOMs useDiskCache = TRUE, chunkSize = NULL, cacheBase = ".blockConsModsCache", cacheDir = ".", # Alternative consensus TOM input from a previous calculation consensusTOMInfo = NULL, # Basic tree cut options deepSplit = 2, detectCutHeight = 0.995, minModuleSize = 20, checkMinModuleSize = TRUE, # Advanced tree cut options maxCoreScatter = NULL, minGap = NULL, maxAbsCoreScatter = NULL, minAbsGap = NULL, minSplitHeight = NULL, minAbsSplitHeight = NULL, useBranchEigennodeDissim = FALSE, minBranchEigennodeDissim = mergeCutHeight, stabilityLabels = NULL, minStabilityDissim = NULL, pamStage = TRUE, pamRespectsDendro = TRUE, # Gene reassignment and trimming from a module, and module "significance" criteria reassignThresholdPS = 1e-4, trimmingConsensusQuantile = consensusQuantile, minCoreKME = 0.5, minCoreKMESize = minModuleSize/3, minKMEtoStay = 0.2, # Module eigengene calculation options impute = TRUE, trapErrors = FALSE, #Module merging options equalizeQuantilesForModuleMerging = FALSE, quantileSummaryForModuleMerging = "mean", mergeCutHeight = 0.15, mergeConsensusQuantile = consensusQuantile, # Output options numericLabels = FALSE, # General options nThreads = 0, verbose = 2, indent = 0, ...) } \arguments{ \item{multiExpr}{ expression data in the multi-set format (see \code{\link{checkSets}}). A vector of lists, one per set. 
Each set must contain a component \code{data} that contains the expression data, with rows corresponding to samples and columns to genes or probes. } \item{checkMissingData}{logical: should data be checked for excessive numbers of missing entries in genes and samples, and for genes with zero variance? See details. } \item{blocks}{ optional specification of blocks in which hierarchical clustering and module detection should be performed. If given, must be a numeric vector with one entry per gene of \code{multiExpr} giving the number of the block to which the corresponding gene belongs. } \item{maxBlockSize}{ integer giving maximum block size for module detection. Ignored if \code{blocks} above is non-NULL. Otherwise, if the number of genes in \code{multiExpr} exceeds \code{maxBlockSize}, genes will be pre-clustered into blocks whose size should not exceed \code{maxBlockSize}. } \item{blockSizePenaltyPower}{number specifying how strongly blocks should be penalized for exceeding the maximum size. Set to a large number or \code{Inf} if not exceeding maximum block size is very important.} \item{nPreclusteringCenters}{number of centers to be used in the preclustering. Defaults to the smaller of \code{nGenes/20} and \code{100*nGenes/maxBlockSize}, where \code{nGenes} is the number of genes (variables) in \code{multiExpr}.} \item{randomSeed}{ integer to be used as seed for the random number generator before the function starts. If a current seed exists, it is saved and restored upon exit. If \code{NULL} is given, the function will not save and restore the seed. } %%%%%%%%%%%%%% \item{individualTOMInfo}{ Optional data for TOM matrices in individual data sets. This object is returned by the function \code{\link{blockwiseIndividualTOMs}}. If not given, appropriate topological overlaps will be calculated using the network construction options below. 
} \item{useIndivTOMSubset}{ If \code{individualTOMInfo} is given, this argument allows one to select only a subset of the individual set networks contained in \code{individualTOMInfo}. It should be a numeric vector giving the indices of the individual sets to be used. Note that this argument is NOT applied to \code{multiExpr}. } %%%%%%%%%%%%%% \item{corType}{ character string specifying the correlation to be used. Allowed values are (unique abbreviations of) \code{"pearson"} and \code{"bicor"}, corresponding to Pearson and biweight midcorrelation, respectively. Missing values are handled using the \code{pairwise.complete.obs} option. } \item{maxPOutliers}{ only used for \code{corType=="bicor"}. Specifies the maximum percentile of data that can be considered outliers on either side of the median separately. For each side of the median, if a higher percentile of the data than \code{maxPOutliers} would be considered outlying by the weight function based on \code{9*mad(x)}, the width of the weight function is increased such that the percentile of outliers on that side of the median equals \code{maxPOutliers}. Using \code{maxPOutliers=1} will effectively disable all weight function broadening; using \code{maxPOutliers=0} will give results that are quite similar (but not equal to) Pearson correlation. } \item{quickCor}{ real number between 0 and 1 that controls the handling of missing data in the calculation of correlations. See details. } \item{pearsonFallback}{Specifies whether the bicor calculation, if used, should revert to Pearson when median absolute deviation (mad) is zero. Recognized values are (abbreviations of) \code{"none", "individual", "all"}. If set to \code{"none"}, zero mad will result in \code{NA} for the corresponding correlation. If set to \code{"individual"}, Pearson calculation will be used only for columns that have zero mad. 
If set to \code{"all"}, the presence of a single zero mad will cause the whole variable to be treated in Pearson correlation manner (as if the corresponding \code{robust} option was set to \code{FALSE}). Has no effect for Pearson correlation. See \code{\link{bicor}}.} \item{cosineCorrelation}{logical: should the cosine version of the correlation calculation be used? The cosine calculation differs from the standard one in that it does not subtract the mean. } %%%%%%%%%%%%%% \item{power}{ soft-thresholding power for network construction. Either a single number or a vector of the same length as the number of sets, with one power for each set. } \item{networkType}{ network type. Allowed values are (unique abbreviations of) \code{"unsigned"}, \code{"signed"}, \code{"signed hybrid"}. See \code{\link{adjacency}}. } \item{checkPower}{ logical: should basic sanity check be performed on the supplied \code{power}? If you would like to experiment with unusual powers, set the argument to \code{FALSE} and proceed with caution. } \item{replaceMissingAdjacencies}{logical: should missing values in the calculation of adjacency be replaced by 0?} \item{TOMType}{ one of \code{"none"}, \code{"unsigned"}, \code{"signed"}, \code{"signed Nowick"}, \code{"unsigned 2"}, \code{"signed 2"} and \code{"signed Nowick 2"}. If \code{"none"}, adjacency will be used for clustering. See \code{\link{TOMsimilarityFromExpr}} for details.} \item{TOMDenom}{ a character string specifying the TOM variant to be used. Recognized values are \code{"min"} giving the standard TOM described in Zhang and Horvath (2005), and \code{"mean"} in which the \code{min} function in the denominator is replaced by \code{mean}. The \code{"mean"} may produce better results but at this time should be considered experimental.} %The default mean denominator %variant %is preferrable and we recommend using it unless the user needs to reproduce older results obtained using %the standard, minimum denominator TOM. 
} \item{suppressNegativeTOM}{Logical: should the result be set to zero when negative? Negative TOM values can occur when \code{TOMType} is \code{"signed Nowick"}.} %%%%%%%%%%%%%%% \item{saveIndividualTOMs}{logical: should individual TOMs be saved to disk for later use? } \item{individualTOMFileNames}{character string giving the file names to save individual TOMs into. The following tags should be used to make the file names unique for each set and block: \code{\%s} will be replaced by the set number; \code{\%N} will be replaced by the set name (taken from \code{names(multiExpr)}) if it exists, otherwise by set number; \code{\%b} will be replaced by the block number. If the file names turn out to be non-unique, an error will be generated.} %%%%%%%%%%%%%% \item{networkCalibration}{network calibration method. One of "single quantile", "full quantile", "none" (or a unique abbreviation of one of them).} %%%%%%%%%%%%%% \item{calibrationQuantile}{ if \code{networkCalibration} is \code{"single quantile"}, topological overlaps (or adjacencies if TOMs are not computed) will be scaled such that their \code{calibrationQuantile} quantiles will agree. } \item{sampleForCalibration}{ if \code{TRUE}, calibration quantiles will be determined from a sample of network similarities. Note that using all data can double the memory footprint of the function and the function may fail. } \item{sampleForCalibrationFactor}{ determines the number of samples for calibration: the number is \code{1/calibrationQuantile * sampleForCalibrationFactor}. Should be set well above 1 to ensure accuracy of the sampled quantile. } \item{getNetworkCalibrationSamples}{ logical: should samples used for TOM calibration be saved for future analysis? This option is only available when \code{sampleForCalibration} is \code{TRUE}. } %%%%%%%%%%%%%% \item{consensusQuantile}{ quantile at which consensus is to be defined. See details. 
} \item{useMean}{logical: should the consensus be determined from a (possibly weighted) mean across the data sets rather than a quantile?} \item{setWeights}{Optional vector (one component per input set) of weights to be used for weighted mean consensus. Only used when \code{useMean} above is \code{TRUE}.} %%%%%%%%%%%%%% \item{saveConsensusTOMs}{ logical: should the consensus topological overlap matrices for each block be saved and returned? } \item{consensusTOMFilePattern}{ character string giving the names of the files in which to save the consensus topological overlaps. The tag \code{\%b} will be replaced by the block number. If the resulting file names are non-unique (for example, because the user gives a file name without a \code{\%b} tag), an error will be generated. These files are standard R data files and can be loaded using the \code{\link{load}} function. } %%%%%%%%%%%%%% \item{useDiskCache}{ should calculated network similarities in individual sets be temporarily saved to disk? Saving to disk is somewhat slower than keeping all data in memory, but for large blocks and/or many sets the memory footprint may be too big. } \item{chunkSize}{ network similarities are saved in smaller chunks of size \code{chunkSize}. } \item{cacheBase}{ character string containing the desired name for the cache files. The actual file names will consist of \code{cacheBase} and a suffix to make the file names unique. } \item{cacheDir}{ character string containing the desired path for the cache files.} %%%%%%%%%%%%%% \item{consensusTOMInfo}{optional list summarizing consensus TOM, output of \code{\link{consensusTOM}}. It contains information about pre-calculated consensus TOM. Supplying this argument replaces TOM calculation, so none of the individual or consensus TOM calculation arguments are taken into account.} %%%%%%%%%%%%%% \item{deepSplit}{ integer value between 0 and 4. 
Provides a simplified control over how sensitive module detection should be to module splitting, with 0 least and 4 most sensitive. See \code{\link[dynamicTreeCut]{cutreeDynamic}} for more details. } \item{detectCutHeight}{ dendrogram cut height for module detection. See \code{\link[dynamicTreeCut]{cutreeDynamic}} for more details. } \item{minModuleSize}{ minimum module size for module detection. See \code{\link[dynamicTreeCut]{cutreeDynamic}} for more details. } \item{checkMinModuleSize}{ logical: should sanity checks be performed on \code{minModuleSize}?} %%%%%%%%%%%%%%%% \item{maxCoreScatter}{ maximum scatter of the core for a branch to be a cluster, given as the fraction of \code{cutHeight} relative to the 5th percentile of joining heights. See \code{\link[dynamicTreeCut]{cutreeDynamic}} for more details. } \item{minGap}{ minimum cluster gap given as the fraction of the difference between \code{cutHeight} and the 5th percentile of joining heights. See \code{\link[dynamicTreeCut]{cutreeDynamic}} for more details. } \item{maxAbsCoreScatter}{ maximum scatter of the core for a branch to be a cluster given as absolute heights. If given, overrides \code{maxCoreScatter}. See \code{\link[dynamicTreeCut]{cutreeDynamic}} for more details. } \item{minAbsGap}{ minimum cluster gap given as absolute height difference. If given, overrides \code{minGap}. See \code{\link[dynamicTreeCut]{cutreeDynamic}} for more details. } \item{minSplitHeight}{Minimum split height given as the fraction of the difference between \code{cutHeight} and the 5th percentile of joining heights. Branches merging below this height will automatically be merged. Defaults to zero but is used only if \code{minAbsSplitHeight} below is \code{NULL}.} \item{minAbsSplitHeight}{Minimum split height given as an absolute height. Branches merging below this height will automatically be merged. 
If not given (default), will be determined from \code{minSplitHeight} above.} \item{useBranchEigennodeDissim}{Logical: should branch eigennode (eigengene) dissimilarity be considered when merging branches in Dynamic Tree Cut?} \item{minBranchEigennodeDissim}{Minimum consensus branch eigennode (eigengene) dissimilarity for branches to be considered separate. The branch eigennode dissimilarity in individual sets is simply 1 - correlation of the eigennodes; the consensus is defined as the quantile with probability \code{consensusQuantile}.} \item{stabilityLabels}{Optional matrix of cluster labels that are to be used for calculating branch dissimilarity based on split stability. The number of rows must equal the number of genes in \code{multiExpr}; the number of columns (clusterings) is arbitrary. See \code{\link{branchSplitFromStabilityLabels}} for details.} \item{minStabilityDissim}{Minimum stability dissimilarity criterion for two branches to be considered separate. Should be a number between 0 (essentially no dissimilarity required) and 1 (perfect dissimilarity or distinguishability based on \code{stabilityLabels}). See \code{\link{branchSplitFromStabilityLabels}} for details.} \item{pamStage}{ logical. If TRUE, the second (PAM-like) stage of module detection will be performed. See \code{\link[dynamicTreeCut]{cutreeDynamic}} for more details. } \item{pamRespectsDendro}{Logical, only used when \code{pamStage} is \code{TRUE}. If \code{TRUE}, the PAM stage will respect the dendrogram in the sense that an object can be PAM-assigned only to clusters that lie below it on the branch that the object is merged into. See \code{\link[dynamicTreeCut]{cutreeDynamic}} for more details. } %%%%%%%%%%%%% \item{reassignThresholdPS}{ per-set p-value ratio threshold for reassigning genes between modules. See Details. 
} \item{trimmingConsensusQuantile}{a number between 0 and 1 specifying the consensus quantile used for kME calculation that determines module trimming according to the arguments below.} \item{minCoreKME}{ a number between 0 and 1. If a detected module does not have at least \code{minCoreKMESize} genes with eigengene connectivity at least \code{minCoreKME}, the module is disbanded (its genes are unlabeled and returned to the pool of genes waiting for module detection). } \item{minCoreKMESize}{ see \code{minCoreKME} above. } \item{minKMEtoStay}{ genes whose eigengene connectivity to their module eigengene is lower than \code{minKMEtoStay} are removed from the module.} %%%%%%%%%%%%% \item{impute}{ logical: should imputation be used for module eigengene calculation? See \code{\link{moduleEigengenes}} for more details. } \item{trapErrors}{ logical: should errors in calculations be trapped? } %%%%%%%%%%%%% \item{equalizeQuantilesForModuleMerging}{Logical: equalize quantiles of the module eigengene networks before module merging? If \code{TRUE}, the quantiles of the eigengene correlation matrices (interpreted as single vectors of non-redundant components) will be equalized across the input data sets. Note that although this seems like a reasonable option, it should be considered experimental and not necessarily recommended.} \item{quantileSummaryForModuleMerging}{One of \code{"mean"} or \code{"median"}. If quantile equalization of the module eigengene networks is performed, the resulting "normal" quantiles will be given by this function of the corresponding quantiles across the input data sets.} \item{mergeCutHeight}{ dendrogram cut height for module merging. } \item{mergeConsensusQuantile}{consensus quantile for module merging. See \code{mergeCloseModules} for details. } \item{numericLabels}{ logical: should the returned modules be labeled by colors (\code{FALSE}), or by numbers (\code{TRUE})? 
} %%%%%%%%%%%%% \item{nThreads}{ non-negative integer specifying the number of parallel threads to be used by certain parts of correlation calculations. This option only has an effect on systems on which a POSIX thread library is available (which currently includes Linux and Mac OSX, but excludes Windows). If zero, the number of online processors will be used if it can be determined dynamically, otherwise correlation calculations will use 2 threads. } \item{verbose}{ integer level of verbosity. Zero means silent, higher values make the output progressively more and more verbose. } \item{indent}{ indentation for diagnostic messages. Zero means no indentation, each unit adds two spaces. } \item{...}{Other arguments. At present these can include \code{reproduceBranchEigennodeQuantileError} that instructs the function to reproduce a bug in branch eigennode dissimilarity calculations for purposes of reproducing old results. } } \details{ The function starts by optionally filtering out samples that have too many missing entries and genes that have either too many missing entries or zero variance in at least one set. Genes that are filtered out are left unassigned by the module detection. Returned eigengenes will contain \code{NA} in entries corresponding to filtered-out samples. If \code{blocks} is not given and the number of genes exceeds \code{maxBlockSize}, genes are pre-clustered into blocks using the function \code{\link{consensusProjectiveKMeans}}; otherwise all genes are treated in a single block. For each block of genes, the network is constructed and (if requested) topological overlap is calculated in each set. To minimize memory usage, calculated topological overlaps are optionally saved to disk in chunks until they are needed again for the calculation of the consensus network topological overlap. Before calculation of the consensus topological overlap, individual TOMs are optionally calibrated. 
Calibration methods include single quantile scaling and full quantile normalization. Single quantile scaling raises the individual TOMs in sets 2, 3, ... to a power such that the quantiles given by \code{calibrationQuantile} agree with the quantile in set 1. Since the high TOMs are usually the most important for module identification, the value of \code{calibrationQuantile} is close to (but not equal to) 1. To speed up quantile calculation, the quantiles can be determined on a randomly-chosen component subset of the TOM matrices. Full quantile normalization, implemented in \code{\link[preprocessCore]{normalize.quantiles}}, adjusts the TOM matrices such that all quantiles equal each other (and equal to the quantiles of the component-wise average of the individual TOM matrices). Note that network calibration is performed separately in each block, i.e., the normalizing transformation may differ between blocks. This is necessary to avoid manipulating a full TOM in memory. The consensus TOM is calculated as the component-wise \code{consensusQuantile} quantile of the individual (set) TOMs; that is, for each gene pair (TOM entry), the \code{consensusQuantile} quantile across all input sets. Alternatively, one can also use (weighted) component-wise mean across all input data sets. If requested, the consensus topological overlaps are saved to disk for later use. Genes are then clustered using average linkage hierarchical clustering and modules are identified in the resulting dendrogram by the Dynamic Hybrid tree cut. Found modules are trimmed of genes whose consensus module membership kME (that is, correlation with module eigengene) is less than \code{minKMEtoStay}. Modules in which fewer than \code{minCoreKMESize} genes have consensus KME higher than \code{minCoreKME} are disbanded, i.e., their constituent genes are pronounced unassigned.
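As an illustration of single quantile scaling, calibrating a second TOM to a reference TOM so that their \code{calibrationQuantile} quantiles agree can be sketched as follows; this is a simplified stand-in for the internal calculation, which samples TOM entries rather than working with full matrices:

\preformatted{
## Simplified sketch of single quantile calibration
q1 <- quantile(TOM1, probs = calibrationQuantile, na.rm = TRUE)
q2 <- quantile(TOM2, probs = calibrationQuantile, na.rm = TRUE)
TOM2.calibrated <- TOM2^(log(q1)/log(q2))
## The calibrationQuantile quantile of TOM2.calibrated now equals q1.
}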
After all blocks have been processed, the function checks whether there are genes whose KME in the module to which they are assigned is lower than their KME to another module. If p-values of the higher correlations are smaller than those of the native module by the factor \code{reassignThresholdPS} (in every set), the gene is re-assigned to the closer module. In the last step, modules whose eigengenes are highly correlated are merged. This is achieved by clustering module eigengenes using the dissimilarity given by one minus their correlation, cutting the dendrogram at the height \code{mergeCutHeight} and merging all modules on each branch. The process is iterated until no modules are merged. See \code{\link{mergeCloseModules}} for more details on module merging. The argument \code{quick} specifies the precision of handling of missing data in the correlation calculations. Zero will cause all calculations to be executed precisely, which may be significantly slower than calculations without missing data. Progressively higher values will speed up the calculations but introduce progressively larger errors. Without missing data, all column means and variances can be pre-calculated before the covariances are calculated. When missing data are present, exact calculations require the column means and variances to be calculated for each covariance. The approximate calculation uses the pre-calculated mean and variance and simply ignores missing data in the covariance calculation. If the number of missing data is high, the pre-calculated means and variances may be very different from the actual ones, thus potentially introducing large errors. The \code{quick} value times the number of rows specifies the maximum difference in the number of missing entries for mean and variance calculations on the one hand and covariance on the other hand that will be tolerated before a recalculation is triggered.
The hope is that if only a few missing data are treated approximately, the error introduced will be small but the potential speedup can be significant. } \value{ A list with the following components: \item{colors}{ module assignment of all input genes. A vector containing either character strings with module colors (if input \code{numericLabels} was unset) or numeric module labels (if \code{numericLabels} was set to \code{TRUE}). The color "grey" and the numeric label 0 are reserved for unassigned genes. } \item{unmergedColors }{ module colors or numeric labels before the module merging step. } \item{multiMEs}{ module eigengenes corresponding to the modules returned in \code{colors}, in multi-set format. A vector of lists, one per set, containing eigengenes, proportion of variance explained and other information. See \code{\link{multiSetMEs}} for a detailed description. } \item{goodSamples}{ a list, with one component per input set. Each component is a logical vector with one entry per sample from the corresponding set. The entry indicates whether the sample in the set passed basic quality control criteria. } \item{goodGenes}{a logical vector with one entry per input gene indicating whether the gene passed basic quality control criteria in all sets.} \item{dendrograms}{a list with one component for each block of genes. Each component is the hierarchical clustering dendrogram obtained by clustering the consensus gene dissimilarity in the corresponding block. } \item{TOMFiles}{ if \code{saveConsensusTOMs==TRUE}, a vector of character strings, one string per block, giving the file names of files (relative to current directory) in which blockwise topological overlaps were saved. } \item{blockGenes}{a list with one component for each block of genes. Each component is a vector giving the indices (relative to the input \code{multiExpr}) of genes in the corresponding block. 
} \item{blocks}{if input \code{blocks} was given, its copy; otherwise a vector of length equal to the number of genes giving the block label for each gene. Note that block labels are not necessarily sorted in the order in which the blocks were processed (since we do not require this for the input \code{blocks}). See \code{blockOrder} below. } \item{blockOrder}{ a vector giving the order in which blocks were processed and in which \code{blockGenes} above is returned. For example, \code{blockOrder[1]} contains the label of the first-processed block. } \item{originCount}{A vector of length \code{nSets} that contains, for each set, the number of (calibrated) elements that were less than or equal to the consensus for that element.} \item{networkCalibrationSamples}{if the input \code{getNetworkCalibrationSamples} is \code{TRUE}, this component is a list with one component per block. Each component is again a list with two components: \code{sampleIndex} contains indices of the distance structure in which TOM is stored that were sampled, and \code{TOMSamples} is a matrix whose rows correspond to TOM samples and columns to individual sets. Hence, \code{networkCalibrationSamples[[blockNo]]$TOMSamples[index, setNo]} contains the TOM entry that corresponds to element \code{networkCalibrationSamples[[blockNo]]$sampleIndex[index]} of the TOM distance structure in block \code{blockNo} and set \code{setNo}. (For details on the distance structure, see \code{\link{dist}}.)} } \note{ If the input datasets have large numbers of genes, consider carefully the \code{maxBlockSize} argument, as it significantly affects the memory footprint (and whether the function will fail with a memory allocation error). From a theoretical point of view it is advantageous to use blocks as large as possible; on the other hand, using smaller blocks is substantially faster and often the only way to work with large numbers of genes.
As a rough guide, it is unlikely a standard desktop computer with 4GB memory or less will be able to work with blocks larger than 7000 genes. %Topological overlap calculations can be speeded up substantially (several 10-fold times on multi-core %systems) if R is compiled with a dedicated BLAS (Basic Linear Algebra Subroutines) %library such as ATLAS or GotoBLAS and the package is compiled on your target system (which is always the %case for Unix, Unix-like and Mac systems, but is normally not the case on Windows systems). } \references{ Langfelder P, Horvath S (2007) Eigengene networks for studying the relationships between co-expression modules. BMC Systems Biology 2007, 1:54 } \author{Peter Langfelder} \seealso{ \code{\link{goodSamplesGenesMS}} for basic quality control and filtering; \code{\link{adjacency}}, \code{\link{TOMsimilarity}} for network construction; \code{\link{hclust}} for hierarchical clustering; \code{\link[dynamicTreeCut]{cutreeDynamic}} for adaptive branch cutting in hierarchical clustering dendrograms; \code{\link{mergeCloseModules}} for merging of close modules. } \keyword{ misc }% __ONLY ONE__ keyword per line WGCNA/man/blueWhiteRed.Rd0000644000176200001440000000364714230552654014556 0ustar liggesusers\name{blueWhiteRed} \alias{blueWhiteRed} \title{ Blue-white-red color sequence } \description{ Generate a blue-white-red color sequence of a given length. } \usage{ blueWhiteRed( n, gamma = 1, endSaturation = 1, blueEnd = c(0.05 + (1-endSaturation) * 0.45 , 0.55 + (1-endSaturation) * 0.25, 1.00), redEnd = c(1.0, 0.2 + (1-endSaturation) * 0.6, 0.6*(1-endSaturation)), middle = c(1,1,1)) } \arguments{ \item{n}{ number of colors to be returned. } \item{gamma}{ color change power. } \item{endSaturation}{ a number between 0 and 1 giving the saturation of the colors that will represent the ends of the scale. 
Lower numbers mean less saturation (lighter colors).} \item{blueEnd}{vector of length 3 giving the RGB relative values (between 0 and 1) for the blue or negative end color.} \item{redEnd}{vector of length 3 giving the RGB relative values (between 0 and 1) for the red or positive end color.} \item{middle}{vector of length 3 giving the RGB relative values (between 0 and 1) for the middle of the scale.} } \details{ The function returns a color vector that starts with blue, gradually turns into white and then to red. The power \code{gamma} can be used to control the behaviour of the quarter- and three-quarter values (between blue and white, and white and red, respectively). Higher powers will make the mid-colors more white, while lower powers will make the colors more saturated. } \value{ A vector of colors of length \code{n}. } \author{ Peter Langfelder } \seealso{ \code{\link{numbers2colors}} for a function that produces a color representation for continuous numbers. } \examples{ par(mfrow = c(3, 1)) displayColors(blueWhiteRed(50)); title("gamma = 1") displayColors(blueWhiteRed(50, 3)); title("gamma = 3") displayColors(blueWhiteRed(50, 0.5)); title("gamma = 0.5") } \keyword{color}% __ONLY ONE__ keyword per line WGCNA/man/plotDendroAndColors.Rd0000644000176200001440000001340314012015545016070 0ustar liggesusers\name{plotDendroAndColors} \alias{plotDendroAndColors} \title{ Dendrogram plot with color annotation of objects } \description{ This function plots a hierarchical clustering dendrogram and color annotation(s) of objects in the dendrogram underneath.
} \usage{ plotDendroAndColors( dendro, colors, groupLabels = NULL, rowText = NULL, rowTextAlignment = c("left", "center", "right"), rowTextIgnore = NULL, textPositions = NULL, setLayout = TRUE, autoColorHeight = TRUE, colorHeight = 0.2, colorHeightBase = 0.2, colorHeightMax = 0.6, rowWidths = NULL, dendroLabels = NULL, addGuide = FALSE, guideAll = FALSE, guideCount = 50, guideHang = 0.2, addTextGuide = FALSE, cex.colorLabels = 0.8, cex.dendroLabels = 0.9, cex.rowText = 0.8, marAll = c(1, 5, 3, 1), saveMar = TRUE, abHeight = NULL, abCol = "red", ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{dendro}{ a hierarchical clustering dendrogram such as one produced by \code{\link[stats]{hclust}}. } \item{colors}{ Coloring of objects on the dendrogram. Either a vector (one color per object) or a matrix (can also be an array or a data frame) with each column giving one color per object. Each column will be plotted as a horizontal row of colors under the dendrogram. } \item{groupLabels}{ Labels for the colorings given in \code{colors}. The labels will be printed to the left of the color rows in the plot. If the argument is given, it must be a vector of length equal to the number of columns in \code{colors}. If not given, \code{names(colors)} will be used if available. If not, sequential numbers starting from 1 will be used.} \item{rowText}{Optional labels to identify colors in the color rows. If given, must be either the same dimensions as \code{colors} or must have the same number of rows and \code{textPositions} must be used to specify which columns of \code{colors} each column of \code{rowText} corresponds to. 
Each label that occurs will be displayed once, under the largest continuous block of the corresponding \code{colors}.} \item{rowTextAlignment}{Character string specifying whether the labels should be left-justified to the start of the largest block of each label, centered in the middle, or right-justified to the end of the largest block.} \item{rowTextIgnore}{Optional specifications of labels that should be ignored when displaying them using \code{rowText} above. } \item{textPositions}{optional numeric vector of the same length as the number of columns in \code{rowText} giving the color rows under which the text rows should appear.} \item{setLayout}{ logical: should the plotting device be partitioned into a standard layout? If \code{FALSE}, the user is responsible for partitioning. The function expects two regions of the same width, the first one immediately above the second one. } \item{autoColorHeight}{ logical: should the height of the color area below the dendrogram be automatically adjusted for the number of traits? Only effective if \code{setLayout} is \code{TRUE}. } \item{colorHeight}{ specifies the height of the color area under dendrogram as a fraction of the height of the dendrogram area. Only effective when \code{autoColorHeight} above is \code{FALSE}. } \item{colorHeightBase}{when \code{autoColorHeight} is \code{TRUE}, this specifies the minimum height of the color area (the height when there is one color row).} \item{colorHeightMax}{when \code{autoColorHeight} is \code{TRUE}, this specifies the maximum height of the color area (the height when there are many color rows).} \item{rowWidths}{ optional specification of relative row widths for the color and text (if given) rows. Need not sum to 1. } \item{dendroLabels}{ dendrogram labels. Set to \code{FALSE} to disable dendrogram labels altogether; set to \code{NULL} to use row labels of \code{datExpr}. } \item{addGuide}{ logical: should vertical "guide lines" be added to the dendrogram plot? 
The lines make it easier to identify color codes with individual samples. } \item{guideAll}{ logical: add a guide line for every sample? Only effective when \code{addGuide} is set to \code{TRUE}. } \item{guideCount}{ number of guide lines to be plotted. Only effective when \code{addGuide} is \code{TRUE} and \code{guideAll} is \code{FALSE}. } \item{guideHang}{ fraction of the dendrogram height to leave between the top end of the guide line and the dendrogram merge height. If the guide lines overlap with dendrogram labels, increase \code{guideHang} to leave more space for the labels. } \item{addTextGuide}{ logical: should guide lines be added for the text rows (if given)? } \item{cex.colorLabels}{ character expansion factor for trait labels. } \item{cex.dendroLabels}{ character expansion factor for dendrogram (sample) labels. } \item{cex.rowText}{ character expansion factor for text rows (if given). } \item{marAll}{ a vector of length 4 giving the bottom, left, top and right margins of the combined plot. There is no margin between the dendrogram and the color plot underneath. } \item{saveMar}{ logical: save the margin settings before starting the plot and restore them on exit? } \item{abHeight}{ optional specification of the height for a horizontal line in the dendrogram, see \code{\link{abline}}. } \item{abCol}{ color for plotting the horizontal line. } \item{\dots}{ other graphical parameters to \code{\link{plot.hclust}}. } } \details{ The function splits the plotting device into two regions, plots the given dendrogram in the upper region, then plots color rows in the region below the dendrogram. } \value{ None.
} \author{ Peter Langfelder } \seealso{ \code{\link{plotColorUnderTree}} } \keyword{ hplot } WGCNA/man/exportNetworkToVisANT.Rd0000644000176200001440000000414214022073754016403 0ustar liggesusers\name{exportNetworkToVisANT} \alias{exportNetworkToVisANT} \title{ Export network data in format readable by VisANT} \description{ Exports network data in a format readable and displayable by the VisANT software. } \usage{ exportNetworkToVisANT( adjMat, file = NULL, weighted = TRUE, threshold = 0.5, maxNConnections = NULL, probeToGene = NULL) } \arguments{ \item{adjMat}{ adjacency matrix of the network to be exported. } \item{file}{ character string specifying the file name of the file in which the data should be written. If not given, no file will be created. The file is in a plain text format. } \item{weighted}{ logical: should the exported network be weighted? } \item{threshold}{ adjacency threshold for including edges in the output. } \item{maxNConnections}{maximum number of exported adjacency edges. This can be used as another filter on the exported edges.} \item{probeToGene}{ optional specification of a conversion between probe names (that label columns and rows of \code{adjMat}) and gene names (that should label nodes in the output). } } \details{ The adjacency matrix is checked for validity. The entries can be negative, however. The adjacency matrix is expected to also have valid \code{names} or \code{dimnames[[2]]} that represent the probe names of the corresponding edges. Whether the output is a weighted network or not, only edges whose adjacency (in absolute value) is above \code{threshold} will be included in the output. If \code{maxNConnections} is given, at most \code{maxNConnections} edges will be included in the output. If \code{probeToGene} is given, it is expected to have two columns, the first one corresponding to the probe names, the second to their corresponding gene names that will be used in the output.
} \value{ A data frame containing the network information suitable as input to VisANT. The same data frame is also written into a file specified by \code{file}, if given. } \references{ VisANT software is available from http://www.visantnet.org/visantnet.html/. } \author{ Peter Langfelder } \keyword{ misc }% __ONLY ONE__ keyword per line WGCNA/man/AFcorMI.Rd0000644000176200001440000000275114012015545013375 0ustar liggesusers\name{AFcorMI} \alias{AFcorMI} \title{Prediction of Weighted Mutual Information Adjacency Matrix by Correlation } \description{ AFcorMI computes a predicted weighted mutual information adjacency matrix from a given correlation matrix.} \usage{ AFcorMI(r, m) } \arguments{ \item{r}{ a symmetric correlation matrix with values from -1 to 1. } \item{m}{ number of observations from which the correlation was calculated. } } \details{ This function is a one-to-one prediction when we consider correlation as unsigned. The prediction corresponds to the \code{AdjacencyUniversalVersion2} discussed in the help file for the function \code{\link{mutualInfoAdjacency}}. For more information about the generation and features of the predicted mutual information adjacency, please refer to the function \code{\link{mutualInfoAdjacency}}. } \value{ A matrix with the same size as the input correlation matrix, containing the predicted mutual information of type \code{AdjacencyUniversalVersion2}.
} \author{ Steve Horvath, Lin Song, Peter Langfelder } \seealso{ \code{\link{mutualInfoAdjacency}} } \examples{ #Simulate a data frame datE which contains 5 columns and 50 observations m=50 x1=rnorm(m) r=.5; x2=r*x1+sqrt(1-r^2)*rnorm(m) r=.3; x3=r*(x1-.5)^2+sqrt(1-r^2)*rnorm(m) x4=rnorm(m) r=.3; x5=r*x4+sqrt(1-r^2)*rnorm(m) datE=data.frame(x1,x2,x3,x4,x5) #calculate predicted AUV2 cor.data=cor(datE, use="p") AUV2=AFcorMI(r=cor.data, m=nrow(datE)) } \keyword{ misc } WGCNA/man/normalizeLabels.Rd0000644000176200001440000000127014012015545015273 0ustar liggesusers\name{normalizeLabels} \alias{normalizeLabels} \title{Transform numerical labels into normal order. } \description{ Transforms numerical labels into normal order, that is the largest group will be labeled 1, next largest 2 etc. Label 0 is optionally preserved. } \usage{ normalizeLabels(labels, keepZero = TRUE) } \arguments{ \item{labels}{Numerical labels.} \item{keepZero}{If \code{TRUE} (the default), labels 0 are preserved.} } \value{ A vector of the same length as input, containing the normalized labels. } \author{ Peter Langfelder, \email{Peter.Langfelder@gmail.com} } % Add one or more standard keywords, see file 'KEYWORDS' in the % R documentation directory. \keyword{misc} WGCNA/man/TOMsimilarity.Rd0000644000176200001440000000761514012015545014727 0ustar liggesusers\name{TOMsimilarity} \alias{TOMsimilarity} \alias{TOMdist} \title{ Topological overlap matrix similarity and dissimilarity} \description{ Calculation of the topological overlap matrix, and the corresponding dissimilarity, from a given adjacency matrix. 
} \usage{ TOMsimilarity( adjMat, TOMType = "unsigned", TOMDenom = "min", suppressTOMForZeroAdjacencies = FALSE, suppressNegativeTOM = FALSE, useInternalMatrixAlgebra = FALSE, verbose = 1, indent = 0) TOMdist( adjMat, TOMType = "unsigned", TOMDenom = "min", suppressTOMForZeroAdjacencies = FALSE, suppressNegativeTOM = FALSE, useInternalMatrixAlgebra = FALSE, verbose = 1, indent = 0) } \arguments{ \item{adjMat}{ adjacency matrix, that is a square, symmetric matrix with entries between 0 and 1 (negative values are allowed if \code{TOMType=="signed"}). } \item{TOMType}{ one of \code{"none"}, \code{"unsigned"}, \code{"signed"}, \code{"signed Nowick"}, \code{"unsigned 2"}, \code{"signed 2"} and \code{"signed Nowick 2"}. If \code{"none"}, adjacency will be used for clustering. See \code{\link{TOMsimilarityFromExpr}} for details.} \item{TOMDenom}{ a character string specifying the TOM variant to be used. Recognized values are \code{"min"} giving the standard TOM described in Zhang and Horvath (2005), and \code{"mean"} in which the \code{min} function in the denominator is replaced by \code{mean}. The \code{"mean"} may produce better results but at this time should be considered experimental.} %The default mean denominator variant %is preferrable and we recommend using it unless the user needs to reproduce older results obtained using %the standard, minimum denominator TOM. } \item{suppressTOMForZeroAdjacencies}{Logical: should the results be set to zero for zero adjacencies?} \item{suppressNegativeTOM}{Logical: should the result be set to zero when negative? } \item{useInternalMatrixAlgebra}{Logical: should WGCNA's own, slow, matrix multiplication be used instead of R-wide BLAS? Only useful for debugging.} \item{verbose}{ integer level of verbosity. Zero means silent, higher values make the output progressively more and more verbose. } \item{indent}{ indentation for diagnostic messages. Zero means no indentation, each unit adds two spaces. 
} } \details{ The functions perform basically the same calculations of topological overlap. \code{TOMdist} turns the overlap (which is a measure of similarity) into a measure of dissimilarity by subtracting it from 1. Basic checks on the adjacency matrix are performed and missing entries are replaced by zeros. See \code{\link{TOMsimilarityFromExpr}} for details on the various TOM types. The underlying C code assumes that the diagonal of the adjacency matrix equals 1. If this is not the case, the diagonal of the input is set to 1 before the calculation begins. } \value{ A matrix holding the topological overlap. } \references{ Bin Zhang and Steve Horvath (2005) "A General Framework for Weighted Gene Co-Expression Network Analysis", Statistical Applications in Genetics and Molecular Biology: Vol. 4: No. 1, Article 17 For the Nowick-type signed TOM (referred to as weighted TO, wTO, by Nowick et al.), see Nowick K, Gernat T, Almaas E, Stubbs L. Differences in human and chimpanzee gene expression patterns define an evolving network of transcription factors in brain. Proc Natl Acad Sci U S A. 2009 Dec 29;106(52):22358-63. doi: 10.1073/pnas.0911376106. Epub 2009 Dec 10. or Gysi DM, Voigt A, Fragoso TM, Almaas E, Nowick K. wTO: an R package for computing weighted topological overlap and a consensus network with integrated visualization tool. BMC Bioinformatics. 2018 Oct 24;19(1):392. doi: 10.1186/s12859-018-2351-7. } \author{ Peter Langfelder } \seealso{ \code{\link{TOMsimilarityFromExpr}} } \keyword{ misc }% __ONLY ONE__ keyword per line WGCNA/man/preservationNetworkConnectivity.Rd0000644000176200001440000001263114012015545020645 0ustar liggesusers\name{preservationNetworkConnectivity} \alias{preservationNetworkConnectivity} \title{ Network preservation calculations } \description{ This function calculates several measures of gene network preservation. 
Given gene expression data in several individual data sets, it calculates the individual adjacency matrices, forms the preservation network and finally forms several summary measures of adjacency preservation for each node (gene) in the network. } \usage{ preservationNetworkConnectivity( multiExpr, useSets = NULL, useGenes = NULL, corFnc = "cor", corOptions = "use='p'", networkType = "unsigned", power = 6, sampleLinks = NULL, nLinks = 5000, blockSize = 1000, setSeed = 12345, weightPower = 2, verbose = 2, indent = 0) } \arguments{ \item{multiExpr}{ expression data in the multi-set format (see \code{\link{checkSets}}). A vector of lists, one per set. Each set must contain a component \code{data} that contains the expression data, with rows corresponding to samples and columns to genes or probes. } \item{useSets}{ optional specification of sets to be used for the preservation calculation. Defaults to using all sets. } \item{useGenes}{ optional specification of genes to be used for the preservation calculation. Defaults to all genes. } \item{corFnc}{ character string containing the name of the function to calculate correlation. Suggested functions include \code{"cor"} and \code{"bicor"}. } \item{corOptions}{ further argument to the correlation function. } \item{networkType}{ a character string encoding network type. Recognized values are (unique abbreviations of) \code{"unsigned"}, \code{"signed"}, and \code{"signed hybrid"}. } \item{power}{ soft thresholding power for network construction. Should be a number greater than 1. } \item{sampleLinks}{ logical: should network connections be sampled (\code{TRUE}) or should all connections be used systematically (\code{FALSE})? } \item{nLinks}{ number of links to be sampled. Should be set such that \code{nLinks * nNeighbors} be several times larger than the number of genes. } \item{blockSize}{ correlation calculations will be split into square blocks of this size, to prevent running out of memory for large gene sets. 
} \item{setSeed}{ seed to be used for sampling, for repeatability. If a seed already exists, it is saved before the sampling starts and restored upon exit. } \item{weightPower}{ power with which higher adjacencies will be weighted in weighted means. } \item{verbose}{ integer level of verbosity. Zero means silent, higher values make the output progressively more and more verbose. } \item{indent}{ indentation for diagnostic messages. Zero means no indentation, each unit adds two spaces. } } \details{ The preservation network is formed from adjacencies of compared sets. For 'complete' preservations, all given sets are compared at once; for 'pairwise' preservations, the sets are compared in pairs. Unweighted preservations are simple mean preservations for each node; their weighted counterparts are weighted averages in which a preservation of adjacencies \eqn{A^{(1)}_{ij}}{A[i,j; 1]} and \eqn{A^{(2)}_{ij}}{A[i,j; 2]} of nodes \eqn{i,j} between sets 1 and 2 is weighted by \eqn{[ (A^{(1)}_{ij} + A^{(2)}_{ij} )/2]^{weightPower}}{ ( (A[i,j; 1]+A[i,j; 2])/2)^weightPower}. The hyperbolic preservation is based on \eqn{\tanh[( max - min)/(max+min)^2]}{tanh[( max - min)/(max+min)^2]}, where \eqn{max}{max} and \eqn{min}{min} are the componentwise maximum and minimum of the compared adjacencies, respectively.
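For a single pair of adjacencies \code{a1} and \code{a2} of one link in two compared sets, the quantities described above can be sketched in R as follows. This is illustrative only and assumes that each preservation measure is defined as one minus the corresponding difference measure; the actual function vectorizes the calculation over all sampled links and all pairs of sets:

\preformatted{
a1 <- 0.8; a2 <- 0.6   # adjacencies of one gene pair in sets 1 and 2
weightPower <- 2       # default value of the argument
simple     <- 1 - abs(a1 - a2)
weight     <- ((a1 + a2)/2)^weightPower   # weight used in weighted averaging
hyperbolic <- 1 - tanh((max(a1, a2) - min(a1, a2))/(max(a1, a2) + min(a1, a2))^2)
}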
} \value{ A list with the following components: \item{pairwise}{ a matrix with rows corresponding to genes and columns to unique pairs of given sets, giving the pairwise preservation of the adjacencies connecting the gene to all other genes.} \item{complete}{ a vector with one entry for each input gene containing the complete mean preservation of the adjacencies connecting the gene to all other genes.} \item{pairwiseWeighted}{ a matrix with rows corresponding to genes and columns to unique pairs of given sets, giving the pairwise weighted preservation of the adjacencies connecting the gene to all other genes.} \item{completeWeighted}{ a vector with one entry for each input gene containing the complete weighted mean preservation of the adjacencies connecting the gene to all other genes.} \item{pairwiseHyperbolic}{ a matrix with rows corresponding to genes and columns to unique pairs of given sets, giving the pairwise hyperbolic preservation of the adjacencies connecting the gene to all other genes.} \item{completeHyperbolic}{ a vector with one entry for each input gene containing the complete mean hyperbolic preservation of the adjacencies connecting the gene to all other genes.} \item{pairwiseWeightedHyperbolic}{ a matrix with rows corresponding to genes and columns to unique pairs of given sets, giving the pairwise weighted hyperbolic preservation of the adjacencies connecting the gene to all other genes.} \item{completeWeightedHyperbolic}{ a vector with one entry for each input gene containing the complete weighted hyperbolic mean preservation of the adjacencies connecting the gene to all other genes.} } \references{ Langfelder P, Horvath S (2007) Eigengene networks for studying the relationships between co-expression modules. 
BMC Systems Biology 2007, 1:54 } \author{ Peter Langfelder } \seealso{ \code{\link{adjacency}} for calculation of adjacency; } \keyword{misc} WGCNA/man/collapseRows.Rd0000644000176200001440000003263314012015545014634 0ustar liggesusers\name{collapseRows} \alias{collapseRows} %- Also NEED an '\alias' for EACH other topic documented here. \title{Select one representative row per group} \description{Abstractly speaking, the function allows one to collapse the rows of a numeric matrix, e.g. by forming an average or selecting one representative row for each group of rows specified by a grouping variable (referred to as \code{rowGroup}). The word "collapse" reflects the fact that the method yields a new matrix whose rows correspond to other rows of the original input data. The function implements several network-based and biostatistical methods for finding a representative row for each group specified in \code{rowGroup}. Optionally, the function identifies the representative row according to the least number of missing data, the highest sample mean, the highest sample variance, the highest connectivity. One of the advantages of this function is that it implements default settings which have worked well in numerous applications. Below, we describe these default settings in more detail. } \usage{ collapseRows(datET, rowGroup, rowID, method="MaxMean", connectivityBasedCollapsing=FALSE, methodFunction=NULL, connectivityPower=1, selectFewestMissing=TRUE, thresholdCombine=NA) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{datET}{matrix or data frame containing numeric values where rows correspond to variables (e.g. microarray probes) and columns correspond to observations (e.g. microarrays). Each row of \code{datET} must have a unique row identifier (specified in the vector \code{rowID}). The group label of each row is encoded in the vector \code{rowGroup}. 
While \code{rowID} should have non-missing, unique values (identifiers), the values of the vector \code{rowGroup} will typically not be unique since the function aims to pick a representative row for each group. } \item{rowGroup}{character vector whose components contain the group label (e.g. a character string) for each row of \code{datET}. This vector needs to have the same length as the vector \code{rowID}. In gene expression applications, this vector could contain the gene symbol (or a co-expression module label). } \item{rowID}{character vector of row identifiers. This should include all the rows from rownames(\code{datET}), but can include other rows. Its entries should be unique (no duplicates) and no missing values are permitted. If the row identifier is missing for a given row, we suggest you remove this row from \code{datET} before applying the function. } \item{method}{character string for determining which method is used to choose a probe among exactly 2 corresponding rows or when connectivityBasedCollapsing=FALSE. These are the options: "MaxMean" (default) or "MinMean" = choose the row with the highest or lowest mean value, respectively. "maxRowVariance" = choose the row with the highest variance (across the columns of \code{datET}). "absMaxMean" or "absMinMean" = choose the row with the highest or lowest mean absolute value. "ME" = choose the eigenrow (first principal component of the rows in each group). Note that with this method option, connectivityBasedCollapsing is automatically set to FALSE. "Average" = for each column, take the average value of the rows in each group "function" = use this method for a user-input function (see the description of the argument "methodFunction"). Note: if method="ME", "Average" or "function", the output parameters "group2row" and "selectedRow" are not informative. } \item{connectivityBasedCollapsing}{logical value. 
If TRUE, groups with 3 or more corresponding rows will be represented by the row with the highest connectivity according to a signed weighted correlation network adjacency matrix among the corresponding rows. Recall that the connectivity is defined as the row sum of the adjacency matrix. The signed weighted adjacency matrix is defined as A=(0.5+0.5*COR)^power where power is determined by the argument \code{connectivityPower} and COR denotes the matrix of pairwise Pearson correlation coefficients among the corresponding rows. } \item{methodFunction}{function used to collapse the rows of each group into a single row. It only needs to be specified if method="function", otherwise its input is ignored. Must be a function that takes an Nr x Nc matrix of numbers as input and outputs a vector of length Nc (e.g., colMeans). This will then be the method used for collapsing values for multiple rows into a single value for the row. } \item{connectivityPower}{Positive number (typically integer) specifying the power used to construct the signed weighted adjacency matrix; see the description of \code{connectivityBasedCollapsing}. This option is only used if connectivityBasedCollapsing=TRUE. } \item{selectFewestMissing}{logical value. If TRUE (default), the input expression matrix is trimmed such that for each group only the rows with the fewest missing values are retained. In situations where an equal number of values are missing (or where there is no missing data), all rows for a given group are retained. Whether this value is set to TRUE or FALSE, all rows with >90\% missing data are omitted from the analysis. } \item{thresholdCombine}{Number between -1 and 1, or NA. If NA (default), this input is ignored. If a number between -1 and 1 is input, this value is taken as a threshold value, and collapseRows proceeds following the "MaxMean" method, but ONLY for ids with correlation R > thresholdCombine.
Specifically: (1) If there is one id per group, keep the id. (2) If there are 2 ids per group, take the maximum mean expression if their correlation is > thresholdCombine. (3) If there are 3 or more ids per group, iteratively repeat (2) for the 2 ids with the highest correlation until all remaining ids in the group have correlation < thresholdCombine. Note that this option usually results in more than one id per group; therefore, one must use care when implementing this option for use in comparisons between multiple matrices / data frames. } } \details{ The function is robust to missing data. Also, if rowIDs are missing, they are inferred according to the rownames of \code{datET} when possible. When a group corresponds to only 1 row then it is represented by this row since there is no other choice. Having said this, the row may be removed if it contains an excessive amount of missing data (90 percent or more missing values); see the description of the argument \code{selectFewestMissing} for more details. A group is represented by the corresponding row with the fewest missing values if \code{selectFewestMissing} has been set to TRUE. Often several rows have the same minimum number of missing values (or no missing values) and a representative must be chosen among those rows. In this case we distinguish 2 situations: (1) If a group corresponds to exactly 2 rows then the corresponding row with the highest average is selected if \code{method="MaxMean"}. Alternative methods can be chosen as described in \code{method}. (2) If a group corresponds to more than 2 rows, then the function calculates a signed weighted correlation network (with power specified in \code{connectivityPower}) among the corresponding rows if \code{connectivityBasedCollapsing=TRUE}. Next the function calculates the network connectivity of each row (closely related to the sum of correlations with the other matching rows). Next it chooses the most highly connected row as representative.
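The connectivity-based selection just described can be sketched in a few lines of standalone R (an illustrative sketch, not the internal implementation; variable names are hypothetical):

```r
# Pick the most highly connected of 3 or more rows belonging to one group,
# using a signed weighted adjacency A = (0.5 + 0.5*COR)^power with power = 1.
set.seed(1)
rowsInGroup <- matrix(rnorm(5 * 20), nrow = 5)            # 5 rows, 20 observations
COR <- cor(t(rowsInGroup), use = "pairwise.complete.obs") # pairwise Pearson correlations
A <- (0.5 + 0.5 * COR)^1                                  # signed weighted adjacency
k <- rowSums(A) - 1                                       # connectivity: row sums minus self-adjacency
representative <- which.max(k)                            # index of the chosen representative row
```

In \code{collapseRows} itself the power is set by \code{connectivityPower}, and ties are broken by taking the first row.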
If connectivityBasedCollapsing=FALSE, then \code{method} is used. For both situations, if more than one row has the same value, the first such row is chosen. Setting \code{thresholdCombine} is a special case of this function, as not all ids for a single group are necessarily collapsed; only those with similar expression patterns are collapsed. We suggest using this option when the goal is to decrease the number of ids for computational reasons, but when ALL ids for a single group should not be combined (for example, if two probes could represent different splice variants for the same gene for many genes on a microarray). Example application: when dealing with microarray gene expression data, the rows of \code{datET} may correspond to unique probe identifiers and \code{rowGroup} may contain corresponding gene symbols. Recall that multiple probes (specified using \code{rowID}=ProbeID) may correspond to the same gene symbol (specified using \code{rowGroup}=GeneSymbol). In this case, \code{datET} contains the input expression data with rows labeled by probe IDs, while the output \code{datETcollapsed} contains expression data with rows labeled by gene symbols, collapsing all probes for a given gene symbol into one representative. } \value{ The output is a list with the following components. \item{datETcollapsed}{ is a numeric matrix with the same columns as the input matrix \code{datET}, but with rows corresponding to the different row groups rather than individual row identifiers. (If thresholdCombine is set, then rows still correspond to individual row identifiers.) } \item{group2row}{ is a matrix whose rows correspond to the unique group labels and whose 2 columns report which group label (first column called \code{group}) is represented by what row label (second column called \code{selectedRowID}). Set to NULL if method="ME" or "function".} \item{selectedRow}{ is a logical vector whose components are TRUE for rows selected as representatives and FALSE otherwise. It has the same length as the vector \code{rowID}.
Set to NULL if method="ME" or "function".} } \references{ Miller JA, Langfelder P, Cai C, Horvath S (2010) Strategies for optimally aggregating gene expression data: The collapseRows R function. Technical Report. } \author{ Jeremy A. Miller, Steve Horvath, Peter Langfelder, Chaochao Cai } %% ~Make other sections like Warning with \section{Warning }{....} ~ \examples{ ######################################################################## # EXAMPLE 1: # The code simulates a data frame (called dat1) of correlated rows. # You can skip this part and start at the line called Typical Input Data # The first column of the data frame will contain row identifiers # number of columns (e.g. observations or microarrays) m=60 # number of rows (e.g. variables or probes on a microarray) n=500 # seed module eigenvector for the simulateModule function MEtrue=rnorm(m) # numeric data frame of n rows and m columns datNumeric=data.frame(t(simulateModule(MEtrue,n))) RowIdentifier=paste("Probe", 1:n, sep="") ColumnName=paste("Sample",1:m, sep="") dimnames(datNumeric)[[2]]=ColumnName # Let us now generate a data frame whose first column contains the rowID dat1=data.frame(RowIdentifier, datNumeric) #we simulate a vector with n/5 group labels, i.e. 
# each row group corresponds to 5 rows rowGroup=rep( paste("Group",1:(n/5), sep=""), 5 ) # Typical Input Data # Since the first column of dat1 contains the RowIdentifier, we use the following code datET=dat1[,-1] rowID=dat1[,1] # assign row names according to the RowIdentifier dimnames(datET)[[1]]=rowID # run the function and save it in an object collapse.object=collapseRows(datET=datET, rowGroup=rowGroup, rowID=rowID) # this creates the collapsed data where # the first column contains the group name # the second column reports the corresponding selected row name (the representative) # and the remaining columns report the values of the representative row dat1Collapsed=data.frame( collapse.object$group2row, collapse.object$datETcollapsed) dat1Collapsed[1:5,1:5] ######################################################################## # EXAMPLE 2: # Using the same data frame as above, run collapseRows with a user-supplied function. # In this case we will use the mean. Note that since we are choosing some combination # of the probe values for each gene, the group2row and selectedRow output # parameters are not meaningful. collapse.object.mean=collapseRows(datET=datET, rowGroup=rowGroup, rowID=rowID, method="function", methodFunction=colMeans)[[1]] # Note that in this situation, running the following code produces the identical results: collapse.object.mean.2=collapseRows(datET=datET, rowGroup=rowGroup, rowID=rowID, method="Average")[[1]] ######################################################################## # EXAMPLE 3: # Using collapseRows to calculate the module eigengene. # First we create some sample data as in example 1 (or use your own!) m=60 n=500 MEtrue=rnorm(m) datNumeric=data.frame(t(simulateModule(MEtrue,n))) # In this example, rows are genes, and groups are modules.
RowIdentifier=paste("Gene", 1:n, sep="") ColumnName=paste("Sample",1:m, sep="") dimnames(datNumeric)[[2]]=ColumnName dat1=data.frame(RowIdentifier, datNumeric) # We simulate a vector with n/100 modules, i.e. each row group corresponds to 100 rows rowGroup=rep( paste("Module",1:(n/100), sep=""), 100 ) datET=dat1[,-1] rowID=dat1[,1] dimnames(datET)[[1]]=rowID # run the function and save it in an object collapse.object.ME=collapseRows(datET=datET, rowGroup=rowGroup, rowID=rowID, method="ME")[[1]] # Note that in this situation, running the following code produces the identical results: collapse.object.ME.2 = t(moduleEigengenes(expr=t(datET),colors=rowGroup)$eigengenes) colnames(collapse.object.ME.2) = ColumnName rownames(collapse.object.ME.2) = sort(unique(rowGroup)) } \keyword{misc} WGCNA/man/proportionsInAdmixture.Rd \name{proportionsInAdmixture} \alias{proportionsInAdmixture} \title{Estimate the proportion of pure populations in an admixed population based on marker expression values. } \description{ Assume that \code{datE.Admixture} provides the expression values from a mixture of cell types (admixed population) and you want to estimate the proportion of each pure cell type in the mixed samples (rows of \code{datE.Admixture}). The function allows you to do this as long as you provide a data frame \code{MarkerMeansPure} that reports the mean expression values of markers in each of the pure cell types. } \usage{ proportionsInAdmixture( MarkerMeansPure, datE.Admixture, calculateConditionNumber = FALSE, coefToProportion = TRUE) } \arguments{ \item{MarkerMeansPure}{ is a data frame whose first column reports the name of the marker and the remaining columns report the mean values of the markers in each of the pure populations. The function will estimate the proportion of pure cells which correspond to columns 2 through \code{dim(MarkerMeansPure)[[2]]} of \code{MarkerMeansPure}.
Rows that contain missing values (NA) will be removed. } \item{datE.Admixture}{is a data frame of expression data, e.g. the columns of \code{datE.Admixture} could correspond to thousands of genes. The rows of \code{datE.Admixture} correspond to the admixed samples for which the function estimates the proportions of pure populations. Some of the markers specified in the first column of \code{MarkerMeansPure} should correspond to column names of \code{datE.Admixture}. } \item{calculateConditionNumber}{logical. Default is FALSE. If set to TRUE then it uses the \code{kappa} function to calculate the condition number of the matrix \code{MarkerMeansPure[,-1]}. This allows one to determine whether the linear model for estimating the proportions is well specified. Type \code{help(kappa)} to learn more. \code{kappa()} computes by default (an estimate of) the 2-norm condition number of a matrix or of the R matrix of a QR decomposition, perhaps of a linear fit. } \item{coefToProportion}{logical. By default, it is set to TRUE. When estimating the proportions the function fits a multivariate linear model. Ideally, the coefficients of the linear model correspond to the proportions in the admixed samples. But sometimes the coefficients take on negative values or do not sum to 1. If \code{coefToProportion=TRUE} then negative coefficients will be set to 0 and the remaining coefficients will be scaled so that they sum to 1. } } \details{The methods implemented in this function were motivated by the gene expression deconvolution approach described by Abbas et al (2009), Lu et al (2003), Wang et al (2006). This approach can be used to predict the proportions of (pure) cells in a complex tissue, e.g. the proportion of blood cell types in whole blood. To define the markers, you may need to have expression data from pure populations. Then you can define markers based on a significant t-test or ANOVA across the pure populations.
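The conversion controlled by \code{coefToProportion} can be sketched as follows (an illustrative sketch; the helper name \code{coefToProportions} is hypothetical and not part of the package):

```r
# Set negative linear-model coefficients to 0 and rescale the rest to sum to 1.
coefToProportions <- function(coef) {
  coef[coef < 0] <- 0
  coef / sum(coef)
}
coefToProportions(c(0.6, -0.1, 0.3))  # gives 0.6667, 0.0000, 0.3333
```

The rescaling assumes at least one coefficient is positive; otherwise no meaningful proportions can be formed.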
Next use the pure population data to estimate corresponding mean expression values. Ideally, the array platforms and normalization methods for \code{datE.MarkersAdmixtureTranspose} and \code{MarkerMeansPure} are comparable. When dealing with Affymetrix data, we have successfully used it on untransformed MAS5 data. For statisticians: to estimate the proportions, we use the coefficients of a linear model. Specifically: \code{datCoef= t(lm(datE.MarkersAdmixtureTranspose ~MarkerMeansPure[,-1])$coefficients[-1,])} where \code{datCoef} is a matrix whose rows correspond to the mixed samples (rows of \code{datE.Admixture}) and the columns correspond to pure populations (e.g. cell types), i.e. the columns of \code{MarkerMeansPure[,-1]}. More details can be found in Abbas et al (2009). } \value{A list with the following components: \item{PredictedProportions}{data frame that contains the predicted proportions. The rows of \code{PredictedProportions} correspond to the admixed samples, i.e. the rows of \code{datE.Admixture}. The columns of \code{PredictedProportions} correspond to the pure populations, i.e. the columns of \code{MarkerMeansPure[,-1]}. } \item{datCoef}{data frame of numbers that is analogous to \code{PredictedProportions}. In general, \code{datCoef} will only be different from \code{PredictedProportions} if \code{coefToProportion=TRUE}. See the description of \code{coefToProportion}. } \item{conditionNumber}{This is the condition number resulting from the \code{kappa} function. See the description of \code{calculateConditionNumber}. } \item{markersUsed}{vector of character strings that contains the subset of marker names (specified in the first column of \code{MarkerMeansPure}) that match column names of \code{datE.Admixture} and that contain non-missing pure mean values.
} } \references{ Abbas AR, Wolslegel K, Seshasayee D, Modrusan Z, Clark HF (2009) Deconvolution of Blood Microarray Data Identifies Cellular Activation Patterns in Systemic Lupus Erythematosus. PLoS ONE 4(7): e6098. doi:10.1371/journal.pone.0006098 Lu P, Nakorchevskiy A, Marcotte EM (2003) Expression deconvolution: a reinterpretation of DNA microarray data reveals dynamic changes in cell populations. Proc Natl Acad Sci U S A 100: 10370-10375. Wang M, Master SR, Chodosh LA (2006) Computational expression deconvolution in a complex mammalian organ. BMC Bioinformatics 7: 328. } \author{ Steve Horvath, Chaochao Cai } \note{ This function can be considered a wrapper of the \code{lm} function. } %% ~Make other sections like Warning with \section{Warning }{....} ~ \seealso{ \code{\link{lm}}, \code{\link{kappa}} } \keyword{misc}% __ONLY ONE__ keyword per line WGCNA/man/labeledHeatmap.multiPage.Rd \name{labeledHeatmap.multiPage} \alias{labeledHeatmap.multiPage} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Labeled heatmap divided into several separate plots. } \description{ This function produces labeled heatmaps divided into several plots. This is useful for large heatmaps where labels on individual columns and rows may become unreadably small (or overlap). } \usage{ labeledHeatmap.multiPage( # Input data and ornaments Matrix, xLabels, yLabels = NULL, xSymbols = NULL, ySymbols = NULL, textMatrix = NULL, # Paging options rowsPerPage = NULL, maxRowsPerPage = 20, colsPerPage = NULL, maxColsPerPage = 10, addPageNumberToMain = TRUE, # Further arguments to labeledHeatmap zlim = NULL, signed = TRUE, main = "", ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{Matrix}{ numerical matrix to be plotted in the heatmap. } \item{xLabels}{ labels for the columns. See Details. } \item{yLabels}{ labels for the rows. See Details.
} \item{xSymbols}{ additional labels used when \code{xLabels} are interpreted as colors. See Details. } \item{ySymbols}{ additional labels used when \code{yLabels} are interpreted as colors. See Details. } \item{textMatrix}{ optional text entries for each cell. Either a matrix of the same dimensions as \code{Matrix} or a vector of the same length as the number of entries in \code{Matrix}. } \item{rowsPerPage}{ optional list in which each component is a vector specifying which rows should appear together in each plot. If not given, will be generated automatically based on \code{maxRowsPerPage} below and the number of rows in \code{Matrix}. } \item{maxRowsPerPage}{ integer giving maximum number of rows appearing on each plot (page). } \item{colsPerPage}{ optional list in which each component is a vector specifying which columns should appear together in each plot. If not given, will be generated automatically based on \code{maxColsPerPage} below and the number of columns in \code{Matrix}. } \item{maxColsPerPage}{ integer giving maximum number of columns appearing on each plot (page). } \item{addPageNumberToMain}{ logical: should plot/page number be added to the \code{main} title of each plot? } \item{zlim}{ Optional specification of the extreme values for the color scale. If not given, will be determined from the input \code{Matrix}. } \item{signed}{ logical: should the input \code{Matrix} be converted to colors using a scale centered at zero? } \item{main}{ Main title for each plot/page, optionally with the plot/page number added. } \item{\dots}{ other arguments to function \code{\link{labeledHeatmap}}. } } \details{ The function \code{\link{labeledHeatmap}} is used to produce each plot/page; most arguments are described in more detail in the help file for that function.
In each plot/page \code{\link{labeledHeatmap}} plots a standard heatmap plot of an appropriate sub-rectangle of \code{Matrix} and embellishes it with row and column labels and/or with text within the heatmap entries. Row and column labels can be either character strings or color squares, or both. To get simple text labels, use \code{colorLabels=FALSE} and pass the desired row and column labels in \code{yLabels} and \code{xLabels}, respectively. To label rows and columns by color squares, use \code{colorLabels=TRUE}; \code{yLabels} and \code{xLabels} are then expected to represent valid colors. For reasons of compatibility with other functions, each entry in \code{yLabels} and \code{xLabels} is expected to consist of a color designation preceded by 2 characters: an example would be \code{MEturquoise}. The first two characters can be arbitrary, they are stripped. Any labels that do not represent valid colors will be considered text labels and printed in full, allowing the user to mix text and color labels. It is also possible to label rows and columns by both color squares and additional text annotation. To achieve this, use the above technique to get color labels and, additionally, pass the desired text annotation in the \code{xSymbols} and \code{ySymbols} arguments. If \code{rowsPerPage} (\code{colsPerPage}) is not given, rows (columns) are allocated automatically as uniformly as possible, in contiguous blocks of size at most \code{maxRowsPerPage} (\code{maxColsPerPage}). The allocation is performed by the function \code{\link{allocateJobs}}. } \value{ None. } \author{ Peter Langfelder } \seealso{ The workhorse function \code{\link{labeledHeatmap}} for the actual heatmap plot; function \code{\link{allocateJobs}} for the allocation of rows/columns to each plot. } % R documentation directory. 
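The automatic allocation of rows or columns to pages can be sketched as follows (an illustrative sketch of the contiguous, near-uniform strategy; the helper name \code{allocatePages} is hypothetical, and \code{allocateJobs} itself may differ in details):

```r
# Split indices 1..n into contiguous blocks of at most maxPerPage,
# as uniformly sized as possible.
allocatePages <- function(n, maxPerPage) {
  nPages <- ceiling(n / maxPerPage)
  sizes <- rep(floor(n / nPages), nPages)
  extra <- n - sum(sizes)                 # distribute the remainder over the first pages
  if (extra > 0) sizes[seq_len(extra)] <- sizes[seq_len(extra)] + 1
  split(seq_len(n), rep(seq_len(nPages), sizes))
}
allocatePages(25, 10)  # three pages holding 9, 8 and 8 rows
```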
\keyword{ misc } WGCNA/man/goodGenes.Rd \name{goodGenes} \alias{goodGenes} \title{ Filter genes with too many missing entries } \description{ This function checks data for missing entries and identifies genes that have non-zero variance and pass two criteria on missing values: the fraction of missing values must be below a given threshold and the total number of present samples must be at least equal to a given threshold. If weights are given, entries whose relative weight is below a threshold will be considered missing. } \usage{ goodGenes( datExpr, weights = NULL, useSamples = NULL, useGenes = NULL, minFraction = 1/2, minNSamples = ..minNSamples, minNGenes = ..minNGenes, tol = NULL, minRelativeWeight = 0.1, verbose = 1, indent = 0) } \arguments{ \item{datExpr}{ expression data. A data frame in which columns are genes and rows are samples. } \item{weights}{optional observation weights in the same format (and dimensions) as \code{datExpr}.} \item{useSamples}{ optional specifications of which samples to use for the check. Should be a logical vector; samples whose entries are \code{FALSE} will be ignored for the missing value counts. Defaults to using all samples.} \item{useGenes}{ optional specifications of genes for which to perform the check. Should be a logical vector; genes whose entries are \code{FALSE} will be ignored. Defaults to using all genes.} \item{minFraction}{ minimum fraction of non-missing samples for a gene to be considered good. } \item{minNSamples}{ minimum number of non-missing samples for a gene to be considered good. } \item{minNGenes}{ minimum number of good genes for the data set to be considered fit for analysis. If the actual number of good genes falls below this threshold, an error will be issued. } \item{tol}{ an optional 'small' number to compare the variance against.
Defaults to the square of \code{1e-10 * max(abs(datExpr), na.rm = TRUE)}. The reason for comparing the variance to this number, rather than zero, is that the fast way of computing variance used by this function sometimes causes small numerical overflow errors which make variance of constant vectors slightly non-zero; comparing the variance to \code{tol} rather than zero prevents the retaining of such genes as 'good genes'.} \item{minRelativeWeight}{ observations whose relative weight is below this threshold will be considered missing. Here relative weight is weight divided by the maximum weight in the column (gene).} \item{verbose}{ integer level of verbosity. Zero means silent, higher values make the output progressively more and more verbose. } \item{indent}{ indentation for diagnostic messages. Zero means no indentation, each unit adds two spaces. } } \details{ The constants \code{..minNSamples} and \code{..minNGenes} are both set to the value 4. If weights are given, entries whose relative weight (i.e., weight divided by the maximum weight in the column or gene) is below \code{minRelativeWeight} will be considered missing. For most data sets, the fraction of missing samples criterion will be much more stringent than the absolute number of missing samples criterion. } \value{ A logical vector with one entry per gene that is \code{TRUE} if the gene is considered good and \code{FALSE} otherwise. Note that all genes excluded by \code{useGenes} are automatically assigned \code{FALSE}. } \author{ Peter Langfelder and Steve Horvath } \seealso{ \code{\link{goodSamples}}, \code{\link{goodSamplesGenes}} } \keyword{ misc }% __ONLY ONE__ keyword per line WGCNA/man/exportNetworkToCytoscape.Rd \name{exportNetworkToCytoscape} \alias{exportNetworkToCytoscape} \title{ Export network to Cytoscape } \description{ This function exports a network in edge and node list files in a format suitable for importing to Cytoscape.
} \usage{ exportNetworkToCytoscape( adjMat, edgeFile = NULL, nodeFile = NULL, weighted = TRUE, threshold = 0.5, nodeNames = NULL, altNodeNames = NULL, nodeAttr = NULL, includeColNames = TRUE) } \arguments{ \item{adjMat}{ adjacency matrix giving connection strengths among the nodes in the network. } \item{edgeFile}{ file name of the file to contain the edge information. } \item{nodeFile}{ file name of the file to contain the node information. } \item{weighted}{ logical: should the exported network be weighted? } \item{threshold}{ adjacency threshold for including edges in the output. } \item{nodeNames}{ names of the nodes. If not given, \code{dimnames} of \code{adjMat} will be used. } \item{altNodeNames}{ optional alternate names for the nodes, for example gene names if nodes are labeled by probe IDs. } \item{nodeAttr}{ optional node attribute, for example module color. Can be a vector or a data frame. } \item{includeColNames}{ logical: should column names be included in the output files? Note that Cytoscape can read files both with and without column names. } } \details{ If the corresponding file names are supplied, the edge and node data are written to the appropriate files. The edge and node data are also returned as the return value (see below). } \value{ A list with the following components: \item{edgeData}{a data frame containing the edge data, with one row per edge} \item{nodeData}{a data frame containing the node data, with one row per node} } \author{ Peter Langfelder} \seealso{ \code{\link{exportNetworkToVisANT}}} \keyword{ misc } WGCNA/man/orderBranchesUsingHubGenes.Rd \name{orderBranchesUsingHubGenes} \alias{orderBranchesUsingHubGenes} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Optimize dendrogram using branch swaps and reflections.
} \description{ This function takes as input the hierarchical clustering tree as well as a subset of genes in the network (generally corresponding to branches in the tree), then returns a semi-optimally ordered tree. The idea is to maximize the correlations between adjacent branches in the dendrogram, in as much as that is possible by adjusting the arbitrary positionings of the branches by swapping and reflecting branches. } \usage{ orderBranchesUsingHubGenes( hierTOM, datExpr = NULL, colorh = NULL, type = "signed", adj = NULL, iter = NULL, useReflections = FALSE, allowNonoptimalSwaps = FALSE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{hierTOM}{ A hierarchical clustering object (or gene tree) that is used to plot the dendrogram. For example, the output object from the function hclust or fastcluster::hclust. Note that elements of hierTOM$order MUST be named (for example, with the corresponding gene name). } \item{datExpr}{ Gene expression data with rows as samples and columns as genes, or NULL if a pre-made adjacency is entered. Column names of datExpr must be a subset of gene names of hierTOM$order. } \item{colorh}{ The module assignments (color vectors) corresponding to the rows in datExpr, or NULL if a pre-made adjacency is entered. } \item{type}{ What type of network is being entered. Common choices are "signed" (default) and "unsigned". With "signed" negative correlations count against, whereas with "unsigned" negative correlations are treated identically as positive correlations. } \item{adj}{ Either NULL (default) or an adjacency (or any other square) matrix with rows and columns corresponding to a subset of the genes in hierTOM$order. If entered, datExpr, colorh, and type are all ignored. Typically, this would be left blank but could include correlations between module eigengenes, with rows and columns renamed as genes in the corresponding modules, for example. 
} \item{iter}{ The number of iterations to run the function in search of optimal branch ordering. The default is the square of the number of modules (or the square of the number of genes in the adjacency matrix). } \item{useReflections}{ If TRUE, both reflections and branch swapping will be used to optimize the dendrogram. If FALSE (default) only branch swapping will be used. } \item{allowNonoptimalSwaps}{ If TRUE, there is a chance (that decreases with each iteration) of swapping / reflecting branches whether or not the new correlation between expression of genes in adjacent branches is better or worse. The idea (which has not been sufficiently tested) is that this would prevent the function from getting stuck at a local maximum of correlation. If FALSE (default), the swapping / reflection of branches only occurs if it results in a higher correlation between adjacent branches. } } \value{ \item{hierTOM}{ A hierarchical clustering object with the hierTOM$order variable properly adjusted, but all other variables identical to the hierTOM input. } \item{changeLog}{ A log of all of the changes that were made to the dendrogram, including what change was made, on what iteration, and the Old and New scores based on correlation. These scores have arbitrary units, but higher is better. } } \author{ Jeremy Miller } \note{ This function is very slow and is still an *experimental* function. We have not had problems with ~10 modules across ~5000 genes, although theoretically it should work for many more genes and modules, depending upon the speed of the computer running R. Please address any problems or suggestions to jeremyinla@gmail.com. } \examples{ \dontrun{ ## Example: first simulate some data.
MEturquoise = sample(1:100,50) MEblue = c(MEturquoise[1:25], sample(1:100,25)) MEbrown = sample(1:100,50) MEyellow = sample(1:100,50) MEgreen = c(MEyellow[1:30], sample(1:100,20)) MEred = c(MEbrown [1:20], sample(1:100,30)) ME = data.frame(MEturquoise, MEblue, MEbrown, MEyellow, MEgreen, MEred) dat1 = simulateDatExpr(ME,400,c(0.16,0.12,0.11,0.10,0.10,0.10,0.1), signed=TRUE) TOM1 = TOMsimilarityFromExpr(dat1$datExpr, networkType="signed") colnames(TOM1) <- rownames(TOM1) <- colnames(dat1$datExpr) tree1 = fastcluster::hclust(as.dist(1-TOM1),method="average") colorh = labels2colors(dat1$allLabels) plotDendroAndColors(tree1,colorh,dendroLabels=FALSE) ## Reassign modules using the selectBranch and chooseOneHubInEachModule functions datExpr = dat1$datExpr hubs = chooseOneHubInEachModule(datExpr, colorh) colorh2 = rep("grey", length(colorh)) colorh2 [selectBranch(tree1,hubs["blue"],hubs["turquoise"])] = "blue" colorh2 [selectBranch(tree1,hubs["turquoise"],hubs["blue"])] = "turquoise" colorh2 [selectBranch(tree1,hubs["green"],hubs["yellow"])] = "green" colorh2 [selectBranch(tree1,hubs["yellow"],hubs["green"])] = "yellow" colorh2 [selectBranch(tree1,hubs["red"],hubs["brown"])] = "red" colorh2 [selectBranch(tree1,hubs["brown"],hubs["red"])] = "brown" plotDendroAndColors(tree1,cbind(colorh,colorh2),c("Old","New"),dendroLabels=FALSE) ## Now swap and reflect some branches, then optimize the order of the branches # and output pdf with resulting images pdf("DENDROGRAM_PLOTS.pdf",width=10,height=5) plotDendroAndColors(tree1,colorh2,dendroLabels=FALSE,main="Starting Dendrogram") tree1 = swapTwoBranches(tree1,hubs["red"],hubs["turquoise"]) plotDendroAndColors(tree1,colorh2,dendroLabels=FALSE,main="Swap blue/turquoise and red/brown") tree1 = reflectBranch(tree1,hubs["blue"],hubs["green"]) plotDendroAndColors(tree1,colorh2,dendroLabels=FALSE,main="Reflect turquoise/blue") # (This function will take a few minutes) out = 
orderBranchesUsingHubGenes(tree1,datExpr,colorh2,useReflections=TRUE,iter=100)
tree1 = out$geneTree
plotDendroAndColors(tree1,colorh2,dendroLabels=FALSE,main="Semi-optimal branch order")
out$changeLog
dev.off()
} } \keyword{misc}
% File WGCNA/man/cutreeStatic.Rd
\name{cutreeStatic} \alias{cutreeStatic} \title{ Constant-height tree cut } \description{ Module detection in hierarchical dendrograms using a constant-height tree cut. Only branches whose size is at least \code{minSize} are retained. } \usage{ cutreeStatic(dendro, cutHeight = 0.9, minSize = 50) } \arguments{ \item{dendro}{ a hierarchical clustering dendrogram such as returned by \code{\link{hclust}}. } \item{cutHeight}{ height at which branches are to be cut. } \item{minSize}{ minimum number of objects on a branch for it to be considered a cluster. } } \details{ This function performs a straightforward constant-height cut as implemented by \code{\link{cutree}}, then calculates the number of objects on each branch and only keeps branches that have at least \code{minSize} objects on them. } \value{ A numeric vector giving labels of objects, with 0 meaning unassigned. The largest cluster is conventionally labeled 1, the next largest 2, etc. } \author{ Peter Langfelder } \seealso{ \code{\link{hclust}} for hierarchical clustering, \code{\link{cutree}} for a plain constant-height branch cut, \code{\link{standardColors}} to convert the returned numerical labels into colors for easier visualization. } \keyword{misc}
% File WGCNA/man/corAndPvalue.Rd
\name{corAndPvalue} \alias{corAndPvalue} \title{ Calculation of correlations and associated p-values } \description{ A faster, one-step calculation of Student correlation p-values for multiple correlations, properly taking into account the actual number of observations.
} \usage{ corAndPvalue(x, y = NULL, use = "pairwise.complete.obs", alternative = c("two.sided", "less", "greater"), ...) } \arguments{ \item{x}{ a vector or a matrix } \item{y}{ a vector or a matrix. If \code{NULL}, the correlation of columns of \code{x} will be calculated. } \item{use}{ determines handling of missing data. See \code{\link{cor}} for details. } \item{alternative}{ specifies the alternative hypothesis and must be (a unique abbreviation of) one of \code{"two.sided"}, \code{"greater"} or \code{"less"}; you can specify just the initial letter. \code{"greater"} corresponds to positive association, \code{"less"} to negative association. } \item{\dots}{ other arguments to the function \code{\link{cor}}. } } \details{ The function calculates correlations of a matrix or of two matrices and the corresponding Student p-values. The output is not as full-featured as \code{\link{cor.test}}, but can work with matrices as input. } \value{ A list with the following components, each a matrix: \item{cor}{the calculated correlations} \item{p}{the Student p-values corresponding to the calculated correlations} \item{Z}{Fisher transforms of the calculated correlations} \item{t}{Student t statistics of the calculated correlations} \item{nObs}{Numbers of observations for the correlation, p-values etc.} } \author{ Peter Langfelder and Steve Horvath } \references{ Peter Langfelder, Steve Horvath (2012) Fast R Functions for Robust Correlations and Hierarchical Clustering. Journal of Statistical Software, 46(11), 1-17. \url{https://www.jstatsoft.org/v46/i11/} } \seealso{ \code{\link{cor}} for calculation of correlations only; \code{\link{cor.test}} for another function for significance test of correlations } \examples{
# generate random data with non-zero correlation
set.seed(1);
a = rnorm(100);
b = rnorm(100) + a;
x = cbind(a, b);
# Call the function and display all results
corAndPvalue(x)
# Set some components to NA
x[c(1:4), 1] = NA
corAndPvalue(x)
# Note the changed number of observations.
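# The p-values returned by corAndPvalue follow the standard Student t
# statistic for a correlation, t = r * sqrt(n - 2) / sqrt(1 - r^2).
# A base-R cross-check on the complete columns a and b defined above:
r = cor(a, b)
n = length(a)
t.stat = r * sqrt(n - 2) / sqrt(1 - r^2)
2 * pt(-abs(t.stat), df = n - 2)   # equals cor.test(a, b)$p.value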
} \keyword{ stats }
% File WGCNA/man/consensusDissTOMandTree.Rd
\name{consensusDissTOMandTree} \alias{consensusDissTOMandTree} \title{ Consensus clustering based on topological overlap and hierarchical clustering } \description{ This function makes a consensus network using all of the default values in the WGCNA library. Details regarding how consensus modules are formed can be found here: http://horvath.genetics.ucla.edu/html/CoexpressionNetwork/Rpackages/WGCNA/Tutorials/Consensus-NetworkConstruction-man.pdf } \usage{ consensusDissTOMandTree(multiExpr, softPower, TOM = NULL) } \arguments{ \item{multiExpr}{ Expression data in the multi-set format (see \code{\link{checkSets}}). A vector of lists, one per set. Each set must contain a component \code{data} that contains the expression data. Rows correspond to samples and columns to genes or probes. Two or more sets of data must be included and adjacencies cannot be used. } \item{softPower}{ Soft thresholding power used to make each of the networks in \code{multiExpr}. } \item{TOM}{ A list of matrices holding the topological overlap corresponding to the sets in \code{multiExpr}, if they have already been calculated. Otherwise, keep \code{TOM} set to NULL (default), and TOM similarities will be calculated using the WGCNA defaults. If given, this argument must be a list with each entry a TOM corresponding to the same entries in \code{multiExpr}. } } \value{ \item{consensusTOM}{ The TOM dissimilarity matrix (1-TOM similarity) corresponding to the consensus network. } \item{consTree}{ An object of class \code{hclust} (the same as returned by \code{\link{hclust}}) that describes the tree produced by the clustering process. This tree corresponds to the dissimilarity matrix consensusTOM.
} } \references{ Langfelder P, Horvath S (2007) Eigengene networks for studying the relationships between co-expression modules. BMC Systems Biology 2007, 1:54 } \author{ Peter Langfelder, Steve Horvath, Jeremy Miller } \seealso{ \code{\link{blockwiseConsensusModules}} } \examples{
# Example consensus network using two simulated data sets
set.seed(100)
MEturquoise = sample(1:100,50)
MEblue = sample(1:100,50)
MEbrown = sample(1:100,50)
MEyellow = sample(1:100,50)
MEgreen = sample(1:100,50)
ME = data.frame(MEturquoise, MEblue, MEbrown, MEyellow, MEgreen)
system.time({
dat1 = simulateDatExpr(ME,300,c(0.2, 0.10, 0.10, 0.10, 0.10, 0.2), signed=TRUE)})
system.time({
dat2 = simulateDatExpr(ME,300,c(0.18, 0.11, 0.11, 0.09, 0.11, 0.23),signed=TRUE)})
multiExpr = list(S1=list(data=dat1$datExpr),S2=list(data=dat2$datExpr))
softPower=8
system.time( {
consensusNetwork = consensusDissTOMandTree(multiExpr, softPower)})
system.time({
plotDendroAndColors(consensusNetwork$consTree, cbind(labels2colors(dat1$allLabels),
labels2colors(dat2$allLabels)),c("S1","S2"), dendroLabels=FALSE)})
} \keyword{misc}
% File WGCNA/man/pquantile.Rd
\name{pquantile} \alias{pquantile} \alias{pquantile.fromList} \alias{pmedian} \alias{pmean} \alias{pmean.fromList} \alias{pminWhich.fromList} \title{ Parallel quantile, median, mean } \description{ Calculation of ``parallel'' quantiles, minima, maxima, medians, and means, across given arguments or across lists } \usage{ pquantile(prob, ...) pquantile.fromList(dataList, prob) pmedian(...) pmean(..., weights = NULL) pmean.fromList(dataList, weights = NULL) pminWhich.fromList(dataList) } \arguments{ \item{prob}{ A single probability at which to calculate the quantile. See \code{\link{quantile}}.
} \item{dataList}{A list of numeric vectors or arrays, all of the same length and dimensions, over which to calculate ``parallel'' quantiles.} \item{weights}{Optional vector of the same length as \code{dataList}, giving the weights to be used in the weighted mean. If not given, unit weights will be used.} \item{\dots}{ Numeric arguments. All arguments must have the same dimensions. See details. } } \details{ Given numeric arguments, say x,y,z, of equal dimensions (and length), \code{pquantile} calculates and returns the quantile of the first components of x,y,z, then the second components, etc. Similarly, \code{pmedian} and \code{pmean} calculate the median and mean, respectively. The function \code{pquantile.fromList} is identical to \code{pquantile} except that the argument \code{dataList} replaces the ... in holding the numeric vectors over which to calculate the quantiles. } \value{ \item{pquantile, pquantile.fromList}{A vector or array containing quantiles.} \item{pmean, pmean.fromList}{A vector or array containing means. } \item{pmedian}{A vector or array containing medians.} \item{pminWhich.fromList}{A list with two components: \code{min} gives the minima, \code{which} gives the indices of the elements that are the minima.} Dimensions are copied from dimensions of the input arguments. If any of the input variables have \code{dimnames}, the first non-NULL dimnames are copied into the output. } \author{ Peter Langfelder and Steve Horvath } \seealso{ \code{\link{quantile}}, \code{\link{median}}, \code{\link{mean}} for the underlying statistics.
} \examples{
# Generate 2 simple matrices
a = matrix(c(1:12), 3, 4);
b = a + 1;
c = a + 2;
# Set the colnames on matrix a
colnames(a) = spaste("col_", c(1:4));
# Example use
pquantile(prob = 0.5, a, b, c)
pmean(a,b,c)
pmedian(a,b,c)
} \keyword{ misc }
% File WGCNA/man/selectFewestConsensusMissing.Rd
\name{selectFewestConsensusMissing} \alias{selectFewestConsensusMissing} \title{ Select columns with the lowest consensus number of missing data } \description{ Given a \code{\link{multiData}} structure, this function calculates the consensus number of present (non-missing) data for each variable (column) across the data sets, forms the consensus and for each group selects variables whose consensus proportion of present data is at least \code{minProportionPresent} (see usage below). } \usage{ selectFewestConsensusMissing( mdx, colID, group, minProportionPresent = 1, consensusQuantile = 0, verbose = 0, ...) } \arguments{ \item{mdx}{ A \code{\link{multiData}} structure. All sets must have the same columns. } \item{colID}{ Character vector of column identifiers. This must include all the column names from \code{mdx}, but can include other values as well. Its entries must be unique (no duplicates) and no missing values are permitted. } \item{group}{ Character vector whose components contain the group label (e.g. a character string) for each entry of \code{colID}. This vector must be of the same length as the vector \code{colID}. In gene expression applications, this vector could contain the gene symbol (or a co-expression module label). } \item{minProportionPresent}{A numeric value between 0 and 1 (logical values will be coerced to numeric). Denotes the minimum consensus fraction of present data in each column that will result in the column being retained.
} \item{consensusQuantile}{A number between 0 and 1 giving the quantile probability for consensus calculation. 0 means the minimum value (true consensus) will be used.} \item{verbose}{ Level of verbosity; 0 means silent, larger values will cause progress messages to be printed. } \item{...}{Other arguments that should be considered undocumented and subject to change.} } \details{ A 'consensus' of a vector (say 'x') is simply defined as the quantile with probability \code{consensusQuantile} of the vector x. This function calculates, for each variable in \code{mdx}, its proportion of present (i.e., non-NA and non-NaN) values in each of the data sets in \code{mdx}, and forms the consensus. Only variables whose consensus proportion of present data is at least \code{minProportionPresent} are retained. } \value{ A logical vector with one element per variable in \code{mdx}, giving \code{TRUE} for the retained variables. } \author{ Jeremy Miller and Peter Langfelder } \seealso{ \code{\link{multiData}} } \keyword{misc}
% File WGCNA/man/automaticNetworkScreeningGS.Rd
\name{automaticNetworkScreeningGS} \alias{automaticNetworkScreeningGS} \title{ One-step automatic network gene screening with external gene significance } \description{ This function performs gene screening based on external gene significance and the genes' network properties. } \usage{ automaticNetworkScreeningGS( datExpr, GS, power = 6, networkType = "unsigned", detectCutHeight = 0.995, minModuleSize = min(20, ncol(as.matrix(datExpr))/2), datME = NULL) } \arguments{ \item{datExpr}{ data frame containing the expression data, columns corresponding to genes and rows to samples } \item{GS}{ vector containing gene significance for all genes given in \code{datExpr} } \item{power}{ soft thresholding power used in network construction } \item{networkType}{ character string specifying network type.
Allowed values are (unique abbreviations of) \code{"unsigned"}, \code{"signed"}, \code{"signed hybrid"}. } \item{detectCutHeight}{ cut height of the gene hierarchical clustering dendrogram. See \code{cutreeDynamic} for details. } \item{minModuleSize}{ minimum module size to be used in module detection procedure. } \item{datME}{ optional specification of module eigengenes. A data frame whose columns are the module eigengenes. If given, module analysis will not be performed. } } \details{ Network screening is a method for identifying genes that have a high gene significance and are members of important modules at the same time. If \code{datME} is given, the function calls \code{\link{networkScreeningGS}} with the default parameters. If \code{datME} is not given, module eigengenes are first calculated using network analysis based on supplied parameters. } \value{ A list with the following components: \item{networkScreening}{a data frame containing results of the network screening procedure. See \code{\link{networkScreeningGS}} for more details.} \item{datME}{ calculated module eigengenes (or a copy of the input \code{datME}, if given).} \item{hubGeneSignificance}{ hub gene significance for all calculated modules. See \code{\link{hubGeneSignificance}}. } } \author{ Steve Horvath } \seealso{ \code{\link{networkScreening}}, \code{\link{hubGeneSignificance}}, \code{\link{networkScreeningGS}}, \code{\link[dynamicTreeCut]{cutreeDynamic}} } \keyword{ misc }
% File WGCNA/man/vectorTOM.Rd
\name{vectorTOM} \alias{vectorTOM} \title{ Topological overlap for a subset of the whole set of genes } \description{ This function calculates topological overlap of a small set of vectors with respect to a whole data set.
} \usage{ vectorTOM( datExpr, vect, subtract1 = FALSE, blockSize = 2000, corFnc = "cor", corOptions = "use = 'p'", networkType = "unsigned", power = 6, verbose = 1, indent = 0) } \arguments{ \item{datExpr}{ a data frame containing the expression data of the whole set, with rows corresponding to samples and columns to genes. } \item{vect}{ a single vector or a matrix-like object containing vectors whose topological overlap is to be calculated. } \item{subtract1}{ logical: should calculation be corrected for self-correlation? Set this to \code{TRUE} if \code{vect} contains a subset of \code{datExpr}. } \item{blockSize}{ maximum block size for correlation calculations. Only important if \code{vect} contains a large number of columns. } \item{corFnc}{ character string giving the correlation function to be used for the adjacency calculation. Recommended choices are \code{"cor"} and \code{"bicor"}, but other functions can be used as well. } \item{corOptions}{ character string giving further options to be passed to the correlation function. } \item{networkType}{ character string giving network type. Allowed values are (unique abbreviations of) \code{"unsigned"}, \code{"signed"}, \code{"signed hybrid"}. See \code{\link{adjacency}}. } \item{power}{ soft-thresholding power for network construction. } \item{verbose}{ integer level of verbosity. Zero means silent, higher values make the output progressively more and more verbose. } \item{indent}{ indentation for diagnostic messages. Zero means no indentation, each unit adds two spaces. } } \details{ Topological overlap can be viewed as the normalized count of shared neighbors encoded in an adjacency matrix. In this case, the adjacency matrix is calculated between the columns of \code{vect} and \code{datExpr} and the topological overlap of vectors in \code{vect} measures the number of shared neighbors in \code{datExpr} that vectors of \code{vect} share. 
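As a concrete illustration of the arithmetic, the following base-R sketch (made-up data; an illustration of the standard unsigned TOM formula of Zhang and Horvath, not the optimized WGCNA implementation) computes TOM[i,j] = (sum_k a[i,k]*a[k,j] + a[i,j]) / (min(k[i], k[j]) + 1 - a[i,j]) for a full adjacency matrix:

```r
# Base-R sketch of unsigned topological overlap (illustration only).
set.seed(42)
datExpr <- matrix(rnorm(50 * 6), nrow = 50)   # 50 samples, 6 genes (made up)
power <- 6
A <- abs(cor(datExpr))^power   # unsigned adjacency
diag(A) <- 0                   # exclude self-connections
k <- colSums(A)                # connectivity of each gene
num <- A %*% A + A             # shared-neighbor counts plus the direct link
den <- outer(k, k, pmin) + 1 - A
TOM <- num / den
diag(TOM) <- 1
```

The resulting matrix is symmetric with entries in [0, 1]; \code{vectorTOM} performs the analogous calculation between the columns of \code{vect} and those of \code{datExpr}.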
} \value{ A matrix of dimensions \code{n*n}, where \code{n} is the number of columns in \code{vect}. } \references{ Bin Zhang and Steve Horvath (2005) "A General Framework for Weighted Gene Co-Expression Network Analysis", Statistical Applications in Genetics and Molecular Biology: Vol. 4: No. 1, Article 17 } \author{ Peter Langfelder } \seealso{ \code{\link{TOMsimilarity}} for standard calculation of topological overlap. } \keyword{ misc }
% File WGCNA/man/metaAnalysis.Rd
\name{metaAnalysis} \alias{metaAnalysis} \title{ Meta-analysis of binary and continuous variables } \description{ This is a meta-analysis complement to functions \code{\link{standardScreeningBinaryTrait}} and \code{\link{standardScreeningNumericTrait}}. Given expression (or other) data from multiple independent data sets, and the corresponding clinical traits or outcomes, the function calculates multiple screening statistics in each data set, then calculates meta-analysis Z scores, p-values, and optionally q-values (False Discovery Rates). Three different ways of calculating the meta-analysis Z scores are provided: the Stouffer method, weighted Stouffer method, and using user-specified weights. } \usage{ metaAnalysis(multiExpr, multiTrait, binary = NULL, metaAnalysisWeights = NULL, corFnc = cor, corOptions = list(use = "p"), getQvalues = FALSE, getAreaUnderROC = FALSE, useRankPvalue = TRUE, rankPvalueOptions = list(), setNames = NULL, kruskalTest = FALSE, var.equal = FALSE, metaKruskal = kruskalTest, na.action = "na.exclude") } \arguments{ \item{multiExpr}{ Expression data (or other data) in multi-set format (see \code{\link{checkSets}}). A vector of lists; in each list there must be a component named \code{data} whose content is a matrix or dataframe or array of dimension 2.
} \item{multiTrait}{ Trait or outcome data in multi-set format. Only one trait is allowed; consequently, the \code{data} component of each component list can be either a vector or a data frame (matrix, array of dimension 2). } \item{binary}{ Logical: is the trait binary (\code{TRUE}) or continuous (\code{FALSE})? If not given, the decision will be made based on the content of \code{multiTrait}. } \item{metaAnalysisWeights}{ Optional specification of set weights for meta-analysis. If given, must be a vector of non-negative weights, one entry for each set contained in \code{multiExpr}. } \item{corFnc}{ Correlation function to be used for screening. Should be either the default \code{\link{cor}} or its robust alternative, \code{\link{bicor}}. } \item{corOptions}{ A named list giving extra arguments to be passed to the correlation function. } \item{getQvalues}{ Logical: should q-values (FDRs) be calculated? } \item{getAreaUnderROC}{ Logical: should area under the ROC be calculated? Caution: enabling the calculation will slow the function down considerably for large data sets. } \item{useRankPvalue}{ Logical: should the \code{\link{rankPvalue}} function be used to obtain alternative meta-analysis statistics?} \item{rankPvalueOptions}{ Additional options for function \code{\link{rankPvalue}}. These include \code{na.last} (default \code{"keep"}), \code{ties.method} (default \code{"average"}), \code{calculateQvalue} (default copied from input \code{getQvalues}), and \code{pValueMethod} (default \code{"all"}). See the help file for \code{\link{rankPvalue}} for full details.} \item{setNames}{ Optional specification of set names (labels). These are used to label the corresponding components of the output. If not given, will be taken from the \code{names} attribute of \code{multiExpr}. If \code{names(multiExpr)} is \code{NULL}, generic names of the form \code{Set_1, Set_2, ...} will be used.
} \item{kruskalTest}{ Logical: should the Kruskal test be performed in addition to t-test? Only applies to binary traits. } \item{var.equal}{ Logical: should the t-test assume equal variance in both groups? If \code{TRUE}, the function will warn the user that the returned test statistics will be different from the results of the standard \code{\link[stats]{t.test}} function. } \item{metaKruskal}{ Logical: should the meta-analysis be based on the results of Kruskal test (\code{TRUE}) or Student t-test (\code{FALSE})? } \item{na.action}{ Specification of what should happen to missing values in \code{\link[stats]{t.test}}. } } \details{ The Stouffer method combines Z statistics by simply taking a mean of input Z statistics and multiplying it by \code{sqrt(n)}, where \code{n} is the number of input data sets. We refer to this method as \code{Stouffer.equalWeights}. In general, a better (i.e., more powerful) method of combining Z statistics is to weight them by the number of degrees of freedom, which approximately equals the number of samples in each data set. We refer to this method as \code{weightedStouffer}. Finally, the user can also specify custom weights, for example if a data set needs to be downweighted due to technical concerns; however, specifying one's own weights by hand should be done carefully to avoid possible selection biases.
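The weighting schemes can be written out explicitly. The following base-R sketch (with made-up per-set Z statistics and sample sizes, not output of \code{metaAnalysis} itself) shows the arithmetic behind the equal-weight, root-DoF-weighted, and user-weighted Stouffer statistics:

```r
# Made-up Z statistics and sample sizes for three hypothetical data sets.
Z <- c(2.1, 1.5, 3.0)
nSamples <- c(50, 80, 120)

# General weighted Stouffer statistic: sum(w*Z) / sqrt(sum(w^2))
stoufferZ <- function(Z, w) sum(w * Z) / sqrt(sum(w^2))

Z.equalWeights   <- stoufferZ(Z, rep(1, length(Z)))  # same as mean(Z) * sqrt(nSets)
Z.RootDoFWeights <- stoufferZ(Z, sqrt(nSamples))
Z.userWeights    <- stoufferZ(Z, c(1, 1, 0.5))       # arbitrary user weights

# Two-sided meta-analysis p-value for, e.g., the equal-weight statistic:
p.equalWeights <- 2 * pnorm(-abs(Z.equalWeights))
```

With unit weights the general formula reduces to \code{sum(Z)/sqrt(n)}, which equals the mean of the Z statistics multiplied by \code{sqrt(n)} as described above.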
} \value{ Data frame with the following components: \item{ID}{ Identifier of the input genes (or other variables) } \item{Z.equalWeights}{ Meta-analysis Z statistics obtained using Stouffer's method with equal weights} \item{p.equalWeights}{ p-values corresponding to \code{Z.equalWeights} } \item{q.equalWeights}{ q-values corresponding to \code{p.equalWeights}, only present if \code{getQvalues} is \code{TRUE}.} \item{Z.RootDoFWeights}{ Meta-analysis Z statistics obtained using Stouffer's method with weights given by the square root of the number of (non-missing) samples in each data set} \item{p.RootDoFWeights}{ p-values corresponding to \code{Z.RootDoFWeights} } \item{q.RootDoFWeights}{ q-values corresponding to \code{p.RootDoFWeights}, only present if \code{getQvalues} is \code{TRUE}. } \item{Z.DoFWeights}{ Meta-analysis Z statistics obtained using Stouffer's method with weights given by the number of (non-missing) samples in each data set} \item{p.DoFWeights}{ p-values corresponding to \code{Z.DoFWeights} } \item{q.DoFWeights}{ q-values corresponding to \code{p.DoFWeights}, only present if \code{getQvalues} is \code{TRUE}. } \item{Z.userWeights}{ Meta-analysis Z statistics obtained using Stouffer's method with user-defined weights. Only present if input \code{metaAnalysisWeights} is given.} \item{p.userWeights}{ p-values corresponding to \code{Z.userWeights} } \item{q.userWeights}{ q-values corresponding to \code{p.userWeights}, only present if \code{getQvalues} is \code{TRUE}. } The next set of columns is present only if input \code{useRankPvalue} is \code{TRUE} and contain the output of the function \code{\link{rankPvalue}} with the same column weights as the above meta-analysis. Depending on the input options \code{calculateQvalue} and \code{pValueMethod} in \code{rankPvalueOptions}, some columns may be missing. The following columns are calculated using equal weights for each data set.
\item{pValueExtremeRank.equalWeights}{This is the minimum between pValueLowRank and pValueHighRank, i.e. min(pValueLow, pValueHigh)} \item{pValueLowRank.equalWeights}{Asymptotic p-value for observing a consistently low value across the columns of datS based on the rank method.} \item{pValueHighRank.equalWeights}{Asymptotic p-value for observing a consistently high value across the columns of datS based on the rank method.} \item{pValueExtremeScale.equalWeights}{This is the minimum between pValueLowScale and pValueHighScale, i.e. min(pValueLow, pValueHigh)} \item{pValueLowScale.equalWeights}{Asymptotic p-value for observing a consistently low value across the columns of datS based on the Scale method.} \item{pValueHighScale.equalWeights}{Asymptotic p-value for observing a consistently high value across the columns of datS based on the Scale method.} \item{qValueExtremeRank.equalWeights}{local false discovery rate (q-value) corresponding to the p-value pValueExtremeRank} \item{qValueLowRank.equalWeights}{local false discovery rate (q-value) corresponding to the p-value pValueLowRank} \item{qValueHighRank.equalWeights}{local false discovery rate (q-value) corresponding to the p-value pValueHighRank} \item{qValueExtremeScale.equalWeights}{local false discovery rate (q-value) corresponding to the p-value pValueExtremeScale} \item{qValueLowScale.equalWeights}{local false discovery rate (q-value) corresponding to the p-value pValueLowScale} \item{qValueHighScale.equalWeights}{local false discovery rate (q-value) corresponding to the p-value pValueHighScale} \item{...}{Analogous columns calculated by weighting each input set using the square root of the number of samples, number of samples, and user weights (if given).
The corresponding column names carry the suffixes \code{RootDoFWeights}, \code{DoFWeights}, \code{userWeights}.} The following columns contain results returned by \code{\link{standardScreeningBinaryTrait}} or \code{\link{standardScreeningNumericTrait}} (depending on whether the input trait is binary or continuous). For binary traits, the following information is returned for each set: \item{corPearson.Set_1, corPearson.Set_2,...}{Pearson correlation with a binary numeric version of the input variable. The numeric variable equals 1 for level 1 and 2 for level 2. The levels are given by levels(factor(y)).} \item{t.Student.Set_1, t.Student.Set_2, ...}{Student t-test statistic} \item{pvalueStudent.Set_1, pvalueStudent.Set_2, ...}{two-sided Student t-test p-value.} \item{qvalueStudent.Set_1, qvalueStudent.Set_2, ...}{(if input \code{qValues==TRUE}) q-value (local false discovery rate) based on the Student T-test p-value (Storey et al 2004).} \item{foldChange.Set_1, foldChange.Set_2, ...}{a (signed) ratio of mean values. If the mean in the first group (corresponding to level 1) is larger than that of the second group, it equals meanFirstGroup/meanSecondGroup. But if the mean of the second group is larger than that of the first group it equals -meanSecondGroup/meanFirstGroup (notice the minus sign).} \item{meanFirstGroup.Set_1, meanSecondGroup.Set_1, ...}{means of columns in input \code{datExpr} across samples in the first and second group, respectively.} \item{SE.FirstGroup.Set_1, SE.FirstGroup.Set_2, ...}{standard errors of columns in input \code{datExpr} across samples in the first group. Recall that SE(x)=sqrt(var(x)/n) where n is the number of non-missing values of x. } \item{SE.SecondGroup.Set_1, SE.SecondGroup.Set_2, ...}{standard errors of columns in input \code{datExpr} across samples in the second group.} \item{areaUnderROC.Set_1, areaUnderROC.Set_2, ...}{the area under the ROC, also known as the concordance index or C.index. This is a measure of discriminatory power.
The measure lies between 0 and 1 where 0.5 indicates no discriminatory power. 0 indicates that the "opposite" predictor has perfect discriminatory power. To compute it we use the function \link[Hmisc]{rcorr.cens} with \code{outx=TRUE} (from Frank Harrell's package Hmisc).} \item{nPresentSamples.Set_1, nPresentSamples.Set_2, ...}{number of samples with finite measurements for each gene.} If input \code{kruskalTest} is \code{TRUE}, the following columns further summarize results of Kruskal-Wallis test: \item{stat.Kruskal.Set_1, stat.Kruskal.Set_2, ...}{Kruskal-Wallis test statistic.} \item{stat.Kruskal.signed.Set_1, stat.Kruskal.signed.Set_2,...}{(Warning: experimental) Kruskal-Wallis test statistic including a sign that indicates whether the average rank is higher in second group (positive) or first group (negative). } \item{pvaluekruskal.Set_1, pvaluekruskal.Set_2, ...}{Kruskal-Wallis test p-value.} \item{qkruskal.Set_1, qkruskal.Set_2, ...}{q-values corresponding to the Kruskal-Wallis test p-value (if input \code{qValues==TRUE}).} \item{Z.Set1, Z.Set2, ...}{Z statistics obtained from \code{pvalueStudent.Set1, pvalueStudent.Set2, ...} or from \code{pvaluekruskal.Set1, pvaluekruskal.Set2, ...}, depending on input \code{metaKruskal}.} For numeric traits, the following columns are returned: \item{cor.Set_1, cor.Set_2, ...}{correlations of all genes with the trait} \item{Z.Set1, Z.Set2, ...}{Fisher Z statistics corresponding to the correlations} \item{pvalueStudent.Set_1, pvalueStudent.Set_2, ...}{Student p-values of the correlations} \item{qvalueStudent.Set_1, qvalueStudent.Set_2, ...}{(if input \code{qValues==TRUE}) q-values of the correlations calculated from the p-values} \item{AreaUnderROC.Set_1, AreaUnderROC.Set_2, ...}{area under the ROC} \item{nPresentSamples.Set_1, nPresentSamples.Set_2, ...}{number of samples present for the calculation of each association. } } \references{ For Stouffer's method, see Stouffer, S.A., Suchman, E.A., DeVinney, L.C., Star, S.A.
& Williams, R.M. Jr. 1949. The American Soldier, Vol. 1: Adjustment during Army Life. Princeton University Press, Princeton. A discussion of weighted Stouffer's method can be found in Whitlock, M. C., Combining probability from independent tests: the weighted Z-method is superior to Fisher's approach, Journal of Evolutionary Biology 18:5 1368 (2005) } \author{ Peter Langfelder } \seealso{ \code{\link{standardScreeningBinaryTrait}}, \code{\link{standardScreeningNumericTrait}} for screening functions for individual data sets } \keyword{misc}
% File WGCNA/man/GOenrichmentAnalysis.Rd
\name{GOenrichmentAnalysis} \alias{GOenrichmentAnalysis} \title{ Calculation of GO enrichment (experimental)} \description{ NOTE: GOenrichmentAnalysis is deprecated. Please use function enrichmentAnalysis from R package anRichment, available from https://labs.genetics.ucla.edu/horvath/htdocs/CoexpressionNetwork/GeneAnnotation/ WARNING: This function should be considered experimental. The arguments and resulting values (in particular, the enrichment p-values) are not yet finalized and may change in the future. The function should only be used to get a quick and rough overview of GO enrichment in the modules in a data set; for a publication-quality analysis, please use an established tool. Using Bioconductor's annotation packages, this function calculates enrichments and returns terms with best enrichment values. } \usage{ GOenrichmentAnalysis(labels, entrezCodes, yeastORFs = NULL, organism = "human", ontologies = c("BP", "CC", "MF"), evidence = "all", includeOffspring = TRUE, backgroundType = "givenInGO", removeDuplicates = TRUE, leaveOutLabel = NULL, nBestP = 10, pCut = NULL, nBiggest = 0, getTermDetails = TRUE, verbose = 2, indent = 0) } \arguments{ \item{labels}{ cluster (module, group) labels of genes to be analyzed. Either a single vector, or a matrix.
In the matrix case, each column will be analyzed separately; analyzing a collection of module assignments in one function call will be faster than calling the function several times. For each row, the labels in all columns must correspond to the same gene specified in \code{entrezCodes}. } \item{entrezCodes}{ Entrez (a.k.a. LocusLink) codes of the genes whose labels are given in \code{labels}. A single vector; the i-th entry corresponds to row i of the matrix \code{labels} (or to the i-th entry if \code{labels} is a vector). } \item{yeastORFs}{ if \code{organism=="yeast"} (below), this argument can be used to input yeast open reading frame (ORF) identifiers instead of Entrez codes. Since the GO mappings for yeast are provided in terms of ORF identifiers, this may lead to a more accurate GO enrichment analysis. If given, the argument \code{entrezCodes} is ignored. } \item{organism}{ character string specifying the organism for which to perform the analysis. Recognized values are (unique abbreviations of) \code{"human", "mouse", "rat", "malaria", "yeast", "fly", "bovine", "worm", "canine", "zebrafish", "chicken"}. } \item{ontologies}{ vector of character strings specifying GO ontologies to be included in the analysis. Can be any subset of \code{"BP", "CC", "MF"}. The result will contain the terms with highest enrichment in each specified category, plus a separate list of terms with best enrichment in all ontologies combined. } \item{evidence}{ vector of character strings specifying admissible evidence for each gene in its specific term, or "all" for all evidence codes. See Details or http://www.geneontology.org/GO.evidence.shtml for available evidence codes and their meaning.} \item{includeOffspring}{ logical: should genes belonging to the offspring of each term be included in the term? As a default, only genes belonging directly to each term are associated with the term.
Note that the calculation of enrichments with offspring included can be quite slow for large data sets.} \item{backgroundType}{specification of the background to use. Recognized values are (unique abbreviations of) \code{"allGiven", "allInGO", "givenInGO"}, meaning that the function will take all genes given in \code{labels} as background (\code{"allGiven"}), all genes present in any of the GO categories (\code{"allInGO"}), or the intersection of given genes and genes present in GO (\code{"givenInGO"}). The default is recommended for genome-wide enrichment studies. } \item{removeDuplicates}{logical: should duplicate entries in \code{entrezCodes} be removed? If \code{TRUE}, only the first occurrence of each unique Entrez code will be kept. The cluster labels \code{labels} will be adjusted accordingly.} \item{leaveOutLabel}{optional specification of module labels for which enrichment calculation is not desired. Can be a single label or a vector of labels to be ignored. However, if in any of the sets no labels are left to calculate enrichment of, the function will stop with an error.} \item{nBestP}{ specifies the number of terms with highest enrichment whose detailed information will be returned. } \item{pCut}{ alternative specification of terms to be returned: all terms whose enrichment p-value is more significant than \code{pCut} will be returned. If \code{pCut} is given, \code{nBestP} is ignored. } \item{nBiggest}{ in addition to returning terms with highest enrichment, terms that contain most of the genes in each cluster can be returned by specifying the number of biggest terms per cluster to be returned. This may be useful for development and testing purposes. } \item{getTermDetails}{ logical indicating whether detailed information on the most enriched terms should be returned. } \item{verbose}{ integer specifying the verbosity of the function.
Zero means silent; positive values will cause the function to print progress reports.} \item{indent}{ integer specifying indentation of the diagnostic messages. Zero means no indentation, each unit adds two spaces.} } \details{ This function is basically a wrapper for the annotation packages available from Bioconductor. It requires the packages GO.db, AnnotationDbi, and org.xx.eg.db, where xx is the code corresponding to the organism that the user wishes to analyze (e.g., Hs for human Homo sapiens, Mm for mouse Mus musculus, etc.). For each cluster specified in the input, the function calculates all enrichments in the specified ontologies, and collects information about the terms with highest enrichment. The enrichment p-value is calculated using the Fisher exact test. As background we use all of the supplied genes that are present in at least one term in GO (in any of the ontologies). For best results, the newest annotation libraries should be used. Because of the way Bioconductor is set up, to get the newest annotation libraries you may have to use the current version of R.
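The enrichment p-value mentioned above is a one-sided Fisher exact test on the 2x2 overlap table; a minimal base-R sketch with made-up counts (illustrative only, not this function's implementation) is:

```r
# Hypothetical counts: a module of 40 genes, 15 of which fall in a GO term
# of 100 genes, against a background of 5000 genes present in GO.
nBackground <- 5000
nInTerm     <- 100
nInModule   <- 40
nOverlap    <- 15

# 2x2 contingency table of term membership vs. module membership
tab <- matrix(c(nOverlap,
                nInTerm - nOverlap,
                nInModule - nOverlap,
                nBackground - nInTerm - nInModule + nOverlap),
              nrow = 2)

# One-sided test for over-representation of the term in the module
p <- fisher.test(tab, alternative = "greater")$p.value
```

With an expected overlap of 40*100/5000 = 0.8 genes, an observed overlap of 15 yields a very small p-value.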
According to http://www.geneontology.org/GO.evidence.shtml, the following codes are used by GO: \preformatted{
Experimental Evidence Codes
  EXP: Inferred from Experiment
  IDA: Inferred from Direct Assay
  IPI: Inferred from Physical Interaction
  IMP: Inferred from Mutant Phenotype
  IGI: Inferred from Genetic Interaction
  IEP: Inferred from Expression Pattern
Computational Analysis Evidence Codes
  ISS: Inferred from Sequence or Structural Similarity
  ISO: Inferred from Sequence Orthology
  ISA: Inferred from Sequence Alignment
  ISM: Inferred from Sequence Model
  IGC: Inferred from Genomic Context
  IBA: Inferred from Biological aspect of Ancestor
  IBD: Inferred from Biological aspect of Descendant
  IKR: Inferred from Key Residues
  IRD: Inferred from Rapid Divergence
  RCA: Inferred from Reviewed Computational Analysis
Author Statement Evidence Codes
  TAS: Traceable Author Statement
  NAS: Non-traceable Author Statement
Curator Statement Evidence Codes
  IC: Inferred by Curator
  ND: No biological Data available
Automatically-assigned Evidence Codes
  IEA: Inferred from Electronic Annotation
Obsolete Evidence Codes
  NR: Not Recorded
} } \value{ A list with the following components: \item{keptForAnalysis }{ logical vector with one entry per given gene. \code{TRUE} if the entry was used for enrichment analysis. Depending on the setting of \code{removeDuplicates} above, only a single entry per gene may be used. } \item{inGO }{ logical vector with one entry per given gene. \code{TRUE} if the gene belongs to any GO term, \code{FALSE} otherwise. Also \code{FALSE} for genes not used for the analysis because of duplication. } If input \code{labels} contained only one vector of labels, the return value also contains the following components: \item{countsInTerms }{ a matrix whose rows correspond to the given clusters, and whose columns correspond to GO terms, containing the number of genes in the intersection of the corresponding module and GO term.
Row and column names are set appropriately.} \item{enrichmentP}{a matrix whose rows correspond to the given clusters, and whose columns correspond to GO terms, containing enrichment p-values of each term in each cluster. Row and column names are set appropriately.} \item{bestPTerms}{a list of lists with each inner list corresponding to an ontology given in \code{ontologies} in input, plus one component corresponding to all given ontologies combined. The name of each component is set appropriately. Each inner list contains two components: \code{enrichment} is a data frame containing the most highly enriched terms for each module; and \code{forModule} is a list of lists with one inner list per module, appropriately named. Each inner list contains one component per term. If input \code{getTermDetails} is \code{TRUE}, this component is yet another list and contains components \code{termName} (term name), \code{enrichmentP} (enrichment P value), \code{termDefinition} (GO term definition), \code{termOntology} (GO term ontology), \code{geneCodes} (Entrez codes of module genes in this term), \code{genePositions} (indices of the genes listed in \code{geneCodes} within the given \code{labels}). Thus, to obtain information on, say, the second term of the 5th module in ontology BP, one can look at the appropriate row of \code{bestPTerms$BP$enrichment}, or one can reference \code{bestPTerms$BP$forModule[[5]][[2]]}. The author of the function apologizes for any confusion this structure of the output may cause. } \item{biggestTerms}{a list of the same format as \code{bestPTerms}, containing information about the terms with most genes in the module for each supplied ontology. } If input \code{labels} contained more than one vector, instead of the above components the return value contains a list named \code{setResults} that has one component per given set; each component is a list containing the above components for the corresponding set.
} \author{ Peter Langfelder } \seealso{ Bioconductor's annotation packages such as GO.db and organism-specific annotation packages such as org.Hs.eg.db. } \keyword{ misc }% __ONLY ONE__ keyword per line WGCNA/man/mtd.setColnames.Rd0000644000176200001440000000323614012015545015214 0ustar liggesusers\name{mtd.setColnames} \alias{mtd.setColnames} \alias{mtd.colnames} \title{ Get and set column names in a multiData structure. } \description{ Get and set column names on each \code{data} component in a multiData structure. } \usage{ mtd.colnames(multiData) mtd.setColnames(multiData, colnames) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{multiData}{ A multiData structure } \item{colnames}{ A vector (coercible to character) of column names. } } \details{ A multiData structure is intended to store (the same type of) data for multiple, possibly independent, realizations (for example, expression data for several independent experiments). It is a list where each component corresponds to an (independent) data set. Each component is in turn a list that can hold various types of information but must have a \code{data} component. In a "strict" multiData structure, the \code{data} components are required to each be a matrix or a data frame and have the same number of columns. In a "loose" multiData structure, the \code{data} components can be anything (but for most purposes should be of comparable type and content). The \code{mtd.colnames} and \code{mtd.setColnames} functions assume (and check for) a "strict" multiData structure. } \value{ \code{mtd.colnames} returns the vector of column names of the \code{data} component. The function assumes the column names in all sets are the same. \code{mtd.setColnames} returns the multiData structure with the column names set in all \code{data} components. } \author{ Peter Langfelder } \seealso{ \code{\link{multiData}} to create a multiData structure.
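The strict multiData structure and its column-name operations can be mimicked in plain base R; the following is an illustrative sketch (with made-up set and column names), not the package implementation:

```r
# Two "sets", each a list with a data component having the same columns
multiData <- list(
  set1 = list(data = data.frame(g1 = rnorm(5), g2 = rnorm(5))),
  set2 = list(data = data.frame(g1 = rnorm(3), g2 = rnorm(3)))
)

# mtd.colnames-like: column names of the data component (assumed equal in all sets)
cols <- colnames(multiData[[1]]$data)

# mtd.setColnames-like: set the same column names in every data component
multiData <- lapply(multiData, function(s) {
  colnames(s$data) <- c("geneA", "geneB")
  s
})
```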
} \keyword{ misc }% __ONLY ONE__ keyword per line WGCNA/man/standardColors.Rd0000644000176200001440000000127114012015545015133 0ustar liggesusers\name{standardColors} \alias{standardColors} \title{Colors this library uses for labeling modules.} \description{ Returns the vector of color names in the order they are assigned by other functions in this library. } \usage{ standardColors(n = NULL) } \arguments{ \item{n}{Number of colors requested. If \code{NULL}, all (approx. 450) colors will be returned. Any other invalid value, such as a number less than one or greater than the maximum (\code{length(standardColors())}), will trigger an error. } } \value{ A vector of character color names of the requested length. } \author{ Peter Langfelder, \email{Peter.Langfelder@gmail.com} } \examples{ standardColors(10); } \keyword{color} \keyword{misc} WGCNA/man/plotModuleSignificance.Rd0000644000176200001440000000335414012015545016604 0ustar liggesusers\name{plotModuleSignificance} \alias{plotModuleSignificance} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Barplot of module significance } \description{ Plot a barplot of gene significance. } \usage{ plotModuleSignificance( geneSignificance, colors, boxplot = FALSE, main = "Gene significance across modules,", ylab = "Gene Significance", ...) } \arguments{ \item{geneSignificance}{ a numeric vector giving gene significances. } \item{colors}{ a character vector specifying module assignment for the genes whose significance is given in \code{geneSignificance }. The modules should be labeled by colors. } \item{boxplot}{ logical: should a boxplot be produced instead of a barplot? } \item{main}{ main title for the plot. } \item{ylab}{ y axis label for the plot. } \item{\dots}{ other graphical parameters to \code{\link{plot}}. } } \details{ Given individual gene significances and their module assignment, the function calculates the module significance for each module as the average gene significance of the genes within the module.
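The module significance described here is simply the mean gene significance per module, which can be sketched in base R with simulated significances and color labels (purely illustrative):

```r
set.seed(1)
geneSignificance <- runif(60)                              # simulated significances
colors <- rep(c("blue", "brown", "turquoise"), each = 20)  # simulated module labels

# Module significance = average gene significance within each module
moduleSignificance <- tapply(geneSignificance, colors, mean)

# The plot produced by plotModuleSignificance is essentially
# barplot(moduleSignificance, col = names(moduleSignificance))
```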
The result is plotted in a barplot or boxplot form. Each bar or box is labeled by the corresponding module color. } \value{ None. } \references{ Bin Zhang and Steve Horvath (2005) "A General Framework for Weighted Gene Co-Expression Network Analysis", Statistical Applications in Genetics and Molecular Biology: Vol. 4: No. 1, Article 17 Dong J, Horvath S (2007) Understanding Network Concepts in Modules, BMC Systems Biology 2007, 1:24 } \author{ Steve Horvath } \seealso{ \code{\link{barplot}}, \code{\link{boxplot}} } \keyword{ hplot }% __ONLY ONE__ keyword per line \keyword{ misc }% __ONLY ONE__ keyword per line WGCNA/man/newBlockwiseData.Rd0000644000176200001440000000771714012015545015412 0ustar liggesusers\name{newBlockwiseData} \alias{newBlockwiseData} \alias{BlockwiseData} \alias{mergeBlockwiseData} \alias{addBlockToBlockwiseData} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Create, merge and expand BlockwiseData objects } \description{ These functions create, merge and expand BlockwiseData objects for holding in-memory or disk-backed blockwise data. Blockwise here means that the data is too large to be loaded or processed in one piece and is therefore split into blocks that can be handled one by one in a divide-and-conquer manner. } \usage{ newBlockwiseData( data, external = FALSE, fileNames = NULL, doSave = external, recordAttributes = TRUE, metaData = list()) mergeBlockwiseData(...) addBlockToBlockwiseData( bwData, blockData, external = bwData$external, blockFile = NULL, doSave = external, recordAttributes = !is.null(bwData$attributes), metaData = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{data}{ A list in which each component carries the data of a single block. } \item{external}{ Logical: should the data be disk-backed (\code{TRUE}) or in-memory (\code{FALSE})? 
} \item{fileNames}{ When \code{external} is \code{TRUE}, this argument must be a character vector of the same length as \code{data}, giving the file names for the data to be saved to, or where the data is already located. } \item{doSave}{ Logical: should data be saved? If this is \code{FALSE}, it is the user's responsibility to ensure the files supplied in \code{fileNames} already exist and contain the expected data. } \item{recordAttributes}{ Logical: should \code{attributes} of the given data be recorded within the object? } \item{metaData}{ A list giving any additional meta-data for \code{data} that should be attached to the object. } \item{bwData}{An existing \code{BlockwiseData} object.} \item{blockData}{A vector, matrix or array carrying the data of a single block.} \item{blockFile}{ File name where data contained in \code{blockData} should be saved. } \item{...}{One or more objects of class \code{BlockwiseData}.} } \details{ Several functions in this package use the concept of blockwise, or "divide-and-conquer", analysis. The BlockwiseData class is meant to hold the blockwise data, or all necessary information about blockwise data that is saved in disk files. The data can be stored in disk files (one file per block) or in-memory. In-memory storage is provided so that the same code can be used for both smaller (single-block) data where disk storage could slow down operations as well as larger data sets where disk storage and block-by-block analysis are necessary.
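The disk-backed, block-by-block idea can be illustrated with plain base R; this is a generic sketch of the divide-and-conquer pattern (using temporary `.rds` files), not the BlockwiseData implementation itself:

```r
# Split a large vector into blocks and store each block in its own file
x <- rnorm(1e4)
blocks <- split(x, ceiling(seq_along(x) / 2500))   # 4 blocks of 2500 values

files <- vapply(seq_along(blocks), function(b) {
  f <- tempfile(fileext = ".rds")
  saveRDS(blocks[[b]], f)
  f
}, character(1))

# Process one block at a time instead of loading everything at once
blockSums <- vapply(files, function(f) sum(readRDS(f)), numeric(1))
total <- sum(blockSums)
```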
} \value{ All three functions return a list with the class set to \code{"BlockwiseData"}, containing the following components: \item{external}{Copy of the input argument \code{external}.} \item{data}{If \code{external} is \code{TRUE}, an empty list, otherwise a copy of the input \code{data}.} \item{fileNames}{Copy of the input argument \code{fileNames}.} \item{lengths}{A vector of lengths (results of \code{\link{length}}) of elements of \code{data}.} \item{attributes}{If input \code{recordAttributes} is \code{TRUE}, a list with one component per block (component of \code{data}); each component is in turn a list of attributes of that component of \code{data}.} \item{metaData}{A copy of the input \code{metaData}.} } \author{ Peter Langfelder } \section{Warning}{The definition of \code{BlockwiseData} should be considered experimental and may change in the future.} \seealso{ Other functions on \code{BlockwiseData}: \code{\link{BD.getData}} for retrieving data; \code{\link{BD.actualFileNames}} for retrieving file names of files containing data; \code{\link{BD.nBlocks}} for retrieving the number of blocks; \code{\link{BD.blockLengths}} for retrieving block lengths; \code{\link{BD.getMetaData}} for retrieving metadata; \code{\link{BD.checkAndDeleteFiles}} for deleting files of an unneeded object. } \keyword{misc} WGCNA/man/moduleEigengenes.Rd0000644000176200001440000002251314012015545015432 0ustar liggesusers\name{moduleEigengenes} \alias{moduleEigengenes} \title{Calculate module eigengenes.} \description{ Calculates module eigengenes (1st principal component) of modules in a given single dataset.
} \usage{ moduleEigengenes(expr, colors, impute = TRUE, nPC = 1, align = "along average", excludeGrey = FALSE, grey = if (is.numeric(colors)) 0 else "grey", subHubs = TRUE, trapErrors = FALSE, returnValidOnly = trapErrors, softPower = 6, scale = TRUE, verbose = 0, indent = 0) } \arguments{ \item{expr}{Expression data for a single set in the form of a data frame where rows are samples and columns are genes (probes).} \item{colors}{A vector of the same length as the number of probes in \code{expr}, giving module color for all probes (genes). Color \code{"grey"} is reserved for unassigned genes. } \item{impute}{If \code{TRUE}, expression data will be checked for the presence of \code{NA} entries and if the latter are present, numerical data will be imputed, using function \code{impute.knn} and probes from the same module as the missing datum. The function \code{impute.knn} uses a fixed random seed giving repeatable results.} \item{nPC}{Number of principal components and variance explained entries to be calculated. Note that only the first principal component is returned; the rest are used only for the calculation of proportion of variance explained. The number of returned variance explained entries is currently \code{min(nPC, 10)}. If given \code{nPC} is greater than 10, a warning is issued.} \item{align}{Controls whether eigengenes, whose orientation is undetermined, should be aligned with average expression (\code{align = "along average"}, the default) or left as they are (\code{align = ""}). Any other value will trigger an error.} \item{excludeGrey}{Should the improper module consisting of 'grey' genes be excluded from the eigengenes?} \item{grey}{Value of \code{colors} designating the improper module. Note that if \code{colors} is a factor of numbers, the default value will be incorrect.} \item{subHubs}{Controls whether hub genes should be substituted for missing eigengenes. 
If \code{TRUE}, each missing eigengene (i.e., eigengene whose calculation failed and the error was trapped) will be replaced by a weighted average of the most connected hub genes in the corresponding module. If this calculation fails, or if \code{subHubs==FALSE}, the value of \code{trapErrors} will determine whether the offending module will be removed or whether the function will issue an error and stop.} \item{trapErrors}{Controls handling of errors that may arise when there are too many \code{NA} entries in expression data. If \code{TRUE}, errors from calling these functions will be trapped without abnormal exit. If \code{FALSE}, errors will cause the function to stop. Note, however, that \code{subHubs} takes precedence in the sense that if \code{subHubs==TRUE} and \code{trapErrors==FALSE}, an error will be issued only if both the principal component and the hubgene calculations have failed. } \item{returnValidOnly}{logical; controls whether the returned data frame of module eigengenes contains columns corresponding only to modules whose eigengenes or hub genes could be calculated correctly (\code{TRUE}), or whether the data frame should have columns for each of the input color labels (\code{FALSE}).} \item{softPower}{The power used in soft-thresholding the adjacency matrix. Only used when the hubgene approximation is necessary because the principal component calculation failed. It must be non-negative. The default value should only be changed if there is a clear indication that it leads to incorrect results.} \item{scale}{logical; can be used to turn off scaling of the expression data before calculating the singular value decomposition. The scaling should only be turned off if the data has been scaled previously, in which case the function can run a bit faster. Note however that the function first imputes, then scales the expression data in each module.
If the expression data contain missing values, scaling outside of the function and letting the function impute missing data may lead to slightly different results than if the data is scaled within the function.} \item{verbose}{Controls verbosity of printed progress messages. 0 means silent, up to (about) 5 the verbosity gradually increases.} \item{indent}{A single non-negative integer controlling indentation of printed messages. 0 means no indentation, each unit above that adds two spaces. } } \details{ Module eigengene is defined as the first principal component of the expression matrix of the corresponding module. The calculation may fail if the expression data has too many missing entries. Handling of such errors is controlled by the arguments \code{subHubs} and \code{trapErrors}. If \code{subHubs==TRUE}, errors in principal component calculation will be trapped and a substitute calculation of hubgenes will be attempted. If this fails as well, behaviour depends on \code{trapErrors}: if \code{TRUE}, the offending module will be ignored and the return value will allow the user to remove the module from further analysis; if \code{FALSE}, the function will stop. From the user's point of view, setting \code{trapErrors=FALSE} ensures that if the function returns normally, there will be a valid eigengene (principal component or hubgene) for each of the input colors. If the user sets \code{trapErrors=TRUE}, all calculational (but not input) errors will be trapped, but the user should check the output (see below) to make sure all modules have a valid returned eigengene. While the principal component calculation can fail even on relatively sound data (it does not take all that many "well-placed" \code{NA} to torpedo the calculation), it takes many more irregularities in the data for the hubgene calculation to fail. In fact such a failure signals there likely is something seriously wrong with the data.
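For a single module with complete data, the eigengene is the first left-singular vector of the scaled module expression matrix; a minimal base-R sketch (ignoring the imputation, alignment, and error handling that moduleEigengenes performs) is:

```r
set.seed(2)
exprModule <- matrix(rnorm(30 * 10), nrow = 30)   # 30 samples, 10 module genes

# Scale genes (columns), then take the first left singular vector
scaled <- scale(exprModule)
sv <- svd(scaled, nu = 1, nv = 0)
eigengene <- sv$u[, 1]

# Proportion of module variance explained by this component
varExplained <- sv$d[1]^2 / sum(sv$d^2)
```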
} \value{ A list with the following components: \item{eigengenes}{Module eigengenes in a dataframe, with each column corresponding to one eigengene. The columns are named by the corresponding color with an \code{"ME"} prepended, e.g., \code{MEturquoise} etc. If \code{returnValidOnly==FALSE}, module eigengenes whose calculation failed have all components set to \code{NA}.} \item{averageExpr}{If \code{align == "along average"}, a dataframe containing average normalized expression in each module. The columns are named by the corresponding color with an \code{"AE"} prepended, e.g., \code{AEturquoise} etc.} \item{varExplained}{A dataframe in which each column corresponds to a module, with the component \code{varExplained[PC, module]} giving the variance of module \code{module} explained by the principal component no. \code{PC}. The calculation is exact irrespective of the number of computed principal components. At most 10 variance explained values are recorded in this dataframe.} \item{nPC}{A copy of the input \code{nPC}.} \item{validMEs}{A boolean vector. Each component (corresponding to the columns in \code{data}) is \code{TRUE} if the corresponding eigengene is valid, and \code{FALSE} if it is invalid. Valid eigengenes include both principal components and their hubgene approximations. When \code{returnValidOnly==FALSE}, by definition all returned eigengenes are valid and the entries of \code{validMEs} are all \code{TRUE}. } \item{validColors}{A copy of the input colors with entries corresponding to invalid modules set to \code{grey} if given, otherwise 0 if \code{colors} is numeric and "grey" otherwise.} \item{allOK}{Boolean flag signalling whether all eigengenes have been calculated correctly, either as principal components or as the hubgene average approximation.} \item{allPC}{Boolean flag signalling whether all returned eigengenes are principal components.} \item{isPC}{Boolean vector. 
Each component (corresponding to the columns in \code{eigengenes}) is \code{TRUE} if the corresponding eigengene is the first principal component and \code{FALSE} if it is the hubgene approximation or is invalid.} \item{isHub}{Boolean vector. Each component (corresponding to the columns in \code{eigengenes}) is \code{TRUE} if the corresponding eigengene is the hubgene approximation and \code{FALSE} if it is the first principal component or is invalid.} \item{validAEs}{Boolean vector. Each component (corresponding to the columns in \code{eigengenes}) is \code{TRUE} if the corresponding module average expression is valid.} \item{allAEOK}{Boolean flag signalling whether all returned module average expressions contain valid data. Note that \code{returnValidOnly==TRUE} does not imply \code{allAEOK==TRUE}: some invalid average expressions may be returned if their corresponding eigengenes have been calculated correctly.} } \references{ Zhang, B. and Horvath, S. (2005), "A General Framework for Weighted Gene Co-Expression Network Analysis", Statistical Applications in Genetics and Molecular Biology: Vol. 4: No. 1, Article 17} \author{ Steve Horvath \email{SHorvath@mednet.ucla.edu}, Peter Langfelder \email{Peter.Langfelder@gmail.com} } \seealso{\code{\link{svd}}, \code{impute.knn}} % Add one or more standard keywords, see file 'KEYWORDS' in the % R documentation directory. \keyword{misc} WGCNA/man/branchSplit.Rd0000644000176200001440000000216614012015545014426 0ustar liggesusers\name{branchSplit} \alias{branchSplit} \title{ Branch split. } \description{ Calculation of branch split based on expression data. This function is used as a plugin for the dynamicTreeCut package and the user should not call this function directly. } \usage{ branchSplit( expr, branch1, branch2, discardProp = 0.05, minCentralProp = 0.75, nConsideredPCs = 3, signed = FALSE, getDetails = TRUE, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{expr}{ Expression data. 
} \item{branch1}{ Branch 1. } \item{branch2}{ Branch 2. } \item{discardProp}{ Proportion of data to be discarded as outliers. } \item{minCentralProp}{ Minimum central proportion. } \item{nConsideredPCs}{ Number of principal components to consider. } \item{signed}{ Should the network be considered signed? } \item{getDetails}{ Should details of the calculation be returned? } \item{\dots}{ Other arguments. Present for compatibility; currently unused. } } \value{ A single number or a list containing details of the calculation. } \author{ Peter Langfelder } \keyword{ misc }% __ONLY ONE__ keyword per line WGCNA/man/correlationPreservation.Rd0000644000176200001440000000330414012015545017073 0ustar liggesusers\name{correlationPreservation} \alias{correlationPreservation} \title{ Preservation of eigengene correlations } \description{ Calculates a summary measure of preservation of eigengene correlations across data sets } \usage{ correlationPreservation(multiME, setLabels, excludeGrey = TRUE, greyLabel = "grey") } \arguments{ \item{multiME}{consensus module eigengenes in a multi-set format. A vector of lists with one list corresponding to each set. Each list must contain a component \code{data} that is a data frame whose columns are consensus module eigengenes. } \item{setLabels}{names to be used for the sets represented in \code{multiME}.} \item{excludeGrey}{logical: exclude the 'grey' eigengene from preservation measure?} \item{greyLabel}{module label corresponding to the 'grey' module. Usually this will be the character string \code{"grey"} if the labels are colors, and the number 0 if the labels are numeric.} } \details{ The function calculates the preservation of correlation of each eigengene with all other eigengenes (optionally except the 'grey' eigengene) in all pairs of sets. } \value{ A data frame whose rows correspond to consensus module eigengenes given in the input \code{multiME}, and columns correspond to all possible set comparisons.
The two sets compared in each column are indicated in the column name. } \references{ Langfelder P, Horvath S (2007) Eigengene networks for studying the relationships between co-expression modules. BMC Systems Biology 2007, 1:54} \author{ Peter Langfelder } \seealso{ \code{\link{multiSetMEs}} and \code{\link{checkSets}} in package moduleColor for more on eigengenes and the multi-set format } \keyword{ misc } WGCNA/man/pruneConsensusModules.Rd0000644000176200001440000000754414012015545016545 0ustar liggesusers\name{pruneConsensusModules} \alias{pruneConsensusModules} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Prune (hierarchical) consensus modules by removing genes with low eigengene-based intramodular connectivity } \description{ This function prunes (hierarchical) consensus modules by removing genes with low eigengene-based intramodular connectivity (KME) and by removing modules that do not have a certain minimum number of genes with a required minimum KME. } \usage{ pruneConsensusModules( multiExpr, multiWeights = NULL, multiExpr.imputed = NULL, MEs = NULL, labels, unassignedLabel = if (is.numeric(labels)) 0 else "grey", networkOptions, consensusTree, minModuleSize, minCoreKMESize = minModuleSize/3, minCoreKME = 0.5, minKMEtoStay = 0.2, # Module eigengene calculation options impute = TRUE, collectGarbage = FALSE, checkWeights = TRUE, verbose = 1, indent=0) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{multiExpr}{ Expression data in the multi-set format (see \code{\link{checkSets}}). A vector of lists, one per set. Each set must contain a component \code{data} that contains the expression data, with rows corresponding to samples and columns to genes or probes. } \item{multiWeights}{ optional observation weights in the same format (and dimensions) as \code{multiExpr}.
These weights are used for correlation calculations with data in \code{multiExpr}.} \item{multiExpr.imputed}{If \code{multiExpr} contains missing data, this argument can be used to supply the expression data with missing data imputed. If not given, the \code{\link[impute]{impute.knn}} function will be used to impute the missing data.} \item{MEs}{Optional consensus module eigengenes, in multi-set format analogous to that of \code{multiExpr}.} \item{labels}{ A vector (numeric, character or a factor) giving module labels for each variable (gene) in multiExpr. } \item{unassignedLabel}{ The label (value in \code{labels}) that represents unassigned genes. Modules with this label will not enter the module eigengene clustering and will not be merged with other modules.} \item{networkOptions}{ A single list of class \code{\link{NetworkOptions}} giving options for network calculation for all of the networks, or a \code{\link{multiData}} structure containing one such list for each input data set. } \item{consensusTree}{ A list of class \code{\link{ConsensusTree}} specifying the consensus calculation. } \item{minModuleSize}{Minimum number of genes in a module. Modules that have fewer genes (after trimming) will be removed (i.e., their genes will be given the unassigned label).} \item{minCoreKME}{ a number between 0 and 1. If a detected module does not have at least \code{minCoreKMESize} genes with consensus eigengene connectivity at least \code{minCoreKME}, the module is disbanded (its genes are unlabeled).} \item{minCoreKMESize}{ see \code{minCoreKME} above. } \item{minKMEtoStay}{ genes whose consensus eigengene connectivity to their module eigengene is lower than \code{minKMEtoStay} are removed from the module.} \item{impute}{ logical: should imputation be used for module eigengene calculation? See \code{\link{moduleEigengenes}} for more details. } \item{collectGarbage}{ Logical: should garbage be collected after some of the memory-intensive steps?
} \item{checkWeights}{Logical: should \code{multiWeights} be checked to make sure their dimensions are concordant with \code{multiExpr} and the weights are valid?} \item{verbose}{ integer level of verbosity. Zero means silent, higher values make the output progressively more and more verbose. } \item{indent}{ indentation for diagnostic messages. Zero means no indentation, each unit adds two spaces. } } \value{ The pruned module labels: a vector of the same form as the input \code{labels}. } \author{ Peter Langfelder } \keyword{misc} WGCNA/man/adjacency.Rd0000644000176200001440000001252014012015545014071 0ustar liggesusers\name{adjacency} \alias{adjacency} \alias{adjacency.fromSimilarity} \title{ Calculate network adjacency } \description{ Calculates (correlation or distance) network adjacency from given expression data or from a similarity. } \usage{ adjacency(datExpr, selectCols = NULL, type = "unsigned", power = if (type=="distance") 1 else 6, corFnc = "cor", corOptions = list(use = "p"), weights = NULL, distFnc = "dist", distOptions = "method = 'euclidean'", weightArgNames = c("weights.x", "weights.y")) adjacency.fromSimilarity(similarity, type = "unsigned", power = if (type=="distance") 1 else 6) } \arguments{ \item{datExpr}{ data frame containing expression data. Columns correspond to genes and rows to samples.} \item{similarity}{a (signed) similarity matrix: square, symmetric matrix with entries between -1 and 1. } \item{selectCols}{ for correlation networks only (see below); can be used to select genes whose adjacencies will be calculated. Should be either a numeric vector giving the indices of the genes to be used, or a boolean vector indicating which genes are to be used. } \item{type}{network type. Allowed values are (unique abbreviations of) \code{"unsigned"}, \code{"signed"}, \code{"signed hybrid"}, \code{"distance"}. } \item{power}{soft thresholding power. 
} \item{corFnc}{ character string specifying the function to be used to calculate co-expression similarity for correlation networks. Defaults to Pearson correlation. Any function returning values between -1 and 1 can be used. } \item{corOptions}{ character string or a list specifying additional arguments to be passed to the function given by \code{corFnc}. Use \code{"use = 'p', method = 'spearman'"} or, equivalently, \code{list(use = 'p', method = 'spearman')} to obtain Spearman correlation. } \item{weights}{optional observation weights for \code{datExpr} to be used in correlation calculation. A matrix of the same dimensions as \code{datExpr}, containing non-negative weights. Only used with Pearson correlation.} \item{distFnc}{ character string specifying the function to be used to calculate co-expression similarity for distance networks. Defaults to the function \code{\link{dist}}. Any function returning non-negative values can be used.} \item{distOptions}{ character string or a list specifying additional arguments to be passed to the function given by \code{distFnc}. For example, when the function \code{\link{dist}} is used, the argument \code{method} can be used to specify various ways of computing the distance. } \item{weightArgNames}{character vector of length 2 giving the names of the arguments to \code{corFnc} that represent weights for variable x and y. Only used if \code{weights} are non-NULL.} } \details{ The argument \code{type} determines whether a correlation (\code{type} one of \code{"unsigned"}, \code{"signed"}, \code{"signed hybrid"}), or a distance network (\code{type} equal \code{"distance"}) will be calculated. In correlation networks the adjacency is constructed from correlations (values between -1 and 1, with high numbers meaning high similarity). In distance networks, the adjacency is constructed from distances (non-negative values, high values mean low similarity).
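As a base-R illustration of the correlation-network case, the following sketch applies the type-dependent transformations to a Pearson correlation matrix. This is an illustration only: \code{toyAdjacency} is a hypothetical helper, not WGCNA's implementation, and it handles neither weights nor \code{selectCols}.

```r
# Minimal sketch of correlation-network adjacency (illustration only):
# compute gene-gene correlations, then soft-threshold according to 'type'.
toyAdjacency <- function(datExpr, type = "unsigned", power = 6) {
  cor.mat <- cor(datExpr, use = "pairwise.complete.obs")
  switch(type,
    "unsigned"      = abs(cor.mat)^power,
    "signed"        = ((1 + cor.mat) / 2)^power,
    "signed hybrid" = ifelse(cor.mat > 0, cor.mat^power, 0))
}

set.seed(1)
expr <- matrix(rnorm(100 * 10), nrow = 100, ncol = 10)  # samples x genes
adj <- toyAdjacency(expr, type = "signed", power = 6)
```

All three network types yield adjacencies in [0, 1] with 1 on the diagonal; the distance case differs only in the input similarity.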
The function calculates the similarity of columns (genes) in \code{datExpr} by calling the function given in \code{corFnc} (for correlation networks) or \code{distFnc} (for distance networks), transforms the similarity according to \code{type} and raises it to \code{power}, resulting in a weighted network adjacency matrix. If \code{selectCols} is given, the \code{corFnc} function will be given arguments \code{(datExpr, datExpr[selectCols], ...)}; hence the returned adjacency will have rows corresponding to all genes and columns corresponding to genes selected by \code{selectCols}. Correlation and distance are transformed as follows: for \code{type = "unsigned"}, adjacency = |cor|^power; for \code{type = "signed"}, adjacency = (0.5 * (1+cor) )^power; for \code{type = "signed hybrid"}, adjacency = cor^power if cor>0 and 0 otherwise; and for \code{type = "distance"}, adjacency = (1-(dist/max(dist))^2)^power. The function \code{adjacency.fromSimilarity} takes a similarity matrix as input; that is, it skips the correlation calculation step but is otherwise identical. } \value{ Adjacency matrix of dimensions \code{ncol(datExpr)} times \code{ncol(datExpr)} (or the same dimensions as \code{similarity}). If \code{selectCols} was given, the number of columns will be the length (if numeric) or sum (if logical) of \code{selectCols}. } \note{When calculated from \code{datExpr}, the network is always calculated among the columns of \code{datExpr} irrespective of whether a correlation or a distance network is requested. } \references{ Bin Zhang and Steve Horvath (2005) A General Framework for Weighted Gene Co-Expression Network Analysis, Statistical Applications in Genetics and Molecular Biology, Vol. 4 No. 1, Article 17 Langfelder P, Horvath S (2007) Eigengene networks for studying the relationships between co-expression modules.
BMC Systems Biology 2007, 1:54 } \author{ Peter Langfelder and Steve Horvath } \keyword{ misc }% __ONLY ONE__ keyword per line

WGCNA/man/hierarchicalMergeCloseModules.Rd

\name{hierarchicalMergeCloseModules} \alias{hierarchicalMergeCloseModules} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Merge close (similar) hierarchical consensus modules } \description{ Merges hierarchical consensus modules that are too close as measured by the correlation of their eigengenes. } \usage{
hierarchicalMergeCloseModules(
  # input data
  multiExpr,
  multiExpr.imputed = NULL,
  labels,
  # Optional starting eigengenes
  MEs = NULL,
  unassdColor = if (is.numeric(labels)) 0 else "grey",
  # If missing data are present, impute them?
  impute = TRUE,
  # Options for eigengene network construction
  networkOptions,
  # Options for constructing the consensus
  consensusTree,
  calibrateMESimilarities = FALSE,
  # Merging options
  cutHeight = 0.2,
  iterate = TRUE,
  # Output options
  relabel = FALSE,
  colorSeq = NULL,
  getNewMEs = TRUE,
  getNewUnassdME = TRUE,
  # Options controlling behaviour of the function
  trapErrors = FALSE,
  verbose = 1, indent = 0)
} %- maybe also 'usage' for other objects documented here. \arguments{ \item{multiExpr}{ Expression data in the multi-set format (see \code{\link{multiData}}). A vector of lists, one per set. Each set must contain a component \code{data} that contains the expression data, with rows corresponding to samples and columns to genes or probes. } \item{multiExpr.imputed}{If \code{multiExpr} contains missing data, this argument can be used to supply the expression data with missing data imputed. If not given, the \code{\link[impute]{impute.knn}} function will be used to impute the missing data within each module (see \code{\link{imputeByModule}}).} \item{labels}{ A vector (numeric, character or a factor) giving module labels for genes (variables) in \code{multiExpr}.
} \item{MEs}{ If module eigengenes have been calculated before, the user can save some computational time by inputting them. \code{MEs} should have the same format as \code{multiExpr}. If they are not given, they will be calculated. } \item{unassdColor}{The label (value in \code{labels}) that represents unassigned genes. Modules with this label will not enter the module eigengene clustering and will not be merged with other modules.} \item{impute}{Should missing values be imputed in eigengene calculation? If imputation is disabled, the presence of \code{NA} entries will cause the eigengene calculation to fail and eigengenes will be replaced by their hubgene approximation. See \code{\link{moduleEigengenes}} for more details.} \item{networkOptions}{ A single list of class \code{\link{NetworkOptions}} giving options for network calculation for all of the networks, or a \code{\link{multiData}} structure containing one such list for each input data set. } \item{consensusTree}{ A list specifying the consensus calculation. See \code{\link{newConsensusTree}} for details. } \item{calibrateMESimilarities}{ Logical: should module eigengene similarities be calibrated? This setting overrides the calibration options in \code{consensusTree}. } \item{cutHeight}{ Maximum dissimilarity (i.e., 1-correlation) that qualifies modules for merging. } \item{iterate}{Controls whether the merging procedure should be repeated until there is no change. If \code{FALSE}, only one iteration will be executed.} \item{relabel}{Controls whether, after merging, color labels should be ordered by module size.} \item{colorSeq}{Color labels to be used for relabeling.
Defaults to the standard color order used in this package if \code{labels} are not numeric, and to integers starting from 1 if \code{labels} is numeric.} \item{getNewMEs}{Controls whether module eigengenes of merged modules should be calculated and returned.} \item{getNewUnassdME}{When doing module eigengene manipulations, the function does not normally calculate the eigengene of the 'module' of unassigned ('grey') genes. Setting this option to \code{TRUE} will force the calculation of the unassigned eigengene in the returned newMEs, but not in the returned oldMEs.} \item{trapErrors}{Controls whether computational errors in calculating module eigengenes, their dissimilarity, and merging trees should be trapped. If \code{TRUE}, errors will be trapped and the function will return the input colors. If \code{FALSE}, errors will cause the function to stop.} \item{verbose}{Controls verbosity of printed progress messages. 0 means silent, up to (about) 5 the verbosity gradually increases.} \item{indent}{A single non-negative integer controlling indentation of printed messages. 0 means no indentation, each unit above that adds two spaces. } } \details{ This function merges input modules that are closely related. The similarities are quantified by correlations of module eigengenes; a ``consensus'' similarity is calculated using \code{hierarchicalConsensusMEDissimilarity} according to the recipe in \code{consensusTree}. Once the (dis-)similarities are calculated, average linkage hierarchical clustering of the module eigengenes is performed, the dendrogram is cut at the height \code{cutHeight} and modules on each branch are merged. The process is (optionally) repeated until no more modules are merged. If, for a particular module, the module eigengene calculation fails, a hubgene approximation will be used.
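The clustering-and-cutting recipe just described can be sketched in base R. This is an illustration of the idea only (the eigengene matrix \code{MEs} here is simulated, and the consensus calibration step is omitted):

```r
# Sketch of the merging recipe: cluster module eigengenes by
# dissimilarity 1 - correlation (average linkage) and merge all
# modules on branches below cutHeight.
set.seed(2)
MEs <- matrix(rnorm(50 * 4), nrow = 50, ncol = 4)   # samples x module eigengenes
MEs[, 2] <- MEs[, 1] + rnorm(50, sd = 0.1)          # modules 1 and 2 are close
colnames(MEs) <- paste0("ME", 1:4)

cutHeight <- 0.2
dissME <- 1 - cor(MEs)                              # eigengene dissimilarity
dendro <- hclust(as.dist(dissME), method = "average")
merged <- cutree(dendro, h = cutHeight)             # equal labels => merged
```

Here modules 1 and 2 receive the same label (their eigengene dissimilarity is well below \code{cutHeight}), while modules 3 and 4 remain separate; the actual function additionally recomputes eigengenes and iterates.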
The user should be aware that if a computational error occurs and \code{trapErrors==TRUE}, the returned list (see below) will not contain all of the components returned upon normal execution. } \value{ If no errors occurred, a list with components \item{labels}{Labels for the genes corresponding to merged modules. The function attempts to mimic the mode of the input \code{labels}: if the input \code{labels} is numeric, character or factor, respectively, so is the output. Note, however, that if the function performs relabeling, a standard sequence of labels will be used: integers starting at 1 if the input \code{labels} is numeric, and a sequence of color labels otherwise (see \code{colorSeq} above).} \item{dendro}{Hierarchical clustering dendrogram (average linkage) of the eigengenes of the most recently computed tree. If \code{iterate} was set \code{TRUE}, this will be the dendrogram of the merged modules, otherwise it will be the dendrogram of the original modules.} \item{oldDendro}{Hierarchical clustering dendrogram (average linkage) of the eigengenes of the original modules.} \item{cutHeight}{The input \code{cutHeight}.} \item{oldMEs}{Module eigengenes of the original modules in the individual data sets.} \item{newMEs}{Module eigengenes of the merged modules in the individual data sets.} \item{allOK}{A logical set to \code{TRUE}.} If an error occurred and \code{trapErrors==TRUE}, the list only contains these components: \item{colors}{A copy of the input colors.} \item{allOK}{A logical set to \code{FALSE}.} } \author{ Peter Langfelder } \seealso{ \code{\link{multiSetMEs}} for calculation of (consensus) module eigengenes across multiple data sets; \code{\link{newConsensusTree}} for information about consensus trees; \code{\link{hierarchicalConsensusMEDissimilarity}} for calculation of hierarchical consensus eigengene dissimilarity.
} \keyword{misc}% __ONLY ONE__ keyword per line

WGCNA/man/hierarchicalConsensusKME.Rd

\name{hierarchicalConsensusKME} \alias{hierarchicalConsensusKME} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Calculation of measures of fuzzy module membership (KME) in hierarchical consensus modules } \description{ This function calculates several measures of fuzzy module membership in hierarchical consensus modules. } \usage{
hierarchicalConsensusKME(
  multiExpr,
  moduleLabels,
  multiWeights = NULL,
  multiEigengenes = NULL,
  consensusTree,
  signed = TRUE,
  useModules = NULL,
  metaAnalysisWeights = NULL,
  corAndPvalueFnc = corAndPvalue,
  corOptions = list(),
  corComponent = "cor",
  getFDR = FALSE,
  useRankPvalue = TRUE,
  rankPvalueOptions = list(calculateQvalue = getFDR, pValueMethod = "scale"),
  setNames = names(multiExpr),
  excludeGrey = TRUE,
  greyLabel = if (is.numeric(moduleLabels)) 0 else "grey",
  reportWeightType = NULL,
  getOwnModuleZ = TRUE,
  getBestModuleZ = TRUE,
  getOwnConsensusKME = TRUE,
  getBestConsensusKME = TRUE,
  getAverageKME = FALSE,
  getConsensusKME = TRUE,
  getMetaColsFor1Set = FALSE,
  getMetaP = FALSE,
  getMetaFDR = getMetaP && getFDR,
  getSetKME = TRUE,
  getSetZ = FALSE,
  getSetP = FALSE,
  getSetFDR = getSetP && getFDR,
  includeID = TRUE,
  additionalGeneInfo = NULL,
  includeWeightTypeInColnames = TRUE)
} %- maybe also 'usage' for other objects documented here. \arguments{ \item{multiExpr}{ Expression data in the multi-set format (see \code{\link{checkSets}}). A vector of lists, one per set. Each set must contain a component \code{data} that contains the expression data, with rows corresponding to samples and columns to genes or probes. } \item{moduleLabels}{ A vector with one entry per column (gene or probe) in \code{multiExpr}, giving the module labels. } \item{multiWeights}{ optional observation weights for data in \code{multiExpr}, in the same format (and dimensions) as \code{multiExpr}.
These weights are used in calculation of KME, i.e., the correlation of module eigengenes with data in \code{multiExpr}. The module eigengenes are not weighted in this calculation.} \item{multiEigengenes}{ Optional specification of module eigengenes of the modules (\code{moduleLabels}) in data sets within \code{multiExpr}. If not given, will be calculated. } \item{consensusTree}{ A list specifying the consensus calculation. See details. } \item{signed}{ Logical: should module membership be considered signed? Signed membership should be used for signed (including signed hybrid) networks, where negative module membership means the gene is not a member of the module. In other words, in signed networks negative kME values are not considered significant and the corresponding p-values will be one-sided. In unsigned networks, negative kME values are considered significant and the corresponding p-values will be two-sided. } \item{useModules}{ Optional vector specifying which modules should be used. Defaults to all modules except the unassigned module. } \item{metaAnalysisWeights}{ Optional specification of meta-analysis weights for each input set. If given, must be a numeric vector of length equal the number of input data sets (i.e., \code{length(multiExpr)}). These weights will be used in addition to constant weights and weights proportional to number of samples (observations) in each set. } \item{corAndPvalueFnc}{ Function that calculates associations between expression profiles and eigengenes. See details. } \item{corOptions}{ List giving additional arguments to function \code{corAndPvalueFnc}. See details. } \item{corComponent}{ Name of the component of output of \code{corAndPvalueFnc} that contains the actual correlation. } \item{getFDR}{ Logical: should FDR be calculated?
} \item{useRankPvalue}{ Logical: should the \code{\link{rankPvalue}} function be used to obtain alternative meta-analysis statistics?} \item{rankPvalueOptions}{ Additional options for function \code{\link{rankPvalue}}. These include \code{na.last} (default \code{"keep"}), \code{ties.method} (default \code{"average"}), \code{calculateQvalue} (default copied from input \code{getFDR}), and \code{pValueMethod} (default \code{"scale"}). See the help file for \code{\link{rankPvalue}} for full details.} \item{setNames}{ Names for the input sets. If not given, will be taken from \code{names(multiExpr)}. If those are \code{NULL} as well, the names will be \code{"Set_1", "Set_2", ...}. } \item{excludeGrey}{ logical: should the grey module be excluded from the kME tables? Since the grey module is typically not a real module, it makes little sense to report kME values for it. } \item{greyLabel}{ label that labels the grey module. } \item{reportWeightType}{ One of \code{"equal", "rootDoF", "DoF", "user"}. Indicates which of the weights should be reported in the output. If not given, all available weight types will be reported; this always includes \code{"equal", "rootDoF", "DoF"}, while \code{"user"} weights are reported if \code{metaAnalysisWeights} above is given. } \item{getOwnModuleZ}{ Logical: should meta-analysis Z statistic in own module be returned as a column of the output? } \item{getBestModuleZ}{ Logical: should highest meta-analysis Z statistic across all modules and the corresponding module be returned as columns of the output? } \item{getOwnConsensusKME}{ Logical: should consensus KME (eigengene-based connectivity) statistic in own module be returned as a column of the output? } \item{getBestConsensusKME}{ Logical: should highest consensus KME across all modules and the corresponding module be returned as columns of the output? } \item{getAverageKME}{ Logical: should average KME be calculated?
} \item{getConsensusKME}{ Logical: should consensus KME be calculated? } \item{getMetaColsFor1Set}{ Logical: should the meta-statistics be returned if the input data only have 1 set? For 1 set, meta- and individual kME values are the same, so meta-columns essentially duplicate individual columns. } \item{getMetaP}{ Logical: should meta-analysis p-values corresponding to the KME meta-analysis Z statistics be calculated? } \item{getMetaFDR}{ Logical: should FDR estimates for the meta-analysis p-values corresponding to the KME meta-analysis Z statistics be calculated? } \item{getSetKME}{ Logical: should KME values for individual sets be returned? } \item{getSetZ}{ Logical: should Z statistics corresponding to KME for individual sets be returned? } \item{getSetP}{ Logical: should p values corresponding to KME for individual sets be returned? } \item{getSetFDR}{ Logical: should FDR estimates corresponding to KME for individual sets be returned? } \item{includeID}{ Logical: should gene ID (taken from column names of \code{multiExpr}) be included as the first column in the output? } \item{additionalGeneInfo}{ Optional data frame with rows corresponding to genes in \code{multiExpr} that should be included as part of the output. } \item{includeWeightTypeInColnames}{ Logical: should weight type (\code{"equal", "rootDoF", "DoF", "user"}) be included in appropriate meta-analysis column names? } } \details{ This function calculates several measures of (hierarchical) consensus KME (eigengene-based intramodular connectivity or fuzzy module membership) for all genes in all modules. First, it calculates the meta-analysis Z statistics for correlations between genes and module eigengenes; this is known as the consensus module membership Z statistic. The meta-analysis weights can be specified by the user either explicitly or implicitly ("equal", "RootDoF" or "DoF"). 
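The weighted combination of per-set Z statistics described above can be sketched in base R. This is a hedged illustration of the general meta-analysis recipe (the Fisher-transform Z and the exact weight definitions are assumptions; \code{metaZ.kME} is hypothetical, not WGCNA's internal code):

```r
# Sketch of a meta-analysis Z statistic for kME: combine per-set
# correlation Z scores with equal, root-degree-of-freedom, or
# degree-of-freedom weights.
metaZ.kME <- function(r, n, weightType = c("equal", "rootDoF", "DoF")) {
  weightType <- match.arg(weightType)
  z <- atanh(r) * sqrt(n - 3)                 # per-set Fisher-transform Z scores
  w <- switch(weightType,
    equal   = rep(1, length(n)),
    rootDoF = sqrt(n - 3),
    DoF     = n - 3)
  sum(w * z) / sqrt(sum(w^2))                 # combined (meta-analysis) Z
}

# Gene-eigengene correlations of 0.6, 0.5, 0.7 in three sets:
metaZ.kME(r = c(0.6, 0.5, 0.7), n = c(50, 80, 30), weightType = "rootDoF")
```

With a single set and equal weights, the combined statistic reduces to the set's own Z score, as expected.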
Second, it can calculate the consensus KME, i.e., the hierarchical consensus of the KMEs (correlations with eigengenes) across the individual sets. The consensus calculation is specified in the argument \code{consensusTree}; typically, the \code{consensusTree} used here will be the same as the one used for the actual consensus network construction and module identification. See \code{\link{newConsensusTree}} for details on how to specify consensus trees. Third, the function can also calculate the (weighted) average KME using the meta-analysis weights; the average KME can be interpreted as the meta-analysis of the KMEs in the individual sets. This is related to but somewhat distinct from the meta-analysis Z statistics. In addition to these, optional output also includes, for each gene, KME values in the module to which the gene is assigned as well as the maximum KME values and modules for which the maxima are attained. For most genes, the assigned module will be the one with highest KME values, but for some genes the assigned module and module of maximum KME may be different. The function \code{corAndPvalueFnc} is currently expected to accept arguments \code{x} (gene expression profiles), \code{y} (eigengene expression profiles), and \code{alternative} with possibilities at least \code{"greater", "two.sided"}. If weights are given, these are passed to \code{corAndPvalueFnc} as argument \code{weights.x}. Any additional arguments can be passed via \code{corOptions}. The function \code{corAndPvalueFnc} should return a list which at the least contains (1) a matrix of associations of genes and eigengenes (this component should have the name given by \code{corComponent}), and (2) a matrix of the corresponding p-values, named "p" or "p.value".
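A minimal base-R function honoring this interface might look as follows. It is a sketch only: \code{simpleCorAndPvalue} is hypothetical, and WGCNA's own \code{corAndPvalue} is more general (weights, more alternatives, optimized C code).

```r
# Minimal function satisfying the corAndPvalueFnc contract described
# above: accepts x, y and alternative; returns a list with components
# cor, p, nObs and Z.
simpleCorAndPvalue <- function(x, y, alternative = c("two.sided", "greater")) {
  alternative <- match.arg(alternative)
  r <- cor(x, y, use = "pairwise.complete.obs")
  nObs <- crossprod(!is.na(x), !is.na(y))     # pairwise non-missing counts
  Z <- atanh(r) * sqrt(nObs - 3)              # Fisher-transform Z scores
  p <- if (alternative == "greater") pnorm(Z, lower.tail = FALSE)
       else 2 * pnorm(abs(Z), lower.tail = FALSE)
  list(cor = r, p = p, nObs = nObs, Z = Z)
}

set.seed(3)
x <- matrix(rnorm(40 * 5), 40, 5)   # 5 gene profiles, 40 samples
y <- matrix(rnorm(40 * 2), 40, 2)   # 2 eigengene profiles
res <- simpleCorAndPvalue(x, y)
```

Because it returns both \code{nObs} and \code{Z}, a function of this shape would enable the Z-statistic-based meta-analysis columns in the output.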
Other components are optional but for full functionality should include (3) \code{nObs} giving the number of observations for each association (which is the number of samples less the number of missing entries - this can in principle vary from association to association), and (4) \code{Z} giving a Z statistic for each association. If these are missing, \code{nObs} is calculated in the main function, and calculations using the Z statistic are skipped. } \value{ Data frame with the following components, some of which may be missing depending on input options (for easier readability the order here is not the same as in the actual output): \item{ID}{Gene ID, taken from the column names of the first input data set.} If given, a copy of \code{additionalGeneInfo}. \item{Z.kME.inOwnModule}{Meta-analysis Z statistic for membership in assigned module.} \item{maxZ.kME}{Maximum meta-analysis Z statistic for membership across all modules.} \item{moduleOfMaxZ.kME}{Module in which the maximum meta-analysis Z statistic is attained. } \item{consKME.inOwnModule}{Consensus KME in assigned module.} \item{maxConsKME}{Maximum consensus KME across all modules.} \item{moduleOfMaxConsKME}{Module in which the maximum consensus KME is attained.} \item{consensus.kME.1, consensus.kME.2, ...}{Consensus kME (that is, the requested quantile of the kMEs in the individual data sets) in each module for each gene across the input data sets. The module labels (here 1, 2, etc.) correspond to those in \code{moduleLabels}.} \item{weightedAverage.equalWeights.kME1, weightedAverage.equalWeights.kME2, ...}{ Average kME in each module for each gene across the input data sets. } \item{weightedAverage.RootDoFWeights.kME1, weightedAverage.RootDoFWeights.kME2, ...}{ Weighted average kME in each module for each gene across the input data sets. The weight of each data set is proportional to the square root of the number of samples in the set.
} \item{weightedAverage.DoFWeights.kME1, weightedAverage.DoFWeights.kME2, ...}{ Weighted average kME in each module for each gene across the input data sets. The weight of each data set is proportional to number of samples in the set. } \item{weightedAverage.userWeights.kME1, weightedAverage.userWeights.kME2, ...}{ (Only present if input \code{metaAnalysisWeights} is non-NULL.) Weighted average kME in each module for each gene across the input data sets. The weight of each data set is given in \code{metaAnalysisWeights}.} \item{meta.Z.equalWeights.kME1, meta.Z.equalWeights.kME2, ...}{Meta-analysis Z statistic for kME in each module, obtained by weighing the Z scores in each set equally. Only returned if the function \code{corAndPvalueFnc} returns the Z statistics corresponding to the correlations.} \item{meta.Z.RootDoFWeights.kME1, meta.Z.RootDoFWeights.kME2, ...}{ Meta-analysis Z statistic for kME in each module, obtained by weighing the Z scores in each set by the square root of the number of samples. Only returned if the function \code{corAndPvalueFnc} returns the Z statistics corresponding to the correlations.} \item{meta.Z.DoFWeights.kME1, meta.Z.DoFWeights.kME2, ...}{Meta-analysis Z statistic for kME in each module, obtained by weighing the Z scores in each set by the number of samples. Only returned if the function \code{corAndPvalueFnc} returns the Z statistics corresponding to the correlations.} \item{meta.Z.userWeights.kME1, meta.Z.userWeights.kME2, ...}{Meta-analysis Z statistic for kME in each module, obtained by weighing the Z scores in each set by \code{metaAnalysisWeights}. Only returned if \code{metaAnalysisWeights} is non-NULL and the function \code{corAndPvalueFnc} returns the Z statistics corresponding to the correlations.} \item{meta.p.equalWeights.kME1, meta.p.equalWeights.kME2, ...}{ p-values obtained from the equal-weight meta-analysis Z statistics. 
Only returned if the function \code{corAndPvalueFnc} returns the Z statistics corresponding to the correlations. } \item{meta.p.RootDoFWeights.kME1, meta.p.RootDoFWeights.kME2, ...}{ p-values obtained from the meta-analysis Z statistics with weights proportional to the square root of the number of samples. Only returned if the function \code{corAndPvalueFnc} returns the Z statistics corresponding to the correlations. } \item{meta.p.DoFWeights.kME1, meta.p.DoFWeights.kME2, ...}{ p-values obtained from the degree-of-freedom weight meta-analysis Z statistics. Only returned if the function \code{corAndPvalueFnc} returns the Z statistics corresponding to the correlations. } \item{meta.p.userWeights.kME1, meta.p.userWeights.kME2, ...}{ p-values obtained from the user-supplied weight meta-analysis Z statistics. Only returned if \code{metaAnalysisWeights} is non-NULL and the function \code{corAndPvalueFnc} returns the Z statistics corresponding to the correlations. } \item{meta.q.equalWeights.kME1, meta.q.equalWeights.kME2, ...}{ q-values obtained from the equal-weight meta-analysis p-values. Only present if \code{getFDR} is \code{TRUE} and the function \code{corAndPvalueFnc} returns the Z statistics corresponding to the kME values.} \item{meta.q.RootDoFWeights.kME1, meta.q.RootDoFWeights.kME2, ...}{ q-values obtained from the meta-analysis p-values with weights proportional to the square root of the number of samples. Only present if \code{getFDR} is \code{TRUE} and the function \code{corAndPvalueFnc} returns the Z statistics corresponding to the kME values.} \item{meta.q.DoFWeights.kME1, meta.q.DoFWeights.kME2, ...}{ q-values obtained from the degree-of-freedom weight meta-analysis p-values.
Only present if \code{getFDR} is \code{TRUE} and the function \code{corAndPvalueFnc} returns the Z statistics corresponding to the kME values.} \item{meta.q.userWeights.kME1, meta.q.userWeights.kME2, ...}{ q-values obtained from the user-specified weight meta-analysis p-values. Only present if \code{metaAnalysisWeights} is non-NULL, \code{getFDR} is \code{TRUE} and the function \code{corAndPvalueFnc} returns the Z statistics corresponding to the kME values.} The next set of columns contain the results of function \code{\link{rankPvalue}} and are only present if input \code{useRankPvalue} is \code{TRUE}. Some columns may be missing depending on the options specified in \code{rankPvalueOptions}. We explicitly list columns that are based on weighing each set equally; names of these columns carry the suffix \code{.equalWeights}. \item{pValueExtremeRank.ME1.equalWeights, pValueExtremeRank.ME2.equalWeights, ...}{ This is the minimum between pValueLowRank and pValueHighRank, i.e. min(pValueLow, pValueHigh)} \item{pValueLowRank.ME1.equalWeights, pValueLowRank.ME2.equalWeights, ...}{ Asymptotic p-value for observing a consistently low value based on the rank method.} \item{pValueHighRank.ME1.equalWeights, pValueHighRank.ME2.equalWeights, ...}{ Asymptotic p-value for observing a consistently high value across the columns of datS based on the rank method.} \item{pValueExtremeScale.ME1.equalWeights, pValueExtremeScale.ME2.equalWeights, ...}{ This is the minimum between pValueLowScale and pValueHighScale, i.e.
min(pValueLow, pValueHigh)} \item{pValueLowScale.ME1.equalWeights, pValueLowScale.ME2.equalWeights, ...}{ Asymptotic p-value for observing a consistently low value across the columns of datS based on the Scale method.} \item{pValueHighScale.ME1.equalWeights, pValueHighScale.ME2.equalWeights, ...}{ Asymptotic p-value for observing a consistently high value across the columns of datS based on the Scale method.} \item{qValueExtremeRank.ME1.equalWeights, qValueExtremeRank.ME2.equalWeights, ...}{ local false discovery rate (q-value) corresponding to the p-value pValueExtremeRank} \item{qValueLowRank.ME1.equalWeights, qValueLowRank.ME2.equalWeights, ...}{ local false discovery rate (q-value) corresponding to the p-value pValueLowRank} \item{qValueHighRank.ME1.equalWeights, qValueHighRank.ME2.equalWeights, ...}{ local false discovery rate (q-value) corresponding to the p-value pValueHighRank} \item{qValueExtremeScale.ME1.equalWeights, qValueExtremeScale.ME2.equalWeights, ...}{ local false discovery rate (q-value) corresponding to the p-value pValueExtremeScale} \item{qValueLowScale.ME1.equalWeights, qValueLowScale.ME2.equalWeights, ...}{ local false discovery rate (q-value) corresponding to the p-value pValueLowScale} \item{qValueHighScale.ME1.equalWeights, qValueHighScale.ME2.equalWeights, ...}{ local false discovery rate (q-value) corresponding to the p-value pValueHighScale} \item{...}{Analogous columns corresponding to weighing individual sets by the square root of the number of samples, by number of samples, and by user weights (if given). The corresponding column name suffixes are \code{.RootDoFWeights}, \code{.DoFWeights}, and \code{.userWeights}.} The following set of columns summarize kME in individual input data sets. \item{kME1.Set_1, kME1.Set_2, ..., kME2.Set_1, kME2.Set_2, ...}{ kME values for each gene in each module in each given data set.
} \item{p.kME1.Set_1, p.kME1.Set_2, ..., p.kME2.Set_1, p.kME2.Set_2, ...}{ p-values corresponding to kME values for each gene in each module in each given data set. } \item{q.kME1.Set_1, q.kME1.Set_2, ..., q.kME2.Set_1, q.kME2.Set_2, ...}{ q-values corresponding to kME values for each gene in each module in each given data set. Only returned if \code{getFDR} is \code{TRUE}. } \item{Z.kME1.Set_1, Z.kME1.Set_2, ..., Z.kME2.Set_1, Z.kME2.Set_2, ...}{ Z statistics corresponding to kME values for each gene in each module in each given data set. Only present if the function \code{corAndPvalueFnc} returns the Z statistics corresponding to the kME values. } } \author{ Peter Langfelder } \seealso{ \code{\link{signedKME}} for eigengene based connectivity in a single data set. \code{\link{corAndPvalue}}, \code{\link{bicorAndPvalue}} for two alternatives for calculating correlations and the corresponding p-values and Z scores. Both can be used with this function. \code{\link{newConsensusTree}} for more details on hierarchical consensus trees and calculations. } \keyword{misc}

WGCNA/man/goodSamples.Rd

\name{goodSamples} \alias{goodSamples} \title{ Filter samples with too many missing entries } \description{ This function checks data for missing entries and returns a list of samples that pass two criteria on maximum number of missing values: the fraction of missing values must be below a given threshold and the total number of missing genes must be below a given threshold. } \usage{
goodSamples(
  datExpr,
  weights = NULL,
  useSamples = NULL,
  useGenes = NULL,
  minFraction = 1/2,
  minNSamples = ..minNSamples,
  minNGenes = ..minNGenes,
  minRelativeWeight = 0.1,
  verbose = 1, indent = 0)
} \arguments{ \item{datExpr}{ expression data. A data frame in which columns are genes and rows are samples.
} \item{weights}{optional observation weights in the same format (and dimensions) as \code{datExpr}.} \item{useSamples}{ optional specifications of which samples to use for the check. Should be a logical vector; samples whose entries are \code{FALSE} will be ignored for the missing value counts. Defaults to using all samples.} \item{useGenes}{ optional specifications of genes for which to perform the check. Should be a logical vector; genes whose entries are \code{FALSE} will be ignored. Defaults to using all genes.} \item{minFraction}{ minimum fraction of non-missing genes for a sample to be considered good. } \item{minNSamples}{ minimum number of good samples for the data set to be considered fit for analysis. If the actual number of good samples falls below this threshold, an error will be issued. } \item{minNGenes}{ minimum number of non-missing genes for a sample to be considered good. } \item{minRelativeWeight}{ observations whose weight divided by the maximum weight is below this threshold will be considered missing. } \item{verbose}{ integer level of verbosity. Zero means silent, higher values make the output progressively more and more verbose. } \item{indent}{ indentation for diagnostic messages. Zero means no indentation, each unit adds two spaces. } } \details{ The constants \code{..minNSamples} and \code{..minNGenes} are both set to the value 4. For most data sets, the fraction of missing genes criterion will be much more stringent than the absolute number of missing genes criterion. } \value{ A logical vector with one entry per sample that is \code{TRUE} if the sample is considered good and \code{FALSE} otherwise. Note that all samples excluded by \code{useSamples} are automatically assigned \code{FALSE}.
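The per-sample criterion described above can be sketched in base R. This is an assumed, simplified illustration only (\code{toyGoodSamples} is hypothetical; the real function additionally handles weights, \code{useSamples}/\code{useGenes}, and interacts with the gene-filtering side):

```r
# Sketch of the sample-filtering criterion: a sample is "good" if it
# has at least minFraction * (number of genes) non-missing entries
# and at least minNGenes non-missing entries.
toyGoodSamples <- function(datExpr, minFraction = 1/2, minNGenes = 4) {
  nPresent <- rowSums(!is.na(datExpr))              # non-missing genes per sample
  nPresent >= minFraction * ncol(datExpr) & nPresent >= minNGenes
}

expr <- matrix(rnorm(5 * 8), nrow = 5, ncol = 8)    # 5 samples x 8 genes
expr[1, 1:6] <- NA                                  # sample 1: only 2 genes present
toyGoodSamples(expr)                                # sample 1 fails both criteria
```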
} \author{ Peter Langfelder and Steve Horvath } \seealso{ \code{\link{goodGenes}}, \code{\link{goodSamplesGenes}} } \keyword{ misc }% __ONLY ONE__ keyword per line WGCNA/man/scaleFreePlot.Rd0000644000176200001440000000270214012015545014701 0ustar liggesusers\name{scaleFreePlot} \alias{scaleFreePlot} \title{ Visual check of scale-free topology } \description{ A simple visual check of scale-free network topology. } \usage{ scaleFreePlot( connectivity, nBreaks = 10, truncated = FALSE, removeFirst = FALSE, main = "", ...) } \arguments{ \item{connectivity}{ vector containing network connectivities. } \item{nBreaks}{ number of bins of the connectivity histogram. } \item{truncated}{ logical: should a truncated exponential fit be calculated and plotted in addition to the linear one? } \item{removeFirst}{ logical: should the first bin be removed from the fit? } \item{main}{ main title for the plot. } \item{\dots}{ other graphical parameters passed to the \code{plot} function. } } \details{ The function plots a log-log plot of a histogram of the given \code{connectivities}, and fits a linear model plus optionally a truncated exponential model. The \eqn{R^2} of the fit can be considered an index of the scale freedom of the network topology. } \value{ None. } \references{ Bin Zhang and Steve Horvath (2005) "A General Framework for Weighted Gene Co-Expression Network Analysis", Statistical Applications in Genetics and Molecular Biology: Vol. 4: No. 1, Article 17 } \author{ Steve Horvath } \seealso{ \code{\link{softConnectivity}} for connectivity calculation in weighted networks. } \keyword{ misc }% __ONLY ONE__ keyword per line WGCNA/man/BD.getData.Rd0000644000176200001440000000567514012015545014022 0ustar liggesusers\name{BD.getData} \alias{BD.actualFileNames} \alias{BD.nBlocks} \alias{BD.blockLengths} \alias{BD.getMetaData} \alias{BD.getData} \alias{BD.checkAndDeleteFiles} %- Also NEED an '\alias' for EACH other topic documented here.
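The log-log regression described for \code{scaleFreePlot} can be sketched in base R. This is an illustrative approximation on toy data, not the package's exact implementation (binning details and the truncated-exponential variant differ):

```r
# Toy sketch of a scale-free topology check: bin connectivities,
# then regress log10(frequency) on log10(mean connectivity per bin).
# The resulting R^2 serves as an index of scale-free fit.
set.seed(1)
k <- rexp(1000, rate = 0.1) + 1          # toy connectivity vector
bins <- cut(k, breaks = 10)
freq <- tapply(k, bins, length)          # counts per bin (NA for empty bins)
meanK <- tapply(k, bins, mean)
keep <- !is.na(freq) & freq > 0
fit <- lm(log10(freq[keep]) ~ log10(meanK[keep]))
scaleFreeR2 <- summary(fit)$r.squared
scaleFreeR2
```

Here \code{scaleFreeR2} plays the role of the \eqn{R^2} index mentioned in the details; names such as \code{scaleFreeR2} are of course hypothetical.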
\title{ Various basic operations on \code{BlockwiseData} objects. } \description{ These functions implement basic operations on \code{\link{BlockwiseData}} objects. Blockwise here means that the data are too large to be loaded or processed in one piece and are therefore split into blocks that can be handled one by one in a divide-and-conquer manner. } \usage{ BD.actualFileNames(bwData) BD.nBlocks(bwData) BD.blockLengths(bwData) BD.getMetaData(bwData, blocks = NULL, simplify = TRUE) BD.getData(bwData, blocks = NULL, simplify = TRUE) BD.checkAndDeleteFiles(bwData) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{bwData}{ A \code{BlockwiseData} object. } \item{blocks}{ Optional vector of integers specifying the blocks on which to execute the operation. } \item{simplify}{ Logical: if the \code{blocks} argument above is of length 1, should the returned list be simplified by removing the redundant outer \code{list} structure? } } \details{ Several functions in this package use the concept of blockwise, or "divide-and-conquer", analysis. The BlockwiseData class is meant to hold the blockwise data, or all necessary information about blockwise data that is saved in disk files. } \value{ \item{BD.actualFileNames}{returns a vector of character strings giving the file names in which the data are saved, or \code{NULL} if the data are held in-memory.} \item{BD.nBlocks}{returns the number of blocks in the input object.} \item{BD.blockLengths}{returns the block lengths (results of applying \code{\link{length}} to the data in each block).} \item{BD.getMetaData}{returns a list with one component per block. Each component is in turn a list containing the stored meta-data for the corresponding block. If \code{blocks} is of length 1 and \code{simplify} is \code{TRUE}, the outer (redundant) \code{list} is removed.} \item{BD.getData}{returns a list with one component per block.
Each component is in turn a list containing the stored data for the corresponding block. If \code{blocks} is of length 1 and \code{simplify} is \code{TRUE}, the outer (redundant) \code{list} is removed.} \item{BD.checkAndDeleteFiles}{deletes the files referenced in the input \code{bwData} if they exist.} } \author{ Peter Langfelder } \section{Warning}{The definition of \code{BlockwiseData} and the functions here should be considered experimental and may change in the future.} \seealso{ Definition of and other functions on \code{\link{BlockwiseData}}: \code{\link{newBlockwiseData}} for creating new \code{BlockwiseData} objects; \code{\link{mergeBlockwiseData}} for merging blockwise data structures; \code{\link{addBlockToBlockwiseData}} for adding a new block to existing blockwise data. } \keyword{misc}% __ONLY ONE__ keyword per line WGCNA/man/plotMEpairs.Rd0000644000176200001440000000260414012015545014411 0ustar liggesusers\name{plotMEpairs} \alias{plotMEpairs} \title{ Pairwise scatterplots of eigengenes} \description{ The function produces a matrix of plots containing pairwise scatterplots of given eigengenes, the distribution of their values and their pairwise correlations. } \usage{ plotMEpairs( datME, y = NULL, main = "Relationship between module eigengenes", clusterMEs = TRUE, ...) } \arguments{ \item{datME}{ a data frame containing module eigengenes, with rows corresponding to samples and columns to eigengenes. Missing values are allowed and will be ignored. } \item{y}{ optional microarray sample trait vector. Will be treated as an additional eigengene. } \item{main}{ main title for the plot. } \item{clusterMEs}{ logical: should the module eigengenes be ordered by their dendrogram? } \item{\dots}{ additional graphical parameters to the function \code{\link{pairs}} } } \details{ The function produces an NxN matrix of plots, where N is the number of eigengenes. In the upper triangle it plots pairwise scatterplots of module eigengenes (plus the trait \code{y}, if given).
On the diagonal it plots histograms of sample values for each eigengene. Below the diagonal, it displays the pairwise correlations of the eigengenes. } \value{ None. } \author{ Steve Horvath } \seealso{ \code{\link{pairs}} } \keyword{ hplot }% __ONLY ONE__ keyword per line WGCNA/man/modifiedBisquareWeights.Rd0000644000176200001440000001275414672545314017006 0ustar liggesusers\name{modifiedBisquareWeights} \alias{modifiedBisquareWeights} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Modified Bisquare Weights } \description{ Calculation of bisquare weights and the intermediate weight factors similar to those used in the calculation of biweight midcovariance and midcorrelation. The weights are designed such that outliers get smaller weights; the weights become zero for data points more than 9 median absolute deviations from the median. } \usage{ modifiedBisquareWeights( x, removedCovariates = NULL, pearsonFallback = TRUE, maxPOutliers = 0.05, outlierReferenceWeight = 0.1, groupsForMinWeightRestriction = NULL, minWeightInGroups = 0, maxPropUnderMinWeight = 1, defaultWeight = 1, getFactors = FALSE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x}{ A matrix of numeric observations with variables (features) in columns and observations (samples) in rows. Weights will be calculated separately for each column. } \item{removedCovariates}{ Optional matrix or data frame of variables that are to be regressed out of each column of \code{x} before calculating the weights. If given, must have the same number of rows as \code{x}. } \item{pearsonFallback}{ Logical: for columns of \code{x} that have zero median absolute deviation (MAD), should the appropriately scaled standard deviation be used instead?} \item{maxPOutliers}{ Optional numeric scalar between 0 and 1. Specifies the maximum proportion of outliers in each column, i.e., data with weights below \code{outlierReferenceWeight}.
} \item{outlierReferenceWeight}{A number between 0 and 1 specifying what is to be considered an outlier when calculating the proportion of outliers.} \item{groupsForMinWeightRestriction}{ An optional vector with length equal to the number of samples (rows) in \code{x} giving a categorical variable. The output factors and weights are adjusted such that in samples at each level of the variable, the weight is below \code{minWeightInGroups} in a fraction of samples that is at most \code{maxPropUnderMinWeight}.} \item{minWeightInGroups}{ A threshold weight, see \code{groupsForMinWeightRestriction} and details. } \item{maxPropUnderMinWeight}{ A proportion (number between 0 and 1). See \code{groupsForMinWeightRestriction} and details. } \item{defaultWeight}{Value used for weights that would be undefined or not finite, for example, when a column in \code{x} is constant.} \item{getFactors}{ Logical: should the intermediate weight factors be returned as well? } } \details{ Weights are calculated independently for each column of \code{x}. Denoting a column of \code{x} as \code{y}, the weights are calculated as \eqn{(1-u^2)^2}{(1-u^2)^2} where \code{u} is defined as \eqn{\min(1, |y-m|/(9MMAD))}{min(1, abs(y-m)/(9 * MMAD))}. Here \code{m} is the median of the column \code{y} and \code{MMAD} is the modified median absolute deviation. We call the expression \eqn{|y-m|/(9 MMAD)}{abs(y-m)/(9 * MMAD)} the weight factors. Note that outliers are observations with high (>1) weight factors but low (zero) weights. The calculation of \code{MMAD} starts with calculating the (unscaled) median absolute deviation of the column \code{y}. If the median absolute deviation is zero and \code{pearsonFallback} is \code{TRUE}, it is replaced by the standard deviation (multiplied by \code{qnorm(0.75)} to make it asymptotically consistent with MAD).
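The core weight formula just described can be sketched in base R. This is a simplified, hypothetical helper that omits the MMAD adjustments for outlier proportions and groups described next (it uses the plain unscaled MAD with the Pearson fallback):

```r
# Simplified bisquare weights for one column y (illustration only):
# u = min(1, |y - m| / (9 * MAD)), weight = (1 - u^2)^2.
bisquareWeightsSketch <- function(y) {
  m <- median(y, na.rm = TRUE)
  madUnscaled <- median(abs(y - m), na.rm = TRUE)  # unscaled MAD
  if (madUnscaled == 0)                            # Pearson fallback
    madUnscaled <- sd(y, na.rm = TRUE) * qnorm(0.75)
  u <- pmin(1, abs(y - m) / (9 * madUnscaled))     # capped weight factor
  (1 - u^2)^2                                      # zero for gross outliers
}
set.seed(3)
y <- c(rnorm(20), 100)   # one gross outlier
w <- bisquareWeightsSketch(y)
w[21]                    # the outlier receives weight 0
```

The function name \code{bisquareWeightsSketch} is an assumption for illustration; the package function additionally shrinks or grows MMAD to satisfy the two conditions described below.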
The following two conditions are then checked: (1) The proportion of weights below \code{outlierReferenceWeight} is at most \code{maxPOutliers} and (2) if \code{groupsForMinWeightRestriction} has non-zero length, then for each individual level in \code{groupsForMinWeightRestriction}, the proportion of samples with weights less than \code{minWeightInGroups} is at most \code{maxPropUnderMinWeight}. (If \code{groupsForMinWeightRestriction} is zero-length, the second condition is considered trivially satisfied.) If both conditions are met, \code{MMAD} equals the median absolute deviation, MAD. If either condition is not met, MMAD equals the lowest number for which both conditions are met. } \value{ When the input \code{getFactors} is \code{TRUE}, a list with two components: \item{weights}{A matrix of the same dimensions and \code{dimnames} as the input \code{x} giving the weights of the individual observations in \code{x}.} \item{factors}{A matrix of the same form as \code{weights} giving the weight factors.} When the input \code{getFactors} is \code{FALSE}, the function returns the matrix of weights. } \references{ A full description of the weight calculation can be found, e.g., in Methods section of Wang N, Langfelder P, et al (2022) Mapping brain gene coexpression in daytime transcriptomes unveils diurnal molecular networks and deciphers perturbation gene signatures. Neuron. 2022 Oct 19;110(20):3318-3338.e9. PMID: 36265442; PMCID: PMC9665885. \doi{10.1016/j.neuron.2022.09.028} Other references include, in reverse chronological order, Peter Langfelder, Steve Horvath (2012) Fast R Functions for Robust Correlations and Hierarchical Clustering. Journal of Statistical Software, 46(11), 1-17. \url{https://www.jstatsoft.org/v46/i11/} "Introduction to Robust Estimation and Hypothesis Testing", Rand Wilcox, Academic Press, 1997. "Data Analysis and Regression: A Second Course in Statistics", Mosteller and Tukey, Addison-Wesley, 1977, pp. 203-209. 
} \author{ Peter Langfelder } \seealso{ \code{bicovWeights} for a simpler, less flexible calculation. } \keyword{misc} WGCNA/man/binarizeCategoricalVariable.Rd0000644000176200001440000001244014012015545017560 0ustar liggesusers\name{binarizeCategoricalVariable} \alias{binarizeCategoricalVariable} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Turn a categorical variable into a set of binary indicators } \description{ Given a categorical variable, this function creates a set of indicator variables for the various possible sets of levels. } \usage{ binarizeCategoricalVariable( x, levelOrder = NULL, ignore = NULL, minCount = 3, val1 = 0, val2 = 1, includePairwise = TRUE, includeLevelVsAll = FALSE, dropFirstLevelVsAll = FALSE, dropUninformative = TRUE, namePrefix = "", levelSep = NULL, nameForAll = "all", levelSep.pairwise = if (length(levelSep)==0) ".vs." else levelSep, levelSep.vsAll = if (length(levelSep)==0) (if (nameForAll=="") "" else ".vs.") else levelSep, checkNames = FALSE, includeLevelInformation = TRUE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x}{ A vector with categorical values. } \item{levelOrder}{ Optional specification of the levels (unique values) of \code{x}. Defaults to sorted unique values of \code{x}, but can be used to only include a subset of the existing levels as well as to specify the order of the levels in the output variables. } \item{ignore}{ Optional specification of levels of \code{x} that are to be ignored. Note that the levels are ignored only when deciding which variables to include in the output; the samples with these values of \code{x} will be included in "all" in indicators of level vs. all others. } \item{minCount}{ Levels of \code{x} for which there are fewer than \code{minCount} elements will be ignored. } \item{val1}{ Value for the lower level in binary comparisons. } \item{val2}{ Value for the higher level in binary comparisons. 
} \item{includePairwise}{ Logical: should pairwise binary indicators be included? For each pair of levels, the indicator is \code{val1} for the lower level (earlier in \code{levelOrder}), \code{val2} for the higher level and \code{NA} otherwise. } \item{includeLevelVsAll}{ Logical: should binary indicators for each level be included? The indicator is \code{val2} where \code{x} equals the level and \code{val1} otherwise. } \item{dropFirstLevelVsAll}{ Logical: should the column representing first level vs. all be dropped? This makes the resulting matrix of indicators usable for regression models. } \item{dropUninformative}{ Logical: should uninformative (constant) columns be dropped? } \item{namePrefix}{ Prefix to be used in column names of the output. } \item{nameForAll}{ When naming columns that represent a level vs. all others, \code{nameForAll} will be used to represent all others. } \item{levelSep}{ Separator for levels to be used in column names of the output. If \code{NULL}, pairwise and level vs. all indicators will use different level separators set by \code{levelSep.pairwise} and \code{levelSep.vsAll}. } \item{levelSep.pairwise}{ Separator for levels to be used in column names for pairwise indicators in the output. } \item{levelSep.vsAll}{ Separator for levels to be used in column names for level vs. all indicators in the output. } \item{checkNames}{ Logical: should the names of the output be made into syntactically correct R language names? } \item{includeLevelInformation}{ Logical: should information about which levels are represented by which columns be included in the attributes of the output? } } \details{ The function creates two types of indicators. The first is one level (unique value) of \code{x} vs. all others, i.e., for a given level, the indicator is \code{val2} (usually 1) for all elements of \code{x} that equal the level, and \code{val1} (usually 0) otherwise. 
Column names for these indicators are the concatenation of \code{namePrefix}, the level, \code{levelSep.vsAll} and \code{nameForAll}. The level vs. all indicators are created for all levels that have at least \code{minCount} samples, are present in \code{levelOrder} (if it is non-NULL) and are not included in \code{ignore}. The second type of indicator encodes binary comparisons. For each pair of levels (both with at least \code{minCount} samples), the indicator is \code{val2} (usually 1) for the higher level and \code{val1} (usually 0) for the lower level. The level order is given by \code{levelOrder} (which defaults to the sorted levels of \code{x}), assumed to be sorted in increasing order. All levels with at least \code{minCount} samples that are included in \code{levelOrder} and not included in \code{ignore} are included. } \value{ A matrix containing the indicator variables, one in each column. When \code{includeLevelInformation} is \code{TRUE}, the attribute \code{includedLevels} is a table with one column per output column and two rows, giving the two levels (unique values of \code{x}) represented by the column. } \author{ Peter Langfelder } \seealso{ Variations and wrappers for this function: \code{\link{binarizeCategoricalColumns}} for binarizing several columns of a matrix or data frame } \examples{ set.seed(2); x = sample(c("A", "B", "C"), 15, replace = TRUE); out = binarizeCategoricalVariable(x, includePairwise = TRUE, includeLevelVsAll = TRUE); data.frame(x, out); attr(out, "includedLevels") # A different naming for level vs. all columns binarizeCategoricalVariable(x, includeLevelVsAll = TRUE, nameForAll = ""); } \keyword{misc} WGCNA/man/chooseOneHubInEachModule.Rd0000644000176200001440000000463714672545314016773 0ustar liggesusers\name{chooseOneHubInEachModule} \alias{chooseOneHubInEachModule} %- Also NEED an '\alias' for EACH other topic documented here.
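The pairwise indicators that \code{binarizeCategoricalVariable} produces can be sketched in base R. This hypothetical snippet assumes \code{val1 = 0}, \code{val2 = 1} and the default \code{".vs."} separator, and ignores the \code{minCount} and \code{ignore} options:

```r
# For each pair of levels: 1 for the higher level, 0 for the lower, NA otherwise.
x <- c("A", "B", "C", "A", "B")
levs <- sort(unique(x))
pairMat <- combn(levs, 2)            # one column per pair of levels
ind <- apply(pairMat, 2, function(p)
  ifelse(x == p[2], 1, ifelse(x == p[1], 0, NA)))
colnames(ind) <- paste(pairMat[2, ], pairMat[1, ], sep = ".vs.")
ind                                  # columns B.vs.A, C.vs.A, C.vs.B
```

The variable names here are invented for illustration; the package function also handles level ordering, minimum counts and the level-vs-all indicators shown in its own examples.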
\title{ Chooses a single hub gene in each module } \description{ chooseOneHubInEachModule returns one gene in each module with high connectivity, given a number of randomly selected genes to test. } \usage{ chooseOneHubInEachModule( datExpr, colorh, numGenes = 100, omitColors = "grey", power = 2, type = "signed", ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{datExpr}{ Gene expression data with rows as samples and columns as genes. } \item{colorh}{ The module assignments (color vector) corresponding to the columns in \code{datExpr}. } \item{numGenes}{ The number of random genes to select per module. A higher number of genes increases the accuracy of hub selection but slows down the function. } \item{omitColors}{ All colors in this character vector (default is "grey") are ignored by this function. } \item{power}{ Power to use for the adjacency network (default = 2). } \item{type}{ The type of network being analyzed. Common choices are "signed" (default) and "unsigned". With "signed", negative correlations count against a gene's connectivity, whereas with "unsigned" negative correlations are treated the same as positive correlations. } \item{\dots}{ Any other parameters accepted by the \code{\link{adjacency}} function } } \value{ The function outputs a character vector of genes, where the genes are the hubs picked for each module, and the names correspond to the module in which each gene is a hub. } \author{ Jeremy Miller } \examples{ ## Example: first simulate some data.
MEturquoise = sample(1:100,50) MEblue = sample(1:100,50) MEbrown = sample(1:100,50) MEyellow = sample(1:100,50) MEgreen = c(MEyellow[1:30], sample(1:100,20)) MEred = c(MEbrown [1:20], sample(1:100,30)) MEblack = c(MEblue [1:25], sample(1:100,25)) ME = data.frame(MEturquoise, MEblue, MEbrown, MEyellow, MEgreen, MEred, MEblack) dat1 = simulateDatExpr(ME,300,c(0.2,0.1,0.08,0.051,0.05,0.042,0.041,0.3), signed=TRUE) TOM1 = TOMsimilarityFromExpr(dat1$datExpr, networkType="signed") colnames(TOM1) <- rownames(TOM1) <- colnames(dat1$datExpr) tree1 <- tree2 <- fastcluster::hclust(as.dist(1-TOM1),method="average") colorh = labels2colors(dat1$allLabels) hubs = chooseOneHubInEachModule(dat1$datExpr, colorh) hubs } \keyword{misc} WGCNA/man/simpleConsensusCalculation.Rd0000644000176200001440000000360614022073754017535 0ustar liggesusers\name{simpleConsensusCalculation} \alias{simpleConsensusCalculation} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Simple calculation of a single consensus } \description{ This function calculates a single consensus from given individual data. } \usage{ simpleConsensusCalculation( individualData, consensusOptions, verbose = 1, indent = 0) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{individualData}{ Individual data from which the consensus is to be calculated. It can be either a list or a \code{\link{multiData}} structure in which each element is a numeric vector or array. } \item{consensusOptions}{ A list of class \code{ConsensusOptions} that contains options for the consensus calculation. A suitable list can be obtained by calling function \code{\link{newConsensusOptions}}. } \item{verbose}{Integer level of verbosity of diagnostic messages. Zero means silent, higher values make the output progressively more and more verbose. } \item{indent}{Indentation for diagnostic messages. Zero means no indentation, each unit adds two spaces.
} } \details{ Consensus is defined as the element-wise (also known as "parallel") quantile of the individual data at probability given by the \code{consensusQuantile} element of \code{consensusOptions}. } \value{ A numeric vector or array of the same dimensions as each element of \code{individualData}. } \references{ Consensus network analysis was originally described in Langfelder P, Horvath S. Eigengene networks for studying the relationships between co-expression modules. BMC Systems Biology 2007, 1:54 https://bmcsystbiol.biomedcentral.com/articles/10.1186/1752-0509-1-54 } \author{ Peter Langfelder } \seealso{ \code{\link{consensusCalculation}} for consensus calculation that can work with \code{\link{BlockwiseData}} and can calibrate data before calculating consensus. } \keyword{misc} WGCNA/man/newConsensusOptions.Rd0000644000176200001440000000501714012015545016221 0ustar liggesusers\name{newConsensusOptions} \alias{newConsensusOptions} \alias{ConsensusOptions} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Create a list holding consensus calculation options. } \description{ This function creates a list of class \code{ConsensusOptions} that holds options for consensus calculations. This list holds options for a single-level analysis. } \usage{ newConsensusOptions( calibration = c("full quantile", "single quantile", "none"), # Simple quantile scaling options calibrationQuantile = 0.95, sampleForCalibration = TRUE, sampleForCalibrationFactor = 1000, # Consensus definition consensusQuantile = 0, useMean = FALSE, setWeights = NULL, suppressNegativeResults = FALSE, # Name to prevent files clashes analysisName = "") } %- maybe also 'usage' for other objects documented here. \arguments{ \item{calibration}{ Calibration method.
One of \code{"full quantile", "single quantile", "none"} (or a unique abbreviation of one of them).} \item{calibrationQuantile}{ if \code{calibration} is \code{"single quantile"}, input data to a consensus calculation will be scaled such that their \code{calibrationQuantile} quantiles will agree. } \item{sampleForCalibration}{ if \code{TRUE}, calibration quantiles will be determined from a sample of network similarities. Note that using all data can double the memory footprint of the function and the function may fail. } \item{sampleForCalibrationFactor}{ Determines the number of samples for calibration: the number is \code{1/calibrationQuantile * sampleForCalibrationFactor}. Should be set well above 1 to ensure accuracy of the sampled quantile. } \item{consensusQuantile}{Quantile at which consensus is to be defined. See details. } \item{useMean}{ Logical: should the consensus be calculated using (weighted) mean rather than a quantile? } \item{setWeights}{ Optional specification of weights when \code{useMean} is \code{TRUE}. } \item{suppressNegativeResults}{Logical: should negative consensus results be replaced by 0? In a typical network construction, negative topological overlap values may result with \code{TOMType = "signed Nowick"}.} \item{analysisName}{ Optional character string naming the consensus analysis. Useful for identifying partial consensus calculation in hierarchical consensus analysis. } } \value{ A list of type \code{ConsensusOptions} that holds copies of the input arguments. } \author{ Peter Langfelder } \keyword{misc}% __ONLY ONE__ keyword per line WGCNA/man/hubGeneSignificance.Rd0000644000176200001440000000147014012015545016032 0ustar liggesusers\name{hubGeneSignificance} \alias{hubGeneSignificance} \title{ Hub gene significance } \description{ Calculate approximate hub gene significance for all modules in a network.
} \usage{ hubGeneSignificance(datKME, GS) } \arguments{ \item{datKME}{ a data frame (or a matrix-like object) containing eigengene-based connectivities of all genes in the network. } \item{GS}{ a vector with one entry for every gene containing its gene significance. } } \details{ In \code{datKME} rows correspond to genes and columns to modules. } \value{ A vector whose entries are the hub gene significances for each module. } \references{ Dong J, Horvath S (2007) Understanding Network Concepts in Modules, BMC Systems Biology 2007, 1:24 } \author{ Steve Horvath } \keyword{ misc }% __ONLY ONE__ keyword per line WGCNA/man/individualTOMs.Rd0000644000176200001440000001461114012015545015046 0ustar liggesusers\name{individualTOMs} \alias{individualTOMs} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Calculate individual correlation network matrices } \description{ This function calculates correlation network matrices (adjacencies or topological overlaps), after optionally first pre-clustering input data into blocks. } \usage{ individualTOMs( multiExpr, multiWeights = NULL, multiExpr.imputed = NULL, # Data checking options checkMissingData = TRUE, # Blocking options blocks = NULL, maxBlockSize = 5000, blockSizePenaltyPower = 5, nPreclusteringCenters = NULL, randomSeed = 54321, # Network construction options networkOptions, # Save individual TOMs? saveTOMs = TRUE, individualTOMFileNames = "individualTOM-Set\%s-Block\%b.RData", # Behaviour options collectGarbage = TRUE, verbose = 2, indent = 0) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{multiExpr}{ expression data in the multi-set format (see \code{\link{checkSets}}). A vector of lists, one per set. Each set must contain a component \code{data} that contains the expression data, with rows corresponding to samples and columns to genes or probes. } \item{multiWeights}{ optional observation weights in the same format (and dimensions) as \code{multiExpr}. 
These weights are used for correlation calculations with data in \code{multiExpr}.} \item{multiExpr.imputed}{ Optional version of \code{multiExpr} with missing data imputed. If not given and \code{multiExpr} contains missing data, they will be imputed using the function \code{\link[impute]{impute.knn}}. } \item{checkMissingData}{logical: should data be checked for excessive numbers of missing entries in genes and samples, and for genes with zero variance? See details. } \item{blocks}{ optional specification of blocks in which hierarchical clustering and module detection should be performed. If given, must be a numeric vector with one entry per gene of \code{multiExpr} giving the number of the block to which the corresponding gene belongs. } \item{maxBlockSize}{ integer giving maximum block size for module detection. Ignored if \code{blocks} above is non-NULL. Otherwise, if the number of genes in \code{multiExpr} exceeds \code{maxBlockSize}, genes will be pre-clustered into blocks whose size should not exceed \code{maxBlockSize}. } \item{blockSizePenaltyPower}{number specifying how strongly blocks should be penalized for exceeding the maximum size. Set to a large number or \code{Inf} if not exceeding maximum block size is very important.} \item{nPreclusteringCenters}{number of centers to be used in the preclustering. Defaults to the smaller of \code{nGenes/20} and \code{100*nGenes/maxBlockSize}, where \code{nGenes} is the number of genes (variables) in \code{multiExpr}.} \item{randomSeed}{ integer to be used as seed for the random number generator before the function starts. If a current seed exists, it is saved and restored upon exit. If \code{NULL} is given, the function will not save and restore the seed. } \item{networkOptions}{ A single list of class \code{\link{NetworkOptions}} giving options for network calculation for all of the networks, or a \code{\link{multiData}} structure containing one such list for each input data set.
} \item{saveTOMs}{logical: should individual TOMs be saved to disk (\code{TRUE}) or returned directly in the return value (\code{FALSE})?} \item{individualTOMFileNames}{character string giving the file names to save individual TOMs into. The following tags should be used to make the file names unique for each set and block: \code{\%s} will be replaced by the set number; \code{\%N} will be replaced by the set name (taken from \code{names(multiExpr)}) if it exists, otherwise by set number; \code{\%b} will be replaced by the block number. If the file names turn out to be non-unique, an error will be generated.} \item{collectGarbage}{ Logical: should garbage collection be called after each block calculation? This can be useful when the data are large, but could unnecessarily slow down calculation with small data. } \item{verbose}{ Integer level of verbosity. Zero means silent, higher values make the output progressively more and more verbose. } \item{indent}{ Indentation for diagnostic messages. Zero means no indentation, each unit adds two spaces. } } \details{ The function starts by optionally filtering out samples that have too many missing entries and genes that have either too many missing entries or zero variance in at least one set. Genes that are filtered out are excluded from the network calculations. If \code{blocks} is not given and the number of genes (columns) in \code{multiExpr} exceeds \code{maxBlockSize}, genes are pre-clustered into blocks using the function \code{\link{consensusProjectiveKMeans}}; otherwise all genes are treated in a single block. Any missing data in \code{multiExpr} will be imputed; if imputed data are already available, they can be supplied separately. For each block of genes, the network adjacency is constructed and (if requested) topological overlap is calculated in each set. The topological overlaps can be saved to disk as RData files, or returned directly within the return value (see below).
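The per-block topological overlap mentioned above can be sketched for the unsigned case. The snippet below is an illustrative base-R version of the standard TOM formula on toy data, not the optimized compiled code the package uses:

```r
# Unsigned TOM sketch:
# TOM[i,j] = (L[i,j] + A[i,j]) / (min(k[i], k[j]) + 1 - A[i,j]),
# where L counts shared neighbors and k is the connectivity.
set.seed(5)
expr <- matrix(rnorm(20 * 50), nrow = 20)  # 20 samples x 50 genes
A <- abs(cor(expr))^6                      # soft-thresholded adjacency
diag(A) <- 0
k <- colSums(A)                            # connectivity
L <- crossprod(A)                          # A is symmetric, so this is A squared
minK <- outer(k, k, pmin)
TOM <- (L + A) / (minK + 1 - A)
diag(TOM) <- 1
range(TOM)                                 # values lie in [0, 1]
```

Variable names here are assumptions for illustration; for real analyses the package computes this per block in compiled code, which is why saving each block's TOM to disk can be preferable to holding all matrices in memory.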
Note that the matrices can be big and returning them within the return value can quickly exhaust the system's memory. In particular, if the block-wise calculation is necessary, it is usually impossible to return all matrices in the return value. } \value{ A list with the following components: \item{blockwiseAdjacencies}{A \code{\link{multiData}} structure containing (possibly blockwise) network matrices for each input data set. The network matrices are stored as \code{\link{BlockwiseData}} objects.} \item{setNames}{A copy of \code{names(multiExpr)}.} \item{nSets}{Number of sets in \code{multiExpr}} \item{blockInfo}{A list of class \code{\link{BlockInformation}}, giving information about blocks and gene and sample filtering.} \item{networkOptions}{The input \code{networkOptions}, returned as a \code{\link{multiData}} structure with one entry per input data set.} } \author{ Peter Langfelder } \seealso{ Input arguments and output components of this function use \code{\link{multiData}}, \code{\link{NetworkOptions}}, \code{\link{BlockwiseData}}, and \code{\link{BlockInformation}}. Underlying functions of interest include \code{\link{consensusProjectiveKMeans}}, \code{\link{TOMsimilarityFromExpr}}. } \keyword{misc} WGCNA/man/setCorrelationPreservation.Rd0000644000176200001440000000426514012015545017556 0ustar liggesusers\name{setCorrelationPreservation} \alias{setCorrelationPreservation} \title{ Summary correlation preservation measure } \description{ Given consensus eigengenes, the function calculates the average correlation preservation pair-wise for all pairs of sets. } \usage{ setCorrelationPreservation( multiME, setLabels, excludeGrey = TRUE, greyLabel = "grey", method = "absolute") } \arguments{ \item{multiME}{ consensus module eigengenes in a multi-set format. A vector of lists with one list corresponding to each set. Each list must contain a component \code{data} that is a data frame whose columns are consensus module eigengenes. 
} \item{setLabels}{names to be used for the sets represented in \code{multiME}.} \item{excludeGrey}{logical: exclude the 'grey' eigengene from preservation measure?} \item{greyLabel}{module label corresponding to the 'grey' module. Usually this will be the character string \code{"grey"} if the labels are colors, and the number 0 if the labels are numeric.} \item{method}{ character string giving the correlation preservation measure to use. Recognized values are (unique abbreviations of) \code{"absolute"}, \code{"hyperbolic"}. } } \details{ For each pair of sets, the function calculates the average preservation of correlation among the eigengenes. Two preservation measures are available, the absolute preservation (high if the two correlations are similar and low if they are different), and the hyperbolically scaled preservation, which de-emphasizes preservation of low correlation values. } \value{ A data frame with each row and column corresponding to a set given in \code{multiME}, containing the pairwise average correlation preservation values. Names and rownames are set to entries of \code{setLabels}. } \references{ Langfelder P, Horvath S (2007) Eigengene networks for studying the relationships between co-expression modules. BMC Systems Biology 2007, 1:54 } \author{ Peter Langfelder } \seealso{ \code{\link{multiSetMEs}} for module eigengene calculation; \code{\link{plotEigengeneNetworks}} for eigengene network visualization. } \keyword{ misc }% __ONLY ONE__ keyword per line WGCNA/man/simulateDatExpr.Rd0000644000176200001440000001653614022073754015305 0ustar liggesusers\name{simulateDatExpr} \alias{simulateDatExpr} \title{ Simulation of expression data} \description{ Simulation of expression data with a customizable modular structure and several different types of noise.
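The gene-eigengene correlation dropoff controlled by the \code{minCor}, \code{maxCor} and \code{corPower} arguments (see Details below) can be pictured with a short sketch. The exact formula used internally is not shown here, so treat this as an assumed profile shape only: correlations fall from just below \code{maxCor} to \code{minCor}, and \code{corPower} greater than 1 makes the drop happen sooner.

```r
# Assumed illustration of the correlation-dropoff profile described in
# Details; not the package's internal formula.
simCorProfile <- function(nModGenes, minCor = 0.3, maxCor = 1, corPower = 1) {
  maxCor - (maxCor - minCor) * (seq_len(nModGenes) / nModGenes)^(1 / corPower)
}

round(simCorProfile(5), 2)   # 0.86 0.72 0.58 0.44 0.30
```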
} \usage{ simulateDatExpr( eigengenes, nGenes, modProportions, minCor = 0.3, maxCor = 1, corPower = 1, signed = FALSE, propNegativeCor = 0.3, geneMeans = NULL, backgroundNoise = 0.1, leaveOut = NULL, nSubmoduleLayers = 0, nScatteredModuleLayers = 0, averageNGenesInSubmodule = 10, averageExprInSubmodule = 0.2, submoduleSpacing = 2, verbose = 1, indent = 0) } \arguments{ \item{eigengenes}{ a data frame containing the seed eigengenes for the simulated modules. Rows correspond to samples and columns to modules. } \item{nGenes}{ total number of genes to be simulated. } \item{modProportions}{ a numeric vector with length equal the number of eigengenes in \code{eigengenes} plus one, containing fractions of the total number of genes to be put into each of the modules and into the "grey module", which means genes not related to any of the modules. See details. } \item{minCor}{ minimum correlation of module genes with the corresponding eigengene. See details. } \item{maxCor}{ maximum correlation of module genes with the corresponding eigengene. See details. } \item{corPower}{ controls the dropoff of gene-eigengene correlation. See details. } \item{signed}{ logical: should the genes be simulated as belonging to a signed network? If \code{TRUE}, all genes will be simulated to have positive correlation with the eigengene. If \code{FALSE}, a proportion given by \code{propNegativeCor} will be simulated with negative correlations of the same absolute values. } \item{propNegativeCor}{ proportion of genes to be simulated with negative gene-eigengene correlations. Only effective if \code{signed} is \code{FALSE}. } \item{geneMeans}{ optional vector of length \code{nGenes} giving desired mean expression for each gene. If not given, the returned expression profiles will have mean zero. } \item{backgroundNoise}{ amount of background noise to be added to the simulated expression data. 
} \item{leaveOut}{ optional specification of modules that should be left out of the simulation, that is their genes will be simulated as unrelated ("grey"). This can be useful when simulating several sets, in some of which a module is present while in others it is absent. } \item{nSubmoduleLayers}{ number of layers of ordered submodules to be added. See details. } \item{nScatteredModuleLayers}{ number of layers of scattered submodules to be added. See details. } \item{averageNGenesInSubmodule}{ average number of genes in a submodule. See details. } \item{averageExprInSubmodule}{ average strength of submodule expression vectors. } \item{submoduleSpacing}{ a number giving submodule spacing: this multiple of the submodule size will lie between the submodule and the next one. } \item{verbose}{ integer level of verbosity. Zero means silent, higher values make the output progressively more and more verbose. } \item{indent}{ indentation for diagnostic messages. Zero means no indentation, each unit adds two spaces. } } \details{ The given \code{eigengenes} can be unrelated, or they can exhibit non-trivial correlations. Each module is simulated separately from others. The expression profiles are chosen such that their correlations with the eigengene run from just below \code{maxCor} to \code{minCor} (hence \code{minCor} must be between 0 and 1, not including the bounds). The parameter \code{corPower} can be chosen to control the behaviour of the simulated correlation with the gene index; values higher than 1 will result in the correlation approaching \code{minCor} faster, and values lower than 1 slower. Numbers of genes in each module are specified (as fractions of the total number of genes \code{nGenes}) by \code{modProportions}. The last entry in \code{modProportions} corresponds to the genes that will be simulated as unrelated to anything else ("grey" genes). The proportions must add up to 1 or less.
If the sum is less than one, the remaining genes will be partitioned into groups and simulated to be "close" to the proper modules, that is with small but non-zero correlations (between \code{minCor} and 0) with the module eigengene. If \code{signed} is set \code{FALSE}, the correlation for some of the module genes is chosen negative (but the absolute values remain the same as they would be for positively correlated genes). To ensure consistency for simulations of multiple sets, the indices of the negatively correlated genes are fixed and distributed evenly. In addition to the primary module structure, a secondary structure can be optionally simulated. Modules in the secondary structure have sizes chosen from an exponential distribution with mean equal \code{averageNGenesInSubmodule}. Expression vectors simulated in the secondary structure are simulated with expected standard deviation chosen from an exponential distribution with mean equal \code{averageExprInSubmodule}; the higher this coefficient, the more pronounced the submodules will be within the main modules. The secondary structure can be simulated in several layers; their number is given by \code{nSubmoduleLayers}. Genes in these submodules are ordered in the same order as in the main modules. In addition to the ordered submodule structure, a scattered submodule structure can be simulated as well. This structure can be viewed as noise that tends to correlate random groups of genes. The size and effect parameters are the same as for the ordered submodules, and the number of layers added is controlled by \code{nScatteredModuleLayers}. } \value{ A list with the following components: \item{datExpr}{ simulated expression data in a data frame whose columns correspond to genes and rows to samples. } \item{setLabels}{ simulated module assignment. Module labels are numeric, starting from 1. Genes simulated to be outside of proper modules have label 0.
Modules that are left out (specified in \code{leaveOut}) are indicated as 0 here. } \item{allLabels}{ simulated module assignment. Genes that belong to leftout modules (specified in \code{leaveOut}) are indicated by their would-be assignment here. } \item{labelOrder}{ a vector specifying the order in which labels correspond to the given eigengenes, that is \code{labelOrder[1]} is the label assigned to the module whose seed is \code{eigengenes[, 1]}, etc. } } \references{ A short description of the simulation method can also be found in the Supplementary Material to the article Langfelder P, Horvath S (2007) Eigengene networks for studying the relationships between co-expression modules. BMC Systems Biology 2007, 1:54. The material is posted at http://horvath.genetics.ucla.edu/html/CoexpressionNetwork/EigengeneNetwork/SupplementSimulations.pdf. } \author{ Peter Langfelder } \seealso{ \code{\link{simulateEigengeneNetwork}} for a simulation of eigengenes with a given causal structure; \code{\link{simulateModule}} for simulations of individual modules; \code{\link{simulateDatExpr5Modules}} for a simplified interface to expression simulations; \code{\link{simulateMultiExpr}} for a simulation of several related data sets. } \keyword{ misc }% __ONLY ONE__ keyword per line WGCNA/man/pruneAndMergeConsensusModules.Rd0000644000176200001440000001202514012015545020146 0ustar liggesusers\name{pruneAndMergeConsensusModules} \alias{pruneAndMergeConsensusModules} \title{ Iterative pruning and merging of (hierarchical) consensus modules } \description{ This function prunes genes with low consensus eigengene-based intramodular connectivity (kME) from modules and merges modules whose consensus similarity is high. The process is repeated until the modules become stable.
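The iterate-until-stable behavior described above can be sketched schematically. In this sketch, \code{pruneOnce} and \code{mergeOnce} are hypothetical placeholders standing in for the real pruning and merging steps; this is an illustration of the control flow only, not the package implementation.

```r
# Schematic sketch of the prune-and-merge iteration; pruneOnce()/mergeOnce()
# are hypothetical placeholders, not WGCNA functions.
iteratePruneMerge <- function(labels, pruneOnce, mergeOnce, maxIter = 100) {
  for (i in seq_len(maxIter)) {
    newLabels <- mergeOnce(pruneOnce(labels))
    if (identical(newLabels, labels)) break  # stable: no gene changed modules
    labels <- newLabels
  }
  labels
}

# Toy run: "pruning" unassigns labels above 2, "merging" does nothing.
iteratePruneMerge(c(1, 2, 3, 3),
                  pruneOnce = function(l) ifelse(l > 2, 0, l),
                  mergeOnce = identity)
# 1 2 0 0
```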
} \usage{ pruneAndMergeConsensusModules( multiExpr, multiWeights = NULL, multiExpr.imputed = NULL, labels, unassignedLabel = if (is.numeric(labels)) 0 else "grey", networkOptions, consensusTree, # Pruning options minModuleSize, minCoreKMESize = minModuleSize/3, minCoreKME = 0.5, minKMEtoStay = 0.2, # Module eigengene calculation and merging options impute = TRUE, trapErrors = FALSE, calibrateMergingSimilarities = FALSE, mergeCutHeight = 0.15, # Behavior iterate = TRUE, collectGarbage = FALSE, getDetails = TRUE, verbose = 1, indent=0) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{multiExpr}{ Expression data in the multi-set format (see \code{\link{checkSets}}). A vector of lists, one per set. Each set must contain a component \code{data} that contains the expression data, with rows corresponding to samples and columns to genes or probes. } \item{multiWeights}{ optional observation weights in the same format (and dimensions) as \code{multiExpr}. These weights are used for correlation calculations with data in \code{multiExpr}.} \item{multiExpr.imputed}{If \code{multiExpr} contains missing data, this argument can be used to supply the expression data with missing data imputed. If not given, the \code{\link[impute]{impute.knn}} function will be used to impute the missing data.} \item{labels}{ A vector (numeric, character or a factor) giving module labels for each variable (gene) in \code{multiExpr}. } \item{unassignedLabel}{ The label (value in \code{labels}) that represents unassigned genes. Modules with this label will not enter the module eigengene clustering and will not be merged with other modules.} \item{networkOptions}{ A single list of class \code{\link{NetworkOptions}} giving options for network calculation for all of the networks, or a \code{\link{multiData}} structure containing one such list for each input data set. } \item{consensusTree}{ A list of class \code{\link{ConsensusTree}} specifying the consensus calculation.
} \item{minModuleSize}{Minimum number of genes in a module. Modules that have fewer genes (after trimming) will be removed (i.e., their genes will be given the unassigned label).} \item{minCoreKME}{ a number between 0 and 1. If a detected module does not have at least \code{minCoreKMESize} genes with consensus eigengene connectivity at least \code{minCoreKME}, the module is disbanded (its genes are unlabeled).} \item{minCoreKMESize}{ see \code{minCoreKME} above. } \item{minKMEtoStay}{ genes whose consensus eigengene connectivity to their module eigengene is lower than \code{minKMEtoStay} are removed from the module.} \item{impute}{ logical: should imputation be used for module eigengene calculation? See \code{\link{moduleEigengenes}} for more details. } \item{trapErrors}{ logical: should errors in calculations be trapped? } \item{calibrateMergingSimilarities}{ Logical: should module eigengene similarities be calibrated before calculating the consensus? Although calibration is in principle desirable, the calibration methods currently available assume large data and do not work very well on eigengene similarities. } \item{mergeCutHeight}{ Dendrogram cut height for module merging. } \item{iterate}{ Logical: should the pruning and merging process be iterated until no changes occur? If \code{FALSE}, only one iteration will be carried out. } \item{collectGarbage}{ Logical: should garbage be collected after some of the memory-intensive steps? } \item{getDetails}{ Logical: should certain intermediate results be returned? These include labels and module merging information at each iteration (see return value). } \item{verbose}{ integer level of verbosity. Zero means silent, higher values make the output progressively more and more verbose. } \item{indent}{ indentation for diagnostic messages. Zero means no indentation, each unit adds two spaces. } } %\details{ %Each iteration %} \value{ If input \code{getDetails} is \code{FALSE}, a vector of the resulting module labels.
If \code{getDetails} is \code{TRUE}, a list with these components: \item{labels}{The resulting module labels} \item{details}{A list. The first component, named \code{originalLabels}, contains a copy of the input labels. The following components are named \code{Iteration.1}, \code{Iteration.2} etc and contain, for each iteration, components \code{prunedLabels} (the result of pruning in that iteration) and \code{mergeInfo} (result of the call to \code{\link{hierarchicalMergeCloseModules}} in that iteration).} } \author{ Peter Langfelder } \seealso{ The underlying functions \code{\link{pruneConsensusModules}} and \code{\link{hierarchicalMergeCloseModules}}. } \keyword{misc} WGCNA/man/qvalue.Rd0000644000176200001440000000653614012015545013457 0ustar liggesusers\name{qvalue} \alias{qvalue} \title{Estimate the q-values for a given set of p-values} \description{ Estimate the q-values for a given set of p-values. The q-value of a test measures the proportion of false positives incurred (called the false discovery rate) when that particular test is called significant. } \usage{ qvalue(p, lambda=seq(0,0.90,0.05), pi0.method="smoother", fdr.level=NULL, robust=FALSE, smooth.df=3, smooth.log.pi0=FALSE) } \arguments{ \item{p}{A vector of p-values (only necessary input)} \item{lambda}{The value of the tuning parameter to estimate \eqn{\pi_0}{pi_0}. Must be in [0,1). Optional, see Storey (2002).} \item{pi0.method}{Either "smoother" or "bootstrap"; the method for automatically choosing tuning parameter in the estimation of \eqn{\pi_0}{pi_0}, the proportion of true null hypotheses} \item{fdr.level}{A level at which to control the FDR. Must be in (0,1]. Optional; if this is selected, a vector of TRUE and FALSE is returned that specifies whether each q-value is less than fdr.level or not.} \item{robust}{An indicator of whether it is desired to make the estimate more robust for small p-values and a direct finite sample estimate of pFDR. 
Optional.} \item{smooth.df}{Number of degrees-of-freedom to use when estimating \eqn{\pi_0}{pi_0} with a smoother. Optional.} \item{smooth.log.pi0}{If TRUE and \code{pi0.method} = "smoother", \eqn{\pi_0}{pi_0} will be estimated by applying a smoother to a scatterplot of \eqn{log} \eqn{\pi_0}{pi_0} estimates against the tuning parameter \eqn{\lambda}{lambda}. Optional.} } \details{ If no options are selected, then the method used to estimate \eqn{\pi_0}{pi_0} is the smoother method described in Storey and Tibshirani (2003). The bootstrap method is described in Storey, Taylor & Siegmund (2004). } \value{ A list containing: \item{call}{function call} \item{pi0}{an estimate of the proportion of null p-values} \item{qvalues}{a vector of the estimated q-values (the main quantity of interest)} \item{pvalues}{a vector of the original p-values} \item{significant}{if fdr.level is specified, an indicator of whether the q-value fell below fdr.level (taking all such q-values to be significant controls FDR at level fdr.level)} } \note{This function is adapted from package qvalue. The reason we provide our own copy is that package qvalue contains additional functionality that relies on Tcl/Tk which has led to multiple problems. Our copy does not require Tcl/Tk.} \references{ Storey JD. (2002) A direct approach to false discovery rates. Journal of the Royal Statistical Society, Series B, 64: 479-498. Storey JD and Tibshirani R. (2003) Statistical significance for genome-wide experiments. Proceedings of the National Academy of Sciences, 100: 9440-9445. Storey JD. (2003) The positive false discovery rate: A Bayesian interpretation and the q-value. Annals of Statistics, 31: 2013-2035. Storey JD, Taylor JE, and Siegmund D. (2004) Strong control, conservative point estimation, and simultaneous conservative consistency of false discovery rates: A unified approach. Journal of the Royal Statistical Society, Series B, 66: 187-205. } \author{John D.
Storey \email{jstorey@u.washington.edu}, adapted for WGCNA by Peter Langfelder} \keyword{misc} WGCNA/man/consensusTreeInputs.Rd0000644000176200001440000000170314012015545016214 0ustar liggesusers\name{consensusTreeInputs} \alias{consensusTreeInputs} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Get all elementary inputs in a consensus tree } \description{ This function returns a flat vector or a structured list of elementary inputs to a given consensus tree, that is, inputs that are not consensus trees themselves. } \usage{ consensusTreeInputs(consensusTree, flatten = TRUE) } \arguments{ \item{consensusTree}{ A consensus tree of class \code{\link{ConsensusTree}}. } \item{flatten}{ Logical; if \code{TRUE}, the function returns a flat character vector of inputs; otherwise, a list whose structure reflects the structure of \code{consensusTree}. } } \value{ A character vector of inputs or a list of inputs whose structure reflects the structure of \code{consensusTree}. } \author{ Peter Langfelder } \seealso{ \code{\link{newConsensusTree}} for creating consensus trees. } \keyword{misc}% __ONLY ONE__ keyword per line WGCNA/man/pickSoftThreshold.Rd0000644000176200001440000001253714012015545015617 0ustar liggesusers\name{pickSoftThreshold} \alias{pickSoftThreshold} \alias{pickSoftThreshold.fromSimilarity} \title{ Analysis of scale free topology for soft-thresholding } \description{ Analysis of scale free topology for multiple soft thresholding powers. The aim is to help the user pick an appropriate soft-thresholding power for network construction. 
} \usage{ pickSoftThreshold( data, dataIsExpr = TRUE, weights = NULL, RsquaredCut = 0.85, powerVector = c(seq(1, 10, by = 1), seq(12, 20, by = 2)), removeFirst = FALSE, nBreaks = 10, blockSize = NULL, corFnc = cor, corOptions = list(use = 'p'), networkType = "unsigned", moreNetworkConcepts = FALSE, gcInterval = NULL, verbose = 0, indent = 0) pickSoftThreshold.fromSimilarity( similarity, RsquaredCut = 0.85, powerVector = c(seq(1, 10, by = 1), seq(12, 20, by = 2)), removeFirst = FALSE, nBreaks = 10, blockSize = 1000, moreNetworkConcepts=FALSE, verbose = 0, indent = 0) } \arguments{ \item{data}{ expression data in a matrix or data frame. Rows correspond to samples and columns to genes. } \item{dataIsExpr}{ logical: should the data be interpreted as expression (or other numeric) data, or as a similarity matrix of network nodes? } \item{weights}{optional observation weights for \code{data} to be used in correlation calculation. A matrix of the same dimensions as \code{data}, containing non-negative weights. Only used with Pearson correlation.} \item{similarity}{ similarity matrix: a symmetric matrix with entries between 0 and 1 and unit diagonal. The only transformation applied to \code{similarity} is raising it to a power. } \item{RsquaredCut}{ desired minimum scale free topology fitting index \eqn{R^2}. } \item{powerVector}{ a vector of soft thresholding powers for which the scale free topology fit indices are to be calculated. } \item{removeFirst}{ should the first bin be removed from the connectivity histogram? } \item{nBreaks}{ number of bins in connectivity histograms } \item{blockSize}{ block size into which the calculation of connectivity should be broken up. If not given, a suitable value will be calculated using function \code{blockSize} and printed if \code{verbose>0}. If R runs into memory problems, decrease this value. } \item{corFnc}{the correlation function to be used in adjacency calculation.
} \item{corOptions}{ a list giving further options to the correlation function specified in \code{corFnc}. } \item{networkType}{ network type. Allowed values are (unique abbreviations of) \code{"unsigned"}, \code{"signed"}, \code{"signed hybrid"}. See \code{\link{adjacency}}. } \item{moreNetworkConcepts}{logical: should additional network concepts be calculated? If \code{TRUE}, the function will calculate how the network density, the network heterogeneity, and the network centralization depend on the power. For the definition of these additional network concepts, see Horvath and Dong (2008), PLoS Comput Biol. } \item{gcInterval}{a number specifying an interval (in terms of individual genes) in which garbage collection will be performed. The actual interval will never be less than \code{blockSize}.} \item{verbose}{ integer level of verbosity. Zero means silent, higher values make the output progressively more and more verbose. } \item{indent}{ indentation for diagnostic messages. Zero means no indentation, each unit adds two spaces. } } \details{ The function calculates weighted networks either by interpreting \code{data} directly as similarity, or first transforming it to similarity of the type specified by \code{networkType}. The weighted networks are obtained by raising the similarity to the powers given in \code{powerVector}. For each power the scale free topology fit index is calculated and returned along with other information on connectivity. On systems with multiple cores or processors, the function pickSoftThreshold takes advantage of parallel processing if the function \code{\link{enableWGCNAThreads}} has been called to allow parallel processing and set up the parallel calculation back-end. } \value{ A list with the following components: \item{powerEstimate}{ estimate of an appropriate soft-thresholding power: the lowest power for which the scale free topology fit \eqn{R^2} exceeds \code{RsquaredCut}.
If \eqn{R^2} is below \code{RsquaredCut} for all powers, \code{NA} is returned. } \item{fitIndices}{ a data frame containing the fit indices for scale free topology. The columns contain the soft-thresholding power, adjusted \eqn{R^2} for the linear fit, the linear coefficient, adjusted \eqn{R^2} for a more complicated fit model, mean connectivity, median connectivity and maximum connectivity. If input \code{moreNetworkConcepts} is \code{TRUE}, the data frame also contains 3 additional columns with network density, centralization, and heterogeneity.} } \references{ Bin Zhang and Steve Horvath (2005) "A General Framework for Weighted Gene Co-Expression Network Analysis", Statistical Applications in Genetics and Molecular Biology: Vol. 4: No. 1, Article 17 Horvath S, Dong J (2008) Geometric Interpretation of Gene Coexpression Network Analysis. PLoS Comput Biol 4(8): e1000117 } \author{ Steve Horvath and Peter Langfelder } \seealso{ \code{\link{adjacency}}, \code{\link{softConnectivity}} } \keyword{misc} WGCNA/man/softConnectivity.Rd0000644000176200001440000000612314012015545015524 0ustar liggesusers\name{softConnectivity} \alias{softConnectivity} \alias{softConnectivity.fromSimilarity} \title{ Calculates connectivity of a weighted network. } \description{ Given expression data or a similarity, the function constructs the adjacency matrix and for each node calculates its connectivity, that is the sum of the adjacency to the other nodes. } \usage{ softConnectivity( datExpr, corFnc = "cor", corOptions = "use = 'p'", weights = NULL, type = "unsigned", power = if (type == "signed") 15 else 6, blockSize = 1500, minNSamples = NULL, verbose = 2, indent = 0) softConnectivity.fromSimilarity( similarity, type = "unsigned", power = if (type == "signed") 15 else 6, blockSize = 1500, verbose = 2, indent = 0) } \arguments{ \item{datExpr}{ a data frame containing the expression data, with rows corresponding to samples and columns to genes.
} \item{similarity}{ a similarity matrix: a square symmetric matrix with entries between -1 and 1. } \item{corFnc}{ character string giving the correlation function to be used for the adjacency calculation. Recommended choices are \code{"cor"} and \code{"bicor"}, but other functions can be used as well. } \item{corOptions}{ character string giving further options to be passed to the correlation function. } \item{weights}{optional observation weights for \code{datExpr} to be used in correlation calculation. A matrix of the same dimensions as \code{datExpr}, containing non-negative weights. Only used with Pearson correlation.} \item{type}{network type. Allowed values are (unique abbreviations of) \code{"unsigned"}, \code{"signed"}, \code{"signed hybrid"}. } \item{power}{ soft thresholding power. } \item{blockSize}{ block size in which adjacency is to be calculated. Too low (say below 100) may make the calculation inefficient, while too high may cause R to run out of physical memory and slow down the computer. Should be chosen such that an array of doubles of size (number of genes) * (block size) fits into available physical memory.} \item{minNSamples}{ minimum number of samples available for the calculation of adjacency for the adjacency to be considered valid. If not given, defaults to the greater of \code{..minNSamples} (currently 4) and number of samples divided by 3. If the number of samples falls below this threshold, the connectivity of the corresponding gene will be returned as \code{NA}. } \item{verbose}{ integer level of verbosity. Zero means silent, higher values make the output progressively more and more verbose. } \item{indent}{ indentation for diagnostic messages. Zero means no indentation, each unit adds two spaces. } } \value{ A vector with one entry per gene giving the connectivity of each gene in the weighted network. 
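The quantity returned here can be approximated in a few lines of base R for the unsigned case: raise the absolute correlation to the soft power and sum each column, excluding the diagonal. This sketch deliberately ignores the blocking, observation weights, and \code{minNSamples} handling of the real function.

```r
# Base-R sketch of unsigned soft connectivity; illustrative only, the real
# softConnectivity() adds blocking, weights, and missing-data handling.
softConnectivitySketch <- function(datExpr, power = 6) {
  adj <- abs(cor(datExpr, use = "pairwise.complete.obs"))^power
  diag(adj) <- 0          # exclude self-adjacency from the sum
  colSums(adj)
}

set.seed(1)
m <- matrix(rnorm(40), nrow = 10, ncol = 4)  # 10 samples, 4 genes
k <- softConnectivitySketch(m)
length(k)  # 4, one connectivity value per gene
```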
} \references{ Bin Zhang and Steve Horvath (2005) "A General Framework for Weighted Gene Co-Expression Network Analysis", Statistical Applications in Genetics and Molecular Biology: Vol. 4: No. 1, Article 17 } \author{ Steve Horvath } \seealso{ \code{\link{adjacency}} } \keyword{ misc } WGCNA/man/stdErr.Rd0000644000176200001440000000060514012015545013414 0ustar liggesusers\name{stdErr} \alias{stdErr} \title{ Standard error of the mean of a given vector. } \usage{ stdErr(x) } \description{ Returns the standard error of the mean of a given vector. Missing values are ignored. } \arguments{ \item{x}{ a numeric vector } } \value{ Standard error of the mean of x. } \author{ Steve Horvath } \keyword{ misc }% __ONLY ONE__ keyword per line WGCNA/man/addTraitToMEs.Rd0000644000176200001440000000240114012015545014611 0ustar liggesusers\name{addTraitToMEs} \alias{addTraitToMEs} \title{ Add trait information to multi-set module eigengene structure } \description{ Adds trait information to multi-set module eigengene structure. } \usage{ addTraitToMEs(multiME, multiTraits) } \arguments{ \item{multiME}{ Module eigengenes in multi-set format. A vector of lists, one list per set. Each list must contain an element named \code{data} that is a data frame with module eigengenes. } \item{multiTraits}{ Microarray sample trait(s) in multi-set format. A vector of lists, one list per set. Each list must contain an element named \code{data} that is a data frame in which each column corresponds to a trait, and each row to an individual sample.} } \details{ The function simply \code{cbind}'s the module eigengenes and traits for each set. The number of sets and numbers of samples in each set must be consistent between \code{multiMEs} and \code{multiTraits}. } \value{ A multi-set structure analogous to the input: a vector of lists, one list per set. Each list will contain a component \code{data} with the merged eigengenes and traits for the corresponding set. 
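The Details section above notes that the function essentially \code{cbind}'s eigengenes and traits per set. A minimal sketch of that behavior follows; \code{addTraitToMEsSketch} is an illustrative stand-in, and the real function additionally checks that the numbers of sets and samples agree.

```r
# Minimal sketch of the per-set cbind described in Details; the actual
# addTraitToMEs() also validates consistency of sets and sample counts.
addTraitToMEsSketch <- function(multiME, multiTraits)
  mapply(function(me, tr) list(data = cbind(me$data, tr$data)),
         multiME, multiTraits, SIMPLIFY = FALSE)

mes    <- list(list(data = data.frame(MEblue = 1:3)))
traits <- list(list(data = data.frame(weight = c(10, 12, 11))))
colnames(addTraitToMEsSketch(mes, traits)[[1]]$data)
# "MEblue" "weight"
```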
} \author{ Peter Langfelder } \seealso{ \code{\link{checkSets}}, \code{\link{moduleEigengenes}} } \keyword{ misc } WGCNA/man/automaticNetworkScreening.Rd0000644000176200001440000000477314012015545017361 0ustar liggesusers\name{automaticNetworkScreening} \alias{automaticNetworkScreening} \title{ One-step automatic network gene screening } \description{ This function performs gene screening based on a given trait and gene network properties } \usage{ automaticNetworkScreening( datExpr, y, power = 6, networkType = "unsigned", detectCutHeight = 0.995, minModuleSize = min(20, ncol(as.matrix(datExpr))/2), datME = NULL, getQValues = TRUE, ...) } \arguments{ \item{datExpr}{ data frame containing the expression data, columns corresponding to genes and rows to samples } \item{y}{ vector containing trait values for all samples in \code{datExpr}} \item{power}{ soft thresholding power used in network construction } \item{networkType}{ character string specifying network type. Allowed values are (unique abbreviations of) \code{"unsigned"}, \code{"signed"}, \code{"hybrid"}. } \item{detectCutHeight}{ cut height of the gene hierarchical clustering dendrogram. See \code{cutreeDynamic} for details. } \item{minModuleSize}{ minimum module size to be used in module detection procedure. } \item{datME}{ optional specification of module eigengenes. A data frame whose columns are the module eigengenes. If given, module analysis will not be performed. } \item{getQValues}{logical: should q-values (local FDR) be calculated? } \item{...}{other arguments to the module identification function \code{\link{blockwiseModules}}} } \details{ Network screening is a method for identifying genes that have a high gene significance and are members of important modules at the same time. If \code{datME} is given, the function calls \code{\link{networkScreening}} with the default parameters. 
If \code{datME} is not given, module eigengenes are first calculated using network analysis based on supplied parameters. } \value{ A list with the following components: \item{networkScreening}{a data frame containing results of the network screening procedure. See \code{\link{networkScreening}} for more details.} \item{datME}{ calculated module eigengenes (or a copy of the input \code{datME}, if given).} \item{hubGeneSignificance}{ hub gene significance for all calculated modules. See \code{\link{hubGeneSignificance}}. } } \author{ Steve Horvath } \seealso{ \code{\link{networkScreening}}, \code{\link{hubGeneSignificance}}, \code{\link[dynamicTreeCut]{cutreeDynamic}} } \keyword{ misc }% __ONLY ONE__ keyword per line WGCNA/man/fundamentalNetworkConcepts.Rd0000644000176200001440000000561014012015545017521 0ustar liggesusers\name{fundamentalNetworkConcepts} \alias{fundamentalNetworkConcepts} \title{ Calculation of fundamental network concepts from an adjacency matrix. } \description{ This function computes fundamental network concepts (also known as network indices or statistics) based on an adjacency matrix and optionally a node significance measure. These network concepts are defined for any symmetric adjacency matrix (weighted and unweighted). The network concepts are described in Dong and Horvath (2007) and Horvath and Dong (2008). Fundamental network concepts are defined as a function of the off-diagonal elements of an adjacency matrix \code{adj} and/or a node significance measure \code{GS}. } \usage{ fundamentalNetworkConcepts(adj, GS = NULL) } \arguments{ \item{adj}{ an adjacency matrix, that is a square, symmetric matrix with entries between 0 and 1} \item{GS}{ a node significance measure: a vector of the same length as the number of rows (and columns) of the adjacency matrix.
} } \value{ A list with the following components: \item{Connectivity}{a numerical vector that reports the connectivity (also known as degree) of each node. This fundamental network concept is also known as whole network connectivity. One can also define the scaled connectivity \code{K=Connectivity/max(Connectivity)} which is used for computing the hub gene significance.} \item{ScaledConnectivity}{the \code{Connectivity} vector scaled by the highest connectivity in the network, i.e., \code{Connectivity/max(Connectivity)}.} \item{ClusterCoef}{a numerical vector that reports the cluster coefficient for each node. This fundamental network concept measures the cliquishness of each node.} \item{MAR}{a numerical vector that reports the maximum adjacency ratio for each node. \code{MAR[i]} equals 1 if all non-zero adjacencies between node \code{i} and the remaining network nodes equal 1. This fundamental network concept is always 1 for nodes of an unweighted network. This is a useful measure for weighted networks since it allows one to determine whether a node has high connectivity because of many weak connections (small MAR) or because of strong (but few) connections (high MAR), see Horvath and Dong 2008. } \item{Density}{the density of the network. } \item{Centralization}{the centralization of the network. } \item{Heterogeneity}{the heterogeneity of the network. } } \references{ Dong J, Horvath S (2007) Understanding Network Concepts in Modules, BMC Systems Biology 2007, 1:24 Horvath S, Dong J (2008) Geometric Interpretation of Gene Coexpression Network Analysis. PLoS Comput Biol 4(8): e1000117 } \author{ Steve Horvath } \seealso{ \code{\link{conformityBasedNetworkConcepts}} for calculation of conformity based network concepts for a network adjacency matrix; \code{\link{networkConcepts}}, for calculation of conformity based and eigennode based network concepts for a correlation network. 
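Most of the quantities returned above can be checked against the published formulas with a few lines of base R. The following sketch illustrates the formulas in Horvath and Dong (2008) on a toy 3-node weighted network; it is an illustration only, not the package implementation, and the cluster coefficient is omitted for brevity:

```r
# Toy symmetric weighted adjacency matrix with entries in [0, 1]
adj <- matrix(c(1.0, 0.8, 0.2,
                0.8, 1.0, 0.4,
                0.2, 0.4, 1.0), nrow = 3, byrow = TRUE)
n <- nrow(adj)
A <- adj
diag(A) <- 0                              # concepts use off-diagonal entries only

k   <- rowSums(A)                         # Connectivity (degree)
K   <- k / max(k)                         # ScaledConnectivity
MAR <- rowSums(A^2) / rowSums(A)          # maximum adjacency ratio
Density        <- sum(A) / (n * (n - 1))  # mean off-diagonal adjacency
Centralization <- n / (n - 2) * (max(k) / (n - 1) - Density)
Heterogeneity  <- sqrt(var(k)) / mean(k)  # coefficient of variation of k
```

Node 1, for example, has connectivity 0.8 + 0.2 = 1.0 and MAR (0.64 + 0.04)/1.0 = 0.68, showing how a few strong links raise MAR relative to many weak ones.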
} \keyword{ misc } WGCNA/man/labelPoints.Rd0000644000176200001440000000653614012015545014436 0ustar liggesusers\name{labelPoints} \Rdversion{1.1} \alias{labelPoints} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Label scatterplot points } \description{ Given scatterplot point coordinates, the function tries to place labels near the points such that the labels overlap as little as possible. User beware: the algorithm implemented here is quite primitive and while it will help in many cases, it is by no means perfect. Consider this function experimental. We hope to improve the algorithm in the future to make it useful in a broader range of situations. } \usage{ labelPoints( x, y, labels, cex = 0.7, offs = 0.01, xpd = TRUE, jiggle = 0, protectEdges = TRUE, doPlot = TRUE, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x}{ a vector of x coordinates of the points } \item{y}{ a vector of y coordinates of the points } \item{labels}{ labels to be placed next to the points } \item{cex}{ character expansion factor for the labels } \item{offs}{ offset of the labels from the plotted coordinates in inches } \item{xpd}{ logical: controls truncating labels to fit within the plotting region. See \code{\link{par}}. } \item{jiggle}{ amount of random noise to be added to the coordinates. This may be useful if the scatterplot is too regular (such as all points on one straight line). } \item{protectEdges}{ logical: should labels be shifted inside the (actual or virtual) frame of the plot? } \item{doPlot}{logical: should the labels be actually added to the plot? Value \code{FALSE} may be useful if the user would like to simply compute the best label positions the function can come up with.} \item{\dots}{ other arguments to function \code{\link{text}}. } } \details{ The algorithm basically works by finding the direction of most surrounding points, and attempting to place the label in the opposite direction. 
There are (not uncommon) situations in which this placement is suboptimal; the author promises to further develop the function sometime in the future. Note that this function does not plot the actual scatterplot; only the labels are plotted. Plotting the scatterplot is the responsibility of the user. The argument \code{offs} needs to be carefully tuned to the size of the plotted symbols. Sorry, no automation here yet. The argument \code{protectEdges} can be used to shift labels that would otherwise extend beyond the plot to within the plot. Sometimes this may cause some overlapping with other points or labels; use with care. } \value{ Invisibly, a data frame with 3 columns, giving the x and y positions of the labels, and the labels themselves. } \author{ Peter Langfelder } \seealso{ \code{\link{plot.default}}, \code{\link{text}} } \examples{ # generate some random points set.seed(11); n = 20; x = runif(n); y = runif(n); # Create a basic scatterplot col = standardColors(n); plot(x,y, pch = 21, col =1, bg = col, cex = 2.6, xlim = c(-0.1, 1.1), ylim = c(-0.1, 1.0)); labelPoints(x, y, paste("Pt", c(1:n), sep=""), offs = 0.10, cex = 1); # label points using longer text labels. Note the positioning is not perfect, but close enough. plot(x,y, pch = 21, col =1, bg = col, cex = 2.6, xlim = c(-0.1, 1.1), ylim = c(-0.1, 1.0)); labelPoints(x, y, col, offs = 0.10, cex = 0.8); } \keyword{ plot }% __ONLY ONE__ keyword per line WGCNA/man/corPredictionSuccess.Rd \name{corPredictionSuccess} \alias{corPredictionSuccess} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Quantification of success of gene screening } \description{ This function calculates the success of gene screening. } \usage{ corPredictionSuccess(corPrediction, corTestSet, topNumber = 100) } %- maybe also 'usage' for other objects documented here.
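The essence of the calculation is to rank genes by the prediction statistic and average the test-set statistic over the top-ranked genes. This can be sketched in a few lines of R; the helper below is hypothetical, written for illustration, and is not the package code:

```r
# Hypothetical helper: mean test-set statistic over the top-ranked genes
meanTopCor <- function(corPrediction, corTestSet, topNumber = 100) {
  ord <- order(corPrediction, decreasing = TRUE)   # rank by prediction statistic
  mean(corTestSet[ord[seq_len(topNumber)]])
}

set.seed(1)
pred <- rnorm(1000)                          # prediction statistics
test <- 0.5 * pred + rnorm(1000, sd = 0.5)   # correlated test-set statistics
meanTopCor(pred, test, topNumber = 50)       # high mean indicates successful screening
```

A prediction statistic that carries no information about the test set would yield a mean near zero here.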
\arguments{ \item{corPrediction}{ a vector or a matrix of prediction statistics } \item{corTestSet}{ correlation or other statistics on test set } \item{topNumber}{ a vector of the number of top genes to consider} } \details{ For each column in \code{corPrediction}, the function evaluates the mean \code{corTestSet} for the number of top genes (ranked by the column in \code{corPrediction}) given in \code{topNumber}. The higher the mean \code{corTestSet} for genes with positive \code{corPrediction} (and the more negative for genes with negative \code{corPrediction}), the more successful the prediction. } \value{ \item{meancorTestSetOverall }{ difference of \code{meancorTestSetPositive} and \code{meancorTestSetNegative} below } \item{meancorTestSetPositive}{ mean \code{corTestSet} on top genes with positive \code{corPrediction} } \item{meancorTestSetNegative}{ mean \code{corTestSet} on top genes with negative \code{corPrediction} } ... } \author{ Steve Horvath } \seealso{ \code{\link{relativeCorPredictionSuccess}}} \keyword{ misc} WGCNA/man/hierarchicalConsensusModules.Rd \name{hierarchicalConsensusModules} \alias{hierarchicalConsensusModules} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Hierarchical consensus network construction and module identification } \description{ Hierarchical consensus network construction and module identification across multiple data sets. } \usage{ hierarchicalConsensusModules( multiExpr, multiWeights = NULL, multiExpr.imputed = NULL, # Data checking options checkMissingData = TRUE, # Blocking options blocks = NULL, maxBlockSize = 5000, blockSizePenaltyPower = 5, nPreclusteringCenters = NULL, randomSeed = 12345, # Network construction options. networkOptions, # Save individual TOMs?
saveIndividualTOMs = TRUE, individualTOMFileNames = "individualTOM-Set\%s-Block\%b.RData", keepIndividualTOMs = FALSE, # Consensus calculation options consensusTree = NULL, # Return options saveConsensusTOM = TRUE, consensusTOMFilePattern = "consensusTOM-\%a-Block\%b.RData", # Keep the consensus? keepConsensusTOM = saveConsensusTOM, # Internal handling of TOMs useDiskCache = NULL, chunkSize = NULL, cacheBase = ".blockConsModsCache", cacheDir = ".", # Alternative consensus TOM input from a previous calculation consensusTOMInfo = NULL, # Basic tree cut options deepSplit = 2, detectCutHeight = 0.995, minModuleSize = 20, checkMinModuleSize = TRUE, # Advanced tree cut options maxCoreScatter = NULL, minGap = NULL, maxAbsCoreScatter = NULL, minAbsGap = NULL, minSplitHeight = NULL, minAbsSplitHeight = NULL, useBranchEigennodeDissim = FALSE, minBranchEigennodeDissim = mergeCutHeight, stabilityLabels = NULL, stabilityCriterion = c("Individual fraction", "Common fraction"), minStabilityDissim = NULL, pamStage = TRUE, pamRespectsDendro = TRUE, iteratePruningAndMerging = FALSE, minCoreKME = 0.5, minCoreKMESize = minModuleSize/3, minKMEtoStay = 0.2, # Module eigengene calculation options impute = TRUE, trapErrors = FALSE, excludeGrey = FALSE, # Module merging options calibrateMergingSimilarities = FALSE, mergeCutHeight = 0.15, # General options collectGarbage = TRUE, verbose = 2, indent = 0, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{multiExpr}{ Expression data in the multi-set format (see \code{\link{checkSets}}). A vector of lists, one per set. Each set must contain a component \code{data} that contains the expression data, with rows corresponding to samples and columns to genes or probes. } \item{multiWeights}{ optional observation weights in the same format (and dimensions) as \code{multiExpr}.
These weights are used for correlation calculations with data in \code{multiExpr}.} \item{multiExpr.imputed}{If \code{multiExpr} contains missing data, this argument can be used to supply the expression data with missing data imputed. If not given, the \code{\link[impute]{impute.knn}} function will be used to impute the missing data.} \item{checkMissingData}{ Logical: should data be checked for excessive numbers of missing entries in genes and samples, and for genes with zero variance? See details. } \item{blocks}{ Optional specification of blocks in which hierarchical clustering and module detection should be performed. If given, must be a numeric vector with one entry per gene of \code{multiExpr} giving the number of the block to which the corresponding gene belongs. } \item{maxBlockSize}{Integer giving maximum block size for module detection. Ignored if \code{blocks} above is non-NULL. Otherwise, if the number of genes in \code{datExpr} exceeds \code{maxBlockSize}, genes will be pre-clustered into blocks whose size should not exceed \code{maxBlockSize}. } \item{blockSizePenaltyPower}{Number specifying how strongly blocks should be penalized for exceeding the maximum size. Set to a large number or \code{Inf} if not exceeding maximum block size is very important.} \item{nPreclusteringCenters}{Number of centers to be used in the preclustering. Defaults to the smaller of \code{nGenes/20} and \code{100*nGenes/maxBlockSize}, where \code{nGenes} is the number of genes (variables) in \code{multiExpr}.} \item{randomSeed}{Integer to be used as seed for the random number generator before the function starts. If a current seed exists, it is saved and restored upon exit. If \code{NULL} is given, the function will not save and restore the seed. } \item{networkOptions}{ A single list of class \code{\link{NetworkOptions}} giving options for network calculation for all of the networks, or a \code{\link{multiData}} structure containing one such list for each input data set.
} \item{saveIndividualTOMs}{ Logical: should individual TOMs be saved to disk (\code{TRUE}) or returned directly in the return value (\code{FALSE})? } \item{individualTOMFileNames}{ Character string giving the file names to save individual TOMs into. The following tags should be used to make the file names unique for each set and block: \code{\%s} will be replaced by the set number; \code{\%N} will be replaced by the set name (taken from \code{names(multiExpr)}) if it exists, otherwise by set number; \code{\%b} will be replaced by the block number. If the file names turn out to be non-unique, an error will be generated.} \item{keepIndividualTOMs}{ Logical: should individual TOMs be retained after the calculation is finished? } \item{consensusTree}{ A list specifying the consensus calculation. See details. } \item{saveConsensusTOM}{ Logical: should the consensus TOM be saved to disk? } \item{consensusTOMFilePattern}{ Character string giving the file names to save consensus TOMs into. The following tags should be used to make the file names unique for each set and block: \code{\%s} will be replaced by the set number; \code{\%N} will be replaced by the set name (taken from \code{names(multiExpr)}) if it exists, otherwise by set number; \code{\%b} will be replaced by the block number. If the file names turn out to be non-unique, an error will be generated. } \item{keepConsensusTOM}{ Logical: should consensus TOM be retained after the calculation ends? Depending on \code{saveConsensusTOM}, the retained TOM is either saved to disk or returned within the return value. } \item{useDiskCache}{ Logical: should disk cache be used for consensus calculations? The disk cache can be used to store chunks of calibrated data that are small enough to fit one chunk from each set into memory (blocks may be small enough to fit one block of one set into memory, but not small enough to fit one block from all sets in a consensus calculation into memory at the same time).
Using disk cache is slower but lessens the memory footprint of the calculation. As a general guide, if individual data are split into blocks, we recommend setting this argument to \code{TRUE}. If this argument is \code{NULL}, the function will decide whether to use disk cache based on the number of sets and block sizes. } \item{chunkSize}{ Integer giving the chunk size. If left \code{NULL}, a suitable size will be chosen automatically. } \item{cacheDir}{ Directory in which to save cache files. The files are deleted on normal exit but persist if the function terminates abnormally. } \item{cacheBase}{ Base for the file names of cache files. } \item{consensusTOMInfo}{ If the consensus TOM has been pre-calculated using function \code{\link{hierarchicalConsensusTOM}}, this argument can be used to supply it. If given, the consensus TOM calculation options above are ignored. } \item{deepSplit}{Numeric value between 0 and 4. Provides a simplified control over how sensitive module detection should be to module splitting, with 0 least and 4 most sensitive. See \code{\link[dynamicTreeCut]{cutreeDynamic}} for more details. } \item{detectCutHeight}{Dendrogram cut height for module detection. See \code{\link[dynamicTreeCut]{cutreeDynamic}} for more details. } \item{minModuleSize}{Minimum module size for module detection. See \code{\link[dynamicTreeCut]{cutreeDynamic}} for more details. } \item{checkMinModuleSize}{ logical: should sanity checks be performed on \code{minModuleSize}?} \item{maxCoreScatter}{ maximum scatter of the core for a branch to be a cluster, given as the fraction of \code{cutHeight} relative to the 5th percentile of joining heights. See \code{\link[dynamicTreeCut]{cutreeDynamic}} for more details. } \item{minGap}{ minimum cluster gap given as the fraction of the difference between \code{cutHeight} and the 5th percentile of joining heights. See \code{\link[dynamicTreeCut]{cutreeDynamic}} for more details. 
} \item{maxAbsCoreScatter}{ maximum scatter of the core for a branch to be a cluster given as absolute heights. If given, overrides \code{maxCoreScatter}. See \code{\link[dynamicTreeCut]{cutreeDynamic}} for more details. } \item{minAbsGap}{ minimum cluster gap given as absolute height difference. If given, overrides \code{minGap}. See \code{\link[dynamicTreeCut]{cutreeDynamic}} for more details. } \item{minSplitHeight}{Minimum split height given as the fraction of the difference between \code{cutHeight} and the 5th percentile of joining heights. Branches merging below this height will automatically be merged. Defaults to zero but is used only if \code{minAbsSplitHeight} below is \code{NULL}.} \item{minAbsSplitHeight}{Minimum split height given as an absolute height. Branches merging below this height will automatically be merged. If not given (default), will be determined from \code{minSplitHeight} above.} \item{useBranchEigennodeDissim}{Logical: should branch eigennode (eigengene) dissimilarity be considered when merging branches in Dynamic Tree Cut?} \item{minBranchEigennodeDissim}{Minimum consensus branch eigennode (eigengene) dissimilarity for branches to be considered separate. The branch eigennode dissimilarity in individual sets is simply 1-correlation of the eigennodes; the consensus is defined as the quantile with probability \code{consensusQuantile}.} \item{stabilityLabels}{Optional matrix of cluster labels that are to be used for calculating branch dissimilarity based on split stability. The number of rows must equal the number of genes in \code{multiExpr}; the number of columns (clusterings) is arbitrary. See \code{\link{branchSplitFromStabilityLabels}} for details.} \item{stabilityCriterion}{One of \code{c("Individual fraction", "Common fraction")}, indicating which method for assessing stability similarity of two branches should be used.
We recommend \code{"Individual fraction"} which appears to perform better; the \code{"Common fraction"} method is provided for backward compatibility since it was the (only) method available prior to WGCNA version 1.60.} \item{minStabilityDissim}{Minimum stability dissimilarity criterion for two branches to be considered separate. Should be a number between 0 (essentially no dissimilarity required) and 1 (perfect dissimilarity or distinguishability based on \code{stabilityLabels}). See \code{\link{branchSplitFromStabilityLabels}} for details.} \item{pamStage}{ logical. If TRUE, the second (PAM-like) stage of module detection will be performed. See \code{\link[dynamicTreeCut]{cutreeDynamic}} for more details. } \item{pamRespectsDendro}{Logical, only used when \code{pamStage} is \code{TRUE}. If \code{TRUE}, the PAM stage will respect the dendrogram in the sense that an object can be PAM-assigned only to clusters that lie below it on the branch that the object is merged into. See \code{\link[dynamicTreeCut]{cutreeDynamic}} for more details. } \item{iteratePruningAndMerging}{Logical: should pruning of low-KME genes and module merging be iterated? For backward compatibility, the default is \code{FALSE}, but setting it to \code{TRUE} may lead to better-defined modules.} \item{minCoreKME}{ a number between 0 and 1. If a detected module does not have at least \code{minCoreKMESize} genes with eigengene connectivity at least \code{minCoreKME}, the module is disbanded (its genes are unlabeled and returned to the pool of genes waiting for module detection). } \item{minCoreKMESize}{ see \code{minCoreKME} above. } \item{minKMEtoStay}{ genes whose eigengene connectivity to their module eigengene is lower than \code{minKMEtoStay} are removed from the module.} \item{impute}{ logical: should imputation be used for module eigengene calculation? See \code{\link{moduleEigengenes}} for more details. } \item{trapErrors}{ logical: should errors in calculations be trapped?
} \item{excludeGrey}{ logical: should the returned module eigengenes exclude the eigengene of the "module" that contains unassigned genes? } \item{calibrateMergingSimilarities}{ Logical: should module eigengene similarities be calibrated before calculating the consensus? Although calibration is in principle desirable, the calibration methods currently available assume large data and do not work very well on eigengene similarities. } \item{mergeCutHeight}{ Dendrogram cut height for module merging. } \item{collectGarbage}{ Logical: should garbage be collected after some of the memory-intensive steps? } \item{verbose}{ integer level of verbosity. Zero means silent, higher values make the output progressively more and more verbose. } \item{indent}{ indentation for diagnostic messages. Zero means no indentation, each unit adds two spaces. } \item{\dots}{ Other arguments. Currently ignored. } } \details{ This function calculates a consensus network with a flexible, possibly hierarchical consensus specification, identifies (consensus) modules in the network, and calculates their eigengenes. "Blockwise" calculation is available for large data sets for which a full network (TOM or adjacency matrix) would not fit into available RAM. The input can be either several numerical data sets (expression etc) in the argument \code{multiExpr} together with all necessary network construction options, or a pre-calculated network, typically the result of a call to \code{\link{hierarchicalConsensusTOM}}.
Steps in the network construction include the following: (1) optional filtering of variables (genes) and observations (samples) that contain too many missing values or have zero variance; (2) optional pre-clustering to split data into blocks of manageable size; (3) calculation of adjacencies and optionally of TOMs in each individual data set; (4) calculation of consensus network from the individual networks; (5) hierarchical clustering and module identification; (6) trimming of modules by removing genes with low correlation with the eigengene of the module; and (7) merging of modules whose eigengenes are strongly correlated. Steps 1-4 (up to and including the calculation of consensus network from the individual networks) are handled by the function \code{\link{hierarchicalConsensusTOM}}. Variables (genes) are clustered using average-linkage hierarchical clustering and modules are identified in the resulting dendrogram by the Dynamic Hybrid tree cut. Found modules are trimmed of genes whose consensus module membership kME (that is, correlation with module eigengene) is less than \code{minKMEtoStay}. Modules in which fewer than \code{minCoreKMESize} genes have consensus KME higher than \code{minCoreKME} are disbanded, i.e., their constituent genes are pronounced unassigned. After all blocks have been processed, the function checks whether there are genes whose KME in the module they are assigned to is lower than KME to another module. If p-values of the higher correlations are smaller than those of the native module by the factor \code{reassignThresholdPS} (in every set), the gene is re-assigned to the closer module. In the last step, modules whose eigengenes are highly correlated are merged. This is achieved by clustering module eigengenes using the dissimilarity given by one minus their correlation, cutting the dendrogram at the height \code{mergeCutHeight} and merging all modules on each branch. The process is iterated until no modules are merged.
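The merging step just described — clustering module eigengenes with dissimilarity one minus correlation and cutting the tree at mergeCutHeight — can be illustrated with plain R. This is a conceptual sketch with made-up eigengenes, not the mergeCloseModules implementation:

```r
set.seed(42)
# Toy eigengenes: ME1 and ME2 are nearly identical, ME3 is independent
ME1 <- rnorm(30)
ME2 <- ME1 + rnorm(30, sd = 0.1)
ME3 <- rnorm(30)
MEs <- cbind(ME1, ME2, ME3)

dissim <- 1 - cor(MEs)                        # eigengene dissimilarity
tree   <- hclust(as.dist(dissim), method = "average")
merged <- cutree(tree, h = 0.15)              # modules on one branch share a label
# merged gives ME1 and ME2 the same label and ME3 a different one
```

Because ME1 and ME2 correlate above 1 - 0.15 = 0.85, they fall on one branch below the cut height and would be merged into a single module.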
See \code{\link{mergeCloseModules}} for more details on module merging. The module trimming and merging process is optionally iterated. Iterations are recommended but are (for now) not the default for backward compatibility. } \value{ List with the following components: \item{labels}{A numeric vector with one component per variable (gene), giving the module label of each variable (gene). Label 0 is reserved for unassigned variables; module labels are sequential and smaller numbers are used for larger modules.} \item{unmergedLabels}{A numeric vector with one component per variable (gene), giving the unmerged module label of each variable (gene), i.e., module labels before the call to module merging.} \item{colors}{A character vector with one component per variable (gene), giving the module colors. The labels are mapped to colors using \code{\link{labels2colors}}.} \item{unmergedColors}{A character vector with one component per variable (gene), giving the unmerged module colors.} \item{multiMEs}{Module eigengenes corresponding to the modules returned in \code{colors}, in multi-set format. A vector of lists, one per set, containing eigengenes, proportion of variance explained and other information. See \code{\link{multiSetMEs}} for a detailed description.} \item{dendrograms}{A list with one component for each block of genes. Each component is the hierarchical clustering dendrogram obtained by clustering the consensus gene dissimilarity in the corresponding block. } \item{consensusTOMInfo}{A list detailing various aspects of the consensus TOM. See \code{\link{hierarchicalConsensusTOM}} for details.} \item{blockInfo}{A list with information about blocks as well as the variables and observations (genes and samples) retained after filtering out those with zero variance and too many missing values.} \item{moduleIdentificationArguments}{A list with the module identification arguments supplied to this function.
Contains \code{deepSplit}, \code{detectCutHeight}, \code{minModuleSize}, \code{maxCoreScatter}, \code{minGap}, \code{maxAbsCoreScatter}, \code{minAbsGap}, \code{minSplitHeight}, \code{useBranchEigennodeDissim}, \code{minBranchEigennodeDissim}, \code{minStabilityDissim}, \code{pamStage}, \code{pamRespectsDendro}, \code{minCoreKME}, \code{minCoreKMESize}, \code{minKMEtoStay}, \code{calibrateMergingSimilarities}, and \code{mergeCutHeight}.} } \note{ If the input datasets have large numbers of genes, consider carefully the \code{maxBlockSize} as it significantly affects the memory footprint (and whether the function will fail with a memory allocation error). From a theoretical point of view it is advantageous to use blocks as large as possible; on the other hand, using smaller blocks is substantially faster and often the only way to work with large numbers of genes. As a rough guide, when 4GB of memory are available, blocks should be no larger than 8,000 genes; with 8GB one can handle some 13,000 genes; with 16GB around 20,000; and with 32GB around 30,000. Depending on the operating system and its setup, these numbers may vary substantially. } \references{ Non-hierarchical consensus networks are described in Langfelder P, Horvath S (2007), Eigengene networks for studying the relationships between co-expression modules. BMC Systems Biology 2007, 1:54. More in-depth discussion of selected topics can be found at http://www.peterlangfelder.com/ , and an FAQ at https://labs.genetics.ucla.edu/horvath/CoexpressionNetwork/Rpackages/WGCNA/faq.html . 
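The block size guidance in the note above follows from simple arithmetic: a dense double-precision n x n matrix occupies 8 n^2 bytes, and the calculation typically holds several such matrices at once. A back-of-the-envelope sketch (how many copies are held at a time is an assumption that varies by step and platform):

```r
# GB occupied by one dense double-precision nGenes x nGenes matrix
blockMatrixGB <- function(nGenes) 8 * nGenes^2 / 2^30

blockMatrixGB(8000)    # about 0.48 GB per matrix
blockMatrixGB(20000)   # about 3 GB per matrix
blockMatrixGB(30000)   # about 6.7 GB per matrix
```

With several working copies in flight, an 8,000-gene block plausibly consumes a few GB, consistent with the 4GB guideline in the note.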
} \author{ Peter Langfelder } \seealso{ \code{\link{hierarchicalConsensusTOM}} for calculation of hierarchical consensus networks (adjacency and TOM), and a more detailed description of the calculation; \code{\link[fastcluster]{hclust}} and \code{\link[dynamicTreeCut]{cutreeHybrid}} for hierarchical clustering and the Dynamic Tree Cut branch cutting method; \code{\link{mergeCloseModules}} for module merging; \code{\link{blockwiseModules}} for an analogous analysis on a single data set. } \keyword{misc}% __ONLY ONE__ keyword per line WGCNA/man/moduleNumber.Rd \name{moduleNumber} \alias{moduleNumber} \title{Fixed-height cut of a dendrogram.} \description{ Detects branches on the input dendrogram by performing a fixed-height cut. } \usage{ moduleNumber(dendro, cutHeight = 0.9, minSize = 50) } \arguments{ \item{dendro}{a hierarchical clustering dendrogram such as one returned by \code{hclust}. } \item{cutHeight}{Maximum joining heights that will be considered. } \item{minSize}{Minimum cluster size. } } \details{ All contiguous branches below the height \code{cutHeight} that contain at least \code{minSize} objects are assigned unique positive numerical labels; all unassigned objects are assigned label 0. } \value{ A vector of numerical labels giving the assignment of each object. } \note{The numerical labels may not be sequential.
See \code{\link{normalizeLabels}} for a way to put the labels into a standard order.} \author{ Peter Langfelder, \email{Peter.Langfelder@gmail.com} } \seealso{ \code{\link{hclust}}, \code{\link{cutree}}, \code{\link{normalizeLabels}} } \keyword{cluster} WGCNA/man/blockwiseModules.Rd0000644000176200001440000005127114224204150015466 0ustar liggesusers\name{blockwiseModules} \alias{blockwiseModules} \title{ Automatic network construction and module detection } \description{ This function performs automatic network construction and module detection on large expression datasets in a block-wise manner. } \usage{ blockwiseModules( # Input data datExpr, weights = NULL, # Data checking options checkMissingData = TRUE, # Options for splitting data into blocks blocks = NULL, maxBlockSize = 5000, blockSizePenaltyPower = 5, nPreclusteringCenters = as.integer(min(ncol(datExpr)/20, 100*ncol(datExpr)/maxBlockSize)), randomSeed = 54321, # load TOM from previously saved file? loadTOM = FALSE, # Network construction arguments: correlation options corType = "pearson", maxPOutliers = 1, quickCor = 0, pearsonFallback = "individual", cosineCorrelation = FALSE, # Adjacency function options power = 6, networkType = "unsigned", replaceMissingAdjacencies = FALSE, # Topological overlap options TOMType = "signed", TOMDenom = "min", suppressTOMForZeroAdjacencies = FALSE, suppressNegativeTOM = FALSE, # Saving or returning TOM getTOMs = NULL, saveTOMs = FALSE, saveTOMFileBase = "blockwiseTOM", # Basic tree cut options deepSplit = 2, detectCutHeight = 0.995, minModuleSize = min(20, ncol(datExpr)/2 ), # Advanced tree cut options maxCoreScatter = NULL, minGap = NULL, maxAbsCoreScatter = NULL, minAbsGap = NULL, minSplitHeight = NULL, minAbsSplitHeight = NULL, useBranchEigennodeDissim = FALSE, minBranchEigennodeDissim = mergeCutHeight, stabilityLabels = NULL, stabilityCriterion = c("Individual fraction", "Common fraction"), minStabilityDissim = NULL, pamStage = TRUE, pamRespectsDendro = TRUE, # Gene 
reassignment, module trimming, and module "significance" criteria reassignThreshold = 1e-6, minCoreKME = 0.5, minCoreKMESize = minModuleSize/3, minKMEtoStay = 0.3, # Module merging options mergeCutHeight = 0.15, impute = TRUE, trapErrors = FALSE, # Output options numericLabels = FALSE, # Options controlling behaviour nThreads = 0, useInternalMatrixAlgebra = FALSE, useCorOptionsThroughout = TRUE, verbose = 0, indent = 0, ...) } \arguments{ \item{datExpr}{ Expression data. A matrix (preferred) or data frame in which columns are genes and rows are samples. NAs are allowed, but not too many. See \code{checkMissingData} below and details.} \item{weights}{optional observation weights in the same format (and dimensions) as \code{datExpr}. These weights are used in correlation calculation.} \item{checkMissingData}{logical: should data be checked for excessive numbers of missing entries in genes and samples, and for genes with zero variance? See details. } \item{blocks}{optional specification of blocks in which hierarchical clustering and module detection should be performed. If given, must be a numeric vector with one entry per column (gene) of \code{datExpr} giving the number of the block to which the corresponding gene belongs. } \item{maxBlockSize}{integer giving maximum block size for module detection. Ignored if \code{blocks} above is non-NULL. Otherwise, if the number of genes in \code{datExpr} exceeds \code{maxBlockSize}, genes will be pre-clustered into blocks whose size should not exceed \code{maxBlockSize}. } \item{blockSizePenaltyPower}{number specifying how strongly blocks should be penalized for exceeding the maximum size. Set to a large number or \code{Inf} if not exceeding maximum block size is very important.} \item{nPreclusteringCenters}{number of centers for pre-clustering. Larger numbers typically result in better but slower pre-clustering. } \item{randomSeed}{integer to be used as seed for the random number generator before the function starts.
If a current seed exists, it is saved and restored upon exit. If \code{NULL} is given, the function will not save and restore the seed. } \item{loadTOM}{logical: should Topological Overlap Matrices be loaded from previously saved files (\code{TRUE}) or calculated (\code{FALSE})? It may be useful to load previously saved TOM matrices if these have been calculated previously, since TOM calculation is often the most computationally expensive part of network construction and module identification. See \code{saveTOMs} and \code{saveTOMFileBase} below for when and how TOM files are saved, and what the file names are. If \code{loadTOM} is \code{TRUE} but the files cannot be found, or do not contain the correct TOM data, TOM will be recalculated.} \item{corType}{character string specifying the correlation to be used. Allowed values are (unique abbreviations of) \code{"pearson"} and \code{"bicor"}, corresponding to Pearson and biweight midcorrelation, respectively. Missing values are handled using the \code{pairwise.complete.obs} option. } \item{maxPOutliers}{ only used for \code{corType=="bicor"}. Specifies the maximum percentile of data that can be considered outliers on either side of the median separately. For each side of the median, if a higher percentile than \code{maxPOutliers} is considered an outlier by the weight function based on \code{9*mad(x)}, the width of the weight function is increased such that the percentile of outliers on that side of the median equals \code{maxPOutliers}. Using \code{maxPOutliers=1} will effectively disable all weight function broadening; using \code{maxPOutliers=0} will give results that are quite similar (but not equal to) Pearson correlation. } \item{quickCor}{ real number between 0 and 1 that controls the handling of missing data in the calculation of correlations. See details. } \item{pearsonFallback}{Specifies whether the bicor calculation, if used, should revert to Pearson when median absolute deviation (mad) is zero.
Recognized values are (abbreviations of) \code{"none", "individual", "all"}. If set to \code{"none"}, zero mad will result in \code{NA} for the corresponding correlation. If set to \code{"individual"}, Pearson calculation will be used only for columns that have zero mad. If set to \code{"all"}, the presence of a single zero mad will cause the whole variable to be treated in Pearson correlation manner (as if the corresponding \code{robust} option was set to \code{FALSE}). Has no effect for Pearson correlation. See \code{\link{bicor}}.} \item{cosineCorrelation}{logical: should the cosine version of the correlation calculation be used? The cosine calculation differs from the standard one in that it does not subtract the mean. } \item{power}{ soft-thresholding power for network construction. } \item{networkType}{ network type. Allowed values are (unique abbreviations of) \code{"unsigned"}, \code{"signed"}, \code{"signed hybrid"}. See \code{\link{adjacency}}. } \item{replaceMissingAdjacencies}{logical: should missing values in the calculation of adjacency be replaced by 0?} \item{TOMType}{ one of \code{"none"}, \code{"unsigned"}, \code{"signed"}, \code{"signed Nowick"}, \code{"unsigned 2"}, \code{"signed 2"} and \code{"signed Nowick 2"}. If \code{"none"}, adjacency will be used for clustering. See \code{\link{TOMsimilarityFromExpr}} for details.} \item{TOMDenom}{ a character string specifying the TOM variant to be used. Recognized values are \code{"min"} giving the standard TOM described in Zhang and Horvath (2005), and \code{"mean"} in which the \code{min} function in the denominator is replaced by \code{mean}. The \code{"mean"} variant may produce better results but at this time should be considered experimental.} %The default mean denominator %variant %is preferrable and we recommend using it unless the user needs to reproduce older results obtained using %the standard, minimum denominator TOM.
} \item{suppressTOMForZeroAdjacencies}{Logical: should TOM be set to zero for zero adjacencies?} \item{suppressNegativeTOM}{Logical: should the result be set to zero when negative? Negative TOM values can occur when \code{TOMType} is \code{"signed Nowick"}.} \item{getTOMs}{ deprecated, please use saveTOMs below. } \item{saveTOMs}{ logical: should the topological overlap matrices for each block be saved and returned? } \item{saveTOMFileBase}{ character string containing the file name base for files containing the topological overlaps. The full file names have \code{"block.1.RData"}, \code{"block.2.RData"} etc. appended. These files are standard R data files and can be loaded using the \code{\link{load}} function. } \item{deepSplit}{ integer value between 0 and 4. Provides a simplified control over how sensitive module detection should be to module splitting, with 0 least and 4 most sensitive. See \code{\link[dynamicTreeCut]{cutreeDynamic}} for more details. } \item{detectCutHeight}{ dendrogram cut height for module detection. See \code{\link[dynamicTreeCut]{cutreeDynamic}} for more details. } \item{minModuleSize}{ minimum module size for module detection. See \code{\link[dynamicTreeCut]{cutreeDynamic}} for more details. } \item{maxCoreScatter}{ maximum scatter of the core for a branch to be a cluster, given as the fraction of \code{cutHeight} relative to the 5th percentile of joining heights. See \code{\link[dynamicTreeCut]{cutreeDynamic}} for more details. } \item{minGap}{ minimum cluster gap given as the fraction of the difference between \code{cutHeight} and the 5th percentile of joining heights. See \code{\link[dynamicTreeCut]{cutreeDynamic}} for more details. } \item{maxAbsCoreScatter}{ maximum scatter of the core for a branch to be a cluster given as absolute heights. If given, overrides \code{maxCoreScatter}. See \code{\link[dynamicTreeCut]{cutreeDynamic}} for more details.
} \item{minAbsGap}{ minimum cluster gap given as absolute height difference. If given, overrides \code{minGap}. See \code{\link[dynamicTreeCut]{cutreeDynamic}} for more details. } \item{minSplitHeight}{Minimum split height given as the fraction of the difference between \code{cutHeight} and the 5th percentile of joining heights. Branches merging below this height will automatically be merged. Defaults to zero but is used only if \code{minAbsSplitHeight} below is \code{NULL}.} \item{minAbsSplitHeight}{Minimum split height given as an absolute height. Branches merging below this height will automatically be merged. If not given (default), will be determined from \code{minSplitHeight} above.} \item{useBranchEigennodeDissim}{Logical: should branch eigennode (eigengene) dissimilarity be considered when merging branches in Dynamic Tree Cut?} \item{minBranchEigennodeDissim}{Minimum consensus branch eigennode (eigengene) dissimilarity for branches to be considered separate. The branch eigennode dissimilarity in individual sets is simply 1-correlation of the eigennodes; the consensus is defined as quantile with probability \code{consensusQuantile}.} \item{stabilityLabels}{Optional matrix of cluster labels that are to be used for calculating branch dissimilarity based on split stability. The number of rows must equal the number of genes in \code{multiExpr}; the number of columns (clusterings) is arbitrary. See \code{\link{branchSplitFromStabilityLabels}} for details.} \item{stabilityCriterion}{One of \code{c("Individual fraction", "Common fraction")}, indicating which method for assessing stability similarity of two branches should be used. We recommend \code{"Individual fraction"} which appears to perform better; the \code{"Common fraction"} method is provided for backward compatibility since it was the (only) method available prior to WGCNA version 1.60.} \item{minStabilityDissim}{Minimum stability dissimilarity criterion for two branches to be considered separate.
Should be a number between 0 (essentially no dissimilarity required) and 1 (perfect dissimilarity or distinguishability based on \code{stabilityLabels}). See \code{\link{branchSplitFromStabilityLabels}} for details.} \item{pamStage}{ logical. If TRUE, the second (PAM-like) stage of module detection will be performed. See \code{\link[dynamicTreeCut]{cutreeDynamic}} for more details. } \item{pamRespectsDendro}{Logical, only used when \code{pamStage} is \code{TRUE}. If \code{TRUE}, the PAM stage will respect the dendrogram in the sense that an object can be PAM-assigned only to clusters that lie below it on the branch that the object is merged into. See \code{\link[dynamicTreeCut]{cutreeDynamic}} for more details. } \item{minCoreKME}{ a number between 0 and 1. If a detected module does not have at least \code{minCoreKMESize} genes with eigengene connectivity at least \code{minCoreKME}, the module is disbanded (its genes are unlabeled and returned to the pool of genes waiting for module detection). } \item{minCoreKMESize}{ see \code{minCoreKME} above. } \item{minKMEtoStay}{ genes whose eigengene connectivity to their module eigengene is lower than \code{minKMEtoStay} are removed from the module.} \item{reassignThreshold}{ p-value ratio threshold for reassigning genes between modules. See Details. } \item{mergeCutHeight}{ dendrogram cut height for module merging. } \item{impute}{ logical: should imputation be used for module eigengene calculation? See \code{\link{moduleEigengenes}} for more details. } \item{trapErrors}{ logical: should errors in calculations be trapped? } \item{numericLabels}{ logical: should the returned modules be labeled by colors (\code{FALSE}), or by numbers (\code{TRUE})? } \item{nThreads}{ non-negative integer specifying the number of parallel threads to be used by certain parts of correlation calculations.
This option only has an effect on systems on which a POSIX thread library is available (which currently includes Linux and Mac OSX, but excludes Windows). If zero, the number of online processors will be used if it can be determined dynamically, otherwise correlation calculations will use 2 threads. } \item{useInternalMatrixAlgebra}{Logical: should WGCNA's own, slow, matrix multiplication be used instead of R-wide BLAS? Only useful for debugging.} \item{useCorOptionsThroughout}{Logical: should correlation options passed to network analysis also be used in calculation of kME? Set to \code{FALSE} to reproduce results obtained with WGCNA 1.62 and older.} \item{verbose}{ integer level of verbosity. Zero means silent, higher values make the output progressively more and more verbose. } \item{indent}{ indentation for diagnostic messages. Zero means no indentation, each unit adds two spaces. } \item{...}{Other arguments.} } \details{ Before module detection starts, genes and samples are optionally checked for the presence of \code{NA}s. Genes and/or samples that have too many \code{NA}s are flagged as bad and removed from the analysis; bad genes will be automatically labeled as unassigned, while the returned eigengenes will have \code{NA} entries for all bad samples. If \code{blocks} is not given and the number of genes exceeds \code{maxBlockSize}, genes are pre-clustered into blocks using the function \code{\link{projectiveKMeans}}; otherwise all genes are treated in a single block. For each block of genes, the network is constructed and (if requested) topological overlap is calculated. If requested, the topological overlaps are returned as part of the return value list. Genes are then clustered using average linkage hierarchical clustering and modules are identified in the resulting dendrogram by the Dynamic Hybrid tree cut. Found modules are trimmed of genes whose correlation with module eigengene (KME) is less than \code{minKMEtoStay}. 
Modules in which fewer than \code{minCoreKMESize} genes have KME higher than \code{minCoreKME} are disbanded, i.e., their constituent genes are pronounced unassigned. After all blocks have been processed, the function checks whether there are genes whose KME in the module to which they are assigned is lower than their KME to another module. If the p-values of the higher correlations are smaller than those of the native module by the factor \code{reassignThreshold}, the gene is re-assigned to the closer module. In the last step, modules whose eigengenes are highly correlated are merged. This is achieved by clustering module eigengenes using the dissimilarity given by one minus their correlation, cutting the dendrogram at the height \code{mergeCutHeight} and merging all modules on each branch. The process is iterated until no modules are merged. See \code{\link{mergeCloseModules}} for more details on module merging. The argument \code{quickCor} specifies the precision of handling of missing data in the correlation calculations. Zero will cause all calculations to be executed precisely, which may be significantly slower than calculations without missing data. Progressively higher values will speed up the calculations but introduce progressively larger errors. Without missing data, all column means and variances can be pre-calculated before the covariances are calculated. When missing data are present, exact calculations require the column means and variances to be calculated for each covariance. The approximate calculation uses the pre-calculated mean and variance and simply ignores missing data in the covariance calculation. If the amount of missing data is high, the pre-calculated means and variances may be very different from the actual ones, thus potentially introducing large errors.
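The approximate calculation described above can be illustrated in a few lines of base R. This is only a sketch on simulated data: the full-column mean and standard deviation of \code{x} stand in for the exact pairwise-complete statistics, mirroring the speed/accuracy trade-off; the package's actual C-level implementation may differ in details.

```r
# Exact pairwise-complete correlation vs. a "quick"-style approximation
# that reuses full-column statistics and drops missing entries from the sum.
set.seed(1)
x <- rnorm(200)
y <- x + rnorm(200)
y[sample(200, 30)] <- NA              # introduce missing data in y
ok <- !is.na(y)
exact <- cor(x, y, use = "pairwise.complete.obs")
approx <- sum((x[ok] - mean(x)) * (y[ok] - mean(y, na.rm = TRUE))) /
          ((sum(ok) - 1) * sd(x) * sd(y, na.rm = TRUE))
c(exact = exact, approximate = approx)  # close, but not identical
```

With few missing entries the two values agree closely; as the fraction of missing data grows, the full-column mean and variance of \code{x} drift away from the pairwise ones and the approximation error grows, which is exactly the behavior the \code{quickCor} threshold guards against.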
The \code{quickCor} value times the number of rows specifies the maximum difference in the number of missing entries for mean and variance calculations on the one hand and covariance on the other hand that will be tolerated before a recalculation is triggered. The hope is that if only a few missing data are treated approximately, the error introduced will be small but the potential speedup can be significant. } \value{ A list with the following components: \item{colors }{ a vector of color or numeric module labels for all genes.} \item{unmergedColors }{ a vector of color or numeric module labels for all genes before module merging.} \item{MEs }{ a data frame containing module eigengenes of the found modules (given by \code{colors}).} \item{goodSamples}{numeric vector giving indices of good samples, that is samples that do not have too many missing entries. } \item{goodGenes}{ numeric vector giving indices of good genes, that is genes that do not have too many missing entries.} \item{dendrograms}{ a list whose components contain hierarchical clustering dendrograms of genes in each block. } \item{TOMFiles}{ if \code{saveTOMs==TRUE}, a vector of character strings, one string per block, giving the file names of files (relative to current directory) in which blockwise topological overlaps were saved. } \item{blockGenes}{ a list whose components give the indices of genes in each block. } \item{blocks}{if input \code{blocks} was given, its copy; otherwise a vector of length equal to the number of genes giving the block label for each gene. Note that block labels are not necessarily sorted in the order in which the blocks were processed (since we do not require this for the input \code{blocks}). See \code{blockOrder} below. } \item{blockOrder}{ a vector giving the order in which blocks were processed and in which \code{blockGenes} above is returned. For example, \code{blockOrder[1]} contains the label of the first-processed block.
} \item{MEsOK}{logical indicating whether the module eigengenes were calculated without errors. } } \note{ The value of \code{maxBlockSize} significantly affects the memory footprint (and whether the function will fail with a memory allocation error). From a theoretical point of view it is advantageous to use blocks as large as possible; on the other hand, using smaller blocks is substantially faster and often the only way to work with large numbers of genes. As a rough guide, it is unlikely that a standard desktop computer with 4GB of memory or less will be able to work with blocks larger than 8000 genes. } \references{Bin Zhang and Steve Horvath (2005) "A General Framework for Weighted Gene Co-Expression Network Analysis", Statistical Applications in Genetics and Molecular Biology: Vol. 4: No. 1, Article 17 } \author{ Peter Langfelder} \seealso{ \code{\link{goodSamplesGenes}} for basic quality control and filtering; \code{\link{adjacency}}, \code{\link{TOMsimilarity}} for network construction; \code{\link[stats]{hclust}} for hierarchical clustering; \code{\link[dynamicTreeCut]{cutreeDynamic}} for adaptive branch cutting in hierarchical clustering dendrograms; \code{\link{mergeCloseModules}} for merging of close modules. } \keyword{ misc } WGCNA/man/branchEigengeneDissim.Rd0000644000176200001440000000722014012015545016370 0ustar liggesusers\name{branchEigengeneDissim} \alias{branchEigengeneDissim} \alias{branchEigengeneSimilarity} \alias{mtd.branchEigengeneDissim} \alias{hierarchicalBranchEigengeneDissim} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Branch dissimilarity based on eigennodes (eigengenes). } \description{ Calculation of branch dissimilarity based on eigennodes (eigengenes) in single set and multi-data situations. This function is used as a plugin for the dynamicTreeCut package and the user should not call this function directly. This function is experimental and subject to change.
} \usage{ branchEigengeneDissim( expr, branch1, branch2, corFnc = cor, corOptions = list(use = "p"), signed = TRUE, ...) branchEigengeneSimilarity( expr, branch1, branch2, networkOptions, returnDissim = TRUE, ...) mtd.branchEigengeneDissim( multiExpr, branch1, branch2, corFnc = cor, corOptions = list(use = 'p'), consensusQuantile = 0, signed = TRUE, reproduceQuantileError = FALSE, ...) hierarchicalBranchEigengeneDissim( multiExpr, branch1, branch2, networkOptions, consensusTree, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{expr}{ Expression data. } \item{multiExpr}{ Expression data in multi-set format. } \item{branch1}{ Branch 1. } \item{branch2}{ Branch 2. } \item{corFnc}{ Correlation function. } \item{corOptions}{ Other arguments to the correlation function. } \item{consensusQuantile}{ Consensus quantile. } \item{signed}{ Should the network be considered signed? } \item{reproduceQuantileError}{Logical: should an error in the calculation from previous versions, which caused the true consensus quantile to be \code{1-consensusQuantile} rather than \code{consensusQuantile}, be reproduced? Use this only to reproduce old calculations.} \item{networkOptions}{An object of class \code{\link{NetworkOptions}} giving the network construction options to be used in the calculation of the similarity.} \item{returnDissim}{Logical: if \code{TRUE}, dissimilarity, rather than similarity, will be returned.} \item{consensusTree}{A list of class \code{\link{ConsensusTree}} specifying the consensus calculation. Note that calibration options within the consensus specifications are ignored: since the consensus is calculated from entries representing a single value, calibration would not make sense.} \item{\dots}{ Other arguments for compatibility; currently unused.
} } \details{ These functions calculate the similarity or dissimilarity of two groups of genes (variables) in \code{expr} or \code{multiExpr} using correlations of the first singular vectors ("eigengenes"). For a single data set (\code{branchEigengeneDissim} and \code{branchEigengeneSimilarity}), the similarity is the correlation, and the dissimilarity 1-correlation, of the first singular vectors. Functions \code{mtd.branchEigengeneDissim} and \code{hierarchicalBranchEigengeneDissim} calculate consensus eigengene dissimilarity. Function \code{mtd.branchEigengeneDissim} calculates a simple ("flat") consensus of branch eigengene similarities across the given data sets, at the given consensus quantile. Function \code{hierarchicalBranchEigengeneDissim} can calculate a hierarchical consensus in which consensus calculations are hierarchically nested. } \value{ A single number, the dissimilarity, for \code{branchEigengeneDissim}, \code{mtd.branchEigengeneDissim}, and \code{hierarchicalBranchEigengeneDissim}. \code{branchEigengeneSimilarity} returns similarity or dissimilarity, depending on input. } \author{ Peter Langfelder } \seealso{\code{\link{hierarchicalConsensusCalculation}}} \keyword{ misc }% __ONLY ONE__ keyword per line WGCNA/man/keepCommonProbes.Rd0000644000176200001440000000163414012015545015424 0ustar liggesusers\name{keepCommonProbes} \alias{keepCommonProbes} \title{ Keep probes that are shared among given data sets } \description{ This function strips out probes that are not shared by all given data sets, and orders the remaining common probes using the same order in all sets. } \usage{ keepCommonProbes(multiExpr, orderBy = 1) } \arguments{ \item{multiExpr}{ expression data in the multi-set format (see \code{\link{checkSets}}). A vector of lists, one per set. Each set must contain a component \code{data} that contains the expression data, with rows corresponding to samples and columns to genes or probes.
} \item{orderBy}{ index of the set by which probes are to be ordered. } } \value{ Expression data in the same format as the input data, containing only common probes. } \author{ Peter Langfelder } \seealso{\code{\link{checkSets}}} \keyword{ misc }% __ONLY ONE__ keyword per line WGCNA/man/SCsLists.Rd0000644000176200001440000000173014012015545013660 0ustar liggesusers\name{SCsLists} \alias{SCsLists} \docType{data} \title{Stem Cell-Related Genes with Corresponding Gene Markers} \description{ This matrix gives a predefined set of genes related to several stem cell (SC) types, as reported in two previously-published studies. It is used with userListEnrichment to search user-defined gene lists for enrichment. } \usage{data(SCsLists)} \format{ A 14003 x 2 matrix of characters containing Gene / Category pairs. The first column (Gene) lists genes corresponding to a given category (second column). Each Category entry is of the form \code{<category>__<reference>}, where the references can be found at \code{\link{userListEnrichment}}. Note that the matrix is sorted first by Category and then by Gene, such that all genes related to the same category are listed sequentially. } \source{ For references used in this variable, please see \code{\link{userListEnrichment}} } \examples{ data(SCsLists) head(SCsLists) } \keyword{datasets} WGCNA/man/BloodLists.Rd0000644000176200001440000000171414012015545014231 0ustar liggesusers\name{BloodLists} \alias{BloodLists} \docType{data} \title{Blood Cell Types with Corresponding Gene Markers} \description{ This matrix gives a predefined set of marker genes for many blood cell types, as reported in several previously-published studies. It is used with userListEnrichment to search user-defined gene lists for enrichment. } \usage{data(BloodLists)} \format{ A 2048 x 2 matrix of characters containing Gene / Category pairs. The first column (Gene) lists genes corresponding to a given category (second column).
Each Category entry is of the form \code{<category>__<reference>}, where the references can be found at \code{\link{userListEnrichment}}. Note that the matrix is sorted first by Category and then by Gene, such that all genes related to the same category are listed sequentially. } \source{ For references used in this variable, please see \code{\link{userListEnrichment}} } \examples{ data(BloodLists) head(BloodLists) } \keyword{datasets} WGCNA/man/networkConcepts.Rd0000644000176200001440000002505414012015545015346 0ustar liggesusers\name{networkConcepts} \alias{networkConcepts} \title{ Calculations of network concepts} \description{ This function calculates various network concepts (topological properties, network indices) of a network calculated from expression data. See details for a detailed description. } \usage{ networkConcepts(datExpr, power = 1, trait = NULL, networkType = "unsigned") } %- maybe also 'usage' for other objects documented here. \arguments{ \item{datExpr}{ a data frame containing the expression data, with rows corresponding to samples and columns to genes (nodes). } \item{power}{ soft thresholding power.} \item{trait}{optional specification of a sample trait. A vector of length equal to the number of samples in \code{datExpr}. } \item{networkType}{ network type. Recognized values are (unique abbreviations of) \code{"unsigned"}, \code{"signed"}, and \code{"signed hybrid"}. } } \details{ This function computes various network concepts (also known as network statistics, topological properties, or network indices) for a weighted correlation network. The weighted correlation network is constructed between the columns (interpreted as nodes) of the input \code{datExpr}. If the option \code{networkType="unsigned"} then the adjacency between nodes i and j is defined as \code{A[i,j]=abs(cor(datExpr[,i],datExpr[,j]))^power}. In the following, we use the terms gene and node interchangeably since these methods were originally developed for gene networks.
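The adjacency construction just described, together with a few of the fundamental network concepts returned by this function, can be sketched directly in base R. This is an illustrative sketch on simulated data (the \code{power} value is arbitrary); the formulas for Density, Heterogeneity, and Centralization follow Horvath and Dong (2008), but the package's own implementation should be treated as authoritative.

```r
# Unsigned weighted adjacency and a few Type I (fundamental) concepts.
set.seed(1)
datExpr <- matrix(rnorm(50 * 10), nrow = 50)   # 50 samples x 10 genes
power <- 6
A <- abs(cor(datExpr, use = "p"))^power        # unsigned adjacency
diag(A) <- 0                                   # off-diagonal elements only
n <- ncol(A)
k <- rowSums(A)                                # whole-network connectivity
Density <- sum(A) / (n * (n - 1))              # mean off-diagonal adjacency
Heterogeneity <- sd(k) / mean(k)               # coefficient of variation of k
Centralization <- n / (n - 2) * (max(k) / (n - 1) - Density)
c(Density = Density, Heterogeneity = Heterogeneity,
  Centralization = Centralization)
```

All three quantities depend only on the off-diagonal elements of \code{A}, which is what makes them "fundamental" (Type I) concepts definable for any network.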
The function computes the following 4 types of network concepts (introduced in Horvath and Dong 2008): Type I: fundamental network concepts are defined as a function of the off-diagonal elements of an adjacency matrix A and/or a node significance measure GS. These network concepts can be defined for any network (not just correlation networks). The adjacency matrix of an unsigned weighted correlation network is given by \code{A=abs(cor(datExpr,use="p"))^power} and the trait based gene significance measure is given by \code{GS= abs(cor(datExpr,trait, use="p"))^power} where \code{datExpr}, \code{trait}, \code{power} are input parameters. Type II: conformity-based network concepts are functions of the off-diagonal elements of the conformity based adjacency matrix \code{A.CF=CF*t(CF)} and/or the node significance measure. These network concepts are defined for any network for which a conformity vector can be defined. Details: For any adjacency matrix \code{A}, the conformity vector \code{CF} is calculated by requiring that \code{A[i,j]} is approximately equal to \code{CF[i]*CF[j]}. Using the conformity one can define the matrix \code{A.CF=CF*t(CF)} which is the outer product of the conformity vector with itself. In general, \code{A.CF} is not an adjacency matrix since its diagonal elements are different from 1. If the off-diagonal elements of \code{A.CF} are similar to those of \code{A} according to the Frobenius matrix norm, then \code{A} is approximately factorizable. To measure the factorizability of a network, one can calculate the \code{Factorizability}, which is a number between 0 and 1 (Dong and Horvath 2007). The conformity is defined using a monotonic, iterative algorithm that maximizes the factorizability measure. Type III: approximate conformity based network concepts are functions of all elements of the conformity based adjacency matrix \code{A.CF} (including the diagonal) and/or the node significance measure \code{GS}.
These network concepts are very useful for deriving relationships between network concepts in networks that are approximately factorizable. Type IV: eigengene-based (also known as eigennode-based) network concepts are functions of the eigengene-based adjacency matrix \code{A.E=ConformityE*t(ConformityE)} (diagonal included) and/or the corresponding eigengene-based gene significance measure \code{GSE}. These network concepts can only be defined for correlation networks. Details: The columns (nodes) of \code{datExpr} can be summarized with the first principal component, which is referred to as Eigengene in coexpression network analysis. In general correlation networks, it is called eigennode. The eigengene-based conformity \code{ConformityE[i]} is defined as \code{abs(cor(datE[,i], Eigengene))^power} where the power corresponds to the power used for defining the weighted adjacency matrix \code{A}. The eigengene-based conformity can also be used to define an eigengene-based adjacency matrix \code{A.E=ConformityE*t(ConformityE)}. The eigengene based factorizability \code{EF(datE)} is a number between 0 and 1 that measures how well \code{A.E} approximates \code{A} when the power parameter equals 1. \code{EF(datE)} is defined with respect to the singular values of \code{datExpr}. For a trait based node significance measure \code{GS=abs(cor(datE,trait))^power}, one can also define an eigengene-based node significance measure \code{GSE[i]=ConformityE[i]*EigengeneSignificance} where the eigengene significance \code{abs(cor(Eigengene,trait))^power} is defined as power of the absolute value of the correlation between eigengene and trait. Eigengene-based network concepts are very useful for providing a geometric interpretation of network concepts and for deriving relationships between network concepts. 
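The eigengene-based quantities just defined can be sketched in base R on simulated data. This is only an illustration (the \code{power} value is arbitrary, and the package computes the eigengene with its own conventions, e.g. sign and scaling), but it shows how \code{ConformityE} and \code{A.E} follow from the first singular vector.

```r
# Eigengene-based conformity and the eigengene-based adjacency A.E.
set.seed(1)
datE <- matrix(rnorm(30 * 8), nrow = 30)       # 30 samples x 8 genes
power <- 6
sv <- svd(scale(datE), nu = 1, nv = 0)
eigengene <- sv$u[, 1]                         # first principal component of samples
conformityE <- as.vector(abs(cor(datE, eigengene))^power)
A.E <- outer(conformityE, conformityE)         # eigengene-based adjacency (diagonal kept)
varExplained <- sv$d[1]^2 / sum(sv$d^2)        # variance explained by the eigengene
```

Note that \code{varExplained} uses squared singular values, whereas, as described above, the eigengene-based factorizability is based on their fourth powers; the two are therefore numerically different.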
For example, the hub gene significance measure and its eigengene-based analog have been used to characterize networks where highly connected hub genes are important with regard to a trait based gene significance measure (Horvath and Dong 2008). } \value{ A list with the following components: \item{Summary}{a data frame whose rows report network concepts that only depend on the adjacency matrix: Density (mean adjacency), Centralization, Heterogeneity (coefficient of variation of the connectivity), Mean ClusterCoef, and Mean Connectivity. The columns of the data frame report the 4 types of network concepts mentioned in the description: Fundamental concepts, eigengene-based concepts, conformity-based concepts, and approximate conformity-based concepts.} \item{Size}{reports the network size, i.e. the number of nodes, which equals the number of columns of the input data frame \code{datExpr}.} \item{Factorizability}{a number between 0 and 1. The closer it is to 1, the better the off-diagonal elements of the conformity based network \code{A.CF} approximate those of \code{A} (according to the Frobenius norm). } \item{Eigengene}{the first principal component of the standardized columns of \code{datExpr}. The number of components of this vector equals the number of rows of \code{datExpr}.} \item{VarExplained}{the proportion of variance explained by the first principal component (the \code{Eigengene}). It is numerically different from the eigengene based factorizability. While \code{VarExplained} is based on the squares of the singular values of \code{datExpr}, the eigengene-based factorizability is based on fourth powers of the singular values. } \item{Conformity}{numerical vector giving the conformity. The number of components of the conformity vector equals the number of columns in \code{datExpr}. The conformity is often highly correlated with the vector of node connectivities. The conformity is computed using an iterative algorithm for maximizing the factorizability measure.
The algorithm and related network concepts are described in Dong and Horvath 2007.} \item{ClusterCoef}{a numerical vector that reports the cluster coefficient for each node. This fundamental network concept measures the cliquishness of each node.} \item{Connectivity}{a numerical vector that reports the connectivity (also known as degree) of each node. This fundamental network concept is also known as whole network connectivity. One can also define the scaled connectivity \code{K=Connectivity/max(Connectivity)} which is used for computing the hub gene significance.} \item{MAR}{a numerical vector that reports the maximum adjacency ratio for each node. \code{MAR[i]} equals 1 if all non-zero adjacencies between node \code{i} and the remaining network nodes equal 1. This fundamental network concept is always 1 for nodes of an unweighted network. This is a useful measure for weighted networks since it allows one to determine whether a node has high connectivity because of many weak connections (small MAR) or because of strong (but few) connections (high MAR), see Horvath and Dong 2008. } \item{ConformityE}{a numerical vector that reports the eigengene based (aka eigennode based) conformity for the correlation network. The number of components equals the number of columns of \code{datExpr}.} \item{GS}{a numerical vector that encodes the node (gene) significance. The i-th component equals the node significance of the i-th column of \code{datExpr} if a sample trait was supplied to the function (input trait). \code{GS[i]=abs(cor(datE[,i], trait, use="p"))^power} } \item{GSE}{a numerical vector that reports the eigengene based gene significance measure.
Its i-th component is given by \code{GSE[i]=ConformityE[i]*EigengeneSignificance} where the eigengene significance \code{abs(cor(Eigengene,trait))^power} is defined as power of the absolute value of the correlation between eigengene and trait.} \item{Significance}{a data frame whose rows report network concepts that also depend on the trait based node significance measure. The rows correspond to network concepts and the columns correspond to the type of network concept (fundamental versus eigengene based). The first row of the data frame reports the network significance. The fundamental version of this network concept is the average gene significance=mean(GS). The eigengene based analog of this concept is defined as mean(GSE). The second row reports the hub gene significance which is defined as the slope of the intercept-only regression model that regresses the gene significance on the scaled network connectivity K. The third row reports the eigengene significance \code{abs(cor(Eigengene,trait))^power}. More details can be found in Horvath and Dong (2008).} } \references{ Bin Zhang and Steve Horvath (2005) "A General Framework for Weighted Gene Co-Expression Network Analysis", Statistical Applications in Genetics and Molecular Biology: Vol. 4: No. 1, Article 17 Dong J, Horvath S (2007) Understanding Network Concepts in Modules, BMC Systems Biology 2007, 1:24 Horvath S, Dong J (2008) Geometric Interpretation of Gene Coexpression Network Analysis. PLoS Comput Biol 4(8): e1000117 } \seealso{ \code{\link{conformityBasedNetworkConcepts}} for approximate conformity-based network concepts \code{\link{fundamentalNetworkConcepts}} for calculation of fundamental network concepts only. } \author{ Jun Dong, Steve Horvath, Peter Langfelder } \keyword{ misc } WGCNA/man/verboseIplot.Rd0000644000176200001440000000634014230552654014641 0ustar liggesusers\name{verboseIplot} \alias{verboseIplot} %- Also NEED an '\alias' for EACH other topic documented here.
\title{ Scatterplot with density } \description{ Produce a scatterplot that shows density with color and is annotated by the correlation, MSE, and regression line. } \usage{ verboseIplot( x, y, xlim = NA, ylim = NA, nBinsX = 150, nBinsY = 150, ztransf = function(x) {x}, gamma = 1, sample = NULL, corFnc = "cor", corOptions = "use = 'p'", main = "", xlab = NA, ylab = NA, cex = 1, cex.axis = 1.5, cex.lab = 1.5, cex.main = 1.5, abline = FALSE, abline.color = 1, abline.lty = 1, corLabel = corFnc, showMSE = TRUE, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x}{ numerical vector to be plotted along the x axis. } \item{y}{ numerical vector to be plotted along the y axis. } \item{xlim}{ plotting range of the x axis. } \item{ylim}{ plotting range of the y axis. } \item{nBinsX}{ number of bins along the x axis } \item{nBinsY}{ number of bins along the y axis } \item{ztransf}{ Function to transform the number of counts per pixel, which will be mapped by the function in colramp to well defined colors. The user has to make sure that the transformed density lies in the range [0,zmax], where zmax is any positive number (>=2). } \item{gamma}{ color correction power } \item{sample}{ either a number of points to be sampled or a vector of indices into the input \code{x} and \code{y} giving the points to be plotted. Useful when the input vectors are large and plotting all points is not practical. } \item{corFnc}{ character string giving the correlation function to annotate the plot. } \item{corOptions}{ character string giving further options to the correlation function. } \item{main}{ main title for the plot. } \item{xlab}{ label for the x-axis. } \item{ylab}{ label for the y-axis. } \item{cex}{ character expansion factor for plot annotations. } \item{cex.axis}{ character expansion factor for axis annotations. } \item{cex.lab}{ character expansion factor for axis labels. } \item{cex.main}{ character expansion factor for the main title.
} \item{abline}{ logical: should the linear regression fit line be plotted? } \item{abline.color}{ color specification for the fit line. } \item{abline.lty}{ line type for the fit line. } \item{corLabel}{ character string to be used as the label for the correlation value printed in the main title. } \item{showMSE}{ logical: should the MSE be added to the main title?} \item{\dots}{ other arguments to the function \code{plot}. } } \details{ Irrespective of the specified correlation function, the MSE is always calculated based on the residuals of a linear model. } \value{ If \code{sample} above is given, the indices of the plotted points are returned invisibly. } \author{ Chaochao Cai, Steve Horvath } \note{ This function is based on verboseScatterplot (Steve Horvath and Peter Langfelder), iplot (Andreas Ruckstuhl, Rene Locher) and greenWhiteRed (Peter Langfelder) } \seealso{ \link{image} for more parameters } \keyword{graphics } WGCNA/man/labeledBarplot.Rd \name{labeledBarplot} \alias{labeledBarplot} \title{ Barplot with text or color labels. } \description{ Produce a barplot with extra annotation. } \usage{ labeledBarplot( Matrix, labels, colorLabels = FALSE, colored = TRUE, setStdMargins = TRUE, stdErrors = NULL, cex.lab = NULL, xLabelsAngle = 45, ...) } \arguments{ \item{Matrix}{ vector or a matrix to be plotted. } \item{labels}{ labels to annotate the bars underneath the barplot. } \item{colorLabels}{ logical: should the labels be interpreted as colors? If \code{TRUE}, the bars will be labeled by colored squares instead of text. See details. } \item{colored}{ logical: should the bars be divided into segments and colored? If \code{TRUE}, assumes the \code{labels} can be interpreted as colors, and the input \code{Matrix} is square and the rows have the same labels as the columns. See details. } \item{setStdMargins}{ if \code{TRUE}, the function will set the margins to \code{c(3, 3, 2, 2)+0.2}.
} \item{stdErrors}{ if given, error bars corresponding to \code{1.96*stdErrors} will be plotted on top of the bars. } \item{cex.lab}{ character expansion factor for axis labels, including the text labels underneath the barplot. } \item{xLabelsAngle}{angle at which text labels under the barplot will be printed. } \item{\dots}{ other parameters for the function \code{\link{barplot}}. } } \details{ Individual bars in the barplot can be identified either by printing the text of the corresponding entry in \code{labels} underneath the bar at the angle specified by \code{xLabelsAngle}, or by interpreting the \code{labels} entry as a color (see below) and drawing a correspondingly colored square underneath the bar. For reasons of compatibility with other functions, \code{labels} are interpreted as colors after stripping the first two characters from each label. For example, the label \code{"MEturquoise"} is interpreted as the color turquoise. If \code{colored} is set, the code assumes that \code{labels} can be interpreted as colors, and the input \code{Matrix} is square and the rows have the same labels as the columns. Each bar in the barplot is then sectioned into contributions from each row entry in \code{Matrix} and is colored by the color given by the entry in \code{labels} that corresponds to the row. } \value{ None. } \author{ Peter Langfelder } \keyword{ hplot }% __ONLY ONE__ keyword per line WGCNA/man/hierarchicalConsensusMEDissimilarity.Rd \name{hierarchicalConsensusMEDissimilarity} \alias{hierarchicalConsensusMEDissimilarity} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Hierarchical consensus calculation of module eigengene dissimilarity } \description{ Hierarchical consensus calculation of module eigengene dissimilarities, or more generally, correlation-based dissimilarities of sets of vectors.
} \usage{ hierarchicalConsensusMEDissimilarity( MEs, networkOptions, consensusTree, greyName = "ME0", calibrate = FALSE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{MEs}{ A \code{\link{multiData}} structure containing vectors (usually module eigengenes) whose consensus dissimilarity is to be calculated. } \item{networkOptions}{ A \code{\link{multiData}} structure containing, for each input data set, a list of class \code{\link{NetworkOptions}} giving options for network calculation for all of the networks. } \item{consensusTree}{ A list specifying the consensus calculation. See details. } \item{greyName}{ Name of the "grey" module eigengene. Currently not used. } \item{calibrate}{ Logical: should the dissimilarities be calibrated using the calibration method specified in \code{consensusTree}? See details. } } \details{ This function first calculates the similarities of the ME vectors from their correlations, using the appropriate options in \code{networkOptions} (correlation type and options, signed or unsigned dissimilarity etc). This results in a similarity matrix in each of the input data sets. Next, a hierarchical consensus of the similarities is calculated via a call to \code{\link{hierarchicalConsensusCalculation}}, using the consensus specification and options in \code{consensusTree}. In typical use, \code{consensusTree} contains the same consensus specification as the consensus network calculation that gave rise to the consensus modules whose eigengenes are contained in \code{MEs}, but this is not mandatory. The argument \code{consensusTree} should have the following components: (1) \code{inputs} must be either a character vector whose components match \code{names(inputData)}, or consensus trees in their own right. (2) \code{consensusOptions} must be a list of class \code{"ConsensusOptions"} that specifies options for calculating the consensus.
A suitable set of options can be obtained by calling \code{\link{newConsensusOptions}}. (3) Optionally, the component \code{analysisName} can be a single character string giving the name for the analysis. When intermediate results are returned, they are returned in a list whose names will be set from \code{analysisName} components, if they exist. In the final step, the consensus similarity is turned into a dissimilarity by subtracting it from 1. } \value{ A matrix with rows and columns corresponding to the variables (modules) in MEs, containing the consensus dissimilarities. } \author{ Peter Langfelder } \seealso{ \code{\link{hierarchicalConsensusCalculation}} for the actual consensus calculation. } \keyword{misc} WGCNA/man/relativeCorPredictionSuccess.Rd \name{relativeCorPredictionSuccess} \alias{relativeCorPredictionSuccess} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Compare prediction success } \description{ Compare prediction success of several gene screening methods. } \usage{ relativeCorPredictionSuccess( corPredictionNew, corPredictionStandard, corTestSet, topNumber = 100) } %- maybe also 'usage' for other objects documented here.
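As a concrete illustration of the \code{consensusTree} structure described for \code{hierarchicalConsensusMEDissimilarity} above, a minimal sketch follows; it is not taken from the package sources, and the set names \code{"setA"}/\code{"setB"} are invented placeholders that would have to match \code{names(inputData)} in a real analysis.

```r
# Hedged sketch: building a simple (non-nested) consensusTree list.
# Assumes the WGCNA package is installed; newConsensusOptions() returns
# a list of class "ConsensusOptions" with default consensus settings.
library(WGCNA)

consensusTree <- list(
  inputs = c("setA", "setB"),               # must match names(inputData)
  consensusOptions = newConsensusOptions(), # class "ConsensusOptions"
  analysisName = "exampleConsensus"         # optional analysis label
)
```

Nested (hierarchical) specifications would replace entries of \code{inputs} with further lists of the same form.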
\arguments{ \item{corPredictionNew}{ Matrix of predictor statistics } \item{corPredictionStandard}{ Reference predictor statistics} \item{corTestSet}{ Correlations of predictor variables with trait in test set} \item{topNumber}{ A vector giving the numbers of top genes to consider } } \value{ Data frame with components \item{topNumber}{copy of the input \code{topNumber}} \item{kruskalp}{Kruskal-Wallis p-values} } \author{ Steve Horvath } \seealso{ \code{\link{corPredictionSuccess}} } \keyword{ misc } WGCNA/man/goodSamplesGenes.Rd \name{goodSamplesGenes} \alias{goodSamplesGenes} \title{ Iterative filtering of samples and genes with too many missing entries } \description{ This function checks data for missing entries, entries with weights below a threshold, and zero-variance genes, and returns a list of samples and genes that pass criteria on maximum number of missing or low weight values. If necessary, the filtering is iterated. } \usage{ goodSamplesGenes( datExpr, weights = NULL, minFraction = 1/2, minNSamples = ..minNSamples, minNGenes = ..minNGenes, tol = NULL, minRelativeWeight = 0.1, verbose = 1, indent = 0) } \arguments{ \item{datExpr}{ expression data. A matrix or data frame in which columns are genes and rows are samples. } \item{weights}{optional observation weights in the same format (and dimensions) as \code{datExpr}.} \item{minFraction}{ minimum fraction of non-missing samples for a gene to be considered good. } \item{minNSamples}{ minimum number of non-missing samples for a gene to be considered good. } \item{minNGenes}{ minimum number of good genes for the data set to be considered fit for analysis. If the actual number of good genes falls below this threshold, an error will be issued. } \item{tol}{ an optional 'small' number to compare the variance against. Defaults to the square of \code{1e-10 * max(abs(datExpr), na.rm = TRUE)}.
The reason for comparing the variance to this number, rather than zero, is that the fast way of computing variance used by this function sometimes causes small numerical overflow errors which make variance of constant vectors slightly non-zero; comparing the variance to \code{tol} rather than zero prevents the retaining of such genes as 'good genes'.} \item{minRelativeWeight}{ observations whose relative weight is below this threshold will be considered missing. Here relative weight is weight divided by the maximum weight in the column (gene).} \item{verbose}{ integer level of verbosity. Zero means silent, higher values make the output progressively more and more verbose. } \item{indent}{ indentation for diagnostic messages. Zero means no indentation, each unit adds two spaces. } } \details{ This function iteratively identifies samples and genes with too many missing entries and genes with zero variance. If weights are given, entries with relative weight (weight divided by maximum weight in the column) below \code{minRelativeWeight} will be considered missing. The process is repeated until the lists of good samples and genes are stable. The constants \code{..minNSamples} and \code{..minNGenes} are both set to the value 4. } \value{ A list with the following components: \item{goodSamples}{ A logical vector with one entry per sample that is \code{TRUE} if the sample is considered good and \code{FALSE} otherwise. } \item{goodGenes}{ A logical vector with one entry per gene that is \code{TRUE} if the gene is considered good and \code{FALSE} otherwise.
} } \author{ Peter Langfelder } \seealso{ \code{\link{goodSamples}}, \code{\link{goodGenes}} } \keyword{ misc }% __ONLY ONE__ keyword per line WGCNA/man/plotEigengeneNetworks.Rd \name{plotEigengeneNetworks} \alias{plotEigengeneNetworks} \title{ Eigengene network plot } \description{ This function plots dendrogram and eigengene representations of (consensus) eigengene networks. In the case of consensus eigengene networks the function also plots pairwise preservation measures between consensus networks in different sets. } \usage{ plotEigengeneNetworks( multiME, setLabels, letterSubPlots = FALSE, Letters = NULL, excludeGrey = TRUE, greyLabel = "grey", plotDendrograms = TRUE, plotHeatmaps = TRUE, setMargins = TRUE, marDendro = NULL, marHeatmap = NULL, colorLabels = TRUE, signed = TRUE, heatmapColors = NULL, plotAdjacency = TRUE, printAdjacency = FALSE, cex.adjacency = 0.9, coloredBarplot = TRUE, barplotMeans = TRUE, barplotErrors = FALSE, plotPreservation = "standard", zlimPreservation = c(0, 1), printPreservation = FALSE, cex.preservation = 0.9, ...) } \arguments{ \item{multiME}{ either a single data frame containing the module eigengenes, or module eigengenes in the multi-set format (see \code{\link{checkSets}}). The multi-set format is a vector of lists, one per set. Each set must contain a component \code{data} whose rows correspond to samples and columns to eigengenes. } \item{setLabels}{ A vector of character strings that label sets in \code{multiME}. } \item{letterSubPlots}{ logical: should subplots be lettered? } \item{Letters}{optional specification of a sequence of letters for lettering. Defaults to "ABCD"... } \item{excludeGrey}{ logical: should the grey module eigengene be excluded from the plots? } \item{greyLabel}{ label for the grey module. Usually either "grey" or the number 0. } \item{plotDendrograms}{ logical: should eigengene dendrograms be plotted?
} \item{plotHeatmaps}{ logical: should eigengene network heatmaps be plotted? } \item{setMargins}{ logical: should margins be set? See \code{\link[graphics]{par}}.} \item{marDendro}{ a vector of length 4 giving the margin setting for dendrogram plots. See \code{\link[graphics]{par}}. If \code{setMargins} is \code{TRUE} and \code{marDendro} is not given, the function will provide reasonable default values. } \item{marHeatmap}{ a vector of length 4 giving the margin setting for heatmap plots. See \code{\link[graphics]{par}}. If \code{setMargins} is \code{TRUE} and \code{marHeatmap} is not given, the function will provide reasonable default values. } \item{colorLabels}{ logical: should module eigengene names be interpreted as color names and the colors used to label heatmap plots and barplots? } \item{signed}{ logical: should eigengene networks be constructed as signed? } \item{heatmapColors}{ color palette for heatmaps. Defaults to \code{\link{heat.colors}} when \code{signed} is \code{FALSE}, and to \code{\link{redWhiteGreen}} when \code{signed} is \code{TRUE}. } \item{plotAdjacency}{ logical: should module eigengene heatmaps plot adjacency (ranging from 0 to 1), or correlation (ranging from -1 to 1)? } \item{printAdjacency}{ logical: should the numerical values be printed into the adjacency or correlation heatmap? } \item{cex.adjacency}{ character expansion factor for printing of numerical values into the adjacency or correlation heatmap } \item{coloredBarplot}{ logical: should the barplot of eigengene adjacency preservation distinguish individual contributions by color? This is possible only if \code{colorLabels} is \code{TRUE} and module eigengene names encode valid colors. } \item{barplotMeans}{ logical: plot mean preservation in the barplot? This option effectively rescales the preservation by the number of eigengenes in the network. If means are plotted, the barplot is not colored.
} \item{barplotErrors}{ logical: should standard errors of the mean preservation be plotted? } \item{plotPreservation}{ a character string specifying which type of preservation measure to plot. Allowed values are (unique abbreviations of) \code{"standard"}, \code{"hyperbolic"}, \code{"both"}. } \item{zlimPreservation}{ a vector of length 2 giving the value limits for the preservation heatmaps. } \item{printPreservation}{ logical: should preservation values be printed within the heatmap? } \item{cex.preservation}{ character expansion factor for preservation display. } \item{\dots}{ other graphical arguments to function \code{\link{labeledHeatmap}}. } } \details{ Consensus eigengene networks consist of a fixed set of eigengenes "expressed" in several different sets. Network connection strengths are given by eigengene correlations. This function aims to visualize the networks as well as their similarities and differences across sets. The function partitions the screen appropriately and plots eigengene dendrograms in the top row, then a square matrix of plots: heatmap plots of eigengene networks in each set on the diagonal, heatmap plots of pairwise preservation networks below the diagonal, and barplots of aggregate network preservation of individual eigengenes above the diagonal. A preservation plot or barplot in the row i and column j of the square matrix represents the preservation between sets i and j. Individual eigengenes are labeled by their name in the dendrograms; in the heatmaps and barplots they can optionally be labeled by color squares. For compatibility with other functions, the color labels are encoded in the eigengene names by prefixing the color with two letters, such as \code{"MEturquoise"}. Two types of network preservation can be plotted: the \code{"standard"} is simply the difference between adjacencies in the two compared sets. The \code{"hyperbolic"} difference de-emphasizes the preservation of low adjacencies. 
When \code{"both"} is specified, standard preservation is plotted in the lower triangle and hyperbolic in the upper triangle of each preservation heatmap. If the eigengenes are labeled by color, the bars in the barplot can be split into segments representing the contribution of each eigengene and labeled by the contribution. For example, a yellow segment in a bar labeled by a turquoise square represents the preservation of the adjacency between the yellow and turquoise eigengenes in the two networks compared by the barplot. For large numbers of eigengenes and/or sets, it may be difficult to fit a meaningful plot on a standard computer screen. In such cases we recommend using a device such as \code{\link{postscript}} or \code{\link{pdf}} where the user can specify large dimensions; such plots can be conveniently viewed in standard pdf or postscript viewers. } \value{ None. } \references{ For theory and applications of consensus eigengene networks, see Langfelder P, Horvath S (2007) Eigengene networks for studying the relationships between co-expression modules. BMC Systems Biology 2007, 1:54 } \author{ Peter Langfelder } \seealso{ \code{\link{labeledHeatmap}}, \code{\link{labeledBarplot}} for annotated heatmaps and barplots; \code{\link[stats]{hclust}} for hierarchical clustering and dendrogram plots } \keyword{ hplot } WGCNA/man/multiSetMEs.Rd \name{multiSetMEs} \alias{multiSetMEs} \title{Calculate module eigengenes. } \description{ Calculates module eigengenes for several sets.
} \usage{ multiSetMEs(exprData, colors, universalColors = NULL, useSets = NULL, useGenes = NULL, impute = TRUE, nPC = 1, align = "along average", excludeGrey = FALSE, grey = if (is.null(universalColors)) { if (is.numeric(colors)) 0 else "grey" } else if (is.numeric(universalColors)) 0 else "grey", subHubs = TRUE, trapErrors = FALSE, returnValidOnly = trapErrors, softPower = 6, verbose = 1, indent = 0) } \arguments{ \item{exprData}{Expression data in a multi-set format (see \code{\link{checkSets}}). A vector of lists, with each list corresponding to one microarray dataset and expression data in the component \code{data}, that is \code{expr[[set]]$data[sample, probe]} is the expression of probe \code{probe} in sample \code{sample} in dataset \code{set}. The number of samples can be different between the sets, but the probes must be the same. } \item{colors}{A matrix of dimensions (number of probes, number of sets) giving the module assignment of each gene in each set. The color "grey" is interpreted as unassigned.} \item{universalColors}{Alternative specification of module assignment. A single vector of length (number of probes) giving the module assignment of each gene in all sets (that is the modules are common to all sets). If given, takes precedence over \code{colors}.} \item{useSets}{If calculations are requested in (a) selected set(s) only, the set(s) can be specified here. Defaults to all sets.} \item{useGenes}{Can be used to restrict calculation to a subset of genes (the same subset in all sets). If given, \code{validColors} in the returned list will only contain colors for the genes specified in \code{useGenes}.} \item{impute}{Logical. If \code{TRUE}, expression data will be checked for the presence of \code{NA} entries and if the latter are present, numerical data will be imputed, using function \code{impute.knn} and probes from the same module as the missing datum.
The function \code{impute.knn} uses a fixed random seed giving repeatable results.} \item{nPC}{Number of principal components to be calculated. If only eigengenes are needed, it is best to set it to 1 (default). If variance explained is needed as well, use value \code{NULL}. This will cause all principal components to be computed, which is slower.} \item{align}{Controls whether eigengenes, whose orientation is undetermined, should be aligned with average expression (\code{align = "along average"}, the default) or left as they are (\code{align = ""}). Any other value will trigger an error.} \item{excludeGrey}{Should the improper module consisting of 'grey' genes be excluded from the eigengenes?} \item{grey}{Value of \code{colors} or \code{universalColors} (whichever applies) designating the improper module. Note that if the appropriate colors argument is a factor of numbers, the default value will be incorrect.} \item{subHubs}{Controls whether hub genes should be substituted for missing eigengenes. If \code{TRUE}, each missing eigengene (i.e., eigengene whose calculation failed and the error was trapped) will be replaced by a weighted average of the most connected hub genes in the corresponding module. If this calculation fails, or if \code{subHubs==FALSE}, the value of \code{trapErrors} will determine whether the offending module will be removed or whether the function will issue an error and stop.} \item{trapErrors}{Controls handling of errors that may arise when there are too many \code{NA} entries in expression data. If \code{TRUE}, such errors will be trapped without abnormal exit. If \code{FALSE}, errors will cause the function to stop. Note, however, that \code{subHubs} takes precedence in the sense that if \code{subHubs==TRUE} and \code{trapErrors==FALSE}, an error will be issued only if both the principal component and the hubgene calculations have failed. } \item{returnValidOnly}{Boolean.
Controls whether the returned data frames of module eigengenes contain columns corresponding only to modules whose eigengenes or hub genes could be calculated correctly in every set (\code{TRUE}), or whether the data frame should have columns for each of the input color labels (\code{FALSE}).} \item{softPower}{The power used in soft-thresholding the adjacency matrix. Only used when the hubgene approximation is necessary because the principal component calculation failed. It must be non-negative. The default value should only be changed if there is a clear indication that it leads to incorrect results.} \item{verbose}{Controls verbosity of printed progress messages. 0 means silent, up to (about) 5 the verbosity gradually increases.} \item{indent}{A single non-negative integer controlling indentation of printed messages. 0 means no indentation, each unit above that adds two spaces. } } \details{ This function calls \code{\link{moduleEigengenes}} for each set in \code{exprData}. Module eigengene is defined as the first principal component of the expression matrix of the corresponding module. The calculation may fail if the expression data has too many missing entries. Handling of such errors is controlled by the arguments \code{subHubs} and \code{trapErrors}. If \code{subHubs==TRUE}, errors in principal component calculation will be trapped and a substitute calculation of hubgenes will be attempted. If this fails as well, behaviour depends on \code{trapErrors}: if \code{TRUE}, the offending module will be ignored and the return value will allow the user to remove the module from further analysis; if \code{FALSE}, the function will stop. If \code{universalColors} is given, any offending module will be removed from all sets (see \code{validMEs} in return value below). From the user's point of view, setting \code{trapErrors=FALSE} ensures that if the function returns normally, there will be a valid eigengene (principal component or hubgene) for each of the input colors. 
If the user sets \code{trapErrors=TRUE}, all calculational (but not input) errors will be trapped, but the user should check the output (see below) to make sure all modules have a valid returned eigengene. While the principal component calculation can fail even on relatively sound data (it does not take all that many "well-placed" \code{NA} to torpedo the calculation), it takes many more irregularities in the data for the hubgene calculation to fail. In fact such a failure signals there likely is something seriously wrong with the data. } \value{ A vector of lists similar in spirit to the input \code{exprData}. For each set there is a list with the following components: \item{data}{Module eigengenes in a data frame, with each column corresponding to one eigengene. The columns are named by the corresponding color with an \code{"ME"} prepended, e.g., \code{MEturquoise} etc. Note that, when \code{trapErrors == TRUE} and \code{returnValidOnly==FALSE}, this data frame also contains entries corresponding to removed modules, if any. (\code{validMEs} below indicates which eigengenes are valid and \code{allOK} whether all module eigengenes were successfully calculated.) } \item{averageExpr}{If \code{align == "along average"}, a dataframe containing average normalized expression in each module. The columns are named by the corresponding color with an \code{"AE"} prepended, e.g., \code{AEturquoise} etc.} \item{varExplained}{A dataframe in which each column corresponds to a module, with the component \code{varExplained[PC, module]} giving the variance of module \code{module} explained by the principal component no. \code{PC}. This is only accurate if all principal components have been computed (input \code{nPC = NULL}). At most 5 principal components are recorded in this dataframe.} \item{nPC}{A copy of the input \code{nPC}.} \item{validMEs}{A boolean vector.
Each component (corresponding to the columns in \code{data}) is \code{TRUE} if the corresponding eigengene is valid, and \code{FALSE} if it is invalid. Valid eigengenes include both principal components and their hubgene approximations. When \code{returnValidOnly==FALSE}, by definition all returned eigengenes are valid and the entries of \code{validMEs} are all \code{TRUE}. } \item{validColors}{A copy of the input colors (\code{universalColors} if set, otherwise \code{colors[, set]}) with entries corresponding to invalid modules set to \code{grey} if given, otherwise 0 if the appropriate input colors are numeric and "grey" otherwise.} \item{allOK}{Boolean flag signalling whether all eigengenes have been calculated correctly, either as principal components or as the hubgene approximation. If \code{universalColors} is set, this flag signals whether all eigengenes are valid in all sets.} \item{allPC}{Boolean flag signalling whether all returned eigengenes are principal components. This flag (as well as the subsequent ones) is set independently for each set.} \item{isPC}{Boolean vector. Each component (corresponding to the columns in \code{eigengenes}) is \code{TRUE} if the corresponding eigengene is the first principal component and \code{FALSE} if it is the hubgene approximation or is invalid. } \item{isHub}{Boolean vector. Each component (corresponding to the columns in \code{eigengenes}) is \code{TRUE} if the corresponding eigengene is the hubgene approximation and \code{FALSE} if it is the first principal component or is invalid.} \item{validAEs}{Boolean vector. Each component (corresponding to the columns in \code{eigengenes}) is \code{TRUE} if the corresponding module average expression is valid.} \item{allAEOK}{Boolean flag signalling whether all returned module average expressions contain valid data. 
Note that \code{returnValidOnly==TRUE} does not imply \code{allAEOK==TRUE}: some invalid average expressions may be returned if their corresponding eigengenes have been calculated correctly.} } \author{ Peter Langfelder, \email{Peter.Langfelder@gmail.com} } \seealso{\code{\link{moduleEigengenes}}} \keyword{misc} WGCNA/man/accuracyMeasures.Rd \name{accuracyMeasures} \alias{accuracyMeasures} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Accuracy measures for a 2x2 confusion matrix or for vectors of predicted and observed values. } \description{ The function calculates various prediction accuracy statistics for predictions of binary or quantitative (continuous) responses. For binary classification, the function calculates the error rate, accuracy, sensitivity, specificity, positive predictive value, and other accuracy measures. For quantitative prediction, the function calculates correlation, R-squared, error measures, and the C-index. } \usage{ accuracyMeasures( predicted, observed = NULL, type = c("auto", "binary", "quantitative"), levels = if (isTRUE(all.equal(dim(predicted), c(2,2)))) colnames(predicted) else if (is.factor(predicted)) sort(unique(c(as.character(predicted), as.character(observed)))) else sort(unique(c(observed, predicted))), negativeLevel = levels[2], positiveLevel = levels[1] ) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{predicted}{ either a 2x2 confusion matrix (table) whose entries contain non-negative integers, or a vector of predicted values. Predicted values can be binary or quantitative (see \code{type} below). If a 2x2 matrix is given, it must have valid column and row names that specify the levels of the predicted and observed variables whose counts the matrix is giving (e.g., the function \code{\link{table}} sets the names appropriately.)
If it is a 2x2 table and the table contains non-negative real (non-integer) numbers the function outputs a warning. } \item{observed}{ if \code{predicted} is a vector of predicted values, this (\code{observed}) must be a vector of the same length giving the "gold standard" (or observed) values. Ignored if \code{predicted} is a 2x2 table. } \item{type}{ character string specifying the type of the prediction problem (i.e., values in the \code{predicted} and \code{observed} vectors). The default \code{"auto"} decides type automatically: if \code{predicted} is a 2x2 table or if the number of unique values in the concatenation of \code{predicted} and \code{observed} is 2, the prediction problem (type) is assumed to be binary, otherwise it is assumed to be quantitative. Inconsistent specification (for example, when \code{predicted} is a 2x2 matrix and \code{type} is \code{"quantitative"}) triggers an error. } \item{levels}{ a 2-element vector specifying the two levels of binary variables. Only used if \code{type} is \code{"binary"} (or \code{"auto"} that results in the binary type). Defaults to either the column names of the confusion matrix (if the matrix is specified) or to the sorted unique values of \code{observed} and \code{predicted}. } \item{negativeLevel}{ the binary value (level) that corresponds to the negative outcome. Note that the default is the second of the sorted levels (for example, if levels are 1,2, the default negative level is 2). Only used if \code{type} is \code{"binary"} (or \code{"auto"} that results in the binary type).} \item{positiveLevel}{ the binary value (level) that corresponds to the positive outcome. Note that the default is the first of the sorted levels (for example, if levels are 1,2, the default positive level is 1).
Only used if \code{type} is \code{"binary"} (or \code{"auto"} that results in the binary type).} } \details{ The rows of the 2x2 table tab must correspond to a test (or predicted) outcome and the columns to a true outcome ("gold standard"). A table that relates a predicted outcome to a true test outcome is also known as a confusion matrix. Warning: To correctly calculate sensitivity and specificity, the positive and negative outcome must be properly specified so they can be matched to the appropriate rows and columns in the confusion table. Interchanging the negative and positive levels swaps the estimates of the sensitivity and specificity but has no effect on the error rate or accuracy. Specifically, denote by \code{pos} the index of the positive level in the confusion table, and by \code{neg} the index of the negative level in the confusion table. The function then defines the number of true positives TP=tab[pos, pos], false positives FP=tab[pos, neg], false negatives FN=tab[neg, pos], and true negatives TN=tab[neg, neg]. Then

Specificity = TN/(FP+TN)

Sensitivity = TP/(TP+FN)

NegativePredictiveValue = TN/(FN+TN)

PositivePredictiveValue = TP/(TP+FP)

FalsePositiveRate = 1-Specificity

FalseNegativeRate = 1-Sensitivity

Power = Sensitivity

LikelihoodRatioPositive = Sensitivity/(1-Specificity)

LikelihoodRatioNegative = (1-Sensitivity)/Specificity.

The naive error rate is the error rate of a constant (naive) predictor that assigns the same outcome to all samples. The prediction of the naive predictor equals the most frequently observed outcome. Example: Assume you want to predict disease status and 70 percent of the observed samples have the disease. Then the naive predictor assigns the disease to all samples and has an error rate of 30 percent (it misclassifies the 30 percent of individuals without the disease). 
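For illustration, the binary measures above can be computed directly from a confusion table in base R (this is only a sketch of the definitions with made-up counts, not the function's internal implementation):

```r
# Illustration of the definitions above using base R and made-up counts;
# rows are predicted outcomes, columns are observed ("gold standard").
tab = matrix(c(40, 10, 5, 45), nrow = 2,
             dimnames = list(predicted = c("pos", "neg"),
                             observed  = c("pos", "neg")))
TP = tab["pos", "pos"]; FP = tab["pos", "neg"]
FN = tab["neg", "pos"]; TN = tab["neg", "neg"]
c(Sensitivity = TP/(TP + FN),              # 40/50 = 0.8
  Specificity = TN/(FP + TN),              # 45/50 = 0.9
  PositivePredictiveValue = TP/(TP + FP),
  NegativePredictiveValue = TN/(FN + TN),
  ErrorRate = (FP + FN)/sum(tab))
```

Interchanging pos and neg in the indexing swaps the sensitivity and specificity estimates, as noted above.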
} \value{ Data frame with two columns: \item{Measure}{this column contains character strings that specify the name of the accuracy measure.} \item{Value}{this column contains the numeric estimates of the corresponding accuracy measures.} } \references{ http://en.wikipedia.org/wiki/Sensitivity_and_specificity } \author{ Steve Horvath and Peter Langfelder } \examples{ m=100 trueOutcome=sample( c(1,2),m,replace=TRUE) predictedOutcome=trueOutcome # now we randomly permute half of the entries of the predicted outcome predictedOutcome[ 1:(m/2)] =sample(predictedOutcome[ 1:(m/2)] ) tab=table(predictedOutcome, trueOutcome) accuracyMeasures(tab) # Same result: accuracyMeasures(predictedOutcome, trueOutcome) } \keyword{ misc } WGCNA/man/PWLists.Rd0000644000176200001440000000146014012015545013516 0ustar liggesusers\name{PWLists} \alias{PWLists} \docType{data} \title{Pathways with Corresponding Gene Markers - Compiled by Mike Palazzolo and Jim Wang from CHDI} \description{ This matrix gives a predefined set of marker genes for many immune response pathways, as assembled by Mike Palazzolo and Jim Wang from CHDI, and colleagues. It is used with userListEnrichment to search user-defined gene lists for enrichment. } \usage{data(PWLists)} \format{ A 124350 x 2 matrix of characters containing 2724 Gene / Category pairs. The first column (Gene) lists genes corresponding to a given category (second column). Each Category entry is of the form __. 
} \source{ For more information about this list, please see \code{\link{userListEnrichment}} } \examples{ data(PWLists) head(PWLists) } \keyword{datasets} WGCNA/man/matrixToNetwork.Rd0000644000176200001440000000462614012015545015341 0ustar liggesusers\name{matrixToNetwork} \alias{matrixToNetwork} \title{ Construct a network from a matrix } \description{ Constructs a network adjacency matrix from a given square matrix by symmetrizing it, scaling it to the interval [0,1], and raising it to a soft-thresholding power. } \usage{ matrixToNetwork( mat, symmetrizeMethod = c("average", "min", "max"), signed = TRUE, min = NULL, max = NULL, power = 12, diagEntry = 1) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{mat}{matrix to be turned into a network. Must be square. } \item{symmetrizeMethod}{ method for symmetrizing the matrix. The method will be applied to each component of mat and its transpose. } \item{signed}{ logical: should the resulting network be signed? Unsigned networks are constructed from \code{abs(mat)}. } \item{min}{ minimum allowed value for \code{mat}. If \code{NULL}, the actual attained minimum of \code{mat} will be used. Missing data are ignored. Values below \code{min} are truncated to \code{min}. } \item{max}{ maximum allowed value for \code{mat}. If \code{NULL}, the actual attained maximum of \code{mat} will be used. Missing data are ignored. Values above \code{max} are truncated to \code{max}. } \item{power}{ the soft-thresholding power. } \item{diagEntry}{ the value of the entries on the diagonal in the result. This is usually 1 but some applications may require a zero (or even NA) diagonal. } } \details{ If \code{signed} is \code{FALSE}, the matrix \code{mat} is first converted to its absolute value. This function then symmetrizes the matrix using the \code{symmetrizeMethod} component-wise on \code{mat} and \code{t(mat)} (i.e., the transpose of \code{mat}). In the next step, the symmetrized matrix is linearly scaled to the interval [0,1] using either \code{min} and \code{max} (each either supplied or determined from the matrix). 
Values outside of the [min, max] range are truncated to \code{min} or \code{max}. Lastly, the adjacency is calculated by raising the matrix to \code{power}. The diagonal of the result is set to \code{diagEntry}. Note that most WGCNA functions expect the diagonal of an adjacency matrix to be 1. } \value{ The adjacency matrix that encodes the network. } \author{ Peter Langfelder } \seealso{ \code{adjacency} for calculation of a correlation network (adjacency) from a numeric matrix such as expression data; \code{adjacency.fromSimilarity} for simpler calculation of a network from a symmetric similarity matrix. } \keyword{misc}% __ONLY ONE__ keyword per line WGCNA/man/convertNumericColumnsToNumeric.Rd0000644000176200001440000000172614012015545020350 0ustar liggesusers\name{convertNumericColumnsToNumeric} \alias{convertNumericColumnsToNumeric} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Convert character columns that represent numbers to numeric } \description{ This function converts to numeric those character columns in the input that can be converted to numeric without generating missing values except for the allowed NA representations. } \usage{ convertNumericColumnsToNumeric( data, naStrings = c("NA", "NULL", "NO DATA"), unFactor = TRUE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{data}{ A data frame. } \item{naStrings}{ Character vector of values that are allowed to convert to \code{NA} (a missing numeric value). } \item{unFactor}{ Logical: should the function first convert all factor columns to character? } } \value{ A data frame with convertible columns converted to numeric. } \author{ Peter Langfelder } \keyword{misc}% __ONLY ONE__ keyword per line WGCNA/man/spaste.Rd0000644000176200001440000000123614012015545013451 0ustar liggesusers\name{spaste} \alias{spaste} %- Also NEED an '\alias' for EACH other topic documented here. 
\title{ Space-less paste } \description{ A convenient wrapper for the \code{\link{paste}} function with \code{sep=""}. } \usage{ spaste(...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{\dots}{ standard arguments to function \code{\link{paste}} except \code{sep}. } } \value{ The result of the corresponding \code{\link{paste}}. } \author{ Peter Langfelder } \note{ Do not use the \code{sep} argument. Using it will lead to an error. } \seealso{ \code{\link{paste}} } \examples{ a = 1; paste("a=", a); spaste("a=", a); } \keyword{misc} WGCNA/man/multiGSub.Rd0000644000176200001440000000435414012015545014071 0ustar liggesusers\name{multiGSub} \alias{multiGSub} \alias{multiSub} \alias{multiGrep} \alias{multiGrepl} \title{ Analogs of grep(l) and (g)sub for multiple patterns and replacements } \description{ These functions provide convenient pattern finding and substitution for multiple patterns. } \usage{ multiGSub(patterns, replacements, x, ...) multiSub(patterns, replacements, x, ...) multiGrep(patterns, x, ..., sort = TRUE, value = FALSE, invert = FALSE) multiGrepl(patterns, x, ...) } \arguments{ \item{patterns}{ A character vector of patterns. } \item{replacements}{ A character vector of replacements; must be of the same length as \code{patterns}. } \item{x}{ Character vector of strings in which the pattern finding and replacements should be carried out. } \item{sort}{Logical: should the output indices be sorted in increasing order?} \item{value}{Logical: should value rather than the index of the value be returned?} \item{invert}{Logical: should the search be inverted and only indices of elements of \code{x} matching none of the patterns be returned?} \item{\dots}{ Other arguments to \code{\link{sub}} or \code{\link{grep}} } } \details{ For each element of \code{x}, patterns are sequentially searched for and (for \code{multiSub} and \code{multiGSub}) substituted with the corresponding replacement. 
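The sequential substitution described above can be sketched in base R as follows (a conceptual sketch of the semantics using \code{gsub}; the package's actual implementation may differ):

```r
# Conceptual sketch of multiGSub semantics: apply each pattern/replacement
# pair in turn via base gsub (not the actual WGCNA implementation).
multiGSubSketch = function(patterns, replacements, x, ...) {
  stopifnot(length(patterns) == length(replacements))
  for (i in seq_along(patterns))
    x = gsub(patterns[i], replacements[i], x, ...)
  x
}
multiGSubSketch(c("aa", "b"), c("X", "Y"), c("aab", "baa"))
# "XY" "YX"
```

Note that because substitutions are applied in order, an earlier replacement can itself be modified by a later pattern.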
} \value{ \code{multiSub} and \code{multiGSub} return a character vector of the same length as \code{x}, with all patterns replaced by their replacements in each element of \code{x}. \code{multiSub} replaces each pattern in each element of \code{x} only once, \code{multiGSub} as many times as the pattern is found. \code{multiGrep} returns the indices of those elements in \code{x} in which at least one of \code{patterns} was found, or, if \code{invert} is TRUE, the indices of elements in which none of the patterns were found. If \code{value} is TRUE, values rather than indices are returned. \code{multiGrepl} returns a logical vector of the same length as \code{x}, with \code{TRUE} if any of the patterns matched the element of \code{x}, and \code{FALSE} otherwise. } \author{ Peter Langfelder } \seealso{ The workhorse functions \code{\link{sub}}, \code{\link{gsub}}, \code{\link{grep}} and \code{\link{grepl}}. } \keyword{misc}% __ONLY ONE__ keyword per line WGCNA/man/sampledHierarchicalConsensusModules.Rd0000644000176200001440000001311714533632240021336 0ustar liggesusers\name{sampledHierarchicalConsensusModules} \alias{sampledHierarchicalConsensusModules} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Hierarchical consensus module identification in sampled data } \description{ This function repeatedly resamples the samples (rows) in the supplied data and identifies hierarchical consensus modules on the resampled data. 
} \usage{ sampledHierarchicalConsensusModules( multiExpr, multiWeights = NULL, networkOptions, consensusTree, nRuns, startRunIndex = 1, endRunIndex = startRunIndex + nRuns -1, replace = FALSE, fraction = if (replace) 1.0 else 0.63, randomSeed = 12345, checkSoftPower = TRUE, nPowerCheckSamples = 2000, individualTOMFilePattern = "individualTOM-Run.\%r-Set\%s-Block.\%b.RData", keepConsensusTOMs = FALSE, consensusTOMFilePattern = "consensusTOM-Run.\%r-\%a-Block.\%b.RData", skipUnsampledCalculation = FALSE, ..., verbose = 2, indent = 0, saveRunningResults = TRUE, runningResultsFile = "results.tmp.RData") } %- maybe also 'usage' for other objects documented here. \arguments{ \item{multiExpr}{ Expression data in the multi-set format (see \code{\link{checkSets}}). A vector of lists, one per set. Each set must contain a component \code{data} that contains the expression data, with rows corresponding to samples and columns to genes or probes. } \item{multiWeights}{ optional observation weights in the same format (and dimensions) as \code{multiExpr}. These weights are used for correlation calculations with data in \code{multiExpr}.} \item{networkOptions}{ A single list of class \code{\link{NetworkOptions}} giving options for network calculation for all of the networks, or a \code{\link{multiData}} structure containing one such list for each input data set. } \item{consensusTree}{ A list specifying the consensus calculation. See details. } \item{nRuns}{ Number of network construction and module identification runs. } \item{startRunIndex}{ Number to be assigned to the start run. The run number or index is used to make saved files unique; it has no effect on the actual results of the run. } \item{endRunIndex}{ Number (index) of the last run. If given, \code{nRuns} is ignored. } \item{replace}{ Logical: should samples (observations or rows in entries in \code{multiExpr}) be sampled with replacement? } \item{fraction}{ Fraction of samples to sample for each run. 
} \item{randomSeed}{ Integer specifying the random seed. If non-NULL, the random number generator state is saved before the seed is set and restored at the end of the function. If \code{NULL}, the random number generator state is not changed nor saved at the start, and not restored at the end. } \item{checkSoftPower}{ Logical: should the soft-thresholding power be adjusted to approximately match the connectivity distribution of the sampled data set and the full data set? } \item{nPowerCheckSamples}{ Number of genes to be sampled from the full data set to calculate connectivity and match soft-thresholding powers. } \item{individualTOMFilePattern}{Pattern for file names for files holding individual TOMs. The tags \code{"\%r, \%a, \%b"} are replaced by run number, analysis name and block number, respectively. The TOM files are usually temporary but can be retained, see \code{keepConsensusTOMs} below.} \item{keepConsensusTOMs}{ Logical: should the (final) consensus TOMs of each sampled calculation be retained after the run ends? Note that for large data sets (tens of thousands of nodes) the TOM files are rather large. } \item{consensusTOMFilePattern}{Pattern for file names for files holding consensus TOMs. The tags \code{"\%r, \%a, \%b"} are replaced by run number, analysis name and block number, respectively. The TOM files are usually temporary but can be retained, see \code{keepConsensusTOMs} above.} \item{skipUnsampledCalculation}{ Logical: should a calculation on original (not resampled) data be skipped? } \item{\dots}{ Other arguments to \code{\link{hierarchicalConsensusModules}}. } \item{verbose}{ integer level of verbosity. Zero means silent, higher values make the output progressively more and more verbose. } \item{indent}{ indentation for diagnostic messages. Zero means no indentation, each unit adds two spaces. } \item{saveRunningResults}{ Logical: should the cumulative results be saved after each run on resampled data? 
} \item{runningResultsFile}{ File name of the file in which to save running results. In case of a parallel execution (say on several nodes of a cluster), one should choose a unique name for each process to avoid overwriting the same file. } } \details{ For each run, samples (but not genes) are randomly sampled to obtain a perturbed data set; a full network analysis and module identification is carried out, and the results are returned in a list with one component per run. For each run, the soft-thresholding power can optionally be adjusted such that the mean adjacency in the re-sampled data set equals the mean adjacency in the original data. } \value{ A list with one component per run. Each component is a list with the following components: \item{mods}{The output of the function \code{\link{hierarchicalConsensusModules}} on the resampled data.} \item{samples}{Indices of the samples selected for the resampled data step for this run.} \item{powers}{Actual soft-thresholding powers used in this run.} } \author{ Peter Langfelder } \seealso{ \code{\link{hierarchicalConsensusModules}} for consensus network analysis and module identification; \code{\link{sampledBlockwiseModules}} for a similar resampling analysis for a single data set. } \keyword{misc}% __ONLY ONE__ keyword per line WGCNA/man/prepComma.Rd0000644000176200001440000000066214012015545014077 0ustar liggesusers\name{prepComma} \alias{prepComma} \title{ Prepend a comma to a non-empty string } \description{ Utility function that prepends a comma before the input string if the string is non-empty. } \usage{ prepComma(s) } \arguments{ \item{s}{Character string. } } \value{ If \code{s} is non-empty, returns \code{paste(",", s)}, otherwise returns \code{s}. 
} \author{ Peter Langfelder } \examples{ prepComma("abc"); prepComma(""); } \keyword{misc} WGCNA/man/factorizeNonNumericColumns.Rd0000644000176200001440000000143314012015545017476 0ustar liggesusers\name{factorizeNonNumericColumns} \alias{factorizeNonNumericColumns} \title{ Turn non-numeric columns into factors } \description{ Given a data frame, this function turns non-numeric columns into factors. } \usage{ factorizeNonNumericColumns(data) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{data}{ A data frame. Non-data frame inputs (e.g., a matrix) are coerced to a data frame. } } \details{ A column is considered numeric if its storage mode is numeric, or if it is a character vector that only contains character representations of numbers and, possibly, missing values encoded as "NA", "NULL", "NO DATA". } \value{ The input data frame with non-numeric columns turned into factors. } \author{ Peter Langfelder } \keyword{misc}% __ONLY ONE__ keyword per line WGCNA/man/swapTwoBranches.Rd0000644000176200001440000000766014012015545015273 0ustar liggesusers\name{swapTwoBranches} \alias{swapTwoBranches} \alias{reflectBranch} \alias{selectBranch} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Select, swap, or reflect branches in a dendrogram. } \description{ swapTwoBranches takes a gene tree object and two genes as input, and swaps the branches containing these two genes at the nearest branch point of the dendrogram. reflectBranch takes a gene tree object and two genes as input, and reflects the branch containing the first gene at the nearest branch point of the dendrogram. selectBranch takes a gene tree object and two genes as input, and outputs indices for all genes in the branch containing the first gene, up to the nearest branch point of the dendrogram. 
} \usage{ swapTwoBranches(hierTOM, g1, g2) reflectBranch(hierTOM, g1, g2, both = FALSE) selectBranch(hierTOM, g1, g2) } \arguments{ \item{hierTOM}{ A hierarchical clustering object (or gene tree) that is used to plot the dendrogram. For example, the output object from the function hclust or fastcluster::hclust. Note that elements of hierTOM$order MUST be named (for example, with the corresponding gene name). } \item{g1}{ Any gene in the branch of interest. } \item{g2}{ Any gene in a branch directly adjacent to the branch of interest. } \item{both}{ Logical: should the selection include the branch containing gene \code{g2}? } } \value{ swapTwoBranches and reflectBranch return a hierarchical clustering object with the hierTOM$order variable properly adjusted, but all other variables identical to the \code{hierTOM} input. selectBranch returns a numeric vector corresponding to all genes in the requested branch. } \author{ Jeremy Miller } \examples{ \dontrun{ ## Example: first simulate some data. n = 30; n2 = 2*n; n.3 = 20; n.5 = 10; MEturquoise = sample(1:(2*n),n) MEblue = c(MEturquoise[1:(n/2)], sample(1:(2*n),n/2)) MEbrown = sample(1:n2,n) MEyellow = sample(1:n2,n) MEgreen = c(MEyellow[1:n.3], sample(1:n2,n.5)) MEred = c(MEbrown [1:n.5], sample(1:n2,n.3)) ME = data.frame(MEturquoise, MEblue, MEbrown, MEyellow, MEgreen, MEred) dat1 = simulateDatExpr(ME,8*n ,c(0.16,0.12,0.11,0.10,0.10,0.09,0.15), signed=TRUE) TOM1 = TOMsimilarityFromExpr(dat1$datExpr, networkType="signed") colnames(TOM1) <- rownames(TOM1) <- colnames(dat1$datExpr) tree1 = fastcluster::hclust(as.dist(1-TOM1),method="average") colorh = labels2colors(dat1$allLabels) plotDendroAndColors(tree1,colorh,dendroLabels=FALSE) ## Reassign modules using the selectBranch and chooseOneHubInEachModule functions datExpr = dat1$datExpr hubs = chooseOneHubInEachModule(datExpr, colorh) colorh2 = rep("grey", length(colorh)) colorh2 [selectBranch(tree1,hubs["blue"],hubs["turquoise"])] = "blue" colorh2 
[selectBranch(tree1,hubs["turquoise"],hubs["blue"])] = "turquoise" colorh2 [selectBranch(tree1,hubs["green"],hubs["yellow"])] = "green" colorh2 [selectBranch(tree1,hubs["yellow"],hubs["green"])] = "yellow" colorh2 [selectBranch(tree1,hubs["red"],hubs["brown"])] = "red" colorh2 [selectBranch(tree1,hubs["brown"],hubs["red"])] = "brown" plotDendroAndColors(tree1,cbind(colorh,colorh2),c("Old","New"),dendroLabels=FALSE) ## Now swap and reflect some branches, then optimize the order of the branches # Open a suitably sized graphics window sizeGrWindow(12,9); # partition the screen for 3 dendrogram + module color plots layout(matrix(c(1:6), 6, 1), heights = c(0.8, 0.2, 0.8, 0.2, 0.8, 0.2)); plotDendroAndColors(tree1,colorh2,dendroLabels=FALSE,main="Starting Dendrogram", setLayout = FALSE) tree1 = swapTwoBranches(tree1,hubs["red"],hubs["turquoise"]) plotDendroAndColors(tree1,colorh2,dendroLabels=FALSE,main="Swap blue/turquoise and red/brown", setLayout = FALSE) tree1 = reflectBranch(tree1,hubs["blue"],hubs["green"]) plotDendroAndColors(tree1,colorh2,dendroLabels=FALSE,main="Reflect turquoise/blue", setLayout = FALSE) }} \keyword{misc} WGCNA/man/imputeByModule.Rd0000644000176200001440000000257514012015545015125 0ustar liggesusers\name{imputeByModule} \alias{imputeByModule} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Impute missing data separately in each module } \description{ Use \code{\link[impute]{impute.knn}} to impute missing data, separately in each module. } \usage{ imputeByModule( data, labels, excludeUnassigned = FALSE, unassignedLabel = if (is.numeric(labels)) 0 else "grey", scale = TRUE, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{data}{ Data to be imputed, with variables (genes) in columns and observations (samples) in rows. } \item{labels}{ Module labels. A vector with one entry for each column in \code{data}. 
} \item{excludeUnassigned}{ Logical: should unassigned variables (genes) be excluded from the imputation? } \item{unassignedLabel}{ The value in \code{labels} that represents unassigned variables. } \item{scale}{ Logical: should \code{data} be scaled to mean 0 and variance 1 before imputation? } \item{\dots}{ Other arguments to \code{\link[impute]{impute.knn}}. } } \value{ The input \code{data} with missing values imputed. } \author{ Peter Langfelder } \note{ This function is potentially faster but could give different imputed values than applying \code{impute.knn} directly to (scaled) \code{data}. } \seealso{ \code{\link[impute]{impute.knn}} that does the actual imputation. } \keyword{misc} WGCNA/man/hierarchicalConsensusTOM.Rd0000644000176200001440000002074114012015545017053 0ustar liggesusers\name{hierarchicalConsensusTOM} \alias{hierarchicalConsensusTOM} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Calculation of hierarchical consensus topological overlap matrix } \description{ This function calculates consensus topological overlap in a hierarchical manner. } \usage{ hierarchicalConsensusTOM( # ... information needed to calculate individual TOMs multiExpr, multiWeights = NULL, # Data checking options checkMissingData = TRUE, # Blocking options blocks = NULL, maxBlockSize = 20000, blockSizePenaltyPower = 5, nPreclusteringCenters = NULL, randomSeed = 12345, # Network construction options networkOptions, # Save individual TOMs? keepIndividualTOMs = TRUE, individualTOMFileNames = "individualTOM-Set\%s-Block\%b.RData", # ... or information about individual (more precisely, input) TOMs individualTOMInfo = NULL, # Consensus calculation options consensusTree, useBlocks = NULL, # Save calibrated TOMs? 
saveCalibratedIndividualTOMs = FALSE, calibratedIndividualTOMFilePattern = "calibratedIndividualTOM-Set\%s-Block\%b.RData", # Return options saveConsensusTOM = TRUE, consensusTOMFilePattern = "consensusTOM-\%a-Block\%b.RData", getCalibrationSamples = FALSE, # Return the intermediate results as well? keepIntermediateResults = saveConsensusTOM, # Internal handling of TOMs useDiskCache = NULL, chunkSize = NULL, cacheDir = ".", cacheBase = ".blockConsModsCache", # Behavior collectGarbage = TRUE, verbose = 1, indent = 0) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{multiExpr}{ Expression data in the multi-set format (see \code{\link{checkSets}}). A vector of lists, one per set. Each set must contain a component \code{data} that contains the expression data, with rows corresponding to samples and columns to genes or probes. } \item{multiWeights}{ optional observation weights in the same format (and dimensions) as \code{multiExpr}. These weights are used for correlation calculations with data in \code{multiExpr}.} \item{checkMissingData}{ Logical: should data be checked for excessive numbers of missing entries in genes and samples, and for genes with zero variance? See details. } \item{blocks}{ Optional specification of blocks in which hierarchical clustering and module detection should be performed. If given, must be a numeric vector with one entry per gene of \code{multiExpr} giving the number of the block to which the corresponding gene belongs. } \item{maxBlockSize}{Integer giving maximum block size for module detection. Ignored if \code{blocks} above is non-NULL. Otherwise, if the number of genes in \code{datExpr} exceeds \code{maxBlockSize}, genes will be pre-clustered into blocks whose size should not exceed \code{maxBlockSize}. } \item{blockSizePenaltyPower}{Number specifying how strongly blocks should be penalized for exceeding the maximum size. 
Set to a large number or \code{Inf} if not exceeding maximum block size is very important.} \item{nPreclusteringCenters}{Number of centers to be used in the preclustering. Defaults to smaller of \code{nGenes/20} and \code{100*nGenes/maxBlockSize}, where \code{nGenes} is the number of genes (variables) in \code{multiExpr}.} \item{randomSeed}{Integer to be used as seed for the random number generator before the function starts. If a current seed exists, it is saved and restored upon exit. If \code{NULL} is given, the function will not save and restore the seed. } \item{networkOptions}{ A single list of class \code{\link{NetworkOptions}} giving options for network calculation for all of the networks, or a \code{\link{multiData}} structure containing one such list for each input data set. } \item{keepIndividualTOMs}{ Logical: should individual TOMs be retained after the calculation is finished? } \item{individualTOMFileNames}{ Character string giving the file names to save individual TOMs into. The following tags should be used to make the file names unique for each set and block: \code{\%s} will be replaced by the set number; \code{\%N} will be replaced by the set name (taken from \code{names(multiExpr)}) if it exists, otherwise by set number; \code{\%b} will be replaced by the block number. If the file names turn out to be non-unique, an error will be generated.} \item{individualTOMInfo}{ A list, typically returned by \code{\link{individualTOMs}}, containing information about the topological overlap matrices in the individual data sets in \code{multiExpr}. See the output of \code{\link{individualTOMs}} for details on the content of the list. } \item{consensusTree}{ A list specifying the consensus calculation. See details. } \item{useBlocks}{ Optional vector giving the blocks that should be used for the calculations. If \code{NULL}, all blocks will be used. } \item{saveCalibratedIndividualTOMs}{ Logical: should the calibrated individual TOMs be saved? 
} \item{calibratedIndividualTOMFilePattern}{ Specification of file names in which calibrated individual TOMs should be saved. } \item{saveConsensusTOM}{ Logical: should the consensus TOM be saved to disk? } \item{consensusTOMFilePattern}{ Character string giving the file names to save consensus TOMs into. The following tags should be used to make the file names unique for each set and block: \code{\%s} will be replaced by the set number; \code{\%N} will be replaced by the set name (taken from \code{names(multiExpr)}) if it exists, otherwise by set number; \code{\%b} will be replaced by the block number. If the file names turn out to be non-unique, an error will be generated. } \item{getCalibrationSamples}{ Logical: should the sampled values used for network calibration be returned? } \item{keepIntermediateResults}{ Logical: should intermediate consensus TOMs be saved as well? } \item{useDiskCache}{ Logical: should disk cache be used for consensus calculations? The disk cache can be used to store chunks of calibrated data that are small enough to fit one chunk from each set into memory (blocks may be small enough to fit one block of one set into memory, but not small enough to fit one block from all sets in a consensus calculation into memory at the same time). Using disk cache is slower but lessens the memory footprint of the calculation. As a general guide, if individual data are split into blocks, we recommend setting this argument to \code{TRUE}. If this argument is \code{NULL}, the function will decide whether to use disk cache based on the number of sets and block sizes. } \item{chunkSize}{ network similarities are saved in smaller chunks of size \code{chunkSize}. If \code{NULL}, an appropriate chunk size will be determined from an estimate of available memory. Note that if the chunk size is greater than the memory required for storing intermediate results, disk cache use will automatically be disabled. 
} \item{cacheDir}{ character string containing the directory into which cache files should be written. The user should make sure that the filesystem has enough free space to hold the cache files which can get quite large. } \item{cacheBase}{ character string containing the desired name for the cache files. The actual file names will consist of \code{cacheBase} and a suffix to make the file names unique. } \item{collectGarbage}{ Logical: should garbage be collected after memory-intensive operations? } \item{verbose}{ integer level of verbosity. Zero means silent, higher values make the output progressively more and more verbose. } \item{indent}{ indentation for diagnostic messages. Zero means no indentation, each unit adds two spaces. } } \details{ This function is essentially a wrapper for \code{\link{hierarchicalConsensusCalculation}}, with a few additional operations specific to calculations of topological overlaps. } \value{ A list that contains the output of \code{\link{hierarchicalConsensusCalculation}} and two extra components: \item{individualTOMInfo}{A copy of the input \code{individualTOMInfo} if it was non-\code{NULL}, or the result of \code{\link{individualTOMs}}. } \item{consensusTree}{A copy of the input \code{consensusTree}.} } \author{ Peter Langfelder } \seealso{ \code{\link{hierarchicalConsensusCalculation}} for the actual hierarchical consensus calculation; \code{\link{individualTOMs}} for the calculation of individual TOMs in a format suitable for consensus calculation. } \keyword{misc }% __ONLY ONE__ keyword per line WGCNA/man/simpleHierarchicalConsensusCalculation.Rd0000644000176200001440000000560314012015545022024 0ustar liggesusers\name{simpleHierarchicalConsensusCalculation} \alias{simpleHierarchicalConsensusCalculation} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Simple hierarchical consensus calculation } \description{ Hierarchical consensus calculation without calibration. 
} \usage{ simpleHierarchicalConsensusCalculation(individualData, consensusTree, level = 1) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{individualData}{ Individual data from which the consensus is to be calculated. It can be either a list or a \code{\link{multiData}} structure. Each element in \code{individualData} should be a numeric object (vector, matrix or array). } \item{consensusTree}{ A list specifying the consensus calculation. See details. } \item{level}{ Integer which the user should leave at 1. This serves to keep default set names unique. } } \details{ This function calculates consensus in a hierarchical manner, using a separate (and possibly different) set of consensus options at each step. The "recipe" for the consensus calculation is supplied in the argument \code{consensusTree}. The argument \code{consensusTree} should have the following components: (1) \code{inputs} must be either a character vector whose components match \code{names(individualData)}, or consensus trees in their own right. (2) \code{consensusOptions} must be a list of class \code{"ConsensusOptions"} that specifies options for calculating the consensus. A suitable set of options can be obtained by calling \code{\link{newConsensusOptions}}. (3) Optionally, the component \code{analysisName} can be a single character string giving the name for the analysis. When intermediate results are returned, they are returned in a list whose names will be set from \code{analysisName} components, if they exist. Unlike the similar function \code{\link{hierarchicalConsensusCalculation}}, this function ignores the calibration settings in the \code{consensusOptions} component of \code{consensusTree}; no calibration of input data is performed. The actual consensus calculation at each level of the consensus tree is carried out in function \code{\link{simpleConsensusCalculation}}. 
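As an illustration of the structure described above, a nested consensus specification could look like the following sketch (the set names, analysis names, and use of default options are invented for illustration only):

```r
# Hypothetical sketch of a consensusTree specification. The set names
# ("humanData", "mouseData1", "mouseData2") are invented and must match
# names(individualData); newConsensusOptions() is assumed to return a
# usable default "ConsensusOptions" list.
consensusTree = list(
  inputs = list(
    "humanData",
    list(inputs = c("mouseData1", "mouseData2"),
         consensusOptions = newConsensusOptions(),
         analysisName = "mouseConsensus")),
  consensusOptions = newConsensusOptions(),
  analysisName = "overallConsensus")
```

Here the two mouse sets are first combined into a mouse consensus, which is then combined with the human data in the top-level consensus step.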
The consensus options for each individual consensus calculation are independent from one another, i.e., the consensus options for different steps can be different. } \value{ A list with a single component \code{consensus}, containing the consensus data of the same dimensions as the individual entries in the input \code{individualData}. This perhaps somewhat cumbersome convention is used to make the output compatible with that of \code{\link{hierarchicalConsensusCalculation}}. } \author{ Peter Langfelder } \seealso{ \code{\link{simpleConsensusCalculation}} for a "single-level" consensus calculation; \code{\link{hierarchicalConsensusCalculation}} for hierarchical consensus calculation with calibration } \keyword{misc}% __ONLY ONE__ keyword per line WGCNA/man/bicorAndPvalue.Rd0000644000176200001440000000455714022073754015070 0ustar liggesusers\name{bicorAndPvalue} \alias{bicorAndPvalue} \title{ Calculation of biweight midcorrelations and associated p-values } \description{ A faster, one-step calculation of Student correlation p-values for multiple biweight midcorrelations, properly taking into account the actual number of observations. } \usage{ bicorAndPvalue(x, y = NULL, use = "pairwise.complete.obs", alternative = c("two.sided", "less", "greater"), ...) } \arguments{ \item{x}{ a vector or a matrix } \item{y}{ a vector or a matrix. If \code{NULL}, the correlation of columns of \code{x} will be calculated. } \item{use}{ determines handling of missing data. See \code{\link{bicor}} for details. } \item{alternative}{ specifies the alternative hypothesis and must be (a unique abbreviation of) one of \code{"two.sided"}, \code{"greater"} or \code{"less"}; you can specify just the initial letter. \code{"greater"} corresponds to positive association, \code{"less"} to negative association. } \item{\dots}{ other arguments to the function \code{\link{bicor}}. } } \details{ The function calculates the biweight midcorrelations of a matrix or of two matrices and the corresponding Student p-values. 
The output is not as full-featured as \code{\link{cor.test}}, but can work with matrices as input. } \value{ A list with the following components, each a matrix: \item{bicor}{the calculated correlations} \item{p}{the Student p-values corresponding to the calculated correlations} \item{Z}{Fisher transform of the calculated correlations} \item{t}{Student t statistics of the calculated correlations} \item{nObs}{Numbers of observations for the correlation, p-values etc.} } \author{ Peter Langfelder and Steve Horvath } \references{ Peter Langfelder, Steve Horvath (2012) Fast R Functions for Robust Correlations and Hierarchical Clustering. Journal of Statistical Software, 46(11), 1-17. \url{https://www.jstatsoft.org/v46/i11/} } \seealso{ \code{\link{bicor}} for calculation of correlations only; \code{\link{cor.test}} for another function for significance test of correlations } \examples{ # generate random data with non-zero correlation set.seed(1); a = rnorm(100); b = rnorm(100) + a; x = cbind(a, b); # Call the function and display all results bicorAndPvalue(x) # Set some components to NA x[c(1:4), 1] = NA bicorAndPvalue(x) # Note the changed number of observations. } \keyword{ stats } WGCNA/man/returnGeneSetsAsList.Rd0000644000176200001440000001101414022073754016251 0ustar liggesusers\name{returnGeneSetsAsList} \alias{returnGeneSetsAsList} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Return pre-defined gene lists in several biomedical categories. } \description{ This function returns gene sets for use with other R functions. These gene sets can include inputted lists of genes and files containing user-defined lists of genes, as well as a pre-made collection of brain, blood, and other biological lists. The function returns gene lists associated with each category for use with other enrichment strategies (e.g., GSVA). 
} \usage{ returnGeneSetsAsList( fnIn = NULL, catNmIn = fnIn, useBrainLists = FALSE, useBloodAtlases = FALSE, useStemCellLists = FALSE, useBrainRegionMarkers = FALSE, useImmunePathwayLists = FALSE, geneSubset=NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{fnIn}{ A vector of file names containing user-defined lists. These files must be in one of three specific formats (see details section). The default (NULL) may only be used if one of the "use_____" parameters is TRUE. } \item{catNmIn}{ A vector of category names corresponding to each fnIn. This name will be appended to each overlap corresponding to that filename. The default sets the category names as the corresponding file names. } \item{useBrainLists}{ If TRUE, a pre-made set of brain-derived enrichment lists will be added to any user-defined lists for enrichment comparison. The default is FALSE. See references section for related references. } \item{useBloodAtlases}{ If TRUE, a pre-made set of blood-derived enrichment lists will be added to any user-defined lists for enrichment comparison. The default is FALSE. See references section for related references. } \item{useStemCellLists}{ If TRUE, a pre-made set of stem cell (SC)-derived enrichment lists will be added to any user-defined lists for enrichment comparison. The default is FALSE. See references section for related references. } \item{useBrainRegionMarkers}{ If TRUE, a pre-made set of enrichment lists for human brain regions will be added to any user-defined lists for enrichment comparison. The default is FALSE. These lists are derived from data from the Allen Human Brain Atlas (https://human.brain-map.org/). See references section for more details. } \item{useImmunePathwayLists}{ If TRUE, a pre-made set of enrichment lists for immune system pathways will be added to any user-defined lists for enrichment comparison. The default is FALSE. These lists are derived from the lab of Daniel R Saloman. 
See references section for more details. } \item{geneSubset}{ A vector of gene (or other) identifiers. If entered, only genes in this list will be returned in the output, otherwise all genes in each category will be returned (default, geneSubset=NULL). } } \details{ User-inputted files for fnIn can be in one of three formats: 1) Text files (must end in ".txt") with one list per file, where the first line is the list descriptor and the remaining lines are gene names corresponding to that list, with one gene per line. For example Ribosome RPS4 RPS8 ... 2) Gene / category files (must be csv files), where the first line is the column headers corresponding to Genes and Lists, and the remaining lines correspond to the genes in each list, for any number of genes and lists. For example: Gene, Category RPS4, Ribosome RPS8, Ribosome ... NDUF1, Mitochondria NDUF3, Mitochondria ... MAPT, AlzheimersDisease PSEN1, AlzheimersDisease PSEN2, AlzheimersDisease ... 3) Module membership (kME) table in csv format. Currently, the module assignment is the only thing that is used, so as long as the Gene column is 2nd and the Module column is 3rd, it doesn't matter what is in the other columns. For example, PSID, Gene, Module, , RPS4, blue, , NDUF1, red, , RPS8, blue, , NDUF3, red, , MAPT, green, ... } \value{ \item{geneSets}{ A list of categories in alphabetical order, where each component of the list is a character vector of all genes corresponding to the named category. For example: geneSets = list(category1=c("gene1","gene2"),category2=c("gene3","gene4","gene5")) } } \references{ Please see the help file for userListEnrichment in the WGCNA library for references for the pre-defined lists. 
} \author{ Jeremy Miller } \examples{ # Example: Return a list of genes for various immune pathways geneSets = returnGeneSetsAsList(useImmunePathwayLists=TRUE) geneSets[7:8] } \keyword{misc} WGCNA/man/GTOMdist.Rd0000644000176200001440000000127614012015545013610 0ustar liggesusers\name{GTOMdist} \alias{GTOMdist} \title{ Generalized Topological Overlap Measure } \description{ Generalized Topological Overlap Measure, taking into account interactions of higher degree. } \usage{ GTOMdist(adjMat, degree = 1) } \arguments{ \item{adjMat}{ adjacency matrix. See details below. } \item{degree}{ integer specifying the maximum degree to be calculated. } } \value{ Matrix of the same dimension as the input \code{adjMat}. } \references{ Yip A, Horvath S (2007) Gene network interconnectedness and the generalized topological overlap measure. BMC Bioinformatics 2007, 8:22 } \author{ Steve Horvath and Andy Yip } \keyword{ misc }% __ONLY ONE__ keyword per line WGCNA/man/hierarchicalConsensusCalculation.Rd0000644000176200001440000001706014012015545020652 0ustar liggesusers\name{hierarchicalConsensusCalculation} \alias{hierarchicalConsensusCalculation} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Hierarchical consensus calculation } \description{ Hierarchical consensus calculation with optional data calibration. } \usage{ hierarchicalConsensusCalculation( individualData, consensusTree, level = 1, useBlocks = NULL, randomSeed = NULL, saveCalibratedIndividualData = FALSE, calibratedIndividualDataFilePattern = "calibratedIndividualData-\%a-Set\%s-Block\%b.RData", # Return options: the data can be either saved or returned but not both. saveConsensusData = TRUE, consensusDataFileNames = "consensusData-\%a-Block\%b.RData", getCalibrationSamples= FALSE, # Return the intermediate results as well? 
keepIntermediateResults = FALSE, # Internal handling of data useDiskCache = NULL, chunkSize = NULL, cacheDir = ".", cacheBase = ".blockConsModsCache", # Behaviour collectGarbage = FALSE, verbose = 1, indent = 0) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{individualData}{ Individual data from which the consensus is to be calculated. It can be either a list or a \code{\link{multiData}} structure. Each element in \code{individualData} can in turn be either a numeric object (vector, matrix or array) or a \code{\link{BlockwiseData}} structure. } \item{consensusTree}{ A list specifying the consensus calculation. See details. } \item{level}{ Integer which the user should leave at 1. This serves to keep default set names unique. } \item{useBlocks}{ When \code{individualData} contains \code{\link{BlockwiseData}}, this argument can be an integer vector with indices of blocks for which the calculation should be performed. } \item{randomSeed}{ If non-\code{NULL}, the function will save the current state of the random generator, set the given seed, and restore the random seed to its original state upon exit. If \code{NULL}, the seed is not set nor is it restored on exit. } \item{saveCalibratedIndividualData}{ Logical: should calibrated individual data be saved? } \item{calibratedIndividualDataFilePattern}{ Pattern from which file names for saving calibrated individual data are determined. The conversions \code{\%a}, \code{\%s} and \code{\%b} will be replaced by analysis name, set number and block number, respectively.} \item{saveConsensusData}{ Logical: should final consensus be saved (\code{TRUE}) or returned in the return value (\code{FALSE})? } \item{consensusDataFileNames}{ Pattern from which file names for saving the final consensus are determined. 
The conversions \code{\%a} and \code{\%b} will be replaced by analysis name and block number, respectively.} \item{getCalibrationSamples}{ When calibration method in the \code{consensusOptions} component of \code{consensusTree} is \code{"single quantile"}, this logical argument determines whether the calibration samples should be returned within the return value. } \item{keepIntermediateResults}{ Logical: should results of intermediate consensus calculations (if any) be kept? These are always returned as \code{BlockwiseData} whose data are saved to disk. } \item{useDiskCache}{ Logical: should disk cache be used for consensus calculations? The disk cache can be used to store chunks of calibrated data that are small enough to fit one chunk from each set into memory (blocks may be small enough to fit one block of one set into memory, but not small enough to fit one block from all sets in a consensus calculation into memory at the same time). Using disk cache is slower but lessens the memory footprint of the calculation. As a general guide, if individual data are split into blocks, we recommend setting this argument to \code{TRUE}. If this argument is \code{NULL}, the function will decide whether to use disk cache based on the number of sets and block sizes. } \item{chunkSize}{ Integer giving the chunk size. If left \code{NULL}, a suitable size will be chosen automatically. } \item{cacheDir}{ Directory in which to save cache files. The files are deleted on normal exit but persist if the function terminates abnormally. } \item{cacheBase}{ Base for the file names of cache files. } \item{collectGarbage}{ Logical: should garbage collection be forced after each major calculation? } \item{verbose}{Integer level of verbosity of diagnostic messages. Zero means silent, higher values make the output progressively more and more verbose. } \item{indent}{Indentation for diagnostic messages. Zero means no indentation, each unit adds two spaces. 
} } \details{ This function calculates consensus in a hierarchical manner, using a separate (and possibly different) set of consensus options at each step. The "recipe" for the consensus calculation is supplied in the argument \code{consensusTree}. The argument \code{consensusTree} should have the following components: (1) \code{inputs} must be either a character vector whose components match \code{names(individualData)}, or consensus trees in their own right. (2) \code{consensusOptions} must be a list of class \code{"ConsensusOptions"} that specifies options for calculating the consensus. A suitable set of options can be obtained by calling \code{\link{newConsensusOptions}}. (3) Optionally, the component \code{analysisName} can be a single character string giving the name for the analysis. When intermediate results are returned, they are returned in a list whose names will be set from \code{analysisName} components, if they exist. The actual consensus calculation at each level of the consensus tree is carried out in function \code{\link{consensusCalculation}}. The consensus options for each individual consensus calculation are independent from one another, i.e., the consensus options for different steps can be different. } \value{ A list containing the output of the top level call to \code{\link{consensusCalculation}}; if \code{keepIntermediateResults} is \code{TRUE}, component \code{inputs} contains a (possibly recursive) list of the results of intermediate consensus calculations. Names of the \code{inputs} list are taken from the corresponding \code{analysisName} components if they exist, otherwise from names of the corresponding \code{inputs} components of the supplied \code{consensusTree}. See example below for an example of a relatively simple consensus tree. 
} \author{ Peter Langfelder } \seealso{ \code{\link{newConsensusOptions}} for obtaining a suitable list of consensus options; \code{\link{consensusCalculation}} for the actual calculation of a consensus that underpins this function. } \examples{ # We generate 3 simple matrices set.seed(5) data = replicate(3, matrix(rnorm(10*100), 10, 100)) names(data) = c("Set1", "Set2", "Set3"); # Put together a consensus tree. In this example the final consensus uses # as input set 1 and a consensus of sets 2 and 3. # First define the consensus of sets 2 and 3: consTree.23 = newConsensusTree( inputs = c("Set2", "Set3"), consensusOptions = newConsensusOptions(calibration = "none", consensusQuantile = 0.25), analysisName = "Consensus of sets 2 and 3"); # Now define the final consensus consTree.final = newConsensusTree( inputs = list("Set1", consTree.23), consensusOptions = newConsensusOptions(calibration = "full quantile", consensusQuantile = 0), analysisName = "Final consensus"); consensus = hierarchicalConsensusCalculation( individualData = data, consensusTree = consTree.final, saveConsensusData = FALSE, keepIntermediateResults = FALSE) names(consensus) } \keyword{misc} WGCNA/man/networkScreening.Rd0000644000176200001440000000555514012015545015511 0ustar liggesusers\name{networkScreening} \alias{networkScreening} \title{ Identification of genes related to a trait } \description{ This function blends standard and network approaches to selecting genes (or variables in general) highly related to a given trait. 
} \usage{ networkScreening(y, datME, datExpr, corFnc = "cor", corOptions = "use = 'p'", oddPower = 3, blockSize = 1000, minimumSampleSize = ..minNSamples, addMEy = TRUE, removeDiag = FALSE, weightESy = 0.5, getQValues = TRUE) } \arguments{ \item{y}{ clinical trait given as a numeric vector (one value per sample) } \item{datME}{ data frame of module eigengenes } \item{datExpr}{ data frame of expression data } \item{corFnc}{ character string specifying the function to be used to calculate co-expression similarity. Defaults to Pearson correlation. Any function returning values between -1 and 1 can be used. } \item{corOptions}{ character string specifying additional arguments to be passed to the function given by \code{corFnc}. Use \code{"use = 'p', method = 'spearman'"} to obtain Spearman correlation. } \item{oddPower}{ odd integer used as a power to raise module memberships and significances } \item{blockSize}{ block size to use for calculations with large data sets } \item{minimumSampleSize}{ minimum acceptable number of samples. Defaults to the default minimum number of samples used throughout the WGCNA package, currently 4.} \item{addMEy}{ logical: should the trait be used as an additional "module eigengene"?} \item{removeDiag}{ logical: remove the diagonal? } \item{weightESy}{ weight to use for the trait as an additional eigengene; should be between 0 and 1 } \item{getQValues}{ logical: should q-values be calculated? } } \details{ This function should be considered experimental. It takes into account both the "standard" and the network measures of gene importance for the trait. 
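A typical call might look as follows (an illustrative sketch; here \code{datExpr} holds expression data, \code{y} the trait, and the module eigengenes are obtained with \code{\link{moduleEigengenes}} from module labels \code{colors}):
\preformatted{
datME = moduleEigengenes(datExpr, colors)$eigengenes
scr = networkScreening(y = y, datME = datME, datExpr = datExpr)
# Genes can then be ranked by, e.g., scr$p.Weighted
}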
} \value{ datout = data.frame(p.Weighted, q.Weighted, Cor.Weighted, Z.Weighted, p.Standard, q.Standard, Cor.Standard, Z.Standard) Data frame reporting the following quantities for each given gene: \item{p.Weighted }{weighted p-value of association with the trait} \item{q.Weighted }{q-value (local FDR) calculated from \code{p.Weighted}} \item{cor.Weighted}{correlation of trait with gene expression weighted by a network term} \item{Z.Weighted}{ Fisher Z score of the weighted correlation} \item{p.Standard}{ standard Student p-value of association of the gene with the trait} \item{q.Standard}{ q-value (local FDR) calculated from \code{p.Standard}} \item{cor.Standard}{ correlation of gene with the trait} \item{Z.Standard}{ Fisher Z score of the standard correlation} } \author{ Steve Horvath } \keyword{ misc} WGCNA/man/consensusRepresentatives.Rd0000644000176200001440000002164614012015545017305 0ustar liggesusers\name{consensusRepresentatives} \alias{consensusRepresentatives} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Consensus selection of group representatives } \description{ Given multiple data sets corresponding to the same variables and a grouping of variables into groups, the function selects a representative variable for each group using a variety of possible selection approaches. Typical uses include selecting a representative probe for each gene in microarray data. } \usage{ consensusRepresentatives( mdx, group, colID, consensusQuantile = 0, method = "MaxMean", useGroupHubs = TRUE, calibration = c("none", "full quantile"), selectionStatisticFnc = NULL, connectivityPower = 1, minProportionPresent = 1, getRepresentativeData = TRUE, statisticFncArguments = list(), adjacencyArguments = list(), verbose = 2, indent = 0) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{mdx}{ A \code{\link{multiData}} structure. All sets must have the same columns. 
} \item{group}{ Character vector whose components contain the group label (e.g. a character string) for each entry of \code{colID}. This vector must be of the same length as the vector \code{colID}. In gene expression applications, this vector could contain the gene symbol (or a co-expression module label). } \item{colID}{Character vector of column identifiers. This must include all the column names from \code{mdx}, but can include other values as well. Its entries must be unique (no duplicates) and no missing values are permitted. } \item{consensusQuantile}{ A number between 0 and 1 giving the quantile probability for consensus calculation. 0 means the minimum value (true consensus) will be used.} \item{method}{character string for determining which method is used to choose the representative (when \code{useGroupHubs} is \code{TRUE}, this method is only used for groups with 2 variables). The following values can be used: "MaxMean" (default) or "MinMean" return the variable with the highest or lowest mean value, respectively; "maxRowVariance" returns the variable with the highest variance; "absMaxMean" or "absMinMean" return the variable with the highest or lowest mean absolute value; and "function" will call a user-input function (see the description of the argument \code{selectionStatisticFnc}). The built-in functions can be instructed to use robust analogs (median and median absolute deviation) by also specifying \code{statisticFncArguments=list(robust = TRUE)}. } \item{useGroupHubs}{Logical: if \code{TRUE}, groups with 3 or more variables will be represented by the variable with the highest connectivity according to a signed weighted correlation network adjacency matrix among the corresponding rows. The connectivity is defined as the row sum of the adjacency matrix. 
The signed weighted adjacency matrix is defined as A=(0.5+0.5*COR)^power where power is determined by the argument \code{connectivityPower} and COR denotes the matrix of pairwise correlation coefficients among the corresponding rows. Additional arguments to the underlying function \code{\link{adjacency}} can be specified using the argument \code{adjacencyArguments} below. } \item{calibration}{Character string describing the method of calibration of the selection statistic among the data sets. Recognized values are \code{"none"} (no calibration) and \code{"full quantile"} (quantile normalization). } \item{selectionStatisticFnc}{User-supplied function used to calculate the selection statistic when \code{method} above equals \code{"function"}. The function must take arguments \code{x} (a matrix) and possibly other arguments that can be specified using \code{statisticFncArguments} below. The return value must be a vector with one component per column of \code{x} giving the selection statistic for each column. } \item{connectivityPower}{Positive number (typically integer) for specifying the soft-thresholding power used to construct the signed weighted adjacency matrix, see the description of \code{useGroupHubs}. This option is only used if \code{useGroupHubs} is \code{TRUE}. } \item{minProportionPresent}{ A number between 0 and 1 specifying a filter of candidate probes. Specifically, for each group, the variable with the maximum consensus proportion of present data is found. Only variables whose consensus proportion of present data is at least \code{minProportionPresent} times the maximum consensus proportion are retained as candidates for being a representative. } \item{getRepresentativeData}{Logical: should the representative data, i.e., \code{mdx} restricted to the representative variables, be returned? } \item{statisticFncArguments}{ A list giving further arguments to the selection statistic function. 
Can be used to supply additional arguments to the user-specified \code{selectionStatisticFnc}; the value \code{list(robust = TRUE)} can be used with the built-in functions to use their robust variants.} \item{adjacencyArguments}{Further arguments to the function \code{adjacency}, e.g. \code{adjacencyArguments=list(corFnc = "bicor", corOptions = "use = 'p', maxPOutliers = 0.05")} will select the robust correlation \code{\link{bicor}} with a good set of options. Note that the \code{\link{adjacency}} arguments \code{type} and \code{power} cannot be changed. } \item{verbose}{ Level of verbosity; 0 means silent, larger values will cause progress messages to be printed. } \item{indent}{ Indent for the diagnostic messages; each unit equals two spaces. } } \details{ This function was inspired by \code{\link{collapseRows}}, but there are also important differences. This function focuses on selecting representatives; when summarization is more important, \code{collapseRows} provides more flexibility since it does not require that a single representative be selected. This function and \code{collapseRows} use different input and output conventions; user-specified functions need to be tailored differently for \code{collapseRows} than for \code{consensusRepresentatives}. Missing data are allowed and are treated as missing at random. If \code{rowID} is \code{NULL}, it is replaced by the variable names in \code{mdx}. All groups with a single variable are represented by that variable, unless the consensus proportion of present data in the variable is lower than \code{minProportionPresent}, in which case the variable and the group are excluded from the output. For all variables belonging to groups with 2 variables (when \code{useGroupHubs=TRUE}) or with at least 2 variables (when \code{useGroupHubs=FALSE}), selection statistics are calculated in each set (e.g., the selection statistic may be the mean, variance, etc). 
This results in a matrix of selection statistics (one entry per variable per data set). The selection statistics are next optionally calibrated (normalized) between sets to make them comparable; currently the only implemented calibration method is quantile normalization. For each variable, the consensus selection statistic is defined as the consensus of the (calibrated) selection statistics across the data sets. The 'consensus' of a vector (say 'x') is simply defined as the quantile with probability \code{consensusQuantile} of the vector x. Important exception: for the \code{"MinMean"} and \code{"absMinMean"} methods, the consensus is the quantile with probability \code{1-consensusQuantile}, since the idea of the consensus is to select the worst (or close to worst) value across the data sets. For each group, the representative is selected as the variable with the best (typically highest, but for \code{"MinMean"} and \code{"absMinMean"} methods the lowest) consensus selection statistic. If \code{useGroupHubs=TRUE}, the intra-group connectivity is calculated for all variables in each set. The intra-group connectivities are optionally calibrated (normalized) between sets, and consensus intra-group connectivity is calculated similarly to the consensus selection statistic above. In each group, the variable with the highest consensus intra-group connectivity is chosen as the representative. } \value{ \item{representatives}{A named vector giving, for each group, the selected representative (input \code{rowID} or the variable (column) name in \code{mdx}). 
Names correspond to groups.} \item{varSelected}{A logical vector with one entry per variable (column) in input \code{mdx} (possibly after restriction to variables occurring in \code{colID}), \code{TRUE} if the column was selected as a representative.} \item{representativeData}{Only present if \code{getRepresentativeData} is \code{TRUE}; the input \code{mdx} restricted to the representative variables, with column names changed to the corresponding groups.} } \author{ Peter Langfelder, based on code by Jeremy Miller } \seealso{ \code{\link{multiData}} for a description of the \code{multiData} structures; \code{\link{collapseRows}} that solves a related but different problem. Please note the differences in input and output! } \keyword{misc} WGCNA/man/multiData.Rd0000644000176200001440000000323614012015545014100 0ustar liggesusers\name{multiData} \alias{multiData} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Create a multiData structure. } \description{ This function creates a multiData structure by storing its input arguments as the 'data' components. } \usage{ multiData(...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{\dots}{ Arguments to be stored in the multiData structure. } } \details{ A multiData structure is intended to store (the same type of) data for multiple, possibly independent, realizations (for example, expression data for several independent experiments). It is a list where each component corresponds to an (independent) data set. Each component is in turn a list that can hold various types of information but must have a \code{data} component. In a "strict" multiData structure, the \code{data} components are required to each be a matrix or a data frame and have the same number of columns. In a "loose" multiData structure, the \code{data} components can be anything (but for most purposes should be of comparable type and content). } \value{ The resulting multiData structure. 
} \author{ Peter Langfelder } \seealso{ \code{\link{multiData2list}} for converting a multiData structure to a list; \code{\link{list2multiData}} for an alternative way of creating a multiData structure; \code{\link{mtd.apply}, \link{mtd.applyToSubset}, \link{mtd.mapply}} for ways of applying a function to each component of a multiData structure. } \examples{ data1 = matrix(rnorm(100), 20, 5); data2 = matrix(rnorm(50), 10, 5); md = multiData(Set1 = data1, Set2 = data2); checkSets(md) } \keyword{misc }% __ONLY ONE__ keyword per line WGCNA/man/consensusOrderMEs.Rd0000644000176200001440000000536114012015545015576 0ustar liggesusers\name{consensusOrderMEs} \alias{consensusOrderMEs} \title{ Put close eigenvectors next to each other in several sets. } \description{ Reorder given (eigen-)vectors such that similar ones (as measured by correlation) are next to each other. This is a multi-set version of \code{\link{orderMEs}}; the dissimilarity used can be of consensus type (for each pair of eigenvectors the consensus dissimilarity is the maximum of individual set dissimilarities over all sets) or of majority type (for each pair of eigenvectors the consensus dissimilarity is the average of individual set dissimilarities over all sets). } \usage{ consensusOrderMEs(MEs, useAbs = FALSE, useSets = NULL, greyLast = TRUE, greyName = paste(moduleColor.getMEprefix(), "grey", sep=""), method = "consensus") } \arguments{ \item{MEs}{Module eigengenes of several sets in a multi-set format (see \code{\link{checkSets}}). A vector of lists, with each list corresponding to one dataset and the module eigengenes in the component \code{data}, that is \code{MEs[[set]]$data[sample, module]} is the expression of the eigengene of module \code{module} in sample \code{sample} in dataset \code{set}. The number of samples can be different between the sets, but the modules must be the same. 
} \item{useAbs}{Controls whether vector similarity should be given by absolute value of correlation or plain correlation.} \item{useSets}{Allows the user to specify for which sets the eigengene ordering is to be performed.} \item{greyLast}{Normally the color grey is reserved for unassigned genes; hence the grey module is not a proper module and it is conventional to put it last. If this is not desired, set the parameter to \code{FALSE}.} \item{greyName}{Name of the grey module eigengene.} \item{method}{A character string giving the method to be used in calculating the consensus dissimilarity. Allowed values are (abbreviations of) \code{"consensus"} and \code{"majority"}. The consensus dissimilarity is calculated as the maximum of given set dissimilarities for \code{"consensus"} and as the average for \code{"majority"}.} } \details{ Ordering module eigengenes is useful for plotting purposes. This function calculates the consensus or majority dissimilarity of given eigengenes over the sets specified by \code{useSets} (defaults to all sets). A hierarchical dendrogram is calculated using the dissimilarity and the order given by the dendrogram is used for the eigengenes in all other sets. } \value{ A vector of lists of the same type as \code{MEs} containing the re-ordered eigengenes. } \author{ Peter Langfelder, \email{Peter.Langfelder@gmail.com} } \seealso{\code{\link{moduleEigengenes}}, \code{\link{multiSetMEs}}, \code{\link{orderMEs}}} \keyword{misc} WGCNA/man/BrainLists.Rd0000644000176200001440000000202614012015545014222 0ustar liggesusers\name{BrainLists} \alias{BrainLists} \docType{data} \title{Brain-Related Categories with Corresponding Gene Markers} \description{ This matrix gives a predefined set of marker genes for many brain-related categories (i.e., cell type, organelle, changes with disease, etc.), as reported in several previously-published studies. It is used with userListEnrichment to search user-defined gene lists for enrichment. 
} \usage{data(BrainLists)} \format{ A 48319 x 2 matrix of characters containing Gene / Category pairs. The first column (Gene) lists genes corresponding to a given category (second column). Each Category entry is of the form __, where the references can be found at \code{\link{userListEnrichment}}. Note that the matrix is sorted first by Category and then by Gene, such that all genes related to the same category are listed sequentially. } \source{ For references used in this variable, please see \code{\link{userListEnrichment}} } \examples{ data(BrainLists) head(BrainLists) } \keyword{datasets} WGCNA/man/mergeCloseModules.Rd0000644000176200001440000001723314012015545015574 0ustar liggesusers\name{mergeCloseModules} \alias{mergeCloseModules} \title{Merge close modules in gene expression data} \description{ Merges modules in gene expression networks that are too close as measured by the correlation of their eigengenes. } \usage{ mergeCloseModules( # input data exprData, colors, # Optional starting eigengenes MEs = NULL, # Optional restriction to a subset of all sets useSets = NULL, # If missing data are present, impute them? impute = TRUE, # Input handling options checkDataFormat = TRUE, unassdColor = if (is.numeric(colors)) 0 else "grey", # Options for eigengene network construction corFnc = cor, corOptions = list(use = 'p'), useAbs = FALSE, # Options for constructing the consensus equalizeQuantiles = FALSE, quantileSummary = "mean", consensusQuantile = 0, # Merging options cutHeight = 0.2, iterate = TRUE, # Output options relabel = FALSE, colorSeq = NULL, getNewMEs = TRUE, getNewUnassdME = TRUE, # Options controlling behaviour of the function trapErrors = FALSE, verbose = 1, indent = 0) } \arguments{ \item{exprData}{Expression data, either a single data frame with rows corresponding to samples and columns to genes, or in a multi-set format (see \code{\link{checkSets}}). See \code{checkDataFormat} below.
} \item{colors}{A vector (numeric, character or a factor) giving module colors for genes. The method only makes sense when genes have the same color label in all sets, hence a single vector. } \item{MEs}{If module eigengenes have been calculated before, the user can save some computational time by inputting them. \code{MEs} should have the same format as \code{exprData}. If they are not given, they will be calculated.} \item{useSets}{A vector of scalars allowing the user to specify which sets will be used to calculate the consensus dissimilarity of module eigengenes. Defaults to all given sets. } \item{impute}{Should missing values be imputed in eigengene calculation? If imputation is disabled, the presence of \code{NA} entries will cause the eigengene calculation to fail and eigengenes will be replaced by their hubgene approximation. See \code{\link{moduleEigengenes}} for more details.} \item{checkDataFormat}{If TRUE, the function will check \code{exprData} and \code{MEs} for correct multi-set structure. If single set data is given, it will be converted into a format usable for the function. If FALSE, incorrect structure of input data will trigger an error.} \item{unassdColor}{Specifies the string that labels unassigned genes. The module of this color will not enter the module eigengene clustering and will not be merged with other modules.} \item{corFnc}{Correlation function to be used to calculate correlation of module eigengenes. } \item{corOptions}{Can be used to specify options to the correlation function, in addition to argument \code{x} which is used to pass the actual data to calculate the correlation of.} \item{useAbs}{Specifies whether absolute value of correlation or plain correlation (of module eigengenes) should be used in calculating module dissimilarity.} \item{equalizeQuantiles}{Logical: should quantiles of the eigengene dissimilarity matrix be equalized ("quantile normalized")?
The default is \code{FALSE} for reproducibility of old code; when there are many eigengenes (e.g., at least 50), better results may be achieved if quantile equalization is used.} \item{quantileSummary}{One of \code{"mean"} or \code{"median"}. Controls how a reference dissimilarity is computed from the input ones (using mean or median, respectively).} \item{consensusQuantile}{A number giving the desired quantile to use in the consensus similarity calculation (see details).} \item{cutHeight}{Maximum dissimilarity (i.e., 1-correlation) that qualifies modules for merging.} \item{iterate}{Controls whether the merging procedure should be repeated until there is no change. If FALSE, only one iteration will be executed.} \item{relabel}{Controls whether, after merging, color labels should be ordered by module size.} \item{colorSeq}{Color labels to be used for relabeling. Defaults to the standard color order used in this package if \code{colors} are not numeric, and to integers starting from 1 if \code{colors} is numeric.} \item{getNewMEs}{Controls whether module eigengenes of merged modules should be calculated and returned.} \item{getNewUnassdME}{When doing module eigengene manipulations, the function does not normally calculate the eigengene of the 'module' of unassigned ('grey') genes. Setting this option to \code{TRUE} will force the calculation of the unassigned eigengene in the returned newMEs, but not in the returned oldMEs.} \item{trapErrors}{Controls whether computational errors in calculating module eigengenes, their dissimilarity, and merging trees should be trapped. If \code{TRUE}, errors will be trapped and the function will return the input colors. If \code{FALSE}, errors will cause the function to stop.} \item{verbose}{Controls verbosity of printed progress messages. 0 means silent, up to (about) 5 the verbosity gradually increases.} \item{indent}{A single non-negative integer controlling indentation of printed messages. 
0 means no indentation, each unit above that adds two spaces. } } \details{ This function merges input modules that are closely related. The similarities are measured by correlations of module eigengenes; a ``consensus'' measure is defined as the ``consensus quantile'' over the corresponding relationship in each set. Once the (dis-)similarity is calculated, average linkage hierarchical clustering of the module eigengenes is performed, the dendrogram is cut at the height \code{cutHeight} and modules on each branch are merged. The process is (optionally) repeated until no more modules are merged. If, for a particular module, the module eigengene calculation fails, a hubgene approximation will be used. The user should be aware that if a computational error occurs and \code{trapErrors==TRUE}, the returned list (see below) will not contain all of the components returned upon normal execution. } \value{ If no errors occurred, a list with components \item{colors}{Color labels for the genes corresponding to merged modules. The function attempts to mimic the mode of the input \code{colors}: if the input \code{colors} is numeric, character, or a factor, the output will be of the same mode. Note, however, that if the function performs relabeling, a standard sequence of labels will be used: integers starting at 1 if the input \code{colors} is numeric, and a sequence of color labels otherwise (see \code{colorSeq} above).} \item{dendro}{Hierarchical clustering dendrogram (average linkage) of the eigengenes of the most recently computed tree.
If \code{iterate} was set TRUE, this will be the dendrogram of the merged modules, otherwise it will be the dendrogram of the original modules.} \item{oldDendro}{Hierarchical clustering dendrogram (average linkage) of the eigengenes of the original modules.} \item{cutHeight}{The input cutHeight.} \item{oldMEs}{Module eigengenes of the original modules in the sets given by \code{useSets}.} \item{newMEs}{Module eigengenes of the merged modules in the sets given by \code{useSets}.} \item{allOK}{A boolean set to \code{TRUE}.} If an error occurred and \code{trapErrors==TRUE}, the list only contains these components: \item{colors}{A copy of the input colors.} \item{allOK}{a boolean set to \code{FALSE}.} } \author{ Peter Langfelder, \email{Peter.Langfelder@gmail.com} } %\seealso{\code{\link{moduleEigengenes}}, \code{\link{multiSetMEs}}} % Add one or more standard keywords, see file 'KEYWORDS' in the % R documentation directory. \keyword{ misc } WGCNA/man/consensusMEDissimilarity.Rd0000644000176200001440000000317714012015545017171 0ustar liggesusers\name{consensusMEDissimilarity} \alias{consensusMEDissimilarity} \title{ Consensus dissimilarity of module eigengenes. } \description{ Calculates consensus dissimilarity \code{(1-cor)} of given module eigengenes realized in several sets. } \usage{ consensusMEDissimilarity(MEs, useAbs = FALSE, useSets = NULL, method = "consensus") } %- maybe also 'usage' for other objects documented here. \arguments{ \item{MEs}{Module eigengenes of the same modules in several sets. } \item{useAbs}{Controls whether absolute value of correlation should be used instead of correlation in the calculation of dissimilarity. } \item{useSets}{If the consensus is to include only a selection of the given sets, this vector (or scalar in the case of a single set) can be used to specify the selection. If \code{NULL}, all sets will be used. } \item{method}{A character string giving the method to use. 
Allowed values are (abbreviations of) \code{"consensus"} and \code{"majority"}. The consensus dissimilarity is calculated as the maximum of given set dissimilarities for \code{"consensus"} and as the average for \code{"majority"}.} } \details{ This function calculates the individual set dissimilarities of the given eigengenes in each set, then takes the (parallel) maximum or average over all sets. For details on the structure of input data, see \code{\link{checkSets}}. } \value{ A data frame containing the matrix of dissimilarities, with \code{names} and \code{rownames} set appropriately. } %\references{ \author{ Peter Langfelder, \email{Peter.Langfelder@gmail.com} } \seealso{ \code{\link{checkSets}}} %\examples{ \keyword{misc}% __ONLY ONE__ keyword per line WGCNA/man/vectorizeMatrix.Rd0000644000176200001440000000133014012015545015346 0ustar liggesusers\name{vectorizeMatrix} \alias{vectorizeMatrix} \title{ Turn a matrix into a vector of non-redundant components } \description{ A convenient function to turn a matrix into a vector of non-redundant components. If the matrix is non-symmetric, returns a vector containing all entries of the matrix. If the matrix is symmetric, only returns the upper triangle and optionally the diagonal. } \usage{ vectorizeMatrix(M, diag = FALSE) } \arguments{ \item{M}{ the matrix or data frame to be vectorized. } \item{diag}{ logical: should the diagonal be included in the output? } } \value{ A vector containing the non-redundant entries of the input matrix. } \author{ Steve Horvath } \keyword{ misc }% __ONLY ONE__ keyword per line WGCNA/man/addErrorBars.Rd0000644000176200001440000000076514012015545014532 0ustar liggesusers\name{addErrorBars} \alias{addErrorBars} \title{ Add error bars to a barplot. } \description{ This function adds error bars to an existing barplot.
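The two \code{method} options can be illustrated with a short base-R sketch (hypothetical per-set dissimilarities; the actual function also honors \code{useAbs} and \code{useSets} and performs input checking):

```r
# Per-set eigengene dissimilarities (1 - cor) for two hypothetical sets:
diss1 <- matrix(c(0, 0.25, 0.25, 0), 2, 2)
diss2 <- matrix(c(0, 0.75, 0.75, 0), 2, 2)

# "consensus": parallel (entry-wise) maximum over sets -- two eigengenes
# count as close only if they are close in every set.
consDiss <- pmax(diss1, diss2)

# "majority": entry-wise average over sets.
majDiss <- (diss1 + diss2) / 2

consDiss[1, 2]  # 0.75
majDiss[1, 2]   # 0.5
```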
} \usage{ addErrorBars(means, errors, two.side = FALSE) } \arguments{ \item{means}{ vector of means plotted in the barplot } \item{errors}{ vector of standard errors (single positive values) to be plotted. } \item{two.side}{ should the error bars be two-sided? } } \value{ None. } \author{ Steve Horvath and Peter Langfelder } \keyword{hplot} WGCNA/man/standardScreeningBinaryTrait.Rd0000644000176200001440000001357514012015545017772 0ustar liggesusers\name{standardScreeningBinaryTrait} \Rdversion{1.1} \alias{standardScreeningBinaryTrait} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Standard screening for binary traits } \description{ The function standardScreeningBinaryTrait computes widely used statistics for relating the columns of the input data frame (argument datExpr) to a binary sample trait (argument y). The statistics include Student t-test p-value and the corresponding local false discovery rate (known as q-value, Storey et al 2004), the fold change, the area under the ROC curve (also known as C-index), mean values etc. If the input option \code{kruskalTest} is set to TRUE, it also computes the Kruskal-Wallis test p-value and corresponding q-value. The Kruskal-Wallis test is a non-parametric, rank-based group comparison test. } \usage{ standardScreeningBinaryTrait( datExpr, y, corFnc = cor, corOptions = list(use = 'p'), kruskalTest = FALSE, qValues = FALSE, var.equal=FALSE, na.action="na.exclude", getAreaUnderROC = TRUE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{datExpr}{ a data frame or matrix whose columns will be related to the binary trait } \item{y}{ a binary vector whose length (number of components) equals the number of rows of datExpr } \item{corFnc}{ correlation function. Defaults to Pearson correlation. } \item{corOptions}{ a list specifying options to corFnc. An empty list must be specified as \code{list()} (supplying \code{NULL} instead will trigger an error).
} \item{kruskalTest}{ logical: should the Kruskal-Wallis test be performed? } \item{qValues}{ logical: should the q-values be calculated? } \item{var.equal}{ logical input parameter for the Student t-test. It indicates whether the two variances (corresponding to the binary grouping) should be treated as equal. If TRUE, the pooled variance is used to estimate the variance; otherwise, the Welch (or Satterthwaite) approximation to the degrees of freedom is used. Note the default value here (FALSE, as shown in the usage above); type help(t.test) for more details. } \item{na.action}{ character string for the Student t-test: indicates what should happen when the data contain missing values (NAs). } \item{getAreaUnderROC}{logical: should area under the ROC curve be calculated? The calculation slows the function down somewhat. } } \value{ A data frame whose rows correspond to the columns of datExpr and whose columns report \item{ID}{column names of the input \code{datExpr}.} \item{corPearson}{Pearson correlation with a binary numeric version of the input variable. The numeric variable equals 1 for level 1 and 2 for level 2. The levels are given by levels(factor(y)).} \item{t.Student}{Student's t-test statistic} \item{pvalueStudent}{two-sided Student t-test p-value.} \item{qvalueStudent}{(if input \code{qValues==TRUE}) q-value (local false discovery rate) based on the Student t-test p-value (Storey et al 2004).} \item{foldChange}{a (signed) ratio of mean values. If the mean in the first group (corresponding to level 1) is larger than that of the second group, it equals meanFirstGroup/meanSecondGroup.
But if the mean of the second group is larger than that of the first group it equals -meanSecondGroup/meanFirstGroup (notice the minus sign).} \item{meanFirstGroup}{means of columns in input \code{datExpr} across samples in the first group.} \item{meanSecondGroup}{means of columns in input \code{datExpr} across samples in the second group.} \item{SE.FirstGroup}{standard errors of columns in input \code{datExpr} across samples in the first group. Recall that SE(x)=sqrt(var(x)/n) where n is the number of non-missing values of x. } \item{SE.SecondGroup}{standard errors of columns in input \code{datExpr} across samples in the second group.} \item{areaUnderROC}{the area under the ROC, also known as the concordance index or C.index. This is a measure of discriminatory power. The measure lies between 0 and 1 where 0.5 indicates no discriminatory power. 0 indicates that the "opposite" predictor has perfect discriminatory power. To compute it we use the function \link[Hmisc]{rcorr.cens} with \code{outx=TRUE} (from Frank Harrell's package Hmisc). Only present if input \code{getAreaUnderROC} is \code{TRUE}.} \item{nPresentSamples}{number of samples with finite measurements for each gene.} If input \code{kruskalTest} is \code{TRUE}, the following columns further summarize results of the Kruskal-Wallis test: \item{stat.Kruskal}{Kruskal-Wallis test statistic.} \item{stat.Kruskal.signed}{(Warning: experimental) Kruskal-Wallis test statistic including a sign that indicates whether the average rank is higher in the second group (positive) or the first group (negative). } \item{pvaluekruskal}{Kruskal-Wallis test p-values.} \item{qkruskal}{q-values corresponding to the Kruskal-Wallis test p-value (if input \code{qValues==TRUE}).} } \references{ Storey JD, Taylor JE, and Siegmund D. (2004) Strong control, conservative point estimation, and simultaneous conservative consistency of false discovery rates: A unified approach. Journal of the Royal Statistical Society, Series B, 66: 187-205.
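The signed fold-change convention documented for the \code{foldChange} column can be sketched in a few lines of base R (hypothetical helper name and group means; the convention as stated does not cover zero or negative means, so this sketch assumes positive means):

```r
# Signed ratio of group means, following the documented convention:
# positive when the first group's mean is larger, negative otherwise.
signedFoldChange <- function(meanFirstGroup, meanSecondGroup) {
  if (meanFirstGroup >= meanSecondGroup)
    meanFirstGroup / meanSecondGroup
  else
    -meanSecondGroup / meanFirstGroup  # note the minus sign
}
signedFoldChange(4, 2)  # 2: first-group mean is twice the second
signedFoldChange(2, 4)  # -2: second-group mean is twice the first
```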
} \author{ Steve Horvath } \examples{ require(survival) # For is.Surv in rcorr.cens m=50 y=sample(c(1,2),m,replace=TRUE) datExprSignal=simulateModule(scale(y),30) datExprNoise=simulateModule(rnorm(m),150) datExpr=data.frame(datExprSignal,datExprNoise) Result1=standardScreeningBinaryTrait(datExpr,y) Result1[1:5,] # use unequal variances and calculate q-values Result2=standardScreeningBinaryTrait(datExpr,y, var.equal=FALSE,qValues=TRUE) Result2[1:5,] # calculate Kruskal-Wallis test and q-values Result3=standardScreeningBinaryTrait(datExpr,y,kruskalTest=TRUE,qValues=TRUE) Result3[1:5,] } \keyword{ misc }% __ONLY ONE__ keyword per line WGCNA/man/subsetTOM.Rd0000644000176200001440000000455114012015545014042 0ustar liggesusers\name{subsetTOM} \alias{subsetTOM} \title{ Topological overlap for a subset of a whole set of genes } \description{ This function calculates topological overlap of a subset of vectors with respect to a whole data set. } \usage{ subsetTOM( datExpr, subset, corFnc = "cor", corOptions = "use = 'p'", weights = NULL, networkType = "unsigned", power = 6, verbose = 1, indent = 0) } \arguments{ \item{datExpr}{ a data frame containing the expression data of the whole set, with rows corresponding to samples and columns to genes. } \item{subset}{ a single logical or numeric vector giving the indices of the nodes for which the TOM is to be calculated. } \item{corFnc}{ character string giving the correlation function to be used for the adjacency calculation. Recommended choices are \code{"cor"} and \code{"bicor"}, but other functions can be used as well. } \item{corOptions}{ character string giving further options to be passed to the correlation function. } \item{weights}{optional observation weights for \code{datExpr} to be used in correlation calculation. A matrix of the same dimensions as \code{datExpr}, containing non-negative weights. Only used with Pearson correlation.} \item{networkType}{ character string giving network type.
Allowed values are (unique abbreviations of) \code{"unsigned"}, \code{"signed"}, \code{"signed hybrid"}. See \code{\link{adjacency}}. } \item{power}{ soft-thresholding power for network construction. } \item{verbose}{ integer level of verbosity. Zero means silent, higher values make the output progressively more and more verbose. } \item{indent}{ indentation for diagnostic messages. Zero means no indentation, each unit adds two spaces. } } \details{ This function is designed to calculate topological overlaps of small subsets of large expression data sets, for example in individual modules. } \value{ A matrix of dimensions \code{n*n}, where \code{n} is the number of entries selected by \code{subset}. } \references{ Bin Zhang and Steve Horvath (2005) "A General Framework for Weighted Gene Co-Expression Network Analysis", Statistical Applications in Genetics and Molecular Biology: Vol. 4: No. 1, Article 17 } \author{ Peter Langfelder } \seealso{ \code{\link{TOMsimilarity}} for standard calculation of topological overlap. } \keyword{ misc } WGCNA/man/verboseBoxplot.Rd0000644000176200001440000000511314012015545015166 0ustar liggesusers\name{verboseBoxplot} \alias{verboseBoxplot} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Boxplot annotated by a Kruskal-Wallis p-value} \description{ Plot a boxplot annotated by the Kruskal-Wallis p-value. Uses the function \code{\link[graphics]{boxplot}} for the actual drawing. } \usage{ verboseBoxplot(x, g, main = "", xlab = NA, ylab = NA, cex = 1, cex.axis = 1.5, cex.lab = 1.5, cex.main = 1.5, notch = TRUE, varwidth = TRUE, ..., addScatterplot = FALSE, pt.cex = 0.8, pch = 21, pt.col = "blue", pt.bg = "skyblue", randomSeed = 31425, jitter = 0.6) } %- maybe also 'usage' for other objects documented here.
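The topological overlap computed by \code{subsetTOM} and \code{TOMsimilarity} follows the Zhang and Horvath (2005) formula referenced above. A base-R sketch for a small unsigned network (simulated data; this is a conceptual illustration, not the package's optimized implementation):

```r
# Unsigned topological overlap from first principles:
# TOM_ij = (L_ij + a_ij) / (min(k_i, k_j) + 1 - a_ij), with TOM_ii = 1,
# where a = |cor|^power, k = connectivity, L = shared-neighbor adjacency.
set.seed(42)
datExpr <- matrix(rnorm(200), 20, 10)  # 20 samples x 10 genes
power <- 6
A <- abs(cor(datExpr))^power           # unsigned adjacency
diag(A) <- 0
k <- colSums(A)                        # connectivities
L <- A %*% A                           # sum over shared neighbors
TOM <- (L + A) / (outer(k, k, pmin) + 1 - A)
diag(TOM) <- 1
range(TOM)  # all values lie in [0, 1]
```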
\arguments{ \item{x}{ numerical vector of data whose group means are to be plotted } \item{g}{ a factor or an object coercible to a factor giving the groups that will go into each box.} \item{main}{ main title for the plot.} \item{xlab}{ label for the x-axis. } \item{ylab}{ label for the y-axis. } \item{cex}{ character expansion factor for plot annotations. } \item{cex.axis}{ character expansion factor for axis annotations. } \item{cex.lab}{ character expansion factor for axis labels. } \item{cex.main}{ character expansion factor for the main title. } \item{notch}{logical: should the notches be drawn? See \code{\link[graphics]{boxplot}} and \code{\link{boxplot.stats}} for details. } \item{varwidth}{logical: if \code{TRUE}, the boxes are drawn with widths proportional to the square-roots of the number of observations in the groups.} \item{\dots}{ other arguments to the function \code{\link{boxplot}}. Of note is the argument \code{las} that specifies label orientation. Value \code{las=1} will result in horizontal labels (the default), while \code{las=2} will result in vertical labels, useful when the labels are long.} \item{addScatterplot}{logical: should a scatterplot of the data be overlaid? } \item{pt.cex}{character expansion factor for the points.} \item{pch}{shape code for the points.} \item{pt.col}{color for the points.} \item{pt.bg}{background color for the points.} \item{randomSeed}{integer random seed to make plots reproducible.} \item{jitter}{amount of random jitter to add to the position of the points along the x axis.} } \value{ Returns the value returned by the function \code{\link{boxplot}}.
} \author{ Steve Horvath, with contributions from Zhijin (Jean) Wu and Peter Langfelder } \seealso{ \code{\link{boxplot}} } \keyword{ misc }% __ONLY ONE__ keyword per line WGCNA/man/empiricalBayesLM.Rd0000644000176200001440000003207214672545314015354 0ustar liggesusers\name{empiricalBayesLM} \alias{empiricalBayesLM} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Empirical Bayes-moderated adjustment for unwanted covariates } \description{ This function removes variation in high-dimensional data due to unwanted covariates while preserving variation due to retained covariates. To prevent numerical instability, it uses Empirical Bayes-moderated linear regression, optionally in a robust (outlier-resistant) form. } \usage{ empiricalBayesLM( data, removedCovariates, retainedCovariates = NULL, initialFitFunction = NULL, initialFitOptions = NULL, initialFitRequiresFormula = NULL, initialFit.returnWeightName = NULL, fitToSamples = NULL, weights = NULL, automaticWeights = c("none", "bicov"), aw.maxPOutliers = 0.1, weightType = c("apriori", "empirical"), stopOnSmallWeights = TRUE, minDesignDeviation = 1e-10, robustPriors = FALSE, tol = 1e-4, maxIterations = 1000, garbageCollectInterval = 50000, scaleMeanToSamples = fitToSamples, scaleMeanOfSamples = NULL, getOLSAdjustedData = TRUE, getResiduals = TRUE, getFittedValues = TRUE, getWeights = TRUE, getEBadjustedData = TRUE, verbose = 0, indent = 0) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{data}{ A 2-dimensional matrix or data frame of numeric data to be adjusted. Variables (for example, genes or methylation profiles) should be in columns and observations (samples) should be in rows. } \item{removedCovariates}{ A vector or two-dimensional object (matrix or data frame) giving the covariates whose effect on the data is to be removed. At least one such covariate must be given.
} \item{retainedCovariates}{ A vector or two-dimensional object (matrix or data frame) giving the covariates whose effect on the data is to be retained. May be \code{NULL} if there are no such "retained" covariates. } \item{initialFitFunction}{ Function name to perform the initial fit. The default is to use the internal implementation of linear model fitting. The function must take arguments \code{formula} and \code{data} or \code{x} and \code{y}, plus possibly additional arguments. The return value must be a list with component \code{coefficients}, either \code{scale} or \code{residuals}, and weights must be returned in component specified by \code{initialFit.returnWeightName}. See \code{\link{lm}}, \code{\link[MASS]{rlm}} and other standard fit functions for examples of suitable functions.} \item{initialFitOptions}{ Optional specifications of extra arguments for \code{initialFitFunction}, apart from \code{formula} and \code{data} or \code{x} and \code{y}. Defaults are provided for function \code{\link[MASS]{rlm}}, i.e., if this function is used as \code{initialFitFunction}, suitable initial fit options will be chosen automatically.} \item{initialFitRequiresFormula}{ Logical: does the initial fit function need \code{formula} and \code{data} arguments? If \code{TRUE}, \code{initialFitFunction} will be called with arguments \code{formula} and \code{data}, otherwise with arguments \code{x} and \code{y}.} \item{initialFit.returnWeightName}{ Name of the component of the return value of \code{initialFitFunction} that contains the weights used in the fit. Suitable default value will be chosen automatically for \code{\link[MASS]{rlm}}. } \item{fitToSamples}{ Optional index of samples from which the linear model fits should be calculated. Defaults to all samples. If given, the models will be only fit to the specified samples but all samples will be transformed using the calculated coefficients. 
} \item{weights}{ Optional 2-dimensional matrix or data frame of the same dimensions as \code{data} giving weights for each entry in \code{data}. These weights will be used in the initial fit and are separate from the ones returned by \code{initialFitFunction} if it is specified. } \item{automaticWeights}{ One of (unique abbreviations of) \code{"none"} or \code{"bicov"}, instructing the function to calculate weights from the given \code{data}. Value \code{"none"} will result in trivial weights; value \code{"bicov"} will result in biweight midcovariance weights being used. } \item{aw.maxPOutliers}{ If \code{automaticWeights} above is \code{"bicov"}, this argument gets passed to the function \code{\link{bicovWeights}} and determines the maximum proportion of outliers in calculating the weights. See \code{\link{bicovWeights}} for more details. } \item{weightType}{ One of (unique abbreviations of) \code{"apriori"} or \code{"empirical"}. Determines whether a standard (\code{"apriori"}) or a modified (\code{"empirical"}) weighted regression is used. The \code{"apriori"} choice is suitable for weights that have been determined without knowledge of the actual \code{data}, while \code{"empirical"} is appropriate for situations where one wants to down-weight certain entries of \code{data} because they may be outliers. In either case, the weights should be determined in a way that is independent of the covariates (both retained and removed). } \item{stopOnSmallWeights}{ Logical: should presence of small \code{"apriori"} weights trigger an error? Because standard weighted regression assumes that all weights are non-zero (otherwise estimates of standard errors will be biased), this function will by default complain about the presence of too small \code{"apriori"} weights. } \item{minDesignDeviation}{ Minimum standard deviation for columns of the design matrix to be retained.
Columns with standard deviations below this number will be removed (effectively removing the corresponding terms from the design). } \item{robustPriors}{ Logical: should robust priors be used? This essentially means replacing mean by median and covariance by biweight mid-covariance. } \item{tol}{ Convergence criterion used in the numerical equation solver. When the relative change in coefficients falls below this threshold, the system will be considered to have converged. } \item{maxIterations}{ Maximum number of iterations to use. } \item{garbageCollectInterval}{ Number of variables after which to call garbage collection. } \item{scaleMeanToSamples}{ Optional specification of samples (given as a vector of indices) to whose means the resulting adjusted data should be scaled (more precisely, shifted). } \item{scaleMeanOfSamples}{ Optional specification of samples (given as a vector of indices) that will be used in calculating the shift. Specifically, the shift is such that the mean of samples given in \code{scaleMeanOfSamples} will equal the mean of samples given in \code{scaleMeanToSamples}. Defaults to all samples.} \item{getOLSAdjustedData}{Logical: should data adjusted by ordinary least squares or by \code{initialFitFunction}, if specified, be returned?} \item{getResiduals}{Logical: should the residuals (adjusted values without the means) be returned?} \item{getFittedValues}{Logical: should fitted values be returned?} \item{getWeights}{Logical: should the final weights be returned?} \item{getEBadjustedData}{Logical: should the EB step be performed and the adjusted data returned? If this is \code{FALSE}, the function acts as a rather slow but still potentially useful adjustment using standard fit functions.} \item{verbose}{Level of verbosity. Zero means silent, higher values result in more diagnostic messages being printed.} \item{indent}{Indentation of diagnostic messages. 
Each unit adds two spaces.} } \details{ This function uses Empirical Bayes-moderated (EB) linear regression to remove variation in \code{data} due to the variables in \code{removedCovariates} while retaining variation due to variables in \code{retainedCovariates}, if any are given. The EB step uses simple normal priors on the regression coefficients and inverse gamma priors on the variances. The procedure starts with multivariate ordinary linear regression of individual columns in \code{data} on \code{retainedCovariates} and \code{removedCovariates}. Alternatively, the user may specify an initial fit function (e.g., robust linear regression). To make the coefficients comparable, columns of \code{data} are scaled to (weighted if weights are given) mean 0 and variance 1. The resulting regression coefficients are used to determine the parameters of the normal prior (mean and covariance, or median and biweight mid-covariance if robust priors are used), and the variances are used to determine the parameters of the inverse gamma prior. The EB step then essentially shrinks the coefficients toward their means, with the amount of shrinkage determined by the prior covariance. Using appropriate weights can make the data adjustment robust to outliers. This can be achieved automatically by using the argument \code{automaticWeights = "bicov"}. When bicov weights are used, we also recommend setting the argument \code{aw.maxPOutliers} to a maximum proportion of samples that could be outliers. This is especially important if some of the design variables are binary and can be expected to have a strong effect on some of the columns in \code{data}, since standard biweight midcorrelation (and its weights) do not work well on bimodal data. The automatic bicov weights are determined from \code{data} only. It is implicitly assumed that there are no outliers in the retained and removed covariates.
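The shrinkage just described can be illustrated with a deliberately simplified, one-coefficient normal-normal sketch (hypothetical helper function; the actual procedure estimates the prior from all columns of \code{data} jointly and also places inverse gamma priors on the variances):

```r
# Posterior-mean shrinkage of a single regression coefficient toward the
# prior mean; the weight given to the prior grows with prior precision.
# Conceptual sketch only -- not the package's actual EB update.
shrinkCoef <- function(beta.ols, prior.mean, prior.var, error.var) {
  w <- (1 / prior.var) / (1 / prior.var + 1 / error.var)
  w * prior.mean + (1 - w) * beta.ols
}
shrinkCoef(beta.ols = 2, prior.mean = 0, prior.var = 1,   error.var = 1)  # 1
shrinkCoef(beta.ols = 2, prior.mean = 0, prior.var = 1e6, error.var = 1)  # ~2
```

With equal prior and error variances the estimate is pulled halfway to the prior mean; a very diffuse prior (large \code{prior.var}) leaves the OLS estimate essentially untouched.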
Outliers in the covariates are more difficult to work with since, even if the regression is made robust to them, they can influence the adjusted values for the sample in which they appear. Unless the covariate outliers can be attributed to a relevant variation in experimental conditions, samples with covariate outliers are best removed entirely before calling this function. } \value{ A list with the following components (some of which may be missing depending on input options): \item{adjustedData}{A matrix of the same dimensions as the input \code{data}, giving the adjusted data. If input \code{data} has non-NULL \code{dimnames}, these are copied.} \item{residuals}{A matrix of the same dimensions as the input \code{data}, giving the residuals, that is, adjusted data with zero means.} \item{coefficients}{A matrix of regression coefficients. Rows correspond to the design matrix variables (mean, retained and removed covariates) and columns correspond to the variables (columns) in \code{data}.} \item{coefficiens.scaled}{A matrix of regression coefficients corresponding to columns in \code{data} scaled to mean 0 and variance 1.} \item{sigmaSq}{Estimated error variances (one for each column of input \code{data}).} \item{sigmaSq.scaled}{Estimated error variances corresponding to columns in \code{data} scaled to mean 0 and variance 1.} \item{fittedValues}{Fitted values calculated from the means and coefficients corresponding to the removed covariates, i.e., roughly the values that are subtracted out of the data.} \item{adjustedData.OLS}{A matrix of the same dimensions as the input \code{data}, giving the data adjusted by ordinary least squares. This component should only be used for diagnostic purposes, not as input for further downstream analyses, as the OLS adjustment is inferior to EB adjustment.
} \item{residuals.OLS}{A matrix of the same dimensions as the input \code{data}, giving the residuals obtained from ordinary least squares regression, that is, OLS-adjusted data with zero means.} \item{coefficients.OLS}{A matrix of ordinary least squares regression coefficients. Rows correspond to the design matrix variables (mean, retained and removed covariates) and columns correspond to the variables (columns) in \code{data}.} \item{coefficiens.OLS.scaled}{A matrix of ordinary least squares regression coefficients corresponding to columns in \code{data} scaled to mean 0 and variance 1. These coefficients are used to calculate priors for the EB step.} \item{sigmaSq.OLS}{Estimated OLS error variances (one for each column of input \code{data}).} \item{sigmaSq.OLS.scaled}{Estimated OLS error variances corresponding to columns in \code{data} scaled to mean 0 and variance 1. These are used to calculate variance priors for the EB step.} \item{fittedValues.OLS}{OLS fitted values calculated from the means and coefficients corresponding to the removed covariates.} \item{weights}{A matrix of weights used in the regression models. The matrix has the same dimensions as the input \code{data}.} \item{dataColumnValid}{Logical vector with one element per column of input \code{data}, indicating whether the column was adjusted. Columns with zero variance or too many missing data cannot be adjusted.} \item{dataColumnWithZeroVariance}{Logical vector with one element per column of input \code{data}, indicating whether the column had zero variance.} \item{coefficientValid}{Logical matrix of dimension (number of covariates + 1) times (number of variables in \code{data}), indicating whether the corresponding regression coefficient is valid. Invalid regression coefficients may be returned as missing values or as zeroes.} } \author{ Peter Langfelder } \seealso{ \code{\link{bicovWeights}} for suitable weights that make the adjustment robust to outliers.
} % Add one or more standard keywords, see file 'KEYWORDS' in the % R documentation directory. \keyword{models} \keyword{regression} WGCNA/man/userListEnrichment.Rd0000644000176200001440000005360014022073754016012 0ustar liggesusers\name{userListEnrichment} \alias{userListEnrichment} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Measure enrichment between inputted and user-defined lists } \description{ This function measures list enrichment between inputted lists of genes and files containing user-defined lists of genes. Significant enrichment is measured using a hypergeometric test. A pre-made collection of brain-related lists can also be loaded. The function writes the significant enrichments to a file, but also returns all overlapping genes across all comparisons. } \usage{ userListEnrichment( geneR, labelR, fnIn = NULL, catNmIn = fnIn, nameOut = "enrichment.csv", useBrainLists = FALSE, useBloodAtlases = FALSE, omitCategories = "grey", outputCorrectedPvalues = TRUE, useStemCellLists = FALSE, outputGenes = FALSE, minGenesInCategory = 1, useBrainRegionMarkers = FALSE, useImmunePathwayLists = FALSE, usePalazzoloWang = FALSE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{geneR}{ A vector of gene (or other) identifiers. This vector should include ALL genes in your analysis (i.e., the genes corresponding to your labeled lists AND the remaining background reference genes). } \item{labelR}{ A vector of labels (for example, module assignments) corresponding to the geneR list. NOTE: For all background reference genes that have no corresponding label, use the label "background" (or any label included in the omitCategories parameter). } \item{fnIn}{ A vector of file names containing user-defined lists. These files must be in one of three specific formats (see details section). The default (NULL) may only be used if one of the "use_____" parameters is TRUE.
} \item{catNmIn}{ A vector of category names corresponding to each fnIn. This name will be appended to each overlap corresponding to that filename. The default sets the category names as the corresponding file names. } \item{nameOut}{ Name of the file where the output enrichment information will be written. (Note that this file includes only a subset of what is returned by the function.) If \code{NULL} (or zero-length), no output will be written out. } \item{useBrainLists}{ If TRUE, a pre-made set of brain-derived enrichment lists will be added to any user-defined lists for enrichment comparison. The default is FALSE. See references section for related references. } \item{useBloodAtlases}{ If TRUE, a pre-made set of blood-derived enrichment lists will be added to any user-defined lists for enrichment comparison. The default is FALSE. See references section for related references. } \item{omitCategories}{ Any labelR entries corresponding to these categories will be ignored. The default ("grey") will ignore unassigned genes in a standard WGCNA network. } \item{outputCorrectedPvalues}{ If TRUE (default) only pvalues that are significant after correcting for multiple comparisons (using Bonferroni method) will be outputted to nameOut. Otherwise the uncorrected p-values will be outputted to the file. Note that both sets of p-values for all comparisons are reported in the returned "pValues" parameter. } \item{useStemCellLists}{ If TRUE, a pre-made set of stem cell (SC)-derived enrichment lists will be added to any user-defined lists for enrichment comparison. The default is FALSE. See references section for related references. } \item{outputGenes}{ If TRUE, will output a list of all genes in each returned category, as well as a count of the number of genes in each category. The default is FALSE. } \item{minGenesInCategory}{ Will omit all significant categories with fewer than minGenesInCategory genes (default is 1). 
} \item{useBrainRegionMarkers}{ If TRUE, a pre-made set of enrichment lists for human brain regions will be added to any user-defined lists for enrichment comparison. The default is FALSE. These lists are derived from data from the Allen Human Brain Atlas (https://human.brain-map.org/). See references section for more details. } \item{useImmunePathwayLists}{ If TRUE, a pre-made set of enrichment lists for immune system pathways will be added to any user-defined lists for enrichment comparison. The default is FALSE. These lists are derived from the lab of Daniel R Salomon. See references section for more details. } \item{usePalazzoloWang}{ If TRUE, a pre-made set of enrichment lists compiled by Mike Palazzolo and Jim Wang from CHDI will be added to any user-defined lists for enrichment comparison. The default is FALSE. See references section for more details. } } \details{ User-inputted files for fnIn can be in one of three formats: 1) Text files (must end in ".txt") with one list per file, where the first line is the list descriptor and the remaining lines are gene names corresponding to that list, with one gene per line. For example: Ribosome RPS4 RPS8 ... 2) Gene / category files (must be csv files), where the first line is the column headers corresponding to Genes and Lists, and the remaining lines correspond to the genes in each list, for any number of genes and lists. For example: Gene, Category RPS4, Ribosome RPS8, Ribosome ... NDUF1, Mitochondria NDUF3, Mitochondria ... MAPT, AlzheimersDisease PSEN1, AlzheimersDisease PSEN2, AlzheimersDisease ... 3) Module membership (kME) table in csv format. Currently, the module assignment is the only thing that is used, so as long as the Gene column is 2nd and the Module column is 3rd, it doesn't matter what is in the other columns. For example, PSID, Gene, Module, , RPS4, blue, , NDUF1, red, , RPS8, blue, , NDUF3, red, , MAPT, green, ...
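The enrichment p-value itself is the standard hypergeometric upper tail (equivalently, a one-sided Fisher exact test). A base-R sketch of the equivalent calculation, with illustrative argument names (k = overlap size, m = module size, K = user-list size, N = background size):

```r
# Hypergeometric upper-tail p-value for an overlap of k genes between a
# module of size m and a user list of size K, out of N background genes.
enrichP <- function(k, m, K, N) phyper(k - 1, K, N - K, m, lower.tail = FALSE)

enrichP(10, 20, 30, 1000)   # strong overlap => very small p-value
```

This agrees with \code{fisher.test(..., alternative = "greater")} applied to the corresponding 2x2 overlap table.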
} \value{ \item{pValues}{ A data frame showing, for each comparison, the input category, user defined category, type, the number of overlapping genes and both the uncorrected and Bonferroni corrected p-values for every pair of list overlaps tested. } \item{ovGenes}{ A list of character vectors corresponding to the overlapping genes for every pair of list overlaps tested. Specific overlaps can be found by typing $ovGenes$' -- '. See example below. } \item{sigOverlaps}{ Identical information that is written to nameOut. A data frame with columns giving the input category, user defined category, type, and P-values (corrected or uncorrected, depending on outputCorrectedPvalues) corresponding to all significant enrichments. } } \references{ The primary reference for this function is: Miller JA, Cai C, Langfelder P, Geschwind DH, Kurian SM, Salomon DR, Horvath S. (2011) Strategies for aggregating gene expression data: the collapseRows R function. BMC Bioinformatics 12:322. If you have any suggestions for lists to add to this function, please e-mail Jeremy Miller at jeremyinla@gmail.com ------------------------------------- References for the pre-defined brain lists (useBrainLists=TRUE, in alphabetical order by category descriptor) are as follows: ABA ==> Cell type markers from: Lein ES, et al. (2007) Genome-wide atlas of gene expression in the adult mouse brain. Nature 445:168-176. ADvsCT_inCA1 ==> Lists of genes found to be increasing or decreasing with Alzheimer's disease in 3 studies: 1. Blalock => Blalock E, Geddes J, Chen K, Porter N, Markesbery W, Landfield P (2004) Incipient Alzheimer's disease: microarray correlation analyses reveal major transcriptional and tumor suppressor responses. PNAS 101:2173-2178. 2.
Colangelo => Colangelo V, Schurr J, Ball M, Pelaez R, Bazan N, Lukiw W (2002) Gene expression profiling of 12633 genes in Alzheimer hippocampal CA1: transcription and neurotrophic factor down-regulation and up-regulation of apoptotic and pro-inflammatory signaling. J Neurosci Res 70:462-473. 3. Liang => Liang WS, et al (2008) Altered neuronal gene expression in brain regions differentially affected by Alzheimer's disease: a reference data set. Physiological genomics 33:240-56. Bayes ==> Postsynaptic Density Proteins from: Bayes A, et al. (2011) Characterization of the proteome, diseases and evolution of the human postsynaptic density. Nat Neurosci. 14(1):19-21. Blalock_AD ==> Modules from a network using the data from: Blalock E, Geddes J, Chen K, Porter N, Markesbery W, Landfield P (2004) Incipient Alzheimer's disease: microarray correlation analyses reveal major transcriptional and tumor suppressor responses. PNAS 101:2173-2178. CA1vsCA3 ==> Lists of genes enriched in CA1 and CA3 relative to each other and to other areas of the brain, from several studies: 1. Ginsberg => Ginsberg SD, Che S (2005) Expression profile analysis within the human hippocampus: comparison of CA1 and CA3 pyramidal neurons. J Comp Neurol 487:107-118. 2. Lein => Lein E, Zhao X, Gage F (2004) Defining a molecular atlas of the hippocampus using DNA microarrays and high-throughput in situ hybridization. J Neurosci 24:3879-3889. 3. Newrzella => Newrzella D, et al (2007) The functional genome of CA1 and CA3 neurons under native conditions and in response to ischemia. BMC Genomics 8:370. 4. Torres => Torres-Munoz JE, Van Waveren C, Keegan MG, Bookman RJ, Petito CK (2004) Gene expression profiles in microdissected neurons from human hippocampal subregions. Brain Res Mol Brain Res 127:105-114. 5. GorLorT => In either Ginsberg or Lein or Torres list. Cahoy ==> Definite (10+ fold) and probable (1.5+ fold) enrichment from: Cahoy JD, et al.
(2008) A transcriptome database for astrocytes, neurons, and oligodendrocytes: A new resource for understanding brain development and function. J Neurosci 28:264-278. CTX ==> Modules from the CTX (cortex) network from: Oldham MC, et al. (2008) Functional organization of the transcriptome in human brain. Nat Neurosci 11:1271-1282. DiseaseGenes ==> Probable (C or better rating as of 16 Mar 2011) and possible (all genes in database as of ~2008) genetics-based disease genes from: http://www.alzforum.org/ EarlyAD ==> Genes whose expression is related to cognitive markers of early Alzheimer's disease vs. non-demented controls with AD pathology, from: Parachikova, A., et al (2007) Inflammatory changes parallel the early stages of Alzheimer disease. Neurobiology of Aging 28:1821-1833. HumanChimp ==> Modules showing region-specificity in both human and chimp from: Oldham MC, Horvath S, Geschwind DH (2006) Conservation and evolution of gene coexpression networks in human and chimpanzee brains. Proc Natl Acad Sci USA 103: 17973-17978. HumanMeta ==> Modules from the human network from: Miller J, Horvath S, Geschwind D (2010) Divergence of human and mouse brain transcriptome highlights Alzheimer disease pathways. Proc Natl Acad Sci 107:12698-12703. JAXdiseaseGene ==> Genes where mutations in mouse and/or human are known to cause any disease. WARNING: this list represents an oversimplification of data! This list was created from the Jackson Laboratory: Bult CJ, Eppig JT, Kadin JA, Richardson JE, Blake JA; Mouse Genome Database Group (2008) The Mouse Genome Database (MGD): Mouse biology and model systems. Nucleic Acids Res 36 (database issue):D724-D728. Lu_Aging ==> Modules from a network using the data from: Lu T, Pan Y, Kao S-Y, Li C, Kohane I, Chan J, Yankner B (2004) Gene regulation and DNA damage in the ageing human brain. Nature 429:883-891. MicroglialMarkers ==> Markers for microglia and macrophages from several studies: 1. GSE772 => Gan L, et al. 
(2004) Identification of cathepsin B as a mediator of neuronal death induced by Abeta-activated microglial cells using a functional genomics approach. J Biol Chem 279:5565-5572. 2. GSE1910 => Albright AV, Gonzalez-Scarano F (2004) Microarray analysis of activated mixed glial (microglia) and monocyte-derived macrophage gene expression. J Neuroimmunol 157:27-38. 3. AitGhezala => Ait-Ghezala G, Mathura VS, Laporte V, Quadros A, Paris D, Patel N, et al. Genomic regulation after CD40 stimulation in microglia: relevance to Alzheimer's disease. Brain Res Mol Brain Res 2005;140(1-2):73-85. 4. 3treatments_Thomas => Thomas, DM, Francescutti-Verbeem, DM, Kuhn, DM (2006) Gene expression profile of activated microglia under conditions associated with dopamine neuronal damage. The FASEB Journal 20:515-517. MitochondrialType ==> Mitochondrial genes from the somatic vs. synaptic fraction of mouse cells from: Winden KD, et al. (2009) The organization of the transcriptional network in specific neuronal classes. Mol Syst Biol 5:291. MO ==> Markers for many different things provided to me by Mike Oldham. These were originally from several sources: 1. 2+_26Mar08 => Genetics-based disease genes in two or more studies from http://www.alzforum.org/ (compiled by Mike Oldham). 2. Bachoo => Bachoo, R.M. et al. (2004) Molecular diversity of astrocytes with implications for neurological disorders. PNAS 101, 8384-8389. 3. Foster => Foster, LJ, de Hoog, CL, Zhang, Y, Zhang, Y, Xie, X, Mootha, VK, Mann, M. (2006) A Mammalian Organelle Map by Protein Correlation Profiling. Cell 125(1): 187-199. 4. Morciano => Morciano, M. et al. Immunoisolation of two synaptic vesicle pools from synaptosomes: a proteomics analysis. J. Neurochem. 95, 1732-1745 (2005). 5. Sugino => Sugino, K. et al. Molecular taxonomy of major neuronal classes in the adult mouse forebrain. Nat. Neurosci. 9, 99-107 (2006).
MouseMeta ==> Modules from the mouse network from: Miller J, Horvath S, Geschwind D (2010) Divergence of human and mouse brain transcriptome highlights Alzheimer disease pathways. Proc Natl Acad Sci 107:12698-12703. Sugino/Winden ==> Conservative list of genes in modules from the network from: Winden K, Oldham M, Mirnics K, Ebert P, Swan C, Levitt P, Rubenstein J, Horvath S, Geschwind D (2009). The organization of the transcriptional network in specific neuronal classes. Molecular systems biology 5. NOTE: Original data came from this neuronal-cell-type-selection experiment in mouse: Sugino K, Hempel C, Miller M, Hattox A, Shapiro P, Wu C, Huang J, Nelson S (2006). Molecular taxonomy of major neuronal classes in the adult mouse forebrain. Nat Neurosci 9:99-107 Voineagu ==> Several Autism-related gene categories from: Voineagu I, Wang X, Johnston P, Lowe JK, Tian Y, Horvath S, Mill J, Cantor RM, Blencowe BJ, Geschwind DH. (2011). Transcriptomic analysis of autistic brain reveals convergent molecular pathology. Nature 474(7351):380-4 ------------------------------------- References for the pre-defined blood atlases (useBloodAtlases=TRUE, in alphabetical order by category descriptor) are as follows: Blood(composite) ==> Lists for blood cell types with this label are made from combining marker genes from the following three publications: 1. Abbas AB, Baldwin D, Ma Y, Ouyang W, Gurney A, et al. (2005). Immune response in silico (IRIS): immune-specific genes identified from a compendium of microarray expression data. Genes Immun. 6(4):319-31. 2. Grigoryev YA, Kurian SM, Avnur Z, Borie D, Deng J, et al. (2010). Deconvoluting post-transplant immunity: cell subset-specific mapping reveals pathways for activation and expansion of memory T, monocytes and B cells. PLoS One. 5(10):e13358. 3. Watkins NA, Gusnanto A, de Bono B, De S, Miranda-Saavedra D, et al. (2009). A HaemAtlas: characterizing gene expression in differentiated human blood cells. Blood. 113(19):e1-9. 
Gnatenko ==> Top 50 marker genes for platelets from: Gnatenko DV, et al. (2009) Transcript profiling of human platelets using microarray and serial analysis of gene expression (SAGE). Methods Mol Biol. 496:245-72. Gnatenko2 ==> Platelet-specific genes on a custom microarray from: Gnatenko DV, et al. (2010) Class prediction models of thrombocytosis using genetic biomarkers. Blood. 115(1):7-14. Kabanova ==> Red blood cell markers from: Kabanova S, et al. (2009) Gene expression analysis of human red blood cells. Int J Med Sci. 6(4):156-9. Whitney ==> Genes corresponding to individual variation in blood from: Whitney AR, et al. (2003) Individuality and variation in gene expression patterns in human blood. PNAS. 100(4):1896-1901. ------------------------------------- References for the pre-defined stem cell (SC) lists (useStemCellLists=TRUE, in alphabetical order by category descriptor) are as follows: Cui ==> genes differentiating erythrocyte precursors (CD36+ cells) from multipotent human primary hematopoietic stem cells/progenitor cells (CD133+ cells), from: Cui K, Zang C, Roh TY, Schones DE, Childs RW, Peng W, Zhao K. (2009). Chromatin signatures in multipotent human hematopoietic stem cells indicate the fate of bivalent genes during differentiation. Cell Stem Cell 4:80-93 Lee ==> gene lists related to Polycomb proteins in human embryonic SCs, from (a highly-cited paper!): Lee TI, Jenner RG, Boyer LA, Guenther MG, Levine SS, Kumar RM, Chevalier B, Johnstone SE, Cole MF, Isono K, et al. (2006) Control of developmental regulators by polycomb in human embryonic stem cells. Cell 125:301-313 ------------------------------------- References and more information for the pre-defined human brain region lists (useBrainRegionMarkers=TRUE): HBA ==> Hawrylycz MJ, Lein ES, Guillozet-Bongaarts AL, Shen EH, Ng L, Miller JA, et al. (2012) An Anatomically Comprehensive Atlas of the Adult Human Brain Transcriptome. Nature (in press) Three categories of marker genes are presented: 1. 
globalMarker(top200) = top 200 global marker genes for 22 large brain structures. Genes are ranked based on fold change enrichment (expression in region vs. expression in rest of brain) and the ranks are averaged between brains 2001 and 2002 (human.brain-map.org). 2. localMarker(top200) = top 200 local marker genes for 90 large brain structures. Same as 1, except fold change is defined as expression in region vs. expression in larger region (format: _IN_). For example, enrichment in CA1 is relative to other subcompartments of the hippocampus. 3. localMarker(FC>2) = same as #2, but only local marker genes with fold change > 2 in both brains are included. Regions with <10 marker genes are omitted. ------------------------------------- More information for the pre-defined immune pathways lists (useImmunePathwayLists=TRUE): ImmunePathway ==> These lists were created by Brian Modena (a member of Daniel R Salomon's lab at Scripps Research Institute), with input from Sunil M Kurian and Dr. Salomon, using Ingenuity, WikiPathways and literature search to assemble them. They reflect knowledge-based immune pathways and were in part informed by Dr. Salomon and colleagues' work in expression profiling of biopsies and peripheral blood, rather than by any single highly organized process. These lists are not from any particular publication, but are culled to include only genes of reasonably high confidence. ------------------------------------- References for the pre-defined lists from CHDI (usePalazzoloWang=TRUE, in alphabetical order by category descriptor) are as follows: Biocyc NCBI Biosystems ==> Several gene sets from the "Biocyc" component of NCBI Biosystems: Geer LY, Marchler-Bauer A, Geer RC, Han L, He J, He S, Liu C, Shi W, Bryant SH (2010) The NCBI BioSystems database. Nucleic Acids Res. 38(Database issue):D492-6. Kegg NCBI Biosystems ==> Several gene sets from the "Kegg" component of NCBI Biosystems: Geer LY et al 2010 (full citation above).
Palazzolo and Wang ==> These gene sets were compiled from a variety of sources by Mike Palazzolo and Jim Wang at CHDI. Pathway Interaction Database NCBI Biosystems ==> Several gene sets from the "Pathway Interaction Database" component of NCBI Biosystems: Geer LY et al 2010 (full citation above). PMID 17500595 Kaltenbach 2007 ==> Several gene sets from: Kaltenbach LS, Romero E, Becklin RR, Chettier R, Bell R, Phansalkar A, et al. (2007) Huntingtin interacting proteins are genetic modifiers of neurodegeneration. PLoS Genet. 3(5):e82 PMID 22348130 Schaefer 2012 ==> Several gene sets from: Schaefer MH, Fontaine JF, Vinayagam A, Porras P, Wanker EE, Andrade-Navarro MA (2012) HIPPIE: Integrating protein interaction networks with experiment based quality scores. PLoS One. 7(2):e31826 PMID 22556411 Culver 2012 ==> Several gene sets from: Culver BP, Savas JN, Park SK, Choi JH, Zheng S, Zeitlin SO, Yates JR 3rd, Tanese N. (2012) Proteomic analysis of wild-type and mutant huntingtin-associated proteins in mouse brains identifies unique interactions and involvement in protein synthesis. J Biol Chem. 287(26):21599-614 PMID 22578497 Cajigas 2012 ==> Several gene sets from: Cajigas IJ, Tushev G, Will TJ, tom Dieck S, Fuerst N, Schuman EM. (2012) The local transcriptome in the synaptic neuropil revealed by deep sequencing and high-resolution imaging. Neuron. 74(3):453-66 Reactome NCBI Biosystems ==> Several gene sets from the "Reactome" component of NCBI Biosystems: Geer LY et al 2010 (full citation above). Wiki Pathways NCBI Biosystems ==> Several gene sets from the "Wiki Pathways" component of NCBI Biosystems: Geer LY et al 2010 (full citation above). Yang ==> These gene sets were compiled from a variety of sources by Mike Palazzolo and Jim Wang at CHDI. 
} \author{ Jeremy Miller } \examples{ # Example: first, read in some gene names and split them into categories data(BrainLists); listGenes = unique(as.character(BrainLists[,1])) set.seed(100) geneR = sort(sample(listGenes,2000)) categories = sort(rep(standardColors(10),200)) categories[sample(1:2000,200)] = "grey" file1 = tempfile(); file2 = tempfile(); write(c("TESTLIST1",geneR[300:400]), file1, sep="\n") write(c("TESTLIST2",geneR[800:1000]), file2, sep="\n") # Now run the function! testResults = userListEnrichment( geneR, labelR=categories, fnIn=c(file1, file2), catNmIn=c("TEST1","TEST2"), nameOut = NULL, useBrainLists=TRUE, omitCategories ="grey") # To see a list of all significant enrichments type: testResults$sigOverlaps # To see all of the overlapping genes between two categories #(whether or not the p-value is significant), type #testResults$ovGenes$' -- '. For example: testResults$ovGenes$"black -- TESTLIST1__TEST1" testResults$ovGenes$"red -- salmon_M12_Ribosome__HumanMeta" # More detailed overlap information is in the pValue output. For example: head(testResults$pValue) # Clean up the temporary files unlink(file1); unlink(file2) } \keyword{misc} WGCNA/man/sigmoidAdjacencyFunction.Rd0000644000176200001440000000177414012015545017124 0ustar liggesusers\name{sigmoidAdjacencyFunction} \alias{sigmoidAdjacencyFunction} \title{ Sigmoid-type adjacency function. } \description{ Sigmoid-type function that converts a similarity to a weighted network adjacency. } \usage{ sigmoidAdjacencyFunction(ss, mu = 0.8, alpha = 20) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{ss}{ similarity, a number between 0 and 1. Can be given as a scalar, vector or a matrix. } \item{mu}{ shift parameter. } \item{alpha}{ slope parameter. } } \details{ The sigmoid adjacency function is defined as \eqn{1/(1+\exp[-\alpha(ss - \mu)])}{1/(1 + exp(-alpha * (ss - mu)))}.
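The formula can be checked directly; a minimal base-R version (a hand-rolled stand-in for \code{sigmoidAdjacencyFunction} itself, with the same defaults):

```r
# Direct implementation of the sigmoid adjacency formula above; returns
# values in (0, 1) in the same form (scalar, vector, or matrix) as ss.
sigmoidAdj <- function(ss, mu = 0.8, alpha = 20) {
  1 / (1 + exp(-alpha * (ss - mu)))
}

sigmoidAdj(c(0.2, 0.8, 0.95))
```

At the shift point \code{ss = mu} the adjacency is exactly 0.5, and the function is strictly increasing in \code{ss}.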
} \value{ Adjacencies returned in the same form as the input \code{ss} } \references{ Bin Zhang and Steve Horvath (2005) "A General Framework for Weighted Gene Co-Expression Network Analysis", Statistical Applications in Genetics and Molecular Biology: Vol. 4: No. 1, Article 17 } \author{ Steve Horvath } \keyword{misc }% __ONLY ONE__ keyword per line WGCNA/man/kMEcomparisonScatterplot.Rd0000644000176200001440000001032114012015545017141 0ustar liggesusers\name{kMEcomparisonScatterplot} \alias{kMEcomparisonScatterplot} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Function to plot kME values between two comparable data sets. } \description{ Plots the kME values of genes in two groups of expression data for each module in an inputted color vector. } \usage{ kMEcomparisonScatterplot( datExpr1, datExpr2, colorh, inA = NULL, inB = NULL, MEsA = NULL, MEsB = NULL, nameA = "A", nameB = "B", plotAll = FALSE, noGrey = TRUE, maxPlot = 1000, pch = 19, fileName = if (plotAll) paste("kME_correlations_between_",nameA,"_and_", nameB,"_all.pdf",sep="") else paste("kME_correlations_between_",nameA,"_and_", nameB,"_inMod.pdf",sep=""), ...) } \arguments{ \item{datExpr1}{ The first expression matrix (samples=rows, genes=columns). This can either include only the data for group A (in which case dataExpr2 must be entered), or can contain all of the data for groups A and B (in which case inA and inB must be entered). } \item{datExpr2}{ The second expression matrix, or set to NULL if all data is from same expression matrix. If entered, datExpr2 must contain the same genes as datExpr1 in the same order. } \item{colorh}{ The common color vector (module labels) corresponding to both sets of expression data. } \item{inA, inB}{ Vectors of TRUE/FALSE indicating whether a sample is in group A/B, or a vector of numeric indices indicating which samples are in group A/B. If datExpr2 is entered, these inputs are ignored (thus default = NULL). 
For these and all other A/B inputs, "A" corresponds to datExpr1 and "B" corresponds to datExpr2 if datExpr2 is entered; otherwise "A" corresponds to datExpr1[inA,] while "B" corresponds to datExpr1[inB,]. } \item{MEsA, MEsB}{ Either the module eigengenes or NULL (default), in which case the module eigengenes will be calculated. If inputted, MEs MUST be calculated using "moduleEigengenes()$eigengenes" for the function to work properly. } \item{nameA, nameB}{ The names of these groups (defaults = "A" and "B"). The resulting file name (see below) and x and y axis labels for each scatter plot depend on these names. } \item{plotAll}{ If TRUE, plot gene-ME correlations for all genes. If FALSE, plot correlations for only genes in the plotted module (default). Note that the output file name will be different depending on this parameter, so both can be run without overwriting results. } \item{noGrey}{ If TRUE (default), the grey module genes are ignored. This parameter is only used if MEsA and MEsB are calculated. } \item{maxPlot}{ The maximum number of random genes to include (default=1000). Smaller values lead to smaller and less cluttered plots, usually without significantly affecting the resulting correlations. This parameter is only used if plotAll=TRUE. } \item{pch}{ See help file for "points". Setting pch=19 (default) produces solid circles. } \item{fileName}{ Name of the file to hold the plots. Since the output format is pdf, the extension should be .pdf .} \item{...}{ Other plotting parameters that are allowable inputs to verboseScatterplot. } } \value{ The default output is a file called "kME_correlations_between_[nameA]_and_[nameB]_[all/inMod].pdf", where [nameA] and [nameB] correspond to the nameA and nameB input parameters, and [all/inMod] depends on whether plotAll=TRUE or FALSE. This output file contains all of the plots as separate pdf images, and will be located in the current working directory.
} \author{ Jeremy Miller } \note{ The function "pdf", which can be found in the grDevices package, is required to run this function. } \examples{ # Example output file ("kME_correlations_between_A_and_B_inMod.pdf") using simulated data. \dontrun{ set.seed(100) ME=matrix(0,50,5) for (i in 1:5) ME[,i]=sample(1:100,50) simData1 = simulateDatExpr5Modules(MEturquoise=ME[,1],MEblue=ME[,2], MEbrown=ME[,3],MEyellow=ME[,4], MEgreen=ME[,5]) simData2 = simulateDatExpr5Modules(MEturquoise=ME[,1],MEblue=ME[,2], MEbrown=ME[,3],MEyellow=ME[,4], MEgreen=ME[,5]) kMEcomparisonScatterplot(simData1$datExpr,simData2$datExpr,simData1$truemodule) } } \keyword{misc} WGCNA/man/removePrincipalComponents.Rd0000644000176200001440000000157014012015545017360 0ustar liggesusers\name{removePrincipalComponents} \alias{removePrincipalComponents} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Remove leading principal components from data } \description{ This function calculates a fixed number of the first principal components of the given data and returns the residuals of a linear regression of each column on the principal components. } \usage{ removePrincipalComponents(x, n) } \arguments{ \item{x}{ Input data, a numeric matrix. All entries must be non-missing and finite. } \item{n}{ Number of principal components to remove. This must be smaller than the smaller of the number of rows and columns in \code{x}. } } \value{ A matrix of residuals of the same dimensions as \code{x}.
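The computation can be sketched in base R as follows. This is illustrative, not the package code; in particular, whether the package centers the data before the singular value decomposition is an assumption here.

```r
# Sketch: take the first n left singular vectors of the (column-centered)
# data and return residuals of each column regressed on them.
removePCs <- function(x, n) {
  u <- svd(scale(x, scale = FALSE), nu = n, nv = 0)$u
  resid(lm(x ~ u))
}
```

The returned residual matrix has the same dimensions as \code{x} and is orthogonal to the removed principal components.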
} \author{ Peter Langfelder } \seealso{ \code{\link{svd}} for singular value decomposition, \code{\link{lm}} for linear regression } \keyword{misc} WGCNA/man/standardScreeningCensoredTime.Rd0000644000176200001440000001407114012015545020113 0ustar liggesusers\name{standardScreeningCensoredTime} \Rdversion{1.1} \alias{standardScreeningCensoredTime} \title{ Standard Screening with regard to a Censored Time Variable } \description{ The function standardScreeningCensoredTime computes association measures between the columns of the input data datE and a censored time variable (e.g. survival time). The censored time is specified using two input variables "time" and "event". The event variable is binary, where 1 indicates that the event took place (e.g. the person died) and 0 indicates censoring (i.e. lost to follow up). The function fits univariate Cox regression models (one for each column of datE) and outputs a Wald test p-value, a logrank p-value, corresponding local false discovery rates (known as q-values; Storey et al 2004), and hazard ratios. Further, it reports the concordance index (also known as the area under the ROC curve) and, optionally, results from dichotomizing the columns of datE. } \usage{ standardScreeningCensoredTime( time, event, datExpr, percentiles = seq(from = 0.1, to = 0.9, by = 0.2), dichotomizationResults = FALSE, qValues = TRUE, fastCalculation = TRUE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{time}{ numeric variable showing time to event or time to last follow up. } \item{event}{ Input variable \code{time} specifies the time to event or time to last follow up. Input variable \code{event} indicates whether the event happened (=1) or whether there was censoring (=0). } \item{datExpr}{ a data frame or matrix whose columns will be related to the censored time. } \item{percentiles}{ numeric vector which is only used when dichotomizationResults=TRUE. Each value should lie between 0 and 1.
For each value specified in the vector percentiles, a binary vector will be defined by dichotomizing the column value according to the corresponding quantile. Next, a corresponding p-value will be calculated. } \item{dichotomizationResults}{ logical. If this option is set to TRUE, then the values of the columns of datE will be dichotomized and corresponding Cox regression p-values will be calculated. } \item{qValues}{ logical. If this option is set to TRUE (default), then q-values will be calculated for the Cox regression p-values. } \item{fastCalculation}{ logical. If set to TRUE, the function outputs correlation test p-values (and q-values) for correlating the columns of datE with the expected hazard (if no covariate is fit). Specifically, the expected hazard is defined as the deviance residual of an intercept-only Cox regression model. The results are very similar to those resulting from a univariate Cox model where the censored time is regressed on the columns of datE. Specifically, this computational speed-up is facilitated by the insight that the p-values resulting from a univariate Cox regression coxph(Surv(time,event)~datE[,i]) are very similar to those from corPvalueFisher(cor(devianceResidual,datE[,i]), nSamples). } } \details{ If input option fastCalculation=TRUE, then the function outputs correlation test p-values (and q-values) for correlating the columns of datE with the expected hazard (if no covariate is fit). Specifically, the expected hazard is defined as the deviance residual of an intercept-only Cox regression model. The results are very similar to those resulting from a univariate Cox model where the censored time is regressed on the columns of datE.
Specifically, this computational speed-up is facilitated by the insight that the p-values resulting from a univariate Cox regression coxph(Surv(time,event)~datE[,i]) are very similar to those from corPvalueFisher(cor(devianceResidual,datE[,i]), nSamples). } \value{ If \code{fastCalculation} is \code{FALSE}, the function outputs a data frame whose rows correspond to the columns of datE and whose columns report \item{ID}{column names of the input data datExpr.} \item{pvalueWald}{Wald test p-value from fitting a univariate Cox regression model where the censored time is regressed on each column of datExpr.} \item{qValueWald}{local false discovery rate (q-value) corresponding to the Wald test p-value. } \item{pvalueLogrank}{Logrank p-value resulting from the Cox regression model. Also known as the score test p-value. For large sample sizes this should be similar to the Wald test p-value. } \item{qValueLogrank}{local false discovery rate (q-value) corresponding to the logrank test p-value. } \item{HazardRatio}{hazard ratio resulting from the Cox model. If the value is larger than 1, then high values of the column are associated with shorter time, e.g. increased hazard of death. A hazard ratio equal to 1 means no relationship between the column and time. HR<1 means that high values are associated with longer time, i.e. lower hazard.} \item{CI.LowerLimitHR}{Lower bound of the 95 percent confidence interval of the hazard ratio. } \item{CI.UpperLimitHR}{Upper bound of the 95 percent confidence interval of the hazard ratio. } \item{C.index}{concordance index, also known as the C-index or area under the ROC curve. Calculated with the rcorr.cens option outx=TRUE (ties are ignored).} \item{MinimumDichotPvalue}{This is the smallest p-value from the dichotomization results. To see which dichotomized variable (and percentile) corresponds to the minimum, study the following columns.
} \item{pValueDichot0.1}{Each column of this type reports the p-value obtained when the columns of datE are dichotomized according to the indicated percentile (here 0.1). One such column is present for each percentile specified in the input option percentiles. } \item{pvalueDeviance}{The p-value resulting from using a correlation test to relate the expected hazard (deviance residual) with each (undichotomized) column of datE. Specifically, the Fisher transformation is used to calculate the p-value for the Pearson correlation. The resulting p-value should be very similar to that of a univariate Cox regression model.} \item{qvalueDeviance}{Local false discovery rate (q-value) corresponding to pvalueDeviance.} \item{corDeviance}{Pearson correlation between the expected hazard (deviance residual) and each (undichotomized) column of datExpr.} } \author{ Steve Horvath } \keyword{ misc } WGCNA/man/bicovWeights.Rd0000644000176200001440000000555614012015545014620 0ustar liggesusers\name{bicovWeights} \alias{bicovWeights} \alias{bicovWeightFactors} \alias{bicovWeightsFromFactors} \title{ Weights used in biweight midcovariance } \description{ Calculation of weights and the intermediate weight factors used in the calculation of biweight midcovariance and midcorrelation. The weights are designed such that outliers get smaller weights; the weights become zero for data points more than 9 median absolute deviations from the median. } \usage{ bicovWeights( x, pearsonFallback = TRUE, maxPOutliers = 1, outlierReferenceWeight = 0.5625, defaultWeight = 0) bicovWeightFactors( x, pearsonFallback = TRUE, maxPOutliers = 1, outlierReferenceWeight = 0.5625, defaultFactor = NA) bicovWeightsFromFactors( u, defaultWeight = 0) } \arguments{ \item{x}{ A vector or a two-dimensional array (matrix or data frame). If two-dimensional, the weights will be calculated separately on each column. } \item{u}{ A vector or matrix of weight factors, usually calculated by \code{bicovWeightFactors}.
} \item{pearsonFallback}{ Logical: if the median absolute deviation is zero, should standard deviation be substituted? } \item{maxPOutliers}{ Optional specification of the maximum proportion of outliers, i.e., data with weights equal to \code{outlierReferenceWeight} below. } \item{outlierReferenceWeight}{A number between 0 and 1 specifying what is to be considered an outlier when calculating the proportion of outliers.} \item{defaultWeight}{Value used for weights that correspond to a finite \code{x} but for which the weights themselves would not be finite, for example, when a column in \code{x} is constant.} \item{defaultFactor}{Value used for factors that correspond to a finite \code{x} but for which the factors themselves would not be finite, for example, when a column in \code{x} is constant.} } \details{ These functions are based on Equations (1) and (3) in Langfelder and Horvath (2012). The weight factor is denoted \code{u} in that article. Langfelder and Horvath (2012) also describe the Pearson fallback and maximum proportion of outliers in detail. For a full discussion of the biweight midcovariance and midcorrelation, see Wilcox (2005). } \value{ A vector or matrix of the same dimensions as the input \code{x} giving the bisquare weights (\code{bicovWeights} and \code{bicovWeightsFromFactors}) or the bisquare factors (\code{bicovWeightFactors}). } \references{ Langfelder P, Horvath S (2012) Fast R Functions for Robust Correlations and Hierarchical Clustering. Journal of Statistical Software 46(11) 1-17. PMID: 23050260 PMCID: PMC3465711 Wilcox RR (2005). Introduction to Robust Estimation and Hypothesis Testing. 2nd edition. Academic Press, Section 9.3.8, page 399 as well as Section 3.12.1, page 83.
} \author{ Peter Langfelder } \seealso{ \code{\link{bicor}} } \examples{ x = rnorm(100); x[1] = 10; plot(x, bicovWeights(x)); } \keyword{misc} WGCNA/man/propVarExplained.Rd0000644000176200001440000000313014012015545015430 0ustar liggesusers\name{propVarExplained} \alias{propVarExplained} \title{ Proportion of variance explained by eigengenes. } \description{ This function calculates the proportion of variance of genes in each module explained by the respective module eigengene. } \usage{ propVarExplained(datExpr, colors, MEs, corFnc = "cor", corOptions = "use = 'p'") } \arguments{ \item{datExpr}{ expression data. A data frame in which columns are genes and rows are samples. NAs are allowed and will be ignored. } \item{colors}{ a vector giving module assignment for genes given in \code{datExpr}. Unique values should correspond to the names of the eigengenes in \code{MEs}. } \item{MEs}{ a data frame of module eigengenes in which each column is an eigengene and each row corresponds to a sample. } \item{corFnc}{ character string containing the name of the function to calculate correlation. Suggested functions include \code{"cor"} and \code{"bicor"}. } \item{corOptions}{ further arguments to the correlation function. } } \details{ For compatibility with other functions, entries in \code{colors} are matched to a substring of \code{names(MEs)} starting at position 3. For example, the entry \code{"turquoise"} in \code{colors} will be matched to the eigengene named \code{"MEturquoise"}. The first two characters of the eigengene name are ignored and can be arbitrary. } \value{ A vector with one entry per eigengene containing the proportion of variance of the module explained by the eigengene.
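The name matching and the calculation can be illustrated with a minimal base-R sketch. This is illustrative only, not the package code: the eigengene is approximated here by a simple row mean, whereas the actual function uses the supplied \code{corFnc} and handles missing data:

```r
# Sketch: the proportion of variance explained for a module is the mean
# squared correlation between the module's genes and its eigengene.
# Note how the module label is matched to the eigengene name from
# position 3 onward ("MEblue" -> "blue").
set.seed(2)
datExpr <- matrix(rnorm(50 * 6), 50, 6)
colors  <- rep(c("blue", "turquoise"), each = 3)
MEs <- data.frame(MEblue      = rowMeans(datExpr[, colors == "blue"]),
                  MEturquoise = rowMeans(datExpr[, colors == "turquoise"]))
pve <- sapply(names(MEs), function(me) {
  mod <- substring(me, 3)                # drop the first two characters
  mean(cor(datExpr[, colors == mod], MEs[[me]])^2)
})
pve                                      # one entry per eigengene, in (0, 1]
```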
} \author{ Peter Langfelder } \seealso{ \code{\link{moduleEigengenes}} } \keyword{ misc } WGCNA/man/blockwiseIndividualTOMs.Rd0000644000176200001440000003214214022073754016717 0ustar liggesusers\name{blockwiseIndividualTOMs} \alias{blockwiseIndividualTOMs} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Calculation of block-wise topological overlaps } \description{ Calculates topological overlaps in the given (expression) data. If the number of variables (columns) in the input data is too large, the data is first split using pre-clustering, then topological overlaps are calculated in each block. } \usage{ blockwiseIndividualTOMs( multiExpr, multiWeights = NULL, # Data checking options checkMissingData = TRUE, # Blocking options blocks = NULL, maxBlockSize = 5000, blockSizePenaltyPower = 5, nPreclusteringCenters = NULL, randomSeed = 54321, # Network construction arguments: correlation options corType = "pearson", maxPOutliers = 1, quickCor = 0, pearsonFallback = "individual", cosineCorrelation = FALSE, # Adjacency function options power = 6, networkType = "unsigned", checkPower = TRUE, replaceMissingAdjacencies = FALSE, # Topological overlap options TOMType = "unsigned", TOMDenom = "min", suppressTOMForZeroAdjacencies = FALSE, suppressNegativeTOM = FALSE, # Save individual TOMs? If not, they will be returned in the session. saveTOMs = TRUE, individualTOMFileNames = "individualTOM-Set\%s-Block\%b.RData", # General options nThreads = 0, useInternalMatrixAlgebra = FALSE, verbose = 2, indent = 0) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{multiExpr}{expression data in the multi-set format (see \code{\link{checkSets}}). A vector of lists, one per set. Each set must contain a component \code{data} that contains the expression data, with rows corresponding to samples and columns to genes or probes. } \item{multiWeights}{optional observation weights in the same format (and dimensions) as \code{multiExpr}. 
These weights are used in correlation calculations.} \item{checkMissingData}{logical: should data be checked for excessive numbers of missing entries in genes and samples, and for genes with zero variance? See details. } \item{blocks}{ optional specification of blocks in which hierarchical clustering and module detection should be performed. If given, must be a numeric vector with one entry per gene of \code{multiExpr} giving the number of the block to which the corresponding gene belongs. } \item{maxBlockSize}{ integer giving maximum block size for module detection. Ignored if \code{blocks} above is non-NULL. Otherwise, if the number of genes in \code{multiExpr} exceeds \code{maxBlockSize}, genes will be pre-clustered into blocks whose size should not exceed \code{maxBlockSize}. } \item{blockSizePenaltyPower}{number specifying how strongly blocks should be penalized for exceeding the maximum size. Set to a large number or \code{Inf} if not exceeding maximum block size is very important.} \item{nPreclusteringCenters}{number of centers for pre-clustering. Larger numbers typically result in better but slower pre-clustering. The default is \code{as.integer(min(nGenes/20, 100*nGenes/preferredSize))} and is an attempt to arrive at a reasonable number given the resources available. } \item{randomSeed}{ integer to be used as seed for the random number generator before the function starts. If a current seed exists, it is saved and restored upon exit. If \code{NULL} is given, the function will not save and restore the seed. } \item{corType}{ character string specifying the correlation to be used. Allowed values are (unique abbreviations of) \code{"pearson"} and \code{"bicor"}, corresponding to Pearson and biweight midcorrelation, respectively. Missing values are handled using the \code{pairwise.complete.obs} option. } \item{maxPOutliers}{ only used for \code{corType=="bicor"}.
Specifies the maximum percentile of data that can be considered outliers on either side of the median separately. For each side of the median, if a higher percentile than \code{maxPOutliers} is considered an outlier by the weight function based on \code{9*mad(x)}, the width of the weight function is increased such that the percentile of outliers on that side of the median equals \code{maxPOutliers}. Using \code{maxPOutliers=1} will effectively disable all weight function broadening; using \code{maxPOutliers=0} will give results that are quite similar (but not identical) to Pearson correlation. } \item{quickCor}{ real number between 0 and 1 that controls the handling of missing data in the calculation of correlations. See details. } \item{pearsonFallback}{Specifies whether the bicor calculation, if used, should revert to Pearson when median absolute deviation (mad) is zero. Recognized values are (abbreviations of) \code{"none", "individual", "all"}. If set to \code{"none"}, zero mad will result in \code{NA} for the corresponding correlation. If set to \code{"individual"}, Pearson calculation will be used only for columns that have zero mad. If set to \code{"all"}, the presence of a single zero mad will cause the whole variable to be treated in Pearson correlation manner (as if the corresponding \code{robust} option was set to \code{FALSE}). Has no effect for Pearson correlation. See \code{\link{bicor}}.} \item{cosineCorrelation}{logical: should the cosine version of the correlation calculation be used? The cosine calculation differs from the standard one in that it does not subtract the mean. } \item{power}{ soft-thresholding power for network construction. Either a single number or a vector of the same length as the number of sets, with one power for each set.} \item{networkType}{ network type. Allowed values are (unique abbreviations of) \code{"unsigned"}, \code{"signed"}, \code{"signed hybrid"}. See \code{\link{adjacency}}.
} \item{checkPower}{ logical: should a basic sanity check be performed on the supplied \code{power}? If you would like to experiment with unusual powers, set the argument to \code{FALSE} and proceed with caution. } \item{replaceMissingAdjacencies}{logical: should missing values in the calculated adjacency be replaced by 0?} \item{TOMType}{ one of \code{"none"}, \code{"unsigned"}, \code{"signed"}, \code{"signed Nowick"}, \code{"unsigned 2"}, \code{"signed 2"} and \code{"signed Nowick 2"}. If \code{"none"}, adjacency will be used for clustering. See \code{\link{TOMsimilarityFromExpr}} for details.} \item{TOMDenom}{ a character string specifying the TOM variant to be used. Recognized values are \code{"min"}, giving the standard TOM described in Zhang and Horvath (2005), and \code{"mean"}, in which the \code{min} function in the denominator is replaced by \code{mean}. The \code{"mean"} variant may produce better results in certain special situations but at this time should be considered experimental.} %The default mean denominator %variant %is preferrable and we recommend using it unless the user needs to reproduce older results obtained using %the standard, minimum denominator TOM. } \item{suppressTOMForZeroAdjacencies}{Logical: should TOM be set to zero for zero adjacencies?} \item{suppressNegativeTOM}{Logical: should the result be set to zero when negative? Negative TOM values can occur when \code{TOMType} is \code{"signed Nowick"}.} \item{saveTOMs}{logical: should calculated TOMs be saved to disk (\code{TRUE}) or returned in the return value (\code{FALSE})? Returning calculated TOMs via the return value may be more convenient but not always feasible if the matrices are too big to fit all in memory at the same time. } \item{individualTOMFileNames}{character string giving the file names to save individual TOMs into.
The following tags should be used to make the file names unique for each set and block: \code{\%s} will be replaced by the set number; \code{\%N} will be replaced by the set name (taken from \code{names(multiExpr)}) if it exists, otherwise by set number; \code{\%b} will be replaced by the block number. If the file names turn out to be non-unique, an error will be generated.} \item{nThreads}{ non-negative integer specifying the number of parallel threads to be used by certain parts of correlation calculations. This option only has an effect on systems on which a POSIX thread library is available (which currently includes Linux and Mac OSX, but excludes Windows). If zero, the number of online processors will be used if it can be determined dynamically, otherwise correlation calculations will use 2 threads. } \item{useInternalMatrixAlgebra}{Logical: should WGCNA's own, slow, matrix multiplication be used instead of R-wide BLAS? Only useful for debugging.} \item{verbose}{ integer level of verbosity. Zero means silent, higher values make the output progressively more and more verbose. } \item{indent}{ indentation for diagnostic messages. Zero means no indentation, each unit adds two spaces. } } \details{ The function starts by optionally filtering out samples that have too many missing entries and genes that have either too many missing entries or zero variance in at least one set. Genes that are filtered out are excluded from the TOM calculations. If \code{blocks} is not given and the number of genes exceeds \code{maxBlockSize}, genes are pre-clustered into blocks using the function \code{\link{consensusProjectiveKMeans}}; otherwise all genes are treated in a single block. For each block of genes, the network is constructed and (if requested) topological overlap is calculated in each set. The topological overlaps can be saved to disk as RData files, or returned directly within the return value (see below). 
Note that the matrices can be big and returning them within the return value can quickly exhaust the system's memory. In particular, if the block-wise calculation is necessary, it is nearly certain that returning all matrices via the return value will be impossible. } \value{ A list with the following components: \item{actualTOMFileNames}{Only returned if input \code{saveTOMs} is \code{TRUE}. A matrix of character strings giving the file names in which each block TOM is saved. Rows correspond to data sets and columns to blocks.} \item{TOMSimilarities}{Only returned if input \code{saveTOMs} is \code{FALSE}. A list in which each component corresponds to one block. Each component is a matrix of dimensions (N times (number of sets)), where N is the length of a distance structure corresponding to the block. That is, if the block contains n genes, N=n*(n-1)/2. Each column of the matrix contains the topological overlap of variables in the corresponding set (and the corresponding block), arranged as a distance structure. Do note, however, that the topological overlap is a similarity (not a distance). } \item{blocks}{if input \code{blocks} was given, its copy; otherwise a vector of length equal to the number of genes giving the block label for each gene. Note that block labels are not necessarily sorted in the order in which the blocks were processed (since we do not require this for the input \code{blocks}). See \code{blockOrder} below. } \item{blockGenes}{a list with one component for each block of genes. Each component is a vector giving the indices (relative to the input \code{multiExpr}) of genes in the corresponding block. } \item{goodSamplesAndGenes}{if input \code{checkMissingData} is \code{TRUE}, the output of the function \code{\link{goodSamplesGenesMS}}.
A list with components \code{goodGenes} (logical vector indicating which genes passed the missing data filters), \code{goodSamples} (a list of logical vectors indicating which samples passed the missing data filters in each set), and \code{allOK} (a logical indicating whether all genes and all samples passed the filters). See \code{\link{goodSamplesGenesMS}} for more details. If \code{checkMissingData} is \code{FALSE}, \code{goodSamplesAndGenes} contains a list of the same type but indicating that all genes and all samples passed the missing data filters.} The following components are present mostly to streamline the interaction of this function with \code{\link{blockwiseConsensusModules}}. \item{nGGenes}{ Number of genes that passed missing data filters (if input \code{checkMissingData} is \code{TRUE}), or the number of all genes (if \code{checkMissingData} is \code{FALSE}).} \item{gBlocks}{ the vector \code{blocks} (above), restricted to good genes only. } \item{nThreads}{ number of threads used to calculate correlation and TOM matrices. } \item{saveTOMs}{ logical: were calculated matrices saved in files (\code{TRUE}) or returned in the return value (\code{FALSE})?} \item{intNetworkType, intCorType}{integer codes for network and correlation type. } \item{nSets}{number of sets in input data.} \item{setNames}{the \code{names} attribute of input \code{multiExpr}.} } \references{ For a general discussion of the weighted network formalism, see Bin Zhang and Steve Horvath (2005) "A General Framework for Weighted Gene Co-Expression Network Analysis", Statistical Applications in Genetics and Molecular Biology: Vol. 4: No. 1, Article 17 The blockwise approach is briefly described in the article describing this package, Langfelder P, Horvath S (2008) "WGCNA: an R package for weighted correlation network analysis". 
BMC Bioinformatics 2008, 9:559 } \author{ Peter Langfelder } \seealso{ \code{\link{blockwiseConsensusModules}} } \keyword{misc} WGCNA/man/simulateModule.Rd0000644000176200001440000000756114022073754015161 0ustar liggesusers\name{simulateModule} \alias{simulateModule} \title{ Simulate a gene co-expression module} \description{ Simulation of a single gene co-expression module. } \usage{ simulateModule( ME, nGenes, nNearGenes = 0, minCor = 0.3, maxCor = 1, corPower = 1, signed = FALSE, propNegativeCor = 0.3, geneMeans = NULL, verbose = 0, indent = 0) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{ME}{ seed module eigengene. } \item{nGenes}{ number of genes in the module to be simulated. Must be non-zero. } \item{nNearGenes}{ number of genes to be simulated with low correlation with the seed eigengene. } \item{minCor}{ minimum correlation of module genes with the eigengene. See details. } \item{maxCor}{ maximum correlation of module genes with the eigengene. See details. } \item{corPower}{ controls the dropoff of gene-eigengene correlation. See details. } \item{signed}{ logical: should the genes be simulated as belonging to a signed network? If \code{TRUE}, all genes will be simulated to have positive correlation with the eigengene. If \code{FALSE}, a proportion given by \code{propNegativeCor} will be simulated with negative correlations of the same absolute values. } \item{propNegativeCor}{ proportion of genes to be simulated with negative gene-eigengene correlations. Only effective if \code{signed} is \code{FALSE}. } \item{geneMeans}{ optional vector of length \code{nGenes} giving desired mean expression for each gene. If not given, the returned expression profiles will have mean zero. } \item{verbose}{ integer level of verbosity. Zero means silent, higher values make the output progressively more and more verbose. } \item{indent}{ indentation for diagnostic messages. Zero means no indentation, each unit adds two spaces. 
} } \details{ Module genes are simulated around the eigengene by choosing them such that their (expected) correlations with the seed eigengene decrease progressively from (just below) \code{maxCor} to \code{minCor}. The genes are otherwise independent from one another. The variable \code{corPower} determines how fast the correlation drops towards \code{minCor}. Higher powers lead to a faster drop-off; \code{corPower} must be above zero but need not be integer. If \code{signed} is \code{FALSE}, the genes are simulated so as to be part of an unsigned network module, that is, some genes will be simulated with a negative correlation with the seed eigengene (but of the same absolute value that a positively correlated gene would be simulated with). The proportion of genes with negative correlation is controlled by \code{propNegativeCor}. Optionally, the function can also simulate genes that are "near" the module, meaning they are simulated with a low but non-zero correlation with the seed eigengene. The correlations run between \code{minCor} and zero. } \value{ A matrix containing the expression data with rows corresponding to samples and columns to genes. } \references{ A short description of the simulation method can also be found in the Supplementary Material to the article Langfelder P, Horvath S (2007) Eigengene networks for studying the relationships between co-expression modules. BMC Systems Biology 2007, 1:54. The material is posted at http://horvath.genetics.ucla.edu/html/CoexpressionNetwork/EigengeneNetwork/SupplementSimulations.pdf. } \author{ Peter Langfelder } \seealso{ \code{\link{simulateEigengeneNetwork}} for a simulation of eigengenes with a given causal structure; \code{\link{simulateDatExpr}} for simulations of whole datasets consisting of multiple modules; \code{\link{simulateDatExpr5Modules}} for a simplified interface to expression simulations; \code{\link{simulateMultiExpr}} for a simulation of several related data sets.
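The correlation drop-off described above can be sketched in a few lines of base R. This is a hypothetical illustration of the simulation idea (all variable names here are illustrative), not the package's actual algorithm:

```r
# Sketch: simulate nGenes around a seed eigengene so that expected
# gene-eigengene correlations decrease from maxCor toward minCor.
set.seed(3)
ME <- rnorm(50)                           # seed eigengene, 50 samples
nGenes <- 20; minCor <- 0.3; maxCor <- 1; corPower <- 1
r <- minCor + (maxCor - minCor) * ((nGenes:1) / nGenes)^corPower
z <- as.numeric(scale(ME))                # standardized eigengene
genes <- sapply(r, function(ri) ri * z + sqrt(1 - ri^2) * rnorm(50))
dim(genes)                                # 50 samples x 20 genes
```

Each simulated gene is a mixture of the standardized eigengene and independent noise, weighted so that its expected correlation with the eigengene equals the prescribed value.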
} \keyword{ misc }% __ONLY ONE__ keyword per line WGCNA/man/collapseRowsUsingKME.Rd0000644000176200001440000000456414012015545016201 0ustar liggesusers\name{collapseRowsUsingKME} \alias{collapseRowsUsingKME} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Selects one representative row per group based on kME } \description{ This function selects only the most informative probe for each gene in a kME table, only keeping the probe which has the highest kME with respect to any module in the module membership matrix. This function is a special case of the function collapseRows. } \usage{ collapseRowsUsingKME(MM, Gin, Pin = NULL, kMEcols = 1:dim(MM)[2]) } \arguments{ \item{MM}{ A module membership (kME) table with at least a subset of the columns corresponding to kME values. } \item{Gin}{ Genes labels in a 1 to 1 correspondence with the rows of MM. } \item{Pin}{ If NULL (default), rownames of MM are assumed to be probe IDs. If entered, Pin must be the same length as Gin and correspond to probe IDs for MM. } \item{kMEcols}{ A numeric vector showing which columns in MM correspond to kME values. The default is all of them. } } \value{ \item{datETcollapsed}{ A numeric matrix with the same columns as the input matrix MM, but with rows corresponding to the genes rather than the probes. } \item{group2row}{ A matrix whose rows correspond to the unique gene labels and whose 2 columns report which gene label (first column called group) is represented by what probe (second column called selectedRowID) } \item{selectedRow}{ A logical vector whose components are TRUE for probes selected as representatives and FALSE otherwise. It has the same length as the vector Pin. 
} } \author{ Jeremy Miller } \seealso{ \code{\link{collapseRows}} } \examples{ # Example: first simulate some data set.seed(100) ME.A = sample(1:100,50); ME.B = sample(1:100,50) ME.C = sample(1:100,50); ME.D = sample(1:100,50) ME1 = data.frame(ME.A, ME.B, ME.C, ME.D) simDatA = simulateDatExpr(ME1,1000,c(0.2,0.1,0.08,0.05,0.3), signed=TRUE) simDatB = simulateDatExpr(ME1,1000,c(0.2,0.1,0.08,0.05,0.3), signed=TRUE) Gin = c(colnames(simDatA$datExpr),colnames(simDatB$datExpr)) Pin = paste("Probe",1:length(Gin),sep=".") datExpr = cbind(simDatA$datExpr, simDatB$datExpr) MM = corAndPvalue(datExpr,ME1)$cor # Now run the function and see some example output results = collapseRowsUsingKME(MM, Gin, Pin) head(results$MMcollapsed) head(results$group2Row) head(results$selectedRow) } \keyword{misc }% __ONLY ONE__ keyword per line WGCNA/man/coClustering.Rd0000644000176200001440000000453314012015545014616 0ustar liggesusers\name{coClustering} \alias{coClustering} \title{ Co-clustering measure of cluster preservation between two clusterings } \description{ The function calculates the co-clustering statistics for each module in the reference clustering. } \usage{ coClustering(clusters.ref, clusters.test, tupletSize = 2, unassignedLabel = 0) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{clusters.ref}{ Reference input clustering. A vector in which each element gives the cluster label of an object. } \item{clusters.test}{ Test input clustering. Must be a vector of the same size as \code{clusters.ref}. } \item{tupletSize}{ Co-clustering tuplet size. } \item{unassignedLabel}{ Optional specification of a clustering label that denotes unassigned objects. Objects with this label are excluded from the calculation.
} } \details{ Co-clustering of cluster q in the reference clustering and cluster q' in the test clustering measures the overlap of clusters q and q' by the number of tuplets that can be chosen from the overlap of clusters q and q' relative to the number of tuplets in cluster q. To arrive at a co-clustering measure for cluster q, we sum the co-clustering of q and q' over all clusters q' in the test clustering. A value close to 1 indicates high preservation of the reference cluster in the test clustering, while a value close to zero indicates a low preservation. } \value{ A vector in which each component corresponds to a cluster in the reference clustering. Entries give the co-clustering measure of cluster preservation. } \references{ For example, see Langfelder P, Luo R, Oldham MC, Horvath S (2011) Is My Network Module Preserved and Reproducible? PLoS Comput Biol 7(1): e1001057. Co-clustering is discussed in the Methods Supplement (Supplementary text 1) of that article. } \author{ Peter Langfelder } \seealso{ \code{\link{modulePreservation}} for a large suite of module preservation statistics \code{\link{coClustering.permutationTest}} for a permutation test for co-clustering significance } \examples{ # An example with random (unrelated) clusters: set.seed(1); nModules = 10; nGenes = 1000; cl1 = sample(c(1:nModules), nGenes, replace = TRUE); cl2 = sample(c(1:nModules), nGenes, replace = TRUE); coClustering(cl1, cl2) # For the same reference and test clustering: coClustering(cl1, cl1) } \keyword{misc} WGCNA/man/newBlockInformation.Rd0000644000176200001440000000264714012015545016133 0ustar liggesusers\name{newBlockInformation} \alias{newBlockInformation} \alias{BlockInformation} %- Also NEED an '\alias' for EACH other topic documented here. 
\title{ Create a list holding information about dividing data into blocks } \description{ This function creates a list storing information about dividing data into blocks, as well as about possibly excluding genes or samples with excessive numbers of missing data. } \usage{ newBlockInformation(blocks, goodSamplesAndGenes) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{blocks}{ A vector giving block labels. It is assumed to be a numeric vector whose block labels are consecutive integers starting at 1. } \item{goodSamplesAndGenes}{ A list returned by \code{\link{goodSamplesGenes}} or \code{\link{goodSamplesGenesMS}}. } } \value{ A list with \code{class} attribute set to \code{BlockInformation}, with the following components: \item{blocks}{A copy of the input \code{blocks}.} \item{blockGenes}{A list with one component per block, giving the indices of elements in \code{blocks} whose value is the same.} \item{goodSamplesAndGenes}{A copy of input \code{goodSamplesAndGenes}.} \item{nGGenes}{Number of `good' genes in \code{goodSamplesAndGenes}.} \item{gBlocks}{The input \code{blocks} restricted to `good' genes in \code{goodSamplesAndGenes}.} } \author{ Peter Langfelder } \seealso{ \code{\link{goodSamplesGenes}}, \code{\link{goodSamplesGenesMS}}. } \keyword{misc} WGCNA/man/transposeBigData.Rd0000644000176200001440000000372414012015545015410 0ustar liggesusers\name{transposeBigData} \alias{transposeBigData} %- Also NEED an '\alias' for EACH other topic documented here. \title{Transpose a big matrix or data frame } \description{ This transpose command partitions a big matrix (or data frame) into blocks and applies the t() function to each block separately. } \usage{ transposeBigData(x, blocksize = 20000) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x}{a matrix or data frame } \item{blocksize}{a positive integer larger than 1, which determines the block size. The default is 20,000.
} } \details{ Assume you have a very large matrix with say 500k columns. In this case, the standard transpose function of R \code{t()} can take a long time. Solution: Split the original matrix into sub-matrices by dividing the columns into blocks. Next apply \code{t()} to each sub-matrix. The same holds if the large matrix contains a large number of rows. The function \code{transposeBigData} automatically checks whether the large matrix contains more rows or more columns. If the number of columns is larger than or equal to the number of rows, then the block-wise splitting will be applied to the columns, otherwise to the rows. } \value{A matrix or data frame (depending on the input \code{x}) which is the transpose of \code{x}. %% ~Describe the value returned %% If it is a LIST, use %% \item{comp1 }{Description of 'comp1'} %% \item{comp2 }{Description of 'comp2'} %% ... } \references{Any linear algebra book will explain the transpose. } \author{ Steve Horvath, UCLA } \note{ This function can be considered a wrapper of \code{\link{t}()} } \seealso{ The standard function \code{\link{t}} . } \examples{ x=data.frame(matrix(1:10000,nrow=4,ncol=2500)) dimnames(x)[[2]]=paste("Y",1:2500,sep="") xTranspose=transposeBigData(x) x[1:4,1:4] xTranspose[1:4,1:4] } % Add one or more standard keywords, see file 'KEYWORDS' in the % R documentation directory. \keyword{ misc } WGCNA/man/mtd.subset.Rd0000644000176200001440000000525414012015545014246 0ustar liggesusers\name{mtd.subset} \alias{mtd.subset} \title{ Subset rows and columns in a multiData structure } \description{ The function restricts each \code{data} component to the given columns and rows. } \usage{ mtd.subset( # Input multiData, # Rows and columns to keep rowIndex = NULL, colIndex = NULL, invert = FALSE, # Strict or permissive checking of structure? permissive = FALSE, # Output formatting options drop = FALSE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{multiData}{ A multiData structure. 
} \item{rowIndex}{ A list in which each component corresponds to a set and is a vector giving the rows to be retained in that set. All indexing methods recognized by R can be used (numeric, logical, negative indexing, etc). If \code{NULL}, all rows will be retained in each set. Note that setting individual elements of \code{rowIndex} to \code{NULL} will lead to errors. } \item{colIndex}{ A vector giving the columns to be retained. All indexing methods recognized by R can be used (numeric, logical, negative indexing, etc). In addition, column names of the retained columns may be given; if a given name cannot be matched to a column, an error will be thrown. If \code{NULL}, all columns will be retained. } \item{invert}{Logical: should the selection be inverted?} \item{permissive}{Logical: should the function tolerate "loose" \code{multiData} input? Note that the subsetting may lead to cryptic errors if the input \code{multiData} does not follow the "strict" format. } \item{drop}{Logical: should dimensions with extent 1 be dropped? } } \details{ A multiData structure is intended to store (the same type of) data for multiple, possibly independent, realizations (for example, expression data for several independent experiments). It is a list where each component corresponds to an (independent) data set. Each component is in turn a list that can hold various types of information but must have a \code{data} component. In a "strict" multiData structure, the \code{data} components are required to each be a matrix or a data frame and have the same number of columns. In a "loose" multiData structure, the \code{data} components can be anything (but for most purposes should be of comparable type and content). This function assumes a "strict" multiData structure unless \code{permissive} is \code{TRUE}. } \value{ A multiData structure containing the selected rows and columns. Attributes (except possibly dimensions and the corresponding dimnames) are retained. 
} \author{ Peter Langfelder } \seealso{ \code{\link{multiData}} to create a multiData structure. } % Add one or more standard keywords, see file 'KEYWORDS' in the \keyword{ misc} WGCNA/man/goodSamplesGenesMS.Rd0000644000176200001440000000736114012015545015656 0ustar liggesusers\name{goodSamplesGenesMS} \alias{goodSamplesGenesMS} \title{ Iterative filtering of samples and genes with too many missing entries across multiple data sets } \description{ This function checks data for missing entries and zero variance across multiple data sets and returns a list of samples and genes that pass criteria on the maximum number of missing values. If weights are given, entries whose relative weight is below a threshold will be considered missing. The filtering is iterated until convergence. } \usage{ goodSamplesGenesMS( multiExpr, multiWeights = NULL, minFraction = 1/2, minNSamples = ..minNSamples, minNGenes = ..minNGenes, tol = NULL, minRelativeWeight = 0.1, verbose = 2, indent = 0) } \arguments{ \item{multiExpr}{ expression data in the multi-set format (see \code{\link{checkSets}}). A vector of lists, one per set. Each set must contain a component \code{data} that contains the expression data, with rows corresponding to samples and columns to genes or probes. } \item{multiWeights}{ optional observation weights in the same format (and dimensions) as \code{multiExpr}.} \item{minFraction}{ minimum fraction of non-missing samples for a gene to be considered good. } \item{minNSamples}{ minimum number of non-missing samples for a gene to be considered good. } \item{minNGenes}{ minimum number of good genes for the data set to be considered fit for analysis. If the actual number of good genes falls below this threshold, an error will be issued. } \item{tol}{ an optional 'small' number to compare the variance against. For each set in \code{multiExpr}, the default value is \code{1e-10 * max(abs(multiExpr[[set]]$data), na.rm = TRUE)}. 
The reason for comparing the variance to this number, rather than zero, is that the fast way of computing variance used by this function sometimes causes small numerical overflow errors which make variance of constant vectors slightly non-zero; comparing the variance to \code{tol} rather than zero prevents such genes from being retained as 'good genes'.} \item{minRelativeWeight}{ observations whose relative weight is below this threshold will be considered missing. Here relative weight is weight divided by the maximum weight in the column (gene).} \item{verbose}{ integer level of verbosity. Zero means silent, higher values make the output progressively more and more verbose. } \item{indent}{ indentation for diagnostic messages. Zero means no indentation, each unit adds two spaces. } } \details{ This function iteratively identifies samples and genes with too many missing entries, and genes with zero variance; iterations are necessary since excluding samples effectively changes criteria on genes and vice versa. The process is repeated until the lists of good samples and genes are stable. If weights are given, entries whose relative weight (i.e., weight divided by maximum weight in the column or gene) is below a threshold will be considered missing. The constants \code{..minNSamples} and \code{..minNGenes} are both set to the value 4. } \value{ A list with the following components: \item{goodSamples}{ A list with one component per given set. Each component is a logical vector with one entry per sample in the corresponding set that is \code{TRUE} if the sample is considered good and \code{FALSE} otherwise. } \item{goodGenes}{ A logical vector with one entry per gene that is \code{TRUE} if the gene is considered good and \code{FALSE} otherwise. 
} } \author{ Peter Langfelder } \seealso{ \code{\link{goodGenes}}, \code{\link{goodSamples}}, \code{\link{goodSamplesGenes}} for cleaning individual sets separately; \code{\link{goodSamplesMS}}, \code{\link{goodGenesMS}} for additional cleaning of multiple data sets together. } \keyword{ misc }% __ONLY ONE__ keyword per line WGCNA/man/corPvalueStudent.Rd0000644000176200001440000000124214012015545015456 0ustar liggesusers\name{corPvalueStudent} \alias{corPvalueStudent} \title{Student asymptotic p-value for correlation} \description{ Calculates Student asymptotic p-value for given correlations. } \usage{ corPvalueStudent(cor, nSamples) } \arguments{ \item{cor}{ A vector of correlation values whose corresponding p-values are to be calculated } \item{nSamples}{ Number of samples from which the correlations were calculated } } \value{ A vector of p-values of the same length as the input correlations. } \author{ Steve Horvath and Peter Langfelder } % Add one or more standard keywords, see file 'KEYWORDS' in the % R documentation directory. \keyword{ misc } WGCNA/man/simulateDatExpr5Modules.Rd0000644000176200001440000000460314012015545016704 0ustar liggesusers\name{simulateDatExpr5Modules} \alias{simulateDatExpr5Modules} \title{ Simplified simulation of expression data} \description{ This function provides a simplified interface to the expression data simulation, at the cost of considerably less flexibility. } \usage{ simulateDatExpr5Modules( nGenes = 2000, colorLabels = c("turquoise", "blue", "brown", "yellow", "green"), simulateProportions = c(0.1, 0.08, 0.06, 0.04, 0.02), MEturquoise, MEblue, MEbrown, MEyellow, MEgreen, SDnoise = 1, backgroundCor = 0.3) } \arguments{ \item{nGenes}{ total number of genes to be simulated. } \item{colorLabels}{ labels for simulated modules. } \item{simulateProportions}{ a vector of length 5 giving proportions of the total number of genes to be placed in each individual module. The entries must be positive and sum to at most 1. 
If the sum is less than 1, the leftover genes will be simulated outside of modules. } \item{MEturquoise}{ seed module eigengene for the first module. } \item{MEblue}{ seed module eigengene for the second module. } \item{MEbrown}{ seed module eigengene for the third module. } \item{MEyellow}{ seed module eigengene for the fourth module. } \item{MEgreen}{ seed module eigengene for the fifth module. } \item{SDnoise}{ level of noise to be added to the simulated expressions. } \item{backgroundCor}{ background correlation. If non-zero, a component will be added to all genes such that the average correlation of otherwise unrelated genes will be \code{backgroundCor}. } } \details{ Roughly one-third of the genes are simulated with a negative correlation to their seed eigengene. See the functions \code{\link{simulateModule}} and \code{\link{simulateDatExpr}} for more details. } \value{ A list with the following components: \item{datExpr}{ the simulated expression data in a data frame, with rows corresponding to samples and columns to genes. } \item{truemodule}{ a vector with one entry per gene containing the simulated module membership. } \item{datME}{a data frame containing a copy of the input module eigengenes. } } \author{ Steve Horvath and Peter Langfelder } \seealso{ \code{\link{simulateModule}} for simulation of individual modules; \code{\link{simulateDatExpr}} for a more comprehensive data simulation interface. } \keyword{ misc }% __ONLY ONE__ keyword per line WGCNA/man/recutConsensusTrees.Rd0000644000176200001440000002603214012015545016201 0ustar liggesusers\name{recutConsensusTrees} \alias{recutConsensusTrees} \title{ Repeat blockwise consensus module detection from pre-calculated data } \description{ Given consensus networks constructed, for example, using \code{\link{blockwiseConsensusModules}}, this function (re-)detects modules in them by branch cutting of the corresponding dendrograms. 
If repeated branch cuts of the same gene network dendrograms are desired, this function can save substantial time by re-using already calculated networks and dendrograms. } \usage{ recutConsensusTrees( multiExpr, goodSamples, goodGenes, blocks, TOMFiles, dendrograms, corType = "pearson", networkType = "unsigned", deepSplit = 2, detectCutHeight = 0.995, minModuleSize = 20, checkMinModuleSize = TRUE, maxCoreScatter = NULL, minGap = NULL, maxAbsCoreScatter = NULL, minAbsGap = NULL, minSplitHeight = NULL, minAbsSplitHeight = NULL, useBranchEigennodeDissim = FALSE, minBranchEigennodeDissim = mergeCutHeight, pamStage = TRUE, pamRespectsDendro = TRUE, trimmingConsensusQuantile = 0, minCoreKME = 0.5, minCoreKMESize = minModuleSize/3, minKMEtoStay = 0.2, reassignThresholdPS = 1e-4, mergeCutHeight = 0.15, mergeConsensusQuantile = trimmingConsensusQuantile, impute = TRUE, trapErrors = FALSE, numericLabels = FALSE, verbose = 2, indent = 0) } \arguments{ \item{multiExpr}{ expression data in the multi-set format (see \code{\link{checkSets}}). A vector of lists, one per set. Each set must contain a component \code{data} that contains the expression data, with rows corresponding to samples and columns to genes or probes. } \item{goodSamples}{ a list with one component per set. Each component is a logical vector specifying which samples are considered "good" for the analysis. See \code{\link{goodSamplesGenesMS}}. } \item{goodGenes}{ a logical vector with length equal number of genes in \code{multiExpr} that specifies which genes are considered "good" for the analysis. See \code{\link{goodSamplesGenesMS}}. } \item{blocks}{ specification of blocks in which hierarchical clustering and module detection should be performed. A numeric vector with one entry per gene of \code{multiExpr} giving the number of the block to which the corresponding gene belongs. } \item{TOMFiles}{ a vector of character strings specifying file names in which the block-wise topological overlaps are saved. 
} \item{dendrograms}{ a list of length equal to the number of blocks, in which each component is a hierarchical clustering dendrogram of the genes that belong to the block. } \item{corType}{ character string specifying the correlation to be used. Allowed values are (unique abbreviations of) \code{"pearson"} and \code{"bicor"}, corresponding to Pearson and biweight midcorrelation, respectively. Missing values are handled using the \code{pairwise.complete.obs} option. } \item{networkType}{ network type. Allowed values are (unique abbreviations of) \code{"unsigned"}, \code{"signed"}, \code{"signed hybrid"}. See \code{\link{adjacency}}. Note that while no new networks are computed in this function, this parameter affects the interpretation of correlations in this function. } \item{deepSplit}{ integer value between 0 and 4. Provides a simplified control over how sensitive module detection should be to module splitting, with 0 least and 4 most sensitive. See \code{\link[dynamicTreeCut]{cutreeDynamic}} for more details. } \item{detectCutHeight}{ dendrogram cut height for module detection. See \code{\link[dynamicTreeCut]{cutreeDynamic}} for more details. } \item{minModuleSize}{ minimum module size for module detection. See \code{\link[dynamicTreeCut]{cutreeDynamic}} for more details. } \item{checkMinModuleSize}{ logical: should sanity checks be performed on \code{minModuleSize}?} \item{maxCoreScatter}{ maximum scatter of the core for a branch to be a cluster, given as the fraction of \code{cutHeight} relative to the 5th percentile of joining heights. See \code{\link[dynamicTreeCut]{cutreeDynamic}} for more details. } \item{minGap}{ minimum cluster gap given as the fraction of the difference between \code{cutHeight} and the 5th percentile of joining heights. See \code{\link[dynamicTreeCut]{cutreeDynamic}} for more details. } \item{maxAbsCoreScatter}{ maximum scatter of the core for a branch to be a cluster given as absolute heights. If given, overrides \code{maxCoreScatter}. 
See \code{\link[dynamicTreeCut]{cutreeDynamic}} for more details. } \item{minAbsGap}{ minimum cluster gap given as absolute height difference. If given, overrides \code{minGap}. See \code{\link[dynamicTreeCut]{cutreeDynamic}} for more details. } \item{minSplitHeight}{Minimum split height given as the fraction of the difference between \code{cutHeight} and the 5th percentile of joining heights. Branches merging below this height will automatically be merged. Defaults to zero but is used only if \code{minAbsSplitHeight} below is \code{NULL}.} \item{minAbsSplitHeight}{Minimum split height given as an absolute height. Branches merging below this height will automatically be merged. If not given (default), will be determined from \code{minSplitHeight} above.} \item{useBranchEigennodeDissim}{Logical: should branch eigennode (eigengene) dissimilarity be considered when merging branches in Dynamic Tree Cut?} \item{minBranchEigennodeDissim}{Minimum consensus branch eigennode (eigengene) dissimilarity for branches to be considered separate. The branch eigennode dissimilarity in individual sets is simply 1-correlation of the eigennodes; the consensus is defined as the quantile with probability \code{consensusQuantile}.} \item{pamStage}{ logical. If TRUE, the second (PAM-like) stage of module detection will be performed. See \code{\link[dynamicTreeCut]{cutreeDynamic}} for more details. } \item{pamRespectsDendro}{Logical, only used when \code{pamStage} is \code{TRUE}. If \code{TRUE}, the PAM stage will respect the dendrogram in the sense that an object can be PAM-assigned only to clusters that lie below it on the branch that the object is merged into. See \code{\link[dynamicTreeCut]{cutreeDynamic}} for more details. } \item{trimmingConsensusQuantile}{a number between 0 and 1 specifying the consensus quantile used for kME calculation that determines module trimming according to the arguments below.} \item{minCoreKME}{ a number between 0 and 1. 
If a detected module does not have at least \code{minCoreKMESize} genes with eigengene connectivity at least \code{minCoreKME}, the module is disbanded (its genes are unlabeled and returned to the pool of genes waiting for module detection). } \item{minCoreKMESize}{ see \code{minCoreKME} above. } \item{minKMEtoStay}{ genes whose eigengene connectivity to their module eigengene is lower than \code{minKMEtoStay} are removed from the module.} \item{reassignThresholdPS}{ per-set p-value ratio threshold for reassigning genes between modules. See Details. } \item{mergeCutHeight}{ dendrogram cut height for module merging. } \item{mergeConsensusQuantile}{consensus quantile for module merging. See \code{mergeCloseModules} for details. } \item{impute}{ logical: should imputation be used for module eigengene calculation? See \code{\link{moduleEigengenes}} for more details. } \item{trapErrors}{ logical: should errors in calculations be trapped? } \item{numericLabels}{ logical: should the returned modules be labeled by colors (\code{FALSE}), or by numbers (\code{TRUE})? } \item{verbose}{ integer level of verbosity. Zero means silent, higher values make the output progressively more and more verbose. } \item{indent}{ indentation for diagnostic messages. Zero means no indentation, each unit adds two spaces. } } \details{ For details on blockwise consensus module detection, see \code{\link{blockwiseConsensusModules}}. This function implements the module detection subset of the functionality of \code{\link{blockwiseConsensusModules}}; network construction and clustering must be performed in advance. The primary use of this function is to experiment with module detection settings without having to re-execute long network and clustering calculations whose results are not affected by the cutting parameters. This function takes as input the networks and dendrograms that are produced by \code{\link{blockwiseConsensusModules}}. 
Working block by block, modules are identified in the dendrograms by the Dynamic Hybrid tree cut. Found modules are trimmed of genes whose consensus module membership kME (that is, correlation with module eigengene) is less than \code{minKMEtoStay}. Modules in which fewer than \code{minCoreKMESize} genes have consensus KME higher than \code{minCoreKME} are disbanded, i.e., their constituent genes are pronounced unassigned. After all blocks have been processed, the function checks whether there are genes whose KME in the module they are assigned to is lower than their KME to another module. If p-values of the higher correlations are smaller than those of the native module by the factor \code{reassignThresholdPS} (in every set), the gene is re-assigned to the closer module. In the last step, modules whose eigengenes are highly correlated are merged. This is achieved by clustering module eigengenes using the dissimilarity given by one minus their correlation, cutting the dendrogram at the height \code{mergeCutHeight} and merging all modules on each branch. The process is iterated until no modules are merged. See \code{\link{mergeCloseModules}} for more details on module merging. } \value{ A list with the following components: \item{colors}{ module assignment of all input genes. A vector containing either character strings with module colors (if input \code{numericLabels} was unset) or numeric module labels (if \code{numericLabels} was set to \code{TRUE}). The color "grey" and the numeric label 0 are reserved for unassigned genes. } \item{unmergedColors }{ module colors or numeric labels before the module merging step. } \item{multiMEs}{ module eigengenes corresponding to the modules returned in \code{colors}, in multi-set format. A vector of lists, one per set, containing eigengenes, proportion of variance explained and other information. See \code{\link{multiSetMEs}} for a detailed description. 
} } \note{ Basic sanity checks are performed on given arguments, but it is the user's responsibility to provide valid input. } \references{ Langfelder P, Horvath S (2007) Eigengene networks for studying the relationships between co-expression modules. BMC Systems Biology 2007, 1:54 } \author{Peter Langfelder} \seealso{ \code{\link{blockwiseConsensusModules}} for the full blockwise modules calculation. Parts of its output are natural input for this function. \code{\link[dynamicTreeCut]{cutreeDynamic}} for adaptive branch cutting in hierarchical clustering dendrograms; \code{\link{mergeCloseModules}} for merging of close modules. } \keyword{ misc }% __ONLY ONE__ keyword per line WGCNA/man/labels2colors.Rd0000644000176200001440000000411714012015545014721 0ustar liggesusers\name{labels2colors} \alias{labels2colors} \title{Convert numerical labels to colors. } \description{ Converts a vector or array of numerical labels into a corresponding vector or array of colors. } \usage{ labels2colors(labels, zeroIsGrey = TRUE, colorSeq = NULL, naColor = "grey", commonColorCode = TRUE) } \arguments{ \item{labels}{Vector or matrix of non-negative integer or other (such as character) labels. See details.} \item{zeroIsGrey}{If TRUE, labels 0 will be assigned color grey. Otherwise, labels below 1 will trigger an error.} \item{colorSeq}{Color sequence corresponding to labels. If not given, a standard sequence will be used.} \item{naColor}{Color that will encode missing values. } \item{commonColorCode}{logical: if \code{labels} is a matrix, should each column have its own colors? } } \details{ If \code{labels} is numeric, it is used directly as an index to the standard color sequence. If 0 is present among the labels and \code{zeroIsGrey=TRUE}, labels 0 are given grey color. If \code{labels} is not numeric, its columns are turned into factors and the numeric representation of each factor is used to assign the corresponding colors. 
In this case \code{commonColorCode} governs whether each column gets its own color code, or whether the color code will be universal. The standard sequence starts with well-distinguishable colors, and after about 40 colors turns into a quasi-random sampling of all colors available in R with the exception of all shades of grey (and gray). If the input \code{labels} have a dimension attribute, it is copied into the output, meaning the dimensions of the returned value are the same as those of the input \code{labels}. } \value{ A vector or array of character strings of the same length or dimensions as \code{labels}. } \author{ Peter Langfelder, \email{Peter.Langfelder@gmail.com} } \examples{ labels = c(0:20); labels2colors(labels); labels = matrix(letters[1:9], 3,3); labels2colors(labels) # Note the difference when commonColorCode = FALSE labels2colors(labels, commonColorCode = FALSE) } \keyword{color} WGCNA/man/moduleMergeUsingKME.Rd0000644000176200001440000001261014361660376015777 0ustar liggesusers\name{moduleMergeUsingKME} \alias{moduleMergeUsingKME} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Merge modules and reassign genes using kME. } \description{ This function takes an expression data matrix (and other user-defined parameters), calculates the module membership (kME) values, and adjusts the module assignments, merging modules that are not sufficiently distinct and reassigning genes that were originally assigned suboptimally. } \usage{ moduleMergeUsingKME( datExpr, colorh, ME = NULL, threshPercent = 50, mergePercent = 25, reassignModules = TRUE, convertGrey = TRUE, omitColors = "grey", reassignScale = 1, threshNumber = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{datExpr}{ An expression data matrix, with samples as rows, genes (or probes) as columns. } \item{colorh}{ The color vector (module assignments) corresponding to the columns of datExpr. 
} \item{ME}{ Either NULL (default), at which point the module eigengenes will be calculated, or pre-calculated module eigengenes for each of the modules, with samples as rows (corresponding to datExpr), and modules corresponding to columns (column names MUST be module colors or module colors prefixed by "ME" or "PC"). } \item{threshPercent}{ Threshold percent of the number of genes in the module that should be included for the various analyses. For example, in a module with 200 genes, if threshPercent=50 (default), then 100 genes will be checked for reassignment and used to test whether two modules should be merged. See also threshNumber. } \item{mergePercent}{ If more than this percent of the genes above the threshold are in a module other than the assigned module, then these two modules will be merged. For example, if mergePercent=25 (default), and 70 out of 200 genes in the blue module were more highly correlated with the black module eigengene, then all genes in the blue module would be reassigned to the black module. } \item{reassignModules}{ If TRUE (default), genes are reassigned to the module with which they have the highest module membership (kME), but only if their kME is above the threshPercent (or threshNumber) threshold of that module. } \item{convertGrey}{ If TRUE (default), unassigned (grey) genes are assigned as in "reassignModules" } \item{omitColors}{ These are all of the module assignments which indicate genes that are not assigned to modules (default="grey"). These genes will all be assigned as "grey" by this function. } \item{reassignScale}{ A value between 0 and 1 (default) which determines how the threshPercent gets scaled for reassigning genes. Smaller values reassign more genes, but do not affect the merging process. } \item{threshNumber}{ Either NULL (default) or, if entered, every module is counted as having exactly threshNumber genes, and threshPercent is ignored. 
} } \value{ \item{moduleColors}{ The NEW color vector (module assignments) corresponding to the columns of datExpr, after module merging and reassignments. } \item{mergeLog}{ A log of the order in which modules were merged, for reference. } } \author{ Jeremy Miller } \note{ Note that this function should be considered "experimental" as it has only been beta tested. Please e-mail jeremyinla@gmail.com if you have any issues with the function. } \examples{ ## First simulate some data and the resulting network dendrogram set.seed(100) MEturquoise = sample(1:100,50) MEblue = sample(1:100,50) MEbrown = sample(1:100,50) MEyellow = sample(1:100,50) MEgreen = c(MEyellow[1:30], sample(1:100,20)) MEred = c(MEbrown [1:20], sample(1:100,30)) #MEblack = c(MEblue [1:25], sample(1:100,25)) ME = data.frame(MEturquoise, MEblue, MEbrown, MEyellow, MEgreen, MEred)#, MEblack) dat1 = simulateDatExpr(ME, 300, c(0.15,0.13,0.12,0.10,0.09,0.09,0.1), signed=TRUE) TOM1 = TOMsimilarityFromExpr(dat1$datExpr, networkType="signed", nThreads = 1) tree1 = fastcluster::hclust(as.dist(1-TOM1),method="average") ## Here is an example using different mergePercentages, # setting an inclusive threshPercent (91) colorh1 <- colorPlot <- labels2colors(dat1$allLabels) merges = c(65,40,20,5) for (m in merges) colorPlot = cbind(colorPlot, moduleMergeUsingKME(dat1$datExpr,colorh1, threshPercent=91, mergePercent=m)$moduleColors) plotDendroAndColors(tree1, colorPlot, c("ORIG",merges), dendroLabels=FALSE) ## Here is an example using a lower reassignScale (so that more genes get reassigned) colorh1 <- colorPlot <- labels2colors(dat1$allLabels) merges = c(65,40,20,5) for (m in merges) colorPlot = cbind(colorPlot, moduleMergeUsingKME(dat1$datExpr,colorh1,threshPercent=91, reassignScale=0.7, mergePercent=m)$moduleColors) plotDendroAndColors(tree1, colorPlot, c("ORIG",merges), dendroLabels=FALSE) ## Here is an example using a less-inclusive threshPercent (75), # little if 
anything is merged. colorh1 <- colorPlot <- labels2colors(dat1$allLabels) merges = c(65,40,20,5) for (m in merges) colorPlot = cbind(colorPlot, moduleMergeUsingKME(dat1$datExpr,colorh1, threshPercent=75, mergePercent=m)$moduleColors) plotDendroAndColors(tree1, colorPlot, c("ORIG",merges), dendroLabels=FALSE) # (Note that with real data, the default threshPercent=50 usually results # in some modules being merged) } \keyword{misc} WGCNA/man/moduleColor.getMEprefix.Rd0000644000176200001440000000203714012015545016654 0ustar liggesusers\name{moduleColor.getMEprefix} \alias{moduleColor.getMEprefix} \title{Get the prefix used to label module eigengenes.} \description{ Returns the prefix currently used to label module eigengenes. When returning module eigengenes in a dataframe, names of the corresponding columns will start with the given prefix. } \usage{ moduleColor.getMEprefix() } \details{ Returns the prefix used to label module eigengenes. When returning module eigengenes in a dataframe, names of the corresponding columns will consist of the corresponding color label preceded by the given prefix. For example, if the prefix is "PC" and the module is turquoise, the corresponding module eigengene will be labeled "PCturquoise". Most old code assumes "PC", but "ME" is more instructive and is used in some newer analyses. } \value{ A character string. } \note{ Currently the standard prefix is \code{"ME"} and there is no way to change it. } \author{ Peter Langfelder, \email{Peter.Langfelder@gmail.com} } \seealso{ \code{\link{moduleEigengenes}} } \keyword{misc} WGCNA/man/mtd.mapply.Rd0000644000176200001440000001112014012015545014230 0ustar liggesusers\name{mtd.mapply} \alias{mtd.mapply} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Apply a function to elements of given multiData structures. 
} \description{ Inspired by \code{\link{mapply}}, this function applies a given function to each \code{data} component in the input multiData arguments, and optionally simplifies the result to an array if possible. } \usage{ mtd.mapply( # What to do FUN, ..., MoreArgs = NULL, # How to interpret the input mdma.argIsMultiData = NULL, # Copy previously known results? mdmaExistingResults = NULL, mdmaUpdateIndex = NULL, # How to format output mdmaSimplify = FALSE, returnList = FALSE, # Options controlling internal behaviour mdma.doCollectGarbage = FALSE, mdmaVerbose = 0, mdmaIndent = 0) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{FUN}{Function to be applied. } \item{\dots}{ Arguments to be vectorized over. These can be multiData structures or simple vectors (e.g., lists). } \item{MoreArgs}{ A named list that specifies the scalar arguments (if any) to \code{FUN}. } \item{mdma.argIsMultiData}{ Optional specification whether arguments are multiData structures. A logical vector where each component corresponds to one entry of \code{...}. If not given, multiData status will be determined using \code{\link{isMultiData}} with argument \code{strict=FALSE}. } \item{mdmaExistingResults}{Optional list that contains previously calculated results. This can be useful if only a few sets in \code{multiData} have changed and recalculating the unchanged ones is computationally expensive. If not given, all calculations will be performed. If given, components of this list are copied into the output. See \code{mdmaUpdateIndex} for which components are re-calculated by default. } \item{mdmaUpdateIndex}{Optional specification of the sets in \code{multiData} for which the calculation should actually be carried out. This argument has an effect only if \code{mdmaExistingResults} is non-NULL. 
If the length of \code{mdmaExistingResults} (call the length `k') is less than the number of sets in \code{multiData}, the function assumes that the existing results correspond to the first `k' sets in \code{multiData} and the rest of the sets are automatically calculated, irrespective of the setting of \code{mdmaUpdateIndex}. The argument \code{mdmaUpdateIndex} can be used to specify re-calculation of some (or all) of the results that already exist in \code{mdmaExistingResults}. }
\item{mdmaSimplify}{ Logical: should simplification of the result to an array be attempted? The simplification is fragile and can produce unexpected errors; use the default \code{FALSE} if that happens. }
\item{returnList}{Logical: should the result be turned into a list (rather than a multiData structure)? Note that this is incompatible with simplification: if \code{mdmaSimplify} is \code{TRUE}, this argument is ignored.}
\item{mdma.doCollectGarbage}{ Should garbage collection be forced after each application of \code{FUN}? }
\item{mdmaVerbose}{Integer specifying whether progress diagnostics should be printed out. Zero means silent; increasing values lead to more diagnostic messages.}
\item{mdmaIndent}{Integer specifying the indentation of the printed progress messages. Each unit equals two spaces.}
}
\details{
A multiData structure is intended to store (the same type of) data for multiple, possibly independent, realizations (for example, expression data for several independent experiments). It is a list where each component corresponds to an (independent) data set. Each component is in turn a list that can hold various types of information but must have a \code{data} component. In a "strict" multiData structure, the \code{data} components are required to each be a matrix or a data frame and have the same number of columns. In a "loose" multiData structure, the \code{data} components can be anything (but for most purposes should be of comparable type and content).
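Since a multiData structure is just a nested R list of the kind described above, the core of what \code{mtd.mapply} does for a single argument can be sketched in a few lines of base R (an illustration only, not the package's implementation, which additionally handles vectorization over several arguments, simplification and caching of existing results):

```r
# A minimal "strict" multiData structure: one entry per data set, each with
# a `data` component that has the same number of columns (4 here).
set.seed(1)
mdx <- list(
  set1 = list(data = matrix(rnorm(20), nrow = 5, ncol = 4)),
  set2 = list(data = matrix(rnorm(40), nrow = 10, ncol = 4)))

# Conceptually, applying a function such as colMeans to this structure
# visits each `data` component and wraps the results in a new
# multiData-like structure:
res <- lapply(mdx, function(set) list(data = colMeans(set$data)))

# Each result again has a `data` component; here, 4 column means per set.
length(res$set1$data)
```

The real function returns such a structure unless \code{mdmaSimplify} successfully collapses it to an array.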
This function applies the function \code{FUN} to each \code{data} component of those arguments in \code{...} that are multiData structures in the "loose" sense, and to each component of those arguments in \code{...} that are not multiData structures.
}
\value{
A multiData structure containing (as the \code{data} components) the results of \code{FUN}. If simplification is successful, an array instead.
}
\author{
Peter Langfelder
}
\seealso{
\code{\link{multiData}} to create a multiData structure;
\code{\link{mtd.apply}} for application of a function to a single multiData structure.
}
\keyword{ misc }% __ONLY ONE__ keyword per line
WGCNA/man/initProgInd.Rd0000644000176200001440000000420614012015545014400 0ustar liggesusers\name{Inline display of progress}
\alias{initProgInd}
\alias{updateProgInd}
\title{ Inline display of progress }
\description{
These functions provide an inline display of progress.
}
\usage{
initProgInd(leadStr = "..", trailStr = "", quiet = !interactive())
updateProgInd(newFrac, progInd, quiet = !interactive())
}
\arguments{
\item{leadStr}{ character string that will be printed before the actual progress number. }
\item{trailStr}{ character string that will be printed after the actual progress number. }
\item{quiet}{ can be used to silence the indicator for non-interactive sessions whose output is typically redirected to a file. }
\item{newFrac}{ new fraction of progress to be displayed. }
\item{progInd}{ an object of class \code{progressIndicator} that encodes the previously printed message. }
}
\details{
A progress indicator is a simple inline display of progress intended to satisfy impatient users during lengthy operations. The function \code{initProgInd} initializes a progress indicator (at zero); \code{updateProgInd} updates it to a specified fraction. Note that excessive use of \code{updateProgInd} may lead to a performance penalty (see examples).
}
\value{
Both functions return an object of class \code{progressIndicator} that holds information on the last printed value and should be used for subsequent updates of the indicator.
}
\author{
Peter Langfelder
}
\examples{
max = 10;
prog = initProgInd("Counting: ", "done");
for (c in 1:max)
{
  Sys.sleep(0.10);
  prog = updateProgInd(c/max, prog);
}
printFlush("");

printFlush("Example 2:");
prog = initProgInd();
for (c in 1:max)
{
  Sys.sleep(0.10);
  prog = updateProgInd(c/max, prog);
}
printFlush("");

## Example of a significant slowdown:
## Without progress indicator:
system.time( {a = 0; for (i in 1:10000) a = a+i; } )
## With progress indicator, some 50 times slower:
system.time(
  {
    prog = initProgInd("Counting: ", "done");
    a = 0;
    for (i in 1:10000)
    {
      a = a+i;
      prog = updateProgInd(i/10000, prog);
    }
  })
}
\keyword{misc}% __ONLY ONE__ keyword per line
WGCNA/man/rgcolors.func.Rd0000644000176200001440000000211614361660376014753 0ustar liggesusers\name{rgcolors.func}
\alias{rgcolors.func}
\title{Red and Green Color Specification}
\description{
This function creates a vector of n ``contiguous'' colors, corresponding to n intensities (between 0 and 1) of the red, green and blue primaries, with the blue intensities set to zero. The values returned by \code{rgcolors.func} can be used with a \code{col=} specification in graphics functions or in \code{\link{par}}.
}
\usage{
rgcolors.func(n=50)
}
\arguments{
\item{n}{the number of colors (>= 1) to be used in the red and green palette. }
}
\value{a character vector of color names. Colors are specified directly in terms of their RGB components with a string of the form "#RRGGBB", where each of the pairs RR, GG, BB consists of two hexadecimal digits giving a value in the range 00 to FF.
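Such "#RRGGBB" strings can be built by hand; the following base-R sketch produces a red-to-green ramp of this form (an illustration only, not the exact palette computed by \code{rgcolors.func}):

```r
# Build n colors ramping red down and green up, with blue fixed at zero,
# as "#RRGGBB" hexadecimal strings (hypothetical helper, not package code).
redGreenSketch <- function(n = 50) {
  level <- round(seq(255, 0, length.out = n))
  sprintf("#%02X%02X00", level, rev(level))
}

redGreenSketch(3)  # "#FF0000" "#808000" "#00FF00"
```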
}
\author{
Sandrine Dudoit, \email{sandrine@stat.berkeley.edu} \cr
Jane Fridlyand, \email{janef@stat.berkeley.edu}
}
\seealso{\code{\link{plotCor}}, \code{\link{plotMat}}, \code{\link{colors}}, \code{\link{rgb}}, \code{\link{image}}.}
\examples{
rgcolors.func(n=5)
}
\keyword{color}
WGCNA/man/displayColors.Rd0000644000176200001440000000111514012015545014775 0ustar liggesusers\name{displayColors}
\alias{displayColors}
\title{ Show colors used to label modules }
\description{
The function plots a barplot using colors that label modules.
}
\usage{
displayColors(colors = NULL)
}
\arguments{
\item{colors}{colors to be displayed. Defaults to all colors available for module labeling. }
}
\details{
To see the first \code{n} colors, use argument \code{colors = standardColors(n)}.
}
\value{ None. }
\author{ Peter Langfelder }
\seealso{ \code{\link{standardColors}} }
\examples{
displayColors(standardColors(10))
}
\keyword{ misc }
WGCNA/man/conformityDecomposition.Rd0000644000176200001440000001603314533632240017106 0ustar liggesusers\name{conformityDecomposition}
\alias{conformityDecomposition}
\title{ Conformity and module based decomposition of a network adjacency matrix. }
\description{
The function calculates the conformity based approximation \code{A.CF} of an adjacency matrix and a factorizability measure \code{Factorizability}. If a module assignment \code{Cl} is provided, it also estimates a corresponding intermodular adjacency matrix. In this case, the function automatically carries out the module- and conformity based decomposition of the adjacency matrix described in Chapter 2 of Horvath (2011).
}
\usage{
conformityDecomposition(adj, Cl = NULL)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{adj}{a symmetric numeric matrix (or data frame) whose entries lie between 0 and 1. }
\item{Cl}{a vector (or factor variable) of length equal to the number of rows of \code{adj}. The variable assigns each network node (row of \code{adj}) to a module.
The entries of \code{Cl} could be integers or character strings. }
}
\details{We distinguish two situations, depending on whether or not \code{Cl} equals \code{NULL}.

1) Let us start out assuming that \code{Cl = NULL}. In this case, the function calculates the conformity vector for a general, possibly non-factorizable network \code{adj} by minimizing a quadratic (sums of squares) loss function. The conformity and factorizability for an adjacency matrix is defined in (Dong and Horvath 2007, Horvath and Dong 2008) but we briefly describe it in the following. A network is called exactly factorizable if the pairwise connection strength (adjacency) between 2 network nodes can be factored into node specific contributions, named node 'conformity', i.e. if \code{adj[i,j]=Conformity[i]*Conformity[j]}. The conformity turns out to be highly related to the network connectivity (aka degree). If \code{adj} is not exactly factorizable, then the function \code{conformityDecomposition} calculates a conformity vector of the exactly factorizable network that best approximates \code{adj}. The factorizability measure \code{Factorizability} is a number between 0 and 1. The higher \code{Factorizability}, the more factorizable is \code{adj}. Warning: the algorithm may only converge to a local optimum and it may not converge at all. Also see the notes below.

2) Let us now assume that \code{Cl} is not NULL, i.e. it specifies the module assignment of each node. Then the function calculates a module- and CF-based approximation of \code{adj} (explained in chapter 2 in Horvath 2011). In this case, the function calculates a conformity vector \code{Conformity} and a matrix \code{IntermodularAdjacency} such that \code{adj[i,j]} is approximately equal to \code{Conformity[i]*Conformity[j]*IntermodularAdjacency[module.index[i],module.index[j]]} where \code{module.index[i]} is the row of the matrix \code{IntermodularAdjacency} that corresponds to the module assigned to node i.
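The defining identity \code{adj[i,j]=Conformity[i]*Conformity[j]} for an exactly factorizable network can be checked directly in base R (a toy illustration of the definition, not the fitting algorithm used by the function):

```r
# An exactly factorizable network: the adjacency matrix is the outer
# product of a conformity vector with itself.
conf <- c(0.9, 0.6, 0.3, 0.8)
A <- outer(conf, conf)
diag(A) <- 1  # the diagonal is conventionally set to 1

# Off-diagonal entries factor into node-specific contributions:
A[1, 2] == conf[1] * conf[2]  # TRUE

# The conformity is closely related to connectivity (degree),
# here computed as row sums of A excluding the diagonal:
k <- rowSums(A) - 1
```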
To estimate \code{Conformity} and a matrix \code{IntermodularAdjacency}, the function attempts to minimize a quadratic loss function (sums of squares). Currently, the function only implements a heuristic algorithm for optimizing the objective function (chapter 2 of Horvath 2011). Another, more accurate Majorization Minorization (MM) algorithm for the decomposition is implemented in the function \code{propensityDecomposition} by Ranola et al (2011).
}
\value{
\item{A.CF}{a symmetric matrix that approximates the input matrix \code{adj}. Roughly speaking, the i,j-th element of the matrix equals \code{Conformity[i]*Conformity[j]*IntermodularAdjacency[module.index[i],module.index[j]]} where \code{module.index[i]} is the row of the matrix \code{IntermodularAdjacency} that corresponds to the module assigned to node i. }
\item{Conformity}{a numeric vector whose entries correspond to the rows of \code{adj}. If \code{Cl=NULL} then \code{Conformity[i]} is the conformity. If \code{Cl} is not NULL then \code{Conformity[i]} is the intramodular conformity with respect to the module that node i belongs to. }
\item{IntermodularAdjacency}{ a symmetric matrix (data frame) whose rows and columns correspond to the number of modules specified in \code{Cl}. Interpretation: it measures the similarity (adjacency) between the modules. In this case, the rows (and columns) of \code{IntermodularAdjacency} correspond to the entries of \code{Cl.level}. }
\item{Factorizability}{ is a number between 0 and 1. If \code{Cl=NULL} then it equals 1, if (and only if) \code{adj} is exactly factorizable. If \code{Cl} is a vector, then it measures how well the module- and CF based decomposition approximates \code{adj}. }
\item{Cl.level}{ is a vector of character strings which correspond to the factor levels of the module assignment \code{Cl}. Incidentally, the function automatically turns \code{Cl} into a factor variable.
The components of Conformity and \code{IntramodularFactorizability} correspond to the entries of \code{Cl.level}. } \item{IntramodularFactorizability}{ is a numeric vector of length equal to the number of modules specified by \code{Cl}. Its entries report the factorizability measure for each module. The components correspond to the entries of \code{Cl.level}.} \item{listConformity}{} } \references{ Dong J, Horvath S (2007) Understanding Network Concepts in Modules. BMC Systems Biology 2007, June 1:24 Horvath S, Dong J (2008) Geometric Interpretation of Gene Co-Expression Network Analysis. PloS Computational Biology. 4(8): e1000117. PMID: 18704157 Horvath S (2011) Weighted Network Analysis. Applications in Genomics and Systems Biology. Springer Book. ISBN: 978-1-4419-8818-8 Ranola JMO, Langfelder P, Song L, Horvath S, Lange K (2011) An MM algorithm for the module- and propensity based decomposition of a network. Currently a draft. } \author{ Steve Horvath } \note{Regarding the situation when \code{Cl=NULL}. One can easily show that the conformity vector is not unique if \code{adj} contains only 2 nodes. However, for more than 2 nodes the conformity is uniquely defined when dealing with an exactly factorizable weighted network whose entries \code{adj[i,j]} are larger than 0. In this case, one can get explicit formulas for the conformity (Dong and Horvath 2007). 
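The two-node non-uniqueness mentioned in this note is easy to verify by hand: any pair of conformities with the right product reproduces the single off-diagonal adjacency (a toy check, not package code):

```r
# For a 2-node network with off-diagonal adjacency 0.6, any pair (c1, c2)
# with c1 * c2 = 0.6 is a valid conformity vector, so the solution is
# not unique.
a12 <- 0.6
c.a <- c(1.0, 0.6)
c.b <- c(0.5, 1.2)

c.a[1] * c.a[2] == a12  # TRUE
c.b[1] * c.b[2] == a12  # TRUE
```

With 3 or more nodes and strictly positive adjacencies, the off-diagonal constraints pin the conformity down uniquely, as stated above.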
}
\seealso{
\code{\link{conformityBasedNetworkConcepts}}
%%\code{\link{ propensityDecomposition }}
}
\examples{
# assume the number of nodes can be divided by 2 and by 3
n=6
# here is a perfectly factorizable matrix
A=matrix(1,nrow=n,ncol=n)
# this provides the conformity vector and factorizability measure
conformityDecomposition(adj=A)
# now assume we have a class assignment
Cl=rep(c(1,2),c(n/2,n/2))
conformityDecomposition(adj=A,Cl=Cl)
# here is a block diagonal matrix
blockdiag.A=A
blockdiag.A[1:(n/3),(n/3+1):n]=0
blockdiag.A[(n/3+1):n , 1:(n/3)]=0
block.Cl=rep(c(1,2),c(n/3,2*n/3))
conformityDecomposition(adj= blockdiag.A,Cl=block.Cl)
# another block diagonal matrix
blockdiag.A=A
blockdiag.A[1:(n/3),(n/3+1):n]=0.3
blockdiag.A[(n/3+1):n , 1:(n/3)]=0.3
block.Cl=rep(c(1,2),c(n/3,2*n/3))
conformityDecomposition(adj= blockdiag.A,Cl=block.Cl)
}
\keyword{misc}
WGCNA/man/binarizeCategoricalColumns.Rd0000644000176200001440000001765014012015545017463 0ustar liggesusers\name{binarizeCategoricalColumns}
\alias{binarizeCategoricalColumns}
\alias{binarizeCategoricalColumns.pairwise}
\alias{binarizeCategoricalColumns.forRegression}
\alias{binarizeCategoricalColumns.forPlots}
\title{ Turn categorical columns into sets of binary indicators }
\description{
Given a data frame with (some) categorical columns, this function creates a set of indicator variables for the various possible sets of levels.
}
\usage{
binarizeCategoricalColumns(
  data,
  convertColumns = NULL,
  considerColumns = NULL,
  maxOrdinalLevels = 3,
  levelOrder = NULL,
  minCount = 3,
  val1 = 0, val2 = 1,
  includePairwise = FALSE,
  includeLevelVsAll = TRUE,
  dropFirstLevelVsAll = TRUE,
  dropUninformative = TRUE,
  includePrefix = TRUE,
  prefixSep = ".",
  nameForAll = "all",
  levelSep = NULL,
  levelSep.pairwise = if (length(levelSep)==0) ".vs."
    else levelSep,
  levelSep.vsAll = if (length(levelSep)==0) (if (nameForAll=="") "" else ".vs.") else levelSep,
  checkNames = FALSE,
  includeLevelInformation = FALSE)

binarizeCategoricalColumns.pairwise(
  data,
  maxOrdinalLevels = 3,
  convertColumns = NULL,
  considerColumns = NULL,
  levelOrder = NULL,
  val1 = 0, val2 = 1,
  includePrefix = TRUE,
  prefixSep = ".",
  levelSep = ".vs.",
  checkNames = FALSE)

binarizeCategoricalColumns.forRegression(
  data,
  maxOrdinalLevels = 3,
  convertColumns = NULL,
  considerColumns = NULL,
  levelOrder = NULL,
  val1 = 0, val2 = 1,
  includePrefix = TRUE,
  prefixSep = ".",
  checkNames = TRUE)

binarizeCategoricalColumns.forPlots(
  data,
  maxOrdinalLevels = 3,
  convertColumns = NULL,
  considerColumns = NULL,
  levelOrder = NULL,
  val1 = 0, val2 = 1,
  includePrefix = TRUE,
  prefixSep = ".",
  checkNames = TRUE)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{data}{ A data frame. }
\item{convertColumns}{ Optional character vector giving the column names of the columns to be converted. See \code{maxOrdinalLevels} below. }
\item{considerColumns}{ Optional character vector giving the column names of columns that should be looked at and possibly converted. If not given, all columns will be considered. See \code{maxOrdinalLevels} below. }
\item{maxOrdinalLevels}{ When \code{convertColumns} above is \code{NULL}, the function looks at all columns in \code{considerColumns} and converts all non-numeric columns and those numeric columns that have at most \code{maxOrdinalLevels} unique values. A column is considered numeric if its storage mode is numeric or if it is character and all entries with the exception of "NA", "NULL" and "NO DATA" represent valid numbers. }
\item{levelOrder}{ Optional list giving the ordering of levels (unique values) in each of the converted columns. Best used in conjunction with \code{convertColumns}. }
\item{minCount}{ Levels of \code{x} for which there are fewer than \code{minCount} elements will be ignored.
} \item{val1}{ Value for the lower level in binary comparisons. } \item{val2}{ Value for the higher level in binary comparisons. } \item{includePairwise}{ Logical: should pairwise binary indicators be included? For each pair of levels, the indicator is \code{val1} for the lower level (earlier in \code{levelOrder}), \code{val2} for the higher level and \code{NA} otherwise. } \item{includeLevelVsAll}{ Logical: should binary indicators for each level be included? The indicator is \code{val2} where \code{x} equals the level and \code{val1} otherwise. } \item{dropFirstLevelVsAll}{ Logical: should the column representing first level vs. all be dropped? This makes the resulting matrix of indicators usable for regression models. } \item{dropUninformative}{ Logical: should uninformative (constant) columns be dropped? } \item{includePrefix}{ Logical: should the column name of the binarized column be included in column names of the output? See details. } \item{prefixSep}{ Separator of column names and level names in column names of the output. See details. } \item{nameForAll}{ Character string that represents "all others" in the column names of indicators of level vs. all others. } \item{levelSep}{ Separator for levels to be used in column names of the output. If \code{NULL}, pairwise and level vs. all indicators will use different level separators set by \code{levelSep.pairwise} and \code{levelSep.vsAll}. } \item{levelSep.pairwise}{ Separator for levels to be used in column names for pairwise indicators in the output. } \item{levelSep.vsAll}{ Separator for levels to be used in column names for level vs. all indicators in the output. } \item{checkNames}{ Logical: should the names of the output be made into syntactically correct R language names? } \item{includeLevelInformation}{ Logical: should information about which levels are represented by which columns be included in the attributes of the output? 
}
}
\details{
\code{binarizeCategoricalColumns} is the most general function; the rest are convenience wrappers that set some of the options to achieve the following:

\code{binarizeCategoricalColumns.pairwise} returns only pairwise (level vs. level) binary indicators.

\code{binarizeCategoricalColumns.forRegression} returns only level vs. all others binary indicators, with the first (according to \code{levelOrder}) level vs. all removed. This is essentially the same as would be returned by \code{\link{model.matrix}} except for the column representing the intercept.

\code{binarizeCategoricalColumns.forPlots} returns only level vs. all others binary indicators and keeps them all.

The columns to be converted are identified as follows. If \code{considerColumns} is given, columns not contained in it will not be converted, even if they are included in \code{convertColumns}. If \code{convertColumns} is given, those columns will be converted (except any not contained in non-empty \code{considerColumns}). If \code{convertColumns} is \code{NULL}, the function converts columns that are not numeric (as reported by \code{\link{is.numeric}}) and those numeric columns that have at most \code{maxOrdinalLevels} unique non-missing values.

The function creates two types of indicators. The first is one level (unique value) of \code{x} vs. all others, i.e., for a given level, the indicator is \code{val2} (usually 1) for all elements of \code{x} that equal the level, and \code{val1} (usually 0) otherwise. Column names for these indicators are the concatenation of \code{namePrefix}, the level, \code{nameSep} and \code{nameForAll}. The level vs. all indicators are created for all levels that have at least \code{minCount} samples, are present in \code{levelOrder} (if it is non-NULL) and are not included in \code{ignore}. The second type of indicator encodes binary comparisons.
For each pair of levels (both with at least \code{minCount} samples), the indicator is \code{val2} (usually 1) for the higher level and \code{val1} (usually 0) for the lower level. The level order is given by \code{levelOrder} (which defaults to the sorted levels of \code{x}), assumed to be sorted in increasing order. All levels with at least \code{minCount} samples that are included in \code{levelOrder} and not included in \code{ignore} are included. Internally, the function calls \code{\link{binarizeCategoricalVariable}} for each column that is converted. } \value{ A data frame in which the converted columns have been replaced by sets of binarized indicators. When \code{includeLevelInformation} is \code{TRUE}, the attribute \code{includedLevels} is a table with one column per output column and two rows, giving the two levels (unique values of x) represented by the column. } \author{ Peter Langfelder } \examples{ set.seed(2); x = data.frame(a = sample(c("A", "B", "C"), 15, replace = TRUE), b = sample(c(1:3), 15, replace = TRUE)); out = binarizeCategoricalColumns(x, includePairwise = TRUE, includeLevelVsAll = TRUE, includeLevelInformation = TRUE); data.frame(x, out); attr(out, "includedLevels") } \keyword{misc} WGCNA/man/newCorrelationOptions.Rd0000644000176200001440000000575114012015545016527 0ustar liggesusers\name{newCorrelationOptions} \alias{newCorrelationOptions} \alias{CorrelationOptions} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Creates a list of correlation options. } \description{ Convenience function to create a re-usable list of correlation options. 
}
\usage{
newCorrelationOptions(
  corType = c("pearson", "bicor"),
  maxPOutliers = 0.05,
  quickCor = 0,
  pearsonFallback = "individual",
  cosineCorrelation = FALSE,
  nThreads = 0,
  corFnc = if (corType=="bicor") "bicor" else "cor",
  corOptions = c(
    list(use = 'p',
         cosine = cosineCorrelation,
         quick = quickCor,
         nThreads = nThreads),
    if (corType=="bicor")
      list(maxPOutliers = maxPOutliers,
           pearsonFallback = pearsonFallback) else NULL))
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{corType}{ Character specifying the type of correlation function. Currently supported options are \code{"pearson", "bicor"}. }
\item{maxPOutliers}{ Maximum proportion of outliers for biweight mid-correlation. See \code{\link{bicor}}. }
\item{quickCor}{ Real number between 0 and 1 that controls the handling of missing data in the calculation of correlations. See \code{\link{bicor}}. }
\item{pearsonFallback}{ Specifies whether the bicor calculation should revert to Pearson when median absolute deviation (mad) is zero. Recognized values are (abbreviations of) \code{"none", "individual", "all"}. If set to \code{"none"}, zero mad will result in \code{NA} for the corresponding correlation. If set to \code{"individual"}, Pearson calculation will be used only for columns that have zero mad. If set to \code{"all"}, the presence of a single zero mad will cause the whole variable to be treated as in Pearson correlation (as if the corresponding \code{robust} option was set to \code{FALSE}). }
\item{cosineCorrelation}{ Logical: calculate cosine biweight midcorrelation? Cosine bicorrelation is similar to standard bicorrelation but the median subtraction is not performed. }
\item{nThreads}{ A non-negative integer specifying the number of parallel threads to be used by certain parts of correlation calculations. This option only has an effect on systems on which a POSIX thread library is available (which currently includes Linux and Mac OSX, but excludes Windows).
If zero, the number of online processors will be used if it can be determined dynamically; otherwise correlation calculations will use 2 threads. }
\item{corFnc}{ Correlation function to be called in R code. Should correspond to the value of \code{corType} above. }
\item{corOptions}{ A list of options to be supplied to the correlation function (in addition to appropriate arguments \code{x} and \code{y}). }
}
\value{
A list containing a copy of the input arguments. The output has class \code{CorrelationOptions}.
}
\author{ Peter Langfelder }
%% ~Make other sections like Warning with \section{Warning }{....} ~
\keyword{misc}
WGCNA/man/scaleFreeFitIndex.Rd0000644000176200001440000000240414012015545015474 0ustar liggesusers\name{scaleFreeFitIndex}
\Rdversion{1.1}
\alias{scaleFreeFitIndex}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Calculation of fitting statistics for evaluating scale free topology fit. }
\description{
The function scaleFreeFitIndex calculates several indices (fitting statistics) for evaluating scale free topology fit. The input is a vector (of connectivities) k. Next, k is discretized into nBreaks equal-width bins. Let us denote the resulting vector dk. The relative frequency for each bin is denoted p.dk.
}
\usage{
scaleFreeFitIndex(k, nBreaks = 10, removeFirst = FALSE)
}
\arguments{
\item{k}{ numeric vector whose components contain non-negative values }
\item{nBreaks}{ positive integer. This determines the number of equal width bins. }
\item{removeFirst}{ logical. If TRUE then the first bin will be removed.
}
}
\value{
Data frame with columns
\item{Rsquared.SFT}{the model fitting index (R.squared) from the following model lm(log.p.dk ~ log.dk)}
\item{slope.SFT}{the slope estimate from model lm(log(p(k))~log(k))}
\item{truncatedExponentialAdjRsquared}{the adjusted R.squared measure from the truncated exponential model given by lm2 = lm(log.p.dk ~ log.dk + dk).}
}
\author{ Steve Horvath }
\keyword{ misc }
WGCNA/man/rankPvalue.Rd0000644000176200001440000001720414012015545014264 0ustar liggesusers\name{rankPvalue}
\alias{rankPvalue}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Estimate the p-value for ranking consistently high (or low) on multiple lists }
\description{
The function rankPvalue calculates the p-value for observing that an object (corresponding to a row of the input data frame \code{datS}) has a consistently high ranking (or low ranking) according to multiple ordinal scores (corresponding to the columns of the input data frame \code{datS}).
}
\usage{
rankPvalue(datS, columnweights = NULL,
           na.last = "keep", ties.method = "average",
           calculateQvalue = TRUE, pValueMethod = "all")
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{datS}{ a data frame whose rows represent objects that will be ranked. Each column of \code{datS} represents an ordinal variable (which can take on negative values). The columns correspond to (possibly signed) object significance measures, e.g., statistics (such as Z statistics), ranks, or correlations. }
\item{columnweights}{ allows the user to input a vector of non-negative numbers reflecting weights for the different columns of \code{datS}. If it is set to \code{NULL} then all weights are equal. }
\item{na.last}{ controls the treatment of missing values (NAs) in the rank function. If \code{TRUE}, missing values in the data are put last (i.e. they get the highest rank values).
If \code{FALSE}, they are put first; if \code{NA}, they are removed; if \code{"keep"} they are kept with rank NA. See \code{\link{rank}} for more details. }
\item{ties.method}{ represents the ties method used in the rank function for the percentile rank method. See \code{\link{rank}} for more details.}
\item{calculateQvalue}{ logical: should q-values be calculated? If set to TRUE then the function calculates corresponding q-values (local false discovery rates) using the qvalue package, see Storey JD and Tibshirani R. (2003). This option assumes that the qvalue package has been installed. }
\item{pValueMethod}{ determines which method is used for calculating p-values. By default it is set to "all", i.e. both methods are used. If it is set to "rank" then only the percentile rank method is used. If it is set to "scale" then only the scale method will be used. }
}
\details{
The function calculates asymptotic p-values (and optionally q-values) for testing the null hypothesis that the values in the columns of datS are independent. This allows us to find objects (rows) with consistently high (or low) values across the columns.

Example: Imagine you have 5 vectors of Z statistics corresponding to the columns of datS. Further assume that a gene has ranks 1,1,1,1,20 in the 5 lists. It seems very significant that the gene ranks number 1 in 4 out of the 5 lists. The function rankPvalue can be used to calculate a p-value for this occurrence.

The function uses the central limit theorem to calculate asymptotic p-values for two types of test statistics that measure consistently high or low ordinal values. The first method (referred to as the percentile rank method) leads to accurate estimates of p-values if datS has at least 4 columns but it can be overly conservative. The percentile rank method replaces each column of datS by the ranked version rank(datS[,i]) (referred to as low ranking) and by rank(-datS[,i]) (referred to as high ranking).
Low ranking and high ranking allow one to find consistently small values or consistently large values of datS, respectively. All ranks are divided by the maximum rank so that the result lies in the unit interval [0,1]. In the following, we refer to rank/max(rank) as percentile rank. For a given object (corresponding to a row of datS) the observed percentile rank follows approximately a uniform distribution under the null hypothesis. The test statistic is defined as the sum of the percentile ranks (across the columns of datS). Under the null hypothesis that there is no relationship between the rankings of the columns of datS, this (row sum) test statistic follows a distribution that is given by the convolution of random uniform distributions. Under the null hypothesis, the individual percentile ranks are independent and one can invoke the central limit theorem to argue that the row sum test statistic follows asymptotically a normal distribution. It is well-known that the speed of convergence to the normal distribution is extremely fast in case of identically distributed uniform distributions. Even when datS has only 4 columns, the difference between the normal approximation and the exact distribution is negligible in practice (Killmann et al 2001). In summary, we use the central limit theorem to argue that the sum of the percentile ranks follows a normal distribution whose mean and variance can be calculated using the fact that the mean value of a uniform random variable (on the unit interval) equals 0.5 and its variance equals 1/12. The second method for calculating p-values is referred to as scale method. It is often more powerful but its asymptotic p-value can only be trusted if either datS has a lot of columns or if the ordinal scores (columns of datS) follow an approximate normal distribution. The scale method scales (or standardizes) each ordinal variable (column of datS) so that it has mean 0 and variance 1. 
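The row-sum statistics described above can be sketched in a few lines of base R; the following illustrates the percentile-rank version and its normal approximation (an illustration only, not the package's implementation, which also handles weights, ties and missing values):

```r
# Toy data: 100 objects scored by 5 ordinal variables.
set.seed(1)
datS <- matrix(rnorm(500), nrow = 100, ncol = 5)

# Percentile ranks per column: rank divided by the maximum rank,
# so each column lies in the unit interval.
percRank <- apply(datS, 2, function(x) rank(x) / length(x))

# Row-sum statistic: under independence, the sum of m approximately
# uniform percentile ranks is roughly normal with mean m * 1/2 and
# variance m * 1/12.
m <- ncol(datS)
rowSum <- rowSums(percRank)
pValueLowRank <- pnorm(rowSum, mean = m / 2, sd = sqrt(m / 12))
```

Small values of \code{pValueLowRank} flag objects that rank consistently low across all columns; ranking \code{-datS} instead gives the analogous high-ranking p-value.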
Under the null hypothesis of independence, the row sum follows approximately a normal distribution if the assumptions of the central limit theorem are met. In practice, we find that the second approach is often more powerful but it makes more distributional assumptions (if datS has few columns).
}
\value{
A list whose actual content depends on which p-value method is selected, and whether q-values are calculated. The following inner components are calculated, organized in outer components \code{datoutrank} and \code{datoutscale}:
\item{pValueExtremeRank}{This is the minimum between pValueLowRank and pValueHighRank, i.e. min(pValueLow, pValueHigh)}
\item{pValueLowRank}{Asymptotic p-value for observing a consistently low value across the columns of datS based on the rank method.}
\item{pValueHighRank}{Asymptotic p-value for observing a consistently high value across the columns of datS based on the rank method.}
\item{pValueExtremeScale}{This is the minimum between pValueLowScale and pValueHighScale, i.e.
min(pValueLow, pValueHigh)} \item{pValueLowScale}{Asymptotic p-value for observing a consistently low value across the columns of datS based on the Scale method.} \item{pValueHighScale}{Asymptotic p-value for observing a consistently high value across the columns of datS based on the Scale method.} \item{qValueExtremeRank}{local false discovery rate (q-value) corresponding to the p-value pValueExtremeRank} \item{qValueLowRank}{local false discovery rate (q-value) corresponding to the p-value pValueLowRank} \item{qValueHighRank}{local false discovery rate (q-value) corresponding to the p-value pValueHighRank} \item{qValueExtremeScale}{local false discovery rate (q-value) corresponding to the p-value pValueExtremeScale} \item{qValueLowScale}{local false discovery rate (q-value) corresponding to the p-value pValueLowScale} \item{qValueHighScale}{local false discovery rate (q-value) corresponding to the p-value pValueHighScale} } \references{ Killmann F, VonCollani E (2001) A Note on the Convolution of the Uniform and Related Distributions and Their Use in Quality Control. Economic Quality Control Vol 16 (2001), No. 1, 17-41. ISSN 0940-5151 Storey JD and Tibshirani R. (2003) Statistical significance for genome-wide experiments. Proceedings of the National Academy of Sciences, 100: 9440-9445. } \author{ Steve Horvath } \seealso{ \code{\link{rank}}, \code{\link{qvalue}} } \keyword{ misc}% __ONLY ONE__ keyword per line WGCNA/man/TrueTrait.Rd0000644000176200001440000002233514533632240014105 0ustar liggesusers\name{TrueTrait} \alias{TrueTrait} \title{Estimate the true trait underlying a list of surrogate markers.} \description{ Assume an imprecisely measured trait \code{y} that is related to the true, unobserved trait yTRUE as follows: yTRUE = y + noise, where the noise is assumed to have mean zero and a constant variance. Assume you have one or more surrogate markers for yTRUE corresponding to the columns of \code{datX}.
The function implements several approaches for estimating yTRUE based on the inputs \code{y} and/or \code{datX}. } \usage{ TrueTrait(datX, y, datXtest=NULL, corFnc = "bicor", corOptions = "use = 'pairwise.complete.obs'", LeaveOneOut.CV=FALSE, skipMissingVariables=TRUE, addLinearModel=FALSE) } \arguments{ \item{datX}{ is a vector or data frame whose columns correspond to the surrogate markers (variables) for the true underlying trait. The number of rows of \code{datX} equals the number of observations, i.e. it should equal the length of \code{y} } \item{y}{is a numeric vector which specifies the observed trait. } \item{datXtest}{can be set as a matrix or data frame of a second, independent test data set. Its columns should correspond to those of \code{datX}, i.e. the two data sets should have the same number of columns but the number of rows (test set observations) can be different.} \item{corFnc}{Character string specifying the correlation function to be used in the calculations. Recommended values are the Pearson correlation \code{"cor"} or the default biweight midcorrelation \code{"bicor"}. Additional arguments to the correlation function can be specified using \code{corOptions}.} \item{corOptions}{Character string giving additional arguments to the function specified in \code{corFnc}. } \item{LeaveOneOut.CV}{logical. If TRUE then leave-one-out cross-validation estimates will be calculated for \code{y.true1} and \code{y.true2} based on \code{datX}.} \item{skipMissingVariables}{logical. If TRUE then variables whose values are missing for a given observation will be skipped when estimating the true trait of that particular observation. Thus, the estimate for a particular observation is determined by all the variables whose values are non-missing.} \item{addLinearModel}{logical.
If TRUE then the function also estimates the true trait based on the predictions of the linear model \code{lm(y~., data=datX)} } } \details{ This R function implements formulas described in Klemera and Doubal (2006); the assumptions underlying these formulas are also described there. Briefly, the function provides several estimates of the true underlying trait under the following assumptions: 1) There is a true underlying trait that affects \code{y} and a list of surrogate markers corresponding to the columns of \code{datX}. 2) There is a linear relationship between the true underlying trait and \code{y} and the surrogate markers. 3) yTRUE = y + Noise, where the Noise term has a mean of zero and a fixed variance. 4) Weighted least squares estimation is used to relate the surrogate markers to the underlying trait, where the weights are proportional to 1/ssq.j where ssq.j is the noise variance of the j-th marker. Specifically, output \code{y.true1} corresponds to formula 31, \code{y.true2} corresponds to formula 25, and \code{y.true3} corresponds to formula 34. Although the true underlying trait yTRUE is not known, one can estimate the standard deviation between the estimate \code{y.true2} and yTRUE using formula 33. Similarly, one can estimate the SD for the estimate \code{y.true3} using formula 42. These estimated SDs correspond to output components 2 and 3, respectively. These SDs are valuable since they provide a sense of how accurate the measure is. To estimate the correlations between \code{y} and the surrogate markers, one can specify different correlation measures. The default method is the biweight midcorrelation \code{"bicor"}; one can also choose the Pearson correlation \code{"cor"}. See help(bicor) to learn more. When \code{datX} is composed of observations measured in different strata (e.g. different batches or independent data sets) then one can obtain stratum-specific estimates by specifying the strata using the argument \code{Strata}.
In this case, the estimation focuses on one stratum at a time. } \value{A list with the following components. \item{datEstimates}{is a data frame whose columns correspond to estimates of the true underlying trait. The number of rows equals the number of observations, i.e. the length of \code{y}. The first column \code{y.true1} is the average value of standardized columns of \code{datX} where standardization subtracts out the intercept term and divides by the slope of the linear regression model lm(marker~y). Since this estimate ignores the fact that the surrogate markers have different correlations with \code{y}, it is typically inferior to \code{y.true2}. The second column \code{y.true2} equals the weighted average value of standardized columns of \code{datX}. The standardization is described in section 2.4 of Klemera and Doubal (2006). The weights are proportional to r^2/(1+r^2) where r denotes the correlation between the surrogate marker and \code{y}. Since this estimate does not include \code{y} as an additional surrogate marker, it may be slightly inferior to \code{y.true3}. Having said this, the difference between \code{y.true2} and \code{y.true3} is often negligible. An additional column called \code{y.lm} is added if \code{addLinearModel=TRUE}. In this case, \code{y.lm} reports the linear model predictions. Finally, the column \code{y.true3} is very similar to \code{y.true2} but it includes \code{y} as an additional surrogate marker. It is expected to be the best estimate of the underlying true trait (see Klemera and Doubal 2006). } \item{datEstimatestest}{is output only if a test data set has been specified in the argument \code{datXtest}. In this case, it contains a data frame with columns \code{ytrue1} and \code{ytrue2}. The number of rows equals the number of test set observations, i.e. the number of rows of \code{datXtest}. Since the value of \code{y} is not known in the case of a test data set, one cannot calculate \code{y.true3}.
An additional column with linear model predictions \code{y.lm} is added if \code{addLinearModel=TRUE}. } \item{datEstimates.LeaveOneOut.CV}{is output only if the argument \code{LeaveOneOut.CV} has been set to \code{TRUE}. In this case, it contains a data frame with leave-one-out cross-validation estimates of \code{ytrue1} and \code{ytrue2}. The number of rows equals the length of \code{y}. Since the left-out value of \code{y} is treated as unknown, one cannot calculate \code{y.true3}. } \item{SD.ytrue2}{is a scalar. This is an estimate of the standard deviation between the estimate \code{y.true2} and the true (unobserved) yTRUE. It corresponds to formula 33.} \item{SD.ytrue3}{is a scalar. This is an estimate of the standard deviation between \code{y.true3} and the true (unobserved) yTRUE. It corresponds to formula 42.} \item{datVariableInfo}{is a data frame that reports information for each variable (column of \code{datX}) when it comes to the definition of \code{y.true2}. The rows correspond to the number of variables. Columns report the variable name, the center (intercept that is subtracted to scale each variable), the scale (i.e. the slope that is used in the denominator), and finally the weights used in the weighted sum of the scaled variables.} \item{datEstimatesByStratum}{ a data frame that will only be output if \code{Strata} is different from NULL. In this case, it has the same dimensions as \code{datEstimates} but the estimates were calculated separately for each level of \code{Strata}.} \item{SD.ytrue2ByStratum}{ a vector of length equal to the number of different levels of \code{Strata}. Each component reports the estimate of \code{SD.ytrue2} for observations in the stratum specified by unique(Strata).} \item{datVariableInfoByStratum}{ a list whose components are matrices with variable information. Each list component reports the variable information in the stratum specified by unique(Strata).
} } \references{ Klemera P, Doubal S (2006) A new approach to the concept and computation of biological age. Mechanisms of Ageing and Development 127 (2006) 240-248 Cho IH, Park KS, Lim CJ (2010) An Empirical Comparative Study on Validation of Biological Age Estimation Algorithms with an Application of Work Ability Index. Mechanisms of Ageing and Development Volume 131, Issue 2, February 2010, Pages 69-78 } \author{Steve Horvath} \examples{ # observed trait y=rnorm(1000,mean=50,sd=20) # unobserved, true trait yTRUE = y + rnorm(1000,sd=10) # now we simulate surrogate markers around the true trait datX=simulateModule(yTRUE,nGenes=20, minCor=.4,maxCor=.9,geneMeans=rnorm(20,50,30) ) True1=TrueTrait(datX=datX,y=y) datTrue=True1$datEstimates par(mfrow=c(2,2)) for (i in 1:dim(datTrue)[[2]] ){ meanAbsDev= mean(abs(yTRUE-datTrue[,i])) verboseScatterplot(datTrue[,i],yTRUE,xlab=names(datTrue)[i], main=paste(i, "MeanAbsDev=", signif(meanAbsDev,3))); abline(0,1) } # compare the estimated standard deviation of y.true2 True1[[2]] # with the true SD sqrt(var(yTRUE-datTrue$y.true2)) # compare the estimated standard deviation of y.true3 True1[[3]] # with the true SD sqrt(var(yTRUE-datTrue$y.true3)) } \keyword{misc} WGCNA/man/projectiveKMeans.Rd0000644000176200001440000000766614012015545015440 0ustar liggesusers\name{projectiveKMeans} \alias{projectiveKMeans} \title{ Projective K-means (pre-)clustering of expression data } \description{ Implementation of a variant of K-means clustering for expression data. } \usage{ projectiveKMeans( datExpr, preferredSize = 5000, nCenters = as.integer(min(ncol(datExpr)/20, preferredSize^2/ncol(datExpr))), sizePenaltyPower = 4, networkType = "unsigned", randomSeed = 54321, checkData = TRUE, imputeMissing = TRUE, maxIterations = 1000, verbose = 0, indent = 0) } \arguments{ \item{datExpr}{ expression data. A data frame in which columns are genes and rows are samples. NAs are allowed, but not too many.
} \item{preferredSize}{ preferred maximum size of clusters. } \item{nCenters}{ number of initial clusters. Empirical evidence suggests that more centers will give a better preclustering; the default is an attempt to arrive at a reasonable number. } \item{sizePenaltyPower}{ parameter specifying how severe the penalty is for clusters that exceed \code{preferredSize}. } \item{networkType}{ network type. Allowed values are (unique abbreviations of) \code{"unsigned"}, \code{"signed"}, \code{"signed hybrid"}. See \code{\link{adjacency}}. } \item{randomSeed}{ integer to be used as seed for the random number generator before the function starts. If a current seed exists, it is saved and restored upon exit. } \item{checkData}{ logical: should data be checked for genes with zero variance and genes and samples with excessive numbers of missing samples? Bad samples are ignored; returned cluster assignment for bad genes will be \code{NA}. } \item{imputeMissing}{ logical: should missing values in \code{datExpr} be imputed before the calculations start? The early imputation makes the code run faster but may produce slightly different results if re-running older calculations.} \item{maxIterations}{ maximum number of iterations to be attempted. } \item{verbose}{ integer level of verbosity. Zero means silent, higher values make the output progressively more and more verbose. } \item{indent}{ indentation for diagnostic messages. Zero means no indentation, each unit adds two spaces. } } \details{ The principal aim of this function within WGCNA is to pre-cluster a large number of genes into smaller blocks that can be handled using standard WGCNA techniques. This function implements a variant of K-means clustering that is suitable for co-expression analysis. Cluster centers are defined by the first principal component, and distances by correlation (more precisely, 1-correlation).
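A toy sketch of the distance calculation just described (illustrative only; the variable names are hypothetical and this is not the function's internal code):

```r
# Distance of every gene to one cluster's "center", where the center is the
# first principal component of the cluster's standardized expression profiles.
set.seed(3)
datExpr <- matrix(rnorm(30 * 10), nrow = 30)   # 30 samples, 10 genes
clusterGenes <- 1:4                            # a hypothetical current cluster
center <- svd(scale(datExpr[, clusterGenes]), nu = 1, nv = 0)$u[, 1]
distToCenter <- 1 - cor(center, datExpr)       # 1 - correlation, as described above
```

For \code{networkType = "unsigned"}, the absolute value of the correlation would be used instead.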
The distance between a gene and a cluster is multiplied by a factor of \eqn{max(clusterSize/preferredSize, 1)^{sizePenaltyPower}}{\code{max(clusterSize/preferredSize, 1)^sizePenaltyPower}}, thus penalizing clusters whose size exceeds \code{preferredSize}. The function starts with a randomly generated cluster assignment (hence the need to set the random seed for repeatability) and executes iterations of calculating new centers and reassigning genes to the nearest center until the clustering becomes stable. Before returning, nearby clusters are iteratively combined if their combined size is below \code{preferredSize}. The standard principal component calculation via the function \code{svd} fails from time to time (likely a convergence problem of the underlying LAPACK functions). Such errors are trapped and the principal component is approximated by a weighted average of expression profiles in the cluster. If \code{verbose} is set above 2, an informational message is printed whenever this approximation is used. } \value{ A list with the following components: \item{clusters}{A numerical vector with one component per input gene, giving the cluster number to which the gene is assigned. } \item{centers}{Cluster centers, that is, their first principal components. } } \author{ Peter Langfelder } \seealso{ \code{\link{sizeRestrictedClusterMerge}} which implements the last step of merging smaller clusters.} \keyword{ cluster } WGCNA/man/mutualInfoAdjacency.Rd0000644000176200001440000001704114012015545016100 0ustar liggesusers\name{mutualInfoAdjacency} \Rdversion{1.1} \alias{mutualInfoAdjacency} %- Also NEED an '\alias' for EACH other topic documented here. \title{Calculate weighted adjacency matrices based on mutual information} \description{ The function calculates different types of weighted adjacency matrices based on the mutual information between vectors (corresponding to the columns of the input data frame datE).
The mutual information between pairs of vectors is divided by an upper bound so that the resulting normalized measure lies between 0 and 1. } \usage{ mutualInfoAdjacency( datE, discretizeColumns = TRUE, entropyEstimationMethod = "MM", numberBins = NULL) } \arguments{ \item{datE}{ \code{datE} is a data frame or matrix whose columns correspond to variables and whose rows correspond to measurements. For example, the columns may correspond to genes while the rows correspond to microarrays. The number of nodes in the mutual information network equals the number of columns of \code{datE}. } \item{discretizeColumns}{ is a logical variable. If it is set to TRUE then the columns of \code{datE} will be discretized into a user-defined number of bins (see \code{numberBins}). } \item{entropyEstimationMethod}{ takes a text string for specifying the entropy and mutual information estimation method. If \code{entropyEstimationMethod="MM"} then the Miller-Madow asymptotic bias corrected empirical estimator is used. If \code{entropyEstimationMethod="ML"} the maximum likelihood estimator (also known as plug-in or empirical estimator) is used. If \code{entropyEstimationMethod="shrink"}, the shrinkage estimator of a Dirichlet probability distribution is used. If \code{entropyEstimationMethod="SG"}, the Schurmann-Grassberger estimator of the entropy of a Dirichlet probability distribution is used. } \item{numberBins}{ is an integer larger than 0 which specifies how many bins are used for the discretization step. This argument is only relevant if \code{discretizeColumns} has been set to TRUE. By default \code{numberBins} is set to sqrt(m) where m is the number of samples, i.e. the number of rows of \code{datE}. Thus the default is \code{numberBins}=sqrt(nrow(datE)). } } \details{ The function inputs a data frame \code{datE} and outputs a list whose components correspond to different weighted network adjacency measures defined between the columns of \code{datE}.
Make sure to install the R packages \code{entropy}, \code{minet}, and \code{infotheo}, since the function \code{mutualInfoAdjacency} makes use of the \code{entropy} function from the R package \code{entropy} (Hausser and Strimmer 2008) and functions from the \code{minet} and \code{infotheo} packages (Meyer et al 2008). A weighted network adjacency matrix is a symmetric matrix whose entries take on values between 0 and 1. Each weighted adjacency matrix contains scaled versions of the mutual information between the columns of the input data frame \code{datE}. We assume that datE contains numeric values which will be discretized unless the user chooses the option \code{discretizeColumns=FALSE}. The raw (unscaled) mutual information and entropy measures have units "nat", i.e. natural logarithms are used in their definition (base e=2.71..). Several mutual information estimation methods have been proposed in the literature (reviewed in Hausser and Strimmer 2008, Meyer et al 2008). While mutual information networks allow one to detect non-linear relationships between the columns of \code{datE}, they may overfit the data if relatively few observations are available. Thus, if the number of rows of \code{datE} is smaller than, say, 200, it may be better to use a correlation-based adjacency calculated with the function \code{adjacency}. } \value{ The function outputs a list with the following components: \item{Entropy}{ is a vector whose components report entropy estimates of each column of \code{datE}. The natural logarithm (base e) is used in the definition. Using the notation from the Wikipedia entry (http://en.wikipedia.org/wiki/Mutual_information), this vector contains the values Hx where x corresponds to a column in \code{datE}. } \item{MutualInformation}{ is a symmetric matrix whose entries contain the pairwise mutual information measures between the columns of \code{datE}. The diagonal of the matrix \code{MutualInformation} equals \code{Entropy}.
In general, the entries of this matrix can be larger than 1, i.e. this is not an adjacency matrix. Using the notation from the Wikipedia entry, this matrix contains the mutual information estimates I(X;Y) } \item{AdjacencySymmetricUncertainty}{ is a weighted adjacency matrix whose entries are based on the mutual information. Using the notation from the Wikipedia entry, this matrix contains the mutual information estimates \code{AdjacencySymmetricUncertainty}=2*I(X;Y)/(H(X)+H(Y)). Since I(X;X)=H(X), the diagonal elements of \code{AdjacencySymmetricUncertainty} equal 1. In general the entries of this symmetric matrix \code{AdjacencySymmetricUncertainty} lie between 0 and 1. } \item{AdjacencyUniversalVersion1}{ is a weighted adjacency matrix that is a simple function of the \code{AdjacencySymmetricUncertainty}. Specifically, \code{AdjacencyUniversalVersion1= AdjacencySymmetricUncertainty/(2- AdjacencySymmetricUncertainty)}. Note that f(x)= x/(2-x) is a monotonically increasing function on the unit interval [0,1] whose values lie between 0 and 1. The reason why we call it the universal adjacency is that dissUA=1-\code{AdjacencyUniversalVersion1} turns out to be a universal distance function, i.e. it satisfies the properties of a distance (including the triangle inequality) and it takes on a small value if any other distance measure takes on a small value (Kraskov et al 2003). } \item{AdjacencyUniversalVersion2}{ is a weighted adjacency matrix for which dissUAversion2=1-\code{AdjacencyUniversalVersion2} is also a universal distance measure. Using the notation from Wikipedia, the entries of the symmetric matrix AdjacencyUniversalVersion2 are defined as follows \code{AdjacencyUniversalVersion2}=I(X;Y)/max(H(X),H(Y)). } } \references{ Hausser J, Strimmer K (2008) Entropy inference and the James-Stein estimator, with application to nonlinear gene association networks. See http://arxiv.org/abs/0811.3579 Patrick E. Meyer, Frederic Lafitte, and Gianluca Bontempi. 
minet: A R/Bioconductor Package for Inferring Large Transcriptional Networks Using Mutual Information. BMC Bioinformatics, Vol 9, 2008 Kraskov A, Stoegbauer H, Andrzejak RG, Grassberger P (2003) Hierarchical Clustering Based on Mutual Information. ArXiv q-bio/0311039 } \author{ Steve Horvath, Lin Song, Peter Langfelder } \seealso{ \code{\link{adjacency}} } \examples{ # Load requisite packages. These packages are considered "optional", # so WGCNA does not load them automatically. if (require(infotheo, quietly = TRUE) && require(minet, quietly = TRUE) && require(entropy, quietly = TRUE)) { # Example can be executed. #Simulate a data frame datE which contains 5 columns and 50 observations m=50 x1=rnorm(m) r=.5; x2=r*x1+sqrt(1-r^2)*rnorm(m) r=.3; x3=r*(x1-.5)^2+sqrt(1-r^2)*rnorm(m) x4=rnorm(m) r=.3; x5=r*x4+sqrt(1-r^2)*rnorm(m) datE=data.frame(x1,x2,x3,x4,x5) #calculate entropy, mutual information matrix and weighted adjacency # matrices based on mutual information. MIadj=mutualInfoAdjacency(datE=datE) } else printFlush(paste("Please install packages infotheo, minet and entropy", "before running this example.")); } \keyword{ misc } WGCNA/man/nearestNeighborConnectivity.Rd0000644000176200001440000000546014012015545017673 0ustar liggesusers\name{nearestNeighborConnectivity} \alias{nearestNeighborConnectivity} \title{ Connectivity to a constant number of nearest neighbors } \description{ Given expression data and basic network parameters, the function calculates connectivity of each gene to a given number of nearest neighbors. } \usage{ nearestNeighborConnectivity(datExpr, nNeighbors = 50, power = 6, type = "unsigned", corFnc = "cor", corOptions = "use = 'p'", blockSize = 1000, sampleLinks = NULL, nLinks = 5000, setSeed = 38457, verbose = 1, indent = 0) } \arguments{ \item{datExpr}{ a data frame containing expression data, with rows corresponding to samples and columns to genes. Missing values are allowed and will be ignored. 
} \item{nNeighbors}{ number of nearest neighbors to use. } \item{power}{ soft thresholding power for network construction. Should be a number greater than 1. } \item{type}{ a character string encoding network type. Recognized values are (unique abbreviations of) \code{"unsigned"}, \code{"signed"}, and \code{"signed hybrid"}. } \item{corFnc}{ character string containing the name of the function to calculate correlation. Suggested functions include \code{"cor"} and \code{"bicor"}. } \item{corOptions}{ further arguments to the correlation function. } \item{blockSize}{ correlation calculations will be split into square blocks of this size, to prevent running out of memory for large gene sets. } \item{sampleLinks}{ logical: should network connections be sampled (\code{TRUE}) or should all connections be used systematically (\code{FALSE})? } \item{nLinks}{ number of links to be sampled. Should be set such that \code{nLinks * nNeighbors} is several times larger than the number of genes. } \item{setSeed}{ seed to be used for sampling, for repeatability. If a seed already exists, it is saved before the sampling starts and restored upon exit. } \item{verbose}{ integer controlling the level of verbosity. 0 means silent.} \item{indent}{ integer controlling indentation of output. Each unit above 0 adds two spaces. } } \details{ Connectivity of gene \code{i} is the sum of adjacency strengths between gene \code{i} and other genes; in this case we take the \code{nNeighbors} nodes with the highest connection strength to gene \code{i}. The adjacency strengths are calculated by correlating the given expression data using the function supplied in \code{corFnc} and transforming them into adjacency according to the given network \code{type} and \code{power}. } \value{ A vector with one component for each gene containing the nearest neighbor connectivity.
} \author{ Peter Langfelder } \seealso{ \code{\link{adjacency}}, \code{\link{softConnectivity}} } \keyword{ misc }% __ONLY ONE__ keyword per line WGCNA/man/plotColorUnderTree.Rd0000644000176200001440000001226014533632240015751 0ustar liggesusers\name{plotColorUnderTree} \alias{plotColorUnderTree} \alias{plotOrderedColors} \title{Plot color rows in a given order, for example under a dendrogram} \description{ Plot color rows encoding information about objects in a given order, for example the order of a clustering dendrogram, usually below the dendrogram or a barplot. } \usage{ plotOrderedColors( order, colors, main = "", rowLabels = NULL, rowWidths = NULL, rowText = NULL, rowTextAlignment = c("left", "center", "right"), rowTextIgnore = NULL, textPositions = NULL, addTextGuide = TRUE, cex.rowLabels = 1, cex.rowText = 0.8, startAt = 0, align = c("center", "edge"), separatorLine.col = "black", ...) plotColorUnderTree( dendro, colors, rowLabels = NULL, rowWidths = NULL, rowText = NULL, rowTextAlignment = c("left", "center", "right"), rowTextIgnore = NULL, textPositions = NULL, addTextGuide = TRUE, cex.rowLabels = 1, cex.rowText = 0.8, separatorLine.col = "black", ...) } \arguments{ \item{order}{A vector giving the order of the objects. Must have the same length as \code{colors} if \code{colors} is a vector, or as the number of rows if \code{colors} is a matrix or data frame.} \item{dendro}{A hierarchical clustering dendrogram such as one returned by \code{\link{hclust}}.} \item{colors}{Coloring of objects on the dendrogram. Either a vector (one color per object) or a matrix (can also be an array or a data frame) with each column giving one color per object. Each column will be plotted as a horizontal row of colors under the dendrogram.} \item{main}{Optional main title.} \item{rowLabels}{Labels for the colorings given in \code{colors}. The labels will be printed to the left of the color rows in the plot.
If the argument is given, it must be a vector of length equal to the number of columns in \code{colors}. If not given, \code{names(colors)} will be used if available. If not, sequential numbers starting from 1 will be used.} \item{rowWidths}{ Optional specification of relative row widths for the color and text (if given) rows. Need not sum to 1. } \item{rowText}{Optional labels to identify colors in the color rows. If given, must be of the same dimensions as \code{colors}. Each label that occurs will be displayed once.} \item{rowTextAlignment}{Character string specifying whether the labels should be left-justified to the start of the largest block of each label, centered in the middle, or right-justified to the end of the largest block.} \item{rowTextIgnore}{Optional specifications of labels that should be ignored when displaying them using \code{rowText} above. } \item{textPositions}{optional numeric vector of the same length as the number of columns in \code{rowText} giving the color rows under which the text rows should appear.} \item{addTextGuide}{ logical: should guide lines be added for the text rows (if given)? } \item{cex.rowLabels}{Font size scale factor for the row labels. See \code{\link[graphics]{par}}.} \item{cex.rowText}{ character expansion factor for text rows (if given). } \item{startAt}{A numeric value indicating where in relationship to the left edge of the plot the center of the first rectangle should be. Useful values are 0 if plotting colors under a dendrogram, and 0.5 if plotting colors under a barplot. } \item{align}{Controls the alignment of the color rectangles. \code{"center"} means aligning centers of the rectangles on equally spaced values; \code{"edge"} means aligning edges of the first and last rectangles on the edges of the plot region.} \item{separatorLine.col}{Color of the line separating rows of color rectangles.
If \code{NA}, no lines will be drawn.} \item{\dots}{Other parameters to be passed on to the plotting method (such as \code{main} for the main title etc).} } \details{ It is often useful to plot dendrograms or other plots (e.g., barplots) of objects together with additional information about the objects, for example module assignment (by color) that was obtained by cutting a hierarchical dendrogram or external color-coded measures such as gene significance. This function provides a way to do so. The calling code should section the screen into two (or more) parts, plot the dendrogram (via \code{plot(hclust)}) or other information in the upper section and use this function to plot color annotation in the order corresponding to the dendrogram in the lower section. } \value{ A list with the following components \item{colorRectangles}{A list with one component per color row. Each component is a list with 4 elements \code{xl, yb, xr, yt} giving the left, bottom, right and top coordinates of the rectangles in that row.} } \note{ This function replaces \code{plotHclustColors} in package \code{moduleColor}. } \author{ Steve Horvath \email{SHorvath@mednet.ucla.edu} and Peter Langfelder \email{Peter.Langfelder@gmail.com} } \seealso{\code{\link[dynamicTreeCut]{cutreeDynamic}} for module detection in a dendrogram; \code{\link{plotDendroAndColors}} for automated plotting of dendrograms and colors in one step.} \keyword{hplot} WGCNA/man/TOMsimilarityFromExpr.Rd0000644000176200001440000002117614012015545016410 0ustar liggesusers\name{TOMsimilarityFromExpr} \alias{TOMsimilarityFromExpr} \title{ Topological overlap matrix } \description{ Calculation of the topological overlap matrix from given expression data. 
} \usage{ TOMsimilarityFromExpr( datExpr, weights = NULL, corType = "pearson", networkType = "unsigned", power = 6, TOMType = "signed", TOMDenom = "min", maxPOutliers = 1, quickCor = 0, pearsonFallback = "individual", cosineCorrelation = FALSE, replaceMissingAdjacencies = FALSE, suppressTOMForZeroAdjacencies = FALSE, suppressNegativeTOM = FALSE, useInternalMatrixAlgebra = FALSE, nThreads = 0, verbose = 1, indent = 0) } \arguments{ \item{datExpr}{ expression data. A data frame in which columns are genes and rows are samples. NAs are allowed, but not too many. } \item{weights}{optional observation weights for \code{datExpr} to be used in correlation calculation. A matrix of the same dimensions as \code{datExpr}, containing non-negative weights.} \item{corType}{ character string specifying the correlation to be used. Allowed values are (unique abbreviations of) \code{"pearson"} and \code{"bicor"}, corresponding to Pearson and biweight midcorrelation, respectively. Missing values are handled using the \code{pairwise.complete.obs} option. } \item{networkType}{ network type. Allowed values are (unique abbreviations of) \code{"unsigned"}, \code{"signed"}, \code{"signed hybrid"}. See \code{\link{adjacency}}. } \item{power}{ soft-thresholding power for network construction. } \item{TOMType}{ one of \code{"none"}, \code{"unsigned"}, \code{"signed"}, \code{"signed Nowick"}, \code{"unsigned 2"}, \code{"signed 2"} and \code{"signed Nowick 2"}. If \code{"none"}, adjacency will be used for clustering. See details and keep in mind that the "2" versions should be considered experimental and are subject to change.} \item{TOMDenom}{ a character string specifying the TOM variant to be used. Recognized values are \code{"min"} giving the standard TOM described in Zhang and Horvath (2005), and \code{"mean"} in which the \code{min} function in the denominator is replaced by \code{mean}.
The \code{"mean"} variant may produce better results but at this time should be considered experimental.} %The default mean denominator %variant %is preferrable and we recommend using it unless the user needs to reproduce older results obtained using %the standard, minimum denominator TOM. } \item{maxPOutliers}{ only used for \code{corType=="bicor"}. Specifies the maximum percentile of data that can be considered outliers on either side of the median separately. For each side of the median, if a higher percentile of data than \code{maxPOutliers} is considered outlying by the weight function based on \code{9*mad(x)}, the width of the weight function is increased such that the percentile of outliers on that side of the median equals \code{maxPOutliers}. Using \code{maxPOutliers=1} will effectively disable all weight function broadening; using \code{maxPOutliers=0} will give results that are quite similar (but not equal) to Pearson correlation. } \item{quickCor}{ real number between 0 and 1 that controls the handling of missing data in the calculation of correlations. See details. } \item{pearsonFallback}{Specifies whether the bicor calculation, if used, should revert to Pearson when median absolute deviation (mad) is zero. Recognized values are (abbreviations of) \code{"none", "individual", "all"}. If set to \code{"none"}, zero mad will result in \code{NA} for the corresponding correlation. If set to \code{"individual"}, Pearson calculation will be used only for columns that have zero mad. If set to \code{"all"}, the presence of a single zero mad will cause the whole variable to be treated in Pearson correlation manner (as if the corresponding \code{robust} option was set to \code{FALSE}). Has no effect for Pearson correlation. See \code{\link{bicor}}.} \item{cosineCorrelation}{logical: should the cosine version of the correlation calculation be used? The cosine calculation differs from the standard one in that it does not subtract the mean.
} \item{replaceMissingAdjacencies}{logical: should missing values in the calculation of adjacency be replaced by 0?} \item{suppressTOMForZeroAdjacencies}{Logical: should the result be set to zero for zero adjacencies?} \item{suppressNegativeTOM}{Logical: should the result be set to zero when negative?} \item{useInternalMatrixAlgebra}{Logical: should WGCNA's own, slow, matrix multiplication be used instead of R-wide BLAS? Only useful for debugging.} \item{nThreads}{ non-negative integer specifying the number of parallel threads to be used by certain parts of correlation calculations. This option only has an effect on systems on which a POSIX thread library is available (which currently includes Linux and Mac OSX, but excludes Windows). If zero, the number of online processors will be used if it can be determined dynamically, otherwise correlation calculations will use 2 threads. } \item{verbose}{ integer level of verbosity. Zero means silent, higher values make the output progressively more and more verbose. } \item{indent}{ indentation for diagnostic messages. Zero means no indentation, each unit adds two spaces. } } \details{ Several alternate definitions of topological overlap are available. The oldest version is now called "unsigned"; in this version, all adjacencies are assumed to be non-negative and the topological overlap of nodes \eqn{i,j} is given by \deqn{TOM_{ij} = \frac{a_{ij} + \sum_{k\neq i,j} a_{ik}a_{kj} }{f(k_i, k_j) + 1 - a_{ij}} \, ,}{% TOM[i,j] = ( a[i,j] + \sum a[i,k] a[k,j] )/(f(k[i], k[j]) + 1 - a[i,j]),} where the sum is over \eqn{k} not equal to either \eqn{i} or \eqn{j}, the function \eqn{f} in the denominator can be either min or mean (governed by argument \code{TOMDenom}), and \eqn{k_i = \sum_{j\neq i} a_{ij}}{k[i] = sum a[i,j]} is the connectivity of node \eqn{i}.
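For orientation, the unsigned formula can also be evaluated directly in R. The following sketch (not part of the package code) assumes \code{adj} is a symmetric unsigned adjacency matrix with unit diagonal; it is illustrative only and far slower than the compiled implementation used by this function: \preformatted{
  A <- adj
  diag(A) <- 0                          # exclude i and j from the sums
  k <- colSums(A)                       # connectivity k[i] = sum_j a[i,j]
  numer <- A \%*\% A + A                # a[i,j] + sum_k a[i,k] a[k,j]
  denom <- outer(k, k, pmin) + 1 - A    # "min" denominator variant
  TOM <- numer/denom
  diag(TOM) <- 1
}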
The signed versions assume that the adjacency matrix was obtained from an underlying correlation matrix, and the element \eqn{a_{ij}}{a[i,j]} carries the sign of the underlying correlation of the two vectors. (Within WGCNA, this can really only apply to the unsigned adjacency since signed adjacencies are (essentially) zero when the underlying correlation is negative.) The signed and signed Nowick versions are similar to the above unsigned version, differing only in absolute values placed in the expression: the signed Nowick expression is \deqn{TOM_{ij} = \frac{a_{ij} + \sum_{k\neq i,j} a_{ik}a_{kj} }{f(k_i, k_j) + 1 - |a_{ij}|} \, .}{% TOM[i,j] = ( a[i,j] + \sum a[i,k] a[k,j] )/(f(k[i], k[j]) + 1 - |a[i,j]|).} This TOM lies between -1 and 1, and typically is negative when the underlying adjacency is negative. The signed TOM is simply the absolute value of the signed Nowick TOM and is hence always non-negative. For non-negative adjacencies, all 3 versions give the same result. A brief note on terminology: the original article by Nowick et al. uses the name "weighted TO" or wTO; since all of the topological overlap versions calculated in this function are weighted, we use the name signed to indicate that this TOM keeps track of the sign of the underlying correlation. The "2" versions of all 3 adjacency types have a somewhat different form in which the adjacency and the product are normalized separately. Thus, the "unsigned 2" version is \deqn{TOM^{(2)}_{ij} = \frac{1}{2}\left[a_{ij} + \frac{\sum_{k\neq i,j} a_{ik}a_{kj} }{f(k_i, k_j) - a_{ij}}\right] \, .}{% TOM2[i,j] = 0.5 ( a[i,j] + \sum a[i,k] a[k,j] /(f(k[i], k[j]) - a[i,j])).} At present the relative weight of the adjacency and the normalized product term are equal and fixed; in the future a user-specified or automatically determined weight may be implemented. The "signed Nowick 2" and "signed 2" are defined analogously to their original versions.
The adjacency is assumed to be signed, and the expression for "signed Nowick 2" TOM is \deqn{TOM^{(2)}_{ij} = \frac{1}{2}\left[a_{ij} + \frac{\sum_{k\neq i,j} a_{ik}a_{kj} }{f(k_i, k_j) - |a_{ij}| } \right] \, .}{% TOM2[i,j] = 0.5 ( a[i,j] + \sum a[i,k] a[k,j] /(f(k[i], k[j]) - |a[i,j]|)).} Analogously to "signed" TOM, "signed 2" differs from "signed Nowick 2" TOM only in taking the absolute value of the result. At present the "2" versions should all be considered experimental and are subject to change. } \value{ A matrix holding the topological overlap. } \references{ Bin Zhang and Steve Horvath (2005) "A General Framework for Weighted Gene Co-Expression Network Analysis", Statistical Applications in Genetics and Molecular Biology: Vol. 4: No. 1, Article 17 } \author{ Peter Langfelder} \seealso{ \code{\link{TOMsimilarity}} } \keyword{ misc }% __ONLY ONE__ keyword per line WGCNA/man/TOMplot.Rd0000644000176200001440000000317114012015545013510 0ustar liggesusers\name{TOMplot} \alias{TOMplot} \title{ Graphical representation of the Topological Overlap Matrix } \description{ Graphical representation of the Topological Overlap Matrix using a heatmap plot combined with the corresponding hierarchical clustering dendrogram and module colors. } \usage{ TOMplot( dissim, dendro, Colors = NULL, ColorsLeft = Colors, terrainColors = FALSE, setLayout = TRUE, ...) } \arguments{ \item{dissim}{ a matrix containing the topological overlap-based dissimilarity } \item{dendro}{ the corresponding hierarchical clustering dendrogram } \item{Colors}{ optional specification of module colors to be plotted on top } \item{ColorsLeft}{ optional specification of module colors on the left side. If \code{NULL}, \code{Colors} will be used. } \item{terrainColors}{ logical: should terrain colors be used? } \item{setLayout}{ logical: should layout be set? If \code{TRUE}, standard layout for one plot will be used. Note that this precludes multiple plots on one page. 
If \code{FALSE}, the user is responsible for setting the correct layout. } \item{\dots}{ other graphical parameters to \code{\link{heatmap}}. } } \details{ The standard \code{heatmap} function uses the \code{\link{layout}} function to set the following layout (when \code{Colors} is given): \preformatted{ 0 0 5 0 0 2 4 1 3 } To get a meaningful heatmap plot, user-set layout must respect this geometry. } \value{ None. } \author{ Steve Horvath and Peter Langfelder } \seealso{ \code{\link{heatmap}}, the workhorse function doing the plotting. } \keyword{ misc } WGCNA/man/alignExpr.Rd0000644000176200001440000000166714012015545014113 0ustar liggesusers\name{alignExpr} \alias{alignExpr} \title{ Align expression data with given vector } \description{ Multiplies genes (columns) in given expression data such that their correlation with given reference vector is non-negative. } \usage{ alignExpr(datExpr, y = NULL) } \arguments{ \item{datExpr}{ expression data to be aligned. A data frame with columns corresponding to genes and rows to samples. } \item{y}{ reference vector of length equal to the number of samples (rows) in \code{datExpr} } } \details{ The function basically multiplies each column in \code{datExpr} by the sign of its correlation with \code{y}. If \code{y} is not given, the first column in \code{datExpr} will be used as the reference vector. } \value{ A data frame containing the aligned expression data, of the same dimensions as the input data frame. } \author{ Steve Horvath and Peter Langfelder } \keyword{ misc }% __ONLY ONE__ keyword per line WGCNA/man/plotCor.Rd0000644000176200001440000000322514012015545013574 0ustar liggesusers\name{plotCor} \alias{plotCor} \title{Red and Green Color Image of Correlation Matrix} \description{ This function produces a red and green color image of a correlation matrix using an RGB color specification.
Increasingly positive correlations are represented with reds of increasing intensity, and increasingly negative correlations are represented with greens of increasing intensity. } \usage{ plotCor(x, new=FALSE, nrgcols=50, labels=FALSE, labcols=1, title="", ...) } \arguments{ \item{x}{a matrix of numerical values.} \item{new}{If \code{new=F}, \code{x} must already be a correlation matrix. If \code{new=T}, the correlation matrix for the columns of \code{x} is computed and displayed in the image.} \item{nrgcols}{the number of colors (>= 1) to be used in the red and green palette.} \item{labels}{vector of character strings to be placed at the tickpoints, labels for the columns of \code{x}.} \item{labcols}{colors to be used for the labels of the columns of \code{x}. \code{labcols} can have either length 1, in which case all the labels are displayed using the same color, or the same length as \code{labels}, in which case a color is specified for the label of each column of \code{x}.} \item{title}{character string, overall title for the plot.} \item{\dots}{graphical parameters may also be supplied as arguments to the function (see \code{\link{par}}). For comparison purposes, it is good to set \code{zlim=c(-1,1)}.} } \author{ Sandrine Dudoit, \email{sandrine@stat.berkeley.edu} } \seealso{\code{\link{plotMat}},\code{\link{rgcolors.func}}, \code{\link{cor}}, \code{\link{image}}, \code{\link{rgb}}.} \keyword{hplot} WGCNA/man/mtd.apply.Rd0000644000176200001440000001177414012015545014072 0ustar liggesusers\name{mtd.apply} \alias{mtd.apply} \alias{mtd.applyToSubset} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Apply a function to each set in a multiData structure. } \description{ Inspired by \code{\link{lapply}}, these functions apply a given function to each \code{data} component in the input \code{multiData} structure, and optionally simplify the result to an array if possible. 
} \usage{ mtd.apply( # What to do multiData, FUN, ..., # Pre-existing results and update options mdaExistingResults = NULL, mdaUpdateIndex = NULL, mdaCopyNonData = FALSE, # Output formatting options mdaSimplify = FALSE, returnList = FALSE, # Internal behaviour options mdaVerbose = 0, mdaIndent = 0) mtd.applyToSubset( # What to do multiData, FUN, ..., # Which rows and cols to keep mdaRowIndex = NULL, mdaColIndex = NULL, # Pre-existing results and update options mdaExistingResults = NULL, mdaUpdateIndex = NULL, mdaCopyNonData = FALSE, # Output formatting options mdaSimplify = FALSE, returnList = FALSE, # Internal behaviour options mdaVerbose = 0, mdaIndent = 0) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{multiData}{ A multiData structure to apply the function over } \item{FUN}{Function to be applied. } \item{\dots}{ Other arguments to the function \code{FUN}. } \item{mdaRowIndex}{If given, must be a list of the same length as \code{multiData}. Each element must be a logical or numeric vector that specifies rows in each \code{data} component to select before applying the function.} \item{mdaColIndex}{A logical or numeric vector that specifies columns in each \code{data} component to select before applying the function.} \item{mdaExistingResults}{Optional list that contains previously calculated results. This can be useful if only a few sets in \code{multiData} have changed and recalculating the unchanged ones is computationally expensive. If not given, all calculations will be performed. If given, components of this list are copied into the output. See \code{mdaUpdateIndex} for which components are re-calculated by default. } \item{mdaUpdateIndex}{Optional specification of the sets in \code{multiData} for which the calculation should actually be carried out.
If the length of \code{mdaExistingResults} (call the length `k') is less than the number of sets in \code{multiData}, the function assumes that the existing results correspond to the first `k' sets in \code{multiData} and the rest of the sets are automatically calculated, irrespective of the setting of \code{mdaUpdateIndex}. The argument \code{mdaUpdateIndex} can be used to specify re-calculation of some (or all) of the results that already exist in \code{mdaExistingResults}. } \item{mdaCopyNonData}{Logical: should non-data components of \code{multiData} be copied into the output? Note that the copying is incompatible with simplification; enabling both will trigger an error.} \item{mdaSimplify}{ Logical: should the result be simplified to an array, if possible? Note that this may lead to errors; if so, disable simplification. } \item{returnList}{Logical: should the result be turned into a list (rather than a multiData structure)? Note that this is incompatible with simplification: if \code{mdaSimplify} is \code{TRUE}, this argument is ignored.} \item{mdaVerbose}{Integer specifying whether progress diagnostics should be printed out. Zero means silent, increasing values will lead to more diagnostic messages.} \item{mdaIndent}{Integer specifying the indentation of the printed progress messages. Each unit equals two spaces.} } \details{ A multiData structure is intended to store (the same type of) data for multiple, possibly independent, realizations (for example, expression data for several independent experiments). It is a list where each component corresponds to an (independent) data set. Each component is in turn a list that can hold various types of information but must have a \code{data} component. In a "strict" multiData structure, the \code{data} components are required to each be a matrix or a data frame and have the same number of columns.
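A minimal "strict" multiData structure, and a call of \code{mtd.apply} on it, might look as follows (an illustrative sketch only; the data are simulated): \preformatted{
  mdata <- list(
    set1 = list(data = matrix(rnorm(50), 10, 5)),
    set2 = list(data = matrix(rnorm(40), 8, 5)))
  # Column means of each data component, returned as a multiData structure
  means <- mtd.apply(mdata, colMeans)
}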
In a "loose" multiData structure, the \code{data} components can be anything (but for most purposes should be of comparable type and content). \code{mtd.apply} works on any "loose" multiData structure; \code{mtd.applyToSubset} assumes (and checks for) a "strict" multiData structure. } \value{ A multiData structure containing the results of the supplied function on each \code{data} component in the input multiData structure. Other components are simply copied. } \author{ Peter Langfelder } \seealso{ \code{\link{multiData}} to create a multiData structure; \code{\link{mtd.applyToSubset}} for applying a function to a subset of a multiData structure; \code{\link{mtd.mapply}} for vectorizing over several arguments. } \keyword{ misc }% __ONLY ONE__ keyword per line WGCNA/man/plotClusterTreeSamples.Rd0000644000176200001440000001112214012015545016632 0ustar liggesusers\name{plotClusterTreeSamples} \alias{plotClusterTreeSamples} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Annotated clustering dendrogram of microarray samples } \description{ This function plots an annotated clustering dendrogram of microarray samples. } \usage{ plotClusterTreeSamples( datExpr, y = NULL, traitLabels = NULL, yLabels = NULL, main = if (is.null(y)) "Sample dendrogram" else "Sample dendrogram and trait indicator", setLayout = TRUE, autoColorHeight = TRUE, colorHeight = 0.3, dendroLabels = NULL, addGuide = FALSE, guideAll = TRUE, guideCount = NULL, guideHang = 0.2, cex.traitLabels = 0.8, cex.dendroLabels = 0.9, marAll = c(1, 5, 3, 1), saveMar = TRUE, abHeight = NULL, abCol = "red", ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{datExpr}{ a data frame containing expression data, with rows corresponding to samples and columns to genes. Missing values are allowed and will be ignored. } \item{y}{ microarray sample trait.
Either a vector with one entry per sample, or a matrix in which each column corresponds to a (different) trait and each row to a sample. } \item{traitLabels}{ labels to be printed next to the color rows depicting sample traits. Defaults to column names of \code{y}.} \item{yLabels}{Optional labels to identify colors in the row identifying the sample classes. If given, must be of the same dimensions as \code{y}. Each label that occurs will be displayed once.} \item{main}{ title for the plot. } \item{setLayout}{ logical: should the plotting device be partitioned into a standard layout? If \code{FALSE}, the user is responsible for partitioning. The function expects two regions of the same width, the first one immediately above the second one. } \item{autoColorHeight}{ logical: should the height of the color area below the dendrogram be automatically adjusted for the number of traits? Only effective if \code{setLayout} is \code{TRUE}. } \item{colorHeight}{ Specifies the height of the color area under dendrogram as a fraction of the height of the dendrogram area. Only effective when \code{autoColorHeight} above is \code{FALSE}. } \item{dendroLabels}{ dendrogram labels. Set to \code{FALSE} to disable dendrogram labels altogether; set to \code{NULL} to use row labels of \code{datExpr}. } \item{addGuide}{ logical: should vertical "guide lines" be added to the dendrogram plot? The lines make it easier to identify color codes with individual samples. } \item{guideAll}{ logical: add a guide line for every sample? Only effective for \code{addGuide} set \code{TRUE}. } \item{guideCount}{ number of guide lines to be plotted. Only effective when \code{addGuide} is \code{TRUE} and \code{guideAll} is \code{FALSE}. } \item{guideHang}{ fraction of the dendrogram height to leave between the top end of the guide line and the dendrogram merge height. If the guide lines overlap with dendrogram labels, increase \code{guideHang} to leave more space for the labels. 
} \item{cex.traitLabels}{ character expansion factor for trait labels. } \item{cex.dendroLabels}{ character expansion factor for dendrogram (sample) labels. } \item{marAll}{ a 4-element vector giving the bottom, left, top and right margins around the combined plot. Note that this is not the same as setting the margins via a call to \code{\link{par}}, because the bottom margin of the dendrogram and the top margin of the color underneath are always zero. } \item{saveMar}{ logical: save margins setting before starting the plot and restore on exit? } \item{abHeight}{ optional specification of the height for a horizontal line in the dendrogram, see \code{\link{abline}}. } \item{abCol}{ color for plotting the horizontal line. } \item{\dots}{ other graphical parameters to \code{\link{plot.hclust}}. } } \details{ The function generates an average linkage hierarchical clustering dendrogram (see \code{\link[stats]{hclust}}) of samples from the given expression data, using the Euclidean distance between samples. The dendrogram is plotted together with color annotation for the samples. The trait \code{y} must be numeric. If \code{y} is integer, the colors will correspond to values. If \code{y} is continuous, it will be dichotomized to two classes, below and above median. } \value{ None. } \author{ Steve Horvath and Peter Langfelder} \seealso{ \code{\link[stats]{dist}}, \code{\link[stats]{hclust}}, \code{\link{plotDendroAndColors}} } \keyword{ hplot } \keyword{ misc } WGCNA/man/nearestCentroidPredictor.Rd0000644000176200001440000002544614012015545017170 0ustar liggesusers\name{nearestCentroidPredictor} \alias{nearestCentroidPredictor} \title{ Nearest centroid predictor } \description{ Nearest centroid predictor for binary (i.e., two-outcome) data. Implements a whole host of options and improvements such as accounting for within-class heterogeneity using sample networks, various ways of feature selection and weighing etc.
} \usage{ nearestCentroidPredictor( # Input training and test data x, y, xtest = NULL, # Feature weights and selection criteria featureSignificance = NULL, assocFnc = "cor", assocOptions = "use = 'p'", assocCut.hi = NULL, assocCut.lo = NULL, nFeatures.hi = 10, nFeatures.lo = 10, weighFeaturesByAssociation = 0, scaleFeatureMean = TRUE, scaleFeatureVar = TRUE, # Predictor options centroidMethod = c("mean", "eigensample"), simFnc = "cor", simOptions = "use = 'p'", useQuantile = NULL, sampleWeights = NULL, weighSimByPrediction = 0, # What should be returned CVfold = 0, returnFactor = FALSE, # General options randomSeed = 12345, verbose = 2, indent = 0) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x}{ Training features (predictive variables). Each column corresponds to a feature and each row to an observation. } \item{y}{ The response variable. Can be a single vector or a matrix with arbitrary many columns. Number of rows (observations) must equal the number of rows (observations) in x. } \item{xtest}{ Optional test set data. A matrix of the same number of columns (i.e., features) as \code{x}. If test set data are not given, only the prediction on training data will be returned. } \item{featureSignificance}{ Optional vector of feature significance for the response variable. If given, it is used for feature selection (see details). Should preferably be signed, that is features can have high negative significance. } \item{assocFnc}{ Character string specifying the association function. The association function should behave roughly as \code{\link{cor}} in that it takes two arguments (a matrix and a vector) plus options and returns the vector of associations between the columns of the matrix and the vector. The associations may be signed (i.e., negative or positive). } \item{assocOptions}{ Character string specifying options to the association function.
} \item{assocCut.hi}{ Association (or featureSignificance) threshold for including features in the predictor. Features with association higher than \code{assocCut.hi} will be included. If not given, the threshold method will not be used; instead, a fixed number of features will be included as specified by \code{nFeatures.hi} and \code{nFeatures.lo}. } \item{assocCut.lo}{ Association (or featureSignificance) threshold for including features in the predictor. Features with association lower than \code{assocCut.lo} will be included. If not given, defaults to \code{-assocCut.hi}. If \code{assocCut.hi} is \code{NULL}, the threshold method will not be used; instead, a fixed number of features will be included as specified by \code{nFeatures.hi} and \code{nFeatures.lo}. } \item{nFeatures.hi}{ Number of highest-associated features (or features with highest \code{featureSignificance}) to include in the predictor. Only used if \code{assocCut.hi} is \code{NULL}. } \item{nFeatures.lo}{ Number of lowest-associated features (or features with lowest \code{featureSignificance}) to include in the predictor. Only used if \code{assocCut.hi} is \code{NULL}. } \item{weighFeaturesByAssociation}{ (Optional) power to downweigh features that are less associated with the response. See details. } \item{scaleFeatureMean}{ Logical: should the training features be scaled to mean zero? Unless there are good reasons not to scale, the features should be scaled. } \item{scaleFeatureVar}{ Logical: should the training features be scaled to unit variance? Again, unless there are good reasons not to scale, the features should be scaled. } \item{centroidMethod}{ One of \code{"mean"} and \code{"eigensample"}, specifies how the centroid should be calculated. \code{"mean"} takes the mean across all samples (or all samples within a sample module, if sample networks are used), whereas \code{"eigensample"} calculates the first principal component of the feature matrix and uses that as the centroid.
} \item{simFnc}{ Character string giving the similarity function for measuring the similarity between test samples and centroids. This function should behave roughly like the function \code{\link{cor}} in that it takes two arguments (\code{x}, \code{y}) and calculates the pair-wise similarities between columns of \code{x} and \code{y}. For convenience, the value \code{"dist"} is treated specially: the Euclidean distance between the columns of \code{x} and \code{y} is calculated and its negative is returned (so that smallest distance corresponds to highest similarity). Since values of this function are only used for ranking centroids, its values are not restricted to be positive or within certain bounds. } \item{simOptions}{ Character string specifying the options to the similarity function. } \item{useQuantile}{ If non-NULL, the "nearest quantiloid" will be used instead of the nearest centroid. See details. } \item{sampleWeights}{ Optional specification of sample weights. Useful for example if one wants to explore boosting. } \item{weighSimByPrediction}{ (Optional) power to downweigh features that are not well predicted between training and test sets. See details. } \item{CVfold}{ Non-negative integer specifying cross-validation. Zero means no cross-validation will be performed. Values above zero specify the number of samples to be considered test data for each step of cross-validation. } \item{returnFactor}{ Logical: should a factor be returned? } \item{randomSeed}{ Integer specifying the seed for the random number generator. If \code{NULL}, the seed will not be set. See \code{\link{set.seed}}. } \item{verbose}{ Integer controlling how verbose the diagnostic messages should be. Zero means silent. } \item{indent}{ Indentation for the diagnostic messages. Zero means no indentation, each unit adds two spaces.
} } \details{ Nearest centroid predictor works by forming a representative profile (centroid) across features for each class from the training data, then assigning each test sample to the class of the nearest representative profile. The representative profile can be formed either as the mean or as the first principal component ("eigensample"; this choice is governed by the option \code{centroidMethod}). When the number of features is large and only a small fraction is likely to be associated with the outcome, feature selection can be used to restrict the features that actually enter the centroid. Feature selection can be based either on the features' association with the outcome calculated from the training data using \code{assocFnc}, or on user-supplied feature significance (e.g., derived from literature, argument \code{featureSignificance}). In either case, features can be selected by high and low association thresholds or by taking a fixed number of highest- and lowest-associated features. As an alternative to centroids, the predictor can also assign test samples based on a given quantile of the distances from the training samples in each class (argument \code{useQuantile}). This may be advantageous if the samples in each class form irregular clusters. Note that setting \code{useQuantile=0} (i.e., using minimum distance in each class) essentially gives a nearest neighbor predictor: each test sample will be assigned to the class of its nearest training neighbor. If features exhibit non-trivial correlations among themselves (such as, for example, in gene expression data), one can attempt to down-weigh features that do not exhibit the same correlation in the test set. This is done by using essentially the same predictor to predict _features_ from all other features in the test data (using the training data to train the feature predictor). Because test features are known, the prediction accuracy can be evaluated.
If a feature is predicted badly (meaning the error in the test set is much larger than the error in the cross-validation prediction in training data), it may mean that its quality in the training or test data is low (for example, due to excessive noise or outliers). Such features can be downweighed using the argument \code{weighSimByPrediction}. The extra factor is min(1, (root mean square prediction error in test set)/(root mean square cross-validation prediction error in the training data)^weighSimByPrediction), that is it is never bigger than 1. Unless the features' mean and variance can be ascribed clear meaning, the (training) features should be scaled to mean 0 and variance 1 before the centroids are formed. The function implements a basic option for removal of spurious effects in the training and test data, by removing a fixed number of leading principal components from the features. This sometimes leads to better prediction accuracy but should be used with caution. If samples within each class are heterogeneous, a single centroid may not represent each class well. This function can deal with within-class heterogeneity by clustering samples (separately in each class), then using one representative (mean, eigensample) or quantile for each cluster in each class to assign test samples. Various similarity measures, specified by \code{adjFnc}, can be used to construct the sample network adjacency. Similarly, the user can specify a clustering function using \code{clusteringFnc}. The requirements on the clustering function are described in a separate section below.
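As an illustration, a typical call on simulated data might look as follows (a sketch only; argument values are arbitrary and the data are artificial): \preformatted{
  set.seed(1)
  x <- matrix(rnorm(60*100), 60, 100)     # 60 samples, 100 features
  y <- rep(c(1, 2), each = 30)            # binary outcome
  x[y==2, 1:5] <- x[y==2, 1:5] + 1        # make a few features informative
  xtest <- matrix(rnorm(20*100), 20, 100)
  pred <- nearestCentroidPredictor(x, y, xtest,
                                   nFeatures.hi = 5, nFeatures.lo = 5,
                                   CVfold = 5)
  table(pred$predicted, y)                # back-substitution accuracy
}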
} \value{ A list with the following components: \item{predicted}{The back-substitution prediction in the training set.} \item{predictedTest}{Prediction in the test set.} \item{featureSignificance}{A vector of feature significance calculated by \code{assocFnc} or a copy of the input \code{featureSignificance} if the latter is non-NULL.} \item{selectedFeatures}{A vector giving the indices of the features that were selected for the predictor.} \item{centroidProfile}{The representative profiles of each class (or cluster). Only returned if \code{useQuantile} is \code{NULL}. } \item{testSample2centroidSimilarities}{A matrix of calculated similarities between the test samples and class/cluster centroids.} \item{featureValidationWeights}{A vector of validation weights (see Details) for the selected features. If \code{weighSimByPrediction} is 0, a unit vector is used and returned.} \item{CVpredicted}{Cross-validation prediction on the training data. Present only if \code{CVfold} is non-zero.} \item{sampleClusterLabels}{A list with two components (one per class). Each component is a vector of sample cluster labels for samples in the class.} } \author{ Peter Langfelder } \seealso{ \code{\link{votingLinearPredictor}} } \keyword{misc} WGCNA/man/BrainRegionMarkers.Rd0000644000176200001440000000225014022073754015702 0ustar liggesusers\name{BrainRegionMarkers} \alias{BrainRegionMarkers} \docType{data} \title{Gene Markers for Regions of the Human Brain} \description{ This matrix gives a predefined set of marker genes for many regions of the human brain, using data from the Allen Human Brain Atlas (https://human.brain-map.org/) as reported in: Hawrylycz MJ, Lein ES, Guillozet-Bongaarts AL, Shen EH, Ng L, Miller JA, et al. (2012) An Anatomically Comprehensive Atlas of the Adult Human Brain Transcriptome. Nature (in press). It is used with userListEnrichment to search user-defined gene lists for enrichment.
} \usage{data(BrainRegionMarkers)} \format{ A 28477 x 2 matrix of characters containing Gene / Category pairs. The first column (Gene) lists genes corresponding to a given category (second column). Each Category entry is of the form ___HBA. Note that the matrix is sorted first by Category and then by Gene, such that all genes related to the same category are listed sequentially. } \source{ For references used in this variable, and other information, please see \code{\link{userListEnrichment}} } \examples{ data(BrainRegionMarkers) head(BrainRegionMarkers) } \keyword{datasets} WGCNA/man/allowWGCNAThreads.Rd0000644000176200001440000000377214012015545015372 0ustar liggesusers\name{allowWGCNAThreads} \alias{allowWGCNAThreads} \alias{enableWGCNAThreads} \alias{disableWGCNAThreads} \alias{WGCNAnThreads} \title{ Allow and disable multi-threading for certain WGCNA calculations } \description{ These functions allow and disable multi-threading for WGCNA calculations that can optionally be multi-threaded, which includes all functions using \code{\link{cor}} or \code{\link{bicor}}. } \usage{ allowWGCNAThreads(nThreads = NULL) enableWGCNAThreads(nThreads = NULL) disableWGCNAThreads() WGCNAnThreads() } %- maybe also 'usage' for other objects documented here. \arguments{ \item{nThreads}{ Number of threads to allow. If not given, the number of processors online (as reported by system configuration) will be used. There appear to be some cases where the automatically-determined number is wrong; please check the output to see that the number of threads makes sense. Except for testing and/or torturing your system, the number of threads should be no more than the number of actual processors/cores. } } \details{ \code{allowWGCNAThreads} enables parallel calculation within the compiled code in WGCNA, principally for calculation of correlations in the presence of missing data. This function is now deprecated; use \code{enableWGCNAThreads} instead. 
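A typical session looks like this (a sketch; requires the WGCNA package to be installed, and the thread count of 4 is only an example):

```r
library(WGCNA)
enableWGCNAThreads(nThreads = 4)  # or omit nThreads to auto-detect
WGCNAnThreads()                   # verify the configured number of threads
# ... run correlation-heavy calculations (cor, bicor, blockwiseModules, ...) ...
disableWGCNAThreads()
```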
\code{enableWGCNAThreads} enables parallel calculations within user-level R functions as well as within the compiled code, and registers an appropriate parallel calculation back-end for the operating system/platform. \code{disableWGCNAThreads} disables parallel processing. \code{WGCNAnThreads} returns the number of threads (parallel processes) that WGCNA is currently configured to run with. } \value{ \code{allowWGCNAThreads}, \code{enableWGCNAThreads}, and \code{disableWGCNAThreads} return the maximum number of threads WGCNA calculations will be allowed to use. } \note{ Multi-threading within compiled code is not available on Windows; R code parallelization works on all platforms. } \author{ Peter Langfelder } \keyword{misc} WGCNA/man/matchLabels.Rd0000644000176200001440000000570214012015545014373 0ustar liggesusers\name{matchLabels} \alias{matchLabels} \title{ Relabel module labels to best match the given reference labels } \description{ Given \code{source} and \code{reference} vectors of module labels, the function produces a module labeling that is equivalent to \code{source}, but individual modules are re-labeled so that modules with significant overlap in \code{source} and \code{reference} have the same labels. } \usage{ matchLabels(source, reference, pThreshold = 5e-2, na.rm = TRUE, ignoreLabels = if (is.numeric(reference)) 0 else "grey", extraLabels = if (is.numeric(reference)) c(1:1000) else standardColors() ) } \arguments{ \item{source}{ a vector or a matrix of module labels to be matched to the reference. The labels may be numeric or character. } \item{reference}{ a vector of reference labels. } \item{pThreshold}{ threshold of Fisher's exact test for considering modules to have a significant overlap. } \item{na.rm}{logical: should missing values in either \code{source} or \code{reference} be removed? 
If not, missing values may be treated as a standard label or the function may throw an error (exact behaviour depends on whether the input labels are numeric or not).} \item{ignoreLabels}{labels in \code{source} and \code{reference} to be considered unmatchable. These labels are excluded from the re-labeling procedure.} \item{extraLabels}{a vector of labels for modules in \code{source} that cannot be matched to any modules in \code{reference}. The user should ensure that this vector contains enough labels since the function automatically removes values that occur in either \code{source}, \code{reference} or \code{ignoreLabels}, to avoid possible confusion. } } \details{ Each column of \code{source} is treated separately. Unlike in previous versions of this function, source and reference labels can be any labels, not necessarily of the same type. The function calculates the overlap of the \code{source} and \code{reference} modules using Fisher's exact test. It then attempts to relabel \code{source} modules such that each \code{source} module gets the label of the \code{reference} module that it overlaps most with, subject to not renaming two \code{source} modules to the same \code{reference} module. (If two \code{source} modules point to the same \code{reference} module, the one with the more significant overlap is chosen.) Those \code{source} modules that cannot be matched to a \code{reference} module are labeled using those labels from \code{extraLabels} that do not occur in either of \code{source}, \code{reference} or \code{ignoreLabels}. } \value{ A vector (if the input \code{source} labels are a vector) or a matrix (if the input \code{source} labels are a matrix) of the new labels. } \author{ Peter Langfelder } \seealso{ \code{\link{overlapTable}} for calculation of overlap counts and p-values; \code{\link{standardColors}} for standard non-numeric WGCNA labels. 
} \keyword{ misc } WGCNA/man/qvalue.restricted.Rd0000644000176200001440000000173614012015545015623 0ustar liggesusers\name{qvalue.restricted} \alias{qvalue.restricted} \title{ qvalue convenience wrapper } \description{ This function calls \code{\link{qvalue}} on finite input p-values, optionally traps errors from the q-value calculation, and returns just the q-values. } \usage{ qvalue.restricted(p, trapErrors = TRUE, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{p}{ a vector of p-values. Missing data are allowed and will be removed. } \item{trapErrors}{ logical: should errors generated by function \code{\link{qvalue}} be trapped? If \code{TRUE}, the errors will be silently ignored and the returned q-values will all be \code{NA}. } \item{\dots}{ other arguments to function \code{\link{qvalue}}. } } \value{ A vector of q-values. Entries whose corresponding p-values were not finite will be \code{NA}. } \author{ Peter Langfelder } %% ~Make other sections like Warning with \section{Warning }{....} ~ \seealso{ \code{\link{qvalue}} } \keyword{misc} WGCNA/man/intramodularConnectivity.Rd0000644000176200001440000000742514012015545017260 0ustar liggesusers\name{intramodularConnectivity} \alias{intramodularConnectivity} \alias{intramodularConnectivity.fromExpr} \title{ Calculation of intramodular connectivity } \description{ Calculates intramodular connectivity, i.e., connectivity of nodes to other nodes within the same module. } \usage{ intramodularConnectivity(adjMat, colors, scaleByMax = FALSE) intramodularConnectivity.fromExpr(datExpr, colors, corFnc = "cor", corOptions = "use = 'p'", weights = NULL, distFnc = "dist", distOptions = "method = 'euclidean'", networkType = "unsigned", power = if (networkType=="distance") 1 else 6, scaleByMax = FALSE, ignoreColors = if (is.numeric(colors)) 0 else "grey", getWholeNetworkConnectivity = TRUE) } \arguments{ \item{adjMat}{ adjacency matrix, a square, symmetric matrix with entries between 0 and 1. 
} \item{colors}{ module labels. A vector of length \code{ncol(adjMat)} giving a module label for each gene (node) of the network.} \item{scaleByMax}{ logical: should intramodular connectivities be scaled by the maximum IM connectivity in each module? } \item{datExpr}{ data frame or matrix containing expression data. Columns correspond to genes and rows to samples.} \item{corFnc}{ character string specifying the function to be used to calculate co-expression similarity for correlation networks. Defaults to Pearson correlation. Any function returning values between -1 and 1 can be used. } \item{corOptions}{ character string specifying additional arguments to be passed to the function given by \code{corFnc}. Use \code{"use = 'p', method = 'spearman'"} to obtain Spearman correlation. } \item{weights}{optional matrix of the same dimensions as \code{datExpr}, giving the weights for individual observations in \code{datExpr}. These will be passed on to the correlation function.} \item{distFnc}{ character string specifying the function to be used to calculate co-expression similarity for distance networks. Defaults to the function \code{\link{dist}}. Any function returning non-negative values can be used.} \item{distOptions}{ character string specifying additional arguments to be passed to the function given by \code{distFnc}. For example, when the function \code{\link{dist}} is used, the argument \code{method} can be used to specify various ways of computing the distance. } \item{networkType}{network type. Allowed values are (unique abbreviations of) \code{"unsigned"}, \code{"signed"}, \code{"signed hybrid"}, \code{"distance"}. } \item{power}{soft thresholding power. } \item{ignoreColors}{level(s) of \code{colors} that identifies unassigned genes. The intramodular connectivity in this "module" will not be calculated.} \item{getWholeNetworkConnectivity}{logical: should whole-network connectivity be computed as well? 
For large networks, this can be quite time-consuming.} } \details{ The module labels can be numeric or character. For each node (gene), the function sums adjacency entries (excluding the diagonal) to other nodes within the same module. Optionally, the connectivities can be scaled by the maximum connectivity in each module. } \value{ If input \code{getWholeNetworkConnectivity} is \code{TRUE}, a data frame with 4 columns giving the total connectivity, intramodular connectivity, extra-modular connectivity, and the difference of the intra- and extra-modular connectivities for all genes; otherwise a vector of intramodular connectivities. } \references{ Dong J, Horvath S (2007) Understanding Network Concepts in Modules, BMC Systems Biology 2007, 1:24 } \author{ Steve Horvath and Peter Langfelder} \seealso{ \code{\link{adjacency}} } \keyword{ misc }% __ONLY ONE__ keyword per line WGCNA/man/stratifiedBarplot.Rd0000644000176200001440000001035714012015545015640 0ustar liggesusers\name{stratifiedBarplot} \alias{stratifiedBarplot} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Bar plots of data across two splitting parameters } \description{ This function takes an expression matrix which can be split using two separate splitting parameters (i.e., control vs. AD with multiple brain regions), and plots the results as a barplot. Group averages, standard deviations, and relevant Kruskal-Wallis p-values are returned. } \usage{ stratifiedBarplot( expAll, groups, split, subset, genes = NA, scale = "N", graph = TRUE, las1 = 2, cex1 = 1.5, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{expAll}{ An expression matrix, with rows as samples and genes/probes as columns. If genes=NA, then column names must be included. } \item{groups}{ A character vector corresponding to the samples in expAll, with each element the group name of the relevant sample or NA for samples not in any group. 
For example: NA, NA, NA, Con, Con, Con, Con, AD, AD, AD, AD, NA, NA. This trait will be plotted as adjacent bars for each split. } \item{split}{ A character vector corresponding to the samples in expAll, with each element the group splitting name of the relevant sample or NA for samples not in any group. For example: NA, NA, NA, Hip, Hip, EC, EC, Hip, Hip, EC, EC, NA, NA. This trait will be plotted as the same color across each split of the barplot. For the function to work properly, the same split values should be supplied for each group. } \item{subset}{ A list of one or more genes to compare the expression with. If the list contains more than one gene, the first element contains the group name. For example, Ribosomes, RPL3, RPL4, RPS3. } \item{genes}{ If entered, this parameter is a list of gene/probe identifiers corresponding to the columns in expAll. } \item{scale}{ For subsets of genes that include more than one gene, this parameter determines how the genes are combined into a single value. Currently, there are five options: 1) ("N")o scaling (default); 2) first divide each gene by the ("A")verage across samples; 3) first scale genes to ("Z")-score across samples; 4) only take the top ("H")ub gene (ignore all but the highest-connected gene); and 5) take the ("M")odule eigengene. Note that these scaling methods have not been sufficiently tested, and should be considered experimental. } \item{graph}{ If TRUE (default), a bar plot is made. If FALSE, only the results are returned, and no plot is made. } \item{cex1}{ Sets the graphing parameters of cex.axis and cex.names (default=1.5) } \item{las1}{ Sets the graphing parameter las (default=2). } \item{\dots}{ Other graphing parameters allowed in the barplot function. Note that the parameters for cex.axis, cex.names, and las are superseded by cex1 and las1 and will therefore be ignored. } } \value{ \item{splitGroupMeans}{ The group/split averaged expression across each group and split combination. 
This is the height of the bars in the graph. } \item{splitGroupSDs}{ The standard deviation of group/split expression across each group and split combination. This is the height of the error bars in the graph. } \item{splitPvals}{ Kruskal-Wallis p-values for each splitting parameter across groups. } \item{groupPvals}{ Kruskal-Wallis p-values for each group parameter across splits. } } \author{ Jeremy Miller } \seealso{ \code{\link{barplot}}, \code{\link{verboseBarplot}} } \examples{ # Example: first simulate some data set.seed(100) ME.A = sample(1:100,50); ME.B = sample(1:100,50) ME.C = sample(1:100,50); ME.D = sample(1:100,50) ME1 = data.frame(ME.A, ME.B, ME.C, ME.D) simDatA = simulateDatExpr(ME1,1000,c(0.2,0.1,0.08,0.05,0.3), signed=TRUE) datExpr = simDatA$datExpr+5 datExpr[1:10,] = datExpr[1:10,]+2 datExpr[41:50,] = datExpr[41:50,]-1 # Now split up the data and plot it! subset = c("Random Genes", "Gene.1", "Gene.234", "Gene.56", "Gene.789") groups = rep(c("A","A","A","B","B","B","C","C","C","C"),5) split = c(rep("ZZ",10), rep("YY",10), rep("XX",10), rep("WW",10), rep("VV",10)) par(mfrow = c(1,1)) results = stratifiedBarplot(datExpr, groups, split, subset) results # Now plot it the other way results = stratifiedBarplot(datExpr, split, groups, subset) } \keyword{misc}% __ONLY ONE__ keyword per line WGCNA/man/blockSize.Rd0000644000176200001440000000516214012015545014101 0ustar liggesusers\name{blockSize} \alias{blockSize} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Attempt to calculate an appropriate block size to maximize efficiency of block-wise calculations. } \description{ The function uses a rather primitive way to estimate available memory and uses it to suggest a block size appropriate for the many block-by-block calculations in this package. } \usage{ blockSize( matrixSize, rectangularBlocks = TRUE, maxMemoryAllocation = NULL, overheadFactor = 3); } %- maybe also 'usage' for other objects documented here. 
\arguments{ \item{matrixSize}{the relevant dimension (usually the number of columns) of the matrix that is to be operated on block-by-block. } \item{rectangularBlocks}{logical indicating whether the blocks of data are rectangular (of size \code{blockSize} times \code{matrixSize}) or square (of size \code{blockSize} times \code{blockSize}).} \item{maxMemoryAllocation}{maximum desired memory allocation, in bytes. Should not exceed 2GB or total installed RAM (whichever is greater) on 32-bit systems, while on 64-bit systems it should not exceed the total installed RAM. If not supplied, the available memory will be estimated internally. } \item{overheadFactor}{overhead factor for the memory use by R. Recommended values are between 2 (for simple calculations) and 4 or more for complicated calculations where intermediate results (for which R must also allocate memory) take up a lot of space.} } \details{ Multiple functions within the WGCNA package use a divide-and-conquer (also known as block-by-block, or block-wise) approach to handling large data sets. This function is meant to assist in choosing a suitable block size, given the size of the data and the available memory. If the entire expected result fits into the allowed memory (after taking into account the expected overhead), the returned block size will equal the input \code{matrixSize}. The internal estimation of available memory works by returning the size of the largest successfully allocated block of memory. It is hoped that this will lead to reasonable results but some operating systems may actually allocate more than is available. It is therefore preferable that the user specifies the available memory by hand. } \value{ A single integer giving the suggested block size, or \code{matrixSize} if the entire calculation is expected to fit into memory in one piece. 
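The arithmetic behind a rectangular-block choice can be illustrated directly. This is only back-of-the-envelope reasoning (8 bytes per double-precision entry; the constants mirror the example in this help page), and the value returned by the actual blockSize function may differ:

```r
# Memory needed per block row is roughly:
#   matrixSize entries * 8 bytes per double * overheadFactor
matrixSize     = 30000
maxMemory      = 2^31   # 2 GB
overheadFactor = 3

bytesPerRow = matrixSize * 8 * overheadFactor
blockRows   = floor(maxMemory / bytesPerRow)
blockRows   # about 2982 rows per block under these assumptions
```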
} \author{ Peter Langfelder } \examples{ # Suitable blocks for handling 30,000 genes within 2GB (=2^31 bytes) of memory blockSize(30000, rectangularBlocks = TRUE, maxMemoryAllocation = 2^31) } \keyword{misc}% __ONLY ONE__ keyword per line WGCNA/man/bicor.Rd0000644000176200001440000001554114672545314013272 0ustar liggesusers\name{bicor} \alias{bicor} \title{ Biweight Midcorrelation } \description{ Calculate biweight midcorrelation efficiently for matrices. } \usage{ bicor(x, y = NULL, robustX = TRUE, robustY = TRUE, use = "all.obs", maxPOutliers = 1, quick = 0, pearsonFallback = "individual", cosine = FALSE, cosineX = cosine, cosineY = cosine, nThreads = 0, verbose = 0, indent = 0) } \arguments{ \item{x}{ a vector or matrix-like numeric object } \item{y}{ a vector or matrix-like numeric object } \item{robustX}{ use robust calculation for \code{x}?} \item{robustY}{ use robust calculation for \code{y}?} \item{use}{ specifies handling of \code{NA}s. One of (unique abbreviations of) "all.obs", "pairwise.complete.obs". } \item{maxPOutliers}{ specifies the maximum percentile of data that can be considered outliers on either side of the median separately. For each side of the median, if a higher percentile than \code{maxPOutliers} is considered an outlier by the weight function based on \code{9*mad(x)}, the width of the weight function is increased such that the percentile of outliers on that side of the median equals \code{maxPOutliers}. Using \code{maxPOutliers=1} will effectively disable all weight function broadening; using \code{maxPOutliers=0} will give results that are quite similar (but not equal to) Pearson correlation. } \item{quick}{ real number between 0 and 1 that controls the handling of missing data in the calculation of correlations. See details. } \item{pearsonFallback}{Specifies whether the bicor calculation should revert to Pearson when median absolute deviation (mad) is zero. Recognized values are (abbreviations of) \code{"none", "individual", "all"}. 
If set to \code{"none"}, zero mad will result in \code{NA} for the corresponding correlation. If set to \code{"individual"}, Pearson calculation will be used only for columns that have zero mad. If set to \code{"all"}, the presence of a single zero mad will cause the whole variable to be treated in Pearson correlation manner (as if the corresponding \code{robust} option was set to \code{FALSE}). } \item{cosine}{ logical: calculate cosine biweight midcorrelation? Cosine bicorrelation is similar to standard bicorrelation but the median subtraction is not performed. } \item{cosineX}{ logical: use the cosine calculation for \code{x}? This setting does not affect \code{y} and can be used to give a hybrid cosine-standard bicorrelation. } \item{cosineY}{ logical: use the cosine calculation for \code{y}? This setting does not affect \code{x} and can be used to give a hybrid cosine-standard bicorrelation. } \item{nThreads}{ non-negative integer specifying the number of parallel threads to be used by certain parts of correlation calculations. This option only has an effect on systems on which a POSIX thread library is available (which currently includes Linux and Mac OSX, but excludes Windows). If zero, the number of online processors will be used if it can be determined dynamically, otherwise correlation calculations will use 2 threads. Note that this option does not affect what is usually the most expensive part of the calculation, namely the matrix multiplication. The matrix multiplication is carried out by BLAS routines provided by R; these can be sped up by installing a fast BLAS and making R use it.} \item{verbose}{ if non-zero, the underlying C function will print some diagnostics.} \item{indent}{ indentation for diagnostic messages. Zero means no indentation, each unit adds two spaces. } } \details{ This function implements biweight midcorrelation calculation (see references). 
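For a single pair of vectors, the calculation can be sketched in base R. This is a simplified illustration of the published formula that ignores maxPOutliers, missing data, and the Pearson fallback; bicorVec is not a WGCNA function, and WGCNA's compiled implementation should be used for real work:

```r
# Biweight midcorrelation of two vectors: observations are weighted by
# (1 - u^2)^2, where u measures the distance from the median in units of
# 9 times the (unscaled) median absolute deviation; points with |u| >= 1
# receive zero weight, which makes the estimate robust to outliers.
bicorVec = function(x, y) {
  ux = (x - median(x)) / (9 * mad(x, constant = 1))
  uy = (y - median(y)) / (9 * mad(y, constant = 1))
  wx = (1 - ux^2)^2 * (abs(ux) < 1)
  wy = (1 - uy^2)^2 * (abs(uy) < 1)
  a = (x - median(x)) * wx
  b = (y - median(y)) * wy
  sum(a * b) / sqrt(sum(a^2) * sum(b^2))
}

set.seed(42)
x = rnorm(100)
y = x + rnorm(100)
bicorVec(x, y)  # robust correlation estimate between x and y
```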
If \code{y} is not supplied, midcorrelation of columns of \code{x} will be calculated; otherwise, the midcorrelation between columns of \code{x} and \code{y} will be calculated. Thus, \code{bicor(x)} is equivalent to \code{bicor(x,x)} but is more efficient. The options \code{robustX}, \code{robustY} allow the user to revert the calculation to standard correlation calculation. This is important, for example, if any of the variables is binary (or, more generally, discrete) as in such cases the robust methods produce meaningless results. If both \code{robustX}, \code{robustY} are set to \code{FALSE}, the function calculates the standard Pearson correlation (but is slower than the function \code{\link{cor}}). The argument \code{quick} specifies the precision of handling of missing data in the correlation calculations. Value \code{quick = 0} will cause all calculations to be executed accurately, which may be significantly slower than calculations without missing data. Progressively higher values will speed up the calculations but introduce progressively larger errors. Without missing data, all column medians and median absolute deviations (MADs) can be pre-calculated before the covariances are calculated. When missing data are present, exact calculations require the column medians and MADs to be calculated for each covariance. The approximate calculation uses the pre-calculated median and MAD and simply ignores missing data in the covariance calculation. If the amount of missing data is large, the pre-calculated medians and MADs may be very different from the actual ones, thus potentially introducing large errors. The \code{quick} value times the number of rows specifies the maximum difference in the number of missing entries for median and MAD calculations on the one hand and covariance on the other hand that will be tolerated before a recalculation is triggered. 
The hope is that if only a few missing data are treated approximately, the error introduced will be small but the potential speedup can be significant. The choice \code{"all"} for \code{pearsonFallback} is not fully implemented in the sense that there are rare but possible cases in which the calculation is equivalent to \code{"individual"}. This may happen if the \code{use} option is set to \code{"pairwise.complete.obs"} and the missing data are arranged such that each individual mad is non-zero, but when two columns are analyzed together, the missing data from both columns may make a mad zero. In such a case, the calculation is treated as Pearson, but other columns will be treated as bicor. } \value{ A matrix of biweight midcorrelations. Dimnames on the result are set appropriately. } \references{ Peter Langfelder, Steve Horvath (2012) Fast R Functions for Robust Correlations and Hierarchical Clustering. Journal of Statistical Software, 46(11), 1-17. \url{https://www.jstatsoft.org/v46/i11/} "Introduction to Robust Estimation and Hypothesis Testing", Rand Wilcox, Academic Press, 1997. "Data Analysis and Regression: A Second Course in Statistics", Mosteller and Tukey, Addison-Wesley, 1977, pp. 203-209. } \author{ Peter Langfelder} \keyword{ robust } WGCNA/man/clusterCoef.Rd0000644000176200001440000000065114012015545014430 0ustar liggesusers\name{clusterCoef} \alias{clusterCoef} \title{ Clustering coefficient calculation } \description{ This function calculates the clustering coefficients for all nodes in the network given by the input adjacency matrix. } \usage{ clusterCoef(adjMat) } \arguments{ \item{adjMat}{ adjacency matrix } } \value{ A vector of clustering coefficients for each node. } \author{ Steve Horvath } \keyword{ misc } WGCNA/man/collectGarbage.Rd0000644000176200001440000000054514012015545015052 0ustar liggesusers\name{collectGarbage} \alias{collectGarbage} \title{Iterative garbage collection. 
} \description{ Performs garbage collection until free memory indicators show no change. } \usage{ collectGarbage() } \value{ None. } \author{ Steve Horvath } % Add one or more standard keywords, see file 'KEYWORDS' in the % R documentation directory. \keyword{utilities} WGCNA/man/mtd.setAttr.Rd0000644000176200001440000000145414012015545014365 0ustar liggesusers\name{mtd.setAttr} \alias{mtd.setAttr} \title{ Set attributes on each component of a multiData structure } \description{ Set attributes on each \code{data} component of a multiData structure } \usage{ mtd.setAttr(multiData, attribute, valueList) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{multiData}{ A multiData structure. } \item{attribute}{ Name for the attribute to be set. } \item{valueList}{ List that gives the attribute value for each set in the multiData structure. } } \value{ The input multiData with the attribute set on each \code{data} component. } \author{ Peter Langfelder } \seealso{ \code{\link{multiData}} to create a multiData structure; \code{\link{isMultiData}} for a description of the multiData structure. } \keyword{misc}% __ONLY ONE__ keyword per line WGCNA/man/nearestNeighborConnectivityMS.Rd0000644000176200001440000000612714012015545020134 0ustar liggesusers\name{nearestNeighborConnectivityMS} \alias{nearestNeighborConnectivityMS} \title{ Connectivity to a constant number of nearest neighbors across multiple data sets } \description{ Given expression data from several sets and basic network parameters, the function calculates the connectivity of each gene to a given number of nearest neighbors in each set. } \usage{ nearestNeighborConnectivityMS(multiExpr, nNeighbors = 50, power = 6, type = "unsigned", corFnc = "cor", corOptions = "use = 'p'", blockSize = 1000, sampleLinks = NULL, nLinks = 5000, setSeed = 36492, verbose = 1, indent = 0) } \arguments{ \item{multiExpr}{ expression data in multi-set format. A vector of lists, one list per set. 
In each list there must be a component named \code{data} whose content is a matrix or dataframe or array of dimension 2 containing the expression data. Rows correspond to samples and columns to genes (probes). } \item{nNeighbors}{ number of nearest neighbors to use. } \item{power}{ soft thresholding power for network construction. Should be a number greater than 1. } \item{type}{ a character string encoding network type. Recognized values are (unique abbreviations of) \code{"unsigned"}, \code{"signed"}, and \code{"signed hybrid"}. } \item{corFnc}{ character string containing the name of the function to calculate correlation. Suggested functions include \code{"cor"} and \code{"bicor"}. } \item{corOptions}{ further arguments to the correlation function. } \item{blockSize}{ correlation calculations will be split into square blocks of this size, to prevent running out of memory for large gene sets. } \item{sampleLinks}{ logical: should network connections be sampled (\code{TRUE}) or should all connections be used systematically (\code{FALSE})? } \item{nLinks}{ number of links to be sampled. Should be set such that \code{nLinks * nNeighbors} is several times larger than the number of genes. } \item{setSeed}{ seed to be used for sampling, for repeatability. If a seed already exists, it is saved before the sampling starts and restored after. } \item{verbose}{ integer controlling the level of verbosity. 0 means silent.} \item{indent}{ integer controlling indentation of output. Each unit above 0 adds two spaces. } } \details{ Connectivity of gene \code{i} is the sum of adjacency strengths between gene \code{i} and other genes; in this case we take the \code{nNeighbors} nodes with the highest connection strength to gene \code{i}. The adjacency strengths are calculated by correlating the given expression data using the function supplied in \code{corFnc} and transforming them into adjacency according to the given network \code{type} and \code{power}. 
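The expected multi-set format can be illustrated with random data. This is a sketch only; the call to the function itself is commented out because it requires the WGCNA package, and the dimensions are arbitrary:

```r
set.seed(7)
# Two data sets measured on the same 100 genes; sample counts may differ.
multiExpr = list(
  set1 = list(data = matrix(rnorm(20 * 100), nrow = 20, ncol = 100)),
  set2 = list(data = matrix(rnorm(30 * 100), nrow = 30, ncol = 100))
)
# Rows are samples, columns are genes; gene columns must correspond across sets.
# connectivities = nearestNeighborConnectivityMS(multiExpr, nNeighbors = 50,
#                                                power = 6, type = "unsigned")
```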
} \value{ A matrix in which columns correspond to sets and rows to genes; each entry contains the nearest neighbor connectivity of the corresponding gene. } \author{ Peter Langfelder } \seealso{ \code{\link{adjacency}}, \code{\link{softConnectivity}}, \code{\link{nearestNeighborConnectivity}} } \keyword{ misc } WGCNA/man/minWhichMin.Rd0000644000176200001440000000224414012015545014364 0ustar liggesusers\name{minWhichMin} \alias{minWhichMin} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Fast joint calculation of row- or column-wise minima and indices of minimum elements } \description{ Fast joint calculation of row- or column-wise minima and indices of minimum elements. Missing data are removed. } \usage{ minWhichMin(x, byRow = FALSE, dims = 1) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x}{ A numeric matrix or array. } \item{byRow}{ Logical: should the minima and indices be found for columns (\code{FALSE}) or rows (\code{TRUE})? } \item{dims}{ Specifies dimensions for which to find the minima and indices. For \code{byRow = FALSE}, they are calculated for dimensions \code{dims+1} to \code{n=length(dim(x))}; for \code{byRow = TRUE}, they are calculated for dimensions 1,...,\code{dims}. } } \value{ A list with two components, \code{min} and \code{which}; each is a vector or array with dimensions \code{dim(x)[(dims+1):n]} (with \code{n=length(dim(x))}) if \code{byRow = FALSE}, and \code{dim(x)[1:dims]} if \code{byRow = TRUE}. } \author{ Peter Langfelder } \keyword{stats}% __ONLY ONE__ keyword per line WGCNA/man/newConsensusTree.Rd0000644000176200001440000000321614533632240015471 0ustar liggesusers\name{newConsensusTree} \alias{newConsensusTree} \alias{ConsensusTree} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Create a new consensus tree } \description{ This function creates a new consensus tree, a class for representing "recipes" for hierarchical consensus calculations. 
} \usage{ newConsensusTree( consensusOptions = newConsensusOptions(), inputs, analysisName = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{consensusOptions}{ An object of class \code{ConsensusOptions}, usually obtained by calling \code{\link{newConsensusOptions}}. } \item{inputs}{ A vector (or list) of inputs. Each component can be either a character string giving the name of a data set, or another \code{ConsensusTree} object. } \item{analysisName}{ Optional specification of a name for this consensus analysis. While this has no effect on the actual consensus calculation, some functions use this character string to make certain file names unique. } } \details{ Consensus trees specify a "recipe" for the calculation of hierarchical consensus in \code{\link{hierarchicalConsensusCalculation}} and other functions. } \value{ A list with class set to \code{"ConsensusTree"} with these components: \item{consensusOptions}{A copy of the input \code{consensusOptions}.} \item{inputs}{A copy of the input \code{inputs}.} \item{analysisName}{A copy of the input \code{analysisName}.} } \author{ Peter Langfelder } \seealso{ \code{\link{hierarchicalConsensusCalculation}} for hierarchical consensus calculation for which a \code{ConsensusTree} object specifies the recipe } \keyword{misc}% __ONLY ONE__ keyword per line WGCNA/man/sizeRestrictedClusterMerge.Rd0000644000176200001440000000450114012015545017495 0ustar liggesusers\name{sizeRestrictedClusterMerge} \alias{sizeRestrictedClusterMerge} \title{ Cluster merging with size restrictions } \description{ This function merges clusters by correlation of the first principal components such that the resulting merged clusters do not exceed a given maximum size. } \usage{ sizeRestrictedClusterMerge( datExpr, clusters, clusterSizes = NULL, centers = NULL, maxSize, networkType = "unsigned", verbose = 0, indent = 0) } %- maybe also 'usage' for other objects documented here. 
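The nested "recipe" structure described in the newConsensusTree entry above can be sketched as follows (assumes the WGCNA package is installed; the data set names are hypothetical placeholders):

```r
library(WGCNA)
opts = newConsensusOptions()
# Inner consensus of data sets "setA" and "setB" ...
innerTree = newConsensusTree(consensusOptions = opts,
                             inputs = c("setA", "setB"),
                             analysisName = "AB")
# ... then consensus of that intermediate result with data set "setC".
fullTree = newConsensusTree(consensusOptions = opts,
                            inputs = list(innerTree, "setC"),
                            analysisName = "ABC")
```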
\arguments{ \item{datExpr}{ Data on which the clustering is based (e.g., expression data). Variables are in columns and observations (samples) in rows. } \item{clusters}{ A vector with one element per variable (column) in \code{datExpr} giving the cluster label for the corresponding variable. } \item{clusterSizes}{ Optional pre-calculated cluster sizes. If not given, will be determined from the given \code{clusters}. } \item{centers}{ Optional pre-calculated cluster centers (first principal components/singular vectors). If not given, will be calculated from the given data and cluster assignments. } \item{maxSize}{ Maximum allowed size of merged clusters. If any of the given \code{clusters} are larger than \code{maxSize}, they will not be changed. } \item{networkType}{ One of \code{"unsigned"} and \code{"signed"}. Determines whether clusters with negatively correlated representatives will be considered similar (\code{"unsigned"}) or dissimilar (\code{"signed"}). } \item{verbose}{ integer level of verbosity. Zero means silent, higher values make the output progressively more and more verbose. } \item{indent}{ indentation for diagnostic messages. Zero means no indentation, each unit adds two spaces. } } \details{ The function iteratively merges the two closest clusters subject to the constraint that the merged cluster size cannot exceed \code{maxSize}. Merging stops when no two clusters can be merged without exceeding the maximum size. } \value{ A list with two components \item{clusters}{A numeric vector with one component per input gene, giving the number of the cluster to which the gene is assigned. } \item{centers}{Cluster centers, that is, their first principal components/singular vectors. } } \author{ Peter Langfelder } \seealso{ The last step in \code{\link{projectiveKMeans}} uses this function. 
} \keyword{cluster} WGCNA/man/goodGenesMS.Rd0000644000176200001440000000760514012015545014332 0ustar liggesusers\name{goodGenesMS} \alias{goodGenesMS} \title{Filter genes with too many missing entries across multiple sets } \description{ This function checks data for missing entries and returns a list of genes that have non-zero variance in all sets and pass two criteria on maximum number of missing values in each given set: the fraction of missing values must be below a given threshold and the total number of missing samples must be below a given threshold. If weights are given, entries whose relative weight is below a threshold will be considered missing. } \usage{ goodGenesMS( multiExpr, multiWeights = NULL, useSamples = NULL, useGenes = NULL, minFraction = 1/2, minNSamples = ..minNSamples, minNGenes = ..minNGenes, tol = NULL, minRelativeWeight = 0.1, verbose = 1, indent = 0) } \arguments{ \item{multiExpr}{ expression data in the multi-set format (see \code{\link{checkSets}}). A vector of lists, one per set. Each set must contain a component \code{data} that contains the expression data, with rows corresponding to samples and columns to genes or probes. } \item{multiWeights}{ optional observation weights in the same format (and dimensions) as \code{multiExpr}.} \item{useSamples}{ optional specifications of which samples to use for the check. Should be a logical vector; samples whose entries are \code{FALSE} will be ignored for the missing value counts. Defaults to using all samples.} \item{useGenes}{ optional specifications of genes for which to perform the check. Should be a logical vector; genes whose entries are \code{FALSE} will be ignored. Defaults to using all genes.} \item{minFraction}{ minimum fraction of non-missing samples for a gene to be considered good. } \item{minNSamples}{ minimum number of non-missing samples for a gene to be considered good. } \item{minNGenes}{ minimum number of good genes for the data set to be considered fit for analysis. 
If the actual number of good genes falls below this threshold, an error will be issued. } \item{tol}{ an optional 'small' number to compare the variance against. For each set in \code{multiExpr}, the default value is \code{1e-10 * max(abs(multiExpr[[set]]$data), na.rm = TRUE)}. The reason for comparing the variance to this number, rather than zero, is that the fast way of computing variance used by this function sometimes causes small numerical overflow errors which make the variance of constant vectors slightly non-zero; comparing the variance to \code{tol} rather than zero prevents retaining such genes as 'good genes'.} \item{minRelativeWeight}{ observations whose relative weight is below this threshold will be considered missing. Here relative weight is weight divided by the maximum weight in the column (gene).} \item{verbose}{ integer level of verbosity. Zero means silent, higher values make the output progressively more and more verbose. } \item{indent}{ indentation for diagnostic messages. Zero means no indentation, each unit adds two spaces. } } \details{ The constants \code{..minNSamples} and \code{..minNGenes} are both set to the value 4. If weights are given, entries whose relative weight (i.e., weight divided by the maximum weight in the column or gene) is below \code{minRelativeWeight} will be considered missing. For most data sets, the fraction of missing samples criterion will be much more stringent than the absolute number of missing samples criterion. } \value{ A logical vector with one entry per gene that is \code{TRUE} if the gene is considered good and \code{FALSE} otherwise. Note that all genes excluded by \code{useGenes} are automatically assigned \code{FALSE}. } \author{ Peter Langfelder } \seealso{ \code{\link{goodGenes}}, \code{\link{goodSamples}}, \code{\link{goodSamplesGenes}} for cleaning individual sets separately; \code{\link{goodSamplesMS}}, \code{\link{goodSamplesGenesMS}} for additional cleaning of multiple data sets together. 
} \keyword{ misc }% __ONLY ONE__ keyword per line WGCNA/man/allocateJobs.Rd0000644000176200001440000000155514012015545014560 0ustar liggesusers\name{allocateJobs} \alias{allocateJobs} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Divide tasks among workers } \description{ This function calculates an even splitting of a given number of tasks among a given number of workers (threads). } \usage{ allocateJobs(nTasks, nWorkers) } \arguments{ \item{nTasks}{ number of tasks to be divided } \item{nWorkers}{ number of workers } } \details{ Tasks are labeled consecutively 1,2,..., \code{nTasks}. The tasks are split in contiguous blocks as evenly as possible. } \value{ A list with one component per worker giving the task indices to be worked on by each worker. If there are more workers than tasks, the tasks for the extra workers are 0-length numeric vectors. } \author{ Peter Langfelder } \examples{ allocateJobs(10, 3); allocateJobs(2,4); } \keyword{misc}% __ONLY ONE__ keyword per line WGCNA/man/adjacency.polyReg.Rd0000644000176200001440000000740314012015545015515 0ustar liggesusers\name{adjacency.polyReg} \alias{adjacency.polyReg} %- Also NEED an '\alias' for EACH other topic documented here. \title{Adjacency matrix based on polynomial regression } \description{ adjacency.polyReg calculates a network adjacency matrix by fitting polynomial regression models to pairs of variables (i.e. pairs of columns from \code{datExpr}). Each polynomial fit results in a model fitting index R.squared. Thus, the n columns of \code{datExpr} result in an n x n dimensional matrix whose entries contain R.squared measures. This matrix is typically non-symmetric. To arrive at a (symmetric) adjacency matrix, one can specify different symmetrization methods with \code{symmetrizationMethod}.} \usage{ adjacency.polyReg(datExpr, degree=3, symmetrizationMethod = "mean") } %- maybe also 'usage' for other objects documented here. 
\arguments{ \item{datExpr}{data frame containing numeric variables. Example: Columns may correspond to genes and rows to observations (samples).} \item{degree}{ the degree of the polynomial. Must be less than the number of unique points.} \item{symmetrizationMethod}{character string (e.g., "none", "min", "max", "mean") that specifies the method used to symmetrize the pairwise model fitting index matrix (see details).} } \details{ A network adjacency matrix is a symmetric matrix whose entries lie between 0 and 1. It is a special case of a similarity matrix. Each variable (column of \code{datExpr}) is regressed on every other variable, with each model fitting index recorded in a square matrix. Note that the model fitting index from regressing variable y on variable x is usually different from that from regressing x on y. From the polynomial regression model glm(y ~ poly(x,degree)) one can calculate the model fitting index R.squared(y,x). R.squared(y,x) is a number between 0 and 1. The closer it is to 1, the better the polynomial describes the relationship between x and y and the more significant the pairwise relationship between the two variables. One can also reverse the roles of x and y to arrive at a model fitting index R.squared(x,y). If \code{degree}>1 then R.squared(x,y) is typically different from R.squared(y,x). Assume a set of n variables x1,...,xn (corresponding to the columns of \code{datExpr}); then one can define R.squared(xi,xj). The model fitting indices form the elements of an n x n dimensional matrix (R.squared(ij)). \code{symmetrizationMethod} implements the following symmetrization methods: A.min(ij)=min(R.squared(ij),R.squared(ji)), A.ave(ij)=(R.squared(ij)+R.squared(ji))/2, A.max(ij)=max(R.squared(ij),R.squared(ji)). } \value{ An adjacency matrix of dimensions ncol(datExpr) times ncol(datExpr).} \references{ Song L, Langfelder P, Horvath S Avoiding mutual information based co-expression measures (to appear). Horvath S (2011) Weighted Network Analysis. 
Applications in Genomics and Systems Biology. Springer Book. ISBN: 978-1-4419-8818-8 } \author{ Lin Song, Steve Horvath } \seealso{ For more information about polynomial regression, please refer to functions \code{\link{poly}} and \code{\link{glm}} } \examples{
#Simulate a data frame datE which contains 5 columns and 50 observations
m=50
x1=rnorm(m)
r=.5; x2=r*x1+sqrt(1-r^2)*rnorm(m)
r=.3; x3=r*(x1-.5)^2+sqrt(1-r^2)*rnorm(m)
x4=rnorm(m)
r=.3; x5=r*x4+sqrt(1-r^2)*rnorm(m)
datE=data.frame(x1,x2,x3,x4,x5)
#calculate adjacency by symmetrizing using max
A.max=adjacency.polyReg(datE, symmetrizationMethod="max")
A.max
#calculate adjacency by symmetrizing using mean
A.mean=adjacency.polyReg(datE, symmetrizationMethod="mean")
A.mean
# output the unsymmetrized pairwise model fitting indices R.squared
R.squared=adjacency.polyReg(datE, symmetrizationMethod="none")
R.squared
} \keyword{misc}% __ONLY ONE__ keyword per line WGCNA/man/greenWhiteRed.Rd0000644000176200001440000000354514012015545014713 0ustar liggesusers\name{greenWhiteRed} \alias{greenWhiteRed} \title{ Green-white-red color sequence } \description{ Generate a green-white-red color sequence of a given length. } \usage{ greenWhiteRed(n, gamma = 1, warn = TRUE) } \arguments{ \item{n}{ number of colors to be returned } \item{gamma}{ color change power } \item{warn}{logical: should the user be warned that this function produces a palette unsuitable for people with the most common color blindness?} } \details{ The function returns a color vector that starts with green, gradually turns into white and then to red. The power \code{gamma} can be used to control the behaviour of the quarter- and three quarter-values (between green and white, and white and red, respectively). Higher powers will make the mid-colors more white, while lower powers will make the colors more saturated. 
Typical use of this function is to produce (via function \code{\link{numbers2colors}}) a color representation of numbers within a symmetric interval around 0, for example, the interval [-1, 1]. Note though that since green and red are not distinguishable by people with the most common type of color blindness, we recommend using the analogous palette returned by the function \code{\link{blueWhiteRed}}. } \value{ A vector of colors of length \code{n}. } \author{ Peter Langfelder } \seealso{ \code{\link{blueWhiteRed}} for a color sequence more friendly to people with the most common type of color blindness; \code{\link{numbers2colors}} for a function that produces a color representation for continuous numbers. } \examples{
par(mfrow = c(3, 1))
displayColors(greenWhiteRed(50)); title("gamma = 1")
displayColors(greenWhiteRed(50, 3)); title("gamma = 3")
displayColors(greenWhiteRed(50, 0.5)); title("gamma = 0.5")
} \keyword{color}% __ONLY ONE__ keyword per line WGCNA/man/randIndex.Rd0000644000176200001440000000115014012015545014063 0ustar liggesusers\name{randIndex} \alias{randIndex} \title{ Rand index of two partitions} \description{ Computes the Rand index, a measure of the similarity between two clusterings. } \usage{ randIndex(tab, adjust = TRUE) } \arguments{ \item{tab}{ a matrix giving the cross-tabulation table of two clusterings. } \item{adjust}{logical: should the "adjusted" version be computed? } } \value{ the Rand index of the input table. } \references{ W. M. Rand (1971). "Objective criteria for the evaluation of clustering methods". 
Journal of the American Statistical Association 66: 846-850} \author{ Steve Horvath} \keyword{ misc } WGCNA/man/branchSplitFromStabilityLabels.Rd0000644000176200001440000001125314230552654020270 0ustar liggesusers\name{branchSplitFromStabilityLabels} \alias{branchSplitFromStabilityLabels} \alias{branchSplitFromStabilityLabels.individualFraction} \alias{branchSplitFromStabilityLabels.prediction} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Branch split (dissimilarity) statistics derived from labels determined from a stability study } \description{ These functions evaluate how different two branches are based on a series of cluster labels that are usually obtained in a stability study but can in principle be arbitrary. The idea is to quantify how well membership on the two tested branches can be predicted from clusters in the given stability labels. } \usage{ branchSplitFromStabilityLabels( branch1, branch2, stabilityLabels, ignoreLabels = 0, ...) branchSplitFromStabilityLabels.prediction( branch1, branch2, stabilityLabels, ignoreLabels = 0, ...) branchSplitFromStabilityLabels.individualFraction( branch1, branch2, stabilityLabels, ignoreLabels = 0, verbose = 1, indent = 0,...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{branch1}{ A vector of indices giving members of branch 1. } \item{branch2}{ A vector of indices giving members of branch 2. } \item{stabilityLabels}{ A matrix of cluster labels. Each column corresponds to one clustering and each row to one object (whose indices \code{branch1} and \code{branch2} refer to). } \item{ignoreLabels}{ Label or labels that do not constitute proper clusters in \code{stabilityLabels}, for example because they label unassigned objects. } \item{verbose}{ integer level of verbosity. Zero means silent, higher values make the output progressively more and more verbose. } \item{indent}{ indentation for diagnostic messages. Zero means no indentation, each unit adds two spaces. 
} \item{\dots}{ Ignored. } } \details{ The idea is to measure how well clusters in \code{stabilityLabels} can distinguish the two given branches. For example, if a cluster C intersects with branch1 but not branch2, it can distinguish branches 1 and 2 perfectly. On the other hand, if there is a cluster C that contains both branch 1 and branch 2, the two branches are indistinguishable (based on the test clustering). The three functions differ in the details of the similarity calculation. \code{branchSplitFromStabilityLabels.individualFraction}: Currently the recommended branch split calculation method, and default for \code{\link{hierarchicalConsensusModules}}. For each branch and all clusters that overlap with the branch (not necessarily with the other branch), calculate the fraction of the cluster objects (restricted to the two branches) that belongs to the branch. For each branch, sum these fractions over all clusters. If this number is relatively low, around 0.5, it means most elements are in non-discriminative clusters. \code{branchSplitFromStabilityLabels}: This was the original branch split measure and for backward compatibility it still is the default method in \code{\link{blockwiseModules}} and \code{\link{blockwiseConsensusModules}}. For each cluster C in each clustering in \code{stabilityLabels}, its contribution to the branch similarity is min(r1, r2), where r1 = |intersect(C, branch1)|/|branch1| and r2 = |intersect(C, branch2)|/|branch2|. The statistics for clusters in each clustering are added; the sums are then averaged across the clusterings. \code{branchSplitFromStabilityLabels.prediction}: Use only for experiments, not recommended for actual analyses because it is not stable under small changes in the branch membership. For each cluster that overlaps with both branches, count the objects in the branch with which the cluster has a smaller overlap and add it to the score for that branch. 
The final counts divided by the number of genes on each branch give an "indistinctness" score; take the larger of the two indistinctness scores and call this the similarity. Since the result of the last two calculations is a similarity statistic, the final dissimilarity is defined as 1-similarity. The dissimilarity ranges between 0 (branch1 and branch2 are indistinguishable) and 1 (branch1 and branch2 are perfectly distinguishable). These statistics are quite simple and do not correct for similarity that would be expected by chance. On the other hand, all 3 statistics are fairly (though not perfectly) stable under splitting and joining of clusters in \code{stabilityLabels}. } \value{ Branch dissimilarity (a single number between 0 and 1). } \author{ Peter Langfelder } \seealso{ These functions are utilized in \code{\link{blockwiseModules}}, \code{\link{blockwiseConsensusModules}} and \code{\link{hierarchicalConsensusModules}}. } \keyword{misc} WGCNA/man/consensusProjectiveKMeans.Rd0000644000176200001440000001246114012015545017326 0ustar liggesusers\name{consensusProjectiveKMeans} \alias{consensusProjectiveKMeans} \title{ Consensus projective K-means (pre-)clustering of expression data } \description{ Implementation of a consensus variant of K-means clustering for expression data across multiple data sets. } \usage{ consensusProjectiveKMeans( multiExpr, preferredSize = 5000, nCenters = NULL, sizePenaltyPower = 4, networkType = "unsigned", randomSeed = 54321, checkData = TRUE, imputeMissing = TRUE, useMean = (length(multiExpr) > 3), maxIterations = 1000, verbose = 0, indent = 0) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{multiExpr}{ expression data in the multi-set format (see \code{\link{checkSets}}). A vector of lists, one per set. Each set must contain a component \code{data} that contains the expression data, with rows corresponding to samples and columns to genes or probes. } \item{preferredSize}{ preferred maximum size of clusters. 
} \item{nCenters}{ number of initial clusters. Empirical evidence suggests that more centers will give a better preclustering; the default is \code{as.integer(min(nGenes/20, 100*nGenes/preferredSize))} and is an attempt to arrive at a reasonable number given the resources available. } \item{sizePenaltyPower}{ parameter specifying how severe the penalty is for clusters that exceed \code{preferredSize}. } \item{networkType}{ network type. Allowed values are (unique abbreviations of) \code{"unsigned"}, \code{"signed"}, \code{"signed hybrid"}. See \code{\link{adjacency}}. } \item{randomSeed}{ integer to be used as seed for the random number generator before the function starts. If a current seed exists, it is saved and restored upon exit. } \item{checkData}{ logical: should data be checked for genes with zero variance and genes and samples with excessive numbers of missing samples? Bad samples are ignored; returned cluster assignment for bad genes will be \code{NA}. } \item{imputeMissing}{ logical: should missing values in \code{datExpr} be imputed before the calculations start? If the missing data are not imputed, they will be replaced by 0, which can be problematic.} \item{useMean}{ logical: should mean distance across sets be used instead of maximum? See details. } \item{maxIterations}{ maximum iterations to be attempted. } \item{verbose}{ integer level of verbosity. Zero means silent, higher values make the output progressively more and more verbose. } \item{indent}{ indentation for diagnostic messages. Zero means no indentation, each unit adds two spaces. } } \details{ The principal aim of this function within WGCNA is to pre-cluster a large number of genes into smaller blocks that can be handled using standard WGCNA techniques. This function implements a variant of K-means clustering that is suitable for co-expression analysis. Cluster centers are defined by the first principal component, and distances by correlation. 
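Schematically, the per-set center and distance calculation could be sketched as follows (an illustrative sketch only, not the package's actual code; \code{clusterExpr} is a hypothetical matrix holding the expression data of the genes currently assigned to one cluster in a single data set, and missing-data handling is omitted):

\preformatted{
## Center of a cluster: first left singular vector of the
## standardized cluster expression data
center <- svd(scale(clusterExpr), nu = 1, nv = 0)$u[, 1]

## Distance of every gene to this center; for an "unsigned"
## network the sign of the correlation is ignored
dist2center <- 1 - abs(cor(datExpr, center))
}

In the consensus setting, such a per-set distance is computed in each data set before the per-set distances are combined.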
Consensus distance across several sets is defined as the maximum of the corresponding distances in individual sets; however, if \code{useMean} is set, the mean distance will be used instead of the maximum. The distance between a gene and a center of a cluster is multiplied by a factor of \eqn{max(clusterSize/preferredSize, 1)^{sizePenaltyPower}}{\code{max(clusterSize/preferredSize, 1)^sizePenaltyPower}}, thus penalizing clusters whose size exceeds \code{preferredSize}. The function starts with a randomly generated cluster assignment (hence the need to set the random seed for repeatability) and executes iterations of calculating new centers and reassigning genes to the nearest (in the consensus sense) center until the clustering becomes stable. Before returning, nearby clusters are iteratively combined if their combined size is below \code{preferredSize}. Consensus distance defined as the maximum of distances in all sets is consistent with the approach taken in \code{\link{blockwiseConsensusModules}}, but the procedure may not converge. Hence it is advisable to use the mean as consensus in cases where there are multiple data sets (4 or more, say) and/or if the input data sets are very different. The standard principal component calculation via the function \code{svd} fails from time to time (likely a convergence problem of the underlying LAPACK functions). Such errors are trapped and the principal component is approximated by a weighted average of expression profiles in the cluster. If \code{verbose} is set above 2, an informational message is printed whenever this approximation is used. } \value{ A list with the following components: \item{clusters}{ a numerical vector with one component per input gene, giving the number of the cluster to which the gene is assigned. } \item{centers}{ a vector of lists, one list per set. Each list contains a component \code{data} that contains a matrix whose columns are the cluster centers in the corresponding set. 
} \item{unmergedClusters}{ a numerical vector with one component per input gene, giving the cluster number in which the gene was assigned before the final merging step. } \item{unmergedCenters}{ a vector of lists, one list per set. Each list contains a component \code{data} that contains a matrix whose columns are the cluster centers before merging in the corresponding set. } } \author{ Peter Langfelder } \seealso{ \code{\link{projectiveKMeans}} } \keyword{ cluster } WGCNA/man/simulateEigengeneNetwork.Rd0000644000176200001440000000320014012015545017147 0ustar liggesusers\name{simulateEigengeneNetwork} \alias{simulateEigengeneNetwork} \title{ Simulate eigengene network from a causal model } \description{ Simulates a set of eigengenes (vectors) from a given set of causal anchors and a causal matrix. } \usage{ simulateEigengeneNetwork( causeMat, anchorIndex, anchorVectors, noise = 1, verbose = 0, indent = 0) } \arguments{ \item{causeMat}{ causal matrix. The entry \code{[i,j]} is the influence (path coefficient) of vector \code{j} on vector \code{i}. } \item{anchorIndex}{ specifies the indices of the anchor vectors. } \item{anchorVectors}{ a matrix giving the actual anchor vectors as columns. Their number must equal the length of \code{anchorIndex}. } \item{noise}{ standard deviation of the noise added to each simulated vector. } \item{verbose}{ level of verbosity. 0 means silent. } \item{indent}{ indentation for diagnostic messages. Zero means no indentation; each unit adds two spaces. } } \details{ The algorithm starts with the anchor vectors and iteratively generates the rest from the path coefficients given in the matrix \code{causeMat}. } \value{ A list with the following components: \item{eigengenes }{ generated eigengenes. } \item{causeMat }{ a copy of the input causal matrix} \item{levels}{ useful for debugging. A vector with one entry for each eigengene giving the number of generations of parents of the eigengene. 
Anchors have level 0, their direct causal children have level 1, etc.} \item{anchorIndex}{a copy of the input \code{anchorIndex}. } } \author{ Peter Langfelder } \keyword{ misc }% __ONLY ONE__ keyword per line WGCNA/man/simulateSmallLayer.Rd0000644000176200001440000000604714012015545015770 0ustar liggesusers\name{simulateSmallLayer} \alias{simulateSmallLayer} \title{ Simulate small modules } \description{ This function simulates a set of small modules. The primary purpose is to add a submodule structure to the main module structure simulated by \code{\link{simulateDatExpr}}. } \usage{ simulateSmallLayer( order, nSamples, minCor = 0.3, maxCor = 0.5, corPower = 1, averageModuleSize, averageExpr, moduleSpacing, verbose = 4, indent = 0) } \arguments{ \item{order}{ a vector giving the simulation order for vectors. See details. } \item{nSamples}{ integer giving the number of samples to be simulated. } \item{minCor}{ a multiple of \code{maxCor} (see below) giving the minimum correlation of module genes with the corresponding eigengene. See details. } \item{maxCor}{ maximum correlation of module genes with the corresponding eigengene. See details. } \item{corPower}{ controls the dropoff of gene-eigengene correlation. See details. } \item{averageModuleSize}{ average number of genes in a module. See details. } \item{averageExpr}{ average strength of module expression vectors. } \item{moduleSpacing}{ a number giving module spacing: this multiple of the module size will lie between the module and the next one. } \item{verbose}{ integer level of verbosity. Zero means silent, higher values make the output progressively more and more verbose. } \item{indent}{ indentation for diagnostic messages. Zero means no indentation, each unit adds two spaces. } } \details{ Module eigenvectors are chosen randomly and independently. Module sizes are chosen randomly from an exponential distribution with mean equal to \code{averageModuleSize}. 
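The size sampling just described can be sketched as follows (for illustration only; \code{nModules} is a hypothetical number of modules to simulate, not an argument of this function):

\preformatted{
## Module sizes drawn from an exponential distribution with
## mean averageModuleSize (the rate is the reciprocal of the mean)
moduleSizes <- ceiling(rexp(nModules, rate = 1/averageModuleSize))
}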
Two thirds of genes in each module are simulated as proper module genes and one third as near-module genes (see \code{\link{simulateModule}} for details). Between each successive pair of modules a number of genes given by \code{moduleSpacing} will be left unsimulated (zero expression). Module expression, that is the expected standard deviation of the module expression vectors, is chosen randomly from an exponential distribution with mean equal to \code{averageExpr}. The expression profiles are chosen such that their correlations with the eigengene run from just below \code{maxCor} to \code{minCor * maxCor} (hence \code{minCor} must be between 0 and 1, not including the bounds). The parameter \code{corPower} can be chosen to control the behaviour of the simulated correlation with the gene index; values higher than 1 will result in the correlation approaching \code{minCor * maxCor} faster, and values lower than 1 slower. The simulated genes will be returned in the order given in \code{order}. } \value{ A matrix of simulated gene expressions, with dimension \code{(nSamples, length(order))}. } \author{ Peter Langfelder } \seealso{ \code{\link{simulateModule}} for simulation of individual modules; \code{\link{simulateDatExpr}} for the main gene expression simulation function. } \keyword{ misc }% __ONLY ONE__ keyword per line WGCNA/man/multiData.eigengeneSignificance.Rd0000644000176200001440000001067514012015545020335 0ustar liggesusers\name{multiData.eigengeneSignificance} \alias{multiData.eigengeneSignificance} \title{ Eigengene significance across multiple sets } \description{ This function calculates eigengene significance and the associated significance statistics (p-values, q-values, etc.) across several data sets. 
} \usage{ multiData.eigengeneSignificance( multiData, multiTrait, moduleLabels, multiEigengenes = NULL, useModules = NULL, corAndPvalueFnc = corAndPvalue, corOptions = list(), corComponent = "cor", getQvalues = FALSE, setNames = NULL, excludeGrey = TRUE, greyLabel = ifelse(is.numeric(moduleLabels), 0, "grey")) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{multiData}{ Expression data (or other data) in multi-set format (see \code{\link{checkSets}}). A vector of lists; in each list there must be a component named \code{data} whose content is a matrix, data frame, or array of dimension 2. } \item{multiTrait}{ Trait or outcome data in multi-set format. Only one trait is allowed; consequently, the \code{data} component of each component list can be either a vector or a data frame (matrix, array of dimension 2). } \item{moduleLabels}{ Module labels: one label for each gene in \code{multiExpr}. } \item{multiEigengenes}{ Optional eigengenes of modules specified in \code{moduleLabels}. If not given, will be calculated from \code{multiExpr}. } \item{useModules}{ Optional specification of module labels to which the analysis should be restricted. This could be useful if there are many modules, most of which are not interesting. Note that the "grey" module cannot be used with \code{useModules}.} \item{corAndPvalueFnc}{ Function that calculates associations between expression profiles and eigengenes. See details. } \item{corOptions}{ List giving additional arguments to function \code{corAndPvalueFnc}. See details. } \item{corComponent}{ Name of the component of output of \code{corAndPvalueFnc} that contains the actual correlation. } \item{getQvalues}{ logical: should q-values (estimates of FDR) be calculated? } \item{setNames}{ names for the input sets. If not given, will be taken from \code{names(multiExpr)}. If those are \code{NULL} as well, the names will be \code{"Set_1", "Set_2", ...}. 
} \item{excludeGrey}{ logical: should the grey module be excluded from the kME tables? Since the grey module is typically not a real module, it makes little sense to report kME values for it. } \item{greyLabel}{ label that labels the grey module. } } \details{ This is a convenience function that calculates module eigengene significances (i.e., correlations of module eigengenes with a given trait) across all sets in a multi-set analysis. Also returned are p-values, Z scores, numbers of present (i.e., non-missing) observations for each significance, and optionally the q-values (false discovery rates) corresponding to the p-values. The function \code{corAndPvalueFnc} is currently expected to accept arguments \code{x} (gene expression profiles) and \code{y} (eigengene expression profiles). Any additional arguments can be passed via \code{corOptions}. The function \code{corAndPvalueFnc} should return a list which at the least contains (1) a matrix of associations of genes and eigengenes (this component should have the name given by \code{corComponent}), and (2) a matrix of the corresponding p-values, named "p" or "p.value". Other components are optional but for full functionality should include (3) \code{nObs} giving the number of observations for each association (which is the number of samples less the number of missing entries; this can in principle vary from association to association), and (4) \code{Z} giving a Z statistic for each observation. If these are missing, \code{nObs} is calculated in the main function, and calculations using the Z statistic are skipped. } \value{ A list containing the following components. Each component is a matrix in which the rows correspond to module eigengenes and columns to data sets. Row and column names are set appropriately. \item{eigengeneSignificance}{Module eigengene significance.} \item{p.value}{p-values (returned by \code{corAndPvalueFnc}). } \item{q.value}{q-values corresponding to the p-values above. 
Only returned if the input \code{getQvalues} is \code{TRUE}. } \item{Z}{Z statistics (if returned by \code{corAndPvalueFnc}). } \item{nObservations}{Number of non-missing observations in each correlation/p-value.} } \author{ Peter Langfelder } \keyword{misc} WGCNA/man/sampledBlockwiseModules.Rd0000644000176200001440000001170314230552654017004 0ustar liggesusers\name{sampledBlockwiseModules} \alias{sampledBlockwiseModules} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Blockwise module identification in sampled data } \description{ This function repeatedly resamples the samples (rows) in the supplied data and identifies modules on the resampled data. } \usage{ sampledBlockwiseModules( datExpr, nRuns, startRunIndex = 1, endRunIndex = startRunIndex + nRuns - skipUnsampledCalculation, replace = FALSE, fraction = if (replace) 1.0 else 0.63, randomSeed = 12345, checkSoftPower = TRUE, nPowerCheckSamples = 2000, skipUnsampledCalculation = FALSE, corType = "pearson", power = 6, networkType = "unsigned", saveTOMs = FALSE, saveTOMFileBase = "TOM", ..., verbose = 2, indent = 0) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{datExpr}{ Expression data. A matrix (preferred) or data frame in which columns are genes and rows are samples. } \item{nRuns}{ Number of sampled network construction and module identification runs. If \code{skipUnsampledCalculation} is \code{FALSE}, one extra calculation (the first) will contain the unsampled calculation. } \item{startRunIndex}{ Number to be assigned to the first run. The run number or index is used to make saved files unique. It is also used in setting the seed for each run to allow the runs to be replicated in smaller or larger batches. } \item{endRunIndex}{ Number (index) of the last run. If given, \code{nRuns} is ignored. } \item{replace}{ Logical: should samples (rows in \code{datExpr}) be sampled with replacement? 
} \item{fraction}{ Fraction of samples to sample for each run. } \item{randomSeed}{ Integer specifying the random seed. If non-NULL, the random number generator state is saved before the seed is set and restored at the end of the function. If \code{NULL}, the random number generator state is not saved nor changed at the start, and not restored at the end. } \item{checkSoftPower}{ Logical: should the soft-thresholding power be adjusted to approximately match the connectivity distribution of the sampled data set and the full data set? } \item{nPowerCheckSamples}{ Number of genes to be sampled from the full data set to calculate connectivity and match soft-thresholding powers. } \item{skipUnsampledCalculation}{ Logical: should a calculation on the original (not resampled) data be skipped? } \item{corType}{Character string specifying the correlation to be used. Allowed values are (unique abbreviations of) \code{"pearson"} and \code{"bicor"}, corresponding to Pearson and biweight midcorrelation, respectively. Missing values are handled using the \code{pairwise.complete.obs} option. } \item{power}{ Soft-thresholding power for network construction. } \item{networkType}{ network type. Allowed values are (unique abbreviations of) \code{"unsigned"}, \code{"signed"}, \code{"signed hybrid"}. See \code{\link{adjacency}}. } \item{saveTOMs}{ Logical: should the networks (topological overlaps) be saved for each run? Note that for large data sets (tens of thousands of nodes) the TOM files are rather large. } \item{saveTOMFileBase}{ Character string giving the base of the file names for TOMs. The actual file names will consist of a concatenation of \code{saveTOMFileBase} and \code{"-run--Block-.RData"}. } \item{\dots}{ Other arguments to \code{\link{blockwiseModules}}. } \item{verbose}{ integer level of verbosity. Zero means silent, higher values make the output progressively more and more verbose. } \item{indent}{ indentation for diagnostic messages. 
Zero means no indentation, each unit adds two spaces. } } \details{ For each run, samples (but not genes) are randomly sampled to obtain a perturbed data set; a full network analysis and module identification are carried out, and the results are returned in a list with one component per run. For each run, the soft-thresholding power can optionally be adjusted such that the mean adjacency in the re-sampled data set equals the mean adjacency in the original data. } \value{ A list with one component per run. Each component is a list with the following components: \item{mods}{The output of the function \code{\link{blockwiseModules}} applied to a resampled data set.} \item{samples}{Indices of the samples selected for the resampled data set for this run.} \item{powers}{Actual soft-thresholding powers used in this run.} } \references{ An application of this function is described in the motivational example section of Langfelder P, Horvath S (2012) Fast R Functions for Robust Correlations and Hierarchical Clustering. Journal of Statistical Software 46(11) 1-17; PMID: 23050260 PMCID: PMC3465711 } \author{ Peter Langfelder } \seealso{ \code{\link{blockwiseModules}} for the underlying network analysis and module identification; \code{\link{sampledHierarchicalConsensusModules}} for a similar resampling analysis of consensus networks. } \keyword{misc}% __ONLY ONE__ keyword per line WGCNA/man/plotMat.Rd0000644000176200001440000000363314012015545013575 0ustar liggesusers\name{plotMat} \alias{plotMat} \title{Red and Green Color Image of Data Matrix} \description{This function produces a red and green color image of a data matrix using an RGB color specification. Larger entries are represented with reds of increasing intensity, and smaller entries are represented with greens of increasing intensity. } \usage{ plotMat(x, nrgcols=50, rlabels=FALSE, clabels=FALSE, rcols=1, ccols=1, title="",...) } %- maybe also `usage' for other objects documented here. 
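The red-green encoding that plotMat's description promises can be sketched in base R. This is a conceptual illustration only: the helper name `rg` is made up here, and this is not the package's `rgcolors.func` implementation.

```r
# Conceptual red-green palette: bright green for the smallest values,
# near-black in the middle, bright red for the largest values.
rg <- function(n) {
  half <- n %/% 2
  c(rgb(0, seq(1, 1/half, length.out = half), 0),            # greens, bright to dim
    rgb(seq(1/(n - half), 1, length.out = n - half), 0, 0))  # reds, dim to bright
}

# Display the palette as a one-row image, loosely mimicking what plotMat
# does when it colors the cells of a data matrix.
image(matrix(1:20, ncol = 1), col = rg(20), axes = FALSE)
```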
\arguments{ \item{x}{a matrix of numbers.} \item{nrgcols}{the number of colors (>= 1) to be used in the red and green palette.} \item{rlabels}{vector of character strings to be placed at the row tickpoints, labels for the rows of \code{x}.} \item{clabels}{vector of character strings to be placed at the column tickpoints, labels for the columns of \code{x}.} \item{rcols}{colors to be used for the labels of the rows of \code{x}. \code{rcols} can have either length 1, in which case all the labels are displayed using the same color, or the same length as \code{rlabels}, in which case a color is specified for the label of each row of \code{x}.} \item{ccols}{colors to be used for the labels of the columns of \code{x}. \code{ccols} can have either length 1, in which case all the labels are displayed using the same color, or the same length as \code{clabels}, in which case a color is specified for the label of each column of \code{x}.} \item{title}{character string, overall title for the plot.} \item{\dots}{graphical parameters may also be supplied as arguments to the function (see \code{\link{par}}). E.g. \code{zlim=c(-3,3)}} } %\references{ ~put references to the literature/web site here ~ } \author{ Sandrine Dudoit, \email{sandrine@stat.berkeley.edu} } \seealso{\code{\link{plotCor}}, \code{\link{rgcolors.func}}, \code{\link{cor}}, \code{\link{image}}, \code{\link{rgb}}.} \keyword{hplot} WGCNA/man/redWhiteGreen.Rd0000644000176200001440000000176714012015545014717 0ustar liggesusers\name{redWhiteGreen} \alias{redWhiteGreen} \title{ Red-white-green color sequence } \description{ Generate a red-white-green color sequence of a given length. } \usage{ redWhiteGreen(n, gamma = 1) } \arguments{ \item{n}{ number of colors to be returned } \item{gamma}{ color correction power } } \details{ The function returns a color vector that starts with pure green, gradually turns into white and then to red. 
The power \code{gamma} can be used to control the behaviour of the quarter- and three-quarter values (between red and white, and white and green, respectively). Higher powers will make the mid-colors more white, while lower powers will make the colors more saturated. } \value{ A vector of colors of length \code{n}. } \author{ Peter Langfelder } \examples{ par(mfrow = c(3, 1)) displayColors(redWhiteGreen(50)); displayColors(redWhiteGreen(50, 3)); displayColors(redWhiteGreen(50, 0.5)); } \keyword{color}% __ONLY ONE__ keyword per line WGCNA/man/shortenStrings.Rd0000644000176200001440000000446714012015545015217 0ustar liggesusers\name{shortenStrings} \alias{shortenStrings} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Shorten given character strings by truncating at a suitable separator. } \description{ This function shortens given character strings so they are not longer than a given maximum length. } \usage{ shortenStrings(strings, maxLength = 25, minLength = 10, split = " ", fixed = TRUE, ellipsis = "...", countEllipsisInLength = FALSE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{strings}{ Character strings to be shortened. } \item{maxLength}{ Maximum length (number of characters) in the strings to be retained. See details for when the returned strings can exceed this length. } \item{minLength}{ Minimum length of the returned strings. See details. } \item{split}{ Character string giving the split at which the strings can be truncated. This can be a literal string or a regular expression (if the latter, \code{fixed} below must be set to \code{FALSE}). } \item{fixed}{ Logical: should \code{split} be interpreted as a literal specification (\code{TRUE}) or as a regular expression (\code{FALSE})? } \item{ellipsis}{ Character string that will be appended to every shortened string, to indicate that the string has been shortened. 
} \item{countEllipsisInLength}{ Logical: should the length of the ellipsis count toward the minimum and maximum length?} } \details{ Strings whose length (number of characters) is at most \code{maxLength} are returned unchanged. For those that are longer, the function uses \code{\link{gregexpr}} to search for the occurrences of \code{split} in each given character string. If such occurrences are found at positions between \code{minLength} and \code{maxLength}, the string will be truncated at the last such \code{split}; otherwise, the string will be truncated at \code{maxLength}. The \code{ellipsis} is appended to each truncated string. } \value{ A character vector of strings, shortened as necessary. If the input \code{strings} had non-NULL dimensions and dimnames, these are copied to the output. } \author{ Peter Langfelder } \seealso{ \code{\link{gregexpr}}, the workhorse pattern matching function; \code{\link{formatLabels}} for splitting strings into multiple lines. } \keyword{ misc }% __ONLY ONE__ keyword per line WGCNA/man/populationMeansInAdmixture.Rd0000644000176200001440000001412314012015545017501 0ustar liggesusers\name{populationMeansInAdmixture} \alias{populationMeansInAdmixture} %- Also NEED an '\alias' for EACH other topic documented here. \title{Estimate the population-specific mean values in an admixed population. } \description{Uses the expression values from an admixed population and estimates of the proportions of sub-populations to estimate the population-specific mean values. For example, this function can be used to estimate the cell type-specific mean gene expression values based on expression values from a mixture of cells. The method is described in Shen-Orr et al (2010) where it was used to estimate cell type-specific gene expression levels based on a mixture sample. 
} \usage{ populationMeansInAdmixture( datProportions, datE.Admixture, scaleProportionsTo1 = TRUE, scaleProportionsInCelltype = TRUE, setMissingProportionsToZero = FALSE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{datProportions}{a matrix of non-negative numbers (ideally proportions) where the rows correspond to the samples (rows of \code{datE.Admixture}) and the columns correspond to the sub-populations of the mixture. The function calculates a mean expression value for each column of \code{datProportions}. Negative entries in \code{datProportions} lead to an error message. But the rows of \code{datProportions} do not have to sum to 1, see the argument \code{scaleProportionsTo1}. } \item{datE.Admixture}{a matrix of numbers. The rows correspond to samples (mixtures of populations). The columns contain the variables (e.g. genes) for which the means should be estimated. } \item{scaleProportionsTo1}{logical. If set to \code{TRUE} (default) then the proportions in each row of \code{datProportions} are scaled so that they sum to 1, i.e. datProportions[i,]=datProportions[i,]/sum(datProportions[i,]). In general, we recommend setting it to \code{TRUE}. } \item{scaleProportionsInCelltype}{ logical. If set to \code{TRUE} (default) then the proportions in each cell type are rescaled so that their mean is 0. } \item{setMissingProportionsToZero}{logical. Default is \code{FALSE}. If set to \code{TRUE} then missing values in \code{datProportions} are set to zero. } } \details{The function outputs a matrix of coefficients resulting from fitting a regression model. If the proportions sum to 1, then the i-th row of the output matrix reports the coefficients of the following model \code{lm(datE.Admixture[,i]~.-1,data=datProportions)}. As an aside, the \code{-1} in the formula indicates that no intercept term is fit. Under certain assumptions, the coefficients can be interpreted as the mean expression values in the sub-populations (Shen-Orr 2010). 
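The no-intercept regression described in the details can be reproduced directly in base R. The sketch below uses invented toy data (one gene, three sub-populations) to show that the fitted coefficients recover the population-specific means when the proportions sum to 1:

```r
set.seed(1)
m <- 10                                                      # number of mixed samples
datProportions <- matrix(runif(m * 3), m, 3)
datProportions <- datProportions / rowSums(datProportions)   # rows sum to 1
colnames(datProportions) <- paste0("Prop", 1:3)

trueMeans <- c(2, 5, 9)   # population-specific means of one gene (made up)
gene <- as.vector(datProportions %*% trueMeans) + rnorm(m, sd = 0.01)

# The -1 drops the intercept; the coefficients estimate the per-population means
fit <- lm(gene ~ . - 1, data = as.data.frame(datProportions))
round(coef(fit), 1)   # close to the true means 2, 5, 9
```

The package function essentially repeats this fit for every column of `datE.Admixture`.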
} \value{ a numeric matrix whose rows correspond to the columns of \code{datE.Admixture} (e.g. to genes) and whose columns correspond to the columns of \code{datProportions} (e.g. sub populations or cell types). } \references{ Shen-Orr SS, Tibshirani R, Khatri P, Bodian DL, Staedtler F, Perry NM, Hastie T, Sarwal MM, Davis MM, Butte AJ (2010) Cell type-specific gene expression differences in complex tissues. Nature Methods, vol 7 no.4 } \author{ Steve Horvath, Chaochao Cai } \note{ This can be considered a wrapper of the \code{lm} function. } %% ~Make other sections like Warning with \section{Warning }{....} ~ \examples{ set.seed(1) # this is the number of complex (mixed) tissue samples, e.g. arrays m=10 # true count data (e.g. pure cells in the mixed sample) datTrueCounts=as.matrix(data.frame(TrueCount1=rpois(m,lambda=16), TrueCount2=rpois(m,lambda=8),TrueCount3=rpois(m,lambda=4), TrueCount4=rpois(m,lambda=2))) no.pure=dim(datTrueCounts)[[2]] # now we transform the counts into proportions divideBySum=function(x) t(x)/sum(x) datProportions= t(apply(datTrueCounts,1,divideBySum)) dimnames(datProportions)[[2]]=paste("TrueProp",1:dim(datTrueCounts)[[2]],sep=".") # number of genes that are highly expressed in each pure population no.genesPerPure=rep(5, no.pure) no.genes= sum(no.genesPerPure) GeneIndicator=rep(1:no.pure, no.genesPerPure) # true mean values of the genes in the pure populations # in the end we hope to estimate them from the mixed samples datTrueMeans0=matrix( rnorm(no.genes*no.pure,sd=.3), nrow= no.genes,ncol=no.pure) for (i in 1:no.pure ){ datTrueMeans0[GeneIndicator==i,i]= datTrueMeans0[GeneIndicator==i,i]+1 } dimnames(datTrueMeans0)[[1]]=paste("Gene",1:dim(datTrueMeans0)[[1]],sep="." 
) dimnames(datTrueMeans0)[[2]]=paste("MeanPureCellType",1:dim(datTrueMeans0)[[2]], sep=".") # plot.mat(datTrueMeans0) # simulate the (expression) values of the admixed population samples noise=matrix(rnorm(m*no.genes,sd=.1),nrow=m,ncol= no.genes) datE.Admixture= as.matrix(datProportions) \%*\% t(datTrueMeans0) + noise dimnames(datE.Admixture)[[1]]=paste("MixedTissue",1:m,sep=".") datPredictedMeans=populationMeansInAdmixture(datProportions,datE.Admixture) par(mfrow=c(2,2)) for (i in 1:4 ){ verboseScatterplot(datPredictedMeans[,i],datTrueMeans0[,i], xlab="predicted mean",ylab="true mean",main="all populations") abline(0,1) } #assume we only study 2 populations (ie we ignore the others) selectPopulations=c(1,2) datPredictedMeansTooFew=populationMeansInAdmixture(datProportions[,selectPopulations], datE.Admixture) par(mfrow=c(2,2)) for (i in 1:length(selectPopulations) ){ verboseScatterplot(datPredictedMeansTooFew[,i],datTrueMeans0[,i], xlab="predicted mean",ylab="true mean",main="too few populations") abline(0,1) } #assume we erroneously add a population datProportionsTooMany=data.frame(datProportions,WrongProp=sample(datProportions[,1])) datPredictedMeansTooMany=populationMeansInAdmixture(datProportionsTooMany, datE.Admixture) par(mfrow=c(2,2)) for (i in 1:4 ){ verboseScatterplot(datPredictedMeansTooMany[,i],datTrueMeans0[,i], xlab="predicted mean",ylab="true mean",main="too many populations") abline(0,1) } } % Add one or more standard keywords, see file 'KEYWORDS' in the R documentation directory. \keyword{misc}% __ONLY ONE__ keyword per line WGCNA/man/unsignedAdjacency.Rd0000644000176200001440000000350514012015545015571 0ustar liggesusers\name{unsignedAdjacency} \alias{unsignedAdjacency} \title{ Calculation of unsigned adjacency } \description{ Calculation of the unsigned network adjacency from expression data. The restricted set of parameters for this function should allow a faster and less memory-hungry calculation. 
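For the single-matrix case, the unsignedAdjacency calculation reduces to the absolute correlation raised to the soft-thresholding power (Zhang and Horvath 2005). A base-R sketch of that formula (illustrative only, not the package implementation):

```r
set.seed(42)
datExpr <- matrix(rnorm(20 * 5), nrow = 20, ncol = 5)   # 20 samples, 5 genes

power <- 6
# use = "pairwise.complete.obs" mirrors the default corOptions "use = 'p'"
adj <- abs(cor(datExpr, use = "pairwise.complete.obs"))^power

dim(adj)    # one row and column per gene
diag(adj)   # self-adjacency is 1
```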
} \usage{ unsignedAdjacency( datExpr, datExpr2 = NULL, power = 6, corFnc = "cor", corOptions = "use = 'p'") } %- maybe also 'usage' for other objects documented here. \arguments{ \item{datExpr}{ expression data. A data frame in which columns are genes and rows are samples. Missing values are ignored. } \item{datExpr2}{ optional specification of a second set of expression data. See details. } \item{power}{ soft-thresholding power for network construction. } \item{corFnc}{ character string giving the correlation function to be used for the adjacency calculation. Recommended choices are \code{"cor"} and \code{"bicor"}, but other functions can be used as well. } \item{corOptions}{ character string giving further options to be passed to the correlation function } } \details{ The correlation function will be called with arguments \code{datExpr, datExpr2} plus any extra arguments given in \code{corOptions}. If \code{datExpr2} is \code{NULL}, the standard correlation functions will calculate the correlation of the columns in \code{datExpr}. } \value{ Adjacency matrix of dimensions \code{n*n}, where \code{n} is the number of genes in \code{datExpr}. } \references{ Bin Zhang and Steve Horvath (2005) "A General Framework for Weighted Gene Co-Expression Network Analysis", Statistical Applications in Genetics and Molecular Biology: Vol. 4: No. 1, Article 17 } \author{ Steve Horvath and Peter Langfelder } \seealso{ \code{\link{adjacency}} } \keyword{ misc }% __ONLY ONE__ keyword per line WGCNA/man/numbers2colors.Rd0000644000176200001440000000460114012015545015130 0ustar liggesusers\name{numbers2colors} \alias{numbers2colors} \title{ Color representation for a numeric variable } \description{ The function creates a color representation for the given numeric input. 
} \usage{ numbers2colors( x, signed = NULL, centered = signed, lim = NULL, commonLim = FALSE, colors = if (signed) blueWhiteRed(100) else blueWhiteRed(100)[51:100], naColor = "grey") } \arguments{ \item{x}{ a vector or matrix of numbers. Missing values are allowed and will be assigned the color given in \code{naColor}. If a matrix, each column of the matrix is processed separately and the return value will be a matrix of colors. } \item{signed}{ logical: should \code{x} be considered signed? If \code{TRUE}, the default setting is to use a palette that starts with blue for the most negative values, continues with white for values around zero and turns red for positive values. If \code{FALSE}, the default palette ranges from white for minimum values to red for maximum values. If not given, the behaviour is controlled by the values in \code{x}: if there are both positive and negative values, \code{signed} will be considered \code{TRUE}, otherwise \code{FALSE}.} \item{centered}{ logical. If \code{TRUE} and \code{signed==TRUE}, the numeric value zero will correspond to the middle of the color palette. If \code{FALSE} or \code{signed==FALSE}, the middle of the color palette will correspond to the average of the minimum and maximum value. If neither \code{signed} nor \code{centered} are given, \code{centered} will follow \code{signed} (see above).} \item{lim}{ optional specification of limits, that is numeric values that should correspond to the first and last entry of \code{colors}. } \item{commonLim}{logical: should limits be calculated separately for each column of \code{x}, or should the limits be the same for all columns? Only applies if \code{lim} is \code{NULL}. } \item{colors}{ color palette to represent the given numbers. } \item{naColor}{ color to represent missing values in \code{x}. } } \details{ Each column of \code{x} is processed individually, meaning that the color palette is adjusted individually for each column of \code{x}. 
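The mapping from numbers to palette entries can be sketched in base R. This is a conceptual toy, not the package code; the helper name `toyNumbers2colors` is made up. For a signed, centered variable the limits are symmetric about zero, so zero lands in the middle of the palette:

```r
# Map each value into one of length(colors) equal-width bins spanning lim
toyNumbers2colors <- function(x, colors, lim = range(x, na.rm = TRUE),
                              naColor = "grey") {
  breaks <- seq(lim[1], lim[2], length.out = length(colors) + 1)
  idx <- findInterval(x, breaks, rightmost.closed = TRUE, all.inside = TRUE)
  out <- colors[idx]
  out[is.na(x)] <- naColor   # missing values get naColor
  out
}

pal <- colorRampPalette(c("blue", "white", "red"))(100)
x <- c(-2, 0, 2, NA)
lim <- c(-max(abs(x), na.rm = TRUE), max(abs(x), na.rm = TRUE))  # centered at 0
cols <- toyNumbers2colors(x, pal, lim = lim)
cols   # most negative -> blue end, most positive -> red end, NA -> "grey"
```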
} \value{ A vector or matrix (of the same dimensions as \code{x}) of colors. } \author{ Peter Langfelder } \seealso{ \code{\link{labels2colors}} for color coding of ordinal labels. } \keyword{ misc }% __ONLY ONE__ keyword per line WGCNA/man/goodSamplesMS.Rd0000644000176200001440000000643314012015545014673 0ustar liggesusers\name{goodSamplesMS} \alias{goodSamplesMS} \title{ Filter samples with too many missing entries across multiple data sets } \description{ This function checks data for missing entries and returns a list of samples that pass two criteria on maximum number of missing values: the fraction of missing values must be below a given threshold and the total number of missing genes must be below a given threshold. } \usage{ goodSamplesMS(multiExpr, multiWeights = NULL, useSamples = NULL, useGenes = NULL, minFraction = 1/2, minNSamples = ..minNSamples, minNGenes = ..minNGenes, minRelativeWeight = 0.1, verbose = 1, indent = 0) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{multiExpr}{ expression data in the multi-set format (see \code{\link{checkSets}}). A vector of lists, one per set. Each set must contain a component \code{data} that contains the expression data, with rows corresponding to samples and columns to genes or probes. } \item{multiWeights}{ optional observation weights in the same format (and dimensions) as \code{multiExpr}.} \item{useSamples}{ optional specifications of which samples to use for the check. Should be a logical vector; samples whose entries are \code{FALSE} will be ignored for the missing value counts. Defaults to using all samples.} \item{useGenes}{ optional specifications of genes for which to perform the check. Should be a logical vector; genes whose entries are \code{FALSE} will be ignored. Defaults to using all genes.} \item{minFraction}{ minimum fraction of non-missing samples for a gene to be considered good. 
} \item{minNSamples}{ minimum number of good samples for the data set to be considered fit for analysis. If the actual number of good samples falls below this threshold, an error will be issued. } \item{minNGenes}{ minimum number of non-missing genes for a sample to be considered good. } \item{minRelativeWeight}{ observations whose relative weight is below this threshold will be considered missing. Here relative weight is weight divided by the maximum weight in the column (gene).} \item{verbose}{ integer level of verbosity. Zero means silent, higher values make the output progressively more and more verbose. } \item{indent}{ indentation for diagnostic messages. Zero means no indentation, each unit adds two spaces. } } \details{ The constants \code{..minNSamples} and \code{..minNGenes} are both set to the value 4. If weights are given, entries whose relative weight (i.e., weight divided by the maximum weight in the column or gene) is below \code{minRelativeWeight} will be considered missing. For most data sets, the fraction of missing samples criterion will be much more stringent than the absolute number of missing samples criterion. } \value{ A list with one component per input set. Each component is a logical vector with one entry per sample in the corresponding set, indicating whether the sample passed the missing value criteria. } \author{ Peter Langfelder and Steve Horvath } \seealso{ \code{\link{goodGenes}}, \code{\link{goodSamples}}, \code{\link{goodSamplesGenes}} for cleaning individual sets separately; \code{\link{goodGenesMS}}, \code{\link{goodSamplesGenesMS}} for additional cleaning of multiple data sets together. } \keyword{ misc }% __ONLY ONE__ keyword per line WGCNA/man/orderMEsByHierarchicalConsensus.Rd0000644000176200001440000000374314012015545020372 0ustar liggesusers\name{orderMEsByHierarchicalConsensus} \alias{orderMEsByHierarchicalConsensus} %- Also NEED an '\alias' for EACH other topic documented here. 
\title{ Order module eigengenes by their hierarchical consensus similarity } \description{ This function calculates a hierarchical consensus similarity of the input eigengenes, clusters the eigengenes according to the similarity, and returns the input module eigengenes in the order of the resulting dendrogram. } \usage{ orderMEsByHierarchicalConsensus( MEs, networkOptions, consensusTree, greyName = "ME0", calibrate = FALSE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{MEs}{ Module eigengenes, or more generally, vectors, to be ordered, in a \code{\link{multiData}} format: A vector of lists, one per set. Each set must contain a component \code{data} that contains the module eigengenes or general vectors, with rows corresponding to samples and columns to genes or probes. } \item{networkOptions}{ A single list of class \code{\link{NetworkOptions}} giving options for network calculation for all of the networks, or a \code{\link{multiData}} structure containing one such list for each input data set. } \item{consensusTree}{ A list specifying the consensus calculation. See \code{\link{newConsensusTree}} for details. } \item{greyName}{ Specifies the column name of the eigengene of the "module" that contains unassigned genes. This eigengene (column) will be excluded from the clustering and will be put last in the order. } \item{calibrate}{ Logical: should module eigengene similarities be calibrated? This setting overrides the calibration options in \code{consensusTree}. } } \value{A \code{\link{multiData}} structure of the same format as the input \code{MEs}, with columns ordered by the calculated dendrogram. 
} \author{ Peter Langfelder } \seealso{ \code{\link{hierarchicalConsensusMEDissimilarity}} for calculating the consensus ME dissimilarity } \keyword{misc}% __ONLY ONE__ keyword per line WGCNA/man/prependZeros.Rd0000644000176200001440000000265314356162617014654 0ustar liggesusers\name{prependZeros} \alias{prependZeros} \alias{prependZeros.int} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Pad numbers with leading zeros to specified total width } \description{ These functions pad the specified numbers with zeros to a specified total width. } \usage{ prependZeros(x, width = max(nchar(x))) prependZeros.int(x, width = max(nchar(as.integer(x)))) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x}{ Vector of numbers to be padded. For \code{prependZeros}, the vector may be real (non-integer) or even character (and not necessarily representing numbers). For \code{prependZeros.int}, the vector must be numeric and non-integers get rounded down to the nearest integer. } \item{width}{ Width to pad the numbers to. } } \details{ The \code{prependZeros.int} version works better with numbers such as 100000 which may get converted to character as 1e5 and hence be incorrectly padded in the \code{prependZeros} function. On the flip side, \code{prependZeros} works also for non-integer inputs. } \value{ Character vector with the 0-padded numbers. } \author{ Peter Langfelder } \examples{ prependZeros(1:10) prependZeros(1:10, 4) # more exotic examples prependZeros(c(1, 100000), width = 6) ### Produces incorrect output prependZeros.int(c(1, 100000)) ### Correct output prependZeros(c("a", "b", "aa")) ### pads the shorter strings using zeros. } \keyword{ misc } WGCNA/man/consensusTOM.Rd0000644000176200001440000004466414533751110014570 0ustar liggesusers\name{consensusTOM} \alias{consensusTOM} %- Also NEED an '\alias' for EACH other topic documented here. 
\title{Consensus network (topological overlap).} \description{ Calculation of a consensus network (topological overlap). } \usage{ consensusTOM( # Supply either ... # ... information needed to calculate individual TOMs multiExpr, # Data checking options checkMissingData = TRUE, # Blocking options blocks = NULL, maxBlockSize = 5000, blockSizePenaltyPower = 5, nPreclusteringCenters = NULL, randomSeed = 54321, # Network construction arguments: correlation options corType = "pearson", maxPOutliers = 1, quickCor = 0, pearsonFallback = "individual", cosineCorrelation = FALSE, replaceMissingAdjacencies = FALSE, # Adjacency function options power = 6, networkType = "unsigned", checkPower = TRUE, # Topological overlap options TOMType = "unsigned", TOMDenom = "min", suppressNegativeTOM = FALSE, # Save individual TOMs? saveIndividualTOMs = TRUE, individualTOMFileNames = "individualTOM-Set\%s-Block\%b.RData", # ... or individual TOM information individualTOMInfo = NULL, useIndivTOMSubset = NULL, ##### Consensus calculation options useBlocks = NULL, networkCalibration = c("single quantile", "full quantile", "none"), # Save calibrated TOMs? saveCalibratedIndividualTOMs = FALSE, calibratedIndividualTOMFilePattern = "calibratedIndividualTOM-Set\%s-Block\%b.RData", # Simple quantile calibration options calibrationQuantile = 0.95, sampleForCalibration = TRUE, sampleForCalibrationFactor = 1000, getNetworkCalibrationSamples = FALSE, # Consensus definition consensusQuantile = 0, useMean = FALSE, setWeights = NULL, # Return options saveConsensusTOMs = TRUE, consensusTOMFilePattern = "consensusTOM-Block\%b.RData", returnTOMs = FALSE, # Internal handling of TOMs useDiskCache = NULL, chunkSize = NULL, cacheDir = ".", cacheBase = ".blockConsModsCache", nThreads = 1, # Diagnostic messages verbose = 1, indent = 0) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{multiExpr}{ expression data in the multi-set format (see \code{\link{checkSets}}). 
A vector of lists, one per set. Each set must contain a component \code{data} that contains the expression data, with rows corresponding to samples and columns to genes or probes. } \item{checkMissingData}{logical: should data be checked for excessive numbers of missing entries in genes and samples, and for genes with zero variance? See details. } \item{blocks}{ optional specification of blocks in which hierarchical clustering and module detection should be performed. If given, must be a numeric vector with one entry per gene of \code{multiExpr} giving the number of the block to which the corresponding gene belongs. } \item{maxBlockSize}{ integer giving maximum block size for module detection. Ignored if \code{blocks} above is non-NULL. Otherwise, if the number of genes in \code{datExpr} exceeds \code{maxBlockSize}, genes will be pre-clustered into blocks whose size should not exceed \code{maxBlockSize}. } \item{ blockSizePenaltyPower}{number specifying how strongly blocks should be penalized for exceeding the maximum size. Set to a large number or \code{Inf} if not exceeding maximum block size is very important.} \item{nPreclusteringCenters}{number of centers for pre-clustering. Larger numbers typically result in better but slower pre-clustering. The default is \code{as.integer(min(nGenes/20, 100*nGenes/preferredSize))} and is an attempt to arrive at a reasonable number given the resources available. } \item{randomSeed}{ integer to be used as seed for the random number generator before the function starts. If a current seed exists, it is saved and restored upon exit. If \code{NULL} is given, the function will not save and restore the seed. } \item{corType}{ character string specifying the correlation to be used. Allowed values are (unique abbreviations of) \code{"pearson"} and \code{"bicor"}, corresponding to Pearson and biweight midcorrelation, respectively. Missing values are handled using the \code{pairwise.complete.obs} option. 
} \item{maxPOutliers}{ only used for \code{corType=="bicor"}. Specifies the maximum percentile of data that can be considered outliers on either side of the median separately. For each side of the median, if a higher percentile than \code{maxPOutliers} is considered outlying by the weight function based on \code{9*mad(x)}, the width of the weight function is increased such that the percentile of outliers on that side of the median equals \code{maxPOutliers}. Using \code{maxPOutliers=1} will effectively disable all weight function broadening; using \code{maxPOutliers=0} will give results that are quite similar (but not equal) to Pearson correlation. } \item{quickCor}{ real number between 0 and 1 that controls the handling of missing data in the calculation of correlations. See details. } \item{pearsonFallback}{Specifies whether the bicor calculation, if used, should revert to Pearson when median absolute deviation (mad) is zero. Recognized values are (abbreviations of) \code{"none", "individual", "all"}. If set to \code{"none"}, zero mad will result in \code{NA} for the corresponding correlation. If set to \code{"individual"}, Pearson calculation will be used only for columns that have zero mad. If set to \code{"all"}, the presence of a single zero mad will cause the whole variable to be treated in Pearson correlation manner (as if the corresponding \code{robust} option was set to \code{FALSE}). Has no effect for Pearson correlation. See \code{\link{bicor}}.} \item{cosineCorrelation}{logical: should the cosine version of the correlation calculation be used? The cosine calculation differs from the standard one in that it does not subtract the mean. } \item{power}{ soft-thresholding power for network construction. } \item{networkType}{ network type. Allowed values are (unique abbreviations of) \code{"unsigned"}, \code{"signed"}, \code{"signed hybrid"}. See \code{\link{adjacency}}.
} \item{checkPower}{ logical: should basic sanity check be performed on the supplied \code{power}? If you would like to experiment with unusual powers, set the argument to \code{FALSE} and proceed with caution. } \item{replaceMissingAdjacencies}{logical: should missing values in the calculation of adjacency be replaced by 0?} \item{TOMType}{ one of \code{"none"}, \code{"unsigned"}, \code{"signed"}, \code{"signed Nowick"}, \code{"unsigned 2"}, \code{"signed 2"} and \code{"signed Nowick 2"}. If \code{"none"}, adjacency will be used for clustering. See \code{\link{TOMsimilarityFromExpr}} for details.} \item{TOMDenom}{ a character string specifying the TOM variant to be used. Recognized values are \code{"min"} giving the standard TOM described in Zhang and Horvath (2005), and \code{"mean"} in which the \code{min} function in the denominator is replaced by \code{mean}. The \code{"mean"} may produce better results but at this time should be considered experimental.} \item{suppressNegativeTOM}{Logical: should the result be set to zero when negative? Negative TOM values can occur when \code{TOMType} is \code{"signed Nowick"}.} %%%%%%%%%%%%%%% \item{saveIndividualTOMs}{logical: should individual TOMs be saved to disk for later use? } \item{individualTOMFileNames}{character string giving the file names to save individual TOMs into. The following tags should be used to make the file names unique for each set and block: \code{\%s} will be replaced by the set number; \code{\%N} will be replaced by the set name (taken from \code{names(multiExpr)}) if it exists, otherwise by set number; \code{\%b} will be replaced by the block number. If the file names turn out to be non-unique, an error will be generated.} %%%%%%%%%%%%%% \item{individualTOMInfo}{ Optional data for TOM matrices in individual data sets. This object is returned by the function \code{\link{blockwiseIndividualTOMs}}. 
If not given, appropriate topological overlaps will be calculated using the network construction options below. } \item{useIndivTOMSubset}{ If \code{individualTOMInfo} is given, this argument allows one to select a subset of the individual set networks contained in \code{individualTOMInfo}. It should be a numeric vector giving the indices of the individual sets to be used. Note that this argument is NOT applied to \code{multiExpr}. } \item{useBlocks}{optional specification of blocks that should be used for the calculations. The default is to use all blocks. } %%%%%%%%%%%%%% \item{networkCalibration}{network calibration method. One of "single quantile", "full quantile", "none" (or a unique abbreviation of one of them).} \item{saveCalibratedIndividualTOMs}{logical: should the calibrated individual TOMs be saved? } \item{calibratedIndividualTOMFilePattern}{pattern of file names for saving calibrated individual TOMs.} %%%%%%%%%%%%%% \item{calibrationQuantile}{ if \code{networkCalibration} is \code{"single quantile"}, topological overlaps (or adjacencies if TOMs are not computed) will be scaled such that their \code{calibrationQuantile} quantiles will agree. } \item{sampleForCalibration}{ if \code{TRUE}, calibration quantiles will be determined from a sample of network similarities. Note that using all data can double the memory footprint of the function and the function may fail. } \item{sampleForCalibrationFactor}{ determines the number of samples for calibration: the number is \code{1/calibrationQuantile * sampleForCalibrationFactor}. Should be set well above 1 to ensure accuracy of the sampled quantile. } \item{getNetworkCalibrationSamples}{logical: should the sampled values used for network calibration be returned?} %%%%%%%%%%%%%% \item{consensusQuantile}{ quantile at which consensus is to be defined. See details.
} \item{useMean}{logical: should the consensus be determined from a (possibly weighted) mean across the data sets rather than a quantile?} \item{setWeights}{Optional vector (one component per input set) of weights to be used for weighted mean consensus. Only used when \code{useMean} above is \code{TRUE}.} %%%%%%%%%%%%%% \item{saveConsensusTOMs}{ logical: should the consensus topological overlap matrices for each block be saved and returned? } \item{consensusTOMFilePattern}{ character string giving the names of the files in which the consensus topological overlaps are to be saved. The tag \code{\%b} will be replaced by the block number. If the resulting file names are non-unique (for example, because the user gives a file name without a \code{\%b} tag), an error will be generated. These files are standard R data files and can be loaded using the \code{\link{load}} function. } %%%%%%%%%%%%%% \item{returnTOMs}{logical: should calculated consensus TOM(s) be returned? } \item{useDiskCache}{ should calculated network similarities in individual sets be temporarily saved to disk? Saving to disk is somewhat slower than keeping all data in memory, but for large blocks and/or many sets the memory footprint may be too big. If not given (the default), the function will determine the need of caching based on the size of the data. See \code{chunkSize} below for additional information. } \item{chunkSize}{ network similarities are saved in smaller chunks of size \code{chunkSize}. If \code{NULL}, an appropriate chunk size will be determined from an estimate of available memory. Note that if the chunk size is greater than the memory required for storing intermediate results, disk cache use will automatically be disabled. } \item{cacheDir}{ character string containing the directory into which cache files should be written. The user should make sure that the filesystem has enough free space to hold the cache files, which can get quite large.
} \item{cacheBase}{ character string containing the desired name for the cache files. The actual file names will consist of \code{cacheBase} and a suffix to make the file names unique. } \item{nThreads}{ non-negative integer specifying the number of parallel threads to be used by certain parts of correlation calculations. This option only has an effect on systems on which a POSIX thread library is available (which currently includes Linux and Mac OS X, but excludes Windows). If zero, the number of online processors will be used if it can be determined dynamically, otherwise correlation calculations will use 2 threads. } \item{verbose}{ integer level of verbosity. Zero means silent, higher values make the output progressively more and more verbose. } \item{indent}{ indentation for diagnostic messages. Zero means no indentation, each unit adds two spaces. } } \details{ The function starts by optionally filtering out samples that have too many missing entries and genes that have either too many missing entries or zero variance in at least one set. Genes that are filtered out are left unassigned by the module detection. Returned eigengenes will contain \code{NA} in entries corresponding to filtered-out samples. If \code{blocks} is not given and the number of genes exceeds \code{maxBlockSize}, genes are pre-clustered into blocks using the function \code{\link{consensusProjectiveKMeans}}; otherwise all genes are treated in a single block. For each block of genes, the network is constructed and (if requested) topological overlap is calculated in each set. To minimize memory usage, calculated topological overlaps are optionally saved to disk in chunks until they are needed again for the calculation of the consensus network topological overlap. Before calculation of the consensus topological overlap, individual TOMs are optionally calibrated. Calibration methods include single quantile scaling and full quantile normalization.
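As a rough illustration, single quantile scaling can be sketched in a few lines of R. Here \code{tom1} and \code{tom2} are hypothetical numeric vectors of TOM values from two data sets; this is a simplified sketch of the idea, not the exact internal implementation:

\preformatted{
q1 <- quantile(tom1, probs = 0.95)  # reference quantile from set 1
q2 <- quantile(tom2, probs = 0.95)  # same quantile in set 2
# Raise tom2 to the power p for which q2^p equals q1,
# i.e., p = log(q1)/log(q2):
tom2.calibrated <- tom2^(log(q1)/log(q2))
}

Since TOM values lie between 0 and 1, both logarithms are negative and the exponent is positive whenever both quantiles lie strictly between 0 and 1.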
Single quantile scaling raises individual TOM in sets 2,3,... to a power such that the quantiles given by \code{calibrationQuantile} agree with the quantile in set 1. Since the high TOMs are usually the most important for module identification, the value of \code{calibrationQuantile} is close to (but not equal to) 1. To speed up quantile calculation, the quantiles can be determined on a randomly-chosen component subset of the TOM matrices. Full quantile normalization, implemented in \code{\link[preprocessCore]{normalize.quantiles}}, adjusts the TOM matrices such that all quantiles equal each other (and equal the quantiles of the component-wise average of the individual TOM matrices). Note that network calibration is performed separately in each block, i.e., the normalizing transformation may differ between blocks. This is necessary to avoid manipulating a full TOM in memory. The consensus TOM is calculated as the component-wise \code{consensusQuantile} quantile of the individual (set) TOMs; that is, for each gene pair (TOM entry), the \code{consensusQuantile} quantile across all input sets. Alternatively, one can also use a (weighted) component-wise mean across all input data sets. If requested, the consensus topological overlaps are saved to disk for later use. } \value{ List with the following components: \item{consensusTOM}{only present if input \code{returnTOMs} is \code{TRUE}. A list containing consensus TOM for each block, stored as a distance structure.} \item{TOMFiles}{only present if input \code{saveConsensusTOMs} is \code{TRUE}. A vector of file names, one for each block, in which the TOM for the corresponding block is stored. TOM is saved as a distance structure to save space.} \item{saveConsensusTOMs}{a copy of the input \code{saveConsensusTOMs}.} \item{individualTOMInfo}{information about individual set TOMs. A copy of the input \code{individualTOMInfo} if given; otherwise the result of calling \code{blockwiseIndividualTOMs}.
See \code{blockwiseIndividualTOMs} for details.} Further components are retained for debugging and/or convenience. \item{useIndivTOMSubset}{a copy of the input \code{useIndivTOMSubset}.} \item{goodSamplesAndGenes}{a list containing information about which samples and genes are "good" in the sense that they do not contain more than a certain fraction of missing data and (for genes) have non-zero variance. See \code{\link{goodSamplesGenesMS}} for details.} \item{nGGenes}{number of "good" genes in \code{goodSamplesAndGenes} above. } \item{nSets}{number of input sets.} \item{saveCalibratedIndividualTOMs}{a copy of the input \code{saveCalibratedIndividualTOMs}.} \item{calibratedIndividualTOMFileNames}{if input \code{saveCalibratedIndividualTOMs} is \code{TRUE}, this component will contain the file names of calibrated individual networks. The file names are arranged in a character matrix with each row corresponding to one input set and each column to one block.} \item{networkCalibrationSamples}{if input \code{getNetworkCalibrationSamples} is \code{TRUE}, a list with one component per block. Each component is in turn a list with two components: \code{sampleIndex} is a vector containing the indices of the TOM samples (the indices refer to a flattened distance structure), and \code{TOMSamples} is a matrix of TOM samples with each row corresponding to a sample in \code{sampleIndex}, and each column to one input set.} \item{consensusQuantile}{a copy of the input \code{consensusQuantile}.} \item{originCount}{A vector of length \code{nSets} that contains, for each set, the number of (calibrated) elements that were less than or equal to the consensus for that element.} } \references{ WGCNA methodology has been described in Bin Zhang and Steve Horvath (2005) "A General Framework for Weighted Gene Co-Expression Network Analysis", Statistical Applications in Genetics and Molecular Biology: Vol. 4: No.
1, Article 17 PMID: 16646834 The original reference for the WGCNA package is Langfelder P, Horvath S (2008) WGCNA: an R package for weighted correlation network analysis. BMC Bioinformatics 2008, 9:559 PMID: 19114008 For consensus modules, see Langfelder P, Horvath S (2007) "Eigengene networks for studying the relationships between co-expression modules", BMC Systems Biology 2007, 1:54 This function uses quantile normalization described, for example, in Bolstad BM, Irizarry RA, Astrand M, Speed TP (2003) "A comparison of normalization methods for high density oligonucleotide array data based on variance and bias", Bioinformatics. 2003 Jan 22;19(2):1 } \author{ Peter Langfelder } \seealso{ \code{\link{blockwiseIndividualTOMs}} for calculation of topological overlaps across multiple sets. } \keyword{misc}% __ONLY ONE__ keyword per line WGCNA/man/multiUnion.Rd \name{multiUnion} \alias{multiUnion} \alias{multiIntersect} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Union and intersection of multiple sets } \description{ Union and intersection of multiple sets. These functions generalize the standard functions \code{\link{union}} and \code{\link{intersect}}. } \usage{ multiUnion(setList) multiIntersect(setList) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{setList}{ A list containing the sets on which the operations are to be performed. } } \value{ The union or intersection of the given sets. } \author{ Peter Langfelder } \seealso{ The "standard" functions \code{\link{union}} and \code{\link{intersect}}. } \keyword{misc}% __ONLY ONE__ keyword per line WGCNA/man/signedKME.Rd \name{signedKME} \alias{signedKME} \title{ Signed eigengene-based connectivity } \description{ Calculation of (signed) eigengene-based connectivity, also known as module membership.
} \usage{ signedKME( datExpr, datME, exprWeights = NULL, MEWeights = NULL, outputColumnName = "kME", corFnc = "cor", corOptions = "use = 'p'") } \arguments{ \item{datExpr}{ a data frame containing the gene expression data. Rows correspond to samples and columns to genes. Missing values are allowed and will be ignored. } \item{datME}{ a data frame containing module eigengenes. Rows correspond to samples and columns to module eigengenes. } \item{exprWeights}{ optional weight matrix of observation weights for \code{datExpr}, of the same dimensions as \code{datExpr}. If given, the weights must be non-negative and will be passed on to the correlation function given in argument \code{corFnc} as argument \code{weights.x}.} \item{MEWeights}{ optional weight matrix of observation weights for \code{datME}, of the same dimensions as \code{datME}. If given, the weights must be non-negative and will be passed on to the correlation function given in argument \code{corFnc} as argument \code{weights.y}.} \item{outputColumnName}{ a character string specifying the prefix of column names of the output. } \item{corFnc}{ character string specifying the function to be used to calculate co-expression similarity. Defaults to Pearson correlation. Any function returning values between -1 and 1 can be used. } \item{corOptions}{ character string specifying additional arguments to be passed to the function given by \code{corFnc}. Use \code{"use = 'p', method = 'spearman'"} to obtain Spearman correlation. } } \details{ Signed eigengene-based connectivity of a gene in a module is defined as the correlation of the gene with the corresponding module eigengene. The samples in \code{datExpr} and \code{datME} must be the same. } \value{ A data frame in which rows correspond to input genes and columns to module eigengenes, giving the signed eigengene-based connectivity of each gene with respect to each eigengene. 
} \references{ Dong J, Horvath S (2007) Understanding Network Concepts in Modules, BMC Systems Biology 2007, 1:24 Horvath S, Dong J (2008) Geometric Interpretation of Gene Coexpression Network Analysis. PLoS Comput Biol 4(8): e1000117 } \author{ Steve Horvath } \keyword{ misc }% __ONLY ONE__ keyword per line WGCNA/man/metaZfunction.Rd \name{metaZfunction} \alias{metaZfunction} \title{ Meta-analysis Z statistic } \description{ The function calculates a meta-analysis Z statistic based on an input data frame of Z statistics. } \usage{ metaZfunction(datZ, columnweights = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{datZ}{ Matrix or data frame of Z statistics (assuming standard normal distribution under the null hypothesis). Rows correspond to genes, columns to independent data sets. } \item{columnweights}{ optional vector of non-negative numbers for weighting the columns of datZ. } } \details{ For example, if datZ has 3 columns labelled Z1, Z2, Z3, then ZMeta = (Z1+Z2+Z3)/sqrt(3). Under the null hypothesis (where all Z statistics follow a standard normal distribution and the Z statistics are independent), ZMeta also follows a standard normal distribution. To calculate a two-sided p-value, one can use the following code: pvalue = 2*pnorm(-abs(ZMeta)) } \value{ Vector of meta-analysis Z statistics. Under the null hypothesis this should follow a standard normal distribution. } \author{ Steve Horvath } \keyword{misc} WGCNA/man/conformityBasedNetworkConcepts.Rd \name{conformityBasedNetworkConcepts} \alias{conformityBasedNetworkConcepts} \title{ Calculation of conformity-based network concepts. } \description{ This function computes 3 types of network concepts (also known as network indices or statistics) based on an adjacency matrix and optionally a node significance measure.
} \usage{ conformityBasedNetworkConcepts(adj, GS = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{adj}{ adjacency matrix. A symmetric matrix with components between 0 and 1. } \item{GS}{ optional node significance measure. A vector with length equal to the dimension of \code{adj}. } } \details{ This function computes 3 types of network concepts (also known as network indices or statistics) based on an adjacency matrix and optionally a node significance measure. Specifically, it computes I) fundamental network concepts, II) conformity based network concepts, and III) approximate conformity based network concepts. These network concepts are defined for any symmetric adjacency matrix (weighted and unweighted). The network concepts are described in Dong and Horvath (2007) and Horvath and Dong (2008). In the following, we use the terms gene and node interchangeably since these methods were originally developed for gene networks. We briefly describe the 3 types of network concepts: Type I: fundamental network concepts are defined as a function of the off-diagonal elements of an adjacency matrix A and/or a node significance measure GS. Type II: conformity-based network concepts are functions of the off-diagonal elements of the conformity based adjacency matrix A.CF=CF*t(CF) and/or the node significance measure. These network concepts are defined for any network for which a conformity vector can be defined. Details: For any adjacency matrix A, the conformity vector CF is calculated by requiring that A[i,j] is approximately equal to CF[i]*CF[j]. Using the conformity one can define the matrix A.CF=CF*t(CF), which is the outer product of the conformity vector with itself. In general, A.CF is not an adjacency matrix since its diagonal elements are different from 1. If the off-diagonal elements of A.CF are similar to those of A according to the Frobenius matrix norm, then A is approximately factorizable.
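The relationship between a conformity vector and its conformity-based adjacency matrix can be illustrated with a small, made-up example (the vector \code{CF} below is hypothetical and chosen only for illustration):

\preformatted{
CF <- c(0.9, 0.8, 0.6, 0.4)  # hypothetical conformity vector
A.CF <- outer(CF, CF)        # A.CF = CF * t(CF)
diag(A.CF)                   # diagonal equals CF^2, generally not 1
}

For an approximately factorizable network, the off-diagonal entries of \code{A.CF} are close to the corresponding entries of \code{adj}.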
To measure the factorizability of a network, one can calculate the Factorizability, which is a number between 0 and 1 (Dong and Horvath 2007). The conformity is defined using a monotonic, iterative algorithm that maximizes the factorizability measure. Type III: approximate conformity based network concepts are functions of all elements of the conformity based adjacency matrix A.CF (including the diagonal) and/or the node significance measure GS. These network concepts are very useful for deriving relationships between network concepts in networks that are approximately factorizable. } \value{ A list with the following components: \item{Factorizability}{number between 0 and 1 giving the factorizability of the matrix. The closer to 1 the higher the evidence of factorizability, that is, A-I is close to outer(CF,CF)-diag(CF^2).} \item{fundamentalNCs}{fundamental network concepts, that is network concepts calculated directly from the given adjacency matrix \code{adj}. A list with components \code{ScaledConnectivity} (giving the scaled connectivity of each node), \code{Connectivity} (connectivity of each node), \code{ClusterCoef} (the clustering coefficient of each node), \code{MAR} (maximum adjacency ratio of each node), \code{Density} (the mean density of the network), \code{Centralization} (the centralization of the network), \code{Heterogeneity} (the heterogeneity of the network). If the input node significance \code{GS} is specified, the following additional components are included: \code{NetworkSignificance} (network significance, the mean node significance), and \code{HubNodeSignificance} (hub node significance given by the linear regression of node significance on connectivity). } \item{conformityBasedNCs}{network concepts based on an approximate adjacency matrix given by the outer product of the conformity vector but with unit diagonal. 
A list with components \code{Conformity} (the conformity vector) and \code{Connectivity.CF, ClusterCoef.CF, MAR.CF, Density.CF, Centralization.CF, Heterogeneity.CF} giving the conformity-based analogs of the above network concepts. } \item{approximateConformityBasedNCs}{network concepts based on an approximate adjacency matrix given by the outer product of the conformity vector. A list with components \code{Conformity} (the conformity vector) and \code{Connectivity.CF.App, ClusterCoef.CF.App, MAR.CF.App, Density.CF.App, Centralization.CF.App, Heterogeneity.CF.App} giving the conformity-based analogs of the above network concepts. } } \references{ Dong J, Horvath S (2007) Understanding Network Concepts in Modules, BMC Systems Biology 2007, 1:24 Horvath S, Dong J (2008) Geometric Interpretation of Gene Coexpression Network Analysis. PLoS Comput Biol 4(8): e1000117 } \author{ Steve Horvath } \seealso{ \code{\link{networkConcepts}} for calculation of eigennode based network concepts for a correlation network; \code{\link{fundamentalNetworkConcepts}} for calculation of fundamental network concepts only. } \keyword{ misc }% __ONLY ONE__ keyword per line WGCNA/man/ImmunePathwayLists.Rd \name{ImmunePathwayLists} \alias{ImmunePathwayLists} \docType{data} \title{Immune Pathways with Corresponding Gene Markers} \description{ This matrix gives a predefined set of marker genes for many immune response pathways, as assembled by Brian Modena (a member of Daniel R Salomon's lab at Scripps Research Institute), and colleagues. It is used with userListEnrichment to search user-defined gene lists for enrichment. } \usage{data(ImmunePathwayLists)} \format{ A 3597 x 2 matrix of characters containing Gene / Category pairs. The first column (Gene) lists genes corresponding to a given category (second column). Each Category entry is of the form __ImmunePathway.
Note that the matrix is sorted first by Category and then by Gene, such that all genes related to the same category are listed sequentially. } \source{ For more information about this list, please see \code{\link{userListEnrichment}} } \examples{ data(ImmunePathwayLists) head(ImmunePathwayLists) } \keyword{datasets} WGCNA/man/nPresent.Rd \name{nPresent} \alias{nPresent} \title{ Number of present data entries. } \description{ A simple sum of present entries in the argument. } \usage{ nPresent(x) } \arguments{ \item{x}{ data in which to count number of present entries. } } \value{ A single number giving the number of present entries in \code{x}. } \author{ Steve Horvath } \keyword{ misc }% __ONLY ONE__ keyword per line WGCNA/man/newNetworkOptions.Rd \name{newNetworkOptions} \alias{newNetworkOptions} \alias{NetworkOptions} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Create a list of network construction arguments (options). } \description{ This function creates a reusable list of network calculation arguments/options. } \usage{ newNetworkOptions( correlationOptions = newCorrelationOptions(), # Adjacency options replaceMissingAdjacencies = TRUE, power = 6, networkType = c("signed hybrid", "signed", "unsigned"), checkPower = TRUE, # Topological overlap options TOMType = c("signed", "signed Nowick", "unsigned", "none", "signed 2", "signed Nowick 2", "unsigned 2"), TOMDenom = c("mean", "min"), suppressTOMForZeroAdjacencies = FALSE, suppressNegativeTOM = FALSE, # Internal behavior options useInternalMatrixAlgebra = FALSE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{correlationOptions}{ A list of correlation options. See \code{\link{newCorrelationOptions}}. } \item{replaceMissingAdjacencies}{Logical: should missing adjacencies be replaced by zero?
} \item{power}{ Soft-thresholding power for network construction. } \item{networkType}{ network type. Allowed values are (unique abbreviations of) \code{"unsigned"}, \code{"signed"}, \code{"signed hybrid"}. See \code{\link{adjacency}}. } \item{checkPower}{Logical: should the power be checked for sanity? } \item{TOMType}{One of \code{"none"}, \code{"unsigned"}, \code{"signed"}, \code{"signed Nowick"}, \code{"unsigned 2"}, \code{"signed 2"} and \code{"signed Nowick 2"}. If \code{"none"}, adjacency will be used for clustering. See \code{\link{TOMsimilarityFromExpr}} for details.} \item{TOMDenom}{Character string specifying the TOM variant to be used. Recognized values are \code{"min"} giving the standard TOM described in Zhang and Horvath (2005), and \code{"mean"} in which the \code{min} function in the denominator is replaced by \code{mean}. The \code{"mean"} may produce better results but at this time should be considered experimental. } \item{suppressTOMForZeroAdjacencies}{logical: for those components that have zero adjacency, should TOM be set to zero as well?} \item{suppressNegativeTOM}{Logical: should the result be set to zero when negative? Negative TOM values can occur when \code{TOMType} is \code{"signed Nowick"}.} \item{useInternalMatrixAlgebra}{logical: should internal implementation of matrix multiplication be used instead of R-provided BLAS? The internal implementation is slow and this option should only be used if one suspects a bug in R-provided BLAS.} } \value{ A list of class \code{NetworkOptions}. } \author{ Peter Langfelder } \seealso{ \code{\link{newCorrelationOptions}} } \keyword{misc} WGCNA/man/mtd.rbindSelf.Rd \name{mtd.rbindSelf} \alias{mtd.rbindSelf} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Turn a multiData structure into a single matrix or data frame.
} \description{ This function "rbinds" the \code{data} components of all sets in the input into a single matrix or data frame. } \usage{ mtd.rbindSelf(multiData) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{multiData}{ A multiData structure. } } \details{ A multiData structure is intended to store (the same type of) data for multiple, possibly independent, realizations (for example, expression data for several independent experiments). It is a list where each component corresponds to an (independent) data set. Each component is in turn a list that can hold various types of information but must have a \code{data} component. In a "strict" multiData structure, the \code{data} components are required to each be a matrix or a data frame and have the same number of columns. In a "loose" multiData structure, the \code{data} components can be anything (but for most purposes should be of comparable type and content). This function requires a "strict" multiData structure. } \value{ A single matrix or data frame containing the "rbinded" result. } \author{ Peter Langfelder } \seealso{ \code{\link{multiData}} to create a multiData structure; \code{\link{rbind}} for various subtleties of the row binding operation. } \keyword{ misc }% __ONLY ONE__ keyword per line WGCNA/man/dynamicMergeCut.Rd \name{dynamicMergeCut} \alias{dynamicMergeCut} \title{ Threshold for module merging } \description{ Calculate a suitable threshold for module merging based on the number of samples and a desired Z quantile. } \usage{ dynamicMergeCut(n, mergeCor = 0.9, Zquantile = 2.35) } \arguments{ \item{n}{number of samples } \item{mergeCor}{ theoretical correlation threshold for module merging } \item{Zquantile}{ Z quantile for module merging } } \details{ This function calculates the threshold for module merging.
The threshold is calculated as the lower boundary of the interval around the theoretical correlation \code{mergeCor} whose width is given by the Z value \code{Zquantile}. } \value{ The correlation threshold for module merging; a single number. } \author{ Steve Horvath } \seealso{ \code{\link{moduleEigengenes}}, \code{\link{mergeCloseModules}} } \examples{ dynamicMergeCut(20) dynamicMergeCut(50) dynamicMergeCut(100) } \keyword{ misc }% __ONLY ONE__ keyword per line WGCNA/man/sizeGrWindow.Rd \name{sizeGrWindow} \alias{sizeGrWindow} \title{ Opens a graphics window with specified dimensions } \description{ If a graphics device window is already open, it is closed and re-opened with specified dimensions (in inches); otherwise a new window is opened. } \usage{ sizeGrWindow(width, height) } \arguments{ \item{width}{ desired width of the window, in inches. } \item{height}{ desired height of the window, in inches. } } \value{ None. } \author{ Peter Langfelder } \keyword{ misc } WGCNA/man/verboseScatterplot.Rd \name{verboseScatterplot} \alias{verboseScatterplot} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Scatterplot annotated by regression line and p-value} \description{ Produce a scatterplot annotated by the correlation, p-value, and regression line. } \usage{ verboseScatterplot(x, y, sample = NULL, corFnc = "cor", corOptions = "use = 'p'", main = "", xlab = NA, ylab = NA, cex = 1, cex.axis = 1.5, cex.lab = 1.5, cex.main = 1.5, abline = FALSE, abline.color = 1, abline.lty = 1, corLabel = corFnc, displayAsZero = 1e-5, col = 1, bg = 0, pch = 1, lmFnc = lm, plotPriority = NULL, showPValue = TRUE, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x}{ numerical vector to be plotted along the x axis. } \item{y}{ numerical vector to be plotted along the y axis.
} \item{sample}{ determines whether \code{x} and \code{y} should be sampled for plotting, useful to keep the plot manageable when \code{x} and \code{y} are large vectors. The default \code{NULL} value implies no sampling. A single numeric value will be interpreted as the number of points to sample randomly. If a vector is given, it will be interpreted as the indices of the entries in \code{x} and \code{y} that should be plotted. In either case, the correlation and p-value will be determined from the full vectors \code{x} and \code{y}.} \item{corFnc}{ character string giving the correlation function to annotate the plot. } \item{corOptions}{ character string giving further options to the correlation function. } \item{main}{ main title for the plot.} \item{xlab}{ label for the x-axis. } \item{ylab}{ label for the y-axis. } \item{cex}{ character expansion factor for plot annotations, recycled as necessary. } \item{cex.axis}{ character expansion factor for axis annotations. } \item{cex.lab}{ character expansion factor for axis labels. } \item{cex.main}{ character expansion factor for the main title. } \item{abline}{ logical: should the linear regression fit line be plotted? } \item{abline.color}{ color specification for the fit line.} \item{abline.lty}{ line type for the fit line.} \item{corLabel}{ character string to be used as the label for the correlation value printed in the main title. } \item{displayAsZero}{ Correlations whose absolute value is smaller than this number will be displayed as zero. This can result in a more intuitive display (for example, cor=0 instead of cor=2.6e-17).} \item{col}{color of the plotted symbols. Recycled as necessary. } \item{bg}{fill color of the plotted symbols (used for certain symbols). Recycled as necessary. } \item{pch}{Integer code for plotted symbols (see \code{\link{plot.default}}). Recycled as necessary. } \item{lmFnc}{linear model fit function. Used to calculate the linear model fit line if \code{'abline'} is \code{TRUE}.
For example, robust linear models are implemented in the function \code{\link[MASS]{rlm}}. } \item{plotPriority}{Optional numeric vector of same length as \code{x}. Points with higher plot priority will be plotted later, making them more visible if points overlap.} \item{showPValue}{Logical: should the p-value corresponding to the correlation be added to the title?} \item{\dots}{ other arguments to the function \code{\link{plot}}. } } \details{ Irrespective of the specified correlation function, the p-value is always calculated for the Pearson correlation. } \value{ If \code{sample} above is given, the indices of the plotted points are returned invisibly. } \author{ Steve Horvath and Peter Langfelder } \seealso{ \code{\link{plot.default}} for standard scatterplots } \keyword{hplot}% __ONLY ONE__ keyword per line WGCNA/man/checkSets.Rd \name{checkSets} \alias{checkSets} \title{Check structure and retrieve sizes of a group of datasets. } \description{ Checks whether given sets have the correct format and retrieves dimensions. } \usage{ checkSets(data, checkStructure = FALSE, useSets = NULL) } \arguments{ \item{data}{ A vector of lists; in each list there must be a component named \code{data} whose content is a matrix or dataframe or array of dimension 2. } \item{checkStructure}{If \code{FALSE}, incorrect structure of \code{data} will trigger an error. If \code{TRUE}, an appropriate flag (see output) will be set to indicate whether \code{data} has correct structure.} \item{useSets}{Optional specification of entries of the vector \code{data} that are to be checked. Defaults to all components.
This may be useful when \code{data} only contains information for some of the sets.} } \details{ For multiset calculations, many quantities (such as expression data, traits, module eigengenes, etc.) are represented by a common structure, a vector of lists (one list for each set) where each list has a component \code{data} that contains the actual (expression, trait, eigengene) data for the corresponding set in the form of a dataframe. This function checks whether \code{data} conforms to this convention and retrieves some basic dimension information (see output). } \value{ A list with components \item{nSets}{Number of sets (length of the vector \code{data}).} \item{nGenes}{Number of columns in the \code{data} components in the lists. This number must be the same for all sets.} \item{nSamples}{A vector of length \code{nSets} giving the number of rows in the \code{data} components.} \item{structureOK}{Only set if the argument \code{checkStructure} equals \code{TRUE}. The value is \code{TRUE} if the parameter \code{data} passes a few tests of its structure, and \code{FALSE} otherwise. The tests are not exhaustive and are meant to catch obvious user errors rather than be bulletproof.} } \author{ Peter Langfelder, \email{Peter.Langfelder@gmail.com} } %\seealso{ ~~objects to See Also as \code{\link{help}}, ~~~ } %\examples{} % Add one or more standard keywords, see file 'KEYWORDS' in the % R documentation directory.
\keyword{ misc } WGCNA/DESCRIPTION Package: WGCNA Version: 1.73 Date: 2024-09-18 Title: Weighted Correlation Network Analysis Maintainer: Peter Langfelder Depends: R (>= 3.0), dynamicTreeCut (>= 1.62), fastcluster Imports: stats, grDevices, utils, matrixStats (>= 0.8.1), Hmisc, impute, splines, foreach, doParallel, preprocessCore, survival, parallel, GO.db, AnnotationDbi, Rcpp (>= 0.11.0) Suggests: org.Hs.eg.db, org.Mm.eg.db, infotheo, entropy, minet LinkingTo: Rcpp ZipData: no License: GPL (>= 2) Authors@R: c(person(given = "Peter", family = "Langfelder", role = c("aut", "cre"), email = "Peter.Langfelder@gmail.com"), person(given = "Steve", family = "Horvath", role = "aut"), person(given = "Chaochao", family = "Cai", role = "aut"), person(given = "Jun", family = "Dong", role = "aut"), person(given = "Jeremy", family = "Miller", role = "aut"), person(given = "Lin", family = "Song", role = "aut"), person(given = "Andy", family = "Yip", role = "aut"), person(given = "Bin", family = "Zhang", role = "aut")) Description: Functions necessary to perform Weighted Correlation Network Analysis on high-dimensional data as originally described in Horvath and Zhang (2005) and Langfelder and Horvath (2008). Includes functions for rudimentary data cleaning, construction of correlation networks, module identification, summarization, and relating of variables and modules to sample traits. Also includes a number of utility functions for data manipulation and visualization. NeedsCompilation: yes Packaged: 2024-09-18 12:43:22 UTC; plangfelder Author: Peter Langfelder [aut, cre], Steve Horvath [aut], Chaochao Cai [aut], Jun Dong [aut], Jeremy Miller [aut], Lin Song [aut], Andy Yip [aut], Bin Zhang [aut] Repository: CRAN Date/Publication: 2024-09-18 15:20:02 UTC
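The \details section of dynamicMergeCut.Rd above describes the merging threshold as the lower boundary of an interval around the theoretical correlation \code{mergeCor} whose width is set by \code{Zquantile}. A minimal R sketch of that idea follows; it is not the package's implementation, and both the function name and the use of the Fisher z-transform scale for the interval are assumptions for illustration:

```r
# Hypothetical sketch of a dynamicMergeCut-style calculation (not the WGCNA
# source). Assumption: the interval around mergeCor is taken on the Fisher
# z-transform scale, where the standard error of atanh(r) for a sample of
# size n is approximately 1/sqrt(n - 3).
dynamicMergeCutSketch <- function(n, mergeCor = 0.9, Zquantile = 2.35) {
  if (n < 4) stop("At least 4 samples are needed.")
  z <- atanh(mergeCor)                       # Fisher z-transform of target correlation
  lowerCor <- tanh(z - Zquantile / sqrt(n - 3))  # lower boundary, back-transformed
  # Modules whose eigengene dissimilarity (1 - correlation) falls below this
  # cut height would be merged.
  1 - lowerCor
}

dynamicMergeCutSketch(20)   # small sample: wide interval, larger cut height
dynamicMergeCutSketch(100)  # large sample: interval tightens, smaller cut height
```

With more samples the interval around \code{mergeCor} narrows, so the returned cut height decreases, mirroring the sample-size dependence that motivates the function.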