--- VGAM/MD5 ---
[Checksum manifest of the package tarball: one "md5sum *file" pair per file
(BUGS, DESCRIPTION, DISCLAIMER, NAMESPACE, NEWS, R/*, data/*, demo/*, inst/*,
man/*, src/*).  Machine-generated; the several hundred entries and the
interleaved tar headers are omitted here.]
--- VGAM/src/zeta3.c ---

#include <math.h>
#include <stdio.h>
#include <R.h>
/* The three include targets were lost in extraction; <math.h>, <stdio.h>
   and <R.h> (for Rprintf) are restored here as an assumption. */

void vzetawr(double sjwyig9t[], double *bqelz3cy, int *kpzavbj3, int *f8yswcat);
double fvlmz9iyzeta8(double , double kxae8glp[]);
double fvlmz9iydzeta8(double , double kxae8glp[]);
double fvlmz9iyddzeta8(double , double kxae8glp[]);
void vbecoef(double kxae8glp[]);

/* Vectorized driver: writes zeta(s) (*kpzavbj3 == 0), zeta'(s) (== 1) or
   zeta''(s) (== 2) for each of the *f8yswcat abscissae in sjwyig9t[] into
   bqelz3cy[]. */
void vzetawr(double sjwyig9t[], double *bqelz3cy, int *kpzavbj3, int *f8yswcat) {
  int ayfnwr1v;
  double *qnwamo0e1, *qnwamo0e2;
  double kxae8glp[12];

  vbecoef(kxae8glp);
  qnwamo0e1 = bqelz3cy;
  qnwamo0e2 = sjwyig9t;
  if (*kpzavbj3 == 0) {
    for (ayfnwr1v = 0; ayfnwr1v < *f8yswcat; ayfnwr1v++) {
      *qnwamo0e1++ = fvlmz9iyzeta8(*qnwamo0e2++, kxae8glp);
    }
  } else if (*kpzavbj3 == 1) {
    for (ayfnwr1v = 0; ayfnwr1v < *f8yswcat; ayfnwr1v++) {
      *qnwamo0e1++ = fvlmz9iydzeta8(*qnwamo0e2++, kxae8glp);
    }
  } else if (*kpzavbj3 == 2) {
    for (ayfnwr1v = 0; ayfnwr1v < *f8yswcat; ayfnwr1v++) {
      *qnwamo0e1++ = fvlmz9iyddzeta8(*qnwamo0e2++, kxae8glp);
    }
  } else {
    Rprintf("Error: *kpzavbj3 must equal 0, 1 or 2 in C function vzetawr\n");
  }
}

/* Riemann zeta(s) by Euler-Maclaurin summation: 11 explicit terms
   (ayfnwr1v = 12) plus 8 Bernoulli-number corrections (gp1jxzuh = 8);
   kxae8glp[] holds B2, B4, ..., B24 from vbecoef(). */
double fvlmz9iyzeta8(double ghz9vuba, double kxae8glp[]) {
  int ayfnwr1v, gp1jxzuh, uw3favmo, nsvdbx3tk, m2svdbx3tk;
  double q6zdcwxk, xvr7bonh, a2svdbx3tk, fred;

  ayfnwr1v = 12;
  gp1jxzuh = 8;
  a2svdbx3tk = pow((double) ayfnwr1v, (double) 2.0);
  xvr7bonh = ghz9vuba / 2.000 / a2svdbx3tk;
  q6zdcwxk = 1.000 / (ghz9vuba - 1.000) + 0.500 / ayfnwr1v +
             kxae8glp[0] * xvr7bonh;
  for (uw3favmo = 2; uw3favmo <= gp1jxzuh; uw3favmo++) {
    m2svdbx3tk = uw3favmo + uw3favmo;
    xvr7bonh *= (ghz9vuba + m2svdbx3tk - 3.000) * (ghz9vuba + m2svdbx3tk - 2.000) /
                (m2svdbx3tk - 1.000) / m2svdbx3tk / a2svdbx3tk;
    q6zdcwxk += xvr7bonh * kxae8glp[uw3favmo-1];
  }
  fred = pow((double) ayfnwr1v, (double) 1.0 - ghz9vuba);
  q6zdcwxk = 1.000 + q6zdcwxk * fred;
  for (nsvdbx3tk = 2; nsvdbx3tk < ayfnwr1v; nsvdbx3tk++) {
    q6zdcwxk += pow((double) nsvdbx3tk, (double) -ghz9vuba);
  }
  return q6zdcwxk;
}

/* First derivative zeta'(s), differentiating the same expansion term by
   term (dh9mgvze accumulates the logarithmic derivative of each term). */
double fvlmz9iydzeta8(double ghz9vuba, double kxae8glp[]) {
  int ayfnwr1v, gp1jxzuh, uw3favmo, nsvdbx3tk, m2svdbx3tk;
  double q6zdcwxk, xvr7bonh, dh9mgvze, a2svdbx3tk, ugqvjoe5a, ugqvjoe5n, fred;

  ayfnwr1v = 12;
  gp1jxzuh = 8;
  ugqvjoe5a = log( (double) ayfnwr1v );
  a2svdbx3tk = ayfnwr1v * ayfnwr1v;
  xvr7bonh = ghz9vuba / 2.000 / a2svdbx3tk;
  dh9mgvze = 1.000 / ghz9vuba - ugqvjoe5a;
  q6zdcwxk = kxae8glp[0] * xvr7bonh * dh9mgvze;
  for (uw3favmo = 2; uw3favmo <= gp1jxzuh; uw3favmo++) {
    m2svdbx3tk = uw3favmo + uw3favmo;
    xvr7bonh *= (ghz9vuba + m2svdbx3tk - 3.0) * (ghz9vuba + m2svdbx3tk - 2.0) /
                (m2svdbx3tk - 1.0) / m2svdbx3tk / a2svdbx3tk;
    dh9mgvze += 1.0 / (ghz9vuba + m2svdbx3tk - 3.0) +
                1.0 / (ghz9vuba + m2svdbx3tk - 2.0);
    q6zdcwxk += kxae8glp[uw3favmo-1] * xvr7bonh * dh9mgvze;
  }
  fred = pow((double) ayfnwr1v, (double) 1.0 - ghz9vuba);
  q6zdcwxk = (q6zdcwxk - 1.000 / pow(ghz9vuba - 1.000, (double) 2.0) -
              ugqvjoe5a * (1.000 / (ghz9vuba - 1.000) + 0.5000 / ayfnwr1v)) * fred;
  for (nsvdbx3tk = 2; nsvdbx3tk < ayfnwr1v; nsvdbx3tk++) {
    ugqvjoe5n = log( (double) nsvdbx3tk );
    q6zdcwxk -= ugqvjoe5n / exp(ugqvjoe5n * ghz9vuba);
  }
  return q6zdcwxk;
}

/* Second derivative zeta''(s).  Note the strict '<' loop bound here
   (uw3favmo < gp1jxzuh), i.e. one fewer correction term than in the two
   functions above; kept as in the archived source. */
double fvlmz9iyddzeta8(double ghz9vuba, double kxae8glp[]) {
  int ayfnwr1v, gp1jxzuh, uw3favmo, nsvdbx3tk, m2svdbx3tk;
  double q6zdcwxk, xvr7bonh, dh9mgvze, hpmwnav2, a2svdbx3tk,
         ugqvjoe5a, ugqvjoe5n, fred, fred2;

  ayfnwr1v = 12;
  gp1jxzuh = 8;
  ugqvjoe5a = log( (double) ayfnwr1v );
  a2svdbx3tk = ayfnwr1v * ayfnwr1v;
  xvr7bonh = ghz9vuba / 2.000 / a2svdbx3tk;
  dh9mgvze = 1.000 / ghz9vuba - ugqvjoe5a;
  hpmwnav2 = 1.000 / ghz9vuba / ghz9vuba;
  q6zdcwxk = kxae8glp[0] * xvr7bonh * (pow(dh9mgvze, (double) 2.0) - hpmwnav2);
  for (uw3favmo = 2; uw3favmo < gp1jxzuh; uw3favmo++) {
    m2svdbx3tk = uw3favmo + uw3favmo;
    xvr7bonh *= (ghz9vuba + m2svdbx3tk - 3.000) * (ghz9vuba + m2svdbx3tk - 2.000) /
                (m2svdbx3tk - 1.0) / m2svdbx3tk / a2svdbx3tk;
    dh9mgvze += 1.000 / (ghz9vuba + m2svdbx3tk - 3.000) +
                1.000 / (ghz9vuba + m2svdbx3tk - 2.000);
    hpmwnav2 += 1.000 / pow(ghz9vuba + m2svdbx3tk - 3.000, (double) 2.0) +
                1.000 / pow(ghz9vuba + m2svdbx3tk - 2.000, (double) 2.0);
    q6zdcwxk += kxae8glp[uw3favmo-1] * xvr7bonh * (dh9mgvze * dh9mgvze - hpmwnav2);
  }
  fred = pow((double) ayfnwr1v, (double) 1.0 - ghz9vuba);
  fred2 = pow(ugqvjoe5a, (double) 2.0) * (1.0 / (ghz9vuba - 1.0) + 0.50 / ayfnwr1v);
  q6zdcwxk = (q6zdcwxk + 2.0 / pow(ghz9vuba - 1.0, (double) 3.0) +
              2.0 * ugqvjoe5a / pow(ghz9vuba - 1.0, (double) 2.0) + fred2) * fred;
  for (nsvdbx3tk = 2; nsvdbx3tk < ayfnwr1v; nsvdbx3tk++) {
    ugqvjoe5n = log( (double) nsvdbx3tk );
    q6zdcwxk += pow(ugqvjoe5n, (double) 2.0) / exp(ugqvjoe5n * ghz9vuba);
  }
  return q6zdcwxk;
}

/* Bernoulli numbers B2, B4, ..., B24 for the expansions above; e.g.
   4386.700/79.800 == 43867/798 == B18. */
void vbecoef(double kxae8glp[]) {
  kxae8glp[0]  =     1.000 /    6.000;
  kxae8glp[1]  =    -1.000 /   30.000;
  kxae8glp[2]  =     1.000 /   42.000;
  kxae8glp[3]  =    -1.000 /   30.000;
  kxae8glp[4]  =     5.000 /   66.000;
  kxae8glp[5]  =  -691.000 / 2730.000;
  kxae8glp[6]  =     7.000 /    6.000;
  kxae8glp[7]  = -3617.000 /  510.000;
  kxae8glp[8]  =  4386.700 /   79.800;
  kxae8glp[9]  = -1746.1100 /   3.3000;
  kxae8glp[10] =  8545.1300 /   1.3800;
  kxae8glp[11] = -2363.6409100 / 0.0273000;
}
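The three routines above share one Euler-Maclaurin scheme.  As a sanity
check, here is a small self-contained program (an editor's sketch, not part
of the package; `zeta_em` and all other names are invented for illustration)
that re-expresses the same scheme with descriptive names and evaluates
zeta(2), which should come out near pi^2/6 = 1.6449340668482264.

/* Editor's sketch (not VGAM code): the Euler-Maclaurin scheme used by
   fvlmz9iyzeta8() above, written out plainly. */
#include <math.h>
#include <stdio.h>

/* Bernoulli numbers B2, B4, ..., B24: the values vbecoef() loads. */
static const double B2k[12] = {
  1.0/6.0, -1.0/30.0, 1.0/42.0, -1.0/30.0, 5.0/66.0, -691.0/2730.0,
  7.0/6.0, -3617.0/510.0, 43867.0/798.0, -174611.0/330.0,
  854513.0/138.0, -236364091.0/2730.0
};

static double zeta_em(double s) {
  const int N = 12, M = 8;          /* explicit terms / correction terms  */
  const double Nd = (double) N;
  double sum = 0.0, fact = s / (2.0 * Nd * Nd);
  int n, k;
  for (n = 1; n < N; n++)           /* sum_{n=1}^{N-1} n^(-s)             */
    sum += pow((double) n, -s);
  sum += pow(Nd, 1.0 - s) / (s - 1.0) + 0.5 * pow(Nd, -s);  /* tail terms */
  sum += B2k[0] * fact * pow(Nd, 1.0 - s);                  /* k = 1      */
  for (k = 2; k <= M; k++) {        /* same recurrence as xvr7bonh above  */
    fact *= (s + 2.0*k - 3.0) * (s + 2.0*k - 2.0) /
            ((2.0*k - 1.0) * (2.0*k) * Nd * Nd);
    sum += B2k[k-1] * fact * pow(Nd, 1.0 - s);
  }
  return sum;
}

int main(void) {
  printf("zeta(2) ~= %.15f\n", zeta_em(2.0));  /* expect 1.644934066848... */
  return 0;
}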
/* Normalizing-constant series for a two-parameter count distribution;
   only the *kpzavbj3 == 0 branch is implemented (the == 1 and == 2
   branches are stubs in the archived source). */
void conmax_Z(double *lamvec, double *nuvec, double *bqelz3cy, int *nlength,
              int *kpzavbj3, double *qaltf0nz) {
  double *pq6zdcwxk, denom = 0.0, yq6lorbx, prevterm;
  int ayfnwr1v;

  *qaltf0nz = 1.0e-6;
  if (*kpzavbj3 == 0) {
    pq6zdcwxk = bqelz3cy;
    for (ayfnwr1v = 0; ayfnwr1v < *nlength; ayfnwr1v++) {
      prevterm = 1.0 + *lamvec;
      denom = 1.0;
      *pq6zdcwxk = prevterm;
      yq6lorbx = 2.0;
      if (*nuvec == 0.0 && *lamvec >= 1.0) {
        Rprintf("Error: series will not converge. Returning 0.0\n");
        *pq6zdcwxk = 0.0;
      } else {
        while (prevterm > *qaltf0nz) {
          denom = denom * pow(yq6lorbx, *lamvec);
          prevterm = prevterm * *lamvec / denom;
          *pq6zdcwxk += prevterm;
          yq6lorbx += 1.0;
        }
      }
      lamvec++; nuvec++; pq6zdcwxk++;
    }
  } else if (*kpzavbj3 == 1) {
  } else if (*kpzavbj3 == 2) {
  }
}

--- VGAM/src/vmux3.c ---

#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include <R.h>
#include <Rmath.h>
/* The five include targets were lost in extraction and are restored here as
   an assumption (Calloc/Free and Rprintf come from R.h). */

void fvlmz9iyC_qpsedg8x(int tgiyxdw1[], int dufozmt7[], int *wy1vqfzu);
int fvlmz9iyC_VIAM(int *cz8qdfyj, int *rvy1fpli, int *wy1vqfzu);
void fvlmz9iyC_vm2a(double mtlgduey8[], double bzmd6ftvmat[], int *dim1m,
                    int *f8yswcat, int *wy1vqfzu, int *iupper,
                    int tgiyxdw1[], int dufozmt7[], int *oey3ckps);
void fvlmz9iyC_nudh6szq(double wpuarq2m[], double tlgduey8[],
                        double bzmd6ftvmat[], int *npjlv3mr,
                        int *f8yswcat, int *wy1vqfzu);
void fvlmz9iyC_vbks(double wpuarq2m[], double unvxka0m[], int *wy1vqfzu,
                    int *f8yswcat, int *dimu);
void fvlmz9iyjdbomp0g(double rbne6ouj[], double unvxka0m[], int *wy1vqfzu,
                      int *dvhw1ulq, int *i_solve);
void fvlmz9iyC_mxrbkut0(double wpuarq2m[], double he7mqnvy[], int *wy1vqfzu,
                        int *xjc4ywlh, int *f8yswcat, int *dimu,
                        int *rutyk8mg);
void fvlmz9iyC_lkhnw9yq(double wpuarq2m[], double ks3wejcv[], int *npjlv3mr,
                        int *wy1vqfzu, int *dvhw1ulq);
double fvlmz9iyC_tldz5ion(double xx);
void fvlmz9iyC_enbin9(double bzmd6ftv[], double hdqsx7bk[], double nm0eljqk[],
                      double *n2kersmx, int *f8yswcat, int *dvhw1ulq,
                      int *zy1mchbf, double *ux3nadiw, double *rsynp1go,
                      int *sguwj9ty);
void fvlmz9iyC_enbin8(double bzmd6ftv[], double hdqsx7bk[], double hsj9bzaq[],
                      double *n2kersmx, int *f8yswcat, int *dvhw1ulq,
                      int *zy1mchbf, double *ux3nadiw, double *rsynp1go);
void fvlmz9iyC_mbessI0(double unvxka0m[], int *f8yswcat, int *kpzavbj3,
                       double dvector0[], double dvector1[], double dvector2[],
                       int *zjkrtol8, double *qaltf0nz);
void VGAM_C_mux34(double he7mqnvy[], double Dmat[], int *vnc1izfy,
                  int *e0nmabdk, int *ui4ntmvd, double bqelz3cy[]);

/* Builds the row (tgiyxdw1) and column (dufozmt7) index vectors of the
   package's matrix-band order for a symmetric M x M matrix: all M diagonal
   entries first, then each successive superdiagonal. */
void fvlmz9iyC_qpsedg8x(int tgiyxdw1[], int dufozmt7[], int *wy1vqfzu) {
  int urohxe6t, bpvaqm5z, *ptri;

  ptri = tgiyxdw1;
  for (urohxe6t = *wy1vqfzu; urohxe6t >= 1; urohxe6t--) {
    for (bpvaqm5z = 1; bpvaqm5z <= urohxe6t; bpvaqm5z++) {
      *ptri++ = bpvaqm5z;
    }
  }
  ptri = dufozmt7;
  for (urohxe6t = 1; urohxe6t <= *wy1vqfzu; urohxe6t++) {
    for (bpvaqm5z = urohxe6t; bpvaqm5z <= *wy1vqfzu; bpvaqm5z++) {
      *ptri++ = bpvaqm5z;
    }
  }
}

/* Returns the 1-based position of element (cz8qdfyj, rvy1fpli) in that
   band order, or 0 if not found. */
int fvlmz9iyC_VIAM(int *cz8qdfyj, int *rvy1fpli, int *wy1vqfzu) {
  int urohxe6t;
  int *wkumc9idtgiyxdw1, *wkumc9iddufozmt7;
  int imk5wjxg = *wy1vqfzu * (*wy1vqfzu + 1) / 2;

  wkumc9idtgiyxdw1 = Calloc(imk5wjxg, int);
  wkumc9iddufozmt7 = Calloc(imk5wjxg, int);
  fvlmz9iyC_qpsedg8x(wkumc9idtgiyxdw1, wkumc9iddufozmt7, wy1vqfzu);
  for (urohxe6t = 1; urohxe6t <= imk5wjxg; urohxe6t++) {
    if ((wkumc9idtgiyxdw1[urohxe6t-1] == *cz8qdfyj &&
         wkumc9iddufozmt7[urohxe6t-1] == *rvy1fpli) ||
        (wkumc9idtgiyxdw1[urohxe6t-1] == *rvy1fpli &&
         wkumc9iddufozmt7[urohxe6t-1] == *cz8qdfyj)) {
      Free(wkumc9idtgiyxdw1); Free(wkumc9iddufozmt7);
      return urohxe6t;
    }
  }
  Free(wkumc9idtgiyxdw1); Free(wkumc9iddufozmt7);
  return 0;
}
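The two index vectors built by fvlmz9iyC_qpsedg8x() define the storage order
used throughout these files.  A throwaway sketch (editor's illustration, not
package code) that replays the same two loops for M = 3 and prints the
resulting (row, column) pairs:

/* Editor's sketch (not VGAM code): the index pattern of
   fvlmz9iyC_qpsedg8x(), shown for M = 3.  Output is
   (1,1) (2,2) (3,3) (1,2) (2,3) (1,3): diagonal first, then each
   successive superdiagonal. */
#include <stdio.h>
int main(void) {
  int M = 3, row[6], col[6], i, j, k = 0;
  for (i = M; i >= 1; i--)           /* same loops as in the C above */
    for (j = 1; j <= i; j++)
      row[k++] = j;
  k = 0;
  for (i = 1; i <= M; i++)
    for (j = i; j <= M; j++)
      col[k++] = j;
  for (k = 0; k < M * (M + 1) / 2; k++)
    printf("pair %d: (%d,%d)\n", k + 1, row[k], col[k]);
  return 0;
}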
/* Expands matrix-band storage (one column of mtlgduey8 per observation)
   into full M x M matrices in bzmd6ftvmat; with *iupper == 1 only the
   upper triangle is filled.  The branch test read "if (iupper == 0)" in
   the archived source, i.e. a pointer test; the dereference is restored
   here as the evident intent. */
void fvlmz9iyC_vm2a(double mtlgduey8[], double bzmd6ftvmat[], int *dim1m,
                    int *f8yswcat, int *wy1vqfzu, int *iupper,
                    int tgiyxdw1[], int dufozmt7[], int *oey3ckps) {
  int ayfnwr1v, yq6lorbx, gp1jxzuh, urohxe6t;
  int bpvaqm5z, usvdbx3tk, i_size_bzmd6ftvmat,
      imk5wjxg = *wy1vqfzu * (*wy1vqfzu + 1) / 2,
      zyojx5hw = *wy1vqfzu * *wy1vqfzu;
  double *qnwamo0e;

  if (*oey3ckps == 1) {
    if (*iupper == 1 || *dim1m != imk5wjxg) {
      i_size_bzmd6ftvmat = zyojx5hw * *f8yswcat;
      qnwamo0e = bzmd6ftvmat;
      for (ayfnwr1v = 0; ayfnwr1v < i_size_bzmd6ftvmat; ayfnwr1v++) {
        *qnwamo0e++ = 0.0e0;
      }
    }
  }

  if (*iupper == 0) {
    for (ayfnwr1v = 1; ayfnwr1v <= *f8yswcat; ayfnwr1v++) {
      urohxe6t = (ayfnwr1v-1) * zyojx5hw;
      for (yq6lorbx = 1; yq6lorbx <= *dim1m; yq6lorbx++) {
        bpvaqm5z = tgiyxdw1[yq6lorbx-1] - 1 +
                   (dufozmt7[yq6lorbx-1] - 1) * *wy1vqfzu + urohxe6t;
        usvdbx3tk = dufozmt7[yq6lorbx-1] - 1 +
                    (tgiyxdw1[yq6lorbx-1] - 1) * *wy1vqfzu + urohxe6t;
        gp1jxzuh = (yq6lorbx-1) + (ayfnwr1v-1) * *dim1m;
        bzmd6ftvmat[usvdbx3tk] = bzmd6ftvmat[bpvaqm5z] = mtlgduey8[gp1jxzuh];
      }
    }
  } else {
    for (ayfnwr1v = 1; ayfnwr1v <= *f8yswcat; ayfnwr1v++) {
      urohxe6t = (ayfnwr1v-1) * zyojx5hw;
      for (yq6lorbx = 1; yq6lorbx <= *dim1m; yq6lorbx++) {
        bpvaqm5z = tgiyxdw1[yq6lorbx-1] - 1 +
                   (dufozmt7[yq6lorbx-1] - 1) * *wy1vqfzu + urohxe6t;
        gp1jxzuh = (ayfnwr1v-1) * *dim1m + (yq6lorbx-1);
        bzmd6ftvmat[bpvaqm5z] = mtlgduey8[gp1jxzuh];
      }
    }
  }
}

/* For each observation, premultiplies its response vector by the
   upper-triangular factor U stored in band format (cf. the mux-type
   routines in R/mux.q). */
void fvlmz9iyC_nudh6szq(double wpuarq2m[], double tlgduey8[],
                        double bzmd6ftvmat[], int *npjlv3mr,
                        int *f8yswcat, int *wy1vqfzu) {
  int ayfnwr1v, yq6lorbx, bpvaqm5z, pqneb2ra = 1, djaq7ckz = 1, oey3ckps = 0;
  int zyojx5hw = *wy1vqfzu * *wy1vqfzu,
      imk5wjxg = *wy1vqfzu * (*wy1vqfzu + 1) / 2;
  int *wkumc9idtgiyxdw1, *wkumc9iddufozmt7;
  double q6zdcwxk;
  double *wkumc9idwk12;

  wkumc9idwk12 = Calloc(zyojx5hw, double);
  wkumc9idtgiyxdw1 = Calloc(imk5wjxg, int);
  wkumc9iddufozmt7 = Calloc(imk5wjxg, int);
  fvlmz9iyC_qpsedg8x(wkumc9idtgiyxdw1, wkumc9iddufozmt7, wy1vqfzu);
  for (ayfnwr1v = 1; ayfnwr1v <= *f8yswcat; ayfnwr1v++) {
    fvlmz9iyC_vm2a(wpuarq2m + (ayfnwr1v - 1) * *npjlv3mr, wkumc9idwk12,
                   npjlv3mr, &pqneb2ra, wy1vqfzu, &djaq7ckz,
                   wkumc9idtgiyxdw1, wkumc9iddufozmt7, &oey3ckps);
    for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) {
      q6zdcwxk = 0.0e0;
      for (bpvaqm5z = yq6lorbx; bpvaqm5z <= *wy1vqfzu; bpvaqm5z++) {
        q6zdcwxk += wkumc9idwk12[yq6lorbx-1 + (bpvaqm5z-1) * *wy1vqfzu] *
                    tlgduey8[ayfnwr1v-1 + (bpvaqm5z-1) * *f8yswcat];
      }
      bzmd6ftvmat[yq6lorbx-1 + (ayfnwr1v-1) * *wy1vqfzu] = q6zdcwxk;
    }
  }
  Free(wkumc9idwk12); Free(wkumc9idtgiyxdw1); Free(wkumc9iddufozmt7);
}

/* Backsolves U x = b for each observation (backward substitution against
   the band-stored upper-triangular factor); the solution overwrites
   unvxka0m. */
void fvlmz9iyC_vbks(double wpuarq2m[], double unvxka0m[], int *wy1vqfzu,
                    int *f8yswcat, int *npjlv3mr) {
  int ayfnwr1v, yq6lorbx, gp1jxzuh, pqneb2ra = 1, djaq7ckz = 1, oey3ckps = 0,
      zyojx5hw = *wy1vqfzu * *wy1vqfzu,
      imk5wjxg = *wy1vqfzu * (*wy1vqfzu + 1) / 2;
  int *wkumc9idtgiyxdw1, *wkumc9iddufozmt7;
  double q6zdcwxk;
  double *wkumc9idwk12;

  wkumc9idwk12 = Calloc(zyojx5hw, double);
  wkumc9idtgiyxdw1 = Calloc(imk5wjxg, int);
  wkumc9iddufozmt7 = Calloc(imk5wjxg, int);
  fvlmz9iyC_qpsedg8x(wkumc9idtgiyxdw1, wkumc9iddufozmt7, wy1vqfzu);
  for (ayfnwr1v = 1; ayfnwr1v <= *f8yswcat; ayfnwr1v++) {
    fvlmz9iyC_vm2a(wpuarq2m + (ayfnwr1v - 1) * *npjlv3mr, wkumc9idwk12,
                   npjlv3mr, &pqneb2ra, wy1vqfzu, &djaq7ckz,
                   wkumc9idtgiyxdw1, wkumc9iddufozmt7, &oey3ckps);
    for (yq6lorbx = *wy1vqfzu; yq6lorbx >= 1; yq6lorbx--) {
      q6zdcwxk = unvxka0m[yq6lorbx-1 + (ayfnwr1v-1) * *wy1vqfzu];
      for (gp1jxzuh = yq6lorbx+1; gp1jxzuh <= *wy1vqfzu; gp1jxzuh++) {
        q6zdcwxk -= wkumc9idwk12[yq6lorbx-1 + (gp1jxzuh-1) * *wy1vqfzu] *
                    unvxka0m[gp1jxzuh-1 + (ayfnwr1v-1) * *wy1vqfzu];
      }
      unvxka0m[yq6lorbx-1 + (ayfnwr1v-1) * *wy1vqfzu] =
        q6zdcwxk / wkumc9idwk12[yq6lorbx-1 + (yq6lorbx-1) * *wy1vqfzu];
    }
  }
  Free(wkumc9idwk12); Free(wkumc9idtgiyxdw1); Free(wkumc9iddufozmt7);
}
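fvlmz9iyC_vbks() above applies one backward substitution per observation.
The recurrence in isolation, for a single 2 x 2 upper-triangular system
(editor's sketch, not package code; all names are illustrative):

/* Editor's sketch (not VGAM code): backward substitution U x = b for one
   2x2 upper-triangular U; the solution overwrites b. */
#include <stdio.h>
int main(void) {
  int M = 2, j, k;
  double U[2][2] = { {2.0, 1.0}, {0.0, 3.0} };
  double b[2]    = { 5.0, 6.0 };
  for (j = M - 1; j >= 0; j--) {
    for (k = j + 1; k < M; k++) b[j] -= U[j][k] * b[k];
    b[j] /= U[j][j];
  }
  printf("x = (%g, %g)\n", b[0], b[1]);  /* expect x2 = 2, x1 = 1.5 */
  return 0;
}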
/* In-place upper Cholesky factorization A = R'R of the M x M matrix
   rbne6ouj, setting *dvhw1ulq = 0 if A is not positive definite; with
   *i_solve != 0 it then solves A x = b by a forward and a backward
   substitution, overwriting unvxka0m. */
void fvlmz9iyjdbomp0g(double rbne6ouj[], double unvxka0m[], int *wy1vqfzu,
                      int *dvhw1ulq, int *i_solve) {
  double q6zdcwxk;
  int ayfnwr1v, yq6lorbx, gp1jxzuh;

  *dvhw1ulq = 1;
  for (ayfnwr1v = 1; ayfnwr1v <= *wy1vqfzu; ayfnwr1v++) {
    q6zdcwxk = 0.0e0;
    for (gp1jxzuh = 1; gp1jxzuh <= ayfnwr1v-1; gp1jxzuh++) {
      q6zdcwxk += pow(rbne6ouj[gp1jxzuh-1 + (ayfnwr1v-1) * *wy1vqfzu],
                      (double) 2.0);
    }
    rbne6ouj[ayfnwr1v-1 + (ayfnwr1v-1) * *wy1vqfzu] -= q6zdcwxk;
    if (rbne6ouj[ayfnwr1v-1 + (ayfnwr1v-1) * *wy1vqfzu] <= 0.0e0) {
      Rprintf("Error in fvlmz9iyjdbomp0g: not pos-def.\n");
      *dvhw1ulq = 0;
      return;
    }
    rbne6ouj[ayfnwr1v-1 + (ayfnwr1v-1) * *wy1vqfzu] =
      sqrt(rbne6ouj[ayfnwr1v-1 + (ayfnwr1v-1) * *wy1vqfzu]);
    for (yq6lorbx = ayfnwr1v+1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) {
      q6zdcwxk = 0.0e0;
      for (gp1jxzuh = 1; gp1jxzuh <= ayfnwr1v-1; gp1jxzuh++) {
        q6zdcwxk += rbne6ouj[gp1jxzuh-1 + (ayfnwr1v-1) * *wy1vqfzu] *
                    rbne6ouj[gp1jxzuh-1 + (yq6lorbx-1) * *wy1vqfzu];
      }
      rbne6ouj[ayfnwr1v-1 + (yq6lorbx-1) * *wy1vqfzu] =
        (rbne6ouj[ayfnwr1v-1 + (yq6lorbx-1) * *wy1vqfzu] - q6zdcwxk) /
        rbne6ouj[ayfnwr1v-1 + (ayfnwr1v-1) * *wy1vqfzu];
    }
  }
  if (*i_solve == 0) {
    /* Zero the strict lower triangle and return without solving.  In the
       archived source the 'return' sat inside the outer loop, exiting
       after the first row; it is moved out here (assumed intent). */
    for (ayfnwr1v = 2; ayfnwr1v <= *wy1vqfzu; ayfnwr1v++) {
      for (yq6lorbx = 1; yq6lorbx <= ayfnwr1v-1; yq6lorbx++) {
        rbne6ouj[ayfnwr1v-1 + (yq6lorbx-1) * *wy1vqfzu] = 0.0e0;
      }
    }
    return;
  }
  for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) {  /* R'y = b */
    q6zdcwxk = unvxka0m[yq6lorbx-1];
    for (gp1jxzuh = 1; gp1jxzuh <= yq6lorbx-1; gp1jxzuh++) {
      q6zdcwxk -= rbne6ouj[gp1jxzuh-1 + (yq6lorbx-1) * *wy1vqfzu] *
                  unvxka0m[gp1jxzuh-1];
    }
    unvxka0m[yq6lorbx-1] =
      q6zdcwxk / rbne6ouj[yq6lorbx-1 + (yq6lorbx-1) * *wy1vqfzu];
  }
  for (yq6lorbx = *wy1vqfzu; yq6lorbx >= 1; yq6lorbx--) {  /* R x = y */
    q6zdcwxk = unvxka0m[yq6lorbx-1];
    for (gp1jxzuh = yq6lorbx+1; gp1jxzuh <= *wy1vqfzu; gp1jxzuh++) {
      q6zdcwxk -= rbne6ouj[yq6lorbx-1 + (gp1jxzuh-1) * *wy1vqfzu] *
                  unvxka0m[gp1jxzuh-1];
    }
    unvxka0m[yq6lorbx-1] =
      q6zdcwxk / rbne6ouj[yq6lorbx-1 + (yq6lorbx-1) * *wy1vqfzu];
  }
}

/* Overwrites each observation's M x xjc4ywlh block of the big model
   matrix he7mqnvy with U_i times that block, U_i being the band-stored
   upper-triangular factor in column i of wpuarq2m. */
void fvlmz9iyC_mxrbkut0(double wpuarq2m[], double he7mqnvy[], int *wy1vqfzu,
                        int *xjc4ywlh, int *f8yswcat, int *npjlv3mr,
                        int *rutyk8mg) {
  double q6zdcwxk;
  int ayfnwr1v, yq6lorbx, gp1jxzuh, bpvaqm5z;
  double *wkumc9idwk12, *wkumc9idwk34;
  int *wkumc9idtgiyxdw1, *wkumc9iddufozmt7,
      imk5wjxg = *wy1vqfzu * (*wy1vqfzu + 1) / 2,
      zyojx5hw = *wy1vqfzu * *wy1vqfzu,
      dz1lbtph = *wy1vqfzu * *xjc4ywlh;

  wkumc9idtgiyxdw1 = Calloc(imk5wjxg, int);
  wkumc9iddufozmt7 = Calloc(imk5wjxg, int);
  fvlmz9iyC_qpsedg8x(wkumc9idtgiyxdw1, wkumc9iddufozmt7, wy1vqfzu);
  wkumc9idwk12 = Calloc(zyojx5hw, double);
  wkumc9idwk34 = Calloc(dz1lbtph, double);
  for (ayfnwr1v = 1; ayfnwr1v <= *f8yswcat; ayfnwr1v++) {
    for (bpvaqm5z = 1; bpvaqm5z <= *npjlv3mr; bpvaqm5z++) {
      yq6lorbx = wkumc9idtgiyxdw1[bpvaqm5z-1] - 1 +
                 (wkumc9iddufozmt7[bpvaqm5z-1] - 1) * *wy1vqfzu;
      wkumc9idwk12[yq6lorbx] = wpuarq2m[bpvaqm5z-1 + (ayfnwr1v-1) * *npjlv3mr];
    }
    for (gp1jxzuh = 1; gp1jxzuh <= *xjc4ywlh; gp1jxzuh++) {
      for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) {
        wkumc9idwk34[yq6lorbx-1 + (gp1jxzuh-1) * *wy1vqfzu] =
          he7mqnvy[(ayfnwr1v-1) * *wy1vqfzu + yq6lorbx-1 +
                   (gp1jxzuh-1) * *rutyk8mg];
      }
    }
    for (gp1jxzuh = 1; gp1jxzuh <= *xjc4ywlh; gp1jxzuh++) {
      for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) {
        q6zdcwxk = 0.0e0;
        for (bpvaqm5z = yq6lorbx; bpvaqm5z <= *wy1vqfzu; bpvaqm5z++) {
          q6zdcwxk += wkumc9idwk12[yq6lorbx-1 + (bpvaqm5z-1) * *wy1vqfzu] *
                      wkumc9idwk34[bpvaqm5z-1 + (gp1jxzuh-1) * *wy1vqfzu];
        }
        he7mqnvy[(ayfnwr1v-1) * *wy1vqfzu + yq6lorbx-1 +
                 (gp1jxzuh-1) * *rutyk8mg] = q6zdcwxk;
      }
    }
  }
  Free(wkumc9idwk12); Free(wkumc9idwk34);
  Free(wkumc9idtgiyxdw1); Free(wkumc9iddufozmt7);
}

/* Given an upper-triangular factor U, first computes U^{-1} into a
   workspace by back substitution, then returns the symmetric product
   U^{-1} U^{-T}, i.e. the inverse of U'U, in ks3wejcv. */
void fvlmz9iyC_lkhnw9yq(double wpuarq2m[], double ks3wejcv[], int *npjlv3mr,
                        int *wy1vqfzu, int *dvhw1ulq) {
  int ayfnwr1v, yq6lorbx, gp1jxzuh, uaoynef0,
      zyojx5hw = *wy1vqfzu * *wy1vqfzu;
  double q6zdcwxk, vn3iasxugno = 1.0e-14;
  double *wkumc9idwrk;

  wkumc9idwrk = Calloc(zyojx5hw, double);
  *dvhw1ulq = 1;
  for (ayfnwr1v = 1; ayfnwr1v <= *wy1vqfzu; ayfnwr1v++) {
    for (yq6lorbx = ayfnwr1v; yq6lorbx >= 1; yq6lorbx--) {
      q6zdcwxk = (yq6lorbx == ayfnwr1v) ? 1.0e0 : 0.0e0;
      for (gp1jxzuh = yq6lorbx+1; gp1jxzuh <= ayfnwr1v; gp1jxzuh++) {
        q6zdcwxk -= wpuarq2m[yq6lorbx-1 + (gp1jxzuh-1) * *npjlv3mr] *
                    wkumc9idwrk[gp1jxzuh-1 + (ayfnwr1v-1) * *wy1vqfzu];
      }
      if (fabs(wpuarq2m[yq6lorbx-1 + (yq6lorbx-1) * *npjlv3mr]) < vn3iasxugno) {
        Rprintf("Error in fvlmz9iyC_lkhnw9yq: U(cz8qdfyj,cz8qdfyj) is zero.\n");
        *dvhw1ulq = 0;
      } else {
        wkumc9idwrk[yq6lorbx-1 + (ayfnwr1v-1) * *wy1vqfzu] =
          q6zdcwxk / wpuarq2m[yq6lorbx-1 + (yq6lorbx-1) * *npjlv3mr];
      }
    }
  }
  for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) {
    for (ayfnwr1v = yq6lorbx; ayfnwr1v <= *wy1vqfzu; ayfnwr1v++) {
      uaoynef0 = (yq6lorbx < ayfnwr1v) ? ayfnwr1v : yq6lorbx;
      q6zdcwxk = 0.0e0;
      for (gp1jxzuh = uaoynef0; gp1jxzuh <= *wy1vqfzu; gp1jxzuh++) {
        q6zdcwxk += wkumc9idwrk[yq6lorbx-1 + (gp1jxzuh-1) * *wy1vqfzu] *
                    wkumc9idwrk[ayfnwr1v-1 + (gp1jxzuh-1) * *wy1vqfzu];
      }
      ks3wejcv[yq6lorbx-1 + (ayfnwr1v-1) * *wy1vqfzu] =
        ks3wejcv[ayfnwr1v-1 + (yq6lorbx-1) * *wy1vqfzu] = q6zdcwxk;
    }
  }
  Free(wkumc9idwrk);
}

/* log Gamma(x) via the Lanczos series (the classical 'gammln'
   coefficients). */
double fvlmz9iyC_tldz5ion(double xval) {
  double hofjnx2e, xd4mybgj[6], q6zdcwxk = 1.000000000190015, tmp_y = xval;
  int yq6lorbx;

  xd4mybgj[0] =  76.18009172947146e0;
  xd4mybgj[1] = -86.50532032941677e0;
  xd4mybgj[2] =  24.01409824083091e0;
  xd4mybgj[3] =  -1.231739572450155e0;
  xd4mybgj[4] =   0.1208650973866179e-2;
  xd4mybgj[5] =  -0.5395239384953e-5;
  hofjnx2e = xval + 5.50;
  hofjnx2e -= (xval + 0.50) * log(hofjnx2e);
  for (yq6lorbx = 0; yq6lorbx < 6; yq6lorbx++) {
    tmp_y += 1.0e0;
    q6zdcwxk += xd4mybgj[yq6lorbx] / tmp_y;
  }
  return -hofjnx2e + log(2.5066282746310005e0 * q6zdcwxk / xval);
}

/* Series accumulation (apparently the exact expected information of the
   negative binomial size parameter, parameterized by size hdqsx7bk and
   mean nm0eljqk); truncated once the accumulated probability mass
   exceeds *n2kersmx and the tail contribution is small.  The opening
   bk3ymcih statements are dead code, kept from the archived source. */
void fvlmz9iyC_enbin9(double bzmd6ftvmat[], double hdqsx7bk[],
                      double nm0eljqk[], double *n2kersmx, int *f8yswcat,
                      int *dvhw1ulq, int *zy1mchbf, double *ux3nadiw,
                      double *rsynp1go, int *sguwj9ty) {
  int ayfnwr1v, kij0gwer, esql7umk;
  double vjz5sxty, pvcjl2na, mwuvskg1, btiehdm2 = 100.0e0 * *rsynp1go,
         ydb, ft3ijqmy, q6zdcwxk, plo6hkdr, csi9ydge, oxjgzv0e = 0.001e0;
  double bk3ymcih = -1.0;
  csi9ydge = bk3ymcih;
  bk3ymcih += bk3ymcih;
  bk3ymcih += csi9ydge;

  if (*n2kersmx <= 0.80e0 || *n2kersmx >= 1.0e0) {
    Rprintf("Error in fvlmz9iyC_enbin9: bad n2kersmx value.\n");
    *dvhw1ulq = 0;
    return;
  }
  *dvhw1ulq = 1;
  for (kij0gwer = 1; kij0gwer <= *zy1mchbf; kij0gwer++) {
    for (ayfnwr1v = 1; ayfnwr1v <= *f8yswcat; ayfnwr1v++) {
      vjz5sxty = nm0eljqk[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat] /
                 hdqsx7bk[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat];
      if ((vjz5sxty < oxjgzv0e) ||
          (nm0eljqk[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat] > 1.0e5)) {
        bzmd6ftvmat[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat] =
          -nm0eljqk[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat] *
          (1.0e0 + hdqsx7bk[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat] /
                   (hdqsx7bk[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat] +
                    nm0eljqk[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat])) /
          pow(hdqsx7bk[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat], (double) 2.0);
        if (bzmd6ftvmat[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat] > -btiehdm2)
          bzmd6ftvmat[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat] = -btiehdm2;
        goto ceqzd1hi20;
      }
      q6zdcwxk = 0.0e0;
      pvcjl2na = hdqsx7bk[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat] /
                 (hdqsx7bk[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat] +
                  nm0eljqk[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat]);
      mwuvskg1 = 1.0e0 - pvcjl2na;
      csi9ydge = hdqsx7bk[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat];
      if (pvcjl2na < btiehdm2) pvcjl2na = btiehdm2;
      if (mwuvskg1 < btiehdm2) mwuvskg1 = btiehdm2;
      esql7umk = 100 + 15 *
                 floor(nm0eljqk[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat]);
      if (esql7umk < *sguwj9ty) {
        esql7umk = *sguwj9ty;
      }
      ft3ijqmy = pow(pvcjl2na, csi9ydge);
      *ux3nadiw = ft3ijqmy;
      plo6hkdr = (1.0e0 - *ux3nadiw) /
        pow(hdqsx7bk[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat], (double) 2.0);
      q6zdcwxk += plo6hkdr;
      ydb = 1.0e0;
      ft3ijqmy = hdqsx7bk[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat] *
                 mwuvskg1 * ft3ijqmy;
      *ux3nadiw += ft3ijqmy;
      plo6hkdr = (1.0e0 - *ux3nadiw) /
        pow((hdqsx7bk[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat] + ydb),
            (double) 2.0);
      q6zdcwxk += plo6hkdr;
      ydb = 2.0e0;
      while (((*ux3nadiw <= *n2kersmx) || (plo6hkdr > 1.0e-4)) &&
             (ydb < esql7umk)) {
        ft3ijqmy = (hdqsx7bk[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat] -
                    1.0 + ydb) * mwuvskg1 * ft3ijqmy / ydb;
        *ux3nadiw += ft3ijqmy;
        plo6hkdr = (1.0e0 - *ux3nadiw) /
          pow((hdqsx7bk[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat] + ydb),
              (double) 2.0);
        q6zdcwxk += plo6hkdr;
        ydb += 1.0e0;
      }
      bzmd6ftvmat[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat] = -q6zdcwxk;
      ceqzd1hi20: bk3ymcih = 0.0e0;
    }
  }
}
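fvlmz9iyjdbomp0g() above packs an in-place upper Cholesky factorization
(A = R'R) together with an optional forward-plus-backward solve.  The same
recurrences on a concrete 2 x 2 example (editor's sketch, not package code;
all names are illustrative):

/* Editor's sketch (not VGAM code): in-place upper Cholesky + solve for
   A x = b with A = [[4,2],[2,3]], b = (2,5); expect x = (-0.5, 2). */
#include <math.h>
#include <stdio.h>
int main(void) {
  int M = 2, i, j, k;
  double w[2][2] = { {4.0, 2.0}, {2.0, 3.0} };  /* upper triangle becomes R */
  double b[2]    = { 2.0, 5.0 };
  for (i = 0; i < M; i++) {                     /* factorize, as above      */
    double s = 0.0;
    for (k = 0; k < i; k++) s += w[k][i] * w[k][i];
    w[i][i] = sqrt(w[i][i] - s);
    for (j = i + 1; j < M; j++) {
      s = 0.0;
      for (k = 0; k < i; k++) s += w[k][i] * w[k][j];
      w[i][j] = (w[i][j] - s) / w[i][i];
    }
  }
  for (j = 0; j < M; j++) {                     /* forward solve R'y = b    */
    for (k = 0; k < j; k++) b[j] -= w[k][j] * b[k];
    b[j] /= w[j][j];
  }
  for (j = M - 1; j >= 0; j--) {                /* back solve R x = y       */
    for (k = j + 1; k < M; k++) b[j] -= w[j][k] * b[k];
    b[j] /= w[j][j];
  }
  printf("x = (%g, %g)\n", b[0], b[1]);
  return 0;
}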
/* Variant of enbin9 parameterized by size hdqsx7bk and probability
   hsj9bzaq, accumulating log-scale terms via fvlmz9iyC_tldz5ion().  The
   "zz 20100122 ... unchecked" Rprintf is a debugging note retained from
   the archived source. */
void fvlmz9iyC_enbin8(double bzmd6ftvmat[], double hdqsx7bk[],
                      double hsj9bzaq[], double *n2kersmx, int *f8yswcat,
                      int *dvhw1ulq, int *zy1mchbf, double *ux3nadiw,
                      double *rsynp1go) {
  int ayfnwr1v, kij0gwer;
  double ft3ijqmy, tad5vhsu, o3jyipdf, pq0hfucn, q6zdcwxk, plo6hkdr,
         qtce8hzo1 = 0.0e0, qtce8hzo2 = 0.0e0;
  int fw2rodat, rx8qfndg, mqudbv4y;
  double onemse, nm0eljqk, ydb, btiehdm2 = -100.0 * *rsynp1go,
         kbig = 1.0e4, oxjgzv0e = 0.0010;

  Rprintf("zz 20100122; this function fvlmz9iyC_enbin8 unchecked.\n");
  if (*n2kersmx <= 0.80e0 || *n2kersmx >= 1.0e0) {
    Rprintf("returning since n2kersmx <= 0.8 or >= 1\n");
    *dvhw1ulq = 0;
    return;
  }
  onemse = 1.0e0 / (1.0e0 + oxjgzv0e);
  *dvhw1ulq = 1;
  for (kij0gwer = 1; kij0gwer <= *zy1mchbf; kij0gwer++) {
    for (ayfnwr1v = 1; ayfnwr1v <= *f8yswcat; ayfnwr1v++) {
      if (hdqsx7bk[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat] > kbig)
        hdqsx7bk[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat] = kbig;
      if (hsj9bzaq[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat] < oxjgzv0e)
        hsj9bzaq[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat] = oxjgzv0e;
      if (hsj9bzaq[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat] > onemse) {
        nm0eljqk = hdqsx7bk[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat] *
          (1.0e0 / hsj9bzaq[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat] - 1.0e0);
        bzmd6ftvmat[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat] =
          -nm0eljqk *
          (1.0e0 + hdqsx7bk[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat] /
                   (hdqsx7bk[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat] +
                    nm0eljqk)) /
          pow(hdqsx7bk[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat], (double) 2.0);
        if (bzmd6ftvmat[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat] > btiehdm2)
          bzmd6ftvmat[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat] = btiehdm2;
        goto ceqzd1hi20;
      }
      q6zdcwxk = 0.0e0;
      fw2rodat = 1;
      rx8qfndg = hsj9bzaq[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat] <
                 (1.0 - *rsynp1go) ? 1 : 0;
      mqudbv4y = fw2rodat && rx8qfndg ? 1 : 0;
      if (mqudbv4y) {
        qtce8hzo2 = hdqsx7bk[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat] *
                    log(hsj9bzaq[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat]);
        *ux3nadiw = exp(qtce8hzo2);
      } else {
        *ux3nadiw = 0.0e0;
      }
      plo6hkdr = (1.0e0 - *ux3nadiw) /
        pow(hdqsx7bk[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat], (double) 2.0);
      q6zdcwxk += plo6hkdr;
      o3jyipdf = fvlmz9iyC_tldz5ion(hdqsx7bk[ayfnwr1v-1 +
                                             (kij0gwer-1) * *f8yswcat]);
      ydb = 1.0e0;
      tad5vhsu = fvlmz9iyC_tldz5ion(ydb + hdqsx7bk[ayfnwr1v-1 +
                                                   (kij0gwer-1) * *f8yswcat]);
      pq0hfucn = 0.0e0;
      if (mqudbv4y) {
        qtce8hzo1 = log(1.0e0 - hsj9bzaq[ayfnwr1v-1 +
                                         (kij0gwer-1) * *f8yswcat]);
        ft3ijqmy = exp(ydb * qtce8hzo1 + qtce8hzo2 + tad5vhsu -
                       o3jyipdf - pq0hfucn);
      } else {
        ft3ijqmy = 0.0e0;
      }
      *ux3nadiw += ft3ijqmy;
      plo6hkdr = (1.0e0 - *ux3nadiw) /
        pow(hdqsx7bk[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat] + ydb,
            (double) 2.0);
      q6zdcwxk += plo6hkdr;
      ydb = 2.0e0;
      while ((*ux3nadiw <= *n2kersmx) || (plo6hkdr > 1.0e-4)) {
        tad5vhsu += log(ydb + hdqsx7bk[ayfnwr1v-1 +
                                       (kij0gwer-1) * *f8yswcat] - 1.0);
        pq0hfucn += log(ydb);
        if (mqudbv4y) {
          ft3ijqmy = exp(ydb * qtce8hzo1 + qtce8hzo2 + tad5vhsu -
                         o3jyipdf - pq0hfucn);
        } else {
          ft3ijqmy = 0.0e0;
        }
        *ux3nadiw += ft3ijqmy;
        plo6hkdr = (1.0e0 - *ux3nadiw) /
          pow(hdqsx7bk[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat] + ydb,
              (double) 2.0);
        q6zdcwxk += plo6hkdr;
        ydb += 1.0e0;
        if (ydb > 1.0e3) goto ceqzd1hi21;
      }
      ceqzd1hi21:
      bzmd6ftvmat[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat] = -q6zdcwxk;
      ceqzd1hi20:
      tad5vhsu = 0.0e0;
    }
  }
}

/* Power series for the modified Bessel function I0 and its first two
   derivatives (dvector0..dvector2, filled up to order *kpzavbj3), for
   |x| <= Toobig = 20; the number of series terms grows with |x|.
   Carries the same "unchecked" debugging note as enbin8. */
void fvlmz9iyC_mbessI0(double unvxka0m[], int *f8yswcat, int *kpzavbj3,
                       double dvector0[], double dvector1[], double dvector2[],
                       int *zjkrtol8, double *qaltf0nz) {
  int ayfnwr1v, gp1jxzuh, c5aesxkus;
  double f0, t0, m0, f1, t1, m1, f2, t2, m2, Toobig = 20.0e0;

  Rprintf("zz 20100122; this function fvlmz9iyC_mbessI0 unchecked.\n");
  *zjkrtol8 = 0;
  if (!(*kpzavbj3 == 0 || *kpzavbj3 == 1 || *kpzavbj3 == 2)) {
    Rprintf("Error in fvlmz9iyC_mbessI0: kpzavbj3 not in 0:2. Returning.\n");
    *zjkrtol8 = 1;
    return;
  }
  for (gp1jxzuh = 1; gp1jxzuh <= *f8yswcat; gp1jxzuh++) {
    if (fabs(unvxka0m[gp1jxzuh-1]) > Toobig) {
      Rprintf("Error in fvlmz9iyC_mbessI0: unvxka0m[] value > too big.\n");
      *zjkrtol8 = 1;
      return;
    }
    t1 = unvxka0m[gp1jxzuh-1] / 2.0e0;
    f1 = t1;
    t0 = t1 * t1;
    f0 = 1.0e0 + t0;
    t2 = 0.50e0;
    f2 = t2;
    c5aesxkus = 15;
    if (fabs(unvxka0m[gp1jxzuh-1]) > 10.0) c5aesxkus = 25;
    if (fabs(unvxka0m[gp1jxzuh-1]) > 15.0) c5aesxkus = 35;
    if (fabs(unvxka0m[gp1jxzuh-1]) > 20.0) c5aesxkus = 40;
    if (fabs(unvxka0m[gp1jxzuh-1]) > 30.0) c5aesxkus = 55;
    for (ayfnwr1v = 1; ayfnwr1v <= c5aesxkus; ayfnwr1v++) {
      m0 = pow(unvxka0m[gp1jxzuh-1] / (2.0 * (ayfnwr1v + 1.0)), (double) 2);
      m1 = m0 * (1.0e0 + 1.0e0 / ayfnwr1v);
      m2 = m1 * (2.0e0 * ayfnwr1v + 1.0e0) / (2.0e0 * ayfnwr1v - 1.0e0);
      t0 = t0 * m0;
      t1 = t1 * m1;
      t2 = t2 * m2;
      f0 = f0 + t0;
      f1 = f1 + t1;
      f2 = f2 + t2;
      if ((fabs(t0) < *qaltf0nz) && (fabs(t1) < *qaltf0nz) &&
          (fabs(t2) < *qaltf0nz)) break;
    }
    if (0 <= *kpzavbj3) dvector0[gp1jxzuh-1] = f0;
    if (1 <= *kpzavbj3) dvector1[gp1jxzuh-1] = f1;
    if (2 <= *kpzavbj3) dvector2[gp1jxzuh-1] = f2;
  }
}

/* Row-wise quadratic forms: for each of the *vnc1izfy rows x_i of
   he7mqnvy, returns x_i' D x_i in bqelz3cy.  *ui4ntmvd == 1 flags a
   symmetric D, in which case only the diagonal plus the doubled upper
   triangle are used. */
void VGAM_C_mux34(double he7mqnvy[], double Dmat[], int *vnc1izfy,
                  int *e0nmabdk, int *ui4ntmvd, double bqelz3cy[]) {
  int ayfnwr1v, yq6lorbx, gp1jxzuh;
  double *qnwamo0e1, *qnwamo0e2;

  if (*e0nmabdk == 1) {
    qnwamo0e1 = bqelz3cy;
    qnwamo0e2 = he7mqnvy;
    for (ayfnwr1v = 0; ayfnwr1v < *vnc1izfy; ayfnwr1v++) {
      *qnwamo0e1++ = *Dmat * pow(*qnwamo0e2++, (double) 2.0);
    }
    return;
  }
  if (*ui4ntmvd == 1) {
    for (ayfnwr1v = 1; ayfnwr1v <= *vnc1izfy; ayfnwr1v++) {
      bqelz3cy[ayfnwr1v-1] = 0.0e0;
      for (yq6lorbx = 1; yq6lorbx <= *e0nmabdk; yq6lorbx++) {
        bqelz3cy[ayfnwr1v-1] +=
          Dmat[yq6lorbx-1 + (yq6lorbx-1) * *e0nmabdk] *
          pow(he7mqnvy[ayfnwr1v-1 + (yq6lorbx-1) * *vnc1izfy], (double) 2.0);
      }
      if (*e0nmabdk > 1) {
        for (yq6lorbx = 1; yq6lorbx <= *e0nmabdk; yq6lorbx++) {
          for (gp1jxzuh = yq6lorbx+1; gp1jxzuh <= *e0nmabdk; gp1jxzuh++) {
            bqelz3cy[ayfnwr1v-1] +=
              Dmat[yq6lorbx-1 + (gp1jxzuh-1) * *e0nmabdk] *
              he7mqnvy[ayfnwr1v-1 + (yq6lorbx-1) * *vnc1izfy] *
              he7mqnvy[ayfnwr1v-1 + (gp1jxzuh-1) * *vnc1izfy] * 2.0;
          }
        }
      }
    }
  } else {
    for (ayfnwr1v = 1; ayfnwr1v <= *vnc1izfy; ayfnwr1v++) {
      bqelz3cy[ayfnwr1v-1] = 0.0e0;
      for (yq6lorbx = 1; yq6lorbx <= *e0nmabdk; yq6lorbx++) {
        for (gp1jxzuh = 1; gp1jxzuh <= *e0nmabdk; gp1jxzuh++) {
          bqelz3cy[ayfnwr1v-1] +=
            Dmat[yq6lorbx-1 + (gp1jxzuh-1) * *e0nmabdk] *
            he7mqnvy[ayfnwr1v-1 + (yq6lorbx-1) * *vnc1izfy] *
            he7mqnvy[ayfnwr1v-1 + (gp1jxzuh-1) * *vnc1izfy];
        }
      }
    }
  }
}
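VGAM_C_mux34() reduces each row x_i of a matrix to the scalar x_i' D x_i,
and when D is flagged symmetric it sums the diagonal plus twice the upper
triangle.  The arithmetic of that symmetric branch on a tiny worked case
(editor's sketch, not package code; illustrative names):

/* Editor's sketch (not VGAM code): x_i' D x_i per row via diagonal +
   doubled upper triangle; expect 18 for row 1 and 3 for row 2. */
#include <stdio.h>
int main(void) {
  int n = 2, R = 2, i, j, k;
  double X[2][2] = { {1.0, 2.0}, {0.0, 1.0} };  /* rows x_i     */
  double D[2][2] = { {2.0, 1.0}, {1.0, 3.0} };  /* symmetric D  */
  for (i = 0; i < n; i++) {
    double q = 0.0;
    for (j = 0; j < R; j++) q += D[j][j] * X[i][j] * X[i][j];
    for (j = 0; j < R; j++)
      for (k = j + 1; k < R; k++) q += 2.0 * D[j][k] * X[i][j] * X[i][k];
    printf("row %d: x'Dx = %g\n", i + 1, q);
  }
  return 0;
}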
dufozmt7( &urohxe6t).eq.cz8qdfyj)))goto 23011 viamf = urohxe6t return 23011 continue 23009 continue viamf = 0 return end subroutine vm2af(mat, a, dimm, tgiyxdw1, dufozmt7, kuzxj1lo, &wy1vqfzu, upper) implicit logical (a-z) integer dimm, tgiyxdw1(dimm), dufozmt7(dimm), kuzxj1lo, wy1vqfzu, &upper double precision mat(dimm,kuzxj1lo), a(wy1vqfzu,wy1vqfzu,kuzxj1lo) integer ayfnwr1v, yq6lorbx, gp1jxzuh, imk5wjxg imk5wjxg = wy1vqfzu * (wy1vqfzu + 1) / 2 if(.not.(upper .eq. 1 .or. dimm .ne. imk5wjxg))goto 23013 ayfnwr1v = 1 23015 if(.not.(ayfnwr1v.le.kuzxj1lo))goto 23017 yq6lorbx = 1 23018 if(.not.(yq6lorbx.le.wy1vqfzu))goto 23020 gp1jxzuh = 1 23021 if(.not.(gp1jxzuh.le.wy1vqfzu))goto 23023 a(gp1jxzuh,yq6lorbx,ayfnwr1v) = 0.0d0 gp1jxzuh=gp1jxzuh+1 goto 23021 23023 continue yq6lorbx=yq6lorbx+1 goto 23018 23020 continue ayfnwr1v=ayfnwr1v+1 goto 23015 23017 continue 23013 continue do 23024 ayfnwr1v=1,kuzxj1lo do 23026 yq6lorbx=1,dimm a(tgiyxdw1(yq6lorbx),dufozmt7(yq6lorbx),ayfnwr1v) = mat(yq6lorbx, &ayfnwr1v) if(.not.(upper .eq. 0))goto 23028 a(dufozmt7(yq6lorbx),tgiyxdw1(yq6lorbx),ayfnwr1v) = mat(yq6lorbx, &ayfnwr1v) 23028 continue 23026 continue 23024 continue return end subroutine nudh6szqf(wpuarq2m, tlgduey8, lfu2qhid, dimu, tgiyxdw1, & dufozmt7, kuzxj1lo, wy1vqfzu, wk1200) implicit logical (a-z) integer dimu, tgiyxdw1(*), dufozmt7(*), kuzxj1lo, wy1vqfzu double precision wpuarq2m(dimu,kuzxj1lo), tlgduey8(kuzxj1lo, &wy1vqfzu), lfu2qhid(wy1vqfzu,kuzxj1lo), wk1200(wy1vqfzu,wy1vqfzu) double precision q6zdcwxk integer ayfnwr1v, yq6lorbx, bpvaqm5z, one, upper one = 1 upper = 1 ayfnwr1v = 1 23030 if(.not.(ayfnwr1v.le.kuzxj1lo))goto 23032 call vm2af(wpuarq2m(1,ayfnwr1v), wk1200, dimu, tgiyxdw1, dufozmt7, & one, wy1vqfzu, upper) yq6lorbx = 1 23033 if(.not.(yq6lorbx.le.wy1vqfzu))goto 23035 q6zdcwxk = 0.0d0 bpvaqm5z = yq6lorbx 23036 if(.not.(bpvaqm5z.le.wy1vqfzu))goto 23038 q6zdcwxk = q6zdcwxk + wk1200(yq6lorbx,bpvaqm5z) * tlgduey8( &ayfnwr1v,bpvaqm5z) bpvaqm5z=bpvaqm5z+1 goto 23036 23038 continue lfu2qhid(yq6lorbx,ayfnwr1v) = q6zdcwxk yq6lorbx=yq6lorbx+1 goto 23033 23035 continue ayfnwr1v=ayfnwr1v+1 goto 23030 23032 continue return end subroutine vbksf(wpuarq2m, bvecto, wy1vqfzu, kuzxj1lo, wk1200, &tgiyxdw1, dufozmt7, dimu) implicit logical (a-z) integer wy1vqfzu, kuzxj1lo, tgiyxdw1(*), dufozmt7(*), dimu double precision wpuarq2m(dimu,kuzxj1lo), bvecto(wy1vqfzu, &kuzxj1lo), wk1200(wy1vqfzu,wy1vqfzu) double precision q6zdcwxk integer ayfnwr1v, yq6lorbx, gp1jxzuh, upper, one upper = 1 one = 1 ayfnwr1v = 1 23039 if(.not.(ayfnwr1v.le.kuzxj1lo))goto 23041 call vm2af(wpuarq2m(1,ayfnwr1v), wk1200, dimu, tgiyxdw1, dufozmt7, & one, wy1vqfzu, upper) yq6lorbx = wy1vqfzu 23042 if(.not.(yq6lorbx.ge.1))goto 23044 q6zdcwxk = bvecto(yq6lorbx,ayfnwr1v) gp1jxzuh = yq6lorbx+1 23045 if(.not.(gp1jxzuh.le.wy1vqfzu))goto 23047 q6zdcwxk = q6zdcwxk - wk1200(yq6lorbx,gp1jxzuh) * bvecto(gp1jxzuh, &ayfnwr1v) gp1jxzuh=gp1jxzuh+1 goto 23045 23047 continue bvecto(yq6lorbx,ayfnwr1v) = q6zdcwxk / wk1200(yq6lorbx,yq6lorbx) yq6lorbx=yq6lorbx-1 goto 23042 23044 continue ayfnwr1v=ayfnwr1v+1 goto 23039 23041 continue return end subroutine vcholf(wmat, bvecto, wy1vqfzu, dvhw1ulq, isolve) implicit logical (a-z) integer isolve integer wy1vqfzu, dvhw1ulq double precision wmat(wy1vqfzu,wy1vqfzu), bvecto(wy1vqfzu) double precision q6zdcwxk, dsqrt integer ayfnwr1v, yq6lorbx, gp1jxzuh dvhw1ulq=1 do 23048 ayfnwr1v=1,wy1vqfzu q6zdcwxk = 0d0 do 23050 gp1jxzuh=1,ayfnwr1v-1 q6zdcwxk = q6zdcwxk + wmat(gp1jxzuh,ayfnwr1v) * wmat(gp1jxzuh, &ayfnwr1v) 23050 
continue wmat(ayfnwr1v,ayfnwr1v) = wmat(ayfnwr1v,ayfnwr1v) - q6zdcwxk if(.not.(wmat(ayfnwr1v,ayfnwr1v) .le. 0d0))goto 23052 dvhw1ulq = 0 return 23052 continue wmat(ayfnwr1v,ayfnwr1v) = dsqrt(wmat(ayfnwr1v,ayfnwr1v)) do 23054 yq6lorbx=ayfnwr1v+1,wy1vqfzu q6zdcwxk = 0d0 do 23056 gp1jxzuh=1,ayfnwr1v-1 q6zdcwxk = q6zdcwxk + wmat(gp1jxzuh,ayfnwr1v) * wmat(gp1jxzuh, &yq6lorbx) 23056 continue wmat(ayfnwr1v,yq6lorbx) = (wmat(ayfnwr1v,yq6lorbx) - q6zdcwxk) / &wmat(ayfnwr1v,ayfnwr1v) 23054 continue 23048 continue if(.not.(isolve .eq. 0))goto 23058 do 23060 ayfnwr1v=2,wy1vqfzu do 23062 yq6lorbx=1,ayfnwr1v-1 wmat(ayfnwr1v,yq6lorbx) = 0.0d0 23062 continue return 23060 continue 23058 continue do 23064 yq6lorbx=1,wy1vqfzu q6zdcwxk = bvecto(yq6lorbx) do 23066 gp1jxzuh=1,yq6lorbx-1 q6zdcwxk = q6zdcwxk - wmat(gp1jxzuh,yq6lorbx) * bvecto(gp1jxzuh) 23066 continue bvecto(yq6lorbx) = q6zdcwxk / wmat(yq6lorbx,yq6lorbx) 23064 continue yq6lorbx = wy1vqfzu 23068 if(.not.(yq6lorbx.ge.1))goto 23070 q6zdcwxk = bvecto(yq6lorbx) gp1jxzuh = yq6lorbx+1 23071 if(.not.(gp1jxzuh.le.wy1vqfzu))goto 23073 q6zdcwxk = q6zdcwxk - wmat(yq6lorbx,gp1jxzuh) * bvecto(gp1jxzuh) gp1jxzuh=gp1jxzuh+1 goto 23071 23073 continue bvecto(yq6lorbx) = q6zdcwxk / wmat(yq6lorbx,yq6lorbx) yq6lorbx=yq6lorbx-1 goto 23068 23070 continue return end subroutine mxrbkut0f(wpuarq2m, he7mqnvy, wy1vqfzu, xjc4ywlh, &kuzxj1lo, wk1200, wk3400, tgiyxdw1, dufozmt7, dimu, rutyk8mg) implicit logical (a-z) integer dimu, wy1vqfzu, xjc4ywlh, kuzxj1lo, tgiyxdw1(*), dufozmt7( &*), rutyk8mg double precision wpuarq2m(dimu,kuzxj1lo), he7mqnvy(rutyk8mg, &xjc4ywlh), wk1200(wy1vqfzu,wy1vqfzu), wk3400(wy1vqfzu,xjc4ywlh) double precision q6zdcwxk integer ayfnwr1v, yq6lorbx, gp1jxzuh, bpvaqm5z do 23074 yq6lorbx=1,wy1vqfzu do 23076 ayfnwr1v=1,wy1vqfzu wk1200(ayfnwr1v,yq6lorbx) = 0.0d0 23076 continue 23074 continue do 23078 ayfnwr1v=1,kuzxj1lo do 23080 bpvaqm5z=1,dimu wk1200(tgiyxdw1(bpvaqm5z), dufozmt7(bpvaqm5z)) = wpuarq2m( &bpvaqm5z,ayfnwr1v) 23080 continue do 23082 gp1jxzuh=1,xjc4ywlh do 23084 yq6lorbx=1,wy1vqfzu wk3400(yq6lorbx,gp1jxzuh) = he7mqnvy((ayfnwr1v-1)*wy1vqfzu+ &yq6lorbx,gp1jxzuh) 23084 continue 23082 continue do 23086 gp1jxzuh=1,xjc4ywlh do 23088 yq6lorbx=1,wy1vqfzu q6zdcwxk = 0d0 do 23090 bpvaqm5z=yq6lorbx,wy1vqfzu q6zdcwxk = q6zdcwxk + wk1200(yq6lorbx,bpvaqm5z) * wk3400(bpvaqm5z, &gp1jxzuh) 23090 continue he7mqnvy((ayfnwr1v-1)*wy1vqfzu+yq6lorbx,gp1jxzuh) = q6zdcwxk 23088 continue 23086 continue 23078 continue return end subroutine vrinvf9(wpuarq2m, ldr, wy1vqfzu, dvhw1ulq, ks3wejcv, &work) implicit logical (a-z) integer ldr, wy1vqfzu, dvhw1ulq double precision wpuarq2m(ldr,wy1vqfzu), ks3wejcv(wy1vqfzu, &wy1vqfzu), work(wy1vqfzu,wy1vqfzu) double precision q6zdcwxk integer yq6lorbx, gp1jxzuh, col, uaoynef0 dvhw1ulq = 1 yq6lorbx = 1 23092 if(.not.(yq6lorbx.le.wy1vqfzu))goto 23094 col = 1 23095 if(.not.(col.le.wy1vqfzu))goto 23097 work(yq6lorbx,col) = 0.0d0 col=col+1 goto 23095 23097 continue yq6lorbx=yq6lorbx+1 goto 23092 23094 continue col = 1 23098 if(.not.(col.le.wy1vqfzu))goto 23100 yq6lorbx = col 23101 if(.not.(yq6lorbx.ge.1))goto 23103 if(.not.(yq6lorbx .eq. col))goto 23104 q6zdcwxk = 1.0d0 goto 23105 23104 continue q6zdcwxk = 0.0d0 23105 continue gp1jxzuh = yq6lorbx+1 23106 if(.not.(gp1jxzuh.le.col))goto 23108 q6zdcwxk = q6zdcwxk - wpuarq2m(yq6lorbx,gp1jxzuh) * work(gp1jxzuh, &col) gp1jxzuh=gp1jxzuh+1 goto 23106 23108 continue if(.not.(wpuarq2m(yq6lorbx,yq6lorbx) .eq. 
0.0d0))goto 23109 dvhw1ulq = 0 goto 23110 23109 continue work(yq6lorbx,col) = q6zdcwxk / wpuarq2m(yq6lorbx,yq6lorbx) 23110 continue yq6lorbx=yq6lorbx-1 goto 23101 23103 continue col=col+1 goto 23098 23100 continue yq6lorbx = 1 23111 if(.not.(yq6lorbx.le.wy1vqfzu))goto 23113 col = yq6lorbx 23114 if(.not.(col.le.wy1vqfzu))goto 23116 if(.not.(yq6lorbx .lt. col))goto 23117 uaoynef0 = col goto 23118 23117 continue uaoynef0 = yq6lorbx 23118 continue q6zdcwxk = 0.0d0 gp1jxzuh = uaoynef0 23119 if(.not.(gp1jxzuh.le.wy1vqfzu))goto 23121 q6zdcwxk = q6zdcwxk + work(yq6lorbx,gp1jxzuh) * work(col,gp1jxzuh) gp1jxzuh=gp1jxzuh+1 goto 23119 23121 continue ks3wejcv(yq6lorbx,col) = q6zdcwxk ks3wejcv(col,yq6lorbx) = q6zdcwxk col=col+1 goto 23114 23116 continue yq6lorbx=yq6lorbx+1 goto 23111 23113 continue return end subroutine tldz5ion(xx, lfu2qhid) implicit logical (a-z) double precision xx, lfu2qhid double precision x, y, hofjnx2e, q6zdcwxk, xd4mybgj(6) integer yq6lorbx xd4mybgj(1)= 76.18009172947146d0 xd4mybgj(2)= -86.50532032941677d0 xd4mybgj(3)= 24.01409824083091d0 xd4mybgj(4)= -1.231739572450155d0 xd4mybgj(5)= 0.1208650973866179d-2 xd4mybgj(6)= -0.5395239384953d-5 x = xx y = xx hofjnx2e = x+5.50d0 hofjnx2e = hofjnx2e - (x+0.50d0) * dlog(hofjnx2e) q6zdcwxk=1.000000000190015d0 yq6lorbx=1 23122 if(.not.(yq6lorbx.le.6))goto 23124 y = y + 1.0d0 q6zdcwxk = q6zdcwxk + xd4mybgj(yq6lorbx)/y yq6lorbx=yq6lorbx+1 goto 23122 23124 continue lfu2qhid = -hofjnx2e + dlog(2.5066282746310005d0 * q6zdcwxk / x) return end subroutine enbin9(bzmd6ftv, hdqsx7bk, nm0eljqk, n2kersmx, n, &dvhw1ulq, zy1mchbf, ux3nadiw, rsynp1go, sguwj9ty) implicit logical (a-z) integer n, dvhw1ulq, zy1mchbf, sguwj9ty double precision bzmd6ftv(n, zy1mchbf), hdqsx7bk(n, zy1mchbf), &nm0eljqk(n, zy1mchbf), n2kersmx, ux3nadiw, rsynp1go integer ayfnwr1v, kij0gwer double precision oxjgzv0e, btiehdm2, ydb, vjz5sxty, esql7umk, &pvcjl2na, mwuvskg1, ft3ijqmy, hmayv1xt, q6zdcwxk, plo6hkdr real csi9ydge if(.not.(n2kersmx .le. 0.80d0 .or. n2kersmx .ge. 1.0d0))goto 23125 dvhw1ulq = 0 return 23125 continue btiehdm2 = 100.0d0 * rsynp1go oxjgzv0e = 0.001d0 dvhw1ulq = 1 kij0gwer=1 23127 if(.not.(kij0gwer.le.zy1mchbf))goto 23129 ayfnwr1v=1 23130 if(.not.(ayfnwr1v.le.n))goto 23132 vjz5sxty = nm0eljqk(ayfnwr1v,kij0gwer) / hdqsx7bk(ayfnwr1v, &kij0gwer) if(.not.((vjz5sxty .lt. oxjgzv0e) .or. (nm0eljqk(ayfnwr1v, &kij0gwer) .gt. 1.0d5)))goto 23133 bzmd6ftv(ayfnwr1v,kij0gwer) = -nm0eljqk(ayfnwr1v,kij0gwer) * (1. &0d0 + hdqsx7bk(ayfnwr1v,kij0gwer)/(hdqsx7bk(ayfnwr1v,kij0gwer) + &nm0eljqk(ayfnwr1v,kij0gwer))) / hdqsx7bk(ayfnwr1v,kij0gwer)**2 if(.not.(bzmd6ftv(ayfnwr1v,kij0gwer) .gt. -btiehdm2))goto 23135 bzmd6ftv(ayfnwr1v,kij0gwer) = -btiehdm2 23135 continue goto 20 23133 continue q6zdcwxk = 0.0d0 pvcjl2na = hdqsx7bk(ayfnwr1v,kij0gwer) / (hdqsx7bk(ayfnwr1v, &kij0gwer) + nm0eljqk(ayfnwr1v,kij0gwer)) mwuvskg1 = 1.0d0 - pvcjl2na csi9ydge = hdqsx7bk(ayfnwr1v,kij0gwer) if(.not.(pvcjl2na .lt. btiehdm2))goto 23137 pvcjl2na = btiehdm2 23137 continue if(.not.(mwuvskg1 .lt. btiehdm2))goto 23139 mwuvskg1 = btiehdm2 23139 continue esql7umk = 100.0d0 + 15.0d0 * nm0eljqk(ayfnwr1v,kij0gwer) if(.not.(esql7umk .lt. 
sguwj9ty))goto 23141 esql7umk = sguwj9ty 23141 continue ft3ijqmy = pvcjl2na ** csi9ydge ux3nadiw = ft3ijqmy plo6hkdr = (1.0d0 - ux3nadiw) / hdqsx7bk(ayfnwr1v,kij0gwer)**2 q6zdcwxk = q6zdcwxk + plo6hkdr ydb = 1.0d0 ft3ijqmy = hdqsx7bk(ayfnwr1v,kij0gwer) * mwuvskg1 * ft3ijqmy ux3nadiw = ux3nadiw + ft3ijqmy plo6hkdr = (1.0d0 - ux3nadiw) / (hdqsx7bk(ayfnwr1v,kij0gwer) + &ydb)**2 q6zdcwxk = q6zdcwxk + plo6hkdr ydb = 2.0d0 23143 if(.not.(((ux3nadiw .le. n2kersmx) .or. (plo6hkdr .gt. 1.0d-4)) . &and.(ydb .lt. esql7umk)))goto 23144 ft3ijqmy = (hdqsx7bk(ayfnwr1v,kij0gwer) - 1.0d0 + ydb) * mwuvskg1 &* ft3ijqmy / ydb ux3nadiw = ux3nadiw + ft3ijqmy plo6hkdr = (1.0d0 - ux3nadiw) / (hdqsx7bk(ayfnwr1v,kij0gwer) + &ydb)**2 q6zdcwxk = q6zdcwxk + plo6hkdr ydb = ydb + 1.0d0 goto 23143 23144 continue bzmd6ftv(ayfnwr1v,kij0gwer) = -q6zdcwxk 20 hmayv1xt = 0.0d0 ayfnwr1v=ayfnwr1v+1 goto 23130 23132 continue kij0gwer=kij0gwer+1 goto 23127 23129 continue return end subroutine enbin8(bzmd6ftv, hdqsx7bk, hsj9bzaq, n2kersmx, &kuzxj1lo, dvhw1ulq, zy1mchbf, ux3nadiw, rsynp1go) implicit logical (a-z) integer kuzxj1lo, dvhw1ulq, zy1mchbf double precision bzmd6ftv(kuzxj1lo, zy1mchbf), hdqsx7bk(kuzxj1lo, &zy1mchbf), hsj9bzaq(kuzxj1lo, zy1mchbf), n2kersmx, ux3nadiw, &rsynp1go integer ayfnwr1v, kij0gwer, esql7umk double precision ft3ijqmy, tad5vhsu, o3jyipdf, pq0hfucn, q6zdcwxk, & d1, d2, plo6hkdr, hnu1vjyw logical pok1, pok2, pok12 double precision oxjgzv0e, onemse, nm0eljqk, btiehdm2, ydb, kbig d1 = 0.0d0 d2 = 0.0d0 btiehdm2 = -100.0d0 * rsynp1go esql7umk = 3000 if(.not.(n2kersmx .le. 0.80d0 .or. n2kersmx .ge. 1.0d0))goto 23145 dvhw1ulq = 0 return 23145 continue kbig = 1.0d4 oxjgzv0e = 0.001d0 hnu1vjyw = 1.0d0 - rsynp1go onemse = 1.0d0 / (1.0d0 + oxjgzv0e) dvhw1ulq = 1 kij0gwer=1 23147 if(.not.(kij0gwer.le.zy1mchbf))goto 23149 ayfnwr1v=1 23150 if(.not.(ayfnwr1v.le.kuzxj1lo))goto 23152 if(.not.(hdqsx7bk(ayfnwr1v,kij0gwer) .gt. kbig))goto 23153 hdqsx7bk(ayfnwr1v,kij0gwer) = kbig 23153 continue if(.not.(hsj9bzaq(ayfnwr1v,kij0gwer) .lt. oxjgzv0e))goto 23155 hsj9bzaq(ayfnwr1v,kij0gwer) = oxjgzv0e 23155 continue if(.not.((hsj9bzaq(ayfnwr1v,kij0gwer) .gt. onemse)))goto 23157 nm0eljqk = hdqsx7bk(ayfnwr1v,kij0gwer) * (1.0d0/hsj9bzaq(ayfnwr1v, &kij0gwer) - 1.0d0) bzmd6ftv(ayfnwr1v,kij0gwer) = -nm0eljqk * (1.0d0 + hdqsx7bk( &ayfnwr1v,kij0gwer)/(hdqsx7bk(ayfnwr1v,kij0gwer) + nm0eljqk)) / &hdqsx7bk(ayfnwr1v,kij0gwer)**2 if(.not.(bzmd6ftv(ayfnwr1v,kij0gwer) .gt. btiehdm2))goto 23159 bzmd6ftv(ayfnwr1v,kij0gwer) = btiehdm2 23159 continue goto 20 23157 continue q6zdcwxk = 0.0d0 pok1 = .true. pok2 = hsj9bzaq(ayfnwr1v,kij0gwer) .lt. (1.0d0-rsynp1go) pok12 = pok1 .and. pok2 if(.not.(pok12))goto 23161 d2 = hdqsx7bk(ayfnwr1v,kij0gwer) * dlog(hsj9bzaq(ayfnwr1v, &kij0gwer)) ux3nadiw = dexp(d2) goto 23162 23161 continue ux3nadiw = 0.0d0 23162 continue plo6hkdr = (1.0d0 - ux3nadiw) / hdqsx7bk(ayfnwr1v,kij0gwer)**2 q6zdcwxk = q6zdcwxk + plo6hkdr call tldz5ion(hdqsx7bk(ayfnwr1v,kij0gwer), o3jyipdf) ydb = 1.0d0 call tldz5ion(ydb + hdqsx7bk(ayfnwr1v,kij0gwer), tad5vhsu) pq0hfucn = 0.0d0 if(.not.(pok12))goto 23163 d1 = dlog(1.0d0 - hsj9bzaq(ayfnwr1v,kij0gwer)) ft3ijqmy = dexp(ydb * d1 + d2 + tad5vhsu - o3jyipdf - pq0hfucn) goto 23164 23163 continue ft3ijqmy = 0.0d0 23164 continue ux3nadiw = ux3nadiw + ft3ijqmy plo6hkdr = (1.0d0 - ux3nadiw) / (hdqsx7bk(ayfnwr1v,kij0gwer) + &ydb)**2 q6zdcwxk = q6zdcwxk + plo6hkdr ydb = 2.0d0 23165 if(.not.((ux3nadiw .le. n2kersmx) .or. (plo6hkdr .gt. 
1.0d-4))) &goto 23166 tad5vhsu = tad5vhsu + dlog(ydb + hdqsx7bk(ayfnwr1v,kij0gwer) - 1. &0d0) pq0hfucn = pq0hfucn + dlog(ydb) if(.not.(pok12))goto 23167 ft3ijqmy = dexp(ydb * d1 + d2 + tad5vhsu - o3jyipdf - pq0hfucn) goto 23168 23167 continue ft3ijqmy = 0.0d0 23168 continue ux3nadiw = ux3nadiw + ft3ijqmy plo6hkdr = (1.0d0 - ux3nadiw) / (hdqsx7bk(ayfnwr1v,kij0gwer) + &ydb)**2 q6zdcwxk = q6zdcwxk + plo6hkdr ydb = ydb + 1.0d0 if(.not.(ydb .gt. 1.0d3))goto 23169 goto 21 23169 continue goto 23165 23166 continue 21 bzmd6ftv(ayfnwr1v,kij0gwer) = -q6zdcwxk 20 tad5vhsu = 0.0d0 ayfnwr1v=ayfnwr1v+1 goto 23150 23152 continue kij0gwer=kij0gwer+1 goto 23147 23149 continue return end subroutine mbessi0(bvecto, kuzxj1lo, kpzavbj3, d0, d1, d2, &zjkrtol8, qaltf0nz) implicit logical (a-z) integer kuzxj1lo, kpzavbj3, zjkrtol8, c5aesxkus double precision bvecto(kuzxj1lo), d0(kuzxj1lo), d1(kuzxj1lo), d2( &kuzxj1lo), qaltf0nz integer ayfnwr1v, gp1jxzuh double precision f0, t0, m0, f1, t1, m1, f2, t2, m2 double precision toobig toobig = 20.0d0 zjkrtol8 = 0 if(.not.(.not.(kpzavbj3 .eq. 0 .or. kpzavbj3 .eq. 1 .or. kpzavbj3 &.eq. 2)))goto 23171 zjkrtol8 = 1 return 23171 continue do 23173 gp1jxzuh=1,kuzxj1lo if(.not.(dabs(bvecto(gp1jxzuh)) .gt. toobig))goto 23175 zjkrtol8 = 1 return 23175 continue t1 = bvecto(gp1jxzuh) / 2.0d0 f1 = t1 t0 = t1 * t1 f0 = 1.0d0 + t0 t2 = 0.50d0 f2 = t2 c5aesxkus = 15 if(.not.(dabs(bvecto(gp1jxzuh)) .gt. 10))goto 23177 c5aesxkus = 25 23177 continue if(.not.(dabs(bvecto(gp1jxzuh)) .gt. 15))goto 23179 c5aesxkus = 35 23179 continue if(.not.(dabs(bvecto(gp1jxzuh)) .gt. 20))goto 23181 c5aesxkus = 40 23181 continue if(.not.(dabs(bvecto(gp1jxzuh)) .gt. 30))goto 23183 c5aesxkus = 55 23183 continue do 23185 ayfnwr1v=1,c5aesxkus m0 = (bvecto(gp1jxzuh) / (2.0d0*(ayfnwr1v+1.0d0))) ** 2.0 m1 = m0 * (1.0d0 + 1.0d0/ayfnwr1v) m2 = m1 * (2.0d0*ayfnwr1v + 1.0d0) / (2.0d0*ayfnwr1v - 1.0d0) t0 = t0 * m0 t1 = t1 * m1 t2 = t2 * m2 f0 = f0 + t0 f1 = f1 + t1 f2 = f2 + t2 if(.not.((dabs(t0) .lt. qaltf0nz) .and. (dabs(t1) .lt. qaltf0nz) &.and. (dabs(t2) .lt. qaltf0nz)))goto 23187 goto 23186 23187 continue 23185 continue 23186 continue if(.not.(0 .le. kpzavbj3))goto 23189 d0(gp1jxzuh) = f0 23189 continue if(.not.(1 .le. kpzavbj3))goto 23191 d1(gp1jxzuh) = f1 23191 continue if(.not.(2 .le. kpzavbj3))goto 23193 d2(gp1jxzuh) = f2 23193 continue 23173 continue return end VGAM/src/vlinpack3.f0000644000176000001440000004503012136651167013676 0ustar ripleyusersc 1/4/00 c The following code is linpack.f from GAMFIT c For R.1.0-0, subroutine dshift is needed c 12/7/02; T.Yee c I've modifed the routines in this file so that reals become double c precisions. The subroutine and functions may have a "8" put after it c to (hopefully) make it unique. c All this for the VGAM package. c For example, "real function ddot" to "double precision function ddot8". c I might add a "implicit logical (a-z)" line to pick up errors. subroutine daxpy8(n,da,dx,incx,dy,incy) implicit logical (a-z) c c constant times a vector plus a vector. c uses unrolled loops for increments equal to one. c jack dongarra, linpack, 3/11/78. c c c c 20130419: orig.: c double precision dx(1),dy(1),da c c c c double precision dx(*),dy(*),da integer i,incx,incy,m,mp1,n c Undeclared, so added by T.Yee integer ix, iy c if(n.le.0)return if (da .eq. 
0.0d0) return if(incx.eq.1.and.incy.eq.1)go to 20 c c code for unequal increments or equal increments c not equal to 1 c ix = 1 iy = 1 if(incx.lt.0)ix = (-n+1)*incx + 1 if(incy.lt.0)iy = (-n+1)*incy + 1 do 10 i = 1,n dy(iy) = dy(iy) + da*dx(ix) ix = ix + incx iy = iy + incy 10 continue return c c code for both increments equal to 1 c c c clean-up loop c 20 m = mod(n,4) if( m .eq. 0 ) go to 40 do 30 i = 1,m dy(i) = dy(i) + da*dx(i) 30 continue if( n .lt. 4 ) return 40 mp1 = m + 1 do 50 i = mp1,n,4 dy(i) = dy(i) + da*dx(i) dy(i + 1) = dy(i + 1) + da*dx(i + 1) dy(i + 2) = dy(i + 2) + da*dx(i + 2) dy(i + 3) = dy(i + 3) + da*dx(i + 3) 50 continue return end subroutine dcopy8(n,dx,incx,dy,incy) implicit logical (a-z) c c copies a vector, x, to a vector, y. c uses unrolled loops for increments equal to one. c jack dongarra, linpack, 3/11/78. c double precision dx(*),dy(*) integer i,incx,incy,ix,iy,m,mp1,n c if(n.le.0)return if(incx.eq.1.and.incy.eq.1)go to 20 c c code for unequal increments or equal increments c not equal to 1 c ix = 1 iy = 1 if(incx.lt.0)ix = (-n+1)*incx + 1 if(incy.lt.0)iy = (-n+1)*incy + 1 do 10 i = 1,n dy(iy) = dx(ix) ix = ix + incx iy = iy + incy 10 continue return c c code for both increments equal to 1 c c c clean-up loop c 20 m = mod(n,7) if( m .eq. 0 ) go to 40 do 30 i = 1,m dy(i) = dx(i) 30 continue if( n .lt. 7 ) return 40 mp1 = m + 1 do 50 i = mp1,n,7 dy(i) = dx(i) dy(i + 1) = dx(i + 1) dy(i + 2) = dx(i + 2) dy(i + 3) = dx(i + 3) dy(i + 4) = dx(i + 4) dy(i + 5) = dx(i + 5) dy(i + 6) = dx(i + 6) 50 continue return end double precision function ddot8(n,dx,incx,dy,incy) c c 12/7/02; T.Yee c I've modifed "real function ddot" to c "double precision function ddot8" for the VGAM package c I've added the "implicit logical (a-z)" line implicit logical (a-z) c c forms the dot product of two vectors. c uses unrolled loops for increments equal to one. c jack dongarra, linpack, 3/11/78. c double precision dx(*),dy(*),dtemp integer i,incx,incy,ix,iy,m,mp1,n c ddot8 = 0.0d0 dtemp = 0.0d0 if(n.le.0)return if(incx.eq.1.and.incy.eq.1)go to 20 c c code for unequal increments or equal increments c not equal to 1 c ix = 1 iy = 1 if(incx.lt.0)ix = (-n+1)*incx + 1 if(incy.lt.0)iy = (-n+1)*incy + 1 do 10 i = 1,n dtemp = dtemp + dx(ix)*dy(iy) ix = ix + incx iy = iy + incy 10 continue ddot8 = dtemp return c c code for both increments equal to 1 c c c clean-up loop c 20 m = mod(n,5) if( m .eq. 0 ) go to 40 do 30 i = 1,m dtemp = dtemp + dx(i)*dy(i) 30 continue if( n .lt. 5 ) go to 60 40 mp1 = m + 1 do 50 i = mp1,n,5 dtemp = dtemp + dx(i)*dy(i) + dx(i + 1)*dy(i + 1) + * dx(i + 2)*dy(i + 2) + dx(i + 3)*dy(i + 3)+dx(i + 4)*dy(i + 4) 50 continue 60 ddot8 = dtemp return end double precision function dnrm28 ( n, dx,ldx, incx) implicit logical (a-z) c Undeclared, so added by T.Yee integer n, ldx, incx, i, j, nn integer next double precision dx(ldx), cutlo, cuthi, hitest, sum, * xmax,zero,one data zero, one /0.0d0, 1.0d0/ c c euclidean norm of the n-vector stored in dx() with storage c increment incx . c if n .le. 0 return with result = 0. c if n .ge. 1 then incx must be .ge. 1 c c c.l.lawson, 1978 jan 08 c c four phase method using two built-in constants that are c hopefully applicable to all machines. c cutlo = maximum of sqrt(u/eps) over all known machines. c cuthi = minimum of sqrt(v) over all known machines. c where c eps = smallest no. such that eps + 1. .gt. 1. c u = smallest positive no. (underflow limit) c v = largest no. (overflow limit) c c brief outline of algorithm.. 
c c phase 1 scans zero components. c move to phase 2 when a component is nonzero and .le. cutlo c move to phase 3 when a component is .gt. cutlo c move to phase 4 when a component is .ge. cuthi/m c where m = n for x() double precision and m = 2*n for complex. c c values for cutlo and cuthi.. c from the environmental parameters listed in the imsl converter c document the limiting values are as follows.. c cutlo, s.p. u/eps = 2**(-102) for honeywell. close seconds are c univac and dec at 2**(-103) c thus cutlo = 2**(-51) = 4.44089d-16 c cuthi, s.p. v = 2**127 for univac, honeywell, and dec. c thus cuthi = 2**(63.5) = 1.30438d19 c cutlo, d.p. u/eps = 2**(-67) for honeywell and dec. c thus cutlo = 2**(-33.5) = 8.23181d-11 c cuthi, d.p. same as s.p. cuthi = 1.30438d19 c data cutlo, cuthi / 8.232d-11, 1.304d19 / c data cutlo, cuthi / 4.441d-16, 1.304d19 / data cutlo, cuthi / 8.232d-11, 1.304d19 / c if(n .gt. 0) go to 10 dnrm28 = zero go to 300 c 10 next = 30 sum = zero nn = n * incx c begin main loop i = 1 c 20 go to next,(30, 50, 70, 110) 20 if(next .eq. 30) go to 30 if(next .eq. 50) go to 50 if(next .eq. 70) go to 70 if(next .eq. 110) go to 110 c An error!!! dnrm28 = 0.0d0 return c 30 if( dabs(dx(i)) .gt. cutlo) go to 85 next = 50 xmax = zero c c phase 1. sum is zero c 50 if( dx(i) .eq. zero) go to 200 if( dabs(dx(i)) .gt. cutlo) go to 85 c c prepare for phase 2. next = 70 go to 105 c c prepare for phase 4. c 100 i = j next = 110 sum = (sum / dx(i)) / dx(i) 105 xmax = dabs(dx(i)) go to 115 c c phase 2. sum is small. c scale to avoid destructive underflow. c 70 if( dabs(dx(i)) .gt. cutlo ) go to 75 c c common code for phases 2 and 4. c in phase 4 sum is large. scale to avoid overflow. c 110 if( dabs(dx(i)) .le. xmax ) go to 115 sum = one + sum * (xmax / dx(i))**2 xmax = dabs(dx(i)) go to 200 c 115 sum = sum + (dx(i)/xmax)**2 go to 200 c c c prepare for phase 3. c 75 sum = (sum * xmax) * xmax c c c for real or d.p. set hitest = cuthi/n c for complex set hitest = cuthi/(2*n) c c "float" changed to "dfloat" by T.Yee 85 hitest = cuthi/dfloat( n ) c c phase 3. sum is mid-range. no scaling. c do 95 j =i,nn,incx if(dabs(dx(j)) .ge. hitest) go to 100 95 sum = sum + dx(j)**2 dnrm28 = dsqrt( sum ) go to 300 c 200 continue i = i + incx if ( i .le. nn ) go to 20 c c end of main loop. c c compute square root and adjust for scaling. c dnrm28 = xmax * dsqrt(sum) 300 continue return end subroutine dscal8(n,da,dx,incx) implicit logical (a-z) c c scales a vector by a constant. c uses unrolled loops for increment equal to one. c jack dongarra, linpack, 3/11/78. c double precision da,dx(*) integer i,incx,m,mp1,n,nincx c if(n.le.0)return if(incx.eq.1)go to 20 c c code for increment not equal to 1 c nincx = n*incx do 10 i = 1,nincx,incx dx(i) = da*dx(i) 10 continue return c c code for increment equal to 1 c c c clean-up loop c 20 m = mod(n,5) if( m .eq. 0 ) go to 40 do 30 i = 1,m dx(i) = da*dx(i) 30 continue if( n .lt. 
5 ) return 40 mp1 = m + 1 do 50 i = mp1,n,5 dx(i) = da*dx(i) dx(i + 1) = da*dx(i + 1) dx(i + 2) = da*dx(i + 2) dx(i + 3) = da*dx(i + 3) dx(i + 4) = da*dx(i + 4) 50 continue return end subroutine dshift8(x,ldx,n,j,k) implicit logical (a-z) integer ldx,n,j,k double precision x(ldx,k), tt integer i,jj if(k.le.j)return do 100 i=1,n tt=x(i,j) do 50 jj=j+1,k x(i,jj-1)=x(i,jj) 50 continue x(i,k)=tt 100 continue return end subroutine vdqrsl(x,ldx,n,k,qraux,y,qy,qty,b,rsd,xb,job,info) implicit logical (a-z) integer ldx,n,k,job,info double precision x(ldx,*),qraux(*),y(*),qy(*),qty(*),b(*),rsd(*), * xb(*) c c c c 20130419: orig.: c double precision x(ldx,1),qraux(1),y(1),qy(1),qty(1),b(1),rsd(1), c * xb(1) c c c c dqrsl applies the output of dqrdc to compute coordinate c transformations, projections, and least squares solutions. c for k .le. min(n,p), let xk be the matrix c c xk = (x(jpvt(1)),x(jpvt(2)), ... ,x(jpvt(k))) c c formed from columnns jpvt(1), ... ,jpvt(k) of the original c n x p matrix x that was input to dqrdc (if no pivoting was c done, xk consists of the first k columns of x in their c original order). dqrdc produces a factored orthogonal matrix q c and an upper triangular matrix r such that c c xk = q * (r) c (0) c c this information is contained in coded form in the arrays c x and qraux. c c on entry c c x double precision(ldx,p). c x contains the output of dqrdc. c c ldx integer. c ldx is the leading dimension of the array x. c c n integer. c n is the number of rows of the matrix xk. it must c have the same value as n in dqrdc. c c k integer. c k is the number of columns of the matrix xk. k c must nnot be greater than min(n,p), where p is the c same as in the calling sequence to dqrdc. c c qraux double precision(p). c qraux contains the auxiliary output from dqrdc. c c y double precision(n) c y contains an n-vector that is to be manipulated c by dqrsl. c c job integer. c job specifies what is to be computed. job has c the decimal expansion abcde, with the following c meaning. c c if a.ne.0, compute qy. c if b,c,d, or e .ne. 0, compute qty. c if c.ne.0, compute b. c if d.ne.0, compute rsd. c if e.ne.0, compute xb. c c note that a request to compute b, rsd, or xb c automatically triggers the computation of qty, for c which an array must be provided in the calling c sequence. c c on return c c qy double precision(n). c qy conntains q*y, if its computation has been c requested. c c qty double precision(n). c qty contains trans(q)*y, if its computation has c been requested. here trans(q) is the c transpose of the matrix q. c c b double precision(k) c b contains the solution of the least squares problem c c minimize norm2(y - xk*b), c c if its computation has been requested. (note that c if pivoting was requested in dqrdc, the j-th c component of b will be associated with column jpvt(j) c of the original matrix x that was input into dqrdc.) c c rsd double precision(n). c rsd contains the least squares residual y - xk*b, c if its computation has been requested. rsd is c also the orthogonal projection of y onto the c orthogonal complement of the column space of xk. c c xb double precision(n). c xb contains the least squares approximation xk*b, c if its computation has been requested. xb is also c the orthogonal projection of y onto the column space c of x. c c info integer. c info is zero unless the computation of b has c been requested and r is exactly singular. in c this case, info is the index of the first zero c diagonal element of r and b is left unaltered. 
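c
c     a hedged worked illustration (not part of the original
c     linpack source): with job = 1100 the decoding tests used
c     further below give
c
c        qy :      1100/10000     = 0    -> not computed
c        qty:  mod(1100,10000)    = 1100 -> computed
c        b  :  mod(1100,1000)/100 = 1    -> computed
c        rsd:  mod(1100,100)/10   = 0    -> not computed
c        xb :  mod(1100,10)       = 0    -> not computed
c
c     i.e. this job code returns b together with trans(q)*y.
c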
c c the parameters qy, qty, b, rsd, and xb are not referenced c if their computation is not requested and in this case c can be replaced by dummy variables in the calling program. c to save storage, the user may in some cases use the same c array for different parameters in the calling sequence. a c frequently occuring example is when one wishes to compute c any of b, rsd, or xb and does not need y or qty. in this c case one may identify y, qty, and one of b, rsd, or xb, while c providing separate arrays for anything else that is to be c computed. thus the calling sequence c c call dqrsl(x,ldx,n,k,qraux,y,dum,y,b,y,dum,110,info) c c will result in the computation of b and rsd, with rsd c overwriting y. more generally, each item in the following c list contains groups of permissible identifications for c a single callinng sequence. c c 1. (y,qty,b) (rsd) (xb) (qy) c c 2. (y,qty,rsd) (b) (xb) (qy) c c 3. (y,qty,xb) (b) (rsd) (qy) c c 4. (y,qy) (qty,b) (rsd) (xb) c c 5. (y,qy) (qty,rsd) (b) (xb) c c 6. (y,qy) (qty,xb) (b) (rsd) c c in any group the value returned in the array allocated to c the group corresponds to the last member of the group. c c linpack. this version dated 08/14/78 . c g.w. stewart, university of maryland, argonne national lab. c c dqrsl uses the following functions and subprograms. c c blas daxpy8,dcopy8,ddot8 c fortran dabs,min0,mod c c internal variables c integer i,j,jj,ju,kp1 double precision ddot8,t,temp logical cb,cqy,cqty,cr,cxb c c c set info flag. c info = 0 c c determine what is to be computed. c cqy = job/10000 .ne. 0 cqty = mod(job,10000) .ne. 0 cb = mod(job,1000)/100 .ne. 0 cr = mod(job,100)/10 .ne. 0 cxb = mod(job,10) .ne. 0 ju = min0(k,n-1) c c special action when n=1. c if (ju .ne. 0) go to 40 if (cqy) qy(1) = y(1) if (cqty) qty(1) = y(1) if (cxb) xb(1) = y(1) if (.not.cb) go to 30 if (x(1,1) .ne. 0.0d0) go to 10 info = 1 go to 20 10 continue b(1) = y(1)/x(1,1) 20 continue 30 continue if (cr) rsd(1) = 0.0d0 go to 250 40 continue c c set up to compute qy or qty. c if (cqy) call dcopy8(n,y,1,qy,1) if (cqty) call dcopy8(n,y,1,qty,1) if (.not.cqy) go to 70 c c compute qy. c do 60 jj = 1, ju j = ju - jj + 1 if (qraux(j) .eq. 0.0d0) go to 50 temp = x(j,j) x(j,j) = qraux(j) t = -ddot8(n-j+1,x(j,j),1,qy(j),1)/x(j,j) call daxpy8(n-j+1,t,x(j,j),1,qy(j),1) x(j,j) = temp 50 continue 60 continue 70 continue if (.not.cqty) go to 100 c c compute trans(q)*y. c do 90 j = 1, ju if (qraux(j) .eq. 0.0d0) go to 80 temp = x(j,j) x(j,j) = qraux(j) t = -ddot8(n-j+1,x(j,j),1,qty(j),1)/x(j,j) call daxpy8(n-j+1,t,x(j,j),1,qty(j),1) x(j,j) = temp 80 continue 90 continue 100 continue c c set up to compute b, rsd, or xb. c if (cb) call dcopy8(k,qty,1,b,1) kp1 = k + 1 if (cxb) call dcopy8(k,qty,1,xb,1) if(cr .and. k .lt. n) call dcopy8(n-k,qty(kp1),1,rsd(kp1),1) if (.not.cxb .or. kp1 .gt. n) go to 120 do 110 i = kp1, n xb(i) = 0.0d0 110 continue 120 continue if (.not.cr) go to 140 do 130 i = 1, k rsd(i) = 0.0d0 130 continue 140 continue if (.not.cb) go to 190 c c compute b. c do 170 jj = 1, k j = k - jj + 1 if (x(j,j) .ne. 0.0d0) go to 150 info = j c ......exit go to 180 150 continue b(j) = b(j)/x(j,j) if (j .eq. 1) go to 160 t = -b(j) call daxpy8(j-1,t,x(1,j),1,b,1) 160 continue 170 continue 180 continue 190 continue if (.not.cr .and. .not.cxb) go to 240 c c compute rsd or xb as required. c do 230 jj = 1, ju j = ju - jj + 1 if (qraux(j) .eq. 
0.0d0) go to 220 temp = x(j,j) x(j,j) = qraux(j) if (.not.cr) go to 200 t = -ddot8(n-j+1,x(j,j),1,rsd(j),1)/x(j,j) call daxpy8(n-j+1,t,x(j,j),1,rsd(j),1) 200 continue if (.not.cxb) go to 210 t = -ddot8(n-j+1,x(j,j),1,xb(j),1)/x(j,j) call daxpy8(n-j+1,t,x(j,j),1,xb(j),1) 210 continue x(j,j) = temp 220 continue 230 continue 240 continue 250 continue return end VGAM/src/vlinpack2.f0000644000176000001440000001715512136651167013704 0ustar ripleyusersc This file contains modified code from Hastie and Tibshirani's c GAMFIT code, as well as a rational cholesky function or two. c All code here derives from linpack c T.Yee 7/10/99 c This function was formerly real function dnrm2, but now converted c to double precision c Nb. changed "float(n)" to "dfloat(n)" double precision function vdnrm2 ( n, dx,ldx, incx) c c added by tyee 23/9/00: implicit double precision (a-h,o-z) implicit integer (i-n) c integer next double precision dx(ldx), cutlo, cuthi, hitest, sum double precision xmax,zero,one data zero, one /0.0d0, 1.0d0/ c c euclidean norm of the n-vector stored in dx() with storage c increment incx . c if n .le. 0 return with result = 0. c if n .ge. 1 then incx must be .ge. 1 c c c.l.lawson, 1978 jan 08 c c four phase method using two built-in constants that are c hopefully applicable to all machines. c cutlo = maximum of dsqrt(u/eps) over all known machines. c cuthi = minimum of dsqrt(v) over all known machines. c where c eps = smallest no. such that eps + 1. .gt. 1. c u = smallest positive no. (underflow limit) c v = largest no. (overflow limit) c c brief outline of algorithm.. c c phase 1 scans zero components. c move to phase 2 when a component is nonzero and .le. cutlo c move to phase 3 when a component is .gt. cutlo c move to phase 4 when a component is .ge. cuthi/m c where m = n for x() double precision and m = 2*n for complex. c c values for cutlo and cuthi.. c from the environmental parameters listed in the imsl converter c document the limiting values are as follows.. c cutlo, s.p. u/eps = 2**(-102) for honeywell. close seconds are c univac and dec at 2**(-103) c thus cutlo = 2**(-51) = 4.44089e-16 c cuthi, s.p. v = 2**127 for univac, honeywell, and dec. c thus cuthi = 2**(63.5) = 1.30438e19 c cutlo, d.p. u/eps = 2**(-67) for honeywell and dec. c thus cutlo = 2**(-33.5) = 8.23181e-11 c cuthi, d.p. same as s.p. cuthi = 1.30438e19 c data cutlo, cuthi / 8.232e-11, 1.304e19 / c data cutlo, cuthi / 4.441e-16, 1.304e19 / data cutlo, cuthi / 8.232e-11, 1.304e19 / c if(n .gt. 0) go to 10 vdnrm2 = zero go to 300 c 10 next = 30 sum = zero nn = n * incx c begin main loop i = 1 c 20 go to next,(30, 50, 70, 110) 20 if(next .eq. 30) go to 30 if(next .eq. 50) go to 50 if(next .eq. 70) go to 70 if(next .eq. 110) go to 110 c An error!!! vdnrm2 = 0.0d0 return 30 if( dabs(dx(i)) .gt. cutlo) go to 85 next = 50 xmax = zero c c phase 1. sum is zero c 50 if( dx(i) .eq. zero) go to 200 if( dabs(dx(i)) .gt. cutlo) go to 85 c c prepare for phase 2. next = 70 go to 105 c c prepare for phase 4. c 100 i = j next = 110 sum = (sum / dx(i)) / dx(i) 105 xmax = dabs(dx(i)) go to 115 c c phase 2. sum is small. c scale to avoid destructive underflow. c 70 if( dabs(dx(i)) .gt. cutlo ) go to 75 c c common code for phases 2 and 4. c in phase 4 sum is large. scale to avoid overflow. c 110 if( dabs(dx(i)) .le. xmax ) go to 115 c 11/4/01: replacing "**2.0d0" by "**2" (three times in this file) sum = one + sum * (xmax / dx(i))**2 xmax = dabs(dx(i)) go to 200 c 115 sum = sum + (dx(i)/xmax)**2 go to 200 c c c prepare for phase 3. 
c 75 sum = (sum * xmax) * xmax c c c for real or d.p. set hitest = cuthi/n c for complex set hitest = cuthi/(2*n) c 85 hitest = cuthi / dfloat( n ) c c phase 3. sum is mid-range. no scaling. c do 95 j =i,nn,incx if(dabs(dx(j)) .ge. hitest) go to 100 95 sum = sum + dx(j)**2 vdnrm2 = dsqrt( sum ) go to 300 c 200 continue i = i + incx if ( i .le. nn ) go to 20 c c end of main loop. c c compute square root and adjust for scaling. c vdnrm2 = xmax * dsqrt(sum) 300 continue return end c ============================================================== c This is modified linpack Fortran code c Changes marked with yyy c 23/9/99 c Works subroutine vdpbfa7(abd,lda,n,m,info,d) integer lda,n,m,info double precision abd(lda,*), d(n) c c c c 20130419: orig.: c double precision abd(lda,1), d(n) c c c c vdpbfa7 is dpbfa8 but uses Rational Cholesky instead of ordinary c Cholesky c c abd = t(u) d u where u is unit upper triangular and d is diagonal c the diagonal of d is stored where the 1's of the u would be stored c c See dpbfa8 for more information c d(1:n) is assigned the values of diag(d), and abd(m+1,) <- 1 c c Improvement yet to do: c delete d and put its contents into abd(m+1,) (intrinsic 1's) c c internal variables c c double precision ddot8 double precision s,t integer ik,j,jk,k,mu, i,row c begin block with ...exits to 40 c c c yyy d(1) = abd(m+1,1) c do 30 j = 1, n c print *, "j = ", j info = j s = 0.0d0 ik = m + 1 jk = max0(j-m,1) mu = max0(m+2-j,1) if (m .lt. mu) go to 20 do 10 k = mu, m c print *, " k = ", k c t = abd(k,j) - ddot8(k-mu,abd(ik,jk),1,abd(mu,j),1) c t = abd(k,j) do 1 i = 1,k-mu row = mu-2+i+j-m t = t - d(row)*abd(ik-1+i,jk)*abd(mu-1+i,j) c print *, " row = ", row 1 continue c c yyy c t = t/abd(m+1,jk) row = mu-2+(k-mu+1)+j-m c print *, " row = ", row t = t/d(row) c abd(k,j) = t c c yyy c print *, " index = ", mu-1+i+j-m s = s + t*t*d(row) c ik = ik - 1 jk = jk + 1 10 continue 20 continue s = abd(m+1,j) - s c c ......exit if (s .le. 
0.0d0) go to 40 c c yyy c abd(m+1,j) = dsqrt(s) abd(m+1,j) = 1d0 d(j) = s c 30 continue info = 0 40 continue return end subroutine vdpbsl7(abd,lda,n,m,b,d) integer lda,n,m double precision abd(lda,*),b(*),d(*) c c c c 20130419: orig: c double precision abd(lda,1),b(1),d(1) c c c c vdpbsl7 is dpbsl8 but uses Rational Cholesky instead of ordinary c Cholesky c c See dpbsl8 for more information c c Improvement yet to do: c delete d and put its contents into abd(m+1,) (intrinsic 1's) c c internal variables c double precision ddot8,t integer k,kb,la,lb,lm c c solve trans(r)*y = b c do 10 k = 1, n lm = min0(k-1,m) la = m + 1 - lm lb = k - lm t = ddot8(lm,abd(la,k),1,b(lb),1) c c yyy c b(k) = (b(k) - t)/abd(m+1,k) b(k) = b(k) - t c 10 continue c c c yyy do 15 k = 1, n b(k) = b(k)/d(k) 15 continue c c c solve r*x = y c do 20 kb = 1, n k = n + 1 - kb lm = min0(k-1,m) la = m + 1 - lm lb = k - lm c c yyy c b(k) = b(k)/abd(m+1,k) c t = -b(k) call daxpy8(lm,t,abd(la,k),1,b(lb),1) 20 continue return end VGAM/src/vlinpack1.f0000644000176000001440000000420212136651167013670 0ustar ripleyusers subroutine vqrdca(x,ldx,n,p,fasrkub3,jpvt,work,xwdf5ltg,eps) implicit double precision (a-h,o-z) implicit integer (i-n) double precision dsign, dabs, dmax1, dsqrt integer min0 integer ldx,n,p,xwdf5ltg integer jpvt(*) integer j,jj,jp,l,lup,curpvt double precision x(ldx,p),fasrkub3(p),work(*),eps double precision vdnrm2,tt double precision ddot8,nrmxl,t do 23000 j=1,p fasrkub3(j) = vdnrm2(n,x(1,j),ldx,1) work(j) = fasrkub3(j) 23000 continue l=1 lup = min0(n,p) curpvt = p 23002 if(.not.(l.le.lup))goto 23003 fasrkub3(l) = 0.0d0 nrmxl = vdnrm2(n-l+1, x(l,l), ldx, 1) if(.not.(nrmxl .lt. eps))goto 23004 call dshift8(x,ldx,n,l,curpvt) jp = jpvt(l) t=fasrkub3(l) tt=work(l) j=l+1 23006 if(.not.(j.le.curpvt))goto 23008 jj=j-1 jpvt(jj)=jpvt(j) fasrkub3(jj)=fasrkub3(j) work(jj)=work(j) j=j+1 goto 23006 23008 continue jpvt(curpvt)=jp fasrkub3(curpvt)=t work(curpvt)=tt curpvt=curpvt-1 if(.not.(lup.gt.curpvt))goto 23009 lup=curpvt 23009 continue goto 23005 23004 continue if(.not.(l.eq.n))goto 23011 goto 23003 23011 continue if(.not.(x(l,l).ne.0.0d0))goto 23013 nrmxl = dsign(nrmxl,x(l,l)) 23013 continue call dscal8(n-l+1,1.0d0/nrmxl,x(l,l),1) x(l,l) = 1.0d0+x(l,l) j=l+1 23015 if(.not.(j.le.curpvt))goto 23017 t = -ddot8(n-l+1,x(l,l),1,x(l,j),1)/x(l,l) call daxpy8(n-l+1,t,x(l,l),1,x(l,j),1) if(.not.(fasrkub3(j).ne.0.0d0))goto 23018 tt = 1.0d0-(dabs(x(l,j))/fasrkub3(j))**2 tt = dmax1(tt,0.0d0) t = tt tt = 1.0d0+0.05d0*tt*(fasrkub3(j)/work(j))**2 if(.not.(tt.ne.1.0d0))goto 23020 fasrkub3(j) = fasrkub3(j)*dsqrt(t) goto 23021 23020 continue fasrkub3(j) = vdnrm2(n-l,x(l+1,j),ldx,1) work(j) = fasrkub3(j) 23021 continue 23018 continue j=j+1 goto 23015 23017 continue fasrkub3(l) = x(l,l) x(l,l) = -nrmxl l=l+1 23005 continue goto 23002 23003 continue xwdf5ltg = lup return end VGAM/src/vgam3.c0000644000176000001440000024155012136651167013023 0ustar ripleyusers #include #include #include #include #include void Yee_vbvs(int *f8yswcat, double gkdx5jal[], double rpyis2kc[], double sjwyig9t[], double kispwgx3[], int *acpios9q, int *order, int *wy1vqfzu); void fapc0tnbtfeswo7c(double osiz4fxy[], int *acpios9q, int *wy1vqfzu, int *ldk, double wbkq9zyi[], double xecbg0pf[], double z4grbpiq[], double d7glzhbj[], double v2eydbxs[]); void fapc0tnbybnagt8k(int *iii, int *cz8qdfyj, int *tesdm5kv, double g9fvdrbw[], double osiz4fxy[], double rbne6ouj[], int *kxvq6sfw, int *nyfu9rod, int *wy1vqfzu, int *ldk, int *kvowz9ht, int *kuzxj1lo, int tgiyxdw1[], int 
dufozmt7[]); void Yee_spline(double *sjwyig9t, double *tlgduey8, double *rbne6ouj, double *gkdx5jal, int *lqsahu0r, int *acpios9q, int *ldk, int *wy1vqfzu, int *kvowz9ht, double wbkq9zyi[], double lamvec[], int *aalgpft4y, double t8hwvalr[], double rpyis2kc[], double ui8ysltq[], double ifys6woa[], double hdnw2fts[], int *yzoe1rsp, int *fbd5yktj, int *ftnjamu2, double xecbg0pf[], double z4grbpiq[], double d7glzhbj[], double v2eydbxs[], double tt2[], int *cvnjhg2u, int itdcb8ilk[], // Added 20100313 double tdcb8ilk[] // Added 20100313 ); void fapc0tnbcn8kzpab(double gkdx5jal[], double sjwyig9t[], double rpyis2kc[], int *lqsahu0r, int *acpios9q, int *wy1vqfzu, double t8hwvalr[]); void vsuff9(int *ftnjamu2, int *lqsahu0r, int ezlgm2up[], double sjwyig9t[], double tlgduey8[], double rbne6ouj[], double pygsw6ko[], double pasjmo8g[], double eshvo2ic[], double ueshvo2ic[], double onxjvw8u[], int *dvhw1ulq, int *wy1vqfzu, int *kvowz9ht, int *npjlv3mr, double conmat[], int *kgwmz4ip, int *iz2nbfjc, int *wueshvo2ic, int *npjlv3mreshvo2ic, int *dim2eshvo2ic); void fapc0tnbicpd0omv(double enaqpzk9[], double sjwyig9t[], double gkdx5jal[], double grmuyvx9[], int *ldk, int *lqsahu0r, int *acpios9q, int *wy1vqfzu, int *jzwsy6tp, double rbne6ouj[], double ifys6woa[], int *kvowz9ht, int *ftnjamu2); void fapc0tnbo0xlszqr(int *wy1vqfzu, double *g9fvdrbw, double *quc6khaf, double *bmb); void fapc0tnbvsel(int *nurohxe6t, int *nbpvaqm5z, int *wy1vqfzu, int *ldk, double minv[], double quc6khaf[]); void fapc0tnbovjnsmt2(double bmb[], double rbne6ouj[], double ifys6woa[], int *wy1vqfzu, int *kuzxj1lo, int *dimw, int *iii, int tgiyxdw1_[], int dufozmt7_[]); void fapc0tnbvicb2(double enaqpzk9[], double wpuarq2m[], double Dvector[], int *wy1vqfzu, int *f8yswcat); void fapc0tnbewg7qruh(double ci1oyxas[], double tlgduey8[], double rbne6ouj[], int *ftnjamu2, int *wy1vqfzu, int ezlgm2up[], int *lqsahu0r, double wbkq9zyi[], double lamvec[], double hdnw2fts[], double kispwgx3[], double ui8ysltq[], int *kvowz9ht, int *fbd5yktj, int *ldk, int *aalgpft4y, int *yzoe1rsp, double rpyis2kc[], double gkdx5jals[], double ifys6woa[], double conmat[], double xecbg0pf[], double z4grbpiq[], double d7glzhbj[], double v2eydbxs[], double *tt2, int *cvnjhg2u, int *acpios9q, int *iz2nbfjc, int *kgwmz4ip, int *npjlv3mr, int itdcb8ilk[], // Added 20100313 double tdcb8ilk[] // Added 20100313 ); void Yee_vbfa( int psdvgce3[], double *fjcasv7g, double he7mqnvy[], double tlgduey8[], double rbne6ouj[], double hdnw2fts[], double lamvec[], double wbkq9zyi[], int ezlgm2up[], int lqsahu0r[], int which[], double kispwgx3[], double m0ibglfx[], double zshtfg8c[], double ui8ysltq[], double vc6hatuj[], double fasrkub3[], int ges1xpkr[], double wpuarq2m[], double hjm2ktyr[], int ulm3dvzg[], int hnpt1zym[], int iz2nbfjc[], double ifys6woa[], double rpyis2kc[], double gkdx5jals[], int nbzjkpi3[], int lindex[], int acpios9q[], int jwbkl9fp[]); void fapc0tnbvbfa1(int *ftnjamu2, int *wy1vqfzu, int ezlgm2up[], int lqsahu0r[], int which[], double he7mqnvy[], double tlgduey8[], double rbne6ouj[], double wbkq9zyi[], double lamvec[], double hdnw2fts[], double kispwgx3[], double m0ibglfx[], double zshtfg8c[], double ui8ysltq[], double *zpcqv3uj, double vc6hatuj[], double fasrkub3[], int *qemj9asg, int ges1xpkr[], double wpuarq2m[], double hjm2ktyr[], int ulm3dvzg[], int hnpt1zym[], int iz2nbfjc[], double ifys6woa[], double rpyis2kc[], double gkdx5jals[], double *ghdetj8v, int nbzjkpi3[], int lindex[], int acpios9q[], int jwbkl9fp[], int *nhja0izq, int *yzoe1rsp, int 
*ueb8hndv, int *gtrlbz3e, int *rutyk8mg, int *xjc4ywlh, int *kvowz9ht, int *npjlv3mr, int *fbd5yktj, int *ldk, int *algpft4y, int itdcb8ilk[], double tdcb8ilk[]); void fapc0tnbx6kanjdh(double sjwyig9t[], double xout[], int *f8yswcat, int *wy1vqfzu); double fapc0tnbrd9beyfk(int *f8yswcat, double bhcji9gl[], double po8rwsmy[], double m0ibglfx[]); void fapc0tnbpitmeh0q(int *f8yswcat, double bhcji9gl[], double po8rwsmy[], double *lfu2qhid, double *lm9vcjob); void fapc0tnbdsrt0gem(int *f8yswcat, double sjwyig9t[], double po8rwsmy[], double bhcji9gl[], double ub4xioar[], double ui8ysltq[], int *yzoe1rsp); void fapc0tnbshm8ynte(int *ftnjamu2, int ezlgm2up[], double pygsw6ko[], double sjwyig9t[]); void vknootl2(double x[], int *f8yswcat, double gkdx5jal[], int *rvy1fpli, int *ukgwt7na); void Yee_pknootl2(double *gkdx5jal, int *f8yswcat, int *zo8wpibx, double *Toler_ankcghz2); void F77_NAME(wbvalue)(double*, double*, int*, int*, double*, int*, double*); void F77_NAME(vinterv)(double*, int*, double*, int*, int*); void F77_NAME(vbsplvd)(double*, int*, double*, int*, double*, double*,int*); void F77_NAME(vdpbfa7)(double*, int*, int*, int*, int*, double*); void F77_NAME(vdpbsl7)(double*, int*, int*, int*, double*, double*); void F77_NAME(vdqrsl)(double*, int*, int*, int*, double*, double*, double*, double*, double*, double*, double*, int*, int*); void F77_NAME(vqrdca)(double*, int*, int*, int*, double*, int*, double*, int*, double*); void Free_fapc0tnbyee_spline(double *wkumc9idosiz4fxy, double *wkumc9idenaqpzk9, double *wkumc9idbtwy, double *wkumc9idwk0, double *wkumc9idbk3ymcih, int *wkumc9idtgiyxdw1, int *wkumc9iddufozmt7); void Free_fapc0tnbewg7qruh(double *wkumc9idWrk1, int *wkumc9idges1xpkr, double *wkumc9idbeta, double *wkumc9idfasrkub3, double *wkumc9idsout, double *wkumc9idr0oydcxb, double *wkumc9idub4xioar, double *wkumc9ideffect, double *wkumc9idueshvo2ic, double *wkumc9ids0, double *wkumc9idpygsw6ko, double *wkumc9idpasjmo8g, double *wkumc9ideshvo2ic, double *wkumc9idonxjvw8u, double *wkumc9idwk4); void F77_NAME(vdigami)(double*, double*, double*, double*, double*, double*, double*, double*, double*, double*, int*); void VGAM_C_vdigami(double d[], double x[], double p[], double gplog[], double gp1log[], double psip[], double psip1[], double psidp[], double psidp1[], int *ifault, double *tmax, int *f8yswcat); void n5aioudkgt9iulbf(double sjwyig9t[], double ghz9vuba[], double po8rwsmy[], double gkdx5jal[], int *rvy1fpli, int *kuzxj1lo, double zyupcmk6[], double zvau2lct[], double f6lsuzax[], double fvh2rwtc[], double dcfir2no[]); extern void n5aioudkdnaoqj0l(double *pjb6wfoq, double *xs, double *ys, double ws[], int *kuzxj1lo, int *nk, double gkdx5jal[], double coef[], double sz[], double ifys6woa[], double *wbkq9zyi, double parms[], double xecbg0pf[], double z4grbpiq[], double d7glzhbj[], double v2eydbxs[], double *tt2, int *cvnjhg2u, int l3zpbstu[], int *xtov9rbf, int *wep0oibc, int *fbd5yktj); extern void n5aioudkzosq7hub(double xecbg0pf[], double z4grbpiq[], double d7glzhbj[], double v2eydbxs[], double tb[], int *nb); extern void fvlmz9iyC_qpsedg8x(int tgiyxdw1[], int dufozmt7[], int *wy1vqfzu); extern void fvlmz9iyjdbomp0g(double rbne6ouj[], double unvxka0m[], int *wy1vqfzu, int *dvhw1ulq, int *isolve); extern void fvlmz9iyjdbomp0g(double rbne6ouj[], double unvxka0m[], int *wy1vqfzu, int *dvhw1ulq, int *isolve); extern void fvlmz9iyC_nudh6szq(double wpuarq2m[], double tlgduey8[], double lfu2qhid[], int *dimu, int *f8yswcat, int *wy1vqfzu); extern void fvlmz9iyC_vbks(double wpuarq2m[], 
double unvxka0m[], int *wy1vqfzu, int *f8yswcat, int *dimu); extern void fvlmz9iyC_lkhnw9yq(double wpuarq2m[], double ks3wejcv[], int *npjlv3mr, int *wy1vqfzu, int *dvhw1ulq); extern void fvlmz9iyC_mxrbkut0(double wpuarq2m[], double he7mqnvy[], int *wy1vqfzu, int *xjc4ywlh, int *f8yswcat, int *dimu, int *rutyk8mg); void VGAM_C_vdigami(double d[], double x[], double p[], double gplog[], double gp1log[], double psip[], double psip1[], double psidp[], double psidp1[], int *ifault, double *tmax, int *f8yswcat) { int ayfnwr1v; for (ayfnwr1v = 0; ayfnwr1v < *f8yswcat; ayfnwr1v++) { F77_CALL(vdigami)(d, x, p, gplog, gp1log, psip, psip1, psidp, psidp1, tmax, ifault); d += 6; x++; p++; gplog++; gp1log++; psip++; psip1++; psidp++; psidp1++; ifault++; } } void Yee_vbvs(int *f8yswcat, double gkdx5jal[], double rpyis2kc[], double sjwyig9t[], double kispwgx3[], int *acpios9q, int *order, int *wy1vqfzu) { double *chw8lzty; int ayfnwr1v, yq6lorbx, h2dpsbkr = 4; for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { chw8lzty = sjwyig9t; for (ayfnwr1v = 1; ayfnwr1v <= *f8yswcat; ayfnwr1v++) { F77_CALL(wbvalue)(gkdx5jal, rpyis2kc, acpios9q, &h2dpsbkr, chw8lzty++, order, kispwgx3++); } rpyis2kc += *acpios9q; } } void fapc0tnbtfeswo7c(double osiz4fxy[], int *acpios9q, int *wy1vqfzu, int *ldk, double wbkq9zyi[], double xecbg0pf[], double z4grbpiq[], double d7glzhbj[], double v2eydbxs[]) { int ayfnwr1v, yq6lorbx; double *fpdlcqk9wbkq9zyi, *fpdlcqk9xecbg0pf, *fpdlcqk9z4grbpiq, *fpdlcqk9d7glzhbj, *fpdlcqk9v2eydbxs, *fpdlcqk9osiz4fxy; fpdlcqk9osiz4fxy = osiz4fxy + *ldk - 1; fpdlcqk9xecbg0pf = xecbg0pf; for (ayfnwr1v = 1; ayfnwr1v <= *acpios9q; ayfnwr1v++) { fpdlcqk9wbkq9zyi = wbkq9zyi; for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { *fpdlcqk9osiz4fxy += *fpdlcqk9wbkq9zyi++ * *fpdlcqk9xecbg0pf; fpdlcqk9osiz4fxy += *ldk; } fpdlcqk9xecbg0pf++; } fpdlcqk9osiz4fxy = osiz4fxy + *wy1vqfzu * *ldk; fpdlcqk9osiz4fxy = fpdlcqk9osiz4fxy + *ldk - *wy1vqfzu - 1; fpdlcqk9z4grbpiq = z4grbpiq; for (ayfnwr1v = 1; ayfnwr1v <= *acpios9q; ayfnwr1v++) { fpdlcqk9wbkq9zyi = wbkq9zyi; for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { *fpdlcqk9osiz4fxy += *fpdlcqk9wbkq9zyi++ * *fpdlcqk9z4grbpiq; fpdlcqk9osiz4fxy += *ldk; } fpdlcqk9z4grbpiq++; } fpdlcqk9osiz4fxy = osiz4fxy + *ldk + 2 * *wy1vqfzu * *ldk; fpdlcqk9osiz4fxy = fpdlcqk9osiz4fxy - 2 * *wy1vqfzu - 1; fpdlcqk9d7glzhbj = d7glzhbj; for (ayfnwr1v = 1; ayfnwr1v <= *acpios9q; ayfnwr1v++) { fpdlcqk9wbkq9zyi = wbkq9zyi; for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { *fpdlcqk9osiz4fxy += *fpdlcqk9wbkq9zyi++ * *fpdlcqk9d7glzhbj; fpdlcqk9osiz4fxy += *ldk; } fpdlcqk9d7glzhbj++; } fpdlcqk9osiz4fxy = osiz4fxy + *ldk + 3 * *wy1vqfzu * *ldk; fpdlcqk9osiz4fxy = fpdlcqk9osiz4fxy - 3 * *wy1vqfzu - 1; fpdlcqk9v2eydbxs = v2eydbxs; for (ayfnwr1v = 1; ayfnwr1v <= *acpios9q; ayfnwr1v++) { fpdlcqk9wbkq9zyi = wbkq9zyi; for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { *fpdlcqk9osiz4fxy += *fpdlcqk9wbkq9zyi++ * *fpdlcqk9v2eydbxs; fpdlcqk9osiz4fxy += *ldk; } fpdlcqk9v2eydbxs++; } } void fapc0tnbybnagt8k(int *iii, int *cz8qdfyj, int *tesdm5kv, double g9fvdrbw[], double osiz4fxy[], double rbne6ouj[], int *kxvq6sfw, int *nyfu9rod, int *wy1vqfzu, int *ldk, int *kvowz9ht, int *kuzxj1lo, int tgiyxdw1[], int dufozmt7[]) { double tmp_wrk; int urohxe6t, nead, bcol, brow, biuvowq2, nbj8tdsk; bcol = *cz8qdfyj + *tesdm5kv; brow = *cz8qdfyj; for (urohxe6t = 1; urohxe6t <= *kvowz9ht; urohxe6t++) { tmp_wrk = rbne6ouj[*iii -1 + (urohxe6t-1) * *kuzxj1lo] * g9fvdrbw[*kxvq6sfw-1] * 
g9fvdrbw[*nyfu9rod-1]; biuvowq2 = (brow-1) * *wy1vqfzu + tgiyxdw1[urohxe6t-1]; nbj8tdsk = (bcol-1) * *wy1vqfzu + dufozmt7[urohxe6t-1]; nead = nbj8tdsk - biuvowq2; osiz4fxy[*ldk - nead - 1 + (nbj8tdsk-1) * *ldk] += tmp_wrk; if (*tesdm5kv > 0 && dufozmt7[urohxe6t-1] != tgiyxdw1[urohxe6t-1]) { biuvowq2 = (brow-1) * *wy1vqfzu + dufozmt7[urohxe6t-1]; nbj8tdsk = (bcol-1) * *wy1vqfzu + tgiyxdw1[urohxe6t-1]; nead = nbj8tdsk - biuvowq2; osiz4fxy[*ldk - nead - 1 + (nbj8tdsk-1) * *ldk] += tmp_wrk; } } } void Free_fapc0tnbyee_spline(double *wkumc9idosiz4fxy, double *wkumc9idenaqpzk9, double *wkumc9idbtwy, double *wkumc9idwk0, double *wkumc9idbk3ymcih, int *wkumc9idtgiyxdw1, int *wkumc9iddufozmt7) { Free(wkumc9idosiz4fxy); Free(wkumc9idenaqpzk9); Free(wkumc9idbtwy); Free(wkumc9idwk0); Free(wkumc9idbk3ymcih); Free(wkumc9idtgiyxdw1); Free(wkumc9iddufozmt7); } void Yee_spline(double *sjwyig9t, double *tlgduey8, double *rbne6ouj, double *gkdx5jal, int *lqsahu0r, int *acpios9q, int *ldk, int *wy1vqfzu, int *kvowz9ht, double wbkq9zyi[], double lamvec[], int *aalgpft4y, double t8hwvalr[], double rpyis2kc[], double ui8ysltq[], double ifys6woa[], double hdnw2fts[], int *yzoe1rsp, int *fbd5yktj, int *ftnjamu2, double xecbg0pf[], double z4grbpiq[], double d7glzhbj[], double v2eydbxs[], double *tt2, int *cvnjhg2u, int itdcb8ilk[], // Added 20100313 double tdcb8ilk[] // Added 20100313 ) { int ayfnwr1v, yq6lorbx, gp1jxzuh, urohxe6t, bpvaqm5z, dqlr5bse, pqzfxw4i, wep0oibc; int have_setup_sg = 0; /* == 1 if sg[0123] have been initialized */ int junkicrit = -1, xtov9rbf = 4, l3zpbstu[3], pn9eowxc; double jstx4uwe[4], g9fvdrbw[4], qaltf0nz = 0.1e-9, ms0qypiw[16], *fpdlcqk9btwy; int yu6izdrc = 0, pqneb2ra = 1, qhzja4ny = 2, bvsquk3z = 3, h2dpsbkr = 4; int arm0lkbg1, arm0lkbg2; double *wkumc9idosiz4fxy, *wkumc9idenaqpzk9, *wkumc9idbtwy, *wkumc9idwk0, *wkumc9idbk3ymcih; int *wkumc9idtgiyxdw1, *wkumc9iddufozmt7; int imk5wjxg = *wy1vqfzu * (*wy1vqfzu + 1) / 2; double kpftdm0jmynl7uaq = tdcb8ilk[0], kpftdm0jzustx4fw = tdcb8ilk[1], kpftdm0jtol = tdcb8ilk[2], kpftdm0jeps = tdcb8ilk[3]; double svdbx3tk_tt1, svdbx3tk_tt2 = 0.0, svdbx3tk_g2dnwteb = -1.0; double *wkumc9idzvau2lct, *wkumc9idf6lsuzax, *wkumc9idfvh2rwtc, *wkumc9iddcfir2no; double *wkumc9idxwy; double *fpdlcqk9ifys6woa; wkumc9idtgiyxdw1 = Calloc(imk5wjxg, int); wkumc9iddufozmt7 = Calloc(imk5wjxg, int); fvlmz9iyC_qpsedg8x(wkumc9idtgiyxdw1, wkumc9iddufozmt7, wy1vqfzu); wkumc9idosiz4fxy = Calloc(*ldk * (*wy1vqfzu * *acpios9q), double); wkumc9idenaqpzk9 = Calloc(*ldk * (*acpios9q * *wy1vqfzu), double); wkumc9idbtwy = Calloc(*wy1vqfzu * *acpios9q , double); wkumc9idbk3ymcih = Calloc( *acpios9q , double); wkumc9idwk0 = Calloc(*acpios9q * *wy1vqfzu , double); for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { if (wbkq9zyi[yq6lorbx-1] == 0.0) { pn9eowxc = 0; } else { /// vvv pn9eowxc = 1; if (have_setup_sg == 0) { have_setup_sg = 1; // Need only be done once n5aioudkzosq7hub(xecbg0pf, z4grbpiq, d7glzhbj, v2eydbxs, gkdx5jal, acpios9q); for (ayfnwr1v = 3; ayfnwr1v <= (*acpios9q - 3); ayfnwr1v++) { svdbx3tk_tt2 += xecbg0pf[ayfnwr1v-1]; } } wkumc9idxwy = Calloc(*acpios9q, double); wkumc9idzvau2lct = Calloc(*acpios9q, double); wkumc9idf6lsuzax = Calloc(*acpios9q, double); wkumc9idfvh2rwtc = Calloc(*acpios9q, double); wkumc9iddcfir2no = Calloc(*acpios9q, double); n5aioudkgt9iulbf(sjwyig9t, tlgduey8 + (yq6lorbx-1) * *lqsahu0r, // bhcji9gl rbne6ouj + (yq6lorbx-1) * *lqsahu0r, // po8rwsmy, gkdx5jal, lqsahu0r, acpios9q, wkumc9idxwy, // lqsahu0r === kuzxj1lo wkumc9idzvau2lct, 
wkumc9idf6lsuzax, wkumc9idfvh2rwtc, wkumc9iddcfir2no); svdbx3tk_tt1 = 0.0; for (ayfnwr1v = 3; ayfnwr1v <= (*acpios9q - 3); ayfnwr1v++) { svdbx3tk_tt1 += wkumc9idzvau2lct[ayfnwr1v-1]; } Free(wkumc9idxwy); Free(wkumc9idzvau2lct); Free(wkumc9idf6lsuzax); Free(wkumc9idfvh2rwtc); Free(wkumc9iddcfir2no); svdbx3tk_g2dnwteb = svdbx3tk_tt1 / svdbx3tk_tt2; lamvec[yq6lorbx-1] = svdbx3tk_g2dnwteb * pow(16.0, wbkq9zyi[yq6lorbx-1] * 6.0 - 2.0); } /// vvv if (*wy1vqfzu == 1 || *kvowz9ht == *wy1vqfzu || pn9eowxc == 0) { // ggg wep0oibc = 1; l3zpbstu[0] = junkicrit; l3zpbstu[1] = pn9eowxc; l3zpbstu[2] = itdcb8ilk[0]; jstx4uwe[0] = kpftdm0jmynl7uaq; // Prior to 20100313: was waiez6nt; jstx4uwe[1] = kpftdm0jzustx4fw; // Prior to 20100313: was fp6nozvx; jstx4uwe[2] = kpftdm0jtol; // Prior to 20100313: was Toler_df; jstx4uwe[3] = kpftdm0jeps; // Introduced as an arg, 20100313 if (*wy1vqfzu == 1 || *kvowz9ht == *wy1vqfzu) { // hhh for (ayfnwr1v = 1; ayfnwr1v <= *lqsahu0r; ayfnwr1v++) { tlgduey8[ayfnwr1v-1 + (yq6lorbx-1) * *lqsahu0r] /= rbne6ouj[ayfnwr1v-1 + (yq6lorbx-1) * *lqsahu0r]; } have_setup_sg = 1; n5aioudkdnaoqj0l(hdnw2fts + yq6lorbx-1, sjwyig9t, tlgduey8 + (yq6lorbx-1) * *lqsahu0r, rbne6ouj + (yq6lorbx-1) * *lqsahu0r, lqsahu0r, acpios9q, gkdx5jal, rpyis2kc + (yq6lorbx-1) * *acpios9q, t8hwvalr + (yq6lorbx-1) * *lqsahu0r, ifys6woa + (yq6lorbx-1) * *lqsahu0r, // *ftnjamu2, wbkq9zyi + yq6lorbx-1, jstx4uwe, xecbg0pf, z4grbpiq, d7glzhbj, v2eydbxs, tt2, cvnjhg2u, l3zpbstu, &xtov9rbf, &wep0oibc, fbd5yktj); lamvec[yq6lorbx-1] = jstx4uwe[0]; if (*fbd5yktj) { Rprintf("Error in n5aioudkdnaoqj0l; inside Yee_spline\n"); Free_fapc0tnbyee_spline(wkumc9idosiz4fxy, wkumc9idenaqpzk9, wkumc9idbtwy, wkumc9idwk0, wkumc9idbk3ymcih, wkumc9idtgiyxdw1, wkumc9iddufozmt7); return; } if (*yzoe1rsp) { for (ayfnwr1v = 1; ayfnwr1v <= *lqsahu0r; ayfnwr1v++) { gp1jxzuh = ayfnwr1v-1 + (yq6lorbx-1) * *ftnjamu2; bpvaqm5z = ayfnwr1v-1 + (yq6lorbx-1) * *lqsahu0r; ui8ysltq[gp1jxzuh] = ifys6woa[bpvaqm5z] / rbne6ouj[bpvaqm5z]; } } } else { // hhh and uuu have_setup_sg = 1; n5aioudkdnaoqj0l(hdnw2fts + yq6lorbx-1, sjwyig9t, wkumc9idbk3ymcih, rbne6ouj + (yq6lorbx-1) * *lqsahu0r, lqsahu0r, acpios9q, gkdx5jal, rpyis2kc + (yq6lorbx-1) * *acpios9q, t8hwvalr + (yq6lorbx-1) * *lqsahu0r, ifys6woa + (yq6lorbx-1) * *lqsahu0r, // 20130427 wbkq9zyi + yq6lorbx-1, jstx4uwe, xecbg0pf, z4grbpiq, d7glzhbj, v2eydbxs, tt2, cvnjhg2u, l3zpbstu, &xtov9rbf, &wep0oibc, fbd5yktj); lamvec[yq6lorbx-1] = jstx4uwe[0]; if (*fbd5yktj) { Rprintf("Error in Rgam_dnaoqj0l; inside Yee_spline\n"); Free_fapc0tnbyee_spline(wkumc9idosiz4fxy, wkumc9idenaqpzk9, wkumc9idbtwy, wkumc9idwk0, wkumc9idbk3ymcih, wkumc9idtgiyxdw1, wkumc9iddufozmt7); return; } } // uuu if (*fbd5yktj) { Rprintf("Error in n5aioudkdnaoqj0l: fbd5yktj = %3d.\n", *fbd5yktj); Rprintf("Called within Yee_spline.\n"); Free_fapc0tnbyee_spline(wkumc9idosiz4fxy, wkumc9idenaqpzk9, wkumc9idbtwy, wkumc9idwk0, wkumc9idbk3ymcih, wkumc9idtgiyxdw1, wkumc9iddufozmt7); return; } } // ggg } if (*wy1vqfzu == 1 || *kvowz9ht == *wy1vqfzu) { Free_fapc0tnbyee_spline(wkumc9idosiz4fxy, wkumc9idenaqpzk9, wkumc9idbtwy, wkumc9idwk0, wkumc9idbk3ymcih, wkumc9idtgiyxdw1, wkumc9iddufozmt7); for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { hdnw2fts[yq6lorbx-1] -= 1.0; // Decrement it. 
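      /* Hedged reading, not an original comment: this decrement
         apparently removes the one degree of freedom absorbed by the
         linear part from each smooth's effective df, matching the
         "-1.0;  // Initialize; subtract the linear part"
         initialization used in the general branch further below. */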
} return; } for (ayfnwr1v = 1; ayfnwr1v <= *lqsahu0r; ayfnwr1v++) { arm0lkbg1 = *acpios9q + 1; F77_CALL(vinterv)(gkdx5jal, &arm0lkbg1, sjwyig9t + ayfnwr1v-1, &dqlr5bse, &pqzfxw4i); if (pqzfxw4i == 1) { if (sjwyig9t[ayfnwr1v-1] <= (gkdx5jal[dqlr5bse-1] + qaltf0nz)) { dqlr5bse--; } else { Rprintf("Freeing memory in Yee_spline and returning.\n"); Free_fapc0tnbyee_spline(wkumc9idosiz4fxy, wkumc9idenaqpzk9, wkumc9idbtwy, wkumc9idwk0, wkumc9idbk3ymcih, wkumc9idtgiyxdw1, wkumc9iddufozmt7); return; } } F77_CALL(vbsplvd)(gkdx5jal, &h2dpsbkr, sjwyig9t + ayfnwr1v-1, &dqlr5bse, ms0qypiw, g9fvdrbw, &pqneb2ra); yq6lorbx = dqlr5bse - 4 + 1; fpdlcqk9btwy = wkumc9idbtwy + (yq6lorbx-1) * *wy1vqfzu; for (urohxe6t = 1; urohxe6t <= *wy1vqfzu; urohxe6t++) { *fpdlcqk9btwy += tlgduey8[ayfnwr1v-1 + (urohxe6t-1) * *lqsahu0r] * g9fvdrbw[0]; fpdlcqk9btwy++; } fapc0tnbybnagt8k(&ayfnwr1v, &yq6lorbx, &yu6izdrc, g9fvdrbw, wkumc9idosiz4fxy, rbne6ouj, &pqneb2ra, &pqneb2ra, wy1vqfzu, ldk, kvowz9ht, lqsahu0r, wkumc9idtgiyxdw1, wkumc9iddufozmt7); fapc0tnbybnagt8k(&ayfnwr1v, &yq6lorbx, &pqneb2ra, g9fvdrbw, wkumc9idosiz4fxy, rbne6ouj, &pqneb2ra, &qhzja4ny, wy1vqfzu, ldk, kvowz9ht, lqsahu0r, wkumc9idtgiyxdw1, wkumc9iddufozmt7); fapc0tnbybnagt8k(&ayfnwr1v, &yq6lorbx, &qhzja4ny, g9fvdrbw, wkumc9idosiz4fxy, rbne6ouj, &pqneb2ra, &bvsquk3z, wy1vqfzu, ldk, kvowz9ht, lqsahu0r, wkumc9idtgiyxdw1, wkumc9iddufozmt7); fapc0tnbybnagt8k(&ayfnwr1v, &yq6lorbx, &bvsquk3z, g9fvdrbw, wkumc9idosiz4fxy, rbne6ouj, &pqneb2ra, &h2dpsbkr, wy1vqfzu, ldk, kvowz9ht, lqsahu0r, wkumc9idtgiyxdw1, wkumc9iddufozmt7); yq6lorbx = dqlr5bse - 4 + 2; fpdlcqk9btwy = wkumc9idbtwy + (yq6lorbx-1) * *wy1vqfzu; for (urohxe6t = 1; urohxe6t <= *wy1vqfzu; urohxe6t++) { *fpdlcqk9btwy += tlgduey8[ayfnwr1v-1 + (urohxe6t-1) * *lqsahu0r] * g9fvdrbw[1]; fpdlcqk9btwy++; } fapc0tnbybnagt8k(&ayfnwr1v, &yq6lorbx, &yu6izdrc, g9fvdrbw, wkumc9idosiz4fxy, rbne6ouj, &qhzja4ny, &qhzja4ny, wy1vqfzu, ldk, kvowz9ht, lqsahu0r, wkumc9idtgiyxdw1, wkumc9iddufozmt7); fapc0tnbybnagt8k(&ayfnwr1v, &yq6lorbx, &pqneb2ra, g9fvdrbw, wkumc9idosiz4fxy, rbne6ouj, &qhzja4ny, &bvsquk3z, wy1vqfzu, ldk, kvowz9ht, lqsahu0r, wkumc9idtgiyxdw1, wkumc9iddufozmt7); fapc0tnbybnagt8k(&ayfnwr1v, &yq6lorbx, &qhzja4ny, g9fvdrbw, wkumc9idosiz4fxy, rbne6ouj, &qhzja4ny, &h2dpsbkr, wy1vqfzu, ldk, kvowz9ht, lqsahu0r, wkumc9idtgiyxdw1, wkumc9iddufozmt7); yq6lorbx = dqlr5bse - 4 + 3; fpdlcqk9btwy = wkumc9idbtwy + (yq6lorbx-1) * *wy1vqfzu; for (urohxe6t = 1; urohxe6t <= *wy1vqfzu; urohxe6t++) { *fpdlcqk9btwy += tlgduey8[ayfnwr1v-1 + (urohxe6t-1) * *lqsahu0r] * g9fvdrbw[2]; fpdlcqk9btwy++; } fapc0tnbybnagt8k(&ayfnwr1v, &yq6lorbx, &yu6izdrc, g9fvdrbw, wkumc9idosiz4fxy, rbne6ouj, &bvsquk3z, &bvsquk3z, wy1vqfzu, ldk, kvowz9ht, lqsahu0r, wkumc9idtgiyxdw1, wkumc9iddufozmt7); fapc0tnbybnagt8k(&ayfnwr1v, &yq6lorbx, &pqneb2ra, g9fvdrbw, wkumc9idosiz4fxy, rbne6ouj, &bvsquk3z, &h2dpsbkr, wy1vqfzu, ldk, kvowz9ht, lqsahu0r, wkumc9idtgiyxdw1, wkumc9iddufozmt7); yq6lorbx = dqlr5bse - 4 + 4; fpdlcqk9btwy = wkumc9idbtwy + (yq6lorbx-1) * *wy1vqfzu; for (urohxe6t = 1; urohxe6t <= *wy1vqfzu; urohxe6t++) { *fpdlcqk9btwy += tlgduey8[ayfnwr1v-1 + (urohxe6t-1) * *lqsahu0r] * g9fvdrbw[3]; fpdlcqk9btwy++; } fapc0tnbybnagt8k(&ayfnwr1v, &yq6lorbx, &yu6izdrc, g9fvdrbw, wkumc9idosiz4fxy, rbne6ouj, &h2dpsbkr, &h2dpsbkr, wy1vqfzu, ldk, kvowz9ht, lqsahu0r, wkumc9idtgiyxdw1, wkumc9iddufozmt7); } fapc0tnbtfeswo7c(wkumc9idosiz4fxy, acpios9q, wy1vqfzu, ldk, lamvec, xecbg0pf, z4grbpiq, d7glzhbj, v2eydbxs); arm0lkbg1 = *acpios9q * *wy1vqfzu; arm0lkbg2 = *ldk - 1; 
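  /* Inferred from the call pattern (not an authoritative description):
   * vdpbfa7/vdpbsl7 look like banded Cholesky factor/solve routines in
   * the style of LINPACK's dpbfa/dpbsl, applied to the penalized normal
   * equations assembled above.  The system is held in symmetric band
   * storage with half-bandwidth arm0lkbg2 = *ldk - 1: entry (i, j) with
   * j >= i sits in row *ldk - (j - i) - 1 (0-based) of column j, so the
   * diagonal occupies the last row -- the same convention used by
   * fapc0tnbybnagt8k() and fapc0tnbvsel().  The fragment below is a
   * minimal illustration with hypothetical names and sizes; it is kept
   * inside #if 0 so it is never compiled.
   */
#if 0
  {
    int ii = 2, jj = 4, ldk_demo = 4, ncol_demo = 8;  /* hypothetical */
    double band_demo[4 * 8];                /* ldk_demo * ncol_demo   */
    /* Store A(ii, jj) = 3.5 (1-based, ii <= jj) of a symmetric band
       matrix with half-bandwidth ldk_demo - 1: */
    band_demo[ldk_demo - (jj - ii) - 1 + (jj - 1) * ldk_demo] = 3.5;
  }
#endif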
F77_CALL(vdpbfa7)(wkumc9idosiz4fxy, ldk, &arm0lkbg1, &arm0lkbg2, aalgpft4y, wkumc9idwk0); if (*aalgpft4y) { Rprintf("Error in subroutine vdpbfa7; inside Yee_spline.\n"); Rprintf("*aalgpft4y = %3d\n", *aalgpft4y); Free_fapc0tnbyee_spline(wkumc9idosiz4fxy, wkumc9idenaqpzk9, wkumc9idbtwy, wkumc9idwk0, wkumc9idbk3ymcih, wkumc9idtgiyxdw1, wkumc9iddufozmt7); return; } arm0lkbg1 = *acpios9q * *wy1vqfzu; arm0lkbg2 = *ldk - 1; F77_CALL(vdpbsl7)(wkumc9idosiz4fxy, ldk, &arm0lkbg1, &arm0lkbg2, wkumc9idbtwy, wkumc9idwk0); fpdlcqk9btwy = wkumc9idbtwy; for (ayfnwr1v = 1; ayfnwr1v <= *acpios9q; ayfnwr1v++) { for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { rpyis2kc[ ayfnwr1v-1 + (yq6lorbx-1) * *acpios9q] = *fpdlcqk9btwy++; } } fapc0tnbcn8kzpab(gkdx5jal, sjwyig9t, rpyis2kc, lqsahu0r, acpios9q, wy1vqfzu, t8hwvalr); arm0lkbg1 = *acpios9q * *wy1vqfzu; arm0lkbg2 = *ldk - 1; fapc0tnbvicb2(wkumc9idenaqpzk9, wkumc9idosiz4fxy, wkumc9idwk0, &arm0lkbg2, &arm0lkbg1); fapc0tnbicpd0omv(wkumc9idenaqpzk9, sjwyig9t, gkdx5jal, ui8ysltq, ldk, lqsahu0r, acpios9q, wy1vqfzu, yzoe1rsp, rbne6ouj, ifys6woa, kvowz9ht, ftnjamu2); for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { hdnw2fts[yq6lorbx-1] = -1.0; // Initialize; subtract the linear part } fpdlcqk9ifys6woa = ifys6woa; for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { for (ayfnwr1v = 1; ayfnwr1v <= *lqsahu0r; ayfnwr1v++) { hdnw2fts[yq6lorbx-1] += *fpdlcqk9ifys6woa++; } } Free_fapc0tnbyee_spline(wkumc9idosiz4fxy, wkumc9idenaqpzk9, wkumc9idbtwy, wkumc9idwk0, wkumc9idbk3ymcih, wkumc9idtgiyxdw1, wkumc9iddufozmt7); } void fapc0tnbcn8kzpab(double gkdx5jals[], double sjwyig9t[], double rpyis2kc[], int *lqsahu0r, int *acpios9q, int *wy1vqfzu, double t8hwvalr[]) { int ayfnwr1v, yq6lorbx, yu6izdrc = 0, h2dpsbkr = 4; double *chw8lzty; for (yq6lorbx = 0; yq6lorbx < *wy1vqfzu; yq6lorbx++) { chw8lzty = sjwyig9t; for (ayfnwr1v = 0; ayfnwr1v < *lqsahu0r; ayfnwr1v++) { F77_CALL(wbvalue)(gkdx5jals, rpyis2kc, acpios9q, &h2dpsbkr, chw8lzty++, &yu6izdrc, t8hwvalr++); } rpyis2kc += *acpios9q; } } void Free_fapc0tnbvsuff9(double *wkumc9idwk1a, double *wkumc9idwk1b, double *wkumc9idwk2a, double *wkumc9idwk2b, double *wkumc9ideshvo2ic, double *wkumc9idonxjvw8u, int *wkumc9idtgiyxdw11, int *wkumc9iddufozmt71, int *wkumc9idtgiyxdw12, int *wkumc9iddufozmt72, int *iz2nbfjc) { Free(wkumc9idwk1a); Free(wkumc9idwk1b); Free(wkumc9idwk2a); Free(wkumc9idwk2b); if (! 
*iz2nbfjc) { Free(wkumc9ideshvo2ic); Free(wkumc9idonxjvw8u); } Free(wkumc9idtgiyxdw11); Free(wkumc9iddufozmt71); Free(wkumc9idtgiyxdw12); Free(wkumc9iddufozmt72); } void vsuff9(int *ftnjamu2, int *lqsahu0r, int ezlgm2up[], double sjwyig9t[], double tlgduey8[], double rbne6ouj[], double pygsw6ko[], double pasjmo8g[], double eshvo2ic[], double ueshvo2ic[], double onxjvw8u[], int *dvhw1ulq, int *wy1vqfzu, int *kvowz9ht, int *npjlv3mr, double conmat[], int *kgwmz4ip, int *iz2nbfjc, int *wueshvo2ic, int *npjlv3mreshvo2ic, int *dim2eshvo2ic) { double *qnwamo0e, *qnwamo0e1, *qnwamo0e2; int ayfnwr1v, yq6lorbx, gp1jxzuh, urohxe6t, bpvaqm5z, *ptri; int pqneb2ra = 1; double *wkumc9idwk1a, *wkumc9idwk1b, *wkumc9idwk2a, *wkumc9idwk2b, *wkumc9ideshvo2ic, *wkumc9idonxjvw8u; int *wkumc9idtgiyxdw11, *wkumc9iddufozmt71, *wkumc9idtgiyxdw12, *wkumc9iddufozmt72; int zyojx5hw = *wy1vqfzu * *wy1vqfzu, imk5wjxg = *wy1vqfzu * (*wy1vqfzu + 1) / 2, n2colb = *kgwmz4ip * *kgwmz4ip, n3colb = *kgwmz4ip * (*kgwmz4ip + 1) / 2; wkumc9ideshvo2ic = conmat; wkumc9idonxjvw8u = conmat; wkumc9idwk1a = Calloc(zyojx5hw , double); wkumc9idwk1b = Calloc(*wy1vqfzu , double); wkumc9idwk2a = Calloc(n2colb , double); wkumc9idwk2b = Calloc(*kgwmz4ip , double); wkumc9idtgiyxdw11 = Calloc(imk5wjxg , int); wkumc9iddufozmt71 = Calloc(imk5wjxg , int); wkumc9idtgiyxdw12 = Calloc(n3colb , int); wkumc9iddufozmt72 = Calloc(n3colb , int); if (*iz2nbfjc) { if (*npjlv3mr < *kvowz9ht || *kgwmz4ip != *wy1vqfzu) { Rprintf("Error in fapc0tnbvsuff9: "); Rprintf("must have npjlv3mr >= kvowz9ht & kgwmz4ip = M\n"); Free_fapc0tnbvsuff9(wkumc9idwk1a, wkumc9idwk1b, wkumc9idwk2a, wkumc9idwk2b, wkumc9ideshvo2ic, wkumc9idonxjvw8u, wkumc9idtgiyxdw11, wkumc9iddufozmt71, wkumc9idtgiyxdw12, wkumc9iddufozmt72, iz2nbfjc); *dvhw1ulq = 0; return; } } else { if (*npjlv3mreshvo2ic < n3colb || *dim2eshvo2ic < n3colb) { Rprintf("Error in fapc0tnbvsuff9 with nontrivial constraints:\n"); Rprintf("must have npjlv3mreshvo2ic and dim2eshvo2ic both >= n3colb\n"); Free_fapc0tnbvsuff9(wkumc9idwk1a, wkumc9idwk1b, wkumc9idwk2a, wkumc9idwk2b, wkumc9ideshvo2ic, wkumc9idonxjvw8u, wkumc9idtgiyxdw11, wkumc9iddufozmt71, wkumc9idtgiyxdw12, wkumc9iddufozmt72, iz2nbfjc); *dvhw1ulq = 0; return; } wkumc9ideshvo2ic = Calloc(*lqsahu0r * zyojx5hw , double); wkumc9idonxjvw8u = Calloc(*lqsahu0r * *wy1vqfzu , double); } fvlmz9iyC_qpsedg8x(wkumc9idtgiyxdw11, wkumc9iddufozmt71, wy1vqfzu); fvlmz9iyC_qpsedg8x(wkumc9idtgiyxdw12, wkumc9iddufozmt72, kgwmz4ip); ptri = ezlgm2up; qnwamo0e = sjwyig9t; for (ayfnwr1v = 0; ayfnwr1v < *ftnjamu2; ayfnwr1v++) { pygsw6ko[(*ptri++) - 1] = *qnwamo0e++; } if (*iz2nbfjc) { qnwamo0e = onxjvw8u; for (yq6lorbx = 0; yq6lorbx < *wy1vqfzu; yq6lorbx++) { for (ayfnwr1v = 0; ayfnwr1v < *lqsahu0r; ayfnwr1v++) { *qnwamo0e++ = 0.0e0; } } } if (*iz2nbfjc) { qnwamo0e = eshvo2ic; for (yq6lorbx = 1; yq6lorbx <= *dim2eshvo2ic; yq6lorbx++) { for (ayfnwr1v = 1; ayfnwr1v <= *lqsahu0r; ayfnwr1v++) { *qnwamo0e++ = 0.0e0; } } } for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { for (yq6lorbx = 1; yq6lorbx <= *kvowz9ht; yq6lorbx++) { wkumc9idwk1a[wkumc9idtgiyxdw11[yq6lorbx-1]-1 + (wkumc9iddufozmt71[yq6lorbx-1]-1) * *wy1vqfzu] = wkumc9idwk1a[wkumc9iddufozmt71[yq6lorbx-1]-1 + (wkumc9idtgiyxdw11[yq6lorbx-1]-1) * *wy1vqfzu] = rbne6ouj[ayfnwr1v-1 + (yq6lorbx-1) * *ftnjamu2]; } qnwamo0e1 = (*iz2nbfjc) ? eshvo2ic : wkumc9ideshvo2ic; qnwamo0e2 = (*iz2nbfjc) ? 
onxjvw8u : wkumc9idonxjvw8u; for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { for (gp1jxzuh = 1; gp1jxzuh <= *wy1vqfzu; gp1jxzuh++) { qnwamo0e2[ezlgm2up[ayfnwr1v-1]-1 + (yq6lorbx-1) * *lqsahu0r] += wkumc9idwk1a[yq6lorbx -1 + (gp1jxzuh-1) * *wy1vqfzu] * tlgduey8[ayfnwr1v -1 + (gp1jxzuh-1) * *ftnjamu2]; } } for (yq6lorbx = 1; yq6lorbx <= *kvowz9ht; yq6lorbx++) { qnwamo0e1[ezlgm2up[ayfnwr1v-1]-1 + (yq6lorbx-1) * *lqsahu0r] += rbne6ouj[ayfnwr1v -1 + (yq6lorbx-1) * *ftnjamu2]; } } *dvhw1ulq = 1; if (*iz2nbfjc) { for (ayfnwr1v = 1; ayfnwr1v <= *lqsahu0r; ayfnwr1v++) { for (yq6lorbx = 1; yq6lorbx <= *kvowz9ht; yq6lorbx++) { wkumc9idwk1a[wkumc9idtgiyxdw11[yq6lorbx-1]-1 + (wkumc9iddufozmt71[yq6lorbx-1]-1) * *wy1vqfzu] = wkumc9idwk1a[wkumc9iddufozmt71[yq6lorbx-1]-1 + (wkumc9idtgiyxdw11[yq6lorbx-1]-1) * *wy1vqfzu] = eshvo2ic[ayfnwr1v-1 + (yq6lorbx-1) * *lqsahu0r]; } for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { wkumc9idwk1b[yq6lorbx-1] = onxjvw8u[ayfnwr1v-1 + (yq6lorbx-1) * *lqsahu0r]; } fvlmz9iyjdbomp0g(wkumc9idwk1a, wkumc9idwk1b, wy1vqfzu, dvhw1ulq, &pqneb2ra); if (*dvhw1ulq != 1) { Rprintf("*dvhw1ulq != 1 after fvlmz9iyjdbomp0g in vsuff9.\n"); Free_fapc0tnbvsuff9(wkumc9idwk1a, wkumc9idwk1b, wkumc9idwk2a, wkumc9idwk2b, wkumc9ideshvo2ic, wkumc9idonxjvw8u, wkumc9idtgiyxdw11, wkumc9iddufozmt71, wkumc9idtgiyxdw12, wkumc9iddufozmt72, iz2nbfjc); return; } if (*wueshvo2ic) { for (yq6lorbx = 1; yq6lorbx <= *npjlv3mreshvo2ic; yq6lorbx++) { ueshvo2ic[yq6lorbx-1 + (ayfnwr1v-1) * *npjlv3mreshvo2ic] = wkumc9idwk1a[wkumc9idtgiyxdw11[yq6lorbx-1]-1 + (wkumc9iddufozmt71[yq6lorbx-1]-1) * *wy1vqfzu]; } } for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { pasjmo8g[ayfnwr1v-1 + (yq6lorbx-1) * *lqsahu0r] = wkumc9idwk1b[yq6lorbx-1]; } } } else { qnwamo0e = wkumc9idwk1a; for (yq6lorbx = 1; yq6lorbx <= zyojx5hw; yq6lorbx++) { *qnwamo0e++ = 0.0e0; } for (ayfnwr1v = 1; ayfnwr1v <= *lqsahu0r; ayfnwr1v++) { for (yq6lorbx = 1; yq6lorbx <= *kvowz9ht; yq6lorbx++) { wkumc9idwk1a[wkumc9idtgiyxdw11[yq6lorbx-1]-1 + (wkumc9iddufozmt71[yq6lorbx-1]-1) * *wy1vqfzu] = wkumc9idwk1a[wkumc9iddufozmt71[yq6lorbx-1]-1 + (wkumc9idtgiyxdw11[yq6lorbx-1]-1) * *wy1vqfzu] = wkumc9ideshvo2ic[ayfnwr1v-1 + (yq6lorbx-1) * *lqsahu0r]; } for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { wkumc9idwk1b[yq6lorbx-1] = wkumc9idonxjvw8u[ayfnwr1v-1 + (yq6lorbx-1) * *lqsahu0r]; } for (yq6lorbx = 1; yq6lorbx <= *kgwmz4ip; yq6lorbx++) { for (gp1jxzuh = yq6lorbx; gp1jxzuh <= *kgwmz4ip; gp1jxzuh++) { wkumc9idwk2a[yq6lorbx-1 + (gp1jxzuh-1) * *kgwmz4ip] = 0.0e0; for (urohxe6t = 1; urohxe6t <= *wy1vqfzu; urohxe6t++) { for (bpvaqm5z = 1; bpvaqm5z <= *wy1vqfzu; bpvaqm5z++) { wkumc9idwk2a[yq6lorbx-1 + (gp1jxzuh-1) * *kgwmz4ip] += conmat[urohxe6t-1 + (yq6lorbx-1) * *wy1vqfzu] * wkumc9idwk1a[urohxe6t-1 + (bpvaqm5z-1) * *wy1vqfzu] * conmat[bpvaqm5z-1 + (gp1jxzuh-1) * *wy1vqfzu]; } } } } for (yq6lorbx = 1; yq6lorbx <= *dim2eshvo2ic; yq6lorbx++) { eshvo2ic[ayfnwr1v-1 + (yq6lorbx-1) * *lqsahu0r] = wkumc9idwk2a[wkumc9idtgiyxdw12[yq6lorbx-1]-1 + (wkumc9iddufozmt72[yq6lorbx-1]-1) * *kgwmz4ip]; } for (yq6lorbx = 1; yq6lorbx <= *kgwmz4ip; yq6lorbx++) { wkumc9idwk2b[yq6lorbx-1] = 0.0e0; for (urohxe6t = 1; urohxe6t <= *wy1vqfzu; urohxe6t++) { wkumc9idwk2b[yq6lorbx-1] += conmat[urohxe6t-1 + (yq6lorbx-1) * *wy1vqfzu] * wkumc9idwk1b[urohxe6t-1]; } } for (yq6lorbx = 1; yq6lorbx <= *kgwmz4ip; yq6lorbx++) { onxjvw8u[ayfnwr1v-1 + (yq6lorbx-1) * *lqsahu0r] = wkumc9idwk2b[yq6lorbx-1]; } fvlmz9iyjdbomp0g(wkumc9idwk2a, wkumc9idwk2b, kgwmz4ip, dvhw1ulq, &pqneb2ra); if 
(*dvhw1ulq != 1) {
        Rprintf("*dvhw1ulq != 1 in vchol-vsuff9; something has gone wrong.\n");
        Free_fapc0tnbvsuff9(wkumc9idwk1a, wkumc9idwk1b,
                            wkumc9idwk2a, wkumc9idwk2b,
                            wkumc9ideshvo2ic, wkumc9idonxjvw8u,
                            wkumc9idtgiyxdw11, wkumc9iddufozmt71,
                            wkumc9idtgiyxdw12, wkumc9iddufozmt72,
                            iz2nbfjc);
        return;
      }
      if (*wueshvo2ic) {
        for (yq6lorbx = 1; yq6lorbx <= *npjlv3mreshvo2ic; yq6lorbx++) {
          ueshvo2ic[yq6lorbx-1 + (ayfnwr1v-1) * *npjlv3mreshvo2ic] =
            wkumc9idwk2a[wkumc9idtgiyxdw12[yq6lorbx-1]-1 +
                        (wkumc9iddufozmt72[yq6lorbx-1]-1) * *kgwmz4ip];
        }
      }
      for (yq6lorbx = 1; yq6lorbx <= *kgwmz4ip; yq6lorbx++) {
        pasjmo8g[ayfnwr1v-1 + (yq6lorbx-1) * *lqsahu0r] =
          wkumc9idwk2b[yq6lorbx-1];
      }
    }
  }
  Free_fapc0tnbvsuff9(wkumc9idwk1a, wkumc9idwk1b,
                      wkumc9idwk2a, wkumc9idwk2b,
                      wkumc9ideshvo2ic, wkumc9idonxjvw8u,
                      wkumc9idtgiyxdw11, wkumc9iddufozmt71,
                      wkumc9idtgiyxdw12, wkumc9iddufozmt72,
                      iz2nbfjc);
}


/* Apparently computes, at each distinct abscissa, the 4 x 4 block
   quadratic form built from the B-spline basis values g9fvdrbw and the
   banded inverse enaqpzk9, optionally copying its diagonal into
   grmuyvx9 and folding it with the weights into ifys6woa (inference
   from the call structure, not an authoritative description). */
void fapc0tnbicpd0omv(double enaqpzk9[], double sjwyig9t[],
                      double gkdx5jals[], double grmuyvx9[],
                      int *ldk, int *lqsahu0r, int *acpios9q,
                      int *wy1vqfzu, int *jzwsy6tp,
                      double rbne6ouj[], double ifys6woa[],
                      int *kvowz9ht, int *ftnjamu2) {
  int ayfnwr1v, yq6lorbx, gp1jxzuh, urohxe6t, bpvaqm5z, dqlr5bse, pqzfxw4i;
  double ms0qypiw[16], g9fvdrbw[4], qaltf0nz = 0.10e-9;
  int arm0lkbg1, arm0lkbg4, *ptri1, *ptri2;
  double tmp_var4, tmp_var5, *qnwamo0e;
  double *wkumc9idwrk, *wkumc9idbmb;
  int *wkumc9idtgiyxdw1_, *wkumc9iddufozmt7_,
      imk5wjxg = *wy1vqfzu * (*wy1vqfzu + 1) / 2,
      zyojx5hw = *wy1vqfzu * *wy1vqfzu;

  wkumc9idtgiyxdw1_ = Calloc(imk5wjxg, int);
  wkumc9iddufozmt7_ = Calloc(imk5wjxg, int);
  fvlmz9iyC_qpsedg8x(wkumc9idtgiyxdw1_, wkumc9iddufozmt7_, wy1vqfzu);

  /* Convert the 1-based (row, col) index pairs to 0-based. */
  ptri1 = wkumc9idtgiyxdw1_;
  ptri2 = wkumc9iddufozmt7_;
  for (ayfnwr1v = 0; ayfnwr1v < imk5wjxg; ayfnwr1v++) {
    (*ptri1++)--;
    (*ptri2++)--;
  }

  wkumc9idwrk = Calloc(zyojx5hw, double);
  wkumc9idbmb = Calloc(zyojx5hw, double);

  if (*jzwsy6tp) {
    qnwamo0e = grmuyvx9;
    for (gp1jxzuh = 1; gp1jxzuh <= *wy1vqfzu; gp1jxzuh++) {
      for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) {
        *qnwamo0e++ = 0.0e0;
      }
    }
  }

  for (ayfnwr1v = 1; ayfnwr1v <= *lqsahu0r; ayfnwr1v++) {
    qnwamo0e = wkumc9idbmb;
    for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) {
      for (gp1jxzuh = 1; gp1jxzuh <= *wy1vqfzu; gp1jxzuh++) {
        *qnwamo0e++ = 0.0e0;
      }
    }

    arm0lkbg1 = *acpios9q + 1;
    F77_CALL(vinterv)(gkdx5jals, &arm0lkbg1, sjwyig9t + ayfnwr1v-1,
                      &dqlr5bse, &pqzfxw4i);
    if (pqzfxw4i == 1) {
      if (sjwyig9t[ayfnwr1v-1] <= (gkdx5jals[dqlr5bse-1] + qaltf0nz)) {
        dqlr5bse--;
      } else {
        Rprintf("pqzfxw4i != 1 after vinterv called in fapc0tnbicpd0omv\n");
        Free(wkumc9idtgiyxdw1_);  Free(wkumc9iddufozmt7_);
        Free(wkumc9idwrk);        Free(wkumc9idbmb);
        return;
      }
    }
    arm0lkbg1 = 1;
    arm0lkbg4 = 4;
    F77_CALL(vbsplvd)(gkdx5jals, &arm0lkbg4, sjwyig9t + ayfnwr1v-1,
                      &dqlr5bse, ms0qypiw, g9fvdrbw, &arm0lkbg1);

    yq6lorbx = dqlr5bse - 4 + 1;
    for (urohxe6t = yq6lorbx; urohxe6t <= (yq6lorbx + 3); urohxe6t++) {
      fapc0tnbvsel(&urohxe6t, &urohxe6t, wy1vqfzu, ldk,
                   enaqpzk9, wkumc9idwrk);
      tmp_var4 = pow(g9fvdrbw[urohxe6t-yq6lorbx], (double) 2.0);
      fapc0tnbo0xlszqr(wy1vqfzu, &tmp_var4, wkumc9idwrk, wkumc9idbmb);
    }
    for (urohxe6t = yq6lorbx; urohxe6t <= (yq6lorbx+3); urohxe6t++) {
      for (bpvaqm5z = urohxe6t+1; bpvaqm5z <= (yq6lorbx+3); bpvaqm5z++) {
        fapc0tnbvsel(&urohxe6t, &bpvaqm5z, wy1vqfzu, ldk,
                     enaqpzk9, wkumc9idwrk);
        tmp_var5 = 2.0 * g9fvdrbw[urohxe6t-yq6lorbx] *
                         g9fvdrbw[bpvaqm5z-yq6lorbx];
        fapc0tnbo0xlszqr(wy1vqfzu, &tmp_var5, wkumc9idwrk, wkumc9idbmb);
      }
    }
    if (*jzwsy6tp) {
      for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) {
        grmuyvx9[ayfnwr1v-1 + (yq6lorbx-1) * *ftnjamu2] = wkumc9idbmb[yq6lorbx-1 +
(yq6lorbx-1) * *wy1vqfzu]; } } fapc0tnbovjnsmt2(wkumc9idbmb, rbne6ouj, ifys6woa, wy1vqfzu, lqsahu0r, kvowz9ht, &ayfnwr1v, wkumc9idtgiyxdw1_, wkumc9iddufozmt7_); } Free(wkumc9idtgiyxdw1_); Free(wkumc9iddufozmt7_); Free(wkumc9idwrk); Free(wkumc9idbmb); } void fapc0tnbo0xlszqr(int *wy1vqfzu, double *g9fvdrbw, double *quc6khaf, double *bmb) { int yq6lorbx, gp1jxzuh; double *qnwamo0e; qnwamo0e = quc6khaf; for (yq6lorbx = 0; yq6lorbx < *wy1vqfzu; yq6lorbx++) { for (gp1jxzuh = 0; gp1jxzuh < *wy1vqfzu; gp1jxzuh++) { *quc6khaf *= *g9fvdrbw; quc6khaf++; } } quc6khaf = qnwamo0e; for (yq6lorbx = 0; yq6lorbx < *wy1vqfzu; yq6lorbx++) { for (gp1jxzuh = 0; gp1jxzuh < *wy1vqfzu; gp1jxzuh++) { *bmb += *quc6khaf++; bmb++; } } } void fapc0tnbvsel(int *nurohxe6t, int *nbpvaqm5z, int *wy1vqfzu, int *ldk, double minv[], double quc6khaf[]) { int ayfnwr1v, yq6lorbx, biuvowq2, nbj8tdsk; double *qnwamo0e; qnwamo0e = quc6khaf; for (ayfnwr1v = 1; ayfnwr1v <= *wy1vqfzu; ayfnwr1v++) { for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { *qnwamo0e++ = 0.0; } } if (*nurohxe6t != *nbpvaqm5z) { for (ayfnwr1v = 1; ayfnwr1v <= *wy1vqfzu; ayfnwr1v++) { biuvowq2 = (*nurohxe6t - 1) * *wy1vqfzu + ayfnwr1v; for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { nbj8tdsk = (*nbpvaqm5z - 1) * *wy1vqfzu + yq6lorbx; quc6khaf[ayfnwr1v-1 + (yq6lorbx-1) * *wy1vqfzu] = minv[*ldk - (nbj8tdsk-biuvowq2)-1 + (nbj8tdsk-1) * *ldk]; } } } else { for (ayfnwr1v = 1; ayfnwr1v <= *wy1vqfzu; ayfnwr1v++) { biuvowq2 = (*nurohxe6t - 1) * *wy1vqfzu + ayfnwr1v; for (yq6lorbx = ayfnwr1v; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { nbj8tdsk = (*nbpvaqm5z - 1) * *wy1vqfzu + yq6lorbx; quc6khaf[ayfnwr1v-1 + (yq6lorbx-1) * *wy1vqfzu] = minv[*ldk - (nbj8tdsk-biuvowq2)-1 + (nbj8tdsk-1) * *ldk]; } } for (ayfnwr1v = 1; ayfnwr1v <= *wy1vqfzu; ayfnwr1v++) { for (yq6lorbx = ayfnwr1v+1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { quc6khaf[yq6lorbx-1 + (ayfnwr1v-1) * *wy1vqfzu] = quc6khaf[ayfnwr1v-1 + (yq6lorbx-1) * *wy1vqfzu]; } } } } void fapc0tnbovjnsmt2(double bmb[], double rbne6ouj[], double ifys6woa[], int *wy1vqfzu, int *lqsahu0r, int *kvowz9ht, int *iii, int tgiyxdw1_[], int dufozmt7_[]) { int yq6lorbx, gp1jxzuh, urohxe6t, bpvaqm5z; double q6zdcwxk; int zyojx5hw = *wy1vqfzu * *wy1vqfzu; double *wkumc9idwrk; wkumc9idwrk = Calloc(zyojx5hw, double); for (bpvaqm5z = 1; bpvaqm5z <= *wy1vqfzu; bpvaqm5z++) { for (urohxe6t = 1; urohxe6t <= *kvowz9ht; urohxe6t++) { yq6lorbx = tgiyxdw1_[urohxe6t-1] + (dufozmt7_[urohxe6t-1] ) * *wy1vqfzu; gp1jxzuh = dufozmt7_[urohxe6t-1] + (tgiyxdw1_[urohxe6t-1] ) * *wy1vqfzu; wkumc9idwrk[yq6lorbx] = wkumc9idwrk[gp1jxzuh] = rbne6ouj[*iii-1 + (urohxe6t-1) * *lqsahu0r]; } q6zdcwxk = 0.0e0; for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { q6zdcwxk += bmb[bpvaqm5z-1 + (yq6lorbx-1) * *wy1vqfzu] * wkumc9idwrk[yq6lorbx-1 + (bpvaqm5z-1) * *wy1vqfzu]; } ifys6woa[*iii-1 + (bpvaqm5z-1) * *lqsahu0r] = q6zdcwxk; } Free(wkumc9idwrk); } void fapc0tnbvicb2(double enaqpzk9[], double wpuarq2m[], double Dvector[], int *wy1vqfzu, int *f8yswcat) { int ayfnwr1v, gp1jxzuh, urohxe6t, uplim, sedf7mxb, hofjnx2e, kij0gwer; int Mplus1 = *wy1vqfzu + 1; int Mp1Mp1 = Mplus1 * Mplus1; double *wkumc9iduu; wkumc9iduu = Calloc(Mp1Mp1, double); enaqpzk9[*wy1vqfzu + (*f8yswcat-1) * Mplus1] = 1.0e0 / Dvector[*f8yswcat-1]; hofjnx2e = *wy1vqfzu + 1; sedf7mxb = *f8yswcat + 1 - hofjnx2e; for (kij0gwer = sedf7mxb; kij0gwer <= *f8yswcat; kij0gwer++) { for (ayfnwr1v = 1; ayfnwr1v <= hofjnx2e; ayfnwr1v++) { wkumc9iduu[ayfnwr1v-1 + (kij0gwer-sedf7mxb) * Mplus1] = wpuarq2m[ayfnwr1v-1 + 
(kij0gwer-1 ) * Mplus1]; } } for (ayfnwr1v = *f8yswcat-1; ayfnwr1v >= 1; ayfnwr1v--) { uplim = *wy1vqfzu < (*f8yswcat - ayfnwr1v) ? *wy1vqfzu : *f8yswcat - ayfnwr1v; for (urohxe6t = 1; urohxe6t <= uplim; urohxe6t++) { enaqpzk9[-urohxe6t+*wy1vqfzu + (ayfnwr1v+urohxe6t-1) * Mplus1] = 0.0e0; for (gp1jxzuh = 1; gp1jxzuh <= urohxe6t; gp1jxzuh++) { enaqpzk9[-urohxe6t + *wy1vqfzu + (ayfnwr1v+urohxe6t-1 ) * Mplus1] -= wkumc9iduu[-gp1jxzuh + *wy1vqfzu + (ayfnwr1v+gp1jxzuh - sedf7mxb) * Mplus1] * enaqpzk9[gp1jxzuh-urohxe6t + *wy1vqfzu + (ayfnwr1v+urohxe6t-1 ) * Mplus1]; } for ( ; gp1jxzuh <= uplim; gp1jxzuh++) { enaqpzk9[-urohxe6t + *wy1vqfzu + (ayfnwr1v+urohxe6t-1 ) * Mplus1] -= wkumc9iduu[-gp1jxzuh + *wy1vqfzu + (ayfnwr1v+gp1jxzuh - sedf7mxb) * Mplus1] * enaqpzk9[urohxe6t-gp1jxzuh + *wy1vqfzu + (ayfnwr1v+gp1jxzuh-1 ) * Mplus1]; } } enaqpzk9[*wy1vqfzu + (ayfnwr1v-1) * Mplus1] = 1.0e0 / Dvector[ayfnwr1v-1]; for (urohxe6t = 1; urohxe6t <= uplim; urohxe6t++) { enaqpzk9[ *wy1vqfzu + (ayfnwr1v - 1 ) * Mplus1] -= wkumc9iduu[-urohxe6t + *wy1vqfzu + (ayfnwr1v+urohxe6t - sedf7mxb) * Mplus1] * enaqpzk9[-urohxe6t + *wy1vqfzu + (ayfnwr1v+urohxe6t - 1 ) * Mplus1]; } if (ayfnwr1v == sedf7mxb) { if (--sedf7mxb < 1) { sedf7mxb = 1; } else { for (kij0gwer = hofjnx2e - 1; kij0gwer >= 1; kij0gwer--) { for (gp1jxzuh = 1; gp1jxzuh <= hofjnx2e; gp1jxzuh++) { wkumc9iduu[gp1jxzuh-1 + kij0gwer * Mplus1] = wkumc9iduu[gp1jxzuh-1 + (kij0gwer-1) * Mplus1]; } } for (gp1jxzuh = 1; gp1jxzuh <= hofjnx2e; gp1jxzuh++) { wkumc9iduu[gp1jxzuh-1] = wpuarq2m[gp1jxzuh-1 + (sedf7mxb-1) * Mplus1]; } } } } Free(wkumc9iduu); } void Free_fapc0tnbewg7qruh(double *wkumc9idWrk1, int *wkumc9idges1xpkr, double *wkumc9idbeta, double *wkumc9idfasrkub3, double *wkumc9idsout, double *wkumc9idr0oydcxb, double *wkumc9idub4xioar, double *wkumc9ideffect, double *wkumc9idueshvo2ic, double *wkumc9ids0, double *wkumc9idpygsw6ko, double *wkumc9idpasjmo8g, double *wkumc9ideshvo2ic, double *wkumc9idonxjvw8u, double *wkumc9idwk4) { Free(wkumc9idWrk1); Free(wkumc9idges1xpkr); Free(wkumc9idbeta); Free(wkumc9idfasrkub3); Free(wkumc9idsout); Free(wkumc9idr0oydcxb); Free(wkumc9idub4xioar); Free(wkumc9ideffect); Free(wkumc9idueshvo2ic); Free(wkumc9ids0); Free(wkumc9idpygsw6ko); Free(wkumc9idpasjmo8g); Free(wkumc9ideshvo2ic); Free(wkumc9idonxjvw8u); Free(wkumc9idwk4); } void fapc0tnbewg7qruh(double ci1oyxas[], double tlgduey8[], double rbne6ouj[], int *ftnjamu2, int *wy1vqfzu, int ezlgm2up[], int *lqsahu0r, double wbkq9zyi[], double lamvec[], double hdnw2fts[], double kispwgx3[], double ui8ysltq[], int *kvowz9ht, int *fbd5yktj, int *ldk, int *aalgpft4y, int *yzoe1rsp, double rpyis2kc[], double gkdx5jals[], double ifys6woa[], double conmat[], double xecbg0pf[], double z4grbpiq[], double d7glzhbj[], double v2eydbxs[], double *tt2, int *cvnjhg2u, int *acpios9q, int *iz2nbfjc, int *kgwmz4ip, int *npjlv3mr, int itdcb8ilk[], double tdcb8ilk[]) { int ayfnwr1v, yq6lorbx, gp1jxzuh, qemj9asg, dvhw1ulq, infoqr_svdbx3tk, rutyk8mg = *lqsahu0r * *kgwmz4ip; int pqneb2ra = 1, ybnsqgo9 = 101; int xjc4ywlh = 2 * *kgwmz4ip, kgwmz4ip2 = 2 * *kgwmz4ip; int npjlv3mreshvo2ic = (*iz2nbfjc == 1) ? *npjlv3mr : *kgwmz4ip * (*kgwmz4ip + 1) / 2, dim2eshvo2ic = (*iz2nbfjc == 1) ? 
*kvowz9ht : *kgwmz4ip * (*kgwmz4ip + 1) / 2; double xmin, xrange, *fpdlcqk9ui8ysltq, *fpdlcqk9hdnw2fts, *fpdlcqk9ub4xioar, *fpdlcqk9ifys6woa, *fpdlcqk9pygsw6ko, dtad5vhsu, do3jyipdf, dpq0hfucn, pvofyg8z = 1.0e-7; int *wkumc9idges1xpkr, maxrutyk8mgxjc4ywlh; double *wkumc9idWrk1, *wkumc9idwk4; double *wkumc9idbeta, *wkumc9idfasrkub3, *wkumc9idsout, *wkumc9idr0oydcxb, *wkumc9idub4xioar, *wkumc9ideffect, *wkumc9idueshvo2ic, *wkumc9ids0; double *wkumc9idpygsw6ko, *wkumc9idpasjmo8g, *wkumc9ideshvo2ic, *wkumc9idonxjvw8u; maxrutyk8mgxjc4ywlh = (rutyk8mg > xjc4ywlh) ? rutyk8mg : xjc4ywlh; wkumc9idWrk1 = Calloc(maxrutyk8mgxjc4ywlh , double); wkumc9idwk4 = Calloc(rutyk8mg * xjc4ywlh , double); wkumc9idges1xpkr = Calloc(kgwmz4ip2 , int); wkumc9idbeta = Calloc(kgwmz4ip2 , double); wkumc9idfasrkub3 = Calloc(kgwmz4ip2 , double); wkumc9idsout = Calloc(*lqsahu0r * *kgwmz4ip , double); wkumc9idr0oydcxb = Calloc(*kgwmz4ip * *lqsahu0r , double); wkumc9idub4xioar = Calloc(*kgwmz4ip * *lqsahu0r , double); wkumc9ideffect = Calloc(*lqsahu0r * *kgwmz4ip , double); wkumc9idueshvo2ic = Calloc(npjlv3mreshvo2ic * *lqsahu0r , double); wkumc9ids0 = Calloc(kgwmz4ip2 * kgwmz4ip2 * 2 , double); wkumc9idpygsw6ko = Calloc(*lqsahu0r , double); wkumc9idpasjmo8g = Calloc(*lqsahu0r * *kgwmz4ip , double); wkumc9idonxjvw8u = Calloc(*lqsahu0r * *kgwmz4ip , double); wkumc9ideshvo2ic = Calloc(*lqsahu0r * dim2eshvo2ic , double); vsuff9(ftnjamu2, lqsahu0r, ezlgm2up, ci1oyxas, tlgduey8, rbne6ouj, wkumc9idpygsw6ko, wkumc9idpasjmo8g, wkumc9ideshvo2ic, wkumc9idueshvo2ic, wkumc9idonxjvw8u, &dvhw1ulq, wy1vqfzu, kvowz9ht, npjlv3mr, conmat, kgwmz4ip, iz2nbfjc, &pqneb2ra, &npjlv3mreshvo2ic, &dim2eshvo2ic); if (dvhw1ulq != 1) { Rprintf("Error in fapc0tnbewg7qruh after calling vsuff9.\n"); Free_fapc0tnbewg7qruh(wkumc9idWrk1, wkumc9idges1xpkr, wkumc9idbeta, wkumc9idfasrkub3, wkumc9idsout, wkumc9idr0oydcxb, wkumc9idub4xioar, wkumc9ideffect, wkumc9idueshvo2ic, wkumc9ids0, wkumc9idpygsw6ko, wkumc9idpasjmo8g, wkumc9ideshvo2ic, wkumc9idonxjvw8u, wkumc9idwk4); return; } xmin = wkumc9idpygsw6ko[0]; xrange = wkumc9idpygsw6ko[*lqsahu0r-1] - wkumc9idpygsw6ko[0]; for (ayfnwr1v = 1; ayfnwr1v <= *lqsahu0r; ayfnwr1v++) { wkumc9idpygsw6ko[ayfnwr1v-1] = (wkumc9idpygsw6ko[ayfnwr1v-1] - xmin) / xrange; } *ldk = 4 * *kgwmz4ip; *ldk = 3 * *kgwmz4ip + 1; *fbd5yktj = 0; Yee_spline(wkumc9idpygsw6ko, wkumc9idonxjvw8u, wkumc9ideshvo2ic, gkdx5jals, lqsahu0r, acpios9q, ldk, kgwmz4ip, &dim2eshvo2ic, wbkq9zyi, lamvec, aalgpft4y, wkumc9idsout, rpyis2kc, ui8ysltq, ifys6woa, hdnw2fts, yzoe1rsp, fbd5yktj, ftnjamu2, xecbg0pf, z4grbpiq, d7glzhbj, v2eydbxs, tt2, cvnjhg2u, itdcb8ilk, tdcb8ilk); for (yq6lorbx = 1; yq6lorbx <= *kgwmz4ip; yq6lorbx++) { } if (1) { // Do not execute this code block fpdlcqk9hdnw2fts = hdnw2fts; fpdlcqk9ifys6woa = ifys6woa; for (yq6lorbx = 1; yq6lorbx <= *kgwmz4ip; yq6lorbx++) { *fpdlcqk9hdnw2fts = 0.0e0; *fpdlcqk9hdnw2fts = -1.0e0; for (ayfnwr1v = 1; ayfnwr1v <= *lqsahu0r; ayfnwr1v++) { *fpdlcqk9hdnw2fts += *fpdlcqk9ifys6woa++; } fpdlcqk9hdnw2fts++; } } if (*kgwmz4ip >= 1) { fapc0tnbx6kanjdh(wkumc9idpygsw6ko, wkumc9idwk4, lqsahu0r, kgwmz4ip); rutyk8mg = *lqsahu0r * *kgwmz4ip; fvlmz9iyC_mxrbkut0(wkumc9idueshvo2ic, wkumc9idwk4, kgwmz4ip, &xjc4ywlh, lqsahu0r, &npjlv3mreshvo2ic, &rutyk8mg); for (gp1jxzuh = 1; gp1jxzuh <= xjc4ywlh; gp1jxzuh++) { wkumc9idges1xpkr[gp1jxzuh-1] = gp1jxzuh; } F77_CALL(vqrdca)(wkumc9idwk4, &rutyk8mg, &rutyk8mg, &xjc4ywlh, wkumc9idfasrkub3, wkumc9idges1xpkr, wkumc9idWrk1, &qemj9asg, &pvofyg8z); fvlmz9iyC_nudh6szq(wkumc9idueshvo2ic, 
wkumc9idsout, wkumc9idr0oydcxb, &npjlv3mreshvo2ic, lqsahu0r, kgwmz4ip); F77_CALL(vdqrsl)(wkumc9idwk4, &rutyk8mg, &rutyk8mg, &qemj9asg, wkumc9idfasrkub3, wkumc9idr0oydcxb, wkumc9idWrk1, wkumc9ideffect, wkumc9idbeta, wkumc9idWrk1, wkumc9idub4xioar, &ybnsqgo9, &infoqr_svdbx3tk); fvlmz9iyC_vbks(wkumc9idueshvo2ic, wkumc9idub4xioar, kgwmz4ip, lqsahu0r, &npjlv3mreshvo2ic); if (*yzoe1rsp) { fvlmz9iyC_lkhnw9yq(wkumc9idwk4, wkumc9ids0, &rutyk8mg, &xjc4ywlh, &dvhw1ulq); if (dvhw1ulq != 1) { Rprintf("Error in fapc0tnbewg7qruh calling fvlmz9iyC_lkhnw9yq.\n"); Free_fapc0tnbewg7qruh(wkumc9idWrk1, wkumc9idges1xpkr, wkumc9idbeta, wkumc9idfasrkub3, wkumc9idsout, wkumc9idr0oydcxb, wkumc9idub4xioar, wkumc9ideffect, wkumc9idueshvo2ic, wkumc9ids0, wkumc9idpygsw6ko, wkumc9idpasjmo8g, wkumc9ideshvo2ic, wkumc9idonxjvw8u, wkumc9idwk4); return; } for (yq6lorbx = 1; yq6lorbx <= *kgwmz4ip; yq6lorbx++) { dtad5vhsu = wkumc9ids0[yq6lorbx-1 + (yq6lorbx-1 ) * kgwmz4ip2]; do3jyipdf = wkumc9ids0[yq6lorbx-1 + (yq6lorbx-1 + *kgwmz4ip) * kgwmz4ip2]; dpq0hfucn = wkumc9ids0[yq6lorbx-1 + *kgwmz4ip + (yq6lorbx-1 + *kgwmz4ip) * kgwmz4ip2]; fpdlcqk9ui8ysltq = ui8ysltq + (yq6lorbx-1) * *ftnjamu2; fpdlcqk9pygsw6ko = wkumc9idpygsw6ko; for (ayfnwr1v = 1; ayfnwr1v <= *lqsahu0r; ayfnwr1v++) { *fpdlcqk9ui8ysltq -= dtad5vhsu + *fpdlcqk9pygsw6ko * (2.0 * do3jyipdf + *fpdlcqk9pygsw6ko * dpq0hfucn); fpdlcqk9ui8ysltq++; fpdlcqk9pygsw6ko++; } } } } else { fapc0tnbdsrt0gem(lqsahu0r, wkumc9idpygsw6ko, wkumc9ideshvo2ic, wkumc9idsout, wkumc9idub4xioar, ui8ysltq, yzoe1rsp); } fpdlcqk9ub4xioar = wkumc9idub4xioar; for (ayfnwr1v = 1; ayfnwr1v <= *lqsahu0r; ayfnwr1v++) { for (yq6lorbx = 1; yq6lorbx <= *kgwmz4ip; yq6lorbx++) { wkumc9idsout[ayfnwr1v-1 + (yq6lorbx-1) * *lqsahu0r] -= *fpdlcqk9ub4xioar++; } } for (yq6lorbx = 1; yq6lorbx <= *kgwmz4ip; yq6lorbx++) { fapc0tnbshm8ynte(ftnjamu2, /* lqsahu0r, */ ezlgm2up, wkumc9idsout + (yq6lorbx-1) * *lqsahu0r, kispwgx3 + (yq6lorbx-1) * *ftnjamu2); } Free_fapc0tnbewg7qruh(wkumc9idWrk1, wkumc9idges1xpkr, wkumc9idbeta, wkumc9idfasrkub3, wkumc9idsout, wkumc9idr0oydcxb, wkumc9idub4xioar, wkumc9ideffect, wkumc9idueshvo2ic, wkumc9ids0, wkumc9idpygsw6ko, wkumc9idpasjmo8g, wkumc9ideshvo2ic, wkumc9idonxjvw8u, wkumc9idwk4); } void Yee_vbfa(int psdvgce3[], double *fjcasv7g, double he7mqnvy[], double tlgduey8[], double rbne6ouj[], double hdnw2fts[], double lamvec[], double wbkq9zyi[], int ezlgm2up[], int lqsahu0r[], int which[], double kispwgx3[], double m0ibglfx[], double zshtfg8c[], double ui8ysltq[], double vc6hatuj[], double fasrkub3[], int ges1xpkr[], double wpuarq2m[], double hjm2ktyr[], int ulm3dvzg[], int hnpt1zym[], int iz2nbfjc[], double ifys6woa[], double rpyis2kc[], double gkdx5jals[], int nbzjkpi3[], int lindex[], int acpios9q[], int jwbkl9fp[]) { double *ghdetj8v, *zpcqv3uj; int nhja0izq, rutyk8mg, xjc4ywlh, lyzoe1rsp, ueb8hndv, gtrlbz3e, algpft4y = 0, qemj9asg, npjlv3mr, kvowz9ht, ldk, fbd5yktj = 0; int *ftnjamu2, *wy1vqfzu; int itdcb8ilk[1]; double tdcb8ilk[4]; itdcb8ilk[0] = psdvgce3[15]; /* contr.sp$c5aesxku in s.vam() */ tdcb8ilk[0] = fjcasv7g[2]; /* contr.sp$low in s.vam() */ tdcb8ilk[1] = fjcasv7g[3]; /* contr.sp$high in s.vam() */ tdcb8ilk[2] = fjcasv7g[4]; /* contr.sp$tol in s.vam() */ tdcb8ilk[3] = fjcasv7g[5]; /* contr.sp$eps in s.vam() */ wy1vqfzu = psdvgce3 + 7; ftnjamu2 = psdvgce3; nhja0izq = psdvgce3[2]; lyzoe1rsp = psdvgce3[3]; gtrlbz3e = psdvgce3[5]; qemj9asg = psdvgce3[6]; rutyk8mg = psdvgce3[8]; xjc4ywlh = psdvgce3[9]; kvowz9ht = psdvgce3[11]; npjlv3mr = psdvgce3[12]; ldk = psdvgce3[14]; 
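/* Layout of the control vectors, inferred from the assignments here and
   from the write-backs after the fapc0tnbvbfa1() call (an assumption,
   not a spec): psdvgce3[] carries the integer controls from s.vam() --
   [0] n, [2] number of smooth terms, [3] se flag, [5] maximum number of
   backfitting iterations, [6] rank from the QR step, [7] M (number of
   linear predictors), [8] and [9] the row and column dimensions of the
   big least-squares matrix, [11] dimw, [12] dimu, [14] ldk,
   [15] contr.sp$c5aesxku -- while fjcasv7g[] carries the double
   controls unpacked just below.  Slots 4, 6, 13 and 16 of psdvgce3 are
   overwritten on exit with the iteration count, rank, error flag and
   factorization info code respectively. */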
zpcqv3uj = fjcasv7g + 0; /* bf.qaltf0nz */ ghdetj8v = fjcasv7g + 1; /* ghdetj8v */ fapc0tnbvbfa1(ftnjamu2, wy1vqfzu, ezlgm2up, lqsahu0r, which, he7mqnvy, tlgduey8, rbne6ouj, wbkq9zyi, lamvec, hdnw2fts, kispwgx3, m0ibglfx, zshtfg8c, ui8ysltq, zpcqv3uj, vc6hatuj, fasrkub3, &qemj9asg, ges1xpkr, wpuarq2m, hjm2ktyr, ulm3dvzg, hnpt1zym, iz2nbfjc, ifys6woa, rpyis2kc, gkdx5jals, ghdetj8v, nbzjkpi3, lindex, acpios9q, jwbkl9fp, &nhja0izq, &lyzoe1rsp, &ueb8hndv, >rlbz3e, &rutyk8mg, &xjc4ywlh, &kvowz9ht, &npjlv3mr, &fbd5yktj, &ldk, &algpft4y, itdcb8ilk, tdcb8ilk); psdvgce3[6] = qemj9asg; psdvgce3[4] = ueb8hndv; psdvgce3[13] = fbd5yktj; psdvgce3[16] = algpft4y; } void fapc0tnbvbfa1(int *ftnjamu2, int *wy1vqfzu, int ezlgm2up[], int lqsahu0r[], int which[], double he7mqnvy[], double tlgduey8[], double rbne6ouj[], double wbkq9zyi[], double lamvec[], double hdnw2fts[], double kispwgx3[], double m0ibglfx[], double zshtfg8c[], double ui8ysltq[], double *zpcqv3uj, double vc6hatuj[], double fasrkub3[], int *qemj9asg, int ges1xpkr[], double wpuarq2m[], double hjm2ktyr[], int ulm3dvzg[], int hnpt1zym[], int iz2nbfjc[], double ifys6woa[], double rpyis2kc[], double gkdx5jals[], double *ghdetj8v, int nbzjkpi3[], int lindex[], int acpios9q[], int jwbkl9fp[], int *nhja0izq, int *yzoe1rsp, int *ueb8hndv, int *gtrlbz3e, int *rutyk8mg, int *xjc4ywlh, int *kvowz9ht, int *npjlv3mr, int *fbd5yktj, int *ldk, int *algpft4y, int itdcb8ilk[], double tdcb8ilk[]) { int ayfnwr1v, yq6lorbx, gp1jxzuh, urohxe6t, bpvaqm5z, wg1xifdy, ybnsqgo9 = 101, maxrutyk8mgxjc4ywlh, infoqr_svdbx3tk, sumzv2xfhei = 0; double qtce8hzo, deltaf, z4vrscot, pvofyg8z = 1.0e-7, g2dnwteb = 1.0, *fpdlcqk9m0ibglfx, *fpdlcqk9ub4xioar, *fpdlcqk9tlgduey8, *fpdlcqk9ghz9vuba, *fpdlcqk9hjm2ktyr, *fpdlcqk9kispwgx3, *qnwamo0e1; double *wkumc9idTwk, *wkumc9idwkbzmd6ftv, *wkumc9idwk9; double *wkumc9idghz9vuba, *wkumc9idoldmat, *wkumc9idub4xioar, *wkumc9idwk2; double *wkumc9idall_xecbg0pf, *wkumc9idall_z4grbpiq, *wkumc9idall_d7glzhbj, *wkumc9idall_v2eydbxs, *wkumc9idall_tt2; int cvnjhg2u; maxrutyk8mgxjc4ywlh = (*ftnjamu2 * *wy1vqfzu > *xjc4ywlh) ? 
(*ftnjamu2 * *wy1vqfzu) : *xjc4ywlh; wkumc9idTwk = Calloc(maxrutyk8mgxjc4ywlh , double); wkumc9idwkbzmd6ftv = Calloc(*xjc4ywlh * *rutyk8mg, double); wkumc9idwk9 = Calloc(*xjc4ywlh , double); wkumc9idghz9vuba = Calloc(*ftnjamu2 * *wy1vqfzu, double); wkumc9idoldmat = Calloc(*ftnjamu2 * *wy1vqfzu, double); wkumc9idub4xioar = Calloc(*wy1vqfzu * *ftnjamu2, double); wkumc9idwk2 = Calloc(*ftnjamu2 * *wy1vqfzu, double); if ( *nhja0izq == 0 || *nhja0izq == 1 ) { *gtrlbz3e = 1; } if (*qemj9asg == 0) { fvlmz9iyC_mxrbkut0(wpuarq2m, vc6hatuj, wy1vqfzu, xjc4ywlh, ftnjamu2, npjlv3mr, rutyk8mg); for (gp1jxzuh = 1; gp1jxzuh <= *xjc4ywlh; gp1jxzuh++) { ges1xpkr[gp1jxzuh-1] = gp1jxzuh; } F77_CALL(vqrdca)(vc6hatuj, rutyk8mg, rutyk8mg, xjc4ywlh, fasrkub3, ges1xpkr, wkumc9idTwk, qemj9asg, &pvofyg8z); } fpdlcqk9m0ibglfx = m0ibglfx; for (ayfnwr1v = 0; ayfnwr1v < *ftnjamu2; ayfnwr1v++) { for (yq6lorbx = 0; yq6lorbx < *wy1vqfzu; yq6lorbx++) { *fpdlcqk9m0ibglfx++ = 0.0e0; } } for (gp1jxzuh = 1; gp1jxzuh <= *nhja0izq; gp1jxzuh++) { if (iz2nbfjc[gp1jxzuh-1] == 1) { fpdlcqk9m0ibglfx = m0ibglfx; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { bpvaqm5z = hnpt1zym[gp1jxzuh-1] - 1; for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { *fpdlcqk9m0ibglfx += kispwgx3[ayfnwr1v-1 + bpvaqm5z * *ftnjamu2]; fpdlcqk9m0ibglfx++; bpvaqm5z++; } } } else { for (wg1xifdy = 1; wg1xifdy <= ulm3dvzg[gp1jxzuh-1]; wg1xifdy++) { urohxe6t = hnpt1zym[gp1jxzuh-1] + wg1xifdy - 2; fpdlcqk9m0ibglfx = m0ibglfx; fpdlcqk9kispwgx3 = kispwgx3 + urohxe6t * *ftnjamu2; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { fpdlcqk9hjm2ktyr = hjm2ktyr + urohxe6t * *wy1vqfzu; for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { *fpdlcqk9m0ibglfx += *fpdlcqk9hjm2ktyr++ * *fpdlcqk9kispwgx3; fpdlcqk9m0ibglfx++; } fpdlcqk9kispwgx3++; } } } } sumzv2xfhei = jwbkl9fp[(1 + *nhja0izq) - 1]; wkumc9idall_xecbg0pf = Calloc(sumzv2xfhei, double); wkumc9idall_z4grbpiq = Calloc(sumzv2xfhei, double); wkumc9idall_d7glzhbj = Calloc(sumzv2xfhei, double); wkumc9idall_v2eydbxs = Calloc(sumzv2xfhei, double); wkumc9idall_tt2 = Calloc(*nhja0izq , double); *ueb8hndv = 0; while ((g2dnwteb > *zpcqv3uj ) && (*ueb8hndv < *gtrlbz3e)) { (*ueb8hndv)++; deltaf = 0.0e0; fpdlcqk9ghz9vuba = wkumc9idghz9vuba; fpdlcqk9tlgduey8 = tlgduey8; for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { fpdlcqk9m0ibglfx = m0ibglfx + yq6lorbx-1; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9ghz9vuba++ = *fpdlcqk9tlgduey8++ - *fpdlcqk9m0ibglfx; fpdlcqk9m0ibglfx += *wy1vqfzu; } } fvlmz9iyC_nudh6szq(wpuarq2m, wkumc9idghz9vuba, wkumc9idTwk, npjlv3mr, ftnjamu2, wy1vqfzu); F77_CALL(vdqrsl)(vc6hatuj, rutyk8mg, rutyk8mg, qemj9asg, fasrkub3, wkumc9idTwk, wkumc9idwk2, wkumc9idwk2, zshtfg8c, wkumc9idwk2, wkumc9idub4xioar, &ybnsqgo9, &infoqr_svdbx3tk); *ghdetj8v = 0.0e0; qnwamo0e1 = wkumc9idTwk; fpdlcqk9ub4xioar = wkumc9idub4xioar; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { qtce8hzo = *qnwamo0e1++ - *fpdlcqk9ub4xioar++; *ghdetj8v += pow(qtce8hzo, (double) 2.0); } } fvlmz9iyC_vbks(wpuarq2m, wkumc9idub4xioar, wy1vqfzu, ftnjamu2, npjlv3mr); for (gp1jxzuh = 1; gp1jxzuh <= *nhja0izq; gp1jxzuh++) { for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { urohxe6t = hnpt1zym[gp1jxzuh-1] + yq6lorbx -2; if (iz2nbfjc[gp1jxzuh-1] == 1) { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { wkumc9idoldmat[ayfnwr1v-1 + (yq6lorbx-1) * *ftnjamu2] = kispwgx3[ayfnwr1v-1 + urohxe6t * *ftnjamu2]; wkumc9idghz9vuba[ayfnwr1v-1 + 
(yq6lorbx-1) * *ftnjamu2] = tlgduey8[ayfnwr1v-1 + (yq6lorbx-1) * *ftnjamu2] - wkumc9idub4xioar[yq6lorbx-1 + (ayfnwr1v-1) * *wy1vqfzu] - m0ibglfx[yq6lorbx-1 + (ayfnwr1v-1) * *wy1vqfzu] + wkumc9idoldmat[ayfnwr1v-1 + (yq6lorbx-1) * *ftnjamu2]; } } else { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { wkumc9idoldmat[ayfnwr1v-1 + (yq6lorbx-1) * *ftnjamu2] = 0.0e0; for (wg1xifdy = 1; wg1xifdy <= ulm3dvzg[gp1jxzuh-1]; wg1xifdy++) { bpvaqm5z = hnpt1zym[gp1jxzuh-1] + wg1xifdy -2; wkumc9idoldmat[ayfnwr1v-1 + (yq6lorbx-1) * *ftnjamu2] += hjm2ktyr[yq6lorbx-1 + bpvaqm5z * *wy1vqfzu] * kispwgx3[ayfnwr1v-1 + bpvaqm5z * *ftnjamu2]; } wkumc9idghz9vuba[ayfnwr1v-1 + (yq6lorbx-1) * *ftnjamu2] = tlgduey8[ayfnwr1v-1 + (yq6lorbx-1) * *ftnjamu2] - wkumc9idub4xioar[yq6lorbx-1 + (ayfnwr1v-1) * *wy1vqfzu] - m0ibglfx[yq6lorbx-1 + (ayfnwr1v-1) * *wy1vqfzu] + wkumc9idoldmat[ayfnwr1v-1 + (yq6lorbx-1) * *ftnjamu2]; } } } cvnjhg2u = (*ueb8hndv == 1) ? 0 : 1; fapc0tnbewg7qruh(he7mqnvy+(which[gp1jxzuh-1]-1) * *ftnjamu2, wkumc9idghz9vuba, rbne6ouj, ftnjamu2, wy1vqfzu, ezlgm2up + (gp1jxzuh-1) * *ftnjamu2, lqsahu0r + gp1jxzuh-1, wbkq9zyi + hnpt1zym[gp1jxzuh-1]-1, lamvec + hnpt1zym[gp1jxzuh-1]-1, hdnw2fts + hnpt1zym[gp1jxzuh-1]-1, kispwgx3 + (hnpt1zym[gp1jxzuh-1]-1) * *ftnjamu2, ui8ysltq + (hnpt1zym[gp1jxzuh-1]-1) * *ftnjamu2, kvowz9ht, fbd5yktj, ldk, algpft4y, yzoe1rsp, rpyis2kc + nbzjkpi3[gp1jxzuh-1]-1, gkdx5jals + jwbkl9fp[gp1jxzuh-1]-1, ifys6woa + lindex[gp1jxzuh-1]-1, hjm2ktyr + (hnpt1zym[gp1jxzuh-1]-1) * *wy1vqfzu, wkumc9idall_xecbg0pf + jwbkl9fp[gp1jxzuh-1]-1, wkumc9idall_z4grbpiq + jwbkl9fp[gp1jxzuh-1]-1, wkumc9idall_d7glzhbj + jwbkl9fp[gp1jxzuh-1]-1, wkumc9idall_v2eydbxs + jwbkl9fp[gp1jxzuh-1]-1, wkumc9idall_tt2 + gp1jxzuh-1 , // If 0 then compute wkumc9idall_sg[0:3] else already done: &cvnjhg2u, acpios9q + gp1jxzuh-1, iz2nbfjc + gp1jxzuh-1, ulm3dvzg + gp1jxzuh-1, npjlv3mr, itdcb8ilk, tdcb8ilk); for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { if (iz2nbfjc[gp1jxzuh-1] == 1) { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { m0ibglfx[yq6lorbx-1 + (ayfnwr1v-1) * *wy1vqfzu] += kispwgx3[ayfnwr1v-1 + (hnpt1zym[gp1jxzuh-1]+yq6lorbx-2) * *ftnjamu2]; } } else { for (wg1xifdy = 1; wg1xifdy <= ulm3dvzg[gp1jxzuh-1]; wg1xifdy++) { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { m0ibglfx[yq6lorbx-1+ (ayfnwr1v-1) * *wy1vqfzu] += hjm2ktyr[yq6lorbx-1+ (hnpt1zym[gp1jxzuh-1]+wg1xifdy-2) * *wy1vqfzu] * kispwgx3[ayfnwr1v-1+ (hnpt1zym[gp1jxzuh-1]+wg1xifdy-2) * *ftnjamu2]; } } } for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { m0ibglfx[yq6lorbx-1 + (ayfnwr1v-1) * *wy1vqfzu] -= wkumc9idoldmat[ayfnwr1v-1 + (yq6lorbx-1) * *ftnjamu2]; } } for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { if (iz2nbfjc[gp1jxzuh-1] == 1) { deltaf += fapc0tnbrd9beyfk(ftnjamu2, wkumc9idoldmat + (yq6lorbx-1) * *ftnjamu2, rbne6ouj + (yq6lorbx-1) * *ftnjamu2, kispwgx3 + (hnpt1zym[gp1jxzuh-1]+yq6lorbx-2) * *ftnjamu2); } else { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { wkumc9idTwk[ayfnwr1v-1] = 0.0e0; for (wg1xifdy=1; wg1xifdy<=ulm3dvzg[gp1jxzuh-1]; wg1xifdy++) { wkumc9idTwk[ayfnwr1v-1] += hjm2ktyr[yq6lorbx-1 + (hnpt1zym[gp1jxzuh-1]+wg1xifdy-2) * *wy1vqfzu] * kispwgx3[ayfnwr1v-1 + (hnpt1zym[gp1jxzuh-1]+wg1xifdy-2) * *ftnjamu2]; } } deltaf += fapc0tnbrd9beyfk(ftnjamu2, wkumc9idoldmat + (yq6lorbx-1) * *ftnjamu2, rbne6ouj + (yq6lorbx-1) * *ftnjamu2, wkumc9idTwk); } } fpdlcqk9ghz9vuba = wkumc9idghz9vuba; fpdlcqk9tlgduey8 = tlgduey8; for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { fpdlcqk9m0ibglfx = m0ibglfx + 
yq6lorbx-1; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9ghz9vuba++ = *fpdlcqk9tlgduey8++ - *fpdlcqk9m0ibglfx; fpdlcqk9m0ibglfx += *wy1vqfzu; } } fvlmz9iyC_nudh6szq(wpuarq2m, wkumc9idghz9vuba, wkumc9idTwk, npjlv3mr, ftnjamu2, wy1vqfzu); F77_CALL(vdqrsl)(vc6hatuj, rutyk8mg, rutyk8mg, qemj9asg, fasrkub3, wkumc9idTwk, wkumc9idwk2, wkumc9idwk2, zshtfg8c, wkumc9idwk2, wkumc9idub4xioar, &ybnsqgo9, &infoqr_svdbx3tk); fvlmz9iyC_vbks(wpuarq2m, wkumc9idub4xioar, wy1vqfzu, ftnjamu2, npjlv3mr); } if (*nhja0izq > 0) { z4vrscot = 0.0e0; for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { z4vrscot += rbne6ouj[ayfnwr1v-1 + (yq6lorbx-1) * *ftnjamu2] * pow(m0ibglfx[yq6lorbx-1 + (ayfnwr1v-1) * *wy1vqfzu], (double) 2.0); } } g2dnwteb = (z4vrscot > 0.0e0) ? sqrt(deltaf / z4vrscot) : 0.0; } if (*ueb8hndv == 1) { g2dnwteb = 1.0e0; } } for (yq6lorbx = 1; yq6lorbx <= *xjc4ywlh; yq6lorbx++) { wkumc9idwk9[yq6lorbx-1] = zshtfg8c[yq6lorbx-1]; } for (yq6lorbx = 1; yq6lorbx <= *xjc4ywlh; yq6lorbx++) { zshtfg8c[ges1xpkr[yq6lorbx-1]-1] = wkumc9idwk9[yq6lorbx-1]; } fpdlcqk9m0ibglfx = m0ibglfx; fpdlcqk9ub4xioar = wkumc9idub4xioar; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { *fpdlcqk9m0ibglfx += *fpdlcqk9ub4xioar++; fpdlcqk9m0ibglfx++; } } if (*yzoe1rsp && (*nhja0izq > 0)) { for (gp1jxzuh = 1; gp1jxzuh <= *nhja0izq; gp1jxzuh++) { for (wg1xifdy = 1; wg1xifdy <= ulm3dvzg[gp1jxzuh-1]; wg1xifdy++) { fapc0tnbshm8ynte(ftnjamu2, /* lqsahu0r + gp1jxzuh-1, */ ezlgm2up + (gp1jxzuh-1) * *ftnjamu2, ui8ysltq + (hnpt1zym[ gp1jxzuh-1] + wg1xifdy-2) * *ftnjamu2, wkumc9idoldmat); for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { ui8ysltq[ayfnwr1v-1 + (hnpt1zym[gp1jxzuh-1]+wg1xifdy-2) * *ftnjamu2] = wkumc9idoldmat[ayfnwr1v-1]; } } } if (0) { for (gp1jxzuh = 1; gp1jxzuh <= *nhja0izq; gp1jxzuh++) { for (wg1xifdy = 1; wg1xifdy <= ulm3dvzg[gp1jxzuh-1]; wg1xifdy++) { fapc0tnbshm8ynte(ftnjamu2, /* lqsahu0r + gp1jxzuh-1, */ ezlgm2up + (gp1jxzuh-1) * *ftnjamu2, ifys6woa + (hnpt1zym[ gp1jxzuh-1] + wg1xifdy-2) * *ftnjamu2, wkumc9idoldmat); for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { ifys6woa[ayfnwr1v-1 + (hnpt1zym[gp1jxzuh-1]+wg1xifdy-2) * *ftnjamu2] = wkumc9idoldmat[ayfnwr1v-1]; } } } } } Free(wkumc9idwkbzmd6ftv); Free(wkumc9idwk9); Free(wkumc9idTwk); Free(wkumc9idghz9vuba); Free(wkumc9idoldmat); Free(wkumc9idub4xioar); Free(wkumc9idwk2); Free(wkumc9idall_xecbg0pf); Free(wkumc9idall_z4grbpiq); Free(wkumc9idall_d7glzhbj); Free(wkumc9idall_v2eydbxs); Free(wkumc9idall_tt2); } void fapc0tnbx6kanjdh(double sjwyig9t[], double xout[], int *f8yswcat, int *wy1vqfzu) { int ayfnwr1v, yq6lorbx, gp1jxzuh, iptr = 0; for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { for (ayfnwr1v = 1; ayfnwr1v <= *f8yswcat; ayfnwr1v++) { for (gp1jxzuh = 1; gp1jxzuh <= *wy1vqfzu; gp1jxzuh++) { xout[iptr++] = (yq6lorbx == gp1jxzuh) ? 1.0e0 : 0.0e0; } } } for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { for (ayfnwr1v = 1; ayfnwr1v <= *f8yswcat; ayfnwr1v++) { for (gp1jxzuh = 1; gp1jxzuh <= *wy1vqfzu; gp1jxzuh++) { xout[iptr++] = (yq6lorbx == gp1jxzuh) ? 
          sjwyig9t[ayfnwr1v-1] : 0.0e0;
      }
    }
  }
}


/* Weighted mean of the squared deviations of bhcji9gl about m0ibglfx. */
double fapc0tnbrd9beyfk(int *f8yswcat, double bhcji9gl[],
                        double po8rwsmy[], double m0ibglfx[]) {
  int ayfnwr1v;
  double rd9beyfk, rxeqjn0y = 0.0, lm9vcjob = 0.0;

  for (ayfnwr1v = 0; ayfnwr1v < *f8yswcat; ayfnwr1v++) {
    lm9vcjob += *po8rwsmy;
    rxeqjn0y += *po8rwsmy++ * pow(*bhcji9gl++ - *m0ibglfx++, (double) 2.0);
  }
  rd9beyfk = (lm9vcjob > 0.0e0) ? (rxeqjn0y / lm9vcjob) : 0.0e0;
  return rd9beyfk;
}


/* Weighted mean of bhcji9gl; the total weight is returned in *lm9vcjob. */
void fapc0tnbpitmeh0q(int *f8yswcat, double bhcji9gl[], double po8rwsmy[],
                      double *lfu2qhid, double *lm9vcjob) {
  double rxeqjn0yy = 0.0;
  int ayfnwr1v;

  *lm9vcjob = 0.0e0;
  for (ayfnwr1v = 0; ayfnwr1v < *f8yswcat; ayfnwr1v++) {
    *lm9vcjob += *po8rwsmy;
    rxeqjn0yy += *po8rwsmy++ * *bhcji9gl++;
  }
  *lfu2qhid = (*lm9vcjob > 0.0e0) ? (rxeqjn0yy / *lm9vcjob) : 0.0e0;
}


/* Weighted simple linear regression of bhcji9gl on sjwyig9t: fitted
   values are written to ub4xioar and, if *yzoe1rsp, the pointwise
   variance of the linear fit is subtracted from ui8ysltq. */
void fapc0tnbdsrt0gem(int *f8yswcat, double sjwyig9t[], double po8rwsmy[],
                      double bhcji9gl[], double ub4xioar[],
                      double ui8ysltq[], int *yzoe1rsp) {
  int ayfnwr1v;
  double pygsw6ko, pasjmo8g, intercept, eck8vubt, qtce8hzo,
         lm9vcjob = 0.0, q6zdcwxk = 0.0, nsum = 0.0,
         *fpdlcqk9po8rwsmy, *fpdlcqk9sjwyig9t, *fpdlcqk9bhcji9gl;

  fapc0tnbpitmeh0q(f8yswcat, sjwyig9t, po8rwsmy, &pygsw6ko, &lm9vcjob);
  fapc0tnbpitmeh0q(f8yswcat, bhcji9gl, po8rwsmy, &pasjmo8g, &lm9vcjob);

  fpdlcqk9sjwyig9t = sjwyig9t;
  fpdlcqk9bhcji9gl = bhcji9gl;
  fpdlcqk9po8rwsmy = po8rwsmy;
  for (ayfnwr1v = 0; ayfnwr1v < *f8yswcat; ayfnwr1v++) {
    qtce8hzo = *fpdlcqk9sjwyig9t++ - pygsw6ko;
    nsum += qtce8hzo * (*fpdlcqk9bhcji9gl++ - pasjmo8g) * *fpdlcqk9po8rwsmy;
    qtce8hzo = pow(qtce8hzo, (double) 2.0);
    q6zdcwxk += qtce8hzo * *fpdlcqk9po8rwsmy++;
  }
  eck8vubt = nsum / q6zdcwxk;
  intercept = pasjmo8g - eck8vubt * pygsw6ko;

  fpdlcqk9sjwyig9t = sjwyig9t;
  for (ayfnwr1v = 0; ayfnwr1v < *f8yswcat; ayfnwr1v++) {
    *ub4xioar++ = intercept + eck8vubt * *fpdlcqk9sjwyig9t++;
  }

  if (*yzoe1rsp) {
    fpdlcqk9sjwyig9t = sjwyig9t;
    fpdlcqk9po8rwsmy = po8rwsmy;
    for (ayfnwr1v = 0; ayfnwr1v < *f8yswcat; ayfnwr1v++) {
      qtce8hzo = *fpdlcqk9sjwyig9t++ - pygsw6ko;
      if (*fpdlcqk9po8rwsmy++ > 0.0e0) {
        *ui8ysltq -= (1.0e0 / lm9vcjob +
                      pow(qtce8hzo, (double) 2.0) / q6zdcwxk);
        ui8ysltq++;
      } else {
        *ui8ysltq++ = 0.0e0;
      }
    }
  }
}


/* Scatter the nef unique values pygsw6ko back to full length through
   the index vector ezlgm2up. */
void fapc0tnbshm8ynte(int *ftnjamu2, int ezlgm2up[],
                      double pygsw6ko[], double sjwyig9t[]) {
  int ayfnwr1v;
  for (ayfnwr1v = 0; ayfnwr1v < *ftnjamu2; ayfnwr1v++) {
    *sjwyig9t++ = pygsw6ko[*ezlgm2up++ -1];
  }
}


/* Build the B-spline knot vector: three replicated boundary knots at
   each end plus ndzv2xfhei knots spread over the order statistics of
   sjwyig9t; *rvy1fpli is set to the total number of knots. */
void vknootl2(double sjwyig9t[], int *f8yswcat, double gkdx5jal[],
              int *rvy1fpli, int *ukgwt7na) {
  int ayfnwr1v, yq6lorbx, ndzv2xfhei;

  if (*ukgwt7na) {
    ndzv2xfhei = *rvy1fpli - 6;
  } else {
    ndzv2xfhei = (*f8yswcat <= 40) ?
        *f8yswcat : floor((double) 40.0 + pow((double) *f8yswcat - 40.0,
                                              (double) 0.25));
  }
  *rvy1fpli = ndzv2xfhei + 6;
  for (yq6lorbx = 1; yq6lorbx <= 3; yq6lorbx++) {
    *gkdx5jal++ = sjwyig9t[0];
  }
  for (yq6lorbx = 1; yq6lorbx <= ndzv2xfhei; yq6lorbx++) {
    ayfnwr1v = (yq6lorbx - 1) * (*f8yswcat - 1) / (ndzv2xfhei - 1);
    *gkdx5jal++ = sjwyig9t[ayfnwr1v];
  }
  for (yq6lorbx = 1; yq6lorbx <= 3; yq6lorbx++) {
    *gkdx5jal++ = sjwyig9t[*f8yswcat -1];
  }
}


/* Flag in zo8wpibx which knots to keep: the four boundary knots at
   each end always, and an interior knot only if it is at least
   *Toler_ankcghz2 away from the last kept knot and from the
   right-hand boundary. */
void Yee_pknootl2(double *gkdx5jal, int *f8yswcat, int *zo8wpibx,
                  double *Toler_ankcghz2) {
  int ayfnwr1v, yq6lorbx = *f8yswcat - 4, cjop5bwm = 4;

  for (ayfnwr1v = 1; ayfnwr1v <= 4; ayfnwr1v++) {
    *zo8wpibx++ = 1;
  }
  for (ayfnwr1v = 5; ayfnwr1v <= yq6lorbx; ayfnwr1v++) {
    if ((gkdx5jal[ayfnwr1v -1] - gkdx5jal[cjop5bwm -1] >= *Toler_ankcghz2) &&
        (gkdx5jal[ *f8yswcat -1] - gkdx5jal[ayfnwr1v -1] >= *Toler_ankcghz2)) {
      *zo8wpibx++ = 1;
      cjop5bwm = ayfnwr1v;
    } else {
      *zo8wpibx++ = 0;
    }
  }
  for (ayfnwr1v = *f8yswcat - 3; ayfnwr1v <= *f8yswcat; ayfnwr1v++) {
    *zo8wpibx++ = 1;
  }
}
VGAM/src/vgam.f
c  (Apparently ratfor-generated Fortran 77; the numeric statement
c  labels below come from the generator.)
c  vbvs: evaluate the fitted B-spline (derivative of order `order')
c  at the points he7mqnvy for each of the wy1vqfzu components.
      subroutine vbvs(kuzxj1lo,ankcghz2,rpyis2kc,nk,he7mqnvy,smat,order,
     &wy1vqfzu)
      integer kuzxj1lo, nk, order, wy1vqfzu
      double precision ankcghz2(nk+4), rpyis2kc(nk,wy1vqfzu), he7mqnvy(
     &kuzxj1lo), smat(kuzxj1lo,wy1vqfzu)
      double precision chw8lzty
      integer ayfnwr1v, yq6lorbx, ifour4
      ifour4 = 4
      do 23000 yq6lorbx=1,wy1vqfzu
      do 23002 ayfnwr1v=1,kuzxj1lo
      chw8lzty = he7mqnvy(ayfnwr1v)
      call wbvalue(ankcghz2, rpyis2kc(1,yq6lorbx), nk, ifour4, chw8lzty,
     & order, smat(ayfnwr1v,yq6lorbx))
23002 continue
23000 continue
      return
      end
c  tfeswo7c: add the smoothing penalties wbkq9zyi(j) * sgmat to the
c  four nonzero bands of the symmetric band matrix osiz4fxy.
      subroutine tfeswo7c(osiz4fxy, nk, wy1vqfzu, ldk, wbkq9zyi, sgmat)
      implicit logical (a-z)
      integer nk, wy1vqfzu, ldk
      double precision osiz4fxy(ldk,nk*wy1vqfzu), wbkq9zyi(wy1vqfzu),
     &sgmat(nk,4)
      integer ayfnwr1v, yq6lorbx
      do 23004 ayfnwr1v=1,nk
      do 23006 yq6lorbx=1,wy1vqfzu
      osiz4fxy(ldk,(ayfnwr1v-1)*wy1vqfzu+yq6lorbx) = osiz4fxy(ldk,(
     &ayfnwr1v-1)*wy1vqfzu+yq6lorbx) + wbkq9zyi(yq6lorbx) * sgmat(
     &ayfnwr1v,1)
23006 continue
23004 continue
      do 23008 ayfnwr1v=1,(nk-1)
      do 23010 yq6lorbx=1,wy1vqfzu
      osiz4fxy(ldk-wy1vqfzu,(ayfnwr1v-0)*wy1vqfzu+yq6lorbx) = osiz4fxy(
     &ldk-wy1vqfzu,(ayfnwr1v-0)*wy1vqfzu+yq6lorbx) + wbkq9zyi(yq6lorbx)
     &* sgmat(ayfnwr1v,2)
23010 continue
23008 continue
      do 23012 ayfnwr1v=1,(nk-2)
      do 23014 yq6lorbx=1,wy1vqfzu
      osiz4fxy(ldk-2*wy1vqfzu,(ayfnwr1v+1)*wy1vqfzu+yq6lorbx) =
     &osiz4fxy(ldk-2*wy1vqfzu,(ayfnwr1v+1)*wy1vqfzu+yq6lorbx) +
     &wbkq9zyi(yq6lorbx) * sgmat(ayfnwr1v,3)
23014 continue
23012 continue
      do 23016 ayfnwr1v=1,(nk-3)
      do 23018 yq6lorbx=1,wy1vqfzu
      osiz4fxy(ldk-3*wy1vqfzu,(ayfnwr1v+2)*wy1vqfzu+yq6lorbx) =
     &osiz4fxy(ldk-3*wy1vqfzu,(ayfnwr1v+2)*wy1vqfzu+yq6lorbx) +
     &wbkq9zyi(yq6lorbx) * sgmat(ayfnwr1v,4)
23018 continue
23016 continue
      return
      end
c  ybnagt8k: accumulate one weighted B-spline cross-product block into
c  the band matrix osiz4fxy (Fortran analogue of the C routine
c  fapc0tnbybnagt8k above).
      subroutine ybnagt8k(iii, cz8qdfyj, tesdm5kv, g9fvdrbw, osiz4fxy,
     &wmat, kxvq6sfw, nyfu9rod, wy1vqfzu, ldk, dimw, kuzxj1lo, nk,
     &tgiyxdw1, dufozmt7)
      implicit logical (a-z)
      integer iii, cz8qdfyj, tesdm5kv, kxvq6sfw, nyfu9rod, wy1vqfzu,
     &ldk, dimw, kuzxj1lo, nk, tgiyxdw1(*), dufozmt7(*)
      double precision g9fvdrbw(4,*), osiz4fxy(ldk, nk*wy1vqfzu), wmat(
     &kuzxj1lo,dimw)
      double precision obr6tcex
      integer urohxe6t, nead, bcol, brow, biuvowq2, nbj8tdsk
      bcol = cz8qdfyj + tesdm5kv
      brow = cz8qdfyj
      do 23020 urohxe6t=1,dimw
      obr6tcex = wmat(iii,urohxe6t) * g9fvdrbw(kxvq6sfw,1) * g9fvdrbw(
     &nyfu9rod,1)
      biuvowq2 = (brow-1)*wy1vqfzu + tgiyxdw1(urohxe6t)
      nbj8tdsk = (bcol-1)*wy1vqfzu + dufozmt7(urohxe6t)
      nead = nbj8tdsk - biuvowq2
      osiz4fxy(ldk-nead, nbj8tdsk) =
osiz4fxy(ldk-nead, nbj8tdsk) + &obr6tcex if(.not.(tesdm5kv .gt. 0 .and. dufozmt7(urohxe6t) .ne. tgiyxdw1( &urohxe6t)))goto 23022 biuvowq2 = (brow-1)*wy1vqfzu + dufozmt7(urohxe6t) nbj8tdsk = (bcol-1)*wy1vqfzu + tgiyxdw1(urohxe6t) nead = nbj8tdsk - biuvowq2 osiz4fxy(ldk-nead, nbj8tdsk) = osiz4fxy(ldk-nead, nbj8tdsk) + &obr6tcex 23022 continue 23020 continue return end subroutine vsplin(he7mqnvy,rbne6ouj,wmat,kuzxj1lo,gkdx5jal, nk, &ldk,wy1vqfzu,dimw, tgiyxdw1,dufozmt7, wkmm, wbkq9zyi, info, &t8hwvalr, rpyis2kc, osiz4fxy, btwy, sgdub, ui8ysltq, yzoe1rsp, &bmb, ifys6woa, dof, scrtch, fbd5yktj, truen) implicit logical (a-z) integer kuzxj1lo, nk, ldk, wy1vqfzu, dimw, tgiyxdw1(*), dufozmt7(* &), info, fbd5yktj, truen integer yzoe1rsp double precision he7mqnvy(kuzxj1lo), rbne6ouj(kuzxj1lo,wy1vqfzu), &wmat(kuzxj1lo,dimw), gkdx5jal(nk+4), wkmm(wy1vqfzu,wy1vqfzu,16), &wbkq9zyi(wy1vqfzu), t8hwvalr(kuzxj1lo,wy1vqfzu), rpyis2kc(nk, &wy1vqfzu), osiz4fxy(ldk,nk*wy1vqfzu), btwy(wy1vqfzu,nk) double precision sgdub(nk,wy1vqfzu), ui8ysltq(truen,wy1vqfzu), &bmb(wy1vqfzu,wy1vqfzu), ifys6woa(kuzxj1lo,wy1vqfzu), dof(wy1vqfzu) &, scrtch(*) integer yq6lorbx, ayfnwr1v, dqlr5bse, pqzfxw4i, urohxe6t, icrit integer gp0xjetb, e5knafcg, wep0oibc, l3zpbstu(3), ispar, i1loc double precision qaltf0nz, g9fvdrbw(4,1), ms0qypiw(16), penalt, &qcpiaj7f, fp6nozvx, waiez6nt, toldf, parms(3) do 23024 yq6lorbx=1,wy1vqfzu if(.not.(wbkq9zyi(yq6lorbx) .eq. 0.0d0))goto 23026 ispar=0 icrit=3 goto 23027 23026 continue ispar=1 icrit=1 23027 continue if(.not.((wy1vqfzu .eq. 1) .or. (dimw.eq.wy1vqfzu) .or. (ispar &.eq. 0)))goto 23028 e5knafcg = 4 fp6nozvx = 1.50d0 waiez6nt = 0.00d0 wep0oibc = 1 toldf=0.001d0 if(.not.(wy1vqfzu.eq.1))goto 23030 toldf=0.005d0 goto 23031 23030 continue if(.not.(wy1vqfzu.eq.2))goto 23032 toldf=0.015d0 goto 23033 23032 continue if(.not.(wy1vqfzu.eq.3))goto 23034 toldf=0.025d0 goto 23035 23034 continue toldf=0.045d0 23035 continue 23033 continue 23031 continue l3zpbstu(1) = icrit l3zpbstu(2) = ispar l3zpbstu(3) = 300 parms(1) = waiez6nt parms(2) = fp6nozvx parms(3) = toldf gp0xjetb=0 if(.not.((wy1vqfzu .eq. 1) .or. (dimw.eq.wy1vqfzu)))goto 23036 do 23038 ayfnwr1v=1,kuzxj1lo rbne6ouj(ayfnwr1v,yq6lorbx) = rbne6ouj(ayfnwr1v,yq6lorbx) / wmat( &ayfnwr1v,yq6lorbx) 23038 continue call dnaoqj0l(penalt, dof(yq6lorbx), he7mqnvy, rbne6ouj(1, &yq6lorbx), wmat(1,yq6lorbx), kuzxj1lo,nk, gkdx5jal,rpyis2kc(1, &yq6lorbx), t8hwvalr(1,yq6lorbx), ifys6woa(1,yq6lorbx), qcpiaj7f, &wbkq9zyi(yq6lorbx),parms, scrtch, gp0xjetb,l3zpbstu, e5knafcg, &wep0oibc,fbd5yktj) if(.not.(fbd5yktj .ne. 0))goto 23040 return 23040 continue do 23042 ayfnwr1v=1,kuzxj1lo wmat(ayfnwr1v,yq6lorbx) = wmat(ayfnwr1v,yq6lorbx) * wmat(ayfnwr1v, &yq6lorbx) 23042 continue if(.not.(yzoe1rsp .ne. 0))goto 23044 do 23046 ayfnwr1v=1,kuzxj1lo ui8ysltq(ayfnwr1v,yq6lorbx) = ifys6woa(ayfnwr1v,yq6lorbx) / wmat( &ayfnwr1v,yq6lorbx) 23046 continue 23044 continue goto 23037 23036 continue call dnaoqj0l(penalt, dof(yq6lorbx), he7mqnvy, btwy(1,yq6lorbx), &wmat(1,yq6lorbx), kuzxj1lo,nk, gkdx5jal,rpyis2kc(1,yq6lorbx), &t8hwvalr(1,yq6lorbx), ifys6woa(1,yq6lorbx), qcpiaj7f,wbkq9zyi( &yq6lorbx),parms, scrtch, gp0xjetb,l3zpbstu, e5knafcg,wep0oibc, &fbd5yktj) if(.not.(fbd5yktj .ne. 0))goto 23048 return 23048 continue do 23050 ayfnwr1v=1,kuzxj1lo wmat(ayfnwr1v,yq6lorbx) = wmat(ayfnwr1v,yq6lorbx) * wmat(ayfnwr1v, &yq6lorbx) 23050 continue 23037 continue if(.not.(fbd5yktj .ne. 0))goto 23052 return 23052 continue 23028 continue 23024 continue if(.not.((wy1vqfzu .eq. 1) .or. (dimw .eq. 
wy1vqfzu)))goto 23054 return 23054 continue do 23056 ayfnwr1v=1,nk do 23058 yq6lorbx=1,wy1vqfzu btwy(yq6lorbx,ayfnwr1v)=0.0d0 23058 continue 23056 continue do 23060 ayfnwr1v=1,(nk*wy1vqfzu) do 23062 yq6lorbx=1,ldk osiz4fxy(yq6lorbx,ayfnwr1v) = 0.0d0 23062 continue 23060 continue qaltf0nz = 0.1d-9 do 23064 ayfnwr1v=1,kuzxj1lo call vinterv(gkdx5jal(1),(nk+1),he7mqnvy(ayfnwr1v),dqlr5bse, &pqzfxw4i) if(.not.(pqzfxw4i .eq. 1))goto 23066 if(.not.(he7mqnvy(ayfnwr1v) .le. (gkdx5jal(dqlr5bse)+qaltf0nz))) &goto 23068 dqlr5bse=dqlr5bse-1 goto 23069 23068 continue return 23069 continue 23066 continue call vbsplvd(gkdx5jal,4,he7mqnvy(ayfnwr1v),dqlr5bse,ms0qypiw, &g9fvdrbw,1) yq6lorbx= dqlr5bse-4+1 do 23070 urohxe6t=1,wy1vqfzu btwy(urohxe6t,yq6lorbx)=btwy(urohxe6t,yq6lorbx) + rbne6ouj( &ayfnwr1v,urohxe6t) * g9fvdrbw(1,1) 23070 continue call ybnagt8k(ayfnwr1v, yq6lorbx, 0, g9fvdrbw, osiz4fxy, wmat, 1, &1, wy1vqfzu, ldk, dimw, kuzxj1lo, nk, tgiyxdw1, dufozmt7) call ybnagt8k(ayfnwr1v, yq6lorbx, 1, g9fvdrbw, osiz4fxy, wmat, 1, &2, wy1vqfzu, ldk, dimw, kuzxj1lo, nk, tgiyxdw1, dufozmt7) call ybnagt8k(ayfnwr1v, yq6lorbx, 2, g9fvdrbw, osiz4fxy, wmat, 1, &3, wy1vqfzu, ldk, dimw, kuzxj1lo, nk, tgiyxdw1, dufozmt7) call ybnagt8k(ayfnwr1v, yq6lorbx, 3, g9fvdrbw, osiz4fxy, wmat, 1, &4, wy1vqfzu, ldk, dimw, kuzxj1lo, nk, tgiyxdw1, dufozmt7) yq6lorbx= dqlr5bse-4+2 do 23072 urohxe6t=1,wy1vqfzu btwy(urohxe6t,yq6lorbx)=btwy(urohxe6t,yq6lorbx) + rbne6ouj( &ayfnwr1v,urohxe6t) * g9fvdrbw(2,1) 23072 continue call ybnagt8k(ayfnwr1v, yq6lorbx, 0, g9fvdrbw, osiz4fxy, wmat, 2, &2, wy1vqfzu, ldk, dimw, kuzxj1lo, nk, tgiyxdw1, dufozmt7) call ybnagt8k(ayfnwr1v, yq6lorbx, 1, g9fvdrbw, osiz4fxy, wmat, 2, &3, wy1vqfzu, ldk, dimw, kuzxj1lo, nk, tgiyxdw1, dufozmt7) call ybnagt8k(ayfnwr1v, yq6lorbx, 2, g9fvdrbw, osiz4fxy, wmat, 2, &4, wy1vqfzu, ldk, dimw, kuzxj1lo, nk, tgiyxdw1, dufozmt7) yq6lorbx= dqlr5bse-4+3 do 23074 urohxe6t=1,wy1vqfzu btwy(urohxe6t,yq6lorbx)=btwy(urohxe6t,yq6lorbx) + rbne6ouj( &ayfnwr1v,urohxe6t) * g9fvdrbw(3,1) 23074 continue call ybnagt8k(ayfnwr1v, yq6lorbx, 0, g9fvdrbw, osiz4fxy, wmat, 3, &3, wy1vqfzu, ldk, dimw, kuzxj1lo, nk, tgiyxdw1, dufozmt7) call ybnagt8k(ayfnwr1v, yq6lorbx, 1, g9fvdrbw, osiz4fxy, wmat, 3, &4, wy1vqfzu, ldk, dimw, kuzxj1lo, nk, tgiyxdw1, dufozmt7) yq6lorbx= dqlr5bse-4+4 do 23076 urohxe6t=1,wy1vqfzu btwy(urohxe6t,yq6lorbx)=btwy(urohxe6t,yq6lorbx) + rbne6ouj( &ayfnwr1v,urohxe6t) * g9fvdrbw(4,1) 23076 continue call ybnagt8k(ayfnwr1v, yq6lorbx, 0, g9fvdrbw, osiz4fxy, wmat, 4, &4, wy1vqfzu, ldk, dimw, kuzxj1lo, nk, tgiyxdw1, dufozmt7) 23064 continue call zosq7hub(sgdub(1,1), sgdub(1,2), sgdub(1,3), sgdub(1,4), &gkdx5jal, nk) call tfeswo7c(osiz4fxy, nk, wy1vqfzu, ldk, wbkq9zyi, sgdub) call vdpbfa7(osiz4fxy, ldk, nk*wy1vqfzu, ldk-1, info, sgdub) if(.not.(info .ne. 
0))goto 23078 return 23078 continue call vdpbsl7(osiz4fxy, ldk, nk*wy1vqfzu, ldk-1, btwy, sgdub) i1loc = 0 do 23080 ayfnwr1v=1,nk do 23082 yq6lorbx=1,wy1vqfzu i1loc = i1loc + 1 rpyis2kc(ayfnwr1v,yq6lorbx) = btwy(yq6lorbx,ayfnwr1v) 23082 continue 23080 continue call cn8kzpab(gkdx5jal, he7mqnvy, rpyis2kc, kuzxj1lo, nk, &wy1vqfzu, t8hwvalr) call vicb2(osiz4fxy, osiz4fxy, sgdub, wkmm, ldk-1, nk*wy1vqfzu) call icpd0omv(osiz4fxy, he7mqnvy, gkdx5jal, ui8ysltq, ldk, &kuzxj1lo, nk, wy1vqfzu, yzoe1rsp, bmb, wkmm, wmat, ifys6woa, dimw, & tgiyxdw1, dufozmt7, truen) return end subroutine cn8kzpab(ankcghz2, he7mqnvy, rpyis2kc, kuzxj1lo, nk, &wy1vqfzu, t8hwvalr) implicit logical (a-z) integer kuzxj1lo, nk, wy1vqfzu double precision ankcghz2(nk+4), he7mqnvy(kuzxj1lo), rpyis2kc(nk, &wy1vqfzu), t8hwvalr(kuzxj1lo,wy1vqfzu) double precision chw8lzty integer ayfnwr1v, yq6lorbx, izero0, ifour4 izero0 = 0 ifour4 = 4 do 23084 ayfnwr1v=1,kuzxj1lo chw8lzty = he7mqnvy(ayfnwr1v) do 23086 yq6lorbx=1,wy1vqfzu call wbvalue(ankcghz2, rpyis2kc(1,yq6lorbx), nk, ifour4, chw8lzty, & izero0, t8hwvalr(ayfnwr1v,yq6lorbx)) 23086 continue 23084 continue return end subroutine vsuff9(kuzxj1lo,nef,ezlgm2up, he7mqnvy,tlgduey8,wmat, &pygsw6ko,pasjmo8g,wbar,uwbar,wpasjmo8g, wy1vqfzu, dimw, dimu, &tgiyxdw1, dufozmt7, work, work2, hjm2ktyr, kgwmz4ip, iz2nbfjc, &wuwbar, dvhw1ulq) implicit logical (a-z) integer kuzxj1lo, nef, ezlgm2up(kuzxj1lo), wy1vqfzu, dimw, dimu, &kgwmz4ip, iz2nbfjc, wuwbar, dvhw1ulq, tgiyxdw1(*),dufozmt7(*) double precision he7mqnvy(kuzxj1lo), tlgduey8(kuzxj1lo,wy1vqfzu), &wmat(kuzxj1lo,dimw), pygsw6ko(nef), pasjmo8g(nef,wy1vqfzu), wbar( &nef,*), uwbar(dimu,nef), wpasjmo8g(nef,wy1vqfzu), work(wy1vqfzu, &wy1vqfzu+1), work2(kgwmz4ip,kgwmz4ip+1), hjm2ktyr(wy1vqfzu, &kgwmz4ip) integer ayfnwr1v, yq6lorbx, gp1jxzuh, urohxe6t, bpvaqm5z, &imk5wjxg integer oneint oneint = 1 if(.not.(iz2nbfjc .eq. 1))goto 23088 if(.not.((dimu .ne. dimw) .or. (kgwmz4ip .ne. wy1vqfzu)))goto 2309 &0 dvhw1ulq = 0 return 23090 continue 23088 continue imk5wjxg = wy1vqfzu * (wy1vqfzu+1) / 2 if(.not.(dimw .gt. imk5wjxg))goto 23092 23092 continue call qpsedg8xf(tgiyxdw1, dufozmt7, wy1vqfzu) do 23094 ayfnwr1v=1,kuzxj1lo pygsw6ko(ezlgm2up(ayfnwr1v))=he7mqnvy(ayfnwr1v) 23094 continue do 23096 yq6lorbx=1,wy1vqfzu do 23098 ayfnwr1v=1,nef wpasjmo8g(ayfnwr1v,yq6lorbx) = 0.0d0 23098 continue 23096 continue do 23100 yq6lorbx=1,dimw do 23102 ayfnwr1v=1,nef wbar(ayfnwr1v,yq6lorbx) = 0.0d0 23102 continue 23100 continue if(.not.(dimw .ne. imk5wjxg))goto 23104 do 23106 gp1jxzuh=1,wy1vqfzu do 23108 yq6lorbx=1,wy1vqfzu work(yq6lorbx,gp1jxzuh) = 0.0d0 23108 continue 23106 continue 23104 continue do 23110 ayfnwr1v=1,kuzxj1lo do 23112 yq6lorbx=1,dimw work(tgiyxdw1(yq6lorbx),dufozmt7(yq6lorbx)) = wmat(ayfnwr1v, &yq6lorbx) work(dufozmt7(yq6lorbx),tgiyxdw1(yq6lorbx)) = work(tgiyxdw1( &yq6lorbx),dufozmt7(yq6lorbx)) 23112 continue do 23114 yq6lorbx=1,wy1vqfzu do 23116 gp1jxzuh=1,wy1vqfzu wpasjmo8g(ezlgm2up(ayfnwr1v),yq6lorbx) = wpasjmo8g(ezlgm2up( &ayfnwr1v),yq6lorbx) + work(yq6lorbx,gp1jxzuh)*tlgduey8(ayfnwr1v, &gp1jxzuh) 23116 continue 23114 continue do 23118 yq6lorbx=1,dimw wbar(ezlgm2up(ayfnwr1v),yq6lorbx) = wbar(ezlgm2up(ayfnwr1v), &yq6lorbx) + wmat(ayfnwr1v,yq6lorbx) 23118 continue 23110 continue dvhw1ulq = 1 if(.not.(iz2nbfjc .eq. 
1))goto 23120 do 23122 ayfnwr1v=1,nef do 23124 yq6lorbx=1,dimw work(tgiyxdw1(yq6lorbx),dufozmt7(yq6lorbx)) = wbar(ayfnwr1v, &yq6lorbx) work(dufozmt7(yq6lorbx),tgiyxdw1(yq6lorbx)) = work(tgiyxdw1( &yq6lorbx),dufozmt7(yq6lorbx)) 23124 continue do 23126 yq6lorbx=1,wy1vqfzu work(yq6lorbx,wy1vqfzu+1)=wpasjmo8g(ayfnwr1v,yq6lorbx) 23126 continue call vcholf(work, work(1,wy1vqfzu+1), wy1vqfzu, dvhw1ulq, oneint) if(.not.(dvhw1ulq .ne. 1))goto 23128 return 23128 continue if(.not.(wuwbar .ne. 0))goto 23130 do 23132 yq6lorbx=1,dimw uwbar(yq6lorbx,ayfnwr1v) = work(tgiyxdw1(yq6lorbx),dufozmt7( &yq6lorbx)) 23132 continue 23130 continue do 23134 yq6lorbx=1,wy1vqfzu pasjmo8g(ayfnwr1v,yq6lorbx)=work(yq6lorbx,wy1vqfzu+1) 23134 continue 23122 continue goto 23121 23120 continue if(.not.(dimw .ne. imk5wjxg))goto 23136 do 23138 yq6lorbx=1,wy1vqfzu do 23140 gp1jxzuh=1,wy1vqfzu work(yq6lorbx,gp1jxzuh) = 0.0d0 23140 continue 23138 continue 23136 continue do 23142 ayfnwr1v=1,nef call qpsedg8xf(tgiyxdw1, dufozmt7, wy1vqfzu) do 23144 yq6lorbx=1,dimw work(tgiyxdw1(yq6lorbx),dufozmt7(yq6lorbx)) = wbar(ayfnwr1v, &yq6lorbx) work(dufozmt7(yq6lorbx),tgiyxdw1(yq6lorbx)) = work(tgiyxdw1( &yq6lorbx),dufozmt7(yq6lorbx)) 23144 continue do 23146 yq6lorbx=1,wy1vqfzu work(yq6lorbx,wy1vqfzu+1)=wpasjmo8g(ayfnwr1v,yq6lorbx) 23146 continue do 23148 yq6lorbx=1,kgwmz4ip do 23150 gp1jxzuh=yq6lorbx,kgwmz4ip work2(yq6lorbx,gp1jxzuh) = 0.0d0 do 23152 urohxe6t=1,wy1vqfzu do 23154 bpvaqm5z=1,wy1vqfzu work2(yq6lorbx,gp1jxzuh) = work2(yq6lorbx,gp1jxzuh) + hjm2ktyr( &urohxe6t,yq6lorbx) * work(urohxe6t,bpvaqm5z) * hjm2ktyr(bpvaqm5z, &gp1jxzuh) 23154 continue 23152 continue 23150 continue 23148 continue call qpsedg8xf(tgiyxdw1, dufozmt7, kgwmz4ip) do 23156 yq6lorbx=1,dimu wbar(ayfnwr1v,yq6lorbx) = work2(tgiyxdw1(yq6lorbx),dufozmt7( &yq6lorbx)) 23156 continue do 23158 yq6lorbx=1,kgwmz4ip work2(yq6lorbx,kgwmz4ip+1) = 0.0d0 do 23160 urohxe6t=1,wy1vqfzu work2(yq6lorbx,kgwmz4ip+1) = work2(yq6lorbx,kgwmz4ip+1) + &hjm2ktyr(urohxe6t,yq6lorbx) * work(urohxe6t,wy1vqfzu+1) 23160 continue 23158 continue do 23162 yq6lorbx=1,kgwmz4ip wpasjmo8g(ayfnwr1v,yq6lorbx) = work2(yq6lorbx,kgwmz4ip+1) 23162 continue call vcholf(work2, work2(1,kgwmz4ip+1), kgwmz4ip, dvhw1ulq, &oneint) if(.not.(dvhw1ulq .ne. 1))goto 23164 return 23164 continue if(.not.(wuwbar .ne. 0))goto 23166 do 23168 yq6lorbx=1,dimu uwbar(yq6lorbx,ayfnwr1v) = work2(tgiyxdw1(yq6lorbx),dufozmt7( &yq6lorbx)) 23168 continue 23166 continue do 23170 yq6lorbx=1,kgwmz4ip pasjmo8g(ayfnwr1v,yq6lorbx) = work2(yq6lorbx,kgwmz4ip+1) 23170 continue 23142 continue 23121 continue return end subroutine icpd0omv(enaqpzk9, he7mqnvy, gkdx5jal, grmuyvx9, ldk, &kuzxj1lo, nk, wy1vqfzu, jzwsy6tp, bmb, work, wmat, ifys6woa, dimw, & tgiyxdw1, dufozmt7, truen) implicit logical (a-z) integer ldk, kuzxj1lo, nk, wy1vqfzu, jzwsy6tp, dimw, tgiyxdw1(*), &dufozmt7(*), truen double precision enaqpzk9(ldk,nk*wy1vqfzu), he7mqnvy(kuzxj1lo), &gkdx5jal(nk+4), grmuyvx9(truen,wy1vqfzu), bmb(wy1vqfzu,wy1vqfzu), &work(wy1vqfzu,wy1vqfzu), wmat(kuzxj1lo,dimw), ifys6woa(kuzxj1lo, &wy1vqfzu) integer ayfnwr1v, yq6lorbx, gp1jxzuh, dqlr5bse, pqzfxw4i, &urohxe6t, bpvaqm5z double precision qaltf0nz, ms0qypiw(16), g9fvdrbw(4,1) if(.not.(jzwsy6tp .ne. 
0))goto 23172 do 23174 gp1jxzuh=1,wy1vqfzu do 23176 ayfnwr1v=1,kuzxj1lo grmuyvx9(ayfnwr1v,gp1jxzuh) = 0.0d0 23176 continue 23174 continue 23172 continue qaltf0nz = 0.10d-9 call qpsedg8xf(tgiyxdw1, dufozmt7, wy1vqfzu) do 23178 ayfnwr1v=1,kuzxj1lo do 23180 yq6lorbx=1,wy1vqfzu do 23182 gp1jxzuh=1,wy1vqfzu bmb(yq6lorbx,gp1jxzuh)=0.0d0 23182 continue 23180 continue call vinterv(gkdx5jal(1), (nk+1), he7mqnvy(ayfnwr1v), dqlr5bse, &pqzfxw4i) if(.not.(pqzfxw4i.eq. 1))goto 23184 if(.not.(he7mqnvy(ayfnwr1v) .le. (gkdx5jal(dqlr5bse)+qaltf0nz))) &goto 23186 dqlr5bse=dqlr5bse-1 goto 23187 23186 continue return 23187 continue 23184 continue call vbsplvd(gkdx5jal, 4, he7mqnvy(ayfnwr1v), dqlr5bse, ms0qypiw, &g9fvdrbw, 1) yq6lorbx= dqlr5bse-4+1 do 23188 urohxe6t=yq6lorbx,yq6lorbx+3 call vsel(urohxe6t, urohxe6t, wy1vqfzu, nk, ldk, enaqpzk9, work) call o0xlszqr(wy1vqfzu, g9fvdrbw(urohxe6t-yq6lorbx+1,1) * &g9fvdrbw(urohxe6t-yq6lorbx+1,1), work, bmb) 23188 continue do 23190 urohxe6t=yq6lorbx,yq6lorbx+3 do 23192 bpvaqm5z=urohxe6t+1,yq6lorbx+3 call vsel(urohxe6t, bpvaqm5z, wy1vqfzu, nk, ldk, enaqpzk9, work) call o0xlszqr(wy1vqfzu, 2.0d0 * g9fvdrbw(urohxe6t-yq6lorbx+1,1) * &g9fvdrbw(bpvaqm5z-yq6lorbx+1,1), work, bmb) 23192 continue 23190 continue if(.not.(jzwsy6tp .ne. 0))goto 23194 do 23196 yq6lorbx=1,wy1vqfzu grmuyvx9(ayfnwr1v,yq6lorbx) = bmb(yq6lorbx,yq6lorbx) 23196 continue 23194 continue call ovjnsmt2(bmb, wmat, work, ifys6woa, wy1vqfzu, kuzxj1lo, dimw, & tgiyxdw1, dufozmt7, ayfnwr1v) 23178 continue return end subroutine o0xlszqr(wy1vqfzu, g9fvdrbw, work, bmb) implicit logical (a-z) integer wy1vqfzu double precision g9fvdrbw, work(wy1vqfzu,wy1vqfzu), bmb(wy1vqfzu, &wy1vqfzu) integer yq6lorbx, gp1jxzuh do 23198 yq6lorbx=1,wy1vqfzu do 23200 gp1jxzuh=1,wy1vqfzu work(yq6lorbx,gp1jxzuh) = work(yq6lorbx,gp1jxzuh) * g9fvdrbw 23200 continue 23198 continue do 23202 yq6lorbx=1,wy1vqfzu do 23204 gp1jxzuh=1,wy1vqfzu bmb(gp1jxzuh,yq6lorbx) = bmb(gp1jxzuh,yq6lorbx) + work(gp1jxzuh, &yq6lorbx) 23204 continue 23202 continue return end subroutine vsel(s, t, wy1vqfzu, nk, ldk, minv, work) implicit logical (a-z) integer s, t, wy1vqfzu, nk, ldk double precision minv(ldk,nk*wy1vqfzu), work(wy1vqfzu,wy1vqfzu) integer ayfnwr1v, yq6lorbx, biuvowq2, nbj8tdsk do 23206 ayfnwr1v=1,wy1vqfzu do 23208 yq6lorbx=1,wy1vqfzu work(ayfnwr1v,yq6lorbx) = 0.0d0 23208 continue 23206 continue if(.not.(s .ne. 
t))goto 23210 do 23212 ayfnwr1v=1,wy1vqfzu biuvowq2 = (s-1)*wy1vqfzu + ayfnwr1v do 23214 yq6lorbx=1,wy1vqfzu nbj8tdsk = (t-1)*wy1vqfzu + yq6lorbx work(ayfnwr1v,yq6lorbx) = minv(ldk-(nbj8tdsk-biuvowq2), nbj8tdsk) 23214 continue 23212 continue goto 23211 23210 continue do 23216 ayfnwr1v=1,wy1vqfzu biuvowq2 = (s-1)*wy1vqfzu + ayfnwr1v do 23218 yq6lorbx=ayfnwr1v,wy1vqfzu nbj8tdsk = (t-1)*wy1vqfzu + yq6lorbx work(ayfnwr1v,yq6lorbx) = minv(ldk-(nbj8tdsk-biuvowq2), nbj8tdsk) 23218 continue 23216 continue do 23220 ayfnwr1v=1,wy1vqfzu do 23222 yq6lorbx=ayfnwr1v+1,wy1vqfzu work(yq6lorbx,ayfnwr1v) = work(ayfnwr1v,yq6lorbx) 23222 continue 23220 continue 23211 continue return end subroutine ovjnsmt2(bmb, wmat, work, ifys6woa, wy1vqfzu, kuzxj1lo, & dimw, tgiyxdw1, dufozmt7, iii) implicit logical (a-z) integer wy1vqfzu, kuzxj1lo, dimw, tgiyxdw1(*), dufozmt7(*), iii double precision bmb(wy1vqfzu,wy1vqfzu), wmat(kuzxj1lo,dimw), &work(wy1vqfzu,wy1vqfzu), ifys6woa(kuzxj1lo,wy1vqfzu) double precision q6zdcwxk, obr6tcex integer yq6lorbx, gp1jxzuh, urohxe6t, bpvaqm5z do 23224 bpvaqm5z=1,wy1vqfzu do 23226 yq6lorbx=1,wy1vqfzu do 23228 gp1jxzuh=1,wy1vqfzu work(gp1jxzuh,yq6lorbx) = 0.0d0 23228 continue 23226 continue do 23230 urohxe6t=1,dimw obr6tcex = wmat(iii,urohxe6t) work(tgiyxdw1(urohxe6t),dufozmt7(urohxe6t)) = obr6tcex work(dufozmt7(urohxe6t),tgiyxdw1(urohxe6t)) = obr6tcex 23230 continue q6zdcwxk = 0.0d0 do 23232 yq6lorbx=1,wy1vqfzu q6zdcwxk = q6zdcwxk + bmb(bpvaqm5z,yq6lorbx) * work(yq6lorbx, &bpvaqm5z) 23232 continue ifys6woa(iii,bpvaqm5z) = q6zdcwxk 23224 continue return end subroutine vicb2(enaqpzk9, wpuarq2m, d, uu, wy1vqfzu, kuzxj1lo) implicit logical (a-z) integer wy1vqfzu, kuzxj1lo double precision enaqpzk9(wy1vqfzu+1,kuzxj1lo), wpuarq2m(wy1vqfzu+ &1,kuzxj1lo), d(kuzxj1lo), uu(wy1vqfzu+1,wy1vqfzu+1) integer ayfnwr1v, gp1jxzuh, lsvdbx3tk, uplim, sedf7mxb, hofjnx2e, &kij0gwer enaqpzk9(wy1vqfzu+1,kuzxj1lo) = 1.0d0 / d(kuzxj1lo) hofjnx2e = wy1vqfzu+1 sedf7mxb = kuzxj1lo+1 - hofjnx2e do 23234 kij0gwer=sedf7mxb,kuzxj1lo do 23236 ayfnwr1v=1,hofjnx2e uu(ayfnwr1v, kij0gwer-sedf7mxb+1) = wpuarq2m(ayfnwr1v, kij0gwer) 23236 continue 23234 continue ayfnwr1v = kuzxj1lo-1 23238 if(.not.(ayfnwr1v.ge.1))goto 23240 if(.not.(wy1vqfzu .lt. kuzxj1lo-ayfnwr1v))goto 23241 uplim = wy1vqfzu goto 23242 23241 continue uplim = kuzxj1lo-ayfnwr1v 23242 continue lsvdbx3tk=1 23243 if(.not.(lsvdbx3tk.le.uplim))goto 23245 enaqpzk9(-lsvdbx3tk+wy1vqfzu+1,ayfnwr1v+lsvdbx3tk) = 0.0d0 gp1jxzuh=1 23246 if(.not.(gp1jxzuh.le.lsvdbx3tk))goto 23248 enaqpzk9(-lsvdbx3tk+wy1vqfzu+1,ayfnwr1v+lsvdbx3tk) = enaqpzk9(- &lsvdbx3tk+wy1vqfzu+1,ayfnwr1v+lsvdbx3tk) - uu(-gp1jxzuh+wy1vqfzu+ &1,ayfnwr1v+gp1jxzuh -sedf7mxb+1) * enaqpzk9(gp1jxzuh-lsvdbx3tk+ &wy1vqfzu+1,ayfnwr1v+lsvdbx3tk) gp1jxzuh=gp1jxzuh+1 goto 23246 23248 continue 23249 if(.not.(gp1jxzuh.le.uplim))goto 23251 enaqpzk9(-lsvdbx3tk+wy1vqfzu+1,ayfnwr1v+lsvdbx3tk) = enaqpzk9(- &lsvdbx3tk+wy1vqfzu+1,ayfnwr1v+lsvdbx3tk) - uu(-gp1jxzuh+wy1vqfzu+ &1,ayfnwr1v+gp1jxzuh -sedf7mxb+1) * enaqpzk9(lsvdbx3tk-gp1jxzuh+ &wy1vqfzu+1,ayfnwr1v+gp1jxzuh) gp1jxzuh=gp1jxzuh+1 goto 23249 23251 continue lsvdbx3tk=lsvdbx3tk+1 goto 23243 23245 continue enaqpzk9(wy1vqfzu+1,ayfnwr1v) = 1.0d0 / d(ayfnwr1v) lsvdbx3tk = 1 23252 if(.not.(lsvdbx3tk.le.uplim))goto 23254 enaqpzk9(wy1vqfzu+1,ayfnwr1v) = enaqpzk9(wy1vqfzu+1,ayfnwr1v) - &uu(-lsvdbx3tk+wy1vqfzu+1,ayfnwr1v+lsvdbx3tk -sedf7mxb+1) * &enaqpzk9(-lsvdbx3tk+wy1vqfzu+1,ayfnwr1v+lsvdbx3tk) lsvdbx3tk=lsvdbx3tk+1 goto 23252 23254 continue if(.not.(ayfnwr1v .eq. 
sedf7mxb))goto 23255 sedf7mxb = sedf7mxb-1 if(.not.(sedf7mxb .lt. 1))goto 23257 sedf7mxb = 1 goto 23258 23257 continue kij0gwer=hofjnx2e-1 23259 if(.not.(kij0gwer.ge.1))goto 23261 gp1jxzuh=1 23262 if(.not.(gp1jxzuh.le.hofjnx2e))goto 23264 uu(gp1jxzuh,kij0gwer+1) = uu(gp1jxzuh,kij0gwer) gp1jxzuh=gp1jxzuh+1 goto 23262 23264 continue kij0gwer=kij0gwer-1 goto 23259 23261 continue gp1jxzuh=1 23265 if(.not.(gp1jxzuh.le.hofjnx2e))goto 23267 uu(gp1jxzuh,1) = wpuarq2m(gp1jxzuh,sedf7mxb) gp1jxzuh=gp1jxzuh+1 goto 23265 23267 continue 23258 continue 23255 continue ayfnwr1v = ayfnwr1v-1 goto 23238 23240 continue return end subroutine ewg7qruh(sjwyig9tto,tlgduey8,wmat, kuzxj1lo,wy1vqfzu, &ezlgm2up,nef, wbkq9zyi,dof,smo,cov, s0, xin,yin,rbne6ouj,win, &work1,work3, dimw, fbd5yktj, ldk, info, yzoe1rsp, sgdub, rpyis2kc, & zv2xfhei, acpios9q,tgiyxdw1,dufozmt7, bmb, ifys6woa, wkmm, &iz2nbfjc,kgwmz4ip,ges1xpkr, hjm2ktyr, beta, fasrkub3, sout, &r0oydcxb, ub4xioar, effect, uwin) implicit logical (a-z) integer kuzxj1lo,wy1vqfzu,ezlgm2up(kuzxj1lo),nef, dimw, fbd5yktj, &ldk, info, yzoe1rsp, acpios9q,tgiyxdw1(*),dufozmt7(*), iz2nbfjc, &kgwmz4ip, ges1xpkr(kgwmz4ip*2) double precision sjwyig9tto(kuzxj1lo), tlgduey8(kuzxj1lo,wy1vqfzu) &, wmat(kuzxj1lo,dimw), wbkq9zyi(kgwmz4ip), dof(kgwmz4ip), smo( &kuzxj1lo,kgwmz4ip), cov(kuzxj1lo,kgwmz4ip) double precision s0(2*kgwmz4ip, 2*kgwmz4ip,2) double precision work1(*), work3(*), sgdub(*), rpyis2kc(*), &zv2xfhei(acpios9q+4) double precision xin(nef), yin(nef,wy1vqfzu), rbne6ouj(nef, &wy1vqfzu), win(nef,*), bmb(*), ifys6woa(nef,kgwmz4ip), wkmm( &wy1vqfzu,wy1vqfzu,16), hjm2ktyr(wy1vqfzu,kgwmz4ip) double precision beta(2*kgwmz4ip), fasrkub3(2*kgwmz4ip), sout(nef, &kgwmz4ip), r0oydcxb(kgwmz4ip,nef), ub4xioar(kgwmz4ip,nef), effect( &nef*kgwmz4ip), uwin(*) integer dimwin integer ayfnwr1v, yq6lorbx, gp1jxzuh, rutyk8mg, xjc4ywlh, job, &qemj9asg, dvhw1ulq integer oneint double precision xmin, xrange, pvofyg8z oneint = 1 if(.not.(iz2nbfjc .eq. 1))goto 23268 dimwin = dimw goto 23269 23268 continue dimwin = kgwmz4ip*(kgwmz4ip+1)/2 23269 continue call qpsedg8xf(tgiyxdw1, dufozmt7, wy1vqfzu) call vsuff9(kuzxj1lo,nef,ezlgm2up, sjwyig9tto,tlgduey8,wmat, xin, &yin,win,uwin,rbne6ouj, wy1vqfzu, dimw, dimwin, tgiyxdw1, dufozmt7, & wkmm, wkmm(1,1,3), hjm2ktyr, kgwmz4ip, iz2nbfjc, oneint, &dvhw1ulq) if(.not.(dvhw1ulq .ne. 1))goto 23270 return 23270 continue xmin = xin(1) xrange = xin(nef)-xin(1) do 23272 ayfnwr1v=1,nef xin(ayfnwr1v) = (xin(ayfnwr1v)-xmin)/xrange 23272 continue ldk = 4*kgwmz4ip fbd5yktj = 0 do 23274 yq6lorbx=1,kgwmz4ip if(.not.(wbkq9zyi(yq6lorbx) .eq. 0.0d0))goto 23276 dof(yq6lorbx) = dof(yq6lorbx) + 1.0d0 23276 continue 23274 continue call qpsedg8xf(tgiyxdw1, dufozmt7, kgwmz4ip) call vsplin(xin,rbne6ouj,win,nef,zv2xfhei, acpios9q,ldk,kgwmz4ip, &dimwin, tgiyxdw1,dufozmt7, wkmm, wbkq9zyi, info, sout, rpyis2kc, &work3(1), work3(1+acpios9q*kgwmz4ip*ldk), sgdub, cov, yzoe1rsp, &bmb, ifys6woa, dof, work1, fbd5yktj, kuzxj1lo) do 23278 yq6lorbx=1,kgwmz4ip dof(yq6lorbx) = -1.0d0 do 23280 ayfnwr1v=1,nef dof(yq6lorbx)=dof(yq6lorbx)+ifys6woa(ayfnwr1v,yq6lorbx) 23280 continue 23278 continue if(.not.(kgwmz4ip .ge. 
1))goto 23282 pvofyg8z = 1.0d-7 rutyk8mg = nef*kgwmz4ip xjc4ywlh = 2*kgwmz4ip job = 101 info = 1 call x6kanjdh(xin, work3, nef, kgwmz4ip) call qpsedg8xf(tgiyxdw1, dufozmt7, kgwmz4ip) call mxrbkut0f(uwin, work3, kgwmz4ip, xjc4ywlh, nef, wkmm(1,1,1), &wkmm(1,1,2), tgiyxdw1, dufozmt7, dimwin, rutyk8mg) do 23284 gp1jxzuh=1,xjc4ywlh ges1xpkr(gp1jxzuh) = gp1jxzuh 23284 continue call vqrdca(work3,rutyk8mg,rutyk8mg,xjc4ywlh,fasrkub3,ges1xpkr, &work1,qemj9asg,pvofyg8z) call qpsedg8xf(tgiyxdw1, dufozmt7, kgwmz4ip) call nudh6szqf(uwin,sout,r0oydcxb,dimwin,tgiyxdw1,dufozmt7,nef, &kgwmz4ip,wkmm) call vdqrsl(work3,rutyk8mg,rutyk8mg,qemj9asg,fasrkub3,r0oydcxb, &work1(1),effect,beta, work1(1),ub4xioar,job,info) call vbksf(uwin,ub4xioar,kgwmz4ip,nef,wkmm,tgiyxdw1,dufozmt7, &dimwin) if(.not.(yzoe1rsp .ne. 0))goto 23286 call vrinvf9(work3, rutyk8mg, xjc4ywlh, dvhw1ulq, s0(1,1,1), s0(1, &1,2)) if(.not.(dvhw1ulq .ne. 1))goto 23288 return 23288 continue do 23290 yq6lorbx=1,kgwmz4ip do 23292 ayfnwr1v=1,nef cov(ayfnwr1v,yq6lorbx) = cov(ayfnwr1v,yq6lorbx) - s0(yq6lorbx, &yq6lorbx,1) - xin(ayfnwr1v) * (2.0d0 * s0(yq6lorbx,yq6lorbx+ &kgwmz4ip,1) + xin(ayfnwr1v) * s0(yq6lorbx+kgwmz4ip,yq6lorbx+ &kgwmz4ip,1)) 23292 continue 23290 continue 23286 continue goto 23283 23282 continue call dsrt0gem(nef, xin, win, sout, ub4xioar, cov, yzoe1rsp) 23283 continue do 23294 ayfnwr1v=1,nef do 23296 yq6lorbx=1,kgwmz4ip sout(ayfnwr1v,yq6lorbx) = sout(ayfnwr1v,yq6lorbx) - ub4xioar( &yq6lorbx,ayfnwr1v) 23296 continue 23294 continue do 23298 yq6lorbx=1,kgwmz4ip call shm8ynte(kuzxj1lo, nef, ezlgm2up, sout(1,yq6lorbx), smo(1, &yq6lorbx)) 23298 continue return end subroutine vbfa( n,wy1vqfzu,psdvgce3, he7mqnvy,tlgduey8,wmat, &wbkq9zyi,dof, ezlgm2up,nef,which, ub4xioar,kispwgx3,m0ibglfx,s0, &beta,cov,zpcqv3uj, vc6hatuj,fasrkub3, ges1xpkr, xbig, wpuarq2m, &hjm2ktyr, jnxpuym2, hnpt1zym, fzm1ihwj, iz2nbfjc, work1, wk2, &wkmm, work3, sgdub, bmb, ifys6woa, mwk, twk, rpyis2kc, zv2xfhei, &resss, nbzjkpi3, acpios9q, itwk, jwbkl9fp) implicit logical (a-z) integer irhm4cfa, n, wy1vqfzu, psdvgce3(15), ezlgm2up(*),nef(*), &which(*), ges1xpkr(*) integer jnxpuym2(*), hnpt1zym(*), fzm1ihwj(*), iz2nbfjc(*), &nbzjkpi3(*), acpios9q(*), itwk(*), jwbkl9fp(*) double precision he7mqnvy(*),tlgduey8(*),wmat(*),wbkq9zyi(*),dof(* &), ub4xioar(*),kispwgx3(*), m0ibglfx(*), s0(wy1vqfzu), beta(*), &cov(*),zpcqv3uj, vc6hatuj(*),fasrkub3(*) double precision xbig(*), wpuarq2m(*), hjm2ktyr(*), work1(*), wk2( &n,wy1vqfzu,3), wkmm(wy1vqfzu,wy1vqfzu,16), work3(*), sgdub(*), &bmb(*), ifys6woa(*), mwk(*), twk(*), rpyis2kc(*), zv2xfhei(*), &resss integer p,q,yzoe1rsp,niter,gtrlbz3e, rutyk8mg, xjc4ywlh, lyma1kwc, & dimw, dimu, fbd5yktj,ldk integer iter integer xs4wtvlg integer ayfnwr1v, imk5wjxg, qemj9asg irhm4cfa = 0 imk5wjxg = wy1vqfzu*(wy1vqfzu+1)/2 p=psdvgce3(2) q=psdvgce3(3) yzoe1rsp= 0 if(.not.(psdvgce3(4) .eq. 1))goto 23300 yzoe1rsp = 1 23300 continue gtrlbz3e=psdvgce3(6) qemj9asg=psdvgce3(7) rutyk8mg=psdvgce3(9) xjc4ywlh=psdvgce3(10) lyma1kwc=psdvgce3(11) dimw=psdvgce3(12) dimu=psdvgce3(13) fbd5yktj = 0 ldk=psdvgce3(15) xs4wtvlg = 1 if(.not.(lyma1kwc .gt. 0))goto 23302 do 23304 ayfnwr1v=1,lyma1kwc work1(ayfnwr1v) = dof(ayfnwr1v) work1(ayfnwr1v+lyma1kwc) = wbkq9zyi(ayfnwr1v) work1(ayfnwr1v+2*lyma1kwc) = dof(ayfnwr1v) 23304 continue 23302 continue iter = 0 23306 if(.not.(xs4wtvlg .ne. 0))goto 23307 iter = iter+1 if(.not.(iter .gt. 1))goto 23308 if(.not.(lyma1kwc .gt. 
0))goto 23310 do 23312 ayfnwr1v=1,lyma1kwc if(.not.(work1(ayfnwr1v+lyma1kwc).eq.0.0d0 .and.(dabs(work1( &ayfnwr1v+2*lyma1kwc)-dof(ayfnwr1v))/dof(ayfnwr1v).gt.0.05d0))) &goto 23314 work1(ayfnwr1v+2*lyma1kwc) = dof(ayfnwr1v) dof(ayfnwr1v)=work1(ayfnwr1v) wbkq9zyi(ayfnwr1v)=0.0d0 goto 23315 23314 continue work1(ayfnwr1v+2*lyma1kwc) = dof(ayfnwr1v) 23315 continue 23312 continue 23310 continue 23308 continue call vbfa1(irhm4cfa,n,wy1vqfzu, he7mqnvy,tlgduey8,wmat,wbkq9zyi, &dof, ezlgm2up,nef,which, ub4xioar,kispwgx3,m0ibglfx,s0, beta,cov, &zpcqv3uj, vc6hatuj,fasrkub3, qemj9asg,ges1xpkr, xbig, wpuarq2m, &hjm2ktyr, jnxpuym2, hnpt1zym, fzm1ihwj(1), fzm1ihwj(1 + imk5wjxg), & iz2nbfjc, work1(1+3*lyma1kwc), wkmm, work3, sgdub, bmb, ifys6woa, & mwk, twk, rpyis2kc, zv2xfhei, resss, nbzjkpi3, acpios9q, itwk, &jwbkl9fp, p,q,yzoe1rsp,niter,gtrlbz3e, wk2(1,1,1), wk2(1,1,2), &wk2(1,1,3), rutyk8mg, xjc4ywlh, lyma1kwc, dimw, dimu, fbd5yktj, &ldk) if(.not.(irhm4cfa .ne. 0))goto 23316 call vcall2(xs4wtvlg,w,y,m0ibglfx,beta,wpuarq2m) goto 23317 23316 continue xs4wtvlg = 0 23317 continue if(.not.(xs4wtvlg .ne. 0))goto 23318 qemj9asg=0 23318 continue goto 23306 23307 continue psdvgce3(7) = qemj9asg psdvgce3(5) = niter psdvgce3(14) = fbd5yktj return end subroutine vbfa1(irhm4cfa,kuzxj1lo,wy1vqfzu, he7mqnvy,tlgduey8, &wmat,wbkq9zyi,dof, ezlgm2up,nef,which, ub4xioar,kispwgx3,m0ibglfx, &s0, beta,cov,zpcqv3uj, vc6hatuj,fasrkub3, qemj9asg,ges1xpkr, xbig, & wpuarq2m, hjm2ktyr, jnxpuym2, hnpt1zym, tgiyxdw1, dufozmt7, &iz2nbfjc, work1, wkmm, work3, sgdub, bmb, ifys6woa, mwk, twk, &rpyis2kc, zv2xfhei, resss, nbzjkpi3, acpios9q, itwk, jwbkl9fp, p, &q, yzoe1rsp, niter, gtrlbz3e, ghz9vuba, oldmat, wk2, rutyk8mg, &xjc4ywlh, lyma1kwc, dimw, dimu, fbd5yktj, ldk) implicit logical (a-z) integer qemj9asg integer dufozmt7(*), tgiyxdw1(*) integer p, q, yzoe1rsp, niter, gtrlbz3e, rutyk8mg, xjc4ywlh, &lyma1kwc, dimw, dimu, fbd5yktj, ldk integer irhm4cfa, kuzxj1lo, wy1vqfzu, ezlgm2up(kuzxj1lo,q),nef(q), &which(q), ges1xpkr(xjc4ywlh) integer jnxpuym2(q), hnpt1zym(q), iz2nbfjc(q), nbzjkpi3(q+1), &acpios9q(q), itwk(*), jwbkl9fp(q+1) double precision he7mqnvy(kuzxj1lo,p), tlgduey8(kuzxj1lo,wy1vqfzu) &, wmat(kuzxj1lo,dimw), wbkq9zyi(lyma1kwc), dof(lyma1kwc) double precision ub4xioar(wy1vqfzu,kuzxj1lo), kispwgx3(kuzxj1lo, &lyma1kwc), m0ibglfx(wy1vqfzu,kuzxj1lo), s0(wy1vqfzu), beta( &xjc4ywlh), cov(kuzxj1lo,lyma1kwc), zpcqv3uj, vc6hatuj(rutyk8mg, &xjc4ywlh), fasrkub3(xjc4ywlh) double precision xbig(rutyk8mg,xjc4ywlh), wpuarq2m(dimu,kuzxj1lo), & hjm2ktyr(wy1vqfzu,lyma1kwc), work1(*), wk2(kuzxj1lo,wy1vqfzu), &wkmm(wy1vqfzu,wy1vqfzu,16), work3(*), sgdub(*), bmb(*), ifys6woa(* &), mwk(*), twk(*), rpyis2kc(*), zv2xfhei(*), resss double precision ghz9vuba(kuzxj1lo,wy1vqfzu), oldmat(kuzxj1lo, &wy1vqfzu) integer job,info,nefk integer ayfnwr1v, yq6lorbx, gp1jxzuh, wg1xifdy double precision vo4mtexk, rd9beyfk,ratio, deltaf, z4vrscot, &pvofyg8z pvofyg8z = 1.0d-7 job = 101 info = 1 if(.not.(q .eq. 0))goto 23320 gtrlbz3e = 1 23320 continue if(.not.(irhm4cfa .ne. 
0))goto 23322 do 23324 yq6lorbx=1,xjc4ywlh do 23326 ayfnwr1v=1,rutyk8mg vc6hatuj(ayfnwr1v,yq6lorbx)=xbig(ayfnwr1v,yq6lorbx) 23326 continue 23324 continue 23322 continue if(.not.(qemj9asg.eq.0))goto 23328 call qpsedg8xf(tgiyxdw1,dufozmt7,wy1vqfzu) call mxrbkut0f(wpuarq2m, vc6hatuj, wy1vqfzu, xjc4ywlh, kuzxj1lo, &wkmm(1,1,1), wkmm(1,1,2), tgiyxdw1, dufozmt7, dimu, rutyk8mg) do 23330 gp1jxzuh=1,xjc4ywlh ges1xpkr(gp1jxzuh) = gp1jxzuh 23330 continue call vqrdca(vc6hatuj,rutyk8mg,rutyk8mg,xjc4ywlh,fasrkub3,ges1xpkr, &twk,qemj9asg,pvofyg8z) 23328 continue do 23332 yq6lorbx=1,wy1vqfzu do 23334 ayfnwr1v=1,kuzxj1lo m0ibglfx(yq6lorbx,ayfnwr1v)=0.0d0 23334 continue if(.not.(q .gt. 0))goto 23336 do 23338 gp1jxzuh=1,q if(.not.(iz2nbfjc(gp1jxzuh).eq.1))goto 23340 do 23342 ayfnwr1v=1,kuzxj1lo m0ibglfx(yq6lorbx,ayfnwr1v) = m0ibglfx(yq6lorbx,ayfnwr1v) + &kispwgx3(ayfnwr1v,hnpt1zym(gp1jxzuh)+yq6lorbx-1) 23342 continue goto 23341 23340 continue do 23344 wg1xifdy=1,jnxpuym2(gp1jxzuh) do 23346 ayfnwr1v=1,kuzxj1lo m0ibglfx(yq6lorbx,ayfnwr1v) = m0ibglfx(yq6lorbx,ayfnwr1v) + &hjm2ktyr(yq6lorbx,hnpt1zym(gp1jxzuh)+wg1xifdy-1) * kispwgx3( &ayfnwr1v,hnpt1zym(gp1jxzuh)+wg1xifdy-1) 23346 continue 23344 continue 23341 continue 23338 continue 23336 continue 23332 continue niter = 0 ratio = 1.0d0 23348 if(.not.((ratio .gt. zpcqv3uj ) .and. (niter .lt. gtrlbz3e))) &goto 23349 niter = niter + 1 deltaf = 0.0d0 do 23350 yq6lorbx=1,wy1vqfzu do 23352 ayfnwr1v=1,kuzxj1lo ghz9vuba(ayfnwr1v,yq6lorbx)=tlgduey8(ayfnwr1v,yq6lorbx)-m0ibglfx( &yq6lorbx,ayfnwr1v) 23352 continue 23350 continue call qpsedg8xf(tgiyxdw1,dufozmt7,wy1vqfzu) call nudh6szqf(wpuarq2m,ghz9vuba, twk, dimu,tgiyxdw1,dufozmt7, &kuzxj1lo,wy1vqfzu,wkmm) call vdqrsl(vc6hatuj,rutyk8mg,rutyk8mg,qemj9asg,fasrkub3, twk, &wk2,wk2, beta, wk2,ub4xioar,job,info) resss=0.0d0 do 23354 ayfnwr1v=1,kuzxj1lo do 23356 yq6lorbx=1,wy1vqfzu vo4mtexk = twk((ayfnwr1v-1)*wy1vqfzu+yq6lorbx) - ub4xioar( &yq6lorbx,ayfnwr1v) resss = resss + vo4mtexk * vo4mtexk 23356 continue 23354 continue call vbksf(wpuarq2m,ub4xioar,wy1vqfzu,kuzxj1lo,wkmm,tgiyxdw1, &dufozmt7,dimu) if(.not.(q .gt. 
0))goto 23358 do 23360 gp1jxzuh=1,q do 23362 yq6lorbx=1,wy1vqfzu if(.not.(iz2nbfjc(gp1jxzuh).eq.1))goto 23364 do 23366 ayfnwr1v=1,kuzxj1lo oldmat(ayfnwr1v,yq6lorbx)=kispwgx3(ayfnwr1v,hnpt1zym(gp1jxzuh)+ &yq6lorbx-1) ghz9vuba(ayfnwr1v,yq6lorbx) = tlgduey8(ayfnwr1v,yq6lorbx) - &ub4xioar(yq6lorbx,ayfnwr1v) - m0ibglfx(yq6lorbx,ayfnwr1v) + &oldmat(ayfnwr1v,yq6lorbx) 23366 continue goto 23365 23364 continue do 23368 ayfnwr1v=1,kuzxj1lo oldmat(ayfnwr1v,yq6lorbx)=0.0d0 do 23370 wg1xifdy=1,jnxpuym2(gp1jxzuh) oldmat(ayfnwr1v,yq6lorbx)=oldmat(ayfnwr1v,yq6lorbx) + hjm2ktyr( &yq6lorbx,hnpt1zym(gp1jxzuh)+wg1xifdy-1) * kispwgx3(ayfnwr1v, &hnpt1zym(gp1jxzuh)+wg1xifdy-1) 23370 continue ghz9vuba(ayfnwr1v,yq6lorbx) = tlgduey8(ayfnwr1v,yq6lorbx) - &ub4xioar(yq6lorbx,ayfnwr1v) - m0ibglfx(yq6lorbx,ayfnwr1v) + &oldmat(ayfnwr1v,yq6lorbx) 23368 continue 23365 continue 23362 continue nefk = nef(gp1jxzuh) call ewg7qruh(he7mqnvy(1,which(gp1jxzuh)),ghz9vuba,wmat, kuzxj1lo, &wy1vqfzu,ezlgm2up(1,gp1jxzuh),nefk, wbkq9zyi(hnpt1zym(gp1jxzuh)), &dof(hnpt1zym(gp1jxzuh)), kispwgx3(1,hnpt1zym(gp1jxzuh)), cov(1, &hnpt1zym(gp1jxzuh)), s0, mwk(1), mwk(1+nefk), mwk(1+nefk*( &wy1vqfzu+1)), mwk(1+nefk*(2*wy1vqfzu+1)), work1, work3, dimw, &fbd5yktj, ldk, info, yzoe1rsp, sgdub, rpyis2kc(nbzjkpi3(gp1jxzuh)) &, zv2xfhei(jwbkl9fp(gp1jxzuh)), acpios9q(gp1jxzuh),tgiyxdw1, &dufozmt7, bmb, ifys6woa, wkmm, iz2nbfjc(gp1jxzuh),jnxpuym2( &gp1jxzuh),itwk, hjm2ktyr(1,hnpt1zym(gp1jxzuh)), twk(1), twk(1+2* &jnxpuym2(gp1jxzuh)), twk(1+4*jnxpuym2(gp1jxzuh)), twk(1+(4+nefk)* &jnxpuym2(gp1jxzuh)), twk(1+(4+2*nefk)*jnxpuym2(gp1jxzuh)), twk(1+( &4+3*nefk)*jnxpuym2(gp1jxzuh)), twk(1+(4+4*nefk)*jnxpuym2(gp1jxzuh) &)) do 23372 yq6lorbx=1,wy1vqfzu if(.not.(iz2nbfjc(gp1jxzuh).eq.1))goto 23374 do 23376 ayfnwr1v=1,kuzxj1lo m0ibglfx(yq6lorbx,ayfnwr1v) = m0ibglfx(yq6lorbx,ayfnwr1v) + &kispwgx3(ayfnwr1v,hnpt1zym(gp1jxzuh)+yq6lorbx-1) 23376 continue goto 23375 23374 continue do 23378 wg1xifdy=1,jnxpuym2(gp1jxzuh) do 23380 ayfnwr1v=1,kuzxj1lo m0ibglfx(yq6lorbx,ayfnwr1v)=m0ibglfx(yq6lorbx,ayfnwr1v) + &hjm2ktyr(yq6lorbx,hnpt1zym(gp1jxzuh)+wg1xifdy-1) * kispwgx3( &ayfnwr1v,hnpt1zym(gp1jxzuh)+wg1xifdy-1) 23380 continue 23378 continue 23375 continue do 23382 ayfnwr1v=1,kuzxj1lo m0ibglfx(yq6lorbx,ayfnwr1v) = m0ibglfx(yq6lorbx,ayfnwr1v) - &oldmat(ayfnwr1v,yq6lorbx) 23382 continue 23372 continue do 23384 yq6lorbx=1,wy1vqfzu if(.not.(iz2nbfjc(gp1jxzuh) .eq. 1))goto 23386 deltaf = deltaf + rd9beyfk(kuzxj1lo,oldmat(1,yq6lorbx),kispwgx3(1, &hnpt1zym(gp1jxzuh)+yq6lorbx-1), wmat(1,yq6lorbx)) goto 23387 23386 continue do 23388 ayfnwr1v=1,kuzxj1lo twk(ayfnwr1v) = 0.0d0 do 23390 wg1xifdy=1,jnxpuym2(gp1jxzuh) twk(ayfnwr1v) = twk(ayfnwr1v) + hjm2ktyr(yq6lorbx,hnpt1zym( &gp1jxzuh)+wg1xifdy-1) * kispwgx3(ayfnwr1v,hnpt1zym(gp1jxzuh)+ &wg1xifdy-1) 23390 continue 23388 continue deltaf = deltaf + rd9beyfk(kuzxj1lo, oldmat(1,yq6lorbx), twk, &wmat(1,yq6lorbx)) 23387 continue 23384 continue do 23392 yq6lorbx=1,wy1vqfzu do 23394 ayfnwr1v=1,kuzxj1lo ghz9vuba(ayfnwr1v,yq6lorbx)=tlgduey8(ayfnwr1v,yq6lorbx)-m0ibglfx( &yq6lorbx,ayfnwr1v) 23394 continue 23392 continue call qpsedg8xf(tgiyxdw1,dufozmt7,wy1vqfzu) call nudh6szqf(wpuarq2m,ghz9vuba, twk, dimu,tgiyxdw1,dufozmt7, &kuzxj1lo,wy1vqfzu,wkmm) call vdqrsl(vc6hatuj,rutyk8mg,rutyk8mg,qemj9asg,fasrkub3, twk, &wk2,wk2, beta, wk2,ub4xioar,job,info) call vbksf(wpuarq2m,ub4xioar,wy1vqfzu,kuzxj1lo,wkmm,tgiyxdw1, &dufozmt7,dimu) 23360 continue 23358 continue if(.not.(q .gt. 
0))goto 23396 z4vrscot=0.0d0 do 23398 yq6lorbx=1,wy1vqfzu do 23400 ayfnwr1v=1,kuzxj1lo z4vrscot = z4vrscot + wmat(ayfnwr1v,yq6lorbx) * m0ibglfx(yq6lorbx, &ayfnwr1v)**2 23400 continue 23398 continue if(.not.(z4vrscot .gt. 0.0d0))goto 23402 ratio = dsqrt(deltaf/z4vrscot) goto 23403 23402 continue ratio = 0.0d0 23403 continue 23396 continue if(.not.(niter .eq. 1))goto 23404 ratio = 1.0d0 23404 continue goto 23348 23349 continue do 23406 yq6lorbx=1,xjc4ywlh twk(yq6lorbx)=beta(yq6lorbx) 23406 continue do 23408 yq6lorbx=1,xjc4ywlh beta(ges1xpkr(yq6lorbx))=twk(yq6lorbx) 23408 continue do 23410 ayfnwr1v=1,kuzxj1lo do 23412 yq6lorbx=1,wy1vqfzu m0ibglfx(yq6lorbx,ayfnwr1v) = m0ibglfx(yq6lorbx,ayfnwr1v) + &ub4xioar(yq6lorbx,ayfnwr1v) 23412 continue 23410 continue if(.not.((yzoe1rsp .ne. 0) .and. (q .gt. 0)))goto 23414 do 23416 gp1jxzuh=1,q do 23418 wg1xifdy=1,jnxpuym2(gp1jxzuh) call shm8ynte(kuzxj1lo,nef(gp1jxzuh),ezlgm2up(1,gp1jxzuh), cov(1, &hnpt1zym(gp1jxzuh)+wg1xifdy-1),oldmat) do 23420 ayfnwr1v=1,kuzxj1lo cov(ayfnwr1v,hnpt1zym(gp1jxzuh)+wg1xifdy-1) = oldmat(ayfnwr1v,1) 23420 continue 23418 continue 23416 continue 23414 continue return end subroutine x6kanjdh(he7mqnvy, xout, kuzxj1lo, wy1vqfzu) implicit logical (a-z) integer kuzxj1lo, wy1vqfzu double precision he7mqnvy(kuzxj1lo), xout(*) integer ayfnwr1v, yq6lorbx, gp1jxzuh, iptr iptr=1 do 23422 yq6lorbx=1,wy1vqfzu do 23424 ayfnwr1v=1,kuzxj1lo do 23426 gp1jxzuh=1,wy1vqfzu if(.not.(yq6lorbx .eq. gp1jxzuh))goto 23428 xout(iptr) = 1.0d0 goto 23429 23428 continue xout(iptr) = 0.0d0 23429 continue iptr=iptr+1 23426 continue 23424 continue 23422 continue do 23430 yq6lorbx=1,wy1vqfzu do 23432 ayfnwr1v=1,kuzxj1lo do 23434 gp1jxzuh=1,wy1vqfzu if(.not.(yq6lorbx .eq. gp1jxzuh))goto 23436 xout(iptr) = he7mqnvy(ayfnwr1v) goto 23437 23436 continue xout(iptr) = 0.0d0 23437 continue iptr=iptr+1 23434 continue 23432 continue 23430 continue return end double precision function rd9beyfk(kuzxj1lo, bhcji9gl, m0ibglfx, &po8rwsmy) integer kuzxj1lo double precision bhcji9gl(kuzxj1lo), m0ibglfx(kuzxj1lo), po8rwsmy( &kuzxj1lo) integer ayfnwr1v double precision lm9vcjob, rxeqjn0y, work rxeqjn0y = 0.0d0 lm9vcjob = 0.0d0 do 23438 ayfnwr1v=1,kuzxj1lo work = bhcji9gl(ayfnwr1v) - m0ibglfx(ayfnwr1v) rxeqjn0y = rxeqjn0y + po8rwsmy(ayfnwr1v)*work*work lm9vcjob = lm9vcjob + po8rwsmy(ayfnwr1v) 23438 continue if(.not.(lm9vcjob .gt. 0.0d0))goto 23440 rd9beyfk=rxeqjn0y/lm9vcjob goto 23441 23440 continue rd9beyfk=0.0d0 23441 continue return end subroutine pitmeh0q(kuzxj1lo, bhcji9gl, po8rwsmy, lfu2qhid, &lm9vcjob) implicit logical (a-z) integer kuzxj1lo double precision bhcji9gl(kuzxj1lo), po8rwsmy(kuzxj1lo), lfu2qhid, & lm9vcjob double precision rxeqjn0y integer ayfnwr1v lm9vcjob = 0.0d0 rxeqjn0y = 0.0d0 do 23442 ayfnwr1v=1,kuzxj1lo rxeqjn0y = rxeqjn0y + bhcji9gl(ayfnwr1v) * po8rwsmy(ayfnwr1v) lm9vcjob = lm9vcjob + po8rwsmy(ayfnwr1v) 23442 continue if(.not.(lm9vcjob .gt. 
0.0d0))goto 23444 lfu2qhid = rxeqjn0y / lm9vcjob goto 23445 23444 continue lfu2qhid = 0.0d0 23445 continue return end subroutine dsrt0gem(kuzxj1lo, x, w, bhcji9gl, ub4xioar, cov, &yzoe1rsp) implicit logical (a-z) integer kuzxj1lo integer yzoe1rsp double precision x(kuzxj1lo), w(kuzxj1lo), bhcji9gl(kuzxj1lo), &ub4xioar(kuzxj1lo) double precision cov(kuzxj1lo,*) integer ayfnwr1v double precision pasjmo8g, pygsw6ko, q6zdcwxk, nsum, eck8vubt, &interc, bzmd6ftv, hofjnx2e, lm9vcjob call pitmeh0q(kuzxj1lo,bhcji9gl,w,pasjmo8g, lm9vcjob) call pitmeh0q(kuzxj1lo,x,w,pygsw6ko, lm9vcjob) nsum = 0.0d0 q6zdcwxk = 0.0d0 do 23446 ayfnwr1v=1,kuzxj1lo hofjnx2e = x(ayfnwr1v)-pygsw6ko nsum = nsum + hofjnx2e * (bhcji9gl(ayfnwr1v)-pasjmo8g) * w( &ayfnwr1v) hofjnx2e = hofjnx2e * hofjnx2e q6zdcwxk = q6zdcwxk + hofjnx2e * w(ayfnwr1v) 23446 continue eck8vubt = nsum/q6zdcwxk interc = pasjmo8g - eck8vubt * pygsw6ko do 23448 ayfnwr1v=1,kuzxj1lo ub4xioar(ayfnwr1v) = interc + eck8vubt * x(ayfnwr1v) 23448 continue bzmd6ftv = interc + eck8vubt * x(1) if(.not.(yzoe1rsp .ne. 0))goto 23450 do 23452 ayfnwr1v=1,kuzxj1lo hofjnx2e = x(ayfnwr1v)-pygsw6ko if(.not.(w(ayfnwr1v) .gt. 0.0d0))goto 23454 cov(ayfnwr1v,1) = cov(ayfnwr1v,1) - 1.0d0/lm9vcjob - hofjnx2e * &hofjnx2e / q6zdcwxk goto 23455 23454 continue cov(ayfnwr1v,1) = 0.0d0 23455 continue 23452 continue 23450 continue return end subroutine shm8ynte(kuzxj1lo, p, ezlgm2up, pygsw6ko, x) implicit logical (a-z) integer kuzxj1lo, p, ezlgm2up(kuzxj1lo) double precision pygsw6ko(p), x(kuzxj1lo) integer ayfnwr1v do 23456 ayfnwr1v=1,kuzxj1lo x(ayfnwr1v) = pygsw6ko(ezlgm2up(ayfnwr1v)) 23456 continue return end subroutine vankcghz2l2(x, kuzxj1lo, ankcghz2, rvy1fpli, ukgwt7na) implicit logical (a-z) integer kuzxj1lo, rvy1fpli, ukgwt7na double precision x(kuzxj1lo), ankcghz2(kuzxj1lo) integer ndk, yq6lorbx if(.not.(ukgwt7na .eq. 0))goto 23458 if(.not.(kuzxj1lo .le. 40))goto 23460 ndk = kuzxj1lo goto 23461 23460 continue ndk = 40 + dexp(0.25d0 * dlog(kuzxj1lo-40.0d0)) 23461 continue goto 23459 23458 continue ndk = rvy1fpli - 6 23459 continue rvy1fpli = ndk + 6 do 23462 yq6lorbx = 1,3 ankcghz2(yq6lorbx) = x(1) 23462 continue do 23464 yq6lorbx = 1,ndk ankcghz2(yq6lorbx+3) = x( 1 + (yq6lorbx-1)*(kuzxj1lo-1)/(ndk-1) ) 23464 continue do 23466 yq6lorbx = 1,3 ankcghz2(ndk+3+yq6lorbx) = x(kuzxj1lo) 23466 continue return end subroutine pankcghz2l2(ankcghz2, kuzxj1lo, zo8wpibx, tol) implicit logical (a-z) integer kuzxj1lo, zo8wpibx(kuzxj1lo) double precision ankcghz2(kuzxj1lo), tol integer ayfnwr1v, cjop5bwm do 23468 ayfnwr1v=1,4 zo8wpibx(ayfnwr1v) = 1 23468 continue cjop5bwm = 4 do 23470 ayfnwr1v=5,(kuzxj1lo-4) if(.not.((ankcghz2(ayfnwr1v) - ankcghz2(cjop5bwm) .ge. tol) .and.( &ankcghz2(kuzxj1lo) - ankcghz2(ayfnwr1v) .ge. 
tol)))goto 23472 zo8wpibx(ayfnwr1v) = 1 cjop5bwm = ayfnwr1v goto 23473 23472 continue zo8wpibx(ayfnwr1v) = 0 23473 continue 23470 continue do 23474 ayfnwr1v=(kuzxj1lo-3),kuzxj1lo zo8wpibx(ayfnwr1v) = 1 23474 continue return end VGAM/src/veigen.f0000644000176000001440000005174012136651167013266 0ustar ripleyusers subroutine veigen(M, n, x, vals, ov, vec, junk1, junk2, * wk, rowi, coli, dimmv, ec) implicit logical (a-z) integer M, n, ov, ec, i, k, dimmv, MM2, * rowi(M*(M+1)/2), coli(M*(M+1)/2), full double precision x(dimmv, n), vals(M, n), vec(M,M,n), junk1(M), * junk2(M), wk(M,M) MM2 = M*(M+1)/2 if(dimmv.eq.MM2) then full = 1 else full = 0 end if do 300 i=1,n do 600 k=1,dimmv wk(rowi(k), coli(k)) = x(k,i) wk(coli(k), rowi(k)) = wk(rowi(k), coli(k)) 600 continue if(full.eq.0) then do 500 k=dimmv+1,MM2 wk(rowi(k), coli(k)) = 0.0d0 wk(coli(k), rowi(k)) = 0.0d0 500 continue end if c call vrs818(M, M, wk, vals(1,i), ov, vec(1,1,i), junk1, * junk2, ec) if(ec.ne.0) goto 200 300 continue c 200 return end SUBROUTINE VRS818(NM,N,A,W,MATZ,Z,FV1,FV2,IERR) C INTEGER N,NM,IERR,MATZ DOUBLE PRECISION A(NM,N),W(N),Z(NM,N),FV1(N),FV2(N) C C THIS SUBROUTINE CALLS THE RECOMMENDED SEQUENCE OF C SUBROUTINES FROM THE EIGENSYSTEM SUBROUTINE PACKAGE (EISPACK) C TO FIND THE EIGENVALUES AND EIGENVECTORS (IF DESIRED) C OF A REAL SYMMETRIC MATRIX. C C ON INPUT C C NM MUST BE SET TO THE ROW DIMENSION OF THE TWO-DIMENSIONAL C ARRAY PARAMETERS AS DECLARED IN THE CALLING PROGRAM C DIMENSION STATEMENT. C C N IS THE ORDER OF THE MATRIX A. C C A CONTAINS THE REAL SYMMETRIC MATRIX. C C MATZ IS AN INTEGER VARIABLE SET EQUAL TO ZERO IF C ONLY EIGENVALUES ARE DESIRED. OTHERWISE IT IS SET TO C ANY NON-ZERO INTEGER FOR BOTH EIGENVALUES AND EIGENVECTORS. C C ON OUTPUT C C W CONTAINS THE EIGENVALUES IN ASCENDING ORDER. C C Z CONTAINS THE EIGENVECTORS IF MATZ IS NOT ZERO. C C IERR IS AN INTEGER OUTPUT VARIABLE SET EQUAL TO AN ERROR C COMPLETION CODE DESCRIBED IN THE DOCUMENTATION FOR TQLRAT C AND TQL2. THE NORMAL COMPLETION CODE IS ZERO. C C FV1 AND FV2 ARE TEMPORARY STORAGE ARRAYS. C C QUESTIONS AND COMMENTS SHOULD BE DIRECTED TO BURTON S. GARBOW, C MATHEMATICS AND COMPUTER SCIENCE DIV, ARGONNE NATIONAL LABORATORY C C THIS VERSION DATED AUGUST 1983. C C ------------------------------------------------------------------ C IF (N .LE. NM) GO TO 10 IERR = 10 * N GO TO 50 C 10 IF (MATZ .NE. 0) GO TO 20 C .......... FIND EIGENVALUES ONLY .......... CALL VTRED1(NM,N,A,W,FV1,FV2) CALL TQLRA9(N,W,FV2,IERR) GO TO 50 C .......... FIND BOTH EIGENVALUES AND EIGENVECTORS .......... 20 CALL VTRED2(NM,N,A,W,FV1,Z) CALL VTQL21(NM,N,W,FV1,Z,IERR) 50 RETURN END SUBROUTINE VTQL21(NM,N,D,E,Z,IERR) C INTEGER I,J,K,L,M,N,II,L1,L2,NM,MML,IERR DOUBLE PRECISION D(N),E(N),Z(NM,N) DOUBLE PRECISION C,C2,C3,DL1,EL1,F,G,H,P,R,S,S2,TST1,TST2,PYTHA9 C C THIS SUBROUTINE IS A TRANSLATION OF THE ALGOL PROCEDURE TQL2, C NUM. MATH. 11, 293-306(1968) BY BOWDLER, MARTIN, REINSCH, AND C WILKINSON. C HANDBOOK FOR AUTO. COMP., VOL.II-LINEAR ALGEBRA, 227-240(1971). C C THIS SUBROUTINE FINDS THE EIGENVALUES AND EIGENVECTORS C OF A SYMMETRIC TRIDIAGONAL MATRIX BY THE QL METHOD. C THE EIGENVECTORS OF A FULL SYMMETRIC MATRIX CAN ALSO C BE FOUND IF TRED2 HAS BEEN USED TO REDUCE THIS C FULL MATRIX TO TRIDIAGONAL FORM. C C ON INPUT C C NM MUST BE SET TO THE ROW DIMENSION OF TWO-DIMENSIONAL C ARRAY PARAMETERS AS DECLARED IN THE CALLING PROGRAM C DIMENSION STATEMENT. C C N IS THE ORDER OF THE MATRIX. C C D CONTAINS THE DIAGONAL ELEMENTS OF THE INPUT MATRIX. 
C C E CONTAINS THE SUBDIAGONAL ELEMENTS OF THE INPUT MATRIX C IN ITS LAST N-1 POSITIONS. E(1) IS ARBITRARY. C C Z CONTAINS THE TRANSFORMATION MATRIX PRODUCED IN THE C REDUCTION BY TRED2, IF PERFORMED. IF THE EIGENVECTORS C OF THE TRIDIAGONAL MATRIX ARE DESIRED, Z MUST CONTAIN C THE IDENTITY MATRIX. C C ON OUTPUT C C D CONTAINS THE EIGENVALUES IN ASCENDING ORDER. IF AN C ERROR EXIT IS MADE, THE EIGENVALUES ARE CORRECT BUT C UNORDERED FOR INDICES 1,2,...,IERR-1. C C E HAS BEEN DESTROYED. C C Z CONTAINS ORTHONORMAL EIGENVECTORS OF THE SYMMETRIC C TRIDIAGONAL (OR FULL) MATRIX. IF AN ERROR EXIT IS MADE, C Z CONTAINS THE EIGENVECTORS ASSOCIATED WITH THE STORED C EIGENVALUES. C C IERR IS SET TO C ZERO FOR NORMAL RETURN, C J IF THE J-TH EIGENVALUE HAS NOT BEEN C DETERMINED AFTER 30 ITERATIONS. C C CALLS PYTHA9 FOR DSQRT(A*A + B*B) . C C QUESTIONS AND COMMENTS SHOULD BE DIRECTED TO BURTON S. GARBOW, C MATHEMATICS AND COMPUTER SCIENCE DIV, ARGONNE NATIONAL LABORATORY C C THIS VERSION DATED AUGUST 1983. C C ------------------------------------------------------------------ c c unnecessary initialization of C3 and S2 to keep g77 -Wall happy c C3 = 0.0D0 S2 = 0.0D0 C IERR = 0 IF (N .EQ. 1) GO TO 1001 C DO 100 I = 2, N 100 E(I-1) = E(I) C F = 0.0D0 TST1 = 0.0D0 E(N) = 0.0D0 C DO 240 L = 1, N J = 0 H = DABS(D(L)) + DABS(E(L)) IF (TST1 .LT. H) TST1 = H C .......... LOOK FOR SMALL SUB-DIAGONAL ELEMENT .......... DO 110 M = L, N TST2 = TST1 + DABS(E(M)) IF (TST2 .EQ. TST1) GO TO 120 C .......... E(N) IS ALWAYS ZERO, SO THERE IS NO EXIT C THROUGH THE BOTTOM OF THE LOOP .......... 110 CONTINUE C 120 IF (M .EQ. L) GO TO 220 130 IF (J .EQ. 30) GO TO 1000 J = J + 1 C .......... FORM SHIFT .......... L1 = L + 1 L2 = L1 + 1 G = D(L) P = (D(L1) - G) / (2.0D0 * E(L)) R = PYTHA9(P,1.0D0) D(L) = E(L) / (P + DSIGN(R,P)) D(L1) = E(L) * (P + DSIGN(R,P)) DL1 = D(L1) H = G - D(L) IF (L2 .GT. N) GO TO 145 C DO 140 I = L2, N 140 D(I) = D(I) - H C 145 F = F + H C .......... QL TRANSFORMATION .......... P = D(M) C = 1.0D0 C2 = C EL1 = E(L1) S = 0.0D0 MML = M - L C .......... FOR I=M-1 STEP -1 UNTIL L DO -- .......... DO 200 II = 1, MML C3 = C2 C2 = C S2 = S I = M - II G = C * E(I) H = C * P R = PYTHA9(P,E(I)) E(I+1) = S * R S = E(I) / R C = P / R P = C * D(I) - S * G D(I+1) = H + S * (C * G + S * D(I)) C .......... FORM VECTOR .......... DO 180 K = 1, N H = Z(K,I+1) Z(K,I+1) = S * Z(K,I) + C * H Z(K,I) = C * Z(K,I) - S * H 180 CONTINUE C 200 CONTINUE C P = -S * S2 * C3 * EL1 * E(L) / DL1 E(L) = S * P D(L) = C * P TST2 = TST1 + DABS(E(L)) IF (TST2 .GT. TST1) GO TO 130 220 D(L) = D(L) + F 240 CONTINUE C .......... ORDER EIGENVALUES AND EIGENVECTORS .......... DO 300 II = 2, N I = II - 1 K = I P = D(I) C DO 260 J = II, N IF (D(J) .GE. P) GO TO 260 K = J P = D(J) 260 CONTINUE C IF (K .EQ. I) GO TO 300 D(K) = D(I) D(I) = P C DO 280 J = 1, N P = Z(J,I) Z(J,I) = Z(J,K) Z(J,K) = P 280 CONTINUE C 300 CONTINUE C GO TO 1001 C .......... SET ERROR -- NO CONVERGENCE TO AN C EIGENVALUE AFTER 30 ITERATIONS .......... 1000 IERR = L 1001 RETURN END SUBROUTINE TQLRA9(N,D,E2,IERR) C INTEGER I,J,L,M,N,II,L1,MML,IERR DOUBLE PRECISION D(N),E2(N) DOUBLE PRECISION B,C,F,G,H,P,R,S,T,EPSLO9,PYTHA9 C C THIS SUBROUTINE IS A TRANSLATION OF THE ALGOL PROCEDURE TQLRAT, C ALGORITHM 464, COMM. ACM 16, 689(1973) BY REINSCH. C C THIS SUBROUTINE FINDS THE EIGENVALUES OF A SYMMETRIC C TRIDIAGONAL MATRIX BY THE RATIONAL QL METHOD. C C ON INPUT C C N IS THE ORDER OF THE MATRIX. C C D CONTAINS THE DIAGONAL ELEMENTS OF THE INPUT MATRIX. 
C C E2 CONTAINS THE SQUARES OF THE SUBDIAGONAL ELEMENTS OF THE C INPUT MATRIX IN ITS LAST N-1 POSITIONS. E2(1) IS ARBITRARY. C C ON OUTPUT C C D CONTAINS THE EIGENVALUES IN ASCENDING ORDER. IF AN C ERROR EXIT IS MADE, THE EIGENVALUES ARE CORRECT AND C ORDERED FOR INDICES 1,2,...IERR-1, BUT MAY NOT BE C THE SMALLEST EIGENVALUES. C C E2 HAS BEEN DESTROYED. C C IERR IS SET TO C ZERO FOR NORMAL RETURN, C J IF THE J-TH EIGENVALUE HAS NOT BEEN C DETERMINED AFTER 30 ITERATIONS. C C CALLS PYTHA9 FOR DSQRT(A*A + B*B) . C C QUESTIONS AND COMMENTS SHOULD BE DIRECTED TO BURTON S. GARBOW, C MATHEMATICS AND COMPUTER SCIENCE DIV, ARGONNE NATIONAL LABORATORY C C THIS VERSION DATED AUGUST 1983. C C ------------------------------------------------------------------ c c unnecessary initialization of B and C to keep g77 -Wall happy c B = 0.0D0 C = 0.0D0 C IERR = 0 IF (N .EQ. 1) GO TO 1001 C DO 100 I = 2, N 100 E2(I-1) = E2(I) C F = 0.0D0 T = 0.0D0 E2(N) = 0.0D0 C DO 290 L = 1, N J = 0 H = DABS(D(L)) + DSQRT(E2(L)) IF (T .GT. H) GO TO 105 T = H B = EPSLO9(T) C = B * B C .......... LOOK FOR SMALL SQUARED SUB-DIAGONAL ELEMENT .......... 105 DO 110 M = L, N IF (E2(M) .LE. C) GO TO 120 C .......... E2(N) IS ALWAYS ZERO, SO THERE IS NO EXIT C THROUGH THE BOTTOM OF THE LOOP .......... 110 CONTINUE C 120 IF (M .EQ. L) GO TO 210 130 IF (J .EQ. 30) GO TO 1000 J = J + 1 C .......... FORM SHIFT .......... L1 = L + 1 S = DSQRT(E2(L)) G = D(L) P = (D(L1) - G) / (2.0D0 * S) R = PYTHA9(P,1.0D0) D(L) = S / (P + DSIGN(R,P)) H = G - D(L) C DO 140 I = L1, N 140 D(I) = D(I) - H C F = F + H C .......... RATIONAL QL TRANSFORMATION .......... G = D(M) IF (G .EQ. 0.0D0) G = B H = G S = 0.0D0 MML = M - L C .......... FOR I=M-1 STEP -1 UNTIL L DO -- .......... DO 200 II = 1, MML I = M - II P = G * H R = P + E2(I) E2(I+1) = S * R S = E2(I) / R D(I+1) = H + S * (H + D(I)) G = D(I) - E2(I) / G IF (G .EQ. 0.0D0) G = B H = G * P / R 200 CONTINUE C E2(L) = S * G D(L) = H C .......... GUARD AGAINST UNDERFLOW IN CONVERGENCE TEST .......... IF (H .EQ. 0.0D0) GO TO 210 IF (DABS(E2(L)) .LE. DABS(C/H)) GO TO 210 E2(L) = H * E2(L) IF (E2(L) .NE. 0.0D0) GO TO 130 210 P = D(L) + F C .......... ORDER EIGENVALUES .......... IF (L .EQ. 1) GO TO 250 C .......... FOR I=L STEP -1 UNTIL 2 DO -- .......... DO 230 II = 2, L I = L + 2 - II IF (P .GE. D(I-1)) GO TO 270 D(I) = D(I-1) 230 CONTINUE C 250 I = 1 270 D(I) = P 290 CONTINUE C GO TO 1001 C .......... SET ERROR -- NO CONVERGENCE TO AN C EIGENVALUE AFTER 30 ITERATIONS .......... 1000 IERR = L 1001 RETURN END SUBROUTINE VTRED1(NM,N,A,D,E,E2) C INTEGER I,J,K,L,N,II,NM,JP1 DOUBLE PRECISION A(NM,N),D(N),E(N),E2(N) DOUBLE PRECISION F,G,H,SCALE C C THIS SUBROUTINE IS A TRANSLATION OF THE ALGOL PROCEDURE TRED1, C NUM. MATH. 11, 181-195(1968) BY MARTIN, REINSCH, AND WILKINSON. C HANDBOOK FOR AUTO. COMP., VOL.II-LINEAR ALGEBRA, 212-226(1971). C C THIS SUBROUTINE REDUCES A REAL SYMMETRIC MATRIX C TO A SYMMETRIC TRIDIAGONAL MATRIX USING C ORTHOGONAL SIMILARITY TRANSFORMATIONS. C C ON INPUT C C NM MUST BE SET TO THE ROW DIMENSION OF TWO-DIMENSIONAL C ARRAY PARAMETERS AS DECLARED IN THE CALLING PROGRAM C DIMENSION STATEMENT. C C N IS THE ORDER OF THE MATRIX. C C A CONTAINS THE REAL SYMMETRIC INPUT MATRIX. ONLY THE C LOWER TRIANGLE OF THE MATRIX NEED BE SUPPLIED. C C ON OUTPUT C C A CONTAINS INFORMATION ABOUT THE ORTHOGONAL TRANS- C FORMATIONS USED IN THE REDUCTION IN ITS STRICT LOWER C TRIANGLE. THE FULL UPPER TRIANGLE OF A IS UNALTERED. C C D CONTAINS THE DIAGONAL ELEMENTS OF THE TRIDIAGONAL MATRIX. 
C C E CONTAINS THE SUBDIAGONAL ELEMENTS OF THE TRIDIAGONAL C MATRIX IN ITS LAST N-1 POSITIONS. E(1) IS SET TO ZERO. C C E2 CONTAINS THE SQUARES OF THE CORRESPONDING ELEMENTS OF E. C E2 MAY COINCIDE WITH E IF THE SQUARES ARE NOT NEEDED. C C QUESTIONS AND COMMENTS SHOULD BE DIRECTED TO BURTON S. GARBOW, C MATHEMATICS AND COMPUTER SCIENCE DIV, ARGONNE NATIONAL LABORATORY C C THIS VERSION DATED AUGUST 1983. C C ------------------------------------------------------------------ C DO 100 I = 1, N D(I) = A(N,I) A(N,I) = A(I,I) 100 CONTINUE C .......... FOR I=N STEP -1 UNTIL 1 DO -- .......... DO 300 II = 1, N I = N + 1 - II L = I - 1 H = 0.0D0 SCALE = 0.0D0 IF (L .LT. 1) GO TO 130 C .......... SCALE ROW (ALGOL TOL THEN NOT NEEDED) .......... DO 120 K = 1, L 120 SCALE = SCALE + DABS(D(K)) C IF (SCALE .NE. 0.0D0) GO TO 140 C DO 125 J = 1, L D(J) = A(L,J) A(L,J) = A(I,J) A(I,J) = 0.0D0 125 CONTINUE C 130 E(I) = 0.0D0 E2(I) = 0.0D0 GO TO 300 C 140 DO 150 K = 1, L D(K) = D(K) / SCALE H = H + D(K) * D(K) 150 CONTINUE C E2(I) = SCALE * SCALE * H F = D(L) G = -DSIGN(DSQRT(H),F) E(I) = SCALE * G H = H - F * G D(L) = F - G IF (L .EQ. 1) GO TO 285 C .......... FORM A*U .......... DO 170 J = 1, L 170 E(J) = 0.0D0 C DO 240 J = 1, L F = D(J) G = E(J) + A(J,J) * F JP1 = J + 1 IF (L .LT. JP1) GO TO 220 C DO 200 K = JP1, L G = G + A(K,J) * D(K) E(K) = E(K) + A(K,J) * F 200 CONTINUE C 220 E(J) = G 240 CONTINUE C .......... FORM P .......... F = 0.0D0 C DO 245 J = 1, L E(J) = E(J) / H F = F + E(J) * D(J) 245 CONTINUE C H = F / (H + H) C .......... FORM Q .......... DO 250 J = 1, L 250 E(J) = E(J) - H * D(J) C .......... FORM REDUCED A .......... DO 280 J = 1, L F = D(J) G = E(J) C DO 260 K = J, L 260 A(K,J) = A(K,J) - F * E(K) - G * D(K) C 280 CONTINUE C 285 DO 290 J = 1, L F = D(J) D(J) = A(L,J) A(L,J) = A(I,J) A(I,J) = F * SCALE 290 CONTINUE C 300 CONTINUE C RETURN END SUBROUTINE VTRED2(NM,N,A,D,E,Z) C INTEGER I,J,K,L,N,II,NM,JP1 DOUBLE PRECISION A(NM,N),D(N),E(N),Z(NM,N) DOUBLE PRECISION F,G,H,HH,SCALE C C THIS SUBROUTINE IS A TRANSLATION OF THE ALGOL PROCEDURE TRED2, C NUM. MATH. 11, 181-195(1968) BY MARTIN, REINSCH, AND WILKINSON. C HANDBOOK FOR AUTO. COMP., VOL.II-LINEAR ALGEBRA, 212-226(1971). C C THIS SUBROUTINE REDUCES A REAL SYMMETRIC MATRIX TO A C SYMMETRIC TRIDIAGONAL MATRIX USING AND ACCUMULATING C ORTHOGONAL SIMILARITY TRANSFORMATIONS. C C ON INPUT C C NM MUST BE SET TO THE ROW DIMENSION OF TWO-DIMENSIONAL C ARRAY PARAMETERS AS DECLARED IN THE CALLING PROGRAM C DIMENSION STATEMENT. C C N IS THE ORDER OF THE MATRIX. C C A CONTAINS THE REAL SYMMETRIC INPUT MATRIX. ONLY THE C LOWER TRIANGLE OF THE MATRIX NEED BE SUPPLIED. C C ON OUTPUT C C D CONTAINS THE DIAGONAL ELEMENTS OF THE TRIDIAGONAL MATRIX. C C E CONTAINS THE SUBDIAGONAL ELEMENTS OF THE TRIDIAGONAL C MATRIX IN ITS LAST N-1 POSITIONS. E(1) IS SET TO ZERO. C C Z CONTAINS THE ORTHOGONAL TRANSFORMATION MATRIX C PRODUCED IN THE REDUCTION. C C A AND Z MAY COINCIDE. IF DISTINCT, A IS UNALTERED. C C QUESTIONS AND COMMENTS SHOULD BE DIRECTED TO BURTON S. GARBOW, C MATHEMATICS AND COMPUTER SCIENCE DIV, ARGONNE NATIONAL LABORATORY C C THIS VERSION DATED AUGUST 1983. C C ------------------------------------------------------------------ C DO 100 I = 1, N C DO 80 J = I, N 80 Z(J,I) = A(J,I) C D(I) = A(N,I) 100 CONTINUE C IF (N .EQ. 1) GO TO 510 C .......... FOR I=N STEP -1 UNTIL 2 DO -- .......... DO 300 II = 2, N I = N + 2 - II L = I - 1 H = 0.0D0 SCALE = 0.0D0 IF (L .LT. 2) GO TO 130 C .......... SCALE ROW (ALGOL TOL THEN NOT NEEDED) .......... 
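C     THE ROW OF THE LOWER TRIANGLE (HELD IN D) IS SCALED BY THE
C     SUM OF ITS ABSOLUTE VALUES BEFORE THE HOUSEHOLDER VECTOR IS
C     FORMED, SO THAT H = SUM OF D(K)**2 IS COMPUTED WITHOUT
C     OVERFLOW OR DESTRUCTIVE UNDERFLOW.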
DO 120 K = 1, L 120 SCALE = SCALE + DABS(D(K)) C IF (SCALE .NE. 0.0D0) GO TO 140 130 E(I) = D(L) C DO 135 J = 1, L D(J) = Z(L,J) Z(I,J) = 0.0D0 Z(J,I) = 0.0D0 135 CONTINUE C GO TO 290 C 140 DO 150 K = 1, L D(K) = D(K) / SCALE H = H + D(K) * D(K) 150 CONTINUE C F = D(L) G = -DSIGN(DSQRT(H),F) E(I) = SCALE * G H = H - F * G D(L) = F - G C .......... FORM A*U .......... DO 170 J = 1, L 170 E(J) = 0.0D0 C DO 240 J = 1, L F = D(J) Z(J,I) = F G = E(J) + Z(J,J) * F JP1 = J + 1 IF (L .LT. JP1) GO TO 220 C DO 200 K = JP1, L G = G + Z(K,J) * D(K) E(K) = E(K) + Z(K,J) * F 200 CONTINUE C 220 E(J) = G 240 CONTINUE C .......... FORM P .......... F = 0.0D0 C DO 245 J = 1, L E(J) = E(J) / H F = F + E(J) * D(J) 245 CONTINUE C HH = F / (H + H) C .......... FORM Q .......... DO 250 J = 1, L 250 E(J) = E(J) - HH * D(J) C .......... FORM REDUCED A .......... DO 280 J = 1, L F = D(J) G = E(J) C DO 260 K = J, L 260 Z(K,J) = Z(K,J) - F * E(K) - G * D(K) C D(J) = Z(L,J) Z(I,J) = 0.0D0 280 CONTINUE C 290 D(I) = H 300 CONTINUE C .......... ACCUMULATION OF TRANSFORMATION MATRICES .......... DO 500 I = 2, N L = I - 1 Z(N,L) = Z(L,L) Z(L,L) = 1.0D0 H = D(I) IF (H .EQ. 0.0D0) GO TO 380 C DO 330 K = 1, L 330 D(K) = Z(K,I) / H C DO 360 J = 1, L G = 0.0D0 C DO 340 K = 1, L 340 G = G + Z(K,I) * Z(K,J) C DO 360 K = 1, L Z(K,J) = Z(K,J) - G * D(K) 360 CONTINUE C 380 DO 400 K = 1, L 400 Z(K,I) = 0.0D0 C 500 CONTINUE C 510 DO 520 I = 1, N D(I) = Z(N,I) Z(N,I) = 0.0D0 520 CONTINUE C Z(N,N) = 1.0D0 E(1) = 0.0D0 RETURN END DOUBLE PRECISION FUNCTION EPSLO9(X) DOUBLE PRECISION X C C ESTIMATE UNIT ROUNDOFF IN QUANTITIES OF SIZE X. C DOUBLE PRECISION A,B,C,EPS C C THIS PROGRAM SHOULD FUNCTION PROPERLY ON ALL SYSTEMS C SATISFYING THE FOLLOWING TWO ASSUMPTIONS, C 1. THE BASE USED IN REPRESENTING FLOATING POINT C NUMBERS IS NOT A POWER OF THREE. C 2. THE QUANTITY A IN STATEMENT 10 IS REPRESENTED TO C THE ACCURACY USED IN FLOATING POINT VARIABLES C THAT ARE STORED IN MEMORY. C THE STATEMENT NUMBER 10 AND THE GO TO 10 ARE INTENDED TO C FORCE OPTIMIZING COMPILERS TO GENERATE CODE SATISFYING C ASSUMPTION 2. C UNDER THESE ASSUMPTIONS, IT SHOULD BE TRUE THAT, C A IS NOT EXACTLY EQUAL TO FOUR-THIRDS, C B HAS A ZERO FOR ITS LAST BIT OR DIGIT, C C IS NOT EXACTLY EQUAL TO ONE, C EPS MEASURES THE SEPARATION OF 1.0 FROM C THE NEXT LARGER FLOATING POINT NUMBER. C THE DEVELOPERS OF EISPACK WOULD APPRECIATE BEING INFORMED C ABOUT ANY SYSTEMS WHERE THESE ASSUMPTIONS DO NOT HOLD. C C THIS VERSION DATED 4/6/83. C A = 4.0D0/3.0D0 10 B = A - 1.0D0 C = B + B + B EPS = DABS(C-1.0D0) IF (EPS .EQ. 0.0D0) GO TO 10 EPSLO9 = EPS*DABS(X) RETURN END DOUBLE PRECISION FUNCTION PYTHA9(A,B) DOUBLE PRECISION A,B C C FINDS DSQRT(A**2+B**2) WITHOUT OVERFLOW OR DESTRUCTIVE UNDERFLOW C DOUBLE PRECISION P,R,S,T,U P = DMAX1(DABS(A),DABS(B)) IF (P .EQ. 0.0D0) GO TO 20 R = (DMIN1(DABS(A),DABS(B))/P)**2 10 CONTINUE T = 4.0D0 + R IF (T .EQ. 4.0D0) GO TO 20 S = R/T U = 1.0D0 + 2.0D0*S P = U*P R = (S/U)**2 * R GO TO 10 20 PYTHA9 = P RETURN END VGAM/src/vdigami.f0000644000176000001440000000711512136651167013426 0ustar ripleyusers SUBROUTINE vdigami(D, X, P, GPLOG, GP1LOG, PSIP, PSIP1, PSIDP, * PSIDP1, IFAULT, TMAX) C C ALGORITHM AS 187 APPL. STATIST. (1982) VOL.31, NO.3 C C Computes derivatives of the incomplete gamma integral for positive C parameters, X, P, using a series expansion if P > X or X <= 1, and C a continued fraction expansion otherwise. C C Calculation of D(4) in line 60 corrected 5 October 1993. C C N.B. 
The user must input values of the incomplete gamma, digamma C and trigamma functions. These can be obtained using AS 239 C (or 32), AS 103 and AS 121 respectively. C C C C C 20130214; adapted by T. W. Yee to handle DOUBLE PRECISION arguments. C And declarations of *all* variables. C And a wrapper function written to call this subroutine. C TMAX is now input. C Seems to work but more testing is required. C DOUBLE PRECISION X, P, GPLOG, GP1LOG, PSIP, PSIP1, PSIDP, PSIDP1 DOUBLE PRECISION TMAX INTEGER IFAULT C C C C C DOUBLE PRECISION PN(6), D(6), DP(6), DPP(6), ZERO, ONE, TWO C DATA TMAX/100.0/ DATA E, OFLO, VSMALL/1.D-6, 1.D30, 1.D-30/ DATA ZERO/0.0/, ONE/1.0/, TWO/2.0/ C IFAULT = 0 C C Derivatives with respect to X C PM1 = P - ONE XLOG = DLOG(X) D(1) = DEXP(-GPLOG + PM1*XLOG - X) D(2) = D(1) * (PM1/X - ONE) D(5) = D(1) * (XLOG - PSIP) C C Derivatives with respect to P C IF (X .GT. ONE .AND. X .GE. P) GO TO 30 C C Series expansion C F = DEXP(P*XLOG - GP1LOG - X) DFP = F * (XLOG - PSIP1) DFPP = DFP*DFP/F - F*PSIDP1 C TMAXP = TMAX + P C = ONE S = ONE CP = ZERO CPP = ZERO DSP = ZERO DSPP = ZERO A = P 1 A = A + ONE CPC = CP / C CP = CPC - ONE/A CPP = CPP/C - CPC*CPC + ONE/A**2 C = C*X/A CP = CP*C CPP = CPP*C + CP*CP/C S = S + C DSP = DSP + CP DSPP = DSPP + CPP IF (A .GT. TMAXP) GO TO 1001 IF (C .GT. E*S) GO TO 1 D(6) = S*F D(3) = S*DFP + F*DSP D(4) = S*DFPP + TWO*DFP*DSP + F*DSPP RETURN C C Continued fraction expansion C 30 F = DEXP(P*XLOG - GPLOG - X) DFP = F * (XLOG - PSIP) DFPP = DFP*DFP/F - F*PSIDP C A = PM1 B = X + ONE - A TERM = ZERO PN(1) = ONE PN(2) = X PN(3) = X + ONE PN(4) = X * B S0 = PN(3) / PN(4) DO 31 I = 1, 4 DP(I) = ZERO DPP(I) = ZERO 31 CONTINUE DP(4) = -X C 32 A = A - ONE B = B + TWO TERM = TERM + ONE AN = A*TERM PN(5) = B*PN(3) + AN*PN(1) PN(6) = B*PN(4) + AN*PN(2) DP(5) = B*DP(3) - PN(3) + AN*DP(1) + PN(1)*TERM DP(6) = B*DP(4) - PN(4) + AN*DP(2) + PN(2)*TERM DPP(5) = B*DPP(3) + AN*DPP(1) + TWO*(TERM*DP(1) - DP(3)) DPP(6) = B*DPP(4) + AN*DPP(2) + TWO*(TERM*DP(2) - DP(4)) C IF (ABS(PN(6)) .LT. VSMALL) GO TO 35 S = PN(5) / PN(6) C = ABS(S - S0) IF (C*P .GT. E) GO TO 34 IF (C .LE. E*S) GO TO 42 C 34 S0 = S 35 DO 36 I = 1, 4 I2 = I + 2 DP(I) = DP(I2) DPP(I) = DPP(I2) PN(I) = PN(I2) 36 CONTINUE C IF (TERM .GT. TMAX) GO TO 1001 IF (ABS(PN(5)) .LT. OFLO) GO TO 32 DO 41 I = 1, 4 DP(I) = DP(I) / OFLO DPP(I) = DPP(I) / OFLO PN(I) = PN(I) / OFLO 41 CONTINUE GO TO 32 C 42 D(6) = ONE - F*S DSP = (DP(5) - S*DP(6)) / PN(6) DSPP = (DPP(5) - S*DPP(6) - TWO*DSP*DP(6)) / PN(6) D(3) = -F*DSP - S*DFP D(4) = -F*DSPP - TWO*DSP*DFP - S*DFPP RETURN C C Set fault indicator C 1001 IFAULT = 1 RETURN END VGAM/src/vcall2.f0000644000176000001440000000117312136651167013167 0ustar ripleyusersC Output from Public domain Ratfor, version 1.01 subroutine vcall2(onemor,w,y,eta,beta,u) logical onemor double precision w(*), y(*), eta(*), beta(*), u(*) onemor = .true. w(1) = 1.0d0 y(1) = 1.0d0 eta(1) = 1.0d0 beta(1) = 1.0d0 u(1) = 1.0d0 return end subroutine vcall1(onemor,y,eta,beta,u,xbig,cpxbig) logical onemor, cpxbig double precision y(*), eta(*), beta(*), u(*), xbig(*) onemor = .true. y(1) = 1.0d0 eta(1) = 1.0d0 beta(1) = 1.0d0 u(1) = 1.0d0 xbig(1) = 1.0d0 cpxbig = .true. 
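c     note: vcall1, like vcall2 above, appears to be a placeholder
c     stub; it merely writes 1.0d0 into the first element of each
c     array argument and sets its logical flags to .true.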
return end VGAM/src/tyeepolygamma3.c0000644000176000001440000000771012136651167014744 0ustar ripleyusers #include #include #include #include #include void tyee_C_vdgam1(double *xval, double *lfu2qhid, int *dvhw1ulq); void tyee_C_vtgam1(double *xval, double *lfu2qhid, int *dvhw1ulq); void tyee_C_dgam1w(double sjwyig9t[], double lfu2qhid[], int *f8yswcat, int *dvhw1ulq); void tyee_C_tgam1w(double sjwyig9t[], double lfu2qhid[], int *f8yswcat, int *dvhw1ulq); void tyee_C_cum8sum(double ci1oyxas[], double lfu2qhid[], int *nlfu2qhid, double valong[], int *ntot, int *notdvhw1ulq); void tyee_C_vdgam1(double *xval, double *lfu2qhid, int *dvhw1ulq) { double wval, series, obr6tcex, tmp1; *dvhw1ulq = 1; if (*xval <= 0.0e0) { *dvhw1ulq = 0; return; } if (*xval < 6.0e0) { tmp1 = *xval + 6.0e0; tyee_C_vdgam1(&tmp1, &obr6tcex, dvhw1ulq); *lfu2qhid = obr6tcex - 1.0e0 / *xval - 1.0e0 / (*xval + 1.0e0) - 1.0e0 / (*xval + 2.0e0) - 1.0e0 / (*xval + 3.0e0) - 1.0e0 / (*xval + 4.0e0) - 1.0e0 / (*xval + 5.0e0); return; } wval = 1.0e0 / (*xval * *xval); series = ((wval * ( -1.0e0 / 12.0e0 + ((wval * ( 1.0e0 / 120.0e0 + ((wval * ( -1.0e0 / 252.0e0 + ((wval * ( 1.0e0 / 240.0e0 + ((wval * ( -1.0e0 / 132.0e0 + ((wval * (691.0e0 /32760.0e0 + ((wval * ( -1.0e0 / 12.0e0 + (wval * 3617.0e0)/ 8160.0e0))))))))))))))))))))); *lfu2qhid = log(*xval) - 0.5e0 / *xval + series; } void tyee_C_vtgam1(double *xval, double *lfu2qhid, int *dvhw1ulq) { double wval, series, obr6tcex, tmp1; *dvhw1ulq = 1; if (*xval <= 0.0e0) { *dvhw1ulq = 0; return; } if (*xval < 6.0e0) { tmp1 = *xval + 6.0e0; tyee_C_vtgam1(&tmp1, &obr6tcex, dvhw1ulq); *lfu2qhid = obr6tcex + 1.0e0 / pow( (double) *xval, (double) 2.0) + 1.0e0 / pow( (double) (*xval + 1.0e0), (double) 2.0) + 1.0e0 / pow( (double) (*xval + 2.0e0), (double) 2.0) + 1.0e0 / pow( (double) (*xval + 3.0e0), (double) 2.0) + 1.0e0 / pow( (double) (*xval + 4.0e0), (double) 2.0) + 1.0e0 / pow( (double) (*xval + 5.0e0), (double) 2.0); return; } wval = 1.0e0 / (*xval * *xval); series = 1.0e0 + (wval * ( 1.0e0 / 6.0e0 + (wval * ( -1.0e0 / 30.0e0 + (wval * ( 1.0e0 / 42.0e0 + (wval * ( -1.0e0 / 30.0e0 + (wval * ( 5.0e0 / 66.0e0 + (wval * (-691.0e0 /2370.0e0 + (wval * ( 7.0e0 / 6.0e0 - (wval * 3617.0e0)/ 510.0e0)))))))))))))); *lfu2qhid = 0.5e0 * wval + series / *xval; } void tyee_C_dgam1w(double sjwyig9t[], double lfu2qhid[], int *f8yswcat, int *dvhw1ulq) { int ayfnwr1v, okobr6tcex; double *qnwamo0e1, *qnwamo0e2; *dvhw1ulq = 1; qnwamo0e1 = sjwyig9t; qnwamo0e2 = lfu2qhid; for (ayfnwr1v = 1; ayfnwr1v <= *f8yswcat; ayfnwr1v++) { tyee_C_vdgam1(qnwamo0e1++, qnwamo0e2++, &okobr6tcex); if (okobr6tcex != 1) *dvhw1ulq = okobr6tcex; } } void tyee_C_tgam1w(double sjwyig9t[], double lfu2qhid[], int *f8yswcat, int *dvhw1ulq) { int ayfnwr1v, okobr6tcex; double *qnwamo0e1, *qnwamo0e2; *dvhw1ulq = 1; qnwamo0e1 = sjwyig9t; qnwamo0e2 = lfu2qhid; for (ayfnwr1v = 1; ayfnwr1v <= *f8yswcat; ayfnwr1v++) { tyee_C_vtgam1(qnwamo0e1++, qnwamo0e2++, &okobr6tcex); if (okobr6tcex != 1) *dvhw1ulq = okobr6tcex; } } void tyee_C_cum8sum(double ci1oyxas[], double lfu2qhid[], int *nlfu2qhid, double valong[], int *ntot, int *notdvhw1ulq) { int ayfnwr1v, iii = 1; lfu2qhid[iii-1] = ci1oyxas[iii-1]; for (ayfnwr1v = 2; ayfnwr1v <= *ntot; ayfnwr1v++) { if (valong[ayfnwr1v-1] > valong[ayfnwr1v-2]) { lfu2qhid[iii-1] += ci1oyxas[ayfnwr1v-1]; } else { iii++; lfu2qhid[iii-1] = ci1oyxas[ayfnwr1v-1]; } } *notdvhw1ulq = (iii == *nlfu2qhid) ? 
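/* 0 if the number of runs accumulated in iii matches the
       expected output length *nlfu2qhid, 1 otherwise */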
0 : 1; } VGAM/src/rgam3.c0000644000176000001440000006766012136651167013027 0ustar ripleyusers #include #include #include #include #include void n5aioudkdnaoqj0l(double *qgnl3toc, double sjwyig9t[], double bhcji9gl[], double po8rwsmy[], int *kuzxj1lo, int *acpios9q, double gkdx5jal[], double rpyis2kc[], double imdvf4hx[], double ifys6woa[], double *wbkq9zyi, double jstx4uwe[4], double xecbg0pf[], double z4grbpiq[], double d7glzhbj[], double v2eydbxs[], double *tt2, int *cvnjhg2u, int l3zpbstu[3], int *xtov9rbf, int *wep0oibc, int *fbd5yktj); void n5aioudkhbzuprs6(double *qgnl3toc, double sjwyig9t[], double bhcji9gl[], double po8rwsmy[], int *kuzxj1lo, int *acpios9q, double gkdx5jal[], double *rpyis2kc, double *imdvf4hx, double *ifys6woa, double *i9mwnvqt, int *pn9eowxc, int *ic5aesxku, double *mynl7uaq, double *zustx4fw, double *nbe4gvpq, double *qaltf0nz, int *cvnjhg2u, double xwy[], double zvau2lct[], double f6lsuzax[], double fvh2rwtc[], double dcfir2no[], double xecbg0pf[], double z4grbpiq[], double d7glzhbj[], double v2eydbxs[], double *tt2, double buhyalv4[], double fulcp8wa[], double plj0trqx[], int *xtov9rbf, int *wep0oibc, int *fbd5yktj); void n5aioudkzosq7hub(double xecbg0pf[], double z4grbpiq[], double d7glzhbj[], double v2eydbxs[], double gkdx5jal[], int *acpios9q); void n5aioudkvmnweiy2(double buhyalv4[], double fulcp8wa[], double plj0trqx[], int *xtov9rbf, int *acpios9q, int *wep0oibc, int *iflag); void n5aioudkwmhctl9x( double *qgnl3toc, double sjwyig9t[], double po8rwsmy[], int *kuzxj1lo, int *acpios9q, int *pn9eowxc, // int *icrit, double gkdx5jal[], double rpyis2kc[], double imdvf4hx[], double ifys6woa[], double *i9mwnvqt, double xwy[], double *qcpiaj7f, double zvau2lct[], double f6lsuzax[], double fvh2rwtc[], double dcfir2no[], double xecbg0pf[], double z4grbpiq[], double d7glzhbj[], double v2eydbxs[], double buhyalv4[], double fulcp8wa[], double plj0trqx[], int *xtov9rbf, int *wep0oibc, int *algpft4y); void n5aioudkgt9iulbf(double sjwyig9t[], double ghz9vuba[], double po8rwsmy[], double gkdx5jal[], int *rvy1fpli, int *kuzxj1lo, double zyupcmk6[], double zvau2lct[], double f6lsuzax[], double fvh2rwtc[], double dcfir2no[]); void F77_NAME(vinterv)(double*, int*, double*, int*, int*); void F77_NAME(vbsplvd)(double*, int*, double*, int*, double*, double*, int*); void F77_NAME(dpbfa8)(double*, int*, int*, int*, int*); void F77_NAME(dpbsl8)(double*, int*, int*, int*, double*); void F77_NAME(wbvalue)(double*, double*, int*, int*, double*, int*, double*); void n5aioudkdnaoqj0l(double *qgnl3toc, double sjwyig9t[], double bhcji9gl[], double po8rwsmy[], int *kuzxj1lo, int *acpios9q, double gkdx5jal[], double rpyis2kc[], double imdvf4hx[], double ifys6woa[], double *wbkq9zyi, double jstx4uwe[4], double xecbg0pf[], double z4grbpiq[], double d7glzhbj[], double v2eydbxs[], double *tt2, int *cvnjhg2u, int l3zpbstu[3], int *xtov9rbf, int *wep0oibc, int *fbd5yktj) { double *wkumc9idxwy, *wkumc9idbuhyalv4, *wkumc9idzvau2lct, *wkumc9idf6lsuzax, *wkumc9idfvh2rwtc, *wkumc9iddcfir2no, *wkumc9idfulcp8wa, *wkumc9idplj0trqx; wkumc9idxwy = Calloc(*acpios9q, double); wkumc9idzvau2lct = Calloc(*acpios9q, double); wkumc9idf6lsuzax = Calloc(*acpios9q, double); wkumc9idfvh2rwtc = Calloc(*acpios9q, double); wkumc9iddcfir2no = Calloc(*acpios9q, double); wkumc9idbuhyalv4 = Calloc(*xtov9rbf * *acpios9q, double); wkumc9idfulcp8wa = Calloc(*xtov9rbf * *acpios9q, double); wkumc9idplj0trqx = Calloc( (int) 1 , double); n5aioudkhbzuprs6(qgnl3toc, sjwyig9t, bhcji9gl, po8rwsmy, kuzxj1lo, acpios9q, gkdx5jal, 
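/* the remaining arguments are passed through unchanged, apart
                    from the wkumc9id* work arrays Calloc'd above */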
rpyis2kc, imdvf4hx, ifys6woa, wbkq9zyi, l3zpbstu + 1, l3zpbstu + 2, jstx4uwe, jstx4uwe + 1, jstx4uwe + 2, jstx4uwe + 3, cvnjhg2u, wkumc9idxwy, wkumc9idzvau2lct, wkumc9idf6lsuzax, wkumc9idfvh2rwtc, wkumc9iddcfir2no, xecbg0pf, z4grbpiq, d7glzhbj, v2eydbxs, tt2, wkumc9idbuhyalv4, wkumc9idfulcp8wa, wkumc9idplj0trqx, xtov9rbf, wep0oibc, fbd5yktj); Free(wkumc9idxwy); Free(wkumc9idbuhyalv4); Free(wkumc9idzvau2lct); Free(wkumc9idf6lsuzax); Free(wkumc9idfvh2rwtc); Free(wkumc9iddcfir2no); Free(wkumc9idfulcp8wa); Free(wkumc9idplj0trqx); } void n5aioudkhbzuprs6(double *qgnl3toc, double sjwyig9t[], double bhcji9gl[], double po8rwsmy[], int *kuzxj1lo, int *acpios9q, double gkdx5jal[], double *rpyis2kc, double *imdvf4hx, double *ifys6woa, double *wbkq9zyi, int *pn9eowxc, int *ic5aesxku, double *mynl7uaq, double *zustx4fw, double *nbe4gvpq, double *qaltf0nz, int *cvnjhg2u, double xwy[], double zvau2lct[], double f6lsuzax[], double fvh2rwtc[], double dcfir2no[], double xecbg0pf[], double z4grbpiq[], double d7glzhbj[], double v2eydbxs[], double *tt2, double buhyalv4[], double fulcp8wa[], double plj0trqx[], int *xtov9rbf, int *wep0oibc, int *fbd5yktj) { static const double c_Gold = 0.381966011250105151795413165634; double tt1 = 0.0, g2dnwteb, wkumc9ida, wkumc9idb, wkumc9idd, wkumc9ide, wkumc9idxm, wkumc9idp, wkumc9idq, wkumc9idr, // qaltf0nz, Tol1, Tol2, wkumc9idu, wkumc9idv, wkumc9idw, wkumc9idfu, wkumc9idfv, wkumc9idfw, wkumc9idfx, wkumc9idx, wkumc9idax, wkumc9idbx; int ayfnwr1v, viter = 0; double yjpnro8d = 8.0e88, bk3ymcih = 0.0e0, *qcpiaj7f, qcpiaj7f0 = 0.0; qcpiaj7f = &qcpiaj7f0; g2dnwteb = bk3ymcih; bk3ymcih += bk3ymcih; bk3ymcih *= bk3ymcih; bk3ymcih += g2dnwteb; wkumc9idd = 0.0; wkumc9idfu = 0.0e0; wkumc9idu = 0.0e0; if (*cvnjhg2u == 0) { n5aioudkzosq7hub(xecbg0pf, z4grbpiq, d7glzhbj, v2eydbxs, gkdx5jal, acpios9q); *tt2 = 0.0; for (ayfnwr1v = 3; ayfnwr1v <= (*acpios9q - 3); ayfnwr1v++) { *tt2 += xecbg0pf[ayfnwr1v-1]; } *cvnjhg2u = 1; } else { } n5aioudkgt9iulbf(sjwyig9t, bhcji9gl, po8rwsmy, gkdx5jal, kuzxj1lo, acpios9q, xwy, zvau2lct, f6lsuzax, fvh2rwtc, dcfir2no); for (ayfnwr1v = 3; ayfnwr1v <= (*acpios9q - 3); ayfnwr1v++) { tt1 += zvau2lct[ayfnwr1v-1]; } g2dnwteb = tt1 / *tt2; if (*pn9eowxc == 1) { *mynl7uaq = g2dnwteb * pow(16.0, *wbkq9zyi * 6.0 - 2.0); n5aioudkwmhctl9x(qgnl3toc, sjwyig9t, po8rwsmy, kuzxj1lo, acpios9q, pn9eowxc, // icrit, (icrit used to be used solely) gkdx5jal, rpyis2kc, imdvf4hx, ifys6woa, mynl7uaq, xwy, qcpiaj7f, // Not used here zvau2lct, f6lsuzax, fvh2rwtc, dcfir2no, xecbg0pf, z4grbpiq, d7glzhbj, v2eydbxs, buhyalv4, fulcp8wa, plj0trqx, xtov9rbf, wep0oibc, fbd5yktj); return; } wkumc9idax = *mynl7uaq; wkumc9idbx = *zustx4fw; /* Initialization. 
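   What follows is a Brent-style line search (golden section plus
   successive parabolic interpolation) for the smoothing parameter.
   The scalar being optimized is the spar value in [*mynl7uaq,
   *zustx4fw]; each trial value s is mapped to the penalty coefficient
   by
       lambda = r * 16^(6*s - 2),
   where r = g2dnwteb = tt1/tt2 rescales the trace of B'WB against the
   trace of the penalty (apparently the same spar-to-lambda mapping
   used by R's smooth.spline()).  Each trial calls n5aioudkwmhctl9x()
   for one banded penalized least-squares fit; the criterion being
   minimized, the squared gap between the target and the fitted
   degrees of freedom, comes back in *qcpiaj7f.  When *pn9eowxc == 1
   the search is skipped and the supplied spar is used directly
   (handled above).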
*/ wkumc9ida = wkumc9idax; wkumc9idb = wkumc9idbx; wkumc9idv = wkumc9ida + c_Gold * (wkumc9idb - wkumc9ida); wkumc9idw = wkumc9idx = wkumc9idv; wkumc9ide = 0.0e0; *wbkq9zyi = wkumc9idx; *mynl7uaq = g2dnwteb * pow((double) 16.0, (double) *wbkq9zyi * 6.0 - 2.0); n5aioudkwmhctl9x(qgnl3toc, sjwyig9t, po8rwsmy, kuzxj1lo, acpios9q, pn9eowxc, // icrit, gkdx5jal, rpyis2kc, imdvf4hx, ifys6woa, mynl7uaq, xwy, qcpiaj7f, zvau2lct, f6lsuzax, fvh2rwtc, dcfir2no, xecbg0pf, z4grbpiq, d7glzhbj, v2eydbxs, buhyalv4, fulcp8wa, plj0trqx, xtov9rbf, wep0oibc, fbd5yktj); wkumc9idfx = *qcpiaj7f; wkumc9idfv = wkumc9idfw = wkumc9idfx; while (*fbd5yktj == 0) { viter++; wkumc9idxm = 0.5e0 * (wkumc9ida + wkumc9idb); Tol1 = *qaltf0nz * fabs(wkumc9idx) + *nbe4gvpq / 3.0e0; Tol2 = 2.0e0 * Tol1; if ((fabs(wkumc9idx - wkumc9idxm) <= (Tol2 - 0.5 * (wkumc9idb - wkumc9ida))) || (viter > *ic5aesxku)) goto L_End; if ((fabs(wkumc9ide) <= Tol1) || (wkumc9idfx >= yjpnro8d) || (wkumc9idfv >= yjpnro8d) || (wkumc9idfw >= yjpnro8d)) goto a3bdsirf; wkumc9idr = (wkumc9idx - wkumc9idw) * (wkumc9idfx - wkumc9idfv); wkumc9idq = (wkumc9idx - wkumc9idv) * (wkumc9idfx - wkumc9idfw); wkumc9idp = (wkumc9idx - wkumc9idv) * wkumc9idq - (wkumc9idx - wkumc9idw) * wkumc9idr; wkumc9idq = 2.0e0 * (wkumc9idq - wkumc9idr); if (wkumc9idq > 0.0e0) wkumc9idp = -wkumc9idp; wkumc9idq = fabs(wkumc9idq); wkumc9idr = wkumc9ide; wkumc9ide = wkumc9idd; if (fabs(wkumc9idp) >= fabs(0.5 * wkumc9idq * wkumc9idr) || wkumc9idq == 0.0e0) { goto a3bdsirf; } if (wkumc9idp <= wkumc9idq * (wkumc9ida - wkumc9idx) || wkumc9idp >= wkumc9idq * (wkumc9idb - wkumc9idx)) goto a3bdsirf; wkumc9idd = wkumc9idp / wkumc9idq; wkumc9idu = wkumc9idx + wkumc9idd; if (wkumc9idu - wkumc9ida < Tol2 || wkumc9idb - wkumc9idu < Tol2) wkumc9idd = fsign(Tol1, wkumc9idxm - wkumc9idx); goto ceqzd1hi50; a3bdsirf: wkumc9ide = (wkumc9idx >= wkumc9idxm) ? wkumc9ida - wkumc9idx : wkumc9idb - wkumc9idx; wkumc9idd = c_Gold * wkumc9ide; ceqzd1hi50: wkumc9idu = wkumc9idx + ((fabs(wkumc9idd) >= Tol1) ? 
wkumc9idd : fsign(Tol1, wkumc9idd)); *wbkq9zyi = wkumc9idu; *mynl7uaq = g2dnwteb * pow((double) 16.0, (double) *wbkq9zyi * 6.0 - 2.0); n5aioudkwmhctl9x(qgnl3toc, sjwyig9t, po8rwsmy, kuzxj1lo, acpios9q, pn9eowxc, // icrit, gkdx5jal, rpyis2kc, imdvf4hx, ifys6woa, mynl7uaq, xwy, qcpiaj7f, zvau2lct, f6lsuzax, fvh2rwtc, dcfir2no, xecbg0pf, z4grbpiq, d7glzhbj, v2eydbxs, buhyalv4, fulcp8wa, plj0trqx, xtov9rbf, wep0oibc, fbd5yktj); wkumc9idfu = *qcpiaj7f; if (wkumc9idfu > yjpnro8d) wkumc9idfu = 2.0e0 * yjpnro8d; if (wkumc9idfu <= wkumc9idfx) { if (wkumc9idu >= wkumc9idx) wkumc9ida = wkumc9idx; else wkumc9idb = wkumc9idx; wkumc9idv = wkumc9idw; wkumc9idfv = wkumc9idfw; wkumc9idw = wkumc9idx; wkumc9idfw = wkumc9idfx; wkumc9idx = wkumc9idu; wkumc9idfx = wkumc9idfu; } else { if (wkumc9idu < wkumc9idx) wkumc9ida = wkumc9idu; else wkumc9idb = wkumc9idu; if (wkumc9idfu <= wkumc9idfw || wkumc9idw == wkumc9idx) { wkumc9idv = wkumc9idw; wkumc9idfv = wkumc9idfw; wkumc9idw = wkumc9idu; wkumc9idfw = wkumc9idfu; } else if (wkumc9idfu <= wkumc9idfv || wkumc9idv == wkumc9idx || wkumc9idv == wkumc9idw){ wkumc9idv = wkumc9idu; wkumc9idfv = wkumc9idfu; } } } L_End: bk3ymcih = 0.0e0; *wbkq9zyi = wkumc9idx; *qcpiaj7f = wkumc9idfx; return; } void n5aioudkzosq7hub(double xecbg0pf[], double z4grbpiq[], double d7glzhbj[], double v2eydbxs[], double gkdx5jal[], int *acpios9q) { int dqlr5bse, pqzfxw4i, bvsquk3z = 3, h2dpsbkr = 4, nkplus1 = *acpios9q + 1; int ayfnwr1v, gp1jxzuh, yq6lorbx; int urohxe6t; double g9fvdrbw[12], ms0qypiw[16], yw1[4], yw2[4], wrk1, othird = 1.0 / 3.0, *qnwamo0e0, *qnwamo0e1, *qnwamo0e2, *qnwamo0e3; qnwamo0e0 = xecbg0pf; qnwamo0e1 = z4grbpiq; qnwamo0e2 = d7glzhbj; qnwamo0e3 = v2eydbxs; for (ayfnwr1v = 0; ayfnwr1v < *acpios9q; ayfnwr1v++) { *qnwamo0e0++ = *qnwamo0e1++ = *qnwamo0e2++ = *qnwamo0e3++ = 0.0e0; } for (ayfnwr1v = 1; ayfnwr1v <= *acpios9q; ayfnwr1v++) { F77_CALL(vinterv)(gkdx5jal, &nkplus1, gkdx5jal + ayfnwr1v-1, &dqlr5bse, &pqzfxw4i); F77_CALL(vbsplvd)(gkdx5jal, &h2dpsbkr, gkdx5jal + ayfnwr1v - 1, &dqlr5bse, ms0qypiw, g9fvdrbw, &bvsquk3z); for (gp1jxzuh = 1; gp1jxzuh <= 4; gp1jxzuh++) { yw1[gp1jxzuh-1] = g9fvdrbw[gp1jxzuh-1 + 2*4]; } F77_CALL(vbsplvd)(gkdx5jal, &h2dpsbkr, gkdx5jal + ayfnwr1v, &dqlr5bse, ms0qypiw, g9fvdrbw, &bvsquk3z); for (gp1jxzuh = 1; gp1jxzuh <= 4; gp1jxzuh++) { yw2[gp1jxzuh-1] = g9fvdrbw[gp1jxzuh-1 + 2*4] - yw1[gp1jxzuh-1]; } wrk1 = gkdx5jal[ayfnwr1v] - gkdx5jal[ayfnwr1v-1]; if (dqlr5bse >= 4) { for (gp1jxzuh = 1; gp1jxzuh <= 4; gp1jxzuh++) { yq6lorbx = gp1jxzuh; urohxe6t = dqlr5bse - 4 + gp1jxzuh; xecbg0pf[urohxe6t-1] += wrk1 * (yw1[gp1jxzuh-1]*yw1[yq6lorbx-1] + (yw2[gp1jxzuh-1]*yw1[yq6lorbx-1] + yw2[yq6lorbx-1]*yw1[gp1jxzuh-1]) * 0.50 + yw2[gp1jxzuh-1]*yw2[yq6lorbx-1] * othird); yq6lorbx = gp1jxzuh + 1; if (yq6lorbx <= 4) { z4grbpiq[urohxe6t-1] += wrk1 * (yw1[gp1jxzuh-1]*yw1[yq6lorbx-1] + (yw2[gp1jxzuh-1]*yw1[yq6lorbx-1] + yw2[yq6lorbx-1]*yw1[gp1jxzuh-1]) * 0.50 + yw2[gp1jxzuh-1]*yw2[yq6lorbx-1] * othird); } yq6lorbx = gp1jxzuh + 2; if (yq6lorbx <= 4) { d7glzhbj[urohxe6t-1] += wrk1 * (yw1[gp1jxzuh-1]*yw1[yq6lorbx-1] + (yw2[gp1jxzuh-1]*yw1[yq6lorbx-1] + yw2[yq6lorbx-1]*yw1[gp1jxzuh-1]) * 0.50 + yw2[gp1jxzuh-1]*yw2[yq6lorbx-1] * othird); } yq6lorbx = gp1jxzuh + 3; if (yq6lorbx <= 4) { v2eydbxs[urohxe6t-1] += wrk1 * (yw1[gp1jxzuh-1]*yw1[yq6lorbx-1] + (yw2[gp1jxzuh-1]*yw1[yq6lorbx-1] + yw2[yq6lorbx-1]*yw1[gp1jxzuh-1]) * 0.50 + yw2[gp1jxzuh-1]*yw2[yq6lorbx-1] * othird); } } } else if (dqlr5bse == 3) { for (gp1jxzuh = 1; gp1jxzuh <= 3; gp1jxzuh++) { yq6lorbx = gp1jxzuh; 
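/* Accumulation pattern used throughout this routine: the second
   derivative of a cubic B-spline is linear on each knot interval, so
   with yw1 = B''(left knot) and yw2 = B''(right knot) - B''(left knot),
   the exact integral of a product of two such pieces over an interval
   of width wrk1 is
     wrk1 * (yw1_j*yw1_k + (yw1_j*yw2_k + yw2_j*yw1_k)/2 + yw2_j*yw2_k/3),
   because int_0^1 (a + b*u)(c + d*u) du = a*c + (a*d + b*c)/2 + b*d/3.
   The branches on dqlr5bse differ only in how many B-splines are
   nonzero near the boundary. */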
urohxe6t = dqlr5bse - 3 + gp1jxzuh; xecbg0pf[urohxe6t-1] += wrk1 * (yw1[gp1jxzuh-1]*yw1[yq6lorbx-1] + (yw2[gp1jxzuh-1]*yw1[yq6lorbx-1] + yw2[yq6lorbx-1]*yw1[gp1jxzuh-1]) * 0.50 + yw2[gp1jxzuh-1]*yw2[yq6lorbx-1] * othird); yq6lorbx = gp1jxzuh + 1; if (yq6lorbx <= 3) { z4grbpiq[urohxe6t-1] += wrk1 * (yw1[gp1jxzuh-1]*yw1[yq6lorbx-1] + (yw2[gp1jxzuh-1]*yw1[yq6lorbx-1] + yw2[yq6lorbx-1]*yw1[gp1jxzuh-1]) * 0.50 + yw2[gp1jxzuh-1]*yw2[yq6lorbx-1] * othird); } yq6lorbx = gp1jxzuh + 2; if (yq6lorbx <= 3) { d7glzhbj[urohxe6t-1] += wrk1 * (yw1[gp1jxzuh-1]*yw1[yq6lorbx-1] + (yw2[gp1jxzuh-1]*yw1[yq6lorbx-1] + yw2[yq6lorbx-1]*yw1[gp1jxzuh-1]) * 0.50 + yw2[gp1jxzuh-1]*yw2[yq6lorbx-1] * othird); } } } else if (dqlr5bse == 2) { for (gp1jxzuh = 1; gp1jxzuh <= 2; gp1jxzuh++) { yq6lorbx = gp1jxzuh; urohxe6t = dqlr5bse - 2 + gp1jxzuh; xecbg0pf[urohxe6t-1] += wrk1 * (yw1[gp1jxzuh-1]*yw1[yq6lorbx-1] + (yw2[gp1jxzuh-1]*yw1[yq6lorbx-1] + yw2[yq6lorbx-1]*yw1[gp1jxzuh-1]) * 0.50 + yw2[gp1jxzuh-1]*yw2[yq6lorbx-1] * othird); yq6lorbx = gp1jxzuh + 1; if (yq6lorbx <= 2) { z4grbpiq[urohxe6t-1] += wrk1 * (yw1[gp1jxzuh-1]*yw1[yq6lorbx-1] + (yw2[gp1jxzuh-1]*yw1[yq6lorbx-1] + yw2[yq6lorbx-1]*yw1[gp1jxzuh-1]) * 0.50 + yw2[gp1jxzuh-1]*yw2[yq6lorbx-1] * othird); } } } else if (dqlr5bse == 1) { for (gp1jxzuh = 1; gp1jxzuh <= 1; gp1jxzuh++) { yq6lorbx = gp1jxzuh; urohxe6t = dqlr5bse - 1 + gp1jxzuh; xecbg0pf[urohxe6t-1] += wrk1 * (yw1[gp1jxzuh-1]*yw1[yq6lorbx-1] + (yw2[gp1jxzuh-1]*yw1[yq6lorbx-1] + yw2[yq6lorbx-1]*yw1[gp1jxzuh-1]) * 0.50 + yw2[gp1jxzuh-1]*yw2[yq6lorbx-1] * othird); } } } } void n5aioudkvmnweiy2(double buhyalv4[], double fulcp8wa[], double plj0trqx[], int *xtov9rbf, int *acpios9q, int *wep0oibc, int *iflag) { int ayfnwr1v, yq6lorbx, gp1jxzuh; double wjm3[3], wjm2[2], wjm1[1], c0, c1, c2, c3; double pcsuow9k, qdbgu6oi, upwkh5xz, rul5fnyd, ueydbrg6, plce2srm, k3yvomnh, bfdjhu7l, ctfvwdu0; c1 = c2 = c3 = 0.0e0; wjm3[0] = wjm3[1] = wjm3[2] = wjm2[0] = wjm2[1] = wjm1[0] = 0.0e0; for (ayfnwr1v = 1; ayfnwr1v <= *acpios9q; ayfnwr1v++) { yq6lorbx = *acpios9q - ayfnwr1v + 1; c0 = 1.0e0 / buhyalv4[3 + (yq6lorbx-1) * *xtov9rbf]; if (yq6lorbx <= (*acpios9q-3)) { c1 = buhyalv4[0 + (yq6lorbx+2) * *xtov9rbf] * c0; c2 = buhyalv4[1 + (yq6lorbx+1) * *xtov9rbf] * c0; c3 = buhyalv4[2 + (yq6lorbx+0) * *xtov9rbf] * c0; } else if (yq6lorbx == (*acpios9q - 2)) { c1 = 0.0e0; c2 = buhyalv4[1 + (yq6lorbx+1) * *xtov9rbf] * c0; c3 = buhyalv4[2 + yq6lorbx * *xtov9rbf] * c0; } else if (yq6lorbx == (*acpios9q - 1)) { c1 = c2 = 0.0e0; c3 = buhyalv4[2 + yq6lorbx * *xtov9rbf] * c0; } else if (yq6lorbx == *acpios9q) { c1 = c2 = c3 = 0.0e0; } pcsuow9k = c1 * wjm3[0]; qdbgu6oi = c2 * wjm3[1]; upwkh5xz = c3 * wjm3[2]; rul5fnyd = c1 * wjm3[1]; ueydbrg6 = c2 * wjm2[0]; plce2srm = c3 * wjm2[1]; k3yvomnh = c1 * wjm3[2]; bfdjhu7l = c2 * wjm2[1]; ctfvwdu0 = c3 * wjm1[0]; fulcp8wa[0 + (yq6lorbx-1) * *xtov9rbf] = 0.0 - (pcsuow9k+qdbgu6oi+upwkh5xz); fulcp8wa[1 + (yq6lorbx-1) * *xtov9rbf] = 0.0 - (rul5fnyd+ueydbrg6+plce2srm); fulcp8wa[2 + (yq6lorbx-1) * *xtov9rbf] = 0.0 - (k3yvomnh+bfdjhu7l+ctfvwdu0); fulcp8wa[3 + (yq6lorbx-1) * *xtov9rbf] = pow(c0, (double) 2.0) + c1 * (pcsuow9k + 2.0e0 * (qdbgu6oi + upwkh5xz)) + c2 * (ueydbrg6 + 2.0e0 * plce2srm) + c3 * ctfvwdu0; wjm3[0] = wjm2[0]; wjm3[1] = wjm2[1]; wjm3[2] = fulcp8wa[1 + (yq6lorbx-1) * *xtov9rbf]; wjm2[0] = wjm1[0]; wjm2[1] = fulcp8wa[2 + (yq6lorbx-1) * *xtov9rbf]; wjm1[0] = fulcp8wa[3 + (yq6lorbx-1) * *xtov9rbf]; } if (*iflag == 0) { return; } Rprintf("plj0trqx must not be a double of length one!\n"); for 
(ayfnwr1v = 1; ayfnwr1v <= *acpios9q; ayfnwr1v++) { yq6lorbx = *acpios9q - ayfnwr1v + 1; for (gp1jxzuh = 1; gp1jxzuh <= 4 && yq6lorbx + gp1jxzuh-1 <= *acpios9q; gp1jxzuh++) { plj0trqx[yq6lorbx-1 + (yq6lorbx+gp1jxzuh-2) * *wep0oibc] = fulcp8wa[4-gp1jxzuh + (yq6lorbx-1) * *xtov9rbf]; } } for (ayfnwr1v = 1; ayfnwr1v <= *acpios9q; ayfnwr1v++) { yq6lorbx = *acpios9q - ayfnwr1v + 1; for (gp1jxzuh = yq6lorbx-4; gp1jxzuh >= 1; gp1jxzuh--) { c0 = 1.0 / buhyalv4[3 + (gp1jxzuh-1) * *xtov9rbf]; c1 = buhyalv4[0 + (gp1jxzuh+2) * *xtov9rbf] * c0; c2 = buhyalv4[1 + (gp1jxzuh+1) * *xtov9rbf] * c0; c3 = buhyalv4[2 + gp1jxzuh * *xtov9rbf] * c0; plj0trqx[gp1jxzuh-1 + (yq6lorbx-1) * *wep0oibc] = 0.0e0 - ( c1 * plj0trqx[gp1jxzuh+2 + (yq6lorbx-1) * *wep0oibc] + c2 * plj0trqx[gp1jxzuh+1 + (yq6lorbx-1) * *wep0oibc] + c3 * plj0trqx[gp1jxzuh + (yq6lorbx-1) * *wep0oibc] ); } } } void n5aioudkwmhctl9x(double *qgnl3toc, double sjwyig9t[], double po8rwsmy[], int *kuzxj1lo, int *acpios9q, int *pn9eowxc, // int *icrit, double gkdx5jal[], double rpyis2kc[], double imdvf4hx[], double ifys6woa[], double *i9mwnvqt, double xwy[], double *qcpiaj7f, double zvau2lct[], double f6lsuzax[], double fvh2rwtc[], double dcfir2no[], double xecbg0pf[], double z4grbpiq[], double d7glzhbj[], double v2eydbxs[], double buhyalv4[], double fulcp8wa[], double plj0trqx[], int *xtov9rbf, int *wep0oibc, int *algpft4y) { double ms0qypiw[16], b0, b1, b2, b3, qaltf0nz = 0.1e-10, g9fvdrbw[4], qtce8hzo, *chw8lzty, egwbdua212 = 0.0e0; int yu6izdrc = 0, pqneb2ra = 1, bvsquk3z = 3, h2dpsbkr = 4, pqzfxw4i, ayfnwr1v, yq6lorbx, dqlr5bse, nkp1 = *acpios9q + 1; double *qnwamo0e1, *qnwamo0e2; qnwamo0e1 = rpyis2kc; qnwamo0e2 = xwy; for (ayfnwr1v = 0; ayfnwr1v < *acpios9q; ayfnwr1v++) { *qnwamo0e1++ = *qnwamo0e2++; } qnwamo0e1 = zvau2lct; qnwamo0e2 = xecbg0pf; for (ayfnwr1v = 0; ayfnwr1v < *acpios9q; ayfnwr1v++) { buhyalv4[3 + ayfnwr1v * *xtov9rbf] = *qnwamo0e1++ + *i9mwnvqt * *qnwamo0e2++; } qnwamo0e1 = f6lsuzax; qnwamo0e2 = z4grbpiq; for (ayfnwr1v = 1; ayfnwr1v <= (*acpios9q-1); ayfnwr1v++) { buhyalv4[2 + ayfnwr1v * *xtov9rbf] = *qnwamo0e1++ + *i9mwnvqt * *qnwamo0e2++; } qnwamo0e1 = fvh2rwtc; qnwamo0e2 = d7glzhbj; for (ayfnwr1v = 1; ayfnwr1v <= (*acpios9q-2); ayfnwr1v++) { buhyalv4[1 + (ayfnwr1v+1) * *xtov9rbf] = *qnwamo0e1++ + *i9mwnvqt * *qnwamo0e2++; } qnwamo0e1 = dcfir2no; qnwamo0e2 = v2eydbxs; for (ayfnwr1v = 1; ayfnwr1v <= (*acpios9q-3); ayfnwr1v++) { buhyalv4[ (ayfnwr1v+2) * *xtov9rbf] = *qnwamo0e1++ + *i9mwnvqt * *qnwamo0e2++; } F77_CALL(dpbfa8)(buhyalv4, xtov9rbf, acpios9q, &bvsquk3z, algpft4y); if (*algpft4y != 0) { Rprintf("In C function wmhctl9x; Error:\n"); Rprintf("Leading minor of order %d is not pos-def\n", *algpft4y); return; } F77_CALL(dpbsl8)(buhyalv4, xtov9rbf, acpios9q, &bvsquk3z, rpyis2kc); chw8lzty = sjwyig9t; qnwamo0e1 = imdvf4hx; for (ayfnwr1v = 1; ayfnwr1v <= *kuzxj1lo; ayfnwr1v++) { F77_CALL(wbvalue)(gkdx5jal, rpyis2kc, acpios9q, &h2dpsbkr, chw8lzty++, &yu6izdrc, qnwamo0e1++); } n5aioudkvmnweiy2(buhyalv4, fulcp8wa, plj0trqx, xtov9rbf, acpios9q, wep0oibc, &yu6izdrc); //Rprintf("first one n5aioudkwmhctl9x pow(po8rwsmy[0], (double) 1.0) = "); //Rprintf("%9.5e\n", pow(po8rwsmy[0], (double) 1.0)); chw8lzty = sjwyig9t; for (ayfnwr1v = 1; ayfnwr1v <= *kuzxj1lo; ayfnwr1v++) { F77_CALL(vinterv)(gkdx5jal, &nkp1, chw8lzty, &dqlr5bse, &pqzfxw4i); if (pqzfxw4i == -1) { dqlr5bse = 4; *chw8lzty = gkdx5jal[3] + qaltf0nz; } else if (pqzfxw4i == 1) { dqlr5bse = *acpios9q; *chw8lzty = gkdx5jal[*acpios9q] - qaltf0nz; } yq6lorbx = dqlr5bse-3; 
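/* Leverage computation: vinterv() above has located the knot interval
   containing this abscissa, and vbsplvd() below evaluates the four
   cubic B-splines b0..b3 that are nonzero there.  These are contracted
   against the covariance band fulcp8wa (filled in by
   n5aioudkvmnweiy2() from the Cholesky factor buhyalv4), and the
   result, scaled by the prior weight po8rwsmy[ayfnwr1v-1], is the
   hat-matrix diagonal entry stored in ifys6woa[ayfnwr1v-1]. */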
F77_CALL(vbsplvd)(gkdx5jal, &h2dpsbkr, chw8lzty++, &dqlr5bse, ms0qypiw, g9fvdrbw, &pqneb2ra); b0 = g9fvdrbw[0]; b1 = g9fvdrbw[1]; b2 = g9fvdrbw[2]; b3 = g9fvdrbw[3]; qtce8hzo = (b0 * (fulcp8wa[3 + (yq6lorbx-1) * *xtov9rbf] * b0 + 2.0e0* (fulcp8wa[2 + (yq6lorbx-1) * *xtov9rbf] * b1 + fulcp8wa[1 + (yq6lorbx-1) * *xtov9rbf] * b2 + fulcp8wa[0 + (yq6lorbx-1) * *xtov9rbf] * b3)) + b1 * (fulcp8wa[3 + yq6lorbx * *xtov9rbf] * b1 + 2.0e0* (fulcp8wa[2 + yq6lorbx * *xtov9rbf] * b2 + fulcp8wa[1 + yq6lorbx * *xtov9rbf] * b3)) + b2 * (fulcp8wa[3 + (yq6lorbx+1) * *xtov9rbf] * b2 + 2.0e0* fulcp8wa[2 + (yq6lorbx+1) * *xtov9rbf] * b3) + fulcp8wa[3 + (yq6lorbx+2) * *xtov9rbf] * pow(b3, (double) 2.0)) * po8rwsmy[ayfnwr1v-1]; ifys6woa[ayfnwr1v-1] = qtce8hzo; } if (*pn9eowxc == 1) { return; } for (ayfnwr1v = 1; ayfnwr1v <= *kuzxj1lo; ayfnwr1v++) { egwbdua212 += ifys6woa[ayfnwr1v-1]; } *qcpiaj7f = pow(*qgnl3toc - egwbdua212, (double) 2.0); } void n5aioudkgt9iulbf(double sjwyig9t[], double ghz9vuba[], double po8rwsmy[], double gkdx5jal[], int *rvy1fpli, int *kuzxj1lo, double zyupcmk6[], double zvau2lct[], double f6lsuzax[], double fvh2rwtc[], double dcfir2no[]) { double g9fvdrbw[4], ms0qypiw[16], wsvdbx3tk, wv2svdbx3tk, qaltf0nz = 0.1e-9; int ayfnwr1v, yq6lorbx, dqlr5bse, pqzfxw4i, nhnpt1zym1 = *kuzxj1lo + 1, pqneb2ra = 1, h2dpsbkr = 4; double *qnwamo0e0, *qnwamo0e1, *qnwamo0e2, *qnwamo0e3, *qnwamo0e4; qnwamo0e0 = zvau2lct; qnwamo0e1 = f6lsuzax; qnwamo0e2 = fvh2rwtc; qnwamo0e3 = dcfir2no; qnwamo0e4 = zyupcmk6; for (ayfnwr1v = 0; ayfnwr1v < *kuzxj1lo; ayfnwr1v++) { *qnwamo0e0++ = *qnwamo0e1++ = *qnwamo0e2++ = *qnwamo0e3++ = *qnwamo0e4++ = 0.0e0; } //Rprintf("first one n5aioudkgt9iulbf pow(po8rwsmy[0], (double) 1.0) = "); //Rprintf("%9.5e\n", pow(po8rwsmy[0], (double) 1.0)); for (ayfnwr1v = 1; ayfnwr1v <= *rvy1fpli; ayfnwr1v++) { F77_CALL(vinterv)(gkdx5jal, &nhnpt1zym1, sjwyig9t + ayfnwr1v - 1, &dqlr5bse, &pqzfxw4i); if (pqzfxw4i == 1) { if (sjwyig9t[ayfnwr1v-1] <= (gkdx5jal[dqlr5bse-1] + qaltf0nz)) { dqlr5bse--; } else { return; } } F77_CALL(vbsplvd)(gkdx5jal, &h2dpsbkr, sjwyig9t + ayfnwr1v - 1, &dqlr5bse, ms0qypiw, g9fvdrbw, &pqneb2ra); yq6lorbx = dqlr5bse - 4 + 1; wsvdbx3tk = po8rwsmy[ayfnwr1v-1]; wv2svdbx3tk = wsvdbx3tk * g9fvdrbw[0]; zyupcmk6[yq6lorbx-1] += wv2svdbx3tk * ghz9vuba[ayfnwr1v-1]; zvau2lct[yq6lorbx-1] += wv2svdbx3tk * g9fvdrbw[0]; f6lsuzax[yq6lorbx-1] += wv2svdbx3tk * g9fvdrbw[1]; fvh2rwtc[yq6lorbx-1] += wv2svdbx3tk * g9fvdrbw[2]; dcfir2no[yq6lorbx-1] += wv2svdbx3tk * g9fvdrbw[3]; yq6lorbx = dqlr5bse - 4 + 2; wv2svdbx3tk = wsvdbx3tk * g9fvdrbw[1]; zyupcmk6[yq6lorbx-1] += wv2svdbx3tk * ghz9vuba[ayfnwr1v-1]; zvau2lct[yq6lorbx-1] += wv2svdbx3tk * g9fvdrbw[1]; f6lsuzax[yq6lorbx-1] += wv2svdbx3tk * g9fvdrbw[2]; fvh2rwtc[yq6lorbx-1] += wv2svdbx3tk * g9fvdrbw[3]; yq6lorbx = dqlr5bse - 4 + 3; wv2svdbx3tk = wsvdbx3tk * g9fvdrbw[2]; zyupcmk6[yq6lorbx-1] += wv2svdbx3tk * ghz9vuba[ayfnwr1v-1]; zvau2lct[yq6lorbx-1] += wv2svdbx3tk * g9fvdrbw[2]; f6lsuzax[yq6lorbx-1] += wv2svdbx3tk * g9fvdrbw[3]; yq6lorbx = dqlr5bse; wv2svdbx3tk = wsvdbx3tk * g9fvdrbw[3]; zyupcmk6[yq6lorbx-1] += wv2svdbx3tk * ghz9vuba[ayfnwr1v-1]; zvau2lct[yq6lorbx-1] += wv2svdbx3tk * g9fvdrbw[3]; } } VGAM/src/rgam.f0000644000176000001440000005077112136651167012742 0ustar ripleyusers subroutine dnaoqj0l(penalt,pjb6wfoq,xs,ys,ws, kuzxj1lo,nk, &ankcghz2,coef,sz,ifys6woa, qcpiaj7f,wbkq9zyi,parms, scrtch, &gp0xjetb,l3zpbstu,e5knafcg,wep0oibc,fbd5yktj) implicit logical (a-z) integer kuzxj1lo, nk, gp0xjetb, l3zpbstu(3), e5knafcg, wep0oibc, &fbd5yktj 
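c     Note: this file appears to be the fixed-form Fortran twin of
c     rgam3.c above; dnaoqj0l, hbzuprs6, zosq7hub, vmnweiy2, wmhctl9x
c     and gt9iulbf mirror the identically-suffixed n5aioudk* C routines.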
double precision penalt, pjb6wfoq, xs(kuzxj1lo), ys(kuzxj1lo), ws( &kuzxj1lo), ankcghz2(nk+4), coef(nk), sz(kuzxj1lo), ifys6woa( &kuzxj1lo), qcpiaj7f, wbkq9zyi, parms(3), scrtch(*) call hbzuprs6(penalt,pjb6wfoq,xs,ys,ws, kuzxj1lo,nk, ankcghz2, &coef,sz,ifys6woa, qcpiaj7f,l3zpbstu(1),wbkq9zyi,l3zpbstu(2), &l3zpbstu(3), parms(1),parms(2),parms(3), gp0xjetb, scrtch(1), &scrtch(nk+1),scrtch(2*nk+1),scrtch(3*nk+1),scrtch(4*nk+1), scrtch( &5*nk+1),scrtch(6*nk+1),scrtch(7*nk+1),scrtch(8*nk+1), scrtch(9*nk+ &1),scrtch(9*nk+e5knafcg*nk+1),scrtch(9*nk+2*e5knafcg*nk+1), &e5knafcg,wep0oibc,fbd5yktj) return end subroutine hbzuprs6(penalt,pjb6wfoq,xs,ys,ws, kuzxj1lo,nk, &ankcghz2,coef,sz,ifys6woa, qcpiaj7f,icrit,i9mwnvqt,ispar, &c5aesxku, mynl7uaq,zustx4fw,tol, gp0xjetb, xwy, zvau2lct,f6lsuzax, &fvh2rwtc,dcfir2no, xecbg0pf,z4grbpiq,d7glzhbj,v2eydbxs, buhyalv4, &fulcp8wa,plj0trqx, e5knafcg,wep0oibc,fbd5yktj) implicit logical (a-z) integer kuzxj1lo,nk, icrit,ispar, gp0xjetb, e5knafcg,wep0oibc, &fbd5yktj integer c5aesxku double precision penalt,pjb6wfoq,xs(kuzxj1lo),ys(kuzxj1lo),ws( &kuzxj1lo), ankcghz2(nk+4), coef(nk),sz(kuzxj1lo),ifys6woa( &kuzxj1lo), qcpiaj7f,i9mwnvqt,mynl7uaq,zustx4fw,tol, xwy(nk), &zvau2lct(nk),f6lsuzax(nk),fvh2rwtc(nk),dcfir2no(nk), xecbg0pf(nk), &z4grbpiq(nk),d7glzhbj(nk),v2eydbxs(nk), buhyalv4(e5knafcg,nk), &fulcp8wa(e5knafcg,nk),plj0trqx(wep0oibc,nk) double precision t1,t2,ratio, a,b,c,d,e,qaltf0nz,xm,p,q,r,tol1, &tol2,u,v,w, fu,fv,fw,fx,x, ax,bx integer ayfnwr1v, viter double precision yjpnro8d, hmayv1xt yjpnro8d = 8.0d88 hmayv1xt = 0.0d0 d = 0.5d0 u = 0.5d0 ratio = 0.5d0 ayfnwr1v = 1 23000 if(.not.(ayfnwr1v.le.kuzxj1lo))goto 23002 if(.not.(ws(ayfnwr1v).gt.0.0d0))goto 23003 ws(ayfnwr1v) = dsqrt(ws(ayfnwr1v)) 23003 continue ayfnwr1v = ayfnwr1v+1 goto 23000 23002 continue if(.not.(gp0xjetb .eq. 0))goto 23005 call zosq7hub(xecbg0pf,z4grbpiq,d7glzhbj,v2eydbxs,ankcghz2,nk) call gt9iulbf(xs,ys,ws,ankcghz2, kuzxj1lo,nk, xwy,zvau2lct, &f6lsuzax,fvh2rwtc,dcfir2no) t1 = 0.0d0 t2 = 0.0d0 do 23007 ayfnwr1v = 3,nk-3 t1 = t1 + zvau2lct(ayfnwr1v) 23007 continue do 23009 ayfnwr1v = 3,nk-3 t2 = t2 + xecbg0pf(ayfnwr1v) 23009 continue ratio = t1/t2 gp0xjetb = 1 23005 continue if(.not.(ispar .eq. 1))goto 23011 call wmhctl9x(penalt,pjb6wfoq,xs,ys,ws, kuzxj1lo,nk,icrit, &ankcghz2,coef,sz,ifys6woa,qcpiaj7f, i9mwnvqt, xwy, zvau2lct, &f6lsuzax,fvh2rwtc,dcfir2no, xecbg0pf,z4grbpiq,d7glzhbj,v2eydbxs, &buhyalv4,fulcp8wa,plj0trqx,e5knafcg,wep0oibc,fbd5yktj) return 23011 continue ax = mynl7uaq bx = zustx4fw c = 0.381966011250105097d0 qaltf0nz = 2.0d-5 viter = 0 a = ax b = bx v = a + c*(b - a) w = v x = v e = 0.0d0 i9mwnvqt = ratio * dexp((-2.0d0 + x*6.0d0) * dlog(16.0d0)) call wmhctl9x(penalt,pjb6wfoq,xs,ys,ws, kuzxj1lo,nk,icrit, &ankcghz2,coef,sz,ifys6woa,qcpiaj7f, i9mwnvqt, xwy, zvau2lct, &f6lsuzax,fvh2rwtc,dcfir2no, xecbg0pf,z4grbpiq,d7glzhbj,v2eydbxs, &buhyalv4,fulcp8wa,plj0trqx,e5knafcg,wep0oibc,fbd5yktj) fx = qcpiaj7f fv = fx fw = fx 23013 if(.not.(fbd5yktj .eq. 0))goto 23014 viter = viter + 1 xm = 0.5d0*(a + b) tol1 = qaltf0nz*dabs(x) + tol/3.0d0 tol2 = 2.0d0*tol1 if(.not.((dabs(x - xm) .le. (tol2 - 0.5d0*(b - a))) .or.(viter &.gt. c5aesxku)))goto 23015 go to 90 23015 continue if(.not.((dabs(e) .le. tol1) .or.(fx .ge. yjpnro8d) .or.(fv .ge. &yjpnro8d) .or.(fw .ge. yjpnro8d)))goto 23017 go to 40 23017 continue r = (x - w)*(fx - fv) q = (x - v)*(fx - fw) p = (x - v)*q - (x - w)*r q = 2.0d0 * (q - r) if(.not.(q .gt. 0.0d0))goto 23019 p = -p 23019 continue q = dabs(q) r = e e = d if(.not.((dabs(p) .ge. 
dabs(0.5d0*q*r)) .or.(q .eq. 0.0d0)))goto 2 &3021 go to 40 23021 continue if(.not.((p .le. q*(a - x)) .or. (p .ge. q*(b - x))))goto 23023 go to 40 23023 continue d = p/q u = x + d if(.not.((u - a) .lt. tol2))goto 23025 d = dsign(tol1, xm - x) 23025 continue if(.not.((b - u) .lt. tol2))goto 23027 d = dsign(tol1, xm - x) 23027 continue go to 50 40 if(.not.(x .ge. xm))goto 23029 e = a - x goto 23030 23029 continue e = b - x 23030 continue d = c*e 50 if(.not.(dabs(d) .ge. tol1))goto 23031 u = x + d goto 23032 23031 continue u = x + dsign(tol1, d) 23032 continue i9mwnvqt = ratio * dexp((-2.0d0 + u*6.0) * dlog(16.0d0)) call wmhctl9x(penalt,pjb6wfoq,xs,ys,ws, kuzxj1lo,nk,icrit, &ankcghz2,coef,sz,ifys6woa,qcpiaj7f, i9mwnvqt, xwy, zvau2lct, &f6lsuzax,fvh2rwtc,dcfir2no, xecbg0pf,z4grbpiq,d7glzhbj,v2eydbxs, &buhyalv4,fulcp8wa,plj0trqx,e5knafcg,wep0oibc,fbd5yktj) fu = qcpiaj7f if(.not.(fu .gt. yjpnro8d))goto 23033 fu = 2.0d0 * yjpnro8d 23033 continue if(.not.(fu .le. fx))goto 23035 if(.not.(u .ge. x))goto 23037 a = x goto 23038 23037 continue b = x 23038 continue v = w fv = fw w = x fw = fx x = u fx = fu goto 23036 23035 continue if(.not.(u .lt. x))goto 23039 a = u goto 23040 23039 continue b = u 23040 continue if(.not.((fu .le. fw) .or. (w .eq. x)))goto 23041 v = w fv = fw w = u fw = fu goto 23042 23041 continue if(.not.((fu .le. fv) .or. (v .eq. x) .or. (v .eq. w)))goto 23043 v = u fv = fu 23043 continue 23042 continue 23036 continue goto 23013 23014 continue 90 hmayv1xt = 0.0d0 i9mwnvqt = ratio * dexp((-2.0d0 + x*6.0d0) * dlog(16.0d0)) qcpiaj7f = fx return return end subroutine zosq7hub(xecbg0pf,z4grbpiq,d7glzhbj,v2eydbxs,tb,nb) implicit logical (a-z) integer nb double precision xecbg0pf(nb),z4grbpiq(nb),d7glzhbj(nb),v2eydbxs( &nb),tb(nb+4) integer dqlr5bse,ilo,pqzfxw4i, three3, ifour4, nbp1 integer ayfnwr1v,iii,yq6lorbx integer i2svdbx3tk double precision g9fvdrbw(4,3),work(16),yw1(4),yw2(4), wpt double precision othird othird = 1.0d0 / 3.0d0 three3 = 3 ifour4 = 4 nbp1 = nb + 1 do 23045 ayfnwr1v = 1,nb xecbg0pf(ayfnwr1v) = 0.0d0 z4grbpiq(ayfnwr1v) = 0.0d0 d7glzhbj(ayfnwr1v) = 0.0d0 v2eydbxs(ayfnwr1v) = 0.0d0 23045 continue ilo = 1 do 23047 ayfnwr1v = 1,nb call vinterv(tb(1), nbp1 ,tb(ayfnwr1v),dqlr5bse,pqzfxw4i) call vbsplvd(tb,ifour4,tb(ayfnwr1v),dqlr5bse,work,g9fvdrbw,three3) do 23049 iii = 1,4 yw1(iii) = g9fvdrbw(iii,3) 23049 continue call vbsplvd(tb,ifour4,tb(ayfnwr1v+1),dqlr5bse,work,g9fvdrbw, &three3) do 23051 iii = 1,4 yw2(iii) = g9fvdrbw(iii,3) - yw1(iii) 23051 continue wpt = tb(ayfnwr1v+1) - tb(ayfnwr1v) if(.not.(dqlr5bse .ge. 4))goto 23053 do 23055 iii = 1,4 yq6lorbx = iii i2svdbx3tk = dqlr5bse-4+iii xecbg0pf(i2svdbx3tk) = xecbg0pf(i2svdbx3tk) + wpt * (yw1(iii)*yw1( &yq6lorbx) + (yw2(iii)*yw1(yq6lorbx) + yw2(yq6lorbx)*yw1(iii))*0. &50 + yw2(iii)*yw2(yq6lorbx)*othird) yq6lorbx = iii+1 if(.not.(yq6lorbx .le. 4))goto 23057 z4grbpiq(i2svdbx3tk) = z4grbpiq(i2svdbx3tk) + wpt* (yw1(iii)*yw1( &yq6lorbx) + (yw2(iii)*yw1(yq6lorbx) + yw2(yq6lorbx)*yw1(iii))*0. &50 + yw2(iii)*yw2(yq6lorbx)*othird) 23057 continue yq6lorbx = iii+2 if(.not.(yq6lorbx .le. 4))goto 23059 d7glzhbj(i2svdbx3tk) = d7glzhbj(i2svdbx3tk) + wpt* (yw1(iii)*yw1( &yq6lorbx) + (yw2(iii)*yw1(yq6lorbx) + yw2(yq6lorbx)*yw1(iii))*0. &50 + yw2(iii)*yw2(yq6lorbx)*othird) 23059 continue yq6lorbx = iii+3 if(.not.(yq6lorbx .le. 4))goto 23061 v2eydbxs(i2svdbx3tk) = v2eydbxs(i2svdbx3tk) + wpt* (yw1(iii)*yw1( &yq6lorbx) + (yw2(iii)*yw1(yq6lorbx) + yw2(yq6lorbx)*yw1(iii))*0. 
&50 + yw2(iii)*yw2(yq6lorbx)*othird) 23061 continue 23055 continue goto 23054 23053 continue if(.not.(dqlr5bse .eq. 3))goto 23063 do 23065 iii = 1,3 yq6lorbx = iii i2svdbx3tk = dqlr5bse-3+iii xecbg0pf(i2svdbx3tk) = xecbg0pf(i2svdbx3tk) + wpt* (yw1(iii)*yw1( &yq6lorbx) + (yw2(iii)*yw1(yq6lorbx) + yw2(yq6lorbx)*yw1(iii))*0. &50 + yw2(iii)*yw2(yq6lorbx)*othird) yq6lorbx = iii+1 if(.not.(yq6lorbx .le. 3))goto 23067 z4grbpiq(i2svdbx3tk) = z4grbpiq(i2svdbx3tk) + wpt* (yw1(iii)*yw1( &yq6lorbx) + (yw2(iii)*yw1(yq6lorbx) + yw2(yq6lorbx)*yw1(iii))*0. &50 + yw2(iii)*yw2(yq6lorbx)*othird) 23067 continue yq6lorbx = iii+2 if(.not.(yq6lorbx .le. 3))goto 23069 d7glzhbj(i2svdbx3tk) = d7glzhbj(i2svdbx3tk) + wpt* (yw1(iii)*yw1( &yq6lorbx) + (yw2(iii)*yw1(yq6lorbx) + yw2(yq6lorbx)*yw1(iii))*0. &50 + yw2(iii)*yw2(yq6lorbx)*othird) 23069 continue 23065 continue goto 23064 23063 continue if(.not.(dqlr5bse .eq. 2))goto 23071 do 23073 iii = 1,2 yq6lorbx = iii i2svdbx3tk = dqlr5bse-2+iii xecbg0pf(i2svdbx3tk) = xecbg0pf(i2svdbx3tk) + wpt* (yw1(iii)*yw1( &yq6lorbx) + (yw2(iii)*yw1(yq6lorbx) + yw2(yq6lorbx)*yw1(iii))*0. &50 + yw2(iii)*yw2(yq6lorbx)*othird) yq6lorbx = iii+1 if(.not.(yq6lorbx .le. 2))goto 23075 z4grbpiq(i2svdbx3tk) = z4grbpiq(i2svdbx3tk) + wpt* (yw1(iii)*yw1( &yq6lorbx) + (yw2(iii)*yw1(yq6lorbx) + yw2(yq6lorbx)*yw1(iii))*0. &50 + yw2(iii)*yw2(yq6lorbx)*othird) 23075 continue 23073 continue goto 23072 23071 continue if(.not.(dqlr5bse .eq. 1))goto 23077 do 23079 iii = 1,1 yq6lorbx = iii i2svdbx3tk = dqlr5bse-1+iii xecbg0pf(i2svdbx3tk) = xecbg0pf(i2svdbx3tk) + wpt* (yw1(iii)*yw1( &yq6lorbx) + (yw2(iii)*yw1(yq6lorbx) + yw2(yq6lorbx)*yw1(iii))*0. &50 + yw2(iii)*yw2(yq6lorbx)*othird) 23079 continue 23077 continue 23072 continue 23064 continue 23054 continue 23047 continue return end subroutine vmnweiy2(buhyalv4,fulcp8wa,plj0trqx, e5knafcg,nk, &wep0oibc,iflag) implicit logical (a-z) integer e5knafcg,nk,wep0oibc,iflag double precision buhyalv4(e5knafcg,nk), fulcp8wa(e5knafcg,nk), &plj0trqx(wep0oibc,nk) integer ayfnwr1v, yq6lorbx, gp1jxzuh double precision wjm3(3),wjm2(2),wjm1(1),c0,c1,c2,c3 double precision pcsuow9k, qdbgu6oi, upwkh5xz, rul5fnyd, ueydbrg6, & plce2srm, k3yvomnh, bfdjhu7l, ctfvwdu0 c1 = 0.0d0 c2 = 0.0d0 c3 = 0.0d0 wjm3(1) = 0.0d0 wjm3(2) = 0.0d0 wjm3(3) = 0.0d0 wjm2(1) = 0.0d0 wjm2(2) = 0.0d0 wjm1(1) = 0.0d0 do 23081 ayfnwr1v = 1,nk yq6lorbx = nk-ayfnwr1v+1 c0 = 1.0d0 / buhyalv4(4,yq6lorbx) if(.not.(yq6lorbx .le. (nk-3)))goto 23083 c1 = buhyalv4(1,yq6lorbx+3)*c0 c2 = buhyalv4(2,yq6lorbx+2)*c0 c3 = buhyalv4(3,yq6lorbx+1)*c0 goto 23084 23083 continue if(.not.(yq6lorbx .eq. (nk-2)))goto 23085 c1 = 0.0d0 c2 = buhyalv4(2,yq6lorbx+2)*c0 c3 = buhyalv4(3,yq6lorbx+1)*c0 goto 23086 23085 continue if(.not.(yq6lorbx .eq. (nk-1)))goto 23087 c1 = 0.0d0 c2 = 0.0d0 c3 = buhyalv4(3,yq6lorbx+1)*c0 goto 23088 23087 continue if(.not.(yq6lorbx .eq. 
nk))goto 23089 c1 = 0.0d0 c2 = 0.0d0 c3 = 0.0d0 23089 continue 23088 continue 23086 continue 23084 continue pcsuow9k = c1*wjm3(1) qdbgu6oi = c2*wjm3(2) upwkh5xz = c3*wjm3(3) rul5fnyd = c1*wjm3(2) ueydbrg6 = c2*wjm2(1) plce2srm = c3*wjm2(2) k3yvomnh = c1*wjm3(3) bfdjhu7l = c2*wjm2(2) ctfvwdu0 = c3*wjm1(1) fulcp8wa(1,yq6lorbx) = 0.0d0 - (pcsuow9k+qdbgu6oi+upwkh5xz) fulcp8wa(2,yq6lorbx) = 0.0d0 - (rul5fnyd+ueydbrg6+plce2srm) fulcp8wa(3,yq6lorbx) = 0.0d0 - (k3yvomnh+bfdjhu7l+ctfvwdu0) fulcp8wa(4,yq6lorbx) = c0**2 + c1*(pcsuow9k + 2.0d0*(qdbgu6oi + &upwkh5xz)) + c2*(ueydbrg6 + 2.0d0* plce2srm) + c3*ctfvwdu0 wjm3(1) = wjm2(1) wjm3(2) = wjm2(2) wjm3(3) = fulcp8wa(2,yq6lorbx) wjm2(1) = wjm1(1) wjm2(2) = fulcp8wa(3,yq6lorbx) wjm1(1) = fulcp8wa(4,yq6lorbx) 23081 continue if(.not.(iflag .eq. 0))goto 23091 return 23091 continue do 23093 ayfnwr1v = 1,nk yq6lorbx = nk-ayfnwr1v+1 gp1jxzuh = 1 23095 if(.not.(gp1jxzuh.le.4.and.yq6lorbx+gp1jxzuh-1.le.nk))goto 23097 plj0trqx(yq6lorbx,yq6lorbx+gp1jxzuh-1) = fulcp8wa(5-gp1jxzuh, &yq6lorbx) gp1jxzuh = gp1jxzuh+1 goto 23095 23097 continue 23093 continue do 23098 ayfnwr1v = 1,nk yq6lorbx = nk-ayfnwr1v+1 gp1jxzuh = yq6lorbx-4 23100 if(.not.(gp1jxzuh.ge.1))goto 23102 c0 = 1.0 / buhyalv4(4,gp1jxzuh) c1 = buhyalv4(1,gp1jxzuh+3)*c0 c2 = buhyalv4(2,gp1jxzuh+2)*c0 c3 = buhyalv4(3,gp1jxzuh+1)*c0 plj0trqx(gp1jxzuh,yq6lorbx) = 0.0d0- ( c1*plj0trqx(gp1jxzuh+3, &yq6lorbx) + c2*plj0trqx(gp1jxzuh+2,yq6lorbx) + c3*plj0trqx( &gp1jxzuh+1,yq6lorbx) ) gp1jxzuh = gp1jxzuh-1 goto 23100 23102 continue 23098 continue return end subroutine wmhctl9x(penalt,pjb6wfoq,x,y,w, kuzxj1lo,nk,icrit, &ankcghz2,coef,sz,ifys6woa, qcpiaj7f, i9mwnvqt, xwy, zvau2lct, &f6lsuzax,fvh2rwtc,dcfir2no, xecbg0pf,z4grbpiq,d7glzhbj,v2eydbxs, &buhyalv4,fulcp8wa,plj0trqx, e5knafcg,wep0oibc,info) implicit logical (a-z) integer kuzxj1lo,nk,icrit, e5knafcg,wep0oibc,info double precision penalt,pjb6wfoq,x(kuzxj1lo),y(kuzxj1lo),w( &kuzxj1lo) double precision ankcghz2(nk+4), coef(nk),sz(kuzxj1lo),ifys6woa( &kuzxj1lo), qcpiaj7f, i9mwnvqt, xwy(nk) double precision zvau2lct(nk),f6lsuzax(nk),fvh2rwtc(nk),dcfir2no( &nk) double precision xecbg0pf(nk),z4grbpiq(nk),d7glzhbj(nk),v2eydbxs( &nk), buhyalv4(e5knafcg,nk),fulcp8wa(e5knafcg,nk),plj0trqx( &wep0oibc,nk) double precision resss, work(16), b0,b1,b2,b3,qaltf0nz, g9fvdrbw( &4,1), xv,eqdf double precision qtce8hzo double precision rxeqjn0y integer izero0, three3, ilo, pqzfxw4i, yq6lorbx, ayfnwr1v integer icoef, dqlr5bse, ifour4, hbsl0gto, nkp1 ilo = 1 qaltf0nz = 0.1d-10 izero0 = 0 three3 = 3 ifour4 = 4 hbsl0gto = 1 nkp1 = nk + 1 do 23103 ayfnwr1v = 1,nk coef(ayfnwr1v) = xwy(ayfnwr1v) 23103 continue do 23105 ayfnwr1v = 1,nk buhyalv4(4,ayfnwr1v) = zvau2lct(ayfnwr1v)+i9mwnvqt*xecbg0pf( &ayfnwr1v) 23105 continue do 23107 ayfnwr1v = 1,(nk-1) buhyalv4(3,ayfnwr1v+1) = f6lsuzax(ayfnwr1v)+i9mwnvqt*z4grbpiq( &ayfnwr1v) 23107 continue do 23109 ayfnwr1v = 1,(nk-2) buhyalv4(2,ayfnwr1v+2) = fvh2rwtc(ayfnwr1v)+i9mwnvqt*d7glzhbj( &ayfnwr1v) 23109 continue do 23111 ayfnwr1v = 1,(nk-3) buhyalv4(1,ayfnwr1v+3) = dcfir2no(ayfnwr1v)+i9mwnvqt*v2eydbxs( &ayfnwr1v) 23111 continue call dpbfa8(buhyalv4,e5knafcg,nk,three3,info) if(.not.(info .ne. 0))goto 23113 return 23113 continue call dpbsl8(buhyalv4,e5knafcg,nk,three3,coef) icoef = 1 do 23115 ayfnwr1v = 1,kuzxj1lo xv = x(ayfnwr1v) call wbvalue(ankcghz2,coef, nk,ifour4,xv,izero0, sz(ayfnwr1v)) 23115 continue if(.not.(icrit .eq. 
0))goto 23117 return 23117 continue call vmnweiy2(buhyalv4,fulcp8wa,plj0trqx, e5knafcg,nk,wep0oibc, &izero0) do 23119 ayfnwr1v = 1,kuzxj1lo xv = x(ayfnwr1v) call vinterv(ankcghz2(1), nkp1 ,xv,dqlr5bse,pqzfxw4i) if(.not.(pqzfxw4i .eq. -1))goto 23121 dqlr5bse = 4 xv = ankcghz2(4) + qaltf0nz 23121 continue if(.not.(pqzfxw4i .eq. 1))goto 23123 dqlr5bse = nk xv = ankcghz2(nk+1) - qaltf0nz 23123 continue yq6lorbx = dqlr5bse-3 call vbsplvd(ankcghz2,ifour4,xv,dqlr5bse,work,g9fvdrbw,hbsl0gto) b0 = g9fvdrbw(1,1) b1 = g9fvdrbw(2,1) b2 = g9fvdrbw(3,1) b3 = g9fvdrbw(4,1) qtce8hzo = (b0 *(fulcp8wa(4,yq6lorbx)*b0 + 2.0d0*(fulcp8wa(3, &yq6lorbx)*b1 + fulcp8wa(2,yq6lorbx)*b2 + fulcp8wa(1,yq6lorbx)*b3)) & + b1 *(fulcp8wa(4,yq6lorbx+1)*b1 + 2.0d0*(fulcp8wa(3,yq6lorbx+1)* &b2 + fulcp8wa(2,yq6lorbx+1)*b3)) + b2 *(fulcp8wa(4,yq6lorbx+2)*b2 &+ 2.0d0* fulcp8wa(3,yq6lorbx+2)*b3 )+ b3**2* fulcp8wa(4,yq6lorbx+ &3)) * w(ayfnwr1v)**2 ifys6woa(ayfnwr1v) = qtce8hzo 23119 continue if(.not.(icrit .eq. 1))goto 23125 resss = 0.0d0 eqdf = 0.0d0 rxeqjn0y = 0.0d0 do 23127 ayfnwr1v = 1,kuzxj1lo resss = resss + ((y(ayfnwr1v)-sz(ayfnwr1v))*w(ayfnwr1v))**2 eqdf = eqdf + ifys6woa(ayfnwr1v) rxeqjn0y = rxeqjn0y + w(ayfnwr1v)*w(ayfnwr1v) 23127 continue qcpiaj7f = (resss/rxeqjn0y)/((1.0d0-(pjb6wfoq+penalt*eqdf)/ &rxeqjn0y)**2) goto 23126 23125 continue if(.not.(icrit .eq. 2))goto 23129 qcpiaj7f = 0.0d0 rxeqjn0y = 0.0d0 do 23131 ayfnwr1v = 1,kuzxj1lo qcpiaj7f = qcpiaj7f + (((y(ayfnwr1v)-sz(ayfnwr1v))*w(ayfnwr1v))/( &1.0d0-ifys6woa(ayfnwr1v)))**2 rxeqjn0y = rxeqjn0y + w(ayfnwr1v)*w(ayfnwr1v) 23131 continue qcpiaj7f = qcpiaj7f / rxeqjn0y goto 23130 23129 continue qcpiaj7f = 0.0d0 do 23133 ayfnwr1v = 1,kuzxj1lo qcpiaj7f = qcpiaj7f+ifys6woa(ayfnwr1v) 23133 continue qcpiaj7f = 3.0d0 + (pjb6wfoq-qcpiaj7f)**2 23130 continue 23126 continue return end subroutine gt9iulbf(he7mqnvy,ghz9vuba,w,gkdx5jal, rvy1fpli, &kuzxj1lo, bhcji9glto,zvau2lct,f6lsuzax,fvh2rwtc,dcfir2no) implicit logical (a-z) integer rvy1fpli,kuzxj1lo double precision he7mqnvy(rvy1fpli),ghz9vuba(rvy1fpli),w(rvy1fpli) &,gkdx5jal(kuzxj1lo+4), bhcji9glto(kuzxj1lo), zvau2lct(kuzxj1lo), &f6lsuzax(kuzxj1lo),fvh2rwtc(kuzxj1lo),dcfir2no(kuzxj1lo) double precision qaltf0nz,g9fvdrbw(4,1),work(16) double precision w2svdbx3tk, wv2svdbx3tk integer yq6lorbx,ayfnwr1v,ilo,dqlr5bse,pqzfxw4i, nhnpt1zym1 integer ifour4, hbsl0gto hbsl0gto = 1 ifour4 = 4 nhnpt1zym1 = kuzxj1lo + 1 do 23135 ayfnwr1v = 1,kuzxj1lo bhcji9glto(ayfnwr1v) = 0.0d0 zvau2lct(ayfnwr1v) = 0.0d0 f6lsuzax(ayfnwr1v) = 0.0d0 fvh2rwtc(ayfnwr1v) = 0.0d0 dcfir2no(ayfnwr1v) = 0.0d0 23135 continue ilo = 1 qaltf0nz = 0.1d-9 do 23137 ayfnwr1v = 1,rvy1fpli call vinterv(gkdx5jal(1), nhnpt1zym1 ,he7mqnvy(ayfnwr1v),dqlr5bse, &pqzfxw4i) if(.not.(pqzfxw4i .eq. 1))goto 23139 if(.not.(he7mqnvy(ayfnwr1v) .le. 
     &(gkdx5jal(dqlr5bse)+qaltf0nz)))
     &goto 23141
      dqlr5bse = dqlr5bse-1
      goto 23142
23141 continue
      return
23142 continue
23139 continue
      call vbsplvd(gkdx5jal,ifour4,he7mqnvy(ayfnwr1v),dqlr5bse,work,
     &g9fvdrbw,hbsl0gto)
      yq6lorbx = dqlr5bse-4+1
      w2svdbx3tk = w(ayfnwr1v)**2
      wv2svdbx3tk = w2svdbx3tk * g9fvdrbw(1,1)
      bhcji9glto(yq6lorbx) = bhcji9glto(yq6lorbx) + wv2svdbx3tk*
     &ghz9vuba(ayfnwr1v)
      zvau2lct(yq6lorbx) = zvau2lct(yq6lorbx) + wv2svdbx3tk*g9fvdrbw(1,
     &1)
      f6lsuzax(yq6lorbx) = f6lsuzax(yq6lorbx) + wv2svdbx3tk*g9fvdrbw(2,
     &1)
      fvh2rwtc(yq6lorbx) = fvh2rwtc(yq6lorbx) + wv2svdbx3tk*g9fvdrbw(3,
     &1)
      dcfir2no(yq6lorbx) = dcfir2no(yq6lorbx) + wv2svdbx3tk*g9fvdrbw(4,
     &1)
      yq6lorbx = dqlr5bse-4+2
      wv2svdbx3tk = w2svdbx3tk * g9fvdrbw(2,1)
      bhcji9glto(yq6lorbx) = bhcji9glto(yq6lorbx) + wv2svdbx3tk*
     &ghz9vuba(ayfnwr1v)
      zvau2lct(yq6lorbx) = zvau2lct(yq6lorbx) + wv2svdbx3tk*g9fvdrbw(2,
     &1)
      f6lsuzax(yq6lorbx) = f6lsuzax(yq6lorbx) + wv2svdbx3tk*g9fvdrbw(3,
     &1)
      fvh2rwtc(yq6lorbx) = fvh2rwtc(yq6lorbx) + wv2svdbx3tk*g9fvdrbw(4,
     &1)
      yq6lorbx = dqlr5bse-4+3
      wv2svdbx3tk = w2svdbx3tk * g9fvdrbw(3,1)
      bhcji9glto(yq6lorbx) = bhcji9glto(yq6lorbx) + wv2svdbx3tk*
     &ghz9vuba(ayfnwr1v)
      zvau2lct(yq6lorbx) = zvau2lct(yq6lorbx) + wv2svdbx3tk*g9fvdrbw(3,
     &1)
      f6lsuzax(yq6lorbx) = f6lsuzax(yq6lorbx) + wv2svdbx3tk*g9fvdrbw(4,
     &1)
      yq6lorbx = dqlr5bse
      wv2svdbx3tk = w2svdbx3tk * g9fvdrbw(4,1)
      bhcji9glto(yq6lorbx) = bhcji9glto(yq6lorbx) + wv2svdbx3tk*
     &ghz9vuba(ayfnwr1v)
      zvau2lct(yq6lorbx) = zvau2lct(yq6lorbx) + wv2svdbx3tk*g9fvdrbw(4,
     &1)
23137 continue
      return
      end
VGAM/src/muxr.c0000644000176000001440000002672212136651167013003 0ustar ripleyusers
/* This code is Copyright (C) 1998-2012 T. W. Yee, University of Auckland.
   All rights reserved. */

#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <R.h>
#include <Rmath.h>

void vdec(int *row_index, int *col_index, int *dimm);
void m2a(double *m, double *a, int *dimm, int *row_index,
         int *col_index, int *n, int *M, int *upper);
void a2m(double *a, double *m, int *dimm, int *row_index,
         int *col_index, int *n, int *M);
void mux2(double *cc, double *ymat, double *ans, int *p, int *n, int *M);
void mux22(double *cc, double *ymat, double *ans, int *dimm, int *row_index,
           int *col_index, int *n, int *M, double *wk, int *upper);
void mux5(double *cc, double *x, double *ans, int *M, int *n, int *r,
          int *dimm, int *dimr, int *matrix, double *wk, double *wk2,
          int *row_index_M, int *col_index_M,
          int *row_index_r, int *col_index_r);
void mux55(double *evects, double *evals, double *ans, double *wk,
           double *wk2, int *row_index, int *col_index, int *M, int *n);
void mux7(double *cc, double *x, double *ans, int *M, int *q, int *n, int *r);
void mux111(double *cc, double *txmat, int *M, int *R, int *n, double *wk,
            double *wk2, int *row_index, int *col_index, int *dimm,
            int *upper);
void mux15(double *cc, double *x, double *ans, int *M, int *n);
void vchol(double *cc, int *M, int *n, int *ok, double *wk,
           int *row_index, int *col_index, int *dimm);
void vforsub(double *cc, double *b, int *M, int *n, double *wk,
             int *row_index, int *col_index, int *dimm);
void vbacksub(double *cc, double *b, int *M, int *n, double *wk,
              int *row_index, int *col_index, int *dimm);
void tapplymat1(double *mat, int *nr, int *nc, int *type);

/* Convert 1-based (row, col) index pairs coming from R into 0-based. */
void vdec(int *row_index, int *col_index, int *dimm) {
  int i;

  for(i = 0; i < *dimm; i++) {
    row_index[i] -= 1;
    col_index[i] -= 1;
  }
}

/* Expand matrix-band storage m (*dimm entries per matrix) into n full
   M x M matrices a; both triangles are filled unless *upper == 1. */
void m2a(double *m, double *a, int *dimm, int *row_index,
         int *col_index, int *n, int *M, int *upper) {
  int i, k, MM = *M * *M, MMn = *M * *M * *n;

  if(*upper == 1 || *dimm != *M * (*M + 1) / 2)
    for(k =
0; k < MMn; k++) a[k] = 0.0; for(k = 0; k < *n; k++) { for(i = 0; i < *dimm; i++) { a[row_index[i] + col_index[i] * *M] = m[i]; if(*upper == 0) a[col_index[i] + row_index[i] * *M] = m[i]; } a += MM; m += *dimm; } } void a2m(double *a, double *m, int *dimm, int *row_index, int *col_index, int *n, int *M) { int i, k, MM= *M * *M; for(k = 0; k < *n; k++) { for(i = 0; i < *dimm; i++) m[i] = a[row_index[i] + col_index[i] * *M]; a += MM; m += *dimm; } } void mux2(double *cc, double *ymat, double *ans, int *p, int *n, int *M) { double s; int i, j, t, Mp = *M * *p; for(i = 0; i < *n; i++) { for(j = 0; j < *M; j++) { s = 0.0; for(t = 0; t < *p; t++) s += cc[j + t * *M] * ymat[t]; *ans++ = s; } ymat += *p; cc += Mp; } } void mux22(double *cc, double *ymat, double *ans, int *dimm, int *row_index, int *col_index, int *n, int *M, double *wk, int *upper) { double s; int j, t, k, one = 1, lower; vdec(row_index, col_index, dimm); for(k = 0; k < *n; k++) { m2a(cc, wk, dimm, row_index, col_index, &one, M, upper); for(j = 0; j < *M; j++) { s = 0.0; lower = *upper == 0 ? 0 : j; for(t = lower; t < *M; t++) s += wk[j + t * *M] * ymat[t]; *ans++ = s; } ymat += *M; cc += *dimm; } } void mux5(double *cc, double *x, double *ans, int *M, int *n, int *r, int *dimm, int *dimr, int *matrix, double *wk, double *wk2, int *row_index_M, int *col_index_M, int *row_index_r, int *col_index_r) { double s, *pd, *pd2; int i, j, k, t, Mr = *M * *r, rr = *r * *r, MM = *M * *M, u, jM, jr, kM, kr, one=1, upper=0; if(*matrix == 1) { vdec(row_index_M, col_index_M, dimm); vdec(row_index_r, col_index_r, dimr); pd = wk; pd2 = wk2; } else { /* Commented out on 2/5/06. Need to fix this up more cleanly. Rprintf("Error: can only handle matrix.arg == 1\n"); exit(-1); */ /* 26/9/07: The following line was added only to avoid a warning message from the compiler */ pd = pd2 = wk; } for(i = 0; i < *n; i++) { if(*matrix == 1) m2a(cc, pd, dimm, row_index_M, col_index_M, &one, M, &upper); else { pd = cc; pd2 = ans; } for(j = 0; j < *r; j++) { jM = j * *M; jr = j * *r; for(k = j; k < *r; k++) { kM = k * *M; kr = k * *r; s = 0.0; for(t = 0; t < *M; t++) for(u = 0; u < *M; u++) s += x[t + jM] * pd[t + u * *M] * x[u + kM]; pd2[j + kr] = pd2[k + jr] = s; } } if(*matrix == 1) a2m(pd2, ans, dimr, row_index_r, col_index_r, &one, r); cc += (*matrix == 1 ? *dimm : MM); x += Mr; ans += (*matrix == 1 ? 
*dimr : rr); } } void mux55(double *evects, double *evals, double *ans, double *wk, double *wk2, int *row_index, int *col_index, int *M, int *n) { double *pd, *pd2, t; int i, j, k, s, MM = *M * *M, one=1, MM12 = *M * (*M + 1)/2; vdec(row_index, col_index, &MM12); for(i = 0; i < *n; i++) { pd = evects; pd2 = wk2; for(j = 0; j < *M; j++) for(k = 0; k < *M; k++) *pd2++ = *pd++ * evals[j]; for(j = 0; j < *M; j++) for(k = j; k < *M; k++) { t = 0.0; for(s = 0; s < *M; s++) t += wk2[j + s * *M] * evects[k + s * *M]; wk[j + k * *M] = wk[k + j * *M] = t; } a2m(wk, ans, &MM12, row_index, col_index, &one, M); ans += MM12; evals += *M; evects += MM; } } void mux7(double *cc, double *x, double *ans, int *M, int *q, int *n, int *r) { double s; int i, j, k, t, Mq = *M * *q, qr = *q * *r, Mr = *M * *r, kq, kM; for(i = 0; i < *n; i++) { for(j = 0; j < *M; j++) { for(k = 0; k < *r; k++) { kq = k * *q; kM = k * *M; s = 0.0; for(t = 0; t < *q; t++) s += cc[j + t * *M] * x[t + kq]; ans[j + kM] = s; } } cc += Mq; ans += Mr; x += qr; } } void mux111(double *cc, double *txmat, int *M, int *R, int *n, double *wk, double *wk2, int *row_index, int *col_index, int *dimm, int *upper) { double s, *pd2; int i, j, k, t, MM = *M * *M, MR = *M * *R, lower; vdec(row_index, col_index, dimm); for(i = 0; i < MM; i++) wk[i] = 0.0; for(t = 0; t < *n; t++) { for(i = 0; i < *dimm; i++) { if(*upper == 0) wk[row_index[i] + col_index[i] * *M] = wk[col_index[i] + row_index[i] * *M] = *cc++; else wk[row_index[i] + col_index[i] * *M] = *cc++; } pd2 = txmat; for(i = 0; i < *M; i++) for(j = 0; j < *R; j++) wk2[i + j * *M] = *pd2++; for(i = 0; i < *M; i++) for(j = 0; j < *R; j++) { s = 0.0; lower = *upper == 0 ? 0 : i; for(k = lower; k < *M; k++) s += wk[i + k * *M] * wk2[k + j * *M]; txmat[j + i * *R] = s; } txmat += MR; } } void mux15(double *cc, double *x, double *ans, int *M, int *n) { double *pd, *pd2; int i, j, k, MM = *M * *M; for(i = 0; i < *n; i++) { pd = cc; pd2 = ans; for(j = 0; j < *M; j++) for(k = 0; k < *M; k++) *pd2++ = *pd++ * x[j]; pd2 = ans; for(j = 0; j < *M; j++) for(k = 0; k < *M; k++) { *pd2 *= x[k]; pd2++; } ans += MM; x += *M; } } void vchol(double *cc, int *M, int *n, int *ok, double *wk, int *row_index, int *col_index, int *dimm) { double s, *pd; int t, i, j, k, iM, iiM, upper = 0, one = 1; vdec(row_index, col_index, dimm); pd = wk; for(t = 0; t < *n; t++) { *ok = 1; m2a(cc, wk, dimm, row_index, col_index, &one, M, &upper); for(i = 0; i < *M; i++) { s = 0.0; iM = i * *M; iiM = i + iM; for(k = 0; k < i; k++) s += pd[k + iM] * pd[k + iM]; pd[iiM] -= s; if(pd[iiM] < 0.0) { *ok = 0; break; } pd[iiM] = sqrt(pd[iiM]); for(j = i+1; j < *M; j++) { s = 0.0; for(k = 0; k < i; k++) s += pd[k + iM] * pd[k + j * *M]; pd[i + j * *M] = (pd[i + j * *M] - s) / pd[iiM]; } } a2m(wk, cc, dimm, row_index, col_index, &one, M); cc += *dimm; ok++; } } void vforsub(double *cc, double *b, int *M, int *n, double *wk, int *row_index, int *col_index, int *dimm) { double s, *pd; int j, k, t, upper = 1, one = 1; pd = wk; vdec(row_index, col_index, dimm); for(t = 0; t < *n; t++) { m2a(cc, wk, dimm, row_index, col_index, &one, M, &upper); for(j = 0; j < *M; j++) { s = b[j]; for(k = 0; k < j; k++) s -= pd[k + j * *M] * b[k]; b[j] = s / pd[j + j * *M]; } cc += *dimm; b += *M; } } void vbacksub(double *cc, double *b, int *M, int *n, double *wk, int *row_index, int *col_index, int *dimm) { double s, *pd; int j, k, t, upper = 1, one = 1; pd = wk; vdec(row_index, col_index, dimm); for(t = 0; t < *n; t++) { m2a(cc, wk, dimm, row_index, col_index, &one, 
M, &upper); for(j = *M - 1; j >= 0; j--) { s = b[j]; for(k = j + 1; k < *M; k++) s -= pd[j + k * *M] * b[k]; b[j] = s / pd[j + j * *M]; } cc += *dimm; b += *M; } } void tapplymat1(double *mat, int *nr, int *nc, int *type) { double *pd = mat, *pd2 = mat + *nr; int i, j; if(*type == 1) for(j = 2; j <= *nc; j++) for(i = 0; i < *nr; i++, pd2++) *pd2 += *pd++; if(*type == 2) { pd2 = mat + *nr * *nc - 1; pd = pd2 - *nr; for(j = *nc; j >= 2; j--) for(i = 0; i < *nr; i++, pd2--) *pd2 -= *pd--; } if(*type == 3) for(j = 2; j <= *nc; j++) for(i = 0; i < *nr; i++, pd2++) *pd2 *= *pd++; if(*type < 1 || *type > 3) Rprintf("Error: *type not matched\n"); } VGAM/src/lms.f0000644000176000001440000001543612136651167012606 0ustar ripleyusers subroutine dpdlyjn(psi, i9mwnvqt, mymu, sigma, kpzavbj3ative, &lfu2qhid) implicit logical (a-z) integer kpzavbj3ative double precision psi, i9mwnvqt, mymu, sigma, lfu2qhid(3) integer hbsl0gto, izero0 double precision aa, bb, uqnkc6zg, n3iasxug logical cc, pos hbsl0gto = 1 izero0 = 0 n3iasxug = 1.0d-04 mymu = 0.0d0 sigma = 1.0d0 cc = (psi .ge. 0.0d0) if(.not.(cc))goto 23000 bb = i9mwnvqt pos = (dabs(i9mwnvqt) .le. n3iasxug) goto 23001 23000 continue bb = -2.0d0 + i9mwnvqt pos = (dabs(i9mwnvqt-2.0d0) .le. n3iasxug) 23001 continue aa = 1.0d0 + psi * bb if(.not.(kpzavbj3ative .ge. 0))goto 23002 if(.not.(pos))goto 23004 lfu2qhid(1) = psi goto 23005 23004 continue lfu2qhid(1) = aa / bb 23005 continue 23002 continue if(.not.(kpzavbj3ative .ge. 1))goto 23006 if(.not.(pos))goto 23008 lfu2qhid(2) = (lfu2qhid(1)**2) / 2 goto 23009 23008 continue uqnkc6zg = lfu2qhid(1) lfu2qhid(2) = (aa * (dlog(aa)/bb) - uqnkc6zg) / bb 23009 continue 23006 continue if(.not.(kpzavbj3ative .ge. 2))goto 23010 if(.not.(pos))goto 23012 lfu2qhid(3) = (lfu2qhid(1)**3) / 3 goto 23013 23012 continue uqnkc6zg = lfu2qhid(2) * 2.0d0 lfu2qhid(3) = (aa * (dlog(aa)/bb) ** 2 - uqnkc6zg) / bb 23013 continue 23010 continue return end subroutine gleg11(ghz9vuba, i9mwnvqt, mymu, sigma, kpzavbj3mat, &lenkpzavbj3mat, lfu2qhid) implicit logical (a-z) integer lenkpzavbj3mat double precision ghz9vuba, i9mwnvqt, mymu, sigma, kpzavbj3mat(4), &lfu2qhid integer hbsl0gto, itwo2, three3 double precision psi, pim12, o3jyipdf(3), two12 three3 = 3 itwo2 = 2 hbsl0gto = 1 two12 = 1.41421356237309515d0 if(.not.(lenkpzavbj3mat .gt. 0))goto 23014 lfu2qhid = kpzavbj3mat(4) * (kpzavbj3mat(2)**2 + two12 * sigma * &ghz9vuba * kpzavbj3mat(3)) goto 23015 23014 continue pim12 = 0.564189583547756279d0 psi = mymu + two12 * sigma * ghz9vuba call dpdlyjn(psi, i9mwnvqt, mymu, sigma, itwo2, o3jyipdf) lfu2qhid = (dexp(-ghz9vuba*ghz9vuba) * pim12) * (o3jyipdf(2)**2 + &(psi - mymu) * o3jyipdf(3)) / sigma**2 23015 continue return end subroutine gleg12(ghz9vuba, i9mwnvqt, mymu, sigma, kpzavbj3mat, &lenkpzavbj3mat, lfu2qhid) implicit logical (a-z) integer lenkpzavbj3mat double precision ghz9vuba, i9mwnvqt, mymu, sigma, kpzavbj3mat(4), &lfu2qhid integer hbsl0gto, itwo2 double precision psi, pim12, two12 double precision tad5vhsu(3) itwo2 = 2 hbsl0gto = 1 if(.not.(lenkpzavbj3mat .gt. 
0))goto 23016 lfu2qhid = kpzavbj3mat(4) * (-kpzavbj3mat(2)) goto 23017 23016 continue pim12 = 0.564189583547756279d0 two12 = 1.41421356237309515d0 psi = mymu + two12 * sigma * ghz9vuba call dpdlyjn(psi, i9mwnvqt, mymu, sigma, hbsl0gto, tad5vhsu) lfu2qhid = (dexp(-ghz9vuba*ghz9vuba) * pim12) * (-tad5vhsu(2)) / &sigma**2 23017 continue return end subroutine gleg13(ghz9vuba, i9mwnvqt, mymu, sigma, kpzavbj3mat, &lenkpzavbj3mat, lfu2qhid) implicit logical (a-z) integer lenkpzavbj3mat double precision ghz9vuba, i9mwnvqt, mymu, sigma, kpzavbj3mat(4), &lfu2qhid integer hbsl0gto, itwo2 double precision psi, mtpim12, two12 double precision tad5vhsu(3) itwo2 = 2 hbsl0gto = 1 if(.not.(lenkpzavbj3mat .gt. 0))goto 23018 lfu2qhid = kpzavbj3mat(4) * (-kpzavbj3mat(2)) * dsqrt(8.0d0) * &ghz9vuba goto 23019 23018 continue mtpim12 = -1.12837916709551256d0 two12 = 1.41421356237309515d0 psi = mymu + two12 * sigma * ghz9vuba call dpdlyjn(psi, i9mwnvqt, mymu, sigma, hbsl0gto, tad5vhsu) lfu2qhid = dexp(-ghz9vuba*ghz9vuba) * mtpim12 * tad5vhsu(2) * ( &psi - mymu) / sigma**3 23019 continue return end subroutine gint3(minx, maxx, wts, ahl0onwx, i9mwnvqt, mymu, sigma, & kk, lfu2qhid, elemnt) implicit logical (a-z) integer kk, elemnt double precision minx, maxx, wts(kk), ahl0onwx(kk), lfu2qhid, &i9mwnvqt, mymu, sigma integer gp1jxzuh, lenkpzavbj3mat double precision atx, dint, tint, kpzavbj3mat(4), midpt, range12 lenkpzavbj3mat = 0 midpt = 0.50d0 * (minx + maxx) range12 = 0.50d0 * (maxx - minx) dint = 0.0d0 if(.not.(elemnt .eq. 1))goto 23020 do 23022 gp1jxzuh=1,kk atx = midpt + range12 * ahl0onwx(gp1jxzuh) call gleg11(atx, i9mwnvqt, mymu, sigma, kpzavbj3mat, &lenkpzavbj3mat, tint) dint = dint + tint * wts(gp1jxzuh) 23022 continue goto 23021 23020 continue if(.not.(elemnt .eq. 2))goto 23024 do 23026 gp1jxzuh=1,kk atx = midpt + range12 * ahl0onwx(gp1jxzuh) call gleg12(atx, i9mwnvqt, mymu, sigma, kpzavbj3mat, &lenkpzavbj3mat, tint) dint = dint + tint * wts(gp1jxzuh) 23026 continue goto 23025 23024 continue if(.not.(elemnt .eq. 3))goto 23028 do 23030 gp1jxzuh=1,kk atx = midpt + range12 * ahl0onwx(gp1jxzuh) call gleg13(atx, i9mwnvqt, mymu, sigma, kpzavbj3mat, &lenkpzavbj3mat, tint) dint = dint + tint * wts(gp1jxzuh) 23030 continue 23028 continue 23025 continue 23021 continue lfu2qhid = lfu2qhid + range12 * dint return end subroutine yjngintf(minx, maxx, ahl0onwx, wts, kuzxj1lo, kk, &i9mwnvqt, mymu, sigma, lfu2qhid, qaltf0nz) implicit logical (a-z) integer kuzxj1lo, kk double precision minx(kuzxj1lo), maxx(kuzxj1lo), wts(kk), &ahl0onwx(kk), i9mwnvqt(kuzxj1lo), mymu(kuzxj1lo), sigma(kuzxj1lo), & lfu2qhid(3,kuzxj1lo), qaltf0nz integer ayfnwr1v, iii, gp1jxzuh, lencomp, ipzbcvw3, hmayv1xt, &elemnt, hbsl0gto, itwo2 double precision xd4mybgj, j4qgxvlk, wiptsjx8 hbsl0gto = 1 itwo2 = 2 lencomp = 12 do 23032 ayfnwr1v = 1,kuzxj1lo do 23034 elemnt=1,3 j4qgxvlk = -10.0d0 do 23036 iii=2,lencomp ipzbcvw3 = 2 ** iii xd4mybgj = (maxx(ayfnwr1v) - minx(ayfnwr1v)) / ipzbcvw3 lfu2qhid(elemnt,ayfnwr1v) = 0.0d0 do 23038 gp1jxzuh=1,ipzbcvw3 call gint3(minx(ayfnwr1v)+(gp1jxzuh-1)*xd4mybgj, minx(ayfnwr1v)+ &gp1jxzuh*xd4mybgj, wts, ahl0onwx, i9mwnvqt(ayfnwr1v), mymu( &ayfnwr1v), sigma(ayfnwr1v), kk, lfu2qhid(elemnt,ayfnwr1v), elemnt) 23038 continue wiptsjx8 = dabs(lfu2qhid(elemnt,ayfnwr1v) - j4qgxvlk) / (1.0d0 + &dabs(lfu2qhid(elemnt,ayfnwr1v))) if(.not.(wiptsjx8 .lt. 
     &qaltf0nz))goto 23040
      goto 234
      goto 23041
23040 continue
      j4qgxvlk = lfu2qhid(elemnt,ayfnwr1v)
23041 continue
23036 continue
234   hmayv1xt = 0
23034 continue
23032 continue
      return
      end
VGAM/src/lerchphi.c0000644000176000001440000002320712136651167013601 0ustar ripleyusers
/*
   -------------------------------
   Lerch's transcendent Phi(z,s,v)
   -------------------------------

   This program is copyright by Sergej V. Aksenov
   (http://www.geocities.com/saksenov) and Ulrich D. Jentschura
   (jentschura@physik.tu-dresden.de), 2002.

   Version 1.00 (May 1, 2002)

   Calling sequence:

       int lerchphi(double *z, double *s, double *v, double *acc,
                    double *result, int *iter)

   calculates Lerch's Phi transcendent Phi(z,s,v) with *result to a
   specified accuracy *acc after *iter iterations.  Double precision is
   used throughout the calculation.  The program uses direct summation
   of the defining series for |z| <= 0.5 and CNCT for 0.5 < |z| < 1.0.

   The integer return code has to be interpreted as follows.

   -------------
   Return codes:
   -------------

   0 - Normal termination.
   1 - Lerch Phi diverges for 1 <= |z|.
   2 - Lerch Phi is not defined for integer v <= 0.
   3 - pow() is not defined for v < 0 and s not integer.
   4 - Long integer overflow in aj1234().
   5 - Underflow in remainder estimate omega in lerchphi().
   6 - No convergence within the maximum number of iterations.

   Implementation note:

   In subroutine aj1234(), defining variables ind and two2k as type
   double instead of long int might eliminate overflow error which
   occurs for high indices (error code 4).
*/

#include <stdlib.h>
#include <math.h>
#include <float.h>

#define macheps DBL_EPSILON
#define machmin DBL_MIN

/* If preprocessor macro ADD_UNDERSCORE was defined, add underscore to
   the function name --- needed for linking to Fortran programs on a
   Sun. */
#if (ADD_UNDERSCORE)
#define lerchphi lerchphi_
#endif

/* Function that computes van Wijngaarden's A_j for a given j. */
static int aj1234(double *z, double *s, double *v, int j,
                  double *acc, double *res) {
  double sum, bjk, z2ind;
  int k, flag;
  unsigned long int ind, two2k;

  sum = bjk = 0.0;
  k = -1;
  two2k = 1;
  flag = 0;

  /* Sum b^j_k's over k. */
  for (;;) {
    k++;
    /* Index for the term of the original series. */
    if (k > 0) two2k *= 2;
    ind = two2k * (j + 1) - 1;
    /* If long integer overflow occurs, variables become zero.
       Not relevant in v1.0 because two2k and ind are double type. */
    if (k > 0 && (two2k == 0 || ind == 0)) {
      flag = 4;
      break;
    }
    /* Increment the sum. */
    z2ind = pow(*z, ind);
    bjk = two2k * z2ind / pow(*v + ind, *s);
    sum += bjk;
    /* Stop summation if either sum is zero or |term/sum| is below
       requested accuracy. */
    if (fabs(sum) <= machmin || fabs(bjk/sum) < 1.0e-2 * (*acc)) break;
  }
  *res = sum;
  return flag;
}

/* Function that computes approximation to Lerch Phi as a converging
   sequence of CNC transforms S^n_k. */
int lerchphi(double *z, double *s, double *v, double *acc,
             double *result, int *iter) {
  const unsigned short int beta = 1, n = 0, imax = 100;
  unsigned short int j, m;
  int i, sign, flag;
  double v1, sn, eps0, eps, skn, skn0, omega, *num, *den, *StoreAj,
         factor, factor1, x, est, iom, sum1, cacc;

  /* Added 20090205 by T.Yee to suppress 4 warnings */
  sum1 = est = 0.0;
  StoreAj = &v1;
  m = 0;

  /* Local copy of v. */
  v1 = *v;

  /* Special cases. */
  /* 1 <= |z|.  (Return error, Lerch Phi diverges.) */
  if (1.0 <= fabs(*z)) {
    *result = 1.0;
    *iter = 0;
    return 1;
  }
  /* v <= 0 is integer.  (Return error, Lerch Phi is not defined.)
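     Recall that Phi(z,s,v) = sum_{n >= 0} z^n / (v+n)^s, which
     converges only for |z| < 1 and is undefined at v = 0, -1, -2, ...;
     hence the |z| >= 1 test above and this one.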
*/ if (fabs(floor(*v) - *v) <= macheps*fabs(*v) && *v <= 0.0) { *result = 1.0; *iter = 0; return 2; } /* v < 0 is not integer or zero and z != 0 (z == 0 considered below) ... */ if (*v < 0.0 && fabs(*z) > machmin) { /* s is not an integer. (Return error because pow() is not defined.) */ if (fabs(floor(*s) - *s) > macheps*fabs(*s)) { *result = 1.0; *iter = 0; return 3; } /* s is an integer. (Transform v to positive). */ else { m = - (int) floor(*v); v1 += m; sum1 = 0.0; if ((int) *s % 2 == 0) sign = 1; else sign = -1; for (i = 0; i <= m-1; i++) { if ((i > 0) && (*z < 0)) sign = -sign; sum1 += sign*pow(fabs(*z),i)/pow(fabs(*v+i),*s); } } } /* z = 0 and ... */ if (fabs(*z) <= machmin) { /* ... v < 0 is not integer or zero and ... */ if (*v < 0) { /* s is not an integer. (Return error because pow() is not defined.) */ if (fabs(floor(*s) - *s) > macheps*fabs(*s)) { *result = 1.0; *iter = 0; return 3; } /* s is an integer. (Return first term of series.)*/ else { if ((int) *s % 2 == 0) sign = 1; else sign = -1; *result = sign * 1.0 / pow(fabs(*v), *s); } } /* ... v > 0. (Return first term of series.) */ else { *result = 1.0 / pow(*v, *s); *iter = 1; return 0; } } /* General case. */ /* Some initializations. */ /* sn denotes current partial sum of defining series: z > 0.5: sn is partial sum S_n of the van Wijngaarden transformed series. z <= 0.5: sn is the partial sum of the power series defining LerchPhi. skn0 and skn denote successive partial sums S^k_n that are same as sn in case of direct summation and delta-transformed in case of CNCT. eps0 and eps denote successive differences between partial sums S^k_n. */ eps0 = skn = skn0 = sn = 0.0; /* omega is next term of a partial sum (of defining power series for direct summation, of van Wijngaarden transformed series for CNCT) and also becomes a remainder estimate in the delta transformation in CNCT). */ /* For z <= 0.5 van Wijngaarden transformation is not used [hence no calls to aj1234()]. */ /* Direct summation and CNCT (z < -0.5) case. */ if (*z <= 0.5) omega = 1.0 / pow(v1, *s); /* CNCT (z > 0.5) case. */ else { flag = aj1234(z, s, &v1, 0, acc, &omega); if (flag) { *result = 1.0; *iter = 0; return flag; } } /* Allocate memory for working arrays. */ num = (double *) malloc(imax * sizeof(double)); den = (double *) malloc(imax * sizeof(double)); /* StoreAj is used only in CNCT */ if (*z > 0.5) StoreAj = (double *) malloc(imax * sizeof(double)); flag = 0; i = -1; sign = -1; /* Main loop: iterations for S^k_n. */ for (;;) { /* i points to current iterate. */ i++; /* Increment the sum. */ sign = -sign; sn += omega; /* Next term: omega. */ if (*z < 0.0) /* Direct summation and CNCT (z < -0.5) case. */ /* Recurrence for power series. */ omega = (*z) * pow((v1+i)/(v1+i+1), *s) * omega; else /* z > 0 */ { if (*z <= 0.5) /* "Direct summation". */ omega = (*z) * pow((v1+i)/(v1+i+1), *s) * omega; else /* CNCT (z > 0.5) case. */ { *(StoreAj+i) = sign * omega; if (i % 2 == 0) /* Recurrence for odd pointer i. */ {omega = -sign * 0.5 * (*(StoreAj+i/2) - pow(*z, i/2) / pow(v1+i/2, *s));} else { flag = aj1234(z, s, &v1, i+1, acc, &omega); if (flag) break; else omega = -sign * omega; } } } /* Direct summation case: store current sum and remainder estimate. */ if (fabs(*z) <= 0.5) { skn = sn; est = 2.0 * pow(fabs(*z), (i+1)) / pow(v1+i+1, *s); } /* CNCT case. */ else { /* Make sure omega is representable machine number. */ if (fabs(omega) <= machmin) { flag = 5; break; } else iom = 1.0 / omega; /* Last terms in sums of numerator and denominator of i-th partial sum. 
*/ *(num+i) = sn * iom; *(den+i) = iom; /* Recurrence computation of numerator and denominator of a S_k^n. */ if (i > 0) { factor = 1.0; *(num+i-1) = *(num+i) - factor * (*(num+i-1)); *(den+i-1) = *(den+i) - factor * (*(den+i-1)); } factor1 = (double) (beta+n+i-1) * (beta+n+i-2); for(j = 2; j <= i; j++) { factor = factor1 / (beta+n+i+j-2) / (beta+n+i+j-3); *(num+i-j) = *(num+i-j+1) - factor * (*(num+i-j)); *(den+i-j) = *(den+i-j+1) - factor * (*(den+i-j)); } /* Current approximation of the sum S_k^n. */ skn = *num / *den; } /* else CNCT case. */ eps = fabs(skn - skn0); /* Check the three termination criteria. */ /* |est/skn| is less than the requested accuracy (est is a remainder estimate). */ if (i > 0 && eps < eps0) { if (fabs(*z) > 0.5) { x = eps/eps0; est = 2.0/x/(1.0-x)*eps; } cacc = fabs(est/skn); if (cacc < (*acc)) break; } /* Successive iterates skn are the same. */ if (eps <= 0.0) break; /* Maximum number of iterations is exceeded. */ if (i > imax-2) { flag = 6; break; } /* Go on to the next iteration. */ skn0 = skn; eps0 = eps; } /* for */ /* Store the resulting sum. */ if (*v < 0) { sign = 1; if ((*z < 0) && (m % 2 != 0)) sign = -1; *result = sum1 + skn * sign * pow(fabs(*z),m); } else *result = skn; /* Store the number of iterations. */ *iter = i + 1; /* Clean up. */ free(num); free(den); if (*z > 0.5) free(StoreAj); return flag; } #undef macheps #undef machmin /* Code below written by T. Yee 14/6/06; is a wrapper function */ void lerchphi123(int *err, int *L, double *z, double *s, double *v, double *acc, double *result, int *iter) { int ell; for(ell = 0; ell < *L; ell++) { err[ell] = lerchphi(z+ell, s+ell, v+ell, acc, result+ell, iter); } } VGAM/src/gautr.c0000644000176000001440000001502412136651167013123 0ustar ripleyusers#include "math.h" /* Frequently used numerical constants: */ #define OneUponSqrt2Pi .39894228040143267794 #define twopi 6.283195307179587 #define LnSqrt2Pi -0.9189385332046727417803296 #define SQRT2 1.414213562373095049 #define SQRTPI 1.772453850905516027 /* --------------------------------------------------------------------------- UNIVARIATE NORMAL PROBABILITY ---------------------------------------------------------------------------*/ #define UPPERLIMIT 20.0 /* I won't return either of univariate normal density or probability when x < -UPPERLIMIT or x > UPPERLIMIT. 
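In those cases pnorm1() simply returns a hard 0 or 1 and dnorm1() returns 0, so the rational approximations below (the P and Q coefficient sets) are only evaluated for -UPPERLIMIT <= x <= UPPERLIMIT.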
*/ #define P10 242.66795523053175 #define P11 21.979261618294152 #define P12 6.9963834886191355 #define P13 -.035609843701815385 #define Q10 215.05887586986120 #define Q11 91.164905404514901 #define Q12 15.082797630407787 #define Q13 1.0 #define P20 300.4592610201616005 #define P21 451.9189537118729422 #define P22 339.3208167343436870 #define P23 152.9892850469404039 #define P24 43.16222722205673530 #define P25 7.211758250883093659 #define P26 .5641955174789739711 #define P27 -.0000001368648573827167067 #define Q20 300.4592609569832933 #define Q21 790.9509253278980272 #define Q22 931.3540948506096211 #define Q23 638.9802644656311665 #define Q24 277.5854447439876434 #define Q25 77.00015293522947295 #define Q26 12.78272731962942351 #define Q27 1.0 #define P30 -.00299610707703542174 #define P31 -.0494730910623250734 #define P32 -.226956593539686930 #define P33 -.278661308609647788 #define P34 -.0223192459734184686 #define Q30 .0106209230528467918 #define Q31 .191308926107829841 #define Q32 1.05167510706793207 #define Q33 1.98733201817135256 #define Q34 1.0 double pnorm1(double x) { int sn; double R1, R2, y, y2, y3, y4, y5, y6, y7; double erf, erfc, z, z2, z3, z4; double phi; if (x < -UPPERLIMIT) return 0.0; if (x > UPPERLIMIT) return 1.0; y = x / SQRT2; if (y < 0) { y = -y; sn = -1; } else sn = 1; y2 = y * y; y4 = y2 * y2; y6 = y4 * y2; if(y < 0.46875) { R1 = P10 + P11 * y2 + P12 * y4 + P13 * y6; R2 = Q10 + Q11 * y2 + Q12 * y4 + Q13 * y6; erf = y * R1 / R2; if (sn == 1) phi = 0.5 + 0.5*erf; else phi = 0.5 - 0.5*erf; } else if (y < 4.0) { y3 = y2 * y; y5 = y4 * y; y7 = y6 * y; R1 = P20 + P21 * y + P22 * y2 + P23 * y3 + P24 * y4 + P25 * y5 + P26 * y6 + P27 * y7; R2 = Q20 + Q21 * y + Q22 * y2 + Q23 * y3 + Q24 * y4 + Q25 * y5 + Q26 * y6 + Q27 * y7; erfc = exp(-y2) * R1 / R2; if (sn == 1) phi = 1.0 - 0.5*erfc; else phi = 0.5*erfc; } else { z = y4; z2 = z * z; z3 = z2 * z; z4 = z2 * z2; R1 = P30 + P31 * z + P32 * z2 + P33 * z3 + P34 * z4; R2 = Q30 + Q31 * z + Q32 * z2 + Q33 * z3 + Q34 * z4; erfc = (exp(-y2)/y) * (1.0 / SQRTPI + R1 / (R2 * y2)); if (sn == 1) phi = 1.0 - 0.5*erfc; else phi = 0.5*erfc; } return phi; } /* --------------------------------------------------------------------------- UNIVARIATE NORMAL DENSITY ---------------------------------------------------------------------------*/ double dnorm1(double x) { if (x < -UPPERLIMIT) return 0.0; if (x > UPPERLIMIT) return 0.0; return OneUponSqrt2Pi * exp(-0.5 * x * x); } /* --------------------------------------------------------------------------- LN OF UNIVARIATE NORMAL DENSITY ---------------------------------------------------------------------------*/ double lndnorm1(double x) { return LnSqrt2Pi - (0.5*x*x); } /*--------------------------------------------------------------------------- BIVARIATE NORMAL PROBABILITY ---------------------------------------------------------------------------*/ #define con (twopi / 2.0) * 10.0e-10 double bivnor(double ah, double ak, double r) { /* based on alg 462 comm. acm oct 73 gives the probability that a bivariate normal exceeds (ah,ak). gh and gk are .5 times the right tail areas of ah, ak under a n(0,1) Tranlated from FORTRAN to ratfor using struct; from ratfor to C by hand. 
*/ double a2, ap, b, cn, conex, ex, g2, gh, gk, gw, h2, h4, rr, s1, s2, sgn, sn, sp, sqr, t, temp, w2, wh, wk; int is; temp = -ah; gh = pnorm1(temp); gh = gh / 2.0; temp = -ak; gk = pnorm1(temp); gk = gk / 2.0; b = 0; if (r==0) b = 4*gh*gk; else { rr = 1-r*r; if (rr<0) return 0; /* zz; 29/6/02; was originally return; not sure */ if (rr!=0) { sqr = sqrt(rr); if (ah!=0) { b = gh; if (ah*ak<0) b = b-.5; else if (ah*ak==0) goto label10; } else if (ak==0) { b = atan(r/sqr)/twopi+.25; goto label50; } b = b+gk; if (ah==0) goto label20; label10: wh = -ah; wk = (ak/ah-r)/sqr; gw = 2*gh; is = -1; goto label30; label20: do { wh = -ak; wk = (ah/ak-r)/sqr; gw = 2*gk; is = 1; label30: sgn = -1; t = 0; if (wk!=0) { if (fabs(wk)>=1) { /* this brace added 28/6/02 by tyee */ if (fabs(wk)==1) { t = wk*gw*(1-gw)/2; goto label40; } else { sgn = -sgn; wh = wh*wk; g2 = pnorm1(wh); wk = 1/wk; if (wk<0) b = b+.5; b = b-(gw+g2)/2+gw*g2; } } h2 = wh*wh; a2 = wk*wk; h4 = h2*.5; ex = 0; if (h4<150.0) ex = exp(-h4); w2 = h4*ex; ap = 1; s2 = ap-ex; sp = ap; s1 = 0; sn = s1; conex = fabs(con/wk); do { cn = ap*s2/(sn+sp); s1 = s1+cn; if (fabs(cn)<=conex) break; sn = sp; sp = sp+1; s2 = s2-w2; w2 = w2*h4/sp; ap = -ap*a2; } while (1); t = (atan(wk)-wk*s1)/twopi; label40: b = b+sgn*t; } if (is>=0) break; } while(ak!=0); } else if (r>=0) if (ah>=ak) b = 2*gh; else b = 2*gk; else if (ah+ak<0) b = 2*(gh+gk)-1; } label50: if (b<0) b = 0; if (b>1) b = 1; return(b); } /* in the following function size measures the dimension of x singler == 1 if r is a scalar; otherwise r is same size as x & y */ /* This is called by S */ void pnorm2(double *x, double *y, double *r, int *size, int *singler, double *ans) { int i; if(*singler == 1) { for(i = 0; i < *size; i++) ans[i] = bivnor(x[i], y[i], *r); } else { for(i = 0; i < *size; i++) ans[i] = bivnor(x[i], y[i], r[i]); } } /* main() { int i; double x,y,r; x = 0.0; y = 0.0; for(i = -9; i<=9; i++) { r = i / 10.0; Rprintf("%10.2f %10.6f \n",r,bivnor(x,y,r)); } } */ VGAM/src/fgam.f0000644000176000001440000005603412136651167012724 0ustar ripleyusersc 24/8/99 c This is the original fgam.f file c It needs to be compiled and loaded into R in order to smooth. c All of this is automatically in Splus subroutine vbsplvd ( t, k, x, left, a, dbiatx, nderiv ) implicit double precision(a-h,o-z) calls bsplvb calculates value and deriv.s of all b-splines which do not vanish at x c c****** i n p u t ****** c t the knot array, of length left+k (at least) c k the order of the b-splines to be evaluated c x the point at which these values are sought c left an integer indicating the left endpoint of the interval of c interest. the k b-splines whose support contains the interval c (t(left), t(left+1)) c are to be considered. c a s s u m p t i o n - - - it is assumed that c t(left) .lt. t(left+1) c division by zero will result otherwise (in b s p l v b ). c also, the output is as advertised only if c t(left) .le. x .le. t(left+1) . c nderiv an integer indicating that values of b-splines and their c derivatives up to but not including the nderiv-th are asked c for. ( nderiv is replaced internally by the integer in (1,k) c closest to it.) c c****** w o r k a r e a ****** c a an array of order (k,k), to contain b-coeff.s of the derivat- c ives of a certain order of the k b-splines of interest. c c****** o u t p u t ****** c dbiatx an array of order (k,nderiv). its entry (i,m) contains c value of (m-1)st derivative of (left-k+i)-th b-spline of c order k for knot sequence t , i=m,...,k; m=1,...,nderiv. 
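c
c****** e x a m p l e ******
c  an illustrative sketch (the dimensions below are assumed, not
c  taken from this file): for cubic b-splines (k = 4) with
c  nderiv = 2, and left already located (e.g. via vinterv) so
c  that t(left) .le. x .le. t(left+1) ,
c
c        double precision a(4,4), dbiatx(4,2)
c        call vbsplvd ( t, 4, x, left, a, dbiatx, 2 )
c
c  then dbiatx(i,1), i=1,...,4, holds the values, and dbiatx(i,2),
c  i=2,...,4, the first derivatives, of the k relevant b-splines
c  at x .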
c c****** m e t h o d ****** c values at x of all the relevant b-splines of order k,k-1,..., c k+1-nderiv are generated via bsplvb and stored temporarily c in dbiatx . then, the b-coeffs of the required derivatives of the c b-splines of interest are generated by differencing, each from the c preceding one of lower order, and combined with the values of b- c splines of corresponding order in dbiatx to produce the desired c values. c integer k,left,nderiv, i,ideriv,il,j,jlow,jp1mid,kp1,kp1mm, * ldummy,m,mhigh double precision a(k,k),dbiatx(k,nderiv),t(*),x double precision factor,fkp1mm,sum mhigh = max0(min0(nderiv,k),1) c mhigh is usually equal to nderiv. kp1 = k+1 call bsplvb(t,kp1-mhigh,1,x,left,dbiatx) if (mhigh .eq. 1) go to 99 c the first column of dbiatx always contains the b-spline values c for the current order. these are stored in column k+1-current c order before bsplvb is called to put values for the next c higher order on top of it. ideriv = mhigh do 15 m=2,mhigh jp1mid = 1 do 11 j=ideriv,k dbiatx(j,ideriv) = dbiatx(jp1mid,1) 11 jp1mid = jp1mid + 1 ideriv = ideriv - 1 call bsplvb(t,kp1-ideriv,2,x,left,dbiatx) 15 continue c c at this point, b(left-k+i, k+1-j)(x) is in dbiatx(i,j) for c i=j,...,k and j=1,...,mhigh ('=' nderiv). in particular, the c first column of dbiatx is already in final form. to obtain cor- c responding derivatives of b-splines in subsequent columns, gene- c rate their b-repr. by differencing, then evaluate at x. c jlow = 1 do 20 i=1,k do 19 j=jlow,k 19 a(j,i) = 0d0 jlow = i 20 a(i,i) = 1d0 c at this point, a(.,j) contains the b-coeffs for the j-th of the c k b-splines of interest here. c do 40 m=2,mhigh kp1mm = kp1 - m fkp1mm = dble(kp1mm) il = left i = k c c for j=1,...,k, construct b-coeffs of (m-1)st derivative of c b-splines from those for preceding derivative by differencing c and store again in a(.,j) . the fact that a(i,j) = 0 for c i .lt. j is used.sed. do 25 ldummy=1,kp1mm factor = fkp1mm/(t(il+kp1mm) - t(il)) c the assumption that t(left).lt.t(left+1) makes denominator c in factor nonzero. do 24 j=1,i 24 a(i,j) = (a(i,j) - a(i-1,j))*factor il = il - 1 25 i = i - 1 c c for i=1,...,k, combine b-coeffs a(.,i) with b-spline values c stored in dbiatx(.,m) to get value of (m-1)st derivative of c i-th b-spline (of interest here) at x , and store in c dbiatx(i,m). storage of this value over the value of a b-spline c of order m there is safe since the remaining b-spline derivat- c ive of the same order do not use this value due to the fact c that a(j,i) = 0 for j .lt. i . c Originally: c 30 do 40 i=1,k do 40 i=1,k sum = 0. jlow = max0(i,m) do 35 j=jlow,k 35 sum = a(j,i)*dbiatx(j,m) + sum 40 dbiatx(i,m) = sum 99 return end subroutine bsplvb ( t, jhigh, index, x, left, biatx ) implicit double precision(a-h,o-z) calculates the value of all possibly nonzero b-splines at x of order c c jout = dmax( jhigh , (j+1)*(index-1) ) c c with knot sequence t . c c****** i n p u t ****** c t.....knot sequence, of length left + jout , assumed to be nonde- c creasing. a s s u m p t i o n . . . . c t(left) .lt. t(left + 1) . c d i v i s i o n b y z e r o will result if t(left) = t(left+1) c jhigh, c index.....integers which determine the order jout = max(jhigh, c (j+1)*(index-1)) of the b-splines whose values at x are to c be returned. index is used to avoid recalculations when seve- c ral columns of the triangular array of b-spline values are nee- c ded (e.g., in bvalue or in vbsplvd ). 
precisely, c if index = 1 , c the calculation starts from scratch and the entire triangular c array of b-spline values of orders 1,2,...,jhigh is generated c order by order , i.e., column by column . c if index = 2 , c only the b-spline values of order j+1, j+2, ..., jout are ge- c nerated, the assumption being that biatx , j , deltal , deltar c are, on entry, as they were on exit at the previous call. c in particular, if jhigh = 0, then jout = j+1, i.e., just c the next column of b-spline values is generated. c c w a r n i n g . . . the restriction jout .le. jmax (= 20) is im- c posed arbitrarily by the dimension statement for deltal and c deltar below, but is n o w h e r e c h e c k e d for . c c x.....the point at which the b-splines are to be evaluated. c left.....an integer chosen (usually) so that c t(left) .le. x .le. t(left+1) . c c****** o u t p u t ****** c biatx.....array of length jout , with biatx(i) containing the val- c ue at x of the polynomial of order jout which agrees with c the b-spline b(left-jout+i,jout,t) on the interval (t(left), c t(left+1)) . c c****** m e t h o d ****** c the recurrence relation c c x - t(i) t(i+j+1) - x c b(i,j+1)(x) = -----------b(i,j)(x) + ---------------b(i+1,j)(x) c t(i+j)-t(i) t(i+j+1)-t(i+1) c c is used (repeatedly) to generate the (j+1)-vector b(left-j,j+1)(x), c ...,b(left,j+1)(x) from the j-vector b(left-j+1,j)(x),..., c b(left,j)(x), storing the new values in biatx over the old. the c facts that c b(i,1) = 1 if t(i) .le. x .lt. t(i+1) c and that c b(i,j)(x) = 0 unless t(i) .le. x .lt. t(i+j) c are used. the particular organization of the calculations follows al- c gorithm (8) in chapter x of the text. c parameter(jmax = 20) integer index,jhigh,left, i,j,jp1 double precision biatx(jhigh),t(*),x, deltal(jmax) double precision deltar(jmax),saved,term c dimension biatx(jout), t(left+jout) current fortran standard makes it impossible to specify the length of c t and of biatx precisely without the introduction of otherwise c superfluous additional arguments. data j/1/ c save j,deltal,deltar (valid in fortran 77) c go to (10,20), index 10 j = 1 biatx(1) = 1d0 if (j .ge. jhigh) go to 99 c 20 jp1 = j + 1 deltar(j) = t(left+j) - x deltal(j) = x - t(left+1-j) saved = 0d0 do 26 i=1,j term = biatx(i)/(deltar(i) + deltal(jp1-i)) biatx(i) = saved + deltar(i)*term 26 saved = deltal(jp1-i)*term biatx(jp1) = saved j = jp1 if (j .lt. jhigh) go to 20 c 99 return end c 20090105; converted bvalue into a subroutine. subroutine wbvalue ( t, bcoef, n, k, x, jderiv, bvalue) implicit double precision(a-h,o-z) double precision bvalue calls vinterv c calculates value at x of jderiv-th derivative of spline from b-repr. c the spline is taken to be continuous from the right. c c****** i n p u t ****** c t, bcoef, n, k......forms the b-representation of the spline f to c be evaluated. specifically, c t.....knot sequence, of length n+k, assumed nondecreasing. c bcoef.....b-coefficient sequence, of length n . c n.....length of bcoef and dimension of s(k,t), c a s s u m e d positive . c k.....order of the spline . c c w a r n i n g . . . the restriction k .le. kmax (=20) is imposed c arbitrarily by the dimension statement for aj, dm, dm below, c but is n o w h e r e c h e c k e d for. c c x.....the point at which to evaluate . c jderiv.....integer giving the order of the derivative to be evaluated c a s s u m e d to be zero or positive. c c****** o u t p u t ****** c bvalue.....the value of the (jderiv)-th derivative of f at x . 
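c
c****** e x a m p l e ******
c  an illustrative sketch (f0 and f1 are assumed caller variables):
c  given a b-representation (t, bcoef, n, k), the spline and its
c  first derivative at x are obtained via
c
c        call wbvalue ( t, bcoef, n, k, x, 0, f0 )
c        call wbvalue ( t, bcoef, n, k, x, 1, f1 )
c
c  both results are zero when x lies outside the support of the
c  spline (see the use of vinterv in the method below).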
c c****** m e t h o d ****** c the nontrivial knot interval (t(i),t(i+1)) containing x is lo- c cated with the aid of vinterv . the k b-coeffs of f relevant for c this interval are then obtained from bcoef (or taken to be zero if c not explicitly available) and are then differenced jderiv times to c obtain the b-coeffs of (d**jderiv)f relevant for that interval. c precisely, with j = jderiv, we have from x.(12) of the text that c c (d**j)f = sum ( bcoef(.,j)*b(.,k-j,t) ) c c where c / bcoef(.), , j .eq. 0 c / c bcoef(.,j) = / bcoef(.,j-1) - bcoef(.-1,j-1) c / ----------------------------- , j .gt. 0 c / (t(.+k-j) - t(.))/(k-j) c c then, we use repeatedly the fact that c c sum ( a(.)*b(.,m,t)(x) ) = sum ( a(.,x)*b(.,m-1,t)(x) ) c with c (x - t(.))*a(.) + (t(.+m-1) - x)*a(.-1) c a(.,x) = --------------------------------------- c (x - t(.)) + (t(.+m-1) - x) c c to write (d**j)f(x) eventually as a linear combination of b-splines c of order 1 , and the coefficient for b(i,1,t)(x) must then c be the desired number (d**j)f(x). (see x.(17)-(19) of text). c parameter(kmax = 20) integer jderiv,k,n, i,ilo,imk,j,jc,jcmin,jcmax,jj,km1,mflag,nmi double precision bcoef(n),t(*),x double precision aj(kmax),dm(kmax),dp(kmax),fkmj c dimension t(n+k) current fortran standard makes it impossible to specify the length of c t precisely without the introduction of otherwise superfluous c additional arguments. bvalue = 0.0d0 if (jderiv .ge. k) go to 99 c c *** find i s.t. 1 .le. i .lt. n+k and t(i) .lt. t(i+1) and c t(i) .le. x .lt. t(i+1) . if no such i can be found, x lies c outside the support of the spline f and bvalue = 0. c (the asymmetry in this choice of i makes f rightcontinuous) if( (x.ne.t(n+1)) .or. (t(n+1).ne.t(n+k)) ) go to 700 i = n go to 701 700 call vinterv ( t, n+k, x, i, mflag ) if (mflag .ne. 0) go to 99 701 continue c *** if k = 1 (and jderiv = 0), bvalue = bcoef(i). km1 = k - 1 if (km1 .gt. 0) go to 1 bvalue = bcoef(i) go to 99 c c *** store the k b-spline coefficients relevant for the knot interval c (t(i),t(i+1)) in aj(1),...,aj(k) and compute dm(j) = x - t(i+1-j), c dp(j) = t(i+j) - x, j=1,...,k-1 . set any of the aj not obtainable c from input to zero. set any t.s not obtainable equal to t(1) or c to t(n+k) appropriately. 1 jcmin = 1 imk = i - k if (imk .ge. 0) go to 8 jcmin = 1 - imk do 5 j=1,i 5 dm(j) = x - t(i+1-j) do 6 j=i,km1 aj(k-j) = 0. 6 dm(j) = dm(i) go to 10 8 do 9 j=1,km1 9 dm(j) = x - t(i+1-j) c 10 jcmax = k nmi = n - i if (nmi .ge. 0) go to 18 jcmax = k + nmi do 15 j=1,jcmax 15 dp(j) = t(i+j) - x do 16 j=jcmax,km1 aj(j+1) = 0. 16 dp(j) = dp(jcmax) go to 20 18 do 19 j=1,km1 19 dp(j) = t(i+j) - x c 20 do 21 jc=jcmin,jcmax 21 aj(jc) = bcoef(imk + jc) c c *** difference the coefficients jderiv times. if (jderiv .eq. 0) go to 30 do 23 j=1,jderiv kmj = k-j fkmj = dble(kmj) ilo = kmj do 23 jj=1,kmj aj(jj) = ((aj(jj+1) - aj(jj))/(dm(ilo) + dp(jj)))*fkmj 23 ilo = ilo - 1 c c *** compute value at x in (t(i),t(i+1)) of jderiv-th derivative, c given its relevant b-spline coeffs in aj(1),...,aj(k-jderiv). 30 if (jderiv .eq. km1) go to 39 jdrvp1 = jderiv + 1 do 33 j=jdrvp1,km1 kmj = k-j ilo = kmj do 33 jj=1,kmj aj(jj) = (aj(jj+1)*dm(ilo) + aj(jj)*dp(jj))/(dm(ilo)+dp(jj)) 33 ilo = ilo - 1 39 bvalue = aj(1) c 99 return end subroutine vinterv ( xt, lxt, x, left, mflag ) implicit double precision(a-h,o-z) computes left = max( i ; 1 .le. i .le. lxt .and. xt(i) .le. x ) . 
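c
c  a worked illustration (values assumed for exposition): with
c  xt = (1,2,3,5) and lxt = 4,
c        x = 2.5  gives  left = 2, mflag =  0   (xt(2) .le. x .lt. xt(3))
c        x = 0.5  gives  left = 1, mflag = -1   (x .lt. xt(1))
c        x = 6.0  gives  left = 4, mflag =  1   (xt(lxt) .le. x)
c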
c c****** i n p u t ****** c xt.....a double precision sequence, of length lxt , assumed to be nondecreasing c lxt.....number of terms in the sequence xt . c x.....the point whose location with respect to the sequence xt is c to be determined. c c****** o u t p u t ****** c left, mflag.....both integers, whose value is c c 1 -1 if x .lt. xt(1) c i 0 if xt(i) .le. x .lt. xt(i+1) c lxt 1 if xt(lxt) .le. x c c in particular, mflag = 0 is the 'usual' case. mflag .ne. 0 c indicates that x lies outside the halfopen interval c xt(1) .le. y .lt. xt(lxt) . the asymmetric treatment of the c interval is due to the decision to make all pp functions cont- c inuous from the right. c c****** m e t h o d ****** c the program is designed to be efficient in the common situation that c it is called repeatedly, with x taken from an increasing or decrea- c sing sequence. this will happen, e.g., when a pp function is to be c graphed. the first guess for left is therefore taken to be the val- c ue returned at the previous call and stored in the l o c a l varia- c ble ilo . a first check ascertains that ilo .lt. lxt (this is nec- c essary since the present call may have nothing to do with the previ- c ous call). then, if xt(ilo) .le. x .lt. xt(ilo+1), we set left = c ilo and are done after just three comparisons. c otherwise, we repeatedly double the difference istep = ihi - ilo c while also moving ilo and ihi in the direction of x , until c xt(ilo) .le. x .lt. xt(ihi) , c after which we use bisection to get, in addition, ilo+1 = ihi . c left = ilo is then returned. c integer left,lxt,mflag, ihi,ilo,istep,middle double precision x,xt(lxt) data ilo /1/ c save ilo (a valid fortran statement in the new 1977 standard) ihi = ilo + 1 if (ihi .lt. lxt) go to 20 if (x .ge. xt(lxt)) go to 110 if (lxt .le. 1) go to 90 ilo = lxt - 1 ihi = lxt c 20 if (x .ge. xt(ihi)) go to 40 if (x .ge. xt(ilo)) go to 100 c c **** now x .lt. xt(ilo) . decrease ilo to capture x . c c c Originally: c 30 istep = 1 istep = 1 c c 31 ihi = ilo ilo = ihi - istep if (ilo .le. 1) go to 35 if (x .ge. xt(ilo)) go to 50 istep = istep*2 go to 31 35 ilo = 1 if (x .lt. xt(1)) go to 90 go to 50 c **** now x .ge. xt(ihi) . increase ihi to capture x . 40 istep = 1 41 ilo = ihi ihi = ilo + istep if (ihi .ge. lxt) go to 45 if (x .lt. xt(ihi)) go to 50 istep = istep*2 go to 41 45 if (x .ge. xt(lxt)) go to 110 ihi = lxt c c **** now xt(ilo) .le. x .lt. xt(ihi) . narrow the interval. 50 middle = (ilo + ihi)/2 if (middle .eq. ilo) go to 100 c note. it is assumed that middle = ilo in case ihi = ilo+1 . if (x .lt. xt(middle)) go to 53 ilo = middle go to 50 53 ihi = middle go to 50 c**** set output and return. 90 mflag = -1 left = 1 return 100 mflag = 0 left = ilo return 110 mflag = 1 left = lxt return end c ===================================================================== c These two subroutines, dpbfa8 and dpbsl8, are called by sslvrg. c Note: a rational cholesky version of these functions are available, c called vdpbfa7 and vdpbsl7 c T.Yee 7/10/99 c 1/7/02 c T.Yee has renamed dbpbfa to dbpbfa8 and dpbsl to dpbsl8, to ensure uniqueness subroutine dpbfa8(abd,lda,n,m,info) integer lda,n,m,info double precision abd(lda,*) c c c 20130419; Originally: c double precision abd(lda,1) c c c c c dpbfa8 factors a double precision symmetric positive definite c matrix stored in band form. c c dpbfa8 is usually called by dpbco, but it can be called c directly with a saving in time if rcond is not needed. 
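c
c     for instance (an illustrative sketch; abd, lda, n, m, b as
c     documented below), a band system  a*x = b  is factored and
c     then solved with the companion routine dpbsl8 :
c
c        call dpbfa8(abd,lda,n,m,info)
c        if (info .eq. 0) call dpbsl8(abd,lda,n,m,b)
c
c     on return from dpbsl8, b holds the solution x .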
c c on entry c c abd double precision(lda, n) c the matrix to be factored. the columns of the upper c triangle are stored in the columns of abd and the c diagonals of the upper triangle are stored in the c rows of abd . see the comments below for details. c c lda integer c the leading dimension of the array abd . c lda must be .ge. m + 1 . c c n integer c the order of the matrix a . c c m integer c the number of diagonals above the main diagonal. c 0 .le. m .lt. n . c c on return c c abd an upper triangular matrix r , stored in band c form, so that a = trans(r)*r . c c info integer c = 0 for normal return. c = k if the leading minor of order k is not c positive definite. c c band storage c c if a is a symmetric positive definite band matrix, c the following program segment will set up the input. c c m = (band width above diagonal) c do 20 j = 1, n c i1 = max0(1, j-m) c do 10 i = i1, j c k = i-j+m+1 c abd(k,j) = a(i,j) c 10 continue c 20 continue c c linpack. this version dated 08/14/78 . c cleve moler, university of new mexico, argonne national lab. c c subroutines and functions c c blas ddot c fortran max0,dsqrt c c internal variables c double precision ddot8,t double precision s integer ik,j,jk,k,mu c begin block with ...exits to 40 c c do 30 j = 1, n info = j s = 0.0d0 ik = m + 1 jk = max0(j-m,1) mu = max0(m+2-j,1) if (m .lt. mu) go to 20 do 10 k = mu, m t = abd(k,j) - ddot8(k-mu,abd(ik,jk),1,abd(mu,j),1) t = t/abd(m+1,jk) abd(k,j) = t s = s + t*t ik = ik - 1 jk = jk + 1 10 continue 20 continue s = abd(m+1,j) - s c ......exit if (s .le. 0.0d0) go to 40 abd(m+1,j) = dsqrt(s) 30 continue info = 0 40 continue return end subroutine dpbsl8(abd,lda,n,m,b) integer lda,n,m double precision abd(lda,*),b(*) c c c 20130419; originally: c double precision abd(lda,1),b(1) c c c dpbsl8 solves the double precision symmetric positive definite c band system a*x = b c using the factors computed by dpbco or dpbfa8. c c on entry c c abd double precision(lda, n) c the output from dpbco or dpbfa8. c c lda integer c the leading dimension of the array abd . c c n integer c the order of the matrix a . c c m integer c the number of diagonals above the main diagonal. c c b double precision(n) c the right hand side vector. c c on return c c b the solution vector x . c c error condition c c a division by zero will occur if the input factor contains c a zero on the diagonal. technically this indicates c singularity but it is usually caused by improper subroutine c arguments. it will not occur if the subroutines are called c correctly and info .eq. 0 . c c to compute inverse(a) * c where c is a matrix c with p columns c call dpbco(abd,lda,n,rcond,z,info) c if (rcond is too small .or. info .ne. 0) go to ... c do 10 j = 1, p c call dpbsl8(abd,lda,n,c(1,j)) c 10 continue c c linpack. this version dated 08/14/78 . c cleve moler, university of new mexico, argonne national lab. 
c c subroutines and functions c c blas daxpy,ddot c fortran min0 c c internal variables c double precision ddot8,t integer k,kb,la,lb,lm c c solve trans(r)*y = b c do 10 k = 1, n lm = min0(k-1,m) la = m + 1 - lm lb = k - lm t = ddot8(lm,abd(la,k),1,b(lb),1) b(k) = (b(k) - t)/abd(m+1,k) 10 continue c c solve r*x = y c do 20 kb = 1, n k = n + 1 - kb lm = min0(k-1,m) la = m + 1 - lm lb = k - lm b(k) = b(k)/abd(m+1,k) t = -b(k) call daxpy8(lm,t,abd(la,k),1,b(lb),1) 20 continue return end VGAM/src/caqo3.c0000644000176000001440000032161512136651167013015 0ustar ripleyusers #include #include #include #include #include void yiumjq3npnm1or(double *objzgdk0, double *lfu2qhid); void yiumjq3npnm1ow(double objzgdk0[], double lfu2qhid[], int *f8yswcat); void yiumjq3nn2howibc2a(double *objzgdk0, double *i9mwnvqt, double *lfu2qhid); void yiumjq3nbewf1pzv9(double *objzgdk0, double *lfu2qhid); void yiumjq3ng2vwexyk9(double *objzgdk0, double *lfu2qhid); void yiumjq3npkc4ejib(double w8znmyce[], double zshtfg8c[], double m0ibglfx[], int *ftnjamu2, int *wy1vqfzu, int *br5ovgcj, int *xlpjcg3s, int *vtsou9pz, int *hj3ftvzu, int *qfx3vhct, int *unhycz0e, double vm4xjosb[]); void yiumjq3nnipyajc1(double m0ibglfx[], double t8hwvalr[], int *ftnjamu2, int *wy1vqfzu, int *afpc0kns, int *qfx3vhct, int *hj3ftvzu); void yiumjq3nshjlwft5(int *qfx3vhct, double tlgduey8[], double ufgqj9ck[], double t8hwvalr[], int *ftnjamu2, int *wy1vqfzu, int *afpc0kns, int *kvowz9ht, double m0ibglfx[], double *jxacz5qu, int *hj3ftvzu, double *dn3iasxug, double *vsoihn1r, int *dqk5muto); void yiumjq3nflncwkfq76(double lncwkfq7[], double w8znmyce[], int *ftnjamu2, int *br5ovgcj, int *xwdf5ltg, int *qfx3vhct); void yiumjq3nflncwkfq71(double lncwkfq7[], double w8znmyce[], int *ftnjamu2, int *xwdf5ltg, int *qfx3vhct, double vm4xjosb[], int *br5ovgcj, int *xlpjcg3s, double kifxa0he[], int *yru9olks, int *unhycz0e); void yiumjq3nflncwkfq72(double lncwkfq7[], double w8znmyce[], int *ftnjamu2, int *wy1vqfzu, int *br5ovgcj, int *xwdf5ltg, int *qfx3vhct, int *afpc0kns, int *fmzq7aob, int *eu3oxvyb, int *unhycz0e, double vm4xjosb[]); void yiumjq3nietam6(double tlgduey8[], double m0ibglfx[], double y7sdgtqi[], int *ftnjamu2, int *wy1vqfzu, int *afpc0kns, int *qfx3vhct, int *hj3ftvzu, double ufgqj9ck[], int *wr0lbopv); void yiumjq3ndlgpwe0c(double tlgduey8[], double ufgqj9ck[], double m0ibglfx[], double t8hwvalr[], double ghz9vuba[], double rbne6ouj[], double wpuarq2m[], double *rsynp1go, double *dn3iasxug, double *uaf2xgqy, int *ftnjamu2, int *wy1vqfzu, int *afpc0kns, int *br5ovgcj, int *npjlv3mr, int *hj3ftvzu, int *qfx3vhct, int *zjkrtol8, int *unhycz0e, double vm4xjosb[]); void cqo_2(double lncwkfq7[], double tlgduey8[], double kifxa0he[], double ufgqj9ck[], double m0ibglfx[], double vm4xjosb[], double t8hwvalr[], double ghz9vuba[], double rbne6ouj[], double wpuarq2m[], double w8znmyce[], double vc6hatuj[], double fasrkub3[], int ges1xpkr[], int *ftnjamu2, int *wy1vqfzu, int *afpc0kns, int *br5ovgcj, int *npjlv3mr, int *zjkrtol8, int xui7hqwl[], double *tlq9wpes, double zshtfg8c[], double y7sdgtqi[]); void cqo_1(double lncwkfq7[], double tlgduey8[], double kifxa0he[], double ufgqj9ck[], double m0ibglfx[], double vm4xjosb[], double t8hwvalr[], double ghz9vuba[], double rbne6ouj[], double wpuarq2m[], double w8znmyce[], double vc6hatuj[], double fasrkub3[], int ges1xpkr[], int *ftnjamu2, int *wy1vqfzu, int *afpc0kns, int *br5ovgcj, int *npjlv3mr, int *zjkrtol8, int xui7hqwl[], double *tlq9wpes, double zshtfg8c[], double y7sdgtqi[]); void vcao6(double 
lncwkfq7[], double tlgduey8[], double ufgqj9ck[], double m0ibglfx[], double t8hwvalr[], double ghz9vuba[], double rbne6ouj[], double wpuarq2m[], double vc6hatuj[], double fasrkub3[], int ges1xpkr[], int *ftnjamu2, int *wy1vqfzu, int *afpc0kns, int *br5ovgcj, int *npjlv3mr, int *zjkrtol8, int xui7hqwl[], double *tlq9wpes, double zshtfg8c[], double y7sdgtqi[], int psdvgce3[], int *qfozcl5b, double hdnw2fts[], double lamvec[], double wbkq9zyi[], int ezlgm2up[], int lqsahu0r[], int which[], double kispwgx3[], double mbvnaor6[], double hjm2ktyr[], int jnxpuym2[], int hnpt1zym[], int iz2nbfjc[], double ifys6woa[], double rpyis2kc[], double gkdx5jals[], int nbzjkpi3[], int acpios9q[], int jwbkl9fp[]); void dcqo1(double lncwkfq7[], double tlgduey8[], double kifxa0he[], double ufgqj9ck[], double m0ibglfx[], double vm4xjosb[], double t8hwvalr[], double ghz9vuba[], double rbne6ouj[], double wpuarq2m[], double w8znmyce[], double vc6hatuj[], double fasrkub3[], int ges1xpkr[], int *ftnjamu2, int *wy1vqfzu, int *afpc0kns, int *br5ovgcj, int *npjlv3mr, int *zjkrtol8, int xui7hqwl[], double *tlq9wpes, double zshtfg8c[], double y7sdgtqi[], double atujnxb8[], double k7hulceq[], int *eoviz2fb, double kpzavbj3mat[], double *ydcnh9xl); void vdcao6(double lncwkfq7[], double tlgduey8[], double ufgqj9ck[], double m0ibglfx[], double t8hwvalr[], double ghz9vuba[], double rbne6ouj[], double wpuarq2m[], double vc6hatuj[], double fasrkub3[], int ges1xpkr[], int *ftnjamu2, int *wy1vqfzu, int *afpc0kns, int *br5ovgcj, int *npjlv3mr, int *zjkrtol8, int xui7hqwl[], double *tlq9wpes, double zshtfg8c[], double y7sdgtqi[], double atujnxb8[], double k7hulceq[], int *eoviz2fb, double kpzavbj3mat[], double ajul8wkv[], int psdvgce3[], int *qfozcl5b, double hdnw2fts[], double lamvec[], double wbkq9zyi[], int ezlgm2up[], int lqsahu0r[], int which[], double kispwgx3[], double mbvnaor6[], double hjm2ktyr[], int jnxpuym2[], int hnpt1zym[], int iz2nbfjc[], double ifys6woa[], double rpyis2kc[], double gkdx5jals[], int nbzjkpi3[], int acpios9q[], int jwbkl9fp[]); double fvlmz9iyC_tldz5ion(double xx); void fvlmz9iyC_qpsedg8x(int tgiyxdw1[], int dufozmt7[], int *wy1vqfzu); void fvlmz9iyC_enbin9(double lfu2qhid[], double hdqsx7bk[], double nm0eljqk[], double *n2kersmx, int *f8yswcat, int *dvhw1ulq, int *zy1mchbf, double *ux3nadiw, double *rsynp1go, int *sguwj9ty); void Yee_vbfa(int psdvgce3[], double *fjcasv7g, double he7mqnvy[], double tlgduey8[], double rbne6ouj[], double hdnw2fts[], double lamvec[], double wbkq9zyi[], int ezlgm2up[], int lqsahu0r[], int which[], double kispwgx3[], double m0ibglfx[], double zshtfg8c[], double ui8ysltq[], double vc6hatuj[], double fasrkub3[], int ges1xpkr[], double wpuarq2m[], double hjm2ktyr[], int ulm3dvzg[], int hnpt1zym[], int iz2nbfjc[], double ifys6woa[], double rpyis2kc[], double gkdx5jals[], int nbzjkpi3[], int acpios9q[], int jwbkl9fp[]); void F77_NAME(vqrdca)(double*, int*, int*, int*, double*, int*, double*, int*, double*); void F77_NAME(vdqrsl)(double*, int*, int*, int*, double*, double*, double*, double*, double*, double*, double*, int*, int*); void tyee_C_vdgam1(double*, double*, int*); void tyee_C_vtgam1(double*, double*, int*); void yiumjq3nn2howibc2a(double *objzgdk0, double *i9mwnvqt, double *lfu2qhid) { double pq0hfucn, xd4mybgj; if (1.0e0 - *objzgdk0 >= 1.0e0) { *lfu2qhid = -8.12589e0 / (3.0 * sqrt(*i9mwnvqt)); } else if (1.0e0 - *objzgdk0 <= 0.0e0) { *lfu2qhid = 8.12589e0 / (3.0 * sqrt(*i9mwnvqt)); } else { pq0hfucn = 1.0e0 - *objzgdk0; yiumjq3npnm1or(&pq0hfucn, &xd4mybgj); xd4mybgj /= 
3.0e0 * sqrt(*i9mwnvqt); *lfu2qhid = -3.0e0 * log(1.0e0 + xd4mybgj); } } void yiumjq3nbewf1pzv9(double *objzgdk0, double *lfu2qhid) { if (*objzgdk0 <= 2.0e-200) { *lfu2qhid = -460.0e0; } else if (*objzgdk0 <= 1.0e-14) { *lfu2qhid = log( *objzgdk0 ); } else if (1.0e0 - *objzgdk0 <= 0.0e0) { *lfu2qhid = 3.542106e0; } else { *lfu2qhid = log(-log(1.0e0 - *objzgdk0)); } } void yiumjq3ng2vwexyk9(double *objzgdk0, double *lfu2qhid) { if (*objzgdk0 <= 2.0e-200) { *lfu2qhid = -460.0e0; } else if (*objzgdk0 <= 1.0e-14) { *lfu2qhid = log( *objzgdk0 ); } else if (1.0e0 - *objzgdk0 <= 0.0e0) { *lfu2qhid = 34.53958e0; } else { *lfu2qhid = log(*objzgdk0 / (1.0e0 - *objzgdk0)); } } void yiumjq3npkc4ejib(double w8znmyce[], double zshtfg8c[], double m0ibglfx[], int *ftnjamu2, int *wy1vqfzu, int *br5ovgcj, int *xlpjcg3s, int *vtsou9pz, int *hj3ftvzu, int *qfx3vhct, int *unhycz0e, double vm4xjosb[]) { int ayfnwr1v, yq6lorbx, gp1jxzuh, sedf7mxb; double *fpdlcqk9zshtfg8c, *fpdlcqk9w8znmyce, *fpdlcqk9f9piukdx, *fpdlcqk9m0ibglfx, *fpdlcqk9vm4xjosb; if (*vtsou9pz == 1) { if (*qfx3vhct == 3 || *qfx3vhct == 5) { sedf7mxb = 2 * *hj3ftvzu - 1; if (*br5ovgcj != 2 * *ftnjamu2) //Rprinf Rprintf("Error: *br5ovgcj != 2 * *ftnjamu2 in C_pkc4ejib\n"); fpdlcqk9m0ibglfx = m0ibglfx + sedf7mxb-1; for (ayfnwr1v = 0; ayfnwr1v < *ftnjamu2; ayfnwr1v++) { *fpdlcqk9m0ibglfx = 0.0; fpdlcqk9m0ibglfx += *wy1vqfzu; } fpdlcqk9zshtfg8c = zshtfg8c; for (gp1jxzuh = 1; gp1jxzuh <= *xlpjcg3s; gp1jxzuh++) { fpdlcqk9w8znmyce = w8znmyce + 0 + (gp1jxzuh-1) * *br5ovgcj; fpdlcqk9m0ibglfx = m0ibglfx + sedf7mxb-1; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9m0ibglfx += *fpdlcqk9w8znmyce++ * *fpdlcqk9zshtfg8c; fpdlcqk9w8znmyce++; fpdlcqk9m0ibglfx += *wy1vqfzu; } fpdlcqk9zshtfg8c++; } sedf7mxb = 2 * *hj3ftvzu; fpdlcqk9m0ibglfx = m0ibglfx + sedf7mxb-1; for (ayfnwr1v = 0; ayfnwr1v < *ftnjamu2; ayfnwr1v++) { *fpdlcqk9m0ibglfx = 0.0; fpdlcqk9m0ibglfx += *wy1vqfzu; } fpdlcqk9zshtfg8c = zshtfg8c; for (gp1jxzuh = 1; gp1jxzuh <= *xlpjcg3s; gp1jxzuh++) { fpdlcqk9w8znmyce = w8znmyce + 1 + (gp1jxzuh-1) * *br5ovgcj; fpdlcqk9m0ibglfx = m0ibglfx + sedf7mxb-1; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9m0ibglfx += *fpdlcqk9w8znmyce++ * *fpdlcqk9zshtfg8c; fpdlcqk9w8znmyce++; fpdlcqk9m0ibglfx += *wy1vqfzu; } fpdlcqk9zshtfg8c++; } } else { fpdlcqk9m0ibglfx = m0ibglfx + *hj3ftvzu-1; for (ayfnwr1v = 0; ayfnwr1v < *br5ovgcj; ayfnwr1v++) { *fpdlcqk9m0ibglfx = 0.0; fpdlcqk9m0ibglfx += *wy1vqfzu; } fpdlcqk9zshtfg8c = zshtfg8c; fpdlcqk9w8znmyce = w8znmyce; // + (gp1jxzuh-1) * *br5ovgcj; for (gp1jxzuh = 1; gp1jxzuh <= *xlpjcg3s; gp1jxzuh++) { fpdlcqk9m0ibglfx = m0ibglfx + *hj3ftvzu-1; for (ayfnwr1v = 1; ayfnwr1v <= *br5ovgcj; ayfnwr1v++) { *fpdlcqk9m0ibglfx += *fpdlcqk9w8znmyce++ * *fpdlcqk9zshtfg8c; fpdlcqk9m0ibglfx += *wy1vqfzu; } fpdlcqk9zshtfg8c++; } } } else { if (*br5ovgcj != *wy1vqfzu * *ftnjamu2) //Rprinf Rprintf("Error: *br5ovgcj != *wy1vqfzu * *ftnjamu2 in C_pkc4ejib\n"); fpdlcqk9m0ibglfx = m0ibglfx; fpdlcqk9f9piukdx = w8znmyce; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { *fpdlcqk9m0ibglfx = 0.0e0; fpdlcqk9zshtfg8c = zshtfg8c; fpdlcqk9w8znmyce = fpdlcqk9f9piukdx++; for (gp1jxzuh = 1; gp1jxzuh <= *xlpjcg3s; gp1jxzuh++) { *fpdlcqk9m0ibglfx += *fpdlcqk9w8znmyce * *fpdlcqk9zshtfg8c++; fpdlcqk9w8znmyce += *br5ovgcj; } fpdlcqk9m0ibglfx++; } } } fpdlcqk9vm4xjosb = vm4xjosb; if (*unhycz0e == 1) { if (*qfx3vhct == 3 || *qfx3vhct == 5) { fpdlcqk9m0ibglfx = m0ibglfx + 2 * 
*hj3ftvzu - 2; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9m0ibglfx += *fpdlcqk9vm4xjosb++; fpdlcqk9m0ibglfx += *wy1vqfzu; } } else { fpdlcqk9m0ibglfx = m0ibglfx + *hj3ftvzu - 1; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9m0ibglfx += *fpdlcqk9vm4xjosb++; fpdlcqk9m0ibglfx += *wy1vqfzu; } } } } void yiumjq3nnipyajc1(double m0ibglfx[], double t8hwvalr[], int *ftnjamu2, int *wy1vqfzu, int *afpc0kns, int *qfx3vhct, int *hj3ftvzu) { int ayfnwr1v, yq6lorbx; double tmpwk, *fpdlcqk9t8hwvalr, *fpdlcqk9m0ibglfx; if (*hj3ftvzu == 0) { fpdlcqk9t8hwvalr = t8hwvalr; fpdlcqk9m0ibglfx = m0ibglfx; if (*qfx3vhct == 1) { if (*afpc0kns != *wy1vqfzu) Rprintf("Error: *afpc0kns != *wy1vqfzu in C_nipyajc1\n"); for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { tmpwk = exp(*fpdlcqk9m0ibglfx++); *fpdlcqk9t8hwvalr++ = tmpwk / (1.0 + tmpwk); } } if (*qfx3vhct == 2) { if (*afpc0kns != *wy1vqfzu) Rprintf("Error: *afpc0kns != *wy1vqfzu in C_nipyajc1\n"); for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) *fpdlcqk9t8hwvalr++ = exp(*fpdlcqk9m0ibglfx++); } if (*qfx3vhct == 4) { if (*afpc0kns != *wy1vqfzu) Rprintf("Error: *afpc0kns != *wy1vqfzu in C_nipyajc1\n"); for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) *fpdlcqk9t8hwvalr++ = 1.0e0 - exp(-exp(*fpdlcqk9m0ibglfx++)); } if (*qfx3vhct == 3 || *qfx3vhct == 5) { if (2 * *afpc0kns != *wy1vqfzu) { //Rprintf Rprintf("Error: 2 * *afpc0kns != *wy1vqfzu in C_nipyajc1\n"); } //Rprintf for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) for (yq6lorbx = 1; yq6lorbx <= *afpc0kns; yq6lorbx++) { *fpdlcqk9t8hwvalr++ = exp(*fpdlcqk9m0ibglfx++); fpdlcqk9m0ibglfx++; } } if (*qfx3vhct == 8) { if (*afpc0kns != *wy1vqfzu) Rprintf("Error: *afpc0kns != *wy1vqfzu in C_nipyajc1\n"); for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) *fpdlcqk9t8hwvalr++ = *fpdlcqk9m0ibglfx++; } } else { fpdlcqk9t8hwvalr = t8hwvalr + *hj3ftvzu-1; fpdlcqk9m0ibglfx = m0ibglfx + *hj3ftvzu-1; if (*qfx3vhct == 1) { if (*afpc0kns != *wy1vqfzu) Rprintf("Error: *afpc0kns != *wy1vqfzu in C_nipyajc1\n"); for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { tmpwk = exp(*fpdlcqk9m0ibglfx); *fpdlcqk9t8hwvalr = tmpwk / (1.0 + tmpwk); fpdlcqk9t8hwvalr += *afpc0kns; fpdlcqk9m0ibglfx += *wy1vqfzu; } } if (*qfx3vhct == 2) { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9t8hwvalr = exp(*fpdlcqk9m0ibglfx); fpdlcqk9t8hwvalr += *afpc0kns; fpdlcqk9m0ibglfx += *wy1vqfzu; } } if (*qfx3vhct == 4) { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9t8hwvalr = 1.0e0 - exp(-exp(*fpdlcqk9m0ibglfx)); fpdlcqk9t8hwvalr += *afpc0kns; fpdlcqk9m0ibglfx += *wy1vqfzu; } } if (*qfx3vhct == 3 || *qfx3vhct == 5) { fpdlcqk9t8hwvalr = t8hwvalr + *hj3ftvzu-1; fpdlcqk9m0ibglfx = m0ibglfx + 2 * *hj3ftvzu-2; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9t8hwvalr = exp(*fpdlcqk9m0ibglfx); fpdlcqk9t8hwvalr += *afpc0kns; fpdlcqk9m0ibglfx += *wy1vqfzu; } } if (*qfx3vhct == 8) { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9t8hwvalr = *fpdlcqk9m0ibglfx; fpdlcqk9t8hwvalr += *afpc0kns; fpdlcqk9m0ibglfx += *wy1vqfzu; } } } } void yiumjq3nshjlwft5(int *qfx3vhct, double tlgduey8[], double ufgqj9ck[], double t8hwvalr[], int *ftnjamu2, int *wy1vqfzu, int *afpc0kns, int *kvowz9ht, double m0ibglfx[], double *jxacz5qu, int 
*hj3ftvzu, double *dn3iasxug, double *vsoihn1r, int *dqk5muto) { int ayfnwr1v, yq6lorbx, lbgwvp3q; double txlvcey5, xd4mybgj, uqnkc6zg, hofjnx2e, smmu, afwp5imx, ivqk2ywz, qvd7yktm, hdqsx7bk, anopu9vi, jtnbu2hz, prev_lfu2qhid = 0.0e0, lfu2qhid = 0.0e0, *fpdlcqk9m0ibglfx, *fpdlcqk9t8hwvalr, *fpdlcqk9ufgqj9ck, *fpdlcqk9tlgduey8; if (*hj3ftvzu == 0) { fpdlcqk9tlgduey8 = tlgduey8; if (*qfx3vhct == 1 || *qfx3vhct == 4) { if (*afpc0kns != *wy1vqfzu) Rprintf("Error: *afpc0kns != *wy1vqfzu in C_shjlwft5\n"); fpdlcqk9tlgduey8 = tlgduey8; for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { // yyy fpdlcqk9t8hwvalr = t8hwvalr + yq6lorbx-1; fpdlcqk9ufgqj9ck = ufgqj9ck; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { // bbb ivqk2ywz = *fpdlcqk9tlgduey8 > 0.0 ? *fpdlcqk9tlgduey8*log(*fpdlcqk9tlgduey8) :0.0; if (*fpdlcqk9tlgduey8 < 1.0e0) ivqk2ywz += (1.0e0 - *fpdlcqk9tlgduey8) * log(1.0e0 - *fpdlcqk9tlgduey8); xd4mybgj = *fpdlcqk9t8hwvalr * (1.0e0 - *fpdlcqk9t8hwvalr); if (xd4mybgj < *dn3iasxug) { smmu = *fpdlcqk9t8hwvalr; qvd7yktm = *fpdlcqk9tlgduey8 * ((smmu < *dn3iasxug) ? *vsoihn1r : log(smmu)); afwp5imx = 1.0e0 - smmu; qvd7yktm += (afwp5imx < *dn3iasxug ? *vsoihn1r : log(afwp5imx))* (1.0 - *fpdlcqk9tlgduey8); } else { qvd7yktm = *fpdlcqk9tlgduey8 * log( *fpdlcqk9t8hwvalr) + (1.0 - *fpdlcqk9tlgduey8) * log(1.0 - *fpdlcqk9t8hwvalr); } lfu2qhid += *fpdlcqk9ufgqj9ck++ * (ivqk2ywz - qvd7yktm); fpdlcqk9t8hwvalr += *afpc0kns; fpdlcqk9tlgduey8++; } // bbb jxacz5qu[yq6lorbx] = 2.0e0 * (lfu2qhid - prev_lfu2qhid); prev_lfu2qhid = lfu2qhid; } // yyy } if (*qfx3vhct == 2) { if (*afpc0kns != *wy1vqfzu) Rprintf("Error: *afpc0kns != *wy1vqfzu in C_shjlwft5\n"); fpdlcqk9tlgduey8 = tlgduey8; for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { fpdlcqk9t8hwvalr = t8hwvalr + yq6lorbx-1; fpdlcqk9ufgqj9ck = ufgqj9ck; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { xd4mybgj = *fpdlcqk9tlgduey8 > 0.0 ? *fpdlcqk9t8hwvalr - *fpdlcqk9tlgduey8 + *fpdlcqk9tlgduey8 * log(*fpdlcqk9tlgduey8 / *fpdlcqk9t8hwvalr) : *fpdlcqk9t8hwvalr - *fpdlcqk9tlgduey8; lfu2qhid += *fpdlcqk9ufgqj9ck++ * xd4mybgj; fpdlcqk9t8hwvalr += *afpc0kns; fpdlcqk9tlgduey8++; } jxacz5qu[yq6lorbx] = 2.0e0 * (lfu2qhid - prev_lfu2qhid); prev_lfu2qhid = lfu2qhid; } } if (*qfx3vhct == 5) { fpdlcqk9tlgduey8 = tlgduey8; if (2 * *afpc0kns != *wy1vqfzu) { //Rprintf Rprintf("Error: 2 * *afpc0kns != *wy1vqfzu in C_nipyajc1\n"); } //Rprintf for (yq6lorbx = 1; yq6lorbx <= *afpc0kns; yq6lorbx++) { fpdlcqk9m0ibglfx = m0ibglfx + 2*yq6lorbx-1; fpdlcqk9t8hwvalr = t8hwvalr + yq6lorbx-1; fpdlcqk9ufgqj9ck = ufgqj9ck; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { jtnbu2hz = exp(*fpdlcqk9m0ibglfx); uqnkc6zg = fvlmz9iyC_tldz5ion(jtnbu2hz); xd4mybgj = *fpdlcqk9tlgduey8 > 0.0 ? 
(jtnbu2hz - 1.0e0) * log(*fpdlcqk9tlgduey8) + (log(jtnbu2hz) - *fpdlcqk9tlgduey8 / *fpdlcqk9t8hwvalr - log(*fpdlcqk9t8hwvalr)) * jtnbu2hz - uqnkc6zg : -1000.0e0; xd4mybgj = -xd4mybgj; lfu2qhid += *fpdlcqk9ufgqj9ck++ * xd4mybgj; fpdlcqk9m0ibglfx += *wy1vqfzu; fpdlcqk9t8hwvalr += *afpc0kns; fpdlcqk9tlgduey8++; } jxacz5qu[yq6lorbx] = 2.0e0 * (lfu2qhid - prev_lfu2qhid); prev_lfu2qhid = lfu2qhid; } } if (*qfx3vhct == 3) { if (*dqk5muto == 0) { anopu9vi = 34.0e0; for (yq6lorbx = 1; yq6lorbx <= *afpc0kns; yq6lorbx++) { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { if (m0ibglfx[2*yq6lorbx-1 + (ayfnwr1v-1)* *wy1vqfzu] > anopu9vi) { hdqsx7bk = exp(anopu9vi); lbgwvp3q = 1; } else if (m0ibglfx[2*yq6lorbx-1 + (ayfnwr1v-1)* *wy1vqfzu] < -anopu9vi) { hdqsx7bk = exp(-anopu9vi); lbgwvp3q = 1; } else { hdqsx7bk = exp(m0ibglfx[2*yq6lorbx-1 + (ayfnwr1v-1) * *wy1vqfzu]); lbgwvp3q = 0; } xd4mybgj = (tlgduey8[ayfnwr1v-1+ (yq6lorbx-1)* *ftnjamu2] < 1.0e0) ? 1.0e0 : tlgduey8[ayfnwr1v-1+ (yq6lorbx-1)* *ftnjamu2]; lfu2qhid += ufgqj9ck[ayfnwr1v-1] * (tlgduey8[ayfnwr1v-1 + (yq6lorbx-1) * *ftnjamu2] * log(xd4mybgj/t8hwvalr[yq6lorbx-1 + (ayfnwr1v-1) * *afpc0kns]) + (tlgduey8[ayfnwr1v-1 + (yq6lorbx-1) * *ftnjamu2] + hdqsx7bk) * log((t8hwvalr[yq6lorbx-1 + (ayfnwr1v-1) * *afpc0kns ] + hdqsx7bk) / (hdqsx7bk + tlgduey8[ayfnwr1v-1 + (yq6lorbx-1) * *ftnjamu2]))); } jxacz5qu[yq6lorbx] = 2.0e0 * (lfu2qhid - prev_lfu2qhid); prev_lfu2qhid = lfu2qhid; } } else { anopu9vi = 34.0e0; for (yq6lorbx = 1; yq6lorbx <= *afpc0kns; yq6lorbx++) { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { if (m0ibglfx[2*yq6lorbx-1 + (ayfnwr1v-1)* *wy1vqfzu] > anopu9vi) { hdqsx7bk = exp(anopu9vi); lbgwvp3q = 1; } else if (m0ibglfx[2*yq6lorbx-1 + (ayfnwr1v-1)* *wy1vqfzu] < -anopu9vi) { hdqsx7bk = exp(-anopu9vi); lbgwvp3q = 1; } else { hdqsx7bk = exp(m0ibglfx[2*yq6lorbx-1 + (ayfnwr1v-1)* *wy1vqfzu]); lbgwvp3q = 0; } if (lbgwvp3q) { uqnkc6zg = hofjnx2e = 0.0e0; } else { uqnkc6zg = fvlmz9iyC_tldz5ion(hdqsx7bk + tlgduey8[ayfnwr1v-1 + (yq6lorbx-1) * *ftnjamu2]); hofjnx2e = fvlmz9iyC_tldz5ion(hdqsx7bk); } txlvcey5 = fvlmz9iyC_tldz5ion(1.0e0 + tlgduey8[ayfnwr1v-1 + (yq6lorbx-1) * *ftnjamu2]); xd4mybgj = hdqsx7bk * log(hdqsx7bk / (hdqsx7bk + t8hwvalr[yq6lorbx-1 + (ayfnwr1v-1) * *afpc0kns])) + uqnkc6zg - hofjnx2e - txlvcey5; if (tlgduey8[ayfnwr1v-1 + (yq6lorbx-1) * *ftnjamu2] > 0.0e0) { xd4mybgj += tlgduey8[ayfnwr1v-1 + (yq6lorbx-1) * *ftnjamu2] * log(t8hwvalr[yq6lorbx-1 + (ayfnwr1v-1) * *afpc0kns] / (hdqsx7bk + t8hwvalr[yq6lorbx-1 + (ayfnwr1v-1) * *afpc0kns])); } lfu2qhid += ufgqj9ck[ayfnwr1v-1] * xd4mybgj; } jxacz5qu[yq6lorbx] = 2.0 * (-0.5 * lfu2qhid + 0.5 * prev_lfu2qhid); prev_lfu2qhid = lfu2qhid; } lfu2qhid *= (-0.5); } } if (*qfx3vhct == 8) { fpdlcqk9tlgduey8 = tlgduey8; for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { fpdlcqk9t8hwvalr = t8hwvalr + yq6lorbx-1; fpdlcqk9ufgqj9ck = ufgqj9ck; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { xd4mybgj = *fpdlcqk9tlgduey8++ - *fpdlcqk9t8hwvalr; lfu2qhid += *fpdlcqk9ufgqj9ck++ * pow(xd4mybgj, (double) 2.0); fpdlcqk9t8hwvalr += *afpc0kns; } jxacz5qu[yq6lorbx] = 2.0e0 * (lfu2qhid - prev_lfu2qhid); prev_lfu2qhid = lfu2qhid; } } } else { fpdlcqk9tlgduey8 = tlgduey8 + (*hj3ftvzu-1) * *ftnjamu2; fpdlcqk9t8hwvalr = t8hwvalr + *hj3ftvzu-1; fpdlcqk9ufgqj9ck = ufgqj9ck; if (*qfx3vhct == 1 || *qfx3vhct == 4) { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { ivqk2ywz = *fpdlcqk9tlgduey8 > 0.0 ? 
*fpdlcqk9tlgduey8 * log(*fpdlcqk9tlgduey8) : 0.0; if (*fpdlcqk9tlgduey8 < 1.0e0) ivqk2ywz += (1.0e0 - *fpdlcqk9tlgduey8) * log(1.0e0 - *fpdlcqk9tlgduey8); xd4mybgj = *fpdlcqk9t8hwvalr * (1.0e0 - *fpdlcqk9t8hwvalr); if (xd4mybgj < *dn3iasxug) { smmu = *fpdlcqk9t8hwvalr; qvd7yktm = *fpdlcqk9tlgduey8 * ((smmu < *dn3iasxug) ? *vsoihn1r : log(smmu)); afwp5imx = 1.0e0 - smmu; qvd7yktm += (afwp5imx < *dn3iasxug ? *vsoihn1r : log(afwp5imx)) * (1.0 - *fpdlcqk9tlgduey8); } else { qvd7yktm = *fpdlcqk9tlgduey8 * log( *fpdlcqk9t8hwvalr) + (1.0 - *fpdlcqk9tlgduey8) * log(1.0e0 - *fpdlcqk9t8hwvalr); } lfu2qhid += *fpdlcqk9ufgqj9ck++ * (ivqk2ywz - qvd7yktm); fpdlcqk9t8hwvalr += *afpc0kns; fpdlcqk9tlgduey8++; } } if (*qfx3vhct == 2) { if (*afpc0kns != *wy1vqfzu) Rprintf("Error: *afpc0kns != *wy1vqfzu in C_shjlwft5\n"); for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { xd4mybgj = *fpdlcqk9tlgduey8 > 0.0e0 ? *fpdlcqk9t8hwvalr - *fpdlcqk9tlgduey8 + *fpdlcqk9tlgduey8 * log(*fpdlcqk9tlgduey8 / *fpdlcqk9t8hwvalr) : *fpdlcqk9t8hwvalr - *fpdlcqk9tlgduey8; lfu2qhid += *fpdlcqk9ufgqj9ck++ * xd4mybgj; fpdlcqk9t8hwvalr += *afpc0kns; fpdlcqk9tlgduey8++; } } if (*qfx3vhct == 5) { fpdlcqk9tlgduey8 = tlgduey8 + (*hj3ftvzu-1) * *ftnjamu2; fpdlcqk9t8hwvalr = t8hwvalr + *hj3ftvzu-1; fpdlcqk9ufgqj9ck = ufgqj9ck; fpdlcqk9m0ibglfx = m0ibglfx + 2 * *hj3ftvzu-1; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { jtnbu2hz = exp(*fpdlcqk9m0ibglfx); uqnkc6zg = fvlmz9iyC_tldz5ion(jtnbu2hz); xd4mybgj = *fpdlcqk9tlgduey8 > 0.0 ? (jtnbu2hz - 1.0e0) * log(*fpdlcqk9tlgduey8) + jtnbu2hz * (log(jtnbu2hz) - *fpdlcqk9tlgduey8 / *fpdlcqk9t8hwvalr - log(*fpdlcqk9t8hwvalr)) - uqnkc6zg : -1000.0e0; xd4mybgj = -xd4mybgj; lfu2qhid += *fpdlcqk9ufgqj9ck++ * xd4mybgj; fpdlcqk9t8hwvalr += *afpc0kns; fpdlcqk9m0ibglfx += *wy1vqfzu; fpdlcqk9tlgduey8++; } } if (*qfx3vhct == 3) { if (*dqk5muto == 0) { anopu9vi = 34.0e0; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { if (m0ibglfx[2 * *hj3ftvzu -1 + (ayfnwr1v-1) * *wy1vqfzu] > anopu9vi) { hdqsx7bk = exp(anopu9vi); lbgwvp3q = 1; } else if (m0ibglfx[2 * *hj3ftvzu -1 + (ayfnwr1v-1) * *wy1vqfzu] < -anopu9vi) { hdqsx7bk = exp(-anopu9vi); lbgwvp3q = 1; } else { hdqsx7bk = exp(m0ibglfx[2* *hj3ftvzu-1 + (ayfnwr1v-1) * *wy1vqfzu]); lbgwvp3q = 0; } xd4mybgj = (tlgduey8[ayfnwr1v-1 + (*hj3ftvzu-1) * *ftnjamu2] < 1.0e0) ? 
1.0e0 : tlgduey8[ayfnwr1v-1 + (*hj3ftvzu-1) * *ftnjamu2]; lfu2qhid += ufgqj9ck[ayfnwr1v-1] * (tlgduey8[ayfnwr1v-1 + (*hj3ftvzu-1) * *ftnjamu2] * log(xd4mybgj/t8hwvalr[*hj3ftvzu-1 + (ayfnwr1v-1) * *afpc0kns]) + (tlgduey8[ayfnwr1v-1 + (*hj3ftvzu-1) * *ftnjamu2] + hdqsx7bk) * log((t8hwvalr[*hj3ftvzu-1 + (ayfnwr1v-1) * *afpc0kns] + hdqsx7bk) / (hdqsx7bk+tlgduey8[ayfnwr1v-1 + (*hj3ftvzu-1) * *ftnjamu2]))); } } else { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { hdqsx7bk = exp(m0ibglfx[2 * *hj3ftvzu-1 + (ayfnwr1v-1) * *wy1vqfzu]); uqnkc6zg = fvlmz9iyC_tldz5ion(hdqsx7bk + tlgduey8[ayfnwr1v-1 + (*hj3ftvzu-1) * *ftnjamu2]); hofjnx2e = fvlmz9iyC_tldz5ion(hdqsx7bk); txlvcey5 = fvlmz9iyC_tldz5ion(1.0e0 + tlgduey8[ayfnwr1v-1 + (*hj3ftvzu-1) * *ftnjamu2]); xd4mybgj = hdqsx7bk * log(hdqsx7bk / (hdqsx7bk + t8hwvalr[*hj3ftvzu-1 + (ayfnwr1v-1) * *afpc0kns])) + uqnkc6zg - hofjnx2e - txlvcey5; if (tlgduey8[ayfnwr1v-1 + (*hj3ftvzu-1) * *ftnjamu2] > 0.0e0) { xd4mybgj += tlgduey8[ayfnwr1v-1 + (*hj3ftvzu-1) * *ftnjamu2] * log(t8hwvalr[*hj3ftvzu-1 + (ayfnwr1v-1) * *afpc0kns] / (hdqsx7bk + t8hwvalr[*hj3ftvzu-1 + (ayfnwr1v-1) * *afpc0kns])); } lfu2qhid += ufgqj9ck[ayfnwr1v-1] * xd4mybgj; } lfu2qhid *= (-0.5e0); } } if (*qfx3vhct == 8) { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { lfu2qhid += *fpdlcqk9ufgqj9ck++ * pow(*fpdlcqk9tlgduey8++ - *fpdlcqk9t8hwvalr, (double) 2.0); fpdlcqk9t8hwvalr += *afpc0kns; } } } *jxacz5qu = 2.0e0 * lfu2qhid; } void yiumjq3nflncwkfq76(double lncwkfq7[], double w8znmyce[], int *ftnjamu2, int *br5ovgcj, int *xwdf5ltg, int *qfx3vhct) { int ayfnwr1v, hpmwnav2; // sedf7mxb = 1; double *fpdlcqk9w8znmyce, *fpdlcqk9lncwkfq7; fpdlcqk9w8znmyce = w8znmyce; fpdlcqk9lncwkfq7 = lncwkfq7; if (*qfx3vhct == 3 || *qfx3vhct == 5) { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9w8znmyce++ = 1.0e0; *fpdlcqk9w8znmyce++ = 0.0e0; } for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9w8znmyce++ = 0.0e0; *fpdlcqk9w8znmyce++ = 1.0e0; } for (hpmwnav2 = 1; hpmwnav2 <= *xwdf5ltg; hpmwnav2++) { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9w8znmyce++ = *fpdlcqk9lncwkfq7++; *fpdlcqk9w8znmyce++ = 0.0e0; } } } else { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9w8znmyce++ = 1.0e0; } if (*br5ovgcj != *ftnjamu2) Rprintf("Error: *br5ovgcj != *ftnjamu2 in C_flncwkfq76\n"); for (hpmwnav2 = 1; hpmwnav2 <= *xwdf5ltg; hpmwnav2++) { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9w8znmyce++ = *fpdlcqk9lncwkfq7++; } } } } void yiumjq3nflncwkfq71(double lncwkfq7[], double w8znmyce[], int *ftnjamu2, int *xwdf5ltg, int *qfx3vhct, double vm4xjosb[], int *br5ovgcj, int *xlpjcg3s, double kifxa0he[], int *yru9olks, int *unhycz0e) { int i0spbklx, ayfnwr1v, hpmwnav2, // sedf7mxb = *xwdf5ltg + 1, hyqwtp6i = *xwdf5ltg * (*xwdf5ltg + 1) / 2; double *fpdlcqk9lncwkfq7, *fpdlcqk9lncwkfq71, *fpdlcqk9lncwkfq72, *fpdlcqk9w8znmyce, *fpdlcqk9vm4xjosb, *fpdlcqk9kifxa0he; int *wkumc9idtgiyxdw1, *wkumc9iddufozmt7; wkumc9idtgiyxdw1 = Calloc(hyqwtp6i, int); wkumc9iddufozmt7 = Calloc(hyqwtp6i, int); fvlmz9iyC_qpsedg8x(wkumc9idtgiyxdw1, wkumc9iddufozmt7, xwdf5ltg); fpdlcqk9w8znmyce = w8znmyce; fpdlcqk9lncwkfq7 = fpdlcqk9lncwkfq71 = fpdlcqk9lncwkfq72 = lncwkfq7; if (*qfx3vhct == 3 || *qfx3vhct == 5) { // ggg if (*br5ovgcj != 2 * *ftnjamu2) //Rprinf Rprintf("Error: *br5ovgcj != 2 * *ftnjamu2 in C_flncwkfq71\n"); for (hpmwnav2 = 1; hpmwnav2 <= *xwdf5ltg; hpmwnav2++) { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9w8znmyce++ = 
*fpdlcqk9lncwkfq7++; *fpdlcqk9w8znmyce++ = 0.0e0; } } if (*unhycz0e == 0) { for (i0spbklx = 1; i0spbklx <= hyqwtp6i; i0spbklx++) { fpdlcqk9lncwkfq71 = lncwkfq7 + (wkumc9idtgiyxdw1[i0spbklx-1]-1) * *ftnjamu2; fpdlcqk9lncwkfq72 = lncwkfq7 + (wkumc9iddufozmt7[i0spbklx-1]-1) * *ftnjamu2; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9w8znmyce++ = *fpdlcqk9lncwkfq71++ * *fpdlcqk9lncwkfq72++; *fpdlcqk9w8znmyce++ = 0.0e0; } } } else { fpdlcqk9vm4xjosb = vm4xjosb; for (ayfnwr1v = 0; ayfnwr1v < *ftnjamu2; ayfnwr1v++) *fpdlcqk9vm4xjosb++ = 0.0; fpdlcqk9lncwkfq7 = lncwkfq7; for (hpmwnav2 = 1; hpmwnav2 <= *xwdf5ltg; hpmwnav2++) { fpdlcqk9vm4xjosb = vm4xjosb; for (ayfnwr1v = 0; ayfnwr1v < *ftnjamu2; ayfnwr1v++) { *fpdlcqk9vm4xjosb += pow(*fpdlcqk9lncwkfq7++, (double) 2.0); fpdlcqk9vm4xjosb++; } } fpdlcqk9vm4xjosb = vm4xjosb; for (ayfnwr1v = 0; ayfnwr1v < *ftnjamu2; ayfnwr1v++) { *fpdlcqk9vm4xjosb *= (-0.50e0); fpdlcqk9vm4xjosb++; } } } else { // ggg and hhh for (hpmwnav2 = 1; hpmwnav2 <= *xwdf5ltg; hpmwnav2++) { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9w8znmyce++ = *fpdlcqk9lncwkfq7++; } } if (*unhycz0e == 0) { for (i0spbklx = 1; i0spbklx <= hyqwtp6i; i0spbklx++) { fpdlcqk9lncwkfq71 = lncwkfq7 + (wkumc9idtgiyxdw1[i0spbklx-1]-1) * *ftnjamu2; fpdlcqk9lncwkfq72 = lncwkfq7 + (wkumc9iddufozmt7[i0spbklx-1]-1) * *ftnjamu2; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9w8znmyce++ = *fpdlcqk9lncwkfq71++ * *fpdlcqk9lncwkfq72++; } } } else { fpdlcqk9vm4xjosb = vm4xjosb; for (ayfnwr1v = 0; ayfnwr1v < *ftnjamu2; ayfnwr1v++) *fpdlcqk9vm4xjosb++ = 0.0; fpdlcqk9lncwkfq7 = lncwkfq7; for (hpmwnav2 = 1; hpmwnav2 <= *xwdf5ltg; hpmwnav2++) { fpdlcqk9vm4xjosb = vm4xjosb; for (ayfnwr1v = 0; ayfnwr1v < *ftnjamu2; ayfnwr1v++) { *fpdlcqk9vm4xjosb += pow(*fpdlcqk9lncwkfq7++, (double) 2.0); fpdlcqk9vm4xjosb++; } } fpdlcqk9vm4xjosb = vm4xjosb; for (ayfnwr1v = 0; ayfnwr1v < *ftnjamu2; ayfnwr1v++) { *fpdlcqk9vm4xjosb *= (-0.50e0); fpdlcqk9vm4xjosb++; } } } // hhh if (*yru9olks > 0) { if (*qfx3vhct == 3 || *qfx3vhct == 5) { // kkk for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9w8znmyce++ = 1.0e0; *fpdlcqk9w8znmyce++ = 0.0e0; } for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9w8znmyce++ = 0.0e0; *fpdlcqk9w8znmyce++ = 1.0e0; } if (*yru9olks > 1) { fpdlcqk9kifxa0he = kifxa0he; // + (i0spbklx-1) * *ftnjamu2; for (i0spbklx = 2; i0spbklx <= *yru9olks; i0spbklx++) { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9w8znmyce++ = *fpdlcqk9kifxa0he++; *fpdlcqk9w8znmyce++ = 0.0e0; } } } } else { // kkk and iii fpdlcqk9kifxa0he = kifxa0he; // + (i0spbklx-1) * *ftnjamu2; for (i0spbklx = 1; i0spbklx <= *yru9olks; i0spbklx++) { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9w8znmyce++ = *fpdlcqk9kifxa0he++; } } } // iii } // if (*yru9olks > 0) Free(wkumc9idtgiyxdw1); Free(wkumc9iddufozmt7); } void yiumjq3nflncwkfq72(double lncwkfq7[], double w8znmyce[], int *ftnjamu2, int *wy1vqfzu, int *br5ovgcj, int *xwdf5ltg, int *qfx3vhct, int *afpc0kns, int *fmzq7aob, int *eu3oxvyb, int *unhycz0e, double vm4xjosb[]) { int i0spbklx, ayfnwr1v, yq6lorbx, gp1jxzuh, hpmwnav2, sedf7mxb = 0, hyqwtp6i = *xwdf5ltg * (*xwdf5ltg + 1) / 2; double uqnkc6zg, *fpdlcqk9lncwkfq7, *fpdlcqk9lncwkfq71, *fpdlcqk9lncwkfq72, *fpdlcqk9w8znmyce, *fpdlcqk9vm4xjosb; int *wkumc9idtgiyxdw1, *wkumc9iddufozmt7; wkumc9idtgiyxdw1 = Calloc(hyqwtp6i, int); wkumc9iddufozmt7 = Calloc(hyqwtp6i, int); fvlmz9iyC_qpsedg8x(wkumc9idtgiyxdw1, wkumc9iddufozmt7, xwdf5ltg); 
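/* Note (inferred from the surrounding code, not original documentation):
   the two Calloc'd work arrays receive, via fvlmz9iyC_qpsedg8x(), the
   row and column indices of the hyqwtp6i = xwdf5ltg*(xwdf5ltg+1)/2
   index pairs; below they select pairs of columns of lncwkfq7 whose
   elementwise products form the quadratic latent-variable terms. */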
fpdlcqk9w8znmyce = w8znmyce; fpdlcqk9lncwkfq7 = lncwkfq7; for (gp1jxzuh = 1; gp1jxzuh <= *eu3oxvyb; gp1jxzuh++) { for (ayfnwr1v = 1; ayfnwr1v <= *br5ovgcj; ayfnwr1v++) *fpdlcqk9w8znmyce++ = 0.0e0; } fpdlcqk9w8znmyce = w8znmyce; if (*qfx3vhct == 3 || *qfx3vhct == 5) { if (*br5ovgcj != 2 * *ftnjamu2) //Rprinf Rprintf("Error: *br5ovgcj != 2 * *ftnjamu2 in C_flncwkfq72\n"); for (hpmwnav2 = 1; hpmwnav2 <= *xwdf5ltg; hpmwnav2++) { fpdlcqk9w8znmyce = w8znmyce + sedf7mxb * *br5ovgcj; fpdlcqk9lncwkfq7 = lncwkfq7 + (hpmwnav2-1) * *ftnjamu2; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { for (yq6lorbx = 1; yq6lorbx <= *afpc0kns; yq6lorbx++) { *fpdlcqk9w8znmyce = *fpdlcqk9lncwkfq7; fpdlcqk9w8znmyce += 2 + *br5ovgcj; } fpdlcqk9lncwkfq7++; fpdlcqk9w8znmyce -= *afpc0kns * *br5ovgcj; // fixed@20100406 } sedf7mxb += *afpc0kns; } } else { for (hpmwnav2 = 1; hpmwnav2 <= *xwdf5ltg; hpmwnav2++) { fpdlcqk9w8znmyce = w8znmyce + sedf7mxb * *br5ovgcj; fpdlcqk9lncwkfq7 = lncwkfq7 + (hpmwnav2-1) * *ftnjamu2; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { *fpdlcqk9w8znmyce++ = *fpdlcqk9lncwkfq7; fpdlcqk9w8znmyce += *br5ovgcj; } fpdlcqk9lncwkfq7++; fpdlcqk9w8znmyce -= *wy1vqfzu * *br5ovgcj; // fixed@20100406 } sedf7mxb += *wy1vqfzu; } } if (*fmzq7aob == 0) { if (*qfx3vhct == 3 || *qfx3vhct == 5) { for (i0spbklx = 1; i0spbklx <= hyqwtp6i; i0spbklx++) { fpdlcqk9lncwkfq71 = lncwkfq7 + (wkumc9idtgiyxdw1[i0spbklx-1]-1) * *ftnjamu2; fpdlcqk9lncwkfq72 = lncwkfq7 + (wkumc9iddufozmt7[i0spbklx-1]-1) * *ftnjamu2; fpdlcqk9w8znmyce = w8znmyce + sedf7mxb * *br5ovgcj; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { uqnkc6zg = *fpdlcqk9lncwkfq71++ * *fpdlcqk9lncwkfq72++; for (yq6lorbx = 1; yq6lorbx <= *afpc0kns; yq6lorbx++) { *fpdlcqk9w8znmyce = uqnkc6zg; fpdlcqk9w8znmyce += 2 + *br5ovgcj; } fpdlcqk9w8znmyce -= *afpc0kns * *br5ovgcj; // fixed@20100406 } sedf7mxb += *afpc0kns; } } else { for (i0spbklx = 1; i0spbklx <= hyqwtp6i; i0spbklx++) { fpdlcqk9lncwkfq71 = lncwkfq7 + (wkumc9idtgiyxdw1[i0spbklx-1]-1) * *ftnjamu2; fpdlcqk9lncwkfq72 = lncwkfq7 + (wkumc9iddufozmt7[i0spbklx-1]-1) * *ftnjamu2; fpdlcqk9w8znmyce = w8znmyce + sedf7mxb * *br5ovgcj; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { uqnkc6zg = *fpdlcqk9lncwkfq71++ * *fpdlcqk9lncwkfq72++; for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { *fpdlcqk9w8znmyce++ = uqnkc6zg; fpdlcqk9w8znmyce += *br5ovgcj; } fpdlcqk9w8znmyce -= *wy1vqfzu * *br5ovgcj; // fixed@20100406 } sedf7mxb += *wy1vqfzu; } } } else { if (*unhycz0e == 1) { fpdlcqk9vm4xjosb = vm4xjosb; for (ayfnwr1v = 0; ayfnwr1v < *ftnjamu2; ayfnwr1v++) *fpdlcqk9vm4xjosb++ = 0.0; fpdlcqk9lncwkfq7 = lncwkfq7; for (hpmwnav2 = 1; hpmwnav2 <= *xwdf5ltg; hpmwnav2++) { fpdlcqk9vm4xjosb = vm4xjosb; for (ayfnwr1v = 0; ayfnwr1v < *ftnjamu2; ayfnwr1v++) { *fpdlcqk9vm4xjosb += pow(*fpdlcqk9lncwkfq7++, (double) 2.0); fpdlcqk9vm4xjosb++; } } fpdlcqk9vm4xjosb = vm4xjosb; for (ayfnwr1v = 0; ayfnwr1v < *ftnjamu2; ayfnwr1v++) { *fpdlcqk9vm4xjosb *= (-0.50e0); fpdlcqk9vm4xjosb++; } } else { if (*qfx3vhct == 3 || *qfx3vhct == 5) { for (i0spbklx = 1; i0spbklx <= hyqwtp6i; i0spbklx++) { fpdlcqk9lncwkfq71 = lncwkfq7 + (wkumc9idtgiyxdw1[i0spbklx-1]-1) * *ftnjamu2; fpdlcqk9lncwkfq72 = lncwkfq7 + (wkumc9iddufozmt7[i0spbklx-1]-1) * *ftnjamu2; fpdlcqk9w8znmyce = w8znmyce + (sedf7mxb+i0spbklx-1) * *br5ovgcj; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { uqnkc6zg = *fpdlcqk9lncwkfq71++ * *fpdlcqk9lncwkfq72++; for (yq6lorbx = 1; yq6lorbx <= *afpc0kns; 
yq6lorbx++) { *fpdlcqk9w8znmyce++ = uqnkc6zg; fpdlcqk9w8znmyce++; } } } sedf7mxb += hyqwtp6i; } else { for (i0spbklx = 1; i0spbklx <= hyqwtp6i; i0spbklx++) { fpdlcqk9lncwkfq71 = lncwkfq7 + (wkumc9idtgiyxdw1[i0spbklx-1]-1) * *ftnjamu2; fpdlcqk9lncwkfq72 = lncwkfq7 + (wkumc9iddufozmt7[i0spbklx-1]-1) * *ftnjamu2; fpdlcqk9w8znmyce = w8znmyce + (sedf7mxb+i0spbklx-1) * *br5ovgcj; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { uqnkc6zg = *fpdlcqk9lncwkfq71++ * *fpdlcqk9lncwkfq72++; for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) *fpdlcqk9w8znmyce++ = uqnkc6zg; } } sedf7mxb += hyqwtp6i; } } } Free(wkumc9idtgiyxdw1); Free(wkumc9iddufozmt7); } void yiumjq3nietam6(double tlgduey8[], double m0ibglfx[], double y7sdgtqi[], int *ftnjamu2, int *wy1vqfzu, int *afpc0kns, int *qfx3vhct, int *hj3ftvzu, double ufgqj9ck[], int *wr0lbopv) { int ayfnwr1v; double gyuq8dex, g2vwexykp, qa8ltuhj, vogkfwt8 = 0.0e0, msrdjh5f = 0.0e0, kwvo4ury, cpz4fgkx, tad5vhsu, khl0iysgk, myoffset = 1.0 / 32.0; double *fpdlcqk9tlgduey8, *fpdlcqk9m0ibglfx, *fpdlcqk9m0ibglfx1, *fpdlcqk9m0ibglfx2, *fpdlcqk9ufgqj9ck; fpdlcqk9m0ibglfx = fpdlcqk9m0ibglfx1 = fpdlcqk9m0ibglfx2 = &tad5vhsu; gyuq8dex = 1.0; fpdlcqk9tlgduey8 = tlgduey8 + (*hj3ftvzu-1) * *ftnjamu2; fpdlcqk9ufgqj9ck = ufgqj9ck; if (*qfx3vhct == 3 || *qfx3vhct == 5) { fpdlcqk9m0ibglfx1 = m0ibglfx + 2 * *hj3ftvzu-1; fpdlcqk9m0ibglfx2 = m0ibglfx + 2 * *hj3ftvzu-2; } else fpdlcqk9m0ibglfx = m0ibglfx + *hj3ftvzu-1; if (*qfx3vhct == 1 || *qfx3vhct == 4 || *qfx3vhct == 3 || *qfx3vhct == 5) { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { msrdjh5f += *fpdlcqk9ufgqj9ck; vogkfwt8 += *fpdlcqk9tlgduey8++ * *fpdlcqk9ufgqj9ck++; } gyuq8dex = vogkfwt8 / msrdjh5f; fpdlcqk9tlgduey8 = tlgduey8 + (*hj3ftvzu-1) * *ftnjamu2; } if (*qfx3vhct == 1) { yiumjq3ng2vwexyk9(&gyuq8dex, &g2vwexykp); for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9m0ibglfx = g2vwexykp; fpdlcqk9m0ibglfx += *wy1vqfzu; } } if (*qfx3vhct == 2) { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9m0ibglfx = log(*fpdlcqk9tlgduey8++ + myoffset); fpdlcqk9m0ibglfx += *wy1vqfzu; } } if (*qfx3vhct == 4) { yiumjq3nbewf1pzv9(&gyuq8dex, &qa8ltuhj); for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9m0ibglfx = qa8ltuhj; fpdlcqk9m0ibglfx += *wy1vqfzu; } } if (*qfx3vhct == 5) { if (*wr0lbopv == 1 || *wr0lbopv == 2) { kwvo4ury = *wr0lbopv == 1 ? 
log(gyuq8dex + myoffset) : log((6.0 / 8.0) * gyuq8dex); cpz4fgkx = log(y7sdgtqi[3 + *afpc0kns + *hj3ftvzu -1] + myoffset); for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9m0ibglfx2 = kwvo4ury; *fpdlcqk9m0ibglfx1 = cpz4fgkx; fpdlcqk9m0ibglfx1 += *wy1vqfzu; fpdlcqk9m0ibglfx2 += *wy1vqfzu; } } else { cpz4fgkx = log(y7sdgtqi[3 + *afpc0kns + *hj3ftvzu -1] + myoffset); for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9m0ibglfx2 = log(*fpdlcqk9tlgduey8++ + myoffset); *fpdlcqk9m0ibglfx1 = cpz4fgkx; fpdlcqk9m0ibglfx1 += *wy1vqfzu; fpdlcqk9m0ibglfx2 += *wy1vqfzu; } } } if (*qfx3vhct == 3) { if (*wr0lbopv == 1) { kwvo4ury = log(gyuq8dex + myoffset); cpz4fgkx = log(y7sdgtqi[3 + *hj3ftvzu -1] + myoffset); for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9m0ibglfx2 = kwvo4ury; *fpdlcqk9m0ibglfx1 = cpz4fgkx; fpdlcqk9m0ibglfx1 += *wy1vqfzu; fpdlcqk9m0ibglfx2 += *wy1vqfzu; } } else if (*wr0lbopv == 2) { kwvo4ury = log(gyuq8dex + myoffset); khl0iysgk = y7sdgtqi[3 + *hj3ftvzu -1]; cpz4fgkx = log(khl0iysgk); for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { tad5vhsu = *fpdlcqk9tlgduey8 - gyuq8dex; *fpdlcqk9m0ibglfx2 = (tad5vhsu < 3.0 * gyuq8dex) ? kwvo4ury : log(sqrt(*fpdlcqk9tlgduey8)); *fpdlcqk9m0ibglfx1 = cpz4fgkx; fpdlcqk9m0ibglfx1 += *wy1vqfzu; fpdlcqk9m0ibglfx2 += *wy1vqfzu; fpdlcqk9tlgduey8++; } } else if (*wr0lbopv == 3) { kwvo4ury = log(gyuq8dex + myoffset); khl0iysgk = y7sdgtqi[3 + *hj3ftvzu -1]; cpz4fgkx = log(khl0iysgk); for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { tad5vhsu = *fpdlcqk9tlgduey8 - gyuq8dex; if (tad5vhsu > gyuq8dex) { *fpdlcqk9m0ibglfx2 = log(0.5 * (*fpdlcqk9tlgduey8 + gyuq8dex)); *fpdlcqk9m0ibglfx1 = log(khl0iysgk / (tad5vhsu / gyuq8dex)); } else if (*fpdlcqk9tlgduey8 < (gyuq8dex / 4.0)) { *fpdlcqk9m0ibglfx2 = log(gyuq8dex / 4.0); *fpdlcqk9m0ibglfx1 = cpz4fgkx; } else { *fpdlcqk9m0ibglfx2 = kwvo4ury; *fpdlcqk9m0ibglfx1 = cpz4fgkx; } fpdlcqk9m0ibglfx1 += *wy1vqfzu; fpdlcqk9m0ibglfx2 += *wy1vqfzu; fpdlcqk9tlgduey8++; } } else { cpz4fgkx = log(y7sdgtqi[3 + *hj3ftvzu - 1]); for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9m0ibglfx2 = log(*fpdlcqk9tlgduey8++ + myoffset); *fpdlcqk9m0ibglfx1 = cpz4fgkx; fpdlcqk9m0ibglfx1 += *wy1vqfzu; fpdlcqk9m0ibglfx2 += *wy1vqfzu; } } } if (*qfx3vhct == 8) { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9m0ibglfx = *fpdlcqk9tlgduey8++; fpdlcqk9m0ibglfx += *wy1vqfzu; } } } void yiumjq3ndlgpwe0c(double tlgduey8[], double ufgqj9ck[], double m0ibglfx[], double t8hwvalr[], double ghz9vuba[], double rbne6ouj[], double wpuarq2m[], double *rsynp1go, double *dn3iasxug, double *uaf2xgqy, int *ftnjamu2, int *wy1vqfzu, int *afpc0kns, int *br5ovgcj, int *npjlv3mr, int *hj3ftvzu, int *qfx3vhct, int *zjkrtol8, int *unhycz0e, double vm4xjosb[]) { int ayfnwr1v, lbgwvp3q = -7; //qfx3vhct # kvowz9ht double xd4mybgja, xd4mybgjb, xd4mybgjc, anopu9vi; double *fpdlcqk9m0ibglfx, *fpdlcqk9m0ibglfx1, *fpdlcqk9m0ibglfx2, *fpdlcqk9t8hwvalr, *fpdlcqk9vm4xjosb, *fpdlcqk9wpuarq2m, *fpdlcqk9ufgqj9ck, *fpdlcqk9rbne6ouj, *fpdlcqk9tlgduey8, *fpdlcqk9ghz9vuba; double hdqsx7bk, dkdeta, dldk, ux3nadiw, ed2ldk2, n2kersmx; double bzmd6ftvmat[1], kkmat[1], nm0eljqk[1]; int dvhw1ulq, sguwj9ty, pqneb2ra = 1; double jtnbu2hz, uqnkc6zgd, uqnkc6zgt, dldshape, fvn3iasxug, xk7dnvei; int okobr6tcex; double tmp1; fpdlcqk9m0ibglfx = fpdlcqk9m0ibglfx1 = fpdlcqk9m0ibglfx2 = &xd4mybgja; lbgwvp3q += 7; lbgwvp3q *= lbgwvp3q; n2kersmx = 0.990e0; n2kersmx = 0.995e0; fpdlcqk9m0ibglfx = m0ibglfx + *hj3ftvzu-1; if 
(*qfx3vhct == 3 || *qfx3vhct == 5) { fpdlcqk9m0ibglfx1 = m0ibglfx + 2 * *hj3ftvzu-1; fpdlcqk9m0ibglfx2 = m0ibglfx + 2 * *hj3ftvzu-2; } fpdlcqk9t8hwvalr = t8hwvalr + *hj3ftvzu-1; fpdlcqk9vm4xjosb = vm4xjosb; fpdlcqk9wpuarq2m = wpuarq2m + *hj3ftvzu-1; fpdlcqk9ufgqj9ck = ufgqj9ck; fpdlcqk9rbne6ouj = rbne6ouj + (*hj3ftvzu-1) * *ftnjamu2; fpdlcqk9tlgduey8 = tlgduey8 + (*hj3ftvzu-1) * *ftnjamu2; fpdlcqk9ghz9vuba = ghz9vuba + (*hj3ftvzu-1) * *ftnjamu2; if (*qfx3vhct == 1) { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { xd4mybgja = *fpdlcqk9t8hwvalr * (1.0e0 - *fpdlcqk9t8hwvalr); xd4mybgjb = xd4mybgja * *fpdlcqk9ufgqj9ck++; if (xd4mybgja < *dn3iasxug) xd4mybgja = *dn3iasxug; if (xd4mybgjb < *dn3iasxug) { xd4mybgjb = *dn3iasxug; *fpdlcqk9wpuarq2m = *uaf2xgqy; } else { *fpdlcqk9wpuarq2m = sqrt(xd4mybgjb); } *fpdlcqk9rbne6ouj++ = xd4mybgjb; *fpdlcqk9ghz9vuba++ = *fpdlcqk9m0ibglfx + (*fpdlcqk9tlgduey8++ - *fpdlcqk9t8hwvalr) / xd4mybgja; fpdlcqk9t8hwvalr += *afpc0kns; fpdlcqk9wpuarq2m += *npjlv3mr; fpdlcqk9m0ibglfx += *wy1vqfzu; } } if (*qfx3vhct == 2) { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { xd4mybgja = *fpdlcqk9t8hwvalr; xd4mybgjb = xd4mybgja * *fpdlcqk9ufgqj9ck++; if (xd4mybgjb < *dn3iasxug) { xd4mybgjb = *dn3iasxug; *fpdlcqk9wpuarq2m = *uaf2xgqy; } else { *fpdlcqk9wpuarq2m = sqrt(xd4mybgjb); } *fpdlcqk9rbne6ouj = xd4mybgjb; if (*fpdlcqk9tlgduey8 > 0.0e0) { xd4mybgjc = xd4mybgja; if (xd4mybgjc < *dn3iasxug) xd4mybgjc = *dn3iasxug; *fpdlcqk9ghz9vuba = *fpdlcqk9m0ibglfx + (*fpdlcqk9tlgduey8 - xd4mybgjc) / xd4mybgjc; } else { *fpdlcqk9ghz9vuba = *fpdlcqk9m0ibglfx - 1.0e0; } fpdlcqk9m0ibglfx += *wy1vqfzu; fpdlcqk9t8hwvalr += *afpc0kns; fpdlcqk9wpuarq2m += *npjlv3mr; fpdlcqk9rbne6ouj++; fpdlcqk9tlgduey8++; fpdlcqk9ghz9vuba++; } } if (*qfx3vhct == 4) { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { if (*fpdlcqk9t8hwvalr < *dn3iasxug || *fpdlcqk9t8hwvalr > 1.0e0 - *dn3iasxug) { xd4mybgja = *dn3iasxug; xd4mybgjb = xd4mybgja * *fpdlcqk9ufgqj9ck; if (xd4mybgjb < *dn3iasxug) { xd4mybgjb = *dn3iasxug; *fpdlcqk9wpuarq2m = *uaf2xgqy; } else { *fpdlcqk9wpuarq2m = sqrt(xd4mybgjb); } *fpdlcqk9rbne6ouj = xd4mybgjb; *fpdlcqk9ghz9vuba = *fpdlcqk9m0ibglfx + (*fpdlcqk9tlgduey8 - *fpdlcqk9t8hwvalr) / xd4mybgja; } else { xd4mybgja = -(1.0e0 - *fpdlcqk9t8hwvalr) * log(1.0e0 - *fpdlcqk9t8hwvalr); if (xd4mybgja < *dn3iasxug) { xd4mybgja = *dn3iasxug; } xd4mybgjb = -xd4mybgja * *fpdlcqk9ufgqj9ck * log(1.0e0 - *fpdlcqk9t8hwvalr) / *fpdlcqk9t8hwvalr; if (xd4mybgjb < *dn3iasxug) { xd4mybgjb = *dn3iasxug; } *fpdlcqk9rbne6ouj = xd4mybgjb; *fpdlcqk9wpuarq2m = sqrt(xd4mybgjb); *fpdlcqk9ghz9vuba = *fpdlcqk9m0ibglfx + (*fpdlcqk9tlgduey8 - *fpdlcqk9t8hwvalr) / xd4mybgja; } fpdlcqk9m0ibglfx += *wy1vqfzu; fpdlcqk9t8hwvalr += *afpc0kns; fpdlcqk9wpuarq2m += *npjlv3mr; fpdlcqk9ufgqj9ck++; fpdlcqk9rbne6ouj++; fpdlcqk9tlgduey8++; fpdlcqk9ghz9vuba++; } } if (*qfx3vhct == 5) { fvn3iasxug = 1.0e-20; anopu9vi = 34.0e0; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { if (m0ibglfx[2 * *hj3ftvzu-1 + (ayfnwr1v-1) * *wy1vqfzu] > anopu9vi) { jtnbu2hz = exp(anopu9vi); lbgwvp3q = 1; } else if (m0ibglfx[2 * *hj3ftvzu-1 + (ayfnwr1v-1) * *wy1vqfzu] < -anopu9vi) { jtnbu2hz = exp(-anopu9vi); lbgwvp3q = 1; } else { jtnbu2hz = exp(m0ibglfx[2 * *hj3ftvzu-1 + (ayfnwr1v-1) * *wy1vqfzu]); lbgwvp3q = 0; } tyee_C_vdgam1(&jtnbu2hz, &uqnkc6zgd, &okobr6tcex); if (okobr6tcex != 1) { Rprintf("Error 1 in dlgpwe0c okobr6tcex=%d. 
Ploughing on.\n", okobr6tcex); } xk7dnvei = t8hwvalr[*hj3ftvzu-1 + (ayfnwr1v-1) * *afpc0kns]; if (xk7dnvei < fvn3iasxug) { xk7dnvei = fvn3iasxug; } dldshape = log(tlgduey8[ayfnwr1v-1 + (*hj3ftvzu-1) * *ftnjamu2]) + log(jtnbu2hz) - log(xk7dnvei) + 1.0e0 - uqnkc6zgd - tlgduey8[ayfnwr1v-1 + (*hj3ftvzu-1) * *ftnjamu2] / xk7dnvei; tyee_C_vtgam1(&jtnbu2hz, &uqnkc6zgt, &okobr6tcex); if (okobr6tcex != 1) { Rprintf("Error 2 in dlgpwe0c okobr6tcex=%d. Ploughing on.\n", okobr6tcex); } rbne6ouj[ayfnwr1v-1 + (2 * *hj3ftvzu-2) * *ftnjamu2] = ufgqj9ck[ayfnwr1v-1] * jtnbu2hz; xd4mybgja = jtnbu2hz * uqnkc6zgt - 1.0e0; rbne6ouj[ayfnwr1v-1 + (2 * *hj3ftvzu-1) * *ftnjamu2] = ufgqj9ck[ayfnwr1v-1] * jtnbu2hz * xd4mybgja; if (rbne6ouj[ayfnwr1v-1 + (2 * *hj3ftvzu-2) * *ftnjamu2] < *dn3iasxug) { rbne6ouj[ayfnwr1v-1 + (2 * *hj3ftvzu-2) * *ftnjamu2] = *dn3iasxug; wpuarq2m[2 * *hj3ftvzu-2 + (ayfnwr1v-1) * *npjlv3mr] = *uaf2xgqy; } else { wpuarq2m[2 * *hj3ftvzu-2 + (ayfnwr1v-1) * *npjlv3mr] = sqrt(rbne6ouj[ayfnwr1v-1 + (2 * *hj3ftvzu-2) * *ftnjamu2]); } if (rbne6ouj[ayfnwr1v-1 + (2 * *hj3ftvzu-1) * *ftnjamu2] < *dn3iasxug) { rbne6ouj[ayfnwr1v-1 + (2 * *hj3ftvzu-1) * *ftnjamu2] = *dn3iasxug; wpuarq2m[2 * *hj3ftvzu-1 + (ayfnwr1v-1) * *npjlv3mr] = *uaf2xgqy; } else { wpuarq2m[2 * *hj3ftvzu-1 + (ayfnwr1v-1) * *npjlv3mr] = sqrt(rbne6ouj[ayfnwr1v-1 + (2 * *hj3ftvzu-1) * *ftnjamu2]); } if (xd4mybgja < fvn3iasxug) { xd4mybgja = fvn3iasxug; } ghz9vuba[ayfnwr1v-1 + (2 * *hj3ftvzu-2) * *ftnjamu2] = m0ibglfx[2 * *hj3ftvzu-2 + (ayfnwr1v-1) * *wy1vqfzu] + tlgduey8[ayfnwr1v-1 + (*hj3ftvzu-1) * *ftnjamu2] / xk7dnvei - 1.0e0; ghz9vuba[ayfnwr1v-1 + (2 * *hj3ftvzu-1) * *ftnjamu2] = m0ibglfx[2 * *hj3ftvzu-1 + (ayfnwr1v-1) * *wy1vqfzu] + dldshape / xd4mybgja; } } if (*qfx3vhct == 3) { anopu9vi = 34.0e0; fvn3iasxug = 1.0e-20; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { if (m0ibglfx[2 * *hj3ftvzu-1 + (ayfnwr1v-1) * *wy1vqfzu] > anopu9vi) { hdqsx7bk = exp(anopu9vi); lbgwvp3q = 1; } else if (m0ibglfx[2 * *hj3ftvzu-1 + (ayfnwr1v-1) * *wy1vqfzu] < -anopu9vi) { hdqsx7bk = exp(-anopu9vi); lbgwvp3q = 1; } else { hdqsx7bk = exp(m0ibglfx[2 * *hj3ftvzu-1 + (ayfnwr1v-1) * *wy1vqfzu]); lbgwvp3q = 0; } xk7dnvei = t8hwvalr[*hj3ftvzu-1 + (ayfnwr1v-1) * *afpc0kns]; if (xk7dnvei < fvn3iasxug) { xk7dnvei = fvn3iasxug; } tmp1 = tlgduey8[ayfnwr1v-1 + (*hj3ftvzu-1) * *ftnjamu2] + hdqsx7bk; tyee_C_vdgam1(&tmp1, &xd4mybgja, &okobr6tcex); if (okobr6tcex != 1) { Rprintf("error in dlgpwe0c okobr6tcex 3: %3d \n", okobr6tcex); } tyee_C_vdgam1(&hdqsx7bk, &xd4mybgjb, &okobr6tcex); if (okobr6tcex != 1) { Rprintf("error in dlgpwe0c okobr6tcex 4: %3d \n", okobr6tcex); } dldk = xd4mybgja - xd4mybgjb - (tlgduey8[ayfnwr1v-1 + (*hj3ftvzu-1) * *ftnjamu2] + hdqsx7bk) / (xk7dnvei + hdqsx7bk) + 1.0 + log(hdqsx7bk / (xk7dnvei + hdqsx7bk)); dkdeta = hdqsx7bk; kkmat[0] = hdqsx7bk; nm0eljqk[0] = xk7dnvei; sguwj9ty = 5000; fvlmz9iyC_enbin9(bzmd6ftvmat, kkmat, nm0eljqk, &n2kersmx, &pqneb2ra, &dvhw1ulq, &pqneb2ra, &ux3nadiw, rsynp1go, &sguwj9ty); if (dvhw1ulq != 1) { *zjkrtol8 = 5; Rprintf("Error. 
Exiting enbin9; dvhw1ulq is %d\n", dvhw1ulq); return; } ed2ldk2 = -bzmd6ftvmat[0] - 1.0e0 / hdqsx7bk + 1.0e0 / (hdqsx7bk + xk7dnvei); rbne6ouj[ayfnwr1v-1 + (2 * *hj3ftvzu-2) * *ftnjamu2] = ufgqj9ck[ayfnwr1v-1] * xk7dnvei * hdqsx7bk / (xk7dnvei + hdqsx7bk); rbne6ouj[ayfnwr1v-1 + (2 * *hj3ftvzu-1) * *ftnjamu2] = ufgqj9ck[ayfnwr1v-1] * hdqsx7bk * (-bzmd6ftvmat[0] * hdqsx7bk - 1.0e0 + hdqsx7bk / (hdqsx7bk + xk7dnvei)); if (rbne6ouj[ayfnwr1v-1 + (2 * *hj3ftvzu-2) * *ftnjamu2] < *dn3iasxug) { rbne6ouj[ayfnwr1v-1 + (2 * *hj3ftvzu-2) * *ftnjamu2] = *dn3iasxug; wpuarq2m[2 * *hj3ftvzu-2 + (ayfnwr1v-1) * *npjlv3mr] = *uaf2xgqy; } else wpuarq2m[2 * *hj3ftvzu-2 + (ayfnwr1v-1) * *npjlv3mr] = sqrt(rbne6ouj[ayfnwr1v-1 + (2 * *hj3ftvzu-2) * *ftnjamu2]); if (rbne6ouj[ayfnwr1v-1 + (2 * *hj3ftvzu-1) * *ftnjamu2] < *dn3iasxug) { rbne6ouj[ayfnwr1v-1 + (2 * *hj3ftvzu-1) * *ftnjamu2] = *dn3iasxug; wpuarq2m[2 * *hj3ftvzu-1 + (ayfnwr1v-1) * *npjlv3mr] = *uaf2xgqy; } else { wpuarq2m[2 * *hj3ftvzu-1 + (ayfnwr1v-1) * *npjlv3mr] = sqrt(rbne6ouj[ayfnwr1v-1 + (2 * *hj3ftvzu-1) * *ftnjamu2]); } ghz9vuba[ayfnwr1v-1 + (2 * *hj3ftvzu-2) * *ftnjamu2] = m0ibglfx[2 * *hj3ftvzu-2 + (ayfnwr1v-1) * *wy1vqfzu] + tlgduey8[ayfnwr1v-1 + (*hj3ftvzu-1) * *ftnjamu2] / xk7dnvei - 1.0e0; ghz9vuba[ayfnwr1v-1 + (2 * *hj3ftvzu-1) * *ftnjamu2] = m0ibglfx[2 * *hj3ftvzu-1 + (ayfnwr1v-1) * *wy1vqfzu] + dldk / (dkdeta * ed2ldk2); } } if (*qfx3vhct == 8) { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9rbne6ouj = *fpdlcqk9ufgqj9ck++; *fpdlcqk9wpuarq2m = sqrt(*fpdlcqk9rbne6ouj); *fpdlcqk9ghz9vuba++ = *fpdlcqk9tlgduey8++; fpdlcqk9wpuarq2m += *npjlv3mr; fpdlcqk9rbne6ouj++; } } if (*unhycz0e == 1) { fpdlcqk9ghz9vuba = ghz9vuba + ((*qfx3vhct == 3 || *qfx3vhct == 5) ? (2 * *hj3ftvzu-2) : (*hj3ftvzu-1)) * *ftnjamu2; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9ghz9vuba -= *fpdlcqk9vm4xjosb++; fpdlcqk9ghz9vuba++; } } } void cqo_2(double lncwkfq7[], double tlgduey8[], double kifxa0he[], double ufgqj9ck[], double m0ibglfx[], double vm4xjosb[], double t8hwvalr[], double ghz9vuba[], double rbne6ouj[], double wpuarq2m[], double w8znmyce[], double vc6hatuj[], double fasrkub3[], int ges1xpkr[], int *ftnjamu2, int *wy1vqfzu, int *afpc0kns, int *br5ovgcj, int *npjlv3mr, int *zjkrtol8, int xui7hqwl[], double *tlq9wpes, double zshtfg8c[], double y7sdgtqi[]) { int ayfnwr1v, yq6lorbx, gp1jxzuh, bpvaqm5z, yu6izdrc = 0, kcm6jfob, fmzq7aob, xwdf5ltg, kvowz9ht, f7svlajr, qfx3vhct, c5aesxkul, pqneb2ra = 1; int ybnsqgo9, algpft4y, qemj9asg, xlpjcg3s, eu3oxvyb, vtsou9pz, unhycz0e, wr0lbopv; double dn3iasxug, wiptsjx8, bh2vgiay, pvofyg8z = 1.0e-7, uylxqtc7 = 0.0, uaf2xgqy, vsoihn1r, rsynp1go; // rpto5qwb, double *qnwamo0e1, *fpdlcqk9w8znmyce, *fpdlcqk9m0ibglfx, *fpdlcqk9vm4xjosb, *fpdlcqk9vc6hatuj, *fpdlcqk9wpuarq2m, *fpdlcqk9ghz9vuba; double hmayv1xt1 = 10.0, hmayv1xt2 = 0.0; int x1jrewny = 0; double *wkumc9idrpto5qwb, *wkumc9idtwk; wkumc9idrpto5qwb = Calloc(1 + *afpc0kns , double); wkumc9idtwk = Calloc(*wy1vqfzu * *ftnjamu2 * 2, double); xwdf5ltg = xui7hqwl[0]; fmzq7aob = xui7hqwl[1]; xlpjcg3s = xui7hqwl[2]; kvowz9ht = xui7hqwl[3]; f7svlajr = xui7hqwl[4]; qfx3vhct = xui7hqwl[5]; c5aesxkul = xui7hqwl[6]; xui7hqwl[8] = 0; eu3oxvyb = xui7hqwl[10]; vtsou9pz = xui7hqwl[11]; unhycz0e = xui7hqwl[13]; wr0lbopv = xui7hqwl[17]; dn3iasxug = y7sdgtqi[0]; uaf2xgqy = sqrt(dn3iasxug); if (qfx3vhct == 1 || qfx3vhct == 4) vsoihn1r = log(dn3iasxug); bh2vgiay = y7sdgtqi[1]; rsynp1go = y7sdgtqi[2]; hmayv1xt1 -= bh2vgiay; hmayv1xt2 -= rsynp1go; hmayv1xt1 += 
hmayv1xt2; *zjkrtol8 = 1; yiumjq3nflncwkfq72(lncwkfq7, w8znmyce, ftnjamu2, wy1vqfzu, br5ovgcj, &xwdf5ltg, &qfx3vhct, afpc0kns, &fmzq7aob, &eu3oxvyb, &unhycz0e, vm4xjosb); ceqzd1hi653: hmayv1xt2 = 1.0e0; if (f7svlajr == 0) { for (yq6lorbx = 1; yq6lorbx <= *afpc0kns; yq6lorbx++) { yiumjq3nietam6(tlgduey8, m0ibglfx, y7sdgtqi, ftnjamu2, wy1vqfzu, afpc0kns, &qfx3vhct, &yq6lorbx, ufgqj9ck, &wr0lbopv); } } else if (f7svlajr == 2) { yiumjq3npkc4ejib(w8znmyce, zshtfg8c, m0ibglfx, ftnjamu2, wy1vqfzu, br5ovgcj, &xlpjcg3s, &vtsou9pz, &yu6izdrc, &qfx3vhct, &unhycz0e, vm4xjosb); } yiumjq3nnipyajc1(m0ibglfx, t8hwvalr, ftnjamu2, wy1vqfzu, afpc0kns, &qfx3vhct, &yu6izdrc); if (f7svlajr == 2) { yiumjq3nshjlwft5(&qfx3vhct, tlgduey8, ufgqj9ck, t8hwvalr, ftnjamu2, wy1vqfzu, afpc0kns, &kvowz9ht, m0ibglfx, wkumc9idrpto5qwb, &yu6izdrc, &dn3iasxug, &vsoihn1r, &pqneb2ra); } else { wkumc9idrpto5qwb[0] = -1.0e0; } for (kcm6jfob = 1; kcm6jfob <= c5aesxkul; kcm6jfob++) { for (yq6lorbx = 1; yq6lorbx <= *afpc0kns; yq6lorbx++) { yiumjq3ndlgpwe0c(tlgduey8, ufgqj9ck, m0ibglfx, t8hwvalr, ghz9vuba, rbne6ouj, wpuarq2m, &rsynp1go, &dn3iasxug, &uaf2xgqy, ftnjamu2, wy1vqfzu, afpc0kns, br5ovgcj, npjlv3mr, &yq6lorbx, &qfx3vhct, zjkrtol8, &unhycz0e, vm4xjosb); } fpdlcqk9vc6hatuj = vc6hatuj; fpdlcqk9w8znmyce = w8znmyce; for (yq6lorbx = 1; yq6lorbx <= xlpjcg3s; yq6lorbx++) for (ayfnwr1v = 1; ayfnwr1v <= *br5ovgcj; ayfnwr1v++) *fpdlcqk9vc6hatuj++ = *fpdlcqk9w8znmyce++; if (qfx3vhct == 3 || qfx3vhct == 5) { Rprintf("20100410; Error: this definitely does not work\n"); if (2 * *wy1vqfzu * *ftnjamu2 != *br5ovgcj) //Rprintf Rprintf("Error: 2 * *wy1vqfzu * *ftnjamu2 != *br5ovgcj in C_cqo_2\n"); fpdlcqk9vc6hatuj = vc6hatuj; for (gp1jxzuh = 1; gp1jxzuh <= xlpjcg3s; gp1jxzuh++) { fpdlcqk9wpuarq2m = wpuarq2m; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { for (bpvaqm5z = 1; bpvaqm5z <= *wy1vqfzu; bpvaqm5z++) { *fpdlcqk9vc6hatuj *= *fpdlcqk9wpuarq2m++; fpdlcqk9vc6hatuj++; } } } } else { if (*wy1vqfzu * *ftnjamu2 != *br5ovgcj) //Rprintf Rprintf("Error: *wy1vqfzu * *ftnjamu2 != *br5ovgcj in C_cqo_2\n"); fpdlcqk9vc6hatuj = vc6hatuj; for (gp1jxzuh = 1; gp1jxzuh <= xlpjcg3s; gp1jxzuh++) { fpdlcqk9wpuarq2m = wpuarq2m; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { for (bpvaqm5z = 1; bpvaqm5z <= *wy1vqfzu; bpvaqm5z++) { *fpdlcqk9vc6hatuj *= *fpdlcqk9wpuarq2m++; fpdlcqk9vc6hatuj++; } } } } for (gp1jxzuh = 1; gp1jxzuh <= xlpjcg3s; gp1jxzuh++) ges1xpkr[gp1jxzuh-1] = gp1jxzuh; F77_CALL(vqrdca)(vc6hatuj, br5ovgcj, br5ovgcj, &xlpjcg3s, fasrkub3, ges1xpkr, wkumc9idtwk, &qemj9asg, &pvofyg8z); if (qemj9asg != xlpjcg3s) { *zjkrtol8 = 2; Rprintf("Failure or Error in cqo_2: vc6hatuj is not of full xwdf5ltg.\n"); Free(wkumc9idrpto5qwb); Free(wkumc9idtwk); return; } if (*npjlv3mr != *wy1vqfzu) //Rprintf Rprintf("Error: *wy1vqfzu != *npjlv3mr in C_cqo_2\n"); qnwamo0e1 = wkumc9idtwk; fpdlcqk9wpuarq2m = wpuarq2m; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { fpdlcqk9ghz9vuba = ghz9vuba + ayfnwr1v-1; for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { *qnwamo0e1++ = *fpdlcqk9wpuarq2m++ * *fpdlcqk9ghz9vuba; fpdlcqk9ghz9vuba += *ftnjamu2; } } ybnsqgo9 = 101; F77_CALL(vdqrsl)(vc6hatuj, br5ovgcj, br5ovgcj, &qemj9asg, fasrkub3, wkumc9idtwk, &uylxqtc7, wkumc9idtwk + *wy1vqfzu * *ftnjamu2, zshtfg8c, &uylxqtc7, m0ibglfx, &ybnsqgo9, &algpft4y); if (*npjlv3mr != *wy1vqfzu) //Rprintf Rprintf("Error: *wy1vqfzu != *npjlv3mr in C_cqo_2\n"); fpdlcqk9m0ibglfx = m0ibglfx; fpdlcqk9wpuarq2m = wpuarq2m; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { 
for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { *fpdlcqk9m0ibglfx /= *fpdlcqk9wpuarq2m++; fpdlcqk9m0ibglfx++; } } if (unhycz0e == 1) { if (qfx3vhct == 3 || qfx3vhct == 5) { if (2 * *afpc0kns != *wy1vqfzu) //Rprintf Rprintf("Error: 2 * *afpc0kns != *wy1vqfzu in C_cqo_2\n"); fpdlcqk9m0ibglfx = m0ibglfx; fpdlcqk9vm4xjosb = vm4xjosb; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { for (yq6lorbx = 1; yq6lorbx <= *afpc0kns; yq6lorbx++) { *fpdlcqk9m0ibglfx += *fpdlcqk9vm4xjosb; fpdlcqk9m0ibglfx += 2; } fpdlcqk9vm4xjosb++; } } else { fpdlcqk9m0ibglfx = m0ibglfx; fpdlcqk9vm4xjosb = vm4xjosb; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { *fpdlcqk9m0ibglfx += *fpdlcqk9vm4xjosb; fpdlcqk9m0ibglfx++; } fpdlcqk9vm4xjosb++; } } } yiumjq3nnipyajc1(m0ibglfx, t8hwvalr, ftnjamu2, wy1vqfzu, afpc0kns, &qfx3vhct, &yu6izdrc); yiumjq3nshjlwft5(&qfx3vhct, tlgduey8, ufgqj9ck, t8hwvalr, ftnjamu2, wy1vqfzu, afpc0kns, &kvowz9ht, m0ibglfx, tlq9wpes, &yu6izdrc, &dn3iasxug, &vsoihn1r, &pqneb2ra); wiptsjx8 = fabs(*tlq9wpes - *wkumc9idrpto5qwb) / (1.0e0 + fabs(*tlq9wpes)); if (wiptsjx8 < bh2vgiay) { // xxx *zjkrtol8 = 0; xui7hqwl[7] = kcm6jfob; if (qfx3vhct == 3 || qfx3vhct == 5) { yiumjq3nshjlwft5(&qfx3vhct, tlgduey8, ufgqj9ck, t8hwvalr, ftnjamu2, wy1vqfzu, afpc0kns, &kvowz9ht, m0ibglfx, tlq9wpes, &yu6izdrc, &dn3iasxug, &vsoihn1r, &yu6izdrc); } x1jrewny = 1; goto ceqzd1hi20097; } else { // xxx and *wkumc9idrpto5qwb = *tlq9wpes; x1jrewny = 0; } } ceqzd1hi20097: hmayv1xt1 = 0.0e0; if (x1jrewny == 1) { Free(wkumc9idrpto5qwb); Free(wkumc9idtwk); return; } if (f7svlajr == 1 || f7svlajr == 2) { f7svlajr = 0; xui7hqwl[8] = 1; goto ceqzd1hi653; } *zjkrtol8 = 3; Free(wkumc9idrpto5qwb); Free(wkumc9idtwk); } void cqo_1(double lncwkfq7[], double tlgduey8[], double kifxa0he[], double ufgqj9ck[], double m0ibglfx[], double vm4xjosb[], double t8hwvalr[], double ghz9vuba[], double rbne6ouj[], double wpuarq2m[], double w8znmyce[], double vc6hatuj[], double fasrkub3[], int ges1xpkr[], int *ftnjamu2, int *wy1vqfzu, int *afpc0kns, int *br5ovgcj, int *npjlv3mr, int *zjkrtol8, int xui7hqwl[], double *tlq9wpes, double zshtfg8c[], double y7sdgtqi[]) { int ayfnwr1v, hj3ftvzu, yu6izdrc = 0, pqneb2ra = 1, wr0lbopv, kcm6jfob, unhycz0e, xwdf5ltg, kvowz9ht, f7svlajr, qfx3vhct, c5aesxkul, ybnsqgo9, algpft4y, qemj9asg, xlpjcg3s, vtsou9pz, yru9olks; double dn3iasxug, wiptsjx8, pvofyg8z = 1.0e-7, uylxqtc7 = 0.0, bh2vgiay, uaf2xgqy, vsoihn1r, rsynp1go, rpto5qwb; double *fpdlcqk9zshtfg8c, *fpdlcqk9w8znmyce, *fpdlcqk9m0ibglfx, *fpdlcqk9m0ibglfx1, *fpdlcqk9m0ibglfx2, *fpdlcqk9vm4xjosb, *fpdlcqk9vc6hatuj, *fpdlcqk9twk, *fpdlcqk9wpuarq2m, *fpdlcqk9wpuarq2m1, *fpdlcqk9wpuarq2m2, *fpdlcqk9ghz9vuba1, *fpdlcqk9ghz9vuba2; int gp1jxzuh; double hmayv1xt = 2.0, Totdev = 0.0e0; double *wkumc9idtwk; wkumc9idtwk = Calloc(*br5ovgcj * 3 , double); xwdf5ltg = xui7hqwl[0]; xlpjcg3s = xui7hqwl[2]; kvowz9ht = xui7hqwl[3]; f7svlajr = xui7hqwl[4]; qfx3vhct = xui7hqwl[5]; c5aesxkul = xui7hqwl[6]; xui7hqwl[8] = 0; // twice vtsou9pz = xui7hqwl[11]; zjkrtol8[0] = -1; for(ayfnwr1v = 1; ayfnwr1v <= *afpc0kns; ayfnwr1v++) zjkrtol8[ayfnwr1v] = 1; if (vtsou9pz != 1) { Rprintf("Error: vtsou9pz is not unity in cqo_1!\n"); *zjkrtol8 = 4; Free(wkumc9idtwk); return; } unhycz0e = xui7hqwl[13]; yru9olks = xui7hqwl[15]; wr0lbopv = xui7hqwl[17]; //20120222; correct but unused. 
dn3iasxug = y7sdgtqi[0]; uaf2xgqy = sqrt(dn3iasxug); if (qfx3vhct == 1 || qfx3vhct == 4) vsoihn1r = log(dn3iasxug); bh2vgiay = y7sdgtqi[1]; rsynp1go = y7sdgtqi[2]; hmayv1xt -= rsynp1go; hmayv1xt += hmayv1xt; yiumjq3nflncwkfq71(lncwkfq7, w8znmyce, ftnjamu2, &xwdf5ltg, &qfx3vhct, vm4xjosb, br5ovgcj, &xlpjcg3s, kifxa0he, &yru9olks, &unhycz0e); for (hj3ftvzu = 1; hj3ftvzu <= *afpc0kns; hj3ftvzu++) { ceqzd1hi653: hmayv1xt = 1.0e0; if (f7svlajr == 0) { yiumjq3nietam6(tlgduey8, m0ibglfx, y7sdgtqi, ftnjamu2, wy1vqfzu, afpc0kns, &qfx3vhct, &hj3ftvzu, ufgqj9ck, &wr0lbopv); } else if (f7svlajr == 2) { yiumjq3npkc4ejib(w8znmyce, zshtfg8c + (hj3ftvzu-1) * xlpjcg3s, m0ibglfx, ftnjamu2, wy1vqfzu, br5ovgcj, &xlpjcg3s, &vtsou9pz, &hj3ftvzu, &qfx3vhct, &unhycz0e, vm4xjosb); } yiumjq3nnipyajc1(m0ibglfx, t8hwvalr, ftnjamu2, wy1vqfzu, afpc0kns, &qfx3vhct, &hj3ftvzu); if (f7svlajr == 2) { yiumjq3nshjlwft5(&qfx3vhct, tlgduey8, ufgqj9ck, t8hwvalr, ftnjamu2, wy1vqfzu, afpc0kns, &kvowz9ht, m0ibglfx, &rpto5qwb, &hj3ftvzu, &dn3iasxug, &vsoihn1r, &pqneb2ra); } else { rpto5qwb = -1.0e0; } for (kcm6jfob = 1; kcm6jfob <= c5aesxkul; kcm6jfob++) { yiumjq3ndlgpwe0c(tlgduey8, ufgqj9ck, m0ibglfx, t8hwvalr, ghz9vuba, rbne6ouj, wpuarq2m, &rsynp1go, &dn3iasxug, &uaf2xgqy, ftnjamu2, wy1vqfzu, afpc0kns, br5ovgcj, npjlv3mr, &hj3ftvzu, &qfx3vhct, zjkrtol8 + hj3ftvzu, &unhycz0e, vm4xjosb); fpdlcqk9vc6hatuj = vc6hatuj; fpdlcqk9w8znmyce = w8znmyce; for (gp1jxzuh = 1; gp1jxzuh <= xlpjcg3s; gp1jxzuh++) for (ayfnwr1v = 1; ayfnwr1v <= *br5ovgcj; ayfnwr1v++) *fpdlcqk9vc6hatuj++ = *fpdlcqk9w8znmyce++; if (qfx3vhct == 3 || qfx3vhct == 5) { if (2 * *ftnjamu2 != *br5ovgcj) //Rprintf Rprintf("Error: 2 * *ftnjamu2 != *br5ovgcj in C_cqo_1\n"); fpdlcqk9vc6hatuj = vc6hatuj; for (gp1jxzuh = 1; gp1jxzuh <= xlpjcg3s; gp1jxzuh++) { fpdlcqk9wpuarq2m2 = wpuarq2m + 2*hj3ftvzu -2; fpdlcqk9wpuarq2m1 = wpuarq2m + 2*hj3ftvzu -1; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9vc6hatuj *= *fpdlcqk9wpuarq2m2; fpdlcqk9vc6hatuj++; *fpdlcqk9vc6hatuj *= *fpdlcqk9wpuarq2m1; fpdlcqk9vc6hatuj++; fpdlcqk9wpuarq2m1 += *npjlv3mr; fpdlcqk9wpuarq2m2 += *npjlv3mr; } } } else { if (1 * *ftnjamu2 != *br5ovgcj) //Rprintf Rprintf("Error: 1 * *ftnjamu2 != *br5ovgcj in C_cqo_1\n"); fpdlcqk9vc6hatuj = vc6hatuj; for (gp1jxzuh = 1; gp1jxzuh <= xlpjcg3s; gp1jxzuh++) { fpdlcqk9wpuarq2m = wpuarq2m + hj3ftvzu -1; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9vc6hatuj *= *fpdlcqk9wpuarq2m; fpdlcqk9vc6hatuj++; fpdlcqk9wpuarq2m += *npjlv3mr; } } } for (gp1jxzuh = 1; gp1jxzuh <= xlpjcg3s; gp1jxzuh++) ges1xpkr[gp1jxzuh-1] = gp1jxzuh; F77_CALL(vqrdca)(vc6hatuj, br5ovgcj, br5ovgcj, &xlpjcg3s, fasrkub3, ges1xpkr, wkumc9idtwk, &qemj9asg, &pvofyg8z); if (qemj9asg != xlpjcg3s) { Rprintf("Error in cqo_1: vc6hatuj is not of full xwdf5ltg.\n"); *zjkrtol8 = 2; Free(wkumc9idtwk); return; } if (qfx3vhct == 3 || qfx3vhct == 5) { fpdlcqk9ghz9vuba1 = ghz9vuba + (2*hj3ftvzu-1) * *ftnjamu2; fpdlcqk9ghz9vuba2 = ghz9vuba + (2*hj3ftvzu-2) * *ftnjamu2; fpdlcqk9wpuarq2m1 = wpuarq2m + 2*hj3ftvzu-1; fpdlcqk9wpuarq2m2 = wpuarq2m + 2*hj3ftvzu-2; fpdlcqk9twk = wkumc9idtwk; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9twk++ = *fpdlcqk9wpuarq2m2 * *fpdlcqk9ghz9vuba2++; *fpdlcqk9twk++ = *fpdlcqk9wpuarq2m1 * *fpdlcqk9ghz9vuba1++; fpdlcqk9wpuarq2m1 += *npjlv3mr; fpdlcqk9wpuarq2m2 += *npjlv3mr; } } else { fpdlcqk9ghz9vuba1 = ghz9vuba + (hj3ftvzu-1) * *ftnjamu2; fpdlcqk9twk = wkumc9idtwk; fpdlcqk9wpuarq2m = wpuarq2m + hj3ftvzu-1; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; 
ayfnwr1v++) { *fpdlcqk9twk++ = *fpdlcqk9wpuarq2m * *fpdlcqk9ghz9vuba1++; fpdlcqk9wpuarq2m += *npjlv3mr; } } ybnsqgo9 = 101; F77_CALL(vdqrsl)(vc6hatuj, br5ovgcj, br5ovgcj, &qemj9asg, fasrkub3, wkumc9idtwk, &uylxqtc7, wkumc9idtwk + *br5ovgcj, zshtfg8c + (hj3ftvzu-1) * xlpjcg3s, &uylxqtc7, wkumc9idtwk + 2 * *br5ovgcj, &ybnsqgo9, &algpft4y); fpdlcqk9twk = wkumc9idtwk; fpdlcqk9zshtfg8c = zshtfg8c + (hj3ftvzu-1) * xlpjcg3s; for (gp1jxzuh = 1; gp1jxzuh <= xlpjcg3s; gp1jxzuh++) { *fpdlcqk9twk++ = *fpdlcqk9zshtfg8c++; } fpdlcqk9twk = wkumc9idtwk; fpdlcqk9zshtfg8c = zshtfg8c + (hj3ftvzu-1) * xlpjcg3s; for (gp1jxzuh = 1; gp1jxzuh <= xlpjcg3s; gp1jxzuh++) { *(fpdlcqk9zshtfg8c + ges1xpkr[gp1jxzuh-1] - 1) = *fpdlcqk9twk++; } if (qfx3vhct == 3 || qfx3vhct == 5) { fpdlcqk9m0ibglfx2 = m0ibglfx + 2 * hj3ftvzu -2; fpdlcqk9m0ibglfx1 = m0ibglfx + 2 * hj3ftvzu -1; fpdlcqk9twk = wkumc9idtwk + 2 * *br5ovgcj; fpdlcqk9wpuarq2m2 = wpuarq2m + 2 * hj3ftvzu -2; fpdlcqk9wpuarq2m1 = wpuarq2m + 2 * hj3ftvzu -1; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9m0ibglfx2 = *fpdlcqk9twk++ / *fpdlcqk9wpuarq2m2; *fpdlcqk9m0ibglfx1 = *fpdlcqk9twk++ / *fpdlcqk9wpuarq2m1; fpdlcqk9m0ibglfx1 += *wy1vqfzu; fpdlcqk9m0ibglfx2 += *wy1vqfzu; fpdlcqk9wpuarq2m1 += *npjlv3mr; fpdlcqk9wpuarq2m2 += *npjlv3mr; } if (unhycz0e == 1) { fpdlcqk9m0ibglfx = m0ibglfx + 2*hj3ftvzu-2; fpdlcqk9vm4xjosb = vm4xjosb; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9m0ibglfx += *fpdlcqk9vm4xjosb++; fpdlcqk9m0ibglfx += *wy1vqfzu; } } } else { fpdlcqk9m0ibglfx = m0ibglfx + hj3ftvzu -1; fpdlcqk9twk = wkumc9idtwk + 2 * *br5ovgcj; fpdlcqk9wpuarq2m = wpuarq2m + hj3ftvzu -1; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9m0ibglfx = *fpdlcqk9twk++ / *fpdlcqk9wpuarq2m; fpdlcqk9m0ibglfx += *wy1vqfzu; fpdlcqk9wpuarq2m += *npjlv3mr; } if (unhycz0e == 1) { fpdlcqk9m0ibglfx = m0ibglfx + hj3ftvzu-1; fpdlcqk9vm4xjosb = vm4xjosb; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9m0ibglfx += *fpdlcqk9vm4xjosb++; fpdlcqk9m0ibglfx += *wy1vqfzu; } } } yiumjq3nnipyajc1(m0ibglfx, t8hwvalr, ftnjamu2, wy1vqfzu, afpc0kns, &qfx3vhct, &hj3ftvzu); yiumjq3nshjlwft5(&qfx3vhct, tlgduey8, ufgqj9ck, t8hwvalr, ftnjamu2, wy1vqfzu, afpc0kns, &kvowz9ht, m0ibglfx, tlq9wpes + hj3ftvzu, &hj3ftvzu, &dn3iasxug, &vsoihn1r, &pqneb2ra); wiptsjx8 = fabs(tlq9wpes[hj3ftvzu] - rpto5qwb) / (1.0e0 + fabs(tlq9wpes[hj3ftvzu])); if (wiptsjx8 < bh2vgiay) { zjkrtol8[hj3ftvzu] = 0; xui7hqwl[7] = kcm6jfob; if (qfx3vhct == 3 || qfx3vhct == 5) { yiumjq3nshjlwft5(&qfx3vhct, tlgduey8, ufgqj9ck, t8hwvalr, ftnjamu2, wy1vqfzu, afpc0kns, &kvowz9ht, m0ibglfx, tlq9wpes + hj3ftvzu, &hj3ftvzu, &dn3iasxug, &vsoihn1r, &yu6izdrc); } Totdev += tlq9wpes[hj3ftvzu]; goto ceqzd1hi1011; } else { rpto5qwb = tlq9wpes[hj3ftvzu]; } } Rprintf("cqo_1; no convergence for Species "); Rprintf("number %3d. Trying internal starting values.\n", hj3ftvzu); if (f7svlajr == 1) { f7svlajr = 0; xui7hqwl[8] = 1; goto ceqzd1hi653; } *zjkrtol8 = 3; zjkrtol8[hj3ftvzu] = 2; Rprintf("cqo_1; no convergence for Species "); Rprintf("number %3d. 
Continuing on with other species.\n", hj3ftvzu); Totdev += tlq9wpes[hj3ftvzu]; ceqzd1hi1011: hmayv1xt = 3.0e0; } if(zjkrtol8[0] == -1) for(ayfnwr1v = 1; ayfnwr1v <= *afpc0kns; ayfnwr1v++) if(zjkrtol8[ayfnwr1v] != 0) zjkrtol8[0] = 1; if(zjkrtol8[0] == -1) zjkrtol8[0] = 0; *tlq9wpes = Totdev; Free(wkumc9idtwk); } void dcqo1(double lncwkfq7[], double tlgduey8[], double kifxa0he[], double ufgqj9ck[], double m0ibglfx[], double vm4xjosb[], double t8hwvalr[], double ghz9vuba[], double rbne6ouj[], double wpuarq2m[], double w8znmyce[], double vc6hatuj[], double fasrkub3[], int ges1xpkr[], int *ftnjamu2, int *wy1vqfzu, int *afpc0kns, int *br5ovgcj, int *npjlv3mr, int *zjkrtol8, int xui7hqwl[], double *tlq9wpes, double zshtfg8c[], double y7sdgtqi[], double atujnxb8[], double k7hulceq[], int *eoviz2fb, double kpzavbj3mat[], double *ydcnh9xl) { int ayfnwr1v, gp1jxzuh, xvr7bonh, hpmwnav2, idlosrw8, xwdf5ltg = xui7hqwl[ 0], vtsou9pz; int exrkcn5d = xui7hqwl[12]; double fxnhilr3, *fpdlcqk9k7hulceq, *fpdlcqk9kpzavbj3mat, *fpdlcqk9lncwkfq7, *fpdlcqk9yxiwebc5, *fpdlcqk9atujnxb8; double *wkumc9idajul8wkv, *wkumc9iddev0, *wkumc9idyxiwebc5; wkumc9idajul8wkv = Calloc(exrkcn5d , double); wkumc9iddev0 = Calloc(1 + *afpc0kns , double); wkumc9idyxiwebc5 = Calloc(*ftnjamu2 * xwdf5ltg , double); fpdlcqk9kpzavbj3mat = kpzavbj3mat; idlosrw8 = xui7hqwl[ 4]; vtsou9pz = xui7hqwl[11]; fpdlcqk9lncwkfq7 = lncwkfq7; fpdlcqk9yxiwebc5 = wkumc9idyxiwebc5; for (hpmwnav2 = 1; hpmwnav2 <= xwdf5ltg; hpmwnav2++) { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { fxnhilr3 = 0.0e0; fpdlcqk9k7hulceq = k7hulceq + (hpmwnav2-1) * *eoviz2fb; fpdlcqk9atujnxb8 = atujnxb8 + ayfnwr1v-1; for (xvr7bonh = 1; xvr7bonh <= *eoviz2fb; xvr7bonh++) { fxnhilr3 += *fpdlcqk9atujnxb8 * *fpdlcqk9k7hulceq++; fpdlcqk9atujnxb8 += *ftnjamu2; } *fpdlcqk9yxiwebc5++ = *fpdlcqk9lncwkfq7++ = fxnhilr3; } } if (vtsou9pz == 1) { cqo_1(lncwkfq7, tlgduey8, kifxa0he, ufgqj9ck, m0ibglfx, vm4xjosb, t8hwvalr, ghz9vuba, rbne6ouj, wpuarq2m, w8znmyce, vc6hatuj, fasrkub3, ges1xpkr, ftnjamu2, wy1vqfzu, afpc0kns, br5ovgcj, npjlv3mr, zjkrtol8, xui7hqwl, wkumc9iddev0, wkumc9idajul8wkv, y7sdgtqi); } else { cqo_2(lncwkfq7, tlgduey8, kifxa0he, ufgqj9ck, m0ibglfx, vm4xjosb, t8hwvalr, ghz9vuba, rbne6ouj, wpuarq2m, w8znmyce, vc6hatuj, fasrkub3, ges1xpkr, ftnjamu2, wy1vqfzu, afpc0kns, br5ovgcj, npjlv3mr, zjkrtol8, xui7hqwl, wkumc9iddev0, wkumc9idajul8wkv, y7sdgtqi); } fpdlcqk9atujnxb8 = atujnxb8; for (xvr7bonh = 1; xvr7bonh <= *eoviz2fb; xvr7bonh++) { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9atujnxb8 *= *ydcnh9xl; fpdlcqk9atujnxb8++; } } for (hpmwnav2 = 1; hpmwnav2 <= xwdf5ltg; hpmwnav2++) { for (xvr7bonh = 1; xvr7bonh <= *eoviz2fb; xvr7bonh++) { fpdlcqk9lncwkfq7 = lncwkfq7 + (hpmwnav2-1) * *ftnjamu2; fpdlcqk9yxiwebc5 = wkumc9idyxiwebc5 + (hpmwnav2-1) * *ftnjamu2; fpdlcqk9atujnxb8 = atujnxb8 + (xvr7bonh-1) * *ftnjamu2; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9lncwkfq7++ = *fpdlcqk9yxiwebc5++ + *fpdlcqk9atujnxb8++; } xui7hqwl[4] = 2; for (gp1jxzuh = 1; gp1jxzuh <= exrkcn5d; gp1jxzuh++) zshtfg8c[gp1jxzuh-1] = wkumc9idajul8wkv[gp1jxzuh-1]; if (vtsou9pz == 1) { cqo_1(lncwkfq7, tlgduey8, kifxa0he, ufgqj9ck, m0ibglfx, vm4xjosb, t8hwvalr, ghz9vuba, rbne6ouj, wpuarq2m, w8znmyce, vc6hatuj, fasrkub3, ges1xpkr, ftnjamu2, wy1vqfzu, afpc0kns, br5ovgcj, npjlv3mr, zjkrtol8, xui7hqwl, tlq9wpes, zshtfg8c, y7sdgtqi); } else { cqo_2(lncwkfq7, tlgduey8, kifxa0he, ufgqj9ck, m0ibglfx, vm4xjosb, t8hwvalr, ghz9vuba, rbne6ouj, wpuarq2m, w8znmyce, vc6hatuj, 
fasrkub3, ges1xpkr, ftnjamu2, wy1vqfzu, afpc0kns, br5ovgcj, npjlv3mr, zjkrtol8, xui7hqwl, tlq9wpes, zshtfg8c, y7sdgtqi); } if (*zjkrtol8 != 0) { Rprintf("Error in dcqo1: zjkrtol8 = %d\n", *zjkrtol8); Rprintf("Continuing.\n"); } *fpdlcqk9kpzavbj3mat++ = (*tlq9wpes - *wkumc9iddev0) / *ydcnh9xl; } if (xwdf5ltg > 1) { fpdlcqk9lncwkfq7 = lncwkfq7 + (hpmwnav2-1) * *ftnjamu2; fpdlcqk9yxiwebc5 = wkumc9idyxiwebc5 + (hpmwnav2-1) * *ftnjamu2; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) *fpdlcqk9lncwkfq7++ = *fpdlcqk9yxiwebc5++; } } Free(wkumc9idajul8wkv); Free(wkumc9iddev0); Free(wkumc9idyxiwebc5); xui7hqwl[4] = idlosrw8; } void vcao6(double lncwkfq7[], double tlgduey8[], double ufgqj9ck[], double m0ibglfx[], double t8hwvalr[], double ghz9vuba[], double rbne6ouj[], double wpuarq2m[], double vc6hatuj[], double fasrkub3[], int ges1xpkr[], int *ftnjamu2, int *wy1vqfzu, int *afpc0kns, int *br5ovgcj, int *npjlv3mr, int *zjkrtol8, int xui7hqwl[], double tlq9wpes[], double zshtfg8c[], double y7sdgtqi[], int psdvgce3[], int *qfozcl5b, double hdnw2fts[], double lamvec[], double wbkq9zyi[], int ezlgm2up[], int lqsahu0r[], int which[], double kispwgx3[], double mbvnaor6[], double hjm2ktyr[], int jnxpuym2[], int hnpt1zym[], int iz2nbfjc[], double ifys6woa[], double rpyis2kc[], double gkdx5jals[], int nbzjkpi3[], int acpios9q[], int jwbkl9fp[]) { int hj3ftvzu, ehtjigf4, kvowz9ht, yu6izdrc = 0, pqneb2ra = 1, xwdf5ltg = xui7hqwl[0], f7svlajr, qfx3vhct, c5aesxkul, wr0lbopv, vtsou9pz, xlpjcg3s, sedf7mxb, kcm6jfob, lensmo = (xwdf5ltg == 1 ? 2 : 4) * *afpc0kns; double rpto5qwb, dn3iasxug, wiptsjx8, bh2vgiay, uaf2xgqy, vsoihn1r, rsynp1go, fjcasv7g[6], ghdetj8v = 0.0; double *fpdlcqk9kispwgx3; double hmayv1xt = 0.0, Totdev = 0.0e0; int qes4mujl, ayfnwr1v, kij0gwer, xumj5dnk, lyma1kwc; // = xui7hqwl[10]; double hmayv1xtvm4xjosb[2]; double *fpdlcqk9lxyst1eb, *fpdlcqk9zyodca3j, *fpdlcqk9m0ibglfx1, *fpdlcqk9m0ibglfx2, *fpdlcqk9wpuarq2m1, *fpdlcqk9wpuarq2m2; double *wkumc9idui8ysltq, *wkumc9idlxyst1eb, *wkumc9idzyodca3j; double *wkumc9idhdnw2fts, *wkumc9idwbkq9zyi; fjcasv7g[0] = 0.001; fjcasv7g[1] = 0.0; fjcasv7g[2] = -1.5; fjcasv7g[3] = 1.5; fjcasv7g[4] = 1.0e-4; fjcasv7g[5] = 2.0e-8; wkumc9idui8ysltq = Calloc((*ftnjamu2 * *wy1vqfzu) * (*afpc0kns * *wy1vqfzu), double); wkumc9idlxyst1eb = Calloc( *qfozcl5b * *ftnjamu2 , double); wkumc9idzyodca3j = Calloc( *qfozcl5b * *ftnjamu2 , double); wkumc9idhdnw2fts = Calloc(lensmo , double); wkumc9idwbkq9zyi = Calloc(lensmo , double); for(ayfnwr1v = 0; ayfnwr1v < lensmo; ayfnwr1v++) { wkumc9idhdnw2fts[ayfnwr1v] = hdnw2fts[ayfnwr1v]; wkumc9idwbkq9zyi[ayfnwr1v] = wbkq9zyi[ayfnwr1v]; } xlpjcg3s = xui7hqwl[2]; kvowz9ht = xui7hqwl[3]; // # = 1 f7svlajr = xui7hqwl[4]; qfx3vhct = xui7hqwl[5]; c5aesxkul = xui7hqwl[6]; xui7hqwl[8] = 0; lyma1kwc = psdvgce3[10]; // vtsou9pz = xui7hqwl[11]; if (vtsou9pz != 1 || lyma1kwc != xwdf5ltg) { Rprintf("Error: 'vtsou9pz' != 1, or 'lyma1kwc' != 'xwdf5ltg', in vcao6!\n"); *zjkrtol8 = 4; Free(wkumc9idui8ysltq); Free(wkumc9idlxyst1eb); Free(wkumc9idzyodca3j); Free(wkumc9idhdnw2fts); Free(wkumc9idwbkq9zyi); return; } wr0lbopv = xui7hqwl[17]; dn3iasxug = y7sdgtqi[0]; uaf2xgqy = sqrt(dn3iasxug); vsoihn1r = log(dn3iasxug); bh2vgiay = y7sdgtqi[1]; rsynp1go = y7sdgtqi[2]; hmayv1xt += hmayv1xt; hmayv1xt *= hmayv1xt; *zjkrtol8 = 1; for (hj3ftvzu = 1; hj3ftvzu <= *afpc0kns; hj3ftvzu++) { ceqzd1hi653: hmayv1xt = 1.0; qes4mujl = (qfx3vhct == 3 || qfx3vhct == 5) ? 
2 * hj3ftvzu - 1 : hj3ftvzu; if (f7svlajr == 0) { yiumjq3nietam6(tlgduey8, m0ibglfx, y7sdgtqi, ftnjamu2, wy1vqfzu, afpc0kns, &qfx3vhct, &hj3ftvzu, ufgqj9ck, &wr0lbopv); } else if (f7svlajr != 1) { Rprintf("Failure due to bad input of 'f7svlajr' variable\n"); *zjkrtol8 = 6; Free(wkumc9idui8ysltq); Free(wkumc9idlxyst1eb); Free(wkumc9idzyodca3j); Free(wkumc9idhdnw2fts); Free(wkumc9idwbkq9zyi); return; } yiumjq3nnipyajc1(m0ibglfx, t8hwvalr, ftnjamu2, wy1vqfzu, afpc0kns, &qfx3vhct, &hj3ftvzu); if (f7svlajr == 2) { yiumjq3nshjlwft5(&qfx3vhct, tlgduey8, ufgqj9ck, t8hwvalr, ftnjamu2, wy1vqfzu, afpc0kns, &kvowz9ht, m0ibglfx, &rpto5qwb, &hj3ftvzu, &dn3iasxug, &vsoihn1r, &pqneb2ra); } else { rpto5qwb = -1.0e0; } for (kcm6jfob = 1; kcm6jfob <= c5aesxkul; kcm6jfob++) { yiumjq3nflncwkfq76(lncwkfq7, vc6hatuj, ftnjamu2, br5ovgcj, &xwdf5ltg, &qfx3vhct); psdvgce3[6] = 0; yiumjq3ndlgpwe0c(tlgduey8, ufgqj9ck, m0ibglfx, t8hwvalr, ghz9vuba, rbne6ouj, wpuarq2m, &rsynp1go, &dn3iasxug, &uaf2xgqy, ftnjamu2, wy1vqfzu, afpc0kns, br5ovgcj, npjlv3mr, &hj3ftvzu, &qfx3vhct, zjkrtol8, &yu6izdrc, hmayv1xtvm4xjosb); fpdlcqk9lxyst1eb = wkumc9idlxyst1eb; fpdlcqk9zyodca3j = wkumc9idzyodca3j; fpdlcqk9m0ibglfx1 = m0ibglfx + qes4mujl-1; fpdlcqk9wpuarq2m1 = wpuarq2m + qes4mujl-1; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { fpdlcqk9m0ibglfx2 = fpdlcqk9m0ibglfx1; fpdlcqk9wpuarq2m2 = fpdlcqk9wpuarq2m1; for (kij0gwer = 1; kij0gwer <= *qfozcl5b; kij0gwer++) { *fpdlcqk9lxyst1eb++ = *fpdlcqk9m0ibglfx2++; *fpdlcqk9zyodca3j++ = *fpdlcqk9wpuarq2m2++; } fpdlcqk9m0ibglfx1 += *wy1vqfzu; fpdlcqk9wpuarq2m1 += *npjlv3mr; } sedf7mxb = 0; // 20100416 a stop gap. Used for xwdf5ltg==2 only i think. ehtjigf4 = xwdf5ltg * (hj3ftvzu-1); if (kcm6jfob == 1) { for (kij0gwer = 1; kij0gwer <= lyma1kwc; kij0gwer++) { fpdlcqk9kispwgx3 = kispwgx3 + (ehtjigf4 + hnpt1zym[kij0gwer-1]-1) * *ftnjamu2; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) *fpdlcqk9kispwgx3++ = 0.0e0; } } else { wbkq9zyi[ ehtjigf4 + hnpt1zym[0]-1] = wkumc9idwbkq9zyi[ ehtjigf4 + hnpt1zym[0]-1]; hdnw2fts[ ehtjigf4 + hnpt1zym[0]-1] = wkumc9idhdnw2fts[ ehtjigf4 + hnpt1zym[0]-1]; if (xwdf5ltg == 2) { wbkq9zyi[ ehtjigf4 + hnpt1zym[1]-1] = wkumc9idwbkq9zyi[ ehtjigf4 + hnpt1zym[1]-1]; // wkumc9idr3eoxkzp; hdnw2fts[sedf7mxb + ehtjigf4 + hnpt1zym[1]-1] = wkumc9idhdnw2fts[sedf7mxb + ehtjigf4 + hnpt1zym[1]-1]; // wkumc9idwld4qctn; } } Yee_vbfa(psdvgce3, fjcasv7g, mbvnaor6, ghz9vuba + (qes4mujl-1) * *ftnjamu2, rbne6ouj + (qes4mujl-1) * *ftnjamu2, hdnw2fts + sedf7mxb + ehtjigf4 + hnpt1zym[0] - 1, lamvec + ehtjigf4 + hnpt1zym[0] - 1, wbkq9zyi + ehtjigf4 + hnpt1zym[0] - 1, ezlgm2up, lqsahu0r, which, kispwgx3 + (ehtjigf4 + *hnpt1zym - 1) * *ftnjamu2, wkumc9idlxyst1eb, zshtfg8c + (hj3ftvzu - 1) * xlpjcg3s, wkumc9idui8ysltq, vc6hatuj, fasrkub3, ges1xpkr, wkumc9idzyodca3j, hjm2ktyr, jnxpuym2, hnpt1zym, iz2nbfjc, ifys6woa + (ehtjigf4 + hnpt1zym[0] - 1) * *ftnjamu2, rpyis2kc + (hj3ftvzu-1) * (nbzjkpi3[xwdf5ltg] - 1), gkdx5jals, nbzjkpi3, acpios9q, jwbkl9fp); y7sdgtqi[3 + *afpc0kns + *afpc0kns] = ghdetj8v; xumj5dnk = psdvgce3[13]; if (xumj5dnk != 0) { Rprintf("vcao6: Error... 
exiting; error code = %d\n", xumj5dnk); *zjkrtol8 = 8; Free(wkumc9idui8ysltq); Free(wkumc9idlxyst1eb); Free(wkumc9idzyodca3j); Free(wkumc9idhdnw2fts); Free(wkumc9idwbkq9zyi); return; } fpdlcqk9lxyst1eb = wkumc9idlxyst1eb; fpdlcqk9m0ibglfx1 = m0ibglfx + qes4mujl-1; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { fpdlcqk9m0ibglfx2 = fpdlcqk9m0ibglfx1; for (kij0gwer = 1; kij0gwer <= *qfozcl5b; kij0gwer++) { *fpdlcqk9m0ibglfx2++ = *fpdlcqk9lxyst1eb++; } fpdlcqk9m0ibglfx1 += *wy1vqfzu; } yiumjq3nnipyajc1(m0ibglfx, t8hwvalr, ftnjamu2, wy1vqfzu, afpc0kns, &qfx3vhct, &hj3ftvzu); yiumjq3nshjlwft5(&qfx3vhct, tlgduey8, ufgqj9ck, t8hwvalr, ftnjamu2, wy1vqfzu, afpc0kns, &kvowz9ht, m0ibglfx, tlq9wpes + hj3ftvzu, &hj3ftvzu, &dn3iasxug, &vsoihn1r, &pqneb2ra); wiptsjx8 = fabs(tlq9wpes[hj3ftvzu] - rpto5qwb) / (1.0e0 + fabs(tlq9wpes[hj3ftvzu])); if (wiptsjx8 < bh2vgiay) { *zjkrtol8 = 0; xui7hqwl[7] = kcm6jfob; if (qfx3vhct == 3 || qfx3vhct == 5) { yiumjq3nshjlwft5(&qfx3vhct, tlgduey8, ufgqj9ck, t8hwvalr, ftnjamu2, wy1vqfzu, afpc0kns, &kvowz9ht, m0ibglfx, tlq9wpes + hj3ftvzu, &hj3ftvzu, &dn3iasxug, &vsoihn1r, &yu6izdrc); } Totdev += tlq9wpes[hj3ftvzu]; goto ceqzd1hi1011; } else { rpto5qwb = tlq9wpes[hj3ftvzu]; } } if (f7svlajr == 1) { f7svlajr = 0; xui7hqwl[8] = 1; goto ceqzd1hi653; } *zjkrtol8 = 3; Totdev += tlq9wpes[hj3ftvzu]; ceqzd1hi1011: hmayv1xt = 2.0e0; } *tlq9wpes = Totdev; Free(wkumc9idui8ysltq); Free(wkumc9idlxyst1eb); Free(wkumc9idzyodca3j); Free(wkumc9idhdnw2fts); Free(wkumc9idwbkq9zyi); } void vdcao6(double lncwkfq7[], double tlgduey8[], double ufgqj9ck[], double m0ibglfx[], double t8hwvalr[], double ghz9vuba[], double rbne6ouj[], double wpuarq2m[], double vc6hatuj[], double fasrkub3[], int ges1xpkr[], int *ftnjamu2, int *wy1vqfzu, int *afpc0kns, int *br5ovgcj, int *npjlv3mr, int *zjkrtol8, int xui7hqwl[], double tlq9wpes[], double zshtfg8c[], double y7sdgtqi[], double atujnxb8[], double k7hulceq[], int *eoviz2fb, double kpzavbj3mat[], double ajul8wkv[], int psdvgce3[], int *qfozcl5b, double hdnw2fts[], double lamvec[], double wbkq9zyi[], int ezlgm2up[], int lqsahu0r[], int which[], double kispwgx3[], double mbvnaor6[], double hjm2ktyr[], int jnxpuym2[], int hnpt1zym[], int iz2nbfjc[], double ifys6woa[], double rpyis2kc[], double gkdx5jals[], int nbzjkpi3[], int acpios9q[], int jwbkl9fp[]) { int ayfnwr1v, xvr7bonh, hpmwnav2, idlosrw8, xwdf5ltg = xui7hqwl[ 0], vtsou9pz; double fxnhilr3; double ghdetj8v = 0.0e0, ydcnh9xl = y7sdgtqi[3 + *afpc0kns + *afpc0kns + 3 -1]; double *fpdlcqk9k7hulceq, *fpdlcqk9kpzavbj3mat, *fpdlcqk9lncwkfq7, *fpdlcqk9yxiwebc5, *fpdlcqk9atujnxb8; double *wkumc9idyxiwebc5; double *wkumc9idlxyst1eb, *wkumc9idzyodca3j; double *wkumc9iddev0; wkumc9idyxiwebc5 = Calloc(*ftnjamu2 * xwdf5ltg , double); fpdlcqk9kpzavbj3mat = kpzavbj3mat; wkumc9iddev0 = Calloc(1 + *afpc0kns , double); wkumc9idlxyst1eb = Calloc( *qfozcl5b * *ftnjamu2 , double); wkumc9idzyodca3j = Calloc( *qfozcl5b * *ftnjamu2 , double); idlosrw8 = xui7hqwl[ 4]; vtsou9pz = xui7hqwl[11]; fpdlcqk9lncwkfq7 = lncwkfq7; fpdlcqk9yxiwebc5 = wkumc9idyxiwebc5; for (hpmwnav2 = 1; hpmwnav2 <= xwdf5ltg; hpmwnav2++) { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { fxnhilr3 = 0.0e0; fpdlcqk9k7hulceq = k7hulceq + (hpmwnav2-1) * *eoviz2fb; fpdlcqk9atujnxb8 = atujnxb8 + ayfnwr1v-1; for (xvr7bonh = 1; xvr7bonh <= *eoviz2fb; xvr7bonh++) { fxnhilr3 += *fpdlcqk9atujnxb8 * *fpdlcqk9k7hulceq++; fpdlcqk9atujnxb8 += *ftnjamu2; } *fpdlcqk9yxiwebc5++ = *fpdlcqk9lncwkfq7++ = fxnhilr3; } } if (vtsou9pz == 1) { vcao6(lncwkfq7, 
tlgduey8, ufgqj9ck, m0ibglfx, t8hwvalr, ghz9vuba, rbne6ouj, wpuarq2m, vc6hatuj, fasrkub3, ges1xpkr, ftnjamu2, wy1vqfzu, afpc0kns, br5ovgcj, npjlv3mr, zjkrtol8, xui7hqwl, wkumc9iddev0, ajul8wkv, y7sdgtqi, psdvgce3, qfozcl5b, hdnw2fts, lamvec, wbkq9zyi, ezlgm2up, lqsahu0r, which, kispwgx3, mbvnaor6, hjm2ktyr, jnxpuym2, hnpt1zym, iz2nbfjc, ifys6woa, rpyis2kc, gkdx5jals, nbzjkpi3, acpios9q, jwbkl9fp); y7sdgtqi[3 + *afpc0kns + *afpc0kns] = ghdetj8v; }
fpdlcqk9atujnxb8 = atujnxb8; for (xvr7bonh = 1; xvr7bonh <= *eoviz2fb; xvr7bonh++) { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9atujnxb8 *= ydcnh9xl; fpdlcqk9atujnxb8++; } }
for (hpmwnav2 = 1; hpmwnav2 <= xwdf5ltg; hpmwnav2++) { fpdlcqk9atujnxb8 = atujnxb8; // + (xvr7bonh-1) * *ftnjamu2;
for (xvr7bonh = 1; xvr7bonh <= *eoviz2fb; xvr7bonh++) { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9lncwkfq7++ = *fpdlcqk9yxiwebc5++ + *fpdlcqk9atujnxb8++; } xui7hqwl[4] = 0; if (vtsou9pz == 1) { vcao6(lncwkfq7, tlgduey8, ufgqj9ck, m0ibglfx, t8hwvalr, ghz9vuba, rbne6ouj, wpuarq2m, vc6hatuj, fasrkub3, ges1xpkr, ftnjamu2, wy1vqfzu, afpc0kns, br5ovgcj, npjlv3mr, zjkrtol8, xui7hqwl, tlq9wpes, zshtfg8c, y7sdgtqi, psdvgce3, qfozcl5b, hdnw2fts, lamvec, wbkq9zyi, ezlgm2up, lqsahu0r, which, kispwgx3, mbvnaor6, hjm2ktyr, jnxpuym2, hnpt1zym, iz2nbfjc, ifys6woa, rpyis2kc, gkdx5jals, nbzjkpi3, acpios9q, jwbkl9fp); y7sdgtqi[3 + *afpc0kns + *afpc0kns] = ghdetj8v; } if (*zjkrtol8 != 0) { Rprintf("Warning: failure to converge in vdcao6. \n"); Rprintf("Continuing.\n"); } *fpdlcqk9kpzavbj3mat++ = (*tlq9wpes - *wkumc9iddev0) / ydcnh9xl; }
if (xwdf5ltg > 1) { fpdlcqk9lncwkfq7 = lncwkfq7 + (hpmwnav2-1) * *ftnjamu2; fpdlcqk9yxiwebc5 = wkumc9idyxiwebc5 + (hpmwnav2-1) * *ftnjamu2; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) *fpdlcqk9lncwkfq7++ = *fpdlcqk9yxiwebc5++; } } Free(wkumc9idyxiwebc5); Free(wkumc9iddev0 ); Free(wkumc9idlxyst1eb); Free(wkumc9idzyodca3j); xui7hqwl[4] = idlosrw8; }
void yiumjq3npnm1or(double *objzgdk0, double *lfu2qhid) { int sn; double R1, R2, y, y2, y3, y4, y5, y6, y7; double erf, erfc, z, z2, z3, z4; double SQRT2 = 1.414213562373095049e0, SQRTPI = 1.772453850905516027e0, ULIMIT = 20.0e0, P10 = 242.66795523053175e0, P11 = 21.979261618294152e0, P12 = 6.9963834886191355e0, P13 = -.035609843701815385e0, Q10 = 215.05887586986120e0, Q11 = 91.164905404514901e0, Q12 = 15.082797630407787e0, Q13 = 1.0e0, P20 = 300.4592610201616005e0, P21 = 451.9189537118729422e0, P22 = 339.3208167343436870e0, P23 = 152.9892850469404039e0, P24 = 43.16222722205673530e0, P25 = 7.211758250883093659e0, P26 = .5641955174789739711e0, P27 = -.0000001368648573827167067e0, Q20 = 300.4592609569832933e0, Q21 = 790.9509253278980272e0, Q22 = 931.3540948506096211e0, Q23 = 638.9802644656311665e0, Q24 = 277.5854447439876434e0, Q25 = 77.00015293522947295e0, Q26 = 12.78272731962942351e0, Q27 = 1.0e0, P30 = -.00299610707703542174e0, P31 = -.0494730910623250734e0, P32 = -.226956593539686930e0, P33 = -.278661308609647788e0, P34 = -.0223192459734184686e0, Q30 = .0106209230528467918e0, Q31 = .191308926107829841e0, Q32 = 1.05167510706793207e0, Q33 = 1.98733201817135256e0, Q34 = 1.0e0; if (*objzgdk0 < -ULIMIT) { *lfu2qhid = 2.753624e-89; return; } if (*objzgdk0 > ULIMIT) { *lfu2qhid = 1.0e0; return; } y = *objzgdk0 / SQRT2; if (y < 0.0e0) { y = -y; sn = -1; } else { sn = 1; } y2 = y * y; y4 = y2 * y2; y6 = y4 * y2; if (y < 0.46875e0) { R1 = P10 + P11 * y2 + P12 * y4 + P13 * y6; R2 = Q10 + Q11 * y2 + Q12 * y4 + Q13 * y6; erf = y * R1 / R2; *lfu2qhid = (sn == 1) ?
0.5e0 + 0.5 * erf : 0.5e0 - 0.5 * erf; } else if (y < 4.0e0) { y3 = y2 * y; y5 = y4 * y; y7 = y6 * y; R1 = P20 + P21 * y + P22 * y2 + P23 * y3 + P24 * y4 + P25 * y5 + P26 * y6 + P27 * y7; R2 = Q20 + Q21 * y + Q22 * y2 + Q23 * y3 + Q24 * y4 + Q25 * y5 + Q26 * y6 + Q27 * y7; erfc = exp(-y2) * R1 / R2; *lfu2qhid = (sn == 1) ? 1.0 - 0.5 * erfc : 0.5 * erfc; } else { z = y4; z2 = z * z; z3 = z2 * z; z4 = z2 * z2; R1 = P30 + P31 * z + P32 * z2 + P33 * z3 + P34 * z4; R2 = Q30 + Q31 * z + Q32 * z2 + Q33 * z3 + Q34 * z4; erfc = (exp(-y2)/y) * (1.0 / SQRTPI + R1 / (R2 * y2)); *lfu2qhid = (sn == 1) ? 1.0 - 0.5 * erfc : 0.5 * erfc; } } void yiumjq3npnm1ow(double objzgdk0[], double lfu2qhid[], int *f8yswcat) { int ayfnwr1v; for (ayfnwr1v = 0; ayfnwr1v < *f8yswcat; ayfnwr1v++) { yiumjq3npnm1or(objzgdk0++, lfu2qhid++); } } VGAM/man/0000755000176000001440000000000012136651105011607 5ustar ripleyusersVGAM/man/zipoisson.Rd0000644000176000001440000002536012136651105014141 0ustar ripleyusers\name{zipoisson} \alias{zipoisson} \alias{zipoissonff} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Zero-Inflated Poisson Distribution Family Function } \description{ Fits a zero-inflated Poisson distribution by full maximum likelihood estimation. } \usage{ zipoissonff(llambda = "loge", lprobp = "logit", ilambda = NULL, iprobp = NULL, imethod = 1, shrinkage.init = 0.8, zero = -2) zipoisson(lpstr0 = "logit", llambda = "loge", ipstr0 = NULL, ilambda = NULL, imethod = 1, shrinkage.init = 0.8, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lpstr0, llambda}{ Link function for the parameter \eqn{\phi}{phi} and the usual \eqn{\lambda}{lambda} parameter. See \code{\link{Links}} for more choices; see \code{\link{CommonVGAMffArguments}} for more information. For the zero-\emph{deflated} model see below. } \item{ipstr0, ilambda}{ Optional initial values for \eqn{\phi}{phi}, whose values must lie between 0 and 1. Optional initial values for \eqn{\lambda}{lambda}, whose values must be positive. The defaults are to compute an initial value internally for each. If a vector then recycling is used. } \item{lprobp, iprobp}{ Corresponding arguments for the other parameterization. See details below. } \item{imethod}{ An integer with value \code{1} or \code{2} which specifies the initialization method for \eqn{\lambda}{lambda}. If failure to converge occurs try another value and/or else specify a value for \code{shrinkage.init} and/or else specify a value for \code{ipstr0}. See \code{\link{CommonVGAMffArguments}} for more information. } \item{shrinkage.init}{ How much shrinkage is used when initializing \eqn{\lambda}{lambda}. The value must be between 0 and 1 inclusive, and a value of 0 means the individual response values are used, and a value of 1 means the median or mean is used. This argument is used in conjunction with \code{imethod}. See \code{\link{CommonVGAMffArguments}} for more information. } \item{zero}{ An integer specifying which linear/additive predictor is modelled as intercepts only. If given, the value must be either 1 or 2, and the default is none of them. Setting \code{zero = 1} makes \eqn{\phi}{phi} a single parameter. See \code{\link{CommonVGAMffArguments}} for more information. } } \details{ This model is a mixture of a Poisson distribution and the value 0; it has value 0 with probability \eqn{\phi}{phi} else is Poisson(\eqn{\lambda}{lambda}) distributed. 
Thus there are two sources for zero values, and \eqn{\phi}{phi} is the probability of a \emph{structural zero}. The model for \code{zipoisson()} can be written \deqn{P(Y = 0) = \phi + (1-\phi) \exp(-\lambda),}{% P(Y = 0) = phi + (1-phi) * exp(-lambda),} and for \eqn{y=1,2,\ldots}, \deqn{P(Y = y) = (1-\phi) \exp(-\lambda) \lambda^y / y!.}{% P(Y = y) = (1-phi) * exp(-lambda) * lambda^y / y!.} Here, the parameter \eqn{\phi}{phi} satisfies \eqn{0 < \phi < 1}{0 < phi < 1}. The mean of \eqn{Y} is \eqn{(1-\phi) \lambda}{(1-phi)*lambda} and these are returned as the fitted values. The variance of \eqn{Y} is \eqn{(1-\phi) \lambda (1 + \phi \lambda)}{ (1-phi)*lambda*(1 + phi lambda)}. By default, the two linear/additive predictors are \eqn{(logit(\phi), \log(\lambda))^T}{(logit(phi), log(lambda))^T}. The \pkg{VGAM} family function \code{zipoissonff()} has a few changes compared to \code{zipoisson()}. These are: (i) the order of the linear/additive predictors is switched so the Poisson mean comes first; (ii) \code{probp} is now the probability of the Poisson component, i.e., \code{probp} is \code{1-pstr0}; (iii) it can handle multiple responses; (iv) argument \code{zero} has a new default so that the \code{probp} is an intercept-only by default. Now \code{zipoissonff()} is generally recommended over \code{zipoisson()} (and definitely recommended over \code{\link{yip88}}). Both functions implement Fisher scoring and can handle multiple responses. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, \code{\link{rrvglm}} and \code{\link{vgam}}. } \references{ Thas, O. and Rayner, J. C. W. (2005) Smooth tests for the zero-inflated Poisson distribution. \emph{Biometrics}, \bold{61}, 808--815. Data: Angers, J-F. and Biswas, A. (2003) A Bayesian analysis of zero-inflated generalized Poisson model. \emph{Computational Statistics & Data Analysis}, \bold{42}, 37--46. Cameron, A. C. and Trivedi, P. K. (1998) \emph{Regression Analysis of Count Data}. Cambridge University Press: Cambridge. Yee, T. W. (2013) Reduced-rank vector generalized linear models with two linear predictors. \emph{Computational Statistics and Data Analysis}. } \author{ T. W. Yee } \note{ The \code{misc} slot has a component called \code{pobs0} which is the estimate of \eqn{P(Y = 0)}. Note that \eqn{P(Y = 0)} is not the parameter \eqn{\phi}{phi}. The estimated probability of a structural 0 is returned in the \code{misc} slot with component name \code{pstr0}. Although the functions in \code{\link{Zipois}} can handle the zero-\emph{deflated} Poisson distribution, this family function cannot estimate this very well in general. One sets \code{lpstr0 = identity}, however, the iterations might fall outside the parameter space. Practically, it is restricted to intercept-models only (see example below). Also, one might need inputting good initial values or using a simpler model to obtain initial values. % If there is a covariate then it is best to % constrain \code{pstr0} to be intercept-only, e.g., % by \code{zipoisson(lpstr0 = identity, zero = -1)}. A (somewhat) similar and more reliable method for zero-deflation is to try the zero-altered Poisson model (see \code{\link{zapoisson}}). The use of this \pkg{VGAM} family function with \code{\link{rrvglm}} can result in a so-called COZIGAM or COZIGLM. That is, a reduced-rank zero-inflated Poisson model (RR-ZIP) is a constrained zero-inflated generalized linear model. See \pkg{COZIGAM}. 
A RR-ZINB model can also be fitted easily; see \code{\link{zinegbinomial}}. Jargon-wise, a COZIGLM might be better described as a COZIVGLM-ZIP. } \section{Warning }{ Numerical problems can occur, e.g., when the probability of zero is actually less than, not more than, the nominal probability of zero. For example, in the Angers and Biswas (2003) data below, replacing 182 by 1 results in nonconvergence. Half-stepping is not uncommon. If failure to converge occurs, try using combinations of \code{imethod}, \code{shrinkage.init}, \code{ipstr0}, and/or \code{zipoisson(zero = 1)} if there are explanatory variables. The default for \code{zipoissonff()} is to model the structural zero probability as an intercept-only. } \seealso{ \code{\link{zapoisson}}, \code{\link{Zipois}}, \code{\link{yip88}}, \code{\link{rrvglm}}, \code{\link{zipebcom}}, \code{\link[stats:Poisson]{rpois}}. } \examples{ # Example 1: simulated ZIP data set.seed(123) zdata <- data.frame(x2 = runif(nn <- 2000)) zdata <- transform(zdata, pstr01 = logit(-0.5 + 1*x2, inverse = TRUE), pstr02 = logit( 0.5 - 1*x2, inverse = TRUE), Ps01 = logit(-0.5 , inverse = TRUE), Ps02 = logit( 0.5 , inverse = TRUE), lambda1 = loge(-0.5 + 2*x2, inverse = TRUE), lambda2 = loge( 0.5 + 2*x2, inverse = TRUE)) zdata <- transform(zdata, y1 = rzipois(nn, lambda = lambda1, pstr0 = Ps01), y2 = rzipois(nn, lambda = lambda2, pstr0 = Ps02)) with(zdata, table(y1)) # Eyeball the data with(zdata, table(y2)) fit1 <- vglm(y1 ~ x2, zipoisson(zero = 1), zdata, crit = "coef") fit2 <- vglm(y2 ~ x2, zipoisson(zero = 1), zdata, crit = "coef") coef(fit1, matrix = TRUE) # These should agree with the above values coef(fit2, matrix = TRUE) # These should agree with the above values # Fit all two simultaneously, using a different parameterization: fit12 <- vglm(cbind(y1, y2) ~ x2, zipoissonff, zdata, crit = "coef") coef(fit12, matrix = TRUE) # These should agree with the above values # For the first observation compute the probability that y1 is # due to a structural zero. head(zdata, 1) pfit1 <- predict(fit1, zdata[1, ]) pstr0 <- logit(pfit1[1], inverse = TRUE) lambda <- loge(pfit1[2], inverse = TRUE) (prob.struc.0 <- pstr0 / dzipois(x = 0, lambda = lambda, pstr0 = pstr0)) # Example 2: McKendrick (1926). 
Data from 223 Indian village households cholera <- data.frame(ncases = 0:4, # Number of cholera cases, wfreq = c(168, 32, 16, 6, 1)) # Frequencies fit <- vglm(ncases ~ 1, zipoisson, wei = wfreq, cholera, trace = TRUE) coef(fit, matrix = TRUE) with(cholera, cbind(actual = wfreq, fitted = round(dzipois(ncases, lambda = Coef(fit)[2], pstr0 = Coef(fit)[1]) * sum(wfreq), dig = 2))) # Example 3: data from Angers and Biswas (2003) abdata <- data.frame(y = 0:7, w = c(182, 41, 12, 2, 2, 0, 0, 1)) abdata <- subset(abdata, w > 0) fit <- vglm(y ~ 1, zipoisson(lpstr0 = probit, ipstr0 = 0.8), abdata, weight = w, trace = TRUE) fit@misc$pobs0 # Estimate of P(Y = 0) coef(fit, matrix = TRUE) Coef(fit) # Estimate of pstr0 and lambda fitted(fit) with(abdata, weighted.mean(y, w)) # Compare this with fitted(fit) summary(fit) # Example 4: zero-deflated model for an intercept-only data zdata <- transform(zdata, lambda3 = loge( 0.0 , inverse = TRUE)) zdata <- transform(zdata, deflat_limit = -1 / expm1(lambda3)) # Boundary # The 'pstr0' parameter is negative and in parameter space: zdata <- transform(zdata, usepstr0 = deflat_limit / 1.5) zdata <- transform(zdata, y3 = rzipois(nn, lambda3, pstr0 = usepstr0)) head(zdata) with(zdata, table(y3)) # A lot of deflation fit3 <- vglm(y3 ~ 1, zipoisson(zero = -1, lpstr0 = identity), zdata, trace = TRUE, crit = "coef") coef(fit3, matrix = TRUE) # Check how accurate it was: zdata[1, 'usepstr0'] # Answer coef(fit3)[1] # Estimate Coef(fit3) # Example 5: This RR-ZIP is known as a COZIGAM or COZIVGLM-ZIP set.seed(123) rrzip <- rrvglm(Alopacce ~ bs(WaterCon, df = 3), zipoisson(zero = NULL), hspider, trace = TRUE, Index.corner = 2) coef(rrzip, matrix = TRUE) Coef(rrzip) summary(rrzip) \dontrun{plotvgam(rrzip, lcol = "blue")} } \keyword{models} \keyword{regression} % Yee, T. W. (2012) % An alternative to quasi-Poisson vs. negative binomial % regression: the reduced-rank negative binomial model. % \emph{In preparation}. VGAM/man/zipoisUC.Rd0000644000176000001440000000672712136651105013657 0ustar ripleyusers\name{Zipois} \alias{Zipois} \alias{dzipois} \alias{pzipois} \alias{qzipois} \alias{rzipois} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Zero-Inflated Poisson Distribution } \description{ Density, distribution function, quantile function and random generation for the zero-inflated Poisson distribution with parameter \code{pstr0}. } \usage{ dzipois(x, lambda, pstr0 = 0, log = FALSE) pzipois(q, lambda, pstr0 = 0) qzipois(p, lambda, pstr0 = 0) rzipois(n, lambda, pstr0 = 0) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x, q}{vector of quantiles. } \item{p}{vector of probabilities. } \item{n}{number of observations. Must be a single positive integer. } \item{lambda}{ Vector of positive means. } \item{pstr0}{ Probability of a structural zero (i.e., ignoring the Poisson distribution), called \eqn{\phi}{phi}. The default value of \eqn{\phi = 0}{phi = 0} corresponds to the response having an ordinary Poisson distribution. } \item{log}{ Logical. Return the logarithm of the answer? } } \details{ The probability function of \eqn{Y} is 0 with probability \eqn{\phi}{phi}, and \eqn{Poisson(\lambda)}{Poisson(lambda)} with probability \eqn{1-\phi}{1-phi}. Thus \deqn{P(Y=0) =\phi + (1-\phi) P(W=0)}{% P(Y=0) = phi + (1-phi) * P(W=0)} where \eqn{W} is distributed \eqn{Poisson(\lambda)}{Poisson(lambda)}. 
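This mixture identity is easy to check numerically (a minimal sketch; the particular values of \code{lambda} and \code{pstr0} below are arbitrary):
\preformatted{
lambda <- 2; pstr0 <- 0.3
dzipois(0, lambda, pstr0 = pstr0)       # P(Y = 0)
pstr0 + (1 - pstr0) * dpois(0, lambda)  # Same value, via the mixture form
}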
} \value{ \code{dzipois} gives the density, \code{pzipois} gives the distribution function, \code{qzipois} gives the quantile function, and \code{rzipois} generates random deviates. } %\references{ } \author{ Thomas W. Yee } \note{ The argument \code{pstr0} is recycled to the required length, and must have values which lie in the interval \eqn{[0,1]}. These functions actually allow for the \emph{zero-deflated Poisson} distribution. Here, \code{pstr0} is also permitted to lie in the interval \code{[-1/expm1(lambda), 0]}. The resulting probability of a zero count is \emph{less than} the nominal Poisson value, and the use of \code{pstr0} to stand for the probability of a structural zero loses its meaning. When \code{pstr0} equals \code{-1/expm1(lambda)} this corresponds to the positive-Poisson distribution (e.g., see \code{\link{dpospois}}). } \seealso{ \code{\link{zipoisson}}, \code{\link[stats:Poisson]{dpois}}, \code{\link{rzinegbin}}. } \examples{ lambda <- 3; pstr0 <- 0.2; x <- (-1):7 (ii <- dzipois(x, lambda, pstr0 = pstr0)) max(abs(cumsum(ii) - pzipois(x, lambda, pstr0 = pstr0))) # Should be 0 table(rzipois(100, lambda, pstr0 = pstr0)) table(qzipois(runif(100), lambda, pstr0)) round(dzipois(0:10, lambda, pstr0 = pstr0) * 100) # Should be similar \dontrun{ x <- 0:10 par(mfrow = c(2, 1)) # Zero-inflated Poisson barplot(rbind(dzipois(x, lambda, pstr0 = pstr0), dpois(x, lambda)), beside = TRUE, col = c("blue","orange"), main = paste("ZIP(", lambda, ", pstr0 = ", pstr0, ") (blue) vs", " Poisson(", lambda, ") (orange)", sep = ""), names.arg = as.character(x)) deflat_limit <- -1 / expm1(lambda) # Zero-deflated Poisson newpstr0 <- round(deflat_limit / 1.5, 3) barplot(rbind(dzipois(x, lambda, pstr0 = newpstr0), dpois(x, lambda)), beside = TRUE, col = c("blue","orange"), main = paste("ZDP(", lambda, ", pstr0 = ", newpstr0, ") (blue) vs", " Poisson(", lambda, ") (orange)", sep = ""), names.arg = as.character(x)) } } \keyword{distribution} VGAM/man/zipfUC.Rd0000644000176000001440000000260012136651105013274 0ustar ripleyusers\name{Zipf} \alias{Zipf} \alias{dzipf} \alias{pzipf} %\alias{qzipf} %\alias{rzipf} \title{The Zipf Distribution} \description{ Density, and cumulative distribution function for the Zipf distribution. } \usage{ dzipf(x, N, s, log = FALSE) pzipf(q, N, s) } \arguments{ \item{x, q}{vector of quantiles.} % \item{p}{vector of probabilities.} % \item{n}{number of observations. % Must be a positive integer of length 1.} \item{N, s }{ the number of elements, and the exponent characterizing the distribution. See \code{\link{zipf}} for more details. } \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the density is returned. } } \value{ \code{dzipf} gives the density, and \code{pzipf} gives the cumulative distribution function. % \code{qzipf} gives the quantile function, and % \code{rzipf} generates random deviates. } \author{ T. W. Yee } \details{ This is a finite version of the zeta distribution. See \code{\link{zipf}} for more details. } %\note{ % %} \seealso{ \code{\link{zipf}}. 
} \examples{ N <- 10; s <- 0.5; y <- 1:N proby <- dzipf(y, N = N, s = s) \dontrun{ plot(proby ~ y, type = "h", col = "blue", ylab = "Probability", ylim = c(0, 0.2), main = paste("Zipf(N = ",N,", s = ",s,")", sep = ""), lwd = 2, las = 1) } sum(proby) # Should be 1 max(abs(cumsum(proby) - pzipf(y, N = N, s = s))) # Should be 0 } \keyword{distribution} VGAM/man/zipf.Rd0000644000176000001440000000567312136651105013061 0ustar ripleyusers\name{zipf} \alias{zipf} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Zipf Distribution Family Function } \description{ Estimates the parameter of the Zipf distribution. } \usage{ zipf(N = NULL, link = "loge", init.s = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{N}{ Number of elements, an integer satisfying \code{1 < N < Inf}. The default is to use the maximum value of the response. If given, \code{N} must be no less than the largest response value. If \code{N = Inf} and \eqn{s>1} then this is the zeta distribution (use \code{\link{zetaff}} instead). } \item{link}{ Parameter link function applied to the (positive) parameter \eqn{s}. See \code{\link{Links}} for more choices. } \item{init.s}{ Optional initial value for the parameter \eqn{s}. The default is to choose an initial value internally. If convergence failure occurs use this argument to input a value. } } \details{ The probability function for a response \eqn{Y} is \deqn{P(Y=y) = y^{-s} / \sum_{i=1}^N i^{-s},\ \ s>0,\ \ y=1,2,\ldots,N,}{% P(Y=y) = (y^(-s)) / sum((1:N)^(-s)), s>0, y=1,2,...,N,} where \eqn{s} is the exponent characterizing the distribution. The mean of \eqn{Y}, which is returned as the fitted values, is \eqn{\mu = H_{N,s-1} / H_{N,s}}{H(N,s-1) / H(N,s)} where \eqn{H_{n,m}= \sum_{i=1}^n i^{-m}}{H(n,m)=sum((1:n)^(-m))} is the \eqn{n}th generalized harmonic number. Zipf's law is an empirical law which is often applied to the study of the frequency of words in a corpus of natural language utterances. It states that the frequency of any word is inversely proportional to its rank in the frequency table. For example, \code{"the"} and \code{"of"} are the first two most common words, and Zipf's law states that \code{"the"} is twice as common as \code{"of"}. Many other natural phenomena conform to Zipf's law. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ pp.526-- of Chapter 11 of Johnson N. L., Kemp, A. W. and Kotz S. (2005) \emph{Univariate Discrete Distributions}, 3rd edition, Hoboken, New Jersey, USA: Wiley. } \author{ T. W. Yee } \note{ Upon convergence, the value of \code{N} is stored as \code{@misc$N}. } \seealso{ \code{\link{dzipf}}, \code{\link{zetaff}}. } \examples{ zdata <- data.frame(y = 1:5, ofreq = c(63, 14, 5, 1, 2)) fit <- vglm(y ~ 1, zipf, zdata, trace = TRUE, weight = ofreq, crit = "coef") fit <- vglm(y ~ 1, zipf(link = identity, init = 3.4), zdata, trace = TRUE, weight = ofreq) fit@misc$N (shat <- Coef(fit)) with(zdata, weighted.mean(y, ofreq)) fitted(fit, matrix = FALSE) } \keyword{models} \keyword{regression} %pp.465--471, Chapter 11 of %Johnson N. L., Kotz S., and Kemp A. W. (1993) %\emph{Univariate Discrete Distributions}, %2nd ed. %New York: Wiley. VGAM/man/zipebcom.Rd0000644000176000001440000002065412136651105013715 0ustar ripleyusers\name{zipebcom} \alias{zipebcom} %- Also NEED an '\alias' for EACH other topic documented here.
\title{ Exchangeable bivariate cloglog odds-ratio model from a zero-inflated Poisson distribution } \description{ Fits an exchangeable bivariate odds-ratio model to two binary responses with a complementary log-log link. The data are assumed to come from a zero-inflated Poisson distribution that has been converted to presence/absence. } \usage{ zipebcom(lmu12 = "cloglog", lphi12 = "logit", loratio = "loge", imu12 = NULL, iphi12 = NULL, ioratio = NULL, zero = 2:3, tol = 0.001, addRidge = 0.001) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lmu12, imu12}{ Link function, extra argument and optional initial values for the first (and second) marginal probabilities. Argument \code{lmu12} should be left alone. Argument \code{imu12} may be of length 2 (one element for each response). } \item{lphi12}{ Link function applied to the \eqn{\phi}{phi} parameter of the zero-inflated Poisson distribution (see \code{\link{zipoisson}}). See \code{\link{Links}} for more choices. } \item{loratio}{ Link function applied to the odds ratio. See \code{\link{Links}} for more choices. } \item{iphi12, ioratio}{ Optional initial values for \eqn{\phi}{phi} and the odds ratio. See \code{\link{CommonVGAMffArguments}} for more details. In general, good initial values (especially for \code{iphi12}) are often required, therefore use these arguments if convergence failure occurs. If inputted, the value of \code{iphi12} cannot be more than the sample proportions of zeros in either response. } % \item{ephi12, eoratio}{ % List. Extra argument for each of the links. % emu12 = list(), ephi12 = list(), eoratio = list(), % See \code{earg} in \code{\link{Links}} for general information. % } \item{zero}{ Which linear/additive predictor is modelled as an intercept only? A \code{NULL} means none. The default has both \eqn{\phi}{phi} and the odds ratio as not being modelled as a function of the explanatory variables (apart from an intercept). } \item{tol}{ Tolerance for testing independence. Should be some small positive numerical value. } \item{addRidge}{ Some small positive numerical value. The first two diagonal elements of the working weight matrices are multiplied by \code{1+addRidge} to make it diagonally dominant, therefore positive-definite. } } \details{ This \pkg{VGAM} family function fits an exchangeable bivariate odds ratio model (\code{\link{binom2.or}}) with a \code{\link{cloglog}} link. The data are assumed to come from a zero-inflated Poisson (ZIP) distribution that has been converted to presence/absence. Explicitly, the default model is \deqn{cloglog[P(Y_j=1)/(1-\phi)] = \eta_1,\ \ \ j=1,2}{% cloglog[P(Y_j=1)/(1-phi)] = eta_1,\ \ \ j=1,2} for the (exchangeable) marginals, and \deqn{logit[\phi] = \eta_2,}{% logit[phi] = eta_2,} for the mixing parameter, and \deqn{\log[P(Y_{00}=1) P(Y_{11}=1) / (P(Y_{01}=1) P(Y_{10}=1))] = \eta_3,}{% log[P(Y_{00}=1) P(Y_{11}=1) / (P(Y_{01}=1) P(Y_{10}=1))] = eta_3,} specifies the dependency between the two responses. Here, the responses equal 1 for a success and a 0 for a failure, and the odds ratio is often written \eqn{\psi=p_{00}p_{11}/(p_{10}p_{01})}{psi=p00 p11 / (p10 p01)}. We have \eqn{p_{10} = p_{01}}{p10 = p01} because of the exchangeability. The second linear/additive predictor models the \eqn{\phi}{phi} parameter (see \code{\link{zipoisson}}). The third linear/additive predictor is the same as \code{\link{binom2.or}}, viz., the log odds ratio. 
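As an informal sketch of how these three default links invert (the \code{eta} values below are hypothetical, chosen only to illustrate the back-transformation; compare with the construction of \code{mu12} in the example at the bottom of this page):
\preformatted{
  eta1 <- -0.5; eta2 <- -1; eta3 <- 2   # Hypothetical values
  phi <- logit(eta2, inverse = TRUE)    # Mixing parameter
  mu12 <- cloglog(eta1, inverse = TRUE) * (1 - phi)  # Marginal P(Y_j = 1)
  oratio <- exp(eta3)                   # Odds ratio
  c(phi = phi, mu12 = mu12, oratio = oratio)
}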
Suppose dataset1 comes from a Poisson distribution that has been converted to presence/absence, and that both marginal probabilities are the same (exchangeable). Then \code{binom2.or("cloglog", exch=TRUE)} is appropriate. Now suppose dataset2 comes from a \emph{zero-inflated} Poisson distribution. The first linear/additive predictor of \code{zipebcom()} applied to dataset2 is the same as that of \code{binom2.or("cloglog", exch=TRUE)} applied to dataset1. That is, the \eqn{\phi}{phi} parameter has been taken care of by \code{zipebcom()} so that it is just like the simpler \code{\link{binom2.or}}. Note that, for \eqn{\eta_1}{eta_1}, \code{mu12 = prob12 / (1-phi12)} where \code{prob12} is the probability of a 1 under the ZIP model. Here, \code{mu12} corresponds to \code{mu1} and \code{mu2} in the \code{\link{binom2.or}}-Poisson model. If \eqn{\phi=0}{phi=0} then \code{zipebcom()} should be equivalent to \code{binom2.or("cloglog", exch=TRUE)}. Full details are given in Yee and Dirnbock (2009). The leading \eqn{2 \times 2}{2 x 2} submatrix of the expected information matrix (EIM) has rank 1, not 2! This is due to the fact that the parameters corresponding to the first two linear/additive predictors are unidentifiable. The quick fix around this problem is to use the \code{addRidge} adjustment. The model is fitted by maximum likelihood estimation since the full likelihood is specified. Fisher scoring is implemented. The default models \eqn{\eta_2}{eta2} and \eqn{\eta_3}{eta3} as single parameters only, but this can be circumvented by setting \code{zero=NULL} in order to model the \eqn{\phi}{phi} and odds ratio as a function of all the explanatory variables. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. When fitted, the \code{fitted.values} slot of the object contains the four joint probabilities, labelled as \eqn{(Y_1,Y_2)}{(Y1,Y2)} = (0,0), (0,1), (1,0), (1,1), respectively. These estimated probabilities should be extracted with the \code{fitted} generic function. } \section{Warning }{ The fact that the EIM is not of full rank may mean the model is naturally ill-conditioned. It is not clear whether there are any negative theoretical consequences. For now it is certainly safer to fit \code{\link{binom2.or}} to bivariate binary responses. } \references{ Yee, T. W. and Dirnbock, T. (2009) Models for analysing species' presence/absence data at two time points. \emph{Journal of Theoretical Biology}, \bold{259}(4), 684--694. } %\author{ Thomas W. Yee } \note{ The \code{"12"} in the argument names reminds the user of the exchangeability assumption. The name of this \pkg{VGAM} family function stands for \emph{zero-inflated Poisson exchangeable bivariate complementary log-log odds-ratio model} or ZIP-EBCOM. See \code{\link{binom2.or}} for details that are pertinent to this \pkg{VGAM} family function too. Even better initial values are usually needed here. The \code{xij} (see \code{\link{vglm.control}}) argument enables environmental variables with different values at the two time points to be entered into an exchangeable \code{\link{binom2.or}} model. See the author's webpage for sample code. } \seealso{ \code{\link{binom2.or}}, \code{\link{zipoisson}}, \code{\link{cloglog}}, \code{\link{CommonVGAMffArguments}}, \code{\link{posbernoulli.tb}}.
} \examples{ zdata <- data.frame(x2 = seq(0, 1, len = (nsites <- 2000))) zdata <- transform(zdata, eta1 = -3 + 5 * x2, phi1 = logit(-1, inverse = TRUE), oratio = exp(2)) zdata <- transform(zdata, mu12 = cloglog(eta1, inverse = TRUE) * (1-phi1)) tmat <- with(zdata, rbinom2.or(nsites, mu1 = mu12, oratio = oratio, exch = TRUE)) zdata <- transform(zdata, ybin1 = tmat[, 1], ybin2 = tmat[, 2]) with(zdata, table(ybin1, ybin2)) / nsites # For interest only \dontrun{ # Various plots of the data, for interest only par(mfrow = c(2, 2)) plot(jitter(ybin1) ~ x2, data = zdata, col = "blue") plot(jitter(ybin2) ~ jitter(ybin1), data = zdata, col = "blue") plot(mu12 ~ x2, data = zdata, col = "blue", type = "l", ylim = 0:1, ylab = "Probability", main = "Marginal probability and phi") with(zdata, abline(h = phi1[1], col = "red", lty = "dashed")) tmat2 <- with(zdata, dbinom2.or(mu1 = mu12, oratio = oratio, exch = TRUE)) with(zdata, matplot(x2, tmat2, col = 1:4, type = "l", ylim = 0:1, ylab = "Probability", main = "Joint probabilities")) } # Now fit the model to the data. fit <- vglm(cbind(ybin1, ybin2) ~ x2, zipebcom, dat = zdata, trace = TRUE) coef(fit, matrix = TRUE) summary(fit) vcov(fit) } \keyword{models} \keyword{regression} VGAM/man/zinegbinomial.Rd0000644000176000001440000001376712136651105014743 0ustar ripleyusers\name{zinegbinomial} \alias{zinegbinomial} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Zero-Inflated Negative Binomial Distribution Family Function } \description{ Fits a zero-inflated negative binomial distribution by full maximum likelihood estimation. } \usage{ zinegbinomial(lpstr0 = "logit", lmunb = "loge", lsize = "loge", ipstr0 = NULL, isize = NULL, zero = c(-1, -3), imethod = 1, shrinkage.init = 0.95, nsimEIM = 250) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lpstr0, lmunb, lsize}{ Link functions for the parameters \eqn{\phi}{pstr0}, the mean and \eqn{k}; see \code{\link{negbinomial}} for details, and \code{\link{Links}} for more choices. For the zero-\emph{deflated} model see below. } % \item{epstr0, emunb, esize}{ % epstr0 = list(), emunb = list(), esize = list(), % List. Extra arguments for the respective links. % See \code{earg} in \code{\link{Links}} for general information. % } \item{ipstr0, isize}{ Optional initial values for \eqn{\phi}{pstr0} and \eqn{k}{k}. The default is to compute an initial value internally for both. If a vector then recycling is used. } \item{imethod}{ An integer with value \code{1} or \code{2} or \code{3} which specifies the initialization method for the mean parameter. If failure to converge occurs try another value and/or else specify a value for \code{shrinkage.init}. } \item{zero}{ Integers specifying which linear/additive predictor is modelled as intercepts only. If given, their absolute values must be either 1 or 2 or 3. The default is the \eqn{\phi}{pstr0} and \eqn{k} parameters (both for each response). See \code{\link{CommonVGAMffArguments}} for more information. } \item{shrinkage.init, nsimEIM}{ See \code{\link{CommonVGAMffArguments}} for information. } } \details{ This function uses simulation and Fisher scoring, and is based on \deqn{P(Y=0) = \phi + (1-\phi) (k/(k+\mu))^k,}{% P(Y=0) = \phi + (1-\phi) * (k/(k+\mu))^k,} and for \eqn{y=1,2,\ldots}, \deqn{P(Y=y) = (1-\phi) \, dnbinom(y, \mu, k).}{% P(Y=y) = (1-\phi) * dnbinom(y, \mu, k).} The parameter \eqn{\phi}{phi} satisfies \eqn{0 < \phi < 1}{0 < phi < 1}. 
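A quick numerical check of this mixture (a sketch only; the parameter values are arbitrary) is that the probabilities sum to unity:
\preformatted{
  phi <- 0.2; munb <- 4; k <- 1.5
  sum(c(phi + (1 - phi) * dnbinom(0, mu = munb, size = k),
        (1 - phi) * dnbinom(1:1000, mu = munb, size = k)))  # Should be 1
}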
The mean of \eqn{Y} is \eqn{(1-\phi) \mu}{(1-phi)*munb} (returned as the fitted values). By default, the three linear/additive predictors are \eqn{(logit(\phi), \log(\mu), \log(k))^T}{(logit(phi), log(munb), log(k))^T}. See \code{\link{negbinomial}}, another \pkg{VGAM} family function, for the formula of the probability density function and other details of the negative binomial distribution. Independent multivariate responses are handled. If so then arguments \code{ipstr0} and \code{isize} may be vectors with length equal to the number of responses. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } %\references{ % %} \author{ T. W. Yee } \note{ % 20130316: commenting out this: % For intercept-models, the \code{misc} slot has a component called % \code{pobs0} which is the estimate of \eqn{P(Y=0)}. % Note that \eqn{P(Y=0)} is not the parameter \eqn{\phi}{phi}. % 20130316: adding this: Estimated probabilities of a structural zero and an observed zero are returned, as in \code{\link{zipoisson}}. If \eqn{k} is large then the use of \pkg{VGAM} family function \code{\link{zipoisson}} is probably preferable. This follows because the Poisson is the limiting distribution of a negative binomial as \eqn{k} tends to infinity. The zero-\emph{deflated} negative binomial distribution might be fitted by setting \code{lpstr0 = identity}, albeit, not entirely reliably. See \code{\link{zipoisson}} for information that can be applied here. Else try the zero-altered negative binomial distribution (see \code{\link{zanegbinomial}}). } \section{Warning }{ Numerical problems can occur, e.g., when the probability of zero is actually less than, not more than, the nominal probability of zero. Half-stepping is not uncommon. If failure to converge occurs, try using combinations of \code{imethod}, \code{shrinkage.init}, \code{ipstr0}, \code{isize}, and/or \code{zero} if there are explanatory variables. This \pkg{VGAM} family function is computationally expensive and usually runs slowly; setting \code{trace = TRUE} is useful for monitoring convergence. } \seealso{ \code{\link{Zinegbin}}, \code{\link{negbinomial}}, \code{\link[stats:Poisson]{rpois}}, \code{\link{CommonVGAMffArguments}}. 
} \examples{ \dontrun{ # Example 1 ndata <- data.frame(x2 = runif(nn <- 1000)) ndata <- transform(ndata, pstr0 = logit(-0.5 + 1 * x2, inverse = TRUE), munb = exp( 3 + 1 * x2), size = exp( 0 + 2 * x2)) ndata <- transform(ndata, y1 = rzinegbin(nn, mu = munb, size = size, pstr0 = pstr0), y2 = rzinegbin(nn, mu = munb, size = size, pstr0 = pstr0)) with(ndata, table(y1)["0"] / sum(table(y1))) fit <- vglm(cbind(y1, y2) ~ x2, zinegbinomial(zero = NULL), ndata) coef(fit, matrix = TRUE) summary(fit) head(cbind(fitted(fit), with(ndata, (1 - pstr0) * munb))) round(vcov(fit), 3) # Example 2: RR-ZINB could also be called a COZIVGLM-ZINB-2 ndata <- data.frame(x2 = runif(nn <- 2000)) ndata <- transform(ndata, x3 = runif(nn)) ndata <- transform(ndata, eta1 = 3 + 1 * x2 + 2 * x3) ndata <- transform(ndata, pstr0 = logit(-1.5 + 0.5 * eta1, inverse = TRUE), munb = exp(eta1), size = exp(4)) ndata <- transform(ndata, y1 = rzinegbin(nn, pstr0 = pstr0, mu = munb, size = size)) with(ndata, table(y1)["0"] / sum(table(y1))) rrzinb <- rrvglm(y1 ~ x2 + x3, zinegbinomial(zero = NULL), ndata, Index.corner = 2, szero = 3, trace = TRUE) coef(rrzinb, matrix = TRUE) Coef(rrzinb) } } \keyword{models} \keyword{regression} VGAM/man/zinegbinUC.Rd0000644000176000001440000000657712136651105014152 0ustar ripleyusers\name{Zinegbin} \alias{Zinegbin} \alias{dzinegbin} \alias{pzinegbin} \alias{qzinegbin} \alias{rzinegbin} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Zero-Inflated Negative Binomial Distribution } \description{ Density, distribution function, quantile function and random generation for the zero-inflated negative binomial distribution with parameter \code{pstr0}. } \usage{ dzinegbin(x, size, prob = NULL, munb = NULL, pstr0 = 0, log = FALSE) pzinegbin(q, size, prob = NULL, munb = NULL, pstr0 = 0) qzinegbin(p, size, prob = NULL, munb = NULL, pstr0 = 0) rzinegbin(n, size, prob = NULL, munb = NULL, pstr0 = 0) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. Must be a single positive integer. } \item{size, prob, munb, log}{ Arguments matching \code{\link[stats:NegBinomial]{dnbinom}}. The argument \code{munb} corresponds to \code{mu} in \code{\link[stats:NegBinomial]{dnbinom}} and has been renamed to emphasize the fact that it is the mean of the negative binomial \emph{component}. } \item{pstr0}{ Probability of structural zero (i.e., ignoring the negative binomial distribution), called \eqn{\phi}{phi}. } } \details{ The probability function of \eqn{Y} is 0 with probability \eqn{\phi}{phi}, and a negative binomial distribution with probability \eqn{1-\phi}{1-phi}. Thus \deqn{P(Y=0) =\phi + (1-\phi) P(W=0)}{% P(Y=0) = phi + (1-phi) * P(W=0)} where \eqn{W} is distributed as a negative binomial distribution (see \code{\link[stats:NegBinomial]{rnbinom}}.) See \code{\link{negbinomial}}, a \pkg{VGAM} family function, for the formula of the probability density function and other details of the negative binomial distribution. } \value{ \code{dzinegbin} gives the density, \code{pzinegbin} gives the distribution function, \code{qzinegbin} gives the quantile function, and \code{rzinegbin} generates random deviates. } %\references{ } \author{ Thomas W. Yee } \note{ The argument \code{pstr0} is recycled to the required length, and must have values which lie in the interval \eqn{[0,1]}. These functions actually allow for \emph{zero-deflation}. 
That is, the resulting probability of a zero count is \emph{less than} the nominal value of the parent distribution. See \code{\link{Zipois}} for more information. } \seealso{ \code{\link{zinegbinomial}}, \code{\link[stats:NegBinomial]{rnbinom}}, \code{\link{rzipois}}. } \examples{ munb <- 3; pstr0 <- 0.2; size <- k <- 10; x <- 0:10 (ii <- dzinegbin(x, pstr0 = pstr0, mu = munb, size = k)) max(abs(cumsum(ii) - pzinegbin(x, pstr0 = pstr0, mu = munb, size = k))) # 0 table(rzinegbin(100, pstr0 = pstr0, mu = munb, size = k)) table(qzinegbin(runif(1000), pstr0 = pstr0, mu = munb, size = k)) round(dzinegbin(x, pstr0 = pstr0, mu = munb, size = k) * 1000) # Should be similar \dontrun{barplot(rbind(dzinegbin(x, pstr0 = pstr0, mu = munb, size = k), dnbinom(x, mu = munb, size = k)), las = 1, beside = TRUE, col = c("blue", "green"), ylab = "Probability", main = paste("ZINB(mu = ", munb, ", k = ", k, ", pstr0 = ", pstr0, ") (blue) vs NB(mu = ", munb, ", size = ", k, ") (green)", sep = ""), names.arg = as.character(x)) } } \keyword{distribution} VGAM/man/zigeometric.Rd0000644000176000001440000000764712136651105014435 0ustar ripleyusers\name{zigeometric} \alias{zigeometric} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Zero-Inflated Geometric Distribution Family Function } \description{ Fits a zero-inflated geometric distribution by maximum likelihood estimation. } \usage{ zigeometric(lprob = "logit", lpstr0 = "logit", iprob = NULL, ipstr0 = NULL, imethod = 1, bias.red = 0.5, zero = 2) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lprob, lpstr0}{ Link functions for the parameters \eqn{p}{prob} (\code{prob}) and \eqn{\phi}{phi}. The usual geometric probability parameter is the former. The probability of a structural zero is the latter. See \code{\link{Links}} for more choices. For the zero-\emph{deflated} model see below. } % \item{eprob, epstr0}{ eprob = list(), epstr0 = list(), % List. Extra argument for the respective links. % See \code{earg} in \code{\link{Links}} for general information. % } \item{bias.red}{ A constant used in the initialization process of \code{pstr0}. It should lie between 0 and 1, with 1 having no effect. } \item{iprob, ipstr0}{ See \code{\link{CommonVGAMffArguments}} for information. } \item{zero, imethod}{ See \code{\link{CommonVGAMffArguments}} for information. } } \details{ This function uses Fisher scoring and is based on \deqn{P(Y=0) = \phi + (1-\phi) p,}{% P(Y=0) = phi + (1-phi) * prob,} for \eqn{y=0}, and \deqn{P(Y=y) = (1-\phi) p (1 - p)^{y}.}{% P(Y=y) = (1-phi) * prob * (1 - prob)^y.} for \eqn{y=1,2,\ldots}. The parameter \eqn{\phi}{phi} satisfies \eqn{0 < \phi < 1}{0 < phi < 1}. The mean of \eqn{Y} is \eqn{E(Y)=(1-\phi) p / (1-p)}{E(Y) = (1-phi) * prob / (1-prob)} and these are returned as the fitted values. By default, the two linear/additive predictors are \eqn{(logit(p), logit(\phi))^T}{(logit(prob), logit(phi))^T}. Multiple responses are handled. % 20130316: Estimated probabilities of a structural zero and an observed zero are returned as in \code{\link{zipoisson}}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } %\references{ %} \author{ T. W. Yee } \note{ % Numerical problems may occur since the initial values are currently % not very good. The zero-\emph{deflated} geometric distribution might be fitted by setting \code{lpstr0 = identity}, albeit, not entirely reliably. 
See \code{\link{zipoisson}} for information that can be applied here. Else try the zero-altered geometric distribution (see \code{\link{zageometric}}). } %\section{Warning }{ % Numerical problems can occur. % Half-stepping is not uncommon. % If failure to converge occurs, make use of the argument \code{ipstr0}. % %} \seealso{ \code{\link{rzigeom}}, \code{\link{geometric}}, \code{\link{zageometric}}, \code{\link[stats]{rgeom}}. } \examples{ gdata <- data.frame(x2 = runif(nn <- 1000) - 0.5) gdata <- transform(gdata, x3 = runif(nn) - 0.5, x4 = runif(nn) - 0.5) gdata <- transform(gdata, eta1 = 1.0 - 1.0 * x2 + 2.0 * x3, eta2 = -1.0, eta3 = 0.5) gdata <- transform(gdata, prob1 = logit(eta1, inverse = TRUE), prob2 = logit(eta2, inverse = TRUE), prob3 = logit(eta3, inverse = TRUE)) gdata <- transform(gdata, y1 = rzigeom(nn, prob1, pstr0 = prob3), y2 = rzigeom(nn, prob2, pstr0 = prob3), y3 = rzigeom(nn, prob2, pstr0 = prob3)) with(gdata, table(y1)) with(gdata, table(y2)) with(gdata, table(y3)) head(gdata) fit1 <- vglm(y1 ~ x2 + x3 + x4, zigeometric, gdata, trace = TRUE) coef(fit1, matrix = TRUE) fit2 <- vglm(cbind(y2, y3) ~ 1, zigeometric, gdata, trace = TRUE) coef(fit2, matrix = TRUE) summary(fit2) } \keyword{models} \keyword{regression} VGAM/man/zigeomUC.Rd0000644000176000001440000000475212136651105013630 0ustar ripleyusers\name{Zigeom} \alias{Zigeom} \alias{dzigeom} \alias{pzigeom} \alias{qzigeom} \alias{rzigeom} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Zero-Inflated Geometric Distribution } \description{ Density, and random generation for the zero-inflated geometric distribution with parameter \code{pstr0}. } \usage{ dzigeom(x, prob, pstr0 = 0, log = FALSE) pzigeom(q, prob, pstr0 = 0) qzigeom(p, prob, pstr0 = 0) rzigeom(n, prob, pstr0 = 0) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{prob}{see \code{\link[stats]{dgeom}}.} \item{n}{number of observations. } \item{pstr0}{ Probability of structural zero (ignoring the geometric distribution), called \eqn{\phi}{phi}. The default value corresponds to the response having an ordinary geometric distribution. } \item{log}{ Logical. Return the logarithm of the answer? } } \details{ The probability function of \eqn{Y} is 0 with probability \eqn{\phi}{phi}, and \eqn{geometric(prob)} with probability \eqn{1-\phi}{1-phi}. Thus \deqn{P(Y=0) =\phi + (1-\phi) P(W=0)}{% P(Y=0) = phi + (1-phi) * P(W=0)} where \eqn{W} is distributed \eqn{geometric(prob)}. } \value{ \code{dzigeom} gives the density, \code{pzigeom} gives the distribution function, \code{qzigeom} gives the quantile function, and \code{rzigeom} generates random deviates. } %\references{ } \author{ Thomas W. Yee } \note{ The argument \code{pstr0} is recycled to the required length, and must have values which lie in the interval \eqn{[0,1]}. These functions actually allow for \emph{zero-deflation}. That is, the resulting probability of a zero count is \emph{less than} the nominal value of the parent distribution. See \code{\link{Zipois}} for more information. } \seealso{ \code{\link{zigeometric}}, \code{\link[stats]{dgeom}}. 
} \examples{ prob <- 0.5; pstr0 <- 0.2; x <- (-1):20 (ii <- dzigeom(x, prob, pstr0)) max(abs(cumsum(ii) - pzigeom(x, prob, pstr0))) # Should be 0 table(rzigeom(1000, prob, pstr0)) \dontrun{ x <- 0:10 barplot(rbind(dzigeom(x, prob, pstr0), dgeom(x, prob)), beside = TRUE, col = c("blue","orange"), ylab = "P[Y = y]", xlab = "y", las = 1, main = paste("zigeometric(", prob, ", pstr0 = ", pstr0, ") (blue) vs", " geometric(", prob, ") (orange)", sep = ""), names.arg = as.character(x)) } } \keyword{distribution} VGAM/man/zibinomial.Rd0000644000176000001440000001230412136651105014233 0ustar ripleyusers\name{zibinomial} \alias{zibinomial} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Zero-Inflated Binomial Distribution Family Function } \description{ Fits a zero-inflated binomial distribution by maximum likelihood estimation. } \usage{ zibinomial(lpstr0 = "logit", lprob = "logit", ipstr0 = NULL, zero = 1, mv = FALSE, imethod = 1) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lpstr0, lprob}{ Link functions for the parameter \eqn{\phi}{phi} and the usual binomial probability \eqn{\mu}{prob} parameter. See \code{\link{Links}} for more choices. For the zero-\emph{deflated} model see below. } % \item{epstr0, eprob}{ % epstr0 = list(), eprob = list(), % List. Extra argument for the respective links. % See \code{earg} in \code{\link{Links}} for general information. % } \item{ipstr0}{ Optional initial values for \eqn{\phi}{phi}, whose values must lie between 0 and 1. The default is to compute an initial value internally. If a vector then recycling is used. } % \item{zero}{ % An integer specifying which linear/additive predictor is modelled % as intercepts only. If given, the value must be either 1 or 2, % and the default is the first. Setting \code{zero = NULL} enables both % \eqn{\phi}{phi} and \eqn{\mu}{prob} to be modelled as a function of % the explanatory variables. % See \code{\link{CommonVGAMffArguments}} for more information. % } \item{mv}{ Logical. Currently it must be \code{FALSE}, which means the function does not handle multivariate responses. This is to remain compatible with the same argument in \code{\link{binomialff}}. } \item{zero, imethod}{ See \code{\link{CommonVGAMffArguments}} for information. } } \details{ This function uses Fisher scoring and is based on \deqn{P(Y=0) = \phi + (1-\phi) (1-\mu)^N,}{% P(Y=0) = phi + (1- phi) * (1-prob)^N,} for \eqn{y=0}, and \deqn{P(Y=y) = (1-\phi) {N \choose Ny} \mu^{Ny} (1-\mu)^{N(1-y)}.}{% P(Y=y) = (1-phi) * choose(N,Ny) * prob^(N*y) * (1-prob)^(N*(1-y)).} for \eqn{y=1/N,2/N,\ldots,1}. That is, the response is a sample proportion out of \eqn{N} trials, and the argument \code{size} in \code{\link{rzibinom}} is \eqn{N} here. The parameter \eqn{\phi}{phi} is the probability of a structural zero, and it satisfies \eqn{0 < \phi < 1}{0 < phi < 1}. The mean of \eqn{Y} is \eqn{E(Y)=(1-\phi) \mu}{E(Y) = (1-phi) * prob} and these are returned as the fitted values. By default, the two linear/additive predictors are \eqn{(logit(\phi), logit(\mu))^T}{(logit(phi), logit(prob))^T}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } %\references{ %} \author{ T. W.
Yee } \note{ The response variable must have one of the formats described by \code{\link{binomialff}}, e.g., a factor or two column matrix or a vector of sample proportions with the \code{weights} argument specifying the values of \eqn{N}. To work well, one needs large values of \eqn{N} and \eqn{\mu>0}{prob>0}, i.e., the larger \eqn{N} and \eqn{\mu}{prob} are, the better. If \eqn{N = 1} then the model is unidentifiable since the number of parameters is excessive. Setting \code{stepsize = 0.5}, say, may aid convergence. % 20130316; commenting out this: % For intercept-models and constant \eqn{N} over the \eqn{n} observations, % the \code{misc} slot has a component called \code{pobs0} which is the % estimate of the probability of an observed 0, i.e., \eqn{P(Y=0)}. % This family function currently cannot handle a multivariate % response (only \code{mv = FALSE} can be handled). % 20130316; adding this: Estimated probabilities of a structural zero and an observed zero are returned, as in \code{\link{zipoisson}}. The zero-\emph{deflated} binomial distribution might be fitted by setting \code{lpstr0 = identity}, albeit, not entirely reliably. See \code{\link{zipoisson}} for information that can be applied here. Else try the zero-altered binomial distribution (see \code{\link{zabinomial}}). } \section{Warning }{ Numerical problems can occur. Half-stepping is not uncommon. If failure to converge occurs, make use of the argument \code{ipstr0}. } \seealso{ \code{\link{rzibinom}}, \code{\link{binomialff}}, \code{\link{posbinomial}}, \code{\link[stats:Binomial]{rbinom}}. } \examples{ size <- 10 # Number of trials; N in the notation above nn <- 200 zibdata <- data.frame(pstr0 = logit( 0, inverse = TRUE), # 0.50 mubin = logit(-1, inverse = TRUE), # Mean of usual binomial sv = rep(size, length = nn)) zibdata <- transform(zibdata, y = rzibinom(nn, size = sv, prob = mubin, pstr0 = pstr0)) with(zibdata, table(y)) fit <- vglm(cbind(y, sv - y) ~ 1, zibinomial, zibdata, trace = TRUE) fit <- vglm(cbind(y, sv - y) ~ 1, zibinomial, zibdata, trace = TRUE, stepsize = 0.5) coef(fit, matrix = TRUE) Coef(fit) # Useful for intercept-only models fit@misc$pobs0 # Estimate of P(Y = 0) head(fitted(fit)) with(zibdata, mean(y)) # Compare this with fitted(fit) summary(fit) } \keyword{models} \keyword{regression} VGAM/man/zibinomUC.Rd0000644000176000001440000000616212136651105014002 0ustar ripleyusers\name{Zibinom} \alias{Zibinom} \alias{dzibinom} \alias{pzibinom} \alias{qzibinom} \alias{rzibinom} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Zero-Inflated Binomial Distribution } \description{ Density, distribution function, quantile function and random generation for the zero-inflated binomial distribution with parameter \code{pstr0}. } \usage{ dzibinom(x, size, prob, pstr0 = 0, log = FALSE) pzibinom(q, size, prob, pstr0 = 0, lower.tail = TRUE, log.p = FALSE) qzibinom(p, size, prob, pstr0 = 0, lower.tail = TRUE, log.p = FALSE) rzibinom(n, size, prob, pstr0 = 0) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{size}{number of trials. It is the \eqn{N} symbol in the formula given in \code{\link{zibinomial}}. } \item{prob}{probability of success on each trial. } \item{n}{number of observations. Must be a single positive integer. 
} \item{log, log.p, lower.tail}{ Arguments that are passed on to \code{\link[stats:Binomial]{pbinom}}.} \item{pstr0}{ Probability of a structural zero (i.e., ignoring the binomial distribution), called \eqn{\phi}{phi}. The default value of \eqn{\phi=0}{phi=0} corresponds to the response having an ordinary binomial distribution. } } \details{ The probability function of \eqn{Y} is 0 with probability \eqn{\phi}{phi}, and \eqn{Binomial(size, prob)}{Binomial(size, prob)} with probability \eqn{1-\phi}{1-phi}. Thus \deqn{P(Y=0) =\phi + (1-\phi) P(W=0)}{% P(Y=0) = phi + (1-phi) * P(W=0)} where \eqn{W} is distributed \eqn{Binomial(size, prob)}{Binomial(size, prob)}. } \value{ \code{dzibinom} gives the density, \code{pzibinom} gives the distribution function, \code{qzibinom} gives the quantile function, and \code{rzibinom} generates random deviates. } %\references{ } \author{ Thomas W. Yee } \note{ The argument \code{pstr0} is recycled to the required length, and must have values which lie in the interval \eqn{[0,1]}. These functions actually allow for \emph{zero-deflation}. That is, the resulting probability of a zero count is \emph{less than} the nominal value of the parent distribution. See \code{\link{Zipois}} for more information. } \seealso{ \code{\link{zibinomial}}, \code{\link[stats:Binomial]{dbinom}}. } \examples{ prob <- 0.2; size <- 10; pstr0 <- 0.5 (ii <- dzibinom(0:size, size, prob, pstr0 = pstr0)) max(abs(cumsum(ii) - pzibinom(0:size, size, prob, pstr0 = pstr0))) # Should be 0 table(rzibinom(100, size, prob, pstr0 = pstr0)) table(qzibinom(runif(100), size, prob, pstr0 = pstr0)) round(dzibinom(0:10, size, prob, pstr0 = pstr0) * 100) # Should be similar \dontrun{ x <- 0:size barplot(rbind(dzibinom(x, size, prob, pstr0 = pstr0), dbinom(x, size, prob)), beside = TRUE, col = c("blue", "green"), ylab = "Probability", main = paste("ZIB(", size, ", ", prob, ", pstr0 = ", pstr0, ") (blue) vs", " Binomial(", size, ", ", prob, ") (green)", sep=""), names.arg = as.character(x), las = 1, lwd = 2) } } \keyword{distribution} VGAM/man/zetaff.Rd0000644000176000001440000000543312136651105013362 0ustar ripleyusers\name{zetaff} \alias{zetaff} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Zeta Distribution Family Function } \description{ Estimates the parameter of the zeta distribution. } \usage{ zetaff(link = "loge", init.p = NULL, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{link, init.p, zero}{ See \code{\link{CommonVGAMffArguments}} for more information. These arguments apply to the (positive) parameter \eqn{p}. See \code{\link{Links}} for more choices. Choosing \code{\link{loglog}} constrains \eqn{p>1}, but may fail if the maximum likelihood estimate is less than one. } } \details{ In this long tailed distribution the response must be a positive integer. The probability function for a response \eqn{Y} is \deqn{P(Y=y) = 1/[y^{p+1} \zeta(p+1)],\ \ \ p>0,\ \ \ y=1,2,...}{% P(Y=y) = 1/(y^(p+1) zeta(p+1)), p>0, y=1,2,...} where \eqn{\zeta}{zeta} is Riemann's zeta function. The parameter \eqn{p} is positive, therefore a log link is the default. The mean of \eqn{Y} is \eqn{\mu = \zeta(p) / \zeta(p+1)}{mu = zeta(p)/zeta(p+1)} (provided \eqn{p>1}) and these are the fitted values. The variance of \eqn{Y} is \eqn{\zeta(p-1) / \zeta(p+1) - \mu^2}{zeta(p-1) / zeta(p+1) - mu^2} provided \eqn{p>2}. It appears that good initial values are needed for successful convergence. 
If convergence is not obtained, try several values ranging from values near 0 to values about 10 or more. Multiple responses are handled. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } %Lindsey, J. K. (1995) %\emph{Modelling Frequency and Count Data}. %Oxford: Clarendon Press. \references{ pp.527-- of Chapter 11 of Johnson N. L., Kemp, A. W. and Kotz S. (2005) \emph{Univariate Discrete Distributions}, 3rd edition, Hoboken, New Jersey: Wiley. Knight, K. (2000) \emph{Mathematical Statistics}. Boca Raton: Chapman & Hall/CRC Press. } \author{ T. W. Yee } \note{ The \code{\link{zeta}} function may be used to compute values of the zeta function. } \seealso{ \code{\link{zeta}}, \code{\link{dzeta}}, \code{\link{hzeta}}, \code{\link{zipf}}. } \examples{ zdata <- data.frame(y = 1:5, w = c(63, 14, 5, 1, 2)) # Knight, p.304 fit <- vglm(y ~ 1, zetaff, zdata, trace = TRUE, weight = w, crit = "coef") (phat <- Coef(fit)) # 1.682557 with(zdata, cbind(round(dzeta(y, phat) * sum(w), 1), w)) with(zdata, weighted.mean(y, w)) fitted(fit, matrix = FALSE) predict(fit) # The following should be zero at the MLE: with(zdata, mean(log(rep(y, w))) + zeta(1+phat, deriv = 1) / zeta(1+phat)) } \keyword{models} \keyword{regression} % Also known as the Joos model or discrete Pareto distribution. VGAM/man/zetaUC.Rd0000644000176000001440000000324212136651105013272 0ustar ripleyusers\name{Zeta} \alias{Zeta} \alias{dzeta} %- Also NEED an '\alias' for EACH other topic documented here. \title{The Zeta Distribution } \description{ Density for the zeta distribution. } % zz p is not a good argument name, esp. with qzeta(p, p) \usage{ dzeta(x, p, log = FALSE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x}{ Numerical vector/matrix to evaluate the density. } \item{p}{ The parameter \eqn{p}. This must be positive. } \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the density is returned. } } \details{ The density function of the zeta distribution is given by \deqn{y^{-p-1} / \zeta(p+1)}{% y^(-p-1) / zeta(p+1)} where \eqn{p>0}, \eqn{y=1,2,\ldots}, and \eqn{\zeta}{zeta} is Riemann's zeta function. } \value{ Returns the density evaluated at \code{x}. } \references{ Johnson N. L., Kotz S., and Balakrishnan N. (1993) \emph{Univariate Discrete Distributions}, 2nd ed. New York: Wiley. % Lindsey, J. K. (2002zz) % \emph{Applied Statistical Modelling}, 2nd ed. % London: Chapman & Hall.zz % Knight, K. (2002zz) % Theory book. % London: Chapman & Hall.zz } \author{ T. W. Yee } \note{ The \pkg{VGAM} family function \code{\link{zetaff}} estimates the parameter \eqn{p}. } \section{Warning}{ This function has not been fully tested. } \seealso{ \code{\link{zeta}}, \code{\link{zetaff}}. } \examples{ dzeta(1:20, p = 2) \dontrun{ plot(1:6, dzeta(1:6, p = 4), type = "h", las = 1, ylab = "Probability", main = "zeta probability function; black: p = 4; blue: p = 2") points(0.10 + 1:6, dzeta(1:6, p = 2), type = "h", col = "blue") } } \keyword{distribution} VGAM/man/zeta.Rd0000644000176000001440000000700012136651105013036 0ustar ripleyusers\name{zeta} \alias{zeta} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Riemann's Zeta Function } \description{ Computes Riemann's zeta function and its first two derivatives. } \usage{ zeta(x, deriv = 0) } %- maybe also 'usage' for other objects documented here. 
\arguments{ \item{x}{ A complex-valued vector/matrix whose real values must be \eqn{\geq 1}{>= 1}. Otherwise, \code{x} may be real. If \code{deriv} is 1 or 2 then \code{x} must be real and positive. } \item{deriv}{ An integer equalling 0 or 1 or 2, which is the order of the derivative. The default means the function itself is computed ordinarily. } } \details{ While the usual definition involves an infinite series, more efficient methods have been devised to compute the value. In particular, this function uses Euler-Maclaurin summation. Theoretically, the zeta function can be computed over the whole complex plane because of analytic continuation. The formula used here for analytic continuation is \deqn{\zeta(s) = 2^s \pi^{s-1} \sin(\pi s/2) \Gamma(1-s) \zeta(1-s).}{% zeta(s) = 2^s * pi^(s-1) * sin(pi*s/2) * gamma(1-s) * zeta(1-s).} This is actually one of several formulas, but this one was discovered by Riemann himself and is called the \emph{functional equation}. } \section{Warning}{ This function has not been fully tested, especially the derivatives. In particular, analytic continuation does not work here for complex \code{x} with \code{Re(x)<1} because currently the \code{\link[base:Special]{gamma}} function does not handle complex arguments. } \value{ A vector/matrix of computed values. % The derivative is attached as an attribute zz. } \references{ Riemann, B. (1859) Ueber die Anzahl der Primzahlen unter einer gegebenen Grosse. \emph{Monatsberichte der Berliner Akademie, November 1859}. Edwards, H. M. (1974) \emph{Riemann's Zeta Function}. Academic Press: New York. Markman, B. (1965) The Riemann zeta function. \emph{BIT}, \bold{5}, 138--141. Abramowitz, M. and Stegun, I. A. (1972) \emph{Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables}, New York: Dover Publications Inc. } \author{ T. W. Yee, with the help of Garry J. Tee. } \note{ Estimation of the parameter of the zeta distribution can be achieved with \code{\link{zetaff}}. } \seealso{ \code{\link{zetaff}}, \code{\link{lerch}}, \code{\link[base:Special]{gamma}}. } \examples{ zeta(2:10) \dontrun{ curve(zeta, -13, 0.8, xlim = c(-12, 10), ylim = c(-1, 4), col = "orange", las = 1, main = expression({zeta}(x))) curve(zeta, 1.2, 12, add = TRUE, col = "orange") abline(v = 0, h = c(0, 1), lty = "dashed", col = "gray") # Close up plot: curve(zeta, -14, -0.4, col = "orange", main = expression({zeta}(x))) abline(v = 0, h = 0, lty = "dashed", col = "gray") x <- seq(0.04, 0.8, len = 100) # Plot of the first derivative plot(x, zeta(x, deriv = 1), type = "l", las = 1, col = "blue", xlim = c(0.04, 3), ylim = c(-6, 0), main = "zeta'(x)") x <- seq(1.2, 3, len = 100) lines(x, zeta(x, deriv = 1), col = "blue") abline(v = 0, h = 0, lty = "dashed", col = "gray") } zeta(2) - pi^2 / 6 # Should be zero zeta(4) - pi^4 / 90 # Should be zero zeta(6) - pi^6 / 945 # Should be 0 zeta(8) - pi^8 / 9450 # Should be 0 # zeta(0, deriv = 1) + 0.5 * log(2*pi) # Should be 0 } \keyword{math} % curve(zeta, -13, 0.8, xlim = c(-12, 10), ylim = c(-1, 4), col = "orange") % curve(zeta, 1.2, 12, add = TRUE, col = "orange") % abline(v = 0, h = c(0,1), lty = "dashed") VGAM/man/zero.Rd0000644000176000001440000000576112136651105013066 0ustar ripleyusers\name{zero} % \alias{zeroarg} \alias{zero} \title{ The zero Argument in VGAM Family Functions } \description{ The \code{zero} argument allows users to conveniently model certain linear/additive predictors as intercepts only. } % \usage{ % VGAMfamilyFunction(zero = 3) % } \value{ Nothing is returned.
It is simply a convenient argument for constraining certain linear/additive predictors to be an intercept only. } \section{Warning }{ The use of other arguments may conflict with the \code{zero} argument. For example, using \code{constraints} to input constraint matrices may conflict with the \code{zero} argument. Another example is the argument \code{parallel}. In general, users should not assume any particular order of precedence when there is potential conflict of definition. Currently no checking for consistency is made. The argument \code{zero} may be renamed in the future to something better. } \section{Side Effects}{ The argument creates the appropriate constraint matrices internally. } \details{ Often a certain parameter needs to be modelled simply while other parameters in the model may be more complex, for example, the \eqn{\lambda}{lambda} parameter in LMS-Box-Cox quantile regression should be modelled more simply than its \eqn{\mu}{mu} parameter. Another example is the \eqn{\xi}{xi} parameter in a GEV distribution which should be modelled more simply than its \eqn{\mu}{mu} parameter. Using the \code{zero} argument allows this to be fitted conveniently without having to input all the constraint matrices explicitly. The \code{zero} argument should be assigned an integer vector from the set \{\code{1:M}\} where \code{M} is the number of linear/additive predictors. Full details about constraint matrices can be found in the references. } \author{T. W. Yee } \note{ In all \pkg{VGAM} family functions \code{zero = NULL} means none of the linear/additive predictors are modelled as intercepts only. Almost all \pkg{VGAM} family functions have \code{zero = NULL} as the default, but there are some exceptions, e.g., \code{\link{binom2.or}}. Typing something like \code{coef(fit, matrix = TRUE)} is a useful way to ensure that the \code{zero} argument has worked as expected. } \references{ Yee, T. W. and Wild, C. J. (1996) Vector generalized additive models. \emph{Journal of the Royal Statistical Society, Series B, Methodological}, \bold{58}, 481--493. Yee, T. W. and Hastie, T. J. (2003) Reduced-rank vector generalized linear models. \emph{Statistical Modelling}, \bold{3}, 15--41. \url{http://www.stat.auckland.ac.nz/~yee} } \seealso{ \code{\link{constraints}}. } \examples{ args(multinomial) args(binom2.or) args(gpd) # LMS quantile regression example fit <- vglm(BMI ~ bs(age, df = 4), lms.bcg(zero = c(1,3)), bmi.nz, trace = TRUE) coef(fit, matrix = TRUE) } \keyword{models} \keyword{regression} \keyword{programming} %zz Here is a conflict which is not picked up (no internal checking done) VGAM/man/zapoisson.Rd0000644000176000001440000001323312136651105014125 0ustar ripleyusers\name{zapoisson} \alias{zapoisson} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Zero-Altered Poisson Distribution } \description{ Fits a zero-altered Poisson distribution based on a conditional model involving a Bernoulli distribution and a positive-Poisson distribution. } \usage{ zapoisson(lpobs0 = "logit", llambda = "loge", zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lpobs0}{ Link function for the parameter \eqn{p_0}{pobs0}, called \code{pobs0} here. See \code{\link{Links}} for more choices. } \item{llambda}{ Link function for the usual \eqn{\lambda}{lambda} parameter. See \code{\link{Links}} for more choices. } % \item{epobs0, elambda}{ % epobs0 = list(), elambda = list(), % Extra argument for the respective links.
% See \code{earg} in \code{\link{Links}} for general information. % } \item{zero}{ See \code{\link{CommonVGAMffArguments}} for more information. % Integer valued vector, usually assigned \eqn{-1} or \eqn{1} if used % at all. Specifies which of the two linear/additive predictors are % modelled as an intercept only. % By default, both linear/additive predictors are modelled using % the explanatory variables. % If \code{zero = 1} then the \eqn{p_0}{pobs0} parameter % (after \code{lpobs0} is applied) is modelled as a single unknown % number that is estimated. It is modelled as a function of the % explanatory variables by \code{zero = NULL}. A negative value % means that the value is recycled, so setting \eqn{-1} means % all \eqn{p_0}{pobs0} are intercept-only (for multivariate responses). } } \details{ The response \eqn{Y} is zero with probability \eqn{p_0}{pobs0}, else \eqn{Y} has a positive-Poisson(\eqn{\lambda)}{lambda)} distribution with probability \eqn{1-p_0}{1-pobs0}. Thus \eqn{0 < p_0 < 1}{0 < pobs0 < 1}, which is modelled as a function of the covariates. The zero-altered Poisson distribution differs from the zero-inflated Poisson distribution in that the former has zeros coming from one source, whereas the latter has zeros coming from the Poisson distribution too. Some people call the zero-altered Poisson a \emph{hurdle} model. For one response/species, by default, the two linear/additive predictors are \eqn{(logit(p_0), \log(\lambda))^T}{(logit(pobs0), log(lambda))^T}. Fisher scoring is implemented. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. The \code{fitted.values} slot of the fitted object, which should be extracted by the generic function \code{fitted}, returns the mean \eqn{\mu}{mu} which is given by \deqn{\mu = (1-p_0) \lambda / [1 - \exp(-\lambda)].}{% mu = (1-pobs0) * lambda / [1 - exp(-lambda)].} } \references{ Welsh, A. H., Cunningham, R. B., Donnelly, C. F. and Lindenmayer, D. B. (1996) Modelling the abundances of rare species: statistical models for counts with extra zeros. \emph{Ecological Modelling}, \bold{88}, 297--308. Angers, J-F. and Biswas, A. (2003) A Bayesian analysis of zero-inflated generalized Poisson model. \emph{Computational Statistics & Data Analysis}, \bold{42}, 37--46. Documentation accompanying the \pkg{VGAM} package at \url{http://www.stat.auckland.ac.nz/~yee} contains further information and examples. } %20111123; this has been fixed up with proper FS using EIM. %\section{Warning }{ % Inference obtained from \code{summary.vglm} % and \code{summary.vgam} may or may not be correct. % In particular, the p-values, standard errors and degrees of % freedom may need adjustment. Use simulation on artificial % data to check that these are reasonable. % % %} \author{ T. W. Yee } \note{ There are subtle differences between this family function and \code{\link{zipoisson}} and \code{\link{yip88}}. In particular, \code{\link{zipoisson}} is a \emph{mixture} model whereas \code{zapoisson()} and \code{\link{yip88}} are \emph{conditional} models. Note this family function allows \eqn{p_0}{pobs0} to be modelled as functions of the covariates. % It can be thought of an extension % of \code{\link{yip88}}, which is also a conditional model but its % \eqn{\phi}{phi} parameter is a scalar only. This family function effectively combines \code{\link{pospoisson}} and \code{\link{binomialff}} into one family function. 
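Because the ZAP likelihood factorizes into a binary part and a positive-count part, a sketch of this combination is to fit the two components separately (here \code{zapdata} refers to the data frame constructed in the example below; this should give essentially the same coefficient estimates as the joint fit):
\preformatted{
  fit0 <- vglm(as.numeric(y == 0) ~ x2, binomialff, zapdata)  # The pobs0 part
  fitp <- vglm(y ~ x2, pospoisson, subset(zapdata, y > 0))    # The lambda part
}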
This family function can handle a multivariate response, e.g., more than one species. } \seealso{ \code{\link{rzapois}}, \code{\link{zipoisson}}, \code{\link{pospoisson}}, \code{\link{posnegbinomial}}, \code{\link{binomialff}}, \code{\link{rpospois}}, \code{\link{CommonVGAMffArguments}}. } \examples{ zapdata <- data.frame(x2 = runif(nn <- 1000)) zapdata <- transform(zapdata, pobs0 = logit( -1 + 1*x2, inverse = TRUE), lambda = loge(-0.5 + 2*x2, inverse = TRUE)) zapdata <- transform(zapdata, y = rzapois(nn, lambda, pobs0 = pobs0)) with(zapdata, table(y)) fit <- vglm(y ~ x2, zapoisson, zapdata, trace = TRUE) fit <- vglm(y ~ x2, zapoisson, zapdata, trace = TRUE, crit = "coef") head(fitted(fit)) head(predict(fit)) head(predict(fit, untransform = TRUE)) coef(fit, matrix = TRUE) summary(fit) # Another example ------------------------------ # Data from Angers and Biswas (2003) abdata <- data.frame(y = 0:7, w = c(182, 41, 12, 2, 2, 0, 0, 1)) abdata <- subset(abdata, w > 0) yy <- with(abdata, rep(y, w)) fit3 <- vglm(yy ~ 1, zapoisson, trace = TRUE, crit = "coef") coef(fit3, matrix = TRUE) Coef(fit3) # Estimate lambda (they get 0.6997 with SE 0.1520) head(fitted(fit3), 1) mean(yy) # compare this with fitted(fit3) } \keyword{models} \keyword{regression} VGAM/man/zapoisUC.Rd0000644000176000001440000000433212136651105013635 0ustar ripleyusers\name{Zapois} \alias{Zapois} \alias{dzapois} \alias{pzapois} \alias{qzapois} \alias{rzapois} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Zero-Altered Poisson Distribution } \description{ Density, distribution function, quantile function and random generation for the zero-altered Poisson distribution with parameter \code{pobs0}. } \usage{ dzapois(x, lambda, pobs0 = 0, log = FALSE) pzapois(q, lambda, pobs0 = 0) qzapois(p, lambda, pobs0 = 0) rzapois(n, lambda, pobs0 = 0) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. If \code{length(n) > 1} then the length is taken to be the number required. } \item{lambda}{ Vector of positive means. } \item{pobs0}{ Probability of zero, called \eqn{pobs0}. The default value of \code{pobs0 = 0} corresponds to the response having a positive Poisson distribution. } \item{log}{ Logical. Return the logarithm of the answer? } } \details{ The probability function of \eqn{Y} is 0 with probability \code{pobs0}, else a positive \eqn{Poisson(\lambda)}{Poisson(lambda)}. } \value{ \code{dzapois} gives the density, \code{pzapois} gives the distribution function, \code{qzapois} gives the quantile function, and \code{rzapois} generates random deviates. } %\references{ } \author{ Thomas W. Yee } \note{ The argument \code{pobs0} is recycled to the required length, and must have values which lie in the interval \eqn{[0,1]}. } \seealso{ \code{\link{zapoisson}}. 
} \examples{ lambda <- 3; pobs0 <- 0.2; x <- (-1):7 (ii <- dzapois(x, lambda, pobs0)) max(abs(cumsum(ii) - pzapois(x, lambda, pobs0))) # Should be 0 table(rzapois(100, lambda, pobs0)) table(qzapois(runif(100), lambda, pobs0)) round(dzapois(0:10, lambda, pobs0) * 100) # Should be similar \dontrun{ x <- 0:10 barplot(rbind(dzapois(x, lambda, pobs0), dpois(x, lambda)), beside = TRUE, col = c("blue", "green"), las = 1, main = paste("ZAP(", lambda, ", pobs0 = ", pobs0, ") [blue] vs", " Poisson(", lambda, ") [green] densities", sep = ""), names.arg = as.character(x), ylab = "Probability") } } \keyword{distribution} VGAM/man/zanegbinomial.Rd0000644000176000001440000001377312136651105014730 0ustar ripleyusers\name{zanegbinomial} \alias{zanegbinomial} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Zero-Altered Negative Binomial Distribution } \description{ Fits a zero-altered negative binomial distribution based on a conditional model involving a binomial distribution and a positive-negative binomial distribution. } \usage{ zanegbinomial(lpobs0 = "logit", lmunb = "loge", lsize = "loge", ipobs0 = NULL, isize = NULL, zero = c(-1, -3), imethod = 1, nsimEIM = 250, shrinkage.init = 0.95) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lpobs0}{ Link function for the parameter \eqn{p_0}{pobs0}, called \code{pobs0} here. See \code{\link{Links}} for more choices. } \item{lmunb}{ Link function applied to the \code{munb} parameter, which is the mean \eqn{\mu_{nb}}{munb} of an ordinary negative binomial distribution. See \code{\link{Links}} for more choices. } \item{lsize}{ Parameter link function applied to the reciprocal of the dispersion parameter, called \code{k}. That is, as \code{k} increases, the variance of the response decreases. See \code{\link{Links}} for more choices. } % \item{epobs0, emunb, esize}{ % List. Extra argument for the respective links. % See \code{earg} in \code{\link{Links}} for general information. % epobs0 = list(), emunb = list(), esize = list(), % } \item{ipobs0, isize}{ Optional initial values for \eqn{p_0}{pobs0} and \code{k}. If given, it is okay to give one value for each response/species by inputting a vector whose length is the number of columns of the response matrix. } \item{zero}{ Integer valued vector, may be assigned, e.g., \eqn{-3} or \eqn{3} if the probability of an observed value is to be modelled with the covariates. Specifies which of the three linear predictors are modelled as an intercept only. By default, the \code{k} and \eqn{p_0}{pobs0} parameters for each response are modelled as single unknown numbers that are estimated. All parameters can be modelled as a function of the explanatory variables by setting \code{zero = NULL}. A negative value means that the value is recycled, so setting \eqn{-3} means all \code{k} are intercept-only. See \code{\link{CommonVGAMffArguments}} for more information. } \item{nsimEIM, imethod}{ See \code{\link{CommonVGAMffArguments}}. } \item{shrinkage.init}{ See \code{\link{negbinomial}} and \code{\link{CommonVGAMffArguments}}. } } \details{ The response \eqn{Y} is zero with probability \eqn{p_0}{pobs0}, or \eqn{Y} has a positive-negative binomial distribution with probability \eqn{1-p_0}{1-pobs0}. Thus \eqn{0 < p_0 < 1}{0 < pobs0 < 1}, which is modelled as a function of the covariates. 
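A sketch of this conditional construction via simulation (the parameter values are arbitrary; compare with \code{\link{rzanegbin}}):
\preformatted{
  n <- 1000; pobs0 <- 0.3; munb <- 4; k <- 2
  is0 <- rbinom(n, size = 1, prob = pobs0)  # Indicators of a zero response
  y <- ifelse(is0 == 1, 0, rposnegbin(n, munb = munb, size = k))
  mean(y == 0)  # Should be close to pobs0
}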
The zero-altered negative binomial distribution differs from the zero-inflated negative binomial distribution in that the former has zeros coming from one source, whereas the latter has zeros coming from the negative binomial distribution too. The zero-inflated negative binomial distribution is implemented in the \pkg{VGAM} package. Some people call the zero-altered negative binomial a \emph{hurdle} model. For one response/species, by default, the three linear/additive predictors are \eqn{(logit(p_0), \log(\mu_{nb}), \log(k))^T}{(logit(pobs0), log(munb), log(k))^T}. This vector is recycled for multiple species. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. The \code{fitted.values} slot of the fitted object, which should be extracted by the generic function \code{fitted}, returns the mean \eqn{\mu}{mu} which is given by \deqn{\mu = (1-p_0) \mu_{nb} / [1 - (k/(k+\mu_{nb}))^k].}{% mu = (1-pobs0) * munb / [1 - (k/(k+munb))^k].} } \references{ Welsh, A. H., Cunningham, R. B., Donnelly, C. F. and Lindenmayer, D. B. (1996) Modelling the abundances of rare species: statistical models for counts with extra zeros. \emph{Ecological Modelling}, \bold{88}, 297--308. } \section{Warning }{ Convergence for this \pkg{VGAM} family function seems to depend quite strongly on providing good initial values. This \pkg{VGAM} family function is computationally expensive and usually runs slowly; setting \code{trace = TRUE} is useful for monitoring convergence. Inference obtained from \code{summary.vglm} and \code{summary.vgam} may or may not be correct. In particular, the p-values, standard errors and degrees of freedom may need adjustment. Use simulation on artificial data to check that these are reasonable. } \author{ T. W. Yee } \note{ Note this family function allows \eqn{p_0}{pobs0} to be modelled as functions of the covariates provided \code{zero} is set correctly. It is a conditional model, not a mixture model. Simulated Fisher scoring is the algorithm. This family function effectively combines \code{\link{posnegbinomial}} and \code{\link{binomialff}} into one family function. This family function can handle a multivariate response, e.g., more than one species. } \seealso{ \code{\link{dzanegbin}}, \code{\link{posnegbinomial}}, \code{\link{negbinomial}}, \code{\link{binomialff}}, \code{\link{rposnegbin}}, \code{\link{zinegbinomial}}, \code{\link{zipoisson}}, \code{\link[stats:NegBinomial]{dnbinom}}, \code{\link{CommonVGAMffArguments}}. } \examples{ \dontrun{ zdata <- data.frame(x2 = runif(nn <- 2000)) zdata <- transform(zdata, pobs0 = logit(-1 + 2*x2, inverse = TRUE)) zdata <- transform(zdata, y1 = rzanegbin(nn, munb = exp(0+2*x2), size = exp(1), pobs0 = pobs0), y2 = rzanegbin(nn, munb = exp(1+2*x2), size = exp(1), pobs0 = pobs0)) with(zdata, table(y1)) with(zdata, table(y2)) fit <- vglm(cbind(y1, y2) ~ x2, zanegbinomial, zdata, trace = TRUE) coef(fit, matrix = TRUE) head(fitted(fit)) head(predict(fit)) } } \keyword{models} \keyword{regression} VGAM/man/zanegbinUC.Rd0000644000176000001440000000510012136651105014117 0ustar ripleyusers\name{Zanegbin} \alias{Zanegbin} \alias{dzanegbin} \alias{pzanegbin} \alias{qzanegbin} \alias{rzanegbin} %- Also NEED an '\alias' for EACH other topic documented here. 
\title{ Zero-Altered Negative Binomial Distribution } \description{ Density, distribution function, quantile function and random generation for the zero-altered negative binomial distribution with parameter \code{pobs0}. } \usage{ dzanegbin(x, size, prob = NULL, munb = NULL, pobs0 = 0, log = FALSE) pzanegbin(q, size, prob = NULL, munb = NULL, pobs0 = 0) qzanegbin(p, size, prob = NULL, munb = NULL, pobs0 = 0) rzanegbin(n, size, prob = NULL, munb = NULL, pobs0 = 0) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. If \code{length(n) > 1} then the length is taken to be the number required. } \item{size, prob, munb, log}{ Parameters from the ordinary negative binomial distribution (see \code{\link[stats:NegBinomial]{dnbinom}}). Some arguments have been renamed slightly. } \item{pobs0}{ Probability of zero, called \eqn{pobs0}. The default value of \code{pobs0 = 0} corresponds to the response having a positive negative binomial distribution. } } \details{ The probability function of \eqn{Y} is 0 with probability \code{pobs0}, else a positive negative binomial(\eqn{\mu_{nb}}{munb}, size) distribution. } \value{ \code{dzanegbin} gives the density, \code{pzanegbin} gives the distribution function, \code{qzanegbin} gives the quantile function, and \code{rzanegbin} generates random deviates. } %\references{ } \author{ Thomas W. Yee } \note{ The argument \code{pobs0} is recycled to the required length, and must have values which lie in the interval \eqn{[0,1]}. } \seealso{ \code{\link{zanegbinomial}}, \code{\link{rposnegbin}}. } \examples{ munb <- 3; size <- 4; pobs0 <- 0.3; x <- (-1):7 dzanegbin(x, munb = munb, size = size, pobs0 = pobs0) table(rzanegbin(100, munb = munb, size = size, pobs0 = pobs0)) \dontrun{ x <- 0:10 barplot(rbind(dzanegbin(x, munb = munb, size = size, pobs0 = pobs0), dnbinom(x, mu = munb, size = size)), beside = TRUE, col = c("blue", "green"), cex.main = 0.7, las = 1, ylab = "Probability", names.arg = as.character(x), main = paste("ZANB(munb = ", munb, ", size = ", size, ", pobs0 = ", pobs0, ") [blue] vs", " NB(mu = ", munb, ", size = ", size, ") [green] densities", sep = "")) } } \keyword{distribution} VGAM/man/zageometric.Rd0000644000176000001440000001014512136651105014410 0ustar ripleyusers\name{zageometric} \alias{zageometric} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Zero-Altered Geometric Distribution } \description{ Fits a zero-altered geometric distribution based on a conditional model involving a Bernoulli distribution and a positive-geometric distribution. } \usage{ zageometric(lpobs0 = "logit", lprob = "logit", imethod = 1, ipobs0 = NULL, iprob = NULL, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lpobs0}{ Link function for the parameter \eqn{p_0}{pobs0} or \eqn{\phi}{phi}, called \code{pobs0} or \code{phi} here. See \code{\link{Links}} for more choices. } \item{lprob}{ Parameter link function applied to the probability of success, called \code{prob} or \eqn{p}. See \code{\link{Links}} for more choices. } % \item{epobs0, eprob}{ % List. Extra argument for the respective links. % See \code{earg} in \code{\link{Links}} for general information. % epobs0 = list(), eprob = list(), % } \item{ipobs0, iprob}{ Optional initial values for the parameters. If given, they must be in range. For multi-column responses, these are recycled sideways.
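For example, with a two-column response, \code{ipobs0 = c(0.2, 0.3)} would supply one initial value per response; these particular values are purely illustrative.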
} \item{zero, imethod}{ See \code{\link{CommonVGAMffArguments}}. } } \details{ The response \eqn{Y} is zero with probability \eqn{p_0}{pobs0}, or \eqn{Y} has a positive-geometric distribution with probability \eqn{1-p_0}{1-pobs0}. Thus \eqn{0 < p_0 < 1}{0 < pobs0 < 1}, which is modelled as a function of the covariates. The zero-altered geometric distribution differs from the zero-inflated geometric distribution in that the former has zeros coming from one source, whereas the latter has zeros coming from the geometric distribution too. The zero-inflated geometric distribution is implemented in the \pkg{VGAM} package. Some people call the zero-altered geometric a \emph{hurdle} model. The input can be a matrix (multiple responses). By default, the two linear/additive predictors are \eqn{(logit(\phi), logit(p))^T}{(logit(phi), logit(prob))^T}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. The \code{fitted.values} slot of the fitted object, which should be extracted by the generic function \code{fitted}, returns the mean \eqn{\mu}{mu} which is given by \deqn{\mu = (1-\phi) / p.}{% mu = (1 - phi) / p.} } %\references{ % % %} \section{Warning }{ Convergence for this \pkg{VGAM} family function seems to depend quite strongly on providing good initial values. Inference obtained from \code{summary.vglm} and \code{summary.vgam} may or may not be correct. In particular, the p-values, standard errors and degrees of freedom may need adjustment. Use simulation on artificial data to check that these are reasonable. } \author{ T. W. Yee } \note{ Note this family function allows \eqn{p_0}{pobs0} to be modelled as functions of the covariates. It is a conditional model, not a mixture model. This family function effectively combines \code{\link{binomialff}}, \code{posgeometric()} and \code{\link{geometric}} into one family function. However, \code{posgeometric()} is not written because it is trivially related to \code{\link{geometric}}. } \seealso{ \code{\link{dzageom}}, % \code{\link{posgeometric}}, \code{\link{geometric}}, \code{\link{zigeometric}}, \code{\link[stats:Geometric]{dgeom}}, \code{\link{CommonVGAMffArguments}}. } \examples{ zdata <- data.frame(x2 = runif(nn <- 1000)) zdata <- transform(zdata, pobs0 = logit(-1 + 2*x2, inverse = TRUE), prob = logit(-2 + 3*x2, inverse = TRUE)) zdata <- transform(zdata, y1 = rzageom(nn, prob = prob, pobs0 = pobs0), y2 = rzageom(nn, prob = prob, pobs0 = pobs0)) with(zdata, table(y1)) fit <- vglm(cbind(y1, y2) ~ x2, zageometric, zdata, trace = TRUE) coef(fit, matrix = TRUE) head(fitted(fit)) head(predict(fit)) summary(fit) } \keyword{models} \keyword{regression} VGAM/man/zageomUC.Rd0000644000176000001440000000436712136651105013620 0ustar ripleyusers\name{Zageom} \alias{Zageom} \alias{dzageom} \alias{pzageom} \alias{qzageom} \alias{rzageom} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Zero-Altered Geometric Distribution } \description{ Density, distribution function, quantile function and random generation for the zero-altered geometric distribution with parameter \code{pobs0}. } \usage{ dzageom(x, prob, pobs0 = 0, log = FALSE) pzageom(q, prob, pobs0 = 0) qzageom(p, prob, pobs0 = 0) rzageom(n, prob, pobs0 = 0) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations.
If \code{length(n) > 1} then the length is taken to be the number required. } \item{prob, log}{ Parameters from the ordinary geometric distribution (see \code{\link[stats:Geometric]{dgeom}}). } \item{pobs0}{ Probability of (an observed) zero, called \eqn{pobs0}. The default value of \code{pobs0 = 0} corresponds to the response having a positive geometric distribution. } } \details{ The probability function of \eqn{Y} is 0 with probability \code{pobs0}, else a positive geometric(prob) distribution. } \value{ \code{dzageom} gives the density, \code{pzageom} gives the distribution function, \code{qzageom} gives the quantile function, and \code{rzageom} generates random deviates. } %\references{ } \author{ Thomas W. Yee } \note{ The argument \code{pobs0} is recycled to the required length, and must have values which lie in the interval \eqn{[0,1]}. } \seealso{ \code{\link{zageometric}}, \code{\link{zigeometric}}, \code{\link{rposgeom}}. } \examples{ prob <- 0.35; pobs0 <- 0.05; x <- (-1):7 dzageom(x, prob = prob, pobs0 = pobs0) table(rzageom(100, prob = prob, pobs0 = pobs0)) \dontrun{ x <- 0:10 barplot(rbind(dzageom(x, prob = prob, pobs0 = pobs0), dgeom(x, prob = prob)), beside = TRUE, col = c("blue", "orange"), cex.main = 0.7, las = 1, ylab = "Probability", names.arg = as.character(x), main = paste("ZAG(prob = ", prob, ", pobs0 = ", pobs0, ") [blue] vs", " Geometric(prob = ", prob, ") [orange] densities", sep = "")) } } \keyword{distribution} VGAM/man/zabinomial.Rd0000644000176000001440000000702712136651105014231 0ustar ripleyusers\name{zabinomial} \alias{zabinomial} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Zero-Altered Binomial Distribution } \description{ Fits a zero-altered binomial distribution based on a conditional model involving a Bernoulli distribution and a positive-binomial distribution. } \usage{ zabinomial(lprob = "logit", lpobs0 = "logit", iprob = NULL, ipobs0 = NULL, imethod = 1, zero = 2) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lprob}{ Parameter link function applied to the probability parameter of the binomial distribution. See \code{\link{Links}} for more choices. } \item{lpobs0}{ Link function for the parameter \eqn{p_0}{pobs0}, called \code{pobs0} here. See \code{\link{Links}} for more choices. } \item{iprob, ipobs0}{ See \code{\link{CommonVGAMffArguments}}. } \item{imethod, zero}{ See \code{\link{CommonVGAMffArguments}}. } } \details{ The response \eqn{Y} is zero with probability \eqn{p_0}{pobs0}, else \eqn{Y} has a positive-binomial distribution with probability \eqn{1-p_0}{1-pobs0}. Thus \eqn{0 < p_0 < 1}{0 < pobs0 < 1}, which may be modelled as a function of the covariates. The zero-altered binomial distribution differs from the zero-inflated binomial distribution in that the former has zeros coming from one source, whereas the latter has zeros coming from the binomial distribution too. The zero-inflated binomial distribution is implemented in \code{\link{zibinomial}}. Some people call the zero-altered binomial a \emph{hurdle} model. The input is currently a vector or one-column matrix. By default, the two linear/additive predictors are \eqn{(logit(p), logit(p_0))^T}{(logit(prob), logit(pobs0))^T}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}.
The \code{fitted.values} slot of the fitted object, which should be extracted by the generic function \code{fitted}, returns the mean \eqn{\mu}{mu} which is given by \deqn{\mu = (1-p_0) \mu_{b} / [1 - (1 - \mu_{b})^N]}{% mu = (1-pobs0) * mub / [1 - (1 - mub)^N]} where \eqn{\mu_{b}}{mub} is the usual binomial mean. } %\references{ % % %} %\section{Warning }{ % %} \author{ T. W. Yee } \note{ The response should be a two-column matrix of counts, with first column giving the number of successes. Note this family function allows \eqn{p_0}{pobs0} to be modelled as functions of the covariates by having \code{zero = NULL}. It is a conditional model, not a mixture model. This family function effectively combines \code{\link{posbinomial}} and \code{\link{binomialff}} into one family function. } \seealso{ \code{\link{dzabinom}}, \code{\link{zibinomial}}, \code{\link{posbinomial}}, \code{\link{binomialff}}, \code{\link[stats:Binomial]{dbinom}}, \code{\link{CommonVGAMffArguments}}. } \examples{ zdata <- data.frame(x2 = runif(nn <- 1000)) zdata <- transform(zdata, size = 10, prob = logit(-2 + 3*x2, inverse = TRUE), pobs0 = logit(-1 + 2*x2, inverse = TRUE)) zdata <- transform(zdata, y1 = rzabinom(nn, size = size, prob = prob, pobs0 = pobs0)) with(zdata, table(y1)) fit <- vglm(cbind(y1, size - y1) ~ x2, zabinomial(zero = NULL), zdata, trace = TRUE) coef(fit, matrix = TRUE) head(fitted(fit)) head(predict(fit)) summary(fit) } \keyword{models} \keyword{regression} VGAM/man/zabinomUC.Rd0000644000176000001440000000461112136651105013767 0ustar ripleyusers\name{Zabinom} \alias{Zabinom} \alias{dzabinom} \alias{pzabinom} \alias{qzabinom} \alias{rzabinom} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Zero-Altered Binomial Distribution } \description{ Density, distribution function, quantile function and random generation for the zero-altered binomial distribution with parameter \code{pobs0}. } \usage{ dzabinom(x, size, prob, pobs0 = 0, log = FALSE) pzabinom(q, size, prob, pobs0 = 0) qzabinom(p, size, prob, pobs0 = 0) rzabinom(n, size, prob, pobs0 = 0) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. If \code{length(n) > 1} then the length is taken to be the number required. } \item{size, prob, log}{ Parameters from the ordinary binomial distribution (see \code{\link[stats:Binomial]{dbinom}}). } \item{pobs0}{ Probability of (an observed) zero, called \eqn{pobs0}. The default value of \code{pobs0 = 0} corresponds to the response having a positive binomial distribution. } } \details{ The probability function of \eqn{Y} is 0 with probability \code{pobs0}, else a positive binomial(size, prob) distribution. } \value{ \code{dzabinom} gives the density, \code{pzabinom} gives the distribution function, \code{qzabinom} gives the quantile function, and \code{rzabinom} generates random deviates. } %\references{ } \author{ Thomas W. Yee } \note{ The argument \code{pobs0} is recycled to the required length, and must have values which lie in the interval \eqn{[0,1]}. } \seealso{ % \code{\link{zabinomial}}, \code{\link{zibinomial}}, \code{\link{rposbinom}}.
} \examples{ size <- 10; prob <- 0.15; pobs0 <- 0.05; x <- (-1):7 dzabinom(x, size = size, prob = prob, pobs0 = pobs0) table(rzabinom(100, size = size, prob = prob, pobs0 = pobs0)) \dontrun{ x <- 0:10 barplot(rbind(dzabinom(x, size = size, prob = prob, pobs0 = pobs0), dbinom(x, size = size, prob = prob)), beside = TRUE, col = c("blue", "orange"), cex.main = 0.7, las = 1, ylab = "Probability", names.arg = as.character(x), main = paste("ZAB(size = ", size, ", prob = ", prob, ", pobs0 = ", pobs0, ") [blue] vs", " Binom(size = ", size, ", prob = ", prob, ") [orange] densities", sep = "")) } } \keyword{distribution} VGAM/man/yulesimonUC.Rd0000644000176000001440000000266212136651105014360 0ustar ripleyusers\name{Yules} \alias{Yules} \alias{dyules} \alias{pyules} %\alias{qyules} \alias{ryules} \title{ Yule-Simon Distribution } \description{ Density, distribution function and random generation for the Yule-Simon distribution. } \usage{ dyules(x, rho, log = FALSE) pyules(q, rho) %qyules(p, rho) ryules(n, rho) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x, q}{ Vector of quantiles. For the density, it should be a vector with positive integer values in order for the probabilities to be positive. } % \item{p}{vector of probabilities.} \item{n}{number of observations. A single positive integer.} \item{rho}{ See \code{\link{yulesimon}}. } \item{log}{logical; if TRUE, the logarithm is returned. } } \details{ See \code{\link{yulesimon}}, the \pkg{VGAM} family function for estimating the parameter, for the formula of the probability density function and other details. } \value{ \code{dyules} gives the density, \code{pyules} gives the distribution function, and % \code{qyules} gives the quantile function, and \code{ryules} generates random deviates. } %\references{ % %} \author{ T. W. Yee } %\note{ %} \seealso{ \code{\link{yulesimon}}. } \examples{ dyules(1:20, 2.1) ryules(20, 2.1) round(1000 * dyules(1:8, 2)) table(ryules(1000, 2)) \dontrun{ x <- 0:6 plot(x, dyules(x, rho = 2.2), type = "h", las = 1, col = "blue") } } \keyword{distribution} VGAM/man/yulesimon.Rd0000644000176000001440000000507312136651105014127 0ustar ripleyusers\name{yulesimon} \alias{yulesimon} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Yule-Simon Family Function } \description{ Estimating the parameter of the Yule-Simon distribution. } \usage{ yulesimon(link = "loge", irho = NULL, nsimEIM = 200, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{link}{ Link function for the \eqn{\rho}{rho} parameter. See \code{\link{Links}} for more choices and for general information. } \item{irho}{ Optional initial value for the (positive) parameter. See \code{\link{CommonVGAMffArguments}} for more information. The default is to obtain an initial value internally. Use this argument if the default fails. } \item{nsimEIM, zero}{ See \code{\link{CommonVGAMffArguments}} for more information. } } \details{ The probability function is \deqn{f(y;\rho) = \rho \, \mbox{beta}(y,\rho+1),}{% f(y;rho) = rho*beta(y,rho+1),} where the parameter \eqn{\rho>0}{rho>0}, \eqn{\mbox{beta}}{beta} is the \code{\link[base]{beta}} function, and \eqn{y=1,2,\ldots}{y=1,2,...}. The function \code{\link{dyules}} computes this probability function. The mean of \eqn{Y}, which is returned as fitted values, is \eqn{\rho/(\rho-1)}{rho/(rho-1)} provided \eqn{\rho > 1}{rho > 1}.
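A quick simulation check of this mean formula is sketched below (any value of \eqn{\rho > 1}{rho > 1} would do):
\preformatted{rho <- 3
rho / (rho - 1)           # Theoretical mean
mean(ryules(10^5, rho))   # Should be close
}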
The variance of \eqn{Y} is \eqn{\rho^2/((\rho-1)^2 (\rho-2))}{rho^2/((rho-1)^2 (rho-2))} provided \eqn{\rho > 2}{rho > 2}. The distribution was named after Udny Yule and Herbert A. Simon. Simon originally called it the Yule distribution. This family function can handle multiple responses. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Simon, H. A. (1955) On a class of skew distribution functions. \emph{Biometrika}, \bold{42}, 425--440. } \author{ T. W. Yee } %\note{ %} \seealso{ \code{\link{ryules}}. } \examples{ ydata <- data.frame(x2 = runif(nn <- 1000)) ydata <- transform(ydata, y = ryules(nn, rho = exp(1.5 - x2))) with(ydata, table(y)) fit <- vglm(y ~ x2, yulesimon, ydata, trace = TRUE) coef(fit, matrix = TRUE) summary(fit) } \keyword{models} \keyword{regression} %# Generate some yulesimon random variates %set.seed(123) %nn = 400 %x = 1:20 %alpha = 1.1 # The parameter %probs = dyulesimon(x, alpha) %\dontrun{ %plot(x, probs, type="h", log="y")} %cs = cumsum(probs) %tab = table(cut(runif(nn), brea = c(0,cs,1))) %index = (1:length(tab))[tab>0] %y = rep(index, times=tab[index]) VGAM/man/yip88.Rd0000644000176000001440000001076512136651105013070 0ustar ripleyusers\name{yip88} \alias{yip88} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Zero-Inflated Poisson Distribution (Yip (1988) algorithm) } \description{ Fits a zero-inflated Poisson distribution based on Yip (1988). } \usage{ yip88(link = "loge", n.arg = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{link}{ Link function for the usual \eqn{\lambda}{lambda} parameter. See \code{\link{Links}} for more choices. } \item{n.arg}{ The total number of observations in the data set. Needed when the response variable has all the zeros deleted from it, so that the number of zeros can be determined. } } \details{ The method implemented here, Yip (1988), maximizes a \emph{conditional} likelihood. Consequently, the methodology used here deletes the zeros from the data set, and is thus related to the positive Poisson distribution (where \eqn{P(Y=0) = 0}). The probability function of \eqn{Y} is 0 with probability \eqn{\phi}{phi}, and Poisson(\eqn{\lambda}{lambda}) with probability \eqn{1-\phi}{1-phi}. Thus \deqn{P(Y=0) =\phi + (1-\phi) P(W=0)}{% P(Y=0) = phi + (1-phi) * P(W=0)} where \eqn{W} is Poisson(\eqn{\lambda}{lambda}). The mean, \eqn{(1-\phi) \lambda}{(1-phi) * lambda}, can be obtained by the extractor function \code{fitted} applied to the object. This family function treats \eqn{\phi}{phi} as a scalar. If you want to model both \eqn{\phi}{phi} and \eqn{\lambda}{lambda} as a function of covariates, try \code{\link{zipoisson}}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, \code{\link{rrvglm}} and \code{\link{vgam}}. } \references{ Yip, P. (1988) Inference about the mean of a Poisson distribution in the presence of a nuisance parameter. \emph{The Australian Journal of Statistics}, \bold{30}, 299--306. Angers, J-F. and Biswas, A. (2003) A Bayesian analysis of zero-inflated generalized Poisson model. \emph{Computational Statistics & Data Analysis}, \bold{42}, 37--46. } \author{ Thomas W. Yee } \note{ The data may be inputted in two ways. 
The first is when the response is a vector of positive values, with the argument \code{n.arg} in \code{yip88} specifying the total number of observations. The second is simply to include all the data in the response. In this case, the zeros are trimmed off during the computation, and the \code{x} and \code{y} slots of the object, if assigned, will reflect this. The estimate of \eqn{\phi}{phi} is placed in the \code{misc} slot as \code{@misc$pstr0}. However, this estimate is computed only for intercept models, i.e., the formula is of the form \code{y ~ 1}. } \section{Warning }{ Under- or over-flow may occur if the data is ill-conditioned. Yip (1988) only considered \eqn{\phi}{phi} being a scalar and not modelled as a function of covariates. To get around this limitation, try \code{\link{zipoisson}}. Inference obtained from \code{summary.vglm} and \code{summary.vgam} may or may not be correct. In particular, the p-values, standard errors and degrees of freedom may need adjustment. Use simulation on artificial data to check that these are reasonable. } \seealso{ \code{\link{zipoisson}}, \code{\link{Zipois}}, \code{\link{zapoisson}}, \code{\link{pospoisson}}, \code{\link{poissonff}}, \code{\link{dzipois}}. } \examples{ phi <- 0.35; lambda <- 2 # Generate some artificial data y <- rzipois(n <- 1000, lambda, phi) table(y) # Two equivalent ways of fitting the same model fit1 <- vglm(y ~ 1, yip88(n.arg = length(y)), subset = y > 0) fit2 <- vglm(y ~ 1, yip88, trace = TRUE, crit = "coef") (true.mean <- (1-phi) * lambda) mean(y) head(fitted(fit1)) fit1@misc$pstr0 # The estimate of phi # Compare the ZIP with the positive Poisson distribution pp <- vglm(y ~ 1, pospoisson, subset = y > 0, crit = "c") coef(pp) Coef(pp) coef(fit1) - coef(pp) # Same head(fitted(fit1) - fitted(pp)) # Different # Another example (Angers and Biswas, 2003) --------------------- abdata <- data.frame(y = 0:7, w = c(182, 41, 12, 2, 2, 0, 0, 1)) abdata <- subset(abdata, w > 0) yy <- with(abdata, rep(y, w)) fit3 <- vglm(yy ~ 1, yip88(n.arg = length(yy)), subset = yy > 0) fit3@misc$pstr0 # Estimate of phi (they get 0.5154 with SE 0.0707) coef(fit3, matrix = TRUE) Coef(fit3) # Estimate of lambda (they get 0.6997 with SE 0.1520) head(fitted(fit3)) mean(yy) # Compare this with fitted(fit3) } \keyword{models} \keyword{regression} VGAM/man/yeo.johnson.Rd0000644000176000001440000000470112136651105014351 0ustar ripleyusers\name{yeo.johnson} \alias{yeo.johnson} %- Also NEED an '\alias' for EACH other topic documented here. \title{Yeo-Johnson Transformation} \description{ Computes the Yeo-Johnson transformation, which is a normalizing transformation. } \usage{ yeo.johnson(y, lambda, derivative = 0, epsilon = sqrt(.Machine$double.eps), inverse = FALSE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{y}{Numeric, a vector or matrix. } \item{lambda}{Numeric. It is recycled to the same length as \code{y} if necessary. } \item{derivative}{Non-negative integer. The default is the ordinary function evaluation, otherwise the derivative with respect to \code{lambda}.} \item{epsilon}{ Numeric and positive value. The tolerance given to values of \code{lambda} when comparing it to 0 or 2. } \item{inverse}{ Logical. Return the inverse transformation? } } \details{ The Yeo-Johnson transformation can be thought of as an extension of the Box-Cox transformation. It handles both positive and negative values, whereas the Box-Cox transformation only handles positive values. Both can be used to transform the data so as to improve normality.
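As a small sketch of the \code{inverse} argument, the transformation can be checked to round-trip (the values of \code{y} and \code{lambda} here are arbitrary):
\preformatted{y <- c(-2, 0, 3); lambda <- 0.5
psi <- yeo.johnson(y, lambda)
yeo.johnson(psi, lambda, inverse = TRUE)  # Should equal y
}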
They can be used to perform LMS quantile regression. } \value{ The Yeo-Johnson transformation or its inverse, or its derivatives with respect to \code{lambda}, of \code{y}. } \references{ Yeo, I.-K. and Johnson, R. A. (2000) A new family of power transformations to improve normality or symmetry. \emph{Biometrika}, \bold{87}, 954--959. Yee, T. W. (2004) Quantile regression via vector generalized additive models. \emph{Statistics in Medicine}, \bold{23}, 2295--2315. } \author{ Thomas W. Yee } \note{ If \code{inverse = TRUE} then the argument \code{derivative = 0} is required. } \seealso{ \code{\link{lms.yjn}}, \code{\link[MASS]{boxcox}}. } \examples{ y <- seq(-4, 4, len = (nn <- 200)) ltry <- c(0, 0.5, 1, 1.5, 2) # Try these values of lambda lltry <- length(ltry) psi <- matrix(as.numeric(NA), nn, lltry) for(ii in 1:lltry) psi[,ii] <- yeo.johnson(y, lambda = ltry[ii]) \dontrun{ matplot(y, psi, type = "l", ylim = c(-4, 4), lwd = 2, lty = 1:lltry, ylab = "Yeo-Johnson transformation", col = 1:lltry, las = 1, main = "Yeo-Johnson transformation with some values of lambda") abline(v = 0, h = 0) legend(x = 1, y = -0.5, lty = 1:lltry, legend = as.character(ltry), lwd = 2, col = 1:lltry) } } \keyword{models} \keyword{regression} VGAM/man/wrapup.smart.Rd0000644000176000001440000000202012136651105014533 0ustar ripleyusers\name{wrapup.smart} \alias{wrapup.smart} \title{ Cleans Up After Smart Prediction } \description{ \code{wrapup.smart} deletes any variables used by smart prediction. Needed by both the modelling function and the prediction function. } \usage{ wrapup.smart() } \details{ The variables to be deleted are \code{.smart.prediction}, \code{.smart.prediction.counter}, and \code{.smart.prediction.mode}. The function \code{wrapup.smart} is useful in \R because these variables are held in \code{smartpredenv}. In S-PLUS, \code{wrapup.smart} is not really necessary because the variables are placed in frame 1, which disappears when finished anyway. } \references{ See the technical help file at \url{http://www.stat.auckland.ac.nz/~yee} for details. } \seealso{ \code{\link{setup.smart}}. } \examples{ \dontrun{# Place this inside modelling functions such as lm, glm, vglm. wrapup.smart() # Put at the end of lm } } \keyword{models} \keyword{regression} \keyword{programming} % Converted by Sd2Rd version 1.10. VGAM/man/wffc.teams.Rd0000644000176000001440000000211512136651105014132 0ustar ripleyusers\name{wffc.teams} \alias{wffc.teams} \docType{data} \title{ 2008 World Fly Fishing Championships (Team results) Data} \description{ Team results of the 2008 FIPS-MOUCHE World Fly Fishing Championships held in Rotorua, New Zealand during 22--30 March 2008. } \usage{data(wffc.teams)} \format{ A data frame with 18 observations on the following 5 variables. Some of these variable are described in \code{\link[VGAM]{wffc}}. \describe{ \item{\code{country}}{a character vector.} \item{\code{totalPlacings}}{a numeric vector; these are the summed placings over the 5 sessions and 5 team members. } \item{\code{points}}{a numeric vector; see \code{\link[VGAM]{wffc}}.} \item{\code{noofcaptures}}{a numeric vector.} \item{\code{longestfish}}{a numeric vector.} } } \details{ This data frame gives the team results of the competition. See also \code{\link[VGAM]{wffc}} and \code{\link[VGAM]{wffc.indiv}} for more details and links. } %\source{ % \url{http://www.2008worldflyfishingchamps.com/}. 
%} %\references{ % %} \examples{ wffc.teams } \keyword{datasets} VGAM/man/wffc.nc.Rd0000644000176000001440000000270712136651105013430 0ustar ripleyusers\name{wffc.nc} \alias{wffc.nc} \docType{data} \title{ 2008 World Fly Fishing Championships (Number of captures) Data} \description{ Number of captures in the 2008 FIPS-MOUCHE World Fly Fishing Championships held in Rotorua, New Zealand during 22--30 March 2008. } \usage{data(wffc.nc)} \format{ A data frame with 475 observations on the following 7 variables. Most of these variable are described in \code{\link[VGAM]{wffc}}. Each row is sorted by sector, session and beat. \describe{ \item{\code{sector}}{a numeric vector.} \item{\code{session}}{a numeric vector.} \item{\code{beatboat}}{a numeric vector.} \item{\code{numbers}}{a numeric vector.} \item{\code{comid}}{a numeric vector.} \item{\code{iname}}{a character vector.} \item{\code{country}}{a character vector.} } } \details{ This data frame was obtained by processing \code{\link[VGAM]{wffc}}. The key variable is \code{numbers}, which is sector-session-beat specific. Note that some fish may have been caught more than once, hence these data do not represent individual fish. } %\source{ % \url{http://www.2008worldflyfishingchamps.com/}. %} \references{ Yee, T. W. (2010) VGLMs and VGAMs: an overview for applications in fisheries research. \emph{Fisheries Research}, \bold{101}, 116--126. } \seealso{ \code{\link[VGAM]{DeLury}}. } \examples{ xtabs( ~ sector + session, wffc.nc) } \keyword{datasets} % with(wffc.nc, table(sector, session)) VGAM/man/wffc.indiv.Rd0000644000176000001440000000272212136651105014136 0ustar ripleyusers\name{wffc.indiv} \alias{wffc.indiv} \docType{data} \title{ 2008 World Fly Fishing Championships (Individual results) Data} \description{ Individual competitors' results of the 2008 FIPS-MOUCHE World Fly Fishing Championships held in Rotorua, New Zealand during 22--30 March 2008. } \usage{data(wffc.indiv)} \format{ A data frame with 99 observations on the following 8 variables. Some of these variable are described in \code{\link[VGAM]{wffc}}. \describe{ \item{\code{totalPlacings}}{a numeric vector; these are the summed placings over the 5 sessions.} \item{\code{points}}{a numeric vector.} \item{\code{noofcaptures}}{a numeric vector.} \item{\code{longest}}{a numeric vector.} \item{\code{individual}}{a numeric vector; did the competitor fish in a team or as an individual? (one team was made of composite countries due to low numbers).} \item{\code{country}}{a character vector.} \item{\code{iname}}{a character vector.} \item{\code{comid}}{a numeric vector.} } } \details{ This data frame gives the individual results of the competition. See also \code{\link[VGAM]{wffc}} and \code{\link[VGAM]{wffc.teams}} for more details and links. } %\source{ % \url{http://www.2008worldflyfishingchamps.com/}. %} \references{ Yee, T. W. (2010) VGLMs and VGAMs: an overview for applications in fisheries research. \emph{Fisheries Research}, \bold{101}, 116--126. } \examples{ summary(wffc.indiv) } \keyword{datasets} VGAM/man/wffc.Rd0000644000176000001440000001652112136651105013030 0ustar ripleyusers\name{wffc} \alias{wffc} \docType{data} \title{ 2008 World Fly Fishing Championships Data} \description{ Capture records of the 2008 FIPS-MOUCHE World Fly Fishing Championships held in Rotorua, New Zealand during 22--30 March 2008. } \usage{data(wffc)} \format{ A data frame with 4267 observations on the following 8 variables. Each row is a recorded capture. 
\describe{ \item{\code{length}}{a numeric vector; length of fish in mm.} \item{\code{water}}{a factor with levels \code{Waihou}, \code{Waimakariri}, \code{Whanganui}, \code{Otamangakau}, \code{Rotoaira}. These are known as Sectors IV, V, I, II, III respectively, and are also represented by the variable \code{sector}. } \item{\code{session}}{a numeric vector; a value from the set 1,2,\ldots,6. These are time ordered, and there were two sessions per competition day.} \item{\code{sector}}{a numeric vector; a value from the set 1,2,\ldots,5.} \item{\code{beatboat}}{a numeric vector; beat or boat number, a value from the set 1,2,\ldots,19.} \item{\code{comid}}{a numeric vector; the competitor's ID number. Uniquely identifies each competitor. These ID numbers actually correspond to their rankings in the individual level. } \item{\code{iname}}{a character vector; the individual competitor's name. } \item{\code{country}}{a character vector; what country the competitors represented. The countries represented were Australia (AUS), Canada (CAN), Croatia (CRO), Czech Republic (CZE), England (ENG), Finland (FIN), France (FRA), Holland (NED), Ireland (IRE), Italy (ITA), Japan (JPN), Malta (MAL), New Zealand (NZL), Poland (POL), Portugal (POR), South Africa (RSA), Slovakia (SVK), USA (USA), Wales (WAL). } } } \details{ Details may be found in Yee (2010) and Yee (2013). Here is a brief summary. The three competition days were 28--30 March. Each session was fixed at 9.00am--12.00pm and 2.30--5.30pm daily. One of the sessions was a rest session. Each of 19 teams had 5 members, called A, B, C, D and E (actually, one of the teams was a composite of several countries). The scoring system allocated 100 points to each eligible fish (minimum length was 18 cm) and 20 points for each cm of its length (rounded up to the nearest centimeter). Thus a 181mm or 190mm fish was worth 480 points. Each river was divided into 19 contiguous downstream beats labelled 1,2,\ldots,19. Each lake was fished by 9 boats, each with two competitors except for one boat which only had one. Each competitor was randomly assigned to a beat/boat. Competitors were ranked according to their placings at each sector-session combination, and then these placings were summed. Those with the minimum total placings were the winners; thus it was not necessarily those who had the maximum points who won. For example, in Session 1 at the Waihou River, each of the 19 competitors was ranked 1 (best) to 19 (worst) according to the point system. This is the ``placing'' for that session. These placings were added up over the 5 sessions to give the ``total placings''. All sectors have naturally wild Rainbow trout (\emph{Oncorhynchus mykiss}) while Lake Otamangakau and the Whanganui River also hold Brown trout (\emph{Salmo trutta}). Only these two species were targeted. The species was not recorded electronically; however, a post-analysis of the paper score sheets from the two lakes showed that less than about 5 percent were Brown trout. It may be safely assumed that all the Waihou and Waimakariri fish were Rainbow trout. The gender of the fish was also not recorded electronically, and anyway, distinguishing between male and female was very difficult for small fish. Although species and gender data were supposed to have been collected at the time of capture, the quality of these variables is rather poor and furthermore they were not recorded electronically. % 11 out of (11 + 210) were brown trout, in Otamangakau. % 52 were NAs.
% % 3 out of ( 3 + 179) were brown trout, in Rotoaira. % 19 were NAs. Note that some fish may have been caught more than once, hence these data do not represent individual fish but rather recorded captures. Note also that a few internal discrepancies may be found within and between the data frames \code{\link[VGAM]{wffc}}, \code{\link[VGAM]{wffc.nc}}, \code{\link[VGAM]{wffc.indiv}}, \code{\link[VGAM]{wffc.teams}}. This is due to various reasons, such as competitors being replaced by reserves when sick, fish that were included or excluded upon the local judge's decision, competitors who fished two hours instead of three by mistake, etc. The data has already been cleaned of errors and internal inconsistencies but a few may remain. } \seealso{ \code{\link[VGAM]{wffc.indiv}}, \code{\link[VGAM]{wffc.teams}}, \code{\link[VGAM]{wffc.nc}}, \code{\link[VGAM]{wffc.P1}}. } \source{ This data frame was adapted from the WFFC's spreadsheet. Special thanks goes to Paul Dewar, Jill Mandeno, Ilkka Pirinen, and the other members of the Organising Committee of the 28th FIPS-Mouche World Fly Fishing Championships for access to the data. The assistance and feedback of Colin Shepherd is gratefully acknowledged. } \references{ % \url{http://www.2008worldflyfishingchamps.com} % is the official website. Yee, T. W. (2010) VGLMs and VGAMs: an overview for applications in fisheries research. \emph{Fisheries Research}, \bold{101}, 116--126. Yee, T. W. (2013) On strategies and issues raised by an analysis of the 2008 World Fly Fishing Championships data. \emph{In preparation}. } \examples{ summary(wffc) with(wffc, table(water, session)) # Obtain some simple plots waihou <- subset(wffc, water == "Waihou") waimak <- subset(wffc, water == "Waimakariri") whang <- subset(wffc, water == "Whanganui") otam <- subset(wffc, water == "Otamangakau") roto <- subset(wffc, water == "Rotoaira") minlength <- min(wffc[,"length"]) maxlength <- max(wffc[,"length"]) nwater <- c("Waihou" = nrow(waihou), "Waimakariri" = nrow(waimak), "Whanganui" = nrow(whang), "Otamangakau" = nrow(otam), "Rotoaira" = nrow(roto)) \dontrun{ par(mfrow = c(2,3), las = 1) # Overall distribution of length with(wffc, boxplot(length/10 ~ water, ylim = c(minlength, maxlength)/10, border = "blue", main = "Length (cm)", cex.axis = 0.5)) # Overall distribution of LOG length with(wffc, boxplot(length/10 ~ water, ylim = c(minlength, maxlength)/10, border = "blue", log = "y", cex.axis = 0.5, main = "Length (cm) on a log scale")) # Overall distribution of number of captures pie(nwater, border = "blue", main = "Proportion of captures", labels = names(nwater), density = 10, col = 1:length(nwater), angle = 85+30* 1:length(nwater)) # Overall distribution of number of captures with(wffc, barplot(nwater, main = "Number of captures", cex.names = 0.5, col = "lightblue")) # Overall distribution of proportion of number of captures with(wffc, barplot(nwater / sum(nwater), cex.names = 0.5, col = "lightblue", main = "Proportion of captures")) # An interesting lake with(roto, hist(length/10, xlab = "Fish length (cm)", col = "lightblue", breaks = seq(18, 70, by = 3), prob = TRUE, ylim = c(0, 0.08), border = "blue", ylab = "", main = "Lake Rotoaira", lwd = 2)) } } \keyword{datasets} VGAM/man/wffc.P2star.Rd0000644000176000001440000000663412136651105014206 0ustar ripleyusers\name{wffc.points} \alias{wffc.P1} \alias{wffc.P1star} \alias{wffc.P2} \alias{wffc.P2star} \alias{wffc.P3} \alias{wffc.P3star} %- Also NEED an '\alias' for EACH other topic documented here. 
\title{ Point System for the 2008 World Fly Fishing Championships } \description{ Point system for the 2008 World Fly Fishing Championships: current and some proposals. } \usage{ wffc.P1(length, c1 = 100, min.eligible = 0.18, ppm = 2000) wffc.P2(length, c1 = 100, min.eligible = 0.18, ppm = 2000) wffc.P3(length, c1 = 100, min.eligible = 0.18, ppm = 2000) wffc.P1star(length, c1 = 100, min.eligible = 0.18, ppm = 2000) wffc.P2star(length, c1 = 100, min.eligible = 0.18, ppm = 2000) wffc.P3star(length, c1 = 100, min.eligible = 0.18, ppm = 2000) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{length}{ Length of the fish, in meters. Numeric vector. } \item{c1}{ Points added to each eligible fish. } \item{min.eligible}{ The 2008 WFFC regulations stipulated that the smallest eligible fish was 0.180 m, which is 180 mm. } \item{ppm}{ Points per meter of length of the fish. } } \details{ The official website contains a document with the official rules and regulations of the competition. The function \code{wffc.P1()} implements the current WFFC point system, and is `discrete' in that fish lengths are rounded up to the nearest centimeter (provided it is greater or equal to \code{min.eligible} m). \code{wffc.P1star()} is a `continuous' version of it. The function \code{wffc.P2()} is a new proposal which rewards catching bigger fish. It is based on a quadratic polynomial. \code{wffc.P2star()} is a `continuous' version of it. The function \code{wffc.P3()} is another new proposal which rewards catching bigger fish. Named a \emph{cumulative linear proposal}, it adds \code{ppm} to each multiple of \code{min.eligible} of length. One adds one lot of \code{c1} to each eligible fish. \code{wffc.P3star()} is a `continuous' version of \code{wffc.P3()}. } \value{ A vector with the number of points. } \references{ % \url{http://www.2008worldflyfishingchamps.com} % was the official 2008 website. % \url{http://www.http://san2010.pl} % was the official 2010 website. Yee, T. W. (2013) On strategies and issues raised by an analysis of the 2008 World Fly Fishing Championships data. \emph{In preparation}. } \author{ T. W. Yee. } \note{ \code{wffc.P2} and \code{wffc.P2star} may change in the future, as well as possibly \code{wffc.P3} and \code{wffc.P3star}. } \seealso{ \code{\link[VGAM]{wffc}}. } \examples{ \dontrun{ fishlength <- seq(0.0, 0.72, by = 0.001) plot(fishlength, wffc.P2star(fishlength), type = "l", col = "blue", las = 1, lty = "dashed", lwd = 2, las = 1, cex.main = 0.8, xlab = "Fish length (m)", ylab = "Competition points", main = "Current (red) and proposed (blue and green) WFFC point system") lines(fishlength, wffc.P1star(fishlength), type = "l", col = "red", lwd = 2) lines(fishlength, wffc.P3star(fishlength), type = "l", col = "darkgreen", lwd = 2, lty = "dashed") abline(v = (1:4) * 0.18, lty = "dotted") abline(h = (1:9) * wffc.P1star(0.18), lty = "dotted") } # Successive slopes: (wffc.P1star((2:8)*0.18) - wffc.P1star((1:7)*0.18)) / (0.18 * 2000) (wffc.P3star((2:8)*0.18) - wffc.P3star((1:7)*0.18)) / (0.18 * 2000) } % Add one or more standard keywords, see file 'KEYWORDS' in the % R documentation directory. \keyword{ models } VGAM/man/weightsvglm.Rd0000644000176000001440000000765312136651105014451 0ustar ripleyusers\name{weightsvglm} \alias{weightsvglm} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Prior and Working Weights of a VGLM fit } \description{ Returns either the prior weights or working weights of a VGLM object. 
} \usage{ weightsvglm(object, type = c("prior", "working"), matrix.arg = TRUE, ignore.slot = FALSE, deriv.arg = FALSE, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{object}{ a model object from the \pkg{VGAM} \R package that inherits from a \emph{vector generalized linear model} (VGLM), e.g., a model of class \code{"vglm"}. } \item{type}{ Character, which type of weight is to be returned? The default is the first one. } \item{matrix.arg}{ Logical, whether the answer is returned as a matrix. If not, it will be a vector. } \item{ignore.slot}{ Logical. If \code{TRUE} then \code{object@weights} is ignored even if it has been assigned, and the long calculation for \code{object@weights} is repeated. This may give a slightly different answer because of the final IRLS step at convergence may or may not assign the latest value of quantities such as the mean and weights. } \item{deriv.arg}{ Logical. If \code{TRUE} then a list with components \code{deriv} and \code{weights} is returned. See below for more details. } \item{\dots}{ Currently ignored. } } \details{ Prior weights are usually inputted with the \code{weights} argument in functions such as \code{\link{vglm}} and \code{\link{vgam}}. It may refer to frequencies of the individual data or be weight matrices specified beforehand. Working weights are used by the IRLS algorithm. They correspond to the second derivatives of the log-likelihood function with respect to the linear predictors. The working weights correspond to positive-definite weight matrices and are returned in \emph{matrix-band} form, e.g., the first \eqn{M} columns correspond to the diagonals, etc. } \value{ If \code{type = "working"} and \code{deriv = TRUE} then a list is returned with the two components described below. Otherwise the prior or working weights are returned depending on the value of \code{type}. \item{deriv}{ Typically the first derivative of the log-likelihood with respect to the linear predictors. For example, this is the variable \code{deriv.mu} in \code{vglm.fit()}, or equivalently, the matrix returned in the \code{"deriv"} slot of a \pkg{VGAM} family function. } \item{weights }{ The working weights. } } \references{ Yee, T. W. and Hastie, T. J. (2003) Reduced-rank vector generalized linear models. \emph{Statistical Modelling}, \bold{3}, 15--41. Chambers, J. M. and T. J. Hastie (eds) (1992) \emph{Statistical Models in S}. Wadsworth & Brooks/Cole. } \author{ Thomas W. Yee } \note{ This function is intended to be similar to \code{weights.glm} (see \code{\link[stats]{glm}}). } % ~Make other sections like Warning with \section{Warning }{....} ~ \seealso{ \code{\link[stats]{glm}}, \code{\link{vglmff-class}}, \code{\link{vglm}}. 
} \examples{ pneumo <- transform(pneumo, let = log(exposure.time)) (fit <- vglm(cbind(normal, mild, severe) ~ let, cumulative(parallel = TRUE, reverse = TRUE), pneumo)) depvar(fit) # These are sample proportions weights(fit, type = "prior", matrix = FALSE) # Number of observations # Look at the working residuals nn <- nrow(model.matrix(fit, type = "lm")) M <- ncol(predict(fit)) temp <- weights(fit, type = "working", deriv = TRUE) wz <- m2adefault(temp$weights, M = M) # In array format wzinv <- array(apply(wz, 3, solve), c(M, M, nn)) wresid <- matrix(NA, nn, M) # Working residuals for(ii in 1:nn) wresid[ii,] <- wzinv[, , ii, drop = TRUE] \%*\% temp$deriv[ii, ] max(abs(c(resid(fit, type = "work")) - c(wresid))) # Should be 0 (zedd <- predict(fit) + wresid) # Adjusted dependent vector } \keyword{models} \keyword{regression} VGAM/man/weibull.Rd0000644000176000001440000001543212136651105013546 0ustar ripleyusers\name{weibull} \alias{weibull} %\alias{weibullff} %\alias{weibull.lsh} %\alias{weibull3} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Weibull Distribution Family Function } \description{ Maximum likelihood estimation of the 2-parameter Weibull distribution. No observations should be censored. } \usage{ weibull(lshape = "loge", lscale = "loge", ishape = NULL, iscale = NULL, nrfs = 1, probs.y = c(0.2, 0.5, 0.8), imethod = 1, zero = -2) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lshape, lscale}{ Parameter link functions applied to the (positive) shape parameter (called \eqn{a} below) and (positive) scale parameter (called \eqn{b} below). See \code{\link{Links}} for more choices. } \item{ishape, iscale}{ Optional initial values for the shape and scale parameters. } \item{nrfs}{ Currently this argument is ignored. Numeric, of length one, with value in \eqn{[0,1]}. Weighting factor between Newton-Raphson and Fisher scoring. The value 0 means pure Newton-Raphson, while 1 means pure Fisher scoring. The default value uses a mixture of the two algorithms, and retaining positive-definite working weights. } \item{imethod}{ Initialization method used if there are censored observations. Currently only the values 1 and 2 are allowed. } \item{zero, probs.y}{ Details at \code{\link{CommonVGAMffArguments}}. } } \details{ The Weibull density for a response \eqn{Y} is \deqn{f(y;a,b) = a y^{a-1} \exp[-(y/b)^a] / (b^a)}{% f(y;a,b) = a y^(a-1) * exp(-(y/b)^a) / [b^a]} for \eqn{a > 0}, \eqn{b > 0}, \eqn{y > 0}. The cumulative distribution function is \deqn{F(y;a,b) = 1 - \exp[-(y/b)^a].}{% F(y;a,b) = 1 - exp(-(y/b)^a).} The mean of \eqn{Y} is \eqn{b \, \Gamma(1+ 1/a)}{b * gamma(1+ 1/a)} (returned as the fitted values), and the mode is at \eqn{b\,(1-1/a)^{1/a}}{b * (1- 1/a)^(1/a)} when \eqn{a>1}. The density is unbounded for \eqn{a<1}. The \eqn{k}th moment about the origin is \eqn{E(Y^k) = b^k \, \Gamma(1+ k/a)}{E(Y^k) = b^k * gamma(1+ k/a)}. The hazard function is \eqn{a t^{a-1} / b^a}{a * t^(a-1) / b^a}. This \pkg{VGAM} family function currently does not handle censored data. Fisher scoring is used to estimate the two parameters. Although the expected information matrices used here are valid in all regions of the parameter space, the regularity conditions for maximum likelihood estimation are satisfied only if \eqn{a>2} (according to Kleiber and Kotz (2003)). If this is violated then a warning message is issued. One can enforce \eqn{a>2} by choosing \code{lshape = logoff(offset = -2)}. Common values of the shape parameter lie between 0.5 and 3.5. 
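The mean formula above can be illustrated numerically with a short sketch using base \R{} functions (the parameter values are arbitrary):
\preformatted{a <- 2.5; b <- 10
b * gamma(1 + 1/a)                           # Theoretical mean
mean(rweibull(10^5, shape = a, scale = b))   # Should be close
}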
As summarized in Harper et al. (2011), there are four cases to consider for inference. If \eqn{a \le 1} then the MLEs are not consistent (and the smallest observation becomes a hyperefficient solution for the location parameter in the 3-parameter case). If \eqn{1 < a < 2} then MLEs exist but are not asymptotically normal. If \eqn{a = 2} then the MLEs exist and are normal and asymptotically efficient but with a slower convergence rate than when \eqn{a > 2}. If \eqn{a > 2} then MLEs have classical asymptotic properties. The 3-parameter (location is the third parameter) Weibull can be estimated by maximizing a profile log-likelihood (see, e.g., Harper et al. (2011) and Lawless (2003)); otherwise try \code{\link{gev}}, which is a better parameterization. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Kleiber, C. and Kotz, S. (2003) \emph{Statistical Size Distributions in Economics and Actuarial Sciences}, Hoboken, NJ, USA: Wiley-Interscience. Johnson, N. L. and Kotz, S. and Balakrishnan, N. (1994) \emph{Continuous Univariate Distributions}, 2nd edition, Volume 1, New York: Wiley. Lawless, J. F. (2003) \emph{Statistical Models and Methods for Lifetime Data}, 2nd ed. Hoboken, NJ, USA: John Wiley & Sons. Rinne, H. (2009) \emph{The Weibull Distribution: A Handbook}. Boca Raton, FL, USA: CRC Press. Gupta, R. D. and Kundu, D. (2006) On the comparison of Fisher information of the Weibull and GE distributions, \emph{Journal of Statistical Planning and Inference}, \bold{136}, 3130--3144. Harper, W. V. and Eschenbach, T. G. and James, T. R. (2011) Concerns about maximum likelihood estimation for the three-parameter Weibull distribution: case study of statistical software, \emph{The American Statistician}, \bold{65}(1), 44--54. Smith, R. L. (1985) Maximum likelihood estimation in a class of nonregular cases. \emph{Biometrika}, \bold{72}, 67--90. Smith, R. L. and Naylor, J. C. (1987) A comparison of maximum likelihood and Bayesian estimators for the three-parameter Weibull distribution. \emph{Applied Statistics}, \bold{36}, 358--369. } \author{ T. W. Yee } \note{ Successful convergence depends on having reasonably good initial values. If the initial values chosen by this function are not good, make use of the two initial value arguments. This \pkg{VGAM} family function handles multiple responses. The Weibull distribution is often an alternative to the lognormal distribution. The inverse Weibull distribution, which is that of \eqn{1/Y} where \eqn{Y} has a Weibull(\eqn{a,b}) distribution, is known as the log-Gompertz distribution. There are problems implementing the three-parameter Weibull distribution. These arise because the classical regularity conditions for the asymptotic properties of the MLEs are not satisfied, since the support of the distribution depends on one of the parameters. } \section{Warning}{ This function is under development to handle other censoring situations. The version of this function which will handle censored data will be called \code{cenweibull()}. It is currently being written and will use \code{\link{SurvS4}} as input. It should be released in later versions of \pkg{VGAM}. If the shape parameter is less than two then misleading inference may result, e.g., in the \code{summary} and \code{vcov} of the object.
} \seealso{ \code{\link[stats:Weibull]{dweibull}}, \code{\link{truncweibull}}, \code{\link{gev}}, \code{\link{lognormal}}, \code{\link{expexp}}, \code{\link{gumbelII}}. } \examples{ wdata <- data.frame(x2 = runif(nn <- 1000)) # Complete data wdata <- transform(wdata, y1 = rweibull(nn, shape = exp(1 + x2), scale = exp(-2)), y2 = rweibull(nn, shape = exp(2 - x2), scale = exp(1))) fit <- vglm(cbind(y1, y2) ~ x2, weibull, wdata, trace = TRUE) coef(fit, matrix = TRUE) vcov(fit) summary(fit) } \keyword{models} \keyword{regression} VGAM/man/wald.Rd0000644000176000001440000000374712136651105013040 0ustar ripleyusers\name{wald} \alias{wald} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Wald Distribution Family Function } \description{ Estimates the parameter of the standard Wald distribution by maximum likelihood estimation. } \usage{ wald(link.lambda = "loge", init.lambda = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{link.lambda}{ Parameter link function for the \eqn{\lambda}{lambda} parameter. See \code{\link{Links}} for more choices and general information. } \item{init.lambda}{ Initial value for the \eqn{\lambda}{lambda} parameter. The default means an initial value is chosen internally. } } \details{ The standard Wald distribution is a special case of the inverse Gaussian distribution with \eqn{\mu=1}{mu=1}. It has a density that can be written as \deqn{f(y;\lambda) = \sqrt{\lambda/(2\pi y^3)} \; \exp\left(-\lambda (y-1)^2/(2 y)\right)}{% f(y;lambda) = sqrt(lambda/(2*pi*y^3)) * exp(-lambda*(y-1)^2/(2*y)) } where \eqn{y>0} and \eqn{\lambda>0}{lambda>0}. The mean of \eqn{Y} is \eqn{1} (returned as the fitted values) and its variance is \eqn{1/\lambda}{1/lambda}. By default, \eqn{\eta=\log(\lambda)}{eta=log(lambda)}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Johnson, N. L. and Kotz, S. and Balakrishnan, N. (1994) \emph{Continuous Univariate Distributions}, 2nd edition, Volume 1, New York: Wiley. } \author{ T. W. Yee } \note{ The \pkg{VGAM} family function \code{\link{inv.gaussianff}} estimates the location parameter \eqn{\mu}{mu} too. } \seealso{ \code{\link{inv.gaussianff}}. } \examples{ wdata <- data.frame(y = rgamma(n = 1000, shape = 1)) # Not inverse Gaussian!! fit <- vglm(y ~ 1, wald(init.lambda = 0.2), wdata, trace = TRUE) coef(fit, matrix = TRUE) Coef(fit) summary(fit) } \keyword{models} \keyword{regression} VGAM/man/waitakere.Rd0000644000176000001440000000330612136651105014054 0ustar ripleyusers\name{waitakere} \alias{waitakere} \docType{data} \title{Waitakere Ranges Data} \description{ The \code{waitakere} data frame has 579 rows and 18 columns. Altitude is explanatory, and there are binary responses (presence/absence = 1/0 respectively) for 17 plant species.
} \usage{data(waitakere)} \format{ This data frame contains the following columns: \describe{ \item{agaaus}{Agathis australis, or Kauri} \item{beitaw}{Beilschmiedia tawa, or Tawa} \item{corlae}{Corynocarpus laevigatus} \item{cyadea}{Cyathea dealbata} \item{cyamed}{Cyathea medullaris} \item{daccup}{Dacrydium cupressinum} \item{dacdac}{Dacrycarpus dacrydioides} \item{eladen}{Elaeocarpus dentatus} \item{hedarb}{Hedycarya arborea} \item{hohpop}{Species name unknown} \item{kniexc}{Knightia excelsa, or Rewarewa} \item{kuneri}{Kunzea ericoides} \item{lepsco}{Leptospermum scoparium} \item{metrob}{Metrosideros robusta} \item{neslan}{Nestegis lanceolata} \item{rhosap}{Rhopalostylis sapida} \item{vitluc}{Vitex lucens, or Puriri} \item{altitude}{meters above sea level} } }
\details{ These were collected from the Waitakere Ranges, a small forest in northern Auckland, New Zealand. At 579 sites in the forest, the presence/absence of 17 plant species was recorded, as well as the altitude. Each site had an area of 200\eqn{m^2}{m^2}. }
\source{ Dr Neil Mitchell, University of Auckland. } %\references{ %None. %} \seealso{ \code{\link{hunua}}. }
\examples{ fit <- vgam(agaaus ~ s(altitude, df = 2), binomialff, waitakere) head(predict(fit, waitakere, type = "response")) \dontrun{ plot(fit, se = TRUE, lcol = "orange", scol = "blue") } } \keyword{datasets} VGAM/man/vsmooth.spline.Rd0000644000176000001440000001512712136651105015074 0ustar ripleyusers\name{vsmooth.spline} \alias{vsmooth.spline} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Vector cubic smoothing spline } \description{ Fits a vector cubic smoothing spline. } \usage{ vsmooth.spline(x, y, w = NULL, df = rep(5, M), spar = NULL, all.knots = FALSE, iconstraint = diag(M), xconstraint = diag(M), constraints = list("(Intercepts)" = diag(M), x = diag(M)), var.arg = FALSE, scale.w = TRUE, nk = NULL, control.spar = list()) } %- maybe also 'usage' for other objects documented here.
\arguments{ \item{x}{ A vector, matrix or a list. If a list, the \code{x} component is used. If a matrix, the first column is used. \code{x} may also be a complex vector, in which case the real part is used, and the imaginary part is used for the response. In this help file, \code{n} is the number of unique values of \code{x}. } \item{y}{ A vector, matrix or a list. If a list, the \code{y} component is used. If a matrix, all but the first column are used. In this help file, \code{M} is the number of columns of \code{y} if there are no constraints on the functions. } \item{w}{ The weight matrices or the number of observations. If the weight matrices, then this must be an \code{n}-row matrix with the elements in matrix-band form (see \code{iam}). If a vector, then its elements are taken as the number of observations. By default, each weight matrix is the \code{M} by \code{M} identity matrix, which corresponds to \code{matrix(1, n, M)} in matrix-band form. } \item{df}{ Numerical vector containing the degrees of freedom for each component function (smooth). If necessary, the vector is recycled to have length equal to the number of component functions to be estimated (\code{M} if there are no constraints), which equals the number of columns of the \code{x}-constraint matrix. A value of 2 means a linear fit, and each element of \code{df} should lie between 2 and \code{n}. The larger the values of \code{df} the more wiggly the smooths. } \item{spar}{ Numerical vector containing the non-negative smoothing parameters for each component function (smooth).
If necessary, the vector is recycled to have length equal to the number of component functions to be estimated (\code{M} if there are no constraints), which equals the number of columns of the \code{x}-constraint matrix. A value of zero means the smooth goes through the data and hence is wiggly. A value of \code{Inf} may be assigned, meaning the smooth will be linear. By default, the \code{NULL} value of \code{spar} means \code{df} is used to determine the smoothing parameters. } \item{all.knots}{ Logical. If \code{TRUE} then each distinct value of \code{x} will be a knot. By default, only a subset of the unique values of \code{x} are used; typically, the number of knots is \code{O(n^0.25)} for \code{n} large, but if \code{n <= 40} then all the unique values of \code{x} are used. } \item{iconstraint}{ An \code{M}-row constraint matrix for the intercepts. It must be of full column rank. By default, the constraint matrix for the intercepts is the \code{M} by \code{M} identity matrix, meaning no constraints. } \item{xconstraint}{ An \code{M}-row constraint matrix for \code{x}. It must be of full column rank. By default, the constraint matrix for \code{x} is the \code{M} by \code{M} identity matrix, meaning no constraints. } \item{constraints}{ An alternative to specifying \code{iconstraint} and \code{xconstraint}, this is a list with two components corresponding to the intercept and \code{x} respectively. They must both be \code{M}-row constraint matrices with full column rank. } \item{var.arg}{ Logical: return the pointwise variances of the fit? Currently, this corresponds only to the nonlinear part of the fit, and may be wrong. } \item{scale.w}{ Logical. By default, the weights \code{w} are scaled so that the diagonal elements have mean 1. } \item{nk}{ Number of knots. If used, this argument overrides \code{all.knots}, and must lie between 6 and \code{n}+2 inclusive. } \item{control.spar}{ See \code{\link[stats]{smooth.spline}}. } }
\details{ The algorithm implemented is detailed in Yee (2000). It involves decomposing the component functions into a linear and nonlinear part, and using B-splines. The cost of the computation is \code{O(n M^3)}. The argument \code{spar} contains \emph{scaled} smoothing parameters. }
\value{ An object of class \code{"vsmooth.spline"} (see \code{vsmooth.spline-class}). }
\references{ Yee, T. W. (2000) Vector Splines and Other Vector Smoothers. Pages 529--534. In: Bethlehem, J. G. and van der Heijde, P. G. M. \emph{Proceedings in Computational Statistics COMPSTAT 2000}. Heidelberg: Physica-Verlag. }
\author{ Thomas W. Yee }
\note{ This function is quite similar to \code{\link[stats]{smooth.spline}} but offers less functionality. For example, cross validation is not implemented here. For \code{M = 1}, the results will generally be different, mainly due to the different way the knots are selected. The vector cubic smoothing spline which \code{s()} represents is computationally demanding for large \eqn{M}. The cost is approximately \eqn{O(n M^3)} where \eqn{n} is the number of unique abscissae. Yet to be done: return the \emph{unscaled} smoothing parameters. }
\seealso{ \code{vsmooth.spline-class}, \code{plot.vsmooth.spline}, \code{predict.vsmooth.spline}, \code{iam}, \code{\link[VGAM]{s}}, \code{\link[stats]{smooth.spline}}.
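% A minimal sketch of the matrix-band form expected by 'w' (an
% illustrative assumption based on the description of 'w' above, with
% M = 2): the first M columns hold the diagonals, and any further
% columns hold the upper bands, so each full 2 x 2 weight matrix
% needs M(M+1)/2 = 3 columns.
% n <- 10
% wband <- cbind(rep(2, n), rep(1, n), rep(0.5, n))  # w11, w22, w12
% # vsmooth.spline(x, y, w = wband) with a 2-column y would use these.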
} \examples{ nn <- 20; x <- 2 + 5*(nn:1)/nn x[2:4] <- x[5:7] # Allow duplication y1 <- sin(x) + rnorm(nn, sd = 0.13) y2 <- cos(x) + rnorm(nn, sd = 0.13) y3 <- 1 + sin(x) + rnorm(nn, sd = 0.13) # Run this for constraints y <- cbind(y1, y2, y3) ww <- cbind(rep(3, nn), 4, (1:nn)/nn) (fit <- vsmooth.spline(x, y, w = ww, df = 5)) \dontrun{ plot(fit) # The 1st and 3rd functions do not differ by a constant } mat <- matrix(c(1,0,1, 0,1,0), 3, 2) (fit2 <- vsmooth.spline(x, y, w = ww, df = 5, iconstr = mat, xconstr = mat)) # The 1st and 3rd functions do differ by a constant: mycols <- c("orange", "blue", "orange") \dontrun{ plot(fit2, lcol = mycols, pcol = mycols, las = 1) } p <- predict(fit, x = model.matrix(fit, type = "lm"), deriv = 0) max(abs(depvar(fit) - with(p, y))) # Should be 0; and fit@y is not good par(mfrow = c(3, 1)) ux <- seq(1, 8, len = 100) for(dd in 1:3) { pp <- predict(fit, x = ux, deriv = dd) \dontrun{with(pp, matplot(x, y, type = "l", main = paste("deriv =", dd), lwd = 2, ylab = "", cex.axis = 1.5, cex.lab = 1.5, cex.main = 1.5)) } } } \keyword{regression} \keyword{smooth} VGAM/man/vonmises.Rd0000644000176000001440000001026612136651105013746 0ustar ripleyusers\name{vonmises} \alias{vonmises} %- Also NEED an '\alias' for EACH other topic documented here. \title{ von Mises Distribution Family Function } \description{ Estimates the location and scale parameters of the von Mises distribution by maximum likelihood estimation. } \usage{ vonmises(llocation = elogit(min = 0, max = 2 * pi), lscale = "loge", ilocation = NULL, iscale = NULL, imethod = 1, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{llocation, lscale}{ Parameter link functions applied to the location \eqn{a} parameter and scale parameter \eqn{k}, respectively. See \code{\link{Links}} for more choices. For \eqn{k}, a log link is the default because the parameter is positive. } \item{ilocation}{ Initial value for the location \eqn{a} parameter. By default, an initial value is chosen internally using \code{imethod}. Assigning a value will override the argument \code{imethod}. } \item{iscale}{ Initial value for the scale \eqn{k} parameter. By default, an initial value is chosen internally using \code{imethod}. Assigning a value will override the argument \code{imethod}. } \item{imethod}{ An integer with value \code{1} or \code{2} which specifies the initialization method. If failure to converge occurs try the other value, or else specify a value for \code{ilocation} and \code{iscale}. } \item{zero}{ An integer-valued vector specifying which linear/additive predictors are modelled as intercepts only. The default is none of them. If used, choose one value from the set \{1,2\}. } % \item{hstep}{ Positive numeric. The \eqn{h} used for the finite difference % approximation, e.g., in \eqn{(f(x+h)-f(x))/h} for the first % derivative estimate of the modified Bessel function values. % If too small, some half stepsizing may occur; % if too large, numerical problems might occur. % } } \details{ The (two-parameter) von Mises is the most commonly used distribution in practice for circular data. It has a density that can be written as \deqn{f(y;a,k) = \frac{\exp[k\cos(y-a)]}{ 2\pi I_0(k)}}{% f(y;a,k) = exp[k*cos(y-a)] / (2*pi*I0(k))} where \eqn{0 \leq y < 2\pi}{0 <= y < 2*pi}, \eqn{k>0} is the scale parameter, \eqn{a} is the location parameter, and \eqn{I_0(k)}{I0(k)} is the modified Bessel function of order 0 evaluated at \eqn{k}. 
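% A minimal sketch evaluating this density with base R's besselI()
% (the values of a, k and y below are arbitrary, for illustration only):
% a <- pi; k <- 2; y <- seq(0, 2*pi, length.out = 5)
% exp(k * cos(y - a)) / (2 * pi * besselI(k, nu = 0))  # f(y; a, k)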
The mean of \eqn{Y} (which is the fitted value) is \eqn{a} and the circular variance is \eqn{1 - I_1(k) / I_0(k)}{1 - I1(k) / I0(k)} where \eqn{I_1(k)}{I1(k)} is the modified Bessel function of order 1. By default, \eqn{\eta_1=\log(a/(2\pi-a))}{eta1=log(a/(2*pi-a))} and \eqn{\eta_2=\log(k)}{eta2=log(k)} for this family function. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, \code{\link{rrvglm}} and \code{\link{vgam}}. } \references{ Evans, M., Hastings, N. and Peacock, B. (2000) \emph{Statistical Distributions}, New York: Wiley-Interscience, Third edition. } \author{ T. W. Yee } \note{ The response and the fitted values are scaled so that \eqn{0\leq y< 2\pi}{0<=y<2*pi}. The linear/additive predictors are left alone. Fisher scoring is used. } \section{Warning }{ Numerically, the von Mises can be difficult to fit because of a log-likelihood having multiple maxima. The user is therefore encouraged to try different starting values, i.e., make use of \code{ilocation} and \code{iscale}. } \seealso{ \code{\link[base]{Bessel}}, \code{\link{cardioid}}. \pkg{CircStats} and \pkg{circular} currently have a lot more R functions for circular data than the \pkg{VGAM} package. } \examples{ vdata <- data.frame(x2 = runif(nn <- 1000)) vdata <- transform(vdata, y = rnorm(nn, m = 2+x2, sd = exp(0.2))) # Bad data!! fit <- vglm(y ~ x2, vonmises(zero = 2), vdata, trace = TRUE) coef(fit, matrix = TRUE) Coef(fit) with(vdata, range(y)) # Original data range(depvar(fit)) # Processed data is in [0,2*pi) } \keyword{models} \keyword{regression} %later an extended logit link \eqn{\eta_1=\log(a/(2\pi-a))}{eta1=log(a/(2*pi-a))} %might be provided for \eqn{\eta_1}{eta1}. %\eqn{\eta_1=a}{eta1=a} and VGAM/man/vglmff-class.Rd0000644000176000001440000002043012136651105014461 0ustar ripleyusers\name{vglmff-class} \docType{class} \alias{vglmff-class} \title{Class ``vglmff'' } \description{ Family functions for the \pkg{VGAM} package } \section{Objects from the Class}{ Objects can be created by calls of the form \code{new("vglmff", ...)}. } \section{Slots}{ In the following, \eqn{M} is the number of linear/additive predictors. \describe{ \item{\code{blurb}:}{ Object of class \code{"character"} giving a small description of the model. Important arguments such as parameter link functions can be expressed here. } \item{\code{constraints}:}{ Object of class \code{"expression"} which sets up any constraint matrices defined by arguments in the family function. A \code{zero} argument is always fed into \code{cm.zero.vgam}, whereas other constraints are fed into \code{cm.vgam}. } \item{\code{deviance}:}{ Object of class \code{"function"} returning the deviance of the model. This slot is optional. If present, the function must have arguments \code{function(mu, y, w, residuals = FALSE, eta, extra = NULL)}. Deviance residuals are returned if \code{residuals = TRUE}. } \item{\code{fini}:}{ Object of class \code{"expression"} to insert code at a special position in \code{vglm.fit} or \code{vgam.fit}. This code is evaluated immediately after the fitting. } \item{\code{first}:}{ Object of class \code{"expression"} to insert code at a special position in \code{\link{vglm}} or \code{\link{vgam}}. } \item{\code{infos}:}{ Object of class \code{"function"} which returns a list with components such as \code{Musual}. At present only a very few \pkg{VGAM} family functions have this feature implemented. 
For those that do, the \code{Musual} argument does not need to be specified when used with \code{\link{rcim}}. } \item{\code{initialize}:}{ Object of class \code{"expression"} used to perform error checking (especially for the variable \code{y}) and obtain starting values for the model. In general, \code{etastart} or \code{mustart} are assigned values based on the variables \code{y}, \code{x} and \code{w}. } \item{\code{linkinv}:}{ Object of class \code{"function"} which returns the fitted values, given the linear/additive predictors. The function must have arguments \code{function(eta, extra = NULL)}. } \item{\code{last}:}{ Object of class \code{"expression"} to insert code at a special position (at the very end) of \code{vglm.fit()} or \code{vgam.fit()}. This code is evaluated after the fitting. The list \code{misc} is often assigned components in this slot, which becomes the \code{misc} slot on the fitted object. } \item{\code{linkfun}:}{ Object of class \code{"function"} which, given the fitted values, returns the linear/additive predictors. If present, the function must have arguments \code{function(mu, extra = NULL)}. Most \pkg{VGAM} family functions do not have a \code{linkfun} function. They are largely for classical exponential families, i.e., GLMs. } \item{\code{loglikelihood}:}{ Object of class \code{"function"} returning the log-likelihood of the model. This slot is optional. If present, the function must have arguments \code{function(mu, y, w, residuals = FALSE, eta, extra = NULL)}. The argument \code{residuals} can be ignored because log-likelihood residuals aren't defined. } \item{\code{middle}:}{ Object of class \code{"expression"} to insert code at a special position in \code{vglm.fit} or \code{vgam.fit}. } \item{\code{middle2}:}{ Object of class \code{"expression"} to insert code at a special position in \code{vglm.fit} or \code{vgam.fit}. } \item{\code{summary.dispersion}:}{ Object of class \code{"logical"} indicating whether the general VGLM formula (based on a residual sum of squares) can be used for computing the scaling/dispersion parameter. It is \code{TRUE} for most models except for nonlinear regression models. } \item{\code{vfamily}:}{ Object of class \code{"character"} giving class information about the family function. Although not developed at this stage, more flexible classes are planned in the future. For example, family functions \code{sratio}, \code{cratio}, \code{cumulative}, and \code{acat} all operate on categorical data and therefore will have a special class called \code{"VGAMcat"}, say. Then if \code{fit} was a \code{vglm} object, then \code{coef(fit)} would print out the \code{vglm} coefficients plus \code{"VGAMcat"} information as well. } \item{\code{deriv}:}{ Object of class \code{"expression"} which returns an \eqn{M}-column matrix of first derivatives of the log-likelihood function with respect to the linear/additive predictors, i.e., the score vector. In Yee and Wild (1996) this is the \eqn{\bold{d}_i}{\bold{d}i} vector. Thus each row of the matrix returned by this slot is such a vector. } \item{\code{weight}:}{ Object of class \code{"expression"} which returns the second derivatives of the log-likelihood function with respect to the linear/additive predictors. This can be either the observed or expected information matrix, i.e., Newton-Raphson or Fisher-scoring respectively. In Yee and Wild (1996) this is the \eqn{\bold{W}_i}{\bold{W}i} matrix. Thus each row of the matrix returned by this slot is such a matrix.
Like the \code{weights} slot of \code{vglm}/\code{vgam}, it is stored in \emph{matrix-band} form, whereby the first \eqn{M} columns of the matrix are the diagonals, followed by the upper-diagonal band, followed by the band above that, etc. In this case, there can be up to \eqn{M(M+1)/2} columns, with the last column corresponding to the (1,\eqn{M}) elements of the weight matrices. } } }
\section{Methods}{ \describe{ \item{print}{\code{signature(x = "vglmff")}: short summary of the family function. } } }
\references{ Yee, T. W. and Wild, C. J. (1996) Vector generalized additive models. \emph{Journal of the Royal Statistical Society, Series B, Methodological}, \bold{58}, 481--493. \url{http://www.stat.auckland.ac.nz/~yee} contains further information on how to write \pkg{VGAM} family functions. The file is amongst other \pkg{VGAM} PDF documentation. }
\author{ Thomas W. Yee }
\note{ With link functions etc., one must use \code{substitute} to embed the options into the code. There are two different forms: \code{eval(substitute(expression({...}), list(...)))} for expressions, and \code{eval(substitute( function(...) { ... }, list(...) )) } for functions. % 20130322; this is obsolete, and can delete it: % A unified method of handling arguments is to use % \code{match.arg}. This allows, for example, % \code{vglm(..., family = cratio(link = logit))} % and % \code{vglm(..., family = cratio(link = "logi"))} % to be equivalent (Nb. there is a \code{logit} function). The \code{extra} argument in \code{linkinv}, \code{linkfun}, \code{deviance}, \code{loglikelihood}, etc. matches with the argument \code{extra} in \code{\link{vglm}}, \code{\link{vgam}} and \code{\link{rrvglm}}. This allows input to be fed into all slots of a \pkg{VGAM} family function. The expression \code{deriv} is evaluated immediately prior to \code{weight}, so there is provision for re-use of variables etc. Programmers must be careful to choose variable names that do not interfere with \code{vglm.fit}, \code{vgam.fit()} etc. Programmers of \pkg{VGAM} family functions are encouraged to keep to previous conventions regarding the naming of arguments, e.g., \code{link} is the argument for parameter link functions, \code{zero} for allowing some of the linear/additive predictors to be an intercept term only, etc. In general, Fisher-scoring is recommended over Newton-Raphson where tractable. Although usually slightly slower in convergence, the weight matrices from using the expected information are positive-definite over a larger parameter space. }
\section{Warning }{ \pkg{VGAM} family functions are not compatible with \code{\link[stats]{glm}}, nor \code{gam} (from either the \pkg{gam} or \pkg{mgcv} package). }
\seealso{ \code{\link{vglm}}, \code{\link{vgam}}, \code{\link{rrvglm}}, \code{\link{rcim}}. }
\examples{ cratio() cratio(link = "cloglog") cratio(link = "cloglog", reverse = TRUE) } \keyword{classes} VGAM/man/vglm.control.Rd0000644000176000001440000002554512136651105014533 0ustar ripleyusers\name{vglm.control} \alias{vglm.control} %- Also NEED an `\alias' for EACH other topic documented here. \title{ Control function for vglm } \description{ Algorithmic constants and parameters for running \code{vglm} are set using this function. } \usage{ vglm.control(checkwz = TRUE, Check.rank = TRUE, criterion = names(.min.criterion.VGAM), epsilon = 1e-07, half.stepsizing = TRUE, maxit = 30, noWarning = FALSE, stepsize = 1, save.weight = FALSE, trace = FALSE, wzepsilon = .Machine$double.eps^0.75, xij = NULL, ...)
} %- maybe also `usage' for other objects documented here.
\arguments{ \item{checkwz}{ logical indicating whether the diagonal elements of the working weight matrices should be checked whether they are sufficiently positive, i.e., greater than \code{wzepsilon}. If not, any values less than \code{wzepsilon} are replaced with this value. } \item{Check.rank}{ logical indicating whether the rank of the VLM matrix should be checked. If this is not of full column rank then the results are not to be trusted. The default is to give an error message if the VLM matrix is not of full column rank. } \item{criterion}{ character variable describing what criterion is to be used to test for convergence. The possibilities are listed in \code{.min.criterion.VGAM}, but most family functions only implement a few of these. } \item{epsilon}{ positive convergence tolerance epsilon. Roughly speaking, the Newton-Raphson/Fisher-scoring iterations are assumed to have converged when two successive \code{criterion} values are within \code{epsilon} of each other. } \item{half.stepsizing}{ logical indicating if half-stepsizing is allowed. For example, in maximizing a log-likelihood, if the next iteration has a log-likelihood that is less than the current value of the log-likelihood, then a half step will be taken. If the log-likelihood is still less than at the current position, a quarter-step will be taken, etc. Eventually a step will be taken so that an improvement is made to the convergence criterion. \code{half.stepsizing} is ignored if \code{criterion == "coefficients"}. } \item{maxit}{ maximum number of (usually Fisher-scoring) iterations allowed. Sometimes Newton-Raphson is used. } \item{noWarning}{ logical indicating whether to suppress a warning if convergence is not obtained within \code{maxit} iterations. This is ignored if \code{maxit = 1} is set. } \item{stepsize}{ usual step size to be taken between each Newton-Raphson/Fisher-scoring iteration. It should be a value between 0 and 1, where a value of unity corresponds to an ordinary step. A value of 0.5 means half-steps are taken. Setting a value near zero will cause convergence to be generally slow but may help increase the chances of successful convergence for some family functions. } \item{save.weight}{ logical indicating whether the \code{weights} slot of a \code{"vglm"} object will be saved on the object. If not, it will be reconstructed when needed, e.g., \code{summary}. Some family functions have \code{save.weight = TRUE} and others have \code{save.weight = FALSE} in their control functions. } \item{trace}{ logical indicating if output should be produced for each iteration. Setting \code{trace = TRUE} is recommended in general because \pkg{VGAM} fits a very broad variety of models and distributions, and for some of them, convergence is intrinsically more difficult. Monitoring convergence can help check that the solution is reasonable or that a problem has occurred. It may suggest that better initial values are needed, that invalid assumptions have been made, or that the model is inappropriate for the data, etc. } \item{wzepsilon}{ small positive number used to test whether the diagonals of the working weight matrices are sufficiently positive. } \item{xij}{ A formula or a list of formulas. Each formula has a RHS giving \eqn{M} terms making up a covariate-dependent term (whose name is the response). That is, it creates a variable that takes on different values for each linear/additive predictor, e.g., the ocular pressure of each eye.
The \eqn{M} terms must be unique; use \code{\link{fill1}}, \code{fill2}, \code{fill3}, etc. if necessary. Each formula should have a response which is taken as the name of that variable, and the \eqn{M} terms are enumerated in sequential order. Each of the \eqn{M} terms multiplies each successive row of the constraint matrix. When \code{xij} is used, the use of \code{form2} is also required to give \emph{every} term used by the model. } % \item{jix}{ % A formula or a list of formulas specifying % which explanatory variables are to be plotted for each \code{xij} term. % For example, in the code below, % the term \code{BS(dumm)} could be plotted against either % \code{dum1} or \code{dum2}, therefore % either \code{jix=dum1} or \code{jix=dum2} are ok. % This argument is made use of by \code{plotvgam()}. % Each formula has a RHS giving \eqn{r_k} unique terms, % one for each column of the constraint matrix. % Each formula should have a response that matches the \code{formula} argument. % The argument \code{jix} is a reversal of \code{xij} to emphasize % the same framework for handling terms involving covariates that have % different values for each linear/additive predictor. % % } \item{\dots}{ other parameters that may be picked up from control functions that are specific to the \pkg{VGAM} family function. } }
\details{ Most of the control parameters are used within \code{vglm.fit} and you will have to look at that to understand the full details. Setting \code{save.weight = FALSE} is useful for some models because the \code{weights} slot of the object is the largest and so less memory is used to store the object. However, for some \pkg{VGAM} family functions, it is necessary to set \code{save.weight = TRUE} because the \code{weights} slot cannot be reconstructed later. }
\value{ A list with components matching the input names. A little error checking is done, but not much. The list is assigned to the \code{control} slot of \code{vglm} objects. }
\references{ Yee, T. W. and Hastie, T. J. (2003) Reduced-rank vector generalized linear models. \emph{Statistical Modelling}, \bold{3}, 15--41. }
\author{ Thomas W. Yee}
\note{ Reiterating from above, setting \code{trace = TRUE} is recommended in general. In Example 2 below there are two covariates that have linear/additive predictor specific values. These are handled using the \code{xij} argument. }
\section{Warning}{ For some applications the default convergence criterion should be tightened. Setting something like \code{criterion = "coef", epsilon = 1e-09} is one way to achieve this; adding \code{trace = TRUE} also helps to monitor the convergence. Setting \code{maxit} to some higher number is usually not needed, and needing to do so suggests something is wrong, e.g., an ill-conditioned model, over-fitting or under-fitting. }
\seealso{ \code{\link{vglm}}, \code{\link{fill}}. The author's homepage has further documentation about the \code{xij} argument. }
\examples{ # Example 1. pneumo <- transform(pneumo, let = log(exposure.time)) vglm(cbind(normal, mild, severe) ~ let, multinomial, data = pneumo, crit = "coef", step = 0.5, trace = TRUE, epsil = 1e-8, maxit = 40) # Example 2. The use of the xij argument (simple case).
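# (Some orientation for this example, deduced from the xij description
# above: the Dirichlet response ymat has 4 columns, so there are M = 4
# linear/additive predictors; each xij formula therefore lists 4 terms,
# and form2 must mention every variable used anywhere in the model.)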
ymat <- rdiric(n <- 1000, shape = rep(exp(2), len = 4)) mydat <- data.frame(x1 = runif(n), x2 = runif(n), x3 = runif(n), x4 = runif(n), z1 = runif(n), z2 = runif(n), z3 = runif(n), z4 = runif(n)) mydat <- transform(mydat, X = x1, Z = z1) mydat <- round(mydat, dig = 2) fit2 <- vglm(ymat ~ X + Z, dirichlet(parallel = TRUE), data = mydat, trace = TRUE, xij = list(Z ~ z1 + z2 + z3 + z4, X ~ x1 + x2 + x3 + x4), form2 = ~ Z + z1 + z2 + z3 + z4 + X + x1 + x2 + x3 + x4) head(model.matrix(fit2, type = "lm")) # LM model matrix head(model.matrix(fit2, type = "vlm")) # Big VLM model matrix coef(fit2) coef(fit2, matrix = TRUE) max(abs(predict(fit2)-predict(fit2, new = mydat))) # Predicts correctly summary(fit2) \dontrun{ # plotvgam(fit2, se = TRUE, xlab = "x1", which.term = 1) # Bug! # plotvgam(fit2, se = TRUE, xlab = "z1", which.term = 2) # Bug! plotvgam(fit2, xlab = "x1") # Correct plotvgam(fit2, xlab = "z1") # Correct } # Example 3. The use of the xij argument (complex case). set.seed(123) coalminers <- transform(coalminers, Age = (age - 42) / 5, dum1 = round(runif(nrow(coalminers)), dig = 2), dum2 = round(runif(nrow(coalminers)), dig = 2), dum3 = round(runif(nrow(coalminers)), dig = 2), dumm = round(runif(nrow(coalminers)), dig = 2)) BS <- function(x, ..., df = 3) bs(c(x,...), df = df)[1:length(x),,drop = FALSE] NS <- function(x, ..., df = 3) ns(c(x,...), df = df)[1:length(x),,drop = FALSE] # Equivalently... BS <- function(x, ..., df = 3) head(bs(c(x,...), df = df), length(x), drop = FALSE) NS <- function(x, ..., df = 3) head(ns(c(x,...), df = df), length(x), drop = FALSE) fit3 <- vglm(cbind(nBnW,nBW,BnW,BW) ~ Age + NS(dum1, dum2), fam = binom2.or(exchangeable = TRUE, zero = 3), xij = list(NS(dum1, dum2) ~ NS(dum1, dum2) + NS(dum2, dum1) + fill(NS( dum1))), form2 = ~ NS(dum1, dum2) + NS(dum2, dum1) + fill(NS(dum1)) + dum1 + dum2 + dum3 + Age + age + dumm, data = coalminers, trace = TRUE) head(model.matrix(fit3, type = "lm")) # LM model matrix head(model.matrix(fit3, type = "vlm")) # Big VLM model matrix coef(fit3) coef(fit3, matrix = TRUE) \dontrun{ plotvgam(fit3, se = TRUE, lcol = "red", scol = "blue", xlab = "dum1") } } \keyword{models} \keyword{regression} % zz 20090506 put elsewhere: % % %# Example 4. The use of the xij argument (complex case). %# Here is one method to handle the xij argument with a term that %# produces more than one column in the model matrix. %# The constraint matrix for 'op' has one column. %POLY3 <- function(x, ...) { % # A cubic; ensures that the basis functions are the same. % poly(c(x,...), 3)[1:length(x),] %} % %\dontrun{ %fit4 <- vglm(cbind(leye,reye) ~ POLY3(op), trace=TRUE, % fam = binom2.or(exchangeable=TRUE, zero=3), data=eyesdata, % xij = list(POLY3(op) ~ POLY3(lop,rop) + POLY3(rop,lop) + % fill(POLY3(lop,rop))), % form2 = ~ POLY3(op) + POLY3(lop,rop) + POLY3(rop,lop) + % fill(POLY3(lop,rop))) %coef(fit4) %coef(fit4, matrix=TRUE) %head(predict(fit4)) %} VGAM/man/vglm.Rd0000644000176000001440000004154012136651105013047 0ustar ripleyusers\name{vglm} \alias{vglm} %\alias{vglm.fit} \title{Fitting Vector Generalized Linear Models } \description{ \code{vglm} is used to fit vector generalized linear models (VGLMs). This is a very large class of models that includes generalized linear models (GLMs) as a special case. 
} \usage{ vglm(formula, family, data = list(), weights = NULL, subset = NULL, na.action = na.fail, etastart = NULL, mustart = NULL, coefstart = NULL, control = vglm.control(...), offset = NULL, method = "vglm.fit", model = FALSE, x.arg = TRUE, y.arg = TRUE, contrasts = NULL, constraints = NULL, extra = list(), form2 = NULL, qr.arg = TRUE, smart = TRUE, ...) } %- maybe also `usage' for other objects documented here.
\arguments{ \item{formula}{ a symbolic description of the model to be fit. The RHS of the formula is applied to each linear predictor. Different variables in each linear predictor can be chosen by specifying constraint matrices. } \item{family}{ a function of class \code{"vglmff"} (see \code{\link{vglmff-class}}) describing what statistical model is to be fitted. This is called a ``\pkg{VGAM} family function''. See \code{\link{CommonVGAMffArguments}} for general information about many types of arguments found in this type of function. } \item{data}{ an optional data frame containing the variables in the model. By default the variables are taken from \code{environment(formula)}, typically the environment from which \code{vglm} is called. } \item{weights}{ an optional vector or matrix of (prior) weights to be used in the fitting process. If the \pkg{VGAM} family function handles multiple responses (\eqn{q > 1} of them, say) then \code{weights} can be a matrix with \eqn{q} columns. Each column matches the respective response. If it is a vector (the usual case) then it is recycled into a matrix with \eqn{q} columns. The values of \code{weights} must be positive; try setting a very small value such as \code{1.0e-8} to effectively delete an observation. % If \code{weights} is a matrix, % then it must be in \emph{matrix-band} form, whereby the % first \eqn{M} columns of the matrix are the diagonals, % followed by the upper-diagonal band, followed by the % band above that, etc. In this case, there can be up to % \eqn{M(M+1)/2} columns, with the last column corresponding % to the (1,\eqn{M}) elements of the weight matrices. } \item{subset}{ an optional logical vector specifying a subset of observations to be used in the fitting process. } \item{na.action}{ a function which indicates what should happen when the data contain \code{NA}s. The default is set by the \code{na.action} setting of \code{\link[base]{options}}, and is \code{na.fail} if that is unset. The ``factory-fresh'' default is \code{na.omit}. } \item{etastart}{ starting values for the linear predictors. It is an \eqn{M}-column matrix with the same number of rows as the response. If \eqn{M = 1} then it may be a vector. Note that \code{etastart} and the output of \code{predict(fit)} should be comparable. Here, \code{fit} is the fitted object. } \item{mustart}{ starting values for the fitted values. It can be a vector or a matrix; if a matrix, then it has the same number of rows as the response. Usually \code{mustart} and the output of \code{fitted(fit)} should be comparable. Some family functions do not make use of this argument. } \item{coefstart}{ starting values for the coefficient vector. The length and order must match that of \code{coef(fit)}. } \item{control}{ a list of parameters for controlling the fitting process. See \code{\link{vglm.control}} for details. } \item{offset}{ a vector or \eqn{M}-column matrix of offset values. These are \emph{a priori} known and are added to the linear/additive predictors during fitting. } \item{method}{ the method to be used in fitting the model.
The default (and presently only) method \code{vglm.fit()} uses iteratively reweighted least squares (IRLS). } \item{model}{ a logical value indicating whether the \emph{model frame} should be assigned in the \code{model} slot. } \item{x.arg, y.arg}{ logical values indicating whether the model matrix and response vector/matrix used in the fitting process should be assigned in the \code{x} and \code{y} slots. Note the model matrix is the LM model matrix; to get the VGLM model matrix type \code{model.matrix(vglmfit)} where \code{vglmfit} is a \code{vglm} object. } \item{contrasts}{ an optional list. See the \code{contrasts.arg} of \code{\link{model.matrix.default}}. } \item{constraints}{ an optional list of constraint matrices. Each component of the list must be named after the term it corresponds to (and the name must match in character format exactly). There are two types of input: \code{"lm"}-type and \code{"vlm"}-type. The former is a subset of the latter. The former has a matrix for each term of the LM matrix. The latter has a matrix for each column of the VLM matrix. After fitting, the \code{\link{constraints}} extractor function may be applied; it returns the \code{"vlm"}-type list of constraint matrices by default. If \code{"lm"}-type are returned by \code{\link{constraints}} then these can be fed into this argument and it should give the same model as before. Each constraint matrix must have \eqn{M} rows, and be of full-column rank. By default, constraint matrices are the \eqn{M} by \eqn{M} identity matrix unless arguments in the family function itself override these values, e.g., \code{parallel} (see \code{\link{CommonVGAMffArguments}}). If \code{constraints} is used it must contain \emph{all} the terms; an incomplete list is not accepted. } \item{extra}{ an optional list with any extra information that might be needed by the \pkg{VGAM} family function. } \item{form2}{ The second (optional) formula. If argument \code{xij} is used (see \code{\link{vglm.control}}) then \code{form2} needs to have \emph{all} terms in the model. Also, some \pkg{VGAM} family functions such as \code{\link{micmen}} use this argument to input the regressor variable. If given, the slots \code{@Xm2} and \code{@Ym2} may be assigned. Note that smart prediction applies to terms in \code{form2} too. } \item{qr.arg}{ logical value indicating whether the slot \code{qr}, which returns the QR decomposition of the VLM model matrix, is returned on the object. } \item{smart}{ logical value indicating whether smart prediction (\code{\link{smartpred}}) will be used. } \item{\dots}{ further arguments passed into \code{\link{vglm.control}}. } }
\details{ A vector generalized linear model (VGLM) is loosely defined as a statistical model that is a function of \eqn{M} linear predictors. The central formula is given by \deqn{\eta_j = \beta_j^T x}{% eta_j = beta_j^T x} where \eqn{x}{x} is a vector of explanatory variables (sometimes just a 1 for an intercept), and \eqn{\beta_j}{beta_j} is a vector of regression coefficients to be estimated. Here, \eqn{j=1,\ldots,M}, where \eqn{M} is finite. Then one can write \eqn{\eta=(\eta_1,\ldots,\eta_M)^T}{eta=(eta_1,\ldots,eta_M)^T} as a vector of linear predictors. Most users will find \code{vglm} similar in flavour to \code{\link[stats]{glm}}. The function \code{vglm.fit} actually does the work. % If more than one of \code{etastart}, \code{start} and \code{mustart} % is specified, the first in the list will be used. }
\value{ An object of class \code{"vglm"}, which has the following slots.
Some of these may not be assigned to save space, and will be recreated if necessary later. \item{extra}{the list \code{extra} at the end of fitting.} \item{family}{the family function (of class \code{"vglmff"}).} \item{iter}{the number of IRLS iterations used.} \item{predictors}{a \eqn{M}-column matrix of linear predictors.} \item{assign}{a named list which matches the columns and the (LM) model matrix terms.} \item{call}{the matched call.} \item{coefficients}{a named vector of coefficients.} \item{constraints}{ a named list of constraint matrices used in the fitting. } \item{contrasts}{the contrasts used (if any).} \item{control}{list of control parameter used in the fitting.} \item{criterion}{list of convergence criterion evaluated at the final IRLS iteration.} \item{df.residual}{the residual degrees of freedom.} \item{df.total}{the total degrees of freedom.} \item{dispersion}{the scaling parameter.} \item{effects}{the effects.} \item{fitted.values}{ the fitted values, as a matrix. This is often the mean but may be quantiles, or the location parameter, e.g., in the Cauchy model. } \item{misc}{a list to hold miscellaneous parameters.} \item{model}{the model frame.} \item{na.action}{a list holding information about missing values.} \item{offset}{if non-zero, a \eqn{M}-column matrix of offsets.} \item{post}{a list where post-analysis results may be put.} \item{preplot}{used by \code{\link{plotvgam}}, the plotting parameters may be put here.} \item{prior.weights}{ initially supplied weights (the \code{weights} argument). Also see \code{\link{weightsvglm}}. } \item{qr}{the QR decomposition used in the fitting.} \item{R}{the \bold{R} matrix in the QR decomposition used in the fitting.} \item{rank}{numerical rank of the fitted model.} \item{residuals}{the \emph{working} residuals at the final IRLS iteration.} \item{rss}{residual sum of squares at the final IRLS iteration with the adjusted dependent vectors and weight matrices.} \item{smart.prediction}{ a list of data-dependent parameters (if any) that are used by smart prediction. } \item{terms}{the \code{\link[stats]{terms}} object used.} \item{weights}{the working weight matrices at the final IRLS iteration. This is in matrix-band form.} \item{x}{the model matrix (linear model LM, not VGLM).} \item{xlevels}{the levels of the factors, if any, used in fitting.} \item{y}{the response, in matrix form.} This slot information is repeated at \code{\link{vglm-class}}. } \references{ Yee, T. W. and Hastie, T. J. (2003) Reduced-rank vector generalized linear models. \emph{Statistical Modelling}, \bold{3}, 15--41. Yee, T. W. and Wild, C. J. (1996) Vector generalized additive models. \emph{Journal of the Royal Statistical Society, Series B, Methodological}, \bold{58}, 481--493. Yee, T. W. (2008) The \code{VGAM} Package. \emph{R News}, \bold{8}, 28--39. Documentation accompanying the \pkg{VGAM} package at \url{http://www.stat.auckland.ac.nz/~yee} contains further information and examples. } \author{ Thomas W. Yee } \note{ This function can fit a wide variety of statistical models. Some of these are harder to fit than others because of inherent numerical difficulties associated with some of them. Successful model fitting benefits from cumulative experience. Varying the values of arguments in the \pkg{VGAM} family function itself is a good first step if difficulties arise, especially if initial values can be inputted. A second, more general step, is to vary the values of arguments in \code{\link{vglm.control}}. 
A third step is to make use of arguments such as \code{etastart}, \code{coefstart} and \code{mustart}. Some \pkg{VGAM} family functions end in \code{"ff"} to avoid interference with other functions, e.g., \code{\link{binomialff}}, \code{\link{poissonff}}, \code{\link{gaussianff}}, \code{gammaff}. This is because \pkg{VGAM} family functions are incompatible with \code{\link[stats]{glm}} (and also \code{\link[gam]{gam}} in the \pkg{gam} library and \code{\link[mgcv]{gam}} in the \pkg{mgcv} library). The smart prediction (\code{\link{smartpred}}) library is incorporated within the \pkg{VGAM} library. The theory behind the scaling parameter is currently being made more rigorous, but it should give the same value as the scale parameter for GLMs. In Example 5 below, the \code{xij} argument is used to illustrate covariates that are specific to a linear predictor. Here, \code{lop}/\code{rop} are the ocular pressures of the left/right eye (artificial data). Variables \code{leye} and \code{reye} might be the presence/absence of a particular disease on the LHS/RHS eye respectively. See \code{\link{vglm.control}} and \code{\link{fill}} for more details and examples. } %~Make other sections like WARNING with \section{WARNING }{....} ~
\section{WARNING}{ See warnings in \code{\link{vglm.control}}. }
\seealso{ \code{\link{vglm.control}}, \code{\link{vglm-class}}, \code{\link{vglmff-class}}, \code{\link{smartpred}}, \code{vglm.fit}, \code{\link{fill}}, \code{\link{rrvglm}}, \code{\link{vgam}}. Methods functions include \code{coef.vlm}, \code{\link{constraints.vlm}}, \code{\link{hatvaluesvlm}}, \code{\link{predictvglm}}, \code{summary.vglm}, \code{AIC.vglm}, \code{\link{lrtest_vglm}}, etc. }
\examples{ # Example 1. See help(glm) print(d.AD <- data.frame(treatment = gl(3, 3), outcome = gl(3, 1, 9), counts = c(18,17,15,20,10,20,25,13,12))) vglm.D93 <- vglm(counts ~ outcome + treatment, family = poissonff, data = d.AD, trace = TRUE) summary(vglm.D93) # Example 2. Multinomial logit model pneumo <- transform(pneumo, let = log(exposure.time)) vglm(cbind(normal, mild, severe) ~ let, multinomial, pneumo) # Example 3. Proportional odds model fit3 <- vglm(cbind(normal, mild, severe) ~ let, propodds, pneumo) coef(fit3, matrix = TRUE) constraints(fit3) model.matrix(fit3, type = "lm") # LM model matrix model.matrix(fit3) # Larger VGLM (or VLM) model matrix # Example 4. Bivariate logistic model fit4 <- vglm(cbind(nBnW, nBW, BnW, BW) ~ age, binom2.or, coalminers) coef(fit4, matrix = TRUE) depvar(fit4) # Responses are proportions weights(fit4, type = "prior") # Example 5. The use of the xij argument (simple case). # The constraint matrix for 'op' has one column. nn <- 1000 eyesdat <- round(data.frame(lop = runif(nn), rop = runif(nn), op = runif(nn)), dig = 2) eyesdat <- transform(eyesdat, eta1 = -1 + 2 * lop, eta2 = -1 + 2 * rop) eyesdat <- transform(eyesdat, leye = rbinom(nn, size = 1, prob = logit(eta1, inv = TRUE)), reye = rbinom(nn, size = 1, prob = logit(eta2, inv = TRUE))) head(eyesdat) fit5 <- vglm(cbind(leye, reye) ~ op, binom2.or(exchangeable = TRUE, zero = 3), data = eyesdat, trace = TRUE, xij = list(op ~ lop + rop + fill(lop)), form2 = ~ op + lop + rop + fill(lop)) coef(fit5) coef(fit5, matrix = TRUE) constraints(fit5) } \keyword{models} \keyword{regression} %eyesdat$leye <- ifelse(runif(n) < 1/(1+exp(-1+2*eyesdat$lop)), 1, 0) %eyesdat$reye <- ifelse(runif(n) < 1/(1+exp(-1+2*eyesdat$rop)), 1, 0) %coef(fit, matrix = TRUE, compress = FALSE) % 20090506 zz Put these examples elsewhere: % %# Example 6.
The use of the xij argument (complex case). %# Here is one method to handle the xij argument with a term that %# produces more than one column in the model matrix. %# The constraint matrix for 'op' has essentially one column. %POLY3 <- function(x, ...) { % # A cubic; ensures that the basis functions are the same. % poly(c(x,...), 3)[1:length(x),] % head(poly(c(x,...), 3), length(x), drop = FALSE) %} % %fit6 <- vglm(cbind(leye, reye) ~ POLY3(op), trace = TRUE, % fam = binom2.or(exchangeable = TRUE, zero=3), data=eyesdat, % xij = list(POLY3(op) ~ POLY3(lop,rop) + POLY3(rop,lop) + % fill(POLY3(lop,rop))), % form2 = ~ POLY3(op) + POLY3(lop,rop) + POLY3(rop,lop) + % fill(POLY3(lop,rop))) %coef(fit6) %coef(fit6, matrix = TRUE) %head(predict(fit6)) %\dontrun{ %plotvgam(fit6, se = TRUE) # Wrong since it plots against op, not lop. %} % % %# Example 7. The use of the xij argument (simple case). %# Each constraint matrix has 4 columns. %ymat <- rdiric(n <- 1000, shape=c(4,7,3,1)) %mydat <- data.frame(x1=runif(n), x2=runif(n), x3=runif(n), x4=runif(n), % z1=runif(n), z2=runif(n), z3=runif(n), z4=runif(n), % X2=runif(n), Z2=runif(n)) %mydat <- round(mydat, dig=2) %fit7 <- vglm(ymat ~ X2 + Z2, data=mydat, crit="c", % fam = dirichlet(parallel = TRUE), # Intercept is also parallel. % xij = list(Z2 ~ z1 + z2 + z3 + z4, % X2 ~ x1 + x2 + x3 + x4), % form2 = ~ Z2 + z1 + z2 + z3 + z4 + % X2 + x1 + x2 + x3 + x4) %head(model.matrix(fit7, type="lm")) # LM model matrix %head(model.matrix(fit7, type="vlm")) # Big VLM model matrix %coef(fit7) %coef(fit7, matrix = TRUE) %max(abs(predict(fit7)-predict(fit7, new=mydat))) # Predicts correctly %summary(fit7) VGAM/man/vglm-class.Rd0000644000176000001440000001655512136651105014162 0ustar ripleyusers\name{vglm-class} \docType{class} \alias{vglm-class} \title{Class ``vglm'' } \description{ Vector generalized linear models. } \section{Objects from the Class}{ Objects can be created by calls of the form \code{vglm(...)}. % ~~ describe objects here ~~ } \section{Slots}{ In the following, \eqn{M} is the number of linear predictors. \describe{ \item{\code{extra}:}{Object of class \code{"list"}; the \code{extra} argument on entry to \code{vglm}. This contains any extra information that might be needed by the family function. } \item{\code{family}:}{Object of class \code{"vglmff"}. The family function. } \item{\code{iter}:}{Object of class \code{"numeric"}. The number of IRLS iterations used. } \item{\code{predictors}:}{Object of class \code{"matrix"} with \eqn{M} columns which holds the \eqn{M} linear predictors. } \item{\code{assign}:}{Object of class \code{"list"}, from class \code{ "vlm"}. This named list gives information matching the columns and the (LM) model matrix terms. } \item{\code{call}:}{Object of class \code{"call"}, from class \code{ "vlm"}. The matched call. } \item{\code{coefficients}:}{Object of class \code{"numeric"}, from class \code{ "vlm"}. A named vector of coefficients. } \item{\code{constraints}:}{Object of class \code{"list"}, from class \code{ "vlm"}. A named list of constraint matrices used in the fitting. } \item{\code{contrasts}:}{Object of class \code{"list"}, from class \code{ "vlm"}. The contrasts used (if any). } \item{\code{control}:}{Object of class \code{"list"}, from class \code{ "vlm"}. A list of parameters for controlling the fitting process. See \code{\link{vglm.control}} for details. } \item{\code{criterion}:}{Object of class \code{"list"}, from class \code{ "vlm"}. List of convergence criterion evaluated at the final IRLS iteration. 
} \item{\code{df.residual}:}{Object of class \code{"numeric"}, from class \code{ "vlm"}. The residual degrees of freedom. } \item{\code{df.total}:}{Object of class \code{"numeric"}, from class \code{ "vlm"}. The total degrees of freedom. } \item{\code{dispersion}:}{Object of class \code{"numeric"}, from class \code{ "vlm"}. The scaling parameter. } \item{\code{effects}:}{Object of class \code{"numeric"}, from class \code{ "vlm"}. The effects. } \item{\code{fitted.values}:}{Object of class \code{"matrix"}, from class \code{ "vlm"}. The fitted values. %This may be missing or consist entirely %of \code{NA}s, e.g., the Cauchy model. } \item{\code{misc}:}{Object of class \code{"list"}, from class \code{ "vlm"}. A named list to hold miscellaneous parameters. } \item{\code{model}:}{Object of class \code{"data.frame"}, from class \code{ "vlm"}. The model frame. } \item{\code{na.action}:}{Object of class \code{"list"}, from class \code{ "vlm"}. A list holding information about missing values. } \item{\code{offset}:}{Object of class \code{"matrix"}, from class \code{ "vlm"}. If non-zero, a \eqn{M}-column matrix of offsets. } \item{\code{post}:}{Object of class \code{"list"}, from class \code{ "vlm"} where post-analysis results may be put. } \item{\code{preplot}:}{Object of class \code{"list"}, from class \code{ "vlm"} used by \code{\link{plotvgam}}; the plotting parameters may be put here. } \item{\code{prior.weights}:}{Object of class \code{"matrix"}, from class \code{ "vlm"} holding the initially supplied weights. } \item{\code{qr}:}{Object of class \code{"list"}, from class \code{ "vlm"}. QR decomposition at the final iteration. } \item{\code{R}:}{Object of class \code{"matrix"}, from class \code{ "vlm"}. The \bold{R} matrix in the QR decomposition used in the fitting. } \item{\code{rank}:}{Object of class \code{"integer"}, from class \code{ "vlm"}. Numerical rank of the fitted model. } \item{\code{residuals}:}{Object of class \code{"matrix"}, from class \code{ "vlm"}. The \emph{working} residuals at the final IRLS iteration. } \item{\code{rss}:}{Object of class \code{"numeric"}, from class \code{ "vlm"}. Residual sum of squares at the final IRLS iteration with the adjusted dependent vectors and weight matrices. } \item{\code{smart.prediction}:}{Object of class \code{"list"}, from class \code{ "vlm"}. A list of data-dependent parameters (if any) that are used by smart prediction. } \item{\code{terms}:}{Object of class \code{"list"}, from class \code{ "vlm"}. The \code{\link[stats]{terms}} object used. } \item{\code{weights}:}{Object of class \code{"matrix"}, from class \code{ "vlm"}. The weight matrices at the final IRLS iteration. This is in matrix-band form. } \item{\code{x}:}{Object of class \code{"matrix"}, from class \code{ "vlm"}. The model matrix (LM, not VGLM). } \item{\code{xlevels}:}{Object of class \code{"list"}, from class \code{ "vlm"}. The levels of the factors, if any, used in fitting. } \item{\code{y}:}{Object of class \code{"matrix"}, from class \code{ "vlm"}. The response, in matrix form. } \item{\code{Xm2}:}{Object of class \code{"matrix"}, from class \code{ "vlm"}. See \code{\link{vglm-class}}). } \item{\code{Ym2}:}{Object of class \code{"matrix"}, from class \code{ "vlm"}. See \code{\link{vglm-class}}). } \item{\code{callXm2}:}{ Object of class \code{"call"}, from class \code{ "vlm"}. The matched call for argument \code{form2}. } } } \section{Extends}{ Class \code{"vlm"}, directly. 
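% A minimal sketch exercising a few of the methods listed below
% (assuming the pneumo fit from the example at the end of this file):
% pneumo <- transform(pneumo, let = log(exposure.time))
% fit <- vglm(cbind(normal, mild, severe) ~ let, multinomial, pneumo)
% deviance(fit)       # the deviance method
% head(predict(fit))  # extract the linear predictors
% summary(fit)        # more detailed summary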
} \section{Methods}{ \describe{ \item{cdf}{\code{signature(object = "vglm")}: cumulative distribution function. Applicable to, e.g., quantile regression and extreme value data models.} \item{deplot}{\code{signature(object = "vglm")}: Applicable to, e.g., quantile regression.} \item{deviance}{\code{signature(object = "vglm")}: deviance of the model (where applicable). } \item{plot}{\code{signature(x = "vglm")}: diagnostic plots. } \item{predict}{\code{signature(object = "vglm")}: extract the linear predictors or predict the linear predictors at a new data frame.} \item{print}{\code{signature(x = "vglm")}: short summary of the object. } \item{qtplot}{\code{signature(object = "vglm")}: quantile plot (only applicable to some models). } \item{resid}{\code{signature(object = "vglm")}: residuals. There are various types of these. } \item{residuals}{\code{signature(object = "vglm")}: residuals. Shorthand for \code{resid}. } \item{rlplot}{\code{signature(object = "vglm")}: return level plot. Useful for extreme value data models.} \item{summary}{\code{signature(object = "vglm")}: a more detailed summary of the object. } } } \references{ Yee, T. W. and Hastie, T. J. (2003) Reduced-rank vector generalized linear models. \emph{Statistical Modelling}, \bold{3}, 15--41. Yee, T. W. and Wild, C. J. (1996) Vector generalized additive models. \emph{Journal of the Royal Statistical Society, Series B, Methodological}, \bold{58}, 481--493. \url{http://www.stat.auckland.ac.nz/~yee} } \author{ Thomas W. Yee } %\note{ ~~further notes~~ } %~Make other sections like WARNING with \section{WARNING }{....} ~ \seealso{ \code{\link{vglm}}, \code{\link{vglmff-class}}, \code{\link{vgam-class}}. } \examples{ # Multinomial logit model pneumo <- transform(pneumo, let = log(exposure.time)) vglm(cbind(normal, mild, severe) ~ let, multinomial, pneumo) } \keyword{classes} VGAM/man/vgam.control.Rd0000644000176000001440000001271412136651105014514 0ustar ripleyusers\name{vgam.control} \alias{vgam.control} %- Also NEED an `\alias' for EACH other topic documented here. \title{ Control function for vgam } \description{ Algorithmic constants and parameters for running \code{\link{vgam}} are set using this function. } \usage{ vgam.control(all.knots = FALSE, bf.epsilon = 1e-07, bf.maxit = 30, checkwz=TRUE, criterion = names(.min.criterion.VGAM), epsilon = 1e-07, maxit = 30, na.action = na.fail, nk = NULL, save.weight = FALSE, se.fit = TRUE, trace = FALSE, wzepsilon = .Machine$double.eps^0.75, ...) } %- maybe also `usage' for other objects documented here. \arguments{ % zz na.action differs from vglm \item{all.knots}{ logical indicating if all distinct points of the smoothing variables are to be used as knots. By default, \code{all.knots=TRUE} for \eqn{n \leq 40}{n <= 40}, and for \eqn{n > 40}, the number of knots is approximately \eqn{40 + (n-40)^{0.25}}{40 + (n-40)^0.25}. This increases very slowly with \eqn{n} so that the number of knots is approximately between 50 and 60 for large \eqn{n}. } \item{bf.epsilon}{ tolerance used by the modified vector backfitting algorithm for testing convergence. Must be a positive number. } \item{bf.maxit}{ maximum number of iterations allowed in the modified vector backfitting algorithm. Must be a positive integer. } \item{checkwz}{ logical indicating whether the diagonal elements of the working weight matrices should be checked whether they are sufficiently positive, i.e., greater than \code{wzepsilon}. If not, any values less than \code{wzepsilon} are replaced with this value. 
} \item{criterion}{ character variable describing what criterion is to be used to test for convergence. The possibilities are listed in \code{.min.criterion.VGAM}, but most family functions only implement a few of these. } \item{epsilon}{ positive convergence tolerance epsilon. Roughly speaking, the Newton-Raphson/Fisher-scoring/local-scoring iterations are assumed to have converged when two successive \code{criterion} values are within \code{epsilon} of each other. } \item{maxit}{ maximum number of Newton-Raphson/Fisher-scoring/local-scoring iterations allowed. } \item{na.action}{ how to handle missing values. Unlike the S-PLUS \code{gam} function, \code{\link{vgam}} cannot handle \code{NA}s when smoothing. } \item{nk}{ vector of length \eqn{d} containing positive integers, where \eqn{d} is the number of \code{\link{s}} terms in the formula. Recycling is used if necessary. The \eqn{i}th value is the number of B-spline coefficients to be estimated for each component function of the \eqn{i}th \code{s()} term. \code{nk} differs from the number of knots by some constant. If specified, \code{nk} overrides the automatic knot selection procedure. } \item{save.weight}{ logical indicating whether the \code{weights} slot of a \code{"vglm"} object will be saved on the object. If not, it will be reconstructed when needed, e.g., \code{summary}. } \item{se.fit}{ logical indicating whether approximate pointwise standard errors are to be saved on the object. If \code{TRUE}, then these can be plotted with \code{plot(..., se = TRUE)}. } \item{trace}{ logical indicating if output should be produced for each iteration. } \item{wzepsilon}{ Small positive number used to test whether the diagonals of the working weight matrices are sufficiently positive. } % \item{xij}{ % formula giving terms making up a covariate-dependent term. % % } \item{\dots}{ other parameters that may be picked up from control functions that are specific to the \pkg{VGAM} family function. % zz see later. } }
\details{ Most of the control parameters are used within \code{vgam.fit} and you will have to look at that to understand the full details. Many of the control parameters are used in a similar manner by \code{vglm.fit} (\code{\link{vglm}}) because the algorithm (IRLS) is very similar. Setting \code{save.weight=FALSE} is useful for some models because the \code{weights} slot of the object is often the largest and so less memory is used to store the object. However, for some \pkg{VGAM} family functions, it is necessary to set \code{save.weight=TRUE} because the \code{weights} slot cannot be reconstructed later. }
\value{ A list with components matching the input names. A little error checking is done, but not much. The list is assigned to the \code{control} slot of \code{\link{vgam}} objects. }
\references{ Yee, T. W. and Wild, C. J. (1996) Vector generalized additive models. \emph{Journal of the Royal Statistical Society, Series B, Methodological}, \bold{58}, 481--493. % \url{http://www.stat.auckland.ac.nz/~yee} }
\author{ Thomas W. Yee}
\note{ \code{\link{vgam}} does not implement half-stepsizing; therefore parametric models should be fitted with \code{\link{vglm}}. Also, \code{\link{vgam}} is slower than \code{\link{vglm}}. }
\section{Warning}{ See \code{\link{vglm.control}}. }
\seealso{ \code{\link{vgam}}, \code{\link{vglm.control}}, \code{\link{vsmooth.spline}}, \code{\link{vglm}}.
} \examples{ pneumo <- transform(pneumo, let = log(exposure.time)) vgam(cbind(normal, mild, severe) ~ s(let, df = 2), multinomial, data = pneumo, trace = TRUE, eps = 1e-4, maxit = 10) } \keyword{models} \keyword{regression} \keyword{smooth} % xij = NULL, VGAM/man/vgam.Rd0000644000176000001440000001744412136651105013042 0ustar ripleyusers\name{vgam} \alias{vgam} %\alias{vgam.fit} \title{ Fitting Vector Generalized Additive Models } % 15/2/03; based a lot from vglm.Rd \description{ Fit a vector generalized additive model (VGAM). This is a large class of models that includes generalized additive models (GAMs) and vector generalized linear models (VGLMs) as special cases. } \usage{ vgam(formula, family, data = list(), weights = NULL, subset = NULL, na.action = na.fail, etastart = NULL, mustart = NULL, coefstart = NULL, control = vgam.control(...), offset = NULL, method = "vgam.fit", model = FALSE, x.arg = TRUE, y.arg = TRUE, contrasts = NULL, constraints = NULL, extra = list(), qr.arg = FALSE, smart = TRUE, ...) } %- maybe also `usage' for other objects documented here. \arguments{ % The following comes from vglm.Rd but with minor tweaks \item{formula}{ a symbolic description of the model to be fit. The RHS of the formula is applied to each linear/additive predictor, and usually includes at least one \code{\link[VGAM]{s}} term. Different variables in each linear/additive predictor can be chosen by specifying constraint matrices. } \item{family}{ Same as for \code{\link{vglm}}. } \item{data}{ an optional data frame containing the variables in the model. By default the variables are taken from \code{environment(formula)}, typically the environment from which \code{vgam} is called. } \item{weights, subset, na.action}{ Same as for \code{\link{vglm}}. } \item{etastart, mustart, coefstart}{ Same as for \code{\link{vglm}}. } \item{control}{ a list of parameters for controlling the fitting process. See \code{\link{vgam.control}} for details. } \item{method}{ the method to be used in fitting the model. The default (and presently only) method \code{vgam.fit} uses iteratively reweighted least squares (IRLS). } \item{constraints, model, offset}{ Same as for \code{\link{vglm}}. } \item{x.arg, y.arg}{ logical values indicating whether the model matrix and response vector/matrix used in the fitting process should be assigned in the \code{x} and \code{y} slots. Note the model matrix is the LM model matrix; to get the VGAM model matrix type \code{model.matrix(vgamfit)} where \code{vgamfit} is a \code{vgam} object. } \item{contrasts, extra, qr.arg, smart}{ Same as for \code{\link{vglm}}. } \item{\dots}{ further arguments passed into \code{\link{vgam.control}}. } } \details{ A vector generalized additive model (VGAM) is loosely defined as a statistical model that is a function of \eqn{M} additive predictors. The central formula is given by \deqn{\eta_j = \sum_{k=1}^p f_{(j)k}(x_k)}{% eta_j = sum_{k=1}^p f_{(j)k}(x_k)} where \eqn{x_k}{x_k} is the \eqn{k}th explanatory variable (almost always \eqn{x_1=1} for the intercept term), and \eqn{f_{(j)k}} are smooth functions of \eqn{x_k} that are estimated by smoothers. The first term in the summation is just the intercept. Currently only one type of smoother is implemented and this is called a \emph{vector (cubic smoothing spline) smoother}. Here, \eqn{j=1,\ldots,M} where \eqn{M} is finite. If all the functions are constrained to be linear then the resulting model is a vector generalized linear model (VGLM). VGLMs are best fitted with \code{\link{vglm}}. 
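% A minimal sketch (assuming the pneumo data frame used in the examples
% below; the df value here is arbitrary) of how the M additive predictors
% eta_j can be inspected after fitting:
% pneumo <- transform(pneumo, let = log(exposure.time))
% fit <- vgam(cbind(normal, mild, severe) ~ s(let, df = 2),
%             cumulative(parallel = TRUE), data = pneumo)
% head(predict(fit))                  # The M additive predictors eta_j
% head(predict(fit, type = "terms"))  # The estimated f_(j)k(x_k) terms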
Vector (cubic smoothing spline) smoothers are represented by \code{s()} (see \code{\link[VGAM]{s}}). Local regression via \code{lo()} is \emph{not} supported. The results of \code{vgam} will differ from those of \code{gam()} (in the \pkg{gam} package) because \code{vgam()} uses a different knot selection algorithm. In general, fewer knots are chosen because the computation becomes expensive when the number of additive predictors \eqn{M} is large. The underlying algorithm of VGAMs is iteratively reweighted least squares (IRLS) and modified vector backfitting using vector splines. B-splines are used as the basis functions for the vector (smoothing) splines. \code{vgam.fit()} is the function that actually does the work. The smoothing code is based on F. O'Sullivan's BART code. % If more than one of \code{etastart}, \code{start} and \code{mustart} % is specified, the first in the list will be used. A closely related methodology based on VGAMs called \emph{constrained additive ordination} (CAO) first forms a linear combination of the explanatory variables (called \emph{latent variables}) and then fits a GAM to these. This is implemented in the function \code{\link{cao}} for a very limited choice of family functions. } \value{ An object of class \code{"vgam"} (see \code{\link{vgam-class}} for further information). } \references{ Yee, T. W. and Wild, C. J. (1996) Vector generalized additive models. \emph{Journal of the Royal Statistical Society, Series B, Methodological}, \bold{58}, 481--493. Yee, T. W. (2008) The \code{VGAM} Package. \emph{R News}, \bold{8}, 28--39. Documentation accompanying the \pkg{VGAM} package at \url{http://www.stat.auckland.ac.nz/~yee} contains further information and examples. } \author{ Thomas W. Yee } \note{ This function can fit a wide variety of statistical models. Some of these are harder to fit than others because of inherent numerical difficulties associated with some of them. Successful model fitting benefits from cumulative experience. Varying the values of arguments in the \pkg{VGAM} family function itself is a good first step if difficulties arise, especially if initial values can be supplied. A second, more general step is to vary the values of arguments in \code{\link{vgam.control}}. A third step is to make use of arguments such as \code{etastart}, \code{coefstart} and \code{mustart}. Some \pkg{VGAM} family functions end in \code{"ff"} to avoid interference with other functions, e.g., \code{\link{binomialff}}, \code{\link{poissonff}}, \code{\link{gaussianff}}, \code{gammaff}. This is because \pkg{VGAM} family functions are incompatible with \code{\link[stats]{glm}} (and also \code{\link[gam]{gam}} in the \pkg{gam} library and \code{\link[mgcv]{gam}} in the \pkg{mgcv} library). The smart prediction (\code{\link{smartpred}}) library is packaged with the \pkg{VGAM} library. The theory behind the scaling parameter is currently being made more rigorous, but it should give the same value as the scale parameter for GLMs. } %~Make other sections like WARNING with \section{WARNING }{....} ~ \section{WARNING}{ See warnings in \code{\link{vglm.control}}. } \seealso{ \code{\link{vgam.control}}, \code{\link{vgam-class}}, \code{\link{vglmff-class}}, \code{\link{plotvgam}}, \code{\link{vglm}}, \code{\link[VGAM]{s}}, \code{\link{vsmooth.spline}}, \code{\link{cao}}.
} \examples{ # Nonparametric proportional odds model pneumo <- transform(pneumo, let = log(exposure.time)) vgam(cbind(normal, mild, severe) ~ s(let), cumulative(parallel = TRUE), data = pneumo) # Nonparametric logistic regression fit <- vgam(agaaus ~ s(altitude, df = 2), binomialff, data = hunua) \dontrun{ plot(fit, se = TRUE) } pfit <- predict(fit, type = "terms", raw = TRUE, se = TRUE) names(pfit) head(pfit$fitted) head(pfit$se.fit) pfit$df pfit$sigma # Fit two species simultaneously fit2 <- vgam(cbind(agaaus, kniexc) ~ s(altitude, df = c(2, 3)), binomialff(mv = TRUE), data = hunua) coef(fit2, matrix = TRUE) # Not really interpretable \dontrun{ plot(fit2, se = TRUE, overlay = TRUE, lcol = 1:2, scol = 1:2) ooo <- with(hunua, order(altitude)) with(hunua, matplot(altitude[ooo], fitted(fit2)[ooo,], ylim = c(0, 0.8), xlab = "Altitude (m)", ylab = "Probability of presence", las = 1, main = "Two plant species' response curves", type = "l", lwd = 2)) with(hunua, rug(altitude)) } } \keyword{models} \keyword{regression} \keyword{smooth} VGAM/man/vgam-class.Rd0000644000176000001440000002035612136651105014141 0ustar ripleyusers\name{vgam-class} \docType{class} \alias{vgam-class} \title{Class ``vgam'' } \description{ Vector generalized additive models. } \section{Objects from the Class}{ Objects can be created by calls of the form \code{vgam(...)}. % ~~ describe objects here ~~ } \section{Slots}{ \describe{ \item{\code{nl.chisq}:}{Object of class \code{"numeric"}. Nonlinear chi-squared values. } \item{\code{nl.df}:}{Object of class \code{"numeric"}. Nonlinear chi-squared degrees of freedom values. } \item{\code{spar}:}{Object of class \code{"numeric"} containing the (scaled) smoothing parameters. } \item{\code{s.xargument}:}{Object of class \code{"character"} holding the variable name of any \code{s()} terms. } \item{\code{var}:}{Object of class \code{"matrix"} holding approximate pointwise standard error information. } \item{\code{Bspline}:}{Object of class \code{"list"} holding the scaled (internal and boundary) knots, and the fitted B-spline coefficients. These are used for prediction. } \item{\code{extra}:}{Object of class \code{"list"}; the \code{extra} argument on entry to \code{vglm}. This contains any extra information that might be needed by the family function. } \item{\code{family}:}{Object of class \code{"vglmff"}. The family function. } \item{\code{iter}:}{Object of class \code{"numeric"}. The number of IRLS iterations used. } \item{\code{predictors}:}{Object of class \code{"matrix"} with \eqn{M} columns which holds the \eqn{M} linear predictors. } \item{\code{assign}:}{Object of class \code{"list"}, from class \code{ "vlm"}. This named list gives information matching the columns and the (LM) model matrix terms. } \item{\code{call}:}{Object of class \code{"call"}, from class \code{ "vlm"}. The matched call. } \item{\code{coefficients}:}{Object of class \code{"numeric"}, from class \code{ "vlm"}. A named vector of coefficients. } \item{\code{constraints}:}{Object of class \code{"list"}, from class \code{ "vlm"}. A named list of constraint matrices used in the fitting. } \item{\code{contrasts}:}{Object of class \code{"list"}, from class \code{ "vlm"}. The contrasts used (if any). } \item{\code{control}:}{Object of class \code{"list"}, from class \code{ "vlm"}. A list of parameters for controlling the fitting process. See \code{\link{vglm.control}} for details. } \item{\code{criterion}:}{Object of class \code{"list"}, from class \code{ "vlm"}. 
List of convergence criteria evaluated at the final IRLS iteration. } \item{\code{df.residual}:}{Object of class \code{"numeric"}, from class \code{ "vlm"}. The residual degrees of freedom. } \item{\code{df.total}:}{Object of class \code{"numeric"}, from class \code{ "vlm"}. The total degrees of freedom. } \item{\code{dispersion}:}{Object of class \code{"numeric"}, from class \code{ "vlm"}. The scaling parameter. } \item{\code{effects}:}{Object of class \code{"numeric"}, from class \code{ "vlm"}. The effects. } \item{\code{fitted.values}:}{ Object of class \code{"matrix"}, from class \code{ "vlm"}. The fitted values. This is usually the mean but may be quantiles, or the location parameter, e.g., in the Cauchy model. } \item{\code{misc}:}{Object of class \code{"list"}, from class \code{ "vlm"}. A named list to hold miscellaneous parameters. } \item{\code{model}:}{Object of class \code{"data.frame"}, from class \code{ "vlm"}. The model frame. } \item{\code{na.action}:}{Object of class \code{"list"}, from class \code{ "vlm"}. A list holding information about missing values. } \item{\code{offset}:}{Object of class \code{"matrix"}, from class \code{ "vlm"}. If non-zero, an \eqn{M}-column matrix of offsets. } \item{\code{post}:}{Object of class \code{"list"}, from class \code{ "vlm"} where post-analysis results may be put. } \item{\code{preplot}:}{Object of class \code{"list"}, from class \code{ "vlm"} used by \code{\link{plotvgam}}; the plotting parameters may be put here. } \item{\code{prior.weights}:}{Object of class \code{"matrix"}, from class \code{ "vlm"} holding the initially supplied weights. } \item{\code{qr}:}{Object of class \code{"list"}, from class \code{ "vlm"}. QR decomposition at the final iteration. } \item{\code{R}:}{Object of class \code{"matrix"}, from class \code{ "vlm"}. The \bold{R} matrix in the QR decomposition used in the fitting. } \item{\code{rank}:}{Object of class \code{"integer"}, from class \code{ "vlm"}. Numerical rank of the fitted model. } \item{\code{residuals}:}{Object of class \code{"matrix"}, from class \code{ "vlm"}. The \emph{working} residuals at the final IRLS iteration. } \item{\code{rss}:}{Object of class \code{"numeric"}, from class \code{ "vlm"}. Residual sum of squares at the final IRLS iteration with the adjusted dependent vectors and weight matrices. } \item{\code{smart.prediction}:}{Object of class \code{"list"}, from class \code{ "vlm"}. A list of data-dependent parameters (if any) that are used by smart prediction. } \item{\code{terms}:}{Object of class \code{"list"}, from class \code{ "vlm"}. The \code{\link[stats]{terms}} object used. } \item{\code{weights}:}{Object of class \code{"matrix"}, from class \code{ "vlm"}. The weight matrices at the final IRLS iteration. This is in matrix-band form. } \item{\code{x}:}{Object of class \code{"matrix"}, from class \code{ "vlm"}. The model matrix (LM, not VGLM). } \item{\code{xlevels}:}{Object of class \code{"list"}, from class \code{ "vlm"}. The levels of the factors, if any, used in fitting. } \item{\code{y}:}{Object of class \code{"matrix"}, from class \code{ "vlm"}. The response, in matrix form. } \item{\code{Xm2}:}{Object of class \code{"matrix"}, from class \code{ "vlm"}. See \code{\link{vglm-class}}. } \item{\code{Ym2}:}{Object of class \code{"matrix"}, from class \code{ "vlm"}. See \code{\link{vglm-class}}. } \item{\code{callXm2}:}{ Object of class \code{"call"}, from class \code{ "vlm"}. The matched call for argument \code{form2}. } } } \section{Extends}{ Class \code{"vglm"}, directly.
Class \code{"vlm"}, by class \code{"vglm"}. } \section{Methods}{ \describe{ \item{cdf}{\code{signature(object = "vglm")}: cumulative distribution function. Useful for quantile regression and extreme value data models.} \item{deplot}{\code{signature(object = "vglm")}: density plot. Useful for quantile regression models.} \item{deviance}{\code{signature(object = "vglm")}: deviance of the model (where applicable). } \item{plot}{\code{signature(x = "vglm")}: diagnostic plots. } \item{predict}{\code{signature(object = "vglm")}: extract the additive predictors or predict the additive predictors at a new data frame.} \item{print}{\code{signature(x = "vglm")}: short summary of the object. } \item{qtplot}{\code{signature(object = "vglm")}: quantile plot (only applicable to some models). } \item{resid}{\code{signature(object = "vglm")}: residuals. There are various types of these. } \item{residuals}{\code{signature(object = "vglm")}: residuals. Shorthand for \code{resid}. } \item{rlplot}{\code{signature(object = "vglm")}: return level plot. Useful for extreme value data models.} \item{summary}{\code{signature(object = "vglm")}: a more detailed summary of the object. } } } \references{ Yee, T. W. and Wild, C. J. (1996) Vector generalized additive models. \emph{Journal of the Royal Statistical Society, Series B, Methodological}, \bold{58}, 481--493. \url{http://www.stat.auckland.ac.nz/~yee} } \author{ Thomas W. Yee } \note{ VGAMs have all the slots that \code{\link{vglm}} objects have (\code{\link{vglm-class}}), plus the first few slots described in the section above. } %~Make other sections like WARNING with \section{WARNING }{....} ~ \seealso{ \code{\link{vgam.control}}, \code{\link{vglm}}, \code{\link[VGAM]{s}}, \code{\link{vglm-class}}, \code{\link{vglmff-class}}. } \examples{ # Fit a nonparametric proportional odds model pneumo <- transform(pneumo, let = log(exposure.time)) vgam(cbind(normal, mild, severe) ~ s(let), cumulative(parallel = TRUE), pneumo) } \keyword{classes} \keyword{models} \keyword{regression} \keyword{smooth} VGAM/man/venice.Rd0000644000176000001440000000727012136651105013355 0ustar ripleyusers\name{venice} \alias{venice} \alias{venice90} \docType{data} \title{ Venice Maximum Sea Levels Data} \description{ Some sea levels data sets recorded at Venice, Italy. } \usage{ data(venice) data(venice90) } \format{ \code{venice} is a data frame with 51 observations on the following 11 variables. It concerns the maximum heights of sea levels between 1931 and 1981. \describe{ \item{year}{a numeric vector. } \item{r1,r2,r3,r4,r5,r6,r7,r8,r9,r10}{numeric vectors; \code{r1} is the highest recorded value, \code{r2} is the second highest recorded value, etc. } } \code{venice90} is a data frame with 455 observations on the following 7 variables. \describe{ \item{year, month, day, hour }{numeric vectors; actual time of the recording. } \item{sealevel}{numeric; sea level. } \item{ohour}{numeric; number of hours since the midnight of 31 Dec 1939 and 1 Jan 1940. } \item{Year}{numeric vector; approximate year as a real number. The formula is \code{start.year + ohour / (365.26 * 24)} where \code{start.year} is 1940. One can treat \code{Year} as continuous whereas \code{year} can be treated as both continuous and discrete. } } } \details{ Sea levels are in cm. For \code{venice90}, the value 0 corresponds to a fixed reference point (e.g., the mean sea level in 1897 at an old palace of Venice). 
Since the relative (perceived) mean sea level has been increasing over time (an overall rise of more than 0.4 m by 2010), the value 0 is now a very low and unusual measurement. For \code{venice}, in 1935 only the top six values were recorded. For \code{venice90}, this is a subset of a data set provided by Paolo Pirazzoli consisting of hourly sea levels from 1940 to 2009. Values greater than 90 cm were extracted, and then declustered (each cluster provides no more than one value, and each value is at least 24 hours apart). Thus the values are more likely to be independent. Of the original \code{(2009-1940+1)*365.26*24} values, about 7 percent comprise \code{venice90}. Yet to do: check for consistency between the data sets. Some external data sets elsewhere have some extremes recorded at times not exactly on the hour. } \source{ Pirazzoli, P. (1982) Maree estreme a Venezia (periodo 1872--1981). \emph{Acqua Aria}, \bold{10}, 1023--1039. Thanks to Paolo Pirazzoli and Alberto Tomasin for the \code{venice90} data. } \references{ Smith, R. L. (1986) Extreme value theory based on the \emph{r} largest annual events. \emph{Journal of Hydrology}, \bold{86}, 27--43. Battistin, D. and Canestrelli, P. (2006). \emph{La serie storica delle maree a Venezia, 1872--2004} (in Italian), Comune di Venezia. Istituzione Centro Previsione e Segnalazioni Maree. } \seealso{ \code{\link[VGAM]{guplot}}, \code{\link[VGAM]{gev}}, \code{\link[VGAM]{gpd}}. } \examples{ \dontrun{ matplot(venice[["year"]], venice[, -1], xlab = "Year", ylab = "Sea level (cm)", type = "l") ymat <- as.matrix(venice[, paste("r", 1:10, sep = "")]) fit1 <- vgam(ymat ~ s(year, df = 3), gumbel(R = 365, mpv = TRUE), data = venice, trace = TRUE, na.action = na.pass) head(fitted(fit1)) par(mfrow = c(2, 1), xpd = TRUE) plot(fit1, se = TRUE, lcol = "blue", llwd = 2, slty = "dashed") par(mfrow = c(1,1), bty = "l", xpd = TRUE, las = 1) qtplot(fit1, mpv = TRUE, lcol = c(1, 2, 5), tcol = c(1, 2, 5), llwd = 2, pcol = "blue", tadj = 0.1) plot(sealevel ~ Year, data = venice90, type = "h", col = "blue") summary(venice90) dim(venice90) round(100 * nrow(venice90) / ((2009 - 1940 + 1) * 365.26 * 24), dig = 3) } } \keyword{datasets} VGAM/man/uqo.control.Rd0000644000176000001440000002453612136651105014373 0ustar ripleyusers\name{uqo.control} \alias{uqo.control} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Control Function for UQO models } \description{ Algorithmic constants and parameters for fitting an unconstrained quadratic ordination (UQO) model, via a \emph{quadratic unconstrained vector generalized linear model} (QU-VGLM), are set using this function. It is the control function of \code{\link{uqo}}. } \usage{ uqo.control(Rank=1, Bestof = if (length(lvstart) && !jitter.sitescores) 1 else 10, CA1 = FALSE, Crow1positive = TRUE, epsilon = 1.0e-07, EqualTolerances = ITolerances, Etamat.colmax = 10, GradientFunction=TRUE, Hstep = 0.001, isdlv = rep(c(2, 1, rep(0.5, len=Rank)), len=Rank), ITolerances = FALSE, lvstart = NULL, jitter.sitescores = FALSE, maxitl = 40, Maxit.optim = 250, MUXfactor = rep(3, length=Rank), optim.maxit = 20, nRmax = 250, SD.sitescores = 1.0, SmallNo = 5.0e-13, trace = TRUE, Use.Init.Poisson.QO=TRUE, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{Rank}{ The numerical rank \eqn{R} of the model, i.e., the number of latent variables or ordination axes. Currently only \eqn{R=1} is recommended. } \item{Bestof}{ Integer.
The best of \code{Bestof} models fitted is returned. This argument helps guard against local solutions by (hopefully) finding the global solution from many fits. The argument defaults to 1 if initial values for the site scores are supplied using \code{lvstart}. } \item{CA1}{ Logical. If \code{TRUE} the site scores from a correspondence analysis (CA) are computed and used on the first axis as initial values. \code{CA1} and \code{Use.Init.Poisson.QO} cannot both be \code{TRUE}. } \item{Crow1positive}{ Logical vector of length \code{Rank} (recycled if necessary): are the elements of the first row of the latent variable matrix \eqn{\nu}{nu} positive? For example, if \code{Rank} is 2, then specifying \code{Crow1positive=c(FALSE, TRUE)} will force the first site score's first element to be negative, and the first site score's second element to be positive. Note that there is no \eqn{C} matrix with UQO, but the argument's name comes from \code{\link{qrrvglm.control}} and is left unchanged for convenience. } \item{epsilon}{ Positive numeric. Used to test for convergence for GLMs fitted in FORTRAN. Larger values mean a loosening of the convergence criterion. } \item{EqualTolerances}{ Logical indicating whether each (quadratic) predictor will have equal tolerances. Setting \code{EqualTolerances=TRUE} can help avoid numerical problems, especially with binary data. Note that the estimated (common) tolerance matrix may or may not be positive-definite. If it is, then it can be scaled to the \eqn{R} x \eqn{R} identity matrix. Setting \code{ITolerances=TRUE} will fit a common \eqn{R} x \eqn{R} identity matrix as the tolerance matrix to the data, but this is model-driven rather than being data-driven because it \emph{forces} bell-shaped curves/surfaces onto the data. If the estimated (common) tolerance matrix happens to be positive-definite, then this model is essentially equivalent to the model with \code{ITolerances=TRUE}. See \bold{Details} in \code{\link{cqo}} and \code{\link{qrrvglm.control}} for more details. } \item{Etamat.colmax}{ Positive integer, no smaller than \code{Rank}. Controls the amount of memory used by \code{.Init.Poisson.QO()}. It is the maximum number of columns allowed for the pseudo-response and its weights. In general, the larger the value, the better the initial value. Used only if \code{Use.Init.Poisson.QO=TRUE}. } \item{GradientFunction}{ Logical. Whether \code{\link[stats]{optim}}'s argument \code{gr} is used or not, i.e., to compute gradient values. The default value is usually faster on most problems. } \item{Hstep}{ Positive value. Used as the step size in the finite difference approximation to the derivatives by \code{\link[stats]{optim}}. } \item{isdlv}{ Initial standard deviations for the latent variables (site scores). Numeric, positive and of length \eqn{R} (recycled if necessary). This argument is used only if \code{ITolerances=TRUE}. Used by \code{.Init.Poisson.QO()} to obtain initial values for the constrained coefficients \eqn{C} adjusted to a reasonable value. It adjusts the spread of the site scores relative to a common species tolerance of 1 for each ordination axis. A value between 0.5 and 10 is recommended; a value such as 10 means that the range of the environmental space is very large relative to the niche width of the species. The successive values should decrease because the first ordination axis should have the most spread of site scores, followed by the second ordination axis, etc. } \item{ITolerances}{ Logical.
If \code{TRUE} then the (common) tolerance matrix is the \eqn{R} x \eqn{R} identity matrix by definition. Note that \code{ITolerances=TRUE} implies \code{EqualTolerances=TRUE}, but not vice versa. Internally, the quadratic terms will be treated as offsets (in GLM jargon) and so the models can potentially be fitted very efficiently. See \bold{Details} in \code{\link{cqo}} and \code{\link{qrrvglm.control}} for more details. The success of \code{ITolerances=TRUE} often depends on suitable values for \code{isdlv} and/or \code{MUXfactor}. } \item{lvstart}{ Optional matrix of initial values of the site scores. If given, the matrix must be \eqn{n} by \eqn{R}, where \eqn{n} is the number of sites and \eqn{R} is the rank. This argument overrides the arguments \code{Use.Init.Poisson.QO} and \code{CA1}. Good possibilities for \code{lvstart} are the site scores from a constrained ordination, e.g., from \code{\link{cqo}}. } \item{jitter.sitescores}{ Logical. If \code{TRUE} the initial values for the site scores are jittered to add a random element to the starting values. } \item{maxitl}{ Positive integer. Number of iterations allowed for the IRLS algorithm implemented in the compiled code. } \item{Maxit.optim}{ Positive integer. Number of iterations given to the function \code{\link[stats]{optim}} at each of the \code{optim.maxit} iterations. } \item{MUXfactor}{ Multiplication factor for detecting large offset values. Numeric, positive and of length \eqn{R} (recycled if necessary). This argument is used only if \code{ITolerances=TRUE}. Offsets are \eqn{-0.5} multiplied by the sum of the squares of all \eqn{R} latent variable values. If the latent variable values are too large then this will result in numerical problems. By too large, it is meant that the standard deviations of the latent variable values are greater than \code{MUXfactor[r] * isdlv[r]} for \code{r=1:Rank} (this is why centering and scaling all the numerical predictor variables in \eqn{x_2} is recommended). A value about 3 or 4 is recommended. If failure to converge occurs, try a slightly lower value. } \item{optim.maxit}{ Positive integer. Number of times \code{\link[stats]{optim}} is invoked. % At iteration \code{i}, the \code{i}th value of \code{Maxit.optim} % is fed into \code{\link[stats]{optim}}. } \item{nRmax}{ Positive integer. If the number of parameters making up the latent variable values (\eqn{n} multiplied by \eqn{R}) is greater than this argument then a conjugate-gradients algorithm is used, otherwise a quasi-Newton algorithm is used by \code{\link[stats]{optim}}. The conjugate-gradients method is more suitable when the number of parameters is large because it requires less memory. } \item{SD.sitescores}{ Numeric. Standard deviation of the initial values of the site scores, which are generated from a normal distribution. } % \item{Dzero}{ Integer vector specifying which squared terms % are to be zeroed. These linear predictors will correspond to % a RR-VGLM. % The values must be elements from the set \{1,2,\ldots,\eqn{M}\}. % Used only if \code{Quadratic=TRUE} and \code{FastAlgorithm=FALSE}. % } \item{SmallNo}{ Positive numeric between \code{.Machine$double.eps} and \code{0.0001}. Used to avoid under- or over-flow in the IRLS algorithm. } \item{trace}{ Logical indicating if output should be produced for each iteration. } % \item{Kinit}{ Initial values for the index parameters \code{k} in the % negative binomial distribution (one per species). % In general, a smaller number is preferred over a larger number.
% The vector is recycled to the number of responses (species). % } \item{Use.Init.Poisson.QO}{ Logical. If \code{TRUE} then the function \code{.Init.Poisson.QO()} is used to obtain initial values for the site scores. If \code{FALSE} then random numbers are used instead. \code{CA1} and \code{Use.Init.Poisson.QO} cannot both be \code{TRUE}. } \item{\dots}{ Ignored at present. } } \details{ The algorithm currently used by \code{\link{uqo}} is unsophisticated and fails often. Improvements will hopefully be made soon. See \code{\link{cqo}} and \code{\link{qrrvglm.control}} for more details that are equally pertinent to UQO. % zz site scores are centered. Possibly uncorrelated too? To reduce the number of parameters being estimated, setting \code{ITolerances = TRUE} or \code{EqualTolerances = TRUE} is advised. } \value{ A list with the components corresponding to its arguments, after some basic error checking. } \references{ Yee, T. W. (2006) Constrained additive ordination. \emph{Ecology}, \bold{87}, 203--213. %Yee, T. W. (2012) %On constrained and unconstrained quadratic ordination. %\emph{Manuscript in preparation}. } \author{T. W. Yee} \note{ This is a difficult optimization problem, and the current algorithm needs to be improved. } \seealso{ \code{\link{uqo}}. } \section{Warning }{ This function is currently very sensitive to initial values. Setting \code{Bestof} to some reasonably large integer is recommended. } \examples{ uqo.control() } \keyword{models} \keyword{regression} VGAM/man/uqo.Rd0000644000176000001440000002376012136651105012712 0ustar ripleyusers\name{uqo} \alias{uqo} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Fitting Unconstrained Quadratic Ordination (UQO)} \description{ An \emph{unconstrained quadratic ordination} (UQO) (equivalently, noncanonical Gaussian ordination) model is fitted using the \emph{quadratic unconstrained vector generalized linear model} (QU-VGLM) framework. In this documentation, \eqn{M} is the number of linear predictors or species. } \usage{ uqo(formula, family, data = list(), weights = NULL, subset = NULL, na.action = na.fail, etastart = NULL, mustart = NULL, coefstart = NULL, control = uqo.control(...), offset = NULL, method = "uqo.fit", model = FALSE, x.arg = TRUE, y.arg = TRUE, contrasts = NULL, constraints = NULL, extra = NULL, qr.arg = FALSE, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{formula}{ a symbolic description of the model to be fit. Since there is no \eqn{x_2} vector by definition, the RHS of the formula has all terms belonging to the \eqn{x_1} vector. } \item{family}{ a function of class \code{"vglmff"} describing what statistical model is to be fitted. Currently two families are supported: Poisson and binomial. } \item{data}{ an optional data frame containing the variables in the model. By default the variables are taken from \code{environment(formula)}, typically the environment from which \code{uqo} is called. } \item{weights}{ an optional vector or matrix of (prior) weights to be used in the fitting process. This argument should not be used. } \item{subset}{ an optional logical vector specifying a subset of observations to be used in the fitting process. } \item{na.action}{ a function which indicates what should happen when the data contain \code{NA}s. The default is set by the \code{na.action} setting of \code{\link[base]{options}}, and is \code{na.fail} if that is unset. The ``factory-fresh'' default is \code{na.omit}.
} \item{etastart}{ starting values for the linear predictors. It is an \eqn{M}-column matrix. If \eqn{M = 1} then it may be a vector. } \item{mustart}{ starting values for the fitted values. It can be a vector or a matrix. Some family functions do not make use of this argument. } \item{coefstart}{ starting values for the coefficient vector. } \item{control}{ a list of parameters for controlling the fitting process. See \code{\link{uqo.control}} for details. } \item{offset}{ a vector or \eqn{M}-column matrix of offset values. This argument should not be used. } \item{method}{ the method to be used in fitting the model. The default (and presently only) method \code{uqo.fit} uses iteratively reweighted least squares (IRLS). } \item{model}{ a logical value indicating whether the \emph{model frame} should be assigned in the \code{model} slot. } \item{x.arg, y.arg}{ logical values indicating whether the model matrix and response matrix used in the fitting process should be assigned in the \code{x} and \code{y} slots. Note the model matrix is the LM model matrix. } \item{contrasts}{ an optional list. See the \code{contrasts.arg} of \code{\link{model.matrix.default}}. } \item{constraints}{ an optional list of constraint matrices. This argument should not be used. } \item{extra}{ an optional list with any extra information that might be needed by the family function. } \item{qr.arg}{ logical value indicating whether the slot \code{qr}, which returns the QR decomposition of the VLM model matrix, is returned on the object. This argument should not be set to \code{TRUE}. } \item{\dots}{ further arguments passed into \code{\link{uqo.control}}. } } \details{ \emph{Unconstrained quadratic ordination} models fit symmetric bell-shaped response curves/surfaces to response data, but the latent variables are largely free parameters and are not constrained to be linear combinations of the environmental variables. This poses a difficult optimization problem. The current algorithm is very simple and will often fail (even for \code{Rank = 1}) but hopefully this will be improved in the future. The central formula is given by \deqn{\eta = B_1^T x_1 + A \nu + \sum_{m = 1}^M (\nu^T D_m \nu) e_m}{% eta = B_1^T x_1 + A nu + sum_{m = 1}^M (nu^T D_m nu) e_m} where \eqn{x_1}{x_1} is a vector (usually just a 1 for an intercept), \eqn{\nu}{nu} is an \eqn{R}-vector of latent variables, \eqn{e_m} is a vector of 0s but with a 1 in the \eqn{m}th position. \eqn{\eta}{eta} is a vector of linear/additive predictors, e.g., the \eqn{m}th element is \eqn{\eta_m = \log(E[Y_m])}{eta_m = log(E[Y_m])} for the \eqn{m}th species. The matrices \eqn{B_1}, \eqn{A}, and \eqn{D_m} are estimated from the data, i.e., contain the regression coefficients. Also, \eqn{\nu}{nu} is estimated. The tolerance matrices satisfy \eqn{T_s = -\frac{1}{2} D_s^{-1}}{T_s = -0.5 * D_s^(-1)}. Many important UQO details are directly related to arguments in \code{\link{uqo.control}}; see also \code{\link{cqo}} and \code{\link{qrrvglm.control}}. Currently, only Poisson and binomial \pkg{VGAM} family functions are implemented for this function, and dispersion parameters for these are assumed known. Thus the Poisson is catered for by \code{\link{poissonff}}, and the binomial by \code{\link{binomialff}}. Those beginning with \code{"quasi"} have dispersion parameters that are estimated for each species, hence will give an error message here. } \value{ An object of class \code{"uqo"} (this may change to \code{"quvglm"} in the future). } \references{ Yee, T. W.
(2004) A new technique for maximum-likelihood canonical Gaussian ordination. \emph{Ecological Monographs}, \bold{74}, 685--701. %Yee, T. W. (2005) %On constrained and unconstrained quadratic ordination. %\emph{Manuscript in preparation}. Yee, T. W. (2006) Constrained additive ordination. \emph{Ecology}, \bold{87}, 203--213. } \author{Thomas W. Yee} \note{ The site scores are centered. When \eqn{R>1}, they are uncorrelated and should be unique up to a rotation. The argument \code{Bestof} in \code{\link{uqo.control}} controls the number of models fitted (each uses different starting values) to the data. This argument is important because convergence may be to a \emph{local} solution rather than the \emph{global} solution. Using more starting values increases the chances of finding the global solution. Local solutions arise because the optimization problem is highly nonlinear. In the example below, a CQO model is fitted and used for providing initial values for a UQO model. } \section{Warning }{ Local solutions are not uncommon when fitting UQO models. To increase the chances of obtaining the global solution, set \code{ITolerances = TRUE} or \code{EqualTolerances = TRUE} and increase the value of the argument \code{Bestof} in \code{\link{uqo.control}}. For reproducibility of the results, it pays to set a different random number seed before calling \code{uqo} (the function \code{\link[base:Random]{set.seed}} does this). The function \code{uqo} is very sensitive to initial values, and there is a lot of room for improvement here. UQO is computationally expensive. It pays to keep the rank to no more than 2, and 1 is much preferred over 2. The data needs to conform closely to the statistical model. Currently there is a bug with the argument \code{Crow1positive} in \code{\link{uqo.control}}. This argument might be interpreted as controlling the sign of the first site score, but currently this is not done. } \seealso{ \code{\link{uqo.control}}, \code{\link{cqo}}, \code{\link{qrrvglm.control}}, \code{\link{rcqo}}, % \code{\link{cao}}, \code{\link{poissonff}}, \code{\link{binomialff}}, \code{Coef.uqo}, \code{lvplot.uqo}, \code{persp.uqo}, \code{trplot.uqo}, \code{vcov.uqo}, \code{\link[base:Random]{set.seed}}, \code{\link{hspider}}. 
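% A small numerical sketch (illustrative values only) of the tolerance-matrix
% relationship T_s = -0.5 * D_s^(-1) given in the details above; Ds below is
% a made-up negative-definite matrix for Rank = 2:
% Ds <- -diag(c(0.5, 0.25))
% Ts <- -0.5 * solve(Ds)  # The tolerance matrix; here diag(c(1, 2))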
} \examples{ \dontrun{ set.seed(123) # This leads to the global solution hspider[,1:6] <- scale(hspider[,1:6]) # Standardized environmental vars p1 <- cqo(cbind(Alopacce, Alopcune, Alopfabr, Arctlute, Arctperi, Auloalbi, Pardlugu, Pardmont, Pardnigr, Pardpull, Trocterr, Zoraspin) ~ WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux, ITolerances = TRUE, fam = poissonff, data = hspider, Crow1positive = TRUE, Bestof=3, trace = FALSE) if (deviance(p1) > 1589.0) stop("suboptimal fit obtained") set.seed(111) up1 <- uqo(cbind(Alopacce, Alopcune, Alopfabr, Arctlute, Arctperi, Auloalbi, Pardlugu, Pardmont, Pardnigr, Pardpull, Trocterr, Zoraspin) ~ 1, family = poissonff, data = hspider, ITolerances = TRUE, Crow1positive = TRUE, lvstart = -lv(p1)) if (deviance(up1) > 1310.0) stop("suboptimal fit obtained") nos <- ncol(depvar(up1)) # Number of species clr <- (1:(nos+1))[-7] # Omit yellow lvplot(up1, las = 1, y = TRUE, pch = 1:nos, scol = clr, lcol = clr, pcol = clr, llty = 1:nos, llwd = 2) legend(x = 2, y = 135, colnames(up1@y), col = clr, lty = 1:nos, lwd = 2, merge = FALSE, ncol = 1, x.inter = 4.0, bty = "l", cex = 0.9) # Compare the site scores between the two models plot(lv(p1), lv(up1), xlim = c(-3, 4), ylim = c(-3, 4), las = 1) abline(a = 0, b = -1, lty = 2, col = "blue", xpd = FALSE) cor(lv(p1, ITol = TRUE), lv(up1)) # Another comparison between the constrained and unconstrained models # The signs are not right so they are similar when reflected about 0 par(mfrow = c(2, 1)) persp(up1, main = "Red/Blue are the constrained/unconstrained models", label = TRUE, col = "blue", las = 1) persp(p1, add = FALSE, col = "red") pchisq(deviance(p1) - deviance(up1), df = 52-30, lower.tail = FALSE) }} \keyword{models} \keyword{regression} % 6/10/06; when the bug is fixed: %persp(p1, add = TRUE, col = "red") VGAM/man/undocumented-methods.Rd0000644000176000001440000002322312136651105016233 0ustar ripleyusers\name{undocumented-methods} \docType{methods} %\alias{ccoef,ANY-method} %\alias{ccoef-method} % % % % 20121105 \alias{Rank,qrrvglm-method} \alias{Rank,rrvglm-method} \alias{Rank,cao-method} % 20120821 \alias{model.matrix,vsmooth.spline-method} % % 20120511 \alias{is.parallel,matrix-method} \alias{is.parallel,vglm-method} \alias{is.parallel,ANY-method} \alias{is.zero,matrix-method} \alias{is.zero,vglm-method} \alias{is.zero,ANY-method} % % % 20120215 %\alias{print,vglmff-method} \alias{show,vglmff-method} % % % % 20120112 \alias{AIC,ANY-method} \alias{coef,ANY-method} \alias{logLik,ANY-method} \alias{plot,ANY-method} \alias{vcov,ANY-method} \alias{plot,cao,ANY-method} \alias{plot,qrrvglm,ANY-method} \alias{plot,rcim,ANY-method} \alias{plot,rcim0,ANY-method} \alias{plot,uqo,ANY-method} \alias{plot,vgam,ANY-method} \alias{plot,vglm,ANY-method} \alias{plot,vlm,ANY-method} \alias{plot,vsmooth.spline,ANY-method} % % % % % \alias{AIC,vlm-method} \alias{AIC,vglm-method} \alias{AIC,vgam-method} \alias{AIC,rrvglm-method} \alias{AIC,qrrvglm-method} \alias{attrassign,lm-method} \alias{calibrate,qrrvglm-method} \alias{calibrate,cao-method} \alias{calibrate,uqo-method} \alias{cdf,vglm-method} \alias{cdf,vgam-method} \alias{coefficients,cao-method} \alias{coefficients,vlm-method} \alias{coefficients,vglm-method} \alias{coefficients,qrrvglm-method} \alias{coefficients,uqo-method} \alias{coefficients,vsmooth.spline-method} \alias{coefficients,vsmooth.spline.fit-method} \alias{coefficients,summary.vglm-method} \alias{Coefficients,vlm-method} \alias{coef,cao-method} \alias{coef,vlm-method} \alias{coef,vglm-method} 
\alias{coef,qrrvglm-method} \alias{coef,uqo-method} \alias{coef,vsmooth.spline-method} \alias{coef,vsmooth.spline.fit-method} \alias{coef,summary.vglm-method} \alias{Coef,cao-method} \alias{Coef,vlm-method} \alias{Coef,qrrvglm-method} \alias{Coef,rrvglm-method} \alias{Coef,uqo-method} \alias{constraints,vlm-method} \alias{deplot,vglm-method} \alias{deplot,vgam-method} % \alias{depvar,ANY-method} \alias{depvar,cao-method} \alias{depvar,qrrvglm-method} \alias{depvar,rcim-method} \alias{depvar,rrvglm-method} \alias{depvar,vlm-method} \alias{depvar,vsmooth.spline-method} % \alias{deviance,qrrvglm-method} \alias{deviance,vlm-method} \alias{deviance,vglm-method} \alias{deviance,uqo-method} \alias{df.residual,vlm-method} \alias{effects,vlm-method} \alias{fitted.values,qrrvglm-method} \alias{fitted.values,vlm-method} \alias{fitted.values,vglm-method} \alias{fitted.values,uqo-method} \alias{fitted.values,vsmooth.spline-method} \alias{fitted,qrrvglm-method} \alias{fitted,vlm-method} \alias{fitted,vglm-method} \alias{fitted,uqo-method} \alias{fitted,vsmooth.spline-method} % % %\alias{case.names,ANY-method} \alias{case.names,vlm-method} \alias{case.names,vgam-method} \alias{case.names,vglm-method} \alias{case.names,rrvglm-method} \alias{case.names,qrrvglm-method} \alias{case.names,grc-method} % %\alias{variable.names,ANY-method} \alias{variable.names,vlm-method} \alias{variable.names,vgam-method} \alias{variable.names,vglm-method} \alias{variable.names,rrvglm-method} \alias{variable.names,qrrvglm-method} \alias{variable.names,grc-method} % % %\alias{formula,ANY-method} \alias{formula,vlm-method} \alias{formula,vgam-method} \alias{formula,vglm-method} \alias{formula,rrvglm-method} \alias{formula,qrrvglm-method} \alias{formula,grc-method} %\alias{formula,uqo-method} % \alias{formula,vsmooth.spline-method} % % % \alias{hatvalues,ANY-method} \alias{hatvalues,vlm-method} \alias{hatvalues,vglm-method} \alias{hatvalues,cao-method} \alias{hatvalues,qrrvglm-method} \alias{hatvalues,rcim-method} \alias{hatvalues,rrvglm-method} % % \alias{hatplot,ANY-method} \alias{hatplot,matrix-method} \alias{hatplot,vlm-method} \alias{hatplot,vglm-method} \alias{hatplot,cao-method} \alias{hatplot,qrrvglm-method} \alias{hatplot,rcim-method} \alias{hatplot,rrvglm-method} % % \alias{dfbeta,ANY-method} \alias{dfbeta,matrix-method} \alias{dfbeta,vlm-method} \alias{dfbeta,vglm-method} \alias{dfbeta,cao-method} \alias{dfbeta,qrrvglm-method} \alias{dfbeta,rcim-method} \alias{dfbeta,rrvglm-method} % % % \alias{guplot,numeric-method} \alias{guplot,vlm-method} %\alias{model.frame,ANY-method} \alias{model.frame,vlm-method} %\alias{plot,rcim0,ANY-method} %\alias{plot,rcim,ANY-method} %\alias{plot,cao,ANY-method} %\alias{plot,vlm,ANY-method} %\alias{plot,vglm,ANY-method} %\alias{plot,vgam,ANY-method} %\alias{plot,qrrvglm,ANY-method} %\alias{plot,uqo,ANY-method} %\alias{plot,vsmooth.spline,ANY-method} \alias{predictors,vglm-method} \alias{rlplot,vglm-method} \alias{terms,vlm-method} \alias{is.bell,uqo-method} \alias{is.bell,qrrvglm-method} \alias{is.bell,rrvglm-method} \alias{is.bell,vlm-method} \alias{is.bell,cao-method} \alias{is.bell,Coef.qrrvglm-method} \alias{logLik,vlm-method} \alias{logLik,summary.vglm-method} \alias{logLik,vglm-method} \alias{logLik,vgam-method} \alias{lvplot,cao-method} \alias{lvplot,qrrvglm-method} \alias{lvplot,rrvglm-method} \alias{lvplot,uqo-method} \alias{lv,cao-method} \alias{lv,Coef.cao-method} \alias{lv,rrvglm-method} \alias{lv,qrrvglm-method} \alias{lv,Coef.rrvglm-method} \alias{lv,Coef.qrrvglm-method} 
\alias{lv,uqo-method} \alias{latvar,Coef.qrrvglm-method} \alias{latvar,Coef.rrvglm-method} \alias{latvar,qrrvglm-method} \alias{latvar,rrvglm-method} \alias{Max,qrrvglm-method} \alias{Max,Coef.qrrvglm-method} \alias{Max,uqo-method} \alias{meplot,numeric-method} \alias{meplot,vlm-method} %\alias{model.matrix,ANY-method} \alias{model.matrix,qrrvglm-method} \alias{model.matrix,vlm-method} \alias{nobs,ANY-method} \alias{nobs,vlm-method} \alias{npred,ANY-method} \alias{npred,vlm-method} \alias{npred,cao-method} \alias{npred,qrrvglm-method} \alias{npred,rcim-method} \alias{npred,rrvglm-method} \alias{nvar,ANY-method} \alias{nvar,vlm-method} \alias{nvar,vgam-method} \alias{nvar,rrvglm-method} \alias{nvar,qrrvglm-method} \alias{nvar,cao-method} \alias{nvar,vlm-method} \alias{nvar,rcim-method} \alias{Opt,qrrvglm-method} \alias{Opt,Coef.qrrvglm-method} \alias{Opt,uqo-method} \alias{persp,cao-method} \alias{persp,qrrvglm-method} \alias{persp,uqo-method} \alias{predict,cao-method} \alias{predict,qrrvglm-method} \alias{predict,vgam-method} \alias{predict,vglm-method} \alias{predict,rrvglm-method} \alias{predict,vlm-method} \alias{predict,uqo-method} \alias{predict,vsmooth.spline-method} \alias{predict,vsmooth.spline.fit-method} % % % Added 20090505: %\alias{print,ANY-method} % % % Added 20111224: \alias{lrtest,ANY-method} \alias{lrtest,vglm-method} %\alias{waldtest,ANY-method} \alias{print,VGAManova-method} \alias{show,VGAManova-method} % % % \alias{print,Coef.cao-method} \alias{print,summary.cao-method} \alias{print,qrrvglm-method} \alias{print,Coef.qrrvglm-method} \alias{print,rrvglm-method} % 20090505 \alias{print,summary.qrrvglm-method} \alias{print,Coef.rrvglm-method} \alias{print,vlm-method} \alias{print,vglm-method} \alias{print,vgam-method} \alias{print,summary.rrvglm-method} \alias{print,summary.vgam-method} \alias{print,summary.vglm-method} \alias{print,summary.vlm-method} \alias{print,uqo-method} \alias{print,Coef.uqo-method} \alias{print,summary.uqo-method} \alias{print,vsmooth.spline-method} \alias{print,cao-method} \alias{qtplot,vglm-method} \alias{qtplot,vgam-method} \alias{residuals,qrrvglm-method} \alias{residuals,vlm-method} \alias{residuals,vglm-method} \alias{residuals,vgam-method} \alias{residuals,uqo-method} \alias{residuals,vsmooth.spline-method} \alias{resid,qrrvglm-method} \alias{resid,vlm-method} \alias{resid,vglm-method} \alias{resid,vgam-method} \alias{resid,uqo-method} \alias{resid,vsmooth.spline-method} \alias{show,Coef.cao-method} \alias{show,summary.cao-method} \alias{show,qrrvglm-method} \alias{show,Coef.qrrvglm-method} \alias{show,rrvglm-method} % 20090505 \alias{show,summary.qrrvglm-method} \alias{show,Coef.rrvglm-method} \alias{show,vlm-method} \alias{show,vglm-method} \alias{show,vgam-method} \alias{show,summary.rrvglm-method} \alias{show,summary.vgam-method} \alias{show,summary.vglm-method} \alias{show,summary.vlm-method} \alias{show,uqo-method} \alias{show,Coef.uqo-method} \alias{show,summary.uqo-method} \alias{show,vsmooth.spline-method} \alias{show,cao-method} \alias{summary,grc-method} \alias{summary,cao-method} \alias{summary,qrrvglm-method} \alias{summary,rcim-method} \alias{summary,rcim0-method} \alias{summary,rrvglm-method} \alias{summary,vgam-method} \alias{summary,vglm-method} \alias{summary,vlm-method} \alias{summary,uqo-method} \alias{Tol,cao-method} \alias{Tol,qrrvglm-method} \alias{Tol,Coef.qrrvglm-method} \alias{Tol,uqo-method} \alias{Tol,Coef.uqo-method} \alias{trplot,qrrvglm-method} \alias{trplot,uqo-method} \alias{trplot,cao-method} 
\alias{vcov,rrvglm-method} \alias{vcov,qrrvglm-method} \alias{vcov,vlm-method} \alias{vcov,vglm-method} \alias{vplot,factor-method} \alias{vplot,list-method} \alias{vplot,matrix-method} \alias{vplot,numeric-method} \alias{weights,vlm-method} \alias{weights,vglm-method} % % % This does not work (need one line for each one): %\alias{trplot,qrrvglm,uqo-method} % % % \title{ Undocumented Methods Functions } \description{ Lots of undocumented methods functions are aliased here. In the \pkg{VGAM} package there are currently many objects/methods/classes which are currently internal and/or undocumented. The help file suppresses the warnings when the package is 'CHECK'ed. } %\usage{ % \S4method{ccoef}{cao,Coef.cao,rrvglm,qrrvglm, % Coef.rrvglm,Coef.qrrvglm}(object, ...) %} \section{Methods}{ There are many methods and these will be documented over time. \describe{ \item{object}{ This argument is often used, and it is the primary object on which the function operates. } } } \keyword{methods} \keyword{classes} %\keyword{ ~~ other possible keyword(s)} \keyword{models} \keyword{regression} \keyword{internal} VGAM/man/ucberk.Rd0000644000176000001440000000365512136651105013362 0ustar ripleyusers\name{ucberk} \alias{ucberk} \docType{data} \title{ University of California Berkeley Graduate Admissions } \description{ University of California Berkeley graduate admissions: counts cross-classified by acceptance/rejection and gender, for the six largest departments. } \usage{data(ucberk)} \format{ A data frame with 6 departmental groups and the following 5 columns. \describe{ \item{m.deny}{Counts of men denied admission. } \item{m.admit}{Counts of men admitted. } \item{w.deny}{Counts of women denied admission. } \item{w.admit}{Counts of women admitted. } \item{dept}{Department (the six largest), called \code{A}, \code{B}, \dots, \code{F}. } } } \details{ From Bickel et al. (1975), the data consist of applications for admission to graduate study at the University of California, Berkeley, for the fall 1973 quarter. In the admissions cycle for that quarter, the Graduate Division at Berkeley received approximately 15,000 applications, some of which were later withdrawn or transferred to a different proposed entry quarter by the applicants. Of the applications finally remaining for the fall 1973 cycle, 12,763 were sufficiently complete to permit a decision. There were about 101 graduate department and interdepartmental graduate majors. There were 8442 male applicants and 4321 female applicants. About 44 percent of the males and about 35 percent of the females were admitted. The data are well-known for illustrating Simpson's paradox. } %\source{ % % %} \references{ Bickel, P. J., Hammel, E. A. and O'Connell, J. W. (1975) Sex bias in graduate admissions: data from Berkeley. \emph{Science}, \bold{187}(4175): 398--404. Freedman, D., Pisani, R. and Purves, R. (1998) Chapter 2 of \emph{Statistics}, 3rd ed., W. W. Norton & Company. } \examples{ summary(ucberk) } \keyword{datasets} % 7 February 1975 % Bickel, et al., 187 (4175): 398-404 VGAM/man/truncweibull.Rd0000644000176000001440000001066312136651105014623 0ustar ripleyusers\name{truncweibull} \alias{truncweibull} %\alias{truncweibullff} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Truncated Weibull Distribution Family Function } \description{ Maximum likelihood estimation of the 2-parameter Weibull distribution with lower truncation. No observations should be censored.
} \usage{ truncweibull(lower.limit = 1e-5, lAlpha = "loge", lBetaa = "loge", iAlpha = NULL, iBetaa = NULL, nrfs = 1, probs.y = c(0.2, 0.5, 0.8), imethod = 1, zero = -2) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lower.limit}{ Positive lower truncation limits. Recycled to the same dimension as the response, going across rows first. The default, being close to 0, should mean effectively the same results as \code{\link{weibull}} if there are no response values that are smaller. } \item{lAlpha, lBetaa}{ Parameter link functions applied to the (positive) parameters \code{Alpha} (called \eqn{\alpha} below) and (positive) \code{Betaa} (called \eqn{\beta} below). See \code{\link{Links}} for more choices. } \item{iAlpha, iBetaa}{ See \code{\link{CommonVGAMffArguments}}. } \item{imethod, nrfs, zero, probs.y}{ Details at \code{\link{weibull}}. } } \details{ MLEs of the two parameters of the Weibull distribution are computed, subject to lower truncation. That is, all response values are greater than \code{lower.limit}, element-wise. For a particular observation this is any known positive value. This function is currently based directly on Wingo (1989) and his parameterization is used (it differs from that of \code{\link{weibull}}). In particular, \eqn{\beta = a} and \eqn{\alpha = (1/b)^a} where \eqn{a} and \eqn{b} are as in \code{\link{weibull}} and \code{\link[stats:Weibull]{dweibull}}. % More details about the Weibull density are \code{\link{weibull}}. Upon fitting, the \code{extra} slot has a component called \code{lower.limit} which is of the same dimension as the response. The fitted values are the mean, which is computed using \code{\link{pgamma.deriv}} and \code{\link{pgamma.deriv.unscaled}}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Wingo, D. R. (1989) The left-truncated Weibull distribution: theory and computation. \emph{Statistical Papers}, \bold{30}(1), 39--48. } \author{ T. W. Yee } \note{ More improvements need to be made, e.g., initial values are currently based on no truncation. This \pkg{VGAM} family function handles multiple responses. } \section{Warning}{ This function may be converted to the same parameterization as \code{\link{weibull}} at any time. Yet to do: one element of the EIM may be wrong (due to two interpretations of a formula; but it seems to work). Convergence is slower than usual and this may imply something is wrong; use argument \code{maxit}. In fact, this is probably because \code{\link{pgamma.deriv.unscaled}} is inaccurate at \code{q = 1} and \code{q = 2}. Also, convergence should be monitored, especially if the truncation means that a large proportion of the data is lost compared to an ordinary Weibull distribution. } \seealso{ \code{\link{weibull}}, \code{\link[stats:Weibull]{dweibull}}, \code{\link{pgamma.deriv}}, \code{\link{pgamma.deriv.unscaled}}.
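% A small sketch of the link between Wingo's parameterization and that of
% dweibull(); the values of aaa and bbb (a and b in the details) are made up:
% aaa <- 2; bbb <- 3            # Shape and scale, as in dweibull()
% Betaa <- aaa                  # Wingo's beta
% Alpha <- (1 / bbb)^aaa        # Wingo's alpha
% 1 / Alpha^(1 / Betaa)         # Recovers the scale bbb, cf. the example below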
} \examples{ nn <- 5000; prop.lost <- 0.40 # Proportion lost to truncation wdata <- data.frame(x2 = runif(nn)) # Complete Weibull data wdata <- transform(wdata, Betaa = exp(1)) # > 2 is okay (satisfies regularity conds) wdata <- transform(wdata, Alpha = exp(0.5 - 1 * x2)) wdata <- transform(wdata, Shape = Betaa, # aaa = Betaa, # bbb = 1 / Alpha^(1 / Betaa), Scale = 1 / Alpha^(1 / Betaa)) wdata <- transform(wdata, y2 = rweibull(nn, shape = Shape, scale = Scale)) summary(wdata) lower.limit2 <- with(wdata, quantile(y2, prob = prop.lost)) # Proportion lost wdata <- subset(wdata, y2 > lower.limit2) # Smaller due to truncation fit1 <- vglm(y2 ~ x2, maxit = 100, trace = TRUE, truncweibull(lower.limit = lower.limit2), data = wdata) coef(fit1, matrix = TRUE) summary(fit1) vcov(fit1) head(fit1@extra$lower.limit) } \keyword{models} \keyword{regression} VGAM/man/trplot.qrrvglm.Rd0000644000176000001440000001427412136651105015123 0ustar ripleyusers\name{trplot.qrrvglm} \alias{trplot.qrrvglm} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Trajectory plot for QRR-VGLMs } \description{ Produces a trajectory plot for \emph{quadratic reduced-rank vector generalized linear models} (QRR-VGLMs). It is only applicable for rank-1 models with argument \code{noRRR = ~ 1}. } \usage{ trplot.qrrvglm(object, whichSpecies = NULL, add=FALSE, plot.it = TRUE, label.sites = FALSE, sitenames = rownames(object@y), axes.equal = TRUE, cex = par()$cex, col = 1:(nos * (nos - 1)/2), log = "", lty = rep(par()$lty, length.out = nos * (nos - 1)/2), lwd = rep(par()$lwd, length.out = nos * (nos - 1)/2), tcol = rep(par()$col, length.out = nos * (nos - 1)/2), xlab = NULL, ylab = NULL, main = "", type = "b", check.ok = TRUE, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{object}{ Object of class \code{"qrrvglm"}, i.e., a CQO object. } \item{whichSpecies}{ Integer or character vector specifying the species to be plotted. If integer, these are the columns of the response matrix. If character, these must match exactly with the species' names. The default is to use all species. } \item{add}{ Logical. Add to an existing plot? If \code{FALSE} (default), a new plot is made. } \item{plot.it}{ Logical. Plot it? } \item{label.sites}{ Logical. If \code{TRUE}, the points on the curves/trajectories are labelled with the \code{sitenames}. } \item{sitenames}{ Character vector. The names of the sites. } \item{axes.equal}{ Logical. If \code{TRUE}, the x- and y-axes will be on the same scale. } \item{cex}{ Character expansion of the labelling of the site names. Used only if \code{label.sites} is \code{TRUE}. See the \code{cex} argument in \code{\link[graphics]{par}}. } \item{col}{Color of the lines. See the \code{col} argument in \code{\link[graphics]{par}}. Here, \code{nos} is the number of species. } \item{log}{ Character, specifying which (if any) of the x- and y-axes are to be on a logarithmic scale. See the \code{log} argument in \code{\link[graphics]{par}}. } \item{lty}{ Line type. See the \code{lty} argument of \code{\link[graphics]{par}}. } \item{lwd}{ Line width. See the \code{lwd} argument of \code{\link[graphics]{par}}. } \item{tcol}{Color of the text for the site names. See the \code{col} argument in \code{\link[graphics]{par}}. Used only if \code{label.sites} is \code{TRUE}. } \item{xlab}{Character caption for the x-axis. By default, a suitable caption is found. See the \code{xlab} argument in \code{\link[graphics]{plot}} or \code{\link[graphics]{title}}. 
} \item{ylab}{Character caption for the y-axis. By default, a suitable caption is found. See the \code{ylab} argument in \code{\link[graphics]{plot}} or \code{\link[graphics]{title}}. } \item{main}{ Character, giving the title of the plot. See the \code{main} argument in \code{\link[graphics]{plot}} or \code{\link[graphics]{title}}. } \item{type}{ Character, giving the type of plot. A common option is to use \code{type = "l"} for lines only. See the \code{type} argument of \code{\link[graphics]{plot}}. } \item{check.ok}{ Logical. Whether a check is performed to see that \code{noRRR = ~ 1} was used. It doesn't make sense to have a trajectory plot unless this is so. } \item{\dots}{ Arguments passed into the \code{plot} function when setting up the entire plot. Useful arguments here include \code{xlim} and \code{ylim}. } } \details{ A trajectory plot plots the fitted values of a `second' species against a `first' species. The argument \code{whichSpecies} must therefore contain at least two species. By default, all of the species that were fitted in \code{object} are plotted. With more than a few species the resulting plot will be very congested, and so it is recommended that only a few species be selected for plotting. Here, \eqn{M} is the number of species selected for plotting, so there will be \eqn{M(M-1)/2}{M*(M-1)/2} curves/trajectories in total. A trajectory plot will be produced only if \code{noRRR = ~ 1} because otherwise the trajectory will not be a smooth function of the latent variables. } \value{ A list with the following components. \item{species.names}{ A matrix of characters giving the `first' and `second' species. The number of different combinations of species is given by the number of rows. This is useful for creating a legend. } \item{sitenames}{A character vector of site names, sorted by the latent variable (from low to high). } } \references{ Yee, T. W. (2012) On constrained and unconstrained quadratic ordination. \emph{Manuscript in preparation}. } \author{ Thomas W. Yee } \note{ Plotting the axes on a log scale is often a good idea. The use of \code{xlim} and \code{ylim} to control the axis limits is also a good idea, so as to limit the extent of the curves at low abundances or probabilities. Setting \code{label.sites = TRUE} is a good idea only if the number of sites is small, otherwise there is too much clutter. } \seealso{ \code{\link{cqo}}, \code{\link[graphics]{par}}, \code{\link[graphics]{title}}. } \examples{\dontrun{ set.seed(111) # This leads to the global solution
# hspider[,1:6] <- scale(hspider[,1:6]) # Standardize the environmental variables
p1 <- cqo(cbind(Alopacce, Alopcune, Alopfabr, Arctlute, Arctperi, Auloalbi, Pardlugu, Pardmont, Pardnigr, Pardpull, Trocterr, Zoraspin) ~ WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux, poissonff, data = hspider, trace = FALSE)
trplot(p1, whichSpecies = 1:3, log = "xy", type = "b", lty = 1, main = "Trajectory plot of three hunting spider species", col = c("blue","red","green"), lwd = 2, label = TRUE) -> ii
legend(0.00005, 0.3, lwd = 2, lty = 1, col = c("blue", "red", "green"), with(ii, paste(species.names[,1], species.names[,2], sep = " and ")))
abline(a = 0, b = 1, lty = "dashed", col = "grey") # Useful reference line
} } \keyword{models} \keyword{regression} \keyword{graphs} VGAM/man/trplot.Rd0000644000176000001440000000411312136651105013421 0ustar ripleyusers\name{trplot} \alias{trplot}
%- Also NEED an '\alias' for EACH other topic documented here.
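% For a rank-1 CQO fit, trplot() draws one curve for each pair of selected
% species; a small sketch of the M*(M-1)/2 count given in \details{} above:
% M <- 3       # e.g., whichSpecies = 1:3 as in the example above
% choose(M, 2) # M*(M-1)/2 = 3 curves/trajectories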
\title{ Trajectory Plot } \description{ Generic function for a trajectory plot. } \usage{ trplot(object, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{object}{ An object for which a trajectory plot is meaningful. } \item{\dots}{ Other arguments fed into the specific methods function of the model. They usually are graphical parameters, and sometimes they are fed into the methods function for \code{\link{Coef}}. } } \details{ Trajectory plots can be defined in different ways for different models. Many models have no such notion or definition. For quadratic and additive ordination models they plot the fitted values of two species against each other (more than two is theoretically possible, but not implemented in this software yet). } \value{ The value returned depends specifically on the methods function invoked. } \references{ Yee, T. W. (2012) On constrained and unconstrained quadratic ordination. \emph{Manuscript in preparation}. } \author{ Thomas W. Yee } %\note{ %} \seealso{ \code{\link{trplot.qrrvglm}}, \code{\link{perspqrrvglm}}, \code{\link{lvplot}}. } \examples{ \dontrun{ set.seed(123) hspider[,1:6] <- scale(hspider[,1:6]) # Standardized environmental vars p1cqo <- cqo(cbind(Alopacce, Alopcune, Alopfabr, Arctlute, Arctperi, Auloalbi, Pardlugu, Pardmont, Pardnigr, Pardpull, Trocterr, Zoraspin) ~ WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux, quasipoissonff, data = hspider, Crow1positive = FALSE) nos <- ncol(depvar(p1cqo)) clr <- 1:nos # OR (1:(nos+1))[-7] to omit yellow trplot(p1cqo, whichSpecies = 1:3, log = "xy", col = c("blue", "orange", "green"), lwd = 2, label = TRUE) -> ii legend(0.00005, 0.3, paste(ii$species[, 1], ii$species[, 2], sep = " and "), lwd = 2, lty = 1, col = c("blue", "orange", "green")) abline(a = 0, b = 1, lty = "dashed", col = "grey") } } \keyword{models} \keyword{regression} VGAM/man/triangleUC.Rd0000644000176000001440000000401512136651105014133 0ustar ripleyusers\name{Triangle} \alias{Triangle} \alias{dtriangle} \alias{ptriangle} \alias{qtriangle} \alias{rtriangle} \title{The Triangle Distribution} \description{ Density, distribution function, quantile function and random generation for the Triangle distribution with parameter \code{theta}. } \usage{ dtriangle(x, theta, lower = 0, upper = 1, log = FALSE) ptriangle(q, theta, lower = 0, upper = 1) qtriangle(p, theta, lower = 0, upper = 1) rtriangle(n, theta, lower = 0, upper = 1) } \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. Must be a positive integer of length 1.} \item{theta}{the theta parameter which lies between \code{lower} and \code{upper}. } \item{lower, upper}{lower and upper limits of the distribution. Must be finite. } \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the density is returned. } } \value{ \code{dtriangle} gives the density, \code{ptriangle} gives the distribution function, \code{qtriangle} gives the quantile function, and \code{rtriangle} generates random deviates. } %\references{ % %} \author{ T. W. Yee } \details{ See \code{\link{triangle}}, the \pkg{VGAM} family function for estimating the parameter \eqn{\theta}{theta} by maximum likelihood estimation. } %\note{ % %} \seealso{ \code{\link{triangle}}. 
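% A quick numerical sanity check (a sketch; theta = 0.75 as in the example below):
% integrate(dtriangle, 0, 1, theta = 0.75)$value # Should be close to 1
% all.equal(ptriangle(qtriangle(0.4, theta = 0.75), theta = 0.75), 0.4) # TRUE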
} \examples{ \dontrun{ x <- seq(-0.1, 1.1, by = 0.01); theta <- 0.75 plot(x, dtriangle(x, theta = theta), type = "l", col = "blue", las = 1, main = "Blue is density, orange is cumulative distribution function", sub = "Purple lines are the 10,20,...,90 percentiles", ylim = c(0,2), ylab = "") abline(h = 0, col = "blue", lty = 2) lines(x, ptriangle(x, theta = theta), col = "orange") probs <- seq(0.1, 0.9, by = 0.1) Q <- qtriangle(probs, theta = theta) lines(Q, dtriangle(Q, theta = theta), col = "purple", lty = 3, type = "h") ptriangle(Q, theta = theta) - probs # Should be all zero abline(h = probs, col = "purple", lty = 3) } } \keyword{distribution} VGAM/man/triangle.Rd0000644000176000001440000000410312136651105013701 0ustar ripleyusers\name{triangle} \alias{triangle} %- Also NEED an '\alias' for EACH other topic documented here. \title{Triangle Distribution Family Function } \description{ Estimating the parameter of the triangle distribution by maximum likelihood estimation. } \usage{ triangle(lower = 0, upper = 1, link = elogit(min = lower, max = upper), itheta = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lower, upper}{lower and upper limits of the distribution. Must be finite. Called \eqn{A} and \eqn{B} respectively below. } \item{link}{ Parameter link function applied to the parameter \eqn{\theta}{theta}, which lies in \eqn{(A,B)}. See \code{\link{Links}} for more choices. The default constrains the estimate to lie in the interval. } \item{itheta}{ Optional initial value for the parameter. The default is to compute the value internally. } } \details{ The triangle distribution has a probability density function that consists of two lines joined at \eqn{\theta}{theta}. The lines intersect the \eqn{y = 0} axis at \eqn{A} and \eqn{B}. Here, Fisher scoring is used. On fitting, the \code{extra} slot has components called \code{lower} and \code{upper} which contains the values of the above arguments (recycled to the right length). The fitted values are the mean of the distribution, which is a little messy to write. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } %\references{ %} \author{ T. W. Yee } \note{ The response must contain values in \eqn{(A, B)}. For most data sets (especially small ones) it is very common for half-stepping to occur. } \seealso{ \code{\link{Triangle}}. } \examples{ tdata <- data.frame(y = rtriangle(n <- 3000, theta = 3/4)) fit <- vglm(y ~ 1, triangle(link = "identity"), tdata, trace = TRUE) coef(fit, matrix = TRUE) Coef(fit) head(fit@extra$lower) head(fitted(fit)) with(tdata, mean(y)) } \keyword{models} \keyword{regression} VGAM/man/tparetoUC.Rd0000644000176000001440000000466312136651105014015 0ustar ripleyusers\name{Tpareto} \alias{Tpareto} \alias{dtpareto} \alias{ptpareto} \alias{qtpareto} \alias{rtpareto} \title{The Truncated Pareto Distribution} \description{ Density, distribution function, quantile function and random generation for the upper truncated Pareto(I) distribution with parameters \code{lower}, \code{upper} and \code{shape}. } \usage{ dtpareto(x, lower, upper, shape, log = FALSE) ptpareto(q, lower, upper, shape) qtpareto(p, lower, upper, shape) rtpareto(n, lower, upper, shape) } \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n, log}{Same meaning as \code{\link[stats:Uniform]{runif}}. 
} \item{lower, upper, shape}{ the lower, upper and shape (\eqn{k}) parameters. If necessary, values are recycled. } } \value{ \code{dtpareto} gives the density, \code{ptpareto} gives the distribution function, \code{qtpareto} gives the quantile function, and \code{rtpareto} generates random deviates. } \references{ Aban, I. B., Meerschaert, M. M. and Panorska, A. K. (2006) Parameter estimation for the truncated Pareto distribution, \emph{Journal of the American Statistical Association}, \bold{101}(473), 270--277. } \author{ T. W. Yee } \details{ See \code{\link{tpareto1}}, the \pkg{VGAM} family function for estimating the parameter \eqn{k} by maximum likelihood estimation, for the formula of the probability density function and the range restrictions imposed on the parameters. } %%\note{ %% The truncated Pareto distribution is %%} \seealso{ \code{\link{tpareto1}}. } \examples{ lower <- 3; upper <- 8; kay <- exp(0.5) \dontrun{ xx <- seq(lower - 0.5, upper + 0.5, len = 401) plot(xx, dtpareto(xx, low = lower, upp = upper, shape = kay), main = "Truncated Pareto density split into 10 equal areas", type = "l", ylim = 0:1, xlab = "x") abline(h = 0, col = "blue", lty = 2) qq <- qtpareto(seq(0.1, 0.9, by = 0.1), low = lower, upp = upper, shape = kay) lines(qq, dtpareto(qq, low = lower, upp = upper, shape = kay), col = "purple", lty = 3, type = "h") lines(xx, ptpareto(xx, low = lower, upp = upper, shape = kay), col = "orange") } pp <- seq(0.1, 0.9, by = 0.1) qq <- qtpareto(pp, low = lower, upp = upper, shape = kay) ptpareto(qq, low = lower, upp = upper, shape = kay) qtpareto(ptpareto(qq, low = lower, upp = upper, shape = kay), low = lower, upp = upper, shape = kay) - qq # Should be all 0 } \keyword{distribution} VGAM/man/toxop.Rd0000644000176000001440000000231412136651105013247 0ustar ripleyusers\name{toxop} \alias{toxop} \docType{data} \title{ Toxoplasmosis Data } \description{ Toxoplasmosis data in 34 cities in El Salvador. } \usage{data(toxop)} \format{ A data frame with 34 observations on the following 4 variables. \describe{ \item{\code{rainfall}}{ a numeric vector; the amount of rainfall in each city. } \item{\code{ssize}}{a numeric vector; sample size.} \item{\code{cityNo}}{a numeric vector; the city number.} \item{\code{positive}}{a numeric vector; the number of subjects testing positive for the disease. } } } \details{ See the references for details. } \source{ See the references for details. } \seealso{ \code{\link[VGAM]{dexpbinomial}}. } \references{ Efron, B. (1978) Regression and ANOVA With zero-one data: measures of residual variation. \emph{Journal of the American Statistical Association}, \bold{73}, 113--121. Efron, B. (1986) Double exponential families and their use in generalized linear regression. \emph{Journal of the American Statistical Association}, \bold{81}, 709--721. } \examples{ \dontrun{ with(toxop, plot(rainfall, positive / ssize, col = "blue")) plot(toxop, col = "blue") } } \keyword{datasets} VGAM/man/tobitUC.Rd0000644000176000001440000000507412136651105013455 0ustar ripleyusers\name{Tobit} \alias{Tobit} \alias{dtobit} \alias{ptobit} \alias{qtobit} \alias{rtobit} \title{The Tobit Distribution} \description{ Density, distribution function, quantile function and random generation for the Tobit model. 
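% Because of censoring there are point masses at Lower and Upper (see the
% spikes noted in \details{} below); a sketch, assuming ptobit() returns 0
% below Lower:
% ptobit(c(-1e-9, 0), mean = 0, sd = 1, Lower = 0) # c(0, 0.5): jump of pnorm(0)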
} \usage{ dtobit(x, mean = 0, sd = 1, Lower = 0, Upper = Inf, log = FALSE) ptobit(q, mean = 0, sd = 1, Lower = 0, Upper = Inf, lower.tail = TRUE, log.p = FALSE) qtobit(p, mean = 0, sd = 1, Lower = 0, Upper = Inf) rtobit(n, mean = 0, sd = 1, Lower = 0, Upper = Inf) } \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. If \code{length(n) > 1} then the length is taken to be the number required. } \item{Lower, Upper}{vector of lower and upper thresholds. } \item{mean, sd, lower.tail, log, log.p}{ see \code{\link[stats:Normal]{rnorm}}. } } \value{ \code{dtobit} gives the density, \code{ptobit} gives the distribution function, \code{qtobit} gives the quantile function, and \code{rtobit} generates random deviates. } \author{ T. W. Yee } \details{ See \code{\link{tobit}}, the \pkg{VGAM} family function for estimating the parameters, for details. Note that the density at \code{Lower} and \code{Upper} is the value of \code{\link[stats:Normal]{dnorm}} evaluated there plus the area to the left/right of that point too. Thus there are two spikes; see the example below. } %\note{ %} \seealso{ \code{\link{tobit}}. } \examples{ \dontrun{ m <- 0.5; x <- seq(-2, 4, len = 501) Lower <- -1; Upper <- 2.5 plot(x, ptobit(x, m = m, Lower = Lower, Upper = Upper), type = "l", ylim = 0:1, las = 1, col = "orange", ylab = paste("ptobit(m = ", m, ", sd = 1, Lower =", Lower, ", Upper =", Upper, ")"), main = "Orange is cumulative distribution function; blue is density", sub = "Purple lines are the 10,20,...,90 percentiles") abline(h = 0) lines(x, dtobit(x, m = m, Lower = Lower, Upper = Upper), col = "blue") probs <- seq(0.1, 0.9, by = 0.1) Q <- qtobit(probs, m = m, Lower = Lower, Upper = Upper) lines(Q, ptobit(Q, m = m, Lower = Lower, Upper = Upper), col = "purple", lty = "dashed", type = "h") lines(Q, dtobit(Q, m = m, Lower = Lower, Upper = Upper), col = "darkgreen", lty = "dashed", type = "h") abline(h = probs, col = "purple", lty = "dashed") max(abs(ptobit(Q, m = m, Lower = Lower, Upper = Upper) - probs)) # Should be 0 endpts <- c(Lower, Upper) # Endpoints have a spike lines(endpts, dtobit(endpts, m = m, Lower = Lower, Upper = Upper), col = "blue", lwd = 2, type = "h") } } \keyword{distribution} VGAM/man/tobit.Rd0000644000176000001440000002426712136651105013232 0ustar ripleyusers\name{tobit} \alias{tobit} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Tobit Model } \description{ Fits a Tobit model. } \usage{ tobit(Lower = 0, Upper = Inf, lmu = "identity", lsd = "loge", nsimEIM = 250, imu = NULL, isd = NULL, type.fitted = c("uncensored", "censored", "mean.obs"), imethod = 1, zero = -2) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{Lower}{ Numeric. It is the value \eqn{L} described below. Any value of the linear model \eqn{x_i^T \beta}{x_i^T beta} that is less than this lowerbound is assigned this value. Hence this should be the smallest possible value in the response variable. May be a vector (see below for more information). } \item{Upper}{ Numeric. It is the value \eqn{U} described below. Any value of the linear model \eqn{x_i^T \beta}{x_i^T beta} that is greater than this upperbound is assigned this value. Hence this should be the largest possible value in the response variable. May be a vector (see below for more information). } \item{lmu, lsd}{ Parameter link functions for the mean and standard deviation parameters. See \code{\link{Links}} for more choices. 
The standard deviation is a positive quantity, therefore a log link is its default. } \item{imu, isd}{ See \code{\link{CommonVGAMffArguments}} for information. } \item{type.fitted}{ Type of fitted value returned. The first choice is default and is the ordinary uncensored or unbounded linear model. If \code{"censored"} then the fitted values in the interval \eqn{[L, U]}. If \code{"mean.obs"} then the mean of the observations is returned; this is a doubly truncated normal distribution augmented by point masses at the truncation points (see \code{\link{dtobit}}). } \item{imethod}{ Initialization method. Either 1 or 2, this specifies two methods for obtaining initial values for the parameters. } \item{nsimEIM}{ Used if nonstandard Tobit model. See \code{\link{CommonVGAMffArguments}} for information. } \item{zero}{ An integer vector, containing the value 1 or 2. If so, the mean or standard deviation respectively are modelled as an intercept-only. Setting \code{zero = NULL} means both linear/additive predictors are modelled as functions of the explanatory variables. } } \details{ The Tobit model can be written \deqn{y_i^* = x_i^T \beta + \varepsilon_i}{% y_i^* = x_i^T beta + e_i} where the \eqn{e_i \sim N(0,\sigma^2)}{e_i ~ N(0,sigma^2)} independently and \eqn{i=1,\ldots,n}{i=1,...,n}. However, we measure \eqn{y_i = y_i^*} only if \eqn{y_i^* > L} and \eqn{y_i^* < U} for some cutpoints \eqn{L} and \eqn{U}. Otherwise we let \eqn{y_i=L} or \eqn{y_i=U}, whatever is closer. The Tobit model is thus a multiple linear regression but with censored responses if it is below or above certain cutpoints. The defaults for \code{Lower} and \code{Upper} and \code{lmu} correspond to the \emph{standard} Tobit model. Then Fisher scoring is used, else simulated Fisher scoring. By default, the mean \eqn{x_i^T \beta}{x_i^T beta} is the first linear/additive predictor, and the log of the standard deviation is the second linear/additive predictor. The Fisher information matrix for uncensored data is diagonal. The fitted values are the estimates of \eqn{x_i^T \beta}{x_i^T beta}. } \section{Warning }{ Convergence is often slow. Setting \code{crit = "coeff"} is recommended since premature convergence of the log-likelihood is common. Simulated Fisher scoring is implemented for the nonstandard Tobit model. For this, the working weight matrices for some observations are prone to not being positive-definite; if so then some checking of the final model is recommended and/or try inputting some initial values. % The working weight matrices for most observations % are not positive-definite. These responses would otherwise have a % fitted value much less than the cutpoint. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Tobin, J. (1958) Estimation of relationships for limited dependent variables. \emph{Econometrica} \bold{26}, 24--36. } \author{ Thomas W. Yee } \note{ The response can be a matrix. If so, then \code{Lower} and \code{Upper} are recycled into a matrix with the number of columns equal to the number of responses, and the recycling is done row-wise (\code{byrow = TRUE}). For example, these are returned in \code{fit4@misc$Lower} and \code{fit4@misc$Upper} below. If there is no censoring then \code{\link{normal1}} is recommended instead. 
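% A small sketch of the byrow = TRUE recycling described above, for a
% 2-column response with n = 3 rows and Lower = c(-1, 0):
% matrix(c(-1, 0), nrow = 3, ncol = 2, byrow = TRUE) # col 1 is -1, col 2 is 0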
Any value of the response less than \code{Lower} or greater than \code{Upper} will be assigned the value \code{Lower} and \code{Upper} respectively, and a warning will be issued. The fitted object has components \code{censoredL} and \code{censoredU} in the \code{extra} slot which specifies whether observations are censored in that direction. The function \code{\link{cennormal1}} is an alternative to \code{tobit()}. } \seealso{ \code{\link{rtobit}}, \code{\link{cennormal1}}, \code{\link{normal1}}, \code{\link{dcennormal1}}, \code{\link{posnormal1}}, \code{\link[stats:Normal]{rnorm}}. } \examples{ \dontrun{ # Here, fit1 is a standard Tobit model and fit2 is a nonstandard Tobit model tdata <- data.frame(x2 = seq(-1, 1, length = (nn <- 100))) set.seed(1) Lower <- 1; Upper = 4 # For the nonstandard Tobit model tdata <- transform(tdata, Lower.vec = rnorm(nn, Lower, 0.5), Upper.vec = rnorm(nn, Upper, 0.5)) meanfun1 <- function(x) 0 + 2*x meanfun2 <- function(x) 2 + 2*x meanfun3 <- function(x) 2 + 2*x meanfun4 <- function(x) 3 + 2*x tdata <- transform(tdata, y1 = rtobit(nn, mean = meanfun1(x2)), # Standard Tobit model y2 = rtobit(nn, mean = meanfun2(x2), Lower = Lower, Upper = Upper), y3 = rtobit(nn, mean = meanfun3(x2), Lower = Lower.vec, Upper = Upper.vec), y4 = rtobit(nn, mean = meanfun3(x2), Lower = Lower.vec, Upper = Upper.vec)) with(tdata, table(y1 == 0)) # How many censored values? with(tdata, table(y2 == Lower | y2 == Upper)) # How many censored values? with(tdata, table(attr(y2, "cenL"))) with(tdata, table(attr(y2, "cenU"))) fit1 <- vglm(y1 ~ x2, tobit, tdata, trace = TRUE, crit = "coeff") # crit = "coeff" is recommended coef(fit1, matrix = TRUE) summary(fit1) fit2 <- vglm(y2 ~ x2, tobit(Lower = Lower, Upper = Upper, type.f = "cens"), tdata, crit = "coeff", trace = TRUE) # ditto table(fit2@extra$censoredL) table(fit2@extra$censoredU) coef(fit2, matrix = TRUE) fit3 <- vglm(y3 ~ x2, tobit(Lower = with(tdata, Lower.vec), Upper = with(tdata, Upper.vec), type.f = "cens"), tdata, crit = "coeff", trace = TRUE) # ditto table(fit3@extra$censoredL) table(fit3@extra$censoredU) coef(fit3, matrix = TRUE) # fit4 is fit3 but with type.fitted = "uncen". fit4 <- vglm(cbind(y3, y4) ~ x2, tobit(Lower = rep(with(tdata, Lower.vec), each = 2), Upper = rep(with(tdata, Upper.vec), each = 2)), tdata, crit = "coeff", trace = TRUE) # ditto head(fit4@extra$censoredL) # A matrix head(fit4@extra$censoredU) # A matrix head(fit4@misc$Lower) # A matrix head(fit4@misc$Upper) # A matrix coef(fit4, matrix = TRUE) } \dontrun{ # Plot the results par(mfrow = c(2, 2)) # Plot fit1 plot(y1 ~ x2, tdata, las = 1, main = "Standard Tobit model", col = as.numeric(attr(y1, "cenL")) + 3, pch = as.numeric(attr(y1, "cenL")) + 1) legend(x = "topleft", leg = c("censored", "uncensored"), pch = c(2, 1), col = c("blue", "green")) legend(-1.0, 2.5, c("Truth", "Estimate", "Naive"), col = c("purple", "orange", "black"), lwd = 2, lty = c(1, 2, 2)) lines(meanfun1(x2) ~ x2, tdata, col = "purple", lwd = 2) lines(fitted(fit1) ~ x2, tdata, col = "orange", lwd = 2, lty = 2) lines(fitted(lm(y1 ~ x2, tdata)) ~ x2, tdata, col = "black", lty = 2, lwd = 2) # This is simplest but wrong! 
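# (Sketch) Numerical check of the attenuation seen in the fit1 panel:
# the true slope is 2; the naive lm() slope is shrunk towards zero.
coef(lm(y1 ~ x2, data = tdata))["x2"]  # Naive OLS slope
coef(fit1, matrix = TRUE)["x2", 1]     # Tobit estimate of the slope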
# Plot fit2 plot(y2 ~ x2, tdata, las = 1, main = "Tobit model", col = as.numeric(attr(y2, "cenL")) + 3 + as.numeric(attr(y2, "cenU")), pch = as.numeric(attr(y2, "cenL")) + 1 + as.numeric(attr(y2, "cenU"))) legend(x = "topleft", leg = c("censored", "uncensored"), pch = c(2, 1), col = c("blue", "green")) legend(-1.0, 3.5, c("Truth", "Estimate", "Naive"), col = c("purple", "orange", "black"), lwd = 2, lty = c(1, 2, 2)) lines(meanfun2(x2) ~ x2, tdata, col = "purple", lwd = 2) lines(fitted(fit2) ~ x2, tdata, col = "orange", lwd = 2, lty = 2) lines(fitted(lm(y2 ~ x2, tdata)) ~ x2, tdata, col = "black", lty = 2, lwd = 2) # This is simplest but wrong! # Plot fit3 plot(y3 ~ x2, tdata, las = 1, main = "Tobit model with nonconstant censor levels", col = as.numeric(attr(y3, "cenL")) + 3 + as.numeric(attr(y3, "cenU")), pch = as.numeric(attr(y3, "cenL")) + 1 + as.numeric(attr(y3, "cenU"))) legend(x = "topleft", leg = c("censored", "uncensored"), pch = c(2, 1), col = c("blue", "green")) legend(-1.0, 3.5, c("Truth", "Estimate", "Naive"), col = c("purple", "orange", "black"), lwd = 2, lty = c(1, 2, 2)) lines(meanfun3(x2) ~ x2, tdata, col = "purple", lwd = 2) lines(fitted(fit3) ~ x2, tdata, col = "orange", lwd = 2, lty = 2) lines(fitted(lm(y3 ~ x2, tdata)) ~ x2, tdata, col = "black", lty = 2, lwd = 2) # This is simplest but wrong! # Plot fit4 plot(y3 ~ x2, tdata, las = 1, main = "Tobit model with nonconstant censor levels", col = as.numeric(attr(y3, "cenL")) + 3 + as.numeric(attr(y3, "cenU")), pch = as.numeric(attr(y3, "cenL")) + 1 + as.numeric(attr(y3, "cenU"))) legend(x = "topleft", leg = c("censored", "uncensored"), pch = c(2, 1), col = c("blue", "green")) legend(-1.0, 3.5, c("Truth", "Estimate", "Naive"), col = c("purple", "orange", "black"), lwd = 2, lty = c(1, 2, 2)) lines(meanfun3(x2) ~ x2, tdata, col = "purple", lwd = 2) lines(fitted(fit4)[, 1] ~ x2, tdata, col = "orange", lwd = 2, lty = 2) lines(fitted(lm(y3 ~ x2, tdata)) ~ x2, tdata, col = "black", lty = 2, lwd = 2) # This is simplest but wrong! } } \keyword{models} \keyword{regression} VGAM/man/tikuvUC.Rd0000644000176000001440000000512312136651105013471 0ustar ripleyusers\name{Tikuv} \alias{Tikuv} \alias{dtikuv} \alias{ptikuv} \alias{qtikuv} \alias{rtikuv} \title{A Short-tailed Symmetric Distribution } \description{ Density, cumulative distribution function, quantile function and random generation for the short-tailed symmetric distribution of Tiku and Vaughan (1999). } \usage{ dtikuv(x, d, mean = 0, sigma = 1, log = FALSE) ptikuv(q, d, mean = 0, sigma = 1) qtikuv(p, d, mean = 0, sigma = 1, ...) rtikuv(n, d, mean = 0, sigma = 1, Smallno = 1.0e-6) } \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. Must be a positive integer of length 1.} \item{d, mean, sigma }{ arguments for the parameters of the distribution. See \code{\link{tikuv}} for more details. For \code{rtikuv}, arguments \code{mean} and \code{sigma} must be of length 1. } \item{Smallno}{ Numeric, a small value used by the rejection method for determining the lower and upper limits of the distribution. That is, \code{ptikuv(L) < Smallno} and \code{ptikuv(U) > 1-Smallno} where \code{L} and \code{U} are the lower and upper limits respectively. } \item{\ldots}{ Arguments that can be passed into \code{\link[stats]{uniroot}}. } \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the density is returned. 
} } \value{ \code{dtikuv} gives the density, \code{ptikuv} gives the cumulative distribution function, \code{qtikuv} gives the quantile function, and \code{rtikuv} generates random deviates. } \author{ T. W. Yee } \details{ See \code{\link{tikuv}} for more details. } %\note{ % %} \seealso{ \code{\link{tikuv}}. } \examples{ \dontrun{ par(mfrow = c(2, 1)) x <- seq(-5, 5, len = 401) plot(x, dnorm(x), type = "l", col = "black", ylab = "", las = 1, main = "Black is standard normal, others are dtikuv(x, d)") lines(x, dtikuv(x, d = -10), col = "orange") lines(x, dtikuv(x, d = -1 ), col = "blue") lines(x, dtikuv(x, d = 1 ), col = "green") legend("topleft", col = c("orange","blue","green"), lty = rep(1, len = 3), legend = paste("d =", c(-10, -1, 1))) plot(x, pnorm(x), type = "l", col = "black", ylab = "", las = 1, main = "Black is standard normal, others are ptikuv(x, d)") lines(x, ptikuv(x, d = -10), col = "orange") lines(x, ptikuv(x, d = -1 ), col = "blue") lines(x, ptikuv(x, d = 1 ), col = "green") legend("topleft", col = c("orange","blue","green"), lty = rep(1, len = 3), legend = paste("d =", c(-10, -1, 1))) } probs <- seq(0.1, 0.9, by = 0.1) ptikuv(qtikuv(p = probs, d = 1), d = 1) - probs # Should be all 0 } \keyword{distribution} VGAM/man/tikuv.Rd0000644000176000001440000000750612136651105013250 0ustar ripleyusers\name{tikuv} \alias{tikuv} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Short-tailed Symmetric Distribution Family Function } \description{ Fits the short-tailed symmetric distribution of Tiku and Vaughan (1999). } \usage{ tikuv(d, lmean = "identity", lsigma = "loge", isigma = NULL, zero = 2) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{d}{ The \eqn{d} parameter. It must be a single numeric value less than 2. Then \eqn{h = 2-d>0} is another parameter. } \item{lmean, lsigma}{ Link functions for the mean and standard deviation parameters of the usual univariate normal distribution (see \bold{Details} below). They are \eqn{\mu}{mu} and \eqn{\sigma}{sigma} respectively. See \code{\link{Links}} for more choices. } % \item{emean, esigma}{ % List. Extra argument for each of the links. % See \code{earg} in \code{\link{Links}} for general information. %emean = list(), esigma = list(), % % } \item{isigma}{ Optional initial value for \eqn{\sigma}{sigma}. A \code{NULL} means a value is computed internally. } \item{zero}{ An integer-valued vector specifying which linear/additive predictors are modelled as intercepts only. The values must be from the set \{1,2\} corresponding respectively to \eqn{\mu}{mu}, \eqn{\sigma}{sigma}. If \code{zero = NULL} then all linear/additive predictors are modelled as a linear combination of the explanatory variables. For many data sets having \code{zero = 2} is a good idea. } } \details{ The short-tailed symmetric distribution of Tiku and Vaughan (1999) has a probability density function that can be written \deqn{f(y) = \frac{K}{\sqrt{2\pi} \sigma} \left[ 1 + \frac{1}{2h} \left( \frac{y-\mu}{\sigma} \right)^2 \right]^2 \exp\left( -\frac12 (y-\mu)^2 / \sigma^2 \right) }{% f(y) = (K/(sqrt(2*pi)*sigma)) * [1 + (1/(2*h)) * ((y-mu)/sigma)^2]^2 * exp( -0.5 * (y-mu)^2/ sigma^2) } where \eqn{h=2-d>0}, \eqn{K} is a function of \eqn{h}, \eqn{-\infty < y < \infty}{-Inf < y < Inf}, \eqn{\sigma > 0}{sigma > 0}. The mean of \eqn{Y} is \eqn{E(Y) = \mu}{E(Y) = mu} and this is returned as the fitted values. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). 
The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Akkaya, A. D. and Tiku, M. L. (2008) Short-tailed distributions and inliers. \emph{Test}, \bold{17}, 282--296. Tiku, M. L. and Vaughan, D. C. (1999) A family of short-tailed symmetric distributions. \emph{Technical report, McMaster University, Canada}. } \author{ Thomas W. Yee } \note{ The density function is the product of a univariate normal density and a polynomial in the response \eqn{y}. The distribution is bimodal if \eqn{d>0}, else is unimodal. A normal distribution arises as the limit as \eqn{d} approaches \eqn{-\infty}{-Inf}, i.e., as \eqn{h} approaches \eqn{\infty}{Inf}. Fisher scoring is implemented. After fitting the value of \code{d} is stored in \code{@misc} with component name \code{d}. } \section{Warning }{ Under- or over-flow may occur if the data is ill-conditioned, e.g., when \eqn{d} is very close to 2 or approaches \code{-Inf}. } \seealso{ \code{\link{dtikuv}}, \code{\link{normal1}}. } \examples{ m <- 1.0; sigma <- exp(0.5) tdata <- data.frame(y = rtikuv(n = 1000, d = 1, m = m, s = sigma)) tdata <- transform(tdata, sy = sort(y)) fit <- vglm(y ~ 1, tikuv(d = 1), data = tdata, trace = TRUE) coef(fit, matrix = TRUE) (Cfit <- Coef(fit)) with(tdata, mean(y)) \dontrun{ with(tdata, hist(y, prob = TRUE)) lines(dtikuv(sy, d = 1, m = Cfit[1], s = Cfit[2]) ~ sy, tdata, col = "orange") } } \keyword{models} \keyword{regression} VGAM/man/studentt.Rd0000644000176000001440000001166712136651105013763 0ustar ripleyusers\name{studentt} \alias{studentt} \alias{studentt2} \alias{studentt3} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Student t Distribution } \description{ Estimation of parameters in a Student t distribution. } \usage{ studentt(ldf = "loglog", idf = NULL, tol1 = 0.1, imethod = 1) studentt2(df = Inf, llocation = "identity", lscale = "loge", ilocation = NULL, iscale = NULL, imethod = 1, zero = -2) studentt3(llocation = "identity", lscale = "loge", ldf = "loglog", ilocation = NULL, iscale = NULL, idf = NULL, imethod = 1, zero = -(2:3)) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{llocation, lscale, ldf}{ Parameter link functions for each parameter, e.g., for degrees of freedom \eqn{\nu}{nu}. See \code{\link{Links}} for more choices. The defaults ensures the parameters are in range. A \code{\link{loglog}} link keeps the degrees of freedom greater than unity; see below. } \item{ilocation, iscale, idf}{ Optional initial values. If given, the values must be in range. The default is to compute an initial value internally. } \item{tol1}{ A positive value, the tolerance for testing whether an initial value is 1. Best to leave this argument alone. } \item{df}{ Numeric, user-specified degrees of freedom. It may be of length equal to the number of columns of a response matrix. } \item{imethod, zero}{ See \code{\link{CommonVGAMffArguments}}. } } \details{ The Student t density function is \deqn{f(y;\nu) = \frac{\Gamma((\nu+1)/2)}{\sqrt{\nu \pi} \Gamma(\nu/2)} \left(1 + \frac{y^2}{\nu} \right)^{-(\nu+1)/2}}{% f(y;nu) = (gamma((nu+1)/2) / (sqrt(nu*pi) gamma(nu/2))) * (1 + y^2 / nu)^{-(nu+1)/2}} for all real \eqn{y}. Then \eqn{E(Y)=0} if \eqn{\nu>1}{nu>1} (returned as the fitted values), and \eqn{Var(Y)= \nu/(\nu-2)}{Var(Y)= nu/(nu-2)} for \eqn{\nu > 2}{nu > 2}. When \eqn{\nu=1}{nu=1} then the Student \eqn{t}-distribution corresponds to the standard Cauchy distribution, \code{\link{cauchy1}}. 
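% A quick check of the nu = 1 case (a sketch using stats::dt):
% all.equal(dt(1.5, df = 1), dcauchy(1.5)) # TRUE: t with 1 df is standard Cauchy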
When \eqn{\nu=2}{nu=2} with a scale parameter of \code{sqrt(2)} then the Student \eqn{t}-distribution corresponds to the standard Koenker distribution, \code{\link{koenker}}. The degrees of freedom can be treated as a parameter to be estimated, and as a real and not an integer. The Student t distribution is used for a variety of reasons in statistics, including robust regression. Let \eqn{Y = (T - \mu) / \sigma}{Y = (T - mu) / sigma} where \eqn{\mu}{mu} and \eqn{\sigma}{sigma} are the location and scale parameters respectively. Then \code{studentt3} estimates the location, scale and degrees of freedom parameters. And \code{studentt2} estimates the location, scale parameters for a user-specified degrees of freedom, \code{df}. And \code{studentt} estimates the degrees of freedom parameter only. The fitted values are the location parameters. By default the linear/additive predictors are \eqn{(\mu, \log(\sigma), \log\log(\nu))^T}{(mu, log(sigma), log log(nu))^T} or subsets thereof. In general convergence can be slow, especially when there are covariates. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Student (1908) The probable error of a mean. \emph{Biometrika}, \bold{6}, 1--25. Zhu, D. and Galbraith, J. W. (2010) A generalized asymmetric Student-\emph{t} distribution with application to financial econometrics. \emph{Journal of Econometrics}, \bold{157}, 297--305. } \author{ T. W. Yee } \note{ \code{studentt3} and \code{studentt2} can handle multiple responses. Practical experience has shown reasonably good initial values are required. If convergence failure occurs try using arguments such as \code{idf}. Local solutions are also possible, especially when the degrees of freedom is close to unity or the scale parameter is close to zero. A standard normal distribution corresponds to a \emph{t} distribution with infinite degrees of freedom. Consequently, if the data is close to normal, there may be convergence problems; best to use \code{\link{normal1}} instead. } \seealso{ \code{\link{normal1}}, \code{\link{cauchy1}}, \code{\link{logistic}}, \code{\link{huber2}}, \code{\link{koenker}}, \code{\link[stats]{TDist}}. } \examples{ tdata <- data.frame(x2 = runif(nn <- 1000)) tdata <- transform(tdata, y1 = rt(nn, df = exp(exp(0.5 - x2))), y2 = rt(nn, df = exp(exp(0.5 - x2)))) fit1 <- vglm(y1 ~ x2, studentt, tdata, trace = TRUE) coef(fit1, matrix = TRUE) fit2 <- vglm(cbind(y1, y2) ~ x2, studentt3, tdata, trace = TRUE) coef(fit2, matrix = TRUE) } \keyword{models} \keyword{regression} %Evans, M., Hastings, N. and Peacock, B. (2000) %\emph{Statistical Distributions}, %New York: Wiley-Interscience, Third edition. VGAM/man/sratio.Rd0000644000176000001440000001003312136651105013374 0ustar ripleyusers\name{sratio} \alias{sratio} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Ordinal Regression with Stopping Ratios } \description{ Fits a stopping ratio logit/probit/cloglog/cauchit/... regression model to an ordered (preferably) factor response. } \usage{ sratio(link = "logit", parallel = FALSE, reverse = FALSE, zero = NULL, whitespace = FALSE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{link}{ Link function applied to the \eqn{M} stopping ratio probabilities. See \code{\link{Links}} for more choices. } \item{parallel}{ A logical, or formula specifying which terms have equal/unequal coefficients. 
} \item{reverse}{ Logical. By default, the stopping ratios used are \eqn{\eta_j = logit(P[Y=j|Y \geq j])}{eta_j = logit(P[Y=j|Y>=j])} for \eqn{j=1,\dots,M}. If \code{reverse} is \code{TRUE}, then \eqn{\eta_j = logit(P[Y=j+1|Y \leq j+1])}{eta_j = logit(P[Y=j+1|Y<=j+1])} will be used. } \item{zero}{ An integer-valued vector specifying which linear/additive predictors are modelled as intercepts only. The values must be from the set \{1,2,\ldots,\eqn{M}\}. The default value means none are modelled as intercept-only terms. } \item{whitespace}{ See \code{\link{CommonVGAMffArguments}} for information. } } \details{ In this help file the response \eqn{Y} is assumed to be a factor with ordered values \eqn{1,2,\dots,M+1}, so that \eqn{M} is the number of linear/additive predictors \eqn{\eta_j}{eta_j}. There are a number of definitions for the \emph{continuation ratio} in the literature. To make life easier, in the \pkg{VGAM} package, we use \emph{continuation} ratios (see \code{\link{cratio}}) and \emph{stopping} ratios. Continuation ratios deal with quantities such as \code{logit(P[Y>j|Y>=j])}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, \code{\link{rrvglm}} and \code{\link{vgam}}. } \references{ Agresti, A. (2002) \emph{Categorical Data Analysis}, 2nd ed. New York: Wiley. Simonoff, J. S. (2003) \emph{Analyzing Categorical Data}, New York: Springer-Verlag. McCullagh, P. and Nelder, J. A. (1989) \emph{Generalized Linear Models}, 2nd ed. London: Chapman & Hall. Yee, T. W. (2010) The \pkg{VGAM} package for categorical data analysis. \emph{Journal of Statistical Software}, \bold{32}, 1--34. \url{http://www.jstatsoft.org/v32/i10/}. Documentation accompanying the \pkg{VGAM} package at \url{http://www.stat.auckland.ac.nz/~yee} contains further information and examples. } \author{ Thomas W. Yee } \note{ The response should be either a matrix of counts (with row sums that are all positive), or a factor. In both cases, the \code{y} slot returned by \code{vglm}/\code{vgam}/\code{rrvglm} is the matrix of counts. For a nominal (unordered) factor response, the multinomial logit model (\code{\link{multinomial}}) is more appropriate. Here is an example of the usage of the \code{parallel} argument. If there are covariates \code{x1}, \code{x2} and \code{x3}, then \code{parallel = TRUE ~ x1 + x2 -1} and \code{parallel = FALSE ~ x3} are equivalent. This would constrain the regression coefficients for \code{x1} and \code{x2} to be equal; those of the intercepts and \code{x3} would be different. } \section{Warning }{ No check is made to verify that the response is ordinal if the response is a matrix; see \code{\link[base:factor]{ordered}}. } \seealso{ \code{\link{cratio}}, \code{\link{acat}}, \code{\link{cumulative}}, \code{\link{multinomial}}, \code{\link{pneumo}}, \code{\link{logit}}, \code{\link{probit}}, \code{\link{cloglog}}, \code{\link{cauchit}}. } \examples{ pneumo <- transform(pneumo, let = log(exposure.time)) (fit <- vglm(cbind(normal, mild, severe) ~ let, sratio(parallel = TRUE), data = pneumo)) coef(fit, matrix = TRUE) constraints(fit) predict(fit) predict(fit, untransform = TRUE) } \keyword{models} \keyword{regression} VGAM/man/snormUC.Rd0000644000176000001440000000446212136651105013472 0ustar ripleyusers\name{snorm} \alias{snorm} \alias{dsnorm} %\alias{psnorm} %\alias{qsnorm} \alias{rsnorm} %- Also NEED an '\alias' for EACH other topic documented here. 
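% A sketch of how the stopping ratios of sratio() (above) determine the class
% probabilities for M = 2 (three response levels); s1 and s2 are hypothetical
% values of P[Y=1|Y>=1] and P[Y=2|Y>=2]:
% s1 <- 0.3; s2 <- 0.5
% c(s1, (1 - s1) * s2, (1 - s1) * (1 - s2)) # P[Y=1], P[Y=2], P[Y=3]; sums to 1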
\title{ Skew-Normal Distribution } \description{
Density and
% , distribution function, quantile function and
random generation for the skew-normal distribution.
} \usage{
dsnorm(x, location = 0, scale = 1, shape = 0, log = FALSE)
%psnorm(q, lambda)
%qsnorm(p, lambda)
rsnorm(n, location = 0, scale = 1, shape = 0)
}
%- maybe also 'usage' for other objects documented here.
\arguments{ \item{x}{vector of quantiles.}
% \item{x, q}{vector of quantiles.}
% \item{p}{vector of probabilities.}
\item{n}{number of observations. Same as \code{\link[stats]{runif}}. } \item{location}{ The location parameter \eqn{\xi}{xi}. A vector. } \item{scale}{ The scale parameter \eqn{\omega}{w}. A positive vector. } \item{shape}{ The shape parameter. It is called \eqn{\alpha}{alpha} in \code{\link{skewnormal1}}. } \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the density is returned. } } \details{ See \code{\link{skewnormal1}}, which currently only estimates the shape parameter. More generally here, \eqn{Z = \xi + \omega Y}{Z = xi + w * Y} where \eqn{Y} has a standard skew-normal distribution (see \code{\link{skewnormal1}}), \eqn{\xi}{xi} is the location parameter and \eqn{\omega}{w} is the scale parameter. } \value{
\code{dsnorm} gives the density,
% \code{psnorm} gives the distribution function,
% \code{qsnorm} gives the quantile function, and
\code{rsnorm} generates random deviates.
} \references{ \url{http://tango.stat.unipd.it/SN}. } \author{ T. W. Yee } \note{ The default values of all three parameters correspond to the skew-normal being the standard normal distribution. } \seealso{ \code{\link{skewnormal1}}. } \examples{ \dontrun{ N <- 200 # grid resolution
shape <- 7
x <- seq(-4, 4, len = N)
plot(x, dsnorm(x, shape = shape), type = "l", col = "blue", las = 1, ylab = "", lty = 1, lwd = 2)
abline(v = 0, h = 0, col = "grey")
lines(x, dnorm(x), col = "orange", lty = 2, lwd = 2)
legend("topleft", leg = c(paste("Blue = dsnorm(x, ", shape,")", sep = ""), "Orange = standard normal density"), lty = 1:2, lwd = 2, col = c("blue", "orange"))
} } \keyword{distribution} VGAM/man/smartpred.Rd0000644000176000001440000001432112136651105014100 0ustar ripleyusers\name{smartpred} \alias{smartpred} \title{ Smart Prediction } \description{ Data-dependent parameters in formula terms can cause problems when predicting. The \pkg{smartpred} package for \R and S-PLUS saves data-dependent parameters on the object so that the bug is fixed. The \code{\link[stats]{lm}} and \code{\link[stats]{glm}} functions have been fixed properly. Note that the \pkg{VGAM} package by T. W. Yee automatically comes with smart prediction. }
%\usage{
%lm()
%glm()
%ns()
%bs()
%poly()
%scale()
%vglm()
%rrvglm()
%vgam()
%cao()
%cqo()
%uqo()
%}
\value{ Returns the usual object, but with one list/slot component called \code{smart.prediction} containing any data-dependent parameters. } \section{Side Effects}{ The variables \code{.max.smart}, \code{.smart.prediction} and \code{.smart.prediction.counter} are created while the model is being fitted. In \R they are created in a new environment called \code{smartpredenv}. In S-PLUS they are created in frame 1. These variables are deleted after the model has been fitted. However, in \R, if there is an error in the model fitting function or the model fitting is killed (e.g., by typing control-C) then these variables will be left in \code{smartpredenv}. At the beginning of model fitting, these variables are deleted if present in \code{smartpredenv}.
During prediction, the variables \code{.smart.prediction} and \code{.smart.prediction.counter} are reconstructed and read by the smart functions when the model frame is re-evaluated. After prediction, these variables are deleted. If the modelling function is used with argument \code{smart = FALSE} (e.g., \code{vglm(..., smart = FALSE)}) then smart prediction will not be used, and the results should match with the original \R or S-PLUS functions. } \details{ \R version 1.6.0 introduced a partial fix for the prediction problem because it does not work all the time, e.g., for terms such as \code{I(poly(x, 3))}, \code{poly(c(scale(x)), 3)}, \code{bs(scale(x), 3)}, \code{scale(scale(x))}. See the examples below. Smart prediction, however, will always work. % albeit, not so elegantly. The basic idea is that the functions in the formula are now smart, and the modelling functions make use of these smart functions. Smart prediction works in two ways: using \code{\link{smart.expression}}, or using a combination of \code{\link{put.smart}} and \code{\link{get.smart}}. } \author{T. W. Yee and T. J. Hastie} \note{ In S-PLUS you will need to load in the \pkg{smartpred} library with the argument \code{first = T}, e.g., \code{library(smartpred, lib = "./mys8libs", first = T)}. Here, \code{mys8libs} is the name of a directory of installed packages. To install the smartpred package in Linux/Unix, type something like \code{Splus8 INSTALL -l ./mys8libs ./smartpred_0.8-2.tar.gz}. } %\note{ % In \R and % prior to the \pkg{VGAM} package using name spaces, the location of the % variables was the workspace. The present use of \code{smartpredenv} % is superior, and is somewhat similar to the S-PLUS implementation in % that the user is more oblivious to its existence. % %} \seealso{ \code{\link{get.smart.prediction}}, \code{\link{get.smart}}, \code{\link{put.smart}}, \code{\link{smart.expression}}, \code{\link{smart.mode.is}}, \code{\link{setup.smart}}, \code{\link{wrapup.smart}}. Commonly used data-dependent functions include \code{\link[base]{scale}}, \code{\link[stats]{poly}}, \code{\link[splines]{bs}}, \code{\link[splines]{ns}}. In \R, the functions \code{\link[splines]{bs}} and \code{\link[splines]{ns}} are in the \pkg{splines} package, and this library is automatically loaded in because it contains compiled code that \code{\link[splines]{bs}} and \code{\link[splines]{ns}} call. The website \url{http://www.stat.auckland.ac.nz/~yee} contains more information such as how to write a smart function, and other technical details. The functions \code{\link[VGAM]{vglm}}, \code{\link[VGAM]{vgam}}, \code{\link[VGAM]{rrvglm}} and \code{\link[VGAM]{cqo}} in T. W. Yee's \pkg{VGAM} package are examples of modelling functions that employ smart prediction. } \section{WARNING }{ In S-PLUS, if the \code{"bigdata"} library is loaded then it is \code{detach()}'ed. This is done because \code{scale} cannot be made smart if \code{"bigdata"} is loaded (it is loaded by default in the Windows version of Splus 8.0, but not in Linux/Unix). The function \code{\link[base]{search}} tells what is currently attached. In \R and S-PLUS the functions \code{\link[splines]{predict.bs}} and \code{predict.ns} are not smart. That is because they operate on objects that contain attributes only and do not have list components or slots. In \R the function \code{\link[stats:poly]{predict.poly}} is not smart. 
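% A minimal smart function sketch (hypothetical name 'mysub'), using only
% put.smart(), get.smart() and smart.mode.is(), all documented in this package;
% compare 'my1' in smart.mode.is() and 'my2' in smart.expression():
% mysub <- function(x, mx = min(x)) {
%   if (smart.mode.is("write")) put.smart(list(mx = mx)) else
%   if (smart.mode.is("read")) mx <- get.smart()$mx
%   x - mx
% }
% attr(mysub, "smart") <- TRUE # Mark it as smart, as in the examples below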
} \examples{ # Create some data first n <- 20 set.seed(86) # For reproducibility of the random numbers x <- sort(runif(n)) y <- sort(runif(n)) \dontrun{if(is.R()) library(splines) # To get ns() in R } # This will work for R 1.6.0 and later, but fail for S-PLUS fit <- lm(y ~ ns(x, df = 5)) \dontrun{ plot(x, y) lines(x, fitted(fit)) newx <- seq(0, 1, len = n) points(newx, predict(fit, data.frame(x = newx)), type = "b", col = 2, err = -1) } # The following fails for R 1.6.x and later but works with smart prediction fit <- lm(y ~ ns(scale(x), df = 5)) \dontrun{ fit$smart.prediction plot(x, y) lines(x, fitted(fit)) newx <- seq(0, 1, len = n) points(newx, predict(fit, data.frame(x = newx)), type = "b", col = 2, err = -1) } # The following requires the VGAM package to be loaded \dontrun{ library(VGAM) fit <- vlm(y ~ ns(scale(x), df = 5)) fit@smart.prediction plot(x, y) lines(x, fitted(fit)) newx <- seq(0, 1, len = n) points(newx, predict(fit, data.frame(x = newx)), type = "b", col = 2, err = -1) } } %\keyword{smart} \keyword{models} \keyword{regression} \keyword{programming} %lm(..., smart = TRUE) %glm(..., smart = TRUE) %ns() %bs() %poly() %scale() %vglm(..., smart = TRUE) %rrvglm(..., smart = TRUE) %vgam(..., smart = TRUE) %cao(..., smart = TRUE) %cqo(..., smart = TRUE) %uqo(..., smart = TRUE) %library(smartpred, lib = "./mys8libs", first = T) VGAM/man/smart.mode.is.Rd0000644000176000001440000000327112136651105014564 0ustar ripleyusers\name{smart.mode.is} \alias{smart.mode.is} \title{ Determine What Mode the Smart Prediction is In } \description{ Determine which of three modes the smart prediction is currently in. } \usage{ smart.mode.is(mode.arg=NULL) } \arguments{ \item{mode.arg}{ a character string, either \code{"read"}, \code{"write"} or \code{"neutral"}. }} \value{ If \code{mode.arg} is given, then either \code{TRUE} or \code{FALSE} is returned. If \code{mode.arg} is not given, then the mode (\code{"neutral"}, \code{"read"} or \code{"write"}) is returned. Usually, the mode is \code{"neutral"}. } \seealso{ \code{\link{put.smart}}, \code{\link[splines]{bs}}, \code{\link[stats]{poly}}. } \details{ Smart functions such as \code{\link[splines]{bs}} and \code{\link[stats]{poly}} need to know what mode smart prediction is in. If it is in \code{"write"} mode then the parameters are saved to \code{.smart.prediction} using \code{\link{put.smart}}. If in \code{"read"} mode then the parameters are read in using \code{\link{get.smart}}. If in \code{"neutral"} mode then the smart function behaves like an ordinary function. } \examples{ my1 <- function(x, minx = min(x)) { # Here is a smart function x <- x # Needed for nested calls, e.g., bs(scale(x)) if(smart.mode.is("read")) { smart <- get.smart() minx <- smart$minx # Overwrite its value } else if(smart.mode.is("write")) put.smart(list(minx = minx)) sqrt(x - minx) } attr(my1, "smart") <- TRUE smart.mode.is() # Returns "neutral" smart.mode.is(smart.mode.is()) # Returns TRUE } %\keyword{smart} \keyword{models} \keyword{regression} \keyword{programming} % Converted by Sd2Rd version 1.10. VGAM/man/smart.expression.Rd0000644000176000001440000000213112136651105015417 0ustar ripleyusers\name{smart.expression} \alias{smart.expression} \title{ S Expression for Smart Functions } \description{ \code{smart.expression} is an S expression for a smart function to call itself. It is best if you go through it line by line, but most users will not need to know anything about it. It requires the primary argument of the smart function to be called \code{"x"}. 
The list component \code{match.call} must be assigned the value of \code{match.call()} in the smart function; this is so that the smart function can call itself later. } \seealso{ \code{\link[base]{match.call}}. } \examples{ "my2" <- function(x, minx=min(x)) { # Here is a smart function x <- x # Needed for nested calls, e.g., bs(scale(x)) if(smart.mode.is("read")) { return(eval(smart.expression)) } else if(smart.mode.is("write")) put.smart(list(minx=minx, match.call=match.call())) (x-minx)^2 } attr(my2, "smart") <- TRUE } %\keyword{smartpred} \keyword{models} \keyword{regression} \keyword{programming} % Converted by Sd2Rd version 1.10. % Edited manually 17/2/03, 9/7/03 VGAM/man/slashUC.Rd0000644000176000001440000000341312136651105013441 0ustar ripleyusers\name{Slash} \alias{Slash} \alias{dslash} \alias{pslash} \alias{rslash} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Slash Distribution } \description{ Density function, distribution function, and random generation for the slash distribution. } \usage{ dslash(x, mu = 0, sigma = 1, log = FALSE, smallno = .Machine$double.eps*1000) pslash(q, mu = 0, sigma = 1) rslash(n, mu = 0, sigma = 1) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x, q}{vector of quantiles.} \item{n}{number of observations. Must be a single positive integer. } \item{mu, sigma}{the mean and standard deviation of the univariate normal distribution. } \item{log}{ Logical. If \code{TRUE} then the logarithm of the density is returned. } \item{smallno}{ See \code{\link{slash}}. } } \details{ See \code{\link{slash}}, the \pkg{VGAM} family function for estimating the two parameters by maximum likelihood estimation, for the formula of the probability density function and other details. } \value{ \code{dslash} gives the density, and \code{pslash} gives the distribution function, \code{rslash} generates random deviates. } %\references{ } \author{ Thomas W. Yee and C. S. Chee} \note{ \code{pslash} is very slow. } \seealso{ \code{\link{slash}}. } \examples{ \dontrun{ curve(dslash, col = "blue", ylab = "f(x)", -5, 5, ylim = c(0, 0.4), main = "Standard slash, normal and Cauchy densities", lwd = 2) curve(dnorm, col = "black", lty = 2, lwd = 2, add = TRUE) curve(dcauchy, col = "orange", lty = 3, lwd = 2, add = TRUE) legend(x = 2, y = 0.3, c("slash", "normal", "Cauchy"), lty = 1:3, col = c("blue","black","orange"), lwd = 2) curve(pslash, col = "blue", -5, 5, ylim = 0:1) } } \keyword{distribution} VGAM/man/slash.Rd0000644000176000001440000000652712136651105013222 0ustar ripleyusers\name{slash} \alias{slash} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Slash Distribution Family Function } \description{ Estimates the two parameters of the slash distribution by maximum likelihood estimation. } \usage{ slash(lmu = "identity", lsigma = "loge", imu = NULL, isigma = NULL, iprobs = c(0.1, 0.9), nsimEIM = 250, zero = NULL, smallno = .Machine$double.eps*1000) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lmu, lsigma}{ Parameter link functions applied to the \eqn{\mu}{mu} and \eqn{\sigma}{sigma} parameters, respectively. See \code{\link{Links}} for more choices. } % \item{emu, esigma}{ % List. Extra argument for each of the link functions. % See \code{earg} in \code{\link{Links}} for general information. %emu = list(), esigma = list(), % } \item{imu, isigma}{ Initial values. A \code{NULL} means an initial value is chosen internally. 
See \code{\link{CommonVGAMffArguments}} for more information. } \item{iprobs}{ Used to compute the initial values for \code{mu}. This argument is fed into the \code{probs} argument of \code{\link[stats]{quantile}}, and then a grid between these two points is used to evaluate the log-likelihood. This argument must be of length two and have values between 0 and 1. } \item{nsimEIM, zero}{ See \code{\link{CommonVGAMffArguments}} for more information. } \item{smallno}{ Small positive number, used to test for the singularity. } } \details{ The standard slash distribution is the distribution of the ratio of a standard normal variable to an independent standard uniform(0,1) variable. It is mainly of use in simulation studies. One of its properties is that it has heavy tails, similar to those of the Cauchy. The general slash distribution can be obtained by replacing the univariate normal variable by a general normal \eqn{N(\mu,\sigma)}{N(mu,sigma)} random variable. It has a density that can be written as
\deqn{f(y) = \left\{ \begin{array}{cl}
1/(2 \sigma \sqrt{2 \pi}) & \mbox{if } y = \mu, \\
\left[ 1 - \exp\left( -\frac12 \left( \frac{y-\mu}{\sigma} \right)^2 \right) \right] \Big/ \left( \sqrt{2 \pi} \, \sigma \left( \frac{y-\mu}{\sigma} \right)^2 \right) & \mbox{if } y \ne \mu.
\end{array} \right. }{%
f(y) = 1/(2*sigma*sqrt(2*pi)) if y = mu,
f(y) = (1 - exp(-0.5*((y-mu)/sigma)^2)) / (sqrt(2*pi)*sigma*((y-mu)/sigma)^2) if y != mu}
where \eqn{\mu}{mu} and \eqn{\sigma}{sigma} are the mean and standard deviation of the univariate normal distribution respectively. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Johnson, N. L. and Kotz, S. and Balakrishnan, N. (1994) \emph{Continuous Univariate Distributions}, 2nd edition, Volume 1, New York: Wiley. Kafadar, K. (1982) A Biweight Approach to the One-Sample Problem. \emph{Journal of the American Statistical Association}, \bold{77}, 416--424. } \author{ T. W. Yee and C. S. Chee } \note{ Fisher scoring using simulation is used. Convergence is often quite slow. Numerical problems may occur. } \seealso{ \code{\link{rslash}}. } \examples{ \dontrun{ sdata <- data.frame(y = rslash(n = 1000, mu = 4, sigma = exp(2)))
fit <- vglm(y ~ 1, slash, sdata, trace = TRUE)
coef(fit, matrix = TRUE)
Coef(fit)
summary(fit)
} } \keyword{models} \keyword{regression} VGAM/man/skewnormal1.Rd0000644000176000001440000000651712136651105014352 0ustar ripleyusers\name{skewnormal1} \alias{skewnormal1}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Univariate Skew-Normal Distribution Family Function } \description{ Maximum likelihood estimation of the shape parameter of a univariate skew-normal distribution. } \usage{ skewnormal1(lshape = "identity", ishape = NULL, nsimEIM = NULL) }
%- maybe also 'usage' for other objects documented here.
\arguments{ \item{lshape, ishape, nsimEIM}{ See \code{\link{Links}} and \code{\link{CommonVGAMffArguments}}. } } \details{ The univariate skew-normal distribution has a density function that can be written \deqn{f(y) = 2 \, \phi(y) \, \Phi(\alpha y)}{%
f(y) = 2 * phi(y) * Phi(alpha * y)} where \eqn{\alpha}{alpha} is the shape parameter. Here, \eqn{\phi}{phi} is the standard normal density and \eqn{\Phi}{Phi} its cumulative distribution function. When \eqn{\alpha=0}{alpha=0} the result is a standard normal distribution. When \eqn{\alpha=1}{alpha=1} it models the distribution of the maximum of two independent standard normal variates.
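% A quick simulation check of the alpha = 1 case (a sketch):
% z <- pmax(rnorm(1e5), rnorm(1e5)) # Max of two independent N(0,1) variates
% mean(z) # ~ 1/sqrt(pi) = 0.5642, the mean formula below with alpha = 1
% sqrt(2 / (pi * (1 + 1^2))) # Same value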
When the absolute value of the shape parameter increases, the skewness of the distribution increases. The limit as the shape parameter tends to positive infinity results in the folded normal distribution or half-normal distribution. When the shape parameter changes its sign, the density is reflected about \eqn{y=0}. The mean of the distribution is \eqn{\mu=\alpha \sqrt{2/(\pi (1+\alpha^2))}}{mu=alpha*sqrt(2/(pi*(1+alpha^2)))} and these are returned as the fitted values. The variance of the distribution is \eqn{1-\mu^2}{1-mu^2}. The Newton-Raphson algorithm is used unless the \code{nsimEIM} argument is used. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Azzalini, A. (1985) A class of distributions which include the normal. \emph{Scandinavian Journal of Statistics}, \bold{12}, 171--178. Azzalini, A. and Capitanio, A. (1999) Statistical applications of the multivariate skew-normal distribution. \emph{Journal of the Royal Statistical Society, Series B, Methodological}, \bold{61}, 579--602. } \author{ Thomas W. Yee } \note{ It is a good idea to use several different initial values to ensure that the global solution is obtained. This family function will be modified (hopefully soon) to handle a location and scale parameter too. } \section{Warning }{ It is well known that the EIM of Azzalini's skew-normal distribution is singular for skewness parameter tending to zero, and thus produces inferential problems. } \seealso{ \code{\link{snorm}}, \code{\link{normal1}}, \code{\link{fnormal1}}. } \examples{ sdata <- data.frame(y = rsnorm(nn <- 1000, shape = 5)) fit <- vglm(y ~ 1, skewnormal1, sdata, trace = TRUE) coef(fit, matrix = TRUE) head(fitted(fit), 1) with(sdata, mean(y)) \dontrun{ with(sdata, hist(y, prob = TRUE)) x <- with(sdata, seq(min(y), max(y), len = 200)) with(sdata, lines(x, dsnorm(x, shape = Coef(fit)), col = "blue")) } sdata <- data.frame(x2 = runif(nn)) sdata <- transform(sdata, y = rsnorm(nn, shape = 1 + 2*x2)) fit <- vglm(y ~ x2, skewnormal1, sdata, trace = TRUE, crit = "coef") summary(fit) } \keyword{models} \keyword{regression} VGAM/man/skellamUC.Rd0000644000176000001440000000305412136651105013760 0ustar ripleyusers\name{Skellam} \alias{Skellam} \alias{dskellam} %\alias{pskellam} %\alias{qskellam} \alias{rskellam} \title{The Skellam Distribution} \description{ Density % distribution function, quantile function and random generation for the Skellam distribution. } \usage{ dskellam(x, mu1, mu2, log = FALSE) %pskellam(q, mu1, mu2) %qskellam(p, mu1, mu2) rskellam(n, mu1, mu2) } \arguments{ \item{x}{vector of quantiles.} % \item{p}{vector of probabilities.} \item{n}{number of observations. Must be a positive integer of length 1.} \item{mu1, mu2}{ See \code{\link{skellam}}.} \item{log}{ Logical; if \code{TRUE}, the logarithm is returned. } } \value{ \code{dskellam} gives the density, and % \code{pskellam} gives the distribution function, % \code{qskellam} gives the quantile function, and \code{rskellam} generates random deviates. } %\author{ T. W. Yee } \details{ See \code{\link{skellam}}, the \pkg{VGAM} family function for estimating the parameters, for the formula of the probability density function and other details. } \section{Warning }{ Numerical problems may occur for data if \eqn{\mu_1}{mu1} and/or \eqn{\mu_2}{mu2} are large. The normal approximation for this case has not been implemented yet.
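% For reference, the approximation referred to is Y1 - Y2 ~
% N(mu1 - mu2, mu1 + mu2), approximately, for large mu1 and mu2.
% A sketch of what such a fallback might look like (dskellam.approx is
% hypothetical and not part of VGAM):
% dskellam.approx <- function(x, mu1, mu2)
%   dnorm(x, mean = mu1 - mu2, sd = sqrt(mu1 + mu2))
% dskellam.approx(3, mu1 = 100, mu2 = 150)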
} \seealso{ \code{\link{skellam}}, \code{\link[stats:Poisson]{dpois}}. } \examples{ \dontrun{ mu1 <- 1; mu2 <- 2; x <- (-7):7 plot(x, dskellam(x, mu1, mu2), type = "h", las = 1, col = "blue", main = paste("Density of Skellam distribution with mu1 = ", mu1, " and mu2 = ", mu2, sep = "")) } } \keyword{distribution} VGAM/man/skellam.Rd0000644000176000001440000000611712136651105013533 0ustar ripleyusers\name{skellam} \alias{skellam} %- Also NEED an '\alias' for EACH other topic documented here. \title{Skellam Distribution Family Function} \description{ Estimates the two parameters of a Skellam distribution by maximum likelihood estimation. } \usage{ skellam(lmu1 = "loge", lmu2 = "loge", imu1 = NULL, imu2 = NULL, nsimEIM = 100, parallel = FALSE, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lmu1, lmu2}{ Link functions for the \eqn{\mu_1}{mu1} and \eqn{\mu_2}{mu2} parameters. See \code{\link{Links}} for more choices and for general information. } \item{imu1, imu2}{ Optional initial values for the parameters. See \code{\link{CommonVGAMffArguments}} for more information. If convergence failure occurs (this \pkg{VGAM} family function seems to require good initial values) try using these arguments. } \item{nsimEIM, parallel, zero}{ See \code{\link{CommonVGAMffArguments}} for more information. In particular, setting \code{parallel=TRUE} will constrain the two means to be equal. } } \details{ The Skellam distribution models the difference of two independent Poisson random variables. It has density function \deqn{f(y;\mu_1,\mu_2) = \left( \frac{ \mu_1 }{\mu_2} \right)^{y/2} \, \exp(-\mu_1-\mu_2 ) \, I_y( 2 \sqrt{ \mu_1 \mu_2}) }{% f(y;mu1,mu2) = ( mu1 / mu2 )^(y/2) * exp(-mu1-mu2) * I_y( 2 * sqrt(mu1*mu2)) } where \eqn{y} is an integer, \eqn{\mu_1 > 0}{mu1 > 0}, \eqn{\mu_2 > 0}{mu2 > 0}. Here, \eqn{I_v} is the modified Bessel function of the first kind with order \eqn{v}. The mean is \eqn{\mu_1 - \mu_2}{mu1 - mu2} (returned as the fitted values) and the variance is \eqn{\mu_1 + \mu_2}{mu1 + mu2}. Simulated Fisher scoring is implemented. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \section{Warning }{ This \pkg{VGAM} family function seems fragile and very sensitive to the initial values. Use very cautiously!! } \references{ Skellam, J. G. (1946) The frequency distribution of the difference between two Poisson variates belonging to different populations. \emph{Journal of the Royal Statistical Society, Series A}, \bold{109}, 296. } %\author{ T. W. Yee } \note{ Numerical problems may occur for data if \eqn{\mu_1}{mu1} and/or \eqn{\mu_2}{mu2} are large. } \seealso{ \code{\link{dskellam}}, \code{\link[stats:Poisson]{dpois}}, \code{\link{poissonff}}.
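% The density can also be checked against a direct Poisson convolution,
% since Y = Y1 - Y2 for independent Poisson variates Y1 and Y2 (a sketch):
% mu1 <- 1; mu2 <- 2; kk <- 0:200
% sum(dpois(kk + 3, mu1) * dpois(kk, mu2)) -  # P(Y1 - Y2 = 3), directly
%   dskellam(3, mu1, mu2)                     # Should be near 0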
} \examples{ \dontrun{ sdata <- data.frame(x2 = runif(nn <- 1000)) sdata <- transform(sdata, mu1 = exp(1+x2), mu2 = exp(1+x2)) sdata <- transform(sdata, y = rskellam(nn, mu1, mu2)) fit1 <- vglm(y ~ x2, skellam, sdata, trace = TRUE, crit = "c") fit2 <- vglm(y ~ x2, skellam(parallel = TRUE), sdata, trace = TRUE) coef(fit1, matrix = TRUE) coef(fit2, matrix = TRUE) summary(fit1) # Likelihood ratio test for equal means: pchisq(2 * (logLik(fit1) - logLik(fit2)), df = fit2@df.residual - fit1@df.residual, lower.tail = FALSE) } } \keyword{models} \keyword{regression} VGAM/man/sinmadUC.Rd0000644000176000001440000000346512136651105013611 0ustar ripleyusers\name{Sinmad} \alias{Sinmad} \alias{dsinmad} \alias{psinmad} \alias{qsinmad} \alias{rsinmad} \title{The Singh-Maddala Distribution} \description{ Density, distribution function, quantile function and random generation for the Singh-Maddala distribution with shape parameters \code{a} and \code{q}, and scale parameter \code{scale}. } \usage{ dsinmad(x, shape1.a, scale = 1, shape3.q, log = FALSE) psinmad(q, shape1.a, scale = 1, shape3.q) qsinmad(p, shape1.a, scale = 1, shape3.q) rsinmad(n, shape1.a, scale = 1, shape3.q) } \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. If \code{length(n) > 1}, the length is taken to be the number required.} \item{shape1.a, shape3.q}{shape parameters.} \item{scale}{scale parameter.} \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the density is returned. } } \value{ \code{dsinmad} gives the density, \code{psinmad} gives the distribution function, \code{qsinmad} gives the quantile function, and \code{rsinmad} generates random deviates. } \references{ Kleiber, C. and Kotz, S. (2003) \emph{Statistical Size Distributions in Economics and Actuarial Sciences}, Hoboken, NJ: Wiley-Interscience. } \author{ T. W. Yee } \details{ See \code{\link{sinmad}}, which is the \pkg{VGAM} family function for estimating the parameters by maximum likelihood estimation. } \note{ The Singh-Maddala distribution is a special case of the 4-parameter generalized beta II distribution. } \seealso{ \code{\link{sinmad}}, \code{\link{genbetaII}}. } \examples{ sdata <- data.frame(y = rsinmad(n = 3000, exp(1), exp(2), exp(1))) fit <- vglm(y ~ 1, sinmad(ishape1.a = 2.1), sdata, trace = TRUE, crit = "coef") coef(fit, matrix = TRUE) Coef(fit) } \keyword{distribution} VGAM/man/sinmad.Rd0000644000176000001440000000714512136651105013360 0ustar ripleyusers\name{sinmad} \alias{sinmad} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Singh-Maddala Distribution Family Function } \description{ Maximum likelihood estimation of the 3-parameter Singh-Maddala distribution. } \usage{ sinmad(lshape1.a = "loge", lscale = "loge", lshape3.q = "loge", ishape1.a = NULL, iscale = NULL, ishape3.q = 1, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lshape1.a, lscale, lshape3.q}{ Parameter link functions applied to the (positive) parameters \code{a}, \code{scale}, and \code{q}. See \code{\link{Links}} for more choices. } \item{ishape1.a, iscale, ishape3.q}{ Optional initial values for \code{a}, \code{scale}, and \code{q}. } \item{zero}{ An integer-valued vector specifying which linear/additive predictors are modelled as intercepts only. Here, the values must be from the set \{1,2,3\} which correspond to \code{a}, \code{scale}, \code{q}, respectively. 
} } \details{ The 3-parameter Singh-Maddala distribution is the 4-parameter generalized beta II distribution with shape parameter \eqn{p=1}. It is known under various other names, such as the Burr XII (or just the Burr distribution), Pareto IV, beta-P, and generalized log-logistic distribution. More details can be found in Kleiber and Kotz (2003). Some distributions which are special cases of the 3-parameter Singh-Maddala are the Lomax (\eqn{a=1}), Fisk (\eqn{q=1}), and paralogistic (\eqn{a=q}). The Singh-Maddala distribution has density \deqn{f(y) = aq y^{a-1} / [b^a \{1 + (y/b)^a\}^{1+q}]}{% f(y) = aq y^(a-1) / [b^a (1 + (y/b)^a)^(1+q)]} for \eqn{a > 0}, \eqn{b > 0}, \eqn{q > 0}, \eqn{y \geq 0}{y >= 0}. Here, \eqn{b} is the scale parameter \code{scale}, and the others are shape parameters. The cumulative distribution function is \deqn{F(y) = 1 - [1 + (y/b)^a]^{-q}.}{% F(y) = 1 - [1 + (y/b)^a]^(-q).} The mean is \deqn{E(Y) = b \, \Gamma(1 + 1/a) \, \Gamma(q - 1/a) / \Gamma(q)}{% E(Y) = b gamma(1 + 1/a) gamma(q - 1/a) / gamma(q)} provided \eqn{-a < 1 < aq}; these are returned as the fitted values. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Kleiber, C. and Kotz, S. (2003) \emph{Statistical Size Distributions in Economics and Actuarial Sciences}, Hoboken, NJ, USA: Wiley-Interscience. } \author{ T. W. Yee } \note{ See the note in \code{\link{genbetaII}}. } \seealso{ \code{\link{Sinmad}}, \code{\link{genbetaII}}, \code{\link{betaII}}, \code{\link{dagum}}, \code{\link{fisk}}, \code{\link{invlomax}}, \code{\link{lomax}}, \code{\link{paralogistic}}, \code{\link{invparalogistic}}. } \examples{ sdata <- data.frame(y = rsinmad(n = 1000, exp(1), exp(2), exp(0))) fit <- vglm(y ~ 1, sinmad, sdata, trace = TRUE) fit <- vglm(y ~ 1, sinmad(ishape1.a = exp(1)), sdata, trace = TRUE) coef(fit, matrix = TRUE) Coef(fit) summary(fit) # Harder problem (has the shape3.q parameter going to infinity) set.seed(3) sdata <- data.frame(y1 = rbeta(1000, 6, 6)) # hist(with(sdata, y1)) if (FALSE) { # This fails fit1 <- vglm(y1 ~ 1, sinmad, data = sdata, trace = TRUE) fit1 <- vglm(y1 ~ 1, sinmad, data = sdata, trace = TRUE, maxit = 6, crit = "coef") Coef(fit1) } # Try this remedy: fit2 <- vglm(y1 ~ 1, sinmad(ishape3.q = 3, lshape3.q = "loglog"), data = sdata, trace = TRUE, stepsize = 0.05, maxit = 99) coef(fit2, matrix = TRUE) Coef(fit2) } \keyword{models} \keyword{regression} VGAM/man/simplexUC.Rd0000644000176000001440000000360412136651105014012 0ustar ripleyusers\name{Simplex } \alias{dsimplex} \alias{psimplex} \alias{qsimplex} \alias{rsimplex} \title{ Simplex Distribution } \description{ Density function, and random generation for the simplex distribution. } \usage{ dsimplex(x, mu = 0.5, dispersion = 1, log = FALSE) rsimplex(n, mu = 0.5, dispersion = 1) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x}{ Vector of quantiles. The support of the distribution is the interval \eqn{(0,1)}. } \item{mu, dispersion}{Mean and dispersion parameters. The former lies in the interval \eqn{(0,1)} and the latter is positive. } \item{n, log}{Same usage as \code{\link[stats:Uniform]{runif}}.} } \details{ The \pkg{VGAM} family function \code{\link{simplex}} fits this model; see that online help for more information. 
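% As a quick check, the density should integrate to one over (0,1)
% (a sketch using the dsimplex() documented here):
% integrate(dsimplex, 0, 1, mu = 0.5, dispersion = 1)  # Should be near 1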
For \code{rsimplex()} the rejection method is used; it may be very slow if the density is highly peaked, and will fail if the density asymptotes at the boundary. } \value{ \code{dsimplex(x)} gives the density function, \code{rsimplex(n)} gives \eqn{n} random variates. } % \references{ % %} \author{ T. W. Yee } \seealso{ \code{\link{simplex}}. } \examples{ sigma <- c(4, 2, 1) # Dispersion parameter mymu <- c(0.1, 0.5, 0.7); xxx <- seq(0, 1, len = 501) \dontrun{ par(mfrow = c(3, 3)) # Figure 2.1 of Song (2007) for(iii in 1:3) for(jjj in 1:3) { plot(xxx, dsimplex(xxx, mymu[jjj], sigma[iii]), type = "l", col = "blue", xlab = "", ylab = "", main = paste("mu = ", mymu[jjj], ", sigma = ", sigma[iii], sep = "")) } } } \keyword{distribution} % mean(rsimplex(1000, mymu[2], sigma[2])) # Should be mu below % var(rsimplex(1000, mymu[2], sigma[2])) # Should be as below % (mu <- mymu[2]) % lambda <- (1 / sigma[2])^2 % mu * (1 - mu) - sqrt(lambda / 2) * exp(lambda / (mu^2 * (1 - mu)^2)) * % pgamma(lambda / (2 * mu^2 * (1 - mu)^2), 0.5, lower = FALSE) * gamma(0.5) VGAM/man/simplex.Rd0000644000176000001440000000630012136651105013556 0ustar ripleyusers\name{simplex} \alias{simplex} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Simplex Distribution Family Function } \description{ The two parameters of the univariate standard simplex distribution are estimated by full maximum likelihood estimation. } \usage{ simplex(lmu = "logit", lsigma = "loge", imu = NULL, isigma = NULL, imethod = 1, shrinkage.init = 0.95, zero = 2) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lmu, lsigma}{ Link function for \code{mu} and \code{sigma}. See \code{\link{Links}} for more choices. } \item{imu, isigma}{ Optional initial values for \code{mu} and \code{sigma}. A \code{NULL} means a value is obtained internally. } \item{imethod, shrinkage.init, zero}{ See \code{\link{CommonVGAMffArguments}} for more information. } } \details{ The probability density function can be written \deqn{f(y; \mu, \sigma) = [2 \pi \sigma^2 (y (1-y))^3]^{-0.5} \exp[-0.5 (y-\mu)^2 / (\sigma^2 y (1-y) \mu^2 (1-\mu)^2)] }{% f(y; mu, sigma) = [2* pi * sigma^2 * (y*(1-y))^3]^(-0.5) * exp[-0.5 * (y-mu)^2 / (sigma^2 * y * (1-y) * mu^2 * (1-mu)^2)] } for \eqn{0 < y < 1}, \eqn{0 < \mu < 1}{0 < mu < 1}, and \eqn{\sigma > 0}{sigma > 0}. The mean of \eqn{Y} is \eqn{\mu}{mu} (called \code{mu}, and returned as the fitted values). % This comes from Jorgensen but it is not confirmed by simulations: % The variance of \eqn{Y} is \eqn{\mu (1 - \mu) - \sqrt{ \lambda / 2} % \exp\{ \lambda / (\mu^2 (1 - \mu)^2) \} % \Gamma(\lambda / (2 \mu^2 (1 - \mu)^2), 0.5)}{ % mu * (1 - mu) - sqrt(lambda / 2) * % exp(lambda / (mu^2 * (1 - mu)^2)) * % Gamma(lambda / (2 * mu^2 * (1 - mu)^2), 0.5)}. % Here, \eqn{\Gamma(x, a)}{Gamma(x, a)} is the % `upper' normalized incomplete gamma function given by % \code{pgamma(x, a, lower = FALSE) * gamma(a)}. The second parameter, \code{sigma}, of this standard simplex distribution is known as the dispersion parameter. The unit variance function is \eqn{V(\mu) = \mu^3 (1-\mu)^3}{V(mu) = mu^3 (1-mu)^3}. Fisher scoring is applied to both parameters. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Jorgensen, B. (1997) \emph{The Theory of Dispersion Models}. London: Chapman & Hall Song, P. X.-K. 
(2007) \emph{Correlated Data Analysis: Modeling, Analytics, and Applications}. Springer. } \author{ T. W. Yee } \note{ This distribution is potentially useful for dispersion modelling. Numerical problems may occur when \code{mu} is very close to 0 or 1. } \seealso{ \code{\link{dsimplex}}, \code{\link{dirichlet}}, \code{\link{rig}}, \code{\link{binomialff}}. } \examples{ sdata <- data.frame(x2 = runif(nn <- 1000)) sdata <- transform(sdata, eta1 = 1 + 2 * x2, eta2 = 1 - 2 * x2) sdata <- transform(sdata, y = rsimplex(nn, mu = logit(eta1, inverse = TRUE), dispersion = exp(eta2))) (fit <- vglm(y ~ x2, simplex(zero = NULL), sdata, trace = TRUE)) coef(fit, matrix = TRUE) summary(fit) } \keyword{models} \keyword{regression} VGAM/man/setup.smart.Rd0000644000176000001440000000456412136651105014374 0ustar ripleyusers\name{setup.smart} \alias{setup.smart} \title{ Smart Prediction Setup } \description{ Sets up smart prediction in one of two modes: \code{"write"} and \code{"read"}. } \usage{ setup.smart(mode.arg, smart.prediction = NULL, max.smart = 30) } \arguments{ \item{mode.arg}{ \code{mode.arg} must be \code{"write"} or \code{"read"}. If in \code{"read"} mode then \code{smart.prediction} must be assigned the data structure \code{.smart.prediction} that was created while fitting. This is stored in \code{object@smart.prediction} or \code{object$smart.prediction} where \code{object} is the name of the fitted object. } \item{smart.prediction}{ If in \code{"read"} mode then \code{smart.prediction} must be assigned the list of data dependent parameters, which is stored on the fitted object. Otherwise, \code{smart.prediction} is ignored. } \item{max.smart}{ \code{max.smart} is the initial length of the list \code{.smart.prediction}. It is not important because \code{.smart.prediction} is made larger if needed. }} \value{ Nothing is returned. } \section{Side Effects}{ In \code{"write"} mode \code{.smart.prediction} in \code{smartpredenv} (\R) or frame 1 (S-PLUS) is assigned an empty list with \code{max.smart} components. In \code{"read"} mode \code{.smart.prediction} in \code{smartpredenv} (\R) or frame 1 (S-PLUS) is assigned \code{smart.prediction}. In both cases, \code{.smart.prediction.counter} in \code{smartpredenv} (\R) or frame 1 (S-PLUS) is assigned the value 0, and \code{.smart.prediction.mode} and \code{.max.smart} are written to \code{smartpredenv} (\R) or frame 1 (S-PLUS) too. } \details{ This function is only required by programmers writing a modelling function such as \code{\link[stats]{lm}} and \code{\link[stats]{glm}}, or a prediction functions of such, e.g., \code{\link[stats]{predict.lm}}. The function \code{setup.smart} operates by mimicking the operations of a first-in first-out stack (better known as a \emph{queue}). } \seealso{ \code{\link[stats]{lm}}, \code{\link[stats]{predict.lm}}. } \examples{ \dontrun{# Put at the beginning of lm setup.smart("write") } \dontrun{# Put at the beginning of predict.lm setup.smart("read", smart.prediction=object$smart.prediction) } } %\keyword{smart} \keyword{models} \keyword{regression} \keyword{programming} % Converted by Sd2Rd version 1.10. VGAM/man/seq2binomial.Rd0000644000176000001440000000706512136651105014473 0ustar ripleyusers\name{seq2binomial} \alias{seq2binomial} %- Also NEED an '\alias' for EACH other topic documented here. \title{ The Two-stage Sequential Binomial Distribution Family Function } \description{ Estimation of the probabilities of a two-stage binomial distribution. 
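% A sketch of the two-stage sampling scheme described under Details
% (the values of m, p and q here are illustrative only):
% m <- 20; p <- 0.6; q <- 0.3
% y1 <- rbinom(1000, size = m, prob = p)   # Stage 1 successes
% y2 <- rbinom(1000, size = y1, prob = q)  # Stage 2, among stage-1 successes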
} \usage{ seq2binomial(lprob1 = "logit", lprob2 = "logit", iprob1 = NULL, iprob2 = NULL, parallel = FALSE, apply.parint = TRUE, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lprob1, lprob2}{ Parameter link functions applied to the two probabilities, called \eqn{p} and \eqn{q} below. See \code{\link{Links}} for more choices. } \item{iprob1, iprob2}{ Optional initial value for the first and second probabilities respectively. A \code{NULL} means a value is obtained in the \code{initialize} slot. } \item{parallel, apply.parint, zero}{ Details at \code{\link{Links}}. } } \details{ This \pkg{VGAM} family function fits the model described by Crowder and Sweeting (1989) which is described as follows. Each of \eqn{m} spores has a probability \eqn{p} of germinating. Of the \eqn{y_1}{y1} spores that germinate, each has a probability \eqn{q} of bending in a particular direction. Let \eqn{y_2}{y2} be the number that bend in the specified direction. The probability model for this data is \eqn{P(y_1,y_2) =}{P(y1,y2) =} \deqn{ {m \choose y_1} p^{y_1} (1-p)^{m-y_1} {y_1 \choose y_2} q^{y_2} (1-q)^{y_1-y_2}}{% {choose(m,y1)} p^{y1} (1-p)^{m-y1} {choose(y1,y2)} q^{y2} (1-q)^{y1-y2}} for \eqn{0 < p < 1}, \eqn{0 < q < 1}, \eqn{y_1=1,\ldots,m}{y1=1,\ldots,m} and \eqn{y_2=1,\ldots,y_1}{y2=1,\ldots,y1}. Here, \eqn{p} is \code{prob1}, \eqn{q} is \code{prob2}. Although the Authors refer to this as the \emph{bivariate binomial} model, I have named it the \emph{(two-stage) sequential binomial} model. Fisher scoring is used. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Crowder, M. and Sweeting, T. (1989). Bayesian inference for a bivariate binomial distribution. \emph{Biometrika}, \bold{76}, 599--603. } \author{ Thomas W. Yee } \note{ The response must be a two-column matrix of sample proportions corresponding to \eqn{y_1}{y1} and \eqn{y_2}{y2}. The \eqn{m} values should be inputted with the \code{weights} argument of \code{\link{vglm}} and \code{\link{vgam}}. The fitted value is a two-column matrix of estimated probabilities \eqn{p} and \eqn{q}. A common form of error is when there are no trials for \eqn{y_1}{y1}, e.g., if \code{mvector} below has some values which are zero. } \seealso{ \code{\link{binomialff}}. } \examples{ sdata <- data.frame(mvector = round(rnorm(nn <- 100, m = 10, sd = 2)), x2 = runif(nn)) sdata <- transform(sdata, prob1 = logit(+2 - x2, inverse = TRUE), prob2 = logit(-2 + x2, inverse = TRUE)) sdata <- transform(sdata, successes1 = rbinom(nn, size = mvector, prob = prob1)) sdata <- transform(sdata, successes2 = rbinom(nn, size = successes1, prob = prob2)) sdata <- transform(sdata, y1 = successes1 / mvector) sdata <- transform(sdata, y2 = successes2 / successes1) fit <- vglm(cbind(y1, y2) ~ x2, seq2binomial, weight = mvector, data = sdata, trace = TRUE) coef(fit) coef(fit, matrix = TRUE) head(fitted(fit)) head(depvar(fit)) head(weights(fit, type = "prior")) # Same as with(sdata, mvector) } \keyword{models} \keyword{regression} VGAM/man/s.Rd0000644000176000001440000001021312136651105012335 0ustar ripleyusers\name{s} \alias{s} %- Also NEED an `\alias' for EACH other topic documented here. \title{ Defining smooths in VGAM formulae } \description{ \code{s} is used in the definition of (vector) smooth terms within \code{vgam} formulae. } \usage{ s(x, df = 4, spar = 0, ...) 
} %- maybe also `usage' for other objects documented here. \arguments{ \item{x}{ covariate (abscissae) to be smoothed. Note that \code{x} must be a \emph{single} variable and not a function of a variable. For example, \code{s(x)} is fine but \code{s(log(x))} will fail. In this case, let \code{logx <- log(x)} (in the data frame), say, and then use \code{s(logx)}. At this stage bivariate smoothers (\code{x} would be a two-column matrix) are not implemented. } \item{df}{ numerical vector of length \eqn{r}. Effective degrees of freedom: must lie between 1 (linear fit) and \eqn{n} (interpolation). Thus one could say that \code{df-1} is the \emph{nonlinear degrees of freedom} of the smooth. Recycling of values will be used if \code{df} is not of length \eqn{r}. If \code{spar} is positive then this argument is ignored. } \item{spar}{ numerical vector of length \eqn{r}. Positive smoothing parameters (after scaling). Larger values mean more smoothing so that the solution approaches a linear fit for that component function. A zero value means that \code{df} is used. Recycling of values will be used if \code{spar} is not of length \eqn{r}. } \item{\dots}{ Ignored for now. } } \details{ In this help file \eqn{M} is the number of additive predictors and \eqn{r} is the number of component functions to be estimated (so that \eqn{r} is an element from the set \{1,2,\ldots,\eqn{M}\}). Also, if \eqn{n} is the number of \emph{distinct} abscissae, then \code{s} will fail if \eqn{n < 7}. \code{s}, which is symbolic and does not perform any smoothing itself, only handles a single covariate. Note that \code{s} works in \code{\link{vgam}} only. It has no effect in \code{\link{vglm}} (actually, it is similar to the identity function \code{\link[base:AsIs]{I}} so that \code{s(x2)} is the same as \code{x2} in the LM model matrix). It differs from the \code{s} of the \pkg{gam} and \pkg{mgcv} packages; they should not be mixed together. Also, terms involving \code{s} should be simple additive terms, and should not involve interactions, nesting, etc. For example, \code{myfactor:s(x2)} is not a good idea. % It also differs from the S-PLUS \code{s} which allows % \code{spar} to be negative; \pkg{VGAM} does not allow this. } \value{ A vector with attributes that are (only) used by \code{vgam}. } \references{ Yee, T. W. and Wild, C. J. (1996) Vector generalized additive models. \emph{Journal of the Royal Statistical Society, Series B, Methodological}, \bold{58}, 481--493. } \author{ Thomas W. Yee } \note{ The vector cubic smoothing spline which \code{s()} represents is computationally demanding for large \eqn{M}. The cost is approximately \eqn{O(n M^3)} where \eqn{n} is the number of unique abscissae. An alternative to using \code{s} with \code{\link{vgam}} is \code{\link[splines]{bs}} and/or \code{\link[splines]{ns}} with \code{\link{vglm}}. The latter implements half-stepping, which is helpful if convergence is difficult. } % ~Make other sections like WARNING with \section{WARNING }{....} ~ \seealso{ \code{\link{vgam}}, \code{\link{vsmooth.spline}}.
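% A sketch of the convention for transformed covariates described under
% the 'x' argument above (hdata is a hypothetical data frame):
% hdata <- transform(hdata, logx2 = log(x2))
% fit <- vgam(y ~ s(logx2, df = 3), binomialff, data = hdata)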
} \examples{ # Nonparametric logistic regression fit <- vgam(agaaus ~ s(altitude, df = 2), binomialff, data = hunua) \dontrun{ plot(fit, se = TRUE) } # Bivariate logistic model with artificial data nn <- 300 bdata <- data.frame(x1 = runif(nn), x2 = runif(nn)) bdata <- transform(bdata, y1 = rbinom(nn, size = 1, prob = logit(sin(2 * x2), inverse = TRUE)), y2 = rbinom(nn, size = 1, prob = logit(sin(2 * x2), inverse = TRUE))) fit <- vgam(cbind(y1, y2) ~ x1 + s(x2, 3), trace = TRUE, binom2.or(exchangeable = TRUE ~ s(x2, 3)), data = bdata) coef(fit, matrix = TRUE) # Hard to interpret \dontrun{ plot(fit, se = TRUE, which.term = 2, scol = "blue") } } \keyword{models} \keyword{regression} \keyword{smooth} VGAM/man/ruge.Rd0000644000176000001440000000213512136651105013041 0ustar ripleyusers\name{ruge} \alias{ruge} \docType{data} \title{Rutherford-Geiger Polonium Data} \description{ Decay counts of polonium recorded by Rutherford and Geiger (1910). } \usage{data(ruge)} \format{ This data frame contains the following columns: \describe{ \item{counts}{a numeric vector, counts or frequencies} \item{number}{a numeric vector, the number of decays} } } \details{ These are the radioactive decay counts of polonium recorded by Rutherford and Geiger (1910), representing the number of scintillations in 2608 intervals of 1/8 minute each. For example, zero counts were observed in 57 of the intervals. The counts can be thought of as being approximately Poisson distributed. } \source{ Rutherford, E. and Geiger, H. (1910) The probability variations in the distribution of alpha particles. \emph{Philosophical Magazine}, \bold{20}, 698--704. } %\references{ %} \examples{ lambdahat <- with(ruge, weighted.mean(number, w = counts)) (N <- with(ruge, sum(counts))) with(ruge, cbind(number, counts, fitted = round(N * dpois(number, lam = lambdahat)))) } \keyword{datasets} VGAM/man/rrvglm.optim.control.Rd0000644000176000001440000000376712136651105016230 0ustar ripleyusers\name{rrvglm.optim.control} \alias{rrvglm.optim.control} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Control function for rrvglm() calling optim() } \description{ Algorithmic constants and parameters for running \code{optim} within \code{rrvglm} are set using this function. } \usage{ rrvglm.optim.control(Fnscale = 1, Maxit = 100, Switch.optimizer = 3, Abstol = -Inf, Reltol = sqrt(.Machine$double.eps), ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{Fnscale}{ Passed into \code{optim} as \code{fnscale}. } \item{Maxit}{ Passed into \code{optim} as \code{maxit}. } \item{Switch.optimizer}{ Iteration number when the "Nelder-Mead" method of \code{optim} is switched to the quasi-Newton "BFGS" method. Assigning \code{Switch.optimizer} a negative number means always BFGS, while assigning \code{Switch.optimizer} a value greater than \code{Maxit} means always use Nelder-Mead. } \item{Abstol}{ Passed into \code{optim} as \code{abstol}. } \item{Reltol}{ Passed into \code{optim} as \code{reltol}. } \item{\dots}{ Ignored. } } \details{ See \code{\link[stats]{optim}} for more details. } \value{ A list with components equal to the arguments. } %\references{ ~put references to the literature/web site here ~ } \author{ Thomas W. Yee } \note{ The transition between optimization methods may be unstable, so users may have to vary the value of \code{Switch.optimizer}.
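% For example, one might refit with several switch points and compare
% (a sketch; ydata and the model are hypothetical, and these control
% arguments are assumed to be passed through to rrvglm.optim.control()
% as described in rrvglm.control()):
% fit.a <- rrvglm(cbind(y1, y2, y3) ~ x2, poissonff, data = ydata,
%                 Algorithm = "derivative", Switch.optimizer = 3)
% fit.b <- rrvglm(cbind(y1, y2, y3) ~ x2, poissonff, data = ydata,
%                 Algorithm = "derivative", Switch.optimizer = -1)  # BFGS only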
Practical experience with \code{Switch.optimizer} shows that setting it to too large a value may lead to a local solution, whereas setting it to a low value is more likely to obtain the global solution. It appears that, if BFGS kicks in too late when the Nelder-Mead algorithm is starting to converge to a local solution, then switching to BFGS will not be sufficient to bypass convergence to that local solution. } \seealso{ \code{\link{rrvglm.control}}, \code{\link[stats]{optim}}. } %\examples{ %} \keyword{models} \keyword{regression} VGAM/man/rrvglm.control.Rd0000644000176000001440000002102112136651105015064 0ustar ripleyusers\name{rrvglm.control} \alias{rrvglm.control} %- Also NEED an `\alias' for EACH other topic documented here. \title{ Control function for rrvglm } \description{ Algorithmic constants and parameters for running \code{rrvglm} are set using this function. } \usage{ rrvglm.control(Rank = 1, Algorithm = c("alternating", "derivative"), Corner = TRUE, Uncorrelated.lv = FALSE, Wmat = NULL, Svd.arg = FALSE, Index.corner = if (length(szero)) head((1:1000)[-szero], Rank) else 1:Rank, Ainit = NULL, Alpha = 0.5, Bestof = 1, Cinit = NULL, Etamat.colmax = 10, SD.Ainit = 0.02, SD.Cinit = 0.02, szero = NULL, noRRR = ~1, Norrr = NA, trace = FALSE, Use.Init.Poisson.QO = FALSE, checkwz = TRUE, wzepsilon = .Machine$double.eps^0.75, ...) } %- maybe also `usage' for other objects documented here. \arguments{ \item{Rank}{ The numerical rank \eqn{R} of the model. Must be an element from the set \{1,2,\ldots,min(\eqn{M},\emph{p2})\}. Here, the vector of explanatory variables \bold{x} is partitioned into (\bold{x1},\bold{x2}), which is of dimension \emph{p1}+\emph{p2}. The variables making up \bold{x1} are given by the terms in the \code{noRRR} argument, and the rest of the terms comprise \bold{x2}. } \item{Algorithm}{ Character string indicating what algorithm is to be used. The default is the first one. } \item{Corner}{ Logical indicating whether corner constraints are to be used. This is one method for ensuring a unique solution. If \code{TRUE}, \code{Index.corner} specifies the \eqn{R} rows of the constraint matrices that are used as the corner constraints, i.e., they hold an order-\eqn{R} identity matrix. } \item{Uncorrelated.lv}{ Logical indicating whether uncorrelated latent variables are to be used. This normalization forces the variance-covariance matrix of the latent variables to be \code{diag(Rank)}, i.e., unit variance and uncorrelated. This constraint does not lead to a unique solution because it can be rotated. } \item{Wmat}{ Yet to be done. } \item{Svd.arg}{ Logical indicating whether a singular value decomposition of the outer product is to be computed. This is another normalization which ensures uniqueness. See the argument \code{Alpha} below. } \item{Index.corner}{ Specifies the \eqn{R} rows of the constraint matrices that are used for the corner constraints, i.e., they hold an order-\eqn{R} identity matrix. } \item{Alpha}{ The exponent in the singular value decomposition that is used in the first part: if the SVD is \eqn{U D V^T}{ U \%*\% D \%*\% t(V) } then the first and second parts are \eqn{U D^{\alpha}}{ U \%*\% D^Alpha} and \eqn{D^{1-\alpha} V^T}{D^(1-Alpha) \%*\% t(V)} respectively. A value of 0.5 is `symmetrical'. This argument is used only when \code{Svd.arg=TRUE}. } \item{Bestof}{ Integer. The best of \code{Bestof} models fitted is returned; see the sketch after this description. This argument helps guard against local solutions by (hopefully) finding the global solution from many fits.
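% For example (a sketch, reusing the pneumo example at the bottom of
% this file):
% fit <- rrvglm(cbind(normal, mild, severe) ~ let + x3, multinomial,
%               pneumo, Rank = 1, Index.corner = 2, Bestof = 4)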
The argument works only when the function generates its own initial value for \bold{C}, i.e., when \bold{C} is \emph{not} passed in as initial values. } \item{Ainit, Cinit}{ Initial \bold{A} and \bold{C} matrices which may speed up convergence. They must be of the correct dimension. } \item{Etamat.colmax}{ Positive integer, no smaller than \code{Rank}. Controls the amount of memory used by \code{.Init.Poisson.QO()}. It is the maximum number of columns allowed for the pseudo-response and its weights. In general, the larger the value, the better the initial value. Used only if \code{Use.Init.Poisson.QO=TRUE}. } % \item{Quadratic}{ % Logical indicating whether a \emph{Quadratic} % RR-VGLM is to be fitted. If \code{TRUE}, an object of class % \code{"qrrvglm"} will be returned, otherwise \code{"rrvglm"}. % } \item{szero}{ Integer vector specifying which rows of the estimated constraint matrices (\bold{A}) are to be all zeros. These are called \emph{structural zeros}. Must not have any common value with \code{Index.corner}, and be a subset of the vector \code{1:M}. } \item{SD.Ainit, SD.Cinit}{ Standard deviation of the initial values for the elements of \bold{A} and \bold{C}. These are normally distributed with mean zero. This argument is used only if \code{Use.Init.Poisson.QO = FALSE}. } % \item{ppar}{ Ignore this. } \item{noRRR}{ Formula giving terms that are \emph{not} to be included in the reduced-rank regression. That is, \code{noRRR} specifies which explanatory variables are in the \eqn{x_1}{x1} vector of \code{\link{rrvglm}}, and the rest go into \eqn{x_2}{x2}. The \eqn{x_1}{x1} variables constitute the \eqn{\bold{B}_1}{\bold{B}1} matrix in Yee and Hastie (2003). Those \eqn{x_2}{x2} variables which are subject to the reduced-rank regression correspond to the \eqn{\bold{B}_2}{\bold{B}2} matrix. Set \code{noRRR = NULL} for the reduced-rank regression to be applied to every explanatory variable including the intercept. } \item{Norrr}{ Defunct. Please use \code{noRRR}. Use of \code{Norrr} will become an error soon. } \item{trace}{ Logical indicating if output should be produced for each iteration. % Useful when \code{Quadratic=TRUE} because QRR-VGLMs are % computationally expensive and it's good to see that the program % is working! } \item{Use.Init.Poisson.QO}{ Logical indicating whether the \code{.Init.Poisson.QO()} should be used to obtain initial values for the \bold{C}. The function uses a new method that can work well if the data are Poisson counts coming from an equal-tolerances QRR-VGLM (CQO). This option is less realistic for RR-VGLMs compared to QRR-VGLMs. } \item{checkwz}{ logical indicating whether the diagonal elements of the working weight matrices should be checked to see whether they are sufficiently positive, i.e., greater than \code{wzepsilon}. If not, any values less than \code{wzepsilon} are replaced with this value. } \item{wzepsilon}{ Small positive number used to test whether the diagonals of the working weight matrices are sufficiently positive. } \item{\dots}{ Variables in \dots are passed into \code{\link{vglm.control}}. If the derivative algorithm is used, then \dots are also passed into \code{\link{rrvglm.optim.control}}. } In the above, \eqn{R} is the \code{Rank} and \eqn{M} is the number of linear predictors. } \details{ % QRR-VGLMs are an extension of RR-VGLMs and are useful for constrained % ordination.
QRR-VGLMs fitted with \pkg{VGAM} allow a maximum % likelihood solution to constrained quadratic ordination (CQO; % formerly called canonical Gaussian ordination) models. % For QRR-VGLMs, if \code{EqualTolerances=TRUE} and % \code{ITolerances=FALSE} then the default is that the \bold{C} % matrix is constrained by forcing the latent variables to have sample % variance-covariance matrix equalling \code{diag(Rank)}, i.e., unit % variance and uncorrelated. \pkg{VGAM} supports three normalizations to ensure a unique solution. Of these, only corner constraints will work with \code{summary} of RR-VGLM objects. } \value{ A list with components matching the input names. Some error checking is done, but not much. } \references{ Yee, T. W. and Hastie, T. J. (2003) Reduced-rank vector generalized linear models. \emph{Statistical Modelling}, \bold{3}, 15--41. } \author{ Thomas W. Yee } \note{ % The function call \code{cqo(...)} is equivalent to % \code{rrvglm(..., Quadratic=TRUE)}, and hence uses this function. % For QRR-VGLMs, the function \code{\link{qrrvglm.control}} is called too. The arguments in this function begin with an upper case letter to help avoid interference with those of \code{\link{vglm.control}}. In the example below a rank-1 \emph{stereotype} model (Anderson, 1984) is fitted. } %- \section{Warning }{ } \seealso{ \code{\link{rrvglm}}, \code{\link{rrvglm.optim.control}}, \code{\link{rrvglm-class}}, \code{\link{vglm}}, \code{\link{vglm.control}}, \code{\link{cqo}}. } \examples{ \dontrun{ set.seed(111) pneumo <- transform(pneumo, let = log(exposure.time), x3 = runif(nrow(pneumo))) # x3 is random noise fit <- rrvglm(cbind(normal, mild, severe) ~ let + x3, multinomial, pneumo, Rank = 1, Index.corner = 2) constraints(fit) vcov(fit) summary(fit) } } \keyword{models} \keyword{regression} VGAM/man/rrvglm.Rd0000644000176000001440000002277712136651105013426 0ustar ripleyusers\name{rrvglm} \alias{rrvglm} %- Also NEED an `\alias' for EACH other topic documented here. \title{ Fitting Reduced-Rank Vector Generalized Linear Models (RR-VGLMs) } \description{ A \emph{reduced-rank vector generalized linear model} (RR-VGLM) is fitted. RR-VGLMs are VGLMs but some of the constraint matrices are estimated. In this documentation, \eqn{M} is the number of linear predictors. } \usage{ rrvglm(formula, family, data = list(), weights = NULL, subset = NULL, na.action = na.fail, etastart = NULL, mustart = NULL, coefstart = NULL, control = rrvglm.control(...), offset = NULL, method = "rrvglm.fit", model = FALSE, x.arg = TRUE, y.arg = TRUE, contrasts = NULL, constraints = NULL, extra = NULL, qr.arg = FALSE, smart = TRUE, ...) } %- maybe also `usage' for other objects documented here. \arguments{ \item{formula, family, weights}{ See \code{\link{vglm}}. } \item{data}{ an optional data frame containing the variables in the model. By default the variables are taken from \code{environment(formula)}, typically the environment from which \code{rrvglm} is called. } \item{subset, na.action}{ See \code{\link{vglm}}. } \item{etastart, mustart, coefstart}{ See \code{\link{vglm}}. } \item{control}{ a list of parameters for controlling the fitting process. See \code{\link{rrvglm.control}} for details. } \item{offset, model, contrasts}{ See \code{\link{vglm}}. } \item{method}{ the method to be used in fitting the model. The default (and presently only) method \code{rrvglm.fit} uses iteratively reweighted least squares (IRLS). 
} \item{x.arg, y.arg}{ logical values indicating whether the model matrix and response vector/matrix used in the fitting process should be assigned in the \code{x} and \code{y} slots. Note the model matrix is the LM model matrix; to get the VGLM model matrix type \code{model.matrix(vglmfit)} where \code{vglmfit} is a \code{vglm} object. } \item{constraints}{ See \code{\link{vglm}}. } \item{extra, smart, qr.arg}{ See \code{\link{vglm}}. } \item{\dots}{ further arguments passed into \code{\link{rrvglm.control}}. } } \details{ The central formula is given by \deqn{\eta = B_1^T x_1 + A \nu}{% eta = B_1^T x_1 + A nu} where \eqn{x_1}{x1} is a vector (usually just a 1 for an intercept), \eqn{x_2}{x2} is another vector of explanatory variables, and \eqn{\nu = C^T x_2}{nu = C^T x_2} is an \eqn{R}-vector of latent variables. Here, \eqn{\eta}{eta} is a vector of linear predictors, e.g., the \eqn{m}th element is \eqn{\eta_m = \log(E[Y_m])}{eta_m = log(E[Y_m])} for the \eqn{m}th Poisson response. The matrices \eqn{B_1}, \eqn{A} and \eqn{C} are estimated from the data, i.e., contain the regression coefficients. For ecologists, the central formula represents a \emph{constrained linear ordination} (CLO) since it is linear in the latent variables. It means that the response is a monotonically increasing or decreasing function of the latent variables. For identifiability it is common to enforce \emph{corner constraints} on \eqn{A}: by default, the top \eqn{R} by \eqn{R} submatrix is fixed to be the order-\eqn{R} identity matrix and the remainder of \eqn{A} is estimated. The underlying algorithm of RR-VGLMs is iteratively reweighted least squares (IRLS) with an optimizing algorithm applied within each IRLS iteration (e.g., alternating algorithm). In theory, any \pkg{VGAM} family function that works for \code{\link{vglm}} and \code{\link{vgam}} should work for \code{rrvglm} too. The function that actually does the work is \code{rrvglm.fit}; it is \code{vglm.fit} with some extra code. } \value{ An object of class \code{"rrvglm"}, which has the same slots as a \code{"vglm"} object. The only difference is that some of the constraint matrices are estimated rather than known. But \pkg{VGAM} stores the models the same way internally. The slots of \code{"vglm"} objects are described in \code{\link{vglm-class}}. } \references{ Yee, T. W. and Hastie, T. J. (2003) Reduced-rank vector generalized linear models. \emph{Statistical Modelling}, \bold{3}, 15--41. Yee, T. W. (2004) A new technique for maximum-likelihood canonical Gaussian ordination. \emph{Ecological Monographs}, \bold{74}, 685--701. Anderson, J. A. (1984) Regression and ordered categorical variables. \emph{Journal of the Royal Statistical Society, Series B, Methodological}, \bold{46}, 1--30. Yee, T. W. (2013) Reduced-rank vector generalized linear models with two linear predictors. \emph{Computational Statistics and Data Analysis}. Documentation accompanying the \pkg{VGAM} package at \url{http://www.stat.auckland.ac.nz/~yee} contains further information and examples. } \author{ Thomas W. Yee } \note{ The arguments of \code{rrvglm} are in general the same as those of \code{\link{vglm}} but with some extras in \code{\link{rrvglm.control}}. The smart prediction (\code{\link{smartpred}}) library is packaged with the \pkg{VGAM} library. In an example below, a rank-1 \emph{stereotype} model of Anderson (1984) is fitted to some car data. The reduced-rank regression is performed, adjusting for two covariates.
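% The estimated matrices of such a fit can be inspected directly
% (a sketch; fit is the fitted "rrvglm" object):
% Coef(fit)@A   # Estimated A
% Coef(fit)@B1  # Estimated B1
% Coef(fit)@C   # Estimated C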
Setting a trivial constraint matrix for the variables in \eqn{x_2}{x2} avoids a warning message when it is overwritten by a (common) estimated constraint matrix. It shows that German cars tend to be more expensive than American cars, given a car of fixed weight and width. If \code{fit <- rrvglm(..., data = mydata)} then \code{summary(fit)} requires corner constraints and no missing values in \code{mydata}. Often the estimated variance-covariance matrix of the parameters is not positive-definite; if this occurs, try refitting the model with a different value for \code{Index.corner}. For \emph{constrained quadratic ordination} (CQO) see \code{\link{cqo}} for more details about QRR-VGLMs. With multivariate binary responses, one must use \code{binomialff(mv = TRUE)} to indicate that the response (matrix) is multivariate. Otherwise, it is interpreted as a single binary response variable. } % zzz; arguments of \code{\link{vglm}} are definitive. They're copied here. \seealso{ \code{\link{rrvglm.control}}, % \code{\link{qrrvglm.control}}, \code{\link{lvplot.rrvglm}} (same as \code{\link{biplot.rrvglm}}), % \code{\link{vcovqrrvglm}}, \code{\link{rrvglm-class}}, \code{\link{grc}}, \code{\link{cqo}}, \code{\link{vglmff-class}}, \code{\link{vglm}}, \code{\link{vglm-class}}, \code{\link{smartpred}}, \code{rrvglm.fit}. Special family functions include \code{\link{negbinomial}}, \code{\link{zipoisson}} and \code{\link{zinegbinomial}} (see Yee (2012) and \pkg{COZIGAM}). Methods functions include \code{\link{Coef.rrvglm}}, \code{summary.rrvglm}, etc. Data include \code{\link{crashi}}. } \examples{ \dontrun{ # Example 1: RR negative binomial (RR-NB) with Var(Y) = mu + delta1 * mu^delta2 nn <- 1000 # Number of observations delta1 <- 3.0 # Specify this delta2 <- 1.5 # Specify this; should be greater than unity a21 <- 2 - delta2 mydata <- data.frame(x2 = runif(nn), x3 = runif(nn)) mydata <- transform(mydata, mu = exp(2 + 3 * x2 + 0 * x3)) mydata <- transform(mydata, y2 = rnbinom(nn, mu=mu, size=(1/delta1)*mu^a21)) plot(y2 ~ x2, data = mydata, pch = "+", col = 'blue', las = 1, main = paste("Var(Y) = mu + ", delta1, " * mu^", delta2, sep = "")) rrnb2 <- rrvglm(y2 ~ x2 + x3, negbinomial(zero = NULL), mydata, trace = TRUE) a21.hat <- (Coef(rrnb2)@A)["log(size)", 1] beta11.hat <- Coef(rrnb2)@B1["(Intercept)", "log(mu)"] beta21.hat <- Coef(rrnb2)@B1["(Intercept)", "log(size)"] (delta1.hat <- exp(a21.hat * beta11.hat - beta21.hat)) (delta2.hat <- 2 - a21.hat) # exp(a21.hat * predict(rrnb2)[1,1] - predict(rrnb2)[1,2]) # delta1.hat summary(rrnb2) # Obtain a 95 percent confidence interval for delta2: se.a21.hat <- sqrt(vcov(rrnb2)["I(lv.mat)", "I(lv.mat)"]) ci.a21 <- a21.hat + c(-1, 1) * 1.96 * se.a21.hat (ci.delta2 <- 2 - rev(ci.a21)) # The 95 percent confidence interval Confint.rrnb(rrnb2) # Quick way to get it # Plot the abundances and fitted values against the latent variable plot(y2 ~ lv(rrnb2), data = mydata, col = "blue", xlab = "Latent variable", las = 1) ooo <- order(lv(rrnb2)) lines(fitted(rrnb2)[ooo] ~ lv(rrnb2)[ooo], col = "red") # Example 2: stereotype model (reduced-rank multinomial logit model) data(car.all) index <- with(car.all, Country == "Germany" | Country == "USA" | Country == "Japan" | Country == "Korea") scar <- car.all[index, ] # standardized car data fcols <- c(13,14,18:20,22:26,29:31,33,34,36) # These are factors scar[,-fcols] = scale(scar[, -fcols]) # Standardize all numerical vars ones <- matrix(1, 3, 1) clist <- list("(Intercept)" = diag(3), Width = ones, Weight = ones, Disp.
= diag(3), Tank = diag(3), Price = diag(3), Frt.Leg.Room = diag(3)) set.seed(111) fit <- rrvglm(Country ~ Width + Weight + Disp. + Tank + Price + Frt.Leg.Room, multinomial, data = scar, Rank = 2, trace = TRUE, constraints = clist, noRRR = ~ 1 + Width + Weight, Uncor = TRUE, Corner = FALSE, Bestof = 2) fit@misc$deviance # A history of the fits Coef(fit) biplot(fit, chull = TRUE, scores = TRUE, clty = 2, Ccex = 2, ccol = "blue", scol = "red", Ccol = "darkgreen", Clwd = 2, main = "1=Germany, 2=Japan, 3=Korea, 4=USA") } } \keyword{models} \keyword{regression} VGAM/man/rrvglm-class.Rd0000644000176000001440000001622412136651105014517 0ustar ripleyusers\name{rrvglm-class} \docType{class} \alias{rrvglm-class} \title{Class ``rrvglm'' } \description{ Reduced-rank vector generalized linear models. } \section{Objects from the Class}{ Objects can be created by calls to \code{\link{rrvglm}}. } \section{Slots}{ \describe{ \item{\code{extra}:}{ Object of class \code{"list"}; the \code{extra} argument on entry to \code{vglm}. This contains any extra information that might be needed by the family function. } \item{\code{family}:}{ Object of class \code{"vglmff"}. The family function. } \item{\code{iter}:}{ Object of class \code{"numeric"}. The number of IRLS iterations used. } \item{\code{predictors}:}{ Object of class \code{"matrix"} with \eqn{M} columns which holds the \eqn{M} linear predictors. } \item{\code{assign}:}{ Object of class \code{"list"}, from class \code{ "vlm"}. This named list gives information matching the columns and the (LM) model matrix terms. } \item{\code{call}:}{ Object of class \code{"call"}, from class \code{ "vlm"}. The matched call. } \item{\code{coefficients}:}{ Object of class \code{"numeric"}, from class \code{ "vlm"}. A named vector of coefficients. } \item{\code{constraints}:}{ Object of class \code{"list"}, from class \code{ "vlm"}. A named list of constraint matrices used in the fitting. } \item{\code{contrasts}:}{ Object of class \code{"list"}, from class \code{ "vlm"}. The contrasts used (if any). } \item{\code{control}:}{ Object of class \code{"list"}, from class \code{ "vlm"}. A list of parameters for controlling the fitting process. See \code{\link{vglm.control}} for details. } \item{\code{criterion}:}{ Object of class \code{"list"}, from class \code{ "vlm"}. List of convergence criterion evaluated at the final IRLS iteration. } \item{\code{df.residual}:}{ Object of class \code{"numeric"}, from class \code{ "vlm"}. The residual degrees of freedom. } \item{\code{df.total}:}{ Object of class \code{"numeric"}, from class \code{ "vlm"}. The total degrees of freedom. } \item{\code{dispersion}:}{ Object of class \code{"numeric"}, from class \code{ "vlm"}. The scaling parameter. } \item{\code{effects}:}{ Object of class \code{"numeric"}, from class \code{ "vlm"}. The effects. } \item{\code{fitted.values}:}{ Object of class \code{"matrix"}, from class \code{ "vlm"}. The fitted values. This is usually the mean but may be quantiles, or the location parameter, e.g., in the Cauchy model. } \item{\code{misc}:}{ Object of class \code{"list"}, from class \code{ "vlm"}. A named list to hold miscellaneous parameters. } \item{\code{model}:}{ Object of class \code{"data.frame"}, from class \code{ "vlm"}. The model frame. } \item{\code{na.action}:}{ Object of class \code{"list"}, from class \code{ "vlm"}. A list holding information about missing values. } \item{\code{offset}:}{ Object of class \code{"matrix"}, from class \code{ "vlm"}. If non-zero, a \eqn{M}-column matrix of offsets. 
} \item{\code{post}:}{ Object of class \code{"list"}, from class \code{ "vlm"} where post-analysis results may be put. } \item{\code{preplot}:}{ Object of class \code{"list"}, from class \code{ "vlm"} used by \code{\link{plotvgam}}; the plotting parameters may be put here. } \item{\code{prior.weights}:}{ Object of class \code{"matrix"}, from class \code{ "vlm"} holding the initially supplied weights. } \item{\code{qr}:}{ Object of class \code{"list"}, from class \code{ "vlm"}. QR decomposition at the final iteration. } \item{\code{R}:}{ Object of class \code{"matrix"}, from class \code{ "vlm"}. The \bold{R} matrix in the QR decomposition used in the fitting. } \item{\code{rank}:}{ Object of class \code{"integer"}, from class \code{ "vlm"}. Numerical rank of the fitted model. } \item{\code{residuals}:}{ Object of class \code{"matrix"}, from class \code{ "vlm"}. The \emph{working} residuals at the final IRLS iteration. } \item{\code{rss}:}{ Object of class \code{"numeric"}, from class \code{ "vlm"}. Residual sum of squares at the final IRLS iteration with the adjusted dependent vectors and weight matrices. } \item{\code{smart.prediction}:}{ Object of class \code{"list"}, from class \code{ "vlm"}. A list of data-dependent parameters (if any) that are used by smart prediction. } \item{\code{terms}:}{ Object of class \code{"list"}, from class \code{ "vlm"}. The \code{\link[stats]{terms}} object used. } \item{\code{weights}:}{ Object of class \code{"matrix"}, from class \code{ "vlm"}. The weight matrices at the final IRLS iteration. This is in matrix-band form. } \item{\code{x}:}{ Object of class \code{"matrix"}, from class \code{ "vlm"}. The model matrix (LM, not VGLM). } \item{\code{xlevels}:}{ Object of class \code{"list"}, from class \code{ "vlm"}. The levels of the factors, if any, used in fitting. } \item{\code{y}:}{ Object of class \code{"matrix"}, from class \code{ "vlm"}. The response, in matrix form. } \item{\code{Xm2}:}{Object of class \code{"matrix"}, from class \code{ "vlm"}. See \code{\link{vglm-class}}). } \item{\code{Ym2}:}{Object of class \code{"matrix"}, from class \code{ "vlm"}. See \code{\link{vglm-class}}). } \item{\code{callXm2}:}{ Object of class \code{"call"}, from class \code{ "vlm"}. The matched call for argument \code{form2}. } } } \section{Extends}{ Class \code{"vglm"}, directly. Class \code{"vlm"}, by class "vglm". } \section{Methods}{ \describe{ \item{biplot}{\code{signature(x = "rrvglm")}: biplot. } \item{Coef}{\code{signature(object = "rrvglm")}: more detailed coefficients giving \bold{A}, \eqn{\bold{B}_1}{\bold{B}1}, \bold{C}, etc. } \item{biplot}{\code{signature(object = "rrvglm")}: biplot. } \item{print}{\code{signature(x = "rrvglm")}: short summary of the object. } \item{summary}{\code{signature(object = "rrvglm")}: a more detailed summary of the object. } } } \references{ Yee, T. W. and Hastie, T. J. (2003) Reduced-rank vector generalized linear models. \emph{Statistical Modelling}, \bold{3}, 15--41. Yee, T. W. and Wild, C. J. (1996) Vector generalized additive models. \emph{Journal of the Royal Statistical Society, Series B, Methodological}, \bold{58}, 481--493. \url{http://www.stat.auckland.ac.nz/~yee} } \author{ Thomas W. Yee } \note{ The slots of \code{"rrvglm"} objects are currently identical to \code{"vglm"} objects. } % ~Make other sections like Warning with \section{Warning }{....} ~ % zzz need to make sure this function matches \code{\link{vglm-class}}, %where \code{\link{vglm-class}} is definitive. 
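% Slots are accessed with @, e.g. (a sketch; fit is a fitted "rrvglm"):
% fit@iter              # Number of IRLS iterations
% head(fit@predictors)  # The M linear predictors
% fit@misc$deviance     # As used in the examples of rrvglm()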
\seealso{ \code{\link{rrvglm}}, \code{\link{lvplot.rrvglm}}, \code{\link{vglmff-class}}. } \examples{ \dontrun{ # Rank-1 stereotype model of Anderson (1984) pneumo <- transform(pneumo, let = log(exposure.time), x3 = runif(nrow(pneumo))) # x3 is unrelated fit <- rrvglm(cbind(normal, mild, severe) ~ let + x3, multinomial, pneumo, Rank = 1) Coef(fit) } } \keyword{classes} % set.seed(111) VGAM/man/rrar.Rd0000644000176000001440000000613212136651105013046 0ustar ripleyusers\name{rrar} \alias{rrar} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Nested reduced-rank autoregressive models for multiple time series } \description{ Estimates the parameters of a nested reduced-rank autoregressive model for multiple time series. } \usage{ rrar(Ranks = 1, coefstart = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{Ranks}{ Vector of integers: the ranks of the model. Each value must be at least one and no more than \code{M}, where \code{M} is the number of response variables in the time series. The length of \code{Ranks} is the \emph{lag}, which is often denoted by the symbol \emph{L} in the literature. } \item{coefstart}{ Optional numerical vector of initial values for the coefficients. By default, the family function chooses these automatically. } } \details{ Full details are given in Ahn and Reinsel (1988). Convergence may be very slow, so setting \code{maxit = 50}, say, may help. If convergence is not obtained, you might like to try inputting different initial values. Setting \code{trace = TRUE} in \code{\link{vglm}} is useful for monitoring the progress at each iteration. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Ahn, S. and Reinsel, G. C. (1988) Nested reduced-rank autoregressive models for multiple time series. \emph{Journal of the American Statistical Association}, \bold{83}, 849--856. } \author{ T. W. Yee } \note{ This family function should be used within \code{\link{vglm}} and not with \code{\link{rrvglm}} because it does not fit into the RR-VGLM framework exactly. Instead, the reduced-rank model is formulated as a VGLM! A methods function \code{Coef.rrar}, say, has yet to be written. It would return the quantities \code{Ak1}, \code{C}, \code{D}, \code{omegahat}, \code{Phi}, etc. as slots, and then \code{show.Coef.rrar} would also need to be written. } \seealso{ \code{\link{vglm}}, \code{\link{grain.us}}.
} \examples{ \dontrun{ year <- seq(1961 + 1/12, 1972 + 10/12, by = 1/12) par(mar = c(4, 4, 2, 2) + 0.1, mfrow = c(2, 2)) for(ii in 1:4) { plot(year, grain.us[, ii], main = names(grain.us)[ii], las = 1, type = "l", xlab = "", ylab = "", col = "blue") points(year, grain.us[, ii], pch = "*", col = "blue") } apply(grain.us, 2, mean) # mu vector cgrain <- scale(grain.us, scale = FALSE) # Center the time series only fit <- vglm(cgrain ~ 1, rrar(Ranks = c(4, 1)), trace = TRUE) summary(fit) print(fit@misc$Ak1, dig = 2) print(fit@misc$Cmatrices, dig = 3) print(fit@misc$Dmatrices, dig = 3) print(fit@misc$omegahat, dig = 3) print(fit@misc$Phimatrices, dig = 2) par(mar = c(4, 4, 2, 2) + 0.1, mfrow = c(4, 1)) for(ii in 1:4) { plot(year, fit@misc$Z[, ii], main = paste("Z", ii, sep = ""), type = "l", xlab = "", ylab = "", las = 1, col = "blue") points(year, fit@misc$Z[, ii], pch = "*", col = "blue") } } } \keyword{ts} \keyword{regression} \keyword{models} VGAM/man/rlplot.egev.Rd0000644000176000001440000001167212136651105014346 0ustar ripleyusers\name{rlplot.egev} \alias{rlplot.egev} \alias{rlplot.gev} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Return Level Plot for GEV Fits } \description{ A return level plot is constructed for a GEV-type model. } \usage{ rlplot.egev(object, plot.it = TRUE, probability = c((1:9)/100, (1:9)/10, 0.95, 0.99, 0.995, 0.999), add.arg = FALSE, xlab = "Return Period", ylab = "Return Level", main = "Return Level Plot", pch = par()$pch, pcol.arg = par()$col, pcex = par()$cex, llty.arg = par()$lty, lcol.arg = par()$col, llwd.arg = par()$lwd, slty.arg = par()$lty, scol.arg = par()$col, slwd.arg = par()$lwd, ylim = NULL, log = TRUE, CI = TRUE, epsilon = 1e-05, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{object}{ A \pkg{VGAM} extremes model of the GEV-type, produced by \code{\link{vglm}} with a family function either \code{"gev"} or \code{"egev"}. } \item{plot.it}{ Logical. Plot it? If \code{FALSE} no plot will be done. } \item{probability}{ Numeric vector of probabilities used. } \item{add.arg}{ Logical. Add the plot to an existing plot? } \item{xlab}{ Caption for the x-axis. See \code{\link[graphics]{par}}. } \item{ylab}{ Caption for the y-axis. See \code{\link[graphics]{par}}. } \item{main}{ Title of the plot. See \code{\link[graphics]{title}}. } \item{pch}{ Plotting character. See \code{\link[graphics]{par}}. } \item{pcol.arg}{ Color of the points. See the \code{col} argument of \code{\link[graphics]{par}}. } \item{pcex}{ Character expansion of the points. See the \code{cex} argument of \code{\link[graphics]{par}}. } \item{llty.arg}{ Line type. See the \code{lty} argument of \code{\link[graphics]{par}}. } \item{lcol.arg}{ Color of the lines. See the \code{col} argument of \code{\link[graphics]{par}}. } \item{llwd.arg}{ Line width. See the \code{lwd} argument of \code{\link[graphics]{par}}. } \item{slty.arg, scol.arg, slwd.arg}{ Corresponding arguments for the lines used for the confidence intervals. Used only if \code{CI = TRUE}. } \item{ylim}{ Limits for the y-axis. Numeric of length 2. } \item{log}{ Logical. If \code{TRUE} then \code{log = ""} otherwise \code{log = "x"}. This changes the labelling of the x-axis only. } \item{CI}{ Logical. Add in a 95 percent confidence interval? } \item{epsilon}{ Numeric, close to zero. Used for the finite-difference approximation to the first derivatives with respect to each parameter. If too small, numerical problems will occur.
} \item{\dots}{ Arguments passed into the \code{plot} function when setting up the entire plot. Useful arguments here include \code{sub} and \code{las}. } } \details{ A return level plot displays \eqn{z_p}{zp} versus \eqn{\log(y_p)}{log(yp)}. It is linear if the shape parameter \eqn{\xi=0}{xi=0}. If \eqn{\xi<0}{xi<0} then the plot is convex with asymptotic limit as \eqn{p} approaches zero at \eqn{\mu-\sigma / \xi}{mu-sigma/xi}. And if \eqn{\xi>0}{xi>0} then the plot is concave and has no finite bound. Here, \eqn{G(z_p) = 1-p}{G(zp) = 1-p} where \eqn{0 < p < 1}{0 < p < 1} and \eqn{G} is the cumulative distribution function of the fitted GEV-type distribution. } \examples{ \dontrun{ # Hypothetical setup (assumed): simulate GEV data and fit an egev model gdata <- data.frame(y = rgev(n = 100, scale = exp(1), shape = -0.1)) fit2 <- vglm(y ~ 1, egev, gdata, trace = TRUE) rlplot(fit2, las = 1) -> i1 rlplot(fit2, pcol = "darkorange", lcol = "blue", log = FALSE, scol = "darkgreen", slty = "dashed", las = 1) -> i2 range(i2@post$rlplot$upper - i1@post$rlplot$upper) # Should be near 0 range(i2@post$rlplot$lower - i1@post$rlplot$lower) # Should be near 0 } } \keyword{graphs} \keyword{models} \keyword{regression} VGAM/man/rig.Rd0000644000176000001440000000251512136651105012662 0ustar ripleyusers\name{rig} \alias{rig} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Reciprocal Inverse Gaussian distribution } \description{ Estimation of the parameters of a reciprocal inverse Gaussian distribution. } \usage{ rig(lmu = "identity", llambda = "loge", imu = NULL, ilambda = 1) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lmu, llambda}{ Link functions for \code{mu} and \code{lambda}. See \code{\link{Links}} for more choices. } \item{imu, ilambda}{ Initial values for \code{mu} and \code{lambda}. A \code{NULL} means a value is computed internally. } } \details{ See Jorgensen (1997) for details. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Jorgensen, B. (1997) \emph{The Theory of Dispersion Models}. London: Chapman & Hall. } \author{ T. W. Yee } \note{ This distribution is potentially useful for dispersion modelling. } \seealso{ \code{\link{simplex}}. } \examples{ rdata <- data.frame(y = rchisq(n = 100, df = 14)) # Not 'proper' data!! fit <- vglm(y ~ 1, rig, rdata, trace = TRUE) fit <- vglm(y ~ 1, rig, rdata, trace = TRUE, eps = 1e-9, crit = "coef") summary(fit) } \keyword{models} \keyword{regression} VGAM/man/riceff.Rd0000644000176000001440000000523512136651105013341 0ustar ripleyusers\name{riceff} \alias{riceff} %- Also NEED an '\alias' for EACH other topic documented here. \title{Rice Distribution Family Function} \description{ Estimates the two parameters of a Rice distribution by maximum likelihood estimation. } \usage{ riceff(lvee = "loge", lsigma = "loge", ivee = NULL, isigma = NULL, nsimEIM = 100, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lvee, lsigma}{ Link functions for the \eqn{v} and \eqn{\sigma}{sigma} parameters. See \code{\link{Links}} for more choices and for general information. } \item{ivee, isigma}{ Optional initial values for the parameters. See \code{\link{CommonVGAMffArguments}} for more information. If convergence failure occurs (this \pkg{VGAM} family function seems to require good initial values) try using these arguments. } \item{nsimEIM, zero}{ See \code{\link{CommonVGAMffArguments}} for more information.
} } \details{ The Rician distribution has density function \deqn{f(y;v,\sigma) = \frac{ y }{\sigma^2} \, \exp(-(y^2+v^2) / (2\sigma^2)) \, I_0(y v / \sigma^2) }{% f(y;v,sigma) = (y/sigma^2) * exp(-(y^2+v^2) / (2*sigma^2)) * I_0(y*v/sigma^2)} where \eqn{y > 0}, \eqn{v > 0}, \eqn{\sigma > 0} and \eqn{I_0} is the modified Bessel function of the first kind with order zero. When \eqn{v = 0} the Rice distribution reduces to a Rayleigh distribution. The mean is \eqn{\sigma \sqrt{\pi/2} \exp(z/2) ((1-z) I_0(-z/2)-z I_1(-z/2))}{sigma*sqrt(pi/2) * exp(z/2)*((1-z) * I_0(-z/2)-z*I_1(-z/2))} (returned as the fitted values) where \eqn{z=-v^2/(2 \sigma^2)}{z=-v^2/(2*sigma^2)}. Simulated Fisher scoring is implemented. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Rice, S. O. (1945) Mathematical Analysis of Random Noise. \emph{Bell System Technical Journal}, \bold{24}, 46--156. } \author{ T. W. Yee } \note{ Convergence problems may occur for data where \eqn{v=0}; if so, use \code{\link{rayleigh}} or possibly use an \code{\link{identity}} link. When \eqn{v} is large (greater than 3, say) then the mean is approximately \eqn{v} and the standard deviation is approximately \eqn{\sigma}{sigma}. } \seealso{ \code{\link{drice}}, \code{\link{rayleigh}}, \code{\link[base:Bessel]{besselI}}. } \examples{ \dontrun{ vee <- exp(2); sigma <- exp(1); y <- rrice(n <- 1000, vee, sigma) fit <- vglm(y ~ 1, riceff, trace = TRUE, crit = "c") c(mean(y), fitted(fit)[1]) coef(fit, matrix = TRUE) Coef(fit) summary(fit) } } \keyword{models} \keyword{regression} VGAM/man/riceUC.Rd0000644000176000001440000000276212136651105013257 0ustar ripleyusers\name{Rice} \alias{Rice} \alias{drice} %\alias{price} %\alias{qrice} \alias{rrice} \title{The Rice Distribution} \description{ Density % distribution function, quantile function and random generation for the Rician distribution. } \usage{ drice(x, vee, sigma, log = FALSE) %price(q, vee, sigma) %qrice(p, vee, sigma) rrice(n, vee, sigma) } \arguments{ \item{x}{vector of quantiles.} % \item{p}{vector of probabilities.} \item{n}{number of observations. Must be a positive integer of length 1.} \item{vee, sigma}{ See \code{\link{riceff}}. } \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the density is returned. } } \value{ \code{drice} gives the density, % \code{price} gives the distribution function, % \code{qrice} gives the quantile function, and \code{rrice} generates random deviates. } \author{ T. W. Yee } \details{ See \code{\link{riceff}}, the \pkg{VGAM} family function for estimating the two parameters, for the formula of the probability density function and other details. } %\section{Warning }{ % %} \seealso{ \code{\link{riceff}}. } \examples{ \dontrun{ x <- seq(0.01, 7, len = 201) plot(x, drice(x, vee = 0, sigma = 1), type = "n", las = 1, ylab = "", main = "Density of Rice distribution for various values of v") sigma <- 1; vee <- c(0, 0.5, 1, 2, 4) for(ii in 1:length(vee)) lines(x, drice(x, vee[ii], sigma), col = ii) legend(x = 5, y = 0.6, legend = as.character(vee), col = 1:length(vee), lty = 1) } } \keyword{distribution} VGAM/man/rhobit.Rd0000644000176000001440000000551512136651105013373 0ustar ripleyusers\name{rhobit} \alias{rhobit} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Rhobit Link Function } \description{ Computes the rhobit link transformation, including its inverse and the first two derivatives.
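% A small numerical check (illustrative, not from the original sources):
% the rhobit link and its inverse should round-trip.
% rhobit(0.5)                          # log((1 + 0.5)/(1 - 0.5)) = log(3)
% rhobit(rhobit(0.5), inverse = TRUE)  # Should return 0.5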
} \usage{ rhobit(theta, bminvalue = NULL, bmaxvalue = NULL, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{theta}{ Numeric or character. See below for further details. } \item{bminvalue, bmaxvalue}{ Optional boundary values, e.g., values of \code{theta} which are less than or equal to -1 can be replaced by \code{bminvalue} before computing the link function value. And values of \code{theta} which are greater than or equal to 1 can be replaced by \code{bmaxvalue} before computing the link function value. See \code{\link{Links}}. } \item{inverse, deriv, short, tag}{ Details at \code{\link{Links}}. } } \details{ The \code{rhobit} link function is commonly used for parameters that lie between \eqn{-1} and \eqn{1}. Numerical values of \code{theta} close to \eqn{-1} or \eqn{1} or out of range result in \code{Inf}, \code{-Inf}, \code{NA} or \code{NaN}. } \value{ For \code{deriv = 0}, the rhobit of \code{theta}, i.e., \code{log((1 + theta)/(1 - theta))} when \code{inverse = FALSE}, and if \code{inverse = TRUE} then \code{(exp(theta) - 1)/(exp(theta) + 1)}. For \code{deriv = 1}, the function returns \emph{d} \code{theta} / \emph{d} \code{eta} as a function of \code{theta} if \code{inverse = FALSE}, else if \code{inverse = TRUE} then it returns the reciprocal. } \references{ Documentation accompanying the \pkg{VGAM} package at \url{http://www.stat.auckland.ac.nz/~yee} contains further information and examples. } \author{ Thomas W. Yee } \note{ Numerical instability may occur when \code{theta} is close to \eqn{-1} or \eqn{1}. One way of overcoming this is to use \code{bminvalue}, etc. The correlation parameter of a standard bivariate normal distribution lies between \eqn{-1} and \eqn{1}; therefore this function can be used for modelling this parameter as a function of explanatory variables. The link function \code{rhobit} is very similar to \code{\link{fisherz}}, i.e., just twice the value of \code{\link{fisherz}}. } \seealso{ \code{\link{Links}}, \code{\link{binom2.rho}}, \code{\link{fisherz}}. } \examples{ theta <- seq(-0.99, 0.99, by = 0.01) y <- rhobit(theta) \dontrun{ plot(theta, y, type = "l", las = 1, ylab = "", main = "rhobit(theta)") abline(v = 0, h = 0, lty = 2) } x <- c(seq(-1.02, -0.98, by = 0.01), seq(0.97, 1.02, by = 0.01)) rhobit(x) # Has NAs rhobit(x, bminvalue = -1 + .Machine$double.eps, bmaxvalue = 1 - .Machine$double.eps) # Has no NAs } \keyword{math} \keyword{models} \keyword{regression} VGAM/man/recnormal1.Rd0000644000176000001440000000604512136651105014146 0ustar ripleyusers\name{recnormal1} \alias{recnormal1} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Upper Record Values from a Univariate Normal Distribution } \description{ Maximum likelihood estimation of the two parameters of a univariate normal distribution when the observations are upper record values. } \usage{ recnormal1(lmean = "identity", lsd = "loge", imean = NULL, isd = NULL, imethod = 1, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lmean, lsd}{ Link functions applied to the mean and sd parameters. See \code{\link{Links}} for more choices. } \item{imean, isd}{ Numeric. Optional initial values for the mean and sd. The default value \code{NULL} means they are computed internally, with the help of \code{imethod}. } \item{imethod}{ Integer, either 1 or 2 or 3. Initial method, three algorithms are implemented.
Choose another value if convergence fails, or use \code{imean} and/or \code{isd}. } \item{zero}{ An integer vector, containing the value 1 or 2. If so, the mean or standard deviation respectively are modelled as an intercept only. Usually, setting \code{zero = 2} will be used, if used at all. The default value \code{NULL} means both linear/additive predictors are modelled as functions of the explanatory variables. } } \details{ The response must be a vector or one-column matrix with strictly increasing values. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Arnold, B. C. and Balakrishnan, N. and Nagaraja, H. N. (1998) \emph{Records}, New York: John Wiley & Sons. } \author{ T. W. Yee } \note{ This family function tries to solve a difficult problem, and the larger the data set the better. Convergence failure can commonly occur, and convergence may be very slow, so set \code{maxit = 200, trace = TRUE}, say. Inputting good initial values is advised. This family function uses the BFGS quasi-Newton update formula for the working weight matrices. Consequently the estimated variance-covariance matrix may be inaccurate or simply wrong! The standard errors must therefore be treated with caution; these are computed in functions such as \code{vcov()} and \code{summary()}. } \seealso{ \code{\link{normal1}}, \code{\link{dcennormal1}}. } \examples{ nn <- 10000; mymean <- 100 # First value is reference value or trivial record Rdata <- data.frame(rawy = c(mymean, rnorm(nn, mean = mymean, sd = exp(3)))) # Keep only observations that are records: rdata <- data.frame(y = unique(cummax(with(Rdata, rawy)))) fit <- vglm(y ~ 1, recnormal1, rdata, trace = TRUE, maxit = 200) coef(fit, matrix = TRUE) Coef(fit) summary(fit) } \keyword{models} \keyword{regression} %# Keep only observations that are records %delete = c(FALSE, rep(TRUE, len = n)) %for(i in 2:length(rawy)) % if (rawy[i] > max(rawy[1:(i-1)])) delete[i] = FALSE %(y = rawy[!delete]) VGAM/man/reciprocal.Rd0000644000176000001440000000406512136651105014226 0ustar ripleyusers\name{reciprocal} \alias{reciprocal} \alias{nreciprocal} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Reciprocal link function } \description{ Computes the reciprocal transformation, including its inverse and the first two derivatives. } \usage{ reciprocal(theta, bvalue = NULL, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) nreciprocal(theta, bvalue = NULL, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{theta}{ Numeric or character. See below for further details. } \item{bvalue}{ See \code{\link{Links}}. } \item{inverse, deriv, short, tag}{ Details at \code{\link{Links}}. } } \details{ The \code{reciprocal} link function is a special case of the power link function. Numerical values of \code{theta} close to 0 result in \code{Inf}, \code{-Inf}, \code{NA} or \code{NaN}. The \code{nreciprocal} link function computes the negative reciprocal, i.e., \eqn{-1/ \theta}{-1/theta}. } \value{ For \code{reciprocal}: for \code{deriv = 0}, the reciprocal of \code{theta}, i.e., \code{1/theta} when \code{inverse = FALSE}, and if \code{inverse = TRUE} then \code{1/theta}.
For \code{deriv = 1}, the function returns \emph{d} \code{theta} / \emph{d} \code{eta} as a function of \code{theta} if \code{inverse = FALSE}, else if \code{inverse = TRUE} then it returns the reciprocal. } \references{ McCullagh, P. and Nelder, J. A. (1989) \emph{Generalized Linear Models}, 2nd ed. London: Chapman & Hall. } %\section{Warning}{ %} \author{ Thomas W. Yee } \note{ Numerical instability may occur when \code{theta} is close to 0. } \seealso{ \code{\link{identity}}, \code{\link{powl}}. } \examples{ reciprocal(1:5) reciprocal(1:5, inverse = TRUE, deriv = 2) nreciprocal(1:5) nreciprocal(1:5, inverse = TRUE, deriv = 2) x <- (-3):3 reciprocal(x) # Has Inf reciprocal(x, bvalue = .Machine$double.eps) # Has no Inf } \keyword{math} \keyword{models} \keyword{regression} VGAM/man/recexp1.Rd0000644000176000001440000000373312136651105013453 0ustar ripleyusers\name{recexp1} \alias{recexp1} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Upper Record Values from a 1-parameter Exponential Distribution } \description{ Maximum likelihood estimation of the rate parameter of a 1-parameter exponential distribution when the observations are upper record values. } \usage{ recexp1(lrate = "loge", irate = NULL, imethod = 1) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lrate}{ Link function applied to the rate parameter. See \code{\link{Links}} for more choices. } \item{irate}{ Numeric. Optional initial values for the rate. The default value \code{NULL} means they are computed internally, with the help of \code{imethod}. } \item{imethod}{ Integer, either 1 or 2 or 3. Initial method, three algorithms are implemented. Choose another value if convergence fails, or use \code{irate}. } } \details{ The response must be a vector or one-column matrix with strictly increasing values. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Arnold, B. C. and Balakrishnan, N. and Nagaraja, H. N. (1998) \emph{Records}, New York: John Wiley & Sons. } \author{ T. W. Yee } \note{ By default, this family function has the intercept-only MLE as the initial value, therefore convergence may only take one iteration. Fisher scoring is used. } \seealso{ \code{\link{exponential}}. } \examples{ rawy <- rexp(n <- 10000, rate = exp(1)) y <- unique(cummax(rawy)) # Keep only the records length(y) / y[length(y)] # MLE of rate fit <- vglm(y ~ 1, recexp1, trace = TRUE) coef(fit, matrix = TRUE) Coef(fit) } \keyword{models} \keyword{regression} %# Keep only the records %delete = c(FALSE, rep(TRUE, len=n-1)) %for(i in 2:length(rawy)) % if (rawy[i] > max(rawy[1:(i-1)])) delete[i] = FALSE %(y = rawy[!delete]) VGAM/man/rdiric.Rd0000644000176000001440000000265312136651105013360 0ustar ripleyusers\name{rdiric} \alias{rdiric} %- Also NEED an '\alias' for EACH other topic documented here. \title{ The Dirichlet distribution } \description{ Generates Dirichlet random variates. } \usage{ rdiric(n, shape, dimension = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{n}{ number of observations. } \item{shape}{ the shape parameters. These must be positive. If \code{dimension} is specified, values are recycled if necessary to length \code{dimension}. } \item{dimension}{ the dimension of the distribution. If \code{dimension} is not numeric then it is taken to be \code{length(shape)}.
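% A sketch of the standard gamma-to-Dirichlet construction described in the
% details below (rdiric()'s internals may differ in detail):
% shape <- c(3, 1, 4)
% gmat <- sapply(shape, function(s) rgamma(n = 1000, shape = s))
% y <- gmat / rowSums(gmat)  # Each row is a Dirichlet variate summing to unity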
} } \details{ This function is based on a relationship between the gamma and Dirichlet distribution. Random gamma variates are generated, and then Dirichlet random variates are formed from these. } \value{ A \code{n} by \code{dimension} matrix of Dirichlet random variates. Each element is positive, and each row will sum to unity. } \references{ Lange, K. (2002) \emph{Mathematical and Statistical Methods for Genetic Analysis}, 2nd ed. New York: Springer-Verlag. } \author{ Thomas W. Yee } \seealso{ \code{\link{dirichlet}} is a \pkg{VGAM} family function for fitting a Dirichlet distribution to data. } \examples{ y <- rdiric(n = 1000, shape = c(3, 1, 4)) fit <- vglm(y ~ 1, dirichlet, trace = TRUE, crit = "c") Coef(fit) coef(fit, matrix = TRUE) } \keyword{distribution} VGAM/man/rcqo.Rd0000644000176000001440000003306312136651105013047 0ustar ripleyusers\name{rcqo} \alias{rcqo} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Constrained Quadratic Ordination } \description{ Random generation for constrained quadratic ordination (CQO). } \usage{ rcqo(n, p, S, Rank = 1, family = c("poisson", "negbinomial", "binomial-poisson", "Binomial-negbinomial", "ordinal-poisson", "Ordinal-negbinomial", "gamma2"), EqualMaxima = FALSE, EqualTolerances = TRUE, ESOptima = FALSE, loabundance = if (EqualMaxima) hiabundance else 10, hiabundance = 100, sdlv = head(1.5/2^(0:3), Rank), sdOptima = ifelse(ESOptima, 1.5/Rank, 1) * ifelse(scalelv, sdlv, 1), sdTolerances = 0.25, Kvector = 1, Shape = 1, sqrt = FALSE, Log = FALSE, rhox = 0.5, breaks = 4, seed = NULL, Crow1positive = TRUE, xmat = NULL, scalelv = TRUE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{n}{ Number of sites. It is denoted by \eqn{n} below. } \item{p}{ Number of environmental variables, including an intercept term. It is denoted by \eqn{p} below. Must be no less than \eqn{1+R} in value. } \item{S}{ Number of species. It is denoted by \eqn{S} below. } \item{Rank}{ The rank or the number of latent variables or true dimension of the data on the reduced space. This must be either 1, 2, 3 or 4. It is denoted by \eqn{R}. } \item{family}{ What type of species data is to be returned. The first choice is the default. If binomial then a 0 means absence and 1 means presence. If ordinal then the \code{breaks} argument is passed into the \code{breaks} argument of \code{\link[base]{cut}}. Note that either the Poisson or negative binomial distributions are used to generate binomial and ordinal data, and that an upper-case choice is used for the negative binomial distribution (this makes it easier for the user). If \code{"gamma2"} then this is the 2-parameter gamma distribution. % , % and the resulting values are % 1,2,\ldots,\code{breaks} if \code{breaks} is a single integer zz % else zz. } \item{EqualMaxima}{ Logical. Does each species have the same maxima? See arguments \code{loabundance} and \code{hiabundance}. } \item{EqualTolerances}{ Logical. Does each species have the same tolerance? If \code{TRUE} then the common value is 1 along every latent variable, i.e., all species' tolerance matrices are the order-\eqn{R} identity matrix. } \item{ESOptima}{ Logical. Do the species have equally spaced optima? If \code{TRUE} then the quantity \eqn{S^{1/R}}{S^(1/R)} must be an integer with value 2 or more. That is, there has to be an appropriate number of species in total. This is so that a grid of optimum values is possible in \eqn{R}-dimensional latent variable space in order to place the species' optima. 
Also see the argument \code{sdTolerances}. } \item{loabundance, hiabundance}{ Numeric. These are recycled to a vector of length \eqn{S}. The species have a maximum between \code{loabundance} and \code{hiabundance}. That is, at their optimal environment, the mean abundance of each species is between the two componentwise values. If \code{EqualMaxima} is \code{TRUE} then \code{loabundance} and \code{hiabundance} must have the same values. If \code{EqualMaxima} is \code{FALSE} then the logarithm of the maxima are uniformly distributed between \code{log(loabundance)} and \code{log(hiabundance)}. } \item{sdlv}{ Numeric, of length \eqn{R} (recycled if necessary). Site scores along each latent variable have these standard deviation values. This must be a decreasing sequence of values because the first ordination axis contains the greatest spread of the species' site scores, followed by the second axis, followed by the third axis, etc. } \item{sdOptima}{ Numeric, of length \eqn{R} (recycled if necessary). If \code{ESOptima = FALSE} then, for the \eqn{r}th latent variable axis, the optima of the species are generated from a normal distribution centered about 0. If \code{ESOptima = TRUE} then the \eqn{S} optima are equally spaced about 0 along every latent variable axis. Regardless of the value of \code{ESOptima}, the optima are then scaled to give standard deviation \code{sdOptima[r]}. } \item{sdTolerances}{ Numeric. If \code{EqualTolerances = FALSE} then, for the \eqn{r}th latent variable, the species' tolerances are chosen from a normal distribution with mean 1 and standard deviation \code{sdTolerances[r]}. However, the first species \code{y1} has its tolerance matrix set equal to the order-\eqn{R} identity matrix. All tolerance matrices for all species are diagonal in this function. This argument is ignored if \code{EqualTolerances} is \code{TRUE}, otherwise it is recycled to length \eqn{R} if necessary. } \item{Kvector}{ A vector of positive \eqn{k} values (recycled to length \eqn{S} if necessary) for the negative binomial distribution (see \code{\link{negbinomial}} for details). Note that a natural default value does not exist; however, the default value here is probably a realistic one, and that for large values of \eqn{\mu} one has \eqn{Var(Y) = \mu^2 / k}{Var(Y) = mu^2 / k} approximately. } \item{Shape}{ A vector of positive \eqn{\lambda}{lambda} values (recycled to length \eqn{S} if necessary) for the 2-parameter gamma distribution (see \code{\link{gamma2}} for details). Note that a natural default value does not exist; however, the default value here is probably a realistic one, and that \eqn{Var(Y) = \mu^2 / \lambda}{Var(Y) = mu^2 / lambda}. } \item{sqrt}{ Logical. Take the square-root of the negative binomial counts? Assigning \code{sqrt = TRUE} when \code{family="negbinomial"} means that the resulting species data can be considered very crudely to be approximately Poisson distributed. They will not be integers in general but are much easier (fewer numerical problems) to estimate using something like \code{cqo(..., family="poissonff")}. } \item{Log}{ Logical. Take the logarithm of the gamma random variates? Assigning \code{Log = TRUE} when \code{family="gamma2"} means that the resulting species data can be considered very crudely to be approximately Gaussian distributed about its (quadratic) mean. The result is that it is much easier (fewer numerical problems) to estimate using something like \code{cqo(..., family="gaussianff")}. } \item{rhox}{ Numeric, less than 1 in absolute value.
The correlation between the environmental variables. The correlation matrix is a matrix of 1's along the diagonal and \code{rhox} in the off-diagonals. Note that each environmental variable is normally distributed with mean 0. The standard deviation of each environmental variable is chosen so that the site scores have the determined standard deviation, as given by argument \code{sdlv}. } \item{breaks}{ If \code{family} is assigned an ordinal value then this argument is used to define the cutpoints. It is fed into the \code{breaks} argument of \code{\link[base]{cut}}. } \item{seed}{ If given, it is passed into \code{\link[base:Random]{set.seed}}. This argument can be used to obtain reproducible results. If set, the value is saved as the \code{"seed"} attribute of the returned value. The default will not change the random generator state, and return \code{\link[base:Random]{.Random.seed}} as \code{"seed"} attribute. } \item{Crow1positive}{ See \code{\link{qrrvglm.control}} for details. } \item{xmat}{ The \eqn{n} by \eqn{p-1} environmental matrix can be inputted. } \item{scalelv}{ Logical. If \code{FALSE} the argument \code{sdlv} is ignored and no scaling of the latent variable values is performed. } } \details{ This function generates data coming from a constrained quadratic ordination (CQO) model. In particular, data coming from a \emph{species packing model} can be generated with this function. The species packing model states that species have equal tolerances, equal maxima, and optima which are uniformly distributed over the latent variable space. This can be achieved by assigning the arguments \code{ESOptima = TRUE}, \code{EqualMaxima = TRUE}, \code{EqualTolerances = TRUE}. At present, the Poisson and negative binomial abundances are generated first using \code{loabundance} and \code{hiabundance}, and if \code{family} is binomial or ordinal then it is converted into these forms. In CQO theory the \eqn{n} by \eqn{p} matrix \eqn{X} is partitioned into two parts \eqn{X_1} and \eqn{X_2}. The matrix \eqn{X_2} contains the `real' environmental variables whereas the variables in \eqn{X_1} are just for adjustment purposes; they contain the intercept terms and other variables that one wants to adjust for when (primarily) looking at the variables in \eqn{X_2}. This function has \eqn{X_1} only being a matrix of ones, i.e., containing an intercept only. } \value{ An \eqn{n} by \eqn{p-1+S} data frame with components and attributes. In the following the attributes are labelled with double quotes. \item{x2, x3, x4, \ldots, xp}{ The environmental variables. This makes up the \eqn{n} by \eqn{p-1} \eqn{X_2} matrix. Note that \code{x1} is not present; it is effectively a vector of ones since it corresponds to an intercept term when \code{\link{cqo}} is applied to the data. } \item{y1, y2, y3, \ldots, yS}{ The species data. This makes up the \eqn{n} by \eqn{S} matrix \eqn{Y}. This will be of the form described by the argument \code{family}. } \item{"ccoefficients"}{ The \eqn{p-1} by \eqn{R} matrix of constrained coefficients (or canonical coefficients). These are also known as weights or loadings. } \item{"formula"}{ The formula involving the species and environmental variable names. This can be used directly in the \code{formula} argument of \code{\link{cqo}}. } \item{"logmaxima"}{ The \eqn{S}-vector of species' maxima, on a log scale. These are uniformly distributed between \code{log(loabundance)} and \code{log(hiabundance)}. } \item{"lv"}{ The \eqn{n} by \eqn{R} matrix of site scores.
Each successive column (latent variable) has sample standard deviation equal to successive values of \code{sdlv}. } \item{"eta"}{ The linear/additive predictor value. } \item{"optima"}{ The \eqn{S} by \eqn{R} matrix of species' optima. } \item{"tolerances"}{ The \eqn{S} by \eqn{R} matrix of species' tolerances. These are the square root of the diagonal elements of the tolerance matrices (recall that all tolerance matrices are restricted to being diagonal in this function). } Other attributes are \code{"break"}, \code{"family"}, \code{"Rank"}, \code{"loabundance"}, \code{"hiabundance"}, \code{"EqualTolerances"}, \code{"EqualMaxima"}, \code{"seed"} as used. } \references{ Yee, T. W. (2004) A new technique for maximum-likelihood canonical Gaussian ordination. \emph{Ecological Monographs}, \bold{74}, 685--701. Yee, T. W. (2006) Constrained additive ordination. \emph{Ecology}, \bold{87}, 203--213. ter Braak, C. J. F. and Prentice, I. C. (1988) A theory of gradient analysis. \emph{Advances in Ecological Research}, \bold{18}, 271--317. } \author{ T. W. Yee } \note{ This function is under development and is not finished yet. There may be a few bugs. Yet to do: add an argument that allows absences to be equal to the first level if ordinal data is requested. } \seealso{ \code{\link{cqo}}, \code{\link{qrrvglm.control}}, \code{\link[base]{cut}}, \code{\link{binomialff}}, \code{\link{poissonff}}, \code{\link{negbinomial}}, \code{\link{gamma2}}, \code{\link{gaussianff}}. } \examples{ \dontrun{ # Example 1: Species packing model: n <- 100; p <- 5; S <- 5 mydata <- rcqo(n, p, S, ESOpt = TRUE, EqualMax = TRUE) names(mydata) (myform <- attr(mydata, "formula")) fit <- cqo(myform, poissonff, mydata, Bestof = 3) # EqualTol = TRUE matplot(attr(mydata, "lv"), mydata[,-(1:(p-1))], col = 1:S) persp(fit, col = 1:S, add = TRUE) lvplot(fit, lcol = 1:S, y = TRUE, pcol = 1:S) # The same plot as above # Compare the fitted model with the 'truth' ccoef(fit) # The fitted model attr(mydata, "ccoefficients") # The 'truth' c(apply(attr(mydata, "lv"), 2, sd), apply(lv(fit), 2, sd)) # Both values should be approx equal # Example 2: negative binomial data fitted using a Poisson model: n <- 200; p <- 5; S <- 5 mydata <- rcqo(n, p, S, fam = "negbin", sqrt = TRUE) myform <- attr(mydata, "formula") fit <- cqo(myform, fam = poissonff, dat = mydata) # ITol = TRUE, lvplot(fit, lcol = 1:S, y = TRUE, pcol = 1:S) # Compare the fitted model with the 'truth' ccoef(fit) # The fitted model attr(mydata, "ccoefficients") # The 'truth' # Example 3: gamma2 data fitted using a Gaussian model: n <- 200; p <- 5; S <- 3 mydata <- rcqo(n, p, S, fam = "gamma2", Log = TRUE) fit <- cqo(attr(mydata, "formula"), fam = gaussianff, dat = mydata) # ITol = TRUE, matplot(attr(mydata, "lv"), exp(mydata[,-(1:(p-1))]), col = 1:S) # 'raw' data lvplot(fit, lcol = 1:S, y = TRUE, pcol = 1:S) # Fitted model to transformed data # Compare the fitted model with the 'truth' ccoef(fit) # The fitted model attr(mydata, "ccoefficients") # The 'truth' } } \keyword{distribution} \keyword{datagen} VGAM/man/rayleighUC.Rd0000644000176000001440000000365612136651105014142 0ustar ripleyusers\name{Rayleigh} \alias{Rayleigh} \alias{drayleigh} \alias{prayleigh} \alias{qrayleigh} \alias{rrayleigh} \title{The Rayleigh Distribution} \description{ Density, distribution function, quantile function and random generation for the Rayleigh distribution with scale parameter \code{scale}.
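% A quick check of the q/p pair documented below (a sketch; the values
% are illustrative only):
% all.equal(prayleigh(qrayleigh(0.75, scale = 2), scale = 2), 0.75)  # TRUE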
} \usage{ drayleigh(x, scale = 1, log = FALSE) prayleigh(q, scale = 1) qrayleigh(p, scale = 1) rrayleigh(n, scale = 1) } \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. Fed into \code{\link[stats]{runif}}. } \item{scale}{the scale parameter \eqn{b}.} \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the density is returned. } } \value{ \code{drayleigh} gives the density, \code{prayleigh} gives the distribution function, \code{qrayleigh} gives the quantile function, and \code{rrayleigh} generates random deviates. } \references{ Evans, M., Hastings, N. and Peacock, B. (2000) \emph{Statistical Distributions}, New York: Wiley-Interscience, Third edition. } \author{ T. W. Yee } \details{ See \code{\link{rayleigh}}, the \pkg{VGAM} family function for estimating the scale parameter \eqn{b} by maximum likelihood estimation, for the formula of the probability density function and range restrictions on the parameter \eqn{b}. } \note{ The Rayleigh distribution is related to the Maxwell distribution. } \seealso{ \code{\link{rayleigh}}, \code{\link{maxwell}}. } \examples{ \dontrun{ Scale <- 2; x <- seq(-1, 8, by = 0.1) plot(x, drayleigh(x, scale = Scale), type = "l", ylim = c(0,1), las = 1, ylab = "", main = "Rayleigh density divided into 10 equal areas; orange = cdf") abline(h = 0, col = "blue", lty = 2) qq <- qrayleigh(seq(0.1, 0.9, by = 0.1), scale = Scale) lines(qq, drayleigh(qq, scale = Scale), col = "purple", lty = 3, type = "h") lines(x, prayleigh(x, scale = Scale), col = "orange") } } \keyword{distribution} VGAM/man/rayleigh.Rd0000644000176000001440000000725012136651105013706 0ustar ripleyusers\name{rayleigh} \alias{rayleigh} \alias{cenrayleigh} %- Also NEED an '\alias' for EACH other topic documented here. \title{Rayleigh Distribution Family Function } \description{ Estimating the parameter of the Rayleigh distribution by maximum likelihood estimation. Right-censoring is allowed. } \usage{ rayleigh(lscale = "loge", nrfs = 1/3 + 0.01, oim.mean = TRUE, zero = NULL) cenrayleigh(lscale = "loge", oim = TRUE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lscale}{ Parameter link function applied to the scale parameter \eqn{b}. See \code{\link{Links}} for more choices. A log link is the default because \eqn{b} is positive. } \item{nrfs}{ Numeric, of length one, with value in \eqn{[0,1]}. Weighting factor between Newton-Raphson and Fisher scoring. The value 0 means pure Newton-Raphson, while 1 means pure Fisher scoring. The default value uses a mixture of the two algorithms, and retaining positive-definite working weights. } \item{oim.mean}{ Logical, used only for intercept-only models. \code{TRUE} means the mean of the OIM elements are used as working weights. If \code{TRUE} then this argument has top priority for working out the working weights. \code{FALSE} means use another algorithm. } \item{oim}{ Logical. For censored data only, \code{TRUE} means the Newton-Raphson algorithm, and \code{FALSE} means Fisher scoring. } \item{zero}{ Details at \code{\link{CommonVGAMffArguments}}. } } \details{ The Rayleigh distribution, which is used in physics, has a probability density function that can be written \deqn{f(y) = y \exp(-0.5 (y/b)^2)/b^2}{% f(y) = y*exp(-0.5*(y/b)^2)/b^2} for \eqn{y > 0} and \eqn{b > 0}. The mean of \eqn{Y} is \eqn{b \sqrt{\pi / 2}}{b * sqrt(pi / 2)} and its variance is \eqn{b^2 (4-\pi)/2}{b^2 (4-pi)/2}. 
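% The mean and variance formulas above can be checked by simulation
% (a sketch; the value of b is illustrative only):
% b <- 2; yy <- rrayleigh(1e5, scale = b)
% c(mean(yy), b * sqrt(pi/2))    # Should be close
% c(var(yy), b^2 * (4 - pi)/2)   # Should be close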
The \pkg{VGAM} family function \code{cenrayleigh} handles right-censored data (the true value is greater than the observed value). To indicate which type of censoring, input \code{extra = list(rightcensored = vec2)} where \code{vec2} is a logical vector the same length as the response. If the component of this list is missing then the logical values are taken to be \code{FALSE}. The fitted object has this component stored in the \code{extra} slot. The \pkg{VGAM} family function \code{rayleigh} handles multiple responses. } \section{Warning}{ The theory behind the argument \code{oim} is not fully complete. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, \code{\link{rrvglm}} and \code{\link{vgam}}. } \references{ Evans, M., Hastings, N. and Peacock, B. (2000) \emph{Statistical Distributions}, New York: Wiley-Interscience, Third edition. } \author{ T. W. Yee } \note{ A related distribution is the Maxwell distribution. } \seealso{ \code{\link{Rayleigh}}, \code{\link{genrayleigh}}, \code{\link{riceff}}, \code{\link{maxwell}}. } \examples{ nn <- 1000; Scale <- exp(2) rdata <- data.frame(ystar = rrayleigh(nn, scale = Scale)) fit <- vglm(ystar ~ 1, rayleigh, rdata, trace = TRUE, crit = "c") head(fitted(fit)) with(rdata, mean(ystar)) coef(fit, matrix = TRUE) Coef(fit) # Censored data rdata <- transform(rdata, U = runif(nn, 5, 15)) rdata <- transform(rdata, y = pmin(U, ystar)) \dontrun{ par(mfrow = c(1,2)); hist(with(rdata, ystar)); hist(with(rdata, y)) } extra <- with(rdata, list(rightcensored = ystar > U)) fit <- vglm(y ~ 1, cenrayleigh, rdata, trace = TRUE, extra = extra) table(fit@extra$rightcen) coef(fit, matrix = TRUE) head(fitted(fit)) } \keyword{models} \keyword{regression} VGAM/man/quasipoissonff.Rd0000644000176000001440000000762512136651105015155 0ustar ripleyusers\name{quasipoissonff} %\alias{quasipoisson} \alias{quasipoissonff} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Quasi-Poisson Family Function } \description{ Fits a generalized linear model to a Poisson response, where the dispersion parameter is unknown. } \usage{ quasipoissonff(link = "loge", onedpar = FALSE, parallel = FALSE, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{link}{ Link function. See \code{\link{Links}} for more choices. } \item{onedpar}{ One dispersion parameter? If the response is a matrix, then a separate dispersion parameter will be computed for each response (column), by default. Setting \code{onedpar = TRUE} will pool them so that there is only one dispersion parameter to be estimated. } \item{parallel}{ A logical or formula. Used only if the response is a matrix. } \item{zero}{ An integer-valued vector specifying which linear/additive predictors are modelled as intercepts only. The values must be from the set \{1,2,\ldots,\eqn{M}\}, where \eqn{M} is the number of columns of the matrix response. } } \details{ \eqn{M} defined above is the number of linear/additive predictors. If the dispersion parameter is unknown, then the resulting estimate is not fully a maximum likelihood estimate. A dispersion parameter that is less/greater than unity corresponds to under-/over-dispersion relative to the Poisson model. Over-dispersion is more common in practice. When fitting a Quadratic RR-VGLM, the response is a matrix with \eqn{M} columns, say (e.g., one column per species).
Then there will be \eqn{M} dispersion parameters (one per column of the response matrix). } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, \code{\link{vgam}}, \code{\link{rrvglm}}, \code{\link{cqo}}, and \code{\link{cao}}. } \references{ McCullagh, P. and Nelder, J. A. (1989) \emph{Generalized Linear Models}, 2nd ed. London: Chapman & Hall. } \author{ Thomas W. Yee } \note{ This function will handle a matrix response automatically. The call \code{poissonff(dispersion = 0, ...)} is equivalent to \code{quasipoissonff(...)}. The latter was written so that R users of \code{quasipoisson()} would only need to add a ``\code{ff}'' to the end of the family function name. Regardless of whether the dispersion parameter is to be estimated or not, its value can be seen from the output from the \code{summary()} of the object. % With the introduction of name spaces for the \pkg{VGAM} package, % \code{"ff"} can be dropped for this family function. } \section{Warning }{ See the warning in \code{\link{quasibinomialff}}. } \seealso{ \code{\link{poissonff}}, \code{\link{negbinomial}}, \code{\link{loge}}, \code{\link{rrvglm}}, \code{\link{cqo}}, \code{\link{cao}}, \code{\link{binomialff}}, \code{\link{quasibinomialff}}, \code{\link[stats]{quasipoisson}}. } \examples{ quasipoissonff() \dontrun{n <- 200; p <- 5; S <- 5 mydata <- rcqo(n, p, S, fam = "poisson", EqualTol = FALSE) myform <- attr(mydata, "formula") p1 <- cqo(myform, fam = quasipoissonff, EqualTol = FALSE, data = mydata) sort(p1@misc$deviance.Bestof) # A history of all the iterations lvplot(p1, y = TRUE, lcol = 1:S, pch = 1:S, pcol = 1:S) summary(p1) # The dispersion parameters are estimated }} \keyword{models} \keyword{regression} %n = 100 %x2 = rnorm(n) %x3 = rnorm(n) %x4 = rnorm(n) %lv1 = 0 + x3 - 2*x4 %lambda1 = exp(3 - 0.5 * (lv1-0)^2) %lambda2 = exp(2 - 0.5 * (lv1-1)^2) %lambda3 = exp(2 - 0.5 * ((lv1+4)/2)^2) %y1 = rpois(n, lambda1) %y2 = rpois(n, lambda2) %y3 = rpois(n, lambda3) %p1 = cqo(cbind(y1,y2,y3) ~ x2 + x3 + x4, quasipoissonff) %lvplot(p1, y=TRUE, lcol=1:3, pch=1:3, pcol=1:3) %summary(p1) # Three dispersion parameters are estimated VGAM/man/quasibinomialff.Rd0000644000176000001440000001233712136651105015255 0ustar ripleyusers\name{quasibinomialff} %\alias{quasibinomial} \alias{quasibinomialff} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Quasi-Binomial Family Function } \description{ Family function for fitting generalized linear models to binomial responses, where the dispersion parameters are unknown. } \usage{ quasibinomialff(link = "logit", mv = FALSE, onedpar = !mv, parallel = FALSE, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{link}{ Link function. See \code{\link{Links}} for more choices. } \item{mv}{ Multivariate response? If \code{TRUE}, then the response is interpreted as \eqn{M} binary responses, where \eqn{M} is the number of columns of the response matrix. In this case, the response matrix should have zero/one values only. If \code{FALSE} and the response is a (2-column) matrix, then the number of successes is given in the first column and the second column is the number of failures. } \item{onedpar}{ One dispersion parameter? If \code{mv}, then a separate dispersion parameter will be computed for each response (column), by default. Setting \code{onedpar=TRUE} will pool them so that there is only one dispersion parameter to be estimated. 
} \item{parallel}{ A logical or formula. Used only if \code{mv} is \code{TRUE}. This argument allows for the parallelism assumption whereby the regression coefficients for a variable are constrained to be equal over the \eqn{M} linear/additive predictors. } \item{zero}{ An integer-valued vector specifying which linear/additive predictors are modelled as intercepts only. The values must be from the set \{1,2,\ldots,\eqn{M}\}, where \eqn{M} is the number of columns of the matrix response. } } \details{ The final model is not fully estimated by maximum likelihood since the dispersion parameter is unknown (see pp. 124--128 of McCullagh and Nelder (1989) for more details). A dispersion parameter that is less/greater than unity corresponds to under-/over-dispersion relative to the binomial model. Over-dispersion is more common in practice. Setting \code{mv = TRUE} is necessary when fitting a Quadratic RR-VGLM (see \code{\link{cqo}}) because the response will be a matrix of \eqn{M} columns (e.g., one column per species). Then there will be \eqn{M} dispersion parameters (one per column of the response). } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, \code{\link{vgam}}, \code{\link{rrvglm}}, \code{\link{cqo}}, and \code{\link{cao}}. } \references{ McCullagh, P. and Nelder, J. A. (1989) \emph{Generalized Linear Models}, 2nd ed. London: Chapman & Hall. } \author{ Thomas W. Yee } \note{ If \code{mv} is \code{FALSE} (the default), then the response can be of one of three formats: a factor (first level taken as success), a vector of proportions of success, or a 2-column matrix (first column = successes) of counts. The argument \code{weights} in the modelling function can also be specified. In particular, for a general vector of proportions, you will need to specify \code{weights} because the number of trials is needed. If \code{mv} is \code{TRUE}, then the matrix response can only be of one format: a matrix of 1's and 0's (1=success). This function is only a front-end to the \pkg{VGAM} family function \code{binomialff()}; indeed, \code{quasibinomialff(...)} is equivalent to \code{binomialff(..., dispersion = 0)}. Here, the argument \code{dispersion = 0} signifies that the dispersion parameter is to be estimated. Regardless of whether the dispersion parameter is to be estimated or not, its value can be seen from the output from the \code{summary()} of the object. % With the introduction of name spaces for the \pkg{VGAM} package, % \code{"ff"} can be dropped for this family function. } \section{Warning }{ The log-likelihood pertaining to the ordinary family is used to test for convergence during estimation, and is printed out in the summary. } \seealso{ \code{\link{binomialff}}, \code{\link{rrvglm}}, \code{\link{cqo}}, \code{\link{cao}}, \code{\link{logit}}, \code{\link{probit}}, \code{\link{cloglog}}, \code{\link{cauchit}}, \code{\link{poissonff}}, \code{\link{quasipoissonff}}, \code{\link[stats]{quasibinomial}}.
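% A sketch (with assumed data) of the proportions-plus-weights response
% format described in the note above:
% pdata <- data.frame(propn = c(0.2, 0.5, 0.9), ntrials = c(10, 20, 30))
% fit <- vglm(propn ~ 1, quasibinomialff, pdata, weights = ntrials)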
} \examples{ quasibinomialff() quasibinomialff(link = "probit") # Nonparametric logistic regression hunua <- transform(hunua, a.5 = sqrt(altitude)) # Transformation of altitude fit1 <- vglm(agaaus ~ poly(a.5, 2), quasibinomialff, hunua) fit2 <- vgam(agaaus ~ s(a.5, df = 2), quasibinomialff, hunua) \dontrun{ plot(fit2, se = TRUE, llwd = 2, lcol = "orange", scol = "orange", xlab = "sqrt(altitude)", ylim = c(-3, 1), main = "GAM and quadratic GLM fitted to species data") plotvgam(fit1, se = TRUE, lcol = "blue", scol = "blue", add = TRUE, llwd = 2) } fit1@misc$dispersion # dispersion parameter logLik(fit1) # Here, the dispersion parameter defaults to 1 fit0 <- vglm(agaaus ~ poly(a.5, 2), binomialff, hunua) fit0@misc$dispersion # dispersion parameter } \keyword{models} \keyword{regression} %AIC(fit1) %AIC(fit0) VGAM/man/qtplot.lmscreg.Rd0000644000176000001440000000456512136651105015066 0ustar ripleyusers\name{qtplot.lmscreg} \alias{qtplot.lmscreg} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Quantile Plot for LMS Quantile Regression } \description{ Plots quantiles associated with a LMS quantile regression. } \usage{ qtplot.lmscreg(object, newdata = NULL, percentiles = object@misc$percentiles, plot.it = TRUE, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{object}{ A \pkg{VGAM} quantile regression model, i.e., an object produced by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}} with a family function beginning with \code{"lms."}, e.g., \code{\link{lms.yjn}}. } \item{newdata}{ Optional data frame for computing the quantiles. If missing, the original data is used. } \item{percentiles}{ Numerical vector with values between 0 and 100 that specify the percentiles (quantiles). The default is the set of percentiles used when the model was fitted. } \item{plot.it}{ Logical. Plot it? If \code{FALSE} no plot will be done. } \item{\dots}{ Graphical parameters that are passed into \code{\link{plotqtplot.lmscreg}}. } } \details{ The `primary' variable is defined as the main covariate upon which the regression or smoothing is performed. For example, in medical studies, it is often the age. In \pkg{VGAM}, it is possible to handle more than one covariate; however, the primary variable must be the first term after the intercept. } \value{ A list with the following components. \item{fitted.values }{A vector of fitted percentile values. } \item{percentiles }{The percentiles used. } } \references{ Yee, T. W. (2004) Quantile regression via vector generalized additive models. \emph{Statistics in Medicine}, \bold{23}, 2295--2315. Documentation accompanying the \pkg{VGAM} package at \url{http://www.stat.auckland.ac.nz/~yee} contains further information and examples. } \author{ Thomas W. Yee } \note{ \code{\link{plotqtplot.lmscreg}} does the actual plotting. } \seealso{ \code{\link{plotqtplot.lmscreg}}, \code{\link{deplot.lmscreg}}, \code{\link{lms.bcn}}, \code{\link{lms.bcg}}, \code{\link{lms.yjn}}. } \examples{\dontrun{ fit <- vgam(BMI ~ s(age, df = c(4, 2)), lms.bcn(zero = 1), data = bmi.nz) qtplot(fit) qtplot(fit, perc = c(25, 50, 75, 95), lcol = "blue", tcol = "blue", llwd = 2) } } \keyword{graphs} \keyword{models} \keyword{regression} VGAM/man/qtplot.gumbel.Rd0000644000176000001440000000754612136651105014703 0ustar ripleyusers\name{qtplot.gumbel} \alias{qtplot.gumbel} \alias{qtplot.egumbel} %- Also NEED an '\alias' for EACH other topic documented here.
\title{ Quantile Plot for Gumbel Regression } \description{ Plots quantiles associated with a Gumbel model. } \usage{ qtplot.gumbel(object, plot.it = TRUE, y.arg = TRUE, spline.fit = FALSE, label = TRUE, R = object@misc$R, percentiles = object@misc$percentiles, add.arg = FALSE, mpv = object@misc$mpv, xlab = NULL, ylab = "", main = "", pch = par()$pch, pcol.arg = par()$col, llty.arg = par()$lty, lcol.arg = par()$col, llwd.arg = par()$lwd, tcol.arg = par()$col, tadj = 1, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{object}{ A \pkg{VGAM} extremes model of the Gumbel type, produced by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}} with a family function either \code{"gumbel"} or \code{"egumbel"}. } \item{plot.it}{ Logical. Plot it? If \code{FALSE} no plot will be done. } \item{y.arg}{ Logical. Add the raw data on to the plot? } \item{spline.fit}{ Logical. Use a spline fit through the fitted percentiles? This can be useful if there are large gaps between some values along the covariate. } \item{label}{ Logical. Label the percentiles? } \item{R}{ See \code{\link{gumbel}}. } \item{percentiles}{ See \code{\link{gumbel}}. } \item{add.arg}{ Logical. Add the plot to an existing plot? } \item{mpv}{ See \code{\link{gumbel}}. } \item{xlab}{ Caption for the x-axis. See \code{\link[graphics]{par}}. } \item{ylab}{ Caption for the y-axis. See \code{\link[graphics]{par}}. } \item{main}{ Title of the plot. See \code{\link[graphics]{title}}. } \item{pch}{ Plotting character. See \code{\link[graphics]{par}}. } \item{pcol.arg}{ Color of the points. See the \code{col} argument of \code{\link[graphics]{par}}. } \item{llty.arg}{ Line type. See the \code{lty} argument of \code{\link[graphics]{par}}. } \item{lcol.arg}{ Color of the lines. See the \code{col} argument of \code{\link[graphics]{par}}. } \item{llwd.arg}{ Line width. See the \code{lwd} argument of \code{\link[graphics]{par}}. } \item{tcol.arg}{ Color of the text (if \code{label} is \code{TRUE}). See the \code{col} argument of \code{\link[graphics]{par}}. } \item{tadj}{ Text justification. See the \code{adj} argument of \code{\link[graphics]{par}}. } \item{\dots}{ Arguments passed into the \code{plot} function when setting up the entire plot. Useful arguments here include \code{sub} and \code{las}. } } \details{ There should be a single covariate such as time. The quantiles specified by \code{percentiles} are plotted. } \value{ The object with a list called \code{qtplot} in the \code{post} slot of \code{object}. (If \code{plot.it = FALSE} then just the list is returned.) The list contains components \item{fitted.values}{ The percentiles of the response, possibly including the MPV. } \item{percentiles}{ The percentiles (a small vector of values between 0 and 100). } } %\references{ ~put references to the literature/web site here ~ } \author{ Thomas W. Yee } \note{ Unlike \code{\link{gumbel}}, one cannot have \code{percentiles = NULL}. } \seealso{ \code{\link{gumbel}}.
} \examples{ ymat <- as.matrix(venice[, paste("r", 1:10, sep = "")]) fit1 <- vgam(ymat ~ s(year, df = 3), gumbel(R = 365, mpv = TRUE), data = venice, trace = TRUE, na.action = na.pass) head(fitted(fit1)) \dontrun{ par(mfrow = c(1, 1), bty = "l", xpd = TRUE, las = 1) qtplot(fit1, mpv = TRUE, lcol = c(1, 2, 5), tcol = c(1, 2, 5), lwd = 2, pcol = "blue", tadj = 0.4, ylab = "Sea level (cm)") qtplot(fit1, perc = 97, mpv = FALSE, lcol = 3, tcol = 3, lwd = 2, tadj = 0.4, add = TRUE) -> saved head(saved@post$qtplot$fitted) } } \keyword{graphs} \keyword{models} \keyword{regression} VGAM/man/qrrvglm.control.Rd0000644000176000001440000005244412136651105015260 0ustar ripleyusers\name{qrrvglm.control} \alias{qrrvglm.control} %- Also NEED an `\alias' for EACH other topic documented here. \title{ Control function for QRR-VGLMs (CQO) } \description{ Algorithmic constants and parameters for a constrained quadratic ordination (CQO), by fitting a \emph{quadratic reduced-rank vector generalized linear model} (QRR-VGLM), are set using this function. It is the control function for \code{\link{cqo}}. } \usage{ qrrvglm.control(Rank = 1, Bestof = if(length(Cinit)) 1 else 10, checkwz = TRUE, Cinit = NULL, Crow1positive = TRUE, epsilon = 1.0e-06, EqualTolerances = TRUE, Etamat.colmax = 10, FastAlgorithm = TRUE, GradientFunction = TRUE, Hstep = 0.001, isdlv = rep(c(2, 1, rep(0.5, length = Rank)), length = Rank), iKvector = 0.1, iShape = 0.1, ITolerances = FALSE, maxitl = 40, imethod = 1, Maxit.optim = 250, MUXfactor = rep(7, length = Rank), noRRR = ~ 1, Norrr = NA, optim.maxit = 20, Parscale = if(ITolerances) 0.001 else 1.0, SD.Cinit = 0.02, SmallNo = 5.0e-13, trace = TRUE, Use.Init.Poisson.QO = TRUE, wzepsilon = .Machine$double.eps^0.75, ...) } %- maybe also `usage' for other objects documented here. \arguments{ In the following, \eqn{R} is the \code{Rank}, \eqn{M} is the number of linear predictors, and \eqn{S} is the number of responses (species). Thus \eqn{M=S} for binomial and Poisson responses, and \eqn{M=2S} for the negative binomial and 2-parameter gamma distributions. \item{Rank}{ The numerical rank \eqn{R} of the model, i.e., the number of ordination axes. Must be an element from the set \{1,2,\ldots,min(\eqn{M},\eqn{p_2}{p2})\} where the vector of explanatory variables \eqn{x} is partitioned into (\eqn{x_1},\eqn{x_2}), which is of dimension \eqn{p_1+p_2}{p1+p2}. The variables making up \eqn{x_1} are given by the terms in the \code{noRRR} argument, and the rest of the terms comprise \eqn{x_2}. } \item{Bestof}{ Integer. The best of \code{Bestof} models fitted is returned. This argument helps guard against local solutions by (hopefully) finding the global solution from many fits. The argument has value 1 if an initial value for \eqn{C} is inputted using \code{Cinit}. } \item{checkwz}{ logical indicating whether the diagonal elements of the working weight matrices should be checked whether they are sufficiently positive, i.e., greater than \code{wzepsilon}. If not, any values less than \code{wzepsilon} are replaced with this value. } \item{Cinit}{ Optional initial \eqn{C} matrix, which must be a \eqn{p_2}{p2} by \eqn{R} matrix. The default is to apply \code{.Init.Poisson.QO()} to obtain initial values. } \item{Crow1positive}{ Logical vector of length \code{Rank} (recycled if necessary): are the elements of the first row of \eqn{C} positive? 
For example, if \code{Rank} is 4, then specifying \code{Crow1positive = c(FALSE, TRUE)} will force \eqn{C[1,1]} and \eqn{C[1,3]} to be negative, and \eqn{C[1,2]} and \eqn{C[1,4]} to be positive. This argument allows for a reflection in the ordination axes because the coefficients of the latent variables are unique up to a sign. }
\item{epsilon}{ Positive numeric. Used to test for convergence for GLMs fitted in C. Larger values mean a loosening of the convergence criterion. If an error code of 3 is reported, try increasing this value. }
\item{EqualTolerances}{ Logical indicating whether each (quadratic) predictor will have equal tolerances. Having \code{EqualTolerances = TRUE} can help avoid numerical problems, especially with binary data. Note that the estimated (common) tolerance matrix may or may not be positive-definite. If it is, then it can be scaled to the \eqn{R} by \eqn{R} identity matrix, i.e., made equivalent to \code{ITolerances = TRUE}. Setting \code{ITolerances = TRUE} will \emph{force} a common \eqn{R} by \eqn{R} identity matrix as the tolerance matrix onto the data even if it is not appropriate. In general, setting \code{ITolerances = TRUE} is preferred over \code{EqualTolerances = TRUE} because, if it works, it is much faster and uses less memory. However, \code{ITolerances = TRUE} requires the environmental variables to be scaled appropriately. See \bold{Details} for more details. }
% \item{Eta.range}{ Numerical vector of length 2 or \code{NULL}.
% Gives the lower and upper bounds on the values that can be taken
% by the quadratic predictor (i.e., on the eta-scale).
% Since \code{FastAlgorithm = TRUE}, this argument should be ignored.
% }
\item{Etamat.colmax}{ Positive integer, no smaller than \code{Rank}. Controls the amount of memory used by \code{.Init.Poisson.QO()}. It is the maximum number of columns allowed for the pseudo-response and its weights. In general, the larger the value, the better the initial value. Used only if \code{Use.Init.Poisson.QO = TRUE}. }
\item{FastAlgorithm}{ Logical. Whether a new fast algorithm is to be used. The fast algorithm results in a large speed increase compared to Yee (2004). Some details of the fast algorithm are found in Appendix A of Yee (2006). Setting \code{FastAlgorithm = FALSE} will give an error. }
\item{GradientFunction}{ Logical. Whether \code{\link[stats]{optim}}'s argument \code{gr} is used or not, i.e., to compute gradient values. Used only if \code{FastAlgorithm} is \code{TRUE}. The default value is usually faster on most problems. }
\item{Hstep}{ Positive value. Used as the step size in the finite difference approximation to the derivatives by \code{\link[stats]{optim}}.
% Used only if \code{FastAlgorithm} is \code{TRUE}.
}
\item{isdlv}{ Initial standard deviations for the latent variables (site scores). Numeric, positive and of length \eqn{R} (recycled if necessary). This argument is used only if \code{ITolerances = TRUE}. Used by \code{.Init.Poisson.QO()} to obtain initial values for the constrained coefficients \eqn{C}, adjusted to a reasonable value. It adjusts the spread of the site scores relative to a common species tolerance of 1 for each ordination axis. A value between 0.5 and 10 is recommended; a value such as 10 means that the range of the environmental space is very large relative to the niche width of the species. The successive values should decrease because the first ordination axis should have the most spread of site scores, followed by the second ordination axis, etc.
} \item{iKvector, iShape}{ Numeric, recycled to length \eqn{S} if necessary. Initial values used for estimating the positive \eqn{k} and \eqn{\lambda}{lambda} parameters of the negative binomial and 2-parameter gamma distributions respectively. For further information see \code{\link{negbinomial}} and \code{\link{gamma2}}. These arguments override the \code{ik} and \code{ishape} arguments in \code{\link{negbinomial}} and \code{\link{gamma2}}. }
\item{ITolerances}{ Logical. If \code{TRUE} then the (common) tolerance matrix is the \eqn{R} by \eqn{R} identity matrix by definition. Note that having \code{ITolerances = TRUE} implies \code{EqualTolerances = TRUE}, but not vice versa. Internally, the quadratic terms will be treated as offsets (in GLM jargon) and so the models can potentially be fitted very efficiently. \emph{However, it is a very good idea to center and scale all numerical variables in the \eqn{x_2} vector}. See \bold{Details} for more details. The success of \code{ITolerances = TRUE} often depends on suitable values for \code{isdlv} and/or \code{MUXfactor}. }
\item{maxitl}{ Maximum number of times the optimizer is called or restarted. Most users should ignore this argument. }
\item{imethod}{ Method of initialization. A positive integer (1, 2, 3, etc.), depending on the \pkg{VGAM} family function. Currently it is used for \code{\link{negbinomial}} and \code{\link{gamma2}} only, and is used within the C code. }
\item{Maxit.optim}{ Positive integer. Number of iterations given to the function \code{\link[stats]{optim}} at each of the \code{optim.maxit} iterations. }
\item{MUXfactor}{ Multiplication factor for detecting large offset values. Numeric, positive and of length \eqn{R} (recycled if necessary). This argument is used only if \code{ITolerances = TRUE}. Offsets are \eqn{-0.5} multiplied by the sum of the squares of all \eqn{R} latent variable values. If the latent variable values are too large then this will result in numerical problems. By too large, it is meant that the standard deviations of the latent variable values are greater than \code{MUXfactor[r] * isdlv[r]} for \code{r=1:Rank} (this is why centering and scaling all the numerical predictor variables in \eqn{x_2} is recommended). A value of about 3 or 4 is recommended. If failure to converge occurs, try a slightly lower value. }
\item{optim.maxit}{ Positive integer. Number of times \code{\link[stats]{optim}} is invoked. At iteration \code{i}, the \code{i}th value of \code{Maxit.optim} is fed into \code{\link[stats]{optim}}. }
\item{noRRR}{ Formula giving terms that are \emph{not} to be included in the reduced-rank regression (or formation of the latent variables), i.e., those belonging to \eqn{x_1}. Those variables which do not make up the latent variable (reduced-rank regression) correspond to the \eqn{B_1}{B_1} matrix. The default is to omit the intercept term from the latent variables. }
\item{Norrr}{ Defunct. Please use \code{noRRR}. Use of \code{Norrr} will become an error soon. }
\item{Parscale}{ Numerical and positive-valued vector of length \eqn{C} (recycled if necessary). Passed into \code{optim(..., control = list(parscale = Parscale))}; the elements of \eqn{C} become \eqn{C} / \code{Parscale}. Setting \code{ITolerances = TRUE} results in line searches that are very large, therefore \eqn{C} has to be scaled accordingly to avoid large step sizes. See \bold{Details} for more information. It is probably best to leave this argument alone. }
\item{SD.Cinit}{ Standard deviation of the initial values for the elements of \eqn{C}.
These are normally distributed with mean zero. This argument is used only if \code{Use.Init.Poisson.QO = FALSE} and \eqn{C} is not inputted using \code{Cinit}. } \item{trace}{ Logical indicating if output should be produced for each iteration. The default is \code{TRUE} because the calculations are numerically intensive, meaning it may take a long time, so that the user might think the computer has locked up if \code{trace = FALSE}. } % \item{Kinit}{ Initial values for the index parameters \code{k} in the % negative binomial distribution (one per species). % In general, a smaller number is preferred over a larger number. % The vector is recycled to the number of responses (species). % The argument is currently not used. % } % \item{Dzero}{ Integer vector specifying which squared terms % are to be zeroed. These linear predictors will correspond to % a RR-VGLM. % The values must be elements from the set \{1,2,\ldots,\eqn{M}\}. % Used only if \code{Quadratic = TRUE} and \code{FastAlgorithm = FALSE}. % } \item{SmallNo}{ Positive numeric between \code{.Machine$double.eps} and \code{0.0001}. Used to avoid under- or over-flow in the IRLS algorithm. Used only if \code{FastAlgorithm} is \code{TRUE}. } \item{Use.Init.Poisson.QO}{ Logical. If \code{TRUE} then the function \code{.Init.Poisson.QO()} is used to obtain initial values for the canonical coefficients \eqn{C}. If \code{FALSE} then random numbers are used instead. } \item{wzepsilon}{ Small positive number used to test whether the diagonals of the working weight matrices are sufficiently positive. } \item{\dots}{ Ignored at present. } } \details{ Recall that the central formula for CQO is \deqn{\eta = B_1^T x_1 + A \nu + \sum_{m=1}^M (\nu^T D_m \nu) e_m}{% eta = B_1^T x_1 + A nu + sum_{m=1}^M (nu^T D_m nu) e_m} where \eqn{x_1}{x_1} is a vector (usually just a 1 for an intercept), \eqn{x_2}{x_2} is a vector of environmental variables, \eqn{\nu=C^T x_2}{nu=C^T x_2} is a \eqn{R}-vector of latent variables, \eqn{e_m} is a vector of 0s but with a 1 in the \eqn{m}th position. QRR-VGLMs are an extension of RR-VGLMs and allow for maximum likelihood solutions to constrained quadratic ordination (CQO) models. % For the fitting of QRR-VGLMs, the default is that the \eqn{C} matrix % (containing the \emph{canonical} or \emph{constrained coefficients} % corresponding to \eqn{x_2}) % is constrained by forcing the latent variables to have sample % variance-covariance matrix equalling \code{diag(Rank)}, i.e., % unit variance and uncorrelated. The tolerance matrices are, in % general, diagonal under such a constraint. Having \code{ITolerances = TRUE} means all the tolerance matrices are the order-\eqn{R} identity matrix, i.e., it \emph{forces} bell-shaped curves/surfaces on all species. This results in a more difficult optimization problem (especially for 2-parameter models such as the negative binomial and gamma) because of overflow errors and it appears there are more local solutions. To help avoid the overflow errors, scaling \eqn{C} by the factor \code{Parscale} can help enormously. Even better, scaling \eqn{C} by specifying \code{isdlv} is more understandable to humans. If failure to converge occurs, try adjusting \code{Parscale}, or better, setting \code{EqualTolerances = TRUE} (and hope that the estimated tolerance matrix is positive-definite). To fit an equal-tolerances model, it is firstly best to try setting \code{ITolerances = TRUE} and varying \code{isdlv} and/or \code{MUXfactor} if it fails to converge. 
If it still fails to converge after many attempts, try setting \code{EqualTolerances = TRUE}, however this will usually be a lot slower because it requires a lot more memory. With a \eqn{R > 1} model, the latent variables are always uncorrelated, i.e., the variance-covariance matrix of the site scores is a diagonal matrix. If setting \code{EqualTolerances = TRUE} is used and the common estimated tolerance matrix is positive-definite then that model is effectively the same as the \code{ITolerances = TRUE} model (the two are transformations of each other). In general, \code{ITolerances = TRUE} is numerically more unstable and presents a more difficult problem to optimize; the arguments \code{isdlv} and/or \code{MUXfactor} often must be assigned some good value(s) (possibly found by trial and error) in order for convergence to occur. Setting \code{ITolerances = TRUE} \emph{forces} a bell-shaped curve or surface onto all the species data, therefore this option should be used with deliberation. If unsuitable, the resulting fit may be very misleading. Usually it is a good idea for the user to set \code{EqualTolerances = FALSE} to see which species appear to have a bell-shaped curve or surface. Improvements to the fit can often be achieved using transformations, e.g., nitrogen concentration to log nitrogen concentration. Fitting a CAO model (see \code{\link{cao}}) first is a good idea for pre-examining the data and checking whether it is appropriate to fit a CQO model. %Suppose \code{FastAlgorithm = FALSE}. In theory (if %\code{Eta.range = NULL}), for QRR-VGLMs, the predictors have the values of %a quadratic form. However, when \code{Eta.range} is assigned a numerical %vector of length 2 (giving the endpoints of an interval), then those %values lying outside the interval are assigned the closest boundary %value. The \code{Eta.range} argument is provided to help avoid %numerical problems associated with the inner minimization problem. A %consequence of this is that the fitted values are bounded, e.g., between %\code{1/(1+exp(-Eta.range[1]))} and \code{1/(1+exp(-Eta.range[2]))} for %binary data (logit link), and greater than \code{exp(Eta.range[1])} for %Poisson data (log link). It is suggested that, for binary responses, %\code{c(-16, 16)} be used, and for Poisson responses, \code{c(-16, Inf)} %be used. The value \code{NULL} corresponds to \code{c(-Inf, Inf)}. } \value{ A list with components matching the input names. } \references{ Yee, T. W. (2004) A new technique for maximum-likelihood canonical Gaussian ordination. \emph{Ecological Monographs}, \bold{74}, 685--701. Yee, T. W. (2006) Constrained additive ordination. \emph{Ecology}, \bold{87}, 203--213. } \author{ Thomas W. Yee } \note{ When \code{ITolerances = TRUE} it is a good idea to apply \code{\link[base]{scale}} to all the numerical variables that make up the latent variable, i.e., those of \eqn{x_2}{x_2}. This is to make them have mean 0, and hence avoid large offset values which cause numerical problems. This function has many arguments that are common with \code{\link{rrvglm.control}} and \code{\link{vglm.control}}. It is usually a good idea to try fitting a model with \code{ITolerances = TRUE} first, and if convergence is unsuccessful, then try \code{EqualTolerances = TRUE} and \code{ITolerances = FALSE}. 
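For instance, a fallback strategy along these lines may be tried (a sketch only: the response matrix \code{ymat}, the covariates \code{x2} and \code{x3}, and the data frame \code{mydata} are placeholders):
\preformatted{
fit <- try(cqo(ymat ~ x2 + x3, poissonff, data = mydata,
               ITolerances = TRUE), silent = TRUE)
if (inherits(fit, "try-error"))  # Fall back: slower and uses more memory
  fit <- cqo(ymat ~ x2 + x3, poissonff, data = mydata,
             EqualTolerances = TRUE, ITolerances = FALSE)
}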
Ordination diagrams with \code{EqualTolerances = TRUE} have a natural interpretation, but with \code{EqualTolerances = FALSE} they are more complicated and require, e.g., contours to be overlaid on the ordination diagram (see \code{\link{lvplot.qrrvglm}}).
% and/or use the \code{Eta.range} argument.
In the example below, an equal-tolerances CQO model is fitted to the hunting spiders data. Because \code{ITolerances = TRUE}, it is a good idea to center all the \eqn{x_2} variables first. Upon fitting the model, the actual standard deviations of the site scores are computed. Ideally, the \code{isdlv} argument should have had this value for the best chances of getting good initial values. For comparison, the model is refitted with that value, and it should run faster and more reliably. }
\section{Warning }{ The default value of \code{Bestof} is a bare minimum for many datasets; therefore it will be necessary to increase its value to increase the chances of obtaining the global solution.
%Suppose \code{FastAlgorithm = FALSE}.
%The fitted values of QRR-VGLMs can be restricted to lie between two values
%in order to help make the computation numerically stable. For some data
%sets, it may be necessary to use \code{Eta.range} to obtain convergence;
%however, the fitted values etc. will no longer be accurate, especially at
%small and/or large values. Convergence is slower when \code{Eta.range}
%is used to restrict values.
}
\seealso{ \code{\link{cqo}}, \code{\link{rcqo}}, \code{\link{Coef.qrrvglm}}, \code{\link{Coef.qrrvglm-class}},
% \code{\link{rrvglm}},
% \code{\link{rrvglm.control}},
% \code{\link{rrvglm.optim.control}},
\code{\link[stats]{optim}}, \code{\link{binomialff}}, \code{\link{poissonff}}, \code{\link{negbinomial}}, \code{\link{gamma2}}, \code{\link{gaussianff}}. }
\examples{
\dontrun{
# Poisson CQO with equal tolerances
set.seed(111)  # This leads to the global solution
hspider[, 1:6] <- scale(hspider[, 1:6])  # Good idea when ITolerances = TRUE
p1 <- cqo(cbind(Alopacce, Alopcune, Alopfabr, Arctlute, Arctperi, Auloalbi,
                Pardlugu, Pardmont, Pardnigr, Pardpull, Trocterr, Zoraspin) ~
          WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux,
          quasipoissonff, data = hspider, EqualTolerances = TRUE)
sort(p1@misc$deviance.Bestof)  # A history of all the iterations

(isdlv <- apply(lv(p1), 2, sd))  # Should be approx isdlv

# Refit the model with better initial values
set.seed(111)  # This leads to the global solution
p1 <- cqo(cbind(Alopacce, Alopcune, Alopfabr, Arctlute, Arctperi, Auloalbi,
                Pardlugu, Pardmont, Pardnigr, Pardpull, Trocterr, Zoraspin) ~
          WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux,
          ITolerances = TRUE, isdlv = isdlv,  # Note the use of isdlv here
          quasipoissonff, data = hspider)
sort(p1@misc$deviance.Bestof)  # A history of all the iterations
}
}
\keyword{models}
\keyword{regression}

%\dontrun{
%# 20120221; withdrawn for a while because it creates a lot of error messages.
%# Negative binomial CQO; smallest deviance is about 275.389
%set.seed(1234)  # This leads to a reasonable (but not the global) solution?
%nb1 <- cqo(cbind(Alopacce, Alopcune, Alopfabr, Arctlute, Arctperi, Auloalbi,
%           Pardlugu, Pardmont, Pardnigr, Pardpull, Trocterr, Zoraspin) ~
%           WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux,
%           ITol = FALSE, EqualTol = TRUE, # A good idea for negbinomial
%           fam = negbinomial, data = hspider)
%sort(nb1@misc$deviance.Bestof) # A history of all the iterations
%summary(nb1)
%}
%\dontrun{ lvplot(nb1, lcol = 1:12, y = TRUE, pcol = 1:12) }
VGAM/man/put.smart.Rd0000644000176000001440000000333212136651105014034 0ustar ripleyusers\name{put.smart}
\alias{put.smart}
\title{ Adds a List to the End of the List ``.smart.prediction'' }
\description{ Adds a list to the end of the list \code{.smart.prediction} in \code{smartpredenv} (\R) or frame 1 (S-PLUS). }
\usage{ put.smart(smart) }
\arguments{
\item{smart}{ a list containing parameters needed later for smart prediction. }
}
\value{ Nothing is returned. }
\section{Side Effects}{ The variable \code{.smart.prediction.counter} in \code{smartpredenv} (\R) or frame 1 (S-PLUS) is incremented beforehand, and \code{.smart.prediction[[.smart.prediction.counter]]} is assigned the list \code{smart}. If the list \code{.smart.prediction} in \code{smartpredenv} (\R) or frame 1 (S-PLUS) is not long enough to hold \code{smart}, then it is made larger, and the variable \code{.max.smart} in \code{smartpredenv} (\R) or frame 1 (S-PLUS) is adjusted accordingly. }
\details{ \code{put.smart} is used in \code{"write"} mode within a smart function. It saves parameters at the time of model fitting, which are later used for prediction. The function \code{put.smart} is the opposite of \code{\link{get.smart}}, and both deal with the same contents. }
\seealso{ \code{\link{get.smart}}. }
\examples{
"my1" <- function(x, minx = min(x)) { # Here is a smart function
  x <- x # Needed for nested calls, e.g., bs(scale(x))
  if (smart.mode.is("read")) {
    smart <- get.smart()
    minx <- smart$minx # Overwrite its value
  } else if (smart.mode.is("write"))
    put.smart(list(minx = minx))
  sqrt(x - minx)
}
attr(my1, "smart") <- TRUE
}
%\keyword{smart}
\keyword{models}
\keyword{regression}
% Converted by Sd2Rd version 1.10.
VGAM/man/prplot.Rd0000644000176000001440000000464412136651105013426 0ustar ripleyusers\name{prplot}
\alias{prplot}
\alias{prplot.control}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Probability Plots for Categorical Data Analysis }
\description{ Plots the fitted probabilities for some very simplified special cases of categorical data analysis models. }
\usage{
prplot(object, control = prplot.control(...), ...)

prplot.control(xlab = NULL, ylab = "Probability", main = NULL, xlim = NULL,
    ylim = NULL, lty = par()$lty, col = par()$col, rcol = par()$col,
    lwd = par()$lwd, rlwd = par()$lwd, las = par()$las, rug.arg = FALSE, ...)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{object}{ Currently only a \code{\link{cumulative}} object. This includes a \code{\link{propodds}} object since that \pkg{VGAM} family function is a special case of \code{\link{cumulative}}. }
\item{control}{ List containing some basic graphical parameters. }
\item{xlab, ylab, main, xlim, ylim, lty }{ See \code{\link[graphics]{par}} and \code{...} below. }
\item{col, rcol, lwd, rlwd, las, rug.arg}{ See \code{\link[graphics]{par}} and \code{...} below. Arguments starting with \code{r} refer to the rug. Argument \code{rug.arg} is logical: add a rug for the distinct values of the explanatory variable?
} \item{\dots}{ Arguments such as \code{xlab} which are fed into \code{prplot.control()}. Only a small selection of graphical arguments from \code{\link[graphics]{par}} are offered. } } \details{ For models involving one term in the RHS of the formula this function plots the fitted probabilities against the single explanatory variable. } \value{ The object is returned invisibly with the \code{preplot} slot assigned. This is obtained by a call to \code{plotvgam()}. } %\references{ %% ~put references to the literature/web site here ~ %} %\author{ %T. W. Yee %} \note{ This function is rudimentary. } %% ~Make other sections like Warning with \section{Warning }{....} ~ \seealso{ \code{\link{cumulative}}. } \examples{ pneumo <- transform(pneumo, let = log(exposure.time)) fit <- vglm(cbind(normal, mild, severe) ~ let, propodds, pneumo) M <- npred(fit) # Or fit@misc$M \dontrun{ prplot(fit) prplot(fit, lty = 1:M, col = (1:M)+2, rug = TRUE, las = 1, ylim = c(0, 1), rlwd = 2) } } % Add one or more standard keywords, see file 'KEYWORDS' in the % R documentation directory. \keyword{graphs} \keyword{models} \keyword{regression} VGAM/man/propodds.Rd0000644000176000001440000000567212136651105013742 0ustar ripleyusers\name{propodds} \alias{propodds} \title{ Proportional Odds Model for Ordinal Regression } \description{ Fits the proportional odds model to a (preferably ordered) factor response. } \usage{ propodds(reverse = TRUE, whitespace = FALSE) } \arguments{ \item{reverse, whitespace}{ Logical. Fed into arguments of the same name in \code{\link{cumulative}}. } } \details{ The \emph{proportional odds model} is a special case from the class of \emph{cumulative link models}. It involves a logit link applied to cumulative probabilities and a strong \emph{parallelism} assumption. A parallelism assumption means there is less chance of numerical problems because the fitted probabilities will remain between 0 and 1; however the \emph{parallelism} assumption ought to be checked, e.g., via a likelihood ratio test. This \pkg{VGAM} family function is merely a shortcut for \code{cumulative(reverse = reverse, link = "logit", parallel = TRUE)}. Please see \code{\link{cumulative}} for more details on this model. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Agresti, A. (2010) \emph{Analysis of Ordinal Categorical Data}, 2nd ed. New York: Wiley. Yee, T. W. (2010) The \pkg{VGAM} package for categorical data analysis. \emph{Journal of Statistical Software}, \bold{32}, 1--34. \url{http://www.jstatsoft.org/v32/i10/}. Yee, T. W. and Wild, C. J. (1996) Vector generalized additive models. \emph{Journal of the Royal Statistical Society, Series B, Methodological}, \bold{58}, 481--493. Documentation accompanying the \pkg{VGAM} package at \url{http://www.stat.auckland.ac.nz/~yee} contains further information and examples. } \author{ Thomas W. Yee } \section{Warning }{ No check is made to verify that the response is ordinal if the response is a matrix; see \code{\link[base:factor]{ordered}}. } \seealso{ \code{\link{cumulative}}. 
} \examples{ # Fit the proportional odds model, p.179, in McCullagh and Nelder (1989) pneumo <- transform(pneumo, let = log(exposure.time)) (fit <- vglm(cbind(normal, mild, severe) ~ let, propodds, pneumo)) depvar(fit) # Sample proportions weights(fit, type = "prior") # Number of observations coef(fit, matrix = TRUE) constraints(fit) # Constraint matrices summary(fit) # Check that the model is linear in let ---------------------- fit2 <- vgam(cbind(normal, mild, severe) ~ s(let, df = 2), propodds, pneumo) \dontrun{ plot(fit2, se = TRUE, lcol = 2, scol = 2) } # Check the proportional odds assumption with a LRT ---------- (fit3 <- vglm(cbind(normal, mild, severe) ~ let, cumulative(parallel = FALSE, reverse = TRUE), pneumo)) pchisq(deviance(fit) - deviance(fit3), df = df.residual(fit) - df.residual(fit3), lower.tail = FALSE) lrtest(fit3, fit) # Easier } \keyword{models} \keyword{regression} % pneumo$let <- log(pneumo$exposure.time) VGAM/man/probit.Rd0000644000176000001440000000512712136651105013402 0ustar ripleyusers\name{probit} \alias{probit} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Probit Link Function } \description{ Computes the probit transformation, including its inverse and the first two derivatives. } \usage{ probit(theta, bvalue = NULL, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{theta}{ Numeric or character. See below for further details. } \item{bvalue}{ See \code{\link{Links}}. } \item{inverse, deriv, short, tag}{ Details at \code{\link{Links}}. } } \details{ The probit link function is commonly used for parameters that lie in the unit interval. Numerical values of \code{theta} close to 0 or 1 or out of range result in \code{Inf}, \code{-Inf}, \code{NA} or \code{NaN}. } \value{ For \code{deriv = 0}, the probit of \code{theta}, i.e., \code{qnorm(theta)} when \code{inverse = FALSE}, and if \code{inverse = TRUE} then \code{pnorm(theta)}. For \code{deriv = 1}, then the function returns \emph{d} \code{theta} / \emph{d} \code{eta} as a function of \code{theta} if \code{inverse = FALSE}, else if \code{inverse = TRUE} then it returns the reciprocal. } \references{ McCullagh, P. and Nelder, J. A. (1989) \emph{Generalized Linear Models}, 2nd ed. London: Chapman & Hall. } \author{ Thomas W. Yee } \note{ Numerical instability may occur when \code{theta} is close to 1 or 0. One way of overcoming this is to use \code{bvalue}. In terms of the threshold approach with cumulative probabilities for an ordinal response this link function corresponds to the univariate normal distribution (see \code{\link{normal1}}). } \seealso{ \code{\link{Links}}, \code{\link{logit}}, \code{\link{cloglog}}, \code{\link{cauchit}}. 
} \examples{ p <- seq(0.01, 0.99, by = 0.01) probit(p) max(abs(probit(probit(p), inverse = TRUE) - p)) # Should be 0 p <- c(seq(-0.02, 0.02, by = 0.01), seq(0.97, 1.02, by = 0.01)) probit(p) # Has NAs probit(p, bvalue = .Machine$double.eps) # Has no NAs \dontrun{p <- seq(0.01, 0.99, by = 0.01); par(lwd = (mylwd <- 2)) plot(p, logit(p), type = "l", col = "limegreen", ylab = "transformation", las = 1, main = "Some probability link functions") lines(p, probit(p), col = "purple") lines(p, cloglog(p), col = "chocolate") lines(p, cauchit(p), col = "tan") abline(v = 0.5, h = 0, lty = "dashed") legend(0.1, 4.0, c("logit", "probit", "cloglog", "cauchit"), col = c("limegreen", "purple", "chocolate", "tan"), lwd = mylwd) par(lwd = 1) } } \keyword{math} \keyword{models} \keyword{regression} VGAM/man/prentice74.Rd0000644000176000001440000000663312136651105014072 0ustar ripleyusers\name{prentice74} \alias{prentice74} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Prentice (1974) Log-gamma Distribution } \description{ Estimation of a 3-parameter log-gamma distribution described by Prentice (1974). } \usage{ prentice74(llocation = "identity", lscale = "loge", lshape = "identity", ilocation = NULL, iscale = NULL, ishape = NULL, zero = 2:3) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{llocation, lscale, lshape}{ Parameter link function applied to the location parameter \eqn{a}, positive scale parameter \eqn{b} and the shape parameter \eqn{q}, respectively. See \code{\link{Links}} for more choices. } \item{ilocation, iscale}{ Initial value for \eqn{a} and \eqn{b}, respectively. The defaults mean an initial value is determined internally for each. } \item{ishape}{ Initial value for \eqn{q}. If failure to converge occurs, try some other value. The default means an initial value is determined internally. } \item{zero}{ An integer-valued vector specifying which linear/additive predictors are modelled as intercepts-only. The values must be from the set \{1,2,3\}. See \code{\link{CommonVGAMffArguments}} for more information. } } \details{ The probability density function is given by \deqn{f(y;a,b,q) = |q|\,\exp(w/q^2 - e^w) / (b \, \Gamma(1/q^2)),}{% f(y;a,b,q) = |q| * exp(w/q^2 - e^w) / (b*gamma(1/q^2)),} for shape parameter \eqn{q \ne 0}{q != 0}, positive scale parameter \eqn{b > 0}, location parameter \eqn{a}, and all real \eqn{y}. Here, \eqn{w = (y-a)q/b+\psi(1/q^2)}{w = (y-a)*q/b+psi(1/q^2)} where \eqn{\psi}{psi} is the digamma function, \code{\link[base:Special]{digamma}}. The mean of \eqn{Y} is \eqn{a} (returned as the fitted values). This is a different parameterization compared to \code{\link{lgamma3ff}}. Special cases: \eqn{q = 0} is the normal distribution with standard deviation \eqn{b}, \eqn{q = -1} is the extreme value distribution for maxima, \eqn{q = 1} is the extreme value distribution for minima (Weibull). If \eqn{q > 0} then the distribution is left skew, else \eqn{q < 0} is right skew. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Prentice, R. L. (1974) A log gamma model and its maximum likelihood estimation. \emph{Biometrika}, \bold{61}, 539--544. %On Maximisation of the Likelihood for the Generalised Gamma Distribution. %Angela Noufaily & M.C. 
Jones, %29-Oct-2009, %\url{http://stats-www.open.ac.uk/TechnicalReports/} } \section{Warning }{ The special case \eqn{q = 0} is not handled, therefore estimates of \eqn{q} too close to zero may cause numerical problems. } \author{ T. W. Yee } \note{ The notation used here differs from Prentice (1974): \eqn{\alpha = a}{alpha = a}, \eqn{\sigma = b}{sigma = b}. Fisher scoring is used. } \seealso{ \code{\link{lgamma3ff}}, \code{\link[base:Special]{lgamma}}, \code{\link{gengamma}}. } \examples{ pdata <- data.frame(x2 = runif(nn <- 1000)) pdata <- transform(pdata, loc = -1 + 2*x2, Scale = exp(1)) pdata <- transform(pdata, y = rlgamma(nn, loc = loc, scale = Scale, k = 1)) fit <- vglm(y ~ x2, prentice74(zero = 2:3), pdata, trace = TRUE) coef(fit, matrix = TRUE) # Note the coefficients for location } \keyword{models} \keyword{regression} VGAM/man/predictvglm.Rd0000644000176000001440000001002412136651105014413 0ustar ripleyusers\name{predictvglm} \alias{predictvglm} %- Also NEED an '\alias' for EACH other topic documented here. \title{Predict Method for a VGLM fit} \description{ Predicted values based on a vector generalized linear model (VGLM) object. } \usage{ predictvglm(object, newdata = NULL, type = c("link", "response", "terms"), se.fit = FALSE, deriv = 0, dispersion = NULL, untransform = FALSE, extra = object@extra, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{object}{ Object of class inheriting from \code{"vlm"}, e.g., \code{\link{vglm}}. } \item{newdata}{ An optional data frame in which to look for variables with which to predict. If omitted, the fitted linear predictors are used. } \item{type}{ The value of this argument can be abbreviated. The type of prediction required. The default is the first one, meaning on the scale of the linear predictors. This should be a \eqn{n \times M}{n x M} matrix. The alternative \code{"response"} is on the scale of the response variable, and depending on the family function, this may or may not be the mean. Often this is the fitted value, e.g., \code{fitted(vglmObject)} (see \code{\link{fittedvlm}}). Note that the response is output from the \code{@linkinv} slot, where the \code{eta} argument is the \eqn{n \times M}{n x M} matrix of linear predictors. The \code{"terms"} option returns a matrix giving the fitted values of each term in the model formula on the linear predictor scale. The terms have been centered. } \item{se.fit}{ logical: return standard errors? } \item{deriv}{ Non-negative integer. Currently this must be zero. Later, this may be implemented for general values. } \item{dispersion}{ Dispersion parameter. This may be inputted at this stage, but the default is to use the dispersion parameter of the fitted model. } \item{extra}{ A list containing extra information. This argument should be ignored. } \item{untransform}{ Logical. Reverses any parameter link function. This argument only works if \code{type = "link", se.fit = FALSE, deriv = 0}. } \item{\dots}{Arguments passed into \code{predictvlm}. } } \details{ Obtains predictions and optionally estimates standard errors of those predictions from a fitted \code{\link{vglm}} object. This code implements \emph{smart prediction} (see \code{\link{smartpred}}). } \value{ If \code{se.fit = FALSE}, a vector or matrix of predictions. If \code{se.fit = TRUE}, a list with components \item{fitted.values}{Predictions} \item{se.fit}{Estimated standard errors} \item{df}{Degrees of freedom} \item{sigma}{The square root of the dispersion parameter} } \references{ Yee, T. W. 
and Hastie, T. J. (2003) Reduced-rank vector generalized linear models. \emph{Statistical Modelling}, \bold{3}, 15--41. } \author{ Thomas W. Yee } \note{ Setting \code{se.fit = TRUE} and \code{type = "response"} will generate an error. } \section{Warning }{ This function may change in the future. } \seealso{ \code{\link[stats]{predict}}, \code{\link{vglm}}, \code{predictvlm}, \code{\link{smartpred}}. } \examples{ # Illustrates smart prediction pneumo <- transform(pneumo, let = log(exposure.time)) fit <- vglm(cbind(normal, mild, severe) ~ poly(c(scale(let)), 2), propodds, data = pneumo, trace = TRUE, x.arg = FALSE) class(fit) (q0 <- head(predict(fit))) (q1 <- predict(fit, newdata = head(pneumo))) (q2 <- predict(fit, newdata = head(pneumo))) all.equal(q0, q1) # Should be TRUE all.equal(q1, q2) # Should be TRUE head(predict(fit)) head(predict(fit, untransform = TRUE)) p0 <- head(predict(fit, type = "response")) p1 <- head(predict(fit, type = "response", newdata = pneumo)) p2 <- head(predict(fit, type = "response", newdata = pneumo)) p3 <- head(fitted(fit)) all.equal(p0, p1) # Should be TRUE all.equal(p1, p2) # Should be TRUE all.equal(p2, p3) # Should be TRUE predict(fit, type = "terms", se = TRUE) } \keyword{models} \keyword{regression} VGAM/man/predictqrrvglm.Rd0000644000176000001440000000443112136651105015145 0ustar ripleyusers\name{predictqrrvglm} \alias{predictqrrvglm} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Predict Method for a CQO fit } \description{ Predicted values based on a constrained quadratic ordination (CQO) object. } \usage{ predictqrrvglm(object, newdata=NULL, type = c("link", "response", "lv", "terms"), se.fit = FALSE, deriv = 0, dispersion = NULL, extra = object@extra, varlvI = FALSE, reference = NULL, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{object}{ Object of class inheriting from \code{"qrrvglm"}. } \item{newdata}{ An optional data frame in which to look for variables with which to predict. If omitted, the fitted linear predictors are used. } \item{type, se.fit, dispersion, extra}{ See \code{\link{predictvglm}}. } \item{deriv}{ Derivative. Currently only 0 is handled. } \item{varlvI, reference}{ Arguments passed into \code{\link{Coef.qrrvglm}}. } \item{\dots}{ Currently undocumented. } } \details{ Obtains predictions from a fitted CQO object. Currently there are lots of limitations of this function; it is unfinished. % and optionally estimates standard errors of those predictions } \value{ See \code{\link{predictvglm}}. } \references{ Yee, T. W. (2004) A new technique for maximum-likelihood canonical Gaussian ordination. \emph{Ecological Monographs}, \bold{74}, 685--701. } \author{ T. W. Yee } \note{ This function is not robust and has not been checked fully. } \seealso{ \code{\link{cqo}}. 
} \examples{ hspider[,1:6]=scale(hspider[,1:6]) # Standardize the environmental variables set.seed(1234) # vvv p1 = cqo(cbind(Alopacce, Alopcune, Alopfabr, Arctlute, Arctperi, Auloalbi, # vvv Pardlugu, Pardmont, Pardnigr, Pardpull, Trocterr, Zoraspin) ~ # vvv WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux, # vvv fam=poissonff, data=hspider, Crow1positive=FALSE, ITol=TRUE) # vvv sort(p1@misc$deviance.Bestof) # A history of all the iterations # vvv head(predict(p1)) # The following should be all zeros # vvv max(abs(predict(p1, new=head(hspider)) - head(predict(p1)))) # vvv max(abs(predict(p1, new=head(hspider), type="res") - head(fitted(p1)))) } \keyword{models} \keyword{regression} VGAM/man/powl.Rd0000644000176000001440000000426212136651105013063 0ustar ripleyusers\name{powl} \alias{powl} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Power Link Function } \description{ Computes the power transformation, including its inverse and the first two derivatives. } \usage{ powl(theta, power = 1, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{theta}{ Numeric or character. See below for further details. } \item{power}{ This denotes the power or exponent. } \item{inverse, deriv, short, tag}{ Details at \code{\link{Links}}. } } \details{ The power link function raises a parameter by a certain value of \code{power}. Care is needed because it is very easy to get numerical problems, e.g., if \code{power=0.5} and \code{theta} is negative. } \value{ For \code{powl} with \code{deriv = 0}, then \code{theta} raised to the power of \code{power}. And if \code{inverse = TRUE} then \code{theta} raised to the power of \code{1/power}. For \code{deriv = 1}, then the function returns \emph{d} \code{theta} / \emph{d} \code{eta} as a function of \code{theta} if \code{inverse = FALSE}, else if \code{inverse = TRUE} then it returns the reciprocal. } %\references{ % McCullagh, P. and Nelder, J. A. (1989) % \emph{Generalized Linear Models}, 2nd ed. London: Chapman & Hall. % %} \author{ Thomas W. Yee } \note{ Numerical problems may occur for certain combinations of \code{theta} and \code{power}. Consequently this link function should be used with caution. } \seealso{ \code{\link{Links}}, \code{\link{loge}}. } \examples{ powl("a", power = 2, short = FALSE, tag = TRUE) powl(x <- 1:5) powl(x, power = 2) max(abs(powl(powl(x, power = 2), power = 2, inverse=TRUE) - x)) # Should be 0 powl(x <- (-5):5, power = 0.5) # Has NAs # 1/2 = 0.5 pdata <- data.frame(y = rbeta(n = 1000, shape1 = 2^2, shape2 = 3^2)) fit <- vglm(y ~ 1, beta.ab(lshape1 = powl(power = 0.5), i1 = 3, lshape2 = powl(power = 0.5), i2 = 7), pdata) t(coef(fit, matrix = TRUE)) Coef(fit) # Useful for intercept-only models vcov(fit, untransform = TRUE) } \keyword{math} \keyword{models} \keyword{regression} VGAM/man/pospoisson.Rd0000644000176000001440000000524512136651105014320 0ustar ripleyusers\name{pospoisson} \alias{pospoisson} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Positive Poisson Distribution Family Function } \description{ Fits a positive Poisson distribution. } \usage{ pospoisson(link = "loge", expected = TRUE, ilambda = NULL, imethod = 1, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{link}{ Link function for the usual mean (lambda) parameter of an ordinary Poisson distribution. See \code{\link{Links}} for more choices. } \item{expected}{ Logical. 
Fisher scoring is used if \code{expected = TRUE}, else Newton-Raphson. } \item{ilambda, imethod, zero}{ See \code{\link{CommonVGAMffArguments}} for more information. } } \details{ The positive Poisson distribution is the ordinary Poisson distribution but with the probability of zero being zero. Thus the other probabilities are scaled up (i.e., divided by \eqn{1-P[Y=0]}). The mean, \eqn{\lambda / (1 - \exp(-\lambda))}{lambda/(1-exp(-lambda))}, can be obtained by the extractor function \code{fitted} applied to the object. A related distribution is the zero-inflated Poisson, in which the probability \eqn{P[Y=0]} involves another parameter \eqn{\phi}{phi}. See \code{\link{zipoisson}}. } \section{Warning }{ Under- or over-flow may occur if the data is ill-conditioned. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, \code{\link{rrvglm}} and \code{\link{vgam}}. } \references{ Coleman, J. S. and James, J. (1961) The equilibrium size distribution of freely-forming groups. \emph{Sociometry}, \bold{24}, 36--45. Documentation accompanying the \pkg{VGAM} package at \url{http://www.stat.auckland.ac.nz/~yee} contains further information and examples. } \author{ Thomas W. Yee } \note{ This family function can handle a multivariate response. Yet to be done: a \code{quasi.pospoisson} which estimates a dispersion parameter. } \seealso{ \code{\link{Pospois}}, \code{\link{posnegbinomial}}, \code{\link{poissonff}}, \code{\link{zipoisson}}. } \examples{ # Data from Coleman and James (1961) cjdata <- data.frame(y = 1:6, freq = c(1486, 694, 195, 37, 10, 1)) fit <- vglm(y ~ 1, pospoisson, cjdata, weights = freq) Coef(fit) summary(fit) fitted(fit) pdata <- data.frame(x2 = runif(nn <- 1000)) # Artificial data pdata <- transform(pdata, lambda = exp(1 - 2 * x2)) pdata <- transform(pdata, y1 = rpospois(nn, lambda)) with(pdata, table(y1)) fit <- vglm(y1 ~ x2, pospoisson, pdata, trace = TRUE, crit = "coef") coef(fit, matrix = TRUE) } \keyword{models} \keyword{regression} VGAM/man/pospoisUC.Rd0000644000176000001440000000527512136651105014033 0ustar ripleyusers\name{Pospois} \alias{Pospois} \alias{dpospois} \alias{ppospois} \alias{qpospois} \alias{rpospois} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Positive-Poisson Distribution } \description{ Density, distribution function, quantile function and random generation for the positive-Poisson distribution. } \usage{ dpospois(x, lambda, log = FALSE) ppospois(q, lambda) qpospois(p, lambda) rpospois(n, lambda) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. Fed into \code{\link[stats]{runif}}. } \item{lambda}{ vector of positive means (of an ordinary Poisson distribution). Short vectors are recycled. } \item{log}{ logical. } } \details{ The positive-Poisson distribution is a Poisson distribution but with the probability of a zero being zero. The other probabilities are scaled to add to unity. The mean therefore is \deqn{\lambda / (1-\exp(-\lambda)).}{% lambda / (1-exp(-lambda)).} As \eqn{\lambda}{lambda} increases, the positive-Poisson and Poisson distributions become more similar. Unlike similar functions for the Poisson distribution, a zero value of \code{lambda} is not permitted here. 
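Because the positive-Poisson probabilities are simply the ordinary Poisson probabilities rescaled by \eqn{1/(1-\exp(-\lambda))}{1/(1-exp(-lambda))}, the density can be checked numerically against \code{\link[stats:Poisson]{dpois}}, e.g.,
\preformatted{
lambda <- 2; x <- 1:8
max(abs(dpospois(x, lambda) -
        dpois(x, lambda) / (1 - exp(-lambda))))  # Should be 0
}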
} \value{ \code{dpospois} gives the density, \code{ppospois} gives the distribution function, \code{qpospois} gives the quantile function, and \code{rpospois} generates random deviates. } %\references{ %None. %} \author{ T. W. Yee } \note{ % 20120405; no longer true to a superior method: % For \code{rpospois}, the arguments of the function are fed % into \code{\link[stats:Poisson]{rpois}} until \eqn{n} positive % values are obtained. This may take a long time if \code{lambda} % has values close to 0. The family function \code{\link{pospoisson}} estimates \eqn{\lambda}{lambda} by maximum likelihood estimation. } \seealso{ \code{\link{pospoisson}}, \code{\link{zapoisson}}, \code{\link{zipoisson}}, \code{\link[stats:Poisson]{rpois}}. } \examples{ lambda <- 2; y = rpospois(n = 1000, lambda) table(y) mean(y) # Sample mean lambda / (1 - exp(-lambda)) # Population mean (ii <- dpospois(0:7, lambda)) cumsum(ii) - ppospois(0:7, lambda) # Should be 0s table(rpospois(100, lambda)) table(qpospois(runif(1000), lambda)) round(dpospois(1:10, lambda) * 1000) # Should be similar \dontrun{ x <- 0:7 barplot(rbind(dpospois(x, lambda), dpois(x, lambda)), beside = TRUE, col = c("blue", "orange"), main = paste("Positive Poisson(", lambda, ") (blue) vs", " Poisson(", lambda, ") (orange)", sep = ""), names.arg = as.character(x), las = 1, lwd = 2) } } \keyword{distribution} VGAM/man/posnormal1.Rd0000644000176000001440000001001212136651105014163 0ustar ripleyusers\name{posnormal1} \alias{posnormal1} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Positive Normal Distribution Family Function } \description{ Fits a positive (univariate) normal distribution. } \usage{ posnormal1(lmean = "identity", lsd = "loge", imean = NULL, isd = NULL, nsimEIM = 100, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lmean, lsd}{ Link functions for the mean and standard deviation parameters of the usual univariate normal distribution. They are \eqn{\mu}{mu} and \eqn{\sigma}{sigma} respectively. See \code{\link{Links}} for more choices. } % \item{emean, esd}{ % List. Extra argument for each of the links. % See \code{earg} in \code{\link{Links}} for general information. %emean = list(), esd = list(), % % } \item{imean, isd}{ Optional initial values for \eqn{\mu}{mu} and \eqn{\sigma}{sigma}. A \code{NULL} means a value is computed internally. } \item{nsimEIM}{ See \code{\link{CommonVGAMffArguments}} for more information. } \item{zero}{ An integer-valued vector specifying which linear/additive predictors are modelled as intercepts only. The values must be from the set \{1,2\} corresponding respectively to \eqn{\mu}{mu}, \eqn{\sigma}{sigma}. If \code{zero = NULL} then all linear/additive predictors are modelled as a linear combination of the explanatory variables. For many data sets having \code{zero = 2} is a good idea. } } \details{ The positive normal distribution is the ordinary normal distribution but with the probability of zero or less being zero. The rest of the probability density function is scaled up. Hence the probability density function can be written \deqn{f(y) = \frac{1}{\sqrt{2\pi} \sigma} \exp\left( -\frac12 (y-\mu)^2 / \sigma^2 \right) / \left[ 1-\Phi(-\mu/ \sigma) \right]}{% f(y) = (1/(sqrt(2*pi)*sigma)) * exp( -0.5 * (y-mu)^2/ sigma^2) / [1-Phi(-mu/ sigma)] } where \eqn{\Phi()}{Phi} is the cumulative distribution function of a standard normal (\code{\link[stats:Normal]{pnorm}}). 
Equivalently, this is \deqn{f(y) = \frac{1}{\sigma} \frac{\phi((y-\mu) / \sigma)}{ 1-\Phi(-\mu/ \sigma)}.}{% f(y) = (1/sigma) * dnorm((y-mu)/sigma) / [1-pnorm(-mu/ sigma)].} where \eqn{\phi()}{dnorm()} is the probability density function of a standard normal distribution (\code{\link[stats:Normal]{dnorm}}). The mean of \eqn{Y} is \deqn{E(Y) = \mu + \sigma \frac{\phi(-\mu/ \sigma)}{ 1-\Phi(-\mu/ \sigma)}. }{% E(Y) = mu + sigma * dnorm((y-mu)/sigma) / [1-pnorm(-mu/ sigma)]. } } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Documentation accompanying the \pkg{VGAM} package at \url{http://www.stat.auckland.ac.nz/~yee} contains further information and examples. } \author{ Thomas W. Yee } \note{ The response variable for this family function is the same as \code{\link{normal1}} except positive values are required. Reasonably good initial values are needed. Fisher scoring is implemented. The distribution of the reciprocal of a positive normal random variable is known as an alpha distribution. } \section{Warning }{ Under- or over-flow may occur if the data is ill-conditioned. } \seealso{ \code{\link{normal1}}, \code{\link{tobit}}. } \examples{ pdata <- data.frame(m = 1.0, SD = exp(1.0)) pdata <- transform(pdata, y = rposnorm(n <- 1000, m = m, sd = SD)) \dontrun{with(pdata, hist(y, prob = TRUE, border = "blue", main = paste("posnorm(m =", m[1], ", sd =", round(SD[1], 2),")"))) } fit <- vglm(y ~ 1, fam = posnormal1, pdata, trace = TRUE) coef(fit, matrix = TRUE) (Cfit <- Coef(fit)) mygrid <- with(pdata, seq(min(y), max(y), len = 200)) # Add the fit to the histogram \dontrun{lines(mygrid, dposnorm(mygrid, Cfit[1], Cfit[2]), col = "red")} } \keyword{models} \keyword{regression} VGAM/man/posnormUC.Rd0000644000176000001440000000350512136651105014026 0ustar ripleyusers\name{Posnorm} \alias{Posnorm} \alias{dposnorm} \alias{pposnorm} \alias{qposnorm} \alias{rposnorm} \title{The Positive-Normal Distribution} \description{ Density, distribution function, quantile function and random generation for the univariate positive-normal distribution. } \usage{ dposnorm(x, mean = 0, sd = 1, log = FALSE) pposnorm(q, mean = 0, sd = 1) qposnorm(p, mean = 0, sd = 1) rposnorm(n, mean = 0, sd = 1) } \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. If \code{length(n) > 1} then the length is taken to be the number required. } \item{mean, sd, log}{ see \code{\link[stats:Normal]{rnorm}}. } } \value{ \code{dposnorm} gives the density, \code{pposnorm} gives the distribution function, \code{qposnorm} gives the quantile function, and \code{rposnorm} generates random deviates. } \author{ T. W. Yee } \details{ See \code{\link{posnormal1}}, the \pkg{VGAM} family function for estimating the parameters, for the formula of the probability density function and other details. } %\note{ %} \seealso{ \code{\link{posnormal1}}. 
} \examples{ \dontrun{ m <- 0.8; x <- seq(-1, 4, len = 501) plot(x, dposnorm(x, m = m), type = "l", ylim = 0:1, las = 1, ylab = paste("posnorm(m = ", m, ", sd = 1)"), col = "blue", main = "Blue is density, orange is cumulative distribution function", sub = "Purple lines are the 10,20,...,90 percentiles") lines(x, pposnorm(x, m = m), col = "orange") abline(h = 0, col = "grey") probs <- seq(0.1, 0.9, by = 0.1) Q <- qposnorm(probs, m = m) lines(Q, dposnorm(Q, m = m), col = "purple", lty = 3, type = "h") lines(Q, pposnorm(Q, m = m), col = "purple", lty = 3, type = "h") abline(h = probs, col = "purple", lty = 3) max(abs(pposnorm(Q, m = m) - probs)) # Should be 0 } } \keyword{distribution} VGAM/man/posnegbinomial.Rd0000644000176000001440000001322212136651105015104 0ustar ripleyusers\name{posnegbinomial} \alias{posnegbinomial} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Positive Negative Binomial Distribution Family Function } \description{ Maximum likelihood estimation of the two parameters of a positive negative binomial distribution. } \usage{ posnegbinomial(lmunb = "loge", lsize = "loge", isize = NULL, zero = -2, nsimEIM = 250, shrinkage.init = 0.95, imethod = 1) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lmunb}{ Link function applied to the \code{munb} parameter, which is the mean \eqn{\mu_{nb}}{munb} of an ordinary negative binomial distribution. See \code{\link{Links}} for more choices. } \item{lsize}{ Parameter link function applied to the dispersion parameter, called \code{k}. See \code{\link{Links}} for more choices. } \item{isize}{ Optional initial value for \code{k}, an index parameter. The value \code{1/k} is known as a dispersion parameter. If failure to converge occurs try different values (and/or use \code{imethod}). If necessary this vector is recycled to length equal to the number of responses. A value \code{NULL} means an initial value for each response is computed internally using a range of values. } \item{nsimEIM, zero}{ See \code{\link{CommonVGAMffArguments}}. } \item{shrinkage.init, imethod}{ See \code{\link{negbinomial}}. } } \details{ The positive negative binomial distribution is an ordinary negative binomial distribution but with the probability of a zero response being zero. The other probabilities are scaled to sum to unity. This family function is based on \code{\link{negbinomial}} and most details can be found there. To avoid confusion, the parameter \code{munb} here corresponds to the mean of an ordinary negative binomial distribution \code{\link{negbinomial}}. The mean of \code{posnegbinomial} is \deqn{\mu_{nb} / (1-p(0))}{% munb / (1-p(0))} where \eqn{p(0) = (k/(k + \mu_{nb}))^k}{p(0) = (k/(k + munb))^k} is the probability an ordinary negative binomial distribution has a zero value. The parameters \code{munb} and \code{k} are not independent in the positive negative binomial distribution, whereas they are in the ordinary negative binomial distribution. This function handles \emph{multivariate} responses, so that a matrix can be used as the response. The number of columns is the number of species, say, and setting \code{zero = -2} means that \emph{all} species have a \code{k} equalling a (different) intercept only. } \section{Warning}{ The Poisson model corresponds to \code{k} equalling infinity. If the data is Poisson or close to Poisson, numerical problems may occur. Possibly a loglog link could be added in the future to try help handle this problem. 
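The Poisson limit can be seen numerically: for a very large \code{size} the positive negative binomial density is essentially the positive-Poisson density (an illustrative check only; the chosen values are arbitrary), e.g.,
\preformatted{
x <- 1:10; munb <- 4
max(abs(dposnegbin(x, munb = munb, size = 1e6) -
        dpospois(x, lambda = munb)))  # Should be nearly 0
}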
This \pkg{VGAM} family function is computationally expensive and usually runs slowly; setting \code{trace = TRUE} is useful for monitoring convergence. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, \code{\link{rrvglm}} and \code{\link{vgam}}. } \references{ Barry, S. C. and Welsh, A. H. (2002) Generalized additive modelling and zero inflated count data. \emph{Ecological Modelling}, \bold{157}, 179--188. Fisher, R. A., Corbet, A. S. and Williams, C. B. (1943) The Relation Between the Number of Species and the Number of Individuals in a Random Sample of an Animal Population, \emph{Journal of Animal Ecology}, \bold{12}, 42--58. Williamson, E. and Bretherton, M. H. (1964) Tables of the logarithmic series distribution. \emph{Annals of Mathematical Statistics}, \bold{35}, 284--297. } \author{ Thomas W. Yee } \note{ This family function handles multiple responses. } \seealso{ \code{\link{rposnegbin}}, \code{\link{pospoisson}}, \code{\link{negbinomial}}, \code{\link{zanegbinomial}}, % \code{\link[MASS]{rnegbin}}. \code{\link[stats:NegBinomial]{rnbinom}}, \code{\link{CommonVGAMffArguments}}. } \examples{ \dontrun{ pdata <- data.frame(x2 = runif(nn <- 1000)) pdata <- transform(pdata, y1 = rposnegbin(nn, munb = exp(0+2*x2), size = exp(1)), y2 = rposnegbin(nn, munb = exp(1+2*x2), size = exp(3))) fit <- vglm(cbind(y1, y2) ~ x2, posnegbinomial, pdata, trace = TRUE) coef(fit, matrix = TRUE) dim(depvar(fit)) # dim(fit@y) is not as good # Another artificial data example pdata2 <- data.frame(munb = exp(2), size = exp(3)); nn <- 1000 pdata2 <- transform(pdata2, y3 = rposnegbin(nn, munb = munb, size = size)) with(pdata2, table(y3)) fit <- vglm(y3 ~ 1, posnegbinomial, pdata2, trace = TRUE) coef(fit, matrix = TRUE) with(pdata2, mean(y3)) # Sample mean head(with(pdata2, munb/(1-(size/(size+munb))^size)), 1) # Population mean head(fitted(fit), 3) head(predict(fit), 3) # Example: Corbet (1943) butterfly Malaya data corbet <- data.frame(nindiv = 1:24, ofreq = c(118, 74, 44, 24, 29, 22, 20, 19, 20, 15, 12, 14, 6, 12, 6, 9, 9, 6, 10, 10, 11, 5, 3, 3)) fit <- vglm(nindiv ~ 1, posnegbinomial, weights = ofreq, data = corbet) coef(fit, matrix = TRUE) Coef(fit) (khat <- Coef(fit)["size"]) pdf2 <- dposnegbin(x = with(corbet, nindiv), mu = fitted(fit), size = khat) print( with(corbet, cbind(nindiv, ofreq, fitted = pdf2*sum(ofreq))), dig = 1) with(corbet, matplot(nindiv, cbind(ofreq, fitted = pdf2*sum(ofreq)), las = 1, type = "b", ylab = "Frequency", col = c("blue", "orange"), main = "blue 1s = observe; orange 2s = fitted")) } } \keyword{models} \keyword{regression} % bigN = with(corbet, sum(ofreq)) VGAM/man/posnegbinUC.Rd0000644000176000001440000000721112136651105014313 0ustar ripleyusers\name{Posnegbin} \alias{Posnegbin} \alias{dposnegbin} \alias{pposnegbin} \alias{qposnegbin} \alias{rposnegbin} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Positive-Negative Binomial Distribution } \description{ Density, distribution function, quantile function and random generation for the positive-negative binomial distribution. } \usage{ dposnegbin(x, size, prob = NULL, munb = NULL, log = FALSE) pposnegbin(q, size, prob = NULL, munb = NULL) qposnegbin(p, size, prob = NULL, munb = NULL) rposnegbin(n, size, prob = NULL, munb = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. 
Fed into \code{\link[stats]{runif}}. } \item{size, prob, munb, log}{ Same arguments as that of an ordinary negative binomial distribution (see \code{\link[stats:NegBinomial]{dnbinom}}). Some arguments have been renamed slightly. % This is called \eqn{\theta}{theta} in the \code{\link[MASS]{rnegbin}} % function in the \code{MASS} library. Short vectors are recycled. The parameter \code{1/size} is known as a dispersion parameter; as \code{size} approaches infinity, the negative binomial distribution approaches a Poisson distribution. } } \details{ The positive-negative binomial distribution is a negative binomial distribution but with the probability of a zero being zero. The other probabilities are scaled to add to unity. The mean therefore is \deqn{\mu / (1-p(0))}{% munb / (1-p(0))} where \eqn{\mu}{munb} the mean of an ordinary negative binomial distribution. % 20120405; no longer true to a superior method: % The arguments of % \code{rposnegbin()} % are fed into % \code{\link[stats:NegBinomial]{rnbinom}} until \eqn{n} positive values % are obtained. } \value{ \code{dposnegbin} gives the density, \code{pposnegbin} gives the distribution function, \code{qposnegbin} gives the quantile function, and \code{rposnegbin} generates \eqn{n} random deviates. } \references{ Welsh, A. H., Cunningham, R. B., Donnelly, C. F. and Lindenmayer, D. B. (1996) Modelling the abundances of rare species: statistical models for counts with extra zeros. \emph{Ecological Modelling}, \bold{88}, 297--308. } \author{ T. W. Yee } %\note{ % 20120405; no longer true to a superior method: % The running time % of \code{rposnegbin()} % is slow when \code{munb} is very close to zero. % %} \seealso{ \code{\link{posnegbinomial}}, \code{\link{zanegbinomial}}, \code{\link{zinegbinomial}}, \code{\link[stats:NegBinomial]{rnbinom}}. % \code{\link[MASS]{rnegbin}}, } \examples{ munb <- 5; size <- 4; n <- 1000 table(y <- rposnegbin(n, munb = munb, size = size)) mean(y) # sample mean munb / (1 - (size / (size + munb))^size) # population mean munb / pnbinom(0, mu = munb, size = size, lower.tail = FALSE) # same as before x <- (-1):17 (ii <- dposnegbin(x, munb = munb, size = size)) max(abs(cumsum(ii) - pposnegbin(x, munb = munb, size = size))) # Should be 0 \dontrun{ x <- 0:10 barplot(rbind(dposnegbin(x, munb = munb, size = size), dnbinom(x, mu = munb, size = size)), beside = TRUE, col = c("blue","green"), main = paste("dposnegbin(munb = ", munb, ", size = ", size, ") (blue) vs", " dnbinom(mu = ", munb, ", size = ", size, ") (green)", sep = ""), names.arg = as.character(x)) } # Another test for pposnegbin() nn <- 5000 mytab <- cumsum(table(rposnegbin(nn, munb = munb, size = size))) / nn myans <- pposnegbin(sort(as.numeric(names(mytab))), munb = munb, size = size) max(abs(mytab - myans)) # Should be 0 } \keyword{distribution} VGAM/man/posgeomUC.Rd0000644000176000001440000000523312136651105014002 0ustar ripleyusers\name{Posgeom} \alias{Posgeom} \alias{dposgeom} \alias{pposgeom} \alias{qposgeom} \alias{rposgeom} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Positive-geometric Distribution } \description{ Density, distribution function, quantile function and random generation for the positive-geometric distribution. } \usage{ dposgeom(x, prob, log = FALSE) pposgeom(q, prob) qposgeom(p, prob) rposgeom(n, prob) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. Fed into \code{\link[stats]{runif}}. 
  }
  \item{prob}{
  vector of probabilities of success (of an ordinary geometric distribution).
  Short vectors are recycled.

  }
  \item{log}{
  logical.

  }
}
\details{
  The positive-geometric distribution is a geometric distribution but
  with the probability of a zero being zero.
  The other probabilities are scaled to add to unity.
  The mean therefore is
  \eqn{1/prob}{1/prob}.

  As \eqn{prob}{prob} decreases, the positive-geometric and geometric
  distributions become more similar.
  Like similar functions for the geometric distribution, a zero value
  of \code{prob} is not permitted here.

}
\value{
  \code{dposgeom} gives the density,
  \code{pposgeom} gives the distribution function,
  \code{qposgeom} gives the quantile function, and
  \code{rposgeom} generates random deviates.

}
%\references{
%None.
%}

\author{ T. W. Yee }
%\note{
% 20120405; no longer true to a superior method:
% For \code{rposgeom()}, the arguments of the function are fed
% into \code{\link[stats:Geometric]{rgeom}} until \eqn{n} positive
% values are obtained. This may take a long time if \code{prob}
% has values close to 1.

% The family function \code{posgeometric} needs not be written.
% If it were, then it would estimate
% \eqn{prob}{prob} by maximum likelihood estimation.

%}

\seealso{
% \code{posgeometric},
  \code{\link{zageometric}},
  \code{\link{zigeometric}},
  \code{\link[stats:Geometric]{rgeom}}.

}
\examples{
prob <- 0.75; y <- rposgeom(n = 1000, prob)
table(y)
mean(y)  # Sample mean
1 / prob  # Population mean

(ii <- dposgeom(0:7, prob))
cumsum(ii) - pposgeom(0:7, prob)  # Should be 0s
table(rposgeom(100, prob))

table(qposgeom(runif(1000), prob))
round(dposgeom(1:10, prob) * 1000)  # Should be similar

\dontrun{
x <- 0:5
barplot(rbind(dposgeom(x, prob), dgeom(x, prob)),
        beside = TRUE, col = c("blue", "orange"),
        main = paste("Positive geometric(", prob, ") (blue) vs",
                     " geometric(", prob, ") (orange)", sep = ""),
        names.arg = as.character(x), las = 1, lwd = 2) }
}
\keyword{distribution}
VGAM/man/posbinomial.Rd0000644000176000001440000000625712136651105014424 0ustar ripleyusers\name{posbinomial}
\alias{posbinomial}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Positive Binomial Distribution Family Function }
\description{
  Fits a positive binomial distribution.

}
\usage{
posbinomial(link = "logit", mv = FALSE, parallel = FALSE, zero = NULL)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
  \item{link, mv, parallel, zero}{
  Details at \code{\link{CommonVGAMffArguments}}.

  }
}
\details{
  The positive binomial distribution is the ordinary binomial
  distribution but with the probability of zero being zero.
  Thus the other probabilities are scaled up
  (i.e., divided by \eqn{1-P(Y=0)}{1-P(Y=0)}).
  The fitted values are the ordinary binomial distribution fitted
  values, i.e., the usual mean.

  In the capture--recapture literature this model is called
  the \eqn{M_0} model.
  It arises from the sum of a sequence of
  \eqn{\tau}{tau} Bernoulli random variates subject to at least
  one success (capture).
  Here, each animal has the same probability of capture or recapture
  across all \eqn{\tau}{tau} sampling occasions.
  Independence between animals and between sampling occasions etc.
  is assumed.

}
\value{
  An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}).
  The object is used by modelling functions such as \code{\link{vglm}},
  and \code{\link{vgam}}.

}
\references{
Patil, G. P. (1962)
Maximum likelihood estimation for
generalised power series distributions and its application to a
truncated binomial distribution.
\emph{Biometrika}, \bold{49}, 227--237. Pearson, K. (1913) \emph{A monograph on Albinism in Man}. Drapers Company Research Memoirs. } \author{ Thomas W. Yee } \note{ The input for this family function is the same as \code{\link{binomialff}}. If \code{mv = TRUE} then each column of the matrix response should be a count (the number of successes), and the \code{weights} argument should be a matrix of the same dimension as the response containing the number of trials. If \code{mv = FALSE} then the response input should be the same as \code{\link{binomialff}}. Yet to be done: a \code{quasi.posbinomial()} which estimates a dispersion parameter. } \section{Warning }{ Under- or over-flow may occur if the data is ill-conditioned. } \seealso{ \code{\link{posbernoulli.b}}, \code{\link{posbernoulli.t}}, \code{\link{posbernoulli.tb}}, \code{\link{binomialff}}. } \examples{ # Number of albinotic children in families with 5 kids (from Patil, 1962) ,,,, albinos <- data.frame(y = c(rep(1, 25), rep(2, 23), rep(3, 10), 4, 5), n = rep(5, 60)) fit1 <- vglm(cbind(y, n-y) ~ 1, posbinomial, albinos, trace = TRUE) summary(fit1) Coef(fit1) # = MLE of p = 0.3088 head(fitted(fit1)) sqrt(vcov(fit1, untransform = TRUE)) # SE = 0.0322 # Fit a M_0 model to the Perom data ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, M_0 <- vglm(cbind( y1 + y2 + y3 + y4 + y5 + y6, 6 - y1 - y2 - y3 - y4 - y5 - y6) ~ 1, data = Perom, posbinomial, trace = TRUE) coef(M_0, matrix = TRUE) Coef(M_0) constraints(M_0, matrix = TRUE) summary(M_0) } \keyword{models} \keyword{regression} % albinos <- transform(albinos, yprop = y / 5) VGAM/man/posbinomUC.Rd0000644000176000001440000000772512136651105014167 0ustar ripleyusers\name{Posbinom} \alias{Posbinom} \alias{dposbinom} \alias{pposbinom} \alias{qposbinom} \alias{rposbinom} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Positive-Binomial Distribution } \description{ Density, distribution function, quantile function and random generation for the positive-binomial distribution. } \usage{ dposbinom(x, size, prob, log = FALSE) pposbinom(q, size, prob) qposbinom(p, size, prob) rposbinom(n, size, prob) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. Fed into \code{\link[stats]{runif}}. } \item{size}{number of trials. It is the \eqn{N} symbol in the formula given in \code{\link{posbinomial}}. } \item{prob}{probability of success on each trial. } % 20120407: % \item{log.p, lower.tail}{ % Arguments that are passed on to % \code{\link[stats:Binomial]{pbinom}} etc. % % } \item{log}{ See \code{\link[stats:Binomial]{dbinom}}. } } \details{ The positive-binomial distribution is a binomial distribution but with the probability of a zero being zero. The other probabilities are scaled to add to unity. The mean therefore is \deqn{\mu / (1-(1-\mu)^N)}{% mu / (1-(1-mu)^N)} where \eqn{\mu}{mu} is the argument \code{prob} above. As \eqn{\mu}{mu} increases, the positive-binomial and binomial distributions become more similar. Unlike similar functions for the binomial distribution, a zero value of \code{prob} is not permitted here. } \value{ \code{dposbinom} gives the density, \code{pposbinom} gives the distribution function, \code{qposbinom} gives the quantile function, and \code{rposbinom} generates random deviates. } %\references{ %None. %} \author{ T. W. Yee. } \note{ For \code{dposbinom()}, if arguments \code{size} or \code{prob} equal 0 then a \code{NaN} is returned. 
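  For instance
  (an added illustration of the statement above),
  \code{dposbinom(1, size = 0, prob = 0.5)} and
  \code{dposbinom(1, size = 10, prob = 0)}
  should both return \code{NaN}:
  in either case all counts would be zero, so the
  zero-truncated distribution is undefined.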
% 20120405; no longer true to a superior method: % For \code{rposbinom()}, the arguments of the function are fed into % \code{\link[stats:Binomial]{rbinom}} until \eqn{n} positive values % are obtained. This may take a long time if \code{prob} has values % close to 0. The family function \code{\link{posbinomial}} estimates the parameters by maximum likelihood estimation. } \seealso{ \code{\link{posbinomial}}, \code{\link{dposbern}}, \code{\link{zabinomial}}, \code{\link{zibinomial}}, \code{\link[stats:Binomial]{rbinom}}. } \examples{ prob <- 0.2; size <- 10 table(y <- rposbinom(n = 1000, size, prob)) mean(y) # Sample mean size * prob / (1-(1-prob)^size) # Population mean (ii <- dposbinom(0:size, size, prob)) cumsum(ii) - pposbinom(0:size, size, prob) # Should be 0s table(rposbinom(100, size, prob)) table(qposbinom(runif(1000), size, prob)) round(dposbinom(1:10, size, prob) * 1000) # Should be similar \dontrun{ barplot(rbind(dposbinom(x = 0:size, size, prob), dbinom(x = 0:size, size, prob)), beside = TRUE, col = c("blue", "green"), main = paste("Positive-binomial(", size, ",", prob, ") (blue) vs", " Binomial(", size, ",", prob, ") (green)", sep = ""), names.arg = as.character(0:size), las = 1) } # Simulated data example nn <- 1000; sizeval1 <- 10; sizeval2 <- 20 pdata <- data.frame(x2 = seq(0, 1, length = nn)) pdata <- transform(pdata, prob1 = logit(-2 + 2 * x2, inverse = TRUE), prob2 = logit(-1 + 1 * x2, inverse = TRUE), sizev1 = rep(sizeval1, len = nn), sizev2 = rep(sizeval2, len = nn)) pdata <- transform(pdata, y1 = rposbinom(nn, size = sizev1, prob = prob1), y2 = rposbinom(nn, size = sizev2, prob = prob2)) with(pdata, table(y1)) with(pdata, table(y2)) # Multivariate response fit2 <- vglm(cbind(y1, y2) ~ x2, posbinomial(mv = TRUE), trace = TRUE, pdata, weight = cbind(sizev1, sizev2)) coef(fit2, matrix = TRUE) } \keyword{distribution} VGAM/man/posbernoulli.tb.Rd0000644000176000001440000001603512136651105015224 0ustar ripleyusers\name{posbernoulli.tb} %\alias{posbernoulli} \alias{posbernoulli.tb} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Positive Bernoulli Family Function with Time and Behavioural Effects (experimental) } \description{ Fits a GLM-like model to multiple (currently only two or three) Bernoulli responses where each row in the capture history matrix response has at least one success (capture). Sampling occasion effects and behavioural effects are accommodated. However, this function only handles two and three sampling occasions. } \usage{ posbernoulli.tb(link = "logit", parallel.t = FALSE, parallel.b = FALSE, apply.parint = FALSE, imethod = 1, iprob = NULL, dconst = 0.1, dpower = -2) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{link, imethod, iprob, parallel.t, parallel.b, apply.parint}{ See \code{\link{CommonVGAMffArguments}} for information. But \code{parallel.t} and \code{parallel.b} must each be logicals only. Argument \code{parallel.t} means parallel with respect to time, and matches the same argument name in \code{\link{posbernoulli.t}}. Suppose the model is intercept-only. Setting \code{parallel.t = TRUE} results in the \eqn{M_b} model. Setting \code{parallel.b = TRUE} results in the \eqn{M_t} model. Setting \code{parallel.t = TRUE} and setting \code{parallel.b = TRUE} results in the \eqn{M_0} model. 
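  In summary, for an intercept-only model:
  \tabular{lll}{
  \code{parallel.t} \tab \code{parallel.b} \tab Model \cr
  \code{FALSE} \tab \code{FALSE} \tab \eqn{M_{tb}} (the default) \cr
  \code{TRUE}  \tab \code{FALSE} \tab \eqn{M_b} \cr
  \code{FALSE} \tab \code{TRUE}  \tab \eqn{M_t} \cr
  \code{TRUE}  \tab \code{TRUE}  \tab \eqn{M_0} \cr
  }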
  Note that the default for \code{parallel.t} and \code{parallel.b}
  (both \code{FALSE}) may be unsuitable for most data sets
  because of the large number of parameters when \eqn{\tau} is large;
  the model can be too flexible.
  Note that adding covariates will result in an \eqn{M_{tbh}} model.

  }
  \item{dconst, dpower}{
  Decay constants and power (exponent) for the ridge adjustment
  for the working weight matrices.
  At iteration \eqn{t} of the IRLS algorithm
  a positive value is added to the first \eqn{\tau}{tau}
  diagonal elements of the working weight matrices to make
  them positive-definite. This adjustment is
  \eqn{K \times t^p}{K * t^p}
  where \eqn{K} is \code{dconst} and \eqn{p} is \code{dpower}.
  This is always positive but decays to zero as iterations proceed
  (provided \eqn{p} is negative etc.).

  }
}
\details{
  This model
  (commonly known as \eqn{M_{tb}} in the capture--recapture literature)
  operates on a response matrix of 0s and 1s.
  See \code{\link{posbernoulli.t}}
  for information that is in common.

  This \pkg{VGAM} family function is \emph{experimental only}.
  When finished, it should allow time and behavioural effects to be
  modelled.
  Evidently,
  the expected information matrix (EIM) is \emph{not} of full rank,
  so \code{dconst} and \code{dpower} are used to \emph{try} to fix up
  the problem.

  The default link functions are
  \eqn{(logit \,p_{c1},\ldots,logit \,p_{c\tau},logit \,p_{r2},\ldots,logit \,p_{r\tau})^T}{
       (logit p_{c1},\ldots,logit p_{c,tau},logit p_{r2},\ldots,logit p_{r,tau})^T}
  where the subscript \eqn{c} denotes capture,
  the subscript \eqn{r} denotes recapture,
  and it is not possible to recapture the animal at sampling occasion 1.
  Thus \eqn{M = 2\tau - 1}{M = 2*tau - 1}.
  The parameters are currently prefixed by \code{cap.prob} and
  \code{recap.prob} for the capture and recapture probabilities.

% Not surprisingly,
% the fitted values are similar to \code{\link{posbernoulli.t}} and
% \code{\link{posbernoulli.b}}.

}
\value{
  An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}).
  The object is used by modelling functions such as \code{\link{vglm}},
  and \code{\link{vgam}}.

}
\references{

  See \code{\link{posbernoulli.t}}.

}
\author{ Thomas W. Yee. }

\note{
  It is a good idea to constrain the probabilities for each sampling
  occasion to be equal, and also allow the behavioural effect to be
  modelled using the intercept.
  See \code{M_tbh.1} below.

  The current restriction of handling only \eqn{\tau=2}{tau=2} and
  \eqn{\tau=3}{tau=3} sampling occasions is unfortunate and more work
  is needed to extend this to four or more.

  Data-wise, at each sampling occasion, the \eqn{M_{tb}} model requires
  at least one first capture and at least one noncapture.

% If not all of the \eqn{2^{\tau}-1}{2^(tau) - 1} combinations of
% the response matrix are not present then it pays to add
% such rows to the response matrix and assign a small but
% positive prior weight.
% For example, if \eqn{\tau=2}{tau=2} then there should be
% (0,1) rows,
% (1,0) rows and
% (1,1) rows present in the response matrix.

}
\section{Warning }{
  As this model is likely to be overparameterized,
  this function probably should not be used (for now?).

% From Jakub:
  Estimation for the population size (and its SE) for the
  \eqn{M_{tb}} model may be wrong.
  Models \eqn{M_{tbh}} and \eqn{M_{th}} may be wrong.
  But models \eqn{M_{bh}}, \eqn{M_{h}}, \eqn{M_{b}},
  \eqn{M_{t}}, \eqn{M_{0}} seem fine.

  Inference, especially using standard errors, may be fraught here
  because the EIM is, strictly speaking, not of full rank.
A similar adjustment is made by \code{\link{zipebcom}}. It is a good idea to monitor convergence. The \eqn{M_0} model is best fitted with \code{\link{posbernoulli.b}} or \code{\link{posbernoulli.t}} or \code{\link{posbinomial}} because the standard errors are more accurate. } \seealso{ \code{\link{posbernoulli.b}} (including \eqn{\widehat{N}}), \code{\link{posbernoulli.t}}, \code{\link{posbinomial}}. } \examples{ \dontrun{ # Example 1: simulated data set.seed(123) nTimePts <- 2 # Must be 2 or 3 currently (aka tau == # of sampling occasions) nnn <- 10000 # Number of animals pdata <- rposbern(n = nnn, nTimePts = nTimePts, pvars = 2) dim(pdata) head(pdata) clist <- list("(Intercept)" = cbind(1, c(0, 0, 1)), # Capture effect is last coln x2 = rbind(1, 1, 1)) M_tbh.1 <- vglm(cbind(y1, y2) ~ x2, constraints = clist, trace = TRUE, posbernoulli.tb, data = pdata) summary(M_tbh.1) coef(M_tbh.1) coef(M_tbh.1, matrix = TRUE) constraints(M_tbh.1, matrix = TRUE) summary(M_tbh.1) # Standard errors are very approximate head(fitted(M_tbh.1)) head(model.matrix(M_tbh.1, type = "vlm"), 21) dim(depvar(M_tbh.1)) # Example 2: Perom subset data Hlist <- list("(Intercept)" = cbind(1, c(0, 0, 0, 1, 1)), sex = rbind(1, 1, 1, 1, 1), weight = rbind(1, 1, 1, 1, 1)) Psubset <- subset(Perom, y1 + y2 + y3 > 0) head(Psubset) fit1 <- vglm(cbind(y1, y2, y3) ~ sex + weight, constraints = Hlist, posbernoulli.tb, data = Psubset, trace = TRUE) coef(fit1) coef(fit1, matrix = TRUE) summary(fit1) # Standard errors are very approximate # fit1 is the same as Fit1: Fit1 <- vglm(cbind(y1, y2, y3) ~ sex + weight, data = Psubset, posbernoulli.tb(parallel.t = TRUE), trace = TRUE) constraints(Fit1) # Same as Hlist yyy <- depvar(fit1) if (length(table(4 * yyy[, 1] + 2 * yyy[, 2] + 1 * yyy[, 3])) != 2^(ncol(yyy))-1) warning("not every combination is represented by a row in the response matrix") } } \keyword{models} \keyword{regression} VGAM/man/posbernoulli.t.Rd0000644000176000001440000001530012136651105015054 0ustar ripleyusers\name{posbernoulli.t} %\alias{posbernoulli} \alias{posbernoulli.t} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Positive Bernoulli Family Function with Time Effects } \description{ Fits a GLM-like model to multiple Bernoulli responses where each row in the capture history matrix response has at least one success (capture). Sampling occasion effects are accommodated. } \usage{ posbernoulli.t(link = "logit", parallel.t = FALSE, apply.parint = TRUE, iprob = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{link, iprob, parallel.t, apply.parint}{ See \code{\link{CommonVGAMffArguments}} for information. Setting \code{parallel.t = TRUE} results in the \eqn{M_0} model. } } \details{ This model (commonly known as \eqn{M_t} in the capture--recapture literature) operates on a capture history matrix response of 0s and 1s. Each column is a sampling occasion where animals are potentially captured (e.g., a field trip), and each row is an individual animal. Capture is a 1, else a 0. No removal of animals from the population is made (closed population), e.g., no immigration or emigration. Each row of the response matrix has at least one capture. A conditional likelihood is maximized using Fisher scoring. Each sampling occasion has a separate probability that is modelled here. 
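  For example, with \eqn{\tau = 3}{tau = 3} sampling occasions,
  a response row of \code{c(0, 1, 1)} denotes an animal that was
  captured on the second and third occasions only;
  a row of all 0s cannot occur because every animal in the data set
  was captured at least once.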
  The probabilities can be constrained to be equal
  by setting \code{parallel.t = TRUE};
  then the results are effectively the same as
  \code{\link{posbinomial}} except the binomial constants are not
  included in the log-likelihood.
  If \code{parallel.t = FALSE} then each column should have
  at least one 1 and at least one 0.

  It is well-known that some species of animals are affected
  by capture, e.g., trap-shy or trap-happy.
  This \pkg{VGAM} family function does \emph{not} allow any behavioural
  effect to be modelled (\code{\link{posbernoulli.b}} does).
  However, it \emph{does} allow covariates that are specific to each
  sampling occasion, e.g., through the \code{xij} argument.
  Ignoring capture history effects, if there are no covariates that
  are specific to each occasion then the response matrix can be summed
  over the columns and \code{\link{posbinomial}} could be used instead,
  i.e., by aggregating over the sampling occasions.

  It is assumed that the animals are independent and
  that, for a given animal, each sampling occasion is independent.
  Also, animals do not lose their marks/tags, and
  all marks/tags are correctly recorded.

  The number of linear/additive predictors is equal to the number
  of sampling occasions, i.e., \eqn{M = \tau}, say.
  The default link functions are
  \eqn{(logit \,p_{1},\ldots,logit \,p_{\tau})^T}{(logit p_(1),\ldots,logit p_(tau))^T}
  where \eqn{p} denotes the probability.

% Thus \eqn{M = \tau}{M = tau}.

  The fitted value returned is of the same dimension as the
  response matrix.

}
\value{
  An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}).
  The object is used by modelling functions such as \code{\link{vglm}},
  and \code{\link{vgam}}.

  Upon fitting the \code{extra} slot has a (list) component called
  \code{N.hat}
  which is a point estimate of the population size \eqn{N}.
  There is also a component called \code{SE.N.hat}
  containing its standard error.

}
\references{
Huggins, R. M. (1991)
Some practical aspects of a conditional likelihood
approach to capture experiments.
\emph{Biometrics},
\bold{47},
725--732.

Huggins, R. M. and Hwang, W.-H. (2011)
A review of the use of conditional likelihood in
capture--recapture experiments.
\emph{International Statistical Review},
\bold{79},
385--400.

Otis, D. L., Burnham, K. P., White, G. C. and Anderson, D. R. (1978)
Statistical inference from capture data on closed animal populations.
\emph{Wildlife Monographs},
\bold{62},
3--135.

}
\author{ Thomas W. Yee. }

\note{
  The \code{weights} argument of \code{\link{vglm}} need not be
  assigned, and the default is just a matrix of ones.

  Numerical problems are more likely to occur if
  \code{parallel.t = FALSE}.
  Each sampling occasion may need at least one success
  (capture) and one failure.
  The response matrix is returned unchanged;
  i.e., not converted into proportions like \code{\link{posbinomial}}.
  If the response matrix has column names then these are used in the
  labelling, else \code{prob1}, \code{prob2}, etc. are used.

  Data-wise, at each sampling occasion, the \eqn{M_t} model requires
  at least one first capture and at least one noncapture.

% If not all of the \eqn{2^{\tau}-1}{2^(tau) - 1} combinations of
% the response matrix are not present then it pays to add
% such rows to the response matrix and assign a small but
% positive prior weight.
% For example, if \eqn{\tau=2}{tau=2} then there should be
% (0,1) rows,
% (1,0) rows and
% (1,1) rows present in the response matrix.
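  One plausible explanation for this requirement is that if, say,
  every animal in the data were captured at some sampling occasion
  then the estimate of that occasion's probability would lie on the
  boundary of the parameter space, and numerical problems could be
  expected.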
} \section{Warning }{ See \code{\link{posbernoulli.tb}}. } \seealso{ \code{\link{posbernoulli.b}}, \code{\link{posbernoulli.tb}}, \code{\link{Perom}}, \code{\link{Huggins89.t1}}, \code{\link{vglm.control}} for \code{xij}, \code{\link{dposbern}}, \code{\link{rposbern}}, \code{\link{posbinomial}}. % \code{\link{huggins91}}. } \examples{ M_t <- vglm(cbind(y1, y2, y3, y4, y5, y6) ~ 1, trace = TRUE, posbernoulli.t, data = Perom) # Has parallel.t = FALSE coef(M_t, matrix = TRUE) summary(M_t) M_th.1 <- vglm(cbind(y1, y2, y3, y4, y5, y6) ~ sex + weight, trace = TRUE, posbernoulli.t, data = Perom) # Has parallel.t = FALSE summary(M_th.1) head(depvar(M_th.1)) # Response capture history matrix dim(depvar(M_th.1)) M_h.2 <- vglm(cbind(y1, y2, y3, y4, y5, y6) ~ sex + weight, trace = TRUE, posbernoulli.t(parallel.t = TRUE), data = Perom) lrtest(M_th.1, M_h.2) # Test the parallelism assumption coef(M_h.2) coef(M_h.2, matrix = TRUE) constraints(M_h.2, matrix = TRUE) summary(M_h.2) head(model.matrix(M_h.2, type = "vlm"), 21) M_h.2@extra$N.hat # Estimate of the population size; should be about N M_h.2@extra$SE.N.hat # SE of the estimate of the population size # An approximate 95 percent confidence interval: round(M_h.2@extra$N.hat + c(-1, 1) * 1.96 * M_h.2@extra$SE.N.hat, 1) # Fit (effectively) the parallel model using posbinomial() Perom <- transform(Perom, ysum = y1 + y2 + y3 + y4 + y5 + y6, tau = 6) M_h.3 <- vglm(cbind(ysum, tau - ysum) ~ sex + weight, posbinomial, data = Perom, trace = TRUE) max(abs(coef(M_h.2) - coef(M_h.3))) # Should be zero logLik(M_h.3) - logLik(M_h.2) # Difference is due to the binomial constants } \keyword{models} \keyword{regression} VGAM/man/posbernoulli.b.Rd0000644000176000001440000001427212136651105015041 0ustar ripleyusers\name{posbernoulli.b} %\alias{posbernoulli} \alias{posbernoulli.b} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Positive Bernoulli Family Function with Behavioural Effects } \description{ Fits a GLM-like model to multiple Bernoulli responses where each row in the capture history matrix response has at least one success (capture). Capture history behavioural effects are accommodated. } \usage{ posbernoulli.b(link = "logit", parallel.b = FALSE, apply.parint = TRUE, icap.prob = NULL, irecap.prob = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{link, parallel.b, apply.parint, icap.prob, irecap.prob}{ See \code{\link{CommonVGAMffArguments}} for information about these arguments. With an intercept-only model setting \code{parallel.b = TRUE} results in the \eqn{M_0} model; it just deletes the 2nd column of the constraint matrix corresponding to the intercept. % The default value of \code{zero} means that the behavioural % effect is modelled as the difference between the % two intercepts. % That is, it is modelled through the intercept, and a % negative value of the second linear/additive predictor means trap shy, etc. } } \details{ This model (commonly known as \eqn{M_b} in the capture--recapture literature) operates on a capture history matrix response of 0s and 1s. See \code{\link{posbernoulli.t}} for details. Each sampling occasion has the same probability and this is modelled here. But once an animal is captured, it is marked so that its future capture history can be recorded. 
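  For example, an animal with capture history \code{c(0, 0, 1, 0, 1)}
  is modelled using the capture probability for occasions 1--3
  (up to and including its first capture) and the recapture
  probability for occasions 4 and 5.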
  The effect of the recapture probability is modelled through a second
  linear/additive predictor, and this usually differs from the first
  linear/additive predictor by just a different intercept
  (because \code{parallel.b = TRUE} but the parallelism does not apply
  to the intercept).
  It is well-known that some species of animals are affected by
  capture, e.g., trap-shy or trap-happy.
  This \pkg{VGAM} family function \emph{does} allow the capture history
  to be modelled via such behavioural effects.
  See \code{\link{posbernoulli.t}} for other information,
  e.g., common assumptions.

  The number of linear/additive predictors is \eqn{M = 2},
  and the default links are
  \eqn{(logit \,p_c, logit \,p_r)^T}{(logit p_c, logit p_r)^T}
  where \eqn{p_c} is the probability of capture and
  \eqn{p_r} is the probability of recapture.
  The fitted value returned is of the same dimension as the response
  matrix, and depends on the capture history:
  prior to being first captured, it is \code{cap.prob}.
  Afterwards, it is \code{recap.prob}.

  By default, the constraint matrix for the intercept term is set up
  so that \eqn{p_r} differs from \eqn{p_c} by a simple binary effect.
  This allows an estimate of the trap-happy/trap-shy effect.

}
\value{
  An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}).
  The object is used by modelling functions such as \code{\link{vglm}},
  and \code{\link{vgam}}.

}
\section{Warning }{
  See \code{\link{posbernoulli.tb}}.

}
\references{

  See \code{\link{posbernoulli.t}}.

}
\author{ Thomas W. Yee. }

\note{
  When the number of sampling occasions is large the computation
  becomes increasingly slow.
  Monitor convergence by setting \code{trace = TRUE}.

  The dependent variable is \emph{not} scaled to row proportions.
  This is the same as \code{\link{posbernoulli.t}}
  but different from \code{\link{posbinomial}} and
  \code{\link{binomialff}}.

}
\seealso{
  \code{\link{posbernoulli.t}} (including estimating \eqn{N}),
  \code{\link{posbernoulli.tb}},
  \code{\link{Perom}},
  \code{\link{dposbern}},
  \code{\link{rposbern}},
  \code{\link{posbinomial}}.

% \code{\link{huggins91}}.
% \code{\link{vglm.control}} for \code{xij}, } \examples{ # Perom data ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, # Fit a M_b model M_b <- vglm(cbind(y1, y2, y3, y4, y5, y6) ~ 1, data = Perom, posbernoulli.b, trace = TRUE) coef(M_b, matrix = TRUE) constraints(M_b, matrix = TRUE) summary(M_b) # Fit a M_bh model M_bh <- vglm(cbind(y1, y2, y3, y4, y5, y6) ~ sex + weight, posbernoulli.b, trace = TRUE, data = Perom) coef(M_bh, matrix = TRUE) constraints(M_bh) # (2,2) element of "(Intercept)" is the behavioural effect summary(M_bh) # Estimate of behavioural effect is positive (trap-happy) # Fit a M_h model M_h <- vglm(cbind(y1, y2, y3, y4, y5, y6) ~ sex + weight, data = Perom, posbernoulli.t(parallel.t = TRUE), trace = TRUE) coef(M_h, matrix = TRUE) constraints(M_h, matrix = TRUE) summary(M_h) # Fit a M_0 model M_0 <- vglm(cbind( y1 + y2 + y3 + y4 + y5 + y6, 6 - y1 - y2 - y3 - y4 - y5 - y6) ~ 1, data = Perom, posbinomial, trace = TRUE) coef(M_0, matrix = TRUE) constraints(M_0, matrix = TRUE) summary(M_0) # Simulated data set ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, set.seed(123); nTimePts <- 5; N <- 1000 hdata <- rposbern(n = N, nTimePts = nTimePts, pvars = 2, is.popn = TRUE) # N is the popn size nrow(hdata) # Less than N # The truth: xcoeffs are c(-2, 1, 2) and cap.effect = -1 model1 <- vglm(cbind(y1, y2, y3, y4, y5) ~ x2, posbernoulli.b, data = hdata, trace = TRUE) coef(model1) coef(model1, matrix = TRUE) constraints(model1, matrix = TRUE) summary(model1) head(depvar(model1)) # Capture history response matrix head(model1@extra$cap.hist1) # Info on its capture history head(model1@extra$cap1) # When it was first captured head(fitted(model1)) # Depends on capture history (trap.effect <- coef(model1)["(Intercept):2"]) # Should be -1 head(model.matrix(model1, type = "vlm"), 21) head(hdata) summary(hdata) dim(depvar(model1)) vcov(model1) model1@extra$N.hat # Estimate of the population size; should be about N model1@extra$SE.N.hat # SE of the estimate of the population size # An approximate 95 percent confidence interval: round(model1@extra$N.hat + c(-1, 1) * 1.96 * model1@extra$SE.N.hat, 1) } \keyword{models} \keyword{regression} %# Compare the models using a LRT %lrtest(M_bh, M_h) %(wald.pvalue <- 2 * pnorm(abs(summary(M_bh)@coef3["(Intercept):2", "z value"]), % lower.tail = FALSE)) # Two-sided pvalue VGAM/man/posbernUC.Rd0000644000176000001440000001017312136651105014000 0ustar ripleyusers\name{posbernUC} \alias{posbernUC} \alias{dposbern} \alias{rposbern} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Positive Bernoulli Sequence Model } \description{ Density, and random generation for multiple Bernoulli responses where each row in the response matrix has at least one success. } \usage{ rposbern(n, nTimePts = 5, pvars = length(xcoeff), xcoeff = c(-2, 1, 2), cap.effect = -1, link = "logit", is.popn = FALSE, earg.link = FALSE) dposbern(x, prob, prob0 = prob, log = FALSE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x}{ response vector or matrix. Should only have 0 and 1 values, at least two columns, and each row should have at least one 1. } \item{nTimePts}{Number of sampling occasions. Called \eqn{\tau} in \code{\link{posbernoulli.b}} and \code{\link{posbernoulli.t}}. } \item{n}{number of observations. Usually a single positive integer, else the length of the vector is used. See argument \code{is.popn}. } \item{is.popn}{ Logical. 
  If \code{TRUE} then argument \code{n} is the population size and
  what is returned may have substantially fewer rows than \code{n}.
  That is, if an animal has at least one 1 in its sequence
  then it is returned, else it is not.
  In other words, only animals captured at least once are
  returned in the sample.

  }
  \item{cap.effect}{
  Numeric, the capture effect.
  Added to the linear predictor if captured previously.
  A positive or negative value corresponds to
  a trap-happy and trap-shy effect respectively.

  }
% \item{double.ch}{
% Logical.
% If \code{TRUE} then the values of \code{ch0}, \code{ch1}, \ldots are
% 2 or 0, else 1 or 0.
% Setting this argument \code{TRUE} means that a model can be fitted
% with half the capture history in both denominator and numerator
% (this is a compromise of the Huggins (1991) model where the full
% capture history only appears in the numerator).
% }
  \item{pvars}{
  Number of other numeric covariates that make up
  the linear predictor.
  Labelled \code{x1}, \code{x2}, \ldots,
  where the first is an intercept, and the others are
  independent standard \code{\link[stats:Uniform]{runif}} random
  variates.
  The first \code{pvars} elements of \code{xcoeff} are used.

  }
  \item{xcoeff}{
  The regression coefficients of the linear predictor.
  These correspond to \code{x1}, \code{x2}, \ldots,
  and the first is for the intercept.
  The length of \code{xcoeff} must be at least \code{pvars}.

  }
  \item{link, earg.link}{
  The former is used to generate the probabilities for capture
  at each occasion.
  Other details at \code{\link{CommonVGAMffArguments}}.

  }
  \item{prob, prob0}{
  Matrix of probabilities for the numerator and denominators
  respectively.
  The default does \emph{not} correspond to the \eqn{M_b} model
  since the \eqn{M_b} model has a denominator which involves
  the capture history.

  }
  \item{log}{
  Logical. Return the logarithm of the answer?

  }
}
\details{
  The form of the conditional likelihood is described in
  \code{\link{posbernoulli.b}}.
  The denominator is equally shared among the elements of
  the matrix \code{x}.

}
\value{
  This function returns a data frame with some attributes.
  The function generates random deviates
  (\eqn{\tau} columns labelled \code{y1}, \code{y2}, \ldots)
  for the response.
  Some indicator columns are also included
  (those starting with \code{ch} are for previous capture history,
  and those starting with \code{z} are zero),
  and these are useful for the \code{xij} argument.

  The function \code{dposbern} gives the density.

}
%\references{ }
\author{ Thomas W. Yee. }
\note{
  The \code{r}-type function is experimental and does not follow the
  usual conventions of \code{r}-type R functions.
  The \code{d}-type function is more conventional.

}

\seealso{
% \code{\link{huggins91}},
  \code{\link{posbernoulli.b}},
  \code{\link{posbernoulli.t}},
  \code{\link{posbernoulli.tb}}.

}
\examples{
set.seed(123); rposbern(n = 10)
attributes(rposbern(n = 10))
}
\keyword{distribution}
\keyword{datagen}

%double.ch = FALSE,
VGAM/man/polonoUC.Rd0000644000176000001440000001017712136651105013642 0ustar ripleyusers\name{Polono}
\alias{Polono}
\alias{dpolono}
\alias{ppolono}
%\alias{qpolono}
\alias{rpolono}
\title{The Poisson Lognormal Distribution}
\description{
  Density, distribution function, and random generation
  for the Poisson lognormal distribution.

}
\usage{
dpolono(x, meanlog = 0, sdlog = 1, bigx = 170, ...)
ppolono(q, meanlog = 0, sdlog = 1,
        isOne = 1 - sqrt( .Machine$double.eps ), ...)
rpolono(n, meanlog = 0, sdlog = 1) } \arguments{ \item{x, q}{vector of quantiles.} % \item{p}{vector of probabilities.} \item{n}{number of observations. If \code{length(n) > 1} then the length is taken to be the number required. } \item{meanlog, sdlog }{ the mean and standard deviation of the normal distribution (on the log scale). They match the arguments in \code{\link[stats:Lognormal]{Lognormal}}. } \item{bigx}{ Numeric. This argument is for handling large values of \code{x} and/or when \code{\link[stats]{integrate}} fails. A first order Taylor series approximation [Equation (7) of Bulmer (1974)] is used at values of \code{x} that are greater or equal to this argument. For \code{bigx = 10}, he showed that the approximation has a relative error less than 0.001 for values of \code{meanlog} and \code{sdlog} ``likely to be encountered in practice''. The argument can be assigned \code{Inf} in which case the approximation is not used. } \item{isOne }{ Used to test whether the cumulative probabilities have effectively reached unity. } \item{...}{ Arguments passed into \code{\link[stats]{integrate}}. } } \value{ \code{dpolono} gives the density, \code{ppolono} gives the distribution function, and % \code{qpolono} gives the quantile function, and \code{rpolono} generates random deviates. } \references{ Bulmer, M. G. (1974) On fitting the Poisson lognormal distribution to species-abundance data. \emph{Biometrics}, \bold{30}, 101--110. } \author{ T. W. Yee. Some anonymous soul kindly wrote \code{ppolono()} and improved the original \code{dpolono()}. } \details{ The Poisson lognormal distribution is similar to the negative binomial in that it can be motivated by a Poisson distribution whose mean parameter comes from a right skewed distribution (gamma for the negative binomial and lognormal for the Poisson lognormal distribution). % See zz code{link{polonozz}}, the \pkg{VGAM} family function % for estimating the parameters, % for the formula of the probability density function and other details. } \note{ By default, \code{dpolono} involves numerical integration that is performed using \code{\link[stats]{integrate}}. Consequently, computations are very slow and numerical problems may occur (if so then the use of \code{...} may be needed). Alternatively, for extreme values of \code{x}, \code{meanlog}, \code{sdlog}, etc., the use of \code{bigx = Inf} avoids the call to \code{\link[stats]{integrate}}, however the answer may be a little inaccurate. For the maximum likelihood estimation of the 2 parameters a \pkg{VGAM} family function called \code{polono}, say, has not been written yet. } \seealso{ \code{\link{lognormal}}, \code{\link{poissonff}}, \code{\link{negbinomial}}. 
} \examples{ meanlog <- 0.5; sdlog <- 0.5; yy <- 0:19 sum(proby <- dpolono(yy, m = meanlog, sd = sdlog)) # Should be 1 max(abs(cumsum(proby) - ppolono(yy, m = meanlog, sd = sdlog))) # Should be 0 \dontrun{ opar = par(no.readonly = TRUE) par(mfrow = c(2, 2)) plot(yy, proby, type = "h", col = "blue", ylab = "P[Y=y]", log = "", main = paste("Poisson lognormal(m = ", meanlog, ", sdl = ", sdlog, ")", sep = "")) y <- 0:190 # More extreme values; use the approximation and plot on a log scale (sum(proby <- dpolono(y, m = meanlog, sd = sdlog, bigx = 100))) # Should be 1 plot(y, proby, type = "h", col = "blue", ylab = "P[Y=y] (log)", log = "y", main = paste("Poisson lognormal(m = ", meanlog, ", sdl = ", sdlog, ")", sep = "")) # Note the kink at bigx # Random number generation table(y <- rpolono(n = 1000, m = meanlog, sd = sdlog)) hist(y, breaks = ((-1):max(y))+0.5, prob = TRUE, border = "blue") par(opar) } } \keyword{distribution} VGAM/man/polf.Rd0000644000176000001440000000770512136651105013047 0ustar ripleyusers\name{polf} \alias{polf} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Poisson-Ordinal Link Function } \description{ Computes the Poisson-ordinal transformation, including its inverse and the first two derivatives. } \usage{ polf(theta, cutpoint = NULL, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{theta}{ Numeric or character. See below for further details. } \item{cutpoint}{ The cutpoints should be non-negative integers. If \code{polf()} is used as the link function in \code{\link{cumulative}} then one should choose \code{reverse = TRUE, parallel = TRUE, apply.parint = TRUE}. } \item{inverse, deriv, short, tag}{ Details at \code{\link{Links}}. } } \details{ The Poisson-ordinal link function (POLF) can be applied to a parameter lying in the unit interval. Its purpose is to link cumulative probabilities associated with an ordinal response coming from an underlying Poisson distribution. If the cutpoint is zero then a complementary log-log link is used. See \code{\link{Links}} for general information about \pkg{VGAM} link functions. } \value{ See Yee (2012) for details. } \references{ Yee, T. W. (2012) \emph{Ordinal ordination with normalizing link functions for count data}, (in preparation). } \author{ Thomas W. Yee } \note{ Numerical values of \code{theta} too close to 0 or 1 or out of range result in large positive or negative values, or maybe 0 depending on the arguments. Although measures have been taken to handle cases where \code{theta} is too close to 1 or 0, numerical instabilities may still arise. In terms of the threshold approach with cumulative probabilities for an ordinal response this link function corresponds to the Poisson distribution (see \code{\link{poissonff}}) that has been recorded as an ordinal response using known cutpoints. } \section{Warning }{ Prediction may not work on \code{\link{vglm}} or \code{\link{vgam}} etc. objects if this link function is used. } \seealso{ \code{\link{Links}}, \code{\link{ordpoisson}}, \code{\link{poissonff}}, \code{\link{nbolf}}, \code{\link{golf}}, \code{\link{cumulative}}. } \examples{ polf("p", cutpoint = 2, short = FALSE) polf("p", cutpoint = 2, tag = TRUE) p <- seq(0.01, 0.99, by = 0.01) y <- polf(p, cutpoint = 2) y. 
<- polf(p, cutpoint = 2, deriv = 1) max(abs(polf(y, cutpoint = 2, inv = TRUE) - p)) # Should be 0 \dontrun{par(mfrow = c(2, 1), las = 1) plot(p, y, type = "l", col = "blue", main = "polf()") abline(h = 0, v = 0.5, col = "orange", lty = "dashed") plot(p, y., type = "l", col = "blue", main = "(Reciprocal of) first POLF derivative") } # Rutherford and Geiger data ruge <- data.frame(yy = rep(0:14, times = c(57,203,383,525,532,408,273,139,45,27,10,4,0,1,1))) with(ruge, length(yy)) # 2608 1/8-minute intervals cutpoint <- 5 ruge <- transform(ruge, yy01 = ifelse(yy <= cutpoint, 0, 1)) fit <- vglm(yy01 ~ 1, binomialff(link = polf(cutpoint = cutpoint)), ruge) coef(fit, matrix = TRUE) exp(coef(fit)) # Another example pdata <- data.frame(x2 = sort(runif(nn <- 1000))) pdata <- transform(pdata, x3 = runif(nn)) pdata <- transform(pdata, mymu = exp( 3 + 1 * x2 - 2 * x3)) pdata <- transform(pdata, y1 = rpois(nn, lambda = mymu)) cutpoints <- c(-Inf, 10, 20, Inf) pdata <- transform(pdata, cuty = Cut(y1, breaks = cutpoints)) \dontrun{ with(pdata, plot(x2, x3, col = cuty, pch = as.character(cuty))) } with(pdata, table(cuty) / sum(table(cuty))) fit <- vglm(cuty ~ x2 + x3, data = pdata, trace = TRUE, cumulative(reverse = TRUE, parallel = TRUE, apply.parint = TRUE, link = polf(cutpoint = cutpoints[2:3]), mv = TRUE)) head(depvar(fit)) head(fitted(fit)) head(predict(fit)) coef(fit) coef(fit, matrix = TRUE) constraints(fit) fit@misc$earg } \keyword{math} \keyword{models} \keyword{regression} VGAM/man/poissonp.Rd0000644000176000001440000000662312136651105013757 0ustar ripleyusers\name{poissonp} \alias{poissonp} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Poisson-points-on-a-plane/volume Distances Distribution } \description{ Estimating the density parameter of the distances from a fixed point to the u-th nearest point, in a plane or volume. } \usage{ poissonp(ostatistic, dimension = 2, link = "loge", idensity = NULL, imethod = 1) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{ostatistic}{ Order statistic. A single positive integer. For example, the value 5 means the response are the distances of the fifth nearest value to that point (usually over many planes or volumes). } \item{dimension}{ The value 2 or 3; 2 meaning a plane and 3 meaning a volume. } \item{link}{ Parameter link function applied to the (positive) density parameter, called \eqn{\lambda}{lambda} below. See \code{\link{Links}} for more choices. } \item{idensity}{ Optional initial value for the parameter. A \code{NULL} value means a value is obtained internally. Use this argument if convergence failure occurs. } \item{imethod}{ An integer with value \code{1} or \code{2} which specifies the initialization method for \eqn{\lambda}{lambda}. If failure to converge occurs try another value and/or else specify a value for \code{idensity}. } } \details{ Suppose the number of points in any region of area \eqn{A} of the plane is a Poisson random variable with mean \eqn{\lambda A}{lambda*A} (i.e., \eqn{\lambda}{lambda} is the \emph{density} of the points). Given a fixed point \eqn{P}, define \eqn{D_1}, \eqn{D_2},\ldots to be the distance to the nearest point to \eqn{P}, second nearest to \eqn{P}, etc. This \pkg{VGAM} family function estimates \eqn{\lambda}{lambda} since the probability density function for \eqn{D_u} is easily derived, \eqn{u=1,2,\ldots}{u=1,2,...}. Here, \eqn{u} corresponds to the argument \code{ostatistic}. 
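  To sketch that derivation in the plane case:
  the number of points within distance \eqn{d} of \eqn{P}
  is a Poisson random variable with mean
  \eqn{\lambda \pi d^2}{lambda * pi * d^2},
  so that
  \deqn{P(D_u > d) = \sum_{k=0}^{u-1} e^{-\lambda \pi d^2}
        (\lambda \pi d^2)^k / k!,}{%
        P(D_u > d) = sum_{k=0}^{u-1} exp(-lambda*pi*d^2) *
        (lambda*pi*d^2)^k / k!,}
  and differentiating this with respect to \eqn{d}
  gives the density of \eqn{D_u}.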
  Similarly, suppose the number of points in any volume \eqn{V} is a
  Poisson random variable with mean
  \eqn{\lambda V}{lambda*V}
  where, once again, \eqn{\lambda}{lambda} is the \emph{density}
  of the points.
  This \pkg{VGAM} family function estimates \eqn{\lambda}{lambda}
  when the argument \code{ostatistic} is specified and
  \code{dimension = 3} is used.

  The mean of \eqn{D_u} is returned as the fitted values.
  Newton-Raphson is the same as Fisher-scoring.

}
\section{Warning}{
  Convergence may be slow if the initial values are far from the
  solution.
  This often corresponds to the situation when the response values
  are all close to zero, i.e., there is a high density of points.

  Formulae such as the means have not been fully checked.

}
\value{
  An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}).
  The object is used by modelling functions such as \code{\link{vglm}},
  \code{\link{rrvglm}} and \code{\link{vgam}}.

}
%\references{
%}
\author{ T. W. Yee }
%\note{
%}

\seealso{
  \code{\link{poissonff}}.

}
\examples{
pdata <- data.frame(y = rgamma(10, shape = exp(-1)))  # Not proper data!
os <- 2
fit <- vglm(y ~ 1, poissonp(os, 2), pdata, tra = TRUE, crit = "c")
fit <- vglm(y ~ 1, poissonp(os, 3), pdata, tra = TRUE, crit = "c")  # Slow convergence?
fit <- vglm(y ~ 1, poissonp(os, 3, idensi = 1), pdata, trace = TRUE, crit = "c")
head(fitted(fit))
with(pdata, mean(y))
coef(fit, matrix = TRUE)
Coef(fit)
}
\keyword{models}
\keyword{regression}
VGAM/man/poissonff.Rd0000644000176000001440000001274712136651105014117 0ustar ripleyusers\name{poissonff}
%\alias{poisson}
\alias{poissonff}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Poisson Family Function }
\description{
  Family function for a generalized linear model fitted to
  Poisson responses.
  The dispersion parameters may be known or unknown.

}
\usage{
poissonff(link = "loge", dispersion = 1, onedpar = FALSE, imu = NULL,
          imethod = 1, parallel = FALSE, zero = NULL, bred = FALSE,
          earg.link = FALSE)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
  \item{link}{
  Link function applied to the mean or means.
  See \code{\link{Links}} for more choices and information.

  }
  \item{dispersion}{
  Dispersion parameter.
  By default the dispersion parameter is known (unity), so
  maximum likelihood is used to estimate the model.
  However, the user can specify
  \code{dispersion = 0} to have it estimated, or else specify a
  known positive value (or values if the response is a
  matrix---one value per column).

  }
  \item{onedpar}{
  One dispersion parameter?
  If the response is a matrix, then a separate dispersion parameter
  will be computed for each response (column), by default.
  Setting \code{onedpar = TRUE} will pool them so that there is only
  one dispersion parameter to be estimated.

  }
  \item{parallel}{
  A logical or formula. Used only if the response is a matrix.

  }
  \item{imu, imethod}{
  See \code{\link{CommonVGAMffArguments}} for more information.

  }
  \item{zero}{
  An integer-valued vector specifying which linear/additive predictors
  are modelled as intercepts only.
  The values must be from the set \{1,2,\ldots,\eqn{M}\},
  where \eqn{M} is the number of columns of the matrix response.
  See \code{\link{CommonVGAMffArguments}} for more information.

  }
  \item{bred, earg.link}{
  Details at \code{\link{CommonVGAMffArguments}}.
  Setting \code{bred = TRUE} should work for multiple responses and
  all \pkg{VGAM} link functions;
  it has been tested for \code{\link{loge}} and \code{\link{identity}},
  but further testing is required.
} } \details{ \eqn{M} defined above is the number of linear/additive predictors. If the dispersion parameter is unknown, then the resulting estimate is not fully a maximum likelihood estimate. A dispersion parameter that is less/greater than unity corresponds to under-/over-dispersion relative to the Poisson model. Over-dispersion is more common in practice. When fitting a Quadratic RR-VGLM (see \code{\link{cqo}}), the response is a matrix of \eqn{M}, say, columns (e.g., one column per species). Then there will be \eqn{M} dispersion parameters (one per column of the response matrix) if \code{dispersion = 0} and \code{onedpar = FALSE}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, \code{\link{vgam}}, \code{\link{rrvglm}}, \code{\link{cqo}}, and \code{\link{cao}}. } \references{ McCullagh, P. and Nelder, J. A. (1989) \emph{Generalized Linear Models}, 2nd ed. London: Chapman & Hall. } \author{ Thomas W. Yee } \note{ This function will handle a matrix response automatically. The call \code{poissonff(dispersion=0, ...)} is equivalent to \code{quasipoissonff(...)}. The latter was written so that R users of \code{quasipoisson()} would only need to add a ``\code{ff}'' to the end of the family function name. Regardless of whether the dispersion parameter is to be estimated or not, its value can be seen from the output from the \code{summary()} of the object. % With the introduction of name spaces for the \pkg{VGAM} package, % \code{"ff"} can be dropped for this family function. } \section{Warning }{ With a multivariate response, assigning a known dispersion parameter for \emph{each} response is not handled well yet. Currently, only a single known dispersion parameter is handled well. } \seealso{ \code{\link{Links}}, \code{\link{quasipoissonff}}, \code{\link{genpoisson}}, \code{\link{zipoisson}}, \code{\link{skellam}}, \code{\link{mix2poisson}}, \code{\link{cenpoisson}}, \code{\link{ordpoisson}}, \code{\link{amlpoisson}}, \code{\link{invbinomial}}, \code{\link{loge}}, \code{\link{polf}}, \code{\link{rrvglm}}, \code{\link{cqo}}, \code{\link{cao}}, \code{\link{binomialff}}, \code{\link{quasibinomialff}}, \code{\link[stats]{poisson}}, \code{\link{poissonp}}. 
}
\examples{
poissonff()

set.seed(123)
pdata <- data.frame(x2 = rnorm(nn <- 100))
pdata <- transform(pdata, y1 = rpois(nn, exp(1 + x2)))
(fit1 <- vglm(y1 ~ x2, family = poissonff, pdata))
(fit2 <- vglm(y1 ~ x2, family = poissonff(bred = TRUE), pdata))
coef(fit1, matrix = TRUE)
coef(fit2, matrix = TRUE)

nn <- 200
cdata <- data.frame(x2 = rnorm(nn), x3 = rnorm(nn), x4 = rnorm(nn))
cdata <- transform(cdata, lv1 = 0 + x3 - 2*x4)
cdata <- transform(cdata, lambda1 = exp(3 - 0.5 * (lv1-0)^2),
                          lambda2 = exp(2 - 0.5 * (lv1-1)^2),
                          lambda3 = exp(2 - 0.5 * ((lv1+4)/2)^2))
cdata <- transform(cdata, y1 = rpois(nn, lambda1),
                          y2 = rpois(nn, lambda2),
                          y3 = rpois(nn, lambda3))
\dontrun{
p1 <- cqo(cbind(y1, y2, y3) ~ x2 + x3 + x4, poissonff, cdata,
          EqualTol = FALSE, ITol = FALSE)
summary(p1)  # Three dispersion parameters are all unity
lvplot(p1, y = TRUE, lcol = 2:4, pch = 2:4, pcol = 2:4, rug = FALSE) }
}
\keyword{models}
\keyword{regression}
VGAM/man/pnorm2UC.Rd0000644000176000001440000000552712136651105013554 0ustar ripleyusers\name{pnorm2}
\alias{pnorm2}
\title{Bivariate normal distribution cumulative distribution function}
\description{
% Density,
  Cumulative distribution function
% quantile function
% and
% random generation
  for the bivariate normal
  distribution.

}
\usage{
pnorm2(x1, x2, mean1 = 0, mean2 = 0, var1 = 1, var2 = 1, cov12 = 0)
}
\arguments{
  \item{x1, x2}{vector of quantiles.}
  \item{mean1, mean2, var1, var2, cov12}{
  vector of means, variances and the covariance.
% standard deviations and correlation parameter.

  }
% \item{n}{number of observations. }
% \item{log}{
% Logical.
% If \code{log = TRUE} then the logarithm of the density is returned.
% }
% \item{rho}{
% See \code{\link{binormal}}.
% }

}
\value{
% \code{dmakeham} gives the density,
  \code{pnorm2} gives the cumulative distribution function.
% \code{qmakeham} gives the quantile function, and
% \code{rmakeham} generates random deviates.

}
% \author{ T. W. Yee }
\details{
  The default arguments correspond to the standard bivariate normal
  distribution with correlation parameter \eqn{\rho = 0}{rho = 0}.
  That is, two independent standard normal distributions.
  Let \code{sd1} be \code{sqrt(var1)}, written \eqn{\sigma_1}{sigma_1},
  etc.
  Then the general formula for the correlation coefficient is
  \eqn{\rho = cov / (\sigma_1 \sigma_2)}{rho = cov / (sigma_1 * sigma_2)}
  where \eqn{cov} is argument \code{cov12}.
  Thus if arguments \code{var1} and \code{var2} are left alone then
  \code{cov12} can be assigned the value of \eqn{\rho}{rho}.

  One can think of this function as an extension of
  \code{\link[stats]{pnorm}} to two dimensions;
  however, note that the argument names have been changed for
  \pkg{VGAM} 0.9-1 onwards.

}
\references{
  Based on Donnelly (1973),
  the code was translated from FORTRAN to ratfor using struct, and
  then from ratfor to C manually.
  The function was originally called \code{bivnor}, and TWY only
  wrote a wrapper function.

  Donnelly, T. G. (1973)
  Algorithm 462: Bivariate Normal Distribution.
  \emph{Communications of the ACM},
  \bold{16},
  638.

% It gives the probability that a bivariate normal exceeds (ah, ak).
% Here, gh and gk are 0.5 times the right tail areas of ah, ak under a N(0, 1)
% distribution.

}
\section{Warning}{
  Being based on an approximation, the results may be negative!
  Also,
  this function may be renamed to \code{pnormal2()},
  or something similar, at a later date.

}
%\note{
%
%}
\seealso{
  \code{\link[stats]{pnorm}},
  \code{\link{binormal}},
  \code{\link{normal1}}.
} \examples{ yvec <- c(-5, -1.96, 0, 1.96, 5) ymat <- expand.grid(yvec, yvec) cbind(ymat, pnorm2(ymat[, 1], ymat[, 2])) \dontrun{ rhovec <- seq(-0.95, 0.95, by = 0.01) plot(rhovec, pnorm2(0, 0, cov12 = rhovec), type = "l", col = "blue", las = 1) abline(v = 0, h = 0.25, col = "gray", lty = "dashed") } } \keyword{distribution} VGAM/man/pneumo.Rd0000644000176000001440000000225312136651105013403 0ustar ripleyusers\name{pneumo} \alias{pneumo} \docType{data} \title{Pneumoconiosis in Coalminers Data} \description{ The \code{pneumo} data frame has 8 rows and 4 columns. Exposure time is explanatory, and there are 3 ordinal response variables. } \usage{data(pneumo)} \format{ This data frame contains the following columns: \describe{ \item{exposure.time}{a numeric vector, in years} \item{normal}{a numeric vector, counts} \item{mild}{a numeric vector, counts} \item{severe}{a numeric vector, counts} } } \details{ These were collected from coalface workers. In the original data set, the two most severe categories were combined. } \source{ Ashford, J.R., 1959. An approach to the analysis of data for semi-quantal responses in biological assay. \emph{Biometrics}, \bold{15}, 573--581. } \seealso{ \code{\link{cumulative}}. } \references{ McCullagh, P. and Nelder, J. A. (1989) \emph{Generalized Linear Models}, 2nd ed. London: Chapman & Hall. } \examples{ # Fit the proportional odds model, p.179, in McCullagh and Nelder (1989) pneumo <- transform(pneumo, let = log(exposure.time)) vglm(cbind(normal, mild, severe) ~ let, propodds, pneumo) } \keyword{datasets} VGAM/man/plotvgam.control.Rd0000644000176000001440000000644412136651105015416 0ustar ripleyusers\name{plotvgam.control} \alias{plotvgam.control} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Control Function for plotvgam() } \description{ Provides default values for many arguments available for \code{plotvgam()}. } \usage{ plotvgam.control(which.cf = NULL, xlim = NULL, ylim = NULL, llty = par()$lty, slty = "dashed", pcex = par()$cex, pch = par()$pch, pcol = par()$col, lcol = par()$col, rcol = par()$col, scol = par()$col, llwd = par()$lwd, slwd = par()$lwd, add.arg = FALSE, one.at.a.time = FALSE, .include.dots = TRUE, noxmean = FALSE, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{which.cf}{ Integer vector specifying which component functions are to be plotted (for each covariate). Must have values from the set \{1,2,\ldots,\eqn{M}\}. } \item{xlim}{ Range for the x-axis. } \item{ylim}{ Range for the y-axis. } \item{llty}{ Line type for the fitted functions (lines). Fed into \code{par(lty)}. } \item{slty}{ Line type for the standard error bands. Fed into \code{par(lty)}. } \item{pcex}{ Character expansion for the points (residuals). Fed into \code{par(cex)}. } \item{pch}{ Character used for the points (residuals). Same as \code{par(pch)}. } \item{pcol}{ Color of the points. Fed into \code{par(col)}. } \item{lcol}{ Color of the fitted functions (lines). Fed into \code{par(col)}. } \item{rcol}{ Color of the rug plot. Fed into \code{par(col)}. } \item{scol}{ Color of the standard error bands. Fed into \code{par(col)}. } \item{llwd}{ Line width of the fitted functions (lines). Fed into \code{par(lwd)}. } \item{slwd}{ Line width of the standard error bands. Fed into \code{par(lwd)}. } \item{add.arg}{ Logical. If \code{TRUE} then the plot will be added to an existing plot, otherwise a new plot will be made. } \item{one.at.a.time}{ Logical. 
If \code{TRUE} then the plots are done one at a time, with the user having to hit the return key between the plots. } \item{.include.dots}{ Not to be used by the user. } \item{noxmean}{ Logical. If \code{TRUE} then the point at the mean of \eqn{x}, which is added when standard errors are specified and it thinks the function is linear, is not added. One might use this argument if \code{ylab} is specified. } \item{\dots}{ Other arguments that may be fed into \code{par()}. } In the above, \eqn{M} is the number of linear/additive predictors. } \details{ The most obvious features of \code{\link{plotvgam}} can be controlled by the above arguments. } \value{ A list with values matching the arguments. } \references{ Yee, T. W. and Wild, C. J. (1996) Vector generalized additive models. \emph{Journal of the Royal Statistical Society, Series B, Methodological}, \bold{58}, 481--493. } \author{ Thomas W. Yee } %\note{ % This function enables \code{plotvgam()} to work in a similar % manner to S-PLUS's \code{plot.gam()}. % However, there is no interactive options yet. % %} \seealso{ \code{\link{plotvgam}}. } \examples{ plotvgam.control(lcol = c("red", "blue"), scol = "darkgreen", se = TRUE) } \keyword{models} \keyword{regression} \keyword{smooth} \keyword{dplot} VGAM/man/plotvgam.Rd0000644000176000001440000001344012136651105013731 0ustar ripleyusers\name{plotvgam} \alias{plotvgam} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Default VGAM Plotting } \description{ Component functions of a \code{\link{vgam-class}} object can be plotted with \code{plotvgam()}. These are on the scale of the linear/additive predictor. } \usage{ plotvgam(x, newdata = NULL, y = NULL, residuals = NULL, rugplot = TRUE, se = FALSE, scale = 0, raw = TRUE, offset.arg = 0, deriv.arg = 0, overlay = FALSE, type.residuals = c("deviance", "working", "pearson", "response"), plot.arg = TRUE, which.term = NULL, which.cf = NULL, control = plotvgam.control(...), varxij = 1, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x}{ A fitted \pkg{VGAM} object, e.g., produced by \code{\link{vgam}}, \code{\link{vglm}}, or \code{\link{rrvglm}}. } \item{newdata}{ Data frame. May be used to reconstruct the original data set. } \item{y}{ Unused. } \item{residuals}{ Logical. If \code{TRUE} then residuals are plotted. See \code{type.residuals}} \item{rugplot}{ Logical. If \code{TRUE} then a rug plot is plotted at the foot of each plot. These values are jittered to expose ties. } \item{se}{ Logical. If \code{TRUE} then approximate \eqn{\pm 2}{+-2} pointwise standard error bands are included in the plot. } \item{scale}{ Numerical. By default, each plot will have its own y-axis scale. However, by specifying a value, each plot's y-axis scale will be at least \code{scale} wide. } \item{raw}{ Logical. If \code{TRUE} then the smooth functions are those obtained directly by the algorithm, and are plotted without having to premultiply with the constraint matrices. If \code{FALSE} then the smooth functions have been premultiply by the constraint matrices. The \code{raw} argument is directly fed into \code{predict.vgam()}. } \item{offset.arg}{ Numerical vector of length \eqn{r}. These are added to the component functions. Useful for separating out the functions when \code{overlay} is \code{TRUE}. If \code{overlay} is \code{TRUE} and there is one covariate then using the intercept values as the offsets can be a good idea. } \item{deriv.arg}{ Numerical. The order of the derivative. 
Should be assigned a small integer such as 0, 1, or 2. It applies only to \code{s()} terms, for which the derivative of the smooth function is plotted. } \item{overlay}{ Logical. If \code{TRUE} then component functions of the same covariate are overlaid on each other. The functions are centered, so \code{offset.arg} can be useful when \code{overlay} is \code{TRUE}. } \item{type.residuals}{ If \code{residuals} is \code{TRUE} then the first possible value of this vector is used to specify the type of residual. } \item{plot.arg}{ Logical. If \code{FALSE} then no plot is produced. } \item{which.term}{ Character or integer vector containing all terms to be plotted, e.g., \code{which.term = c("s(age)", "s(height)")} or \code{which.term = c(2, 5, 9)}. By default, all are plotted. } \item{which.cf}{ An integer-valued vector specifying which linear/additive predictors are to be plotted. The values must be from the set \{1,2,\ldots,\eqn{r}\}. By default, all are plotted. } \item{control}{ Other control parameters. See \code{\link{plotvgam.control}}. } \item{\dots}{ Other arguments that can be fed into \code{\link{plotvgam.control}}. This includes line colors, line widths, line types, etc. } \item{varxij}{ Positive integer. Used if the \code{xij} argument of \code{\link{vglm.control}} was used; this chooses which inner argument the component is plotted against. This argument is related to \code{raw = TRUE} and terms such as \code{NS(dum1,dum2)} and constraint matrices that have more than one column. The default would plot the smooth against \code{dum1} but setting \code{varxij = 2} could mean plotting the smooth against \code{dum2}. See the \pkg{VGAM} website for further information. } } \details{ In this help file \eqn{M} is the number of linear/additive predictors, and \eqn{r} is the number of columns of the constraint matrix of interest. Many of \code{plotvgam()}'s options can be found in \code{\link{plotvgam.control}}, e.g., line types, line widths, colors. } \value{ The original object, but with the \code{preplot} slot of the object assigned information regarding the plot. } \references{ Yee, T. W. and Wild, C. J. (1996) Vector generalized additive models. \emph{Journal of the Royal Statistical Society, Series B, Methodological}, \bold{58}, 481--493. Documentation accompanying the \pkg{VGAM} package at \url{http://www.stat.auckland.ac.nz/~yee} contains further information and examples. } \author{ Thomas W. Yee } \note{ While \code{plot(fit)} will work if \code{class(fit)} is \code{"vgam"}, it is necessary to use \code{plotvgam(fit)} explicitly otherwise. \code{plotvgam()} is quite buggy at the moment. % \code{plotvgam()} works in a similar % manner to S-PLUS's \code{plot.gam()}, however, there is no % options for interactive construction of the plots yet. } \seealso{ \code{\link{vgam}}, \code{\link{plotvgam.control}}, \code{predict.vgam}, \code{\link{vglm}}. } \examples{ coalminers <- transform(coalminers, Age = (age - 42) / 5) fit <- vgam(cbind(nBnW, nBW, BnW, BW) ~ s(Age), binom2.or(zero = NULL), coalminers) \dontrun{ par(mfrow = c(1,3)) plot(fit, se = TRUE, ylim = c(-3, 2), las = 1) plot(fit, se = TRUE, which.cf = 1:2, lcol = "blue", scol = "orange", ylim = c(-3, 2)) plot(fit, se = TRUE, which.cf = 1:2, lcol = "blue", scol = "orange", overlay = TRUE) } } \keyword{models} \keyword{regression} \keyword{smooth} \keyword{graphs} VGAM/man/plotrcim0.Rd0000644000176000001440000001260012136651105014006 0ustar ripleyusers\name{plotrcim0} \alias{plotrcim0} %- Also NEED an '\alias' for EACH other topic documented here.
\title{ Main effects plot for a Row-Column Interaction Model (RCIM) } \description{ Produces a main effects plot for Row-Column Interaction Models (RCIMs). } \usage{ plotrcim0(object, centered = TRUE, whichplots = c(1, 2), hline0 = TRUE, hlty = "dashed", hcol = par()$col, hlwd = par()$lwd, rfirst = 1, cfirst = 1, rtype = "h", ctype = "h", rcex.lab = 1, rcex.axis = 1, rtick = FALSE, ccex.lab = 1, ccex.axis = 1, ctick = FALSE, rmain = "Row effects", rsub = "", rxlab = "", rylab = "Row effects", cmain = "Column effects", csub = "", cxlab = "", cylab = "Column effects", rcol = par()$col, ccol = par()$col, ...) } \arguments{ \item{object}{ An \code{\link{rcim}} object. This should be of rank-0, i.e., main effects only and no interactions. } \item{whichplots}{ Numeric, describing which plots are to be plotted. The row effects plot is 1 and the column effects plot is 2. Set the value \code{0}, say, for no plots at all. } \item{centered}{ Logical. If \code{TRUE} then the row and column effects are centered (but not scaled) by \code{\link[base]{scale}}. If \code{FALSE} then the raw effects are used (of which the first are zero by definition). } \item{hline0, hlty, hcol, hlwd}{ \code{hline0} is logical. If \code{TRUE} then a horizontal line is plotted at 0 and the other arguments describe this line. Probably having \code{hline0 = TRUE} only makes sense when \code{centered = TRUE}. } \item{rfirst, cfirst}{ \code{rfirst} is the level of row that is placed first in the row effects plot, etc. } \item{rmain, cmain}{ Character. \code{rmain} is the main label in the row effects plot, etc. } \item{rtype, ctype, rsub, csub}{ See the \code{type} and \code{sub} arguments of \code{\link[graphics:plot]{plot}}. } %\item{rlabels, clabels}{ % rlabels = FALSE, clabels = FALSE, % Currently not functioning properly. % zz. % See \code{labels} argument of % \code{\link[graphics:plot]{plot}}. % %} \item{rxlab, rylab, cxlab, cylab}{ Character. For the row effects plot, \code{rxlab} is \code{xlab} and \code{rylab} is \code{ylab}; see \code{\link[graphics:par]{par}}. Ditto for \code{cxlab} and \code{cylab} for the column effects plot. } \item{rcex.lab, ccex.lab}{ Numeric. \code{rcex.lab} is \code{cex} for the row effects plot label, etc. } \item{rcex.axis, ccex.axis}{ Numeric. \code{rcex.axis} is the \code{cex} argument for the row effects axis label, etc. } \item{rtick, ctick}{ Logical. If \code{rtick = TRUE} then add ticks to the row effects plot, etc. } \item{rcol, ccol}{ \code{rcol} gives a colour for the row effects plot, etc. } %\item{llwd}{ % Fed into \code{lwd} of \code{\link[graphics:par]{par}}. % %} %\item{rlas, clas}{ % Fed into \code{las} of \code{\link[graphics:par]{par}}. % %} %\item{main}{ % Character. A common title. % %} %\item{type}{ % Fed into \code{type} of \code{\link[graphics:plot]{plot}}. % %} \item{...}{ Arguments fed into both \code{\link[graphics:plot]{plot}} calls. } } \details{ This function plots the row and column effects of a rank-0 RCIM. As the result is a main effects plot of a regression analysis, its interpretation when \code{centered = FALSE} is relative to the baseline (reference level) of a row and column, and should also be considered in light of the link function used. Many arguments that start with \code{"r"} refer to the row effects plot, and those that start with \code{"c"} to the column effects plot. } \value{ The original object with the \code{post} slot assigned additional information from the plot. } \note{ This function should only be used to plot rank-0 \code{\link{rcim}} objects.
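For example, \code{fit0 <- rcim(alcoff.e, family = poissonff)} in the examples below produces such a rank-0 fit.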
If the rank is positive then it will issue a warning. Using the \code{ylim} argument will mean the row and column effects are plotted on a common scale; see \code{\link[graphics]{plot.window}}. % This function is not finished yet. % There may be several bugs! } \author{ T. W. Yee, A. F. Hadi. } %\section{Warning}{ % %} \seealso{ \code{\link{moffset}}, \code{\link{Rcim}}, \code{\link{rcim}}. } \examples{ alcoff.e <- moffset(alcoff, "6", "Mon", postfix = "*") # Effective day fit0 <- rcim(alcoff.e, family = poissonff) \dontrun{par(oma = c(0, 0, 4, 0), mfrow = 1:2) # For all plots below too ii <- plot(fit0, rcol = "blue", ccol = "orange", lwd = 4, ylim = c(-2, 2), # A common ylim cylab = "Effective daily effects", rylab = "Hourly effects", rxlab = "Hour", cxlab = "Effective day") ii@post # Endowed with additional information } # Negative binomial example fit1 <- rcim(alcoff.e, negbinomial, trace = TRUE) \dontrun{ plot(fit1, ylim = c(-2, 2)) } # Univariate normal example fit2 <- rcim(alcoff.e, normal1, trace = TRUE) \dontrun{ plot(fit2, ylim = c(-200, 400)) } # Median-polish example fit3 <- rcim(alcoff.e, alaplace2(tau = 0.5, intparloc = TRUE), trace = TRUE) \dontrun{ plot(fit3, ylim = c(-200, 250)) } # Zero-inflated Poisson example on "crashp" (no 0s in alcoff) cbind(rowSums(crashp)) # Easy to see the data cbind(colSums(crashp)) # Easy to see the data fit4 <- rcim(Rcim(crashp, rbaseline = "5", cbaseline = "Sun"), zipoissonff, trace = TRUE) \dontrun{ plot(fit4, ylim = c(-3, 3)) } } VGAM/man/plotqtplot.lmscreg.Rd0000644000176000001440000000767312136651105015760 0ustar ripleyusers\name{plotqtplot.lmscreg} \alias{plotqtplot.lmscreg} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Quantile Plot for LMS Quantile Regression } \description{ Plots the quantiles associated with a LMS quantile regression. } \usage{ plotqtplot.lmscreg(fitted.values, object, newdata = NULL, percentiles = object@misc$percentiles, lp = NULL, add.arg = FALSE, y = if (length(newdata)) FALSE else TRUE, spline.fit = FALSE, label = TRUE, size.label = 0.06, xlab = NULL, ylab = "", pch = par()$pch, pcex = par()$cex, pcol.arg = par()$col, xlim = NULL, ylim = NULL, llty.arg = par()$lty, lcol.arg = par()$col, llwd.arg = par()$lwd, tcol.arg = par()$col, tadj = 1, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{fitted.values}{ Matrix of fitted values. } \item{object}{ A \pkg{VGAM} quantile regression model, i.e., an object produced by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}} with a family function beginning with \code{"lms."}, e.g., \code{\link{lms.yjn}}. } \item{newdata}{ Data frame at which predictions are made. By default, the original data are used. } \item{percentiles}{ Numerical vector with values between 0 and 100 that specify the percentiles (quantiles). The default is to use the percentiles when fitting the model. For example, the value 50 corresponds to the median. } \item{lp}{ Length of \code{percentiles}. } \item{add.arg}{ Logical. Add the quantiles to an existing plot? } \item{y}{ Logical. Add the response as points to the plot? } \item{spline.fit}{ Logical. Add a spline curve to the plot? } \item{label}{ Logical. Add the percentiles (as text) to the plot? } \item{size.label}{ Numeric. How much room to leave at the RHS for the label. It is in percent (of the range of the primary variable). } \item{xlab}{ Caption for the x-axis. See \code{\link[graphics]{par}}. } \item{ylab}{ Caption for the y-axis. See \code{\link[graphics]{par}}.
} \item{pch}{ Plotting character. See \code{\link[graphics]{par}}. } \item{pcex}{ Character expansion of the points. See \code{\link[graphics]{par}}. } \item{pcol.arg}{ Color of the points. See the \code{col} argument of \code{\link[graphics]{par}}. } \item{xlim}{ Limits of the x-axis. See \code{\link[graphics]{par}}. } \item{ylim}{ Limits of the y-axis. See \code{\link[graphics]{par}}. } \item{llty.arg}{ Line type. See the \code{lty} argument of \code{\link[graphics]{par}}. } \item{lcol.arg}{ Color of the lines. See the \code{col} argument of \code{\link[graphics]{par}}. } \item{llwd.arg}{ Line width. See the \code{lwd} argument of \code{\link[graphics]{par}}. } \item{tcol.arg}{ Color of the text (if \code{label} is \code{TRUE}). See the \code{col} argument of \code{\link[graphics]{par}}. } \item{tadj}{ Text justification. See the \code{adj} argument of \code{\link[graphics]{par}}. } \item{\dots}{ Arguments passed into the \code{plot} function when setting up the entire plot. Useful arguments here include \code{main} and \code{las}. } } \details{ The above graphical parameters offer some flexibility when plotting the quantiles. } \value{ The matrix of fitted values. } \references{ Yee, T. W. (2004) Quantile regression via vector generalized additive models. \emph{Statistics in Medicine}, \bold{23}, 2295--2315. Documentation accompanying the \pkg{VGAM} package at \url{http://www.stat.auckland.ac.nz/~yee} contains further information and examples. } \author{ Thomas W. Yee } \note{ While the graphical arguments of this function are useful to the user, this function should not be called directly. } \seealso{ \code{\link{qtplot.lmscreg}}. } \examples{\dontrun{ fit <- vgam(BMI ~ s(age, df = c(4,2)), lms.bcn(zero = 1), data = bmi.nz) qtplot(fit) qtplot(fit, perc = c(25,50,75,95), lcol = "blue", tcol = "blue", llwd = 2) } } \keyword{graphs} \keyword{models} \keyword{regression} VGAM/man/plotqrrvglm.Rd0000644000176000001440000000447712136651105014473 0ustar ripleyusers\name{plotqrrvglm} \alias{plotqrrvglm} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Model Diagnostic Plots for QRR-VGLMs } \description{ The residuals of a QRR-VGLM are plotted for model diagnostic purposes. } \usage{ plotqrrvglm(object, rtype = c("response", "pearson", "deviance", "working"), ask = FALSE, main = paste(Rtype, "residuals vs latent variable(s)"), xlab = "Latent Variable", ITolerances = object@control$EqualTolerances, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{object}{ An object of class \code{"qrrvglm"}. } \item{rtype}{ Character string giving residual type. By default, the first one is chosen. } \item{ask}{ Logical. If \code{TRUE}, the user is asked to hit the return key for the next plot. } \item{main}{ Character string giving the title of the plot. } \item{xlab}{ Character string giving the x-axis caption. } \item{ITolerances}{ Logical. This argument is fed into \code{Coef(object, ITolerances = ITolerances)}. } \item{\dots}{ Other plotting arguments (see \code{\link[graphics]{par}}). } } \details{ Plotting the residuals can be potentially very useful for checking that the model fit is adequate. } \value{ The original object. } \references{ Yee, T. W. (2004) A new technique for maximum-likelihood canonical Gaussian ordination. \emph{Ecological Monographs}, \bold{74}, 685--701. } \author{Thomas W. Yee} \note{ An ordination plot of a QRR-VGLM can be obtained by \code{\link{lvplot.qrrvglm}}.
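For instance, if \code{p1} is a fitted \code{"qrrvglm"} object such as the one in the example below, then something like \code{lvplot(p1, las = 1)} should give such an ordination plot (it is assumed here that the generic \code{lvplot} dispatches to \code{\link{lvplot.qrrvglm}} for these objects).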
} \seealso{ \code{\link{lvplot.qrrvglm}}, \code{\link{cqo}}. } \examples{\dontrun{ # QRR-VGLM on the hunting spiders data # This is computationally expensive set.seed(111) # This leads to the global solution # hspider[,1:6]=scale(hspider[,1:6]) # Standardize the environmental variables p1 <- cqo(cbind(Alopacce, Alopcune, Alopfabr, Arctlute, Arctperi, Auloalbi, Pardlugu, Pardmont, Pardnigr, Pardpull, Trocterr, Zoraspin) ~ WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux, quasipoissonff, data = hspider, Crow1positive = FALSE) par(mfrow = c(3, 4)) plot(p1, rtype = "response", col = "blue", pch = 4, las = 1, main = "") } } \keyword{dplot} \keyword{models} \keyword{regression} VGAM/man/plotdeplot.lmscreg.Rd0000644000176000001440000000613612136651105015725 0ustar ripleyusers\name{plotdeplot.lmscreg} \alias{plotdeplot.lmscreg} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Density Plot for LMS Quantile Regression } \description{ Plots a probability density function associated with a LMS quantile regression. } \usage{ plotdeplot.lmscreg(answer, y.arg, add.arg = FALSE, xlab = "", ylab = "density", xlim = NULL, ylim = NULL, llty.arg = par()$lty, col.arg = par()$col, llwd.arg = par()$lwd, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{answer}{ Output from functions of the form \code{deplot.???} where \code{???} is the name of the \pkg{VGAM} LMS family function, e.g., \code{lms.yjn}. See below for details. } \item{y.arg}{ Numerical vector. The values of the response variable at which to evaluate the density. This should be a grid that is fine enough to ensure the plotted curves are smooth. } \item{add.arg}{ Logical. Add the density to an existing plot? } \item{xlab, ylab}{ Caption for the x- and y-axes. See \code{\link[graphics]{par}}. } \item{xlim, ylim}{ Limits of the x- and y-axes. See \code{\link[graphics]{par}}. } \item{llty.arg}{ Line type. See the \code{lty} argument of \code{\link[graphics]{par}}. } \item{col.arg}{ Line color. See the \code{col} argument of \code{\link[graphics]{par}}. } \item{llwd.arg}{ Line width. See the \code{lwd} argument of \code{\link[graphics]{par}}. } \item{\dots}{ Arguments passed into the \code{plot} function when setting up the entire plot. Useful arguments here include \code{main} and \code{las}. } } \details{ The above graphical parameters offer some flexibility when plotting the quantiles. } \value{ The list \code{answer}, which has components \item{newdata}{ The argument \code{newdata} above from the argument list of \code{\link{deplot.lmscreg}}, or a one-row data frame constructed out of the \code{x0} argument. } \item{y}{ The argument \code{y.arg} above. } \item{density}{ Vector of the density function values evaluated at \code{y.arg}. } } \references{ Yee, T. W. (2004) Quantile regression via vector generalized additive models. \emph{Statistics in Medicine}, \bold{23}, 2295--2315. Documentation accompanying the \pkg{VGAM} package at \url{http://www.stat.auckland.ac.nz/~yee} contains further information and examples. } \author{ Thomas W. Yee } \note{ While the graphical arguments of this function are useful to the user, this function should not be called directly. } \seealso{ \code{\link{deplot.lmscreg}}. 
} \examples{ fit <- vgam(BMI ~ s(age, df = c(4,2)), lms.bcn(zero = 1), bmi.nz) \dontrun{ y = seq(15, 43, by = 0.25) deplot(fit, x0 = 20, y = y, xlab = "BMI", col = "green", llwd = 2, main = "BMI distribution at ages 20 (green), 40 (blue), 60 (orange)") deplot(fit, x0 = 40, y = y, add = TRUE, col = "blue", llwd = 2) deplot(fit, x0 = 60, y = y, add = TRUE, col = "orange", llwd = 2) -> aa names(aa@post$deplot) aa@post$deplot$newdata head(aa@post$deplot$y) head(aa@post$deplot$density) } } \keyword{dplot} \keyword{models} \keyword{regression} VGAM/man/plackett.Rd0000644000176000001440000000574512136651105013720 0ustar ripleyusers\name{plackett} \alias{plackett} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Plackett's Bivariate Distribution Family Function } \description{ Estimate the association parameter of Plackett's bivariate distribution by maximum likelihood estimation. } \usage{ plackett(link = "loge", ioratio = NULL, imethod = 1, nsimEIM = 200) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{link}{ Link function applied to the (positive) odds ratio \eqn{\psi}{psi}. See \code{\link{Links}} for more choices and information. } \item{ioratio}{ Numeric. Optional initial value for \eqn{\psi}{psi}. If a convergence failure occurs try assigning a value or a different value. } \item{imethod, nsimEIM}{ See \code{\link{CommonVGAMffArguments}}. } } \details{ The defining equation is \deqn{\psi = H \times (1-y_1-y_2+H) / ((y_1-H) \times (y_2-H))}{% psi = H*(1-y1-y2+H) / ((y1-H)*(y2-H))} where \eqn{P(Y_1 \leq y_1, Y_2 \leq y_2) = H_{\psi}(y_1,y_2)}{P(Y1 <= y1, Y2 <= y2)= H(y1,y2)} is the cumulative distribution function. The density function is \eqn{h_{\psi}(y_1,y_2) =}{h(y1,y2) =} \deqn{\psi [1 + (\psi-1)(y_1 + y_2 - 2 y_1 y_2) ] / \left( [1 + (\psi-1)(y_1 + y_2) ]^2 - 4 \psi (\psi-1) y_1 y_2 \right)^{3/2}}{% psi*[1 + (psi-1)*(y1 + y2 - 2*y1*y2) ] / ( [1 + (psi-1)*(y1 + y2)]^2 - 4*psi*(psi-1)*y1*y2)^(3/2)} for \eqn{\psi > 0}{psi > 0}. Some writers call \eqn{\psi}{psi} the \emph{cross product ratio} but it is called the \emph{odds ratio} here. The support of the function is the unit square. The marginal distributions here are the standard uniform although it is commonly generalized to other distributions. If \eqn{\psi = 1}{psi=1} then \eqn{h_{\psi}(y_1,y_2) = y_1 y_2}{h(y1,y2) = y1*y2}, i.e., independence. As the odds ratio tends to infinity one has \eqn{y_1=y_2}{y1=y2}. As the odds ratio tends to 0 one has \eqn{y_2=1-y_1}{y2=1-y1}. Fisher scoring is implemented using \code{\link{rplack}}. Convergence is often quite slow. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Plackett, R. L. (1965) A class of bivariate distributions. \emph{Journal of the American Statistical Association}, \bold{60}, 516--522. } \author{ T. W. Yee } \note{ The response must be a two-column matrix. Currently, the fitted value is a 2-column matrix with 0.5 values because the marginal distributions correspond to a standard uniform distribution. } \seealso{ \code{\link{rplack}}, \code{\link{frank}}. 
} \examples{ \dontrun{ ymat <- rplack(n = 2000, oratio = exp(2)) plot(ymat, col = "blue") fit <- vglm(ymat ~ 1, fam = plackett, trace = TRUE) coef(fit, matrix = TRUE) Coef(fit) vcov(fit) head(fitted(fit)) summary(fit) } } \keyword{models} \keyword{regression} VGAM/man/plackUC.Rd0000644000176000001440000000321012136651105013416 0ustar ripleyusers\name{Plackett} \alias{Plackett} \alias{dplack} \alias{pplack} \alias{rplack} \title{Plackett's Bivariate Distribution} \description{ Density, distribution function, and random generation for the (one parameter) bivariate Plackett distribution. } \usage{ dplack(x1, x2, oratio, log = FALSE) pplack(q1, q2, oratio) rplack(n, oratio) } \arguments{ \item{x1, x2, q1, q2}{vector of quantiles.} \item{n}{number of observations. Must be a positive integer of length 1.} \item{oratio}{the positive odds ratio \eqn{\psi}{psi}.} \item{log}{ Logical. If \code{TRUE} then the logarithm is returned. } } \value{ \code{dplack} gives the density, \code{pplack} gives the distribution function, and \code{rplack} generates random deviates (a two-column matrix). } \references{ Mardia, K. V. (1967) Some contributions to contingency-type distributions. \emph{Biometrika}, \bold{54}, 235--249. } \author{ T. W. Yee } \details{ See \code{\link{plackett}}, the \pkg{VGAM} family function for estimating the parameter by maximum likelihood estimation, for the formula of the cumulative distribution function and other details. } %\note{ %} \seealso{ \code{\link{plackett}}, \code{\link{frank}}. } \examples{ \dontrun{ N <- 101; oratio <- exp(1) x <- seq(0.0, 1.0, len = N) ox <- expand.grid(x, x) z <- dplack(ox[,1], ox[,2], oratio = oratio) contour(x, x, matrix(z, N, N), col = "blue") z <- pplack(ox[,1], ox[,2], oratio = oratio) contour(x, x, matrix(z, N, N), col = "blue") plot(rr <- rplack(n = 3000, oratio = oratio)) par(mfrow = c(1,2)) hist(rr[,1]) # Should be uniform hist(rr[,2]) # Should be uniform } } \keyword{distribution} VGAM/man/pgamma.deriv.unscaled.Rd0000644000176000001440000000424412136651105016251 0ustar ripleyusers\name{pgamma.deriv.unscaled} \alias{pgamma.deriv.unscaled} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Derivatives of the Incomplete Gamma Integral (Unscaled Version) } \description{ The first two derivatives of the incomplete gamma integral, without the gamma function scaling. } \usage{ pgamma.deriv.unscaled(q, shape) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{q, shape}{ As in \code{\link[stats]{pgamma}} and \code{\link{pgamma.deriv}} but these must be vectors of positive values only and finite. } } \details{ Define \deqn{G(x, a) = \int_0^x t^{a-1} e^{-t} dt}{G(x, a) = integral_0^x t^(a-1) exp(-t) dt} so that \eqn{G(x, a)} is \code{pgamma(x, a) * gamma(a)}. Write \eqn{x = q} and \code{shape =} \eqn{a}. The zeroth, first and second derivatives with respect to \eqn{a} of \eqn{G} are returned. This function is similar in spirit to \code{\link{pgamma.deriv}} but here there is no gamma function to scale things. Currently a 3-column matrix is returned (in the future this may change and an argument may be supplied so that only what is required by the user is computed). This function is based on Wingo (1989). } \value{ The 3 columns, running from left to right, are the derivatives of orders 0, 1 and 2 with respect to \eqn{a}. } \references{ See \code{\link{truncweibull}}. } \author{ T. W. Yee. } %\note{ % If convergence does not occur then try increasing the value of % \code{tmax}.
% %} \section{Warning }{ This function seems inaccurate for \code{q = 1} and \code{q = 2}; see the plot below. } \seealso{ \code{\link{pgamma.deriv}}, \code{\link[stats]{pgamma}}. } \examples{ x <- 3; aa <- seq(0.3, 4, by = 0.01) ans.u <- pgamma.deriv.unscaled(x, aa) head(ans.u) \dontrun{ par(mfrow = c(1, 3)) for (jay in 1:3) { plot(aa, ans.u[, jay], type = "l", col = "blue", cex.lab = 1.5, cex.axis = 1.5, las = 1, main = colnames(ans.u)[jay], log = "", xlab = "shape", ylab = "") abline(h = 0, v = 1:2, lty = "dashed", col = "gray") # Inaccurate at 1 and 2 } } } % Add one or more standard keywords, see file 'KEYWORDS' in the % R documentation directory. \keyword{math} VGAM/man/pgamma.deriv.Rd0000644000176000001440000000616512136651105014460 0ustar ripleyusers\name{pgamma.deriv} \alias{pgamma.deriv} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Derivatives of the Incomplete Gamma Integral } \description{ The first two derivatives of the incomplete gamma integral. } \usage{ pgamma.deriv(q, shape, tmax = 100) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{q, shape}{ As in \code{\link[stats]{pgamma}} but these must be vectors of positive values only and finite. } % \item{shape}{ % A vector of positive values. % %} \item{tmax}{ Maximum number of iterations allowed in the computation (per \code{q} value). } } \details{ Write \eqn{x = q} and \code{shape =} \eqn{a}. The first and second derivatives with respect to \eqn{q} and \eqn{a} are returned. This function is similar in spirit to \code{\link[stats]{pgamma}}; define \deqn{P(a,x) = \frac{1}{\Gamma(a)} \int_0^x t^{a-1} e^{-t} dt}{P(a,x) = 1/Gamma(a) integral_0^x t^(a-1) exp(-t) dt} so that \eqn{P(a, x)} is \code{pgamma(x, a)}. Currently a 6-column matrix is returned (in the future this may change and an argument may be supplied so that only what is required by the user is computed). The computations use a series expansion for \eqn{a \leq x \leq 1}{a <= x <= 1} or \eqn{x < a}; otherwise a continued fraction expansion is used. Machine overflow can occur for large values of \eqn{x} when \eqn{x} is much greater than \eqn{a}. } \value{ The first 5 columns, running from left to right, are the derivatives with respect to: \eqn{x}, \eqn{x^2}, \eqn{a}, \eqn{a^2}, \eqn{xa}. The 6th column is \eqn{P(a, x)} (but it is not as accurate as calling \code{\link[stats]{pgamma}} directly). } \references{ Moore, R. J. (1982) Algorithm AS 187: Derivatives of the Incomplete Gamma Integral. \emph{Journal of the Royal Statistical Society, Series C} \emph{(Applied Statistics)}, \bold{31}(3), 330--335. } \author{ T. W. Yee wrote the wrapper function to the Fortran subroutine written by R. J. Moore. The subroutine was modified to run using double precision. The original code came from \url{http://lib.stat.cmu.edu/apstat/187}. } \note{ If convergence does not occur then try increasing the value of \code{tmax}. Yet to do: add more arguments to give greater flexibility in the accuracy desired and to compute only quantities that are required by the user. } %% ~Make other sections like Warning with \section{Warning }{....} ~ \seealso{ \code{\link{pgamma.deriv.unscaled}}, \code{\link[stats]{pgamma}}.
} \examples{ x <- seq(2, 10, length = 501) head(ans <- pgamma.deriv(x, 2)) \dontrun{ par(mfrow = c(2, 3)) for (jay in 1:6) plot(x, ans[, jay], type = "l", col = "blue", cex.lab = 1.5, cex.axis = 1.5, las = 1, log = "x", main = colnames(ans)[jay], xlab = "q", ylab = "") } } % Add one or more standard keywords, see file 'KEYWORDS' in the % R documentation directory. \keyword{math} % Some part of R-2.15.2/src/library/stats/man/GammaDist.Rd used % An error in the article? % I believe comments in the code (C in fortran). % for \eqn{a \leq x \leq 1}{a <= x <= 1}, and VGAM/man/persp.qrrvglm.Rd0000644000176000001440000001674212136651105014732 0ustar ripleyusers\name{perspqrrvglm} \alias{perspqrrvglm} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Perspective plot for QRR-VGLMs } \description{ Produces a perspective plot for a CQO model (QRR-VGLM). It is only applicable for rank-1 or rank-2 models with argument \code{noRRR = ~ 1}. } \usage{ perspqrrvglm(x, varlvI = FALSE, reference = NULL, plot.it = TRUE, xlim = NULL, ylim = NULL, zlim = NULL, gridlength = if (Rank == 1) 301 else c(51,51), whichSpecies = NULL, xlab = if (Rank == 1) "Latent Variable" else "Latent Variable 1", ylab = if (Rank == 1) "Expected Value" else "Latent Variable 2", zlab = "Expected value", labelSpecies = FALSE, stretch = 1.05, main = "", ticktype = "detailed", col = if (Rank == 1) par()$col else "white", llty = par()$lty, llwd = par()$lwd, add1 = FALSE, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x}{ Object of class \code{"qrrvglm"}, i.e., a constrained quadratic ordination (CQO) object. } \item{varlvI}{ Logical that is fed into \code{\link{Coef.qrrvglm}}. } \item{reference}{ Integer or character that is fed into \code{\link{Coef.qrrvglm}}. } \item{plot.it}{ Logical. Plot it? } \item{xlim, ylim}{ Limits of the x- and y-axis. Both are numeric of length 2. See \code{\link[graphics]{par}}. } \item{zlim}{ Limits of the z-axis. Numeric of length 2. Ignored if rank is 1. See \code{\link[graphics]{par}}. } \item{gridlength}{ Numeric. The fitted values are evaluated on a grid, and this argument regulates the fineness of the grid. If \code{Rank = 2} then the argument is recycled to length 2, and the two numbers are the number of grid points on the x- and y-axes respectively. } \item{whichSpecies}{ Numeric or character vector. Indicates which species are to be plotted. The default is to plot all of them. If numeric, it should contain values in the set \{1,2,\ldots,\eqn{S}\} where \eqn{S} is the number of species. } \item{xlab, ylab}{ Character caption for the x-axis and y-axis. By default, a suitable caption is found. See the \code{xlab} argument in \code{\link[graphics]{plot}} or \code{\link[graphics]{title}}. } \item{zlab}{Character caption for the z-axis. Used only if \code{Rank = 2}. By default, a suitable caption is found. See the \code{xlab} argument in \code{\link[graphics]{plot}} or \code{\link[graphics]{title}}. } \item{labelSpecies}{Logical. Whether the species should be labelled with their names. Used for \code{Rank = 1} only. The position of the label is just above the species' maximum. } \item{stretch}{ Numeric. A value slightly more than 1, this argument adjusts the height of the y-axis. Used for \code{Rank = 1} only. } \item{main}{ Character, giving the title of the plot. See the \code{main} argument in \code{\link[graphics]{plot}} or \code{\link[graphics]{title}}. } \item{ticktype}{ Tick type. Used only if \code{Rank = 2}. 
See \code{\link[graphics]{persp}} for more information. } \item{col}{ Color. See \code{\link[graphics]{persp}} for more information. } \item{llty}{ Line type. Rank-1 models only. See the \code{lty} argument of \code{\link[graphics]{par}}. } \item{llwd}{ Line width. Rank-1 models only. See the \code{lwd} argument of \code{\link[graphics]{par}}. } \item{add1}{ Logical. Add to an existing plot? Used only for rank-1 models. } \item{\dots}{ Arguments passed into \code{\link[graphics]{persp}}. Useful arguments here include \code{theta} and \code{phi}, which control the position of the eye. } } \details{ For a rank-1 model, a perspective plot is similar to \code{\link{lvplot.qrrvglm}} but plots the curves along a fine grid and there is no rugplot to show the site scores. For a rank-2 model, a perspective plot has the first latent variable as the x-axis, the second latent variable as the y-axis, and the expected value (fitted value) as the z-axis. The result of a CQO is that each species has a response surface with elliptical contours. This function will, at each grid point, work out the maximum fitted value over all the species. The resulting response surface is plotted. Thus rare species will be obscured and abundant species will dominate the plot. To view rare species, use the \code{whichSpecies} argument to select a subset of the species. A perspective plot will be performed if \code{noRRR = ~ 1}, and \code{Rank = 1} or \code{2}. Also, all the tolerance matrices of those species to be plotted must be positive-definite. } \value{ For a rank-2 model, a list with the following components. \item{fitted}{ A \eqn{(G_1 \times G_2)}{(G1*G2)} by \eqn{M} matrix of fitted values on the grid. Here, \eqn{G_1}{G1} and \eqn{G_2}{G2} are the two values of \code{gridlength}. } \item{lv1grid, lv2grid}{The grid points for the x-axis and y-axis. } \item{maxfitted}{ A \eqn{G_1}{G1} by \eqn{G_2}{G2} matrix of the maximum of the fitted values over all species. These are the values that are plotted on the z-axis. } For a rank-1 model, the components \code{lv2grid} and \code{maxfitted} are \code{NULL}. } \references{ Yee, T. W. (2004) A new technique for maximum-likelihood canonical Gaussian ordination. \emph{Ecological Monographs}, \bold{74}, 685--701. } \author{ Thomas W. Yee } \note{ Yee (2004) does not refer to perspective plots. Instead, contour plots via \code{\link{lvplot.qrrvglm}} are used. For rank-1 models, a similar function to this one is \code{\link{lvplot.qrrvglm}}. It plots the fitted values at the actual site scores rather than on a fine grid. That approach has the advantage that the user sees the curves as a direct result of a model fitted to the data, whereas here it is easy to regard the smooth bell-shaped curves as the truth because they are one step further removed from the data. } \seealso{ \code{\link[graphics]{persp}}, \code{\link{cqo}}, \code{\link{Coef.qrrvglm}}, \code{\link{lvplot.qrrvglm}}, \code{\link[graphics]{par}}, \code{\link[graphics]{title}}.
} \examples{\dontrun{ hspider[,1:6] <- scale(hspider[,1:6]) # Good idea when ITolerances = TRUE set.seed(111) r1 <- cqo(cbind(Alopacce, Alopcune, Alopfabr, Arctlute, Arctperi, Auloalbi, Pardmont, Pardnigr, Pardpull, Trocterr) ~ WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux, poissonff, hspider, trace = FALSE, ITolerances = TRUE) set.seed(111) # r2 below is an ill-conditioned model r2 <- cqo(cbind(Alopacce, Alopcune, Alopfabr, Arctlute, Arctperi, Auloalbi, Pardmont, Pardnigr, Pardpull, Trocterr) ~ WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux, isdlv = c(2.4,1.0), Muxfactor = 3.0, trace = FALSE, poissonff, hspider, Rank = 2, EqualTolerances = TRUE) sort(r1@misc$deviance.Bestof) # A history of the fits sort(r2@misc$deviance.Bestof) # A history of the fits if (deviance(r2) > 857) stop("suboptimal fit obtained") persp(r1, xlim = c(-6,5), col = 1:4, label = TRUE) # Involves all species persp(r2, xlim = c(-6,5), ylim = c(-4,5), theta = 10, phi = 20, zlim = c(0,220)) # Omit the two dominant species to see what's behind them persp(r2, xlim = c(-6,5), ylim = c(-4,5), theta = 10, phi = 20, zlim = c(0,220), which = (1:10)[-c(8,10)]) # Use zlim to retain the original z-scale } } \keyword{models} \keyword{regression} \keyword{graphs} VGAM/man/perksUC.Rd0000644000176000001440000000342212136651105013453 0ustar ripleyusers\name{Perks} \alias{Perks} \alias{dperks} \alias{pperks} \alias{qperks} \alias{rperks} \title{The Perks Distribution} \description{ Density, cumulative distribution function, quantile function and random generation for the Perks distribution. } \usage{ dperks(x, shape, scale = 1, log = FALSE) pperks(q, shape, scale = 1) qperks(p, shape, scale = 1) rperks(n, shape, scale = 1) } \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. } \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the density is returned. } \item{shape, scale}{positive shape and scale parameters. } } \value{ \code{dperks} gives the density, \code{pperks} gives the cumulative distribution function, \code{qperks} gives the quantile function, and \code{rperks} generates random deviates. } \author{ T. W. Yee } \details{ See \code{\link{perks}} for details. } %\note{ % %} \seealso{ \code{\link{perks}}. } \examples{ probs <- seq(0.01, 0.99, by = 0.01) Shape <- exp(-1.0); Scale <- exp(1); max(abs(pperks(qperks(p = probs, Shape, Scale), Shape, Scale) - probs)) # Should be 0 \dontrun{ x <- seq(-0.1, 7, by = 0.01); plot(x, dperks(x, Shape, Scale), type = "l", col = "blue", las = 1, main = "Blue is density, orange is cumulative distribution function", sub = "Purple lines are the 10,20,...,90 percentiles", ylab = "", ylim = 0:1) abline(h = 0, col = "blue", lty = 2) lines(x, pperks(x, Shape, Scale), col = "orange") probs <- seq(0.1, 0.9, by = 0.1) Q <- qperks(probs, Shape, Scale) lines(Q, dperks(Q, Shape, Scale), col = "purple", lty = 3, type = "h") pperks(Q, Shape, Scale) - probs # Should be all zero abline(h = probs, col = "purple", lty = 3) } } \keyword{distribution} VGAM/man/perks.Rd0000644000176000001440000000711512136651105013226 0ustar ripleyusers\name{perks} \alias{perks} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Perks Distribution Family Function } \description{ Maximum likelihood estimation of the 2-parameter Perks distribution.
} \usage{ perks(lshape = "loge", lscale = "loge", ishape = NULL, iscale = NULL, gshape = exp(-5:5), gscale = exp(-5:5), nsimEIM = 500, oim.mean = FALSE, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lshape, lscale}{ Parameter link functions applied to the shape parameter \code{shape} and the scale parameter \code{scale}. All parameters are treated as positive here. See \code{\link{Links}} for more choices. } % \item{eshape, escale}{ % List. Extra argument for each of the links. % See \code{earg} in \code{\link{Links}} for general information. % } \item{ishape, iscale}{ Optional initial values. A \code{NULL} means a value is computed internally. } \item{gshape, gscale}{ See \code{\link{CommonVGAMffArguments}}. } \item{nsimEIM, zero}{ See \code{\link{CommonVGAMffArguments}}. } \item{oim.mean}{ Currently ignored. } } \details{ The Perks distribution has cumulative distribution function \deqn{F(y; \alpha, \beta) = 1 - \left\{ \frac{1 + \alpha}{1 + \alpha e^{\beta y}} \right\}^{1 / \beta} }{% F(y; alpha, beta) = 1 - ((1 + alpha)/(1 + alpha * e^(beta * y)))^(1 / beta) } which leads to a probability density function \deqn{f(y; \alpha, \beta) = \left[ 1 + \alpha \right]^{1 / \beta} \alpha e^{\beta y} / (1 + \alpha e^{\beta y})^{1 + 1 / \beta} }{% f(y; alpha, beta) = [1 + alpha]^(1 / beta) * alpha * exp(beta * y) / (1 + alpha * exp(beta * y))^(1 + 1 / beta) } for \eqn{\alpha > 0}{alpha > 0}, \eqn{\beta > 0}{beta > 0}, \eqn{y > 0}. Here, \eqn{\beta}{beta} is called the scale parameter \code{scale}, and \eqn{\alpha}{alpha} is called a shape parameter. The moments for this distribution do not appear to be available in closed form. Simulated Fisher scoring is used and multiple responses are handled. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Perks, W. (1932) On some experiments in the graduation of mortality statistics. \emph{Journal of the Institute of Actuaries}, \bold{63}, 12--40. Richards, S. J. (2012) A handbook of parametric survival models for actuarial use. \emph{Scandinavian Actuarial Journal}, 1--25. } \author{ T. W. Yee } \section{Warning }{ A lot of care is needed because this is a rather difficult distribution for parameter estimation. If the self-starting initial values fail then try experimenting with the initial value arguments, especially \code{iscale}. Successful convergence depends on having very good initial values. Also, monitor convergence by setting \code{trace = TRUE}. } \seealso{ \code{\link{dperks}}. } \examples{ \dontrun{ set.seed(123) pdata <- data.frame(x2 = runif(nn <- 1000)) # x2 unused pdata <- transform(pdata, eta1 = -1, ceta1 = 1) pdata <- transform(pdata, shape1 = exp(eta1), scale1 = exp(ceta1)) pdata <- transform(pdata, y1 = rperks(nn, shape = shape1, scale = scale1)) fit1 <- vglm(y1 ~ 1, perks, data = pdata, trace = TRUE) coef(fit1, matrix = TRUE) summary(fit1) } } \keyword{models} \keyword{regression} %# fit1 <- vglm(y1 ~ 1, perks, data = pdata, trace = TRUE) %# fit2 <- vglm(y1 ~ 1, perks(imeth = 2), data = pdata, trace = TRUE) % Argument \code{probs.y} is used only when \code{imethod = 2}.
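% A hedged numerical sanity check (an editorial sketch, not package code):
% the density dperks() should integrate to one; the parameter values below
% are arbitrary choices, and integrate() passes them through to dperks().
%# integrate(dperks, lower = 0, upper = Inf, shape = exp(-1), scale = exp(1))
%# The value of the integral should be very close to 1.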
VGAM/man/paretoIVUC.Rd0000644000176000001440000000650212136651105014062 0ustar ripleyusers\name{ParetoIV} \alias{ParetoIV} \alias{dparetoIV} \alias{pparetoIV} \alias{qparetoIV} \alias{rparetoIV} \alias{ParetoIII} \alias{dparetoIII} \alias{pparetoIII} \alias{qparetoIII} \alias{rparetoIII} \alias{ParetoII} \alias{dparetoII} \alias{pparetoII} \alias{qparetoII} \alias{rparetoII} \alias{ParetoI} \alias{dparetoI} \alias{pparetoI} \alias{qparetoI} \alias{rparetoI} \title{The Pareto(IV/III/II) Distributions} \description{ Density, distribution function, quantile function and random generation for the Pareto(IV/III/II) distributions. } \usage{ dparetoIV(x, location = 0, scale = 1, inequality = 1, shape = 1, log = FALSE) pparetoIV(q, location = 0, scale = 1, inequality = 1, shape = 1) qparetoIV(p, location = 0, scale = 1, inequality = 1, shape = 1) rparetoIV(n, location = 0, scale = 1, inequality = 1, shape = 1) dparetoIII(x, location = 0, scale = 1, inequality = 1, log = FALSE) pparetoIII(q, location = 0, scale = 1, inequality = 1) qparetoIII(p, location = 0, scale = 1, inequality = 1) rparetoIII(n, location = 0, scale = 1, inequality = 1) dparetoII(x, location = 0, scale = 1, shape = 1, log = FALSE) pparetoII(q, location = 0, scale = 1, shape = 1) qparetoII(p, location = 0, scale = 1, shape = 1) rparetoII(n, location = 0, scale = 1, shape = 1) dparetoI(x, scale = 1, shape = 1) pparetoI(q, scale = 1, shape = 1) qparetoI(p, scale = 1, shape = 1) rparetoI(n, scale = 1, shape = 1) } \arguments{ \item{x, q}{vector of quantiles. } \item{p}{vector of probabilities. } \item{n}{number of observations. Must be a single positive integer. } \item{location}{the location parameter. } \item{scale, shape, inequality}{the (positive) scale, inequality and shape parameters. } \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the density is returned. } } \value{ Functions beginning with the letter \code{d} give the density, functions beginning with the letter \code{p} give the distribution function, functions beginning with the letter \code{q} give the quantile function, and functions beginning with the letter \code{r} generate random deviates. } \references{ Brazauskas, V. (2003) Information matrix for Pareto(IV), Burr, and related distributions. \emph{Comm. Statist. Theory and Methods}, \bold{32}, 315--325. Arnold, B. C. (1983) \emph{Pareto Distributions}. Fairland, Maryland: International Cooperative Publishing House. } \author{ T. W. Yee } \details{ For the formulas and other details see \code{\link{paretoIV}}. } \note{ The functions \code{[dpqr]paretoI} are the same as \code{[dpqr]pareto1} except for a slight change in notation: \eqn{s=k} and \eqn{b=\alpha}{b=alpha}; see \code{\link{Pareto}}. } \seealso{ \code{\link{paretoIV}}, \code{\link{Pareto}}.
} \examples{ \dontrun{ x <- seq(-0.2, 4, by = 0.01) loc <- 0; Scale <- 1; ineq <- 1; shape <- 1.0; plot(x, dparetoIV(x, loc, Scale, ineq, shape), type = "l", col = "blue", main = "Blue is density, orange is cumulative distribution function", sub = "Purple are 5,10,...,95 percentiles", ylim = 0:1, las = 1, ylab = "") abline(h = 0, col = "blue", lty = 2) Q <- qparetoIV(seq(0.05,0.95,by = 0.05), loc, Scale, ineq, shape) lines(Q, dparetoIV(Q, loc, Scale, ineq, shape), col = "purple", lty = 3, type = "h") lines(x, pparetoIV(x, loc, Scale, ineq, shape), col = "orange") abline(h = 0, lty = 2) } } \keyword{distribution} VGAM/man/paretoIV.Rd0000644000176000001440000001270212136651105013631 0ustar ripleyusers\name{paretoIV} \alias{paretoIV} \alias{paretoIII} \alias{paretoII} %- Also NEED an '\alias' for EACH other topic documented here. \title{Pareto(IV/III/II) Distribution Family Functions } \description{ Estimates three of the parameters of the Pareto(IV) distribution by maximum likelihood estimation. Some special cases of this distribution are also handled. } \usage{ paretoIV(location = 0, lscale = "loge", linequality = "loge", lshape = "loge", iscale = 1, iinequality = 1, ishape = NULL, imethod = 1) paretoIII(location = 0, lscale = "loge", linequality = "loge", iscale = NULL, iinequality = NULL) paretoII(location = 0, lscale = "loge", lshape = "loge", iscale = NULL, ishape = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{location}{ Location parameter, called \eqn{a} below. It is assumed known. } \item{lscale, linequality, lshape}{ Parameter link functions for the scale parameter (called \eqn{b} below), inequality parameter (called \eqn{g} below), and shape parameter (called \eqn{s} below). See \code{\link{Links}} for more choices. A log link is the default for all because all these parameters are positive. } \item{iscale, iinequality, ishape}{ Initial values for the parameters. A \code{NULL} value means that it is obtained internally. If convergence failure occurs, use these arguments to input some alternative initial values. } \item{imethod}{ Method of initialization for the shape parameter. Currently only values 1 and 2 are available. Try the other value if convergence failure occurs. } } \details{ The Pareto(IV) distribution, which is used in actuarial science, economics, finance and telecommunications, has a cumulative distribution function that can be written \deqn{F(y) = 1 - [1 + ((y-a)/b)^{1/g}]^{-s}}{% F(y) = 1 - [1 + ((y-a)/b)^(1/g)]^(-s)} for \eqn{y > a}, \eqn{b>0}, \eqn{g>0} and \eqn{s>0}. The \eqn{a}{a} is called the \emph{location} parameter, \eqn{b} the \emph{scale} parameter, \eqn{g} the \emph{inequality} parameter, and \eqn{s} the \emph{shape} parameter. The location parameter is assumed known otherwise the Pareto(IV) distribution will not be a regular family. This assumption is not too restrictive in modelling because in typical applications this parameter is known, e.g., in insurance and reinsurance it is pre-defined by a contract and can be represented as a deductible or a retention level. The inequality parameter is so-called because of its interpretation in the economics context. If we choose a unit shape parameter value and a zero location parameter value then the inequality parameter is the Gini index of inequality, provided \eqn{g \leq 1}{g<=1}. The fitted values are currently \code{NA} because I haven't worked out what the mean of \eqn{Y} is yet. % The mean of \eqn{Y} is % \eqn{\alpha k/(k-1)}{alpha*k/(k-1)} provided \eqn{k>1}. 
% Its variance is % \eqn{\alpha^2 k /((k-1)^2 (k-2))}{alpha^2 k /((k-1)^2 (k-2))} % provided \eqn{k>2}. % The maximum likelihood estimator for the location parameter is % \code{min(y)}, i.e., the smallest response value. There are a number of special cases of the Pareto(IV) distribution. These include the Pareto(I), Pareto(II), Pareto(III), and Burr family of distributions. Denoting \eqn{PIV(a,b,g,s)} as the Pareto(IV) distribution, the Burr distribution \eqn{Burr(b,g,s)} is \eqn{PIV(a=0,b,1/g,s)}, the Pareto(III) distribution \eqn{PIII(a,b,g)} is \eqn{PIV(a,b,g,s=1)}, the Pareto(II) distribution \eqn{PII(a,b,s)} is \eqn{PIV(a,b,g=1,s)}, and the Pareto(I) distribution \eqn{PI(b,s)} is \eqn{PIV(b,b,g=1,s)}. Thus the Burr distribution can be fitted using the \code{\link{nloge}} link function and using the default \code{location=0} argument. The Pareto(I) distribution can be fitted using \code{\link{pareto1}} but there is a slight change in notation: \eqn{s=k} and \eqn{b=\alpha}{b=alpha}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Brazauskas, V. (2003) Information matrix for Pareto(IV), Burr, and related distributions. \emph{Comm. Statist. Theory and Methods} \bold{32}, 315--325. Arnold, B. C. (1983) \emph{Pareto Distributions}. Fairland, Maryland: International Cooperative Publishing House. } \author{ T. W. Yee } \note{ The \code{extra} slot of the fitted object has a component called \code{"location"} which stores the location parameter value(s). } %\section{Warning }{ % The Pareto(IV) distribution is very general, % for example, special cases include the Pareto(I), Pareto(II), % Pareto(III), and Burr family of distributions. Consequently, reasonably % good initial values are recommended, and convergence to a local solution % may occur. For this reason setting \code{trace=TRUE} is a good idea % for monitoring the convergence. % Large samples are ideally required to get reasonable results. % %} \seealso{ \code{\link{ParetoIV}}, \code{\link{pareto1}}, \code{\link{gpd}}. } \examples{ pdata <- data.frame(y = rparetoIV(2000, scale = exp(1), ineq = exp(-0.3), shape = exp(1))) \dontrun{par(mfrow = c(2, 1)); with(pdata, hist(y)); with(pdata, hist(log(y))) } fit <- vglm(y ~ 1, paretoIV, pdata, trace = TRUE) coef(fit, matrix = TRUE) Coef(fit) summary(fit) } \keyword{models} \keyword{regression} VGAM/man/pareto1.Rd0000644000176000001440000001437012136651105013456 0ustar ripleyusers\name{pareto1} \alias{pareto1} \alias{tpareto1} %- Also NEED an '\alias' for EACH other topic documented here. \title{Pareto and Truncated Pareto Distribution Family Functions } \description{ Estimates one of the parameters of the Pareto(I) distribution by maximum likelihood estimation. Also includes the upper truncated Pareto(I) distribution. } \usage{ pareto1(lshape = "loge", location = NULL) tpareto1(lower, upper, lshape = "loge", ishape = NULL, imethod = 1) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lshape}{ Parameter link function applied to the parameter \eqn{k}. See \code{\link{Links}} for more choices. A log link is the default because \eqn{k} is positive. } \item{lower, upper}{ Numeric. Lower and upper limits for the truncated Pareto distribution. Each must be positive and of length 1. They are called \eqn{\alpha}{alpha} and \eqn{U} below. } \item{ishape}{ Numeric. Optional initial value for the shape parameter. 
A \code{NULL} means a value is obtained internally. If failure to converge occurs, try specifying a value, e.g., 1 or 2. } \item{location}{ Numeric. The parameter \eqn{\alpha}{alpha} below. If the user inputs a number then it is assumed known with this value. The default means it is estimated by maximum likelihood estimation, which means \code{min(y)} where \code{y} is the response vector. } \item{imethod}{ An integer with value \code{1} or \code{2} which specifies the initialization method. If failure to converge occurs, try the other value, or else specify a value for \code{ishape}. } } \details{ A random variable \eqn{Y} has a Pareto distribution if \deqn{P[Y>y] = C / y^{k}}{% P[Y>y] = C / y^k} for some positive \eqn{k} and \eqn{C}. This model is important in many applications due to the power law probability tail, especially for large values of \eqn{y}. The Pareto distribution, which is widely used in economics, has a probability density function that can be written \deqn{f(y) = k \alpha^k / y^{k+1}}{% f(y) = k * alpha^k / y^(k+1)} for \eqn{0 < \alpha < y}{0< alpha < y} and \eqn{k>0}. The \eqn{\alpha}{alpha} is known as the location parameter, and \eqn{k} is known as the shape parameter. The mean of \eqn{Y} is \eqn{\alpha k/(k-1)}{alpha*k/(k-1)} provided \eqn{k > 1}. Its variance is \eqn{\alpha^2 k /((k-1)^2 (k-2))}{alpha^2 k /((k-1)^2 (k-2))} provided \eqn{k > 2}. The upper truncated Pareto distribution has a probability density function that can be written \deqn{f(y) = k \alpha^k / [y^{k+1} (1-(\alpha/U)^k)]}{% f(y) = k * alpha^k / [y^(k+1) (1-(\alpha/U)^k)]} for \eqn{0 < \alpha < y < U < \infty}{0< alpha < y < U < Inf} and \eqn{k>0}. Possibly, better names for \eqn{k} are the \emph{index} and \emph{tail} parameters. Here, \eqn{\alpha}{alpha} and \eqn{U} are known. The mean of \eqn{Y} is \eqn{k \alpha^k (U^{1-k}-\alpha^{1-k}) / [(1-k)(1-(\alpha/U)^k)]}{ k * alpha^k * (U^(1-k)-alpha^(1-k)) / ((1-k) * (1-(alpha/U)^k))}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Evans, M., Hastings, N. and Peacock, B. (2000) \emph{Statistical Distributions}, New York: Wiley-Interscience, Third edition. Aban, I. B., Meerschaert, M. M. and Panorska, A. K. (2006) Parameter estimation for the truncated Pareto distribution, \emph{Journal of the American Statistical Association}, \bold{101}(473), 270--277. } \author{ T. W. Yee } \note{ Outside of economics, the Pareto distribution is known as the Bradford distribution. For \code{pareto1}, if the estimate of \eqn{k} is less than or equal to unity then the fitted values will be \code{NA}s. Also, \code{pareto1} fits the Pareto(I) distribution. See \code{\link{paretoIV}} for the more general Pareto(IV/III/II) distributions, but there is a slight change in notation: \eqn{s = k} and \eqn{b=\alpha}{b = alpha}. In some applications the Pareto law is truncated by a natural upper bound on the probability tail. The upper truncated Pareto distribution has three parameters (called \eqn{\alpha}{alpha}, \eqn{U} and \eqn{k} here) but the family function \code{tpareto1} estimates only \eqn{k}. With known lower and upper limits, the ML estimator of \eqn{k} has the usual properties of MLEs. Aban et al. (2006) discusses other inferential details.
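For example, the mean formula for the upper truncated Pareto given in the Details section can be checked informally by simulation; the following is only a sketch, and the parameter values chosen are arbitrary.
\preformatted{
alpha <- 2; U <- 8; kay <- exp(2)  # Arbitrary values
# Theoretical mean, from the formula in the Details section:
kay * alpha^kay * (U^(1-kay) - alpha^(1-kay)) /
  ((1-kay) * (1 - (alpha/U)^kay))
# Simulated mean; should be close to the value above:
mean(rtpareto(n = 1e5, lower = alpha, upper = U, shape = kay))
}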
} \section{Warning }{ The usual or unbounded Pareto distribution has two parameters (called \eqn{\alpha}{alpha} and \eqn{k} here) but the family function \code{pareto1} estimates only \eqn{k} using iteratively reweighted least squares. The MLE of the \eqn{\alpha}{alpha} parameter lies on the boundary and is \code{min(y)} where \code{y} is the response. Consequently, using the default argument values, the standard errors are incorrect when one does a \code{summary} on the fitted object. If the user inputs a value for \eqn{\alpha}{alpha} (via the \code{location} argument) then it is assumed known with this value and then \code{summary} on the fitted object should be correct. Numerical problems may occur for small \eqn{k}, e.g., \eqn{k < 1}. } \seealso{ \code{\link{Pareto}}, \code{\link{Tpareto}}, \code{\link{paretoIV}}, \code{\link{gpd}}. } \examples{ alpha <- 2; kay <- exp(3) pdat <- data.frame(y = rpareto(n = 1000, location = alpha, shape = kay)) fit <- vglm(y ~ 1, pareto1, pdat, trace = TRUE) fit@extra # The estimate of alpha is here head(fitted(fit)) with(pdat, mean(y)) coef(fit, matrix = TRUE) summary(fit) # Standard errors are incorrect!! # Here, alpha is assumed known fit2 <- vglm(y ~ 1, pareto1(location = alpha), pdat, trace = TRUE) fit2@extra # alpha stored here head(fitted(fit2)) coef(fit2, matrix = TRUE) summary(fit2) # Standard errors are okay # Upper truncated Pareto distribution lower <- 2; upper <- 8; kay <- exp(2) pdat3 <- data.frame(y = rtpareto(n = 100, lower = lower, upper = upper, shape = kay)) fit3 <- vglm(y ~ 1, tpareto1(lower, upper), pdat3, trace = TRUE) coef(fit3, matrix = TRUE) c(fit3@misc$lower, fit3@misc$upper) } \keyword{models} \keyword{regression} % Package lmomco fits generalized pareto (three parameter) using % method of L-moments. VGAM/man/paralogisticUC.Rd0000644000176000001440000000350612136651105015013 0ustar ripleyusers\name{Paralogistic} \alias{Paralogistic} \alias{dparalogistic} \alias{pparalogistic} \alias{qparalogistic} \alias{rparalogistic} \title{The Paralogistic Distribution} \description{ Density, distribution function, quantile function and random generation for the paralogistic distribution with shape parameter \code{shape1.a} and scale parameter \code{scale}. } \usage{ dparalogistic(x, shape1.a, scale = 1, log = FALSE) pparalogistic(q, shape1.a, scale = 1) qparalogistic(p, shape1.a, scale = 1) rparalogistic(n, shape1.a, scale = 1) } \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. If \code{length(n) > 1}, the length is taken to be the number required.} \item{shape1.a}{shape parameter.} \item{scale}{scale parameter.} \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the density is returned. } } \value{ \code{dparalogistic} gives the density, \code{pparalogistic} gives the distribution function, \code{qparalogistic} gives the quantile function, and \code{rparalogistic} generates random deviates. } \references{ Kleiber, C. and Kotz, S. (2003) \emph{Statistical Size Distributions in Economics and Actuarial Sciences}, Hoboken, NJ, USA: Wiley-Interscience. } \author{ T. W. Yee } \details{ See \code{\link{paralogistic}}, which is the \pkg{VGAM} family function for estimating the parameters by maximum likelihood estimation. } \note{ The paralogistic distribution is a special case of the 4-parameter generalized beta II distribution. } \seealso{ \code{\link{paralogistic}}, \code{\link{genbetaII}}.
} \examples{ pdata <- data.frame(y = rparalogistic(n = 3000, exp(1), exp(2))) fit <- vglm(y ~ 1, paralogistic(ishape1.a = 2.1), pdata, trace = TRUE) coef(fit, matrix = TRUE) Coef(fit) } \keyword{distribution} VGAM/man/paralogistic.Rd0000644000176000001440000000527312136651105014566 0ustar ripleyusers\name{paralogistic} \alias{paralogistic} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Paralogistic Distribution Family Function } \description{ Maximum likelihood estimation of the 2-parameter paralogistic distribution. } \usage{ paralogistic(lshape1.a = "loge", lscale = "loge", ishape1.a = 2, iscale = NULL, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lshape1.a, lscale}{ Parameter link functions applied to the (positive) shape parameter \code{a} and (positive) scale parameter \code{scale}. See \code{\link{Links}} for more choices. } \item{ishape1.a, iscale}{ Optional initial values for \code{a} and \code{scale}. } \item{zero}{ An integer-valued vector specifying which linear/additive predictors are modelled as intercepts only. Here, the values must be from the set \{1,2\} which correspond to \code{a}, \code{scale}, respectively. } } \details{ The 2-parameter paralogistic distribution is the 4-parameter generalized beta II distribution with shape parameter \eqn{p=1} and \eqn{a=q}. It is the 3-parameter Singh-Maddala distribution with \eqn{a=q}. More details can be found in Kleiber and Kotz (2003). The 2-parameter paralogistic has density \deqn{f(y) = a^2 y^{a-1} / [b^a \{1 + (y/b)^a\}^{1+a}]}{% f(y) = a^2 y^(a-1) / [b^a (1 + (y/b)^a)^(1+a)]} for \eqn{a > 0}, \eqn{b > 0}, \eqn{y \geq 0}{y >= 0}. Here, \eqn{b} is the scale parameter \code{scale}, and \eqn{a} is the shape parameter. The mean is \deqn{E(Y) = b \, \Gamma(1 + 1/a) \, \Gamma(a - 1/a) / \Gamma(a)}{% E(Y) = b gamma(1 + 1/a) gamma(a - 1/a) / gamma(a)} provided \eqn{a > 1}; these are returned as the fitted values. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Kleiber, C. and Kotz, S. (2003) \emph{Statistical Size Distributions in Economics and Actuarial Sciences}, Hoboken, NJ, USA: Wiley-Interscience. } \author{ T. W. Yee } \note{ See the note in \code{\link{genbetaII}}. } \seealso{ \code{\link{Paralogistic}}, \code{\link{genbetaII}}, \code{\link{betaII}}, \code{\link{dagum}}, \code{\link{fisk}}, \code{\link{invlomax}}, \code{\link{lomax}}, \code{\link{invparalogistic}}. } \examples{ pdata <- data.frame(y = rparalogistic(n = 3000, exp(1), exp(2))) fit <- vglm(y ~ 1, paralogistic, pdata, trace = TRUE) fit <- vglm(y ~ 1, paralogistic(ishape1.a = 2.3, iscale = 7), pdata, trace = TRUE, epsilon = 1e-8) coef(fit, matrix = TRUE) Coef(fit) summary(fit) } \keyword{models} \keyword{regression} VGAM/man/oxtemp.Rd0000644000176000001440000000112212136651105013406 0ustar ripleyusers\name{oxtemp} \alias{oxtemp} \docType{data} \title{ Oxford Temperature Data } \description{ Annual maximum temperatures collected at Oxford, UK. } \usage{data(oxtemp)} \format{ A data frame with 80 observations on the following 2 variables. \describe{ \item{maxtemp}{Annual maximum temperatures (in degrees Fahrenheit). } \item{year}{The values 1901 to 1980. } } } \details{ The data were collected from 1901 to 1980. } % zz: \source{ Unknown. 
} % \references{ % } \examples{ \dontrun{ fit <- vglm(maxtemp ~ 1, egev, data = oxtemp, trace = TRUE) } } \keyword{datasets} VGAM/man/ordpoisson.Rd0000644000176000001440000001230712136651105014300 0ustar ripleyusers\name{ordpoisson} \alias{ordpoisson} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Ordinal Poisson Family Function } \description{ Fits a Poisson regression where the response is ordinal (the Poisson counts are grouped between known cutpoints). } \usage{ ordpoisson(cutpoints, countdata = FALSE, NOS = NULL, Levels = NULL, init.mu = NULL, parallel = FALSE, zero = NULL, link = "loge") } %- maybe also 'usage' for other objects documented here. \arguments{ \item{cutpoints}{ Numeric. The cutpoints, \eqn{K_l}. These must be non-negative integers. \code{Inf} values may be included. See below for further details. } \item{countdata}{ Logical. Is the response (LHS of formula) in count-data format? If not then the response is a matrix or vector with values \code{1}, \code{2}, \ldots, \code{L}, say, where \code{L} is the number of levels. Such input can be generated with \code{\link[base]{cut}} with argument \code{labels = FALSE}. If \code{countdata = TRUE} then the response is expected to be in the same format as \code{depvar(fit)} where \code{fit} is a fitted model with \code{ordpoisson} as the \pkg{VGAM} family function. That is, the response is a matrix of counts with \code{L} columns (if \code{NOS = 1}). } \item{NOS}{ Integer. The number of species, or more generally, the number of response random variates. This argument must be specified when \code{countdata = TRUE}. Usually \code{NOS = 1}. } \item{Levels}{ Integer vector, recycled to length \code{NOS} if necessary. The number of levels for each response random variate. This argument should agree with \code{cutpoints}. This argument must be specified when \code{countdata = TRUE}. } \item{init.mu}{ Numeric. Initial values for the means of the Poisson regressions. Recycled to length \code{NOS} if necessary. Use this argument if the default initial values fail (the default is to compute an initial value internally). } \item{parallel, zero, link}{ See \code{\link{poissonff}}. } } \details{ This \pkg{VGAM} family function uses maximum likelihood estimation (Fisher scoring) to fit a Poisson regression to each column of a matrix response. The data, however, are ordinal, being obtained by grouping the counts at known integer cutpoints. Here, \eqn{l=1,\ldots,L} where \eqn{L} (\eqn{L \geq 2}{L >= 2}) is the number of levels. In more detail, let \eqn{Y^*=l} if \eqn{K_{l-1} < Y \leq K_{l}}{K_{l-1} < Y <= K_{l}} where the \eqn{K_l} are the cutpoints. We have \eqn{K_0=-\infty}{K_0=-Inf} and \eqn{K_L=\infty}{K_L=Inf}. The response for this family function corresponds to \eqn{Y^*} but we are really interested in the Poisson regression of \eqn{Y}. If \code{NOS=1} then the argument \code{cutpoints} is a vector \eqn{(K_1,K_2,\ldots,K_L)} where the last value (\code{Inf}) is optional. If \code{NOS>1} then the vector should have \code{NOS-1} \code{Inf} values separating the cutpoints. For example, if there are \code{NOS=3} responses, then something like \code{ordpoisson(cut = c(0, 5, 10, Inf, 20, 30, Inf, 0, 10, 40, Inf))} is valid. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Yee, T. W. (2012) \emph{Ordinal ordination with normalizing link functions for count data}, (in preparation). } \author{ Thomas W.
Yee } \note{ Sometimes there are no observations between two cutpoints. If so, the arguments \code{Levels} and \code{NOS} need to be specified too. See below for an example. } \section{Warning }{ The input requires care as little to no checking is done. If \code{fit} is the fitted object, have a look at \code{fit@extra} and \code{depvar(fit)} to check. } \seealso{ \code{\link{poissonff}}, \code{\link{polf}}, \code{\link[base:factor]{ordered}}. } \examples{ set.seed(123) # Example 1 x2 <- runif(n <- 1000); x3 <- runif(n) mymu <- exp(3 - 1 * x2 + 2 * x3) y1 <- rpois(n, lambda = mymu) cutpts <- c(-Inf, 20, 30, Inf) fcutpts <- cutpts[is.finite(cutpts)] # finite cutpoints ystar <- cut(y1, breaks = cutpts, labels = FALSE) \dontrun{ plot(x2, x3, col = ystar, pch = as.character(ystar)) } table(ystar) / sum(table(ystar)) fit <- vglm(ystar ~ x2 + x3, fam = ordpoisson(cutpoi = fcutpts)) head(depvar(fit)) # This can be input if countdata = TRUE head(fitted(fit)) head(predict(fit)) coef(fit, matrix = TRUE) fit@extra # Example 2: multivariate and there are no obsns between some cutpoints cutpts2 <- c(-Inf, 0, 9, 10, 20, 70, 200, 201, Inf) fcutpts2 <- cutpts2[is.finite(cutpts2)] # finite cutpoints y2 <- rpois(n, lambda = mymu) # Same model as y1 ystar2 <- cut(y2, breaks = cutpts2, labels = FALSE) table(ystar2) / sum(table(ystar2)) fit <- vglm(cbind(ystar,ystar2) ~ x2 + x3, fam = ordpoisson(cutpoi = c(fcutpts,Inf,fcutpts2,Inf), Levels = c(length(fcutpts)+1,length(fcutpts2)+1), parallel = TRUE), trace = TRUE) coef(fit, matrix = TRUE) fit@extra constraints(fit) summary(depvar(fit)) # Some columns have all zeros } \keyword{math} \keyword{models} \keyword{regression} VGAM/man/olym.Rd0000644000176000001440000000437312136651105013065 0ustar ripleyusers\name{olympics} \alias{olym08} \alias{olym12} \docType{data} \title{ 2008 and 2012 Summer Olympic Final Medal Count Data} \description{ Final medal count, by country, for the Summer 2008 and 2012 Olympic Games. } \usage{ data(olym08) data(olym12) } \format{ A data frame with 87 or 85 observations on the following 6 variables. \describe{ \item{\code{rank}}{a numeric vector, overall ranking of the countries. } \item{\code{country}}{a factor. } \item{\code{gold}}{a numeric vector, number of gold medals. } \item{\code{silver}}{a numeric vector, number of silver medals. } \item{\code{bronze}}{a numeric vector, number of bronze medals. } \item{\code{totalmedal}}{a numeric vector, total number of medals. } % \item{\code{country}}{a factor. character vector. } } } \details{ The events were held during (i) August 8--24, 2008, in Beijing; and (ii) 27 July--12 August, 2012, in London. % This is a simple two-way contingency table of counts. } \source{ \url{http://www.associatedcontent.com/article/979484/2008_summer_olympic_medal_count_total.html}, \url{http://www.london2012.com/medals/medal-count/}. } \references{ The official English website was/is \url{http://en.beijing2008.cn} and \url{http://www.london2012.com}. Help from Viet Hoang Quoc is gratefully acknowledged. } \seealso{ \code{\link[VGAM]{grc}}. } \examples{ summary(olym08) summary(olym12) ## maybe str(olym08) ; plot(olym08) ... \dontrun{ par(mfrow = c(1, 2)) myylim <- c(0, 55) with(head(olym08, n = 8), barplot(rbind(gold, silver, bronze), col = c("gold", "grey", "brown"), # No "silver" or "bronze"! 
names.arg = country, cex.names = 0.5, ylim = myylim, beside = TRUE, main = "2008 Summer Olympic Final Medal Count", ylab = "Medal count", las = 1, sub = "Top 8 countries; 'gold'=gold, 'grey'=silver, 'brown'=bronze")) with(head(olym12, n = 8), barplot(rbind(gold, silver, bronze), col = c("gold", "grey", "brown"), # No "silver" or "bronze"! names.arg = country, cex.names = 0.5, ylim = myylim, beside = TRUE, main = "2012 Summer Olympic Final Medal Count", ylab = "Medal count", las = 1, sub = "Top 8 countries; 'gold'=gold, 'grey'=silver, 'brown'=bronze")) } } \keyword{datasets} VGAM/man/notdocumentedyet.Rd0000644000176000001440000003024012136651105015467 0ustar ripleyusers\name{notdocumentedyet} \alias{notdocumentedyet} % % % 201302; % \alias{pgamma.deriv.unscaled} % \alias{pgamma.deriv} % \alias{digami} % % 201212; \alias{binom2.rho.ss} % % 20121105; \alias{N.hat.posbernoulli} \alias{aux.posbernoulli} \alias{posbern.aux} \alias{Rank} \alias{Rank.rrvglm} \alias{Rank.qrrvglm} \alias{Rank.cao} % 20121015; delete this later %\alias{huggins91.old} % % 20120912 \alias{arwz2wz} % % 20120813 New links (no earg) \alias{Dtheta.deta} \alias{D2theta.deta2} \alias{Eta2theta} \alias{Theta2eta} \alias{link2list} \alias{Namesof} % % % % % 20120514, 20120528, \alias{w.wz.merge} \alias{w.y.check} \alias{vweighted.mean.default} % % 20120418 \alias{nvar_vlm} % 20120310 %\alias{hatvalues} %\alias{hatvalues.vlm} % % % 20120307 \alias{npred} \alias{npred.vlm} % % % % 20120215 % \alias{print.vglmff} \alias{show.vglmff} % \alias{print.vfamily} % \alias{show.Coef.rrar} % \alias{family.vglm} \alias{show.vgam} \alias{show.vglm} \alias{show.vlm} % \alias{print.vgam} % \alias{print.vglm} % \alias{print.vlm} % \alias{print.vlm.wfit} % % % % % 20120112 \alias{AIC} \alias{coef} \alias{logLik} \alias{plot} \alias{show.summary.vglm} \alias{vcov} \alias{vcovvlm} \alias{VGAMenv} \alias{nobs} \alias{show.Coef.cao} \alias{show.Coef.qrrvglm} \alias{show.Coef.rrvglm} \alias{show.rrvglm} \alias{show.summary.cao} % \alias{show.summary.lms} \alias{show.summary.qrrvglm} % \alias{show.summary.rc.exponential} \alias{show.summary.rrvglm} \alias{show.summary.uqo} \alias{show.summary.vgam} \alias{show.summary.vglm} \alias{show.summary.vlm} \alias{show.uqo} \alias{show.vanova} \alias{show.vsmooth.spline} % % % % % % % % % % 20111224; lrtest and waldtest stuff %\alias{lrtest} %\alias{lrtest_vglm} %\alias{print_anova} \alias{update_default} \alias{update_formula} % %\alias{waldtest} %\alias{waldtest_vglm} %\alias{waldtest_default} %\alias{waldtest_formula} % % % % % 20110202; 20110317; James Lauder work %\alias{dexpgeom} %\alias{pexpgeom} %\alias{qexpgeom} %\alias{rexpgeom} %\alias{expgeometric} % %\alias{dweibull3} %\alias{pweibull3} %\alias{qweibull3} %\alias{rweibull3} %\alias{weibull3} % % % % 20110321; misc. datasets. %\alias{fibre1.5} %\alias{fibre15} % % % 20120206; for RR-NB, or rrn.tex. \alias{plota21} % % % 20110202; for Melbourne; these include datasets. 
\alias{azprocedure} \alias{Confint.rrnb} \alias{Confint.nb1} %\alias{gala} \alias{mmt} % % % %20111128; basics \alias{is.empty.list} % % % % %20101222; Alfian work %\alias{Rcim} % Has been written %\alias{plotrcim0} % Has been written %\alias{moffset} % Has been written % \alias{Qvar} \alias{plotqvar} \alias{depvar.vlm} % % % % %20110411 \alias{dbinorm} % %20090330 \alias{dclogloglap} \alias{dlogitlap} \alias{dprobitlap} \alias{logitlaplace1.control} \alias{loglaplace1.control} \alias{pclogloglap} \alias{plogitlap} \alias{pprobitlap} \alias{qclogloglap} \alias{qlogitlap} \alias{qprobitlap} \alias{rclogloglap} \alias{rlogitlap} \alias{rprobitlap} % % % \alias{A1A2A3} \alias{AAaa.nohw} %\alias{AIC} %\alias{AIC.qrrvglm} %\alias{AIC.rrvglm} %\alias{AIC.vgam} %\alias{AIC.vglm} %\alias{AIC.vlm} % \alias{Build.terms} \alias{Build.terms.vlm} \alias{Coef.cao} \alias{Coefficients} \alias{Cut} \alias{Deviance.categorical.data.vgam} \alias{InverseBrat} \alias{Max.Coef.qrrvglm} \alias{Max.qrrvglm} \alias{Opt.Coef.qrrvglm} \alias{Opt.qrrvglm} % \alias{R170.or.later} \alias{Tol.Coef.qrrvglm} \alias{Tol.Coef.uqo} \alias{Tol.qrrvglm} \alias{Tol.uqo} \alias{a2m} \alias{abbott} % \alias{acat.deriv} % \alias{add.arg} % \alias{add.constraints} % \alias{add.hookey} \alias{add1} \alias{add1.vgam} \alias{add1.vglm} % \alias{adjust.Dmat.expression} \alias{alaplace1.control} \alias{alaplace2.control} \alias{alaplace3.control} % \alias{alias.vgam} % \alias{alias.vglm} \alias{anova.vgam} \alias{anova.vglm} % \alias{as.vanova} % \alias{attrassign} % \alias{attrassigndefault} % \alias{attrassignlm} % \alias{beta4} % \alias{betaffqn} \alias{biplot} \alias{biplot.qrrvglm} % \alias{block.diag} % \alias{borel.tanner} \alias{bs} % \alias{callcaof} % \alias{callcqof} % \alias{calldcaof} % \alias{calldcqof} % \alias{callduqof} % \alias{calluqof} % \alias{canonical.Blist} % \alias{cao.fit} \alias{car.all} \alias{care.exp} \alias{ccoef.Coef.cao} \alias{ccoef.Coef.qrrvglm} \alias{ccoef.cao} \alias{ccoef.qrrvglm} \alias{cdf} \alias{cdf.lms.bcg} \alias{cdf.lms.bcn} \alias{cdf.lms.yjn} \alias{cdf.vglm} % \alias{cm.nointercept.vgam} % \alias{cm.vgam} % \alias{cm.zero.vgam} \alias{coefficients} \alias{coefqrrvglm} \alias{coefvlm} \alias{coefvsmooth.spline} \alias{coefvsmooth.spline.fit} % \alias{constraints.vlm} % \alias{cqo.fit} \alias{d2theta.deta2} % \alias{dcda.fast} % \alias{dctda.fast.only} \alias{deplot} \alias{deplot.default} \alias{deplot.lms.bcg} \alias{deplot.lms.bcn} \alias{deplot.lms.yjn} \alias{deplot.lms.yjn2} \alias{deplot.vglm} \alias{deviance} \alias{deviance.uqo} \alias{deviance.vglm} \alias{deviance.vlm} %\alias{df.residual} %\alias{df.residual_vlm} \alias{dimm} % \alias{dneg.binomial} \alias{dnorm2} \alias{dotC} \alias{dotFortran} % \alias{dpsi.dlambda.yjn} % \alias{drop1.vgam} % \alias{drop1.vglm} \alias{dtheta.deta} % \alias{dy.dyj} % \alias{dyj.dy} \alias{effects} % \alias{effects.vgam} % \alias{effects.vlm} % \alias{eifun} % \alias{eijfun} \alias{erfc} \alias{eta2theta} %\alias{explink} % \alias{extract.arg} %\alias{felix} %\alias{dfelix} \alias{fff.control} \alias{fill2} \alias{fill3} \alias{fitted} \alias{fitted.values} \alias{fitted.values.uqo} \alias{fittedvsmooth.spline} % \alias{variable.names} \alias{variable.namesvlm} \alias{variable.namesrrvglm} \alias{case.names} \alias{case.namesvlm} % \alias{formula} \alias{formulavlm} \alias{formulaNA.VGAM} \alias{gammaff} % \alias{get.arg} % \alias{get.rrvglm.se1} % \alias{get.rrvglm.se2} % \alias{getind} % \alias{gh.weight.yjn.11} % \alias{gh.weight.yjn.12} % 
\alias{gh.weight.yjn.13} % \alias{glag.weight.yjn.11} % \alias{glag.weight.yjn.12} % \alias{glag.weight.yjn.13} % \alias{gleg.weight.yjn.11} % \alias{gleg.weight.yjn.12} % \alias{gleg.weight.yjn.13} \alias{glm} % \alias{hypersecant} % \alias{hypersecant.1} % \alias{ima} % \alias{interleave.VGAM} % \alias{invbinomial} \alias{inverse.gaussianff} \alias{is.Numeric} \alias{is.bell} \alias{is.bell.cao} \alias{is.bell.qrrvglm} \alias{is.bell.rrvglm} \alias{is.bell.vlm} \alias{Kayfun.studentt} % \alias{is.linear.term} % \alias{jitteruqo} \alias{lm} \alias{lm2qrrvlm.model.matrix} \alias{lm2vlm.model.matrix} \alias{vlm2lm.model.matrix} \alias{lms.bcg.control} \alias{lms.bcn.control} \alias{lms.yjn.control} \alias{lmscreg.control} \alias{logLik.vlm} % \alias{lv.Coef.cao} 20090505 \alias{latvar.Coef.qrrvglm} \alias{lv.cao} \alias{latvar.qrrvglm} \alias{lvplot.cao} \alias{m2adefault} \alias{m2avglm} % \alias{matrix.power} \alias{mbesselI0} \alias{mix2exp.control} \alias{mix2normal1.control} \alias{mix2poisson.control} \alias{model.matrix.qrrvglm} % \alias{mux11} % \alias{mux111} % \alias{mux15} % \alias{mux2} % \alias{mux22} % \alias{mux5} % \alias{mux55} % \alias{mux7} % \alias{mux9} % \alias{my.dbinom} \alias{my1} \alias{my2} \alias{namesof} % \alias{natural.ig} % \alias{neg.binomial} % \alias{neg.binomial.k} % \alias{negbin.ab} % \alias{new.assign} \alias{nlminbcontrol} \alias{nbolf2} \alias{nobs.vlm} \alias{nvar} \alias{nvar.vlm} \alias{nvar.vgam} \alias{nvar.rrvglm} \alias{nvar.qrrvglm} \alias{nvar.cao} \alias{nvar.rcim} \alias{ns} % \alias{num.deriv.rrr} \alias{persp} \alias{persp.cao} \alias{plot.cao} \alias{plotpreplotvgam} \alias{plotvglm} \alias{plotvlm} \alias{plotvsmooth.spline} % \alias{pnorm2} done 20120910 % \alias{poissonqn} \alias{poly} \alias{predict} \alias{predict.cao} \alias{predict.glm} \alias{predict.lm} \alias{predict.mlm} % \alias{predictqrrvglm} \alias{predict.rrvglm} \alias{predict.uqo} \alias{predict.vgam} \alias{predict.vlm} \alias{predictcao} \alias{predictors} \alias{predictors.vglm} \alias{predictvsmooth.spline} \alias{predictvsmooth.spline.fit} % \alias{preplotvgam} \alias{print} \alias{procVec} \alias{negzero.expression} \alias{process.binomial2.data.vgam} \alias{process.categorical.data.vgam} % \alias{process.constraints} % \alias{proj.vgam} % \alias{proj.vglm} \alias{put.caption} % \alias{pweights} % \alias{qrrvglm.xprod} \alias{qtplot} \alias{qtplot.default} \alias{qtplot.lms.bcg} \alias{qtplot.lms.bcn} \alias{explot.lms.bcn} \alias{qtplot.lms.yjn} \alias{qtplot.lms.yjn2} \alias{qtplot.vextremes} \alias{qtplot.vglm} \alias{quasiff} % \alias{rainfall} % \alias{remove.arg} % \alias{replace.constraints} \alias{resid} \alias{residuals} % \alias{residualsqrrvglm} % \alias{residualsuqo} % \alias{residualsvglm} % \alias{residualsvlm} % \alias{residvsmooth.spline} \alias{rlplot} \alias{rlplot.vextremes} \alias{rlplot.vglm} % \alias{rrar.Ak1} % \alias{rrar.Ci} % \alias{rrar.Di} % \alias{rrar.Ht} % \alias{rrar.Mi} % \alias{rrar.Mmat} % \alias{rrar.UU} % \alias{rrar.Ut} % \alias{rrar.Wmat} \alias{rrar.control} % \alias{rrr.alternating.expression} % \alias{rrr.deriv.gradient.fast} % \alias{rrr.deriv.rss} % \alias{rrr.derivC.rss} % \alias{rrr.derivative.expression} % \alias{rrr.end.expression} % \alias{rrr.init.expression} % \alias{rrr.normalize} \alias{rrvglm.control.Gaussian} % \alias{rrvglm.fit} \alias{ResSS.vgam} \alias{s.vam} \alias{scale.default} \alias{simple.exponential} \alias{simple.poisson} \alias{size.binomial} \alias{stdze1} \alias{stdze2} % \alias{step.vgam} % 
\alias{step.vglm} % \alias{subconstraints} \alias{summary.cao} \alias{summary.grc} \alias{summary.lms} \alias{summary.qrrvglm} \alias{summary.rc.exponential} \alias{summaryrcim} \alias{summary.rrvglm} \alias{summary.uqo} \alias{summaryvgam} \alias{summaryvglm} \alias{summaryvlm} % \alias{tapplymat1} \alias{terms.vlm} \alias{theta2eta} % \alias{trivial.constraints} % \alias{update.vgam} % \alias{update.vglm} % \alias{uqo.fit} % \alias{valid.vglmff} % \alias{valid.vknotl2} % \alias{valt} % \alias{valt.1iter} % \alias{valt.2iter} % \alias{valt.control} % \alias{varassign} % \alias{vbacksub} % \alias{vchol} % \alias{vchol.greenstadt} \alias{vcontrol.expression} % \alias{vcovdefault} % \alias{vcovqrrvglm} \alias{vcovrrvglm} % \alias{vcovvlm} % \alias{veigen} % \alias{vellipse} % \alias{vforsub} % \alias{vgam.fit} % \alias{vgam.match} % \alias{vgam.nlchisq} % \alias{vgety} \alias{vglm.fit} \alias{vglm.garma.control} \alias{vglm.multinomial.control} \alias{vglm.multinomial.deviance.control} \alias{dmultinomial} \alias{vglm.vcategorical.control} % \alias{vindex} % \alias{vlabel} \alias{vlm} \alias{vlm.control} % \alias{vlm.wfit} \alias{vnonlinear.control} \alias{vplot} \alias{vplot.default} \alias{vplot.factor} \alias{vplot.list} \alias{vplot.matrix} \alias{vplot.numeric} \alias{vvplot.factor} \alias{weights} \alias{Wr1} \alias{Wr2} % \alias{wweighted.mean} \alias{wweights} % \alias{yformat} % \alias{ylim.scale} % % % \alias{Coef.uqo-class} \alias{cao-class} \alias{rcim0-class} \alias{rcim-class} \alias{grc-class} \alias{qrrvglm-class} \alias{summary.qrrvglm-class} \alias{summary.rrvglm-class} \alias{summary.vgam-class} \alias{summary.vglm-class} \alias{summary.vlm-class} %%% 20101216 \alias{summary.rcim-class} %\alias{summary.rcim-class} %\alias{summaryrcim-class} \alias{uqo-class} \alias{vcov.qrrvglm-class} \alias{vlm-class} \alias{vlmsmall-class} \alias{vsmooth.spline-class} \alias{vsmooth.spline.fit-class} \alias{Coef.cao-class} \alias{summary.cao-class} % % %- Also NEED an '\alias' for EACH other topic documented here. \title{ Undocumented and Internally Used Functions and Classes } \description{ Those currently undocumented and internally used functions are aliased to this help file. Ditto for some classes. } %\usage{ %normal1(lmean = "identity", lsd = "loge", zero = NULL) %} %- maybe also 'usage' for other objects documented here. %\arguments{ % \item{lmean}{ % Link function applied to the mean. % See \code{\link{Links}} for more choices. % % } %} \details{ In the \pkg{VGAM} package there are currently many objects/methods/classes which are currently internal and/or undocumented. The help file suppresses the warnings when the package is 'CHECK'ed. } \value{ Each objects/methods/classes may or may not have its own individual value. These will be documented over time. } %\references{ %} \author{ T. W. Yee } %\note{ % %} %\seealso{ % \code{gaussianff}, % \code{\link{posnormal1}}. %} %\examples{ %} \keyword{models} \keyword{regression} \keyword{internal} VGAM/man/normal1.Rd0000644000176000001440000001000312136651105013441 0ustar ripleyusers\name{normal1} \alias{normal1} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Univariate Normal Distribution } \description{ Maximum likelihood estimation of the two parameters of a univariate normal distribution. 
} \usage{ normal1(lmean = "identity", lsd = "loge", lvar = "loge", var.arg = FALSE, imethod = 1, isd = NULL, parallel = FALSE, apply.parint = FALSE, smallno = 1e-05, zero = -2) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lmean, lsd, lvar}{ Link functions applied to the mean and standard deviation/variance. See \code{\link{Links}} for more choices. Because they are positive quantities, a log link is the default for both the standard deviation and the variance (see \code{var.arg}). } % \item{emean, esd, evar}{ % List. Extra argument for the links. % See \code{earg} in \code{\link{Links}} for general information. % emean = list(), esd = list(), evar = list(), % } \item{var.arg}{ Logical. If \code{TRUE} then the second parameter is the variance and \code{lsd} and \code{esd} are ignored, else the standard deviation is used and \code{lvar} and \code{evar} are ignored. } \item{smallno}{ Numeric, positive but close to 0. Used specifically for quasi-variances; if the link for the mean is \code{\link{explink}} then any non-positive value of \code{eta} is replaced by this quantity (hopefully, temporarily and only during early iterations). } \item{imethod, parallel, isd, apply.parint, zero}{ See \code{\link{CommonVGAMffArguments}} for more information. If \code{lmean = loge} then try \code{imethod = 2}. Argument \code{apply.parint} refers to whether the parallelism constraint is applied to the intercept too. } } \details{ This fits a linear model (LM) as the first linear/additive predictor. So, by default, this is just the mean. By default, the log of the standard deviation is the second linear/additive predictor. The Fisher information matrix is diagonal. This \pkg{VGAM} family function can handle multiple responses. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Evans, M., Hastings, N. and Peacock, B. (2000) \emph{Statistical Distributions}, New York: Wiley-Interscience, Third edition. } \author{ T. W. Yee } \note{ Yet to do: allow an argument such as \code{eq.sd} that enables the standard deviations to be the same. } \seealso{ \code{\link{gaussianff}}, \code{\link{posnormal1}}, \code{\link{mix2normal1}}, % \code{\link{normal1sum1}}, \code{\link{Qvar}}, \code{\link{tobit}}, \code{\link{cennormal1}}, \code{\link{fnormal1}}, \code{\link{skewnormal1}}, \code{\link{dcennormal1}}, \code{\link{SUR}}, \code{\link{huber2}}, \code{\link{studentt}}, \code{\link{binormal}}, \code{\link[stats:Normal]{dnorm}}. } \examples{ ndata <- data.frame(x2 = rnorm(nn <- 200)) ndata <- transform(ndata, y1 = rnorm(nn, mean = 1-3*x2, sd = exp(1+0.2*x2)), y2 = rnorm(nn, mean = 1+2*x2, sd = exp(1+ 2*x2)^0.5), y3 = rnorm(nn, mean = 1+2*x2, sd = exp(1+ 2*x2)^0.5)) fit1 <- vglm(y1 ~ x2, normal1(zero = NULL), ndata, trace = TRUE) coef(fit1, matrix = TRUE) fit2 <- vglm(cbind(y2, y3) ~ x2, data = ndata, trace = TRUE, normal1(var = TRUE, parallel = TRUE, apply.parint = TRUE, zero = NULL)) coef(fit2, matrix = TRUE) # Generate data from N(mu = theta = 10, sigma = theta) and estimate theta.
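# Note: normal1(lsd = "identity") below puts the sd itself (not its log)
# on the second linear/additive predictor, and the constraint matrix
# rbind(1, 1) ties both intercept-only predictors to the one coefficient,
# theta, so that mu = sd = theta is estimated as a single parameter.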
theta <- 10 ndata <- data.frame(y = rnorm(100, m = theta, sd = theta)) fit3 <- vglm(y ~ 1, normal1(lsd = "identity"), ndata, constraints = list("(Intercept)" = rbind(1, 1))) fit4 <- vglm(y ~ 1, normal1(lsd = "identity", parallel = TRUE, apply.parint = TRUE, zero = NULL), ndata) coef(fit3, matrix = TRUE) coef(fit4, matrix = TRUE) # Same as fit3 } \keyword{models} \keyword{regression} VGAM/man/negbinomial.size.Rd0000644000176000001440000000677012136651105015345 0ustar ripleyusers\name{negbinomial.size} \alias{negbinomial.size} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Negative Binomial Distribution Family Function With Known Size} \description{ Maximum likelihood estimation of the mean parameter of a negative binomial distribution with known size parameter. } \usage{ negbinomial.size(size = Inf, lmu = "loge", imu = NULL, probs.y = 0.75, imethod = 1, shrinkage.init = 0.95, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{size}{ Numeric, positive. Same as argument \code{size} of \code{\link[stats:NegBinomial]{rnbinom}}. If the response is a matrix then this is recycled to a matrix of the same dimension, by row (\code{\link[base]{matrix}} with \code{byrow = TRUE}). } \item{lmu, imu}{ Same as \code{\link{negbinomial}}. } \item{probs.y}{ Same as \code{\link{negbinomial}}. } \item{imethod, zero}{ Same as \code{\link{negbinomial}}. } \item{shrinkage.init}{ Same as \code{\link{negbinomial}}. } } \details{ This \pkg{VGAM} family function estimates only the mean parameter of the negative binomial distribution. See \code{\link{negbinomial}} for general information. Setting \code{size = 1} gives what I call the NB-G (geometric model; see Hilbe (2011)). The default, \code{size = Inf}, corresponds to the Poisson distribution. } %\section{Warning}{ % %} \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, \code{\link{rrvglm}} and \code{\link{vgam}}. } \references{ Hilbe, J. M. (2011) \emph{Negative Binomial Regression}, 2nd Edition. Cambridge: Cambridge University Press. Yee, T. W. (2013) Reduced-rank vector generalized linear models with two linear predictors. \emph{Computational Statistics and Data Analysis}. } \author{ Thomas W. Yee } \note{ If \code{lmu = "nbcanlink"} in \code{negbinomial.size()} then the \code{size} argument here should be assigned. % is placed inside the \code{earg} % argument of \code{nbcanlink()} as a matrix with conformable size. } \seealso{ \code{\link{negbinomial}}, \code{\link{nbcanlink}} (NB-C model), \code{\link{quasipoissonff}}, \code{\link{poissonff}}, % \code{\link[MASS]{rnegbin}}. \code{\link[stats:NegBinomial]{rnbinom}}. 
} \examples{ # Simulated data with various multiple responses size1 <- exp(1); size2 <- exp(2); size3 <- exp(0); size4 <- Inf ndata <- data.frame(x2 = runif(nn <- 1000)) ndata <- transform(ndata, eta1 = -1 - 2 * x2, # eta1 must be negative size1 = size1) ndata <- transform(ndata, mu1 = nbcanlink(eta1, size = size1, inv = TRUE)) ndata <- transform(ndata, y1 = rnbinom(nn, mu = mu1, size = size1), # NB-C y2 = rnbinom(nn, mu = exp(2 - x2), size = size2), y3 = rnbinom(nn, mu = exp(3 + x2), size = size3), # NB-G y4 = rpois (nn, la = exp(1 + x2))) # Also known as NB-C with size known (Hilbe, 2011) fit1 <- vglm(y1 ~ x2, negbinomial.size(size = size1, lmu = "nbcanlink"), ndata, trace = TRUE, crit = "coef") coef(fit1, matrix = TRUE) head(fit1@misc$size) # size saved here fit2 <- vglm(cbind(y2, y3, y4) ~ x2, negbinomial.size(size = c(size2, size3, size4)), ndata, trace = TRUE) coef(fit2, matrix = TRUE) head(fit2@misc$size) # size saved here } \keyword{models} \keyword{regression} VGAM/man/negbinomial.Rd0000644000176000001440000003720112136651105014365 0ustar ripleyusers\name{negbinomial} \alias{negbinomial} \alias{polya} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Negative Binomial Distribution Family Function } \description{ Maximum likelihood estimation of the two parameters of a negative binomial distribution. } \usage{ negbinomial(lmu = "loge", lsize = "loge", imu = NULL, isize = NULL, probs.y = 0.75, nsimEIM = 100, cutoff = 0.995, Maxiter = 5000, deviance.arg = FALSE, imethod = 1, parallel = FALSE, shrinkage.init = 0.95, zero = -2) polya(lprob = "logit", lsize = "loge", iprob = NULL, isize = NULL, probs.y = 0.75, nsimEIM = 100, deviance.arg = FALSE, imethod = 1, shrinkage.init = 0.95, zero = -2) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lmu, lsize, lprob}{ Link functions applied to the \eqn{\mu}{mu}, \eqn{k} and \eqn{p} parameters. See \code{\link{Links}} for more choices. Note that the \eqn{\mu}{mu}, \eqn{k} and \eqn{p} parameters are the \code{mu}, \code{size} and \code{prob} arguments of \code{\link[stats:NegBinomial]{rnbinom}} respectively. Common alternatives for \code{lsize} are \code{\link{nloge}} and \code{\link{reciprocal}}. } \item{imu, isize, iprob}{ Optional initial values for the mean, \eqn{k} and \eqn{p}. For \eqn{k}, if failure to converge occurs then try different values (and/or use \code{imethod}). For an \eqn{S}-column response, \code{isize} can be of length \eqn{S}. A value \code{NULL} means an initial value for each response is computed internally using a range of values. The last argument is ignored if used within \code{\link{cqo}}; see the \code{iKvector} argument of \code{\link{qrrvglm.control}} instead. } \item{probs.y}{ Passed into the \code{probs} argument of \code{\link[stats:quantile]{quantile}} when \code{imethod = 3} to obtain an initial value for the mean. } \item{nsimEIM}{ This argument is used for computing the diagonal element of the \emph{expected information matrix} (EIM) corresponding to \eqn{k}. See \code{\link{CommonVGAMffArguments}} for more information and the note below. } \item{cutoff}{ Used in the finite series approximation. A numeric which is close to 1 but never exactly 1. Used to specify how many terms of the infinite series for computing the second diagonal element of the EIM are actually used. The probabilities are summed until their total reaches this value or more (but no more than \code{Maxiter} terms are allowed).
It is like specifying \code{p} in an imaginary function \code{qnegbin(p)}. } \item{Maxiter}{ Used in the finite series approximation. Integer. The maximum number of terms allowed when computing the second diagonal element of the EIM. In theory, the value involves an infinite series. If this argument is too small then the value may be inaccurate. } \item{deviance.arg}{ Logical. If \code{TRUE}, the deviance function is attached to the object. Under ordinary circumstances, it should be left alone because it really assumes the index parameter is at the maximum likelihood estimate. Consequently, one cannot use that criterion to minimize within the IRLS algorithm. It should be set \code{TRUE} only when used with \code{\link{cqo}} under the fast algorithm. } \item{imethod}{ An integer with value \code{1}, \code{2} or \code{3} which specifies the initialization method for the \eqn{\mu}{mu} parameter. If failure to converge occurs, try another value and/or specify a value for \code{shrinkage.init} and/or \code{isize}. } \item{parallel}{ See \code{\link{CommonVGAMffArguments}} for more information. Setting \code{parallel = TRUE} is useful in order to get something similar to \code{\link{quasipoissonff}} or what is known as NB-1. The parallelism constraint does not apply to any intercept term. You should set \code{zero = NULL} too if \code{parallel = TRUE} to avoid a conflict. } \item{shrinkage.init}{ How much shrinkage is used when initializing \eqn{\mu}{mu}. The value must be between 0 and 1 inclusive, and a value of 0 means the individual response values are used, and a value of 1 means the median or mean is used. This argument is used in conjunction with \code{imethod}. If convergence failure occurs, try setting this argument to 1. } \item{zero}{ Integer-valued vector, usually assigned \eqn{-2} or \eqn{2} if used at all. Specifies which of the two linear/additive predictors are modelled as an intercept only. By default, the \eqn{k} parameter (after \code{lsize} is applied) is modelled as a single unknown number that is estimated. It can be modelled as a function of the explanatory variables by setting \code{zero = NULL}; this has been called an NB-H model by Hilbe (2011). A negative value means that the value is recycled, so setting \eqn{-2} means all \eqn{k} are intercept-only. See \code{\link{CommonVGAMffArguments}} for more information. } } \details{ The negative binomial distribution can be motivated in several ways, e.g., as a Poisson distribution with a mean that is gamma distributed. There are several common parametrizations of the negative binomial distribution. The one used by \code{negbinomial()} uses the mean \eqn{\mu}{mu} and an \emph{index} parameter \eqn{k}, both of which are positive. Specifically, the density of a random variable \eqn{Y} is \deqn{f(y;\mu,k) ~=~ {y + k - 1 \choose y} \, \left( \frac{\mu}{\mu+k} \right)^y\, \left( \frac{k}{k+\mu} \right)^k }{% f(y;mu,k) = C_{y}^{y + k - 1} [mu/(mu+k)]^y [k/(k+mu)]^k} where \eqn{y=0,1,2,\ldots}, and \eqn{\mu > 0}{mu > 0} and \eqn{k > 0}. Note that the \emph{dispersion} parameter is \eqn{1/k}, so that as \eqn{k} approaches infinity the negative binomial distribution approaches a Poisson distribution. The response has variance \eqn{Var(Y)=\mu+\mu^2/k}{Var(Y)=mu*(1+mu/k)}. When fitted, the \code{fitted.values} slot of the object contains the estimated value of the \eqn{\mu}{mu} parameter, i.e., of the mean \eqn{E(Y)}.
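As a quick numerical sanity check of this parameterization (a sketch only), the density above agrees with \code{\link[stats:NegBinomial]{dnbinom}} when its \code{mu} and \code{size} arguments play the roles of \eqn{\mu}{mu} and \eqn{k}:
\preformatted{
mu <- 2; kay <- 3; y <- 0:4
choose(y + kay - 1, y) * (mu / (mu + kay))^y * (kay / (kay + mu))^kay -
  dnbinom(y, mu = mu, size = kay)   # Should be (near) zero
}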
Some authors use \eqn{\alpha=1/k}{alpha=1/k} as the ancillary or heterogeneity parameter; so common alternatives for \code{lsize} are \code{\link{nloge}} and \code{\link{reciprocal}}. For \code{polya} the density is \deqn{f(y;p,k) ~=~ {y + k - 1 \choose y} \, \left( 1 - p \right)^y\, p^k }{% f(y;p,k) = C_{y}^{y + k - 1} [1 - p]^y p^k} where \eqn{y=0,1,2,\ldots}, and \eqn{0 < p < 1}{0 < p < 1} and \eqn{k > 0}. The negative binomial distribution can be coerced into the classical GLM framework with one of the parameters being of interest and the other treated as a nuisance/scale parameter (this is implemented in the \pkg{MASS} library). The \pkg{VGAM} family function \code{negbinomial} treats both parameters on the same footing, and estimates them both by full maximum likelihood estimation. Simulated Fisher scoring is employed as the default (see the \code{nsimEIM} argument). The parameters \eqn{\mu}{mu} and \eqn{k} are independent (diagonal EIM), and the confidence region for \eqn{k} is extremely skewed so that its standard error is often of no practical use. The parameter \eqn{1/k} has been used as a measure of aggregation. These \pkg{VGAM} family functions handle \emph{multivariate} responses, so that a matrix can be used as the response. The number of columns is the number of species, say, and setting \code{zero = -2} means that \emph{all} species have a \eqn{k} equalling a (different) intercept only. } \section{Warning}{ The Poisson model corresponds to \eqn{k} equalling infinity. If the data are Poisson or close to Poisson, numerical problems will occur. Possibly choosing a log-log link may help in such cases; otherwise use \code{\link{poissonff}} or \code{\link{quasipoissonff}}. These functions are fragile; the maximum likelihood estimate of the index parameter is fraught with difficulties (see Lawless, 1987). In general, \code{\link{quasipoissonff}} is more robust. Other alternatives to \code{negbinomial} are to fit an NB-1 or RR-NB (aka NB-P) model; see Yee (2013). Also available are the NB-C, NB-H and NB-G. Assigning values to the \code{isize} argument may lead to a local solution, and smaller values are preferred over large values when using this argument. Yet to do: write a family function which uses the methods of moments estimator for \eqn{k}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, \code{\link{rrvglm}} and \code{\link{vgam}}. } \references{ Lawless, J. F. (1987) Negative binomial and mixed Poisson regression. \emph{The Canadian Journal of Statistics} \bold{15}, 209--225. Hilbe, J. M. (2011) \emph{Negative Binomial Regression}, 2nd Edition. Cambridge: Cambridge University Press. Bliss, C. and Fisher, R. A. (1953) Fitting the negative binomial distribution to biological data. \emph{Biometrics} \bold{9}, 174--200. Yee, T. W. (2013) Reduced-rank vector generalized linear models with two linear predictors. \emph{Computational Statistics and Data Analysis}. } \author{ Thomas W. Yee } \note{ % The \pkg{VGAM} package has a few other family functions for the % negative binomial distribution. Currently, none of these others work % very well. These two functions implement two common parameterizations of the negative binomial (NB). Some people call the NB with integer \eqn{k} the \emph{Pascal} distribution, whereas if \eqn{k} is real then this is the \emph{Polya} distribution. I don't.
The one matching the details of \code{\link[stats:NegBinomial]{rnbinom}} in terms of \eqn{p} and \eqn{k} is \code{polya()}. For \code{polya()} the code may fail when \eqn{p} is close to 0 or 1. It is not yet compatible with \code{\link{cqo}} or \code{\link{cao}}. Suppose the response is called \code{ymat}. For \code{negbinomial()} the diagonal element of the \emph{expected information matrix} (EIM) for parameter \eqn{k} involves an infinite series; consequently simulated Fisher scoring (see \code{nsimEIM}) is the default. This algorithm should definitely be used if \code{max(ymat)} is large, e.g., \code{max(ymat) > 300} or there are any outliers in \code{ymat}. A second algorithm involving a finite series approximation can be invoked by setting \code{nsimEIM = NULL}. Then the arguments \code{Maxiter} and \code{cutoff} are pertinent. Regardless of the algorithm used, convergence problems may occur, especially when the response has large outliers or is large in magnitude. If convergence failure occurs, try using arguments (in recommended decreasing order) \code{nsimEIM}, \code{shrinkage.init}, \code{imethod}, \code{Maxiter}, \code{cutoff}, \code{isize}, \code{zero}. The function \code{negbinomial} can be used by the fast algorithm in \code{\link{cqo}}; however, setting \code{EqualTolerances = TRUE} and \code{ITolerances = FALSE} is recommended. % For \code{\link{cqo}} and \code{\link{cao}}, taking the square-root % of the response means (approximately) a \code{\link{poissonff}} family % may be used on the transformed data. % If the negative binomial family function \code{\link{negbinomial}} % is used for \code{cqo} then set \code{negbinomial(deviance = TRUE)} % is necessary. This means to minimize the deviance, which the fast % algorithm can handle. In the first example below (Bliss and Fisher, 1953), from each of 6 McIntosh apple trees in an orchard that had been sprayed, 25 leaves were randomly selected. On each of the leaves, the number of adult female European red mites was counted. There are two special uses of \code{negbinomial} for handling count data. Firstly, when used by \code{\link{rrvglm}} this results in a continuum of models between, and inclusive of, quasi-Poisson and negative binomial regression. This is known as a reduced-rank negative binomial model \emph{(RR-NB)}. It fits a negative binomial log-linear regression with variance function \eqn{Var(Y)=\mu+\delta_1 \mu^{\delta_2}}{Var(Y) = mu + delta1 * mu^delta2} where \eqn{\delta_1}{delta1} and \eqn{\delta_2}{delta2} are parameters to be estimated by MLE. Confidence intervals are available for \eqn{\delta_2}{delta2}, so one can decide whether the data are better described by a quasi-Poisson or a negative binomial model. Secondly, the use of \code{negbinomial} with \code{parallel = TRUE} inside \code{\link{vglm}} can result in a model similar to \code{\link{quasipoissonff}}. This is named the \emph{NB-1} model. The dispersion parameter is estimated by MLE whereas \code{\link[stats:glm]{glm}} uses the method of moments. In particular, it fits a negative binomial log-linear regression with variance function \eqn{Var(Y) = \phi_0 \mu}{Var(Y) = phi0 * mu} where \eqn{\phi_0}{phi0} is a parameter to be estimated by MLE. Confidence intervals are available for \eqn{\phi_0}{phi0}. } \seealso{ \code{\link{quasipoissonff}}, \code{\link{poissonff}}, \code{\link{zinegbinomial}}, \code{\link{negbinomial.size}} (e.g., NB-G), \code{\link{nbcanlink}} (NB-C), \code{\link{posnegbinomial}}, \code{\link{invbinomial}}, % \code{\link[MASS]{rnegbin}}.
\code{\link[stats:NegBinomial]{rnbinom}}, \code{\link{nbolf}}, \code{\link{rrvglm}}, \code{\link{cao}}, \code{\link{cqo}}, \code{\link{CommonVGAMffArguments}}. } \examples{ # Example 1: apple tree data appletree <- data.frame(y = 0:7, w = c(70, 38, 17, 10, 9, 3, 2, 1)) fit <- vglm(y ~ 1, negbinomial, appletree, weights = w) summary(fit) coef(fit, matrix = TRUE) Coef(fit) # Example 2: simulated data with multivariate response ndata <- data.frame(x2 = runif(nn <- 500)) ndata <- transform(ndata, y1 = rnbinom(nn, mu = exp(3+x2), size = exp(1)), y2 = rnbinom(nn, mu = exp(2-x2), size = exp(0))) fit1 <- vglm(cbind(y1, y2) ~ x2, negbinomial, ndata, trace = TRUE) coef(fit1, matrix = TRUE) # Example 3: large counts so definitely use the nsimEIM argument ndata <- transform(ndata, y3 = rnbinom(nn, mu = exp(12+x2), size = exp(1))) with(ndata, range(y3)) # Large counts fit2 <- vglm(y3 ~ x2, negbinomial(nsimEIM = 100), ndata, trace = TRUE) coef(fit2, matrix = TRUE) # Example 4: a NB-1 to estimate a negative binomial with Var(Y) = phi0 * mu nn <- 1000 # Number of observations phi0 <- 10 # Specify this; should be greater than unity delta0 <- 1 / (phi0 - 1) mydata <- data.frame(x2 = runif(nn), x3 = runif(nn)) mydata <- transform(mydata, mu = exp(2 + 3 * x2 + 0 * x3)) mydata <- transform(mydata, y3 = rnbinom(nn, mu = mu, size = delta0 * mu)) \dontrun{ plot(y3 ~ x2, data = mydata, pch = "+", col = 'blue', main = paste("Var(Y) = ", phi0, " * mu", sep = ""), las = 1) } nb1 <- vglm(y3 ~ x2 + x3, negbinomial(parallel = TRUE, zero = NULL), mydata, trace = TRUE) # Extracting out some quantities: cnb1 <- coef(nb1, matrix = TRUE) mydiff <- (cnb1["(Intercept)", "log(size)"] - cnb1["(Intercept)", "log(mu)"]) delta0.hat <- exp(mydiff) (phi.hat <- 1 + 1 / delta0.hat) # MLE of phi summary(nb1) # Obtain a 95 percent confidence interval for phi0: myvec <- rbind(-1, 1, 0, 0) (se.mydiff <- sqrt(t(myvec) \%*\% vcov(nb1) \%*\% myvec)) ci.mydiff <- mydiff + c(-1.96, 1.96) * se.mydiff ci.delta0 <- ci.exp.mydiff <- exp(ci.mydiff) (ci.phi0 <- 1 + 1 / rev(ci.delta0)) # The 95 percent conf. interval for phi0 Confint.nb1(nb1) # Quick way to get it summary(glm(y3 ~ x2 + x3, quasipoisson, mydata))$disper # cf. moment estimator } \keyword{models} \keyword{regression} %y1 = MASS:::rnegbin(n, mu=exp(3+x), theta=exp(1)) # k is theta %y2 = MASS:::rnegbin(n, mu=exp(2-x), theta=exp(0)) VGAM/man/nbolf.Rd0000644000176000001440000000716112136651105013203 0ustar ripleyusers\name{nbolf} \alias{nbolf} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Negative Binomial-Ordinal Link Function } \description{ Computes the negative binomial-ordinal transformation, including its inverse and the first two derivatives. } \usage{ nbolf(theta, cutpoint = NULL, k = NULL, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{theta}{ Numeric or character. See below for further details. } \item{cutpoint, k}{ Here, \code{k} is the \eqn{k} parameter associated with the negative binomial distribution; see \code{\link{negbinomial}}. The cutpoints should be non-negative integers. If \code{nbolf()} is used as the link function in \code{\link{cumulative}} then one should choose \code{reverse = TRUE, parallel = TRUE, apply.parint = TRUE}. } \item{inverse, deriv, short, tag}{ Details at \code{\link{Links}}. } } \details{ The negative binomial-ordinal link function (NBOLF) can be applied to a parameter lying in the unit interval. 
Its purpose is to link cumulative probabilities associated with an ordinal response coming from an underlying negative binomial distribution. See \code{\link{Links}} for general information about \pkg{VGAM} link functions. } \value{ See Yee (2012) for details. } \references{ Yee, T. W. (2012) \emph{Ordinal ordination with normalizing link functions for count data}, (in preparation). } \author{ Thomas W. Yee } \note{ Numerical values of \code{theta} too close to 0 or 1 or out of range result in large positive or negative values, or maybe 0 depending on the arguments. Although measures have been taken to handle cases where \code{theta} is too close to 1 or 0, numerical instabilities may still arise. In terms of the threshold approach with cumulative probabilities for an ordinal response this link function corresponds to the negative binomial distribution (see \code{\link{negbinomial}}) that has been recorded as an ordinal response using known cutpoints. } \section{Warning }{ Prediction may not work on \code{\link{vglm}} or \code{\link{vgam}} etc. objects if this link function is used. } \seealso{ \code{\link{Links}}, \code{\link{negbinomial}}, \code{\link{polf}}, \code{\link{golf}}, \code{nbolf2}, \code{\link{cumulative}}, \code{\link{CommonVGAMffArguments}}. } \examples{ nbolf("p", cutpoint = 2, k = 1, short = FALSE) nbolf("p", cutpoint = 2, k = 1, tag = TRUE) p <- seq(0.02, 0.98, by = 0.01) y <- nbolf(p,cutpoint = 2, k = 1) y. <- nbolf(p,cutpoint = 2, k = 1, deriv = 1) max(abs(nbolf(y,cutpoint = 2, k = 1, inv = TRUE) - p)) # Should be 0 \dontrun{ par(mfrow = c(2, 1), las = 1) plot(p, y, type = "l", col = "blue", main = "nbolf()") abline(h = 0, v = 0.5, col = "red", lty = "dashed") plot(p, y., type = "l", col = "blue", main = "(Reciprocal of) first NBOLF derivative") } # Another example nn <- 1000 x2 <- sort(runif(nn)) x3 <- runif(nn) mymu <- exp( 3 + 1 * x2 - 2 * x3) k <- 4 y1 <- rnbinom(nn, mu = mymu, size = k) cutpoints <- c(-Inf, 10, 20, Inf) cuty <- Cut(y1, breaks = cutpoints) \dontrun{ plot(x2, x3, col = cuty, pch = as.character(cuty)) } table(cuty) / sum(table(cuty)) fit <- vglm(cuty ~ x2 + x3, trace = TRUE, cumulative(reverse = TRUE, mv = TRUE, parallel = TRUE, apply.parint = TRUE, link = nbolf(cutpoint = cutpoints[2:3], k = k))) head(depvar(fit)) head(fitted(fit)) head(predict(fit)) coef(fit) coef(fit, matrix = TRUE) constraints(fit) fit@misc } \keyword{math} \keyword{models} \keyword{regression} VGAM/man/nbcanlink.Rd0000644000176000001440000001164312136651105014042 0ustar ripleyusers\name{nbcanlink} \alias{nbcanlink} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Negative binomial canonical link function } \description{ Computes the negative binomial canonical link transformation, including its inverse and the first two derivatives. } \usage{ nbcanlink(theta, size = NULL, wrt.eta = NULL, bvalue = NULL, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{theta}{ Numeric or character. Typically the mean of a negative binomial (NB) distribution. See below for further details. } \item{size, wrt.eta}{ \code{size} contains the \eqn{k} matrix which must be of a conformable dimension as \code{theta}. Also, if \code{deriv > 0} then \code{wrt.eta} is either 1 or 2 (1 for with respect to the first linear predictor, and 2 for with respect to the second linear predictor (a function of \eqn{k})). } \item{bvalue}{ Details at \code{\link{Links}}. 
} \item{inverse, deriv, short, tag}{ Details at \code{\link{Links}}. } } \details{ The negative binomial (NB) canonical link is \eqn{\log(\theta/ (\theta + k))}{log(theta/(theta + k))} where \eqn{\theta}{theta} is the mean of an NB distribution. The canonical link is used for theoretically relating the NB to the GLM class. This link function was specifically written for \code{\link{negbinomial}} and \code{\link{negbinomial.size}}, and should not be used elsewhere (these \pkg{VGAM} family functions have code that specifically handles \code{nbcanlink()}). } \value{ For \code{deriv = 0}, the above equation when \code{inverse = FALSE}, and if \code{inverse = TRUE} then \code{kmatrix / expm1(-theta)}. For \code{deriv = 1}, the function returns \emph{d} \code{theta} / \emph{d} \code{eta} as a function of \code{theta} if \code{inverse = FALSE}, else if \code{inverse = TRUE} then it returns the reciprocal. } \references{ Yee, T. W. (2013) Reduced-rank vector generalized linear models with two linear predictors. \emph{Computational Statistics and Data Analysis}. Hilbe, J. M. (2011) \emph{Negative Binomial Regression}, 2nd Edition. Cambridge: Cambridge University Press. } \author{ Thomas W. Yee } \section{Warning}{ This function currently does not work very well with \code{\link{negbinomial}}! The NB-C model is sensitive to the initial values and may converge to a local solution. Pages 210 and 309 of Hilbe (2011) note convergence difficulties (of Newton-Raphson type algorithms), and this applies here. This function should work okay with \code{\link{negbinomial.size}}. Currently, trying something like \code{imethod = 3} or \code{imu}, \code{stepsize = 0.5}, \code{maxit = 100}, \code{zero = -2} should help; see the example below. } \note{ While theoretically nice, this function is not recommended in general since its value is always negative (linear predictors ought to be unbounded in general). A \code{\link{loge}} link for argument \code{lmu} is recommended instead. Numerical instability may occur when \code{theta} is close to 0 or 1. Values of \code{theta} which are less than or equal to 0 can be replaced by \code{bvalue} before computing the link function value. See \code{\link{Links}}. } \seealso{ \code{\link{negbinomial}}, \code{\link{negbinomial.size}}.
} \examples{ nbcanlink("mu", short = FALSE) mymu <- 1:10 # Test some basic operations: kmatrix <- matrix(runif(length(mymu)), length(mymu), 1) eta1 <- nbcanlink(mymu, size = kmatrix) ans2 <- nbcanlink(eta1, size = kmatrix, inverse = TRUE) max(abs(ans2 - mymu)) # Should be 0 \dontrun{ mymu <- c(seq(0.5, 10, length = 101)) kmatrix <- matrix(10, length(mymu), 1) plot(nbcanlink(mymu, size = kmatrix) ~ mymu, las = 1, type = "l", col = "blue", lwd = 1.5, xlab = expression({mu})) # Estimate the parameters from some simulated data (see Warning section) set.seed(123) ndata <- data.frame(x2 = runif(nn <- 1000 )) size1 <- exp(1); size2 <- exp(2) ndata <- transform(ndata, eta1 = -1 - 2 * x2, # eta1 < 0 size1 = size1, size2 = size2) ndata <- transform(ndata, mu1 = nbcanlink(eta1, size = size1, inv = TRUE), mu2 = nbcanlink(eta1, size = size2, inv = TRUE)) ndata <- transform(ndata, y1 = rnbinom(nn, mu = mu1, size = size1), y2 = rnbinom(nn, mu = mu2, size = size2)) head(ndata) summary(ndata) fit <- vglm(cbind(y1, y2) ~ x2, negbinomial("nbcanlink", imethod = 3), stepsize = 0.5, ndata, # Deliberately slow the convergence rate maxit = 100, trace = TRUE) # Warning: may converge to a local soln coef(fit, matrix = TRUE) summary(fit) } } \keyword{math} \keyword{models} \keyword{regression} % abline(h = 0, col = "lightgray", lty = "dashed", lwd = 2.0) % The variance-covariance matrix may be wrong when the % canonical link is used. % vcov(fit) # May be wrong VGAM/man/nakagamiUC.Rd0000644000176000001440000000456312136651105014106 0ustar ripleyusers\name{Nakagami} \alias{Nakagami} \alias{dnaka} \alias{pnaka} \alias{qnaka} \alias{rnaka} \title{Nakagami Distribution } \description{ Density, cumulative distribution function, quantile function and random generation for the Nakagami distribution. } \usage{ dnaka(x, shape, scale = 1, log = FALSE) pnaka(q, shape, scale = 1) qnaka(p, shape, scale = 1, ...) rnaka(n, shape, scale = 1, Smallno = 1.0e-6) } \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. Must be a positive integer of length 1.} \item{shape, scale }{ arguments for the parameters of the distribution. See \code{\link{nakagami}} for more details. For \code{rnaka}, arguments \code{shape} and \code{scale} must be of length 1. } \item{Smallno}{ Numeric, a small value used by the rejection method for determining the upper limit of the distribution. That is, \code{pnaka(U) > 1-Smallno} where \code{U} is the upper limit. } \item{\ldots}{ Arguments that can be passed into \code{\link[stats]{uniroot}}. } \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the density is returned. } } \value{ \code{dnaka} gives the density, \code{pnaka} gives the cumulative distribution function, \code{qnaka} gives the quantile function, and \code{rnaka} generates random deviates. } \author{ T. W. Yee } \details{ See \code{\link{nakagami}} for more details. } %\note{ % %} \seealso{ \code{\link{nakagami}}. 
} \examples{ \dontrun{ x <- seq(0, 3.2, len = 200) plot(x, dgamma(x, shape = 1), type = "n", col = "black", ylab = "", ylim = c(0,1.5), main = "dnaka(x, shape)") lines(x, dnaka(x, shape = 1), col = "orange") lines(x, dnaka(x, shape = 2), col = "blue") lines(x, dnaka(x, shape = 3), col = "green") legend(2, 1.0, col = c("orange","blue","green"), lty = rep(1, len = 3), legend = paste("shape =", c(1, 2, 3))) plot(x, pnorm(x), type = "n", col = "black", ylab = "", ylim = 0:1, main = "pnaka(x, shape)") lines(x, pnaka(x, shape = 1), col = "orange") lines(x, pnaka(x, shape = 2), col = "blue") lines(x, pnaka(x, shape = 3), col = "green") legend(2, 0.6, col = c("orange","blue","green"), lty = rep(1, len = 3), legend = paste("shape =", c(1, 2, 3))) } probs <- seq(0.1, 0.9, by = 0.1) pnaka(qnaka(p = probs, shape = 2), shape = 2) - probs # Should be all 0 } \keyword{distribution} VGAM/man/nakagami.Rd0000644000176000001440000000663012136651105013653 0ustar ripleyusers\name{nakagami} \alias{nakagami} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Nakagami Distribution Family Function } \description{ Estimation of the two parameters of the Nakagami distribution by maximum likelihood estimation. } \usage{ nakagami(lshape = "loge", lscale = "loge", ishape = NULL, iscale = 1) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lshape, lscale}{ Parameter link functions applied to the \emph{shape} and \emph{scale} parameters. Log links ensure they are positive. See \code{\link{Links}} for more choices and information. } \item{ishape, iscale}{ Optional initial values for the shape and scale parameters. For \code{ishape}, a \code{NULL} value means it is obtained in the \code{initialize} slot based on the value of \code{iscale}. For \code{iscale}, assigning a \code{NULL} means a value is obtained in the \code{initialize} slot; however, setting another numerical value is recommended if convergence fails or is too slow. } } \details{ The Nakagami distribution, which is useful for modelling wireless systems such as radio links, can be written \deqn{f(y) = 2 (shape/scale)^{shape} y^{2 \times shape-1} \exp(-shape \times y^2/scale) / \Gamma(shape)}{% 2 * (shape/scale)^shape * y^(2*shape-1) * exp(-shape*y^2/scale) / gamma(shape)} for \eqn{y > 0}, \eqn{shape > 0}, \eqn{scale > 0}. The mean of \eqn{Y} is \eqn{\sqrt{scale/shape} \times \Gamma(shape+0.5) / \Gamma(shape)}{sqrt(scale/shape) * gamma(shape+0.5) / gamma(shape)} and these are returned as the fitted values. By default, the linear/additive predictors are \eqn{\eta_1=\log(shape)}{eta1=log(shape)} and \eqn{\eta_2=\log(scale)}{eta2=log(scale)}. Fisher scoring is implemented. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Nakagami, M. (1960) The \emph{m}-distribution: a general formula of intensity distribution of rapid fading, pp.3--36 in: \emph{Statistical Methods in Radio Wave Propagation}. W. C. Hoffman, Ed., New York: Pergamon. } \author{ T. W. Yee } \note{ The Nakagami distribution is also known as the Nakagami-\emph{m} distribution, where \eqn{m=shape} here. Special cases: \eqn{m=0.5} is a one-sided Gaussian distribution and \eqn{m=1} is a Rayleigh distribution. The second moment is \eqn{E(Y^2) = scale}{E(Y^2) = scale}.
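(A quick empirical sketch of this second moment: \code{mean(rnaka(1000, shape = 2, scale = 3)^2)} should be close to 3, the scale value.)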
If \eqn{Y} has a Nakagami distribution with parameters \emph{shape} and \emph{scale} then \eqn{Y^2} has a gamma distribution with shape parameter \emph{shape} and scale parameter \emph{scale/shape}. } \seealso{ \code{\link{rnaka}}, \code{\link{gamma2}}, \code{\link{rayleigh}}. } \examples{ nn <- 1000; shape <- exp(0); Scale <- exp(1) ndata <- data.frame(y1 = sqrt(rgamma(nn, shape = shape, scale = Scale/shape))) fit <- vglm(y1 ~ 1, nakagami, ndata, trace = TRUE, crit = "c") ndata <- transform(ndata, y2 = rnaka(nn, shape = shape, scale = Scale)) fit <- vglm(y2 ~ 1, nakagami(iscale = 3), ndata, trace = TRUE) head(fitted(fit)) with(ndata, mean(y2)) coef(fit, matrix = TRUE) (Cfit <- Coef(fit)) \dontrun{ sy <- with(ndata, sort(y2)) # Create 'sy' in the workspace so lines() can find it hist(sy, prob = TRUE, main = "", xlab = "y", ylim = c(0, 0.6)) lines(dnaka(sy, shape = Cfit[1], scale = Cfit[2]) ~ sy, col = "orange") } } \keyword{models} \keyword{regression} VGAM/man/multinomial.Rd0000644000176000001440000002572712136651105014443 0ustar ripleyusers\name{multinomial} \alias{multinomial} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Multinomial Logit Model } \description{ Fits a multinomial logit model to a (preferably unordered) factor response. } \usage{ multinomial(zero = NULL, parallel = FALSE, nointercept = NULL, refLevel = "last", whitespace = FALSE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{zero}{ An integer-valued vector specifying which linear/additive predictors are modelled as intercepts only. Any values must be from the set \{1,2,\ldots,\eqn{M}\}. The default value means none are modelled as intercept-only terms. See \code{\link{CommonVGAMffArguments}} for more information. } \item{parallel}{ A logical, or formula specifying which terms have equal/unequal coefficients. } \item{nointercept, whitespace}{ See \code{\link{CommonVGAMffArguments}} for more details. } \item{refLevel}{ Either a single positive integer or a value of the factor. If an integer then it specifies which column of the response matrix is the reference or baseline level. The default is the last one (the \eqn{(M+1)}th one). If used, this argument will often be assigned the value \code{1}. If inputted as a value of a factor then beware of missing values of certain levels of the factor (\code{drop.unused.levels = TRUE} or \code{drop.unused.levels = FALSE}). See the example below. } } \details{ In this help file the response \eqn{Y} is assumed to be a factor with unordered values \eqn{1,2,\dots,M+1}, so that \eqn{M} is the number of linear/additive predictors \eqn{\eta_j}{eta_j}. The default model can be written \deqn{\eta_j = \log(P[Y=j]/ P[Y=M+1])}{% eta_j = log(P[Y=j]/ P[Y=M+1])} where \eqn{\eta_j}{eta_j} is the \eqn{j}th linear/additive predictor. Here, \eqn{j=1,\ldots,M}, and \eqn{\eta_{M+1}}{eta_{M+1}} is 0 by definition. That is, the last level of the factor, or last column of the response matrix, is taken as the reference level or baseline---this is for identifiability of the parameters. The reference or baseline level can be changed with the \code{refLevel} argument. In almost all the literature, the constraint matrices associated with this family of models are known. For example, setting \code{parallel = TRUE} will make all constraint matrices (except for the intercept) equal to a vector of \eqn{M} 1's. If the constraint matrices are unknown and to be estimated, then this can be achieved by fitting the model as a reduced-rank vector generalized linear model (RR-VGLM; see \code{\link{rrvglm}}).
In particular, a multinomial logit model with unknown constraint matrices is known as a \emph{stereotype} model (Anderson, 1984), and can be fitted with \code{\link{rrvglm}}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, \code{\link{rrvglm}} and \code{\link{vgam}}. } \references{ Yee, T. W. (2010) The \pkg{VGAM} package for categorical data analysis. \emph{Journal of Statistical Software}, \bold{32}, 1--34. \url{http://www.jstatsoft.org/v32/i10/}. Yee, T. W. and Hastie, T. J. (2003) Reduced-rank vector generalized linear models. \emph{Statistical Modelling}, \bold{3}, 15--41. McCullagh, P. and Nelder, J. A. (1989) \emph{Generalized Linear Models}, 2nd ed. London: Chapman & Hall. Agresti, A. (2002) \emph{Categorical Data Analysis}, 2nd ed. New York, USA: Wiley. Hastie, T. J., Tibshirani, R. J. and Friedman, J. H. (2009) \emph{The Elements of Statistical Learning: Data Mining, Inference and Prediction}, 2nd ed. New York, USA: Springer-Verlag. Simonoff, J. S. (2003) \emph{Analyzing Categorical Data}, New York, USA: Springer-Verlag. Anderson, J. A. (1984) Regression and ordered categorical variables. \emph{Journal of the Royal Statistical Society, Series B, Methodological}, \bold{46}, 1--30. Tutz, G. (2012) \emph{Regression for Categorical Data}, Cambridge University Press. Further information and examples on categorical data analysis by the \pkg{VGAM} package can be found at \url{http://www.stat.auckland.ac.nz/~yee/VGAM/doc/categorical.pdf}. } \author{ Thomas W. Yee } \note{ The response should be either a matrix of counts (with row sums that are all positive), or a factor. In both cases, the \code{y} slot returned by \code{\link{vglm}}/\code{\link{vgam}}/\code{\link{rrvglm}} is the matrix of sample proportions. The multinomial logit model is more appropriate for a nominal (unordered) factor response than for an ordinal (ordered) factor response. Models more suited for the latter include those based on cumulative probabilities, e.g., \code{\link{cumulative}}. \code{multinomial} is prone to numerical difficulties if the groups are separable and/or the fitted probabilities are close to 0 or 1. The fitted values returned are estimates of the probabilities \eqn{P[Y=j]} for \eqn{j=1,\ldots,M+1}. See \pkg{safeBinaryRegression} for the logistic regression case. Here is an example of the usage of the \code{parallel} argument. If there are covariates \code{x2}, \code{x3} and \code{x4}, then \code{parallel = TRUE ~ x2 + x3 - 1} and \code{parallel = FALSE ~ x4} are equivalent. This would constrain the regression coefficients for \code{x2} and \code{x3} to be equal; those of the intercepts and \code{x4} would be different. In Example 5 below, a conditional logit model is fitted to an artificial data set that explores how cost and travel time affect people's decision about how to travel to work. Walking is the baseline group. The variable \code{Cost.car} is the difference between the cost of travel to work by car and walking, etc. The variable \code{Time.car} is the difference between the travel duration/time to work by car and walking, etc. For other details about the \code{xij} argument see \code{\link{vglm.control}} and \code{\link{fill}}. The \code{\link[nnet]{multinom}} function in the \pkg{nnet} package uses the first level of the factor as baseline, whereas the last level of the factor is used here. Consequently the estimated regression coefficients differ.
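For example (a sketch, assuming the \pkg{nnet} package is available): refitting the model with \code{multinomial(refLevel = 1)} uses the same baseline as \code{\link[nnet]{multinom}}, so that the two sets of estimated regression coefficients should then agree, apart from numerical differences between the two fitting algorithms.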
} % In the future, this family function may be renamed to % ``\code{mum}'' (for \bold{mu}ltinomial logit \bold{m}odel). % Please let me know if you strongly agree or disagree about this. \section{Warning }{ No check is made to verify that the response is nominal. See \code{\link{CommonVGAMffArguments}} for more warnings. } \seealso{ \code{\link{margeff}}, \code{\link{cumulative}}, \code{\link{acat}}, \code{\link{cratio}}, \code{\link{sratio}}, \code{\link{dirichlet}}, \code{\link{dirmultinomial}}, \code{\link{rrvglm}}, \code{\link{fill1}}, \code{\link[stats:Multinom]{Multinomial}}, \code{\link{mlogit}}, \code{\link[datasets]{iris}}. The author's homepage has further documentation about categorical data analysis using \pkg{VGAM}. } % \code{\link[base:Multinom]{rmultinom}} \examples{ # Example 1: fit a multinomial logit model to Edgar Anderson's iris data data(iris) \dontrun{ fit <- vglm(Species ~ ., multinomial, iris) coef(fit, matrix = TRUE) } # Example 2a: a simple example ycounts <- t(rmultinom(10, size = 20, prob = c(0.1, 0.2, 0.8))) # Counts fit <- vglm(ycounts ~ 1, multinomial) head(fitted(fit)) # Proportions fit@prior.weights # NOT recommended for extraction of prior weights weights(fit, type = "prior", matrix = FALSE) # The better method depvar(fit) # Sample proportions; same as fit@y constraints(fit) # Constraint matrices # Example 2b: Different reference level used as the baseline fit2 <- vglm(ycounts ~ 1, multinomial(refLevel = 2)) coef(fit2, matrix = TRUE) coef(fit , matrix = TRUE) # Easy to reconcile this output with fit2 # Example 3: The response is a factor. nn <- 10 dframe3 <- data.frame(yfactor = gl(3, nn, labels = c("Control", "Trt1", "Trt2")), x2 = runif(3 * nn)) myrefLevel <- with(dframe3, yfactor[12]) fit3a <- vglm(yfactor ~ x2, multinomial(refLevel = myrefLevel), dframe3) fit3b <- vglm(yfactor ~ x2, multinomial(refLevel = 2), dframe3) coef(fit3a, matrix = TRUE) # "Treatment1" is the reference level coef(fit3b, matrix = TRUE) # "Treatment1" is the reference level margeff(fit3b) # Example 4: Fit a rank-1 stereotype model data(car.all) fit4 <- rrvglm(Country ~ Width + Height + HP, multinomial, car.all) coef(fit4) # Contains the C matrix constraints(fit4)$HP # The A matrix coef(fit4, matrix = TRUE) # The B matrix Coef(fit4)@C # The C matrix ccoef(fit4) # Better to get the C matrix this way Coef(fit4)@A # The A matrix svd(coef(fit4, matrix = TRUE)[-1, ])$d # This has rank 1; = C %*% t(A) # Example 5: The use of the xij argument (aka conditional logit model) set.seed(111) nn <- 100 # Number of people who travel to work M <- 3 # There are M+1 models of transport to go to work ycounts <- matrix(0, nn, M+1) ycounts[cbind(1:nn, sample(x = M+1, size = nn, replace = TRUE))] = 1 dimnames(ycounts) <- list(NULL, c("bus","train","car","walk")) gotowork <- data.frame(cost.bus = runif(nn), time.bus = runif(nn), cost.train= runif(nn), time.train= runif(nn), cost.car = runif(nn), time.car = runif(nn), cost.walk = runif(nn), time.walk = runif(nn)) gotowork <- round(gotowork, dig = 2) # For convenience gotowork <- transform(gotowork, Cost.bus = cost.bus - cost.walk, Cost.car = cost.car - cost.walk, Cost.train = cost.train - cost.walk, Cost = cost.train - cost.walk, # for labelling Time.bus = time.bus - time.walk, Time.car = time.car - time.walk, Time.train = time.train - time.walk, Time = time.train - time.walk) # for labelling fit <- vglm(ycounts ~ Cost + Time, multinomial(parall = TRUE ~ Cost + Time - 1), xij = list(Cost ~ Cost.bus + Cost.train + Cost.car, Time ~ Time.bus + Time.train + 
Time.car), form2 = ~ Cost + Cost.bus + Cost.train + Cost.car + Time + Time.bus + Time.train + Time.car, data = gotowork, trace = TRUE) head(model.matrix(fit, type = "lm")) # LM model matrix head(model.matrix(fit, type = "vlm")) # Big VLM model matrix coef(fit) coef(fit, matrix = TRUE) constraints(fit) summary(fit) max(abs(predict(fit) - predict(fit, new = gotowork))) # Should be 0 } \keyword{models} \keyword{regression} % 20100915; this no longer works: % # Example 2c: Different input to Example 2a but same result % w <- apply(ycounts, 1, sum) # Prior weights % yprop <- ycounts / w # Sample proportions % fitprop <- vglm(yprop ~ 1, multinomial, weights=w) % head(fitted(fitprop)) # Proportions % weights(fitprop, type="prior", matrix=FALSE) % fitprop@y # Same as the input VGAM/man/morgenstern.Rd0000644000176000001440000000652112136651105014445 0ustar ripleyusers\name{morgenstern} \alias{morgenstern} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Morgenstern's Bivariate Distribution Family Function } \description{ Estimate the association parameter of Morgenstern's bivariate distribution by maximum likelihood estimation. } \usage{ morgenstern(lapar = "rhobit", iapar = NULL, tola0 = 0.01, imethod = 1) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lapar}{ Link function for the association parameter \eqn{\alpha}{alpha}, which lies between \eqn{-1} and \eqn{1}. See \code{\link{Links}} for more choices and other information. } \item{iapar}{ Numeric. Optional initial value for \eqn{\alpha}{alpha}. By default, an initial value is chosen internally. If a convergence failure occurs try assigning a different value. Assigning a value will override the argument \code{imethod}. } \item{tola0}{ Positive numeric. If the estimate of \eqn{\alpha}{alpha} has an absolute value less than this then it is replaced by this value. This is an attempt to fix a numerical problem when the estimate is too close to zero. } \item{imethod}{ An integer with value \code{1} or \code{2} which specifies the initialization method. If failure to converge occurs try the other value, or else specify a value for \code{ia}. } } \details{ The cumulative distribution function is \deqn{P(Y_1 \leq y_1, Y_2 \leq y_2) = e^{-y_1-y_2} ( 1 + \alpha [1 - e^{-y_1}] [1 - e^{-y_2}] ) + 1 - e^{-y_1} - e^{-y_2} }{% P(Y1 <= y1, Y2 <= y2) = exp(-y1-y2) * ( 1 + alpha * [1 - exp(-y1)] * [1 - exp(-y2)] ) + 1 - exp(-y1) - exp(-y2) } for \eqn{\alpha}{alpha} between \eqn{-1} and \eqn{1}. The support of the function is for \eqn{y_1>0}{y1>0} and \eqn{y_2>0}{y2>0}. The marginal distributions are an exponential distribution with unit mean. When \eqn{\alpha = 0}{alpha=0} then the random variables are independent, and this causes some problems in the estimation process since the distribution no longer depends on the parameter. A variant of Newton-Raphson is used, which only seems to work for an intercept model. It is a very good idea to set \code{trace = TRUE}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Castillo, E., Hadi, A. S., Balakrishnan, N. Sarabia, J. S. (2005) \emph{Extreme Value and Related Models with Applications in Engineering and Science}, Hoboken, NJ, USA: Wiley-Interscience. } \author{ T. W. Yee } \note{ The response must be a two-column matrix. Currently, the fitted value is a matrix with two columns and values equal to 1. 
This is because each marginal distribution corresponds to an exponential distribution with unit mean. This \pkg{VGAM} family function should be used with caution. } \seealso{ \code{\link{fgm}}, \code{\link{gumbelIbiv}}. } \examples{ N <- 1000; mdata <- data.frame(y1 = rexp(N), y2 = rexp(N)) \dontrun{with(mdata, plot(y1, y2))} fit <- vglm(cbind(y1, y2) ~ 1, morgenstern, mdata, trace = TRUE) # This may fail: fit <- vglm(cbind(y1, y2) ~ 1, morgenstern, mdata, trace = TRUE, crit = "coef") coef(fit, matrix = TRUE) Coef(fit) head(fitted(fit)) } \keyword{models} \keyword{regression} VGAM/man/moffset.Rd0000644000176000001440000000716512136651105013552 0ustar ripleyusers\name{moffset} \alias{moffset} \title{ Matrix Offset } \description{ Modify a matrix by shifting successive elements. } \usage{ moffset(mat, roffset = 0, coffset = 0, postfix = "") } \arguments{ \item{mat}{ Data frame or matrix. This ought to have at least three rows and three columns. The elements are shifted in the order of \code{c(mat)}, i.e., going down successive columns, as the columns go from left to right. Wrapping of values is done. } \item{roffset, coffset}{ Numeric or character. If numeric, the amount of shift (offset) for each row and column. The default is no change to \code{mat}. If character, the offset is computed by matching with the row or column names. For example, for \code{\link{alcoff}}, putting \code{roffset = "6"} means that we make an effective day's dataset start from 6:00 am, and this wraps around to include midnight to 05.59 am on the next day. } \item{postfix}{ Character. Modified rows and columns are renamed by pasting this argument to the end of each name. The default is no change. } } \details{ This function allows a matrix to be rearranged so that element (\code{roffset} + 1, \code{coffset} + 1) becomes the (1, 1) element. The elements are assumed to be ordered in the same way as the elements of \code{c(mat)}. This function is applicable to, e.g., \code{\link{alcoff}}, where it is useful to define the \emph{effective day} as starting at some other hour than midnight, e.g., 6.00am. This is because partying on Friday night continues on into Saturday morning, therefore it is more interpretable to use the effective day when considering a daily effect. This is a data preprocessing function for \code{\link{rcim}} and \code{\link{plotrcim0}}. The difference between \code{\link{Rcim}} and \code{\link{moffset}} is that \code{\link{Rcim}} only reorders the levels of the rows and columns so that the data is shifted but not moved. That is, a value in one row stays in that row, and ditto for column. But in \code{\link{moffset}} values in one column can be moved to a previous column. See the examples below. } \value{ A matrix of the same dimension as its input. } \author{ T. W. Yee, Alfian F. Hadi. } \note{ % This function was originally for a 24 x 7 dimensional matrix % (24 hours of the day by 7 days per week) such as \code{\link{alcoff}}. % Of course, this function can be applied to any moderately % large matrix. The input \code{mat} should have row names and column names. } \seealso{ \code{\link{Rcim}}, \code{\link{rcim}}, \code{\link{plotrcim0}}, \code{\link{alcoff}}, \code{\link{crashi}}. } \examples{ moffset(alcoff, 3, 2, "*") # Some day's data is moved to previous day. Rcim(alcoff, 3 + 1, 2 + 1) # Data does not move as much.
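# The wrapping is easiest to see on a small labelled matrix
# (a sketch with made-up data; 'toy' is not a VGAM data set):
(toy <- matrix(1:12, 3, 4, dimnames = list(paste0("r", 1:3), paste0("c", 1:4))))
moffset(toy, 1, 1, "*")  # Element (2, 2) of 'toy' becomes its (1, 1) element; values wrap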
alcoff # Original data moffset(alcoff, 3, 2, "*") - Rcim(alcoff, 3+1, 2+1) # Note the differences # An 'effective day' data set: alcoff.e <- moffset(alcoff, roffset = "6", postfix = "*") fit.o <- rcim(alcoff) # default baselines are first row and col fit.e <- rcim(alcoff.e) # default baselines are first row and col \dontrun{ par(mfrow = c(2, 2), mar = c(9,4,2,1)) plot(fit.o, rsub = "Not very interpretable", csub = "Not very interpretable") plot(fit.e, rsub = "More interpretable", csub = "More interpretable") } # Some checking all.equal(moffset(alcoff), alcoff) # Should be no change moffset(alcoff, 1, 1, "*") moffset(alcoff, 2, 3, "*") moffset(alcoff, 1, 0, "*") moffset(alcoff, 0, 1, "*") moffset(alcoff, "6", "Mon", "*") # This one is good # Customise row and column baselines fit2 <- rcim(Rcim(alcoff.e, rbaseline = "11", cbaseline = "Mon*")) } VGAM/man/model.matrixvlm.Rd0000644000176000001440000000636412136651105015231 0ustar ripleyusers\name{model.matrixvlm} \alias{model.matrixvlm} \title{Construct the Design Matrix of a VLM Object} \usage{ model.matrixvlm(object, type = c("vlm", "lm", "lm2", "bothlmlm2"), lapred.index = NULL, \dots) } \arguments{ \item{object}{an object of a class that inherits from the \emph{vector linear model} (VLM). } \item{type}{Type of design matrix returned. The first is the default. The value \code{"vlm"} is the VLM model matrix corresponding to the \code{formula} argument. The value \code{"lm"} is the LM model matrix corresponding to the \code{formula} argument. The value \code{"lm2"} is the second (LM) model matrix corresponding to the \code{form2} argument. The value \code{"bothlmlm2"} means both LM and VLM model matrices. } \item{lapred.index}{ Single integer. The index for a linear/additive predictor, it must have a value from the set \code{1:M}, and \code{type = "lm"} must be assigned. Then it returns a subset of the VLM matrix corresponding to the \code{lapred.index}th linear/additive predictor; this is a LM-type matrix. } \item{\dots}{further arguments passed to or from other methods. These include \code{data} (which is a data frame created with \code{\link{model.framevlm}}), \code{contrasts.arg}, and \code{xlev}. See \code{\link[stats]{model.matrix}} for more information. } } \description{ Creates a design matrix. Two types can be returned: a large one (class \code{"vlm"} or one that inherits from this such as \code{"vglm"}) or a small one (such as returned if it were of class \code{"lm"}). } \details{ This function creates a design matrix from \code{object}. This can be a small LM object or a big VLM object (default). The latter is constructed from the former and the constraint matrices. This code implements \emph{smart prediction} (see \code{\link{smartpred}}). } \value{ The design matrix for a regression model with the specified formula and data. If \code{type = "bothlmlm2"} then a list is returned with components \code{"X"} and \code{"Xm2"}. } \references{ Yee, T. W. and Hastie, T. J. (2003) Reduced-rank vector generalized linear models. \emph{Statistical Modelling}, \bold{3}, 15--41. Chambers, J. M. (1992) \emph{Data for models.} Chapter 3 of \emph{Statistical Models in S} eds J. M. Chambers and T. J. Hastie, Wadsworth & Brooks/Cole. } \seealso{ \code{\link[stats]{model.matrix}}, \code{\link{model.framevlm}}, \code{\link{predictvglm}}, \code{\link{smartpred}}. 
} \examples{ # Illustrates smart prediction pneumo <- transform(pneumo, let = log(exposure.time)) fit <- vglm(cbind(normal, mild, severe) ~ poly(c(scale(let)), 2), multinomial, data = pneumo, trace = TRUE, x = FALSE) class(fit) fit@x # Not saved on the object model.matrix(fit) model.matrix(fit, lapred.index = 1, type = "lm") model.matrix(fit, lapred.index = 2, type = "lm") (Check1 <- head(model.matrix(fit, type = "lm"))) (Check2 <- model.matrix(fit, data = head(pneumo), type = "lm")) all.equal(c(Check1), c(Check2)) q0 <- head(predict(fit)) q1 <- head(predict(fit, newdata = pneumo)) q2 <- predict(fit, newdata = head(pneumo)) all.equal(q0, q1) # Should be TRUE all.equal(q1, q2) # Should be TRUE } \keyword{models} VGAM/man/model.framevlm.Rd0000644000176000001440000000440212136651105015006 0ustar ripleyusers\name{model.framevlm} \alias{model.framevlm} \title{Construct the Model Frame of a VLM Object} \usage{ model.framevlm(object, setupsmart = TRUE, wrapupsmart = TRUE, \dots) } \arguments{ \item{object}{a model object from the \pkg{VGAM} \R package that inherits from a \emph{vector linear model} (VLM), e.g., a model of class \code{"vglm"}.} \item{\dots}{further arguments such as \code{data}, \code{na.action}, \code{subset}. See \code{\link[stats]{model.frame}} for more information on these. } \item{setupsmart, wrapupsmart}{ Logical. Arguments to determine whether to use smart prediction. } } \description{ This function returns a \code{\link{data.frame}} with the variables. It is applied to an object which inherits from class \code{"vlm"} (e.g., a fitted model of class \code{"vglm"}). } \details{Since \code{object} is an object which inherits from class \code{"vlm"} (e.g., a fitted model of class \code{"vglm"}), the method will either return the saved model frame used when fitting the model (if any, selected by argument \code{model = TRUE}) or pass the call used when fitting on to the default method. This code implements \emph{smart prediction} (see \code{\link{smartpred}}). } \value{ A \code{\link{data.frame}} containing the variables used in the \code{object} plus those specified in \code{\dots}. } \seealso{ \code{\link[stats]{model.frame}}, \code{\link{model.matrixvlm}}, \code{\link{predictvglm}}, \code{\link{smartpred}}. } \references{ Chambers, J. M. (1992) \emph{Data for models.} Chapter 3 of \emph{Statistical Models in S} eds J. M. Chambers and T. J. Hastie, Wadsworth & Brooks/Cole. } \examples{ # Illustrates smart prediction pneumo <- transform(pneumo, let = log(exposure.time)) fit <- vglm(cbind(normal, mild, severe) ~ poly(c(scale(let)), 2), multinomial, data = pneumo, trace = TRUE, x = FALSE) class(fit) check1 <- head(model.frame(fit)) check1 check2 <- model.frame(fit, data = head(pneumo)) check2 all.equal(unlist(check1), unlist(check2)) # Should be TRUE q0 <- head(predict(fit)) q1 <- head(predict(fit, newdata = pneumo)) q2 <- predict(fit, newdata = head(pneumo)) all.equal(q0, q1) # Should be TRUE all.equal(q1, q2) # Should be TRUE } \keyword{models} VGAM/man/mlogit.Rd0000644000176000001440000000504612136651105013376 0ustar ripleyusers\name{mlogit} \alias{mlogit} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Multinomial Logit Link Function } \description{ Computes the mlogit transformation, including its inverse and the first two derivatives. } \usage{ mlogit(theta, refLevel = "last", M = NULL, whitespace = FALSE, bvalue = NULL, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) } %- maybe also 'usage' for other objects documented here.
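% A numerical sketch of the transformation (made-up values):
% probs <- matrix(c(0.2, 0.3, 0.5), 1, 3)  # one row of 3-class probabilities (M = 2)
% etas <- log(probs[, 1:2] / probs[, 3])   # mlogit() with the last level as reference
% exp(etas) / (1 + sum(exp(etas)))         # the inverse recovers probs[, 1:2]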
\arguments{ \item{theta}{ Numeric or character. See below for further details. } \item{refLevel, M, whitespace}{ See \code{\link{multinomial}}. } \item{bvalue}{ See \code{\link{Links}}. } \item{inverse, deriv, short, tag}{ Details at \code{\link{Links}}. } } \details{ The \code{mlogit()} link function is a generalization of the \code{\link{logit}} link to \eqn{M} levels/classes. It forms the basis of the \code{\link{multinomial}} logit model. } \value{ For \code{mlogit} with \code{deriv = 0}, the mlogit of \code{theta}, i.e., \code{log(theta[,j]/theta[,M+1])} when \code{inverse = FALSE}, and if \code{inverse = TRUE} then \code{exp(theta[,j])/(1+rowSums(exp(theta)))}. For \code{deriv = 1}, then the function returns \emph{d} \code{theta} / \emph{d} \code{eta} as a function of \code{theta} if \code{inverse = FALSE}, else if \code{inverse = TRUE} then it returns the reciprocal. Here, all logarithms are natural logarithms, i.e., to base \emph{e}. } \references{ McCullagh, P. and Nelder, J. A. (1989) \emph{Generalized Linear Models}, 2nd ed. London: Chapman & Hall. } \author{ Thomas W. Yee } \note{ Numerical instability may occur when \code{theta} is close to 1 or 0 (for \code{mlogit}). One way of overcoming this is to use, e.g., \code{bvalue}. Currently \code{care.exp()} is used to avoid \code{NA}s being returned if the probability is too close to 1. } \seealso{ \code{\link{Links}}, \code{\link{multinomial}}, \code{\link{logit}}. } \examples{ pneumo <- transform(pneumo, let = log(exposure.time)) fit <- vglm(cbind(normal, mild, severe) ~ let, multinomial, trace = TRUE, pneumo) # For illustration only! fitted(fit) predict(fit) mlogit(fitted(fit)) mlogit(fitted(fit)) - predict(fit) # Should be all 0s mlogit(predict(fit), inverse = TRUE) mlogit(predict(fit), inverse = TRUE, refLevel = 1) # For illustration only mlogit(predict(fit), inverse = TRUE) - fitted(fit) # Should be all 0s mlogit(fitted(fit), deriv = 1) mlogit(fitted(fit), deriv = 2) } \keyword{math} \keyword{models} \keyword{regression} VGAM/man/mix2poisson.Rd0000644000176000001440000001207512136651105014375 0ustar ripleyusers\name{mix2poisson} \alias{mix2poisson} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Mixture of Two Poisson Distributions } \description{ Estimates the three parameters of a mixture of two Poisson distributions by maximum likelihood estimation. } \usage{ mix2poisson(lphi = "logit", llambda = "loge", iphi = 0.5, il1 = NULL, il2 = NULL, qmu = c(0.2, 0.8), nsimEIM = 100, zero = 1) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lphi, llambda}{ Link functions for the parameter \eqn{\phi}{phi} and \eqn{\lambda}{lambda}. See \code{\link{Links}} for more choices. } % \item{ephi, el1, el2}{ % ephi = list(), el1 = list(), el2 = list(), % List. Extra argument for each of the links. % See \code{earg} in \code{\link{Links}} for general information. % } \item{iphi}{ Initial value for \eqn{\phi}{phi}, whose value must lie between 0 and 1. } \item{il1, il2}{ Optional initial value for \eqn{\lambda_1}{lambda1} and \eqn{\lambda_2}{lambda2}. These values must be positive. The default is to compute initial values internally using the argument \code{qmu}. % If these arguments are supplied then practical experience % suggests they should be quite well-separated. } \item{qmu}{ Vector with two values giving the probabilities relating to the sample quantiles for obtaining initial values for \eqn{\lambda_1}{lambda1} and \eqn{\lambda_2}{lambda2}. 
The two values are fed in as the \code{probs} argument into \code{\link[stats]{quantile}}. } \item{nsimEIM, zero}{ See \code{\link{CommonVGAMffArguments}}. } } \details{ The probability function can be loosely written as \deqn{P(Y=y) = \phi \, Poisson(\lambda_1) + (1-\phi) \, Poisson(\lambda_2)}{% P(Y=y) = phi * Poisson(lambda1) + (1-phi) * Poisson(lambda2)} where \eqn{\phi}{phi} is the probability an observation belongs to the first group, and \eqn{y=0,1,2,\ldots}{y=0,1,2,...}. The parameter \eqn{\phi}{phi} satisfies \eqn{0 < \phi < 1}{0 < phi < 1}. The mean of \eqn{Y} is \eqn{\phi \lambda_1 + (1-\phi) \lambda_2}{phi*lambda1 + (1-phi)*lambda2} and this is returned as the fitted values. By default, the three linear/additive predictors are \eqn{(logit(\phi), \log(\lambda_1), \log(\lambda_2))^T}{(logit(phi), log(lambda1), log(lambda2))^T}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } % \references{ ~put references to the literature/web site here ~ } \section{Warning }{ This \pkg{VGAM} family function requires care for a successful application. In particular, good initial values are required because of the presence of local solutions. Therefore running this function with several different combinations of arguments such as \code{iphi}, \code{il1}, \code{il2}, \code{qmu} is highly recommended. Graphical methods such as \code{\link[graphics]{hist}} can be used as an aid. With grouped data (i.e., using the \code{weights} argument) one has to use a large value of \code{nsimEIM}; see the example below. } \author{ T. W. Yee } \note{ The response must be integer-valued since \code{\link[stats]{dpois}} is invoked. Fitting this model successfully to data can be difficult due to local solutions and ill-conditioned data. It pays to fit the model several times with different initial values, and check that the best fit looks reasonable. Plotting the results is recommended. This function works better as \eqn{\lambda_1}{lambda1} and \eqn{\lambda_2}{lambda2} become more different. The default control argument \code{trace = TRUE} is to encourage monitoring convergence. } \seealso{ \code{\link[stats:Poisson]{rpois}}, \code{\link{poissonff}}, \code{\link{mix2normal1}}. 
} \examples{ \dontrun{ # Example 1: simulated data nn <- 1000 mu1 <- exp(2.5) # also known as lambda1 mu2 <- exp(3) (phi <- logit(-0.5, inverse = TRUE)) mdata <- data.frame(y = rpois(nn, ifelse(runif(nn) < phi, mu1, mu2))) fit <- vglm(y ~ 1, mix2poisson, mdata) coef(fit, matrix = TRUE) # Compare the results with the truth round(rbind('Estimated' = Coef(fit), 'Truth' = c(phi, mu1, mu2)), dig = 2) ty <- with(mdata, table(y)) plot(names(ty), ty, type = "h", main = "Orange=estimate, blue=truth", ylab = "Frequency", xlab = "y") abline(v = Coef(fit)[-1], lty = 2, col = "orange", lwd = 2) abline(v = c(mu1, mu2), lty = 2, col = "blue", lwd = 2) # Example 2: London Times data (Lange, 1997, p.31) ltdata1 <- data.frame(deaths = 0:9, freq = c(162, 267, 271, 185, 111, 61, 27, 8, 3, 1)) ltdata2 <- data.frame(y = with(ltdata1, rep(deaths, freq))) # Usually this does not work well unless nsimEIM is large fit <- vglm(deaths ~ 1, weight = freq, data = ltdata1, mix2poisson(iphi = 0.3, il1 = 1, il2 = 2.5, nsimEIM = 5000)) # This works better in general fit <- vglm(y ~ 1, mix2poisson(iphi = 0.3, il1 = 1, il2 = 2.5), ltdata2) coef(fit, matrix = TRUE) Coef(fit) } } \keyword{models} \keyword{regression} VGAM/man/mix2normal1.Rd0000644000176000001440000001464212136651105014256 0ustar ripleyusers\name{mix2normal1} \alias{mix2normal1} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Mixture of Two Univariate Normal Distributions } \description{ Estimates the five parameters of a mixture of two univariate normal distributions by maximum likelihood estimation. } \usage{ mix2normal1(lphi = "logit", lmu = "identity", lsd = "loge", iphi = 0.5, imu1 = NULL, imu2 = NULL, isd1 = NULL, isd2 = NULL, qmu = c(0.2, 0.8), equalsd = TRUE, nsimEIM = 100, zero = 1) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lphi,lmu,lsd}{ Link functions for the parameters \eqn{\phi}{phi}, \eqn{\mu}{mu}, and \eqn{\sigma}{sd}. See \code{\link{Links}} for more choices. } % \item{ephi, emu1, emu2, esd1, esd2}{ % List. Extra argument for each of the links. % See \code{earg} in \code{\link{Links}} for general information. % If \code{equalsd = TRUE} then \code{esd1} must equal \code{esd2}. % } \item{iphi}{ Initial value for \eqn{\phi}{phi}, whose value must lie between 0 and 1. } \item{imu1, imu2}{ Optional initial value for \eqn{\mu_1}{mu1} and \eqn{\mu_2}{mu2}. The default is to compute initial values internally using the argument \code{qmu}. } \item{isd1, isd2}{ Optional initial value for \eqn{\sigma_1}{sd1} and \eqn{\sigma_2}{sd2}. The default is to compute initial values internally based on the argument \code{qmu}. Currently these are not great, therefore using these arguments where practical is a good idea. } \item{qmu}{ Vector with two values giving the probabilities relating to the sample quantiles for obtaining initial values for \eqn{\mu_1}{mu1} and \eqn{\mu_2}{mu2}. The two values are fed in as the \code{probs} argument into \code{\link[stats]{quantile}}. } \item{equalsd}{ Logical indicating whether the two standard deviations should be constrained to be equal. If \code{TRUE} then the appropriate constraint matrices will be used. } \item{nsimEIM}{ See \code{\link{CommonVGAMffArguments}}. } \item{zero}{ An integer specifying which linear/additive predictor is modelled as intercepts only. If given, the value or values must be from the set \eqn{\{1,2,\ldots,5\}}{1,2,...,5}. 
The default is the first one only, meaning \eqn{\phi}{phi} is a single parameter even when there are explanatory variables. Set \code{zero = NULL} to model all linear/additive predictors as functions of the explanatory variables. See \code{\link{CommonVGAMffArguments}} for more information. } } \details{ The probability density function can be loosely written as \deqn{f(y) = \phi \, N(\mu_1,\sigma_1) + (1-\phi) \, N(\mu_2, \sigma_2)}{% f(y) = phi * N(mu1, sd1) + (1-phi) * N(mu2, sd2)} where \eqn{\phi}{phi} is the probability an observation belongs to the first group. The parameters \eqn{\mu_1}{mu1} and \eqn{\mu_2}{mu2} are the means, and \eqn{\sigma_1}{sd1} and \eqn{\sigma_2}{sd2} are the standard deviations. The parameter \eqn{\phi}{phi} satisfies \eqn{0 < \phi < 1}{0 < phi < 1}. The mean of \eqn{Y} is \eqn{\phi \mu_1 + (1-\phi) \mu_2}{phi*mu1 + (1-phi)*mu2} and this is returned as the fitted values. By default, the five linear/additive predictors are \eqn{(logit(\phi), \mu_1, \log(\sigma_1), \mu_2, \log(\sigma_2))^T}{ (logit(phi), mu1, log(sd1), mu2, log(sd2))^T}. If \code{equalsd = TRUE} then \eqn{\sigma_1 = \sigma_2}{sd1=sd2} is enforced. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ McLachlan, G. J. and Peel, D. (2000) \emph{Finite Mixture Models}. New York: Wiley. Everitt, B. S. and Hand, D. J. (1981) \emph{Finite Mixture Distributions}. London: Chapman & Hall. } \section{Warning }{ Numerical problems can occur and half-stepping is not uncommon. If failure to converge occurs, try inputting better initial values, e.g., by using \code{iphi}, \code{qmu}, \code{imu1}, \code{imu2}, \code{isd1}, \code{isd2}, etc. % This function uses a quasi-Newton update for the working weight matrices % (BFGS variant). It builds up approximations to the weight matrices, % and currently the code is not fully tested. % In particular, results based on the weight matrices (e.g., from % \code{vcov} and \code{summary}) may be quite incorrect, especially when % the arguments \code{weights} is used to input prior weights. This \pkg{VGAM} family function should be used with care. } \author{ T. W. Yee } \note{ Fitting this model successfully to data can be difficult due to numerical problems and ill-conditioned data. It pays to fit the model several times with different initial values and check that the best fit looks reasonable. Plotting the results is recommended. This function works better as \eqn{\mu_1}{mu1} and \eqn{\mu_2}{mu2} become more different. Convergence can be slow, especially when the two component distributions are not well separated. The default control argument \code{trace = TRUE} is to encourage monitoring convergence. Having \code{equalsd = TRUE} often makes the overall optimization problem easier. } \seealso{ \code{\link{normal1}}, \code{\link[stats:Normal]{Normal}}, \code{\link{mix2poisson}}. 
} \examples{ \dontrun{ mu1 <- 99; mu2 <- 150; nn <- 1000 sd1 <- sd2 <- exp(3) (phi <- logit(-1, inverse = TRUE)) mdata <- data.frame(y = ifelse(runif(nn) < phi, rnorm(nn, mu1, sd1), rnorm(nn, mu2, sd2))) fit <- vglm(y ~ 1, mix2normal1(equalsd = TRUE), mdata) # Compare the results cfit <- coef(fit) round(rbind('Estimated' = c(logit(cfit[1], inverse = TRUE), cfit[2], exp(cfit[3]), cfit[4]), 'Truth' = c(phi, mu1, sd1, mu2)), dig = 2) # Plot the results xx <- with(mdata, seq(min(y), max(y), len = 200)) plot(xx, (1-phi)*dnorm(xx, mu2, sd2), type = "l", xlab = "y", main = "Orange = estimate, blue = truth", col = "blue", ylab = "Density") phi.est <- logit(coef(fit)[1], inverse = TRUE) sd.est <- exp(coef(fit)[3]) lines(xx, phi*dnorm(xx, mu1, sd1), col = "blue") lines(xx, phi.est * dnorm(xx, Coef(fit)[2], sd.est), col = "orange") lines(xx, (1-phi.est) * dnorm(xx, Coef(fit)[4], sd.est), col = "orange") abline(v = Coef(fit)[c(2,4)], lty = 2, col = "orange") abline(v = c(mu1, mu2), lty = 2, col = "blue") } } \keyword{models} \keyword{regression} VGAM/man/mix2exp.Rd0000644000176000001440000001030112136651105013465 0ustar ripleyusers\name{mix2exp} \alias{mix2exp} %- Also NEED an '\alias' for EACH other topic documented here. %- Adapted from mix2poisson.Rd \title{ Mixture of Two Exponential Distributions } \description{ Estimates the three parameters of a mixture of two exponential distributions by maximum likelihood estimation. } \usage{ mix2exp(lphi = "logit", llambda = "loge", iphi = 0.5, il1 = NULL, il2 = NULL, qmu = c(0.8, 0.2), nsimEIM = 100, zero = 1) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lphi, llambda}{ Link functions for the parameters \eqn{\phi}{phi} and \eqn{\lambda}{lambda}. The latter is the rate parameter and note that the mean of an ordinary exponential distribution is \eqn{1 / \lambda}. See \code{\link{Links}} for more choices. } \item{iphi, il1, il2}{ Initial value for \eqn{\phi}{phi}, and optional initial value for \eqn{\lambda_1}{lambda1} and \eqn{\lambda_2}{lambda2}. The last two have values that must be positive. The default is to compute initial values internally using the argument \code{qmu}. } \item{qmu}{ Vector with two values giving the probabilities relating to the sample quantiles for obtaining initial values for \eqn{\lambda_1}{lambda1} and \eqn{\lambda_2}{lambda2}. The two values are fed in as the \code{probs} argument into \code{\link[stats]{quantile}}. } \item{nsimEIM, zero}{ See \code{\link{CommonVGAMffArguments}}. } } \details{ The probability function can be loosely written as \deqn{P(Y=y) = \phi\,Exponential(\lambda_1) + (1-\phi)\,Exponential(\lambda_2)}{% P(Y=y) = phi * Exponential(lambda1) + (1-phi) * Exponential(lambda2)} where \eqn{\phi}{phi} is the probability an observation belongs to the first group, and \eqn{y>0}. The parameter \eqn{\phi}{phi} satisfies \eqn{0 < \phi < 1}{0 < phi < 1}. The mean of \eqn{Y} is \eqn{\phi / \lambda_1 + (1-\phi) / \lambda_2}{phi/lambda1 + (1-phi)/lambda2} and this is returned as the fitted values. By default, the three linear/additive predictors are \eqn{(logit(\phi), \log(\lambda_1), \log(\lambda_2))^T}{(logit(phi), log(lambda1), log(lambda2))^T}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } % \references{ ~put references to the literature/web site here ~ } \section{Warning }{ This \pkg{VGAM} family function requires care for a successful application. 
In particular, good initial values are required because of the presence of local solutions. Therefore running this function with several different combinations of arguments such as \code{iphi}, \code{il1}, \code{il2}, \code{qmu} is highly recommended. Graphical methods such as \code{\link[graphics]{hist}} can be used as an aid. } \author{ T. W. Yee } \note{ Fitting this model successfully to data can be difficult due to local solutions, uniqueness problems and ill-conditioned data. It pays to fit the model several times with different initial values and check that the best fit looks reasonable. Plotting the results is recommended. This function works better as \eqn{\lambda_1}{lambda1} and \eqn{\lambda_2}{lambda2} become more different. The default control argument \code{trace = TRUE} is to encourage monitoring convergence. } \seealso{ \code{\link[stats:Exponential]{rexp}}, \code{\link{exponential}}, \code{\link{mix2poisson}}. } \examples{ \dontrun{ lambda1 <- exp(1); lambda2 <- exp(3) (phi <- logit(-1, inverse = TRUE)) mdata <- data.frame(y1 = rexp(nn <- 1000, lambda1)) mdata <- transform(mdata, y2 = rexp(nn, lambda2)) mdata <- transform(mdata, Y = ifelse(runif(nn) < phi, y1, y2)) fit <- vglm(Y ~ 1, mix2exp, mdata, trace = TRUE) coef(fit, matrix = TRUE) # Compare the results with the truth round(rbind('Estimated' = Coef(fit), 'Truth' = c(phi, lambda1, lambda2)), dig = 2) with(mdata, hist(Y, prob = TRUE, main = "Orange = estimate, blue = truth")) abline(v = 1 / Coef(fit)[c(2, 3)], lty = 2, col = "orange", lwd = 2) abline(v = 1 / c(lambda1, lambda2), lty = 2, col = "blue", lwd = 2) } } \keyword{models} \keyword{regression} VGAM/man/micmen.Rd0000644000176000001440000001114612136651105013351 0ustar ripleyusers\name{micmen} \alias{micmen} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Michaelis-Menten Model } \description{ Fits a Michaelis-Menten nonlinear regression model. } \usage{ micmen(rpar = 0.001, divisor = 10, init1 = NULL, init2 = NULL, imethod = 1, oim = TRUE, link1 = "identity", link2 = "identity", firstDeriv = c("nsimEIM", "rpar"), probs.x = c(0.15, 0.85), nsimEIM = 500, dispersion = 0, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{rpar}{ Numeric. Initial positive ridge parameter. This is used to create positive-definite weight matrices. } \item{divisor}{ Numerical. The divisor used to divide the ridge parameter at each iteration until it is very small but still positive. The value of \code{divisor} should be greater than one. } \item{init1, init2}{ Numerical. Optional initial value for the first and second parameters, respectively. The default is to use a self-starting value. } \item{link1, link2}{ Parameter link function applied to the first and second parameters, respectively. See \code{\link{Links}} for more choices. } \item{dispersion}{ Numerical. Dispersion parameter. } \item{firstDeriv}{ Character. Algorithm for computing the first derivatives and working weights. The first is the default. } \item{imethod, probs.x}{ See \code{\link{CommonVGAMffArguments}} for more information. } \item{nsimEIM}{ See \code{\link{CommonVGAMffArguments}} for more information. } \item{oim}{ Use the OIM? See \code{\link{CommonVGAMffArguments}} for more information. } \item{zero}{ An integer-valued vector specifying which linear/additive predictors are modelled as intercepts only. The values must be from the set \{1,2\}. A \code{NULL} means none. See \code{\link{CommonVGAMffArguments}} for more information. 
} } \details{ The Michaelis-Menten model is given by \deqn{E(Y_i) = (\theta_1 u_i) / (\theta_2 + u_i)}{% E(Y_i) = theta1 * u_i / (theta2 + u_i)} where \eqn{\theta_1}{theta1} and \eqn{\theta_2}{theta2} are the two parameters. The relationship between iteratively reweighted least squares and the Gauss-Newton algorithm is given in Wedderburn (1974). However, the algorithm used by this family function is different. Details are given at the Author's web site. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Seber, G. A. F. and Wild, C. J. (1989) \emph{Nonlinear Regression}, New York: Wiley. Wedderburn, R. W. M. (1974) Quasi-likelihood functions, generalized linear models, and the Gauss-Newton method. \emph{Biometrika}, \bold{61}, 439--447. Bates, D. M. and Watts, D. G. (1988) \emph{Nonlinear Regression Analysis and Its Applications}, New York: Wiley. Documentation accompanying the \pkg{VGAM} package at \url{http://www.stat.auckland.ac.nz/~yee} contains further information and examples. } \author{ T. W. Yee } \note{ The regressor values \eqn{u_i}{u_i} are inputted as the RHS of the \code{form2} argument. It should just be a simple term; no smart prediction is used. It should be just a single vector; therefore omit the intercept term. The LHS of the formula \code{form2} is ignored. To predict the response at new values of \eqn{u_i}{u_i} one must assign these values to the \code{@extra$Xm2} slot in the fitted object, e.g., see the example below. Numerical problems may occur. If so, try setting some initial values for the parameters. In the future, several self-starting initial values will be implemented. } \seealso{ \code{\link{enzyme}}. % \code{skira}. } \section{Warning }{ This function is not (nor could ever be) entirely reliable. Plotting the fitted function and monitoring convergence is recommended. } \examples{ fit <- vglm(velocity ~ 1, micmen, enzyme, trace = TRUE, crit = "coef", form2 = ~ conc - 1) summary(fit) \dontrun{ plot(velocity ~ conc, enzyme, xlab = "concentration", las = 1, col = "blue", main = "Michaelis-Menten equation for the enzyme data", ylim = c(0, max(velocity)), xlim = c(0, max(conc))) points(fitted(fit) ~ conc, enzyme, col = "red", pch = "+", cex = 1.5) # This predicts the response at a finer grid: newenzyme <- data.frame(conc = seq(0, max(with(enzyme, conc)), len = 200)) fit@extra$Xm2 <- newenzyme$conc # This assignment is needed for prediction lines(predict(fit, newenzyme, "response") ~ conc, newenzyme, col = "red") } } \keyword{models} \keyword{regression} VGAM/man/meplot.Rd0000644000176000001440000001000612136651105013373 0ustar ripleyusers\name{meplot} \alias{meplot} \alias{meplot.default} \alias{meplot.vlm} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Mean Excess Plot } \description{ Mean excess plot (also known as a mean residual life plot), a diagnostic plot for the generalized Pareto distribution (GPD). } \usage{ meplot(object, ...) meplot.default(y, main = "Mean Excess Plot", xlab = "Threshold", ylab = "Mean Excess", lty = c(2, 1:2), conf = 0.95, col = c("blue", "black", "blue"), type = "l", ...) meplot.vlm(object, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{y}{ A numerical vector. \code{NA}s etc. are not allowed.} \item{main, xlab, ylab}{Character. Overall title for the plot, and titles for the x- and y-axes. } \item{lty}{Line type.
The second value is for the mean excess value, the first and third values are for the envelope surrounding the confidence interval. } \item{conf}{Confidence level. The default results in approximate 95 percent confidence intervals for each mean excess value. } \item{col}{Colour of the three lines. } \item{type}{Type of plot. The default means lines are joined between the mean excesses and also the upper and lower limits of the confidence intervals. } \item{object}{ An object that inherits class \code{"vlm"}, usually of class \code{\link{vglm-class}} or \code{\link{vgam-class}}. } \item{\dots}{ Graphical argument passed into \code{\link[graphics]{plot}}. See \code{\link[graphics]{par}} for an exhaustive list. The arguments \code{xlim} and \code{ylim} are particularly useful. } } \details{ If \eqn{Y} has a GPD with scale parameter \eqn{\sigma}{sigma} and shape parameter \eqn{\xi<1}{xi<1}, and if \eqn{u>0}, then \deqn{E(Y-u|Y>u) = \frac{\sigma+\xi u}{1-\xi}.}{% E(Y-u|Y>u) = (sigma + xi*u) / (1 - xi).} It is a linear function in \eqn{u}, the threshold. Note that \eqn{Y-u} is called the \emph{excess} and values of \eqn{Y} greater than \eqn{u} are called \emph{exceedances}. The empirical version used by these functions is to use sample means to estimate the left hand side of the equation. Values of \eqn{u} in the plot are the values of \eqn{y} itself. If the plot is roughly a straight line then the GPD is a good fit; this plot can be used to select an appropriate threshold value. See \code{\link{gpd}} for more details. If the plot is flat then the data may be exponential, and if it is curved then it may be Weibull or gamma. There is often a lot of variance/fluctuation at the RHS of the plot due to fewer observations. The function \code{meplot} is generic, and \code{meplot.default} and \code{meplot.vlm} are some method functions for mean excess plots. } \value{ A list is returned invisibly with the following components. \item{threshold }{The x axis values. } \item{meanExcess }{The y axis values. Each value is a sample mean minus a value \eqn{u}. } \item{plusminus }{The amount which is added or subtracted from the mean excess to give the confidence interval. The last value is a \code{NA} because it is based on one observation. } } \references{ Davison, A. C. and Smith, R. L. (1990) Models for exceedances over high thresholds (with discussion). \emph{Journal of the Royal Statistical Society, Series B, Methodological}, \bold{52}, 393--442. Coles, S. (2001) \emph{An Introduction to Statistical Modeling of Extreme Values}. London: Springer-Verlag. } \author{ T. W. Yee } \note{ The function is designed for speed and not accuracy, therefore huge data sets with extremely large values may cause failure (the function \code{\link[base]{cumsum}} is used). Ties may not be well handled. } \seealso{ \code{\link{gpd}}. } \examples{ \dontrun{meplot(with(venice90, sealevel), las = 1) -> ii names(ii) abline(h = ii$meanExcess[1], col = "orange", lty = "dashed") par(mfrow = c(2, 2)) for (ii in 1:4) meplot(rgpd(1000), col = c("orange", "blue", "orange")) } } \keyword{models} \keyword{regression} VGAM/man/mccullagh89.Rd0000644000176000001440000000676612136651105014225 0ustar ripleyusers\name{mccullagh89} \alias{mccullagh89} %- Also NEED an '\alias' for EACH other topic documented here. \title{McCullagh (1989) Distribution Family Function} \description{ Estimates the two parameters of the McCullagh (1989) distribution by maximum likelihood estimation.
} \usage{ mccullagh89(ltheta = "rhobit", lnu = logoff(offset = 0.5), itheta = NULL, inu = NULL, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{ltheta, lnu}{ Link functions for the \eqn{\theta}{theta} and \eqn{\nu}{nu} parameters. See \code{\link{Links}} for general information. } \item{itheta, inu}{ Numeric. Optional initial values for \eqn{\theta}{theta} and \eqn{\nu}{nu}. The default is to internally compute them. } \item{zero}{ An integer-valued vector specifying which linear/additive predictors are modelled as intercepts only. The default is none of them. If used, choose one value from the set \{1,2\}. } } \details{ The McCullagh (1989) distribution has density function \deqn{f(y;\theta,\nu) = \frac{ \{ 1-y^2 \}^{\nu-\frac12}} { (1-2\theta y + \theta^2)^{\nu} \mbox{Beta}(\nu+\frac12, \frac12)}}{% f(y;theta,nu) = (1-y^2)^(nu-0.5) / [ (1 - 2*theta*y+theta^2)^nu * Beta(nu+0.5, 0.5)]} where \eqn{-1 < y < 1} and \eqn{-1 < \theta < 1}{-1 < theta < 1}. This distribution is equation (1) in that paper. The parameter \eqn{\nu}{nu} satisfies \eqn{\nu > -1/2}{nu > -1/2}, therefore the default is to use a log-offset link with offset equal to 0.5, i.e., \eqn{\eta_2=\log(\nu+0.5)}{eta_2=log(nu+0.5)}. The mean of \eqn{Y} is \eqn{\nu \theta / (1+\nu)}{nu*theta/(1+nu)}, and these are returned as the fitted values. This distribution is related to the Leipnik distribution (see Johnson et al. (1995)), is related to ultraspherical functions, and under certain conditions, arises as an exit distribution for Brownian motion. Fisher scoring is implemented here and it uses a diagonal matrix so the parameters are globally orthogonal in the Fisher information sense. McCullagh (1989) also states that, to some extent, \eqn{\theta}{theta} and \eqn{\nu}{nu} have the properties of a location parameter and a precision parameter, respectively. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, \code{\link{rrvglm}} and \code{\link{vgam}}. } \references{ McCullagh, P. (1989) Some statistical properties of a family of continuous univariate distributions. \emph{Journal of the American Statistical Association}, \bold{84}, 125--129. Johnson, N. L. and Kotz, S. and Balakrishnan, N. (1995) \emph{Continuous Univariate Distributions}, 2nd edition, Volume 2, New York: Wiley. (pages 612--617). } \author{ T. W. Yee } \note{ Convergence may be slow or fail unless the initial values are reasonably close. If a failure occurs, try assigning the argument \code{inu} and/or \code{itheta}. Figure 1 of McCullagh (1989) gives a broad range of densities for different values of \eqn{\theta}{theta} and \eqn{\nu}{nu}, and this could be consulted for obtaining reasonable initial values if all else fails. } \seealso{ \code{\link{leipnik}}, \code{\link{rhobit}}, \code{\link{logoff}}. } %\section{Warning }{ %} \examples{ mdata <- data.frame(y = rnorm(n = 1000, sd = 0.2)) # Limit as theta = 0, nu = Inf fit <- vglm(y ~ 1, mccullagh89, mdata, trace = TRUE) head(fitted(fit)) with(mdata, mean(y)) summary(fit) coef(fit, matrix = TRUE) Coef(fit) } \keyword{models} \keyword{regression} VGAM/man/mbinomial.Rd0000644000176000001440000001307512136651105014053 0ustar ripleyusers\name{mbinomial} \alias{mbinomial} %- Also NEED an '\alias' for EACH other topic documented here. \title{ The Matched Binomial Distribution Family Function } \description{ Estimation of a binomial regression in a matched case-control study.
} \usage{ mbinomial(mvar = NULL, link = "logit", parallel = TRUE, smallno = .Machine$double.eps^(3/4)) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{mvar}{ Formula specifying the matching variable. This shows which observation belongs to which matching set. The intercept should be suppressed from the formula, and the term must be a \code{\link[base]{factor}}. } \item{link}{ Parameter link function for the probability parameter. % called \eqn{p} below. Information for these is at \code{\link{Links}} and \code{\link{CommonVGAMffArguments}}. } \item{parallel}{ This should always be set \code{TRUE}, otherwise there will be too many parameters to estimate. See \code{\link{CommonVGAMffArguments}} for more information. } \item{smallno}{ Numeric, a small positive value. For a specific observation, used to nullify the linear/additive predictors that are not needed. } } \details{ By default, this \pkg{VGAM} family function fits a logistic regression model to a binary response from a matched case-control study. Here, each case \eqn{(Y = 1)} is matched with one or more controls \eqn{(Y = 0)} with respect to some matching variables (confounders). For example, the first matched set is all women aged from 20 to 25, the second matched set is women aged from 26 to 30, etc. The logistic regression has a different intercept for each matched set but the other regression coefficients are assumed to be the same across matched sets (\code{parallel = TRUE}). Let \eqn{C} be the number of matched sets. This \pkg{VGAM} family function uses a trick by allowing \eqn{M}, the number of linear/additive predictors, to be equal to \eqn{C}, and then nullifying all but one of them for a particular observation. The term specified by the \code{mvar} argument must be a \code{\link[base]{factor}}. Consequently, the model matrix contains an intercept plus one column for each level of the factor (except the first; this is the default in R). Altogether there are \eqn{C} columns. The algorithm here constructs a different constraint matrix for each of the \eqn{C} columns. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Section 8.2 of Hastie, T. J. and Tibshirani, R. J. (1990) \emph{Generalized Additive Models}, London: Chapman & Hall. Pregibon, D. (1984) Data analytic methods for matched case-control studies. \emph{Biometrics}, \bold{40}, 639--651. Chapter 7 of Breslow, N. E. and Day, N. E. (1980) \emph{Statistical Methods in Cancer Research I: The Analysis of Case-Control Studies}. Lyon: International Agency for Research on Cancer. Holford, T. R. and White, C. and Kelsey, J. L. (1978) Multivariate analysis for matched case-control studies. \emph{American Journal of Epidemiology}, \bold{107}, 245--256. } \author{ Thomas W. Yee } \note{ The response is assumed to be in a format that can also be inputted into \code{\link{binomialff}}. } \section{Warning }{ Both the memory requirements and computational time of this \pkg{VGAM} family function grow very quickly with respect to the number of matched sets. For example, the large model matrix of a data set with 100 matched sets consisting of one case and one control per set will take up at least (about) 20Mb of memory. For a constant number of cases and controls per matched set, the memory requirements are \eqn{O(C^3)} and the computational time is \eqn{O(C^4)} flops.
The example below has been run successfully with \code{n = 700} (this corresponds to \eqn{C = 350}) but only on a big machine, and it took over 10 minutes. The large model matrix was 670Mb. } \seealso{ \code{\link{binomialff}}. } \examples{ \dontrun{ # Cf. Hastie and Tibshirani (1990) p.209. The variable n must be even. # Here, the intercept for each matched set accounts for x3 which is # the confounder or matching variable. n <- 700 # Requires a big machine with lots of memory. Expensive wrt time n <- 100 # This requires a reasonably big machine. mydat <- data.frame(x2 = rnorm(n), x3 = rep(rnorm(n/2), each = 2)) xmat <- with(mydat, cbind(x2, x3)) mydat <- transform(mydat, eta = -0.1 + 0.2 * x2 + 0.3 * x3) etamat <- with(mydat, matrix(eta, n/2, 2)) condmu <- exp(etamat[, 1]) / (exp(etamat[, 1]) + exp(etamat[, 2])) y1 <- ifelse(runif(n/2) < condmu, 1, 0) y <- cbind(y1, 1 - y1) mydat <- transform(mydat, y = c(y1, 1-y1), ID = factor(c(row(etamat)))) fit <- vglm(y ~ 1 + ID + x2, trace = TRUE, mbinomial(mvar = ~ ID - 1), data = mydat) dimnames(coef(fit, matrix = TRUE)) coef(fit, matrix = TRUE) summary(fit) head(fitted(fit)) objsizemb <- function(object) round(object.size(object) / 2^20, dig = 2) objsizemb(fit) # in Mb VLMX <- model.matrix(fit, type = "vlm") # The big model matrix dim(VLMX) objsizemb(VLMX) # in Mb rm(VLMX) } } \keyword{models} \keyword{regression} % Some summary(fit) output %ID347 -1.6699e-01 2.01099 -8.3039e-02 %ID348 -3.0398e-01 2.00455 -1.5165e-01 %ID349 1.7915e-01 2.00147 8.9509e-02 %ID350 -3.7716e-02 2.00423 -1.8818e-02 %x2 2.5748e-01 0.10647 2.4183e+00 %# Use the trick of Holford et al. (1978) VGAM/man/maxwellUC.Rd0000644000176000001440000000365212136651105014005 0ustar ripleyusers\name{Maxwell} \alias{Maxwell} \alias{dmaxwell} \alias{pmaxwell} \alias{qmaxwell} \alias{rmaxwell} \title{The Maxwell Distribution} \description{ Density, distribution function, quantile function and random generation for the Maxwell distribution. } \usage{ dmaxwell(x, a, log = FALSE) pmaxwell(q, a) qmaxwell(p, a) rmaxwell(n, a) } \arguments{ \item{x, q, p, n}{ Same as \code{\link[stats:Uniform]{Uniform}}. } \item{a}{the (positive) parameter.} \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the density is returned. } } \value{ \code{dmaxwell} gives the density, \code{pmaxwell} gives the distribution function, \code{qmaxwell} gives the quantile function, and \code{rmaxwell} generates random deviates. } \references{ Balakrishnan, N. and Nevzorov, V. B. (2003) \emph{A Primer on Statistical Distributions}. Hoboken, New Jersey: Wiley. } \author{ T. W. Yee } \details{ See \code{\link{maxwell}}, the \pkg{VGAM} family function for estimating the parameter \eqn{a} by maximum likelihood estimation, for the formula of the probability density function. } \note{ The Maxwell distribution is related to the Rayleigh distribution. } \seealso{ \code{\link{maxwell}}, \code{\link{Rayleigh}}, \code{\link{rayleigh}}.
} \examples{ \dontrun{ a <- 3; x <- seq(-0.5, 3, length = 100) plot(x, dmaxwell(x, a = a), type = "l", col = "blue", las = 1, ylab = "", main = "Blue is density, orange is cumulative distribution function", sub = "Purple lines are the 10,20,...,90 percentiles") abline(h = 0, col = "blue", lty = 2) lines(x, pmaxwell(x, a = a), type = "l", col = "orange") probs <- seq(0.1, 0.9, by = 0.1) Q <- qmaxwell(probs, a = a) lines(Q, dmaxwell(Q, a), col = "purple", lty = 3, type = "h") lines(Q, pmaxwell(Q, a), col = "purple", lty = 3, type = "h") abline(h = probs, col = "purple", lty = 3) max(abs(pmaxwell(Q, a) - probs)) # Should be zero } } \keyword{distribution} VGAM/man/maxwell.Rd0000644000176000001440000000354612136651105013557 0ustar ripleyusers\name{maxwell} \alias{maxwell} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Maxwell Distribution Family Function } \description{ Estimating the parameter of the Maxwell distribution by maximum likelihood estimation. } \usage{ maxwell(link = "loge", zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{link, zero}{ Parameter link function applied to \eqn{a}. See \code{\link{Links}} for more choices and information; a log link is the default because the parameter is positive. More information is at \code{\link{CommonVGAMffArguments}}. } } \details{ The Maxwell distribution, which is used in the area of thermodynamics, has a probability density function that can be written \deqn{f(y;a) = \sqrt{2/\pi} a^{3/2} y^2 \exp(-0.5 a y^2)}{% f(y;a) = sqrt(2/pi) * a^(3/2) * y^2 * exp(-0.5*a*y^2)} for \eqn{y>0} and \eqn{a>0}. The mean of \eqn{Y} is \eqn{\sqrt{8 / (a \pi)}}{sqrt(8 / (a * pi))} (returned as the fitted values), and its variance is \eqn{(3\pi - 8)/(\pi a)}{(3*pi - 8)/(pi*a)}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, \code{\link{rrvglm}} and \code{\link{vgam}}. } \references{ von Seggern, D. H. (1993) \emph{CRC Standard Curves and Surfaces}, Boca Raton, FL.: CRC Press. } \author{ T. W. Yee } \note{ Fisher-scoring and Newton-Raphson are the same here. A related distribution is the Rayleigh distribution. This \pkg{VGAM} family function handles multiple responses. } \seealso{ \code{\link{Maxwell}}, \code{\link{rayleigh}}. } \examples{ mdata <- data.frame(y = rmaxwell(1000, a = exp(2))) fit <- vglm(y ~ 1, maxwell, mdata, trace = TRUE, crit = "coef") coef(fit, matrix = TRUE) Coef(fit) } \keyword{models} \keyword{regression} VGAM/man/marital.nz.Rd0000644000176000001440000000234412136651105014160 0ustar ripleyusers\name{marital.nz} \alias{marital.nz} \docType{data} \title{ New Zealand Marital Data. } \description{ Some marital data mainly from a large NZ company collected in the early 1990s. } \usage{data(marital.nz)} \format{ A data frame with 6053 observations on the following 3 variables. \describe{ \item{\code{age}}{a numeric vector, age in years} \item{\code{ethnicity}}{a factor with levels \code{European}, \code{Maori}, \code{Other}, \code{Polynesian}. Only Europeans are included in the data set. } \item{\code{mstatus}}{a factor with levels \code{Divorced/Separated}, \code{Married/Partnered}, \code{Single}, \code{Widowed}. } } } \details{ This is a subset of a data set collected from a self-administered questionnaire in a large New Zealand workforce observational study conducted during 1992--3. The data were augmented by a second study consisting of retirees.
The data can be considered a reasonable representation of the white male New Zealand population in the early 1990s. } \source{ Clinical Trials Research Unit, University of Auckland, New Zealand. } \references{ See \code{\link{bmi.nz}} and \code{\link{chest.nz}}. } \examples{ summary(marital.nz) } \keyword{datasets} VGAM/man/margeff.Rd0000644000176000001440000000773212136651105013516 0ustar ripleyusers\name{margeff} \alias{margeff} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Marginal effects for the multinomial logit and cumulative models } \description{ Marginal effects for the multinomial logit model and cumulative logit/probit/... models: the derivative of the fitted probabilities with respect to each explanatory variable. } \usage{ margeff(object, subset = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{object}{ A \code{\link{vglm}} \code{\link{multinomial}} or \code{\link{cumulative}} object. } \item{subset}{ Numerical or logical vector, denoting the required observation(s). Recycling is used if possible. The default means all observations. } } \details{ Computes the derivative of the fitted probabilities of a multinomial logit model or cumulative logit/probit/... model with respect to each explanatory variable. } \value{ A \eqn{p} by \eqn{M+1} by \eqn{n} array, where \eqn{p} is the number of explanatory variables and the (hopefully) nominal response has \eqn{M+1} levels, and there are \eqn{n} observations. If \code{is.numeric(subset)} and \code{length(subset) == 1} then a \eqn{p} by \eqn{M+1} matrix is returned. } % \references{ ~put references to the literature/web site here ~ } \author{ T. W. Yee } \section{Warning }{ Care is needed in interpretation, e.g., the change is not universally accurate for a unit change in each explanatory variable because eventually the `new' probabilities may become negative or greater than unity. Also, the `new' probabilities will not sum to one. This function is not applicable for models with data-dependent terms such as \code{\link{bs}} and \code{\link{poly}}. Also the function should not be applied to models with any terms that have generated more than one column of the LM model matrix, such as \code{\link{bs}} and \code{\link{poly}}. For such models, try numerical methods such as finite differences. The \code{formula} in \code{object} should consist of simple terms of the form \code{ ~ x2 + x3 + x4}, etc. } \note{ For \code{\link{multinomial}} this function should handle any value of \code{refLevel} and also any constraint matrices. However, it does not currently handle the \code{xij} or \code{form2} arguments, nor \code{\link{vgam}} objects. For \code{\link{multinomial}} if \code{subset} is numeric then the function uses a \code{for} loop over the observations (slow). The default computations use vectorization; this uses more memory than a \code{for} loop but is faster. } \seealso{ \code{\link{multinomial}}, \code{\link{cumulative}}, \code{\link{vglm}}. } \examples{ # Not a good example for multinomial() because the response is ordinal!! ii <- 3; hh <- 1/100 pneumo <- transform(pneumo, let = log(exposure.time)) fit <- vglm(cbind(normal, mild, severe) ~ let, multinomial, pneumo) fit <- vglm(cbind(normal, mild, severe) ~ let, cumulative(reverse = TRUE, parallel = TRUE), data = pneumo) fitted(fit)[ii, ] mynewdata <- with(pneumo, data.frame(let = let[ii]+hh)) (newp <- predict(fit, newdata = mynewdata, type = "response")) # Compare the difference. Should be the same as hh --> 0.
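# The two round() calls below should approximately agree: the first is a
# finite-difference (numerical) derivative of the fitted probabilities
# with respect to let, and the second is the analytic derivative from
# margeff(). Their difference shrinks as hh tends to zero.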
round(dig = 3, (newp-fitted(fit)[ii, ])/hh) # Finite-difference approxn round(dig = 3, margeff(fit, subset = ii)["let",]) # Other examples round(dig = 3, margeff(fit)) round(dig = 3, margeff(fit, subset = 2)["let",]) round(dig = 3, margeff(fit, subset = c(FALSE, TRUE))["let",,]) # recycling round(dig = 3, margeff(fit, subset = c(2, 4, 6, 8))["let",,]) } % Add one or more standard keywords, see file 'KEYWORDS' in the % R documentation directory. \keyword{models} \keyword{regression} % set \code{i=1:n}. % hh * margeff(fit, i=ii)["let",] % cumulative(reverse=TRUE, parallel=TRUE), % cumulative(reverse=FALSE, parallel=TRUE), % cumulative(reverse=TRUE, parallel=FALSE), % cumulative(reverse=FALSE, parallel=FALSE), VGAM/man/makehamUC.Rd0000644000176000001440000000473712136651105013744 0ustar ripleyusers\name{Makeham} \alias{Makeham} \alias{dmakeham} \alias{pmakeham} \alias{qmakeham} \alias{rmakeham} \title{The Makeham Distribution} \description{ Density, cumulative distribution function, quantile function and random generation for the Makeham distribution. } \usage{ dmakeham(x, shape, scale = 1, epsilon = 0, log = FALSE) pmakeham(q, shape, scale = 1, epsilon = 0) qmakeham(p, shape, scale = 1, epsilon = 0) rmakeham(n, shape, scale = 1, epsilon = 0) } \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. } \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the density is returned. } \item{shape, scale}{positive shape and scale parameters. } \item{epsilon}{another parameter. Must be non-negative. See below. } } \value{ \code{dmakeham} gives the density, \code{pmakeham} gives the cumulative distribution function, \code{qmakeham} gives the quantile function, and \code{rmakeham} generates random deviates. } \author{ T. W. Yee } \details{ See \code{\link{makeham}} for details. The default value of \code{epsilon = 0} corresponds to the Gompertz distribution. The function \code{\link{pmakeham}} uses \code{\link{lambertW}}. } \references{ Jodra, P. (2009) A closed-form expression for the quantile function of the Gompertz-Makeham distribution. \emph{Mathematics and Computers in Simulation}, \bold{79}, 3069--3075. } %\note{ % %} \seealso{ \code{\link{makeham}}, \code{\link{lambertW}}. } \examples{ probs <- seq(0.01, 0.99, by = 0.01) Shape <- exp(-1); Scale <- exp(1); Epsilon <- exp(-1) max(abs(pmakeham(qmakeham(p = probs, Shape, sca = Scale, eps = Epsilon), Shape, sca = Scale, eps = Epsilon) - probs)) # Should be 0 \dontrun{ x <- seq(-0.1, 2.0, by = 0.01); plot(x, dmakeham(x, Shape, sca = Scale, eps = Epsilon), type = "l", main = "Blue is density, orange is cumulative distribution function", sub = "Purple lines are the 10,20,...,90 percentiles", col = "blue", las = 1, ylab = "") abline(h = 0, col = "blue", lty = 2) lines(x, pmakeham(x, Shape, sca = Scale, eps = Epsilon), col = "orange") probs <- seq(0.1, 0.9, by = 0.1) Q <- qmakeham(probs, Shape, sca = Scale, eps = Epsilon) lines(Q, dmakeham(Q, Shape, sca = Scale, eps = Epsilon), col = "purple", lty = 3, type = "h") pmakeham(Q, Shape, sca = Scale, eps = Epsilon) - probs # Should be all zero abline(h = probs, col = "purple", lty = 3) } } \keyword{distribution} VGAM/man/makeham.Rd0000644000176000001440000001056112136651105013504 0ustar ripleyusers\name{makeham} \alias{makeham} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Makeham Distribution Family Function } \description{ Maximum likelihood estimation of the 3-parameter Makeham distribution.
} \usage{ makeham(lshape = "loge", lscale = "loge", lepsilon = "loge", ishape = NULL, iscale = NULL, iepsilon = NULL, gshape = exp(-5:5), gscale = exp(-5:5), gepsilon = exp(-4:1), nsimEIM = 500, oim.mean = TRUE, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lshape, lscale, lepsilon}{ Parameter link functions applied to the shape parameter \code{shape}, scale parameter \code{scale}, and other parameter \code{epsilon}. All parameters are treated as positive here (cf. \code{\link{dmakeham}} allows \code{epsilon = 0}, etc.). See \code{\link{Links}} for more choices. } % \item{eshape, escale, eepsilon}{ % List. Extra argument for each of the links. % See \code{earg} in \code{\link{Links}} for general information. % } \item{ishape, iscale, iepsilon}{ Optional initial values. A \code{NULL} means a value is computed internally. A value must be given for \code{iepsilon} currently, and this is a sensitive parameter! } \item{gshape, gscale, gepsilon}{ See \code{\link{CommonVGAMffArguments}}. } \item{nsimEIM, zero}{ See \code{\link{CommonVGAMffArguments}}. Argument \code{probs.y} is used only when \code{imethod = 2}. } \item{oim.mean}{ To be currently ignored. } } \details{ The Makeham distribution, which adds another parameter to the Gompertz distribution, has cumulative distribution function \deqn{F(x; \alpha, \beta, \varepsilon) = 1 - \exp \left\{ -x \varepsilon + \frac {\alpha}{\beta} \left[ 1 - e^{\beta x} \right] \right\} }{% F(x; alpha, beta, epsilon) = 1 - exp(-x * epsilon + (alpha / beta) * [1 - e^(beta * x)]) } which leads to a probability density function \deqn{f(x; \alpha, \beta, \varepsilon) = \left[ \varepsilon + \alpha e^{\beta x} \right] \; \exp \left\{ -x \varepsilon + \frac {\alpha}{\beta} \left[ 1 - e^{\beta x} \right] \right\}, }{% f(x; alpha, beta, epsilon) = (epsilon + alpha * e^(beta * x) ) * exp(-x * epsilon + (alpha / beta) * [1 - e^(beta * x)]) } for \eqn{\alpha > 0}{alpha > 0}, \eqn{\beta > 0}{beta > 0}, \eqn{\varepsilon \geq 0}{epsilon >= 0}, \eqn{x > 0}. Here, \eqn{\beta}{beta} is called the scale parameter \code{scale}, and \eqn{\alpha}{alpha} is called a shape parameter. The moments for this distribution do not appear to be available in closed form. Simulated Fisher scoring is used and multiple responses are handled. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } %\references{ % %} \author{ T. W. Yee } \section{Warning }{ A lot of care is needed because this is a rather difficult distribution for parameter estimation, especially when the shape parameter is large relative to the scale parameter. If the self-starting initial values fail then try experimenting with the initial value arguments, especially \code{iepsilon}. Successful convergence depends on having very good initial values. More improvements could be made here. Also, monitor convergence by setting \code{trace = TRUE}. A trick is to fit a \code{\link{gompertz}} distribution and use it for initial values; see below. However, this family function is currently numerically fraught. } \seealso{ \code{\link{dmakeham}}, \code{\link{gompertz}}.
} \examples{ \dontrun{ set.seed(123) mdata <- data.frame(x2 = runif(nn <- 1000)) mdata <- transform(mdata, eta1 = -1, ceta1 = 1, eeta1 = -2) mdata <- transform(mdata, shape1 = exp(eta1), scale1 = exp(ceta1), epsil1 = exp(eeta1)) mdata <- transform(mdata, y1 = rmakeham(nn, shape = shape1, scale = scale1, eps = epsil1)) # A trick is to fit a Gompertz distribution first fit0 <- vglm(y1 ~ 1, gompertz, data = mdata, trace = TRUE) fit1 <- vglm(y1 ~ 1, makeham, data = mdata, etastart = cbind(predict(fit0), log(0.1)), trace = TRUE) coef(fit1, matrix = TRUE) summary(fit1) } } \keyword{models} \keyword{regression} %# fit1 <- vglm(y1 ~ 1, makeham, data = mdata, trace = TRUE) %# fit2 <- vglm(y1 ~ 1, makeham(imeth = 2), data = mdata, trace = TRUE) VGAM/man/lvplot.rrvglm.Rd0000644000176000001440000001640312136651105014732 0ustar ripleyusers\name{lvplot.rrvglm} \alias{lvplot.rrvglm} \alias{biplot.rrvglm} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Latent Variable Plot for RR-VGLMs } \description{ Produces an \emph{ordination diagram} (also known as a \emph{biplot} or \emph{latent variable plot}) for \emph{reduced-rank vector generalized linear models} (RR-VGLMs). For rank-2 models only, the x- and y-axis are the first and second canonical axes respectively. } \usage{ lvplot.rrvglm(object, A = TRUE, C = TRUE, scores = FALSE, plot.it = TRUE, groups = rep(1, n), gapC = sqrt(sum(par()$cxy^2)), scaleA = 1, xlab = "Latent Variable 1", ylab = "Latent Variable 2", Alabels = if (length(object@misc$predictors.names)) object@misc$predictors.names else paste("LP", 1:M, sep = ""), Aadj = par()$adj, Acex = par()$cex, Acol = par()$col, Apch = NULL, Clabels = rownames(Cmat), Cadj = par()$adj, Ccex = par()$cex, Ccol = par()$col, Clty = par()$lty, Clwd = par()$lwd, chull.arg = FALSE, ccex = par()$cex, ccol = par()$col, clty = par()$lty, clwd = par()$lwd, spch = NULL, scex = par()$cex, scol = par()$col, slabels = rownames(x2mat), ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{object}{ Object of class \code{"rrvglm"}. } \item{A}{ Logical. Allow the plotting of \bold{A}? } \item{C}{ Logical. Allow the plotting of \bold{C}? If \code{TRUE} then \bold{C} is represented by arrows emanating from the origin. } \item{scores}{ Logical. Allow the plotting of the \eqn{n} scores? The scores are the values of the latent variables for each observation. } \item{plot.it}{ Logical. Plot it? If \code{FALSE}, no plot is produced and the matrix of scores (\eqn{n} latent variable values) is returned; in that case the rank of \code{object} need not be 2. } \item{groups}{ A vector whose distinct values indicate which group the observation belongs to. By default, all the observations belong to a single group. Useful for the multinomial logit model (see \code{\link{multinomial}}).} \item{gapC}{ The gap between the end of the arrow and the text labelling of \bold{C}, in latent variable units.} \item{scaleA}{ Numerical value that is multiplied by \bold{A}, so that \bold{C} is divided by this value. } \item{xlab}{ Caption for the x-axis. See \code{\link[graphics]{par}}. } \item{ylab}{ Caption for the y-axis. See \code{\link[graphics]{par}}. } \item{Alabels}{ Character vector to label \bold{A}. Must be of length \eqn{M}. } \item{Aadj}{ Justification of text strings for labelling \bold{A}. See the \code{adj} argument of \code{\link[graphics]{par}}. } \item{Acex}{ Numeric. Character expansion of the labelling of \bold{A}. See the \code{cex} argument of \code{\link[graphics]{par}}.
} \item{Acol}{ Color of the labelling of \bold{A}. See the \code{col} argument of \code{\link[graphics]{par}}. } \item{Apch}{ Either an integer specifying a symbol or a single character to be used as the default in plotting points. See \code{\link[graphics]{par}}. The \code{pch} argument can be of length \eqn{M}, the number of species. } \item{Clabels}{ Character vector to label \bold{C}. Must be of length \eqn{p2}. } \item{Cadj}{ Justification of text strings for labelling \bold{C}. See the \code{adj} argument of \code{\link[graphics]{par}}. } \item{Ccex}{ Numeric. Character expansion of the labelling of \bold{C}. See the \code{cex} argument of \code{\link[graphics]{par}}. } \item{Ccol}{ Line color of the arrows representing \bold{C}. See the \code{col} argument of \code{\link[graphics]{par}}. } \item{Clty}{ Line type of the arrows representing \bold{C}. See the \code{lty} argument of \code{\link[graphics]{par}}. } \item{Clwd}{ Line width of the arrows representing \bold{C}. See the \code{lwd} argument of \code{\link[graphics]{par}}. } \item{chull.arg}{ Logical. Plot the convex hull of the scores? This is done for each group (see the \code{groups} argument). } \item{ccex}{ Numeric. Character expansion of the labelling of the convex hull. See the \code{cex} argument of \code{\link[graphics]{par}}. } \item{ccol}{ Line color of the convex hull. See the \code{col} argument of \code{\link[graphics]{par}}. } \item{clty}{ Line type of the convex hull. See the \code{lty} argument of \code{\link[graphics]{par}}. } \item{clwd}{ Line width of the convex hull. See the \code{lwd} argument of \code{\link[graphics]{par}}. } \item{spch}{ Either an integer specifying a symbol or a single character to be used as the default in plotting points. See \code{\link[graphics]{par}}. The \code{spch} argument can be of length \eqn{n}, the number of observations. } \item{scex}{ Numeric. Character expansion of the labelling of the scores. See the \code{cex} argument of \code{\link[graphics]{par}}. } \item{scol}{ Color of the scores. See the \code{col} argument of \code{\link[graphics]{par}}. } \item{slabels}{ Character vector to label the scores. Must be of length \eqn{n}. } \item{\dots}{ Arguments passed into the \code{plot} function when setting up the entire plot. Useful arguments here include \code{xlim} and \code{ylim}. } } \details{ For RR-VGLMs, a \emph{biplot} and a \emph{latent variable} plot coincide. In general, many of the arguments starting with ``A'' refer to \bold{A} (of length \eqn{M}), ``C'' to \bold{C} (of length \eqn{p2}), ``c'' to the convex hull (of length \code{length(unique(groups))}), and ``s'' to scores (of length \eqn{n}). As the result is a biplot, its interpretation is based on the inner product. } \value{ The matrix of scores (\eqn{n} latent variable values) is returned regardless of whether a plot was produced or not. } \references{ Yee, T. W. and Hastie, T. J. (2003) Reduced-rank vector generalized linear models. \emph{Statistical Modelling}, \bold{3}, 15--41. } \author{ Thomas W. Yee } \note{ % Further work to be done: This function could be hooked up % to the normalization code of \code{\link{rrvglm}} to allow uncorrelated % latent variables etc. The functions \code{\link{lvplot.rrvglm}} and \code{\link{biplot.rrvglm}} are equivalent. In the example below the predictor variables are centered, which is a good idea.
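The centering step used in the example below, shown on its own (a minimal sketch; \code{scale()} centers and, by default, also standardizes the variable):

pneumo <- transform(pneumo, slet = scale(log(exposure.time)))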
} \seealso{ \code{\link{lvplot}}, \code{\link[graphics]{par}}, \code{\link{rrvglm}}, \code{\link{Coef.rrvglm}}, \code{\link{rrvglm.control}}. } \examples{ nn <- nrow(pneumo) # x1, x2 and x3 are some unrelated covariates pneumo <- transform(pneumo, slet = scale(log(exposure.time)), x1 = rnorm(nn), x2 = rnorm(nn), x3 = rnorm(nn)) fit <- rrvglm(cbind(normal, mild, severe) ~ slet + x1 + x2 + x3, multinomial, pneumo, Rank = 2, Corner = FALSE, Uncorrel = TRUE) \dontrun{ lvplot(fit, chull = TRUE, scores = TRUE, clty = 2, ccol = "blue", scol = "red", Ccol = "darkgreen", Clwd = 2, Ccex = 2, main = "Biplot of some fictional data") } } \keyword{models} \keyword{regression} \keyword{graphs} % pneumo$slet = scale(log(pneumo$exposure.time)) VGAM/man/lvplot.qrrvglm.Rd0000644000176000001440000003262612136651105015110 0ustar ripleyusers\name{lvplot.qrrvglm} \alias{lvplot.qrrvglm} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Latent Variable Plot for QO models } \description{ Produces an ordination diagram (latent variable plot) for quadratic ordination (QO) models. For rank-1 models, the x-axis is the first ordination/constrained/canonical axis. For rank-2 models, the x- and y-axis are the first and second ordination axes respectively. } \usage{ lvplot.qrrvglm(object, varlvI = FALSE, reference = NULL, add = FALSE, plot.it = TRUE, rug = TRUE, y = FALSE, type = c("fitted.values", "predictors"), xlab = paste("Latent Variable", if (Rank == 1) "" else " 1", sep = ""), ylab = if (Rank == 1) switch(type, predictors = "Predictors", fitted.values = "Fitted values") else "Latent Variable 2", pcex = par()$cex, pcol = par()$col, pch = par()$pch, llty = par()$lty, lcol = par()$col, llwd = par()$lwd, label.arg = FALSE, adj.arg = -0.1, ellipse = 0.95, Absolute = FALSE, elty = par()$lty, ecol = par()$col, elwd = par()$lwd, egrid = 200, chull.arg = FALSE, clty = 2, ccol = par()$col, clwd = par()$lwd, cpch = " ", C = FALSE, OriginC = c("origin", "mean"), Clty = par()$lty, Ccol = par()$col, Clwd = par()$lwd, Ccex = par()$cex, Cadj.arg = -0.1, stretchC = 1, sites = FALSE, spch = NULL, scol = par()$col, scex = par()$cex, sfont = par()$font, check.ok = TRUE, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{object}{ A CQO or UQO object. } \item{varlvI}{ Logical that is fed into \code{\link{Coef.qrrvglm}}. } \item{reference}{ Integer or character that is fed into \code{\link{Coef.qrrvglm}}. } \item{add}{ Logical. Add to an existing plot? If \code{FALSE}, a new plot is made. } \item{plot.it}{ Logical. Plot it? } \item{rug}{ Logical. If \code{TRUE}, a rug plot is plotted at the foot of the plot (applies to rank-1 models only). These values are jittered to expose ties. } \item{y}{ Logical. If \code{TRUE}, the responses will be plotted (applies only to rank-1 models and if \code{type = "fitted.values"}). } \item{type}{ Either \code{"fitted.values"} or \code{"predictors"}, specifies whether the y-axis is on the response or eta scale, respectively. } \item{xlab}{ Caption for the x-axis. See \code{\link[graphics]{par}}. } \item{ylab}{ Caption for the y-axis. See \code{\link[graphics]{par}}. } \item{pcex}{ Character expansion of the points. Here, for rank-1 models, points are the response \emph{y} data. For rank-2 models, points are the optima. See the \code{cex} argument in \code{\link[graphics]{par}}. } \item{pcol}{ Color of the points. See the \code{col} argument in \code{\link[graphics]{par}}.
} \item{pch}{ Either an integer specifying a symbol or a single character to be used as the default in plotting points. See \code{\link[graphics]{par}}. The \code{pch} argument can be of length \eqn{M}, the number of species. } \item{llty}{ Line type. Rank-1 models only. See the \code{lty} argument of \code{\link[graphics]{par}}. } \item{lcol}{ Line color. Rank-1 models only. See the \code{col} argument of \code{\link[graphics]{par}}. } \item{llwd}{ Line width. Rank-1 models only. See the \code{lwd} argument of \code{\link[graphics]{par}}. } \item{label.arg}{ Logical. Label the optima and \bold{C}? (applies to rank-2 models only). } \item{adj.arg}{ Justification of text strings for labelling the optima (applies to rank-2 models only). See the \code{adj} argument of \code{\link[graphics]{par}}. } \item{ellipse}{ Numerical, of length 0 or 1 (applies to rank-2 models only). If \code{Absolute} is \code{TRUE} then \code{ellipse} should be assigned a value that is used for the elliptical contouring. If \code{Absolute} is \code{FALSE} then \code{ellipse} should be assigned a value between 0 and 1, for example, setting \code{ellipse = 0.9} means an ellipse with contour = 90\% of the maximum will be plotted about each optimum. If \code{ellipse} is a negative value, then the function checks that the model is an equal-tolerances model and \code{varlvI = FALSE}, and if so, plots circles with radius \code{-ellipse}. For example, setting \code{ellipse = -1} will result in circular contours that have unit radius (in latent variable units). If \code{ellipse} is \code{NULL} or \code{FALSE} then no ellipse is drawn around the optima. } \item{Absolute}{ Logical. If \code{TRUE}, the contours corresponding to \code{ellipse} are on an absolute scale. If \code{FALSE}, the contours corresponding to \code{ellipse} are on a relative scale. } \item{elty}{ Line type of the ellipses. See the \code{lty} argument of \code{\link[graphics]{par}}. } \item{ecol}{ Line color of the ellipses. See the \code{col} argument of \code{\link[graphics]{par}}. } \item{elwd}{ Line width of the ellipses. See the \code{lwd} argument of \code{\link[graphics]{par}}. } \item{egrid}{ Numerical. Line resolution of the ellipses. Choosing a larger value will result in smoother ellipses. Useful when ellipses are large. } \item{chull.arg}{ Logical. Add a convex hull around the site scores? } \item{clty}{ Line type of the convex hull. See the \code{lty} argument of \code{\link[graphics]{par}}. } \item{ccol}{ Line color of the convex hull. See the \code{col} argument of \code{\link[graphics]{par}}. } \item{clwd}{ Line width of the convex hull. See the \code{lwd} argument of \code{\link[graphics]{par}}. } \item{cpch}{ Character to be plotted at the intersection points of the convex hull. Having white spaces means that site labels are not obscured there. See the \code{pch} argument of \code{\link[graphics]{par}}. } \item{C}{ Logical. Add \bold{C} (represented by arrows emanating from \code{OriginC}) to the plot? } \item{OriginC}{ Character or numeric. Where the arrows representing \bold{C} emanate from. If character, it must be one of the choices given. By default the first is chosen. The value \code{"origin"} means \code{c(0,0)}. The value \code{"mean"} means the sample mean of the latent variables (centroid). Alternatively, the user may specify a numerical vector of length 2. } \item{Clty}{ Line type of the arrows representing \bold{C}. See the \code{lty} argument of \code{\link[graphics]{par}}.
} \item{Ccol}{ Line color of the arrows representing \bold{C}. See the \code{col} argument of \code{\link[graphics]{par}}. } \item{Clwd}{ Line width of the arrows representing \bold{C}. See the \code{lwd} argument of \code{\link[graphics]{par}}. } \item{Ccex}{ Numeric. Character expansion of the labelling of \bold{C}. See the \code{cex} argument of \code{\link[graphics]{par}}. } \item{Cadj.arg}{ Justification of text strings when labelling \bold{C}. See the \code{adj} argument of \code{\link[graphics]{par}}. } \item{stretchC}{ Numerical. Stretching factor for \bold{C}. Instead of using \bold{C}, \code{stretchC * } \bold{C} is used. } \item{sites}{ Logical. Add the site scores (aka latent variable values, nu's) to the plot? (applies to rank-2 models only). } \item{spch}{ Plotting character of the site scores. The default value of \code{NULL} means the row labels of the data frame are used. They often are the site numbers. See the \code{pch} argument of \code{\link[graphics]{par}}. } \item{scol}{ Color of the site scores. See the \code{col} argument of \code{\link[graphics]{par}}. } \item{scex}{ Character expansion of the site scores. See the \code{cex} argument of \code{\link[graphics]{par}}. } \item{sfont}{ Font used for the site scores. See the \code{font} argument of \code{\link[graphics]{par}}. } % \item{Rotate}{ % Numeric or logical. % A value from the set \{1,2,\ldots,\eqn{M}\} indicating % which species (quadratic predictor) is to be chosen so that % its major and semi-minor axes are parallel to the latent variable % axes, i.e., that species' Tolerance matrix will be diagonal. % If \code{Rotate} is \code{TRUE}, the first species is selected for rotation. % By default a rotation is performed only if the tolerance matrices are equal, % and \code{Rotation} only applies when the rank is greater than one. % See \code{\link{Coef.qrrvglm}} for details. % } % \item{ITolerances}{ % Logical. % If \code{TRUE}, the tolerances matrices are transformed so that they are % the order-\code{Rank} identity matrix. This means that a rank-2 % latent variable plot % can be interpreted naturally in terms of distances and directions. % See \code{\link{Coef.qrrvglm}} for details. % } \item{check.ok}{ Logical. Whether a check is performed to see that \code{noRRR = ~ 1} was used. It doesn't make sense to have a latent variable plot unless this is so. } \item{\dots}{ Arguments passed into the \code{plot} function when setting up the entire plot. Useful arguments here include \code{xlim} and \code{ylim}. } } \details{ This function only works for rank-1 and rank-2 QRR-VGLMs with argument \code{noRRR = ~ 1}. For unequal-tolerances models, the latent variable axes can be rotated so that at least one of the tolerance matrices is diagonal; see \code{\link{Coef.qrrvglm}} for details. Arguments beginning with ``\code{p}'' correspond to the points, e.g., \code{pcex} and \code{pcol} correspond to the size and color of the points. Such ``\code{p}'' arguments should be vectors of length 1, or \eqn{n}, the number of sites. For the rank-2 model, arguments beginning with ``\code{p}'' correspond to the optima. } \value{ Returns a matrix of latent variables (site scores) regardless of whether a plot was produced or not. } \references{ Yee, T. W. (2004) A new technique for maximum-likelihood canonical Gaussian ordination. \emph{Ecological Monographs}, \bold{74}, 685--701. } \author{ Thomas W.
Yee } \note{ A species which does not have an optimum will not have an ellipse drawn even if requested, i.e., if its tolerance matrix is not positive-definite. % Stationary points which are not bell-shaped will not be plotted % at all. Plotting \bold{C} gives a visual display of the weights (loadings) of each of the variables used in the linear combination defining each latent variable. The arguments \code{elty}, \code{ecol} and \code{elwd} may be replaced in the future by \code{llty}, \code{lcol} and \code{llwd}, respectively. For rank-1 models, a similar function to this one is \code{\link{perspqrrvglm}}. It plots the fitted values on a finer grid rather than at the actual site scores here. The result is a collection of smooth bell-shaped curves. However, it has the weakness that the plot is more divorced from the data; the user thinks it is the truth without an appreciation of the statistical variability in the estimates. % Yet to do: allow for the contour line to correspond to the tolerance % matrix itself. zz ?? In the example below, the data come from an equal-tolerances model. The species' tolerance matrices are all the identity matrix, and the optima are at (0,0), (1,1) and (-2,0) for species 1, 2, 3 respectively. } \section{Warning}{ Interpretation of a latent variable plot (CQO diagram) is potentially very misleading in terms of distances if (i) the tolerance matrices of the species are unequal and (ii) the contours of these tolerance matrices are not included in the ordination diagram. } \seealso{ \code{\link{lvplot}}, \code{\link{perspqrrvglm}}, \code{\link{Coef.qrrvglm}}, \code{\link[graphics]{par}}, \code{\link{cqo}}. } \examples{ set.seed(123) nn <- 200 cdata <- data.frame(x2 = rnorm(nn), # Has mean 0 (needed when ITol=TRUE) x3 = rnorm(nn), # Has mean 0 (needed when ITol=TRUE) x4 = rnorm(nn)) # Has mean 0 (needed when ITol=TRUE) cdata <- transform(cdata, lv1 = x2 + x3 - 2*x4, lv2 = -x2 + x3 + 0*x4) # Nb. lv2 is weakly correlated with lv1 cdata <- transform(cdata, lambda1 = exp(6 - 0.5 * (lv1-0)^2 - 0.5 * (lv2-0)^2), lambda2 = exp(5 - 0.5 * (lv1-1)^2 - 0.5 * (lv2-1)^2), lambda3 = exp(5 - 0.5 * (lv1+2)^2 - 0.5 * (lv2-0)^2)) cdata <- transform(cdata, spp1 = rpois(nn, lambda1), spp2 = rpois(nn, lambda2), spp3 = rpois(nn, lambda3)) set.seed(111) # vvv p2 <- cqo(cbind(spp1,spp2,spp3) ~ x2 + x3 + x4, poissonff, # vvv data = cdata, # vvv Rank=2, ITolerances=TRUE, # vvv Crow1positive=c(TRUE,FALSE)) # deviance = 505.81 # vvv if (deviance(p2) > 506) stop("suboptimal fit obtained") # vvv sort(p2@misc$deviance.Bestof) # A history of the fits # vvv Coef(p2) \dontrun{ lvplot(p2, sites = TRUE, spch = "*", scol = "darkgreen", scex = 1.5, chull = TRUE, label = TRUE, Absolute = TRUE, ellipse = 140, adj = -0.5, pcol = "blue", pcex = 1.3, las = 1, C = TRUE, Cadj = c(-.3,-.3,1), Clwd = 2, Ccex = 1.4, Ccol = "red", main = paste("Contours at Abundance = 140 with", "convex hull of the site scores")) } # vvv var(lv(p2)) # A diagonal matrix, i.e., uncorrelated latent variables # vvv var(lv(p2, varlvI = TRUE)) # Identity matrix # vvv Tol(p2)[,,1:2] # Identity matrix # vvv Tol(p2, varlvI = TRUE)[,,1:2] # A diagonal matrix } \keyword{models} \keyword{regression} \keyword{graphs} VGAM/man/lvplot.Rd0000644000176000001440000000400312136651105013415 0ustar ripleyusers\name{lvplot} \alias{lvplot} %- Also NEED an '\alias' for EACH other topic documented here.
\title{ Latent Variable Plot } \description{ Generic function for a \emph{latent variable plot} (also known as an \emph{ordination diagram} by ecologists). } \usage{ lvplot(object, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{object}{ An object for which a latent variable plot is meaningful. } \item{\dots}{ Other arguments fed into the specific methods function of the model. They usually are graphical parameters, and sometimes they are fed into the methods function for \code{\link{Coef}}. } } \details{ Latent variables occur in reduced-rank regression models, as well as in quadratic and additive ordination. For the latter, latent variables are often called the \emph{site scores}. The term \emph{latent variable plot} was coined by Yee (2004); such a plot has the latent variable as at least one of its axes. } \value{ The value returned depends specifically on the methods function invoked. } \references{ Yee, T. W. (2004) A new technique for maximum-likelihood canonical Gaussian ordination. \emph{Ecological Monographs}, \bold{74}, 685--701. Yee, T. W. (2006) Constrained additive ordination. \emph{Ecology}, \bold{87}, 203--213. } \author{ Thomas W. Yee } \note{ Latent variables are not really applicable to \code{\link{vglm}}/\code{\link{vgam}} models. } \seealso{ \code{\link{lvplot.qrrvglm}}, \code{lvplot.cao}, \code{\link{lv}}, \code{\link{trplot}}. } \examples{ \dontrun{ hspider[,1:6] <- scale(hspider[,1:6]) # Standardized environmental vars set.seed(123) p1 <- cao(cbind(Pardlugu, Pardmont, Pardnigr, Pardpull, Zoraspin) ~ WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux, family = poissonff, data = hspider, Bestof = 3, df1.nl = c(Zoraspin = 2.5, 3), Crow1positive = TRUE) index <- 1:ncol(depvar(p1)) lvplot(p1, lcol = index, pcol = index, y = TRUE, las = 1) } } \keyword{models} \keyword{regression} VGAM/man/lrtest.Rd0000644000176000001440000001046312136651105013417 0ustar ripleyusers\name{lrtest} \alias{lrtest} \alias{lrtest_vglm} %\alias{update_formula} %\alias{update_default} \title{Likelihood Ratio Test of Nested Models} \description{ \code{lrtest} is a generic function for carrying out likelihood ratio tests. The default method can be employed for comparing nested VGLMs (see details below). } \usage{ lrtest(object, \dots) lrtest_vglm(object, \dots, name = NULL) } %\method{lrtest}{default}(object, \dots, name = NULL) %\method{lrtest}{formula}(object, \dots, data = list()) \arguments{ \item{object}{ a \code{\link{vglm}} object. See below for details. } \item{\dots}{ further object specifications passed to methods. See below for details. } \item{name}{ a function for extracting a suitable name/description from a fitted model object. By default the name is queried by calling \code{\link{formula}}. } % \item{data}{ % a data frame containing the variables in the model. % % } } \details{ \code{lrtest} is intended to be a generic function for comparisons of models via asymptotic likelihood ratio tests. The default method consecutively compares the fitted model object \code{object} with the models passed in \code{\dots}. Instead of passing the fitted model objects in \code{\dots}, several other specifications are possible. The updating mechanism is the same as for \code{\link[lmtest]{waldtest}}: the models in \code{\dots} can be specified as integers, characters (both for terms that should be eliminated from the previous model), update formulas or fitted model objects. Except for the last case, the existence of an \code{\link[stats]{update}} method is assumed.
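For instance, with two nested \code{\link{vglm}} fits such as \code{fit1} and \code{fit2} in the examples below (where \code{fit2} contains the extra term \code{x3}), the following calls all request the same comparison; this sketch simply mirrors the Examples section:

lrtest(fit2, fit1)        # compare two fitted model objects
lrtest(fit2, 2)           # eliminate the 2nd term of fit2
lrtest(fit2, "x3")        # eliminate the term named "x3"
lrtest(fit2, . ~ . - x3)  # an update formula eliminating x3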
See \code{\link[lmtest]{waldtest}} for details. Subsequently, an asymptotic likelihood ratio test for each pair of consecutive models is carried out: twice the difference in log-likelihoods (as derived by the \code{\link[stats]{logLik}} methods) is compared with a Chi-squared distribution. % The \code{"formula"} method fits a \code{\link{lm}} % first and then calls the default method. } \note{ The code was adapted directly from \pkg{lmtest} (written by T. Hothorn, A. Zeileis, G. Millo, D. Mitchell) and made to work for VGLMs and S4. This help file was also adapted from \pkg{lmtest}. \emph{Approximate} LRTs might be applied to VGAMs, as produced by \code{\link{vgam}}, but it is probably better for inference to use \code{\link{vglm}} with regression splines (\code{\link[splines]{bs}} and \code{\link[splines]{ns}}). This methods function should not be applied to other models such as those produced by \code{\link{rrvglm}}, \code{\link{cqo}} or \code{\link{cao}}. } \section{Warning }{ Several \pkg{VGAM} family functions implement distributions which do not satisfy the usual regularity conditions needed for the LRT to work. No checking or warning is given for these. } \value{ An object of class \code{"VGAManova"} which contains a slot with the log-likelihood, degrees of freedom, the difference in degrees of freedom, likelihood ratio Chi-squared statistic and corresponding p value. These are printed by \code{stats:::print.anova()}; see \code{\link[stats]{anova}}. } \seealso{ \pkg{lmtest}, \code{\link{vglm}}. % \code{\link{waldtest}} % \code{update_default}, % \code{update_formula}. } \examples{ set.seed(1) pneumo <- transform(pneumo, let = log(exposure.time), x3 = runif(nrow(pneumo))) fit1 <- vglm(cbind(normal, mild, severe) ~ let , propodds, pneumo) fit2 <- vglm(cbind(normal, mild, severe) ~ let + x3, propodds, pneumo) fit3 <- vglm(cbind(normal, mild, severe) ~ let , cumulative, pneumo) # Various equivalent specifications of the LR test for testing x3 (ans1 <- lrtest(fit2, fit1)) ans2 <- lrtest(fit2, 2) ans3 <- lrtest(fit2, "x3") ans4 <- lrtest(fit2, . ~ . - x3) c(all.equal(ans1, ans2), all.equal(ans1, ans3), all.equal(ans1, ans4)) # Doing it manually (testStatistic <- 2 * (logLik(fit2) - logLik(fit1))) (mypval <- pchisq(testStatistic, df = length(coef(fit2)) - length(coef(fit1)), lower.tail = FALSE)) (ans5 <- lrtest(fit3, fit1)) # Test proportional odds (parallelism) assumption } \keyword{htest} %(testStatistic <- 2 * (logLik(fit3) - logLik(fit1))) %(mypval <- pchisq(testStatistic, df = length(coef(fit3)) - length(coef(fit1)), % lower.tail = FALSE)) VGAM/man/lqnorm.Rd0000644000176000001440000001001612136651105013404 0ustar ripleyusers\name{lqnorm} %\alias{lqnorm} \alias{lqnorm} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Minimizing the L-q norm Family Function } \description{ Minimizes the L-q norm of residuals in a linear model. } \usage{ lqnorm(qpower = 2, link = "identity", imethod = 1, imu = NULL, shrinkage.init = 0.95) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{qpower}{ A single numeric, must be greater than one, called \eqn{q} below. The absolute values of the residuals are raised to the power of this argument, and then summed. This quantity is minimized with respect to the regression coefficients. } \item{link}{ Link function applied to the `mean' \eqn{\mu}{mu}. See \code{\link{Links}} for more details. } \item{imethod}{ Must be 1, 2 or 3. See \code{\link{CommonVGAMffArguments}} for more information.
Ignored if \code{imu} is specified. } \item{imu}{ Numeric, optional initial values used for the fitted values. The default is to use \code{imethod = 1}. } \item{shrinkage.init}{ How much shrinkage is used when initializing the fitted values. The value must be between 0 and 1 inclusive, and a value of 0 means the individual response values are used, and a value of 1 means the median or mean is used. This argument is used in conjunction with \code{imethod = 3}. } } \details{ This function minimizes the objective function \deqn{ \sum_{i=1}^n \; w_i (|y_i - \mu_i|)^q }{% sum_{i=1}^n w_i (|y_i - mu_i|)^q } where \eqn{q} is the argument \code{qpower}, \eqn{\eta_i = g(\mu_i)}{eta_i = g(mu_i)} where \eqn{g} is the link function, and \eqn{\eta_i}{eta_i} is the vector of linear/additive predictors. The prior weights \eqn{w_i} can be inputted using the \code{weights} argument of \code{vlm}/\code{\link{vglm}}/\code{\link{vgam}} etc.; it should be just a vector here since this function handles only a single vector or one-column response. Numerical problems will occur when \eqn{q} is too close to one. Reasonable values probably range from about 1.5 upwards. The value \eqn{q=2} corresponds to ordinary least squares while \eqn{q=1} corresponds to the MLE of a double exponential (Laplace) distribution. The procedure becomes more sensitive to outliers the larger the value of \eqn{q}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Yee, T. W. and Wild, C. J. (1996) Vector generalized additive models. \emph{Journal of the Royal Statistical Society, Series B, Methodological}, \bold{58}, 481--493. } \author{ Thomas W. Yee } \note{ This \pkg{VGAM} family function is an initial attempt to provide a more robust alternative for regression and/or offer a little more flexibility than least squares. The \code{@misc} slot of the fitted object contains a list component called \code{objectiveFunction} which is the value of the objective function at the final iteration. } \section{Warning }{ Convergence failure is common; therefore the user is advised to be cautious and monitor convergence! } \seealso{ \code{\link{gaussianff}}. } \examples{ set.seed(123) ldata <- data.frame(x = sort(runif(nn <- 10))) realfun <- function(x) 4 + 5*x ldata <- transform(ldata, y = realfun(x) + rnorm(nn, sd = exp(-1))) # Make the first observation an outlier ldata <- transform(ldata, y = c(4*y[1], y[-1]), x = c(-1, x[-1])) fit <- vglm(y ~ x, fam = lqnorm(qpower = 1.2), data = ldata) coef(fit, matrix = TRUE) head(fitted(fit)) fit@misc$qpower fit@misc$objectiveFunction \dontrun{ # Graphical check with(ldata, plot(x, y, main = paste("LS = red, lqnorm = blue (qpower = ", fit@misc$qpower, "), truth = black", sep = ""), col = "blue")) lmfit <- lm(y ~ x, data = ldata) with(ldata, lines(x, fitted(fit), col = "blue")) with(ldata, lines(x, lmfit$fitted, col = "red")) with(ldata, lines(x, realfun(x), col = "black")) } } \keyword{models} \keyword{regression} VGAM/man/lomaxUC.Rd0000644000176000001440000000503712136651105013453 0ustar ripleyusers\name{Lomax} \alias{Lomax} \alias{dlomax} \alias{plomax} \alias{qlomax} \alias{rlomax} \title{The Lomax Distribution} \description{ Density, distribution function, quantile function and random generation for the Lomax distribution with scale parameter \code{scale} and shape parameter \code{q}.
} \usage{ dlomax(x, scale = 1, shape3.q, log = FALSE) plomax(q, scale = 1, shape3.q) qlomax(p, scale = 1, shape3.q) rlomax(n, scale = 1, shape3.q) } \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. If \code{length(n) > 1}, the length is taken to be the number required.} \item{scale}{scale parameter.} \item{shape3.q}{shape parameter.} \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the density is returned. } } \value{ \code{dlomax} gives the density, \code{plomax} gives the distribution function, \code{qlomax} gives the quantile function, and \code{rlomax} generates random deviates. } \references{ Kleiber, C. and Kotz, S. (2003) \emph{Statistical Size Distributions in Economics and Actuarial Sciences}, Hoboken, NJ, USA: Wiley-Interscience. } \author{ T. W. Yee } \details{ See \code{\link{lomax}}, which is the \pkg{VGAM} family function for estimating the parameters by maximum likelihood estimation. } \note{ The Lomax distribution is a special case of the 4-parameter generalized beta II distribution. } \seealso{ \code{\link{lomax}}, \code{\link{genbetaII}}. } \examples{ probs <- seq(0.1, 0.9, by = 0.1) max(abs(plomax(qlomax(p = probs, shape3.q = 1), shape3.q = 1) - probs)) # Should be 0 \dontrun{ par(mfrow = c(1, 2)) x <- seq(-0.01, 5, len = 401) plot(x, dexp(x), type = "l", col = "black", ylab = "", las = 1, ylim = c(0, 3), main = "Black is standard exponential, others are dlomax(x, shape3.q)") lines(x, dlomax(x, shape3.q = 1), col = "orange") lines(x, dlomax(x, shape3.q = 2), col = "blue") lines(x, dlomax(x, shape3.q = 5), col = "green") legend("topright", col = c("orange","blue","green"), lty = rep(1, len = 3), legend = paste("shape3.q =", c(1, 2, 5))) plot(x, pexp(x), type = "l", col = "black", ylab = "", las = 1, main = "Black is standard exponential, others are plomax(x, shape3.q)") lines(x, plomax(x, shape3.q = 1), col = "orange") lines(x, plomax(x, shape3.q = 2), col = "blue") lines(x, plomax(x, shape3.q = 5), col = "green") legend("bottomright", col = c("orange","blue","green"), lty = rep(1, len = 3), legend = paste("shape3.q =", c(1, 2, 5))) } } \keyword{distribution} VGAM/man/lomax.Rd0000644000176000001440000000544112136651105013222 0ustar ripleyusers\name{lomax} \alias{lomax} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Lomax Distribution Family Function } \description{ Maximum likelihood estimation of the 2-parameter Lomax distribution. } \usage{ lomax(lscale = "loge", lshape3.q = "loge", iscale = NULL, ishape3.q = NULL, gshape3.q = exp(-5:5), zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lscale, lshape3.q}{ Parameter link functions applied to the (positive) parameters \code{scale} and \code{q}. See \code{\link{Links}} for more choices. } \item{iscale, ishape3.q}{ Optional initial values for \code{scale} and \code{q}. } \item{gshape3.q, zero}{ See \code{\link{CommonVGAMffArguments}}. } % \item{zero}{ % An integer-valued vector specifying which % linear/additive predictors are modelled as intercepts only. % Here, the values must be from the set \{1,2\} which correspond to % \code{scale}, \code{q}, respectively. % } } \details{ The 2-parameter Lomax distribution is the 4-parameter generalized beta II distribution with shape parameters \eqn{a=p=1}. It is probably more widely known as the Pareto (II) distribution.
It is also the 3-parameter Singh-Maddala distribution with shape parameter \eqn{a=1}, as well as the beta distribution of the second kind with \eqn{p=1}. More details can be found in Kleiber and Kotz (2003). The Lomax distribution has density \deqn{f(y) = q / [b \{1 + y/b\}^{1+q}]}{% f(y) = q / [b (1 + y/b)^(1+q)]} for \eqn{b > 0}, \eqn{q > 0}, \eqn{y \geq 0}{y >= 0}. Here, \eqn{b} is the scale parameter \code{scale}, and \code{q} is a shape parameter. The cumulative distribution function is \deqn{F(y) = 1 - [1 + (y/b)]^{-q}.}{% F(y) = 1 - [1 + (y/b)]^(-q).} The mean is \deqn{E(Y) = b/(q-1)}{% E(Y) = b/(q-1)} provided \eqn{q > 1}; this is returned as the fitted values. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Kleiber, C. and Kotz, S. (2003) \emph{Statistical Size Distributions in Economics and Actuarial Sciences}, Hoboken, NJ, USA: Wiley-Interscience. } \author{ T. W. Yee } \note{ See the note in \code{\link{genbetaII}}. } \seealso{ \code{\link{Lomax}}, \code{\link{genbetaII}}, \code{\link{betaII}}, \code{\link{dagum}}, \code{\link{sinmad}}, \code{\link{fisk}}, \code{\link{invlomax}}, \code{\link{paralogistic}}, \code{\link{invparalogistic}}. } \examples{ ldata <- data.frame(y = rlomax(n = 1000, scale = exp(1), exp(2))) fit <- vglm(y ~ 1, lomax, ldata, trace = TRUE) coef(fit, matrix = TRUE) Coef(fit) summary(fit) } \keyword{models} \keyword{regression} VGAM/man/logoff.Rd0000644000176000001440000000423212136651105013353 0ustar ripleyusers\name{logoff} \alias{logoff} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Log link function with an offset } \description{ Computes the log transformation with an offset, including its inverse and the first two derivatives. } \usage{ logoff(theta, offset = 0, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{theta}{ Numeric or character. See below for further details. } \item{offset}{ Offset value. See \code{\link{Links}}. } \item{inverse, deriv, short, tag}{ Details at \code{\link{Links}}. } } \details{ The log-offset link function is very commonly used for parameters that are greater than a certain value. In particular, it is defined by \code{log(theta + offset)} where \code{offset} is the offset value. For example, if \code{offset = 0.5} then the value of \code{theta} is restricted to be greater than \eqn{-0.5}. Numerical values of \code{theta} close to \code{-offset} or out of range result in \code{Inf}, \code{-Inf}, \code{NA} or \code{NaN}. } \value{ For \code{deriv = 0}, the log of \code{theta+offset}, i.e., \code{log(theta+offset)} when \code{inverse = FALSE}, and if \code{inverse = TRUE} then \code{exp(theta)-offset}. For \code{deriv = 1}, the function returns \emph{d} \code{theta} / \emph{d} \code{eta} as a function of \code{theta} if \code{inverse = FALSE}, else if \code{inverse = TRUE} then it returns the reciprocal. Here, all logarithms are natural logarithms, i.e., to base \emph{e}. } \references{ McCullagh, P. and Nelder, J. A. (1989) \emph{Generalized Linear Models}, 2nd ed. London: Chapman & Hall. } \author{ Thomas W. Yee } \note{ The default means this function is identical to \code{\link{loge}}. Numerical instability may occur when \code{theta} is close to \code{-offset}. } \seealso{ \code{\link{Links}}, \code{\link{loge}}.
} \examples{ \dontrun{ logoff(seq(-0.2, 0.5, by = 0.1)) logoff(seq(-0.2, 0.5, by = 0.1), offset = 0.5) log(seq(-0.2, 0.5, by = 0.1) + 0.5) } } \keyword{math} \keyword{models} \keyword{regression} VGAM/man/lognormal.Rd0000644000176000001440000000777612136651105014105 0ustar ripleyusers\name{lognormal} \alias{lognormal} \alias{lognormal3} %%- Also NEED an '\alias' for EACH other topic documented here. \title{ Lognormal Distribution } \description{ Maximum likelihood estimation of the (univariate) lognormal distribution. } \usage{ lognormal(lmeanlog = "identity", lsdlog = "loge", zero = 2) lognormal3(lmeanlog = "identity", lsdlog = "loge", powers.try = (-3):3, delta = NULL, zero = 2) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lmeanlog, lsdlog}{ Parameter link functions applied to the mean and (positive) \eqn{\sigma}{sigma} (standard deviation) parameter. Both of these are on the log scale. See \code{\link{Links}} for more choices. } % \item{emeanlog, esdlog}{ % emeanlog = list(), esdlog = list(), % emeanlog = list(), esdlog = list(), % List. Extra argument for each of the links. % See \code{earg} in \code{\link{Links}} for general information. % } \item{zero}{ An integer-valued vector specifying which linear/additive predictors are modelled as intercepts only. For \code{lognormal()}, the values must be from the set \{1,2\} which correspond to \code{mu}, \code{sigma}, respectively. For \code{lognormal3()}, the values must be from the set \{1,2,3\} where 3 is for \eqn{\lambda}{lambda}. See \code{\link{CommonVGAMffArguments}} for more information. } \item{powers.try}{ Numerical vector. The initial \eqn{\lambda}{lambda} is chosen as the best value from \code{min(y) - 10^powers.try} where \code{y} is the response. } \item{delta}{ Numerical vector. An alternative method for obtaining an initial \eqn{\lambda}{lambda}. Here, \code{delta = min(y)-lambda}. If given, this supersedes the \code{powers.try} argument. The value must be positive. } } \details{ A random variable \eqn{Y} has a 2-parameter lognormal distribution if \eqn{\log(Y)}{log(Y)} is distributed \eqn{N(\mu, \sigma^2)}{N(mu, sigma^2)}. The expected value of \eqn{Y}, which is \deqn{E(Y) = \exp(\mu + 0.5 \sigma^2)}{% E(Y) = exp(mu + 0.5 sigma^2)} and not \eqn{\mu}{mu}, makes up the fitted values. A random variable \eqn{Y} has a 3-parameter lognormal distribution if \eqn{\log(Y-\lambda)}{log(Y-lambda)} is distributed \eqn{N(\mu, \sigma^2)}{N(mu, sigma^2)}. Here, \eqn{\lambda < Y}{lambda < Y}. The expected value of \eqn{Y}, which is \deqn{E(Y) = \lambda + \exp(\mu + 0.5 \sigma^2)}{% E(Y) = lambda + exp(mu + 0.5 sigma^2)} and not \eqn{\mu}{mu}, makes up the fitted values. \code{lognormal()} and \code{lognormal3()} fit the 2- and 3-parameter lognormal distribution respectively. Clearly, if the location parameter \eqn{\lambda=0}{lambda=0} then both distributions coincide. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Kleiber, C. and Kotz, S. (2003) \emph{Statistical Size Distributions in Economics and Actuarial Sciences}, Hoboken, NJ, USA: Wiley-Interscience. } \author{ T. W. Yee } %\note{ % The more commonly used 2-parameter lognormal distribution is the % 3-parameter lognormal distribution with \eqn{\lambda}{lambda} equal % to zero---see \code{\link{lognormal3}}.
% %} \seealso{ % \code{\link{lognormal3}}, \code{\link[stats]{rlnorm}}, \code{\link{normal1}}, \code{\link{CommonVGAMffArguments}}. } \examples{ ldat <- data.frame(y = rlnorm(nn <- 1000, meanlog = 1.5, sdlog = exp(-0.8))) fit <- vglm(y ~ 1, lognormal, ldat, trace = TRUE) coef(fit, matrix = TRUE) Coef(fit) ldat2 <- data.frame(x2 = runif(nn <- 1000)) ldat2 <- transform(ldat2, y = rlnorm(nn, mean = 0.5, sd = exp(x2))) fit <- vglm(y ~ x2, lognormal(zero = 1), ldat2, trace = TRUE, crit = "c") coef(fit, matrix = TRUE) Coef(fit) lambda <- 4 ldat3 <- data.frame(y = lambda + rlnorm(n = 1000, mean = 1.5, sd = exp(-0.8))) fit <- vglm(y ~ 1, lognormal3, ldat3, trace = TRUE, crit = "c") coef(fit, matrix = TRUE) summary(fit) } \keyword{models} \keyword{regression} VGAM/man/loglog.Rd0000644000176000001440000000417512136651105013370 0ustar ripleyusers\name{loglog} \alias{loglog} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Log-log Link Function } \description{ Computes the log-log transformation, including its inverse and the first two derivatives. } \usage{ loglog(theta, bvalue = NULL, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{theta}{ Numeric or character. See below for further details. } \item{bvalue}{ Values of \code{theta} which are less than or equal to 1 can be replaced by \code{bvalue} before computing the link function value. The component name \code{bvalue} stands for ``boundary value''. See \code{\link{Links}} for more information. } \item{inverse, deriv, short, tag}{ Details at \code{\link{Links}}. } } \details{ The log-log link function is commonly used for parameters that are greater than unity. Numerical values of \code{theta} close to 1 or out of range result in \code{Inf}, \code{-Inf}, \code{NA} or \code{NaN}. } \value{ For \code{deriv = 0}, the log of \code{theta}, i.e., \code{log(log(theta))} when \code{inverse = FALSE}, and if \code{inverse = TRUE} then \code{exp(exp(theta))}. For \code{deriv = 1}, then the function returns \emph{d} \code{theta} / \emph{d} \code{eta} as a function of \code{theta} if \code{inverse = FALSE}, else if \code{inverse = TRUE} then it returns the reciprocal. Here, all logarithms are natural logarithms, i.e., to base \emph{e}. } \references{ McCullagh, P. and Nelder, J. A. (1989) \emph{Generalized Linear Models}, 2nd ed. London: Chapman & Hall. } \author{ Thomas W. Yee } \note{ Numerical instability may occur when \code{theta} is close to 1 unless \code{bvalue} is used. } \seealso{ \code{\link{Links}}, \code{\link{loge}}, \code{\link{logoff}}. } \examples{ x <- seq(0.8, 1.5, by = 0.1) loglog(x) # Has NAs loglog(x, bvalue = 1.0 + .Machine$double.eps) # Has no NAs x <- seq(1.01, 10, len = 100) loglog(x) max(abs(loglog(loglog(x), inverse = TRUE) - x)) # Should be 0 } \keyword{math} \keyword{models} \keyword{regression} VGAM/man/loglinb3.Rd0000644000176000001440000000573012136651105013614 0ustar ripleyusers\name{loglinb3} \alias{loglinb3} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Loglinear Model for Three Binary Responses } \description{ Fits a loglinear model to three binary responses. } \usage{ loglinb3(exchangeable = FALSE, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{exchangeable}{ Logical. If \code{TRUE}, the three marginal probabilities are constrained to be equal. } \item{zero}{ Which linear/additive predictor is modelled as an intercept only? A \code{NULL} means none. 
} } \details{ The model is \eqn{P(Y_1=y_1,Y_2=y_2,Y_3=y_3) =}{P(Y1=y1,Y2=y2,Y3=y3) =} \deqn{\exp(u_0+u_1 y_1+u_2 y_2+u_3 y_3+u_{12} y_1 y_2+ u_{13} y_1 y_3+u_{23} y_2 y_3)}{% exp(u0 + u1*y1 + u2*y2 + u3*y3 + u12*y1*y2 + u13*y1*y3+ u23*y2*y3)} where \eqn{y_1}{y1}, \eqn{y_2}{y2} and \eqn{y_3}{y3} are 0 or 1, and the parameters are \eqn{u_1}{u1}, \eqn{u_2}{u2}, \eqn{u_3}{u3}, \eqn{u_{12}}{u12}, \eqn{u_{13}}{u13}, \eqn{u_{23}}{u23}. The normalizing parameter \eqn{u_0}{u0} can be expressed as a function of the other parameters. Note that a third-order association parameter, \eqn{u_{123}}{u123} for the product \eqn{y_1 y_2 y_3}{y1*y2*y3}, is assumed to be zero for this family function. The linear/additive predictors are \eqn{(\eta_1,\eta_2,\ldots,\eta_6)^T = (u_1,u_2,u_3,u_{12},u_{13},u_{23})^T}{(eta1,eta2,...,eta6) = (u1,u2,u3,u12,u13,u23)}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, \code{\link{rrvglm}} and \code{\link{vgam}}. When fitted, the \code{fitted.values} slot of the object contains the eight joint probabilities, labelled as \eqn{(Y_1,Y_2,Y_3)}{(Y1,Y2,Y3)} = (0,0,0), (0,0,1), (0,1,0), (0,1,1), (1,0,0), (1,0,1), (1,1,0), (1,1,1), respectively. } \references{ Yee, T. W. and Wild, C. J. (2001) Discussion to: ``Smoothing spline ANOVA for multivariate Bernoulli observations, with application to ophthalmology data (with discussion)'' by Gao, F., Wahba, G., Klein, R., Klein, B. \emph{Journal of the American Statistical Association}, \bold{96}, 127--160. McCullagh, P. and Nelder, J. A. (1989) \emph{Generalized Linear Models}, 2nd ed. London: Chapman & Hall. Documentation accompanying the \pkg{VGAM} package at \url{http://www.stat.auckland.ac.nz/~yee} contains further information and examples. } \author{ Thomas W. Yee } \note{ The response must be a three-column matrix of ones and zeros only. Note that each of the 8 combinations of the multivariate response need to appear in the data set, therefore data sets will need to be large in order for this family function to work. } \seealso{ \code{\link{loglinb2}}, \code{\link{hunua}}. } \examples{ fit <- vglm(cbind(cyadea, beitaw, kniexc) ~ altitude, loglinb3, hunua) coef(fit, matrix = TRUE) head(fitted(fit)) summary(fit) } \keyword{models} \keyword{regression} VGAM/man/loglinb2.Rd0000644000176000001440000000700412136651105013607 0ustar ripleyusers\name{loglinb2} \alias{loglinb2} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Loglinear Model for Two Binary Responses } \description{ Fits a loglinear model to two binary responses. } \usage{ loglinb2(exchangeable = FALSE, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{exchangeable}{ Logical. If \code{TRUE}, the two marginal probabilities are constrained to be equal. Should be set \code{TRUE} for ears, eyes, etc. data. } \item{zero}{ Which linear/additive predictor is modelled as an intercept only? A \code{NULL} means none of them. } } \details{ The model is \deqn{P(Y_1=y_1,Y_2=y_2) = \exp(u_0+u_1 y_1+u_2 y_2+u_{12} y_1 y_2)}{% P(Y1=y1,Y2=y2) = exp(u0 + u1*y1 + u2*y2 + u12*y1*y2)} where \eqn{y_1}{y1} and \eqn{y_2}{y2} are 0 or 1, and the parameters are \eqn{u_1}{u1}, \eqn{u_2}{u2}, \eqn{u_{12}}{u12}. 
The normalizing parameter \eqn{u_0}{u0} can be expressed as a function of the other parameters, viz., \deqn{u_0 = -\log[1 + \exp(u_1) + \exp(u_2) + \exp(u_1 + u_2 + u_{12})].}{% u0 = -log[1 + exp(u1) + exp(u2) + exp(u1 + u2 + u12)].} The linear/additive predictors are \eqn{(\eta_1,\eta_2,\eta_3)^T = (u_1,u_2,u_{12})^T}{(eta1,eta2,eta3) = (u1,u2,u12)}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, \code{\link{rrvglm}} and \code{\link{vgam}}. When fitted, the \code{fitted.values} slot of the object contains the four joint probabilities, labelled as \eqn{(Y_1,Y_2)}{(Y1,Y2)} = (0,0), (0,1), (1,0), (1,1), respectively. } \references{ Yee, T. W. and Wild, C. J. (2001) Discussion to: ``Smoothing spline ANOVA for multivariate Bernoulli observations, with application to ophthalmology data (with discussion)'' by Gao, F., Wahba, G., Klein, R., Klein, B. \emph{Journal of the American Statistical Association}, \bold{96}, 127--160. McCullagh, P. and Nelder, J. A. (1989) \emph{Generalized Linear Models}, 2nd ed. London: Chapman & Hall. } \author{ Thomas W. Yee } \note{ The response must be a two-column matrix of ones and zeros only. This is more restrictive than \code{\link{binom2.or}}, which can handle more types of input formats. Note that each of the 4 combinations of the multivariate response need to appear in the data set. } \seealso{ \code{\link{binom2.or}}, \code{\link{binom2.rho}}, \code{\link{loglinb3}}. } \examples{ coalminers <- transform(coalminers, Age = (age - 42) / 5) # Get the n x 4 matrix of counts fit.temp <- vglm(cbind(nBnW,nBW,BnW,BW) ~ Age, binom2.or, coalminers) counts <- round(c(weights(fit.temp, type = "prior")) * depvar(fit.temp)) # Create a n x 2 matrix response for loglinb2() # bwmat <- matrix(c(0,0, 0,1, 1,0, 1,1), 4, 2, byrow = TRUE) bwmat <- cbind(bln = c(0,0,1,1), wheeze = c(0,1,0,1)) matof1 <- matrix(1, nrow(counts), 1) newminers <- data.frame(bln = kronecker(matof1, bwmat[,1]), wheeze = kronecker(matof1, bwmat[,2]), wt = c(t(counts)), Age = with(coalminers, rep(age, rep(4, length(age))))) newminers <- newminers[with(newminers, wt) > 0,] fit <- vglm(cbind(bln,wheeze) ~ Age, loglinb2, weight = wt, data = newminers) coef(fit, matrix = TRUE) # Same! (at least for the log odds-ratio) summary(fit) # Try reconcile this with McCullagh and Nelder (1989), p.234 (0.166-0.131) / 0.027458 # 1.275 is approximately 1.25 } \keyword{models} \keyword{regression} VGAM/man/loglaplace.Rd0000644000176000001440000002125512136651105014206 0ustar ripleyusers\name{loglaplace} \alias{loglaplace1} \alias{logitlaplace1} % \alias{alaplace3} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Log-Laplace and Logit-Laplace Distribution Family Functions } \description{ Maximum likelihood estimation of the 1-parameter log-Laplace and the 1-parameter logit-Laplace distributions. These may be used for quantile regression for counts and proportions respectively. 
} \usage{ loglaplace1(tau = NULL, llocation = "loge", ilocation = NULL, kappa = sqrt(tau/(1 - tau)), Scale.arg = 1, shrinkage.init = 0.95, parallelLocation = FALSE, digt = 4, dfmu.init = 3, rep0 = 0.5, minquantile = 0, maxquantile = Inf, imethod = 1, zero = NULL) logitlaplace1(tau = NULL, llocation = "logit", ilocation = NULL, kappa = sqrt(tau/(1 - tau)), Scale.arg = 1, shrinkage.init = 0.95, parallelLocation = FALSE, digt = 4, dfmu.init = 3, rep01 = 0.5, imethod = 1, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{tau, kappa}{ See \code{\link{alaplace1}}. } \item{llocation}{ Character. Parameter link functions for the location parameter \eqn{\xi}{xi}. See \code{\link{Links}} for more choices. However, this argument should be left unchanged with count data because it restricts the quantiles to be positive. With proportions data \code{llocation} can be assigned a link such as \code{\link{logit}}, \code{\link{probit}}, \code{\link{cloglog}}, etc. } \item{ilocation}{ Optional initial values. If given, they must be numeric, and values are recycled to the appropriate length. The default is to choose the value internally. } \item{parallelLocation}{ Logical. Should the quantiles be parallel on the transformed scale (argument \code{llocation})? Assigning this argument to \code{TRUE} circumvents the seriously embarrassing quantile crossing problem. } % \item{eq.scale}{ Logical. % Should the scale parameters be equal? It is advised to keep % \code{eq.scale = TRUE} unchanged because it does not make sense to % have different values for each \code{tau} value. % } \item{imethod}{ Initialization method. Either the value 1, 2, or \ldots. } \item{dfmu.init, shrinkage.init, Scale.arg, digt, zero}{ See \code{\link{alaplace1}}. } \item{rep0, rep01}{ Numeric, positive. Replacement values for 0s and 1s respectively. For count data, values of the response that are 0 are replaced by \code{rep0}; this avoids computing \code{log(0)}. For proportions data, values of the response that are 0 or 1 are replaced by \code{min(rangey01[1]/2, rep01/w[y <= 0])} and \code{max((1 + rangey01[2])/2, 1 - rep01/w[y >= 1])} respectively; this avoids computing \code{logit(0)} or \code{logit(1)}. Here, \code{rangey01} is the 2-vector \code{range(y[(y > 0) & (y < 1)])} of the response. } \item{minquantile, maxquantile}{ Numeric. The minimum and maximum values possible in the quantiles. These arguments are effectively ignored by default since \code{\link{loge}} keeps all quantiles positive. However, if \code{llocation = logoff(offset = 1)} then it is possible that the fitted quantiles have value 0 because \code{minquantile = 0}. } } \details{ These \pkg{VGAM} family functions implement transformed versions of the asymmetric Laplace distribution (ALD). The resulting variants may be suitable for quantile regression for count data or sample proportions. For example, for count data the logarithm of the response is assumed to follow an ALD, and for proportions data the logit of the response is assumed to follow an ALD. A positive random variable \eqn{Y} is said to have a log-Laplace distribution if \eqn{Y = e^W}{Y = exp(W)} where \eqn{W} has an ALD. There are many variants of ALDs; the one used here is described in \code{\link{alaplace1}}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. In the \code{extra} slot of the fitted object are some list components which are useful.
For example, the sample proportion of values which are less than the fitted quantile curves, which is \code{sum(wprior[y <= location]) / sum(wprior)} internally. Here, \code{wprior} are the prior weights (called \code{ssize} below), \code{y} is the response and \code{location} is a fitted quantile curve. This definition comes about naturally from the transformed ALD data. } \references{ Kotz, S., Kozubowski, T. J. and Podgorski, K. (2001) \emph{The Laplace distribution and generalizations: a revisit with applications to communications, economics, engineering, and finance}, Boston: Birkhauser. Kozubowski, T. J. and Podgorski, K. (2003) Log-Laplace distributions. \emph{International Mathematical Journal}, \bold{3}, 467--495. Yee, T. W. (2012) Quantile regression for counts and proportions. In preparation. } \author{ Thomas W. Yee } \section{Warning}{ The \pkg{VGAM} family function \code{\link{logitlaplace1}} will not handle a vector of just 0s and 1s as the response; it will only work satisfactorily if the number of trials is large. See \code{\link{alaplace1}} for other warnings. Care is needed with \code{tau} values which are too small, e.g., for count data the sample proportion of zeros must be less than all values in \code{tau}. Similarly, this also holds with \code{\link{logitlaplace1}}, which also requires all \code{tau} values to be less than the sample proportion of ones. } \note{ The form of input for \code{\link{logitlaplace1}} as response is a vector of proportions (values in \eqn{[0,1]}) and the number of trials is entered into the \code{weights} argument of \code{\link{vglm}}/\code{\link{vgam}}. See Example 2 below. See \code{\link{alaplace1}} for other notes in general. } \seealso{ \code{\link{alaplace1}}, \code{\link{dloglap}}. } \examples{ # Example 1: quantile regression of counts with regression splines set.seed(123); my.k <- exp(0) alldat <- data.frame(x2 = sort(runif(n <- 500))) mymu <- function(x) exp( 1 + 3*sin(2*x) / (x+0.5)^2) alldat <- transform(alldat, y = rnbinom(n, mu = mymu(x2), size = my.k)) mytau <- c(0.1, 0.25, 0.5, 0.75, 0.9); mydof = 3 fitp <- vglm(y ~ bs(x2, df = mydof), data=alldat, trace = TRUE, loglaplace1(tau = mytau, parallelLoc = TRUE)) # halfstepping is usual \dontrun{ par(las = 1) # Plot on a log1p() scale mylwd <- 1.5 with(alldat, plot(x2, jitter(log1p(y), factor = 1.5), col = "red", pch = "o", main = "Example 1; darkgreen=truth, blue=estimated", cex = 0.75)) with(alldat, matlines(x2, log1p(fitted(fitp)), col = "blue", lty = 1, lwd = mylwd)) finexgrid <- seq(0, 1, len=201) for(ii in 1:length(mytau)) lines(finexgrid, col = "darkgreen", lwd = mylwd, log1p(qnbinom(p = mytau[ii], mu = mymu(finexgrid), si = my.k))) } fitp@extra # Contains useful information # Example 2: sample proportions set.seed(123); nnn <- 1000; ssize <- 100 # ssize = 1 will not work! alldat <- data.frame(x2 = sort(runif(nnn))) mymu <- function(x) logit( 1.0 + 4*x, inv = TRUE) alldat <- transform(alldat, ssize = ssize, y2 = rbinom(nnn, size=ssize, prob = mymu(x2)) / ssize) mytau <- c(0.25, 0.50, 0.75) fit1 <- vglm(y2 ~ bs(x2, df = 3), data=alldat, weights=ssize, trace = TRUE, logitlaplace1(tau = mytau, lloc = "cloglog", paral = TRUE)) \dontrun{ # Check the solution. Note: this may be like comparing apples with oranges. plotvgam(fit1, se = TRUE, scol = "red", lcol = "blue", main = "Truth = 'darkgreen'") # Centered approximately ! 
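# A hedged sanity check, using only quantities named on this help page
# (see the Value section above): the weighted sample proportion of
# responses lying on or below each fitted quantile curve should be close
# to the corresponding tau value; internally this is computed as
# sum(wprior[y <= location]) / sum(wprior), with wprior the prior weights.
with(alldat, sum(ssize[y2 <= fitted(fit1)[, 1]]) / sum(ssize)) # Should be near mytau[1] = 0.25
fit1@extra # Contains useful information, as with fitp@extra above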
linkFunctionChar = as.character(fit1@misc$link) alldat = transform(alldat, trueFunction= theta2eta(theta = mymu(x2), link=linkFunctionChar)) with(alldat, lines(x2, trueFunction - mean(trueFunction), col = "darkgreen")) # Plot the data + fitted quantiles (on the original scale) myylim <- with(alldat, range(y2)) with(alldat, plot(x2, y2, col = "blue", ylim = myylim, las = 1, pch = ".", cex=2.5)) with(alldat, matplot(x2, fitted(fit1), add = TRUE, lwd = 3, type = "l")) truecol <- rep(1:3, len=fit1@misc$M) # Add the 'truth' smallxgrid <- seq(0, 1, len=501) for(ii in 1:length(mytau)) lines(smallxgrid, col=truecol[ii], lwd=2, qbinom(p = mytau[ii], prob = mymu(smallxgrid), size=ssize) / ssize) # Plot on the eta (== logit()/probit()/...) scale with(alldat, matplot(x2, predict(fit1), add = FALSE, lwd = 3, type = "l")) # Add the 'truth' for(ii in 1:length(mytau)) { true.quant <- qbinom(p = mytau[ii], pr = mymu(smallxgrid), si=ssize)/ssize lines(smallxgrid, theta2eta(theta=true.quant, link=linkFunctionChar), col=truecol[ii], lwd=2) } } } \keyword{models} \keyword{regression} VGAM/man/loglapUC.Rd0000644000176000001440000000643212136651105013611 0ustar ripleyusers\name{loglapUC} \alias{dloglap} \alias{ploglap} \alias{qloglap} \alias{rloglap} %- Also NEED an '\alias' for EACH other topic documented here. \title{ The Log-Laplace Distribution } \description{ Density, distribution function, quantile function and random generation for the 3-parameter log-Laplace distribution with location parameter \code{location.ald}, scale parameter \code{scale.ald} (on the log scale), and asymmetry parameter \code{kappa}. } \usage{ dloglap(x, location.ald = 0, scale.ald = 1, tau = 0.5, kappa = sqrt(tau/(1-tau)), log = FALSE) ploglap(q, location.ald = 0, scale.ald = 1, tau = 0.5, kappa = sqrt(tau/(1-tau))) qloglap(p, location.ald = 0, scale.ald = 1, tau = 0.5, kappa = sqrt(tau/(1-tau))) rloglap(n, location.ald = 0, scale.ald = 1, tau = 0.5, kappa = sqrt(tau/(1-tau))) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{ number of observations. If \code{length(n) > 1} then the length is taken to be the number required. } \item{location.ald, scale.ald}{ the location parameter \eqn{\xi}{xi} and the (positive) scale parameter \eqn{\sigma}{sigma}, on the log scale. } \item{tau}{ the quantile parameter \eqn{\tau}{tau}. Must consist of values in \eqn{(0,1)}. This argument is used to specify \code{kappa} and is ignored if \code{kappa} is assigned. } \item{kappa}{ the asymmetry parameter \eqn{\kappa}{kappa}. Must consist of positive values. } \item{log}{ if \code{TRUE}, probabilities \code{p} are given as \code{log(p)}. } } \details{ A positive random variable \eqn{Y} is said to have a log-Laplace distribution if \eqn{\log(Y)} has an asymmetric Laplace distribution (ALD). There are many variants of ALDs and the one used here is described in \code{\link{alaplace3}}. } \value{ \code{dloglap} gives the density, \code{ploglap} gives the distribution function, \code{qloglap} gives the quantile function, and \code{rloglap} generates random deviates. } \references{ Kozubowski, T. J. and Podgorski, K. (2003) Log-Laplace distributions. \emph{International Mathematical Journal}, \bold{3}, 467--495. } \author{ T. W. Yee } %\note{ % The \pkg{VGAM} family function \code{\link{loglaplace3}} % estimates the three parameters by maximum likelihood estimation. %} \seealso{ \code{\link{dalap}}, \code{\link{alaplace3}}, % \code{\link{loglaplace3}}. 
\code{\link{loglaplace1}}. } \examples{ loc <- 0; sigma <- exp(0.5); kappa <- 1 x <- seq(-0.2, 5, by = 0.01) \dontrun{ plot(x, dloglap(x, loc, sigma, kappa = kappa), type = "l", col = "blue", main = "Blue is density, red is cumulative distribution function", ylim = c(0,1), sub = "Purple are 5,10,...,95 percentiles", las = 1, ylab = "") abline(h = 0, col = "blue", lty = 2) lines(qloglap(seq(0.05,0.95,by = 0.05), loc, sigma, kappa = kappa), dloglap(qloglap(seq(0.05,0.95,by = 0.05), loc, sigma, kappa = kappa), loc, sigma, kappa = kappa), col = "purple", lty = 3, type = "h") lines(x, ploglap(x, loc, sigma, kappa = kappa), type = "l", col = "red") abline(h = 0, lty = 2) } ploglap(qloglap(seq(0.05,0.95,by = 0.05), loc, sigma, kappa = kappa), loc, sigma, kappa = kappa) } \keyword{distribution} VGAM/man/logit.Rd0000644000176000001440000001400312136651105013212 0ustar ripleyusers\name{logit} \alias{logit} \alias{elogit} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Logit Link Function } \description{ Computes the logit transformation, including its inverse and the first two derivatives. } \usage{ logit(theta, bvalue = NULL, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) elogit(theta, min = 0, max = 1, bminvalue = NULL, bmaxvalue = NULL, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{theta}{ Numeric or character. See below for further details. } \item{bvalue, bminvalue, bmaxvalue}{ See \code{\link{Links}}. These are boundary values. For \code{elogit}, values of \code{theta} less than or equal to \eqn{A} or greater than or equal to \eqn{B} can be replaced by \code{bminvalue} and \code{bmaxvalue}. } % Extra argument for passing in additional information. % For \code{logit}, values of \code{theta} which are equal to 0 or 1 are % replaced by \code{earg} or \code{1-earg} % (respectively, and if given) before computing the logit. \item{min, max}{ For \code{elogit}, \code{min} gives \eqn{A}, \code{max} gives \eqn{B}, and for out of range values, \code{bminvalue} and \code{bmaxvalue}. } \item{inverse, deriv, short, tag}{ Details at \code{\link{Links}}. } } \details{ The logit link function is very commonly used for parameters that lie in the unit interval. Numerical values of \code{theta} close to 0 or 1 or out of range result in \code{Inf}, \code{-Inf}, \code{NA} or \code{NaN}. The \emph{extended} logit link function \code{elogit} should be used more generally for parameters that lie in the interval \eqn{(A,B)}, say. The formula is \deqn{\log((\theta-A)/(B-\theta))}{% log((theta-A)/(B-theta))} and the default values for \eqn{A} and \eqn{B} correspond to the ordinary logit function. Numerical values of \code{theta} close to \eqn{A} or \eqn{B} or out of range result in \code{Inf}, \code{-Inf}, \code{NA} or \code{NaN}. However these can be replaced by values \eqn{bminvalue} and \eqn{bmaxvalue} first before computing the link function. } \value{ For \code{logit} with \code{deriv = 0}, the logit of \code{theta}, i.e., \code{log(theta/(1-theta))} when \code{inverse = FALSE}, and if \code{inverse = TRUE} then \code{exp(theta)/(1+exp(theta))}. For \code{deriv = 1}, then the function returns \emph{d} \code{theta} / \emph{d} \code{eta} as a function of \code{theta} if \code{inverse = FALSE}, else if \code{inverse = TRUE} then it returns the reciprocal. Here, all logarithms are natural logarithms, i.e., to base \emph{e}. } \references{ McCullagh, P. and Nelder, J. A. 
(1989) \emph{Generalized Linear Models}, 2nd ed. London: Chapman & Hall. } \author{ Thomas W. Yee } \note{ Numerical instability may occur when \code{theta} is close to 1 or 0 (for \code{logit}), or close to \eqn{A} or \eqn{B} for \code{elogit}. One way of overcoming this is to use, e.g., \code{bvalue}. In terms of the threshold approach with cumulative probabilities for an ordinal response this link function corresponds to the univariate logistic distribution (see \code{\link{logistic}}). } \seealso{ \code{\link{Links}}, \code{\link{probit}}, \code{\link{cloglog}}, \code{\link{cauchit}}, \code{\link{logistic1}}, \code{\link{loge}}, \code{\link{mlogit}}. } \examples{ p <- seq(0.01, 0.99, by = 0.01) logit(p) max(abs(logit(logit(p), inverse = TRUE) - p)) # Should be 0 p <- c(seq(-0.02, 0.02, by = 0.01), seq(0.97, 1.02, by = 0.01)) logit(p) # Has NAs logit(p, bvalue = .Machine$double.eps) # Has no NAs p <- seq(0.9, 2.2, by = 0.1) elogit(p, min = 1, max = 2, bminvalue = 1 + .Machine$double.eps, bmaxvalue = 2 - .Machine$double.eps) # Has no NAs \dontrun{ par(mfrow = c(2,2), lwd = (mylwd <- 2)) y <- seq(-4, 4, length = 100) p <- seq(0.01, 0.99, by = 0.01) for(d in 0:1) { matplot(p, cbind(logit(p, deriv = d), probit(p, deriv = d)), type = "n", col = "purple", ylab = "transformation", las = 1, main = if (d == 0) "Some probability link functions" else "First derivative") lines(p, logit(p, deriv = d), col = "limegreen") lines(p, probit(p, deriv = d), col = "purple") lines(p, cloglog(p, deriv = d), col = "chocolate") lines(p, cauchit(p, deriv = d), col = "tan") if (d == 0) { abline(v = 0.5, h = 0, lty = "dashed") legend(0, 4.5, c("logit", "probit", "cloglog", "cauchit"), col = c("limegreen", "purple", "chocolate", "tan"), lwd = mylwd) } else abline(v = 0.5, lty = "dashed") } for(d in 0) { matplot(y, cbind(logit(y, deriv = d, inverse = TRUE), probit(y, deriv = d, inverse = TRUE)), las = 1, type = "n", col = "purple", xlab = "transformation", ylab = "p", main = if (d == 0) "Some inverse probability link functions" else "First derivative") lines(y, logit(y, deriv = d, inverse = TRUE), col = "limegreen") lines(y, probit(y, deriv = d, inverse = TRUE), col = "purple") lines(y, cloglog(y, deriv = d, inverse = TRUE), col = "chocolate") lines(y, cauchit(y, deriv = d, inverse = TRUE), col = "tan") if (d == 0) { abline(h = 0.5, v = 0, lty = "dashed") legend(-4, 1, c("logit", "probit", "cloglog", "cauchit"), col = c("limegreen", "purple", "chocolate", "tan"), lwd = mylwd) } } p <- seq(0.21, 0.59, by = 0.01) plot(p, elogit(p, min = 0.2, max = 0.6), type = "l", col = "black", ylab = "transformation", xlim = c(0, 1), las = 1, main = "elogit(p, min = 0.2, max = 0.6)") par(lwd = 1) } } \keyword{math} \keyword{models} \keyword{regression} %plot(y, logit(y, inverse = TRUE), type = "l", col = "limegreen", % xlab = "transformation", ylab = "p", % lwd = 2, las = 1, main = "Some inverse probability link functions") %lines(y, probit(y, inverse = TRUE), col = "purple", lwd = 2) %lines(y, cloglog(y, inverse = TRUE), col = "chocolate", lwd = 2) %abline(h = 0.5, v = 0, lty = "dashed") VGAM/man/logistic.Rd0000644000176000001440000000724412136651105013722 0ustar ripleyusers\name{logistic} \alias{logistic} \alias{logistic1} \alias{logistic2} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Logistic Distribution Family Function } \description{ Estimates the location and scale parameters of the logistic distribution by maximum likelihood estimation. 
} \usage{ logistic1(llocation = "identity", scale.arg = 1, imethod = 1) logistic2(llocation = "identity", lscale = "loge", ilocation = NULL, iscale = NULL, imethod = 1, zero = -2) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{llocation, lscale}{ Parameter link functions applied to the location parameter \eqn{l} and scale parameter \eqn{s}. See \code{\link{Links}} for more choices, and \code{\link{CommonVGAMffArguments}} for more information. } \item{scale.arg}{ Known positive scale parameter (called \eqn{s} below). } \item{ilocation, iscale}{ See \code{\link{CommonVGAMffArguments}} for more information. } \item{imethod, zero}{ See \code{\link{CommonVGAMffArguments}} for more information. } } \details{ The two-parameter logistic distribution has a density that can be written as \deqn{f(y;l,s) = \frac{\exp[-(y-l)/s]}{ s\left( 1 + \exp[-(y-l)/s] \right)^2}}{% f(y;l,s) = exp[-(y-l)/s] / [s * ( 1 + exp[-(y-l)/s] )^2] } where \eqn{s > 0} is the scale parameter, and \eqn{l} is the location parameter. The response \eqn{-\infty 1-Smallno} where \code{U} is the upper limit. } } \details{ The details are given in \code{\link{logff}}. } \value{ \code{dlog} gives the density, \code{plog} gives the distribution function, and % \code{qlog} gives the quantile function, and \code{rlog} generates random deviates. } \references{ Evans, M., Hastings, N. and Peacock, B. (2000) \emph{Statistical Distributions}, New York: Wiley-Interscience, Third edition. } \author{ T. W. Yee } \note{ Given some response data, the \pkg{VGAM} family function \code{\link{logff}} estimates the parameter \code{prob}. For \code{plog()}, if argument \code{q} contains large values and/or \code{q} is long in length then the memory requirements may be very high. Very large values in \code{q} are handled by an approximation by Owen (1965). } \seealso{ \code{\link{logff}}. } \examples{ dlog(1:20, 0.5) rlog(20, 0.5) \dontrun{ prob <- 0.8; x <- 1:10 plot(x, dlog(x, prob = prob), type = "h", ylim = 0:1, sub = "prob=0.8", las = 1, col = "blue", ylab = "Probability", main="Logarithmic distribution: blue=density; red=distribution function") lines(x + 0.1, plog(x, prob = prob), col = "red", lty = 3, type = "h") } } \keyword{distribution} VGAM/man/lms.yjn.Rd0000644000176000001440000001277312136651105013502 0ustar ripleyusers\name{lms.yjn} \alias{lms.yjn} \alias{lms.yjn2} %- Also NEED an '\alias' for EACH other topic documented here. \title{ LMS Quantile Regression with a Yeo-Johnson Transformation to Normality } \description{ LMS quantile regression with the Yeo-Johnson transformation to normality. } \usage{ lms.yjn(percentiles = c(25, 50, 75), zero = c(1,3), llambda = "identity", lsigma = "loge", dfmu.init = 4, dfsigma.init = 2, ilambda = 1, isigma = NULL, rule = c(10, 5), yoffset = NULL, diagW = FALSE, iters.diagW = 6) lms.yjn2(percentiles=c(25,50,75), zero=c(1,3), llambda = "identity", lmu = "identity", lsigma = "loge", dfmu.init = 4, dfsigma.init = 2, ilambda = 1.0, isigma = NULL, yoffset = NULL, nsimEIM = 250) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{percentiles}{ A numerical vector containing values between 0 and 100, which are the quantiles. They will be returned as `fitted values'. } \item{zero}{ See \code{\link{lms.bcn}}. } \item{llambda, lmu, lsigma}{ See \code{\link{lms.bcn}}. } \item{dfmu.init, dfsigma.init}{ See \code{\link{lms.bcn}}. } \item{ilambda, isigma}{ See \code{\link{lms.bcn}}. 
} \item{rule}{ Number of abscissae used in the Gaussian integration scheme to work out elements of the weight matrices. The values given are the possible choices, with the first value being the default. The larger the value, the more accurate the approximation is likely to be but involving more computational expense. } \item{yoffset}{ A value to be added to the response y, for the purpose of centering the response before fitting the model to the data. The default value, \code{NULL}, means \code{-median(y)} is used, so that the response actually used has median zero. The \code{yoffset} is saved on the object and used during prediction. } \item{diagW}{ Logical. This argument is offered because the expected information matrix may not be positive-definite. Using the diagonal elements of this matrix results in a higher chance of it being positive-definite, however convergence will be very slow. If \code{TRUE}, then the first \code{iters.diagW} iterations will use the diagonal of the expected information matrix. The default is \code{FALSE}, meaning faster convergence. } \item{iters.diagW}{ Integer. Number of iterations in which the diagonal elements of the expected information matrix are used. Only used if \code{diagW = TRUE}. } \item{nsimEIM}{ See \code{\link{CommonVGAMffArguments}} for more information. } } \details{ Given a value of the covariate, this function applies a Yeo-Johnson transformation to the response to best obtain normality. The parameters chosen to do this are estimated by maximum likelihood or penalized maximum likelihood. The function \code{lms.yjn2()} estimates the expected information matrices using simulation (and is consequently slower) while \code{lms.yjn()} uses numerical integration. Try the other if one function fails. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Yeo, I.-K. and Johnson, R. A. (2000) A new family of power transformations to improve normality or symmetry. \emph{Biometrika}, \bold{87}, 954--959. Yee, T. W. (2004) Quantile regression via vector generalized additive models. \emph{Statistics in Medicine}, \bold{23}, 2295--2315. Yee, T. W. (2002) An Implementation for Regression Quantile Estimation. Pages 3--14. In: Haerdle, W. and Ronz, B., \emph{Proceedings in Computational Statistics COMPSTAT 2002}. Heidelberg: Physica-Verlag. Documentation accompanying the \pkg{VGAM} package at \url{http://www.stat.auckland.ac.nz/~yee} contains further information and examples. } \author{ Thomas W. Yee } \note{ The response may contain both positive and negative values. In contrast, the LMS-Box-Cox-normal and LMS-Box-Cox-gamma methods only handle a positive response because the Box-Cox transformation cannot handle negative values. Some other notes can be found at \code{\link{lms.bcn}}. } \section{Warning }{ The computations are not simple, therefore convergence may fail. In that case, try different starting values. The generic function \code{predict}, when applied to a \code{lms.yjn} fit, does not add back the \code{yoffset} value. } \seealso{ \code{\link{lms.bcn}}, \code{\link{lms.bcg}}, \code{\link{qtplot.lmscreg}}, \code{\link{deplot.lmscreg}}, \code{\link{cdf.lmscreg}}, \code{\link{bmi.nz}}, \code{\link{amlnormal}}. 
} \examples{ fit <- vgam(BMI ~ s(age, df = 4), lms.yjn, bmi.nz, trace = TRUE) head(predict(fit)) head(fitted(fit)) head(bmi.nz) # Person 1 is near the lower quartile of BMI amongst people his age head(cdf(fit)) \dontrun{ # Quantile plot par(bty = "l", mar = c(5, 4, 4, 3) + 0.1, xpd = TRUE) qtplot(fit, percentiles = c(5, 50, 90, 99), main = "Quantiles", xlim = c(15, 90), las = 1, ylab = "BMI", lwd = 2, lcol = 4) # Density plot ygrid <- seq(15, 43, len = 100) # BMI ranges par(mfrow = c(1, 1), lwd = 2) (aa <- deplot(fit, x0 = 20, y = ygrid, xlab = "BMI", col = "black", main = "Density functions at Age = 20 (black), 42 (red) and 55 (blue)")) aa <- deplot(fit, x0 = 42, y = ygrid, add = TRUE, llty = 2, col = "red") aa <- deplot(fit, x0 = 55, y = ygrid, add = TRUE, llty = 4, col = "blue", Attach = TRUE) with(aa@post, deplot) # Contains density function values; == a@post$deplot } } \keyword{models} \keyword{regression} VGAM/man/lms.bcn.Rd0000644000176000001440000002134012136651105013432 0ustar ripleyusers\name{lms.bcn} \alias{lms.bcn} %- Also NEED an '\alias' for EACH other topic documented here. \title{ LMS Quantile Regression with a Box-Cox Transformation to Normality } \description{ LMS quantile regression with the Box-Cox transformation to normality. } \usage{ lms.bcn(percentiles = c(25, 50, 75), zero = c(1, 3), llambda = "identity", lmu = "identity", lsigma = "loge", dfmu.init = 4, dfsigma.init = 2, ilambda = 1, isigma = NULL, tol0 = 0.001, expectiles = FALSE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{percentiles}{ A numerical vector containing values between 0 and 100, which are the quantiles. They will be returned as `fitted values'. % or expectiles. } \item{zero}{ An integer-valued vector specifying which linear/additive predictors are modelled as intercepts only. The values must be from the set \{1,2,3\}. The default value usually increases the chance of successful convergence. Setting \code{zero = NULL} means they all are functions of the covariates. For more information see \code{\link{CommonVGAMffArguments}}. } \item{llambda, lmu, lsigma}{ Parameter link functions applied to the first, second and third linear/additive predictors. See \code{\link{Links}} for more choices, and \code{\link{CommonVGAMffArguments}}. } \item{dfmu.init}{ Degrees of freedom for the cubic smoothing spline fit applied to get an initial estimate of mu. See \code{\link{vsmooth.spline}}. } \item{dfsigma.init}{ Degrees of freedom for the cubic smoothing spline fit applied to get an initial estimate of sigma. See \code{\link{vsmooth.spline}}. This argument may be assigned \code{NULL} to get an initial value using some other algorithm. } \item{ilambda}{ Initial value for lambda. If necessary, it is recycled to be a vector of length \eqn{n} where \eqn{n} is the number of (independent) observations. } \item{isigma}{ Optional initial value for sigma. If necessary, it is recycled to be a vector of length \eqn{n}. The default value, \code{NULL}, means an initial value is computed in the \code{@initialize} slot of the family function. } \item{tol0}{ Small positive number, the tolerance for testing if lambda is equal to zero. } \item{expectiles}{ Experimental; please do not use. % A single logical. If \code{TRUE} then the method is LMS-expectile % regression; \emph{expectiles} are returned rather than quantiles. % The default is LMS quantile regression based on the normal distribution. 
} } \details{ Given a value of the covariate, this function applies a Box-Cox transformation to the response to best obtain normality. The parameters chosen to do this are estimated by maximum likelihood or penalized maximum likelihood. In more detail, the basic idea behind this method is that, for a fixed value of \eqn{x}, a Box-Cox transformation of the response \eqn{Y} is applied to obtain standard normality. The 3 parameters (\eqn{\lambda}{lambda}, \eqn{\mu}{mu}, \eqn{\sigma}{sigma}, which start with the letters ``L-M-S'' respectively, hence its name) are chosen to maximize a penalized log-likelihood (with \code{\link{vgam}}). Then the appropriate quantiles of the standard normal distribution are back-transformed onto the original scale to get the desired quantiles. The three parameters may vary as a smooth function of \eqn{x}. The Box-Cox power transformation here of \eqn{Y}, given \eqn{x}, is \deqn{Z = [(Y / \mu(x))^{\lambda(x)} - 1] / ( \sigma(x) \, \lambda(x) )}{ Z = [(Y / mu(x))^{lambda(x)} - 1] / (sigma(x) * lambda(x))} for \eqn{\lambda(x) \neq 0}{lambda(x) != 0}. (The singularity at \eqn{\lambda(x) = 0}{lambda(x) = 0} is handled by a simple function involving a logarithm.) Then \eqn{Z} is assumed to have a standard normal distribution. The parameter \eqn{\sigma(x)}{sigma(x)} must be positive, therefore \pkg{VGAM} chooses \eqn{\eta(x)^T = (\lambda(x), \mu(x), \log(\sigma(x)))}{eta(x)^T = (lambda(x), mu(x), log(sigma(x)))} by default. The parameter \eqn{\mu}{mu} is also positive, but while \eqn{\log(\mu)}{log(mu)} is available, it is not the default because \eqn{\mu}{mu} is more directly interpretable. Given the estimated linear/additive predictors, the \eqn{100\alpha}{100*alpha} percentile can be estimated by inverting the Box-Cox power transformation at the \eqn{100\alpha}{100*alpha} percentile of the standard normal distribution. Of the three functions, it is often a good idea to allow \eqn{\mu(x)}{mu(x)} to be more flexible because the functions \eqn{\lambda(x)}{lambda(x)} and \eqn{\sigma(x)}{sigma(x)} usually vary more smoothly with \eqn{x}. This is somewhat reflected in the default value for the argument \code{zero}, viz. \code{zero = c(1,3)}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, \code{\link{rrvglm}} and \code{\link{vgam}}. } \references{ Cole, T. J. and Green, P. J. (1992) Smoothing Reference Centile Curves: The LMS Method and Penalized Likelihood. \emph{Statistics in Medicine}, \bold{11}, 1305--1319. Green, P. J. and Silverman, B. W. (1994) \emph{Nonparametric Regression and Generalized Linear Models: A Roughness Penalty Approach}, London: Chapman & Hall. Yee, T. W. (2004) Quantile regression via vector generalized additive models. \emph{Statistics in Medicine}, \bold{23}, 2295--2315. Documentation accompanying the \pkg{VGAM} package at \url{http://www.stat.auckland.ac.nz/~yee} contains further information and examples. } \author{ Thomas W. Yee } \note{ The response must be positive because the Box-Cox transformation cannot handle negative values. The LMS-Yeo-Johnson-normal method can handle both positive and negative values. % LMS-BCN expectile regression is a \emph{new} methodology proposed % by myself! In general, the lambda and sigma functions should be smoother than the mean function. Having \code{zero = 1}, \code{zero = 3} or \code{zero = c(1,3)} is often a good idea. See the example below.
While it is usual to regress the response against a single covariate, it is possible to add other explanatory variables, e.g., gender. See \url{http://www.stat.auckland.ac.nz/~yee} for further information and examples about this feature. } \section{Warning }{ The computations are not simple, therefore convergence may fail. Set \code{trace = TRUE} to monitor convergence if it isn't set already. Convergence failure will occur if, e.g., the response is bimodal at any particular value of \eqn{x}. In case of convergence failure, try different starting values. Also, the estimate may diverge quickly near the solution, in which case try prematurely stopping the iterations by assigning \code{maxits} to be the iteration number corresponding to the highest likelihood value. One trick is to fit a simple model and use it to provide initial values for a more complex model; see in the examples below. } \seealso{ \code{\link{lms.bcg}}, \code{\link{lms.yjn}}, \code{\link{qtplot.lmscreg}}, \code{\link{deplot.lmscreg}}, \code{\link{cdf.lmscreg}}, % \code{\link{bmi.nz}}, \code{\link{alaplace1}}, \code{\link{amlnormal}}, \code{\link{denorm}}, \code{\link{CommonVGAMffArguments}}. } \examples{ \dontrun{ require(VGAMdata) mysubset <- subset(xs.nz, sex == "M" & ethnic == "1" & Study1) mysubset <- transform(mysubset, BMI = weight / height^2) BMIdata <- mysubset[, c("age", "BMI")] BMIdata <- na.omit(BMIdata) BMIdata <- subset(BMIdata, BMI < 80 & age < 65) # Delete an outlier summary(BMIdata) fit <- vgam(BMI ~ s(age, df = c(4, 2)), lms.bcn(zero = 1), BMIdata) head(predict(fit)) head(fitted(fit)) head(BMIdata) head(cdf(fit)) # Person 56 is probably overweight, given his age 100 * colMeans(c(depvar(fit)) < fitted(fit)) # Empirical proportions # Convergence problems? Try this trick: fit0 is a simpler model used for fit1 fit0 <- vgam(BMI ~ s(age, df = 4), lms.bcn(zero = c(1,3)), BMIdata) fit1 <- vgam(BMI ~ s(age, df = c(4, 2)), lms.bcn(zero = 1), BMIdata, etastart = predict(fit0)) } \dontrun{ # Quantile plot par(bty = "l", mar = c(5, 4, 4, 3) + 0.1, xpd = TRUE) qtplot(fit, percentiles = c(5, 50, 90, 99), main = "Quantiles", xlim = c(15, 66), las = 1, ylab = "BMI", lwd = 2, lcol = 4) # Density plot ygrid <- seq(15, 43, len = 100) # BMI ranges par(mfrow = c(1, 1), lwd = 2) (aa <- deplot(fit, x0 = 20, y = ygrid, xlab = "BMI", col = "black", main = "Density functions at Age = 20 (black), 42 (red) and 55 (blue)")) aa <- deplot(fit, x0 = 42, y = ygrid, add = TRUE, llty = 2, col = "red") aa <- deplot(fit, x0 = 55, y = ygrid, add = TRUE, llty = 4, col = "blue", Attach = TRUE) aa@post$deplot # Contains density function values } } \keyword{models} \keyword{regression} VGAM/man/lms.bcg.Rd0000644000176000001440000000757412136651105013440 0ustar ripleyusers\name{lms.bcg} \alias{lms.bcg} %- Also NEED an '\alias' for EACH other topic documented here. \title{ LMS Quantile Regression with a Box-Cox transformation to a Gamma Distribution } \description{ LMS quantile regression with the Box-Cox transformation to the gamma distribution. } \usage{ lms.bcg(percentiles = c(25, 50, 75), zero = c(1, 3), llambda = "identity", lmu = "identity", lsigma = "loge", dfmu.init = 4, dfsigma.init = 2, ilambda = 1, isigma = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{percentiles}{ A numerical vector containing values between 0 and 100, which are the quantiles. They will be returned as `fitted values'. } \item{zero}{ See \code{\link{lms.bcn}}. } \item{llambda, lmu, lsigma}{ See \code{\link{lms.bcn}}. 
} \item{dfmu.init, dfsigma.init}{ See \code{\link{lms.bcn}}. } \item{ilambda, isigma}{ See \code{\link{lms.bcn}}. } } \details{ Given a value of the covariate, this function applies a Box-Cox transformation to the response to best obtain a gamma distribution. The parameters chosen to do this are estimated by maximum likelihood or penalized maximum likelihood. Similar details can be found at \code{\link{lms.bcn}}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, \code{\link{rrvglm}} and \code{\link{vgam}}. } \references{ Lopatatzidis A. and Green, P. J. (unpublished manuscript) Semiparametric quantile regression using the gamma distribution. Yee, T. W. (2004) Quantile regression via vector generalized additive models. \emph{Statistics in Medicine}, \bold{23}, 2295--2315. Documentation accompanying the \pkg{VGAM} package at \url{http://www.stat.auckland.ac.nz/~yee} contains further information and examples. } \author{ Thomas W. Yee } \note{ Similar notes can be found at \code{\link{lms.bcn}}. } \section{Warning }{ This \pkg{VGAM} family function comes with the same warnings as \code{\link{lms.bcn}}. Also, the expected value of the second derivative with respect to lambda may be incorrect (my calculations do not agree with the Lopatatzidis and Green manuscript.) } \seealso{ \code{\link{lms.bcn}}, \code{\link{lms.yjn}}, \code{\link{qtplot.lmscreg}}, \code{\link{deplot.lmscreg}}, \code{\link{cdf.lmscreg}}, \code{\link{bmi.nz}}, \code{\link{amlexponential}}. } \examples{ # This converges, but deplot(fit) and qtplot(fit) do not work fit0 <- vglm(BMI ~ bs(age, df = 4), lms.bcg, bmi.nz, trace = TRUE) coef(fit0, matrix = TRUE) \dontrun{ par(mfrow = c(1, 1)) plotvgam(fit0, se = TRUE) # Plot mu function (only) } # Use a trick: fit0 is used for initial values for fit1. fit1 <- vgam(BMI ~ s(age, df = c(4, 2)), etastart = predict(fit0), lms.bcg(zero = 1), bmi.nz, trace = TRUE) # Difficult to get a model that converges. # Here, we prematurely stop iterations because it fails near the solution. fit2 <- vgam(BMI ~ s(age, df = c(4, 2)), maxit = 4, lms.bcg(zero = 1, ilam = 3), bmi.nz, trace = TRUE) summary(fit1) head(predict(fit1)) head(fitted(fit1)) head(bmi.nz) # Person 1 is near the lower quartile of BMI amongst people his age head(cdf(fit1)) \dontrun{ # Quantile plot par(bty = "l", mar=c(5, 4, 4, 3) + 0.1, xpd = TRUE) qtplot(fit1, percentiles=c(5, 50, 90, 99), main = "Quantiles", xlim = c(15, 90), las = 1, ylab = "BMI", lwd = 2, lcol = 4) # Density plot ygrid <- seq(15, 43, len = 100) # BMI ranges par(mfrow = c(1, 1), lwd = 2) (aa <- deplot(fit1, x0 = 20, y = ygrid, xlab = "BMI", col = "black", main = "Density functions at Age = 20 (black), 42 (red) and 55 (blue)")) aa <- deplot(fit1, x0=42, y=ygrid, add=TRUE, llty=2, col="red") aa <- deplot(fit1, x0=55, y=ygrid, add=TRUE, llty=4, col="blue", Attach=TRUE) aa@post$deplot # Contains density function values } } \keyword{models} \keyword{regression} VGAM/man/lirat.Rd0000644000176000001440000000451712136651105013220 0ustar ripleyusers\name{lirat} \alias{lirat} \docType{data} \title{ Low-iron Rat Teratology Data } \description{ Low-iron rat teratology data. } \usage{data(lirat)} \format{ A data frame with 58 observations on the following 4 variables. \describe{ \item{\code{N}}{Litter size.} \item{\code{R}}{Number of dead fetuses.} \item{\code{hb}}{Hemoglobin level.} \item{\code{grp}}{Group number. 
Group 1 is the untreated (low-iron) group, group 2 received injections on day 7 or day 10 only, group 3 received injections on days 0 and 7, and group 4 received injections weekly.} } } \details{ The following description comes from Moore and Tsiatis (1991). The data comes from the experimental setup from Shepard et al. (1980), which is typical of studies of the effects of chemical agents or dietary regimens on fetal development in laboratory rats. Female rats were put in iron-deficient diets and divided into 4 groups. One group of controls was given weekly injections of iron supplement to bring their iron intake to normal levels, while another group was given only placebo injections. Two other groups were given fewer iron-supplement injections than the controls. The rats were made pregnant, sacrificed 3 weeks later, and the total number of fetuses and the number of dead fetuses in each litter were counted. For each litter the number of dead fetuses may be considered to be Binomial(\eqn{N,p}) where \eqn{N} is the litter size and \eqn{p} is the probability of a fetus dying. The parameter \eqn{p} is expected to vary from litter to litter, therefore the total variance of the proportions will be greater than that predicted by a binomial model, even when the covariates for hemoglobin level and experimental group are accounted for. } \source{ Moore, D. F. and Tsiatis, A. (1991) Robust Estimation of the Variance in Moment Methods for Extra-binomial and Extra-Poisson Variation. \emph{Biometrics}, \bold{47}, 383--401. } \references{ Shepard, T. H., Mackler, B. and Finch, C. A. (1980) Reproductive studies in the iron-deficient rat. \emph{Teratology}, \bold{22}, 329--334. } \examples{ \dontrun{ # cf. Figure 3 of Moore and Tsiatis (1991) plot(R / N ~ hb, data = lirat, pch = as.character(grp), col = grp, las = 1, xlab = "Hemoglobin level", ylab = "Proportion Dead") } } \keyword{datasets} VGAM/man/linoUC.Rd0000644000176000001440000000454312136651105013275 0ustar ripleyusers\name{Lino} \alias{Lino} \alias{dlino} \alias{plino} \alias{qlino} \alias{rlino} \title{The Generalized Beta Distribution (Libby and Novick, 1982)} \description{ Density, distribution function, quantile function and random generation for the generalized beta distribution, as proposed by Libby and Novick (1982). } \usage{ dlino(x, shape1, shape2, lambda = 1, log = FALSE) plino(q, shape1, shape2, lambda = 1) qlino(p, shape1, shape2, lambda = 1) rlino(n, shape1, shape2, lambda = 1) } \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. Must be a positive integer of length 1.} \item{shape1, shape2, lambda}{ see \code{\link{lino}}. } \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the density is returned. } } \value{ \code{dlino} gives the density, \code{plino} gives the distribution function, \code{qlino} gives the quantile function, and \code{rlino} generates random deviates. } %\references{ % Libby, D. L. and Novick, M. R. (1982) % Multivariate generalized beta distributions with applications to % utility assessment. % \emph{Journal of Educational Statistics}, % \bold{7}, 271--294. % % Gupta, A. K. and Nadarajah, S. (2004) % \emph{Handbook of Beta Distribution and Its Applications}, % NY: Marcel Dekker, Inc. % %} \author{ T. W. Yee } \details{ See \code{\link{lino}}, the \pkg{VGAM} family function for estimating the parameters, for the formula of the probability density function and other details. } %\note{ % %} \seealso{ \code{\link{lino}}. 
} \examples{ \dontrun{ lambda <- 0.4; shape1 <- exp(1.3); shape2 <- exp(1.3) x <- seq(0.0, 1.0, len = 101) plot(x, dlino(x, shape1 = shape1, shape2 = shape2, lambda = lambda), type = "l", col = "blue", las = 1, ylab = "", main = "Blue is density, red is cumulative distribution function", sub = "Purple lines are the 10,20,...,90 percentiles") abline(h = 0, col = "blue", lty = 2) lines(x, plino(x, shape1 = shape1, shape2 = shape2, l = lambda), col = "red") probs <- seq(0.1, 0.9, by = 0.1) Q <- qlino(probs, shape1 = shape1, shape2 = shape2, lambda = lambda) lines(Q, dlino(Q, shape1 = shape1, shape2 = shape2, lambda = lambda), col = "purple", lty = 3, type = "h") plino(Q, shape1 = shape1, shape2 = shape2, l = lambda) - probs # Should be all 0 } } \keyword{distribution} VGAM/man/lino.Rd0000644000176000001440000001012512136651105013036 0ustar ripleyusers\name{lino} \alias{lino} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Generalized Beta Distribution Family Function } \description{ Maximum likelihood estimation of the 3-parameter generalized beta distribution as proposed by Libby and Novick (1982). } \usage{ lino(lshape1 = "loge", lshape2 = "loge", llambda = "loge", ishape1 = NULL, ishape2 = NULL, ilambda = 1, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lshape1, lshape2}{ Parameter link functions applied to the two (positive) shape parameters \eqn{a} and \eqn{b}. See \code{\link{Links}} for more choices. } \item{llambda}{ Parameter link function applied to the parameter \eqn{\lambda}{lambda}. See \code{\link{Links}} for more choices. } \item{ishape1, ishape2, ilambda}{ Initial values for the parameters. A \code{NULL} value means one is computed internally. The argument \code{ilambda} must be numeric, and the default corresponds to a standard beta distribution. } \item{zero}{ An integer-valued vector specifying which linear/additive predictors are modelled as intercepts only. Here, the values must be from the set \{1,2,3\} which correspond to \eqn{a}, \eqn{b}, \eqn{\lambda}{lambda}, respectively. } } \details{ Proposed by Libby and Novick (1982), this distribution has density \deqn{f(y;a,b,\lambda) = \frac{\lambda^{a} y^{a-1} (1-y)^{b-1}}{ B(a,b) \{1 - (1-\lambda) y\}^{a+b}}}{% f(y;a,b,lambda) = lambda^a y^(a-1) (1-y)^(b-1) / [B(a,b) (1 - (1-lambda)*y)^(a+b)]} for \eqn{a > 0}, \eqn{b > 0}, \eqn{\lambda > 0}{lambda > 0}, \eqn{0 < y < 1}. Here \eqn{B} is the beta function (see \code{\link[base:Special]{beta}}). The mean is a complicated function involving the Gauss hypergeometric function. If \eqn{X} has a \code{lino} distribution with parameters \code{shape1}, \code{shape2}, \code{lambda}, then \eqn{Y=\lambda X/(1-(1-\lambda)X)}{Y = \lambda*X / (1 - (1-\lambda)*X)} has a standard beta distribution with parameters \code{shape1}, \code{shape2}. Since \eqn{\log(\lambda)=0}{log(lambda)=0} corresponds to the standard beta distribution, a \code{summary} of the fitted model performs a t-test for whether the data belongs to a standard beta distribution (provided the \code{\link{loge}} link for \eqn{\lambda}{lambda} is used; this is the default). } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Libby, D. L. and Novick, M. R. (1982) Multivariate generalized beta distributions with applications to utility assessment. \emph{Journal of Educational Statistics}, \bold{7}, 271--294. Gupta, A. K. 
and Nadarajah, S. (2004) \emph{Handbook of Beta Distribution and Its Applications}, NY: Marcel Dekker, Inc. } \author{ T. W. Yee } \note{ The fitted values, which are usually the mean, have not been implemented yet and consequently are \code{NA}s. Although Fisher scoring is used, the working weight matrices are positive-definite only in a certain region of the parameter space. Problems with this indicate poor initial values, an ill-conditioned model, or insufficient data, etc. This model can be difficult to fit. A reasonably good value of \code{ilambda} seems to be needed, so if the self-starting initial values fail, try experimenting with the initial value arguments. Experience suggests that \code{ilambda} is better chosen a little larger, rather than smaller, than the true value. } \seealso{ \code{\link{Lino}}, \code{\link{genbetaII}}. } \examples{ ldata1 <- data.frame(y = rbeta(n = 1000, exp(0.5), exp(1))) # ~ standard beta fit <- vglm(y ~ 1, lino, ldata1, trace = TRUE) coef(fit, matrix = TRUE) Coef(fit) head(fitted(fit)) summary(fit) # Nonstandard beta distribution ldata2 <- data.frame(y = rlino(n = 1000, shape1 = 2, shape2 = 3, lambda = exp(1))) fit <- vglm(y ~ 1, lino(lshape1 = identity, lshape2 = identity, ilamb = 10), ldata2) coef(fit, matrix = TRUE) } \keyword{models} \keyword{regression} VGAM/man/lindley.Rd0000644000176000001440000000417012136651105013540 0ustar ripleyusers\name{lindley} \alias{lindley} %- Also NEED an '\alias' for EACH other topic documented here. \title{ 1-parameter Lindley Distribution } \description{ Estimates the (1-parameter) Lindley distribution by maximum likelihood estimation. } \usage{ lindley(link = "loge", itheta = NULL, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{link}{ Link function applied to the (positive) parameter. See \code{\link{Links}} for more choices. } % \item{earg}{ % List. Extra argument for the link. % See \code{earg} in \code{\link{Links}} for general information. % } \item{itheta, zero}{ See \code{\link{CommonVGAMffArguments}} for information. } } \details{ The density function is given by \deqn{f(y; \theta) = \theta^2 (1 + y) \exp(-\theta y) / (1 + \theta)}{% f(y; theta) = theta^2 * (1 + y) * exp(-theta * y) / (1 + theta)} for \eqn{\theta > 0}{theta > 0} and \eqn{y > 0}. The mean of \eqn{Y} (returned as the fitted values) is \eqn{\mu = (\theta + 2) / (\theta (\theta + 1))}{mu = (theta + 2) / (theta * (theta + 1))}. The variance is \eqn{(\theta^2 + 4 \theta + 2) / (\theta (\theta + 1))^2}{(theta^2 + 4 * theta + 2) / (theta * (theta + 1))^2}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Lindley, D. V. (1958) Fiducial distributions and Bayes' theorem. \emph{Journal of the Royal Statistical Society, Series B, Methodological}, \bold{20}, 102--107. Ghitany, M. E., Atieh, B. and Nadarajah, S. (2008) Lindley distribution and its application. \emph{Math. Comput. Simul.}, \bold{78}, 493--506. } \author{ T. W. Yee } \note{ This \pkg{VGAM} family function can handle multiple responses (inputted as a matrix). Fisher scoring is implemented.
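For example, as a quick numerical check of the mean formula above: with \eqn{\theta = 2}{theta = 2}, \eqn{E(Y) = (2 + 2) / (2 \times 3) = 2/3}{E(Y) = (2 + 2) / (2 * 3) = 2/3}, so \code{mean(rlind(10^5, theta = 2))} should be approximately 0.667 (see \code{\link{rlind}}).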
} \seealso{ \code{\link{dlind}}, \code{\link{gamma2.ab}}. } \examples{ ldata <- data.frame(y = rlind(n = 1000, theta = exp(3))) fit <- vglm(y ~ 1, lindley, ldata, trace = TRUE, crit = "coef") coef(fit, matrix = TRUE) Coef(fit) summary(fit) } \keyword{models} \keyword{regression} VGAM/man/lindUC.Rd0000644000176000001440000000244212136651105013256 0ustar ripleyusers\name{Lindley} \alias{Lindley} \alias{dlind} \alias{plind} %\alias{qlind} \alias{rlind} \title{The Lindley Distribution} \description{ Density, cumulative distribution function, % quantile function and random generation for the Lindley distribution. } \usage{ dlind(x, theta, log = FALSE) plind(q, theta) %qlind(p, theta) rlind(n, theta) } \arguments{ \item{x, q}{vector of quantiles.} % \item{p}{vector of probabilities.} \item{n}{number of observations. } \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the density is returned. } \item{theta}{positive parameter. } } \value{ \code{dlind} gives the density, \code{plind} gives the cumulative distribution function, and % \code{qlind} gives the quantile function, and \code{rlind} generates random deviates. } \author{ T. W. Yee } \details{ See \code{\link{lindley}} for details. } %\note{ % %} \seealso{ \code{\link{lindley}}. } \examples{ theta <- exp(-1); x <- seq(0.0, 17, length = 700) dlind(0:10, theta) \dontrun{ plot(x, dlind(x, theta), type = "l", las = 1, col = "blue", main = "dlind(x, theta = exp(-1))") abline(h = 1, col = "grey", lty = "dashed") } } \keyword{distribution} % probs <- seq(0.01, 0.99, by = 0.01) % max(abs(plind(qlind(p = probs, theta), theta) - probs)) # Should be 0 VGAM/man/lgammaff.Rd0000644000176000001440000000755712136651105013662 0ustar ripleyusers\name{lgammaff} \alias{lgammaff} \alias{lgamma3ff} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Log-gamma Distribution Family Function } \description{ Estimation of the parameters of the standard and nonstandard log-gamma distributions. } \usage{ lgammaff(link = "loge", init.k = NULL) lgamma3ff(llocation = "identity", lscale = "loge", lshape = "loge", ilocation = NULL, iscale = NULL, ishape = 1, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{llocation, lscale}{ Parameter link functions applied to the location parameter \eqn{a} and the positive scale parameter \eqn{b}. See \code{\link{Links}} for more choices. } \item{link, lshape}{ Parameter link function applied to the positive shape parameter \eqn{k}. See \code{\link{Links}} for more choices. } \item{init.k, ishape}{ Initial value for \eqn{k}. If given, it must be positive. If failure to converge occurs, try some other value. The default means an initial value is determined internally. } \item{ilocation, iscale}{ Initial values for \eqn{a} and \eqn{b}. The defaults mean an initial value is determined internally for each. } \item{zero}{ An integer-valued vector specifying which linear/additive predictors are modelled as intercepts only. The values must be from the set \{1,2,3\}. The default value means none are modelled as intercept-only terms. See \code{\link{CommonVGAMffArguments}} for more information. } } \details{ The probability density function of the standard log-gamma distribution is given by \deqn{f(y;k)=\exp[ky - \exp(y)] / \Gamma(k),}{% f(y;k) = exp[ky - exp(y)]/gamma(k),} for parameter \eqn{k>0}{k>0} and all real \eqn{y}. The mean of \eqn{Y} is \code{digamma(k)} (returned as the fitted values) and its variance is \code{trigamma(k)}.
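Equivalently, if \eqn{W} has a standard gamma distribution with shape parameter \eqn{k}, then \eqn{Y = \log(W)}{Y = log(W)} has the standard log-gamma distribution; so, for example, variates from \code{\link{rlgamma}} with the default location and scale can be checked informally against \code{log(rgamma(n, shape = k))}.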
For the non-standard log-gamma distribution, one replaces \eqn{y} by \eqn{(y-a)/b}, where \eqn{a} is the location parameter and \eqn{b} is the positive scale parameter. Then the density function is \deqn{f(y)=\exp[k(y-a)/b - \exp((y-a)/b)] / (b \, \Gamma(k)).}{% f(y) = exp[k(y-a)/b - exp((y-a)/b)]/(b*gamma(k)).} The mean and variance of \eqn{Y} are \code{a + b*digamma(k)} (returned as the fitted values) and \code{b^2 * trigamma(k)}, respectively. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Kotz, S. and Nadarajah, S. (2000) \emph{Extreme Value Distributions: Theory and Applications}, pages 48--49, London: Imperial College Press. Johnson, N. L. and Kotz, S. and Balakrishnan, N. (1995) \emph{Continuous Univariate Distributions}, 2nd edition, Volume 2, p.89, New York: Wiley. } \author{ T. W. Yee } \note{ The standard log-gamma distribution can be viewed as a generalization of the standard type 1 extreme value density: when \eqn{k = 1} the distribution of \eqn{-Y} is the standard type 1 extreme value distribution. The standard log-gamma distribution is fitted with \code{lgammaff} and the non-standard (3-parameter) log-gamma distribution is fitted with \code{lgamma3ff}. } \seealso{ \code{\link{rlgamma}}, \code{\link{gengamma}}, \code{\link{prentice74}}, \code{\link{gamma1}}, \code{\link[base:Special]{lgamma}}. } \examples{ ldata <- data.frame(y = rlgamma(100, k = exp(1))) fit <- vglm(y ~ 1, lgammaff, ldata, trace = TRUE, crit = "coef") summary(fit) coef(fit, matrix = TRUE) Coef(fit) ldata <- data.frame(x = runif(nn <- 5000)) # Another example ldata <- transform(ldata, loc = -1 + 2 * x, Scale = exp(1)) ldata <- transform(ldata, y = rlgamma(nn, loc, scale = Scale, k = exp(0))) fit2 <- vglm(y ~ x, lgamma3ff(zero = 2:3), ldata, trace = TRUE, crit = "c") coef(fit2, matrix = TRUE) } \keyword{models} \keyword{regression} VGAM/man/lgammaUC.Rd0000644000176000001440000000473012136651105013570 0ustar ripleyusers\name{lgammaUC} \alias{lgammaUC} \alias{dlgamma} \alias{plgamma} \alias{qlgamma} \alias{rlgamma} \title{The Log-Gamma Distribution } \description{ Density, distribution function, quantile function and random generation for the log-gamma distribution with location parameter \code{location}, scale parameter \code{scale} and shape parameter \code{k}. } \usage{ dlgamma(x, location = 0, scale = 1, k = 1, log = FALSE) plgamma(q, location = 0, scale = 1, k = 1) qlgamma(p, location = 0, scale = 1, k = 1) rlgamma(n, location = 0, scale = 1, k = 1) } \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. Positive integer of length 1.} \item{location}{the location parameter \eqn{a}.} \item{scale}{the (positive) scale parameter \eqn{b}.} \item{k}{the (positive) shape parameter \eqn{k}.} \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the density is returned. } } \value{ \code{dlgamma} gives the density, \code{plgamma} gives the distribution function, \code{qlgamma} gives the quantile function, and \code{rlgamma} generates random deviates. } \references{ Kotz, S. and Nadarajah, S. (2000) \emph{Extreme Value Distributions: Theory and Applications}, pages 48--49, London: Imperial College Press. } \author{ T. W. 
Yee } \details{ See \code{\link{lgammaff}}, the \pkg{VGAM} family function for estimating the one-parameter standard log-gamma distribution by maximum likelihood estimation, for formulae and other details. Apart from \code{n}, all the above arguments may be vectors and are recycled to the appropriate length if necessary. } \note{ The \pkg{VGAM} family function \code{\link{lgamma3ff}} is for the three-parameter (nonstandard) log-gamma distribution. } \seealso{ \code{\link{lgammaff}}, \code{\link{prentice74}}. } \examples{ \dontrun{ loc <- 1; Scale <- 1.5; k <- 1.4 x <- seq(-3.2, 5, by = 0.01) plot(x, dlgamma(x, loc, Scale, k), type = "l", col = "blue", ylim = 0:1, main = "Blue is density, orange is cumulative distribution function", sub = "Purple are 5,10,...,95 percentiles", las = 1, ylab = "") abline(h = 0, col = "blue", lty = 2) lines(qlgamma(seq(0.05, 0.95, by = 0.05), loc, Scale, k), dlgamma(qlgamma(seq(0.05, 0.95, by = 0.05), loc, Scale, k), loc, Scale, k), col = "purple", lty = 3, type = "h") lines(x, plgamma(x, loc, Scale, k), type = "l", col = "orange") abline(h = 0, lty = 2) } } \keyword{distribution} VGAM/man/levy.Rd0000644000176000001440000000631712136651105013064 0ustar ripleyusers\name{levy} \alias{levy} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Levy Distribution Family Function } \description{ Estimates the two parameters of the Levy distribution by maximum likelihood estimation. } \usage{ levy(delta = NULL, link.gamma = "loge", idelta = NULL, igamma = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{delta}{ Location parameter. May be assigned a known value, otherwise it is estimated (the default). } \item{link.gamma}{ Parameter link function for the (positive) \eqn{\gamma}{gamma} parameter. See \code{\link{Links}} for more choices. } \item{idelta}{ Initial value for the \eqn{\delta}{delta} parameter (if it is to be estimated). By default, an initial value is chosen internally. } \item{igamma}{ Initial value for the \eqn{\gamma}{gamma} parameter. By default, an initial value is chosen internally. } } \details{ The Levy distribution is one of three stable distributions whose density function has a tractable form. The formula for the density is \deqn{f(y;\gamma,\delta) = \sqrt{\frac{\gamma}{2\pi}} \exp \left( \frac{-\gamma}{2(y - \delta)} \right) / (y - \delta)^{3/2} }{% f(y;gamma,delta) = sqrt(gamma / (2 pi)) exp( -gamma / (2(y - delta))) / (y - delta)^(3/2) } where \eqn{\delta < y < \infty}{delta < y < Inf} and \eqn{\gamma > 0}{gamma > 0}. The mean does not exist. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Nolan, J. P. (2005) \emph{Stable Distributions: Models for Heavy Tailed Data}. } \author{ T. W. Yee } \note{ If \eqn{\delta}{delta} is given, then only one parameter is estimated and the default is \eqn{\eta_1=\log(\gamma)}{eta1=log(gamma)}. If \eqn{\delta}{delta} is not given, then \eqn{\eta_2=\delta}{eta2=delta}. } \seealso{ The Nolan article is at \url{http://academic2.american.edu/~jpnolan/stable/chap1.pdf}. } \examples{ nn <- 1000; delta <- 0 mygamma <- 1 # log link ==> 0 is the answer ldata <- data.frame(y = delta + mygamma/rnorm(nn)^2) # Levy(mygamma, delta) # Cf. Table 1.1 of Nolan for Levy(1,0)
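# An analytic check (an addition, not from Nolan's table): since
# Y = delta + mygamma / Z^2 with Z ~ N(0,1), it follows that
# P(Y > y) = 2 * pnorm(sqrt(mygamma / (y - delta))) - 1:
2 * pnorm(sqrt(mygamma / (1 - delta))) - 1  # Should be 0.6827
2 * pnorm(sqrt(mygamma / (2 - delta))) - 1  # Should be 0.5205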
with(ldata, sum(y > 1) / length(y)) # Should be 0.6827 with(ldata, sum(y > 2) / length(y)) # Should be 0.5205 fit <- vglm(y ~ 1, levy(delta = delta), ldata, trace = TRUE) # 1 parameter fit <- vglm(y ~ 1, levy(idelta = delta, igamma = mygamma), ldata, trace = TRUE) # 2 parameters coef(fit, matrix = TRUE) Coef(fit) summary(fit) head(weights(fit, type = "w")) } \keyword{models} \keyword{regression} %%\eqn{\delta + \gamma \Gamma(-0.5) / (2\sqrt{\pi})}{delta + %% gamma * gamma(-0.5) / (2*sqrt(pi))} %%where \code{gamma} is a parameter but \code{gamma()} is the gamma function. %%mygamma = exp(1) # log link ==> 1 is the answer %% alternative: %%w = rgamma(n, shape=0.5) # W ~ Gamma(0.5) distribution %%mean(w) # 0.5 %%mean(1/w) %%y = delta + mygamma / (2 * w) # This is Levy(mygamma, delta) %%mean(y) %%set.seed(123) %%sum(y > 3) / length(y) # Should be 0.4363 %%sum(y > 4) / length(y) # Should be 0.3829 %%sum(y > 5) / length(y) # Should be 0.3453 VGAM/man/leukemia.Rd0000644000176000001440000000120112136651105013674 0ustar ripleyusers%\name{aml} \name{leukemia} %\alias{aml} \alias{leukemia} \docType{data} \title{Acute Myelogenous Leukemia Survival Data} \description{Survival in patients with Acute Myelogenous Leukemia.} \usage{ %data(aml) data(leukemia) } \format{ \tabular{ll}{ time:\tab survival or censoring time\cr status:\tab censoring status\cr x: \tab maintenance chemotherapy given? (factor)\cr } } \source{ Rupert G. Miller (1997), \emph{Survival Analysis}. John Wiley & Sons. ISBN: 0-471-25218-2. } \note{ This data set has been transferred from \pkg{survival} and renamed from \code{aml} to \code{leukemia}. } \keyword{datasets} VGAM/man/lerch.Rd0000644000176000001440000000616012136651105013176 0ustar ripleyusers\name{lerch} \alias{lerch} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Lerch Phi Function } \description{ Computes the Lerch transcendental Phi function. } \usage{ lerch(x, s, v, tolerance = 1.0e-10, iter = 100) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x, s, v}{ Numeric. This function recycles values of \code{x}, \code{s}, and \code{v} if necessary. } \item{tolerance}{ Numeric. Accuracy required, must be positive and less than 0.01. } \item{iter}{ Maximum number of iterations allowed to obtain convergence. If \code{iter} is too small then a result of \code{NA} may occur; if so, try increasing its value. } } \details{ The Lerch transcendental function is defined by \deqn{\Phi(x,s,v) = \sum_{n=0}^{\infty} \frac{x^n}{(n+v)^s}}{% Phi(x,s,v) = sum_{n=0}^{infty} x^n / (n+v)^s} where \eqn{|x|<1} and \eqn{v \neq 0, -1, -2, \ldots}{v != 0, -1, -2, ...}. Actually, \eqn{x} may be complex but this function only works for real \eqn{x}. The algorithm used is based on the relation \deqn{\Phi(x,s,v) = x^m \Phi(x,s,v+m) + \sum_{n=0}^{m-1} \frac{x^n}{(n+v)^s} .}{% Phi(x,s,v) = x^m Phi(x,s,v+m) + sum_{n=0}^{m-1} x^n / (n+v)^s . } See the URL below for more information. This function is a wrapper function for the C code described below. } \value{ Returns the value of the function evaluated at the values of \code{x}, \code{s}, \code{v}. If the above ranges of \eqn{x} and \eqn{v} are not satisfied, or some numeric problems occur, then this function will return \code{NA} for those values. } \references{ \url{http://aksenov.freeshell.org/lerchphi/source/lerchphi.c}. Bateman, H. (1953) \emph{Higher Transcendental Functions}. Volume 1. McGraw-Hill, NY, USA. } \author{ S. V. Aksenov and U. D. Jentschura wrote the C code.
The R wrapper function was written by T. W. Yee. } \note{ There are a number of special cases, e.g., the Riemann zeta-function is given by \eqn{\zeta(s) = \Phi(x=1,s,v=1)}{zeta(s) = Phi(x=1,s,v=1)}. The special case of \eqn{s=1} corresponds to the hypergeometric 2F1, and this is implemented in the \pkg{gsl} package. The Lerch transcendental Phi function should not be confused with the Lerch zeta function, though they are quite similar. } \section{Warning }{ This function has not been thoroughly tested and contains bugs, for example, the zeta function cannot be computed with this function even though \eqn{\zeta(s) = \Phi(x=1,s,v=1)}{zeta(s) = Phi(x=1,s,v=1)}. There are many sources of problems such as lack of convergence, overflow and underflow, especially near singularities. If any problems occur then \code{NA} will be returned. } \seealso{ \code{\link{zeta}}. } \examples{ \dontrun{ s <- 2; v <- 1; x <- seq(-1.1, 1.1, length = 201) plot(x, lerch(x, s = s, v = v), type = "l", col = "blue", las = 1, main = paste("lerch(x, s = ", s, ", v = ", v, ")", sep = "")) abline(v = 0, h = 1, lty = "dashed", col = "gray") s <- rnorm(n = 100) max(abs(zeta(s) - lerch(x = 1, s = s, v = 1))) # This fails (a bug); should be 0 } } \keyword{math} VGAM/man/leipnik.Rd0000644000176000001440000000647012136651105013540 0ustar ripleyusers\name{leipnik} \alias{leipnik} %- Also NEED an '\alias' for EACH other topic documented here. \title{Leipnik Distribution Family Function} \description{ Estimates the two parameters of a (transformed) Leipnik distribution by maximum likelihood estimation. } \usage{ leipnik(lmu = "logit", llambda = "loge", imu = NULL, ilambda = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lmu, llambda}{ Link functions for the \eqn{\mu}{mu} and \eqn{\lambda}{lambda} parameters. See \code{\link{Links}} for more choices. } \item{imu, ilambda}{ Numeric. Optional initial values for \eqn{\mu}{mu} and \eqn{\lambda}{lambda}. } } \details{ The (transformed) Leipnik distribution has density function \deqn{f(y;\mu,\lambda) = \frac{ \{ y(1-y) \}^{-\frac12}}{ \mbox{Beta}( \frac{\lambda+1}{2}, \frac12 )} \left[ 1 + \frac{(y-\mu)^2 }{y(1-y)} \right]^{ -\frac{\lambda}{2}}}{% f(y;mu,lambda) = (y(1-y))^(-1/2) * (1 + (y-mu)^2 / (y*(1-y)))^(-lambda/2) / Beta((lambda+1)/2, 1/2)} where \eqn{0 < y < 1} and \eqn{\lambda > -1}{lambda > -1}. The mean is \eqn{\mu}{mu} (returned as the fitted values) and the variance is \eqn{1/\lambda}{1/lambda}. Jorgensen (1997) calls the above the \bold{transformed} Leipnik distribution, and if \eqn{y = (x+1)/2} and \eqn{\mu = (\theta+1)/2}{mu = (theta+1)/2}, then the distribution of \eqn{X} as a function of \eqn{x} and \eqn{\theta}{theta} is known as the (untransformed) Leipnik distribution. Here, both \eqn{x} and \eqn{\theta}{theta} are in \eqn{(-1,1)}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, \code{\link{rrvglm}} and \code{\link{vgam}}. } \references{ Jorgensen, B. (1997) \emph{The Theory of Dispersion Models}. London: Chapman & Hall. Johnson, N. L. and Kotz, S. and Balakrishnan, N. (1995) \emph{Continuous Univariate Distributions}, 2nd edition, Volume 2, New York: Wiley. (pages 612--617). } \author{ T. W. Yee } \note{ Convergence may be slow or fail. Until better initial value estimates are forthcoming try assigning the argument \code{ilambda} some numerical value if it fails to converge.
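Since the variance of the distribution is \eqn{1/\lambda}{1/lambda}, the reciprocal of the sample variance of the response is one plausible (though untested here) choice, e.g., \code{ilambda = 1 / var(y)} computed from the raw data.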
Currently, Newton-Raphson is implemented, not Fisher scoring. Currently, this family function probably only really works for intercept-only models, i.e., \code{y ~ 1} in the formula. } \section{Warning }{ If \code{llambda="identity"} then it is possible that the \code{lambda} estimate becomes less than \eqn{-1}, i.e., out of bounds. One way to stop this is to choose \code{llambda="loge"}, however, \code{lambda} is then constrained to be positive. } \seealso{ \code{\link{mccullagh89}}. } \examples{ ldata <- data.frame(y = rnorm(n = 2000, mean = 0.5, sd = 0.1)) # Not proper data fit <- vglm(y ~ 1, leipnik(ilambda = 1), ldata, trace = TRUE, checkwz = FALSE) fit <- vglm(y ~ 1, leipnik(ilambda = 1, llambda = logoff(offset = 1)), ldata, trace = TRUE, crit = "coef") head(fitted(fit)) with(ldata, mean(y)) summary(fit) coef(fit, matrix = TRUE) Coef(fit) sum(weights(fit)) # sum of the prior weights sum(weights(fit, type = "w")) # sum of the working weights } \keyword{models} \keyword{regression} %fit <- vglm(y ~ 1, leipnik(ilambda=1), tr=TRUE, cri="c", checkwz=FALSE) VGAM/man/latvar.Rd0000644000176000001440000000437412136651105013377 0ustar ripleyusers\name{lv} \alias{lv} \alias{latvar} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Latent Variables } \description{ Generic function for the \emph{latent variables} of a model. } \usage{ lv(object, ...) latvar(object, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{object}{ An object for which the extraction of latent variables is meaningful. } \item{\dots}{ Other arguments fed into the specific methods function of the model. Sometimes they are fed into the methods function for \code{\link{Coef}}. } } \details{ Latent variables occur in reduced-rank regression models, as well as in quadratic and additive ordination models. For the latter two, latent variable values are often called \emph{site scores} by ecologists. Latent variables are linear combinations of the explanatory variables. } \value{ The value returned depends specifically on the methods function invoked. } \references{ Yee, T. W. and Hastie, T. J. (2003) Reduced-rank vector generalized linear models. \emph{Statistical Modelling}, \bold{3}, 15--41. Yee, T. W. (2004) A new technique for maximum-likelihood canonical Gaussian ordination. \emph{Ecological Monographs}, \bold{74}, 685--701. Yee, T. W. (2006) Constrained additive ordination. \emph{Ecology}, \bold{87}, 203--213. } \author{ Thomas W. Yee } \note{ \code{\link{latvar}} and \code{\link{lv}} are identical and will remain available for a short while. But soon \code{\link{lv}} will be withdrawn. Latent variables are not really applicable to \code{\link{vglm}}/\code{\link{vgam}} models. } \seealso{ \code{latvar.qrrvglm}, \code{latvar.rrvglm}, \code{latvar.cao}, \code{\link{lvplot}}. 
} \examples{ \dontrun{ hspider[, 1:6] <- scale(hspider[, 1:6]) # Standardized environmental vars set.seed(123) p1 <- cao(cbind(Pardlugu, Pardmont, Pardnigr, Pardpull, Zoraspin) ~ WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux, family = poissonff, data = hspider, Rank = 1, df1.nl = c(Zoraspin = 2.5, 3), Bestof = 3, Crow1positive = TRUE) var(latvar(p1)) # Scaled to unit variance c(latvar(p1)) # Estimated site scores } } \keyword{models} \keyword{regression} VGAM/man/laplaceUC.Rd0000644000176000001440000000607712136651105013741 0ustar ripleyusers\name{laplaceUC} \alias{dlaplace} \alias{plaplace} \alias{qlaplace} \alias{rlaplace} %- Also NEED an '\alias' for EACH other topic documented here. \title{ The Laplace Distribution } \description{ Density, distribution function, quantile function and random generation for the Laplace distribution with location parameter \code{location} and scale parameter \code{scale}. } \usage{ dlaplace(x, location = 0, scale = 1, log = FALSE) plaplace(q, location = 0, scale = 1) qlaplace(p, location = 0, scale = 1) rlaplace(n, location = 0, scale = 1) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. Positive integer of length 1.} \item{location}{ the location parameter \eqn{a}, which is the mean. } \item{scale}{ the scale parameter \eqn{b}. Must consist of positive values. } \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the density is returned. } } \details{ The Laplace distribution is often known as the double-exponential distribution and, for modelling, has a heavier tail than the normal distribution. The Laplace density function is \deqn{f(y) = \frac{1}{2b} \exp \left( - \frac{|y-a|}{b} \right) }{% f(y) = (1/(2b)) exp( -|y-a|/b ) } where \eqn{-\infty < y < \infty}{-Inf < y < Inf}, \eqn{-\infty < a < \infty}{-Inf < a < Inf} and \eqn{b > 0}{b > 0}. The mean is \eqn{a}{a} and the variance is \eqn{2b^2}. See \code{\link{laplace}}, the \pkg{VGAM} family function for estimating the two parameters by maximum likelihood estimation, for formulae and details. Apart from \code{n}, all the above arguments may be vectors and are recycled to the appropriate length if necessary. } \value{ \code{dlaplace} gives the density, \code{plaplace} gives the distribution function, \code{qlaplace} gives the quantile function, and \code{rlaplace} generates random deviates. } \references{ Evans, M., Hastings, N. and Peacock, B. (2000) \emph{Statistical Distributions}, New York: Wiley-Interscience, Third edition. } \author{ T. W. Yee } %\note{ % The \pkg{VGAM} family function \code{\link{laplace}} % estimates the two parameters by maximum likelihood estimation. %} \seealso{ \code{\link{laplace}}.
} \examples{ loc <- 1; b <- 2 y <- rlaplace(n = 100, loc = loc, scale = b) mean(y) # sample mean loc # population mean var(y) # sample variance 2 * b^2 # population variance \dontrun{ loc <- 0; b <- 1.5; x <- seq(-5, 5, by = 0.01) plot(x, dlaplace(x, loc, b), type = "l", col = "blue", ylim = c(0, 1), main = "Blue is density, red is cumulative distribution function", sub = "Purple are 5,10,...,95 percentiles", las = 1, ylab = "") abline(h = 0, col = "blue", lty = 2) lines(qlaplace(seq(0.05, 0.95, by = 0.05), loc, b), dlaplace(qlaplace(seq(0.05, 0.95, by = 0.05), loc, b), loc, b), col = "purple", lty = 3, type = "h") lines(x, plaplace(x, loc, b), type = "l", col = "red") abline(h = 0, lty = 2) } plaplace(qlaplace(seq(0.05, 0.95, by = 0.05), loc, b), loc, b) } \keyword{distribution} VGAM/man/laplace.Rd0000644000176000001440000000636612136651105013510 0ustar ripleyusers\name{laplace} \alias{laplace} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Laplace Distribution } \description{ Maximum likelihood estimation of the 2-parameter classical Laplace distribution. } \usage{ laplace(llocation = "identity", lscale = "loge", ilocation = NULL, iscale = NULL, imethod = 1, zero = 2) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{llocation, lscale}{ Character. Parameter link functions for location parameter \eqn{a} and scale parameter \eqn{b}. See \code{\link{Links}} for more choices. } \item{ilocation, iscale}{ Optional initial values. If given, it must be numeric and values are recycled to the appropriate length. The default is to choose the value internally. } \item{imethod}{ Initialization method. Either the value 1 or 2. } \item{zero}{ See \code{\link{CommonVGAMffArguments}} for more information. } } \details{ The Laplace distribution is often known as the \emph{double-exponential} distribution and, for modelling, has a heavier tail than the normal distribution. The Laplace density function is \deqn{f(y) = \frac{1}{2b} \exp \left( - \frac{|y-a|}{b} \right) }{% f(y) = (1/(2b)) exp( -|y-a|/b ) } where \eqn{-\infty < y < \infty}{-Inf < y < Inf}, \eqn{-\infty < a < \infty}{-Inf < a < Inf} and \eqn{b > 0}{b > 0}. Its mean is \eqn{a} and its variance is \eqn{2b^2}. This parameterization is called the \emph{classical Laplace distribution} by Kotz et al. (2001), and the density is symmetric about \eqn{a}. For \code{y ~ 1} (where \code{y} is the response) the maximum likelihood estimate (MLE) for the location parameter is the sample median, and the MLE for \eqn{b} is \code{mean(abs(y-location))} (replace location by its MLE if unknown). } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Kotz, S., Kozubowski, T. J. and Podgorski, K. (2001) \emph{The Laplace distribution and generalizations: a revisit with applications to communications, economics, engineering, and finance}, Boston: Birkhauser. } \author{ T. W. Yee } \section{Warning}{ This family function has not been fully tested. The MLE regularity conditions do not hold for this distribution, therefore misleading inferences may result, e.g., in the \code{summary} and \code{vcov} of the object. } \note{ This family function uses Fisher scoring. Convergence may be slow for non-intercept-only models; half-stepping is frequently required. } \seealso{ \code{\link{rlaplace}}, \code{\link{alaplace2}} (which differs slightly from this parameterization), \code{\link{exponential}}, \code{\link[stats]{median}}.
} \examples{ ldata <- data.frame(y = rlaplace(nn <- 100, loc = 2, scale = exp(1))) fit <- vglm(y ~ 1, laplace, ldata, trace = TRUE, crit = "l") coef(fit, matrix = TRUE) Coef(fit) with(ldata, median(y)) ldata <- data.frame(x = runif(nn <- 1001)) ldata <- transform(ldata, y = rlaplace(nn, loc = 2, scale = exp(-1 + 1*x))) coef(vglm(y ~ x, laplace(ilocation = 0.2, imethod = 2, zero = 1), ldata, trace = TRUE), matrix = TRUE) } \keyword{models} \keyword{regression} VGAM/man/lambertW.Rd0000644000176000001440000000374212136651105013661 0ustar ripleyusers\name{lambertW} \alias{lambertW} %- Also NEED an '\alias' for EACH other topic documented here. \title{ The Lambert W function } \description{ Computes the Lambert \emph{W} function for real values. } \usage{ lambertW(x, tolerance = 1e-10, maxit = 50) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x}{ A vector of reals. } \item{tolerance}{ Accuracy desired. } \item{maxit}{ Maximum number of iterations of third-order Halley's method. } } \details{ The Lambert \eqn{W} function is the root of the equation \eqn{W(z) \exp(W(z)) = z}{W(z) * exp(W(z)) = z} for complex \eqn{z}. It is multi-valued, and it has no real values for real \eqn{z < -1/e}{z < -1/e}. For real \eqn{-1/e \leq z < 0}{-1/e <= z < 0} it has two possible real values, and currently only the upper branch is computed. } \value{ This function returns the principal branch of the \eqn{W} function for \emph{real} \eqn{z}. It returns \eqn{W(z) \geq -1}{W(z) >= -1}, and \code{NA} for \eqn{z < -1/e}. } \references{ Corless, R. M. and Gonnet, G. H. and Hare, D. E. G. and Jeffrey, D. J. and Knuth, D. E. (1996) On the Lambert \eqn{W} function. \emph{Advances in Computational Mathematics}, \bold{5}(4), 329--359. } \author{ T. W. Yee } \note{ If convergence does not occur then increase the value of \code{maxit} and/or \code{tolerance}. Yet to do: add an argument \code{lbranch = TRUE} to return the lower branch for real \eqn{-1/e \leq z < 0}{-1/e <= z < 0}; this would give \eqn{W(z) \leq -1}{W(z) <= -1}. } %% ~Make other sections like Warning with \section{Warning }{....} ~ \seealso{ \code{\link[base:log]{log}}, \code{\link[base:log]{exp}}. } \examples{ \dontrun{ curve(lambertW, -exp(-1), 3, xlim = c(-1, 3), ylim = c(-2, 1), las = 1, col = "orange") abline(v = -exp(-1), h = -1, lwd = 2, lty = "dotted", col = "gray") abline(h = 0, v = 0, lty = "dashed", col = "blue") } } % Add one or more standard keywords, see file 'KEYWORDS' in the % R documentation directory. \keyword{math} VGAM/man/kumarUC.Rd0000644000176000001440000000372312136651105013452 0ustar ripleyusers\name{Kumar} \alias{Kumar} \alias{dkumar} \alias{pkumar} \alias{qkumar} \alias{rkumar} \title{The Kumaraswamy Distribution} \description{ Density, distribution function, quantile function and random generation for the Kumaraswamy distribution. } \usage{ dkumar(x, shape1, shape2, log = FALSE) pkumar(q, shape1, shape2) qkumar(p, shape1, shape2) rkumar(n, shape1, shape2) } \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. If \code{length(n) > 1} then the length is taken to be the number required. } \item{shape1, shape2}{ positive shape parameters. } \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the density is returned. } } \value{ \code{dkumar} gives the density, \code{pkumar} gives the distribution function, \code{qkumar} gives the quantile function, and \code{rkumar} generates random deviates. } \author{ T. W.
Yee } \details{ See \code{\link{kumar}}, the \pkg{VGAM} family function for estimating the parameters, for the formula of the probability density function and other details. } %\note{ %} \seealso{ \code{\link{kumar}}. } \examples{ \dontrun{ shape1 <- 2; shape2 <- 2; nn <- 201; # shape1 <- shape2 <- 0.5; x <- seq(-0.05, 1.05, len = nn) plot(x, dkumar(x, shape1, shape2), type = "l", las = 1, ylim = c(0,1.5), ylab = paste("fkumar(shape1 = ", shape1, ", shape2 = ", shape2, ")"), col = "blue", cex.main = 0.8, main = "Blue is density, orange is cumulative distribution function", sub = "Purple lines are the 10,20,...,90 percentiles") lines(x, pkumar(x, shape1, shape2), col = "orange") probs <- seq(0.1, 0.9, by = 0.1) Q <- qkumar(probs, shape1, shape2) lines(Q, dkumar(Q, shape1, shape2), col = "purple", lty = 3, type = "h") lines(Q, pkumar(Q, shape1, shape2), col = "purple", lty = 3, type = "h") abline(h = probs, col = "purple", lty = 3) max(abs(pkumar(Q, shape1, shape2) - probs)) # Should be 0 } } \keyword{distribution} VGAM/man/kumar.Rd0000644000176000001440000000550712136651105013224 0ustar ripleyusers\name{kumar} \alias{kumar} %- Also NEED an '\alias' for EACH other topic documented here. \title{Kumaraswamy Distribution Family Function} \description{ Estimates the two parameters of the Kumaraswamy distribution by maximum likelihood estimation. } \usage{ kumar(lshape1 = "loge", lshape2 = "loge", ishape1 = NULL, ishape2 = NULL, grid.shape1 = c(0.4, 6.0), tol12 = 1.0e-4, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lshape1, lshape2}{ Link function for the two positive shape parameters, respectively, called \eqn{a} and \eqn{b} below. See \code{\link{Links}} for more choices. } % \item{eshape1, eshape2}{ % List. Extra argument for each of the links. % See \code{earg} in \code{\link{Links}} for general information. % eshape1 = list(), eshape2 = list(), % } \item{ishape1, ishape2}{ Numeric. Optional initial values for the two positive shape parameters. } \item{tol12}{ Numeric and positive. Tolerance for testing whether the second shape parameter is either 1 or 2. If so then the working weights need to handle these singularities. } \item{grid.shape1}{ Lower and upper limits for a grid search for the first shape parameter. } \item{zero}{ See \code{\link{CommonVGAMffArguments}}. } } \details{ The Kumaraswamy distribution has density function \deqn{f(y;a = shape1,b = shape2) = a b y^{a-1} (1-y^{a})^{b-1}}{% a*b*y^(a-1)*(1-y^a)^(b-1)} where \eqn{0 < y < 1} and the two shape parameters, \eqn{a} and \eqn{b}, are positive. The mean is \eqn{b \times Beta(1+1/a,b)}{b * Beta(1+1/a,b)} (returned as the fitted values) and the variance is \eqn{b \times Beta(1+2/a,b) - (b \times Beta(1+1/a,b))^2}{b * Beta(1+2/a,b) - (b * Beta(1+1/a,b))^2}. Applications of the Kumaraswamy distribution include the storage volume of a water reservoir. Fisher scoring is implemented. Handles multiple responses (matrix input). } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Kumaraswamy, P. (1980). A generalized probability density function for double-bounded random processes. \emph{Journal of Hydrology}, \bold{46}, 79--88. Jones, M. C. (2009). Kumaraswamy's distribution: A beta-type distribution with some tractability advantages. \emph{Statistical Methodology}, \bold{6}, 70--81. } \author{ T. W. 
Yee } %\note{ % %} \seealso{ \code{\link{dkumar}}, \code{\link{betaff}}. } \examples{ shape1 <- exp(1); shape2 <- exp(2); kdata <- data.frame(y = rkumar(n = 1000, shape1, shape2)) fit <- vglm(y ~ 1, kumar, kdata, trace = TRUE) c(with(kdata, mean(y)), head(fitted(fit), 1)) coef(fit, matrix = TRUE) Coef(fit) summary(fit) } \keyword{models} \keyword{regression} VGAM/man/koenkerUC.Rd0000644000176000001440000000527312136651105013773 0ustar ripleyusers\name{Expectiles-Koenker} \alias{Expectiles-Koenker} \alias{dkoenker} \alias{pkoenker} \alias{qkoenker} \alias{rkoenker} \title{ Expectiles/Quantiles of the Koenker Distribution } \description{ Density, distribution function, quantile/expectile function and random generation for the Koenker distribution. } \usage{ dkoenker(x, location = 0, scale = 1, log = FALSE) pkoenker(q, location = 0, scale = 1, log = FALSE) qkoenker(p, location = 0, scale = 1) rkoenker(n, location = 0, scale = 1) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x, q}{ Vector of expectiles/quantiles. See the terminology note below. } \item{p}{ Vector of probabilities. % (tau or \eqn{\tau}). These should lie in \eqn{(0,1)}. } \item{n, log}{See \code{\link[stats:Uniform]{runif}}.} \item{location, scale}{ Location and scale parameters. The latter should have positive values. Values of these vectors are recycled. } } \details{ A Student-t distribution with 2 degrees of freedom and a scale parameter of \code{sqrt(2)} is equivalent to the standard Koenker distribution. Further details about this distribution are given in \code{\link{koenker}}. } \value{ \code{dkoenker(x)} gives the density function. \code{pkoenker(q)} gives the distribution function. \code{qkoenker(p)} gives the expectile and quantile function. \code{rkoenker(n)} gives \eqn{n} random variates. } \author{ T. W. Yee } %\note{ %} \seealso{ \code{\link[stats:TDist]{dt}}, \code{\link{koenker}}. } \examples{ my_p <- 0.25; y <- rkoenker(nn <- 5000) (myexp <- qkoenker(my_p)) sum(myexp - y[y <= myexp]) / sum(abs(myexp - y)) # Should be my_p # Equivalently: I1 <- mean(y <= myexp) * mean(myexp - y[y <= myexp]) I2 <- mean(y > myexp) * mean(-myexp + y[y > myexp]) I1 / (I1 + I2) # Should be my_p # Or: I1 <- sum(myexp - y[y <= myexp]) I2 <- sum(-myexp + y[y > myexp]) I1 / (I1 + I2) # Should be my_p # Non-standard Koenker distribution myloc <- 1; myscale <- 2 yy <- rkoenker(nn, myloc, myscale) (myexp <- qkoenker(my_p, myloc, myscale)) sum(myexp - yy[yy <= myexp]) / sum(abs(myexp - yy)) # Should be my_p pkoenker(mean(yy), myloc, myscale) # Should be 0.5 abs(qkoenker(0.5, myloc, myscale) - mean(yy)) # Should be 0 abs(pkoenker(myexp, myloc, myscale) - my_p) # Should be 0 integrate(f = dkoenker, lower = -Inf, upper = Inf, location = myloc, scale = myscale) # Should be 1 y <- seq(-7, 7, len = 201) max(abs(dkoenker(y) - dt(y / sqrt(2), df = 2) / sqrt(2))) # Should be 0 \dontrun{ plot(y, dkoenker(y), type = "l", col = "blue", las = 1, ylim = c(0, 0.4), main = "Blue = Koenker; orange = N(0, 1)") lines(y, dnorm(y), type = "l", col = "orange") abline(h = 0, v = 0, lty = 2) } } \keyword{distribution} VGAM/man/koenker.Rd0000644000176000001440000000700712136651105013540 0ustar ripleyusers\name{koenker} \alias{koenker} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Koenker's Distribution Family Function } \description{ Estimates the location and scale parameters of Koenker's distribution by maximum likelihood estimation.
} \usage{ koenker(percentile = 50, llocation = "identity", lscale = "loge", ilocation = NULL, iscale = NULL, imethod = 1, zero = 2) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{percentile}{ A numerical vector containing values between 0 and 100, which are the quantiles and expectiles. They will be returned as `fitted values'. } \item{llocation, lscale}{ See \code{\link{Links}} for more choices, and \code{\link{CommonVGAMffArguments}}. } \item{ilocation, iscale, imethod, zero}{ See \code{\link{CommonVGAMffArguments}} for details. } } \details{ Koenker (1993) solved for the distribution whose quantiles are equal to its expectiles. This is called Koenker's distribution here. Its canonical form has mean and mode at 0 and has a heavy tail (in fact, its variance is infinite). The standard (``canonical'') form of Koenker's distribution can be endowed with a location and scale parameter. The standard form has a density that can be written as \deqn{f(z) = 2 / (4 + z^2)^{3/2}}{% f(z) = 2 / (4 + z^2)^(3/2) } for real \eqn{z}. Then \eqn{z = (y-a)/b} for location and scale parameters \eqn{a} and \eqn{b > 0}. The mean of \eqn{Y} is \eqn{a}{a}. By default, \eqn{\eta_1=a}{eta1=a} and \eqn{\eta_2=\log(b)}{eta2=log(b)}. The expectiles/quantiles corresponding to \code{percentile} are returned as the fitted values; in particular, \code{percentile = 50} corresponds to the mean (0.5 expectile) and median (0.5 quantile). Note that if \eqn{Y} has a standard Koenker distribution then \eqn{Y = \sqrt{2} T_2}{Y = sqrt(2) * T_2} where \eqn{T_2} has a Student-t distribution with 2 degrees of freedom. The two parameters here can also be estimated using \code{\link{studentt2}} by specifying \code{df = 2} and making an adjustment for the scale parameter; however, this \pkg{VGAM} family function is more efficient since the EIM is known (Fisher scoring is implemented.) } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, \code{\link{rrvglm}} and \code{\link{vgam}}. } \references{ Koenker, R. (1993) When are expectiles percentiles? (solution) \emph{Econometric Theory}, \bold{9}, 526--527. } \author{ T. W. Yee } %\note{ % %} \seealso{ \code{\link{dkoenker}}, \code{\link{studentt2}}. } \examples{ set.seed(123); nn <- 1000 kdata <- data.frame(x2 = sort(runif(nn))) kdata <- transform(kdata, mylocat = 1 + 3 * x2, myscale = 1) kdata <- transform(kdata, y = rkoenker(nn, location = mylocat, scale = myscale)) fit <- vglm(y ~ x2, koenker(percentile = c(1, 50, 99)), kdata, trace = TRUE) fit2 <- vglm(y ~ x2, studentt2(df = 2), kdata, trace = TRUE) # 'same' as fit coef(fit, matrix = TRUE) head(fitted(fit)) head(predict(fit)) # Nice plot of the results \dontrun{ plot(y ~ x2, kdata, col = "blue", las = 1, sub = paste("n =", nn), main = "Fitted quantiles/expectiles using Koenker's distribution") matplot(with(kdata, x2), fitted(fit), add = TRUE, type = "l", lwd = 3) legend("bottomright", lty = 1:3, lwd = 3, legend = colnames(fitted(fit)), col = 1:3) } fit@extra$percentile # Sample quantiles } \keyword{models} \keyword{regression} VGAM/man/is.zero.Rd0000644000176000001440000000265012136651105013472 0ustar ripleyusers\name{is.zero} \alias{is.zero} \alias{is.zero.matrix} \alias{is.zero.vglm} \title{Zero Constraint Matrices} \description{ Returns a logical vector from a test of whether an object such as a matrix or VGLM object corresponds to a 'zero' assumption.
} \usage{ is.zero.matrix(object, \dots) is.zero.vglm(object, \dots) } \arguments{ \item{object}{ an object such as a coefficient matrix of a \code{\link{vglm}} object, or a \code{\link{vglm}} object. } \item{\dots}{ additional optional arguments. Currently unused. } } \details{ These functions test the effect of the \code{zero} argument on a \code{\link{vglm}} object or the coefficient matrix of a \code{\link{vglm}} object. The latter is obtained by \code{coef(vglmObject, matrix = TRUE)}. } \value{ A vector of logicals, testing whether each linear/additive predictor has the \code{zero} argument applied to it. It is \code{TRUE} if that linear/additive predictor is intercept-only, i.e., all other regression coefficients are set to zero. No checking is done for the intercept term at all, i.e., that it was estimated in the first place. } \seealso{ \code{\link{constraints}}, \code{\link{vglm}}. } \examples{ coalminers <- transform(coalminers, Age = (age - 42) / 5) fit <- vglm(cbind(nBnW,nBW,BnW,BW) ~ Age, binom2.or(zero = NULL), coalminers) is.zero(fit) is.zero(coef(fit, matrix = TRUE)) } \keyword{models} \keyword{regression} VGAM/man/is.smart.Rd0000644000176000001440000000301512136651105013635 0ustar ripleyusers\name{is.smart} \alias{is.smart} \title{ Test For a Smart Object } \description{ Tests an object to see if it is smart. } \usage{ is.smart(object) } \arguments{ \item{object}{ a function or a fitted model. } } \value{ Returns \code{TRUE} or \code{FALSE}, according to whether the \code{object} is smart or not. } \details{ If \code{object} is a function then this function looks to see whether \code{object} has the logical attribute \code{"smart"}. If so then this is returned, else \code{FALSE}. If \code{object} is a fitted model then this function looks to see whether \code{object@smart.prediction} or \code{object\$smart.prediction} exists. If it does and it is not equal to \code{list(smart.arg=FALSE)} then a \code{TRUE} is returned, else \code{FALSE}. The reason for this is because, e.g., \code{lm(...,smart=FALSE)} and \code{vglm(...,smart=FALSE)}, will return such a specific list. Writers of smart functions manually have to assign this attribute to their smart function after it has been written. } \examples{ is.smart(my1) # TRUE is.smart(poly) # TRUE library(splines) is.smart(bs) # TRUE is.smart(ns) # TRUE is.smart(tan) # FALSE if(!is.R()) is.smart(lm) # TRUE \dontrun{ library(VGAM) x <- rnorm(9) fit1 <- vglm(rnorm(9) ~ x, normal1) is.smart(fit1) # TRUE fit2 <- vglm(rnorm(9) ~ x, normal1, smart = FALSE) is.smart(fit2) # FALSE fit2@smart.prediction } } %\keyword{smart} \keyword{models} \keyword{regression} \keyword{programming} % Converted by Sd2Rd version 1.10.6.1. VGAM/man/is.parallel.Rd0000644000176000001440000000302012136651105014277 0ustar ripleyusers\name{is.parallel} \alias{is.parallel} \alias{is.parallel.matrix} \alias{is.parallel.vglm} \title{Parallelism Constraint Matrices} \description{ Returns a logical vector from a test of whether an object such as a matrix or VGLM object corresponds to a parallelism assumption. } \usage{ is.parallel.matrix(object, \dots) is.parallel.vglm(object, type = c("term", "lm"), \dots) } \arguments{ \item{object}{ an object such as a constraint matrix or a \code{\link{vglm}} object. } \item{type}{ passed into \code{\link{constraints}}. } \item{\dots}{ additional optional arguments. Currently unused. 
} } \details{ These functions may be useful for categorical models such as \code{\link{propodds}}, \code{\link{cumulative}}, \code{\link{acat}}, \code{\link{cratio}}, \code{\link{sratio}}, \code{\link{multinomial}}. } \value{ A vector of logicals, testing whether each constraint matrix is a one-column matrix of ones. Note that parallelism can still be thought of as holding if the constraint matrix has non-zero but constant values; however, this is currently not implemented. No checking is done that the constraint matrices have the same number of rows. } \seealso{ \code{\link{constraints}}, \code{\link{vglm}}. } \examples{ \dontrun{ require(VGAMdata) fit <- vglm(educ ~ bs(age) * sex + ethnic, cumulative(parallel = TRUE), xs.nz[1:200, ]) is.parallel(fit) is.parallel(fit, type = "lm") # For each column of the LM matrix } } \keyword{models} \keyword{regression} VGAM/man/invparalogisticUC.Rd0000644000176000001440000000370112136651105015525 0ustar ripleyusers\name{Invparalogistic} \alias{Invparalogistic} \alias{dinvparalogistic} \alias{pinvparalogistic} \alias{qinvparalogistic} \alias{rinvparalogistic} \title{The Inverse Paralogistic Distribution} \description{ Density, distribution function, quantile function and random generation for the inverse paralogistic distribution with shape parameters \code{a} and \code{p}, and scale parameter \code{scale}. } \usage{ dinvparalogistic(x, shape1.a, scale = 1, log = FALSE) pinvparalogistic(q, shape1.a, scale = 1) qinvparalogistic(p, shape1.a, scale = 1) rinvparalogistic(n, shape1.a, scale = 1) } \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. If \code{length(n) > 1}, the length is taken to be the number required.} \item{shape1.a}{shape parameter.} \item{scale}{scale parameter.} \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the density is returned. } } \value{ \code{dinvparalogistic} gives the density, \code{pinvparalogistic} gives the distribution function, \code{qinvparalogistic} gives the quantile function, and \code{rinvparalogistic} generates random deviates. } \references{ Kleiber, C. and Kotz, S. (2003) \emph{Statistical Size Distributions in Economics and Actuarial Sciences}, Hoboken, NJ, USA: Wiley-Interscience. } \author{ T. W. Yee } \details{ See \code{\link{invparalogistic}}, which is the \pkg{VGAM} family function for estimating the parameters by maximum likelihood estimation. } \note{ The inverse paralogistic distribution is a special case of the 4-parameter generalized beta II distribution. } \seealso{ \code{\link{invparalogistic}}, \code{\link{genbetaII}}. } \examples{ idata <- data.frame(y = rinvparalogistic(n = 3000, exp(1), exp(2))) fit <- vglm(y ~ 1, invparalogistic(ishape1.a = 2.1), idata, trace = TRUE, crit = "coef") coef(fit, matrix = TRUE) Coef(fit) } \keyword{distribution} VGAM/man/invparalogistic.Rd0000644000176000001440000000543712136651105015303 0ustar ripleyusers\name{invparalogistic} \alias{invparalogistic} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Inverse Paralogistic Distribution Family Function } \description{ Maximum likelihood estimation of the 2-parameter inverse paralogistic distribution. } \usage{ invparalogistic(lshape1.a = "loge", lscale = "loge", ishape1.a = 2, iscale = NULL, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lshape1.a, lscale}{ Parameter link functions applied to the (positive) shape parameter \code{a} and (positive) scale parameter \code{scale}.
See \code{\link{Links}} for more choices. } \item{ishape1.a, iscale}{ Optional initial values for \code{a} and \code{scale}. } \item{zero}{ An integer-valued vector specifying which linear/additive predictors are modelled as intercepts only. Here, the values must be from the set \{1,2\} which correspond to \code{a}, \code{scale}, respectively. } } \details{ The 2-parameter inverse paralogistic distribution is the 4-parameter generalized beta II distribution with shape parameter \eqn{q=1} and \eqn{a=p}. It is the 3-parameter Dagum distribution with \eqn{a=p}. More details can be found in Kleiber and Kotz (2003). The inverse paralogistic distribution has density \deqn{f(y) = a^2 y^{a^2-1} / [b^{a^2} \{1 + (y/b)^a\}^{a+1}]}{% f(y) = a^2 y^(a^2-1) / [b^(a^2) (1 + (y/b)^a)^(a+1)]} for \eqn{a > 0}, \eqn{b > 0}, \eqn{y \geq 0}{y >= 0}. Here, \eqn{b} is the scale parameter \code{scale}, and \eqn{a} is the shape parameter. The mean is \deqn{E(Y) = b \, \Gamma(a + 1/a) \, \Gamma(1 - 1/a) / \Gamma(a)}{% E(Y) = b gamma(a + 1/a) gamma(1 - 1/a) / gamma(a)} provided \eqn{a > 1}; these are returned as the fitted values. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Kleiber, C. and Kotz, S. (2003) \emph{Statistical Size Distributions in Economics and Actuarial Sciences}, Hoboken, NJ, USA: Wiley-Interscience. } \author{ T. W. Yee } \note{ See the note in \code{\link{genbetaII}}. } \seealso{ \code{\link{Invparalogistic}}, \code{\link{genbetaII}}, \code{\link{betaII}}, \code{\link{dagum}}, \code{\link{sinmad}}, \code{\link{fisk}}, \code{\link{invlomax}}, \code{\link{lomax}}, \code{\link{paralogistic}}. } \examples{ idata <- data.frame(y = rinvparalogistic(n = 3000, exp(1), exp(2))) fit <- vglm(y ~ 1, invparalogistic, idata, trace = TRUE) fit <- vglm(y ~ 1, invparalogistic(ishape1.a = 2.7, iscale = 7.3), idata, trace = TRUE, epsilon = 1e-8) coef(fit, matrix = TRUE) Coef(fit) summary(fit) } \keyword{models} \keyword{regression} VGAM/man/invlomaxUC.Rd0000644000176000001440000000340012136651105014160 0ustar ripleyusers\name{Invlomax} \alias{Invlomax} \alias{dinvlomax} \alias{pinvlomax} \alias{qinvlomax} \alias{rinvlomax} \title{The Inverse Lomax Distribution} \description{ Density, distribution function, quantile function and random generation for the inverse Lomax distribution with shape parameter \code{p} and scale parameter \code{scale}. } \usage{ dinvlomax(x, scale = 1, shape2.p, log = FALSE) pinvlomax(q, scale = 1, shape2.p) qinvlomax(p, scale = 1, shape2.p) rinvlomax(n, scale = 1, shape2.p) } \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. If \code{length(n) > 1}, the length is taken to be the number required.} \item{shape2.p}{shape parameter.} \item{scale}{scale parameter.} \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the density is returned. } } \value{ \code{dinvlomax} gives the density, \code{pinvlomax} gives the distribution function, \code{qinvlomax} gives the quantile function, and \code{rinvlomax} generates random deviates. } \references{ Kleiber, C. and Kotz, S. (2003) \emph{Statistical Size Distributions in Economics and Actuarial Sciences}, Hoboken, NJ, USA: Wiley-Interscience. } \author{ T. W. Yee } \details{ See \code{\link{invlomax}}, which is the \pkg{VGAM} family function for estimating the parameters by maximum likelihood estimation. 
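Also, if \eqn{X} has a Lomax distribution with shape parameter \eqn{p} and scale parameter \eqn{1/b}{1/b}, then \eqn{1/X} has this inverse Lomax distribution with scale parameter \eqn{b} and shape parameter \eqn{p}. Assuming \code{\link{rlomax}} takes arguments \code{scale} and \code{shape3.q} (following the shape-argument naming of the related family functions), a rough empirical check is that \code{1 / rlomax(n, scale = 1/b, shape3.q = p)} behaves like \code{rinvlomax(n, scale = b, shape2.p = p)}.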
} \note{ The inverse Lomax distribution is a special case of the 4-parameter generalized beta II distribution. } \seealso{ \code{\link{invlomax}}, \code{\link{genbetaII}}. } \examples{ idata <- data.frame(y = rinvlomax(n = 1000, exp(2), exp(1))) fit <- vglm(y ~ 1, invlomax, idata, trace = TRUE, crit = "coef") coef(fit, matrix = TRUE) Coef(fit) } \keyword{distribution} VGAM/man/invlomax.Rd0000644000176000001440000000521412136651105013735 0ustar ripleyusers\name{invlomax} \alias{invlomax} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Inverse Lomax Distribution Family Function } \description{ Maximum likelihood estimation of the 2-parameter inverse Lomax distribution. } \usage{ invlomax(lscale = "loge", lshape2.p = "loge", iscale = NULL, ishape2.p = 1, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lscale, lshape2.p}{ Parameter link functions applied to the (positive) scale parameter \code{scale} and (positive) shape parameter \code{p}. See \code{\link{Links}} for more choices. } \item{iscale, ishape2.p}{ Optional initial values for \code{scale} and \code{p}. } \item{zero}{ An integer-valued vector specifying which linear/additive predictors are modelled as intercepts only. Here, the values must be from the set \{1,2\} which correspond to \code{scale}, \code{p}, respectively. } } \details{ The 2-parameter inverse Lomax distribution is the 4-parameter generalized beta II distribution with shape parameters \eqn{a=q=1}. It is also the 3-parameter Dagum distribution with shape parameter \eqn{a=1}, as well as the beta distribution of the second kind with \eqn{q=1}. More details can be found in Kleiber and Kotz (2003). The inverse Lomax distribution has density \deqn{f(y) = p y^{p-1} / [b^p \{1 + y/b\}^{p+1}]}{% f(y) = p y^(p-1) / [b^p (1 + y/b)^(p+1)]} for \eqn{b > 0}, \eqn{p > 0}, \eqn{y \geq 0}{y >= 0}. Here, \eqn{b} is the scale parameter \code{scale}, and \code{p} is a shape parameter. The mean does not exist; \code{NA}s are returned as the fitted values. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Kleiber, C. and Kotz, S. (2003) \emph{Statistical Size Distributions in Economics and Actuarial Sciences}, Hoboken, NJ, USA: Wiley-Interscience. } \author{ T. W. Yee } \note{ See the note in \code{\link{genbetaII}}. } \seealso{ \code{\link{Invlomax}}, \code{\link{genbetaII}}, \code{\link{betaII}}, \code{\link{dagum}}, \code{\link{sinmad}}, \code{\link{fisk}}, \code{\link{lomax}}, \code{\link{paralogistic}}, \code{\link{invparalogistic}}. } \examples{ idata <- data.frame(y = rinvlomax(n = 2000, exp(2), exp(1))) fit <- vglm(y ~ 1, invlomax, idata, trace = TRUE) fit <- vglm(y ~ 1, invlomax(iscale = exp(2), ishape2.p = exp(1)), idata, trace = TRUE, epsilon = 1e-8) coef(fit, matrix = TRUE) Coef(fit) summary(fit) } \keyword{models} \keyword{regression} VGAM/man/invbinomial.Rd0000644000176000001440000000656512136651105014421 0ustar ripleyusers\name{invbinomial} \alias{invbinomial} %- Also NEED an '\alias' for EACH other topic documented here. \title{Inverse Binomial Distribution Family Function} \description{ Estimates the two parameters of an inverse binomial distribution by maximum likelihood estimation. } \usage{ invbinomial(lrho = elogit(min = 0.5, max = 1), llambda = "loge", irho = NULL, ilambda = NULL, zero = NULL) } %- maybe also 'usage' for other objects documented here. 
\arguments{ \item{lrho, llambda}{ Link function for the \eqn{\rho}{rho} and \eqn{\lambda}{lambda} parameters. See \code{\link{Links}} for more choices. } \item{irho, ilambda}{ Numeric. Optional initial values for \eqn{\rho}{rho} and \eqn{\lambda}{lambda}. } \item{zero}{ See \code{\link{CommonVGAMffArguments}}. } } \details{ The inverse binomial distribution of Yanagimoto (1989) has density function \deqn{f(y;\rho,\lambda) = \frac{ \lambda \,\Gamma(2y+\lambda) }{\Gamma(y+1) \, \Gamma(y+\lambda+1) } \{ \rho(1-\rho) \}^y \rho^{\lambda}}{% f(y;rho,lambda) = (lambda * Gamma(2y+lambda)) * [rho*(1-rho)]^y * rho^lambda / (Gamma(y+1) * Gamma(y+lambda+1))} where \eqn{y=0,1,2,\ldots}{y=0,1,2,...} and \eqn{\frac12 < \rho < 1}{0.5 < rho < 1}, and \eqn{\lambda > 0}{lambda > 0}. The first two moments exist for \eqn{\rho>\frac12}{rho>0.5}; then the mean is \eqn{\lambda (1-\rho) /(2 \rho-1)}{lambda*(1-rho)/(2*rho-1)} (returned as the fitted values) and the variance is \eqn{\lambda \rho (1-\rho) /(2 \rho-1)^3}{lambda*rho*(1-rho)/(2*rho-1)^3}. The inverse binomial distribution is a special case of the generalized negative binomial distribution of Jain and Consul (1971). It holds that \eqn{Var(Y) > E(Y)} so that the inverse binomial distribution is overdispersed compared with the Poisson distribution. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Yanagimoto, T. (1989) The inverse binomial distribution as a statistical model. \emph{Communications in Statistics: Theory and Methods}, \bold{18}, 3625--3633. Jain, G. C. and Consul, P. C. (1971) A generalized negative binomial distribution. \emph{SIAM Journal on Applied Mathematics}, \bold{21}, 501--513. Jorgensen, B. (1997) \emph{The Theory of Dispersion Models}. London: Chapman & Hall } \author{ T. W. Yee } \note{ This \pkg{VGAM} family function only works reasonably well with intercept-only models. Good initial values are needed; if convergence failure occurs use \code{irho} and/or \code{ilambda}. Some elements of the working weight matrices use the expected information matrix while other elements use the observed information matrix. Yet to do: using the mean and the reciprocal of \eqn{\lambda}{lambda} results in a EIM that is diagonal. } \seealso{ \code{\link{negbinomial}}, \code{\link{poissonff}}. } \examples{ idata <- data.frame(y = rnbinom(n <- 1000, mu = exp(3), size = exp(1))) fit <- vglm(y ~ 1, invbinomial, idata, trace = TRUE) with(idata, c(mean(y), head(fitted(fit), 1))) summary(fit) coef(fit, matrix = TRUE) Coef(fit) sum(weights(fit)) # sum of the prior weights sum(weights(fit, type = "work")) # sum of the working weights } \keyword{models} \keyword{regression} %fit <- vglm(y ~ 1, invbinomial(ilambda=1), tr=TRUE, cri="c", checkwz=FALSE) VGAM/man/inv.gaussianff.Rd0000644000176000001440000000625012136651105015022 0ustar ripleyusers\name{inv.gaussianff} \alias{inv.gaussianff} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Inverse Gaussian Distribution Family Function } \description{ Estimates the two parameters of the inverse Gaussian distribution by maximum likelihood estimation. } \usage{ inv.gaussianff(lmu = "loge", llambda = "loge", imethod = 1, ilambda = NULL, parallel = FALSE, apply.parint = FALSE, shrinkage.init = 0.99, zero = NULL) } %- maybe also 'usage' for other objects documented here. 
\arguments{ \item{lmu, llambda}{ Parameter link functions for the \eqn{\mu}{mu} and \eqn{\lambda}{lambda} parameters. See \code{\link{Links}} for more choices. } \item{ilambda, parallel, apply.parint}{ See \code{\link{CommonVGAMffArguments}} for more information. } \item{imethod, shrinkage.init, zero}{ See \code{\link{CommonVGAMffArguments}} for more information. } } \details{ The standard (``canonical'') form of the inverse Gaussian distribution has a density that can be written as \deqn{f(y;\mu,\lambda) = \sqrt{\lambda/(2\pi y^3)} \exp\left(-\lambda (y-\mu)^2/(2 \mu^2 y)\right)}{% f(y;mu,lambda) = sqrt(lambda/(2*pi*y^3)) * exp(-lambda*(y-mu)^2/(2*mu^2*y)) } where \eqn{y>0}, \eqn{\mu>0}{mu>0}, and \eqn{\lambda>0}{lambda>0}. The mean of \eqn{Y} is \eqn{\mu}{mu} and its variance is \eqn{\mu^3/\lambda}{mu^3/lambda}. By default, \eqn{\eta_1=\log(\mu)}{eta1=log(mu)} and \eqn{\eta_2=\log(\lambda)}{eta2=log(lambda)}. The mean is returned as the fitted values. This \pkg{VGAM} family function can handle multiple responses (inputted as a matrix). } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, \code{\link{rrvglm}} and \code{\link{vgam}}. } \references{ Johnson, N. L. and Kotz, S. and Balakrishnan, N. (1994) \emph{Continuous Univariate Distributions}, 2nd edition, Volume 1, New York: Wiley. Evans, M., Hastings, N. and Peacock, B. (2000) \emph{Statistical Distributions}, New York: Wiley-Interscience, Third edition. } \author{ T. W. Yee } \note{ The inverse Gaussian distribution can be fitted (to a certain extent) using the usual GLM framework involving a scale parameter. This family function is different from that approach in that it estimates both parameters by full maximum likelihood estimation. } \seealso{ \code{\link{Inv.gaussian}}, \code{\link{wald}}, \code{\link{bisa}}. The \R{} package \pkg{SuppDists} has several functions for evaluating the density, distribution function, quantile function and generating random numbers from the inverse Gaussian distribution. } \examples{ idata <- data.frame(x2 = runif(nn <- 1000)) idata <- transform(idata, mymu = exp(2 + 1 * x2), Lambda = exp(2 + 1 * x2)) idata <- transform(idata, y = rinv.gaussian(nn, mu = mymu, lambda = Lambda)) fit1 <- vglm(y ~ x2, inv.gaussianff, idata, trace = TRUE) rrig <- rrvglm(y ~ x2, inv.gaussianff, idata, trace = TRUE) coef(fit1, matrix = TRUE) coef(rrig, matrix = TRUE) Coef(rrig) summary(fit1) } \keyword{models} \keyword{regression} VGAM/man/identity.Rd0000644000176000001440000000455612136651105013741 0ustar ripleyusers\name{identity} \alias{identity} \alias{nidentity} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Identity Link Function } \description{ Computes the identity transformation, including its inverse and the first two derivatives. } \usage{ identity(theta, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) nidentity(theta, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{theta}{ Numeric or character. See below for further details. } \item{inverse, deriv, short, tag}{ Details at \code{\link{Links}}. } } \details{ The identity link function \eqn{g(\theta)=\theta}{g(theta)=theta} should be available to every parameter estimated by the \pkg{VGAM} library. However, it usually results in numerical problems because the estimates lie outside the permitted range. 
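For example, if it is used as the link for a probability parameter then nothing constrains the fitted probability to lie in \eqn{(0,1)}, and a standard deviation modelled this way is not forced to remain positive.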
Consequently, the result may contain \code{Inf}, \code{-Inf}, \code{NA} or \code{NaN}. The function \code{nidentity} is the negative-identity link function and corresponds to \eqn{g(\theta)=-\theta}{g(theta)=-theta}. This is useful for some models, e.g., in the literature supporting the \code{\link{egev}} function it seems that half of the authors use \eqn{\xi=-k}{xi=-k} for the shape parameter and the other half use \eqn{k} instead of \eqn{\xi}{xi}. }
\value{ For \code{identity()}: for \code{deriv = 0}, \code{theta} is returned whether \code{inverse = FALSE} or \code{inverse = TRUE}, because the identity function is its own inverse. For \code{deriv = 1}, the function returns \emph{d} \code{theta} / \emph{d} \code{eta} as a function of \code{theta} if \code{inverse = FALSE}; if \code{inverse = TRUE} then it returns the reciprocal. For \code{nidentity()}: the results are similar to \code{identity()} except for a sign change in most cases. }
\references{ McCullagh, P. and Nelder, J. A. (1989) \emph{Generalized Linear Models}, 2nd ed. London: Chapman & Hall. }
\author{ Thomas W. Yee }
\seealso{ \code{\link{Links}}, \code{\link{loge}}, \code{\link{logit}}, \code{\link{probit}}, \code{\link{powl}}. }
\examples{ identity((-5):5) identity((-5):5, deriv = 1) identity((-5):5, deriv = 2) nidentity((-5):5) nidentity((-5):5, deriv = 1) nidentity((-5):5, deriv = 2) } \keyword{math} \keyword{models} \keyword{regression} VGAM/man/iam.Rd0000644000176000001440000000677612136651105012664 0ustar ripleyusers\name{iam} \alias{iam} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Index from Array to Matrix } \description{ Maps the elements of an array containing symmetric positive-definite matrices to a matrix with sufficient columns to hold them (called the matrix-band format). } \usage{ iam(j, k, M, both = FALSE, diag = TRUE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{j}{ An integer from the set \{\code{1:M}\} giving the row number of an element. } \item{k}{ An integer from the set \{\code{1:M}\} giving the column number of an element. } \item{M}{ The number of linear/additive predictors. This is the dimension of each positive-definite symmetric matrix. } \item{both}{ Logical. Return both the row and column indices? See below for more details. } \item{diag}{ Logical. Return the indices for the diagonal elements? If \code{FALSE} then only the strictly upper triangular part of the matrix is used. } }
\details{ Suppose we have \eqn{n} symmetric positive-definite square matrices, each \eqn{M} by \eqn{M}, and these are stored in an \code{array} of dimension \code{c(n,M,M)}. Then these can be more compactly represented by a \code{matrix} of dimension \code{c(n,K)} where \code{K} is an integer between \code{M} and \code{M*(M+1)/2} inclusive. The mapping between these two representations is given by this function. It firstly enumerates by the diagonal elements, followed by the band immediately above the diagonal, then the band above that one, etc. The last element is \code{(1,M)}. This function performs the mapping from elements \code{(j,k)} of symmetric positive-definite square matrices to the columns of another matrix representing such. This is called the \emph{matrix-band} format and is used by the \pkg{VGAM} package. }
\value{ This function has a dual purpose depending on the value of \code{both}. If \code{both = FALSE} then the column number corresponding to the \code{j}-\code{k} element of the matrix is returned.
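For instance, with \code{M = 3} the columns correspond, in order, to the elements (1,1), (2,2), (3,3), (1,2), (2,3), (1,3), so that \code{iam(1, 2, M = 3)} returns 4 (this is verified in the first line of the examples below).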
If \code{both = TRUE} then \code{j} and \code{k} are ignored and a list with the following components is returned.
\item{row.index}{ The row indices of the upper triangular part of the matrix (this may or may not include the diagonal elements, depending on the argument \code{diag}). }
\item{col.index}{ The column indices of the upper triangular part of the matrix (this may or may not include the diagonal elements, depending on the argument \code{diag}). } }
\references{ The website \url{http://www.stat.auckland.ac.nz/~yee} contains some additional information. }
\author{ T. W. Yee }
\note{ This function is used in the \code{weight} slot of many \pkg{VGAM} family functions (see \code{\link{vglmff-class}}), especially those whose \eqn{M} is determined by the data, e.g., \code{\link{dirichlet}}, \code{\link{multinomial}}. }
\seealso{ \code{\link{vglmff-class}}. %\code{ima}. }
\examples{ iam(1, 2, M = 3) # The 4th column represents element (1,2) of a 3x3 matrix
iam(NULL, NULL, M = 3, both = TRUE) # Return the row and column indices
dirichlet()@weight
M <- 4
temp1 <- iam(NA, NA, M = M, both = TRUE)
mat1 <- matrix(NA, M, M)
mat1[cbind(temp1$row, temp1$col)] <- 1:length(temp1$row)
mat1 # More commonly used
temp2 <- iam(NA, NA, M = M, both = TRUE, diag = FALSE)
mat2 <- matrix(NA, M, M)
mat2[cbind(temp2$row, temp2$col)] <- 1:length(temp2$row)
mat2 # Rarely used } \keyword{manip} \keyword{programming} VGAM/man/hzetaUC.Rd0000644000176000001440000000432412136651105013444 0ustar ripleyusers\name{Hzeta} \alias{Hzeta} \alias{dhzeta} \alias{phzeta} \alias{qhzeta} \alias{rhzeta} \title{ Haight's Zeta Function } \description{ Density, distribution function, quantile function and random generation for Haight's zeta distribution with parameter \code{alpha}. } \usage{ dhzeta(x, alpha, log = FALSE) phzeta(q, alpha) qhzeta(p, alpha) rhzeta(n, alpha) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x, q}{ Vector of quantiles. For the density, it should be a vector with positive integer values in order for the probabilities to be positive. } \item{p}{vector of probabilities.} \item{n}{number of observations. A single positive integer.} \item{alpha}{ The parameter value. Must contain positive values and is recycled to the length of \code{x} or \code{p} or \code{q} if necessary. } \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the density is returned. } }
\details{ The probability function is \deqn{f(x) = (2x-1)^{(-\alpha)} - (2x+1)^{(-\alpha)},}{% f(x) = (2x-1)^(-alpha) - (2x+1)^(-alpha),} where \eqn{\alpha>0}{alpha>0} and \eqn{x=1,2,\ldots}{x=1,2,...}. }
\value{ \code{dhzeta} gives the density, \code{phzeta} gives the distribution function, \code{qhzeta} gives the quantile function, and \code{rhzeta} generates random deviates. }
\references{ Page 533 of Johnson N. L., Kemp, A. W. and Kotz S. (2005) \emph{Univariate Discrete Distributions}, 3rd edition, Hoboken, New Jersey: Wiley. }
\author{ T. W. Yee }
\note{ Given some response data, the \pkg{VGAM} family function \code{\link{hzeta}} estimates the parameter \code{alpha}. }
\seealso{ \code{\link{hzeta}}, \code{\link{zeta}}, \code{\link{zetaff}}.
} \examples{ dhzeta(1:20, 2.1) rhzeta(20, 2.1) round(1000 * dhzeta(1:8, 2)) table(rhzeta(1000, 2)) \dontrun{ alpha <- 1.1; x <- 1:10 plot(x, dhzeta(x, alpha = alpha), type = "h", ylim = 0:1, lwd = 2, sub = paste("alpha =", alpha), las = 1, col = "blue", ylab = "Probability", main = "Haight's zeta: blue = density; red = distribution function") lines(x+0.1, phzeta(x, alpha = alpha), col = "red", lty = 3, lwd = 2, type = "h") } } \keyword{distribution} VGAM/man/hzeta.Rd0000644000176000001440000000523312136651105013214 0ustar ripleyusers\name{hzeta} \alias{hzeta} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Haight's Zeta Family Function } \description{ Estimating the parameter of Haight's Zeta function. } \usage{ hzeta(link = "loglog", ialpha = NULL, nsimEIM=100) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{link}{ Parameter link function for the parameter. See \code{\link{Links}} for more choices. Here, a log-log link keeps the parameter greater than one, meaning the mean is finite. } \item{ialpha}{ Optional initial value for the (positive) parameter. The default is to obtain an initial value internally. Use this argument if the default fails. } \item{nsimEIM}{ See \code{\link{CommonVGAMffArguments}} for more information. } } \details{ The probability function is \deqn{f(y) = (2y-1)^{(-\alpha)} - (2y+1)^{(-\alpha)},}{% f(y) = (2y-1)^(-alpha) - (2y+1)^(-alpha),} where the parameter \eqn{\alpha>0}{alpha>0} and \eqn{y=1,2,\ldots}{y=1,2,...}. The function \code{\link{dhzeta}} computes this probability function. The mean of \eqn{Y}, which is returned as fitted values, is \eqn{(1-2^{-\alpha}) \zeta(\alpha)}{(1-2^(-alpha))*zeta(alpha)} provided \eqn{\alpha > 1}{alpha > 1}, where \eqn{\zeta}{zeta} is Riemann's zeta function. The mean is a decreasing function of \eqn{\alpha}{alpha}. The mean is infinite if \eqn{\alpha \leq 1}{alpha <= 1}, and the variance is infinite if \eqn{\alpha \leq 2}{alpha <= 2}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Page 533 of Johnson N. L., Kemp, A. W. and Kotz S. (2005) \emph{Univariate Discrete Distributions}, 3rd edition, Hoboken, New Jersey: Wiley. } \author{ T. W. Yee } %\note{ %} \seealso{ \code{\link{Hzeta}}, \code{\link{zeta}}, \code{\link{zetaff}}, \code{\link{loglog}}. } \examples{ alpha <- exp(exp(-0.1)) # The parameter hdata <- data.frame(y = rhzeta(n = 1000, alpha)) fit <- vglm(y ~ 1, hzeta, hdata, trace = TRUE, crit = "c") coef(fit, matrix = TRUE) Coef(fit) # Useful for intercept-only models; should be same as alpha c(with(hdata, mean(y)), head(fitted(fit), 1)) summary(fit) } \keyword{models} \keyword{regression} %# Generate some hzeta random variates %set.seed(123) %nn = 400 %x = 1:20 %alpha = 1.1 # The parameter %probs = dhzeta(x, alpha) %\dontrun{ %plot(x, probs, type="h", log="y")} %cs = cumsum(probs) %tab = table(cut(runif(nn), brea = c(0,cs,1))) %index = (1:length(tab))[tab>0] %y = rep(index, times=tab[index]) VGAM/man/hypersecant.Rd0000644000176000001440000000515412136651105014430 0ustar ripleyusers\name{hypersecant} \alias{hypersecant} \alias{hypersecant.1} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Hyperbolic Secant Distribution Family Function } \description{ Estimation of the parameter of the hyperbolic secant distribution. 
} \usage{ hypersecant(link.theta = elogit(min = -pi/2, max = pi/2), init.theta = NULL) hypersecant.1(link.theta = elogit(min = -pi/2, max = pi/2), init.theta = NULL) } %- maybe also 'usage' for other objects documented here.
\arguments{ \item{link.theta}{ Parameter link function applied to the parameter \eqn{\theta}{theta}. See \code{\link{Links}} for more choices. } \item{init.theta}{ Optional initial value for \eqn{\theta}{theta}. If failure to converge occurs, try some other value. The default means an initial value is determined internally. } }
\details{ The probability density function of the hyperbolic secant distribution is given by \deqn{f(y)=\exp(\theta y + \log(\cos(\theta))) / (2 \cosh(\pi y/2)),}{% f(y) = exp(theta*y + log(cos(theta))) / (2*cosh(pi*y/2)),} for parameter \eqn{-\pi/2 < \theta < \pi/2}{-pi/2 < theta < pi/2} and all real \eqn{y}. The mean of \eqn{Y} is \eqn{\tan(\theta)}{tan(theta)} (returned as the fitted values). Another parameterization is used for \code{hypersecant.1()}. This uses \deqn{f(y)=(\cos(\theta)/\pi) \times y^{-0.5+\theta/\pi} \times (1-y)^{-0.5-\theta/\pi},}{% f(y) = (cos(theta)/pi) * y^(-0.5+theta/pi) * (1-y)^(-0.5-theta/pi),} for parameter \eqn{-\pi/2 < \theta < \pi/2}{-pi/2 < theta < pi/2} and \eqn{0 < y < 1}. Then the mean of \eqn{Y} is \eqn{0.5 + \theta/\pi}{0.5 + theta/pi} (returned as the fitted values) and the variance is \eqn{(\pi^2 - 4 \theta^2) / (8\pi^2)}{(pi^2 - 4*theta^2) / (8*pi^2)}. For both parameterizations, Newton-Raphson is the same as Fisher scoring. }
\value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. }
\references{ Jorgensen, B. (1997) \emph{The Theory of Dispersion Models}. London: Chapman & Hall. % p.101, Eqn (3.37). }
\author{ T. W. Yee } %\note{ %}
\seealso{ \code{\link{elogit}}. }
\examples{ hdata <- data.frame(x2 = rnorm(nn <- 200)) hdata <- transform(hdata, y = rnorm(nn)) # Not very good data! fit <- vglm(y ~ x2, hypersecant, hdata, trace = TRUE, crit = "coef") coef(fit, matrix = TRUE) fit@misc$earg # Not recommended: fit <- vglm(y ~ x2, hypersecant(link = "identity"), hdata, trace = TRUE) coef(fit, matrix = TRUE) fit@misc$earg } \keyword{models} \keyword{regression} VGAM/man/hyperg.Rd0000644000176000001440000001027212136651105013376 0ustar ripleyusers\name{hyperg} %\alias{hyperg} \alias{hyperg} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Hypergeometric Family Function } \description{ Family function for a hypergeometric distribution where either the number of white balls or the total number of white and black balls is unknown. } \usage{ hyperg(N = NULL, D = NULL, lprob = "logit", iprob = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{N}{ Total number of white and black balls in the urn. Must be a vector with positive values, and is recycled, if necessary, to the same length as the response. One of \code{N} and \code{D} must be specified. } \item{D}{ Number of white balls in the urn. Must be a vector with positive values, and is recycled, if necessary, to the same length as the response. One of \code{N} and \code{D} must be specified. } \item{lprob}{ Link function for the probabilities. See \code{\link{Links}} for more choices. } \item{iprob}{ Optional initial value for the probabilities. The default is to choose initial values internally.
} } \details{ Consider the scenario from \code{\link[stats]{dhyper}} where there are \eqn{N=m+n} balls in an urn, where \eqn{m} are white and \eqn{n} are black. A simple random sample (i.e., \emph{without} replacement) of \eqn{k} balls is taken. The response here is the sample \emph{proportion} of white balls. In this document, \code{N} is \eqn{N=m+n}, \code{D} is \eqn{m} (for the number of ``defectives'', in quality control terminology, or equivalently, the number of marked individuals). The parameter to be estimated is the population proportion of white balls, viz. \eqn{prob = m/(m+n)}. Depending on which one of \code{N} and \code{D} is inputted, the estimate of the other parameter can be obtained from the equation \eqn{prob = m/(m+n)}, or equivalently, \code{prob = D/N}. However, the log-factorials are computed using \code{\link[base]{lgamma}} and both \eqn{m} and \eqn{n} are not restricted to being integer. Thus if an integer \eqn{N} is to be estimated, it will be necessary to evaluate the likelihood function at integer values about the estimate, i.e., at \code{trunc(Nhat)} and \code{ceiling(Nhat)} where \code{Nhat} is the (real) estimate of \eqn{N}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, \code{\link{vgam}}, \code{\link{rrvglm}}, \code{\link{cqo}}, and \code{\link{cao}}. } \references{ Evans, M., Hastings, N. and Peacock, B. (2000) \emph{Statistical Distributions}, New York: Wiley-Interscience, Third edition. } \author{ Thomas W. Yee } \note{ The response can be of one of three formats: a factor (first level taken as success), a vector of proportions of success, or a 2-column matrix (first column = successes) of counts. The argument \code{weights} in the modelling function can also be specified. In particular, for a general vector of proportions, you will need to specify \code{weights} because the number of trials is needed. } \seealso{ \code{\link[stats]{dhyper}}, \code{\link{binomialff}}. } \section{Warning }{ No checking is done to ensure that certain values are within range, e.g., \eqn{k \leq N}{k <= N}. } \examples{ nn <- 100 m <- 5 # number of white balls in the population k <- rep(4, len = nn) # sample sizes n <- 4 # number of black balls in the population y <- rhyper(nn = nn, m = m, n = n, k = k) yprop <- y / k # sample proportions # N is unknown, D is known. Both models are equivalent: fit <- vglm(cbind(y,k-y) ~ 1, hyperg(D = m), trace = TRUE, crit = "c") fit <- vglm(yprop ~ 1, hyperg(D=m), weight = k, trace = TRUE, crit = "c") # N is known, D is unknown. Both models are equivalent: fit <- vglm(cbind(y,k-y) ~ 1, hyperg(N = m+n), trace = TRUE, crit = "l") fit <- vglm(yprop ~ 1, hyperg(N = m+n), weight = k, trace = TRUE, crit = "l") coef(fit, matrix = TRUE) Coef(fit) # Should be equal to the true population proportion unique(m / (m+n)) # The true population proportion fit@extra head(fitted(fit)) summary(fit) } \keyword{models} \keyword{regression} VGAM/man/hunua.Rd0000644000176000001440000000377012136651105013225 0ustar ripleyusers\name{hunua} \alias{hunua} \docType{data} \title{Hunua Ranges Data} \description{ The \code{hunua} data frame has 392 rows and 18 columns. Altitude is explanatory, and there are binary responses (presence/absence = 1/0 respectively) for 17 plant species. 
} \usage{data(hunua)} \format{ This data frame contains the following columns: \describe{ \item{agaaus}{Agathis australis, or Kauri} \item{beitaw}{Beilschmiedia tawa, or Tawa} \item{corlae}{Corynocarpus laevigatus} \item{cyadea}{Cyathea dealbata} \item{cyamed}{Cyathea medullaris} \item{daccup}{Dacrydium cupressinum} \item{dacdac}{Dacrycarpus dacrydioides} \item{eladen}{Elaeocarpus dentatus} \item{hedarb}{Hedycarya arborea} \item{hohpop}{Species name unknown} \item{kniexc}{Knightia excelsa, or Rewarewa} \item{kuneri}{Kunzea ericoides} \item{lepsco}{Leptospermum scoparium} \item{metrob}{Metrosideros robusta} \item{neslan}{Nestegis lanceolata} \item{rhosap}{Rhopalostylis sapida} \item{vitluc}{Vitex lucens, or Puriri} \item{altitude}{meters above sea level} } }
\details{ These were collected from the Hunua Ranges, a small forest in southern Auckland, New Zealand. At 392 sites in the forest, the presence/absence of 17 plant species was recorded, as well as the altitude. Each site had an area of 200\eqn{m^2}{m^2}. }
\source{ Dr Neil Mitchell, University of Auckland. } %\references{ % None. %}
\seealso{ \code{\link{waitakere}}. }
\examples{ # Fit a GAM using vgam() and compare it with the Waitakere Ranges one fit.h <- vgam(agaaus ~ s(altitude, df = 2), binomialff, hunua) \dontrun{ plot(fit.h, se = TRUE, lcol = "red", scol = "red", main = "Red is Hunua, Blue is Waitakere") } head(predict(fit.h, hunua, type = "response")) fit.w <- vgam(agaaus ~ s(altitude, df = 2), binomialff, waitakere) \dontrun{ plot(fit.w, se = TRUE, lcol = "blue", scol = "blue", add = TRUE) } head(predict(fit.w, hunua, type = "response")) # Same as above? } \keyword{datasets} VGAM/man/huberUC.Rd0000644000176000001440000000627712136651105013447 0ustar ripleyusers\name{dhuber} \alias{dhuber} \alias{edhuber} \alias{rhuber} \alias{qhuber} \alias{phuber} \title{Huber's least favourable distribution} \description{ Density, distribution function, quantile function and random generation for Huber's least favourable distribution, see Huber and Ronchetti (2009). } \usage{ dhuber(x, k = 0.862, mu = 0, sigma = 1, log = FALSE) edhuber(x, k = 0.862, mu = 0, sigma = 1, log = FALSE) rhuber(n, k = 0.862, mu = 0, sigma = 1) qhuber(p, k = 0.862, mu = 0, sigma = 1) phuber(q, k = 0.862, mu = 0, sigma = 1) } \arguments{ \item{x, q}{numeric vector, vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of random values to be generated. If \code{length(n) > 1} then the length is taken to be the number required. } \item{k}{numeric. Borderline value of central Gaussian part of the distribution. This is known as the tuning constant, and should be positive. For example, \code{k = 0.862} refers to a 20\% contamination neighborhood of the Gaussian distribution. If \code{k = 1.40} then this is 5\% contamination. } \item{mu}{numeric. Distribution mean.} \item{sigma}{numeric. Distribution scale (\code{sigma = 1} defines the distribution in standard form, with standard Gaussian centre).} \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the result is returned. } }
\details{ Details are given in \code{\link{huber2}}, the \pkg{VGAM} family function for estimating the parameters \code{mu} and \code{sigma}. }
\value{ \code{dhuber} gives a vector of density values. \code{edhuber} gives a list with components \code{val} (density values) and \code{eps} (contamination proportion). \code{rhuber} gives a vector of random numbers generated by Huber's least favourable distribution.
\code{phuber} gives the distribution function, \code{qhuber} gives the quantile function. } %\references{ % Huber, P. J. and Ronchetti, E. (2009) % \emph{Robust Statistics}, 2nd ed. New York: Wiley. % % % Huber, P. J. and Ronchetti, E. (2009) Robust Statistics % (2nd ed.). Wiley, New York. % % %} \author{ Christian Hennig wrote \code{[d,ed,r]huber()} (from \pkg{smoothmest}) and slight modifications were made by T. W. Yee to replace looping by vectorization and addition of the \code{log} argument. Arash Ardalan wrote \code{[pq]huber()}. This helpfile was adapted from \pkg{smoothmest}. } \seealso{ \code{\link{huber2}}. } \examples{ set.seed(123456) edhuber(1:5, k = 1.5) rhuber(5) \dontrun{ mu <- 3; xx <- seq(-2, 7, len = 100) # Plot CDF and PDF plot(xx, dhuber(xx, mu = mu), type = "l", col = "blue", las = 1, ylab = "", main = "blue is density, red is cumulative distribution function", sub = "Purple lines are the 10,20,...,90 percentiles", ylim = 0:1) abline(h = 0, col = "blue", lty = 2) lines(xx, phuber(xx, mu = mu), type = "l", col = "red") probs <- seq(0.1, 0.9, by = 0.1) Q <- qhuber(probs, mu = mu) lines(Q, dhuber(Q, mu = mu), col = "purple", lty = 3, type = "h") lines(Q, phuber(Q, mu = mu), col = "purple", lty = 3, type = "h") abline(h = probs, col = "purple", lty = 3) phuber(Q, mu = mu) - probs # Should be all 0s } } \keyword{distribution} VGAM/man/huber.Rd0000644000176000001440000000635612136651105013215 0ustar ripleyusers\name{huber2} \alias{huber2} \alias{huber1} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Huber's least favourable distribution family function } \description{ M-estimation of the two parameters of Huber's least favourable distribution. The one parameter case is also implemented. } \usage{ huber1(llocation = "identity", k = 0.862, imethod = 1) huber2(llocation = "identity", lscale = "loge", k = 0.862, imethod = 1, zero = 2) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{llocation, lscale}{ Link functions applied to the location and scale parameters. See \code{\link{Links}} for more choices. } \item{k}{ Tuning constant. See \code{\link{rhuber}} for more information. } \item{imethod, zero}{ See \code{\link{CommonVGAMffArguments}} for information. The default value of \code{zero} means the scale parameter is modelled as an intercept-only. } } \details{ Huber's least favourable distribution family function is popular for resistant/robust regression. The center of the distribution is normal and its tails are double exponential. By default, the mean is the first linear/additive predictor (returned as the fitted values; this is the location parameter), and the log of the scale parameter is the second linear/additive predictor. The Fisher information matrix is diagonal; Fisher scoring is implemented. The \pkg{VGAM} family function \code{huber1()} estimates only the location parameter. It assumes a scale parameter of unit value. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Huber, P. J. and Ronchetti, E. (2009) \emph{Robust Statistics}, 2nd ed. New York: Wiley. } \author{ T. W. Yee. Help was given by Arash Ardalan. } \note{ Warning: actually, \code{huber2()} may be erroneous since the first derivative is not continuous when there are two parameters to estimate. \code{huber1()} is fine in this respect. The response should be univariate. 
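Also note that the default tuning constant, \code{k = 0.862}, corresponds to a 20\% contamination neighbourhood of the Gaussian distribution; see \code{\link{rhuber}}.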
} \seealso{ \code{\link{rhuber}}, \code{\link{normal1}}, \code{\link{gaussianff}}, \code{\link{laplace}}, \code{\link{CommonVGAMffArguments}}. }
\examples{ set.seed(1231); NN <- 30; coef1 <- 1; coef2 <- 10 hdata <- data.frame(x2 = sort(runif(NN))) hdata <- transform(hdata, y = rhuber(NN, mu = coef1 + coef2 * x2)) hdata$x2[1] <- 0.0 # Add an outlier hdata$y[1] <- 10 fit.huber2 <- vglm(y ~ x2, huber2(imethod = 3), hdata, trace = TRUE) fit.huber1 <- vglm(y ~ x2, huber1(imethod = 3), hdata, trace = TRUE) coef(fit.huber2, matrix = TRUE) summary(fit.huber2) \dontrun{ # Plot the results plot(y ~ x2, hdata, col = "blue", las = 1) lines(fitted(fit.huber2) ~ x2, hdata, col = "darkgreen", lwd = 2) fit.lm <- lm(y ~ x2, hdata) # Compare to a LM: lines(fitted(fit.lm) ~ x2, hdata, col = "lavender", lwd = 3) # Compare to truth: lines(coef1 + coef2 * x2 ~ x2, hdata, col = "orange", lwd = 2, lty = "dashed") legend("bottomright", legend = c("truth", "huber", "lm"), col = c("orange", "darkgreen", "lavender"), lty = c("dashed", "solid", "solid"), lwd = c(2, 2, 3)) } } \keyword{models} \keyword{regression} VGAM/man/hspider.Rd0000644000176000001440000000632712136651105013544 0ustar ripleyusers\name{hspider} \alias{hspider} \docType{data} \title{ Hunting Spider Data } \description{ Abundance of hunting spiders in a Dutch dune area. } \usage{data(hspider)} \format{ A data frame with 28 observations (sites) on the following 18 variables. \describe{ \item{WaterCon}{Log percentage of soil dry mass.} \item{BareSand}{Log percentage cover of bare sand.} \item{FallTwig}{Log percentage cover of fallen leaves and twigs.} \item{CoveMoss}{Log percentage cover of the moss layer.} \item{CoveHerb}{Log percentage cover of the herb layer.} \item{ReflLux}{Reflection of the soil surface with cloudless sky.} \item{Alopacce}{Abundance of \emph{Alopecosa accentuata}.} \item{Alopcune}{Abundance of \emph{Alopecosa cuneata}.} \item{Alopfabr}{Abundance of \emph{Alopecosa fabrilis}.} \item{Arctlute}{Abundance of \emph{Arctosa lutetiana}.} \item{Arctperi}{Abundance of \emph{Arctosa perita}.} \item{Auloalbi}{Abundance of \emph{Aulonia albimana}.} \item{Pardlugu}{Abundance of \emph{Pardosa lugubris}.} \item{Pardmont}{Abundance of \emph{Pardosa monticola}.} \item{Pardnigr}{Abundance of \emph{Pardosa nigriceps}.} \item{Pardpull}{Abundance of \emph{Pardosa pullata}.} \item{Trocterr}{Abundance of \emph{Trochosa terricola}.} \item{Zoraspin}{Abundance of \emph{Zora spinimana}.} } }
\details{ The data, which originally came from Van der Aart and Smeek-Enserink (1975), consists of abundances (numbers trapped over a 60-week period) and 6 environmental variables. There were 28 sites. This data set has often been used to illustrate ordination, e.g., using canonical correspondence analysis (CCA). In the example below, the data is used for constrained quadratic ordination (CQO; formerly called canonical Gaussian ordination or CGO), a numerically intensive method that has many superior qualities. See \code{\link{cqo}} for details. } %\source{ %}
\references{ Van der Aart, P. J. M. and Smeek-Enserink, N. (1975) Correlations between distributions of hunting spiders (Lycosidae, Ctenidae) and environmental characteristics in a dune area. \emph{Netherlands Journal of Zoology}, \bold{25}, 1--45.
} \examples{ str(hspider) \dontrun{ # Fit a rank-1 Poisson CQO set.seed(111) # This leads to the global solution hspider[,1:6]=scale(hspider[,1:6]) # Standardize the environmental variables # vvv p1 = cqo(cbind(Alopacce, Alopcune, Alopfabr, Arctlute, Arctperi, Auloalbi, # vvv Pardlugu, Pardmont, Pardnigr, Pardpull, Trocterr, Zoraspin) ~ # vvv WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux, # vvv fam = poissonff, data = hspider, Crow1posit=FALSE) # vvv nos = ncol(p1@y) # vvv lvplot(p1, y=TRUE, lcol=1:nos, pch=1:nos, pcol=1:nos) # vvv Coef(p1) # vvv summary(p1) # Fit a rank-1 binomial CAO hsbin <- hspider # Binary species data hsbin[,-(1:6)] <- as.numeric(hsbin[,-(1:6)] > 0) set.seed(123) ahsb1 <- cao(cbind(Alopcune,Arctlute,Auloalbi,Zoraspin) ~ WaterCon + ReflLux, family = binomialff(mv = TRUE), df1.nl = 2.2, Bestof=3, data = hsbin) par(mfrow = 2:1, las = 1) lvplot(ahsb1, type = "predictors", llwd = 2, ylab = "logit p", lcol = 1:9) persp(ahsb1, rug = TRUE, col = 1:10, lwd = 2) coef(ahsb1) } } \keyword{datasets} VGAM/man/hormone.Rd0000644000176000001440000000771312136651105013555 0ustar ripleyusers\name{hormone} \alias{hormone} \docType{data} \title{ Hormone Assay Data } \description{ A hormone assay data set from Carroll and Ruppert (1988). %% ~~ A concise (1-5 lines) description of the dataset. ~~ } \usage{data(hormone)} \format{ A data frame with 85 observations on the following 2 variables. \describe{ \item{\code{X}}{a numeric vector, suitable as the x-axis in a scatter plot. The reference method. } \item{\code{Y}}{a numeric vector, suitable as the y-axis in a scatter plot. The test method. } } } \details{ %% ~~ If necessary, more details than the __description__ above ~~ The data is given in Table 2.4 of Carroll and Ruppert (1988), and was downloaded from \url{http://www.stat.tamu.edu/~carroll}. The book describes the data as follows. The data are the results of two assay methods for hormone data; the scale of the data as presented is not particularly meaningful, and the original source of the data refused permission to divulge further information. As in a similar example of Leurgans (1980), the old or reference method is being used to predict the new or test method. The overall goal is to see whether we can reproduce the test-method measurements with the reference-method measurements. Thus calibration might be of interest for the data. } %\source{ % Originally, %} \references{ Carroll, R. J. and Ruppert, D. (1988) \emph{Transformation and Weighting in Regression}. New York, USA: Chapman & Hall. Leurgans, S. (1980) Evaluating laboratory measurement techniques. \emph{Biostatistics Casebook}. Eds.: Miller, R. G. Jr., and Efron, B. and Brown, B. W. Jr., and Moses, L. New York, USA: Wiley. Yee, T. W. (2013) Reduced-rank vector generalized linear models with two linear predictors. \emph{Computational Statistics and Data Analysis}. } \seealso{ \code{\link[VGAM]{normal1}}, \code{\link[VGAM]{rrvglm}}. } \examples{ \dontrun{ data(hormone) summary(hormone) modelI <-rrvglm(Y ~ 1 + X, data = hormone, trace = TRUE, normal1(zero = NULL, lsd = "identity", imethod = 2)) # Alternative way to fit modelI modelI.other <- vglm(Y ~ 1 + X, data = hormone, trace = TRUE, normal1(zero = NULL, lsd = "identity")) # Inferior to modelI modelII <- vglm(Y ~ 1 + X, data = hormone, trace = TRUE, family = normal1(zero = NULL)) logLik(modelI) logLik(modelII) # Less than logLik(modelI) # Reproduce the top 3 equations on p.65 of Carroll and Ruppert (1988). # They are called Equations (1)--(3) here. 
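# An illustrative aside, not from Carroll and Ruppert (1988): the residual
# spread from an ordinary least squares fit tends to grow with X, which is
# why Equations (1)--(3) below all model the standard deviation as a
# function of X.
with(hormone, cor(X, abs(residuals(lm(Y ~ X)))))  # Typically positive here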
# Equation (1) hormone <- transform(hormone, rX = 1 / X) clist <- list("(Intercept)" = diag(2), X = diag(2), rX = rbind(0, 1)) fit1 <- vglm(Y ~ 1 + X + rX, family = normal1(zero = NULL), constraints = clist, data = hormone, trace = TRUE) coef(fit1, matrix = TRUE) summary(fit1) # Actually, the intercepts do not seem significant plot(Y ~ X, hormone, col = "blue") lines(fitted(fit1) ~ X, hormone, col = "orange") # Equation (2) fit2 <- rrvglm(Y ~ 1 + X, normal1(zero = NULL), hormone, trace = TRUE) coef(fit2, matrix = TRUE) plot(Y ~ X, hormone, col = "blue") lines(fitted(fit2) ~ X, hormone, col = "red") # Add +- 2 SEs lines(fitted(fit2) + 2 * exp(predict(fit2)[, "log(sd)"]) ~ X, hormone, col = "orange") lines(fitted(fit2) - 2 * exp(predict(fit2)[, "log(sd)"]) ~ X, hormone, col = "orange") # Equation (3) # Does not fit well because the loge link for the mean is not good. fit3 <- rrvglm(Y ~ 1 + X, maxit = 300, data = hormone, trace = TRUE, normal1(lmean = "loge", zero = NULL)) coef(fit3, matrix = TRUE) plot(Y ~ X, hormone, col = "blue") # Does not look okay. lines(exp(predict(fit3)[, 1]) ~ X, hormone, col = "red") # Add +- 2 SEs lines(fitted(fit3) + 2 * exp(predict(fit3)[, "log(sd)"]) ~ X, hormone, col = "orange") lines(fitted(fit3) - 2 * exp(predict(fit3)[, "log(sd)"]) ~ X, hormone, col = "orange") } } \keyword{datasets} % from \url{http://www.stat.tamu.edu/~carroll/data/hormone_data.txt}. VGAM/man/hatvalues.Rd0000644000176000001440000002012012136651105014065 0ustar ripleyusers% 20120312 % Modified from file src/library/stats/man/influence.measures.Rd \name{hatvalues} %\title{Regression Deletion Diagnostics} \title{Hat Values and Regression Deletion Diagnostics} %\concept{studentized residuals} %\concept{standardized residuals} %\concept{Cook's distances} %\concept{Covariance ratios} \concept{DFBETAs} %\concept{DFFITs} %\alias{influence.measures} %\alias{print.infl} %\alias{summary.infl} %\alias{hat} \alias{hatvalues} %\alias{hatvalues.lm} \alias{hatvaluesvlm} \alias{hatplot} \alias{hatplot.vlm} %\alias{rstandard} %\alias{rstandard.lm} %\alias{rstandard.glm} %\alias{rstudent} %\alias{rstudent.lm} %\alias{rstudent.glm} \alias{dfbeta} \alias{dfbetavlm} %\alias{dfbetas} %\alias{dfbetas.lm} %\alias{dffits} %\alias{covratio} %\alias{cooks.distance} %\alias{cooks.distance.lm} %\alias{cooks.distance.glm} \usage{ hatvalues(model, \dots) hatvaluesvlm(model, type = c("diagonal", "matrix", "centralBlocks"), \dots) hatplot(model, \dots) hatplot.vlm(model, multiplier = c(2, 3), lty = "dashed", xlab = "Observation", ylab = "Hat values", ylim = NULL, \dots) dfbetavlm(model, maxit.new = 1, trace.new = FALSE, smallno = 1.0e-8, ...) } \arguments{ \item{model}{an \R object, typically returned by \code{\link{vglm}}. %or \code{\link{glm}}. } \item{type}{Character. The default is the first choice, which is a \eqn{nM \times nM}{nM x nM} matrix. If \code{type = "matrix"} then the \emph{entire} hat matrix is returned. If \code{type = "centralBlocks"} then \eqn{n} central \eqn{M \times M}{M x M} block matrices, in matrix-band format. } % \item{diag}{Logical. If \code{TRUE} then the diagonal elements % of the hat matrix are returned, else the \emph{entire} hat matrix is % returned. % In the latter case, it is a \eqn{nM \times nM}{nM x nM} matrix. % } \item{multiplier}{Numeric, the multiplier. The usual rule-of-thumb is that values greater than two or three times the average leverage (at least for the linear model) should be checked. } \item{lty, xlab, ylab, ylim}{Graphical parameters, see \code{\link[graphics]{par}} etc. 
The default of \code{ylim} is \code{c(0, max(hatvalues(model)))} which means that if the horizontal dashed lines cannot be seen then there are no particularly influential observations. }
\item{maxit.new, trace.new, smallno}{ Having \code{maxit.new = 1} will give a one IRLS step approximation from the ordinary solution (and no warnings!). Else having \code{maxit.new = 10}, say, should usually mean convergence will occur for all observations when they are removed one-at-a-time. Else having \code{maxit.new = 2}, say, should usually mean some lack of convergence will occur when observations are removed one-at-a-time. Setting \code{trace.new = TRUE} will produce some running output at each IRLS iteration and for each individual row of the model matrix. The argument \code{smallno} multiplies each value of the original prior weight (often unity); setting it identically to zero will result in an error, but setting a very small value effectively removes that observation. }
% \item{infl}{influence structure as returned by
% \code{\link{lm.influence}} or \code{\link{influence}} (the latter
% only for the \code{glm} method of \code{rstudent} and
% \code{cooks.distance}).}
% \item{res}{(possibly weighted) residuals, with proper default.}
% \item{sd}{standard deviation to use, see default.}
% \item{dispersion}{dispersion (for \code{\link{glm}} objects) to use, % see default.}
% \item{hat}{hat values \eqn{H_{ii}}{H[i,i]}, see default.}
% \item{type}{type of residuals for \code{glm} method for \code{rstandard.}}
% \item{x}{the \eqn{X} or design matrix.}
% \item{intercept}{should an intercept column be prepended to \code{x}?}
\item{\dots}{further arguments, for example, graphical parameters for \code{hatplot.vlm()}. % passed to or from other methods.
} }
\description{ When complete, this will be a suite of functions for computing some of the regression (leave-one-out deletion) diagnostics for the VGLM class.
% This suite of functions can be used to compute some of the
% regression (leave-one-out deletion) diagnostics for linear and
% generalized linear models discussed in Belsley, Kuh and Welsch
% (1980), Cook and Weisberg (1982), etc.
}
\details{ The invocation \code{hatvalues(vglmObject)} should return an \eqn{n \times M}{n x M} matrix of the diagonal elements of the hat (projection) matrix of a \code{\link{vglm}} object. To do this, the QR decomposition of the object is retrieved or reconstructed, and then straightforward calculations are performed. The invocation \code{hatplot(vglmObject)} should plot the diagonal of the hat matrix for each of the \eqn{M} linear/additive predictors. By default, two horizontal dashed lines are added; hat values higher than these ought to be checked.
% The primary high-level function is \code{influence.measures}
% which produces a class \code{"infl"} object tabular display
% showing the DFBETAS for each model variable, DFFITS, covariance
% ratios, Cook's distances and the diagonal elements of the
% hat matrix. Cases which are influential with respect to any
% of these measures are marked with an asterisk.
% The functions \code{dfbetas}, \code{dffits}, \code{covratio}
% and \code{cooks.distance} provide direct access to the
% corresponding diagnostic quantities. Functions \code{rstandard}
% and \code{rstudent} give the standardized and Studentized
% residuals respectively. (These re-normalize the residuals to
% have unit variance, using an overall and leave-one-out measure
% of the error variance respectively.)
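As a quick sanity check, the hat values should sum to the rank of the model, i.e., \code{sum(hatvalues(vglmObject))} should equal \code{vglmObject@rank}; this is verified in the examples below.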
% Values for generalized linear models are approximations, as % described in Williams (1987) (except that Cook's distances % are scaled as \eqn{F} rather than as chi-square values). The % approximations can be poor when some cases have large influence. % The optional \code{infl}, \code{res} and \code{sd} arguments are % there to encourage the use of these direct access functions, % in situations where, e.g., the underlying basic influence % measures (from \code{\link{lm.influence}} or the generic % \code{\link{influence}}) are already available. % Note that cases with \code{weights == 0} are \emph{dropped} from all % these functions, but that if a linear model has been fitted with % \code{na.action = na.exclude}, suitable values are filled in for the % cases excluded during fitting. % The function \code{hat()} exists mainly for S (version 2) % compatibility; we recommend using \code{hatvalues()} instead. } \note{ It is hoped, soon, that the full suite of functions described at \code{\link[stats]{influence.measures}} will be written for VGLMs. This will enable general regression deletion diagnostics to be available for the entire VGLM class. % For \code{hatvalues}, \code{dfbeta}, and \code{dfbetas}, the method % for linear models also works for generalized linear models. } \author{ T. W. Yee. } %\references{ % Belsley, D. A., Kuh, E. and Welsch, R. E. (1980) % \emph{Regression Diagnostics}. % New York: Wiley. % % Cook, R. D. and Weisberg, S. (1982) % \emph{Residuals and Influence in Regression}. % London: Chapman and Hall. % % Williams, D. A. (1987) % Generalized linear model diagnostics using the deviance and single % case deletions. \emph{Applied Statistics} \bold{36}, 181--191. % % Fox, J. (1997) % \emph{Applied Regression, Linear Models, and Related Methods}. Sage. % % Fox, J. (2002) % \emph{An R and S-Plus Companion to Applied Regression}. % Sage Publ.; \url{http://www.socsci.mcmaster.ca/jfox/Books/Companion/}. % % %} \seealso{ \code{\link{vglm}}, \code{\link{cumulative}}, \code{\link[stats]{influence.measures}}. } \examples{ # Proportional odds model, p.179, in McCullagh and Nelder (1989) pneumo <- transform(pneumo, let = log(exposure.time)) fit <- vglm(cbind(normal, mild, severe) ~ let, cumulative, data = pneumo) hatvalues(fit) # n x M matrix, with positive values all.equal(sum(hatvalues(fit)), fit@rank) # Should be TRUE \dontrun{ par(mfrow = c(1, 2)) hatplot(fit, ylim = c(0, 1), las = 1, col = "blue") } } \keyword{regression} VGAM/man/guplot.Rd0000644000176000001440000000521012136651105013406 0ustar ripleyusers\name{guplot} \alias{guplot} \alias{guplot.default} \alias{guplot.vlm} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Gumbel Plot } \description{ Produces a Gumbel plot, a diagnostic plot for checking whether the data appears to be from a Gumbel distribution. } \usage{ guplot(object, ...) guplot.default(y, main = "Gumbel Plot", xlab = "Reduced data", ylab = "Observed data", type = "p", ...) guplot.vlm(object, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{y}{ A numerical vector. \code{NA}s etc. are not allowed.} \item{main}{Character. Overall title for the plot. } \item{xlab}{Character. Title for the x axis. } \item{ylab}{Character. Title for the y axis. } \item{type}{Type of plot. The default means points are plotted. } \item{object}{ An object that inherits class \code{"vlm"}, usually of class \code{\link{vglm-class}} or \code{\link{vgam-class}}. 
} \item{\dots}{ Graphical arguments passed into \code{\link[graphics]{plot}}. See \code{\link[graphics]{par}} for an exhaustive list. The arguments \code{xlim} and \code{ylim} are particularly useful. } }
\details{ If \eqn{Y} has a Gumbel distribution then plotting the sorted values \eqn{y_i} versus the \emph{reduced values} \eqn{r_i} should appear linear. The reduced values are given by \deqn{r_i = -\log(-\log(p_i)) }{% r_i = - log(- log(p_i)) } where \eqn{p_i} is the \eqn{i}th plotting position, taken here to be \eqn{(i-0.5)/n}. Here, \eqn{n} is the number of observations. Curvature upwards/downwards may indicate a Frechet/Weibull distribution, respectively. Outliers may also be detected using this plot. The function \code{guplot} is generic, and \code{guplot.default} and \code{guplot.vlm} are some method functions for Gumbel plots. }
\value{ A list is returned invisibly with the following components. \item{x }{The reduced data. } \item{y }{The sorted y data. } }
%% zz not sure about the reference
\references{ Coles, S. (2001) \emph{An Introduction to Statistical Modeling of Extreme Values}. London: Springer-Verlag. Gumbel, E. J. (1958) \emph{Statistics of Extremes}. New York, USA: Columbia University Press. }
\author{ T. W. Yee }
\note{ The Gumbel distribution is a special case of the GEV distribution with shape parameter equal to zero. }
\seealso{ \code{\link{gumbel}}, \code{\link{egumbel}}, \code{\link{gev}}, \code{\link{venice}}. }
\examples{\dontrun{guplot(rnorm(500), las = 1) -> ii
names(ii)
guplot(with(venice, r1), col = "blue") # Venice sea levels data
}} \keyword{models} \keyword{regression} VGAM/man/gumbelUC.Rd0000644000176000001440000000716612136651105013613 0ustar ripleyusers\name{gumbelUC} \alias{dgumbel} \alias{pgumbel} \alias{qgumbel} \alias{rgumbel} %- Also NEED an '\alias' for EACH other topic documented here. \title{ The Gumbel Distribution } \description{ Density, distribution function, quantile function and random generation for the Gumbel distribution with location parameter \code{location} and scale parameter \code{scale}. } \usage{ dgumbel(x, location = 0, scale = 1, log = FALSE) pgumbel(q, location = 0, scale = 1) qgumbel(p, location = 0, scale = 1) rgumbel(n, location = 0, scale = 1) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. If \code{length(n) > 1} then the length is taken to be the number required.} \item{location}{the location parameter \eqn{\mu}{mu}. This is not the mean of the Gumbel distribution (see \bold{Details} below). } \item{scale}{the scale parameter \eqn{\sigma}{sigma}. This is not the standard deviation of the Gumbel distribution (see \bold{Details} below). } \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the density is returned. } }
\details{ The Gumbel distribution is a special case of the \emph{generalized extreme value} (GEV) distribution where the shape parameter \eqn{\xi}{xi} = 0. The latter has 3 parameters, so the Gumbel distribution has two. The Gumbel distribution function is \deqn{G(y) = \exp \left( - \exp \left[ - \frac{y-\mu}{\sigma} \right] \right) }{% G(y) = exp( -exp[ - (y-mu)/sigma ] ) } where \eqn{-\infty < y < \infty}{-Inf < y < Inf} and \eqn{\sigma > 0}{sigma > 0}. Its mean is \deqn{\mu + \sigma \gamma}{% mu + sigma * gamma} and its variance is \deqn{\sigma^2 * \pi^2 / 6}{% sigma^2 * pi^2 / 6} where \eqn{\gamma}{gamma} is Euler's constant (which can be obtained as \code{-digamma(1)}).
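For example, setting \eqn{G(y) = 1/2} and solving gives the median, \eqn{\mu - \sigma \log(\log 2)}{mu - sigma * log(log(2))}, i.e., approximately \eqn{\mu + 0.3665 \sigma}{mu + 0.3665 * sigma}.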
See \code{\link{gumbel}}, the \pkg{VGAM} family function for estimating the two parameters by maximum likelihood estimation, for formulae and other details. Apart from \code{n}, all the above arguments may be vectors and are recycled to the appropriate length if necessary. }
\value{ \code{dgumbel} gives the density, \code{pgumbel} gives the distribution function, \code{qgumbel} gives the quantile function, and \code{rgumbel} generates random deviates. }
\references{ Coles, S. (2001) \emph{An Introduction to Statistical Modeling of Extreme Values}. London: Springer-Verlag. }
\author{ T. W. Yee }
\note{ The \pkg{VGAM} family function \code{\link{gumbel}} can estimate the parameters of a Gumbel distribution using maximum likelihood estimation. }
\seealso{ \code{\link{gumbel}}, \code{\link{egumbel}}, \code{\link{gev}}, \code{\link{dgompertz}}. }
\examples{ mu <- 1; sigma <- 2; y <- rgumbel(n = 100, loc = mu, scale = sigma) c(mean(y), mu - sigma * digamma(1)) # Sample and population means c(var(y), sigma^2 * pi^2 / 6) # Sample and population variances \dontrun{ x <- seq(-2.5, 3.5, by = 0.01) loc <- 0; sigma <- 1 plot(x, dgumbel(x, loc, sigma), type = "l", col = "blue", ylim = c(0, 1), main = "Blue is density, red is cumulative distribution function", sub = "Purple are 5,10,...,95 percentiles", ylab = "", las = 1) abline(h = 0, col = "blue", lty = 2) lines(qgumbel(seq(0.05, 0.95, by = 0.05), loc, sigma), dgumbel(qgumbel(seq(0.05, 0.95, by = 0.05), loc, sigma), loc, sigma), col = "purple", lty = 3, type = "h") lines(x, pgumbel(x, loc, sigma), type = "l", col = "red") abline(h = 0, lty = 2) } } \keyword{distribution} VGAM/man/gumbelIbiv.Rd0000644000176000001440000000517312136651105014171 0ustar ripleyusers\name{gumbelIbiv} \alias{gumbelIbiv} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Gumbel's Type I Bivariate Distribution Family Function } \description{ Estimate the association parameter of Gumbel's Type I bivariate distribution by maximum likelihood estimation. } \usage{ gumbelIbiv(lapar = "identity", iapar = NULL, imethod = 1) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lapar}{ Link function applied to the association parameter \eqn{\alpha}{alpha}. See \code{\link{Links}} for more choices. } \item{iapar}{ Numeric. Optional initial value for \eqn{\alpha}{alpha}. By default, an initial value is chosen internally. If a convergence failure occurs try assigning a different value. Assigning a value will override the argument \code{imethod}. } \item{imethod}{ An integer with value \code{1} or \code{2} which specifies the initialization method. If failure to converge occurs try the other value, or else specify a value for \code{iapar}. } }
\details{ The cumulative distribution function is \deqn{P(Y_1 \leq y_1, Y_2 \leq y_2) = e^{-y_1-y_2+\alpha y_1 y_2} + 1 - e^{-y_1} - e^{-y_2} }{% P(Y1 <= y1, Y2 <= y2) = exp(-y1-y2+alpha*y1*y2) + 1 - exp(-y1) - exp(-y2) } for real \eqn{\alpha}{alpha}. The support of the function is for \eqn{y_1>0}{y1>0} and \eqn{y_2>0}{y2>0}. The marginal distributions are exponential with unit mean. A variant of Newton-Raphson is used, which only seems to work for an intercept model. It is a very good idea to set \code{trace = TRUE}. }
\value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. }
\references{ Castillo, E., Hadi, A. S., Balakrishnan, N. and Sarabia, J. S.
(2005) \emph{Extreme Value and Related Models with Applications in Engineering and Science}, Hoboken, NJ, USA: Wiley-Interscience. } \author{ T. W. Yee } \note{ The response must be a two-column matrix. Currently, the fitted value is a matrix with two columns and values equal to 1. This is because each marginal distribution corresponds to a exponential distribution with unit mean. This \pkg{VGAM} family function should be used with caution. } \seealso{ \code{\link{morgenstern}}. } \examples{ nn <- 1000 gdata <- data.frame(y1 = rexp(nn), y2 = rexp(nn)) \dontrun{ with(gdata, plot(cbind(y1, y2))) } fit <- vglm(cbind(y1, y2) ~ 1, fam = gumbelIbiv, gdata, trace = TRUE) coef(fit, matrix = TRUE) Coef(fit) head(fitted(fit)) } \keyword{models} \keyword{regression} VGAM/man/gumbelIIUC.Rd0000644000176000001440000000357112136651105014031 0ustar ripleyusers\name{Gumbel-II} \alias{Gumbel-II} \alias{dgumbelII} \alias{pgumbelII} \alias{qgumbelII} \alias{rgumbelII} \title{The Gumbel-II Distribution} \description{ Density, cumulative distribution function, quantile function and random generation for the Gumbel-II distribution. } \usage{ dgumbelII(x, shape, scale = 1, log = FALSE) pgumbelII(q, shape, scale = 1) qgumbelII(p, shape, scale = 1) rgumbelII(n, shape, scale = 1) } \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. } \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the density is returned. } \item{shape, scale}{positive shape and scale parameters. } } \value{ \code{dgumbelII} gives the density, \code{pgumbelII} gives the cumulative distribution function, \code{qgumbelII} gives the quantile function, and \code{rgumbelII} generates random deviates. } \author{ T. W. Yee } \details{ See \code{\link{gumbelII}} for details. } %\note{ % %} \seealso{ \code{\link{gumbelII}}, \code{\link{dgumbel}}. } \examples{ probs <- seq(0.01, 0.99, by = 0.01) Shape <- exp( 0.5); Scale <- exp(1); max(abs(pgumbelII(qgumbelII(p = probs, Shape, Scale), Shape, Scale) - probs)) # Should be 0 \dontrun{ x <- seq(-0.1, 10, by = 0.01); plot(x, dgumbelII(x, Shape, Scale), type = "l", col = "blue", las = 1, main = "Blue is density, orange is cumulative distribution function", sub = "Purple lines are the 10,20,...,90 percentiles", ylab = "", ylim = 0:1) abline(h = 0, col = "blue", lty = 2) lines(x, pgumbelII(x, Shape, Scale), col = "orange") probs <- seq(0.1, 0.9, by = 0.1) Q <- qgumbelII(probs, Shape, Scale) lines(Q, dgumbelII(Q, Shape, Scale), col = "purple", lty = 3, type = "h") pgumbelII(Q, Shape, Scale) - probs # Should be all zero abline(h = probs, col = "purple", lty = 3) } } \keyword{distribution} VGAM/man/gumbelII.Rd0000644000176000001440000001001512136651105013570 0ustar ripleyusers\name{gumbelII} \alias{gumbelII} %\alias{gumbelIIff} %\alias{gumbelII.lsh} %\alias{gumbelII3} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Gumbel-II Distribution Family Function } \description{ Maximum likelihood estimation of the 2-parameter Gumbel-II distribution. } \usage{ gumbelII(lshape = "loge", lscale = "loge", ishape = NULL, iscale = NULL, probs.y = c(0.2, 0.5, 0.8), perc.out = NULL, imethod = 1, zero = -2) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lshape, lscale}{ Parameter link functions applied to the (positive) shape parameter (called \eqn{a} below) and (positive) scale parameter (called \eqn{b} below). See \code{\link{Links}} for more choices. 
} % \item{eshape, escale}{
% eshape = list(), escale = list(),
% Extra argument for the respective links.
% See \code{earg} in \code{\link{Links}} for general information.
% }
\item{ishape, iscale}{ Optional initial values for the shape and scale parameters. } \item{imethod}{ See \code{\link{weibull}}. } \item{zero, probs.y}{ Details at \code{\link{CommonVGAMffArguments}}. } \item{perc.out}{ If the fitted values are to be quantiles then set this argument to be the percentiles of these, e.g., 50 for median. } }
\details{ The Gumbel-II density for a response \eqn{Y} is \deqn{f(y;a,b) = a b^a y^{-a-1} \exp[-(y/b)^{-a}]}{% f(y;a,b) = a * b^a * y^(-a-1) * exp(-(y/b)^(-a))} for \eqn{a > 0}, \eqn{b > 0}, \eqn{y > 0}. The cumulative distribution function is \deqn{F(y;a,b) = \exp[-(y/b)^{-a}].}{% F(y;a,b) = exp(-(y/b)^(-a)).} The mean of \eqn{Y} is \eqn{b \, \Gamma(1 - 1/a)}{b * gamma(1 - 1/a)} (returned as the fitted values) when \eqn{a>1}, and the variance is \eqn{b^2\,[\Gamma(1-2/a) - \Gamma^2(1-1/a)]}{b^2 * (gamma(1-2/a) - gamma(1-1/a)^2)} when \eqn{a>2}. This distribution looks similar to \code{\link{weibull}}, and is due to Gumbel (1954). This \pkg{VGAM} family function currently does not handle censored data. Fisher scoring is used to estimate the two parameters. Probably similar regularity conditions hold for this distribution compared to the Weibull distribution. }
\value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. }
\references{ Gumbel, E. J. (1954). Statistical theory of extreme values and some practical applications. \emph{Applied Mathematics Series}, volume 33, U.S. Department of Commerce, National Bureau of Standards, USA. }
\author{ T. W. Yee }
\note{ See \code{\link{weibull}}. This \pkg{VGAM} family function handles multiple responses. }
%\section{Warning}{
% This function is under development to handle other censoring situations.
% The version of this function which will handle censored data will be
% called \code{cengumbelII()}. It is currently being written and will use
% \code{\link{SurvS4}} as input.
% It should be released in later versions of \pkg{VGAM}.
%
% If the shape parameter is less than two then misleading inference may
% result, e.g., in the \code{summary} and \code{vcov} of the object.
%
%}
\seealso{ \code{\link{dgumbelII}}, \code{\link{gumbel}}, \code{\link{gev}}. }
\examples{ gdata <- data.frame(x2 = runif(nn <- 1000)) gdata <- transform(gdata, eta1 = -1, eta2 = -1 + 0.1 * x2, ceta1 = 0, ceta2 = 1) gdata <- transform(gdata, shape1 = exp(eta1), shape2 = exp(eta2), scale1 = exp(ceta1), scale2 = exp(ceta2)) gdata <- transform(gdata, y1 = rgumbelII(nn, shape = shape1, scale = scale1), y2 = rgumbelII(nn, shape = shape2, scale = scale2)) fit <- vglm(cbind(y1, y2) ~ x2, gumbelII(zero = c(1, 2, 4)), gdata, trace = TRUE) coef(fit, matrix = TRUE) vcov(fit) summary(fit) } \keyword{models} \keyword{regression} VGAM/man/gumbel.Rd0000644000176000001440000002026412136651105013355 0ustar ripleyusers\name{gumbel} \alias{gumbel} \alias{egumbel} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Gumbel Distribution Family Function } \description{ Maximum likelihood estimation of the 2-parameter Gumbel distribution.
}
\usage{
gumbel(llocation = "identity", lscale = "loge", iscale = NULL,
R = NA, percentiles = c(95, 99), mpv = FALSE, zero = NULL)
egumbel(llocation = "identity", lscale = "loge", iscale = NULL,
R = NA, percentiles = c(95, 99), mpv = FALSE, zero = NULL)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{llocation, lscale}{
Parameter link functions for \eqn{\mu}{mu} and \eqn{\sigma}{sigma}.
See \code{\link{Links}} for more choices.
}
\item{iscale}{
Numeric and positive.
Optional initial value for \eqn{\sigma}{sigma}.
Recycled to the appropriate length.
In general, a larger value is better than a smaller value.
A \code{NULL} means an initial value is computed internally.
}
\item{R}{
Numeric.
Maximum number of values possible.
See \bold{Details} for more details.
}
\item{percentiles}{
Numeric vector of percentiles used for the fitted values.
Values should be between 0 and 100.
This argument uses the argument \code{R} if assigned.
If \code{percentiles = NULL} then the mean will be returned as the
fitted values.
% This argument is ignored if \code{mean = TRUE}.
}
\item{mpv}{
Logical. If \code{mpv = TRUE} then the \emph{median predicted value} (MPV)
is computed and returned as the (last) column of the fitted values.
This argument is ignored if \code{percentiles = NULL}.
See \bold{Details} for more details.
% This argument is ignored if \code{mean = TRUE}.
}
% \item{mean}{
% Logical. If \code{TRUE}, the mean is computed and returned
% as the fitted values. This argument overrides the
% \code{percentiles} and \code{mpv} arguments.
% See \bold{Details} for more details.
% }
\item{zero}{
An integer-valued vector specifying which linear/additive predictors
are modelled as intercepts only.
The value (possibly values) must be from the set \{1, 2\} corresponding
respectively to \eqn{\mu}{mu} and \eqn{\sigma}{sigma}.
By default all linear/additive predictors are modelled as a linear
combination of the explanatory variables.
}
}
\details{
The Gumbel distribution is a generalized extreme value (GEV)
distribution with \emph{shape} parameter \eqn{\xi = 0}{xi = 0}.
Consequently it is more easily estimated than the GEV.
See \code{\link{gev}} for more details.

The quantity \eqn{R} is the maximum number of observations possible,
for example, in the Venice data below, the top 10 daily values are
recorded for each year, therefore \eqn{R = 365} because there are
about 365 days per year.
The MPV is the value of the response such that the probability of
obtaining a value greater than the MPV is 0.5 out of \eqn{R}
observations.
For the Venice data, the MPV is the sea level such that there is an
even chance that the highest level for a particular year exceeds the
MPV.
When \code{mpv = TRUE}, the column labelled \code{"MPV"} contains the
MPVs when \code{fitted()} is applied to the fitted object.

The formula for the mean of a response \eqn{Y} is
\eqn{\mu+\sigma \times Euler}
where \eqn{Euler} is the Euler-Mascheroni constant,
approximately equal to 0.5772.
The formula for the percentiles is (if \code{R} is not given)
\eqn{\mu-\sigma \times \log[-\log(P/100)]}{location -
scale*log[-log(P/100)]}
where \eqn{P} is the \code{percentile} argument value(s).
If \code{R} is given then the percentiles are
\eqn{\mu-\sigma \times \log[R(1-P/100)]}{location -
scale*log[R*(1-P/100)]}.
}
\value{
An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}).
The object is used by modelling functions such as \code{\link{vglm}},
and \code{\link{vgam}}.
}
\references{
Yee, T. W. and Stephenson, A. G.
(2007)
Vector generalized linear and additive extreme value models.
\emph{Extremes}, \bold{10}, 1--19.

Smith, R. L. (1986)
Extreme value theory based on the \emph{r} largest annual events.
\emph{Journal of Hydrology}, \bold{86}, 27--43.

Rosen, O. and Cohen, A. (1996)
Extreme percentile regression.
In: Haerdle, W. and Schimek, M. G. (eds.),
\emph{Statistical Theory and Computational Aspects of Smoothing:
Proceedings of the COMPSTAT '94 Satellite Meeting held in
Semmering, Austria, 27--28 August 1994}, pp.200--214,
Heidelberg: Physica-Verlag.

Coles, S. (2001)
\emph{An Introduction to Statistical Modeling of Extreme Values}.
London: Springer-Verlag.
}
\author{ T. W. Yee }
\section{Warning}{
When \code{R} is not given (the default) the fitted percentiles are
those of the data, and not of the overall population.
For example, in the example below, the 50 percentile is approximately
the running median through the data.
However, the data are the highest sea level measurements recorded each
year, so it equates to the median predicted value or MPV.
}
\note{
\code{egumbel()} only handles a univariate response, and is preferred
to \code{gumbel()} because it is faster.
\code{gumbel()} can handle a multivariate response, i.e., a matrix
with more than one column.
Each row of the matrix is sorted into descending order.
Missing values in the response are allowed but require
\code{na.action = na.pass}.
The response matrix needs to be padded with any missing values.
With a multivariate response one has a matrix \code{y}, say, where
\code{y[, 2]} contains the second order statistics etc.
% If a random variable \eqn{Y} has a \emph{reverse}
% \eqn{Gumbel(\mu,\sigma)}{Gumbel(mu,sigma)} distribution then \eqn{-Y}
% has a \eqn{Gumbel(-\mu,\sigma)}{Gumbel(-mu,sigma)} distribution.
% It appears that some define the reverse Gumbel the same way as others
% define the ordinary Gumbel distribution, e.g., in \pkg{gamlss}.
}
\seealso{
\code{\link{rgumbel}},
\code{\link{dgumbelII}},
\code{\link{cgumbel}},
\code{\link{guplot}},
\code{\link{gev}},
\code{\link{egev}},
% \code{\link{ogev}},
\code{\link{venice}}.
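% A small sketch (an addition; it assumes rgumbel() and egumbel() as
% documented here, and that Coef() returns location then scale on the
% original parameter scale) showing that the fitted percentiles match the
% closed form mu - sigma * log(-log(P/100)) when R is not given:
% set.seed(1)
% gdat <- data.frame(y = rgumbel(n = 1000, loc = 100, scale = exp(1)))
% fit0 <- vglm(y ~ 1, egumbel(percentiles = 95), data = gdat)
% cc <- Coef(fit0)                    # c(location, scale), order assumed
% cc[1] - cc[2] * log(-log(95/100))   # Should match fitted(fit0)[1, "95\%"]
% head(fitted(fit0), 1)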
} \examples{ # Example 1: Simulated data gdata <- data.frame(y = rgumbel(n = 1000, loc = 100, scale = exp(1))) fit <- vglm(y ~ 1, egumbel(perc = NULL), gdata, trace = TRUE) coef(fit, matrix = TRUE) Coef(fit) head(fitted(fit)) with(gdata, mean(y)) # Example 2: Venice data (fit <- vglm(cbind(r1,r2,r3,r4,r5) ~ year, data = venice, gumbel(R = 365, mpv = TRUE), trace = TRUE)) head(fitted(fit)) coef(fit, matrix = TRUE) vcov(summary(fit)) sqrt(diag(vcov(summary(fit)))) # Standard errors # Example 3: Try a nonparametric fit --------------------- # Use the entire data set, including missing values y <- as.matrix(venice[, paste("r", 1:10, sep = "")]) fit1 <- vgam(y ~ s(year, df = 3), gumbel(R = 365, mpv = TRUE), data = venice, trace = TRUE, na.action = na.pass) depvar(fit1)[4:5, ] # NAs used to pad the matrix \dontrun{ # Plot the component functions par(mfrow = c(2, 1), mar = c(5, 4, 0.2, 1) + 0.1, xpd = TRUE) plot(fit1, se = TRUE, lcol = "blue", scol = "green", lty = 1, lwd = 2, slwd = 2, slty = "dashed") # Quantile plot --- plots all the fitted values par(mfrow = c(1, 1), bty = "l", mar = c(4, 4, 0.2, 3) + 0.1, xpd = TRUE, las = 1) qtplot(fit1, mpv = TRUE, lcol = c(1, 2,5), tcol = c(1, 2,5), lwd = 2, pcol = "blue", tadj = 0.1, ylab = "Sea level (cm)") # Plot the 99 percentile only par(mfrow = c(1, 1), mar = c(3, 4, 0.2, 1) + 0.1, xpd = TRUE) year = venice[["year"]] matplot(year, y, ylab = "Sea level (cm)", type = "n") matpoints(year, y, pch = "*", col = "blue") lines(year, fitted(fit1)[,"99\%"], lwd = 2, col = "red") # Check the 99 percentiles with a smoothing spline. # Nb. (1-0.99) * 365 = 3.65 is approx. 4, meaning the 4th order # statistic is approximately the 99 percentile. par(mfrow = c(1, 1), mar = c(3, 4, 2, 1) + 0.1, xpd = TRUE, lwd = 2) plot(year, y[, 4], ylab = "Sea level (cm)", type = "n", main = "Red is 99 percentile, Green is a smoothing spline") points(year, y[, 4], pch = "4", col = "blue") lines(year, fitted(fit1)[,"99\%"], lty = 1, col = "red") lines(smooth.spline(year, y[, 4], df = 4), col = "darkgreen", lty = 2) } } \keyword{models} \keyword{regression} VGAM/man/grc.Rd0000644000176000001440000002503712136651105012660 0ustar ripleyusers\name{grc} \alias{grc} \alias{rcim} %- Also NEED an `\alias' for EACH other topic documented here. \title{ Row-Column Interaction Models including Goodman's RC Association Model } \description{ Fits a Goodman's RC association model to a matrix of counts, and more generally, a sub-class of row-column interaction models. } \usage{ grc(y, Rank = 1, Index.corner = 2:(1 + Rank), szero = 1, summary.arg = FALSE, h.step = 1e-04, ...) rcim(y, family = poissonff, Rank = 0, Musual = NULL, weights = NULL, which.lp = 1, Index.corner = if (!Rank) NULL else 1 + Musual * (1:Rank), rprefix = "Row.", cprefix = "Col.", offset = 0, szero = if (!Rank) NULL else { if (Musual == 1) 1 else setdiff(1:(Musual * ncol(y)), c(1 + (1:ncol(y)) * Musual, Index.corner)) }, summary.arg = FALSE, h.step = 0.0001, rbaseline = 1, cbaseline = 1, ...) } %- maybe also `usage' for other objects documented here. \arguments{ \item{y}{ For \code{grc} a matrix of counts. For \code{rcim} a general matrix response depending on \code{family}. Output from \code{table()} is acceptable; it is converted into a matrix. Note that \code{y} should be at least 3 by 3 in dimension. } \item{family}{ A \pkg{VGAM} family function. By default, the first linear/additive predictor is fitted using main effects plus an optional rank-\code{Rank} interaction term. Not all family functions are suitable or make sense. 
All other linear/additive predictors are fitted using an intercept-only, so it has a common value over all rows and columns. For example, \code{\link{zipoissonff}} may be suitable for counts but not \code{\link{zipoisson}} because of the ordering of the linear/additive predictors. If the \pkg{VGAM} family function does not have an \code{infos} slot then \code{Musual} needs to be inputted (the number of linear predictors for an ordinary (usually univariate) response, aka \eqn{M}). The \pkg{VGAM} family function also needs to be able to handle multiple responses; and not all of them can do this. } \item{Rank}{ An integer from the set \{0,\ldots,\code{min(nrow(y), ncol(y))}\}. This is the dimension of the fit in terms of the interaction. For \code{grc()} this argument must be positive. A value of 0 means no interactions (i.e., main effects only); each row and column is represented by an indicator variable. } \item{weights}{ Prior weights. Fed into \code{\link{rrvglm}} or \code{\link{vglm}}. } \item{which.lp}{ Single integer. Specifies which linear predictor is modelled as the sum of an intercept, row effect, column effect plus an optional interaction term. It should be one value from the set \code{1:Musual}. } \item{Index.corner}{ A vector of \code{Rank} integers. These are used to store the \code{Rank} by \code{Rank} identity matrix in the \code{A} matrix; corner constraints are used. } \item{rprefix, cprefix}{ Character, for rows and columns resp. For labelling the indicator variables. } \item{offset}{ Numeric. Either a matrix of the right dimension, else a single numeric expanded into such a matrix. } \item{szero}{ An integer from the set \{1,\ldots,\code{min(nrow(y), ncol(y))}\}, specifying the row that is used as the structural zero. } \item{summary.arg}{ Logical. If \code{TRUE}, a summary is returned. If \code{TRUE}, \code{y} may be the output (fitted object) of \code{grc()}. } \item{h.step}{ A small positive value that is passed into \code{summary.rrvglm()}. Only used when \code{summary.arg = TRUE}. } \item{\dots}{ Arguments that are passed into \code{rrvglm.control()}. } \item{Musual}{ The number of linear predictors of the \pkg{VGAM} \code{family} function for an ordinary (univariate) response. Then the number of linear predictors of the \code{rcim()} fit is usually the number of columns of \code{y} multiplied by \code{Musual}. The default is to evaluate the \code{infos} slot of the \pkg{VGAM} \code{family} function to try to evaluate it; see \code{\link{vglmff-class}}. If this information is not yet supplied by the family function then the value needs to be inputted manually using this argument. } \item{rbaseline, cbaseline}{ Baseline reference levels for the rows and columns. Currently stored on the object but not used. } } \details{ Goodman's RC association model fits a reduced-rank approximation to a table of counts. The log of each cell mean is decomposed as an intercept plus a row effect plus a column effect plus a reduced-rank component. The latter can be collectively written \code{A \%*\% t(C)}, the product of two `thin' matrices. Indeed, \code{A} and \code{C} have \code{Rank} columns. By default, the first column and row of the interaction matrix \code{A \%*\% t(C)} is chosen to be structural zeros, because \code{szero = 1}. This means the first row of \code{A} are all zeros. This function uses \code{options()$contrasts} to set up the row and column indicator variables. In particular, Equation (4.5) of Yee and Hastie (2003) is used. 
These are called \code{Row.} and \code{Col.} (by default) followed
by the row or column number.

The function \code{rcim()} is more general than \code{grc()}.
Its default is a no-interaction model of \code{grc()}, i.e., rank-0
and a Poisson distribution.
This means that each row and column has a dummy variable associated
with it.
The first row and column are the baseline.
The power of \code{rcim()} is that many \pkg{VGAM} family functions
can be assigned to its \code{family} argument.
For example, \code{\link{normal1}} fits something in between a 2-way
ANOVA with and without interactions, and \code{\link{alaplace2}} with
\code{Rank = 0} is something like \code{\link[stats]{medpolish}}.
Others include \code{\link{zipoissonff}} and \code{\link{negbinomial}}.
Hopefully one day \emph{all} \pkg{VGAM} family functions will work
when assigned to the \code{family} argument, although the result may
not have meaning.
}
\value{
An object of class \code{"grc"}, which currently is the same as an
\code{"rrvglm"} object.
Currently,
a rank-0 \code{rcim()} object is of class \code{\link{rcim0-class}},
else of class \code{"rcim"} (this may change in the future).
% Currently,
% a rank-0 \code{rcim()} object is of class \code{\link{vglm-class}},
% but it may become of class \code{"rcim"} one day.
}
\references{
Yee, T. W. and Hastie, T. J. (2003)
Reduced-rank vector generalized linear models.
\emph{Statistical Modelling}, \bold{3}, 15--41.

Yee, T. W. and Hadi, A. F. (2012)
Row-column interaction models.
\emph{In preparation}.

Goodman, L. A. (1981)
Association models and canonical correlation in the analysis of
cross-classifications having ordered categories.
\emph{Journal of the American Statistical Association}, \bold{76},
320--334.

Documentation accompanying the \pkg{VGAM} package at
\url{http://www.stat.auckland.ac.nz/~yee}
contains further information about the setting up of the
indicator variables.
}
\author{
Thomas W. Yee, with assistance from Alfian F. Hadi.
}
\note{
These functions set up the indicator variables etc. before calling
\code{\link{rrvglm}} or \code{\link{vglm}}.
The \code{...} is passed into \code{\link{rrvglm.control}} or
\code{\link{vglm.control}}.
This means, e.g., that \code{Rank = 1} is the default for \code{grc()}.

The data should be labelled with \code{\link[base]{rownames}} and
\code{\link[base]{colnames}}.
Setting \code{trace = TRUE} is recommended for monitoring convergence.
Using \code{criterion = "coefficients"} can result in slow convergence.

If \code{summary = TRUE}, then \code{y} can be a \code{"grc"} object,
in which case a summary can be returned.
That is, \code{grc(y, summary = TRUE)} is equivalent to
\code{summary(grc(y))}.
}
\section{Warning}{
The function \code{rcim()} is experimental at this stage and may
have bugs.
Quite a lot of expertise is needed when fitting such models and when
interpreting them.
For example, the constraint matrices apply the reduced-rank regression
to the first (see \code{which.lp}) linear predictor, and the other
linear predictors are intercept-only and have a common value throughout
the entire data set.
This means that, by default,
\code{family =} \code{\link{zipoissonff}} is appropriate but not
\code{family =} \code{\link{zipoisson}}.
Otherwise set \code{family =} \code{\link{zipoisson}} and
\code{which.lp = 2}.
To understand what is going on, do examine the constraint matrices of
the fitted object, and reconcile this with Equations (4.3) to (4.5) of
Yee and Hastie (2003).
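% A toy sketch (an addition; pure base R, no \pkg{VGAM} functions needed)
% of the reduced-rank decomposition described in the Details above: a
% Rank = 1 interaction term is the product A \%*\% t(C) of two 'thin'
% matrices, here with hypothetical values and structural zeros in the
% first row and column:
% A <- matrix(c(0, 0.5, -0.2, 0.8), 4, 1)  # First element 0: structural zero row
% C <- matrix(c(0, 1.1, -0.7), 3, 1)       # First element 0: structural zero column
% (interaction <- A \%*\% t(C))             # A 4 x 3 interaction matrix
% qr(interaction)$rank                     # Should be 1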
The functions temporarily create a permanent data frame called \code{.grc.df} or \code{.rcim.df}, which used to be needed by \code{summary.rrvglm()}. Then these data frames are deleted before exiting the function. If an error occurs, then the data frames may be present in the workspace. } \seealso{ \code{\link{rrvglm}}, \code{\link{rrvglm.control}}, \code{\link{rrvglm-class}}, \code{summary.grc}, \code{\link{moffset}}, \code{\link{Rcim}}, \code{\link{Qvar}}, \code{\link{plotrcim0}}, \code{\link{alcoff}}, \code{\link{crashi}}, \code{\link{auuc}}, \code{\link[VGAM:olym08]{olym08}}, \code{\link[VGAM:olym12]{olym12}}, \code{\link{poissonff}}. } \examples{ grc1 <- grc(auuc) # Undergraduate enrolments at Auckland University in 1990 fitted(grc1) summary(grc1) grc2 <- grc(auuc, Rank = 2, Index.corner = c(2, 5)) fitted(grc2) summary(grc2) # 2012 Summer Olympic Games in London \dontrun{ top10 <- head(oly12, n = 10) grc.oly1 <- with(top10, grc(cbind(gold, silver, bronze))) round(fitted(grc.oly1)) round(resid(grc.oly1, type = "response"), dig = 1) # Response residuals summary(grc.oly1) Coef(grc.oly1) } # Roughly median polish rcim0 <- rcim(auuc, fam = alaplace2(tau = 0.5, intparloc = TRUE), trace = TRUE) round(fitted(rcim0), dig = 0) round(100 * (fitted(rcim0) - auuc) / auuc, dig = 0) # Discrepancy rcim0@y round(coef(rcim0, matrix = TRUE), dig = 2) Coef(rcim0, matrix = TRUE) # constraints(rcim0) names(constraints(rcim0)) # Compare with medpolish(): (med.a <- medpolish(auuc)) fv <- med.a$overall + outer(med.a$row, med.a$col, "+") round(100 * (fitted(rcim0) - fv) / fv) # Hopefully should be all 0s } \keyword{models} \keyword{regression} % plot(grc.oly1) % oly2 <- with(top10, grc(cbind(gold,silver,bronze), Rank = 2)) # Saturated model % round(fitted(oly2)) % round(fitted(oly2)) - with(top10, cbind(gold,silver,bronze)) % summary(oly2) # Saturated model % zz 20100927 unsure % Then \code{.grc.df} is deleted before exiting the function. % print(Coef(rcim0, matrix = TRUE), dig = 3) VGAM/man/grain.us.Rd0000644000176000001440000000202412136651105013622 0ustar ripleyusers\name{grain.us} \alias{grain.us} \docType{data} \title{Grain Prices Data in USA } \description{ A 4-column matrix. } \usage{data(grain.us)} \format{ The columns are: \describe{ \item{wheat.flour}{numeric} \item{corn}{numeric} \item{wheat}{numeric} \item{rye}{numeric} } } \details{ Monthly averages of grain prices in the United States for wheat flour, corn, wheat, and rye for the period January 1961 through October 1972. The units are US dollars per 100 pound sack for wheat flour, and per bushel for corn, wheat and rye. } \source{ Ahn and Reinsel (1988). } \references{ Ahn, S. K and Reinsel, G. C. (1988) Nested reduced-rank autoregressive models for multiple time series. \emph{Journal of the American Statistical Association}, \bold{83}, 849--856. } \examples{ \dontrun{ cgrain <- scale(grain.us, scale = FALSE) # Center the time series only fit <- vglm(cgrain ~ 1, rrar(Rank = c(4, 1)), epsilon = 1e-3, stepsize = 0.5, trace = TRUE, maxit = 50) summary(fit) } } \keyword{datasets} VGAM/man/gpdUC.Rd0000644000176000001440000000714112136651105013103 0ustar ripleyusers\name{gpdUC} \alias{gpdUC} \alias{dgpd} \alias{pgpd} \alias{qgpd} \alias{rgpd} \title{The Generalized Pareto Distribution } \description{ Density, distribution function, quantile function and random generation for the generalized Pareto distribution (GPD) with location parameter \code{location}, scale parameter \code{scale} and shape parameter \code{shape}. 
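% A quick consistency sketch (an addition, in the style of the quantile
% checks elsewhere in this package; it uses only the pgpd()/qgpd()
% documented below):
% probs <- seq(0.05, 0.95, by = 0.05)
% max(abs(pgpd(qgpd(probs, 2, 1, -0.4), 2, 1, -0.4) - probs))  # Should be 0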
}
\usage{
dgpd(x, location = 0, scale = 1, shape = 0, log = FALSE,
tolshape0 = sqrt(.Machine$double.eps),
oobounds.log = -Inf, giveWarning = FALSE)
pgpd(q, location = 0, scale = 1, shape = 0)
qgpd(p, location = 0, scale = 1, shape = 0)
rgpd(n, location = 0, scale = 1, shape = 0)
}
\arguments{
\item{x, q}{vector of quantiles.}
\item{p}{vector of probabilities.}
\item{n}{number of observations.
If \code{length(n) > 1} then the length is taken to be the number
required.}
\item{location}{the location parameter \eqn{\mu}{mu}.}
\item{scale}{the (positive) scale parameter \eqn{\sigma}{sigma}.}
\item{shape}{the shape parameter \eqn{\xi}{xi}.}
\item{log}{
Logical.
If \code{log = TRUE} then the logarithm of the density is returned.
}
\item{tolshape0}{
Positive numeric.
Threshold/tolerance value for testing whether \eqn{\xi}{xi} is zero.
If the absolute value of the estimate of \eqn{\xi}{xi} is less than
this value then it will be assumed zero and an exponential distribution
will be used.
}
\item{oobounds.log, giveWarning}{
Numeric and logical.
The GPD distribution has support in the region satisfying
\code{(x-location)/scale > 0} and
\code{1+shape*(x-location)/scale > 0}.
Outside that region, the logarithm of the density is assigned
\code{oobounds.log}, which equates to a zero density.
It should not be assigned a positive number, and ideally is very
negative.
Since \code{\link{gpd}} uses this function it is necessary to return
a finite value outside this region so as to allow for half-stepping.
Both arguments are in support of this.
This argument and others match those of \code{\link{gpd}}.
}
}
\value{
\code{dgpd} gives the density,
\code{pgpd} gives the distribution function,
\code{qgpd} gives the quantile function, and
\code{rgpd} generates random deviates.
}
\references{
Coles, S. (2001)
\emph{An Introduction to Statistical Modeling of Extreme Values}.
London: Springer-Verlag.
}
\author{ T. W. Yee }
\details{
See \code{\link{gpd}}, the \pkg{VGAM} family function for estimating
the two parameters by maximum likelihood estimation, for formulae and
other details.
Apart from \code{n}, all the above arguments may be vectors and are
recycled to the appropriate length if necessary.
}
\note{
The default values of all three parameters, especially
\eqn{\xi = 0}{xi = 0}, mean that the default distribution is the
exponential.

Currently, these functions have different argument names compared
with those in the \pkg{evd} package.
}
\seealso{
\code{\link{gpd}}.
}
\examples{
\dontrun{ loc <- 2; sigma <- 1; xi <- -0.4
x <- seq(loc - 0.2, loc + 3, by = 0.01)
plot(x, dgpd(x, loc, sigma, xi), type = "l", col = "blue", ylim = c(0, 1),
main = "Blue is density, red is cumulative distribution function",
sub = "Purple are 5,10,...,95 percentiles", ylab = "", las = 1)
abline(h = 0, col = "blue", lty = 2)
lines(qgpd(seq(0.05, 0.95, by = 0.05), loc, sigma, xi),
dgpd(qgpd(seq(0.05, 0.95, by = 0.05), loc, sigma, xi),
loc, sigma, xi), col = "purple", lty = 3, type = "h")
lines(x, pgpd(x, loc, sigma, xi), type = "l", col = "red")
abline(h = 0, lty = 2)
pgpd(qgpd(seq(0.05, 0.95, by = 0.05), loc, sigma, xi),
loc, sigma, xi)
}
}
\keyword{distribution}
VGAM/man/gpd.Rd0000644000176000001440000002163512136651105012657 0ustar ripleyusers\name{gpd}
\alias{gpd}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Generalized Pareto Distribution Family Function }
\description{
Maximum likelihood estimation of the 2-parameter generalized Pareto
distribution (GPD).
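% A small sketch (an addition) of the support rules described for dgpd()
% above: outside the region (x-location)/scale > 0 and
% 1 + shape*(x-location)/scale > 0 the density is zero (the log-density is
% oobounds.log, -Inf by default). With location = 2, scale = 1 and
% shape = -0.4 the support is 2 < x < 2 + 1/0.4 = 4.5:
% dgpd(1.9, location = 2, scale = 1, shape = -0.4)  # 0: below the threshold
% dgpd(5.0, location = 2, scale = 1, shape = -0.4)  # 0: above mu - sigma/xi
% dgpd(3.0, location = 2, scale = 1, shape = -0.4)  # Positive: inside support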
} \usage{ gpd(threshold = 0, lscale = "loge", lshape = logoff(offset = 0.5), percentiles = c(90, 95), iscale = NULL, ishape = NULL, tolshape0 = 0.001, giveWarning = TRUE, imethod = 1, zero = -2) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{threshold}{ Numeric, values are recycled if necessary. The threshold value(s), called \eqn{\mu}{mu} below. } \item{lscale}{ Parameter link function for the scale parameter \eqn{\sigma}{sigma}. See \code{\link{Links}} for more choices. } \item{lshape}{ Parameter link function for the shape parameter \eqn{\xi}{xi}. See \code{\link{Links}} for more choices. The default constrains the parameter to be greater than \eqn{-0.5} because if \eqn{\xi \leq -0.5}{xi <= -0.5} then Fisher scoring does not work. See the Details section below for more information. For the shape parameter, the default \code{\link{logoff}} link has an offset called \eqn{A} below; and then the second linear/additive predictor is \eqn{\log(\xi+A)}{log(xi+A)} which means that \eqn{\xi > -A}{xi > -A}. The working weight matrices are positive definite if \eqn{A = 0.5}. } % \item{Offset}{ % Numeric, of length 1. % Called \eqn{A} below. % Offset value if \code{lshape = "logoff"}. % Then the second linear/additive predictor is % \eqn{\log(\xi+A)}{log(xi+A)} which means that % \eqn{\xi > -A}{xi > -A}. % The working weight matrices are positive definite if \code{Offset = 0.5}. % } \item{percentiles}{ Numeric vector of percentiles used for the fitted values. Values should be between 0 and 100. See the example below for illustration. However, if \code{percentiles = NULL} then the mean \eqn{\mu + \sigma / (1-\xi)}{mu + sigma / (1-xi)} is returned; this is only defined if \eqn{\xi<1}{xi<1}. } \item{iscale, ishape}{ Numeric. Optional initial values for \eqn{\sigma}{sigma} and \eqn{\xi}{xi}. The default is to use \code{imethod} and compute a value internally for each parameter. Values of \code{ishape} should be between \eqn{-0.5} and \eqn{1}. Values of \code{iscale} should be positive. } % \item{rshape}{ % Numeric, of length 2. % Range of \eqn{\xi}{xi} if \code{lshape = "elogit"} is chosen. % The default values ensures the algorithm works (\eqn{\xi > -0.5}{xi > -0.5}) % and the variance exists (\eqn{\xi < 0.5}{xi < 0.5}). % } \item{tolshape0, giveWarning}{ Passed into \code{\link{dgpd}} when computing the log-likelihood. } % \item{tolshape0}{ % Positive numeric. % Threshold/tolerance value for resting whether \eqn{\xi}{xi} is zero. % If the absolute value of the estimate of \eqn{\xi}{xi} is less than % this value then it will be assumed zero and exponential distribution % derivatives etc. will be used. % } \item{imethod}{ Method of initialization, either 1 or 2. The first is the method of moments, and the second is a variant of this. If neither work, try assigning values to arguments \code{ishape} and/or \code{iscale}. } \item{zero}{ An integer-valued vector specifying which linear/additive predictors are modelled as intercepts only. For one response, the value should be from the set \{1,2\} corresponding respectively to \eqn{\sigma}{sigma} and \eqn{\xi}{xi}. It is often a good idea for the \eqn{\sigma}{sigma} parameter only to be modelled through a linear combination of the explanatory variables because the shape parameter is probably best left as an intercept only: \code{zero = 2}. Setting \code{zero = NULL} means both parameters are modelled with explanatory variables. See \code{\link{CommonVGAMffArguments}} for more details. 
}
}
\details{
The distribution function of the GPD can be written
\deqn{G(y) = 1 - [1 + \xi (y-\mu) / \sigma ]_{+}^{- 1/ \xi} }{%
G(y) = 1 - [1 + xi (y-mu)/ sigma ]_{+}^{- 1/ xi} }
where \eqn{\mu}{mu} is the location parameter
(known, with value \code{threshold}),
\eqn{\sigma > 0}{sigma > 0} is the scale parameter,
\eqn{\xi}{xi} is the shape parameter, and
\eqn{h_+ = \max(h,0)}{h_+ = max(h,0)}.
The function \eqn{1-G} is known as the \emph{survivor function}.
The limit \eqn{\xi \rightarrow 0}{xi --> 0} gives the \emph{shifted
exponential} as a special case:
\deqn{G(y) = 1 - \exp[-(y-\mu)/ \sigma]. }{%
G(y) = 1 - exp[-(y-mu)/ sigma]. }
The support is \eqn{y>\mu}{y>mu} for \eqn{\xi>0}{xi>0},
and \eqn{\mu < y < \mu-\sigma / \xi}{mu < y < mu-sigma/xi}
for \eqn{\xi<0}{xi<0}.
For \eqn{\xi > -0.5}{xi > -0.5} the classical asymptotic theory of
maximum likelihood estimators is applicable; this is the default.
Although for \eqn{\xi < -0.5}{xi < -0.5} the usual asymptotic
properties do not apply, the maximum likelihood estimator generally
exists and is superefficient for
\eqn{-1 < \xi < -0.5}{-1 < xi < -0.5}, so it is ``better'' than normal.
When \eqn{\xi < -1}{xi < -1} the maximum likelihood estimator generally
does not exist as it effectively becomes a two parameter problem.

The mean of \eqn{Y} does not exist unless \eqn{\xi < 1}{xi < 1},
and the variance does not exist unless \eqn{\xi < 0.5}{xi < 0.5}.
So if you want to fit a model with finite variance use
\code{lshape = "elogit"}.
}
\note{
The response in the formula of \code{\link{vglm}} and \code{\link{vgam}}
is \eqn{y}.
Internally, \eqn{y-\mu}{y-mu} is computed.
This \pkg{VGAM} family function can handle multiple responses,
which are inputted as a matrix.
With functions \code{\link{rgpd}}, \code{\link{dgpd}}, etc., the
argument \code{location} matches with the argument \code{threshold}
here.
}
\section{Warning}{
Fitting the GPD by maximum likelihood estimation can be numerically
fraught.
If \eqn{1 + \xi (y-\mu)/ \sigma \leq 0}{1 + xi*(y-mu)/sigma <= 0}
then some crude evasive action is taken but the estimation process
can still fail.
This is particularly the case if \code{\link{vgam}} with
\code{\link{s}} is used.
Then smoothing is best done with \code{\link{vglm}} with regression
splines (\code{\link[splines]{bs}} or \code{\link[splines]{ns}})
because \code{\link{vglm}} implements half-stepsizing whereas
\code{\link{vgam}} doesn't.
Half-stepsizing helps handle the problem of straying outside the
parameter space.
}
\value{
An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}).
The object is used by modelling functions such as \code{\link{vglm}}
and \code{\link{vgam}}.
However, for this \pkg{VGAM} family function, \code{\link{vglm}} is
probably preferred over \code{\link{vgam}} when there is smoothing.
}
\references{
Yee, T. W. and Stephenson, A. G. (2007)
Vector generalized linear and additive extreme value models.
\emph{Extremes}, \bold{10}, 1--19.

Coles, S. (2001)
\emph{An Introduction to Statistical Modeling of Extreme Values}.
London: Springer-Verlag.

Smith, R. L. (1985)
Maximum likelihood estimation in a class of nonregular cases.
\emph{Biometrika}, \bold{72}, 67--90.
}
\author{ T. W. Yee }
\seealso{
\code{\link{rgpd}},
\code{\link{meplot}},
\code{\link{gev}},
\code{\link{pareto1}},
\code{\link{vglm}},
\code{\link{vgam}},
\code{\link{s}}.
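% A simulation sketch (an addition; it assumes only the rgpd() documented
% above) of the GPD mean formula mu + sigma/(1 - xi) mentioned under
% 'percentiles', which is defined for xi < 1:
% set.seed(1); mu <- 0; sigma <- 2; xi <- 0.3
% mean(rgpd(1e5, mu, sigma, xi))  # Simulated mean
% mu + sigma / (1 - xi)           # Theoretical mean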
} \examples{ # Simulated data from an exponential distribution (xi = 0) threshold <- 0.5 gdata <- data.frame(y1 = threshold + rexp(n = 3000, rate = 2)) fit <- vglm(y1 ~ 1, gpd(threshold = threshold), gdata, trace = TRUE) head(fitted(fit)) coef(fit, matrix = TRUE) # xi should be close to 0 Coef(fit) summary(fit) fit@extra$threshold # Note the threshold is stored here # Check the 90 percentile ii <- depvar(fit) < fitted(fit)[1, "90\%"] 100 * table(ii) / sum(table(ii)) # Should be 90% # Check the 95 percentile ii <- depvar(fit) < fitted(fit)[1, "95\%"] 100 * table(ii) / sum(table(ii)) # Should be 95% \dontrun{ plot(depvar(fit), col = "blue", las = 1, main = "Fitted 90\% and 95\% quantiles") matlines(1:length(depvar(fit)), fitted(fit), lty = 2:3, lwd = 2) } # Another example gdata <- data.frame(x2 = runif(nn <- 2000)) threshold <- 0; xi <- exp(-0.8) - 0.5 gdata <- transform(gdata, y2 = rgpd(nn, scale = exp(1 + 0.1*x2), shape = xi)) fit <- vglm(y2 ~ x2, gpd(threshold), gdata, trace = TRUE) coef(fit, matrix = TRUE) \dontrun{ # Nonparametric fits gdata <- transform(gdata, yy = y2 + rnorm(nn, sd = 0.1)) # Not so recommended: fit1 <- vgam(yy ~ s(x2), gpd(threshold), gdata, trace = TRUE) par(mfrow = c(2,1)) plotvgam(fit1, se = TRUE, scol = "blue") # More recommended: fit2 <- vglm(yy ~ bs(x2), gpd(threshold), gdata, trace = TRUE) plotvgam(fit2, se = TRUE, scol = "blue") } } \keyword{models} \keyword{regression} VGAM/man/gompertzUC.Rd0000644000176000001440000000360112136651105014175 0ustar ripleyusers\name{Gompertz} \alias{Gompertz} \alias{dgompertz} \alias{pgompertz} \alias{qgompertz} \alias{rgompertz} \title{The Gompertz Distribution} \description{ Density, cumulative distribution function, quantile function and random generation for the Gompertz distribution. } \usage{ dgompertz(x, shape, scale = 1, log = FALSE) pgompertz(q, shape, scale = 1) qgompertz(p, shape, scale = 1) rgompertz(n, shape, scale = 1) } \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. } \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the density is returned. } \item{shape, scale}{positive shape and scale parameters. } } \value{ \code{dgompertz} gives the density, \code{pgompertz} gives the cumulative distribution function, \code{qgompertz} gives the quantile function, and \code{rgompertz} generates random deviates. } \author{ T. W. Yee } \details{ See \code{\link{gompertz}} for details. } %\note{ % %} \seealso{ \code{\link{gompertz}}, \code{\link{dgumbel}}, \code{\link{dmakeham}}. } \examples{ probs <- seq(0.01, 0.99, by = 0.01) Shape <- exp(1); Scale <- exp(1); max(abs(pgompertz(qgompertz(p = probs, Shape, Scale), Shape, Scale) - probs)) # Should be 0 \dontrun{ x <- seq(-0.1, 1.0, by = 0.01); plot(x, dgompertz(x, Shape, Scale), type = "l", col = "blue", las = 1, main = "Blue is density, orange is cumulative distribution function", sub = "Purple lines are the 10,20,...,90 percentiles", ylab = "") abline(h = 0, col = "blue", lty = 2) lines(x, pgompertz(x, Shape, Scale), col = "orange") probs <- seq(0.1, 0.9, by = 0.1) Q <- qgompertz(probs, Shape, Scale) lines(Q, dgompertz(Q, Shape, Scale), col = "purple", lty = 3, type = "h") pgompertz(Q, Shape, Scale) - probs # Should be all zero abline(h = probs, col = "purple", lty = 3) } } \keyword{distribution} VGAM/man/gompertz.Rd0000644000176000001440000000705412136651105013753 0ustar ripleyusers\name{gompertz} \alias{gompertz} %- Also NEED an '\alias' for EACH other topic documented here. 
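% A numerical check (an addition; base R's integrate() plus the
% dgompertz()/pgompertz() documented above) that the Gompertz density
% integrates to the distribution function:
% Shape <- exp(1); Scale <- exp(1)
% integrate(dgompertz, lower = 0, upper = 0.5,
%           shape = Shape, scale = Scale)$value
% pgompertz(0.5, Shape, Scale)  # Should agree with the integral above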
\title{ Gompertz Distribution Family Function }
\description{
Maximum likelihood estimation of the 2-parameter Gompertz distribution.
}
\usage{
gompertz(lshape = "loge", lscale = "loge",
ishape = NULL, iscale = NULL,
nsimEIM = 500, zero = NULL)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{lshape, lscale}{
Parameter link functions applied to the shape parameter \code{a}
and the scale parameter \code{scale}.
All parameters are positive.
See \code{\link{Links}} for more choices.
}
% \item{eshape, escale}{
% List. Extra argument for each of the links.
% eshape = list(), escale = list(),
% See \code{earg} in \code{\link{Links}} for general information.
% }
\item{ishape, iscale}{
Optional initial values.
A \code{NULL} means a value is computed internally.
}
\item{nsimEIM, zero}{
See \code{\link{CommonVGAMffArguments}}.
}
}
\details{
The Gompertz distribution has a cumulative distribution function
\deqn{F(x;\alpha, \beta) = 1 - \exp[-(\alpha/\beta) \times (\exp(\beta x) - 1) ]}{%
F(x;alpha, beta) = 1 - exp(-(alpha/beta) * (exp(beta * x) - 1) )}
which leads to a probability density function
\deqn{f(x; \alpha, \beta) = \alpha \exp(\beta x)
\exp [-(\alpha/\beta) \times (\exp(\beta x) - 1) ]}{%
f(x; alpha, beta) = alpha * exp(beta * x) *
exp[-(alpha/beta) * (exp(beta * x) - 1) ]}
for \eqn{\alpha > 0}{a > 0},
\eqn{\beta > 0}{b > 0},
\eqn{x > 0}.
Here, \eqn{\beta} is called the scale parameter \code{scale},
and \eqn{\alpha} is called the shape parameter
(one could refer to \eqn{\alpha}{a} as a location parameter and
\eqn{\beta}{b} as a shape parameter---see Lenart (2012)).
The mean involves an exponential integral function.
Simulated Fisher scoring is used and multiple responses are handled.

The Makeham distribution has an additional parameter compared to the
Gompertz distribution.
If \eqn{Z} is defined to be the result of sampling from a Gumbel
distribution until a negative value is produced,
then \eqn{X = -Z} has a Gompertz distribution.
}
\value{
An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}).
The object is used by modelling functions such as \code{\link{vglm}},
and \code{\link{vgam}}.
}
\references{
Lenart, A. (2012)
The moments of the Gompertz distribution and maximum likelihood
estimation of its parameters.
\emph{Scandinavian Actuarial Journal}, in press.
}
\author{ T. W. Yee }
\section{Warning }{
The same warnings in \code{\link{makeham}} apply here too.
}
\seealso{
\code{\link{dgompertz}},
\code{\link{makeham}}.
}
\examples{
\dontrun{ gdata <- data.frame(x2 = runif(nn <- 1000))
gdata <- transform(gdata, eta1 = -1,
eta2 = -1 + 0.2 * x2,
ceta1 = 1,
ceta2 = -1 + 0.2 * x2)
gdata <- transform(gdata, shape1 = exp(eta1),
shape2 = exp(eta2),
scale1 = exp(ceta1),
scale2 = exp(ceta2))
gdata <- transform(gdata,
y1 = rgompertz(nn, shape = shape1, scale = scale1),
y2 = rgompertz(nn, shape = shape2, scale = scale2))
fit1 <- vglm(y1 ~ 1, gompertz, data = gdata, trace = TRUE)
fit2 <- vglm(y2 ~ x2, gompertz, data = gdata, trace = TRUE)
coef(fit1, matrix = TRUE)
Coef(fit1)
summary(fit1)
coef(fit2, matrix = TRUE)
summary(fit2)
}
}
\keyword{models}
\keyword{regression}
% probs.y = c(0.20, 0.50, 0.80)
VGAM/man/golf.Rd0000644000176000001440000001053512136651105013031 0ustar ripleyusers\name{golf}
\alias{golf}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Gamma-Ordinal Link Function }
\description{
Computes the gamma-ordinal transformation, including its inverse and
the first two derivatives.
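% A simulation sketch (an addition) of the Gumbel connection stated in the
% Gompertz Details above. It assumes rgumbel() with its usual
% location = 0, scale = 1 defaults; then X = -Z, for sampled Z < 0, is
% Gompertz with shape = 1 and scale = 1 in the parameterization used here:
% set.seed(1)
% z <- rgumbel(2e5)
% x <- -z[z < 0]
% probs <- seq(0.1, 0.9, by = 0.1)
% round(quantile(x, probs) -
%       qgompertz(probs, shape = 1, scale = 1), 2)  # Should be near 0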
} \usage{ golf(theta, lambda = 1, cutpoint = NULL, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{theta}{ Numeric or character. See below for further details. } \item{lambda, cutpoint}{ The former is the shape parameter in \code{\link{gamma2}}. \code{cutpoint} is optional; if \code{NULL} then \code{cutpoint} is ignored from the GOLF definition. If given, the cutpoints should be non-negative integers. If \code{golf()} is used as the link function in \code{\link{cumulative}} then, if the cutpoints are known, then one should choose \code{reverse = TRUE, parallel = TRUE, apply.parint = TRUE}. If the cutpoints are unknown, then choose \code{reverse = TRUE, parallel = TRUE, apply.parint = FALSE}. } \item{inverse, deriv, short, tag}{ Details at \code{\link{Links}}. } } \details{ The gamma-ordinal link function (GOLF) can be applied to a parameter lying in the unit interval. Its purpose is to link cumulative probabilities associated with an ordinal response coming from an underlying 2-parameter gamma distribution. See \code{\link{Links}} for general information about \pkg{VGAM} link functions. } \value{ See Yee (2012) for details. } \references{ Yee, T. W. (2012) \emph{Ordinal ordination with normalizing link functions for count data}, (in preparation). } \author{ Thomas W. Yee } \note{ Numerical values of \code{theta} too close to 0 or 1 or out of range result in large positive or negative values, or maybe 0 depending on the arguments. Although measures have been taken to handle cases where \code{theta} is too close to 1 or 0, numerical instabilities may still arise. In terms of the threshold approach with cumulative probabilities for an ordinal response this link function corresponds to the gamma distribution (see \code{\link{gamma2}}) that has been recorded as an ordinal response using known cutpoints. } \section{Warning }{ Prediction may not work on \code{\link{vglm}} or \code{\link{vgam}} etc. objects if this link function is used. } \seealso{ \code{\link{Links}}, \code{\link{gamma2}}, \code{\link{polf}}, \code{\link{nbolf}}, \code{\link{cumulative}}. } \examples{ golf("p", lambda = 1, short = FALSE) golf("p", lambda = 1, tag = TRUE) p <- seq(0.02, 0.98, len = 201) y <- golf(p, lambda = 1) y. 
<- golf(p, lambda = 1, deriv = 1) max(abs(golf(y, lambda = 1, inv = TRUE) - p)) # Should be 0 \dontrun{par(mfrow = c(2, 1), las = 1) plot(p, y, type = "l", col = "blue", main = "golf()") abline(h = 0, v = 0.5, col = "orange", lty = "dashed") plot(p, y., type = "l", col = "blue", main = "(Reciprocal of) first GOLF derivative") } # Another example gdata <- data.frame(x2 = sort(runif(nn <- 1000))) gdata <- transform(gdata, x3 = runif(nn)) gdata <- transform(gdata, mymu = exp( 3 + 1 * x2 - 2 * x3)) lambda <- 4 gdata <- transform(gdata, y1 = rgamma(nn, shape = lambda, scale = mymu / lambda)) cutpoints <- c(-Inf, 10, 20, Inf) gdata <- transform(gdata, cuty = Cut(y1, breaks = cutpoints)) \dontrun{ par(mfrow = c(1, 1), las = 1) with(gdata, plot(x2, x3, col = cuty, pch = as.character(cuty))) } with(gdata, table(cuty) / sum(table(cuty))) fit <- vglm(cuty ~ x2 + x3, cumulative(mv = TRUE, reverse = TRUE, parallel = TRUE, apply.parint = TRUE, link = golf(cutpoint = cutpoints[2:3], lambda = lambda)), data = gdata, trace = TRUE) head(depvar(fit)) head(fitted(fit)) head(predict(fit)) coef(fit) coef(fit, matrix = TRUE) constraints(fit) fit@misc } \keyword{math} \keyword{models} \keyword{regression} % # Another example % nn <- 1000 % x2 <- sort(runif(nn)) % x3 <- runif(nn) % shape <- exp(0.0) % mymu <- exp( 3 + 1 * x2 - 2 * x3) % y1 <- rnbinom(nn, mu=mymu, size=shape) % cuty <- Cut(y1) % fit <- vglm(cuty ~ x2 + x3, fam = cumulative(link = "golf", rev = TRUE, % mv = TRUE, parallel = TRUE, earg = list(lambda=shape))) % coef(fit) % fit <- vglm(cuty ~ x2 + x3, fam = cumulative(link = "probit", rev = TRUE, % mv = TRUE, parallel = TRUE)) % coef(fit, matrix = TRUE) % coef(fit) VGAM/man/gew.Rd0000644000176000001440000000463612136651105012671 0ustar ripleyusers\name{gew} \alias{gew} \docType{data} \title{ General Electric and Westinghouse Data } \description{ General Electric and Westinghouse capital data. } \usage{data(gew)} \format{ A data frame with 20 observations on the following 7 variables. All variables are numeric vectors. Variables ending in \code{.g} correspond to General Electric and those ending in \code{.w} are Westinghouse. \describe{ \item{year}{The observations are the years from 1934 to 1953} \item{invest.g, invest.w}{investment figures. These are \eqn{I=} Gross investment = additions to plant and equipment plus maintenance and repairs in millions of dollars deflated by \eqn{P_1}. } \item{capital.g, capital.w}{capital stocks. These are \eqn{C=} The stock of plant and equipment = accumulated sum of net additions to plant and equipment deflated by \eqn{P_1} minus depreciation allowance deflated by \eqn{P_3}. } \item{value.g, value.w}{market values. These are \eqn{F=} Value of the firm = price of common and preferred shares at December 31 (or average price of December 31 and January 31 of the following year) times number of common and preferred shares outstanding plus total book value of debt at December 31 in millions of dollars deflated by \eqn{P_2}. } } } \details{ These data are a subset of a table in Boot and de Wit (1960), also known as the Grunfeld data. It is used a lot in econometrics, e.g., for seemingly unrelated regressions (see \code{SUR}). Here, \eqn{P_1 =} Implicit price deflator of producers durable equipment (base 1947), \eqn{P_2 =} Implicit price deflator of G.N.P. (base 1947), \eqn{P_3 =} Depreciation expense deflator = ten years moving average of wholesale price index of metals and metal products (base 1947). } \source{ Table 10 of: Boot, J. C. G. and de Wit, G. M. 
(1960)
Investment Demand: An Empirical Contribution to the Aggregation Problem.
\emph{International Economic Review}, \bold{1}, 3--30.

Grunfeld, Y. (1958)
The Determinants of Corporate Investment.
Unpublished PhD Thesis (Chicago).
}
\seealso{
\code{\link[VGAM:SUR]{SUR}},
\url{http://statmath.wu.ac.at/~zeileis/grunfeld}.
}
\references{
Zellner, A. (1962)
An efficient method of estimating seemingly unrelated regressions
and tests for aggregation bias.
\emph{Journal of the American Statistical Association}, \bold{57},
348--368.
}
\examples{
str(gew)
}
\keyword{datasets}
VGAM/man/gevUC.Rd0000644000176000001440000000716012136651105013113 0ustar ripleyusers\name{gevUC}
\alias{gevUC}
\alias{dgev}
\alias{pgev}
\alias{qgev}
\alias{rgev}
\title{The Generalized Extreme Value Distribution }
\description{
Density, distribution function, quantile function and random
generation for the generalized extreme value distribution (GEV) with
location parameter \code{location}, scale parameter \code{scale} and
shape parameter \code{shape}.
}
\usage{
dgev(x, location = 0, scale = 1, shape = 0, log = FALSE,
tolshape0 = sqrt(.Machine$double.eps),
oobounds.log = -Inf, giveWarning = FALSE)
pgev(q, location = 0, scale = 1, shape = 0)
qgev(p, location = 0, scale = 1, shape = 0)
rgev(n, location = 0, scale = 1, shape = 0)
}
\arguments{
\item{x, q}{vector of quantiles.}
\item{p}{vector of probabilities.}
\item{n}{number of observations.
If \code{length(n) > 1} then the length is taken to be the number
required.
}
\item{location}{the location parameter \eqn{\mu}{mu}.}
\item{scale}{the (positive) scale parameter \eqn{\sigma}{sigma}.
Must consist of positive values.
}
\item{shape}{the shape parameter \eqn{\xi}{xi}.}
\item{log}{
Logical.
If \code{log = TRUE} then the logarithm of the density is returned.
}
\item{tolshape0}{
Positive numeric.
Threshold/tolerance value for testing whether \eqn{\xi}{xi} is zero.
If the absolute value of the estimate of \eqn{\xi}{xi} is less than
this value then it will be assumed zero and a Gumbel distribution
will be used.
}
\item{oobounds.log, giveWarning}{
Numeric and logical.
The GEV distribution has support in the region satisfying
\code{1+shape*(x-location)/scale > 0}.
Outside that region, the logarithm of the density is assigned
\code{oobounds.log}, which equates to a zero density.
It should not be assigned a positive number, and ideally is very
negative.
Since \code{\link{egev}} uses this function it is necessary to return
a finite value outside this region so as to allow for half-stepping.
Both arguments are in support of this.
This argument and others match those of \code{\link{egev}}.
}
}
\value{
\code{dgev} gives the density,
\code{pgev} gives the distribution function,
\code{qgev} gives the quantile function, and
\code{rgev} generates random deviates.
}
\references{
Coles, S. (2001)
\emph{An Introduction to Statistical Modeling of Extreme Values}.
London: Springer-Verlag.
}
\author{ T. W. Yee }
\details{
See \code{\link{gev}}, the \pkg{VGAM} family function for estimating
the three parameters by maximum likelihood estimation, for formulae
and other details.
Apart from \code{n}, all the above arguments may be vectors and are
recycled to the appropriate length if necessary.
}
\note{
The default value of \eqn{\xi = 0}{xi = 0} means the default
distribution is the Gumbel.

Currently, these functions have different argument names compared
with those in the \pkg{evd} package.
}
\seealso{
\code{\link{gev}},
\code{\link{egev}},
\code{\link{vglm.control}}.
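% A quick check (an addition) of the note above that the default
% shape = 0 gives the Gumbel distribution, whose closed-form cdf is
% exp(-exp(-(x - mu)/sigma)):
% x <- seq(-2, 5, by = 0.5); mu <- 1; sigma <- 2
% max(abs(pgev(x, mu, sigma, 0) -
%         exp(-exp(-(x - mu)/sigma))))  # Should be approximately 0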
} \examples{ \dontrun{ loc <- 2; sigma <- 1; xi <- -0.4 x <- seq(loc - 3, loc + 3, by = 0.01) plot(x, dgev(x, loc, sigma, xi), type = "l", col = "blue", ylim = c(0,1), main = "Blue is density, red is cumulative distribution function", sub = "Purple are 5,10,...,95 percentiles", ylab = "", las = 1) abline(h = 0, col = "blue", lty = 2) lines(qgev(seq(0.05, 0.95, by = 0.05), loc, sigma, xi), dgev(qgev(seq(0.05, 0.95, by = 0.05), loc, sigma, xi), loc, sigma, xi), col = "purple", lty = 3, type = "h") lines(x, pgev(x, loc, sigma, xi), type = "l", col = "red") abline(h = 0, lty = 2) pgev(qgev(seq(0.05, 0.95, by = 0.05), loc, sigma, xi), loc, sigma, xi) } } \keyword{distribution} VGAM/man/gev.Rd0000644000176000001440000002303512136651105012662 0ustar ripleyusers\name{gev} \alias{gev} \alias{egev} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Generalized Extreme Value Distribution Family Function } \description{ Maximum likelihood estimation of the 3-parameter generalized extreme value (GEV) distribution. } \usage{ gev(llocation = "identity", lscale = "loge", lshape = logoff(offset = 0.5), percentiles = c(95, 99), iscale=NULL, ishape = NULL, imethod = 1, gshape = c(-0.45, 0.45), tolshape0 = 0.001, giveWarning = TRUE, zero = 3) egev(llocation = "identity", lscale = "loge", lshape = logoff(offset = 0.5), percentiles = c(95, 99), iscale=NULL, ishape = NULL, imethod = 1, gshape = c(-0.45, 0.45), tolshape0 = 0.001, giveWarning = TRUE, zero = 3) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{llocation, lscale, lshape}{ Parameter link functions for \eqn{\mu}{mu}, \eqn{\sigma}{sigma} and \eqn{\xi}{xi} respectively. See \code{\link{Links}} for more choices. For the shape parameter, the default \code{\link{logoff}} link has an offset called \eqn{A} below; and then the linear/additive predictor is \eqn{\log(\xi+A)}{log(xi+A)} which means that \eqn{\xi > -A}{xi > -A}. For technical reasons (see \bold{Details}) it is a good idea for \eqn{A = 0.5}. } % \item{Offset}{ % Numeric, of length 1. % Called \eqn{A} below. % Offset value if \code{lshape = "logoff"}. % Then the linear/additive predictor is % \eqn{\log(\xi+A)}{log(xi+A)} which means that % \eqn{\xi > -A}{xi > -A}. % For technical reasons (see \bold{Details}) it is a good idea for % \code{Offset = 0.5}. % } \item{percentiles}{ Numeric vector of percentiles used for the fitted values. Values should be between 0 and 100. However, if \code{percentiles = NULL}, then the mean \eqn{\mu + \sigma (\Gamma(1-\xi)-1) / \xi}{mu + sigma * (gamma(1-xi)-1)/xi} is returned, and this is only defined if \eqn{\xi<1}{xi<1}. } \item{iscale, ishape}{ Numeric. Initial value for \eqn{\sigma}{sigma} and \eqn{\xi}{xi}. A \code{NULL} means a value is computed internally. The argument \code{ishape} is more important than the other two because they are initialized from the initial \eqn{\xi}{xi}. If a failure to converge occurs, or even to obtain initial values occurs, try assigning \code{ishape} some value (positive or negative; the sign can be very important). Also, in general, a larger value of \code{iscale} is better than a smaller value. } % \item{rshape}{ % Numeric, of length 2. % Range of \eqn{\xi}{xi} if \code{lshape = "elogit"} is chosen. % The rationale for the default values is given below. % } % \item{mean}{ % Logical. If \code{TRUE}, the mean is computed and returned % as the fitted values. This argument overrides the % \code{percentiles} argument. % See \bold{Details} for more details. 
% }
\item{imethod}{
Initialization method. Either the value 1 or 2.
Method 1 involves choosing the best \eqn{\xi}{xi} on a coarse grid
with endpoints \code{gshape}.
Method 2 is similar to the method of moments.
If both methods fail try using \code{ishape}.
}
\item{gshape}{
Numeric, of length 2.
Range of \eqn{\xi}{xi} used for a grid search for a good initial value
for \eqn{\xi}{xi}.
Used only if \code{imethod} equals 1.
}
\item{tolshape0, giveWarning}{
Passed into \code{\link{dgev}} when computing the log-likelihood.
}
\item{zero}{
An integer-valued vector specifying which linear/additive predictors
are modelled as intercepts only.
The values must be from the set \{1,2,3\} corresponding respectively
to \eqn{\mu}{mu}, \eqn{\sigma}{sigma}, \eqn{\xi}{xi}.
If \code{zero = NULL} then all linear/additive predictors are modelled
as a linear combination of the explanatory variables.
For many data sets having \code{zero = 3} is a good idea.
}
}
\details{
The GEV distribution function can be written
\deqn{G(y) = \exp( -[ 1 + \xi (y-\mu)/ \sigma ]_{+}^{- 1/ \xi}) }{%
G(y) = exp( -[ 1 + xi (y- mu)/ sigma ]_{+}^{- 1/ xi}) }
where \eqn{\sigma > 0}{sigma > 0},
\eqn{-\infty < \mu < \infty}{-Inf < mu < Inf},
and \eqn{1 + \xi(y-\mu)/\sigma > 0}{1 + xi*(y-mu)/sigma > 0}.
Here, \eqn{x_+ = \max(x,0)}{x_+ = max(x,0)}.
The \eqn{\mu}{mu}, \eqn{\sigma}{sigma}, \eqn{\xi}{xi} are known as the
\emph{location}, \emph{scale} and \emph{shape} parameters respectively.
The cases
\eqn{\xi>0}{xi>0},
\eqn{\xi<0}{xi<0},
\eqn{\xi = 0}{xi = 0}
correspond to the Frechet, Weibull, and Gumbel types respectively.
It can be noted that the Gumbel (or Type I) distribution accommodates
many commonly-used distributions such as the normal, lognormal,
logistic, gamma, exponential and Weibull.

For the GEV distribution, the \eqn{k}th moment about the mean exists
if \eqn{\xi < 1/k}{xi < 1/k}.
Provided they exist, the mean and variance are given by
\eqn{\mu+\sigma\{ \Gamma(1-\xi)-1\}/ \xi}{mu + sigma \{ Gamma(1-xi)-1\} / xi}
and
\eqn{\sigma^2 \{ \Gamma(1-2\xi) - \Gamma^2(1-\xi) \} / \xi^2}{sigma^2 \{ Gamma(1-2 xi) - Gamma^2 (1- xi) \} / xi^2}
respectively,
where \eqn{\Gamma}{Gamma} is the gamma function.

Smith (1985) established that when \eqn{\xi > -0.5}{xi > -0.5},
the maximum likelihood estimators are completely regular.
To have some control over the estimated \eqn{\xi}{xi} try using
\code{lshape = logoff(offset = 0.5)}, say, or
\code{lshape = elogit(min = -0.5, max = 0.5)}, say.
% and when \eqn{-1 < \xi < -0.5}{-1 < xi < -0.5} they exist but are
% non-regular; and when \eqn{\xi < -1}{xi < -1} then the maximum
% likelihood estimators do not exist. In most environmental data
% sets \eqn{\xi > -1}{xi > -1} so maximum likelihood works fine.
}
\section{Warning }{
Currently, if an estimate of \eqn{\xi}{xi} is too close to zero then
an error will occur for \code{gev()} with multivariate responses.
In general, \code{egev()} is more reliable than \code{gev()}.

Fitting the GEV by maximum likelihood estimation can be numerically
fraught.
If \eqn{1 + \xi (y-\mu)/ \sigma \leq 0}{1 + xi*(y-mu)/sigma <= 0}
then some crude evasive action is taken but the estimation process
can still fail.
This is particularly the case if \code{\link{vgam}} with
\code{\link{s}} is used; then smoothing is best done with
\code{\link{vglm}} with regression splines (\code{\link[splines]{bs}}
or \code{\link[splines]{ns}}) because \code{\link{vglm}} implements
half-stepsizing whereas \code{\link{vgam}} doesn't
(half-stepsizing helps handle the problem of straying outside the
parameter space).
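% A simulation sketch (an addition; it uses the rgev() documented in this
% package) of the GEV mean formula mu + sigma*(gamma(1 - xi) - 1)/xi
% stated above, which is defined for xi < 1:
% set.seed(1); mu <- 2; sigma <- 1; xi <- 0.2
% mean(rgev(1e5, mu, sigma, xi))        # Simulated mean
% mu + sigma * (gamma(1 - xi) - 1)/xi   # Theoretical mean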
} \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Yee, T. W. and Stephenson, A. G. (2007) Vector generalized linear and additive extreme value models. \emph{Extremes}, \bold{10}, 1--19. Tawn, J. A. (1988) An extreme-value theory model for dependent observations. \emph{Journal of Hydrology}, \bold{101}, 227--250. Prescott, P. and Walden, A. T. (1980) Maximum likelihood estimation of the parameters of the generalized extreme-value distribution. \emph{Biometrika}, \bold{67}, 723--724. Smith, R. L. (1985) Maximum likelihood estimation in a class of nonregular cases. \emph{Biometrika}, \bold{72}, 67--90. } \author{ T. W. Yee } \note{ The \pkg{VGAM} family function \code{gev} can handle a multivariate (matrix) response. If so, each row of the matrix is sorted into descending order and \code{NA}s are put last. With a vector or one-column matrix response using \code{egev} will give the same result but be faster and it handles the \eqn{\xi = 0}{xi = 0} case. The function \code{gev} implements Tawn (1988) while \code{egev} implements Prescott and Walden (1980). The shape parameter \eqn{\xi}{xi} is difficult to estimate accurately unless there is a lot of data. Convergence is slow when \eqn{\xi}{xi} is near \eqn{-0.5}. Given many explanatory variables, it is often a good idea to make sure \code{zero = 3}. The range restrictions of the parameter \eqn{\xi}{xi} are not enforced; thus it is possible for a violation to occur. Successful convergence often depends on having a reasonably good initial value for \eqn{\xi}{xi}. If failure occurs try various values for the argument \code{ishape}, and if there are covariates, having \code{zero = 3} is advised. } \seealso{ %\code{\link{egev}}, %\code{\link{ogev}}, \code{\link{rgev}}, \code{\link{gumbel}}, \code{\link{egumbel}}, \code{\link{guplot}}, \code{\link{rlplot.egev}}, \code{\link{gpd}}, \code{\link{weibull}}, \code{\link{frechet2}}, \code{\link{elogit}}, \code{\link{oxtemp}}, \code{\link{venice}}. } \examples{ \dontrun{ # Multivariate example fit1 <- vgam(cbind(r1, r2) ~ s(year, df = 3), gev(zero = 2:3), data = venice, trace = TRUE) coef(fit1, matrix = TRUE) head(fitted(fit1)) par(mfrow = c(1, 2), las = 1) plot(fit1, se = TRUE, lcol = "blue", scol = "forestgreen", main = "Fitted mu(year) function (centered)", cex.main = 0.8) with(venice, matplot(year, depvar(fit1)[, 1:2], ylab = "Sea level (cm)", col = 1:2, main = "Highest 2 annual sea levels", cex.main = 0.8)) with(venice, lines(year, fitted(fit1)[,1], lty = "dashed", col = "blue")) legend("topleft", lty = "dashed", col = "blue", "Fitted 95 percentile") # Univariate example (fit <- vglm(maxtemp ~ 1, egev, oxtemp, trace = TRUE)) head(fitted(fit)) coef(fit, matrix = TRUE) Coef(fit) vcov(fit) vcov(fit, untransform = TRUE) sqrt(diag(vcov(fit))) # Approximate standard errors rlplot(fit) } } \keyword{models} \keyword{regression} VGAM/man/get.smart.prediction.Rd0000644000176000001440000000173712136651105016151 0ustar ripleyusers\name{get.smart.prediction} \alias{get.smart.prediction} \title{ Retrieves ``.smart.prediction'' } \description{ Retrieves \code{.smart.prediction} from \code{smartpredenv} (\R) or frame 1 (S-PLUS). } \usage{ get.smart.prediction() } \value{ Returns with the list \code{.smart.prediction} from \code{smartpredenv} (\R) or frame 1 (S-PLUS). 
}
\details{
A smart modelling function such as \code{\link[stats]{lm}} allows smart
functions such as \code{\link[splines]{bs}} to write to a data structure
called \code{.smart.prediction} in \code{smartpredenv} (\R) or
frame 1 (S-PLUS).
At the end of fitting, \code{get.smart.prediction} retrieves this data
structure. It is then attached to the object, and used for prediction later.
}
\seealso{
\code{\link{get.smart}},
\code{\link[stats]{lm}}.
}
\examples{
\dontrun{# Put at the end of lm
fit$smart <- get.smart.prediction()
}
}
%\keyword{smart}
\keyword{models}
\keyword{regression}
\keyword{programming}
% Converted by Sd2Rd version 1.10.
VGAM/man/get.smart.Rd0000644000176000001440000000347012136651105014006 0ustar ripleyusers\name{get.smart}
\alias{get.smart}
\title{ Retrieve One Component of ``.smart.prediction'' }
\description{
Retrieve one component of the list \code{.smart.prediction} from
\code{smartpredenv} (\R) or frame 1 (S-PLUS).
}
\usage{
get.smart()
}
\value{
Returns one component of the list \code{.smart.prediction} from
\code{smartpredenv} (\R) or frame 1 (S-PLUS);
in fact, \code{.smart.prediction[[.smart.prediction.counter]]}.
The whole procedure mimics a first-in first-out structure, better known
as a \emph{queue}.
}
\section{Side Effects}{
The variable \code{.smart.prediction.counter} in \code{smartpredenv} (\R)
or frame 1 (S-PLUS) is incremented beforehand, and then written back to
\code{smartpredenv} (\R) or frame 1 (S-PLUS).
}
\details{
\code{get.smart} is used in \code{"read"} mode within a smart function:
it retrieves parameters saved at the time of fitting, and is used for
prediction.
\code{get.smart} is only used in smart functions such as
\code{\link[stats]{poly}};
\code{get.smart.prediction} is only used in modelling functions such as
\code{\link[stats]{lm}} and \code{\link[stats]{glm}}.
The function \code{\link{get.smart}} gets only a part of
\code{.smart.prediction} whereas \code{\link{get.smart.prediction}}
gets the entire \code{.smart.prediction}.
}
\seealso{
\code{\link{get.smart.prediction}}.
}
\examples{
"my1" <- function(x, minx = min(x)) {  # Here is a smart function
  x <- x  # Needed for nested calls, e.g., bs(scale(x))
  if(smart.mode.is("read")) {
    smart <- get.smart()
    minx <- smart$minx  # Overwrite its value
  } else if(smart.mode.is("write"))
    put.smart(list(minx = minx))
  sqrt(x-minx)
}
attr(my1, "smart") <- TRUE
}
%\keyword{smart}
\keyword{models}
\keyword{regression}
\keyword{programming}
% Converted by Sd2Rd version 1.10.
VGAM/man/geometric.Rd0000644000176000001440000001017612136651105014061 0ustar ripleyusers\name{geometric}
\alias{geometric}
\alias{truncgeometric}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Geometric (Truncated and Untruncated) Distributions }
\description{
Maximum likelihood estimation for the geometric and truncated geometric
distributions.
}
\usage{
geometric(link = "logit", expected = TRUE, imethod = 1, iprob = NULL, zero = NULL)
truncgeometric(upper.limit = Inf, link = "logit", expected = TRUE,
               imethod = 1, iprob = NULL, zero = NULL)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{link}{
Parameter link function applied to the probability parameter
\eqn{p}{prob}, which lies in the unit interval.
See \code{\link{Links}} for more choices.
}
\item{expected}{
Logical.
Fisher scoring is used if \code{expected = TRUE}, else Newton-Raphson.
}
\item{iprob, imethod, zero}{
See \code{\link{CommonVGAMffArguments}} for more details.
}
\item{upper.limit}{
Numeric.
Upper values.
As a vector, it is recycled across responses first.
The default value means both family functions should give the same result.
}
}
\details{
A random variable \eqn{Y} has a 1-parameter geometric distribution if
\eqn{P(Y=y) = p (1-p)^y}{P(Y=y) = prob * (1-prob)^y}
for \eqn{y=0,1,2,\ldots}{y=0,1,2,...}.
Here, \eqn{p}{prob} is the probability of success, and \eqn{Y} is the
number of (independent) failures before the first success occurs.
Thus the response \eqn{Y} should be a non-negative integer.
The mean of \eqn{Y} is \eqn{E(Y) = (1-p)/p}{E(Y) = (1-prob)/prob}
and its variance is
\eqn{Var(Y) = (1-p)/p^2}{Var(Y) = (1-prob)/prob^2}.
The geometric distribution is a special case of the negative binomial
distribution (see \code{\link{negbinomial}}).
If \eqn{Y} has a geometric distribution with parameter \eqn{p}{prob} then
\eqn{Y+1} has a positive-geometric distribution with the same parameter.
Multiple responses are permitted.

For \code{truncgeometric()}, the (upper) truncated geometric distribution
can have response integer values from 0 to \code{upper.limit}.
It has density \code{prob * (1 - prob)^y / [1-(1-prob)^(1+upper.limit)]}.

For a generalized truncated geometric distribution with integer values
\eqn{L} to \eqn{U}, say, subtract \eqn{L} from the response and feed in
\eqn{U-L} as the upper limit.
}
\value{
An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}).
The object is used by modelling functions such as \code{\link{vglm}},
and \code{\link{vgam}}.
}
\references{
Evans, M., Hastings, N. and Peacock, B. (2000)
\emph{Statistical Distributions},
New York: Wiley-Interscience, Third edition.
}
\author{ T. W. Yee. Help from Viet Hoang Quoc is gratefully acknowledged. }
%\note{
%
%}
\seealso{
\code{\link{negbinomial}},
\code{\link[stats]{Geometric}},
\code{\link{betageometric}},
\code{\link{expgeometric}},
\code{\link{zageometric}},
\code{\link{zigeometric}},
\code{\link{rbetageom}}.
}
\examples{
gdata <- data.frame(x2 = runif(nn <- 1000) - 0.5)
gdata <- transform(gdata, x3 = runif(nn) - 0.5, x4 = runif(nn) - 0.5)
gdata <- transform(gdata, eta = -1.0 - 1.0 * x2 + 2.0 * x3)
gdata <- transform(gdata, prob = logit(eta, inverse = TRUE))
gdata <- transform(gdata, y1 = rgeom(nn, prob))
with(gdata, table(y1))
fit1 <- vglm(y1 ~ x2 + x3 + x4, geometric, gdata, trace = TRUE)
coef(fit1, matrix = TRUE)
summary(fit1)

# Truncated geometric (between 0 and upper.limit)
upper.limit <- 5
tdata <- subset(gdata, y1 <= upper.limit)
nrow(tdata)  # Less than nn
fit2 <- vglm(y1 ~ x2 + x3 + x4, truncgeometric(upper.limit),
             data = tdata, trace = TRUE)
coef(fit2, matrix = TRUE)

# Generalized truncated geometric (between lower.limit and upper.limit)
lower.limit <- 1
upper.limit <- 8
gtdata <- subset(gdata, lower.limit <= y1 & y1 <= upper.limit)
with(gtdata, table(y1))
nrow(gtdata)  # Less than nn
fit3 <- vglm(y1 - lower.limit ~ x2 + x3 + x4,
             truncgeometric(upper.limit - lower.limit),
             data = gtdata, trace = TRUE)
coef(fit3, matrix = TRUE)
}
\keyword{models}
\keyword{regression}
VGAM/man/genrayleighUC.Rd0000644000176000001440000000417112136651105014627 0ustar ripleyusers\name{genray}
\alias{genray}
\alias{dgenray}
\alias{pgenray}
\alias{qgenray}
\alias{rgenray}
\title{The Generalized Rayleigh Distribution}
\description{
Density, distribution function, quantile function and random generation
for the generalized Rayleigh distribution.
}
\usage{
dgenray(x, shape, scale = 1, log = FALSE)
pgenray(q, shape, scale = 1)
qgenray(p, shape, scale = 1)
rgenray(n, shape, scale = 1)
}
\arguments{
\item{x, q}{vector of quantiles.}
\item{p}{vector of probabilities.}
\item{n}{number of observations.
If \code{length(n) > 1} then the length is taken to be the number required.
}
\item{scale, shape}{
positive scale and shape parameters.
}
\item{log}{
Logical.
If \code{log = TRUE} then the logarithm of the density is returned.
}
}
\value{
\code{dgenray} gives the density,
\code{pgenray} gives the distribution function,
\code{qgenray} gives the quantile function, and
\code{rgenray} generates random deviates.
}
\author{ J. G. Lauder and T. W. Yee }
\details{
See \code{\link{genrayleigh}}, the \pkg{VGAM} family function for
estimating the parameters, for the formula of the probability density
function and other details.
}
\note{
We define \code{scale} as the reciprocal of the scale parameter used by
Kundu and Raqab (2005).
}
\seealso{
\code{\link{genrayleigh}},
\code{\link{rayleigh}}.
}
\examples{
\dontrun{
shape = 0.5; scale = 1; nn = 501
x = seq(-0.10, 3.0, len = nn)
plot(x, dgenray(x, shape, scale), type = "l", las = 1, ylim = c(0, 1.2),
     ylab = paste("[dp]genray(shape = ", shape, ", scale = ", scale, ")"),
     col = "blue", cex.main = 0.8,
     main = "Blue is density, orange is cumulative distribution function",
     sub = "Purple lines are the 10,20,...,90 percentiles")
lines(x, pgenray(x, shape, scale), col = "orange")
probs = seq(0.1, 0.9, by = 0.1)
Q = qgenray(probs, shape, scale)
lines(Q, dgenray(Q, shape, scale), col = "purple", lty = 3, type = "h")
lines(Q, pgenray(Q, shape, scale), col = "purple", lty = 3, type = "h")
abline(h = probs, col = "purple", lty = 3)
max(abs(pgenray(Q, shape, scale) - probs))  # Should be 0
}
}
\keyword{distribution}
VGAM/man/genrayleigh.Rd0000644000176000001440000000462412136651105014402 0ustar ripleyusers\name{genrayleigh}
\alias{genrayleigh}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{Generalized Rayleigh Distribution Family Function}
\description{
Estimates the two parameters of the generalized Rayleigh distribution by
maximum likelihood estimation.
}
\usage{
genrayleigh(lshape = "loge", lscale = "loge", ishape = NULL, iscale = NULL,
            tol12 = 1e-05, nsimEIM = 300, zero = 1)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{lshape, lscale}{
Link functions for the two positive parameters, shape and scale.
See \code{\link{Links}} for more choices.
}
\item{ishape, iscale}{
Numeric.
Optional initial values for the shape and scale parameters.
}
\item{nsimEIM, zero}{
See \code{\link{CommonVGAMffArguments}}.
}
\item{tol12}{
Numeric and positive.
Tolerance for testing whether the second shape parameter is either 1 or 2.
If so then the working weights need to handle these singularities.
}
}
\details{
The generalized Rayleigh distribution has density function
\deqn{f(y;a = shape,b = scale) =
(2 a y/b^{2}) e^{-(y/b)^{2}} (1 - e^{-(y/b)^{2}})^{a-1}}{%
(2*a*y/b^2) * e^(-(y/b)^2) * (1 - e^(-(y/b)^2))^(a-1)}
where \eqn{y > 0} and the two parameters, \eqn{a} and \eqn{b}, are positive.
The mean cannot be expressed nicely so the median is returned as the
fitted values.
Applications of the generalized Rayleigh distribution include modeling
strength data and general lifetime data.
Simulated Fisher scoring is implemented.
}
\value{
An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}).
The object is used by modelling functions such as \code{\link{vglm}}
and \code{\link{vgam}}.
}
\references{
Kundu, D. and Raqab, M. C. (2005).
Generalized Rayleigh distribution: different methods of estimations.
\emph{Computational Statistics and Data Analysis}, \bold{49}, 187--200.
}
\author{ J. G. Lauder and T. W. Yee }
\note{
We define \code{scale} as the reciprocal of the scale parameter used by
Kundu and Raqab (2005).
}
\seealso{
\code{\link{dgenray}},
\code{\link{rayleigh}}.
}
\examples{
shape = exp(1); scale = exp(1);
rdata = data.frame(y = rgenray(n = 1000, shape, scale))
fit = vglm(y ~ 1, genrayleigh, rdata, trace = TRUE)
c(with(rdata, mean(y)), head(fitted(fit), 1))
coef(fit, matrix = TRUE)
Coef(fit)
summary(fit)
}
\keyword{models}
\keyword{regression}
VGAM/man/genpoisson.Rd0000644000176000001440000000723012136651105014264 0ustar ripleyusers\name{genpoisson}
\alias{genpoisson}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Generalized Poisson distribution }
\description{
Estimation of the two parameters of a generalized Poisson distribution.
}
\usage{
genpoisson(llambda = elogit(min = -1, max = 1), ltheta = "loge",
           ilambda = NULL, itheta = NULL,
           use.approx = TRUE, imethod = 1, zero = 1)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{llambda, ltheta}{
Parameter link functions for \eqn{\lambda} and \eqn{\theta}.
See \code{\link{Links}} for more choices.
The \eqn{\lambda} parameter lies within the interval \eqn{[-1,1]},
although its exact support depends on \eqn{\theta};
see below for more details.
The \eqn{\theta} parameter is positive, therefore the default is the
log link.
}
\item{ilambda, itheta}{
Optional initial values for \eqn{\lambda} and \eqn{\theta}.
The default is to choose values internally.
}
\item{use.approx}{
Logical. If \code{TRUE} then an approximation to the expected information
matrix is used, otherwise Newton-Raphson is used.
}
\item{imethod}{
An integer with value \code{1} or \code{2} which specifies the
initialization method for the parameters.
If failure to converge occurs try another value and/or else specify a
value for \code{ilambda} and/or \code{itheta}.
}
\item{zero}{
An integer vector, containing the value 1 or 2.
If so, \eqn{\lambda} or \eqn{\theta} respectively are modelled as an
intercept only.
If set to \code{NULL} then both linear/additive predictors are modelled
as functions of the explanatory variables.
}
}
\details{
The generalized Poisson distribution has density
\deqn{f(y) = \theta(\theta+\lambda y)^{y-1} \exp(-\theta-\lambda y) / y!}{
f(y) = \theta(\theta+\lambda * y)^(y-1) * exp(-\theta-\lambda * y) / y!}
for \eqn{\theta > 0} and \eqn{y = 0,1,2,\ldots}.
Now \eqn{\max(-1,-\theta/m) \leq \lambda \leq 1}{
max(-1,-\theta/m) \le lambda \le 1}
where \eqn{m (\geq 4)}{m (\ge 4)} is the greatest positive integer
satisfying \eqn{\theta + m\lambda > 0}
when \eqn{\lambda < 0}
[and then \eqn{P(Y=y) = 0} for \eqn{y > m}].
Note the complicated support for this distribution means, for some data
sets, the default link for \code{llambda} is not always appropriate.

An ordinary Poisson distribution corresponds to \eqn{\lambda = 0}{lambda = 0}.
The mean (returned as the fitted values) is
\eqn{E(Y) = \theta / (1 - \lambda)}
and the variance is \eqn{\theta / (1 - \lambda)^3}.

For more information see Consul and Famoye (2006) for a summary and
Consul (1989) for full details.
}
\value{
An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}).
The object is used by modelling functions such as \code{\link{vglm}},
and \code{\link{vgam}}.
}
\references{
Consul, P. C. and Famoye, F. (2006)
\emph{Lagrangian Probability Distributions},
Boston: Birkhauser.

Jorgensen, B. (1997)
\emph{The Theory of Dispersion Models}.
London: Chapman & Hall.

Consul, P. C. (1989)
\emph{Generalized Poisson Distributions: Properties and Applications}.
New York: Marcel Dekker.
}
\author{ T. W. Yee }
\note{
This distribution is useful for dispersion modelling.
Convergence problems may occur when \code{lambda} is very close to 0 or 1.
}
\seealso{
\code{\link{poissonff}}.
}
\examples{
gdata <- data.frame(x2 = runif(nn <- 200))
gdata <- transform(gdata, y = rpois(nn, exp(2 - x2)))  # Ordinary Poisson data
fit <- vglm(y ~ x2, genpoisson(zero = 1), gdata, trace = TRUE)
coef(fit, matrix = TRUE)
summary(fit)
}
\keyword{models}
\keyword{regression}

% yettodo: see csda 2009, 53(9): 3478--3489.
%{% f(y) = theta*(theta+lambda*y)^(y-1) exp(-theta-lambda*y) / y!}
VGAM/man/gengammaUC.Rd0000644000176000001440000000435212136651105014106 0ustar ripleyusers\name{gengammaUC}
\alias{gengammaUC}
\alias{dgengamma}
\alias{pgengamma}
\alias{qgengamma}
\alias{rgengamma}
\title{The Generalized Gamma Distribution }
\description{
Density, distribution function, quantile function and random generation
for the generalized gamma distribution with scale parameter \code{scale},
and parameters \code{d} and \code{k}.
}
\usage{
dgengamma(x, scale = 1, d = 1, k = 1, log = FALSE)
pgengamma(q, scale = 1, d = 1, k = 1)
qgengamma(p, scale = 1, d = 1, k = 1)
rgengamma(n, scale = 1, d = 1, k = 1)
}
\arguments{
\item{x, q}{vector of quantiles.}
\item{p}{vector of probabilities.}
\item{n}{number of observations. Positive integer of length 1.}
\item{scale}{the (positive) scale parameter \eqn{b}.}
\item{d, k}{the (positive) parameters \eqn{d} and \eqn{k}.}
\item{log}{
Logical.
If \code{log = TRUE} then the logarithm of the density is returned.
}
}
\value{
\code{dgengamma} gives the density,
\code{pgengamma} gives the distribution function,
\code{qgengamma} gives the quantile function, and
\code{rgengamma} generates random deviates.
}
\references{
Stacy, E. W. and Mihram, G. A. (1965)
Parameter estimation for a generalized gamma distribution.
\emph{Technometrics}, \bold{7}, 349--358.
}
\author{ T. W. Yee }
\details{
See \code{\link{gengamma}}, the \pkg{VGAM} family function for
estimating the generalized gamma distribution by maximum likelihood
estimation, for formulae and other details.
Apart from \code{n}, all the above arguments may be vectors and are
recycled to the appropriate length if necessary.
}
% \note{
% }
\seealso{
\code{\link{gengamma}}.
}
\examples{
\dontrun{
x = seq(0, 14, by = 0.01); d = 1.5; Scale = 2; k = 6
plot(x, dgengamma(x, Scale, d, k), type = "l", col = "blue", ylim = 0:1,
     main = "Blue is density, red is cumulative distribution function",
     sub = "Purple are 5,10,...,95 percentiles", las = 1, ylab = "")
abline(h = 0, col = "blue", lty = 2)
lines(qgengamma(seq(0.05,0.95,by = 0.05), Scale, d, k),
      dgengamma(qgengamma(seq(0.05,0.95,by = 0.05), Scale, d, k),
                Scale, d, k),
      col = "purple", lty = 3, type = "h")
lines(x, pgengamma(x, Scale, d, k), type = "l", col = "red")
abline(h = 0, lty = 2)
}
}
\keyword{distribution}
VGAM/man/gengamma.Rd0000644000176000001440000001126212136651105013654 0ustar ripleyusers\name{gengamma}
\alias{gengamma}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Generalized Gamma distribution family function }
\description{
Estimation of the 3-parameter generalized gamma distribution proposed by
Stacy (1962).
}
\usage{
gengamma(lscale = "loge", ld = "loge", lk = "loge",
         iscale = NULL, id = NULL, ik = NULL, zero = NULL)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{lscale, ld, lk}{
Parameter link function applied to each of the positive parameters
\eqn{b}, \eqn{d} and \eqn{k}, respectively.
See \code{\link{Links}} for more choices.
}
\item{iscale, id, ik}{
Initial value for \eqn{b}, \eqn{d} and \eqn{k}, respectively.
The defaults mean an initial value is determined internally for each.
}
\item{zero}{
An integer-valued vector specifying which linear/additive predictors
are modelled as intercepts only.
The values must be from the set \{1,2,3\}.
The default value means none are modelled as intercept-only terms.
See \code{\link{CommonVGAMffArguments}} for more information.
}
}
\details{
The probability density function can be written
\deqn{f(y;b,d,k) = d b^{-d k} y^{d k-1} \exp[-(y/b)^d] / \Gamma(k)}{%
f(y;b,d,k) = d * b^(-d*k) * y^(d*k-1) * exp(-(y/b)^d) / gamma(k)}
for scale parameter \eqn{b > 0}, and \eqn{d > 0}, \eqn{k > 0},
and \eqn{y > 0}.
The mean of \eqn{Y} is
\eqn{b \times \Gamma(k+1/d) / \Gamma(k)}{b*gamma(k+1/d)/gamma(k)}
(returned as the fitted values),
which equals \eqn{bk}{b*k} if \eqn{d=1}.

There are many special cases, as given in Table 1 of Stacy and Mihram (1965).
In the following, the parameters are in the order \eqn{b,d,k}.
The special cases are:
Exponential \eqn{f(y;b,1,1)},
Gamma \eqn{f(y;b,1,k)},
Weibull \eqn{f(y;b,d,1)},
Chi Squared \eqn{f(y;2,1,a/2)} with \eqn{a} degrees of freedom,
Chi \eqn{f(y;\sqrt{2},2,a/2)}{f(y;sqrt(2),2,a/2)} with \eqn{a} degrees of freedom,
Half-normal \eqn{f(y;\sqrt{2},2,1/2)}{f(y;sqrt(2),2,1/2)},
Circular normal \eqn{f(y;\sqrt{2},2,1)}{f(y;sqrt(2),2,1)},
Spherical normal \eqn{f(y;\sqrt{2},2,3/2)}{f(y;sqrt(2),2,3/2)},
Rayleigh \eqn{f(y;c\sqrt{2},2,1)}{f(y;c sqrt(2),2,1)} where \eqn{c>0}.
}
\value{
An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}).
The object is used by modelling functions such as \code{\link{vglm}},
and \code{\link{vgam}}.
}
\references{
Stacy, E. W. (1962)
A generalization of the gamma distribution.
\emph{Annals of Mathematical Statistics}, \bold{33}(3), 1187--1192.

Stacy, E. W. and Mihram, G. A. (1965)
Parameter estimation for a generalized gamma distribution.
\emph{Technometrics}, \bold{7}, 349--358.

Prentice, R. L. (1974)
A log gamma model and its maximum likelihood estimation.
\emph{Biometrika}, \bold{61}, 539--544.
}
\section{Warning }{
Several authors have considered maximum likelihood estimation for the
generalized gamma distribution and have found that the Newton-Raphson
algorithm does not work very well and that the existence of solutions to
the log-likelihood equations is sometimes in doubt.
Although Fisher scoring is used here, it is likely that the same problems
will be encountered.
It appears that large samples are required, for example, the estimator of
\eqn{k} became asymptotically normal only with 400 or more observations.
It is not uncommon for maximum likelihood estimates to fail to converge
even with two or three hundred observations.
With covariates, even more observations are needed to increase the
chances of convergence.
}
\author{ T. W. Yee }
\note{
The notation used here differs from Stacy (1962) and Prentice (1974).
Poor initial values may result in failure to converge, so if there are
covariates and there are convergence problems, try using the \code{zero}
argument (e.g., \code{zero = 2:3}) or the \code{ik} argument.
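For instance, a hedged sketch along those lines is given below; the
particular values are illustrative only (\code{ik = 0.5} is an
assumption, not a recommendation), and \code{gdata} is as in the
examples below.
\preformatted{
# Only the scale parameter is modelled as a function of x2
fit <- vglm(y ~ x2, gengamma(zero = 2:3, ik = 0.5),
            data = gdata, trace = TRUE, maxit = 100)
}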
}
\seealso{
\code{\link{rgengamma}},
\code{\link{gamma1}},
\code{\link{gamma2}},
\code{\link{prentice74}}.
}
\examples{
\dontrun{
k <- exp(-1); Scale = exp(1)
gdata <- data.frame(y = rgamma(1000, shape = k, scale = Scale))
fit <- vglm(y ~ 1, gengamma, gdata, trace = TRUE)
coef(fit, matrix = TRUE)

# Another example
gdata <- data.frame(x2 = runif(nn <- 5000))
gdata <- transform(gdata, Scale = exp(1),
                   d = exp( 0 + 1.2* x2),
                   k = exp(-1 + 2 * x2))
gdata <- transform(gdata, y = rgengamma(nn, scale = Scale, d = d, k = k))
fit <- vglm(y ~ x2, gengamma(zero = 1, iscale = 6), gdata, trace = TRUE)
fit <- vglm(y ~ x2, gengamma(zero = 1), gdata, trace = TRUE, maxit = 50)
coef(fit, matrix = TRUE)
}
}
\keyword{models}
\keyword{regression}
VGAM/man/genbetaII.Rd0000644000176000001440000001037412136651105013732 0ustar ripleyusers\name{genbetaII}
\alias{genbetaII}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Generalized Beta Distribution of the Second Kind }
\description{
Maximum likelihood estimation of the 4-parameter generalized beta II
distribution.
}
\usage{
genbetaII(lshape1.a = "loge", lscale = "loge", lshape2.p = "loge",
          lshape3.q = "loge", ishape1.a = NULL, iscale = NULL,
          ishape2.p = 1, ishape3.q = 1, zero = NULL)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{lshape1.a, lscale, lshape2.p, lshape3.q}{
Parameter link functions applied to the shape parameter \code{a},
scale parameter \code{scale},
shape parameter \code{p},
and shape parameter \code{q}.
All four parameters are positive.
See \code{\link{Links}} for more choices.
}
\item{ishape1.a, iscale}{
Optional initial values for \code{a} and \code{scale}.
A \code{NULL} means a value is computed internally.
}
\item{ishape2.p, ishape3.q}{
Optional initial values for \code{p} and \code{q}.
}
\item{zero}{
An integer-valued vector specifying which linear/additive predictors
are modelled as intercepts only.
Here, the values must be from the set \{1,2,3,4\} which correspond to
\code{a}, \code{scale}, \code{p}, \code{q}, respectively.
}
}
\details{
This distribution is most useful for unifying a substantial number of
size distributions. For example, the Singh-Maddala, Dagum,
Fisk (log-logistic), Lomax (Pareto type II), inverse Lomax, and beta
distribution of the second kind are all special cases.
Full details can be found in Kleiber and Kotz (2003), and
Brazauskas (2002).
The argument names given here are used by other families that are special
cases of this family.
Fisher scoring is used here and for the special cases too.

The 4-parameter generalized beta II distribution has density
\deqn{f(y) = a y^{ap-1} / [b^{ap} B(p,q) \{1 + (y/b)^a\}^{p+q}]}{%
f(y) = a y^(ap-1) / [b^(ap) B(p,q) (1 + (y/b)^a)^(p+q)]}
for \eqn{a > 0}, \eqn{b > 0}, \eqn{p > 0}, \eqn{q > 0},
\eqn{y \geq 0}{y >= 0}.
Here \eqn{B} is the beta function, and \eqn{b} is the scale parameter
\code{scale}, while the others are shape parameters.
The mean is
\deqn{E(Y) = b \, \Gamma(p + 1/a) \, \Gamma(q - 1/a) / (\Gamma(p) \, \Gamma(q))}{%
E(Y) = b gamma(p + 1/a) gamma(q - 1/a) / ( gamma(p) gamma(q))}
provided \eqn{-ap < 1 < aq}; these are returned as the fitted values.

%The distribution is motivated by the incomplete beta function
%\eqn{B_y(p,q)} which is the integral from 0 to \eqn{y} of the integrand
%\eqn{u^{p-1} (1-u)^{q-1}}{u^(p-1) (1-u)^(q-1)} where \eqn{y>0}.

}
\value{
An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}).
The object is used by modelling functions such as \code{\link{vglm}},
and \code{\link{vgam}}.
}
\references{
Kleiber, C. and Kotz, S. (2003)
\emph{Statistical Size Distributions in Economics and Actuarial Sciences},
Hoboken, NJ, USA: Wiley-Interscience.

Brazauskas, V. (2002)
Fisher information matrix for the Feller-Pareto distribution.
\emph{Statistics & Probability Letters}, \bold{59}, 159--167.
}
\author{ T. W. Yee }
\note{
If the self-starting initial values fail, try experimenting with the
initial value arguments, especially those whose default value is not
\code{NULL}.
Also, the constraint \eqn{-ap < 1 < aq} may be violated as the iterations
progress so it pays to monitor convergence, e.g., set \code{trace = TRUE}.
Successful convergence depends on having very good initial values.
This is rather difficult for this distribution!
More improvements could be made here.
}
\seealso{
\code{\link{betaff}},
\code{\link{betaII}},
\code{\link{dagum}},
\code{\link{sinmad}},
\code{\link{fisk}},
\code{\link{lomax}},
\code{\link{invlomax}},
\code{\link{paralogistic}},
\code{\link{invparalogistic}},
\code{\link{lino}}.
}
\examples{
\dontrun{
gdata <- data.frame(y = rsinmad(3000, exp(2), exp(2), exp(1)))  # A special case!
fit <- vglm(y ~ 1, genbetaII, gdata, trace = TRUE)
fit <- vglm(y ~ 1, data = gdata, trace = TRUE,
            genbetaII(ishape1.a = 4, ishape2.p = 2.2, iscale = 7, ishape3.q = 2.3))
coef(fit, matrix = TRUE)
Coef(fit)
summary(fit)
}
}
\keyword{models}
\keyword{regression}
VGAM/man/gaussianff.Rd0000644000176000001440000001163212136651105014227 0ustar ripleyusers\name{gaussianff}
%\alias{gaussian}
\alias{gaussianff}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Gaussian (normal) Family Function }
\description{
Fits a generalized linear model to a response with Gaussian (normal)
errors. The dispersion parameter may be known or unknown.
}
\usage{
gaussianff(dispersion = 0, parallel = FALSE, zero = NULL)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{parallel}{
A logical or formula. If a formula, the response of the formula should
be a logical and the terms of the formula indicate whether or not those
terms are parallel.
}
\item{dispersion}{
Dispersion parameter.
If 0 then it is estimated and the moment estimate is put in
\code{object@misc$dispersion}; it is assigned the value
\deqn{ \sum_{i=1}^n \; (y_i - \eta_i)^T W_i (y_i - \eta_i) / (nM-p) }{%
sum_{i=1}^n (y_i - eta_i)^T W_i (y_i - eta_i) / (nM-p) }
where \eqn{p} is the total number of parameters estimated
(for RR-VGLMs the value used is the number of columns in the large
\eqn{X} model matrix; this may not be correct).
If the argument is assigned a positive quantity then it is assumed to
be known with that value.

% zz 28/8/06 check for RR-VGLMs
% By default, maximum likelihood is used to
% estimate the model because it is known. However, the user can specify
% \code{dispersion = 0} to have it estimated.

}
\item{zero}{
An integer-valued vector specifying which linear/additive predictors
are modelled as intercepts only.
The values must be from the set \{1,2,\ldots,\eqn{M}\} where \eqn{M}
is the number of columns of the matrix response.
}
}
\details{
This function is usually used in conjunction with \code{\link{vglm}},
else \code{vlm} is recommended instead.
The notation \eqn{M} is used to denote the number of linear/additive
predictors.
This function can handle any finite \eqn{M}, and the default is to use
ordinary least squares.
A vector linear/additive model can be fitted by minimizing
\deqn{ \sum_{i=1}^n \; (y_i - \eta_i)^T W_i (y_i - \eta_i) }{%
sum_{i=1}^n (y_i - eta_i)^T W_i (y_i - eta_i) }
where \eqn{y_i} is a \eqn{M}-vector,
\eqn{\eta_i}{eta_i} is the vector of linear/additive predictors.
Each \eqn{W_i} is a positive-definite matrix, and the default is the
order-\eqn{M} identity matrix.
The \eqn{W_i} can be inputted using the \code{weights} argument of
\code{vlm}/\code{\link{vglm}}/\code{\link{vgam}} etc., and the format is
the \emph{matrix-band} format whereby it is an \eqn{n \times A}{n * A}
matrix with the diagonals passed first, followed by the upper band, all
the way to the \eqn{(1,M)} element.
Here, \eqn{A} has a maximum value of \eqn{M(M+1)/2} and a minimum value
of \eqn{M}.
Usually the \code{weights} argument of
\code{vlm}/\code{\link{vglm}}/\code{\link{vgam}}/\code{\link{rrvglm}}
is just a vector, in which case each element is multiplied by an
order-\eqn{M} identity matrix.
If in doubt, type something like \code{weights(object, type="working")}
after the model has been fitted.
}
\value{
An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}).
The object is used by modelling functions such as \code{\link{vglm}},
\code{\link{rrvglm}} and \code{\link{vgam}}.
}
\references{
McCullagh, P. and Nelder, J. A. (1989)
\emph{Generalized Linear Models}, 2nd ed. London: Chapman & Hall.

Yee, T. W. and Wild, C. J. (1996)
Vector generalized additive models.
\emph{Journal of the Royal Statistical Society, Series B, Methodological},
\bold{58}, 481--493.
}
\author{ Thomas W. Yee }
\note{
This \pkg{VGAM} family function is supposed to be similar to
\code{\link[stats]{gaussian}} but it is not compatible with
\code{\link[stats]{glm}}.
The \code{"ff"} in the name is added to avoid any masking problems.
}
% \section{Warning }{
% This function probably contains some bugs, so the user is advised to
% be cautious.
%
% }
\seealso{
\code{\link{normal1}},
\code{\link{huber2}},
\code{\link{lqnorm}},
\code{\link{binormal}},
\code{\link{SUR}}.
\code{vlm},
\code{\link{vglm}},
\code{\link{vgam}},
\code{\link{rrvglm}}.
}
\examples{
gdata <- data.frame(x2 = sort(runif(n <- 40)))
gdata <- transform(gdata, y1 = 1 + 2*x2 + rnorm(n, sd = 0.1),
                   y2 = 3 + 4*x2 + rnorm(n, sd = 0.1),
                   y3 = 7 + 4*x2 + rnorm(n, sd = 0.1))
fit <- vglm(cbind(y1,y2) ~ x2, gaussianff, data = gdata)
coef(fit, matrix = TRUE)

# For comparison:
coef( lmfit <-  lm(y1 ~ x2, data = gdata))
coef(glmfit <- glm(y2 ~ x2, data = gdata, gaussian))
vcov(fit)
vcov(lmfit)

t(weights(fit, type = "prior"))  # Unweighted observations
head(weights(fit, type = "working"))  # Identity matrices

# Reduced-rank VLM (rank-1)
fit2 <- rrvglm(cbind(y1, y2, y3) ~ x2, gaussianff, data = gdata)
Coef(fit2)
}
\keyword{models}
\keyword{regression}
VGAM/man/garma.Rd0000644000176000001440000001476612136651105013203 0ustar ripleyusers\name{garma}
\alias{garma}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{GARMA (Generalized Autoregressive Moving-Average) Models}
\description{
Fits GARMA models to time series data.
}
\usage{
garma(link = "identity", p.ar.lag = 1, q.ma.lag = 0,
      coefstart = NULL, step = 1)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{link}{
Link function applied to the mean response.
The default is suitable for continuous responses.
The link \code{\link{loge}} should be chosen if the data are counts.
The link \code{\link{reciprocal}} can be chosen if the data are counts
and the variance assumed for this is \eqn{\mu^2}{mu^2}.
The links \code{\link{logit}}, \code{\link{probit}},
\code{\link{cloglog}}, and \code{\link{cauchit}} are supported and
suitable for binary responses.

Note that when the log or logit link is chosen, zero values can be
replaced by \code{bvalue}.
See \code{\link{loge}} and \code{\link{logit}} etc. for specific
information about each link function.
}
\item{p.ar.lag}{
A positive integer, the lag for the autoregressive component.
Called \eqn{p} below.
}
\item{q.ma.lag}{
A non-negative integer, the lag for the moving-average component.
Called \eqn{q} below.
}
\item{coefstart}{
Starting values for the coefficients.
Assigning this argument is highly recommended.
For technical reasons, the argument \code{coefstart} in
\code{\link{vglm}} cannot be used.
}
\item{step}{
Numeric. Step length, e.g., \code{0.5} means half-stepsizing.
}
% \item{constant}{
% Used when the log or logit link is chosen.
% For log, zero values are replaced by \code{constant}.
% For logit, zero values are replaced by \code{constant} and
% unit values replaced by \code{1-constant}.
% }
}
\details{
This function draws heavily on Benjamin \emph{et al.} (1998).
See also Benjamin \emph{et al.} (2003).
GARMA models extend the ARMA time series model to generalized responses
in the exponential family, e.g., Poisson counts, binary responses.
Currently, this function is rudimentary and can handle only certain
continuous, count and binary responses.
The user must choose an appropriate link for the \code{link} argument.

The GARMA(\eqn{p, q}) model is defined by firstly having a response
belonging to the exponential family
\deqn{f(y_t|D_t) = \exp
\left\{ \frac{y_t \theta_t - b(\theta_t)}{\phi / A_t} +
c(y_t, \phi / A_t)
\right\}}{%
f(y_t|D_t) = \exp
[ (y_t theta_t - b(theta_t)) / (phi / A_t) +
c(y_t, \phi / A_t) ]
}
where \eqn{\theta_t}{theta_t} and \eqn{\phi}{phi} are the
canonical and scale parameters respectively, and \eqn{A_t} are known
prior weights.
The mean
\eqn{\mu_t=E(Y_t|D_t)=b'(\theta_t)}{mu_t=E(Y_t|D_t)=b'(theta_t)}
is related to the linear predictor \eqn{\eta_t}{eta_t} by the link
function \eqn{g}.
Here,
\eqn{D_t=\{x_t,\ldots,x_1,y_{t-1},\ldots,y_1,\mu_{t-1},\ldots,\mu_1\}}{
D_t={x_t,\ldots,x_1,y_(t-1),\ldots,y_1,mu_(t-1),\ldots,mu_1}}
is the previous information set.

Secondly, the GARMA(\eqn{p, q}) model is defined by
\deqn{g(\mu_t) = \eta_t = x_t^T \beta +
\sum_{k=1}^p \phi_k (g(y_{t-k}) - x_{t-k}^T \beta) +
\sum_{k=1}^q \theta_k (g(y_{t-k}) - \eta_{t-k}).}{%
g(mu_t) = eta_t = x_t^T beta +
\sum_{k=1}^p phi_k (g(y_{t-k}) - x_{t-k}^T beta) +
\sum_{k=1}^q theta_k (g(y_{t-k}) - eta_{t-k}).}
Parameter vectors \eqn{\beta}{beta}, \eqn{\phi}{phi} and
\eqn{\theta}{theta} are estimated by maximum likelihood.
}
\value{
An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}).
The object is used by modelling functions such as \code{\link{vglm}}.
}
\references{
Benjamin, M. A., Rigby, R. A. and Stasinopoulos, M. D. (1998)
Fitting Non-Gaussian Time Series Models. Pages 191--196 in:
\emph{Proceedings in Computational Statistics COMPSTAT 1998} by
Payne, R. and P. J. Green. Physica-Verlag.

Benjamin, M. A., Rigby, R. A. and Stasinopoulos, M. D. (2003)
Generalized Autoregressive Moving Average Models.
\emph{Journal of the American Statistical Association},
\bold{98}: 214--223.

Zeger, S. L. and Qaqish, B. (1988)
Markov regression models for time series: a quasi-likelihood approach.
\emph{Biometrics}, \bold{44}: 1019--1031.
}
\author{ T. W. Yee }
\note{
This function is unpolished and requires \emph{lots} of improvements.
In particular, initialization is \emph{very poor}.
Results appear \emph{very} sensitive to the quality of initial values.
A limited amount of experience has shown that half-stepsizing is often
needed for convergence, therefore choosing \code{crit = "coef"} is not
recommended.

Overdispersion is not handled.
For binomial responses it is currently best to input a vector of 1s and
0s rather than the \code{cbind(successes, failures)} because the
initialize slot is rudimentary.
}
\section{Warning}{
This \pkg{VGAM} family function is 'non-standard' in that the model does
need some coercing to get it into the VGLM framework.
Special code is required to get it running.
A consequence is that some methods functions may give wrong results when
applied to the fitted object.
}
\seealso{
% \code{\link{identity}},
% \code{\link{logit}}.
The site \url{http://www.stat.auckland.ac.nz/~yee} contains more
documentation about this family function.
}
\examples{
gdata <- data.frame(interspike = c(68, 41, 82, 66, 101, 66, 57, 41, 27, 78,
59, 73, 6, 44, 72, 66, 59, 60, 39, 52,
50, 29, 30, 56, 76, 55, 73, 104, 104, 52,
25, 33, 20, 60, 47, 6, 47, 22, 35, 30,
29, 58, 24, 34, 36, 34, 6, 19, 28, 16,
36, 33, 12, 26, 36, 39, 24, 14, 28, 13,
2, 30, 18, 17, 28, 9, 28, 20, 17, 12,
19, 18, 14, 23, 18, 22, 18, 19, 26, 27,
23, 24, 35, 22, 29, 28, 17, 30, 34, 17,
20, 49, 29, 35, 49, 25, 55, 42, 29, 16))  # See Zeger and Qaqish (1988)
gdata <- transform(gdata, spikenum = seq(interspike))
bvalue <- 0.1  # .Machine$double.xmin # Boundary value
fit <- vglm(interspike ~ 1, trace = TRUE, data = gdata,
            garma(loge(bvalue = bvalue), p = 2, coefstart = c(4, 0.3, 0.4)))
summary(fit)
coef(fit, matrix = TRUE)
Coef(fit)  # A bug here
\dontrun{
with(gdata, plot(interspike, ylim = c(0, 120), las = 1,
     xlab = "Spike Number", ylab = "Inter-Spike Time (ms)", col = "blue"))
with(gdata, lines(spikenum[-(1:fit@misc$plag)], fitted(fit), col = "orange"))
abline(h = mean(with(gdata, interspike)), lty = "dashed", col = "gray")
}
}
\keyword{models}
\keyword{regression}
VGAM/man/gammahyp.Rd0000644000176000001440000000523212136651105013703 0ustar ripleyusers\name{gammahyp}
\alias{gammahyp}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Gamma Hyperbola Bivariate Distribution }
\description{
Estimate the parameter of a gamma hyperbola bivariate distribution by
maximum likelihood estimation.
}
\usage{
gammahyp(ltheta = "loge", itheta = NULL, expected = FALSE)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{ltheta}{
Link function applied to the (positive) parameter \eqn{\theta}{theta}.
See \code{\link{Links}} for more choices.
}
\item{itheta}{
Initial value for the parameter.
The default is to estimate it internally.
}
\item{expected}{
Logical. \code{FALSE} means the Newton-Raphson (using the observed
information matrix) algorithm, otherwise the expected information matrix
is used (Fisher scoring algorithm).
}
}
\details{
The joint probability density function is given by
\deqn{f(y_1,y_2) = \exp( -e^{-\theta} y_1 / \theta - \theta y_2 )}{%
f(y1,y2) = exp( -exp(-theta) * y1 / theta - theta * y2) }
for \eqn{\theta > 0}{theta > 0},
\eqn{y_1 > 0}{y1 > 0},
\eqn{y_2 > 1}{y2 > 1}.
The random variables \eqn{Y_1}{Y1} and \eqn{Y_2}{Y2} are independent.
The marginal distribution of \eqn{Y_1}{Y1} is an exponential distribution
with rate parameter \eqn{\exp(-\theta)/\theta}{exp(-theta)/theta}.
The marginal distribution of \eqn{Y_2}{Y2} is an exponential distribution
that has been shifted to the right by 1, with rate parameter
\eqn{\theta}{theta}.
The fitted values are stored in a two-column matrix with the marginal
means, which are \eqn{\theta \exp(\theta)}{theta * exp(theta)} and
\eqn{1 + 1/\theta}{1 + 1/theta}.

The default algorithm is Newton-Raphson because Fisher scoring tends to
be much slower for this distribution.
}
\value{
An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}).
The object is used by modelling functions such as \code{\link{vglm}}
and \code{\link{vgam}}.
}
\references{
Reid, N. (2003)
Asymptotics and the theory of inference.
\emph{Annals of Statistics}, \bold{31}, 1695--1731.
}
\author{ T. W. Yee }
\note{
The response must be a two-column matrix.
}
\seealso{
\code{\link{exponential}}.
}
\examples{
gdata <- data.frame(x = runif(nn <- 1000))
gdata <- transform(gdata, theta = exp(-2+x))
gdata <- transform(gdata, y1 = rexp(nn, rate = exp(-theta)/theta),
                   y2 = rexp(nn, rate = theta) + 1)
fit <- vglm(cbind(y1,y2) ~ x, fam = gammahyp(expected = TRUE), gdata)
fit <- vglm(cbind(y1,y2) ~ x, fam = gammahyp, gdata, trace = TRUE, crit = "coef")
coef(fit, matrix = TRUE)
Coef(fit)
head(fitted(fit))
summary(fit)
}
\keyword{models}
\keyword{regression}
VGAM/man/gamma2.ab.Rd0000644000176000001440000000743012136651105013627 0ustar ripleyusers\name{gamma2.ab}
\alias{gamma2.ab}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ 2-parameter Gamma Distribution }
\description{
Estimates the 2-parameter gamma distribution by maximum likelihood
estimation.
}
\usage{
gamma2.ab(lrate = "loge", lshape = "loge", irate = NULL,
          ishape = NULL, expected = TRUE, zero = 2)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{lrate, lshape}{
Link functions applied to the (positive) \emph{rate} and \emph{shape}
parameters.
See \code{\link{Links}} for more choices.
}
\item{expected}{
Logical. Use Fisher scoring? The default is yes, otherwise
Newton-Raphson is used.
}
\item{irate, ishape}{
Optional initial values for \emph{rate} and \emph{shape}.
A \code{NULL} means a value is computed internally.
If a failure to converge occurs, try using these arguments.
}
\item{zero}{
An integer specifying which linear/additive predictor is to be modelled
as an intercept only. If assigned, the single value should be either
1 or 2 or \code{NULL}.
The default is to model \eqn{shape} as an intercept only.
A value \code{NULL} means neither 1 nor 2.
}
}
\details{
The density function is given by
\deqn{f(y) = \exp(-rate \times y) \times y^{shape-1} \times rate^{shape}
/ \Gamma(shape)}{%
f(y) = exp(-rate * y) y^(shape-1) rate^(shape) / gamma(shape)}
for \eqn{shape > 0}, \eqn{rate > 0} and \eqn{y > 0}.
Here, \eqn{\Gamma(shape)}{gamma(shape)} is the gamma function, as in
\code{\link[base:Special]{gamma}}.
The mean of \emph{Y} is \eqn{\mu = shape/rate}{mu = shape/rate}
(returned as the fitted values) with variance
\eqn{\sigma^2 = \mu^2 /shape = shape/rate^2}{sigma^2 = mu^2 /shape =
shape/rate^2}.
By default, the two linear/additive predictors are
\eqn{\eta_1 = \log(rate)}{eta1 = log(rate)} and
\eqn{\eta_2 = \log(shape)}{eta2 = log(shape)}.

The argument \code{expected} refers to the type of information matrix.
The expected information matrix corresponds to Fisher scoring and is
numerically better here.
The observed information matrix corresponds to the Newton-Raphson
algorithm and may be withdrawn from the family function in the future.
If both algorithms work then the differences in the results are often
not huge.
}
\value{
An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}).
The object is used by modelling functions such as \code{\link{vglm}}
and \code{\link{vgam}}.
}
\references{
Most standard texts on statistical distributions describe the
2-parameter gamma distribution, e.g.,

Evans, M., Hastings, N. and Peacock, B. (2000)
\emph{Statistical Distributions},
New York: Wiley-Interscience, Third edition.
}
\author{ T. W. Yee }
\note{
The parameters \eqn{rate} and \eqn{shape} match with the arguments
\code{rate} and \code{shape} of \code{\link[stats]{rgamma}}.
Often, \eqn{scale = 1/rate} is used.

If \eqn{rate = 1} use the family function \code{\link{gamma1}} to
estimate \eqn{shape}.
}
\seealso{
\code{\link{gamma1}} for the 1-parameter gamma distribution,
\code{\link{gamma2}} for another parameterization of the 2-parameter
gamma distribution,
\code{\link{bivgamma.mckay}} for \emph{a} bivariate gamma distribution,
\code{\link{expexp}}.
}
\examples{
# Essentially a 1-parameter gamma
gdata <- data.frame(y = rgamma(n <- 100, shape = exp(1)))
fit1 <- vglm(y ~ 1, gamma1, gdata, trace = TRUE)
fit2 <- vglm(y ~ 1, gamma2.ab, gdata, trace = TRUE, crit = "c")
coef(fit2, matrix = TRUE)
Coef(fit2)

# Essentially a 2-parameter gamma
gdata <- data.frame(y = rgamma(n = 500, rate = exp(1), shape = exp(2)))
fit2 <- vglm(y ~ 1, gamma2.ab, gdata, trace = TRUE, crit = "c")
coef(fit2, matrix = TRUE)
Coef(fit2)
summary(fit2)
}
\keyword{models}
\keyword{regression}
VGAM/man/gamma2.Rd0000644000176000001440000001411112136651105013240 0ustar ripleyusers\name{gamma2}
\alias{gamma2}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ 2-parameter Gamma Distribution }
\description{
Estimates the 2-parameter gamma distribution by maximum likelihood
estimation.
}
\usage{
gamma2(lmu = "loge", lshape = "loge", imethod = 1, ishape = NULL,
       parallel = FALSE, apply.parint = FALSE,
       deviance.arg = FALSE, zero = -2)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{lmu, lshape}{
Link functions applied to the (positive) \emph{mu} and \emph{shape}
parameters (called \eqn{\mu}{mu} and \eqn{\lambda}{shape} respectively).
See \code{\link{Links}} for more choices.
}
\item{ishape}{
Optional initial value for \emph{shape}.
A \code{NULL} means a value is computed internally.
If a failure to converge occurs, try using this argument.
This argument is ignored if used within \code{\link{cqo}}; see the
\code{iShape} argument of \code{\link{qrrvglm.control}} instead.
}
\item{imethod}{
An integer with value \code{1} or \code{2} which specifies the
initialization method for the \eqn{\mu}{mu} parameter.
If failure to converge occurs try another value
(and/or specify a value for \code{ishape}).
}
\item{deviance.arg}{
Logical. If \code{TRUE}, the deviance function is attached to the object.
Under ordinary circumstances, it should be left alone because it really
assumes the shape parameter is at the maximum likelihood estimate.
Consequently, one cannot use that criterion to minimize within the IRLS
algorithm. It should be set \code{TRUE} only when used with
\code{\link{cqo}} under the fast algorithm.
}
\item{zero}{
% An integer specifying which
% linear/additive predictor is to be modelled as an intercept only.
% If assigned, the single value should be either 1 or 2 or \code{NULL}.
% The default is to model \eqn{shape} as an intercept only.
% A value \code{NULL} means neither 1 or 2.

Integer-valued vector, usually assigned \eqn{-2} or \eqn{2} if used at all.
Specifies which of the two linear/additive predictors are modelled as an
intercept only.
By default, the shape parameter (after \code{lshape} is applied) is
modelled as a single unknown number that is estimated.
It can be modelled as a function of the explanatory variables by setting
\code{zero = NULL}.
A negative value means that the value is recycled, so setting \eqn{-2}
means all shape parameters are intercept only.
See \code{\link{CommonVGAMffArguments}} for more information.
}
\item{parallel, apply.parint}{
Details at \code{\link{CommonVGAMffArguments}}.
}
}
\details{
This distribution can model continuous skewed responses.
The density function is given by
\deqn{f(y;\mu,\lambda) = \frac{\exp(-\lambda y / \mu) \times
(\lambda y / \mu)^{\lambda-1} \times \lambda}{
\mu \times \Gamma(\lambda)}}{%
f(y;mu,shape) = exp(-shape * y / mu) y^(shape-1) shape^(shape) /
[mu^(shape) * gamma(shape)]}
for \eqn{\mu > 0}{mu > 0}, \eqn{\lambda > 0}{shape > 0} and \eqn{y > 0}.
Here, \eqn{\Gamma(\cdot)}{gamma()} is the gamma function, as in
\code{\link[base:Special]{gamma}}.
The mean of \emph{Y} is \eqn{\mu=\mu}{mu=mu} (returned as the fitted
values) with variance
\eqn{\sigma^2 = \mu^2 / \lambda}{sigma^2 = mu^2 / shape}.
If \eqn{0<\lambda<1}{0 < shape < 1} then the density has a pole at the
origin and decreases monotonically as \eqn{y} increases;
if \eqn{\lambda>1}{shape > 1} then the density is zero at the origin and
is unimodal with mode at
\eqn{y = \mu - \mu / \lambda}{y = mu - mu / shape};
this can be achieved with \code{lshape = "loglog"}.

By default, the two linear/additive predictors are
\eqn{\eta_1=\log(\mu)}{eta1=log(mu)} and
\eqn{\eta_2=\log(\lambda)}{eta2=log(shape)}.
This family function implements Fisher scoring and the working weight
matrices are diagonal.

This \pkg{VGAM} family function handles \emph{multivariate} responses,
so that a matrix can be used as the response.
The number of columns is the number of species, say, and \code{zero=-2}
means that \emph{all} species have a shape parameter equalling a
(different) intercept only.
}
\value{
An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}).
The object is used by modelling functions such as \code{\link{vglm}}
and \code{\link{vgam}}.
}
\references{
The parameterization of this \pkg{VGAM} family function is the
2-parameter gamma distribution described in the monograph

McCullagh, P. and Nelder, J. A. (1989)
\emph{Generalized Linear Models}, 2nd ed. London: Chapman & Hall.
}
\author{ T. W. Yee }
\note{
The response must be strictly positive.
A moment estimator for the shape parameter may be implemented in the
future.

If \code{mu} and \code{shape} are vectors, then
\code{rgamma(n = n, shape = shape, scale = mu/shape)}
will generate random gamma variates of this parameterization, etc.;
see \code{\link[stats]{GammaDist}}.

For \code{\link{cqo}} and \code{\link{cao}}, taking the logarithm of the
response means (approximately) a \code{\link{gaussianff}} family may be
used on the transformed data.
}
\seealso{
\code{\link{gamma1}} for the 1-parameter gamma distribution,
\code{\link{gamma2.ab}} for another parameterization of the 2-parameter
gamma distribution,
\code{\link{bivgamma.mckay}} for \emph{a} bivariate gamma distribution,
\code{\link{expexp}},
\code{\link[stats]{GammaDist}},
\code{\link{golf}},
\code{\link{CommonVGAMffArguments}}.
}
\examples{
# Essentially a 1-parameter gamma
gdata <- data.frame(y = rgamma(n = 100, shape = exp(1)))
fit1 <- vglm(y ~ 1, gamma1, gdata)
fit2 <- vglm(y ~ 1, gamma2, gdata, trace = TRUE, crit = "coef")
coef(fit2, matrix = TRUE)
Coef(fit2)

# Essentially a 2-parameter gamma
gdata <- data.frame(y = rgamma(n = 500, rate = exp(1), shape = exp(2)))
fit2 <- vglm(y ~ 1, gamma2, gdata, trace = TRUE, crit = "coef")
coef(fit2, matrix = TRUE)
Coef(fit2)
summary(fit2)
}
\keyword{models}
\keyword{regression}
VGAM/man/gamma1.Rd0000644000176000001440000000427512136651105013251 0ustar ripleyusers\name{gamma1}
\alias{gamma1}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ 1-parameter Gamma Distribution }
\description{
Estimates the 1-parameter gamma distribution by maximum likelihood
estimation.
}
\usage{
gamma1(link = "loge", zero = NULL)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{link}{
Link function applied to the (positive) \emph{shape} parameter.
See \code{\link{Links}} for more choices and general information.
}
\item{zero}{
Details at \code{\link{CommonVGAMffArguments}}.
}
}
\details{
The density function is given by
\deqn{f(y) = \exp(-y) \times y^{shape-1} / \Gamma(shape)}{%
f(y) = exp(-y) y^(shape-1) / gamma(shape)}
for \eqn{shape > 0} and \eqn{y > 0}.
Here, \eqn{\Gamma(shape)}{gamma(shape)} is the gamma function, as in
\code{\link[base:Special]{gamma}}.
The mean of \eqn{Y} (returned as the fitted values) is
\eqn{\mu=shape}{mu=shape}, and the variance is
\eqn{\sigma^2 = shape}{sigma^2 = shape}.
}
\value{
An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}).
The object is used by modelling functions such as \code{\link{vglm}}
and \code{\link{vgam}}.
}
\references{
Most standard texts on statistical distributions describe the
1-parameter gamma distribution, e.g.,

Evans, M., Hastings, N. and Peacock, B. (2000)
\emph{Statistical Distributions},
New York: Wiley-Interscience, Third edition.
}
\author{ T. W. Yee }
\note{
This \pkg{VGAM} family function can handle multiple responses, inputted
as a matrix.

The parameter \eqn{shape} matches with \code{shape} in
\code{\link[stats]{rgamma}}. The argument \code{rate} in
\code{\link[stats]{rgamma}} is assumed 1 for this family function.

If \eqn{rate} is unknown use the family function \code{\link{gamma2.ab}}
to estimate it too.
}
\seealso{
\code{\link{gamma2.ab}} for the 2-parameter gamma distribution,
\code{\link{lgammaff}},
\code{\link{lindley}}.
}
\examples{
gdata <- data.frame(y = rgamma(n = 100, shape = exp(3)))
fit <- vglm(y ~ 1, gamma1, gdata, trace = TRUE, crit = "coef")
coef(fit, matrix = TRUE)
Coef(fit)
summary(fit)
}
\keyword{models}
\keyword{regression}
VGAM/man/fsqrt.Rd0000644000176000001440000001051412136651105013236 0ustar ripleyusers\name{fsqrt}
\alias{fsqrt}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Folded Square Root Link Function }
\description{
Computes the folded square root transformation, including its inverse
and the first two derivatives.
}
\usage{
fsqrt(theta, min = 0, max = 1, mux = sqrt(2),
      inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{theta}{
Numeric or character.
See below for further details.
}
\item{min, max, mux}{
These are called \eqn{L}, \eqn{U} and \eqn{K} below.
}
\item{inverse, deriv, short, tag}{
Details at \code{\link{Links}}.
}
}
\details{
The folded square root link function can be applied to parameters that
lie between \eqn{L} and \eqn{U} inclusive.
Numerical values of \code{theta} out of range result in \code{NA}
or \code{NaN}.
}
\value{
For \code{fsqrt} with \code{deriv = 0}:
\eqn{K (\sqrt{\theta-L} - \sqrt{U-\theta})}{K * (sqrt(theta-L) - sqrt(U-theta))}
or \code{mux * (sqrt(theta-min) - sqrt(max-theta))}
when \code{inverse = FALSE},
and if \code{inverse = TRUE} then some more complicated function that
returns a \code{NA} unless \code{theta} is between
\code{-mux*sqrt(max-min)} and \code{mux*sqrt(max-min)}.

For \code{deriv = 1}, the function returns
\emph{d} \code{theta} / \emph{d} \code{eta} as a function of \code{theta}
if \code{inverse = FALSE}, else if \code{inverse = TRUE} then it returns
the reciprocal.
}
%\references{
%
%}
\author{ Thomas W. Yee }
\note{
With the default, if \code{theta} is 0 or 1 then the link function value
is \code{-sqrt(2)} or \code{+sqrt(2)} respectively.
These are finite values, therefore one cannot use this link function for
general modelling of probabilities because of numerical problems,
e.g., with \code{\link{binomialff}}, \code{\link{cumulative}}.
See the example below.
}
\seealso{
\code{\link{Links}}.
}
\examples{
p = seq(0.01, 0.99, by = 0.01)
fsqrt(p)
max(abs(fsqrt(fsqrt(p), inverse = TRUE) - p))  # Should be 0

p = c(seq(-0.02, 0.02, by = 0.01), seq(0.97, 1.02, by = 0.01))
fsqrt(p)  # Has NAs

\dontrun{
p = seq(0.01, 0.99, by = 0.01)
par(mfrow = c(2, 2), lwd = (mylwd <- 2))
y = seq(-4, 4, length = 100)
for(d in 0:1) {
  matplot(p, cbind(logit(p, deriv = d), fsqrt(p, deriv = d)),
          type = "n", col = "purple", ylab = "transformation", las = 1,
          main = if (d == 0) "Some probability link functions"
          else "First derivative")
  lines(p, logit(p, deriv = d), col = "limegreen")
  lines(p, probit(p, deriv = d), col = "purple")
  lines(p, cloglog(p, deriv = d), col = "chocolate")
  lines(p, fsqrt(p, deriv = d), col = "tan")
  if (d == 0) {
    abline(v = 0.5, h = 0, lty = "dashed")
    legend(0, 4.5, c("logit", "probit", "cloglog", "fsqrt"), lwd = 2,
           col = c("limegreen","purple","chocolate", "tan"))
  } else
    abline(v = 0.5, lty = "dashed")
}

for(d in 0) {
  matplot(y, cbind(logit(y, deriv = d, inverse = TRUE),
                   fsqrt(y, deriv = d, inverse = TRUE)),
          type = "n", col = "purple", xlab = "transformation", ylab = "p",
          lwd = 2, las = 1,
          main = if (d == 0) "Some inverse probability link functions"
          else "First derivative")
  lines(y, logit(y, deriv = d, inverse = TRUE), col = "limegreen")
  lines(y, probit(y, deriv = d, inverse = TRUE), col = "purple")
  lines(y, cloglog(y, deriv = d, inverse = TRUE), col = "chocolate")
  lines(y, fsqrt(y, deriv = d, inverse = TRUE), col = "tan")
  if (d == 0) {
    abline(h = 0.5, v = 0, lty = "dashed")
    legend(-4, 1, c("logit", "probit", "cloglog", "fsqrt"), lwd = 2,
           col = c("limegreen","purple","chocolate", "tan"))
  }
}
par(lwd = 1)
}

# This is lucky to converge
fit.h <- vglm(agaaus ~ bs(altitude), binomialff(link = fsqrt(mux = 5)),
              data = hunua, trace = TRUE)
\dontrun{
plotvgam(fit.h, se = TRUE, lcol = "orange", scol = "orange",
         main = "Orange is Hunua, Blue is Waitakere")
}
head(predict(fit.h, hunua, type = "response"))

\dontrun{
# The following fails.
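# (A plausible explanation, per the Note above: fsqrt maps the interval
#  [min, max] onto a finite range, so boundary probabilities are not sent
#  to -Inf/Inf, and numerical problems can result with cumulative().)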
pneumo <- transform(pneumo, let = log(exposure.time))
fit <- vglm(cbind(normal, mild, severe) ~ let,
            cumulative(link = fsqrt(mux = 10), par = TRUE, rev = TRUE),
            data = pneumo, trace = TRUE, maxit = 200)
}
}
\keyword{math}
\keyword{models}
\keyword{regression}
VGAM/man/freund61.Rd0000644000176000001440000001533512136651105013537 0ustar ripleyusers\name{freund61}
\alias{freund61}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Freund's (1961) Bivariate Extension of the Exponential Distribution }
\description{
Estimate the four parameters of the Freund (1961) bivariate extension of
the exponential distribution by maximum likelihood estimation.
}
\usage{
freund61(la = "loge", lap = "loge", lb = "loge", lbp = "loge",
         ia = NULL, iap = NULL, ib = NULL, ibp = NULL,
         independent = FALSE, zero = NULL)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{la, lap, lb, lbp}{
Link functions applied to the (positive) parameters \eqn{\alpha}{alpha},
\eqn{\alpha'}{alpha'}, \eqn{\beta}{beta} and \eqn{\beta'}{beta'},
respectively (the ``\code{p}'' stands for ``prime'').
See \code{\link{Links}} for more choices.
}
\item{ia, iap, ib, ibp}{
Initial value for the four parameters respectively.
The default is to estimate them all internally.
}
\item{independent}{
Logical.
If \code{TRUE} then the parameters are constrained to satisfy
\eqn{\alpha=\alpha'}{alpha=alpha'} and \eqn{\beta=\beta'}{beta=beta'},
which implies that \eqn{y_1}{y1} and \eqn{y_2}{y2} are independent and
each have an ordinary exponential distribution.
}
\item{zero}{
An integer-valued vector specifying which linear/additive predictors are
modelled as intercepts only.
The values must be from the set \{1,2,3,4\}.
The default is none of them.
}
}
\details{
This model represents one type of bivariate extension of the exponential
distribution that is applicable to certain problems, in particular,
to two-component systems which can function if one of the components has
failed. For example, engine failures in two-engine planes, paired organs
such as people's eyes, ears and kidneys.
Suppose \eqn{y_1}{y1} and \eqn{y_2}{y2} are random variables representing
the lifetimes of two components \eqn{A} and \eqn{B} in a two component
system.
The dependence between \eqn{y_1}{y1} and \eqn{y_2}{y2} is essentially
such that the failure of the \eqn{B} component changes the parameter of
the exponential life distribution of the \eqn{A} component from
\eqn{\alpha}{alpha} to \eqn{\alpha'}{alpha'}, while the failure of the
\eqn{A} component changes the parameter of the exponential life
distribution of the \eqn{B} component from \eqn{\beta}{beta} to
\eqn{\beta'}{beta'}.

The joint probability density function is given by
\deqn{f(y_1,y_2) = \alpha \beta' \exp(-\beta' y_2 -
(\alpha+\beta-\beta')y_1) }{%
f(y1,y2) = alpha * beta' * exp(-beta' * y2 - (alpha+beta-beta') * y1) }
for \eqn{0 < y_1 < y_2}{0 < y1 < y2}, and
\deqn{f(y_1,y_2) = \beta \alpha' \exp(-\alpha' y_1 -
(\alpha+\beta-\alpha')y_2) }{%
f(y1,y2) = beta * alpha' * exp(-alpha' * y1 - (alpha+beta-alpha') * y2) }
for \eqn{0 < y_2 < y_1}{0 < y2 < y1}.
Here, all four parameters are positive, as well as the responses
\eqn{y_1}{y1} and \eqn{y_2}{y2}.
Under this model, the probability that component \eqn{A} is the first to
fail is \eqn{\alpha/(\alpha+\beta)}{alpha/(alpha+beta)}.
The time to the first failure is distributed as an exponential
distribution with rate \eqn{\alpha+\beta}{alpha+beta}.
  Furthermore, the distribution of the time from first failure to
  failure of the other component is a mixture of
  Exponential(\eqn{\alpha'}{alpha'}) and Exponential(\eqn{\beta'}{beta'})
  with proportions \eqn{\beta/(\alpha+\beta)}{beta/(alpha+beta)}
  and \eqn{\alpha/(\alpha+\beta)}{alpha/(alpha+beta)} respectively.
  The marginal distributions are, in general, not exponential.

  By default, the linear/additive predictors are
  \eqn{\eta_1=\log(\alpha)}{eta1=log(alpha)},
  \eqn{\eta_2=\log(\alpha')}{eta2=log(alpha')},
  \eqn{\eta_3=\log(\beta)}{eta3=log(beta)},
  \eqn{\eta_4=\log(\beta')}{eta4=log(beta')}.

  A special case is when \eqn{\alpha=\alpha'}{alpha=alpha'}
  and \eqn{\beta=\beta'}{beta=beta'},
  which means that \eqn{y_1}{y1} and \eqn{y_2}{y2} are independent, and
  both have an ordinary exponential distribution with means
  \eqn{1 / \alpha}{1/alpha} and \eqn{1 / \beta}{1/beta} respectively.

  Fisher scoring is used, and the initial values correspond to the MLEs
  of an intercept model.
  Consequently, convergence may take only one iteration.
}
\value{
  An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}).
  The object is used by modelling functions such as \code{\link{vglm}}
  and \code{\link{vgam}}.
}
\references{
Freund, J. E. (1961)
A bivariate extension of the exponential distribution.
\emph{Journal of the American Statistical Association},
\bold{56}, 971--977.
}
\author{ T. W. Yee }
\note{
  To estimate all four parameters, it is necessary to have some data
  where \eqn{y_1 < y_2}{y1 < y2} and some data where
  \eqn{y_1 > y_2}{y1 > y2}.
}
\details{
  The Frechet distribution has support \eqn{y > a}{y > a}, where
  \eqn{a} is a location parameter, and scale parameter \eqn{b > 0}.
  The positive shape parameter is \eqn{s}.
  The cumulative distribution function is
  \deqn{F(y) = \exp[-(b/(y-a))^s]. }{%
        F(y) = exp[-(b/(y-a))^s].}
  The mean of \eqn{Y} is \eqn{a + b \Gamma(1-1/s)}{a + b*gamma(1-1/s)}
  for \eqn{s > 1} (these are returned as the fitted values).
  The variance of \eqn{Y} is
  \eqn{b^2 [ \Gamma(1-2/s) - \Gamma^2(1-1/s)]}{b^2 *
       [gamma(1 - 2/s) - gamma(1 - 1/s)^2]}
  for \eqn{s > 2}.

  Family \code{frechet2} has \eqn{a} known, and
  \eqn{\log(b)}{log(b)} and \eqn{\log(s - 2)}{log(s - 2)} are the
  default linear/additive predictors.
  The working weights are estimated by simulated Fisher scoring.

% Note that the \code{\link{loglog}} link ensures \eqn{s > 1}.
% whereas \code{frechet3} estimates it.
% Estimating \eqn{a} well requires a lot of data and
% a good choice of \code{ilocation} will help speed up convergence.
% For \code{frechet3} the default linear/additive predictors are
% \eqn{\log(D)}{log(D)},
% It would be great if the first linear/additive predictor was a direct
% function of the location parameter, but this can run the risk that
% the estimate is out of range (i.e., greater than \code{min(y)}).
}
\value{
  An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}).
  The object is used by modelling functions such as \code{\link{vglm}}
  and \code{\link{vgam}}.
}
\references{
Castillo, E., Hadi, A. S., Balakrishnan, N. and Sarabia, J. S. (2005)
\emph{Extreme Value and Related Models with Applications in Engineering and Science},
Hoboken, NJ, USA: Wiley-Interscience.
}
\author{ T. W. Yee }
\section{Warning}{
% Convergence for \code{frechet3} can be very slow, especially if the
% initial value for the location parameter is poor. Setting something
% like \code{maxit = 200, trace = TRUE} is a good idea.
  Family function \code{frechet2} may fail for low values of the shape
  parameter, e.g., near 2 or lower.
}
%\note{
% Family function \code{frechet3} uses
% the BFGS quasi-Newton update formula for the
% working weight matrices.
Consequently the estimated variance-covariance % matrix may be inaccurate or simply wrong! The standard errors must be % therefore treated with caution; these are computed in functions such % as \code{vcov()} and \code{summary()}. % If \code{fit} is a \code{frechet3} fit then \code{fit@extra$location} % is the final estimate of the location parameter, and % \code{fit@extra$LHSanchor} is the anchor point. %} \seealso{ \code{\link{rfrechet}}, \code{\link{gev}}. } \examples{ \dontrun{ set.seed(123) fdata <- data.frame(y1 = rfrechet(nn <- 1000, shape = 2 + exp(1))) with(fdata, hist(y1)) fit2 <- vglm(y1 ~ 1, frechet2, fdata, trace = TRUE) coef(fit2, matrix = TRUE) Coef(fit2) head(fitted(fit2)) with(fdata, mean(y1)) head(weights(fit2, type = "working")) vcov(fit2) } } \keyword{models} \keyword{regression} VGAM/man/frankUC.Rd0000644000176000001440000000314612136651105013433 0ustar ripleyusers\name{Frank} \alias{Frank} \alias{dfrank} \alias{pfrank} \alias{rfrank} \title{Frank's Bivariate Distribution} \description{ Density, distribution function, and random generation for the (one parameter) bivariate Frank distribution. } \usage{ dfrank(x1, x2, alpha, log = FALSE) pfrank(q1, q2, alpha) rfrank(n, alpha) } \arguments{ \item{x1, x2, q1, q2}{vector of quantiles.} \item{n}{number of observations. Must be a positive integer of length 1.} \item{alpha}{the positive association parameter \eqn{\alpha}{alpha}.} \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the density is returned. } } \value{ \code{dfrank} gives the density, \code{pfrank} gives the distribution function, and \code{rfrank} generates random deviates (a two-column matrix). } \references{ Genest, C. (1987) Frank's family of bivariate distributions. \emph{Biometrika}, \bold{74}, 549--555. } \author{ T. W. Yee } \details{ See \code{\link{frank}}, the \pkg{VGAM} family functions for estimating the association parameter by maximum likelihood estimation, for the formula of the cumulative distribution function and other details. } %\note{ %} \seealso{ \code{\link{frank}}. } \examples{ \dontrun{ N <- 100; alpha <- 8 x <- seq(-0.30, 1.30, len = N) ox <- expand.grid(x, x) z <- dfrank(ox[, 1], ox[, 2], alpha = alpha) contour(x, x, matrix(z, N, N)) z <- pfrank(ox[, 1], ox[, 2], alpha = alpha) contour(x, x, matrix(z, N, N)) alpha <- exp(4) plot(r <- rfrank(n = 3000, alpha = alpha)) par(mfrow = c(1, 2)) hist(r[, 1]) # Should be uniform hist(r[, 2]) # Should be uniform } } \keyword{distribution} VGAM/man/frank.Rd0000644000176000001440000000573412136651105013210 0ustar ripleyusers\name{frank} \alias{frank} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Frank's Bivariate Distribution Family Function } \description{ Estimate the association parameter of Frank's bivariate distribution by maximum likelihood estimation. } \usage{ frank(lapar = "loge", iapar = 2, nsimEIM = 250) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lapar}{ Link function applied to the (positive) association parameter \eqn{\alpha}{alpha}. See \code{\link{Links}} for more choices. } \item{iapar}{ Numeric. Initial value for \eqn{\alpha}{alpha}. If a convergence failure occurs try assigning a different value. } \item{nsimEIM}{ See \code{\link{CommonVGAMffArguments}}. 
} }
\details{
  The cumulative distribution function is
  \deqn{P(Y_1 \leq y_1, Y_2 \leq y_2) = H_{\alpha}(y_1,y_2) =
        \log_{\alpha} [1 + (\alpha^{y_1}-1)(\alpha^{y_2}-1)/
        (\alpha-1)] }{%
        P(Y1 <= y1, Y2 <= y2) = H_{alpha}(y1,y2) =
        log_{alpha} [1 + (alpha^(y1)-1)*(alpha^(y2)-1)/
        (alpha-1)] }
  for \eqn{\alpha \ne 1}{alpha != 1}.
  Note the logarithm here is to base \eqn{\alpha}{alpha}.
  The support of the function is the unit square.

  When \eqn{0 < \alpha < 1}{0 < alpha < 1} the probability density
  function \eqn{h_{\alpha}(y_1,y_2)}{h_{alpha}(y1,y2)} is symmetric
  with respect to the lines \eqn{y_2=y_1}{y2=y1} and
  \eqn{y_2=1-y_1}{y2=1-y1}.
  When \eqn{\alpha > 1}{alpha > 1} then
  \eqn{h_{\alpha}(y_1,y_2) = h_{1/\alpha}(1-y_1,y_2)}{h_{alpha}(y1,y2) =
       h_{1/alpha}(1-y1,y2)}.
  If \eqn{\alpha=1}{alpha=1} then
  \eqn{H(y_1,y_2) = y_1 y_2}{H(y1,y2)=y1*y2},
  i.e., uniform on the unit square.
  As \eqn{\alpha}{alpha} approaches 0 then
  \eqn{H(y_1,y_2) = \min(y_1,y_2)}{H(y1,y2)=min(y1,y2)}.
  As \eqn{\alpha}{alpha} approaches infinity then
  \eqn{H(y_1,y_2) = \max(0, y_1+y_2-1)}{H(y1,y2)=max(0,y1+y2-1)}.

  The default is to use Fisher scoring implemented using
  \code{\link{rfrank}}.
  For intercept-only models an alternative is to set
  \code{nsimEIM=NULL} so that a variant of Newton-Raphson is used.
}
\value{
  An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}).
  The object is used by modelling functions such as \code{\link{vglm}}
  and \code{\link{vgam}}.
}
%% improve the references
\references{
Genest, C. (1987)
Frank's family of bivariate distributions.
\emph{Biometrika},
\bold{74}, 549--555.
}
\author{ T. W. Yee }
\note{
  The response must be a two-column matrix.
  Currently, the fitted value is a matrix with two columns and values
  equal to a half.
  This is because the marginal distributions correspond to standard
  uniform distributions.
}
\seealso{
  \code{\link{rfrank}},
  \code{\link{fgm}}.
}
\examples{
\dontrun{
ymat <- rfrank(n = 2000, alpha = exp(4))
plot(ymat, col = "blue")
fit <- vglm(ymat ~ 1, fam = frank, trace = TRUE)
coef(fit, matrix = TRUE)
Coef(fit)
vcov(fit)
head(fitted(fit))
summary(fit)
}
}
\keyword{models}
\keyword{regression}
VGAM/man/fnormal1.Rd0000644000176000001440000001040512136651105013615 0ustar ripleyusers\name{fnormal1}
\alias{fnormal1}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Folded Normal Distribution Family Function }
\description{
  Fits a (generalized) folded (univariate) normal distribution.
}
\usage{
fnormal1(lmean = "identity", lsd = "loge", imean = NULL, isd = NULL,
         a1 = 1, a2 = 1, nsimEIM = 500, imethod = 1, zero = NULL)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
  \item{lmean, lsd}{
  Link functions for the mean and standard deviation parameters of
  the usual univariate normal distribution.
  They are \eqn{\mu}{mu} and \eqn{\sigma}{sigma} respectively.
  See \code{\link{Links}} for more choices.
  }
% \item{emean, esd}{
% List. Extra argument for each of the links.
% See \code{earg} in \code{\link{Links}} for general information.
% emean=list(), esd=list(),
% }
  \item{imean, isd}{
  Optional initial values for \eqn{\mu}{mu} and \eqn{\sigma}{sigma}.
  A \code{NULL} means a value is computed internally.
  See \code{\link{CommonVGAMffArguments}}.
  }
  \item{a1, a2}{
  Positive weights, called \eqn{a_1}{a1} and \eqn{a_2}{a2} below.
  Each must be of length 1.
  }
  \item{nsimEIM, imethod, zero}{
  See \code{\link{CommonVGAMffArguments}}.
  }
}
\details{
  If a random variable has an ordinary univariate normal distribution
  then the absolute value of that random variable has an ordinary
  \emph{folded normal distribution}.
  That is, the sign has not been recorded; only the magnitude has been
  measured.
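  For example (an illustrative sketch only, using base R),
\preformatted{
x <- rnorm(1000, mean = 2, sd = 3)
y <- abs(x)   # y has an ordinary folded normal distribution
}
  so that \code{y} retains the magnitude of \code{x} but not its sign.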
More generally, suppose \eqn{X} is normal with mean \code{mean} and standard deviation \code{sd}. Let \eqn{Y=\max(a_1 X, -a_2 X)}{Y=max(a1*X, -a2*X)} where \eqn{a_1}{a1} and \eqn{a_2}{a2} are positive weights. This means that \eqn{Y = a_1 X}{Y = a1*X} for \eqn{X > 0}, and \eqn{Y = a_2 X}{Y = a2*X} for \eqn{X < 0}. Then \eqn{Y} is said to have a \emph{generalized folded normal distribution}. The ordinary folded normal distribution corresponds to the special case \eqn{a_1 = a_2 = 1}{a1 = a2 = 1}. The probability density function of the ordinary folded normal distribution can be written \code{dnorm(y, mean, sd) + dnorm(y, -mean, sd)} for \eqn{y \ge 0}. By default, \code{mean} and \code{log(sd)} are the linear/additive predictors. Having \code{mean=0} and \code{sd=1} results in the \emph{half-normal} distribution. The mean of an ordinary folded normal distribution is \deqn{E(Y) = \sigma \sqrt{2/\pi} \exp(-\mu^2/(2\sigma^2)) + \mu [1-2\Phi(-\mu/\sigma)] }{% E(Y) = sigma*sqrt(2/pi)*exp(-mu^2/(2*sigma^2)) + mu*[1-2*Phi(-mu/sigma)] } and these are returned as the fitted values. Here, \eqn{\Phi()}{Phi} is the cumulative distribution function of a standard normal (\code{\link[stats:Normal]{pnorm}}). } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Lin, P. C. (2005) Application of the generalized folded-normal distribution to the process capability measures. \emph{International Journal of Advanced Manufacturing Technology}, \bold{26}, 825--830. } \author{ Thomas W. Yee } \note{ The response variable for this family function is the same as \code{\link{normal1}} except positive values are required. Reasonably good initial values are needed. Fisher scoring using simulation is implemented. See \code{\link{CommonVGAMffArguments}} for general information about many of these arguments. } \section{Warning }{ Under- or over-flow may occur if the data is ill-conditioned. It is recommended that several different initial values be used to help avoid local solutions. } \seealso{ \code{\link{rfnorm}}, \code{\link{normal1}}, \code{\link[stats:Normal]{dnorm}}, \code{\link{skewnormal1}}. } \examples{ \dontrun{ m <- 2; SD <- exp(1) y <- rfnorm(n <- 1000, m = m, sd = SD) hist(y, prob = TRUE, main = paste("fnormal1(m = ", m, ", sd = ", round(SD, 2), ")")) fit <- vglm(y ~ 1, fam = fnormal1, trace = TRUE) coef(fit, matrix = TRUE) (Cfit <- Coef(fit)) mygrid <- seq(min(y), max(y), len = 200) # Add the fit to the histogram lines(mygrid, dfnorm(mygrid, Cfit[1], Cfit[2]), col = "orange") } } \keyword{models} \keyword{regression} VGAM/man/fnormUC.Rd0000644000176000001440000000436712136651105013461 0ustar ripleyusers\name{Fnorm} \alias{Fnorm} \alias{dfnorm} \alias{pfnorm} \alias{qfnorm} \alias{rfnorm} \title{The Folded-Normal Distribution} \description{ Density, distribution function, quantile function and random generation for the (generalized) folded-normal distribution. } \usage{ dfnorm(x, mean = 0, sd = 1, a1 = 1, a2 = 1) pfnorm(q, mean = 0, sd = 1, a1 = 1, a2 = 1) qfnorm(p, mean = 0, sd = 1, a1 = 1, a2 = 1, ...) rfnorm(n, mean = 0, sd = 1, a1 = 1, a2 = 1) } \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. Must be a positive integer of length 1.} \item{mean, sd}{ see \code{\link[stats:Normal]{rnorm}}. } \item{a1, a2}{ see \code{\link{fnormal1}}. } \item{\ldots}{ Arguments that can be passed into \code{\link[stats]{uniroot}}. 
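  For example, a call along the lines of
  \code{qfnorm(0.9, mean = 1, tol = 1e-10)} should pass \code{tol}
  through to \code{\link[stats]{uniroot}} and hence give a tighter
  solution (this particular call is illustrative only).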
} }
\value{
  \code{dfnorm} gives the density,
  \code{pfnorm} gives the distribution function,
  \code{qfnorm} gives the quantile function, and
  \code{rfnorm} generates random deviates.
}
\author{ T. W. Yee }
\details{
  See \code{\link{fnormal1}}, the \pkg{VGAM} family function for
  estimating the parameters, for the formula of the probability
  density function and other details.
}
\note{
  \code{qfnorm} runs very slowly because it calls
  \code{\link[stats]{uniroot}} for each value of the argument \code{p}.
  The solution is consequently not exact; the \code{...} can be used
  to obtain a more accurate solution if necessary.
}
\seealso{
  \code{\link{fnormal1}},
  \code{\link[stats]{uniroot}}.
}
\examples{
\dontrun{
m <- 1.5; SD <- exp(0)
x <- seq(-1, 4, len = 501)
plot(x, dfnorm(x, m = m, sd = SD), type = "l", ylim = 0:1, las = 1,
     ylab = paste("fnorm(m = ", m, ", sd = ", round(SD, digits = 3), ")"),
     main = "Blue is density, red is cumulative distribution function",
     sub = "Purple lines are the 10,20,...,90 percentiles", col = "blue")
lines(x, pfnorm(x, m = m, sd = SD), col = "red")
abline(h = 0)
probs <- seq(0.1, 0.9, by = 0.1)
Q <- qfnorm(probs, m = m, sd = SD)
lines(Q, dfnorm(Q, m = m, sd = SD), col = "purple", lty = 3, type = "h")
lines(Q, pfnorm(Q, m = m, sd = SD), col = "purple", lty = 3, type = "h")
abline(h = probs, col = "purple", lty = 3)
max(abs(pfnorm(Q, m = m, sd = SD) - probs)) # Should be 0
}
}
\keyword{distribution}
VGAM/man/fittedvlm.Rd0000644000176000001440000000502212136651105014073 0ustar ripleyusers\name{fittedvlm}
\alias{fittedvlm}
\alias{fitted.values.vlm}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{Fitted Values of a VLM object}
\description{
  Extractor function for the fitted values of a model object that
  inherits from a \emph{vector linear model} (VLM), e.g., a model of
  class \code{"vglm"}.
}
\usage{
fittedvlm(object, matrix.arg = TRUE, ...)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
  \item{object}{ a model object that inherits from a VLM.
  }
  \item{matrix.arg}{ Logical. Return the answer as a matrix?
  If \code{FALSE} then it will be a vector. }
  \item{\dots}{ Currently unused. }
}
\details{
  The ``fitted values'' usually correspond to the mean response;
  however, because the \pkg{VGAM} package fits so many models, they
  sometimes refer to quantities such as quantiles.
  The mean may not even exist, e.g., for a Cauchy distribution.

  Note that the fitted values are output from the \code{@linkinv} slot
  of the \pkg{VGAM} family function, where the \code{eta} argument is
  the \eqn{n \times M}{n x M} matrix of linear predictors.
}
\value{
  The fitted values evaluated at the final IRLS iteration.
}
\references{
Chambers, J. M. and Hastie, T. J. (eds) (1992)
\emph{Statistical Models in S}.
Wadsworth & Brooks/Cole.
}
\author{ Thomas W. Yee }
\note{
  This function is one of several extractor functions for the
  \pkg{VGAM} package.
  Others include \code{coef}, \code{deviance}, \code{weights} and
  \code{constraints} etc.
  This function is equivalent to the methods function for the generic
  function \code{fitted.values}.

  If \code{fit} is a VLM or VGLM then \code{fitted(fit)} and
  \code{predict(fit, type = "response")} should be equivalent (see
  \code{\link{predictvglm}}).
  The latter has the advantage that it handles a \code{newdata}
  argument so that the fitted values can be computed for a different
  data set.
}
\seealso{
  \code{\link[stats]{fitted}},
  \code{\link{predictvglm}},
  \code{\link{vglmff-class}}.
}
\examples{
# Categorical regression example 1
pneumo <- transform(pneumo, let = log(exposure.time))
(fit <- vglm(cbind(normal, mild, severe) ~ let, propodds, pneumo))
fitted(fit)

# LMS quantile regression example 2
fit <- vgam(BMI ~ s(age, df = c(4, 2)), lms.bcn(zero = 1),
            data = bmi.nz, trace = TRUE)
head(predict(fit, type = "response")) # Equal to the following two:
head(fitted(fit))
predict(fit, type = "response", newdata = head(bmi.nz))
}
\keyword{models}
\keyword{regression}
VGAM/man/fiskUC.Rd0000644000176000001440000000323412136651105013264 0ustar ripleyusers\name{Fisk}
\alias{Fisk}
\alias{dfisk}
\alias{pfisk}
\alias{qfisk}
\alias{rfisk}
\title{The Fisk Distribution}
\description{
  Density, distribution function, quantile function and random
  generation for the Fisk distribution with shape parameter
  \code{shape1.a} and scale parameter \code{scale}.
}
\usage{
dfisk(x, shape1.a, scale = 1, log = FALSE)
pfisk(q, shape1.a, scale = 1)
qfisk(p, shape1.a, scale = 1)
rfisk(n, shape1.a, scale = 1)
}
\arguments{
  \item{x, q}{vector of quantiles.}
  \item{p}{vector of probabilities.}
  \item{n}{number of observations. If \code{length(n) > 1} then the
  length is taken to be the number required.}
  \item{shape1.a}{shape parameter.}
  \item{scale}{scale parameter.}
  \item{log}{
  Logical.
  If \code{log = TRUE} then the logarithm of the density is returned.
  }
}
\value{
  \code{dfisk} gives the density,
  \code{pfisk} gives the distribution function,
  \code{qfisk} gives the quantile function, and
  \code{rfisk} generates random deviates.
}
\references{
Kleiber, C. and Kotz, S. (2003)
\emph{Statistical Size Distributions in Economics and Actuarial Sciences},
Hoboken, NJ, USA: Wiley-Interscience.
}
\author{ T. W. Yee }
\details{
  See \code{\link{fisk}}, which is the \pkg{VGAM} family function for
  estimating the parameters by maximum likelihood estimation.
}
\note{
  The Fisk distribution is a special case of the 4-parameter
  generalized beta II distribution.
}
\seealso{
  \code{\link{fisk}},
  \code{\link{genbetaII}}.
}
\examples{
fdata <- data.frame(y = rfisk(n = 1000, 4, 6))
fit <- vglm(y ~ 1, fisk, data = fdata, trace = TRUE, crit = "coef")
coef(fit, matrix = TRUE)
Coef(fit)
}
\keyword{distribution}
VGAM/man/fisk.Rd0000644000176000001440000000542612136651105013041 0ustar ripleyusers\name{fisk}
\alias{fisk}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Fisk Distribution Family Function }
\description{
  Maximum likelihood estimation of the 2-parameter Fisk distribution.
}
\usage{
fisk(lshape1.a = "loge", lscale = "loge",
     ishape1.a = NULL, iscale = NULL, zero = NULL)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
  \item{lshape1.a, lscale}{
  Parameter link functions applied to the
  (positive) parameters \code{a} and \code{scale}.
  See \code{\link{Links}} for more choices.
  }
  \item{ishape1.a, iscale}{
  Optional initial values for \code{a} and \code{scale}.
  }
  \item{zero}{
  An integer-valued vector specifying which
  linear/additive predictors are modelled as intercepts only.
  Here, the values must be from the set \{1,2\} which correspond to
  \code{a}, \code{scale}, respectively.
  }
}
\details{
  The 2-parameter Fisk (aka log-logistic) distribution is the
  4-parameter generalized beta II distribution with shape parameter
  \eqn{q=p=1}.
  It is also the 3-parameter Singh-Maddala distribution with shape
  parameter \eqn{q=1}, as well as the Dagum distribution with
  \eqn{p=1}.
  More details can be found in Kleiber and Kotz (2003).
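  The Dagum special case can be checked numerically; the following
  sketch is illustrative only and assumes the \code{\link{ddagum}}
  density with the argument names \code{shape1.a}, \code{scale} and
  \code{shape2.p} as documented elsewhere in \pkg{VGAM}.
\preformatted{
y <- c(0.5, 1, 2, 5); a <- 2; b <- 3
max(abs(ddagum(y, shape1.a = a, scale = b, shape2.p = 1) -
        dfisk(y, shape1.a = a, scale = b)))   # Should be near 0
}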
  The Fisk distribution has density
  \deqn{f(y) = a y^{a-1} / [b^a \{1 + (y/b)^a\}^2]}{%
        f(y) = a y^(a-1) / [b^a (1 + (y/b)^a)^2]}
  for \eqn{a > 0}, \eqn{b > 0}, \eqn{y \geq 0}{y >= 0}.
  Here, \eqn{b} is the scale parameter \code{scale}, and \eqn{a} is a
  shape parameter.
  The cumulative distribution function is
  \deqn{F(y) = 1 - [1 + (y/b)^a]^{-1} = [1 + (y/b)^{-a}]^{-1}.}{%
        F(y) = 1 - [1 + (y/b)^a]^(-1) = [1 + (y/b)^(-a)]^(-1).}
  The mean is
  \deqn{E(Y) = b \, \Gamma(1 + 1/a) \, \Gamma(1 - 1/a)}{%
        E(Y) = b gamma(1 + 1/a) gamma(1 - 1/a)}
  provided \eqn{a > 1}; these are returned as the fitted values.
}
\value{
  An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}).
  The object is used by modelling functions such as \code{\link{vglm}}
  and \code{\link{vgam}}.
}
\references{
Kleiber, C. and Kotz, S. (2003)
\emph{Statistical Size Distributions in Economics and Actuarial Sciences},
Hoboken, NJ, USA: Wiley-Interscience.
}
\author{ T. W. Yee }
\note{
  See the note in \code{\link{genbetaII}}.
}
\seealso{
  \code{\link{Fisk}},
  \code{\link{genbetaII}},
  \code{\link{betaII}},
  \code{\link{dagum}},
  \code{\link{sinmad}},
  \code{\link{invlomax}},
  \code{\link{lomax}},
  \code{\link{paralogistic}},
  \code{\link{invparalogistic}}.
}
\examples{
fdata <- data.frame(y = rfisk(n = 200, exp(1), exp(2)))
fit <- vglm(y ~ 1, fisk, fdata, trace = TRUE)
fit <- vglm(y ~ 1, fisk(ishape1.a = exp(1)), fdata, trace = TRUE)
coef(fit, matrix = TRUE)
Coef(fit)
summary(fit)
}
\keyword{models}
\keyword{regression}
VGAM/man/fisherz.Rd0000644000176000001440000000552112136651105013553 0ustar ripleyusers\name{fisherz}
\alias{fisherz}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Fisher's Z Link Function }
\description{
  Computes the Fisher Z transformation, including its inverse and the
  first two derivatives.
}
\usage{
fisherz(theta, bminvalue = NULL, bmaxvalue = NULL,
        inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
  \item{theta}{
  Numeric or character.
  See below for further details.
  }
  \item{bminvalue, bmaxvalue}{
  Optional boundary values.
  Values of \code{theta} which are less than or equal to \eqn{-1} can
  be replaced by \code{bminvalue} before computing the link function
  value.
  Values of \code{theta} which are greater than or equal to \eqn{1}
  can be replaced by \code{bmaxvalue} before computing the link
  function value.
  See \code{\link{Links}}.
  }
  \item{inverse, deriv, short, tag}{
  Details at \code{\link{Links}}.
  }
}
\details{
  The \code{fisherz} link function is commonly used for parameters
  that lie between \eqn{-1} and \eqn{1}.
  Numerical values of \code{theta} close to \eqn{-1} or \eqn{1} or out
  of range result in \code{Inf}, \code{-Inf}, \code{NA} or \code{NaN}.
}
\value{
  For \code{deriv = 0},
  \code{0.5 * log((1+theta)/(1-theta))}
  (same as \code{atanh(theta)})
  when \code{inverse = FALSE},
  and if \code{inverse = TRUE} then
  \code{(exp(2*theta)-1)/(exp(2*theta)+1)}
  (same as \code{tanh(theta)}).
  For \code{deriv = 1}, the function returns
  \emph{d} \code{theta} / \emph{d} \code{eta} as a function of
  \code{theta} if \code{inverse = FALSE}, else if \code{inverse = TRUE}
  then it returns the reciprocal.
  Here, all logarithms are natural logarithms, i.e., to base \emph{e}.
}
\references{
McCullagh, P. and Nelder, J. A. (1989)
\emph{Generalized Linear Models}, 2nd ed. London: Chapman & Hall.
}
\author{ Thomas W. Yee }
\note{
  Numerical instability may occur when \code{theta} is close to
  \eqn{-1} or \eqn{1}.
  One way of overcoming this is to use, e.g., \code{bminvalue}.

  The link function \code{\link{rhobit}} is very similar to
  \code{fisherz}, being just twice the value of \code{fisherz}.
  This link function may be renamed to \code{atanhlink} in the near
  future.
}
\seealso{
  \code{\link{Links}},
  \code{\link{rhobit}},
  \code{\link{atanh}},
  \code{\link{logit}}.
}
\examples{
theta <- seq(-0.99, 0.99, by = 0.01)
y <- fisherz(theta)
\dontrun{
plot(theta, y, type = "l", las = 1, ylab = "",
     main = "fisherz(theta)", col = "blue")
abline(v = (-1):1, h = 0, lty = 2, col = "gray")
}
x <- c(seq(-1.02, -0.98, by = 0.01), seq(0.97, 1.02, by = 0.01))
fisherz(x) # Has NAs
fisherz(x, bminvalue = -1 + .Machine$double.eps,
           bmaxvalue =  1 - .Machine$double.eps) # Has no NAs
}
\keyword{math}
\keyword{models}
\keyword{regression}
VGAM/man/finney44.Rd0000644000176000001440000000261012136651105013545 0ustar ripleyusers\name{finney44}
\alias{finney44}
\docType{data}
\title{ Toxicity trial for insects
%% ~~ data name/kind ... ~~
}
\description{
  A data frame of a toxicity trial.
%% ~~ A concise (1-5 lines) description of the dataset. ~~
}
\usage{data(finney44)}
\format{
  A data frame with 6 observations on the following 3 variables.
  \describe{
    \item{\code{pconc}}{a numeric vector, percent concentration of
    pyrethrins. }
    \item{\code{hatched}}{number of eggs that hatched. }
    \item{\code{unhatched}}{number of eggs that did not hatch. }
  }
}
\details{
  Finney (1944) describes a toxicity trial of five different
  concentrations of pyrethrins (percent) plus a control that were
  administered to eggs of \emph{Ephestia kuhniella}.
  The natural mortality rate is large, and a common adjustment is to
  use Abbott's formula.
}
%\source{
%% ~~ reference to a publication or URL from which the data were obtained ~~
%}
\references{
Finney, D. J. (1944)
The application of the probit method to toxicity test data adjusted
for mortality in the controls.
\emph{Annals of Applied Biology}, \bold{31}, 68--74.

Abbott, W. S. (1925)
A method of computing the effectiveness of an insecticide.
\emph{Journal of Economic Entomology}, \bold{18}, 265--267.
%% ~~ possibly secondary sources and usages ~~
}
\examples{
data(finney44)
transform(finney44, mortality = unhatched / (hatched + unhatched))
}
\keyword{datasets}
VGAM/man/fill.Rd0000644000176000001440000002155312136651105013032 0ustar ripleyusers\name{fill}
\alias{fill}
\alias{fill1}
%- \alias{fill2}
%- \alias{fill3}
%- \alias{fill4}
%- \alias{fill5}
%- \alias{fill6}
%- \alias{fill7}
%- \alias{fill8}
%- \alias{fill9}
%- \alias{fill10}
%- \alias{fill11}
%- \alias{fill12}
%- \alias{fill13}
%- \alias{fill14}
%- \alias{fill15}
%- \alias{fill16}
%- \alias{fill17}
%- \alias{fill18}
%- \alias{fill19}
%- \alias{fill20}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Creates a Matrix of Appropriate Dimension }
\description{
  A support function for the argument \code{xij}, it generates a
  matrix of an appropriate dimension.
}
\usage{
fill(x, values = 0, ncolx = ncol(x))
}
%- maybe also 'usage' for other objects documented here.
\arguments{
  \item{x}{
  A vector or matrix which is used to determine the dimension of the
  answer, in particular, the number of rows.
  After converting \code{x} to a matrix if necessary, the answer is a
  matrix of \code{values} values, of dimension \code{nrow(x)} by
  \code{ncolx}.
  }
  \item{values}{
  Numeric.
  The answer contains these values, which are recycled
  \emph{columnwise} if necessary, i.e., as
  \code{matrix(values, ..., byrow=TRUE)}.
} \item{ncolx}{ The number of columns of the returned matrix. The default is the number of columns of \code{x}. } } \details{ The \code{xij} argument for \code{\link{vglm}} allows the user to input variables specific to each linear/additive predictor. For example, consider the bivariate logit model where the first/second linear/additive predictor is the logistic regression of the first/second binary response respectively. The third linear/additive predictor is \code{log(OR) = eta3}, where \code{OR} is the odds ratio. If one has ocular pressure as a covariate in this model then \code{xij} is required to handle the ocular pressure for each eye, since these will be different in general. [This contrasts with a variable such as \code{age}, the age of the person, which has a common value for both eyes.] In order to input these data into \code{\link{vglm}} one often finds that functions \code{fill}, \code{fill1}, etc. are useful. All terms in the \code{xij} and \code{formula} arguments in \code{\link{vglm}} must appear in the \code{form2} argument too. } \value{ \code{matrix(values, nrow=nrow(x), ncol=ncolx)}, i.e., a matrix consisting of values \code{values}, with the number of rows matching \code{x}, and the default number of columns is the number of columns of \code{x}. } \references{ More information can be found at \url{http://www.stat.auckland.ac.nz/~yee}. } % \section{Warning }{ % Care is needed in such cases. % See the examples below. % %} \author{ T. W. Yee } \note{ The effect of the \code{xij} argument is after other arguments such as \code{exchangeable} and \code{zero}. Hence \code{xij} does not affect constraint matrices. Additionally, there are currently 3 other identical \code{fill} functions, called \code{fill1}, \code{fill2} and \code{fill3}; if you need more then assign \code{fill4 = fill5 = fill1} etc. The reason for this is that if more than one \code{fill} function is needed then they must be unique. For example, if \eqn{M=4} then \code{xij = op ~ lop + rop + fill(mop) + fill(mop)} would reduce to \code{xij = op ~ lop + rop + fill(mop)}, whereas \code{xij = op ~ lop + rop + fill1(mop) + fill2(mop)} would retain all \eqn{M} terms, which is needed. % The constraint matrices, as returned by \code{constraints}, do not % have a different meaning when \code{xij} is used. In Examples 1 to 3 below, the \code{xij} argument illustrates covariates that are specific to a linear predictor. Here, \code{lop}/\code{rop} are the ocular pressures of the left/right eye in an artificial dataset, and \code{mop} is their mean. Variables \code{leye} and \code{reye} might be the presence/absence of a particular disease on the LHS/RHS eye respectively. % % Examples 1 and 2 are deliberately misspecified. % The output from, e.g., \code{coef(fit, matrix=TRUE)}, looks wrong but % is correct because the coefficients are multiplied by the zeros % produced from \code{fill}. In Example 3, the \code{xij} argument illustrates fitting the (exchangeable) model where there is a common smooth function of the ocular pressure. One should use regression splines since \code{\link{s}} in \code{\link{vgam}} does not handle the \code{xij} argument. However, regression splines such as \code{\link[splines]{bs}} and \code{\link[splines]{ns}} need to have the same basis functions here for both functions, and Example 3 illustrates a trick involving a function \code{BS} to obtain this, e.g., same knots. 
Although regression splines create more than a single column per term in the model matrix, \code{fill(BS(lop,rop))} creates the required (same) number of columns. } \seealso{ \code{\link{vglm.control}}, \code{\link{vglm}}, \code{\link{multinomial}}. } \examples{ fill(runif(5)) fill(runif(5), ncol = 3) fill(runif(5), val = 1, ncol = 3) # Generate eyes data for the examples below. Eyes are independent (OR=1). nn <- 1000 # Number of people eyesdat = data.frame(lop = round(runif(nn), 2), rop = round(runif(nn), 2), age = round(rnorm(nn, 40, 10))) eyesdat <- transform(eyesdat, mop = (lop + rop) / 2, # Mean ocular pressure op = (lop + rop) / 2, # Value unimportant unless plotting # op = lop, # Choose this if plotting eta1 = 0 - 2*lop + 0.04*age, # Linear predictor for left eye eta2 = 0 - 2*rop + 0.04*age) # Linear predictor for right eye eyesdat <- transform(eyesdat, leye = rbinom(nn, size = 1, prob = logit(eta1, inverse = TRUE)), reye = rbinom(nn, size = 1, prob = logit(eta2, inverse = TRUE))) # Example 1 # All effects are linear fit1 <- vglm(cbind(leye,reye) ~ op + age, family = binom2.or(exchangeable=TRUE, zero=3), data=eyesdat, trace=TRUE, xij = list(op ~ lop + rop + fill(lop)), form2 = ~ op + lop + rop + fill(lop) + age) head(model.matrix(fit1, type="lm")) # LM model matrix head(model.matrix(fit1, type="vlm")) # Big VLM model matrix coef(fit1) coef(fit1, matrix = TRUE) # Unchanged with 'xij' constraints(fit1) max(abs(predict(fit1)-predict(fit1, new = eyesdat))) # Predicts correctly summary(fit1) \dontrun{ plotvgam(fit1, se = TRUE) # Wrong, e.g., because it plots against op, not lop. # So set op=lop in the above for a correct plot. } # Example 2 # Model OR as a linear function of mop fit2 <- vglm(cbind(leye,reye) ~ op + age, data = eyesdat, trace = TRUE, binom2.or(exchangeable = TRUE, zero = NULL), xij = list(op ~ lop + rop + mop), form2 = ~ op + lop + rop + mop + age) head(model.matrix(fit2, type = "lm")) # LM model matrix head(model.matrix(fit2, type = "vlm")) # Big VLM model matrix coef(fit2) coef(fit2, matrix = TRUE) # Unchanged with 'xij' max(abs(predict(fit2) - predict(fit2, new = eyesdat))) # Predicts correctly summary(fit2) \dontrun{ plotvgam(fit2, se = TRUE) # Wrong because it plots against op, not lop. } # Example 3. This model uses regression splines on ocular pressure. # It uses a trick to ensure common basis functions. BS <- function(x, ...) bs(c(x,...), df = 3)[1:length(x), , drop = FALSE] # trick fit3 <- vglm(cbind(leye,reye) ~ BS(lop,rop) + age, family = binom2.or(exchangeable = TRUE, zero = 3), data = eyesdat, trace = TRUE, xij = list(BS(lop,rop) ~ BS(lop,rop) + BS(rop,lop) + fill(BS(lop,rop))), form2 = ~ BS(lop,rop) + BS(rop,lop) + fill(BS(lop,rop)) + lop + rop + age) head(model.matrix(fit3, type = "lm")) # LM model matrix head(model.matrix(fit3, type = "vlm")) # Big VLM model matrix coef(fit3) coef(fit3, matrix = TRUE) summary(fit3) fit3@smart.prediction max(abs(predict(fit3) - predict(fit3, new = eyesdat))) # Predicts correctly predict(fit3, new = head(eyesdat)) # Note the 'scalar' OR, i.e., zero=3 max(abs(head(predict(fit3)) - predict(fit3, new = head(eyesdat)))) # Should be 0 \dontrun{ plotvgam(fit3, se = TRUE, xlab = "lop") # Correct } } \keyword{models} \keyword{regression} %This function is unrelated to the \code{zero} argument found in many %\pkg{VGAM} family functions. [zz implies I should call it %\code{fill1(x, value=0, ncolx=ncol(x))} and create .Rd file for %\code{zero} argument.] 
%eyesdat$leye <- ifelse(runif(n) < exp(eta1)/(1+exp(eta1)), 1, 0) %eyesdat$reye <- ifelse(runif(n) < exp(eta2)/(1+exp(eta2)), 1, 0) % \deqn{logit P(Y_k=1) = f_k(x_{ijk}) }{% % logit P(Y_k=1) = f_k(x_{ijk}) } % for \code{k=1,2}. % fill1(lop, ncol=ncol(BS(lop,rop,mop))), data=eyesdat) % Models using the \code{xij} argument may or may not predict correctly, % and inference obtained using \code{summary} may be incorrect. VGAM/man/fgmUC.Rd0000644000176000001440000000273412136651105013105 0ustar ripleyusers\name{Fgm} \alias{Fgm} \alias{dfgm} \alias{pfgm} \alias{rfgm} \title{Farlie-Gumbel-Morgenstern's Bivariate Distribution} \description{ Density, distribution function, and random generation for the (one parameter) bivariate Farlie-Gumbel-Morgenstern's distribution. } \usage{ dfgm(x1, x2, alpha, log = FALSE) pfgm(q1, q2, alpha) rfgm(n, alpha) } \arguments{ \item{x1, x2, q1, q2}{vector of quantiles.} \item{n}{number of observations. Must be a positive integer of length 1.} \item{alpha}{the association parameter.} \item{log}{ Logical. If \code{TRUE} then the logarithm is returned. } } \value{ \code{dfgm} gives the density, \code{pfgm} gives the distribution function, and \code{rfgm} generates random deviates (a two-column matrix). } %\references{ % %} \author{ T. W. Yee } \details{ See \code{\link{fgm}}, the \pkg{VGAM} family functions for estimating the parameter by maximum likelihood estimation, for the formula of the cumulative distribution function and other details. } %\note{ %} \seealso{ \code{\link{fgm}}. } \examples{ \dontrun{ N = 101; x = seq(0.0, 1.0, len = N); alpha = 0.7 ox = expand.grid(x, x) z = dfgm(ox[,1], ox[,2], alpha=alpha) contour(x, x, matrix(z, N, N), col="blue") z = pfgm(ox[,1], ox[,2], alpha=alpha) contour(x, x, matrix(z, N, N), col="blue") plot(r <- rfgm(n = 3000, alpha = alpha), col = "blue") par(mfrow = c(1, 2)) hist(r[, 1]) # Should be uniform hist(r[, 2]) # Should be uniform } } \keyword{distribution} VGAM/man/fgm.Rd0000644000176000001440000000455112136651105012654 0ustar ripleyusers\name{fgm} \alias{fgm} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Farlie-Gumbel-Morgenstern's Bivariate Distribution Family Function } \description{ Estimate the association parameter of Farlie-Gumbel-Morgenstern's bivariate distribution by maximum likelihood estimation. } \usage{ fgm(lapar="rhobit", iapar = NULL, imethod = 1, nsimEIM = 200) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lapar, iapar, imethod, nsimEIM}{ Details at \code{\link{CommonVGAMffArguments}}. See \code{\link{Links}} for more link function choices. } } \details{ The cumulative distribution function is \deqn{P(Y_1 \leq y_1, Y_2 \leq y_2) = y_1 y_2 ( 1 + \alpha (1 - y_1) (1 - y_2) ) }{% P(Y1 <= y1, Y2 <= y2) = y1 * y2 * ( 1 + alpha * (1 - y1) * (1 - y2) ) } for \eqn{-1 < \alpha < 1}{-1 < alpha < 1}. The support of the function is the unit square. The marginal distributions are the standard uniform distributions. When \eqn{\alpha = 0}{alpha=0} the random variables are independent. % A variant of Newton-Raphson is used, which only seems to work for an % intercept model. % It is a very good idea to set \code{trace=TRUE}. % This \pkg{VGAM} family function is prone to numerical difficulties. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Castillo, E., Hadi, A. S., Balakrishnan, N. Sarabia, J. S. 
(2005) \emph{Extreme Value and Related Models with Applications in Engineering and Science}, Hoboken, NJ, USA: Wiley-Interscience. } \author{ T. W. Yee } \note{ The response must be a two-column matrix. Currently, the fitted value is a matrix with two columns and values equal to 0.5. This is because each marginal distribution corresponds to a standard uniform distribution. % This \pkg{VGAM} family function should be used with caution. } \seealso{ \code{\link{rfgm}}, \code{\link{frank}}, \code{\link{morgenstern}}. } \examples{ ymat <- rfgm(n = 1000, alpha = rhobit(3, inverse = TRUE)) \dontrun{plot(ymat, col = "blue")} fit <- vglm(ymat ~ 1, fam = fgm, trace = TRUE) coef(fit, matrix = TRUE) Coef(fit) head(fitted(fit)) } \keyword{models} \keyword{regression} % for real \eqn{\alpha}{alpha} (the range is data-dependent). VGAM/man/fff.Rd0000644000176000001440000000621612136651105012644 0ustar ripleyusers\name{fff} \alias{fff} %- Also NEED an '\alias' for EACH other topic documented here. \title{ F Distribution Family Function } \description{ Maximum likelihood estimation of the (2-parameter) F distribution. } \usage{ fff(link = "loge", idf1 = NULL, idf2 = NULL, nsimEIM = 100, imethod = 1, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{link}{ Parameter link function for both parameters. See \code{\link{Links}} for more choices. The default keeps the parameters positive. } \item{idf1, idf2}{ Numeric and positive. Initial value for the parameters. The default is to choose each value internally. } \item{nsimEIM, zero}{ See \code{\link{CommonVGAMffArguments}} for more information. } \item{imethod}{ Initialization method. Either the value 1 or 2. If both fail try setting values for \code{idf1} and \code{idf2}. } % \item{zero}{ % An integer-valued vector specifying which % linear/additive predictors are modelled as intercepts only. % The value must be from the set \{1,2\}, corresponding % respectively to \eqn{df1} and \eqn{df2}. % By default all linear/additive predictors are modelled as % a linear combination of the explanatory variables. % % % } } \details{ The F distribution is named after Fisher and has a density function that has two parameters, called \code{df1} and \code{df2} here. This function treats these degrees of freedom as \emph{positive reals} rather than integers. The mean of the distribution is \eqn{df2/(df2-2)} provided \eqn{df2>2}, and its variance is \eqn{2 df2^2 (df1+df2-2)/(df1 (df2-2)^2 (df2-4))}{2*df2^2*(df1+df2-2)/ (df1*(df2-2)^2*(df2-4))} provided \eqn{df2>4}. The estimated mean is returned as the fitted values. Although the F distribution can be defined to accommodate a non-centrality parameter \code{ncp}, it is assumed zero here. Actually it shouldn't be too difficult to handle any known \code{ncp}; something to do in the short future. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Evans, M., Hastings, N. and Peacock, B. (2000) \emph{Statistical Distributions}, New York: Wiley-Interscience, Third edition. } \author{ T. W. Yee } \section{Warning}{ Numerical problems will occur when the estimates of the parameters are too low or too high. } %\note{ % This family function uses the BFGS quasi-Newton update formula for the % working weight matrices. Consequently the estimated variance-covariance % matrix may be inaccurate or simply wrong! 
The standard errors must be % therefore treated with caution; these are computed in functions such % as \code{vcov()} and \code{summary()}. % %} \seealso{ \code{\link[stats:Fdist]{FDist}}. } \examples{ \dontrun{ fdata <- data.frame(x2 = runif(nn <- 2000)) fdata <- transform(fdata, df1 = exp(2+0.5*x2), df2 = exp(2-0.5*x2)) fdata <- transform(fdata, y = rf(nn, df1, df2)) fit <- vglm(y ~ x2, fff, fdata, trace = TRUE) coef(fit, matrix = TRUE) } } \keyword{models} \keyword{regression} VGAM/man/felixUC.Rd0000644000176000001440000000247512136651105013445 0ustar ripleyusers\name{Felix} \alias{Felix} \alias{dfelix} %\alias{pfelix} %\alias{qfelix} %\alias{rfelix} \title{The Felix Distribution} \description{ Density % distribution function, quantile function % and random generation for the Felix distribution. } \usage{ dfelix(x, a = 0.25, log = FALSE) %pfelix(q, a = 0.25) %qfelix(p, a = 0.25) %rfelix(n, a = 0.25) } \arguments{ \item{x}{vector of quantiles.} % \item{p}{vector of probabilities.} % \item{n}{number of observations. % Must be a positive integer of length 1.} \item{a}{ See \code{\link{felix}}. } \item{log}{ Logical. If \code{log=TRUE} then the logarithm of the density is returned. } } \value{ \code{dfelix} gives the density. % \code{pfelix} gives the distribution function, % \code{qfelix} gives the quantile function, and % \code{rfelix} generates random deviates. } \author{ T. W. Yee } \details{ See \code{\link{felix}}, the \pkg{VGAM} family function for estimating the parameter, for the formula of the probability density function and other details. } \section{Warning }{ The default value of \code{a} is subjective. } \seealso{ \code{\link{felix}}. } \examples{ \dontrun{ a = 0.25; x = 1:15 plot(x, dfelix(x, a), type="h", las=1, col="blue", ylab=paste("dfelix(a=", a, ")"), main="Felix density function") } } \keyword{distribution} VGAM/man/felix.Rd0000644000176000001440000000314512136651105013210 0ustar ripleyusers\name{felix} \alias{felix} %- Also NEED an '\alias' for EACH other topic documented here. \title{Felix Distribution Family Function} \description{ Estimates the parameter of a Felix distribution by maximum likelihood estimation. } \usage{ felix(link = elogit(min = 0, max = 0.5), imethod = 1) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{link}{ Link function for the parameter; see \code{\link{Links}} for more choices and for general information. } \item{imethod}{ See \code{\link{CommonVGAMffArguments}}. Valid values are 1, 2, 3 or 4. } } \details{ The Felix distribution is an important basic Lagrangian distribution. The density function is \deqn{f(y;a) = \frac{ 1 }{((y-1)/2)!} y^{(y-3)/2} a^{(y-1)/2} \exp(-ay) }{% f(y;a) = (1 / ((y-1)/2)!) * y^((y-3)/2) * a^((y-1)/2) * exp(-ay)} where \eqn{y=1,3,5,\ldots} and \eqn{0 < a < 0.5}. The mean is \eqn{1/(1-2a)} (returned as the fitted values). Fisher scoring is implemented. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Consul, P. C. and Famoye, F. (2006) \emph{Lagrangian Probability Distributions}, Boston: Birkhauser. } \author{ T. W. Yee } %\note{ % %} \seealso{ \code{\link{dfelix}}, \code{\link{borel.tanner}}. } \examples{ fdata <- data.frame(y = 2*rpois(n = 200, 1) + 1) # Not real data! 
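# An extra illustrative check (the response has support on the odd
# integers 1, 3, 5, ...):
with(fdata, table(y))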
fit <- vglm(y ~ 1, felix, fdata, trace = TRUE, crit = "c") coef(fit, matrix=TRUE) Coef(fit) summary(fit) } \keyword{models} \keyword{regression} VGAM/man/exppoissonUC.Rd0000644000176000001440000000404512136651105014540 0ustar ripleyusers\name{exppois} \alias{exppois} \alias{dexppois} \alias{pexppois} \alias{qexppois} \alias{rexppois} \title{The Exponential Poisson Distribution} \description{ Density, distribution function, quantile function and random generation for the exponential poisson distribution. } \usage{ dexppois(x, lambda, betave = 1, log = FALSE) pexppois(q, lambda, betave = 1) qexppois(p, lambda, betave = 1) rexppois(n, lambda, betave = 1) } \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. If \code{length(n) > 1} then the length is taken to be the number required. } \item{lambda, betave}{ both positive parameters. } \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the density is returned. } } \value{ \code{dexppois} gives the density, \code{pexppois} gives the distribution function, \code{qexppois} gives the quantile function, and \code{rexppois} generates random deviates. } \author{ J. G. Lauder, jamesglauder@gmail.com } \details{ See \code{\link{exppoisson}}, the \pkg{VGAM} family function for estimating the parameters, for the formula of the probability density function and other details. } %\note{ %} \seealso{ \code{\link{exppoisson}}. } \examples{ \dontrun{ lambda = 2; betave = 2; nn = 201 x = seq(-0.05, 1.05, len = nn) plot(x, dexppois(x, lambda, betave), type = "l", las = 1, ylim = c(0, 5), ylab = paste("[dp]exppoisson(lambda = ", lambda, ", betave = ", betave, ")"), col = "blue", cex.main = 0.8, main = "Blue is density, orange is cumulative distribution function", sub = "Purple lines are the 10,20,...,90 percentiles") lines(x, pexppois(x, lambda, betave), col = "orange") probs = seq(0.1, 0.9, by = 0.1) Q = qexppois(probs, lambda, betave) lines(Q, dexppois(Q, lambda, betave), col = "purple", lty = 3, type = "h") lines(Q, pexppois(Q, lambda, betave), col = "purple", lty = 3, type = "h") abline(h = probs, col = "purple", lty = 3) max(abs(pexppois(Q, lambda, betave) - probs)) # Should be 0 } } \keyword{distribution} VGAM/man/exppoisson.Rd0000644000176000001440000000467512136651105014321 0ustar ripleyusers\name{exppoisson} \alias{exppoisson} %- Also NEED an '\alias' for EACH other topic documented here. \title{Exponential Poisson Distribution Family Function} \description{ Estimates the two parameters of the exponential Poisson distribution by maximum likelihood estimation. } \usage{ exppoisson(llambda = "loge", lbetave = "loge", ilambda = 1.1, ibetave = 2, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{llambda, lbetave}{ Link function for the two positive parameters. See \code{\link{Links}} for more choices. } \item{ilambda, ibetave}{ Numeric. Initial values for the \code{lambda} and \code{betave} parameters. Currently this function is not intelligent enough to obtain better initial values. } \item{zero}{ See \code{\link{CommonVGAMffArguments}}. } } \details{ The exponential Poisson distribution has density function \deqn{f(y; \lambda = shape, \beta = scale) = \frac{\lambda \beta}{1 - e^{-\lambda}} \, e^{-\lambda - \beta y + \lambda \exp{(-\beta y)}}}{% f(y; a = shape, b = scale) = (a*b/(1 - e^(-a))) * e^{-a - b*y + a * e^(-b*y)}} where \eqn{y > 0} and the parameters shape, \eqn{\lambda}{a}, and scale, \eqn{\beta}{b}, are positive. 
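  As a quick numerical sanity check, the density should integrate to
  one over its support; an illustrative sketch (with arbitrary
  parameter values) is
\preformatted{
integrate(function(y) dexppois(y, lambda = 2, betave = 3),
          lower = 0, upper = Inf)  # Should be very close to 1
}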
  The distribution implies a population facing discrete hazard rates
  which are multiples of a base hazard.
  This \pkg{VGAM} family function requires the \code{hypergeo} package
  (to use its \code{genhypergeo} function).

% This \pkg{VGAM} family function requires the \pkg{hypergeo} package
% (to use their \code{\link[hypergeo]{genhypergeo}} function).
}
\section{Warning }{
  This \pkg{VGAM} family function does not work properly!
}
\value{
  An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}).
  The object is used by modelling functions such as \code{\link{vglm}}
  and \code{\link{vgam}}.
}
\references{
Kus, C. (2007)
A new lifetime distribution.
\emph{Computational Statistics and Data Analysis},
\bold{51}, 4497--4509.
}
\author{ J. G. Lauder, jamesglauder@gmail.com }
\seealso{
  \code{\link{dexppois}},
  \code{\link{exponential}},
  \code{\link{poisson}}.
}
\examples{
\dontrun{
lambda = exp(1); betave = exp(2)
rdata = data.frame(y = rexppois(n = 1000, lambda, betave))
library(hypergeo)
fit = vglm(y ~ 1, exppoisson, rdata, trace = TRUE)
c(with(rdata, mean(y)), head(fitted(fit), 1))
coef(fit, matrix = TRUE)
Coef(fit)
summary(fit)
}
}
\keyword{models}
\keyword{regression}
VGAM/man/exponential.Rd0000644000176000001440000000537712136651105014436 0ustar ripleyusers\name{exponential}
\alias{exponential}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Exponential Distribution }
\description{
  Maximum likelihood estimation for the exponential distribution.
}
\usage{
exponential(link = "loge", location = 0, expected = TRUE,
            shrinkage.init = 0.95, zero = NULL)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
  \item{link}{
  Parameter link function applied to the positive parameter \eqn{rate}.
  See \code{\link{Links}} for more choices.
  }
  \item{location}{
  Numeric of length 1, the known location parameter, \eqn{A}, say.
  }
  \item{expected}{
  Logical. If \code{TRUE} Fisher scoring is used, otherwise
  Newton-Raphson. The latter is usually faster.
  }
  \item{shrinkage.init, zero}{
  See \code{\link{CommonVGAMffArguments}} for information.
  }
}
\details{
  The family function assumes the response \eqn{Y} has density
  \deqn{f(y) = \lambda \exp(-\lambda (y-A))}{%
        f(y) = rate * exp(-rate * (y-A)) }
  for \eqn{y > A}, where \eqn{A} is the known location parameter.
  By default, \eqn{A=0}.
  Then \eqn{E(Y) = A + 1/ \lambda}{E(Y) = A + 1/rate} and
  \eqn{Var(Y) = 1/ \lambda^2}{Var(Y) = 1/rate^2}.
}
\value{
  An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}).
  The object is used by modelling functions such as \code{\link{vglm}}
  and \code{\link{vgam}}.
}
\references{
Evans, M., Hastings, N. and Peacock, B. (2000)
\emph{Statistical Distributions},
New York: Wiley-Interscience, Third edition.
}
\author{ T. W. Yee }
\note{
  Suppose \eqn{A = 0}.
  For a fixed time interval, the number of events is Poisson with mean
  \eqn{\lambda}{rate} if the time between events has an exponential
  distribution with mean \eqn{\lambda^{-1}}{1/rate}.

  The argument \code{rate} in \code{exponential} is the same as
  \code{\link[stats:Exponential]{rexp}} etc.
  The argument \code{lambda} in \code{\link{rpois}} is somewhat the
  same as \code{rate} here.
}
\seealso{
% \code{\link{cexpon}},
  \code{\link{amlexponential}},
  \code{\link{laplace}},
  \code{\link{expgeometric}},
  \code{\link{explogarithmic}},
  \code{\link{poissonff}},
  \code{\link{mix2exp}},
  \code{\link{freund61}}.
} \examples{ edata <- data.frame(x2 = runif(nn <- 100) - 0.5) edata <- transform(edata, x3 = runif(nn) - 0.5) edata <- transform(edata, eta = 0.2 - 0.7 * x2 + 1.9 * x3) edata <- transform(edata, rate = exp(eta)) edata <- transform(edata, y = rexp(nn, rate = rate)) with(edata, stem(y)) fit.slow <- vglm(y ~ x2 + x3, exponential, edata, trace = TRUE, crit = "c") fit.fast <- vglm(y ~ x2 + x3, exponential(exp = FALSE), edata, trace = TRUE, crit = "coef") coef(fit.slow, mat = TRUE) summary(fit.slow) } \keyword{models} \keyword{regression} VGAM/man/explogarithmicUC.Rd0000644000176000001440000000426312136651105015352 0ustar ripleyusers\name{explog} \alias{explog} \alias{dexplog} \alias{pexplog} \alias{qexplog} \alias{rexplog} \title{The Exponential Logarithmic Distribution} \description{ Density, distribution function, quantile function and random generation for the exponential logarithmic distribution. } \usage{ dexplog(x, scale = 1, shape, log = FALSE) pexplog(q, scale = 1, shape) qexplog(p, scale = 1, shape) rexplog(n, scale = 1, shape) } \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. If \code{length(n) > 1} then the length is taken to be the number required. } \item{scale, shape}{ positive scale and shape parameters. } \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the density is returned. } } \value{ \code{dexplog} gives the density, \code{pexplog} gives the distribution function, \code{qexplog} gives the quantile function, and \code{rexplog} generates random deviates. } \author{ J. G. Lauder and T. W. Yee } \details{ See \code{\link{explogarithmic}}, the \pkg{VGAM} family function for estimating the parameters, for the formula of the probability density function and other details. } \note{ We define \code{scale} as the reciprocal of the scale parameter used by Tahmasabi and Rezaei (2008). } \seealso{ \code{\link{explogarithmic}}, \code{\link{exponential}}. } \examples{ \dontrun{ shape = 0.5; scale = 2; nn = 501 x = seq(-0.50, 6.0, len = nn) plot(x, dexplog(x, scale, shape), type = "l", las = 1, ylim = c(0, 1.1), ylab = paste("[dp]explog(shape = ", shape, ", scale = ", scale, ")"), col = "blue", cex.main = 0.8, main = "Blue is density, orange is cumulative distribution function", sub = "Purple lines are the 10,20,...,90 percentiles") lines(x, pexplog(x, scale, shape), col = "orange") probs = seq(0.1, 0.9, by = 0.1) Q = qexplog(probs, scale, shape = shape) lines(Q, dexplog(Q, scale, shape = shape), col = "purple", lty = 3, type = "h") lines(Q, pexplog(Q, scale, shape = shape), col = "purple", lty = 3, type = "h") abline(h = probs, col = "purple", lty = 3) max(abs(pexplog(Q, scale, shape = shape) - probs)) # Should be 0 } } \keyword{distribution} VGAM/man/explogarithmic.Rd0000644000176000001440000000464112136651105015122 0ustar ripleyusers\name{explogarithmic} \alias{explogarithmic} %- Also NEED an '\alias' for EACH other topic documented here. \title{Exponential Logarithmic Distribution Family Function} \description{ Estimates the two parameters of the exponential logarithmic distribution by maximum likelihood estimation. } \usage{ explogarithmic(lscale = "loge", lshape = "logit", iscale = NULL, ishape = NULL, tol12 = 1e-05, zero = 1, nsimEIM = 400) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lscale, lshape}{ See \code{\link{CommonVGAMffArguments}} for information. } \item{tol12}{ Numeric. Tolerance for testing whether a parameter has value 1 or 2. 
}
  \item{iscale, ishape, zero, nsimEIM}{
  See \code{\link{CommonVGAMffArguments}}.
  }
}
\details{
  The exponential logarithmic distribution has density function
  \deqn{f(y; c, s) =
        (1/(-\log s )) (((1/c) (1 - s) e^{-y/c}) /
        (1 - (1 - s) e^{-y/c}))}{%
        (1/(-log(s))) * (((1/c) * (1 - s) * e^(-y/c)) /
        (1 - (1 - s) * e^(-y/c)))}
  where \eqn{y > 0}, scale parameter \eqn{c > 0}, and shape parameter
  \eqn{s \in (0, 1)}{0 < s < 1}.
  The mean,
  \eqn{(-polylog(2, 1 - s) c) / \log(s)}{(-polylog(2, 1 - s) * c) /
       log(s)},
  is \emph{not} returned as the fitted values.
  Note the median is \eqn{c \log(1 + \sqrt{s})}{c * log(1 + sqrt(s))}
  and it is \emph{currently} returned as the fitted values.
  Simulated Fisher scoring is implemented.
}
\value{
  An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}).
  The object is used by modelling functions such as \code{\link{vglm}}
  and \code{\link{vgam}}.
}
\references{
Tahmasabi, R. and Rezaei, S. (2008)
A two-parameter lifetime distribution with decreasing failure rate.
\emph{Computational Statistics and Data Analysis},
\bold{52}, 3889--3901.
}
\author{ J. G. Lauder and T. W. Yee }
\note{
  We define \code{scale} as the reciprocal of the rate parameter used
  by Tahmasabi and Rezaei (2008).
  Yet to do: find a \code{polylog()} function.
}
\seealso{
  \code{\link{dexplog}},
  \code{\link{exponential}}.
}
\examples{
\dontrun{
scale = exp(2); shape = logit(-1, inverse = TRUE);
edata = data.frame(y = rexplog(n = 2000, scale = scale, shape = shape))
fit = vglm(y ~ 1, explogarithmic, edata, trace = TRUE)
c(with(edata, median(y)), head(fitted(fit), 1))
coef(fit, matrix = TRUE)
Coef(fit)
summary(fit)
}
}
\keyword{models}
\keyword{regression}
VGAM/man/explink.Rd0000644000176000001440000000440512136651105013553 0ustar ripleyusers\name{explink}
\alias{explink}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Exponential Link Function }
\description{
  Computes the exponential transformation, including its inverse and
  the first two derivatives.
}
\usage{
explink(theta, bvalue = NULL, inverse = FALSE, deriv = 0,
        short = TRUE, tag = FALSE)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
  \item{theta}{
  Numeric or character.
  See below for further details.
  }
% \item{earg}{
% Optional list.
% See \code{\link{Links}} for general information about \code{earg}.
% }
  \item{bvalue}{
  See \code{cloglog}.
  }
  \item{inverse, deriv, short, tag}{
  Details at \code{\link{Links}}.
  }
}
\details{
  The exponential link function is potentially suitable for parameters
  that are positive.
  Numerical values of \code{theta} close to negative or positive
  infinity may result in \code{0}, \code{Inf}, \code{-Inf}, \code{NA}
  or \code{NaN}.
}
\value{
  For \code{explink} with \code{deriv = 0}, the exponential of
  \code{theta}, i.e., \code{exp(theta)} when \code{inverse = FALSE},
  and if \code{inverse = TRUE} then \code{log(theta)};
  if \code{theta} is not positive then it will return \code{NaN}.
  For \code{deriv = 1}, the function returns
  \emph{d} \code{theta} / \emph{d} \code{eta} as a function of
  \code{theta} if \code{inverse = FALSE}, else if \code{inverse = TRUE}
  then it returns the reciprocal.
  Here, all logarithms are natural logarithms, i.e., to base \emph{e}.
}
%\references{
% McCullagh, P. and Nelder, J. A. (1989)
% \emph{Generalized Linear Models}, 2nd ed. London: Chapman & Hall.
%
%}
\author{ Thomas W. Yee }
\note{
  This function has particular use for computing quasi-variances when
  used with \code{\link{rcim}} and \code{\link{normal1}}.
Numerical instability may occur when \code{theta} is close to negative or positive infinity. One way of overcoming this (one day) is to use \code{bvalue}. } \seealso{ \code{\link{Links}}, \code{\link{loge}}, \code{\link{rcim}}, \code{\link{Qvar}}, \code{\link{normal1}}. } \examples{ theta <- rnorm(30) explink(theta) max(abs(explink(explink(theta), inverse = TRUE) - theta)) # Should be 0 } \keyword{math} \keyword{models} \keyword{regression} VGAM/man/expgeometricUC.Rd0000644000176000001440000000425512136651105015027 0ustar ripleyusers\name{expgeom} \alias{expgeom} \alias{dexpgeom} \alias{pexpgeom} \alias{qexpgeom} \alias{rexpgeom} \title{The Exponential Geometric Distribution} \description{ Density, distribution function, quantile function and random generation for the exponential geometric distribution. } \usage{ dexpgeom(x, scale = 1, shape, log = FALSE) pexpgeom(q, scale = 1, shape) qexpgeom(p, scale = 1, shape) rexpgeom(n, scale = 1, shape) } \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. If \code{length(n) > 1} then the length is taken to be the number required. } \item{scale, shape}{ positive scale and shape parameters. } \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the density is returned. } } \value{ \code{dexpgeom} gives the density, \code{pexpgeom} gives the distribution function, \code{qexpgeom} gives the quantile function, and \code{rexpgeom} generates random deviates. } \author{ J. G. Lauder and T. W. Yee } \details{ See \code{\link{expgeometric}}, the \pkg{VGAM} family function for estimating the parameters, for the formula of the probability density function and other details. } \note{ We define \code{scale} as the reciprocal of the scale parameter used by Adamidis and Loukas (1998). } \seealso{ \code{\link{expgeometric}}, \code{\link{exponential}}, \code{\link{geometric}}. } \examples{ \dontrun{ shape = 0.5; scale = 1; nn = 501 x = seq(-0.10, 3.0, len = nn) plot(x, dexpgeom(x, scale, shape), type = "l", las = 1, ylim = c(0, 2), ylab = paste("[dp]expgeom(shape = ", shape, ", scale = ", scale, ")"), col = "blue", cex.main = 0.8, main = "Blue is density, red is cumulative distribution function", sub = "Purple lines are the 10,20,...,90 percentiles") lines(x, pexpgeom(x, scale, shape), col = "red") probs = seq(0.1, 0.9, by = 0.1) Q = qexpgeom(probs, scale, shape) lines(Q, dexpgeom(Q, scale, shape), col = "purple", lty = 3, type = "h") lines(Q, pexpgeom(Q, scale, shape), col = "purple", lty = 3, type = "h") abline(h = probs, col = "purple", lty = 3) max(abs(pexpgeom(Q, scale, shape) - probs)) # Should be 0 } } \keyword{distribution} VGAM/man/expgeometric.Rd0000644000176000001440000000445212136651105014576 0ustar ripleyusers\name{expgeometric} \alias{expgeometric} %- Also NEED an '\alias' for EACH other topic documented here. \title{Exponential Geometric Distribution Family Function} \description{ Estimates the two parameters of the exponential geometric distribution by maximum likelihood estimation. } \usage{ expgeometric(lscale = "loge", lshape = "logit", iscale = NULL, ishape = NULL, tol12 = 1e-05, zero = 1, nsimEIM = 400) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lscale, lshape}{ Link function for the two parameters. See \code{\link{Links}} for more choices. } \item{iscale, ishape}{ Numeric. Optional initial values for the scale and shape parameters. } \item{tol12}{ Numeric. Tolerance for testing whether a parameter has value 1 or 2. 
} \item{zero, nsimEIM}{ See \code{\link{CommonVGAMffArguments}}. } }
\details{ The exponential geometric distribution has density function
\deqn{f(y; c = scale, s = shape) = (1/c) (1 - s) e^{-y/c} (1 - s e^{-y/c})^{-2}}{%
(1/c) * (1 - s) * e^(-y/c) * (1 - s * e^(-y/c))^(-2)}
where \eqn{y > 0}, \eqn{c > 0} and \eqn{s \in (0, 1)}{0 < s < 1}.
The mean, \eqn{(c (s - 1)/ s) \log(1 - s)}{(c * (s - 1)/ s) * log(1 - s)}, is returned as the fitted values.
Note the median is \eqn{c \log(2 - s)}{c * log(2 - s)}.
Simulated Fisher scoring is implemented. }
\value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. }
\references{ Adamidis, K. and Loukas, S. (1998). A lifetime distribution with decreasing failure rate. \emph{Statistics and Probability Letters}, \bold{39}, 35--42. }
\author{ J. G. Lauder and T. W. Yee }
\note{ We define \code{scale} as the reciprocal of the scale parameter used by Adamidis and Loukas (1998). }
\seealso{ \code{\link{dexpgeom}}, \code{\link{exponential}}, \code{\link{geometric}}. }
\examples{ \dontrun{ scale = exp(2); shape = logit(-1, inverse = TRUE)
edata = data.frame(y = rexpgeom(n = 2000, scale = scale, shape = shape))
fit = vglm(y ~ 1, expgeometric, edata, trace = TRUE)
c(with(edata, mean(y)), head(fitted(fit), 1))
coef(fit, matrix = TRUE)
Coef(fit)
summary(fit)
} }
\keyword{models} \keyword{regression} VGAM/man/expexp1.Rd0000644000176000001440000000661212136651105013475 0ustar ripleyusers\name{expexp1} \alias{expexp1}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Exponentiated Exponential Distribution }
\description{ Estimates the two parameters of the exponentiated exponential distribution by maximizing a profile (concentrated) likelihood. }
\usage{ expexp1(lscale = "loge", iscale = NULL, ishape = 1) }
%- maybe also 'usage' for other objects documented here.
\arguments{ \item{lscale}{ Parameter link function for the (positive) \eqn{\lambda}{scale} parameter. See \code{\link{Links}} for more choices. } \item{iscale}{ Initial value for the \eqn{\lambda}{scale} parameter. By default, an initial value is chosen internally using \code{ishape}. } \item{ishape}{ Initial value for the \eqn{\alpha}{shape} parameter. If convergence fails try setting a different value for this argument. } }
\details{ See \code{\link{expexp}} for details about the exponentiated exponential distribution. This family function uses a different algorithm for fitting the model. Given \eqn{\lambda}{scale}, the MLE of \eqn{\alpha}{shape} can easily be obtained in terms of \eqn{\lambda}{scale}. This family function maximizes a profile (concentrated) likelihood with respect to \eqn{\lambda}{scale}. Newton-Raphson is used, in contrast to the Fisher scoring algorithm of \code{\link{expexp}}. }
\value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. }
\references{ Gupta, R. D. and Kundu, D. (2001) Exponentiated exponential family: an alternative to gamma and Weibull distributions, \emph{Biometrical Journal}, \bold{43}, 117--130. }
\author{ T. W. Yee }
\note{ This family function works only for intercept-only models, i.e., \code{y ~ 1} where \code{y} is the response. The estimate of \eqn{\alpha}{shape} is attached to the \code{misc} slot of the object, which is a list and contains the component \code{shape}.
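For example, after a successful fit, \code{fit@misc$shape} extracts this estimate, as illustrated in the examples below.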
As Newton-Raphson is used, the working weights are sometimes negative, and some adjustment is made to these to make them positive. Like \code{\link{expexp}}, good initial values are needed. Convergence may be slow. } \section{Warning }{The standard errors produced by a \code{summary} of the model may be wrong. } \seealso{ \code{\link{expexp}}, \code{\link{CommonVGAMffArguments}}. } \examples{ # Ball bearings data (number of million revolutions before failure) bbearings <- data.frame(y = c(17.88, 28.92, 33.00, 41.52, 42.12, 45.60, 48.80, 51.84, 51.96, 54.12, 55.56, 67.80, 68.64, 68.64, 68.88, 84.12, 93.12, 98.64, 105.12, 105.84, 127.92, 128.04, 173.40)) fit <- vglm(y ~ 1, expexp1(ishape = 4), bbearings, trace = TRUE, maxit = 50, checkwz = FALSE) coef(fit, matrix = TRUE) Coef(fit) # Authors get c(0.0314, 5.2589) with log-lik -112.9763 fit@misc$shape # Estimate of shape logLik(fit) # Failure times of the airconditioning system of an airplane acplane <- data.frame(y = c(23, 261, 87, 7, 120, 14, 62, 47, 225, 71, 246, 21, 42, 20, 5, 12, 120, 11, 3, 14, 71, 11, 14, 11, 16, 90, 1, 16, 52, 95)) fit <- vglm(y ~ 1, expexp1(ishape = 0.8), acplane, trace = TRUE, maxit = 50, checkwz = FALSE) coef(fit, matrix = TRUE) Coef(fit) # Authors get c(0.0145, 0.8130) with log-lik -152.264 fit@misc$shape # Estimate of shape logLik(fit) } \keyword{models} \keyword{regression} VGAM/man/expexp.Rd0000644000176000001440000001166212136651105013415 0ustar ripleyusers\name{expexp} \alias{expexp} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Exponentiated Exponential Distribution } \description{ Estimates the two parameters of the exponentiated exponential distribution by maximum likelihood estimation. } \usage{ expexp(lshape = "loge", lscale = "loge", ishape = 1.1, iscale = NULL, tolerance = 1.0e-6, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lshape, lscale}{ Parameter link functions for the \eqn{\alpha}{shape} and \eqn{\lambda}{scale} parameters. See \code{\link{Links}} for more choices. The defaults ensure both parameters are positive. } \item{ishape}{ Initial value for the \eqn{\alpha}{shape} parameter. If convergence fails try setting a different value for this argument. } \item{iscale}{ Initial value for the \eqn{\lambda}{scale} parameter. By default, an initial value is chosen internally using \code{ishape}. } \item{tolerance}{ Numeric. Small positive value for testing whether values are close enough to 1 and 2. } \item{zero}{ An integer-valued vector specifying which linear/additive predictors are modelled as intercepts only. The default is none of them. If used, choose one value from the set \{1,2\}. } } \details{ The exponentiated exponential distribution is an alternative to the Weibull and the gamma distributions. The formula for the density is \deqn{f(y;\alpha,\lambda) = \alpha \lambda (1-\exp(-\lambda y))^{\alpha-1} \exp(-\lambda y) }{% f(y;shape,scale) = shape scale (1-\exp(-scale y))^(shape-1) \exp(-scale y) } where \eqn{y>0}, \eqn{\alpha>0}{shape>0} and \eqn{\lambda>0}{scale>0}. The mean of \eqn{Y} is \eqn{(\psi(\alpha+1)-\psi(1))/\lambda}{(psi(shape+1)-psi(1))/scale} (returned as the fitted values) where \eqn{\psi}{psi} is the digamma function. The variance of \eqn{Y} is \eqn{(\psi'(1)-\psi'(\alpha+1))/\lambda^2}{(psi'(1)-psi'(shape+1))/ scale^2} where \eqn{\psi'}{psi'} is the trigamma function. This distribution has been called the two-parameter generalized exponential distribution by Gupta and Kundu (2006). 
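As a quick numerical check of this mean formula (an illustrative simulation sketch only, with hypothetical parameter values; it uses inversion sampling since the distribution function is \eqn{F(y)=(1-\exp(-\lambda y))^{\alpha}}{F(y) = (1 - exp(-scale*y))^shape}):
\preformatted{
shape <- 2.5; scale <- 3  # hypothetical values, for illustration only
yy <- -log(1 - runif(1e5)^(1/shape)) / scale  # inverse-CDF sampling
c(mean(yy), (digamma(shape + 1) - digamma(1)) / scale)  # should agree closely
}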
A special case of the exponentiated exponential distribution: \eqn{\alpha=1}{shape=1} is the exponential distribution. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Gupta, R. D. and Kundu, D. (2001) Exponentiated exponential family: an alternative to gamma and Weibull distributions, \emph{Biometrical Journal}, \bold{43}, 117--130. Gupta, R. D. and Kundu, D. (2006) On the comparison of Fisher information of the Weibull and GE distributions, \emph{Journal of Statistical Planning and Inference}, \bold{136}, 3130--3144. } \author{ T. W. Yee } \note{ Fisher scoring is used, however, convergence is usually very slow. This is a good sign that there is a bug, but I have yet to check that the expected information is correct. Also, I have yet to implement Type-I right censored data using the results of Gupta and Kundu (2006). Another algorithm for fitting this model is implemented in \code{\link{expexp1}}. } \section{Warning }{ Practical experience shows that reasonably good initial values really helps. In particular, try setting different values for the \code{ishape} argument if numerical problems are encountered or failure to convergence occurs. Even if convergence occurs try perturbing the initial value to make sure the global solution is obtained and not a local solution. The algorithm may fail if the estimate of the shape parameter is too close to unity. } \seealso{ \code{\link{expexp1}}, \code{\link{gamma2.ab}}, \code{\link{weibull}}, \code{\link{CommonVGAMffArguments}}. } \examples{ # A special case: exponential data edata <- data.frame(y = rexp(n <- 1000)) fit <- vglm(y ~ 1, fam = expexp, edata, trace = TRUE, maxit = 99) coef(fit, matrix = TRUE) Coef(fit) # Ball bearings data (number of million revolutions before failure) bbearings <- c(17.88, 28.92, 33.00, 41.52, 42.12, 45.60, 48.80, 51.84, 51.96, 54.12, 55.56, 67.80, 68.64, 68.64, 68.88, 84.12, 93.12, 98.64, 105.12, 105.84, 127.92, 128.04, 173.40) fit <- vglm(bbearings ~ 1, fam = expexp(iscale = 0.05, ish = 5), trace = TRUE, maxit = 300) coef(fit, matrix = TRUE) Coef(fit) # Authors get c(shape=5.2589, scale=0.0314) logLik(fit) # Authors get -112.9763 # Failure times of the airconditioning system of an airplane acplane <- c(23, 261, 87, 7, 120, 14, 62, 47, 225, 71, 246, 21, 42, 20, 5, 12, 120, 11, 3, 14, 71, 11, 14, 11, 16, 90, 1, 16, 52, 95) fit <- vglm(acplane ~ 1, fam = expexp(ishape = 0.8, isc = 0.15), trace = TRUE, maxit = 99) coef(fit, matrix = TRUE) Coef(fit) # Authors get c(shape=0.8130, scale=0.0145) logLik(fit) # Authors get log-lik -152.264 } \keyword{models} \keyword{regression} VGAM/man/eunifUC.Rd0000644000176000001440000001246512136651105013444 0ustar ripleyusers\name{Expectiles-Uniform} \alias{Expectiles-Uniform} \alias{eunif} \alias{deunif} \alias{peunif} \alias{qeunif} \alias{reunif} \title{ Expectiles of the Uniform Distribution } \description{ Density function, distribution function, and expectile function and random generation for the distribution associated with the expectiles of a uniform distribution. } \usage{ deunif(x, min = 0, max = 1, log = FALSE) peunif(q, min = 0, max = 1, log = FALSE) qeunif(p, min = 0, max = 1, Maxit_nr = 10, Tol_nr = 1.0e-6) reunif(n, min = 0, max = 1) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x, q}{ Vector of expectiles. See the terminology note below. } \item{p}{ Vector of probabilities. % (tau or \eqn{\tau}). 
These should lie in \eqn{(0,1)}. } \item{n, min, max, log}{ See \code{\link[stats:Uniform]{runif}}. } \item{Maxit_nr}{ Numeric. Maximum number of Newton-Raphson iterations allowed. A warning is issued if convergence is not obtained for all \code{p} values. } \item{Tol_nr}{ Numeric. Small positive value specifying the tolerance or precision to which the expectiles are computed. } } \details{ Jones (1994) elucidated on the property that the expectiles of a random variable \eqn{X} with distribution function \eqn{F(x)} correspond to the quantiles of a distribution \eqn{G(x)} where \eqn{G} is related by an explicit formula to \eqn{F}. In particular, let \eqn{y} be the \eqn{p}-expectile of \eqn{F}. Then \eqn{y} is the \eqn{p}-quantile of \eqn{G} where \deqn{p = G(y) = (P(y) - y F(y)) / (2[P(y) - y F(y)] + y - \mu),}{ p = G(y) = (P(y) - y F(y)) / (2[P(y) - y F(y)] + y - mu),} and \eqn{\mu}{mu} is the mean of \eqn{X}. The derivative of \eqn{G} is \deqn{g(y) = (\mu F(y) - P(y)) / (2[P(y) - y F(y)] + y - \mu)^2 .}{ g(y) = ( mu F(y) - P(y)) / (2[P(y) - y F(y)] + y - mu)^2 .} Here, \eqn{P(y)} is the partial moment \eqn{\int_{-\infty}^{y} x f(x) \, dx}{int^{y} x f(x) dx} and \eqn{0 < p < 1}. The 0.5-expectile is the mean \eqn{\mu}{mu} and the 0.5-quantile is the median. A note about the terminology used here. Recall in the \emph{S} language there are the \code{dpqr}-type functions associated with a distribution, e.g., \code{\link[stats:Uniform]{dunif}}, \code{\link[stats:Uniform]{punif}}, \code{\link[stats:Uniform]{qunif}}, \code{\link[stats:Uniform]{runif}}, for the uniform distribution. Here, \code{unif} corresponds to \eqn{F} and \code{eunif} corresponds to \eqn{G}. The addition of ``\code{e}'' (for \emph{expectile}) is for the `other' distribution associated with the parent distribution. Thus \code{deunif} is for \eqn{g}, \code{peunif} is for \eqn{G}, \code{qeunif} is for the inverse of \eqn{G}, \code{reunif} generates random variates from \eqn{g}. For \code{qeunif} the Newton-Raphson algorithm is used to solve for \eqn{y} satisfying \eqn{p = G(y)}. Numerical problems may occur when values of \code{p} are very close to 0 or 1. } \value{ \code{deunif(x)} gives the density function \eqn{g(x)}. \code{peunif(q)} gives the distribution function \eqn{G(q)}. \code{qeunif(p)} gives the expectile function: the expectile \eqn{y} such that \eqn{G(y) = p}. \code{reunif(n)} gives \eqn{n} random variates from \eqn{G}. } \references{ Jones, M. C. (1994) Expectiles and M-quantiles are quantiles. \emph{Statistics and Probability Letters}, \bold{20}, 149--153. Yee, T. W. (2012) Vector generalized linear and additive quantile and expectile regression. \emph{In preparation}. } \author{ T. W. Yee } %\note{ %The ``\code{q}'', as the first character of ``\code{qeunif}'', %may be changed to ``\code{e}'' in the future, %the reason being to emphasize that the expectiles are returned. %Ditto for the argument ``\code{q}'' in \code{peunif}. % %} \seealso{ \code{\link{deexp}}, \code{\link{denorm}}, \code{\link{dunif}}, \code{\link{dkoenker}}. 
} \examples{ my_p <- 0.25; y <- runif(nn <- 1000) (myexp <- qeunif(my_p)) sum(myexp - y[y <= myexp]) / sum(abs(myexp - y)) # Should be my_p # Equivalently: I1 <- mean(y <= myexp) * mean( myexp - y[y <= myexp]) I2 <- mean(y > myexp) * mean(-myexp + y[y > myexp]) I1 / (I1 + I2) # Should be my_p # Or: I1 <- sum( myexp - y[y <= myexp]) I2 <- sum(-myexp + y[y > myexp]) # Non-standard uniform mymin <- 1; mymax <- 8 yy <- runif(nn, mymin, mymax) (myexp <- qeunif(my_p, mymin, mymax)) sum(myexp - yy[yy <= myexp]) / sum(abs(myexp - yy)) # Should be my_p peunif(mymin, mymin, mymax) # Should be 0 peunif(mymax, mymin, mymax) # Should be 1 peunif(mean(yy), mymin, mymax) # Should be 0.5 abs(qeunif(0.5, mymin, mymax) - mean(yy)) # Should be 0 abs(qeunif(0.5, mymin, mymax) - (mymin+mymax)/2) # Should be 0 abs(peunif(myexp, mymin, mymax) - my_p) # Should be 0 integrate(f = deunif, lower = mymin - 3, upper = mymax + 3, min = mymin, max = mymax) # Should be 1 \dontrun{ par(mfrow = c(2,1)) yy <- seq(0.0, 1.0, len = nn) plot(yy, deunif(yy), type = "l", col = "blue", ylim = c(0, 2), xlab = "y", ylab = "g(y)", main = "g(y) for Uniform(0,1)") lines(yy, dunif(yy), col = "darkgreen", lty = "dotted", lwd = 2) # 'original' plot(yy, peunif(yy), type = "l", col = "blue", ylim = 0:1, xlab = "y", ylab = "G(y)", main = "G(y) for Uniform(0,1)") abline(a = 0.0, b = 1.0, col = "darkgreen", lty = "dotted", lwd = 2) abline(v = 0.5, h = 0.5, col = "red", lty = "dashed") } } \keyword{distribution} VGAM/man/erlang.Rd0000644000176000001440000000530012136651105013344 0ustar ripleyusers\name{erlang} \alias{erlang} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Erlang Distribution } \description{ Estimates the scale parameter of the Erlang distribution by maximum likelihood estimation. } \usage{ erlang(shape.arg, link = "loge", imethod = 1, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{shape.arg}{ The shape parameter. The user must specify a positive integer. } \item{link}{ Link function applied to the (positive) \eqn{scale} parameter. See \code{\link{Links}} for more choices. } \item{imethod, zero}{ See \code{\link{CommonVGAMffArguments}} for more details. } } \details{ The Erlang distribution is a special case of the gamma distribution with \emph{shape} that is a positive integer. If \code{shape.arg=1} then it simplifies to the exponential distribution. As illustrated in the example below, the Erlang distribution is the distribution of the sum of \code{shape.arg} independent and identically distributed exponential random variates. The probability density function of the Erlang distribution is given by \deqn{f(y) = \exp(-y/scale) y^{shape-1} scale^{-shape} / \Gamma(shape)}{% f(y) = exp(-y/scale) y^(shape-1) scale^(-shape) / gamma(shape)} for known positive integer \eqn{shape}, unknown \eqn{scale > 0} and \eqn{y > 0}. Here, \eqn{\Gamma(shape)}{gamma(shape)} is the gamma function, as in \code{\link[base:Special]{gamma}}. The mean of \emph{Y} is \eqn{\mu=shape \times scale}{mu=shape*scale} and its variance is \eqn{shape \times scale^2}{shape*scale^2}. The linear/additive predictor, by default, is \eqn{\eta=\log(scale)}{eta=log(scale)}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Most standard texts on statistical distributions describe this distribution, e.g., Evans, M., Hastings, N. and Peacock, B. 
(2000) \emph{Statistical Distributions}, New York: Wiley-Interscience, Third edition. } \author{ T. W. Yee } \note{ Multiple responses are permitted. The \code{rate} parameter found in \code{\link{gamma2.ab}} is \code{1/scale} here---see also \code{\link[stats]{rgamma}}. } \seealso{ \code{\link{gamma2.ab}}, \code{\link{exponential}}. } \examples{ rate <- exp(2); myshape = 3 edata <- data.frame(y = rep(0, nn <- 1000)) for(ii in 1:myshape) edata <- transform(edata, y = y + rexp(nn, rate = rate)) fit <- vglm(y ~ 1, erlang(shape = myshape), edata, trace = TRUE) coef(fit, matrix = TRUE) Coef(fit) # Answer = 1/rate 1/rate summary(fit) } \keyword{models} \keyword{regression} VGAM/man/erf.Rd0000644000176000001440000000266412136651105012662 0ustar ripleyusers\name{erf} \alias{erf} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Error Function } \description{ Computes the error function based on the normal distribution. } \usage{ erf(x) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x}{ Numeric. } } \details{ \eqn{Erf(x)} is defined as \deqn{Erf(x) = \frac{2}{\sqrt{\pi}} \int_0^x \exp(-t^2) dt}{% Erf(x) = (2/sqrt(pi)) int_0^x exp(-t^2) dt} so that it is closely related to \code{\link[stats:Normal]{pnorm}}. } \value{ Returns the value of the function evaluated at \code{x}. } \references{ Abramowitz, M. and Stegun, I. A. (1972) \emph{Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables}, New York: Dover Publications Inc. } \author{ T. W. Yee} \note{ Some authors omit the term \eqn{2/\sqrt{\pi}}{2/sqrt(pi)} from the definition of \eqn{Erf(x)}. Although defined for complex arguments, this function only works for real arguments. The \emph{complementary error function} \eqn{erfc(x)} is defined as \eqn{1-erf(x)}, and is implemented by \code{erfc}. } \seealso{ \code{\link[stats:Normal]{pnorm}}. } \examples{ \dontrun{ curve(erf, -3, 3, col="red", ylab="", las=1) curve(pnorm, -3, 3, add=TRUE, col="blue", lty="dotted", lwd=2) abline(v=0, h=0, lty="dashed") legend("topleft", c("erf(x)", "pnorm(x)"), col=c("red", "blue"), lty=c("solid", "dotted"), lwd=1:2) } } \keyword{math} VGAM/man/enzyme.Rd0000644000176000001440000000170212136651105013405 0ustar ripleyusers\name{enzyme} \alias{enzyme} \docType{data} \title{ Enzyme Data} \description{ Enzyme velocity and substrate concentration. } \usage{data(enzyme)} \format{ A data frame with 12 observations on the following 2 variables. \describe{ \item{conc}{a numeric explanatory vector; substrate concentration} \item{velocity}{a numeric response vector; enzyme velocity} } } \details{ Sorry, more details need to be included later. } \source{ Sorry, more details need to be included later. } \references{ Watts, D. G. (1981) An introduction to nonlinear least squares. In: L. Endrenyi (Ed.), \emph{Kinetic Data Analysis: Design and Analysis of Enzyme and Pharmacokinetic Experiments}, pp.1--24. New York: Plenum Press. } \seealso{ \code{\link[VGAM]{micmen}}. 
} \examples{ \dontrun{
fit <- vglm(velocity ~ 1, micmen, data = enzyme, trace = TRUE, form2 = ~ conc - 1, crit = "coef")
summary(fit)
} }
\keyword{datasets} VGAM/man/enormUC.Rd0000644000176000001440000000726112136651105013454 0ustar ripleyusers\name{Expectiles-Normal} \alias{Expectiles-Normal} \alias{enorm} \alias{denorm} \alias{penorm} \alias{qenorm} \alias{renorm}
\title{ Expectiles of the Normal Distribution }
\description{ Density function, distribution function, and expectile function and random generation for the distribution associated with the expectiles of a normal distribution. }
\usage{ denorm(x, mean = 0, sd = 1, log = FALSE) penorm(q, mean = 0, sd = 1, log = FALSE) qenorm(p, mean = 0, sd = 1, Maxit_nr = 10, Tol_nr = 1.0e-6) renorm(n, mean = 0, sd = 1) }
%- maybe also 'usage' for other objects documented here.
\arguments{ \item{x, p, q}{ See \code{\link{deunif}}. } \item{n, mean, sd, log}{ See \code{\link[stats:Normal]{rnorm}}. } \item{Maxit_nr, Tol_nr}{ See \code{\link{deunif}}. } }
\details{ General details are given in \code{\link{deunif}} including a note regarding the terminology used. Here, \code{norm} corresponds to the distribution of interest, \eqn{F}, and \code{enorm} corresponds to \eqn{G}. The addition of ``\code{e}'' is for the `other' distribution associated with the parent distribution. Thus \code{denorm} is for \eqn{g}, \code{penorm} is for \eqn{G}, \code{qenorm} is for the inverse of \eqn{G}, \code{renorm} generates random variates from \eqn{g}.
For \code{qenorm} the Newton-Raphson algorithm is used to solve for \eqn{y} satisfying \eqn{p = G(y)}. Numerical problems may occur when values of \code{p} are very close to 0 or 1. }
\value{ \code{denorm(x)} gives the density function \eqn{g(x)}. \code{penorm(q)} gives the distribution function \eqn{G(q)}. \code{qenorm(p)} gives the expectile function: the value \eqn{y} such that \eqn{G(y)=p}. \code{renorm(n)} gives \eqn{n} random variates from \eqn{G}. }
%\references{
%
%Jones, M. C. (1994)
%Expectiles and M-quantiles are quantiles.
%\emph{Statistics and Probability Letters},
%\bold{20}, 149--153.
%
%}
\author{ T. W. Yee }
%\note{
%The ``\code{q}'', as the first character of ``\code{qeunif}'',
%may be changed to ``\code{e}'' in the future,
%the reason being to emphasize that the expectiles are returned.
%Ditto for the argument ``\code{q}'' in \code{peunif}.
%
%}
\seealso{ \code{\link{deunif}}, \code{\link{deexp}}, \code{\link{dnorm}}, \code{\link{amlnormal}}, \code{\link{lms.bcn}}.
} \examples{ my_p <- 0.25; y <- rnorm(nn <- 1000) (myexp <- qenorm(my_p)) sum(myexp - y[y <= myexp]) / sum(abs(myexp - y)) # Should be my_p # Non-standard normal mymean <- 1; mysd <- 2 yy <- rnorm(nn, mymean, mysd) (myexp <- qenorm(my_p, mymean, mysd)) sum(myexp - yy[yy <= myexp]) / sum(abs(myexp - yy)) # Should be my_p penorm(-Inf, mymean, mysd) # Should be 0 penorm( Inf, mymean, mysd) # Should be 1 penorm(mean(yy), mymean, mysd) # Should be 0.5 abs(qenorm(0.5, mymean, mysd) - mean(yy)) # Should be 0 abs(penorm(myexp, mymean, mysd) - my_p) # Should be 0 integrate(f = denorm, lower = -Inf, upper = Inf, mymean, mysd) # Should be 1 \dontrun{ par(mfrow = c(2, 1)) yy <- seq(-3, 3, len = nn) plot(yy, denorm(yy), type = "l", col="blue", xlab = "y", ylab = "g(y)", main = "g(y) for N(0,1); dotted green is f(y) = dnorm(y)") lines(yy, dnorm(yy), col = "darkgreen", lty = "dotted", lwd = 2) # 'original' plot(yy, penorm(yy), type = "l", col = "blue", ylim = 0:1, xlab = "y", ylab = "G(y)", main = "G(y) for N(0,1)") abline(v = 0, h = 0.5, col = "red", lty = "dashed") lines(yy, pnorm(yy), col = "darkgreen", lty = "dotted", lwd = 2) } } \keyword{distribution} %# Equivalently: %I1 = mean(y <= myexp) * mean( myexp - y[y <= myexp]) %I2 = mean(y > myexp) * mean(-myexp + y[y > myexp]) %I1 / (I1 + I2) # Should be my_p %# Or: %I1 = sum( myexp - y[y <= myexp]) %I2 = sum(-myexp + y[y > myexp]) VGAM/man/eexpUC.Rd0000644000176000001440000000714612136651105013277 0ustar ripleyusers\name{Expectiles-Exponential} \alias{Expectiles-Exponential} \alias{eexp} \alias{deexp} \alias{peexp} \alias{qeexp} \alias{reexp} \title{ Expectiles of the Exponential Distribution } \description{ Density function, distribution function, and expectile function and random generation for the distribution associated with the expectiles of an exponential distribution. } \usage{ deexp(x, rate = 1, log = FALSE) peexp(q, rate = 1, log = FALSE) qeexp(p, rate = 1, Maxit_nr = 10, Tol_nr = 1.0e-6) reexp(n, rate = 1) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x, p, q}{ See \code{\link{deunif}}. } \item{n, rate, log}{ See \code{\link[stats:Exponential]{rexp}}. } \item{Maxit_nr, Tol_nr}{ See \code{\link{deunif}}. } } \details{ General details are given in \code{\link{deunif}} including a note regarding the terminology used. Here, \code{exp} corresponds to the distribution of interest, \eqn{F}, and \code{eexp} corresponds to \eqn{G}. The addition of ``\code{e}'' is for the `other' distribution associated with the parent distribution. Thus \code{deexp} is for \eqn{g}, \code{peexp} is for \eqn{G}, \code{qeexp} is for the inverse of \eqn{G}, \code{reexp} generates random variates from \eqn{g}. For \code{qeexp} the Newton-Raphson algorithm is used to solve for \eqn{y} satisfying \eqn{p = G(y)}. Numerical problems may occur when values of \code{p} are very close to 0 or 1. } \value{ \code{deexp(x)} gives the density function \eqn{g(x)}. \code{peexp(q)} gives the distribution function \eqn{G(q)}. \code{qeexp(p)} gives the expectile function: the value \eqn{y} such that \eqn{G(y)=p}. \code{reexp(n)} gives \eqn{n} random variates from \eqn{G}. } %\references{ % %Jones, M. C. (1994) %Expectiles and M-quantiles are quantiles. %\emph{Statistics and Probability Letters}, %\bold{20}, 149--153. % %} \author{ T. W. Yee } %\note{ %The ``\code{q}'', as the first character of ``\code{qeunif}'', %may be changed to ``\code{e}'' in the future, %the reason being to emphasize that the expectiles are returned. 
%Ditto for the argument ``\code{q}'' in \code{peunif}. % %} \seealso{ \code{\link{deunif}}, \code{\link{denorm}}, \code{\link{dexp}}. } \examples{ my_p <- 0.25; y <- rexp(nn <- 1000) (myexp <- qeexp(my_p)) sum(myexp - y[y <= myexp]) / sum(abs(myexp - y)) # Should be my_p \dontrun{ par(mfrow = c(2,1)) yy <- seq(-0, 4, len = nn) plot(yy, deexp(yy), col = "blue", ylim = 0:1, xlab = "y", ylab = "g(y)", type = "l", main = "g(y) for Exp(1); dotted green is f(y) = dexp(y)") lines(yy, dexp(yy), col = "darkgreen", lty = "dotted", lwd = 2) # 'original' plot(yy, peexp(yy), type = "l", col = "blue", ylim = 0:1, xlab = "y", ylab = "G(y)", main = "G(y) for Exp(1)") abline(v = 1, h = 0.5, col = "red", lty = "dashed") lines(yy, pexp(yy), col = "darkgreen", lty = "dotted", lwd = 2) } } \keyword{distribution} %# Equivalently: %I1 <- mean(y <= myexp) * mean( myexp - y[y <= myexp]) %I2 <- mean(y > myexp) * mean(-myexp + y[y > myexp]) %I1 / (I1 + I2) # Should be my_p %# Or: %I1 <- sum( myexp - y[y <= myexp]) %I2 <- sum(-myexp + y[y > myexp]) %# Non-standard exponential %myrate <- 8 %yy <- rexp(nn, rate = myrate) %(myexp <- qeexp(my_p, rate = myrate)) %sum(myexp - yy[yy <= myexp]) / sum(abs(myexp - yy)) # Should be my_p %peexp(-Inf, rate = myrate) # Should be 0 %peexp( Inf, rate = myrate) # Should be 1 %peexp(mean(yy), rate = myrate) # Should be 0.5 %abs(qeexp(0.5, rate = myrate) - mean(yy)) # Should be 0 %abs(peexp(myexp, rate = myrate) - my_p) # Should be 0 %integrate(f = deexp, lower = -1, upper = Inf, rate = myrate) # Should be 1 VGAM/man/dirmultinomial.Rd0000644000176000001440000001511012136651105015125 0ustar ripleyusers\name{dirmultinomial} \alias{dirmultinomial} %- Also NEED an '\alias' for EACH other topic documented here. \title{Fitting a Dirichlet-Multinomial Distribution } \description{ Fits a Dirichlet-multinomial distribution to a matrix response. } \usage{ dirmultinomial(lphi = "logit", iphi = 0.10, parallel = FALSE, zero = "M") } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lphi}{ Link function applied to the \eqn{\phi}{phi} parameter, which lies in the open unit interval \eqn{(0,1)}. See \code{\link{Links}} for more choices. } \item{iphi}{ Numeric. Initial value for \eqn{\phi}{phi}. Must be in the open unit interval \eqn{(0,1)}. If a failure to converge occurs then try assigning this argument a different value. } \item{parallel}{ A logical (formula not allowed here) indicating whether the probabilities \eqn{\pi_1,\ldots,\pi_{M-1}}{pi_1,\ldots,pi_{M-1}} are to be equal via equal coefficients. Note \eqn{\pi_M}{pi_M} will generally be different from the other probabilities. Setting \code{parallel = TRUE} will only work if you also set \code{zero = NULL} because of interference between these arguments (with respect to the intercept term). } \item{zero}{ An integer-valued vector specifying which linear/additive predictors are modelled as intercepts only. The values must be from the set \eqn{\{1,2,\ldots,M\}}. If the character \code{"M"} then this means the numerical value \eqn{M}, which corresponds to linear/additive predictor associated with \eqn{\phi}{phi}. Setting \code{zero = NULL} means none of the values from the set \eqn{\{1,2,\ldots,M\}}. } } \details{ The Dirichlet-multinomial distribution arises from a multinomial distribution where the probability parameters are not constant but are generated from a multivariate distribution called the Dirichlet distribution. 
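This compounding can be sketched generatively (an illustrative simulation only; the shape values below are hypothetical):
\preformatted{
nn <- 5; size <- 20  # hypothetical number of rows and common row total
probs <- rdiric(nn, shape = c(2, 3, 4))  # Dirichlet-distributed probabilities
ymat <- t(sapply(1:nn, function(i) rmultinom(1, size, probs[i, ])))
ymat  # each row is one Dirichlet-multinomial draw
}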
The Dirichlet-multinomial distribution has probability function \deqn{P(Y_1=y_1,\ldots,Y_M=y_M) = {N_{*} \choose {y_1,\ldots,y_M}} \frac{ \prod_{j=1}^{M} \prod_{r=1}^{y_{j}} (\pi_j (1-\phi) + (r-1)\phi)}{ \prod_{r=1}^{N_{*}} (1-\phi + (r-1)\phi)}}{% P(Y_1=y_1,\ldots,Y_M=y_M) = C_{y_1,\ldots,y_M}^{N_{*}} prod_{j=1}^{M} prod_{r=1}^{y_{j}} (pi_j (1-phi) + (r-1)phi) / prod_{r=1}^{N_{*}} (1-phi + (r-1)phi)} where \eqn{\phi}{phi} is the \emph{over-dispersion} parameter and \eqn{N_{*} = y_1+\cdots+y_M}{N_* = y_1+\cdots+y_M}. Here, \eqn{a \choose b}{C_b^a} means ``\eqn{a} choose \eqn{b}'' and refers to combinations (see \code{\link[base]{choose}}). The above formula applies to each row of the matrix response. In this \pkg{VGAM} family function the first \eqn{M-1} linear/additive predictors correspond to the first \eqn{M-1} probabilities via \deqn{\eta_j = \log(P[Y=j]/ P[Y=M]) = \log(\pi_j/\pi_M)}{% eta_j = log(P[Y=j]/ P[Y=M]) = log(pi_j/pi_M)} where \eqn{\eta_j}{eta_j} is the \eqn{j}th linear/additive predictor (\eqn{\eta_M=0}{eta_M=0} by definition for \eqn{P[Y=M]} but not for \eqn{\phi}{phi}) and \eqn{j=1,\ldots,M-1}. The \eqn{M}th linear/additive predictor corresponds to \code{lphi} applied to \eqn{\phi}{phi}. Note that \eqn{E(Y_j) = N_* \pi_j}{E(Y_j) = N_* pi_j} but the probabilities (returned as the fitted values) \eqn{\pi_j}{pi_j} are bundled together as a \eqn{M}-column matrix. The quantities \eqn{N_*} are returned as the prior weights. The beta-binomial distribution is a special case of the Dirichlet-multinomial distribution when \eqn{M=2}; see \code{\link{betabinomial}}. It is easy to show that the first shape parameter of the beta distribution is \eqn{shape1=\pi(1/\phi-1)}{shape1=pi*(1/phi-1)} and the second shape parameter is \eqn{shape2=(1-\pi)(1/\phi-1)}{shape2=(1-pi)*(1/phi-1)}. Also, \eqn{\phi=1/(1+shape1+shape2)}{phi=1/(1+shape1+shape2)}, which is known as the \emph{intra-cluster correlation} coefficient. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, \code{\link{rrvglm}} and \code{\link{vgam}}. If the model is an intercept-only model then \code{@misc} (which is a list) has a component called \code{shape} which is a vector with the \eqn{M} values \eqn{\pi_j(1/\phi-1)}{pi_j * (1/phi-1)}. % zz not sure: These are the shape parameters of the underlying % Dirichlet distribution. } \references{ Paul, S. R., Balasooriya, U. and Banerjee, T. (2005) Fisher information matrix of the Dirichlet-multinomial distribution. \emph{Biometrical Journal}, \bold{47}, 230--236. Tvedebrink, T. (2010) Overdispersion in allelic counts and \eqn{\theta}-correction in forensic genetics. \emph{Theoretical Population Biology}, \bold{78}, 200--210. } \author{ Thomas W. Yee } \section{Warning }{ This \pkg{VGAM} family function is prone to numerical problems, especially when there are covariates. } \note{ The response can be a matrix of non-negative integers, or else a matrix of sample proportions and the total number of counts in each row specified using the \code{weights} argument. This dual input option is similar to \code{\link{multinomial}}. To fit a `parallel' model with the \eqn{\phi}{phi} parameter being an intercept-only you will need to use the \code{constraints} argument. Currently, Fisher scoring is implemented. To compute the expected information matrix a \code{for} loop is used; this may be very slow when the counts are large. 
Additionally, convergence may be slower than usual due to round-off error when computing the expected information matrices. } \seealso{ \code{\link{dirmul.old}}, \code{\link{betabinomial}}, \code{\link{betabinomial.ab}}, \code{\link{dirichlet}}, \code{\link{multinomial}}. } \examples{ nn <- 10; M <- 5 ydata <- data.frame(round(matrix(runif(nn * M, max = 10), nn, M))) # Integer counts colnames(ydata) <- paste("y", 1:M, sep = "") fit <- vglm(cbind(y1, y2, y3, y4, y5) ~ 1, dirmultinomial, ydata, trace = TRUE) head(fitted(fit)) depvar(fit) # Sample proportions weights(fit, type = "prior", matrix = FALSE) # Total counts per row ydata <- transform(ydata, x2 = runif(nn)) fit <- vglm(cbind(y1, y2, y3, y4, y5) ~ x2, dirmultinomial, ydata, trace = TRUE) \dontrun{ # This does not work: Coef(fit) } coef(fit, matrix = TRUE) (sfit <- summary(fit)) vcov(sfit) } \keyword{models} \keyword{regression} % zz \eqn{\alpha_j = P[Y=j] \times (1/\phi - 1)}{alpha_j = P[Y=j] * % (1/phi - 1)} are the shape parameters, % for \eqn{j=1,\ldots,M}. % Currently, initial values can be improved upon. VGAM/man/dirmul.old.Rd0000644000176000001440000001220412136651105014146 0ustar ripleyusers\name{dirmul.old} \alias{dirmul.old} %- Also NEED an '\alias' for EACH other topic documented here. \title{Fitting a Dirichlet-Multinomial Distribution } \description{ Fits a Dirichlet-multinomial distribution to a matrix of non-negative integers. } \usage{ dirmul.old(link = "loge", init.alpha = 0.01, parallel = FALSE, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{link}{ Link function applied to each of the \eqn{M} (positive) shape parameters \eqn{\alpha_j}{alpha_j} for \eqn{j=1,\ldots,M}. See \code{\link{Links}} for more choices. Here, \eqn{M} is the number of columns of the response matrix. } \item{init.alpha}{ Numeric vector. Initial values for the \code{alpha} vector. Must be positive. Recycled to length \eqn{M}. } \item{parallel}{ A logical, or formula specifying which terms have equal/unequal coefficients. } \item{zero}{ An integer-valued vector specifying which linear/additive predictors are modelled as intercepts only. The values must be from the set \{1,2,\ldots,\eqn{M}\}. } } % formula is p.49 of Lange 2002. \details{ The Dirichlet-multinomial distribution, which is somewhat similar to a Dirichlet distribution, has probability function \deqn{P(Y_1=y_1,\ldots,Y_M=y_M) = {2y_{*} \choose {y_1,\ldots,y_M}} \frac{\Gamma(\alpha_{+})}{\Gamma(2y_{*}+\alpha_{+})} \prod_{j=1}^M \frac{\Gamma(y_j+\alpha_{j})}{\Gamma(\alpha_{j})}}{% P(Y_1=y_1,\ldots,Y_M=y_M) = C_{y_1,\ldots,y_M}^{2y_{*}} Gamma(alpha_+) / Gamma( 2y_* + alpha_+) prod_{j=1}^M [ Gamma( y_j+ alpha_j) / Gamma( alpha_j)]} for \eqn{\alpha_j > 0}{alpha_j > 0}, \eqn{\alpha_+ = \alpha_1 + \cdots + \alpha_M}{alpha_+ = alpha_1 + \cdots + alpha_M}, and \eqn{2y_{*} = y_1 + \cdots + y_M}{2y_* = y_1 + \cdots + y_M}. Here, \eqn{a \choose b}{C_b^a} means ``\eqn{a} choose \eqn{b}'' and refers to combinations (see \code{\link[base]{choose}}). The (posterior) mean is \deqn{E(Y_j) = (y_j + \alpha_j) / (2y_{*} + \alpha_{+})}{% E(Y_j) = (y_j + alpha_j) / (2y_{*} + alpha_+)} for \eqn{j=1,\ldots,M}{j=1,\ldots,M}, and these are returned as the fitted values as a \eqn{M}-column matrix. % One situation that arises for the Dirichlet-multinomial distribution % is a locus with M codominant alleles. If in a sample of y_* people, % allele i appears y_j times, then the maximum likelihood estimate of % the ith allele frequency is y_j / (2y_*). 
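A small numeric illustration of the posterior mean formula above, using hypothetical counts and shape parameters:
\preformatted{
yvec <- c(2, 5, 3); alpha <- c(1, 1, 1)  # hypothetical y_j and alpha_j
(yvec + alpha) / (sum(yvec) + sum(alpha))  # E(Y_j); the entries sum to 1
}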
} \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, \code{\link{rrvglm}} and \code{\link{vgam}}. } \references{ Lange, K. (2002) \emph{Mathematical and Statistical Methods for Genetic Analysis}, 2nd ed. New York: Springer-Verlag. Evans, M., Hastings, N. and Peacock, B. (2000) \emph{Statistical Distributions}, New York: Wiley-Interscience, Third edition. Paul, S. R., Balasooriya, U. and Banerjee, T. (2005) Fisher information matrix of the Dirichlet-multinomial distribution. \emph{Biometrical Journal}, \bold{47}, 230--236. Tvedebrink, T. (2010) Overdispersion in allelic counts and \eqn{\theta}-correction in forensic genetics. \emph{Theoretical Population Biology}, \bold{78}, 200--210. Documentation accompanying the \pkg{VGAM} package at \url{http://www.stat.auckland.ac.nz/~yee} contains further information and examples. } \author{ Thomas W. Yee } \note{ The response should be a matrix of non-negative values. Convergence seems to slow down if there are zero values. Currently, initial values can be improved upon. This function is almost defunct and may be withdrawn soon. Use \code{\link{dirmultinomial}} instead. } \seealso{ \code{\link{dirmultinomial}}, \code{\link{dirichlet}}, \code{\link{betabinomial.ab}}, \code{\link{multinomial}}. } \examples{ # Data from p.50 of Lange (2002) alleleCounts <- c(2, 84, 59, 41, 53, 131, 2, 0, 0, 50, 137, 78, 54, 51, 0, 0, 0, 80, 128, 26, 55, 95, 0, 0, 0, 16, 40, 8, 68, 14, 7, 1) dim(alleleCounts) <- c(8, 4) alleleCounts <- data.frame(t(alleleCounts)) dimnames(alleleCounts) <- list(c("White","Black","Chicano","Asian"), paste("Allele", 5:12, sep = "")) set.seed(123) # @initialize uses random numbers fit <- vglm(cbind(Allele5,Allele6,Allele7,Allele8,Allele9, Allele10,Allele11,Allele12) ~ 1, dirmul.old, trace = TRUE, crit = "c", data = alleleCounts) (sfit <- summary(fit)) vcov(sfit) round(eta2theta(coef(fit), fit@misc$link, fit@misc$earg), dig = 2) # not preferred round(Coef(fit), dig = 2) # preferred round(t(fitted(fit)), dig = 4) # 2nd row of Table 3.5 of Lange (2002) coef(fit, matrix = TRUE) pfit <- vglm(cbind(Allele5,Allele6,Allele7,Allele8,Allele9, Allele10,Allele11,Allele12) ~ 1, dirmul.old(parallel = TRUE), trace = TRUE, data = alleleCounts) round(eta2theta(coef(pfit, matrix = TRUE), pfit@misc$link, pfit@misc$earg), dig = 2) # 'Right' answer round(Coef(pfit), dig = 2) # 'Wrong' answer due to parallelism constraint } \keyword{models} \keyword{regression} VGAM/man/dirichlet.Rd0000644000176000001440000000770212136651105014053 0ustar ripleyusers\name{dirichlet} \alias{dirichlet} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Fitting a Dirichlet Distribution } \description{ Fits a Dirichlet distribution to a matrix of compositions. } \usage{ dirichlet(link = "loge", parallel = FALSE, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{link}{ Link function applied to each of the \eqn{M} (positive) shape parameters \eqn{\alpha_j}{alpha_j}. See \code{\link{Links}} for more choices. The default gives \eqn{\eta_j=\log(\alpha_j)}{eta_j=log(alpha_j)}. } \item{parallel, zero}{ See \code{\link{CommonVGAMffArguments}} for more information. } } \details{ In this help file the response is assumed to be a \eqn{M}-column matrix with positive values and whose rows each sum to unity. Such data can be thought of as compositional data. There are \eqn{M} linear/additive predictors \eqn{\eta_j}{eta_j}. 
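(If the response is available as a matrix of positive values whose rows do not yet sum to unity, it can be normalized first, e.g., via \code{ymat / rowSums(ymat)}; this preprocessing hint is illustrative only and is not performed by the family function itself.)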
The Dirichlet distribution is commonly used to model compositional data, including applications in genetics. Suppose \eqn{(Y_1,\ldots,Y_{M})^T}{(Y_1,\ldots,Y_M)^T} is the response. Then it has a Dirichlet distribution if \eqn{(Y_1,\ldots,Y_{M-1})^T}{(Y_1,\ldots,Y_{M-1})^T} has density \deqn{\frac{\Gamma(\alpha_{+})} {\prod_{j=1}^{M} \Gamma(\alpha_{j})} \prod_{j=1}^{M} y_j^{\alpha_{j} -1}}{% (Gamma(alpha_+) / prod_{j=1}^M gamma(alpha_j)) prod_{j=1}^M y_j^(alpha_j -1)} where \eqn{\alpha_+=\alpha_1+\cdots+\alpha_M}{alpha_+= alpha_1 + \dots + alpha_M}, \eqn{\alpha_j > 0}{alpha_j > 0}, and the density is defined on the unit simplex \deqn{\Delta_{M} = \left\{ (y_1,\ldots,y_{M})^T : y_1 > 0, \ldots, y_{M} > 0, \sum_{j=1}^{M} y_j = 1 \right\}. }{% Delta_M = { (y_1,\ldots,y_M)^T : y_1 > 0, \dots, y_M > 0, \sum_{j=1}^M y_j = 1 }. } One has \eqn{E(Y_j) = \alpha_j / \alpha_{+}}{E(Y_j) = alpha_j / alpha_{+}}, which are returned as the fitted values. For this distribution Fisher scoring corresponds to Newton-Raphson. The Dirichlet distribution can be motivated by considering the random variables \eqn{(G_1,\ldots,G_{M})^T}{(G_1,\ldots,G_M)^T} which are each independent and identically distributed as a gamma distribution with density \eqn{f(g_j)=g_j^{\alpha_j - 1} e^{-g_j} / \Gamma(\alpha_j)}{f(g_j)= g_j^(alpha_j - 1) e^(-g_j) / gamma(alpha_j)}. Then the Dirichlet distribution arises when \eqn{Y_j=G_j / (G_1 + \cdots + G_M)}{Y_j = G_j / (G_1 + ... + G_M)}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, \code{\link{rrvglm}} and \code{\link{vgam}}. When fitted, the \code{fitted.values} slot of the object contains the \eqn{M}-column matrix of means. } \references{ Lange, K. (2002) \emph{Mathematical and Statistical Methods for Genetic Analysis}, 2nd ed. New York: Springer-Verlag. Evans, M., Hastings, N. and Peacock, B. (2000) \emph{Statistical Distributions}, New York: Wiley-Interscience, Third edition. %Documentation accompanying the \pkg{VGAM} package at %\url{http://www.stat.auckland.ac.nz/~yee} %contains further information and examples. } \author{ Thomas W. Yee } \note{ The response should be a matrix of positive values whose rows each sum to unity. Similar to this is count data, where probably a multinomial logit model (\code{\link{multinomial}}) may be appropriate. Another similar distribution to the Dirichlet is the Dirichlet-multinomial (see \code{\link{dirmultinomial}}). } \seealso{ \code{\link{rdiric}}, \code{\link{dirmultinomial}}, \code{\link{multinomial}}, \code{\link{simplex}}. } \examples{ ydata <- data.frame(rdiric(n = 1000, shape = exp(c(-1, 1, 0)))) colnames(ydata) <- paste("y", 1:3, sep = "") fit <- vglm(cbind(y1, y2, y3) ~ 1, dirichlet, ydata, trace = TRUE, crit = "coef") Coef(fit) coef(fit, matrix = TRUE) head(fitted(fit)) } \keyword{models} \keyword{regression} VGAM/man/df.residual.Rd0000644000176000001440000000435312136651105014303 0ustar ripleyusers\name{df.residual} \alias{df.residual} \alias{df.residual_vlm} %\alias{df.residual.default} \title{Residual Degrees-of-Freedom} \description{ Returns the residual degrees-of-freedom extracted from a fitted VGLM object. } \usage{ df.residual_vlm(object, type = c("vlm", "lm"), \dots) } \arguments{ \item{object}{ an object for which the degrees-of-freedom are desired, e.g., a \code{\link{vglm}} object. } \item{type}{ the type of residual degrees-of-freedom wanted. In some applications the 'usual' LM-type value may be more appropriate. 
The default is the first choice. } \item{\dots}{ additional optional arguments. } }
\details{ When a VGLM is fitted, a \emph{large} (VLM) generalized least squares (GLS) fit is done at each IRLS iteration. To do this, an ordinary least squares (OLS) fit is performed by transforming the GLS using Cholesky factors. The number of rows is \eqn{M} times the `ordinary' number of rows of the LM-type model: \eqn{nM}. Here, \eqn{M} is the number of linear/additive predictors.
So the formula for the VLM-type residual degrees-of-freedom is \eqn{nM - p^{*}} where \eqn{p^{*}} is the number of columns of the `big' VLM matrix. The formula for the LM-type residual degrees-of-freedom is \eqn{n - p_{j}} where \eqn{p_{j}} is the number of columns of the `ordinary' LM matrix corresponding to the \eqn{j}th linear/additive predictor. }
\value{ The value of the residual degrees-of-freedom extracted from the object. When \code{type = "vlm"} this is a single integer, and when \code{type = "lm"} this is an \eqn{M}-vector of integers. }
\seealso{ \code{\link{vglm}}, \code{\link[stats]{deviance}}, \code{\link[stats]{lm}}. }
\examples{
pneumo <- transform(pneumo, let = log(exposure.time))
(fit <- vglm(cbind(normal, mild, severe) ~ let, propodds, pneumo))
head(model.matrix(fit, type = "vlm"))
head(model.matrix(fit, type = "lm"))
df.residual(fit, type = "vlm") # n * M - p_VLM
nobs(fit, type = "vlm") # n * M
nvar(fit, type = "vlm") # p_VLM
df.residual(fit, type = "lm") # n - p_LM(j); Useful in some situations
nobs(fit, type = "lm") # n
nvar(fit, type = "lm") # p_LM
nvar_vlm(fit, type = "lm") # p_LM(j) (<= p_LM elementwise)
}
\keyword{models} \keyword{regression} VGAM/man/dexpbinomial.Rd0000644000176000001440000001477612136651105014564 0ustar ripleyusers\name{dexpbinomial} \alias{dexpbinomial}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Double Exponential Binomial Distribution Family Function }
\description{ Fits a double exponential binomial distribution by maximum likelihood estimation. The two parameters here are the mean and dispersion parameter. }
\usage{ dexpbinomial(lmean = "logit", ldispersion = "logit", idispersion = 0.25, zero = 2) }
%- maybe also 'usage' for other objects documented here.
\arguments{ \item{lmean, ldispersion}{ Link functions applied to the two parameters, called \eqn{\mu}{mu} and \eqn{\theta}{theta} respectively below. See \code{\link{Links}} for more choices. The defaults cause the parameters to be restricted to \eqn{(0,1)}. }
\item{idispersion}{ Initial value for the dispersion parameter. If given, it must be in range, and is recycled to the necessary length. Use this argument if convergence failure occurs. }
\item{zero}{ An integer specifying which linear/additive predictor is to be modelled as an intercept only. If assigned, the single value should be either \code{1} or \code{2}. The default is to have a single dispersion parameter value. To model both parameters as functions of the covariates assign \code{zero = NULL}. } }
\details{ This distribution provides a way of handling overdispersion in a binary response. The double exponential binomial distribution belongs to the family of double exponential distributions proposed by Efron (1986). Below, equation numbers refer to that original article. Briefly, the idea is that an ordinary one-parameter exponential family allows the addition of a second parameter \eqn{\theta}{theta} which varies the dispersion of the family without changing the mean.
The extended family behaves like the original family with sample size changed from \eqn{n} to \eqn{n\theta}{n*theta}. The extended family is an exponential family in \eqn{\mu}{mu} when \eqn{n} and \eqn{\theta}{theta} are fixed, and an exponential family in \eqn{\theta}{theta} when \eqn{n} and \eqn{\mu}{mu} are fixed. Having \eqn{0 < \theta < 1}{0 < theta < 1} corresponds to overdispersion with respect to the binomial distribution. See Efron (1986) for full details. This \pkg{VGAM} family function implements an \emph{approximation} (2.10) to the exact density (2.4). It replaces the normalizing constant by unity since the true value nearly equals 1. The default model fitted is \eqn{\eta_1 = logit(\mu)}{eta1 =logit(mu)} and \eqn{\eta_2 = logit(\theta)}{eta2 = logit(theta)}. This restricts both parameters to lie between 0 and 1, although the dispersion parameter can be modelled over a larger parameter space by assigning the arguments \code{ldispersion} and \code{edispersion}. Approximately, the mean (of \eqn{Y}) is \eqn{\mu}{mu}. The \emph{effective sample size} is the dispersion parameter multiplied by the original sample size, i.e., \eqn{n\theta}{n*theta}. This family function uses Fisher scoring, and the two estimates are asymptotically independent because the expected information matrix is diagonal. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}. } \references{ Efron, B. (1986) Double exponential families and their use in generalized linear regression. \emph{Journal of the American Statistical Association}, \bold{81}, 709--721. } \author{ T. W. Yee } \note{ This function processes the input in the same way as \code{\link{binomialff}}, however multivariate responses are not allowed (\code{binomialff(mv = FALSE)}). } \section{Warning }{ Numerical difficulties can occur; if so, try using \code{idispersion}. } \seealso{ \code{\link{binomialff}}, \code{\link{toxop}}, \code{\link{CommonVGAMffArguments}}. } \examples{ # This example mimics the example in Efron (1986). The results here # differ slightly. # Scale the variables toxop <- transform(toxop, phat = positive / ssize, srainfall = scale(rainfall), # (6.1) sN = scale(ssize)) # (6.2) # A fit similar (should be identical) to Section 6 of Efron (1986). # But does not use poly(), and M=1.25 here, as in (5.3) cmlist <- list("(Intercept)" = diag(2), "I(srainfall)" = rbind(1,0), "I(srainfall^2)" = rbind(1,0), "I(srainfall^3)" = rbind(1,0), "I(sN)" = rbind(0,1), "I(sN^2)" = rbind(0,1)) fit <- vglm(cbind(phat, 1 - phat) * ssize ~ I(srainfall) + I(srainfall^2) + I(srainfall^3) + I(sN) + I(sN^2), dexpbinomial(ldisp = elogit(min = 0, max = 1.25), idisp = 0.2, zero = NULL), toxop, trace = TRUE, constraints = cmlist) # Now look at the results coef(fit, matrix = TRUE) head(fitted(fit)) summary(fit) vcov(fit) sqrt(diag(vcov(fit))) # Standard errors # Effective sample size (not quite the last column of Table 1) head(predict(fit)) Dispersion <- elogit(predict(fit)[,2], min = 0, max = 1.25, inverse = TRUE) c(round(weights(fit, type = "prior") * Dispersion, dig = 1)) # Ordinary logistic regression (gives same results as (6.5)) ofit <- vglm(cbind(phat, 1 - phat) * ssize ~ I(srainfall) + I(srainfall^2) + I(srainfall^3), binomialff, toxop, trace = TRUE) # Same as fit but it uses poly(), and can be plotted (cf. 
# Figure 1)
cmlist2 <- list("(Intercept)" = diag(2), "poly(srainfall, degree = 3)" = rbind(1, 0), "poly(sN, degree = 2)" = rbind(0, 1))
fit2 <- vglm(cbind(phat, 1 - phat) * ssize ~ poly(srainfall, degree = 3) + poly(sN, degree = 2), dexpbinomial(ldisp = elogit(min = 0, max = 1.25), idisp = 0.2, zero = NULL), toxop, trace = TRUE, constraints = cmlist2)
\dontrun{ par(mfrow = c(1, 2))
plotvgam(fit2, se = TRUE, lcol = "blue", scol = "red") # Cf. Figure 1
# Cf. Figure 1(a)
par(mfrow = c(1, 2))
ooo <- with(toxop, sort.list(rainfall))
with(toxop, plot(rainfall[ooo], fitted(fit2)[ooo], type = "l", col = "blue", las = 1, ylim = c(0.3, 0.65)))
with(toxop, points(rainfall[ooo], fitted(ofit)[ooo], col = "red", type = "b", pch = 19))
# Cf. Figure 1(b)
ooo <- with(toxop, sort.list(ssize))
with(toxop, plot(ssize[ooo], Dispersion[ooo], type = "l", col = "blue", las = 1, xlim = c(0, 100)))
} }
\keyword{models} \keyword{regression} VGAM/man/depvar.Rd0000644000176000001440000000221212136651105013356 0ustar ripleyusers\name{depvar} \alias{depvar}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Response variable extracted }
\description{ A generic function that extracts the response/dependent variable from objects. }
\usage{ depvar(object, ...) }
%- maybe also 'usage' for other objects documented here.
\arguments{ \item{object}{ An object that has some response/dependent variable. } \item{\dots}{ Other arguments fed into the specific methods function of the model. } }
\details{ This function is preferred to calling \code{fit@y}, say. }
\value{ The response/dependent variable, usually as a matrix or vector. }
%\references{
%
%}
\author{ Thomas W. Yee }
%\note{
%}
%\section{Warning }{
% This
%}
\seealso{ \code{\link[stats]{model.matrix}}, \code{\link{vglm}}. }
\examples{
pneumo <- transform(pneumo, let = log(exposure.time))
(fit <- vglm(cbind(normal, mild, severe) ~ let, propodds, pneumo))
fit@y # Sample proportions (not recommended)
depvar(fit) # Better than using fit@y; dependent variable (response)
weights(fit, type = "prior") # Number of observations
}
\keyword{models} \keyword{regression} VGAM/man/deplot.lmscreg.Rd0000644000176000001440000000533212136651105015023 0ustar ripleyusers\name{deplot.lmscreg} \alias{deplot.lmscreg}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Density Plot for LMS Quantile Regression }
\description{ Plots a probability density function associated with an LMS quantile regression. }
\usage{ deplot.lmscreg(object, newdata = NULL, x0, y.arg, plot.it = TRUE, ...) }
%- maybe also 'usage' for other objects documented here.
\arguments{ \item{object}{ A \pkg{VGAM} quantile regression model, i.e., an object produced by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}} with a family function beginning with \code{"lms."}, e.g., \code{\link{lms.yjn}}. }
\item{newdata}{ Optional data frame containing secondary variables such as sex. It should have a maximum of one row. The default is to use the original data. }
\item{x0}{ Numeric. The value of the primary variable at which to make the `slice'. }
\item{y.arg}{ Numerical vector. The values of the response variable at which to evaluate the density. This should be a grid that is fine enough to ensure the plotted curves are smooth. }
\item{plot.it}{ Logical. Plot it? If \code{FALSE} no plot will be done. }
\item{\dots}{ Graphical parameters that are passed into \code{\link{plotdeplot.lmscreg}}.
  }
}
\details{
  This function calls, e.g., \code{deplot.lms.yjn} in order to
  compute the density function.

}
\value{
  The original \code{object} but with a list placed in the slot
  \code{post}, called \code{@post$deplot}. The list has components
  \item{newdata}{ The argument \code{newdata} above, or a one-row
  data frame constructed out of the \code{x0} argument. }
  \item{y}{ The argument \code{y.arg} above. }
  \item{density}{ Vector of the density function values evaluated
  at \code{y.arg}. }

}
\references{
Yee, T. W. (2004)
Quantile regression via vector generalized additive models.
\emph{Statistics in Medicine}, \bold{23}, 2295--2315.

}
\author{ Thomas W. Yee }
\note{
  \code{\link{plotdeplot.lmscreg}} actually does the plotting.

}
\seealso{
  \code{\link{plotdeplot.lmscreg}},
  \code{\link{qtplot.lmscreg}},
  \code{\link{lms.bcn}},
  \code{\link{lms.bcg}},
  \code{\link{lms.yjn}}.

}
\examples{\dontrun{
fit <- vgam(BMI ~ s(age, df = c(4, 2)), fam = lms.bcn(zero = 1), data = bmi.nz)
ygrid <- seq(15, 43, by = 0.25)
deplot(fit, x0 = 20, y = ygrid, xlab = "BMI", col = "green", llwd = 2,
       main = "BMI distribution at ages 20 (green), 40 (blue), 60 (red)")
deplot(fit, x0 = 40, y = ygrid, add = TRUE, col = "blue", llwd = 2)
deplot(fit, x0 = 60, y = ygrid, add = TRUE, col = "red", llwd = 2) -> a

names(a@post$deplot)
a@post$deplot$newdata
head(a@post$deplot$y)
head(a@post$deplot$density)
}
}
\keyword{graphs}
\keyword{models}
\keyword{regression}
VGAM/man/dcennormal1.Rd0000644000176000001440000000544612136651105014312 0ustar ripleyusers\name{dcennormal1}
\alias{dcennormal1}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Univariate Normal Distribution with Double Censoring }
\description{
  Maximum likelihood estimation of the two parameters of a univariate
  normal distribution when there is double censoring.

}
\usage{
dcennormal1(r1 = 0, r2 = 0, lmu = "identity", lsd = "loge",
            imu = NULL, isd = NULL, zero = 2)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
  \item{r1, r2}{
  Integers. Number of smallest and largest values censored, respectively.

  }
  \item{lmu, lsd}{
  Parameter link functions applied to the mean and standard deviation.
  See \code{\link{Links}} for more choices.

  }
  \item{imu, isd, zero}{
  See \code{\link{CommonVGAMffArguments}} for more information.

  }
}
\details{
  This family function uses the Fisher information matrix given in
  Harter and Moore (1966). The matrix is not diagonal if either
  \code{r1} or \code{r2} is positive.

  By default, the mean is the first linear/additive predictor and the
  log of the standard deviation is the second linear/additive predictor.

}
\value{
  An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}).
  The object is used by modelling functions such as \code{\link{vglm}},
  and \code{\link{vgam}}.

}
\references{
  Harter, H. L. and Moore, A. H. (1966)
  Iterative maximum-likelihood estimation of the parameters of normal
  populations from singly and doubly censored samples.
  \emph{Biometrika}, \bold{53}, 205--213.

}
\author{ T. W. Yee }
\note{
  This family function only handles a vector or one-column matrix
  response. The \code{weights} argument, if used, is interpreted as
  frequencies; therefore it must be a vector with positive integer
  values.

  With no censoring at all (the default), it is better (and equivalent)
  to use \code{\link{normal1}}.

}
\seealso{
  \code{\link{normal1}},
  \code{\link{cennormal1}},
  \code{\link{tobit}}.
} \examples{\dontrun{# Repeat the simulations described in Harter and Moore (1966) SIMS <- 100 # Number of simulations (change this to 1000) mu.save <- sd.save <- rep(NA, len = SIMS) r1 <- 0; r2 <- 4; nn <- 20 for(sim in 1:SIMS) { y <- sort(rnorm(nn)) y <- y[(1+r1):(nn-r2)] # Delete r1 smallest and r2 largest fit <- vglm(y ~ 1, dcennormal1(r1 = r1, r2 = r2)) mu.save[sim] <- predict(fit)[1,1] sd.save[sim] <- exp(predict(fit)[1,2]) # Assumes a log link and ~ 1 } c(mean(mu.save), mean(sd.save)) # Should be c(0,1) c(sd(mu.save), sd(sd.save)) } # Data from Sarhan and Greenberg (1962); MLEs are mu = 9.2606, sd = 1.3754 strontium90 <- data.frame(y = c(8.2, 8.4, 9.1, 9.8, 9.9)) fit <- vglm(y ~ 1, dcennormal1(r1 = 2, r2 = 3, isd = 6), strontium90, trace = TRUE) coef(fit, matrix = TRUE) Coef(fit) } \keyword{models} \keyword{regression} VGAM/man/dagumUC.Rd0000644000176000001440000000560112136651105013425 0ustar ripleyusers\name{Dagum} \alias{Dagum} \alias{ddagum} \alias{pdagum} \alias{qdagum} \alias{rdagum} \title{The Dagum Distribution} \description{ Density, distribution function, quantile function and random generation for the Dagum distribution with shape parameters \code{a} and \code{p}, and scale parameter \code{scale}. } \usage{ ddagum(x, shape1.a, scale = 1, shape2.p, log = FALSE) pdagum(q, shape1.a, scale = 1, shape2.p) qdagum(p, shape1.a, scale = 1, shape2.p) rdagum(n, shape1.a, scale = 1, shape2.p) } \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. If \code{length(n) > 1}, the length is taken to be the number required.} \item{shape1.a, shape2.p}{shape parameters.} \item{scale}{scale parameter.} \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the density is returned. } } \value{ \code{ddagum} gives the density, \code{pdagum} gives the distribution function, \code{qdagum} gives the quantile function, and \code{rdagum} generates random deviates. } \references{ Kleiber, C. and Kotz, S. (2003) \emph{Statistical Size Distributions in Economics and Actuarial Sciences}, Hoboken, NJ, USA: Wiley-Interscience. } \author{ T. W. Yee } \details{ See \code{\link{dagum}}, which is the \pkg{VGAM} family function for estimating the parameters by maximum likelihood estimation. } \note{ The Dagum distribution is a special case of the 4-parameter generalized beta II distribution. } \seealso{ \code{\link{dagum}}, \code{\link{genbetaII}}. 
} \examples{ probs <- seq(0.1, 0.9, by = 0.1) shape1.a <- 1; shape2.p <- 2 # Should be 0: max(abs(pdagum(qdagum(p = probs, shape1.a = shape1.a, shape2.p = shape2.p), shape1.a = shape1.a, shape2.p = shape2.p) - probs)) \dontrun{ par(mfrow = c(1, 2)) x <- seq(-0.01, 5, len = 401) plot(x, dexp(x), type = "l", col = "black", ylab = "", las = 1, ylim = c(0, 1), main = "Black is standard exponential, others are ddagum(x, ...)") lines(x, ddagum(x, shape1.a = shape1.a, shape2.p = 1), col = "orange") lines(x, ddagum(x, shape1.a = shape1.a, shape2.p = 2), col = "blue") lines(x, ddagum(x, shape1.a = shape1.a, shape2.p = 5), col = "green") legend("topright", col = c("orange","blue","green"), lty = rep(1, len = 3), legend = paste("shape1.a =", shape1.a, ", shape2.p =", c(1, 2, 5))) plot(x, pexp(x), type = "l", col = "black", ylab = "", las = 1, main = "Black is standard exponential, others are pdagum(x, ...)") lines(x, pdagum(x, shape1.a = shape1.a, shape2.p = 1), col = "orange") lines(x, pdagum(x, shape1.a = shape1.a, shape2.p = 2), col = "blue") lines(x, pdagum(x, shape1.a = shape1.a, shape2.p = 5), col = "green") legend("bottomright", col = c("orange","blue","green"), lty = rep(1, len = 3), legend = paste("shape1.a =", shape1.a, ", shape2.p =", c(1, 2, 5))) } } \keyword{distribution} VGAM/man/dagum.Rd0000644000176000001440000000663112136651105013201 0ustar ripleyusers\name{dagum} \alias{dagum} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Dagum Distribution Family Function } \description{ Maximum likelihood estimation of the 3-parameter Dagum distribution. } \usage{ dagum(lshape1.a = "loge", lscale = "loge", lshape2.p = "loge", ishape1.a = NULL, iscale = NULL, ishape2.p = 1, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lshape1.a, lscale, lshape2.p}{ Parameter link functions applied to the (positive) parameters \code{a}, \code{scale}, and \code{p}. See \code{\link{Links}} for more choices. } \item{ishape1.a, iscale, ishape2.p}{ Optional initial values for \code{a}, \code{scale}, and \code{p}. } \item{zero}{ An integer-valued vector specifying which linear/additive predictors are modelled as intercepts only. Here, the values must be from the set \{1,2,3\} which correspond to \code{a}, \code{scale}, \code{p}, respectively. } } \details{ The 3-parameter Dagum distribution is the 4-parameter generalized beta II distribution with shape parameter \eqn{q=1}. It is known under various other names, such as the Burr III, inverse Burr, beta-K, and 3-parameter kappa distribution. It can be considered a generalized log-logistic distribution. Some distributions which are special cases of the 3-parameter Dagum are the inverse Lomax (\eqn{a=1}), Fisk (\eqn{p=1}), and the inverse paralogistic (\eqn{a=p}). More details can be found in Kleiber and Kotz (2003). The Dagum distribution has a cumulative distribution function \deqn{F(y) = [1 + (y/b)^{-a}]^{-p}}{% F(y) = [1 + (y/b)^(-a)]^(-p)} which leads to a probability density function \deqn{f(y) = ap y^{ap-1} / [b^{ap} \{1 + (y/b)^a\}^{p+1}]}{% f(y) = ap y^(ap-1) / [b^(ap) (1 + (y/b)^a)^(p+1)]} for \eqn{a > 0}, \eqn{b > 0}, \eqn{p > 0}, \eqn{y \geq 0}{y >= 0}. Here, \eqn{b} is the scale parameter \code{scale}, and the others are shape parameters. The mean is \deqn{E(Y) = b \, \Gamma(p + 1/a) \, \Gamma(1 - 1/a) / \Gamma(p)}{% E(Y) = b gamma(p + 1/a) gamma(1 - 1/a) / gamma(p)} provided \eqn{-ap < 1 < a}; these are returned as the fitted values. 
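  Inverting the cumulative distribution function above gives the
  quantile function in closed form; for example, solving
  \eqn{F(y) = 0.5}{F(y) = 0.5} shows that the median of \eqn{Y} is
  \deqn{b \, (2^{1/p} - 1)^{-1/a},}{b * (2^(1/p) - 1)^(-1/a),}
  which is what \code{\link{qdagum}} evaluates at probability 0.5.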
}
\value{
  An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}).
  The object is used by modelling functions such as \code{\link{vglm}},
  and \code{\link{vgam}}.

}
\references{
Kleiber, C. and Kotz, S. (2003)
\emph{Statistical Size Distributions in Economics and
      Actuarial Sciences},
Hoboken, NJ, USA: Wiley-Interscience.

}
\author{ T. W. Yee }
\note{
  See the note in \code{\link{genbetaII}}.

  From Kleiber and Kotz (2003), the MLE is rather sensitive to isolated
  observations located sufficiently far from the majority of the data.
  Reliable estimation of the scale parameter requires \eqn{n>7000},
  while estimates for \eqn{a} and \eqn{p} can be considered unbiased for
  \eqn{n>2000} or 3000.

}
\seealso{
    \code{\link{Dagum}},
    \code{\link{genbetaII}},
    \code{\link{betaII}},
    \code{\link{sinmad}},
    \code{\link{fisk}},
    \code{\link{invlomax}},
    \code{\link{lomax}},
    \code{\link{paralogistic}},
    \code{\link{invparalogistic}}.

}
\examples{
ddata <- data.frame(y = rdagum(n = 3000, exp(1), exp(2), exp(1)))
fit <- vglm(y ~ 1, dagum, ddata, trace = TRUE)
fit <- vglm(y ~ 1, dagum(ishape1.a = exp(1)), ddata, trace = TRUE)
coef(fit, matrix = TRUE)
Coef(fit)
summary(fit)
}
\keyword{models}
\keyword{regression}
VGAM/man/cumulative.Rd0000644000176000001440000002633712136651105014267 0ustar ripleyusers\name{cumulative}
\alias{cumulative}
%\alias{scumulative}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Ordinal Regression with Cumulative Probabilities }
\description{
  Fits a cumulative link regression model to a (preferably ordered)
  factor response.

}
\usage{
cumulative(link = "logit", parallel = FALSE, reverse = FALSE,
           mv = FALSE, apply.parint = FALSE, whitespace = FALSE)
}
%scumulative(link = "logit",
%            lscale = "loge", escale = list(),
%            parallel = FALSE, sparallel = TRUE, reverse = FALSE, iscale = 1)
%- maybe also 'usage' for other objects documented here.
\arguments{
  \item{link}{
  Link function applied to the \eqn{J} cumulative probabilities.
  See \code{\link{Links}} for more choices,
  e.g., for the cumulative
  \code{\link{probit}}/\code{\link{cloglog}}/\code{\link{cauchit}}/\ldots
  models.

  }
% \item{lscale}{
% Link function applied to the \eqn{J} scaling parameters.
% See \code{\link{Links}} for more choices.
%
% }
  \item{parallel}{
  A logical or formula specifying which terms have
  equal/unequal coefficients.
  See below for more information about the parallelism assumption.
  The default fits what some people call the
  \emph{generalized ordered logit model}.

  }
% \item{sparallel}{
% For the scaling parameters.
% A logical, or formula specifying which terms have
% equal/unequal coefficients.
% This argument is not applied to the intercept.
% The \code{scumulative()} function requires covariates; for
% intercept models use \code{cumulative()}.
%
% }
  \item{reverse}{
  Logical.
  By default, the cumulative probabilities used are
  \eqn{P(Y\leq 1)}{P(Y<=1)}, \eqn{P(Y\leq 2)}{P(Y<=2)},
  \dots, \eqn{P(Y\leq J)}{P(Y<=J)}.
  If \code{reverse} is \code{TRUE} then
  \eqn{P(Y\geq 2)}{P(Y>=2)}, \eqn{P(Y\geq 3)}{P(Y>=3)}, \dots,
  \eqn{P(Y\geq J+1)}{P(Y>=J+1)} are used.

  This should be set to \code{TRUE} for
  \code{link=}
  \code{\link{golf}},
  \code{\link{polf}},
  \code{\link{nbolf}}.
  For these links the cutpoints must be an increasing sequence;
  if \code{reverse = FALSE} then the cutpoints must be a decreasing
  sequence.

  }
  \item{mv}{
  Logical.
  Multivariate response? If \code{TRUE} then the input should be
  a matrix with values \eqn{1,2,\dots,L}, where \eqn{L=J+1} is the
  number of levels.
Each column of the matrix is a response, i.e., multivariate response. A suitable matrix can be obtained from \code{Cut}. } \item{apply.parint}{ Logical. Whether the \code{parallel} argument should be applied to the intercept term. This should be set to \code{TRUE} for \code{link=} \code{\link{golf}}, \code{\link{polf}}, \code{\link{nbolf}}. See \code{\link{CommonVGAMffArguments}} for more information. } % \item{iscale}{ % Numeric. Initial values for the scale parameters. % } \item{whitespace}{ See \code{\link{CommonVGAMffArguments}} for information. } } \details{ In this help file the response \eqn{Y} is assumed to be a factor with ordered values \eqn{1,2,\dots,J+1}. Hence \eqn{M} is the number of linear/additive predictors \eqn{\eta_j}{eta_j}; for \code{cumulative()} one has \eqn{M=J}. % and for \code{scumulative()} \eqn{M=2J}. This \pkg{VGAM} family function fits the class of \emph{cumulative link models} to (hopefully) an ordinal response. By default, the \emph{non-parallel} cumulative logit model is fitted, i.e., \deqn{\eta_j = logit(P[Y \leq j])}{% eta_j = logit(P[Y<=j])} where \eqn{j=1,2,\dots,M} and the \eqn{\eta_j}{eta_j} are not constrained to be parallel. This is also known as the \emph{non-proportional odds model}. If the logit link is replaced by a complementary log-log link (\code{\link{cloglog}}) then this is known as the \emph{proportional-hazards model}. In almost all the literature, the constraint matrices associated with this family of models are known. For example, setting \code{parallel = TRUE} will make all constraint matrices (except for the intercept) equal to a vector of \eqn{M} 1's. If the constraint matrices are equal, unknown and to be estimated, then this can be achieved by fitting the model as a reduced-rank vector generalized linear model (RR-VGLM; see \code{\link{rrvglm}}). Currently, reduced-rank vector generalized additive models (RR-VGAMs) have not been implemented here. % The scaled version of \code{cumulative()}, called \code{scumulative()}, % has \eqn{J} positive scaling factors. % They are described in pages 154 and 177 of McCullagh and Nelder (1989); % see their equation (5.4) in particular, % which they call the \emph{generalized rational model}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Agresti, A. (2002) \emph{Categorical Data Analysis}, 2nd ed. New York: Wiley. Agresti, A. (2010) \emph{Analysis of Ordinal Categorical Data}, 2nd ed. New York: Wiley. Dobson, A. J. and Barnett, A. (2008) \emph{An Introduction to Generalized Linear Models}, 3rd ed. Boca Raton: Chapman & Hall/CRC Press. McCullagh, P. and Nelder, J. A. (1989) \emph{Generalized Linear Models}, 2nd ed. London: Chapman & Hall. Simonoff, J. S. (2003) \emph{Analyzing Categorical Data}, New York: Springer-Verlag. Yee, T. W. (2010) The \pkg{VGAM} package for categorical data analysis. \emph{Journal of Statistical Software}, \bold{32}, 1--34. \url{http://www.jstatsoft.org/v32/i10/}. Yee, T. W. and Wild, C. J. (1996) Vector generalized additive models. \emph{Journal of the Royal Statistical Society, Series B, Methodological}, \bold{58}, 481--493. Further information and examples on categorical data analysis by the \pkg{VGAM} package can be found at \url{http://www.stat.auckland.ac.nz/~yee/VGAM/doc/categorical.pdf}. } \author{ Thomas W. Yee } \note{ The response should be either a matrix of counts (with row sums that are all positive), or a factor. 
  In both cases, the \code{y} slot returned by
  \code{\link{vglm}}/\code{\link{vgam}}/\code{\link{rrvglm}}
  is the matrix of counts.
  The formula must contain an intercept term.
  Other \pkg{VGAM} family functions for an ordinal response include
  \code{\link{acat}},
  \code{\link{cratio}},
  \code{\link{sratio}}.
  For a nominal (unordered) factor response, the multinomial
  logit model (\code{\link{multinomial}}) is more appropriate.

  With the logit link, setting \code{parallel = TRUE} will fit a
  proportional odds model. Note that the \code{TRUE} here does not
  apply to the intercept term.
  In practice, the validity of the proportional odds assumption
  needs to be checked, e.g., by a likelihood ratio test (LRT).
  If acceptable on the data,
  then numerical problems are less likely to occur during the fitting,
  and there are fewer parameters.
  Numerical problems occur when the linear/additive predictors cross,
  which results in probabilities outside of \eqn{(0,1)};
  setting \code{parallel = TRUE} will help avoid this problem.

  Here is an example of the usage of the \code{parallel} argument.
  If there are covariates \code{x2}, \code{x3} and \code{x4}, then
  \code{parallel = TRUE ~ x2 + x3 -1} and
  \code{parallel = FALSE ~ x4} are equivalent.
  This would constrain the regression coefficients for \code{x2} and
  \code{x3} to be equal; those of the intercepts and \code{x4} would
  be different.

  If the data is inputted in \emph{long} format
  (not \emph{wide} format, as in \code{\link{pneumo}} below)
  and the self-starting initial values are not
  good enough then try using
  \code{mustart},
  \code{coefstart} and/or
  \code{etastart}.
  See the example below.

  To fit the proportional odds model one can use the
  \pkg{VGAM} family function \code{\link{propodds}}.
  Note that \code{propodds(reverse)} is equivalent to
  \code{cumulative(parallel = TRUE, reverse = reverse)} (which is
  equivalent to
  \code{cumulative(parallel = TRUE, reverse = reverse, link = "logit")}).
  It is for convenience only. A call to
  \code{cumulative()} is preferred since it reminds the user that a
  parallelism assumption is made, as well as being a lot more flexible.
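  For example (a sketch only, assuming a hypothetical ordered factor
  response \code{y} and covariate \code{x2} in a data frame \code{d}),
  \code{vglm(y ~ x2, propodds, data = d)} and
  \code{vglm(y ~ x2, cumulative(parallel = TRUE, reverse = TRUE), data = d)}
  should give the same fit, since \code{reverse = TRUE} is the default
  for \code{\link{propodds}}.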
% In the future, this family function may be renamed to
% ``\code{cups}'' (for \bold{cu}mulative \bold{p}robabilitie\bold{s})
% or ``\code{cute}'' (for \bold{cu}mulative probabili\bold{t}i\bold{e}s).
% Please let me know if you strongly agree or disagree about this.

}
\section{Warning }{
  No check is made to verify that the response is ordinal if the
  response is a matrix;
  see \code{\link[base:factor]{ordered}}.

}
\seealso{
  \code{\link{propodds}},
  \code{\link{prplot}},
  \code{\link{margeff}},
  \code{\link{acat}},
  \code{\link{cratio}},
  \code{\link{sratio}},
  \code{\link{multinomial}},
  \code{\link{pneumo}},
  \code{\link{Links}},
  \code{\link{logit}},
  \code{\link{probit}},
  \code{\link{cloglog}},
  \code{\link{cauchit}},
  \code{\link{golf}},
  \code{\link{polf}},
  \code{\link{nbolf}},
  \code{\link{logistic1}}.

}
\examples{
# Fit the proportional odds model, p.179, in McCullagh and Nelder (1989)
pneumo <- transform(pneumo, let = log(exposure.time))
(fit <- vglm(cbind(normal, mild, severe) ~ let,
             cumulative(parallel = TRUE, reverse = TRUE), pneumo))
depvar(fit)  # Sample proportions (good technique)
fit@y        # Sample proportions (bad technique)
weights(fit, type = "prior")  # Number of observations
coef(fit, matrix = TRUE)
constraints(fit)  # Constraint matrices

# Check that the model is linear in let ----------------------
fit2 <- vgam(cbind(normal, mild, severe) ~ s(let, df = 2),
             cumulative(reverse = TRUE), pneumo)
\dontrun{ plot(fit2, se = TRUE, overlay = TRUE, lcol = 1:2, scol = 1:2) }

# Check the proportional odds assumption with a LRT ----------
(fit3 <- vglm(cbind(normal, mild, severe) ~ let,
              cumulative(parallel = FALSE, reverse = TRUE), pneumo))
pchisq(2 * (logLik(fit3) - logLik(fit)),
       df = length(coef(fit3)) - length(coef(fit)), lower.tail = FALSE)
lrtest(fit3, fit)  # More elegant

# A factor() version of fit ----------------------------------
# This is in long format (cf. wide format above)
Nobs <- round(depvar(fit) * c(weights(fit, type = "prior")))
sumNobs <- colSums(Nobs)  # apply(Nobs, 2, sum)

pneumo.long <-
  data.frame(symptoms = ordered(rep(rep(colnames(Nobs), nrow(Nobs)),
                                    times = c(t(Nobs))),
                                levels = colnames(Nobs)),
             let = rep(rep(with(pneumo, let), each = ncol(Nobs)),
                       times = c(t(Nobs))))
with(pneumo.long, table(let, symptoms))  # Check it; should be same as pneumo

(fit.long1 <- vglm(symptoms ~ let, data = pneumo.long,
                   cumulative(parallel = TRUE, reverse = TRUE),
                   trace = TRUE))
coef(fit.long1, matrix = TRUE)  # Should be same as coef(fit, matrix = TRUE)
# Could try using mustart if fit.long1 failed to converge.
mymustart <- matrix(sumNobs / sum(sumNobs),
                    nrow(pneumo.long), ncol(Nobs), byrow = TRUE)
fit.long2 <- vglm(symptoms ~ let,
                  fam = cumulative(parallel = TRUE, reverse = TRUE),
                  mustart = mymustart, data = pneumo.long, trace = TRUE)
coef(fit.long2, matrix = TRUE)  # Should be same as coef(fit, matrix = TRUE)
}
\keyword{models}
\keyword{regression}

% pneumo$let <- log(pneumo$exposure.time)
VGAM/man/cratio.Rd0000644000176000001440000000762612136651105013372 0ustar ripleyusers\name{cratio}
\alias{cratio}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Ordinal Regression with Continuation Ratios }
\description{
  Fits a continuation ratio logit/probit/cloglog/cauchit/...
  regression model to a (preferably ordered) factor response.

}
\usage{
cratio(link = "logit", parallel = FALSE, reverse = FALSE,
       zero = NULL, whitespace = FALSE)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
  \item{link}{
  Link function applied to the \eqn{M} continuation ratio probabilities.
  See \code{\link{Links}} for more choices.

  }
  \item{parallel}{
  A logical, or formula specifying which terms have
  equal/unequal coefficients.

  }
  \item{reverse}{
  Logical.
  By default, the continuation ratios used are
  \eqn{\eta_j = logit(P[Y>j|Y \geq j])}{eta_j = logit(P[Y>j|Y>=j])}
  for \eqn{j=1,\dots,M}.
  If \code{reverse} is \code{TRUE}, then
  \eqn{\eta_j = logit(P[Y<j+1|Y \leq j+1])}{eta_j = logit(P[Y<j+1|Y<=j+1])}
  are used instead.

  }
  \item{zero, whitespace}{
  See \code{\link{CommonVGAMffArguments}} for information.

  }
}
\value{
  An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}).
  The object is used by modelling functions such as \code{\link{vglm}},
  \code{\link{rrvglm}} and \code{\link{vgam}}.

}
\references{
Agresti, A. (2002)
\emph{Categorical Data Analysis},
2nd ed. New York: Wiley.

Simonoff, J. S. (2003)
\emph{Analyzing Categorical Data},
New York: Springer-Verlag.

McCullagh, P. and Nelder, J. A.
(1989)
\emph{Generalized Linear Models}, 2nd ed. London: Chapman & Hall.

Yee, T. W. (2010)
The \pkg{VGAM} package for categorical data analysis.
\emph{Journal of Statistical Software},
\bold{32}, 1--34.
\url{http://www.jstatsoft.org/v32/i10/}.

}
\author{ Thomas W. Yee }
\note{
  The response should be either a matrix of counts (with row sums that
  are all positive), or a factor. In both cases, the \code{y} slot
  returned by \code{vglm}/\code{vgam}/\code{rrvglm} is the matrix of
  counts.

  For a nominal (unordered) factor response, the multinomial logit model
  (\code{\link{multinomial}}) is more appropriate.

  Here is an example of the usage of the \code{parallel} argument.
  If there are covariates \code{x1}, \code{x2} and \code{x3}, then
  \code{parallel = TRUE ~ x1 + x2 -1} and
  \code{parallel = FALSE ~ x3} are equivalent.
  This would constrain the regression coefficients for \code{x1} and
  \code{x2} to be equal; those of the intercepts and \code{x3} would be
  different.

}
\section{Warning }{
  No check is made to verify that the response is ordinal if the
  response is a matrix;
  see \code{\link[base:factor]{ordered}}.

}
\seealso{
  \code{\link{sratio}},
  \code{\link{acat}},
  \code{\link{cumulative}},
  \code{\link{multinomial}},
  \code{\link{pneumo}},
  \code{\link{logit}},
  \code{\link{probit}},
  \code{\link{cloglog}},
  \code{\link{cauchit}}.

}
\examples{
pneumo <- transform(pneumo, let = log(exposure.time))
(fit <- vglm(cbind(normal, mild, severe) ~ let,
             cratio(parallel = TRUE), pneumo))
coef(fit, matrix = TRUE)
constraints(fit)
predict(fit)
predict(fit, untransform = TRUE)
}
\keyword{models}
\keyword{regression}
VGAM/man/crashes.Rd0000644000176000001440000000723312136651105013533 0ustar ripleyusers\name{crashes}
\alias{crashi}
\alias{crashf}
\alias{crashtr}
\alias{crashmc}
\alias{crashbc}
\alias{crashp}
\alias{alcoff}
\alias{alclevels}
\docType{data}
\title{Crashes on New Zealand Roads in 2009}
\description{
  A variety of reported crash data cross-classified by time
  (hour of the day) and day of the week, accumulated over 2009.
  These include fatalities and injuries (by car), trucks,
  motorcycles, bicycles and pedestrians.
  There are some alcohol-related data too.

}
\usage{
data(crashi)
data(crashf)
data(crashtr)
data(crashmc)
data(crashbc)
data(crashp)
data(alcoff)
data(alclevels)
}
\format{
  Data frames with hourly times as rows and days of the week as columns.
  The \code{alclevels} dataset has hourly times and alcohol levels.

  \describe{
    \item{Mon, Tue, Wed, Thu, Fri, Sat, Sun}{
    Day of the week.

    }
    \item{0-30, 31-50, 51-80, 81-100, 101-120, 121-150, 151-200,
          201-250, 251-300, 301-350, 350+}{
    Blood alcohol level
    (milligrams alcohol per 100 millilitres of blood).

%   Aggregate number of alcohol offenders or number of dead
%   drivers/passengers on NZ roads.

    }
  }
}
\details{
  Each cell is the aggregate number of crashes reported at each
  hour-day combination, over the 2009 calendar year.
  The \code{rownames} of each data frame are the start times
  (hourly from midnight onwards) on a 24 hour clock,
  e.g., 21 means 9.00pm to 9.59pm.

  For crashes,
  \code{crashi} are the number of injuries by car,
  \code{crashf} are the number of fatalities by car
  (not included in \code{crashi}),
  \code{crashtr} are the number of crashes involving trucks,
  \code{crashmc} are the number of crashes involving motorcyclists,
  \code{crashbc} are the number of crashes involving bicycles,
  and
  \code{crashp} are the number of crashes involving pedestrians.
  For alcohol-related offences,
  \code{alcoff} are the number of alcohol offenders from the breath
  screening of drivers, and
  \code{alclevels} are the blood alcohol levels of fatally injured
  drivers.

}
\source{
\url{http://www.transport.govt.nz/research/Pages/Motor-Vehicle-Crashes-in-New-Zealand-2009.aspx}.
Thanks to Warwick Goold and Alfian F. Hadi for assistance.

}
\references{
Motor Vehicle Crashes in New Zealand 2009;
Statistical Statement Calendar Year 2009.
Ministry of Transport, NZ Government;
Yearly Report 2010.
ISSN: 1176-3949

}
\seealso{
  \code{\link[VGAM]{rrvglm}},
  \code{\link[VGAM]{rcim}},
  \code{\link[VGAM]{grc}}.

}
\examples{
\dontrun{ plot(unlist(alcoff), type = "l", frame.plot = TRUE, axes = FALSE,
     col = "blue", bty = "o",
     main = "Alcoholic offenders on NZ roads, aggregated over 2009",
     sub = "Vertical lines at midnight (purple) and noon (orange)",
     xlab = "Day/hour", ylab = "Number of offenders")
axis(1, at = 1 + (0:6) * 24 + 12, labels = colnames(alcoff))
axis(2, las = 1)
axis(3:4, labels = FALSE, tick = FALSE)
abline(v = sort(1 + c((0:7) * 24, (0:6) * 24 + 12)), lty = "dashed",
       col = c("purple", "orange")) }

# Goodman's RC models
\dontrun{ fitgrc1 <- grc(alcoff)  # Rank-1 model
fitgrc2 <- grc(alcoff, Rank = 2, Corner = FALSE, Uncor = TRUE)
Coef(fitgrc2) }
\dontrun{ biplot(fitgrc2, scaleA = 2.3, Ccol = "blue", Acol = "orange",
       Clabels = as.character(1:23), xlim = c(-1.3, 2.3),
       ylim = c(-1.2, 1)) }
}
\keyword{datasets}
%
%
%\alias{crashi} Table 18, p.39
%\alias{crashf} Table 19, p.40
%\alias{crashtr} Table 30, p.66
%\alias{crashmc} Table 35, p.72
%\alias{crashbc} Table 40, p.77
%\alias{crashp} Table 45, p.84
%\alias{alcoff} Table 3, p.121
%\alias{alclevels} Table 2, p.132
% print(Coef(fitgrc2), dig = 2)
VGAM/man/cqo.Rd0000644000176000001440000005353112136651105012667 0ustar ripleyusers\name{cqo}
\alias{cqo}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Fitting Constrained Quadratic Ordination (CQO)}
\description{
  A \emph{constrained quadratic ordination} (CQO; formerly called
  \emph{canonical Gaussian ordination} or CGO) model is fitted using
  the \emph{quadratic reduced-rank vector generalized linear model}
  (QRR-VGLM) framework.

}
\usage{
cqo(formula, family, data = list(), weights = NULL, subset = NULL,
    na.action = na.fail, etastart = NULL, mustart = NULL,
    coefstart = NULL, control = qrrvglm.control(...), offset = NULL,
    method = "cqo.fit", model = FALSE, x.arg = TRUE, y.arg = TRUE,
    contrasts = NULL, constraints = NULL, extra = NULL,
    smart = TRUE, ...)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
  \item{formula}{
  a symbolic description of the model to be fit.  The RHS of the
  formula is applied to each linear predictor. Different variables in
  each linear predictor can be chosen by specifying constraint matrices.

  }
  \item{family}{
  a function of class \code{"vglmff"} (see \code{\link{vglmff-class}})
  describing what statistical model is to be fitted. This is called a
  ``\pkg{VGAM} family function''.
  See \code{\link{CommonVGAMffArguments}} for general information
  about many types of arguments found in this type of function.
  Currently the following families are supported:
  \code{\link{poissonff}},
  \code{\link{binomialff}}
      (\code{\link{logit}} and \code{\link{cloglog}} links available),
  \code{\link{negbinomial}},
  \code{\link{gamma2}},
  \code{\link{gaussianff}}.
  Sometimes special arguments are required for \code{cqo()}, e.g.,
  \code{binomialff(mv = TRUE)}.
  Also, \code{\link{quasipoissonff}} and \code{\link{quasibinomialff}}
  may or may not work.
% \code{negbinomial(deviance = TRUE)}, % \code{gamma2(deviance = TRUE)}. } \item{data}{ an optional data frame containing the variables in the model. By default the variables are taken from \code{environment(formula)}, typically the environment from which \code{cqo} is called. } \item{weights}{ an optional vector or matrix of (prior) weights to be used in the fitting process. Currently, this argument should not be used. } \item{subset}{ an optional logical vector specifying a subset of observations to be used in the fitting process. } \item{na.action}{ a function which indicates what should happen when the data contain \code{NA}s. The default is set by the \code{na.action} setting of \code{\link[base]{options}}, and is \code{na.fail} if that is unset. The ``factory-fresh'' default is \code{na.omit}. } \item{etastart}{ starting values for the linear predictors. It is a \eqn{M}-column matrix. If \eqn{M = 1} then it may be a vector. Currently, this argument probably should not be used. } \item{mustart}{ starting values for the fitted values. It can be a vector or a matrix. Some family functions do not make use of this argument. Currently, this argument probably should not be used. } \item{coefstart}{ starting values for the coefficient vector. Currently, this argument probably should not be used. } \item{control}{ a list of parameters for controlling the fitting process. See \code{\link{qrrvglm.control}} for details. } \item{offset}{ This argument must not be used. % especially when \code{ITolerances = TRUE}. % a vector or \eqn{M}-column matrix of offset values. % These are \emph{a priori} known and are % added to the linear predictors during fitting. } \item{method}{ the method to be used in fitting the model. The default (and presently only) method \code{cqo.fit} uses \emph{iteratively reweighted least squares} (IRLS). } \item{model}{ a logical value indicating whether the \emph{model frame} should be assigned in the \code{model} slot. } \item{x.arg, y.arg}{ logical values indicating whether the model matrix and response matrix used in the fitting process should be assigned in the \code{x} and \code{y} slots. Note the model matrix is the LM model matrix. % ; to get the VGLM % model matrix type \code{model.matrix(vglmfit)} where % \code{vglmfit} is a \code{vglm} object. } \item{contrasts}{ an optional list. See the \code{contrasts.arg} of \code{model.matrix.default}. } \item{constraints}{ an optional list of constraint matrices. The components of the list must be named with the term it corresponds to (and it must match in character format). Each constraint matrix must have \eqn{M} rows, and be of full-column rank. By default, constraint matrices are the \eqn{M} by \eqn{M} identity matrix unless arguments in the family function itself override these values. If \code{constraints} is used it must contain \emph{all} the terms; an incomplete list is not accepted. Constraint matrices for \eqn{x_2}{x_2} variables are taken as the identity matrix. } \item{extra}{ an optional list with any extra information that might be needed by the family function. } % \item{qr.arg}{ logical value indicating whether % the slot \code{qr}, which returns the QR decomposition of the % VLM model matrix, is returned on the object. % } \item{smart}{ logical value indicating whether smart prediction (\code{\link{smartpred}}) will be used. } \item{\dots}{ further arguments passed into \code{\link{qrrvglm.control}}. 
  }
}
\details{
  QRR-VGLMs or \emph{constrained quadratic ordination} (CQO) models
  are estimated here by maximum likelihood estimation. Optimal linear
  combinations of the environmental variables are computed, called
  \emph{latent variables} (these appear as \code{lv} for \eqn{R=1}
  else \code{lv1}, \code{lv2}, etc. in the output). Here, \eqn{R}
  is the \emph{rank} or the number of ordination axes. Each species'
  response is then a regression of these latent variables using
  quadratic polynomials on a transformed scale (e.g., log for Poisson
  counts, logit for presence/absence responses). The solution is
  obtained iteratively in order to maximize the log-likelihood
  function, or equivalently, minimize the deviance.

  The central formula (for Poisson and binomial species data) is
  given by
  \deqn{\eta = B_1^T x_1 + A \nu +
               \sum_{m=1}^M (\nu^T D_m \nu) e_m}{%
        eta = B_1^T x_1 + A nu + sum_{m=1}^M (nu^T D_m nu) e_m}
  where \eqn{x_1}{x_1} is a vector (usually just a 1 for an intercept),
  \eqn{x_2}{x_2} is a vector of environmental variables,
  \eqn{\nu=C^T x_2}{nu=C^T x_2} is a \eqn{R}-vector of latent
  variables, \eqn{e_m} is a vector of 0s but with a 1 in the \eqn{m}th
  position.
  The \eqn{\eta}{eta} are a vector of linear/additive predictors,
  e.g., the \eqn{m}th element is
  \eqn{\eta_m = \log(E[Y_m])}{eta_m = log(E[Y_m])} for the
  \eqn{m}th species.  The matrices \eqn{B_1}, \eqn{A}, \eqn{C} and
  \eqn{D_m} are estimated from the data, i.e., contain the regression
  coefficients.  The tolerance matrices satisfy
  \eqn{T_s = -\frac12 D_s^{-1}}{T_s = -0.5 * D_s^(-1)}.
  Many important CQO details are directly related to arguments
  in \code{\link{qrrvglm.control}}, e.g., the argument \code{noRRR}
  specifies which variables comprise \eqn{x_1}{x_1}.

  Theoretically, the four most popular \pkg{VGAM} family functions
  to be used with \code{cqo} correspond to the Poisson, binomial,
  normal, and negative binomial distributions. The latter is a
  2-parameter model. All of these are implemented, as well as the
  2-parameter gamma.  The Poisson is or should be catered for by
  \code{\link{quasipoissonff}} and \code{\link{poissonff}}, and the
  binomial by \code{\link{quasibinomialff}} and
  \code{\link{binomialff}}.
  Those beginning with \code{"quasi"} have dispersion parameters that
  are estimated for each species.

%the negative binomial by \code{\link{negbinomial}}, and the normal by
%\code{gaussianff}.
%For overdispersed Poisson data, using \code{\link{quasipoissonff}} is
%strongly recommended over \code{\link{negbinomial}}; the latter is
%\emph{very} sensitive to departures from the model assumptions.

  For initial values, the function \code{.Init.Poisson.QO} should
  work reasonably well if the data is Poisson with species having
  equal tolerances.  It can be quite good on binary data too.
  Otherwise the \code{Cinit} argument in \code{\link{qrrvglm.control}}
  can be used.

%(and negative binomial)

  It is possible to relax the quadratic form to an additive model.
  The result is a data-driven approach rather than a model-driven
  approach, so that CQO is extended to
  \emph{constrained additive ordination} (CAO) when \eqn{R=1}.
  See \code{\link{cao}} for more details.

  In this documentation, \eqn{M} is the number of linear predictors,
  \eqn{S} is the number of responses (species).  Then \eqn{M=S} for
  Poisson and binomial species data, and \eqn{M=2S} for negative
  binomial and gamma distributed species data.

}
\value{
  An object of class \code{"qrrvglm"}.
  Note that the slot \code{misc} has a list component called
  \code{deviance.Bestof} which gives the history of deviances over all
  the iterations.

}
\references{
Yee, T. W. (2004)
A new technique for maximum-likelihood
canonical Gaussian ordination.
\emph{Ecological Monographs},
\bold{74}, 685--701.

ter Braak, C. J. F. and Prentice, I. C. (1988)
A theory of gradient analysis.
\emph{Advances in Ecological Research},
\bold{18}, 271--317.

%Yee, T. W. (2005)
%On constrained and unconstrained
%quadratic ordination.
%\emph{Manuscript in preparation}.

Yee, T. W. (2006)
Constrained additive ordination.
\emph{Ecology}, \bold{87}, 203--213.

}
\author{
Thomas W. Yee.
Thanks to Alvin Sou for converting a lot of the original FORTRAN code
into C.

}
\note{
  By default, a rank-1 equal-tolerances QRR-VGLM model is fitted
  (see \code{\link{qrrvglm.control}} for the default control
  parameters).
  The latent variables are always transformed so that they are
  uncorrelated.
  By default, the argument \code{trace} is \code{TRUE} meaning a
  running log is printed out while the computations are taking place.
  This is because the algorithm is computationally expensive;
  therefore users might think that their computers have frozen if
  \code{trace = FALSE}!

  The argument \code{Bestof} in \code{\link{qrrvglm.control}} controls
  the number of models fitted (each uses different starting values) to
  the data. This argument is important because convergence may be to a
  \emph{local} solution rather than the \emph{global} solution. Using
  more starting values increases the chances of finding the global
  solution.  Always plot an ordination diagram (use the generic
  function \code{\link{lvplot}}) and see if it looks sensible.  Local
  solutions arise because the optimization problem is highly
  nonlinear, and this is particularly true for CAO.

%Convergence of QRR-VGLMs can be difficult, especially for binary
%data. If this is so, then setting \code{ITolerances = TRUE} or
%\code{EqualTolerances = TRUE} may help, especially when the number of
%sites, \eqn{n}, is small.

%If the negative binomial family function \code{\link{negbinomial}} is
%used for \code{cqo} then set \code{negbinomial(deviance = TRUE)}
%is necessary. This means to minimize the deviance, which the fast
%algorithm can handle.

  Many of the arguments applicable to \code{cqo} are common to
  \code{\link{vglm}} and \code{\link{rrvglm.control}}.  The most
  important arguments are \code{Rank}, \code{noRRR}, \code{Bestof},
  \code{ITolerances}, \code{EqualTolerances}, \code{isdlv}, and
  \code{MUXfactor}.

  When fitting a 2-parameter model such as the negative binomial or
  gamma, it pays to have \code{EqualTolerances = TRUE} and
  \code{ITolerances = FALSE}.  This is because numerical problems can
  occur when fitting the model far away from the global solution when
  \code{ITolerances = TRUE}.  Setting the two arguments as described
  will slow down the computation considerably; however, it is
  numerically more stable.

  In Example 1 below, an unequal-tolerances rank-1 QRR-VGLM is fitted
  to the hunting spiders dataset, and Example 2 is the
  equal-tolerances version.  The latter is less likely to have
  convergence problems compared to the unequal-tolerances model.
  In Example 3 below, an equal-tolerances rank-2 QRR-VGLM is fitted to
  the hunting spiders dataset.
  The numerical difficulties encountered in fitting the rank-2 model
  suggest that a rank-1 model is probably preferable.
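  (A hedged illustration of the advice above about 2-parameter models;
  \code{spp} denotes a hypothetical matrix of species counts and
  \code{x2}, \code{x3} hypothetical environmental variables:
  such a fit might be specified as
  \code{cqo(spp ~ x2 + x3, negbinomial, EqualTolerances = TRUE,
  ITolerances = FALSE)}.)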
In Example 4 below, constrained binary quadratic ordination (in old nomenclature, constrained Gaussian logit ordination) is fitted to some simulated data coming from a species packing model. With multivariate binary responses, one must use \code{mv = TRUE} to indicate that the response (matrix) is multivariate. Otherwise, it is interpreted as a single binary response variable. In Example 5 below, the deviance residuals are plotted for each species. This is useful as a diagnostic plot. This is done by (re)regressing each species separately against the latent variable. Sometime in the future, this function might handle input of the form \code{cqo(x, y)}, where \code{x} and \code{y} are matrices containing the environmental and species data respectively. } \section{Warning }{ Local solutions are not uncommon when fitting CQO models. To increase the chances of obtaining the global solution, increase the value of the argument \code{Bestof} in \code{\link{qrrvglm.control}}. For reproducibility of the results, it pays to set a different random number seed before calling \code{cqo} (the function \code{\link[base:Random]{set.seed}} does this). The function \code{cqo} chooses initial values for \bold{C} using \code{.Init.Poisson.QO()} if \code{Use.Init.Poisson.QO = TRUE}, else random numbers. Unless \code{ITolerances = TRUE} or \code{EqualTolerances = FALSE}, CQO is computationally expensive. It pays to keep the rank down to 1 or 2. If \code{EqualTolerances = TRUE} and \code{ITolerances = FALSE} then the cost grows quickly with the number of species and sites (in terms of memory requirements and time). The data needs to conform quite closely to the statistical model, and the environmental range of the data should be wide in order for the quadratics to fit the data well (bell-shaped response surfaces). If not, RR-VGLMs will be more appropriate because the response is linear on the transformed scale (e.g., log or logit) and the ordination is called \emph{constrained linear ordination} or CLO. Like many regression models, CQO is sensitive to outliers (in the environmental and species data), sparse data, high leverage points, multicollinearity etc. For these reasons, it is necessary to examine the data carefully for these features and take corrective action (e.g., omitting certain species, sites, environmental variables from the analysis, transforming certain environmental variables, etc.). Any optimum lying outside the convex hull of the site scores should not be trusted. Fitting a CAO is recommended first, then upon transformations etc., possibly a CQO can be fitted. For binary data, it is necessary to have `enough' data. In general, the number of sites \eqn{n} ought to be much larger than the number of species \emph{S}, e.g., at least 100 sites for two species. Compared to count (Poisson) data, numerical problems occur more frequently with presence/absence (binary) data. For example, if \code{Rank = 1} and if the response data for each species is a string of all absences, then all presences, then all absences (when enumerated along the latent variable) then infinite parameter estimates will occur. In general, setting \code{ITolerances = TRUE} may help. This function was formerly called \code{cgo}. It has been renamed to reinforce a new nomenclature described in Yee (2006). 
} \seealso{ \code{\link{qrrvglm.control}}, \code{\link{Coef.qrrvglm}}, \code{\link{predictqrrvglm}}, \code{\link{rcqo}}, \code{\link{cao}}, \code{\link{uqo}}, \code{\link{rrvglm}}, % \code{\link{rrvglm.control}}, \code{\link{poissonff}}, \code{\link{binomialff}}, \code{\link{negbinomial}}, \code{\link{gamma2}}, \code{\link{lvplot.qrrvglm}}, \code{\link{perspqrrvglm}}, \code{\link{trplot.qrrvglm}}, % \code{\link{vcovqrrvglm}}, \code{\link{vglm}}, \code{\link[base:Random]{set.seed}}, \code{\link{hspider}}. Documentation accompanying the \pkg{VGAM} package at \url{http://www.stat.auckland.ac.nz/~yee} contains further information and examples. } \examples{ \dontrun{ # Example 1; Fit an unequal tolerances model to the hunting spiders data hspider[,1:6] <- scale(hspider[,1:6]) # Standardized environmental variables set.seed(1234) # For reproducibility of the results p1ut <- cqo(cbind(Alopacce, Alopcune, Alopfabr, Arctlute, Arctperi, Auloalbi, Pardlugu, Pardmont, Pardnigr, Pardpull, Trocterr, Zoraspin) ~ WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux, fam = poissonff, data = hspider, Crow1positive = FALSE, EqualTol = FALSE) sort(p1ut@misc$deviance.Bestof) # A history of all the iterations if(deviance(p1ut) > 1177) warning("suboptimal fit obtained") S <- ncol(depvar(p1ut)) # Number of species clr <- (1:(S+1))[-7] # Omits yellow lvplot(p1ut, y = TRUE, lcol = clr, pch = 1:S, pcol = clr, las = 1) # ordination diagram legend("topright", leg = colnames(depvar(p1ut)), col = clr, pch = 1:S, merge = TRUE, bty = "n", lty = 1:S, lwd = 2) (cp <- Coef(p1ut)) (a <- cp@lv[cp@lvOrder]) # The ordered site scores along the gradient # Names of the ordered sites along the gradient: rownames(cp@lv)[cp@lvOrder] (aa <- (cp@Optimum)[,cp@OptimumOrder]) # The ordered optima along the gradient aa <- aa[!is.na(aa)] # Delete the species that is not unimodal names(aa) # Names of the ordered optima along the gradient trplot(p1ut, whichSpecies = 1:3, log = "xy", type = "b", lty = 1, lwd = 2, col = c("blue","red","green"), label = TRUE) -> ii # trajectory plot legend(0.00005, 0.3, paste(ii$species[, 1], ii$species[, 2], sep = " and "), lwd = 2, lty = 1, col = c("blue", "red", "green")) abline(a = 0, b = 1, lty = "dashed") S <- ncol(depvar(p1ut)) # Number of species clr <- (1:(S+1))[-7] # Omits yellow persp(p1ut, col = clr, label = TRUE, las = 1) # perspective plot # Example 2; Fit an equal tolerances model. Less numerically fraught. set.seed(1234) p1et <- cqo(cbind(Alopacce, Alopcune, Alopfabr, Arctlute, Arctperi, Auloalbi, Pardlugu, Pardmont, Pardnigr, Pardpull, Trocterr, Zoraspin) ~ WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux, poissonff, data = hspider, Crow1positive = FALSE) sort(p1et@misc$deviance.Bestof) # A history of all the iterations if (deviance(p1et) > 1586) warning("suboptimal fit obtained") S <- ncol(depvar(p1et)) # Number of species clr <- (1:(S+1))[-7] # Omits yellow persp(p1et, col = clr, label = TRUE, las = 1) # Example 3: A rank-2 equal tolerances CQO model with Poisson data # This example is numerically fraught... need IToler = TRUE too. 
set.seed(555) p2 <- cqo(cbind(Alopacce, Alopcune, Alopfabr, Arctlute, Arctperi, Auloalbi, Pardlugu, Pardmont, Pardnigr, Pardpull, Trocterr, Zoraspin) ~ WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux, poissonff, data = hspider, Crow1positive = FALSE, IToler = TRUE, Rank = 2, Bestof = 3, isdlv = c(2.1, 0.9)) sort(p2@misc$deviance.Bestof) # A history of all the iterations if(deviance(p2) > 1127) warning("suboptimal fit obtained") lvplot(p2, ellips = FALSE, label = TRUE, xlim = c(-3,4), C = TRUE, Ccol = "brown", sites = TRUE, scol = "grey", pcol = "blue", pch = "+", chull = TRUE, ccol = "grey") # Example 4: species packing model with presence/absence data set.seed(2345) n <- 200; p <- 5; S <- 5 mydata <- rcqo(n, p, S, fam = "binomial", hiabundance = 4, EqualTol = TRUE, ESOpt = TRUE, EqualMax = TRUE) myform <- attr(mydata, "formula") set.seed(1234) b1et <- cqo(myform, binomialff(mv = TRUE, link = "cloglog"), data = mydata) sort(b1et@misc$deviance.Bestof) # A history of all the iterations lvplot(b1et, y = TRUE, lcol = 1:S, pch = 1:S, pcol = 1:S, las = 1) Coef(b1et) # Compare the fitted model with the 'truth' cbind(truth = attr(mydata, "ccoefficients"), fitted = ccoef(b1et)) # Example 5: Plot the deviance residuals for diagnostic purposes set.seed(1234) p1et <- cqo(cbind(Alopacce, Alopcune, Alopfabr, Arctlute, Arctperi, Auloalbi, Pardlugu, Pardmont, Pardnigr, Pardpull, Trocterr, Zoraspin) ~ WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux, poissonff, data = hspider, EqualTol = TRUE, trace = FALSE) sort(p1et@misc$deviance.Bestof) # A history of all the iterations if(deviance(p1et) > 1586) warning("suboptimal fit obtained") S <- ncol(depvar(p1et)) par(mfrow = c(3, 4)) for(ii in 1:S) { tempdata <- data.frame(lv1 = c(lv(p1et)), sppCounts = depvar(p1et)[, ii]) tempdata <- transform(tempdata, myOffset = -0.5 * lv1^2) # For species ii, refit the model to get the deviance residuals fit1 <- vglm(sppCounts ~ offset(myOffset) + lv1, fam = poissonff, data = tempdata, trace = FALSE) # For checking: this should be 0 print("max(abs(c(Coef(p1et)@B1[1,ii], Coef(p1et)@A[ii,1]) - coef(fit1)))") print( max(abs(c(Coef(p1et)@B1[1,ii], Coef(p1et)@A[ii,1]) - coef(fit1))) ) # Plot the deviance residuals devresid <- resid(fit1, type = "deviance") predvalues <- predict(fit1) + fit1@offset ooo <- with(tempdata, order(lv1)) with(tempdata, plot(lv1, predvalues + devresid, col = "darkgreen", xlab = "lv1", ylab = "", main = colnames(depvar(p1et))[ii])) with(tempdata, lines(lv1[ooo], predvalues[ooo], col = "blue")) } } } \keyword{models} \keyword{regression} %legend("topright", x=1, y=135, leg = colnames(depvar(p1ut)), col = clr, % pch = 1:S, merge = TRUE, bty = "n", lty = 1:S, lwd = 2) VGAM/man/constraints.Rd0000644000176000001440000001225212136651105014447 0ustar ripleyusers\name{constraints} \alias{constraints} \alias{constraints.vlm} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Constraint Matrices } \description{ Extractor function for the \emph{constraint matrices} of objects in the \pkg{VGAM} package. } \usage{ constraints(object, ...) constraints.vlm(object, type = c("lm", "term"), all = TRUE, which, matrix.out = FALSE, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{object}{ Some \pkg{VGAM} object, for example, having class \code{\link{vglmff-class}}. } \item{type}{ Character. Whether LM- or term-type constraints are to be returned. 
The number of such matrices returned is equal to \code{nvar(object, type = "lm")} and the number of terms, respectively. } \item{all, which}{ If \code{all = FALSE} then \code{which} gives the integer index or a vector of logicals specifying the selection. } \item{matrix.out}{ Logical. If \code{TRUE} then the constraint matrices are \code{\link[base]{cbind}()ed} together. The result is usually more compact because the default is a list of constraint matrices. } \item{\dots}{ Other possible arguments such as \code{type}. } } \details{ Constraint matrices describe the relationship of coefficients/component functions of a particular explanatory variable between the linear/additive predictors in VGLM/VGAM models. For example, they may be all different (constraint matrix is the identity matrix) or all the same (constraint matrix has one column and has unit values). VGLMs and VGAMs have constraint matrices which are \emph{known}. The class of RR-VGLMs have constraint matrices which are \emph{unknown} and are to be estimated. } \value{ The extractor function \code{constraints()} returns a list comprising of constraint matrices---usually one for each column of the VLM model matrix, and in that order. The list is labelled with the variable names. Each constraint matrix has \eqn{M} rows, where \eqn{M} is the number of linear/additive predictors, and whose rank is equal to the number of columns. A model with no constraints at all has an order \eqn{M} identity matrix as each variable's constraint matrix. For \code{\link{vglm}} and \code{\link{vgam}} objects, feeding in \code{type = "term"} constraint matrices back into the same model should work and give an identical model. The default are the \code{"lm"}-type constraint matrices; this is a list with one constraint matrix per column of the LM matrix. See the \code{constraints} argument of \code{\link{vglm}}, and the example below. } \author{T. W. Yee } \note{ In all \pkg{VGAM} family functions \code{zero = NULL} means none of the linear/additive predictors are modelled as intercepts-only. Other arguments found in certain \pkg{VGAM} family functions which affect constraint matrices include \code{parallel} and \code{exchangeable}. The \code{constraints} argument in \code{\link{vglm}} and \code{\link{vgam}} allows constraint matrices to be inputted. If so, then \code{constraints(fit, type = "lm")} can be fed into the \code{constraints} argument of the same object to get the same model. The \code{xij} argument does not affect constraint matrices; rather, it allows each row of the constraint matrix to be multiplied by a specified vector. } \references{ Yee, T. W. and Wild, C. J. (1996) Vector generalized additive models. \emph{Journal of the Royal Statistical Society, Series B, Methodological}, \bold{58}, 481--493. Yee, T. W. and Hastie, T. J. (2003) Reduced-rank vector generalized linear models. \emph{Statistical Modelling}, \bold{3}, 15--41. \url{http://www.stat.auckland.ac.nz/~yee} contains additional information. } \seealso{ \code{\link{is.parallel}}, \code{\link{is.zero}}. VGLMs are described in \code{\link{vglm-class}}; RR-VGLMs are described in \code{\link{rrvglm-class}}. Arguments such as \code{zero} and \code{parallel} found in many \pkg{VGAM} family functions are a way of creating/modifying constraint matrices conveniently, e.g., see \code{\link{zero}}. See \code{\link{CommonVGAMffArguments}} for more information. 
}
\examples{
# Fit the proportional odds model:
pneumo <- transform(pneumo, let = log(exposure.time))
(fit1 <- vglm(cbind(normal, mild, severe) ~ bs(let, 3),
              cumulative(parallel = TRUE, reverse = TRUE), pneumo))
coef(fit1, matrix = TRUE)
constraints(fit1)  # Parallel assumption results in this
constraints(fit1, type = "term")  # Same as the default ("lm"-type)
is.parallel(fit1)

# An equivalent model to fit1 (needs the type "term" constraints):
clist.term <- constraints(fit1, type = "term")  # The "term"-type constraints
(fit2 <- vglm(cbind(normal, mild, severe) ~ bs(let, 3),
              cumulative(reverse = TRUE), pneumo,
              constraints = clist.term))
max(abs(coef(fit1, matrix = TRUE) -
        coef(fit2, matrix = TRUE)))  # Should be zero

# Fit a rank-1 stereotype (RR-multinomial logit) model:
data(car.all)
fit <- rrvglm(Country ~ Width + Height + HP, multinomial, car.all,
              Rank = 1)
constraints(fit)  # All except the first are the estimated A matrix
}
\keyword{models}
\keyword{regression}
VGAM/man/coalminers.Rd0000644000176000001440000000216012136651105014231 0ustar ripleyusers\name{coalminers}
\alias{coalminers}
\docType{data}
\title{ Breathlessness and Wheeze Amongst Coalminers Data}
\description{
  Coalminers who are smokers without radiological pneumoconiosis,
  classified by age, breathlessness and wheeze.

}
\usage{data(coalminers)}
\format{
  A data frame with 9 age groups with the following 5 columns.
  \describe{
    \item{BW}{Counts with breathlessness and wheeze. }
    \item{BnW}{Counts with breathlessness but no wheeze. }
    \item{nBW}{Counts with no breathlessness but wheeze. }
    \item{nBnW}{Counts with neither breathlessness nor wheeze. }
    \item{age}{Age of the coal miners (actually, the midpoints of the
    5-year category ranges). }
  }
}
\details{
  The data were published in Ashford and Sowden (1970).
  A more recent analysis is McCullagh and Nelder (1989, Section 6.6).

}
\source{
  Ashford, J. R. and Sowden, R. R. (1970)
  Multi-variate probit analysis.
  \emph{Biometrics}, \bold{26}, 535--546.

}
\references{
  McCullagh, P. and Nelder, J. A. (1989)
  \emph{Generalized Linear Models}.
  2nd ed. London: Chapman & Hall.

}
\examples{
str(coalminers)
}
\keyword{datasets}
VGAM/man/cloglog.Rd0000644000176000001440000000761412136651105013534 0ustar ripleyusers\name{cloglog}
\alias{cloglog}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Complementary Log-log Link Function }
\description{
  Computes the complementary log-log transformation, including its
  inverse and the first two derivatives.

}
\usage{
cloglog(theta, bvalue = NULL, inverse = FALSE, deriv = 0,
        short = TRUE, tag = FALSE)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
  \item{theta}{
  Numeric or character.
  See below for further details.

  }
  \item{bvalue}{
  See \code{\link{Links}} for general information about links.

  }
  \item{inverse, deriv, short, tag}{
  Details at \code{\link{Links}}.

  }
}
\details{
  The complementary log-log link function is commonly used for
  parameters that lie in the unit interval.
  Numerical values of \code{theta} close to 0 or 1 or out of range
  result in \code{Inf}, \code{-Inf}, \code{NA} or \code{NaN}.

}
\value{
  For \code{deriv = 0}, the complementary log-log of \code{theta},
  i.e., \code{log(-log(1 - theta))} when \code{inverse = FALSE},
  and if \code{inverse = TRUE} then \code{1 - exp(-exp(theta))}.

  For \code{deriv = 1}, the function returns
  \emph{d} \code{theta} / \emph{d} \code{eta} as a function of
  \code{theta} if \code{inverse = FALSE},
  else if \code{inverse = TRUE} then it returns the reciprocal.
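  For instance, working directly from the definitions above,
  \eqn{d\theta/d\eta = -(1-\theta)\log(1-\theta)}{dtheta/deta =
  -(1-theta) * log(1-theta)} for this link, since
  \eqn{1-\theta = \exp(-e^{\eta})}{1-theta = exp(-exp(eta))}.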
Here, all logarithms are natural logarithms, i.e., to base \eqn{e}. }
\references{ McCullagh, P. and Nelder, J. A. (1989) \emph{Generalized Linear Models}, 2nd ed. London: Chapman & Hall. }
\author{ Thomas W. Yee }
\note{ Numerical instability may occur when \code{theta} is close to 1 or 0. One way of overcoming this is to use \code{bvalue}. Changing 1s to 0s and 0s to 1s in the response means that effectively a loglog link is fitted; that is, transform \eqn{y} by \eqn{1-y}. That is why only one of \code{\link{cloglog}} and \code{loglog} is written. With constrained ordination (e.g., \code{\link{cqo}} and \code{\link{cao}}) used with \code{\link{binomialff}}, a complementary log-log link function is preferred over the default \code{\link{logit}} link, for a good reason. See the example below. In terms of the threshold approach with cumulative probabilities for an ordinal response this link function corresponds to the extreme value distribution. }
\seealso{ \code{\link{Links}}, \code{\link{logit}}, \code{\link{probit}}, \code{\link{cauchit}}. }
\examples{
p <- seq(0.01, 0.99, by = 0.01)
cloglog(p)
max(abs(cloglog(cloglog(p), inverse = TRUE) - p))  # Should be 0

p <- c(seq(-0.02, 0.02, by = 0.01), seq(0.97, 1.02, by = 0.01))
cloglog(p)  # Has NAs
cloglog(p, bvalue = .Machine$double.eps)  # Has no NAs

\dontrun{
p <- seq(0.01, 0.99, by = 0.01)
plot(p, logit(p), type = "l", col = "limegreen", ylab = "transformation",
     lwd = 2, las = 1, main = "Some probability link functions")
lines(p, probit(p), col = "purple", lwd = 2)
lines(p, cloglog(p), col = "chocolate", lwd = 2)
lines(p, cauchit(p), col = "tan", lwd = 2)
abline(v = 0.5, h = 0, lty = "dashed")
legend(0.1, 4, c("logit", "probit", "cloglog", "cauchit"),
       col = c("limegreen", "purple", "chocolate", "tan"), lwd = 2)
}

\dontrun{
# This example shows that a cloglog link is preferred over the logit
n = 500; p = 5; S = 3; Rank = 1  # Species packing model:
mydata = rcqo(n, p, S, EqualTol = TRUE, ESOpt = TRUE, EqualMax = TRUE,
              family = "binomial", hiabundance = 5, seed = 123, Rank = Rank)
fitc = cqo(attr(mydata, "formula"), ITol = TRUE, data = mydata,
           fam = binomialff(mv = TRUE, link = "cloglog"), Rank = Rank)
fitl = cqo(attr(mydata, "formula"), ITol = TRUE, data = mydata,
           fam = binomialff(mv = TRUE, link = "logit"), Rank = Rank)

# Compare the fitted models (cols 1 and 3) with the truth (col 2)
cbind(ccoef(fitc), attr(mydata, "ccoefficients"), ccoef(fitl))
}
} \keyword{math} \keyword{models} \keyword{regression}
VGAM/man/clo.Rd0000644000176000001440000000264312136651105012660 0ustar ripleyusers\name{clo} \alias{clo}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Redirects the user to rrvglm }
\description{ Redirects the user to the function \code{\link{rrvglm}}. }
\usage{ clo(...) }
%- maybe also 'usage' for other objects documented here.
\arguments{ \item{\dots}{ Ignored. } }
\details{ CLO stands for \emph{constrained linear ordination}, and is fitted with a statistical class of models called \emph{reduced-rank vector generalized linear models} (RR-VGLMs). It allows for generalized reduced-rank regression in that response types such as Poisson counts and presence/absence data can be handled. Currently in the \pkg{VGAM} package, \code{\link{rrvglm}} is used to fit RR-VGLMs. However, the author's opinion is that linear responses to a latent variable (composite environmental gradient) are not as common as unimodal responses; therefore \code{\link{cqo}} is often more appropriate. The new CLO/CQO/CAO nomenclature is described in Yee (2006).
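Since \code{clo} itself only issues an error message, a CLO fit is obtained by calling \code{\link{rrvglm}} directly; a minimal sketch (reusing the \code{car.all} example from \code{\link{constraints}}):
\preformatted{
data(car.all)
fit <- rrvglm(Country ~ Width + Height + HP, multinomial,
              data = car.all, Rank = 1)  # A rank-1 CLO
}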
} \value{ Nothing is returned; an error message is issued. }
\references{ Yee, T. W. (2006) Constrained additive ordination. \emph{Ecology}, \bold{87}, 203--213. Yee, T. W. and Hastie, T. J. (2003) Reduced-rank vector generalized linear models. \emph{Statistical Modelling}, \bold{3}, 15--41. }
\author{Thomas W. Yee}
\seealso{ \code{\link{rrvglm}}, \code{\link{cqo}}. }
\examples{ \dontrun{ clo() } } \keyword{models} \keyword{regression}
VGAM/man/chisq.Rd0000644000176000001440000000273112136651105013210 0ustar ripleyusers\name{chisq} \alias{chisq}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Chi-squared Distribution }
\description{ Maximum likelihood estimation of the degrees of freedom for a chi-squared distribution. }
\usage{ chisq(link = "loge", zero = NULL) }
%- maybe also 'usage' for other objects documented here.
\arguments{ \item{link, zero}{ See \code{\link{CommonVGAMffArguments}} for information. } }
\details{ The degrees of freedom is treated as a parameter to be estimated, and as a real number (not an integer). Being positive, a log link is used by default. Fisher scoring is used. }
\value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. }
\references{ Evans, M., Hastings, N. and Peacock, B. (2000) \emph{Statistical Distributions}, New York: Wiley-Interscience, Third edition. }
\author{ T. W. Yee }
\note{ Multiple responses are permitted. There may be convergence problems if the degrees of freedom is very large or close to zero. }
\seealso{ \code{\link[stats]{Chisquare}}, \code{\link{normal1}}. }
\examples{
cdata <- data.frame(x2 = runif(nn <- 1000))
cdata <- transform(cdata, y1 = rchisq(nn, df = exp(1 - 1 * x2)),
                          y2 = rchisq(nn, df = exp(2 - 2 * x2)))
fit <- vglm(cbind(y1, y2) ~ x2, chisq, cdata, trace = TRUE)
coef(fit, matrix = TRUE)
} \keyword{models} \keyword{regression}
VGAM/man/chinese.nz.Rd0000644000176000001440000000532212136651105014144 0ustar ripleyusers\name{chinese.nz} \alias{chinese.nz} \docType{data}
\title{ Chinese Population in New Zealand 1867--2001 Data}
\description{ The Chinese population in New Zealand from 1867 to 2001, along with the whole of the New Zealand population. }
\usage{data(chinese.nz)}
\format{ A data frame with 27 observations on the following 4 variables. \describe{ \item{\code{year}}{Year. } \item{\code{male}}{Number of Chinese males. } \item{\code{female}}{Number of Chinese females. } \item{\code{nz}}{Total number in the New Zealand population. } } }
\details{ Historically, there was a large exodus of Chinese from the Guangdong region starting in the mid-1800s to the gold fields of the South Island of New Zealand, California, and Southern Australia, etc. Discrimination then meant that only men were allowed entry, to hinder permanent settlement. In the case of New Zealand, the government relaxed its immigration laws after WWII to allow wives of Chinese already in NZ to join them because China had been among the Allied powers. Gradual relaxation of the immigration laws and an influx during the 1980s meant that the Chinese population became increasingly demographically normal over time. The NZ totals for the years 1867 and 1871 exclude the Maori population. Three modifications have been made to the female column to make the data internally consistent with the original table.
% The second value of 4583 looks erroneous, as seen by the plot below.
} %\source{ %} \references{ Page 6 of \emph{Aliens At My Table: Asians as New Zealanders See Them} by M. Ip and N. Murphy, (2005). Penguin Books. Auckland, New Zealand. } \examples{ \dontrun{ par(mfrow = c(1, 2)) plot(female / (male + female) ~ year, chinese.nz, type = "b", ylab = "Proportion", col = "blue", las = 1, main = "Proportion of NZ Chinese that are female") abline(h = 0.5, lty = "dashed", col = "gray") fit1.cnz <- vglm(cbind(female, male) ~ year, binomialff, chinese.nz) fit2.cnz <- vglm(cbind(female, male) ~ poly(year, 2), binomialff, chinese.nz) fit4.cnz <- vglm(cbind(female, male) ~ bs(year, 5), binomialff, chinese.nz) lines(fitted(fit1.cnz) ~ year, chinese.nz, col = "purple") lines(fitted(fit2.cnz) ~ year, chinese.nz, col = "green") lines(fitted(fit4.cnz) ~ year, chinese.nz, col = "orange", lwd = 2) legend("bottomright", col = c("purple", "green", "orange"), lty = 1, leg = c("linear", "quadratic", "B-spline")) plot(100*(male+female)/nz ~ year, chinese.nz, type = "b", ylab = "Percent", ylim = c(0, max(100*(male+female)/nz)), col = "blue", las = 1, main = "Percent of NZers that are Chinese") abline(h = 0, lty = "dashed", col = "gray") } } \keyword{datasets} % Albany, Auckland, New Zealand. VGAM/man/chest.nz.Rd0000644000176000001440000000254712136651105013642 0ustar ripleyusers\name{chest.nz} \alias{chest.nz} \docType{data} \title{ Chest Pain in NZ Adults Data} \description{ Presence/absence of chest pain in 10186 New Zealand adults. } \usage{data(chest.nz)} \format{ A data frame with 73 rows and the following 5 variables. \describe{ \item{age}{a numeric vector; age (years).} \item{nolnor}{a numeric vector of counts; no pain on LHS or RHS.} \item{nolr}{a numeric vector of counts; no pain on LHS but pain on RHS.} \item{lnor}{a numeric vector of counts; no pain on RHS but pain on LHS.} \item{lr}{a numeric vector of counts; pain on LHS and RHS of chest.} } } \details{ Each adult was asked their age and whether they experienced any pain or discomfort in their chest over the last six months. If yes, they indicated whether it was on their LHS and/or RHS of their chest. } \source{ MacMahon, S., Norton, R., Jackson, R., Mackie, M. J., Cheng, A., Vander Hoorn, S., Milne, A., McCulloch, A. (1995) Fletcher Challenge-University of Auckland Heart & Health Study: design and baseline findings. \emph{New Zealand Medical Journal}, \bold{108}, 499--502. } \examples{ \dontrun{ fit <- vgam(cbind(nolnor, nolr, lnor, lr) ~ s(age, c(4, 3)), binom2.or(exchan = TRUE, zero = NULL), data = chest.nz) coef(fit, matrix = TRUE) } \dontrun{ plot(fit, which.cf = 2, se = TRUE) } } \keyword{datasets} VGAM/man/cgumbel.Rd0000644000176000001440000001011312136651105013510 0ustar ripleyusers\name{cgumbel} \alias{cgumbel} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Censored Gumbel Distribution } \description{ Maximum likelihood estimation of the 2-parameter Gumbel distribution when there are censored observations. A matrix response is not allowed. } \usage{ cgumbel(llocation = "identity", lscale = "loge", iscale = NULL, mean = TRUE, percentiles = NULL, zero = 2) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{llocation, lscale}{ Character. Parameter link functions for the location and (positive) \eqn{scale} parameters. See \code{\link{Links}} for more choices. } \item{iscale}{ Numeric and positive. Initial value for \eqn{scale}. Recycled to the appropriate length. In general, a larger value is better than a smaller value. 
The default is to choose the value internally. }
\item{mean}{ Logical. Return the mean? If \code{TRUE} then the mean is returned, otherwise percentiles given by the \code{percentiles} argument. }
\item{percentiles}{ Numeric with values between 0 and 100. If \code{mean = FALSE} then the fitted values are percentiles which must be specified by this argument. }
\item{zero}{ An integer-valued vector specifying which linear/additive predictors are modelled as intercepts only. The value (possibly values) must be from the set \{1,2\} corresponding respectively to \eqn{location} and \eqn{scale}. If \code{zero = NULL} then all linear/additive predictors are modelled as a linear combination of the explanatory variables. The default is to fit the scale parameter as an intercept only. } }
\details{ This \pkg{VGAM} family function is like \code{\link{gumbel}} but handles observations that are left-censored (so that the true value would be less than the observed value) or right-censored (so that the true value would be greater than the observed value). To indicate which type of censoring, input \code{extra = list(leftcensored = vec1, rightcensored = vec2)} where \code{vec1} and \code{vec2} are logical vectors the same length as the response. If the two components of this list are missing then the logical values are taken to be \code{FALSE}. The fitted object has these two components stored in the \code{extra} slot. }
\value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. }
\references{ Coles, S. (2001) \emph{An Introduction to Statistical Modeling of Extreme Values}. London: Springer-Verlag. }
\author{ T. W. Yee }
\section{Warning}{ Numerical problems may occur if the amount of censoring is excessive. }
\note{ See \code{\link{gumbel}} for details about the Gumbel distribution. The initial values are based on assuming that all observations are uncensored, and therefore could be improved upon. }
\seealso{ \code{\link{gumbel}}, \code{\link{egumbel}}, \code{\link{rgumbel}}, \code{\link{guplot}}, \code{\link{gev}}, \code{\link{venice}}. }
\examples{
# Example 1
ystar <- venice[["r1"]]  # Use the first order statistic as the response
nn <- length(ystar)
L <- runif(nn, 100, 104)  # Lower censoring points
U <- runif(nn, 130, 135)  # Upper censoring points
y <- pmax(L, ystar)  # Left censored
y <- pmin(U, y)      # Right censored
extra <- list(leftcensored = ystar < L, rightcensored = ystar > U)
fit <- vglm(y ~ scale(year), data = venice, trace = TRUE, extra = extra,
            cgumbel(mean = FALSE, perc = c(5, 25, 50, 75, 95)))
coef(fit, matrix = TRUE)
head(fitted(fit))
fit@extra

# Example 2: simulated data
nn <- 1000
ystar <- rgumbel(nn, loc = 1, scale = exp(0.5))  # The uncensored data
L <- runif(nn, -1, 1)  # Lower censoring points
U <- runif(nn, 2, 5)   # Upper censoring points
y <- pmax(L, ystar)  # Left censored
y <- pmin(U, y)      # Right censored
\dontrun{par(mfrow = c(1, 2)); hist(ystar); hist(y)}
extra <- list(leftcensored = ystar < L, rightcensored = ystar > U)
fit <- vglm(y ~ 1, trace = TRUE, extra = extra, cgumbel)
coef(fit, matrix = TRUE)
} \keyword{models} \keyword{regression}
VGAM/man/cgo.Rd0000644000176000001440000000220212136651105012645 0ustar ripleyusers\name{cgo} \alias{cgo}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Redirects the user to cqo }
\description{ Redirects the user to the function \code{\link{cqo}}. }
\usage{ cgo(...) }
%- maybe also 'usage' for other objects documented here.
\arguments{ \item{\dots}{ Ignored. } }
\details{ The former function \code{cgo} has been renamed \code{\link{cqo}} because CGO (for \emph{canonical Gaussian ordination}) is a confusing and inaccurate name. CQO (for \emph{constrained quadratic ordination}) is better. This new nomenclature is described in Yee (2006). }
\value{ Nothing is returned; an error message is issued. }
\references{ Yee, T. W. (2004) A new technique for maximum-likelihood canonical Gaussian ordination. \emph{Ecological Monographs}, \bold{74}, 685--701. Yee, T. W. (2006) Constrained additive ordination. \emph{Ecology}, \bold{87}, 203--213. }
\author{Thomas W. Yee}
\section{Warning }{ Therefore, the code in Yee (2004) will not run without changing the \code{"g"} to a \code{"q"}. }
\seealso{ \code{\link{cqo}}. }
\examples{ \dontrun{ cgo() } } \keyword{models} \keyword{regression}
VGAM/man/cenpoisson.Rd0000644000176000001440000001115112136651105014255 0ustar ripleyusers\name{cenpoisson}
%\alias{cenpoisson}
\alias{cenpoisson}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Censored Poisson Family Function }
\description{ Family function for a censored Poisson response. }
\usage{ cenpoisson(link = "loge", imu = NULL) }
%- maybe also 'usage' for other objects documented here.
\arguments{ \item{link}{ Link function applied to the mean; see \code{\link{Links}} for more choices. } \item{imu}{ Optional initial value; see \code{\link{CommonVGAMffArguments}} for more information. } }
\details{ Often a table of Poisson counts has an entry \emph{J+} meaning \eqn{\ge J}{>= J}. This family function is similar to \code{\link{poissonff}} but handles such censored data. The input requires \code{\link{SurvS4}}. Only a univariate response is allowed. The Newton-Raphson algorithm is used. }
\value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. }
\references{ See \pkg{survival} for background. }
\author{ Thomas W. Yee }
\note{ The function \code{\link{poissonff}} should be used when there are no censored observations. Also, \code{NA}s are not permitted with \code{\link{SurvS4}}, nor is \code{type = "counting"}. }
\section{Warning }{ As the response is discrete, care is required with \code{\link{Surv}}, especially with \code{"interval"} censored data because of the \code{(start, end]} format. See the examples below. The examples have \code{y < L} as left censored and \code{y >= U} (formatted as \code{U+}) as right censored observations; therefore \code{L <= y < U} is for uncensored and/or interval censored observations. Consequently the input must be tweaked to conform to the \code{(start, end]} format. }
\seealso{ \code{\link{SurvS4}}, \code{\link{poissonff}}, \code{\link{Links}}.
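The right-censoring coding used below can be previewed in miniature (a minimal sketch; \code{U} plays the same role as in Example 1 of the examples):
\preformatted{
U <- 20
yy <- c(18, 20, 25)
SurvS4(pmin(yy, U), ifelse(yy >= U, 0, 1))  # 18, 20+, 20+ (20+ means >= 20)
}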
} \examples{ # Example 1: right censored data set.seed(123); U <- 20 cdata <- data.frame(y = rpois(N <- 100, exp(3))) cdata <- transform(cdata, cy = pmin(U, y), rcensored = (y >= U)) cdata <- transform(cdata, status = ifelse(rcensored, 0, 1)) with(cdata, table(cy)) with(cdata, table(rcensored)) with(cdata, table(ii <- print(SurvS4(cy, status)))) # Check; U+ means >= U fit <- vglm(SurvS4(cy, status) ~ 1, cenpoisson, cdata, trace = TRUE) coef(fit, matrix = TRUE) table(print(depvar(fit))) # Another check; U+ means >= U # Example 2: left censored data L <- 15 cdata <- transform(cdata, cY = pmax(L, y), lcensored = y < L) # Note y < L, not cY == L or y <= L cdata <- transform(cdata, status = ifelse(lcensored, 0, 1)) with(cdata, table(cY)) with(cdata, table(lcensored)) with(cdata, table(ii <- print(SurvS4(cY, status, type = "left")))) # Check fit <- vglm(SurvS4(cY, status, type = "left") ~ 1, cenpoisson, cdata, trace = TRUE) coef(fit, matrix = TRUE) # Example 3: interval censored data cdata <- transform(cdata, Lvec = rep(L, len = N), Uvec = rep(U, len = N)) cdata <- transform(cdata, icensored = Lvec <= y & y < Uvec) # Not lcensored or rcensored with(cdata, table(icensored)) cdata <- transform(cdata, status = rep(3, N)) # 3 means interval censored cdata <- transform(cdata, status = ifelse(rcensored, 0, status)) # 0 means right censored cdata <- transform(cdata, status = ifelse(lcensored, 2, status)) # 2 means left censored # Have to adjust Lvec and Uvec because of the (start, end] format: cdata$Lvec[with(cdata, icensored)] <- cdata$Lvec[with(cdata, icensored)] - 1 cdata$Uvec[with(cdata, icensored)] <- cdata$Uvec[with(cdata, icensored)] - 1 cdata$Lvec[with(cdata, lcensored)] <- cdata$Lvec[with(cdata, lcensored)] # Unchanged cdata$Lvec[with(cdata, rcensored)] <- cdata$Uvec[with(cdata, rcensored)] # Unchanged with(cdata, table(ii <- print(SurvS4(Lvec, Uvec, status, type = "interval")))) # Check fit <- vglm(SurvS4(Lvec, Uvec, status, type = "interval") ~ 1, cenpoisson, cdata, trace = TRUE) coef(fit, matrix = TRUE) table(print(depvar(fit))) # Another check # Example 4: Add in some uncensored observations index <- (1:N)[with(cdata, icensored)] index <- head(index, 4) cdata$status[index] <- 1 # actual or uncensored value cdata$Lvec[index] <- cdata$y[index] with(cdata, table(ii <- print(SurvS4(Lvec, Uvec, status, type = "interval")))) # Check fit <- vglm(SurvS4(Lvec, Uvec, status, type = "interval") ~ 1, cenpoisson, cdata, trace = TRUE, crit = "c") coef(fit, matrix = TRUE) table(print(depvar(fit))) # Another check } \keyword{models} \keyword{regression} VGAM/man/cennormal1.Rd0000644000176000001440000000623512136651105014143 0ustar ripleyusers\name{cennormal1} \alias{cennormal1} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Censored Normal Distribution } \description{ Maximum likelihood estimation for the normal distribution with left and right censoring. } \usage{ cennormal1(lmu = "identity", lsd = "loge", imethod = 1, zero = 2) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lmu, lsd}{ Parameter link functions applied to the mean and standard deviation parameters. See \code{\link{Links}} for more choices. The standard deviation is a positive quantity, therefore a log link is the default. } \item{imethod}{ Initialization method. Either 1 or 2, this specifies two methods for obtaining initial values for the parameters. } \item{zero}{ An integer vector, containing the value 1 or 2. 
If so, the mean or the standard deviation, respectively, is modelled as an intercept only. Setting \code{zero = NULL} means both linear/additive predictors are modelled as functions of the explanatory variables. } }
\details{ This function is like \code{\link{normal1}} but handles observations that are left-censored (so that the true value would be less than the observed value) or right-censored (so that the true value would be greater than the observed value). To indicate which type of censoring, input \code{extra = list(leftcensored = vec1, rightcensored = vec2)} where \code{vec1} and \code{vec2} are logical vectors the same length as the response. If the two components of this list are missing then the logical values are taken to be \code{FALSE}. The fitted object has these two components stored in the \code{extra} slot. }
\value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. }
%\references{
%}
\author{ T. W. Yee }
\note{ This function is an alternative to \code{\link{tobit}} but cannot handle a matrix response and uses different working weights. If there are no censored observations then \code{\link{normal1}} is recommended instead. }
\seealso{ \code{\link{tobit}}, \code{\link{normal1}}, \code{\link{dcennormal1}}. }
\examples{ \dontrun{
cdata <- data.frame(x2 = runif(nn <- 1000))  # ystar are true values
cdata <- transform(cdata, ystar = rnorm(nn, m = 100 + 15 * x2, sd = exp(3)))
with(cdata, hist(ystar))
cdata <- transform(cdata, L = runif(nn,  80,  90),  # Lower censoring points
                          U = runif(nn, 130, 140))  # Upper censoring points
cdata <- transform(cdata, y = pmax(L, ystar))  # Left censored
cdata <- transform(cdata, y = pmin(U, y))      # Right censored
with(cdata, hist(y))
Extra <- list(leftcensored = with(cdata, ystar < L),
              rightcensored = with(cdata, ystar > U))
fit1 <- vglm(y ~ x2, cennormal1, cdata, crit = "c", extra = Extra, trace = TRUE)
fit2 <- vglm(y ~ x2, tobit(Lower = with(cdata, L), Upper = with(cdata, U)),
             cdata, crit = "c", trace = TRUE)
coef(fit1, matrix = TRUE)
max(abs(coef(fit1, matrix = TRUE) -
        coef(fit2, matrix = TRUE)))  # Should be 0
names(fit1@extra)
} } \keyword{models} \keyword{regression}
VGAM/man/cdf.lmscreg.Rd0000644000176000001440000000407712136651105014275 0ustar ripleyusers\name{cdf.lmscreg} \alias{cdf.lmscreg}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Cumulative Distribution Function for LMS Quantile Regression }
\description{ Computes the cumulative distribution function (CDF) for observations, based on an LMS quantile regression. }
\usage{ cdf.lmscreg(object, newdata = NULL, ...) }
%- maybe also 'usage' for other objects documented here.
\arguments{ \item{object}{ A \pkg{VGAM} quantile regression model, i.e., an object produced by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}} with a family function beginning with \code{"lms."}. } \item{newdata}{ Data frame where the predictions are to be made. If missing, the original data is used. } \item{\dots}{ Parameters which are passed into functions such as \code{cdf.lms.yjn}. } }
\details{ The CDFs returned here are values lying in [0,1] giving the relative probabilities associated with the quantiles \code{newdata}. For example, a value near 0.75 means it is close to the upper quartile of the distribution. }
\value{ A vector of CDF values lying in [0,1]. }
\references{ Yee, T. W. (2004) Quantile regression via vector generalized additive models.
\emph{Statistics in Medicine}, \bold{23}, 2295--2315. Documentation accompanying the \pkg{VGAM} package at \url{http://www.stat.auckland.ac.nz/~yee} contains further information and examples. }
\author{ Thomas W. Yee }
\note{ The data are treated like quantiles, and the percentiles are returned. The opposite is performed by \code{\link{qtplot.lmscreg}}. The CDF values of the model have been placed in \code{@post$cdf} when the model was fitted. }
\seealso{ \code{\link{deplot.lmscreg}}, \code{\link{qtplot.lmscreg}}, \code{\link{lms.bcn}}, \code{\link{lms.bcg}}, \code{\link{lms.yjn}}. }
\examples{
fit <- vgam(BMI ~ s(age, df = c(4, 2)), lms.bcn(zero = 1), data = bmi.nz)
head(fit@post$cdf)
head(cdf(fit))  # Same
head(depvar(fit))
head(fitted(fit))
cdf(fit, data.frame(age = c(31.5, 39), BMI = c(28.4, 24)))
} \keyword{models} \keyword{regression}
VGAM/man/ccoef.Rd0000644000176000001440000000531412136651105013160 0ustar ripleyusers\name{ccoef} \alias{ccoef}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Extract Model Constrained/Canonical Coefficients }
\description{ \code{ccoef} is a generic function which extracts the constrained (canonical) coefficients from objects returned by certain modelling functions. }
\usage{ ccoef(object, ...) }
%- maybe also 'usage' for other objects documented here.
\arguments{ \item{object}{ An object for which the extraction of canonical coefficients is meaningful. } \item{\dots}{ Other arguments fed into the specific methods function of the model. } }
\details{ For constrained quadratic and additive ordination models, \emph{canonical coefficients} are the elements of the \bold{C} matrix used to form the latent variables. They are highly interpretable in ecology, and are looked at as weights or loadings. They are also applicable for reduced-rank VGLMs. }
\value{ The value returned depends specifically on the methods function invoked. }
\references{ Yee, T. W. and Hastie, T. J. (2003) Reduced-rank vector generalized linear models. \emph{Statistical Modelling}, \bold{3}, 15--41. Yee, T. W. (2004) A new technique for maximum-likelihood canonical Gaussian ordination. \emph{Ecological Monographs}, \bold{74}, 685--701. Yee, T. W. (2006) Constrained additive ordination. \emph{Ecology}, \bold{87}, 203--213. }
\author{ Thomas W. Yee }
%\note{
%}
\section{Warning }{ For QO models, there is a direct inverse relationship between the scaling of the latent variables (site scores) and the tolerances. One normalization is for the latent variables to have unit variance. Another normalization is for all the species' tolerances to be unit (provided \code{EqualTolerances} is \code{TRUE}). These two normalizations cannot simultaneously hold in general. For rank \eqn{R} models with \eqn{R>1} it becomes more complicated because the latent variables are also uncorrelated. An important argument when fitting quadratic ordination models is whether \code{EqualTolerances} is \code{TRUE} or \code{FALSE}. See Yee (2004) for details. }
\seealso{ \code{\link{ccoef-method}}, \code{ccoef.qrrvglm}, \code{ccoef.cao}, \code{\link[stats]{coef}}.
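As a concrete illustration of how \bold{C} forms the latent variables, the site scores can be reconstructed by hand (a minimal sketch; \code{p1} is the CQO fit from the example below, whose environmental variables have already been standardized):
\preformatted{
C <- ccoef(p1)
lv1 <- as.matrix(hspider[, rownames(C)]) \%*\% C
# lv1 should match lv(p1), up to the normalizations discussed above
}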
} \examples{ \dontrun{
set.seed(111)  # This leads to the global solution
hspider[,1:6] = scale(hspider[,1:6])  # Standardized environmental vars
p1 = cqo(cbind(Alopacce, Alopcune, Alopfabr, Arctlute, Arctperi, Auloalbi,
               Pardlugu, Pardmont, Pardnigr, Pardpull, Trocterr, Zoraspin) ~
         WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux,
         fam = quasipoissonff, data = hspider, Crow1positive = FALSE)
ccoef(p1)
} } \keyword{models} \keyword{regression}
VGAM/man/ccoef-methods.Rd0000644000176000001440000000174312136651105014623 0ustar ripleyusers\name{ccoef-methods} \docType{methods}
%\alias{ccoef,ANY-method}
\alias{ccoef-method}
\alias{ccoef,cao-method}
\alias{ccoef,Coef.cao-method}
\alias{ccoef,rrvglm-method}
\alias{ccoef,qrrvglm-method}
\alias{ccoef,Coef.rrvglm-method}
\alias{ccoef,Coef.qrrvglm-method}
%
% This does not work:
%\alias{ccoef,cao,Coef.cao,rrvglm,qrrvglm,Coef.rrvglm,Coef.qrrvglm-method}
%
\title{ Constrained (Canonical) Coefficients }
\description{ \code{ccoef} is a generic function used to return the constrained (canonical) coefficients of a constrained ordination model. The function invokes particular methods which depend on the class of the first argument. }
%\usage{
% \S4method{ccoef}{cao,Coef.cao,rrvglm,qrrvglm,Coef.rrvglm,Coef.qrrvglm}(object, ...)
%}
\section{Methods}{ \describe{ \item{object}{ The object from which the constrained coefficients are extracted. } } }
\keyword{methods} \keyword{classes}
%\keyword{ ~~ other possible keyword(s)}
\keyword{models} \keyword{regression}
VGAM/man/cauchy.Rd0000644000176000001440000001101712136651105013352 0ustar ripleyusers\name{cauchy} \alias{cauchy} \alias{cauchy1}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Cauchy Distribution Family Function }
\description{ Estimates either the location parameter or both the location and scale parameters of the Cauchy distribution by maximum likelihood estimation. }
\usage{ cauchy(llocation = "identity", lscale = "loge", ilocation = NULL, iscale = NULL, iprobs = seq(0.2, 0.8, by = 0.2), imethod = 1, nsimEIM = NULL, zero = 2) cauchy1(scale.arg = 1, llocation = "identity", ilocation = NULL, imethod = 1) }
%- maybe also 'usage' for other objects documented here.
\arguments{ \item{llocation, lscale}{ Parameter link functions for the location parameter \eqn{a}{a} and the scale parameter \eqn{b}{b}. See \code{\link{Links}} for more choices. } \item{ilocation, iscale}{ Optional initial value for \eqn{a}{a} and \eqn{b}{b}. By default, an initial value is chosen internally for each. } \item{imethod}{ Integer, either 1, 2 or 3. Initialization method; three algorithms are implemented. The user should try all possible values to help avoid converging to a local solution. Also, choose another value if convergence fails, or use \code{ilocation} and/or \code{iscale}. } \item{iprobs}{ Probabilities used to find the respective sample quantiles; used to compute \code{iscale}. } \item{zero, nsimEIM}{ See \code{\link{CommonVGAMffArguments}} for more information. } \item{scale.arg}{ Known (positive) scale parameter, called \eqn{b}{b} below. } }
\details{ The Cauchy distribution has density function \deqn{f(y;a,b) = \left\{ \pi b [1 + ((y-a)/b)^2] \right\}^{-1} }{ f(y;a,b) = 1 / [pi * b * [1 + ((y-a)/b)^2]] } where \eqn{y} and \eqn{a} are real and finite, and \eqn{b>0}{b>0}. The distribution is symmetric about \eqn{a} and has a heavy tail. Its median and mode are \eqn{a} but the mean does not exist. The fitted values are the estimates of \eqn{a}.
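These properties are easy to verify numerically (a minimal sketch using \code{\link[stats]{rcauchy}} from base R):
\preformatted{
set.seed(1)
y <- rcauchy(n = 10000, location = 2, scale = 1)
median(y)  # Near the location parameter a = 2
mean(y)    # Unstable, since the mean does not exist
}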
Fisher scoring is the default but if \code{nsimEIM} is specified then Fisher scoring with simulation is used. If the scale parameter is known (\code{cauchy1}) then there may be multiple local maximum likelihood solutions for the location parameter. However, if both location and scale parameters are to be estimated (\code{cauchy}) then there is a unique maximum likelihood solution provided \eqn{n > 2} and less than half the data are located at any one point. }
\value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. }
\section{Warning }{ It is well-known that the Cauchy distribution may have local maxima in its likelihood function; make full use of \code{imethod}, \code{ilocation}, \code{iscale} etc. }
\references{ Evans, M., Hastings, N. and Peacock, B. (2000) \emph{Statistical Distributions}, New York: Wiley-Interscience, Third edition. Barnett, V. D. (1966) Evaluation of the maximum-likelihood estimator where the likelihood equation has multiple roots. \emph{Biometrika}, \bold{53}, 151--165. Copas, J. B. (1975) On the unimodality of the likelihood for the Cauchy distribution. \emph{Biometrika}, \bold{62}, 701--704. Efron, B. and Hinkley, D. V. (1978) Assessing the accuracy of the maximum likelihood estimator: Observed versus expected Fisher information. \emph{Biometrika}, \bold{65}, 457--481. }
\author{ T. W. Yee }
\note{ Good initial values are needed. By default these \pkg{VGAM} family functions search for a starting value for \eqn{a}{a} on a grid. It also pays to select a wide range of initial values via the \code{ilocation} and/or \code{iscale} and/or \code{imethod} arguments. }
\seealso{ \code{\link[stats:Cauchy]{Cauchy}}, \code{\link{cauchit}}, \code{\link{studentt}}. }
\examples{
# Both location and scale parameters unknown
cdata1 <- data.frame(x = runif(nn <- 1000))
cdata1 <- transform(cdata1, loc = exp(1 + 0.5 * x), scale = exp(1))
cdata1 <- transform(cdata1, y = rcauchy(nn, loc, scale))
fit <- vglm(y ~ x, cauchy(lloc = "loge"), cdata1, trace = TRUE)
coef(fit, matrix = TRUE)
head(fitted(fit))  # Location estimates
summary(fit)

# Location parameter unknown
set.seed(123)
cdata2 <- data.frame(x = runif(nn <- 500))
cdata2 <- transform(cdata2, loc = 1 + 0.5 * x, scale = 0.4)
cdata2 <- transform(cdata2, y = rcauchy(nn, loc, scale))
fit <- vglm(y ~ x, cauchy1(scale = 0.4), cdata2, trace = TRUE, crit = "coef")
coef(fit, matrix = TRUE)
} \keyword{models} \keyword{regression}
VGAM/man/cauchit.Rd0000644000176000001440000001052312136651105013517 0ustar ripleyusers\name{cauchit} \alias{cauchit}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Cauchit Link Function }
\description{ Computes the cauchit (tangent) link transformation, including its inverse and the first two derivatives. }
\usage{ cauchit(theta, bvalue = .Machine$double.eps, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) }
%- maybe also 'usage' for other objects documented here.
\arguments{ \item{theta}{ Numeric or character. See below for further details. } \item{bvalue}{ See \code{\link{Links}}. } \item{inverse, deriv, short, tag}{ Details at \code{\link{Links}}. } }
\details{ This link function is an alternative for parameters that lie in the unit interval. This type of link bears the same relation to the Cauchy distribution as the probit link bears to the Gaussian.
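That analogy can be made concrete: the cauchit of a probability is the corresponding quantile of the standard Cauchy distribution, just as the probit is the standard normal quantile (a minimal numerical sketch):
\preformatted{
p <- 0.95
cauchit(p)   # Same as tan(pi * (p - 0.5))
qcauchy(p)   # Same value: the standard Cauchy quantile
probit(p)    # For comparison, this equals qnorm(p)
}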
One characteristic of this link function is that the tail is heavier relative to the other links (see examples below). Numerical values of \code{theta} close to 0 or 1 or out of range result in \code{Inf}, \code{-Inf}, \code{NA} or \code{NaN}. } \value{ For \code{deriv = 0}, the tangent of \code{theta}, i.e., \code{tan(pi * (theta-0.5))} when \code{inverse = FALSE}, and if \code{inverse = TRUE} then \code{0.5 + atan(theta)/pi}. For \code{deriv = 1}, then the function returns \emph{d} \code{theta} / \emph{d} \code{eta} as a function of \code{theta} if \code{inverse = FALSE}, else if \code{inverse = TRUE} then it returns the reciprocal. } \references{ McCullagh, P. and Nelder, J. A. (1989) \emph{Generalized Linear Models}, 2nd ed. London: Chapman & Hall. } \author{ Thomas W. Yee } \note{ Numerical instability may occur when \code{theta} is close to 1 or 0. One way of overcoming this is to use \code{bvalue}. As mentioned above, in terms of the threshold approach with cumulative probabilities for an ordinal response this link function corresponds to the Cauchy distribution (see \code{\link{cauchy1}}). } \seealso{ \code{\link{logit}}, \code{\link{probit}}, \code{\link{cloglog}}, \code{\link{loge}}, \code{\link{cauchy}}, \code{\link{cauchy1}}. } \examples{ p <- seq(0.01, 0.99, by=0.01) cauchit(p) max(abs(cauchit(cauchit(p), inverse = TRUE) - p)) # Should be 0 p <- c(seq(-0.02, 0.02, by=0.01), seq(0.97, 1.02, by = 0.01)) cauchit(p) # Has no NAs \dontrun{ par(mfrow = c(2, 2), lwd = (mylwd <- 2)) y <- seq(-4, 4, length = 100) p <- seq(0.01, 0.99, by = 0.01) for(d in 0:1) { matplot(p, cbind(logit(p, deriv = d), probit(p, deriv = d)), type = "n", col = "purple", ylab = "transformation", las = 1, main = if (d == 0) "Some probability link functions" else "First derivative") lines(p, logit(p, deriv = d), col = "limegreen") lines(p, probit(p, deriv = d), col = "purple") lines(p, cloglog(p, deriv = d), col = "chocolate") lines(p, cauchit(p, deriv = d), col = "tan") if (d == 0) { abline(v = 0.5, h = 0, lty = "dashed") legend(0, 4.5, c("logit", "probit", "cloglog", "cauchit"), lwd = mylwd, col = c("limegreen","purple","chocolate", "tan")) } else abline(v = 0.5, lty = "dashed") } for(d in 0) { matplot(y, cbind( logit(y, deriv = d, inverse = TRUE), probit(y, deriv = d, inverse = TRUE)), type = "n", col = "purple", xlab = "transformation", ylab = "p", main = if (d == 0) "Some inverse probability link functions" else "First derivative", las=1) lines(y, logit(y, deriv = d, inverse = TRUE), col = "limegreen") lines(y, probit(y, deriv = d, inverse = TRUE), col = "purple") lines(y, cloglog(y, deriv = d, inverse = TRUE), col = "chocolate") lines(y, cauchit(y, deriv = d, inverse = TRUE), col = "tan") if (d == 0) { abline(h = 0.5, v = 0, lty = "dashed") legend(-4, 1, c("logit", "probit", "cloglog", "cauchit"), lwd = mylwd, col = c("limegreen", "purple", "chocolate", "tan")) } } par(lwd = 1) } } \keyword{math} \keyword{models} \keyword{regression} %plot(y, logit(y, inverse = TRUE), type = "l", col = "limegreen", % xlab = "transformation", ylab = "p", % lwd=2, las=1, main = "Some inverse probability link functions") %lines(y, probit(y, inverse = TRUE), col = "purple", lwd=2) %lines(y, cloglog(y, inverse = TRUE), col = "chocolate", lwd=2) %abline(h=0.5, v = 0, lty = "dashed") VGAM/man/cardioid.Rd0000644000176000001440000000552612136651105013664 0ustar ripleyusers\name{cardioid} \alias{cardioid} %- Also NEED an '\alias' for EACH other topic documented here. 
\title{ Cardioid Distribution Family Function } \description{ Estimates the two parameters of the cardioid distribution by maximum likelihood estimation. } \usage{ cardioid(lmu = elogit(min = 0, max = 2*pi), lrho = elogit(min = -0.5, max = 0.5), imu = NULL, irho = 0.3, nsimEIM = 100, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lmu, lrho}{ Parameter link functions applied to the \eqn{\mu}{mu} and \eqn{\rho}{rho} parameters, respectively. See \code{\link{Links}} for more choices. } \item{imu, irho}{ Initial values. A \code{NULL} means an initial value is chosen internally. See \code{\link{CommonVGAMffArguments}} for more information. } \item{nsimEIM, zero}{ See \code{\link{CommonVGAMffArguments}} for more information. } } \details{ The two-parameter cardioid distribution has a density that can be written as \deqn{f(y;\mu,\rho) = \frac{1}{2\pi} \left(1 + 2\, \rho \cos(y - \mu) \right) }{% f(y;mu,rho) = (1 + 2*rho*cos(y-mu)) / (2*pi)} where \eqn{0 < y < 2\pi}{0 < y < 2*pi}, \eqn{0 < \mu < 2\pi}{0 < mu < 2*pi}, and \eqn{-0.5 < \rho < 0.5}{-0.5 < rho < 0.5} is the concentration parameter. The default link functions enforce the range constraints of the parameters. For positive \eqn{\rho} the distribution is unimodal and symmetric about \eqn{\mu}{mu}. The mean of \eqn{Y} (which make up the fitted values) is \eqn{\pi + (\rho/\pi) ((2 \pi-\mu) \sin(2 \pi-\mu) + \cos(2 \pi-\mu) - \mu \sin(\mu) - \cos(\mu))}{ pi + (rho/pi) ((2*pi-mu)*sin(2*pi-mu) + cos(2*pi-mu) - mu*sin(mu) - cos(mu))}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, \code{\link{rrvglm}} and \code{\link{vgam}}. } \references{ Jammalamadaka, S. R. and SenGupta, A. (2001) \emph{Topics in Circular Statistics}, Singapore: World Scientific. } \author{ T. W. Yee } \note{ Fisher scoring using simulation is used. } \section{Warning }{ Numerically, this distribution can be difficult to fit because of a log-likelihood having multiple maxima. The user is therefore encouraged to try different starting values, i.e., make use of \code{imu} and \code{irho}. } \seealso{ \code{\link{rcard}}, \code{\link{elogit}}, \code{\link{vonmises}}. \pkg{CircStats} and \pkg{circular} currently have a lot more R functions for circular data than the \pkg{VGAM} package. } \examples{ \dontrun{ cdata <- data.frame(y = rcard(n = 1000, mu = 4, rho = 0.45)) fit <- vglm(y ~ 1, cardioid, cdata, trace = TRUE) coef(fit, matrix=TRUE) Coef(fit) c(with(cdata, mean(y)), head(fitted(fit), 1)) summary(fit) } } \keyword{models} \keyword{regression} VGAM/man/cardUC.Rd0000644000176000001440000000453312136651105013244 0ustar ripleyusers\name{Card} \alias{Card} \alias{dcard} \alias{pcard} \alias{qcard} \alias{rcard} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Cardioid Distribution } \description{ Density, distribution function, quantile function and random generation for the cardioid distribution. } \usage{ dcard(x, mu, rho, log = FALSE) pcard(q, mu, rho) qcard(p, mu, rho, tolerance = 1e-07, maxits = 500) rcard(n, mu, rho, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. Must be a single positive integer. } \item{mu, rho}{ See \code{\link{cardioid}} for more information. 
} \item{tolerance, maxits, ...}{ The first two are control parameters for the algorithm used to solve for the roots of a nonlinear system of equations; \code{tolerance} controls for the accuracy and \code{maxits} is the maximum number of iterations. \code{rcard} calls \code{qcard} so the \code{...} can be used to vary the two arguments. } \item{log}{ Logical. If \code{log=TRUE} then the logarithm of the density is returned. } } \details{ See \code{\link{cardioid}}, the \pkg{VGAM} family function for estimating the two parameters by maximum likelihood estimation, for the formula of the probability density function and other details. } \value{ \code{dcard} gives the density, \code{pcard} gives the distribution function, \code{qcard} gives the quantile function, and \code{rcard} generates random deviates. } %\references{ } \author{ Thomas W. Yee } \note{ Convergence problems might occur with \code{rcard}. } \seealso{ \code{\link{cardioid}}. } \examples{ \dontrun{ mu = 4; rho = 0.4 x = seq(0, 2*pi, len=501) plot(x, dcard(x, mu, rho), type="l", las=1, ylim=c(0,1), col="blue", ylab=paste("[dp]card(mu=", mu, ", rho=", rho, ")"), main="Blue is density, red is cumulative distribution function", sub="Purple lines are the 10,20,...,90 percentiles") lines(x, pcard(x, mu, rho), col="red") probs = seq(0.1, 0.9, by=0.1) Q = qcard(probs, mu, rho) lines(Q, dcard(Q, mu, rho), col="purple", lty=3, type="h") lines(Q, pcard(Q, mu, rho), col="purple", lty=3, type="h") abline(h=c(0,probs,1), v=c(0,2*pi), col="purple", lty=3) max(abs(pcard(Q, mu, rho) - probs)) # Should be 0 } } \keyword{distribution} VGAM/man/cao.control.Rd0000644000176000001440000002674612136651105014336 0ustar ripleyusers\name{cao.control} \alias{cao.control} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Control Function for RR-VGAMs (CAO) } \description{ Algorithmic constants and parameters for a constrained additive ordination (CAO), by fitting a \emph{reduced-rank vector generalized additive model} (RR-VGAM), are set using this function. This is the control function for \code{\link{cao}}. } \usage{ cao.control(Rank = 1, all.knots = FALSE, criterion = "deviance", Cinit = NULL, Crow1positive = TRUE, epsilon = 1.0e-05, Etamat.colmax = 10, GradientFunction = FALSE, iKvector = 0.1, iShape = 0.1, noRRR = ~ 1, Norrr = NA, SmallNo = 5.0e-13, Use.Init.Poisson.QO = TRUE, Bestof = if (length(Cinit)) 1 else 10, maxitl = 10, imethod = 1, bf.epsilon = 1.0e-7, bf.maxit = 10, Maxit.optim = 250, optim.maxit = 20, SD.sitescores = 1.0, SD.Cinit = 0.02, trace = TRUE, df1.nl = 2.5, df2.nl = 2.5, spar1 = 0, spar2 = 0, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{Rank}{ The numerical rank \eqn{R} of the model, i.e., the number of latent variables. Currently only \code{Rank = 1} is implemented. } \item{all.knots}{ Logical indicating if all distinct points of the smoothing variables are to be used as knots. Assigning the value \code{FALSE} means fewer knots are chosen when the number of distinct points is large, meaning less computational expense. See \code{\link{vgam.control}} for details. } \item{criterion}{ Convergence criterion. Currently, only one is supported: the deviance is minimized. } \item{Cinit}{ Optional initial \bold{C} matrix which may speed up convergence. } \item{Crow1positive}{ Logical vector of length \code{Rank} (recycled if necessary): are the elements of the first row of \bold{C} positive? 
For example, if \code{Rank} is 4, then specifying \code{Crow1positive = c(FALSE, TRUE)} will force \bold{C[1,1]} and \bold{C[1,3]} to be negative, and \bold{C[1,2]} and \bold{C[1,4]} to be positive. } \item{epsilon}{ Positive numeric. Used to test for convergence for GLMs fitted in FORTRAN. Larger values mean a loosening of the convergence criterion. % Used only if \code{FastAlgorithm} is \code{TRUE}. } \item{Etamat.colmax}{ Positive integer, no smaller than \code{Rank}. Controls the amount of memory used by \code{.Init.Poisson.QO()}. It is the maximum number of columns allowed for the pseudo-response and its weights. In general, the larger the value, the better the initial value. Used only if \code{Use.Init.Poisson.QO = TRUE}. } % \item{FastAlgorithm}{ % Logical. % Whether compiled code is used. % For \code{\link{cao}} this must be \code{TRUE}. % % } \item{GradientFunction}{ Logical. Whether \code{\link[stats]{optim}}'s argument \code{gr} is used or not, i.e., to compute gradient values. Used only if \code{FastAlgorithm} is \code{TRUE}. Currently, this argument must be set to \code{FALSE}. } \item{iKvector, iShape}{ See \code{\link{qrrvglm.control}}. } % \item{Hstep}{ Positive value. Used as the step size in the % finite difference approximation to the derivatives by % \code{\link[stats]{optim}}. % Used only if \code{GradientFunction} is \code{TRUE}. % % } % \item{Kinit}{ % Initial values for the index parameters \code{k} in the negative % binomial distribution (one per species). In general, a smaller number % is preferred over a larger number. The vector is recycled to the % number of responses (species). %} \item{noRRR}{ Formula giving terms that are \emph{not} to be included in the reduced-rank regression (or formation of the latent variables). The default is to omit the intercept term from the latent variables. Currently, only \code{noRRR = ~ 1} is implemented. } \item{Norrr}{ Defunct. Please use \code{noRRR}. Use of \code{Norrr} will become an error soon. } % \item{Parscale}{ % Numerical and positive-valued vector of length \bold{C} % (recycled if necessary). Passed into \code{optim(..., % control = list(parscale = Parscale))}; the elements of \bold{C} become % \bold{C} / \code{Parscale}. Setting \code{ITolerances = TRUE} results % in line searches that are very large, therefore \bold{C} has to be % scaled accordingly to avoid large step sizes. % } \item{SmallNo}{ Positive numeric between \code{.Machine$double.eps} and \code{0.0001}. Used to avoid under- or over-flow in the IRLS algorithm. % Used only if \code{FastAlgorithm} is \code{TRUE}. } \item{Use.Init.Poisson.QO }{ Logical. If \code{TRUE} then the function \code{.Init.Poisson.QO} is used to obtain initial values for the canonical coefficients \bold{C}. If \code{FALSE} then random numbers are used instead. } \item{Bestof}{ Integer. The best of \code{Bestof} models fitted is returned. This argument helps guard against local solutions by (hopefully) finding the global solution from many fits. The argument works only when the function generates its own initial value for \bold{C}, i.e., when \bold{C} are \emph{not} passed in as initial values. The default is only a convenient minimal number and users are urged to increase this value. } \item{maxitl}{ Positive integer. Maximum number of Newton-Raphson/Fisher-scoring/local-scoring iterations allowed. } \item{imethod}{ See \code{\link{qrrvglm.control}}. } \item{bf.epsilon}{ Positive numeric. Tolerance used by the modified vector backfitting algorithm for testing convergence. 
} \item{bf.maxit}{ Positive integer. Number of backfitting iterations allowed in the compiled code. } \item{Maxit.optim}{ Positive integer. Number of iterations given to the function \code{\link[stats]{optim}} at each of the \code{optim.maxit} iterations. } \item{optim.maxit}{ Positive integer. Number of times \code{\link[stats]{optim}} is invoked. % At iteration \code{i}, the \code{i}th value of \code{Maxit.optim} % is fed into \code{\link[stats]{optim}}. } % \item{se.fit}{ % Logical indicating whether approximate % pointwise standard errors are to be saved on the object. % Currently this argument must have the value \code{FALSE}. % } \item{SD.sitescores}{ Numeric. Standard deviation of the initial values of the site scores, which are generated from a normal distribution. Used when \code{Use.Init.Poisson.QO} is \code{FALSE}. } \item{SD.Cinit}{ Standard deviation of the initial values for the elements of \bold{C}. These are normally distributed with mean zero. This argument is used only if \code{Use.Init.Poisson.QO = FALSE}. } \item{trace}{ Logical indicating if output should be produced for each iteration. Having the value \code{TRUE} is a good idea for large data sets. } \item{df1.nl, df2.nl}{ Numeric and non-negative, recycled to length \emph{S}. Nonlinear degrees of freedom for smooths of the first and second latent variables. A value of 0 means the smooth is linear. Roughly, a value between 1.0 and 2.0 often has the approximate flexibility of a quadratic. The user should not assign too large a value to this argument, e.g., the value 4.0 is probably too high. The argument \code{df1.nl} is ignored if \code{spar1} is assigned a positive value or values. Ditto for \code{df2.nl}. } \item{spar1, spar2}{ Numeric and non-negative, recycled to length \emph{S}. Smoothing parameters of the smooths of the first and second latent variables. The larger the value, the more smooth (less wiggly) the fitted curves. These arguments are an alternative to specifying \code{df1.nl} and \code{df2.nl}. A value 0 (the default) for \code{spar1} means that \code{df1.nl} is used. Ditto for \code{spar2}. The values are on a scaled version of the latent variables. See Green and Silverman (1994) for more information. } \item{\dots}{ Ignored at present. } } \details{ Many of these arguments are identical to \code{\link{qrrvglm.control}}. Here, \eqn{R} is the \code{Rank}, \eqn{M} is the number of additive predictors, and \eqn{S} is the number of responses (species). Thus \eqn{M=S} for binomial and Poisson responses, and \eqn{M=2S} for the negative binomial and 2-parameter gamma distributions. Allowing the smooths too much flexibility means the CAO optimization problem becomes more difficult to solve. This is because the number of local solutions increases as the nonlinearity of the smooths increases. In situations of high nonlinearity, many initial values should be used, so that \code{Bestof} should be assigned a larger value. In general, there should be a reasonable value of \code{df1.nl} somewhere between 0 and about 3 for most data sets. } \value{ A list with the components corresponding to its arguments, after some basic error checking. } \references{ Yee, T. W. (2006) Constrained additive ordination. \emph{Ecology}, \bold{87}, 203--213. Green, P. J. and Silverman, B. W. (1994) \emph{Nonparametric Regression and Generalized Linear Models: A Roughness Penalty Approach}, London: Chapman & Hall. } \author{T. W. 
Yee} \note{ The argument \code{df1.nl} can be inputted in the format \code{c(spp1 = 2, spp2 = 3, 2.5)}, say, meaning the default value is 2.5, but two species have alternative values. If \code{spar1 = 0} and \code{df1.nl = 0} then this represents fitting linear functions (CLO). Currently, this is handled in the awkward manner of setting \code{df1.nl} to be a small positive value, so that the smooth is almost linear but not quite. A proper fix to this special case should done in the short future. } \seealso{ \code{\link{cao}}. } \examples{\dontrun{ hspider[,1:6] <- scale(hspider[,1:6]) # Standardized environmental vars set.seed(123) ap1 <- cao(cbind(Pardlugu, Pardmont, Pardnigr, Pardpull, Zoraspin) ~ WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux, family = poissonff, data = hspider, df1.nl = c(Zoraspin = 2.3, 2.1), Bestof = 10, Crow1positive = FALSE) sort(ap1@misc$deviance.Bestof) # A history of all the iterations Coef(ap1) par(mfrow = c(2, 3)) # All or most of the curves are unimodal; some are plot(ap1, lcol = "blue") # quite symmetric. Hence a CQO model should be ok par(mfrow = c(1, 1), las = 1) index = 1:ncol(depvar(ap1)) # lvplot is jagged because only 28 sites lvplot(ap1, lcol = index, pcol = index, y = TRUE) trplot(ap1, label = TRUE, col = index) abline(a = 0, b = 1, lty = 2) persp(ap1, label = TRUE, col = 1:4) } } \keyword{models} \keyword{regression} %cao.control(Rank = 1, all.knots = FALSE, % criterion = "deviance", Cinit = NULL, % Crow1positive = TRUE, epsilon = 1e-05, % Etamat.colmax = 10, %% FastAlgorithm = TRUE, %% is.loaded(symbol.For("cqo2f")), %% GradientFunction = FALSE, % iKvector = 0.1, % iShape = 0.1, % noRRR = ~1, %% Parscale = 1, % SmallNo = 5e-13, % Use.Init.Poisson.QO = TRUE, % Bestof = if (length(Cinit)) 1 else 10, maxitl = 40, % bf.epsilon = 1.0e-7, bf.maxit = 40, % Maxit.optim = 250, optim.maxit = 20, %% se.fit = FALSE, % SD.sitescores = 1, % SD.Cinit = 0.02, trace = TRUE, %% df1.nl = 2.5, spar1 = 0, ...) VGAM/man/cao.Rd0000644000176000001440000002763512136651105012655 0ustar ripleyusers\name{cao} \alias{cao} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Fitting Constrained Additive Ordination (CAO) } \description{ A constrained additive ordination (CAO) model is fitted using the \emph{reduced-rank vector generalized additive model} (RR-VGAM) framework. } \usage{ cao(formula, family, data = list(), weights = NULL, subset = NULL, na.action = na.fail, etastart = NULL, mustart = NULL, coefstart = NULL, control = cao.control(...), offset = NULL, method = "cao.fit", model = FALSE, x.arg = TRUE, y.arg = TRUE, contrasts = NULL, constraints = NULL, extra = NULL, qr.arg = FALSE, smart = TRUE, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{formula}{ a symbolic description of the model to be fit. The RHS of the formula is used to construct the latent variables, upon which the smooths are applied. All the variables in the formula are used for the construction of latent variables except for those specified by the argument \code{noRRR}, which is itself a formula. The LHS of the formula contains the response variables, which should be a matrix with each column being a response (species). } \item{family}{ a function of class \code{"vglmff"} (see \code{\link{vglmff-class}}) describing what statistical model is to be fitted. This is called a ``\pkg{VGAM} family function''. See \code{\link{CommonVGAMffArguments}} for general information about many types of arguments found in this type of function. 
See \code{\link{cqo}} for a list of those presently implemented. } \item{data}{ an optional data frame containing the variables in the model. By default the variables are taken from \code{environment(formula)}, typically the environment from which \code{cao} is called. } \item{weights}{ an optional vector or matrix of (prior) weights to be used in the fitting process. For \code{cao}, this argument currently should not be used. } \item{subset}{ an optional logical vector specifying a subset of observations to be used in the fitting process. } \item{na.action}{ a function which indicates what should happen when the data contain \code{NA}s. The default is set by the \code{na.action} setting of \code{\link[base]{options}}, and is \code{na.fail} if that is unset. The ``factory-fresh'' default is \code{na.omit}. } \item{etastart}{ starting values for the linear predictors. It is a \eqn{M}-column matrix. If \eqn{M=1} then it may be a vector. For \code{cao}, this argument currently should not be used. } \item{mustart}{ starting values for the fitted values. It can be a vector or a matrix. Some family functions do not make use of this argument. For \code{cao}, this argument currently should not be used. } \item{coefstart}{ starting values for the coefficient vector. For \code{cao}, this argument currently should not be used. } \item{control}{ a list of parameters for controlling the fitting process. See \code{\link{cao.control}} for details. } \item{offset}{ a vector or \eqn{M}-column matrix of offset values. These are \emph{a priori} known and are added to the linear predictors during fitting. For \code{cao}, this argument currently should not be used. } \item{method}{ the method to be used in fitting the model. The default (and presently only) method \code{cao.fit} uses iteratively reweighted least squares (IRLS) within FORTRAN code called from \code{\link[stats]{optim}}. } \item{model}{ a logical value indicating whether the \emph{model frame} should be assigned in the \code{model} slot. } \item{x.arg, y.arg}{ logical values indicating whether the model matrix and response vector/matrix used in the fitting process should be assigned in the \code{x} and \code{y} slots. Note the model matrix is the linear model (LM) matrix. } \item{contrasts}{ an optional list. See the \code{contrasts.arg} of \code{\link{model.matrix.default}}. } \item{constraints}{ an optional list of constraint matrices. For \code{cao}, this argument currently should not be used. The components of the list must be named with the term it corresponds to (and it must match in character format). Each constraint matrix must have \eqn{M} rows, and be of full-column rank. By default, constraint matrices are the \eqn{M} by \eqn{M} identity matrix unless arguments in the family function itself override these values. If \code{constraints} is used it must contain \emph{all} the terms; an incomplete list is not accepted. } \item{extra}{ an optional list with any extra information that might be needed by the family function. For \code{cao}, this argument currently should not be used. } \item{qr.arg}{ For \code{cao}, this argument currently should not be used. } \item{smart}{ logical value indicating whether smart prediction (\code{\link{smartpred}}) will be used. } \item{\dots}{ further arguments passed into \code{\link{cao.control}}. } } \details{ The arguments of \code{cao} are a mixture of those from \code{\link{vgam}} and \code{\link{cqo}}, but with some extras in \code{\link{cao.control}}. 
Currently, not all of the arguments work properly. CAO can loosely be thought of as the result of fitting generalized additive models (GAMs) to several responses (e.g., species) against a very small number of latent variables. Each latent variable is a linear combination of the explanatory variables; the coefficients \bold{C} (called \eqn{C} below) are called \emph{constrained coefficients} or \emph{canonical coefficients}, and are interpreted as weights or loadings. The \bold{C} are estimated by maximum likelihood estimation. It is often a good idea to apply \code{\link[base]{scale}} to each explanatory variable first. For each response (e.g., species), each latent variable is smoothed by a cubic smoothing spline, thus CAO is data-driven. If each smooth were a quadratic then CAO would simplify to \emph{constrained quadratic ordination} (CQO; formerly called \emph{canonical Gaussian ordination} or CGO). If each smooth were linear then CAO would simplify to \emph{constrained linear ordination} (CLO). CLO can theoretically be fitted with \code{cao} by specifying \code{df1.nl = 0}; however, it is more efficient to use \code{\link{rrvglm}}. Currently, only \code{Rank=1} is implemented, and only \code{noRRR = ~1} models are handled. % Poisson and binary responses are implemented (viz., % \code{\link{poissonff}}, \code{\link{binomialff}}), and % dispersion parameters for these must be assumed known. Hence using % \code{\link{quasipoissonff}} and \code{\link{quasibinomialff}} will % currently fail. Also, currently, only \code{noRRR = ~ 1} models are % handled. With binomial data, the default formula is \deqn{logit(P[Y_s=1]) = \eta_s = f_s(\nu), \ \ \ s=1,2,\ldots,S}{% logit(P[Y_s=1]) = eta_s = f_s(nu), \ \ \ s=1,2,\ldots,S} where \eqn{x_2}{x_2} is a vector of environmental variables, and \eqn{\nu=C^T x_2}{nu=C^T x_2} is an \eqn{R}-vector of latent variables. Each \eqn{\eta_s}{eta_s} is an additive predictor for species \eqn{s}, and it models the probabilities of presence as an additive model on the logit scale. The matrix \eqn{C} is estimated from the data, as are the smooth functions \eqn{f_s}. The argument \code{noRRR = ~ 1} specifies that the vector \eqn{x_1}{x_1}, defined for RR-VGLMs and QRR-VGLMs, is simply a 1 for an intercept. Here, the intercept in the model is absorbed into the functions. A \code{\link{cloglog}} link may be preferable over a \code{\link{logit}} link. With Poisson count data, the formula is \deqn{\log(E[Y_s]) = \eta_s = f_s(\nu)}{% log(E[Y_s]) = eta_s = f_s(nu)} which models the mean response as an additive model on the log scale. The fitted latent variables (site scores) are scaled to have unit variance. The concept of a tolerance is undefined for CAO models, but the optima and maxima are defined. The generic functions \code{\link{Max}} and \code{\link{Opt}} should work for CAO objects, but note that if the maximum occurs at the boundary then \code{\link{Max}} will return a \code{NA}. Inference for CAO models is currently undeveloped. } \value{ An object of class \code{"cao"} (this may change to \code{"rrvgam"} in the future). Several generic functions can be applied to the object, e.g., \code{\link{Coef}}, \code{\link{ccoef}}, \code{\link{lvplot}}, \code{\link{summary}}. } \references{ Yee, T. W. (2006) Constrained additive ordination. \emph{Ecology}, \bold{87}, 203--213. Documentation accompanying the \pkg{VGAM} package at \url{http://www.stat.auckland.ac.nz/~yee} contains further information and examples. } \author{T. W.
Yee} \note{ CAO models are computationally expensive; therefore setting \code{trace = TRUE} is a good idea, as is fitting the model first to a simple random sample of the data set. Sometimes the IRLS algorithm does not converge within the FORTRAN code. This results in warnings being issued. In particular, if an error code of 3 is issued, then this indicates the IRLS algorithm has not converged. One possible remedy is to increase or decrease the nonlinear degrees of freedom so that the curves become more or less flexible, respectively. } \section{Warning }{ CAO is very costly to compute. With version 0.7-8 it took 28 minutes on a fast machine. I hope to look at ways of speeding things up in the future. Use \code{\link[base:Random]{set.seed}} just prior to calling \code{cao()} to make your results reproducible. The reason for this is that finding the optimal CAO model presents a difficult optimization problem, partly because the log-likelihood function contains many local solutions. To obtain the (global) solution the user is advised to try \emph{many} initial values. This can be done by setting \code{Bestof} to some appropriate value (see \code{\link{cao.control}}). Trying many initial values becomes progressively more important as the nonlinear degrees of freedom of the smooths increase. % The code is a little fragile at this stage, so the function might % hang/lock up in the microsoft Windows version. Currently the dispersion parameter for a \code{\link{gaussianff}} CAO model is estimated slightly differently and may be biased downwards (usually a little too small). } \seealso{ \code{\link{cao.control}}, \code{Coef.cao}, \code{\link{cqo}}, \code{\link{lv}}, \code{\link{Opt}}, \code{\link{Max}}, \code{persp.cao}, \code{\link{poissonff}}, \code{\link{binomialff}}, \code{\link{negbinomial}}, \code{\link{gamma2}}, \code{\link{gaussianff}}, \code{\link[base:Random]{set.seed}}, \code{\link[gam]{gam}}. } \examples{ \dontrun{ hspider[,1:6] <- scale(hspider[,1:6]) # Standardized environmental vars set.seed(149) # For reproducible results ap1 <- cao(cbind(Pardlugu, Pardmont, Pardnigr, Pardpull) ~ WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux, family = poissonff, data = hspider, Rank = 1, df1.nl = c(Pardpull = 2.7, 2.5), Bestof = 7, Crow1positive = FALSE) sort(ap1@misc$deviance.Bestof) # A history of all the iterations Coef(ap1) ccoef(ap1) par(mfrow = c(2, 2)) plot(ap1) # All the curves are unimodal; some quite symmetric par(mfrow = c(1, 1), las = 1) index <- 1:ncol(depvar(ap1)) lvplot(ap1, lcol = index, pcol = index, y = TRUE) trplot(ap1, label = TRUE, col = index) abline(a = 0, b = 1, lty = 2) trplot(ap1, label = TRUE, col = "blue", log = "xy", whichSp = c(1, 3)) abline(a = 0, b = 1, lty = 2) persp(ap1, col = index, lwd = 2, label = TRUE) abline(v = Opt(ap1), lty = 2, col = index) abline(h = Max(ap1), lty = 2, col = index) } } \keyword{models} \keyword{regression} VGAM/man/calibrate.qrrvglm.control.Rd0000644000176000001440000000634312136651105017202 0ustar ripleyusers\name{calibrate.qrrvglm.control} \alias{calibrate.qrrvglm.control} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Control function for CQO/UQO/CAO calibration } \description{ Algorithmic constants and parameters for running \code{\link{calibrate.qrrvglm}} are set using this function. } \usage{ calibrate.qrrvglm.control(object, trace = FALSE, Method.optim = "BFGS", gridSize = if (Rank == 1) 9 else 5, varlvI = FALSE, ...) } %- maybe also 'usage' for other objects documented here.
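% Note: these settings are usually passed through calibrate() rather
% than by calling this control function directly; a hedged sketch
% (the fitted object 'p1' and the data frame 'newdf' are hypothetical):
% cp1 <- calibrate(p1, newdata = newdf, trace = TRUE, gridSize = 17)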
\arguments{ \item{object}{ The fitted CQO/UQO/CAO model. The user should ignore this argument. } \item{trace}{ Logical indicating if output should be produced for each iteration. It is a good idea to set this argument to \code{TRUE} since the computations are expensive. } \item{Method.optim}{ Character. Fed into the \code{method} argument of \code{\link[stats]{optim}}. } \item{gridSize}{ Numeric, recycled to length \code{Rank}. Controls the resolution of the grid used for initial values. For each latent variable, an equally spaced grid of length \code{gridSize} is cast from the smallest site score to the largest site score. Then the likelihood function is evaluated on the grid, and the best fit is chosen as the initial value. Thus increasing the value of \code{gridSize} increases the chance of obtaining the global solution; however, the computing time increases proportionately. } \item{varlvI}{ Logical. For CQO objects only, this argument is fed into \code{\link{Coef.qrrvglm}}. } \item{\dots}{ Avoids an error message for extraneous arguments. } } \details{ Most CQO/CAO users will only need to make use of \code{trace} and \code{gridSize}. These arguments should be used inside their call to \code{\link{calibrate.qrrvglm}}, not this function directly. } \value{ A list with the following components. \item{trace }{Numeric (even though the input can be logical). } \item{gridSize }{Positive integer. } \item{varlvI }{Logical.} } \references{ Yee, T. W. (2012) On constrained and unconstrained quadratic ordination. \emph{Manuscript in preparation}. } \author{T. W. Yee} \note{ Despite the name of this function, UQO and CAO models are handled as well. } \seealso{ \code{\link{calibrate.qrrvglm}}, \code{\link{Coef.qrrvglm}}. } \examples{ \dontrun{ hspider[,1:6] <- scale(hspider[,1:6]) # Needed when ITol = TRUE set.seed(123) p1 <- cqo(cbind(Alopacce, Alopcune, Pardlugu, Pardnigr, Pardpull, Trocterr, Zoraspin) ~ WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux, family = poissonff, data = hspider, ITol = TRUE) sort(p1@misc$deviance.Bestof) # A history of all the iterations siteNos <- 1:2 # Calibrate these sites cp1 <- calibrate(p1, new = data.frame(depvar(p1)[siteNos, ]), trace = TRUE) } \dontrun{ # Graphically compare the actual site scores with their calibrated values persp(p1, main = "Site scores: solid=actual, dashed=calibrated", label = TRUE, col = "blue", las = 1) abline(v = lv(p1)[siteNos], lty = 1, col = 1:length(siteNos)) # actual site scores abline(v = cp1, lty = 2, col = 1:length(siteNos)) # calibrated values } } \keyword{models} \keyword{regression} VGAM/man/calibrate.qrrvglm.Rd0000644000176000001440000001176112136651105015523 0ustar ripleyusers\name{calibrate.qrrvglm} \alias{calibrate.qrrvglm} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Calibration for CQO, UQO and CAO models } \description{ Performs maximum likelihood calibration for constrained and unconstrained quadratic and additive ordination models (CQO and CAO models are better known as QRR-VGLMs and RR-VGAMs respectively). } \usage{ calibrate.qrrvglm(object, newdata = NULL, type = c("lv", "predictors", "response", "vcov", "all3or4"), initial.vals = NULL, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{object}{ The fitted CQO/CAO model. } \item{newdata}{ A data frame with new response data (usually new species data).
The default is to use the original data used to fit the model; however, the calibration may take a long time to compute because the computations are expensive. } \item{type}{ What type of result is to be returned. The first are the calibrated latent variables or site scores. This is always computed. The \code{"predictors"} are the linear/quadratic or additive predictors evaluated at the calibrated latent variables or site scores. The \code{"response"} are the fitted means evaluated at the calibrated latent variables or site scores. The \code{"vcov"} are the estimated variance-covariance matrices of the calibrated latent variables or site scores. The \code{"all3or4"} is for all of them, i.e., all \code{type}s. For CAO models, \code{"vcov"} is unavailable, so all 3 are returned. For CQO models, \code{"vcov"} is available, so all 4 are returned. } \item{initial.vals}{ Initial values for the search. For rank-1 models, this should be a vector of length \code{nrow(newdata)}, and for rank-2 models this should be a two-column matrix with the number of rows equalling the number of rows in \code{newdata}. The default is a grid defined by arguments in \code{\link{calibrate.qrrvglm.control}}. } \item{\dots}{ Arguments that are fed into \code{\link{calibrate.qrrvglm.control}}. } } \details{ Given a fitted CQO/CAO regression model, maximum likelihood calibration is theoretically easy and elegant. However, the method assumes that all species are independent, which is not really true in practice. More details and references are given in Yee (2012). The function \code{\link[stats]{optim}} is used to search for the maximum likelihood solution. Good initial values are needed, and \code{\link{calibrate.qrrvglm.control}} allows the user some control over the choice of these. } \value{ The argument \code{type} determines what is returned. If \code{type = "all3or4"} then all the \code{type} values are returned in a list, with the following components. Each component has length \code{nrow(newdata)}. \item{lv}{Calibrated latent variables or site scores. } \item{predictors }{linear/quadratic or additive predictors. For example, for Poisson families, this will be on a log scale, and for binomial families, this will be on a logit scale. } \item{response}{Fitted values of the response, evaluated at the calibrated latent variables or site scores. } \item{vcov}{Estimated variance-covariance matrix of the calibrated latent variables or site scores. Actually, these are stored in an array whose last dimension is \code{nrow(newdata)}. } } \references{ Yee, T. W. (2012) On constrained and unconstrained quadratic ordination. \emph{Manuscript in preparation}. ter Braak, C. J. F. (1995) Calibration. In: Jongman, R. H. G., ter Braak, C. J. F. and van Tongeren, O. F. R. (Eds.) \emph{Data Analysis in Community and Landscape Ecology}, Cambridge: Cambridge University Press. } \author{T. W. Yee} \note{ Despite the name of this function, UQO and CAO models are handled as well. } \section{Warning }{ This function is computationally expensive. Setting \code{trace = TRUE} to get a running log is a good idea. } \seealso{ \code{\link{calibrate.qrrvglm.control}}, \code{\link{calibrate}}, \code{\link{cqo}}, \code{\link{uqo}}, \code{\link{cao}}.
} \examples{ \dontrun{ hspider[,1:6] <- scale(hspider[, 1:6]) # Standardize the environmental variables set.seed(123) p1 <- cqo(cbind(Pardlugu, Pardmont, Pardnigr, Pardpull, Zoraspin) ~ WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux, family = poissonff, data = hspider, Rank = 1, IToler = TRUE, Crow1positive = TRUE) siteNos <- 1:2 # Calibrate these sites cp1 <- calibrate(p1, new = data.frame(depvar(p1)[siteNos, ]), trace = TRUE) } \dontrun{ # Graphically compare the actual site scores with their calibrated values persp(p1, main = "Site scores: solid=actual, dashed=calibrated", label = TRUE, col = "blue", las = 1) abline(v = lv(p1)[siteNos], lty = 1, col = 1:length(siteNos)) # actual site scores abline(v = cp1, lty = 2, col = 1:length(siteNos)) # calibrated values } } \keyword{models} \keyword{regression} VGAM/man/calibrate.Rd0000644000176000001440000000524612136651105014033 0ustar ripleyusers\name{calibrate} \alias{calibrate} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Model Calibrations } \description{ \code{calibrate} is a generic function used to produce calibrations from various model fitting functions. The function invokes particular `methods' which depend on the `class' of the first argument. } \usage{ calibrate(object, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{object}{ An object for which a calibration is desired. } \item{\dots}{ Additional arguments affecting the calibration produced. Usually the most important argument in \code{\dots} is \code{newdata} which, for \code{calibrate}, contains new \emph{response} data, \bold{Y}, say. } } \details{ Given a regression model with explanatory variables \bold{X} and response \bold{Y}, calibration involves estimating \bold{X} from \bold{Y} using the regression model. It can be loosely thought of as the opposite of \code{\link{predict}} (which takes an \bold{X} and returns a \bold{Y}). } \value{ In general, given a new response \bold{Y}, the explanatory variables \bold{X} are returned. However, for constrained ordination models such as CQO and CAO models, it is usually not possible to return \bold{X}, so the latent variables are returned instead (they are linear combinations of the \bold{X}). See the specific \code{calibrate} methods functions to see what they return. } %\references{ %} \author{ T. W. Yee } \note{ This function was not called \code{predictx} because of the inability of constrained ordination models to return \bold{X}; they can only return the latent variable values (site scores) instead. } \seealso{ \code{\link{predict}}, \code{\link{calibrate.qrrvglm}}.
} \examples{ \dontrun{ hspider[,1:6] <- scale(hspider[,1:6]) # Standardized environmental vars set.seed(123) p1 <- cao(cbind(Pardlugu, Pardmont, Pardnigr, Pardpull, Zoraspin) ~ WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux, family = poissonff, data = hspider, Rank = 1, df1.nl = c(Zoraspin = 2, 1.9), Bestof = 3, Crow1positive = TRUE) siteNos <- 1:2 # Calibrate these sites cp1 <- calibrate(p1, new = data.frame(depvar(p1)[siteNos,]), trace = TRUE) # Graphically compare the actual site scores with their calibrated values persp(p1, main = "Solid=actual, dashed=calibrated site scores", label = TRUE, col = "blue", las = 1) abline(v = lv(p1)[siteNos], lty = 1, col = 1:length(siteNos)) # actual site scores abline(v = cp1, lty = 2, col = 1:length(siteNos)) # calibrated values } } \keyword{models} \keyword{regression} VGAM/man/calibrate-methods.Rd0000644000176000001440000000117712136651105015473 0ustar ripleyusers\name{calibrate-methods} \docType{methods} \alias{calibrate,rrvglm-method} \alias{calibrate,Coef.qrrvglm-method} \title{ Calibration for Constrained Regression Models } \description{ \code{calibrate} is a generic function applied to QRR-VGLMs and RR-VGAMs etc. } %\usage{ % \S4method{calibrate}{cao,Coef.cao}(object, ...) %} \section{Methods}{ \describe{ \item{object}{ The object from which the calibration is performed. } } } %\note{ % See \code{\link{lvplot}} which is very much related to biplots. % %} \keyword{methods} \keyword{classes} %\keyword{ ~~ other possible keyword(s)} \keyword{models} \keyword{regression} VGAM/man/bratt.Rd0000644000176000001440000001235412136651105013217 0ustar ripleyusers\name{bratt} \alias{bratt} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Bradley Terry Model With Ties } \description{ Fits a Bradley Terry model with ties (intercept-only model) by maximum likelihood estimation. } \usage{ bratt(refgp = "last", refvalue = 1, init.alpha = 1, i0 = 0.01) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{refgp}{ Integer whose value must be from the set \{1,\ldots,\eqn{M}\}, where there are \eqn{M} competitors. The default value indicates the last competitor is used---but don't input a character string, in general. } \item{refvalue}{ Numeric. A positive value for the reference group. } \item{init.alpha}{ Initial values for the \eqn{\alpha}{alpha}s. These are recycled to the appropriate length. } \item{i0}{ Initial value for \eqn{\alpha_0}{alpha_0}. If convergence fails, try another positive value. } } \details{ There are several models that extend the ordinary Bradley Terry model to handle ties. This family function implements one of these models. It involves \eqn{M} competitors who either win or lose or tie against each other. (If there are no draws/ties then use \code{\link{brat}}). The probability that Competitor \eqn{i} beats Competitor \eqn{j} is \eqn{\alpha_i / (\alpha_i+\alpha_j+\alpha_0)}{alpha_i / (alpha_i + alpha_j + alpha_0)}, where all the \eqn{\alpha}{alpha}s are positive. The probability that Competitor \eqn{i} ties with Competitor \eqn{j} is \eqn{\alpha_0 / (\alpha_i+\alpha_j+\alpha_0)}{alpha_0 / (alpha_i + alpha_j + alpha_0)}. Loosely, the \eqn{\alpha}{alpha}s can be thought of as the competitors' `abilities', and \eqn{\alpha_0}{alpha_0} is an added parameter to model ties. For identifiability, one of the \eqn{\alpha_i}{alpha_i} is set to a known value \code{refvalue}, e.g., 1. By default, this function chooses the last competitor to have this reference value. 
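For example, the win and tie probabilities can be evaluated directly from a given set of abilities. A minimal sketch using the formulas above (the numbers below are hypothetical illustrations, not estimates from any data set):
\preformatted{
alpha  <- c(2, 1)  # hypothetical abilities of Competitors 1 and 2
alpha0 <- 0.5      # hypothetical tie parameter
alpha[1] / (alpha[1] + alpha[2] + alpha0)  # P(1 beats 2) = 4/7
alpha0   / (alpha[1] + alpha[2] + alpha0)  # P(1 ties 2)  = 1/7
}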
The data can be represented in the form of an \eqn{M} by \eqn{M} matrix of counts, where winners are the rows and losers are the columns. However, this is not the way the data should be input (see below). Excluding the reference value/group, this function chooses \eqn{\log(\alpha_j)}{log(alpha_j)} as the first \eqn{M-1} linear predictors. The log link ensures that the \eqn{\alpha}{alpha}s are positive. The last linear predictor is \eqn{\log(\alpha_0)}{log(alpha_0)}. The Bradley Terry model can be fitted with covariates, e.g., a home advantage variable, but unfortunately, this lies outside the VGLM theoretical framework and therefore cannot be handled with this code. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}. } \references{ Torsney, B. (2004) Fitting Bradley Terry models using a multiplicative algorithm. In: Antoch, J. (ed.) \emph{Proceedings in Computational Statistics COMPSTAT 2004}, Physica-Verlag: Heidelberg. Pages 513--526. } \author{ T. W. Yee } \note{ The function \code{\link{Brat}} is useful for coercing an \eqn{M} by \eqn{M} matrix of counts into a one-row matrix suitable for \code{bratt}. Diagonal elements are skipped, and the usual S order of \code{c(a.matrix)} of elements is used. There should be no missing values apart from the diagonal elements of the square matrix. The matrix should have winners as the rows, and losers as the columns. In general, the response should be a matrix with \eqn{M(M-1)} columns. Also, a symmetric matrix of ties should be passed into \code{\link{Brat}}. The diagonal of this matrix should be all \code{NA}s. Only an intercept model is recommended with \code{bratt}. It does not really make sense to include covariates because of the limited VGLM framework. Notationally, note that the \pkg{VGAM} family function \code{\link{brat}} has \eqn{M+1} contestants, while \code{bratt} has \eqn{M} contestants. } \seealso{ \code{\link{brat}}, \code{\link{Brat}}, \code{\link{binomialff}}. } \examples{ # citation statistics: being cited is a 'win'; citing is a 'loss' journal <- c("Biometrika", "Comm.Statist", "JASA", "JRSS-B") mat <- matrix(c( NA, 33, 320, 284, 730, NA, 813, 276, 498, 68, NA, 325, 221, 17, 142, NA), 4, 4) dimnames(mat) <- list(winner = journal, loser = journal) # Add some ties. This is fictional data. ties <- 5 + 0 * mat ties[2, 1] <- ties[1, 2] <- 9 # Now fit the model fit <- vglm(Brat(mat, ties) ~ 1, bratt(refgp = 1), trace = TRUE) fit <- vglm(Brat(mat, ties) ~ 1, bratt(refgp = 1), trace = TRUE, crit = "coef") summary(fit) c(0, coef(fit)) # Log-abilities (in order of "journal"); last is log(alpha0) c(1, Coef(fit)) # Abilities (in order of "journal"); last is alpha0 fit@misc$alpha # alpha_1,...,alpha_M fit@misc$alpha0 # alpha_0 fitted(fit) # Probabilities of winning and tying, in awkward form predict(fit) (check <- InverseBrat(fitted(fit))) # Probabilities of winning qprob <- attr(fitted(fit), "probtie") # Probabilities of a tie qprobmat <- InverseBrat(c(qprob), NCo = nrow(ties)) # Probabilities of a tie check + t(check) + qprobmat # Should be 1's in the off-diagonals } \keyword{models} \keyword{regression} VGAM/man/bratUC.Rd0000644000176000001440000000543212136651105013262 0ustar ripleyusers\name{Brat} \alias{Brat} %- Also NEED an '\alias' for EACH other topic documented here.
\title{ Inputting Data to fit a Bradley Terry Model } \description{ Takes in a square matrix of counts and outputs them in a form that is accessible to the \code{\link{brat}} and \code{\link{bratt}} family functions. } \usage{ Brat(mat, ties = 0 * mat, string = c(">","=="), whitespace = FALSE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{mat}{ Matrix of counts, which is considered \eqn{M} by \eqn{M} in dimension when there are ties, and \eqn{M+1} by \eqn{M+1} when there are no ties. The rows are winners and the columns are losers, e.g., the 2-1 element is how many times Competitor 2 has beaten Competitor 1. The matrices are best labelled with the competitors' names. } \item{ties}{ Matrix of counts. This should have the same dimension as \code{mat}. By default, there are no ties. The matrix must be symmetric, and the diagonal should contain \code{NA}s. } \item{string}{ Character. The matrices are labelled with the first value of the descriptor, e.g., \code{"NZ > Oz"} `means' NZ beats Australia in rugby. Suggested alternatives include \code{" beats "} or \code{" wins against "}. The second value is used to handle ties. } \item{whitespace}{ Logical. If \code{TRUE} then a white space is added before and after \code{string}; it generally enhances readability. See \code{\link{CommonVGAMffArguments}} for some similar-type information. } } \details{ In the \pkg{VGAM} package it is necessary for each matrix to be represented as a single row of data by \code{\link{brat}} and \code{\link{bratt}}. Hence the non-diagonal elements of the \eqn{M+1} by \eqn{M+1} matrix are concatenated into \eqn{M(M+1)} values (no ties), while if there are ties, the non-diagonal elements of the \eqn{M} by \eqn{M} matrix are concatenated into \eqn{M(M-1)} values. } \value{ A matrix with 1 row and either \eqn{M(M+1)} or \eqn{M(M-1)} columns. } \references{ Agresti, A. (2002) \emph{Categorical Data Analysis}, 2nd ed. New York: Wiley. } \author{ T. W. Yee } \note{ This is a data preprocessing function for \code{\link{brat}} and \code{\link{bratt}}. Yet to do: merge \code{InverseBrat} into \code{brat}. } \seealso{ \code{\link{brat}}, \code{\link{bratt}}, \code{InverseBrat}. } \examples{ journal <- c("Biometrika", "Comm Statist", "JASA", "JRSS-B") mat <- matrix(c( NA, 33, 320, 284, 730, NA, 813, 276, 498, 68, NA, 325, 221, 17, 142, NA), 4, 4) dimnames(mat) <- list(winner = journal, loser = journal) Brat(mat) # Less readable Brat(mat, whitespace = TRUE) # More readable vglm(Brat(mat, whitespace = TRUE) ~ 1, brat, trace = TRUE) } \keyword{models} \keyword{regression} VGAM/man/brat.Rd0000644000176000001440000001132012136651105013023 0ustar ripleyusers\name{brat} \alias{brat} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Bradley Terry Model } \description{ Fits a Bradley Terry model (intercept-only model) by maximum likelihood estimation. } \usage{ brat(refgp = "last", refvalue = 1, init.alpha = 1) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{refgp}{ Integer whose value must be from the set \{1,\ldots,\eqn{M+1}\}, where there are \eqn{M+1} competitors. The default value indicates the last competitor is used---but don't input a character string, in general. } \item{refvalue}{ Numeric. A positive value for the reference group. } \item{init.alpha}{ Initial values for the \eqn{\alpha}{alpha}s. These are recycled to the appropriate length.
} } \details{ The Bradley Terry model involves \eqn{M+1} competitors who either win or lose against each other (no draws/ties allowed in this implementation---see \code{\link{bratt}} if there are ties). The probability that Competitor \eqn{i} beats Competitor \eqn{j} is \eqn{\alpha_i / (\alpha_i+\alpha_j)}{alpha_i / (alpha_i + alpha_j)}, where all the \eqn{\alpha}{alpha}s are positive. Loosely, the \eqn{\alpha}{alpha}s can be thought of as the competitors' `abilities'. For identifiability, one of the \eqn{\alpha_i}{alpha_i} is set to a known value \code{refvalue}, e.g., 1. By default, this function chooses the last competitor to have this reference value. The data can be represented in the form of an \eqn{M+1} by \eqn{M+1} matrix of counts, where winners are the rows and losers are the columns. However, this is not the way the data should be input (see below). Excluding the reference value/group, this function chooses \eqn{\log(\alpha_j)}{log(alpha_j)} as the \eqn{M} linear predictors. The log link ensures that the \eqn{\alpha}{alpha}s are positive. The Bradley Terry model can be fitted by logistic regression, but this approach is not taken here. The Bradley Terry model can be fitted with covariates, e.g., a home advantage variable, but unfortunately, this lies outside the VGLM theoretical framework and therefore cannot be handled with this code. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}. } \references{ Agresti, A. (2002) \emph{Categorical Data Analysis}, 2nd ed. New York: Wiley. Stigler, S. (1994) Citation patterns in the journals of statistics and probability. \emph{Statistical Science}, \bold{9}, 94--108. The \pkg{BradleyTerry2} package has more comprehensive capabilities than this function. } \author{ T. W. Yee } \note{ The function \code{\link{Brat}} is useful for coercing an \eqn{M+1} by \eqn{M+1} matrix of counts into a one-row matrix suitable for \code{brat}. Diagonal elements are skipped, and the usual S order of \code{c(a.matrix)} of elements is used. There should be no missing values apart from the diagonal elements of the square matrix. The matrix should have winners as the rows, and losers as the columns. In general, the response should be a 1-row matrix with \eqn{M(M+1)} columns. Only an intercept model is recommended with \code{brat}. It does not really make sense to include covariates because of the limited VGLM framework. Notationally, note that the \pkg{VGAM} family function \code{\link{brat}} has \eqn{M+1} contestants, while \code{bratt} has \eqn{M} contestants. } \section{Warning }{ Presently, the residuals are wrong, and the prior weights are not handled correctly. Ideally, the total number of counts should be the prior weights, after the response has been converted to proportions. This would make it similar to family functions such as \code{\link{multinomial}} and \code{\link{binomialff}}. } \seealso{ \code{\link{bratt}}, \code{\link{Brat}}, \code{\link{multinomial}}, \code{\link{binomialff}}.
} \examples{ # Citation statistics: being cited is a 'win'; citing is a 'loss' journal <- c("Biometrika", "Comm.Statist", "JASA", "JRSS-B") mat <- matrix(c( NA, 33, 320, 284, 730, NA, 813, 276, 498, 68, NA, 325, 221, 17, 142, NA), 4, 4) dimnames(mat) <- list(winner = journal, loser = journal) fit <- vglm(Brat(mat) ~ 1, brat(refgp = 1), trace = TRUE) fit <- vglm(Brat(mat) ~ 1, brat(refgp = 1), trace = TRUE, crit = "coef") summary(fit) c(0, coef(fit)) # Log-abilities (in order of "journal") c(1, Coef(fit)) # Abilities (in order of "journal") fitted(fit) # Probabilities of winning in awkward form (check <- InverseBrat(fitted(fit))) # Probabilities of winning check + t(check) # Should be 1's in the off-diagonals } \keyword{models} \keyword{regression} VGAM/man/bortUC.Rd0000644000176000001440000000310012136651105013266 0ustar ripleyusers\name{Bort} \alias{Bort} \alias{dbort} %\alias{pbort} %\alias{qbort} \alias{rbort} \title{The Borel-Tanner Distribution} \description{ Density % distribution function, quantile function and random generation for the Borel-Tanner distribution. } \usage{ dbort(x, Qsize = 1, a = 0.5, log = FALSE) %pbort(q, Qsize = 1, a = 0.5) %qbort(p, Qsize = 1, a = 0.5) rbort(n, Qsize = 1, a = 0.5) } \arguments{ \item{x}{vector of quantiles.} % \item{p}{vector of probabilities.} \item{n}{number of observations. Must be a positive integer of length 1.} \item{Qsize, a}{ See \code{\link{borel.tanner}}. } \item{log}{ Logical. If \code{log=TRUE} then the logarithm of the density is returned. } } \value{ \code{dbort} gives the density, % \code{pbort} gives the distribution function, % \code{qbort} gives the quantile function, and \code{rbort} generates random deviates. } \author{ T. W. Yee } \details{ See \code{\link{borel.tanner}}, the \pkg{VGAM} family function for estimating the parameter, for the formula of the probability density function and other details. } \section{Warning }{ Looping is used for \code{\link{rbort}}, therefore values of \code{a} close to 1 will result in long (or infinite!) computational times. The default value of \code{a} is subjective. } \seealso{ \code{\link{borel.tanner}}. } \examples{ \dontrun{ qsize = 1; a = 0.5 x = qsize:(qsize+10) plot(x, dbort(x, qsize, a), type="h", las=1, col="blue", ylab=paste("fbort(qsize=", qsize, ", a=", a, ")"), main="Borel-Tanner density function") } } \keyword{distribution} VGAM/man/borel.tanner.Rd0000644000176000001440000000624512136651105014476 0ustar ripleyusers\name{borel.tanner} \alias{borel.tanner} %- Also NEED an '\alias' for EACH other topic documented here. \title{Borel-Tanner Distribution Family Function} \description{ Estimates the parameter of a Borel-Tanner distribution by maximum likelihood estimation. } \usage{ borel.tanner(Qsize = 1, link = "logit", imethod = 1) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{Qsize}{ A positive integer. It is called \eqn{Q} below and is the initial queue size. } \item{link}{ Link function for the parameter; see \code{\link{Links}} for more choices and for general information. } \item{imethod}{ See \code{\link{CommonVGAMffArguments}}. Valid values are 1, 2, 3 or 4. } } \details{ The Borel-Tanner distribution (Tanner, 1953) describes the distribution of the total number of customers served before a queue vanishes given a single queue with random arrival times of customers (at a constant rate \eqn{r} per unit time, and each customer taking a constant time \eqn{b} to be served). 
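% A hedged numerical check of the mean formula Q/(1-a) given below
% (the parameter values here are hypothetical):
% a <- 0.3; Q <- 2
% mean(rbort(10000, Qsize = Q, a = a))  # should be near Q/(1-a) = 2.857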
Initially the queue has \eqn{Q} people and the first one starts to be served. The two parameters appear in the density only in the form of the product \eqn{rb}, therefore we use \eqn{a=rb}, say, to denote the single parameter to be estimated. The density function is \deqn{f(y;a) = \frac{ Q! }{(y-Q)!} y^{y-Q-1} a^{y-Q} \exp(-ay) }{% f(y;a) = (Q! / (y-Q)!) * y^(y-Q-1) * a^(y-Q) * exp(-ay)} where \eqn{y=Q,Q+1,Q+2,\ldots}{y=Q,Q+1,Q+2,...}. The case \eqn{Q=1} corresponds to the \emph{Borel} distribution (Borel, 1942). For the \eqn{Q=1} case it is necessary that \eqn{0 < a < 1} for the distribution to be proper. The Borel distribution is a basic Lagrangian distribution of the first kind. The Borel-Tanner distribution is a \eqn{Q}-fold convolution of the Borel distribution. The mean is \eqn{Q/(1-a)} (returned as the fitted values) and the variance is \eqn{Q a / (1-a)^3}{Q*a/(1-a)^3}. The distribution has a very long tail unless \eqn{a} is small. Fisher scoring is implemented. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Tanner, J. C. (1953) A problem of interference between two queues. \emph{Biometrika}, \bold{40}, 58--69. Borel, E. (1942) Sur l'emploi du theoreme de Bernoulli pour faciliter le calcul d'une infinite de coefficients. Application au probleme de l'attente a un guichet. \emph{Comptes Rendus, Academie des Sciences, Paris, Series A}, \bold{214}, 452--456. Page 328 of Johnson N. L., Kemp, A. W. and Kotz S. (2005) \emph{Univariate Discrete Distributions}, 3rd edition, Hoboken, New Jersey: Wiley. Consul, P. C. and Famoye, F. (2006) \emph{Lagrangian Probability Distributions}, Boston: Birkhauser. } \author{ T. W. Yee } %\note{ % %} \seealso{ \code{\link{rbort}}, \code{\link{poissonff}}, \code{\link{felix}}. } \examples{ bdata <- data.frame(y = rbort(n <- 200)) fit <- vglm(y ~ 1, borel.tanner, bdata, trace = TRUE, crit = "c") coef(fit, matrix = TRUE) Coef(fit) summary(fit) } \keyword{models} \keyword{regression} VGAM/man/bmi.nz.Rd0000644000176000001440000000261112136651105013273 0ustar ripleyusers\name{bmi.nz} \alias{bmi.nz} \docType{data} \title{ Body Mass Index of New Zealand Adults Data} \description{ The body mass indexes and ages from an approximate random sample of 700 New Zealand adults. } \usage{data(bmi.nz)} \format{ A data frame with 700 observations on the following 2 variables. \describe{ \item{age}{a numeric vector; their age (years). } \item{BMI}{a numeric vector; their body mass indexes, i.e., their weight divided by the square of their height (kg / \eqn{m^2}{m^2}).} } } \details{ They are a random sample from the Fletcher Challenge/Auckland Heart and Health survey conducted in the early 1990s. There are some outliers in the data set. A variable \code{gender} would be useful, and may be added later. } \source{ Clinical Trials Research Unit, University of Auckland, New Zealand, \url{http://www.ctru.auckland.ac.nz}. } \references{ MacMahon, S., Norton, R., Jackson, R., Mackie, M. J., Cheng, A., Vander Hoorn, S., Milne, A., McCulloch, A. (1995) Fletcher Challenge-University of Auckland Heart & Health Study: design and baseline findings. \emph{New Zealand Medical Journal}, \bold{108}, 499--502.
} \examples{ \dontrun{ with(bmi.nz, plot(age, BMI, col = "blue")) fit <- vgam(BMI ~ s(age, df = c(2, 4, 2)), lms.yjn, data = bmi.nz, trace = TRUE) qtplot(fit, pcol = "blue", tcol = "brown", lcol = "brown") } } \keyword{datasets} VGAM/man/bivgamma.mckay.Rd0000644000176000001440000000752212136651105014772 0ustar ripleyusers\name{bivgamma.mckay} \alias{bivgamma.mckay} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Bivariate Gamma: McKay's Distribution } \description{ Estimate the three parameters of McKay's bivariate gamma distribution by maximum likelihood estimation. } \usage{ bivgamma.mckay(lscale = "loge", lshape1 = "loge", lshape2 = "loge", iscale = NULL, ishape1 = NULL, ishape2 = NULL, imethod = 1, zero = 1) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lscale, lshape1, lshape2}{ Link functions applied to the (positive) parameters \eqn{a}, \eqn{p} and \eqn{q} respectively. See \code{\link{Links}} for more choices. } \item{iscale, ishape1, ishape2}{ Optional initial values for \eqn{a}, \eqn{p} and \eqn{q} respectively. The default is to compute them internally. } \item{imethod, zero}{ See \code{\link{CommonVGAMffArguments}}. } } \details{ One of the earliest forms of the bivariate gamma distribution has a joint probability density function given by \deqn{f(y_1,y_2;a,p,q) = (1/a)^{p+q} y_1^{p-1} (y_2-y_1)^{q-1} \exp(-y_2 / a) / [\Gamma(p) \Gamma(q)]}{% f(y1,y2;a,p,q) = (1/a)^(p+q) y1^(p-1) (y2-y1)^(q-1) exp(-y2/a) / [gamma(p) gamma(q)] } for \eqn{a > 0}, \eqn{p > 0}, \eqn{q > 0} and \eqn{0 < y_1 < y_2}{0 < y1 < y2}, where \eqn{y > 0}, \eqn{a>0} is the shape parameter, \eqn{b>0} is the scale parameter. The mean of \eqn{Y} (which is the fitted value) is \eqn{b(1 + a^2/2)}{b*(1 + a*a/2)} and the variance is \eqn{a^2 b^2 (1 + \frac{5}{4}a^2)}{a^2 b^2 (1 + (5/4)*a^2)}. By default, \eqn{\eta_1 = \log(a)}{eta1 = log(a)} and \eqn{\eta_2 = \log(b)}{eta2 = log(b)} for this family function. Note that \eqn{a} and \eqn{b} are orthogonal, i.e., the Fisher information matrix is diagonal. This family function implements Fisher scoring, and it is unnecessary to compute any integrals numerically. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Lemonte, A. J. and Cribari-Neto, F. and Vasconcellos, K. L. P. (2007) Improved statistical inference for the two-parameter Birnbaum-Saunders distribution. \emph{Computational Statistics \& Data Analysis}, \bold{51}, 4656--4681. Birnbaum, Z. W. and Saunders, S. C. (1969) A new family of life distributions. \emph{Journal of Applied Probability}, \bold{6}, 319--327. Birnbaum, Z. W. and Saunders, S. C. (1969) Estimation for a family of life distributions with applications to fatigue. \emph{Journal of Applied Probability}, \bold{6}, 328--347. Engelhardt, M. and Bain, L. J. and Wright, F. T. (1981) Inferences on the parameters of the Birnbaum-Saunders fatigue life distribution based on maximum likelihood estimation. \emph{Technometrics}, \bold{23}, 251--256. Johnson, N. L. and Kotz, S. and Balakrishnan, N. (1995) \emph{Continuous Univariate Distributions}, 2nd edition, Volume 2, New York: Wiley. } \author{ T. W. Yee } %\note{ % %} %\section{Warning }{ %} \seealso{ \code{\link{pbisa}}, \code{\link{inv.gaussianff}}.
} \examples{ bdat1 <- data.frame(x2 = runif(nn <- 1000)) bdat1 <- transform(bdat1, shape = exp(-0.5 + x2), scale = exp(1.5)) bdat1 <- transform(bdat1, y = rbisa(nn, shape, scale)) fit1 <- vglm(y ~ x2, bisa(zero = 2), bdat1, trace = TRUE) coef(fit1, matrix = TRUE) \dontrun{ bdat2 <- data.frame(shape = exp(-0.5), scale = exp(0.5)) bdat2 <- transform(bdat2, y = rbisa(nn, shape, scale)) fit <- vglm(y ~ 1, bisa, bdat2, trace = TRUE) with(bdat2, hist(y, prob = TRUE, ylim = c(0, 0.5), col = "lightblue")) coef(fit, matrix = TRUE) with(bdat2, mean(y)) head(fitted(fit)) x <- with(bdat2, seq(0, max(y), len = 200)) lines(dbisa(x, Coef(fit)[1], Coef(fit)[2]) ~ x, bdat2, col = "orange", lwd = 2) } } \keyword{models} \keyword{regression} VGAM/man/biplot-methods.Rd0000644000176000001440000000142112136651105015026 0ustar ripleyusers\name{biplot-methods} \docType{methods} \alias{biplot,rrvglm-method} \alias{biplot,qrrvglm-method} \title{ Biplot of Constrained Regression Models } \description{ \code{biplot} is a generic function applied to RR-VGLMs and QRR-VGLMs etc. These apply only to rank-1 and rank-2 models of these classes. For RR-VGLMs these plot the second latent variable scores against the first latent variable scores. } %\usage{ % \S4method{biplot}{cao,Coef.cao}(object, ...) %} \section{Methods}{ \describe{ \item{x}{ The object from which the latent variables are extracted and/or plotted. } } } \note{ See \code{\link{lvplot}} which is very much related to biplots. } \keyword{methods} \keyword{classes} %\keyword{ ~~ other possible keyword(s)} \keyword{models} \keyword{regression} VGAM/man/binormal.Rd0000644000176000001440000000626212136651105013707 0ustar ripleyusers\name{binormal} \alias{binormal} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Bivariate normal distribution family function } \description{ Maximum likelihood estimation of the five parameters of a bivariate normal distribution. } \usage{ binormal(lmean1 = "identity", lmean2 = "identity", lsd1 = "loge", lsd2 = "loge", lrho = "rhobit", imean1 = NULL, imean2 = NULL, isd1 = NULL, isd2 = NULL, irho = NULL, imethod = 1, eq.mean = FALSE, eq.sd = FALSE, zero = 3:5) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lmean1, lmean2, lsd1, lsd2, lrho}{ Link functions applied to the means, standard deviations and \code{rho} parameters. See \code{\link{Links}} for more choices. As the standard deviations are positive quantities, a log link is the default for them. } \item{imean1, imean2, isd1, isd2, irho, imethod, zero}{ See \code{\link{CommonVGAMffArguments}} for more information. } \item{eq.mean, eq.sd}{ Logical or formula. Constrains the means or the standard deviations to be equal. Only one of these arguments may be assigned a value. } } \details{ For the bivariate normal distribution, this fits a linear model (LM) to the means, and by default, the other parameters are intercept-only. The response should be a two-column matrix. The correlation parameter is \code{rho}, which lies between \eqn{-1} and \eqn{1} (thus the \code{\link{rhobit}} link is a reasonable choice). The fitted means are returned as the fitted values, which is in the form of a two-column matrix. Fisher scoring is implemented. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \section{Warning}{ This function may be renamed to \code{normal2()} or something like that at a later date. } %\references{ % %} \author{ T. W.
Yee } \note{ If both equal means and equal standard deviations are desired then use something like \code{constraints = list("(Intercept)" = matrix(c(1,1,0,0,0, 0,0,1,1,0 ,0,0,0,0,1), 5, 3))} and maybe \code{zero = NULL} etc. } \seealso{ \code{\link{normal1}}, \code{\link{gaussianff}}, \code{\link{pnorm2}}. } \examples{ set.seed(123); nn <- 1000 bdata <- data.frame(x2 = runif(nn), x3 = runif(nn)) bdata <- transform(bdata, y1 = rnorm(nn, 1 + 2*x2), y2 = rnorm(nn, 3 + 4*x2)) fit1 <- vglm(cbind(y1, y2) ~ x2, binormal(eq.sd = TRUE), data = bdata, trace = TRUE) coef(fit1, matrix = TRUE) constraints(fit1) summary(fit1) # Estimated P(Y1 <= y1, Y2 <= y2) under the fitted model var1 <- loge(2 * predict(fit1)[, "log(sd1)"], inverse = TRUE) var2 <- loge(2 * predict(fit1)[, "log(sd2)"], inverse = TRUE) cov12 <- rhobit(predict(fit1)[, "rhobit(rho)"], inverse = TRUE) head(with(bdata, pnorm2(y1, y2, mean1 = predict(fit1)[, "mean1"], mean2 = predict(fit1)[, "mean2"], var1 = var1, var2 = var2, cov12 = cov12))) } \keyword{models} \keyword{regression} VGAM/man/binomialff.Rd0000644000176000001440000002153312136651105014210 0ustar ripleyusers\name{binomialff} %\alias{binomial} \alias{binomialff} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Binomial Family Function } \description{ Family function for fitting generalized linear models to binomial responses, where the dispersion parameter may be known or unknown. } \usage{ binomialff(link = "logit", dispersion = 1, mv = FALSE, onedpar = !mv, parallel = FALSE, apply.parint = FALSE, zero = NULL, bred = FALSE, earg.link = FALSE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{link}{ Link function; see \code{\link{Links}} and \code{\link{CommonVGAMffArguments}} for more information. } \item{dispersion}{ Dispersion parameter. By default, maximum likelihood is used to estimate the model because it is known. However, the user can specify \code{dispersion = 0} to have it estimated, or else specify a known positive value (or values if \code{mv} is \code{TRUE}). } \item{mv}{ Multivariate response? If \code{TRUE}, then the response is interpreted as \eqn{M} independent binary responses, where \eqn{M} is the number of columns of the response matrix. In this case, the response matrix should have zero/one values only. If \code{FALSE} and the response is a (2-column) matrix, then the number of successes is given in the first column, and the second column is the number of failures. } \item{onedpar}{ One dispersion parameter? If \code{mv}, then a separate dispersion parameter will be computed for each response (column), by default. Setting \code{onedpar = TRUE} will pool them so that there is only one dispersion parameter to be estimated. } \item{parallel}{ A logical or formula. Used only if \code{mv} is \code{TRUE}. This argument allows for the parallelism assumption whereby the regression coefficients for a variable is constrained to be equal over the \eqn{M} linear/additive predictors. } \item{zero}{ An integer-valued vector specifying which linear/additive predictors are modelled as intercepts only. The values must be from the set \{1,2,\ldots,\eqn{M}\}, where \eqn{M} is the number of columns of the matrix response. See \code{\link{CommonVGAMffArguments}} for more information. } \item{apply.parint, earg.link}{ Details at \code{\link{CommonVGAMffArguments}}. } \item{bred}{ Details at \code{\link{CommonVGAMffArguments}}. 
Setting \code{bred = TRUE} should work for multiple responses (\code{mv = TRUE}) and all \pkg{VGAM} link functions; it has been tested for \code{\link{logit}} only (and it gives similar, though not identical, results to \pkg{brglm}), and further testing is required. One result from fitting bias-reduced binary regression is that finite regression coefficients occur when the data is separable (see example below). } } \details{ This function largely mimics \code{\link[stats:Binomial]{binomial}}; however, there are some differences. If the dispersion parameter is unknown, then the resulting estimate is not fully a maximum likelihood estimate (see pp.124--8 of McCullagh and Nelder, 1989). A dispersion parameter that is less/greater than unity corresponds to under-/over-dispersion relative to the binomial model. Over-dispersion is more common in practice. Setting \code{mv = TRUE} is necessary when fitting a Quadratic RR-VGLM (see \code{\link{cqo}}) because the response is a matrix of \eqn{M} columns (e.g., one column per species). Then there will be \eqn{M} dispersion parameters (one per column of the response matrix). When used with \code{\link{cqo}} and \code{\link{cao}}, it may be preferable to use the \code{\link{cloglog}} link. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, \code{\link{vgam}}, \code{\link{rrvglm}}, \code{\link{cqo}}, and \code{\link{cao}}. } \references{ McCullagh, P. and Nelder, J. A. (1989) \emph{Generalized Linear Models}, 2nd ed. London: Chapman & Hall. Altman, M. and Gill, J. and McDonald, M. P. (2004) \emph{Numerical Issues in Statistical Computing for the Social Scientist}, Hoboken, NJ, USA: Wiley-Interscience. Ridout, M. S. (1990) Non-convergence of Fisher's method of scoring---a simple example. \emph{GLIM Newsletter}, 20(6). } \author{ Thomas W. Yee } \note{ If \code{mv} is \code{FALSE} (default) then the response can be of one of two formats: a factor (first level taken as failure), or a 2-column matrix (first column = successes) of counts. The argument \code{weights} in the modelling function can also be specified as any vector of positive values. In general, 1 means success and 0 means failure (to check, see the \code{y} slot of the fitted object). Note that a general vector of proportions of success is no longer accepted. The notation \eqn{M} is used to denote the number of linear/additive predictors. If \code{mv} is \code{TRUE}, then the matrix response can only be of one format: a matrix of 1's and 0's (1 = success). The call \code{binomialff(dispersion = 0, ...)} is equivalent to \code{quasibinomialff(...)}. The latter was written so that R users of \code{quasibinomial()} would only need to add a ``\code{ff}'' to the end of the family function name. Regardless of whether the dispersion parameter is to be estimated or not, its value can be seen in the output from the \code{summary()} of the object. % With the introduction of name spaces for the \pkg{VGAM} package, % \code{"ff"} can be dropped for this family function. Fisher scoring is used. This can sometimes fail to converge by oscillating between successive iterations (Ridout, 1990). See the example below.
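Regarding the dispersion parameter, the equivalence noted above can be seen directly. A minimal sketch (the data frame \code{mydata}, with columns \code{succ}, \code{fail} and \code{x2}, is a hypothetical placeholder):
\preformatted{
# Sketch only: both calls estimate the dispersion parameter.
fit1 <- vglm(cbind(succ, fail) ~ x2, binomialff(dispersion = 0), mydata)
fit2 <- vglm(cbind(succ, fail) ~ x2, quasibinomialff(), mydata)
summary(fit1)  # the estimated dispersion parameter is displayed
}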
} \seealso{ \code{\link{quasibinomialff}}, \code{\link{Links}}, \code{\link{rrvglm}}, \code{\link{cqo}}, \code{\link{cao}}, \code{\link{betabinomial}}, \code{\link{posbinomial}}, \code{\link{zibinomial}}, \code{\link{dexpbinomial}}, \code{\link{mbinomial}}, \code{\link{seq2binomial}}, \code{\link{amlbinomial}}, \code{\link{simplex}}, \code{\link[stats:Binomial]{binomial}}, \pkg{safeBinaryRegression}. } \section{Warning }{ With a multivariate response, assigning a known dispersion parameter for \emph{each} response is not handled well yet. Currently, only a single known dispersion parameter is handled well. See the above note regarding \code{bred}. The maximum likelihood estimate will not exist if the data is \emph{completely separable} or \emph{quasi-completely separable}. See Chapter 10 of Altman et al. (2004) for more details, and \pkg{safeBinaryRegression}. Yet to do: add a \code{sepcheck = TRUE}, say, argument to detect this problem and give an appropriate warning. } \examples{ quasibinomialff() quasibinomialff(link = "probit") shunua <- hunua[sort.list(with(hunua, altitude)), ] # Sort by altitude fit <- vglm(agaaus ~ poly(altitude, 2), binomialff(link = cloglog), shunua) \dontrun{ plot(agaaus ~ jitter(altitude), shunua, col = "blue", ylab = "P(Agaaus = 1)", main = "Presence/absence of Agathis australis", las = 1) with(shunua, lines(altitude, fitted(fit), col = "orange", lwd = 2)) } # Fit two species simultaneously fit2 <- vgam(cbind(agaaus, kniexc) ~ s(altitude), binomialff(mv = TRUE), shunua) with(shunua, matplot(altitude, fitted(fit2), type = "l", main = "Two species response curves", las = 1)) # Shows that Fisher scoring can sometimes fail. See Ridout (1990). ridout <- data.frame(v = c(1000, 100, 10), r = c(4, 3, 3), n = c(5, 5, 5)) (ridout <- transform(ridout, logv = log(v))) # The iterations oscillate between two local solutions: glm.fail <- glm(r / n ~ offset(logv) + 1, weight = n, binomial(link = 'cloglog'), ridout, trace = TRUE) coef(glm.fail) # vglm()'s half-stepping ensures the MLE of -5.4007 is obtained: vglm.ok <- vglm(cbind(r, n-r) ~ offset(logv) + 1, binomialff(link = cloglog), ridout, trace = TRUE) coef(vglm.ok) # Separable data set.seed(123) threshold <- 0 bdata <- data.frame(x2 = sort(rnorm(nn <- 100))) bdata <- transform(bdata, y1 = ifelse(x2 < threshold, 0, 1)) fit <- vglm(y1 ~ x2, binomialff(bred = TRUE), data = bdata, criter = "coef", trace = TRUE) coef(fit, matrix = TRUE) # Finite!! summary(fit) \dontrun{ plot(depvar(fit) ~ x2, data = bdata, col = "blue", las = 1) lines(fitted(fit) ~ x2, data = bdata, col = "orange") abline(v = threshold, col = "gray", lty = "dashed") } } \keyword{models} \keyword{regression} % a vector of proportions of success, % In particular, for a general vector of proportions, % you will need to specify \code{weights} because the number of trials % is needed. % To input general positive values into the \code{weights} argument of % \code{\link{vglm}}/\code{\link{vgam}} one needs to input a 2-column % response. VGAM/man/binom2.rhoUC.Rd0000644000176000001440000000706512136651105014313 0ustar ripleyusers\name{Binom2.rho} \alias{Binom2.rho} \alias{dbinom2.rho} \alias{rbinom2.rho} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Bivariate Probit Model } \description{ Density and random generation for a bivariate probit model. The correlation parameter rho is the measure of dependency.
} \usage{ rbinom2.rho(n, mu1, mu2 = if(exchangeable) mu1 else stop("argument 'mu2' not specified"), rho = 0, exchangeable = FALSE, twoCols = TRUE, colnames = if(twoCols) c("y1","y2") else c("00", "01", "10", "11"), ErrorCheck = TRUE) dbinom2.rho(mu1, mu2 = if(exchangeable) mu1 else stop("'mu2' not specified"), rho = 0, exchangeable = FALSE, colnames = c("00", "01", "10", "11"), ErrorCheck = TRUE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{n}{ number of observations. Must be a single positive integer. The arguments \code{mu1}, \code{mu2}, \code{rho} are recycled to length \code{n}. } \item{mu1, mu2}{ The marginal probabilities. Only \code{mu1} is needed if \code{exchangeable = TRUE}. Values should be between 0 and 1. } \item{rho}{ The correlation parameter. Must be numeric and lie between \eqn{-1} and \eqn{1}. The default value of zero means the responses are uncorrelated. } \item{exchangeable}{ Logical. If \code{TRUE}, the two marginal probabilities are constrained to be equal. } \item{twoCols}{ Logical. If \code{TRUE}, then a \eqn{n} \eqn{\times}{*} \eqn{2} matrix of 1s and 0s is returned. If \code{FALSE}, then a \eqn{n} \eqn{\times}{*} \eqn{4} matrix of 1s and 0s is returned. } \item{colnames}{ The \code{dimnames} argument of \code{\link[base]{matrix}} is assigned \code{list(NULL, colnames)}. } \item{ErrorCheck}{ Logical. Do some error checking of the input parameters? } } \details{ The function \code{rbinom2.rho} generates data coming from a bivariate probit model. The data might be fitted with the \pkg{VGAM} family function \code{\link{binom2.rho}}. The function \code{dbinom2.rho} does not really compute the density (because that does not make sense here) but rather returns the four joint probabilities. } \value{ The function \code{rbinom2.rho} returns either a 2 or 4 column matrix of 1s and 0s, depending on the argument \code{twoCols}. The function \code{dbinom2.rho} returns a 4 column matrix of joint probabilities; each row adds up to unity. } \author{ T. W. Yee } \seealso{ \code{\link{binom2.rho}}. } \examples{ (myrho <- rhobit(2, inverse = TRUE)) # Example 1 ymat <- rbinom2.rho(nn <- 2000, mu1 = 0.8, rho = myrho, exch = TRUE) (mytab <- table(ymat[, 1], ymat[, 2], dnn = c("Y1", "Y2"))) fit <- vglm(ymat ~ 1, binom2.rho(exch = TRUE)) coef(fit, matrix = TRUE) bdata <- data.frame(x2 = sort(runif(nn))) # Example 2 bdata <- transform(bdata, mu1 = probit(-2+4*x2, inverse = TRUE), mu2 = probit(-1+3*x2, inverse = TRUE)) dmat <- with(bdata, dbinom2.rho(mu1, mu2, myrho)) ymat <- with(bdata, rbinom2.rho(nn, mu1, mu2, myrho)) fit2 <- vglm(ymat ~ x2, binom2.rho, bdata) coef(fit2, matrix = TRUE) \dontrun{ matplot(with(bdata, x2), dmat, lty = 1:4, col = 1:4, type = "l", main = "Joint probabilities", ylim = 0:1, lwd = 2, ylab = "Probability") legend(x = 0.25, y = 0.9, lty = 1:4, col = 1:4, lwd = 2, legend = c("1 = (y1=0, y2=0)", "2 = (y1=0, y2=1)", "3 = (y1=1, y2=0)", "4 = (y1=1, y2=1)")) } } \keyword{distribution} VGAM/man/binom2.rho.Rd0000644000176000001440000001460212136651105014056 0ustar ripleyusers\name{binom2.rho} \alias{binom2.rho} \alias{binom2.Rho} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Bivariate Probit Model (Family Function) } \description{ Fits a bivariate probit model to two binary responses. The correlation parameter rho is the measure of dependency. 
} \usage{ binom2.rho(lrho = "rhobit", lmu = "probit", imu1 = NULL, imu2 = NULL, irho = NULL, imethod = 1, zero = 3, exchangeable = FALSE, grho = seq(-0.95, 0.95, by = 0.05), nsimEIM = NULL) binom2.Rho(rho = 0, imu1 = NULL, imu2 = NULL, exchangeable = FALSE, nsimEIM = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lrho}{ Link function applied to the \eqn{\rho}{rho} association parameter. See \code{\link{Links}} for more choices. } \item{lmu}{ Link function applied to the marginal probabilities. Should be left alone. } \item{irho}{ Optional initial value for \eqn{\rho}{rho}. If given, this should lie between \eqn{-1} and \eqn{1}. See below for more comments. } \item{imu1, imu2}{ Optional initial values for the two marginal probabilities. May be a vector. } \item{zero}{ Which linear/additive predictor is modelled as an intercept only? A \code{NULL} means none. Numerically, the \eqn{\rho}{rho} parameter is easiest modelled as an intercept only, hence the default. } \item{exchangeable}{ Logical. If \code{TRUE}, the two marginal probabilities are constrained to be equal. } \item{imethod, nsimEIM, grho}{ See \code{\link{CommonVGAMffArguments}} for more information. A value of at least 100 for \code{nsimEIM} is recommended; the larger the value the better. } \item{rho}{ Numeric vector. Values are recycled to the needed length, and ought to be in range. } } \details{ The \emph{bivariate probit model} was one of the earliest regression models to handle two binary responses jointly. It has a probit link for each of the two marginal probabilities, and models the association between the responses by the \eqn{\rho}{rho} parameter of a standard bivariate normal distribution (with zero means and unit variances). One can think of the joint probabilities being \eqn{\Phi(\eta_1,\eta_2;\rho)}{Phi(eta1,eta2;rho)} where \eqn{\Phi}{Phi} is the cumulative distribution function of a standard bivariate normal distribution. Explicitly, the default model is \deqn{probit[P(Y_j=1)] = \eta_j,\ \ \ j=1,2}{% probit[P(Y_j=1)] = eta_j, j=1,2} for the marginals, and \deqn{rhobit[rho] = \eta_3.}{% rhobit[rho] = eta_3.} The joint probability \eqn{P(Y_1=1,Y_2=1)=\Phi(\eta_1,\eta_2;\rho)}{P(Y_1=1,Y_2=1)=Phi(eta1,eta2;rho)}, and from these the other three joint probabilities are easily computed. The model is fitted by maximum likelihood estimation since the full likelihood is specified. Fisher scoring is implemented. The default models \eqn{\eta_3}{eta3} as a single parameter only, i.e., an intercept-only model for rho, but this can be circumvented by setting \code{zero = NULL} in order to model rho as a function of all the explanatory variables. The bivariate probit model should not be confused with a \emph{bivariate logit model} with a probit link (see \code{\link{binom2.or}}). The latter uses the odds ratio to quantify the association. Actually, the bivariate logit model is recommended over the bivariate probit model because the odds ratio is a more natural way of measuring the association between two binary responses. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. When fitted, the \code{fitted.values} slot of the object contains the four joint probabilities, labelled as \eqn{(Y_1,Y_2)}{(Y1,Y2)} = (0,0), (0,1), (1,0), (1,1), respectively. } \references{ Ashford, J. R. and Sowden, R. R. (1970) Multi-variate probit analysis. \emph{Biometrics}, \bold{26}, 535--546. 
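% A small sketch of the details above, using illustrative parameter values:
% given the marginals and p11 = P(Y1=1,Y2=1), the other three joint
% probabilities follow by differencing.
% mu1 <- 0.6; mu2 <- 0.7; rho <- 0.3
% p11 <- dbinom2.rho(mu1, mu2, rho)[, "11"]  # P(Y1=1, Y2=1)
% c(p10 = mu1 - p11, p01 = mu2 - p11, p00 = 1 - mu1 - mu2 + p11)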
Freedman, D. A. (2010) \emph{Statistical Models and Causal Inference: a Dialogue with the Social Sciences}, Cambridge: Cambridge University Press. Freedman, D. A. and Sekhon, J. S. (2010) Endogeneity in probit response models. \emph{Political Analysis}, \bold{18}, 138--150. } \author{ Thomas W. Yee } \note{ See \code{\link{binom2.or}} about the form of input the response should have. By default, a constant \eqn{\rho}{rho} is fitted because \code{zero = 3}. Set \code{zero = NULL} if you want the \eqn{\rho}{rho} parameter to be modelled as a function of the explanatory variables. The value \eqn{\rho}{rho} lies in the interval \eqn{(-1,1)}{(-1,1)}, therefore a \code{\link{rhobit}} link is the default. Convergence problems can occur. If so, assign \code{irho} a range of values and monitor convergence (e.g., set \code{trace = TRUE}). Else try \code{imethod}. Practical experience shows that local solutions can occur, and that \code{irho} needs to be quite close to the (global) solution. Also, \code{imu1} and \code{imu2} may be used. This help file is mainly about \code{binom2.rho()}. \code{binom2.Rho()} fits a bivariate probit model with \emph{known} \eqn{\rho}{rho}. The inputted \code{rho} is saved in the \code{misc} slot of the fitted object, with \code{rho} as the component name. In some econometrics applications (e.g., Freedman 2010, Freedman and Sekhon 2010) one response is used as an explanatory variable, e.g., a \emph{recursive} binomial probit model. Such a model will not work here. Historically, the bivariate probit model was the first \pkg{VGAM} family function I ever wrote, based on Ashford and Sowden (1970). I don't think they ever thought of such use either! Hence the criticisms raised go beyond the use of what was originally intended. } \seealso{ \code{\link{rbinom2.rho}}, \code{\link{rhobit}}, \code{\link{pnorm2}}, \code{\link{binom2.or}}, \code{\link{loglinb2}}, \code{\link{coalminers}}, \code{\link{binomialff}}, \code{\link{fisherz}}. } \examples{
coalminers <- transform(coalminers, Age = (age - 42) / 5)
fit <- vglm(cbind(nBnW, nBW, BnW, BW) ~ Age,
            binom2.rho, data = coalminers, trace = TRUE)
summary(fit)
coef(fit, matrix = TRUE)
}
\keyword{models}
\keyword{regression}

% (i.e., \code{\link[stats:Normal]{pnorm}})
% with correlation parameter \eqn{\rho}{rho}.
VGAM/man/binom2.orUC.Rd0000644000176000001440000000743012136651105014137 0ustar ripleyusers\name{Binom2.or}
\alias{Binom2.or}
\alias{dbinom2.or}
\alias{rbinom2.or}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Bivariate Binary Regression with an Odds Ratio } \description{ Density and random generation for a bivariate binary regression model using an odds ratio as the measure of dependency. } \usage{
rbinom2.or(n, mu1,
           mu2 = if(exchangeable) mu1 else stop("argument 'mu2' not specified"),
           oratio = 1, exchangeable = FALSE, tol = 0.001, twoCols = TRUE,
           colnames = if(twoCols) c("y1","y2") else c("00", "01", "10", "11"),
           ErrorCheck = TRUE)
dbinom2.or(mu1, mu2 = if(exchangeable) mu1 else stop("'mu2' not specified"),
           oratio = 1, exchangeable = FALSE, tol = 0.001,
           colnames = c("00", "01", "10", "11"),
           ErrorCheck = TRUE)
}
%- maybe also 'usage' for other objects documented here.
\arguments{ \item{n}{ number of observations. Must be a single positive integer. The arguments \code{mu1}, \code{mu2}, \code{oratio} are recycled to length \code{n}. } \item{mu1, mu2}{ The marginal probabilities. Only \code{mu1} is needed if \code{exchangeable = TRUE}. Values should be between 0 and 1. } \item{oratio}{ Odds ratio.
Must be numeric and positive. The default value of unity means the responses are statistically independent. } \item{exchangeable}{ Logical. If \code{TRUE}, the two marginal probabilities are constrained to be equal. } \item{twoCols}{ Logical. If \code{TRUE}, then a \eqn{n} \eqn{\times}{*} \eqn{2} matrix of 1s and 0s is returned. If \code{FALSE}, then a \eqn{n} \eqn{\times}{*} \eqn{4} matrix of 1s and 0s is returned. } \item{colnames}{ The \code{dimnames} argument of \code{\link[base]{matrix}} is assigned \code{list(NULL, colnames)}. } \item{tol}{ Tolerance for testing independence. Should be some small positive numerical value. } \item{ErrorCheck}{ Logical. Do some error checking of the input parameters? } } \details{ The function \code{rbinom2.or} generates data coming from a bivariate binary response model. The data might be fitted with the \pkg{VGAM} family function \code{\link{binom2.or}}. The function \code{dbinom2.or} does not really compute the density (because that does not make sense here) but rather returns the four joint probabilities. } \value{ The function \code{rbinom2.or} returns either a 2 or 4 column matrix of 1s and 0s, depending on the argument \code{twoCols}. The function \code{dbinom2.or} returns a 4 column matrix of joint probabilities; each row adds up to unity. } \author{ T. W. Yee } \seealso{ \code{\link{binom2.or}}. } \examples{ nn <- 2000 # Example 1 ymat <- rbinom2.or(n = nn, mu1 = 0.8, oratio = exp(2), exch = TRUE) (mytab <- table(ymat[, 1], ymat[, 2], dnn = c("Y1", "Y2"))) (myor <- mytab["0","0"] * mytab["1","1"] / (mytab["1","0"] * mytab["0","1"])) fit <- vglm(ymat ~ 1, binom2.or(exch = TRUE)) coef(fit, matrix = TRUE) bdata <- data.frame(x2 = sort(runif(nn))) # Example 2 bdata <- transform(bdata, mu1 = logit(-2 + 4*x2, inverse = TRUE), mu2 = logit(-1 + 3*x2, inverse = TRUE)) dmat <- with(bdata, dbinom2.or(mu1 = mu1, mu2 = mu2, oratio = exp(2))) ymat <- with(bdata, rbinom2.or(n = nn, mu1 = mu1, mu2 = mu2, oratio = exp(2))) fit2 <- vglm(ymat ~ x2, binom2.or, data = bdata) coef(fit2, matrix = TRUE) \dontrun{ matplot(with(bdata, x2), dmat, lty = 1:4, col = 1:4, type = "l", main = "Joint probabilities", ylim = 0:1, lwd = 2, ylab = "Probabilities", xlab = "x2", las = 1) legend(x = 0, y = 0.5, lty = 1:4, col = 1:4, lwd = 2, legend = c("1 = (y1=0, y2=0)", "2 = (y1=0, y2=1)", "3 = (y1=1, y2=0)", "4 = (y1=1, y2=1)")) } } \keyword{distribution} VGAM/man/binom2.or.Rd0000644000176000001440000002111512136651105013703 0ustar ripleyusers\name{binom2.or} \alias{binom2.or} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Bivariate Binary Regression with an Odds Ratio (Family Function) } \description{ Fits a Palmgren (bivariate odds-ratio model, or bivariate logistic regression) model to two binary responses. Actually, a bivariate logistic/probit/cloglog/cauchit model can be fitted. The odds ratio is used as a measure of dependency. } \usage{ binom2.or(lmu = "logit", lmu1 = lmu, lmu2 = lmu, loratio = "loge", imu1 = NULL, imu2 = NULL, ioratio = NULL, zero = 3, exchangeable = FALSE, tol = 0.001, morerobust = FALSE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lmu}{ Link function applied to the two marginal probabilities. See \code{\link{Links}} for more choices. See the note below. } \item{lmu1, lmu2}{ Link function applied to the first and second of the two marginal probabilities. } \item{loratio}{ Link function applied to the odds ratio. See \code{\link{Links}} for more choices. 
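% A quick sketch with illustrative values: the four probabilities returned by
% dbinom2.or() reproduce the inputted odds ratio (up to the tolerance tol).
% pj <- dbinom2.or(mu1 = 0.5, mu2 = 0.4, oratio = exp(1))
% log(pj[, "00"] * pj[, "11"] / (pj[, "01"] * pj[, "10"]))  # approximately 1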
} \item{imu1, imu2, ioratio}{ Optional initial values for the marginal probabilities and odds ratio. See \code{\link{CommonVGAMffArguments}} for more details. In general good initial values are often required so use these arguments if convergence failure occurs. } \item{zero}{ Which linear/additive predictor is modelled as an intercept only? A \code{NULL} means none. } \item{exchangeable}{ Logical. If \code{TRUE}, the two marginal probabilities are constrained to be equal. } \item{tol}{ Tolerance for testing independence. Should be some small positive numerical value. } \item{morerobust}{ Logical. If \code{TRUE} then some measures are taken to compute the derivatives and working weights more robustly, i.e., in an attempt to avoid numerical problems. Currently this feature is not debugged if set \code{TRUE}. } } \details{ Also known informally as the \emph{Palmgren model}, the bivariate logistic model is a full-likelihood based model defined as two logistic regressions plus \code{log(oratio) = eta3} where \code{eta3} is the third linear/additive predictor relating the odds ratio to explanatory variables. Explicitly, the default model is \deqn{logit[P(Y_j=1)] = \eta_j,\ \ \ j=1,2}{% logit[P(Y_j=1)] = eta_j,\ \ \ j=1,2} for the marginals, and \deqn{\log[P(Y_{00}=1) P(Y_{11}=1) / (P(Y_{01}=1) P(Y_{10}=1))] = \eta_3,}{% log[P(Y_{00}=1) P(Y_{11}=1) / (P(Y_{01}=1) P(Y_{10}=1))] = eta_3,} specifies the dependency between the two responses. Here, the responses equal 1 for a success and a 0 for a failure, and the odds ratio is often written \eqn{\psi=p_{00}p_{11}/(p_{10}p_{01})}{psi=p00 p11 / (p10 p01)}. The model is fitted by maximum likelihood estimation since the full likelihood is specified. The two binary responses are independent if and only if the odds ratio is unity, or equivalently, the log odds ratio is zero. Fisher scoring is implemented. The default models \eqn{\eta_3}{eta3} as a single parameter only, i.e., an intercept-only model, but this can be circumvented by setting \code{zero = NULL} in order to model the odds ratio as a function of all the explanatory variables. The function \code{binom2.or()} can handle other probability link functions such as \code{\link{probit}}, \code{\link{cloglog}} and \code{\link{cauchit}} links as well, so is quite general. In fact, the two marginal probabilities can each have a different link function. A similar model is the \emph{bivariate probit model} (\code{\link{binom2.rho}}), which is based on a standard bivariate normal distribution, but the bivariate probit model is less interpretable and flexible. The \code{exchangeable} argument should be used when the error structure is exchangeable, e.g., with eyes or ears data. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. When fitted, the \code{fitted.values} slot of the object contains the four joint probabilities, labelled as \eqn{(Y_1,Y_2)}{(Y1,Y2)} = (0,0), (0,1), (1,0), (1,1), respectively. These estimated probabilities should be extracted with the \code{fitted} generic function. } \references{ McCullagh, P. and Nelder, J. A. (1989) \emph{Generalized Linear Models}, 2nd ed. London: Chapman & Hall. le Cessie, S. and van Houwelingen, J. C. (1994) Logistic regression for correlated binary data. \emph{Applied Statistics}, \bold{43}, 95--108. Palmgren, J. (1989) \emph{Regression Models for Bivariate Binary Responses}. Technical Report no. 
101, Department of Biostatistics, University of Washington, Seattle. Yee, T. W. and Dirnbock, T. (2009) Models for analysing species' presence/absence data at two time points. Journal of Theoretical Biology, \bold{259}(4), 684--694. Documentation accompanying the \pkg{VGAM} package at \url{http://www.stat.auckland.ac.nz/~yee} contains further information and examples. } \author{ Thomas W. Yee } \note{ At present we call \code{\link{binom2.or}} families a \emph{bivariate odds-ratio model}. The response should be either a 4-column matrix of counts (whose columns correspond to \eqn{(Y_1,Y_2)}{(Y1,Y2)} = (0,0), (0,1), (1,0), (1,1) respectively), or a two-column matrix where each column has two distinct values, or a factor with four levels. The function \code{\link{rbinom2.or}} may be used to generate such data. Successful convergence requires at least one case of each of the four possible outcomes. By default, a constant odds ratio is fitted because \code{zero = 3}. Set \code{zero = NULL} if you want the odds ratio to be modelled as a function of the explanatory variables; however, numerical problems are more likely to occur. The argument \code{lmu}, which is actually redundant, is used for convenience and for upward compatibility: specifying \code{lmu} only means the link function will be applied to \code{lmu1} and \code{lmu2}. Users who want a different link function for each of the two marginal probabilities should use the \code{lmu1} and \code{lmu2} arguments, and the argument \code{lmu} is then ignored. It doesn't make sense to specify \code{exchangeable = TRUE} and have different link functions for the two marginal probabilities. Regarding Yee and Dirnbock (2009), the \code{xij} (see \code{\link{vglm.control}}) argument enables environmental variables with different values at the two time points to be entered into an exchangeable \code{\link{binom2.or}} model. See the author's webpage for sample code. } \seealso{ \code{\link{rbinom2.or}}, \code{\link{binom2.rho}}, \code{\link{loglinb2}}, \code{\link{zipebcom}}, \code{\link{coalminers}}, \code{\link{binomialff}}, \code{\link{logit}}, \code{\link{probit}}, \code{\link{cloglog}}, \code{\link{cauchit}}. 
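% A hedged sketch of extracting the estimated odds ratio from a fit like the
% one in the examples below; it assumes the default zero = 3 so that eta3,
% the log odds ratio, is an intercept-only linear/additive predictor.
% coalminers <- transform(coalminers, Age = (age - 42) / 5)
% fit <- vglm(cbind(nBnW, nBW, BnW, BW) ~ Age, binom2.or, data = coalminers)
% exp(unique(predict(fit)[, 3]))  # the fitted (constant) odds ratio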
} \examples{ # Fit the model in Table 6.7 in McCullagh and Nelder (1989) coalminers <- transform(coalminers, Age = (age - 42) / 5) fit <- vglm(cbind(nBnW,nBW,BnW,BW) ~ Age, binom2.or(zero = NULL), coalminers) fitted(fit) summary(fit) coef(fit, matrix = TRUE) c(weights(fit, type = "prior")) * fitted(fit) # Table 6.8 \dontrun{ with(coalminers, matplot(Age, fitted(fit), type = "l", las = 1, xlab = "(age - 42) / 5", lwd = 2)) with(coalminers, matpoints(Age, depvar(fit), col=1:4)) legend(x = -4, y = 0.5, lty = 1:4, col = 1:4, lwd = 2, legend=c("1 = (Breathlessness=0, Wheeze=0)", "2 = (Breathlessness=0, Wheeze=1)", "3 = (Breathlessness=1, Wheeze=0)", "4 = (Breathlessness=1, Wheeze=1)")) } # Another model: pet ownership \dontrun{ require(VGAMdata) # More homogeneous: petdata <- subset(xs.nz, ethnic == "0" & age < 70 & sex == "M") petdata <- na.omit(petdata[, c("cat", "dog", "age")]) summary(petdata) with(petdata, table(cat, dog)) # Can compute the odds ratio fit <- vgam(cbind((1-cat)*(1-dog), (1-cat)*dog, cat *(1-dog), cat *dog) ~ s(age, df = 5), binom2.or(zero = 3), data = petdata, trace = TRUE) colSums(depvar(fit)) coef(fit, matrix = TRUE) } \dontrun{ # Plot the estimated probabilities ooo <- order(with(petdata, age)) matplot(with(petdata, age)[ooo], fitted(fit)[ooo, ], type = "l", xlab = "Age", ylab = "Probability", main = "Pet ownership", ylim = c(0, max(fitted(fit))), las = 1, lwd = 1.5) legend("topleft", col=1:4, lty = 1:4, leg = c("no cat or dog ", "dog only", "cat only", "cat and dog"), lwd = 1.5) } } \keyword{models} \keyword{regression} VGAM/man/bilogistic4.Rd0000644000176000001440000001040512136651105014312 0ustar ripleyusers\name{bilogistic4} \alias{bilogistic4} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Bivariate Logistic Distribution Family Function } \description{ Estimates the four parameters of the bivariate logistic distribution by maximum likelihood estimation. } \usage{ bilogistic4(llocation = "identity", lscale = "loge", iloc1 = NULL, iscale1 = NULL, iloc2 = NULL, iscale2 = NULL, imethod = 1, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{llocation}{ Link function applied to both location parameters \eqn{l_1}{l1} and \eqn{l_2}{l2}. See \code{\link{Links}} for more choices. } \item{lscale}{ Parameter link function applied to both (positive) scale parameters \eqn{s_1}{s1} and \eqn{s_2}{s2}. See \code{\link{Links}} for more choices. } \item{iloc1, iloc2}{ Initial values for the location parameters. By default, initial values are chosen internally using \code{imethod}. Assigning values here will override the argument \code{imethod}. } \item{iscale1, iscale2}{ Initial values for the scale parameters. By default, initial values are chosen internally using \code{imethod}. Assigning values here will override the argument \code{imethod}. } \item{imethod}{ An integer with value \code{1} or \code{2} which specifies the initialization method. If failure to converge occurs try the other value. } \item{zero}{ An integer-valued vector specifying which linear/additive predictors are modelled as intercepts only. The default is none of them. If used, choose values from the set \{1,2,3,4\}. 
} } \details{ The four-parameter bivariate logistic distribution has a density that can be written as \deqn{f(y_1,y_2;l_1,s_1,l_2,s_2) = 2 \frac{\exp[-(y_1-l_1)/s_1 - (y_2-l_2)/s_2]}{ s_1 s_2 \left( 1 + \exp[-(y_1-l_1)/s_1] + \exp[-(y_2-l_2)/s_2] \right)^3}}{% f(y1,y2;l1,s1,l2,s2) = 2 * exp[-(y1-l1)/s1 - (y1-l1)/s1] / [s1 * s2 * ( 1 + exp[-(y1-l1)/s1] + exp[-(y2-l2)/s2] )^3] } where \eqn{s_1>0}{s1>0} \eqn{s_2>0}{s2>0} are the scale parameters, and \eqn{l_1}{l1} and \eqn{l_2}{l2} are the location parameters. Each of the two responses are unbounded, i.e., \eqn{-\infty 0}. The shape parameters are positive, and here, \eqn{B} is the beta function. The mean of \eqn{Y} is \eqn{shape1 / (shape2-1)} provided \eqn{shape2>1}. If \eqn{Y} has a \eqn{Beta(shape1,shape2)} distribution then \eqn{Y/(1-Y)} and \eqn{(1-Y)/Y} have a \eqn{Betaprime(shape1,shape2)} and \eqn{Betaprime(shape2,shape1)} distribution respectively. Also, if \eqn{Y_1}{Y1} has a \eqn{gamma(shape1)} distribution and \eqn{Y_2}{Y2} has a \eqn{gamma(shape2)} distribution then \eqn{Y_1/Y_2}{Y1/Y2} has a \eqn{Betaprime(shape1,shape2)} distribution. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, \code{\link{rrvglm}} and \code{\link{vgam}}. } %% zz not sure about the JKB reference. \references{ Johnson, N. L. and Kotz, S. and Balakrishnan, N. (1995) Chapter 25 of: \emph{Continuous Univariate Distributions}, 2nd edition, Volume 2, New York: Wiley. Documentation accompanying the \pkg{VGAM} package at \url{http://www.stat.auckland.ac.nz/~yee} contains further information and examples. } \author{ Thomas W. Yee } \note{ The response must have positive values only. The beta-prime distribution is also known as the \emph{beta distribution of the second kind} or the \emph{inverted beta distribution}. } \seealso{ \code{\link{betaff}}. } \examples{ nn <- 1000 bdata <- data.frame(shape1 = exp(1), shape2 = exp(3)) bdata <- transform(bdata, yb = rbeta(nn, shape1, shape2)) bdata <- transform(bdata, y1 = (1-yb)/yb, y2 = yb/(1-yb), y3 = rgamma(nn, exp(3)) / rgamma(nn, exp(2))) fit1 <- vglm(y1 ~ 1, betaprime, bdata, trace = TRUE) coef(fit1, matrix = TRUE) fit2 <- vglm(y2 ~ 1, betaprime, bdata, trace = TRUE) coef(fit2, matrix = TRUE) fit3 <- vglm(y3 ~ 1, betaprime, bdata, trace = TRUE) coef(fit3, matrix = TRUE) # Compare the fitted values with(bdata, mean(y3)) head(fitted(fit3)) Coef(fit3) # Useful for intercept-only models } \keyword{models} \keyword{regression} VGAM/man/betanormUC.Rd0000644000176000001440000000522112136651105014135 0ustar ripleyusers\name{Betanorm} \alias{Betanorm} \alias{dbetanorm} \alias{pbetanorm} \alias{qbetanorm} \alias{rbetanorm} \title{The Beta-Normal Distribution} \description{ Density, distribution function, quantile function and random generation for the univariate beta-normal distribution. } \usage{ dbetanorm(x, shape1, shape2, mean = 0, sd = 1, log = FALSE) pbetanorm(q, shape1, shape2, mean = 0, sd = 1, lower.tail = TRUE, log.p = FALSE) qbetanorm(p, shape1, shape2, mean = 0, sd = 1) rbetanorm(n, shape1, shape2, mean = 0, sd = 1) } \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. Must be a positive integer of length 1.} \item{shape1, shape2}{ the two (positive) shape parameters of the standard beta distribution. They are called \code{a} and \code{b} respectively in \code{\link[base:Special]{beta}}. 
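% A minimal consistency sketch, assuming only the d/p/q/r signatures in the
% usage block above:
% shape1 <- 0.1; shape2 <- 4
% qq <- qbetanorm(0.75, shape1, shape2)
% pbetanorm(qq, shape1, shape2)                # recovers 0.75
% mean(rbetanorm(5000, shape1, shape2) <= qq)  # roughly 0.75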
} \item{mean, sd}{ the mean and standard deviation of the univariate normal distribution (\code{\link[stats:Normal]{Normal}}). } \item{log, log.p}{ Logical. If \code{TRUE} then all probabilities \code{p} are given as \code{log(p)}. } \item{lower.tail}{ Logical. If \code{TRUE} (the default) then the lower tail, i.e., the usual answer, is returned; if \code{FALSE} then the upper tail is returned, i.e., one minus the usual answer. } } \value{ \code{dbetanorm} gives the density, \code{pbetanorm} gives the distribution function, \code{qbetanorm} gives the quantile function, and \code{rbetanorm} generates random deviates. } \references{ pp.146--152 of Gupta, A. K. and Nadarajah, S. (2004) \emph{Handbook of Beta Distribution and Its Applications}, New York: Marcel Dekker. } \author{ T. W. Yee } \details{ The function \code{betanormal1}, the \pkg{VGAM} family function for estimating the parameters, has not yet been written.
% for the formula of the probability density function and other details.
}
%\note{
%}
%\seealso{
% zz code{link{betanormal1}}.
%}
\examples{ \dontrun{
shape1 <- 0.1; shape2 <- 4; m <- 1
x <- seq(-10, 2, len = 501)
plot(x, dbetanorm(x, shape1, shape2, mean = m), type = "l", ylim = 0:1, las = 1,
     ylab = paste("betanorm(", shape1, ", ", shape2, ", m=", m, ", sd=1)", sep = ""),
     main = "Blue is density, red is cumulative distribution function",
     sub = "Purple lines are the 10,20,...,90 percentiles", col = "blue")
lines(x, pbetanorm(x, shape1, shape2, mean = m), col = "red")
abline(h = 0)
probs <- seq(0.1, 0.9, by = 0.1)
Q <- qbetanorm(probs, shape1, shape2, mean = m)
lines(Q, dbetanorm(Q, shape1, shape2, mean = m), col = "purple", lty = 3, type = "h")
lines(Q, pbetanorm(Q, shape1, shape2, mean = m), col = "purple", lty = 3, type = "h")
abline(h = probs, col = "purple", lty = 3)
pbetanorm(Q, shape1, shape2, mean = m) - probs  # Should be all 0
} }
\keyword{distribution}
VGAM/man/betageometric.Rd0000644000176000001440000000777212136651105014725 0ustar ripleyusers\name{betageometric}
\alias{betageometric}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Beta-geometric Distribution Family Function } \description{ Maximum likelihood estimation for the beta-geometric distribution. } \usage{
betageometric(lprob = "logit", lshape = "loge",
              iprob = NULL, ishape = 0.1,
              moreSummation = c(2, 100), tolerance = 1.0e-10, zero = NULL)
}
%- maybe also 'usage' for other objects documented here.
\arguments{ \item{lprob, lshape}{ Parameter link functions applied to the parameters \eqn{p}{prob} and \eqn{\phi}{phi} (called \code{prob} and \code{shape} below). The former lies in the unit interval and the latter is positive. See \code{\link{Links}} for more choices. } \item{iprob, ishape}{ Numeric. Initial values for the two parameters. A \code{NULL} means a value is computed internally. } \item{moreSummation}{ Integer, of length 2. When computing the expected information matrix a series summation from 0 to \code{moreSummation[1]*max(y)+moreSummation[2]} is made, in which the upper limit is an approximation to infinity. Here, \code{y} is the response. } \item{tolerance}{ Positive numeric. When all terms are less than this then the series is deemed to have converged. } \item{zero}{ An integer-valued vector specifying which linear/additive predictors are modelled as intercepts only. If used, the value must be from the set \{1,2\}. } } \details{ A random variable \eqn{Y} has a 2-parameter beta-geometric distribution if \eqn{P(Y=y) = p (1-p)^y}{P(Y=y) = prob * (1-prob)^y} for \eqn{y=0,1,2,\ldots}{y=0,1,2,...} where \eqn{p}{prob} is generated from a standard beta distribution with shape parameters \code{shape1} and \code{shape2}.
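% A sketch of the mixing representation just described: simulate p from the
% beta distribution, then Y given p from the geometric.
% pvec <- rbeta(20000, shape1 = 5, shape2 = 3)
% y <- rgeom(20000, prob = pvec)  # a beta-geometric sample
% c(mean(y), 3 / (5 - 1))         # cf. E(Y) = shape2 / (shape1 - 1) below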
The parameterization here is to focus on the parameters \eqn{p}{prob} and \eqn{\phi = 1/(shape1+shape2)}{phi = 1/(shape1+shape2)}, where \eqn{\phi}{phi} is \code{shape}. The default link functions for these ensure that the appropriate range of the parameters is maintained. The mean of \eqn{Y} is \eqn{E(Y) = shape2 / (shape1-1) = (1-p) / (p-\phi)}{E(Y) = shape2 / (shape1-1) = (1-prob) / (prob-phi)}. The geometric distribution is a special case of the beta-geometric distribution with \eqn{\phi=0}{phi=0} (see \code{\link{geometric}}). However, fitting data from a geometric distribution may result in numerical problems because the estimate of \eqn{\log(\phi)}{log(phi)} will 'converge' to \code{-Inf}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Paul, S. R. (2005) Testing goodness of fit of the geometric distribution: an application to human fecundability data. \emph{Journal of Modern Applied Statistical Methods}, \bold{4}, 425--433. } \author{ T. W. Yee } \note{ The first iteration may be very slow; if practical, it is best for the \code{weights} argument of \code{\link{vglm}} etc. to be used rather than inputting a very long vector as the response, i.e., \code{vglm(y ~ 1, ..., weights=wts)} is to be preferred over \code{vglm(rep(y, wts) ~ 1, ...)}. If convergence problems occur try inputting some values of argument \code{ishape}. If an intercept-only model is fitted then the \code{misc} slot of the fitted object has list components \code{shape1} and \code{shape2}. } \seealso{ \code{\link{geometric}}, \code{\link{betaff}}, \code{\link{rbetageom}}. } \examples{ bdata <- data.frame(y = 0:11, wts = c(227,123,72,42,21,31,11,14,6,4,7,28)) fit <- vglm(y ~ 1, betageometric, bdata, weight = wts, trace = TRUE) fitg <- vglm(y ~ 1, geometric, bdata, weight = wts, trace = TRUE) coef(fit, matrix = TRUE) Coef(fit) sqrt(diag(vcov(fit, untransform = TRUE))) fit@misc$shape1 fit@misc$shape2 # Very strong evidence of a beta-geometric: pchisq(2*(logLik(fit) - logLik(fitg)), df = 1, lower.tail = FALSE) } \keyword{models} \keyword{regression} VGAM/man/betageomUC.Rd0000644000176000001440000000401612136651105014112 0ustar ripleyusers\name{Betageom} \alias{Betageom} \alias{dbetageom} \alias{pbetageom} %\alias{qbetageom} \alias{rbetageom} \title{The Beta-Geometric Distribution} \description{ Density, distribution function, and random generation for the beta-geometric distribution. } \usage{ dbetageom(x, shape1, shape2, log = FALSE) pbetageom(q, shape1, shape2, log.p = FALSE) rbetageom(n, shape1, shape2) } \arguments{ \item{x, q}{vector of quantiles.} % \item{p}{vector of probabilities.} \item{n}{number of observations. Must be a positive integer of length 1.} \item{shape1, shape2}{ the two (positive) shape parameters of the standard beta distribution. They are called \code{a} and \code{b} in \code{\link[base:Special]{beta}} respectively. } \item{log, log.p}{ Logical. If \code{TRUE} then all probabilities \code{p} are given as \code{log(p)}. } } \value{ \code{dbetageom} gives the density, \code{pbetageom} gives the distribution function, and % \code{qbetageom} gives the quantile function, and \code{rbetageom} generates random deviates. } \author{ T. W. Yee } \details{ The beta-geometric distribution is a geometric distribution whose probability of success is not a constant but it is generated from a beta distribution with parameters \code{shape1} and \code{shape2}. 
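% A hedged closed-form check (an identity implied by the mixing above, since
% P(Y > q) = E[(1-P)^(q+1)] and P ~ Beta(shape1, shape2)):
% a <- 2; b <- 3; q <- 5
% pbetageom(q, a, b)
% 1 - beta(a, b + q + 1) / beta(a, b)  # should match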
Note that the mean of this beta distribution is \code{shape1/(shape1+shape2)}, which therefore is the mean of the probability of success.
% See zz code{link{betageomzz}}, the \pkg{VGAM} family function
% for estimating the parameters,
% for the formula of the probability density function and other details.
}
\note{ \code{pbetageom} can be particularly slow. }
\seealso{ \code{\link{geometric}}, \code{\link{betaff}}, \code{\link[stats:Beta]{Beta}}. }
\examples{ \dontrun{
shape1 <- 1; shape2 <- 2; y <- 0:30
proby <- dbetageom(y, shape1, shape2, log = FALSE)
plot(y, proby, type = "h", col = "blue", ylab = "P[Y=y]", main = paste(
     "Y ~ Beta-geometric(shape1=", shape1, ", shape2=", shape2, ")", sep = ""))
sum(proby)
} }
\keyword{distribution}
VGAM/man/betaff.Rd0000644000176000001440000001141712136651105013331 0ustar ripleyusers\name{betaff}
\alias{betaff}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ The Two-parameter Beta Distribution Family Function } \description{ Estimation of the mean and precision parameters of the beta distribution. } \usage{
betaff(A = 0, B = 1, lmu = "logit", lphi = "loge",
       imu = NULL, iphi = NULL, imethod = 1, zero = NULL)
}
%- maybe also 'usage' for other objects documented here.
\arguments{ \item{A, B}{ Lower and upper limits of the distribution. The defaults correspond to the \emph{standard beta distribution} where the response lies between 0 and 1. } \item{lmu, lphi}{ Link function for the mean and precision parameters. The values \eqn{A} and \eqn{B} are extracted from the \code{min} and \code{max} arguments of \code{\link{elogit}}. Consequently, only \code{\link{elogit}} is allowed.
% See below for more details.
% See \code{\link{Links}} for more choices.
} \item{imu, iphi}{ Optional initial value for the mean and precision parameters respectively. A \code{NULL} value means a value is obtained in the \code{initialize} slot. } \item{imethod, zero}{ See \code{\link{CommonVGAMffArguments}} for more information. } } \details{ The two-parameter beta distribution can be written \eqn{f(y) =} \deqn{(y-A)^{\mu_1 \phi-1} \times (B-y)^{(1-\mu_1) \phi-1} / [beta(\mu_1 \phi,(1-\mu_1) \phi) \times (B-A)^{\phi-1}]}{%
(y-A)^(mu1*phi-1) * (B-y)^((1-mu1)*phi-1) / [beta(mu1*phi,(1-mu1)*phi) * (B-A)^(phi-1)]}
for \eqn{A < y < B}, and \eqn{beta(.,.)} is the beta function (see \code{\link[base:Special]{beta}}). The parameter \eqn{\mu_1}{mu1} satisfies \eqn{\mu_1 = (\mu - A) / (B-A)}{mu1 = (mu - A) / (B-A)} where \eqn{\mu}{mu} is the mean of \eqn{Y}. That is, \eqn{\mu_1}{mu1} is the mean of a standard beta distribution: \eqn{E(Y) = A + (B-A) \times \mu_1}{E(Y) = A + (B-A)*mu1}, and these are the fitted values of the object. Also, \eqn{\phi}{phi} is positive and \eqn{A < \mu < B}{A < mu < B}. Here, the limits \eqn{A} and \eqn{B} are \emph{known}. Another parameterization of the beta distribution involving the raw shape parameters is implemented in \code{\link{beta.ab}}. For general \eqn{A} and \eqn{B}, the variance of \eqn{Y} is \eqn{(B-A)^2 \times \mu_1 \times (1-\mu_1) / (1+\phi)}{(B-A)^2 * mu1 * (1-mu1) / (1+phi)}. Then \eqn{\phi}{phi} can be interpreted as a \emph{precision} parameter in the sense that, for fixed \eqn{\mu}{mu}, the larger the value of \eqn{\phi}{phi}, the smaller the variance of \eqn{Y}. Also, \eqn{\mu_1 = shape1/(shape1+shape2)}{mu1=shape1/(shape1+shape2)} and \eqn{\phi = shape1+shape2}{phi = shape1+shape2}. Fisher scoring is implemented.
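% A sketch of the (mu, phi) parameterization for the standard beta
% (A = 0, B = 1), using illustrative values: shape1 = mu * phi and
% shape2 = (1 - mu) * phi.
% mu <- 0.3; phi <- 8
% y <- rbeta(20000, shape1 = mu * phi, shape2 = (1 - mu) * phi)
% c(mean(y), mu)                        # E(Y) = mu
% c(var(y), mu * (1 - mu) / (1 + phi))  # variance shrinks as phi grows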
% If \eqn{A} and \eqn{B} are unknown then the \pkg{VGAM} family function
% \code{beta4()} can be used to estimate these too.
}
\value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Ferrari, S. L. P. and Cribari-Neto, F. (2004) Beta regression for modelling rates and proportions. \emph{Journal of Applied Statistics}, \bold{31}, 799--815. Documentation accompanying the \pkg{VGAM} package at \url{http://www.stat.auckland.ac.nz/~yee} contains further information and examples. } \author{ Thomas W. Yee } \note{ The response must have values in the interval (\eqn{A}, \eqn{B}). The user currently needs to manually choose \code{lmu} to match the input of arguments \code{A} and \code{B}, e.g., with \code{\link{elogit}}; see the example below. } \seealso{ \code{\link{beta.ab}}, \code{\link[stats:Beta]{Beta}}, \code{\link{genbetaII}}, \code{\link{betaII}}, \code{\link{betabinomial.ab}}, \code{\link{betageometric}}, \code{\link{betaprime}}, \code{\link{rbetageom}}, \code{\link{rbetanorm}}, \code{\link{kumar}}, \code{\link{elogit}}. } \examples{
bdata <- data.frame(y = rbeta(nn <- 1000, shape1 = exp(0), shape2 = exp(1)))
fit1 <- vglm(y ~ 1, betaff, bdata, trace = TRUE)
coef(fit1, matrix = TRUE)
Coef(fit1)  # Useful for intercept-only models

# General A and B, and with a covariate
bdata <- transform(bdata, x2 = runif(nn))
bdata <- transform(bdata, mu   = logit(0.5 - x2, inverse = TRUE),
                          prec = exp(3.0 + x2))  # prec == phi
bdata <- transform(bdata, shape2 = prec * (1 - mu), shape1 = mu * prec)
bdata <- transform(bdata, y = rbeta(nn, shape1 = shape1, shape2 = shape2))
bdata <- transform(bdata, Y = 5 + 8 * y)  # From 5 to 13, not 0 to 1
fit <- vglm(Y ~ x2, data = bdata, trace = TRUE,
            betaff(A = 5, B = 13, lmu = elogit(min = 5, max = 13)))
coef(fit, matrix = TRUE)
}
\keyword{models}
\keyword{regression}
VGAM/man/betabinomial.ab.Rd0000644000176000001440000002021412136651105015104 0ustar ripleyusers\name{betabinomial.ab}
\alias{betabinomial.ab}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Beta-binomial Distribution Family Function } \description{ Fits a beta-binomial distribution by maximum likelihood estimation. The two parameters here are the shape parameters of the underlying beta distribution. } \usage{
betabinomial.ab(lshape12 = "loge", i1 = 1, i2 = NULL, imethod = 1,
                shrinkage.init = 0.95, nsimEIM = NULL, zero = NULL)
}
%- maybe also 'usage' for other objects documented here.
\arguments{ \item{lshape12}{ Link function applied to both (positive) shape parameters of the beta distribution. See \code{\link{Links}} for more choices. } \item{i1, i2}{ Initial values for the shape parameters. The first must be positive, and is recycled to the necessary length. The second is optional. If a failure to converge occurs, try assigning a different value to \code{i1} and/or using \code{i2}. } \item{zero}{ An integer specifying which linear/additive predictor is to be modelled as an intercept only. If assigned, the single value should be either \code{1} or \code{2}. The default is to model both shape parameters as functions of the covariates. If a failure to converge occurs, try \code{zero = 2}. } \item{shrinkage.init, nsimEIM, imethod}{ See \code{\link{CommonVGAMffArguments}} for more information. The argument \code{shrinkage.init} is used only if \code{imethod = 2}.
Using the argument \code{nsimEIM} may offer large advantages for large values of \eqn{N} and/or large data sets. } } \details{ There are several parameterizations of the beta-binomial distribution. This family function directly models the two shape parameters of the associated beta distribution rather than the probability of success (however, see \bold{Note} below). The model can be written \eqn{T|P=p \sim Binomial(N,p)}{T|P=p ~ Binomial(N,p)} where \eqn{P} has a beta distribution with shape parameters \eqn{\alpha}{alpha} and \eqn{\beta}{beta}. Here, \eqn{N} is the number of trials (e.g., litter size), \eqn{T=NY} is the number of successes, and \eqn{p} is the probability of a success (e.g., a malformation). That is, \eqn{Y} is the \emph{proportion} of successes. Like \code{\link{binomialff}}, the fitted values are the estimated probability of success (i.e., \eqn{E[Y]} and not \eqn{E[T]}) and the prior weights \eqn{N} are attached separately on the object in a slot. The probability function is \deqn{P(T=t) = {N \choose t} \frac{B(\alpha+t, \beta+N-t)} {B(\alpha, \beta)}}{% P(T=t) = choose(N,t) B(alpha+t, beta+N-t) / B(alpha, beta)} where \eqn{t=0,1,\ldots,N}, and \eqn{B} is the beta function with shape parameters \eqn{\alpha}{alpha} and \eqn{\beta}{beta}. Recall \eqn{Y = T/N} is the real response being modelled. The default model is \eqn{\eta_1 = \log(\alpha)}{eta1 = log(alpha)} and \eqn{\eta_2 = \log(\beta)}{eta2 = log(beta)} because both parameters are positive. The mean (of \eqn{Y}) is \eqn{p = \mu = \alpha / (\alpha + \beta)}{p = mu = alpha / (alpha + beta)} and the variance (of \eqn{Y}) is \eqn{\mu(1-\mu)(1+(N-1)\rho)/N}{mu(1-mu)(1+(N-1)rho)/N}. Here, the correlation \eqn{\rho}{rho} is given by \eqn{1/(1 + \alpha + \beta)}{1/(1 + alpha + beta)} and is the correlation between the \eqn{N} individuals within a litter. A \emph{litter effect} is typically reflected by a positive value of \eqn{\rho}{rho}. It is known as the \emph{over-dispersion parameter}. This family function uses Fisher scoring. The two diagonal elements of the second-order expected derivatives with respect to \eqn{\alpha}{alpha} and \eqn{\beta}{beta} are computed numerically, which may fail for large \eqn{\alpha}{alpha}, \eqn{\beta}{beta}, \eqn{N} or else take a long time. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}. Suppose \code{fit} is a fitted beta-binomial model. Then \code{fit@y} (better: \code{depvar(fit)}) contains the sample proportions \eqn{y}, \code{fitted(fit)} returns estimates of \eqn{E(Y)}, and \code{weights(fit, type = "prior")} returns the number of trials \eqn{N}. } \references{ Moore, D. F. and Tsiatis, A. (1991) Robust estimation of the variance in moment methods for extra-binomial and extra-Poisson variation. \emph{Biometrics}, \bold{47}, 383--401. Prentice, R. L. (1986) Binary regression using an extended beta-binomial distribution, with discussion of correlation induced by covariate measurement errors. \emph{Journal of the American Statistical Association}, \bold{81}, 321--327. } \author{ T. W. Yee } \note{ This function processes the input in the same way as \code{\link{binomialff}}. But it does not handle the case \eqn{N=1} very well because there are two parameters to estimate, not one, for each row of the input. Cases where \eqn{N=1} can be omitted via the \code{subset} argument of \code{\link{vglm}}. 
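% A simulation sketch of the moments stated in the \details above, using
% illustrative parameter values:
% a <- exp(1); b <- exp(2); N <- 10
% y <- rbetabinom.ab(20000, size = N, shape1 = a, shape2 = b) / N
% c(mean(y), a / (a + b))  # E(Y)
% rho <- 1 / (1 + a + b)
% c(var(y), (a / (a + b)) * (b / (a + b)) * (1 + (N - 1) * rho) / N)  # Var(Y)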
Although the two linear/additive predictors given above are in terms of \eqn{\alpha}{alpha} and \eqn{\beta}{beta}, basic algebra shows that the default amounts to fitting a logit link to the probability of success; subtracting the second linear/additive predictor from the first gives that logistic regression linear/additive predictor. That is, \eqn{logit(p) = \eta_1 - \eta_2}{logit(p) = eta1 - eta2}. This is illustrated in one of the examples below. The \emph{extended} beta-binomial distribution of Prentice (1986) is currently not implemented in the \pkg{VGAM} package as it has range-restrictions for the correlation parameter that are currently too difficult to handle in this package. } \section{Warning }{ This family function is prone to numerical difficulties due to the expected information matrices not being positive-definite or ill-conditioned over some regions of the parameter space. If problems occur try setting \code{i1} to be some other positive value, using \code{i2} and/or setting \code{zero = 2}. This family function may be renamed in the future. See the warnings in \code{\link{betabinomial}}. } \seealso{ \code{\link{betabinomial}}, \code{\link{Betabinom}}, \code{\link{binomialff}}, \code{\link{betaff}}, \code{\link{dirmultinomial}}, \code{\link{lirat}}. } \examples{
# Example 1
N <- 10; s1 <- exp(1); s2 <- exp(2)
y <- rbetabinom.ab(n = 100, size = N, shape1 = s1, shape2 = s2)
fit <- vglm(cbind(y, N-y) ~ 1, betabinomial.ab, trace = TRUE)
coef(fit, matrix = TRUE)
Coef(fit)
head(fit@misc$rho)  # The correlation parameter
head(cbind(depvar(fit), weights(fit, type = "prior")))

# Example 2
fit <- vglm(cbind(R, N-R) ~ 1, betabinomial.ab, data = lirat,
            trace = TRUE, subset = N > 1)
coef(fit, matrix = TRUE)
Coef(fit)
fit@misc$rho  # The correlation parameter
t(fitted(fit))
t(depvar(fit))
t(weights(fit, type = "prior"))
# A "loge" link for the 2 shape parameters is a logistic regression:
all.equal(c(fitted(fit)),
          as.vector(logit(predict(fit)[, 1] - predict(fit)[, 2], inverse = TRUE)))

# Example 3, which is more complicated
lirat <- transform(lirat, fgrp = factor(grp))
summary(lirat)  # Only 5 litters in group 3
fit2 <- vglm(cbind(R, N-R) ~ fgrp + hb, betabinomial.ab(zero = 2),
             data = lirat, trace = TRUE, subset = N > 1)
coef(fit2, matrix = TRUE)
coef(fit2, matrix = TRUE)[, 1] - coef(fit2, matrix = TRUE)[, 2]  # logit(p)
\dontrun{ with(lirat, plot(hb[N > 1], fit2@misc$rho, xlab = "Hemoglobin",
   ylab = "Estimated rho", pch = as.character(grp[N > 1]), col = grp[N > 1])) }
\dontrun{  # cf. Figure 3 of Moore and Tsiatis (1991)
with(lirat, plot(hb, R / N, pch = as.character(grp), col = grp, las = 1,
   xlab = "Hemoglobin level", ylab = "Proportion Dead",
   main = "Fitted values (lines)"))
smalldf <- with(lirat, lirat[N > 1, ])
for(gp in 1:4) {
  xx <- with(smalldf, hb[grp == gp])
  yy <- with(smalldf, fitted(fit2)[grp == gp])
  ooo <- order(xx)
  lines(xx[ooo], yy[ooo], col = gp)
}
}
}
\keyword{models}
\keyword{regression}
VGAM/man/betabinomial.Rd0000644000176000001440000001746312136651105014537 0ustar ripleyusers\name{betabinomial}
\alias{betabinomial}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Beta-binomial Distribution Family Function } \description{ Fits a beta-binomial distribution by maximum likelihood estimation. The two parameters here are the mean and correlation coefficient. } \usage{
betabinomial(lmu = "logit", lrho = "logit", irho = NULL, imethod = 1,
             shrinkage.init = 0.95, nsimEIM = NULL, zero = 2)
}
%- maybe also 'usage' for other objects documented here.
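% A small sketch relating this (mu, rho) parameterization to the shape
% parameters of \code{\link{betabinomial.ab}}, via simple inversion of the
% formulas mu = alpha/(alpha+beta) and rho = 1/(1+alpha+beta):
% mu <- 0.4; rho <- 0.2
% shape1 <- mu * (1 - rho) / rho
% shape2 <- (1 - mu) * (1 - rho) / rho
% c(shape1 / (shape1 + shape2), 1 / (1 + shape1 + shape2))  # recovers mu, rho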
\arguments{ \item{lmu, lrho}{ Link functions applied to the two parameters. See \code{\link{Links}} for more choices. The defaults ensure the parameters remain in \eqn{(0,1)}, however, see the warning below. } \item{irho}{ Optional initial value for the correlation parameter. If given, it must be in \eqn{(0,1)}, and is recycled to the necessary length. Assign this argument a value if a convergence failure occurs. Having \code{irho = NULL} means an initial value is obtained internally, though this can give unsatisfactory results. } \item{imethod}{ An integer with value \code{1} or \code{2} or \ldots, which specifies the initialization method for \eqn{\mu}{mu}. If failure to converge occurs try another value and/or else specify a value for \code{irho}. } \item{zero}{ An integer specifying which linear/additive predictor is to be modelled as an intercept only. If assigned, the single value should be either \code{1} or \code{2}. The default is to have a single correlation parameter. To model both parameters as functions of the covariates assign \code{zero = NULL}. See \code{\link{CommonVGAMffArguments}} for more information. } \item{shrinkage.init, nsimEIM}{ See \code{\link{CommonVGAMffArguments}} for more information. The argument \code{shrinkage.init} is used only if \code{imethod = 2}. Using the argument \code{nsimEIM} may offer large advantages for large values of \eqn{N} and/or large data sets. } } \details{ There are several parameterizations of the beta-binomial distribution. This family function directly models the mean (i.e., the probability of success) and the correlation parameter. The model can be written \eqn{T|P=p \sim Binomial(N,p)}{T|P=p ~ Binomial(N,p)} where \eqn{P} has a beta distribution with shape parameters \eqn{\alpha}{alpha} and \eqn{\beta}{beta}. Here, \eqn{N} is the number of trials (e.g., litter size), \eqn{T=NY} is the number of successes, and \eqn{p} is the probability of a success (e.g., a malformation). That is, \eqn{Y} is the \emph{proportion} of successes. Like \code{\link{binomialff}}, the fitted values are the estimated probability of success (i.e., \eqn{E[Y]} and not \eqn{E[T]}) and the prior weights \eqn{N} are attached separately on the object in a slot. The probability function is \deqn{P(T=t) = {N \choose t} \frac{B(\alpha+t, \beta+N-t)} {B(\alpha, \beta)}}{%
P(T=t) = choose(N,t) B(alpha+t, beta+N-t) / B(alpha, beta)}
where \eqn{t=0,1,\ldots,N}, and \eqn{B} is the \code{\link[base:Special]{beta}} function with shape parameters \eqn{\alpha}{alpha} and \eqn{\beta}{beta}. Recall \eqn{Y = T/N} is the real response being modelled. The default model is \eqn{\eta_1 = logit(\mu)}{eta1 = logit(mu)} and \eqn{\eta_2 = logit(\rho)}{eta2 = logit(rho)} because both parameters lie between 0 and 1. The mean (of \eqn{Y}) is \eqn{p = \mu = \alpha / (\alpha + \beta)}{p = mu = alpha / (alpha + beta)} and the variance (of \eqn{Y}) is \eqn{\mu(1-\mu)(1+(N-1)\rho)/N}{mu(1-mu)(1+(N-1)rho)/N}. Here, the correlation \eqn{\rho}{rho} is given by \eqn{1/(1 + \alpha + \beta)}{1/(1 + alpha + beta)} and is the correlation between the \eqn{N} individuals within a litter. A \emph{litter effect} is typically reflected by a positive value of \eqn{\rho}{rho}. It is known as the \emph{over-dispersion parameter}. This family function uses Fisher scoring. Elements of the second-order expected derivatives with respect to \eqn{\alpha}{alpha} and \eqn{\beta}{beta} are computed numerically, which may fail for large \eqn{\alpha}{alpha}, \eqn{\beta}{beta}, \eqn{N} or else take a long time.
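% A sketch of the litter effect described above, with illustrative values:
% rho > 0 inflates the variance of the sample proportion relative to the
% ordinary binomial.
% y1 <- rbetabinom(20000, size = 10, prob = 0.5, rho = 0.3)
% y0 <- rbinom(20000, size = 10, prob = 0.5)
% c(var(y1 / 10), var(y0 / 10))  # roughly 0.0925 versus 0.025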
} \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}. Suppose \code{fit} is a fitted beta-binomial model. Then \code{fit@y} contains the sample proportions \eqn{y}, \code{fitted(fit)} returns estimates of \eqn{E(Y)}, and \code{weights(fit, type="prior")} returns the number of trials \eqn{N}. } \references{ Moore, D. F. and Tsiatis, A. (1991) Robust estimation of the variance in moment methods for extra-binomial and extra-Poisson variation. \emph{Biometrics}, \bold{47}, 383--401. Prentice, R. L. (1986) Binary regression using an extended beta-binomial distribution, with discussion of correlation induced by covariate measurement errors. \emph{Journal of the American Statistical Association}, \bold{81}, 321--327. } \author{ T. W. Yee } \note{ This function processes the input in the same way as \code{\link{binomialff}}. But it does not handle the case \eqn{N=1} very well because there are two parameters to estimate, not one, for each row of the input. Cases where \eqn{N=1} can be omitted via the \code{subset} argument of \code{\link{vglm}}. The \emph{extended} beta-binomial distribution of Prentice (1986) is currently not implemented in the \pkg{VGAM} package as it has range-restrictions for the correlation parameter that are currently too difficult to handle in this package. However, try \code{lrho = "rhobit"}. } \section{Warning }{ If the estimated rho parameter is close to zero then it pays to try \code{lrho = "rhobit"}. One day this may become the default link function. This family function is prone to numerical difficulties due to the expected information matrices not being positive-definite or ill-conditioned over some regions of the parameter space. If problems occur try setting \code{irho} to some numerical value, \code{nsimEIM = 100}, say, or else use \code{etastart} argument of \code{\link{vglm}}, etc. } \seealso{ \code{\link{betabinomial.ab}}, \code{\link{Betabinom}}, \code{\link{binomialff}}, \code{\link{betaff}}, \code{\link{dirmultinomial}}, \code{\link{lirat}}. } \examples{ # Example 1 bdata <- data.frame(N = 10, mu = 0.5, rho = 0.8) bdata <- transform(bdata, y = rbetabinom(n=100, size = N, prob = mu, rho = rho)) fit <- vglm(cbind(y, N-y) ~ 1, betabinomial, bdata, trace = TRUE) coef(fit, matrix = TRUE) Coef(fit) head(cbind(depvar(fit), weights(fit, type = "prior"))) # Example 2 fit <- vglm(cbind(R, N-R) ~ 1, betabinomial, lirat, trace = TRUE, subset = N > 1) coef(fit, matrix = TRUE) Coef(fit) t(fitted(fit)) t(depvar(fit)) t(weights(fit, type = "prior")) # Example 3, which is more complicated lirat <- transform(lirat, fgrp = factor(grp)) summary(lirat) # Only 5 litters in group 3 fit2 <- vglm(cbind(R, N-R) ~ fgrp + hb, betabinomial(zero = 2), data = lirat, trace = TRUE, subset = N > 1) coef(fit2, matrix = TRUE) \dontrun{ with(lirat, plot(hb[N > 1], fit2@misc$rho, xlab = "Hemoglobin", ylab = "Estimated rho", pch = as.character(grp[N > 1]), col = grp[N > 1])) } \dontrun{ # cf. 
Figure 3 of Moore and Tsiatis (1991) with(lirat, plot(hb, R / N, pch = as.character(grp), col = grp, las = 1, xlab = "Hemoglobin level", ylab = "Proportion Dead", main = "Fitted values (lines)")) smalldf <- with(lirat, lirat[N > 1, ]) for(gp in 1:4) { xx <- with(smalldf, hb[grp == gp]) yy <- with(smalldf, fitted(fit2)[grp == gp]) ooo <- order(xx) lines(xx[ooo], yy[ooo], col = gp) } } } \keyword{models} \keyword{regression} VGAM/man/betabinomUC.Rd0000644000176000001440000001021612136651105014266 0ustar ripleyusers\name{Betabinom} \alias{Betabinom} \alias{dbetabinom} \alias{pbetabinom} %\alias{qbetabinom} \alias{rbetabinom} \alias{dbetabinom.ab} \alias{pbetabinom.ab} %\alias{qbetabinom.ab} \alias{rbetabinom.ab} \title{The Beta-Binomial Distribution} \description{ Density, distribution function, and random generation for the beta-binomial distribution. } \usage{ dbetabinom(x, size, prob, rho = 0, log = FALSE) pbetabinom(q, size, prob, rho, log.p = FALSE) rbetabinom(n, size, prob, rho = 0) dbetabinom.ab(x, size, shape1, shape2, log = FALSE, .dontuse.prob = NULL) pbetabinom.ab(q, size, shape1, shape2, log.p = FALSE) rbetabinom.ab(n, size, shape1, shape2, .dontuse.prob = NULL) } \arguments{ \item{x, q}{vector of quantiles.} % \item{p}{vector of probabilities.} \item{size}{number of trials.} \item{n}{number of observations. Must be a positive integer of length 1.} \item{prob}{ the probability of success \eqn{\mu}{mu}. Must be in the unit closed interval \eqn{[0,1]}. } \item{rho}{ the correlation parameter \eqn{\rho}{rho}. Usually must be in the unit open interval \eqn{(0,1)}, however, the value 0 is sometimes supported (if so then it corresponds to the usual binomial distribution). } \item{shape1, shape2}{ the two (positive) shape parameters of the standard beta distribution. They are called \code{a} and \code{b} in \code{\link[base:Special]{beta}} respectively. } \item{log, log.p}{ Logical. If \code{TRUE} then all probabilities \code{p} are given as \code{log(p)}. } \item{.dontuse.prob}{ An argument that should be ignored and unused. } } \value{ \code{dbetabinom} and \code{dbetabinom.ab} give the density, \code{pbetabinom} and \code{pbetabinom.ab} give the distribution function, and % \code{qbetabinom} and \code{qbetabinom.ab} gives the quantile function, and \code{rbetabinom} and \code{rbetabinom.ab} generate random deviates. } \author{ T. W. Yee } \details{ The beta-binomial distribution is a binomial distribution whose probability of success is not a constant but it is generated from a beta distribution with parameters \code{shape1} and \code{shape2}. Note that the mean of this beta distribution is \code{mu = shape1/(shape1+shape2)}, which therefore is the mean or the probability of success. See \code{\link{betabinomial}} and \code{\link{betabinomial.ab}}, the \pkg{VGAM} family functions for estimating the parameters, for the formula of the probability density function and other details. } \note{ \code{pbetabinom} and \code{pbetabinom.ab} can be particularly slow. The functions here ending in \code{.ab} are called from those functions which don't. The simple transformations \eqn{\mu=\alpha / (\alpha + \beta)}{mu=alpha/(alpha+beta)} and \eqn{\rho=1/(1 + \alpha + \beta)}{rho=1/(1+alpha+beta)} are used, where \eqn{\alpha}{alpha} and \eqn{\beta}{beta} are the two shape parameters. } \seealso{ \code{\link{betabinomial}}, \code{\link{betabinomial.ab}}. 
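% A quick check of the transformations just described: the two density
% functions agree once (prob, rho) and (shape1, shape2) are matched up.
% a <- 2; b <- 3
% dbetabinom.ab(0:5, size = 5, shape1 = a, shape2 = b)
% dbetabinom(0:5, size = 5, prob = a / (a + b), rho = 1 / (1 + a + b))  # same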
} \examples{ set.seed(1); rbetabinom(10, 100, prob = 0.5) set.seed(1); rbinom(10, 100, prob = 0.5) # The same since rho = 0 \dontrun{ N <- 9; xx <- 0:N; s1 <- 2; s2 <- 3 dy <- dbetabinom.ab(xx, size = N, shape1 = s1, shape2 = s2) barplot(rbind(dy, dbinom(xx, size = N, prob = s1 / (s1+s2))), beside = TRUE, col = c("blue","green"), las = 1, main = paste("Beta-binomial (size=",N,", shape1=", s1, ", shape2=", s2, ") (blue) vs\n", " Binomial(size=", N, ", prob=", s1/(s1+s2), ") (green)", sep = ""), names.arg = as.character(xx), cex.main = 0.8) sum(dy * xx) # Check expected values are equal sum(dbinom(xx, size = N, prob = s1 / (s1+s2))*xx) cumsum(dy) - pbetabinom.ab(xx, N, shape1 = s1, shape2 = s2) y <- rbetabinom.ab(n = 10000, size = N, shape1 = s1, shape2 = s2) ty <- table(y) barplot(rbind(dy, ty / sum(ty)), beside = TRUE, col = c("blue","red"), las = 1, main = paste("Beta-binomial (size=",N,", shape1=",s1, ", shape2=",s2,") (blue) vs\n", " Random generated beta-binomial(size=", N, ", prob=", s1/(s1+s2), ") (red)", sep = ""), cex.main = 0.8, names.arg = as.character(xx)) } } \keyword{distribution} VGAM/man/betaII.Rd0000644000176000001440000000554312136651105013242 0ustar ripleyusers\name{betaII} \alias{betaII} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Beta Distribution of the Second Kind } \description{ Maximum likelihood estimation of the 3-parameter beta II distribution. } \usage{ betaII(lscale = "loge", lshape2.p = "loge", lshape3.q = "loge", iscale = NULL, ishape2.p = 2, ishape3.q = 2, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lscale, lshape2.p, lshape3.q}{ Parameter link functions applied to the (positive) parameters \code{scale}, \code{p} and \code{q}. See \code{\link{Links}} for more choices. } \item{iscale, ishape2.p, ishape3.q}{ Optional initial values for \code{scale}, \code{p} and \code{q}. } \item{zero}{ An integer-valued vector specifying which linear/additive predictors are modelled as intercepts only. Here, the values must be from the set \{1,2,3\} which correspond to \code{scale}, \code{p}, \code{q}, respectively. } } \details{ The 3-parameter beta II is the 4-parameter \emph{generalized} beta II distribution with shape parameter \eqn{a=1}. It is also known as the Pearson VI distribution. Other distributions which are special cases of the 3-parameter beta II include the Lomax (\eqn{p=1}) and inverse Lomax (\eqn{q=1}). More details can be found in Kleiber and Kotz (2003). The beta II distribution has density \deqn{f(y) = y^{p-1} / [b^p B(p,q) \{1 + y/b\}^{p+q}]}{% f(y) = y^(p-1) / [b^p B(p,q) (1 + y/b)^(p+q)]} for \eqn{b > 0}, \eqn{p > 0}, \eqn{q > 0}, \eqn{y \geq 0}{y >= 0}. Here, \eqn{b} is the scale parameter \code{scale}, and the others are shape parameters. The mean is \deqn{E(Y) = b \, \Gamma(p + 1) \, \Gamma(q - 1) / (\Gamma(p) \, \Gamma(q))}{% E(Y) = b gamma(p + 1) gamma(q - 1) / ( gamma(p) gamma(q))} provided \eqn{q > 1}; these are returned as the fitted values. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Kleiber, C. and Kotz, S. (2003) \emph{Statistical Size Distributions in Economics and Actuarial Sciences}, Hoboken, NJ, USA: Wiley-Interscience. } \author{ T. W. Yee } \note{ See the note in \code{\link{genbetaII}}. 
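% A worked simplification of the mean formula in the \details above: since
% Gamma(p+1) = p Gamma(p) and Gamma(q) = (q-1) Gamma(q-1), the mean reduces
% to b * p / (q - 1). Illustrative values:
% bb <- exp(2); p <- 2; q <- 3
% bb * gamma(p + 1) * gamma(q - 1) / (gamma(p) * gamma(q))
% bb * p / (q - 1)  # identical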
} \seealso{ \code{\link{betaff}}, \code{\link{genbetaII}}, \code{\link{dagum}}, \code{\link{sinmad}}, \code{\link{fisk}}, \code{\link{invlomax}}, \code{\link{lomax}}, \code{\link{paralogistic}}, \code{\link{invparalogistic}}. } \examples{ bdata <- data.frame(y = rsinmad(2000, shape1.a = 1, exp(2), exp(1))) # Not genuine data! fit <- vglm(y ~ 1, betaII, bdata, trace = TRUE) fit <- vglm(y ~ 1, betaII(ishape2.p = 0.7, ishape3.q = 0.7), bdata, trace = TRUE) coef(fit, matrix = TRUE) Coef(fit) summary(fit) } \keyword{models} \keyword{regression} VGAM/man/beta.ab.Rd0000644000176000001440000001121712136651105013374 0ustar ripleyusers\name{beta.ab} \alias{beta.ab} %- Also NEED an '\alias' for EACH other topic documented here. \title{ The Two-parameter Beta Distribution Family Function } \description{ Estimation of the shape parameters of the two-parameter beta distribution. } \usage{ beta.ab(lshape1 = "loge", lshape2 = "loge", i1 = NULL, i2 = NULL, trim = 0.05, A = 0, B = 1, parallel = FALSE, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lshape1, lshape2, i1, i2}{ Details at \code{\link{CommonVGAMffArguments}}. See \code{\link{Links}} for more choices. } \item{trim}{ An argument which is fed into \code{mean()}; it is the fraction (0 to 0.5) of observations to be trimmed from each end of the response \code{y} before the mean is computed. This is used when computing initial values, and guards against outliers. } \item{A, B}{ Lower and upper limits of the distribution. The defaults correspond to the \emph{standard beta distribution} where the response lies between 0 and 1. } \item{parallel, zero}{ See \code{\link{CommonVGAMffArguments}} for more information. } } \details{ The two-parameter beta distribution is given by \eqn{f(y) =} \deqn{(y-A)^{shape1-1} \times (B-y)^{shape2-1} / [Beta(shape1,shape2) \times (B-A)^{shape1+shape2-1}]}{% (y-A)^(shape1-1) * (B-y)^(shape2-1) / [Beta(shape1,shape2) * (B-A)^(shape1+shape2-1)]} for \eqn{A < y < B}, and \eqn{Beta(.,.)} is the beta function (see \code{\link[base:Special]{beta}}). The shape parameters are positive, and here, the limits \eqn{A} and \eqn{B} are known. The mean of \eqn{Y} is \eqn{E(Y) = A + (B-A) \times shape1 / (shape1 + shape2)}{E(Y) = A + (B-A) * shape1 / (shape1 + shape2)}, and these are the fitted values of the object. For the standard beta distribution the variance of \eqn{Y} is \eqn{shape1 \times shape2 / [(1+shape1+shape2) \times (shape1+shape2)^2]}{ shape1 * shape2 / ((1+shape1+shape2) * (shape1+shape2)^2)}. If \eqn{\sigma^2= 1 / (1+shape1+shape2)} then the variance of \eqn{Y} can be written \eqn{\sigma^2 \mu (1-\mu)}{mu*(1-mu)*sigma^2} where \eqn{\mu=shape1 / (shape1 + shape2)}{mu=shape1 / (shape1 + shape2)} is the mean of \eqn{Y}. Another parameterization of the beta distribution involving the mean and a precision parameter is implemented in \code{\link{betaff}}. % 20120525: % Regularity conditions not satisfied; support depends on the parameters: % If \eqn{A} and \eqn{B} are unknown, then the \pkg{VGAM} family function % \code{beta4()} can be used to estimate these too. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, \code{\link{rrvglm}} and \code{\link{vgam}}. } \references{ Johnson, N. L. and Kotz, S. and Balakrishnan, N. (1995) Chapter 25 of: \emph{Continuous Univariate Distributions}, 2nd edition, Volume 2, New York: Wiley. Gupta, A. K. and Nadarajah, S. 
(2004) \emph{Handbook of Beta Distribution and Its Applications}, New York: Marcel Dekker. %Evans, M., Hastings, N. and Peacock, B. (2000) %\emph{Statistical Distributions}, %New York: Wiley-Interscience, Third edition. Documentation accompanying the \pkg{VGAM} package at \url{http://www.stat.auckland.ac.nz/~yee} contains further information and examples. } \author{ Thomas W. Yee } \note{ The response must have values in the interval (\eqn{A}, \eqn{B}). \pkg{VGAM} 0.7-4 and prior called this function \code{\link{betaff}}. } \seealso{ \code{\link{betaff}}, \code{\link[stats:Beta]{Beta}}, \code{\link{genbetaII}}, \code{\link{betaII}}, \code{\link{betabinomial.ab}}, \code{\link{betageometric}}, \code{\link{betaprime}}, \code{\link{rbetageom}}, \code{\link{rbetanorm}}, \code{\link{kumar}}. } \examples{ bdata <- data.frame(y = rbeta(n = 1000, shape1 = exp(0), shape2 = exp(1))) fit <- vglm(y ~ 1, beta.ab(lshape1 = "identity", lshape2 = "identity"), data = bdata, trace = TRUE, crit = "coef") fit <- vglm(y ~ 1, beta.ab, bdata, trace = TRUE, crit = "coef") coef(fit, matrix = TRUE) Coef(fit) # Useful for intercept-only models bdata <- transform(bdata, Y = 5 + 8 * y) # From 5 to 13, not 0 to 1 fit <- vglm(Y ~ 1, beta.ab(A = 5, B = 13), bdata, trace = TRUE) Coef(fit) c(meanY = with(bdata, mean(Y)), head(fitted(fit),2)) } \keyword{models} \keyword{regression} % 3/1/06; this works well: % fit <- vglm(y~1, beta.abqn(link = logoff(offset = 1), tr = TRUE, crit = "c") % 3/1/06; this does not work so well: % it <- vglm(y~1, beta.abqn(link = logoff(offset = 0), tr = TRUE, crit = "c") % Interesting!! VGAM/man/beniniUC.Rd0000644000176000001440000000405212136651105013573 0ustar ripleyusers\name{Benini} \alias{Benini} \alias{dbenini} \alias{pbenini} \alias{qbenini} \alias{rbenini} \title{The Benini Distribution} \description{ Density, distribution function, quantile function and random generation for the Benini distribution with parameter \code{shape}. } \usage{ dbenini(x, shape, y0, log = FALSE) pbenini(q, shape, y0) qbenini(p, shape, y0) rbenini(n, shape, y0) } \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. Must be a positive integer of length 1.} \item{shape}{the shape parameter \eqn{b}.} \item{y0}{the scale parameter \eqn{y_0}{y0}.} \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the density is returned. } } \value{ \code{dbenini} gives the density, \code{pbenini} gives the distribution function, \code{qbenini} gives the quantile function, and \code{rbenini} generates random deviates. } \references{ Kleiber, C. and Kotz, S. (2003) \emph{Statistical Size Distributions in Economics and Actuarial Sciences}, Hoboken, NJ, USA: Wiley-Interscience. } \author{ T. W. Yee } \details{ See \code{\link{benini}}, the \pkg{VGAM} family function for estimating the parameter \eqn{b} by maximum likelihood estimation, for the formula of the probability density function and other details. } %\note{ % %} \seealso{ \code{\link{benini}}. 
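% A quick inversion check of the quantile and distribution functions
% (a sketch; the parameter values are arbitrary):
% pp <- seq(0.1, 0.9, by = 0.2)
% max(abs(pbenini(qbenini(pp, shape = exp(1), y0 = 1),
%                 shape = exp(1), y0 = 1) - pp))  # Should be 0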
} \examples{
\dontrun{ y0 <- 1; shape <- exp(1)
xx <- seq(0.0, 4, len = 101)
plot(xx, dbenini(xx, y0 = y0, shape = shape), type = "l", col = "blue",
     main = "Blue is density, orange is cumulative distribution function",
     sub = "Purple lines are the 10,20,...,90 percentiles",
     ylim = 0:1, las = 1, ylab = "", xlab = "x")
abline(h = 0, col = "blue", lty = 2)
lines(xx, pbenini(xx, y0 = y0, shape = shape), col = "orange")
probs <- seq(0.1, 0.9, by = 0.1)
Q <- qbenini(probs, y0 = y0, shape = shape)
lines(Q, dbenini(Q, y0 = y0, shape = shape),
      col = "purple", lty = 3, type = "h")
pbenini(Q, y0 = y0, shape = shape) - probs  # Should be all zero
} }
\keyword{distribution}
VGAM/man/benini.Rd0000644000176000001440000000523112136651105013343 0ustar ripleyusers\name{benini}
\alias{benini}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{Benini Distribution Family Function }
\description{
Estimating the 1-parameter Benini distribution by maximum likelihood estimation.
}
\usage{
benini(y0 = stop("argument 'y0' must be specified"), lshape = "loge",
       ishape = NULL, imethod = 1, zero = NULL)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{y0}{ Positive scale parameter. }
\item{lshape}{ Parameter link function and extra argument of the parameter \eqn{b}, which is the shape parameter. See \code{\link{Links}} for more choices. A log link is the default because \eqn{b} is positive. }
\item{ishape}{ Optional initial value for the shape parameter. The default is to compute the value internally. }
\item{imethod, zero}{ Details at \code{\link{CommonVGAMffArguments}}. }
}
\details{
The Benini distribution has a probability density function that can be written
\deqn{f(y) = 2 b \exp(-b[(\log(y/y_0))^2]) \log(y/y_0) / y }{%
      f(y) = 2*b*exp(-b * [(log(y/y0))^2]) * log(y/y0) / y}
for \eqn{0 < y_0 < y}{0 < y0 < y}, and \eqn{b > 0}.
The cumulative distribution function for \eqn{Y} is
\deqn{F(y) = 1 - \exp(-b[(\log(y/y_0))^2]).}{%
      F(y) = 1 - exp(-b * [(log(y / y0))^2]). }
Here, Newton-Raphson and Fisher scoring coincide.
The median of \eqn{Y} is now returned as the fitted values.

This \pkg{VGAM} family function can handle multiple responses, which are inputted as a matrix.

On fitting, the \code{extra} slot has a component called \code{y0} which contains the value of the \code{y0} argument.
}
%\section{Warning}{
%
%
% The median of \eqn{Y}, which are returned as the fitted values,
% may be incorrect.
%
%
%}
\value{
An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}.
}
\references{
Kleiber, C. and Kotz, S. (2003)
\emph{Statistical Size Distributions in Economics and Actuarial Sciences},
Hoboken, NJ, USA: Wiley-Interscience.
}
\author{ T. W. Yee }
\note{
Yet to do: the 2-parameter Benini distribution estimates \eqn{y_0}{y0} as well, and the 3-parameter Benini distribution estimates another shape parameter \eqn{a}{a} too.
}
\seealso{
\code{\link{Benini}}.
} \examples{
y0 <- 1; nn <- 3000
bdata <- data.frame(y = rbenini(nn, y0 = y0, shape = exp(2)))
fit <- vglm(y ~ 1, benini(y0 = y0), bdata, trace = TRUE, crit = "coef")
coef(fit, matrix = TRUE)
Coef(fit)
fit@extra$y0
c(head(fitted(fit), 1), with(bdata, median(y)))  # Should be equal
}
\keyword{models}
\keyword{regression}
VGAM/man/benfUC.Rd0000644000176000001440000000664312136651105013251 0ustar ripleyusers\name{Benford}
\alias{Benford}
\alias{dbenf}
\alias{pbenf}
\alias{qbenf}
\alias{rbenf}
\title{ Benford's Distribution }
\description{
Density, distribution function, quantile function, and random generation for Benford's distribution.
}
\usage{
dbenf(x, ndigits = 1, log = FALSE)
pbenf(q, ndigits = 1, log.p = FALSE)
qbenf(p, ndigits = 1)
rbenf(n, ndigits = 1)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{x, q}{ Vector of quantiles. See \code{ndigits}. }
\item{p}{vector of probabilities.}
\item{n}{number of observations. A single positive integer. Else if \code{length(n) > 1} then the length is taken to be the number required. }
\item{ndigits}{ Number of leading digits, either 1 or 2. If 1 then the support of the distribution is \{1,\ldots,9\}, else \{10,\ldots,99\}. }
\item{log, log.p}{ Logical. If \code{log.p = TRUE} then all probabilities \code{p} are given as \code{log(p)}. }
}
\details{
Benford's Law (aka \emph{the significant-digit law}) is the empirical observation that in many naturally occurring tables of numerical data, the leading significant (nonzero) digit is not uniformly distributed in \eqn{\{1,2,\ldots,9\}}{1:9}. Instead, the leading significant digit (\eqn{=D}, say) obeys the law
\deqn{P(D=d) = \log_{10} \left( 1 + \frac1d \right)}{%
      P(D=d) = log10(1 + 1/d)}
for \eqn{d=1,\ldots,9}. This means the probability the first significant digit is 1 is approximately \eqn{0.301}, etc.

Benford's Law was apparently first discovered in 1881 by the astronomer and mathematician S. Newcomb. It stemmed from the observation that the pages of a book of logarithms were dirtiest at the beginning and progressively cleaner throughout. In 1938, a General Electric physicist called F. Benford rediscovered the law from the same observation. Over several years he collected data from sources as diverse as atomic weights, baseball statistics, numerical data from \emph{Reader's Digest}, and drainage areas of rivers.

Applications of Benford's Law have been as diverse as fraud detection in accounting and the design of computers.
}
\value{
\code{dbenf} gives the density,
\code{pbenf} gives the distribution function,
\code{qbenf} gives the quantile function, and
\code{rbenf} generates random deviates.
}
\references{
Benford, F. (1938)
The Law of Anomalous Numbers.
\emph{Proceedings of the American Philosophical Society},
\bold{78}, 551--572.

Newcomb, S. (1881)
Note on the Frequency of Use of the Different Digits in Natural Numbers.
\emph{American Journal of Mathematics},
\bold{4}, 39--40.
}
\author{ T. W. Yee }
%\note{
% Currently only the leading digit is handled.
% The first two leading digits would be the next simple extension.
%
%}
%\seealso{
% \code{\link{logff}}.
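% The first-digit law in the Details can be evaluated directly in base R:
% round(log10(1 + 1 / (1:9)), 3)  # P(D = d); approximately 0.301 for d = 1
% sum(log10(1 + 1 / (1:9)))       # The nine probabilities sum to 1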
%}
\examples{
dbenf(x <- c(0:10, NA, NaN, -Inf, Inf))
pbenf(x)

\dontrun{ xx <- 1:9  # par(mfrow = c(2, 1))
barplot(dbenf(xx), col = "lightblue", las = 1,
        xlab = "Leading digit", ylab = "Probability",
        names.arg = as.character(xx),
        main = "Benford's distribution", cex.main = 0.8)

hist(rbenf(n = 1000), border = "blue", prob = TRUE,
     main = "1000 random variates from Benford's distribution",
     xlab = "Leading digit", sub = "Red is the true probability",
     breaks = 0:9 + 0.5, ylim = c(0, 0.35), xlim = c(0, 10.0))
lines(xx, dbenf(xx), col = "red", type = "h")
points(xx, dbenf(xx), col = "red")
}
}
\keyword{distribution}
VGAM/man/backPain.Rd0000644000176000001440000000254212136651105013611 0ustar ripleyusers\name{backPain}
\alias{backPain}
\docType{data}
\title{ Data on Back Pain Prognosis, from Anderson (1984) }
\description{
Data from a study of patients suffering from back pain. Prognostic variables were recorded at presentation and progress was categorised three weeks after treatment.
}
\usage{data(backPain)}
\format{
A data frame with 101 observations on the following 4 variables.
\describe{
\item{x1}{length of previous attack.}
\item{x2}{pain change.}
\item{x3}{lordosis.}
\item{pain}{an ordered factor describing the progress of each patient with levels \code{worse} < \code{same} < \code{slight.improvement} < \code{moderate.improvement} < \code{marked.improvement} < \code{complete.relief}. }
}
}
\source{
\url{http://ideas.repec.org/c/boc/bocode/s419001.html}

The data set and this help file were copied from \pkg{gnm} so that a vignette in \pkg{VGAM} could be run; the analysis is described in Yee (2010).
}
\references{
Anderson, J. A. (1984)
Regression and Ordered Categorical Variables.
\emph{J. R. Statist. Soc. B}, \bold{46(1)}, 1--30.

Yee, T. W. (2010)
The \pkg{VGAM} package for categorical data analysis.
\emph{Journal of Statistical Software},
\bold{32}, 1--34.
\url{http://www.jstatsoft.org/v32/i10/}.
}
\examples{
summary(backPain)
}
\keyword{datasets}
% set.seed(1)
% data(backPain)
VGAM/man/auuc.Rd0000644000176000001440000000200012136651105013033 0ustar ripleyusers\name{auuc}
\alias{auuc}
\docType{data}
\title{ Auckland University Undergraduate Counts Data}
\description{
Undergraduate student enrolments at the University of Auckland in 1990.
}
\usage{data(auuc)}
\format{
A data frame with 4 observations on the following 5 variables.
\describe{
\item{Commerce}{a numeric vector of counts.}
\item{Arts}{a numeric vector of counts.}
\item{SciEng}{a numeric vector of counts.}
\item{Law}{a numeric vector of counts.}
\item{Medicine}{a numeric vector of counts.}
}
}
\details{
Each student is cross-classified by their colleges (Science and Engineering have been combined) and the socio-economic status (SES) of their fathers (1 = highest, down to 4 = lowest).
}
\source{
Dr Tony Morrison.
}
\references{
Wild, C. J. and Seber, G. A. F. (2000)
\emph{Chance Encounters: A First Course in Data Analysis and Inference},
New York: Wiley.
}
\examples{
auuc
\dontrun{
round(fitted(grc(auuc)))
round(fitted(grc(auuc, Rank = 2)))
}
}
\keyword{datasets}
VGAM/man/amlpoisson.Rd0000644000176000001440000001162412136651105014266 0ustar ripleyusers\name{amlpoisson}
\alias{amlpoisson}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Poisson Regression by Asymmetric Maximum Likelihood Estimation }
\description{
Poisson quantile regression estimated by maximizing an asymmetric likelihood function.
} \usage{ amlpoisson(w.aml = 1, parallel = FALSE, imethod = 1, digw = 4, link = "loge") } %- maybe also 'usage' for other objects documented here. \arguments{ \item{w.aml}{ Numeric, a vector of positive constants controlling the percentiles. The larger the value the larger the fitted percentile value (the proportion of points below the ``w-regression plane''). The default value of unity results in the ordinary maximum likelihood (MLE) solution. } \item{parallel}{ If \code{w.aml} has more than one value then this argument allows the quantile curves to differ by the same amount as a function of the covariates. Setting this to be \code{TRUE} should force the quantile curves to not cross (although they may not cross anyway). See \code{\link{CommonVGAMffArguments}} for more information. } \item{imethod}{ Integer, either 1 or 2 or 3. Initialization method. Choose another value if convergence fails. } \item{digw }{ Passed into \code{\link[base]{Round}} as the \code{digits} argument for the \code{w.aml} values; used cosmetically for labelling. } \item{link}{ See \code{\link{poissonff}}. } } \details{ This method was proposed by Efron (1992) and full details can be obtained there. % Equation numbers below refer to that article. The model is essentially a Poisson regression model (see \code{\link{poissonff}}) but the usual deviance is replaced by an asymmetric squared error loss function; it is multiplied by \eqn{w.aml} for positive residuals. The solution is the set of regression coefficients that minimize the sum of these deviance-type values over the data set, weighted by the \code{weights} argument (so that it can contain frequencies). Newton-Raphson estimation is used here. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Efron, B. (1991) Regression percentiles using asymmetric squared error loss. \emph{Statistica Sinica}, \bold{1}, 93--125. Efron, B. (1992) Poisson overdispersion estimates based on the method of asymmetric maximum likelihood. \emph{Journal of the American Statistical Association}, \bold{87}, 98--107. Koenker, R. and Bassett, G. (1978) Regression quantiles. \emph{Econometrica}, \bold{46}, 33--50. Newey, W. K. and Powell, J. L. (1987) Asymmetric least squares estimation and testing. \emph{Econometrica}, \bold{55}, 819--847. } \author{ Thomas W. Yee } \note{ On fitting, the \code{extra} slot has list components \code{"w.aml"} and \code{"percentile"}. The latter is the percent of observations below the ``w-regression plane'', which is the fitted values. Also, the individual deviance values corresponding to each element of the argument \code{w.aml} is stored in the \code{extra} slot. For \code{amlpoisson} objects, methods functions for the generic functions \code{qtplot} and \code{cdf} have not been written yet. About the jargon, Newey and Powell (1987) used the name \emph{expectiles} for regression surfaces obtained by asymmetric least squares. This was deliberate so as to distinguish them from the original \emph{regression quantiles} of Koenker and Bassett (1978). Efron (1991) and Efron (1992) use the general name \emph{regression percentile} to apply to all forms of asymmetric fitting. Although the asymmetric maximum likelihood method very nearly gives regression percentiles in the strictest sense for the normal and Poisson cases, the phrase \emph{quantile regression} is used loosely in this \pkg{VGAM} documentation. 
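% For a fitted model, the attained percentiles described here can be
% inspected via the extra slot, e.g. (hypothetical fitted object 'fit'):
% fit@extra$percentile  # Percent of observations below each w-regression plane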
In this documentation the word \emph{quantile} can often be interchangeably replaced by \emph{expectile} (things are informal here). } \section{Warning }{ If \code{w.aml} has more than one value then the value returned by \code{deviance} is the sum of all the (weighted) deviances taken over all the \code{w.aml} values. See Equation (1.6) of Efron (1992). } \seealso{ \code{\link{amlnormal}}, \code{\link{amlbinomial}}, \code{\link{alaplace1}}. } \examples{ set.seed(1234) mydat <- data.frame(x = sort(runif(nn <- 200))) mydat <- transform(mydat, y = rpois(nn, exp(0 - sin(8*x)))) (fit <- vgam(y ~ s(x), fam = amlpoisson(w.aml = c(0.02, 0.2, 1, 5, 50)), mydat, trace = TRUE)) fit@extra \dontrun{ # Quantile plot with(mydat, plot(x, jitter(y), col = "blue", las = 1, main = paste(paste(round(fit@extra$percentile, dig = 1), collapse = ", "), "percentile-expectile curves"))) with(mydat, matlines(x, fitted(fit), lwd = 2)) } } \keyword{models} \keyword{regression} VGAM/man/amlnormal.Rd0000644000176000001440000001424112136651105014062 0ustar ripleyusers\name{amlnormal} \alias{amlnormal} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Asymmetric Least Squares Quantile Regression } \description{ Asymmetric least squares, a special case of maximizing an asymmetric likelihood function of a normal distribution. This allows for expectile/quantile regression using asymmetric least squares error loss. } \usage{ amlnormal(w.aml = 1, parallel = FALSE, lexpectile = "identity", iexpectile = NULL, imethod = 1, digw = 4) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{w.aml}{ Numeric, a vector of positive constants controlling the percentiles. The larger the value the larger the fitted percentile value (the proportion of points below the ``w-regression plane''). The default value of unity results in the ordinary least squares (OLS) solution. } \item{parallel}{ If \code{w.aml} has more than one value then this argument allows the quantile curves to differ by the same amount as a function of the covariates. Setting this to be \code{TRUE} should force the quantile curves to not cross (although they may not cross anyway). See \code{\link{CommonVGAMffArguments}} for more information. } \item{lexpectile, iexpectile}{ See \code{\link{CommonVGAMffArguments}} for more information. } \item{imethod}{ Integer, either 1 or 2 or 3. Initialization method. Choose another value if convergence fails. } \item{digw }{ Passed into \code{\link[base]{Round}} as the \code{digits} argument for the \code{w.aml} values; used cosmetically for labelling. } } \details{ This is an implementation of Efron (1991) and full details can be obtained there. Equation numbers below refer to that article. The model is essentially a linear model (see \code{\link[stats]{lm}}), however, the asymmetric squared error loss function for a residual \eqn{r} is \eqn{r^2} if \eqn{r \leq 0}{r <= 0} and \eqn{w r^2}{w*r^2} if \eqn{r > 0}. The solution is the set of regression coefficients that minimize the sum of these over the data set, weighted by the \code{weights} argument (so that it can contain frequencies). Newton-Raphson estimation is used here. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Efron, B. (1991) Regression percentiles using asymmetric squared error loss. \emph{Statistica Sinica}, \bold{1}, 93--125. } \author{ Thomas W. 
Yee } \note{ On fitting, the \code{extra} slot has list components \code{"w.aml"} and \code{"percentile"}. The latter is the percent of observations below the ``w-regression plane'', which is the fitted values. One difficulty is finding the \code{w.aml} value giving a specified percentile. One solution is to fit the model within a root finding function such as \code{\link[stats]{uniroot}}; see the example below. For \code{amlnormal} objects, methods functions for the generic functions \code{qtplot} and \code{cdf} have not been written yet. See the note in \code{\link{amlpoisson}} on the jargon, including \emph{expectiles} and \emph{regression quantiles}. The \code{deviance} slot computes the total asymmetric squared error loss (2.5). If \code{w.aml} has more than one value then the value returned by the slot is the sum taken over all the \code{w.aml} values. This \pkg{VGAM} family function could well be renamed \code{amlnormal()} instead, given the other function names \code{\link{amlpoisson}}, \code{\link{amlbinomial}}, etc. In this documentation the word \emph{quantile} can often be interchangeably replaced by \emph{expectile} (things are informal here). } %\section{Warning }{ % The \code{loglikelihood} slot currently does not return the % log-likelihood but negative the total asymmetric squared error % loss (2.5). % If \code{w} has more than one value then the value returned by % \code{loglikelihood} is the sum taken over all the \code{w} values. %} \seealso{ \code{\link{amlpoisson}}, \code{\link{amlbinomial}}, \code{\link{amlexponential}}, \code{\link{bmi.nz}}, \code{\link{alaplace1}}, \code{\link{denorm}}, \code{\link{lms.bcn}} and similar variants are alternative methods for quantile regression. } \examples{ \dontrun{ # Example 1 ooo <- with(bmi.nz, order(age)) bmi.nz <- bmi.nz[ooo,] # Sort by age (fit <- vglm(BMI ~ bs(age), fam = amlnormal(w.aml = 0.1), bmi.nz)) fit@extra # Gives the w value and the percentile coef(fit, matrix = TRUE) # Quantile plot with(bmi.nz, plot(age, BMI, col = "blue", main = paste(round(fit@extra$percentile, dig = 1), "expectile-percentile curve"))) with(bmi.nz, lines(age, c(fitted(fit)), col = "black")) # Example 2 # Find the w values that give the 25, 50 and 75 percentiles findw <- function(w, percentile = 50) { fit2 <- vglm(BMI ~ bs(age), fam = amlnormal(w = w), data = bmi.nz) fit2@extra$percentile - percentile } # Quantile plot with(bmi.nz, plot(age, BMI, col = "blue", las = 1, main = "25, 50 and 75 expectile-percentile curves")) for(myp in c(25, 50, 75)) { # Note: uniroot() can only find one root at a time bestw <- uniroot(f = findw, interval = c(1/10^4, 10^4), percentile = myp) fit2 <- vglm(BMI ~ bs(age), fam = amlnormal(w = bestw$root), data = bmi.nz) with(bmi.nz, lines(age, c(fitted(fit2)), col = "red")) } # Example 3; this is Example 1 but with smoothing splines and # a vector w and a parallelism assumption. 
ooo <- with(bmi.nz, order(age)) bmi.nz <- bmi.nz[ooo,] # Sort by age fit3 <- vgam(BMI ~ s(age, df = 4), bmi.nz, trace = TRUE, fam = amlnormal(w = c(0.1, 1, 10), parallel = TRUE)) fit3@extra # The w values, percentiles and weighted deviances # The linear components of the fit; not for human consumption: coef(fit3, matrix = TRUE) # Quantile plot with(bmi.nz, plot(age, BMI, col="blue", main = paste(paste(round(fit3@extra$percentile, dig = 1), collapse = ", "), "expectile-percentile curves"))) with(bmi.nz, matlines(age, fitted(fit3), col = 1:fit3@extra$M, lwd = 2)) with(bmi.nz, lines(age, c(fitted(fit )), col = "black")) # For comparison } } \keyword{models} \keyword{regression} VGAM/man/amlexponential.Rd0000644000176000001440000001206512136651105015122 0ustar ripleyusers\name{amlexponential} \alias{amlexponential} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Exponential Regression by Asymmetric Maximum Likelihood Estimation } \description{ Exponential expectile regression estimated by maximizing an asymmetric likelihood function. } \usage{ amlexponential(w.aml = 1, parallel = FALSE, imethod = 1, digw = 4, link = "loge") } %- maybe also 'usage' for other objects documented here. \arguments{ \item{w.aml}{ Numeric, a vector of positive constants controlling the expectiles. The larger the value the larger the fitted expectile value (the proportion of points below the ``w-regression plane''). The default value of unity results in the ordinary maximum likelihood (MLE) solution. } \item{parallel}{ If \code{w.aml} has more than one value then this argument allows the quantile curves to differ by the same amount as a function of the covariates. Setting this to be \code{TRUE} should force the quantile curves to not cross (although they may not cross anyway). See \code{\link{CommonVGAMffArguments}} for more information. } \item{imethod}{ Integer, either 1 or 2 or 3. Initialization method. Choose another value if convergence fails. } \item{digw }{ Passed into \code{\link[base]{Round}} as the \code{digits} argument for the \code{w.aml} values; used cosmetically for labelling. } \item{link}{ See \code{\link{exponential}} and the warning below. } } \details{ The general methodology behind this \pkg{VGAM} family function is given in Efron (1992) and full details can be obtained there. % Equation numbers below refer to that article. This model is essentially an exponential regression model (see \code{\link{exponential}}) but the usual deviance is replaced by an asymmetric squared error loss function; it is multiplied by \eqn{w.aml} for positive residuals. The solution is the set of regression coefficients that minimize the sum of these deviance-type values over the data set, weighted by the \code{weights} argument (so that it can contain frequencies). Newton-Raphson estimation is used here. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Efron, B. (1992) Poisson overdispersion estimates based on the method of asymmetric maximum likelihood. \emph{Journal of the American Statistical Association}, \bold{87}, 98--107. } \author{ Thomas W. Yee } \note{ On fitting, the \code{extra} slot has list components \code{"w.aml"} and \code{"percentile"}. The latter is the percent of observations below the ``w-regression plane'', which is the fitted values. 
Also, the individual deviance values corresponding to each element of the argument \code{w.aml} is stored in the \code{extra} slot. For \code{amlexponential} objects, methods functions for the generic functions \code{qtplot} and \code{cdf} have not been written yet. See \code{\link{amlpoisson}} about comments on the jargon, e.g., \emph{expectiles} etc. In this documentation the word \emph{quantile} can often be interchangeably replaced by \emph{expectile} (things are informal here). } \section{Warning }{ Note that the \code{link} argument of \code{\link{exponential}} and \code{\link{amlexponential}} are currently different: one is the rate parameter and the other is the mean (expectile) parameter. If \code{w.aml} has more than one value then the value returned by \code{deviance} is the sum of all the (weighted) deviances taken over all the \code{w.aml} values. See Equation (1.6) of Efron (1992). } \seealso{ \code{\link{exponential}}, \code{\link{amlbinomial}}, \code{\link{amlpoisson}}, \code{\link{amlnormal}}, \code{\link{alaplace1}}, \code{\link{lms.bcg}}, \code{\link{deexp}}. } \examples{ nn <- 2000 mydat <- data.frame(x = seq(0, 1, length = nn)) mydat <- transform(mydat, mu = loge(-0 + 1.5*x + 0.2*x^2, inverse = TRUE)) mydat <- transform(mydat, mu = loge(0 - sin(8*x), inverse = TRUE)) mydat <- transform(mydat, y = rexp(nn, rate = 1/mu)) (fit <- vgam(y ~ s(x,df = 5), amlexponential(w = c(0.001, 0.1, 0.5, 5, 60)), mydat, trace = TRUE)) fit@extra \dontrun{ # These plots are against the sqrt scale (to increase clarity) par(mfrow = c(1,2)) # Quantile plot with(mydat, plot(x, sqrt(y), col = "blue", las = 1, main = paste(paste(round(fit@extra$percentile, dig = 1), collapse = ", "), "percentile-expectile curves"))) with(mydat, matlines(x, sqrt(fitted(fit)), lwd = 2, col = "blue", lty = 1)) # Compare the fitted expectiles with the quantiles with(mydat, plot(x, sqrt(y), col = "blue", las = 1, main = paste(paste(round(fit@extra$percentile, dig = 1), collapse = ", "), "percentile curves are orange"))) with(mydat, matlines(x, sqrt(fitted(fit)), lwd = 2, col = "blue", lty = 1)) for(ii in fit@extra$percentile) with(mydat, matlines(x, sqrt(qexp(p = ii/100, rate = 1/mu)), col = "orange")) } } \keyword{models} \keyword{regression} VGAM/man/amlbinomial.Rd0000644000176000001440000001131512136651105014363 0ustar ripleyusers\name{amlbinomial} \alias{amlbinomial} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Binomial Logistic Regression by Asymmetric Maximum Likelihood Estimation } \description{ Binomial quantile regression estimated by maximizing an asymmetric likelihood function. } \usage{ amlbinomial(w.aml = 1, parallel = FALSE, digw = 4, link = "logit") } %- maybe also 'usage' for other objects documented here. \arguments{ \item{w.aml}{ Numeric, a vector of positive constants controlling the percentiles. The larger the value the larger the fitted percentile value (the proportion of points below the ``w-regression plane''). The default value of unity results in the ordinary maximum likelihood (MLE) solution. } \item{parallel}{ If \code{w.aml} has more than one value then this argument allows the quantile curves to differ by the same amount as a function of the covariates. Setting this to be \code{TRUE} should force the quantile curves to not cross (although they may not cross anyway). See \code{\link{CommonVGAMffArguments}} for more information. 
} \item{digw }{ Passed into \code{\link[base]{Round}} as the \code{digits} argument for the \code{w.aml} values; used cosmetically for labelling. } \item{link}{ See \code{\link{binomialff}}. } } \details{ The general methodology behind this \pkg{VGAM} family function is given in Efron (1992) and full details can be obtained there. % Equation numbers below refer to that article. This model is essentially a logistic regression model (see \code{\link{binomialff}}) but the usual deviance is replaced by an asymmetric squared error loss function; it is multiplied by \eqn{w.aml} for positive residuals. The solution is the set of regression coefficients that minimize the sum of these deviance-type values over the data set, weighted by the \code{weights} argument (so that it can contain frequencies). Newton-Raphson estimation is used here. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Efron, B. (1992) Poisson overdispersion estimates based on the method of asymmetric maximum likelihood. \emph{Journal of the American Statistical Association}, \bold{87}, 98--107. } \author{ Thomas W. Yee } \note{ On fitting, the \code{extra} slot has list components \code{"w.aml"} and \code{"percentile"}. The latter is the percent of observations below the ``w-regression plane'', which is the fitted values. Also, the individual deviance values corresponding to each element of the argument \code{w.aml} is stored in the \code{extra} slot. For \code{amlbinomial} objects, methods functions for the generic functions \code{qtplot} and \code{cdf} have not been written yet. See \code{\link{amlpoisson}} about comments on the jargon, e.g., \emph{expectiles} etc. In this documentation the word \emph{quantile} can often be interchangeably replaced by \emph{expectile} (things are informal here). } \section{Warning }{ If \code{w.aml} has more than one value then the value returned by \code{deviance} is the sum of all the (weighted) deviances taken over all the \code{w.aml} values. See Equation (1.6) of Efron (1992). } \seealso{ \code{\link{amlpoisson}}, \code{\link{amlexponential}}, \code{\link{amlnormal}}, \code{\link{alaplace1}}, \code{\link{denorm}}. 
} \examples{
# Example: binomial data with lots of trials per observation
set.seed(1234)
sizevec <- rep(100, length = (nn <- 200))
mydat <- data.frame(x = sort(runif(nn)))
mydat <- transform(mydat, prob = logit(-0 + 2.5*x + x^2, inverse = TRUE))
mydat <- transform(mydat, y = rbinom(nn, size = sizevec, prob = prob))
(fit <- vgam(cbind(y, sizevec - y) ~ s(x, df = 3),
             amlbinomial(w = c(0.01, 0.2, 1, 5, 60)), mydat, trace = TRUE))
fit@extra

\dontrun{ par(mfrow = c(1,2))
# Quantile plot
with(mydat, plot(x, jitter(y), col = "blue", las = 1,
     main = paste(paste(round(fit@extra$percentile, dig = 1), collapse = ", "),
                  "percentile-expectile curves")))
with(mydat, matlines(x, 100 * fitted(fit), lwd = 2, col = "blue", lty = 1))

# Compare the fitted expectiles with the quantiles
with(mydat, plot(x, jitter(y), col = "blue", las = 1,
     main = paste(paste(round(fit@extra$percentile, dig = 1), collapse = ", "),
                  "percentile curves are red")))
with(mydat, matlines(x, 100 * fitted(fit), lwd = 2, col = "blue", lty = 1))
for(ii in fit@extra$percentile)
  with(mydat, matlines(x, 100 *
       qbinom(p = ii/100, size = sizevec, prob = prob) / sizevec,
       col = "red", lwd = 2, lty = 1))
}
}
\keyword{models}
\keyword{regression}
VGAM/man/amhUC.Rd0000644000176000001440000000271512136651105013100 0ustar ripleyusers\name{Amh}
\alias{Amh}
\alias{damh}
\alias{pamh}
\alias{ramh}
\title{The Bivariate Ali-Mikhail-Haq Distribution}
\description{
Density, distribution function, and random generation for the (one-parameter) bivariate Ali-Mikhail-Haq distribution.
}
\usage{
damh(x1, x2, alpha, log = FALSE)
pamh(q1, q2, alpha)
ramh(n, alpha)
}
\arguments{
\item{x1, x2, q1, q2}{vector of quantiles.}
\item{n}{number of observations. Must be a positive integer of length 1.}
\item{alpha}{the association parameter.}
\item{log}{ Logical. If \code{TRUE} then the logarithm is returned. }
}
\value{
\code{damh} gives the density,
\code{pamh} gives the distribution function, and
\code{ramh} generates random deviates (a two-column matrix).
}
%\references{
%
%}
\author{ T. W. Yee and C. S. Chee }
\details{
See \code{\link{amh}}, the \pkg{VGAM} family function for estimating the parameter by maximum likelihood estimation, for the formula of the cumulative distribution function and other details.
}
%\note{
%}
\seealso{
\code{\link{amh}}.
}
\examples{
x <- seq(0, 1, len = (N <- 101))
alpha <- 0.7
ox <- expand.grid(x, x)
z <- damh(ox[, 1], ox[, 2], alpha = alpha)
\dontrun{
contour(x, x, matrix(z, N, N), col = "blue")
z <- pamh(ox[, 1], ox[, 2], alpha = alpha)
contour(x, x, matrix(z, N, N), col = "blue")
plot(r <- ramh(n = 1000, alpha = alpha), col = "blue")
par(mfrow = c(1, 2))
hist(r[, 1])  # Should be uniform
hist(r[, 2])  # Should be uniform
}
}
\keyword{distribution}
VGAM/man/amh.Rd0000644000176000001440000000554512136651105012654 0ustar ripleyusers\name{amh}
\alias{amh}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Ali-Mikhail-Haq Distribution Family Function }
\description{
Estimate the association parameter of Ali-Mikhail-Haq's bivariate distribution by maximum likelihood estimation.
}
\usage{
amh(lalpha = "rhobit", ialpha = NULL, imethod = 1, nsimEIM = 250)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{lalpha}{ Link function applied to the association parameter \eqn{\alpha}{alpha}, which is real and \eqn{-1 < \alpha < 1}{-1 < alpha < 1}. See \code{\link{Links}} for more choices. }
\item{ialpha}{ Numeric. Optional initial value for \eqn{\alpha}{alpha}. By default, an initial value is chosen internally.
If a convergence failure occurs try assigning a different value. Assigning a value will override the argument \code{imethod}. }
\item{imethod}{ An integer with value \code{1} or \code{2} which specifies the initialization method. If failure to converge occurs try the other value, or else specify a value for \code{ialpha}. }
\item{nsimEIM}{ See \code{\link{CommonVGAMffArguments}} for more information. }
}
\details{
The cumulative distribution function is
\deqn{P(Y_1 \leq y_1, Y_2 \leq y_2) = y_1 y_2 / ( 1 - \alpha (1 - y_1) (1 - y_2) ) }{%
      P(Y1 <= y1, Y2 <= y2) = y1 * y2 / ( 1 - alpha * (1 - y1) * (1 - y2) ) }
for \eqn{-1 < \alpha < 1}{-1 < alpha < 1}. The support of the distribution is the unit square. The marginal distributions are the standard uniform distributions. When \eqn{\alpha = 0}{alpha = 0} the random variables are independent.

% A variant of Newton-Raphson is used, which only seems to work for an
% intercept model.
% It is a very good idea to set \code{trace = TRUE}.
% This \pkg{VGAM} family function is prone to numerical difficulties.
}
\value{
An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}.
}
\references{
%Hutchinson, T. P. and Lai, C. D. (1990)
%\emph{Continuous Bivariate Distributions, Emphasising Applications},
%Adelaide, South Australia: Rumsby Scientific Publishing.

Balakrishnan, N. and Lai, C.-D. (2009)
\emph{Continuous Bivariate Distributions},
2nd ed.
New York: Springer.
}
\author{ T. W. Yee and C. S. Chee }
\note{
The response must be a two-column matrix. Currently, the fitted value is a matrix with two columns and values equal to 0.5. This is because each marginal distribution corresponds to a standard uniform distribution.
}
\seealso{
\code{\link{ramh}},
\code{\link{fgm}},
\code{\link{gumbelIbiv}}.
}
\examples{
ymat <- ramh(1000, alpha = rhobit(2, inverse = TRUE))
fit <- vglm(ymat ~ 1, amh, trace = TRUE)
coef(fit, matrix = TRUE)
Coef(fit)
}
\keyword{models}
\keyword{regression}
VGAM/man/alaplaceUC.Rd0000644000176000001440000000650312136651105014074 0ustar ripleyusers\name{alaplaceUC}
\alias{dalap}
\alias{palap}
\alias{qalap}
\alias{ralap}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ The Asymmetric Laplace Distribution }
\description{
Density, distribution function, quantile function and random generation for the 3-parameter asymmetric Laplace distribution with location parameter \code{location}, scale parameter \code{scale}, and asymmetry parameter \code{kappa}.
}
\usage{
dalap(x, location = 0, scale = 1, tau = 0.5, kappa = sqrt(tau/(1-tau)),
      log = FALSE)
palap(q, location = 0, scale = 1, tau = 0.5, kappa = sqrt(tau/(1-tau)))
qalap(p, location = 0, scale = 1, tau = 0.5, kappa = sqrt(tau/(1-tau)))
ralap(n, location = 0, scale = 1, tau = 0.5, kappa = sqrt(tau/(1-tau)))
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{x, q}{vector of quantiles.}
\item{p}{vector of probabilities.}
\item{n}{ number of observations. If \code{length(n) > 1} then the length is taken to be the number required. }
\item{location}{ the location parameter \eqn{\xi}{xi}. }
\item{scale}{ the scale parameter \eqn{\sigma}{sigma}. Must consist of positive values. }
\item{tau}{ the quantile parameter \eqn{\tau}{tau}. Must consist of values in \eqn{(0,1)}. This argument is used to specify \code{kappa} and is ignored if \code{kappa} is assigned. }
\item{kappa}{ the asymmetry parameter \eqn{\kappa}{kappa}. Must consist of positive values.
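% The tau/kappa correspondence in the usage defaults can be verified directly:
% tau <- 0.75; kappa <- sqrt(tau / (1 - tau))
% kappa^2 / (1 + kappa^2)  # Recovers tau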
} \item{log}{ if \code{TRUE}, probabilities \code{p} are given as \code{log(p)}. } } \details{ There are many variants of asymmetric Laplace distributions (ALDs) and this one is known as \emph{the} ALD by Kotz et al. (2001). See \code{\link{alaplace3}}, the \pkg{VGAM} family function for estimating the three parameters by maximum likelihood estimation, for formulae and details. } \value{ \code{dalap} gives the density, \code{palap} gives the distribution function, \code{qalap} gives the quantile function, and \code{ralap} generates random deviates. } \references{ Kotz, S., Kozubowski, T. J. and Podgorski, K. (2001) \emph{The Laplace distribution and generalizations: a revisit with applications to communications, economics, engineering, and finance}, Boston: Birkhauser. } \author{ T. W. Yee } %\note{ % The \pkg{VGAM} family function \code{\link{alaplace3}} % estimates the three parameters by maximum likelihood estimation. %} \seealso{ \code{\link{alaplace3}}. % \code{\link{dloglap}}. } \examples{ x <- seq(-5, 5, by = 0.01) loc <- 0; sigma <- 1.5; kappa <- 2 \dontrun{ plot(x, dalap(x, loc, sigma, kappa = kappa), type = "l", col = "blue", main = "Blue is density, red is cumulative distribution function", ylim = c(0, 1), sub = "Purple are 5, 10, ..., 95 percentiles", las = 1, ylab = "", cex.main = 0.5) abline(h = 0, col = "blue", lty = 2) lines(qalap(seq(0.05, 0.95, by = 0.05), loc, sigma, kappa = kappa), dalap(qalap(seq(0.05, 0.95, by = 0.05), loc, sigma, kappa = kappa), loc, sigma, kappa = kappa), col = "purple", lty = 3, type = "h") lines(x, palap(x, loc, sigma, kappa = kappa), type = "l", col = "red") abline(h = 0, lty = 2) } pp <- seq(0.05, 0.95, by = 0.05) # Test two functions max(abs(palap(qalap(pp, loc, sigma, kappa = kappa), loc, sigma, kappa = kappa) - pp)) # Should be 0 } \keyword{distribution} VGAM/man/alaplace3.Rd0000644000176000001440000003211412136651105013724 0ustar ripleyusers\name{alaplace} \alias{alaplace1} \alias{alaplace2} \alias{alaplace3} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Asymmetric Laplace Distribution Family Functions } \description{ Maximum likelihood estimation of the 1, 2 and 3-parameter asymmetric Laplace distributions (ALDs). The 1-parameter ALD may be used for quantile regression. } \usage{ alaplace1(tau = NULL, llocation = "identity", ilocation = NULL, kappa = sqrt(tau/(1 - tau)), Scale.arg = 1, shrinkage.init = 0.95, parallelLocation = FALSE, digt = 4, dfmu.init = 3, intparloc = FALSE, imethod = 1) alaplace2(tau = NULL, llocation = "identity", lscale = "loge", ilocation = NULL, iscale = NULL, kappa = sqrt(tau/(1 - tau)), shrinkage.init = 0.95, parallelLocation = FALSE, digt = 4, eq.scale = TRUE, dfmu.init = 3, intparloc = FALSE, imethod = 1, zero = -2) alaplace3(llocation = "identity", lscale = "loge", lkappa = "loge", ilocation = NULL, iscale = NULL, ikappa = 1, imethod = 1, zero = 2:3) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{tau, kappa}{ Numeric vectors with \eqn{0 < \tau < 1}{0 < tau < 1} and \eqn{\kappa >0}{kappa >0}. Most users will only specify \code{tau} since the estimated location parameter corresponds to the \eqn{\tau}{tau}th regression quantile, which is easier to understand. See below for details. } \item{llocation, lscale, lkappa}{ Character. Parameter link functions for location parameter \eqn{\xi}{xi}, scale parameter \eqn{\sigma}{sigma}, asymmetry parameter \eqn{\kappa}{kappa}. See \code{\link{Links}} for more choices. 
For example, the argument \code{llocation} can help handle count data by restricting the quantiles to be positive (use \code{llocation = "loge"}). However, \code{llocation} is best left alone since the theory only works properly with the identity link. } \item{ilocation, iscale, ikappa}{ Optional initial values. If given, it must be numeric and values are recycled to the appropriate length. The default is to choose the value internally. } \item{parallelLocation, intparloc}{ Logical. Should the quantiles be parallel on the transformed scale (argument \code{llocation})? Assigning this argument to \code{TRUE} circumvents the seriously embarrassing quantile crossing problem. The argument \code{intparloc} applies to intercept term; the argument \code{parallelLocation} applies to other terms. } \item{eq.scale}{ Logical. Should the scale parameters be equal? It is advised to keep \code{eq.scale = TRUE} unchanged because it does not make sense to have different values for each \code{tau} value. } \item{imethod}{ Initialization method. Either the value 1, 2, 3 or 4. } \item{dfmu.init}{ Degrees of freedom for the cubic smoothing spline fit applied to get an initial estimate of the location parameter. See \code{\link{vsmooth.spline}}. Used only when \code{imethod = 3}. } \item{shrinkage.init}{ How much shrinkage is used when initializing \eqn{\xi}{xi}. The value must be between 0 and 1 inclusive, and a value of 0 means the individual response values are used, and a value of 1 means the median or mean is used. This argument is used only when \code{imethod = 4}. See \code{\link{CommonVGAMffArguments}} for more information. } \item{Scale.arg}{ The value of the scale parameter \eqn{\sigma}{sigma}. This argument may be used to compute quantiles at different \eqn{\tau}{tau} values from an existing fitted \code{alaplace2()} model (practical only if it has a single value). If the model has \code{parallelLocation = TRUE} then only the intercept need be estimated; use an offset. See below for an example. % This is because the expected information matrix is diagonal, % i.e., the location and scale parameters are asymptotically independent. } \item{digt }{ Passed into \code{\link[base]{Round}} as the \code{digits} argument for the \code{tau} values; used cosmetically for labelling. } \item{zero}{ See \code{\link{CommonVGAMffArguments}} for more information. Where possible, the default is to model all the \eqn{\sigma}{sigma} and \eqn{\kappa}{kappa} as an intercept-only term. } } \details{ These \pkg{VGAM} family functions implement one variant of asymmetric Laplace distributions (ALDs) suitable for quantile regression. Kotz et al. (2001) call it \emph{the} ALD. Its density function is \deqn{f(y;\xi,\sigma,\kappa) = \frac{\sqrt{2}}{\sigma} \, \frac{\kappa}{1 + \kappa^2} \, \exp \left( - \frac{\sqrt{2}}{\sigma \, \kappa} |y - \xi | \right) }{% f(y;xi,sigma,kappa) = (sqrt(2)/sigma) * (kappa/(1+ \kappa^2)) * exp( -(sqrt(2) / (sigma * kappa)) * |y-xi| ) } for \eqn{y \leq \xi}{y <= xi}, and \deqn{f(y;\xi,\sigma,\kappa) = \frac{\sqrt{2}}{\sigma} \, \frac{\kappa}{1 + \kappa^2} \, \exp \left( - \frac{\sqrt{2} \, \kappa}{\sigma} |y - \xi | \right) }{% f(y;xi,sigma,kappa) = (sqrt(2)/sigma) * (kappa/(1+ \kappa^2)) * exp( - (sqrt(2) * kappa / sigma) * |y-xi| ) } for \eqn{y > \xi}{y > xi}. Here, the ranges are for all real \eqn{y} and \eqn{\xi}{xi}, positive \eqn{\sigma}{sigma} and positive \eqn{\kappa}{kappa}. The special case \eqn{\kappa = 1}{kappa = 1} corresponds to the (symmetric) Laplace distribution of Kotz et al. 
(2001). The mean is \eqn{\xi + \sigma (1/\kappa - \kappa) / \sqrt{2}}{xi + sigma * (1/kappa - kappa) / sqrt(2)} and the variance is \eqn{\sigma^2 (1 + \kappa^4) / (2 \kappa^2)}{sigma^2 * (1 + kappa^4) / (2 * kappa^2)}. The enumeration of the linear/additive predictors used for \code{alaplace2()} is the first location parameter followed by the first scale parameter, then the second location parameter followed by the second scale parameter, etc. For \code{alaplace3()}, only a vector response is handled and the last (third) linear/additive predictor is for the asymmetry parameter. It is known that the maximum likelihood estimate of the location parameter \eqn{\xi}{xi} corresponds to the regression quantile estimate of the classical quantile regression approach of Koenker and Bassett (1978). An important property of the ALD is that \eqn{P(Y \leq \xi) = \tau}{P(Y <= xi) = tau} where \eqn{\tau = \kappa^2 / (1 + \kappa^2)}{tau = kappa^2 / (1 + kappa^2)} so that \eqn{\kappa = \sqrt{\tau / (1-\tau)}}{kappa = sqrt(tau / (1-tau))}. Thus \code{alaplace1()} might be used as an alternative to \code{rq} in the \pkg{quantreg} package. Both \code{alaplace1()} and \code{alaplace2()} can handle multiple responses, and the number of linear/additive predictors is dictated by the length of \code{tau} or \code{kappa}. The function \code{alaplace2()} can also handle a matrix response with a single-valued \code{tau} or \code{kappa}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. In the \code{extra} slot of the fitted object are some list components which are useful, e.g., the sample proportion of values which are less than the fitted quantile curves. } \references{ Koenker, R. and Bassett, G. (1978) Regression quantiles. \emph{Econometrica}, \bold{46}, 33--50. Kotz, S., Kozubowski, T. J. and Podgorski, K. (2001) \emph{The Laplace distribution and generalizations: a revisit with applications to communications, economics, engineering, and finance}, Boston: Birkhauser. Yee, T. W. (2012) Quantile regression for counts and proportions. In preparation. } \author{ Thomas W. Yee } \section{Warning}{ The MLE regularity conditions do not hold for this distribution so that misleading inferences may result, e.g., in the \code{summary} and \code{vcov} of the object. Care is needed with \code{tau} values which are too small, e.g., for count data with \code{llocation = "loge"} and if the sample proportion of zeros is greater than \code{tau}. } \note{ % Commented out 20090326 % The function \code{alaplace2()} is recommended over \code{alaplace1()} % for quantile regression because the solution is % invariant to location and scale, % i.e., linear transformation of the response produces the % same linear transformation of the fitted quantiles. These \pkg{VGAM} family functions use Fisher scoring. Convergence may be slow and half-stepping is usual (although one can use \code{trace = TRUE} to see which is the best model and then use \code{maxit} to choose that model) due to the regularity conditions not holding. For large data sets it is a very good idea to keep the length of \code{tau}/\code{kappa} low to avoid large memory requirements. 
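% A quick simulation check of the mean formula in the Details above
% (a sketch; the parameter values are arbitrary):
% xi <- 1; sigma <- 2; kappa <- 2; set.seed(1)
% mean(ralap(1e5, xi, sigma, kappa = kappa))
% xi + sigma * (1/kappa - kappa) / sqrt(2)  # Theoretical mean; should be close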
Then for \code{parallelLoc = FALSE} one can repeatedly fit a model with \code{alaplace1()} with one \eqn{\tau}{tau} at a time; and for \code{parallelLoc = TRUE} one can refit a model with \code{alaplace1()} with one \eqn{\tau}{tau} at a time but using offsets and an intercept-only model. A second method for solving the noncrossing quantile problem is illustrated below in Example 3. This is called the \emph{accumulative quantile method} (AQM) and details are in Yee (2012). It does not make the strong parallelism assumption. The functions \code{alaplace2()} and \code{\link{laplace}} differ slightly in terms of the parameterizations. } \seealso{ \code{\link{ralap}}, \code{\link{laplace}}, \code{\link{lms.bcn}}, \code{\link{amlnormal}}, \code{\link{koenker}}. } \examples{ \dontrun{ # Example 1: quantile regression with smoothing splines adata <- data.frame(x = sort(runif(n <- 500))) mymu <- function(x) exp(-2 + 6*sin(2*x-0.2) / (x+0.5)^2) adata <- transform(adata, y = rpois(n, lambda = mymu(x))) mytau <- c(0.25, 0.75); mydof <- 4 fit <- vgam(y ~ s(x, df = mydof), alaplace1(tau = mytau, llocation = "loge", parallelLoc = FALSE), adata, trace = TRUE) fitp <- vgam(y ~ s(x, df = mydof), data = adata, trace = TRUE, alaplace1(tau = mytau, llocation = "loge", parallelLoc = TRUE)) par(las = 1); mylwd = 1.5 with(adata, plot(x, jitter(y, factor = 0.5), col = "red", main = "Example 1; green: parallelLoc = TRUE", ylab = "y", pch = "o", cex = 0.75)) with(adata, matlines(x, fitted(fit ), col = "blue", lty = "solid", lwd = mylwd)) with(adata, matlines(x, fitted(fitp), col = "green", lty = "solid", lwd = mylwd)) finexgrid <- seq(0, 1, len = 1001) for(ii in 1:length(mytau)) lines(finexgrid, qpois(p = mytau[ii], lambda = mymu(finexgrid)), col = "blue", lwd = mylwd) fit@extra # Contains useful information # Example 2: regression quantile at a new tau value from an existing fit # Nb. regression splines are used here since it is easier. fitp2 <- vglm(y ~ bs(x, df = mydof), family = alaplace1(tau = mytau, llocation = "loge", parallelLoc = TRUE), adata, trace = TRUE) newtau <- 0.5 # Want to refit the model with this tau value fitp3 <- vglm(y ~ 1 + offset(predict(fitp2)[,1]), family = alaplace1(tau = newtau, llocation = "loge"), adata) with(adata, plot(x, jitter(y, factor = 0.5), col = "red", pch = "o", cex = 0.75, ylab = "y", main = "Example 2; parallelLoc = TRUE")) with(adata, matlines(x, fitted(fitp2), col = "blue", lty = 1, lwd = mylwd)) with(adata, matlines(x, fitted(fitp3), col = "black", lty = 1, lwd = mylwd)) # Example 3: noncrossing regression quantiles using a trick: obtain # successive solutions which are added to previous solutions; use a log # link to ensure an increasing quantiles at any value of x. mytau <- seq(0.2, 0.9, by = 0.1) answer <- matrix(0, nrow(adata), length(mytau)) # Stores the quantiles adata <- transform(adata, offsety = y*0) usetau <- mytau for(ii in 1:length(mytau)) { # cat("\n\nii = ", ii, "\n") adata <- transform(adata, usey = y-offsety) iloc <- ifelse(ii == 1, with(adata, median(y)), 1.0) # Well-chosen! mydf <- ifelse(ii == 1, 5, 3) # Maybe less smoothing will help lloc <- ifelse(ii == 1, "identity", "loge") # 2nd value must be "loge" fit3 <- vglm(usey ~ ns(x, df = mydf), data = adata, trace = TRUE, alaplace1(tau = usetau[ii], lloc = lloc, iloc = iloc)) answer[,ii] <- (if(ii == 1) 0 else answer[,ii-1]) + fitted(fit3) adata <- transform(adata, offsety = answer[,ii]) } # Plot the results. 
with(adata, plot(x, y, col = "blue", main = paste("Noncrossing and nonparallel; tau = ", paste(mytau, collapse = ", ")))) with(adata, matlines(x, answer, col = "orange", lty = 1)) # Zoom in near the origin. with(adata, plot(x, y, col = "blue", xlim = c(0, 0.2), ylim = 0:1, main = paste("Noncrossing and nonparallel; tau = ", paste(mytau, collapse = ", ")))) with(adata, matlines(x, answer, col = "orange", lty = 1)) } } \keyword{models} \keyword{regression} VGAM/man/acat.Rd0000644000176000001440000000703412136651105013012 0ustar ripleyusers\name{acat} \alias{acat} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Ordinal Regression with Adjacent Categories Probabilities } \description{ Fits an adjacent categories regression model to an ordered (preferably) factor response. } \usage{ acat(link = "loge", parallel = FALSE, reverse = FALSE, zero = NULL, whitespace = FALSE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{link}{ Link function applied to the ratios of the adjacent categories probabilities. See \code{\link{Links}} for more choices. } \item{parallel}{ A logical, or formula specifying which terms have equal/unequal coefficients. } \item{reverse}{ Logical. By default, the linear/additive predictors used are \eqn{\eta_j = \log(P[Y=j+1]/P[Y=j])}{eta_j = log(P[Y=j+1]/P[Y=j])} for \eqn{j=1,\ldots,M}. If \code{reverse} is \code{TRUE} then \eqn{\eta_j = \log(P[Y=j]/P[Y=j+1])}{eta_j=log(P[Y=j]/P[Y=j+1])} will be used. } \item{zero}{ An integer-valued vector specifying which linear/additive predictors are modelled as intercepts only. The values must be from the set \{1,2,\ldots,\eqn{M}\}. } \item{whitespace}{ See \code{\link{CommonVGAMffArguments}} for information. } } \details{ In this help file the response \eqn{Y} is assumed to be a factor with ordered values \eqn{1,2,\ldots,M+1}, so that \eqn{M} is the number of linear/additive predictors \eqn{\eta_j}{eta_j}. By default, the log link is used because the ratio of two probabilities is positive. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, \code{\link{rrvglm}} and \code{\link{vgam}}. } \references{ Agresti, A. (2002) \emph{Categorical Data Analysis}, 2nd ed. New York: Wiley. Simonoff, J. S. (2003) \emph{Analyzing Categorical Data}, New York: Springer-Verlag. Yee, T. W. (2010) The \pkg{VGAM} package for categorical data analysis. \emph{Journal of Statistical Software}, \bold{32}, 1--34. \url{http://www.jstatsoft.org/v32/i10/}. Documentation accompanying the \pkg{VGAM} package at \url{http://www.stat.auckland.ac.nz/~yee} contains further information and examples. } \author{ Thomas W. Yee } \note{ The response should be either a matrix of counts (with row sums that are all positive), or an ordered factor. In both cases, the \code{y} slot returned by \code{vglm}/\code{vgam}/\code{rrvglm} is the matrix of counts. For a nominal (unordered) factor response, the multinomial logit model (\code{\link{multinomial}}) is more appropriate. Here is an example of the usage of the \code{parallel} argument. If there are covariates \code{x1}, \code{x2} and \code{x3}, then \code{parallel = TRUE ~ x1 + x2 -1} and \code{parallel = FALSE ~ x3} are equivalent. This would constrain the regression coefficients for \code{x1} and \code{x2} to be equal; those of the intercepts and \code{x3} would be different. 
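% A sketch of the parallelism constraint just described, with hypothetical
% covariates x1, x2, x3 and response matrix 'ymat' in data frame 'adata':
% fit <- vglm(ymat ~ x1 + x2 + x3, acat(parallel = TRUE ~ x1 + x2 - 1),
%             data = adata)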
}
\section{Warning }{
No check is made to verify that the response is ordinal if the
response is a matrix; see \code{\link[base:factor]{ordered}}.

}
\seealso{
\code{\link{cumulative}},
\code{\link{cratio}},
\code{\link{sratio}},
\code{\link{multinomial}},
\code{\link{pneumo}}.

}
\examples{
pneumo <- transform(pneumo, let = log(exposure.time))
(fit <- vglm(cbind(normal, mild, severe) ~ let, acat, pneumo))
coef(fit, matrix = TRUE)
constraints(fit)
model.matrix(fit)
}
\keyword{models}
\keyword{regression}

%pneumo$let <- log(pneumo$exposure.time)
VGAM/man/VGAM-package.Rd0000644000176000001440000001703012136651105014222 0ustar ripleyusers\name{VGAM-package}
\alias{VGAM-package}
\alias{VGAM}
\docType{package}
\title{
Vector Generalized Linear and Additive Models
}
\description{
\pkg{VGAM} provides functions for fitting vector generalized linear
and additive models (VGLMs and VGAMs), and associated models
(Reduced-rank VGLMs, Quadratic RR-VGLMs, Reduced-rank VGAMs). This
package fits many models and distributions by maximum likelihood
estimation (MLE) or penalized MLE. Also fits constrained ordination
models in ecology such as constrained quadratic ordination (CQO).
}
\details{
This package centers on the \emph{iteratively reweighted least
squares} (IRLS) algorithm. Other key words include Fisher scoring,
additive models, penalized likelihood, reduced-rank regression and
constrained ordination. The central modelling functions are
\code{\link{vglm}}, \code{\link{vgam}}, \code{\link{rrvglm}},
\code{\link{cqo}}, \code{\link{cao}}. For detailed control of fitting,
each of these has its own control function, e.g.,
\code{\link{vglm.control}}. The package uses S4 (see
\code{\link[methods]{methods-package}}). A companion package called
\pkg{VGAMdata} contains some larger data sets which were shifted from
\pkg{VGAM}.

The classes of GLMs and GAMs are special cases of VGLMs and VGAMs. The
VGLM/VGAM framework is intended to be very general so that it
encompasses as many distributions and models as possible. VGLMs are
limited only by the assumption that the regression coefficients enter
through a set of linear predictors. The VGLM class is very large and
encompasses a wide range of multivariate response types and models,
e.g., it includes univariate and multivariate distributions,
categorical data analysis, time series, survival analysis, generalized
estimating equations, extreme values, correlated binary data, quantile
and expectile regression, bioassay data and nonlinear least-squares
problems.

VGAMs are to VGLMs what GAMs are to GLMs. Vector smoothing (see
\code{\link{vsmooth.spline}}) allows several additive predictors to be
estimated as a sum of smooth functions of the covariates.

For a complete list of the functions in this package, use
\code{library(help = "VGAM")}.

New \pkg{VGAM} family functions are continually being written and
added to the package. A monograph about VGLMs and VGAMs etc. is
currently in the making.

%but unfortunately will not be finished for a while.

%~~ An overview of how to use the package, including the most important ~~
%~~ functions ~~
}
\author{
Thomas W. Yee, \email{t.yee@auckland.ac.nz}.

Maintainer: Thomas Yee \email{t.yee@auckland.ac.nz}.

}
\section{Warning}{
This package is undergoing continual development and improvement.
Until my monograph comes out and this package is released as version
1.0-0, the user should treat everything as subject to change. This
includes the family function names, argument names, many of the
internals, the use of link functions, and slot names.
Some future pain can be minimized by using good programming techniques, e.g., using extractor/accessor functions such as \code{coef()}, \code{weights()}, \code{vcov()}, \code{predict()}. Nevertheless, please expect changes in all aspects of the package. See the \code{NEWS} file for a list of changes from version to version. } \references{ Yee, T. W. Vector Generalized Linear and Additive Models. \emph{Monograph in preparation}. Yee, T. W. and Hastie, T. J. (2003) Reduced-rank vector generalized linear models. \emph{Statistical Modelling}, \bold{3}, 15--41. Yee, T. W. and Stephenson, A. G. (2007) Vector generalized linear and additive extreme value models. \emph{Extremes}, \bold{10}, 1--19. Yee, T. W. and Wild, C. J. (1996) Vector generalized additive models. \emph{Journal of the Royal Statistical Society, Series B, Methodological}, \bold{58}, 481--493. Yee, T. W. (2004) A new technique for maximum-likelihood canonical Gaussian ordination. \emph{Ecological Monographs}, \bold{74}, 685--701. Yee, T. W. (2006) Constrained additive ordination. \emph{Ecology}, \bold{87}, 203--213. Yee, T. W. (2008) The \code{VGAM} Package. \emph{R News}, \bold{8}, 28--39. Yee, T. W. (2010) The \pkg{VGAM} package for categorical data analysis. \emph{Journal of Statistical Software}, \bold{32}, 1--34. \url{http://www.jstatsoft.org/v32/i10/}. Yee, T. W. (2013) Reduced-rank vector generalized linear models with two linear predictors. \emph{Computational Statistics and Data Analysis}. (Oldish) documentation accompanying the \pkg{VGAM} package at \url{http://www.stat.auckland.ac.nz/~yee/VGAM} contains some further information and examples. } \keyword{ package } \keyword{models} \keyword{regression} \seealso{ \code{\link{vglm}}, \code{\link{vgam}}, \code{\link{rrvglm}}, \code{\link{cqo}}, \code{\link{TypicalVGAMfamilyFunction}}, \code{\link{CommonVGAMffArguments}}, \code{\link{Links}}. %~~ Optional links to other man pages, e.g. 
~~ %~~ \code{\link[:-package]{}} ~~ } \examples{ # Example 1; proportional odds model pneumo <- transform(pneumo, let = log(exposure.time)) (fit1 <- vglm(cbind(normal, mild, severe) ~ let, propodds, pneumo)) depvar(fit1) # Better than using fit1@y; dependent variable (response) weights(fit1, type = "prior") # Number of observations coef(fit1, matrix = TRUE) # p.179, in McCullagh and Nelder (1989) constraints(fit1) # Constraint matrices summary(fit1) # Example 2; zero-inflated Poisson model zdata <- data.frame(x2 = runif(nn <- 2000)) zdata <- transform(zdata, pstr0 = logit(-0.5 + 1*x2, inverse = TRUE), lambda = loge( 0.5 + 2*x2, inverse = TRUE)) zdata <- transform(zdata, y = rzipois(nn, lambda, pstr0 = pstr0)) with(zdata, table(y)) fit2 <- vglm(y ~ x2, zipoisson, zdata, trace = TRUE) coef(fit2, matrix = TRUE) # These should agree with the above values # Example 3; fit a two species GAM simultaneously fit3 <- vgam(cbind(agaaus, kniexc) ~ s(altitude, df = c(2, 3)), binomialff(mv = TRUE), hunua) coef(fit3, matrix = TRUE) # Not really interpretable \dontrun{ plot(fit3, se = TRUE, overlay = TRUE, lcol = 3:4, scol = 3:4) ooo <- with(hunua, order(altitude)) with(hunua, matplot(altitude[ooo], fitted(fit3)[ooo, ], type = "l", lwd = 2, col = 3:4, xlab = "Altitude (m)", ylab = "Probability of presence", las = 1, main = "Two plant species' response curves", ylim = c(0, 0.8))) with(hunua, rug(altitude)) } # Example 4; LMS quantile regression fit4 <- vgam(BMI ~ s(age, df = c(4, 2)), lms.bcn(zero = 1), data = bmi.nz, trace = TRUE) head(predict(fit4)) head(fitted(fit4)) head(bmi.nz) # Person 1 is near the lower quartile among people his age head(cdf(fit4)) \dontrun{ par(mfrow = c(1, 1), bty = "l", mar = c(5,4,4,3)+0.1, xpd = TRUE) qtplot(fit4, percentiles = c(5,50,90,99), main = "Quantiles", las = 1, xlim = c(15, 90), ylab = "BMI", lwd = 2, lcol = 4) # Quantile plot ygrid <- seq(15, 43, len = 100) # BMI ranges par(mfrow = c(1, 1), lwd = 2) # Density plot aa <- deplot(fit4, x0 = 20, y = ygrid, xlab = "BMI", col = "black", main = "Density functions at Age = 20 (black), 42 (red) and 55 (blue)") aa aa <- deplot(fit4, x0 = 42, y = ygrid, add = TRUE, llty = 2, col = "red") aa <- deplot(fit4, x0 = 55, y = ygrid, add = TRUE, llty = 4, col = "blue", Attach = TRUE) aa@post$deplot # Contains density function values } # Example 5; GEV distribution for extremes (fit5 <- vglm(maxtemp ~ 1, egev, data = oxtemp, trace = TRUE)) head(fitted(fit5)) coef(fit5, matrix = TRUE) Coef(fit5) vcov(fit5) vcov(fit5, untransform = TRUE) sqrt(diag(vcov(fit5))) # Approximate standard errors \dontrun{ rlplot(fit5) } } VGAM/man/Tol.Rd0000644000176000001440000000534612136651105012644 0ustar ripleyusers\name{Tol} \alias{Tol} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Tolerances } \description{ Generic function for the \emph{tolerances} of a model. } \usage{ Tol(object, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{object}{ An object for which the computation or extraction of a tolerance or tolerances is meaningful. } \item{\dots}{ Other arguments fed into the specific methods function of the model. Sometimes they are fed into the methods function for \code{\link{Coef}}. } } \details{ Different models can define an optimum in different ways. Many models have no such notion or definition. Tolerances occur in quadratic ordination, i.e., CQO or UQO. 
They have ecological meaning because a high tolerance for a species
means the species can survive over a large environmental range
(euryoecious species), whereas a small tolerance means the species'
niche is small (stenoecious species). Mathematically, the tolerance is
like the variance of a normal distribution.

}
\value{
The value returned depends specifically on the methods function
invoked.

}
\references{

Yee, T. W. (2004)
A new technique for maximum-likelihood
canonical Gaussian ordination.
\emph{Ecological Monographs},
\bold{74}, 685--701.

Yee, T. W. (2006)
Constrained additive ordination.
\emph{Ecology}, \bold{87}, 203--213.

}
\author{ Thomas W. Yee }

\note{
Tolerances are undefined for `linear' and additive ordination models.
They are well-defined for quadratic ordination models.

}
\section{Warning }{
There is a direct inverse relationship between the scaling of the
latent variables (site scores) and the tolerances. One normalization
is for the latent variables to have unit variance. Another
normalization is for all the tolerances to be unity. These two
normalizations cannot hold simultaneously in general.

For rank-\emph{R>1} models it becomes more complicated because the
latent variables are also uncorrelated. An important argument when
fitting quadratic ordination models is whether \code{EqualTolerances}
is \code{TRUE} or \code{FALSE}. See Yee (2004) for details.

}

\seealso{
\code{Tol.qrrvglm}.
\code{\link{Max}},
\code{\link{Opt}}.

}
\examples{
set.seed(111)  # This leads to the global solution
hspider[,1:6] <- scale(hspider[,1:6])  # Standardized environmental vars
p1 <- cqo(cbind(Alopacce, Alopcune, Alopfabr, Arctlute, Arctperi,
                Auloalbi, Pardlugu, Pardmont, Pardnigr, Pardpull,
                Trocterr, Zoraspin) ~
          WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux,
          Bestof = 2, quasipoissonff, data = hspider,
          Crow1positive = FALSE)

Tol(p1)
}
\keyword{models}
\keyword{regression}

VGAM/man/SurvS4.Rd0000644000176000001440000001223212136651105013244 0ustar ripleyusers\name{SurvS4}
\alias{SurvS4}
\alias{is.SurvS4}
%%%% 20120216 \alias{print.SurvS4}
\alias{show.SurvS4}
\alias{Math.SurvS4}
\alias{Summary.SurvS4}
\alias{[.SurvS4}
\alias{format.SurvS4}
\alias{as.data.frame.SurvS4}
\alias{as.character.SurvS4}
\alias{is.na.SurvS4}
\alias{Ops.SurvS4}
\title{
Create a Survival Object
}
\description{
Create a survival object, usually used as a response variable in a
model formula.
}
\usage{
SurvS4(time, time2, event, type =, origin = 0)
is.SurvS4(x)
}
\arguments{
\item{time}{
for right censored data, this is the follow up time. For interval
data, the first argument is the starting time for the interval.

}
\item{x}{
any R object.

}
\item{event}{
The status indicator, normally 0=alive, 1=dead. Other choices are
\code{TRUE}/\code{FALSE} (\code{TRUE} = death) or 1/2 (2=death). For
interval censored data, the status indicator is 0=right censored,
1=event at \code{time}, 2=left censored, 3=interval censored.
Although unusual, the event indicator can be omitted, in which case
all subjects are assumed to have an event.

}
\item{time2}{
ending time of the interval for interval censored or counting process
data only. Intervals are assumed to be open on the left and closed on
the right, \code{(start, end]}. For counting process data,
\code{event} indicates whether an event occurred at the end of the
interval.

}
\item{type}{
character string specifying the type of censoring. Possible values are
\code{"right"}, \code{"left"}, \code{"counting"}, \code{"interval"},
or \code{"interval2"}.
The default is \code{"right"} or \code{"counting"} depending on
whether the \code{time2} argument is absent or present, respectively.

}
\item{origin}{
for counting process data, the hazard function origin. This is most
often used in conjunction with a model containing time dependent
strata in order to align the subjects properly when they cross over
from one stratum to another.

}
}
\value{
An object of class \code{SurvS4} (formerly \code{Surv}). There are
methods for \code{print}, \code{is.na}, and subscripting survival
objects. \code{SurvS4} objects are implemented as a matrix of 2 or 3
columns.

In the case of \code{is.SurvS4}, a logical value \code{TRUE} if
\code{x} inherits from class \code{"SurvS4"}, otherwise a \code{FALSE}.

}
\details{
Typical usages are
\preformatted{
SurvS4(time, event)
SurvS4(time, time2, event, type=, origin=0)
}

In theory it is possible to represent interval censored data without a
third column containing the explicit status. Exact, right censored,
left censored and interval censored observations would be represented
as intervals of (a,a), (a, infinity), (-infinity,b), and (a,b)
respectively; each specifying the interval within which the event is
known to have occurred.

If \code{type = "interval2"} then the representation given above is
assumed, with NA taking the place of infinity. If
\code{type = "interval"} then \code{event} must be given. If
\code{event} is \code{0}, \code{1}, or \code{2}, the relevant
information is assumed to be contained in \code{time}, the value in
\code{time2} is ignored, and the second column of the result will
contain a placeholder.

Presently, the only methods allowing interval censored data are the
parametric models computed by \code{\link[survival]{survreg}}, so the
distinction between open and closed intervals is unimportant. The
distinction is important for counting process data and the Cox model.

The function tries to distinguish between the use of 0/1 and 1/2
coding for left and right censored data using
\code{if (max(status)==2)}. If 1/2 coding is used and all the subjects
are censored, it will guess wrong. Use 0/1 coding in this case.

}
\author{
The code and documentation come from \pkg{survival}. Slight
modifications have been made for conversion to S4 by T. W. Yee. Also,
for \code{"interval"} data, \code{as.character.SurvS4()} has been
modified to print intervals of the form \code{(start, end]} and not
\code{[start, end]} as previously. (This makes a difference for
discrete data, such as for \code{\link{cenpoisson}}).

All \pkg{VGAM} family functions beginning with \code{"cen"} require
the packaging function \code{SurvS4} to format the input.

}
\note{
The purpose of having \code{SurvS4} in \pkg{VGAM} is so that the same
input can be fed into \code{\link{vglm}} as functions in
\pkg{survival} such as \code{\link[survival]{survreg}}. The class name
has been changed from \code{"Surv"} to \code{"SurvS4"}; see
\code{\link{SurvS4-class}}.

The format \code{J+} is interpreted in \pkg{VGAM} as \eqn{\ge J}. If
\code{type="interval"} then these should not be used in \pkg{VGAM}:
\code{(L,U-]} or \code{(L,U+]}.

% zz is this for type="count" only?

}
\seealso{
\code{\link{SurvS4-class}},
\code{\link{cenpoisson}},
% \code{\link[survival]{coxph}},
% \code{\link[survival]{survfit}},
\code{\link[survival]{survreg}},
\code{\link{leukemia}}.
}
\examples{
with(leukemia, SurvS4(time, status))
class(with(leukemia, SurvS4(time, status)))
}
\keyword{survival}
% Converted by Sd2Rd version 0.3-2.
% with(heart, SurvS4(start,stop,event)) VGAM/man/SurvS4-class.Rd0000644000176000001440000000266612136651105014361 0ustar ripleyusers\name{SurvS4-class} \docType{class} \alias{SurvS4-class} %%%% 20120216 \alias{print,SurvS4-method} \alias{show,SurvS4-method} \title{Class "SurvS4" } \description{ S4 version of the Surv class. } \section{Objects from the Class}{A virtual Class: No objects may be created from it.} \section{Extends}{ %Class \code{"\linkS4class{Surv}"}, directly. Class \code{"Surv"}, directly. Class \code{"\linkS4class{matrix}"}, directly. Class \code{"\linkS4class{oldClass}"}, by class "Surv", distance 2. Class \code{"\linkS4class{structure}"}, by class "matrix", distance 2. Class \code{"\linkS4class{array}"}, by class "matrix", distance 2. Class \code{"\linkS4class{vector}"}, by class "matrix", distance 3, with explicit coerce. Class \code{"\linkS4class{vector}"}, by class "matrix", distance 4, with explicit coerce. } \section{Methods}{ \describe{ % \item{print}{\code{signature(x = "SurvS4")}: ... } \item{show}{\code{signature(object = "SurvS4")}: ... } } } \references{ See \pkg{survival}. } \author{ T. W. Yee. } \note{ The purpose of having \code{\link{SurvS4}} in \pkg{VGAM} is so that the same input can be fed into \code{\link{vglm}} as functions in \pkg{survival} such as \code{\link[survival]{survreg}}. } \section{Warning }{ This code has not been thoroughly tested. } \seealso{ \code{\link{SurvS4}}. % or \code{\linkS4class{CLASSNAME}} for links to other classes } \examples{ showClass("SurvS4") } \keyword{classes} VGAM/man/SUR.Rd0000644000176000001440000001214012136651105012545 0ustar ripleyusers\name{SUR} \alias{SUR} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Seemingly Unrelated Regressions %% ~~function to do ... ~~ } \description{ Fits a system of seemingly unrelated regressions. %% ~~ A concise (1-5 lines) description of what the function does. ~~ } \usage{ SUR(mle.normal = FALSE, divisor = c("n", "n-max(pj,pk)", "sqrt((n-pj)*(n-pk))"), parallel = FALSE, apply.parint = TRUE, Varcov = NULL, matrix.arg = FALSE) } %- maybe also 'usage' for other objects documented here. \arguments{ % \item{estimator}{ %Character. %What estimator is computed. %% ~~Describe \code{estimator} here~~ %} \item{mle.normal}{ Logical. If \code{TRUE} then the MLE, assuming multivariate normal errors, is computed; the effect is just to add a \code{loglikelihood} slot to the returned object. Then it results in the \emph{maximum likelihood estimator}. } \item{divisor}{ Character, partial matching allowed and the first choice is the default. The divisor for the estimate of the covariances. If \code{"n"} then the estimate will be biased. If the others then the estimate will be unbiased for some elements. If \code{mle.normal = TRUE} and this argument is not \code{"n"} then a warning or an error will result. } \item{parallel, apply.parint}{ See \code{\link{CommonVGAMffArguments}}. } \item{Varcov}{ Numeric. This may be assigned a variance-covariance of the errors. If \code{matrix.arg} then this is a \eqn{M \times M}{M x M} matrix. If \code{!matrix.arg} then this is a \eqn{M \times M}{M x M} matrix in matrix-band format (a vector with at least \eqn{M} and at most \code{M*(M+1)/2} elements). } \item{matrix.arg}{ Logical. Of single length. } } \details{ Proposed by Zellner (1962), the basic seemingly unrelated regressions (SUR) model is a set of LMs (\eqn{M > 1} of them) tied together at the error term level. 
Each LM's model matrix may potentially have its own set of predictor
variables.

Zellner's efficient (ZEF) estimator (also known as
\emph{Zellner's two-stage Aitken estimator}) can be obtained by
setting \code{maxit = 1} (and possibly \code{divisor = "sqrt"} or
\code{divisor = "n-max"}).

The default value of \code{maxit} (in \code{\link{vglm.control}})
probably means the \emph{iterative GLS} (IGLS) estimator is computed
because IRLS will probably iterate to convergence. IGLS means, at each
iteration, the residuals are used to estimate the error
variance-covariance matrix, and then the matrix is used in the GLS.
The IGLS estimator is also known as
\emph{Zellner's iterative Aitken estimator}, or IZEF.

}
\value{
An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}).
The object is used by modelling functions such as \code{\link{vglm}}
and \code{\link{vgam}}.

}
\references{
Zellner, A. (1962)
An Efficient Method of Estimating Seemingly Unrelated Regressions and
Tests for Aggregation Bias.
\emph{J. Amer. Statist. Assoc.}, \bold{57}(298), 348--368.

Kmenta, J. and Gilbert, R. F. (1968)
Small Sample Properties of Alternative Estimators of Seemingly
Unrelated Regressions.
\emph{J. Amer. Statist. Assoc.}, \bold{63}(324), 1180--1200.

}
\author{ T. W. Yee. }
\section{Warning }{
The default convergence criterion may be a little loose. Try setting
\code{epsilon = 1e-11}, especially with \code{mle.normal = TRUE}.

}
\note{
The fitted object has slot \code{@extra$ncols_X_lm} which is an
\eqn{M}-vector with the number of parameters for each LM. Also,
\code{@misc$values.divisor} is the \eqn{M}-vector of \code{divisor}
values.

Constraint matrices are needed in order to specify which response
variables each term on the RHS of the formula is a regressor for. See
the \code{constraints} argument of \code{\link{vglm}} for more
information.

% This \pkg{VGAM} family function is currently experimental.

}
%% ~Make other sections like Warning with \section{Warning }{....} ~

\seealso{
\code{\link{normal1}},
\code{\link{gew}}.

}
\examples{
# Obtain some of the results of p.1199 of Kmenta and Gilbert (1968)
clist <- list("(Intercept)" = diag(2),
              "capital.g" = rbind(1, 0),
              "value.g"   = rbind(1, 0),
              "capital.w" = rbind(0, 1),
              "value.w"   = rbind(0, 1))
zef1 <- vglm(cbind(invest.g, invest.w) ~
             capital.g + value.g + capital.w + value.w,
             SUR(divisor = "sqrt"), maxit = 1,
             data = gew, trace = TRUE, constraints = clist)

round(coef(zef1, matrix = TRUE), dig = 4)  # ZEF
zef1@extra$ncols_X_lm
zef1@misc$divisor
zef1@misc$values.divisor
round(sqrt(diag(vcov(zef1))),    dig = 4)  # SEs

mle1 <- vglm(cbind(invest.g, invest.w) ~
             capital.g + value.g + capital.w + value.w,
             SUR(mle.normal = TRUE, divisor = "n-max"),
             epsilon = 1e-11,
             data = gew, trace = TRUE, constraints = clist)
round(coef(mle1, matrix = TRUE), dig = 4)  # MLE
round(sqrt(diag(vcov(mle1))),    dig = 4)  # SEs
}
% Add one or more standard keywords, see file 'KEYWORDS' in the
% R documentation directory.
\keyword{models}
\keyword{regression}

VGAM/man/Rcam.Rd0000644000176000001440000000401512136651105012760 0ustar ripleyusers\name{Rcim}
\alias{Rcim}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{
Mark the Baseline Row and Column of a Matrix
}
\description{
Rearrange the rows and columns of the input so that the first row and
first column are baseline. This function is for rank-zero row-column
interaction models (RCIMs; i.e., general main effects models).
} \usage{ Rcim(mat, rbaseline = 1, cbaseline = 1) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{mat}{ Matrix, of dimension \eqn{r} by \eqn{c}. It is best that it is labelled with row and column names. } \item{rbaseline, cbaseline}{ Numeric (row number of the matrix \code{mat}) or character (matching a row name of \code{mat}) that the user wants as the row baseline or reference level. Similarly \code{cbaseline} for the column. } } \details{ This is a data preprocessing function for \code{\link{rcim}}. For rank-zero row-column interaction models this function establishes the baseline (or reference) levels of the matrix response with respect to the row and columns---these become the new first row and column. } \value{ Matrix of the same dimension as the input, with \code{rbaseline} and \code{cbaseline} specifying the first rows and columns. The default is no change in \code{mat}. } \author{ Alfian F. Hadi and T. W. Yee. } \note{ This function is similar to \code{\link{moffset}}; see \code{\link{moffset}} for information about the differences. If numeric, the arguments \code{rbaseline} and \code{cbaseline} differ from arguments \code{roffset} and \code{coffset} in \code{\link{moffset}} by 1 (when elements of the matrix agree). } \seealso{ \code{\link{moffset}}, \code{\link{rcim}}, \code{\link{plotrcim0}}. } \examples{ (alcoff.e <- moffset(alcoff, roffset = "6", postfix = "*")) (aa <- Rcim(alcoff, rbaseline = "11", cbaseline = "Sun")) (bb <- moffset(alcoff, "11", "Sun", postfix = "*")) aa - bb # Notice the difference! } VGAM/man/Qvar.Rd0000644000176000001440000002123612136651105013013 0ustar ripleyusers\name{Qvar} \alias{Qvar} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Quasi-variances Preprocessing Function %% ~~function to do ... ~~ } \description{ Takes a \code{\link{vglm}} fit or a variance-covariance matrix, and preprocesses it for \code{\link{rcim}} and \code{\link{normal1}} so that quasi-variances can be computed. %% ~~ A concise (1-5 lines) description of what the function does. ~~ } \usage{ Qvar(object, factorname = NULL, which.eta = 1, coef.indices = NULL, labels = NULL, dispersion = NULL, reference.name = "(reference)", estimates = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{object}{ A \code{"\link[=vglmff-class]{vglm}"} object or a variance-covariance matrix, e.g., \code{vcov(vglm.object)}. The former is preferred since it contains all the information needed. If a matrix then \code{factorname} and/or \code{coef.indices} should be specified to identify the factor. %% ~~Describe \code{object} here~~ } \item{which.eta}{ A single integer from the set \code{1:M}. Specifies which linear predictor to use. Let the value of \code{which.eta} be called \eqn{j}. Then the factor should appear in that linear predictor, hence the \eqn{j}th row of the constraint matrix corresponding to the factor should have at least one nonzero value. Currently the \eqn{j}th row must have exactly one nonzero value because programming it for more than one nonzero value is difficult. } \item{factorname}{ Character. If the \code{\link{vglm}} object contains more than one factor as explanatory variable then this argument should be the name of the factor of interest. If \code{object} is a variance-covariance matrix then this argument should also be specified. %% ~~Describe \code{factor.name} here~~ } \item{labels}{ Character. Optional, for labelling the variance-covariance matrix. 
%% ~~Describe \code{level1.name} here~~
}
\item{dispersion}{
Numeric. Optional, passed into \code{vcov()} with the same argument
name.

%% ~~Describe \code{level1.name} here~~
}
\item{reference.name}{
Character. Label for the reference level.

%% ~~Describe \code{level1.name} here~~
}
\item{coef.indices}{
Optional numeric vector of length at least 3 specifying the indices of
the factor from the variance-covariance matrix.

}
\item{estimates}{
an optional vector of estimated coefficients (redundant if
\code{object} is a model).

}
}
\details{
Suppose a factor with \eqn{L} levels is an explanatory variable in a
regression model. By default, R treats the first level as baseline so
that its coefficient is set to zero. It estimates the other \eqn{L-1}
coefficients, and with their associated standard errors, this is the
conventional output. From the complete variance-covariance matrix one
can compute \eqn{L} quasi-variances based on all pairwise differences
of the coefficients. They are based on an approximation, and can be
treated as uncorrelated. In minimizing the relative (not absolute)
errors it is not hard to see that the estimation involves an RCIM
(\code{\link{rcim}}) with an exponential link function
(\code{\link{explink}}).

If \code{object} is a model, then at least one of \code{factorname} or
\code{coef.indices} must be non-\code{NULL}. The value of
\code{coef.indices}, if non-\code{NULL}, determines which rows and
columns of the model's variance-covariance matrix to use. If
\code{coef.indices} contains a zero, an extra row and column are
included at the indicated position, to represent the zero variances
and covariances associated with a reference level. If
\code{coef.indices} is \code{NULL}, then \code{factorname} should be
the name of a factor effect in the model, and is used in order to
extract the necessary variance-covariance estimates.

Quasi-variances were first implemented in R with \pkg{qvcalc}. This
implementation draws heavily from that.

%% ~~ If necessary, more details than the description above ~~
}
\value{
An \eqn{L} by \eqn{L} matrix whose \eqn{i}-\eqn{j} element is the
logarithm of the variance of the \eqn{i}th coefficient minus the
\eqn{j}th coefficient, for all values of \eqn{i} and \eqn{j}. The
diagonal elements are arbitrary and are set to zero.

The matrix has an attribute that corresponds to the prior weight
matrix; it is accessed by \code{\link{normal1}} and replaces the usual
\code{weights} argument of \code{\link{vglm}}. This weight matrix has
ones on the off-diagonals and some small positive number on the
diagonals.

}
\references{

Firth, D. (2003)
Overcoming the reference category problem in the presentation of
statistical models.
\emph{Sociological Methodology} \bold{33}, 1--18.

Firth, D. and de Menezes, R. X. (2004)
Quasi-variances.
\emph{Biometrika} \bold{91}, 65--80.

}
\author{
T. W. Yee, based heavily on \code{qvcalc()} in \pkg{qvcalc} written by
David Firth.

}
\note{
This is an adaptation of \code{qvcalc()} in \pkg{qvcalc}. It should
work for all \code{\link{vglm}} models with one linear predictor,
i.e., \eqn{M = 1}. For \eqn{M > 1} the factor should appear only in
one of the linear predictors.

It is important to set \code{maxit} to be larger than usual for
\code{\link{rcim}} since convergence is slow. Upon successful
convergence the \eqn{i}th row effect and the \eqn{i}th column effect
should be equal. A simple computation involving the fitted and
predicted values allows the quasi-variances to be extracted (see
example below).
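In code, that extraction is a one-liner; a minimal sketch (mirroring
Example 1 below, where \code{fit1} denotes the \code{\link{rcim}} fit):
\preformatted{
quasiVar <- exp(diag(fitted(fit1))) / 2  # Extract the quasi-variances
quasiSE  <- sqrt(quasiVar)               # Quasi-standard errors
}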
A function to plot \emph{comparison intervals} has not been written
here.

}
\section{Warning }{
Negative quasi-variances may occur (at most one of them), though they
are rare in practice. If so then numerical problems may occur. See
\code{qvcalc()} for more information.

}
\seealso{
\code{\link{rcim}},
\code{\link{vglm}},
\code{\link{normal1}},
\code{\link{explink}},
\code{qvcalc()} in \pkg{qvcalc},
\code{\link[MASS]{ships}}.

%% ~~objects to See Also as \code{\link{help}}, ~~~
}

\examples{
# Example 1
data("ships", package = "MASS")

Shipmodel <- vglm(incidents ~ type + year + period,
                  quasipoissonff, offset = log(service),
#                 trace = TRUE, model = TRUE,
                  data = ships, subset = (service > 0))

# Easiest form of input
fit1 <- rcim(Qvar(Shipmodel, "type"), normal1("explink"), maxit = 99)
(quasiVar <- exp(diag(fitted(fit1))) / 2)                # Version 1
(quasiVar <- diag(predict(fit1)[, c(TRUE, FALSE)]) / 2)  # Version 2
(quasiSE  <- sqrt(quasiVar))

# Another form of input
fit2 <- rcim(Qvar(Shipmodel, coef.ind = c(0,2:5), reference.name = "typeA"),
             normal1("explink"), maxit = 99)
\dontrun{ plotqvar(fit2, col = "green", lwd = 3, scol = "blue", slwd = 2, las = 1) }

# The variance-covariance matrix is another form of input (not recommended)
fit3 <- rcim(Qvar(cbind(0, rbind(0, vcov(Shipmodel)[2:5, 2:5])),
                  labels = c("typeA", "typeB", "typeC", "typeD", "typeE"),
                  estimates = c(typeA = 0, coef(Shipmodel)[2:5])),
             normal1("explink"), maxit = 99)
(QuasiVar <- exp(diag(fitted(fit3))) / 2)                # Version 1
(QuasiVar <- diag(predict(fit3)[, c(TRUE, FALSE)]) / 2)  # Version 2
(QuasiSE  <- sqrt(QuasiVar))
\dontrun{ plotqvar(fit3) }

# Example 2: a model with M > 1 linear predictors
\dontrun{ require(VGAMdata)
xs.nz.f <- subset(xs.nz, sex == "F")
xs.nz.f <- subset(xs.nz.f, !is.na(babies) & !is.na(age) & !is.na(ethnic))
xs.nz.f$babies <- as.numeric(as.character(xs.nz.f$babies))
xs.nz.f <- subset(xs.nz.f, babies >= 0)
xs.nz.f <- subset(xs.nz.f, as.numeric(as.character(ethnic)) <= 2)

clist <- list("bs(age, df = 4)" = rbind(1, 0),
              "bs(age, df = 3)" = rbind(0, 1),
              "ethnic"          = diag(2),
              "(Intercept)"     = diag(2))
fit1 <- vglm(babies ~ bs(age, df = 4) + bs(age, df = 3) + ethnic,
             zipoissonff(zero = NULL), xs.nz.f,
             constraints = clist, trace = TRUE)
Fit1 <- rcim(Qvar(fit1, "ethnic", which.eta = 1),
             normal1("explink", imethod = 1), maxit = 99, trace = TRUE)
Fit2 <- rcim(Qvar(fit1, "ethnic", which.eta = 2),
             normal1("explink", imethod = 1), maxit = 99, trace = TRUE)
}
\dontrun{ par(mfrow = c(1, 2))
plotqvar(Fit1, scol = "blue", pch = 16, main = expression(eta[1]),
         slwd = 1.5, las = 1, length.arrows = 0.07)
plotqvar(Fit2, scol = "blue", pch = 16, main = expression(eta[2]),
         slwd = 1.5, las = 1, length.arrows = 0.07)
}
}
% Add one or more standard keywords, see file 'KEYWORDS' in the
% R documentation directory.
\keyword{models}
\keyword{regression}

% \code{\link[qvcalc:qvcalc]{qvcalc}} in \pkg{qvcalc}
VGAM/man/Perom.Rd0000644000176000001440000000303112136651105013155 0ustar ripleyusers\name{Perom}
\alias{Perom}
\docType{data}
\title{ Captures of Peromyscus maniculatus

%% ~~ data name/kind ... ~~
}
\description{
Captures of \emph{Peromyscus maniculatus} collected at East Stuart
Gulch, Colorado, USA.

%% ~~ A concise (1-5 lines) description of the dataset. ~~
}
\usage{data(Perom)}
\format{
The format is:
chr "Perom"
}
\details{
The columns represent the sex (\code{m} or \code{f}), the ages
(\code{y}: young, \code{sa}: semi-adult, \code{a}: adult), the weights
in grams, and the capture histories of 38 individuals over 6 trapping
occasions (1: captured, 0: not captured).

The data set was collected by V. Reid and distributed with the
\pkg{CAPTURE} program of Otis et al. (1978).

%% ~~ If necessary, more details than the __description__ above ~~
}
%\source{
%% ~~ reference to a publication or URL from which the data were obtained ~~
%}
\references{

Huggins, R. M. (1991)
Some practical aspects of a conditional likelihood approach to capture
experiments.
\emph{Biometrics}, \bold{47}, 725--732.

Otis, D. L. et al. (1978)
Statistical inference from capture data on closed animal populations,
\emph{Wildlife Monographs}, \bold{62}, 3--135.

%% ~~ possibly secondary sources and usages ~~
}
\seealso{
\code{\link[VGAM:posbernoulli.t]{posbernoulli.t}}.
}
\examples{
head(Perom)

\dontrun{
fit1 <- vglm(cbind(y1, y2, y3, y4, y5, y6) ~ sex + age,
             posbernoulli.t(parallel.t = TRUE), data = Perom,
             trace = TRUE)
coef(fit1)
coef(fit1, matrix = TRUE)
}}
\keyword{datasets}
VGAM/man/Pareto.Rd0000644000176000001440000000405312136651105013332 0ustar ripleyusers\name{Pareto}
\alias{Pareto}
\alias{dpareto}
\alias{ppareto}
\alias{qpareto}
\alias{rpareto}
\title{The Pareto Distribution}
\description{
Density, distribution function, quantile function and random
generation for the Pareto(I) distribution with parameters
\code{location} and \code{shape}.
}
\usage{
dpareto(x, location, shape, log = FALSE)
ppareto(q, location, shape)
qpareto(p, location, shape)
rpareto(n, location, shape)
}
\arguments{
\item{x, q}{vector of quantiles.}
\item{p}{vector of probabilities.}
\item{n}{number of observations.
Must be a single positive integer. }
\item{location, shape}{the \eqn{\alpha}{alpha} and \eqn{k} parameters.}
\item{log}{
Logical.
If \code{log = TRUE} then the logarithm of the density is returned.
}
}
\value{
\code{dpareto} gives the density,
\code{ppareto} gives the distribution function,
\code{qpareto} gives the quantile function, and
\code{rpareto} generates random deviates.
}
\references{
Evans, M., Hastings, N. and Peacock, B. (2000)
\emph{Statistical Distributions},
New York: Wiley-Interscience, Third edition.
}
\author{ T. W. Yee }
\details{
See \code{\link{pareto1}}, the \pkg{VGAM} family function for
estimating the parameter \eqn{k} by maximum likelihood estimation, for
the formula of the probability density function and the range
restrictions imposed on the parameters.
}
%%\note{
%% The Pareto distribution is
%%}
\seealso{
\code{\link{pareto1}},
\code{\link{ParetoIV}}.
}
\examples{
alpha <- 3; k <- exp(1); x <- seq(2.8, 8, len = 300)
\dontrun{
plot(x, dpareto(x, location = alpha, shape = k), type = "l",
     main = "Pareto density split into 10 equal areas")
abline(h = 0, col = "blue", lty = 2)
qq <- qpareto(seq(0.1, 0.9, by = 0.1), location = alpha, shape = k)
lines(qq, dpareto(qq, loc = alpha, shape = k),
      col = "purple", lty = 3, type = "h")
}
pp <- seq(0.1, 0.9, by = 0.1)
qq <- qpareto(pp, location = alpha, shape = k)
ppareto(qq, location = alpha, shape = k)
qpareto(ppareto(qq, loc = alpha, shape = k),
        loc = alpha, shape = k) - qq  # Should be 0
}
\keyword{distribution}
VGAM/man/Opt.Rd0000644000176000001440000000472112136651105012644 0ustar ripleyusers\name{Opt}
\alias{Opt}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Optima }
\description{
Generic function for the \emph{optima} (or optimums) of a model.
}
\usage{
Opt(object, ...)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{object}{
An object for which the computation or extraction of an optimum (or
optima) is meaningful.

}
\item{\dots}{
Other arguments fed into the specific methods function of the model.
Sometimes they are fed into the methods function for
\code{\link{Coef}}.

}
}
\details{
Different models can define an optimum in different ways. Many models
have no such notion or definition.

Optima occur in quadratic and additive ordination, e.g., CQO or UQO or
CAO. For these models the optimum is the value of the latent variable
where the maximum occurs, i.e., where the fitted value achieves its
highest value. For quadratic ordination models there is a formula for
the optimum but for additive ordination models the optimum must be
searched for numerically. If it occurs on the boundary, then the
optimum is undefined. At an optimum, the fitted value of the response
is called the \emph{maximum}.

}
\value{
The value returned depends specifically on the methods function
invoked.

}
\references{

Yee, T. W. (2004)
A new technique for maximum-likelihood
canonical Gaussian ordination.
\emph{Ecological Monographs},
\bold{74}, 685--701.

Yee, T. W. (2006)
Constrained additive ordination.
\emph{Ecology}, \bold{87}, 203--213.

}
\author{ Thomas W. Yee }

\note{
In ordination, the optimum of a species is sometimes called the
\emph{species score}.

}
%\section{Warning }{
%}
\seealso{
\code{Opt.qrrvglm},
\code{\link{Max}},
\code{\link{Tol}}.

}
\examples{
set.seed(111)  # This leads to the global solution
hspider[,1:6] <- scale(hspider[,1:6])  # Standardized environmental vars
# vvv p1 = cqo(cbind(Alopacce, Alopcune, Alopfabr, Arctlute, Arctperi,
# vvv           Auloalbi, Pardlugu, Pardmont, Pardnigr, Pardpull,
# vvv           Trocterr, Zoraspin) ~
# vvv       WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux,
# vvv       Bestof = 2,
# vvv       fam = quasipoissonff, data = hspider, Crow1positive=FALSE)
# vvv Opt(p1)

\dontrun{
index <- 1:ncol(depvar(p1))
persp(p1, col = index, las = 1, lwd = 2,
      main = "Vertical lines at the optima")
abline(v = Opt(p1), lty = 2, col = index)
}
}
\keyword{models}
\keyword{regression}
VGAM/man/Max.Rd0000644000176000001440000000427712136651105012635 0ustar ripleyusers\name{Max}
\alias{Max}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Maxima }
\description{
Generic function for the \emph{maxima} (maximums) of a model.
}
\usage{
Max(object, ...)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{object}{
An object for which the computation or extraction of a maximum (or
maxima) is meaningful.

}
\item{\dots}{
Other arguments fed into the specific methods function of the model.
Sometimes they are fed into the methods function for
\code{\link{Coef}}.

}
}
\details{
Different models can define a maximum in different ways. Many models
have no such notion or definition.

Maxima occur in quadratic and additive ordination, e.g., CQO or UQO or
CAO. For these models the maximum is the fitted value at the optimum.
For quadratic ordination models there is a formula for the optimum but
for additive ordination models the optimum must be searched for
numerically. If it occurs on the boundary, then the optimum is
undefined. For a valid optimum, the fitted value at the optimum is the
maximum.

}
\value{
The value returned depends specifically on the methods function
invoked.

}
\references{

Yee, T. W.
(2004)
A new technique for maximum-likelihood
canonical Gaussian ordination.
\emph{Ecological Monographs},
\bold{74}, 685--701.

Yee, T. W. (2006)
Constrained additive ordination.
\emph{Ecology}, \bold{87}, 203--213.

}
\author{ Thomas W. Yee }

%\note{
%}
%\section{Warning }{
%}
\seealso{
\code{Max.qrrvglm},
\code{\link{Tol}},
\code{\link{Opt}}.
}

\examples{
\dontrun{
set.seed(111)  # This leads to the global solution
hspider[,1:6] <- scale(hspider[,1:6])  # Standardized environmental vars
p1 <- cqo(cbind(Alopacce, Alopcune, Alopfabr, Arctlute, Arctperi,
                Auloalbi, Pardlugu, Pardmont, Pardnigr, Pardpull,
                Trocterr, Zoraspin) ~
          WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux,
          quasipoissonff, Bestof = 2, data = hspider,
          Crow1positive = FALSE)

Max(p1)

index <- 1:ncol(depvar(p1))
persp(p1, col = index, las = 1, llwd = 2)
abline(h = Max(p1), lty = 2, col = index)
}
}
\keyword{models}
\keyword{regression}
VGAM/man/MNSs.Rd0000644000176000001440000000375012136651105012723 0ustar ripleyusers\name{MNSs}
\alias{MNSs}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ The MNSs Blood Group System }
\description{
Estimates the three independent parameters of the MNSs blood group
system.
}
\usage{
MNSs(link = "logit", imS = NULL, ims = NULL, inS = NULL)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{link}{
Link function applied to the three parameters.
See \code{\link{Links}} for more choices.

}
\item{imS, ims, inS}{
Optional initial value for \code{mS}, \code{ms} and \code{nS}
respectively. A \code{NULL} means they are computed internally.

}
}
\details{
There are three independent parameters: \code{m_S}, \code{m_s},
\code{n_S}, say, so that \code{n_s = 1 - m_S - m_s - n_S}. We let the
eta vector (transposed) be \code{(g(m_S), g(m_s), g(n_S))} where
\code{g} is the link function.

}
\value{
An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}).
The object is used by modelling functions such as \code{\link{vglm}}
and \code{\link{vgam}}.

}
\references{
Elandt-Johnson, R. C. (1971)
\emph{Probability Models and Statistical Methods in Genetics},
New York: Wiley.

}
\author{ T. W. Yee }
\note{
The input can be a 6-column matrix of counts, where the columns are
MS, Ms, MNS, MNs, NS, Ns (in order). Alternatively, the input can be a
6-column matrix of proportions (so each row adds to 1) and the
\code{weights} argument is used to specify the total number of counts
for each row.

}
\seealso{
\code{\link{AA.Aa.aa}},
\code{\link{AB.Ab.aB.ab}},
\code{\link{AB.Ab.aB.ab2}},
\code{\link{ABO}},
\code{\link{G1G2G3}}.

}
\examples{
# Only the order of the columns matters:
y <- cbind(MS = 295, Ms = 107, MNS = 379, MNs = 322, NS = 102, Ns = 214)
fit <- vglm(y ~ 1, MNSs("logit", .25, .28, .08), trace = TRUE)
fit <- vglm(y ~ 1, MNSs(link = logit), trace = TRUE, crit = "coef")
Coef(fit)
rbind(y, sum(y)*fitted(fit))
sqrt(diag(vcov(fit)))
}
\keyword{models}
\keyword{regression}
VGAM/man/Links.Rd0000644000176000001440000002142312136651105013160 0ustar ripleyusers\name{Links}
\alias{Links}
\alias{TypicalVGAMlinkFunction}
\title{Link functions for VGLM/VGAM/etc. families}
\description{
The \pkg{VGAM} package provides a number of (parameter) link functions
which are described in general here. Collectively, they offer the user
considerable flexibility for modelling data.

}
\usage{
TypicalVGAMlinkFunction(theta, someParameter = 0,
    bvalue = NULL, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE)
}
\arguments{
\item{theta}{
Numeric or character.
Actually this can be \eqn{\theta}{theta} (default) or \eqn{\eta}{eta},
depending on the other arguments. If \code{theta} is character then
\code{inverse} and \code{deriv} are ignored. The name \code{theta}
should always be the name of the first argument.

}
\item{someParameter}{
Some parameter, e.g., an offset.

}
\item{bvalue}{
Boundary value, positive if given. If \code{0 < theta} then values of
\code{theta} which are less than or equal to 0 can be replaced by
\code{bvalue} before computing the link function value. Values of
\code{theta} which are greater than or equal to 1 can be replaced by 1
minus \code{bvalue} before computing the link function value. The
value \code{bvalue = .Machine$double.eps} is sometimes a reasonable
value, or something slightly higher.

}
% \item{earg}{
% List.
% Extra argument allowing for additional information, specific to the
% link function. For example, for \code{\link{logoff}}, this will
% contain the offset value. The argument \code{earg} is
% always a list with \emph{named} components. See each specific link
% function to find the component names for the list.
%
%
% Almost all \pkg{VGAM} family functions with a single link
% function have an argument (often called \code{earg}) which will
% allow parameters to be inputted for that link function.
% For \pkg{VGAM} family functions with more than one link
% function there usually will be an \code{earg}-type argument for
% each link. For example, if there are two links called
% \code{lshape} and \code{lscale} then
% the \code{earg}-type arguments for these might be called
% \code{eshape} and \code{escale}, say.
%
% }
\item{inverse}{
Logical. If \code{TRUE} the inverse link value \eqn{\theta}{theta} is
returned, hence the argument \code{theta} is really \eqn{\eta}{eta}.

}
\item{deriv}{
Integer. Either 0, 1, or 2 specifying the order of the derivative.

}
\item{short, tag}{
Logical. These are used for labelling the \code{blurb} slot of a
\code{\link{vglmff-class}} object. These arguments are used only if
\code{theta} is character, and give the formula for the link in
character form. If \code{tag = TRUE} then the result is preceded by a
little more information.

}
}
\value{
Returns one of:
the link function value or its first or second derivative,
the inverse link or its first or second derivative,
or a character description of the link.

Here are the general details.
If \code{inverse = FALSE} and \code{deriv = 0} (default) then the
ordinary link function \eqn{\eta = g(\theta)}{eta = g(theta)} is
returned.

If \code{inverse = FALSE} and \code{deriv = 1} then it is
\eqn{d\theta / d\eta}{d theta / d eta}
\emph{as a function of} \eqn{\theta}{theta}.
If \code{inverse = FALSE} and \code{deriv = 2} then it is
\eqn{d^2\theta / d\eta^2}{d^2 theta / d eta^2}
\emph{as a function of} \eqn{\theta}{theta}.

If \code{inverse = TRUE} and \code{deriv = 0} then the inverse link
function is returned, hence \code{theta} is really \eqn{\eta}{eta}. If
\code{inverse = TRUE} and \code{deriv} is positive then the
\emph{reciprocal} of the same link function with
\code{(theta = theta, someParameter, inverse = TRUE, deriv = deriv)}
is returned.

}
\details{
Almost all \pkg{VGAM} link functions have something similar to the
argument list as given above. In this help file we have
\eqn{\eta = g(\theta)}{eta = g(theta)} where \eqn{g} is the link
function, \eqn{\theta}{theta} is the parameter and \eqn{\eta}{eta} is
the linear/additive predictor.

% The arguments \code{short} and \code{tag} are used only if
% \code{theta} is character.
% That is, there is a matching \code{earg} for each \code{link} argument. The following is a brief enumeration of all \pkg{VGAM} link functions. For parameters lying between 0 and 1 (e.g., probabilities): \code{\link{logit}}, \code{\link{probit}}, \code{\link{cloglog}}, \code{\link{cauchit}}, \code{\link{fsqrt}}, \code{\link{logc}}, \code{\link{golf}}, \code{\link{polf}}, \code{\link{nbolf}}. For positive parameters (i.e., greater than 0): \code{\link{loge}}, \code{\link{nloge}}, \code{\link{powl}}. For parameters greater than 1: \code{\link{loglog}}. For parameters between \eqn{-1} and \eqn{1}: \code{\link{fisherz}}, \code{\link{rhobit}}. For parameters between \eqn{A} and \eqn{B}: \code{\link{elogit}}, \code{\link{logoff}} (\eqn{B = \infty}{B = Inf}). For unrestricted parameters (i.e., any value): \code{\link{identity}}, \code{\link{nidentity}}, \code{\link{reciprocal}}, \code{\link{nreciprocal}}. % Other links: } \references{ McCullagh, P. and Nelder, J. A. (1989) \emph{Generalized Linear Models}, 2nd ed. London: Chapman & Hall. } \seealso{ \code{\link{TypicalVGAMfamilyFunction}}, \code{\link{vglm}}, \code{\link{vgam}}, \code{\link{rrvglm}}. \code{\link{cqo}}, \code{\link{cao}}, \code{\link{uqo}}. } \author{T. W. Yee} \note{ \pkg{VGAM} link functions are generally not compatible with other functions outside the package. In particular, they won't work with \code{\link[stats]{glm}} or any other package for fitting GAMs. From October 2006 onwards, all \pkg{VGAM} family functions will only contain one default value for each link argument rather than giving a vector of choices. For example, rather than \code{binomialff(link = c("logit", "probit", "cloglog", "cauchit", "identity"), ...)} it is now \code{binomialff(link = "logit", ...)} No checking will be done to see if the user's choice is reasonable. This means that the user can write his/her own \pkg{VGAM} link function and use it within any \pkg{VGAM} family function. Altogether this provides greater flexibility. The downside is that the user must specify the \emph{full} name of the link function, by either assigning the link argument the full name as a character string, or just the name itself. See the examples below. From August 2012 onwards, a major change in link functions occurred. Argument \code{esigma} (and the like such as \code{earg}) used to be in \pkg{VGAM} prior to version 0.9-0 (released during the 2nd half of 2012). The major change is that arguments such as \code{offset} that used to be passed in via those arguments can done directly through the link function. For example, \code{gev(lshape = "logoff", eshape = list(offset = 0.5))} is replaced by \code{gev(lshape = logoff(offset = 0.5))}. The \code{@misc} slot no longer has \code{link} and \code{earg} components, but two other components replace these. Functions such as \code{dtheta.deta()}, \code{d2theta.deta2()}, \code{eta2theta()}, \code{theta2eta()} are modified. 
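As a side-by-side sketch of the change described above (using the
\code{\link{gev}} call quoted in the previous paragraph):
\preformatted{
# Prior to version 0.9-0 (no longer works):
gev(lshape = "logoff", eshape = list(offset = 0.5))
# From version 0.9-0 onwards, the offset is fed directly into the link:
gev(lshape = logoff(offset = 0.5))
}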
}
\examples{
logit("a")
logit("a", short = FALSE)
logit("a", short = FALSE, tag = TRUE)

logoff(1:5, offset = 1)  # Same as log(1:5 + 1)
powl(1:5, power = 2)     # Same as (1:5)^2
\dontrun{ # This is old and no longer works:
logoff(1:5, earg = list(offset = 1))
powl(1:5, earg = list(power = 2))
}

fit1 <- vgam(agaaus ~ altitude, binomialff(link = "cloglog"), hunua)  # ok
fit2 <- vgam(agaaus ~ altitude, binomialff(link = cloglog), hunua)    # ok

\dontrun{
# This no longer works since "clog" is not a valid VGAM link function:
fit3 <- vgam(agaaus ~ altitude, binomialff(link = "clog"), hunua)  # not ok

# No matter what the link, the estimated var-cov matrix is the same
y <- rbeta(n = 1000, shape1 = exp(0), shape2 = exp(1))
fit1 <- vglm(y ~ 1, beta.ab(lshape1 = "identity", lshape2 = "identity"),
             trace = TRUE, crit = "coef")
fit2 <- vglm(y ~ 1, beta.ab(lshape1 = logoff(offset = 1.1),
                            lshape2 = logoff(offset = 1.1)),
             trace = TRUE, crit = "coef")
vcov(fit1, untransform = TRUE)
vcov(fit1, untransform = TRUE) -
vcov(fit2, untransform = TRUE)  # Should be all 0s
\dontrun{ # This is old:
fit1@misc$earg  # Some 'special' parameters
fit2@misc$earg  # Some 'special' parameters are here
}

par(mfrow = c(2, 2))
p <- seq(0.01, 0.99, len = 200)
x <- seq(-4, 4, len = 200)
plot(p, logit(p), type = "l", col = "blue")
plot(x, logit(x, inverse = TRUE), type = "l", col = "blue")
plot(p, logit(p, deriv = 1), type = "l", col = "blue")  # reciprocal!
plot(p, logit(p, deriv = 2), type = "l", col = "blue")  # reciprocal!
}
}
\keyword{models}
VGAM/man/Inv.gaussian.Rd0000644000176000001440000000413512136651105014446 0ustar ripleyusers\name{Inv.gaussian}
\alias{Inv.gaussian}
\alias{dinv.gaussian}
\alias{pinv.gaussian}
\alias{rinv.gaussian}
\title{The Inverse Gaussian Distribution}
\description{
Density, distribution function and random generation for the inverse
Gaussian distribution.
}
\usage{
dinv.gaussian(x, mu, lambda, log = FALSE)
pinv.gaussian(q, mu, lambda)
rinv.gaussian(n, mu, lambda)
}
\arguments{
\item{x, q}{vector of quantiles.}
%%\item{p}{vector of probabilities.}
\item{n}{number of observations.
If \code{length(n) > 1} then the length is taken to be the number
required. }
\item{mu}{the mean parameter.}
\item{lambda}{the \eqn{\lambda}{lambda} parameter.}
\item{log}{
Logical.
If \code{log = TRUE} then the logarithm of the density is returned.
}
}
\value{
\code{dinv.gaussian} gives the density,
\code{pinv.gaussian} gives the distribution function, and
% \code{qinv.gaussian} gives the quantile function, and
\code{rinv.gaussian} generates random deviates.
}
\references{
Johnson, N. L. and Kotz, S. and Balakrishnan, N. (1994)
\emph{Continuous Univariate Distributions},
2nd edition, Volume 1, New York: Wiley.

Taraldsen, G. and Lindqvist, B. H. (2005)
The multiple roots simulation algorithm, the inverse Gaussian
distribution, and the sufficient conditional Monte Carlo method.
\emph{Preprint Statistics No. 4/2005},
Norwegian University of Science and Technology, Trondheim, Norway.
}
\author{ T. W. Yee }
\details{
See \code{\link{inv.gaussianff}}, the \pkg{VGAM} family function for
estimating both parameters by maximum likelihood estimation, for the
formula of the probability density function.
}
\note{
Currently \code{qinv.gaussian} is unavailable.
}
\seealso{
\code{\link{inv.gaussianff}},
\code{\link{wald}}.
} \examples{ \dontrun{ x <- seq(-0.05, 4, len = 300) plot(x, dinv.gaussian(x, mu = 1, lambda = 1), type = "l", col = "blue",las = 1, main = "blue is density, orange is cumulative distribution function") abline(h = 0, col = "gray", lty = 2) lines(x, pinv.gaussian(x, mu = 1, lambda = 1), type = "l", col = "orange") } } \keyword{distribution} VGAM/man/Huggins89.t1.Rd0000644000176000001440000000575612136651105014223 0ustar ripleyusers\name{Huggins89.t1} \alias{Huggins89.t1} \docType{data} \title{ Table 1 of Huggins (1989) } \description{ Simulated capture data set for the linear logistic model depending on an occasion covariate and an individual covariate for 10 trapping occasions and 20 individuals. %% ~~ A concise (1-5 lines) description of the dataset. ~~ } \usage{data(Huggins89.t1)} \format{ The format is a data frame. %chr "Huggins89.t1" } \details{ Table 1 of Huggins (1989) gives this toy data set. Note that variables \code{z1},\ldots,\code{z10} are occasion-specific variables. They correspond to the response variables \code{y1},\ldots,\code{y10} which have values 1 for capture and 0 for not captured. %% ~~ If necessary, more details than the __description__ above ~~ } %\source{ %% ~~ reference to a publication or URL from which the data were obtained ~~ %} \references{ Huggins, R. M. (1989) On the statistical analysis of capture experiments. \emph{Biometrika}, \bold{76}, 133--140. %% ~~ possibly secondary sources and usages ~~ } \examples{ \dontrun{ small.Huggins89.t1 <- transform(Huggins89.t1, Zedd = z1, Z2 = z2, Z3 = z3) small.Huggins89.t1 <- subset(small.Huggins89.t1, y1 + y2 + y3 > 0) # fit1 is the bottom equation on p.133, but this is only for the 1st 3 responses. # Currently posbernoulli.tb() cannot handle more than 3 Bernoulli variates. # The fit is not very good. fit1 <- vglm(cbind(y1, y2, y3) ~ x2 + Zedd, xij = list(Zedd ~ z1 + z2 + z3 + Z2 + Z3 - 1), posbernoulli.tb(parallel.t = TRUE), maxit = 155, data = small.Huggins89.t1, trace = TRUE, form2 = ~ x2 + Zedd + z1 + z2 + z3 + Z2 + Z3) coef(fit1) coef(fit1, matrix = TRUE) # M_t model constraints(fit1) summary(fit1) fit1@extra$N.hat # Estimate of the population size N fit1@extra$SE.N.hat # Its standard error fit.t <- vglm(cbind(y1, y2, y3, y4, y5, y6, y7, y8, y9, y10) ~ x2, posbernoulli.t, data = Huggins89.t1, trace = TRUE) coef(fit.t) coef(fit.t, matrix = TRUE) # M_t model summary(fit.t) fit.t@extra$N.hat # Estimate of the population size N fit.t@extra$SE.N.hat # Its standard error fit.b <- vglm(cbind(y1, y2, y3, y4, y5, y6, y7, y8, y9, y10) ~ x2, posbernoulli.b, data = Huggins89.t1, trace = TRUE) coef(fit.b) coef(fit.b, matrix = TRUE) # M_b model summary(fit.b) fit.b@extra$N.hat fit.b@extra$SE.N.hat fit.0 <- vglm(cbind(y1, y2, y3, y4, y5, y6, y7, y8, y9, y10) ~ x2, posbernoulli.b(parallel.b = TRUE), data = Huggins89.t1, trace = TRUE) coef(fit.0, matrix = TRUE) # M_0 model (version 1) coef(fit.0) summary(fit.0) fit.0@extra$N.hat fit.0@extra$SE.N.hat Fit.0 <- vglm(cbind(y1, y2, y3, y4, y5, y6, y7, y8, y9, y10) ~ x2, posbernoulli.t(parallel.t = TRUE), data = Huggins89.t1, trace = TRUE) coef(Fit.0) coef(Fit.0, matrix = TRUE) # M_0 model (version 2) summary(Fit.0) Fit.0@extra$N.hat Fit.0@extra$SE.N.hat } } \keyword{datasets} % data(Huggins89.t1) %## maybe str(Huggins89.t1) ; plot(Huggins89.t1) ... VGAM/man/G1G2G3.Rd0000644000176000001440000000404712136651105012735 0ustar ripleyusers\name{G1G2G3} \alias{G1G2G3} %- Also NEED an '\alias' for EACH other topic documented here. 
\title{ The G1G2G3 Blood Group System }
\description{
Estimates the three independent parameters of the G1G2G3 blood group system.
}
\usage{
G1G2G3(link = "logit", ip1 = NULL, ip2 = NULL, iF = NULL)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{link}{
Link function applied to \code{p1}, \code{p2} and \code{f}.
See \code{\link{Links}} for more choices.
}
\item{ip1, ip2, iF}{
Optional initial values for \code{p1}, \code{p2} and \code{f}.
}
}
\details{
The parameters \code{p1} and \code{p2} are probabilities, so that
\code{p3=1-p1-p2} is the third probability.
The parameter \code{f} is the third independent parameter.
}
\value{
An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}).
The object is used by modelling functions such as \code{\link{vglm}}
and \code{\link{vgam}}.
}
\references{
Lange, K. (2002)
\emph{Mathematical and Statistical Methods for Genetic Analysis},
2nd ed. New York: Springer-Verlag.
}
\author{ T. W. Yee }
\note{
The input can be a 6-column matrix of counts, with columns corresponding
to \code{G1G1}, \code{G1G2}, \code{G1G3}, \code{G2G2}, \code{G2G3},
\code{G3G3} (in order).
Alternatively, the input can be a 6-column matrix of proportions (so each
row adds to 1) and the \code{weights} argument is used to specify the
total number of counts for each row.
}
\seealso{
\code{\link{AA.Aa.aa}},
\code{\link{AB.Ab.aB.ab}},
\code{\link{AB.Ab.aB.ab2}},
\code{\link{ABO}},
\code{\link{MNSs}}.
}
\examples{
ymat <- cbind(108, 196, 429, 143, 513, 559)
fit <- vglm(ymat ~ 1, G1G2G3(link = probit), trace = TRUE, crit = "coef")
fit <- vglm(ymat ~ 1, G1G2G3(link = logit, ip1 = 0.3, ip2 = 0.3, iF = 0.02),
            trace = TRUE, crit = "coef")
fit <- vglm(ymat ~ 1, G1G2G3(link = "identity"), trace = TRUE)
Coef(fit)  # Estimated p1, p2 and f
rbind(ymat, sum(ymat)*fitted(fit))
sqrt(diag(vcov(fit)))
}
\keyword{models}
\keyword{regression}
VGAM/man/DeLury.Rd0000644000176000001440000001371212136651105013306 0ustar ripleyusers\name{DeLury}
\alias{DeLury}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ DeLury's Method for Population Size Estimation }
\description{
Computes DeLury's method or Leslie's method for estimating a biological
population size.
}
\usage{
DeLury(catch, effort, type = c("DeLury", "Leslie"), ricker = FALSE)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{catch, effort}{
Catch and effort. These should be numeric vectors of equal length.
}
\item{type}{
Character specifying which of the DeLury or Leslie models is to be fitted.
The default is the first value.
}
\item{ricker}{
Logical. If \code{TRUE} then the Ricker (1975) modification is computed.
}
}
\details{
This simple function implements the methods of DeLury (1947).
These are called the DeLury and Leslie models.
Note that there are many assumptions. These include:
(i) Catch and effort records are available for a series of consecutive
time intervals. The catch for a given time interval, specified by
\eqn{t}, is \eqn{c(t)}, and the corresponding effort by \eqn{e(t)}.
The \emph{catch per unit effort} (CPUE) for the time interval \eqn{t}
is \eqn{C(t) = c(t)/e(t)}.
Let \eqn{d(t)} represent the proportion of the population captured
during the time interval \eqn{t}. Then \eqn{d(t) = k(t) e(t)} so that
\eqn{k(t)} is the proportion of the population captured during interval
\eqn{t} by one unit of effort. Then \eqn{k(t)} is called the
\emph{catchability}, and the \emph{intensity} of effort is \eqn{e(t)}.
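% A tiny numerical sketch of these definitions; the catch and effort
% values below are hypothetical:
% c.t <- c(120, 90, 70)  # catches c(t) for t = 1, 2, 3
% e.t <- c(10, 10, 10)   # efforts e(t)
% C.t <- c.t / e.t       # CPUE, C(t) = c(t)/e(t)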
Let \eqn{E(t)} and \eqn{K(t)} be the total effort and total catch up to
interval \eqn{t}, and \eqn{N(t)} be the number of individuals in the
population at time \eqn{t}.
It is a good idea to plot \eqn{\log(C(t))} against \eqn{E(t)} for
\code{type = "DeLury"} and \eqn{C(t)} versus \eqn{K(t)} for
\code{type = "Leslie"}.

The other assumptions are as follows.
%
(ii) The population is closed---the population must be closed to sources
of animals such as recruitment and immigration and losses of animals due
to natural mortality and emigration.
%
(iii) Catchability is constant over the period of removals.
%
(iv) The units of effort are independent, i.e., the individual units of
the method of capture (i.e., nets, traps, etc.) do not compete with each
other.
%
(v) All fish are equally vulnerable to the method of capture---sources of
error may include gear saturation and trap-happy or trap-shy individuals.
%
(vi) Enough fish must be removed to substantially reduce the CPUE.
%
(vii) The catches may remove less than 2\% of the population.
%
Also, the usual assumptions of simple regression apply, such as
%
(viii) random sampling,
%
(ix) the independent variable(s) are measured without error---both
catches and effort should be known, not estimated,
%
(x) a line describes the data,
%
(xi) the errors are independent and normally distributed.

}
\value{
A list with the following components.
\item{catch, effort}{
Catch and effort. Same as the original vectors.
These correspond to \eqn{c(t)} and \eqn{e(t)} respectively.
}
\item{type, ricker}{Same as input. }
\item{N0}{an estimate of the population size at time 0.
Only valid if the assumptions are satisfied. }
\item{CPUE}{Catch Per Unit Effort \eqn{=C(t)}. }
\item{K, E}{\eqn{K(t)}, \eqn{E(t)}.
Only one is computed depending on \code{type}. }
\item{lmfit}{
the \code{\link[stats:lm]{lm}} object from the fit of \code{log(CPUE)}
on \code{E} (when \code{type = "DeLury"}), or of \code{CPUE} on \code{K}
(when \code{type = "Leslie"}).
Note that the \code{x} component of the object is the model matrix.
}
}
\references{
DeLury, D. B. (1947)
On the estimation of biological populations.
\emph{Biometrics}, \bold{3}, 145--167.

Ricker, W. E. (1975)
Computation and interpretation of biological statistics of fish
populations.
\emph{Bull. Fish. Res. Bd. Can.}, \bold{191}, 382--

Yee, T. W. (2010)
VGLMs and VGAMs: an overview for applications in fisheries research.
\emph{Fisheries Research}, \bold{101}, 116--126.
}
\author{ T. W. Yee. }
\note{
The data in the example below come from DeLury (1947), and some plots of
his are reproduced. Note that he used log to base 10 whereas natural logs
are used here. His plots had some observations obscured by the y-axis!

The DeLury method is not applicable to the data frame
\code{\link{wffc.nc}} since the 2008 World Fly Fishing Competition was
strictly catch-and-release.
}
\seealso{
\code{\link{wffc.nc}}.
} \examples{ pounds <- c( 147, 2796, 6888, 7723, 5330, 8839, 6324, 3569, 8120, 8084, 8252, 8411, 6757, 1152, 1500, 11945, 6995, 5851, 3221, 6345, 3035, 6271, 5567, 3017, 4559, 4721, 3613, 473, 928, 2784, 2375, 2640, 3569) traps <- c( 200, 3780, 7174, 8850, 5793, 9504, 6655, 3685, 8202, 8585, 9105, 9069, 7920, 1215, 1471, 11597, 8470, 7770, 3430, 7970, 4740, 8144, 7965, 5198, 7115, 8585, 6935, 1060, 2070, 5725, 5235, 5480, 8300) table1 <- DeLury(pounds/1000, traps/1000) \dontrun{ with(table1, plot(1+log(CPUE) ~ E, las = 1, pch = 19, main = "DeLury method", xlab = "E(t)", ylab = "1 + log(C(t))", col = "blue")) } omitIndices <- -(1:16) table1b <- DeLury(pounds[omitIndices]/1000, traps[omitIndices]/1000) \dontrun{ with(table1b, plot(1+log(CPUE) ~ E, las = 1, pch = 19, main = "DeLury method", xlab = "E(t)", ylab = "1 + log(C(t))", col = "blue")) mylmfit <- with(table1b, lmfit) lines(mylmfit$x[, 2], 1 + predict.lm(mylmfit), col = "red", lty = "dashed") } omitIndices <- -(1:16) table2 <- DeLury(pounds[omitIndices]/1000, traps[omitIndices]/1000, type = "L") \dontrun{ with(table2, plot(CPUE ~ K, las = 1, pch = 19, main = "Leslie method; Fig. III", xlab = "K(t)", ylab = "C(t)", col = "blue")) mylmfit <- with(table2, lmfit) abline(a = coef(mylmfit)[1], b = coef(mylmfit)[2], col = "orange", lty = "dashed") } } % Add one or more standard keywords, see file 'KEYWORDS' in the % R documentation directory. \keyword{ models } VGAM/man/CommonVGAMffArguments.Rd0000644000176000001440000003472212136651105016213 0ustar ripleyusers\name{CommonVGAMffArguments} \alias{CommonVGAMffArguments} \alias{TypicalVGAMfamilyFunction} \title{Common VGAM family function Arguments } \description{ Here is a description of some common and typical arguments found in many \pkg{VGAM} family functions, e.g., \code{lsigma}, \code{isigma}, \code{gsigma}, \code{nsimEI}, \code{parallel} and \code{zero}. } \usage{ TypicalVGAMfamilyFunction(lsigma = "loge", isigma = NULL, gsigma = exp(-5:5), parallel = TRUE, apply.parint = FALSE, shrinkage.init = 0.95, nointercept = NULL, imethod = 1, probs.x = c(0.15, 0.85), probs.y = c(0.25, 0.50, 0.75), mv = FALSE, earg.link = FALSE, whitespace = FALSE, bred = FALSE, oim = FALSE, nsimEIM = 100, zero = NULL) } \arguments{ \item{lsigma}{ Character. Link function applied to a parameter and not necessarily a mean. See \code{\link{Links}} for a selection of choices. If there is only one parameter then this argument is often called \code{link}. } % \item{esigma}{ % List. % Extra argument allowing for additional information, specific to the % link function. % See \code{\link{Links}} for more information. % If there is only one parameter then this argument is often called % \code{earg}. % } \item{isigma}{ Optional initial values can often be inputted using an argument beginning with \code{"i"}. For example, \code{"isigma"} and \code{"ilocation"}, or just \code{"init"} if there is one parameter. A value of \code{NULL} means a value is computed internally, i.e., a \emph{self-starting} \pkg{VGAM} family function. If a failure to converge occurs make use of these types of arguments. } \item{gsigma}{ Grid-search initial values can be inputted using an argument beginning with \code{"g"}, e.g., \code{"gsigma"}, \code{"gshape"} and \code{"gscale"}. If argument \code{isigma} is inputted then that has precedence over \code{gsigma}, etc. % The actual search values will be \code{unique(sort(c(gshape)))}, etc. 
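% A hedged sketch of supplying such a grid; the family function name
% 'somefamily' and the data frame 'mydata' are placeholders only:
% fit <- vglm(y ~ 1, somefamily(gsigma = exp(-5:5)), data = mydata)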
If the grid search is 2-dimensional then it is advisable not to make the
vectors too long as a nested \code{for} loop may be used.
Ditto for 3-dimensions.
% Then the actual search values will be
% \code{unique(sort(c(gshape, 1/gshape)))}, etc.

}
\item{parallel}{
A logical, or a simple formula specifying which terms have equal/unequal
coefficients.
The formula must be simple, i.e., additive with simple main effects terms.
Interactions and nesting etc. are not handled.
To handle complex formulas use the \code{constraints} argument (of
\code{\link{vglm}} etc.); however, there is a lot more setting up
involved and things will not be as convenient.

This argument is common in \pkg{VGAM} family functions for categorical
responses, e.g.,
\code{\link{cumulative}},
\code{\link{acat}},
\code{\link{cratio}},
\code{\link{sratio}}.
For the proportional odds model (\code{\link{cumulative}}) having
parallel constraints applied to each explanatory variable (except for
the intercepts) means the fitted probabilities do not become negative or
greater than 1. However, this parallelism or proportional-odds assumption
ought to be checked.

}
\item{apply.parint}{
Logical.
It refers to whether the parallelism constraint is applied to the
intercept too.
By default, in some models it does, in other models it does not.
Used only if \code{parallel = TRUE} (fully or partially with respect to
all the explanatory variables).

}
\item{nsimEIM}{
Some \pkg{VGAM} family functions use simulation to obtain an approximate
expected information matrix (EIM).
For those that do, the \code{nsimEIM} argument specifies the number of
random variates used per observation; the mean of \code{nsimEIM} random
variates is taken.
Thus \code{nsimEIM} controls the accuracy and a larger value may be
necessary if the EIMs are not positive-definite.
For intercept-only models (\code{y ~ 1}) the value of \code{nsimEIM} can
be smaller (since the common value used is also then taken as the mean
over the observations), especially if the number of observations is
large.

Some \pkg{VGAM} family functions provide two algorithms for estimating
the EIM.
If applicable, set \code{nsimEIM = NULL} to choose the other algorithm.

}
\item{imethod}{
An integer with value \code{1} or \code{2} or \code{3} or ... which
specifies the initialization method for some parameters or a specific
parameter.
If failure to converge occurs try the next higher value, and continue
until success.
For example, \code{imethod = 1} might be the method of moments, and
\code{imethod = 2} might be another method.
If no value of \code{imethod} works then it will be necessary to use
arguments such as \code{isigma}.
For many \pkg{VGAM} family functions it is advisable to try this argument
with all possible values to safeguard against problems such as converging
to a local solution.
\pkg{VGAM} family functions with this argument usually correspond to a
model or distribution that is relatively hard to fit successfully,
therefore care is needed to ensure the global solution is obtained.
So using all possible values that this argument supplies is a good idea.

}
\item{probs.x, probs.y}{
Numeric, with values in (0, 1).
The probabilities that define quantiles with respect to some vector,
usually an \code{x} or \code{y} of some sort.
This is used to create two subsets of data corresponding to `low' and
`high' values of x or y.
Each value is separately fed into the \code{probs} argument of
\code{\link[stats:quantile]{quantile}}.
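% For example, with the default probs.x = c(0.15, 0.85), the `low' and
% `high' x-subsets might be formed as in this sketch:
% xcuts <- quantile(x, probs = c(0.15, 0.85))
% low.set <- x <= xcuts[1]; high.set <- x >= xcuts[2]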
If the data set size is small then it may be necessary to increase/decrease slightly the first/second values respectively. } \item{whitespace}{ Logical. Should white spaces (\code{" "}) be used in the labelling of the linear/additive predictors? Setting \code{TRUE} usually results in more readability but it occupies more columns of the output. } \item{oim}{ Logical. Should the observed information matrices (OIMs) be used for the working weights? In general, setting \code{oim = TRUE} means the Newton-Raphson algorithm, and \code{oim = FALSE} means Fisher-scoring. The latter uses the EIM, and is usually recommended. If \code{oim = TRUE} then \code{nsimEIM} is ignored. } \item{zero}{ An integer specifying which linear/additive predictor is modelled as intercept-only. That is, the regression coefficients are set to zero for all covariates except for the intercept. If \code{zero} is specified then it may be a vector with values from the set \eqn{\{1,2,\ldots,M\}}. The value \code{zero = NULL} means model \emph{all} linear/additive predictors as functions of the explanatory variables. Here, \eqn{M} is the number of linear/additive predictors. Technically, if \code{zero} contains the value \eqn{j} then the \eqn{j}th row of every constraint matrix (except for the intercept) consists of all 0 values. Some \pkg{VGAM} family functions allow the \code{zero} argument to accept negative values; if so then its absolute value is recycled over each (usual) response. For example, \code{zero = -2} for the two-parameter negative binomial distribution would mean, for each response, the second linear/additive predictor is modelled as intercepts-only. That is, for all the \eqn{k} parameters in \code{\link{negbinomial}} (this \pkg{VGAM} family function can handle a matrix of responses). Suppose \code{zero = zerovec} where \code{zerovec} is a vector of negative values. If \eqn{G} is the usual \eqn{M} value for a univariate response then the actual values for argument \code{zero} are all values in \code{c(abs(zerovec), G + abs(zerovec), 2*G + abs(zerovec), ... )} lying in the integer range \eqn{1} to \eqn{M}. For example, setting \code{zero = -c(2, 3)} for a matrix response of 4 columns with \code{\link{zinegbinomial}} (which usually has \eqn{G = M = 3} for a univariate response) would be equivalent to \code{zero = c(2, 3, 5, 6, 8, 9, 11, 12)}. This example has \eqn{M = 12}. Note that if \code{zerovec} contains negative values then their absolute values should be elements from the set \code{1:G}. Note: \code{zero} may have positive and negative values, for example, setting \code{zero = c(-2, 3)} in the above example would be equivalent to \code{zero = c(2, 3, 5, 8, 11)}. } \item{shrinkage.init}{ Shrinkage factor \eqn{s} used for obtaining initial values. Numeric, between 0 and 1. In general, the formula used is something like \eqn{s \mu + (1-s) y}{s*mu + (1-s)*y} where \eqn{\mu}{mu} is a measure of central tendency such as a weighted mean or median, and \eqn{y} is the response vector. For example, the initial values are slight perturbations of the mean towards the actual data. For many types of models this method seems to work well and is often reasonably robust to outliers in the response. Often this argument is only used if the argument \code{imethod} is assigned a certain value. } \item{nointercept}{ An integer-valued vector specifying which linear/additive predictors have no intercepts. Any values must be from the set \{1,2,\ldots,\eqn{M}\}. A value of \code{NULL} means no such constraints. 
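% The recycling rule for negative 'zero' values (described under 'zero'
% above) can be verified with a small sketch; here G = 3 and there are
% four responses, so M = 12:
% zerovec <- -c(2, 3); G <- 3
% sort(abs(zerovec) + rep(G * (0:3), each = length(zerovec)))
% # 2 3 5 6 8 9 11 12, i.e., zero = c(2, 3, 5, 6, 8, 9, 11, 12)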
} \item{mv}{ Logical. Some \pkg{VGAM} family functions allow a multivariate or vector response. If so, then usually the response is a matrix with columns corresponding to the individual response variables. They are all fitted simultaneously. Arguments such as \code{parallel} may then be useful to allow for relationships between the regressions of each response variable. If \code{mv = TRUE} then sometimes the response is interpreted differently, e.g., \code{\link{posbinomial}} chooses the first column of a matrix response as success and combines the other columns as failure, but when \code{mv = TRUE} then each column of the response matrix is the number of successes and the \code{weights} argument is of the same dimension as the response and contains the number of trials. } \item{earg.link}{ Sometimes the link argument can receive \code{earg}-type input, such as \code{\link{quasibinomial}} calling \code{\link{binomial}}. This argument should be generally ignored. } \item{bred}{ Logical. Some \pkg{VGAM} family functions will allow bias-reduction based on the work by Kosmidis and Firth. Currently none are working yet! } } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \section{Warning }{ The \code{zero} argument is supplied for convenience but conflicts can arise with other arguments, e.g., the \code{constraints} argument of \code{\link{vglm}} and \code{\link{vgam}}. See Example 5 below for an example. If not sure, use, e.g., \code{constraints(fit)} and \code{coef(fit, matrix = TRUE)} to check the result of a fit \code{fit}. The arguments \code{zero} and \code{nointercept} can be inputted with values that fail. For example, \code{multinomial(zero = 2, nointercept = 1:3)} means the second linear/additive predictor is identically zero, which will cause a failure. Be careful about the use of other potentially contradictory constraints, e.g., \code{multinomial(zero = 2, parallel = TRUE ~ x3)}. If in doubt, apply \code{constraints()} to the fitted object to check. \pkg{VGAM} family functions with the \code{nsimEIM} may have inaccurate working weight matrices. If so, then the standard errors of the regression coefficients may be inaccurate. Thus output from \code{summary(fit)}, \code{vcov(fit)}, etc. may be misleading. } \details{ Full details will be given in documentation yet to be written, at a later date! } \references{ Kosmidis, I. and Firth, D. (2009) Bias reduction in exponential family nonlinear models. \emph{Biometrika}, \bold{96}(4), 793--804. %Kosmidis, I. and Firth, D. (2010) %A generic algorithm for reducing bias in parametric estimation. %\emph{Electronic Journal of Statistics}, %\bold{4}, 1097--1112. } \seealso{ \code{\link{Links}}, \code{\link{vglmff-class}}. } \author{T. W. Yee} \note{ See \code{\link{Links}} regarding a major change in link functions, for version 0.9-0 and higher (released during the 2nd half of 2012). 
}
\examples{
# Example 1
cumulative()
cumulative(link = "probit", reverse = TRUE, parallel = TRUE)

# Example 2
wdata <- data.frame(x2 = runif(nn <- 1000))
wdata <- transform(wdata,
         y = rweibull(nn, shape = 2 + exp(1 + x2), scale = exp(-0.5)))
fit <- vglm(y ~ x2, weibull(lshape = logoff(offset = -2), zero = 2), wdata)
coef(fit, mat = TRUE)

# Example 3; multivariate (multiple) response
\dontrun{
ndata <- data.frame(x = runif(nn <- 500))
ndata <- transform(ndata,
         y1 = rnbinom(nn, mu = exp(3+x), size = exp(1)),  # k is size
         y2 = rnbinom(nn, mu = exp(2-x), size = exp(0)))
fit <- vglm(cbind(y1, y2) ~ x, negbinomial(zero = -2), ndata)
coef(fit, matrix = TRUE)
}

# Example 4
\dontrun{
# fit1 and fit2 are equivalent
fit1 <- vglm(ymatrix ~ x2 + x3 + x4 + x5,
             cumulative(parallel = FALSE ~ 1 + x3 + x5), mydataframe)
fit2 <- vglm(ymatrix ~ x2 + x3 + x4 + x5,
             cumulative(parallel = TRUE ~ x2 + x4), mydataframe)
}

# Example 5
gdata <- data.frame(x2 = rnorm(nn <- 200))
gdata <- transform(gdata,
         y1 = rnorm(nn, mean = 1 - 3*x2, sd = exp(1 + 0.2*x2)),
         y2 = rnorm(nn, mean = 1 - 3*x2, sd = exp(1)))
args(normal1)
fit1 <- vglm(y1 ~ x2, normal1, gdata)            # This is ok
fit2 <- vglm(y2 ~ x2, normal1(zero = 2), gdata)  # This is ok

# This creates potential conflict
clist <- list("(Intercept)" = diag(2), "x2" = diag(2))
fit3 <- vglm(y2 ~ x2, normal1(zero = 2), gdata,
             constraints = clist)  # Conflict!
coef(fit3, matrix = TRUE)  # Shows that clist[["x2"]] was overwritten,
constraints(fit3)  # i.e., 'zero' seems to override the 'constraints' arg

# Example 6 ('whitespace' argument)
pneumo <- transform(pneumo, let = log(exposure.time))
fit1 <- vglm(cbind(normal, mild, severe) ~ let,
             sratio(whitespace = FALSE, parallel = TRUE), pneumo)
fit2 <- vglm(cbind(normal, mild, severe) ~ let,
             sratio(whitespace = TRUE, parallel = TRUE), pneumo)
head(predict(fit1), 2)  # No white spaces
head(predict(fit2), 2)  # Uses white spaces
}
\keyword{models}
VGAM/man/Coef.vlm.Rd0000644000176000001440000000321612136651105013551 0ustar ripleyusers\name{Coef.vlm}
\alias{Coef.vlm}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Extract Model Coefficients for VLM Objects }
\description{
Amongst other things, this function applies inverse link functions to
the parameters of intercept-only VGLMs.
}
\usage{
Coef.vlm(object, ...)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{object}{ A fitted model. }
\item{\dots}{ Arguments which may be passed into
\code{\link[stats]{coef}}.
}
}
\details{
Most \pkg{VGAM} family functions apply a link function to the
parameters, e.g., positive parameters often have a log link, parameters
between 0 and 1 have a logit link.
This function can back-transform the parameter estimates to the original
scale.
}
\value{
For intercept-only models (e.g., formula is \code{y ~ 1}) the
back-transformed parameter estimates can be returned.
}
\references{
Yee, T. W. and Hastie, T. J. (2003)
Reduced-rank vector generalized linear models.
\emph{Statistical Modelling},
\bold{3}, 15--41.
}
\author{ Thomas W. Yee }
%\note{ ~~further notes~~ }
\section{Warning }{
This function may not work for \emph{all} \pkg{VGAM} family functions.
You should check your results on some artificial data before applying
it to models fitted to real data.
}
\seealso{
\code{\link{Coef}},
\code{\link[stats]{coef}}.
} \examples{ set.seed(123); nn <- 1000 bdata <- data.frame(y = rbeta(nn, shape1 = 1, shape2 = 3)) fit <- vglm(y ~ 1, betaff, data = bdata, trace = TRUE) # intercept-only model coef(fit, matrix = TRUE) # log scale Coef(fit) # On the original scale } \keyword{models} \keyword{regression} VGAM/man/Coef.rrvglm.Rd0000644000176000001440000000263712136651105014272 0ustar ripleyusers\name{Coef.rrvglm} \alias{Coef.rrvglm} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Returns Important Matrices etc. of a RR-VGLM Object } \description{ This methods function returns important matrices etc. of a RR-VGLM object. } \usage{ Coef.rrvglm(object, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{object}{ An object of class \code{"rrvglm"}. } \item{\dots}{ Currently unused. } } \details{ The \bold{A}, \bold{B1}, \bold{C} matrices are returned, along with other slots. See \code{\link{rrvglm}} for details about RR-VGLMs. } \value{ An object of class \code{"Coef.rrvglm"} (see \code{\link{Coef.rrvglm-class}}). } \references{ Yee, T. W. and Hastie, T. J. (2003) Reduced-rank vector generalized linear models. \emph{Statistical Modelling}, \bold{3}, 15--41. } \author{ Thomas W. Yee } \note{ This function is an alternative to \code{coef.rrvglm}. } % ~Make other sections like Warning with \section{Warning }{....} ~ \seealso{ \code{\link{Coef.rrvglm-class}}, \code{print.Coef.rrvglm}, \code{\link{rrvglm}}. } \examples{ # Rank-1 stereotype model of Anderson (1984) pneumo <- transform(pneumo, let = log(exposure.time), x3 = runif(nrow(pneumo))) fit <- rrvglm(cbind(normal, mild, severe) ~ let + x3, multinomial, pneumo) coef(fit, matrix = TRUE) Coef(fit) } \keyword{models} \keyword{regression} % # print(Coef(fit), digits = 3) VGAM/man/Coef.rrvglm-class.Rd0000644000176000001440000000423612136651105015372 0ustar ripleyusers\name{Coef.rrvglm-class} \docType{class} \alias{Coef.rrvglm-class} \title{Class ``Coef.rrvglm'' } \description{ The most pertinent matrices and other quantities pertaining to a RR-VGLM. } \section{Objects from the Class}{ Objects can be created by calls of the form \code{Coef(object, ...)} where \code{object} is an object of class \code{rrvglm} (see \code{\link{rrvglm-class}}). In this document, \eqn{M} is the number of linear predictors and \eqn{n} is the number of observations. } \section{Slots}{ \describe{ \item{\code{A}:}{Of class \code{"matrix"}, \bold{A}. } \item{\code{B1}:}{Of class \code{"matrix"}, \bold{B1}. } \item{\code{C}:}{Of class \code{"matrix"}, \bold{C}. } \item{\code{Rank}:}{The rank of the RR-VGLM. } \item{\code{colx1.index}:}{Index of the columns of the \code{"vlm"}-type model matrix corresponding to the variables in \bold{x1}. These correspond to \bold{B1}. } \item{\code{colx2.index}:}{ Index of the columns of the \code{"vlm"}-type model matrix corresponding to the variables in \bold{x2}. These correspond to the reduced-rank regression. } \item{\code{Atilde}:}{Object of class \code{"matrix"}, the \bold{A} matrix with the corner rows removed. Thus each of the elements have been estimated. This matrix is returned only if corner constraints were used. } } } %\section{Methods}{ %No methods defined with class "Coef.rrvglm" in the signature. %} \references{ Yee, T. W. and Hastie, T. J. (2003) Reduced-rank vector generalized linear models. \emph{Statistical Modelling}, \bold{3}, 15--41. } \author{ Thomas W. 
Yee } %\note{ ~~further notes~~ } % ~Make other sections like Warning with \section{Warning }{....} ~ \seealso{ \code{\link{Coef.rrvglm}}, \code{\link{rrvglm}}, \code{\link{rrvglm-class}}, \code{print.Coef.rrvglm}. } \examples{ # Rank-1 stereotype model of Anderson (1984) pneumo <- transform(pneumo, let = log(exposure.time), x3 = runif(nrow(pneumo))) fit <- rrvglm(cbind(normal, mild, severe) ~ let + x3, multinomial, pneumo) coef(fit, matrix = TRUE) Coef(fit) # print(Coef(fit), digits = 3) } \keyword{classes} VGAM/man/Coef.qrrvglm.Rd0000644000176000001440000001123312136651105014443 0ustar ripleyusers\name{Coef.qrrvglm} \alias{Coef.qrrvglm} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Returns Important Matrices etc. of a QO Object } \description{ This methods function returns important matrices etc. of a QO object. } \usage{ Coef.qrrvglm(object, varlvI = FALSE, reference = NULL, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{object}{ A CQO or UQO object. The former has class \code{"qrrvglm"}. } \item{varlvI}{ Logical indicating whether to scale the site scores (latent variables) to have variance-covariance matrix equal to the rank-\eqn{R} identity matrix. All models have uncorrelated site scores (latent variables), and this option stretches or shrinks the ordination axes if \code{TRUE}. See below for further details. } \item{reference}{ Integer or character. Specifies the \emph{reference species}. By default, the reference species is found by searching sequentially starting from the first species until a positive-definite tolerance matrix is found. Then this tolerance matrix is transformed to the identity matrix. Then the sites scores (latent variables) are made uncorrelated. See below for further details. % If \code{EqualTolerances=FALSE}, then transformations occur so that % the reference species has a tolerance matrix equal to the rank-\eqn{R} % identity matrix. } \item{\dots}{ Currently unused. } } \details{ If \code{ITolerances=TRUE} or \code{EqualTolerances=TRUE} (and its estimated tolerance matrix is positive-definite) then all species' tolerances are unity by transformation or by definition, and the spread of the site scores can be compared to them. Vice versa, if one wishes to compare the tolerances with the sites score variability then setting \code{varlvI=TRUE} is more appropriate. For rank-2 QRR-VGLMs, one of the species can be chosen so that the angle of its major axis and minor axis is zero, i.e., parallel to the ordination axes. This means the effect on the latent vars is independent on that species, and that its tolerance matrix is diagonal. The argument \code{reference} allows one to choose which is the reference species, which must have a positive-definite tolerance matrix, i.e., is bell-shaped. If \code{reference} is not specified, then the code will try to choose some reference species starting from the first species. Although the \code{reference} argument could possibly be offered as an option when fitting the model, it is currently available after fitting the model, e.g., in the functions \code{\link{Coef.qrrvglm}} and \code{\link{lvplot.qrrvglm}}. } \value{ The \bold{A}, \bold{B1}, \bold{C}, \bold{T}, \bold{D} matrices/arrays are returned, along with other slots. For UQO, \bold{C} is undefined. The returned object has class \code{"Coef.qrrvglm"} (see \code{\link{Coef.qrrvglm-class}}). } \references{ Yee, T. W. (2004) A new technique for maximum-likelihood canonical Gaussian ordination. 
\emph{Ecological Monographs}, \bold{74}, 685--701. Yee, T. W. (2006) Constrained additive ordination. \emph{Ecology}, \bold{87}, 203--213. } \author{ Thomas W. Yee } \note{ Consider an equal-tolerances Poisson/binomial CQO model with \code{noRRR = ~ 1}. For \eqn{R=1} it has about \eqn{2S+p_2}{2*S+p2} parameters. For \eqn{R=2} it has about \eqn{3S+2 p_2}{3*S+2*p_2} parameters. Here, \eqn{S} is the number of species, and \eqn{p_2=p-1}{p2=p-1} is the number of environmental variables making up the latent variable. For an unequal-tolerances Poisson/binomial CQO model with \code{noRRR = ~ 1}, it has about \eqn{3S -1 +p_2}{3*S-1+p2} parameters for \eqn{R=1}, and about \eqn{6S -3 +2p_2}{6*S -3 +2*p2} parameters for \eqn{R=2}. Since the total number of data points is \eqn{nS}{n*S}, where \eqn{n} is the number of sites, it pays to divide the number of data points by the number of parameters to get some idea about how much information the parameters contain. } % ~Make other sections like Warning with \section{Warning }{....} ~ \seealso{ \code{\link{cqo}}, \code{\link{Coef.qrrvglm-class}}, \code{print.Coef.qrrvglm}, \code{\link{lvplot.qrrvglm}}. } \examples{ set.seed(123) x2 <- rnorm(n <- 100) x3 <- rnorm(n) x4 <- rnorm(n) lv1 <- 0 + x3 - 2*x4 lambda1 <- exp(3 - 0.5 * (lv1-0)^2) lambda2 <- exp(2 - 0.5 * (lv1-1)^2) lambda3 <- exp(2 - 0.5 * ((lv1+4)/2)^2) # Unequal tolerances y1 <- rpois(n, lambda1) y2 <- rpois(n, lambda2) y3 <- rpois(n, lambda3) set.seed(111) # vvv p1 <- cqo(cbind(y1, y2, y3) ~ x2 + x3 + x4, poissonff, trace = FALSE) \dontrun{ lvplot(p1, y = TRUE, lcol = 1:3, pch = 1:3, pcol = 1:3) } # vvv Coef(p1) # vvv print(Coef(p1), digits=3) } \keyword{models} \keyword{regression} VGAM/man/Coef.qrrvglm-class.Rd0000644000176000001440000001066412136651105015555 0ustar ripleyusers\name{Coef.qrrvglm-class} \docType{class} \alias{Coef.qrrvglm-class} \title{Class ``Coef.qrrvglm'' } \description{ The most pertinent matrices and other quantities pertaining to a QRR-VGLM (CQO model). } \section{Objects from the Class}{ Objects can be created by calls of the form \code{Coef(object, ...)} where \code{object} is an object of class \code{"qrrvglm"} (created by \code{\link{cqo}}). In this document, \eqn{R} is the \emph{rank}, \eqn{M} is the number of linear predictors and \eqn{n} is the number of observations. } \section{Slots}{ \describe{ \item{\code{A}:}{Of class \code{"matrix"}, \bold{A}, which are the linear `coefficients' of the matrix of latent variables. It is \eqn{M} by \eqn{R}. } \item{\code{B1}:}{Of class \code{"matrix"}, \bold{B1}. These correspond to terms of the argument \code{noRRR}. } \item{\code{C}:}{Of class \code{"matrix"}, \bold{C}, the canonical coefficients. It has \eqn{R} columns. } \item{\code{Constrained}:}{Logical. Whether the model is a constrained ordination model. } \item{\code{D}:}{Of class \code{"array"}, \code{D[,,j]} is an order-\code{Rank} matrix, for \code{j} = 1,\dots,\eqn{M}. Ideally, these are negative-definite in order to make the response curves/surfaces bell-shaped. } \item{\code{Rank}:}{The rank (dimension, number of latent variables) of the RR-VGLM. Called \eqn{R}. } \item{\code{lv}:}{\eqn{n} by \eqn{R} matrix of latent variable values. } \item{\code{lvOrder}:}{Of class \code{"matrix"}, the permutation returned when the function \code{\link{order}} is applied to each column of \code{lv}. This enables each column of \code{lv} to be easily sorted. } \item{\code{Maximum}:}{Of class \code{"numeric"}, the \eqn{M} maximum fitted values. 
That is, the fitted values at the optima for \code{noRRR = ~ 1} models. If \code{noRRR} is not \code{~ 1} then these will be \code{NA}s. } \item{\code{NOS}:}{Number of species.} \item{\code{Optimum}:}{Of class \code{"matrix"}, the values of the latent variables where the optima are. If the curves are not bell-shaped, then the value will be \code{NA} or \code{NaN}.} \item{\code{OptimumOrder}:}{Of class \code{"matrix"}, the permutation returned when the function \code{\link{order}} is applied to each column of \code{Optimum}. This enables each row of \code{Optimum} to be easily sorted. } % \item{\code{Diagonal}:}{Vector of logicals: are the % \code{D[,,j]} diagonal? } \item{\code{bellshaped}:}{Vector of logicals: is each response curve/surface bell-shaped? } \item{\code{dispersion}:}{Dispersion parameter(s). } \item{\code{Dzero}:}{Vector of logicals, is each of the response curves linear in the latent variable(s)? It will be if and only if \code{D[,,j]} equals \bold{O}, for \code{j} = 1,\dots,\eqn{M} . } \item{\code{Tolerance}:}{Object of class \code{"array"}, \code{Tolerance[,,j]} is an order-\code{Rank} matrix, for \code{j} = 1,\dots,\eqn{M}, being the matrix of tolerances (squared if on the diagonal). These are denoted by \bold{T} in Yee (2004). Ideally, these are positive-definite in order to make the response curves/surfaces bell-shaped. The tolerance matrices satisfy \eqn{T_s = -\frac12 D_s^{-1}}{T_s = -(0.5 D_s^(-1)}. } } } %\section{Methods}{ %No methods defined with class "Coef.qrrvglm" in the signature. %} \references{ Yee, T. W. (2004) A new technique for maximum-likelihood canonical Gaussian ordination. \emph{Ecological Monographs}, \bold{74}, 685--701. } \author{ Thomas W. Yee } %\note{ ~~further notes~~ } % ~Make other sections like Warning with \section{Warning }{....} ~ \seealso{ \code{\link{Coef.qrrvglm}}, \code{\link{cqo}}, % \code{qrrvglm-class}, \code{print.Coef.qrrvglm}. } \examples{ x2 <- rnorm(n <- 100) x3 <- rnorm(n) x4 <- rnorm(n) lv1 <- 0 + x3 - 2*x4 lambda1 <- exp(3 - 0.5 * (lv1-0)^2) lambda2 <- exp(2 - 0.5 * (lv1-1)^2) lambda3 <- exp(2 - 0.5 * ((lv1+4)/2)^2) y1 <- rpois(n, lambda1) y2 <- rpois(n, lambda2) y3 <- rpois(n, lambda3) yy <- cbind(y1, y2, y3) # vvv p1 <- cqo(yy ~ x2 + x3 + x4, fam=poissonff, trace=FALSE) \dontrun{ lvplot(p1, y = TRUE, lcol = 1:3, pch = 1:3, pcol = 1:3) } # vvv print(Coef(p1), digits=3) } \keyword{classes} VGAM/man/Coef.Rd0000644000176000001440000000415512136651105012757 0ustar ripleyusers\name{Coef} \alias{Coef} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Computes Model Coefficients and Quantities } \description{ \code{Coef} is a generic function which computes model coefficients from objects returned by modelling functions. It is an auxiliary function to \code{\link[stats]{coef}} that enables extra capabilities for some specific models. } \usage{ Coef(object, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{object}{ An object for which the computation of other types of model coefficients or quantities is meaningful. } \item{\dots}{ Other arguments fed into the specific methods function of the model. } } \details{ This function can often be useful for \code{\link{vglm}} objects with just an intercept term in the RHS of the formula, e.g., \code{y ~ 1}. Then often this function will apply the inverse link functions to the parameters. See the example below. For reduced-rank VGLMs, this function can return the \bold{A}, \bold{C} matrices, etc. 
For quadratic and additive ordination models, this function can return ecological meaningful quantities such as tolerances, optima, maxima. } \value{ The value returned depends specifically on the methods function invoked. } \references{ Yee, T. W. and Hastie, T. J. (2003) Reduced-rank vector generalized linear models. \emph{Statistical Modelling}, \bold{3}, 15--41. } \author{ Thomas W. Yee } %\note{ %} \section{Warning }{ This function may not work for \emph{all} \pkg{VGAM} family functions. You should check your results on some artificial data before applying it to models fitted to real data. } \seealso{ \code{\link[stats]{coef}}, \code{\link{Coef.vlm}}, \code{\link{Coef.rrvglm}}, \code{\link{Coef.qrrvglm}}, \code{\link{depvar}}. } \examples{ nn <- 1000 bdata <- data.frame(y = rbeta(nn, shape1 = 1, shape2 = 3)) # Original scale fit <- vglm(y ~ 1, beta.ab, data = bdata, trace = TRUE) # Intercept-only model coef(fit, matrix = TRUE) # Both on a log scale Coef(fit) # On the original scale } \keyword{models} \keyword{regression} VGAM/man/AICvlm.Rd0000644000176000001440000000550612136651105013217 0ustar ripleyusers\name{AICvlm} \alias{AICvlm} %\alias{AICvglm} \alias{AICvgam} \alias{AICrrvglm} \alias{AICqrrvglm} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Akaike's Information Criterion } \description{ Calculates the Akaike information criterion for a fitted model object for which a log-likelihood value has been obtained. } \usage{ AICvlm(object, \dots, k = 2) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{object}{ Some \pkg{VGAM} object, for example, having class \code{\link{vglmff-class}}. } \item{\dots}{ Other possible arguments fed into \code{logLik} in order to compute the log-likelihood. } \item{k}{ Numeric, the penalty per parameter to be used; the default is the classical AIC. } } \details{ The following formula is used for VGLMs: \eqn{-2 \mbox{log-likelihood} + k n_{par}}{-2*log-likelihood + k*npar}, where \eqn{n_{par}}{npar} represents the number of parameters in the fitted model, and \eqn{k = 2} for the usual AIC. One could assign \eqn{k = \log(n)} (\eqn{n} the number of observations) for the so-called BIC or SBC (Schwarz's Bayesian criterion). This code relies on the log-likelihood being defined, and computed, for the object. When comparing fitted objects, the smaller the AIC, the better the fit. The log-likelihood and hence the AIC is only defined up to an additive constant. Any estimated scale parameter (in GLM parlance) is used as one parameter. For VGAMs the nonlinear effective degrees of freedom for each smoothed component is used. This formula is heuristic. } \value{ Returns a numeric value with the corresponding AIC (or BIC, or \dots, depending on \code{k}). } \author{T. W. Yee. } \note{ AIC has not been defined for QRR-VGLMs yet. } %\references{ % Sakamoto, Y., Ishiguro, M., and Kitagawa G. (1986). % \emph{Akaike Information Criterion Statistics}. % D. Reidel Publishing Company. %} \section{Warning }{ This code has not been double-checked. The general applicability of \code{AIC} for the VGLM/VGAM classes has not been developed fully. In particular, \code{AIC} should not be run on some \pkg{VGAM} family functions because of violation of certain regularity conditions, etc. } \seealso{ VGLMs are described in \code{\link{vglm-class}}; VGAMs are described in \code{\link{vgam-class}}; RR-VGLMs are described in \code{\link{rrvglm-class}}; \code{\link[stats]{AIC}}. 
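% A sketch of using the 'k' argument for a BIC/SBC-type penalty, as
% described above ('fit1' denotes a fitted VGLM such as in the examples
% below):
% AIC(fit1)                       # k = 2, the classical AIC
% AIC(fit1, k = log(nobs(fit1)))  # Schwarz-type penalty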
}
\examples{
pneumo <- transform(pneumo, let = log(exposure.time))
(fit1 <- vglm(cbind(normal, mild, severe) ~ let,
              cumulative(parallel = TRUE, reverse = TRUE), pneumo))
coef(fit1, matrix = TRUE)
AIC(fit1)
(fit2 <- vglm(cbind(normal, mild, severe) ~ let,
              cumulative(parallel = FALSE, reverse = TRUE), pneumo))
coef(fit2, matrix = TRUE)
AIC(fit2)
}
\keyword{models}
\keyword{regression}
VGAM/man/ABO.Rd0000644000176000001440000000412412136651105012500 0ustar ripleyusers\name{ABO}
\alias{ABO}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ The ABO Blood Group System }
\description{
Estimates the two independent parameters of the ABO blood group system.
}
\usage{
ABO(link = "logit", ipA = NULL, ipO = NULL)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{link}{
Link function applied to \code{pA} and \code{pB}.
See \code{\link{Links}} for more choices.
}
\item{ipA, ipO}{
Optional initial values for \code{pA} and \code{pO}.
A \code{NULL} value means values are computed internally.
}
}
\details{
The parameters \code{pA} and \code{pB} are probabilities, so that
\code{pO=1-pA-pB} is the third probability.
The probabilities \code{pA} and \code{pB} correspond to A and B
respectively, so that \code{pO} is the probability for O.
It is easier to make use of initial values for \code{pO} than for
\code{pB}.
In documentation elsewhere I sometimes use \code{pA=p}, \code{pB=q},
\code{pO=r}.
}
\value{
An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}).
The object is used by modelling functions such as \code{\link{vglm}}
and \code{\link{vgam}}.
}
\references{
Lange, K. (2002)
\emph{Mathematical and Statistical Methods for Genetic Analysis},
2nd ed. New York: Springer-Verlag.
}
\author{ T. W. Yee }
\note{
The input can be a 4-column matrix of counts, where the columns are
A, B, AB, O (in order).
Alternatively, the input can be a 4-column matrix of proportions (so
each row adds to 1) and the \code{weights} argument is used to specify
the total number of counts for each row.
}
\seealso{
\code{\link{AA.Aa.aa}},
\code{\link{AB.Ab.aB.ab}},
\code{\link{AB.Ab.aB.ab2}},
\code{\link{G1G2G3}},
\code{\link{MNSs}}.
}
\examples{
ymat <- cbind(A = 725, B = 258, AB = 72, O = 1073)  # Order matters, not the name
fit <- vglm(ymat ~ 1, ABO(link = identity), trace = TRUE, cri = "coef")
coef(fit, matrix = TRUE)
Coef(fit)  # Estimated pA and pB
rbind(ymat, sum(ymat) * fitted(fit))
sqrt(diag(vcov(fit)))
}
\keyword{models}
\keyword{regression}
VGAM/man/AB.Ab.aB.ab2.Rd0000644000176000001440000000332712136651105013672 0ustar ripleyusers\name{AB.Ab.aB.ab2}
\alias{AB.Ab.aB.ab2}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ The AB-Ab-aB-ab2 Blood Group System }
\description{
Estimates the parameter of the AB-Ab-aB-ab2 blood group system.
}
\usage{
AB.Ab.aB.ab2(link = "logit", init.p = NULL)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{link}{
Link function applied to \code{p}.
See \code{\link{Links}} for more choices.
}
\item{init.p}{ Optional initial value for \code{p}. }
}
\details{
This one-parameter model involves a probability called \code{p}.
}
\value{
An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}).
The object is used by modelling functions such as \code{\link{vglm}}
and \code{\link{vgam}}.
}
\references{
Elandt-Johnson, R. C. (1971)
\emph{Probability Models and Statistical Methods in Genetics},
New York: Wiley.
}
\author{ T. W. Yee }
\note{
The input can be a 4-column matrix of counts.
Alternatively, the input can be a 4-column matrix of proportions (so each row adds to 1) and the \code{weights} argument is used to specify the total number of counts for each row. } \section{Warning}{ There may be a bug in the \code{deriv} and \code{weight} slot of the family function. } \seealso{ \code{\link{AA.Aa.aa}}, \code{\link{AB.Ab.aB.ab}}, \code{\link{ABO}}, \code{\link{G1G2G3}}, \code{\link{MNSs}}. } \examples{ ymat <- cbind(68, 11, 13, 21) # See Elandt-Johnson, pp.430,427 fit <- vglm(ymat ~ 1, AB.Ab.aB.ab2(link = cloglog), trace = TRUE, crit = "coef") Coef(fit) # Estimated p rbind(ymat, sum(ymat) * fitted(fit)) sqrt(diag(vcov(fit))) # Estimated variance is approx 0.0021 } \keyword{models} \keyword{regression} VGAM/man/AB.Ab.aB.ab.Rd0000644000176000001440000000331512136651105013605 0ustar ripleyusers\name{AB.Ab.aB.ab} \alias{AB.Ab.aB.ab} %- Also NEED an '\alias' for EACH other topic documented here. \title{ The AB-Ab-aB-ab Blood Group System } \description{ Estimates the parameter of the AB-Ab-aB-ab blood group system. } \usage{ AB.Ab.aB.ab(link = "logit", init.p = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{link}{ Link function applied to \code{p}. See \code{\link{Links}} for more choices. } \item{init.p}{ Optional initial value for \code{p}. } } \details{ This one parameter model involves a probability called \code{p}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Lange, K. (2002) \emph{Mathematical and Statistical Methods for Genetic Analysis}, 2nd ed. New York: Springer-Verlag. } \author{ T. W. Yee } \note{ The input can be a 4-column matrix of counts, where the columns are AB, Ab, aB and ab (in order). Alternatively, the input can be a 4-column matrix of proportions (so each row adds to 1) and the \code{weights} argument is used to specify the total number of counts for each row. } \seealso{ \code{\link{AA.Aa.aa}}, \code{\link{AB.Ab.aB.ab2}}, \code{\link{ABO}}, \code{\link{G1G2G3}}, \code{\link{MNSs}}. } \examples{ ymat <- cbind(AB=1997, Ab=906, aB=904, ab=32) # Data from Fisher (1925) fit <- vglm(ymat ~ 1, AB.Ab.aB.ab(link = "identity", init.p = 0.9), trace = TRUE) fit <- vglm(ymat ~ 1, AB.Ab.aB.ab, trace = TRUE) rbind(ymat, sum(ymat)*fitted(fit)) Coef(fit) # Estimated p p <- sqrt(4*(fitted(fit)[, 4])) p*p summary(fit) } \keyword{models} \keyword{regression} VGAM/man/AA.Aa.aa.Rd0000644000176000001440000000323412136651105013261 0ustar ripleyusers\name{AA.Aa.aa} \alias{AA.Aa.aa} %- Also NEED an '\alias' for EACH other topic documented here. \title{ The AA-Aa-aa Blood Group System } \description{ Estimates the parameter of the AA-Aa-aa blood group system. } \usage{ AA.Aa.aa(link = "logit", init.pA = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{link}{ Link function applied to \code{pA}. See \code{\link{Links}} for more choices. } \item{init.pA}{ Optional initial value for \code{pA}. } } \details{ This one parameter model involves a probability called \code{pA}. The probability of getting a count in the first column of the input (an AA) is \code{pA*pA}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Weir, B. S. 
(1996)
\emph{Genetic Data Analysis II: Methods for Discrete Population Genetic
Data},
Sunderland, MA: Sinauer Associates, Inc.
}
\author{ T. W. Yee }
\note{
The input can be a 3-column matrix of counts, where the columns are
AA, Aa and aa (in order).
Alternatively, the input can be a 3-column matrix of proportions (so
each row adds to 1) and the \code{weights} argument is used to specify
the total number of counts for each row.
}
\seealso{
\code{\link{AB.Ab.aB.ab}},
\code{\link{AB.Ab.aB.ab2}},
\code{\link{ABO}},
\code{\link{G1G2G3}},
\code{\link{MNSs}}.
}
\examples{
y <- cbind(53, 95, 38)
fit <- vglm(y ~ 1, AA.Aa.aa(link = "probit"), trace = TRUE)
rbind(y, sum(y) * fitted(fit))
Coef(fit)  # Estimated pA
summary(fit)
}
\keyword{models}
\keyword{regression}
VGAM/inst/0000755000176000001440000000000012136651113012010 5ustar ripleyusersVGAM/inst/doc/0000755000176000001440000000000012136651167012566 5ustar ripleyusersVGAM/inst/doc/categoricalVGAMbib.bib0000644000176000001440000004507312136651113016651 0ustar ripleyusers@article{yee:wild:1996,
 Author = {Yee, T. W. and Wild, C. J.},
 Title = {Vector Generalized Additive Models},
 Year = 1996,
 JOURNAL = {Journal of the Royal Statistical Society~B},
 Volume = 58,
 Pages = {481--493},
 Keywords = {Nonparametric regression; Smoothing},
 Number = 3,
}

@article{gree:1984,
 Author = {Green, P. J.},
 Title = {Iteratively Reweighted Least Squares for Maximum Likelihood
 Estimation, and Some Robust and Resistant Alternatives},
 Year = 1984,
 JOURNAL = {Journal of the Royal Statistical Society~B},
 Volume = 46,
 Pages = {149--192},
 Keywords = {Scoring; Generalized linear model; Regression; Residual},
 Number = 2,
}

@book{hast:tibs:1990,
 Author = {Hastie, T. J. and Tibshirani, R. J.},
 Title = {Generalized Additive Models},
 Year = 1990,
 Publisher = {Chapman \& Hall},
 Address = {London},
 Pages = {335},
 Keywords = {Regression; Nonparametric; Generalized linear model}
}

@Manual{gam:pack:2009,
 title = {\pkg{gam}: Generalized Additive Models},
 author = {Trevor Hastie},
 year = {2008},
 note = {\proglang{R}~package version~1.01},
 url = {http://CRAN.R-project.org/package=gam}
}

@article{ande:1984,
 Author = {Anderson, J. A.},
 Title = {Regression and Ordered Categorical Variables},
 Year = 1984,
 JOURNAL = {Journal of the Royal Statistical Society~B},
 Volume = 46,
 Pages = {1--30},
 Keywords = {Assessed variable; Logistic regression; Stereotype
 regression; Maximum likelihood},
 Number = 1,
}

@article{firt:1993,
 author = {Firth, D.},
 title = {Bias Reduction of Maximum Likelihood Estimates},
 journal = {Biometrika},
 volume = {80},
 pages = {27--38},
 year = {1993},
 number = {1},
 abstract = {It is shown how, in regular parametric problems, the
 first-order term is removed from the asymptotic bias of maximum
 likelihood estimates by a suitable modification of the score function.
 In exponential families with canonical parameterization the effect is
 to penalize the likelihood by the Jeffreys invariant prior.
In binomial logistic models, Poisson log linear models and certain other generalized linear models, the Jeffreys prior penalty function can be imposed in standard regression software using a scheme of iterative adjustments to the data.}, } @InProceedings{alli:2004, Author = {Allison, P.}, Title = {Convergence Problems in Logistic Regression}, chapter = {10}, Year = 2004, Crossref = {altm:gill:mcdo:2004}, Pages = {238--252}, BookTITLE = {Numerical Issues in Statistical Computing for the Social Scientist}, PUBLISHER = {Wiley-Interscience}, ADDRESS = {Hoboken, NJ, USA}, } @book {altm:gill:mcdo:2004, AUTHOR = {Altman, Micah and Gill, Jeff and McDonald, Michael P.}, TITLE = {Numerical Issues in Statistical Computing for the Social Scientist}, PUBLISHER = {Wiley-Interscience}, ADDRESS = {Hoboken, NJ, USA}, YEAR = {2004}, PAGES = {xvi+323}, MRCLASS = {62-02 (62-04 62P25 65-02 91-02)}, MRNUMBER = {MR2020104}, } @article{yee:2010v, Author = {Yee, T. W.}, Title = {{VGLM}s and {VGAM}s: An Overview for Applications in Fisheries Research}, Year = 2010, Journal = {Fisheries Research}, FJournal = {Fisheries Research}, Volume = {101}, Pages = {116--126}, Number = {1--2}, } @article{imai:king:lau:2008, AUTHOR = {Imai, Kosuke and King, Gary and Lau, Olivia}, TITLE = {Toward A Common Framework for Statistical Analysis and Development}, JOURNAL = {Journal of Computational and Graphical Statistics}, YEAR = 2008, VOLUME = 17, PAGES = {892--913}, NUMBER = 4, } @book{stok:davi:koch:2000, Author = {Stokes, W. and Davis, J. and Koch, W.}, Title = {Categorical Data Analysis Using The \proglang{SAS} System}, Year = 2000, Edition = {2nd}, Publisher = {SAS Institute Inc.}, Address = {Cary, NC, USA}, PAGES = {648}, } @article{neld:wedd:1972, Author = {Nelder, J. A. and Wedderburn, R. W. M.}, Title = {Generalized Linear Models}, Year = 1972, JOURNAL = {Journal of the Royal Statistical Society~A}, Volume = 135, Pages = {370--384}, Keywords = {Probit analysis; Analysis of variance; Contingency table; Exponential family; Quantal response; Weighted least squares}, Number = 3, } @book{agre:2002, Author = {Agresti, Alan}, Title = {Categorical Data Analysis}, Year = 2002, Publisher = {John Wiley \& Sons}, Address = {New York, USA}, Edition = {2nd}, } @book{fahr:tutz:2001, Author = {Fahrmeir, L. and Tutz, G.}, Title = {Multivariate Statistical Modelling Based on Generalized Linear Models}, Year = 2001, Edition = {2nd}, Publisher = {Springer-Verlag}, ADDRESS = {New York, USA}, } @book{leon:2000, Author = {Leonard, Thomas}, Title = {A Course in Categorical Data Analysis}, Year = 2000, Publisher = {Chapman \& Hall/CRC}, Address = {Boca Raton, FL, USA}, } @book{lloy:1999, Author = {Lloyd, C. J.}, Title = {Statistical Analysis of Categorical Data}, Year = 1999, Publisher = {John Wiley \& Sons}, Address = {New York, USA} } @book{long:1997, Author = {Long, J. S.}, Title = {Regression Models for Categorical and Limited Dependent Variables}, Year = 1997, Publisher = {Sage Publications}, ADDRESS = {Thousand Oaks, CA, USA}, } @book{mccu:neld:1989, Author = {McCullagh, P. and Nelder, J. A.}, Title = {Generalized Linear Models}, Year = 1989, Edition = {2nd}, Publisher = {Chapman \& Hall}, Address = {London}, Pages = {500} } @book{simo:2003, Author = {Simonoff, J. S.}, Title = {Analyzing Categorical Data}, Year = 2003, Pages = {496}, Publisher = {Springer-Verlag}, Address = {New York, USA} } @article{liu:agre:2005, Author = {Liu, I. 
and Agresti, A.},
 Title = {The Analysis of Ordered Categorical Data: An Overview and a
 Survey of Recent Developments},
 Year = 2005,
 Journal = {Sociedad Estad{\'i}stica e Investigaci{\'o}n Operativa Test},
 Volume = 14,
 Pages = {1--73},
 Number = 1,
}

@MANUAL{thom:2009,
 TITLE = {\proglang{R} (and \proglang{S-PLUS}) Manual to Accompany
 Agresti's \textit{Categorical Data Analysis}~(2002), 2nd edition},
 AUTHOR = {Thompson, L. A.},
 YEAR = {2009},
 URL = {https://home.comcast.net/~lthompson221/Splusdiscrete2.pdf},
}

@article{yee:2008c,
 Author = {Yee, T. W.},
 Title = {The \pkg{VGAM} Package},
 Year = 2008,
 Journal = {\proglang{R} {N}ews},
 Volume = 8,
 Pages = {28--39},
 Number = 2,
}

@article{Rnews:Yee:2008,
 author = {Thomas W. Yee},
 title = {The \pkg{VGAM} Package},
 journal = {\proglang{R}~News},
 year = 2008,
 volume = 8,
 pages = {28--39},
 month = {October},
 url = {http://CRAN.R-project.org/doc/Rnews/},
 number = 2,
}

@article{yee:hast:2003,
 AUTHOR = {Yee, T. W. and Hastie, T. J.},
 TITLE = {Reduced-rank Vector Generalized Linear Models},
 JOURNAL = {Statistical Modelling},
 Volume = 3,
 Pages = {15--41},
 YEAR = {2003},
 Number = 1,
}

@article{good:1981,
 Author = {Goodman, L. A.},
 Title = {Association Models and Canonical Correlation in the Analysis
 of Cross-classifications Having Ordered Categories},
 Year = 1981,
 Journal = {Journal of the American Statistical Association},
 Volume = 76,
 Pages = {320--334},
 Number = 374,
}

@article{buja:hast:tibs:1989,
 Author = {Buja, Andreas and Hastie, Trevor and Tibshirani, Robert},
 Title = {Linear Smoothers and Additive Models},
 Year = 1989,
 JOURNAL = {The Annals of Statistics},
 Volume = 17,
 Pages = {453--510},
 Keywords = {Nonparametric; Regression; Kernel estimator},
 Number = 2,
}

@article{yee:step:2007,
 AUTHOR = {Yee, Thomas W. and Stephenson, Alec G.},
 TITLE = {Vector Generalized Linear and Additive Extreme Value Models},
 JOURNAL = {Extremes},
 FJOURNAL = {Extremes. Statistical Theory and Applications in Science,
 Engineering and Economics},
 VOLUME = {10},
 YEAR = {2007},
 PAGES = {1--19},
 MRCLASS = {Database Expansion Item},
 MRNUMBER = {MR2407639},
 NUMBER = {1--2},
}

@article{wand:orme:2008,
 Author = {Wand, M. P. and Ormerod, J. T.},
 Title = {On Semiparametric Regression with {O}'{S}ullivan Penalized
 Splines},
 Year = 2008,
 Journal = {The Australian and New Zealand Journal of Statistics},
 Volume = 50,
 Issue = 2,
 Pages = {179--198},
 Number = 2,
}

@book{cham:hast:1993,
 Editor = {Chambers, John M. and Hastie, Trevor J.},
 Title = {Statistical Models in \proglang{S}},
 Publisher = {Chapman \& Hall},
 Year = 1993,
 Pages = {608},
 Address = {New York, USA},
 Keywords = {Computing},
}

@Article{pete:harr:1990,
 Author = {Peterson, B.
and Harrell, Frank E.}, Title = {Partial Proportional Odds Models for Ordinal Response Variables}, Year = 1990, Journal = {Applied Statistics}, Volume = 39, Pages = {205--217}, Number = 2, } @article{pete:1990, Author = {Peterson, B.}, Title = {Letter to the Editor: Ordinal Regression Models for Epidemiologic Data}, Year = 1990, Journal = {American Journal of Epidemiology}, Volume = 131, Pages = {745--746} } @article{hast:tibs:buja:1994, AUTHOR = {Hastie, Trevor and Tibshirani, Robert and Buja, Andreas}, TITLE = {Flexible Discriminant Analysis by Optimal Scoring}, JOURNAL = {Journal of the American Statistical Association}, VOLUME = {89}, YEAR = {1994}, PAGES = {1255--1270}, CODEN = {JSTNAL}, MRCLASS = {62H30}, MRNUMBER = {95h:62099}, NUMBER = {428}, } @article{firth:2005, Author = {Firth, David}, Title = {{B}radley-{T}erry Models in \proglang{R}}, Year = 2005, Journal = {Journal of Statistical Software}, Volume = 12, Number = 1, Pages = {1--12}, URL = "http://www.jstatsoft.org/v12/i01/", } @book{weir:1996, Author = {Weir, Bruce S.}, Title = {Genetic Data Analysis II: Methods for Discrete Population Genetic Data}, Year = 1996, Publisher = {Sinauer Associates, Inc.}, Address = {Sunderland, MA, USA} } @book{lang:2002, Author = {Lange, Kenneth}, Title = {Mathematical and Statistical Methods for Genetic Analysis}, Year = 2002, Edition = {2nd}, Publisher = {Springer-Verlag}, Address = {New York, USA}, } @article{macm:etal:1995, Author = {MacMahon, S. and Norton, R. and Jackson, R. and Mackie, M. J. and Cheng, A. and Vander Hoorn, S. and Milne, A. and McCulloch, A.}, Title = {Fletcher {C}hallenge-{U}niversity of {A}uckland {H}eart \& {H}ealth {S}tudy: Design and Baseline Findings}, Year = 1995, Journal = {New Zealand Medical Journal}, Volume = 108, Pages = {499--502}, } @article{altm:jack:2010, author = {Altman, M. and Jackman, S.}, title = "Nineteen Ways of Looking at Statistical Software", journal = "Journal of Statistical Software", year = "2010", note = "Forthcoming" } @article{fox:hong:2009, author = "John Fox and Jangman Hong", title = {Effect Displays in \proglang{R} for Multinomial and Proportional-Odds Logit Models: Extensions to the \pkg{effects} Package}, journal = "Journal of Statistical Software", volume = "32", number = "1", pages = "1--24", year = "2009", URL = "http://www.jstatsoft.org/v32/i01/", } @article{wild:yee:1996, Author = {Wild, C. J. and Yee, T. W.}, Title = {Additive Extensions to Generalized Estimating Equation Methods}, Year = 1996, JOURNAL = {Journal of the Royal Statistical Society~B}, Volume = 58, Pages = {711--725}, Keywords = {Longitudinal data; Nonparametric; Regression; Smoothing}, NUMBER = {4}, } @Article{Yee:2010, author = {Thomas W. Yee}, title = {The \pkg{VGAM} Package for Categorical Data Analysis}, journal = {Journal of Statistical Software}, year = {2010}, volume = {32}, number = {10}, pages = {1--34}, url = {http://www.jstatsoft.org/v32/i10/} } @Manual{R, title = {\proglang{R}: {A} Language and Environment for Statistical Computing}, author = {{\proglang{R} Development Core Team}}, organization = {\proglang{R} Foundation for Statistical Computing}, address = {Vienna, Austria}, year = {2009}, note = {{ISBN} 3-900051-07-0}, url = {http://www.R-project.org/} } @Book{Venables+Ripley:2002, author = {William N. Venables and Brian D. 
Ripley}, title = {Modern Applied Statistics with \proglang{S}}, edition = {4th}, year = {2002}, pages = {495}, publisher = {Springer-Verlag}, address = {New York}, url = {http://www.stats.ox.ac.uk/pub/MASS4/}, } @Manual{SAS, author = {{\proglang{SAS} Institute Inc.}}, title = {The \proglang{SAS} System, Version 9.1}, year = {2003}, address = {Cary, NC}, url = {http://www.sas.com/} } @Manual{yee:VGAM:2010, title = {\pkg{VGAM}: Vector Generalized Linear and Additive Models}, author = {Yee, T. W.}, year = {2010}, note = {\proglang{R}~package version~0.7-10}, url = {http://CRAN.R-project.org/package=VGAM} } @Manual{Harrell:2009, title = {\pkg{rms}: Regression Modeling Strategies}, author = {Frank E. {Harrell, Jr.}}, year = {2009}, note = {\proglang{R}~package version~2.1-0}, url = {http://CRAN.R-project.org/package=rms} } @Manual{Meyer+Zeileis+Hornik:2009, title = {\pkg{vcd}: Visualizing Categorical Data}, author = {David Meyer and Achim Zeileis and Kurt Hornik}, year = {2009}, note = {\proglang{R}~package version~1.2-7}, url = {http://CRAN.R-project.org/package=vcd} } @Article{Meyer+Zeileis+Hornik:2006, author = {David Meyer and Achim Zeileis and Kurt Hornik}, title = {The Strucplot Framework: Visualizing Multi-Way Contingency Tables with \pkg{vcd}}, journal = {Journal of Statistical Software}, year = {2006}, volume = {17}, number = {3}, pages = {1--48}, url = {http://www.jstatsoft.org/v17/i03/} } @Manual{Turner+Firth:2009, title = {Generalized Nonlinear Models in \proglang{R}: An Overview of the \pkg{gnm} Package}, author = {Heather Turner and David Firth}, year = {2009}, note = {\proglang{R}~package version~0.10-0}, url = {http://CRAN.R-project.org/package=gnm}, } @Article{Rnews:Turner+Firth:2007, author = {Heather Turner and David Firth}, title = {\pkg{gnm}: A Package for Generalized Nonlinear Models}, journal = {\proglang{R}~News}, year = 2007, volume = 7, number = 2, pages = {8--12}, month = {October}, url = {http://CRAN.R-project.org/doc/Rnews/}, } @Manual{ElemStatLearn:2009, title = {\pkg{ElemStatLearn}: Data Sets, Functions and Examples from the Book `The Elements of Statistical Learning, Data Mining, Inference, and Prediction' by Trevor Hastie, Robert Tibshirani and Jerome Friedman}, author = {Kjetil Halvorsen}, year = {2009}, note = {\proglang{R}~package version~0.1-7}, url = {http://CRAN.R-project.org/package=ElemStatLearn}, } @Manual{Zelig:2009, title = {\pkg{Zelig}: Everyone's Statistical Software}, author = {Kosuke Imai and Gary King and Olivia Lau}, year = {2009}, note = {\proglang{R}~package version~3.4-5}, url = {http://CRAN.R-project.org/package=Zelig}, } @article{kosm:firt:2009, author = {Kosmidis, I. and Firth, D.}, title = {Bias Reduction in Exponential Family Nonlinear Models}, year = {2009}, JOURNAL = {Biometrika}, FJOURNAL = {Biometrika}, volume = {96}, PAGES = {793--804}, NUMBER = {4}, } @techreport{kosm:firt:2008, author = {Kosmidis, I. 
and Firth, D.}, title = {Bias Reduction in Exponential Family Nonlinear Models}, Journal = {CRiSM Paper No.~08-05v2}, year = {2008}, URL = "http://www.warwick.ac.uk/go/crism", Institution = {Department of Statistics, Warwick University}, } @Manual{Kosmidis:2008, title = {\pkg{brglm}: Bias Reduction in Binary-Response {GLMs}}, author = {Ioannis Kosmidis}, year = {2008}, note = {\proglang{R}~package version~0.5-4}, url = {http://CRAN.R-project.org/package=brglm}, } @Manual{Hatzinger:2009, title = {\pkg{prefmod}: Utilities to Fit Paired Comparison Models for Preferences}, author = {Reinhold Hatzinger}, year = {2009}, note = {\proglang{R}~package version~0.8-16}, url = {http://CRAN.R-project.org/package=prefmod}, } @Manual{firth:2008, title = {\pkg{BradleyTerry}: Bradley-Terry Models}, author = {David Firth}, year = {2008}, note = {\proglang{R}~package version~0.8-7}, url = {http://CRAN.R-project.org/package=BradleyTerry}, } @Manual{gnlm:2007, title = {\pkg{gnlm}: Generalized Nonlinear Regression Models}, author = {Jim Lindsey}, year = {2007}, note = {\proglang{R}~package version~1.0}, url = {http://popgen.unimaas.nl/~jlindsey/rcode.html}, } @Manual{Konis:2009, title = {\pkg{safeBinaryRegression}: Safe Binary Regression}, author = {Kjell Konis}, year = {2009}, note = {\proglang{R}~package version~0.1-2}, url = {http://CRAN.R-project.org/package=safeBinaryRegression}, } VGAM/inst/doc/categoricalVGAM.pdf0000644000176000001440000245236312136674024016225 0ustar ripleyusers%PDF-1.5 %¿÷¢þ 1 0 obj << /Type /ObjStm /Length 5752 /Filter /FlateDecode /N 96 /First 782 >> stream xœí\Y“9¶~¿¿BoÑM¦v¥ˆ‰‰€¢‹fX›˜écgUåà­mPýëïwŽ”vz)0e¦é‡ (Y©”töE‹­…Fhé…Æ9Ò /¼"ˆJãOÄ*Š(”rJ()”QBÙ`U½®Ð(T¨ð€¦hðé0'={¡µFcÚx/T%´“èñiñ …öA ­„ƒPôÒ#­j”§Î #: cx[¬á³)"ÌÆHSxØ|`aáÖŒr…y~>@áåóËýE3§¦ç‹ÅôvY=¿ó¤x~k:›üЋÉ쬼ùªmŒƒ.a Œ-”Ðj`ÓÓÊC)¯%×Wõ˜9ÿáy3Ö—±ÔÈN©päªXDÈVãÙrœÑE€KÿþjÙ€—ZSÈ*ü…0¬d!%Ec_òÏx&µ$„Ñö@~½ ‡ÞbˆÄ¤0®áϽ٬ ³xf‘üE922[’Ò¯ ÐFL—˜Á”+D¾0;G„V8-$…Ež¥dUxJÉ*UP ‡œíß «ÊÝ++C!‘M]«ç‡`ãeàum¬ …Ü×FˆŽÒA%áAþll€™h‹SUlôzØD­”2i;C ²˜¶”aG µ§'¸Gæ‘R£×oÞRÞ‰l»HÐÇÃ!æÈýîöæõ1Ò3ÖÞ<þ凣Çw‘Ô•ÇÍl¾8:ïÍ(Á/©Ç½zÞŸ5ÓÅdFy-Ïü¨—ûÐB`…¡£Z¬Òä¿6ƒÅùœ2c¸ûáý¾¼ØÏ×aW[ +»Úî:ìýôÅñË_ÞôÉÉ:l¿Mw6€#yß ¶ß ûͽã×Ïîö‹ë°‘OoÂŽr‹p½p·øÉÃã£;O ø‹uàX)nW[Àý~ÀmøÑäï0òa3˜/Ôå%Fêªm"‹åô–ˈvQÛ%À2çŸïv 'wN‰{± ³ ‰!Vž*ª"V´Fb讕ˆ|d0q‰e¨ëëš/&ïozšÛï'ýs 1‡,F SeiOÎ\"éðX@Wxy­xüD’–èYØÁÔ@o\·?ÖƒÁm¬¹Jÿ"R\y¼ÒüI™´1ˆÎ_µlú樨—Ñz…š“…â¥k 6¯yWì‡ÐkL$ïv@)rei+,À0=¯Bh éz™0¢š¤Ä“ReI9•-¬§M¬ÊÓŽÆŸÌ#…œÓiÏË Ë+3èm^Á‡Hs]Œìð Ñl12 ¶DÈ€SŽÝg‹ÓºˆÑˆ‘?Hv:Ú=DªîiŽ]#ôôÚ¦¶ß¢á Œà*=mH¶Udûú@ŒªCÔ(`åŽtSÄòF!íå:U˜ë­.ëúö«ûwº§áá͖Ò$‡I»Â𕼖ÿV¨- CªE|8m=_µÞÙ¬þF»2AÑf%ôœ¶Ü"žYÔõÖ¤ß+dÖ…‹Õ -ë ¯~Z§½óÙíÅÅâÂm¿¥á¸9WÐñH‹mÆTú¯›nʬĩbŽ=HœÃz2>4éÓ´ŽHÓ¢et„-øï–1È:µZ¡åba>­áäÙ^äOÛD¢|6›ôOj"»|vï¤ÔŸ«íÊåµJ›F*M®ÒxujÔ¹1Ѥª&Qb²&ác*&í4™4‹I³˜4‹I³˜4KÞ£ú­s´MyËgÛ.·¤£m¸âh{µ‰÷üíñÛ;´i{òhc×Öoï^fA|ý®­;wŸ?úéøÚµ½ûzü®ÍÓÍ ÌjÏ-D}ôÇ/_ôÇ_Þ<­¶÷/÷$]íþöùɃ·OxÕ… £Ù‚­¯ [î†ýä§ŸN^?"™¿Y'\nî6aËý˜îãnØ>¹w'Á®¾zóŒÀï·]í¯8"xóæ—»ž0Ç7(°Ün¶x¾'ô+N Ž_ÞùùÉ: X#Ûî Ûm–û^;$X::›4Ð&-²ùjNò6Á²i¤M¸ÛÄ@›øŸwÑ]šÅ§.o¼§Éò§ÊŸ:¶ïóμÌîUfÿ*³ƒ•ÙÃÊRçKA:ß ÒùZÎ÷‚tv°:_Òª}Îó¶Þ\åys”Ñ9jéiÖ½¯?Ôûj³á~ÃUî÷¿”}´‹áåU°…Ù?Æÿ‹t.²)׃…+’B!ªÙBçzz{H™æé– –“ª¹L-o±”v}‚h©”¢:ò>$”™Â\‘Ðäž´E¬Ð:´TÚ¡¿÷ÈÂD ÈsŸ<3Ïë•¢QQËJ†Ë¥’Q­5x—Dt Ú&.Í[  ×êF["·Ó'3;µ{:ÆÍ쩈”Ô§Û?õ±Êto3ri–PÒÛDPªÿA{¬õàä"¯ õ•H¨‘©¡Õ+ž“.=bràYd]ð¨ç¾´ñ‰|Žz{àé(Õ‘oÒJK)Mï\•u‘n‰VÔ'*¼Áåñ|kJ‹åÝ—•›…m6—.¸¯ú뎭¢Í3¬jáÙó{ïYÅ<¨‰†p¯”DéiÿRðà\Ýà«¡‡,è$•Á‘Ò)_±ktšuëbOWM‰qt‡¡žZ$õô´ IzF]#sÁÃ~ÁÚ@¥¸$§N¹âŒ Ä9oô’ÿŽÎ÷ I–²£·Î%ÎZôI2µðtÙ–å…eJ@öJ}”4$Ô_ë*RB(MlÉþHtI]ÉO8V¾ÞKÖjH P­Võh3‡Õ­ëE²•w¼K·¾O‰ „ôªëLö¼×Ôùãê7]ØëoÖ†®·%—‘ÊÏ ¢Í&Ý66³TFÏå2E&¹t©GB({¯ä)yJ’A`ýÒ\¦ºƒ¡oÖö)•$‘ÿö¶Ç¤\ìnmá­Z:WÏ>™$—;»jºuÍ=YgSYeÓ¦’9 Íõ¹L=˜Tv•¹ì ™ …X±ö³s×!;¸h·æžŠí†. 
Mp‘/`bîG¹ecÛ'å »I2,EÑ¡´Z¹ïˆôMl͸~âçÉÿDC)‹*ËäàÂÏFRã8r[§aÚ›ž(Á´Ù¸p”'ð}§õÍZUßÙ|ââš½]|µmhDjgZ÷K†FG¢g*O7{|­ókQŠc¹k)9h…Ð]Ú߆Kæ6ëKîxÙó[¶ÌIFw§ßlÛ`@[¤¹!H¨õtHjZû†žÞ#{“:› …äÄ6¾‡x]oÓwƒ™Èä·ºðñúøÌ î¾èjô¿ˆîBŽêX²Â1 UI3!˜mCT ÓjWîÈŽ %8ö”h>¢—ÚÛ«ì YÒM黤²;Ý+ 8 5êã¯W»¬¿sf†ôþ, „ s ¶;^9…TÓL¨Þ'òss²ÐAŽ”µ•}d©íŒü¦4÷õÆM ÏOƒ=By[!;¯œ" <Ù (_]VÔNˆJ±¹{>ù´Ò¸Êƒ%›Ÿd«rL2ZÕá=sk\Q“ÈùxA°®ÈVüÛ}£+5wqÅ-â²Nû42ýXrÛIh…P‘`ãô¡çÉõòU‡m°‰bŠñ’©åy4¿û>´G¹ãN+3¹±Xï*krð$=FÓÎ •KmÚÀ+æÊ¦QÉœëÑ Ø¦‰?÷ð¬×x‹%vfžš1Pr§jñ²¡aÀE˜¦á<Óþ“ñ©'?SÈ“gŒ¯°E,X2µÂÀ×÷úÌu/Ì&yÞÙšRŠJ´K·ø —Jd.Y;ø¾–HÙ"šÉCãþžö‘Ú#‘»C~‰[Üü 0öù>/îärüy#Ê §ñJ{‹“pkCV»(V-Ø7AØG!×.ô˜ù˜V?H½'ƒ¶:¶óAT(b°´¬:N-cÍ Nʃ©;ä@+¤Ê7Fâ ¼e3TCãFlz•’Òó¶ncpEö<`á²¾:DWüÚqv}ŠliõÒ˜„ˆŒ¼q8“ls¶†ŸQß)²Â Fzϓτ“§)1¸X¼ ù{a:cù.|:!5= ©›:üb"ͧó­fê¢%­Ó¨˜•>7‚1v‘|à™‚¨¤nád¦ò¨„ÌÃ5ÉùÊÆII è¬Ô/óžáøñz+Ò)µŸ6+ðÁÕe¬ÜY>OŒ$¶?\<ëE†FÑÑÖ%yLr/›:1ãƒ?š¿‹Èˆ_Ú–FâŸ"©¬æ»'xφ 4‚¯@VX0Ì…:¶ÿ¥øƒY2?n¾’œ»hrÛcN¾¾4£©ÎAàÂmæç¤2T+N vX5çÁYºpo^WÇÐWlGeÿÌI˜Ý¯š(ze­ŒåéÆ;N^$әקּ”¦¾y†[ˆ=°\‘&Jþå}Aºe»F ¦×zþÞ"5 Û¾Ô%OµOŒ ʦít4Ç´íø6—ãC?9WBüÐ seœc¯nålƒíÁÞ\·ˆâÅih>Ê‚xì\™<ãû¯#; bó$ë"×7]ì¸ÊÂàJ=Á3Ò‘*^ÊXûNÍ@Q[ñ~Í×9 f#%{HëÎÏ1±ô.¸–¬îÜ«éù%1„,‹hjÕÏc¨7’‡vfóe'ÐxÑ`¼*…ʺ0ÄSŠö÷‡ßþ7}<Úûawfb…WuÃÕ\ŽÌª½fÎM˜,È<>¾c±Ÿ©¦¹¿0õ'WñíN>Ä[ɯHËŠÉ}s¤3½…*N{˜[“&Ì&Ldò¼Cš²ûÒ »y¯@>Š4úcð‡i5G*XØ7 ¤Æg]™C4Û[ý´Ö;N®ºšÉ¼¥SISßq¬4‰Ä6ºÂ.Ÿ±2­'TjÄû/pò’-ùXì®ÛÞ“V7ŸÌƒØ`ŽÂP®UKËÉjS‰Òäô½ô†3ÖVÜßÚÊš ØÓȵ~ÜI#ñÝüj&V&Ñëb0nÂ(’Öç„ÝO¿÷„“Û•B³üMäpP+’aº9»ÔŒfG“muøúØdfÑÚe}Ðû Cú /®¿MÉLÛ–ò±R‹×,ð~îÒ_Ýç’PŸ@³Ò4ªÂí+b(ü"œÓDvHewž¶ñÛnéà—1Èôún±™¾®²¿rÅû\×Ïú`ÿ ³&œ=am¤µ'~ËÉ«®Ö݃\ºó!ÃE‚H8Nôü¡ÝGÄ&)…*Àe£m7#Ÿý¨P^-̃æÇCž$ éˆèˆt,S>qîªFYž5ÕÍØØ06èR_þ±ÇCžÚ õRØïÀæs°—h½4¼­~´-õÄm¾}Å/)TÌti  Õ¨ëzúSŽé•ë3‘=´’v桎+¡ŸŠÈSž´ƒtä<¶"M >Þ¨3ˆü–²7ÿ–¨]2axÉÂ:º|ìr}i›×ø–™¡\0±2^äTGï7ÀD40ÀR¬¶Ë’D1sWn ï‘fç¥ %±÷d•îÛ4˜ô«dq`1UR„Kªç'ïZÎS:úŠ£\¢×ëåÜRÌ^f"1ÂBy+˜ZÔh*Pa«"ŒBÜè< lök¨ÎdiòPX™oXˆò…qYnôLޤ]5ðÔËé`~¶Zžý2¾±¹@s¼Žk7%Õ¬7)µX J=•ÔæîGcM<-8mÇS%Á¿~Êœø©K¨²‹·Fž*Ùñ¥‚,é`Ä€½¡ø4yûêyÿ (ñ¢'ì(¢øaæÆWJï3gT·n6Õ¦æšßZ“ª'ÌÁXñÂ%7BïÛÉÞ὿LÝ0­áò*›¢ÁƒÀA6Uaf(õ š¹~ û†‚ËÜz˜x•I_ã³Ö­øþþŠ:ñããæáv®âí=Ê“43éëºKéúVºÅƒv.ήÝÚ±,7hB}·Îy&ÈH­/ãŽìHÒõÃ-}Y^¹ðñf¦Y‰Êèè‰Ô‡R‡¿v£’U,°žƒȽ¬i² ,”ûXNÈK»á¹¨$xÈ “"ï )`„«Etuä, ŠÂö€—t÷ƒðÈß_ûv/a¥/˶¹Ub *¦õ*™Ì“qR:˜…2*ëéò„ðЦCªŽ2´7‰4B§I*(SVH­ ¬ÙTI8XÃ4Z¤*çú!­œÀG·ûZá+'…®Lèûx꣭¾W% ^Æ‘Ž©JIöq ?à„a²'š/Ô¡wC¤ $§IŒË¦È¼{ 1κe2À¸ý¢D‹žüáHB]«ûHÖgÍfß.y8ÇspÊ SðÌY;C‡ˆà‘‚ëÑäŸoE§„ræ15¡·» aо‘¢^ÌP…ºcÛþ}Ítƒz/ÁšR¢™Ôñ;?+&G´jøë/×džè‘~ÉÒ‹ü9|+1¦AJUŠÈ¨+k˜ãb›ûRñ0‹JÜD©%ñžFl¼{fšcÒ/6Yv"èº-ü;Íõ/ùøÀ~¦#ù\óâ“ÉP#0‡ÚäjÔG[d£Ù¾úÞ]ŒY¥ð¸_¾‘Mð~·oO"¿m°pÙ9?Ý«FiªX.ÇÇ(ü±Æ?]|©§ä®µ±ÿe¤þkáwóF+¸At‡#Og×ö#£´Rø· 0H˜ Æ °ÖÁÌgºMef€ëòþ=Îå7((Ô´Ú~úÞøÍ2Ð(Ó·xä’ pÿ=íe d:Ïb^6Îã4ðÎæmªÖ„ëEŸ`p7 z4ÏéöM.>tŠ;uKSD:}"•_y_|º¾‚î¤|—ÁË£Ö9®|SÌþ¾rÈåÙªøµªf—¤ñœz#ÿ€3Ê»…J:[ÅœZ N}æWA7’¤båð¾Xsvhn,‡ãÂïTmõ“Vñé8(bûàÜ0Ö»Zö(p¢TÙÿ/ŠÏ=«‰ .úê*{¦¿jÚ™CÈwº¯³#¬_I׈¿~°’ÑÊ'z4Êx w×ÀpV8ø*iÙ¾­{{Šíâ|0±Ëk¤Ë‚Ù2Õ0:´ùÛ­xJƒz\Ùù;f¼í-Áülõ(çQÝpø¹{ö3Gwÿ‚~ØåJ¦J¶öÃÅüZ)6©þ÷N°®©˜Ü [OàÖveSå}½f´Je¾¦ö-éxèmßš(cTs/l0[[XÀJ:%ÑkOð‘~"£Ó=%@ËX*âÄMhS'…¤ûæc Øû ©³¥êº©rñÆ¢a÷Z~'°Æ\}0”·ˆuIaïù˜˜8üBûËcÏô8ðõj%òÆ@´£nÊÌfŽ´Ò¦Ö׫…˜n{Û0+?FgfÙñ›ifý‘ÎGš^l¢Ìj{¡&ä.F=0—£ôà-Ÿ7ÅÝÈ$+K°Å[ƒöo´’\‚Øg¨+5$ÄñPi—6"éÄŸ\Xv(EŠôyݹýÃéuí¯–= Ê[}¤‘qÕeæÜý9F‡N?¬ÂvÍŽDb@Dñðà²Ûó5yéjHxÐBÍéÛë)Až%žŒn†õ6WìZ€[©­L¦@÷§výªÎo}-½šLw†'‰Šeˆš|ƒÛXl ÁŒèiJNÙèLr†Î0þ£Þ„õ;¾h£„4ïùÜ:ÆÓ”«ÏÔÖø¯Gd=­ôŸÏËÍ:uêÌ#jô÷zÇ 6èz4SÁ÷Tƒ>¢Ï¿Ø¡wØP¨zßFD'Œê×zå¿”35¾¶îÑŒL! 
à°ð÷*DOm=0o«ÇRµòîTÓΣ®Öù92®yÜÔ¶h‘ñ´w£Ñ¥õp\¯°F?6GãyêNÙuÓ©L 7ÕZ›¦ˆ>v¾iPçÖá-/rbY³Ãêâ¾Áÿb"2…ûücÄàN5/®¾7C'©ïPÍÔ[ë'±ÂëDÝ;´Uð/¿‹ú=³5dŠAl8÷@Bù÷7R‰ÒùkÙEbt:áæM ™5=g¬ZjM&ì”murkz;Âp.C%w’†dÁ¤Ncn¡ó&žK!?öÞÞ ÞBÕ~áU,U<{½æ˜@4ê_OÐúÝ -cw!3ucq}”„íy¬uos´nV¦á TBfÙhÉ4uh]Cw«\‘´ßp¿a’;£‚ž:ð>©@Ù¦Í,^¿*ˆŠ}½ŠçH£„w€4ÅÜ[¬õ~¢¡’Jë½2ŸuØ–dÏ>sINä $ì •Ã"K@¾Lû1”[eü>àKl”ÚîhÞËLô‘zúð‹|¢W[í=ºzrÙ¥ÅuG™Æ!á£waȯ>µ3 LpxA­ÅS0¼ûʉ´~³÷—¢·Y¸H1Ý>¢¹#·÷0ërû©¡o#¨y \jm|f ñ‡¡õBâ_­Ë:7`ZXSÚN¯¨3»rò“ÒáC½_+¶òêqFwIUv*gÈyу•v'ËL¹*÷4Çý-/ 7ÒyÞD%Ggh#twÖõ¿¨?õm’ .J´6î«Çÿ: Êè´N¾!Z@ü ý²ðŹY§¾,þ,êZG›_˜ ‹™¥:Y™ë o/*·‡Œª9½Y5Ž“`Ùˆ»`½2óBêrYó¦ÈHŽë-¼Ÿ'ÄRà§X'BêÚÄÇIüà1Aê­ÂïƒõÔi¢›Ñ­ÒƒR€UÞj¹T8s>áZt’]ë—_äß™ŒVÈB<žëjt>Æ?ÒÆéõŠ\:– òÃó1ð5Õ‘É#m©žNUxЧğŸÄ 9Ýf*'§c´vß”Æ%¤ZžWù¹Ý¡B–¯"IMµGÚHÛïóó»ƒ ÄÕ/ÑDÐ4JÇå ÐRBSŒK·Ÿ~ñÆðÕOD}¹„ÈÕ'-Ð7É9À°+~ wCE[ïN=*ÉóW| Ml[Éõ`ÅXåÙ»Í|©ÇMgƉ¨N–Iç}ó›p.ÚX!zpõîto—*ÕoJ˃DËJ%¹Ç&Sw´l‘кÞÕ`ÞèMô&¬,ƒD8XÃ<ÐbTÕ ²Qî(áìTìtÆhêÛE¢¼±F+ÕËäŠâ&¢¤r•C}5Yn™ èo çËFÄí¹Œß­©tLY´õwWI ½R¤?ĸ´v‘f}{Ê­@×6ŽÔ 臇 ëMFukóÄÿîR¯<¶>”}µ~­ §Q ãb.b¿»oGZй8 šŒI!®ŽkÅP$^ðœ“)¼|T2$ƒ #­¯yº1 ÈˆÒh[†7l0”vê®Þ½ÉéK‘hù:áÆM˜V(0ÊG·ítÁ+. ’”$Ñ#ª8"¿ä¡&y|ŒG*¤IÂ%ŽÜäɈ~÷ji}Ç–2wI_ëyN­Qï©$B‰b2Íx\š ”!w÷”¼n *‡¦ 5Z îø†übÑ¡®ü“È1ó:|ßÔõÕÕîGsbcƒ%Oµl®FÏ¿þõ‘s 8§3 8 øè‡‡‰¹ˆU¾ÖÚ¼€ûvSœæ¨àWTkøJàˆ;$Ø‘•‡·Pìý&ìAS ïN¨E#Ö9$ïÌ ò̶SlXLL»ºŽ…“‡FwWˆ”Š2ø’°2x“:=xÆy,¡K‚²¾;™ïHÌ%•vtR8ƒIü¸< Š Ó ùEìþ úŽ¿`(„¢Ë Z« <™õ„ZSŠqbÚ„Œégʳˆàu*XÖiTöBÖ{'TŒ_ (_ZÞõÌB ˆ5‚Ï/V¾Èúú}lùÙ¸´ž¦0ЀdRµl ? ×:ùHKÛ4ëá TUÅ}¢mŸ%LªJVè§ê“4ZàG˜ÒÃOE¦'/Öy`–4 ‚3Pâ. í:s«-ú›–´iÜ-ƒs¼]·Õ|MœÜ³4ë–¬®ªºØqqÄ+×ü!Sê|ç&Uø¥ŠÚV¸¼†2û]¡¿•¨XãËw£Œ‚EÁòc—"½äÞ&]]äy@Š`¢[äÆ7ߟ%H1(”Œ‹Í)§R9D®E©q[и:0XpXÉ´‹›«fÊ UÎ÷dbÇ8NÕjž–®È æf* µ‡Ü2:… ÓØTfͬUCvøù³¬Ûn¬¢c’RˆYuu¹-1Íëý:ò'ÓfÃsÇÕcU(]‹ bc]¨kƒÇD2îl¡rmïÞÄËŽ(F]Іãã§ÁçÛj«³8Ró¡Ôárƒ5æèÈq ¬%“ ŽTR6éˆ1{GÅùö¥KHñ­C·  éÐIâ!°uÕæ0Pî "ºýÒª5C³òLms„â­½!ßM¦vzÔ")â€2†·|«TYÛÁ8·`uó³§Júñ›Ó›Þ¨iK(c(bχ)}”9XU4·¹µYN õ>~K?lðü:E+˜"8UqÑój{‡È%×k]æDX¦8x‹¡°yJ﬇HCʧFàÆz!=ø.ª„¦ fjÅnâ½»Öï••·ÂjO­Õ½&Ñ]xUù«S0–îX² Ïtð@ê;Sû±˜êÚzõ ‘Øãþ¬l"V&ïnÓ³\Ñ I5k:¬¶ßÏü_æoa¬ÂG˜j³Ï8UnæÑx@^úŠ«¶åøÅÕÈ59=è°qúŠuÇëѰ±íÉP³fóˆóTÑW¹#6ä”aÝΰŠ2¥;çUwPøê]x^†²™áÕuˆyCE ¸à]f«Š01ÃÅ$ PÇËa:]¨õ1-]`W¯r%XþÁ Jü Úñí¸úþÀ¯7Ælp™‘÷ìÓ­ì$ñZ1AÄšüí\"•Ü2j ‹2]Ñö ÜÎ1œzì™ëÉàÎ0¨,y'«7ñ±æ-\óikSo?gHé‹~²%}!ßÜ*mÏÀ? 
ô1êø2Å@_í”{ŒË1Vãï)æ¿æõkwwÈóJ&æRfVú¤H²Ct¶ê"3ZZÁâÿÑu®èJàãǪãÂæþrß9‚Ëöün þíñº)+ÿ\Þ5’ q)IÁXÊ)%Ð Zʤ°‡õgðãNÆï1>–tØ{4üæßu J…ª›ÞÌðý–ÔoéŰ¯¯I&A A1™´BójY‚Ö© '–(6F¨ÕÜÎ/¸ý¯na“›ì±I´N¸+ÞDD&‡¸ÛnÁvïàŸ ß'Ý#GbQ—u¯kV'ú-ãÉy&Iý­ë=)°ãßÀ)}ƒåhbÂMì²yýyïIŠŠF.AµxmÌ‹ð• fíŽÑ« SÛ 'î~uã ;8qºsX·Lj„Xrh“²ŽîM1®M:ØÖá`wSàö’|è ßZ}ºö3÷Ñt#fñm…Mº>1þHT` ôøï•Ï„›ãˆ(Ùÿà]cßÊOw ÁÚü (1‡Wgö8Cë:UcI"I n±¦‡9x%90êŽ{ˆŽ±ªáÔo¢ã’‚œE™ºñév)Xck‰Ö|®MΕÎp÷çÀÏ;Í&”å.ÉîÀ©Öú—–úY›âUÛ ÈʈiiºŒÆ~¿¸ ¶˜ßÜ tJ¨¿›®WW"ÉF”j3$ÉÒR¡[¸…[š,7ÅØ†´QgðµPœf}@®”"ͬï3õcàî‰ß9ò#^räÈkg&pb˜Éôuë7‰/‰7ʵ’ÊaÚnÉi‚3 ËÐwåS¯^#GÝðo±I™ ž1Ó‚ƒCTd êüÛ¨§'¤Úx€+ˆ¡£Px@lÊymRð¹Ù AÍÏûò)wåóTÒi<NŠ4Àç÷~ºí“•É?Èïã:‰ßg r‰.¸=›r×òø¦â…!{M@Düâ¾q³dêh%³½ÙÃ"äÒŽ¬é%î˜Çb‡xLÓH÷å’"ÛgÅuV`°ã]‰WŽ€vø{Ɔ^mÎà5žŒ~è¹!|ÈSÈ7øCQuƒ5 mPO¥ßèœ?;°Êè—Û n{;H…$äMQ“þ¤‹¿4Ÿ¦6Œ×ˆ@;3U^ܺ˜,£_q¸töueG¼EpþÝùö€ß˜‰Yjl—aëñ6iÍ`áPQú¸X "ÎPÐ Ca÷kØâ‰ºÑLŽKðIÚRø©Î©ACæÞ(½öle¿G 9Æ´­BÖy¶‚R.ñ—)6Œ03y ãõ¯ÊÒ[ Ü+dIZ ÷ÖÝZ{ÿº›é˜û«nKÖ»Hj…'žÝÐ+ú)ò©“ÕÞh¨*°—4Îâ”ÜaGþæqËÆˆA¯ŒÆC¶ÄX~ÚV¹Â4e˜NÅc[bŽšö õ…í÷ÊKê*F5hæÇ<>*i¥¸­Ó»‹B[b-Ú±2åļdl”Ã.ŠB£ÞA®F ¨°YM9°ÇÇ}ˆ=p½ã—QÌüQÛÚ»A‚WД 1Ó§îåÍÍË n9K)Fíwf2X•.òt–¡&ÅÝ=Àp…ÒÓQmpJ¹5ˆ>Z½í‘Kþ“õ&ò¥„‹|¨³ýÕd—š¢N®ù ׫)lÝí~x¹q]caye8æé"tsîqv·I¢ó‘´¡R9QjÊüT'VP€E‘5<¼Àµ7Í|¼VëHzlðcv“j^ÄPw8mýxœ#O£Ã ÕùW¢®õGðˆ…þBÇlÁŽô/²Hµ:YEç(ž ¹Y¸{âiNèNW"+—@òAä䥽îG~ 4X~øJÛê`$¢åÁ^x‡%hwµj¦Œ*t#ã_zpxrȾGÉä¿ã§.¬Hûâ;D´èb9oÔøë\K1X1¶pEœ8ìàx±´m¿ØY`4)Žë©mÑââúM,V¾7|äÑreûÂI—•˜y£%‚ŒüÎ5\t÷]´%ûÁé‰Õ¬íyFJ“gN1i©HjìQ®›M°,Ãï%Vx´Óhßv^Š µŒ^+rÂ8øò$9µê’;tŒl³vòñR)Y° LöÈÍ|)¹ǘ:FîÚ>nt©£Ú%µ‰85B¯2y›éç£Ö ”¡•ríÌë´½˜Ì*Ÿª¨¹ºkÔ€Ñä¸Â½½ÚøŒÏ`Er¾šóÊ4²~ô‚”TÇEbc•«fè÷ÖÎÞ2¢q:¾M^¨ŠÌã%¨ÝËü]®ó±³µ‚%´’º¨¯íŽ7q¯Ö­›æøÛúGºù$cê)Ñ'zèúžââ_‚, ã7??R—-MG’†Ã×íqžÜâŒEú‚¤Q¬ÒNÈ\ §Ë¨›¦°&"à½0¥é.¡œ|úñ<9éÂÚßayév | ²õ¯æ·ó|ñ[8ü2Þìï0ÏÂ~æú·6&\ÐËd¾mäÙ}äXMâX¼ ß»×ÐÂÝ¡¿LgÜþ¨±p©gõ#s>òØHÂJ`H%ѦX™]¤ü1trRZ¯î¸c½·ƒ»Aˆ°uÒ1þ#â'PB ?"BAK qKa^;5+ã"Veô*K ÍÔÓ†nÓ=EYó-¾Šš5®žøŸ>uÛ†¤û›sÚ™¡L,Œ ½WyX Á),æ@ØÃô¦`0/ ·{:ßô«Æ…;P›Žè\Ëp,¾«äìáÃÚ^ÐÅ®žJQÛè|›e_2^Fgüúº åAxƒ“>9LaÔÁ3Ú¼×ßzº}n Ä}Ò»Åj­Êå0u‡m³þ {K9)öjM€ F!:q)*í™wè×cqè«]W—^|òk˜†Ó'–Ú<‰¼ou¦ ¯)˜wXà9GÞ«Ï‘ XxGà£ôýì‘i$(ÏÚ׆–;Ó?„PÀj¿|‚˸n’Ú~CÒûÑõTÓ¼]¬ÁŽ–Øê…àþƒg¸§®† ;œÙ–øXh5x¹Ç<k±¿`¯F+b~Ÿ)"†aHN”qÙbKÙ–þÁÍù”jG£_Kú½r—[µ;‚8Ý[FœêâwWÿ-Ù›7â—„G©@OÜйÜ9º>Á{u:ÛQ p1äz­åÐŒ„¨æÚ›V°þ†×Üã Õs§€![Ç—g<+Œ v}g‡ÊÊ S!£’ ¼au{4ýòEHm¤£WmÏdz9gó;÷Œ  §@5Ø”ðÓ´~RߎR‹<É aÿëî˜QÓðõ ó > ‰æ:°ÒÏ—ŸÍ/Æ‚aÆdІÛzÎýëÌ’’Ìœ64¾¹¾øÞɾ؊üû6èqrW¿n¶ó¯²xÙÍB¸¡³EˆUˆÊ·81É™ù»v¶¿­Ì´2ªiŠ“Þ0ç²»¨à}[YQy²}E”(F±e("üñ‹4¾2±ÞÎ#w‡p·-«ëa-æ1¾C—3Ù(Ž&0SÛ¡›këÂÆaÓCsCÞ+0“ôæbpÂ|pæDen¿!#â© +æñDæ\WbëÄµëº !ÊÅáîuÒˆ8Âlo=šµ«eQUŸ#àgb [&ûVËWa–(ãt.ól/°Ó ŃÕN©±’ïÏDãÊb暨 “VO ŽšjÂ0šþ2ípOwF4x\5PB–Ô˜S?(1çža8òÅW|ű5•ižÃ£Œc*{y;–2¾µXíì^[0-ªu ÃW g6’Z÷°Gy^œéù2~sÕuÑxóú^TRÌJB ™¬åÚFèDInË9€–†‹ÒÈÅœS¶ÀvØsiGÐú¼6£Qßœ£©·£ ?ÄIä’9Í¥œÌG&ÛœXp¡¶EˆÝôF`¯±6(nǵ§'ûyõQð.ÉãÒf@.ú¼3m¾Ë÷s(3ÿ²ä$ì*æ OÍÔ¤‰ÎXi ¥›G¥cˆ|¶—ób/[SÊ®J"¸ÐY‚ß²Ã`¸j8ÖX®)Ô“Üq}GšyŽõ¥2*׸Ùn¸«>¾ª‚µÅÕ?*ŒÜ[p1•¾úÌY‘œý ¹} úõÓ Mcœ AAôÿ´žže+M%«ª}W+?lýågHèï†Fÿ\œI;‚Š» 2ÃÎ&÷ÕA÷l?º#ö‘ŽDVê‹€pεOEçç¶È—æyx9ñÿñŽôödU2XLfjÜ3f“ .{T¶ý©–Ý^½퇡ÿD=‡F2e§³N¡zi:¼CÞÜòÈoQø³;òPT‘Ø9'‹ì1Ë«(ñkendstream endobj 304 0 obj << /Filter /FlateDecode /Length1 1016 /Length2 3813 /Length3 0 /Length 4481 >> stream xÚ­UgTS ³•&MŠH¤z‡„^¥ Ò› À…@B!¡¤„*½EE:ˆ ½£)‚ôªê”ÞDÊ‹ú]}ï~ß:ÎÞ³gf¯™9ëðs›‰«AQ0m#–+¦ XDÅϯ†A0pR‚)`EE0 †u¤@XNIZQI† 4P~h¸³ Òþ!’ÔÜah¸# @0.0wB G0C9Âa? 
@ Ldx¦0/Ú• ƒ(Ü8ÀœáH*É~t‘N(@þ Åzüò†¡½¦!‚Ia€`ŠB"ü(̉JÒEè#8ùÿ0õïâÚXÂâþ£µÙ]y{Ýš9g\Î94>é2áŽpÄÄêˆÊpÅæÅg76•çH^–®HÌ@Ž©ä$Cä´x“Æ_` ¬ÁK„ÎÝïÏõ[æiæ@‡•;[ÃtŠVÏÁ‹Ã¾p”ªr=ÂFL¼¡bˆ·ìù8YªZÜ4šÆ:Òþæ¦;Ô;Ríë*•5í÷v™À¹6 ’É[iþÌu—féºìžä%.hœ:•QZ­w'jPüËÔÇ–êxrWÍ" ¦7ã´ÑÌï×»3hÉÆfŸFÍ´lœ‚ûk•q^šnH‚ÍÆ³qõÒ *ØK]{‹¸¨LÊÕ‹ã†YùºPi:Ñ_WákNgÑmPywù­UßX÷°¥¯x¢˜9Œ:/“¸º{{{žHC™Æ#óEPë¢ËðÙWx˰ŠNÕ¦U¹Ã”s¬ƒ{OÞŽ)“XfSšê7;ÜGCå¨ 7ËrG÷©‡ujèdÛ«@^ÆÎúz'“Iš+FaãÚºº˜»!™´70þ´§Vôk†ÞQ¹ÜÀÄ ,«ÝtL7á‹>šºæý‰þìPÄØfênEáÕ/êqô€³§N¶¥tj±FìµÆ™Ü0 ì÷aî ƒñ‰e?-‰MðŒò¹¤¯)4òÍ“%º )ÃóË‚$rC[¶”žhV¿XMd'÷'{äñ¤ÞoW¤³MfTžå¯1¶ªîw¹UÍȾäfßy#ÐIªíT!½6–UÐÄ*Ðñlë&çÅê#– °ëÖLË r XÂêžåÏtíUyH•B©Ê–4òWZߨåq+DUÝÙ]L.š.l퉇̰«ëÃH.ß?phŸ«xÚò:I$Ú£:ÔLá MkLéX5{‚G!–n¢ú¬”H×¶ØX ióÄ¢çf‰$dõAÙóão&§ä_“DËKZÓgð’K¹NP_èežÒóépïP¬µÑYÊrŸ¦ùïïú³v‡˜•yUråy>BX¾3&ß0djdtX|Ñ—+µiVLÅ­ØX N îËÎìc½”{ç}ôžj^¥°=“[{Èdóݬí"ý¹1ZmΘ+·ö׿חw«¼È©ß—/¨¼eƒZqà¶aúnTjï8#¦Ô7v'Г§ð•‹Õ]¿í‰v`³+ºÙ ôŠT#€Å× ’+¿¼­ éì©f؇_7ïŽÛ\êôº ê-àâ˜yYä›z(g…Ø<]Üà±ÐóKÜQ ¼½Ø¦IèáõG×›)ÃN“K”ªlC²HA…Ñ;o–bí“à#JŒ2¢æ–ÂYr¦]ß*ŒH7öªbbÓhÚSVŸç†Ùg~öpzε|YTç-Ö_:Þý¦ë\îÌ)5í²Ø™ÿîúà Åc(ëU·>ü=ß ¹uÒã <½—ǵ¥ç†æB‚•PØükÙÜzpSe*SÄÞnİëù¿põg™]Bõ.O&b¨²\6fÄ…ZôfZ—@ÝÎ{«ß8óÚë°\ðºôKÚ£VUP¶—úžl aË{çmÞÞ:P¹ß(šèšv‘ìVªìø’òŽEpóB.Ç<0†Rñ3å$ŸüÁ eY/eå—÷êU[ vƆcI¸!¢—žú±SÊ-ϰì%î\PéÈ ‡A6¬p‰¶Eáhí'eNWoQGT”=Æåš'_Öíä+¢Ñ“©P3dêM¹Ë"õ|žå*è^ÈÃÔ¤-1|Lò°M-³K¡˜ÃS ÅÊ¡ÝÀ{>êirú̘{ÀÙøy²Q;„V®ÝD`óKoôyø{ÍP~¢B_¾®–A*B»`O£k­ÜÏõûV|–y‘¶SMˆ;_l…}»Zê“ñRðJÄ‚zàÅgW“¾Š?Ó.µ?ÙŽÊÒ–¥À_â&ž‘)LUZh²r}/3ôŽe¥‚Î6°,¶K/TÜ\BNäpSû ³KK©ÞIZN£<è«ÉèØêï-°7´—„‘½ëuåª<ú,<Õ" ,N‡Ø–“ÉóŒ™ø;ù_pÛt3àí9Í=œ6QoUvZ©†¯iŠÀ6¸ÜƒXÌ&r¢a‹i ˜Î@v~ÒÔ§BI/¼G¯®œ~ ·ãÜóãé” lÜ)½'¼ï´«âq+ìÀ¤ŽjŽäéŠë·M6 X˜z‚—È9°wϺ®ÎâºÈ£c³ΠÉ­ŸvÿM6ß––D6õs5‹¦êÁQÒô­›2O ¸Î:: 2ÐéOS¼H ×¸<3%κX©¿ß¸aú½ÎmR+`õ!Ú¥àú «óÕjzD](ˆËî4&éTå–Î.f}£“hwY¶àx7ehzÿ´§Û6[sBû”Ù${ºD²ÊðÐÖcî©îdÃÑ7Aj¡ ô‚ó9=)³eˆ|VÃVüG®åhÖ½småúýšÖNÖ=ù`£Ñ{Çœlϸuñ†ÅGó½f¶Nw³•}ÝÂ@~–ÛV¯[BsÆôÁÔ,^QçÌyyóÚžðá:jÞ'ë?t68ðü¢ÞNm°…PØ”s‘ÒîWbWÐw•«8ÊëC…mÆ&~\²«•×ÿ»G[û oaÖrßm~é!¯¤?wsÒ€+k—è9ú,¸çémzRX¿]KÆSlçÒ`¦z|AÊ´–bm\4Mdt»Îrv…/H.TƇ1-[™ëÅ¿»]½%»Åƈw’°ôÙKZŸâ·ú‡££XÕ‹Jx•“r÷™)(ü³}‹âôV½ñRûÝ5†´ |Ñ«>R>•ÃI®pU áäÚÄO~ü@…LäÞkIË4šê¹ãJ¤sök[HK}f,èQü¹Ê}Ö¯g䚯áýŒ¤‚§Ü4¬tÆùˆ9ѦÙC ûЮutD‡ mÊ‹ÃÏä9!'E;wù«¯I¶0¼Úùl|>åbî©Å¼çç’6ƒ-™{Á°ûÃß4»7 ó"L¬ÓóYáÈLm­¯—’Ù#dÞNF|òŒâÔ†¸ÁÌÖ`Ç{ÎÍbqˆfð&‹ü,tŒqF˜ãË­Æ:!‰÷ öĘ‚…ƒˆ 奒ÄÞRž… mبä?®;K%ÌqayåašûráΗHd½ðtéÓ£SÇ¡† b‘Á´—}¦Mðé÷pD5Œøôc%JÝõÈ^ÛÒ‘áâ,÷òÈõ³Ä"¢Ää°Mžbðù+N³l¥œxËÇ•5“Ð1åW”¤8~i•“‡ò©¢«7h¡‹ÏÑÆÏdEÊ ýͽòvãèì/ Œ$~#«âi¡]ËŠ¢œ ¯¥6: Äñ)ÍûÔìà!ìj7E=ôdõbÇWnÉŒˆûì°â2#¾_’1³B Ï·ïHo“ó¯(ßàè˜LÅz-w>÷¤Ã¨O¸©÷b›Q¼ËßÄ3æ ੌS•z£!<7rJTih¶,€^›ñŽþ9i,‚ëEÀëp¿OrŠã»Ãû÷ˆªuÙÜ÷l—ß<Š íïy)„MN‡èðE ÓøløZhé$q×ÙºÚgûjˆx»=Yj3ŸP#¼C¸5Ü6Mh–¤!aù5wuÔ£ú¸·‹5:'R¬»t-.‘–y‘]'òñ¹1ÿæâ;˜Æ$Nd%>n{8™Ý}>Î8g_Ë@¹r5X7!^'s€ª˜aM-*Yºp÷, úYˆF5âI%Þ£°Ízê}õÝ|š#Ò²ÚÙ1#uôfÙ *ZxÙõجæ~ü8nPŠƒØžñWÊzýK(b˜‡ÝÒv©ËŒÛC‹¨TF²è·Ä¯\ð âЛøN‰Ÿ iRëf 9x%öî×Çø­Ñfææ[)sƧNÑ:É–%³t™Ûï׸*å`çžÌÕò@7\¬&JS)Yóö:l‰u&¯µ|zoãù.õûè†kÎáùº²~ 3igµQó~|÷•ב&_¬´|W‰ ?RT[ŠOÆNAŠwHsšÕšÍ÷?: êàdIoˆŒcöƒ‹eö¨ko.g@ ðwê–Òì!é€PÆb|‘nzý©xùîØì6ÿ;}±‰"ç k™¡ˆådÒý‘4Þ^/p…ý†Äô,ÞóÊ,»ù¹AZöÞë¢9P~Ëh™ð\+MÊm”«‹Íå’`édÃØ#º{gg‘¥±:­«~•>_Ë|jÈ?,ÈOÇgQø$„†S6¹¤ÞxdÔðÐ!:À…±7-Å ëMI(NН´‰gf½UʳÆ×þ]aÙ§àá&ŸÓÕ¨ìÿºC$ûAÜà#‡5$ㇹ¤oáêÒ+ôáv!÷g”=z²Æ£›yù<¥ÌGÇ/ÆCVÉzïV|~÷‰«À–'AÊú¥Òðv$%#ëåŠúAù!ÕLêÔò8÷žb«qÑš/år#sÑ+ÖŒ ÕªESî¹]{NÏ Eû˜™ªø¿¸(º†e ›àWAuÞ=:‰Ô©=}2¹”æ3Ë üˆ¡xÃù–µ‚ØÃ›¨£ëŒ:]ý› ÁÖôš£QY‚‰ —ʨÓ<Õ¶qKpÈݶ6½ªpUþ¼à-†'}ýîÙ{BÏWlEðÆŒGÞƒ®w°Ml…¼‰Åäè^±’³ôÒWâoòaµª“ŸÑÞ1/¢”k|êE4–8o÷Gæ‹0:XR5"ÞLÌNì7AÀ†ÍÎä´dí+óÆ­—×ùÏ'Ý~¿°®;R/]²¢¾^ÎÙ¿ØYF·Ì×25Q=aÐñ_¤ž¹T³›aãn¥úä”Tù tÒ ß->tÂ{p%NÓ2M®>1¦~žòÙ2ÄËy"æcˆIÎG™˜¶UèÎuG˜}Záº|Íž@Ìÿ!µAendstream endobj 305 0 obj << /Filter /FlateDecode /Length1 1084 /Length2 5392 /Length3 0 /Length 6070 >> stream xÚ­“w<•ûÇ‘yd"Ž=ŠsŽ™½·²å8gèØ«ˆÌ„ì!YÉÞ„Ìl²CÈÈ!ñ;Ïó|¿Oýžß¿¿×ýÏý¾æç¾®ëæá00R´ÇØÁÕ0hw!ˆ0D¨¬k¬!Â`2uG`Ð*Pw¸4r÷.¨èáÁù%¤ÅÀÒ"¸  2ÆÕ‹ptrò+ ü$ TDÁ± Ô…º;ÁQ¸0(hŒ!àî>Â@E$hôg†ÐîÇzÂí…ÐsÚÁhèOEšh 
Pòo³½‡ë]žp¬N'Rˆ“hA#}€öpHƒëÇ)ùÿõïâjH¤õgù¿¦ôüPéóŸ ÊÕÃŽêbìáXô¿CMá‹Ó…Û#ºeçïwŒW“ìÙÇÐ)&æö–5*{CÕ"/./¸l2VmæåÌÙíÙ]]lõõ§ânût_6¡õž¯#;õŠ»Ì©½‚~ÀåªrƒLsTýòVÓÐÀBT>žÝõ kòïžë´ÍïKwzìW6Ö`Èäd¼Mhën;^e‡•™2Ýæÿ< {ØgV0]ÑÃyEØ‚¾¥\ÑçjÖ]o|<]=±Áö©ÈjžÆüý·ŒàÙÒ¤‰žž†rI½c£‘ëTòkÔ*ØÞè1SÁ2!Õòbzõ%¶åŽV®…»í#•–JFÉ--.½úþ™]«l]õ±.§‰µÇVªo>*õÏûV”u+0pu¦BB³sêïÙ«BñòDîwÁ°rØYpró•ŠßåB@E@‡uĬ6ÊÖÙÐ4«ð²n™ˆ× 6žþ–Æ.Ý7|C×EšMstWÓ(‚Ò:0‡ ØH×CàFwª:¹=è&,˜ß¤­ãºÄ.=]ÖQgk7ñøç6».Ý—¢—ªU²‹dŠ­ŒCD£µ^m·tC´*®Ç|Ù#ç›þZ$”’—ìS*±×W«ALA×éÎgžµ½ïýyÌóÅÛ©Í×ÞLGä»EÁŽ„¹xz¬Šòtô䯛¼h÷ê½ä~{É+æ‹Í—’×:tuà ϵ¸ï¸‹UÞ‰ûøœ`Úå¥ÞÚ«˜ýÎø’ äç]ÁRHˆ/¸=¼Wj’Â;$Ð;°ôõ븦ä¿Ã×À¨Rs<‚—‘½ èš $ý™d3ê”þ)͘EüÔ¶VÓHF¾áºõâM”ª³û²ØÞfÚK'À¬© mR“ø¢¿¼·‡çÁQøêg?Yì8u.Ošv×ðËN­Kу"Ï. ÆqXˆë F:<«Ÿ’zÀZ¼v#ÙÁê…fo1¤#ã‹¡Rtíð¼·ùAìÁàćdTŽHñ÷Í_wÄ&&Äm¦Ê5?"!ØÜ7cSª£ÉÏ_Þy)«ÞðqLµ-ø²(Ao¦Ftìo[qDDæQ­N &‰÷|”.Ì·™õyµ2üž/0º VJZÒ’uñ…çyœ¡f÷3xgg*'›J“òG 8Š9åæ¹Ì)“åõõfçÚ-IÉòAU“ŽlNï¶ÒÁo.•X×™ÍuH[’o–䫆îßÔŸvïîjÂø ŒUm”•¸Ô"?%Mð„”Ò ê/.»¥II” Þ'Ý´pŸˆ :Q*n‘1ïhS{\«dá×íÉcàÙ›Qvf"p1W Р #[­Ïò@¨I—'7ó]š±”&<]>LšlæÍ±ÉbyVÜ1X ±Ô_ÎÚ²Ö= Ž·Td˜ rט¸i¾?MÓÄ?àØ—ÌY¤ÝûqEÌøzkƒÐ÷À¢ê©cO…)ʸ`ïèÙ7ø›i÷´‰ÙÔ©± Œ¼¼\è…ºj¥^§rQsIygñ7µño³¾Ó¼‘w¢|˰‹}ZK/lÝÀ•þàÍê%«n‘q‹Üަ§PúA~ºãåu®”Ûûе`ñl*C|ÎÚŠõz¤gù£ãš§G¬fÊQ;‹Î¯eÒc³ûG•tž<¾´Št!,ηä¶cú1e‡Íè‘ R`|—€u[ÿ¡Ÿ¾²DÓjT¤6«ÂÔjB˜>ÑX²ä2E´X씩òõ¸wÉe­êò˜©jnZàø8èÜrï²¢ùawêÝÉœÆGˆjÅÛÔƒœûñmÏÅ?‰~åÛYiacc׎¿Î>–ó©oÁ§ï ™gX¿ü>I~€s¥Yìç´0ý+Åo…i_Ûk²ÇÛ gcaŒ®¶ÙOˆ¦|2Z¯t<äÊ¥2 NV.·ƒ¹)È+d’¶ñ,ÊÝh:NiÊøÈßþœÿªS[]m]¼l¬•ǶUò©aêº,Uˆô½NQç[ ‰fš¡—cÜŸk¾ILvôKçœÞi T.0"/ÐC+áûï,£“Üh›¾ùÝO¬4žd÷ÒÏ µ6|Lá2¿ëÃeõ$¹jy– ˜!›º^‘Ǩ WS6ªRNý*9ÃD ¡iãGMê¨ñ"嵚'¬k˜yŸ‹¼ºFBO0)ÝþZ*¸¢—B8E´ºL ¬¿¤Í¢±CN½Ó½ókþvJ“u[vmÙäýlbŽ)„ÀÓQúë€w|š $` ªŸ\º¢^†ö„È8žD ¾ ‹'Ë}¹ÂL~ÆíT~¢-ÿcŽéQ¥ç ÁYÝ Ë+¢ÝS­u×së>VÃq¥5×/ m¼eüýoòȉëßÀo~à1:,ñÝjÍY´-ºX¹Ò’ö¾Ÿ—‚b{bÔ cIÊ"ôp~«ËwîÐ:¯Tµ¯ÎqŽŽ‹/TPŒIgVóƒyv\ID¶ž¿k³T&þ’2ïèÓ˜¨ šô ¹V 3¯¯)W˜s¨c(›^YåÆlÍæÞ€_ñˆ¿¶œ¯V8q÷¢ÖߟOÕxKëû€zˆäDöÚ–ÒWqIs‹§Bg±˜ÐÊœM :º<’+vFbm¼ÆºÐXóÝ'-Zw`¾¡oøs¿û»ºÍ­­ÒL?…;#Þ“0ð›?ë:/ˆ‡í~|"Gg^#tqݶ–"-<Ð2÷ÅYÚA–MïIV’ìži¡7Ù™oîIbñDÌUà+{7%‹t}$© odxù¶h³Ÿ±ò=·˜-öÚäf_®kڴ䟢•i¡B=ô}óÈJñä }À2Sßýã )«“ÚHT”s®ùøŠºíŸÝÅžg!_ëœj€ŽE^ý•¬G÷÷ÁÞ/(i–uã¾%+t)xÊÒ¶¦7Ñ)q£%`Qÿ‚3.tqh¹nw‘%!Ú’Å2„Ïîpà@nü©Ýê ¡} aÂ[Y¯x¯?îû5$ïÚÌ$ɘâ9ˆÞg9—sAi/„î`C65fË‚%Ƕ>àHü,qÀsöÚ¬™•(á¶îÓ1ÀH۬ϮR=(ýŠ)³~Yüf‹;Üm=íKœ[QÎwæYSST¢¡3šPì9—ÂGnÞTškü*\~<·Çmi²&ñÇÊ•Ro†­E­U؈n€kî5» lPôÓÐÆ Ê^òŒàñR$r ´ªI^,¼&O.5§5‰bŸÃü\i´ï®y·RCd¦GoPk›™^Ç­_…¸!á´Ü“ xÏR‚ ¡¹äö»zßãï–[é5*‚ÐØ‹R!Ä”{¾Qô¬³ þNOȲÇ¿öu•%Z-4Fá©‹ö‰ 0‹´ï,ÞÈ&‰o«z²‡÷ŒïÆBdÅcüý9™Çå]´‰k¢Y¯éi.­Õ¦î=ˆçOŠœié¦üD„`ýz´OEobØâlŽ š^rY?Ubã1@¼u" ‹JRC1¿ÅBMlM¬RR|uj"èl„¸%zS–Dt ŠM‰ÊÔQìý “ –ÓjLgr•“Ó[„“îÓŠQ*Wù©iŽui¼fc&@‡Ý”À²ïÒIþÇLûù÷}‹‰Ì+sœ>ö•'x*o+»bÍ“a§&HkÑôg:û*»ŸÁ+uï(Ty¢)‹¿K…D c²^ˆn'ú ¯_y37b_\¼~>œªŽRµŸüP±¶nƒ. 
6»‘0·ì²óH‡ùpÝÁFKó±Í ™Þn¡ˆx°ÀÐñ-zµoú˜"WlÏžhŠtséKåKwP!{aœ%ÍÕ r*°ˆ]9;’Šñò£Àe>‰äJ…Ýê°ðž¸ßq"$­?Ù.ïl —ŸVÎɈ'ó¹¸öä§75º­ý`GL··åi„¯dçXöêö£ËÊê¯Æ¯ŒµŸ(¸AsÈ çÖ±B”îYB%êü“Ø tq‚g{š,æ‰1Ïù•Ø(<s+ä_÷|aŸ/iÕ<ˈmþLsµûáìLȘèÎ} äga^KêØdfC‰=ÇS[å1« „ m—ã)þ$é@±hl·q'ÓÒ6‚‰¥Uù•Æ(«ïFÓL1DÜÞŸÔ”àsä‰í5­§øû"Ýfv†¤œôBЄ/|GâÚ{ r Gœ¤òâN3åßrv’B·å:ZU–ýßäjy ýã;L;†æîU›<Š,š-6´ñjÜIµGÌ:fh5?g&¥§±ŸNêèÙ°ßäí¢5”6Öô-Ÿîõ(2š¦Z˜íîYÄ:)£,UO8L.ÅñÆnÄÜjs¸ÛXt½)¶Ô= «Ë‘a3„GEýîŽÆÔíÅŸô9–˜ù¬S½5a=¡œæ ôLójªOÌ<[Þ“g.2m>4Ð16Eð=ÏUi_ãÁßOEêlí¾ñŒ—‚lÃú”ú/)Ó$<¼é“‡öDnÎú—*~Ø­p”Ñu÷s7jšXiåÙŒ>¿ZšÎœÈg‰/ûx–kx&u-±àÚÝ=G-?ù ¼Î—C“4lJAGT3EeꩇÅüÅ}*î^zZ¬WÚ¿æ%<Šß‘þah`õ]Üd[{ŸÌW–‚´–ŸâÁÝT)ìS,*Ñs¡Æ~«×³E˜kᡈâ½|hú@­Œ‡ÔXê7'&5'iˆ¦ ûz_ÜáaAʱîí«P…¤RH†(–— kÛCލŸÎ‡sõ¤­1¨½£uÎ~A¦d52/ïzÎÞ§¤u­;µõÐ,Ÿ¿ú~.éàª÷]wuMÿÍ/Agû OWú÷?p0iì$É”¤pîCEBÝ––ôX.!uŠºølò×¹5ŠŒæŸ«‚Do¿h«kº ö¾(ãé†E©ÌZÝá°{õ>W| Ø$êÌÆEf3܈É"BtÊÕiß9TR)wÌ`n[ž¡Ål²Z37B'ÎØöW"5aâWxAÆBƒ~ô¡FG~Ø=ÅÇ60R§béÐñŒÙh=ñ¶j‘HRèhË×ÓAH]ÿêO–ÓWSY±n ›Äi¢]aÔïÌ÷dó‚T°õdㄟŸºÇņ‡Jµx­¾6²rULÛÇ8Éš„S¨ñ‹Ò±ô|ñý©>éÀÅõa8¸0·«åâ§Ç4SÚœdÅñ'·¤ïÝ%‰XÃxˆÖY‚é!œ†BœC›—Ü{&2 ‚¡“œ®oïÊg¼=Ô ñ)®›µè-a*¨6C²šÛ©ü<އÓÐ÷­ œÌm{ô£e"ÆÛ˜e Àmb¾ÙVg™Oø%/ÛcâѾ°žjŸ¼¯³4ÙF~¬^(ÔÞÆR‹2:ëë§²[d¡„f‹RX9œ&†ÃÁe.µ#W,rî¤y‹rqà§u*¼uŠž”¬!íÄ/󫘞kôŽ¥SÞ5kûõPâã½Õ¼;%ªÇFXÓâžÖÊHÅ´tÎy/aF^$mÞ?ÃâšëWW™ïF:ÝÙÆì!öBK«G½Þf„P÷üU|œ=ëpædÖìEúîö¨' >›š~¿(–l‘6;1§©5¯ìf8kÙt JFV7pðc›HFry2HǘãÜkj¤^ï¼­ïýÜê0Àùô-÷·ÐâžnºGBB:šƒ`µé¾L9lòFzoS½ê·ï®dPá1+?¼û=ÏVÄÄÔ´™– ;{“— ‡È(æ§-A®ÖB„)8hcPèNxÍù dãR@UË œàô°aRôš,AðìêÓö}Â@ˆSÎRzÞZ1ÜS ¥šAú*ð2ÏDƒÂ¡wíп:ò̯{Ó1æCÜæµ™]$iåm=Ob"™øqš‹'›’¶ZN2ÈEsi}$ÑÓí‰úCÅP±ÇÃW¹wiËYÀ-ôç.Jñ7ëÂúh#™zµBõZ©¦/ŽR‚wg¥rÞCã^ºÅp…l­Ô™˜Óçq¥"&󬦡ƒÙíËEôœÚÁbuLÇØÈ( ìª€Š¨þvÂÉkª³9±~‹tƒÒ Q î<Çf¾‰‘"ª½ŒL™ÜNŠgih< ³dœ ÏøÛp¾¡þö85‰ÆR‰BvÌ‹8ñùj2œÅ€W¾HóÛu½‚ÄžfRV¯­lßzp² ©´r$´ “-w]Lþéáà endstream endobj 306 0 obj << /Filter /FlateDecode /Length1 766 /Length2 1315 /Length3 0 /Length 1847 >> stream xÚ­Ry<Ôë¶Ç¤h\êࣟ¥ÛDcfcfDeÊ2v… cæ7ü4›1ÃL–®dIDÒµDis®¨pˆì"ÕEÙòq§È¢¬‡;tºçsïçýç}¾Ïó~ßç}Þï.]W$‰Î É6‰1ÆЇ `ŒÑ°]»lx •qضT>H0 ‚ oN45%šba»Wă‚‚ùÂfÏšÈ ±@D£² • ²$=hT&àÁ¡A _d ˜LÀ}íDà†¼pn Ã`:Dã`Ć¡ÖÙ³Àü{™.àþ ÂA^˜Ä€˜ÜH,Ò9l¦ ƒ Ê™#¹ ”8ù˜ÚØœ,`2©¬µöë)ý…§² ¦è‡ÅðA@áÐA{£ôøÝ¤CÖFÖžOeB4;ˆ èï%(Œ Aº+ħ *3 \¯ƒlúF’àÖ- \<É^n¾Füè:éJ…Ø|O÷¿m×Ôëó'–äÄÀQ´1‘%ëÇîø†ËìØ4bKF Py<ª&™ 2"1ĦƒBJ£ŒÙ¾ä %`px°µÅ™(ÒZiáM”ûŸÈ@y¬£¿¾ÏÚš#ŒDâ0ÒÄ ÀÜŒý?:š€ÇÙüõÙ‘„ô3 I¤ (i°_û94‹3!Y÷ŠbìntÝ–7”¶ªJwþ¥©·~sÜÀifá¿C ‡Ë½‹sÕ¶ŠåÅ;#–µÂ’ë¢Ü:ÉcCÓ³_¬ˆÃĹ'hyÉ¥ˆÆ¸zsqïUjï÷Í|0“vén/Êô¹ù(ÿSÛûk®[ÏñM¯t¥š†W4_>ƒ3÷&ç2½ãËÿnªë®šZ”„MŠPÍÌz’¨Xmôò3§N_¼¡XpqåØœzÚ6Zý³É¤ÔM„—}O»³›ðüJñüÒLµˆYòÃYÚeÖvª¶ïþ¼7slÊÏÌÄÇ´›å+8Ï2“” ^ÕS4j¼¯úv}Ó„»7º¬«òR¶æšŽýfAæDV?|rsÊipÕk™e%†}yÖØ\Bd32ækcç‘ 5®w—K=ƒ÷ûö²í¶É3òùæ•>U0|Hó$v<Sæ¶Ï ­NåÈu¬&÷Á»ç“{7é}FÖ]ëh«¨{˜ªPÚsÁ~胻rZÚ‰Ó#oD*RåÔg½[VÁ´ö¸Q~‹’3›ÒOù[ý,ëºÍ¶S s„´¬÷—|"6MÑÚD15ZäÉŸŠË믅þ¢ÑéªrœÞ¾ôv¡½E‚µŽÞGQ¶Ù­Š¸»Ðø«–ËŒñ,Ój…‘¸#1š©|"Ü<`G‘RðÓ¾›ß~8| r¥«û)ùUáõBî´né BGàöùt¯C¡zk*‚WøU)ý‰½&øë‹æíplÐK’öU$'Ø»ƒÛ<êÞŠÎ2Ñrré{Ô9*OòÏ9ϳK??%~}øgÜdD} -(Zíªr´D|²wØæ =ONÚ]|Xq3“ì`Ér…Xðb½}ÓÚÔ?ᘉ,ãW[ ó”kÕŠ³R²Çd]’©˜<.ÿ]/í[ûæ³¥vÅ22ʧGgCYm¨˜ø«>Q#¼ËtMfzFÎUi¢K•hvÔr¤Zëâ Þj÷¤wBO 'u0æ¼÷ŽÇËÉûÙgÏÕ£ÉúΓéðÄqí ø?[1xbµÿŽ$Ø€ËØÏTé_â†$ó=º ºÒ ¿3TqÿÅQ3Ýd\¬³‹ÏU[)C‹·GÄ·J jfÉêÕ¼$P oð)»?Jà(KJªÏ¾[rø_NØH…&AêþE–=¨]ì)ò*ÐÀë\ÆÍ[øu°Ú"“lýe~ïèÏ‘ï›>)ÇvDdÞvþè[}=ˆ} W³Ü«bu©ØÞ¾{wƾŒ6k«.yÿ}Ú)ˆ¡Hí™îËJsá¾WR¶jÈC3w ‰_bt_„·XÈÌv<éû„è9.3Óÿð^XþÓgŒ!ZŒå«ñ{ã•ÊÖÝüö ã±G¡å1Ù)Ýi§“iâ–®†VQÂnÿ3áI:•J0Zë¢~ÍK9肸º0CS/3 AÎÑŠ?•ä{[êÕ©>2~«o0™1z|"ÇÊjxû¢Eî…}Ø(já=ö7U\p$¶×ó?~¢Rd‹ßuÿÚhŸI•Šž•°1µcô´‚§Í|ive~ݵ宼ðÏS8ã³ÜW²÷úªÊ]·šg)´<%Ú»ùVBUð„µÁL‰_Y ãéö×;•#·ðÕeîWP¾¶ë÷¹ù*’\›ý•-d:yþ2‡F<·*&89ãßYúËuÌÕvçu^ÂÊíöµ©ì]”­ø³4¹F5Œcñ;òð tÿŠ€?endstream endobj 307 0 obj << /Filter /FlateDecode /Length1 1088 /Length2 2832 /Length3 0 /Length 3547 >> stream xÚ­TyÏó|žÏsÞï÷çÈHZÙ*éb‰n ‘@UB(#4}s[G@(Ã!22údMÅ h*¨ 44€.Í@ ¸º&®‰R‡ÈúDƒŒóð¤rú‡¾‰Ô]ŒÃ  €9šê ú°<0h<`KÄà@*CÐÅã›o;(€ HÉ~ V‚@X† ¸8ö-Ñq‚;Pÿci¤Ÿ”H¦°Br¬‡VD,‘€gXг 
²ÎYIþ?BýnnDÃã-Ð>ßì¿Wé?<Ú‡gü­ úhT ˜± ™ð»ÔüÎÄâh>¿³Ç©h<£KðÀƒ€BU®‚úAà(F8:ˆµÂQ1ž€;O¿ã û{Vù¾YÚ:˜)üÝ×ï¤G žd@þKý}øµfU‰Œ£§áÊp8‚%d=?ßÎüv˜!CÄ⥠Éd4š Ö " ÒÎJ S&©¬-«4Á€;‘ ùÖV$ €¡I$2‘úÒX-a‘ßq\ €¹¡Éÿ,€ŒÆ€xÐúXåoøGÇá ƒ#cð  OÅ‘XqR(Å2ñ ÿ«³0_Î ¬ú‰#ÌãÛEÉÿΧªÀpwGýåŒd9ãA åßRèƒ#Ð(¿d,à?¡Tôø~ã(x4ÅóW4Fb}‹afÅÀþê O£üfÎ2¢àX³øwíþ;zzDz RPÒPcµ PÔÕQÁÿbhd2ë¸ï·5`?×î8ÖP‚ Ä@ú{‰˜#^W+¢ B s:ïpÀ¸×š gú¹ê*CÒpØ`[ÔMkIjîlÖhù…'Pï®ý¦˜#WÉ0-\–²ï•Çeæ9\ CÕ)÷ÙBáq)|› ²³ÇŸøžº–9*¢YGK•Vì_’m潉ÖsÊ©]îåÙ#o“$/%¬ÞŸÏõ†MÊx3Õ-ñ©2ôê3-Ì|û€Ùq+(Å(N¶Ë]›Çm ½29ãî×Õ¥`=©û¸gzÔ?ö+¿eBÝiúœXŒT´ÌB£m}9.D$ðu ‰ ú2æ=3k9È’R'¦÷ôM瓈g£°Ä0ãZ13Ñæ1Ïuáf‹”½ða•í¥“•Z½ÆíJŠÑ[Ó\·B>硽Ó&‹ÌøÞ®ÖZµƒ2ƒ•OT\sü[ý„§·zBP§§_?q-ƒÛ¢3iN+_I©ô¼ðøKµ­·Ó8w,ÌgïAÖgB˜–h£MNæã‰áø§$Õ…û0²£s Ïà¥psÎË|ÆÚpÞ}ݽ€ò:ÚÔ. ÐÃI ÷cˆ6Ulƒñ³1âø]âCëyë~éôÊÙÙ‡€ˆ¶ó7Ý@7¨§±‡¡ÔÕc!]¸rÄû°¾“_*miyUËò™¨‰¸4Þ]±&^RŽŠs…ÏM:‰ï{²ô;á6*^êúª 4uÏ8ûûI×”„™5ª!Õ$UŽÛñí”q›+fV=²[°ÒƉ¦Ê4ó6¯{[wŒyÿ9 »cyCê$‘_üsÌ'ç´;'Jz/ä¹våÎß¶1¢Ç6»miD×ËC®Q_f޹+”í÷ œ|Ž‘G¶{[*`½²|¾ÂFØËÔ{ájþ¶§ÖÇwi|i?.=Æ­áºxŒŸ] c!°Å¸Ê¦õᤅ^ß…)b=–àwñ‚µ .³÷ƧòN‘ÙÄÎNnÉ€&è’àWNˆžªqþ¤!m^Rï@Rû ùYçÓþÄ(8Ì;àUŒÆÑPqÒ hÂæ›q£3ƒ-ñ%ï„gFèn»Õn@?¨³x¸d ÉcCÅp66ó‚–åkùènOUýlobé%R=»Ýfo«ŠÈÍí §E™±E›\yC][R ÅÍWC+]Mx‘' ÷ÐíÿÜn[~^üpó®£+œHŽE¼güYÔlÉhiÁäRü¦@ó5_•Iÿ}¥Ç-¸©–\ã¥bΔnʼŧ‘òï5Ó–Dí û[?rÝ1Þ–ל]ÁÄØïFi8ÝšGøÏ\²S°ü„¥ £Ó5c¼|'³ÙÈõQŠwôÅa­¶c­‘ï:øò;;N¯¹ïZ˸ÁÈ“µHwæäÖ†Žgë¿T\`ǸþwÞÚ7PÇ%Å&1dzGxŸÖGšï+ÞêÐÉãÓåp²E}1Z éKP¬^÷~ýè Nü¾O–ü@ø¦\j¬*áumÈ0¾…ndW»ƒgÑòrpìÕ —ÿ²k¼yê‘qèôdzÉõÚa¦áڽϙ®6Ö^a×±IÿRf¥¸Ž{{Ê)M:ÔQæ¯âœÿÒ˜zð´ö8[YPGàÃn³sð‘EÞ•¼ºó¢‚çvàÜØÚu ßÔ{¹ù!UÉצÁ CÅI<šœ7v¨xFÇÒ>8 qðž¿1ÛùË–•+ûñðnIìvåÇI$žÊ®©ÐšÎŸCeçü)õÕôòBŸíýŸ9dlB¬æï¤RsÛùó!›Ægáí™»Îol÷LÙáU}hLóÊæìLœÓœ®yÄç×è›ÎDzÒÜC4;Xl6ÖOäk) qKl3[@ôÌ]wzy/ÆWÕe„jFx4úBçÍÕXZTªÔ'òê°§p¦ÊûŠÕš¸ÔQå:_£È¸ÑCáT²Í5[@Èlïf)T@ÂYœ_F=ݸ$¨ÇqòÜ¢‹ü/'vÉÔ€£M:£êŒì0¯È9äó÷ô熩†Ü¹nCÙc'U CI®ç÷–š8pl8WeZaÔ /vxs€„±!‰‘JõÙˆ‡$ꨜSGÛb±ý³ÕH´”ï=ëÙÀÈy¡ô\k©½ÈWG‡[íôe̯š{TþÐÁ‹8Qh‡š÷#_‘³„lÇÓ?AÍdQO ÜNñ ôÞ»µ£PàO“Ééù4ÄîØÑžî|+®N¡Ý[—t J‚‘#ÏnFÖ\=Î';5©IÖZZYÉ"½l¦-m=ŒÕ«ã/t¬yÔ¼5pJœ,kÛÉC¦ñì®C—Èœleب-=Ú–Áx5ÃÑôrW›¶’iWmãDo¼éìz•ÒƒÄìíö!Wƒ£2¦(æLš@rÂ,Tˆ-ά-{d(·Kæ/hV›ØEÐá5HùEäZÆ3O „^)|qÁÑ]ĵ9e>ë×Ê-fÿaßåm¼Ò¯JÙ”ßNl wê6žVßöUÔtÙªŽÓ_TÃ%í ¹©ˆ?×¾¾»pŒsômÇö6tA¡—]EõpVÙ¾á¢Ý:µ‰dzábª¤XL_Ǭ €ÙÝkkùZtÍvx{]ïœq®ýt°…xÿª9[Ó9ïö0 N0Çîôºum£¥ÎõÅ óJéÚÝÛ—'ObÒf.DU=¾ý¾¨ñÀO>øÒ´ªê®ʳñÎÞ¾Ø$>ÉâöOFà=˜˜Ÿ3eÖS¹g3òÌzÅ{ÇW«vN#¨ý]ÅMÛæf Q?چ͚\é#³}މ7o›&C5V}ü£açúb?Ù$)îß³ÛwmŸ ܧïFg•ü\‡ÄÊ éÁ»&]ÂŒ¤^M—Þ[µCaý'ý¤÷§Ë­Ö(EîxÊa<™t ûºêÅ :—7Êî>µ××ÿyn„Æ Ý.”&~Ù[ËþñŽ×Çw¶ønMÀäœfËU&–¬»ßzWfÞw´XäWP\iº)|é#ÒÅê4®ž‹}·u3등-EC¸–V°·&y¯ ­Ñ¥œ2ßÝO¥­Ž˜"Dg±®ÇвFφ™9ÄÏÍûÅû7Ï®´+·tìšÀ‡Þ¹h9r˜=_©w̽ ¹\–6yùUrƒØ7la½¨ä[Ó‘µ™¡Œj—êÍÜfßeÕAÌø¢é8µt‘ñ.»H *ª¦0BªRA‚£¨Úlb¯œÀ¤Ð#9iÔæÊVªÃBFÝ›×$±.GÖyÞî¾FÔõ‘‡BŸîÑT6¤®,ó¼/±µ/E‰Wktš)Ç?DñèåI5L†óÖ&ž©gŠ5Yð½+P[yè´â¨Íá&Æ^Op–bJ _B­,“÷Ô˜<=E^¸˜WЖÝ-ŧdÓ!ä0=äQ?-áí¶ïYÎtd¬šã‘£Ûõ[Žksþa²†ƒgp6û}KyHÆF ŠÖ Ðк>Ëè„a/¡ö¤Ñ® £!‡¿|¦\ìÖawŸ7硜«vofP&^a»¤’ïW }•nÕ2Z&óßÞk@‰­î ÆgOS›Öj—˜k}´nÙs³+§q3µX.r=!dͪhžéP&]¡¿`=Ÿ6†Ò„ž•Ž“2M±ü ªÏ&xWH•\¼¤'馚›}{‡òõx”xÐnºŸžë唽ÝæÍÇÿ<5Ðkendstream endobj 308 0 obj << /Filter /FlateDecode /Length1 810 /Length2 970 /Length3 0 /Length 1525 >> stream xÚ­RiTW5xD ¨,JT\^ÀT²µBX"Q6é0™„Á0&LD«T@E„‚(àaUY”ªÅ"R\¨ËQhA‹Z ÷#.Øc´@­§ø·gþÌwï}ß»sï°˜Á¶‡ E|pŒdó9|ð ”„»>‡Gg±< "Qó‚HDøB!xh€/Ð=\RnÌ\ÜíÖŒ¶ý`ûü´×½=6uÇÎ;q‰‡£üÌiwºúþ­‹“qcËo§oxí™ß»ägfT?D `m>bîø´kyv¾*#–í»Û «¶UÇ+7_¨õ-Xo#y„uó÷³oc§ —_‰ÅK20ýÒ6¦ïRѹ<ߟ‡·ÀÌo:#z‹^´E½©¢™HWיܿ³¬>ææÞÄ›1Ì éoN›g4½]š"tÎmn¶ÙØ~u/–Çc¹ä œoiÇÿJ“‹EFÍ)E) ÇŽ¾öÑ_ªï!‘íoswOñ_rÂæùž’© aUóê÷ö±+N­ Yà5[ÀaùÒ/Ÿ ¯¬{½nÿ°äI¯¤¨¸}#j2>÷´ØîRYm»q?ûÆQÂv ©åàF_eÉ àÄþX3ÏsìU™–?´ªS;’"¶Á%¦³/îŸ_½âÜ@ìÏõöº²:ëeÓ†ötX… Ætúõ3‹K3¡Œ»]_cnE/l¸„Âõsì@Gï èe)íN³·Â ðÒ¯tójË• a¹ß±È)åO÷e8ó*úI¥@M6]kú[ Ìh¶Ðd™OõamÓBª†Ërº[™ “ŒsÚ]•›c;'„Èä×Ñ—ã:³’óD›ØÜ$“õŽ¿Ÿ9oØÖ£ž!-ŸP¨¾RýÀmp\¨WZ—xEü›ã×°$ȯ©ÌÒ Õå–+­®_¼ä9×üh9^rU“ÜIîpp™Z°lè¸Ègx´ÀØâÈå!FRI[7å^ß8ÓÚ ÎÒü‡ç6–̽ŸºÂüNYYùÛJðþάˆÁ­:[yÞ“;y¼>èÌzÚví$Ñ’ÃÑ¿ÜR$Vºç¾´“Ó÷—o 
¯ŸÜTë>½¸#UôCÈúÑ}¶`R[˼™Œ¢gÖ±þ_Ó¥5ßÕT4J š®=Ñ„·\)cUY˜Z™?Ç‹?ˆÒoÆT\x&Oüñƒ{Ÿ.§K‚‚¿ŽìÒ»ÃÂ~‹8>—¾ù–ÑÉZÓTc¹Ú9;jõÔY;ÍÇ‹æ=Æ oìnáªSÇYÒ BÆëwçÿ îÒôendstream endobj 309 0 obj << /Filter /FlateDecode /Length1 1648 /Length2 12635 /Length3 0 /Length 13569 >> stream xÚ­·UXœí’¶ w n»4nÁ5 ÜiÜÝÝ îîîÁ%¸CpwwןµÖÌäùvÿ6ú¬ª»®«ªû¹ûh yE!Ck}¸µ•È‘Q’2€ŒÌÌÂð"v =Sk+Q=ÈÍÍé¼øøçaçàrÀSD¬m\íLMÔ"4ÿ*âY‚ìL ô¬2z& ËzEkSƒ+#@È ð¯ö=ÈÎ dÈ M ú cS+x¦y’²2²pþ'lèhóß)'ý‡)õ¿mÒ>LZ[Y¸ AFðL²Öj /ÿØú¿ÍÅ-,dõ,ÿÕþß›úòz–¦®ÿUamiãè²ÈX‚ì¬þo© è?æd@†¦Ž–ÿ7+å gaj del0Ù™Ùþ7µ7uÊ›:˜Œô,ìAÿŽƒ¬ ÿ¯“ýýÛ“š¨¸ª¼Ý½µÿNÊë™Z9(¹Ú€Ì«ÿÍÀ¿ü±$;S€óÇ–…ÿýJëÿˆ‰YXšZXØ9zvvz®ð¢b¸¦V† ÈåÃ1£•µÃÇÀÇf<FÖvðÿz_9ØLBÿ ý‡8L‰À$ò—¸L¢‰À$ö?ÄÉ `ÿK@“Ä_b0Iý%6Ó׿ô¡.ý—>Ô¿ý¥u™¿ô¡.û—>Ôåþ‡¸>Ôåÿ €Iá/}¨+þ¥u¥¿ô¡®ü—>ÔüqôTÿK^ôþ‡>žA&=ƒÏ×ßü‡;ý¿ô‘7øbÿÈX[|<4ÿacûWÄÒò™?ä ÿâ‡Ck ½½é?j>Ö ú~Ìiô?’Fÿ@æ¹ÿ’& | nâjc²úGÅGìŸb›ÿ?†°ø~Lhùí÷[±µ2µú‡Õ™³þ«þqØú¥?&±ù›þhf£g²²9üÿ+úŸ{囲ù¸Y¬ÿ.õ£½­£õÇÓÿ¿*ÀØý?æµÿ~4ú[ÌþáÁÁÙúé™ÿëpú~H:ÿE–Ó.ÿÀë®ÿÀiÝþðÑÉ d÷©ÿ÷¶vqgr°X>L} Áàæàöü_•Žv‹qø÷]ýqý7™~\` È~qÎÚ€7À,©1¨ÄK,o²ŠÆž ¥(`´#°>…—ñx}Ö®iB Dt…ˆffZ†šx*]ÁdhľÛ(ä1à®È¸€6€ô™&'UæÜàXSb0«¿Å2#t‘xǦ‰î|‰ìvÝ©ªêÛ!3÷4 ²d~îõÚËÜ{lc!Ùl7^+«õ[èÇÁzôÈÓ‰„užõbÁ ¶fê-ÆÛ"nÏê´ù R wx–̧?)_^à˜i\Œ6ÇàiæÄ‹ª™Ü’ïÕõû($ŒCNsò) ²HœEE¶½¨„fÓfPœQ$'"0™'CeJõÎŒZÃ~“³å7Lõq¥y_ñç O6ƒ€Zé¯#xŸ>Ì0Sñ µã·8UÜ6ްg5`¸0©¼xó|Váæ]Êç.ÌïM|Ðe)ñ;Ú$w÷<àåݱ¼ L~×¶(LR‰Ð&zÉͬêÎí`@¥ª>)[]3Ǹ4x`L uœP™Š¶Q¨„žËm‰œ-'(cÜ0®NpW*Ðö¬(@Z|ÞR‰m‚è ±gÒÀhÝ?hÌÉYª9â á—Ä?j$;]lßwA:+p®(ÅÊ4*ó ‚©‡ÓËt½žPØÃ)GªzÈ{,S…}¶›=XÄÞ£O^¯–›RÖÍKðm7Wþ‚#ЂAG/ùCdÙ ¦é‡E*aÙ*/×îÔEÃb'å¨â§y©ø°¥²CJO@5â„Ú±ˆ?a課U•d&#ð:P¯å†î ¬ñˆ¶ù¹#¿‡.—'V¥çˆ¦{7ŽóV"’4u^²¡7pXÈ7¼ÿå\Àë¤}–Á #]“¹êGÖ‚CJDþE`.Œž‰Ã<¡.Ô­®‹´ëØýÂ…²øFå³ÁëÅ®{ùÁ°TŽäõ‡É´®u¶°o¦™ßÐ¥g¤6[U¦­êyñÙIxÑ[6ÐÞ*TŠAâ¶F|,ÌÎÔ_ûa Hš±2njò†Í}gð‚²,ñ¼9Øœ§™…CìšÜ6 q'¼rQœàftdX g¤xBA>á ~Yáÿ#Ò.±¤œ·ÜQUi@ž‰L²k}àóËHå¢ÊÝ-w_“æ›ÅVPâÆOlŸ€#&c§j¹©ƒjŸDàì÷õ2Mµvç{¹š‚ºÚ‹btàc–‚ÕöluWâHýu¤Efm'ÿÖ*í.ÿ&UF2ú6oÈÊÙrÖ}­E¢«ŠÇwCz©^ÉÔžóÍå¦F6˜WÅ(J¸õÖŠt½g­½˜­3ƒÚµNY8ˆÁ‚BÕ\Ôýðˆž|æ—…°£Uìî] =Âõ¸`¤ä‹ÚU¿¸­í}HÜÝWO1“>¶Õp»öЭ ¡ü;–Ê6¿ßrÅôà]¨?Pp`x­ ËΩ7^(o¤(ÜýS4ªµ zB#ë+Å+èÅ&»{îÁJ3rñ°ß¼¼‡Ñ¥S‡ÞÚMSO’Ñ4ªá¡?:H)ÆœÚ̳Î*G”l#mê¾á*KÿtüÆŠ`ôŠ…s]ˆwë úpΨ[¤wsÒáLºÖ5·{HB®ŽA)º5JÁ_ÁCšTaª!}b¹æg2Í(ž:zeû±ë²rúì@Á¹àÑ„×Ϩœs‘Þ%l½·£c!"¨6ê r€~±H&yãmü3¹#Ñ=øµ¦jé0€Fªh±gÞ¨b&/)䨧ã§,“€µùËØDLû»ÎF¨Nö¹ ΋y¹³¿Û>@”æ³ Ñ¢ƒJs^>9½Ë»^HõýšýIl»w­G_vhæÌ­$¾<ù‚Ë «Y"¥uãÓ‘Iñ‰€WéjÀœû÷Me·O'FåÆ}Mdž‘áìµERkÛˆœë”êOW‹)ûŸ:ÚY;öàoà1á{zr¾˜üٽ̮Y ¶Wô¿{lÕâж‰šLGÁþüÌïV]óM¥FBÔ¬»gmÆj °ˆøÿySûÙ5ùҜ䫯/o˜tºL—)Í,ƹ1¦ÕgXFDMа^H½“ºk"øG€öó=ÿ-‚Èi/.囌rÚ]ŸÊTÄ%( Q¬Ä²Äg!Ö4Ì65à8¼Y¯í[ ì—=NÆŠíg—òˆ¡cJ™x‘ÓOõö0¥•ññ¡›Üª!p ?!FÇ– ¤ãCDSçKûó罚FÔ y‰Ÿì=†M!ÈðÒ;ÚféAUõETÔ|wÆæšé“Vêi÷Uðð(hµÕ3¼¤çe´ËÒoó ¶Æ¥ ?+¢-ä¢X†mža Äû)KTÉN´›ß$ƒ`¯Õ²GYšu¶Ã‹S«Ô*ÍòÁ<õÞž ª‹ƒ!yOHâÞ(D¬E¦ÜϘ:$§ºBtöÞô¤ÎŠWçcÂI(äU˜yÁu» =°Ã?%eWIÌ?äfÂyf=]2/¯¹—§iƬc€Õpxid1l¿Á{3¼Þ«Z‘5}&˜7HÑ@¿ÂÏ¢åÜæ¡B •M¶düs&¯ÆÊôÓGò=|üÞYFwúVQ×(M”›x"ætl _‘ëÌ&ŽœyÖ\ÚÙ+Y%ÖãWYÆë³1R6±Ý—l„ÄßE+7¦‚¯ ¾ir‹^_¢z…ÚÆ—.´Nù‘§®”'•}~ ›ðsŠââÔ>4êQq B1G˜··´¨+)‹L&J]‡¤¿hEo3©Ê0þñY_gk²óÐ\ >ÈNç•3ЬƇ«ozbAdþ}Æy/±ÿÕkt¶é'í3OíÕothüqf–ÍǰòžryÚð0†D…‚Û‹_¤ïÈ¢q\ñþ¥5ñ‚|⥂0U‹¼‰v_âh@øöi§Þá Ù¾?Oç„›¿Ó¤MCïêû¥zëû="6A´t’´Òožô¸EëÞѺVI³û·ûV½ñëöÅÒku'2 £ ¶Å(›u÷µN°üþ=)xÞ(ÅÍù (þ“ yL&`^ÓÃÓ£¶)Ñ-’r=»&¹ÙJº%dþ»"áÓRFŠôÓáÂÄ.æ‚)AžDÌ%†æ7@ÃEÆ÷”h›¯‘j8vq’ªf=ùÓ™ª…ØëCPq¯d"p;Im§\\0矱j^Ë´5º×ÒEÍ“^.=n^l~7G8ßÅ0TÖUwl!5nH*¼¸3Z•êr:|•Ò¹yP;«QGçZ±L·øÅbu²¥±hÅXÃZ©ï5µîŸn6ËîÖ¬j¦A›-T¦Á.X=t4O.¡d|k‚¢‚fëS*_±Ù°¼Y¸d‘Yèñže°6jY*p~¾• †XÙkúnÌ­EÏh&êíÎ$Ò¼huMwÏ9ÀÏBM%ó¥h4°ôgÒÙ¶þW1݉¦ºÆÚï•d8‚´×·ñ‹þ«Œ›GMZái^± -oóåñyò²ÕÇ%«˜ù…0Ê3ápSîÊYQG¸Ih°‘t^1ð¹ =RÞ™´lz˜ ÿqŸ.*ÖYEoQ‹Èܧ‡™«“³>Ƶ§6þÌô˜ˆ…âÝ(`Äùkqjœò€²~x¨3Jõ"Ü#. 
ÂÛ ¼H娸K³!Øÿºé˜ æy59ø½Z…J©ItÕW“ż'(˨“W“•ß7¦Q=V ‘;‹¨LBÑ«ßpæ5ºgäýt«›ž‚8ÿ#f,gæn`´'ëÈÝ%’å!v!‹¿’Ä·N–À²¾ë̱·“¬Á˜Ž?nÖÐxRüS¦ø%žðWjîaÁƒi“ÄÉY¡¡I žy-³¿0*ü2?ˆ‡P.Ÿcí'¾8-ìàª@Õ}æ3‰ÜŒ&æ×Lî§ô9è‚Jíë=ÉDÊ×cÕ9D=¦ìÇ:Âa’FÌ}6&YxóÙ•_¾Òƒ'Èì•s|3øÚEEHk»G "¯^šFOüÈוí5XHÆÀ—¬o¼ !Û"¹^$†W ß|o!@m¨TØŸ ã%þëÓ[½™Ô“Aû¬ñŽB6 ×Önûƒ‡›(ú¡ÐV !“Ó›¨z´Ô» {?¯Õ±AmzÛˆŽjW½x5æDä"™2Ïs¾RJèãb+ݱ㎨aj ·têž!ž¶†,9ߘÚû\%cáææ:è¥×x0:Á:{ˆëü«£ H/¥4—FßT~¾ÕC-Ÿ{âó஦öŽd #Þ1*½ *¬V¯ñ~ÁÜà[c‰7C$‘e£ÐÜ'KÄ1^rGá·éV‚‡C’$ø ô)©ú.‡$~†C÷òLk2 ©ÃÙó╚y¶Æ„ ®¹ÞC}ó¤(ÅFydaÝ5oÍsï¥bx[µ|à²Å5™IUÿ^ëö) §Ä­53¹Õëê݃ð»i×£ç$„3ôäè…ÉÌ*_E#±Þ'múgžp@ùÍyó…7†”ó;Ÿ¤ä…#ÜŸzþ^rð¢TŠ}Ζéû`úñï¾Õ¬3buæªñ­)p¸Éó´m6û@1¶Åðý £S\ùz·Áõçé.oP”±[jþk4+Òêha+æxTð xS節bšæq¾®Þ¤Ðߘxê‚ò:d¨ï@e5G¢¾ –0ívÉN—Pu2jZ†j.7ônù¢¦kDX¬§¡VnÒµr!GØž‰'Ïâ} Ét઼ª$Ó®Ç2¦åÉ Â0x¥ÎÝç"û”È–Müeq¨$j%»A¬6ûåÏ}^Ý.óÅî¼Ö÷áô± ®W-•\•äÉ¡XÖ¡ÆâDKF¬Vµ‚r4}öÆ6ŠL¿Ä–1¶Snú6R[¨¬vmÿÜv훲TéTAÏÉFP„¡ï'¨ž UŠyÄùKýBÝß‹•wÏHxH­îç–#¼ƒØy×¢ðϺ¤ï»ˆ§ÆØp£w8H||–¬Ô6gt(l—e)ÏÕÓó­ØÇÆŽ»Š¿˜ÝEoíiè+$?öG€Í´‚÷¥ ç×b{q; ¡^|ë1…e# ‰B§¾ô”W›Á¤íiÐA–áRíÛßV»ºPZ`«\ï$ÂÍ!þ­º¿ƒG€h°DC·£?ÍÀZ·ˆ<ßHÇÝ‘‰¸KD¶SÊâ¢o¸bëZÄn± F¸¹KŽë,„uïÚð1Û9n4{ŸïU8o¶¢Bè=(=BÑC+‹²ÌKyëq’7®UÆø z˜6SG[:Môd®àr^s„ÏÛ›•€à5Sxj;öÒâ¢w^Ý.™Û~èH Q(qøö‚‘ó“Ψát€¤¨kã!ä^ƒæœäB¼KßF3y-Þ#ò$¬DKÐõ5Ü*˜¦Â¯ËåòÌÆXy@7˜jІg¼TÕÔdi’}ù¢À¯3îm½Ð‚ žbxÖîš*û|©×šrÁ[÷‡ù|.!]h—.ÂuÛ¹´Ø®Gpñ‹ ÇïÂ5ñW"†5¤ ± ?kgÁ©;°‰8–åWZ°'?‹Ú¾éTÅõ·µ" ¾iòöý!p䟵{þq ÙSS³9-JÍ'xl*(Íj’߸Q¥dmyfnfÊÉ–á¿%·=ƒÛ«–§å¹B|~¤ÄAzšDÚà/éw¦ëMéÙ4O°7~>tX±ä— ‘X=³ íæÂåq¾Ã"Q’‚˜éöþ¬ZÜV<¸û4lÖ%nÓ¢í ÝÆ³Êº“øcßLgĘt©´ãÕÉ£¿L•K<:ÙüJ_ÙOœ(´Q_ÊÀ8r}@Ýxœ¢Îm€RŒøêrt% †­Ô‰l޽6&p\*>50ZYÔóƤ4tB7·8*ŠéŒ\&Å^Z{3T±á\lQ±¨ºÚœÛÞC‹ZŽkæâËÃçl§Š¥ïÄßez©Pí7µÁ™þ©ƒs­DÀ×”¦B8kü€²µ°pÜåYij¿h'† nmøB‚_ð;û’N£öVíŽxðIJäSßÛÍÛOñÝ Û°Mj&ØÿÔ˜GRG6adÝ“ZIŒjd ‰¼R–@ƒÚ ösI}^íœ?Ò$cü>ØÛFÚw¶–Çðt/!)–Ïs12ç §Ôý8åAäIZ¼œôBE Š‘êënd„Å(`ÍjŽcPmM\uLÙ¬‘¹šÃ»]˜DþÃj¤¹®ï1xŠ‘–bÒÈÍ2©¡O1@S¤U'",^n‹š˜/Lce¿o5‰)¨ÍÑp껯á/ñQ ã­˜ÅN¦R[ç’çî™p×µ#‡dnßHˆ÷û•aV4ö‰ùP¼—ÛdAÄ‹.ìN]VÞ^чàÑNB”šó5š'&-À ZÄ.‘p°e“¹ás>)Y²_M˜ÚcžðwnwV‚î<ËGß,^#ëºz!g®˜™¿¤¾/;Ÿ`#ÂpŽœ*i¥dSlCéÃ*Ã=Iü½” ½Ú9êƒúûíÑad(GÄ‘ 0·o-™0á tÐï7ÏSúg᯹¹»fi…ìA?ÝðtVLs„~,•ΘõYg¬¢¡“¹ßld„·ùL.n•÷R°!½Î ¡ñ5üj$µaêX3ýdÌÙJ‚]3®Åø5o%©`-½¸4 …ŸŠê{þ²2/Kú0&‹§DÂdùµ\ŸÀ²óøºä_뽞 .†‘+Ÿ9î Yz…^ºf™wñŒaùd}*—sBÇá›iàòÓLI VÆ'©jð§[™,Z«PÐŽmÀû|AG´rB)Üúb”éD²¹µ²xvE¥’ú~‹ü“d^ õU/WRëjUm#Î'.b—bøp?÷ÍnãTW 0f¡“z+,÷š¡Òl-ùu$ŽoŠNÞ_ýB³ì2úuãdÛØ|náÞ˜°áÈkÈrìë·ˆ¤œ{, EÔ¤™—™|Oo϶éÀ&£c2…\;,@ÿ” 3ļm’²^W­0~ÄfWk†,ü‘k ªžZ9æÂ}Æ‹,Äœ‡“þͨ5bvT–kº˜.½ †=½Â÷œWÇx+‰ßÂ×xÓ:æ~aÄÙD2Oqœß§ gtõÃ.­0¤=ïÛ¼ÆPöNäë û³¾dV¿¥éQW²lØm"^¥tU$Snª<Þïn«5¢ÂÙ‰H~`Çuh7¸ƒNú pcK}`±_IYëôh¯ßÑÏ+åê'€?‰:à=¦sÂNóxû%@M)Rs~Žb;wù Û¼A&µ!²ïË5¸ƒn°²öŸ‡d݉5åÿöÕAõ^ëð ?K«µ ÝìÚãaæGiç¾P­¿³†áåäÓX[(*66ûø-È‘'€µ†n¦…>…ýrÿ8Æv‰}Á-ÕÞ×›…ù›ZÂt«_TJPŠ »2B Fuu+˜–ÉSËêtr÷&¤ýfšrâ7›xœá>.RSиIÀÚÝ>ä!J@ü©³Ío £L"µ„#ÍlÆ({”ä•Ú•†$¿yÓ>w:Qݤøo-Ä2;¾lPqŸÊí1 g´¬l{ioï.H›±¸(°Üêó‘|Ê%êׯ^ý¥”.4—ð’øåAÃ1µ“¶ìEz]vv%±KùRŸ*Ú7¯ékOÁÒ:¯¿Lˆ ­øû…í˜ñây#sîüÂÃFæR Ñéª5©cÖßøZ–;ø®,$¿sŸ:dчœ”ZUéˆgLƒOEÊÆ‘Ød…õv);^Œ“ýœñåºfrMº»¬x¿ûjúf½®Ùèª]B\«ŠÊ$_ǼÍ5ˆrÍn!]düÚË&çUÄ׸ÿpºõø£çZcö€ž´]´q 4o)˜¿v냉Fõ¢”Ý;·á§çN(ÚÒ nõä:úã,Î+C±ó+LNÑíA»Þ®ØPaÊ’p“¶Š€^9Gé‚ñåïjèpUa Huc Å~µ—|šÕ*Ž/Î32‘:Lo YôÞ¿©p+V¥gª®†Sv4Úâ ªYðicMžÍÜKWî·Õlä¹ËÊù Ñ©Á,Þ —†O{Ï£Ð÷‚ê'˜ Á:ãì–Ø—~p¨H½jg¿WGÊþE×w ©~WÂrmŸ'>r'œÑ™Ôóב+º³mfû‚ú£’îypĹ1ñUˆ±Ÿ\4ö(·‡¼K? 
5®:\'/™ >K“¿P>×JXY(}KÐÏ1öb3¤R¹v]¼Ùe±Ž#÷„PB¬œ‘sl|Éá¥Zèÿ à.]×âcéœÖâþ)~zéÚÿ]º ~ÐŒÈé†úîñ×ÖŠ¯ØÈcy&!~Wú$d©- Â#É}Ó“r¸«<}G¨©ýbV»Ÿª\Œw¦‘+d’1ú¨g ‘1¿ÞÙq)¥Šh ZÆ¬Ùæ”‡1í»Cý7\+òêIk d;y2‹¢ˆEY“Shëž dlG¿‹{ÀC•nÆ|ÿ»ï/d¶­Ù¹€Û+2Eðág{ÔwËQ/O{´¶bÖÞøü¶s‘ëNu¾¼|lf6úŠéøLUM(i0ð|G¶pÔâXq£;ú ‘iüº‚^4CILÆsÅÂÁ¯G¢®çµàòƒ°zœŠZíØp«úq“ÄÈáâNægÜ×OÕ­Ôƒ@…\¯4˜IÀ7¾…ö4ªÓõ *¦ ΤæE¯ˆC·öxýßUlJõ70ºW¤?–/Â!¥L£·}„ã˜×WÃ+}Œ­ø u0–n ¸j˜|RgÀÐ nž#DóŒë^?Hˆï7sg‹<© ÍXšÌzšc3ûtç®ñ`¶CUTÑ0eÙ èÙ>•ìf`Âæ‘ƒˆ}Õ.ÙxÅE[*M£,˜GúÜ3/{G\á® :sÌ¢|uØa°Y1Ûª#ä[f†™¾Q^Kt_­Ããì8è²ò Ù4¶~NnU›êv_Wž‰¢ÛÜm¹€m‘´ÐùãeÜûýN³)ãBGÒbó@m<Á£øÆL_ܧõ51*ˆ;Iígó#é²¶]Ö£À‘àò àí 2ˆ[©Nôe·®¼ŠŽnÔLTalÂyŠ$ŠÅ˜¾J©.÷üuWEê\B›³_ÚìZvl…oÄXC¸/x‘“qðI’‡în…‰ ¥¤ðƒ…K4t[=N¸/Wc«*n57gñTóè¼³~ðxX,m¼Ëòvr¨dÀV^ çÞôV} 4vj+¹~ìýiƒ )Ar‹Æì‚š¤{Æ’°?M¥º„QþíLŸ£@[Á¨êü<{xá$b`3õç€#0â¡þŠ1¶VË>J)¨€ÑÍ&`ƒ^;G“Ý´–f˜ÅÝ—î²MÞ¾nAºÆþD4¹ÃõÄå®ò‰Þh¥„b¡çs /Ò¬]&8G™ .α(©šä¼î˜W£ÿ v[]µZ¸%ŸgɤºðÞзÌ!qÕËù¼$`º)òÓå­fÎæNÓHö7Îô¶!_Û:Èéøy~'³è“A»ˆNr§žxîŸÕ£‚ápÚæ¢BùÙ¢üÍ›È÷źõ«''?‰4³í‚×íÞ•u9'bWFö#žt u›ó[ì kRÄfx:¢ª¿0º—:±W8PDŽLab ÊÁcŸ%çY«j•ÀØYfú}^ÍyωŒÑÒyc•€íq”Z`h.ojûJ%=°åmÀnzT¼åø­î‚‡?xt7™6.3‚¯¨)>”ö:ZK{ö‹«rC3¢høÌ-¬«Të—Á@êOFÇæ ··w+„ÒœX¥ð²¸âk×ÍAÿÅÖ]™0ßu…®¾?ä8”;¾¬]N˾jâ–ò“¥BZ»°yt娟·ž!Ú]puÚ_núO7¾I<.}‘Jºˆ¦êƒ&4Í»¢dð¶Ï¼ˆ…Òœ[½>ðнVrdzÏLPÜ ž‰7…¿ëwE^ºwÿ´¡îžÞzƒ»äþ£7Žàk?hè”nÅQƒp¦(v6#»æ2f´h½0ÈG”¢ëvèJ¡ ™¦é BÕ–L«Å‡}.ûɘùþiJ盂‰ììÂÌ.1vÛXdd}ï6Tˆ4AÕ#‡G…Òì=‚L×Ñ ÁF6CÑWKOœÔy%ò\¤F÷ŸTD/#¬MÍ9Ûî@‰¼ªøY­zñ1±LöƒtÁ sÑMã.+¶í_átštä¿JÓ5TÍÆÖ¹ÔWPèUˆ0 nöŸø”,3H£GÄÛDÏC¹X'̳ë½™­šVmi)Öê•9Ò—0MõË·‰ K PFƒ­_r¶X Ö]P’ˆœÀ1ò2‚™Á7A Ù¿.Z)mà¤ná[Dã‘be?#è‡ñ8ñâc+l¤tô)#ãFŸ¡îÌŒžâ§²ÔÚP6îÖ噜ø`ZõV탫@jšj7ü®åìxqwÖŽ¹¢íFøà&á›h=È Wï·¯ªŒ˜YmóCУÔ¡#±<޼(wA–2ÅÀ¥ˆÃÝÁ{?ÖÀõIìgYÅ+‰8έJ‡&‰€´ ‹þGs§Ïmœ0âÜócäÁÛ?UŽªFÞ··ó¸ ÖEŽ'ü ‚~^'¿Ö ®zÓ[š$¶dè/â´ Äyn£Îérï#ï\Ð/GÍ[ÑžÁC]·~Œñiü—Q-„ÓÁiD¢ÆyKn3sqì@Ø w:˜Œ‘Oj^ Œ8k.Õ%pþqPí}¾‡äÚgG[ƒö#Ñçt½œj¦& i=Ã)Ìœ{“–uXß™`yó†ßBË’™µfÒ5}ý4µC{-~®çR÷ÖØÇ^uªnİí£HθߩéE–ö‰è9oÊÅc–mHF½vïÝŒ.Ng‘yÜKÆÙñá!““דò'Ò\ùZuú:@g=¯å ™®8ôŽ4„nÌVt7C©àhT Ôb¨ólØn=ãìÙî£V ™J÷uâëÍKΧߙ‰–´ wEu ìÇĤÚ£Ø=´ÛŠž@uì“9òXÙi©»¼¢­3¬•˜¡=",Õ!Övä|wCn©Õ+‘T½YÑ{ö±ÒHyéèÓ(4³2q%xdëJõ|?#gUQKÊ}8”PJ@·žàÍ`59 ©ŒÞ'xâì ðJÄ9ûâTãóíÖbö#v…%*Áàti›@üLÅ n-DäJX D¡Ó¢ÚǪÃ1›Û¤™ˆ)ëû&²øl ’Ilx$‡/WÞ‹NÔ®ž_JbêØÐCô‚A¤™Ûe-t„å\i3" æ×{–ШèV*ýÎùœöw÷ÞŸ Qã6^ýxȪwO¡ˆüYeéöEÎe9 ¤õv!ÝÎvÂËßò4CªV—d¯žª"{ î!ûZ{›­ tíM0ÀÙw›Ü–w¾»p7·ÿÚš  … %v“OÄ£” c>÷áè…S"œ-GSÏ@ø!·ÌŒW.£e·ëDf,Ö×ç3± ¨†xë6ë#TÜêÞö™F‹B+‡*4‚‚F[3„¢(Ú¼bb7¦¢ÂÎ×ÂÚN8Èâþ34VÌÁ¥£xM›uߨ’-%‘XóZ.ø$Å)é jþÓ£TŸ$]+Î Ex”%œ¨o]+¿ßý¹sŠušÖP§Ê'*Ñ€&yáDÌ—W# ~Œ^–æN]ji©G*‡n7\SJ^<‘8Ah‘¥DÒº÷ ™Œ¦v9ΜˆËwÍ7‘ðå@¾óÀÖß{ádB§Ôm$´Fr¸Û¸Öóñen?®$wlYL y6Ÿ"_˵¹!lÅFçŸó7}LŸ+ º¹Ã·•º NI!„»sëëû×!z±|:µfN¾#o¥_‹âÃòrL7Dª56ÐËônà›Ù&A 1ÿ೨°—¶Æ370í*ß$Z¨íy F\çÉÞk-’à?Üìî1oƒ·Ïóü­9"<;IÌ2ÇsOÜ<ðé:Úsc¥ŽÍ>«*C’k>>3ðj}þXPõ±…+©r iÂÐ7·¢oczÆ­P]ù3µÎ«MMññÈ^|-Ñ|Œqk7Hb¸Ôã§õXè^ ï"fìx¯EîùÝ(ÌãÆŽK¥..y³¿yeéÿËCendstream endobj 310 0 obj << /Filter /FlateDecode /Length1 2038 /Length2 13415 /Length3 0 /Length 14527 >> stream xÚ­·UT]ݲ¶‹ÁLÜ‚»ww׉»»[p÷àÜÝÝ‚»»‡àg~kí½’µÿÛÓà‚§ªzÕ[UcôÑ #’W¢0¶5ŠÚÚ8Ñ2Ò1r„d”•Œt ‚pddB@'s[a' €‘““ `ç`b02p±0~áÈB¶vîæ¦fNJ!ª‚ØÖ@s#€Œ“ДÃÈÀ  dkdtr§XYÿ9áP:\€ÆtpŒŒcs#'€!ÐÔÜŽþM6&¶ö›íþ×åtp‰PþK&$ÒØÖÆÊ` 4£—µU‚´üÿ!ëÿ&u¶²’5°þ'ý¿&õÿø ¬Í­Üÿ'ÂÖÚÎÙ è±5:ØüßP5à¿ÅÉÍ­ÿ¯WÂÉÀÊÜHÀÆÔ `ø·ÉÜQÔÜ h,oîddprpþË ´1þ¿@“û—z%)Q!9šÿY꿜òæ6NÊîvÿÉúOô¿˜ñƒÆã`îÐbÍ—úùß¿tþO1#[csS+ÀÀÁÁÀôø€ˆàÉ0·1º€n Áôt6¶N #ÐL¼&¶pÿl”@/ðéßÄ üCìz¡?Ä þCœz‘ÿ;€^ô1èÅþ€^ü1è%þ €^ò´Hý!é?Ò"ó‡@ZdÿH‹Üˆ¤Eþ´(ü!Å?Ò¢ô‡@Z”ÿH‹ÊiQýC -jT]ã?Ä ª®ù‡@‘ÿ!FÐkKoàhdnîdne ü…é3èA5w´ü3`§?™@þ• $ÝÐÁÀh4qúËÌú?æ¿wÿ9Ìøo³%Ðé¿â9™ÿcÿ?@=ý‡XAµl­@/ü³üc±¶þ«7PãÆÿAfP§Æ¶VVE€Tüiô´Òÿ«(Û?~{gÐ…ðçh2&Ž€äš˜»ü•ã·­óß5@!¦2‚ü¦ÿÜ À¿C@ÚÍþtš˜™»Ðæ¯Íü/)µø A;µü AÃù£˜ 4+ £ã_~Ð(­ÿ è ÿSŠ”ËÆÜø—4Û?ê@‡mÿË êÈî”ÌÎÀhó_[eaüëï”Ô†èÁ°ý³%д쬜ÿR˲ØÿÙ"hTöζN@cÿ–²þ5OFдþd`ý‡€.“îºÌþSÔ³£•£Ù_)@Òþeéw2sþµgPŸN®¶åpþ A;pù ASsýë¥vû AéÝÿBÐD=þˆeò:ü»Ôÿ{« ÚºyÒ²h™˜YAW#è²aðþ¯0#gÐäþõÅ}þ—MÌAŸ Ð 
h·¼`kÄd‘ÒRê#’?õš\д)V¶®{¶>p)ܪhTÊžz£Vý©, ñú€ÐõÏ1¼ÝKaBôÂß>6uþíÀEÿ Ý£Oý:]Æ}ߎø!ð©­qîöŒ\nfc¬4Q£`àûåài®<¥°ò!ÌX–K}OF»ºhº•Jhp-93‘"rª}iKèº+rb<Ôúd E(l3Íê•m»3êS>lN›أчˆ&Á¬¥¦I ÞÐtZŽumaÓÇõH\X]Q=™aFeOZeÄ\«ÍrÛÃJ×Ô¶RkM;ØÜËÎGCê œÃ!iJ˜)NGµs˜2QE¨S.‘¦rdKGÈŠ@ñ`¦b‘[¬Dþ ¹s M fA EI=ŠÇ;¾ÔÏ@‚äÆ`¬ä©Nó€&ÀÏVï—òu‘„káÙ3DM/É~’øßŸÁÛÕÞP«˜o»Nâ`ßQ&SÐeÎìòú{ön(sOÉöXÉž 0»}(ÇY¿ß Zz œX| Z7¢PJ*„üq »=LË‘=MMtdžæžåKZš(F—‹XéúÑôc'ÊI†eŒB|¸gnä™­ò¨ž\€Ô‘ƒ.Øân©mš[;»Ë2/ì'¡¸}÷>¹Ç÷1”l­ÎRÜ$c@'Í”‘%ÔË8ü/„Þyk“KëZRJⱓK¾ã“Î ›îejß·7èUzIïçBÛ…ª»S69ð˜Ñç* ·Y_#jœÛϲ…Ýåd”æ:ßË.ŽÌÓávσëÃLj‰³0 ™ÓFx—Y%WV<Ð {Pæ$çùòµÔö§^àű{LÄ«Ñ/󳈭 r%îœt[íÝÛENu¹/\¼XØ‹¤"~C?upœ ßl#»ôÉàÆ)D¤º`SF¥b3 M’BÏ¥tÝ·Cô]ä0tC.³ž²]‚¯ï |†5]Cëæ²2j¨‡SÌâ¸Jyv4ÅÊ_ì?9BÆ2LR.µ r«$—ÖÓ0.'JrØ3·K^éçWß°g°ÓìžHühu®ü5°¾½Ôe&ÎÃKDêWÖ‚¯<îŸìÇDWæ°Mç?¼â` }±0ÒÒò…wþò–/¾c˼Ÿú‹bd¸>*ÅÏÎËa„n–•åžìZﺴŽÊ€’>CÝË%ÏÞ›€˜–6ü¦HŒ§¿{¡okUâCäidÆôÀ„¸…üJ.ü·a:‰Êî5þŒòŽX¿ C8|åÈì•ÇñO&6ÅãS]§ëöóÒ`¢þÉ>x)󕙊N¿ýdÙ©² +ÇÆ•YÄ/ò»N¾ûlÛ…Ó±S”×hp÷°wëë›®kAìgug u¬Ð›ba2«Ók–_S7¡òÒv…õàÊ”êÏÁò›Á>+T})ÎOß×DšÐÁ¼ú5s›™l±ÑÞÍo/ Æ+/åS‡x-÷¥wl‘I«N5‹Û˜\Ò­tØm.Ýö:ñäAíÝK}ÿ ,‹[-œ.òÔ§ÍvǶyQzLÛ‡5¯î#‘·÷ƒÖ ¦ÊéyçC´¤êDú é³g#iAª˜îŒ‘¤ŽA‹¼–u”³Š 5"­|§N.çÆðÎáÉ|[¿ 2mkK}ŸW9Âó8Î¥Ÿiñyÿp½¹»+ü'¶4«ŸMœ@Sºu‹Êº³UôqŠ SìÊ+$¥õ’â"ÏݯB ƒ…õûmÒ‡šý› Ÿ•%âÌSÜYÞ¹¶xB«öÏQ¾ö(æÊ0àèHHe¯›QoN·2¡K·¤—\¼²'ÀÖ8v´ßì¨Y¨ã¡¹ž,‹ÉON¾Ö®Ñ—f«Mö*r™Æ«˜’uªÏy"ñ²Ïuíóá«5Ê+‡Éä1_}6\£R¡”èö6éÛ4âØ÷×~€)Œ…\Yƒ¾ìÚ(˜îíVaÞ›ÿF¸Î>­µdù#í QzT¿#Âg9O½ÿ|VõÂmÃT•Ò4q¦DíöûëgJ3ÿ lhe“»O׋f78y5¯(Ö;mÓÝ{þ‹>[]¡)ˆJú:íã­ã}aÒ%? ÷gë–d>Ïw®Â4ä“+æ»ò· "²Îì<í©=˜¬äUÇÓ=‰µ¡&~9±ynߦi ÞÁÙ!=n„8›æ¦œœÿT;h}Gm)P|dù)3ªùÎ<èW q°ð"»ô aìuJeÀAxßFî—#æoWÂÒ"eó^›{fGãtIÛò«Yl›Èøº«D„Y.}$¢d >CÜï35ÿ5Ÿ|äÌt-îsÙÝ•ãû²&˜mÌ>L¯¼˜j»½/Zá0^Þ C/ôv¿)Ô‡;WK4{ON©IŒêíoÙh¤ô0b_áráû<´HÅQݧ—0ÛIMë§@Ê,ÏŠ¾˜ˆ´9bêÏsÅ$|¥ïõ¼u¶N‘Ø£}³«Ü+Ÿþ4sgóQõúù=6Ìc;ÍS»=L‡–Éóšäކ E"¬•3}×W—íã깃p¨Ñ…z®VÑrͧßÙŸ$/åóž‚jOÉ[/ö¥½¿Ë ™¾a³0Qg1\‚#˜":Ò|w~MØË3|̸_;êr‡WØSFáÚ JÞ:¦­8A:2Œ¬GEµ> ÐI—¹ÁYfßTÔÙvñí=XFšq2’ž'sRRÛ ®/ª v<õlÛúåYòù „õH›dÌCi Åêž±ŠÓ–¨ò©Ì÷o™&&Т÷¬Âb£ò è¥ÐV?}—\=ú¦^S»È£_çÆÝ=,×D^þ mEXÇZÞgN©rŽY_ewzƒ6¼âbÕ@QQ/5¼ÂFÌâr‰üfLr Bt›ï‹=ºG´z߯nO Ã3ƒ‹8®T³{Ö"l*]‚QŸßÎyÿᔺàY o0ú)ß]{xF±ML ¿XjRÿ}Eည^ç=Ô$OÚÂ,3oH*ø(í)׎é½çï=dä†<á‘™³Ì§žf#/$¨fj¹ C 4îk?½NªߥCÕÔ5b!à7dm#ÁX¿ªhRÁcÝ!Ió´_ø¹ûf\‰¸  '>ÛˆÍâÉa5:Ft~h€M±‚r‹@³àLÇYØu—t —ùA…&ª’r.qwoíÏ`>£`üÑo¾¾â.ãzÞö7íÄoÙ9Ô·Ï 2¶1­€ ûžðæÜ·«™>+çŠ.GWóõ°æ/Ö×KxÍËóZnœ›ŽÙ"F ’üK\ѕڞh€„)íŒkt”š0Ç-ëծ׋O–¦lŒ=š‚I#½Ÿ!}÷vwm°£RëT úx!„ñ ×øU·<„äÌ‘.ø¾Å'½U«µ¶iV ðz‹‘g¹#û¤€~ëB‡wˆZÏ*»jºàzü.ĤÀ…çn O¢„Z;î,¢O1vë‡Áy5µíâÙ‰†•ç%gß<ƒ/FVorAÑXW‘´q2ñ&^ýµ9Œ>!eÞY ï­Aâa߬?èëÆ¸Eu¦Ìüm>&d`(¶S !ÍÃp‹C‚:1Iì“+± C"]¥"uÍKþ}˦{â5 N•“·‹ÞN²h‚’T–¾œ¾ˆ˜Ê¹ßpgT‹þ ¶œ\S»–¬@?J}Ù¢)qYßä~f3ñÛ¢r œ¼ÀÊLûæ=¢´„ý]ÝSñf+d ‹úÚ…oíz»¦]ÇÁM¡K¨÷Iõ ))å+²z99¨€Ò+¯Ü~nYžD+#ŒL¦+ž´bO£¦îûVZó%ϲå ù:Õ_<·Œm7µZ¢þÃþ -è¢J,d~´$L³·T*RVRY¸ä19ñ-µÄØ 5¥?Ÿ÷P“_L)¿ÿøŽáûpUkä¢häE©¬J /—bè}y‡Ãä³ü> À"ûz Cê†go'q:‘}Õƒ¤¢:¼,ˆÊmøÞÑ“>%)M·ð´²À#vÉ@Rdí…Ý:¦~K ËÒìH¡ÄÄw#x‘SÿQ‚æ–·Ü HÏçIñeUP€GX~~ÑNt–»Ù çð9©µ©êž"‘Ý¡  ÛJ‰Þ Wp9{`ùÜ’`NÀ~Zwþ‘ÉËÝù’®øªæD· Ê§þþŠf¾R¿ýkl¶‘˜ݶq ÎE Ò}Ñyß2Y6 "ùP¹Ï\“IÒ£}•'?°vP±0íÖH–3‘á3Ì´6` *-yw+Y~3íÒ7õËè<¢Ý‹uÿ¦Xç]TöÓö…»iÝûÇYÃURXÂ_éêÅ|êAeسÌ0¿1éž¼°¯É¥•³ðæJS ÀæN ã´j¶¾J} âüÚh¹9öùS®ûA7Œ£!ºRÖái3v䊊ûB÷ÂÓ I² í—jZbbVΞ΀P‰kŒ6Ë ä«GâSf¥¯äÊÆéYîÐ>è¤Á^ƒõ×ó)QV¯ÞBŸVœÏ]4k (Óæ_NC1ŒadŸ\Ç/ÕðàÁ|JiS)ÍÁ›VÖ«·?·Ó¦5½¶Ž?òë¬õïà XrÔ@–&Ïv÷éÉÇâO[õ'çë8 ˜ÔTØŽQš ß²¶Á¢'sDEš?‘‡QìèÒX: ?ÏÏUeR6F½/ÔÞýPaC|ú»Šc_$X»±`kåCÃ-¯k9©’édçáæã-ÂMuçY#¶Ù¥EüQoëf÷ÕŽ3—ñP¾ì°²Ò(>ûÉ@}xÔ’”˲¸²š;ÞàmfŒ-ÁÐõÒ.£ba>=Ü/³ãnÍ)6Š3pïX­7hk*Û$xŠ£O8Iƒ²äÏ"1{ù¨:Ca˜¿ÑmúìùlµñðûŽV0{h^Éz¡¢Œ¼Ü·ï•WOLöî–kÜ¢š)’gÔ¶æ[Ùèk`ÄÁ­‰ßÚ\7&¨Ÿ{±®ZN¬Îf¾RH$¨¾×3…†ÌÎ?÷ä‹waX¦ðŒ¬£;,£áÝÿ8ä‰Ðôª@Æ›È5"c Hµñ˜ÂïEÂL§Þ %Ý—Ö`K¹)€-D•tŠtb›G>ŠA]s§h!ÎRÙ€ ¸7œ`Íá]ZÛñ9¸qWñ^ –Ø#MpUÍÏÀVê¦[_Ûäf ,7CLÕ%Sp»¥‚쇾¬ùî7Yî [èÄFh¸—çMRþŠ“þÞcô <2“< bx1Ù`²sB¸…>˜øMò…~ŽaííŽ¯ÎØº0‹7œâíÝ/ãS!4´ÚNÂr„Þ+|sÐÀB¤¬yÖ÷ìãP ¦~ÈHM6çÒúB¿›çÔľ‚]U1„Gß`®>…]6,XŠiÄʉce÷7»'ݶT‡ØBöoáÃ?4mÇ‹DvÑ´¼è;Ml‰^´\-ZŸÀÇÞH't¨4^Á;¢Ö8$3å 
8WåÙ‹hJ“˜8T½I²¬!¥^ŽWU¤»­S¥_¸ÚãƒÅösy~Œë½BÄPQ ½qèk.Zš-˜Ÿú­Zˆvn‡Q‘ýpÎC¯öLJá«2é\çÌé¯mBÓñ·Nãy…Æ¿8Ýa`­?x`‡,*ÍZïþ)£Q”>”%çýIÒ2„Ä+]«!µAõ&“È^}îU Jà~QŽ6z—ûø¶â@Uðó áNoJåî¯5›†a¸!BJ³šÞ÷yésATÃìÕ¤ä¶ÜÙx3Œí]Ëyas¶b˜nÊHÑ¢ÜÝæ‚«Ë_è }Ö¢3ÝùÑ|+QúæýZ33-ª^¨?¥=ÃxB“ÕÁý°Aáj—ÉHƒÂÎ]R7تÜY« $YFt^‹ß¿ö-ÙÅ·ÉõgNIï*%BfäÝü°t34QŸ'—ÔÑÕpè» Y;I'îdƒàftÞ>¬ÿÂ2qÿý¶<º>=`Þ×1õõº'N9©¤Þ j2^ÎÖÃ3óh2ÈËã¶ aUeáN¦8¿3Y¸èпCȨ0”ÏôþŒÅñÊÙ2ºš×XßÀÿ!ñýk²ãg^oz‰cÖ©_§JÊÐÕÊ¿[ºèS(œÑÏÓš-µC?+• §³Ó"®ÜÇRn§më6g5À„ßuü¿Áím^„qÞÙÈ*¦:Ve„˜•3nf£kQÈËŠ÷•úêpNÂüÚCÿ¦¦È·¢ú“ç×ä¼™6*©œÉ\!}ÁÎCEExdü»~äÂÍ©YûÛ²æ]íÓdî…:Ÿ- ‚«;Âñ‰B0÷]’°r›†E²ÎÅE¬îž3G6UÖј}˜Ãw8,±÷v³ùskíÜH8‰üÏB#Ž]õл~Sž­-bÇ…Yþ‚Jƒ^~W8õžW.S´d]yªF{þÅ:<ÉUÕó%µ£b“b” šÜ=Âe²¨±‚ÏiÔ8˜áÎÌ«™Ld}õÏþ2[v´Ÿ†CKµ/ø-ÆDdˆE‰0èR£ZÔ9ãNRïÒ[U#Ž`à$;IÑF=?k¿é;(MåË6=´°—UÝ«œ€ÎBX¤7e ÝŒ§ºhæÿµãµÑº²ÉƒÚÏúÙX3çÅ®Wĵø;cõiµ]~‘H¸Ò@êO·É4²—=­8*åòï¢RÍ¥š’ºPª¹ÑÃ;=Ô}Ï¿“†JIò&…9²Þ„Æä²ŽkãÔòŸž×)x~Yå•m·ÁL|K&É!@*œŠ.åß:#å»%xݘ„sþˆDCâ­Har¦ö‘ýYõÚÕ2•Ý|f&êG½\q‹8N?öo6eŠÁå ’–¾žçÃîýbgœ1™Gƒ'NaŒ‚™×Ÿ= ÓU NÒ0Œs>È}@óíî‚YÑ¥|ñ±ø5ç¶€0¼#%Dy$Z\”8Wìz+9Fža½¡fÈ’Ö7Á„W˜‰ ì¥ÿ™J‘ø‡9#ç4¿MGZ?G\¦´â†7;Q´%YŽâ>ò`àÙ&Q1¼mÞ÷K\¿ƒ"gÔéï­µñªÂ›=w—kêòR¶ÕßΓ71ž>”øÈ¸8eaò€ï:AWg_ŒlNGÜS¿ð| OÁÓ­ftµ߀PÓÒ`ÁÂÒµìLd å%;ÜoG¯øfžáçÉ%u²ÌKî3`Dnu›!Mòùg­ÝR.êP’—ø6=ýB`Ouqâ"x^L#Ðíà̸OãøWÚiÃDPq¿*,ý%ßÕ÷@q1(SO| šJ’¯|÷î£7u¿Ò ­lí>Q…Q3†&{¹ =~ã^cŸåª¯RK)¦*rbu9è–<¶Kð‹$ I{ÏÿÆšÝ{ŤhÏ‘šã:—Ùyv®E5ÙFª§ƒv“ÅyÚyž¥áØGÍ —Ví䕂[É4йÔ$VJÁRÜ`SwŠm3ÖüNAÍ#ÃQYoÛùÓÂVq¾Å¼*ЩأKTY #ö!·½ªâ½yxI˜Oû.¨Þ¯>Äc,ω&Áým=£Kh½ïøÆÈ½/Ô©LiþÉ+%BÅü¡ZAñë2ztvÞŒ 3UÖ#ûfd÷±q–u5FTl1!O.ã·&…Ì"X¢ŠÜÁVô¹’ÀÛo<^»›GOŒ¥ØÅÒÂi_:¾9;?l$ž‡pÇÓ P‰ó2äê§ 9ÆDdvÍ5Eê%‘½'hßl™'·Bñ^®Û±ÚEë˜ë³wc?oÖb%sÃTµÕkÏÀ#o~Ë:ª:9Jñ×À<*wó7àC°2„3ÔP÷mJS”›®e#È•fí/ X›‚#ú-4X"u¨´ˆÖF$êñ*;‘Æ0Bž8Ýrzzj%®x¡ëN$ëûûถL8Wè4rMÕû¾"×°'WtºýùSXŠŒ&,̸ƒëN­žñyjš€é:7l|%ÌJDµ5Ün§g4±–Ï®B“6†@ãw⮞¹†ó5è.M› ,Z dfÀ1KƤձa³Y°çŠˆŽPFOoâyýÿ9'Œù‰þ¹B1ú˜§Kê•H6*ôƒƒH#˜/ù¾Z§¢¨rÛÈ|a¶›Â˜ ÕÁ´5‚/« x×b]=óä½þyo<‚@£bõÓ’]Yn'ónÏP›ùò%I6ðx‰·C »CŸ7cuì.7\§"zÉ=*u¢Â>{ôĶEr™–pÛ4/–› rð¶ÌåªÂ0†H’¦ à%â¿¿¢vŠ˜WÖmÜ]ÞÕZ¤ ð3åÛ¯»‰‘RœÅ4*õ$o¯d˜W¼úàIjþ4‰–Rü [}x ‡¦aLÜvÛ>%ûç’Ÿ“‘4 ‹éèÞ%3rÍ®¤ñjè^qùÞ„4ÔÊÆ° 3˜l[nƒŒº”_[³8Ùq‰¼õ=éɃÿ=[ï§ Iü¤í@ÛLdt™ã×\ZŸˆ|ª ÀøJ3–­•y‹;K1e½ÝÍ$Çâî<Î’÷J³>]‹½ç3ÕœY¡´BùQ…½„‰ŒXX2Ô)‡5ðI³šÌÚþ~$u7â¯^`0Ä6ZyW¦ˆbÌwe®AØI…ÉšûΗ\M?‰ä ‘Ž¿öDã zL³9Ýï @A} ´²Á|M [Ì ¼ÕD©!åÓÍœ?ÃpdÝ9¿–ßÑÈ4©«S,yx´lldE)õfÞ_+ºÒ­À¿±†6VC½ÛÛ†ˆð3#‘€*Ñ™%ÖU$ƒù6%¾Œ¢é%骦¦Â‰Ž}Þªtü¹²-8ê;.4œ…ŽiùÁã}–’ÑóÃ/wI&¬Ñ¹H‹âÐŽB'Â/ü.Ù×ãìÿ Ï0Z §NmŘîØ×KÓiX`-")»^a"¸N¾»0üÂ&õ<ùã+mbÒÙµ}i˜f׊ÂìW¤ã_)‡ŸœüeI’ªê«Ž"+Õùýá¨t,“WCù]ISÉü¦¢Ý5uݽzX/^f3ùÆÕ¹_×ãD~oåÀÔ/:·t®•s˜9©ÔC|Ý.jMÀ$ïœãWqÈà‰sPÓ‡ÉX¸y¯»+ÿy½ÆÞŽMJ²5ªÅmÌ+à 0Î¥éHŠ 'Õ³þ-ׄÂÝ svn‰¹–·ÿuÀȇ”:(Ø7GLª¢Ô9‘œ¾’XÄi¯‰Pqs-å§„©o(…€/UÍ ‡Sc?+x²ádVëïaÉ{XÊÄZ£nW°ë•Çï<>ñ²‹+?•h‚–¡ðô†›E!•¢Â‘ñ…FžüךH¥½ IymÉZÁv¬y9ž²ìœÑÎ0ÜŸpàùòÓ;üîp„ÈøèµÙ­~r6œ‹fÈ]ò.\]iÄGþÎëHÕw -Ñ 'È¥y£º¿´û1e<{v YPícá%[BZ8å!ßQþŠ"ÅõK¦=—¥ÅŒÁdÒÒ›‰Òk¨±²›Ù™N®«V/oÆ5hæ ¹O|ÊèÂ]&»97$ÌÊo'ü\\ÙRˆyÖ¬’ìÊp=Ó•´l5×E.Í@…¡e”¡¯G¢ÈêàÊãLÈ>„#‚¥¼Œë®R}ôßæpJw<]«'0îBÁš¶— ´¾îgÇŠýŠû¤é/DZGn·¸E¾œV¢ý:„&F­bJš—q¶Ä3$ŒÌÏY#‰ca—$þjäI§²Š 3šªV(2ËwXsubÑ'U~]†ùÜë¤]tÐ{½ÒئÑúšRßæüÑ’]ˆ|d¸œúl÷EGìGÇ›X–Î^SgHx™ÂÎKqónKNm¥¾ÙP´|/B¼/ï}÷tXp{Ëʤ ˜Ì…ÓrŒwËåæ‹°¨3kä$ÔY½VållÕ4zCw¬„ÉK\¨ûb þñaÿýu«u¥éóËúN"lVª³ÁûB„5\¼ƒjXgÔPõ…¦ª’î™b‚ÚSÃÇËÌð;n€Ôh 1ÍØgÿŠâéŒ>ËÉ©^³ƒ3pg)ŠftN°Gí³_ò¿”úÕd£µHu¥s'|L?:„•ºK¡`Kò¾+\JŒ¥ áÈœu¡5‘£Òt Q“gÏs qÔÔ*5aNfÓ‡+Õ·#uÍE”'C)i[®†Ч¤”SbIÃÝÙ‹$‹¥xíåõ™þ}¹Ý“¨„^±‹=CöfÄMzG úÕ—Ékß$Ô1aØHKëñ!ž¦vºííSxba„×$˜›‰fRl>ín|ï–ØÍªÔâÏ—æƒÙ'(ö öSEŒ­ãpi¬ïŽwÆÛÒ²Ö ¦È®¬YÁ'kœ~ùõ/b²·âûŠŽ ·´£Å³¡³¬¹qô­õ1iÁý°þY£ßUvñ¾´»'±C|¹ZÑ^YäøéñÙx—¯&½­8uúú¼_¾†þë9dŒÐ¼—öw‰Ì4Æüa Fr1y îþcîã&m^Zçj¸êŒ'drÊËóíû‡osÇáz¥¢œ ø.HDiÊ×·¿ÂŒòñ?Í‹:]_±Ñ5ïò¢tíÒÔÌÀ&ÂF²b²iÈH3Ùç‹BwK\§Æ _OŒ¤§×[ìÚÚƒ¸›Çmë]öQŽñ ïÑ?y„Qˆ{vA$µûªã„yM=j(šqJ½Ë&à1Bß6“g››[uí Æ>LÃó¹E8QÀ@ýbŠJºª}ŒÂgÁ¡Œ9ÀÄ™|L*À‰}uA¬ÞIÁ#ÊH„qb6`FÏTB œ:±á[Ÿ’Š~ s&0 ­¹•‰¸XyôWFHˆõ¸~Úb¹ú\u=ÙUƒØœRxNœ’ ÉŽ·ÎW½‘'z‚ÊËê2ô]+\á1¯Á7ÓZ#pÖ^Û ÷‹u*y‡yføþÛ-öEãg†òË õ›¨ 
Ö‰þAIz‰òÝÁn‘¢Ìž,F)^H¿ek #õ‹Fž;ìP‡{{ó×\Ôž´$àË‘ð{M£ª°§R…Å‚r¦EˆcœÍ¿{Ž{=qûüÖ-ÈÒ¿*ÆÖsþù{Ð=‡íƒ}»ó‡H1àš8l;uC[v Ã3·`c £ÀÍ ™ÿ¥öÚEåïß]7]‡—¡!¨tka«<×Ë(B´_•5¯*³BK¸‹IP±Þή7X¸Tx‘ĨðÑi¬eQldIa±ɱï e°uwÄ:òcÄD<¦dH¾ÉÛ"Mº‚0„s>·Ðegyܘw,ž°º ‹¤ø%QÊ6Xµ{F¦6‹ãQI=¨ }þ%ÄÂÜzr¦ßçgõ]è£SlЄWARCë&òý°^w›$úÆÌ+”-†~K–sj…¾;«n¸C™«12œ.a£Énï7!{Yç–xˆ¦{M÷Xû14݆þ,"Š4­·Ç <àYº'~™ÕO÷ý€‹á«žíÚ>~ç_¯>©s¤Î‡¸ÐOÜÚ|k©®ûdDz¢éÒùº‰úA`k› TI_Îç¸8á©g˼êvßrª'²»$¿âd KûÕèô‘DÒ/ßÁÌ^öà'¸øI®ó¦¹-à˜@&`*§2(G[P¶^ÑÁUx? >Q¤ D­}ŽÒÅ3“Fw36^Ç5+™G£€ŠõÀ“›ßaJ8í@úd,‚»«¡LGÉyÑ*÷V·ËÙø:pÒáá-—eœýÍY츱Ú[Rnj`z¢:` ûsýÆBîm}–¡Ü?/ös¦buµ*o˜RŸñƒ>zz(•;ÊeÆ3UxfÓ¬éBzB¯s] ‹¢Éë-A´ÑÆi¯SO`B¼-¯'ó^仜܅h ‹g¢`ᧇ…V‰>þ¼Âùkz°¢í»Ø‡`ÔQ¶`>®1K•™»õyŸbb8 ÊoYÑpk›/Ú™FÔ¡¶šÅ޼–æJuê¡ cêïC'ÏD>BÈÔdJ¨‹§ªB…ó­Ù&ÊàÁëëò@¡î_ ~^N8¡,˜ÈR”g`šF×Ö:-Ÿp¡àœÄ£²"ý/˜½U»ñ•ÉZ³Ú;ýÌÿÐKÇBZ"çà±jÂ;õbe ƒ:KáÆ«s6x–G P.i}ÈWaÚßzŒ‰Â˜]&«ùó­/vÖ:ÆAÀéÔ¶o±yƒ‘ø²3)FãºÓ A´ø[F\äh:™GÖøk lå³[l¿½Rä¥{QÚ}+mÉÀ“ZÐvÿÊLJ`^k®èÝ$õÉ4éHÝ£¼çœ>µ—X‡Gè{®4|Å^–_ûØqÖ`K1eœÐtnÂRÕýp@Y›¼W–»2뵦mÏñ-A©£Ù'ÃgøA2Wš|Z4–‚ÇñÓø±…-ÞA¤‘d\Kä½mr6V…©{a)}28¥èritoÎboò¶øôãS‘stÂî·Ì‰|ÊíA¥·4•¶1KlmÕØïÁ,œG†Ò¶Ó2Ðü…¢÷?4¤›ÂÈÖ“´·-l±>p#y‡mN}X½˜EûOâ2ÓÝ‹n®”µi¯ø t×k¶µE§uƒeTãf÷ÞFºª¯•$£VihýúQ˜xvMfE#Ÿ$W®¥Dê+A ¶ÃELæÕ܈³U|o™Q9š¥”̦„â‡@¤½´w£N¡3^…ŽÇy™Ï—#m#öµF„‹ay¶“.¾§î™H¿Ý†ñ7êaê;û±A¼./–K™P#ìX£ô:ðýÌeè»Qm!|¼ó}sPm'sYzetƒ>jôrÀ.Û䘢éUÙGKÝ?hâáˆnÈ •²³V/…¿o”×ølI÷€A™>LúIì#E-Éu5v‰{C² ƒãB}%txy‚Z[æ³ãÏÓ1–ºå!ÊÖõMHoð Úœ³OLšu;Xç>5Œ}l$kxT!§zíp¤÷©CÞq^èªÐ¬·ÜÝYëaã4Aœ.ìxJäeöbÿJù€‘¬ÎS¸µÈÖ•üDä>ý6(3á´¸m“„Þî¢É.93‘¥¼øÊ1jW~H¾Œú\öþI™wIDT^t=¡!QÓ‚ïïë/:,è^¬õtµÛ‰=,år Š%(üfãU-Å©ˆýPÅ8¿HÌœŒ²€ß¾ù>™¼‹²}3®½AGXËÓf×ïí{¨4@üíKE¼`M“:?ÚÚzaÓl¶ùSDcëéFÃòh†ˆÁ ‡aÅO på7#Õ¡‡9Tj¾ÃraEËÈrlµS:z1õ¿à½ô%6'6¨¸~F­Ûß”¿/½Ã¸˜ÀpÑ/üz ï™öä§X4ŒG©zµh_f}½¾Q}BÂk¸]¿*õޤ»7Ós5S¤JüàŒ”Aò»C´Gjµœc/| k¹,à#çåãyL»Kgýº˜ñf#³èºc—¤ºFRïÅþ}6•]2˜È8sý¥¦/ÛäÐ!X¥X0Ìõ—^úПâÚûÒ"î h‰]‚0±˜žû6Vù?rŽtk ¿{V”Ô  @,ŠÃûhܶm ±´.Lô|¤ãDÎ&Ý$Ñü2öi—/È H…›ÇªB³+·iÂÏ–ÑuVé ìÛ‰ˆ6ªþñnÛêÑ…9çn!Xï8µŠÏT<-Ù툳G›_XRݵRÓRÈ7J!ê|’I»6"K=dì´k‰Vî<ï¹j™ 7‚Ž‘ùã±sWåX+Wìä”2ÒâPdM¯½Ë‹òBç# pè=¢ÔrÆ-¾ÞãJ(Utâ6†5ë~Êƾù+ÞE&:´}ëனš ^ݯr¸Üxµ» ÕÔŠM}ÙCÔ‡ý}\}Û>´EØJæ^#Ô×ÑZ²„>ç€v%«jõmžUØÁ²dOêtŠðE¥Ññ{dÃ6´U¬x9(³ŸqþßR5é1Kðúh÷åò!œ£™¿¨¿’¡/`7—ò‘Ea–ðÄD‡hvåÄx\².|`Îûrrš#}©N~4ž÷¸ev ~p(ÔÜksfMšb"Igš’³åÿZÚFi¤ŸW½CÞžü ›§Hy1îÙˆ)ŽÓ™KbãEo„Ý@;{ua\( e€£S¬œ î_ñaC§•¨‚ëÓžˆš"ëƒiÝYú„i¼5ÙÄ|&¡£ÿõgt…ãÏ>÷í®—£LV½Š”VZ†Kä†Cç¹Wr§4ªK!åH*yµ0ÏÉÐðš¿´oŸV ü*O$÷"÷±P 7l†žØE©táöýP$ìçTb°‚”¥Ã‘S7}IÅÄ–lĆÏË»t‹Ð[+Jc»o¾þ›Ï'Ú‹§OvÓžE1| D·Näya¢)Õ¾C –ö÷²Ñ¦PЀ]£¿Õ+CüÔ'»cªò‹b‘€S[¨Ý§õ®è݃ ðð£_s¯¢ñê¸MŸZL ¶¹÷­Ñuª…-}šüÚq] Œé3_¬ÈИµ=¶³¶F9=HVŠ]˯‰=åUÑÁ†‹HY¨ÆÓIÌ’—dµî%$p©y“P!Ô´¢pØÌ4;ÑuX2qÂmâˆãšñÕF&“ÕCŸ”XK >(ƒÿÉ­#›®¢&½|0ˆ~Ì—(@V~–fÏÖhÖñ3«!öT|‹í£Â ÉØL0Uzߟ5ƒäx¥1Að›’¬”CE_‚7¹G•ñOä°|Ž;‹ò6JQmlaÇ¢ÒxÑi‹]Ãw¯.s¥PnŽ¡Äès^ÌÏpZíg¾)¹Vű"¹µj®“¾4N@ïÞ ùÝ· ŸWïíßLuNÕ}o†~NÐ[akb¸8ŽîÞ0}”Ì«X¥— =³,©õ„TX #ê'^t!&ÿÞ]…w¤òAf1ððɆ‚&ÞNŠŠ¿ø#ݵÎÉF \NÑ»[¸ª+¼:qiÞ"Ì[*} ˜®Y„ X4sÉDÊX‡jC1b¦gë¨k¦Óšð¢Ÿ'>„-ÎHö`_^ŸPÛ—$5šÆÆÎ/ª”;È+ë_¬·4ËŸhÄw°v¼ér›ÇzNt?Oï*éDKªREçò¥s,Á,¹Ëð^•}gu¥jîxñ4/£C" Á¶3•ö‹™@¡åê(é°¼qdª,37ˆÒƒ‹eHôB퓪ƒ/ acNWÝ¿é_sßÝr>f9ËàŒd4H;_K^,øÿÊY—endstream endobj 311 0 obj << /Type /ObjStm /Length 2211 /Filter /FlateDecode /N 96 /First 837 >> stream xœYKoɾçWôÑ‚‘v¿ÄbA²Áfs°EÖAcr"5$’Z­ö×ç«æPìé9-N}]类é±Ú%¬¶Â,N$‡Å mÖ tHX£0šPIO°,LÎà™ÃŸÑÂ&à΂nÀ-bŸqÂÂyá#ႚpQ„@¸$¢"\Ñg•H 8«¡†ÇjDÊàm­È6bu"'‹ú)l² ªHÈ(´&5lÂ?°6À,RP°ÓB[0,ÕÖì,þÉ;'´sg—É,pö$Ó³'¡œ uàH¨çHBÁKG çèDB=8' ³t&¡œ3 õàœI¨‡W õp«J†_5l²Aá´0n±p—1‘HVr´ ÿPt‚Æ!r6üCB¡ŠñŠÀ0O|ÀÙg`®àȹœ#YÁ9’ÐÎ ±±0Ò$²‚M&¡œs > +ŠeÄEÁL” ¿’¦ )Qˆ‘,å‘(2&Š™ÒÆ&0Käk¸üpĬx„Òâᡦ%u’Ñf²¾±™¦ˆ3ÒÑü²àî<"cs@ÒVøÚ+2''¤#Œ³p [™âðÃçDOŒˆ þ±" ìEð…äE (;x6$çò8gû‡ï¾ïþ,þ ÅôA¼û×/ŸmÑJ"Åæ±ïÅÄ÷ß¿ #t®‘&eéP6 q®‘!iIÙ÷m¤ÓQZhÏ áÞ‘ôà¥A#à~ŒtVæÌ!µ[¤£“ é c…ôVF´T¨Ì¬®Új#âüª½{Í;™ç1j¯x'/•å"ÁiâS~¶•ÞzQ“óÐŒ™µ’A‡œ$u^[¡IкN"t%é2ex£)_Gë¨eô¥Ÿ!GØX2Ñ .J>ð Øi]Öy¡tFKçX2h?—·AR¡f¢uôHÁ¹iâ ‰Ws5A³D(ó\Þ6Joçú§£t~nY^N*.C°Ò%Voí³–t>Îl'à­Ù’7®j}&D™‘×.¬\0Gp 
ƒ¢RKmÙø09k KJlûáÐÚJ˶w.>8}ÐçòvhmÁÍõx0GÒÜØã\•™¯5®%_n÷œÞ”áinÛ þæ£Ã¡S•YÞŒ¿¯d8£·7FÅù›†˜sZ9%ú¦×ýžS| GwH4ž"y[÷8£Ëx,i§:j™fŸ°VÁ7;ic4ûÍL,høòáZ¾‡ÞœÚÁÙº‘"ôàŒ–KÃàsíB-ÀÁI§f÷7Ô¦ç{'wž ";ÿ©QBÝ$š÷é„`ÂKR¢™;HàýFº07ôÞÃJ.gcíAcžQmNz¶†WGôM¶©Ä阒ðNˆwN5—Xcî*wí$½Ù|c>6˜"b¤7Õ(5;wÐã#r-x%-;©L˜ÛÒ6*ȲãÄ„{”P¯–ïlßDCc…n¥âØ‚ãÊg~ž=ce¥×üKFõê iòÅ$²‚JìØÄõš€ù±‰«}t:Þ+Ú[…¾2÷˜0ˆ3s»gÀ{ß'8EÐǧ±ù{·î{lxóññ˾]ºíF&©å§»VüÖ}ÍîöqÝnÒ܈7ËîÐʚݮíû4Í7àôîͺ˂n|H¯‚=ÃQRÉ?íšUß>ÿñS»Û=‹õvÕöâ©;܉C×݆ۤÁîî»Í­8L…–máÒ6-ßCЮËíú¡ï–Í¡]‰ö·?ÚãÖxi«‘# iè¡Xµ‡¦ëEÓqã1ºÄ"ƒ}ø–´»|úaìjIW6g©?n7ÝþDÉzDÙ¯»Õ‘˜nÊEÈ™øSûÜîÞ~n»¾íöoØî6Ý=ÃM¹êú6Ä Ãì‹êtQwÍæ›öi¿øô¸Û€å_ºÝáŽ8Å“ñ¿´-~kÅøàâNºwªt(°#Ÿb³«]õñýÇ›r‰x~4åHVES~n7Í—¾Ý¿ýÐ! ž CùmjËÏzÓMæ5|nûîvð/_›Å]³?,Ý—ýÃXf̯÷Ñ¥×Yvs»kOJÙZ©¦ï;"¸›ráUëÅ×fyÒ9Ö>j6«–´ m>VVQë®Û»¼kÖÇM:g{²ùŽ L{ »‹n?ÏŠü·¹Û-‡ßi»¾)·»‘Q)K_9®Y»?lï«æWdÂvy7xüHz:ݯŒÉì6úTsVâp·]ŸbnëÔ|BÛYlwë—¾íë}j»bM¡6ꩃ§žK1bª\éJß-®YLПÿúþ§¡#¾iÿ÷XÎzÉžE4Måc©¥Ð_;=!îíéܽš‹Aƒ1nHñ³27å ×äa‰ôEóÎ8=²+ŒíÁìð¦U=-µ¨¦Oɾ¦O©© ciõ”j ¾â@vÆ‹µwÆÅZy£/)_æ?Bù_©YNú©š¥Ú¦Ö›rHNOZØxñø;ãL­»» :¡lšúÇ”SLOŸ’‡Ó4; y8M/)è&˹áÇšÿØ­H#úBEÏéû×qõÆ5ë)åæÍµ£ã`*òèD7\Lз³ãj†Õ«» òU{»{,Î M C‘ff—qƒ\fĽ0&)Í-^Ì âÿ§hŠÌendstream endobj 408 0 obj << /Type /ObjStm /Length 1634 /Filter /FlateDecode /N 96 /First 836 >> stream xœX»r7ÌýÅZ  J¥È™]ùQ $›V±J’ÿßݸåØÃ»7èíéy‹…nÕmNþx ®ð".HÆ5º Wu²\““Äñì¤ ®æbŒ¸-áZ›M˜„{ .m¸qI®Ñ¥Q—#ð’\6â³³@»9ËÄÈ ½º’ÔiÜ\Ýð| ®*pQ\­ÀÁwØðC£âƈL.ÈÓ˜q“ˆ5IJÁY,¸Q°ÆŠ›yˆ8Dp*´…hDx‹BÁ¬™C`N‡Àœ¼‚91zp… Ï IHœ*˜[ À…IsÙN`. ÓæRF`¡ Á`®F0Òº‚ 7 ÈVs˦D  •Pĉ aš7Œ=G'‘±g”+â—fÔK{3õ*ª)Z sŠƒ9˜¡@2*¢æÌ²˜ ý¡HBQ¸«`Fµ¥°„[ kˆ?©L¸¤2uV]ܘ:´TÜ?-7à".TWKÄ jQ!nnÁ7•àÖe£Íb!Ìl&EÄÌ«`6:Eabe ¨²µ£V¹¶&aK(ÛìÆÆª…n®èX`ض)ò†Sb㢊ÉP/vC*uh„ðÝ›7îõ÷îw'˜I?»×¿üú›ƒJKæ+RôåñÓ'÷ν}{‡ã òp‰CÎÀ ÑoЛJõœ‡·Ñ7Ý£=z`ŽÕ'äo¢Óô ´ø‚~™ 1Ú#“m~CGLœ§1z4š‡ —!ªçä¸DšŽ2ƒy†<„nIEI_ÌÑr2Ñv­9K®«&¢ÆJ™rö)Ïà9ÕÖ„E3Yð%ÎZe`¿Ý|iDæ*ó‹”¡ÚÉÐVæeZ¯ž=…ž!C¦¡ ðZ|…IxL_†'ñ+Þ¾ÎËb«•>"s>†Ùl8¶.Þ1žëèËóæ†÷ã Û’ÏóYÛ#…Q¹" :g6'؆ãrâ8 뀼1)H¼Ø½ØLlF7/röÈ ê{Lü¶Mçï¦kÂjðܤ̦z¯@ŠxieSLŸá/¬!#2†„Κ¯ «ï˜#/½••æÎzèÁâm:gs^Ö™—›*/ë´eïeu;"cPoe:MʪwÛV½È7¹§T´‰MßsÛjªdÀ:•ЋɲçÆv_^$ßÝÜqÞP—Sq@F,–f…°(ý|­Øea °§é6".×MWߦu\2?°…³m¾“Y®Û©vG2Ûp‘ ätÒÚòBp±ZFæ|¶$·Åøj˜!—'í)9]Ù“XYÝ=‘¢[Œ'?pÖ8È€—Ê|û–_*G$–%슧ȴZͰ¼X[޽¬pþøðùáÛWÀ_ÝÿûøþÛÃß_|ºs¯þzøøøß½w@¾þéýçû ÏHÿlÌwí€a4Ú];må®=ŒÆzךñÙ½îsg"x äþóþ#ÐÛ¨öyœßô=o¼kõ½IïÚÇ|obû«qwÃ6à‘çWž0á¬/úÎÃ<@èH)X7Ô›Bo¡Ü}#²[¨6ÅÞÂzìxUœ=‰“0Çê퟿»…¥Kƒ\Ö-•ÎBý¡×&LSÞz £Þ·2״ɹ°bmÒêª=i+k¯MZUmL“ R˜&ëGwj·'åI\”™8¦É)L“õU¥Zé•D¦©ôeŽLSéµÑ›–Ûã“¶YÃq”G@'³Tz±‘Y*}Þ‘>ÙmèY˜Æn c³}½ÿ£Íö8‘ƼÇÞI['zLâþA~fbŽÂÁÖZ0Ž6Æ®Võ ¥Æ:jì <(È™ŽG3²oÖÎ6†Êhc”¡Ž6Z®‹}üpŽ”ñ÷=¦öæáñÉW?ÀŒIš 0mûntCªxù„6ªh“Fuu9 :•½\ ªUãÄ!«¢ùb 5íZ&Mû%U>½Š¶ÉŸHy!¨Ìø›aÈÉñ{|‚gÍG¯ÖäØe­É±Ër•Ó—TåôD—¤ u<¼¿ÞÀÕºñÛûŸîgmxå™öÄqÙ²l×FÛ"¶‡yr×–Ð[ZK>¥WרFϤ1€ó°î_)Ï0nûÒþä3<§ã§«íײ_O³!í_¡ïækù•ž}ñ<ýÄ%—\·ê”v®ÿÏD¨endstream endobj 505 0 obj << /Type /ObjStm /Length 1663 /Filter /FlateDecode /N 92 /First 808 >> stream xœX[oÛ6}߯àÛÚ‡ÈâŠYºšµh²îa؃j3 WÛ $¹Múëw(ÙŽ¨JQ¼œï|wʲT¤$²ÔDH< ¡Tài e–HZj°L)áX“”ÞÏsÂ5ÃSÁ±ŸJ"DxWýYI5QýÓ%(ž–hŒ%+‰ øŒ#±—1b!C2NhY€ XX‘hÅ ¥–c Á)` ÉT@KÎ!†ƒ&WÀá”P’3B%‡FÈ2à€&U;f©–adm ”ÙHr Û  ÔJÈ%ae`-Xi5…¤àhœ‚0&Ã’Ä Eá”&Lè°d“%´©‰”@V .%¦¥².AUYó‚Ó&,Ùô›l ˜ Z€%/¡¥Ä §< Á©‚P¯1˜X*¸Î ~ãÁa0ç8ØÇ6J¥0¾Všð ·T@VT5"Dj ëà7”øJ‚%70›Ô@¶%NÁ"ÜBmQi…Aà,A-8kC TµÅ ÅŸàÐ[â¤èC vDHaáÍ Â ØzÓ « —ro1dÝo²n‚Õ„±ØcloÒÙâ„ ³=ŸU%Ö¥ !lN² … !HÃŒA©~yó†,Þ‘WWnÙùz[”ÅÛ®©W»á•¾&‹+²¸¨¯kòöm¿ù4Aj}"‹?Ý}×çRÿö±jܶë‘û÷kß­y5Æ{@â™)Z|¾x¹ø|qzIêï®ùîÝ ;:«w{ÂÉâÜ7m×gq¿ð¾êßì˜-Ì›dû±qßûr1æþ”Ê Abþ¨ÇîK»W…Ášag[°$Ê"3ª˜ØÁ¬#*í£ô6-0Xïô²%ÕvE–õ¶íšÊlSu_º,±i"ƒ…¶=Òšw ;·%©+>c\7¤½[û­Nß¹mµö?ÝŠ¬ý7·ö·u½Ê)dfUP± /µWæ…›§aÍzÇ›jã×äf·í§ÛLd#­Ç!+çBöIœ$Ä<:âéÒ”)/>t·®!›zåÖY‚"J=yL=fÆ©'øX¡æô8$æ^1‰~4“tûäPWÜꤩ¶ßÈ\î1=fÆiÌì`ñ=—§¨£Ä|’-.›jûk Ô{rFª¶­—¾ ëƒFš¼ŒhŠ4ͽMDúY‘ƒÏoKªÄŠßšjµv'×®iÑ’Ñ‚GÑBÑÂõ8ZxäQÎ){ðØ^Ù™A¿äÊX¥±V!”8@~øî–t”§œDöÁQyvYš£-IðâÂm]ç—Ï?j‡ŒÌX5æãïã%žK2Åõmãмߒ•GÍ÷_vC½KslŽ•ˆ+\{à—]Ö9Q\ÕGv­kNêÆC.*xWßùe®è 
…1;†±°ã0–Q[—³Å{Rh²ŒÝòË 7HÄòY½Ù «æënƒýYó›ˆñÄ"n—SÌÚ“Ù$#Z¼÷(•í/ÍGÒ4ƒ}8؈OŒ8°™ÌM]/‹ßï«ÍÝÚå<-ãööx³”Q{Sbl7¥ç<=±âÀÀ÷ø–°š‚YøÞø»nÖ+rŽæ}îÛ[¿ýJÎnq,hxëï²¶ŒŠªšXVÆñ÷9ãíLjC‹Ëªñ]µ&mWu»–¬ª®ÊWQûS"M~0°ŒÛ_BÈÀ<µ$ÊŠ«Î5®îîÜl“V|Ž—Š;ÕsŸÙÓÙiȪ¡Zù&!Á†½r]å³—4¥2õJ—ã(ÖÑ%MÏ^Òd¢^¥9*VfCÂèú±f-a†œ½£Ž¦'ÖW:U®\T©ú‰$… è÷'_Âíþ¦nð­Bêòãg†fiƒÉt9‰Ë4ö!83«ÓxÐ(aÛ¶!×ú¯t¹µï¯(é`й楣æe¢æeÅ\0L,#t(q™Õ„ ÂiúÕãʸ›LçÊ›ŽZ—™xB«iMˆ0!žM2¢¸Ó8rïÿ;ö¸ #C3½Ãȱ¡-MSÝMÜ㦢WÉlúZkú"þjC}î¦:¤ï³&jlfò5mâ/µ4ò@1³–'J‹STÀÆ!?Q5ü² —7´ÇÙ¨ÃMíz ‰Þ®&®_yYý™õ¼ ¬ø#]¥Óôm9K8îjiäýÏjÏ”Ù#Åw¾]îÚ6ÿûžåsi?IªG´Æè}/ú²WéÜwäß'‡D±Ç<)£ÂC º½qña× ?¼àìÿ_^ Lendstream endobj 598 0 obj << /Type /ObjStm /Length 3127 /Filter /FlateDecode /N 96 /First 887 >> stream xœÝ[ms·þÞ_étŒÃËâ͓Ɍ$[µ§qÇ–œ&™Œ?ÐÔYb#‘.IÕv}ŸÅáøf²áñ¨‘§“ȇ;€‹Åî³ œKI(ᕚ,®Z˜p5‚œÆÕ —"®$ñÕ‰˜<®^h•+¡u2(D¡-q•„¦¸ 6½Ã;Z sÁ£ð¢×VíQK“0VóONò„‚†kx„‰”PˆÂ*⟒°x½QÂZãPÐÂ’C;ÆË`½±Â‡— ›œ 帖d4? ‚¬G&ò(¹zä=ª[%(ò«%‡.¬Nñ-Da18oI8‡ªÞ:á|~Ç ˜ c¶¡R‰þôý÷¢:üï3ñÝ|ðþ¶–æÏ¢ºÕ_'o'â‡Du:™^ÕSñ–'ÿ÷NTg¸ÑíÍ Q½ÕE=œã)4]FW²Ò8Öü(-/6/”¯_Þ¿ŸùX‹êÇÑøwQ½Íå“ñx2Ggëp†£y-¿ÔõÓ›Álþ Öî…Lel›Èˆ¢„Z·Àœ12a5}È"Iƒ…«C’A±æFÉ:c&,I7dõ¿îóÑd,SY„%“„iÓÞKõI.aV¡d!Êá ¬Áøª~ªS¤‚²Éa®l–›C"ƒ¹ä¤÷±P‘ä%Ö‚òQò8ÓõdrŘtLX…°Ã,®lV’̆K ›ñH49é`\[P>IËÖïP³û÷3´ÊjN½Ì•I$-|B[‹¥˜zHên8¼:®o³¸ö[‚»Í“  a@á|[{¬–ú0Å:6x@™ÀŒ‚"’Ï à;ƒ¬c?lç“ñ<Ã;ÓƒùSâ‚Ë`_ª” ³–¶ òÛ2ü×âx5]Ê 6X{CšJÙ±ÙooÐ$+eø¶gðß>‡o7MÏ,š×ÓÉð²æW¯Ÿc0õç9Æø >ÁF-™ ºä%s8r^‚'0©•L±Ó‘z##Ĺ€½ º²c˜ xp´ÒšZë$s7FI`¨ËZ/-–È–‡…Qá@X ç®U£ç”“ѳ§6Pc°«ú¡W/rm™rõËz¹”«s_ÄM™©ÒÒkœ‡>Ã\róU¥r¯ËÕ”«-W*W—¯ßgcmPºÃŠG@¥ l(ßAxdúÀšßLîúú| LØà;xÛÍØXÌb”¤J –!%y‡p˓䭟°ŽYgÅ[P FÚ =R¨ZIÞäzÔübú—Ð@è\Ý ) Ð/îßù¬ÅýKGÓÅ·Øâ;Ê(yµ¹ßa‹ï â;h›ï°ý}møÚå;–¹ò’(ï“&×€è{Íd#ež–òn7HvçͳE+ôÙ¢²J‚’ ³Bz}âsÒèÃój}¶>•yµ9SoófKÞªÒ`ÇŽ#ÌšuV2"Ä<{Ö#€ô|À~%vµŒ è# 2ED o*’€çræöÀ)ë#!§µDÀ/HihÍJ&T;I©+¤þ4‰7Åø †1.Ó$8““rF‡ÏIüYå.Æ— §b„©a*F˜Š¦b„Úb|]ãëôºñms¯l壴úwwí:2 KØ,v ä'ÐÁ§ú¢*'HÃ+>-Ã;C(ƒ †©áô6g¢I¤(!'k<ž>šœ´‚©ƒ¶ 8+B:ÖÒ[õI'ðA˜œk—H>æƒ0– yŸ‰á7—QØgSç0rçJ¢À;ãŠ)ýùÒþº= G°'qÝž´·»íÉq¦žTbpï1ŒI`Ñq( zÍü®I:pbü6CÞq×Ì·ª¸"x¿nÈíÎ]³³É=7!ª¿®D%·ä‹ëñÅå„ò<þ_f{eþÛ £e'³48&JeVöåÈó NÛr€Š"Çš P`7NÓÁÇFöлƒÃRßlÇŒßV}[ߥµ;3~+£ãº¡_×c1Ç^‡Å0ú-»Ø¶cT»U>ë†ÐîŒj9É gùôk~éùx8¹¯óÉØf”Ë[뙞6À !?þÿ]óüÕ`>eÈRéÆX.KïVS¨ÜÒßwu™šôŒkS/ïú4‹r#ójwd^½XɼîAá;ºÕM“¬(CG–½M†qþc–½ðŠûíÔípŠð~62UD¬¥Lfܬܖó‹‘áp*Jƒ tA)sýC5¹ÃÁðîi=Üò¦fŸüdÀ?ÀÌH:þê"Ô‘uT]ƒæã"ã3Ôħæ 6Ãé |Êû`lìø>rµ×N°AèÆ™©4¬ ýM ³ åÀ¬ATº[õ[6ß°¿ßZcüÏtèÑB¡ü¡x¶Ò%Ñ\ ¿ q‹ÇëHý·¹´nävRÿUç¾òx´éñÒºÃÓpv0aJ4É£N./­z<{|׿6]Èø}Zç[´•ßóIV&Åï2€g£j4ÈõËc¤,³¦ÒBæ/ÿiqŒ?™fkS™9 ´ý_ñ¯±¤öskÖ)^–ðûÿÙ•¿à–Ϲ †ÊœIrL€{Ë%ˆŽÿÍåàö—÷là¶ù¹æš¹óóænµ©ŸÛÏ¿;.©¶·æß´Z§A¡–c³عL®ùkËV-ïÛ÷øÞà/r•±æºÚHþÒo‚ 9þꎱ–V +ýÏ&Ìç°ç8¸ø)УnÔXáþù[;ÉŸ¹='‹±œX¼m2÷·G)Œ¥ž?O”üŸærph/ /[”ãd6ÌZ‹x§:||Q®oæy5ä¥ÎKê»ê¬º€=ç=}~÷‰æ—Ïo×3AkËû Vö–í–'¢,òóÑm j$ž4+yã€åK¸ÜÑðd|·ŸÀ¥W—óúîÕê’]9ZPýRðB ›ºÏ€FÕ?«ß÷ Q3"ƒÕõõˆhsDë‡6w (}@'Õiõ²z[ýT ªauUÕÕuuSÝ|ùxS1ÚÛê®W“êc5­fÕ¼º¯>UŸ«/{Nªgðgˆ0»_‹ÀmŠ`ãpÈ®Iµ{Êà¿’m¢endstream endobj 695 0 obj << /Type /ObjStm /Length 3011 /Filter /FlateDecode /N 96 /First 880 >> stream xœí[[s9~ß_¡G( µŽî¢¦¦ ’Í@-0,¤vf‡â¡cw’f›ñe`ö×ïwänÇNl°ÝÌE‘V«u9::—ïÉ>y¡„OA#<£ÐÉᙄµF…oZãI"ø„§1 ¬ŒHh”¤"W8A¤¸‡d,7Å6qMä ×$AÑ(H J! 
«ˆö®Ïäê0uq54¾3-–F: ·¤“…K$–²’/Maž|§Ûó˜1í=€4tJÍÛc̺Lzª6ÙàÕØ¥`ñßU¾é¤Îqu˜ß­%ŠjÂÌ{±(”¨÷C˜¤7›VN&’WE‡P9ù-?Ç9Ši`øf„"þ&âcoC¹x"–ôŽ’‘¼GYþ$wéÅéÛ|U)#ÎIü°MV£N{óm‚Ád!+@Co/ÏJ¬§®O¥¥~È2/Zj¨¨£AÙ>h†òçɯS—p¸|ÞdVdÒWˆAER›ñe'"^ qïKrS¨Ò ¾Î¹x%ú ÷:wk&Úc-s¹ Ô)½”c¾ìEܦñ I4?3q4åÑžc. Ú³–ùùÒšÁÕ| òÏBê_I.³£ÌÑ:¬h½ËʪG Óï,ÁúŽ"Zb r} h*4Ͻ|¼¿aηc°p áF—®>²K±ãþ.ýö5ÇF2Gà,ãMžšïØ+ÃϦD³YGœÛÚ1 T€oI‡úÌâO§ê>¦ŒW*OGÐGfÜèÝ|ç÷œ´X9Ÿ%&X1;vü§ÎÃ1™šq8‚}I)Ù¼]éH»{MÑ¿ú*$;± <ì;|ø”„—†é$8lœ½M0*ÉkKü~—é¸ ¸-6ÒÀl®ªÍÿ@’Þ1(H¹@|ã¶œ/€ú¨2F‚¥ö…u rB[Y<Ì£P|‡EÙltÆiaÉutüyï}éX”7îõ»²óï4,9ÄÓVÒÙ!¯3 J‚¬nUvž¬âËÈÒ=Ú:á|Ô8sd´ÏHàÇj/ßíQ_/Ú??Ãê^ôqF/{{~éÅÛ¨Õ±»ö,2Øòàâ~½q!ò&œé­Éc×éPW ­\ ¦‰Ë ÉûŽ#ºi±xÿv;/šó-bŸ0l} Ä»VË q›{F¼·;BâjÊÜ@ƒÏ)_664‰/7’‚ønS¥è¢Þ×|ОÚÂìñäC1ÝŽ®¨Ç.áî¸F²®Œ*É;$ågPÌ—Îäg9ž°›¨/©¼®±ñ$zC…Y3Í7¼_KFÛ•Doª¢xbmÌ”×ÉŸP¦³¥zVÎs¡™¦ðCPMÔÁeBÿ ©Z¤6±õÊWáj;$ƒ#Ft¸„Æò„»J+Y@Oï'ïÖÕ_ÿô]±#^Šñ4¥îvõÁ`9éL5rw%FóU UºxâA2Ëô"+¹‘„*‘žubb/œÊ>JÏj’Rêå N¯o£Îûyü^&Ij wéï#ʯÐâë±Wh2ó^fÌQG?ëctçÛ‚!ÞJÄâÜ{¸ÈÁer‘ï~Tf°8Î öjA´OZÞ=ìŠXTŸæÀ%µÆ˜~\éŸX¾€™Wòc4p®Råb«„ô¸‚œ‘t·Ž‰_[a,ܤÇÝ ëPÇ ­üäëìùR¸Ùn9W¥£&Ìóå”Äw¤J20›è‡uÝìa†RÉ‹´ß5Y¡PîÉ5B”Z `v ‰Wû½|ìˆâˆØé/Ô of¹‰\ä=MÝÊIm¡¡8¯¥çôÚ”±øs³¶Éšmªî,'ÝdBR¾÷Xàù¹Ï‡K¿[Êì<¿Q¸‰V5çÙ¿Cdó÷cÿ5jÊ©éìSïÞ(„¥0a”w§9Fá|öèçñ$ ðÑM‘ÍŒ Ù´µD°þÎ~xøÄ½Sìý£v§VÃQ/¯ZRÖ {gx¢'RК$çá}±œDMÃ50àT¹ßW¶ÜR¼~D\ "œÑ÷Ag QH4£Ö~zwÈY?Wµ«‰ßP¾Œ® ûNøb¼Ê,ùµÝMÍ~B>2u@þÞøêˆ …µk*¹:Æ&ap­ï’¬@ÞÉÅ2'U1ž‡‚QàÒ­u²¨c™Ïíšã7¦F 1mýZa‰ýVÒ¯mNÔ$ÇÛì[¬^#œ ÒEÔœEQÚmÜê ¿Uÿ’t5žÖ·Br¶QÍs½AkÎ[¿õcæžWYêØæìËpm½²²JNÝj÷xra“,›¸ù†j›žÅ¬¾Ar}e¹…íŠìç31HŸ¹¿&ñ߆uÊJù`MG©SV °É•©Hý¶SlëñZéÌýV Twly–Á©Ë˽ÿ2¥¨[u6ƒæ‚(3•Çcvì…‹Zc鲤å­;¬ ¼X_6™Î† z¹â"p`Zæu&d#×ÊR*ñ´M ÓЮK {ðiáx“ýeRËÎÿˆ!rlkš-üx&ìüA3ßZKÚ”ÉT¿dÇø7ù§ŠíB«ƒ±¹iUe‹ˆ G.M²4Ý/”l•ïCºnHGÜž?)2ì—æw3Y⟼"®ëðýRøCùá­{Cex2"2ëù¥‡ oþ Nõ¯`øm´ðv@±a~ñúM³¿ÎèÓÇÏT)-u²KabcMÎìc:®¬ÐÓ±§Êúh)cŠÍ…²ÀþÔÉíñØaY­:²Wi“†Z#|»ao›ÌÝùª²ñÞæ~@€T†Dì<Ñú:²ìÅV…‡G¨QJˆí„}`+Vw¢)€ÔOû«z*ˆ\(r[þndüÁi-ãvÕtAÊ:DaùjÜ2KÕã,¢ªùÑ·ë›o—¢³eýcŒGj€ >^Å63 a_ðŽ£º¢É·„ùØü×_\!Ëõ<šx¦±™ŠÛ,R%Ëñ$NG¹Á—n´Aàœ0§Ýá¾¼PŒ%™;Á}{…-'É@¬‘.Ÿ`Dâ·Ð‰K®Š’¼‘)é«îŠ s{ÅO\éW‹×k—ª¼ìÊ“Èa«zLï¥Ç† œ¿n¦àKt|»žHòùÔq~`±~¸ÙSQÄ­õ³Ø6-ɺ*¹Ñâš9J2,‹Ý?a:¼÷’‹8Oˆñí¶Ï”4k¡Ïqk>x×¢ò<›%Ð’(ºi%·òÔõ>©ëZí»o9<±ku™ôD¯Ó4q´—2G ª_îäQ}3(¸˜PÃü©ÐëÉ|*Õ0èãw%Ìо5ÓoÉ4â÷œä©6]eðb –Ý[K°˜5ø¼[úSÒæÒLéR´+Rõkï ›Êò‹lza?G xi¬›ÿñ å—$Rð‰+mWU>ÁL~§yúy¦IáGÏÀ¢Ó‹“ø"RScN/¨üü®qKyíZ›t !åHJߨÈÊ™ôFcÇ&ϰép …u_µ8Ùîb!(àW웜£T„ Á¢á”òd”œ jÛV§ùž7¹C<Þl84g7ÞìwWñ(.ÒnS|æá:שãL™ìޝ>rØmH+Þ\ú ·èoãØâ«½z×®™×®®ŸïÄÆ¶gÎ…ÅSÔßK ´D°#ûÎÑ GŸ×ìÈW;Øï KT/ª+y˜Û£Ø?°¹^^#“°ÎOqƒmaâûÊœw¥‚ð'VUæ.}‡Å.ã“W§ÍjÒ…¹Gþ‰ª'Ëœ Š1Iœ›»}*_8ÁÓG¸†ŒIb·Ý}ö()!×òÀ aJ”>Òyq;æñ¶å«“çüÙð¼Æì¨Ÿ«åñeÖñufVEíjïNñ©Júfž»‘œ•k5ØU ‚äÝ:=eHÅ %-eä±I¦‡° · Ò/{¾ÊÂ}:q.÷Dqûi°W Ðàä=ÒdðÂ~.ÖsŠ˜-â’šg6g  £êDÓÝb }g¾`Þýä¾Dí&\ û6Êœ—!¨Ý°a ]<× uy«5qZ–|Mquè»A Ù·(ì7rÅdÁÑÏè÷ jñ¹&Ú?|øÛ¢¾%’ùDÑé0ë3Üù /Mè(’(ú.„ ±liã„ð<†Ñ¥ïZU_ÔÞöµÙ…µfJ µÉ-dûÈ^{__ïXÜb¦Á©Ë}·Õ¯¥1µ Rð™s¸Î%Pé® \t¸kX´]ºO7ú‹ïºdÍR[wH°„û" ­]mÑà+CÐ47o+ô«~ùÂÒ®¤¼vá3Ýh¦a”êEåjÀóެyÙÁðèú:ô,÷½ÿòZ2aùzÝOH¼D²”Lhu5&jò$À ߆/»CQp*çÆÐš=xœóšª1hÀŽXÒÄ7¥I>E§Aó¬g÷ó;¼üÆm¬GA «¤Š|f,Î ‚ÑK;-»« 嫿¾ûÆ_øL’@ø«D ¸¤±ÞØ.çÉ0jÍÝúíö pa} [So”&ÆDÏ|·¶`Þ¶Ò-û(¹†q½Ïë<ìÞ ˆ_JàÅG8-,¿Qå#'Pe±&º‹ífÓ©Ê=G—¤ù^¤Æåuǽ³iTãÏÌ ÐùÍKº‡¿UKì*.ÿVŸ[sh%Ü@Tô\z‚ÔÌ•÷食¤Ôì ™˜Ý¦üë6µdê/í6¨}zù¶ö†¦Û«tKìžÙˆãJÝ„ö»~+ß…É‹a¹Èü‡ªXO›råxtج4“Ü¥ùV¬Ù¤àÓ‹ª "…ò ún»k/·g3µ9ƒú¨):©D”å!Xç¬#½êb¿ÌÍÎJP³ñ{㊇m-OÅ~$qï·Õ˸ú_Ü’Z•Úžï/~!¶ªË¢GÖ±>%nÂ_TS81k>-òƒÆ=¿ÂE†ÿ0À" ÆÝGÒ¯Mi$`ù19æƒû&–Ž˜†F\uÙJ>Ù þ£9¯W5ëlEâÈ'=´YQî§±èìksʵËÔÝFŒÏxù¯#6¯©&[¼Ì¤(§Gu•ªm‡±¾Ö™u Èr|>oª`Š›µæq}&匰Ÿ_ŽÄâÎýZ ¼2gcZtj«]‡† 'Ã8ˆ^7ªÅ ÂZduç»Ùàñ¿o"¤1ÎÊw¡ëËŠæN[– `/:S‰—^qðÜtƒ½Ó`o*ú¢ßc^計|ë}©Ùwùš¸žª¬¦m0-H«`xó _¡CfëP*¤ÙEÕUÊ…Úˆ˜ ì³w+¶ó…ĹЌì±7§ƒ;à¤ÞMáX•^NÌ»wXecÉœ”úì!ýCoþfö¹"'ïX¯{Wt,²\ty‰Ð_¬QTø3§#ð2óu²Ø¶ù•α¬=ɉ‰Áò,”¥ýµr ,Ûö¨m° iPñÿ M*Šâóoz[#IÕïŠ}€±üQ*—îêJ$^¾œþÂj^zà‡ÂÛ„†b&φÛuaåa ÃGšFS@Ä{ÂåÐC3¬÷`w ÙùSœ¥° íœ]†>É–ß6ŸÒïn˜s¬ƒ´¶Æá蜰]eƒt ^(ÇÇ÷©ZЃô)M¹mm@?—*#Oo”ÈÛͼqðb&FàŽÖï°½§fÓ¥Í;)‰]y@¬Uáêô|{0óå¹1oŠ‹ 
E;wŽÈS†ÝÅ‹öt/^[:€ B¾4ÏÓÇð¬÷– NO¶Lsµ[òY¢O(³rmp~ðÇ;;G›ò~‡k·Yb§lâÚ'¶D~Ý¡lœ³…æ„|É*-éé_f÷ŠÈ.&Eü”Ç÷Ïï%nOðXg,c¢[~o‰_ð67@ÔIQNÉ4Ðt«f®I× üH…ªèMs*åsµÓëü GoÅ÷kê^{¹ Ê‘‹Žtn¾vyvUpr¯´ѧ!hÀ¬ò ‰µš÷G$¾ûžswùãI–ÌÂ,)ùugB§5qHf)/×nÊJÊita'::ãî«É(µàÜêxÓK)‡*wù@ZŽï‹­ó^‘TOZêü]«¿ ?°s'–cjƒ°‚ôäŠHËo™y>¦EOçÁ£VoðåÆÑÉ14|*8÷Ók´Í/?ËÕ/ $ÀÙx½S&áPç“ÎGµ©C’àîhÐi›”"zä<£³ ²’í³?|×ôÛ™äk9ÑR(EfFÜ5Æš%¶&ð– ùÐ#ú³(¡´aC-`RˆÕùzzÇ ‚>œ«ÍŒ]ÙAŒ¹ZoƒI¼5Nî)î“!“}’s®€Œ…Øù )ä7v·pgsƒÒÞ¦BÔ;A98a¥ë_­#GÛÔ§µ˜ñ ð$m˨­Óž0î óÖК;½MØÔ#­5pb1»Õb@†d9ßÛ‰— Þ/ân.ž£3 µ‰¦>_éÐ¥¬‰,6â>Ì‚m;ŸIN~M5×cV±Ž€ØìÔj„]”‡ ŠLá>Nj¼ˆ²¾”´+§å1“$ýé\íÑqî>¾ö’VF(ý.[׺íZæ]º÷ëÔ=N<Õ±˜|õs–|!Jk~2Î/Dッ/EçQ«—¾ø«I–6Q*¹©Q”Ù¢‘lC^÷QEˆ!ßHä8½piÕM'óÅxáÏů>Y¨|×)ÎËôWTÚi¸x4l`2ÿ}Û ¯Å‘»pá>L„×a‰„MÏ «ûø2qú¦bÊ­Ö0ç4SÏ·ƒu/Ó–Ž ÅH)ó{çÙ“{!ÂXä0qJü"@ÞvnàTÙ…æÅ­Ò„8Hâr°|ò…Žz¶¤œ ±0Ž}zܸ‚üè²Ø©Ð4A©l2ímü›Olò?ÁÇÌû_C.t‘] SB7¦$£4MKÞ cãoדº·r½™çg-'"ŸvH9¾±Σ'g¥—a{¯kuЙt¿÷ÝæÌ\cãôŽé{z#ùJy`ÊÁê{ˆÛǦ}çó{!xö†sÔrtEvdÄ=É©!ñœ¬ü ex0GîÒ°sUx®‰d*²U^'P{ôÚjAáç·Ò’6èõâ{I_KnÚw¹€m Ò–»wv6[xé~vJïQ…Œn*ÉP©nœ¥/…„‹öš1°#Ǿo4ã¢Ý½C\í0#Žþ­MI'"i€é퀵Ū޿|åÑŒ±<±c1Èšð\öº¥G!B€&­x^™ÜòJœÝõ&kjt­ˆïÄß0™Èý(ÂYÕº§iº4e-ŵ}\®ÒÏËOlL@Ñ÷™3eòÔ€Bãs‡”+rˆ÷hZjfF¯Vô½Çxró-«Õ‡¦’ ¥Î1AËô ×…ëô@)^ü<¬‡½d]:ø˜3^üt{R@W ­éõáokß\Òžo¦½FĨi>Á7’”¦_1Q“P§UÑ4Ÿq$è[bÓM£(Tq“±ØÄ{2Ogé´|¨ÂR~e­àS%Ží;ôÑ>£‹hÊ%$c žßày6û¶IMóì¥Î.>/]º÷"Z¾w U B!z\T2 ¡ð9 ™çßæÎr–À¬eVÛq½2Òâ‘Iû/v§†­V¤éŒK¿œp£e3¹ÎV:Û÷ÌŠ›ÙîXë$—ws‡I|Éh ¾^¦Á_¿c`4ÒŽJOðØî¢³ªÝFOâ?ÔxogOÆöf²?Z¾ž»Ÿ’ûFCòMQ90¤¹õ‘œÚ•Ä|È$ƒmþç!ðu¾Ä°÷Çc }*,–4kr>Ùþ̈‚’°Ì— \·¶Î²ùnWkHËÅâÌEM/OÑFÖ—[ÒªˆØ ºÆúÁÈ~d[½¬Ná©fŽG>ÅJËiëz~Ö™DÞRFþ!?¤C®ïlüCÖû¡_Árþß÷§z±Ó2v—ÝyÆ4Õ«UŒª,ÜÕÛÁè/®]±)ébÇ Ç·¿rN Áû³K2÷mŸ H¹ww²sý»ÆS/œ[›“dpDC‡çØtZd RV³{<Øo¯ë_å‚)]–‚yBì¨]ñ—£ÔAr’£°AWbÔ#>ÊrûôaɆüÁõԿߤf­Á•Ïp=:^Tj.¥&^S½Ì=-Ù çJõFÅù… bÒ¡ I l’H8ùÛ>¬ˆÔ.$pˆÖç kïÕ“áAÊ n”7\‘Ò>`èN.÷Y€ÃÓ"è ~ˆ„ºF%súÅvâ09a²öý„3 .&’§?áf‘¤,¹ãW£¶FЂɫ|ëLBÄW›¦qZöY3´Ze´™´ˆ%ì˜WÊ]}™ãî|M–•´¼Ï©«MjiÑ Jæá©£«À¿nâ-QÚZ Ÿû1¥hè—x úš|êߡɰ %=ÆøW†FÛRåÜQã‰Ù“Øv( ²~KÇ̈›îñÙ¯,~éŠtK)òÒ­ÛØK¼7àåì,Gdú>×À¤Ö œ>»ÿ €Æ)¸³½¢úÎõÃüa$g~yùæ‡ 7—§?KãÚV[ýXºU´iú{Râ^£¦Tœ3¡S"Y˜'•ÔÞïÚëå«îëZ°" ˆñ‹Éwˆ°¯‡rôdz4ODVÛ'/F¼ÛÜie«$;œrÐíó ²K( íñÛ%kEGÛ¦#Ðä9)¨›ï1{ÍŸU-v¿›}C³b«_ïrxÛ3­îHSØs“8@ôëÒ¬ýiµÆ„Q-U²±÷ Aq1"½Š#“¥¿U¶Çy «l°TF°^3Táïfå×úYË6ñ™d:Û %¢ f+Õø+úY Ej!ãÒ«V]-Úø[£ôËUb‰‚ˆ¾)–“4ù?s}çb’Eú`™a oÏ9ˆoKþ,•8¤b‘*mz‡ê%Ÿ@T.ìÓ\b s0t³CƹÏD¥Ä´­Ü:¿¶dßsHKàËñc·™-4¿Ñ•Vª jôkÀÐퟬ‡ß}“¸!ôZÊó·±EÙ½xÞr­aª{@cžx2<ÑàøÄ¦w˽ n rC0ZP¸»MME¹%ze!‘Þ•Fû´@#,€Ý«|# z ˜þÙðøÓ›Y9Ù2§é¹7†|)0ð­eYÄ\j´t* Ù¹ÇD¾f$½ŸuЉO$W‡Æ“á ±oà5îT*ciBù¸œj| …/+ZO?Øtuê÷ýzc'¶°®â~)¢Žä•—Ú£¦(ìŽÃe¹àY3,´î?eĵ{@e:}`ƒr1#Î\› I{"¨ªÉêqfô:á`š± yíûÑOÎ¥Åæ \\ºG]ÏkÉJÛ¯Ê0g<õ¾. 
/Ô£+›Á_š^N£ÄÒ¬ ¯ðÊñ×ΑPrï‹õFμEãæ\ÝI:r°á7Ò0V›B$¤"d+‡¿¤Y=Ž3ɉªm‘v ¦¢³“‡$àLZlaee®ŠjUÝ´Qtgç³°rpøŠËK͉ªØžšú©IƒÝn·êZûçcT:¦ŒùÉtÁÉïîàZ ¼‘Ì•Ò0ýß0…2‘è÷5¶LtÙmWGeúˆM½üè€,mŽQꚉwZ¡W-‚\Ìr¤9¡:šÙlÕ÷f÷ŸÍëGa¨pD£ú¤œ4ØFÓµ[#Gy;0ïÃ1±&ÚîOòâc§×uS8áÄ'k Ç8RÈ·]á­n|€#°g×q§ž0gäÜbFçýœ7|ñ-ÓÕ…–m]ÎDøÅB”|í=í`‡ÙbÎ…(¬è!Wù¸|­¯\ÃÁúWVNxÙVÿQ¯ Lô]:•/Ê\¯w¶›¨JœÜÈdئwY-£Œ¢jG'>òÔ’œ.OS‹j£xÝæ‰ºl-ì° U ÍÉÛO×xy¿MR¬…xtO`ŽMÄFȾ|eê7òÿålœ<=æ¹5e„äðûŽ˜pæ¢=À×E^Y“争“œ“2>.Ý¡z%Ð6>áó›ÄqO;<”÷0ƒÿ˜+»CÃßßü7?QU^jÕcÛ½G‘mœUÕB˜ðXfgŸ[Ë> t!ŽQs¤!$IúŠüÉ··›™"{öåÉY`àtfP{ýc”Ïbß1®k¸ŽLæ=bÖ—Ù~Ïl>½e6ˆkœ0O¢ÀV€.ÄÕ D†.Œ´9é¿ÙsiÖÛfpq.'ƒ±¶'±€°Lmr´£õÊ ¿ÀÛi\~¿=µíÍûZùC…eö3ažŠ˜FG€aˆÈŒ [€ÔÅÞpáµ~‚¶ª™Ñ%J^Pj¢¶ÎÝšéÏ×.çË ëØù•¡*ý_?Ù)™ Z]õnRø9± ’Rνd›uÔ3{C(øü˜¯¥¸ Dž$>l*¹µ’þÅ×¢.^•Ö+:×$S/^/ÇO˜ëò3#÷äK#ƒú@/þ°ÁO6¬alµ€+ŒÀ‰#‰|= ŽíQî‰ÆoŒ \þØ‹Ã&›DèÂ.ÆøÁÇ‘sжáíIÙs>…KîÁPZÀÅb9¨Já2¸›aðý(Ù`™i:ÓƒþºÞ ßQ½l1‡òÈÙ/®Œl×r™Ie^„yJƒ‚B†óÃoîÉ“ŸBßÊYxÌy6¿´¨õ~åÙž¶à9š^ï”6oþÕ}â<.³d)„àÚæb]Ìô°Í]Oh¸JÞ±þ^¨ÂTÙ™ìßD2,Æ{?ß…ˆÅ1îrî’â>4;±næOZ¥Ñ|N” vƒyÉVþ²CS|Au—¤ÚšÙ9+¨°Ìc£ èþ6êF'åˆÃùòx)…¢Í‰³÷7RKÄ/ƒŠýÖ±’̸òïB{—o¶gé~>W „;á¢W£U¹%w–H·›Eêj«óü /ÒãuÝ–WÝ Á²ú^à±àSKÿì>%ÈÉ6†è™4öˆï¨î"ÃXëUQ¦ ŒÙS0uæµÖ<ØeìJ±£·ÙAô>IøÜmV ò¶Èp¯,Ù{@Á¥§GÚfÑ&8rðRd„ËêÌ%@{¨ÜeD ]£€’hf×ø„¾:ïqR}G0-"}ÍrOr`,݈]¸ˆ‰Ó½=a$X)‹3S@`Ñ~fÇ7Û{X4U)Ÿµ—žr¨÷š3ôÕáa‚>SÏ»!ÿVÏ•}« A›¾`°1>z·$;áž…akˆ™òÅþ¬X.h•F_ô³ÀQi`Òâmlˆz+wý÷8\e3çŠqë…T b_­T¿ï­zuƒ}™0< â‘ǃs%çœj]åâtVð4YœMƒ…¢l9CJsis&µò‚N4b´u/'vJIÆV¸ÉžÒ2j÷açÝ ‰ 7m¬Ù¨CŽwBþ¼G(ÝÁ´"MÿfóadÕã)‰ÚíÁ¯—V¥cAj÷¹}"Ë·Øðžë'UC½õÃSr°CÔp3þΣ®?rW>™) йÄaæk¢ëجp²H©þ?ó‹ÿÃendstream endobj 816 0 obj << /Filter /FlateDecode /Length1 1236 /Length2 3835 /Length3 0 /Length 4554 >> stream xÚ­”y<Ôß÷Ç+»[Ä(Æ–elcŸBÖ±Kö1Y†f2fÆÌØk¶ŠP”˜$Y‹”}‰ eJ¶ì„ì%’~SŸ%}¾ÿþïÞ÷<Ï}×=÷<®„ˆ•œ.âqd9°<øPßÂЬË+*²KHèQ23@Q§€`D hê‡*)•À§””N©h°Kõñ„ "Æ MJéKÿLRêú ˆh £Q>t hƒ÷À ÈAò@],hýs h"¡ˆþ(¤<; Db<È@w”Ç®ðÓ’ ÎTÿ+Œô#üƒüQDÝPê§Ki Ý#Ñ(OvK<½ŠnåÿÃÕÅ¡~X¬%Âç§ü¯>ýGø`°Agà}~dhG¢ˆ¸ÿ¦: þ2gBbü|þKMÈ,ÆCç…Eÿ aHPL i…!{ ž, õ+ŽÂ!ÿk‚Þ¹_Ì `fVÆ ¿ïô´B`pdÛ Â¿²?³­Á¿×ôþ1@¸"½¿`z"ýûçÏù?Å qx$çTRU"ˆDD;}zè+U  Äà¨@ *îXA‡'Ó·éM zâ‰ì?oTE¨àNDx °(O²ÆËë'þ‹€ÿ&¿.ñÑ* ä õo ¬¨øoDþV*Dîoý²ÒŸà7kì!î{‰šÊ^ò»ˆÚÞ"ôóþ{„½JdÂï êÇÿ9ÚoWÊÿ!{lAö¢?}©þöû£Ð^gˆí±¦ññH?2C"`A¿ý4$?Ÿ_Âÿ@Å=^iÏ ÐíOÐ甄ù5Ï?ÉÿN¬ž>"§¤”S‚¨ÑGHEU¨®®òG¦‡‘nšüëA  þ³öÄÐeQ¨@”ûû·xÓQR«cŠB ©½÷™döëyÕ$YV6¾yz0ràú~ìÝ.3_™Ñ‡ŽÛÅt\˰š&ˆnF.p<©îßXTÝöõhwQ ìN[öJûBž•”í,˰Ⱦ&¸USf”šº#4ký𤲈5gšoÑe•ؑΔŒ#=‘¡bYkAC«øz?îm*ëíäÝóŸk»Ú Ÿ=êõŸæúQ.3.À9Ê‚ŸHé .i[w»&”'8Rx#l] ó½¢øCg‚™ÎlÚÚ¹EV¯….ƒF{‡•áëÚEÆ‚õqe &v·ÞpëJç°Oc©ChÒ¦^šëí²0©•UÅ&åª(Ñgý1üKsÖµYÎ Óôu£± ˆùŒeÉ#ÜFKëùg0øs$¯Ov<¨˜ÃÄÓòæ‰`€Ö±+­a LÚÜÆí#U¡K]cA¦EY¤5ú*¹ðna)[mQ ¦˜KÓ¼ˆýIÁÄJð%_ÃQ0×T9PMQÛDÌo4‹=17÷f–”¶ÒíEƒsM¸˜"ý–ë`—ͨàxøsû¨µˆ1˜`cßÃ;0ž°õýˆ}µ%Ð2œq=›ÐåƒXt¤ÕÇR+ò’±±äñ©’–kñõ‰Z•R<#´JÝò ÏýR€: ÓÔ¸4ãS;­Ž½“õ3ß#„²U ¥]ôö¨jìäŠÄ£rÑ¢ö„wY#Ó%>¥tºr¬n…µk‹P]1Ͷ¤=óôoq? ¸O½ùr™Ýdìè ejí›nšIjú«÷%¬‘æoص/ݼœð­™™}ä]]«ÓÒÒËî%`ÌsNò5•`¨f®‡Ï-Y~È5eˆE³‰Ðj´W,Kà Sƒï•’úG G<©µÇbøÌUÃGN²u¹ÕWsüŒœöÅ÷„¥8àdMx]JŠ<µµzí•ׂ³vaè¬ÊëgoŶH3Q¯• Éjžô±èkê”VÖjY×õ•¥%^Y€ ‹Ï~.\—­bíM”b€¯Øð–j–.ܺåý²?føá ëðâÂ3ŽHß&é«ß¾çÓÒãÜxÚrV¦¤¨Ì²*³¤¯Ÿ~jD÷§›IOXgµ]Õ¾KEíwß…‹79ub’¨˜iy£Õ4¦MuÄqI¥h·q*`EŽ+~èùv¯Ö ‹dŽòjûIÕÜÓºã]q"¥âOçn ?³-«¶ˆVÅe'ŒÇeû²O$J¿æ¹hêÐx9`»Täݵ1jnb…A•çÍ4ŰK;à×+cÌÍa@…Þ.š mJÕÇœvl­ÎÇhCu2? 
IJ\ó žÜNïlH6»/8‰IU-g«b\ýΖ°8+lùFìâvî3ÈêH›, áæ$µaCGäDc°-g¾Þ7B]7ÅMAëÞž…6è%6Ãì44ê(üi_97wݨ ÍQ5lƒ¹º ¹"_ß1û°U¦ÍUb=R= Q'ºK.ÂÛ`d ÷ZËìwtˆç MÅ‹g’ êpµ]ŒÔ­ÑJd{qn€m—¿Ÿù~‰ÃGþÕñ#gïôÜÒ]ÌŽt×Bº{Yï(™¢ï…+¢Ö.Î…x?ÊÏf±pïÖË&[Y+[ÞÏJ/z«£Å3÷ߟ2t`Ðþ°õñãó4‹Ù¤ @ÔB³¸ í³å{Fõ§†¡Š÷^ªYkõclºDyvëvÍŠ÷]ƒ+.÷7ïX=Þäj˜0m¿kºOs"`sæ,Y.:à½íä[¹èdY\"±8•?ìvþBû´ò @dz5}öõ;©ùìï;—îÄ„f$[’TAÁ/$ûXeI7¿ü˜— ºó < B0ÇnB `wqÇGE&qÅz¼=ˆ&eøáâ‘;Å,™hõæ3¬km–ž3JóüøxÜ]µ™í#/Èqàl2må¸þ©»Ý§:æ#Jcß½fý1‚©Rc¾h+&që‘q{’BDÀ«—¦§ýdÐËïc˜œn–C•±ÚƸ/ïw]ú_Uyh-øó¹¸vÑ /üt™yŶœ*Nö©ÈŠÌWû3ù -ûÞ*;ÿ¨Vît$7ƒÐºê¹øne=%”¸ç½‘uþÀÁx.òðÏ'L’v/˃æû*ßç{>:o÷ÔïI†ÍIŠÍ¦<ä¢êuR3Y§²Ãš±op«Ã}Øé9ÍÉ“ 4ãºä…4hŠGÚ6Yøj–ܰ|]§G[ù6 øÈF¤‡BÛ×põá©”ñØx®GGÖßÄBÅ쨳:?ªù4Xn#ól©sµ›\¹Ã~+‹ä²Ni;%—Èï·«à¯ÀÖû*FhT€ù{T%ËðÊU¶™@îæì·›Tëïåó¥,A´ïn#AÕÆl®Å=ÛZínÝÁŠ~.øfñï ñOׂ»¦i쀔2¼ Ã64ëž6®zjè9 ˜{®+Pð  ‚tŒz?B"×^hR±¶u2X[·ºŽXõ%¶×§=¨Ó&çèå‘=ð„[ÿ¶átŽ6Í1øKJ7*íã£mÝGDé3Q‡ù¡LÜF¢¿Ö=Òdò%ø´B%ï(v6­ÍzÚBpªMD¨b°Låû"åIVäiÕÔ„"™²;Âl9Ÿ¦ûÂëè¨bhO}†ÁiŸ ;¦O‰¥¢-Âr‰%éˆó;Â÷8(ZhyËñ‰yì`à=A°Ö.úŒ+v]Û74(@5>ä±S‚‹pøŽmq{×öR8á뉛|½dÉЧì4P»Šx}F9ê³”ÙÃ…Õr&G‡ø6½¾LŽßö|Qä4eÐ?šdömÌâ>Ñ0ù ?pé37 )0ÅŠ¥áÌkª …+w¿Üšª¾ )²×àL¤27T‡j¨l*Œ0Õ•Ê^_vƒ?±Z‹uXÏŒ›yãÒyÈ»jÖi cø €»¾Ÿ¥è{F q [Ý­šk=;‰ßæz»¥;6ùÆXRPÌs6~µnf7sùÛŽk-TÝH®Òu>~D;:³(º§¾(9µZ<ò ¶S?†lëÏ÷6Yç°EÝô±§:¨§jŒv+73‘ìòä²>Þ?˜þÆ3H5K°åˆé«))–ã³KRd+ÜçUKù,#.Ë­)îÁ$žD´™÷ê[Î{úF8•;Ä2¢kiÉÙ¶è#Çܯ”ûm¥;í ÅU>ö jZÊII¶]~•3¡³–Ÿvb`Ê4z­ŠïÕmD&Y‰”kN¥Ùœ!ìØ8MO›Ê‰?ÍÊR1ÿ¬Ï…Ú—‚°Í<4ïóA—uFÉÂQ©c,¤·­5#RŸ–Ñ8]¡˜y?råŠ(­wçÆ»ò>ˆ!IªE4Eà’]ú-âùñžû,E|ÅU›¹ þ¦ A/$dû®5uT8 ÉcÁ´;ÞÆg|«åû÷GÁ³*¦Ü>Æ*ù„“Ï^£(LëÇa»ÎÎ ó‡ð–m:ò£¯©/y9².ЂNÅE šîØäйmíäÎh±ÃOZ¿ (½Ûl:èü5F¶7œz½·¼“ø2Ð^ïôk¿×߬cor«ç¤l0T¬él “ÂÔÉžSç,Oz×suX´uÊ¢½¯àÞêتá{«v×sÍcØä¤Ñ”5Æní¼øVǦ M—#¾„õ*¬áÝ{çœ{ÏO,Æ‘üƒ ¦:‹²Ö‡šúïÑʦmÔ2$x‹ùˆÉ:d ÆG¯Í…â÷õ»;X¾¿§~èÿ`»endstream endobj 817 0 obj << /Filter /FlateDecode /Length1 751 /Length2 1212 /Length3 0 /Length 1744 >> stream xÚ­R{<”i^›ùÒÈG|k·PODdy1ãÔiœ6+ QH›1óÎÌËÌûÎ 3NYt”-§%ÆOr(tZÛRR”¤MdZÕ°hÛbWdVìKÛ~¿Ïþûýžžë¾®ç~®çznãô=4 »a¨Ä"AÀÙÓ— ™hlì,‚C]Ø@öö Ep€@Tk{²Ñ8c™áp%ÀÔÙlQd h|X„0(ðdH¸0ïÁdðÀŒ‰À Ðx<à»xB |a1,Š„Y$"”€P˜ƒ DËE?î(¶ïʬÁ{*‰qSÀ7ip‹, åÉ f-½0ü.wòÿ0µ¼¹[çÅà/¶Ç3úËà#<Ù_<ÆDH`ðÄX°].Ý¿³æ ³þrÖ]Âà!LÊáÁ€ü®„ˆÝ)Ì¢#&°<1¼T‡QÖrxlK,ü>Úëo¾ô›K ?™à曆Ú% ýãÙˆ)ØO&‘É.Ä×ûÝeW¹¢LŒ… ø8P¨€!1dD|.pD1@P,°÷kIB1 ~à‘Ä6&".~¦=°ÄPx±¸„)d`)‰Â–ð?ßää„Ic,ðÁ³°Â…do l)ä¸ÿ2#D"•, žÌ{ÌFðaX 3‰=ÝÓ19,»êHY¼kQ[9a‹Š§ú´×·õ·Ô“~LSá•>ôné­ PVäêü{ˆ0´!jNO|¢6Ö§ÅmüKá霮ù¡È¡Üèz¿åzÊ~Î$iÞ¬êœú…¢âÝÞÛ\–Xܘ?q¬nêâ7¼òùÆîì¼v'/™jà–Ëó?z¸ÒÄz£¯Vްì˜ÍQE”VfºªâqR|ØQµëæÏ&±Úme‘ÚÙŒùà©—í;÷ëÊ»¨ÐÖ®'Ün>ËH9¹ËÜfÃÉ¢¯ònÔÌ)/É-w´YW™œÎA;·É3T×ëFé]û×Å^Ë'%>ï<Ü\·naö Ó¦Çñ­/‚ëª4ÅžÕ·¡KÛS·7¿Qm¹;O7Vj5Ž×ß{ßüJGÑwÊæ…ê”ð¾‚älm É«×+Ííú^a›>a5MÝ-HÛåÓzf²ü ¹.ÍW’ϯáç¨m0Òð»´‹'¹wnßœÀáqZâÚ¶’ì’鄪½ϼô¦.­dUX¿Êë{6µ,)»}‹$ƒTm&÷øâåžæ­ åL9ÓòŒ“§©÷~ ø<š7ßœ|;åµë‡ceW:ódF1ò½^K~©•r½q ¬xèØøÀÙòoùZÔôäU·Î*ÌŠÖˆUqÙû é­¬o8izúîêÓšµÝvT>xQl<þñ/5óñˆ~"Ô¤¢¯©Ïebendstream endobj 818 0 obj << /Filter /FlateDecode /Length1 1984 /Length2 12134 /Length3 0 /Length 13232 >> stream xÚ­µUXœÝ²5Š[°àèww×àN @ðÆ¡qw×àîîÁ%¸»»‡àîzú[kýßž‡‹f”ŒU5ç|É?*ªÐ  € kzff€¨œŠ¬ª*3€™ ‰œ\Ô¨ï`²Ówò˜¹¹™ÂŽ&°Ÿƒ‡™‡‰ ²qµ331uP‰RÿÄ ¶Ú™ê[äôLV`C}K€ ÈÐ èàʶ´(ÿ“aPÚ휀F HÌÌ#3C€ÐÄ̉ñMRÖÆ ç¿ÍFŽ6ÿërÚÙƒE¨À"©`‰F kKW€Ð‰Q®+ùÿCÔÿ%—p´´”×·ú‡þæôÿDè[™YºþO ÈÊÆÑhí¬ÿo¨:ðßòä€FfŽVÿ×+å oif(lmb Ðs3°qüÛlf/aæ4R4s048Ø9ÿeZý_àéýK£¨¸ˆø×/´ÿYì¿ÜŠúfÖª®6@ÓŸøaæ?<&;3€38ü÷¿ÿ}û?åÄ­ AFfÖà“ÁÎз³ÓwE0b¸3̬€.  
X2#ƒ5ÈœOÆ` ²Cúg¯ìFáLÿFF‘?ˆÜÃÄ `ÿâd0JüA¬F©?ˆ À(ý9eÿ 0§ÜÄ`”ÿƒÀþƒ¸Àÿ £ò®§òë©þAàŽÔþ 0ç×ÿ n0§æÖ¢ÿÄ̪oohfæ`fiüÓ7˜]ßáOX²ÁÄ Fvú†@K ñ_Q¬ÿ±ÿû*üÇÖcøÄ&3Y‚/ÔÿZØØþ±XYý¥‹ ,Úè?¬Òdi©o÷WXÅÁàsÀü¯¢ÿømÁWî?,`}àSa©oõ xÄÆXÀÆfNÑþã9þ]bò§ØoòÏ£ü;ÜŽéŸæÀ«1uµ1Zÿ¶™ýÁâÍÿ‚àYüÁóúÓx0–@{û¿üàéþÕøÎ1þ)Åæ²6³þÓ+¸´µ£•Á?ošÉ_’ÀÏ!#èh0'è¯,ffp£6Üà6úv@ëÿÚ?óÿXÿ{û¬` 6@;C õ_¡ÿ²™þì˜ ®¤jÏ“ 9È ƒ£lúuCTLþ”˜I_~tÒq«E ¸Š8Q>ˆ¬›¸ÐŸÞ£E¡ Gdp‹¤ƒpUi~bú2¯çêc”`MîÉÜÅŽøä›«¸´ï8à&3*uðÄWs3[W¨ÀN"7ï˺¶`†’¸‹!š.w¨+a{ƒÍ:±KªRÔ壎¯0ÎþFIUŶß öx%¬G8»¢¡1”€þ³/¹)ñÚ§¬¸éü„U§¾C¿ÙÔöQ÷°>×üHÛƒ¡ >eü§ÿEæY1$¢ ‚yÅdÓ» 4º[BÌèÇ/ë9Ù DkE&Åy^ùŸ™\øôàáô,õäË—¤y }'' ðægŽöà“xL¾íÅi¨é]3´Q~Ü!…ˆD0wÊy«¥õ^”!2®Î¿Æö»þšêwÏAÏ… |8:ÄùÖ{UÖ±myƒi²l‘wëêW¦7ïÛ?„hžxkX[þÉÂi#Õ<TLV}àôŠû¼? «ÝT$J]82pĶl/Lœºp ô û²a Í«ðê9&·Xe>‰)4:ÑÊE‡7,N,b9é÷Ñç*ø™²g½žV€Œ‘yêÈuj;õƒÂ¿¥H…ŒQï©{öî=šƒf½WŽ®ÜŽœ_¹PX±•ÌÍ÷“§~)÷VoHØä,AßÞΟ½öVê/¯¤ú®­¿;é‰ÓmyïC¬qÑ´/ª(B tB¨`Ê5·˜ýbJV·*ï›Ú&ï­µ«üÝîžô3q&ŽóÄ*ú#¹1×ý«FÉATþ†©«²ÛE +Úv}q>Áôžñ=!¾ÔRÛƒ­¡ ÚAI{¶l{ÉŽy¾,ó||v%¿G+Ior”¹/&&Œ­"€'0 óz‰4æÄGh5ð y-È[ñÔT;œ¶J]a‹¡Š‘¹¦ÔкWÉÃ^ßl”è<ÛÍ—²—Q(?imBkÒw:—Z=UVM¸'V\O~/vüt#÷MØ•ž '‹jÑ5zíó¹+£%?]ǦlìIP¨½ñ÷ï¨ã)cW5Zk¥/(¨íxÖB¡K¤ìN•]+?—6+Ÿ±gC©Î‡ðÖ‹5 ãÇ"„#­ƒƒXß5gü*qDûFRÿ!#¹‚`<~ó÷õ¼÷él&íJ $Ö¤tÁ—9q—‘\‰tçµ@³¼Ý’„66ãS#ÎÕ/ï+”²YvMužø³ìÎøFœÅâÞ†€öÈ¢³Qqˆ—,ÕãU õޤ·ñâð­žCãºtá$›ŒÚV UŠ „‘“ötN»V5¹AÇ/g}´µéÎç]O„µ“èæwÃõ¼•WqQþR"¾±¾8¿§›ØŸÒ*Þ5ù^½¨<Ä©l›Ñ5ÜŒWáÅ:¢šÝs99<ƒÖ‹èD9J¤²fHp>‘XF®ÒÑ=ËðüB­”~Ð®ÊØÒ¶ ø ^n ÃtG÷Ù§oŽ*"j‘rYµ—ß ûFWÝèåÉsÈÁéèŠB}Ur+_ÂD›áËsÍøx§!€½›ÇHˆN¶ Àp@ÔÕ^7ö”¤KÓŸè¸B£‹j1•­¹çcrýŽý›GJ µXá•OB{mûÑ}¸ ÑCZLöÏÁp‚íùåðG€gÝ5žªq\ü‚®Íë¶él »R\å»áO÷ðKiò^ôCnGØDÔ€ ê¸`gt*ѳª8QFAÇ#ë”×’ò®4VP˜/˜ám'R³d¿ë°„±,Ð,)óF¶ˆeåsÇ>z‡‡pæÐ<ž}E‚žµ<3;œl9§ÿÙ<¹úèµK¢ @Ëú9Qí¤}îŠ_†Ñ|^•héÖÊ|Yû4óMªwÆÝÂ(AaþýTË쨎$Ÿ„и),M Á‹UrŒïd²oËÞÖ†Á[*ÐçªÑÜ c8ê;«t™ÔæWœüh"$…ÈD6 ñøMH”åΓ§kjÿTI3%ýN0v5÷î%iƒ§%à¹"xþmñð]{lùˆž+l¡à5MÀ5Ç\Õ³i¶éb î)k~Õ¾ ñÛ(1iÝžŒR¬É»ä*qÊH›Þ±P—Äù}UåXJÞI!*I X–ídÏ€$Dvíc¼û.·BãÛMø OÝ\dÙM é4*«ÆS-D3%›%qÐgi|9Ƥ9ŸÖH‚Rjlž¤cp ÎÄO(ða-¥(\»ðÕÝ-v¡ö¹§åe@"g„þ¯»i§vpoi õ!ˆâ,Gj Z)¼j+HÛaã…7&:mžŒÉxgצÏ\˜™ßžn÷ß‹<\7.|°¸F\§ÈŠÆÍ\]’ne,òO€¥±ØÛY;2ž2ìà‹47n4ÐP:>$Í—žuxÓ²/1Ï´µUOé+ ÛŸCß*.Û"— ݪ¬”>¥ç“ÛÊeçªå {^ÚÒœ]:˜Õ07O/kÄ{)™#v™ ¾Ønç L-ððª!}. 
¸C£‹$Íô-b›¹&HBeô>ú¥¹_ú¡RB*X¸16±Ëi‚åmÙT8…ÄwDµ¡¿·ö)^LÜÿ,ÏÖɵG°Z‘¢uë…2SÅLfЯ1/ËÎÏSбyã~¬”,’ÔV‘bkÚç§-ú¾—Ö»q‡âÊŒ+}¶ý\»s›Î‹U ›û¬¨ÆÏ|:c·.ý("94 Ezˆ g¹eAƒ¢…j6÷®s¯Š æùN±ÍM¶ëg¼áYhy£ŠÀ¸ ˆ}dØ÷é0.Är¶ ŠYŠÐ"ÐÜ)ûg”œ8nFCPú @Ç·—oÉwsýI?Ê”×ñ¼[úBbï5"›G嵟 ¹#²û]ìUƒ›G2ø¶9¸ghŒJŽß ŠuÄïJz„‰¶RÊÃ:ý¸@j÷¬þ³ÅP!<© §ôn¹WŽëÛ÷å;i½Ã­ köÖl÷ëbˆÈ?@[E`Xø½ÐÚâã>š©o,—lÇ ànž¥Wž™–™ŒöÐhÊrlºrŒÊ^š€¢ažÔqW:À|?:DØ;*­ h„i \»÷´½cVp)Td5O8ň¿ Ÿ¢»o$Õ§T¨rFÒù÷ÞøRÙáu6eQ7× ÓûåÊl5OýÊ÷‡Pc$‡|?ûë½¾-j‡Yåé­ g'^*Nƒl¥%¯VÕÙ(‚þ7EŽâ ìF±´˜Ò÷x[r6I ÒÊ?÷÷V”k©Î…ª‘e¼Ñ;ƒÒ~ùpn³>ØÏÕÆéóä¹×DŸñÑ£Äaó1 ôÔ™!¸ÿ .Êd9¹X†îì|t°¤âÛuë‹rÆÏ;ߺ pûÑ$†s<`“Ð?ÆsWòã(ñ+ÞWVK GÇ™Eq¥­ô&¶pc_Î^«‹XÎÓšV‹Ñ-²7\i¢^íßo$ŽJg;>þ~ wgœÞŸºHÁñ{rö@¸e´\t2ðã’’JÑ6‰›DEݼµbk—ÜĦ'*¹`¢“ór‹A)TëÇšffq¯Ó‡ÁCñÆã÷a•#BXà âåt,ÿ‡–H¶6ÉrW+—a'åäŒzNsz݈/Õ×&whcö±•Å:êb¡˜E.8ÚY±¤2¥ùT£ÏU£¯\~¥uÎ0Åó~©`¯ÃcR êÅÒ™|ø÷#Ò›û]ã×sD7 °è@FEêbè¶Å8H¹ÕËbRqJÜR Œø®ÛVœ’mg?%;ǧkQÅTSˆ«::£Z1J£0Ù¥`Þ ñÇXðýZzê]ýÙXõe'þk  bãb™2ÄGÉÎ +sI&Ñç(Ýut¤S¢Ò¨MEæ[l€Ô`9ƒäí\y­÷µûgØi\ö!$t½ÎÒJ]ÝoÆcâjÖËëÑÔ“’¯Ã§çæÝ¬Z b™‡6In±×~I”¯ ƒw'&/i†/ÓÔZzc¦ÂWË)w\ßôk'{â嘩h)¯+q²ŒÉ\5ÛÍL¢7ŒP|–ΣK°0ë:¯r6C¢.>&ã%¢‹è@S)Œ 0_a«a(¡tGÔBÐ J4´®cqõH›Ä–62¡÷?©bêO›Š_÷$ˆU½ô5Ih¤Fð,àù‚d‰dÝ!QûÝæ‚ ʸÕú¨1ÈoœZbE^™ÙëOf€ÎRG´5lh¿S>!Ê®V\õ[Îà«×º;§¹z#3Ð/2¿´×<,sæŸêNT9i"ïoîæÎ Ã]³Þ”hè‹Ïöö‰æ!½uAwOã£PMkj4ŠD„pQl¼”j‹»‘Ñ@ˉ$F3µ;‹©³²15ZvÝ‚7ÓÊ&3~àIWá\YÈ&Ž+ˆ›ì k)cyò†äîUQœƒ-^¹Õ¶¦çCãf­üát'Ñ(̸ð©Ä£ähD“ÄÛýÝùR"ÔÌÈ>të«wÔ ÉçîvÅy_÷¶ß‰{.bct¢ Ö†°ž…G–ÞZ|уùT(ÊêE‰Tíuœ†ÔOÙã±Å.S@Sâ¿íõx1Sm)‰O¼>¬”’›ß&ó—3³6Ï”æQþî#Z—«Su„FÊ‚YèµoͶ'WN誇‘4wpú î°´#-0…vvˆÒl¹¼fV@ìmÐÜï‚ C÷t¦zÐz§@´‚vƒ!ÞÑ9—¥ˆŽH«^ïÑéE»ñFÈßT¾›ž¥Û”Ø^»LV”³d~ƒ™îAˆ ¾4&YºZ§N Fr–Ê‹i~y§AºÅ¡å¦@Tð$hÆ`Ù-à!çÕ¥þR%*ˆ_Úˆ¯)8|‘b}]Nv-‹õ8ì[ëÛÍØk£aÔPž÷±j¬óQ«ìÜëš6 ŽÓ°ugè“BWéµ¼e»X€O¬a&Gv½ 1WS¦ipIÁf)Î^ßóÑojÝŠ€´©›á­¤y<öÇ¢ÖJ¶15j¼Ë;½,§ŒÛí£–"yè¿_ÜX½²Ö…ìDq§±¸×‹ ¢*+5Ž^~pœ/d $ÉÏA1 ºøÐr‡æÿ,.|ç1§I8f¹‰õ ;}c—òqÊà¥Ë@—ƒP_ [Æm‹ó­v·»ÜÍihW-µe§Þ`«%bú"·*+ã¢}vµRp.ÛÔ.WiQ\w0ÝÞ™ÞFÄNËì{ÔMÅ]T˜´#0â|´‹ëÊ„¥²i¥þ\¿“7Ó…ÈLÇîT®­l¿”Ë¥"œB <.):̵TH”ÑeOÑûî×ij.äFÓ¤à{FÚ2E·õ82rTîm'¡&Ž-Q8Ð ¦Øs3ì3שIœ7£—Ô÷pðlÚŒkàC­³ì?µ•™Ç°œã9®\1’ÂAïÚÏž r²qO8€ÀÉ¿v­­êDšE‹5‡ ³¶VßÁˆÇ¸¿ K’^|âýH1ÿ®.ð«E£ÛgÚG˜Õ(;ÚýÖâ(»…ôÍØ—²°GŠðŒ0×=|ï‹ÖåßwÍ GäT~&Á´ÐYs)Å PQ„[Zë´›«eF¹IµäB OïÚ׌sá&{D<Î`Ù­±rî,_hè‰Ìzâb/ÑN‰éas“aÝX¦YüIW¯`^P¾ ?@k‹úv±*’«\ ÜþCA!͉]ƒ¤ƒÃ„i;~¹92ƒƒÖ†^óŒÞYI+ÞsQw]%á”,X8 q;Vmöd¼ÒZš¯mæø) <¾äêM,øú}ó1ý–ãYÈ)¼€X$ݳÒGš#þ–a‹»8k ] Ö ©a„S“d„_=×Ñþê;mÓÇ÷SY!±UÉcì Fed‡æ»ºèýo5¾—O2‡TTè¬oCó/ílâÊtY@ó³ôm<$nåX½ÝÊDRt2å$Ñ9V—O¾*(•e_£¿Óc ¨Nvqà˜^ éßòDo»ž̇ø¹¦“ÕE£MøA ÊpÍ.`ó¿û®¯Çëõ×+0P«/äÛ<“ ôt¹¥ýkê]Ùí“·Á$î´Ú^Ç-ò¬¸E{LZ¹|I›FhùŽe_Þ×å_Ïœ;35ÁDZèc5g4l¿uCµý«Dl )1Pã,9àF ^ï»þmWú…QÍ/²Œx M±΋ Y?\ï3i)µ11H>ÓñÝýø8GÛUIÐ`›7èò#Ú™úQ®£‹zÉÖüEN´“ÿ´Ü¤à8Ž©¥@ø¡ÙÊS2@] úNîÀiŒÁzôëxusEÉfW’¯X¤–û4Ï"kk»«:y´dê,&w1 ṳ̀F1(D#öµ¢59Ð}‘/â`~¾>6Ú@õhå¨üµ¬ÑÏK~çºe2§4—O2,vW‚Ãj[`˜?+aÝmc«g†™fß\-Î_ …‡0V{§{âŠ%éÓrÊ¥{Y°Å¸j-ÌÜ ·îÏGUø’]ÔHÌdžÿÕ-k£bJá_‹ý@ηñŸønØå‹LìÈAö¨QÊâ>ä•çöeg#5“†~YxvŒ[^ù7a’Õ3A Á©Q'§˜á u©7^¾5D/qönÁóB¶aòÇŠ¸¾?tÉ)Ȉ?a¼Æô¿XÕ~6\¢ÂÛ*£KCã.Fs’?Úíâè6özRBefñI"UAÜ6ñµ%v+:ê)‘$ƒÏÍjÁ†Æ-SÚÙ2¡¢Ü…ˆÿÖýÎi†dÏÆÝIfI‹¼IÂæö§ø®ŒNÁ@BȘ=üÔš­œ—àþc =æÛg­Úªéæ/ÏÏW®„¶)Ó?·x×¾´þŽ£>ÔõÍ5Š÷Ï@’€c:8!Û*,»Íò³Îƒ« ‡–¬g4-%ý`©•$VN?|2iüpà ›«›[©À¹l¨¢™KD÷À¨#ƒ¬=?cü-(Œ?íÆOøXHjãÐÎîZ³¤‹ÏÊz¬÷Ñ,"cé\ܼÏsDZù¿~üJWÀ¦…bäH„§ª°ó-¥ÿsUÕÒùR¸Ç%ò4”t’¥eIOö;”®\ómÈLƒËâÌ£÷ÍddµÔKä¬óÕ‹þ#å9­‚>¡\ä»)D¦b9¦0@ÕÔaÒ¦ Ë«— Ä<àšéÐÎ\{0õ~O…öê_NQ‘ÕjϾ¼ô>××¾šXªÒH…-¨¦·1òƨx&ü¼5Ê/Ù¡‚™É¨`ìè²!±¤Ïtä¡üõ¤yy=_8Ùn¾râÀn¿f`›‰k÷9Qêƒ3ÔA³˜“_¯É:Ö¹”Áã­æÑ=ÕÁÿî‡içjÐÙ̪ïé•Ë‹~h¸Å>‚SuwÙS,,(!ðøÐl°?<•ÇÒU9³wÑ‹ó}ϪÍ}òלHÌ=Ñ.XÛ£”F‡yJ}HOGì7YÁê,7¡»’HÛRsUMFz¡9›É4ÊJ d§¼P~{åT&Q Öµ/ %"ªŸZöd¬‹ˆð¹ØQš>põb§4æüJq$ë)Ì:ò;Æ/^úë1b>[¬ÒŠAã ¡óè^lšŸIŠŸë|€È?B忯Rc¼ÜÛ¸¤ÈO¨LÕ«Hµð^·©Ã+%WѾsi²I0{O»¦‘«S]•9_ô‹ÈÞ *OþEBOSÌïÉÅ2a„Ö ²kÁüY‹þÃ.ĶècÕŽLêœÀ÷S Hj¬KJÙ[mìÏ»ÚÊxe{äâŸÒ ¶Ðy‰Ã´HÚtÙ³¶üT¡+û¬ÅMü9:qLÅïݽŒ^»T¨Ôô#çÂË]¸Õ€ÍÜŠŽ´ãj:J%ºPÁF>êyºI¿ïß7kÚžýd†ýò7ÒæË"Oæ –¨°¬u¤5ˆ, ïÐÖ‡’æˆ 2þW^·úõ@ø@‚«ŒÉžÚUýaC¿—%çnK5f›Þ-žƒÜCW±¥ÅôÔ8þì 
o§ž±A#n$Ž ‚:«±Óâ%ƒÅçè“+½›´Àiÿ³n‡yˆF$–Àó \1~ÝÕàvŠs+/,xÁG¡êóC¤°>¹ kãkÓ;£þiáãEfõˆ÷êÏzOJ}†ßî¤f·!¾Öݵz!B6òÞé† v<][d2,Â<é·ªt˜‘pP?r©A{ÏC¿¹m^zx=–ò8}F#UOí|ËFa¥Ÿb~(PÕ´b':® ©ð„þ¨›Zfq,}¢÷ôú\6ÜëW`•lå'Æá+ÇÁª:%U!¯üz¢â+¼0®Í1Åݦ¸Í¨€²Î)x[L…˜¹ÇŸJ]8LK÷€$÷Ô&Ø+ÖãT]T]Âó-Ÿ®ÍžÏ^§À•M^Øòã·ÊV}Øß6™9 õDçnÂy–^r-ýÔ¼=é¬q€™qjöö~ÿÓOÁ'1Éû6åžk5é,™¾ÕОÅC¢¥bµf2Q|õþ¹2ÙXö°ðÔGLh˜ù~ˆ« å  Üa±´4%wvläoE«uT7{ao$KM”ØÌž»–wòS‰¿ò&H¡jÌZNÄ!R#ÂC¶8;݆à4UK ¿@míž[GdÙìçûïypx|‡vƒ6ýú3 7¾+O`fÉËüF'V¤åA9`Mì7sÓ~àáæAÛ(ÃÕÑn§?hšÌµbÔ~M´06ÀCÓTèk| ]÷é¢yØ¥äKªøwèÀv—Ê!4g÷p6Doìï³ÀÇ>öÚ<-ŠgÕmnír¾2£ :²DäÖ™uøÊv¸‘ðiÚ¯´üV¨u¤ý¬$¬PÐ?_¼ýécG„ù¨ŸàEg”ñ®&Ó+†Rù› Ñ…Zdçòë‹ûœm‘Gót¶|¢k©êËÜ9GC–t†š 8'HñíÛPpíŸÐŠ•‰<§[­ÙÙ”ÞÀj’ºw÷“ubÈu=gÿÑ^F ¶S}GŒŽºÊ‚–¤ÍÈi•ã«Ë»i£6&HˆE¥E¦Hò(žExöÇP ?Q$Êwp2ⵘø#d} p]ù0ºÛ«}ŒU´ÈñoÃ6ŠAWlË­"Þ*"¥K*a§/å·eAíäbdëP~¾0e&³[ïBMC'¶<,jwÇÌ–œAª9>Z S¬÷Û}O%p\Z¬c * èÉùTǤo¡32 r.÷ iBB*–Úq¦]¹QZhÕC*»<üá×h˜ñûsøx]æ¼Ý¤MÆIÖp/GÒœq0ò¦8ø9þ­”?ÈPªÉ|õö*N$RsÃEÛCrâ¤ã `µÆÆt¢âÔŠÜ,ݳ-YD2ˆáz½=‹VÜ®M4[æy¶®C¾ÿ¦vVY—Iýjqƒ‡Æ`æ£sÌÎhôLöAoÞ™ËÚ6WY ÚŽJùF¼ôÜÐ;­°˜ÎøYÝD6( þ@NÞnNøåÑØÖì5øƒh¥µîdÍÖm B_øx®\ë ,޾:\µÆÜÃ:ă’9|æú||mæEö­ 9&ª[Rb[“™ÍÍ-ÜÂ…‘ÕGÌnìz-š·f6V#çßnÞ}9ÅùŽ/¶‚ßdº§V@5ŸpPÃÈnÓ³C…üÒŸÍ? †0°f[ä`*{ÅïÁ>FbU‰k6£]§/ÿ¨Ü“Zì©äñ¶IjtÎ{MN,S~­‘t¶é$<=4Ppgð8o‹\ ‰Hm¢àô|Ú`µ¾XîbbÆ%ë0$¿¿ÕÍn í(ƒi•áGžö&¯z 6VβœìîªJxQûgóeVçÁžÂ 4$ïÒ„¤GÒÄ›j¡“Äy¤¹y†ÆWø6‹àñëV ƒ{¦éD9¾©)‡ZM¸®/ Ëç´oâU×§u&UÇŠ4Ã8o£ú¾'Çkö<@ëÉ®MO6d9p‡XtÒ¯—qÛ%Òäí2ÖÚ†÷-“®ßøœ)¦,]Ÿï]'lQ¤ ³’ù¹äê‚sWáGy ÃKE'ýÜ0"IÊnÇG·ŒOnåSqŒ„Î$ 9©4èÜ¿¥mH¨Û«JAFcˆ— Ɇáb&4ÊeåA¢ ôÎ?©™h ÷Îãõ9ÅV/»èPŒìJꊡ”@hrjòì”4ÝÝÏ?&uËéRY]¿†Z‰ IúD"߈k±4Ö„@µkg÷A} !ÈL]V­ô÷Dø¨Þ˜°"¾;ísƒÖÉè~·›KeëƒûÂ2Zñan8¾¡!èc Òh¡àó“E%žu¤ÇgwfBrëoîÛ–ò M5qH¢ÑžZT—¸ïa¥ÜSÍníëÃåñ¥[\zƒ§¢ßŠÇ@šãÒÂO^×à¡s]‚—]:í»øl4Çe£'Ã5G!ÖzŽp$oD€vÞõ­»X%¹zH—!×´Öɯ:Í’ZE.ÚŸö±%–*·‚\c¸I¶œûÞXWð%ó·ëÒ£T²óå/´³ªd-•)=ÉåÁ >ó'µfž¨T{}¶“™‘œ¡LèÈ(µIû §ã³nƒ÷\FUh… ^š»ÐËínÓ¯ËG@øäÁŒÛK^gL•ÅÐ9oMw†B™aT¨³Ïi­†‘éyKÖ¡Y”l¦µ×Ð<ÁË*0#ü¦ŸÂvyß—­Ì†¥Jú¨p0¹DNžÍ_g4{n|ÔÞ³fµ¹b=ü”É.yÈ£€ª]Ò´\Òœí¶:é²Ê>^ÏCý\:7¾a}¬mÇ |TçÜåFö>à¶­{Îiýj¹%Àå.,”—†42/,©]•¬H@\æ‚픵£îî™› ›7ÿB÷[OKõÚ\'ö“{¡Ôa­îŒ>Ý1à&èF\׈0ñ‹ß°7í«Ä’ ¿Ä"œ€Øþ7¿Az8„o©¤6CÓHû®Ú´´|lÊž(Ê¡3¹ü\¼»cÚ°ŠŠ¿ ‘½± Ø%ÞØ¥÷*6> müª=\I ½(èЙÎlg<:-Ì—î3ù¾Gð>+êb‚¢†ÜßVú ¡óò½±õ¼ëP o$NšTú` R·û–+ìÎÞŽŒXü£‚–ùøn7ó³" ÷y›„5Æ/2‹DO¤)˜ü`]tøNy8˜^¹›ÄwÖýæ ˆ,1½ó™ŽW0>{‡À».Äø º)v›.Î$-Q¢2àÎ1ºÒ[&<™†ííL»Z_e€Zâêžävv’m!ù\OÜj2}ÕŰÃÍx+·> $îfÑ­×ÍX?kçÚä$»}WËå£#ÝÕÇÃGÞ±h½vü¤°€9>Üéwò$0kšÒDcÏçðîöI.‚ x°¢€«é»]¥a üí¯n6ÏC+ÝV.÷cfXMl¸BA–Dv5àÄfgî¹Ì_±ž‡­¬Ò{q(kªº–¤ëÑsÌÙÃû&i’§bªâþÕ6RX7 N?u~¡Y¥&ëÓ$j²Ôt£üñ‘ì ¹¬ølPóŽî4ññÛ»\>ªˆÑsbè3Ù;Ž›7¶‡ÚjÕ Ïþ<ìI©ÛÚ¼`¨ß0³Oõ.:¶»Msï‹Æã$gºñé»êî Ò^‹ö`Ã_Íœ&Œ#"XQ+Ùt9œÃ¬³*ˆÆ@3héÓƒ FˆÖNÞºïS‡ó¿Ì¡âp_ASŽÈw“ÃW¼‹¤I•(ó"DtÎP×îïu‰¬.ωjcâ@§‘lrØZÁ¦ËªÒçr=·LÛµ¼÷ † Mö°M&ïëµNô§‰·3?¼ç{ýffý˜ï‹ñ ºb1™E»ˆÃÕÎl×w"Cg‰lœ«B͘ãñØoŸºcò%ZiV?c0³ç+¶Œt¤ãÓmܵ·Aû›EéþE$騙›¿½«Žé  J¼-of@°÷)ä<ÕÏ_°w†¥‰H¿#Ë4[îËñ³û椈Ǖ¤áy" W!ýÜ ­ýf£°‘Çm‹ ìòԡϱÇaÀ? 
ç!ÆP"æ©4¢³!‰µ~µ/vI&CñŽ&s‹YlÊ&…ïˆìcáX¡>tÞ6˜ÖÍÅ—«:#.-­àДٽ¶aóâªP>¥ ÿŒ¹kóGæE<ù`#u«â' ௻iùU3‰®¯¹BæÿlLaendstream endobj 819 0 obj << /Filter /FlateDecode /Length 4579 >> stream xÚÕ\YsG’~ׯÀîË€1F¹îÃm„¥kwcì±ã°ýЛ$$à eͯß̬£Tƒ -?l8(twUWeeåùU¶ùìfÆgo^¼º|ñù×ÂÌg1»¼žy1sƱ ÌìòjöãÜ_ü|ù?Ÿmu¿“à‚In` êsy{!æmìè£IÉ4w¹ßß/”˜¿¾_}SUkÆ…Ïÿz!ý¼Y^H7aì¼¹i/J©ù5 °Ýᵜ¿n8õ Ü‹ù þ–Í]lùO¸¡Æ&¾õÕZÄü#üí±ã)xÁ#Æ\Xë˜U³…2Ì[)Zo/@ÍΈc±‹…–vþí–h€6Ïçxy óv7¹MÌ‘q—Ç RÖ±{³ß?ÀíûRÛvßÝ^÷†Lsa¿ÝönìœpHoÀ4w±K¢øŠL­±K|Ú!Ôï'nxbüÇÌl‡|‰—ß½y‹ÿ™ ¯2ýÍÍm¢{s…ïB÷…PÌèÙõ--¸µ0{jŽ…V|.‚ƒÞ0P'×¶øÝÞÒªÛxCLÙ"ï€$XÙ!vFžas³O¿øƒ¬ÁB9¸ïƈ Æ—vÈ•+kIW drƒ<}GÙ‘ÀÑ¿û,LÛDÕ‡Û¸Z1 ,]/©ã>mî®Á7Hˡƽ5 oë6î Jr“ÆÇë,a«M”ú¼Ÿmªå\gÕ±‹T3PtÇ©‡eÒ¼ f}¢ éèP1Ó.Ìz½>‹7K$å!ŠÈò7nµnHˆä÷´lBØ©F‹_uÌ‹ŠÇŒõ»²P^)qï&¶]?¤1—=µ!âÝ€x ¶¾X˜]bð/(8I¨×‘nžÉ¯°@d±}{¤‘dïèÕü.l»{ŸîViÃo#­À hþÖCò¸gA˜ÎZ‚l¼ùªº *0%rG4ˆÍz•VðñO¨e¹ÅEÞ7ùù[|±Û „`^ù¡(ö–PaªÔLI™iy"/•f2” !ƒêçÿ /¹s@ëCÚ¤} ’&ƒ™¯Ù˜¹yƒ&ðþ®Øþ·éŠž$=‡Ž6ïR4Hôj1 Òû¨Ó8lѹeZr›º’ل˱$&Z>ܶéå4hŸïâ( Ë|µÀgÍæ}ä¹ñÌÊ!ǧl ꪊ‚[ɼ¶™‘¯’‚󞂃lIM ®\ì%k;♲fÖëµnÞ##{h×dÈZ–䘃n±`Œ$çé9?,üŠ4ŒF‡)£×4œÏß9(0¾WkrÔ0êöqÄ`û#Ç|g¾¾K«3ý.pimÑŸ8—µâeîõºF=ìƒ%dihÙðÉ]®šìœ·¤_ å I4^8T8 ;k˜Ðb¶˜›lÛ_ÚCm¡‹´E³¨‡ª¢3/aêÀç?v–”q8 E0>V,¿a äŽ#:«wµyP%Ãf&ñ3¨,îØ‘©xV¡C9 #d:6Ý|‹ÕÇ;8vªô©šFp B•‰†nô×H-úz$4Z—M â>ƒtZmæß·cˉïi7²Œq*^¢>?\€hÛøBöxý_MVí(Ø! ǀ ÎÕDœì)""»OVsfÚ™>ä€õM “#½Üü![9º_O [×hÃHOí‹ë6 ư´ÉJã\5Ø?7p•E5›óâQ鯣¢Îç8½@ÃvHÜùýNŠÉ¢GíG"Mù&>¸¦h—צЂÍ¿”îÚÛäØ4´Ñí¡—0`L„ 1ÀÕâËFáãè1Sâ£9äv‡¸*õr-lúå˜|b[óö.;ÍÏÐm»Îy>dçù°K©¾þ/rÛ}¼Ãœ.–”¯”wsTƒxh' < ã`V ˜OË®–¦"Aø4oaׯ‰?ãL ï¸¡ƒN‰D—ELNƒ‡šµ·ÔA6÷œh1÷eï;Êw™)ú£%:Q6Y„"’ô½fZ%Pã刚Q–À ˆ¨û ô§à°{ýhÚÂ`n-À/7‘’þí*ƒ”Ó\­’m&ÉëCÏ>ˆ˜…¨^ž!K&ŽS›­C˜—‰ »íM• Ž9Ù3PÒVÙ`R²tÚeÂ`Rú¾Ï|YÓ3­ýñœcËB!ìÕqè²G¨*íÁÜ}m6×ÀÛŸ7£©VûÁŒUfY˜ÑÕfì"£!œ’Ù0ô23ô=ÒRÄ¥XmØnøI½þQk–^XHIõ{_‰œ5Äç%${)êªdüÑÊç鱮 Î8á\VŸ÷U? b*f½nËÊ`U {¡µŸOŒñBÄ//BX[ß|àˆQ¦bJççõ…£8åŽ8¦uó/ðÇÆ—¾Œ7Õ,3ºAêü¢Ä}ïêê!´û=H7mt$k/t˜¹÷4˜’^ÁsÚÛ`ÿ@ÂQê09 :eØ=ܨ; fºþ·Ú$È00)zÌ&áœEþ\ç¸í¼ÅTH ¾>×¥<š~ØqúÍþªËM{+Þ®[rØÑ#PØ?èSSH5uÑöªv@þeÊ^/ªTªG¤Õé"òË6yï»ÏŽñ²ÑŠb’#ÄñÄÞ1ÃÅÉ\”ÝØ—{OÐmÆv3ú^«™ÿiÂt„nŠõj3™˜‚­ì`´MRÄjJ'6ô“†„B€±ƒ©£†RÛIÔÛ0î~85ä¦/ã&…\Ë3Ð-2êH¡„,³îI”ëÀo³ÉØÜP˜»"MWMS;FãùP¡f ˆdxU k‹TÓqŒM˜õCB©cOoÚM¶WMÉ`=o UæŒÁ«yUÂ!óÁ ƒÃ˜¶à¿¤ »J“C$Ù×¼%šÖøo'pߨ°~¡ç<'ðlÕ]:}èF-Y> “\ùM‹XAàóÿΩŒø°§ƒ‘öhÌ¢°e55ïÓ*èÀç®a¬Va5ŸÑƒŠ@*‡§jú„@þÐ× VuøG8œpø» ÃE‘4R¡'ÅyBÌa¼C£‡ù”’NटçÜ Ò(Ÿ µ“%~ãeß&€¿Æ62åð;–ODµ˜¿ú›id‚Ç ú=àS—#³Œm5¸¤žéœâÄÔ™€ê¼åѺÜ|µõ¸sÈppDUÇtUu(ê+ÂPHùñÔeqyT”ö]:p%1E¬•œÍ¾¶ ž Ê4Ù똚¯ºS<ÈtÉ9Áoœâ×tŒ–Ô‹¤¹íeðØ¿;÷ZåLuGÈPö8>³c˜pçälÂ=ç Ëêõ{‡:xEª»¤}¸ÝFJˆÆu< mcݬvj©¡hñzÊ[ H( zYzœ®¢j(/úkø¬6Ÿe\«“!‹e·ß'8®Q ƒˆ.S]x9/}¹NfÇô6 ªTu§±!¡X E»ø ãW¤à1xÞôJµ,‡ÈùS4W‹Ð/?mʪ¬kÂmz½ÐÚÐ2Œ¦u‡_ÿíÄ!ƒèâö—Ç96ž·8$Õñ‚Ý,ìô_+Ë€XÏã28ÓV$Öýé]u ;“a˳LÀc©N¾”S?”Ž~d­£NS)•èdÄCËôê.¹ONoÌ’Wr6™O„÷ihYf#“­a4±öãqfû¡gÁKïh¶Mã&æ$öÔ«TÇÀu ÐVØãfŸf*ûˇÛ}“'êÈ÷sÄk‚† Yêtnª„ahoc^“À¹ë\ÀÓ‘H‘ù~˜©e 'ló!mö-Øq`¤ñAA ØÆë}Øã°=Â(`PSŠ¢¸fÚôÐ@5Q:!Â9v|]“R°D}:€«]݇Xìs;‘rð.åˆ|”ñL¢%I®ÍI%?å뼕=á­ S+ wsNdãQM’ç Ï3Ûc'ªÅ€øÿˆn²6¤p<Ã,GÈ QÇ\$8rS†û*€¨A’ÉdW/ë¦< ×vÖu¬O豌ıŒY>MÀ_üùòÅ?_JëHž%„F:°òËõ‹æ³+hËÀTð³ÔuMâLhÂÝìû‹UˆcÄ+ ë0FœK; ÐvO´Ïœ±êË 3Î˲„SRXSÄÏŒT\w˜‘­‡çü1ÀÎv ÓBf(uTŽ d:‚¸yáHPL7ÿ ‚窱> ýIô Q´¡¾)êúæÁBÙg(Ü¿ÇÅ$°m߸?=;fB¯Gt6FV"Ðxf[R‘jgvU¸N0Õa`ïêb+ºj””õ}þ뾤^Û`ùÈ Ÿ3`,ýÒퟵõƒ@o‹oáiSîVAð§?X ÓŠáñUçlÍ0òƒÛ‚¹^ưK»x>ƒ¶²R?s^Ì¥}„ý†³O‘‚5ôŠI*p?’ÇŒÎa‰p'UatxX)žFßÔAÞ1}›ƒªÀ»Ci>Eâ˜S®?}.x݉+×Â"KœJ‚°!f©õÜQ;ðËöyçrêtÆ-å"Þ; yuhÇP<³# )`nÔ"xMA·3BV[õB•>À¯°Øm”f]Þ–c‹ññO*ö°ØUÆÄ,æ,†)E#u¹„Zô‹^bQSà­¨”ÏA›Fy­°; „zV½m!± [fó÷Gÿ¶©õ`…V‚ ê9Þ0Ÿƒô‹Y T¸~S>W“&õÏ‹?ó9ûä¥3g¬lfsStdÞã¡󞼞3v(FÎÙh/„§tF 3-ÆR3EÇR'%˜ŠÎñY;ÿ®’ÉCEVŸdçË`gîüïVšvæŒ;ïÎÚy?ØùMeç!£ÏkŽwæØ*v)½~âœ×m“wãâþ±¤™ú|ÃL”d Ã,ÅÓ"¿¨£ø½ð%}>)|õ1VÄöÎ[1P9t'Ø£D(–£«•Þ?ÑÃÈñ\’©ðEþ¤˜:õèN®D¬V]u•Á©V†j`½cãŪƒâ O˜µÎ•«ì¤o’¼¦£~ír‘þE:AÍé™Èíß tú.Êô߿͕A7™¼n´´?-*¬b D”ïÓ£OÊW­}pºIaðv“`õe®­¢Se\ò¦{«Üà£&þ÷Ƨ²Þe*0Ÿ Ò¸$÷ÇÇÛ6E…;ï8ȇd1ˆ"Ûîh"åÈ?e€"ëÚÞÄí<2Rhkªüuê³íÙfûQÍuÀ0ÔXªþm;¦-÷ÿOØ 
³­fûè{!e…²eÇ^^,œs]‰xX¦Tglpë¬>Y@¦$–5»söh°T8í–ó`ÏÚßßÈm?ævý X jâMU±º UžzÞô¤ð9é»zN8¢¿8YAŒ–Æô}b,!§Jˆñø&'å[i«ºOŒ¾ú&>¸¦ŸPðAÍ¡J¥/dÅáú¶ÉgÊwi¨~~ ·¯h «\¯ÖV2Úø1v<þŸðvG]òp*¨)áµ¹”­Íÿ„Åv“!+¿¾ý²vô¼JÕ^`Ïo›ztä6z¯=uÇ9®Íã•Õ¦_©~¶O;{ʡ驽f6¨gÅ£S-äëÈ)(ªÁ,‚>ÏéC€\t(ÔYôN}B÷ºárÑ!Ô×!}ò»}9ðˆÞ+Ù`³ÞË#½ÿ‚oø?á€ÛRendstream endobj 820 0 obj << /BitsPerComponent 8 /ColorSpace /DeviceRGB /DecodeParms << /BitsPerComponent 8 /Colors 3 /Columns 75 /Predictor 10 >> /Filter /FlateDecode /Height 106 /Width 75 /Length 5295 >> stream xœí\}PSWú> ›×!¢|\Hø("&5 Ð(]âb¿ QqÝm¨ºuºÅa\²#ýÜ®Kví,3-´¤ê¬». fQè–R—L!Ѧ $¡Eš„˜Ù ˆ\ üþ8S~Œ ø±>0ɽçÞœ÷¾ç¼ç9Ïû^ÖÌÌÌ€'„‡ÝÇS <µðñÇS <µðñ‡ë§ÃºººœÞ%;;Çñ‚‚¹\puuíêꊋ‹tvv¶µµÑét‡ƒaXkkk~~~UUÕàààæÍ›¿úê«øøx‹Å"ÊËË]\\‚ƒƒqGÄb±¸»»»ºº wß®y”Y4õoòH[¸,øŸŸ‡ÙÙÙ...z½þ¹çžÛ±cGee%ƒÁ¸uë…B©¨¨H$Z­V£ÑxyyMOO_¹råç?ÿ¹N§ Ý´iÓ¥K—.\¸ý“Ÿü„L&Ãf‚¸ººZ­V*•ÚÞÞ¾qãÆîînƒA"‘ÚÚÚnܸ! µµµ………‰Ä××—F£Ñh4BŠI2y­——ûØØdl¬GÇÍž›››kf&ãÀÖ|&¬Ô(o -ËÔZœøP*UWT´ûúzÚlãå寜<©–H¾)(X¿eK@AAD"‘J¥(ŠÞºu‹Á`( 2™L"‘Aeee[[Û| V«­V«——WWW—H$ª¬¬ ‰Äårÿþ÷¿ÇÇÇÓh4ƒÁššÚÐУ7N7 "‘(;;›ÍfÓh4 …B¡P`¯nݺe0Çðð°O^^ÞÁv5"Íêûm.œÏî®..—K£Ñ:::¢££õz=|œ:ŽÍf{xx˜Íæððð3g΄„„`Æd2á㇓ P(bbbJKKÿú׿¾ð «b×ÿcE|øpvœøP,÷õõQ©TµZÍb±êëë·nÝjµZß{ï=©TjµZa Óh4™™™8Ž·¶¶nÚ´©¾¾>###--íwÞ!“Éëׯ×étjµ:===)))""âÅ_|ûí·6› :\«ÕŒF#€Åb)•JE4íÒ¥K###333^^^"‘èÔ©St:½¯¯¯¥¥…Ïç+•ÊgŸ}¶¨¨h>\Þ}÷Ý,ÌÎÎÞ¾}ûÀÀ@``àèè(F ܽ{÷ØØ˜ÅbÙ·oßÕ«WïܹÃd2×®]«P(úûûm6ÛÈÈÈéӧ׬YÃd2‰DbBB‚Ãሉ‰1™L‹eçΆuwwF??¿ööv‹ÅB§ÓGFFúûû)Jccc__ßöíÛ9"ÌfsJJ FS©T:ŽÁ``–‘‘qûöí¾¾¾œœø8î‰û¥K„&“i¬4œXøÚk¯aÆårKJJ’““qÏËË«¬¬4›ÍA"##;::·lÙRRRÒÒÒòÑG­~8YOyéã'ßBç«ÅóÏ? ’Éd­­­L&Óh4šL¦ÉÉÉ^x¡¶¶¶¨¨H&“!©“L&3 D"188AÖÖV:^[[ûÖ[o•””°X,“ÉäââRZZº:kæ’çál ]|D}¤Y›D"éìì|çw>ýôÓ¢¢"¹\®Õj§§§Y,àƒ>(,,T«Õ8މÅb“É”••%•J£¢¢†‡‡£££Y,dÛp¹ÜÜÜÜììl___‰d·ÛE"Q{{ûµkײ²²¾üòK"1Ãbb2i6Û„RiÙ³g“Z)æÌ̈ÐÐÛMM“Ç @$ò9|øpnnîï~÷»óçÏ/'óÆq|xxØb±ÄÆÆ>Ï,uD8ñák¯½F£Ñ‚ÃáX, ìVkGGGBB@HOO?qâÄ¡C‡0 ÔÕÕ¹¸¸„††’ÉäÚÚZ>Ÿ¯Ñh8Ž^¿qhhÊá˜ôòr'‘ˆƒƒc ©T½mÛgê“O>Œ§   ³³óæÍ›yyyÅÅÅyyyUUUCCCL&³¹¹ÙÛÛûÚµk,«´´499ùÈ‘#y÷ôpá|÷Äãñ>üðC‡ãp8bccétz]]]VVÔ/ÜÜÜ Q¤Ñh$ÉÕÕÕÃÃÃh4¶´´0™Lƒñõ×_×ÕÕ…††îÛ·Ã0xŸÜÜÜ“'OÚl¶¦¦&¡P(“ÉRRR¦§§¢££m6›R©,,,”Ëåwíw¥RéìžN£Ñ,–¬‡(Š* &“I üýý;::<O©Túúú ¸\nFFÆÀÀ@dd¤Á`hjjÒëõßÿý¾}û6›-33óÈ‘#Ï<ó ‡Ã1›Í'OžÔëõ'88x×®]8ާ¥¥]ºtÉh4ÆÄĤ¥¥ uuuÕÕÕ‰Äñññ÷ß×®]p"ðx<¹\ž‘‘QQQ!‰¤RéB6̬ nÐÕÕµøÆ333555‹üi'óP,wttÆÍ›7{yyá8ÞÔÔô‹_ü¢±±‘B¡„……utt¼÷Þ{Ÿ~ú)ÔËššš\]]?ûì³…‡Æƒ£µµjêNñäGšUâ¥2™ ¦4VO¾¬óa÷îÝXoذaïÞ½r¹œÇãÉd2:ÞÔÔ¤V«wïÞ­×ë¯]»¶ÿ~ƒÁ`2™x<ÞÙ³gýüüêëëwìØqèÐ!“É„ ˆÉd²ÛípRÕÕÕMOO>|X"‘°X,‹UQQñÓŸþÐØØXSSsúôé¹ CuuµÁ`=zt¾®>¨ÍfsHHÈCn…ã8A—_~yݺuüãáÙááá{^u×åÒ±Y¬Ò(}ˆÛ‹Œ4ÐîîiݺuYYY7nÜØ¼y3‚ QQQžžž,+77799™J¥úùùiµÚ-[¶s¹ÜæææÉÉIŸ½{÷BÍ[£Ñ¤§§÷õõ±X¬S§Nñùü3gθ¹¹effk4‰ó/½ô ­­Íb±ÄÄÄÀ477—••Éd2™L¶mÛ¶¾¾¾ØØØîîî™™™Ý»w/jI\$3€Ðjµs¿^¿~}‘.†¦¬œsšYͦònÞ¼‰ã8ŸÏÿç?ÿùüóÏëõz»Ý..^¼ˆ¢(@HIIT;))éÛo¿…R···7ŽãÓÓÓP&?vìØþýûÏœ9““““˜˜xòäÉ™™™gŸ}EÑ©©)ƒÁ€aN‡›f£Ñ822Âd2u:@°X,{öìihhö d®=š³<>\‹÷Ò\ º¤ ïïÌÆ‡‡×ÕÕùúúº¸¸ÀÃÕ«WÁ™3gÆÇÇ'''™LfFFFee%‰Djll … ×ëÁ©S§D"QAAP(¼~ýziiéǬT*Y,•Jíîî¦R© ev&ߺuëÂ… AAA(ŠŽŒŒ”––¾ÿþû€7ß|S"‘ (J"‘`ú±¹¹9**ŠJ¥fee- ©;É[|ñÅçÏŸ'‘Hµµµ>>>R©tçÎCCC)))SSS åÎ;$©³³ÓßßÍš5d2@$)Š\.'“ÉÁÁÁk×®5›Í‡C£Ñ‰D&“³nݺÑÑQ …rõêÕÁÁÁÑÑÑñññ™™™|üñÇ‚xyyÅÆÆ2ŒÁÁÁóçÏ3Œ+W®0Œêêj6›íïï¯Õjýüü"##ç3áá³¶•Îj<| W´âÏ]ÓY8×¼…B¡ÙlîïïohhعsçgŸ}Æáp[·nU©TF£1>>Þl6§¥¥I¥R¸¢FGGËÊÊVÃg¸ÏQzÏ|¤ÒÚwaçá#b¶óQj±X8ŽJ¥‚Ùl›ÍF&“õz=” ŸÏÿüóÏóóó+++'&&<==Ù¸q£^¯'|>_¡PTUU½üòËøÃ²²²`éál:^(b¦P(‚‚‚Øl6†apÑ›ššÊÌÌ R©‰‰‰&“©½½¦_Íf³H$ª¯¯W*•>>> ädÅÚ†††vìØ¡Ñh0 Û¸q#Š¢pF$== ìîî¾sçÎsÏ=Ðëõ•••l6;88EѼñÆ==={öìáñx‚üæ7¿a0[¶l ÜOâ8Îb±P…ù,€J¥òððP(yyyZ­Öh4†‡‡F«Õj·Ûù|¾^¯_HûZ ñ¹‹gÝZZZîùyƒƒƒ---]]]555þóŸ/.ç>œ˜˜ ‘H‡C*•þú׿ž˜˜€ 6p8~~~ …B(º¹¹:u꥗^:}ú4Žãt:Ýn·c–˜˜ØÙÙ %‰sçÎ…‡‡§¦¦^¸p!??ÿèÑ£Û·o÷óó+//OOOùî»ï´ZmNN€L&Ûl¶îîî½{÷*•JX”áîîþßÿþwÍš5¿êky±Tä$Òäççωè¢ÎÎN›Í€Õz~~~ÑÑцõôôDFF^¾|ùÕW_-++ËÏÏ×h4ðÁK$’¢¢¢¯¿þZ£Ñh4@ “É0½‘››K£Ñjkk †··wBBÂñãÇQ¥P(&55U&“ùúúNMM¡(Š¢hMM 
‹ÅÚ´i“J¥JMMÍÌÌ\À„GQ§YÞìò*)Â÷ì‡X,V«Õ‹l¼øÛÞ'»§%áàÁƒÃÃÃçÎ3‚|øá‡ÓÓÓ(ŠÊår¹\^UU5::!—Ëùæ›oPèééÁq|ýúõ™LöÌ3Ïüö·¿‹ˆˆX–^-ÿ(¯”Çq‹Åµ#“É—»„„„‹/ÜóË‚‡KÃé­î‰%¨‰l6»©©)((ˆÇã}þùçqqq¥¥¥...$‰Á`„„„˜L&€a¤8Pwh4š¡¡¡’’’7nüþ÷¿wssãr¹£Ñ×ÔÔD&“_}õUèL™L¦R©ø|>¬ÇœžžŽŠŠ:{öìl­µÍfÓh4ããã‚Øl6È”ÓÌ¥³àçÅëHN[Î×à. s©p^‹a6›7lØ6$¥MMM666‚‚‚ŒF#‹ÅR(‡îìì$æúƒƒƒÙlö?þñ3!.—‹a˜Ífƒ•|ïß¿ÿÊ•+}}}‡C ´´´xyy™L&¹\nµZccc;::^yå¹\Î`0\]]ããã†ÄR•Jåââ¢R©¢££ýýý½½½ÿõ¯]¼xñ»ï¾óõõÍÈÈhoo ™»víR*•aaa'Nœ »víšÝnˆˆÐét£££!!!111†©T*@påÊ•_|Ñb±xzzÞ¾}»µµu×®]F«Õº¹¹½þúëííí …brr²§§§ªª*""" ð¯¾úª¿¿ÿg?ûYrr2…Bq¢|ß·÷çª{Róѹ—Q#w®—’H¤Û·o(•ÊË—/'%%ÁÜ|O‚N§ûí·6›Ãḻ»Ã=$`vøñùü¶¶6www&“iµZ·nÝêææïíím±X***fkµwîÜI&“á+L&óßÿþ7 $$Äl6S©T*•úý÷ßñÅ ÷||øà(++›û5++«¬¬ìܹsËû+g=„•ÿ?>n2™ ~wϳ÷‡eã¥2™¬¸¸øàÁƒ&“éìÙ³2™ «à_‰D[Êåò±±±Ùãb±xö8@€›}¢—ËéC˜W€¥´8ŽëtºÉÉIŠ¢0Yo·Û£¢¢p÷õõ5›Í\.2ïÞÞÞ€€ðƒë`Ý-¬xð^-çÞbn‡Äb±··7ü,—Ë}}}[[[ÇÆÆ\\\qqq$ ÇqµZm2™º»»]]]áàMâââ–š.ãœ>vìX׸~ýúàà`}}=T_f£WWW}}ý̪Ì\’/;–s”ÎV^€GF,‹ÔKE"ÑñãÇ'&&|||„B!›Í.))±Ûí4íÝwß‹Å(ŠBð“O>!‰°˜„F£Ùív½^ Hhhh[[‰DB$11Q©T"RZZÊç󇆆ÊËËaµ"äÖÝÝÝÛ¶mkhh‰DÅÅÅL&S£ÑÉdøÞ“Ãá(,,ú.²ÅÅů¿þ:‘H$“É[·nLOO3ŒY) mÛ¶­­­Íf›Íf˜± ‘Ho¼ñ†X,žý׆N§oذ¡¹¹@£Ñ`:dÆ †åææÂº¤C‡‰ÅbøæurrrDDį~õ«ùLxûC§Cqa.Õ½NbéÛo¿-—ËY,Ö2†õÙþÍ×Ñ… Xêè}üò‡KÅ}V²ÿ‰VÐAo÷ööæääÌ]ÖÝÝÝß|óMµZ­V«u:]RR’P(DD*•8p ººÚn·'%%y{{×ÔÔ|ùå—eeeNÝU]] ßZ`.çþp. þ1!ž•Ég¿Î×ryñäÒ'ÿ]î§>þxjáã§>þxjáã'ßÂÿe]Zendstream endobj 821 0 obj << /Filter /FlateDecode /Length 3550 >> stream xÚÅksÛÆñ»~'_JMLø¸4™‘ÚÚN&‰[KÓL'ÎL!’’˜ð!”eõ×wwo¸4©¨ÓñЇ½½}?Nbp3ƒ×'ç—'/_Ù| EV‰J.¯ÊêLÚAaЬÒfp9ü2¼¼=•ÃÕ¢nNGZ«áÏ™»þë´ÔÃ)¼šžþzùýËWÒD€L™•9,C *œr"x]ýÛåɇ ·b ²°™*Ô Ð*+U5/N~ùU &ðòûÈtUhê`‹,7î烋“¤€È*å>0[g&¯‘"v K•)˜P( Iδøç©*‡¯Ï~LmY—YóyË×@™z1ƒÿçð{Dši¼‡ßr ÿmðåjI¤‰H¡ª*ÓR?q=¤„ÉToß-aá ²îît¤ bâr‚œ\âö6Ž¿wõš¶€o7ø›mlse³B•Çc›bM)ÿ ïaÇRëcÉV¤™/ªLV…£Ûà~oá÷^á.rŸ†ŠR pPÉ•»¯OGÒÃ{øqôâÛ´´•ÌLùL´´•Él^º üÄû ~$TÒó‰èôè©‹LÈ: §Ð x–U… ‰UfŠr0’™-yÖÏŠàÙLT`C»YÌšH™g¹*¦¶nv“‚ Ó$Áô³æ2Sì6ži©8ž„ ©(ÊLèü™¤®¢ªb Ëà‡÷W|_ã5&1OCµ'1VÀ¾mŒêݽh‚Æd 4ÏDc[fÆObu0™Ÿ†í2Øî£òÑk&¨ Þ¤{*ƒ¡¨“žÌg™ë€´õÁd}z}ÿa³©œ¥òÎ2“¤*4ƒå+If•U~Þo)H0Èv- }®Äðá¶c íƒXÍ9mûìß]w;ºòý”ÒH&&²µ¦0µÈ3ÈOcF<´âS ƒ[2F½u©I}i˜-¼—Ëiã†ëµ j¸\ÑÜrXÏy'+„ëTcâæÜ·)ñÚÍõ;¢¥ñ³mœÌÝœ¾Õ 7ƒn”[íÊ( á®ÝÍï§Æë›Nãªá5-Ž€í"nÅž¯'Áç,ý5o¿çë¶  W`ЙÙãeQ·RêœLÜÆE·Lû§\äbDr}¢,´ýÎ?u„ höcÌØj΂GLD7„%ür7$M!‡LëÁ-9 Ñ1 üÐzéð`-ù€KB†+U¼*kãÇÄ&º PT<&»i’sžÕƒËÞ‹GóøöN˜‹@’½p¿Ïó™Wë³{Þ“N†÷àãWþ9ÿD_|ó95k·8ç)ÍÑvüžÖ·qÒ³«<+eëv8öƒ — !Ó-ÚÖÞ^ц–[†Ø'¤±÷ƒüV¶áä{!Tj!wëgIåëhÆ–²Ârfe0a¾É*ª¬€ˆ¬.$·\º€øßP02Œö#ÿ…gæ#…@ þŸLçîña¶¹uw›Ù´ÙÅxòÕuÕhbÜR*+`DÐ ÂnˆÝÝ ²y¯Ä7!7ÛnÑ«ž8ëå:«JÓ%©4µn˜ñÖ1äT‹æ=² ßNµ`âÂ\]jDP|4qëfEA9¼g¬—+÷úÎ ñ5-EVwr+c#ÓK;C…žxŠ^ì&ë(Å^³w¿",W«D–$AJÙòí2Q'³ ;&ª®$Ó2ì2äaueoj&«#×ü- Ö\wá’áY‹yÜ•¼q¬óÙ ÑÉÇHW”ig®ròf€s9,Š­Ñý¢S›xƒ#¬ÒIŠ\®Iü=½#«Îª€€$ W*W’ËF,ê·nr©Â6Ë·H‚2 RæU2ÉÈhˤó]•Dï‘@í¦`UÍÏ×2¹#%«íõ*˜ƒ–hm¨}¿§Âë:³³‚ÙS&+Tþì¶ë ¥0!q¾Lª`ä3^1É +Ú§¯(z+–ÑŠIÙÊCvYeRVmQUIkÒ⌒6ÿßÉ{Ÿ=yOФDU5Ç-wŒ.X)wè‚€¬Q¥uAì彨Ö9haŸGZ`êBy˜.ØçÓ…ò0](þߺÀ œE=§X<úÓÕwºé8BþcXYauìR‘l¤Âû½¶ªï‚OAþÉѳ?GïÙ¡~·‰ŠÒ/_Ù¢¦ ÝH߫̕JÆxl»eÛ7Ρ=™¯nR›Ö°_[MÀ½)8­ð^¬»þ’Y¸(Ks”_ðMLŽÎ• f•Ë wÉÍ›²%Ðþž\¹O¼AR»SaÇní qî¯ç»z¡Õݦ’\ÃÿFä{Ä~¤*˜b1%Í­ÏA¸W’ç®ÜMùÍ=‡•,Ã"ˆ¼QÖÛn—¯ß·ž¦(‚§é€³"bbO%!/êBõ1µõºeWî£Ä‘ÀÚ`bµRg †›Ù¾hÅX½´ Åb—ÊRNÓJ˜„¢°Ã\ÁÎ׫û¹`éq‡éÕ‘5ÿ¨êL\’]ÉøÁݾn95hžq=Çãh½æ Õ|Þ ©Ûòrx¢pl¹žvýS9û6,ŽP9ûn»O|×e}nÚGjŸP`êËT“²D&Bê­w$negg{GÂ2eZ{ð.É+ÈC;ó¿Õ¿¡^a¥†×˜7áMÛ«T”FcÖ•L£qbÜ^¤!ê0Ž}—c’.Jè(L4H/ýYÝvÁdµ¦0Ô9=%¯ óË:¨­{qi 'tÉ-ˆ†Q™ÔyTØü«szÈÎW®ãÝ•¨3l„¯Ümn}¿k›ÑÊɦk/Ï|ðkn‡Ã¿Ì Ö7ûîõÉà·ò¬mVq\´Ï÷|§„0ÑlÒžÏ,§ñ³²ÿ™ ƒí¥øiÁg¨HØœ `áÆgéé2Zè^2±G9|Cu¸ÿøréÍt}MªäæÚb^.¢bžeö†¢<Ë>–’³Ü‰†bøÚÆàžÌ² _ZÒ’êyrxxýŽý P³-ƒà=•@¼8ù÷x1î´Æýÿ¿jýÈÚMÁ˜¢*6HWÈ”ûy¿Ñ†nȰM»N&N”à0ðºÁ«²ºá}H8̓vÌ’ÝÌrhÎ$¬ƒ¯g|zï>:œñ"U®‹eŸÕžŸ±=9y#0¾Uá½@†²ªì‘9`ùj'£ÛkëCd]q…:)¯ô }V¿u •¶-±éêè ýEÚvç`sËò Šò±_Àå°÷ÙÜ“lÒy ï7GªÌ!á(Ø}r™®íJ繌ºÒ>$³ƒû³ó·x#†Wó¶(ÜVqBäêq yôärí Tî> /Filter /FlateDecode /Height 106 /Width 75 /Length 4463 >> stream xœí\{PS×Öß<‚ I 
‚ÂA%!à­<ʉz…)DhÀ¶ñ2ñ1¨á©Ø’q¦É T®m¨ÞÂÔ©””T™Ñ)!ÚÓàXòIÉX©Ñè¨$„ IL$A4|œùFÑ D+Ôß™p²Ï>kµ×c¯µ6vããã`^Ãþï&àµã‡sï8œûxÇáÜÇ;ç>ßÀ3ƒÁÉÉiåÊ•›6mª©©)..þôÓOÿøã…B1Ëù¥R)€Á`Lù«Ý»¨mÎë´ºº‹ÅNs°Ùlžþ`€££ãúõëgBÖÄ ³¹‹E5 P(bcckjj\\\H$Rff&ŸÏ¿{÷.AT*µ±±1''G­V×ÖÖR©Ô+W®ÄÆÆÊd2àááaooo±XnÞ¼éââ²|ùr“Éçåå5Kòl ‡ÅÅÅAAAÏ_ŸR\­­­S~ºººÒÒÒfCž d$ ©Tê£G˜L&F{ùøçžT*Õétmmm………Ïÿ4Kòl`iÁb±çλté’ÕÁ“ÿ‰Dd2¹¢¢bÛ¶m§OŸ~†¥gÆÏ㳆X,žÍàëׯWUU×××3ŒO>%lÀa}}}jjªX,F?Éd2F‹ÅL&ó›o¾y~ð”“p¹Üû÷ïO9ù,ɳ‡555¯ið Æ?èáÂ… ÓÒÒª««ÑO??¿Õ«WWWWGDD”––VWW?3xÊIx<Þ‹&Ÿ-}³|C¯úšçª ‹ŠŠ8NZZÚéÓ§Q;vìEƒ§¼3¥c˜½ m³·0 ÞÞÞ•••t:=33344ôE‘þóàóù}}}:N$±X,›4Ä4¥¥¥ ešƒU*ÕK?éõú;v̆¼ù¿{šÖ*5 ûöíÓjµL&‚ @EE…‡‡z‹ÅJ$Ã0‰Djmm•Éd±Ùl¡Phooïïï_[[Ë`06nÜxðàA†‡‡‡;:: µ³³‹½ví•JÅb±r¹Üh4îß¿¿¤¤†a©TJ$£¢¢Ž?þÙgŸ;wŽD"¹¹¹y{{§¦¦«Ä¿9 †édsÌÿUúF÷øƒáM>…=äñxR©T­VoذÍf×ÖÖ®X±Âd2ݸqÃÝÝ@  y$<Ï`0T*•R©$¾¾¾###†a“ÉÙÜÜŒ ‘Hìîî Ñhd2¹¶¶ÖÞÞ°lÙ2FƒÃá˜Lfcc#•JH$’ÁÁÁðððþþ~6›}öìÙ¡¡!ŸèèèÖÖVwáÂ…ÜÜ\.—[TTô2ç4ˈÅ”A³­pûöí‰GÌàAVd˜˜˜xûöm‰”››[PPàääÄf³e2‡“H$L&AGGÇàà`‹uäÈ"‘¨×ë/^¼ˆÇã=ŠJU(®^½:77W("²xñbƒÁÀb±¤Ré‚ 6oÞìää´eË"‘˜——×ÙÙéïïo2™ºººÂÃõZ­J¥BÍ2*[‚ ¨SŒŒ,**"“Éeee/baZ–æu›Á)çW«Õd2yö“[ápñâÅ{öìñòòêììd±XÅÅÅvvv999†á7n8::Â0ÜÕÕUSSãããk2™&¿õ/¿ü’ÉdÂ0¬Óé AÉÉÉb±ØÓÓsppÐÅÅE,ðÁ åÖ­[kÖ¬‘H$!%%‹Åêt:‰D‚Çã-ËD¸C¥R)ʉ'>úè#¥Ri6›ÙlöKôÐ!??ÿ%ªÕê;;»?üÐÓÓ³££ƒJ¥Þ¿õûÎÎÎ:ŽL&;88À0ìççרØh6›GFFÞ{ï=”«Ç/]ºôÁƒ...£££&“ÉÛÛÛb±(•ÊÝ»wÂ0ìîî~ïÞ½‘‘‘ääd‹ÅÂçóïܹƒÇã;::Ølv{{»Åbéêê ¬¨¨HLL4‘‘‘W¯^ussëëë{ ‡¯×þ]^~2¬Xš˜˜.—[RR€axË–-gΜY°`AOO‡ó÷÷§P(qqqëׯ‡aø“O>)..¦Ñhz½þáÇááá‚àp8tu) ðP(Äb±T*A<o0¼¼¼Z[[ÿú믪ªª‚‚‚€€€èèh #‘H:;;—/_îëë‹Ãá´ZmXXØÍ›7Ÿ;;;22²®®‚ ¥Ri2™ìììˆD"ÀÝݽ±±ññãÇ€°°0ZWWg2™a0==="‘ˆúëžžžøøx sáÂ…¡¡!ooï¾¾>6›}àÀ//¯C‡={6%%eféý9/C«øÇמx<ªÄ‹N Èd2ŸÏ§Ñh­­­—/_öóóëë닉‰Ñh4vvv .„a8..®´´T"‘ üðÃ+W®4›ÍZ­‚ '''.—«P(z{{óòòvìØ±nݺ¸¸8‡ƒÃáêêêöìÙ£Ñh?~¼sçN«Å‚á­ˆi^+¦%C>ŸïááA¡PÌf3‚ ©©©‚dggk4š±±1N×ÔÔäììŒÃá, A­­­K—.ÍÏÏçp8!!!»w··///—J¥¨ä¹\.j±ÊËË-KaaáÑ£G]]]óóó322>þøcÀØØ˜Z­V(eee3~ÅÿxK#‰^i:µZ‘‘c0bbb222 Çãñxƒ!99ùرc---èVxãÆÇŽC¿—––¶´´H¥R‡ÞÅáp²²²ZZZÔj5rp8œ´5X‘!ºJ¥RéÐГÉ,//Çb±0 £¦B"‘DFFâp¸†††¨¨(Ô´œ9s†Éd~õÕW‚$''÷ööšÍf4DÞ¾};Z·ùòË/ét:AZ­6//O$]¼xqÕªUf³ùÛo¿‰DMMMè¯ÝÝÝ$)"""..îUÙ³ÎákÂtL—­ÌÛ?^ç¬x‹´´4´‡bÙ²ehÄÁår)JOOÏ®]»*++-ËöíÛ]]]›šš+V¬ …>>>ééé,‹Ãá Ù¡PÈd2M&“½½ýË7¥¶Å»Uúf߆ö†aE†EEEþùgPPPWW×Â… ³²²~úé§þþ~WW׬¬¬ÔÔT//¯›7o–••‰D¢ÎÎÎÌÌÌÚÚZ³Ù¼ÿþéØ^+^a•N6ßs(Rµ¾åÜZ¼õz½§§çÄ•ŽŽŽíÛ·K¥R¥R Ãp@@€L&#“Ér¹<88øÆ999B¡ðúõë›6m‚a‹Å^¸paÑ¢E]]]¾¾¾Aèt:‰DÂ`0%%%VS÷—.]ÂãñyyyX,M[~ÿý÷%%%t:]«ÕjµZ.—ûþû￈…ÙÊÐVœœÿµí¢°"Ãääd77·‰²I[[›^¯OIIÙºu+‹Åêéé Q«Õx<žF£œ9s†J¥BtëÖ­_ý500ÐØØý?”J¥N§‹ŠŠ‹Åñññeeex<þСCÝóùüôôô‡ŠÅâÄÄÄE‹EFFÎÌtÙØZµF¶ÊÕO6ŽÚ&³„~/**šìB_‰=´¶7K’f(CƒÁ˜˜Èd2CCC;::x<~íÚµd2Y*•2ŒÉ”M”0 NGoÏÈÈX¾|ù–-[Ð|@@ÀØØ€Á`(Šçk©ŽŽŽ!!!3ÐO›Yšg–_~~~TTT{{{\\œJ¥úí·ß’’’žÑ¥)oçñxhÕE­VoÛ¶M.—«ÕjƒñVèá&HG¥ñªï~òí³´«ó?ò¶â-/^œ­×ë ƒÁ@×Rsssyy¹™L&ƒƒƒèþÈd2uww£‰}…BáííÍd2›ššŒF#Zëý["Õù/Ãü˜ì'‘H'‘Hh’÷äÉ“K–,Á`0F£ÑÓÓS¯×ÛÛÛ»¸¸`0˜ÖÖV??¿ÜÜ\ÛÒón•Î}Xï§A‡ññq sðàÁèèè‰rtBBBII •Jurrª¯¯çp8X,öàÁƒeee```€L& Q(•JÕÞÞž——'¶nݪV«ÕjuUUštFC™L¦R©ÐŒsMM ‹ÅrwwŽŽ‰Dmmmt:Ýd2i4š;v »§ï¾ûîÔ©S …‚Ë很ë«ôÎ;$i/ï-u:88Œ-Y²D£ÑlÞ¼¹¡¡axxx¢ Çûûû£éFµZú½\½zÕÝÝ=Ó•”””––¶jÕ* …R\\üf›Àü·4Ö³wîÜ¡ÑhJ¥2)) €*À•+WÜÜÜFFFØlöùóçš››Ñ[$É“'O=zœ0‘˜HHHàóùhoÀd2]¾|™B¡\¿~½¨¨ˆÏç„þþþððp¥R ¸}ûö† ärù† Øl¶D"ùâ‹/öîÝëëë«T*!Š­««ëíí­¯¯!3ëœ&÷Zí)œrðó_fëÕµmÛ¶¸ºº¢}ع¹¹ÐëõZÇãñÂÂÂÖ¯_ Díá×_½w¯,8ØËdÉÎþ÷¡CÿóᇾXì D2L§û?^÷ßÿþK©TÊd²Ã‡ £ÑxïÞ=ôPf]nxØ ‚°—/÷lØàwõêÝþ-—÷"ˆV«ý_±˜óJ»ù¯‡óßãÏ­ÛÒuëÖ‚ƒƒE"QKK š¼w¯ï/¿üB£ÑÐl¢N§khhx#4¿^c6ñ-Á´²ú:.$$dppÐÑÑÑb±Ü½{ aFD&“ B;íµZ-•JEsþûX,¶¢¢"!!A§OŸNô‡‡‡›L¦Ÿþ9==€ZcÔåJ$Ößß,‘HV¬Xáëë{ëÖ-¹\ž““S]]íîîîçççìì APVVÖ‹XxKm© ׂr8£Ñ˜-ˆD·_íàp<››]Dk2uüç?ýðCaD¸”H4ËåvŒöîÛ·oAÃÃÛ´Z#Z@hh¨@ xúôéäÔã”Õ£ÑH&“ÑjÚ¼)“É’’’4Í©S§¦ÉáÛ(Ã7Z™ÉÈÈÐjµ‰‰‰‚øùù=|ø000ͯ…††–””Œ‘Éd…B144400Àår¯\¹"—Ë—-[Æb±~ÿý÷ææf$•J-Kff&ƒÁÈÈÈprrBû™ÏŸ??ù?H¬Y³æøñãK—.EÿƒzX-44€Àb±ƒaíÚµµµµ[·n}Isí›ák5°V3ÎÖm©\.HJJª¨¨ðôôD½ **ª¡¡­½ÑéôÔÔÔÜÜ\WWWwwwTaÖ­[—žž¾ÿþ5kÖ>|xïÞ½jµzttT*•†……988°Ùl@ Õj### Èl6ŽŽ2 
ô ¢³³3…BiooGë …rêÔ©ÜÜܶ¶¶îîn´ ûã?¾ˆ…·QŸÁ, rÖe`±Xh“bxxø‰'UUU‰‰‰h›è®]»0Ì‘#GÊÊÊ\]] ÆÄI5­Vk±XRRRäò‡Ÿ|âSYYYXX¸sçÎÞÞ^´‹O(:::&$$xxx¬\¹²²²2-- 5¹F£1>>^¡Pøûû£]‚íííè‰'.—ûù矣¶@¥RÍ=8%f&̹ÄáÌ0ÿ÷ï8œûxÇáÜÇ;ç>Þq8÷1ÿ9ü?‡FÀ}endstream endobj 823 0 obj << /Filter /FlateDecode /Length 3953 >> stream xÚÕksÛÆñ»~gú…šȽvœ©äÚN3“(MÔt2IfJKÅXiRŠí|èoïîÞp$•¨ÓöƒDXìííûqd£Ùˆ^ž}úŠëgµgžÎ¯Fެ¶µ—zt~9úqÌÙñÏç_~úʨŠ3S8æüú˜§Îu° Q+fÜ÷Ç’_ìÉW%¤JÕŒ7Àß 7ž\ ;~{¬Íx2›WRÊñ X®ñZŒ_Lîpé|çã9ü]Ln“¿Àz8 oÜÂ>þÜ G,2bèóåùÑ»#—lÄGÜñZ85²RÔNøÑÅâèÇŸÙè~9bµônôž@#¡X­$¾x3úîèo%$Ü×ÌíÂвÖÊGYqgkaÄÈ Ys†üÞ"îƒWÇ•0A:Ýí ÂÑòQ(ÚÑ~;”œœWœy3>A!žmS ¬ ƒÉuÍŒëRpzJèñiiu£j£Üã¬nYm­îíÿ´°¨Ýa´¨wµu½EÏ@ò…Í*ÁkËÍÃ×-h.ÓµY?Žê‚ ho’ÉóñzùÿÍo¢QEÞ¡Á¿"!:*lx­Y¢UðS‚À%ZF€ºVÎdíµ@"õ6èk§Í¨Ú†ë. Ø´N>oµÞa$±IÉmm½înò]q“(6©•|à&ßC6ãõƒ| C‹4Âtw:Hœj$ðn‡Í>†`E²K׺,¡·$0`Õ¦lÛ¬çµwâqÌÚxS3e£F;^B²ñÿeѳõòž¬† ûQhzj-z4ahr CÓ Á=Êâ’;È¥züÀµ¥Ÿ›À㬃Rwã;‚Ò£¬ª˜¬™PÝUwD¤‡/ZÖ] Fõ€Œ¬œúÌGƒ;?v2D¢Êo+¥ÝX==®Œ—!V¡^¿ ± ¡²X°v|—"ÙÍ4ÜH©+"Â‡× ¯¹œ… ‚ïY ÞŸÑj÷«ðmƒ¹ì†Ö‹ú¸rÜŽ¿^Þµ”¦ .x=i@L¼õ{E÷,®F &•—%\åkíZÿ^F%tS0LÖÛdR†:ªT’óQ…"׉8ØÅz²@(ÚàšxÊ뇩²@CÕºD•¢š¸ô¹íÆ<âê:éfýÄ+­.{,í"ÿÎåÃðù‰ñ™'ˆºSËô±UT_aš×ܹX}-”=@Þœ¬qgoè{Kˆû:È;^¯(|™n\SòÁõÉüTèEñëäö2|? ß×á5 +h)ŠÄ ÌNñ‚šÀ;{‘è=ÚíÚež‡˜ËaË<0ì*pTQ… [Ÿµ,º ¶ˆ>ÂslQù´iæruý"Ùó%* x!¥djyRèŸU)“ÛH¿]’‚+Ä{›K Pª¦jßb…¯t£  ¢òŸ%†(Ø…nOO 4ºz±ÚØŸI—w>`¬ÖX]RÒh€o#ˆÌ>ÔÁ.xm’ž 9eëFØÍrTªÁA ü뻥í’ØÃôŽlßÖ+š ìœå´šƒ7ª[ÙöbswˆD„e·µ¶æa<XÕãßËÁ¡z‰ÄAû0ÊBÙTŒÓö!lÔTÄ2!×׃šXÜ’`¢®[l•”B—63Ì)@Ëäöt6´´ÔŒbf D:ñ?–ˆûœ\ô¬„¦R¤ô"ºU@ß_§ôkZL*9½òX)ö†hÅ÷0¶ãõwà‡§\OJH@˘ÛI‹«5oÄ'TP2Ù RØ@ ¹Þ)ÇNØájÅ“?ÇÄ–ŸëËÏ‹ÿ—¬ÊN8Ó‡OQHœÕø³¡._;ßà˜Aaÿc§(âK?ÌgøÃvöe¡¦0-Ÿ/'ÒmŠÍe0Jipœ 1³²kP$v²0ä ó1$·ÙPøRêAT­ TtGF™É‚êCg-:ùϨÒ2ö6ÝáÞOîä?ø´6¹œÇ*–›­"DÙ-SÊ|û6¦Ép}ßO“c®Æ'·MǼO$vJÚÊl€HÑÖÍWÔ“_„UcGä&Qz„½àÓh=¥Ëo_~$hÝ)Ké&¼½jоUÚMîÚ‚¡îÉ4º å?þ[„òŸ”ÒÞDð̤DÙ…e ß]XQ%¡îðv¯ïj %Õ^°¥÷óX=Åiå岎%—#†¤¬¥6¼©è¥;*S¡Tú^š~ ž¬Ú¡Þ­¤œ)Èqøž6‹RÏ÷»íU!ã .‚2.¿¯šQ£ lgÛûÝus':ýÊ]‡òá*‚ç”gªïu`¼«±Ž†<BI€þ ?‡‚Q žsjû-i眾߅GŸÁe.Û'᳆¿«hKø}M#65NÂ[ÏÇ…;þ4ü= ·N;é¡kNàò´ˆ%œurx€ # t÷^wÙÆ¯š_³¿Âå,*ö¢ÝÔEäÔ<6 .óýf;ëí!£˜§B¾ý«%¼~ÖÞ* ç í]ÄצQR‘Òv×Í"‹ qòŠ:Ü=‡;ˆúïð÷2[þÓW¶£–ÆÑX©šÒ¦&x,#Cç-ùŒºds-SöŸæ[¯Òj9"ð_£ˆZÄ7ˆVV{;'AÔQ{$ÜÇÉ}Uï«øL”ŒŒu÷œ‰áÅ~1´ÛÿSøxÛ‚)ºæš’&ت5´I£’͓Ӓ€„¨Ñ‘XÛ6¸O"#š$‚æÜe‘"Z›'¾Ñ’é‹f> ®«Çª¶_§Ëí·’šA‚"›Ø7ÀáVG{ƒ)ÈåÚ¬öIX™âÖü—ëp'¤ 7y/îRžsC!PÅ—&º ÿ.¦IϪðRlü’Eê9òÐÜSD<“UTPêDOÓ¤ÁĵŠ]a1Ûôrª^LLRL‘qäûK9]NŒ2¨ƒš”YžÓ]“ þûÖD£tb €ºÚY¿]¶ôc(7û׆bMu֎ãŽâØ‚«ÚÕÆØ[ÌB£ýʦÛmÞ›%“"V‹¶é4+ X9)lð¥ÎPÖI`ÝNSÛŠ+±¦2p ù……äQîà ”d,kžz…™r¬ ñxªšaǧVÌ¥yKa()0»Êàþ°ðB5Þ2"›%l,‘Gv&œÙM°ƒ<½ÊàYÓë¥f…^k“5œ†§$«¹‰¬ÂÐSëPaA5>“2lÔ;«Æ‹I““Àw=¾œSì Hþæ> HðéfGÑTrÒ˜ÇäyÖ ¶îæ[u•—§,É d°Ð”Í^ÆŽ.ã66›är}(<ÚýÂÅ$m_¤·òÌåÍ}9—·K˜7ù•ÏmzLüð¬çzÞŒÎ‰Ž§Àu£ÛÔo‰ñxN':ÁmƒBC€§fRwsÊ$ ÙŒ\öô2o†È@€Ás§v¼œ'Ðô nïå/eh×DØjÓxšìDéP¥ì-JÆ%Cúf `¤bÑ›ä$@‡mÆéøð=’?½‰Œ¨èIª¸§›irŸYƒqšÌƒeôΓ˜š·G¾Gó~Žòö&Öª=ROò&¹PC÷RÇÉlŠQØ8(Å’ŠÛÉí<£Ö1PƒpƯ£¾¤NBÜX¡'…ƒÌm­Öéñ+lÌ‘›ˆð*X‘’Å•›VLb*€¦N \NJœœTÛ†ÞÃÑŽÅ®¨‚ŸgÉ ,QŒZôä64AÚ¹›”aÌ£ÑR‡©‹^Í´Œéì<¦´7û6cjÙŽB¦Fé<0û:´G)InZ‹Ž¢mÈ$|,ÈöC¢G[ÁÿMC]2N‚Múû,WOý'hÚE%ÄáöÅ„|YE 2Dj?Ñ2x®û@G¨iÂf;<œ'ç­Ë6®Ãõ®¯P x¤wÉŸÞNr)ªÔ¿…5‰,ö >+ŒthaFçíæäí»:DŽ{ÛăÞMÚZMÛd)P@û¾]„â1UžÜ¬—á´|¾AE™Î&Ù {”¬Óž$ýê‹õÔ{NT= R Ç3–ȃy÷|>ç­K4¿‘-.«ÉM^O6".'…ú ²”Iîî8pmÀu†’çGµð뛆\­3,w-D[]\nJQhÝž›€ëÕ#t«Mý{•d'Z´Gk¶ ;•ËË(ñ}Æ ]Ö§—J#(Ø!ßM·\Ü…×0ø î^íiÃ_!šAÝãÑ‘Ùu@ñÀÅ׸ÖÍep¿‡…Ox‹{·#|*cÚWe‡Vb¡H(èÌOUÅ¡ð°8ù섦bŒ}úv*A%¶9®£w½­”iüÞ',¦DÞz'ÕšÝçæJUÍ]L²'‰Ô~¿ýë˜I<9Ct¿Hj=›æ§Äz½Úäs÷‘S÷Ÿ²µêΊk dWJ–@ú…Ê(ŒÊò,½¨ŽŠKß©+žj(¢ âv5O‘dh¶Öé×K§é&øtÑý¹¸ª„rÌc-%„mFžË o¦ »ž±ðŒoS­–Xÿ4ç”x‰êªO†•ðÃßbåCˆ®xðtD{Dá‹Båéø/ˆÆÚ¸ù·¢±TA5`ÿ5ÑeRÆ6xØüg¥P90ª"qˆalo¢;tü'Ÿ.Ñüô)~˜ðaÓdzpsõ¬w ´/<ÈÞuwÌÊÇ\¦°Ð:Ï f#³ †„?ò|“ÂÌÅ´m–…šê“Ëö·vf°keu~‚ë‡b÷'¨Ño¡Π"{J(ñ ¤œâ@œ¥¡}>Ðû0pÆ«;¬ éܼ= ÐËÚ·‘ä›mµ C9Øðm7„•Ðâ±È (‰ê³ú—òv @îzãüÌΖ副«D"WÉŒ4õ½Š·m÷vŸé²–8Õ¥ó7Æg‹Ûz›;*˜4]ö  -? 
Ïiësºœ‰1J—­>;9€M´Ãi‡“e3oÆ„9|@-R <徨¶ËYQÇÀªXv:K±¡3NíÇÙÌv²Y–ƒ¿øÈ öØÐzïÐaoÙ…3ÀK'©]íxTÔN!PI+²ÔUîDËpØ,Žm'`“ÙŸa9úUÇv‚Õ ä¢Ò> /Filter /FlateDecode /Height 106 /Width 75 /Length 4285 >> stream xœí\}LS×û?”RÊh+e \”][ДvºáW˜Z3ŒJa²M(2qFB¨ŠÎLÌd.b& £¾1£6¨¸ã¸0ZE ЖýxZ¼â \^[h‹Ö¢å÷ÇÉ—øÓIQ@…/Ÿ¿nïyî¹ç¹Ï9ÏyÎóR‡ááa0­AxÛ˜tÌp8õ1ÃáÔÇ ‡S3N}Go–H$†Ÿ¿zõê÷Þ{O$¡(:00žžÞÚÚ:NÁð¸QYYùìO½^ÿ\S~~þ‹ôeeeeeeÿÚ ¼æóù/vþ˜ I$’ææfÇY,V}}ýìÙ³½¼¼¢££‡††(ŠŠD"ÀêÕ«9•Jåñx(ŠB½nÝ:&“éëë»{÷n???(üŒŒ Øù8‡7û¡ÙlÀ•£T*·nÝj±XúûûøáØÄ`0 å™3g¬V+‹Å‚?Q¤¦¦ ‹Å`0L&†af³™Á`ÀVØÃx0ºôâÅ‹/jFòª7Ÿ½6›ÍkÖ¬Ïð¦ÿn1ý­¶éÏ¡]*‹q÷óó³Z­yyyb±ØÁÁáÆ, AÇ)Š@ J¥‹eãÆ¥¥¥&“ Àf³CBBd2YcccXX´Q™L¦\.ÏÌÌLIIéîîÆ0ŒL&»¸¸…B!ª««-Ë._¾ÜÕÕØTf2™,&&pòäɬ¬,»¦ÂÌ,}g ‘H^ïA;û¡»»{LL̽{÷.\àóù\.W¥RÑéôìì쌌ŒÜÜÜèèhè¿Ðh4‚x{{wvv^½zU$µ··Ûl¶Û·o{yyÑh´ÂÂB.—‹aô;)ŠàààmÛ¶ñx¼¦¦¦èèè‘“ôÙ³gÝÜܸ\nyy¹›››Ozz:ŒÑhtss³Ùl ƒF£×òÞ»w/\ û÷ï‡þµZM&“O:.$éèÑ£‹-Z¶l™J¥ÂqÜf³mܸQ&³X,²÷ß---}}}¡¡¡€ÐÐP'''µZM$+**Akk«Ífkkkc0"‘èĉK–,™;w®P(LHHðööÆq<<<ü•üãqcݾ}{|~°GmmíöfgŠÅb•J¯R©Äb1›ÍŽŒŒLIIùꫯÜÝÝ1 [¾|yjjªD"Q©T(Š …BÀ×_¢(†a#7÷îÝ;ÆÎår_MJ£c,ŸáYè設­}Ö :òøÁƒ_íËO¦ÿnaß‹Áåre2™££#‡Ã‘Ëå6làóù÷ïßÊÊÊ c2™ýýýz½^§Ó=}ú4000::Z.\ …kÖ¬ …111t:]£ÑTVVBó@$q¹Üœœ‡Œa˜Á`ˆ‰‰ÉËËëêêòôô\¹rå8íòŠÅ\.×`0`Æårét:ƒÁÀ0ìÀsæÌ¡L*•ZWW÷ÙgŸuvvîÝ»EQ''§¡¡¡ÒÒÒÇ †âââuëÖA76—ËÍÎή¯¯ONNöõõ-))ill¤Ñh¾¾¾ cÓ¦MÎጠÅbhÄ444,[¶¬­­M¯×oÞ¼Y*•*•ÊÈÈÈû÷ïS©Ô°°0.—+‘H›ššvîÜ9ÁJÿu1ýe8e,ïׯkÆ1 ;}ú´Z­^¿~ý£G´Zmggçï¿ÿ¾cÇA0 ³X,nnn•••Ç»ÓuúÏR;2DQ´´´”B¡ôôô0Œ¦¦&À‚ ÒÓÓGh ƒ‚‚¼½½aŒþÒ¥K0 ÓéRSSáÍ·…é/ÃÿzMó²ýÁ`ܸq#???!!Á××·©©iÍš5jµº¾¾^$566º¸¸¨Tª§OŸÆÅÅÁ`عsçÞ KÏaf–N}Œ•Ô”˜Ñ±zõê·âÞ} LÿYê¸oß¾QšÅb1‘Hܹsgmm-@"‘˜Íæ?þøcÏž=sæÌ™Ô˜ÑDaúËpúkûûaWW————Á`hooG„Íf+ -A$<< “É L8éééáñxµZM¡P¶lÙRQQ¸yóæ|àãã£V«9‹Å:~ü8•J]»vm}}½R©DD 8;;?Û €B¡Ff±X H.—GGGŒîóž€ìËáááÚÚÚQü…v‘côVB²7´^¯˃vÖaZZZGGG?†aIII&“I§Ó!âààÐÑÑs-£¢¢ŒFcFFŒ655íØ±£¤¤D 9r$((¨²²ò·ß~[´hÑ+ί‰ÁÄhšgS~`®Äsa½IÍ 3ºô?0 ¯šÈòŽÀ¾.ýôÓOÁÁÁ¹¹¹‹¥T*×®]+‘H†††¾ÿþûâââ‡._¾\£ÑøøøP©Ôžž2™\RRòÏ?ÿ¼ >^Ž·6KßØÊ´+CÔ`°0™ôÖV}LÌG:ÝÿVW;0¼d‰•År/(( “É‚P©T€V«¥R©8Žs8œšš‰äêêj0/^ ¥´Z-NŸ7ožT* §R©«V­ÊÌÌ …ù74ÅbY­Ö¾¾>"‘888èîîþùçŸ+•J¸…vuu=zT(:;;#òóÏ?ò±¦¿¦±Ÿ×f4q7™L‰D$ÅÅÅ…„„9rÖ¨ÕêÄÄD¦AØr¹¶·÷æL"h42Àh´p8È£G{÷ŠRRR˜L¦V«e³Ù­­­L&Çq‰$‘H¾øâ‹ØØXNW\\ Sá.]ºôñÇ744„†††‡‡k4šÿ¼HÃb±¤RioooLLL~~þüùóׯ_?Jè{ÊÈðµ×­FFFzxx„‡‡«Õj6›­V«aºÀýû÷,XPTTôá‡òù| Ãt:¿¿¿^¯‡&ePPP@@ BÅÇÇ8p€H$ †ÑhLII)((€Žc2rªniiÙ°aŠ¢ÝÝÝ eÙ²e5554­±±188˜L&S©T“ÉT__O&“{{{ËÊÊ^ÆÂÈð hÅñ¼Â¾ ===F#•J¥Ñh°Xg``à§Ÿ~:wîù䓳gϲÙìÀÀ@N×ÛÛÊápt:B¡àñxr¹\ à8~ëÖ­Í›7×××S©T­Vk6›{{{ýýý•JåªU«Ìf3Žã0»511Q©T*•Ê .ŒÂ¿3~kkkiiéîÝ»M&“ÕjÕjµOž<Ü·o_NNŽÅbáp8wïÞþî»ïÑétNNN«V­’J¥[·n½v횣£#Žã‘‘‘×®];tèПþI"‘*++cbb®_¿®×ë9NAAF#“ÉOž<©««sww§R©wïÞ]²d ÇëééquumnnþöÛoýýýoݺ¥ÑhÚÚÚ¸\nUUÕ(nõ7§i&;Söe°3K…B!œ~OŸ>½}ûv\\\II‰‹‹‹··7‘H\²dI]]L±Õét~~~eeep;‘J¥°öW¯×;88øøø¹»»Ã„êššØÚÞÞÞÞÞ><<’œœüÍ7ß`688uüøñ¸¸¸‘¢ íV«¿¿oooPPVo¼2|[°#Ãääd¸I<ëeHLLŒ¢è‘#G’’’º»»›››-ZtõêÕ ôêêj2™Ì`0à_CÀJ-‰DÿðÃd2Í;×b±üúë¯B¡ÐÃÃÃd2q8µZ Eš™™¹páÂÅ‹Ëår:þ&ü4…Iò÷¼CNf,ï©§>f8œú˜ápêc†Ã©ÿ(ÆIèendstream endobj 825 0 obj << /Filter /FlateDecode /Length 4542 >> stream xÚÅ[[o䯕~÷¯hä%-dšf]XEÆp€qb{³pö!V,ìBI­VG}QšÒh&¿~Ï­.dW«ÛÙ Éb±êÔ©sýNu=[ÍêÙ÷_|sýÅ—ß9;SuÕÕš]ßÏ´3•r3ßøª3ÍìúnöÓüúáJÍ÷Û~¸Z£ç«øúßW­™/áÕòê®ÿóËïT3¨i«ÖÂ44„RØç‹Z&ž-¬U³…iªÖ9î£ëNÏû^Í| ï®t;Æðß-Îõ„‹ýþßÀß'þè纩ቿ¡ÏWØy‡ÿzìøŽiô#Uc*Hü­úãb‹+rU{oû+íÃô7pqð1Í QóIY!•Ë»|]z¾N‹ZñtÔü‡Ãç[|µ—Öá[ðÕ7àKÜþ9{^†!7§Å¾ÈŽh=ÞWim þQZãÂâž´f¶P¦jl'‹EÂúÝPÕ [ý¹ÖJz™ˆ¯©SSYª…ªœqoÌŽ2gfY¯¯K3*]ÕÆ¥••^u6£«”Ÿ-4,8t 9&ÌT°eŽŒÆ1À ›æj”*ÞVµv³4Û;Ð×’QŸüý üxü?ð`²q¨!þìÆÅ‘ÎËý“¨wØ«I<_@3.î¼£©ôöüÎÕ•GÕï\)å Ü|žÃP†mÇcàHnìÎp‚ Rö ÓÝ$'ºÆ¿ˆÑÙŒmk•ý½ÃäÓAŸ(s$/M_óåWÙfÒ#·åØþ«7Ö¡:q—/méx·ÊP”²OÅ]j­-lÓØdA/8uJk$Btß~få×ge¶,#mù,šq2j…P©ñ‘yƒìéVþzÙ›O(–ª¬õ—‹Ó¡l/þë\€ úùyaµC›„ñ•õ.xÉbÎ_™Ú`ìW{Ò˜ÖpÆ•üÞ(ªÔU6UúsQ¥ã¨ÒO£Ê)†!öÈ_AS˶aÚ Ù5g7 Ý8†þ(ãI¹"µM3¿'D XÁïâGÁ±c¯ˆPÂýòc)%¤3vœžK²Çþ»I‡ø#zôë!ÉÏ´«—</ÉD¥) rûœ™Äetµ†|ŠÂÙÆxˆ…€^† !á¤K„_”¦"1¾£¤"æjþgøûËFÅ%[ƒ5É\Ì!.áØ›‹£ÍbfÆ ß8>Å 
}OÖÕÝd²8Šé¦ouyL_©bLO¯8¯¯a¯ŠÁl‰9J&ÚcM—(®õ‰¨Ñ&xºQ(ˆ¼Ì»íÇ‚÷±5"8G™p!Ùí¡½A·/âIŒ7Ÿ¡èlåj?–:B(ßyI0j¯2P…†ÇÚæ¸aïú@@"±Üj|õ¢m'› }¤+“-.‹Þf-îw+=ò Û4Gt$Cp9 !¸5È(eÈd»U Àý=U°ppvôÑ’¯YÑ$‚ß… ƒÑ5Ùàä‚4mÍÉßAØHÆ o„3~‡½fDdìÖÂÞ€A÷{ÁôàõùÅ*ø;nYËõGA/ÔøbŠiEýÓ£¥$9“‰Öu³Ìĉ¾k9ìq«‡â y4öûMŽ< BFÈfÛ€˜m·*m]Éþ2 @ ò ?¾‘"ÞO +v’ôù<x ?„(ÕªQpדðK¹ ZÃÎ/ÁvüVªHÓ59ø›M ë °z«ØÜÕ†Áfxÿ–®—#w­ÛªnìXóY žÃ©2.Ø|²—ž E¿ ^E5yºÔgˆÈHÏO$v]ª"ö‚8CÊÏ•ª}4?A(èÝkH£øµü(ï1’)dEÒhýAn¶]Æ}•èp‡H­L´t4é@ÁÐAf¦]ˆvƒ ;µ›*¢Ym»±žt@Ðê…kKTbó'â Aã/‡žÊLë»Ìû[°×à_B©Ù´m@¿ŸâhL/\ºšæ`;uÇí¡ü8<ó×lø:ñ"C¤1+%Òû±ó»]&ºÇA…Æh¿;zy½üÛ —že° £ôü»«ÖJ€a8hÌÙG›HÁkOîˆDt>ô9‹€ÙCžˆý(l YOTãl˜c8ímuÊSÛqô÷F Û`&Èa_¡ (C°>®°Ù© L6~ žÌá®'–»!Á^oþ¿*ø'‰}ËY®`6Q¹ÑDÂNwu\F t ¬ÙÊ·Y£Ì$;[‚±BKe%0F÷lVT(†¸Ý³S•¨x¦L–<ükôõ\ÖÚ”|½]#¡Ær…¬™e Gf$û#©À¹®á‰¡¦eŸ†þjˆ ;ýæ^BFcÛÉrõÉå6¿`¹OÅ„««¼‰Ø5¤x&ŠŸa[Cäû®dù@YÚ®½Ðòµo.»Æ (õúºX]ÐþÄc[EÈJ{CI^Û½‘ã `?Ë:åë^ÑÇê……—åß¶i¦8áe©\Bî8«–X E’º<O ÷¸ïayT°ýõ‰S(£dg›ÍEÝB¦Å…«h3×Tÿ„©bFu°'Q—j­ïcyýAŽ‘YוØGíÒî[*ÙcÛ•-?`†/ùpË?Œâ+øâ÷À!ßóÃ:™÷)DÓT©lrþXž(Yc3>œ«Àu#ÔÊÖ"œDk”læV†¹ÿ”N©HÕÞcîø6S:î¦*ÕvŸç¸[Kf0ëòzɯ÷|›<ËñuÊñ©Ž´£„ÙÊÉŠ[IÎÞLê¥(þ~Cð€œ°Ò¦Ÿº)å’˜«huÑQOöï8•üÄŒóH$?ä‘Ú¸ü1_¸)ê|é9 C‘i:‘‘XÓœVRÕ9ma†ø/e÷¤®³Èó Æême”\ÂMù~ÊõúŽñÖG™ˆ£“ñ{Çšjt%îDFUÓ)E²—ô°Éˆä¦â!xSG%«ó0«V#=ÊU«ÔöC•¶l"tíAÿÛ l„Õ«ÿQ®Sšq±šŽfq !ÁZÅr1÷C¨–-å-!Ô(%ë-’ý²åf: õD²Ž‘ó™ôü÷’è$fEÿ¶c ’V~äÈcÉåuçƒ}ÓàíŸÎû˜2Xê&NŒÒâàݯðoŸx‹øa½‹c² àÃc”`”6éQòÄô%Ʀ•\FŸ óÔ!÷„΀\¹GÎJu‘ªtö.äœ c]H/Ÿ‚M/:®©\­ò"z&ë“ g^bßKyä^*›R"]fyåj4@LçÇ•Ý|œ!– òÕ–AŒc¢ÍÉ ¡K‘g`â Öó"Vé>ðÙØf´'Ù'[}è7øY3ñô{˜Î A ~[ôô‚-ûy<}GŒ¶/,ÈÚצÌÉc/mXŸ®×E¸£ ×[ˆµ³^¿q…äƒÅ9:ÈѨ [ÍàEAoŠóù$äg!·­ºvIJS<²•N¥¸ßOü˜J™ÌÇâùdB›R b @¯bÊBŒ¸W© g`ñQ7ǪURØTŠø,µ˜ía¼hd~VƦòiŸ\£ŸT‹:è¼nêÒ6€Ò1x|»`д¦ƒÎä€+-´‘‹ còY‡Ýcè 9u¢YùÇ*‚xÒ»à]áÓÓ‡váå]zYÏßÄӬ㊨_<cÓ™õû>E8ÔºZG˜WЗÌ490òû]ƶ…Ö]¨ßBÔÃÅ*îžM tl›ýj`àOÈ÷Ëà1öT|íWÁõè.z=¹=ȯž`¦;±›&ºÙè-á²I®~ö@gÁ¿Ã½'™’…ÌMv Úò²ŽaM&›/7¡¹g‚Ž ¶ io§p~XÃ6[cš£´Šº2 ÊG(–U-OƒŸËk\S¦Ö›‚äó )ªãí¸B1 —à 8É¿¤%«QÜÒó…S»cwË6OL@í˜æS>ŒbuqfþÖpfúø¾ Bè v¹»lVS´™œ€)ÂçàjŸÁÁ£X¨¥p|Oz2;vWÔ~ üFݼ¤xÅ@–ÜùIµ>à7ðr<—æ¸G; •Âtz—Ž“=7dªu!Ø™ºÈÖdÀiù }“ Ý_€ì«“¿­CÖ˜Œ×¿9qêÂ;ûïN¬OLlÀ»¦îh–ºîÊhàNŠÍ)æ$jBÁñ ª©|ÓÎLÕ…}ºœJz³nè,–LÉ!‹GØìÅh„ƒÚ…È„î‡`”ÖT1’|œ$ÿäÿDànr€”~sKcˆÏ»Kè©@_¯‡ð[¦eÀµ¤ÓK0EЋ,å+·oòäžKÉØ¸P¶OÝlÒ<¬c%øÇÔ]…¸Ô%øž°w ù/üò«•oýø§‡.¡Zß~ßÀ~zyâ÷„ÞUm«Æ¿'¼Î@T£láÇiruÀe^F…ÚçÀ‡RW'蟧 düù¼ Ã’:° v½2‰Aˆ¯¨®~'€6 ³çV£_°*>H¤0lPM+ö"MLÈ=w|M?9ƒy†Q,®ß^ñ¿Ónendstream endobj 826 0 obj << /BitsPerComponent 8 /ColorSpace /DeviceRGB /DecodeParms << /BitsPerComponent 8 /Colors 3 /Columns 75 /Predictor 10 >> /Filter /FlateDecode /Height 106 /Width 75 /Length 5513 >> stream xœí\{TSWö>B ’` ‚ ’@y•\D4QË`1øêÐÇ€Ú©.‡ÊÌ ®j‹R‡T]d9u4­ÔvJd¥ Œy„‘j0Á&AC"á/!i€øûã¬quu ÁW+þøþ`A÷Þ½Ï>ßÞû;Ìxøð!x©áðk?ÀsÇ´…SÓN}L[8õ1máÔî¸Grr2Ã0<éÒ¥üüü;w^½zU&“=åõ1 ËÏÏ?pàÀÏþvÆ4k›òxù-´³ó›ßÄÆÆšÍf‰tíÚµ xyyÉårA˜Lf^^ÞŠ+bbbAffæÉ“'—,Y¢P(^ýõ‘‘‘ëׯ÷÷÷#ÒØØˆÇ㣣£-‹Ùl>wî\DDÄÐÐ_¹r¥D"yõÕW¯]»–––vìØ±… zxx A4D"IIIéîî¶Ùlt:ÝÁÁ¡££ƒÅbÉårF³hÑ" …ž””4ž O»1 £P(vߦ×ë ÆÓÜ艟ÁŽ“““‰D"‹Åòññùá‡<== Æo¼‘––f³Ùœ¿ÿþûÄÄÄ¢¢"F¡Pf³ù?ÿùOXXNˆÅbWW×€J¥š?¾T*ŽŽîììüøããââ,XÀ`0ˆD¢Z­ö÷÷7›Í(ŠfddˆD""‘ˆ Hjjjrr2•Jåp8R©”F£ çââB¡PòòòÆ3a²>„ŸÓ£O Ã0Àÿ~r“técá)¯iÇÂmÛ¶õôôÌ™3Çf³ÚÚÚbccårùÆKKK‡‡‡½½½õz½£££F£A„Çã­]»ö‰Ÿæy`z?œúx–>=ÿzx–¼´ªªÊÑѱ··7???$$D¯×p8½^ôèÑuëÖa¶|ùrFóÃ?X­ÖŽŽ€ÍfKJJ:þüüùó{{{1 [³fÍ[ÜcazN}؉ÒmÛ¶‰DƒÁàááA¡PúúúX,–ÙlˆD¢ãÇ=z4++«©©  Óéááá555f³™Åb¡(º~ýúòòò5kÖ¡(º`Á‚¨¨¨ï¿ÿþÕW_•J¥<ÈÌÌ,//§Óér¹œL& ¢P(6lØpíÚ5…B‘Éd}}}ð©ôz}DD„L&c±XA&Ø¢^þ(µãCww÷Õ«WÏš5kÆ åå僃ƒ6›Íd29::r8À¥K—BCC ‹µ˜HœG$ö~úé§øÃAWW—ÙlÎÈÈ0›ÍJ¥rûöíÉÉÉ[·níëëCQÀårÙl¶P(ŒŒŒ¶oßãB p¹\77·{÷î)•ʨ¨¨ƒòx¼Y³f‰D¢ììlµZ½cÇŽO>ù„F£=ÖöLð<8]Øñ¡¿¿DD„X,^´hQll,“É<~ü8•Jœ;wn__Ÿ££#™L¦P(0ï³Ù즦&¡PøÊ+¯¨T*Xõ8;;cæáá!—ËW®\©×ëá-d2,Ðp8œT*ƒwäóù¹¹¹<¯¢¢bïÞ½‰A±X K'­V›’’B$! 
„öÖAøWáê¢Ê[½*ÞêUå­^s”¢Ü)Ô¢þOC Æ†ÄØ(…âtHœNı>$ÖO!ëCbÝë&±>A ÖMbݧ&qz ø0‰ ñaõMÔ?…@™DA}õsñ˜Ã.þpøÃÅ޾¸ôe‚@o]zëˆd—H>"R¬îíG‰¢ 1@Œ'aB8ém¨XííA$é "ƒÈBE(—D¹ ¢ƒèB@Û$Ú¾ˆ@%‰  Ÿfñé)tÉ¢Ë]²è’1²Œ†c¢ ’‹DrAœ‰ÓSø´ˆO ó0 y˜$y˜„,K’,ËY›y˜“Ds9 ¹œ$¹œc¢`*²FýDDÑ„è ú—ÄÂ@¨× á_AÄU]ÞSSñžš*ï©™# ˆ"ÔçZ9E@9®•S´åJXâ>”½ý Ñàu®¦S¼Îµo©ò™Š7ÈTyƒÌŸr+X犬s3‘™Š@o¹6œDÀ\¦ÌcœùKÜ#°·YJWŽXgžÿb`4°V0E ‚X+˜"01¿>E >˜_Ÿ"A̯䯋äÆO" 3ßùä"ùä«Õ{ûQcÆyŠ€O™“.ÈÉ—XáÝÛðóÉ%Vg÷öùüÁœtAƹHÆyŠ@Œ1'=E@Öo§D²K$ž¨ñiŸ½}>‘@¤'ˆ " Q@”'ˆ ¢ Ñ@4!ˆñá D¹¯3_‘_¯’_å˜ÅõY+˜" KR]‰ å²(w ]²è2AÀ§ÌóOð)óü'·YÆmƸe5¡ÆghööùÆ~–±?AQ鼂 Q¬YTÔ,ªÔ,¦h˪ÆI⃹œŠ\N•\NE¦JfŠ€¶ÌÔœD >˜Ë™"ÌÔœCTÌcÌöTärªär¦D!+4xWE•wUT¼ß¡Êû–O2ù–¸¾Èû*ÞÞPåí g Dþ`Å{Š Æ„0ºp›" W±Žy½Ë¼Þ1kw™µ§ô–óúá ü+ˆX5ïR5Ÿ"  ×—“hËèeF%W )QÈõ¥cmè²6tÌü]fþ)ñÁy}†¨ð˜Îɳ%sôS|Êý¯3Gÿ2Ú²påX+è¨t©tTºT:*]*ï.@±"ÑQ‘èR‘訚w©š÷ø|åÞD‡×Y‘è¨&t©&tÔ ºÔ :j]j9ú.9úb /Ì|wäµ»äµ;²Ö]²ÖSz˼öõ¹o¼#¯Ý%¯Ý±ç»ËžïŽÌw—Ì÷ að:wtOÐ…¹ñŽÌw—Ì÷Itaö¼c7v—ÝØ»±»ìÆîÈÀwÉÀ÷øÖÞ~p̸ÌÑ÷øÖÞ~”ÀìÀ<G޾K޾cgy—åYü.Yü)Ê1‹?E@[fñ§¨Ï,þõ]Õ? 2ðC2ð'Dþ¢€(BTUˆ¢=A CaB8Q.f†dŽ u$Ï#ÏD¢h?ˆè ºÄ áð׆)"{QAÔ/!  ×——Ð–ë ž®úðIF%×ÇÚà²6üDD‘ž 2ˆ,DQ…h ÚÄ1¾„0&„ƒÐø8$¼ÎUÌ‘=wÉž;²ç.ÙsGæÛ%óíÈ'»ä“•U—ʪ#[ì’-vÔ+]ꕎz¥K½rŠ@±¢éÈZ»d­§D³ÖS¼Î¬µ#ãì’qvä“]òÉ'ˆS擵F—Z£#ë’Çudi]²´/#  3½¯"*¼Î ž#[ì’-ž" ³ÅŽ\°K.Ø‘?uÉŸNðGU/ÌÒ:²´.YZG–Ö%K;EÀëÌã:ò§.ùÓ“(Ǭ#ë’ƒ!:|ʧŽ9X—¬#Ãê’a"0ó3;C ô…û“§ô–YÚ“ÄßvâȻ䂧x¹`G×%;E`dóÙqìrªÄÇø¨ÚVÑöêWQ¿BÛªÚ º08E@—&º4Œì&#{‚€rÌ9NЖ9Ç)cŸ9Çßu³·% >ó–x¸ÉsÀsDQž *ˆ*DÑž :ˆþ1@Œ'¡º8œˆ«i•ÕO ïíH@}®¦xB·Éºs”ãj:E@9®¦xþ¶Éó· O×6yº¶áÉØ&OÆNºp«±ž½·#p}™"஫G•Õ£bm¨²6TÌüUfþ—ˆu®/«G•ÕcŠÀLÇÕcŠ€ú¬zU¬UVŸˆ@Œqš"…ºŠ½ˆ@¬³‚WQÁ«RÁ›"§¬ñUT½ªT½**VU*VSzËzT;öö£FkZ5­*5­ŠjS•jÓI¼ÎzÔXg=ª¢U¥UQmªRmª¨%U©%UÔ’ªÔ’*ê@Uê@•‘*•‘ŠªF•ªFEÍ¢JÍ¢¢VP¥VPQ+¨R+¨ÈâWÉâWdñ«dñO"0[ºÌ–¼ÎZA]L‘PŸ yþ&yþ)"HOD¢hBt]ˆbªÄÇø`fsŠ€úÌ}ä>‡ä>rŽCrŽSææ §(ÇŒâ¯3×7É’Éȱ ɱMð:³p9¶!9¶ÛÛ¯sýË(Ç}ôùÂ!ùÂhoÿê3+9C ¨?Dý ê3oyY›¹Ï¬ä¬äIt¢Ë±ÏÌæIÔ¢¾A[m'(Ç ë@†uH†u :$:?’?ýy‡?˜ƒ"à1fiÜ#줲“z :$:EÀ¼Ï>‰@$뽺csYç^A2½&™^CVÁ$«àxÆÊå«—¿‚ˆë‹<Ûðìl“ggž{mòÜkÃs¯Mž{]>i ÚDÑ…0ö8aðgÜ)"ƒÈBÀ§¬zMðŸmr<¹äòäÒc]Ìñ”ËS6Ž9ÙeNvÌÉ.s²£ræR9›"§œ×§x3ÿ¯sæwÌÉ.s²c¶tΖý=Ζ{ûQ"H_BdùKˆ¢QAÔ/!ˆö%„0 ê'Q‚€¶I´MÐ%‰. Kâ±µÞÛçÄøº$Ñ%ÃëY¼žáõ,^?…€rY”;…À¨Ì2*O!cYbì1–%Æ&DP–:…p²zqZ$N'Da‘(,жˆ¶”+¢Ü·ÈŒ[ mm t)¢Ëù£Èü1A@Û¢ÚÚVÑv‚€úUÔ?…ÀTeªˆ *TUâ£">ªÄGE|T‰ ñQ%>*Ô¯¢þõ«¨ß mm'èÒD— Ê5Q®A¹&Ê5(×D¹5ñXGo»ô¶#N»ÄétñG‡?ºøc‚€Çºxì^ïâõ ã¥ËxyQÙeT(‡üzdz‘]žìxò±Ë“Ï$vy&qù¤‚¨O D¢ƒèOè-ç <¥×å)½9¢€(BÀœ?^FÀ뜥ð_—çø:žÁëò ÞÑcœé2f©,³Ô]8KeÌYf‡ŒQ™eTæXÁÛÛ}Ñ‘= ím'ôvHoz;¤·}Ú—c£rȨ<& £ÒdT¾ˆ€r&ÊMPÎD9è4•¯›xÝ1æ\Æœ£·.½ З¾LˆB—(tô™Í^•,’•,Ë?#Q„h ÚDÑŸ „öÖAøãD‚ǘ§+ÈÓÉÓ•¸+|o?JÀ̱äÇŠäÇ òcEòc%î?ÝÛcŽ­ ?V$?VÛ*’Û*ÈKÉK䔊䔦ø”Ùž“è’E— ¯3ctQ sJ9¥"9¥wÊîíH@9æ” òAEòA¹œ"¹œ‚kþ"×ü×üE®ùÏ!*|Êû†‚,K‘,KA–¥H–eŠ€×™‡™"0^xÿRpÿRäþeŠÀLÇûóÚ óX“ylŠ@1óÝ06™ æÂ&saÃÔdjûMÆþËøCg‡ŠÞ2;E Ƙ!iÈ4ÉL˜a˜!iÈ]4É]Lðs-îèÞÛ}aV¡!oÙ$oÙwh’w˜"Ð[f§ô–Ùb`2½ˆÀ¼ÎJC†¤I†dŠ€.Ì¡LÐeˆ.^gfŠÀØgfŠÀìÀÈí1Éör9&¹œsˆ ñØ€úCÔÐ…™‰Â0LFƒáL™w8‰@oMzkð)¯- WŽ&WŽSÆ-¯-‰}°CöÁÎ DÂ@ØãD·CöÁ¾ŽÈ òDQ„€O»ø4ÌCöŸì?²ÿtŽ€.]uqc>âÓ ñ؀džxl‚@$‰ä º Ñe‚€rC”ðú¯At]ø£¨?„“¨ðg:ì²wô,êsìŒì"#»Ä½_{ûQgÊU0¢ŠŒ¨‚UdDŒ¨"#ªàN¼ÈxÁx‘;ñ3y{;è-ïÕ îÕ‹Ü«܉¹/¸/r'>EÀ§¼W/¸/r'^p[äþ¶àþ¶Èým‰»öv ÚšhkÐ…w%îºÙÛ‘€ÇLå=áItqÕ倨¸7­ro:E$é ¢€(B ãKaB8ñi¼Ÿ«r?Wq·Våní$}áß ïçä?~ÏsÈïyüŽæßÑ<‰(ˆ1^MˆB^uŒ†.£aŠH ’D¢€Ð3=&„“HèK’¾$ô…ù)¢‚¨B4MˆbjŠ€Çx§5E`T6•cîbŠÀüÁ»µŽ;­.wZ÷@]î:îºÜuÜu¹š! 1Æû—Žú\—úÜâ=PÇN—;œÂÑÞáœD ·¬zMˆÞátT½ºT½®P†\¡ \¡ ¹B™" ˆ"DQ…h šz‹½=Cöö,Ÿ4DŒuÙ—³|’Ad! 
ˆ"D!}‰÷êºÛÅ…&Qh¸Ï6¹Ï6Üg›ÜgbÌ$Æ wÑ&wÑŽ(t‰B‡¶.Ú:®`]®`§œ¯`õ9—úœã×å÷$b€ŸÆ;>—;¾"^ãº\ã:®q]®q÷•.÷•ŽûJ—ûÊ)ñ‘%>2zËûJÇ]£Ë]£ãÊÑåÊÑqÅærÅæ¸Úr¹Úr\)¹\)9®a\®a¦øƒ{Ð9i—œ´#Ÿì’Ovä‚]rÁŽk—kü~Ãßoøý†!¿ß°|‚Hæ5 ~¿aÈï7œE ’¹¾x|2voGÃ~)K1Æöv$*ˆJ"\}îíG‰¢ a ¤/g:äL'áaUßÛ‘@oMzkè­Ioʹ(çè­Koß‚|²å˜¹ÚÛD¢hB8'‘pŒ$ÇH8F’cd#Ë1މUØÛ‘È ²ø–ªß‚¾ànÞ 6É_> ™«½‰¢QAT¡â½·#1@ gJŸÖ¸šîíH8ñX÷p.¬ñ~o¢Ã]üÑ¡ çÂïÄ÷v t¢‹Áœa*æ*óÇm]´uøƒsÐáîÕZ¬áìí@„ëd“÷:Þëdò^'Ã{LÞë4G8'2Î&ïu2üлɝ¸[û‚÷v$ „‘ˆêwQ¿nòÛèË'½1#°·%ˆ&Ä1H„û¹½ˆ¨‹ü÷òIQ¿„è :‰‚3-r¦½-ÒÛ‚ó(r^/âõ‚3-r¦ßRå[Ôo¢þQ@!З&}ièK“¾tô…3ÿóàÚ€_@5ùÔåcè1„ŒJÃxáü1EÌ03ß{ûQ"ƒÈODb€BémTNîM—O„“0œ½ŽßŽ3ùí8ÃoÇ™üvœãÈ\~lŽ( ÊDÑ„pN"dò\~“kùç‘ä<µƒË/n9~joGgšåLÃU°ËoÙ8~ïÄå÷N¿"âò+"Ž_qùåœi“3 #Êå9>ÇïYìíH@Û.ÚNðG9®•{;„#ÜY¸Ü‹9~kÀå·¿$àòKŽ_pù%å“"‘8Æc„ÕÃåÍùŽ·Þ»¼õþ$"Tð\Þ?G /&}1(‡Êˆãíê.oWw¼ÝåÍèŽ÷ž»¼÷üòIض·‘qYÎ#ÎAònÇ.y—]òŽ]ò.»ä;Ë]v–{‹×ë{ûQ¢ƒè$ÂõºË»~—Oð-ô)Þìò6`Çt]Þ ;GTⱌÞfém¨H¸¼A×ñÆX—7Æ^>©èm•Þ†¬ÓÞDÃ1¸zàͤ.o&]>qN¢ã<8óã}Ž.ïst¼GÑå=ŠŽw º¼qùßbò-¡nêòV@Çûú\Þ×çx/œË{á–O Îï0sy‡™ãÝa.ïsüüÞ~”08S¼ùÉåÍOŽ·ò¸¼•ÇñV—·ò,Ÿ9x=&ïí¹|¯ L® ðÞ—÷ö,ŸàL9*-æ‚÷v$à^õ®úL®úfˆŽ¾téK¼¦“7·8ÞËâò^–Ë'çÁëì»pÙwáØUáÜU‘ßãuò­ˆû9èÖDÇ1ºc‚À·tù–ûÞÞÚ¸ßÿqkGßbò-Žc8ŽŸñ¾µ%ˆ&DÑId#Ë1îGÔ­ á$*úR¥/gÊJaDÝÚwD¼b»µ#ÑAtÇ(rŒ’Ad!„“¨#uhø–&ß2AT•DGo»ô¶ã]ŸrTÆç´níHÀCüa8SÎAS‡skG"H©E"Éy¤¢ 1@ }áx‰ÕÈ[;8SÎñÌ·öÑqÜñ¿~’Ad! ˆ"Ä1ž „‘( н­ÒÛ8rgùòI‡?ºø#ŽÛ.ãvŠpNbà<†œGû]Æ~Ü{kGº8t‰×R·ö£DÑ…pN"^]ðZjýç1ä<â<¦W[†yŒ{X×Oð-¼†1ŒJ“Q9ETUaŽ«—«9é[;)E"%!:ˆN"^¸\8Æ¾ËØw\¸\8Vu—Uݱªsâú <ÖÄcm€þàµC¬%ÝÚDJ!·ukG¢ƒèOÄ ®O“\Ÿ¦øN•[; ÇhrŒàÄ·j¬Ÿà[º|Ë}ÕëÖÄÀ· ù– ñ؀džxì¾rvkÂГ¾Lè­Io^wñºã˜aRÜ wk?Jt]!ç‘j$Rb€$ÂuÐÞŽú’¥/ú’¥/a¶L¬ò¬Ÿ ·û9ìw¸µ#ó(zÂIÔ‰ŠñR¡mm§á$Ò{$Ò» D#®“÷v$*ˆú1@ çÁù´b>­2ŸÖ°SåÖŽ„0 çÁ9¹†ìù­ |K“oéè gí¸¿ðÖDœµ«ÌÚ³v•Y»bÖ®2k×Å¿µaˆB“(4ôŤ/Žóp9Ç1Çh˜q›Ì¸qâ­ˆ”#‘ ~‡Çºx¬Ãc]<ÖÃ[šoíHdrVAT!p\ q•ä>*˜wØÛD¢€($µöÞŽ„0çÁ™n„ZÁ­ | WylÈ<6Âæ[;8ç ¸îÖŽΔsÐ Ñq¦¼ú1«°·#QAT!ˆ!ΔW°Ÿò)¬õ“¢w'IîN–O*ˆ*DÑH |Ëo8Ó!g:pCÎ#^ks—Úò‰õHX‚0¢‹ã<0kç¸÷ëÖŽDQ„h šÂH„<ÿÞ~yþ½ ô¥H_*ˆªΣÊy4ø´‰O¾¥É·„œAæ›ósÎðGä8KííHàMa ìq"Ìc™{®ÖOˆ!„ƒpadgîÛZ??†øÃp “cúbÒ—PIÜÛqGÕ­‰ " Ñ@4!ˆA"á<8;œDàL9”xý±·‘á,þÈ8Ç *‰Y*‰9>‡sk?J8Drƒ¶Üûµ~Ò@4!š¹÷ký¤€(áÎ3sßÖ$ÑAô'ˆb(ðGx«Güåó[;gʵ!þÂö­ˆ¸z4Y=Z¼>ÝÛè ×—øËÖ·v$ÐÛ.½=…€¶\Z¼ßÛ‘€?†øc`Dqm˜! 
ßbò-†c˜㘀?Lü1A{‚@$›D²£/.}q¨ï¢þÞ"w‘;æS¾0gÜe¹š"bŒ ‰±âSGë'ÂI„œÁÞ¾#â/KÞÚˆkƒÉÚ0EEˆ¢=NdœG–óˆ3?÷9.Ÿ|K‘o)ðX…¬BæÄõøƒ÷ wííH8'Ñð-¼o°x½·#oÁ•tÁN‘;œ“ˆÐ—"÷@÷@Eî–Oð-M¿åÛÛ‘H ’èK—¾tô¥K_Ž1äÇr áJˆ8¯ìQ,²G±Äß]¹µöþ"ÊúIQ„Àyd9p]¸·Q@`dܹ;™#„“óXás'ë'D~œÙÂ_E˜$:ñXE_ªô¥6MaBÀ§U|Úð-M¾¥¡/ûã–oð_> 8*K¼ÚÚÛp(ç¢\·|ÿúú zëÒ[Çyà:¨`'d‘—Oª¾·ï\[¹¶,¸r,råXzx×Í­ˆp¿_ø›õË'qvè2;Ä7ùÞÚˆ1Æ·½.Ÿ„û¨½ ô…kTÇêÁw—®Ÿ ·]z®>÷ö£„]މŸñéÀ1P}+†o1ù÷ð ˆõ“ " oÑóp„û£„cD¹Œ(ìR+²KíòIöHdù–‚cpNö¸CuoG"HBEˆba ìq¢á[°­¦è±½ˆ°Îíí@„½ •¿§µ~ÒAt!9Ó†óhr çÑä<ΣÉyt|K—oéø–.ßò{ûŽˆo(»µ® ÷v$ù–p¥´·ïˆøK·v â1øÎ®õ“¢×0{; Ç ¶Ø‡ReÊ哎3ír¦ßBm㻲ní@„ªF•"—OB–eoß=æØöö1âN·½ˆp'^ù6œå“0óW¾¹e1…jA—OÂ\¸·oÄ{û?oõ~â^7~¦·t¹Ú¹¬™ÿ÷ŸÞþÇÛÿþö—úïÿù¯Þþúß–}{¿ûÿ?üõý¶þ0ïÿûö·oëN»åñ˜¿{ûÇoéío.ÿû—oiAßþË·¶î )y}Ùø¿µõÅ¡Kór¥ñë·¾îÂ[š¥.ͶüÑÒ¼\Ù_škii^.Â~ý6Þ—)ji^îÇ.͵²6ÓÒ\/³Ê’ýZŽ<ÖûþÕú~iZZ’Ùksù"«Ë×#¯Ö¾lLZ¿wm®sòzVËix^>^Ïy,Íõýe×]šë†‚¥iK3½¿/‹SÉcÉ:-í² Ï¥}m·%ø–vËkÛ–,í±=¥õ‡/Ë’µJk»,¡¸´ózü´^””-7²´×¥niõxyt–¶¯|®›ëº»´Ç¦€/±wi_n€W ÖýK»lø’YÚmÁ—®KÛ6|¹B¿´/å"CYß̲´×ŸZÚ«kÓe ,B”õ}(KÛW%–öÚß¶N£kÛÖöºIì®=V1n|_Õ(k}içUŽò¾IºÄP¯wßß×dÖz~ëñ¶k]Ûëñ¶ÇZÛëñÆz‰»¶×óÝ^0¼¶Wÿ.¯¿¼o=þòâ½Í¾oy=Õû–Álþó•ß^A±ús³·]±žßöòÚ^øõÈ«ym—]U¯õQ›íxukû6,|YñÝ7$/í¼µë60|ÓcÝB´oíÿRhßâÏmmçmlØâæ_·Î¶ÙmëÅÝ,ù¾]¶ñ±ó¥oãÃ6ÿ¬—ŒÛù­þ]Þëùmíºßâ™óµ¿ËøX¦°«ÿmñ×ö'­I™ÛŒ“òv¼ëŒ“Ê×gÙÛÚ>fœmsßmÆIkfÿ6ãlên3β¨|Ì8i}ôú6ã¤ëyï3Î5ŒögF?fœõ'K?fœë9ï3N^Ò]3NÙ¾Í8ë}÷ÝŒ³¾!ónƩ׼Ï8mŸQ®3NÛúq›qÚ·§_g°}ÆYÇânÆé×ãï3Î>î3Îúº»gl{›qÆõxûŒ³Vîfœ±d>fœ¥Ýïfœ­šð1ãä± }ÆÉWö'ëŒwqò:p?fœå|ÛÝŒ³ôgÜÍ8¹o½Ï8yuÓÇŒ³¯ûŒ³ø¯ßÍ8¹_gÌ댓ÛÁûŒ“׉ácÆÙê±w3N»žÏ>ãÔe'ÆÝŒS¯ýÛgœuWôÝŒSöê:ã”åÚënÆ)ÛùÞfœ¼‚ÛŒ“·è6ãäMïÛŒ³þFëÝŒ“¶!w›qÞ·þí3ζ7ëcÆIë³H3N²Mÿ}ƹ´ßëÝŒ³öcÆIýzü댓Ú6l÷'­×x3Nªqƹ ƒÕ_ûŒ³íöÿ˜q.W8«¾ÛU)ãèŠê‚}\U½ÿæUÕŸ¿–*-–qÛ%Ò?n8–K;{ûÒ…eÚYË!··?ˆšÞÃmÌÞ¾#Ê[ööÑã6ö½}Gx gº·?ˆ–,œéÞ¾#Bÿ?üñAüÕ/WÿßþŒÇÿô—ËÜÿ”ßÒÛ/ÿü¶9ùãÿ¶viïk¾¢¬?óüËß~—ßÿöË¿|û¿¬?øë±½Íòã¯Ë]ß·ý]ùë²=#õñ×í‘¿nÛïn|üu䯭Ãkã¿n©ÀköÈ_×wxÍo½Ò¾¾†ãû -ÖÖ‹áex‘–YúÚ¾'lM˜|[ûŽ(ïëƒL7âÚ¾'– géŽØÚ÷D]ÜõƒØÚ÷D__/ýAlí{bëÝñÑÛ‰N¿Ç;VoþE} þ¯}—›£õ¯ó#ík>z{‰Ìò× ½í×ÿämà>þÇu]ùב÷ø÷µ²¸ÇÿØÖÁ»Æ>ƒ÷:Û]f?»ÛtþëÞ.Ëû`—uí&lßømÝÊfë§ïÛ/ª‡ÖuÝúô$ZËö0ÕûºÃéÔÿü_ÿôç|r÷7qw¨%–©ì2«\®à.ÿ·¿ûÓßmǸÜþîß½ýã?¿ý‡·òw¿ÿ÷ù÷ÿöËß,‡\“e=.—%fz=®Û•îrA°Üj”º]éÖë½Þ²x,WVul÷²¥mWºµn·×Õ¹Ô´]Ø—¶]é.(Ë•Né×{ëk"¢ôë½uÙ®dK¿Ý[¯W2¥_¯tm»3(ãz¥Û®Ç×+ݵúº´¯Wº×»bïû•UY·],WRë•æ2s¬WV×;bÛ•îö4î¥íÛ•îå‚ðÚÞ®tßÇ•÷íJ÷½o÷âÅ·+ÝË:W–ˆ¬ïÛ•î{Þ®d—p¹ÒzO[ÿëûz¥›}}_í¯·+·íJ³¦íõŽ>¶+Çš¶Ëyß®äkÚ^ñåmëÍi ¯º]éÖ¼ý`·—íN æíg—wdÖµ½/.Ý\ïõ—%v)í]ÚÛ÷•õJ7_.Àmko×E~½R¬u½ÒÍ~ÍUÔš·¸öí^¹ÖõJwi¯©¥Z×+Ý¥ÝÖómë•îÒ®[{{µÍ¥½ÞéÕ¶ýH¦ù¦_më•îʯýéë•îÚ^í׫2[Ÿ¶_ÚÛ‹Ä·w/m_/¶w _Úc½ò]Ú¾žÿå¸ný]õ¬—[à´õw½r®—x²­¿Û÷ÛV€^Þ»ž¯m/#ñ÷->«mYÉK{Ó÷zM¸…ÕÒÞ¨ôtÕÓÇ®G]ãÃ}×£-|{ß^º¦­ímK«§-—ÑÞmÓ#oÇo—xZôXn¿ëÚÞÊòž·óoi{ÍËöfÖ_oW¨Ë›I—ómy++,7\«}¹ïk<­¹¢¶ÜBÚÚ^sWmKZ,í²ÚK^õXâoýþ²m¾´·þ\â§oo·;ã[ŠKlþÆÑàÖ¾e‹¯³×Á'ÛJµ#cd9FÅ1ª£âUŽq_9»µa=&½=",즼µ#1@à<<uøÔŧŽÞºôösñØŸò %>éwkGsõا‰výÞÚD¢‚¨Bt]a$RŽDÊB4Mˆbá ħÇD†O³ø4^Ãt¹†é¸rä[‰×O K]2táuÐ110nùçúI‘„¨ ô[މ¢=A C(4Œ“ñbá=W·v$ ˆ"DÑ…pþDs&c.>]{kG¢‚¨B4íq"ä-#dù¤@—"ºô¶Ho ΃wI†ìŸÐ]?Á™=ÓˆŠÞò®`Š€¶¼+0ܘÜLè ï Y'þÞÉú ”ã]᚟Ï#¯Ÿ@[^Ñ2Š&Å)½mÚÛ3ŒlæØfˆŸþîÞeו]YÓ믧˜Ís1üŠÉ¯dÊ]cÍÏTû'h×üç޾p½îX¯óÙéû'/\¯;Vã.«qGÎ% çˆÂ¹Dá+z—½ceà²28‰€Ä¸_"0²¹2ðt«äVÎ$†•ÁÌ7ëoåLLSˆ$îïÜÊ™è ºh‡I;¿âò+i{”Ï'*ˆ*zëÒÛ´¿=ʉÈ#D „°+y§u”31@Œ_DÂAhoDH™Õ£œ H¬ˆÄ ZZ¤¥--ÒÒ‚–iiEK«´´B¦Udº@Ló× ½mÒÛ”‘˜¼÷ºH@sM4סý.ÚïøŽ.ߑ֖“ï^Ø?Á¯ ù•SH}ˆÔú2´/°ä!–¼@ÀÖ‡Øúcb–LÌ"l}Š­O´Q뉬ה¬×ÌϹ•3a Lˆ$Rç(g-íÒÒ”]™’}›È¾MɾÍ[ç›áöO„Èôë"±<æøþ¹ý“¢¿@@¦Cdº@@ê•o# [Žý%zá¸Íïô»•3Q@! —)zY  z‡†µ6߸;Uÿaè-×Ú'˜‰ÄŒ“ñbð0&Æ uîò»oåó èÅT/g°u[whŽ{ †½G“½GÃÞƒï|Ü?Á¨t• zëÒÛòp‘GÀ C¬0`…!V°Â+ ´”{ 3ˆ¾ýïq-gb‚˜/„pÚÒÇD€ˆç‰¼Kê²KZ" "òX Ð[î´:vZ]vZ;-¾yô4¢€(/ÄøVXÅ O! 
—*zi)÷„'z©/D£¡Éh8…€n›è¶aDqÿÒs¦è(gšãþ¥cÿÒeÿÒs.é(ߣ)£ o9Å[âŒÀ”3Û'"HäˆÑ”ˆÎL9g0qÎ`Ê9ƒ‰sSΜE@ô–8‰0å$ÂQ@ˆn+$VEb9b4%b´D ·U{ íÓ­ ½¥w8‰€Äèa&¼Ãï°D@¦ô#›ï‹Þ?i šhclKF£±‹)±‹‰¸Ã”¸Ã9ĀĆHl€} ±ÄÉDüƒÏû\$ Æ?&"S"Å)ʼn¸Ã”¸ÃDnmJní$ò`Üa"î0%îŸnz+gò`ÜáÂ`AŒLÄ ø~õýHŒ1ƒ%2eTa‰€Ô3˜ÈhNÉhNDø´×í‡Ä˜\" Sæ+g>p”Ÿ%Ð[—Þ.#KÆ-c+DÀ[†xËRgÜaæGù|Rg–x"<%lùôàQÎDQ…h šDb‚˜B8"@ÄóD^¯›¬× ëu“õºå3ŠGù)gŒ¦œ1š84å|Б=Œ‹‡Á£)gŒÎ"‘ljѬy0æx1AL! „} y0nû!c‰è º„pÚÒ$JÉD)/è-Ç~ æÈ§šïŸ ¥Œ(ö·!ûÛÈçèr&ÐîoûÛým úýZ" SîMQ§¨S`_ɧ¼ïŸ /Œ:Ctô–»×“HŒ±­È÷+ò³¤Îð½ðÀÑÀÜ|`²¿ ì^ùŽåý“¢ ŽéIdÊým kÎgüoŸLȃ9ñÈ'©ò³ú½i`oÊ7JïŸ@û̉²Õ|çô"¾p_Øññ­Ô§wK´o¢}ƒ—âº0°êã¶·Oã…«¾Àj+dµµD@bÈVNº™œt[#ˆ!„°/!„¿@ˆø "ù£œ H}ŠÔSlËø^ý“B´?¡¹)š;…€<¦Êã!a˜‰Ä¾ Í™hΠ½dj"Ó7s&cn€nMuûHÞò(gRw‘ºc¼¸Œ‡<\ä‘blG9‡‹<x)/嘋Äò‘Ç–†´4ÐŽ@;jÞ›åL4Mˆb¼@ÂA¸Bz›"WGùWDR/"õ¢ƒè/Ð\Íh®ˆæ 4WDs§Ð~íWH½ŠÔ+FC•Ѱ@LSô¥J_Î zÛ¤·§GyœBÀÖ›Øzƒ%7±äKnbÉ šk¢¹[obë VØÄ ;dÚE¦òè"7z©/ÐK½tXrKð§Cüé€L‡Èt€<¸Z" !ò°BîÅ*öQUöQ‰/ÕÅKåwØÝÊçÄb‚˜BíícÂAø—"H¤X°ÉYIÃ9G“sކ3Š&g gMÎ(®Ð }á‰5‘XƒÄè-qÎÑ䜣ᜣÉ9Gëðt]<Ý[§§ëðc]üØyt‘Ç@K郖ô…^j‰€ö‡hÀ¸U?6 Fj:"5]"5ù-™·r&Ð[Fj:b(|Kæ"ÞNéí)llŠMÈ”±œ uFY–èÅD/Kf¤¦ç³NG9ð–ŒåtDHºDHzŽ'åD8zëÒ[G;é=‰€2B²D`Ü2†²D@¦Œ¡,°BÆPz>…t”3 ±±€1ÊÒósÐŽò¯  ¹Íô¢—ïAŒ|¢ê(?K ãb‚Ж>$r´gH´gäG9 D{A¢¢¥UZZÑRîÖüCÖü'ÂI4ô…+éÇΚœ \# „   ËÏ/<Ê™( ŠDýEDÑ…˜ ¦¾„p.D€PÍ@hŸþ'!MNB®Ð #½†H¯I¤×ò¹­£œ H±`¼Çä}8Û'ý˜!ŽkÇ}©ÓáÝ>&ïö1C„Ä$B‚·ÿ˜¼ýg€L™X!ì”þt‰€LO^" SÆ? Ñ “èÆ[gtà 6‰â&ñCüÃ$þ±BtødÆ.–È£‹<:äÑEòè"½eüÃÇ5‰ã.·Œnü] CFÃÝ2Ró6ã…§ ñ “x!d2ăLâAKFOÿbJ&1%C,Ç$–cˆå˜Är ‘“H!Êbe1DYL¢,KzË8Ìy0cˆÃ˜ÄaÞE8lŒÑž%òàiC´Ç$Úó6ã–Ñž“hŸñ é“S7†ˆ‘IÄÏÅ7y.þûŒ(F,?•ç(gšcÄOß7yúþÍq¼D@·Ü#/Ð>îáÞ`ò€÷Dy¨ `87nrnü,Â@˜Â…ñ<‘³ rÂ}( Š„Hý¢h¿ˆè ºÄx€™Ø˜Aû<]ºB8´Ïùv‰€æxBõm4Ç•A ‡’à ̷!óí±Ï90Ï…Ìsy.dž Ìb!³Ø¾pZ"`c‘mÌ?²ç?ÊÏD¢‚¨B4íÄb‚˜/ÂHÈ´ˆLO! "ò(Gyô¶Ho z[´·ß…p.D€õ˜¨QUFT…æªhnÀ˜«2æ*´_Eû)Ò{”3û¨böQÅ>*ôRE/2­"Ó™6‘é›H½‰Ô襉^¤ÞDê 2m"Ó[obë ôÒD/ë"±KîbÉë"±SH½‹Ô;FC—Ѱ@`4t tÛE· tÛE·š¢¹º¢Ûz¢—‰ ‘ØKbɉ yL‘Ç·! õ)RŸú©¿‰Àˆš2¢&|á_¸@`¼L/ FÔ”5aAS-è!aЭ‰n º5Ñí)4‡¸WÌUfÜ*q¹Uò6"Ï@rwÅq«ÄåV‰ãΈËÇ—;#Û'Äb‚˜/Â^ Ó®2}HäÙCî¿8n·¸ÜnY# ¹!š[  [Î@5g#r& Î@óK•ùe…˜)=ÿ™rnX" Ó)2] Szþš#½Gù|£ž‰€/äÜPá׫øõ%z¡×~ÝšèÖ0¢tö0hßDûoiâ-O! [ÝžBÀ>Líã!áЋ‹^N! ½,º‹Ô£ÒeTžB@/.z Œ¹1°õ[_ 0BFÙ†È4 yä*XaˆžB`ž ™ç Që&Që%¢€(BTUˆ¢ ÑAt!&ˆ)„0!„Jìï†(Ð\ÍB@ûŒž·|â(gÚg|½åÌG9Ð>ãëo#`cŒž7D¾›D¾bÒMbÒKÄ1„@_q^"Ð[Ƥ—áB@bŒk¯ –̘t˧ò³ô¨5ÞéòÌ5£¡ÉhX  Û&º}û`¾!BÒ$B²DÀ>Ý8‡è°1Fñ—Xc( ’&’Çî­¹Ü[[# ˆòQAÔˆB{ûb€BLóÂ@Ø/"„?OTØ3¼Kìƒ3áÑ@´/!`AÌ/°ÎÈóíùw#]îFnŸ`Ürž˜£†ÌQ3Ðh`n27,è-g¯=ÄkŸCtHŒ±ñ¿>įÄÆ‡ÄÆ—HŒqí%2e\{‰€Ä×^!$ƈóÛHqí%£‘ u”¿' bŒ~äg»åL@ûŒÑDà‡Dà—ŒJFàâëCâë¹×!¹×üü@fuHfu ¾>$¾>==ˆž‰žäM‡äMWƒÔ__" ÆÆ—hŽÑóèùèù@\{H\{ &=$&}Í1j½D@s&šsèÅE/©3®=òù䣜 È”Qëˆóˆó™2&½D@b. HŒqí¿+º Ñm`D1¿DÀ>_"`AŒÑDà‡Dà¢çC¢çc«Ê´?ùžù^" ˆòÑAt! 
„  ´·‰‚Þ2"°D ·Œ,Dý¢h_B ãÚç^Ýà×Müºå›)GùY¢h/Db€BLóÂ@¨L„  ây" }z~ÜHu¹‘긑êr#Õq›Ôå6©ã®¨Ë]Ñíô…>ù1û•.÷+·O&ˆ)„°Ὲ"±ìµ]¼¶Ã'»øä%¢‚¨Btýb€/°Æ`×Åã:bŸ.±OGÐ% ¸BThŽÑÑ¿+Ä›:Ž8®K׃u‰Á:"¬.VGÞÔ%oêˆÁºÄ`1X—ìIlŒyÓ“X!3«+Dƒöq^"`ŒIÿ #`ëÌ#/ ŒÑ//Œâ;bô.1zGŒÞ%F¿DÀN™á=‰€%3ÎÑaëÌ/° æù—|ÃI샙æ%ÄŒÄÝòž–#ßà’opœ’w9%¿D@bÌ&8r.¹‚%2e6a‰À˜cŒÞó¹Žr& SFñ1z—ýI¤Î(¾#Fï£wDà]"ðKdÊý©3ŠïˆÑ»ÄèWƒL£?‰€^£wÄè]bô'XÓ1ÎïùI%G9Уøx#½Ëéo“wy›¼ãMð.o‚_#`댡8b(.1G„Ä%B∸DHð6y—·É¯°dF7ñ‘}áQ~–˜ æ—Â^ „  Tb‰ä“Cîξ( ŠD¢hBÀ>¦ØÇíOÑþ)´?Eû ´?Uû'ÝšèÖ [Ý.Ћ‰^ÒÜp”3©›HÝ0*MF¥£·.½] Ð[—Þ.°u[_ 1‰9$æ"1‡Ä\%öH,Db±2ößD@¦!2 Øiˆ¾‰€nÅ’sžGùY¢‚¨/ DûEDÑ… T¦gÄ|0&„ƒp!D(°"öq Q@”Xa+,°"öñmØi;-°"öñ&TÄ‚*4WEs t[E· ¤^Eêã¶Ê¸] 0*«ŒÊã¶Ê¸­zU©?$¤ÞDê dÚD¦ –ÜÄ’ßD@sM4·@@sM4× ¹&š;…€n›è¶Cs]4÷&£²Ë¨ì° .t ûèbÚï¢ý_ØÅ.° .Ô¡ý.ÚïÐ~íÌ·CæÛº¢ÛÝÑíÍ ÑÜÍ ÑÜ€^Ë)ˆÔ‰ÔœD@·ŒöÄrŠÄrVˆ Ý2³D@ûŒÔ<&p9ä>rà¶qÈmãÀmãÛÆkD€)Ÿ}”Ï' ‘Ø›ˆ ¢¾@4íb€B@·œ O"`œMq8äðY4ÇÙ·CnoŸ@êœ f &3PÃüÒd~Y!zËùå$ãµD@bœ_–X2g %Þ’sÔÝr:‰Àˆâ,v1¡}ÎQ'°Å&Æ3'°1f5rMr ù†&ù†%za¾¡!WÐ$WÐçoço9Ó|”3y˜Èc€Ä˜MhÈ&4É&œD@êÌH,•ÌY¬ŽÑÀœECF¢IF¢å|öQ~–€æ˜³X" [Ý:ô¼GCÞ£IÞ£!«Ñ$«Ñr>û(?IôÂÌÈ©3«ÑÕh’ÕX" uf$–è…9‹“è6D·ÝâVZtdFºdFN" ˆ"DQ¿„h Ú·%:ˆ.Ä1_ „ÚG€ˆç‰û`f¤#ïÑ%ïÑ‘÷è’÷8‰€^˜³xí33Ò‘³è’³èùFÈQ΄ƒð_DÀ‚˜]éù®ÆQÎtËÌHGf¤Kf¤#ïÑ%ïÑíéí9‰€æ1Z" Sƃ:b9]b9Q–.Q–%za”e‰€æeéÈŒtÉŒà~vÈýìÀýìûÙkD€€Ôq·:änõQAT!ˆñ%„°oK8ÿED€ûÈsƒÜÏܾ¹}¸}rûz@_èµO" zþ"Ï S憉Ìû”Ìû½pöX"&ˆù%t˹á$záü217L™&üú¿>á×§øõ“È´‰LHŒYó‰ü”ü9DÇh`~‰€æ£ŸˆÑO‰Ñ#6Æ\ÁˬùDN|JNüm¬y·ðüÌïŸC X2s'Kl¹““ø Þæ˜ÈLÉ,°æN&òSòK|só‰)‰“HY‰ŒÄ”ŒÄDFbJFb‰€n™³8‰€ö§h€}03ò/Œ€%O±ä>ˆ™¢%ã…¹¤‰{'Sî, ÌGM䣦ä£ÞF`D1§µD`¼0ëµD`¼0§µDÀ‚˜:‡phŸ9­%öÁ¬×™2õhŽy±%†™³“hŸÙ··ðc.~ì1°Bfð–øæø&2xS2xù¹)ù¹%öÁìÛÛXs|‘Í)‘ÍÇ„!ƒg’Á{Q@!*ˆúÑAt!ˆ!ĽàY!Ï"Ø>) Ê— DûEÄ1^ &•º°áBˆ áÐ>gäoDÀ‚8ïãù!ÏwX# [ηK4ÇYl‰€n9‹-Ð>O»|" }ηžŸÄy”3ÝrF^" [Îs߈€q®td]²€K4ª¹D`6 ™M—ˆ¢| QAÔˆ¢½@týKˆ¡š{HhŽ'f'fBNÌN»„œv äCrs(!çP–ˆb| 1AÌa/Â…€ö™Ñ d4C2šo# Û*º] uæ+—HÙÈ“hŽw¬N" }æMWˆÍñäÎÿÁììI|Ïö,ðRÌ/°Sæ€Þ ïc8pO+äžV Ã’á=‡è°æ€O"`Ì#/°fxO"`AÌîV‡Ü­d‰C²ÄKlŒÞ%VÈìlàf[ÈͶsˆÝ2+ºD@s¼[½D@·¼Ùö6Äp Ã’á]"`Ì/° 怗x:Þ} dxC2¼K,ˆ9à@8$Ƚ†ä^—è…¹×%2e62k É5.·ÌFâ–!qË“hŽqË·°æ+‘ÍÈæ b6r‰€1:ˆŽ†DGO"`§ŒŸ~¢äç²ÜÊÏ Db‚˜B8ž¸?Ét+?Kå"±‰ ‘ØýLx+gb€/Bìc€^†êå "@ÄóÄ„ö§hBsS4w íOÑþ„ö§hbDMQo"`Sìc€ö§hÿ=„Á>Lìc€w0ñí›h€}˜ØÇÝšèöMìCç¨cßdì;F¥Ë¨tHÌEbêâO-ui©£¥®-}H¬0Ä È#D ì4ÄN2 ‘i@¦!2 È4D¦‘2²H= õš2E·r&*ˆúÑAôoK Cˆ b¾@8¢ÛÝÑm) ŠÐmÝ–¢ý"º-¢ÛÝÑíÝÑíÝÑí›XP ª° *ô&vZÅNO!`cUll€V±ÂSØi;] `§Uìt0&l¬ª@4XP Z `Mìc€}4±Úo¢ýÝ6Ñí›ØGûhðAM|Pƒn›êö!Ñ¡Û.º]  Û.ºí˜ 1ªˆU‰Õ”Kº•Ï' Æ¥jÊÝÊ™€æ—zíwÑþ€nýê˜Õ»ÌêK„°ç‰<ÏñY&û'Dyh šDÿb‚˜B@bœ=–ˆÏ­f¢ÕH~½Ã¯óI%§°dzíoDÀ>8{|#ÂAø—°SÎsóK—ùe‰€^ºè¥Cœ=:f.³G‡_ïâ×Wˆ¾0›Ð‘Mè’Mè)ã}+g}a&`‰@_çïˆâw‰âwDñ»Dñ—x)Æè;bô]bôK¤Î(~Oï[9£Ö'ú©$f"±c\»#jÝ%jÝS†÷VÎ,™ç%–̈ó©›Hý1á°dK>…ÀJÉe¥äÐ-£ç==©õV~–€<\å;el¼#òÝ%òÝ×î×îˆkw‰k/°F­O" 1ƵûV• •Øb¤Û ·ò³DQ¿„h Ú Ä1„˜ æ—Â^ „  Dû96>$6¾D@sŒD­‡D­ßFt]ØwÀK´ÏÈ÷It˨õÀ>{È>{‰ÀØç>{¤çŽÞÊwD~À­œ a_B8'‘çÞÎß?) Ê DÑ…€ÄB$è-g%òàüò˜0Ì &sÃQAÔ_D4í¢ƒè/Äøb‚˜/Bíã1 ‚DžLf à d2½€}p2Ì@&3ÐI´Ï9j‰€ö92«&™Õ%Úç,¶BTØg±%2ew‰€L™küF„0! ¹*š;ƒh˜˜I|‘͸¶!®m×¶t÷íVÎäÁا!öiû<‡è Ì.)ó„߈€æ ^"0¢-^"àOO6Ä“MâɆh±I´x‰€}0“¸B Øs†Sò&§ä qm“¸¶!®m×>‰€æxŽ~‰€n_7DÏM¢çKôÂèù9Ä„n_·tçìV>Ÿ€}0¿D@ûŒÑŸDÀ>ç_"`A<‹¿DÀ‚x…0hŸgà gàMÎÀŸDÀ0ß°DÀ>˜‘X" }æ,–hŸY CÎÂ$gaÈ&˜d §äMNÉr&¹C®À$W`Șä 'íMNÚ/#F†hI´ÇM0É&¬‰1´D@¦Œr&9 CÄÈ$b˜=xyÿ$@į!òüÂ;ͧDù¢‚¨¿ˆh Ú—Äx˜ æ l³Øá \X2³æ+„Á‚8Ïf ÞœÞ?L9{²ÕzømdÊ9*0…Ì@(d dšC2Í'„xÜšã²·ÐçÊÀL2®Íqž Ìs!óÜÍ17Ýê|»@@/¼û¶DÀ2tû`¦h‰ÀØg¦h‰€…ÚØå#ßã;Ê™( Ê—DýEDÑ^ :ˆþ‹ˆb|[ÂA8‰;-b§6VÄÆ ´_Dûº-¢Û©‘zÊåg aB@bE% dìŸAT襊^襊^N! 
[Binary PDF stream data omitted. This span contained FlateDecode-compressed streams from a PDF bundled in the package; only the object headers are recoverable: obj 847 (text stream, /Length 2234), obj 848 (75 x 106 /DeviceRGB raster image, /Length 4241), obj 849 (3-component ICC colour profile, /Alternate /DeviceRGB, /Length 2596, referenced as /sRGB), and obj 850 (form /XObject for the embedded figure ./categoricalVGAM-025.pdf, /BBox [0 0 518 172], /Length 45116).]
Ý6ýÚÿÝÎýcèþ±ÛÙß¡ýÝíìßÐþm:û5´_»ýÚŸ§wö{h¿w;û=îú½Ë+_?_õõóŸ¿ûvx¬ÝŽv§ûb\ÿ¡ë¿Û¹¾C×÷áÎõº¾Oïœß¸›ß–=V{¥Wõ¤çjoô6ëÞÕ]÷¯}žÜÿR÷¿'ðB/«Ýèv´;Ýg=è1ë•^Õ“ž«½Ñ›z§wõA¿Ë÷†æ¯hþ óS4?…ó-:ßMç|‹ÎwÓ9ߢó-œoÑùΧè|v;ç[t¾›Îùÿng>Ê]>ìÆü™æoÓ™OÓ|n:óišÏÝÎüšæwÓ™oÓ|ïvöô›Îü›æß˜oÓ|óišÏÝÎüÙ]þìÎüºæ÷áÎü»æÓ™o×|;óëš_gþ\ó÷åýpíÇÓ;ûëÚßMg¿]û½éì¯k7ýó»þ=¹ûÚß§wî¡ûÃnçþº¿ìvö;´ß»ý íߦ³¡ýÛtö/´õÆý³éþù¯ô:ëIÏÕÞèwçßé}Ö}ìèÓþÀnG»Ó}µ3¿¡ù æ34Ÿwæó®Ÿ»ù Ío0_q—¯OöÊüVÍïýÐËjg?ªö£2ßUó½éÌÕüïvæ»j¾+÷çªûóngþ«æ¿2ÿUóÿéÎü×»üox2¿©ùÝtæ/5›Îü¥æïÓùMÍï¦3ß©ùÞtæ75¿wö#µ›Îü¦æ÷«{c?šö£qÿnºïvö«i¿v;ûÑ´›Î~4íÇngšögÓ™ÿ¦ùßtæ¿iþóÓîò³Ó;ó×5ù隟Îùwÿ¦sþ]ç¿éœO×ùt®×õï\ÿ®ëß¹~ýný¶œ÷Ÿ]ï?÷úàü†Îop~òú‹Î×w}}pg>ºæcºÞ¾‡»Ñí«¹Ó}µ=Ô+½ª'=g½ÑÛ¬wz?ÚÙ¯~ׯû`ÿ†öo·³_wûËngþ‡æÿéýÚ¯‡;û;´¿ƒýÚ߇;û?´ÿƒýÚïÝÎþÈë¿_5ôõWƒ¯Ÿúú©nt[íN÷YzÌz¥WõFoÏæÞg}Ð5…ù+š¿Mg¾ŠæëÓù,šÏMg>‹æsәϢùütg?Šö¯/úú²8ó]4ß»ù/šÿ½nÌ¿iþîÌ·i¾7ù6Í·1¿¦ùýtg~Mó»éÌ·i¾?ÝÙÓþìvæÛîòÍûÓû‡Ïvg?\û±ÛÙ×þüêýsíßÃýàë'ê û×Ûáœ;ÝW{ÐC½ÒïÎo¯'=g½ÑÛ¬wzÿj>èc±ßî?o‡sÎ|»æ{Ó™_×ü>ÜÙ×~<ÜÙŸ»ýåáÎþ¹öo·³Ÿ®ýtæß5ÿ»ù÷»üqö3´Ÿ›^èeµ³ÿ¡ýßtö7´¿Ÿîìwh¿ƒý í_°?¡ýÙíì_hÿžÞÙïÐ~?ÜÙ¿¸ë߆Wö¯jÿv;ûWµ»ý¬ÚÏÝÎ~Uí×—wîU÷‡§wî_U÷¯Êý§êþ³Û¹?TÝv;û_µÿwîõnÿxrOîO©ûSrIÝ_v;÷—Ôýe·³ß©ý~zg¿ÿ?{÷ÒëÈÒ¦çyþýŠ=³° Šs¼<‘``¶ð@ÒÀÐÁ€Ð`÷ÀßEÖªþÖ}§PÁD0ÉU»5èëÚµ˜ïóWñìî÷ÛûK÷þ²tîÝûǶsÿèÞ?¶ûC÷þ°íìowû;ÜßËýî÷¶³ßÃý¾ÜÙÏá~.ýîÏÒÙŸáþ\îì×p¿–Îþ ÷géìÇp?.wöoú÷KOÜŸ’÷§|ÐÇïæ“>íA?¬ß¦cÿJÞ¿ðDOöLϯöB/¯öJ¯¯öFoÏvös¸ŸÛÎþ ÷çíÎþ÷÷rg¿ûë»}rÿ˜Þ?¾¼s›Þß&÷Ÿéýçrçþ3½ÿ,ûÇôþq¹s™Þ_–Îýczÿ¸Ü¹?Lï“ýîïÛýîo°á~ûîÇåÎ~„ûq¹³_á~óÎÿåÎþ„ûÌw8ßÁ|‡ó}¹3¿áü^îìGúq©g¼?ýã&=ÑÓ³=Óó«½Ð˳½ÑÛ«½ÓûYôqÖ'}>Ûƒ~È÷¦'ö#¹owö+¹_—;û•ܯËýKîß¶³_ÉýZ:û•ܯ¥³_ÉýÚvö+¹_‰ýHîGf>³ó™™Ÿìü,óÏžÿ¶sþÙóÿµö³¸Ÿ/ðDOÏöLÏÏöB?¬o¥×g{£7{§÷gû ³>éóÙôx²gæ;;ßowö#»owö+»_™ý:ì_ÛÎ~e÷kéì_vÿÞîìov—Îþf÷wéìgv?/wö7ú»ðÂ~÷kÛÙâ~\îì_qÿÞîìoq—Îþ÷÷íÎþ÷·°ŸÅý\:ûYÜÏÂþ÷géìO9ôgá•ý¨îÇÒ™ïê|o;óYÏmg~«óûåù­ÎïåÎüWçéìGu?–Î~T÷cÛÙŸêþ4ö£¹Kgþ›óÿvgšûóvg?šûq¹óñ¥ùñåíÎþ6÷÷rgÿ›û¹sÿhÞ?ûßÜÿmçþÑûÇÅÞÙïî~_îìow·ýìîgg?ºû±tæ»;ß¿öÊ~V÷ó Þè‡û¿Ú;½ÿcóAg}ÒçYz¼Øñù“êÏŸ<Á=õLÏÏöB/¯vîÝûö³ÿ‡ýííÎ~v÷sÛÙßîþ¾Ý¹?tïowîÝûÇ`ÿ‡û?Øïá~_îìÿpÿ—Î~ ÷kÛÙá~ æ{8ßKgþ†ów¹3_ÃùÚvæsò¹ðÉüNçwÛ™Ïé|n;ó;ßÉÇŸéÇŸmg¾§ó=™Ïé|Næg:?_Þ™Ïé|¾Ý™ÿyÈÿ›=ØŸp¶ý ÷çOïÜÂûÃÒÙÿpÿßîÜÂûÏåÎÇïðã÷¶sÿ ;÷¿ðþ÷åûcxÜvîoqØßÞêïïn~÷<ÑÓYÏôl/ôòl¯ôúlïôþjôaŸôùbOÌWr¾󓜟àÏÿü·{ÐÝÏ=ïìgw?OôdÏôül/ôòj¯ôújoôvÖ;½¿Ú}|5Ÿôùjúaÿ¸ØûÜ拾¿ÉýÝvö3¹Ÿ—;û™ÜÏÄþ%÷ïrgÿ’û·tö#¹Kg¾“ó˜¿tÈßoٟ—;û™ÝÏ/ïìvÿßîܲ÷‡·;÷§ìýiÛ¹ÿdï?ÛÎý){úòÎý1{¼Ü¹ÿfï¿™ûW>ì_{aÿ‹ûÿåûCñþ°tö·¸¿_Þ¹?�s*ÞŸ~{çþU¼]îÜ¿Š÷¯¥s+Þßþì^ùû_õï—;÷ÇêýqéÜ«÷ÇmçþU½Uö¿ºÿÛÎþU÷oÛÙ¯ê~m;ûWÝ¿¥3¿õß]çïÕ¿_¬¼1ßÍù^:óÝœïÆ|6çséÌos~—Î|6çsÛ™Ïæ|þÚû;Üß¼Ð˳½Ò뫽Ñë·ëÞ_íƒ>ì“>_íA“ŽýcxÿÜ?†÷8ûÕܯ/ïìos·ýlîçÛý>ì¯ÛÎ~6÷sÛÙ¿æþýöÎý¡ö‡MïÜ_º÷—/ïìww¿¿¼sÿèÞ?.wîÝûCg¿»û½í||ï~|ïìwÿ·ýëîßÒÙ¯~è×›}°?Ãý||~|_:ó=œï¥3ŸÃù¼Ü™ÿáü¿ÝÙá~l;û1Ümg†û³íÌÿpþ'ó?ÿmg?¦ûq¹³_ÓýÚvögº?Kg¾§ó}¹óñgúñçË;û;Ýߥ³ŸÓýÜvösºŸ“ýœ‡~þæÜÂûÃåÎý!¼?ûî÷¶³ÿáþo;ûî÷ÒÙ¯p¿‚ý ÷çrgÂýùíý‰C¶|rÿœÞ?'÷Çéýñôñ»ù¤Ï³ôÃú_ìØÿ¦÷¿<ÑÓYÏôü»y¡—³^éõ«9ûîÿ¶sïÛÎþ†û»tö7Üß÷zðóáϼÀ=õLÏg½ÐËY¯ôújoôölïôþjôqÖ'ÝùOÌgr>ßîÌrþ—Îü'çÿíÎ~%÷kÛÙŸäþl;û“ÜŸËýIîÏÛýLîç¶³ßéÐï Çoæ™ûCöþ°tîÙûÃÒÙ¯ì~}yg³ûûåýÏîÿåÎ~g÷{éìov3û›Ýßmg¿ò¡_›^Ø¿âþm;ûYÜÏ¥óñ·øñwéìWq¿.wö£¸zg?‹û¹tö³¸ŸowîÅûös(Þ¶ûCñþPÙïê~Wö·º¿owîÕû×öþ û÷ÇMøçý÷ãæk=ÑÓYÏtŸÿå^èåÙ^éõÕÞèí¬wzµúx¶Oú|¶³¿åÐßM¯ìgu?·ý­îïÒÙ¿Ãþ¶íì_uÿ*ûQÝ¥³ÕýØvæ¿:ÿ—;ûQÝÊ|Wç»2ŸÕùlÌWs¾–Î|5ç«1ÍùhÌGs>–Î|4çcÛ™æ|,ómžïÒ9ÿæùyg>›ó¹tæ·9¿ùìÎçÛýèîÇoïìww¿·ýïîg»û»tö·»¿—;ûÓÝŸ¥³?ÝýùÝ}°_ÃýÚvæo8ƒùÎÏåÎü 
çgé||~|Y:ó7œ¿¥sÿÞ¿W>9¿éùm;÷·éýméÌÏt~&÷¯éýkÛ™Ÿéül;ó5¯¥3_ÓùÚvæo:¿öÄþ%÷ïïô~Öýp|+Ÿôù»yÐãÅŽý!yHìrÿ¿€gz>ë…^žíìÏt¶½Ñ›ý›îߟ޹ö¿É~N÷órg?硟›ìw¸ßÛÎ~†û¹íìo¸¿Kg?ÃýÜvö3ÜÏ·;÷‡ðþ°íì_¸Kg?Ãý\:ûî×åÎþÅ¡¿ôŒ÷ï~Ü|­'zz¶gz~¶z±Wz}¶7z;ëÞ_í“>ÏzÐù^xb>“óy¹3_ÉùºÜ™¯ä|m;󗜿mg¾’óõùý¥7éÌG:äãbÏÌGv>2÷·ìýíË;ó—¿¥sÍÞ_/wæ;;ßÛÎ|gçûíÎ~e÷kÛÙÏì~fîÿÙûÿ¶³ŸùÐÏ…ö·¸¿ÛÎ~éýAïÿ¸ùZzœtìÅû_áþR¼¿ö·¸¿OðFoÏöNï¯öAvÎ?{þÛÎ|äC>6½0_Åù*Ìס_KÏôü»9ó_œÿ˽ÒëYg¿Šûµtæ¿8ÿKg¾‹ó½tæ·òûf¯ìOu.wö¯º•ù®Î÷¶3¿Õù½Ü™ÿêü/ù¯ÎÿÒùøQýøQÙê~Tæ¿:ÿ»Þ˜Ÿæü\îÌgs>·ùlÎç¶3Íù[:óÑœ·;óٜϥ3í?îŸÍûçÊ;çß=ÿËùèÎÇÒ™î|tîOÝûÓ¶s~Ýóë\ÿ~Xÿ…ö¸ÿ—;ç7<¿ËóžÿÒ™á|\îÌÏp~¶ŸÃŸ¿öÊ~V÷³²Õý{‚zyµWz=ë~Xß«½ÓûYôñlŸôùjz<Ù±VïŸ8û7Ü¿Ëýîç¶³Ãý{»³ßÃýìßaÿÜvös¸Ÿƒýî×¶³?Ãý¹ÜÙ¯qè×Â'û5ݯ¥3¿Óù]:ó5¯ËùÎïÒ™Ïé|Næs:Ÿ—;ó;ßmg>§ó9™¿yÈ߃ù çoÛ¹¿‡÷÷`~Ãù æ+œ¯mgþÂù[:çžß¶sþáùo;ó‡|lyãû›šßßôOôtÖ3=ÿn^èÅ^éõÙÞèí¬wz·ú°Oú”'Î?yþ‰ë›¼¾Kçú'¯ÿ¶sý“×?q}’×'q}Òa}¶¼s¾Ýóíœo÷|ðAÏöI?œßÕôx²ãýuÝï¯ëìGw?ðB/¯öJ¯Ïvæ;9ß_ÞÙ¯ä~%æóÐï·;óù^xfþ³óÿåýËîßÒ3=¿ÚÙßìþ¾ÝÙÿìþgö+»_Kgÿ²û—ùø”ýøt¹³ÙýËìW>ôkÓ ó_œÿmg?Šû±íÌqþÿôÎ~÷ëíÎ~÷ûíÎý£xÿ(Ü?Š÷¥³ÿÅýßvîÅûÇåÎý¥ö—‹½rÿ©Þ*÷êý£²_ÕýªÌwu¾·ù¬ÎçÛý¨îGeþ«ó¿íÌuþ·ù®Î÷¶3¿õß‹½±Íý¸ÜÙ¿æþm;ûÛÜߥ³ŸÍý\:ûÓÜŸmgÿšû·tö«¹_owö³¹ŸKgÿšû·tö§ús±wö§»?—;ûÑÝ=ÜŸ†÷§ÁýcxÿìÿpÿðL÷ñ]î…^ì•^Ïz£·ÿæôNï¯öAg}ÒçYgÿÚ¡ ïìgw?—Î~v÷órgÿºû·tö¯»ùëÎßÒ™Ÿîü\îÌ_wþ–ÎüuçoéÌ_wþó5œ¯ÁùÏÛ¹¿ïïKg¾†ó5˜¯á|m;ó7œ¿ËùÎßÒ™¿áüm;ó7ùÛôÉüNçw2ÓùÛvæs:ŸKg~§ó{¹3ßÓùžÌßtþ.wæk:_“ûßôþ7™ŸéüóÎ϶3?áül;óÎWpþáù_îÌW8_Áù‡çÜ¿ÂûW0á|l;óÎ×¶3ŸqÈç–O¾¿lúýeOðDOg=ÓóY/ôòl¯ôjoôöjïô~Òƒç>ÿ¥s}Âës¹s}Âës¹s}ã°¾»>èã¬Oú´ÝûǯßÿÝýýßßïÝýýÞ_Á3=ŸõB/öJ¯¯öFoöNïg}ÐÇ«}Ò§=èÎob~“ó›8ÿäù/ùHÎÇÒ9ßäù.óOžÿÒ9ÿäù'Î'y>Kçü’ç÷»{f¾²óµíÜß²÷·¥3¿Ùù½Ü™ÿìüo;û‘Ý¥³ÙýX:û“ÝŸÌüäC~VÎý-{[ya¾Šóu¹3ŸÅù¼Ü9ÿâù/ó/žáü‹ç¿tîŸÅûç¶3åM¯œõü—ÎùUÏïíÎý­z«ÜŸª÷§mg>«ó¹tæ·:¿•ù¬ÎçÒ™¿êü-û_õþw¹3ßõï7{cšû³åƒßß:üý­¯ð ‡üóþ1üý¬ƒß:üý¨ÏðB/öJ¯öFo¯öNïÏöAgù¨ÎÇÒ™ŸzÈÏÂóuèÇ—÷DOÏvö£¹ÛÎ~5÷ërg›û{¹³¿ÍýÝvö·¹¿ÛÎ~7÷{ÛÙÿæþ¿Ý¹¿4ï/ýíîïåÎ~w÷»³ŸÝýÜvæ¿;ÿKg>ºóq¹sþÝó\ßáõ}»s>Ãóy»sÞŸ—Îü ççíÎýyx^:÷ßáýwÛ™ÿáüo;û3ýY9÷×áýu×'û3ÝŸmgþ§óÿvg¿¦ûµíÌ÷t¾—ÎüNçwÛ™ßéü¾ÝÙé~,ùŸ‡üozð÷ðïKgÂý æ7œßËùç;¸ÿ‡÷ÿ_;¿vøûg¿_vøûeñB/g½ÒÇ¿ëÞìÞÏú û¤O{ÐCŽ|ûûK¿ßsøû=_á\ÿCþ‚ë^ÿ¥sýÃë¿íœ_x~Áù…çœßß2øý•Ãß_ùˆ'z:뙞Ïz¥×³ÞèÍÞéýÕ>éóÙtç#qþÉóOœoò|/wæ#9Ÿßòq“Î|$ç#1ÉùØvÎ?yþKôñÕœùKÎßÒ™ÏtÈçÂ3ó›ßÌ|eç+3_ÙùÊÌWv¾–ÎüeçoÛ™¿ìü-ùËÎ_æúçÃú_ì…ó-žoá|‹ç»íÌGq>–Î|çãrg~ŠóS˜â|æ£8—;÷ŸâýgéÜŠ÷Ÿkߟ8üý‰ÏðNïg}ÐÇ«}Òë÷Õ=è!Çþåïoü~Æáïg|†zy¶Wzµ3¿Åù-Ìgq> óSœŸw{å|«ç[9Ÿêù¼Ý3=Û9ÿêù/ó¯žÿ¶3?ÕùÙvοæ¿rîÕûÇÒ¹¿Tï/+oÌgs>/wæ¯9ÛÎü6çwéÜ¿š÷¯¥3ŸÍù¼Ü™ßæün;óÝœïmgþ›óߘïæ|wæ¯;Kg¾ºóµtæ«;_Kg¾ºóÕ9ÿîùw®o÷ú.ëß½þKç|úa>›>8¿áùm;ç3<ŸÁù Ïgéœßðü–Î~÷órg~†ó3˜á| Îwx¾“ó™žÏdÿ¦û79¿éùýÚùýlÃßÏ6øýkÃß¿öˆ'z²gºoÛ ½Ø+½Ú½½Ú}œõIŸvÎxþ“óžïÒ9ÿéùo;çÈ÷äü§ç¿tÎgz>Kïônçü¦ç·tÎwz¾“óžop}Ãë\ŸðúûîÏÒ¹>áõÙv®Ox}‚ë£×ùýNÃßïô OôtÖ3=?Û+½>Û½=Û;½Û}œõIŸÏö ò¹é‰ùJÎWb>’ó±íÌGr>ç›<ßÄù&Ï7q~ÉóÛvÎ/y~™ýÏîÿåÎùfÏwéœ_öü–^èåÕÎ|eç+3_ÙùZ:ó§×øýMÃßß4øýHÃßô ïôÃñ ú°Oú<ëA9òëïz…gz~¶zyµ3ÙùËÌÇ!¿™ùÈÎÇåÎ|eç+3_ù¯M/ÌOq~–žèé¬s¾Åó]:ç_<ÿËûSñþT˜¿âü-ù)ÎÏåÎ|ç³0_寋½2ŸÕù\:óYÏmçþY½.ù«ÎßÒ™Ïê|n;óSŸmgþªó·tæ§:?ùhÎǶ3Íù¸Ü™¯æ|m;÷çæý¹q¾Íó]:ç×ó[xçút¯Ï¶s}º×géìWw¿:º¶ëß½þÛÎ~v÷ójìçp?íü~†áïgx†Oú|²ãõQ?Ã3<ÑÓYÏô|Ö ½<Û+½~5oôfïông>»óÙ™¯~È×®=žìƒùÎßÒ™¿áü-ùÎ×à|‡ç»tÎxþƒóžÿ¶3?ÃùœÏð|&÷—éýeé\ÿéõ_:ç3=ŸÉõŸ^ÿÉõŸ^ÿ¥s}§×wrý¦×/˜ßp~ƒç>ÿ¥s}Âë\ŸðúÏ?|þKçú„×'¸>z}‹×w¾¾û#žéÙ^èÅ^éõ¬ýp~oöÄõI^ŸË럼þ‰ë«ç§y}ìáëc?âÞÎz§÷³>èã¬Oú´=äx}-üú¯=|ýèG¼Ò=¿ÄõI^Ÿ¥s}’×gé\¿äõÃóó¾>ò3œëŸ½þxþÞ×7¼>ñðõ‰_áìGv?2ç—=¿ÌùdÏ's>ù0Ÿ•3ÿÙù/\ÿâõ/<ÿâó¿ÜÙŸâþ®oñún;çS<ŸÂõ/^ÿÂõ/^ÿÊõ¯^ÿmgªû³íœ_õü–ÎùVÏ·²_ÕýÚvηz¾•ó­žoå|«çÛ8Ÿæù4®oóú.ëÛ¼¾Kçú7¯ÿÒ¹¾ÍëÛØ¯æ~]îœ/Ÿ¿š¼¾êôõU'¯Ÿ:}ýÔG|Ò§üóóïÓ×}…gz>ë…^ì\ßæõ]:×÷0¿Æõm‡õ zÈ??¿7}}ÆÉë+N__ñÎõí^ߥWzµs}»×w0?ÃùY:Ïoøüoøøó1œÁã‡ãçü‡ç?yüÓÇ¿í<ÿéóßv®ßôúM®ßôú?|üKgÂýY:Ï/|~Áó ŸßÒ½Ù¹>|~f&¼?øãæ9Ïô,ÿüüØôõ­¾ÿÖ/yýðJ¯öN÷ù¾þÓôõŸ&¯4}}¤ÉëM_ÿhòúGÓ×?š¼>Ñôõ‰&¯¿3}ýÉë×L_¿fòú1Ó×™¼>Ìôõa&¯¯2}}•Éë—L_¿äöüùÙ?ÿó¿Ï¦¯ÿñˆsý‹×¿ðøŠ¯ðøÊáø6½r~Õó[:׿zý—Îõá¿?&¯/0}}Éë 
L__`òúÓטüüþôç÷o0éSþùýAÓŸïŸüüûôçß'?ß=ýùîÉÏoO~{òóÕÓŸ¯~Ĺ>ÍëÓù÷»ÿþÒ=Ù9¿îù­|ðçÿümçýÃýs~Ãó[:ç;<_üþáÏ'>â<þé㟜ÿôü'úø'_ŸüüÝôçïñ ~þ¦ã÷G>oòósÓŸŸ›üüØôçǾÿAðç‡þÒ ½Ø+½žõFovž_›ü|Ôôç£&?ß4ýù¦ÉÏ'M>iòó?ÓŸÿ™üüÎôçwðϯ|ܤwz·ú°óø“?óþ³ï?sý²×/óø²/óø²/óøòáø¸~Ùë÷ùýÝ7鉞Î:Ï_ï¯_|ܤwz·=ä8?¿}òýéÓïOŸ|÷ôû»'ß?=ýþé¿_tÿ~ñçùUŸ_åúW¯åúW¯ãú7¯ãú7¯?ž?óûOñJwþ.w®oóúvž÷ùwþüîŸß9Ÿîù|~ÿÕôû#'ßÿ8ýþÇGœë?¼þƒùÎß`?†ûñkçûÛ¦ßßöýpüÃÇÏ÷gM¿?kòýOÓïzÀ'}|Ð}~øýmø÷·Áß?†ÿàû{¦ßßsûÞøþ—Îã‹Ãñûß?3ýþ™Û4z³wz·ú°Oú”'_òñ%Þòý'Þòý'Þòý~ýÿãæ çû;¦ßß1ùýâÓß/þˆ'zz¶Wz=ëÞä™÷Ÿ}ÿøýÎï_˜|ÿÂôûñNïòÏןý¸IçùŸåùUŸ_åϯþù•?¿úç7æ£9Kçñ5_ãý·Ãý=Nzç|ùúOðõµðëkøççWÂßovûƒFog÷ß÷?èCþùñ/üýT8oøø>ÿû=üúÑ>¹~Óë7yÿÓ÷ÿùùýðë'x¦g;×'¼>Ÿÿý~}%øúJøõ•Ûp~|ü ¾>~}äö“Žý?2?nÒ½Ù'ýðóƒòÏ/áï'^?|ýýG¼Ó»<ÓóÁy~Ùç·tžöù¯üó¿Ã׿~À?ïÿ7Ï9óQÏÿþú¸ùÉy}Úðõiƒ×§ _Ÿ6øü{øù÷ÛTz=éÇ×||÷ß|ÿKïô.Çミ^_2|}Éàõ#Ã×üþƒ?øçžÿðù/}Ї|òïOÿýÉó›>¿mçúL¯OðøÂÇ·tÞøþ—Îü„ó<~=>T¼>ðqŽÇêÇ^?,|ý°G<èÜ¿x}°ðõÁnPèÅ^éÕÞèí¤þ÷Yøùßàó»áçwƒ×Ï _?'øükøù×àó¯áç_ñLÏöIçüøügøùÏG<Ó³½Ð‹½Ó»}Ї}Ò}~•Ç_}üKçùUŸ_|ý…G¼Ñ›ëS½>ŸŸ ?ÿ|þ5üükðóûáÏ ?üü}øó÷ÁÏ¿‡?ÿüüxøóã8çß=ÿÁó>ÿÁùÏðø‡p}†×gòçëñÏ?†Ÿ|Ä+½ÚƒrüûÊŸ¯¼ýA¢ûøñøëÏW?ÿþücðùÏðóŸÁç?ÃÏ>Áƒç>ÿàù‡Ï?xþáóÎ/<¿àüÂó ®Ox}‚ó Ï÷×>ùø<ýøÌÏG†?|þ6üümðùÙðó³ÁçgÃÏÏŸŸ ??ü|bøó‰ßÿ ñü’Ï/ñü’Ï/ñü’Ï/ñü’Ï/ñø“¿?Lÿþ°ëüü[øóoø ³>éÓôþüZøók·?¨ôjçùeŸ_æñgæñg?~? ÿ~ÆçgÃÏÏ?ßþ|SðùÙðó³8ï¿úþ+×§z}*×§z}*×§z}ï¿ùþï¿ùþï¿ùþï¿ùþ??¿‡çw¯¯~ܤóç÷ÃÏçüºç‡ß/‚¿_”ox~øçMz¢þþÊ3=Û ½œõNïòÁó>¿ÁŸ?üó—Þèí¤Oú<ø¤O{ÐCœOx>Áûßÿ¯ïŸÿy“Þ臿ßéý¤§ OÙ^èÅ^éõ¬óüÏÏoÿ¼I¯ôzÖ}؃'=%xJöNïvþü䟟ùó³~nt¯où/ß왞íœOñ| ׿xý Ï¿øüˤOyåùUŸßÊׯyý¿ùø~óÏï\ßîõ]:¯ûø~íxûÏ›§¼exËöB/öFoöAö ‡ëW¼~wùìÃÇ?*½Ú;½ÛyüÃÇ?|z>“?úçÏ/|~Áó Ÿ_pýÃë¿t_Žù æ§òñ­úñí ^èÅ^éÕÞèMüùáŸüù៿ôI÷úEЙÿÆõi^ŸÆõi^Ÿ¼Ñ›}Ї=è‡ã_xâñ%ßÒ;½ÛyüÉÇŸx|ép|›ž9¿ìù-ç—}~™ç—}~yÒ§¼ðþ‹ï¿$z²óøŠ¯ðþËáþ¹~Åë·òÊã¯>þ¥óüªÏ¯fz¶3ŸÕù\z¥×³ÎþV÷wéœ_õü*óU¯ÊùVÏ·r~•óëÌOw~:çÓ=Ÿ'8ï¿îЇ}Ò§¼}ƒ·oöLÏöFoòΟßýó;ϯûü×gx}ðïûîß?à¼ÿáûŸ¼ÿéûŸ\Ÿéõ™\ŸéõY:oúø‚ë^ß_;^¿ùy“^èÅ^éÕÞéÝôãùáç?ðFo§|ò÷›éßoðúÆÏ›ôLÏöB/öF?ߤOyâý'ßÿÒùó“~æúd¯Oæúd¯OæùgŸ:óüûá¿ç÷Þ<çÞOziðÒäxü?~÷ÿðþÜÿÃûpÿïÿÁ?‡ÿýÜßÃûûÎûo¾<>„‚Ï/„Ÿ_>?~~:øüsøùg<ÿó&ç×}~+üùÃ?éüù|þ9áú¾ÏûçÇMz¥s>™ÇŸ}üÿ~ý¸ ÇýûùÝÛðçgÿüÏÿþû¸ /üûÅ¿ðøŠïóó»ÉÏïÞþ Ñ›üó¿¿>nÒyþÕç_ùó«~ãù5Ÿ_ãù5Ÿ_ãù5ŸßÊ;>÷¿T°?Ü„÷ïCþùõ¹›ôB÷ýÞÿðý ‡üóïçÉÏo¦Âý­xãó—ÉÏ_&¼¿öçÍsÞèÌGåþU½U¼~óq“žé<~|?îÏ›ç¼ÐËYô!Gÿ›ûÿ€óþ‹ïûCóþÐØ¿æþ­Ÿ?ÿyŽ~t÷ïüyŽÇ×îÇW\?÷çMx£ãýÿûÿÏeÖ¿¾’Òý c)ß¿(ñÿýüŸü—Ûõ—ö÷ÿÇÿòÏÿøÿê/ß¾ÿ èÛŸÿ÷¿úÿÛ_ÿù×ÿöoüû¿¤?þå÷ÿùOI·ÿàÿõ/%êçm°ÎO¯Šß“þq³ÌܼۡæòùiÀ›Ÿ¼áeЛŸ|âeø›õþ·ç÷ïþþ¿r~ÿïþË_þÙÿü}mþøÛÿøÇý€ßþáÝo–øv‹u¾øþoÿóÿ$û›?þö?ýåúÛûOþÕ_­iÞþEü׿Zÿ«õþe.ý«ýñ¿:î×qøë_ÿðW¦¡ß~úÏ?Ñz¾½0pŸ•þƒ÷-÷~ó“÷v{$ùÿqó“Ïû7Mÿƒÿ¸ùÉÓ·z{(û‡ÿàãöçÿ"×Û/ƒý/~Üþü_Ôûå®þú_ü¸ýù¿øq^ý/þzž«5,í祾嶆ÿô»µG'PÚýJÅßïnü˜ÀNþåy1þã/Ÿ½çô-ÝÊüítö®S¾ÿ¢ûñ·Oßw½¿ èÇß~¼+?ÿvÿv{üøÛÍÁýÑñ¿Ôøte¿ûy3õ¸=ÿÝ_çÏÛ?ÿó±¶Û?GïGF¾=5öñ>6ÆÍ}£–ûå´î߉u;ÿëÿþÿµµøë_ø§ý)·ø}ŸÀ˜·0}ÿËÿúŸüý÷o~ü„ïüOþû?þýüãü£ü›¿ùÊßüÛ?þö_Þ~èc»ýu§OîôsÞþšnTßÛû—ÛÖ˜¿ý¼ý}#Ý6­O·Ëí±èÓíûÖÿév»ý‹ñÓíqû ýÓíyûþávýöíö Òýv¹ßN·†Ýoßsð­Ü–í~ûÛýv½½¢»=Æýv¿ý“~<ßnÛ3*·Û½ßoÏÛ3Œ·Ûí~ûû‰—?¯Þÿûï+:~ü¼|¿ÿtßÒ÷_âþó¿ÿ†ÖÇývÿñß÷[˜o·S»ß·gPRºÖíö}À)Ý_Øþ~;»íÜ·Ûåþ÷¿qÞ~Þý÷ó‘ò6Øùãv½=£ÒýƒÅ·Û÷o Nßâçíû7¢¥ï§õãüóýBÒÓþ~»Ü¯XnËr?ßr¿‚é}7¾O¹_1}_Öo÷ã-÷]ñ¾»Ü¯Ü¯¸”nc¹O¹_Aå6µ?þ~Á…÷^ï×þ~3݇Sï—3Ÿ³üÚÚo7ã>êz¿XÚ@ÜnÞ?ôãÚ·›÷Ÿ÷å»Ý¼_™ávßîÇÕ¿gºßó_Ò·ü1Çv'Iúñ†ŒÛíû'ßÒíÔûo÷WjnëRîëÐîŸL¸­[»k»ÓÃm]Ûýï÷û;Ó'øn·ïW:½ÍåÛ}îý~eÄÛíúãöý•îôã Ûíû'sîsüñ÷ݞ÷ûï÷ßün·ûýïû'Ýo·óÛùöÌ@úñØív¹½õv|q?ŸÑnÿr¸ÝN÷õ÷€ßsðãö}ëI?^¹Ý¾¿øvûÇ\üJu[Ÿqÿïçý_ò·Û?ÎÿveÃ~¿]îÇ3ï¯$ßn»ÿ+Ýrõñ÷ïÏdßsöã¿¿_Ùáv»Üçuû¤rÜoçúsÃ^ÿ’ú}V¿Þº>ý.ÿqó¯¿Kß~Ëûô»øÇMøççR>nÒ+½Ê3=|Ò§üókå7é‰îó[ùçç:>nÒ;ÿù¾õú,ÞÏ›ðÏϵ|ܤgz¶Wz=ëîão<¾æãûü\ÊÇMzÐCþù¹š›t÷ñwÞ÷ýþ·èÇMø¤Ïƒóü§Ïé<¾éã Þøþƒ??üóƒùàs9‘y~Ùç÷€gúáçWzµ7z³wz·ú°=äÁã _ðøÂÇ<¾ðñ-Ç>þ=/x­àã&½Ñ›}Ð?Ò§<ñþ“ïé…^Îz¥W{§÷³ÎõI^Ÿ¥sýÒaýž¹~Ùë—yþÙç¿tÞöýãñµøñµðñ³øñ³âµê›ç¼ÒëYoôvÖ;½ŸõAöIŸö 
ÇIÇüªç÷€'ºç»t>ÉÉç²qΧx>…ë[¼¾ÛÎùÏgéœO9Ìgá•ó©žÏÒ¹¾Õë»tö³ºŸÛÎþV÷wéœ_õü.wοzþW{cÿšû·í‡)äœoó|—Îù6ÏwéÜ›÷߯ù5Ïoé\ÿvXö·¹¿+ïìow;çÓ=ŸÎõé^ŸÎý³{ÿì<ÿîó_:×§{}–Îõé^ŸÁõ^Ÿ¥óü‡Ï0?Ãù<¾á㛼ýû©qýš×ïŸô)Çù7ŸÃó;7Ÿë™žŸí…^ì•^Ïz£7;ç3<Ÿ¥s~ã0¿ ÇIŸœÿ!ÛÎùOÏr~Óó›\ÿéõŸ\ÿéõÇóÍÏ<àœÏô|¶óžïäü¦ç\ÿðú×?¼þÁõ¯ÿ¶³áþ-ó Ï78¿ðü–Îùèù‘ÎçGºŸéx/ÏÇMz£7{§÷³>è>þÏïÕü¸ù\çú$¯Ï¶s}“×wé•^íœOò|×?yý×?ÖÒ§=èqÒ?¿×4ô].÷?àùgŸÿž®ïðú®ïðú®ßðú=ÁƒòÏŸÅø¸ùµ<ÑÓ³ó;Ì7s~Ùó[ú¤Oyáñžß~~gཌ7é<¿âó+<þâã_:óWœ¿Âó/‡ó·³Åý¨\ÿêõ_:çS=ŸÊùTÏgéÞíœOõ|*Ï¿úüóÙœÏÆóo>ÿ¥s}š×§ñü›Ï¿1ŸÍùl\ŸæõY:×§Ögáë×½~ëÓ½>Kçúu¯ßÒ¹~Ýë×¹>ÝëÓÙ/ýûzr>ÓóyÀ;½Û}œõI÷ñãù¥éç—žà‰žì™ží•^í\ßîõí\ßîõ]:×·{}—Îõ?äÏoM?¿5ùüÍôó7/pÎox~ÛÎùÏÛ ½Ø™¯á|}ygþ‡ó?˜ïá|o;ó?œÿmg†û³òÉ|Oç{2?Óù™ÌÏt~&ç3=Ÿ¥s~ÓóÛvÎoz~ÛÎõŸ‡õßuîÓûã®óÎGpþáùo;óÎÇÒ9ÿðü—Îù†ç»tÎ/<¿àúó³Ì¸–ÄÇÍsžèéÙžéÙ^èÅ^éõÕÞèíÕÞéÝ>éóÅž˜¯ä|-ùIÎÏÒ™¯ä|]íègû¹ã·ž|êïÏ›ôDOÏöLϯöB/öFoöNïöAÏöIŸö æ¿ðÄ|$çcéœòüç“<ŸËóOžÿ§ç×Þ<çÌOr~–Î|%ç+1ÉùX:󓜟¥3?ÉùÉÌGv>¶ùÊÎWæü³çŸ9ÿìùgÎ7{¾™óËž_æúg¯ÿÒ¹þÙë¿tÎ'æ³ðÂõ/^ÿÂõ/^ÿmç|Šçóvg>Šó±tæ§8?ÛÎý£xÿ(Ì_qþ–ÎüçoéÌ_qþ*óW¿ÊüTç§²ÿÕý_:ç_=ÿ¥sþÕóßvηz¾Kç|«ç[9¿êù-ó­‡ù.¼q¾ÍóÝvοyþýjî×Ò¹þÍëÿkOÌWr¾ó‘œôùó?o>×}Ø'}Ê1Ÿäù<à™žÏ:×·y}×·y}—ÞéžïåÎù4Ïgéœ_;ÌoåA'{ÿïßÎ:óÑ¥3?ÝùéìOw:ç×=¿Îõí^ß«}pý†×o°ŸÃý\¿áõÛv®ÿðú/ýîïÒ9ßáùök¸_ÛÎù Ïor~Óó›œßôü–ÎùLÏgé\ßéõ\ßéõ]:û5ݯ¥s}ça}¿ºsÿœÞ?WÌO8?KçüÃóßvö;Üï`¾Âù Î?<ÿ`ÿÂýÛvÎ/<¿_{æó·ÙÏßâó÷?ožóI÷ý#?ÙùY{ðøÃǿ홞¿šWz=ëÞìÞÏ:óÎG0Êga>‹óYøú@ñëx¡—³^éõÕÞèí¬wz·úxµOú!+zœôÄü%çïrg~“ó{¹gz¶3ŸÉùLÌ_rþ󗜿¥3?ÉùY:󓜟]Ïœoö|3×?{ý/wÎ7{¾Kçþ˜½?fæ#;ÛÎ|eçëíÎüfçwéÌov~·ù͇ü.¼0ßÅù¾Ü™ïâ|o;ó_œÿÂ|çséÌOq~ ׿ÖÿjçãcñããÊ+çW=¿¥sý«×Û¹Uï_ÛÎùëùõÊüUçïÏôüj/ôrÖ+ý°>ÞìÞ_íA/æèWu¿*_ÿ¬~ýóç|«ç»íÌÇ¡?•ù¨ÎGåüªç·ôAg}Ò§ó«‡ù]ìùhÎGãü›çß8¿æù5ίy~Kç|›ç»tΧy>KçúµÃúmzçúv¯ïÒ¹?wïÏóéžÏ¶s~Ýó[:ç×=¿mg¿»ûÝ9ÿîù/óí‡ù.|°ÃýÛvæg8?ƒùÎÏÒ™á|,óžßÛóžÿÒ¹ÿïÿƒùÎÇä|§ç»tÎzþKçü§ç¿íÌÇt>ÞîÜ¿¦÷¯ÉüLçgrÿ™Þ¶ùÒë+ùnÎwûüù¿Ÿ7_ëƒ>ì“~8¿•=N:úÕܯ<ÑÓYÏôüռЋ½ÒëYg~§ó»íÌßtþ–ÎüÌC~6=˜¯p¾‚ù ç'8¿ðü–Îù†çœ_x~Kç|Âó îáý#8Ÿð|¶ûKxÙvÎ?óÿÒÞùúl÷볯¿v¿þú€gz¶z9ë•^íÞìÞŸíƒ>ží“>íA?ÌÓ󓜟¥3?ÉùÙvæ/9ÛÎ|&ç31ŸÉùüòÎþ$÷'1ÿÉùßvæ?9ÿ‰ùL‡|nzf~³ó{¹3ßÙù¾Ü™ÿìü/ýÈîÇÒ™Ïì|n;óïÌüeçoÛ¹gïßKg~ó!¿oöÂ|çûrg¾‹ó½íÌ¿^_Ü_†÷—}Çþ1¼<à‰žÎz¦ûüÑïá~?à•^íÞÎú û¤Ï³ÎùåÃü6½p¾Åó-œ_ñü çwÈï¶s¾ÅóÝvæ£8—;óUœ/¼þ<üúóœù-Îoa>‹ó¹tæ¯ò·ðÊ|VçséÌou~—Î|VçséÌ_uþ*óQÊ|TçcéœõüßîÌOu~¶ùª‡|mzcþšów¹3¿Íù]:óÛœßËýhîǶ³_Íýz»³¿Íý½ÜÙÏæ~.ÍKg›û»íìo;ô÷ÍÞÙïî~/ýîî÷åÎ~w÷»³ŸÝý\:ûÓÝŸ¥3ÿÝù_:óßÿmg~»ó»íÌ_?äoÓó9œÏÁü ççíÎüç÷rg~‡ó»íÌ÷p¾·ùÖëû“ûÃôþð€z±Wz}¶7z{µwúa}¯öAg}ÒçŸÍƒ/vì¿Óûïäþ7½ÿ½ÀÙïá~ö{¸ß—;û?Üÿ¥³¿Ãý]:ûwØ?—Î| çc2ÓùX:ç;=ßmg>¦óñåóžïåÎ|LçcržÞŸ·ûãôþøåý™‡þ\ìÁþ…ûw¹³Ÿá~.ý÷céÜŸÃûóÒ¹?‡÷çËý ÷séìo¸¿—;ûî÷¶³áþm;û¥÷§ߟ~ÚžéùÕ^èÅ^éÕÞèíÙ>èã¬Oú´ÝóMœ_òü.÷DOgóMžïŸÞ™ßäün;ó›œßmïôngþ“óŸ˜ïä|gæï¿Hüþ€äïH¼þòõÿñIŸÏö ÇóÏýOþ~‚G<ÑÓ«=ÓóY/ôòl¯ôjoôölg’û³tö'¹?KgþÒ!{f~û˶3ŸÙù\:ó™Ï¥3Ùù»Ü™ïì|_îÌvþ—ÎügçéÌvþ3²¶ùχü/¼0ßÅù^:ó]œï¥3ßÅù^:óYœÏËù*Î×—wæ»8ßÛÎ~÷céÌqþ/wö§ús±Wö§º?Kgªû³íìGu?¶ù­ÎïÒ™Ïê|^îÌwu¾—Î|Vç³2?ÕùiÜ¿›÷ï¥3_Íùjœ_óü¶ÿÍÿKg>šóѸþÍë¿tΧy>—;çß<ÿÎùvÏwéœ÷ü;÷‡îýaÛ™î||yg>ùþ‹”Ø¿äþñûY’¿Ÿå¯ôÃý_íÞžíƒ>¾šOú|µ=äùëÎßÒ=Ù™Ïî|¢?þ~˜W8ó{èçÒ™Ïî|.½Óû«ùíÎïÒ™¿îü-ùì‡|^ìƒùÎ÷¶³ÃýX:û3ÜŸËýîÏ`?†û1˜¿áü]îÌïp~ßîìÇp?¶ù‡üqŸì×t¿&û3ÝŸ¥3ÿÓùÿÓ;û=ÝïËûÇôþ±t>¾N?¾¾Ý¹?MïOÛÎþO÷éìßtÿ‚ý ÷/˜¯p¾–Î|„óq¹3_á|çžïÒ9¿ðü–ÎÇðãÇÒ9ÿðü—Î|Ä!—:¿¿+ùû»^ᙞ¿šzy¶Wz=ëÞÎz§wû /æÁõ ¯OðüÃç¹s}Ãë<¿8œß»}ÒçYº÷Ÿ=ç÷³%?ÛWðB/öJ¯g½ÑÛYïôþÍ}œõIŸÏö òÿfOÌrþ/÷LÏgýLîçÒÙÏä~&ö/¹—;óŸœÿmg?’ûq¹³?ÉýY9Þæï×{Ä™Ïì|¾Ý™ßìü.ùÎÎ÷¶3¿ÙùýòÎ~d÷cÛ™ÿìü_îìO>ôçjçãOöãÏ®ö»¸ß…ý*î×¶óñ©øñiÛÙïâ~o;û]Ü拾_ÅýúòÎþ÷·°_ÅýºÜÙ¯â~m;ûUýÚôÊ~V÷óíÎ~W÷{éìou+ûWÝ¿¥³Õý«Ìwu¾+ó]ïÊ|Tç£r¾õ0ß/îóožã|õþ=~bò÷'~OôÃñ¯<Ó³½ÐË«½ÒëYoôöjôqÖƒOöÆ|5çkéÌס?ùjÎ×¶3?ÍùY:óÓœŸmg>šóq¹wz·3ÍùÃûçªß?÷g>Û!Ÿ›Þ™ßîü.ùîÎ÷Ò™ßîü^îÌwþ·ùïÎÿÒ™¿îü½Ý™ßîü~yg?ú¡ ìÇp?–Î~ ÷céÌ÷p¾/wæ8ÿ—;û3ÜŸÁýxÿ_:†.wöo¸ÛÎ~ ÷ãrg¿Æ¡_›>Ù¿éþ]îìÏt¶ý˜îǶ³_Óýz»³¿Óý]:û7Ý¿·;û;Ýߥ³Óý»Úƒý÷ãrçãgøñóíÎ~‡û½íìw¸ßÛÎ~†ûì_¸{ÎïßMþþÝÄïßMþþÝWx¥×g{£Ö§Óû³}Ðdz}ÒçWó ÇIÇþåï/Nüþâäï/~†³?áþ,ùçÛ™ÿpþ/wöë°ÿ\îì_¸Kg¿ÂýZ:óÎ0ßqÈ÷–óû““¿?ùOôôÕ<Óó«½ÐËY¯ôújoôvÖ;½¿Ú}œõIŸÏö 
úùÜ÷ŸäýgÛ¹$ïÛÎ~'÷ûrg¿“û}¹sHÞ~{çþ“¼ÿ,ûOòþóvçþ•¼]îÜÒa¸Ø3÷ìýcÛ¹dïKçïÙ¿,ûGöþq¹sÿÈÞ?–Î~f÷sÛÙßìþfö'»?Kgþ³óÿvg?ò¡ /Ìqþ·ý(îÇÒÙâ~\îìGq?–Î~÷cÛÙâ~ö£¸KgþŠó·tæ¯ò·é•ùÓûßøýëÉß¿žøýêÉ߯þ z¼ØÑOû#žèéÕžéù¬z9ë•^íÞ¾šwz·³ÅýX:ó_œÿËù/‡ü/¼2ÿ‡ýcéÌou~/wæ¿:ÿKgþ«ó¿tö£ºKg~«óûvgªûs¹³Õý[:ûSÝŸmg¿ê¡_ oìWs¿¶ýiîÏÒÙŸæþ|yg›ûûvçþм?|yçþѼl;÷‡æýárçþÒ¼¿¼Ý¹?5ïOKçþÔûÓ¦wîOÝûSçþÓ½ÿ\îÜ?º÷mgÿ»ûßÙ¯î~ýöÎ~w÷ûíÎþw÷ÛÙßîþ^îìwÿ;ûÛýÝôÁþ÷ÿíÎýaxX:û?Üÿmçþ1¼l;û;Üßmgÿ†û·tök¸_ÛÎþ ÷çË;û=Üïmg¿Ç¡ß›>ÙÏé~Nöoº—;û9ÝÏËýîïžOîÓûã<ÑÓ«=Óóïæ…~˜ßÊ+½>Û½Ù;½?Û}œõIŸö Ç“}²?Óýy»³_ÓýÚvæ:ÿ—;ûsØ—Î~L÷céìÏt&ó?ÿmg?¦û1Ùé~Læ{ò½ð`þÂù æ'œŸà|Ãó]:çžïÒ9ßð|ƒó Ï78Ÿð|¶ó ÏwéœxþÛÎ|Ä!—zðýãá÷?à‰žÎz¦çg{¡—³^éõÙÞèí¬wz·ú°Oú|¶ý¿‹=1¿Éù½Ü™ÿäü/ùNÎwb~“󛘿äüýöÎ~$÷ãË;û›Üߥ³ßÉý^:û›Üߥ³éп/î™ýÍîï—wîÙûösÉÞ_¶ýÎî÷¶³Ùý[:û—Ý¿¥³Ùý»Ôó7ìŸ7Ïy£·³ÞéýÙ>è‡óŸôyÖƒ/öÏýÿ¸yÎ3=¿Ú ½<Û™ßìüfæ3;Ÿ—;óyèç¶3¿ÙùÍÌW>äëÍ^˜ßâü^žíìGq? ó]œï¥3ÿÅùßvæ»8ß…ûsñþ¼íÌqþ ó_œÿ·;ó_ù_xe~«ó»tæ¯:Kgþªó÷vg~«ó»tæ»:ßKg~«ó{¹³ÕýX:ó[ßßÝûÑÜÆý»yÿÞvö«¹_ÛÎ~4÷céìGs?¶ýiîÏÒ™ÿæü/ùoÎc~Ú!?›Þ™¿îüuæ§;?óïžÿÒ9ÿîù/óéžOçúw¯çúw¯çúõÃú­œŸÝŸ»>8¿áù Îï¿È ïþ¸ G>’óñOtßåžéùwóB/ÏöJ¯öFoöNïg}ÐÇYŸôùjg¿ú¡_û`ÿ†û·íì×aÙvæ8ÿ_ÞÙ¯á~]îìïpû;ÜßËýîÿ`¿‡û½íìÏp&ó?ÿÉüNçwÛ™¯é|m;ó3Ÿ¥3_ÓùZ:ó1ÉùMÏïíÎýzÿ_:ó7¿`þÂù[:óÎ×Ûù çséÌg8ŸKg>Ãù|»³á~÷Ïðþ¹tæ;œïmgþÃùßóŒ÷}Ü|­'z:뙞í…^¾šWz=ëÞ¾šwz¶Oú!ßA/æ‰ýHîǶ³?ÉýùÓ;û—Ü¿ËýÐû' ÷¯âýë/ôòl¯ôjoôÃùíz§÷³>èã¬OúüÝ<èñdÇþS¼ÿ<àÌwr¾—Îü&ç÷rg?’ûq¹³?‡ýårgÿ’û·íìgr?óŸœÿmgþÓ!ÿ¿¹gö3»ŸKOôôlgÿ³û¿tö7»¿owö;»ß™ýËî_f²û³íì_vÿ¾¼³ßÙý¾ÜÙ¿|èß ûWÜ¿mgÿŠû·íìgq?·ý*î×oïÜŠ÷‡/ïÜ¿Š÷¯Âý§xÿÙvîÅûö³ÿÅý¿Ü¹”ÃþñŽrªÞŸ*÷—êýeÛ¹¿Tï/ÛÎ~W÷ûË;û]Ýï·;÷—êýeéÜ?ª÷mçþQ½l;÷‡êýaÛÙßêþ6ö·¹¿—;ûÝÜïmg¿›û}¹³ŸÍý\:ûÓÜŸ¥³?Íý¹ÜÙ¯æ~-ýiîÏÒÙæ~\îì_;ôï—^¹?UïOø ßÍ'}Úƒ~X¿MÇþU½=à‰žì™ž_í…^^í•^_íÞžíìgs?·ýiîÏÛýmîïåÎ~ö×w{çþѽ|yçþÖ½¿uî?ÝûÏåÎý§{ÿY:÷îýãrçþÒ½¿,ûG÷þq¹sèÞ:ûÛÝß·;ûÛÝßÁ~ ÷c°Ãý¸ÜÙá~\îì×p¿ó?œÿËýîÏ`¾‡ó=˜ïá|_îÌïp~/wöcúq±Oöoº“ý˜îǶ3ÿÓù¿ÜÙ¯é~m;û1ÝËý›îßÒÙŸéþ,ùŸÎÿ¶3ßóïMö#Ü·;ûî×åÎ~…ûu¹³áþm;ûî×ÒÙ¯p¿–Î~…ûµíìW¸_Á~ðúª¹ñýíÍïoo|ÿzóû×ðLÏÏöB/'=xþáó¿Ü¹~áõÛv®_xý¶ë‡õ­ôúloôfïôþlôqÖ'}>Ûƒîþïyçþѽ|OôôÕ<Ó³½Ð˳½ÒëYoôöÕ¼ÓûYôqÖ'}¾Úƒ~èïÂû•ܯmg?’ûq¹³Éý{»³¿Éý]:û›Üß·;û›ÜßÄ~&÷séìgr?û“ÜŸ¥³?éП…gö#»Kg¾³ó½íÌgv>·ùÍÎï—wæ7;¿—;óŸÿ¥³ÙýX:û‘Ýmg²ûSØâ~,ù/ÎÿÛý)îÏÛý(îÇåÎÇ—âÇ—·;û[ÜßËý/îÿåÎý£xÿ(ìqÿ·ûG9ì{e¿«û}¹³¿ÕýÝvö³ºŸ•ý¨îÇÒ™ïê|ÿÚû9ÜÏ'x£îÿjïôþÍ}œõIŸg=èñbÇçO†?òOôtÖ3=?Û ½¼Ú¹?TïÛÎþö··;ûYÝÏmg«ûûvçþP½?¼Ý¹Tïýoîc¿›û}¹³ÿÍý_:ûÕܯmg?šûјïæ|/ùkÎßåÎ|5çkÛ™ÏvÈçÂ;óÛßmg>»ó¹íÌow~;º¶ùîÎwg>»óÙ™Ÿîü|yg>»óùvgþû!ÿoöÁþ ÷gÛÙŸáþüéûÃðþ°tö¸ÿowî?ÃûÏåÎÇïáÇïmçþ9¼^îÜÿ†÷¿/ï܇÷Çmçþ6ûÛ›}rœÞ·ûÏôþ³tî?ÓûÏdÿ§û¿íì÷t¿·ýšî×åÎ~L÷c2ßÓù¾Úƒù ç+˜½?q²ŸÓýüôx²£ŸÓýü <Ñ=_ìÓûüÐË«½Ò뫽ÑÛYïôþjôñÕœûÃôþp¹sÿ˜‡ýãböû°/ý ÷wÛÙÏp?/wö3ÜÏ`ÿÂý»ÜÙ¿pÿ–Î~„û±tæ;œï`þâ¿ßÚƒïO¿?ýžèéwóLÏ_Í ½|5¯ôúloôölïôþ»ù Wû¤O{Ðû×ÅžØÿäþyçþ¼?,ýMîï—wîÉûÃoïÜŸ’÷§ßÞ¹%ï_—;÷¯äýkéÜß’÷·?»gþþ—ýûßåÎý1{\:÷ÇìýqÛ¹eï_™ýÏîÿ¶³ÙýÛvö+»_ÛÎþe÷oéÌo>äw×ùûEöï+/Ìwq¾—Î|ç»0ŸÅù\:ó[œß¥3ŸÅùÜvæ³8Ÿ¿ôò ýý¸yÎ ½<Û+½¾Úý°~»ÞéýÕ>èÃ>éóÕô8éŸ÷›ôDO¯vö«¸__ÞÙßâþn;ûYÜÏ·;û}Ø_·ý,îç¶³ÅýûíûC9ì›^¹¿Tï/_ÞÙïê~yçþQ½\îܪ÷‡Ê~W÷{Ûùø^ýø^Ùÿêþo;ûWÝ¿¥³_õЯ7{cšûÓøøÞüø¾tæ»9ßKg>›óy¹3ÿÍù»³ÍýØvö£¹ÛÎþ4÷gÛ™ÿæüwæ¿;ÿÛÎ~t÷ãrg¿ºûµíìOw–Î|wçûrçãO÷ãÏ—wö·»¿Kg?»û¹íìgw?;ûÙýüÍ}pÞ.wîÃûÃ`¿‡û½íìÿpÿ·ýî÷ÒÙ¯á~ ög¸?—;û3ÜŸßÞÙŸqèÏ–'îŸÉûgâþ˜¼?>àƒ>~7ŸôyÖƒ~Xÿ‹û_òþ÷€'z:뙞7/ôrÖ+½~5gÿ‡û¿í܆÷‡mg‡û»töw¸¿ïöÉ~N÷órg¿§û½tökº_Kg¿¦ûµtæ:ÿ—;û1Ýmg?¦ûq¹³_ÓýZ:ó?ÿ`>Ãù|»3ÿáü/ùçÿíÎ~…ûµíìO¸?ÛÎþ„ûs¹³?áþ¼ÝÙÏp?·ýŽC¿ùûkø÷ׯíŸÿø¸yÎ=õB/¿›WzýݼÑÛ«½ÓûYôaŸôùlú¡_›žØ¿äþm;û™ÜÏ¥gz>ëìWr¿.wö#¹zg?“û¹tö3¹ŸowîÉûösHÞ¶ûCòþÙïì~gö7»¿owîÙûÃ×öÂý»xÿ.Ü‹÷ßx¢§³žé>ÿ˽Ð˳½Ò뫽ÑÛYïôþjôñlŸôùlgÓ¡¿›žÙÏì~n;û›Ýߥ³‡ýmÛÙ¿ìþeö#»Kg?²û±íÌvþ/wö#»™ùÎÎwf>³óY˜¯â|-ù*ÎWa>ŠóQ˜â|,ù(ÎǶ3ÅùX:ç[<ߥsþÅóÿòÎ|çséÌoq~+óYÏ·;ûQÝßÞÙïê~o;û_ÝÿÊþV÷wéìou/wö§º?Kgªûó»{c¿šûµíÌ_sþóÓœŸËùiÎÏÒùøÒüø²tæ¯9Kçþݼ¯¼s~ÝóÛvîoÝûÛÒ™Ÿîütî_Ýû×¶3?ÝùÙvæ«;_Kg¾ºóµíÌ_wþ~í•ý«îßÞéý¬úáøV>éówó Ç‹ûCõþPÙÿêþÏô|Ö 
½<ÛÙŸîþl{£7;û×Ý¿?½sÿ8ìýìîçåÎ~öC?7}°ßÃýÞvös¸ŸÛÎþ÷wéìçp?·ýîçÛûÃðþ°íìßpÿ–Î~÷séì×p¿.wöoú·ðÉ~M÷ërgÿ¦û·íìçt?·ýœîçd?¦û±íìÇt?–Î~L÷ãrg¾§ó½tæ{ò½ð`>Ãù¼Ü™¯p¾.wæ+œ¯mgþÂùÛvæ+œ¯àþÞ¿ƒùˆC>.õÆ÷w6¿¿³ñý›Íïßü <ÓóY/ôòj¯ôúloôöÕ¼Óû³}Ї}Òç³=è‡~.<±¿ÉýÝvö+¹_Áó ŸßåÎõ9ìo¿öÎý¯{ÿëÜ_º÷—Îþv÷÷ ÞèíÙÞéýÕ>èÃ>éóÙôC>6=1_ÉùJÌWr¾–žéùwsæ?9ÿ—{¥×³Î~%÷kéÌrþ—Î|'ç{éÌo:ä÷ÍžÙŸìþ\îì_vÿ2óïmg~³ó{¹3ÿÙù_:óŸÿ¥óñ#ûñ#³ÙýÈÌvþw½0?Åù¹Ü™Ïâ|n;óYœÏmgþŠó·tæ£8owæ³8ŸKgþÊ!Ü?‹÷Ï•Wοzþ—;óQ¥3Õù¨ÜŸª÷§mçüªçW¹þõ°þ oìsÿ/wίy~—;çß<ÿ¥3Íù¸Ü™Ÿæül;?›?íƒýîç`ÿ†û÷/ôòj¯ôzÖý°¾W{§÷³>èãÙ>éóÕôx²cÿÞ?pö¯¹—;ûÙÜÏmgÿšû÷vg¿›ûÝØ¿Ãþ¹íìgs?ûÕܯmgšûs¹³_íЯ…wö«»_Kg~»ó»tæ«;_—;óÛߥ3ŸÝùìÌgw>/wæ·;¿ÛÎ|vç³3ý¿…æo8ÛÎý}xÌïp~ó5œ¯mgþ†ó·tÎox~ÛÎùÏÛ™qÈǦOæc:ÛÎ|MçkéÌßtþ¾¼3ßÓùžÌït~·ûóôþ¼tî¯Óûëd?¦û1™ïé|çžp}Ãë»t®xý·ë^ÿàú„×'¸>qXŸ-Ÿœïô|'ç;=ß|Ðdz}Òçwµ=žìxÝôûë&û1ݼÐË«½Òë³ùçûË;ûîW0Ÿ‡~¿Ý™ï8äû—|ÿLøý3¿'z:뙞_í…^¾šWzµ7z;ëÞíƒ>^í“>íA?ôkÓóŸœÿmg?’û±íÌrþÿôÎ~%÷ëíÎ~'÷ûíÎý#yÿHÜ?’÷¥³ÿÉýßvîÉûÇåÎý%ö—‹=sÿÉÞ2÷ìý#³_ÙýÊÌwv¾·ùÌÎçÛýÈîGfþ³ó¿íÌvþ·ùÎÎ÷¶3¿ùß‹½°Åý¸ÜÙ¿âþm;û[Üߥ³ŸÅý\:ûSÜŸmgÿŠû·tö«¸_owö³¸ŸKgÿŠû·tö§ús±Wö§º?—;ûQÝ-¯ß°?}Ü„Þ?>nÒ=õL÷ñ]î…^ì•^Ïz£·ÿæôNï¯öAg}ÒçYgÿÊ¡ ¯ìgu?—Î~V÷órgÿªû·tö¯º•ù«ÎßÒ™Ÿêü\îÌ_uþ–ÎüUçoéÌ_uþóÕœ¯Æù7ÏÛ¹¿7ïïKg¾šóÕ˜¯æ|m;óל¿ËùkÎßÒ™¿æüm;ó×ùÛôÎüvç·3ÝùÛvæ³;ŸKg~»ó{¹3ßÝùîÌ_wþ.wæ«;_û_÷þ×™Ÿîü æg8?ÛÎü çgÛ™¯á| Îxþ—;ó5œ¯ÁùÏpÿÞ¿ó1œmg¾†óµíÌç8äsÓ'ó9Ïmg>§ó¹tæw:¿Kgþ¦ó·íÌït~'ó;ßËý˜îǯß?^ýýãx¥×W{£·W{§Öw×}œõIŸö ÇIÇþàïÿ®ü~ïêï÷þ žéù¬³Óý˜Ìït~/wæw:¿“ù;ìKgþ¦ów¹3ßÓùžÌït~ƒù ç78ÿðü—Î|„ó±tÎ7<ߥsþáù/óÏ?8Ÿð|–Îù…ç÷{;¿¿¹úû›Ÿá‰žÎz¦çW{¡—g{¥×³Þèí¬wz·Oú!?+zœôÄ|%çërg>“óy¹sþÉó_:çŸ<ÿÄù'Ïéƒ>žíÌG:äcÓ3çŸ=ÿ¥s~Ùó{»sËÞß2÷§ìýiÛ™Ïì|.ùÍÎof>³ó¹tæ/;Kçþ—½ÿ]îÌw>äûÍ^ØŸâþì9¿¿µúû[_áA9ö?kå÷£V?ê3¼Ð‹½Ò«½ÑÛ«½Óû³}ÐÇYg>²ó±tæ'ò³ðÂ|úñå=ÑÓ³ý(îǶ³_ÅýºÜÙßâþ^îìoq·ý-îï¶³ßÅýÞvö¿¸ÿowî/ÅûKe«û{¹³ßÕý®ìgu?·ù¯ÎÿÒ™ê|\îœõü×·y}ßîœOó|ÞîÜŸ›÷ç¥3?Íùy»snÞŸ—Îý·yÿÝvæ¿9ÿÛÎþ´CVÎýµyÝõÎþt÷gÛ™ÿîü¿ÝÙ¯î~m;óÝï¥3¿ÝùÝvæ·;¿owö£»Kgþû!ÿ›>øûÇðïKg†û3˜ßáü^îÌ÷p¾÷ÿáýÿ×ÎþþÙÊï—­þ~ÙG¼ÐËY¯ôÃñïz£7{§÷³>èÃ>éÓô#ßþþÒÊï÷¬þ~ÏW8×ÿ¿Áõ^ÿ¥sý‡×Û9¿áù Îox~ƒóžßäü¦ç·töwº¿Kg§û»tÎwz¾Kç|§ç;9Ÿéù\îœïô|·ù˜ÎGpþáùçžïåÎ|„óÜÂûC0á|óÎǶsþáù/ûCxx»3áü-ùŒC>éüþÉêü~Çêïw¬üþÆêïo¬ü~ÅêïW|Ä+½>Û½õNïö ÖÿbOœoò|ç›<ßmg>’ó±tæ#9—;󓜟Ä|$ç#1Éù¸Ü}œõIŸ/u~bõ÷'>Ã;½ŸõA¯öI?¬ßW÷ ‡û—¿¿±òû«¿Ÿñ^èåÙ^éÕÎü&ç71ŸÉùLÌOr~Þí™óÍžoæ|²çóvÏôlçü³ç¿tÎ?{þÛÎüdçgÛ9ÿ|˜ÿʹdïKçþ’½¿¬¼0ŸÅù¼Ü™¿âüm;ó[œß¥sÿ*Þ¿–Î|çórg~‹ó»íÌwq¾·ù/Îa¾‹ó]™¿êü-ùªÎ×Ò™¯ê|-ùªÎWåü«ç_¹¾Õë»t®õú/ó©‡ùlzãüšç·íœOó|çÓ<Ÿ¥s~Íó[:ûÙÜÏËùiÎOc>šóÑ8ßæùvΧ{>ýëî_çüºç÷kç÷³U?[å÷¯UÿÚ#žèÉžé>¾m/ôb¯ôjoôöjôqÖ'}Ú9ÿæùwη{¾Kçü»ç¿íœÿ!ßóïžÿÒ9Ÿîù,½Ó»óëžßÒ9ßîùvη{¾ƒë;¼¾ƒë3¼>ƒýîÏÒ¹>Ãë³í\Ÿáõ\Ÿáõ™Ì÷t¾·ë?½þKgþ§ó¿íœßôü¶ý™î϶3Óù˜ì×t¿–Î|MçkÛ™ÏyÈç¦óÎW0á|l;óÎGp¾áùçžop~áùm;ç§×÷&_Ÿ›~}îžèé¬gz>ë…^^í•^íÞÎz§wyðþÃ÷üù៿í<¾8ß û¤;ßK:÷~¿Põ÷ ½Â3=?Û ½¼Ú+½Ú;½Û}¼Ú'}Úƒ~Èצ'æ'9?KOôtÖ9ßäù.óOžÿåÞèÍÎü%çoéÌOr~.wæ39Ÿ‰ùJ‡|]ì™ùÌÎçÒ™Ïì|n;÷ÏìýséÌ_vþ–Î|fçsÛ™Ÿìül;ó—¿¥3?Ùù)ÌGq>¶ù(ÎÇåÎ|çkÛ¹?ïÏ…ó-žïÒ9¿r˜ßÂ+×§z}¶ëS½>Kg¿ªûUùøPýø°í\ÿêõßvö³ºŸW{c?›ûùKoü~†æïgx†Oú|²~}´ùûžá‰žÎz¦ç³^èåÙ^éõ«y£7{§w;óYÏÊ|ÕC¾v=èñdoÌ_sþ–Îü5çoéÌWs¾çÛ<ߥsþÍóoœóü·ùiÎOã|šçÓ¹¿tï/Kçúw¯ÿÒ9Ÿîùt®÷úw®÷ú/ëÛ½¾ë×½~ƒùÎïàùŸÿÒ¹>Ãë3¸>Ãë3xþÃç¿t®Ïðú ®ÏðúL®Ïôú,ë3½>“ë3½>“ë3½>KçùÍÃù½Ùƒë^ŸËë^ÿàúòùéÆëc7_ûoôvÖ;½ŸõAg}Ò§=è!ÿüúÚÇM:×/¼~K¯tÏ/¸>áõY:×'¼>KçúñõÆë#7_ùžèÉžéÙ^èåÕÞèÍÞéÝ>èÃ>é‡ù¬<è!O\ÿäõO<ÿäó¿Ü+½Ú¹¾Éë»íœOò|×?yý×?yý3×?{ý·ýÉî϶s~Ùó[:ç›=ßÌ~e÷kÛ9ßìùfÎ7{¾™óÍžoá|ŠçS¸¾Åë»t®oñú.ë_¼þKçú¯oa¿Šûu¹s¾zþŠ×Wm¾¾jãõS›¯ŸúˆOú”~þ½ùú¢¯ðLÏg½Ð‹ë[¼¾KçúæW¸¾å°¾A9žßóõ¯¯Ø|}ÅW8×·z}—^éÕÎõ­^߯ü4çgé<¿æók<¾æãkÌGs>¿ŽŸóožçñwÿ¶óü»ÏÛ¹~Ýë×¹~Ýë7xüÃÇ¿tög¸?Kçù Ÿßàù ŸßÒ½Ù¹>Ãë3yþÓç¿tžŸžáõ­š¯oõý°~Õë÷€Wzµwz·úãü|}¤Æë5_ÿ¨ñúGÍ×?j¼>Qóõ‰¯¿Ó|ýÆë×4_¿¦ñú1Í×i¼>Lóõa¯¯Ò|}•Æë—4_¿äö“ùúx¥W;/ùø/ŽoÓñï‹æ_<à\ÿìõ_:×Gÿþàõš¯/Ðx}æë 4^_ ùúŸßoþüþí&}Ê?¿?¨ùóýŸoþü{ãç»›?ßÝøùíæÏo7~¾ºùóÕ8×§x}*ÿ~õß_z¢';çW=¿•7þü柿í¼ÿv¸ίy~Kç|›ç‹ß?üùÄGœÇß}üóïžçñwçñëñ“Ÿ¿kþüÝ#ôÃÏßtüþèÏç5~~®ùósŸkþüØ÷?üùÃ?é…^ì•^Ïz£7;Ïoøü&×gz}&ïúþ'ïúþñú‚?ÿÓøùæÏï<àÁõ 
¯/~ñçg?ÿÒüù—ÆÏ¿4þ¥ñóÍŸÏhü|Fóç3?_ÑüùŠÛ ú°Oúáø‚òÏïïþ¸IOôtÖyþz|¾~~ý‚ïßo~ÿ~ãûÛ›ßßÞøþõæ÷¯7¾?½ùýéïïn~wãû§›ß?ý€ã÷‹ðï/pž_öùe®öúg®öú®ñú®ñúãù3¿ÿô¯tçïrçú¯oåùWŸåϯþù•ó©žÞå÷G6¾ÿ±Þÿø€sý›×¿1ÍùkìGs?~éïoë~Û÷?ø|ü7é…^ìüùíðóÞéýàƒîóûüûÛÇMøàù ŸßÈôlçýßÿÒy|ãp|AùäúO¯ÿçß/>nÒ¹¾Óë;yüÓÇ?yüÓÇ<¾ðñï?|ÿÁûßðþùûEOxýÿãæ)Çùùýß/Þýýâx¢§g{¥×³ÞèÌß?Ñýþ‰Î÷/t¿¡óý Ýï_xÄ;½Ë?_öã&½Ò«<óü²Ï/óçgÿüÌŸŸýó??¾w?ç#Îã+>¾Âû/‡ûzœôÊùòõŸÎ×׺__{À??¿Òýýf·?hôvÖyÿõpÿƒ>äxüó÷S=â<¾æãûüï÷î×ðÎõë^¿Îûï¾ÿÏÏïÜ<癞í\Ÿáõùüï÷›tÿ8?ç7<¿Éã›>>ìÿÅûåãoõã/__è~}¡óû º¿Ÿ óû º¿Ÿ óû º¿Ÿ óúûÝ×ßÄ;ýàë ݯ/t¾¾ÐýúÂ#ô8éŸÿýØ}ýëûóþÿ€z‘þ÷×ÇÍOÎëÓv_Ÿ¶óú´Ý×§í|þ½ûù÷ÛTz=é…ÇW||…÷_|ÿKïô.Çミï¼¾d÷õ%;¯Ù}ýÈÎë#v_±óùóîçÏñAòοßý÷;ϯûü¶ëÓ½>ƒÇ7||KçýßÿÒ™Ÿáü ¿^ø¸ ÇãÇðã¯Ö}ý°G<è!ÿüú~÷õÁnPèÅÎó›>?<>ùùç<¸>ú÷Ÿßí~~·óú9Ý×Ïé|þµûù×Îç_»Ÿ}Ä3=Û'óãóŸÝÏ>♞í…^ìÞíƒ>ì“îóË<þìã_:Ï/ûüðøòÿ·v.9Žã@ÜÏmÄßK£ï™)^0Â@IF÷R ˆù¡Ê*=&Iï¿ð„OòiÎü4ççÔ_—õ×EýuY]\¿¿¼~Q¿\Ö/×ß/¯¿_\ÿ¾¼þ}qýøòúñ'œã?<þ“ñOÇ?9þÓã?éÿ´ÿ“ù™ÎÏbÿ|ÿ…úc¬?>áƒ|˜oò-~~_Åë+_ ¹ý?ß¿ñúÊpýc¼þ1Ô?cý3Ô?cýóð0þ8þ0þ8þ0þ8þpüâñ Ç/¿0?q~ÂñÇ÷Žã/Ç_Œ¿1¾r|§>볡>볡>ë³áúÄx}âOÃf|ÛñmÆ·ßf|ÛñmÆ·ߦÿÔwsáÿ‡÷å?åðÏëßžðç[^äŽÿÜ?ï}yp®_‹×¯½ù0_äË<ä1/ò?ÿ?{_’Ó¿Ëþë›âõM¡>ë³O8í7ÛoÌOs~óÓœŸÆü4ç§Ó~·ýNûÝö;íwÛï´ßmÿÔwc}÷Õ@ÿ†ýì|ôÏñ¿óÿ‹÷åÁ©Çúp¸¾"^_ñ„7òfÞÉû·|‘/ñÉø¦ã›ìºÿ[>Éç—|‘¯^äe¾É·x8>ñø„öcûw¼˜ßr~‹÷×Çý¿òøÝñÍçkûùÚ¿íñ;çïÞ—ßqÆGý7Ô·c};¬¯ëëŸðÇ|“ï/9þ?ñù¹¡¾ëÛáùªñùªáù©ñù©áù¢ñù¢áùñùáù™ñù™áù–ñù–aý}\ÖßÇõ÷aý|\?ÿÓÐ_s|w¼3Ýùëô¿ÛÿÎþ»ûÌïp~o9ýöïwÎúö¸¾ý?çâó¹Bý<ÖÏÃúö¸¾=<*>*<_*>_*Ôßcý=?(¬/ëËÃówâówBý<ÖÏCý<ÖÏ_ ì¹ÿ0¾8¾0¾8¾0ÿqþo9ýˇ|~âç§èÙÿ¿æ|þÊÏ_q|Ëã[Œ_ïWÖÇõßa}w\ßý„y™oò-ŽüXŸõùXŸÂ'¹óƒ÷§ÏŸë¿ãúï|Ó¿mÿnù"_æôÛÿMÿö‡Åy>B|>¾ȗyÈc^ä%~ÑþeûçüB\?þj —ý»hÿú°¿É¿;Þè³ÿ·œñ5LJï÷ò÷{aþû}ùäã[>Éç·œã×<~ÏWóóÕ8¾ÍãÛ8~œÿç—âù¥pþ'žÿùœöÛ‡ýǼÈKüœß‰çwÂúø¸>>œß‰çw~ûî0¾áø&ó3|ß»>ü §ýiû‹ö—í/æg9?‹ùYÎÏ-§Ëþ…ùóû+/Îß”çoŠû[–÷·,îoYÞß²X¿^®_/îOYÞŸ²8Rž?yÂ'ùü’óSÎOÑ~ÙþY_WÞÿ±8ÿQžÿ(Ö¯—ë׋óåùâþ‡åýŸpöOý½X?^®5\ä—y'ïæ›œÏë«ËõÕE}¿¬ï?á‹|}ÉÏõ_ïKðóý]Öß‹úzY_/êëe}½¨¯—õõâþTåý©ŠûK•÷—zÂi¿Ûþù~x_’_ä—9ó7œ¿Áø†ãÿáñŒo8¾;>Ùÿtÿ·œýS.îÏSÞŸçÕPä%~¾¿ÊûÛ÷·)ïoSÔ¯ËúõOCè_ì_x|Ñ¿²ç÷ßû’œý—û?çwß—ä´¿mÿÔËúñ«ñoÇ¿é¿Ïjð÷sø÷sð÷kø÷‹û³”÷gy5trþ}Pß-뻯†FÞÌCñ‹÷_¾ÿÔwËúnQß-뻯†I>ÅÏï¯÷%9ãoŽ¿±ÿæþ;ã뎯3¾îø:ãëŽïŽö¯ß¿Åßçåßgž_>¾X\®~5trÛŸ´?mÿ¬?*ïQÔ7Ëúf±¾·\ß[Ô/Ëúe±¾¶\_û„Oò)^Ìo9¿›ñmÇ·é¿þ¾y>nù|Ü'¼“÷oyÈëOËõ§Ox'ïæƒ|ˆwÞß}ÿïœëÏËëÏ‹õƒåúÁb}`¹>ð§ï×­÷ëæþ¹ÛûçnÖGlÕGüùïFEvºendstream endobj 851 0 obj << /Filter /FlateDecode /Length 1831 >> stream xÚ­YmsÓFþî_¡á“< Ë½KjKgh! –¶à–™„í8™Ú–±lN{÷öö¤S¬$“a”H·«}yöU'‹„'ÏG?LF'§Â$‚³’—"™œ%…Hr“³R™d2KÞ¦’ßO~:9µ:æÜ2kAòLÎÇ"{¾¢'MJ¦yøþ+‘>Þ'¿ ÕšqÑ2ÿ6–EZMÇ2Oÿ›V‹ù8SJ¥g  Þº{™þXíœê<‹ô®iµô”§ð€ÄÊ¿õd ‘~«qŒ³`Ä ˆë~?›ŒÜðD8`La™)m2]>Ž8+Œ”5¾Gbx‰N^¬dò´ýÿ)Y$q8Æ*Æ ŠÅ)9¹ØoðL‹2•ߌ3ÁUž>s^íˆeåkæÙÞq¡·žŸgŽPû„ïÚ'À|ŽeZò´>ó¯8úy§xZ¯6ãLÚ´^;æµ{ …éÙÞLáGY¯øùȇ]$%+­´ÎG%K–:QL¥wóƒç’2Æ"¬(Š$‹øÎˆÏ&¯9G6€SX&ŒðL”¾’GL’å¥ðYž×;n£ŠAY©BB6’+E[ DHò°GÆ¥­Vhëù¨™Ô­ºËu–IÈ2€ÂªœÊsHŸf¹Á—³ì„F ¬özåƒ{%¨/ÿrf›3 fB1Cé·ªB¾`]Q¡aÊaîì2‹ë}u©*¨€]¢~tüø¢ÏªFWJ&Ù.’pûêù(y‹R…éU,y/³ÞùÉiÙÔ'™Vaý+xáûqf¤J7pëZBM‘¥g‘ÌUYÿÈ¿5£²êJG¤á‹:%ÁK"¯¨÷ÄçÄÞÐqEÇsz…“ÊÛ&†" ÞIÅ”õÎaWpâöKr ñˆ_`}"Œ§¾óÛpW>ÃbØ™K$•þJœ¢Z-ªHÍ. 
ý$9g¹n‹æË[P™:0dÕ%AMÎ4ÔæÑWTVmçþZ’Ãz¾vÜÁGß ™Ïk‘[¦ 70À Öãù•ɲì‹Ô‹À&,s_?ähn®ÎHà~<¦&Ó¦mé \¯éyCé°¥ß]/Ëé  ¬îýÆ›WmP\F½Ú6þ|écb+r¤©½s¡!l^‚?þˆÕê ݶƒbêI˜Óadg)`òkíû‡¦†úÖH­,§9á$usÂQ¨N0gžõŠŸ{ê.*Àù ëJ)O_Ö$¸ þ&°5Áá…_-ÎÒ~ð ³ú5ú𤠀¶N‚¡ hÑú0hÔ9†ìüÜ~Ÿrr¦m@½N…E³6ÚÆŠçì×¾åaƒÐJC§¬vÍá¾h[èÒ4£Výù¦ö›…—{F•ïÔ@vçK|Àº^~ñGqüiÒ¾*¥ çÓäì»è‚L #p÷7¶j5§² ±Z?,èØÁæíÑâtF°Ôþñ;¸Í:JÛÒãíŸÎhź얟PF"¨Ýòch ‹³&Æ÷!ÏKƒ©[·¥ØjØÐÊuq'Å´P‡¼A}„Æ”]О¸;@¥OmyÐݾ 3wÑr9v]gÜ/óq·^Ð’°è䩨™aôl!¼ × $ιjÅ–p hÏ:1ž¼;dŸ94¯K¶AÓîjOd„¸Ýž!¼bí_£öФöˆD‹?Ldïã$J¸JåS´»¬:ÓVQf†]s߉ý¯ûXºc–àÓë-ãú(Æü}¹ÃNé\øñPEÅN#9ŠÊŠüYFýhM¦¬¢òYv¦mI³þ3Ýjg~,\ÞšÈhå4êë°Û_Dµßô„v–[À˜E!¬@¨¢¥y9ÍÈ€nÿÎ4eüÁ´¾aj˜áFYGÈ|oËáoîUäWðà²çßNœª?àzÖó£?è Î8ì¶™4pD_áo‹~M¸œy‹6ø.öž¾Ñ®Ò$Ѻ>B ºoúX¸â$ºë(Êò”Ðqº¨Y¸J|öÝ2¦é–žYYäH+‰f½ã)†òû«F”³P)“æ‘u9jðçáÏ+¼½¼Õ‡çÂáÜ‹U2ò_\ÁÀâuOÖ77¤R«!±™É­jÍ KC}X6ÉÜÅ"y½E¨MP,°¸+8ÞLH“Â}T6C8ô? Ï&£ÿÓœœ¥endstream endobj 852 0 obj << /BitsPerComponent 8 /ColorSpace /DeviceRGB /DecodeParms << /BitsPerComponent 8 /Colors 3 /Columns 75 /Predictor 10 >> /Filter /FlateDecode /Height 106 /Width 75 /Length 3784 >> stream xœíœ{PUÀoi‹…&Ð¥Y*–FZ P²‘¶áa¨–ˆöa Tf BÉÔG‡2Ã:¥ŠÃŒíÄAfêØ!âø1“ÑÂNEžVØòˆZpdá%%²&ò€Â6Tˆ¾?V,¥*¿¿6»wïÙ³çÜsîž»›u333à±Æîß¾€‡Îš†«Ÿ5 W?k®~Ö4\ýl0X"‘@tá‹………ÅÆÆ2™Ìîîî 6LLLØl6Žã&“ÉÎÎnÛ¶mUUU!!!{{{ƒÁ°sçÎO?ýôôéÓ“““555§N2™L555/¿ü2Žã·nÝÚ¼y³³³óÙ³g!Zš†ëVò¬$É%+6ËŠÖpY°à¥sÁqœ$IómØl6›Íž»‡$IÇ-v. —EÜ|,kˆaAnnnííí™™™ß'ÇñùWÀ58y÷/ô=ú¹=\»vmß¾}A¤¥¥Ù*n>–5œ½>•Je±ñ\Nœ8áëëK’ä| /^»È~’-FÄä]×à¬Í¬¹âèö2™Ì&q‹aƒ GFFlêúË/¿ ](Š£Â]Bd;€!á.!6€ÍGÛÐVq‹aAÃÖÖVAè›j“  EII‰§§ç½{÷î;„kp1"^ð,µZíííýñÇÛ*Î 4¬®® lkkc³ÙÖßT’$)ŠBQ̳!ö+¶˜zgg碢¢uëÖ-Áe†0 »ººÒ#~ð矆’8Øl’²ØYF±XüÚk¯õ÷÷Ï=„kðŒ—3;Q£ÑDFFÒÁ£­í†Ž±X (‹bAC‘HT___SSÃf³Ð1C,¤R’²˜ä£G¦§§t:ÝìNâ69šËàÛ·o¯¯¯Ç0ÌÍÍí?t´4¡Èå‚€X H\¼øWã…"´íDDD:tÜ70ØlpáËAü¥ä?Mš˜˜èîîh0fw¢M¨ººº¶nÝ:_\J Àq •³3H‚ñÀ:88466Òig1«A€‹ÍšôÅ_|ÿý÷Á?Çál\ †KJJ~ùåÿû!@ìa³—CÃÈÈH‚èÉ¡¹àÆfƒŒ ÀþŠû{÷î‹ÅAAA³ãÅÑÁ¢C—‚ ÄÄD‚H’¼~ýºåË· ²Ùl…Bq÷î][ƒÛüÄm&IÌA†ammmþþþ...Ö‹3ƒåŒOoØš Äb±§§' Ãâ6Ávµ*Ò·æÑÙ@ÛÐâôo>t>ÌÉÉAq”'-ºè\ Z.ZûôD’ä®]»†‡‡Í7kllDþ‚ðññ1Ö–-[†††èm…B188Èf³•Jå¥K—l·3¶••Eo¤§§Ó/½ô½ÐÔÔ´àYŸ|òIuuõÌÌL^^^qqñÌÌLuuutt4½qàÀ›ÄÑR̈»Çÿ xé•(’$¿þúë%œXWWgÍ3ñ‚â,>Ïçjmtœ\[·n]š¸%”mÖ¼t™X‚w-–ë¥...z½Þ××—öÉøøø˜˜˜çž{Ž¢(ŽãaaaNÇáp`V*•ãããccc Ãh4’’’ ½½½øá‡ÐÐPÀ•+WöìÙ366ÆçóKKKKKK:´sçÎééi.—[ZZ*‘HX,ÖÕ«W÷îÝ«Ó骪ª8œœ|úôé©©©{÷î!B÷6==M’¤™ŠîÊõRºXúà%S ÆÄÄôööŠD"ŸÏW©T}}}E½ñÆÇŽ›žžvrrBQô½÷ÞËÎÎF¤®®N$µ··'$HT*ætž?oÃlfÙY¹6\.ÿ•™5 W?²…T*E„ ƒÁèèèàñxF£‘Á`ЇH’¤wÒqO(¦¥¥EEEÑßß/0 ÃqüóÏ?.))ihhðóó{á…0 ;{ö,Š¢tÍê!ñÐ#Ͳ¬=,µà¥'OžT*•'44ÔËË Ã0€‹‹KKKK||_£Ñ>|˜®)>8K·¡5î÷¯»(°&Ò ww÷êêj‹åçç7<[Ñàr¹:Ž®mÃ0loo¯Ñhø|~~~~lll~~~hh¨‡‡‡V«=räÈ7ß|3>>ð···N§ëééa2™!!!z½^«ÕšL¦S§Na6<kooßÔÔäâââììl4ÃÃÃûûû Ãàà`@@À¾}ûPMII©©©P¥T*Ï;—››Ëçó™L&EQ&“I(bcc•Jess3“Élhh€ H&“åææòxøÀŒ†?ýô“»»»N§ËÌÌôòò‚a¸±±ñ™gžikk›˜˜P*•\.÷Þ½{ ÃÓÓS«Õ&''ŒŒ466޵··NwåÊ•òòò¯¾úŠÏççååݼyÓÑѱ££C§ÓEGG/¦ÂJÏøý X"‘x{{_½z5;;»°°Édêt:“ɤÑh¢¢¢4Mppð7x<^QQQff¦¿¿ÿéÓ§SSS[ZZêëëSSS/_¾üî»ï¸»»wttp¹\Š¢T*•H$¢(jóæÍ†¹ººÂ0 ÃðØ .ƒ ‚XªÆ#c¥{éƒcÕÓI’¥¥¥åååôb0Žã………Ï?ÿüèèhoooRRRUU•‡‡GjjêáÇýüüêêêöìÙ333sçÎwÞyÇòâÉÃä?oCEÅb1Ã0½^÷î]‚ ìííýõW‰DÒÒÒ2==‘‘A›®³³ÓÉÉ)!!ajjJ¯×‡„„¬„ñi³ WÝÕªqˆã8‹ÅâñxA¤¤¤( ‘H”ÝÜÜ7:::44´iÓ&£Ñøûï¿ûùùUTTœ9s†Åba& yóÍ77n<~ü¸J¥JNN~d·É‚ g½tA0 ûâ‹/Ž9rãÆ ŠŠ 77·¢¢"©Tjoo/èåŠÎÎΛ7oR•°qãÆC‡I$’¼¼¼'N$$$Ћå‰4+Ùu­4 ‚aXeeeTT”J¥Òét )))ôbøÛt|>ðÛo¿åååÑN+‘H„B¡™n——µlaÅ8äp8€ HAAÁÈÈEQwîÜäääAQTXX˜L&óòòzd¦›Å‚ gç4AüùçŸÑÑÑ—.]:vìAÖ¼E¾XóÒ9^zòäIww÷¡¡¡ééi­V‹ ˆÑh$I2**ŠÎ™ÅÅÅAAAA477GDDœ9sÇqzrCZPZZš•••M’¤@ øðÃQEQtpp011111ñah¸Ì6\ic)‘EÑÂÂÂØØØ½^/ é…šÊÊʧŸ~šÇãåææNNN2Œƒö÷÷···»¹¹Ñ5;''§sçÎÑk5‚<‚ÀcÃ73³@488¨V«§¦¦¾ûî»ééi&“™““C¿­·{÷îÞÞÞððp±X¼ IÅbñZ>\6–â¥sÓ Hjjjx<ž§§§F£ èîîMOO_ éäñ·áÚ*·Tj0îܹóöÛo/ø ÚÊgÍKW?ÖV1:::|||öï߯V«y<Š¢0 cVVV&—ËÕjµ£££§§çää$‚ ‚dggkµZ‹¿VM|¸üç½Ô?""b±[N„T*}ë­·ÊÊÊ´Zíb/ÿ!—˃ƒƒ¿ÿþ{‡¨¨(ë“óãi,g‹ÑÑQŠ¢(ŠJOO'B¥Rq837Çñõë×0™L@`¦¥T* V©TÍÍÍf>ࠛѫüIIIÖÛЪXj2™îÛsëÖ-3íé§ 
µZ­×ëÍ4{4XðÒåzü_Ä‚—ÊåòÖÖVú®ÌÌLNh‡`f4 ù|þb§qõêÕúúúääd•JÕÕÕEÿIÁ‚,ÙK­²¡•}­L,ŒC¹\'“É||| …L&‹‰‰±¦_ þýö[ú\ ÃkFÄ+¯¼B’$-ÂL‡R©”î0..ÎL‡óyü³…îØ±ÃÖ{Fc¥ Á߯‘Édqqqæ›Ñ¦Þ½{7aÍWêóŸ·ácÀš†«Ÿ5 W?k®~Ö4\ý¬i¸úù?‡–ïjendstream endobj 853 0 obj << /Alternate /DeviceRGB /Filter /FlateDecode /N 3 /Length 2596 >> stream xœ–wTSهϽ7½P’Š”ÐkhRH ½H‘.*1 JÀ"6DTpDQ‘¦2(à€£C‘±"Š…Q±ëDÔqp–Id­ß¼yïÍ›ß÷~kŸ½ÏÝgï}ÖºüƒÂLX € ¡Xáçň‹g` ðlàp³³BøF™|ØŒl™ø½º ùû*Ó?ŒÁÿŸ”¹Y"1P˜ŒçòøÙ\É8=Wœ%·Oɘ¶4MÎ0JÎ"Y‚2V“sò,[|ö™e9ó2„<ËsÎâeðäÜ'ã9¾Œ‘`çø¹2¾&cƒtI†@Æoä±|N6(’Ü.æsSdl-c’(2‚-ãyàHÉ_ðÒ/XÌÏËÅÎÌZ.$§ˆ&\S†“‹áÏÏMç‹ÅÌ07#â1Ø™YárfÏüYym²";Ø8980m-m¾(Ô]ü›’÷v–^„îDøÃöW~™ °¦eµÙú‡mi]ëP»ý‡Í`/в¾u}qº|^RÄâ,g+«ÜÜ\KŸk)/èïúŸC_|ÏR¾Ýïåaxó“8’t1C^7nfz¦DÄÈÎâpù 柇øþuü$¾ˆ/”ED˦L L–µ[Ȉ™B†@øŸšøÃþ¤Ù¹–‰ÚøЖX¥!@~(* {d+Ðï} ÆGù͋љ˜ûÏ‚þ}W¸LþÈ$ŽcGD2¸QÎìšüZ4 E@ê@èÀ¶À¸àA(ˆq`1à‚D €µ ”‚­`'¨u 4ƒ6ptcà48.Ë`ÜR0ž€)ð Ì@„…ÈR‡t CȲ…XäCP”%CBH@ë R¨ª†ê¡fè[è(tº C· Qhúz#0 ¦ÁZ°l³`O8Ž„ÁÉð28.‚·À•p|î„O×àX ?§€:¢‹0ÂFB‘x$ !«¤i@Ú¤¹ŠH‘§È[EE1PL” Ê…⢖¡V¡6£ªQP¨>ÔUÔ(j õMFk¢ÍÑÎèt,:‹.FW ›Ðè³èô8úƒ¡cŒ1ŽL&³³³ÓŽ9…ÆŒa¦±X¬:ÖëŠ År°bl1¶ {{{;Ž}ƒ#âtp¶8_\¡8áú"ãEy‹.,ÖXœ¾øøÅ%œ%Gщ1‰-‰ï9¡œÎôÒ€¥µK§¸lî.îžoo’ïÊ/çO$¹&•'=JvMÞž<™âžR‘òTÀT ž§ú§Ö¥¾N MÛŸö)=&½=—‘˜qTH¦ û2µ3ó2‡³Ì³Š³¤Ëœ—í\6% 5eCÙ‹²»Å4ÙÏÔ€ÄD²^2šã–S“ó&7:÷Hžrž0o`¹ÙòMË'ò}ó¿^ZÁ]Ñ[ [°¶`t¥çÊúUЪ¥«zWë¯.Z=¾Æo͵„µik(´.,/|¹.f]O‘VÑš¢±õ~ë[‹ŠEÅ76¸l¨ÛˆÚ(Ø8¸iMKx%K­K+Jßoæn¾ø•ÍW•_}Ú’´e°Ì¡lÏVÌVáÖëÛÜ·(W.Ï/Û²½scGÉŽ—;—ì¼PaWQ·‹°K²KZ\Ù]ePµµê}uJõHWM{­fí¦Ú×»y»¯ìñØÓV§UWZ÷n¯`ïÍz¿úΣ†Š}˜}9û6F7öÍúº¹I£©´éÃ~á~éˆ}ÍŽÍÍ-š-e­p«¤uò`ÂÁËßxÓÝÆl«o§·—‡$‡›øíõÃA‡{°Ž´}gø]mµ£¤ê\Þ9Õ•Ò%íŽë>x´·Ç¥§ã{Ëï÷Ó=Vs\åx٠‰¢ŸN柜>•uêééäÓc½Kz=s­/¼oðlÐÙóç|Ïé÷ì?yÞõü± ÎŽ^d]ìºäp©sÀ~ ãû:;‡‡º/;]îž7|âŠû•ÓW½¯ž»píÒÈü‘áëQ×oÞH¸!½É»ùèVú­ç·snÏÜYs}·äžÒ½Šûš÷~4ý±]ê =>ê=:ð`Áƒ;cܱ'?eÿô~¼è!ùaÅ„ÎDó#ÛGÇ&}'/?^øxüIÖ“™§Å?+ÿ\ûÌäÙw¿xü20;5þ\ôüÓ¯›_¨¿ØÿÒîeïtØôýW¯f^—¼Qsà-ëmÿ»˜w3¹ï±ï+?˜~èùôñî§ŒOŸ~÷„óûendstream endobj 854 0 obj << /BBox [ 0 0 576 403 ] /Filter /FlateDecode /FormType 1 /PTEX.FileName (./categoricalVGAM-029.pdf) /PTEX.InfoDict 718 0 R /PTEX.PageNumber 1 /Resources << /ColorSpace << /sRGB 720 0 R >> /ExtGState << >> /Font << /F2 719 0 R >> /ProcSet [ /PDF /Text ] >> /Subtype /Form /Type /XObject /Length 65854 >> stream xœä½M,Irž»?¿¢–3Àe1ýÛlKB0€HÓ€$E 4É+ÍB¸ÿþf|ä©´çmMxfEeŸ¾w1=ÇÓžÊp·×Ü=Â,"2½ýá-½ýéí~ûÏ×ÿýõŸÿËø›·¿ýã·Ëûåry»ÿïÿö?]?ííû»x»¼ý÷oéí×ÿýé[Z€·ÿø­Û{º¼åÖß›½ýË[.—wO·öÏo$pk^Þûeµ§œßËø¾·?ˆÑß{½#níïD<ê]/¾óÓ>Ìüó/ óÏÿø¯ßþúßç«O~úç·mdÿÙÚ=¿·ë1K{¿Œ·Ÿþåíwùòû·Ÿþôíßý´~÷_þã”ú»¥û¿®üõHï»ÿëþÀ_çÜÞK»ÿkûþ×ÕWâªI/W·mÍÒý}lÚÑ^Þ/åÖ¼³›½[ÿ°oÍ{¹\ÿÏ¿Û÷æ=û{Éö­yg¿þ_kö­ygß:üÝþÑÿ ï´÷~uN¿:§®Îù«þ€k·?¶öž7ÏþU}ôË¥½·ËöÇyþKz¯~õT{eýãbÿÛÚÞ½o‘üðßöþž|‹c„Ò>¿å~yoëÀ­]l¼÷klÜ”‹Íïô—ui;n¾¤e/AõÑú_ÿôö_ßþõó“:Õþ^ó[Né=m¾øoÿãŸ~ÉwóW)|ËèoÙÛ»­þÝïþïûùÿùûí[R{ûÝÿõ–ÿþ÷¿ÿ‡·Ÿþ°|ã2¨0ÒåÀW¯¤²Nð}Tß—ãôÄrÜÆ{¾FUòw«Wš/"—¼:ÿço×E¬kó:ܶ4¯+ƒ¯Íe¾ì+uYÖßëôü6.Ëšµ4},ÍòÞ–ÉY–uðÚìï¹^›uù›kÓÞ½­Íeiøfi;P{O lõ½ØÚܬc;Ð5Š–^™oººd9_WÚ|mŽeD×fÛ4¥®Í±h¼ÛÒL—ËA%Û{¾¬í²ÊÞ[_Û}Yl–¶§µm‹VË“—®¤E¼¶×a¥TÃÒÞ¾ÿºrûõð× eY³®m_wm—•ßv«¥½v>]Çx•ri¯>MKG/k{õb*Û<¸*¾z&•òžóÚÎë÷_—éõx׉¶ö¯ØºL\ÛËš÷-Õô^úÚîëß׺ïªáÚßkDßÚëñ—Åò²¶·ñ´|;ž­ßê»—ûöX¾xm¯Ç¿ÎÛýxcí_/Ë6²¶WþCc;þ:‘Ó5ˆöãmþ¸î]yo]í×0òíøeýþm/_ý±öwø²[-í˪—åíx—½?×P›û:žk,-+Ò¢ÇÚ?¿lÇ»l!ž®Ñ䛾¶Ù×\Ú}ý~·õxËž³´—æ²Å“ûÚ.ïc¬íÞÖvßV^ÛôÊ×Í,_Öxô¥?ËŒö´¶ÛÖ®ÛñÆë9%p–`ï‹òuj^ÚÚÎ+§±Íœ±ïúºMÜ×¶-Kï2Ñ6ûv¦³´×xÎ×xêy¦}]wK_½´·ãߎ——Ù|m_ãiôu ð­]ßë¶D”ËÚÛñ.›¾ù?WÇ- ̲Go+õñäuJþŸ—¬¾„Þ÷s·½¹íË&~]îü°oÍhoÑÞîí×]òÝ èÖŒvv‡=µ`Oüþ4¢}ÐnÑn°—ìK3Úcÿ ûWãß×ø÷yÙ>ì[3Ú{´wÚ-Ú 
öÿ¾ðïKüû"ïÑî°‡ñe_ÍÑžiÑ>`oÑ?þ9²xüÁãÚcÿû7¢ýcÑ¿ý»®Øö­ù=E{¢=_‚}iÞÙk¸6Û›ÑÞ¢½ÑîÑÇßÖ«Žïö­í-ÚííòýŸ´§ì©Â^.Á^.´Çþö?̯ÆùÕ–S‰{{¥jüûÊ¿o±ýk)Úí9Ú3í±ý;´h´G}õéq|ãëñøÇïÑþQÿAýGÔwPßÇ78>‹ý7ößâ÷¿ÿÐÞ£½ÓûgÒ¿/¶{¿ÇñåRàþ5£}Dû€½§`ï‰öí™öøý]¾ß£ÝO¶K°úçÐÇ?8þÇ?8þQ¢½ÈÑžµ—h—ã·ho´h´{´;ì5ö¯²5ö¯²÷ñµ7£=ö¿²ÿ‡öív‹vƒ½Åñ5ŽïÐßxüýßèÿû룽íqüãïñøÇïñøÇñøƒÇqüƒãQßA}Gìß`ÿ,Îãü±x|ãñí1>ñyhã3ŽïÐÇo2þÏÚãü5Î_ú:õõè§ÿ=ú×é_ãwŽßcü9ãÏãütÎOã‹ç#…ó¿½ù˜½F{¥½E{;Ùžbÿû—âß'ù{‹vƒýþüqožk/Ñ^µGÿgúÿ>¾7³h´Gÿeú¯ÄñŽ¿Dý õ û{âþžÂùéÞ|ÌÇÏOG¹¿sàmoF{ŠöD{ŽöL»G»?hO%ØS¡½G;Ç—cÿ3ûŸcÿ3û_¿Ù›Ñ>¢}Ø›Á^cÿ+ûί ϯJçø<ößÙÿC{Ÿs|a~7Îﯯ:¯¯&ì=Ú;í#ÚåøGvv‡=å`O™öíåQ{‹öö¨=ú'Ñ?‡öèŸDÿ¤èŸ$þ9°‡ëÇÎëÇ {Šöô¨=Ž?sü_nþÍôï¡Ý¢Ý`/qü…ã/1þ ãïÓö8 ço‰ñ[¿%Ž¿pü%Ž¿ÈøcüÆßgí5ú·Ò¿5®•ëá=ú·Ò¿‡öèßJÿÖŸ•ñY£+ýÛâülœŸ‡ö8þÆñ·ØÿÆþÚÇ×d|Ÿ´÷8þÎñ÷?ñÓ£:ýshñчöè¿NÿÚc|uÆWó»s~Ú£ÿ;ý?¢ÿýÿi{\ß×·C{ôß ÿFßñýE»Åùoœÿç¿qþ[œ¿Æùë·ƒõÛß‚=G{~Ô^¢½íQŸN}îŸ>ݛ٣:ýÓ£:ý3bÿû?bÿû?bü ÆÏˆýìߡݢóßbÿý·Øcÿ-öߨÿC{Œcü[ÔϨŸÅñÇÿi{ôŸ‰ÿ¾ÚîÑîÚ=êëÔ×£þõ8¿óÛcÿý÷Ø?Ô_=Ö—œõ¥ {ŽöL{‰öò¨½E{£}DûxОbÿûhýOìÿ§íqü‰ãùg~Àc~À™ðXvÖŸ=Ößœõ· {öJ{ì_fÿrì_–þy´3¾KŒßÂø=´§hOÚ£ ýS¢þ…ú—è¿BÿÚcüÆO‰þ-ôoþ©ôOã«ß§íÑ?•þ©qü•ã?´GÿTúçÐýWÅ1>cýÔR˜ÿ{3Úk´×Gí=Ú;ííìß}~Ïxÿˆ¥¿Û›çÚ£2ý“ãø3ÇŸãø3Ç¿¾ï™±Gÿeñßý~}0Þ_b)¬{3Úãø Ç_âø Ç_âø ÇWbÿ û_£~•úÕØ¿ÊþÕØ¿ÊþÚcÿd~µ¿ñ{hú4ês}·7£=ö¿±ÿ‡ö¨O£>GöÇ×9¾õëÔïÓö¿ñÛc|tÆÇ¡=ú¿Óÿ=ú·Ó¿÷ù«½í1¾:ãkÄñŽÿÐý3èŸû?Øÿû?ØÿOÛc| Æ×ˆþôÅø3ÆŸÅùeœ_õ7êoQ£þ‡ö8~ãø-Žß8~‹ã7Žßãøã÷Îøðè§í1¾œñåÑ?Nÿx¿süǯ/—7”Ü?ãúqù E{¢½E{£}Dû€=Åã'?ÅïOüþû룽ù˜Ý£þɱ™ýûËöçGáüˆ÷/ï_š±÷hïÚ-Úvv‡=ÌÂùï¯2Þ_5c/Ñ^hþuú×ãøã¿Ïß±Gÿ9ýçÑ.þû”½Æù[9ãý_Æû¿–j´WÚ[´7Ú{´wÚG´Ú=Ú9¾ûŸØÿûüÏÞŒöØÿÄþÚÇ—8¾OÛ£’øçÀž£ÿ2ýwhþÍôï¡=ÆWf|åèßLÿ~Úý›éß#{‰þ)ôO¸þ«¼þ›°GÿúçÐç_áü+q|EÆwd·h7ÚcüÆ_ëSåúT£+ý[c|UÆ×§í1~*ã§FÿVú÷Óöè¿*þ;°·èßFÿÚ£ÿýߢÿý×¢ÿýwhãk2¾/¶÷èŸNÿÚ£ÿ:ý×£ÿ:ýwhëGçúñi{\_:×—õëÔ¯ÇøîŒï×—ÎõeDÿú÷Óö¨Ï >#úgÐ?#Žpü‡öèŸAÿŒèŸ!þ9²Çõyp}¶è£,ÆŸ1þíqüÆñ[¿qüûçìß_¶÷8¿;ç÷±=ÌoÞ_fñþ+ãýWËñû»|ÿ=Ägg|NØs´çGí%Ú íqüƒã·è?£ÿ,ßxüC{쟱Ÿ¶×h¯´h´[´íQ£þGvþ•ø÷è?§ÿ¾ÜýëôE{£=ÆŸ3þ¾Üõuêûi{Ô×÷#^ß^ß¿Àž¢==j/Ñ^µ×h¯´·ho´[´Ûƒöý“èŸC{ŽöL{âø¿Üý—è¿Ç—9¾Ç—9¾ŸyüõÍÔ÷ÐûŸ¥ÿ#ÚíQÿLý?m÷hçü>²—8¿ çס=êS¨O‰úêò ƒù…ìQŸB}Jôo¡?k¯Ñ•þ;´GÿVú·Æø­Œßã·2~kô_¥ÿjŒŸÊøiqþ6Îßûߨÿã£1>Zì_cÿíQÿFý?mú7êߢÿp}ïo3Þßfñþ0ãýagØk´WÚcÿŠôÏ£ã;²‡üï/[>HÑžhã«_ã«ß¡½E{{ÔÞ£½Ó>¢}Ðý_éÿý[éßýÛèßýÛèß/·çhÏ´G}õ=´G}õmQ¿FýZÔ§QŸý/ó»ÇñwŽÿÐýÓéŸ/·Gÿvú÷‡·Çù×9ÿzœ_óëÐõïÔÄù78ÿFÔPÿOÛ£¾ƒúŽè¿AÿÚãüœ_‡ö¨Ï >Ÿ¶Çù;8íQÿAý-úßè‹þ7úÿÐýoôÿ§íqý3®_núõ;´GýŒúyœÎùwhú:õõè_§=öÏ¥GöÎøû´=®_ÎõëSv¿„üÏÞ|Ìž¢=Ñž£=Ó^¢½œm¯Ñ^µ·ho´÷hïÚ-Úvvê—¢>‰úÚ£~‰ú¥¨_¢~‡öèßDÿ¦èßDÿÞßãü}*¿Oåü}ªåƒè¿$þû¤=GÿeúïÐã73~íÑ™þû´=ÆofüÚ£>™ú”èŸBÿ”_…ñõ—íñþWçý¯ïuÞÿêñþWçý¯Ë9Ú3í%ÚËÙöíõQ{öNûˆöñ¨Ý¢Ý´§¨O¢>_nú'꟢þ‰úÚ£¾‰ú¦¨_¢~‡öèß$þ=²{´sþÙsæøsæøs_æøíq|™ã+QÿBýíq|…ã+q|…ã;´ÇñŽïþùѽù˜=ÎÿÂùhó¿pþ—èÿ"þñS?_m¯QßJ}?mëGåúqhñU_‡ö_•ñUc|UÆWñS?‡ö?•ñScüTÆOñS?Ÿµ·¨_£~-êӨϡ=êÓ¨O‹ú4êÓ¢>úÚ£>ú´¨O£>-ú¯‰ÿâüjœ__mïQ¿NýíQŸN}zÔ§SŸOÛ£;ý;b| Æ×ˆýìÿ¡=öo°#Æ×`|?ƒñshópþ¨ÿ þGv‹úõ·è_£-úÏè?‹þ3úÏ¢ÿŒþû´=Îoãü>´Gÿýo1>ñiÑÿ&þ?°{ô¿Óÿ‡ö¨SŸC{Ôϩߡ=êãÔçÐõqêãqþ8çGýœúÚ£¾N}=ê‡ëçøûÎ߯x…½F{¥½E{{ÔÞ£½¿Ô^Cýjoû}ýÊyÿû+ì)Úí9Ú3í%Ú í5Úë«í-Úí=ک߈ú ÑïÀnÑÿFÿ[ô¿ÑÿŸ¶G}ŒúÚ£þ=´Gÿý{hÑ>ζ[´Ûƒvú:õ=´Çùåœ_õqêshú8õ9´G}œú|Úçö§ó»ùÝó»ùÝì9Ú3í%ÚË£öíõQ{‹öö¨½G{§}Dû8ÛnÑn´{´3>RÔ/Q¿OÛ£~‰ú¥è¿Dÿ¥8¾Äñåß™ñ}hýÏìŽñ›¿9Ž/s|‡öŸ™ñ™£2ýóY{‰þ)ôO‰ý/ìÿ§íqü…ã/qþοÇWd|Gö8¿ çס=ίÂùU£+ý{hñYŸ‡ö¿•ñûi{Ô·Rßõ­Ô÷Ðõ¯Ô¿F}+õý´=ê[©o‹ú5ê÷åö¨O£>Ÿ¶G}õ=´G}õ=´G}õ9´ÇùÛ8[Ô·Qßýßéÿ/·ÇùÝ9¿íÑ¿þíÑþëqÿíÜíÑ¿þqüƒã?´ÇñŽÿËí1þãÿÓö¸þ ®#ê7¨ßˆúÄûC=>ßã|¾gù D{yÔ^£½>joÑÞhïÑÞi·h—ñÙ=Úýd{Xßù|ÒŒ=G{~ÔõkÔ¯E}õiQŸF}ÂúËç“fìÑþëÑ?þé1¾;ãûËíÑÿþïÑÿþ?´ÇøïŒÿã»3¾¿Üõë¢ß}Dÿ úoDÿ úçÐã{0¾GŒïÁøÑÿƒþ1¾ãÛbüã×b|ãëÓöè£,úÇèŸC{ôŸÑ‡öè_£-ú×è_ãwŽßc|9ãËãñÇ÷x|—ãÙãüqΟ¿l·˜ß3æ÷N°çhÏÚk´WÚ{´÷³íí{ŠãOÈoó[öíåQ{‹öF{âøÃýãÆûÇO°[´3>SôÿÒž£~™úå¨O¦>9ú?Óÿ9Æofü~ÚõÍÔ÷ÐõÏÔ?Gý2õËÑ¿Yü{`/Ñÿ…þ?´G} õù´=®O…ëS‰úêÿi{Ô¿PÿC{Ô¿PÿC{ŒÂø8´Çø)ŒŸçáü¯QÿJýkÔ¯R¿C{Ô·Rßõ©ÔçÓö¨_¥~_núWêÿi{ŒÊøør{Œ¯*ñ×§x}.ñ­[DѾ„è ºÄx‚0&„ƒÝo™·6Œ6Ëh'ø#‹?2F›e´ü‘Å? 
]²êrHÜo¶·ö¯B ‚ŠDÐ)fv‘™}¿9ßÚ *ˆú%b½H¬Dr‘HþQˆ õ«¨_¡\å&(WE¹ ŸVñiÅXªŽå¢! ›DaÃXšŒe‚@Œ5‰±†·ÉŠÛà±&kmÓÑbk²ŽB`¥k²ÒqÚ%N;¢°KvhÛEÛ ÚvѶC¹.Êu(×E¹ŸvñéŸvõé Ä€.Ct9…€.Ct™  ÜåN! þõÔ¢þ]†èbð©‰O óÅd¾|jâSƒ?LüaXéLV:ƒÇLW°ÏÙçN" ÷¹‚]¬È.6CdÌÊ,³òE”Ë¢Ü)´eÝãeÔgueŠ€¶¬YœD8Ù_މ‚yËŠÄIv1V$ ê Eê SÖVN"AE"è‡!§¬Ô,ŠÔ,JxHëÖŽD…ú¬jÔ,ŠÔ,¦¨ÏªF åþ[eÝcŠÀ|aUc†PŽ5‹)3jÈŒ:…@±22E >X÷x(…QÈúË(…mY])¨®©®ÔNŠÔN r9Er9SæœÉœ› àÖ=¦ÌJf¦ÌJ“YyL8”cnkŠÀ|aíä˜hØø¤ë$1@Œ'aB8œˆûŸ‹]?I ’„øô¢€(BTUˆ¢=A@}î• {e“½rŠ€úÜ+vB>ñ;G hË=ªaåçs³ë'ðW톪9Ÿ]??¸®7¬ëMÖõ†u½Éº>E YW?‰€O¹¿4TšTö—&û˯sohXù›¬ü'ˆSî +“•Š€ú¬{LÐVwSÄëû Õ•&Õ•)ñáf6k8 õ—&õ—†êJ“êÊIbŒÕ•3ˆŽ{ºÜ«ÐQ¡éR¡é¨jt©jœDTUˆ¢=Atý b€BÂAˆr ʱ"1E@[Þ#ÐñŒD—g$:ª]ªStaÝcŠ€×Y³è¨«w©«Ï>e]½ã‰.ÏHtT$ºT$¦ÌÖ¦è’E—ŒXg%`Š€.|££Ð¥0CèÂý07þ2±ÎÌ÷I”cö¼‡øní/ 0瘟!*´eŽþ$Ê1ÿ›"¬tÔ ºÔ N"°:ð)Š—ˆÓ*qz Hfu¥£vÒ¥vÒ—FBö—õY™"¬LPŸY§)Ê1ë4EÀëÌ)udŒºdŒ:²=]²=Ç„a3ÙŦá$â>ÇçêO#ˆô‘AˆÇbMœÏп¨ ª D{‚è úÄ1¾„@œrWÿÌž;ö}¾‡à4³wL˜/UæKE|p¿" >÷Ê)ºp'œ" ÷JÃ^i²WNð:wÓ)ºð^…)kŸó4ì·&û­a75ÙMO" \åN! >÷ìsˆŽø`-iŠ@|ðÜÁpf`¿pf0A >XK:‰@Œ±–d¨%™Ô’¦¬¬%Mˆ V›fˆmyW…¡ÚdRmš" -" kZ†»LîfxbmÍP[3©­jk&µ5Ã3«&ϬÎåX[›" +gS´emmŠ€.¬¾îÌ0¹3ã$ʱ>÷2ê³>g¨¾™Tß µ5“Úš¡.fR;‰€¶¬‹MPŽU¯“èÂÊÙåX±:&5-—𖣦åRÓr<1äòÄ£båR±rT¬\*VŽz”K=jŠè úÄ1„0öá D¹åX±rT¬\*VS”ã“:S´eMËñ”ËS6ŽŠ•KÅÊñ Ë34ç^gMËQÓr©iM˜Q¬iD@9V½U/—ª—ã—ghN"¬œD ÆX}›"AÌJN˜ûÌ[Îqʬ¤#çè’stä]rŽ'ˆBfO"ÉÈJ¦K¼Ëdoƒp~>7b$y#ÆYD‘„È ²D¢‚¨O Dû¢ƒèOÄx‚0ö+Bãô"! “DaB&‰Â qš$N'Äi’8 …I¢0!Æ’ÄØ‹Äi’8Mˆ $t JA/!2ÖÓ,ëiF$g‰äŒ8ͧQ˜% 'ÄX–ËP?‹ú/"cYbl‚@e‰  êgUÿ%DA‰ ‚*AñQ$>N!°Òñ\ê$‘\$’±^$Ö "¹H$Oˆä"‘\cEcì¨X骬tâ´JœNˆ*ñQ¡K]*t©¢K…׫x}‚€.Ut©ðzU¯ ^oâõS¬AMÖ åš(7A@Û&Ú6hÛDÛm›hÛ \å¼ÞÄëëâ± >íâÓSx½‹×;|Úŧ>íâÓŽXïë>íêÓˆ¯ñú ñؽrÈ^9A`6 ™ ´¢íõ‡¨?0£†Ì¨c ‹‰.†X7‰õPßDýSćI|Lˆ“ø˜ &ñaˆ“ø0¨oªþV“Õá˜pD¡Kþ0âÔ%N'D¡K:âÃ%>&DKMˆ1—sćK|LØÅ˜éýQˆŒ,m–,mF–6K–vЍ ª D;ŸÀÛ_’¼ýeŠˆû¾¼ý%áÍ-IÞÜ2GåW"*ˆ*DÑ…0ö]†êò wõ‚¹ÈŽŒ÷Ã$y?Ìò ¼ÎÝô$¢ùbЖ;2Þ“äý0 oIòö—9ºp¿Å»]’¼ÛeŽ€rÜŦø”;PÁTd*Ø_Šì/5¾"5¾×»G•Ý£bo¨²7ü¦ˆ ¢ Ñ@4!:ˆ.Ä1„0öá TÛˆpÑÞþUÄ)«€5¾*5¾)¢€(B ‚XÁ«¨àU©àUÔÖªÔÖN"…¬ÏUÔÖªÔÖ*jZUjZ5­*5­ŠšV•šVEŪJÅêe´Í¢m†¶¬œUÔ´ªÔ´¦(ÇšÖmYÓš"°:°5CÄ+VS"ˆ5­ŠzT•zÔIâƒõ¨Š:P•:Ðø`¥è$ÄZRE¨J¨¢ÊS¥ÊSã3x{¼^Åë? 
õYmª¨6U©6UÔªÔ*ªa··&àSVy¦xu )ó…•¢ŠJQ•JÑD >X±ª¨GU©GD ‚Xmš"A]"¨#>ºÄÇø`Õ«Æççö6ÌlV½fˆŸ2/U‘uª’u:‰€Ç˜1:‰ÀœcÖ©¢RT¥RÔq@—{N"*ˆ*DÑ„pN"®Ú]VíŽ5¹Ëš|Q@!à®Éëi—õ´c벎u¬0]V˜)"HBÀ\ƒ~ ºp-<‰€¶\-O"0_¸âžD y¯ÂD`6p™"°Â°îѱ¿tÙ_:îfèr7CG½¡K½aŠ@|po踡˻G—Ý£c÷è¿°{ÄšE—šÅ að)ïU˜"°J±"ÑQ‘èR‘8‰€r¬jtÔºÔ:ª ]ª •€.•€s‡×Y+è¨t©tÔ³»Ô³;òü]òüSbŒÞŽ,~—,þÄ@ö|Höü$¢‚¨B4íKˆ¢ a L¡>} ³çC²çS”cn| 7>$7>ù’ùÈ8É8OðŸÄÈZÉZd­‡d­§xŒyíŒóŒó@ÆyHÆyŠ€×™“ž"0˜“È8É8䓇ä“G|Þto?L@f‹§èÂLï]˜ >‰€×™¥¸ãÈÿSf3½™Þ!™Þ«è!WÑS¼ÎìXç•ø1KëX-]VËßQ@!*ˆ*DѾ„è ºÄÂ@¨¶gÂIÄ=JÞ1G$é ñÁ] o€Hòˆ×Ж;ÐI´å.†÷*$y¯BÂ{’¼WaŽ€rÜÅðÖ„$oMHxŸAÒ÷8ª‘.ÕHG%Ñ¥’èØ_\ö—W>ååØ\v “D2÷(Çä²9žJsy*mŠ@¬ó©4GÆÙ%ãuñiØÅ²<¡{Ÿr'<‰pX?2öÛ,û-žÐÍò„îYD‘Ÿ ˆ"DQ%¢hBt]ˆba ìK¡1vH$Äw±“Dw±“D!wÂŒ],Ë.vã>w8å^9Cd(—E¹ ¯gñz†×³xýEtÉ¢Ë9—eνˆÀ¼Í2o_CìQEö¨‚*Ab¬HŒý0¢°HžB`î™ûæ~‘¹?A`Ÿ+²Ï½†¨ˆ *4AÀëU¼>A`ý¨²~œB@Û*ÚV(WE¹ ^¯êu¬RUV©†Õ¡ÉêðÈ&ñq õ£Éú1A ›Dá‹Äi“8 ÉM"¹!N›Äé)b½I¬O˜ MgÃ1ý¶É~{LtÄz—XÙÐe6tÄz—XŸ §]âôEb½K¬¿ˆ@¬w‰õ ±Þ%Ö;"¹K$Oˆä.‘<cCbìE"yH$Äé8 cCbl‚@| ‰ kò5y‚@Œ ‰±cñ14> ƒ¶&Út1Ñe‚ÀhMF;AÀÌ㾌À¼e¶8Ç'dööù„C9æ¤ñì}–gï3ž½Ïòì}Æ3ïYžy_>1F"î„ò¼úòI‘ž 2ˆ,DQž *ˆ*üÁuýe„ƒð_‡PŸ»ÇIâƒ;ÐIbŒ{TÅTeš"0o¹{àùý,Ïïg<¿ŸåùýåDК  >÷—ïºÙÛð˜‰ÇN!0³Mfö¯s÷¨Ø=ªì+••¿b寿°òŸB@]³Í“(ç¢ÜźémY¯œ" ¾‹úâƒUцªh“ªè‘@$!2ˆ,DQ„¨ êo˜h ÚDÑ… Æ—¾„pÉ qš$Nâ”uÓ—ˆuÖM§¨ÏZãm“h{ myÑõ“¨Ÿ¡~õO!°Ž±:ÛP7mR7"°~°n:E`ýȲ~œB@ý,êghËÚëI⃕ՆÊj“ÊjC5²I5ò$ÊQn‚€.¬F6ÔøšÔøfˆŠuŒ5¾“x¬ŠÇ*üÁ ÞD@V_F`åg5rŠ@|°^y™Íšg‹ïcÛÛ sŸõʶü3§"™µÆ){ë„'ˆdV²M²'ˆBf§ÄsŽm1EBb¬!‚XlÈJ6ÉJ6ä›ärŽMrŽÇÞî‘åí'q¯”·{Ì Dz‚È Ä§D¢‚¨B4MaOÂ'*”«¢Ü)|Ê{{vÓ!»é¯s'؆ì@Stáu帋 ìbCv±—ЖûËÀª=dÕž"à1®Úkò5y`M²&ŸD`…ÑU»Ãc¬D`6t™ ^ç½ ÷* ¹WáeÔgÅj –4¤–4PRzø`h Ê3¤Êó2+.ïf8‰@œ²bõ2³u±ºØºØ@ÕkHÕk ê5¤ê5CbŒ÷] ÜU1䮊)ñÁª×¯ó9¾ŠÕŠÕ á ë@Už!Už)‘Ì'ì¦øƒ•¢:Ð:Ð(dè$º°–ta¨6™T›¦ˆ" ‘Aä'ˆ¢QAÔ_‰h Ú—Db€BûÂAH&ÄkI†*I•ç$QÈgð •"“JÑâvSâƒÏÏêQ&õ¨)Ú²Rd¨á˜ÔpN" Ÿã›" -kI'PŸÕ¦)ê³Ú4E >XKš"Yâ#c…anëeâ”+CÅʤbuQÉÌÓ²p&Y¸—ˆdæé y:“<ãÆåæ$ÂAD¯¼Ù§È›}~$"HBdYˆ¢&ÄG•ø¨ˆ*ñ1A >ªÄGÅþReiXš¬”k¢\ƒ×›x½Á§M|ÚàÓ&>=…€×›x}‚€.Mti˜QMfTƒrM•;$:VÜ.+n‡.]téÐ¥‹.¼ÞÅë>íâÓ >íâÓ3ˆÙ0d6 ̆!³áE¼>Äë? m‡h; ËP]Ž Ì—!óå Â&ñq õMÔ7Ì}“¹?A@9å Ê™(÷"+®ÉŠ{ õMÕ áˆ—øx(t‰Â†@¬»Äºc'tÙ ûœË>ç˜/.óÅc.16A`¥C–¶$ä>“ä>O"ˆ$D‘…h Ú£ÞþRäí/ïe)ò^–9"HBdù ¢€(_BTõKˆB”û ÄÂ@˜ˆSž;à3EÞ1ó#˜ <ÿ˜"0xþ1E ÖMbݧ&qz â™ÞÛSä½=oÔ)òF‚wÝy×MÁ»nмëæ,Úr¿ÍØo³ì·ûm–ý6c¿Í²ßžD@9îÙ9¾å}oƒÀÜçŽÅ¹·#Q1³Y'œ"0÷«ÌýSÄkS⃵ƂZc‘ZãmYküDP•z8eeõÇ!Ö1Öo_F`V6™•/"0o›ÌÛ˜û¬‰O˜Ù¬gŸD`V²â]Pñ.Rñ.¨V©VÏ1Ö%Æ&ÄG—ø˜  -ëêS´eå}ŠÀÞÀÊ{Aå½Håý$ñÁÚüˬڬÞOˆBVÎ!´eŽþe"ˆYü‚Üx‘Üx‹oÄØÛ ˆô‘Aä_‰( ÊDQ…h ÔëÇDÑ…0&„ƒp êsß?‰@|pWÇû¥Š¼_jŽ€rÜoñö¨"o:‹€¶Ü‘ñn¨"ï†*x7T‘wCE >¸gãíQEÞ5G@}îÈx{T‘·G-Ÿ@[î¦'Ж;òm¹ÏMP®‹rVî• û\“}îb >X›o¨Í7©Í7TÞ›TÞ§¨ÏÝôe"ˆ{öˬRzf0A ’yçßI"™µù)qÊêý aˆSÖħÄ+Þ µè&µè†Zt“ZôI|jâSƒÇXÏ>‡p¬¬g7Ô³›Ô³O"°Â°&~ø`]½¡âݤâ=E`÷à]j/#c¬«wT¼»T¼§ˆ "?Aå ¢‚¨_B4íK¡Ê½†pN"!>øÄòø`µzŠ@|°ÝQGîRGž" ëÈSÄ1ž  Ÿ®" -Ÿ®=‡ˆ•æ.•æ)ê³=E@[Ö‘O" >ëÈÞ.Þ)^g…·£:Û¥:ÛñDj—'R_F@9>÷:E`î³Òx.5E >x.uâù˜á®=“»ö¦èÂ36Ãù˜Éù˜á\Êä\Êpdrd8Ë19Ë1œ˜œLðÏP »ºÉ®~9Ç}ÿ$Úê¹ÃõYá5TgMª³†ÊªIeÕPó4©y*š&MC5Ò¤i‹)2Ú†€×Y7!:taÝÔP75©›¾Œ@|°:k¨¬šTVO"…¬½îu2¹×iŠ@$³:k¨½šÔ^§Ä«³SbŒµWCíÕ¤öj¨½šÔ^§Äk¯†ª¨IUtŠ€ú¬hê•&õÊ—ˆ V4§DŸX6ÔeEÓP4©F¾Œ€r¬Wê•&õÊ)3›OùNPŸ5Ï)ñÁ'…O"0³Y=‰@$³²jxÙäyäcÂQ{u©½:*«.•Õ“ˆ¢| QAÔ'ˆ¢ a L¡ºœ@$h˪è‘@$! -ó–'ЖYÉ)Ú2+yø`Þò7Etý b€O˜Q̰:2¬.ÖC¢^âÊÞÑ@4!:ˆ.„0!„ô4ìêUÞ¦Uñ®¬*ïÊš#0Z—ц]¬Ê»²*Þ•Uå]Ygðö¹Š·iUy›Ö‘@$!2ˆü%Ä1~XÂ@Ø„ƒPm_B$ÄG’øHP.‰raÚÛ *ˆ*DÑ„€.It9…€rI”ËðXę,3j‚€×³x}‚€.Yt9…€¶Y´ÍÐ6‹¶á{ûaÊeU³!Ël8ƒ(ˆ"ñQ ~õ ´-¢í‹¨_Dý êQÿ3»ÈÌ>…@Œ‰±‚ø(‡DE|T‰S¨_Eý m«h[1÷«Ìý åª(W1³«Ìì ¯Wñz…O«ø´ÁMüÑ0–&c™ 0Ú&£ à&þ8…€O›øt‚€×›zý%DÇŠÛeÅ ]â£c¾t™/ó¥Ë|™ ]âãñÑ%>:Ôï¢~‡.]tðú¯x}ˆ×¼>Äë? 
ÕaÈê0A@¹!ÊB`Ÿ²Ï”3QîEâÃ$>N!0+™!IñM­{û (Ç<Ì™ÍLÍIVf{²=I²=)V›ö6(‡:P-Ø‘‹ìÈSDÑ„è ú¯D CaB8œˆ»©¼ ç,ÊuQn‚€¶Ü O" wÓ“¨ßEýõ¹gOPŸ»ú 1 -÷ý“ˆBV‡¸gÙ³§ˆ¢øà~;E@[îÈ%ÞU±·A@—¡º¯sGž"  ÷Ûï»ØÛ_@@9îê;r‘¹`¿-²ßžD`îs¿-ñî{âƒ;rÁŽ\dG.¨¿©¿œD@]º°†SPÃ)RÙ"àu¯OÐ…•¢“hËJQA¥¨H¥¨¢T¥TQªRªñ.‚½ ¢h_Bt]ˆba L!>MðXMD¢€(B@VW*ª+Uª+S¼žÄë§PŽš)ʱ&^QéRé¨ÐT©ÐLЖšŠêJ•êÊI⃚)ñÁÎËDë@'ˆ±,16A`…a=ê¢ ’Ym:‰@¬³bUQKªRKª¨U©ÕøäÒÞø`hŠ€úEÔ/Жu sˆ ¯Wñz…×Yé¨áT©áTÔpªÔp*j8Uj85œ*5œ“èÂ:ÐIfkISöV›êòÏ@4Qn‚€rM”›  -³_Ù¯*Ù¯Š¼T•¼ÔÙÀÌU]L‘¯7xy©ŠœR•œRE•§J•ç˜À›Jª¼©dùd€B#W\yÿGÅÛ=ª¼Ý£âÝUÞÝQñÎŒ*ï̘"*zÊU o¢¨ò&ŠŠ7QTyÅQAT! × ¼›¡Ê»Î"Uâã˜hð:דx+Ì]¸u¬]Ö“Ì9®0/#0£¸Žu¬c]Ö±)sŽÕêŽZt—ZtG~½K~½#kÝ%kÝ‘µî’µîÈIwÉIOð)sÒùä.ùä)ó…çŽlq—lñD@[Ö¢_F`Mfn¼Çg÷6ijç™ï.™ïÂÌ|Oˆ æÆ;òÚ]òÚSÔgæûe´eæ»#kÝ%k=E@9æµ;*Í]*ÍS”c-º#ãÜ%ãÜ‘Oî’Ož"ëÌwä‚»ä‚rÁCrÁ¹à!¹à\ð\ð@wHwŠ0F"a,¼} ;$;E`´Ì°NDÿb€BÀ§¼Ã}Špcgº0Ã:EdYˆ¢| ø`†uŠÀ|á•çËD!ó§'ˆd^E\E¹Š¸[ÈÝú3DÁ ÃìèÀµúkõkõ!×ê'ˆB^ïãùì*ÏgÏ^wì.»žœ®òäôQ@!*ˆ*DÑž ˆñ+Bu9ƒpþDÜÕå î9QÈ]}Š@Œ±öŠ'§«<9]ñ‰€rÌZ;rÒ.9é)º°¶æ¨­¹ÔÖÎ!fT“u m›hÛ  sãSf6+xSÔgöü$ñÁ*à Ñ1ç˜ùvd¾]2ߎ̷Kæ{Š@O™×vd­]²ÖެµKÖÚ‘“vÉIOˆSf­§øƒùä)QÈLïc.Ø‘ vÉOPŽ÷ZŸD@}fœùd—|òm™qväq]ò¸Žû“]îOvÜŸìr²#ë’Çuäq]ò¸Ž,­K–Ö‘¥uÉÒ:îOv¹?Ù‘ÇuÉã:²´.YÚ)þЫè¢]bwoƒÈ òDQž *ˆ¸‹5†Ä‡A[m'ˆ"=A >Lâc‚€ú&꿈@™DÐÑAt! ­‰¶å\”søÔŧŽÑºŒö‡!0+]fåå2£&¬ëÜç2ö¹,û\Žç½ ¢€(BTõ ¢hODb€OB½î üq"A—$ºœB@—$ºœB@¹$Ê%(—D¹]’è2A@¹$ʽ†È˜•Yfe†rY” ¹à½ ó6˼ÍÐ%‹.¼žÅëfC–ÙPà±"+ðXMðXÄz‘X]Šèr åŠ(÷ØsEæ\EU‰ Šø¨/"0³«Ìì ‘\%’+Ô¯¢þU»Êª]¡\å*”«ªÜB`•ª²J½†hˆä&‘| 8m§â´Iœ6Äi“8mX ›¬… qÚ$N'Dr“Hnˆä&‘| õ›ªHthÛEÛ ÚvÑöñÑ%>:â£K|Lˆ .Ô]â£C¹.Êu¬̱MЖ9¶sˆõ™cûÄ3ySâƒyº)Ä<]Fž.Kž.ÇúËÞ~Œ¨±ò¾·#ÏräÉØ³ˆ ¢ Ñ@4!:í1@Œ'aBÀëÜ‘gˆŸr¿Å©MžHmxÞ´Éó¦süÁhŠ€Çšx¬c´Ü_¦ˆ"} ‘Aä'(Ǩbª²Uì@Uv )ó¥Ë|™  -÷¨ŠU»Êª=E@—!º Œ–ëiEE¢JEbŠÀh‡Ž3›UŠšE•šÅ°fQQ‘¨R‘xXgEbŠ€¶&Ú┉kâ{ûaÚ²ªQQÕ¨RÕ˜"àVj|g×ÞþA (ǪFEÍ¢JÍ¢¢fQ¥fÑP³hR³8‰H Ò—D¢€(BTõ ¢hBt]¡Ê9œHðX%x,‰ÇN!àSV5¦xu)b€B@Ö¦è’D—ŒXgEâe”cUcŠ€rY”ËÐ…u—Ð…•‘†ÊH“ÊHCÝ£IÝ£Åç£ööð:« ‰&‰ï’ßÛ_@`¾°šÐp%ÞäJ¼!Ïß$Ïÿ2º°VÐP+hR+8‰€úÌ4\ï7¹ÞŸ"°ZòzO,7ybù,"ƒÈ_BåW":ˆþD¬š©šOPŽû žznòÔóY|Ê=jЍ ê— DÊeUn€Yžkò<òá üq¢ >¸WâIá&O Ïð:wÂpÈN8°‹ ÙŦèºú]¸WžCTÌlæ¤vÂ!;áÀN8d'ØÅ†ìb/# >³çSÔç^9°WŽ_Ø+'Ì(f¾gˆm™ÈÉT‰‡T‰rãCrãS|Ê*ñ@ö|Hö| ~;¤~;CtxŒÙóÌ÷Ì÷@EsHEóeæ>3Î/#àuÖMgˆ†ÕÈZãZãXg5r {>${>E ÖY<‰@|0G?£’£ÈÑÉÑäׇä×2ßC2ßyí!yí)Êñ^üÜøÜø@ÖzHÖz†p¬ÌZŸDÀë.^Ÿ   Ÿȯɯ䵇äµòÚCòÚ#¾lo߆Œ³IÆyŠH ÒD‘Ÿ ˆ"DQŸ ˆ&Ä1„0ö%„ƒõ´M¢m‚OymÈôšdz§Œ…™^C¦×$ÓëX ]ÖÂ)¢hBt]ˆba ìK¡;ˆë©<¡{‘@¤'¨Ï5ٱ⺬¸xR¸É“¯#….Q8A ‚XÑ<‰@aêx¦¹Ë3ÍË' Dz‚È ²D¢‚¨Bt]ˆba L!>MðiŸ&x,‰Ç<–Äc§ðz¯‡ZãÞ¯'ñz‚O“ø4Á§I}zHdx=‹×O! \å&x=‹×s.Ëœ› AY"h‚@Œe‰±Œ™efgDPÖz QcEb¬ ‚ŠDй_dîOˆ"ñQ ~õ ”+¢\rE”› àõ¢^ Q¡K]*mâÓŸòê¤àڣȵÇõ£Éúq ñêOauy kùÄ@˜BŽ3ßM2ß?pý˜"ˆö%Ä1„€rE” wtïm-W˜“ˆ¢?AÀUüQáfñ§Ä:× †,~“,~Ã*Õd•jȯ7ɯ7¬cMÖ±“xëXC¥Iåe”ãZ؇i’‡ù¨Ï}C¦¦I¦¦!SÓ$SÓ!i’!iÈ4É´x¯õÞ¹ÏÜxCn¼In¼Å;˜÷öÃâƒ9”)sŸ9”†J“ʱ0wñ˜•ÌIÏŽbÞaŠ@Œ1«Ðâýc{zÊœA‹÷~íí{¢ãÞ¯.÷~MD~‚( ŠD¢hBtý b€_B8Q.Ác¼o«ã¾­.÷mMÂHdÄï—š"ˆô0ïÐqÝÐ庡##Ð%#ð2QÈ{:îuêr¯Ómy•Ôq•Ôå*©ãN¦.w2D@}fYN"°J1S3°&Y“Öä!kò@wHÏŒtyfd‚À3#]žéxV£Ë³Ë'D‚ †ÂH$Œ…ùSý´&÷ÒžDdYø”k²!k’Ç"ˆ&ta.O„ty"ä,Úr÷˜"„Äz†rÜ,¾Wao帿V~“•Š€×¹®Öu“u}Š€×¹òϺpÅ=‰€×™Û2äÆMrㆬ“IÖÉu2É:M óëSVæ×O"…U¢°Â§ÌmMð:3W'ˆSf¿ Ù/“ì—!·e’Ûš"àuf®l1EBÆrLtŒ…™Ã}9&÷år9&¹Ãý0&÷ÃX|n~oƒ€.¼cÆí1Éör9&¹œ)ºð~C¶Ç$ÛsùÂ|!—c’Ë™! 
£åÝ.†ŠIeŠ@?x—É áˆSÞÿa¸»ÃäîC–Å$Ëâ8Ov9O>$Æ%ÆúÞá ü×!ÂlØÛ ˆ"DQ… Æ—íÐÑ!³9ä~Ü{i‡ÜK;GÀc& w‡ ¹wà^Ú!÷Ò^?qhë¢í)zêÒÓp¼·&0Z×ј•.³Ò¸Š)Îý½ "HODþ¢€(B4í ¢ƒèB Ca_B8Q?A¹$Ê…kõ½ Kⱄž&ééDF$g‰äŒÑfmFe‰  dñÇeñØËcþ(â s®Èœ+ðXø£ˆ?&̹"s® >ŠÄGÁl(2*üQÅþ¨â £­2ÚŠ±TKÅXªŽåhm“ÑN˜QMfTƒ?šø£!‚šDPÃlh2&DP“j˜QMfÔ¯7õú D‡.]téÐ¥‹.té¢ËXïëÊuQn‚€¶]´  mm;f%¯R¼3coƒÀlàÕ ~÷~ÈïÞüîýß½ka‘µ°`¥+²ÒáNÙ!wÊÎD¢€(B Cá þàJW°ŽYÇpë{XîarëÀªCîP¸wtȽ£×O:úÁYyÑ@HvŒ…s®`ΙsSb³² gP$g0C Ä:gö ñÇÀX†Œe §¼Þ/ñ)ß½ý0X7‰õSøÃÄõ™(ñy±½ 3õØ „ç.>ućK|8<ÆÌD‰¹¾½ >eÞ¡ gP$gP‘¨’¨¸¯r%>ETUˆ¢?N$Œ%ÉXâp•+à+Þ{ûa¢h_BÀIü‘á^#W\½V¹z­ñν b€B‰Â‚â•gÅ•g•+ÏŠëÊ*ו5ÞÏ¿·A`´¼òœ"0ZžÍþà¹ÔðÊ¿M:ä·I~5tȯ†ŽŽ9×eÎM D"ƒÈBEˆ¢ 1@èh „}‘á1Î[Ü=8äîÁåx,‹Ç&xŒs÷ä ¹'oàž¼!÷ä Ü“7äž¼ë'ÚrîO D=å¬ì¸êr 4EÀ뜷SFË먎™Ýef÷x7ÃÞmuîWxŒ™«Žk±.×b×b]®ÅúòÏHˆO_D@^v\v¹&ì¸&ìrMØ‘Ûê’Ûš" K]:FË|PÇ5a—kÂŽkÂ.ׄ×s]®ç¦Œ…W|=ÞU±·&à^ÏuÔ‘»Ô‘;25]25SFËkÂŽêl—êlGu¶Ku¶ãZ¬ËµXǵX—k±Ž«¤.WIW8]®p¦ŒÖe´ñ §£*Ú¥*:pý2äúeЍ ª¾„p2Úx>6ä|Ìp5or5»n†Üu3GTõ ¢hB#á8 ¯³§ôƒwêû&õ}GŒ¹Ä˜£ZíR­vÔ¢]jÑŽr‰ ÇÙ¸ËÙ¸ãlÜål|†ˆç¸.縎s\—s\Ǭˬ£ÂëRáu\»\;ª³.ÕYǬˬ£öêR{u\½º\½žDtýq¢b,¬&Lè)¯M§èÂs\ǹ¥Ë¹¥ãœÎåœÎqNçrN7Ctôƒ<Çù˜Ëùرúæ8‡q9‡qÜ¥ær—~ñbÈ/^,Ÿà(\ù§D!ÏPð›C~cà÷,†üž…¡ÂkRá5TgMª³†z¥I½ÒP'4©ê„&uÂå“¢± K8·ÜÛ‘µ‚½ ý0釡&ýpÙsËñ\joƒ0F"ìb{ûQ"ì/{DÑ„@O³ô4ì {ŽRä(G)z”C¢¡MúÑà&þhP®‰r ci2–†ž6íéQ¡K]×6ÉkòÉ&ùä9G)z!c©øŽ*ßr{zJ]*t©¢KOQìmø®…5^‰ïm WË)ÂA8 Ãw˜|‡AîëX•u¬Æ«‚½ý0±àêÕZ¼nØÛ *ˆJ"ä¤÷v$ÂÙøÞ~˜è ºÄ=åzÚáÓ.>±æ¹·AEˆBb ŒDÜä×Ó—Oˆ!ŽBåð»Õ&¿[=Eø£ˆ?&ô”+~e×äWvçá$¼ÎÕ¿²kò+»†_È5ù…\Ãoךüvíѧ]â4®ÉCÖäïËÙÛ  K]úÁU¿÷jò{¯†ßj5ù­VÃ/šüò¨áwEM~Wtù£Åuƒá=&ï0ü²¤É/KÎzêñþ ½ Â@Ø„ƒpѧr-føC“_04\i™^iáwðL~Ïð u&¿PçøE6—_dsüVšËo¥9~ Íå—Ðæá$BöËå7Êæˆ ¢ Ñ@ˆÇBŽÍå—¿¿0åò Ss<–ÅcÇDÁXŠŒ¥ EúQ K]º¾·&cMb¬#N»Äi‡ú]Ôïm—Ñh;DÛ~à<Ùs̱ííH„kB—kBÇõœËõœã\~áÁq=çr=wý¤ƒèB ô”þÀÛ÷]Þ¾?E„=Êåíûs5ñ©ã;po —xݰ·A…D¨%¹¼ |ù$HB8'ÎO]Þkíx'µË;©÷À»Üï¸ÝåþuÇýë.÷¯{‹w!ímD‚ †ÂH„«5—«µåŒ%ËXBÝÃåÃ×O ŽRä(=-ÒÓ¨K]ð¶W—·½:Þöêò¶×å“¢ ¯sÃûO]Þ:Et¨ßE}ÃXLÆbPÎD9Ãh¹µxN··A@}ÜàxK¢Ë[wþ¹Üù縯Ïå¾>±J¼·#î3ØÛ÷® \® çü.çüŽ_“wù5yÇû¶\Þ·åxó“Ë›ŸoKry[’ãýA.ïr¼Èåí@Ž7긼QgùGá¹Þ¹ãòÎÇ{j\ÞSãx?ŒËûa®Ÿtô£K?:<Æs©)ýàù‡Åk±½ ƒ¶œ·¸›ÁånÇÝ Î»òårô·6ˆ¢‘è º÷ý¸µ#qçέ ßáøŽøtþ­‰ûó[ûaÂ@H?J‰D)BTUá$*zZ¥§÷•€[ÆRe, Gir”†ž6éiÇwt|Gü}[DÑ…pN¢ÖHÔú8Ñ@4%0ΗÎ níHt|gT ù±[;ž!àSŸ:¾Ãå;cqË~8úŸ)ºµ& ˆ"DQ…è ºÂH$ô#I?RÑ„pâ± "ƒˆo¾¾µ#‘@pµŒÏHÜÚ‘È%¹ïÈú‡D±H#Qs$j&Ñ0Z®…3»ËÌîX »¬…SD‘„h Úã„A]މ¸:tYâ[ðníHŒK$ÆEˆ¢=A#a9–…Àwð,ÇàÞ:I ˆ1Ç Ã{6—ORŽDÊB r”¸Â¸¬0ñŽÌ[DQ„@?8³=d‹oíHij —³ Ùâ[; ci2–xfàrfàáêäÖ~”èèç¾cf»ÌlW'·v$ŽsÁ9¥pÅwkG"¬–IÎS|·Ë­ ¢ƒèB#ÑÑ.ýèèG—~LD¢hB`´]F;A ƒÄ€?†øã¾:{kƒ€?†øc §CzzLzjÒÓûkä[zjÒSƒÇL<΂÷ö=‘ã¹åÞá ä;ÂYßÞŽD®‘ÈU%ËQÊ%å"ÆRd,ý(Ò Â@‰Š~TéG…ǸÕ¸Ïí퇉Bb ŒD‘…ˆã½ë' D¢ƒèB Cô”ºÔx½·#QáS*W¡\ýåüÑÄ þhâ†Ñ6mÇQ¸jW¬RUV©Šõ£Êúïü»µ&0Z\#§¯^÷6ˆ¢ 1@ ! 
„=N¤‰”„h 0ÚÏÆ÷öÃDQ…pþ8á)ž„@O]zꔋ÷ÞÚ ˆA"êÂû ×O2ˆ,Ž’ä(ù‰|ýÈÒÜAˆ?âN8d'1ó½·#1ðC¾Ã0ž¡¼ŒÀXëSDQ„€× ^w¬É.k²ã Öå Öq~êr~ê1÷¹·A Cú1ЮëŽuO -ŸÄ™í2³3Ûef{Ìlîí;"_â¹åÞQ@!ˆA"dòö6ˆ" ž&ééý½ ·6a$2zš¥§ý(ÒŠï¨ò>­âÓP+ØÛm•Ñ6xy‡Ÿ:ºµ#QR$Š|GÁwÐcñ™¢[DÑIDŸÊùúò ŽRå( DSýhÒŽÑvmÈJîíH øtˆOŽ2ä(!縷A Cúa8ŠÉQ þ0ñ‡ã;\¾#ä÷ö=Q°ÂYa VÞûµ~Ò@4!p”$GÉ òDAOqœq6žål|ùÄ@Ø£D|çÖá üq"z]εsƒ×yÿØ !KmÒÑáì3óN·õè’E— Ÿfñi8ƒÍ|¶iýþ`œÆ_µ¿µ&ÐS®ë ëz“u=þŠû­‰Ÿr‹¿ž~kƒ( ‰Ž±tKÇXºŒ¥#>¸òÇßྵ&àîSF;d´†~˜ôc‚@?LúaPŽ;Йm2³=ué©#>\âÃq®¸4$‚F̯ïíH„ÜøÞá œDÔE®ø2®Å²\‹-Ÿ­¡&ýpôƒ^83àï½®Ÿ ôzüÕÐ[;qÅå/K®Ÿ4Má$âUÁpý=ålüÕ¿[;Í"ÑÄýèÒ¸:˜¬†³OþîÛú Æ‚8-¸*r Tp…Sä §à §ÈÎò‰ƒp¡’¸·A ƒD8ç/|†fùd §Cz:ÐÓ!=5ôÔ¤§÷÷lÞÚ‘pøw”Ï ÷ö£D¸ßÛ „‘ו{ûa=-ÒÓŠï¨ò¡ÞPø«*ë'8 ã´Ä5hoG"äö6ˆ" ‘Ad!ˆA"z¬ˆÇJ¬òìm„pN¢b´Ô¥@>!³~Ò@4!:ˆN¢ÁMü1ÐÓ!=èÇ~ øƒs?¾±þÖ]LtqD¡K:¾Ãõ;ÐSd î„,r'dÁ]ŒEîb,ñ™‘[DÑ„pN"d&ö6ôƒkñZ}oßñ©’[D‘…( ŠÂH„³œ½ ¢±T|Wºóc{;þè⎣t9ŠÁŒÃŽÌ_Ù]?i ‰p¥UäÞÀ‚;ÿŠÜùwý$ÎJ“YiˆB“(ô˜eÙÛ‘ˆ$wÜÜ Wä^¸âˆ¾{pýd€B §YzZ@påw¬É.k²ÇŒÀÞŽDƒ?šø#dz‹Ô’jй‹½ ÂA8‰pŽ[å>¶å¥ÊQ*ŽRå(aÎU¹­â)Š*OQÔø›\·v$ „)1@ !ЬÉ5‡'ÛnmÄ fÃÞŽDˆõ*÷:ÕŠ£T9 î’¯r—üõ“p~º·#QñT¿BÛ*ÚÖ˜SÚÛ‘×b{F;d´Ž£`߯-ÞÏ¿·#p¿C•û*î¨rÀõ“p¼·ATQý†øh-¾àÖþ þóÛÿ|«÷?n½Þ<šÞRY+õýÓÛ}û×oýçÿòþæíoÿømùéÛËÛåî¿üÛÿôí²l'ÿûÛß½­oò[Nÿáíòöß¿¥·?\ÿ÷§oiAßþã·¶¾#©,©Ñko¾5_nàXšWþÖó’E*¹/§Y×f[–©¥YÊÒ´¥R²4ûåÚÜ~¤yiÚo¿$Vòz³Æµ¹ž4,ͲZ×7 /Íõo-mZŸy½6×»»ÊõÒÿ²tÃÆv [êצo²eiþù›¯¯ï[šcù[oÛÖ\›c;ÐzŸøÏ×q_–ÔêÒ¾ei—íPëýÊK»/[äÒ¶¶¶m¹T.e};ʵ֟‚\Úë8ÒrË\YÛ½®íõQ‚¥íem¯7”«vyélZÊcm·ËÚ^oCZÚ«ÇÓ–R+%/µòk»¬q±´ójß.Í–v]û³œ6Öµ}¢¥½žŒ,í­ÿuýÑÑrý³Ëz¼Z·ã•ee[Ú}¹`(Û×.íõå²k{ýþ–·ã­WsK».+ÛµÛÒËÀ?Úýr;^YßׂÛÚ^¿ÿCc;~^ý×ív¼´oûyÛµ½öwûA©þ¾//Yß·ýnãë÷YÞŽ——;¡Þ^¢gwþÚ^¤µ´·þ-/qÙý»~¿çíxy¹OéçíAö²êã›Ý¶ã¥-ÚÖ‡97}›­í²Y¶kýŸ·Çw|Ó¿¯íõÑK{§en{Úâeå—[ý¶xmmÅñeËý¼Ý2Ѷø[ÿ~)½Žµ½ö?o3y×´¶mqìÏë÷/Jñ¾\†õ¼¶ëÖ^'÷:_Öï/ë˹×é´¿®·C¯³mí_]JgŒëß/˪¯í5^×å-­3{,þùãmÅIë8}_qÒ:k¿¯8i½€ø¾â,wIÝ­8ɶɼ¯8É7_î+Îr«Ê¸[qÖß}ÿXqÖÛÛ?Vœõ]1+NÙÜx[qʦÊmÅ©›Ón+NÛÆx[qÖËÄgý‘»g}×þÝŠ3¶åëûŠ3¶nßVœeµw+ΖKýXq® ]ÒÝŠ“}sÙmÅÉëÙïÇŠ³(^îVœì›"·'¯w Ý­8{}_q.ûŒ¼­8ë0w+ÎeSðûŠsÙWÛŠsÙW€ÛŠsÙW”ÛŠsÙgämŹ`ŹýýmÅYß¾t·â\¶ù¾â¬÷ðÜ­8—m3¹­8Ù· â¶âÜfÈmÅÉ«?VœÅŸ÷+Îâïû'û®ï¾âÜfÈmÅɶEÚmÅɶûs_q¶zÌÇŠ³ì”ùnÅÉëoW|¬8Ë.ìw+Î~ãnÅYê¡ùnŹï¶bí+N®Ûp[qrÙâå¶âl ËÇŠ“÷ñ¶âä´íÆ·g»ècÅYÂÐïVœdÛÞ~[qÒØü[qÒúK+Nêû ¹¯8Ûݸ+Nª›ÿn+N*{ÿöçz†³úc;¡*eQ]±³ªË_<«úås©Òb‘¶¥uQÜÛË©‰ïíËzt%êÅC"í{ûƒèñÇî¾·¿ñ¸wýøNüÍOûHÿñÏ¿0Ò?ÿãõôòßç·ôöÓ?¿mƒûøÏÖ.×ïZòHe½û§yû]¾üþí§?}ûw?­_þ—ÿº^êš“ýøëúÈ_·í­Ýøë–òšgûøkûþ×wAÒWy¶v¹nKh- ‘n÷8v ÄXo§þ ¶ö=qÝ'®.ü ¶ö±L‰ëÊöØÛ÷DY_¶ùAlí{¢­7Ô~[ûžØF÷A|Œö؛ˬ[sae™¬‹7ÿªÎkqû뱞®ùk_ó¾&Ö—¿~ ö?^z0ö~üËú#£kü>þÇmÝÆ×ð}üÇúòì5z!xÛz)WÃêsk—íÍ©?—íïü¶næýÅ¢û»1Bk_7?½˜Ô²=ˆ“ÖZß2¨ÿö?þé—|r÷7u÷UyyIÍu‡ØÞFqýû¿ûÝ?ÿþÚ|ûÝ¿ýÛß/ßõ÷¿ÿý?¼ýô‡åë&÷ÙÊu‹žÞ®;Ôu«[^•(u}­î²f®ž]þq=u(µogtåºcùu­yÛ÷]¡,μ¶mY.KYïq¹¶{Z¶îõšfµ_Ïxl?g_ÿ¾÷õxËeûƯ§*Ëîëñ¯g<>Öû²µ×ÄϺ¼\ÖöXn0(ÛSG×¶­·T—”·¸\Ïx–ã]ö¾X[f`¹¬¶´m=Þ¥m×”åzƳïR¶ÓöâëÏ]—k­gŒÅûzCƒûv†T|û©žë‚¸žAÕ+¸Ü˜³¼3Ë×ööЕïg4õ²½èy{ÓÝÏß÷PÏÛñkÚR,ž6ÿÔ´^UäåM£ë÷-××ã™o×ôu¹dÈk°¯ã¯yûQtÛÆk{+o,o:¶µmëc/6¶ñÔ²=ܺ¼);­ííQ´å Þ—µ½ýT•õíꢖíÇG¾·¯°ï{{»‘ï£=¶ãÝÚm{­æG{+›~´{<^Û^Ùô½ÝS<Þ~FñÑÞ^=þÑöx¼‘ãñÆv3çG{Äã]ã©Ý×J<žµx¼Eˆûñ^… Ç»B8Þ5Pì~¼îáxí’ׯG»®·2|´G8^KÛKZ?Ú%¯¥Žw;ßúÞ¾ž*‡ãå—·—\|´=¯l? 
[Binary content omitted: FlateDecode-compressed PDF stream objects (DeviceRGB images, an sRGB ICC profile, and form XObjects for the embedded vignette figures categoricalVGAM-034.pdf and categoricalVGAM-041.pdf) from the package's categoricalVGAM vignette PDF.]
¯ËѨ>”‘••‰x‹º¶²àÕƒ‰³ÍÏhí`×ÙR™§=JArH ê($%@ýmú·œ·T}¦®—1ú½åûb4J³ƒ~Eu|²Î˜„¡ôÙ2’ý ýUpg'ëÅ$^þ ŒÂ·´ÆÈR$öÁZ›>L @BÌák_9«)WÜDŠì‰}Ku ‰år}Xòû zbjYpkö¹ýh@›½n?Øz ‡¯êgºý#ÞWÂ6Iòèë‘n?«SYª¢®ò’ϲa?WÍúŸ¥f·]²g{dÏöÒÕ(ÎG7ý7HX÷«ª÷“9Ì§Üø˜¸ñ Ô«½¾´bš-gºÞëLƒÍ1yy 5¸¨9-zDþÐJ²ž hªZ&©)'úeù7ÏWÁŸ'­iD°G”æ¥ãl»-%E–p„kÎ-Q€ž\s:F,{jq~ößp.ß>Å -¨S ’5pbVßo•Mú°`wØQs qŸÐc, €õ²Ï¯?d,kƒÍ]pË" R™„¬XŒœI°¤5Ø ®Iª¬&©úšäÅuDY6$AË̋ޕ (Ìâ&Ñ#T ä½Fi(•†=§zXؼȼ<ð‹W¸¾Cî’˜)ՌҕÀŽpoEp’à¼ã/cþsÁk0®6ý¢W˜?Åõ¸€Ô-VëTQZò›KzÓ†õß·ý-¼ÎD®€ŽárÏ¿ÝTX¶’lÐêõC©~ÉKÒ¦†¾X!•â`¹ÆS®iœÊË5DÄX¡^‡ ‰1Lû.Ësg@Ol|¾¥1ºOÌ‘×ü0Õ‡qj»~ÙâìË›AèÏÕd\*y»cþ˜%FNDô÷Ǿge—áÐx烈“è—›ž&m@™<ôx†g·A(9;~ÏÚH+LˆêE$í],m¢5nzEî;¾™¯HSÖ„•?fâÂ8¢O˜9>&ÌŸ «£œÁe˜‰«M˜D’ ¿T“]q퀬@:7¬}iTÀ^5¾PR™¾KEûË~t˜DgõÀy#Xåù °1C4ÆK× G;5ZÓ9Ú$Ö¡hƒBŠ®“nMIÈžXŒÃ0Piò°†é· ¨¥Æ—ñº}Ìn¢(аu—Á~böàý°ãWÅF QÛýE}F: bgÚ«© €I£9é¤6o©*úѹTvú¢8k&ÂŽlâ7;ªl¾~>A¹ª§ ~„%X‰ÂÄâ=0á|š^Sí†åG°tkNJã#|`ˆ50ë/ö0˜1JãTA>T×—'Â¥sM þ6&”F¶â³ÿé&;žpÊ›LdT3”ú¾‹)êe(¬©\^BóK>.ú3 *œçÍxG:¡¥n¦Pï^±c¶¿Ù£$âä m¦_=ã”õÜtý¼Üøá³6]‡Ò:݇óÐ,• X´v]Yò•‡ˆ¥÷8¾.øê a®Q•Ãtïô0OHÛ%}¶áN‚ÒÑ…\»Cð‡q¨~üw¤ÖAlº=án¢67¼®Û8>±\ôfT >@¬`¼Ì®ÍLmV°1ÞQPÌhÑg­àf»$¯A‡i[ú¤mA)ÄËe Ó mn}÷ÛÞ„u‚ª‰û ƒÄឆ9”o^€zµ¼QÚŒ¤ô!~Ýõ,´fÔrݱϋ™D4ø3‰"X9Ìß yQ"1ú‹8ೂÓ;ZpýÀäç©ê!L´S½F<Eµ EõL§OEÑãÀ™Sa'‹L ü ƒØmù%¨Áà\þmõΓ—|H7¨–¼ÈêWžÓåWÙó~Î’µŸ9x½î2ÍyÏ@œ!qžÒÃ> /Filter /FlateDecode /Height 106 /Width 75 /Length 4921 >> stream xœí[}PÇûßBÔ$å 1ˆÄH+\5A hIK+Nˆmµ´‚¡–j±¶SqƪmÍØŽG´RÐÒNËKŠXZ°Â)&UT0GT x" 'J¸B"åEÃï/ÃO… Å>eî6»ûÜ>ûìçyÙ—z{{Á ›g='Ž1 G?Æ$ý“pôcLÂÑÆS#** @Ó´——×믿þÛo¿íÞ½{Æ §OŸÖëõÿ±’$sssW¬X!~ûÒkõx"Ò4ãø“èya-ŠŠúðÃ}}}£¢¢‘J¥$Iêõz;;»àà`“É„¢¨P(Ôh4$I ‚ÄÅÅÙØØŒ?ÐÑÑÑÕÕ.—ËGjJcûpôÃÊi‘””ÔÜÜ\ZZ ˆ5™Lƒ!&&&11ñöíÛr¹œ ˆiÓ¦êtº––‘HDÓ´‡‡‡N§‹ÅgÏž>}:—ËÔÕÕ!b4Íf3†a€üüüˆˆŠ¢ÂÃÃÕjµD"¡(Š¢(€B¡xóÍ7GDÂ1-ýø¯>WÃ#aEK’““ýüüŠŠŠpUUU‚\¹r…Éd&$$èt:“Éd6›¹\nDD„T*}jS"Æöá臕Ó"..ŽÇãét:Aâããu:Ý©S§>ÿüóüüü™3g–••ñx<’$;::$‰V«e±X÷ïß¿zõêÊ•+FcMMM|||vv¶§§'@«Õvuu ‚óçÏËd2ƒÁ€a<"##+++u:Š¢111*•J$¯\¹²¶¶ÖÓÓ“¢(EëêêÚÛÛÙl¶Åb±±±±X,ƒ!//AD°²†$Iêt:¡PÈçóI’|ã7Nœ8èêê¢(jË–-R©4''Ã°ŠŠŠ={öÔÖÖΙ3‡ËåÚØØ,[¶,>>þàÁƒ …‚ ˆ‚‚‚„„>Ÿ?qâÄW^y…$I;;;‚ X,Ö§Ÿ~zøða‹E„‹‹KEEÅæÍ››››CCC³²²àó{÷îiµZAbbb, ‡Ã9pà@nn®L&KJJL†Þ¡¡µµµ··÷êÕ«ƒ¼üɈc É<+–fíÚµp Y,VLLÌñãǧNÚG_ ­­ ˆÈÈÈ3gÎðx<‘H´ÿ~š¦-ZDQ”X,Öh4f³yíÚµþù§Á`°X,AAAuuu|>?##ÃÛÛ{ß¾}b±ø‹/¾€jqùòe///‘H´cÇ Ã(ŠÂ0 ê­\.ÿî»ïš››y<ÞÎ;ÁŠ„4M¤â¿¤ñ3„ W­Z%ÊÊÊfÏž}ùò刈ˆ­[·J$™L¦Õj9‡Ãñ÷÷OOO¿ÿ~GG†aÐC¶ÉårI’ôöö–J¥?üðƒ¯¯/ ¬¬ŒÉdº¹¹egg§¤¤¨Õj³Ùœœœœ­P(©©©"‘hÆŒ›6m:þ¼F£™0aBPPPjj*MÓ$IúøøÜ»w/22288x˜ÆÅÅQ›——Çãñôz½@ hjjš2eÊíÛ·}|| T*Uzz:—Ë ùå—_lmmQ½{÷®@ ¸{÷®X,^¿~½\.GQôúõëyyy±±±Û·ooooæ¢üXUœ§tâC÷) ô0Æ8Íãàqã‚&..nëÖ­#8‡‡1’kÕÓÓ³lÙ2{{{ƒ±jÕª‡Ûà8n•C+O£ÿÛ‘ú„4M»¸¸ØÛÛ»¹¹988LŸ>ÝÅź#III·nÝJIIijjâp8©©©ãÆ3ãÇß³gÏÂ… 7nÜØÙÙ©R©ÌfsJJŠH$2 <†§Á‘#G~ÿý÷k׮͞={ÀF#Ë3är¹R©<þüÏ!Åimm-..îýimmU*•999ðw êk6"a !¡lLý‘-[[[þFÃÆ‹oK­xO“&M‚ÁY’$W¬X‘žžÎf³oܸáééÉd2}||>ŒaAGGÇÙ³gëtºÌÌLÇq¿pá­[·üüü>úè£ÐÐP¥RYUUÅb±ÜÝÝJ¥•YYYK—.EäàÁƒ&Là8ÎápPuvvFD£ÑØÙÙyzzVWWGEEÉåò††__ßAxéPµôüùó;wî¤ÁðÜ‹¾6}ŽÂÃÿzÀ‡¢KÑ+Zš””$•J!IR.—«T*6› 9gOOÏÍ›7ÿùçŸ@mm-ÀÓÓÓd2áíím4Ùlö¥K—fÏžíàà€¢èñãÇ“““³²² ƒ““Ó«¯.7Züýq7(Šb6RaÒ>¼øûp„ã44MÓ4=²}þGŒ°„z½þÛo¿Ý½{÷ûï¿ÿ@õ‘aÕ§ka-Õëõs AD"¤i©©©'N<}útXXØÙ³g_}õU&“©Ñh¤RéÏ?ÿ}âÄ ŸÛ·o»¹¹Y,–!îÆ#GŽPe6›?ûì³ÚŒü>Äq¼¦¦fòäÉç¡#Òß©ë{žŒ“5fiF?†”?äñx4M߸qEQOOOØmnn^½z5I’.—+‹³³³E±Ùì5kÖÀx EQL&Ód2edd$&&VWW¿óÎ;jµúÂ… ‡JLLtvvf³Ù988¼òÊ+°ÿ†††ØØX­V+‘HYYY(ŠºººÂ$$Š¢/^ @dø±¶¤¤$6›=nÜ8‚ úYKK‹½½}DDD~~¾««+ŽãAAANNNîîî ìgÄÃpCß±cûpôc¨>‡deˆ°bi¾ùæÀÔ©S÷îÝ ?uê”ÅbÙ¶m[pp°B¡8|ø0‚ ^^^€ÊÊʼ¼¼§0ï¡c˜ûðù à?/¾¥ÒyxëÖ-___£ÑÈ`0ŒFãÕ«Wù|¾T*-))±µµÅ0lÏž=Û·oÿõ×_]]]y< ~ …´´4‰DO@`2™ FYYÙýû÷çÌ™CQ”B¡HLL”Éd!!![¶l9yòäk¯½]Кš¡PÈår ‚@Q”Éd²X¬“'OŠD"˜0e0ÉÉɃ„G½–Z‰•5|ë­·àÇ«¬¬tpp€IÂðððŒŒ ¹\^XXX[[;nÜ8š¦Åb1A jµº´´4'''##ƒ¦éèèèÊÊÊS§Nùùù¡(ZPP`2™0 3›Í111jµÚÍÍ­¬¬lÁ‚¶¶¶2™¬  àã?®««KOOGQÔÎÎÎÎÎN,¢¢¢$‰¿¿ÿ½{÷ ñcÇärùÁ:§ijjº{÷nOOÌ“9::ž8qB&“ÑÝÝÝÕÕÕÕÕåää$“É`F€aXyy9LBUVV†……Éåòµk×ÚÛÛÃŒEQXSú×_õöönܸqÿþýB¡RjèÔÂq 
Æ’%Kd2™F£"‘H¯×ÃaÒ;33óûï¿~†ô€•5LMM•Ëåz½^£ÑˆÅ⢢"ww÷ÆÆÆèèèsçÎ%&&â8®Ñhüüü>mooŸžžIÓ4|K’$Žã¶¶¶¶¶¶ååå¿üòËâÅ‹*++{zz”Jå¢E‹T*‡ÃÑjµ&L'++K¥R©Õj‹e2™¼½½UUU%%%ÐÕ‚%œZ­ÖÛÛ{ïÉJîI¥R=zÔ`0…¶¶6ƒA’dPPPiiiJJ MÓMMMb±øÇ´··ïîî.//_°`AWW×Õ«W÷íÛ×ÑÑQWWÇd2 ƤI“X,Vgg§D"9qâÄÌ™3«ªªPÅq<11±¸¸øàÁƒ00[]]$lmmwïÞ=yòä¶¶¶òòr ÃΜ9#•J[ZZjkkÇ/îܹSSSóî»ï$Âó¢¥0èö@j‘$I’$áÃ>#ô¸ÖȺ––””ÄÄÄ4‡‡¬e¹yóæúõë7oÞ  è†=zÔÏÏO«Õ®\¹2-- Ã0¡P8eÊT-“ɤÓé’““aEðÙ³g§NêââU.$$dÅŠëÖ­ƒõÅjµZ&“¥¥¥Mš4I$F@}}}XXEQp”ººº†††àààÄÄÄDxâkøÌÙÏPO ƒH¬Áƒ_ÏçÃ`þüùð‰Á`X¼xqyyy}}½H$"IÒËËëÔ©Siii_ýµÙlFQT«ÕÂà ‡Ãa2™}åÐùúúúûûÃE.--ŽŽvssûꫯººº$Éž={6lØàïï¯Óé`œF&“ùøø r{aä×ð™/Ú°²†3fÌP(ÍÍÍŠ¢ììì¤RiEEÅÑ£G]]]£££ããã—/_^SSãààÀd2ÍfóâÅ‹;f±XÞ~ûíäädXà077wÁ‚===—/_Ž%¢ÿæô÷÷ß½{÷K/½ÑÖÖË-!w•H$|>ÿ?þèîîÆ0ÌßßÇñââb•J5Hõƒ•(†X,†$ÐÏϯªªŠÉdþüóÏ¥¥¥¹¹¹·nÝJOO—Éd¹¹¹yyy^ÆÄÄäççÏ›7/((Çñyóæyzzž9sfþüùYYYaaaííí­­­,+##ƒÅb‰Åâ”””„„„¢¢¢ôôô3fH$’¤¤$‚ Z[[/^œššêíí]TTprrb³Ùph½^/“ÉÌfó "øàƒ½{÷òx<‡SPP¶0=xœÆú˜!¸yófppp]]\.OKKëËÀÀ.‚ 0ÎËçó+++ ø|þêÕ«‡]¤6‚\çÅ·¥V´4**ª½½}Ö¬Y---(ŠÒ4=qbdx¸P§ûš™îîî%K–äçççääÀØ¡R© …Çñ;wB¦š””ÛôÿMÓ4¤Ýqqq'Nttt ºråJGGŒÛØØP%—Ë¡yKLL\¹råË/¿|æÌ™;wî°X, ÃŒFãÔ—Z,×îííU*•ýŸK¥Ò¾ßýK j߇‘ºýfEKaœVº¸¹¹½ñÆo¿ýöŽ;>ùä“ 6À6!!!Ð$éìì|ç΃‹çáuÄ’’’»wï-ZyßO?ýäîî#ÿ}÷/)Š‚u§ C¡PdggÏ™3'??_*•ž;wš4Š¢.]ºôÎ;ï8::†††æææÁf³‡Ÿ{꡸EÏá…6ë·d¹\.ƒÁhooçñxÿý7¼‹ßö,ÁÃ7++ Ã0ƒQWW§T* ‚P()))‘H$‰ .\8yòdaa¡R©„ë)‹ãããÅb1¬2tvvÒ4ýï¿ÿŠD"˜\x`2P¿.]ºtñâÅe°ªÇ×®]Þhmm­¨¨èû=¼Nú÷6¼ÉXÑÒM›6UWWoܸr%˜‡± ˜fx> stream xÚÕ[m“Û¶þî_¡é'Ý4¢ñF€L&IÚÄu§™v'ÓiܲĻS|’<'÷×w_ AI~kÚñœEŠàb±X<ûì³›™˜=yôõ³G¿•õLŠª­œ=»ž5ræjWµºž=[Ï~ž«æê_Ïþòø[kÒVRØÊZAmžÝ^ÉyÇíšLšR•.´ûéJËùhûÕw%¡ÆTBÆÆ¿RÍ|¹ºRnþòª¶óåMwµÐZϯAÀþ¯ÕüË#v}÷r¾¿ÕòŽŸü nèá’ßújOäü-ü°á5x$¼!†VXH§+јÙB×U#%•¾ï|rî{¹Æ~÷[î⹨…— ÿªzvÂùòû'f?˹Êz‘ðŽdQ¨ë-+MÒÞàøýéû5ÛL©l"Œ¨œkg ÕTƶ¬çs¡¤o¬g0—NPÛº2ÒÂÐ*«ý€~áVùØM¥è£oÆóåò†Êè0]¿:t•lÚ¬ÃM©C ÊA‡`+åüËR¿ÊV¢6¡ãçBß·Íú®m*ÖÂ;ò ŠÙÂU ¯kä9rUÙÖžèã„KaL:Ð/ ²`ÉÔµI°JíUå¥ÍÚªµ0Jtкݳ²µæVÿ([€ïÁ * 7{雉¤™©,¸•7è—²ìh•©Á2° u3m [)mÐÎ9oŠ—%qmÕ(œóØ,™F‘É“N‚þ¼>pƬ_]©6úâ/AºjšÄgjyNHQíº’jÖ+óùI‘ l8*ÍDè½Ç…Qà4Q L±Ÿ©§ÛWW eçne;DÃ#ÁBYcæŸ%àÁ°F€±âÖtý‚„x$¹AÛá(ty‡ïü!tíí‹“¯ša­/”«šº9‹2î7F™b°Òmýé`Æ|$˜qï3Åt´ªyœ1gtû› øg\ü®œ¨ÿïñæ‹Óx#Ûª–ÍoäI¼i+ã¼ô@ªÞ™:zFµƒ¿e Ì>°–.¼‹µÄÿÞò^‹¥&ÏÆ¶(åWkˆF7gñûA²ö"0»5üáH>+;ªZƒãG”êR€r”{€²#€J0â¹ú,LlJã4•Ó*…‰/Ê5¦ýMªMUºVõ»Âê„‚†P ¤!Ø|›‘áâÂePU€Ü6wCCÉ:3Ôï'Ô‚‹÷°Î„b˜4 ñwç-Øïô'²P¦9D'ž#X… =þò…g¹Ý›nŠó#M[©æ<Ô#êiëÔÌXÈôõ‰Ü·@ìíÍÆú¶Ø»ÉA†ãÚ¾ bž„òï`þ Ø·Ä*ï’Ò®uPOøåÚª7ÝÄë>8n½ŸßyµCV³‡¿mXþ¹ç}Â&‡‚]_¢¾õ‰ ªæq‡ë*o=‚¶˜]¹1‚ÆÂNæ; W5zÈ«»XÍNÆIk\ÁÂõþ…Í»=ØoÙ{¡  ¬ Ð*¬é¥î×mwÇ!!>ïúkrôíh,æÏ¥6[Ôí!³Üžl†uv| íoØ#è‹íž–Æ:tô;Œ¾ ¨³îCÈ-„ ÖÙ€]•Ž!©tì÷ØdË(ƒÝu~üäð.gxeéôæÞt»0lÄM¬†ÁKwqdxW™ª%£Gç©vq )ÐÍíþΧ»D°f¬3Oúz“ÌÊ·7[úÊq6;ó‡Ã1¸ÖÁçáØî’1Èib‡âöw4ø#*¢¦¨öO®jÝš…xå|7´þg~Z7Æo"Œž8.ñ£öü Ÿ¿Žœ..RŠà4ÐÒ®äš5¨Æ±Ç¹©н·Ár5?€ëZ¥³OßDðGá#oÅò-y+¼šx+Üåö¢·¯ãä£5(8øk²¾| ¿ k–<íu¨ÃxøL@¶Ez3 †®ˆàÃ'[ñú!œ ØïC­"½ºæ]²”–©¢Ø€P>#*³ƒ2¤Î6+^}o¹9YD$€,Ô(rÚ°0tü@´ÌË^>†N–9ðÛ«´‘f-d%ÚOA˜þ˸šf¨Ñ‡0¥Þ4̵Q ;ÈB‹B¦/mÆñÚ{•ª¦ª—–T/L6<ذ ¢?’eCt›±ê/…Xj1$±½bGgˆx}½èÉ¡ ¡d¥q?o‡×!‘¨Æ›$·óÙf²w<²¤Ž ¿H"OÌi(ò”^.s{ë?¡&ò£^„vHžZH;žîJI)Ât{îTVºmGå N•âÚˆ‹×Xã#”ወñµ_Iñ²#ºkMò`é<ºÕy€³ðe2å¶ë{I¤ï n¯Ø!ÌäÍ-Ghê1ê4ÙD¹CûÓ0 Sў̾õŠÝwAÃ;/28wT¹_]>1°˜ö^ Ø&÷'[Î11iÛ>@RXz¸¹åŽ)ó°ì‰aK%Åx³\‡\Ö+2îbö`QekÈxM\pëohÁ$5¼MŒl‹>:²íc=Ž^¼K»#ñ—ëŒò±sù2)°Ñ{¶­Ï¡"õ¥ZBë’ÄÇørýËV{²ë ¦ñ½Còx9h[ ®×5×8çÂî^b2 Ä}*îåØ:‡ruº©¬üX§Hì°ÎnÏ—Ü ×:¾l}h¯qiª¾dÝ0Zsƒíòeb>læôº—ØòŠÖØ!VzCÍ „­ö1èKJ4Û¾à(†ÈYç¦ÿŠ¡»@”ŽØ^VN•êO®ð@&û}TTˆ­&ñ‡|…/÷|ë©xóZÏ“þõ»ø”+[pM0{äë_<¡9Yƒ(0K°%U"÷‰à¾ Ђmò’ˆFÚê$§Y±ó§*~›Éê<âh:™“ÍG€L1ZU'7Fµ \DäN²,ÑLTsŒ¿ ç…x²ôVîp{¦ûN³¨í¨îblÂãF‡0[²æÌÂm³Ó;Fx2©IÏsШö1™ß².Yqî7!ø<„T‹Y~Z”Åvá$pmȺ©„Šû|ÿ ágsS&ÒUŽvú@zÁ>Ý–ÝIµÀøâ"ìBQÆ•ü¨,}´¬qÓ&Ïi‹žãe?þ¶mq»®­kÅÊAq;¤è°'û#å^+`¹_Ò¤¯Ö 0ÜL OÞ‹RàM²‘QÓNF·;–&háêªnPNž'…º­!³m¢¶qÃZµ#~FË=Q°Ø2݈Äo‰Sô…"*oÀ×aÃ#/O(ÑTµ4oRCÜU6e“ì@#µ8½y՘ʶFY5=è6¯ˆ„:È•ŸÙ–›d³ÛBGf°Ù½<9ÞÅŒÑòÁZøü?¸ÑsRܾhŸdÃl…oCü=ø¬/”™ì`'VE÷‡‰ž—OöɲÞ”¬êù7¡ 
ÆJîyc'‰nKœYÚýlÅ>„,ÕŽ6Ø‹NÀ¥âd}WB#i+ÑÏg¾AU+võãD"&a•º©D,©‡Áú*/O‰åoˆþÜsRž•f‰‘RmÖ»ÞX4GÕ D‹´wÏ‘=9l`Ũk.`Áó§¾Êd¹èßÃÛwýx>¤LBq®cýoÞºxÌAM¢qkÛGï>Kà_Áx{ó Ï2×¶r¸¼š¾îÙ£T>Ó`|È]J‡.ջтÓ?=™:¼X›éË_N¹4U«Lá@xé¸÷"iöþcH~Vâþ{ËáqïÜ¢…ãÞ.ôˆ·)×s»‚ëÉK|ìƒÎ—懋G´¹Ë3'÷“ÓêÍ©ŸÅQ:Õÿ̉o˜U~a¢O€Ô@Ä/ݺ.Ÿ~¬¥}3†“'Õ©‘¦’xøÓ‰&sãÔ?%jTã‡j#5V¢|t¾Åœ6kY“±'ÅÀéœÀ0êÀÑàrâp?ž/Vç÷ÃJ“:Ìõâÿüsü°üáÂÇüåÄ ßk'O9ŽÆ`Gc8÷Û„{ßÀ U³þ ·Š*#ì)¨:tltâ§f`­tâÁV€…­ÿüÈf§¼¨vþŽ0ÓÉOB×:ß… ¿sØ~<Õ…OU°´) ¯m†ccÛôG™”>ê†}£âJLZ°ÝVaÛÈÿjÈ|_k¶RãŽn¨ÛKâŽþ´qgàã¸CÐåM² rz Ÿ5žØv—¡¼»xå—úƒeÏÇËÆ‘uÐa¿ óh ô&âý™0ì΄a7ñk«aìh?Nf€¸ü§¢­¬EN3ŠœÃÏož=úJOŒãendstream endobj 875 0 obj << /BitsPerComponent 8 /ColorSpace /DeviceRGB /DecodeParms << /BitsPerComponent 8 /Colors 3 /Columns 75 /Predictor 10 >> /Filter /FlateDecode /Height 106 /Width 75 /Length 5325 >> stream xœí\}Pg^ùXt– Ĉ!Aª ñ#M$#±ZÛS(èÙëP¹kpD¯½¡­™Ó+ÞÈTSr5ŠÕ”Z«–rj_6~fƒ‚ ¦P KD äÁp¼3ŒcãÐâùü•lvvßgßßû{~ïfÆèè(ôBÃå·À¤ã%Ãé— §?^2œþxÉpúÃíñ?çää(Š;wî8½PBB‚P(„a¸¹¹ÙÍÍÍf³AD¡PÔjõðð°‹‹ ‰D:þ<›Í† ÈÃÃÃ`0Ðéô¼÷Þ{CCCJ¥rëÖ­ÃÃÃJ¥rÕªUjµúçŸöññqqq™5kÖŽ;y6†3~ÏQ›Ñh|fbcø]3œ<Å:4“7ŽÉƒóuˆãxTT”Íf3nnn­­­"‘èßÿn_·ÎÿÔ©S0 ôÑG!!!»ví2›Í §Nšš¡?!^ZédB¯×OÁ]¦Ž¡D"yø«Z­>qâÄÜ×õÃ?œð‹ »Ý^QQa³Ù(ŠB¡H$K—.ýôÓOƒ‚‚ššš(ŠÍfëîî† H"‘ܺuëïÿ»ÃáÀ0,**êÉoTZZzîÜ9•Jµ|ùòñΙ”utì5Óëõ eì`YY™§§§··7‹Å‚ œ>Lì`~3O3!jþ$pÂpûöí†ñù|•J•ššúí·ßr8œÀÀÀêêj‚h4šÙlîèè°Z­ ¥±±‘Éd^¼xQ ´··F ØL¦P(äñxS@æâ©çpÊžýDá©}éÒ{ÁÕbÊ0ñj±iÓ&ƒÁÐØØØÜÜìíí­V«)JNN—Ë­®®îïï'“É …A£GÆÅÅq¹\½^ßÞÞ^YYùÝwß„   '¼Wiié7ß|óàÁ …2Þ9ïKÐ6é±ãcjQWW·bÅ ×ßß?kÖ,‚ÿ‰Ïÿ½Z¼p’=åççóx¼ôôô?ýéO‡ÃßߟJ¥VTTTWWÏ›7ÏÓÓ“ÃáÔÔÔ (zòäIV\\|úôé³gÏΙ3';;ûòåËv»Ýl6GDD@¤Õj]\\||| Â0,++ëÍ7ßœl†9‡OhxS¬¨“b¥ùùù‘‘‘•••üã¿úê+>Ÿïíí AÇËÏÏ knnf2™V«566V&“¥¦¦ž={–N§'&&>•§)--5›ÍmݺuÜ“F'÷ïß¿ÿ>ø¬ÑhþéÎ;à„GÎüé§ŸÆN œÏá´ ÓsOÓÝÝm±X¶lÙ¢P(’““9ŽB¡âÆb±ôzýádz²²´Z­Z­^±bµÙAz »@£Ñ¢££‹‹‹©TªÑh´Ùl È=A"‘(''')) ‚ ÃîÝ»‡ ˆ«««ŸŸŸÃá …r¹\(;vìí·ßþøã===SRRúûû1 éíí9s¦›››ÑhT©T---ãQpÓTTT,_¾|ñâÅ«W¯.++“ËåZ­V¡P8Žo¿ývΜ9ûöí+,,‹Å3fÌÐjµþþps³2.ŽuôèQ//¯ÿüç?ÑÑÑ7oÞ‘‘‘ééé{÷îuwwÏÈȋž¾¾‡~ï½÷º»»£¢¢¬Vëðð°»»;0ìõë×Ï;·§§‡N§“Éd™L¶qãFNg·Û«««¹\®N§INNŽ‹‹—Ó˜òýû÷«««9ò´ë,¿©‡+}çwh4ZOOOss³Ýn6såÊ•††ÿ€€€ÔÔT.—›ššÊ`0Ìfs||¼T*µÙlyyyL&Çq&“ÙØØ N‹Å÷ïßçóù(Šêt:­V‹ HSSŸÏohh`³ÙYYY¶øŸêy<ì!Æ›™'™ä©™U'¾4??ß`0Æððp‚píµ×ÊÊÊA„B¡ "—Ë­V«Éd²X,ñññ8ŽŸ¸råÊo¼qåÊ•¾¾¾ÇT+_+}.O#‹?þøc±X¼sçεk×þüóÏ‹/ŽŒŒÁgyùàoòóó/_¾ te¬. ŒÇq ÃÖ­[Œ³¿¿Ìa$''WUUùùù…‡‡755EDDÔÔÔäææ)òð𨭭EQ”ÉdÖÔÔÌž=;55õ“O>‰‰‰•h Ã8Ntt´\.ÿâ‹/Æ£0ÍæðÄɹ§ñôô Ÿ?~QQ‘X,Æ0¬¶¶vÛ¶mÛ·o÷öö"a6›•JåÂ… ¯_¿¾nÝ:‚Nœ8Áçó CoooxxxCCÃܹsA-ýôÓýë_'Ož,))©¨¨˜?>™Lf±XÀ9™Íæ×_Ü hÉÕ«WöìÙ3…géB“Ћ~äx}'s¸{÷n†[[[;;;_yåAz{{ÁO4 ‚ ÃL&@`0­­­]]])))UUU|>ÿĉ¹¹¹b±xΜ9(ŠZ­Ö={öðx<>Ÿ®ÐÑÑL#::¤|"‘(&&fÙ²ez½†á#G޼õÖ[† …Âàà`.—+ …B¡NgQ*ïnÜxêÔ©ÇÏ¡µ@QÔ××÷ÆAAAV«µ··×b±Ðh4"‘¨R©¢££ïÝ»·hÑ¢¦¦¦ÁÁÁ;väæævwwƒ€#77÷ðáÃ+V¬ÈÌÌ´Z­‚ddd,X°`xxøÒ¥K0 ÷ôôhµÚôôt³Ù\XXŠ¢ÉÉÉV«ÕjµjµÚääd‚´Z-†aJ¥r×®]8ŽËåò9sì tþóŸÿ´ÙlI¡iÓ<ž(¦†"‘H¾þú뢢¢ÜÜ\©Tj6›aîèèx89‚ hppÍf ‰D"àýÇÊM@? EKK˶mÛ$‰«««V«{Μ9“’’òÃ?Ô×דH¤ÐÐPP¹JJJܽ{÷öíÛ_{í5½^ÿpMäСCãQ˜NsølÈy5ñ—_~ñññ¡Óé/^Œmii)((P*•›7oÆq<::dF£Ñd2á8¾oß¾ÒÒR>Ÿ_TTÉãñd2™———‹‹KWWWnnnQQ“É„ H.—+ µZ””äp8êêênܸñå—_ÖÖÖbæééÉf³²³³Õjõ²eËÀ"/++3™LÏ›?~|Ù²eýýý\.÷ƒ>øê«¯ÂÃÃ5Í¢E‹|}},Xpîܹ¨¨¨áááÖÖÖõë×[,›Í644LËÍÍÍÕÕU¡PDDD¸¸¸Øl¶¬¬,©TúÊ+¯ìرã³Ï> ëîî6™L"‘èСC 6›íÖ­[kÖ¬ñôô\½zõÁƒ‡‡‡9Ž››[XX˜§§ç­[·t:›Í&§NâóùÜÇwj-»…)­ÿO’=çBp^§}i‚8Af³EÑØØØcÇŽY­Ö––ÿœœ©TÊçóår9hž€J>‹Åªªª2™L3fÌ`2™ß}÷ÝÍ›7ø|~kk+ØMÛÐÐ@¡P®_¿®P(ÂÂÂØl6¨åˆÅb???  
E"F …yyy Väææf·Û)ŠN§;~üøxœDÞuuu >>>@§ÓY,µZ­R©~üñÇ´´´»wïzxx,_¾¼½½=((¨¤¤$11Q¡PtttܸqÃh4R©ÔêêêeË–‘Éd­Vàp8þö·¿q8Ç{{{SRR~üñÇÀÀÀÑÑј˜www³Ù¼uëV»Ý~þüù   M›6i4$®/#cYyyùÀÀ‚  á²bÅŠ«W¯fggOé^ýßœÌa^^ž««ëÎ;;våÊ77·mÛ¶iµÚ{÷îeff’ÉäçÜì3xñßFpâižeee  ÇS(z½¾³³“B¡,Z´ˆÅb) ‹µÿþ?ü¼*¤×ë³³³srr.\¸ðTÌKKK!Âqü1{¢&qŸ÷¯3à‡ìÙ³|øúë¯GGG:4::ºsçÎ É$2|ÌÁ .TUU’ààØfö ɤ0òxLƶüÿ‰gaxýúõÇŒïÂ… Ï1ž‰‡OC ƶBA<þ|Aöï߯ÓéÂÂÂV®\¹xñâööösçÎ †Ù³gƒ7èdh4š±KuuuÙív"‘Ã0èx‚pDD™L¾|ù²Ãáèêê¢P(‰D 0™ÌÊÊÊ-[¶<ç¶O§ãåPz½|}6˜$ý˜ =ümsÈ©¨bìß¿_­VOÁþ'&Xñ322èt:@HIIÿ¦Àãñ‚‚‚ŒF#ø©¾¾žH$ÆÄÄ,\¸,àÏ?ÿÜßߟN§Ÿ9sfíÚµ2™ìÀOø~WiiiMM ŸÏLURÔâO»gÏž1}xD÷ƾ>¬…¾øÙÓ‹_/}Épúã%Ãé— §?^2œþxñþÒ¥e¥endstream endobj 876 0 obj << /Filter /FlateDecode /Length 3256 >> stream xÚÕÙrÜÆñ]_±ÉK–ïsá°£JÙ‰d9弨Lâ”p$×ÚƒÁ.E)©ò·§¯ ÀáUeÉq±–===}wŠÙŬ˜}ùì‹ÓgÏ_•n¦ ÕžžÏLi•.g•¯Tcýìt5û~~zy¢çûm{8YXkæÿP|ýçImç¼êNþuú—篴!òµª,C(Lƒ0Ï Yx¶pNÏÖ«º,eĶGÌvŽëµ»üßzü¡ð… ?åg=ââÛoå÷znôx•ïq’ÿ}Ìd“L[½91ÓaæËvÇ7Ý©ë‘ØÍþÿçiĺ»×;"Ù«e•rÕ;U5È9£ /œû¡à}?eì ÄVZªÆÏF•†ÁÖ9)i«¬s³êE“qª°ÁzÇ`W‚lÖ¨¦4%B-¼*@—V•Þ2Ôwd `LX£´mîÀ œ’LPŸ:K¼SÎ]F9[3 ê̺½ìœf¤”NS¬ï²\36¡UUU·u•ò…xtn)”µ³Ïg' ]VóOñRò¥ —Ïx0G-”Š ÍòÄf—@“V¤Yz¥LeÃ^gVnT]Œ~“SIX¸?ZP;AWŽYVÖÀ_¸WØÌ²Ål¤á}²dªhF+W63 #qEWä˜Q©ºÆðVïÔox¯Ašÿú=–PF¿¿Ë0¶Qî.¾S¾úJG­(rÚ¦L•âaÍ¿K·¼*‹eõëЯ±íz¥ úÄ&šnF¹j`»Œ8´'¦ž¿¿­5Ú+‹6QÁßä–lTl©ê§£ø0]ÞÂú®NQ€ë3>H$ µ`0H DÿĠµÄ¨v Ç×ó%…ø+p,å|¿Ã¨¾Ã·G|[Í÷ç …qéR¢×€€C×½øåî=C,÷‡’߬øþ(âÛv"‡ºQ–r’D‘X§I!Û³P’â&+“ö®4Ø"ú*¡ )¾Žüi¨F4Ð3™L¦’“LéÍØ‚Ä} hâ‹ <'?´°³Í K͵<®ŸD ,=$è«æ?hëršï -\T}%ÔJ5QÅ}t·¿e;["À˜Ã\í’¥®uûc7Xc߆PûIŽ2°|ã•3’#Ÿ Gq¦]ƒóþÜ.øö-픟å8\ùOe"K'¾:; ŠMLC$‘ùÉ yµ”á~º‘¡ Lýøæ'¾ü‰R)ú=_Nƒv°&%oþ,o²ÓVÈXŠ6§fWmÆì>è-#zÁ—­ì2F Á·—Õ×É&„1WÉVëElFÈqßÀïoð{ù·N "l¸ø1 ƒÿN©ÝLðËv3¤f7¡É6­IIÏAVôäX·p?%Y=ÞS†o?å8z™9È<™9È”Ì÷rÏeŸ½ 1#Q座ôñ9,&û?'r&:üáœGN\+æY©¨.„Ž£\oÁF¯—‰‹à£È‡KÅ} –3ù£Ç©FˆÚÄ"ï)ÜŸadkôü5Uk†µÆŠ±Ú†ì+-orÉ]yX+ÖÛª™KˆŒª¼ŸV®@\ËuÕŠ”fâ_”@ï9ÎpËcT¬ê¢Pn(f¾ÍV«EÚÊ$Rû-îì:ǰÐM¨+©HÁ‘µPGšÚ°7àUSOêMŽPþ”¤){ÌQèaR‡b ‰ãôÜ‹QªKy~»[J΄@X38ã1åÆŽ Oì„;Þ1Ä‘ d­Az”I`±ÔtµTÛ¨ª,S¾µÛµ.¡¯w¡ÆÁ—\³€¢UhV¢W:δ™“ <ãõ˜“”(²Ur¡ô@ÏëLÙûd¥­F çi?ª©°O¡^d{Š ®6fú_e“•/GËÙÜr*“.×&ê<Ù¢VÍÐë|ÜÍϵÅÇ-g?îrîçZNg–+©…ø(êt9é ,[ñ àGÇM-xÜpÇã ?Q­WJû6æ1Ú Ün00|n¯FíˆÃ°˜“+Šq [i*ꢦ!øövœŽE4Vg\£I^na«×õ6Ò%ËERñÑ<DZ6Wô”Êå47„-¸?´‘-8´6»nU(—ùe§éF† ×€ ­‡Ä2«iÛU-±q¾ð¤ÝyÂîL~ÙÇìΫáäê“PXVd>· K¤lMÁ£ˆÞZ ’DcÃõ³3:6‹ðíÁwüpµá7CçGoB©y™Ï(4ßáX‰Õ>äI¡mÇÅè]©’¯ VÄ”ÀïNè;Ƹ¯UÔ8ÿ‹ ßðY>ÙI{!ÐçÒÊ ÷8¹6ØK_ÒÕªt·³AæéòzÜ:n‡Ð©bÖšt·¾¸Õl£úŸ»ÜŒ«ª•ŽGìû Úi&Y“—f‡oö˜¨T•+g¬º¹·çŒçx ËiãY£l²î[7JC bU©Í(}zä÷wô¿ˆÁÔS/VBÚØ.CãÞœ%GPSÎ8U÷a9óPKþÇ * õ…K;òwÆÂŒKÎÝÆ^Ö HÒ3jÙn÷£ÎœåÞLtÜ÷½–Î ~äq¯%ƒþke´×)¿@/åŒ|¿¯O¸':î§è'Õyiu~«.Óຬ½·.³£>/Já2®Hߦ‡I"9<~ü72iMN0 îc#æŸíÀß&ë×ÛÒº£QõüUÓàNq2†…ì.|2õ9ðØ5Õú_p ¼Ñ¿ýöj³¦»öØáeÅ Ý»Þdc;6Q U[—|ó%'™Ys‚ñîUOWÉwZ¾Ioǧ<—!]“xšÜ}èÈ×1à‚Ših8x–û;òt–*Óû@cT)nÈ:›D:ÂÅéim鸢iÌ_šœKiNsûX¡¿ÁXÏùí*„Ñv±âÔ–òsJæFÇr0çœrÙe¢èûP´N«~z"”Æ"= c‹³¶V¾®{èåm;Dç´[û˜ã]¨¬}{lDD gbHdFãÆ,Þ|Џ{$þоBüû>ë˜ðÛ<“¦zû¨W.`ÄÊz8\YÔ¶™µãïªíÉ.ã=pÔ5ç=ÛìGZœ’yF]?ÈhzdˆùDÝàig©Ÿ*Œ³î©> Ÿ±œxŠ$¢.Ò¹(Ò¿]éGjŽÏñsJôŸPJŽ Éîð–OŸÅZèL¿óDæ…öºðåam{¶~ÎöŽr)Ô@¶Húw™­–Êk?úÂ'û¡qö¡/=·$n¬Ì,gÀ‚0”€˜ìMëñ”Ya^§Þñ=;:Uìˆ9ço Ž\KõÁÃíãT¸=f?é¥ï!¹çæS¾,¹²½RXòŠ"9‰æzÀ~øˆï Í%?„óG¸ §êuR±Ãpr˜³ÍŸT@ÑàîTª¨¢}%(-Ûòå°陸Tðe!º±íì‡Ûr ¦À¶³MÂŒÉRw¸®ü °*K{ë[¤¡…³ë†”/nÃýu@ޝñ=_&±OÃW‚°½ãÐ5¾âˆµbàF{|â¡°ýU Oüá´4ÀÀ£â_(ƒþX[¿/…x¸¾<}ö?7—5×endstream endobj 877 0 obj << /BitsPerComponent 8 /ColorSpace /DeviceRGB /DecodeParms << /BitsPerComponent 8 /Colors 3 /Columns 75 /Predictor 10 >> /Filter /FlateDecode /Height 106 /Width 75 /Length 4477 >> stream xœí[LS×û> BÕ¶ÒµXŠ2­Î}Qz«Ä_´èª 6Â,ÖümâjfquFâœl³s‹Æ‰ à &µ*Q H§¶à†?î­ ¶pÕQPjVm 뾜„7@œ8ž?šæöÜsïÛ÷=ç¼çyÞãñ×_7žÿö 9F,þ±pøcÄÂá ‡?¼^Á3V¬X°Z­ÞÞÞS§N]ºté¹sçöïߟ˜˜xõêU Ã^²‚ Μ9Ãår…BáßõÉÚ†=Þ| û‡Û·ow:Ož¬VëkÒÉ?â߇fÐ[öÄ@ÖÃ+V( §Ñh„BazzºP(´Z­iiid2900°­­í­·ÞZ·nàêÕ«×®]=z4NŸ7o^jjª@ (++;|ø0Fƒ=h4 ÄB¡R©_§Ó™L&‡Ã1nÜ¸æææ¤¤¤ÀO@påÊ*• 
½aÉDÒÑÑA¥RÓÒÒz3áy}øBÿñÐ9dèLJS¦LÙ¸qcQQQXXXcc£L&;pà€H$*((`±X]]]¿þú+Š¢AAA%%%(Š¢(ZPPàëë;iÒ¤ÆÆÆÖÖVƒ¢(Àl6Ûl6‡c³Ù Åf³7𸸸K—.íÞ½[©TVVV¢(Êçó÷ïßïááõñÇ?}ú466V$åææ Ç].—Ãáhoo‡iÓ?üЛ ¯ENÓï`~ô?ëêêÍf“J¥)))|>ßív×ÕÕ‰Åâòòò?ÿü³½½ÝÛÛ»££cÁ‚™™™«W¯f³Ù …‚Åbq8œúúzƒA§ÓÉd2…BñööF¤¨¨ˆÍf›L&©TJ&“››› ‚‚‚îÝ»WUU%‰l6Û£GV¯^ £Æf³™ÍfNÇçó9Nnn®Ýn߸q#Žã(Šö±öãÃE‹-^¼Øf³F™L–™™xçÎ*•º`ÁµZ½wïÞK—.á8Š ˆÓéÌËËóöö7nŽã7nüàƒøÿ^*JŸ™Q^« ¦¯Å8Rô¿?lll,..Ž·X,R©455•D"‰Åâ‹/êt:©TÚÒÒbµZgÍšUPP@£Ñär9\Áôz=›ÍV©T­­­0Ÿ¬©©‰ŽŽÆq<$$äâÅ‹"QŒÑHyúô®HÄÓh4¥¥¥@`±Xèt:Àf³UTT¸\®ˆˆÇe2YNNŽ——\]£¢¢îß¿ÿÛo¿ñx¼½{÷öfB?{‹ÆÆF‹µeË–{÷î™L&@gg'Ã0OOO>ŸÏb±"##G…ã¸H$’Ëå …B ƇþüóÏAAA #>>>%%%88øÌ™3 ÅÏÏÏl6§¦~WY™>fLû¥K—ŠŠŠ¢¢¢œoªªªp¯¨¨ÈËË Ñjµ=¢Ñh‡cñâŇZ¼xñÇç̙ԇ ÿù(Ý´i‡Ã1›ÍƒÁÏÏÏ×××n·WWW/\¸ÅbmÞ¼ùÕ¼åËàÍ÷a?ãpèÈ…W†~¢4==Édfff^¹r%;;{ÕªU±±±!!!%%%K–,)//÷Ýw)Jmm-“ÉT©TEEE†]¸pD" Aëׯ_¾|ùþýûccc©Têƒär9‚ Aärù²eË‘H${ö쩨¨àñx<oôèÑ:N§Ó±Ù쀀½^Ÿ˜˜8€äáÍÒëééé_ýõäÉ“  ¨¨¨©©É×××ÏÏÏh4Þ¾}Ûét*Šû÷ïgdd‡Ã HNNÆ0 ðS§N‰D"Hç‘““¸uëVµZ ÈÑ£G§M›Æçó¡«/_¾|ýúu·Û¢èìÙ³¹\.‚ †Ñh´gÈ›g0âÃÞ¡T*­V+$NQ*•/£Óé‹E­Vß¼yÒ0¦¸¸x×®]ááá< ‚ ZZZ7lØ T*ýüüPíììÄ0 ^ «®®6 Ÿ}ö‚ ÿ‚…08»³ínA£ÑÀM0†aðÍœN§§ç³“¶P(¼xñ"àÖ­[=¯Óh4ooo@WWŽãd2™ 6àœ~$J{Gzzzuuuss3›ÍŽŽŽ¶Z­eÒh4F­VÿòË/[¶lé-,M&Ó‰' üLX~óÍ7F"‘ Š…#>|qÀq¨Óé’““{N-V«õñãÇE‹‹‹»§‚ àõñãÇ»ÝîA÷áèø>>>n·›L&GEE! V*•peS©T555qqqß}÷]TTÔĉW­ZõLƒe‰Ò¾ ÑhÖ¬Y#Äb1œÊÏœ9ÓÖÖ€”Ùܹsû’„^Þ|þû éPã¹X}­Vk·ÛwîÜyìØ1HìC¢éܹseee‹-b0cÇŽ]½zuVV“É,))Ùºukvv6•Jíììd0³fͺvíÚøñãq—Ëå4-???//oÆŒ›‘Ëå~~~_|ñEdd¤V«…ÇËËË£P(Ož‰ŒŒÔëõPŠ„$?a` Ab±8--måÊ•UUU+W®iÒ¤7nP©Ô»wïNŸ>D"àæÍ›l6»¶¶öÑ£G(Šêt:£Ñ(‘HÈd2Ü=UWW›Í憆:Îb±~úé'¨gñù|…BÁf³oݺÕÑÑñ¦1Â/:ýôϤR©R©d0,‹Åbeddô\Bôz=àÆ‚X­ÖŽŽEq_°`ÁöíÛcbb<==i4šËå"“É­­­V«Õ`0$''{zzŽ;öرcP€ •J¥R©‡#,,¬«« '9›Í&šššìv;…B™:uj~~>™Læñx ÏzšáçÃEÿð¨Q£l6›Ó鄊¼Éd"‘HUUU'Nœx Õ¿ã?ïC¨†††–––&%%‰Åb‘HgN€@ 8xðàñãÇ8°|ùò–––dzX,§óÿþ÷¿¹<„F£éõz­Vk6›á,WSSÃd2ÛÚÚœNgbbâÑ£G+++Ý¥3GŽ™={vQQQdd$   `öìÙååå|>ÿСC …¢¦¦æøñãÏiaÿ>t)öðÊ$ñÿ|”‚˜;wnHHˆB¡ …<oæÌ™]]]'N4&“)::Úl6“ÉdÈÞS©T‹ÅòöÛoCöž ˆ¦¦¦ÆÆF‰täȘ÷ÀŠË¼¼¼ððp³Ù¬R©6nÜxëÖ­M›6EDD ®…o¾GvÀ=vÀr¹òð09ìÎr|}}ßy犊 Ç) ™L¶Ûí0­…À¯Ä^1„QúšŒÃá!_-999÷ï߇ò[·œ¸oß>‚ z2âÝŒTxË‘(þxó-쿞f„ )))çÚµk§NÚ»wo}}=‰Db±X&“I(ÖÕÕFX¡wóæÍðððyóæ ¢xô’‡ÂkU+Öÿ¸¥¥…ÉdBz³¨¨hÇŽ_~ù¥——AŽŽŽÐÐP*•záÂ…öööèèèÒÒRF§Ó333cbbššš¢££sssçÏŸo2™‚ƒƒÇŒ£Õj“’’RSS©Tjee¥H$2'Ož¤R©ááá6› ®4p«a³ÙÊÊÊBCCÉdòÂ… ¿üòKwøða‘HˆŠŠêÍ„QŸþyªT*—ËœŸŸßÕÕE¡Pôz½Ãáàñx‡£³³EQ‹Åâp8`RÊç󛚚&L˜0{ölH]ñôéS—Ë5}úôúúúÎÎN—ËUQQ1eÊ”ââbÿiÓ¦‘Éä5kÖ|úé§t:Ýd2iµZ@°yóæ“'OÌ™3çúõëÍÍÍׯ_Ÿ7o‰D’H$¡¡¡jµZ$͘1#  7sn" ó—ï¶Ÿ(MHHÀ0L*•r¹ÜÂÂÂ9sæœ={6$$„F£±X,@ V«a8¹Ýn›ÍæááÐÑѱtéÒóçÏs8œ§OŸþøãáááV«u÷îÝ»ví¢ÑhÓ§OÏÈÈ P(þþþ€µk×~õÕW;vìØ¹s§R© Ÿïåå5fÌ™LÎÊÊš;w..ú2[Ï6ÏÙÕîíLJ‹-òññÅì(Šr¹\±X|öìY±XœœœÌår¡hàr¹í칸\.,ê:tèPZZZCCl¬Óé°¾¸££C*• ‚>úˆL&K¥R¹\ár¹¦M›VPP ‰öìÙsìØ1x;A þþþfþüù2™¬³¥o~NÓ?O#•J!#‹¡*Š “É„GHØl6T­›ššD"Qjj*Ÿ“&MÂ0 Š{‹…Åbyzzö½ö†Ê‡¯ I†î4ÂkbxÎÓ§OŸ¾páÂPœF0v»ÇqÇ—,Y²jÕ* Ãà±­V ‹ý),,¼|ùr\\œV«¥P(R©ôù%üçgš>žžžŸŸ/ ÿýwƒÁW”»wï®\¹²¹¹9//oÔ¨Q2™ÌápÀÒ ¡POQá8~òäI‚ ¶mÛ& C[[[`` ,7 …{÷îíìì4›Íb±X(Êår(oäää„……ÅÇÇ#²oß¾˜˜˜~9«7߇o>O3báðLj…Ã#ŒX8ü1báðÇÿ”çòÎendstream endobj 878 0 obj << /Filter /FlateDecode /Length1 775 /Length2 1480 /Length3 0 /Length 2024 >> stream xÚ­’y£’¹hçdøøÄ»ðrY4Yoùà¸s m|;c€Ãµtfôæ%Ý }š)„#\~9½;†ív]2W0³J8Ä9ÖüÓRwæ„Oå‚ŵËf2ƒ5I‚ V#ÒésTôôPIÝ´YMñô”“£sêÇ®ŠZ…yíû3F#Ž«6Þº.=$n ôÏšN1“}úîc{÷ŸjÆs×H¥7‡•Õª›5žéenÅï>w…©ªÛR¶±À4 »æ} žuLa'†ÿØíÃ)B9ͬÊÝÿj¬Ñ½éâ„åD m‹/šÏ9$a9ÿI›3—â Ý*Fo/”ÇxîÐ,qç¹âºd#:9¿Jeô|,ÈKhX|3ƒ¡.SðaFZÙ‡£1:}-ª6e2‘`¸¡J%4ÅI#£Ê¿¥‰‚U«*[\ÓZéàÂ,kì ¼7q»ý¾‚ç/ÖΗʗ˜çî­Ó•Nà¼ÉŸæT¦È‡îŠÓ¹ßúVYPCë~–Q=oB_W ‘À\ò«¢íz÷ÀÃI®p{T´Ð·Ã½/”š»‚:Ú,¯ÜÛâÿèºdwZ…ÊŸxM¿Ôäϧd7Ù5Q¿çÄØµ±Ÿ´‚^©Oçæ6L•lXf& FiMÒ0·ëâÇD[A‰ù¿,IÏ+%Ñè^|\Šy¬•L%t‡U×M©]mçë sŠš˜X´h-¿k V蔨³ˆ]ÔPÒTUÜñ-.7Ÿât>ë g3[:È$û”£n^’ÕKq¾±ÉÓ{|êIkÏRãZ©»î4Û׵ܾ[¥ØJ>–m¸ß’3ç{·Vç~V[y×#ÓøõÛî…¤«³­=+Ù¿Lo<ÿ¤Ö.I¹)¿'ï·m¦êñy‹­‡ûm“yO!/Ú§;šÊ àã˾ü©®‘‡É™½4÷Û*ÌÒáú.¿T~¤0·UŸ¹bìp¸Á!!ß‹²ËÙv©xÇë«äcIHæÇ«‘ôÃQ#×Ç3ŸÔx 
”Öø•ß™<ÑšýÞì8ÉÆ3­õSG¿„ƒjêÅF‡xËî>?ø³ò?{ÑZu¦UÁ³_~ï›3Ø,áïxÞ4ˆð´˜uá7úKu7 l½Û5Ͼw> stream xÚ­’}8ÓëÇsâ4%ÏBߎÅllÍS4O!Ϧ\5Ûw3fß&§!‰èEž«5"[¡ð E+æòDΑ‡å¡~£ÓÕõëüû»¾ÿÜïÏç}î×ý¾¿:Ûݽ pd(´‡leظàM ×ѱaD6 bØÙ €25E¸0*€2ŒPfH3$®Ø@ÌHÈtmôVMX²h$"p!²Áñ ‘xA$ÈŽD8:ð\Ý x‚¡ +$#à(@¦‘Ø@H¥1à†«@Ž `¿•ÉaÌï­p*†tÅz€‘ 1葤À ]!ñY ˜äÿõópû0:Ý•²:~5¤µ‰!4zä?(„ÆY€ DYŒŸ­‡Áol. ™òsבM¤ÓH8•(4‰þV§…ÚÓ8 ÙÆ&"=\«ƒ òÏ$âðÖ8 m|m=ím÷|{Ôµž;‘Æ`ã#™ €üa^Ó¨Z‹ÆŽ H$JlßW~?eÇ Adƒ aöD‹ GŠGa0@  1È 9b`Cb‹·â`b Ä‚¯¾©© `°ZZS(ñU H”)`±&ÿ}Akkˆe`Œ Œ0âQh‹AÆü‘Æb öÚ$Žé»¦ÐÄÉ‚ $ÁE¯ ’97è /¡ô„]þó2)½PŒbn?þn†9b|À|‹Uý jÍÈÇËÑÊ7_ž8xÓLÁ¼Vãᢛ£¼ ã{ä›eõr3½6L‘Æ þ¿´äÙþZ$s•¬=rî—Sç| <46˜ötñý(¶¨2?*|©5ôw÷Éf˜&縬™±oY©Ç¡ µÃQYÁÛD>Ù¦_«£¾nn­/˜™?söŽ'-­s|´k¡¡kŠ{Ùõ@U6uËx,§ÿÑòý‡N$Ðw8(Úæhe˜•[ë[i–4ËÇsgFÔN ù%·²/YÐ…TýMµp¥jìkêÓ÷AgÇíäúƒ¿Ëd Šç3Ž̹˜³¼;–í¹—ZYjs°[T«ã]ÞNÆáϳkÇgÜŠÖkûmŽþõÎ…3³´Þ0IÿƒÄܯ¶þeÊCI¾üK¿Þß®æfiË_XL­ Mï¢IDäÃcÝ–'¬þö¯lA 9å„íÐÊݘ~Ź7`ï8w“Ð%dÀ%ˈÒx?ëEræAÉÓ¸kбKWôêv©Îç¾°íØJš[šiÙ9·eˆ!#•GÔIðU\‰b"ïrê’Ÿ<0õÂ4Ö^”Ó@ÿAû ÿÅãå¡¶ÿƒdªŠ†¶ðÜW7~+ä~tœ~ÿ¥oëGC/¿{uiêý‚°×·øTJm˜m8³žM“Hq­ÔKuÓßU±`UlÉ·™V¬;ì{Óƒ.¹¡2[Ù ëy_íÒªÀš†J¦°ê³‹·@Tj} îQ^9XØëUȶAsk)Ì̃ªT"HHuÐ5æ]¡Zr£i›#ß}è«y¥.’—è>\c ¤ÇxúN>ýå\Uë|ÑèMC#ý•Ç%Æ£o'¸ô ‘$×õÛ'|1QºÂРоöh¾a¹]ù/ Ÿï{ƒŽÖXàŠ´<Ú‹§˜Ü cmî8У'ø˦Ý{lí$}¿¥[¹¾s{ÞŠƒÉLÚ0Ì/NÙ†»£;‹­;k¹êóeGõÔ'Õ¬èш¯®›´Ûc–AöÝ(ÕºlhÝza­·ÉAßÏ¡|jPUʨ6Q»} ‰`P<ÙèÆß(õøS+I¦V¾ž×Ún/û›èD÷ÝW¥}{¨äô£Šã°9fÒv‹x«¹êŠÎfØõb¢„é¼çfk—»¬„ÜŠ¢E‚±‚ÑIûbu¸ê5J„¶„ H^5³ùPΖ’ªþéWö,¤{W›Üˆª¬]oY³_1Ý)µÕQ}’è)¿ß©o§vߦô%ÿ×êo'‚xwªßeÑ 7N6)XóF¢ä¼{Ê´(­æÊ(üÎçoY2P9™>„®*H$:¿MÜ> stream xÚ­’y<”kÇ”¬²<êLÙg±Ee†laìRÆÌƒaÌŒ1#Ä”Ý![ö¡$½-²Öà%Y"L²Uv‰”²9g¨ÞÞ·óïûyþy~×õ»¯ûûùÝTÞ£jˆ'{‚h2‰¦ŠPCè(+{{@¨Áù¡PÄÒd’1–ê`H÷p¡¥«¡©«¡ÅPdJ•àíCPŠ›&mÀФpX`…¥ù€þœ8,Àq¢‰€Ýæ‰@À ©A ^ð ð½ $~Ø&Ñ ’ÐþVÆÓ)?ZA 5(p "žL"†xЋfMæÜrHþP¿GÓ‰Dk¬ÿæø­”þÑÇúˆ!ßd R+2¤’~µ:ßà¬@<îÿk÷ K$à IÞD€+Ñ„`oC á|•n•AþWNn[0K[´™‹“ò÷ÝjÚ` $š}å?S7Ý[ñSsâ¡‚7¸Žà9ß?÷_.3!áÈx‰³šZ–JņðsVƒ£4s€@ƒÁ̆©‘È4΀“I8àE¦òo¾'® À›µïR €ùþ”vö§äl,ø¿$€…nÉ`dD>§ŠT‘ê€&RÐÒÑ ÿŽN¥‚$ÚÖjqBü¡½œÄA0Äñ¿ ãŽFûfWÇ3LŠºK¶+AŒ¼¤ZW<êý·@ÔË4ñÆS‹¥ÑrçµÛLQáéíÓrg¿ìL¨ ³íBÏG¤æô¯OyL3CYûÿ`Z…LQ¬DÍîûôAr²g´£8ÃåzËå…ÖÙBcû·¼ÃòÛÝ‚*ó¢µ´ÑL¢C\Lù!uy»Ý9Åñq#gwg\âyÅðÛùPyh‘\GY+Úy5}ý 7¾ËÑc]tRˆ¥ä› t ·É¸}ñZ8‘>¸PÏ_¦)« Y?ƒM8é­,þÈo*{´Ã%oMØÃðs¿³)†2^ìeÊ­!e?3@¹%'XáP&cbS© }ãRUÚ“¿Mw¿gþÅ€ô¸ª™Ô_6„q† ÝU|–TÃw©¿M[ݸ””÷UÊi;¥ZX)¦Þvûæ|HnøŸÜéîJ¡™+×ÜÅR*W— ýrT­åÎÞq²/*’÷nC:^LÒFÜ y_ÊwçÔtK¡b³Ã1^ù‚Ϙv{XxÆ"ÂwUO}ƒ!“ƒÖd œc‹úí˜Ë ­Lúê£!7Δë#TÙû¼AXEVpgÌȘI‹]MÃÐYüôsò‡æI¥æÞŠÙ èáívØØâR€ÀÃùSEηïc¸>hpoIkzÆW£tDQÆäzã™Å4òë¦_ÛøöÌæc—{êÇÚ‡ö9ǘé¶,”uŠôlT¨íäîe #ó'‰öê Ã¡;ŸuáÒŽk÷//œsçI4õ 4e‘õó1lÇ‘^Öq‰®K¥˜ÏØX«=!GÒ’D(-fÍ•Qr¼Ù~öU .½ZÔx÷U¿·PšÌNÖOâF{Ç#•`ØÙ uÒýVÃl\lLžÁ2¾òuœtnóŒä”šn¶S ]üŠLÒ÷t¢=Ò²Nè}gw*5©~˱9sÝT~ÛˆÀ,*)’1Ó ?e,ìö‘§Ê>Ú ý€¸{•RÛõrŽÁ­-X4“™‘§h-bÅò±@§}™‚N¡U.ñ1Òs³Ì©Ú:-›³Ë´O¢Óo/x3Ðí“._*G¿`ra24J-Ý%$iÂö½ý¯3SÌ"PûR^$¸®$žÿ]#LS.|¥cÿÅ_vfnCa;¤Õ8Éch–T‘RÀdi¯ e ÃÇÀ’çÅì›!ÌähDáÞça)ï¶÷Oø‹½YÛ¡^"È ®F••NÙ[?ØsRÃBR>Íá”)0ïúÙÒ¹V™±¦…O>æ*^™©,ÐÑG,Ò?(q6Îö¨q]ײO©)[بØ3l,þÖâîI®ÕríS –*¹cDõ‹d·ÓÓ†k‰CÚQ箎†Ú­'¯+«ŭ÷ÑݰãÖ²Œ,׬—yOIá'b‡¹0±Ej™—JAÈÙöW Z9P›œtM]ÕµcÕhóÊÓ”ŠZÿaxÇÒ”Dt*5y›ÒäPÞf¹‡ø( j£©¬4•ªÖ¡þÙ> /W [ 1 3 1 ] /Info 97 0 R /Root 96 0 R /Size 882 /ID [<2c6d8275d7666133a3a978bbc8b55e84>] >> stream xœí˜K(DaÇï½ÆÌÜÁ¼g0J¢ÐÊ«dAb7 …ˆÅh"™²¢ì(…¼Âjjß*eÎ0˜ÖZ¸ OmÓý´Õ^î÷L]`ä(“ŒáêjŒ7š'J7sªì€£ÙÆÁMÞk̬6òë—S¦AÃ.ûœ—ŒW¹ ¿×Î1ÖJö?c=eØ“‹þOÆÏ³ÚDÌ,=‡ì9ëtÛ°¢=÷®0'Ï g ×í`{´Ë¤«Òl8 endstream endobj startxref 676343 %%EOF VGAM/inst/doc/categoricalVGAM.Rnw0000644000176000001440000025211212136651113016200 0ustar ripleyusers\documentclass[article,shortnames,nojss]{jss} \usepackage{thumbpdf} %% need no \usepackage{Sweave.sty} \SweaveOpts{engine=R,eps=FALSE} %\VignetteIndexEntry{The VGAM Package for Categorical Data Analysis} %\VignetteDepends{VGAM} %\VignetteKeywords{categorical data analysis, Fisher scoring, iteratively reweighted least squares, multinomial distribution, 
% nominal and ordinal polytomous responses, smoothing,
% vector generalized linear and additive models, VGAM R package}
%\VignettePackage{VGAM}

%% new commands
\newcommand{\sVLM}{\mbox{\scriptsize VLM}}
\newcommand{\sformtwo}{\mbox{\scriptsize F2}}
\newcommand{\pr}{\mbox{$P$}}
\newcommand{\logit}{\mbox{\rm logit}}
\newcommand{\bzero}{{\bf 0}}
\newcommand{\bone}{{\bf 1}}
\newcommand{\bid}{\mbox{\boldmath $d$}}
\newcommand{\bie}{\mbox{\boldmath $e$}}
\newcommand{\bif}{\mbox{\boldmath $f$}}
\newcommand{\bix}{\mbox{\boldmath $x$}}
\newcommand{\biy}{\mbox{\boldmath $y$}}
\newcommand{\biz}{\mbox{\boldmath $z$}}
\newcommand{\biY}{\mbox{\boldmath $Y$}}
\newcommand{\bA}{\mbox{\rm \bf A}}
\newcommand{\bB}{\mbox{\rm \bf B}}
\newcommand{\bC}{\mbox{\rm \bf C}}
\newcommand{\bH}{\mbox{\rm \bf H}}
\newcommand{\bI}{\mbox{\rm \bf I}}
\newcommand{\bX}{\mbox{\rm \bf X}}
\newcommand{\bW}{\mbox{\rm \bf W}}
\newcommand{\bY}{\mbox{\rm \bf Y}}
\newcommand{\bbeta}{\mbox{\boldmath $\beta$}}
\newcommand{\boldeta}{\mbox{\boldmath $\eta$}}
\newcommand{\bmu}{\mbox{\boldmath $\mu$}}
\newcommand{\bnu}{\mbox{\boldmath $\nu$}}
\newcommand{\diag}{ \mbox{\rm diag} }
\newcommand{\Var}{ \mbox{\rm Var} }
\newcommand{\R}{{\textsf{R}}}
\newcommand{\VGAM}{\pkg{VGAM}}

\author{Thomas W.~Yee\\University of Auckland}
\Plainauthor{Thomas W. Yee}

\title{The \pkg{VGAM} Package for Categorical Data Analysis}
\Plaintitle{The VGAM Package for Categorical Data Analysis}

\Abstract{
  Classical categorical regression models such as the multinomial logit
  and proportional odds models are shown to be readily handled by the
  vector generalized linear and additive model (VGLM/VGAM) framework.
  Additionally, there are natural extensions, such as reduced-rank
  VGLMs for dimension reduction, and allowing covariates that have
  values specific to each linear/additive predictor, e.g., for consumer
  choice modeling. This article describes some of the framework behind
  the \pkg{VGAM} \R{}~package, its usage and implementation details.
}
\Keywords{categorical data analysis, Fisher scoring,
  iteratively reweighted least squares, multinomial distribution,
  nominal and ordinal polytomous responses, smoothing,
  vector generalized linear and additive models, \VGAM{} \R{} package}
\Plainkeywords{categorical data analysis, Fisher scoring,
  iteratively reweighted least squares, multinomial distribution,
  nominal and ordinal polytomous responses, smoothing,
  vector generalized linear and additive models, VGAM R package}

\Address{
  Thomas W. Yee \\
  Department of Statistics \\
  University of Auckland, Private Bag 92019 \\
  Auckland Mail Centre \\
  Auckland 1142, New Zealand \\
  E-mail: \email{t.yee@auckland.ac.nz}\\
  URL: \url{http://www.stat.auckland.ac.nz/~yee/}
}

\begin{document}

<<echo=FALSE, results=hide>>=
library("VGAM")
ps.options(pointsize = 12)
options(width = 72, digits = 4)
options(SweaveHooks = list(fig = function() par(las = 1)))
options(prompt = "R> ", continue = "+")
@

% ----------------------------------------------------------------------
\section{Introduction}
\label{sec:jsscat.intoduction}

This is a \pkg{VGAM} vignette for categorical data analysis (CDA)
based on~\cite{Yee:2010}. Any subsequent features (especially
non-backward compatible ones) will appear here.

The subject of CDA is concerned with analyses where the response is
categorical regardless of whether the explanatory variables are
continuous or categorical. It is a very frequent form of data. Over
the years several CDA regression models for polytomous responses have
become popular, e.g., those in Table~\ref{tab:cat.quantities}.

Not surprisingly, the models are interrelated: their foundation is the
multinomial distribution and consequently they share similar and
overlapping properties which modellers should know and exploit.
Unfortunately, software has been slow to reflect their commonality and
this makes analyses unnecessarily difficult for the practitioner on
several fronts, e.g., using different functions/procedures to fit
different models which does not aid the understanding of their
connections. This historical misfortune can be seen by considering
\R{}~functions for~CDA. From the Comprehensive \proglang{R} Archive
Network (CRAN, \url{http://CRAN.R-project.org/}) there
is~\texttt{polr()} \citep[in \pkg{MASS};][]{Venables+Ripley:2002} for
a proportional odds model and~\texttt{multinom()}
\citep[in~\pkg{nnet};][]{Venables+Ripley:2002} for the multinomial
logit model. However, both of these can be considered `one-off'
modeling functions rather than providing a unified offering for CDA.
The function \texttt{lrm()} \citep[in \pkg{rms};][]{Harrell:2009} has
greater functionality: it can fit the proportional odds model (and the
forward continuation ratio model upon preprocessing). Neither
\texttt{polr()} nor \texttt{lrm()} appears able to fit the
nonproportional odds model. There are non-CRAN packages too, such as
the modeling function~\texttt{nordr()}
\citep[in \pkg{gnlm};][]{gnlm:2007}, which can fit the proportional
odds, continuation ratio and adjacent categories models; however it
calls \texttt{nlm()} and the user must supply starting values. In
general these \R{} \citep{R} modeling functions are not modular and
often require preprocessing and sometimes are not self-starting. The
implementations can be perceived as a smattering and piecemeal in
nature. Consequently, if the practitioner wishes to fit the models of
Table~\ref{tab:cat.quantities} then there is a need to master several
modeling functions from several packages each having different
syntaxes etc. This is a hindrance to efficient CDA.

\begin{table}[tt]
\centering
\begin{tabular}{|c|c|l|}
\hline
Quantity & Notation &
%Range of~$j$ &
\VGAM{} family function \\
\hline
%
$\pr(Y=j+1) / \pr(Y=j)$ &$\zeta_{j}$ &
%$1,\ldots,M$ &
\texttt{acat()} \\
%
$\pr(Y=j) / \pr(Y=j+1)$ &$\zeta_{j}^{R}$ &
%$2,\ldots,M+1$ &
\texttt{acat(reverse = TRUE)} \\
%
$\pr(Y>j|Y \geq j)$ &$\delta_{j}^*$ &
%$1,\ldots,M$ &
\texttt{cratio()} \\
%
$\pr(Y<j|Y \leq j)$ &$\delta_{j}^{*R}$ &
%$2,\ldots,M+1$ &
\texttt{cratio(reverse = TRUE)} \\
%
$\pr(Y\leq j)$ &$\gamma_{j}$ &
%$1,\ldots,M$ &
\texttt{cumulative()} \\
%
$\pr(Y\geq j)$ &$\gamma_{j}^{R}$ &
%$2,\ldots,M+1$ &
\texttt{cumulative(reverse = TRUE)} \\
%
$\log\{\pr(Y=j)/\pr(Y=M+1)\}$ &  &
%$1,\ldots,M$ &
\texttt{multinomial()} \\
%
$\pr(Y=j|Y \geq j)$ &$\delta_{j}$ &
%$1,\ldots,M$ &
\texttt{sratio()} \\
%
$\pr(Y=j|Y \leq j)$ &$\delta_{j}^{R}$ &
%$2,\ldots,M+1$ &
\texttt{sratio(reverse = TRUE)} \\
\hline
\end{tabular}
\caption{
Quantities defined in \VGAM{} for a categorical response~$Y$ taking
values $1,\ldots,M+1$. Covariates~\bix{} have been omitted for
clarity. The LHS quantities are $\eta_{j}$ or $\eta_{j-1}$ for
$j=1,\ldots,M$ (not reversed) and $j=2,\ldots,M+1$ (reversed),
respectively.
\label{tab:cat.quantities}
}
\end{table}

% ----------------------------------------------------------------------
\section{VGLM/VGAM overview}
\label{sec:jsscat.VGLMVGAMoverview}

This section summarizes the VGLM/VGAM framework with a particular
emphasis toward categorical models since the classes encapsulate
quite naturally the models of Table~\ref{tab:cat.quantities}.

\subsection{VGLMs}
\label{sec:wffc.appendixa.vglms}

Suppose the observed response \biy{} is a $q$-dimensional vector.
VGLMs are defined as a model for which the conditional distribution
of~$\biY$ given explanatory~$\bix$ is of the form
\begin{eqnarray}
f(\biy | \bix ; \bB, \phi)  ~=~  h(\biy, \eta_1,\ldots, \eta_M, \phi)
\label{gammod}
\end{eqnarray}
for some known function $h(\cdot)$, where
$\bB = (\bbeta_1 \, \bbeta_2 \, \cdots \, \bbeta_M)$ is a
$p \times M$ matrix of unknown regression coefficients, and the $j$th
linear predictor is
\begin{equation}
\eta_j  ~=~  \eta_j(\bix)  ~=~  \bbeta_j^{\top} \bix  ~=~
\sum_{k=1}^p \, \beta_{(j)k} \, x_k , ~~~~ j=1,\ldots,M.
\label{gammod2}
\end{equation}
Here $\bix = (x_1,\ldots,x_p)^{\top}$ with $x_1 = 1$ if there is an
intercept. Note that~(\ref{gammod2}) means that \textit{all} the
parameters may be potentially modelled as functions of~\bix. VGLMs
are thus like GLMs but allow for multiple linear predictors, and they
encompass models outside the small confines of the classical
exponential family. For the categorical models of
Table~\ref{tab:cat.quantities} one has $M = q - 1$ because one of the
$q = M+1$ response probabilities is redundant.

The $\eta_j$ of VGLMs may be applied directly to parameters of a
distribution rather than just to a mean as for GLMs. A simple example
is a univariate distribution with a location parameter~$\xi$ and a
scale parameter $\sigma > 0$, where we may take~$\eta_1 = \xi$
and~$\eta_2 = \log\,\sigma$. In general,
$\eta_{j}=g_{j}(\theta_{j})$ for some parameter link
function~$g_{j}$ and parameter~$\theta_{j}$. For example, the
adjacent categories models in Table~\ref{tab:cat.quantities} are
ratios of two probabilities, therefore a log link
of~$\zeta_{j}^{R}$ or~$\zeta_{j}$ is the default. In \VGAM{}, there
are currently over a dozen links to choose from, of which any can be
assigned to any parameter, ensuring maximum flexibility.
Table~\ref{tab:jsscat.links} lists some of them.
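
To see the $\eta_{j} = g_{j}(\theta_{j})$ idea in code, the following
sketch (not run; it uses the \texttt{pneumo} data analyzed later in
this article) assigns two different links to the cumulative
probabilities $\gamma_{j}$ of Table~\ref{tab:cat.quantities}:
<<eval = FALSE>>=
pneumo <- transform(pneumo, let = log(exposure.time))
fit1 <- vglm(cbind(normal, mild, severe) ~ let,
             cumulative(link = "logit",  parallel = TRUE), pneumo)
fit2 <- vglm(cbind(normal, mild, severe) ~ let,
             cumulative(link = "probit", parallel = TRUE), pneumo)
coef(fit1, matrix = TRUE)  # g = logit applied to each gamma_j
coef(fit2, matrix = TRUE)  # g = probit applied to each gamma_j
@
Both fits make the parallelism assumption; only the link~$g$ differs.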

\begin{table}[tt]
\centering
%\ ~~~~ \par
\begin{tabular}{|l|l|l|l|}
\hline
\ \ ~~~~~~~~~~~~ $\boldeta$ & Model & Modeling & Reference \\
 &  & function & \\
%-------------------------------------------------------------
\hline
\hline
%-------------------------------------------------------------
 &&&\\[-1.1ex]
$\bB_1^{\top} \bix_{1} + \bB_2^{\top} \bix_{2}\ ( = \bB^{\top} \bix)$ &
VGLM & \texttt{vglm()} &
\cite{yee:hast:2003} \\[1.6ex]
%Yee \& Hastie~(2003) \\[1.6ex]
%-------------------------------------------------------------
\hline
 &&&\\[-1.1ex]
$\bB_1^{\top} \bix_{1} +
 \sum\limits_{k=p_1+1}^{p_1+p_2} \bH_k \, \bif_{k}^{*}(x_k)$ &
%\sum\limits_{k=1}^{p_2} \bH_k \, \bif_k(x_k)$ &
VGAM & \texttt{vgam()} &
\cite{yee:wild:1996} \\[2.2ex]
%Yee \& Wild~(1996) \\[2.2ex]
%-------------------------------------------------------------
\hline
 &&&\\[-1.1ex]
$\bB_1^{\top} \bix_{1} + \bA \, \bnu$ &
RR-VGLM & \texttt{rrvglm()} &
\cite{yee:hast:2003} \\[1.8ex]
%Yee \& Hastie~(2003) \\[1.8ex]
%-------------------------------------------------------------
\hline
 &&&\\[-1.1ex]
See \cite{yee:hast:2003} &
Goodman's~RC & \texttt{grc()} &
%\cite{yee:hast:2003} \\[1.8ex]
\cite{good:1981} \\[1.8ex]
%-------------------------------------------------------------
\hline
\end{tabular}
\caption{
Some of the package \VGAM{} and its framework. The vector of latent
variables $\bnu = \bC^{\top} \bix_2$ where
$\bix^{\top} = (\bix_1^{\top}, \bix_2^{\top})$.
\label{tab:rrvglam.jss.subset}
}
%\medskip
\end{table}

VGLMs are estimated using iteratively reweighted least squares~(IRLS)
which is particularly suitable for categorical models
\citep{gree:1984}. All models in this article have a log-likelihood
\begin{equation}
\ell  ~=~  \sum_{i=1}^n \, w_i \, \ell_i
\label{eq:log-likelihood.VGAM}
\end{equation}
where the~$w_i$ are known positive prior weights. Let~$\bix_i$ denote
the explanatory vector for the~$i$th observation, for $i=1,\dots,n$.
Then one can write
\begin{eqnarray}
\boldeta_i  &=&  \boldeta(\bix_i)  ~=~
\left( \begin{array}{c}
\eta_1(\bix_i) \\
\vdots \\
\eta_M(\bix_i)
\end{array} \right)  ~=~
\bB^{\top} \bix_i  ~=~
\left( \begin{array}{c}
\bbeta_1^{\top} \bix_i \\
\vdots \\
\bbeta_M^{\top} \bix_i
\end{array} \right)
\nonumber
\\
 &=&
\left( \begin{array}{cccc}
\beta_{(1)1} & \cdots & \beta_{(1)p} \\
\vdots \\
\beta_{(M)1} & \cdots & \beta_{(M)p} \\
\end{array} \right) \bix_i  ~=~
\left( \bbeta_{(1)} \; \cdots \; \bbeta_{(p)} \right) \bix_i .
\label{eq:lin.pred}
\end{eqnarray}
In IRLS, an adjusted dependent vector
$\biz_i = \boldeta_i + \bW_i^{-1} \bid_i$ is regressed upon a large
(VLM) model matrix, with
$\bid_i = w_i \, \partial \ell_i / \partial \boldeta_i$. The working
weights $\bW_i$ here are
$w_i \Var(\partial \ell_i / \partial \boldeta_i)$ (which, under
regularity conditions, is equal to
$-w_i \, E[ \partial^2 \ell_i / (\partial \boldeta_i \,
\partial \boldeta_i^{\top})]$), giving rise to the Fisher~scoring
algorithm. Let $\bX=(\bix_1,\ldots,\bix_n)^{\top}$ be the usual
$n \times p$ (LM) model matrix obtained from the \texttt{formula}
argument of \texttt{vglm()}. Given $\biz_i$, $\bW_i$ and~$\bX{}$ at
the current IRLS iteration, a weighted multivariate regression is
performed. To do this, a \textit{vector linear model} (VLM) model
matrix $\bX_{\sVLM}$ is formed from~$\bX{}$ and~$\bH_k$ (see
Section~\ref{sec:wffc.appendixa.vgams}). This has $nM$~rows, and if
there are no constraints then $Mp$~columns.
Then $\left(\biz_1^{\top},\ldots,\biz_n^{\top}\right)^{\top}$ is
regressed upon $\bX_{\sVLM}$ with variance-covariance matrix
$\diag(\bW_1^{-1},\ldots,\bW_n^{-1})$. This system of linear
equations is converted to one large WLS fit by premultiplication of
the output of a Cholesky decomposition of the~$\bW_i$.

Fisher~scoring usually has good numerical stability because
the~$\bW_i$ are positive-definite over a larger region of parameter
space than Newton-Raphson. For the categorical models in this article
the expected information matrices are simpler than the observed
information matrices, and are easily derived, therefore all the
families in Table~\ref{tab:cat.quantities} implement Fisher~scoring.

\subsection{VGAMs and constraint matrices}
\label{sec:wffc.appendixa.vgams}

VGAMs provide additive-model extensions to VGLMs, that is,
(\ref{gammod2})~is generalized to
\begin{equation}
\eta_j(\bix)  ~=~  \beta_{(j)1} + \sum_{k=2}^p \; f_{(j)k}(x_k),
~~~~ j = 1,\ldots, M,
\label{addmod}
\end{equation}
a sum of smooth functions of the individual covariates, just as with
ordinary GAMs \citep{hast:tibs:1990}. The
$\bif_k = (f_{(1)k}(x_k),\ldots,f_{(M)k}(x_k))^{\top}$ are centered
for uniqueness, and are estimated simultaneously using
\textit{vector smoothers}. VGAMs are thus a visual data-driven method
that is well suited to exploring data, and they retain the simplicity
of interpretation that GAMs possess.

An important concept, especially for CDA, is the idea of
`constraints-on-the-functions'. In practice we often wish to
constrain the effect of a covariate to be the same for some of
the~$\eta_j$ and to have no effect for others. We shall see below
that this constraints idea is important for several categorical
models because of a popular parallelism assumption. As a specific
example, for VGAMs we may wish to take
\begin{eqnarray*}
\eta_1  & = &  \beta_{(1)1} + f_{(1)2}(x_2) + f_{(1)3}(x_3), \\
\eta_2  & = &  \beta_{(2)1} + f_{(1)2}(x_2),
\end{eqnarray*}
so that $f_{(1)2} \equiv f_{(2)2}$ and $f_{(2)3} \equiv 0$. For
VGAMs, we can represent these models using
\begin{eqnarray}
\boldeta(\bix)  & = &  \bbeta_{(1)} + \sum_{k=2}^p \, \bif_k(x_k)
\ =\ \bH_1 \, \bbeta_{(1)}^* + \sum_{k=2}^p \, \bH_k \, \bif_k^*(x_k)
\label{eqn:constraints.VGAM}
\end{eqnarray}
where $\bH_1,\bH_2,\ldots,\bH_p$ are known full-column rank
\textit{constraint matrices}, $\bif_k^*$ is a vector containing a
possibly reduced set of component functions and $\bbeta_{(1)}^*$ is a
vector of unknown intercepts. With no constraints at all,
$\bH_1 = \bH_2 = \cdots = \bH_p = \bI_M$ and
$\bbeta_{(1)}^* = \bbeta_{(1)}$. Like the $\bif_k$, the~$\bif_k^*$
are centered for uniqueness.

For VGLMs, the~$\bif_k$ are linear so that
\begin{eqnarray}
{\bB}^{\top}  &=&
\left( \bH_1 \bbeta_{(1)}^* \; \Bigg| \;
\bH_2 \bbeta_{(2)}^* \; \Bigg| \; \cdots \; \Bigg| \;
\bH_p \bbeta_{(p)}^* \right)
\label{eqn:lin.coefs4}
\end{eqnarray}
for some vectors $\bbeta_{(1)}^*,\ldots,\bbeta_{(p)}^*$.

The $\bX_{\sVLM}$ matrix is constructed from \bX{} and the $\bH_k$
using Kronecker product operations. For example, with trivial
constraints, $\bX_{\sVLM} = \bX \otimes \bI_M$.
More generally,
\begin{eqnarray}
\bX_{\sVLM}  &=&
\left(
\left( \bX \, \bie_{1} \right) \otimes \bH_1 \; \Bigg| \;
\left( \bX \, \bie_{2} \right) \otimes \bH_2 \; \Bigg| \;
\cdots \; \Bigg| \;
\left( \bX \, \bie_{p} \right) \otimes \bH_p
\right)
\label{eqn:X_vlm_Hk}
\end{eqnarray}
($\bie_{k}$ is a vector of zeros except for a one in the
$k$th~position) so that $\bX_{\sVLM}$ is $(nM) \times p^*$ where
$p^* = \sum_{k=1}^{p} \mbox{\textrm{ncol}}(\bH_k)$ is the total
number of columns of all the constraint matrices. Note that
$\bX_{\sVLM}$ and \bX{} can be obtained by
\texttt{model.matrix(vglmObject, type = "vlm")} and
\texttt{model.matrix(vglmObject, type = "lm")} respectively.
Equation~\ref{eqn:lin.coefs4} focusses on the rows of~\bB{} whereas
\ref{eq:lin.pred}~is on the columns.

VGAMs are estimated by applying a modified vector backfitting
algorithm \citep[cf.][]{buja:hast:tibs:1989} to the $\biz_i$.

\subsection{Vector splines and penalized likelihood}
\label{sec:ex.vspline}

If~(\ref{eqn:constraints.VGAM}) is estimated using a vector spline (a
natural extension of the cubic smoothing spline to vector responses)
then it can be shown that the resulting solution maximizes a
penalized likelihood; some details are sketched
in~\cite{yee:step:2007}. In fact, knot selection for vector splines
follows the same idea as O-splines \citep[see][]{wand:orme:2008} in
order to lower the computational cost.

The usage of \texttt{vgam()} with smoothing is very similar
to~\texttt{gam()} \citep{gam:pack:2009}, e.g., to fit a nonparametric
proportional odds model \citep[cf.~p.179 of][]{mccu:neld:1989} to the
pneumoconiosis data one could try
<<eval = FALSE>>=
pneumo <- transform(pneumo, let = log(exposure.time))
fit <- vgam(cbind(normal, mild, severe) ~ s(let, df = 2),
            cumulative(reverse = TRUE, parallel = TRUE), pneumo)
@
Here, setting \texttt{df = 1} means a linear fit so that
\texttt{df = 2} affords a little nonlinearity.

% ----------------------------------------------------------------------
\section[VGAM family functions]{\pkg{VGAM} family functions}
\label{sec:jsscat.vgamff}

This section summarizes and comments on the~\VGAM{} family functions
of Table~\ref{tab:cat.quantities} for a categorical response variable
taking values $Y=1,2,\ldots,M+1$. In its most basic invocation, the
usage entails a trivial change compared to \texttt{glm()}: use
\texttt{vglm()} instead and assign the \texttt{family} argument a
\VGAM{}~family function. The use of a \VGAM{}~family function to fit
a specific model is far simpler than having a different modeling
function for each model. Options specific to that model appear as
arguments of that \VGAM{}~family function.

While writing \texttt{cratio()} it was found that various authors
defined the quantity ``continuation ratio'' differently, therefore it
became necessary to define a ``stopping ratio''.
Table~\ref{tab:cat.quantities} defines these quantities for \VGAM{}.

The multinomial logit model is usually described by choosing the
first or last level of the factor to be baseline. \VGAM{}~chooses the
last level (Table~\ref{tab:cat.quantities}) by default, however that
can be changed to any other level by use of the \texttt{refLevel}
argument.

If the proportional odds assumption is inadequate then one strategy
is to try a different link function (see
Section~\ref{sec:jsscat.links} for a selection). Another alternative
is to add extra terms such as interaction terms into the linear
predictor
\citep[available in the \proglang{S}~language;][]{cham:hast:1993}.
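
For instance, dropping the parallelism and enlarging the linear
predictors with a quadratic term might be sketched as follows (not
run; the extra term is purely illustrative and its worth should be
checked, e.g., by a likelihood ratio test):
<<eval = FALSE>>=
pneumo <- transform(pneumo, let = log(exposure.time))
fit.x2 <- vglm(cbind(normal, mild, severe) ~ let + I(let^2),
               cumulative(reverse = TRUE, parallel = FALSE), pneumo)
coef(fit.x2, matrix = TRUE)  # Each eta_j now has its own coefficients
@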

Another is to fit the so-called \textit{partial} proportional odds
model \citep{pete:harr:1990} which \VGAM{} can fit via constraint
matrices.

In the terminology of~\cite{agre:2002}, \texttt{cumulative()} fits
the class of \textit{cumulative link models}, e.g.,
\texttt{cumulative(link = probit)} is a cumulative probit model. For
\texttt{cumulative()} it was difficult to decide whether
\texttt{parallel = TRUE} or \texttt{parallel = FALSE} should be the
default. In fact, the latter is (for now?). Users need to set
\texttt{cumulative(parallel = TRUE)} explicitly to fit a proportional
odds model---hopefully this will alert them to the fact that they are
making the proportional odds assumption and check its validity
(\cite{pete:1990}; e.g., through a deviance or likelihood ratio
test). However the default means numerical problems can occur with
far greater likelihood. Thus there is tension between the two
options. As a compromise there is now a \VGAM{} family function
called \texttt{propodds(reverse = TRUE)} which is equivalent to
\texttt{cumulative(parallel = TRUE, reverse = reverse, link = "logit")}.

By the way, note that arguments such as \texttt{parallel} can handle
a slightly more complex syntax. A call such as
\code{parallel = TRUE ~ x2 + x5 - 1} means the parallelism assumption
is only applied to~$X_2$ and~$X_5$. This might be equivalent to
something like \code{parallel = FALSE ~ x3 + x4}, i.e., to the
remaining explanatory variables.

% ----------------------------------------------------------------------
\section{Other models}
\label{sec:jsscat.othermodels}

Given the VGLM/VGAM framework of
Section~\ref{sec:jsscat.VGLMVGAMoverview} it is found that natural
extensions are readily proposed in several directions. This section
describes some such extensions.

\subsection{Reduced-rank VGLMs}
\label{sec:jsscat.RRVGLMs}

Consider a multinomial logit model where $p$ and $M$ are both large.
A (not-too-convincing) example might be the data frame
\texttt{vowel.test} in the package \pkg{ElemStatLearn}
\citep[see][]{hast:tibs:buja:1994}. The vowel recognition data set
involves $q=11$ symbols produced from 8~speakers with 6~replications
of each. The training data comprises $10$~input features (not
including the intercept) based on digitized utterances. A multinomial
logit model fitted to these data would have $\widehat{\bB}$
comprising $p \times (q-1) = 110$ regression coefficients for
$n=8\times 6\times 11 = 528$ observations. The ratio of $n$~to the
number of parameters is small, and it would be good to introduce some
parsimony into the model.

A simple and elegant solution is to represent $\widehat{\bB}$ by its
reduced-rank approximation. To do this, partition $\bix$ into
$(\bix_1^{\top}, \bix_2^{\top})^{\top}$ and
$\bB = (\bB_1^{\top} \; \bB_2^{\top})^{\top}$ so that the
reduced-rank regression is applied to~$\bix_2$. In general, \bB{} is
a dense matrix of full rank, i.e., rank $=\min(M,p)$, and since there
are $M \times p$ regression coefficients to estimate this is `too'
large for some models and/or data sets. If we approximate~$\bB_2$ by
a reduced-rank regression
\begin{equation}
\label{eq:rrr.BAC}
\bB_2  ~=~  \bC{} \, \bA^{\top}
\end{equation}
and if the rank~$R$ is kept low then this can cut down the number of
regression coefficients dramatically. If~$R=2$ then the results may
be biplotted (\texttt{biplot()} in \VGAM{}). Here, \bC{} and \bA{}
are $p_2 \times R$ and $M \times R$ respectively, and usually they
are `thin'.
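
For the vowel data this might look as follows (a sketch only, not
run; it assumes the \texttt{vowel.test} data frame has been obtained
from \pkg{ElemStatLearn}):
<<eval = FALSE>>=
library("ElemStatLearn")  # Assumed to supply vowel.test
vowel.test <- transform(vowel.test, y = factor(y))  # Class labels
rrfit <- rrvglm(y ~ ., multinomial, data = vowel.test, Rank = 2)
biplot(rrfit)  # A biplot is possible because Rank = 2
@
With a low rank far fewer than the original $110$ regression
coefficients need to be estimated.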

More generally, the class of \textit{reduced-rank VGLMs} (RR-VGLMs)
is simply a VGLM where~$\bB_2$ is expressed as a product of two thin
estimated matrices (Table~\ref{tab:rrvglam.jss.subset}). Indeed,
\cite{yee:hast:2003} show that RR-VGLMs are VGLMs with constraint
matrices that are unknown and estimated. Computationally, this is
done using an alternating method: in~(\ref{eq:rrr.BAC})
estimate~\bA{} given the current estimate of~\bC{}, and then
estimate~\bC{} given the current estimate of~\bA{}. This alternating
algorithm is repeated until convergence within each IRLS iteration.

Incidentally, special cases of RR-VGLMs have appeared in the
literature. For example, an RR-multinomial logit model is known as
the \textit{stereotype} model \citep{ande:1984}. Another is
\cite{good:1981}'s RC~model (see
Section~\ref{sec:jsscat.rrr.goodman}) which is a reduced-rank
multivariate Poisson model. Note that the parallelism assumption of
the proportional odds model \citep{mccu:neld:1989} can be thought of
as a type of reduced-rank regression where the constraint matrices
are thin ($\bone_M$, actually) and known.

The modeling function \texttt{rrvglm()} should work with any \VGAM{}
family function compatible with \texttt{vglm()}. Of course, its
applicability should be restricted to models where a reduced-rank
regression of~$\bB_2$ makes sense.

\subsection[Goodman's R x C association model]{Goodman's $R \times C$
association model}
\label{sec:jsscat.rrr.goodman}

Let~$\bY = [(y_{ij})]$ be an $n \times M$ matrix of counts.
Section~4.2 of~\cite{yee:hast:2003} shows that Goodman's~RC$(R)$
association model \citep{good:1981} fits within the VGLM framework by
setting up the appropriate indicator variables, structural zeros and
constraint matrices. Goodman's model fits a reduced-rank type model
to~\bY{} by firstly assuming that~$Y_{ij}$ has a Poisson
distribution, and that
\begin{eqnarray}
\log \, \mu_{ij}  &=&  \mu + \alpha_{i} + \gamma_{j} +
\sum_{k=1}^R a_{ik} \, c_{jk} ,
\ \ \ i=1,\ldots,n;\ \ j=1,\ldots,M,
\label{eqn:goodmanrc}
\end{eqnarray}
where $\mu_{ij} = E(Y_{ij})$ is the mean of the $i$-$j$ cell, and the
rank~$R$ satisfies $R < \min(n,M)$.

The modeling function \texttt{grc()} should work on any two-way
table~\bY{} of counts generated by~(\ref{eqn:goodmanrc}) provided the
number of 0's is not too large. Its usage is quite simple, e.g.,
\texttt{grc(Ymatrix, Rank = 2)} fits a rank-2 model to a matrix of
counts. By default a \texttt{Rank = 1} model is fitted.

\subsection{Bradley-Terry models}
\label{sec:jsscat.brat}

Consider an experiment consisting of $n_{ij}$ judges who compare
pairs of items $T_i$, $i=1,\ldots,M+1$. They express their
preferences between $T_i$ and $T_j$. Let
$N=\sum \sum_{i<j} n_{ij}$ be the total number of pairwise
comparisons, and assume independence for ratings of the same pair by
different judges and for ratings of different pairs by the same
judge. Let $\pi_i$ be the \textit{worth} of item~$T_i$,
$$
\pr(T_i > T_j)  ~=~  p_{i/ij}  ~=~  \frac{\pi_i}{\pi_i + \pi_j},
\ ~~~~~i \neq {j},
$$
where ``$T_i>T_j$'' means~$i$ is preferred over~$j$. Suppose that
$\pi_i > 0$. Let~$Y_{ij}$ be the number of times that $T_i$ is
preferred over~$T_j$ in the~$n_{ij}$ comparisons of the pairs.
Then~$Y_{ij} \sim {\rm Bin}(n_{ij},p_{i/ij})$. This is a
Bradley-Terry model (without ties), and the \VGAM{} family function
is~\texttt{brat()}.

Maximum likelihood estimation of the parameters
$\pi_1,\ldots,\pi_{M+1}$ involves maximizing
$$
\prod_{i<j}^{M+1}
\left( \frac{\pi_i}{\pi_i + \pi_j} \right)^{y_{ij}}
\left( \frac{\pi_j}{\pi_i + \pi_j} \right)^{n_{ij}-y_{ij}} .
$$
By default, $\pi_{M+1} \equiv 1$ is used for identifiability,
however, this can be changed very easily; \texttt{brat()} is used
with an intercept-only model so that $\eta_j = \log\, \pi_j$ for
$j=1,\ldots,M$. The function \texttt{Brat()} is useful for coercing
an $(M+1) \times (M+1)$ matrix of counts into a one-row matrix
suitable for input into \texttt{vglm()}. For example, with citation
counts between four statistics journals, if
<<>>=
journal <- c("Biometrika", "Comm.Statist", "JASA", "JRSS-B")
squaremat <- matrix(c(NA, 33, 320, 284, 730, NA, 813, 276,
                      498, 68, NA, 325, 221, 17, 142, NA), 4, 4)
dimnames(squaremat) <- list(winner = journal, loser = journal)
@
then \texttt{Brat(squaremat)} returns a~$1 \times 12$ matrix.
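
A minimal sketch of the ensuing fit (essentially the usage
illustrated in the \texttt{brat()} online help) is:
<<eval = FALSE>>=
bfit <- vglm(Brat(squaremat) ~ 1, brat, trace = TRUE)
Coef(bfit)  # Estimated worths pi_1,...,pi_M, with pi_{M+1} = 1
@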

\subsubsection{Bradley-Terry model with ties}
\label{sec:cat.bratt}

The \VGAM{} family function \texttt{bratt()} implements a
Bradley-Terry model with ties (no preference), e.g., where both $T_i$
and $T_j$ are equally good or bad. Here we assume
\begin{eqnarray*}
 \pr(T_i > T_j)  &=&  \frac{\pi_i}{\pi_i + \pi_j + \pi_0},
 \ ~~~~~
 \pr(T_i = T_j)  ~=~  \frac{\pi_0}{\pi_i + \pi_j + \pi_0},
\end{eqnarray*}
with $\pi_0 > 0$ as an extra parameter. It has
$$
\boldeta=(\log \pi_1,\ldots, \log \pi_{M-1}, \log \pi_{0})^{\top}
$$
by default, where there are $M$~competitors and $\pi_M \equiv 1$.
Like \texttt{brat()}, one can choose a different reference group and
reference value.

Other \R{}~packages for the Bradley-Terry model include
\pkg{BradleyTerry2} by H.~Turner and D.~Firth
\citep[with and without ties;][]{firth:2005,firth:2008} and
\pkg{prefmod} \citep{Hatzinger:2009}.

\begin{table}[tt]
\centering
\begin{tabular}[small]{|l|c|}
\hline
\pkg{VGAM} family function & Independent parameters \\
\hline
\texttt{ABO()} & $p, q$ \\
\texttt{MNSs()} & $m_S, m_s, n_S$ \\
\texttt{AB.Ab.aB.ab()} & $p$ \\
\texttt{AB.Ab.aB.ab2()} & $p$ \\
\texttt{AA.Aa.aa()} & $p_A$ \\
\texttt{G1G2G3()} & $p_1, p_2, f$ \\
\hline
\end{tabular}
\caption{Some genetic models currently implemented
and their unique parameters.
\label{tab:gen.all}
}
\end{table}

\subsection{Genetic models}
\label{sec:jsscat.genetic}

There are quite a number of population genetic models based on the
multinomial distribution, e.g., \cite{weir:1996}, \cite{lang:2002}.
Table~\ref{tab:gen.all} lists some \pkg{VGAM}~family functions for
such.

For example the ABO blood group system has two independent
parameters~$p$ and~$q$, say. Here, the blood groups A, B and O~form
six possible combinations (genotypes) consisting of AA, AO, BB, BO,
AB, OO (see Table~\ref{tab:ABO}). A and~B are dominant over
bloodtype~O. Let $p$, $q$ and $r$ be the probabilities for A, B
and~O respectively (so that $p+q+r=1$) for a given population. The
log-likelihood function is
\[
\ell(p,q) \;=\; n_A\, \log(p^2 + 2pr) + n_B\, \log(q^2 + 2qr) +
n_{AB}\, \log(2pq) + 2 n_O\, \log(1-p-q),
\]
where $r = 1 - p -q$, $p \in (\,0,1\,)$, $q \in (\,0,1\,)$, $p+q<1$.
We let $\boldeta = (g(p), g(r))^{\top}$ where $g$ is the link
function. Any~$g$ from Table~\ref{tab:jsscat.links} appropriate for a
parameter $\theta \in (0,1)$ will do.

A toy example where $p=p_A$ and $q=p_B$ is
<<>>=
abodat <- data.frame(A = 725, B = 258, AB = 72, O = 1073)
fit <- vglm(cbind(A, B, AB, O) ~ 1, ABO, abodat)
coef(fit, matrix = TRUE)
Coef(fit)  # Estimated pA and pB
@
The function \texttt{Coef()}, which applies only to intercept-only
models, applies to $g_{j}(\theta_{j})=\eta_{j}$ the inverse link
function $g_{j}^{-1}$ to~$\widehat{\eta}_{j}$ to
give~$\widehat{\theta}_{j}$.

\begin{table}[tt]
% Same as Table 14.1 of E-J, and Table 2.6 of Weir 1996
\begin{center}
\begin{tabular}{|l|cc|cc|c|c|}
\hline
Genotype    & AA    & AO    & BB    & BO    & AB    & OO    \\
Probability & $p^2$ & $2pr$ & $q^2$ & $2qr$ & $2pq$ & $r^2$ \\
Blood group & A     & A     & B     & B     & AB    & O     \\
\hline
\end{tabular}
\end{center}
\caption{Probability table for the ABO blood group system.
Note that $p$~and $q$~are the parameters and $r=1-p-q$.
\label{tab:ABO}
}
\end{table}

\subsection{Three main distributions}
\label{sec:jsscat.3maindist}

\cite{agre:2002} discusses three main distributions for categorical
variables: binomial, multinomial, and Poisson \citep{thom:2009}. All
these are well-represented in the \VGAM{} package, accompanied by
variant forms.
For example, there is a \VGAM{} family function named
\texttt{mbinomial()} which implements a matched-binomial (suitable
for matched case-control studies), Poisson ordination (useful in
ecology for multi-species-environmental data), negative binomial
families, positive and zero-altered and zero-inflated variants, and
the bivariate odds ratio model
\citep[\texttt{binom2.or()}; see Section~6.5.6 of][]{mccu:neld:1989}.
The latter has an \texttt{exchangeable} argument to allow for an
exchangeable error structure:
\begin{eqnarray}
\bH_1  ~=~
\left( \begin{array}{cc}
1 & 0 \\
1 & 0 \\
0 & 1 \\
\end{array} \right), ~~~~~
\bH_k  ~=~
\left( \begin{array}{c}
1 \\
1 \\
0 \\
\end{array} \right), ~~k=2,\ldots,p,
\label{eqn:blom.exchangeable}
\end{eqnarray}
since, for data $(Y_1,Y_2,\bix)$,
$\logit \, P\!\left( Y_{j} = 1 \Big{|} \bix \right) = \eta_{j}$ for
${j}=1,2$, and $\log \, \psi = \eta_{3}$ where $\psi$~is the odds
ratio, and so $\eta_{1}=\eta_{2}$. Here,
\texttt{binom2.or(zero = 3)} is the default, meaning $\psi$~is
modelled as an intercept-only (in general, \texttt{zero} may be
assigned an integer vector such that the value~$j$ means
$\eta_{j} = \beta_{(j)1}$, i.e., the $j$th~linear/additive predictor
is an intercept-only). See the online help for all of these models.

% ----------------------------------------------------------------------
\section{Some user-oriented topics}
\label{sec:jsscat.userTopics}

Making the most of \VGAM{} requires an understanding of the general
VGLM/VGAM framework described in
Section~\ref{sec:jsscat.VGLMVGAMoverview}. In this section we connect
elements of that framework with the software. Before doing so it is
noted that a fitted \VGAM{} categorical model has access to the usual
generic functions, e.g., \texttt{coef()} for
$\left(\widehat{\bbeta}_{(1)}^{*T},\ldots,\widehat{\bbeta}_{(p)}^{*T}\right)^{\top}$
(see Equation~\ref{eqn:lin.coefs4}), \texttt{constraints()} for
$\bH_k$, \texttt{deviance()} for
$2\left(\ell_{\mathrm{max}} - \ell\right)$, \texttt{fitted()} for
$\widehat{\bmu}_i$, \texttt{logLik()} for $\ell$, \texttt{predict()}
for $\widehat{\boldeta}_i$, \texttt{print()},
\texttt{residuals(..., type = "response")} for
$\biy_i - \widehat{\bmu}_i$ etc., \texttt{summary()}, \texttt{vcov()}
for $\widehat{\Var}(\widehat{\bbeta})$, etc. The methods function for
the extractor function \texttt{coef()} has an argument
\texttt{matrix} which, when set \texttt{TRUE},
returns~$\widehat{\bB}$ (see Equation~\ref{gammod}) as a
$p \times M$ matrix, and this is particularly useful for confirming
that a fit has made a parallelism assumption.

\subsection{Common arguments}
\label{sec:jsscat.commonArgs}

The structure of the unified framework given in
Section~\ref{sec:jsscat.VGLMVGAMoverview} appears clearly through the
pool of common arguments shared by the \VGAM{} family functions in
Table~\ref{tab:cat.quantities}. In particular, \texttt{reverse} and
\texttt{parallel} are prominent with CDA. These are merely convenient
shortcuts for the argument \texttt{constraints}, which accepts a
named list of constraint matrices~$\bH_k$. For example, setting
\texttt{cumulative(parallel = TRUE)} would constrain the coefficients
$\beta_{(j)k}$ in~(\ref{gammod2}) to be equal for all
$j=1,\ldots,M$, each separately for $k=2,\ldots,p$. That is,
$\bH_k = \bone_M$. The argument~\texttt{reverse} determines the
`direction' of the parameter or quantity.
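
For example, the $\bH_k$ of a parallel fit can be extracted and
checked directly (an illustrative sketch, not run, using the
\texttt{pneumo} data again):
<<eval = FALSE>>=
pneumo <- transform(pneumo, let = log(exposure.time))
fit <- vglm(cbind(normal, mild, severe) ~ let,
            cumulative(parallel = TRUE, reverse = TRUE), pneumo)
constraints(fit)  # H_1 = I_M for the intercept; H_2 = 1_M for "let"
coef(fit, matrix = TRUE)  # Both columns share the coefficient of "let"
@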
Another argument not so much used with CDA is~\texttt{zero}; this accepts a vector specifying which~$\eta_j$ are to be modelled as intercepts-only; assigning \texttt{NULL} means none are.

\subsection{Link functions}
\label{sec:jsscat.links}

Almost all \VGAM{} family functions (one notable exception is \texttt{multinomial()}) allow, in theory, any link function to be assigned to each~$\eta_j$. This provides maximum flexibility. Where a link function has an associated known parameter, there is an extra argument, \texttt{earg}, for passing it in. For example, \texttt{link = "logoff", earg = list(offset = 1)} signifies a log link with a unit offset: $\eta_{j} = \log(\theta_{j} + 1)$ for some parameter~$\theta_{j}\ (> -1)$. The name \texttt{earg} stands for ``extra argument''. Table~\ref{tab:jsscat.links} lists some links relevant to categorical data. While the default gives a reasonable first choice, users are encouraged to try different links. For example, fitting a binary regression model (\texttt{binomialff()}) to the coal miners data set \texttt{coalminers} with respect to the response wheeze gives a nonsignificant regression coefficient for $\beta_{(1)3}$~with probit analysis but not with a logit link when $\eta = \beta_{(1)1} + \beta_{(1)2} \, \mathrm{age} + \beta_{(1)3} \, \mathrm{age}^2$. Developers and serious users are encouraged to write and use new link functions compatible with~\VGAM.

\begin{table*}[tt]
\centering
\medskip
\begin{tabular}{|l|c|c|}
\hline
Link function & $g(\theta)$ & Range of $\theta$ \\
\hline
\texttt{cauchit()} & $\tan(\pi(\theta-\frac12))$ & $(0,1)$ \\
\texttt{cloglog()} & $\log_e\{-\log_e(1 - \theta)\}$ & $(0,1)$ \\
\texttt{fisherz()} & $\frac12\,\log_e\{(1 + \theta)/(1 - \theta)\}$ & $(-1,1)$ \\
\texttt{identity()} & $\theta$ & $(-\infty,\infty)$ \\
\texttt{logc()} & $\log_e(1 - \theta)$ & $(-\infty,1)$ \\
\texttt{loge()} & $\log_e(\theta)$ & $(0,\infty)$ \\
\texttt{logit()} & $\log_e(\theta/(1 - \theta))$ & $(0,1)$ \\
\texttt{logoff()} & $\log_e(\theta + A)$ & $(-A,\infty)$ \\
\texttt{probit()} & $\Phi^{-1}(\theta)$ & $(0,1)$ \\
\texttt{rhobit()} & $\log_e\{(1 + \theta)/(1 - \theta)\}$ & $(-1,1)$ \\
\hline
\end{tabular}
\caption{ Some \VGAM{} link functions pertinent to this article.
\label{tab:jsscat.links} }
\end{table*}

% ----------------------------------------------------------------------
\section{Examples}
\label{sec:jsscat.eg}

This section illustrates CDA modeling on three data sets in order to give a flavour of what is available in the package.

\subsection{2008 World Fly Fishing Championships}
\label{sec:jsscat.eg.WFFC}

The World Fly Fishing Championships (WFFC) is a prestigious catch-and-release competition held annually. In 2008 it was held in New~Zealand during the month of March. The data were released and appear in~\VGAM{} as the data frames \texttt{wffc}, \texttt{wffc.nc}, \texttt{wffc.indiv} and \texttt{wffc.teams}. Details about the competition are found in the online help, as well as in~\cite{yee:2010v}. Briefly, we will model the abundance of fish caught during each three-hour session amongst the 90 or so competitors (from about 19~countries) who fished all their sessions. There were five~sectors (locations) labelled I--V for the Whanganui~River, Lake~Otamangakau, Lake~Rotoaira, Waihou~River and Waimakariri~River, respectively. The sessions were sequentially labelled 1--6, where odd and even numbers denote morning and afternoon respectively. There were three consecutive days of fishing, during which each sector experienced a rest session.
\cite{yee:2010v} fitted Poisson and negative binomial regressions to the numbers caught at each competitor-session combination. The negative binomial regression had an intercept-only for its index parameter~$k$ and $\Var(Y) = \mu(1+\mu / k)$. Both models had the log-linear relationship
\begin{eqnarray}
\label{eq:wffc.use.loglinear}
\log \, \mu_{adsc} &=& \eta ~=~ \beta_{(1)1} + \alpha_{s} + \beta_{a} + \gamma_{d} + \delta_{c},
\end{eqnarray}
where $\mu = E(Y)$ is the mean number caught, $\beta_{(1)1}$~is the intercept, $\alpha_{s}$~are the sector effects for $s=1,\ldots,5$ sectors, $\delta_{c}$~are the ``competitor effects'' for $c=1,\ldots,91$ competitors (8~competitors who did not fish all~5 sessions were excluded), $\beta_{a}$~are the morning ($a=1$) and afternoon ($a=2$) effects, and $\gamma_{d}$~are the day effects for days $d=1,2,3$. Recall that for factors the first level is baseline, e.g., $\alpha_1=\beta_1=0$. Not used here is the variable $b=1,\ldots,19$ indexing which beat was fished or boat used (e.g., fixed locations on the river).

We will fit a proportional odds model with essentially the RHS of~(\ref{eq:wffc.use.loglinear}) as the linear predictor. Here is a peek at the data frame used. Each row of~\texttt{wffc.nc} is the number of captures by each sector-session-beat combination.
<<>>=
head(wffc.nc, 5)
@
We first process the data a little: create the regressor variables and restrict the analysis to anglers who fished all their sessions. Here, ``\texttt{nc}'' stands for numbers caught, and ``\texttt{f}'' stands for factor.
<<>>=
fnc <- transform(wffc.nc,
                 finame = factor(iname),
                 fsector = factor(sector),
                 fday = factor(ceiling(session / 2)),
                 mornaft = 1 - (session %% 2),
                 fbeatboat = factor(beatboat))
fnc <- fnc[with(fnc, !is.element(comid, c(99,72,80,93,45,71,97,78))),]
fnc <- transform(fnc, ordnum = ifelse(numbers <= 02, "few",
                                ifelse(numbers <= 10, "more", "most")))
fnc$ordnum <- ordered(fnc$ordnum, levels = c("few", "more", "most"))
@
The variable \texttt{ordnum} is ordinal with 3~levels. The cut-points chosen here were decided upon by manual inspection; they gave approximately the same numbers in each level:
<<>>=
with(fnc, table(ordnum))
@
Now we are in a position to fit a proportional odds model to mimic~(\ref{eq:wffc.use.loglinear}).
<<>>=
fit.pom <- vglm(ordnum ~ fsector + mornaft + fday + finame,
                family = cumulative(parallel = TRUE, reverse = TRUE),
                data = fnc)
@
Here, we set \texttt{reverse = TRUE} so that the coefficients have the same direction as a logistic regression: if a regression coefficient is positive then an increasing value of an explanatory variable is associated with an increasing value of the response. One could have used \texttt{family = propodds} instead.

Before interpreting some output, let's check that the input was entered correctly.
<<>>=
head(fit.pom@y, 3)
colSums(fit.pom@y)
@
The checking indicates no problems with the input. Now let's look at some output. Note that the Whanganui~River, mornings and Day~1 are the baseline levels of the factors. Also, the variable \texttt{mornaft} is 0~for mornings and 1~for afternoons. Likewise, the factor \texttt{fday} has values \texttt{1}, \texttt{2} and \texttt{3}.
<<>>=
head(coef(fit.pom, matrix = TRUE), 10)
#head(summary(fit.pom)@coef3, 10)  # Old now since 0.7-10 is nicer
@
verifies the parallelism assumption. Standard errors and Wald statistics may be obtained by
<<>>=
head(coef(summary(fit.pom)), 10)
@
Not surprisingly, these results agree with the Poisson and negative binomial regressions \citep[reported in][]{yee:2010v}.
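Because \texttt{reverse = TRUE} was used, the estimates may also be exponentiated and read as (cumulative) odds ratios. A minimal sketch (our own, not from the original analysis; the indexing assumes the first two elements of \texttt{coef(fit.pom)} are the intercepts, as in the output above):
\begin{Code}
round(exp(coef(fit.pom)[-(1:2)]), 2)  # Approximate cumulative odds ratios
\end{Code}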
The most glaring qualitative results are as follows. We use the rough rule of thumb that if the absolute value of the $t$~statistic is greater than~$2$ then it is `statistically significant'.
\begin{itemize}
\item The two lakes were clearly less productive than the rivers. However, neither of the other two rivers was significantly different from the Whanganui~River.
\item There is a noticeable day effect: the second day is not significantly different from the opening day, but the third day is. The decreasing values of the fitted coefficients show there is an increasing catch-reduction (fish depletion if it were catch-and-keep) as the competition progressed. Replacing \texttt{fday} by a variable~\texttt{day} and entering that linearly gave a $t$~statistic of~$-4.0$: there is a significant decrease in catch over time.
\item Mornings were more productive than afternoons. The $p$~value for this would be close to~5\%. This result is in line with the day effect: fishing often results in a `hammering' effect over time on fish populations, especially in small streams. Since the morning and afternoon sessions were fixed at 9.00am--12.00pm and 2.30--5.30pm daily, there were only $2\frac12$~hours for the fish to recover before the next angler arrived.
\end{itemize}

Let us check the proportional odds assumption with respect to the variable~\texttt{mornaft}.
<<>>=
fit.ppom <- vglm(ordnum ~ fsector + mornaft + fday + finame,
                 cumulative(parallel = FALSE ~ 1 + mornaft, reverse = TRUE),
                 data = fnc)
head(coef(fit.ppom, matrix = TRUE), 8)
@
As expected, all rows but~\texttt{(Intercept)} and~\texttt{mornaft} are identical due to the parallelism. Then
<<>>=
pchisq(deviance(fit.pom) - deviance(fit.ppom),
       df = df.residual(fit.pom) - df.residual(fit.ppom), lower.tail=FALSE)
@
gives a likelihood ratio test $p$~value which is non-significant. Repeating the testing for each variable separately indicates that the parallelism assumption seems reasonable here except with~\texttt{fday} ($p$~value $\approx 0.012$). For this variable we fit
<<>>=
fit2.ppom <- vglm(ordnum ~ fsector + mornaft + fday + finame,
                  family = cumulative(parallel = FALSE ~ 1 + fday, reverse = TRUE),
                  data = fnc)
head(coef(fit2.ppom, matrix = TRUE), 8)
@
Some miscellaneous output is as follows.
<<>>=
head(fitted(fit2.ppom), 3)
@
are the fitted probabilities $\widehat{P}(Y={j})$, which sum to unity for each row. The $i$th~row of
<<>>=
head(predict(fit2.ppom), 3)
@
is $\widehat{\boldeta}(\bix_i)^{\top}$. The dimensions of the LM and VLM design matrices are
<<>>=
dim(model.matrix(fit2.ppom, type = "lm"))
dim(model.matrix(fit2.ppom, type = "vlm"))
@
which shows the VLM matrix grows quickly with respect to~$M$. Lastly,
<<>>=
constraints(fit2.ppom)[c(1, 2, 5, 6)]
@
shows some of the constraint matrices, $\bH_1=\bI_2$ and $\bH_2=\bH_5=\bH_6=\bone_2$ (see Equations~\ref{eqn:constraints.VGAM}--\ref{eqn:lin.coefs4}).

\subsection{Marital status data}
\label{sec:jsscat.eg.mstatus}

We fit a nonparametric multinomial logit model to data from a self-administered questionnaire collected in a large New Zealand workforce observational study conducted during 1992--3. The data were augmented by a second study consisting of retirees. For homogeneity, this analysis is restricted to a subset of 6053 European males with no missing values. The ages ranged between~16 and 88~years. The data can be considered a reasonable representation of the white male New Zealand population in the early 1990s, and are detailed in~\cite{macm:etal:1995} and~\cite{yee:wild:1996}.
We are interested in exploring how $Y=$ marital status varies as a function of $x_2=$ age. The nominal response~$Y$ has four levels; in sorted order, they are divorced or separated, married or partnered, single and widowed. We will write these levels as $Y=1$, $2$, $3$, $4$, respectively, and will choose the married/partnered (second level) as the reference group because the other levels emanate directly from it.

Suppose the data is in a data frame called \texttt{marital.nz} and looks like
<<>>=
head(marital.nz, 4)
summary(marital.nz)
@
We fit the VGAM
<<>>=
fit.ms <- vgam(mstatus ~ s(age, df = 3), multinomial(refLevel = 2),
               data = marital.nz)
@
Once again, let's first check the input.
<<>>=
head(fit.ms@y, 4)
colSums(fit.ms@y)
@
This seems OK.

Now the estimated component functions $\widehat{f}_{(s)2}(x_2)$ may be plotted with
<<>>=
# Plot output
mycol <- c("red","darkgreen","blue")
par(mfrow=c(2,2))
plot(fit.ms, se=TRUE, scale=12,
     lcol=mycol, scol=mycol)

# Plot output overlaid
#par(mfrow=c(1,1))
plot(fit.ms, se=TRUE, scale=12, overlay=TRUE,
     llwd=2, lcol=mycol, scol=mycol)
@
to produce Figure~\ref{fig:jsscat.eg.mstatus}. The \texttt{scale} argument is used here to ensure that the $y$-axes have a common scale---this makes comparisons between the component functions less susceptible to misinterpretation. The first three plots are the (centered) $\widehat{f}_{(s)2}(x_2)$ for $\eta_1$, $\eta_2$, $\eta_3$, where
\begin{eqnarray}
\label{eq:jsscat.eg.nzms.cf}
\eta_{s} ~=~ \log(\pr(Y={t}) / \pr(Y={2})) ~=~ \beta_{(s)1} + f_{(s)2}(x_2),
\end{eqnarray}
$(s,t) = (1,1), (2,3), (3,4)$, and~$x_2$ is~\texttt{age}. The last plot shows the smooths overlaid to aid comparison.

It may be seen that the $\pm 2$ standard error bands about the \texttt{Widowed} group are particularly wide at young ages because of a paucity of data, and likewise at old ages amongst the \texttt{Single}s. The $\widehat{f}_{(s)2}(x_2)$ appear as one would expect. The log relative risk of being single relative to being married/partnered drops sharply from ages~16 to~40. The fitted function for the~\texttt{Widowed} group increases with~\texttt{age} and looks reasonably linear. The $\widehat{f}_{(1)2}(x_2)$ suggests a possible maximum around 50~years old---this could indicate the greatest marital conflict occurs during the mid-life crisis years!

\setkeys{Gin}{width=0.9\textwidth} % 0.8 is the current default
\begin{figure}[tt]
\begin{center}
<<fig = TRUE>>=
# Plot output
mycol <- c("red","darkgreen","blue")
par(mfrow=c(2,2))
par(mar=c(4.2,4.0,1.2,2.2)+0.1)
plot(fit.ms, se=TRUE, scale=12,
     lcol=mycol, scol=mycol)

# Plot output overlaid
#par(mfrow=c(1,1))
plot(fit.ms, se=TRUE, scale=12, overlay=TRUE,
     llwd=2, lcol=mycol, scol=mycol)
@
\caption{ Fitted (and centered) component functions $\widehat{f}_{(s)2}(x_2)$ from the NZ marital status data (see Equation~\ref{eq:jsscat.eg.nzms.cf}). The bottom RHS plot shows the smooths overlaid.
\label{fig:jsscat.eg.mstatus} }
\end{center}
\end{figure}
\setkeys{Gin}{width=0.8\textwidth} % 0.8 is the current default

The methods function for~\texttt{plot()} can also plot the derivatives of the smooths. The call
<<>>=
plot(fit.ms, deriv=1, lcol=mycol, scale=0.3)
@
results in Figure~\ref{fig:jsscat.eg.mstatus.cf.deriv}. Once again the $y$-axis scales are commensurate.
\setkeys{Gin}{width=\textwidth} % 0.8 is the current default
\begin{figure}[tt]
\begin{center}
<<fig = TRUE>>=
# Plot output
par(mfrow=c(1,3))
par(mar=c(4.5,4.0,0.2,2.2)+0.1)
plot(fit.ms, deriv=1, lcol=mycol, scale=0.3)
@
\caption{ Estimated first derivatives of the component functions, $\widehat{f'}_{(s)2}(x_2)$, from the NZ marital status data (see Equation~\ref{eq:jsscat.eg.nzms.cf}).
\label{fig:jsscat.eg.mstatus.cf.deriv} }
\end{center}
\end{figure}
\setkeys{Gin}{width=0.8\textwidth} % 0.8 is the current default

The derivative for the \texttt{Divorced/Separated} group appears linear, so that a quadratic component function could be tried. Not surprisingly the \texttt{Single} group shows the greatest change; also, $\widehat{f'}_{(2)2}(x_2)$ is approximately linear till~50 and then flat---this suggests one could fit a piecewise quadratic function to model that component function up to 50~years. The~\texttt{Widowed} group appears largely flat. We thus fit the parametric model
<<>>=
foo <- function(x, elbow=50) poly(pmin(x, elbow), 2)
clist <- list("(Intercept)" = diag(3),
              "poly(age, 2)" = rbind(1, 0, 0),
              "foo(age)" = rbind(0, 1, 0),
              "age" = rbind(0, 0, 1))
fit2.ms <- vglm(mstatus ~ poly(age, 2) + foo(age) + age,
                family = multinomial(refLevel = 2),
                constraints = clist,
                data = marital.nz)
@
Then
<<>>=
coef(fit2.ms, matrix = TRUE)
@
confirms that one term was used for each component function. The plots from
<<>>=
par(mfrow=c(2,2))
plotvgam(fit2.ms, se = TRUE, scale = 12,
         lcol = mycol[1], scol = mycol[1], which.term = 1)
plotvgam(fit2.ms, se = TRUE, scale = 12,
         lcol = mycol[2], scol = mycol[2], which.term = 2)
plotvgam(fit2.ms, se = TRUE, scale = 12,
         lcol = mycol[3], scol = mycol[3], which.term = 3)
@
are given in Figure~\ref{fig:jsscat.eg.mstatus.vglm} and appear similar to Figure~\ref{fig:jsscat.eg.mstatus}.

\setkeys{Gin}{width=0.9\textwidth} % 0.8 is the current default
\begin{figure}[tt]
\begin{center}
<<fig = TRUE>>=
# Plot output
par(mfrow=c(2,2))
par(mar=c(4.5,4.0,1.2,2.2)+0.1)
plotvgam(fit2.ms, se = TRUE, scale = 12,
         lcol = mycol[1], scol = mycol[1], which.term = 1)
plotvgam(fit2.ms, se = TRUE, scale = 12,
         lcol = mycol[2], scol = mycol[2], which.term = 2)
plotvgam(fit2.ms, se = TRUE, scale = 12,
         lcol = mycol[3], scol = mycol[3], which.term = 3)
@
\caption{ Parametric version of~\texttt{fit.ms}: \texttt{fit2.ms}. The component functions are now quadratic, piecewise quadratic/zero, or linear.
\label{fig:jsscat.eg.mstatus.vglm} }
\end{center}
\end{figure}
\setkeys{Gin}{width=0.8\textwidth} % 0.8 is the current default

It is possible to perform very crude inference based on the heuristic theory of a deviance test:
<<>>=
deviance(fit.ms) - deviance(fit2.ms)
@
is small, so it seems the parametric model is quite reasonable against the original nonparametric model. Specifically, the difference in the number of `parameters' is approximately
<<>>=
(dfdiff <- df.residual(fit2.ms) - df.residual(fit.ms))
@
which gives an approximate $p$~value of
<<>>=
1-pchisq(deviance(fit.ms) - deviance(fit2.ms), df=dfdiff)
@
Thus \texttt{fit2.ms} appears quite reasonable.
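Since this deviance comparison mirrors the one used for \texttt{fit.pom} versus \texttt{fit.ppom} earlier, it may be wrapped up as a helper. A minimal sketch (\texttt{lrt.approx()} is our own function and not part of \VGAM):
\begin{Code}
lrt.approx <- function(fit0, fit1) {
  # fit0 is the more restricted fit, fit1 the more flexible one
  dev.diff <- deviance(fit0) - deviance(fit1)
  df.diff  <- df.residual(fit0) - df.residual(fit1)
  c(stat = dev.diff, df = df.diff,
    p.value = pchisq(dev.diff, df.diff, lower.tail = FALSE))
}
lrt.approx(fit.pom, fit.ppom)  # Reproduces the earlier WFFC test
\end{Code}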
The estimated probabilities of the original fit can be plotted against~\texttt{age} using
<<>>=
ooo <- with(marital.nz, order(age))
with(marital.nz, matplot(age[ooo], fitted(fit.ms)[ooo,],
     type="l", las=1, lwd=2, ylim=0:1,
     ylab="Fitted probabilities",
     xlab="Age",  # main="Marital status amongst NZ Male Europeans",
     col=c(mycol[1], "black", mycol[-1])))
legend(x=52.5, y=0.62,  # x="topright",
       col=c(mycol[1], "black", mycol[-1]),
       lty=1:4,
       legend=colnames(fit.ms@y), lwd=2)
abline(v=seq(10,90,by=5), h=seq(0,1,by=0.1), col="gray", lty="dashed")
@
which gives Figure~\ref{fig:jsscat.eg.mstatus.fitted}. This shows that 80--90\%~of NZ white males aged between their early~30s and mid-70s were married/partnered. The proportion widowed started to rise steeply from 70~years onwards but remained below~0.5 since males die younger than females on average.

\setkeys{Gin}{width=0.8\textwidth} % 0.8 is the current default
\begin{figure}[tt]
\begin{center}
<<fig = TRUE>>=
par(mfrow=c(1,1))
par(mar=c(4.5,4.0,0.2,0.2)+0.1)
ooo <- with(marital.nz, order(age))
with(marital.nz, matplot(age[ooo], fitted(fit.ms)[ooo,],
     type="l", las=1, lwd=2, ylim=0:1,
     ylab="Fitted probabilities",
     xlab="Age",
     col=c(mycol[1], "black", mycol[-1])))
legend(x=52.5, y=0.62,
       col=c(mycol[1], "black", mycol[-1]),
       lty=1:4,
       legend=colnames(fit.ms@y), lwd=2.1)
abline(v=seq(10,90,by=5), h=seq(0,1,by=0.1), col="gray", lty="dashed")
@
\caption{ Fitted probabilities for each class for the NZ male European marital status data (from Equation~\ref{eq:jsscat.eg.nzms.cf}).
\label{fig:jsscat.eg.mstatus.fitted} }
\end{center}
\end{figure}
\setkeys{Gin}{width=0.8\textwidth} % 0.8 is the current default

\subsection{Stereotype model}
\label{sec:jsscat.eg.grc.stereotype}

We reproduce some of the analyses of \cite{ande:1984} regarding the progress of 101~patients with back pain using the data frame \texttt{backPain} from \pkg{gnm} \citep{Rnews:Turner+Firth:2007,Turner+Firth:2009}. The three prognostic variables are length of previous attack ($x_1=1,2$), pain change ($x_2=1,2,3$) and lordosis ($x_3=1,2$). Following Anderson, we treat these as numerical, and standardize and negate them.
%
The output
<<>>=
# Scale the variables? Yes; the Anderson (1984) paper did (see his Table 6).
head(backPain, 4)
summary(backPain)
backPain <- transform(backPain, sx1 = -scale(x1), sx2 = -scale(x2), sx3 = -scale(x3))
@
displays the six ordered categories. Now a rank-1 stereotype model can be fitted with
<<>>=
bp.rrmlm1 <- rrvglm(pain ~ sx1 + sx2 + sx3, multinomial, backPain)
@
Then
<<>>=
Coef(bp.rrmlm1)
@
are the fitted \bA, \bC{} and~$\bB_1$ (see Equation~\ref{eq:rrr.BAC} and Table~\ref{tab:rrvglam.jss.subset}), which agree with his Table~6. Here, what are known as ``corner constraints'' are used ($(1,1)$ element of \bA{} $\equiv 1$), and only the intercepts are not subject to any reduced-rank regression by default. The maximized log-likelihood from \textsl{\texttt{logLik(bp.rrmlm1)}} is $\Sexpr{round(logLik(bp.rrmlm1), 2)}$. The standard errors of each parameter can be obtained by \textsl{\texttt{summary(bp.rrmlm1)}}. The negative elements of~$\widehat{\bC}$ imply the latent variable~$\widehat{\nu}$ decreases in value with increasing \textsl{\texttt{sx1}}, \textsl{\texttt{sx2}} and \textsl{\texttt{sx3}}. The elements of~$\widehat{\bA}$ tend to decrease, which suggests patients get worse as $\nu$~increases, i.e., get better as \textsl{\texttt{sx1}}, \textsl{\texttt{sx2}} and \textsl{\texttt{sx3}} increase.
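The patient scores on the latent variable can also be inspected directly. A small sketch (our own, using the \texttt{lv()} extractor that reappears for the rank-2 fit below):
\begin{Code}
head(lv(bp.rrmlm1), 4)  # nuhat_i = Chat^T x_i for the first few patients
\end{Code}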
<<echo = FALSE>>=
set.seed(123)
@
A rank-2 model fitted \textit{with a different normalization}
<<>>=
bp.rrmlm2 <- rrvglm(pain ~ sx1 + sx2 + sx3, multinomial, backPain,
                    Rank = 2, Corner = FALSE, Uncor = TRUE)
@
produces uncorrelated $\widehat{\bnu}_i = \widehat{\bC}^{\top} \bix_{2i}$. In fact \textsl{\texttt{var(lv(bp.rrmlm2))}} equals $\bI_2$, so that the latent variables are also scaled to have unit variance. The fit was biplotted (rows of $\widehat{\bC}$ plotted as arrows; rows of $\widehat{\bA}$ plotted as labels) using
<<>>=
biplot(bp.rrmlm2, Acol="blue", Ccol="darkgreen", scores=TRUE,
#      xlim=c(-1,6), ylim=c(-1.2,4),  # Use this if not scaled
       xlim=c(-4.5,2.2), ylim=c(-2.2, 2.2),  # Use this if scaled
       chull=TRUE, clty=2, ccol="blue")
@
to give Figure~\ref{fig:jsscat.eg.rrmlm2.backPain}. It is interpreted via inner products due to~(\ref{eq:rrr.BAC}). The different normalization means that the interpretation of~$\nu_1$ and~$\nu_2$ has changed, e.g., increasing \textsl{\texttt{sx1}}, \textsl{\texttt{sx2}} and \textsl{\texttt{sx3}} results in increasing $\widehat{\nu}_1$, and patients improve more. Many of the latent variable points $\widehat{\bnu}_i$ are coincident due to the discrete nature of the~$\bix_i$. The rows of~$\widehat{\bA}$ are centered on the blue labels (rather cluttered unfortunately) and do not seem to vary much as a function of~$\nu_2$. In fact this is confirmed by~\cite{ande:1984}, who showed that a rank-1 model is to be preferred. This example demonstrates the ability to obtain a low-dimensional view of higher-dimensional data. The package's website has additional documentation, including more detailed Goodman's~RC and stereotype examples.

\setkeys{Gin}{width=0.8\textwidth} % 0.8 is the current default
\begin{figure}[tt]
\begin{center}
<<fig = TRUE>>=
# Plot output
par(mfrow=c(1,1))
par(mar=c(4.5,4.0,0.2,2.2)+0.1)
biplot(bp.rrmlm2, Acol="blue", Ccol="darkgreen", scores=TRUE,
#      xlim=c(-1,6), ylim=c(-1.2,4),  # Use this if not scaled
       xlim=c(-4.5,2.2), ylim=c(-2.2, 2.2),  # Use this if scaled
       chull=TRUE, clty=2, ccol="blue")
@
\caption{ Biplot of a rank-2 reduced-rank multinomial logit (stereotype) model fitted to the back pain data. A convex hull surrounds the latent variable scores $\widehat{\bnu}_i$ (whose observation numbers are obscured because of their discrete nature). The position of the $j$th~row of~$\widehat{\bA}$ is the center of the label ``\texttt{log(mu[,j]/mu[,6])}''.
\label{fig:jsscat.eg.rrmlm2.backPain} }
\end{center}
\end{figure}
\setkeys{Gin}{width=0.8\textwidth} % 0.8 is the current default

% ----------------------------------------------------------------------
\section{Some implementation details}
\label{sec:jsscat.implementDetails}

This section describes some implementation details of~\VGAM{} which will be of more interest to the developer than to the casual user.

\subsection{Common code}
\label{sec:jsscat.implementDetails.code}

It is good programming practice to write reusable code where possible. All the \VGAM{}~family functions in Table~\ref{tab:cat.quantities} process the response in the same way because the same segment of code is executed. This offers a degree of uniformity in terms of how input is handled, and also aids software maintenance (\cite{altm:jack:2010} enumerates good programming techniques and references). As well, the default initial values are computed in the same manner, based on sample proportions of each level of~$Y$.
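One practical consequence of this uniformity is that switching between the models of Table~\ref{tab:cat.quantities} usually amounts to changing only the \texttt{family} argument. A sketch (our own, reusing \texttt{fnc} from Section~\ref{sec:jsscat.eg.WFFC}):
\begin{Code}
fit.acat <- vglm(ordnum ~ fsector + mornaft + fday + finame,
                 family = acat(reverse = TRUE), data = fnc)
fit.crat <- vglm(ordnum ~ fsector + mornaft + fday + finame,
                 family = cratio(reverse = TRUE), data = fnc)
\end{Code}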
\subsection[Matrix-band format of wz]{Matrix-band format of \texttt{wz}}
\label{sec:jsscat.implementDetails.mbformat}

The working weight matrices $\bW_i$ may become large for categorical regression models. In general, we have to evaluate the~$\bW_i$ for~$i=1,\ldots,n$, and naively, this could be held in an \texttt{array} of dimension~\texttt{c(M, M, n)}. However, since the~$\bW_i$ are symmetric positive-definite it suffices to store only the upper or lower half of each matrix.

The variable~\texttt{wz} in \texttt{vglm.fit()} stores the working weight matrices $\bW_i$ in a special format called the \textit{matrix-band} format. This format comprises an $n \times M^*$ matrix where
$$ M^* ~=~ \sum_{i=1}^{\footnotesize \textit{hbw}} \; \left(M-i+1\right) ~=~
\frac12 \, \textit{hbw}\, \left(2\,M - \textit{hbw} +1\right) $$
is the number of columns. Here, \textit{hbw} refers to the \textit{half-bandwidth} of the matrix, which is an integer between~1 and~$M$ inclusive. A diagonal matrix has unit half-bandwidth, a tridiagonal matrix has half-bandwidth~2, etc.

Suppose $M=4$. Then \texttt{wz} will have up to $M^*=10$ columns enumerating the unique elements of~$\bW_i$ as follows:
\begin{eqnarray}
\bW_i ~=~
\left( \begin{array}{rrrr}
1 & 5 & 8 & 10 \\
  & 2 & 6 & 9 \\
  &   & 3 & 7 \\
  &   &   & 4
\end{array} \right).
\label{eqn:hbw.eg}
\end{eqnarray}
That is, the order is firstly the diagonal, then the band above that, followed by the second band above the diagonal, etc. Why is such a format adopted? For this example, if $\bW_i$ is diagonal then only the first 4 columns of \texttt{wz} are needed. If $\bW_i$ is tridiagonal then only the first~7 columns of \texttt{wz} are needed. If $\bW_i$ \textit{is} banded then \texttt{wz} need not have all $\frac12 M(M+1)$ columns; only~$M^*$ columns suffice, and the rest of the elements of~$\bW_i$ are implicitly zero. As well as reducing the size of \texttt{wz} itself in most cases, the matrix-band format often makes the computation of \texttt{wz} very simple and efficient. Furthermore, a Cholesky decomposition of a banded matrix will be banded. A final reason is that sometimes we want to input~$\bW_i$ into \VGAM: if \texttt{wz} is $M \times M \times n$ then \texttt{vglm(\ldots, weights = wz)} will result in an error, whereas it will work if \texttt{wz} is an $n \times M^*$ matrix.

To facilitate the use of the matrix-band format, a few auxiliary functions have been written. In particular, there is \texttt{iam()} which gives the indices for an array-to-matrix mapping. In the $4\times 4$ example above,
<<>>=
iam(NA, NA, M = 4, both = TRUE, diag = TRUE)
@
returns the indices for the respective array coordinates for successive columns of matrix-band format (see Equation~\ref{eqn:hbw.eg}). If \texttt{diag = FALSE} then the first~4 elements in each vector are omitted. Note that the first two arguments of \texttt{iam()} are not used here and have been assigned \texttt{NA}s for simplicity. For its use on the multinomial logit model, where $(\bW_i)_{jj} = w_i\,\mu_{ij} (1-\mu_{ij}),\ j=1,\ldots,M$, and $(\bW_i)_{jk} = -w_i\,\mu_{ij} \mu_{ik},\ j\neq k$, this can be programmed succinctly like
\begin{Code}
wz <- mu[, 1:M] * (1 - mu[, 1:M])
if (M > 1) {
  index <- iam(NA, NA, M = M, both = TRUE, diag = FALSE)
  wz <- cbind(wz, -mu[, index$row] * mu[, index$col])
}
wz <- w * wz
\end{Code}
(the actual code is slightly more complicated).
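As a further illustration of the format, here is a sketch (our own code, not taken from the package internals) that uses \texttt{iam()} to pack an $M \times M \times n$ array of symmetric matrices into an $n \times M^{*}$ matrix-band matrix with full half-bandwidth, e.g., in preparation for \texttt{vglm(\ldots, weights = wz)}:
\begin{Code}
array2wz <- function(warray) {
  M <- dim(warray)[1]
  ind <- iam(NA, NA, M = M, both = TRUE, diag = TRUE)
  # Column s of the result holds element (row[s], col[s]) of each W_i
  sapply(seq_along(ind$row), function(s) warray[ind$row[s], ind$col[s], ])
}
\end{Code}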
In general, \VGAM{}~family functions can be remarkably compact, e.g., \texttt{acat()}, \texttt{cratio()} and \texttt{multinomial()} are all less than 120~lines of code each.

% ----------------------------------------------------------------------
\section{Extensions and utilities}
\label{sec:jsscat.extnUtil}

This section describes some useful utilities/extensions of the above.

\subsection{Marginal effects}
\label{sec:jsscat.extnUtil.margeff}

Models such as the multinomial logit and cumulative link models specify the posterior probability $p_{j} = \pr(Y=j|\bix)$ directly. In some applications, knowing the derivative of~$p_{j}$ with respect to some of the~$x_k$ is useful; in fact, often just knowing the sign is important. The function \texttt{margeff()} computes the derivatives and returns them as a $p \times (M+1) \times n$ array. For the multinomial logit model it is easy to show
\begin{eqnarray}
\frac{\partial \, p_{j}(\bix_i)}{\partial \, \bix_{i}} &=&
p_{j}(\bix_i) \left\{ \bbeta_{j} - \sum_{s=1}^{M+1} p_{s}(\bix_i) \, \bbeta_{s} \right\},
\label{eqn:multinomial.marginalEffects}
\end{eqnarray}
while for \texttt{cumulative(reverse = FALSE)} we have $p_{j} = \gamma_{j} - \gamma_{j-1} = h(\eta_{j}) - h(\eta_{j-1})$ where $h=g^{-1}$ is the inverse of the link function (cf.~Table~\ref{tab:cat.quantities}) so that
\begin{eqnarray}
\frac{\partial \, p_{j}(\bix_{})}{\partial \, \bix} &=&
h'(\eta_{j}) \, \bbeta_{j} - h'(\eta_{j-1}) \, \bbeta_{j-1} .
\label{eqn:cumulative.marginalEffects}
\end{eqnarray}
The function \texttt{margeff()} returns an array with these derivatives and should handle any value of \texttt{reverse} and \texttt{parallel}.

% ----------------------------------------------------------------------
\subsection[The xij argument]{The \texttt{xij} argument}
\label{sec:jsscat.extnUtil.xij}

There are many models, including those for categorical data, where the value of an explanatory variable~$x_k$ differs depending on which linear/additive predictor~$\eta_{j}$ it appears in. Here is a well-known example from consumer choice modeling. Suppose an econometrician is interested in people's choice of transport for travelling to work and that there are four choices: $Y=1$ for ``bus'', $Y=2$ for ``train'', $Y=3$ for ``car'' and $Y=4$ for ``walking''. Assume that people choose only one means of travelling to work. Suppose there are three covariates: $X_2=$ cost, $X_3=$ journey time, and $X_4=$ distance. Of the covariates, only~$X_4$ (and the intercept~$X_1$) is the same for all transport choices; the cost and journey time differ according to the means chosen. Suppose a random sample of~$n$ people is collected from some population, and that each person has access to all these transport modes. For such data, a natural regression model would be a multinomial logit model with~$M=3$: for $j=1,\ldots,M$, we have $\eta_{j} =$
\begin{eqnarray}
\log \frac{\pr(Y=j)}{\pr(Y=M+1)} &=&
\beta_{(j)1}^{*} +
\beta_{(1)2}^{*} \, (x_{i2j}-x_{i24}) +
\beta_{(1)3}^{*} \, (x_{i3j}-x_{i34}) +
\beta_{(1)4}^{*} \, x_{i4},
\label{eqn:xij.eg.gotowork}
\end{eqnarray}
where, for the~$i$th person, $x_{i2j}$ is the cost for the~$j$th transport means, and $x_{i3j}$ is the journey time of the~$j$th transport means. The distance to get to work is $x_{i4}$; it has the same value regardless of the transport means. Equation~\ref{eqn:xij.eg.gotowork} implies $\bH_1=\bI_3$ and $\bH_2=\bH_3=\bH_4=\bone_3$.
Note also that if the last response category is used as the baseline or reference group (the default of \texttt{multinomial()}) then $x_{ik,M+1}$ can be subtracted from $x_{ikj}$ for~$j=1,\ldots,M$---this is the natural way $x_{ik,M+1}$ enters into the model.

Recall from~(\ref{gammod2}) that we had
\begin{equation}
\eta_j(\bix_i) ~=~ \bbeta_j^{\top} \bix_i ~=~ \sum_{k=1}^{p} \, x_{ik} \, \beta_{(j)k} .
\label{eqn:xij0}
\end{equation}
Importantly, this can be generalized to
\begin{equation}
\eta_j(\bix_{ij}) ~=~ \bbeta_j^{\top} \bix_{ij} ~=~ \sum_{k=1}^{p} \, x_{ikj} \, \beta_{(j)k} ,
\label{eqn:xij}
\end{equation}
or writing this another way (as a mixture or hybrid),
\begin{equation}
\eta_j(\bix_{i}^{*},\bix_{ij}^{*}) ~=~ \bbeta_{j}^{*T} \bix_{i}^{*} + \bbeta_{j}^{**T} \bix_{ij}^{*} .
\label{eqn:xij2}
\end{equation}
Often $\bbeta_{j}^{**} = \bbeta^{**}$, say. In~(\ref{eqn:xij2}) the variables in~$\bix_{i}^{*}$ are common to all~$\eta_{j}$, and the variables in~$\bix_{ij}^{*}$ have different values for differing~$\eta_{j}$. This allows for covariate values that are specific to each~$\eta_j$, a facility which is very important in many applications.

The use of the \texttt{xij} argument with the \VGAM{} family function \texttt{multinomial()} has very important applications in economics. In that field the term ``multinomial logit model'' includes a variety of models such as the ``generalized logit model'' where (\ref{eqn:xij0}) holds, the ``conditional logit model'' where~(\ref{eqn:xij}) holds, and the ``mixed logit model,'' which is a combination of the two, where~(\ref{eqn:xij2}) holds. The generalized logit model focusses on the individual as the unit of analysis, and uses individual characteristics as explanatory variables, e.g., age of the person in the transport example. The conditional logit model uses explanatory variables that take different values for each alternative, and the impact of a unit change in~$x_k$ is assumed to be constant across alternatives, e.g., journey time in the choice of transport mode. Unfortunately, there is confusion in the literature over the terminology of these models. Some authors call \texttt{multinomial()} with~(\ref{eqn:xij0}) the ``generalized logit model''. Others call the mixed logit model the ``multinomial logit model'' and view the generalized logit and conditional logit models as special cases. In~\VGAM{} terminology there is no need to give different names to all these slightly differing special cases. They are all still called multinomial logit models, although it may be added that there are some covariate-specific linear/additive predictors. The important thing is that the framework accommodates~$\bix_{ij}$, so one tries to avoid making life unnecessarily complicated. And~\texttt{xij} can apply in theory to any VGLM and not just to the multinomial logit model. \cite{imai:king:lau:2008} present another perspective on the $\bix_{ij}$ problem with illustrations from \pkg{Zelig} \citep{Zelig:2009}.

\subsubsection[Using the xij argument]{Using the \texttt{xij} argument}
\label{sec:xij.sub}

\VGAM{} handles variables whose values depend on~$\eta_{j}$, as in~(\ref{eqn:xij2}), using the \texttt{xij} argument. It is assigned an \proglang{S}~formula or a list of \proglang{S}~formulas. Each formula, which must have~$M$ \textit{different} terms, forms a matrix that premultiplies a constraint matrix.
In detail, (\ref{eqn:xij0})~can be written in vector form as
\begin{equation}
\boldeta(\bix_i) ~=~ \bB^{\top} \bix_i ~=~ \sum_{k=1}^{p} \, \bH_{k} \, \bbeta_{k}^{*} \, x_{ik},
\label{eqn:xij0.vector}
\end{equation}
where $\bbeta_{k}^{*} = \left( \beta_{(1)k}^{*},\ldots,\beta_{(r_k)k}^{*} \right)^{\top}$ is to be estimated. This may be written
\begin{eqnarray}
\boldeta(\bix_{i}) &=& \sum_{k=1}^{p} \, \diag(x_{ik},\ldots,x_{ik}) \, \bH_k \, \bbeta_{k}^{*}.
\label{eqn:xij.d.vector}
\end{eqnarray}
To handle~(\ref{eqn:xij})--(\ref{eqn:xij2}) we can generalize~(\ref{eqn:xij.d.vector}) to
\begin{eqnarray}
\boldeta_i &=& \sum_{k=1}^{p} \, \diag(x_{ik1},\ldots,x_{ikM}) \; \bH_k \, \bbeta_{k}^{*}
\ \ \ \ \left(= \sum_{k=1}^{p} \, \bX_{(ik)}^{*} \, \bH_k \, \bbeta_{k}^{*} , \mathrm{\ say} \right).
\label{eqn:xij.vector}
\end{eqnarray}
Each component of the list \texttt{xij} is a formula having~$M$ terms (ignoring the intercept) which specifies the successive diagonal elements of the matrix~$\bX_{(ik)}^{*}$. Thus each row of the constraint matrix may be multiplied by a different vector of values. The constraint matrices themselves are not affected by the \texttt{xij} argument.

How can one fit such models in \VGAM{}? Let us fit~(\ref{eqn:xij.eg.gotowork}). Suppose the journey cost and time variables have had the cost and time of walking subtracted from them. Then, using ``\texttt{.trn}'' to denote train,
\begin{Code}
fit2 <- vglm(cbind(bus, train, car, walk) ~ Cost + Time + Distance,
             fam = multinomial(parallel = TRUE ~ Cost + Time + Distance - 1),
             xij = list(Cost ~ Cost.bus + Cost.trn + Cost.car,
                        Time ~ Time.bus + Time.trn + Time.car),
             form2 = ~ Cost.bus + Cost.trn + Cost.car +
                       Time.bus + Time.trn + Time.car +
                       Cost + Time + Distance,
             data = gotowork)
\end{Code}
should do the job. Here, the argument \texttt{form2} is assigned a second \proglang{S}~formula which is used in some special circumstances or by certain types of~\VGAM{} family functions. The model has $\bH_{1} = \bI_{3}$ and $\bH_{2} = \bH_{3} = \bH_{4} = \bone_{3}$ because the lack of parallelism only applies to the intercept. However, unless \texttt{Cost} is the same as \texttt{Cost.bus} and \texttt{Time} is the same as \texttt{Time.bus}, this model should not be plotted with \texttt{plotvgam()}; see the author's homepage for further documentation.

By the way, suppose $\beta_{(1)4}^{*}$ in~(\ref{eqn:xij.eg.gotowork}) is replaced by~$\beta_{(j)4}^{*}$. Then the above code but with
\begin{Code}
fam = multinomial(parallel = FALSE ~ 1 + Distance),
\end{Code}
should fit this model. Equivalently,
\begin{Code}
fam = multinomial(parallel = TRUE ~ Cost + Time - 1),
\end{Code}
specifies the same model, since the parallelism assumption is then confined to \texttt{Cost} and \texttt{Time}.

\subsubsection{A more complicated example}
\label{sec:xij.complicated}

The above example is straightforward because the variables were entered linearly. However, things become trickier if data-dependent functions are used in any \texttt{xij} terms, e.g., \texttt{bs()}, \texttt{ns()} or \texttt{poly()}. In particular, regression splines such as \texttt{bs()} and \texttt{ns()} can be used to estimate a general smooth function~$f(x_{ij})$, which is very useful for exploratory data analysis. Suppose we wish to fit the variable \texttt{Cost} with a smoother. This is possible using regression splines and a trick.
Firstly note that
\begin{Code}
fit3 <- vglm(cbind(bus, train, car, walk) ~ ns(Cost) + Time + Distance,
             multinomial(parallel = TRUE ~ ns(Cost) + Time + Distance - 1),
             xij = list(ns(Cost) ~ ns(Cost.bus) + ns(Cost.trn) + ns(Cost.car),
                        Time ~ Time.bus + Time.trn + Time.car),
             form2 = ~ ns(Cost.bus) + ns(Cost.trn) + ns(Cost.car) +
                       Time.bus + Time.trn + Time.car +
                       ns(Cost) + Cost + Time + Distance,
             data = gotowork)
\end{Code}
will \textit{not} work because the basis functions for \texttt{ns(Cost.bus)}, \texttt{ns(Cost.trn)} and \texttt{ns(Cost.car)} are not identical, since the knots differ. Consequently, they represent different functions despite having common regression coefficients.

Fortunately, it is possible to force the~\texttt{ns()} terms to have identical basis functions by using a trick: combine the vectors temporarily. To do this, one can let
\begin{Code}
NS <- function(x, ..., df = 3)
  ns(c(x, ...), df = df)[1:length(x), , drop = FALSE]
\end{Code}
This computes a natural cubic spline basis evaluated at~\texttt{x}, but it uses the other arguments as well to form an overall vector from which to obtain the (common) knots. Then the usage of \texttt{NS()} can be something like
\begin{Code}
fit4 <- vglm(cbind(bus, train, car, walk) ~
                   NS(Cost.bus, Cost.trn, Cost.car) + Time + Distance,
             multinomial(parallel = TRUE ~
                   NS(Cost.bus, Cost.trn, Cost.car) + Time + Distance - 1),
             xij = list(NS(Cost.bus, Cost.trn, Cost.car) ~
                        NS(Cost.bus, Cost.trn, Cost.car) +
                        NS(Cost.trn, Cost.car, Cost.bus) +
                        NS(Cost.car, Cost.bus, Cost.trn),
                        Time ~ Time.bus + Time.trn + Time.car),
             form2 = ~ NS(Cost.bus, Cost.trn, Cost.car) +
                       NS(Cost.trn, Cost.car, Cost.bus) +
                       NS(Cost.car, Cost.bus, Cost.trn) +
                       Time.bus + Time.trn + Time.car +
                       Cost.bus + Cost.trn + Cost.car +
                       Time + Distance,
             data = gotowork)
\end{Code}
So \texttt{NS(Cost.bus, Cost.trn, Cost.car)} is the smooth term for \texttt{Cost.bus}, etc. Furthermore, \texttt{plotvgam()} may be applied to \texttt{fit4}, in which case the fitted regression spline is plotted against its first inner argument, viz.~\texttt{Cost.bus}. One reason why it also predicts correctly is ``smart prediction'' \citep{Rnews:Yee:2008}.

\subsubsection{Implementation details}
\label{sec:jss.xij.implementationDetails}

The~\texttt{xij} argument operates \textit{after} the ordinary $\bX_{\sVLM}$ matrix is created. Then selected columns of~$\bX_{\sVLM}$ are modified using the constraint matrices and the \texttt{xij} and~\texttt{form2} arguments; that is, using \texttt{form2}'s model matrix $\bX_{\sformtwo}$ and the~$\bH_k$. This whole operation is possible because $\bX_{\sVLM}$ remains structurally the same. The crucial equation is~(\ref{eqn:xij.vector}).

Other \texttt{xij} examples are given in the online help of \texttt{fill()} and \texttt{vglm.control()}, as well as at the package's webpage.

% ----------------------------------------------------------------------
\section{Discussion}
\label{sec:jsscat.discussion}

This article has sought to convey how VGLMs/VGAMs are well suited for fitting regression models for categorical data. The framework's primary strength is that it is simple and unified, and, when reflected in software, it makes practical CDA more understandable and efficient. Furthermore, there are natural extensions such as a reduced-rank variant and covariate-specific~$\eta_{j}$. The \VGAM{}~package potentially offers a wide selection of models and utilities. There is much future work to do.
Some useful additions to the package include:
\begin{enumerate}
\item Bias-reduction \citep{firt:1993} is a method for removing the~$O(n^{-1})$ bias from a maximum likelihood estimate. For a substantial class of models including GLMs it can be formulated in terms of a minor adjustment of the score vector within an IRLS algorithm \citep{kosm:firt:2009}. One by-product, for logistic regression, is that while the maximum likelihood estimate (MLE) can be infinite, the adjustment leads to estimates that are always finite. At present the \R{}~package \pkg{brglm} \citep{Kosmidis:2008} implements bias-reduction for a number of models. Bias-reduction might be implemented by adding an argument \texttt{bred = FALSE}, say, to some existing \VGAM{} family functions.
\item Nested logit models were developed to overcome a fundamental shortcoming of the multinomial logit model, viz.~the independence of irrelevant alternatives~(IIA) assumption. Roughly, the multinomial logit model assumes that the ratio of the choice probabilities of two alternatives does not depend on the presence or absence of other alternatives in the model. This presents problems that are often illustrated by the famed red bus--blue bus problem.
\item The generalized estimating equations (GEE) methodology is largely amenable to IRLS and this should be added to the package in the future \citep{wild:yee:1996}.
\item For logistic regression, \proglang{SAS}'s \code{proc logistic} gives a warning if the data are completely separated or quasi-completely separated. The effect of such data is that some regression coefficients tend to~$\pm \infty$. With such data, all (to my knowledge) \R{}~implementations give warnings that are vague, if any at all, and this is rather unacceptable \citep{alli:2004}. The \pkg{safeBinaryRegression} package \citep{Konis:2009} overloads \code{glm()} so that a check for the existence of the MLE is made before fitting a binary response GLM.
\end{enumerate}

In closing, the \pkg{VGAM} package is continually being developed, therefore some future changes in the implementation details and usage may occur. These may include non-backward-compatible changes (see the \code{NEWS} file). Further documentation and updates are available at the author's homepage, whose URL is given in the \code{DESCRIPTION} file.

% ----------------------------------------------------------------------
\section*{Acknowledgments}

The author thanks Micah Altman, David Firth and Bill Venables for helpful conversations, and Ioannis Kosmidis for a reprint. Thanks also to The Institute for Quantitative Social Science at Harvard University for their hospitality while this document was written during a sabbatical visit.
\bibliography{categoricalVGAMbib} \end{document} VGAM/inst/doc/categoricalVGAM.R0000644000176000001440000003251512136651167015647 0ustar ripleyusers### R code from vignette source 'categoricalVGAM.Rnw' ################################################### ### code chunk number 1: categoricalVGAM.Rnw:84-89 ################################################### library("VGAM") ps.options(pointsize = 12) options(width = 72, digits = 4) options(SweaveHooks = list(fig = function() par(las = 1))) options(prompt = "R> ", continue = "+") ################################################### ### code chunk number 2: categoricalVGAM.Rnw:613-616 ################################################### pneumo <- transform(pneumo, let = log(exposure.time)) fit <- vgam(cbind(normal, mild, severe) ~ s(let, df = 2), cumulative(reverse = TRUE, parallel = TRUE), pneumo) ################################################### ### code chunk number 3: categoricalVGAM.Rnw:899-903 ################################################### journal <- c("Biometrika", "Comm.Statist", "JASA", "JRSS-B") squaremat <- matrix(c(NA, 33, 320, 284, 730, NA, 813, 276, 498, 68, NA, 325, 221, 17, 142, NA), 4, 4) dimnames(squaremat) <- list(winner = journal, loser = journal) ################################################### ### code chunk number 4: categoricalVGAM.Rnw:1004-1008 ################################################### abodat <- data.frame(A = 725, B = 258, AB = 72, O = 1073) fit <- vglm(cbind(A, B, AB, O) ~ 1, ABO, abodat) coef(fit, matrix = TRUE) Coef(fit) # Estimated pA and pB ################################################### ### code chunk number 5: categoricalVGAM.Rnw:1314-1315 ################################################### head(wffc.nc, 5) ################################################### ### code chunk number 6: categoricalVGAM.Rnw:1324-1336 ################################################### fnc <- transform(wffc.nc, finame = factor(iname), fsector = factor(sector), fday = factor(ceiling(session / 2)), mornaft = 1 - (session %% 2), fbeatboat = factor(beatboat)) fnc <- fnc[with(fnc, !is.element(comid, c(99,72,80,93,45,71,97,78))),] fnc <- transform(fnc, ordnum = ifelse(numbers <= 02, "few", ifelse(numbers <= 10, "more", "most"))) fnc$ordnum <- ordered(fnc$ordnum, levels = c("few", "more", "most")) ################################################### ### code chunk number 7: categoricalVGAM.Rnw:1341-1342 ################################################### with(fnc, table(ordnum)) ################################################### ### code chunk number 8: categoricalVGAM.Rnw:1349-1356 ################################################### fit.pom <- vglm(ordnum ~ fsector + mornaft + fday + finame, family = cumulative(parallel = TRUE, reverse = TRUE), data = fnc) ################################################### ### code chunk number 9: categoricalVGAM.Rnw:1368-1370 ################################################### head(fit.pom@y, 3) colSums(fit.pom@y) ################################################### ### code chunk number 10: categoricalVGAM.Rnw:1381-1383 ################################################### head(coef(fit.pom, matrix = TRUE), 10) #head(summary(fit.pom)@coef3, 10) # Old now since 0.7-10 is nicer ################################################### ### code chunk number 11: categoricalVGAM.Rnw:1387-1388 ################################################### head(coef(summary(fit.pom)), 10) ################################################### ### code chunk number 12: categoricalVGAM.Rnw:1434-1442 
################################################### fit.ppom <- vglm(ordnum ~ fsector + mornaft + fday + finame, cumulative(parallel = FALSE ~ 1 + mornaft, reverse = TRUE), data = fnc) head(coef(fit.ppom, matrix = TRUE), 8) ################################################### ### code chunk number 13: categoricalVGAM.Rnw:1447-1449 ################################################### pchisq(deviance(fit.pom) - deviance(fit.ppom), df = df.residual(fit.pom) - df.residual(fit.ppom), lower.tail=FALSE) ################################################### ### code chunk number 14: categoricalVGAM.Rnw:1456-1464 ################################################### fit2.ppom <- vglm(ordnum ~ fsector + mornaft + fday + finame, family = cumulative(parallel = FALSE ~ 1 + fday, reverse = TRUE), data = fnc) head(coef(fit2.ppom, matrix = TRUE), 8) ################################################### ### code chunk number 15: categoricalVGAM.Rnw:1469-1470 ################################################### head(fitted(fit2.ppom), 3) ################################################### ### code chunk number 16: categoricalVGAM.Rnw:1475-1476 ################################################### head(predict(fit2.ppom), 3) ################################################### ### code chunk number 17: categoricalVGAM.Rnw:1480-1482 ################################################### dim(model.matrix(fit2.ppom, type = "lm")) dim(model.matrix(fit2.ppom, type = "vlm")) ################################################### ### code chunk number 18: categoricalVGAM.Rnw:1486-1487 ################################################### constraints(fit2.ppom)[c(1, 2, 5, 6)] ################################################### ### code chunk number 19: categoricalVGAM.Rnw:1524-1526 ################################################### head(marital.nz, 4) summary(marital.nz) ################################################### ### code chunk number 20: categoricalVGAM.Rnw:1529-1531 ################################################### fit.ms <- vgam(mstatus ~ s(age, df = 3), multinomial(refLevel = 2), data = marital.nz) ################################################### ### code chunk number 21: categoricalVGAM.Rnw:1535-1537 ################################################### head(fit.ms@y, 4) colSums(fit.ms@y) ################################################### ### code chunk number 22: categoricalVGAM.Rnw:1546-1558 ################################################### # Plot output mycol <- c("red","darkgreen","blue") par(mfrow=c(2,2)) plot(fit.ms, se=TRUE, scale=12, lcol=mycol, scol=mycol) # Plot output overlayed #par(mfrow=c(1,1)) plot(fit.ms, se=TRUE, scale=12, overlay=TRUE, llwd=2, lcol=mycol, scol=mycol) ################################################### ### code chunk number 23: categoricalVGAM.Rnw:1601-1614 ################################################### getOption("SweaveHooks")[["fig"]]() # Plot output mycol <- c("red","darkgreen","blue") par(mfrow=c(2,2)) par(mar=c(4.2,4.0,1.2,2.2)+0.1) plot(fit.ms, se=TRUE, scale=12, lcol=mycol, scol=mycol) # Plot output overlaid #par(mfrow=c(1,1)) plot(fit.ms, se=TRUE, scale=12, overlay=TRUE, llwd=2, lcol=mycol, scol=mycol) ################################################### ### code chunk number 24: categoricalVGAM.Rnw:1634-1635 ################################################### plot(fit.ms, deriv=1, lcol=mycol, scale=0.3) ################################################### ### code chunk number 25: categoricalVGAM.Rnw:1644-1648 ################################################### 
getOption("SweaveHooks")[["fig"]]() # Plot output par(mfrow=c(1,3)) par(mar=c(4.5,4.0,0.2,2.2)+0.1) plot(fit.ms, deriv=1, lcol=mycol, scale=0.3) ################################################### ### code chunk number 26: categoricalVGAM.Rnw:1671-1683 ################################################### foo <- function(x, elbow=50) poly(pmin(x, elbow), 2) clist <- list("(Intercept)" = diag(3), "poly(age, 2)" = rbind(1, 0, 0), "foo(age)" = rbind(0, 1, 0), "age" = rbind(0, 0, 1)) fit2.ms <- vglm(mstatus ~ poly(age, 2) + foo(age) + age, family = multinomial(refLevel = 2), constraints = clist, data = marital.nz) ################################################### ### code chunk number 27: categoricalVGAM.Rnw:1686-1687 ################################################### coef(fit2.ms, matrix = TRUE) ################################################### ### code chunk number 28: categoricalVGAM.Rnw:1691-1698 ################################################### par(mfrow=c(2,2)) plotvgam(fit2.ms, se = TRUE, scale = 12, lcol = mycol[1], scol = mycol[1], which.term = 1) plotvgam(fit2.ms, se = TRUE, scale = 12, lcol = mycol[2], scol=mycol[2], which.term = 2) plotvgam(fit2.ms, se = TRUE, scale = 12, lcol = mycol[3], scol = mycol[3], which.term = 3) ################################################### ### code chunk number 29: categoricalVGAM.Rnw:1709-1718 ################################################### getOption("SweaveHooks")[["fig"]]() # Plot output par(mfrow=c(2,2)) par(mar=c(4.5,4.0,1.2,2.2)+0.1) plotvgam(fit2.ms, se = TRUE, scale = 12, lcol = mycol[1], scol = mycol[1], which.term = 1) plotvgam(fit2.ms, se = TRUE, scale = 12, lcol = mycol[2], scol = mycol[2], which.term = 2) plotvgam(fit2.ms, se = TRUE, scale = 12, lcol = mycol[3], scol = mycol[3], which.term = 3) ################################################### ### code chunk number 30: categoricalVGAM.Rnw:1736-1737 ################################################### deviance(fit.ms) - deviance(fit2.ms) ################################################### ### code chunk number 31: categoricalVGAM.Rnw:1743-1744 ################################################### (dfdiff <- df.residual(fit2.ms) - df.residual(fit.ms)) ################################################### ### code chunk number 32: categoricalVGAM.Rnw:1747-1748 ################################################### 1-pchisq(deviance(fit.ms) - deviance(fit2.ms), df=dfdiff) ################################################### ### code chunk number 33: categoricalVGAM.Rnw:1761-1772 ################################################### ooo <- with(marital.nz, order(age)) with(marital.nz, matplot(age[ooo], fitted(fit.ms)[ooo,], type="l", las=1, lwd=2, ylim=0:1, ylab="Fitted probabilities", xlab="Age", # main="Marital status amongst NZ Male Europeans", col=c(mycol[1], "black", mycol[-1]))) legend(x=52.5, y=0.62, # x="topright", col=c(mycol[1], "black", mycol[-1]), lty=1:4, legend=colnames(fit.ms@y), lwd=2) abline(v=seq(10,90,by=5), h=seq(0,1,by=0.1), col="gray", lty="dashed") ################################################### ### code chunk number 34: categoricalVGAM.Rnw:1787-1800 ################################################### getOption("SweaveHooks")[["fig"]]() par(mfrow=c(1,1)) par(mar=c(4.5,4.0,0.2,0.2)+0.1) ooo <- with(marital.nz, order(age)) with(marital.nz, matplot(age[ooo], fitted(fit.ms)[ooo,], type="l", las=1, lwd=2, ylim=0:1, ylab="Fitted probabilities", xlab="Age", col=c(mycol[1], "black", mycol[-1]))) legend(x=52.5, y=0.62, col=c(mycol[1], "black", mycol[-1]), lty=1:4, 
legend=colnames(fit.ms@y), lwd=2.1) abline(v=seq(10,90,by=5), h=seq(0,1,by=0.1), col="gray", lty="dashed") ################################################### ### code chunk number 35: categoricalVGAM.Rnw:1834-1838 ################################################### # Scale the variables? Yes; the Anderson (1984) paper did (see his Table 6). head(backPain, 4) summary(backPain) backPain <- transform(backPain, sx1 = -scale(x1), sx2 = -scale(x2), sx3 = -scale(x3)) ################################################### ### code chunk number 36: categoricalVGAM.Rnw:1842-1843 ################################################### bp.rrmlm1 <- rrvglm(pain ~ sx1 + sx2 + sx3, multinomial, backPain) ################################################### ### code chunk number 37: categoricalVGAM.Rnw:1846-1847 ################################################### Coef(bp.rrmlm1) ################################################### ### code chunk number 38: categoricalVGAM.Rnw:1875-1876 ################################################### set.seed(123) ################################################### ### code chunk number 39: categoricalVGAM.Rnw:1879-1881 ################################################### bp.rrmlm2 <- rrvglm(pain ~ sx1 + sx2 + sx3, multinomial, backPain, Rank = 2, Corner = FALSE, Uncor = TRUE) ################################################### ### code chunk number 40: categoricalVGAM.Rnw:1889-1893 ################################################### biplot(bp.rrmlm2, Acol="blue", Ccol="darkgreen", scores=TRUE, # xlim=c(-1,6), ylim=c(-1.2,4), # Use this if not scaled xlim=c(-4.5,2.2), ylim=c(-2.2, 2.2), # Use this if scaled chull=TRUE, clty=2, ccol="blue") ################################################### ### code chunk number 41: categoricalVGAM.Rnw:1925-1933 ################################################### getOption("SweaveHooks")[["fig"]]() # Plot output par(mfrow=c(1,1)) par(mar=c(4.5,4.0,0.2,2.2)+0.1) biplot(bp.rrmlm2, Acol="blue", Ccol="darkgreen", scores=TRUE, # xlim=c(-1,6), ylim=c(-1.2,4), # Use this if not scaled xlim=c(-4.5,2.2), ylim=c(-2.2, 2.2), # Use this if scaled chull=TRUE, clty=2, ccol="blue") ################################################### ### code chunk number 42: categoricalVGAM.Rnw:2047-2048 ################################################### iam(NA, NA, M = 4, both = TRUE, diag = TRUE) VGAM/inst/CITATION0000644000176000001440000000513512136651105013152 0ustar ripleyuserscitHeader("To cite VGAM in publications please use:") ## R >= 2.8.0 passes package metadata to citation(). if(!exists("meta") || is.null(meta)) meta <- packageDescription("VGAM") year <- sub("-.*", "", meta$Date) note <- sprintf("R package version %s", meta$Version) citEntry(entry = "Article", title = "Vector Generalized Additive Models", author = personList(as.person("Thomas W. Yee"), as.person("C. J. Wild")), journal = "Journal of Royal Statistical Society, Series B", year = "1996", volume = "58", number = "3", pages = "481--493", textVersion = paste("Thomas W. Yee and C. J. Wild (1996).", "Vector Generalized Additive Models.", "Journal of Royal Statistical Society, Series B, 58(3), 481-493.") ) citEntry(entry = "Article", title = "The {VGAM} Package for Categorical Data Analysis", author = personList(as.person("Thomas W. Yee")), journal = "Journal of Statistical Software", year = "2010", volume = "32", number = "10", pages = "1--34", url = "http://www.jstatsoft.org/v32/i10/", textVersion = paste("Thomas W. 
Yee (2010).", "The VGAM Package for Categorical Data Analysis.", "Journal of Statistical Software, 32(10), 1-34.", "URL http://www.jstatsoft.org/v32/i10/."), header = "and/or" ) citEntry(entry = "Article", title = "The {VGAM} Package for Categorical Data Analysis", author = personList(as.person("Thomas W. Yee")), journal = "Journal of Statistical Software", year = "2010", volume = "32", number = "10", pages = "1--34", url = "http://www.jstatsoft.org/v32/i10/", textVersion = paste("Thomas W. Yee (2010).", "The VGAM Package for Categorical Data Analysis.", "Journal of Statistical Software, 32(10), 1-34.", "URL http://www.jstatsoft.org/v32/i10/."), header = "and/or" ) citEntry(entry = "Manual", title = "{VGAM}: Vector Generalized Linear and Additive Models", author = personList(as.person("Thomas W. Yee")), year = year, note = note, url = "http://CRAN.R-project.org/package=VGAM", textVersion = paste("Thomas W. Yee", sprintf("(%s).", year), "VGAM: Vector Generalized Linear and Additive Models.", paste(note, ".", sep = ""), "URL http://CRAN.R-project.org/package=VGAM"), header = "and/or" ) VGAM/demo/0000755000176000001440000000000012136651105011760 5ustar ripleyusersVGAM/demo/zipoisson.R0000755000176000001440000000063112136651105014143 0ustar ripleyusers# Demo for Zero Inflated Poisson set.seed(111) n <- 1000 phi <- 0.35 # Proportion that are zero by definition lambda <- 4 # Poisson parameter y <- ifelse(runif(n) < phi, 0, rpois(n, lambda)) stem(y) fit <- vglm(y ~ 1, family=zipoisson, trace=TRUE, crit="c" ) true.mean <- (1-phi)*lambda true.mean fitted(fit)[1:5,] fit@misc$prob0 # The estimate of P(Y=0) coef(fit) coef(fit, matrix=TRUE) Coef(fit) VGAM/demo/vgam.R0000755000176000001440000000100612136651105013035 0ustar ripleyusers# Demo for vgam if(dev.cur() <= 1) get(getOption("device"))() opar <- par(ask = interactive() && (.Device %in% c("X11", "GTK", "gnome", "windows","quartz"))) data(hunua) fit.h = vgam(agaaus ~ s(altitude), binomialff, hunua) plot(fit.h, se=TRUE, lcol="blue", scol="red", llwd=2, slwd=2, las=1) attach(hunua) n = nrow(hunua) o = order(altitude) plot(altitude[o], fitted(fit.h)[o], type="l", ylim=0:1, lwd=2, col="blue", las=1) points(altitude, agaaus + (runif(n)-0.5)/30, col="red") detach(hunua) VGAM/demo/lmsqreg.R0000755000176000001440000000173112136651105013562 0ustar ripleyusers# Demo for lmsqreg # At the moment this is copied from lms.bcn.Rd if(dev.cur() <= 1) get(getOption("device"))() opar <- par(ask = interactive() && (.Device %in% c("X11", "GTK", "gnome", "windows","quartz"))) data(bminz) fit = vgam(BMI ~ s(age, df=c(4,2)), fam=lms.bcn(zero=1), data=bminz, tr=TRUE) predict(fit)[1:3,] fitted(fit)[1:3,] bminz[1:3,] # Person 1 is near the lower quartile of BMI amongst people his age cdf(fit)[1:3] # Quantile plot par(bty="l", mar=c(5,4,4,3)+0.1, xpd=TRUE) qtplot(fit, percentiles=c(5,50,90,99), main="Quantiles", xlim=c(15,90), las=1, ylab="BMI", lwd=2, lcol=4) # Density plot ygrid = seq(15, 43, len=100) # BMI ranges par(mfrow=c(1,1), lwd=2) a = deplot(fit, x0=20, y=ygrid, main="Density functions at Age = 20, 42 and 55", xlab="BMI") a a = deplot(fit, x0=42, y=ygrid, add=TRUE, lty=2, col=2) a = deplot(fit, x0=55, y=ygrid, add=TRUE, lty=4, col=4, Attach=TRUE) a@post$deplot # Contains density function values VGAM/demo/distributions.R0000755000176000001440000000134512136651105015013 0ustar ripleyusers# Demo for the maximum likelihood estimation of parameters from # some selected distributions # At the moment this is copied from some .Rd file ## Negative binomial distribution ## 
VGAM/demo/
VGAM/demo/zipoisson.R
# Demo for Zero Inflated Poisson

set.seed(111)
n <- 1000
phi <- 0.35     # Proportion that are zero by definition
lambda <- 4     # Poisson parameter
y <- ifelse(runif(n) < phi, 0, rpois(n, lambda))
stem(y)

fit <- vglm(y ~ 1, family=zipoisson, trace=TRUE, crit="c" )
true.mean <- (1-phi)*lambda
true.mean
fitted(fit)[1:5,]
fit@misc$prob0  # The estimate of P(Y=0)

coef(fit)
coef(fit, matrix=TRUE)
Coef(fit)

VGAM/demo/vgam.R
# Demo for vgam

if(dev.cur() <= 1) get(getOption("device"))()

opar <- par(ask = interactive() &&
            (.Device %in% c("X11", "GTK", "gnome", "windows","quartz")))

data(hunua)
fit.h = vgam(agaaus ~ s(altitude), binomialff, hunua)
plot(fit.h, se=TRUE, lcol="blue", scol="red", llwd=2, slwd=2, las=1)

attach(hunua)
n = nrow(hunua)
o = order(altitude)
plot(altitude[o], fitted(fit.h)[o], type="l", ylim=0:1,
     lwd=2, col="blue", las=1)
points(altitude, agaaus + (runif(n)-0.5)/30, col="red")
detach(hunua)

VGAM/demo/lmsqreg.R
# Demo for lmsqreg
# At the moment this is copied from lms.bcn.Rd

if(dev.cur() <= 1) get(getOption("device"))()

opar <- par(ask = interactive() &&
            (.Device %in% c("X11", "GTK", "gnome", "windows","quartz")))

data(bmi.nz)
fit = vgam(BMI ~ s(age, df=c(4,2)), fam=lms.bcn(zero=1), data=bmi.nz, tr=TRUE)
predict(fit)[1:3,]
fitted(fit)[1:3,]
bmi.nz[1:3,]
# Person 1 is near the lower quartile of BMI amongst people his age
cdf(fit)[1:3]

# Quantile plot
par(bty="l", mar=c(5,4,4,3)+0.1, xpd=TRUE)
qtplot(fit, percentiles=c(5,50,90,99), main="Quantiles",
       xlim=c(15,90), las=1, ylab="BMI", lwd=2, lcol=4)

# Density plot
ygrid = seq(15, 43, len=100)  # BMI ranges
par(mfrow=c(1,1), lwd=2)
a = deplot(fit, x0=20, y=ygrid,
           main="Density functions at Age = 20, 42 and 55", xlab="BMI")
a
a = deplot(fit, x0=42, y=ygrid, add=TRUE, lty=2, col=2)
a = deplot(fit, x0=55, y=ygrid, add=TRUE, lty=4, col=4, Attach=TRUE)
a@post$deplot  # Contains density function values

VGAM/demo/distributions.R
# Demo for the maximum likelihood estimation of parameters from
# some selected distributions
# At the moment this is copied from some .Rd file

## Negative binomial distribution
## Data from Bliss and Fisher (1953).

y = 0:7
w = c(70, 38, 17, 10, 9, 3, 2, 1)
fit = vglm(y ~ 1, negbinomial, weights=w)
summary(fit)
coef(fit, matrix=TRUE)
Coef(fit)

## Beta distribution

set.seed(123)
nn = 1000
y = rbeta(nn, shape1=1, shape2=3)
fit = vglm(y ~ 1, betaff(link="identity"), trace = TRUE, crit="c")
fit = vglm(y ~ 1, betaff, trace = TRUE, crit="c")
coef(fit, matrix=TRUE)
Coef(fit)  # Useful for intercept-only models
Y = 5 + 8 * y  # From 5 to 13, not 0 to 1
fit = vglm(Y ~ 1, betaff(A=5, B=13), trace = TRUE)
Coef(fit)
fitted(fit)[1:4,]
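
## Exponential distribution
## (An extra sketch in the same style, not part of the original demo;
## it assumes only VGAM's exponential() family function.)

set.seed(123)
y = rexp(1000, rate = 2)
fit = vglm(y ~ 1, exponential, trace = TRUE)
coef(fit, matrix=TRUE)
Coef(fit)  # Estimate of the rate parameter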
VGAM/demo/cqo.R
# Demo for canonical Gaussian ordination

if(dev.cur() <= 1) get(getOption("device"))()

opar <- par(ask = interactive() &&
            (.Device %in% c("X11", "GTK", "gnome", "windows","quartz")))

data(hspider)
hspider[,1:6] = scale(hspider[,1:6])  # standardize environmental vars

## Rank-1 model (unequal tolerances, deviance=1176.0)
set.seed(123)
p1 = cqo(cbind(Alopacce, Alopcune, Alopfabr, Arctlute, Arctperi, Auloalbi,
               Pardlugu, Pardmont, Pardnigr, Pardpull, Trocterr, Zoraspin) ~
         WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux,
         family = quasipoissonff, data = hspider, Bestof=10,
         Crow1positive=FALSE, EqualTolerances=FALSE, ITolerances=FALSE)
lvplot(p1, lcol=1:12, llwd=2, llty=1:12, y=TRUE, pch=1:12, pcol=1:12,
       las=1, main="Hunting spider data")
print(ccoef(p1), digits=3)
print(Coef(p1), digits=3)

# trajectory plot
trplot(p1, which=1:3, log="xy", type="b", lty=1,
       col=c("blue","red","green"), lwd=2, label=TRUE) -> ii
legend(0.00005, 0.3, paste(ii$species[,1], ii$species[,2], sep=" and "),
       lwd=2, lty=1, col=c("blue","red","green"))
abline(a=0, b=1, lty="dashed")

## Rank-2 model (equal tolerances, deviance=856.5)
set.seed(111)
r2 = cqo(cbind(Alopacce, Alopcune, Alopfabr, Arctlute, Arctperi, Auloalbi,
               Pardmont, Pardnigr, Pardpull, Trocterr) ~
         WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux,
         family = quasipoissonff, data = hspider, Rank = 2, Bestof=10,
         ITolerances = TRUE, EqualTolerances = TRUE,
         Crow1positive = c(FALSE, FALSE))
print(ccoef(r2), digits=3)
print(Coef(r2), digits=3)

clr = (1:(10+1))[-7]  # Omit yellow colour
adj = c(-0.1, -0.1, -0.1, 1.1, 1.1, 1.1, -0.1, -0.1, -0.1, 1.1)

# With C arrows
lvplot(r2, label=TRUE, xlim=c(-2.8, 5.0), ellipse=FALSE, C=TRUE,
       Cadj=c(1.1, -0.1, 1.2, 1.1, 1.1, -0.1), adj=adj, las=1,
       chull=TRUE, pch="+", pcol=clr, sites=TRUE)

# With circular contours
lvplot(r2, label=TRUE, xlim=c(-2.8, 5.0), ellipse=TRUE, C=FALSE,
       Cadj=c(1.1, -0.1, 1.2, 1.1, 1.1, -0.1), adj=adj, las=1,
       chull=TRUE, pch="+", pcol=clr, sites=TRUE)

# With neither C arrows or circular contours
lvplot(r2, label=TRUE, xlim=c(-2.8, 5.0), ellipse=FALSE, C=FALSE,
       Cadj=c(1.1, -0.1, 1.2, 1.1, 1.1, -0.1), adj=adj, las=1,
       chull=TRUE, pch="+", pcol=clr, sites=TRUE)

# Perspective plot
persp(r2, xlim=c(-5,5), ylim=c(-3,6), theta = 50, phi = 20)

## Gaussian logit regression
## Not recommended actually because the number of sites is far too low.
## Deviance = 154.6, equal tolerances.
attach(hspider)
ybin = 0 + (cbind(Alopacce, Alopcune, Alopfabr, Arctlute, Arctperi,
                  Auloalbi, Pardlugu, Pardmont, Pardnigr, Pardpull,
                  Trocterr, Zoraspin) > 0)  # Matrix of 0's and 1's
detach(hspider)
set.seed(1312)
b1 = cqo(ybin[,-c(1,5)] ~ WaterCon + BareSand + FallTwig + CoveMoss +
         CoveHerb + ReflLux,
         family = quasibinomialff(mv=TRUE), Bestof=4, ITolerances=TRUE,
         data = hspider, EqualTolerances=TRUE, Crow1positive=FALSE)
lvplot(b1, type="predictors", llwd=2, las=1, ylab="logit mu",
       ylim=c(-20,11), lcol=1:10)
c1 = Coef(b1)
cts = c("Trocterr", "Pardmont", "Alopfabr", "Arctlute")
text(c1@Optimum[1,cts], logit(c1@Maximum[cts])+1.0, cts)
round(t(Coef(b1, ITolerances=FALSE)@C), dig=3)

# On the probability scale
lvplot(b1, type="fitted", llwd=2, las=1, llty=1,
       ylab="Probability of presence",
       ylim=c(0,1), lcol=1:10)

VGAM/demo/binom2.or.R
# Demo for binom2.or

if(dev.cur() <= 1) get(getOption("device"))()

opar <- par(ask = interactive() &&
            (.Device %in% c("X11", "GTK", "gnome", "windows","quartz")))

data(hunua)
attach(hunua)
y00 = (1-agaaus) * (1-kniexc)
y01 = (1-agaaus) * kniexc
y10 = agaaus * (1-kniexc)
y11 = agaaus * kniexc
detach(hunua)

fit = vgam(cbind(y00,y01,y10,y11) ~ s(altitude, df=c(4,4,2.5)),
           binom2.or(zero=NULL), data=hunua)
par(mfrow=c(1,1))
plot(fit, se=TRUE, scol="darkgreen", lcol="blue")
summary(fit)

# Plot the marginal functions together
mycols = c("blue","red")
plot(fit, which.cf=1:2, lcol=mycols, scol=mycols,
     overlay=TRUE, se=TRUE, llwd=2, slwd=2)
legend(x=100, y=-4, leg=c("Agathis australis", "Knightia excelsa"),
       col=mycols, lty=1)

# Plot the odds ratio
o = order(fit@x[,2])
plot(fit@x[o,2], exp(predict(fit)[o,"log(OR)"]), log="y",
     xlab="Altitude (m)", ylab="Odds ratio (log scale)",
     col="blue", type="b")
abline(h=1, lty=2)  # Denotes independence between species

VGAM/demo/00Index
binom2.or       Bivariate logistic model
cqo             Constrained quadratic ordination
distributions   Maximum likelihood estimation of some distributions
lmsqreg         LMS quantile regression
vgam            Vector generalized additive models
zipoisson       Zero inflated Poisson
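
Each demo indexed above can be run from an installed copy of the package
with the standard demo() interface (base R's utils; nothing else assumed):

demo("zipoisson", package = "VGAM")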
¾fÎSÁ{¶:Jé*`ð²ìû2z…ÃHáuôßT_HQq"µE5ú—\ÿÉÙîÜVGAM/data/wffc.rda0000644000176000001440000002373212136651170013373 0ustar ripleyusersBZh91AY&SY"“•Û»ÿÿÿþDUUÿÿÿÿðÿÿÿþ@@@@H@@@@@@@@AàA<øžx•B…‰x’‚”ŸNÐúû¾>{}»¸*ØÜhJ©T(QW¬×¼c¶q™îJP*€ U^ lÁ4Õ›u޵Ýã9⨊ŠHЉR©UJ)KÌÄHr¡í‘!‡6Ç©æ 3iO MLh F$c#@¢zl’z dh†¦ž¡“@ŠŸ´ÐM¡Q“F€Ð €ÓÑ $M24J=&LCÐhÐ @¨@ ƒ@€a'ªR¢dMÑ õ†OH@Рhš©DÈÄdŒG¦SÒ ¦šmMOQµ6£Ô@ÚÕ2 @))"h!2 y&О¡”ñDÚŸ©4z‡¢z4M=ªišŒDbÂi‚?T|êûOöì2Üææ:n»uNî®çY6¨m‹n•FÕ]“b¶›B¶G1mEs Ú«eÌ5G2Ú­•nlsI¥Ês Pàs9M–Æ[ªî–0Û±Zí¹T«s-¹]Å¥‹n³u±•¹Y\ÝuÝJŠHtÝWüûÿkú{ŸPZ*ž÷Q"\•ŸËÎNgüEe•vçÖÄʳ‡p¾ q×ÚN¥uî]š¶ìþ±×E—åÞš·l+%‰„†JÐTè^+^[¥ì\Æ3”¨X 6¡+,›®WPø~1B¨½“2V‹v kاm¨EpUW³bJ¼9M­Ël§Ë¯A–éŒð\l,¡Ó®øA²1ŒAÇ‚|1Abcocd!9š˜L:2Àk3Ù@(`ƒ[(åá˜CqÚžŽmèã#PIÆ´EOÒØä"³/rñÒb´x¶¨ð¶T §Ân#vYa0­RH:ŠîÕ×auî›ÜpñÞ¢éðµÚ+O"¶\3 ÅÍØò>®J¾o7a5wìç3d¨ú·â¸>qo~s·Y7¥²btÀHìñ÷j»z¶;·Ã†‰n¦Ý4o‘9³²Y‘HpÙ¡C¹ÐÈtÀÇeAØ#^q¤¤*IÓQ©'.âbeåÝàž‘Œ›ÑÃc\ío ÛŽí¶®IãtÐ¥¿IV :wȯ{fØ»[ Å%#åíŸUï\LÖÚ´Ó/—+5r+jârYi Ɔz__'™Ojh:(wB„!°!X9B¡ h8›1ˆo^`²ÔNÌvÐH`K/=7¸ìM5ÔtõŒœËe·Š ÜOøÎëy®§eg<³eÀ§œQ8šEµÁT×DxïÉ›=Sx›É\!–ÓMÝÌ4ͽ[LQnµ“qµ¶œ[¡-X†ü.´Æi„ÜmÛ¹w,ËL,ÂX„T<@„û°’U‘“!ÛÀJö–ìÎì—7,:»v³Ë¬ÌhdÅ5òec•!ðÓ&©) `=eél¼ÇbWHÝ“HØæÉÆÕǺÔ@gX,ib!YÄ4•> …¨o&ÝX"; QÙ©"˜ï)L³Ø T&•Õ%¶®¬-kžŒÅêðÎÓ¡ìW¹Af[¸ujñæ Þµ-Z‹.ÜoqZÜÝxnÜÅq^w#XÆØ6Ú¶Ôwxò8²ÜlÊᄤÛMbÂVí©º‡RîÌo^¬ÊÂ+Ëq&Ôi5 t­ãšd.unjÍäÚÜ‘¸/vY£5˜4„î³ jÅNía%Ã]Z›¯%‚¼Éx|ŒsVMãÜxâl× ½Ö¯&­ŠÖ^î,ÂDñX îÙv3"ßÐndLo#ÖíÉ1q6Ðn÷ݬÝ$ ‰ŸyÃ'¾†6{ÆRÀnî¦îÜg§ÂFM’«/•›«Kî,Èíªæ·8g}Ù¼Ë,byàU‚ã<8›ÒÅ0­%ªbž·39¨æFb‘ª_|šz¬’¹Úa³ _U,™¼ÑÜ^ô°YÒp*&êu|KóœÅ»xTœq«¸wr{ù½­N´927Ñ9ñVñàëYê+uJ?QM³¸Ä6¢ZÐô“©­æñ¼®:tzÞù-93ãó uâB Þºð¹ÞÒ|s×{ÊXZÔc]6Æ·×}ã"SŠ÷­¯Ï©/í¶ çÅ×v£c£Öj<Ö-Mw庿-¦SôÞÖ66€†@æ9ª¯öôSýlÝíÂuж µÐÙGlŽfÑ~ö‡Y/Þýî:Ðï­Žkj9Žg]vêx}zþ-í û»»šC`•îž) 6BKÿuÃìv}IpaW»‰tíôuÅìà˜ÇŸÕÕêõ{{#½.Æ$¼¬]ØñØ0T`»¸ûv°GkBU3‡¸õëõí…àäĖ͘PTb_d=v^.`jv$˜—„öõ‰ö4ÍÙôö#­‰¿gœKÛê.~}ÚåêíôéÓÜÃkè”êäáoÝëòW"Lé–þ^nã/_ŸÖxš\rç唯÷{Ïko&ÎŽ^îNôuþ{ŠSßÈ rµßg»Ï»/F]Aîio·§Úú¾So,.WèäëÙÿïýÿó‡þØ}M/Ñj&@}æ¾ôÄ®Fcqjñ¹Š¾s•\¼4˜’Lj<îCÏ/gªô ¢Ûg÷SzJ±”¬L_Þ’`*îK-£kÄYnë®ã²ù/;ËDZۦþ»FfùJÑÀí®B/dÂ/“äe|Žö¯ëá«ðRä¼yŸ÷ú”µé­•]ª!ÈÝ] #Ñãîû?“øÒBBä^¿èH ×IUßÿ«²…E‹·R¡: §ø)$ÿ´!¹Ïû„^ÎÅÿŸ\ª¥v@¿WËŠ¥:ÿû<Ù”¥Ýþû7Q% ý± QqâR¯õøxý~þ•9ùoñY„J-¸ªŠ¶õýÖß5\¿1®Z¯–ûu¼W¥¯nSû9vÔþ®;äëWâÞz¹§mWvϽƬLJ©ñ…L® *èT“E*\´õ/q]±´Úí“™wÙvÁâ78M¹£®¸üq]´ó×–W1ãPñ®ùs:Ë¿ô8»Ë¾¹‰´^xgNZôçY6+ƼñãJbxÅë¢óÕ;`sNo-óι»á僭6Lç'lN³¶ñ•ß+ËÆòÑxÎÚxÔ6®üá<³ÏË”»Ë¾œÞ4»÷äïÎvÈñ‹Æ6Žì򨯯s —72]d®)Ö¼h®j^5·[hæ©ÍÛRæ‹×.im+ô »2 ÑX%Û+híéÈë¯Onž[¶;åÖ¼h»`ñ¥Û2ºÅÏNžƒa²ñ¢Ú.´ïŠ»êwds;èë#·8¦Ó×lí í‡lñ«­!²¶«e9©ÛGmQîÃeø}—Í©uªï…l ¡¹À‰5&‚{ 5[ŒÖ•”„ªjL[.¸U8b£šAM.ÓQ¤†ÄŠ\Lµ¡Ð^ÐY‰&"È„#f" ë„0*i„º'Êí…S­€]‡Ó²=y¸©º‹¾§¿O9ÖÎÚºíÑ{·œ}§¶”øùwQí‡(û»ò54+pƒ !‚mI£ J’X4(h½õvù¹KÏC¾|cÏRëhí']qq1±ikFƃ ¤äÕ¨U0–p%µ‡ZëœÁ¼q´Ž=´§Íñ#Ë<ºäöÒ±°¾\Ô«„@ÊT$Ƃֽ'7Z¦ƒ6•Ì+!% (²\Ö»QSS¾‡Y>z§¦žï‡ 釿>¾‚ƌؚª âa]ÕRŒšÚÂá¢Ö|~ yúræKdâlÕsË…|¹‡Ve  Ó`jÎ2¤® ¯S± µ‚µ¢ ±ÚÂË!ãrEÃ0h©Ù )´Iª#’WÊ YbH­ ÇI£e°Q¥UP$ZØ’-emwÀ d1i{+ͧ˜^ÙîÃæ†+Ó^ý:ÉçØž0j`µW«X¤˜"¤³x¼G&¦¨Â»ïH·BU4a ÅU±&`{$&ÓiÌ {£wFÍI[t Å +b3jØIdÊš@`Ò©ñcž-"æ lÊ«„*16gè¹”íœ×\Ú÷ûý:Ÿ |²-^ÌŠü M¤TÄ_ÄËWmrU"nl­¤”ÚâÕ@`­i;Œ¹…ŒBÉ£&VÐLFM1R^ÄVÒRhZª®/‘eüùSkåŠ<³×xëÂ=5[nÇÛѺª¼ºßU€ZÁ±oÓ†‚C`›Ží'£¹ˆú‡Fô·oUÄänzü¼ò¤ _œ+l(?t¦±‡Ü8Ô¥2“I :ªOZKâ ^"|ÇM‘Œ×LQü`D}[ö`«FQO6ðª@º™4öo² ß‘{Z§^vÏ¥Ÿl_eÒ”y5c^,£è•±¾vÖÙÆ ɤêåÆBHB“W1eÜÔêã”e|jà­M›F:_!¸%Áxòe;pÃc)l’°Ã‹ZDS³N{l^î IØ%J™T’J™^÷½ûOô—É¿‘ßµyüY|ÉxHå%M ¦ º‹>݈Šª­ ®”@R“šçÜjíßÉÎO™Ï¶}€¤Ò§ØÑÒÏ/š)6%æjM6“í/ŠâÑ®|…èz=ßU·cDôö½+ŧuʒƯ¬òëÉëKÕÝbK öûà ž¨÷^'œ]ìur·îk"`ÑMz­=…—©Í‰a1¥’âž~·+cH +ÚÓíî¨K}¯yyú%ÕUñx+]Ñ)*hCsù7V_¥a]×yªÒ¿-•_ªÙž–lù'·~Ýd> ×èIlhm$›¥±[H“aµ?&j ªå¶¹®ZµÌZ£&ѵclRkF,[Å2ÒÕµÂ)²Ú¨sU²m)Ì ˆÙ[&ÈØ-¢“lb j‹hÖ6ŒnZ¶KšØ6ò4©kj 
ëW2®aFÒ§4©ÍP•µU&Ä&ÒCeSaUl£bªÚÚ“j–Ô™ª‰™*Ú’¶ªm)6´j›(l‚ÚU´…±M“iÔHÚRlª«jªæTsJ¦Ñl[Q±&£m¨ª6´k´m˜§4æ¢lƒi&h[U²Å¨ÕhµlZ¢Í£X¬,m‹RXÚ£ikÓQª*V£V²EYЬ´T[E,UxÚæÔÅdØÙm£J­1E¢¶hª*ŠQ¶6…Jm%¶¨Ú¶Ml‹cHjͨ´mf­%¦(³ÎÖr6¡‹ef#&ÊÚ¶R°5¢¬lÆ£\Öê£i(ذIVmRÆÙ5Ú6Q6’Ø‚%,ÉD6LX4T•‘¥AbÐmØ¥±©+ Ö6))k5@cZ³lbÒ©.k´Ö”Úl¤ë'1µ@ØÑ¨Ôi,m¤¢É¬[‹kVhÕmmbÑT›ÆŒi)”Fd´RM`Ú‹¡–1£I¢´j5XªÅm*ŒQQ£bƒRAɈJ,"Ø#h±¶4`¡4P™£QF£b,%F+cE´jeI¨¨ÑZ $Lj‰4cbÅÆ QXÅF4lcF(ŠkØÕŠŠX¶6 °’m£h´kF¨,b¨ÛEE£lmb¢ &Ñ Ú5±±Äc´TAjŶРDbÔ´c[ÑhÔEccIZ,Ê1“"ÐDÑc@FÑh´Tc–Œi cIműA´DTXŃb6Å©1µfX¨‚+E¨Ú’ØµŠŒmŠ¢«E¶-±¨² ±F¬h£Q­bűclTXÚÅTm¨ŠÑhÕ­¢Ñ¤´cmµQ©•µVÈÚ“jì&Û@[FÚ5ÕW»ñ¦ùå;³»uÜÇÉG5s ¢mDÜÍMª+–«–¶pŽç$îér¹«vUsi6‡4U̔ڜmö;f0±…ƒÂUû"”/¸'”'ù/àQ÷Èþ¿ÈÜÞ¡ÜzQlú$¹H}ªp%wóQO5KñQ{žéйâ£óÈá¦ÔNG ™U.Jj9![Ê6¾j«z…ôR—Û§­¹J×#zøªÏ¢­ÂZ‚ï%Éçªb&¸W|.a8¢Ÿ÷ââÊÅ–VV$¶š¶±.ws®Lη-ܘ‰ô„=Òé/œ}2^èzÉ<‡Ò¨é+ aª§È+©U{ÂÖ3ž¥¼Kí{XÉUóÓòÊ»ÒxåzÑnƒŠªØ%ôÓ÷)÷éûÑG@ƒu{U…oJš¡Wb/ ‚NÚŒK²¥ëŠä§2«®aK&R-ˆb…²‹(Šªò¥uH}æ—ÞTwúèübv¤÷Ö•¨¼T8Q_=åæÊÁ%Œ#%wv±Z¬˜u[ÜÛ”nkRîºîÌì¸nîEÞþ«këêCHO×AôƒAöJ¾5OJOr)n¡V ˆ³RæR¹tI|yþo­æóú~ë³ãgØûA5B€1†Øêùén S»ëcˆ~tÈdŽò'IK ¦Kð„üÂ~ ô!~5TÉ~áT[î ¿uD÷’=‡ç¡ïWíèWIOwÞmšŒŸ¸«÷>?—ÒÙññHÉL©µbÚš¬R¶©î*è/ÏÌzú”Vaç‘dª²EvJœ”}Š?¢SòåQðPýjçØ« ÷æÈžxo’ê©z¬ xeMj¼0êKÃ%îEqŠ|i_…á‘VuÌJ¯í¥k¥ÀK©å!áIâçû¨ƒÎçõÀó'Ó‚èù²ªÊ©l“Ì­áv€¿%*GO©û?O¯’ûS=µÐ})/m *°4‘T @Ú¯øÍ·H݆"ŒFôöÙM³òš©<éD$4[N:¨êR¸Á,¸å2¬Ižx%"Mä®áoÇL­ÇM3Àc­ƒ¥5[Š~Uòs §5­ënL†?>Ï$âxʘÏèÛŒµ]§Í|õÕŽ;åSÕ¿nºN?ÃŽU2lÿÜz¯³>{{ÎÕW¥Ê••Í OT•Cצ*g5ìÕ©SZi¨™Úæ4ƒVvtÕžZòמ¸¶b˜RË¿óÐ_2/{”þ¤.Š~¯™WÚƒ°‚åˆú áÔ_ ƒø_´¾B~×b¯Èƒåúaõ§â§ÖÐ'Ð'½Î¥ù‚ Vr=a.uU½Ç±Ó'~Fê988ê³p¦ª¹¶Ü9ÜWß÷©{râ×J»òpÑxæŠõ•WÛ«Hw¿ÂŠ£ç”pl„Ôæú´=Äz‘ô~šW¢£Ý´½Iû÷ ú©öëÀè9Ü®IÄ+åóa\’w¤+]GÔ.‰S .e }W¢K§³ÊÜßRW–¥¸VîªÓ})’¦ôت¼j»J=ÿnk2«!ÚÈL˜¾U>äê“Æ«Ÿ®2%Û°%uPkõB%zÂüè\(î+½ £çF’é_ñÿ&ýµ«ÒSÄ'̃Dz^ðpp§Ú'¼êZT{"Y*¯OKÁRÚÕm½(bo‘ºUʃ±©| ð½ß‚úß^ÙC­í¹Ô.vë®T·n͸*ï¸!Z’UÃ'u òW«uBX‰nëžY[¶Õ.=š{±÷ê=ï½Ï7YÄ\ ®u͹Ç(tÜAæ¨úû°½”÷Düé{Iv÷Õy—ÖSÛ±‘Y¤zÄ{_<²‘Ð-VýKz‰î•iI§Š”Z£ºÁQ¯[N‰Vye–HaS4ÍžYãT ø½ºvµR=Â]$>>×ÏëóóœçÒl28Ɉ²±æðжÅu{ÂyÕ]ò4ßb»©d[mgïýt¿/ò¿Y üϪ‹àõÂY›hÛ‘'ÍPd¹N¿ŠC}È*4½~ÊŽÞjR¶EO‘=~Þ],”ö³Iž,ógVVxÆt3ÆŠ,è°-4$ïëæQ^`vBð¥jŒƒð¬-‘Ü¢¬âiUj#ƒ)|T6Ó} ø}$ØGÏšQ›íéÖºÓ*Ú†ç*»"x„ì•uÞˆí2UZéåU^0¶LÒåðcqºåuS×\ä¹çOÎ]J¿ ZGxWWtqiW¡”œ°W"•’Wí’õRòÕ6Õ¾šƒ¾Êu'Ê;CÚª#º£òJ}øPøRÝäPÜYq0°¥Þ…tS¼èûÒwò˜C>/ÌU Âß%ägRóH¶Æ”¯D¶÷U-Úd'Õ"諪—)ùb¿­#6gÞ¨ÕFò+ºYPò©[ªœŠè•]`§€}¾0ð…ØÕWß¿F›(œä«n9œç9ª:z©’®IUç©m£š«2ºÉmŠØQÔ§*P®åuNñä ;$䎡Á<•çH¾â=sÍ͹©9ª¹Â®s¨.„ꃒ½Ò™Ôµ4×(¼É5T4U["Ö*üu?“îG‘+ñ¨ëÜWh[6Û<’tGD/²#@è«ç«â&‰=êªÒe¿#†&w•<µ¨ZÆÜUÈT\IåUÏ ýši-wø‘àÃȹĖTY Ú­Š¨èȶÁ=>ǧ¨URâZ‹^w^6¨¬kºm«Žj‹m͹mn㮹β“”9ÎuË­Ìëpˬ7*u¡•lºës²®ô…yŠb£uRÑJײ­V„ÖƒFDgÐ¥”j‘g IVB`[VQ4óõêˆØ•b,¡ÚOb¹JÁj¥ÕQ1/ÛYCHÚGhŽ{m‘6ÈÌã·g,³h4AØ'J$ïIÉK¹]Ê¥©;Õ:¹hPÊâ.„i'm)ÏÌÀ=T_EÙfÛb­È˜¯Z¦'ÒUƒÕÍJd+I\^j“}Uy »*_U+߬ԯ~TÜ‹ÛUZCУª&µ ¬W7€W{’ëÅŒ±™zIeºQ×\Ûª]t¤ç8ÜëW_·}U/Æ¥ü¸ŽTž¡<¤žàr‡A‘Ðð•bê«Û¡Æ«"g ñºDøÕuà’ê"ì•_ µÑÄ•\!ÕÏTºb¶Ü¸*ÞåV®¨V’äI»†ªí ê«8W‘ Ã|ƒ›î|>ç{ˆ”|hÑÙj‰?ê” çl¾’ߢDžÏžÙW–Kè”[VsâÔ#Ÿ±‰ ¦ÀMB@Žyî±5S"¬"¯Z‹tÛvãDn-ÌÙ9ËuQÅUõââŸ$œ§âã» Vûj_—̧úê<ö*;Ô¿ØŠtÛj˜Vª9Ré±µÒ5Õ4 \+ƒeR×M’wæÊ¿—ª^¤uОÆÑ1G] Iº¥^ð–„´U°SPœB_lJrÕ_sÑ çM•&䛈Äd‘w¡YQpIoª­J²[ÓËàÊб€ø8³ì•^ŒÁbv8’ªv! 
ªÊ”UAIu#Ž·X]»õÐN·]œó£•'*–…Z ©j•0™ôV¨çI¼ØÞ|ڈߑ¥òSr¬‰?NªO_ˆv+ºOa9"ï(ïÖͪL¯s¿µGúžIâõ+›…_†¤ùTy¢óôEìã™Êç1‰1–3M¤m™ q\1•³‚ª¾Ï× g£m’ñ˜œ!5nø}ÀzÍ1ÒžR=Á8+Ýëäü›’¶ÞRN}Ï%GÂò ªOx^•ÔrSô!y$ðnþÎ6Ûh4©š«¨.UöJ:¢ºÒ{ôQsR¸°¥{àR¸¥W\u XæÉ\²^â¨Ø'Ç!^š¬K¢ƒI}u=B÷ȳBûú¡ø€Ðü1Ÿ‚K(óÇ]å,±!Uò~צZ$Þªah‡ò¿gUö>*8¥ÖEÖn9u®©Ötëœê†îêt®àèüÃ<㺘:x샽.Ê'tŸ £Á}Á<àw|cö*]zÀ¨HýŸÖÝgÍõú?+Óðìí§O«§_]Ÿ8ç‡eøjËVW[è¯CÕŸÆyi¯YYeX´4¨ZBIS$€#n[tqCVÞú²““²ÔUž.>úqbwàõÃÑKøóÕ{i=ú—Çã»ë•Ì}K6’Ñ€ežäÕ¹[ø’5°lA¦ûá Z¸pµ¡íÆ—ˆ[öih— z÷gLZ!_ fÏx/ø<µž ‹„Ýka?̳eöÒYF7Î^ù¿‹fÆY«ìÕ¦¼6U«ŠîM#——oЫ2*\Û¶ðfÌcqôâcº´¡êj¼lù[Q^åy-Ó‘Ìí¼!ô<'–Tœ;núk³në5ôé§F2æ0ÙfÌš$ÅŸZÎV ^ÒÜÁo[Û}÷æR÷ÖJ“|Fø‡Š¢½¡6Ò^«B’vÜj’ÖëD ¤ÄÑAYë•Û® AH  ™^"Ÿ¥Ð°²ê7Èü2°Q)A|CmIp÷3Æu—5ê@ÄÛVü}V ýÌ%¦›.¥ZöóY,å§ kÖø5Mܧ*ûh)¡¤ÙUŪ©qª&ø*©íç6\_t§|«ÜN« ¥$ìˆâW‡±¿7„㺿<ë‹%:ÉY+ùZ@H $‰X ¬R±*¯>Œ5›·kÝ"\4—‹³±v0Hì`uøàò:øúøýÆ‹|sQPG­·Áj¼^Êô‰“b¥çt:r tú§žÎä$m5oâGVü2ÄF˜•žb®©˜ß)Q<)$´TÅ;¨¤ÄØ›U5  )2tB™@ÅÂûé‹ËŽ×l·`kV#O{½xo'Û(z‚»Ž ôÉ7¥„@ÂÇ^i»*ÛW]›tt~Kv˜psߺ ³HòqðÝ+ôë·…ôß·ËJè_¬«vÍé»8ëÑ`Øè¡Ì3Ùy=7:éJÉc7{e)v<˜O’¹sJs«-âEÎæo{ÚE¼|*8ÅS(ÔùHÚÊÇUª\‘Q"rZÚõ™êZSÁWD*6²Õ2¨¨ŒTÆ×…W òµÓX0BLï@À¡ÐŸ¾ €~¿Ì}à ~(‰ˆ’4í«÷j˜ƒ1¥í¼Q5žÞ}OO¾ÉÃÛ#‰å{Ã(³WœWeË–ëóãwhÖûâ*8윇&•ç\R®[wç²E”æ®’Äš†µ±IÖ÷Û+®ÃUq†‘v¯ð%TµÞ§Ýÿñw$S… ) 9PVGAM/data/wffc.nc.rda0000644000176000001440000001022712136651170013765 0ustar ripleyusersBZh91AY&SY õ±; ÆÿÿþDUUÿÿÿÿð?ÿÿð@@@`ü9ïž4ó¾|õñÝJTªª«P¾6Éšml:t@ì ±$¶Úƒ'Lj™I‰M4j`MCE3ÒG©¦€4Ì €D©=íFÿUR~”‘´FFŒLÐ=Q‚`&˜&˜†$"ÈÚƒA 4ЀJID ¥=4hõ  h!”õ?F¦hBj~Šzšzz&(̦† h@šš’MM  &)´Ó#ShF“ÔÐhÓ!ú öÿÏ[b&°ÓklÑA’֚͊"¨iÕƒjÚÓ¢ ÐPQN‹[l[ETº4[m¢=¯² ‘°h.ëÑ÷]mº—˜ÔÊ’*»ä|%õàO?/SBFòF EMMPTfk ¸n‰éy9ø}máÏ Þb´C1RHA‰~ŽxÇ–Y'9Õ,Ó¢Ѫ²$TB¨@‚"U©,#Røå]ÃÏ·îì4ØOl0Š!0„AäöÖ­þçO÷ò|¾&µLô¶ Ј€YbîÞàí·Ø~[lŠÍQZª Ð†ætÔí]&øgÉ?>¸dË4"gH³ –Âfd#†bú>$ZdẄ!éµrÓ‘ð;‹/‰TY¥èAé?.[Áörpâ@/|ƒFIò/6\´Ñ©V|~ëð^ÀƒÃDa;*͆=”¢d§ë–n ògU5¯,aårÇ7{°NPCZªÒ9@ Ç‰Ž¨qf8÷˜Áwa#æ¼Zyõ;Ì`ÅCpîÉп`UÇ~õñS‹ÙU 6k$‡Ú°-¼"»–plÕ‹F­‚ö_ñW…Ù9WÂtêÏÏPbÒߎ<|]ëŒz"Ñî¯[v2tf{¶ó]9jtf¶y,å亻üýÝ;ùùzµF÷ÑgY໌º:.æ-ÊðoÂæ^­ŸNm;Æì`<ö2ßf¼wëÝ6ôË]»¹õ'©xôèïÜš7@pïëìÑ·Ÿ¬sðwxcw&ø·ôïñW‹w`™äæêâ ¶óötΑìÑ¿FÂ!Ó—‡eâëÔ«É—%zxÎIkÕ} JU©øÂÿ]Œ–Â)Ä@E€U e]“'¯Û*”HO·(°ÿ<ôBA‚¾_M"(Ü|iBŠù`¢í´z4¢‘ˆ¤R? (ŠR"‘öáqÿ^CÏæîw Ð‡O°£¦:$ßR­#bR©«-:‘âRtUlÎhWhë6˜Ì¢hgQ4q±Í²^Ø‘‘1S&jîÅÞAÆ-<¥sŒÐ78«9b§2äÎi‘!;T jìÒg-ÒÖj¦¥Êœ‹ƒÖ%ª¯8¬g„â¯Z¬[Ö ëjEÎ3‚ÅHLT¢p¶)!kzÞg3¦w@‘‰Ìï%TС'LÎq[bœâñníªzgJóK.ó)a“8š2(é‘Bq.X«Åv’0ì%Læ¨fëXƒ;ïX*VáYx fH+Gw@\dÀ&›MnYyµýå³_§Ò¯×“ñPìù¤ˆ2B£Ë@z½4Bwëj³¶/—]ŠCi[ãBxV—[bnë«|yÊ5Ƀ !X’ ‰šœ0kØ¢¡ÌKÕ!‹ŒX:a56€‘„!B ’&--¤ £š™PV@)õ$åd •Jzd¸Fp©‚M"J{Œj½Ñ¤ÂJz…Ð’ælž"˜u±…D¤Ð© €*‘yÓX…2‹‚¸“`GBÍ‹‚Àši°t)d”piSa1ï+ åJ–¥ë®–ŒÖê” €ŠÚt ÁS¦TúÔY)çLdû"@“'&TÐÊ äà€ j„ J›j‹ˆä–µŠ­tªQ:ÄÚaArKYU g5ÁG@$ŒT`/lWHº,9’Œ–Oè!?r* J€¤€ŠüyUœ ,*ŸšA~¨ï(” Q`Ð|ø+ ÞÏw¤k[[[F4hvI$’I$’I$’I$’I;s6Ý­m==0jãŽ8þ1¶Ûm±¶Ûm±¶Ûm±¶Ûm±¶Û8À8ÝÝí¶Üxñ®¥­kjE"f[bm¶ 6Ûbm¶ 6Ûbm¶ 6Ûbm¶ 6Ûb(ˆ•ÝÞûïË—-kZÐXãŽ9E"fff) ¶Ø€Ûmˆ ¶Ø€Ûmˆ ¶Ø€Ûnb‘333‰™™˜¤e¦Xa†óçË,€–8ãŽQH™™™ŠDÌÌÌP6Ûbm¶ 6Ûbm¶ 6ÛbmÖµ­r¤S°Ã 3çÇ,²µ­kZáwv’I$’I$’I$’I$’IVµ­k[Å)y™›Z÷½ÀZÖµ­k…ÝÚI$’I$’I$’I$’I*Öµ­k[Ò"ó36µï{€ kZÖµ­p»»I$’I$’I ’D’I$œÙ³fÍzE/Z× Z÷½ÀÖµ­k…Ý’I$’I$’I$’I$’OëZÚÖ½ïp¾Y$’I$’I$’I$’I$ðD(ˆ€=XA; (þWÑ„êëégµÈb9ìy¼ë¬*LˆGV}ÑèÁêvß(_]b]*XšW¨*a^Us¢< óð焈ixg \¶…î¢ðó®-·ˆã×4¢ŠH7V‚¨íEЉ‹Å"†„åE:;Zäé¡TtºëœGCÔR!¤ *ÄFÀ…fz”KÊ‚€N©ä«¨ƒJ´kB¸…iˆ¨$X'@«l€STÚs U KAIHPSUQCTÕ5@TEV´R@EÄT”4ÅQUËITU±ˆÆÌÈÕ$M,1LTMA5TTQÕSMEM$Q>BÞ(ðlA£E‹5¶Õª¬Z¦Õ´b-[FƒZ4í¨‰RO‹…dYIó æƒàž´þG‚w§‚VŸòP‘H§B\O2dS/§2vÒô§:SÚKÉ0¦´À—DÀ õ¡ÆI&—S­&'…Àšï$’iç l ÉRV˜úR ÞOE<äîÜø?Õ¬ÒEU XÅPŒap ò ìM‰Ä‰±5‰ˆƒÈ”HA±2%©ÞI&ƒ2LNúx“kKRk}. 
ñ'àždò åo&¤ÞH$Ä©µ9Ó•<ɵ:Szq§•‰±( ’%“$¤É4ÚƒçO¬›SRs%'Zq&äÌOÁëÇ­xùm¶¢¨+m¢ŽZˆ¹U°sbÜáØÁG6‰IŸR¸ ÜœÆÀæ9ŽªÓ ZI…C eª—™6%Ò“jzA6&´ëH&jkÍ6‰Øž´÷4ÙÑF¢Ð±kZÖ,U&¤ãOÑ6'©4M©©Ñÿ‹¹"œ(HPúØ€VGAM/data/wffc.indiv.rda0000644000176000001440000000503612136651170014500 0ustar ripleyusersBZh91AY&SY9 zŽ ¯ÿÿÿÿþVUUÿÿÿÿõÿÿÿþEHFOS\~ÿÝÿÿÿæf™`Ý÷wÑÇÏ£9ñ¼á»r^Íììw;¬àì4I£F„È™O'©¤Ú§²b2‘é<“#OSÞªi2jz™2šž¡µªx£OS§¦‰ˆõÐi„Q£ÉH4‚É £CMŠy @¦§¨dШ4444zƒ@ €Ð@¨4TöšR!OSÔ6Ði¦†‡¨ÐÒzSÔ‚‰54ò›Êšzši£FÔ Põ1 CM1M4 šFš4@dÈÐÄ! h4Ðb&F€i $ŠQ¨ÓM4Ñêm@C! 4ÐôšhF€ÐQê“@h¡ÐdÉ ¨dìyÉ(± B"‚’HLLÊ)ÆvvaÝÇwNîã»E6›[U%c7k“ǵ]쪥bc\B.ýŒ—a<©/‚ÿé’ÁǶòfz(¬AfÓŽQ*§‚JÂÕg㉳).r¤Tٰ표–\›6 þ\‡rìâìë\tôyån»n-?CÚì6\ϳü÷ú¯^/ Ö ÎHmÏ} ¡pko×ppÍõÁ¥eÓÅ^ÿçпÀñ$jÿ;4"€“‹ßÃÁƒ| ݵßÇc é6,+xîc+<¤C–¬Þ&o{m˜˜ê$Úo|£)¹³¦"|—4ååñÀ„m0ÌpíX@ã=æ6k!‘yÓË€cŠÒ6Ùû–øÎ¼†¬Ÿ<¥H0é8¡jUù* ÀäLq óå/’\¦äמHä}~Jµ’ zEªPBB5L BB ¸:\¸N„tW:ˆà’ Ü$ÈuÓÿ8 vØ)þ« ‘.Â@«“ë+(]èO¡ ÷ºæUªççSoÝës°8k:Ì+2³ZÆk7œŒRÐ[o-#`BrÉê‰ ¤§ÀÞÍ¡¦÷­kC7™ðiºçO¸V>%ÐRÇMÔ„ä ­'ÉdeZ÷ª$,÷VHy’¥:I8F¸ ZÉ,L6:%¹ñÜ °.D][’j¥Í¶£fJÈÅ>]ûÍÄžVS„o„„’@$ÀÀ  É! Ų#²=L§èÊ'HÄ2@~L€Ð}ì!ÎY庒ùÌ@zYì°$U$±Ðçóú ÌtCD@Tµè%»‰ ÅqÇ{W bµ­kZÖº ƒ¡×]tÌÌÏãœç9Îsœä?$DDDDD?¤$DDDDD7Afffffb"""""fffff]ÍY·ßëú½Ùi(ä9œY“½êóSwëü‹’!Åĸ i\Ù;$Ì d…ˆTaA„…FA…aPU„PET…UEQETaXabQPDeAUUD!`ˆR- JH) ,‘ DEE%‘(ÄÖ4XŒ²d•™™™ØEŒ˜9Œn4xÜÜd2Î ´$€%½ƒ†e…ÎÃÞ.îÈ€âÂÛ˜°à(†™‚éÁÈË&Úõ¸å?jÄ„j)AYJ%‰HPA½ |ˆPÀÂÛª"§«sÊ!’ ­ÔùuD-=îóxB‚!ëꄚCË9ŠÇvjøT‹).Z$3tÝ •àžHŒµJVíJÙ-lPZÅl/ ""†ÞW1†šðÒõÌMGÛyÂ#hޘŲÈ媃IÌËȃnƒ ŸM.´ÇÀ‘¸D°ˆ“$°Ä¤;ÕÖfØ#>ÈR¼!‚p[+°±D&&—&FY—5ÄY„PY(((Ája5ÌKXá%¸:z Hu Q.3.[¦õC‰¨¬â½€†¶± BÔ2µ-ùûåpx Y^Á!šÍ\¾!ê½ßmöûþ›}µ¥¾êh³² ÛBáñ^äMºáSnedwQ½5ÞÒÚloX™Sm¦Ãµe3ÈùQb*­¹O\ø£c» .µb­›ÏM@]Ôà;J:Û[ñXÊÐÀê)©*M2kï?ÙÑ2È=“:Œ+uN¦ ŠZ’Bé&¤§à í…­¹÷dɘÐAN%¶l„!Tdmb¦)5¶…TzÞénŠQ,–2ÌwYtI Š,‰¡AõIt&PŽõµÙsi„-C0k*‚7RÁ ~\ÓVn†Ò1eØiÂ5^™ˆlVGá’¡$wD«JŸ:%ö:ºî2FÌó±:­ eŸ§\x–á DTXh'u¥15%‰H&/#OKBìé|qµ]^z,¹K™¦m¨Ó^š<—*š”Ú÷2Ü™Ò=¯Ræ/oN,¥f7³soc|DrOf|úu\0ÜŠ¸’û5wÜú“ ¶åíõ—[Ȼε¨'•ú˜¢Ïâ–u¸(±³M%™X¶5óoѵ2Q*LãBkˆ­èMsJ× Ôq°Úp‘¦¯¶ù÷ÉÐáq4*_nUâái’gº9­„f±J4JUÁEí{Úõ´ÐlMiRQ®nmK©XÐJÅ2+ ¡H¥))i‹ZA·ZVdÀPKQÛ0,ÄÃP€"°nNÑ#\ñ‘]³zæ±ZÙÍ r¥É·ý§’ª(Éž»gu,y²þ¾e¶õÍ•Vê¸gä(bÜPDÊkK±±›â¹É.?‰” E³”•æµ`¬lªœ?Š ë<±0-‘ Ì©¦ÆµJ²§–„ç*©•TÆ8’U[Šf&¥¤ŽDʆ_im±1ºV ¶¦BN½5iz8iŽ÷-.`“}¡•Já’¤‚¥RÐvXó1DPl”Ž)òÃ%¨Ÿ 5ª{Þ³0Â×<ÝÁ­†ð…åë_Ÿȱv™482ýúŠþ¶qŸ(âEk¨û6oì•eT»´¿! *Z‚²îC}â ât^«Hõb™«n\-†º5‹eðªíÁA…¶D<ã Cøé/''¹ß^>g'"óäú©k¥¶×Ý%$ÎÑ8òÐ'…tع혡xð˜iöБ–À|Ãwaon0 0™îŠJ ×Å/ 8‡µ1£¾þum™±‘êoÜܺ2ž{Œl|%z:}hS¦ÆBw|Y˜¯ Fo¾žõÇË9WÊüõ.nÝ š†*„ ´ñïßà`=A@@ â€áɶJM·b•àjzÑSÓ =L2j O%$b iˆÂ4À“Õ%*CL‚ª£5TC@FF„bŽûfËn™¶–+V+)š›²lPn2²³+aB°V‹&1h¶/­[•­r˜µY¶¥V¶¨$³bÑhÕ llj£F4F±¬Z¢ÛH+lVÁ±XÐ[)Š-F´lZ-bµIE¨fØ´m¤Äl[(V™[É ÙT"جj fˆ"¢A$ÊHm“b“Æ(Š•QªecI„mE£$QŠR4†2DLjcbÌ©"@‰¢1`ȉ°cX%"L¡d´I-£m”´ÌÅb¨‹b™ˆÔHÊMR$H"ÙMcPf`R²LRÍQHÔ°–‰3 S!J±&Ѳh(V(Á0Q’ …#’“a,L ²I1š $™"KHØÐA¬j“ ©LSVÙ¢! ѲFÉTÁ( Ù E›10F6*#V352f DkÆ4I)­F£$&ÆŒZ M bÍŠ I ‘J HÔc0¤Œ†@‹Š´VƒTš±lRX±j(Æ-j6ÐF5ƒS*Ñ b‚¶%”S)”ÂTXÑRRTʈÆBL(E$¥ÒM“›  µÐŠQ¨ÍQƒQEd©6‰ Ö6ÄkcX¬lTh(*PiIIYE#FSa…’I¨¤´‰¦E‹dh©“b ¦Lmc[dÆÖ"ÑQ¨‹Tk¶1bÆÐ!,V ¨«±XÑm£hˆÚš*#cZ+`ÂX¤Ò 52-&š“ˆ‚Åb2HГ&–1&„"¥e™‚ %4H˜ Q ‰1“B*(6 !K"!*D’ME°Y2h PÊ$4›S1EŒh! 
EDd4&`-)2kQDÂÆÆÑ¨ÅIbE”# „‚4ƒAMIF&”Y”™M™±b¬Òe­&(ر3iL`Æ2#lL2H“M-!¶*-&f¤“UL¬m%„‚L$”F˜Q°i,M!™02FƒFdb³$’ Fl i$P`ˆÔF†`SSLY‘¢Ô$ Ô‰ŠMXµQ£Ʊ!lI‚¤bJh¡‘"‰2FC&É™F‘ ‚EEˆÑ±D1`˜„Ä!4TjaDL¤²Q¤`…H›$‘#È@chÌÉ3(’,EÌŒ”¥DARReF™± 22A`£M¨Ñ¨ÄjM ¦R ",Ìdl¡¢†"4¨„‰˜Íˆ¦QhÉ QH‘( "#h Ö,dÄÈ Š4Äbˆ„Í&Å53)™¡„2Õ2),ËFeA¡"(™±¨H±Вɪ1lLhÆÅˆ(ÄdÃ2h‹$k*$Œ¦STU%cYXƒF# ±„FcL¢”˜ÌJDš&ƒJT’†&E*0b”e0LÉŠ%$EjM$0mƒLÅŒš"›DŒ22j)¦f…F1H¤„„ªc4Ê‹c’)´d¢E˜Hb&ƒ)F dŒÀ¥"E,²TŲ‚k F LÄX¨1h‰–™ŒVMFˆÚ¥d´%ŠMh1¢ˆØ£Pj"ɱJ)”lXDÅDÌdИ*@ÀlQ‘hƒhÉ2-’¢J2I‰ š¤Äi &ª#`ÆÆÚ‹XÐÆ (#F2‰™d6˜I”HËn-Ÿ]£Öª½ög|:ÐöÍtfÕ þ±•ô’£Ðó¤,Ñ”+ªºfZÀ­"+­ývªa¬ÑˆÃ‘ ž:#…T f[¸˜}TͶíY80+N—9r^'‘Ü8“W­‘â®ó ØÔMÂÚâd4’ÉxK‹8Z›ËP½fºé·x¦H‹³•hÅFÖZ´n³cŠ(++{PªÛíæ-k1ˆ;B¬QÀ®<±Ne@ûN±X¹É7u ·e®­H¹W}½Êƒec˜hõ¬«;¹];±’•’{g{tq`l·Z XÆËG»\dü^öݦüØüwð(Eo¥](†:ª¶»Ý”Œ­¹rnÆË ÷VkÏàç<Ï9ÀHà ’m¬Úe›Vf¤‹Ê‹lÊØÊÌ(SFeS1$±Ui­m²E´Û)‰ Úª‰¥‘F“VÚ4SK"›QQµbCZ6¬D•lf¬SlÔ2±YX‰µa­-LlV¥ VÕcZ̈Àb‹VŠ [³ ÕDE“QEAF¶-²i‘¥J1J3mµ¦²Ð± •3d´©’”X”JQd•hcR±µmŠÛaTV+66RÑlXm-«Y5¨¶Û&­¨£c&Ù6Øe$•‹H[Qf­µ[bÙ–-©3!Ve0Ä ³&Jf… ¶ÕÚÄkµT [FÑ£FÖlj±[Rlc62™¢+ 1ÁÕDœÎ 9\KÍÞõw®ç.Ósw{ÞóŠçL\8nƒß5ôTÚMDÁ´Å0£mXSm´UE†elµ´hË+mIj6Ö+h£±cFÖÊMYˆš²•¶(-B´µfS6(*¶ÒPiPhV‘¤T¡A¡šÅZ6ÕEµc¢Á‹iÚÓmX(¦2,­›0I Ž$E4oœàçç8[gN.½kÔöõyîøö Ž»º7|ZÖ^÷µ«ãm||÷Ï+Ü‘[QwÈ÷#»ƒ1íq=Ï'bxO,°8š ¯$å!J7JpÏ%ôO\9ÔîëÌNãâ¸KåÞ¸£'··žï{Þ^yë¹Û¹Û¸Ä™ãl„Ci&ÃI$Ø||'‡¢-ÜbÝ];¹tç7uõ7ŒÉé¤Q”‚´Rª5ï­9Øt•_/ã}üOÉÞÈÅ$8¹?xùVÇàæYà +æ²å2^t>ªGI[%‘¯ -Že F—¹ªñw$S… l†àVGAM/data/venice90.rda0000644000176000001440000002003412136651170014060 0ustar ripleyusersý7zXZi"Þ6!ÏXÌà‡“ß])TW"änRÊŸ’Ùš–š…áMÝ6r?ëi×þ XÞc#!#ÁX–ºÖLçWr‹]Äæ:¼ÿ×é ÷-í¼uË1µ\þKÍùÃaŸ€\Û­$’ˆí;ÛÛ05|Ðdç'úЦuRüwI9ì<ñ 5ú3™«ßã²4zæHƒy݂܂Šb)|bM"“¢À‰±úÿ‰Ôu2ƒ™Ixbxúm[Þ–4p_^açžé¾Š© U¯†×±Þ'Ö¿)+Á3H?c@SÕn¸òÁe·c¨NJ}ø¦_Òk¤Äb?wk÷Ïn’}N²èu|]J¯¾À\³p=N×þ’ïz‰²^@ภXâsPkÝ«¾€p€Æ¸ÐüÑŽaµbf$Î^»t,ú)ŸŒ°@[|ޙ巫i{ñÊ? CJ?»î!‰énÁ@õ¾å¢ë¾ÿã&›á‚:ÞîegÖú˜óF7‘Nréøá”¨†4<¥RlâoÍë'_‡€`Æ´#•2šòéýŸå?ÔÁ¦ê…ÜÃI¬ê¬¹ €áPë3Þ¬6Ù4UJ;{Êm“cé=N/ vëÂ/úÓ#%Xglë׊ ×tÎÉÖ ÝHÛ¡éá(_ÂÃ^¯vÌ#0ÑP-´våòÝ;óã‚Ò^ß: Ÿã—ÔlJŒOS†}Ttö+5øòm¥ê<‚'À÷›ë½A]Ã4ˆÙ:7gãYzCm@(š.çä´óMI‹Ãù¿‚¤̧ç Oºi<Œcûô[’ÆÔµ³øE`ÃÊòÚ2~³ænÝæ¨ƒS9¯äÜ·lUZÚø#Þ¶N¿”Tû¢g¸Ú´2Î+>RÜ•vþÖÔº§•z°•¡w"¶Œ¥“m<†ø[q¾ý–®£Ž^*n ¡ÜðB…'ŠîiBrV¥[ëcÙö¬5âÍ)÷ähG=q²ýL:tÒpgy+qõ´>mˆeÛPuAzžº@û ú,‹³—’² ¨;Ú õnœÉƒ^赋'ðÆÏ›„2$m¡©Y=7èRÄhá2IœZ÷P‚ëºj/ù~üg’8ÉÛFd&+©Ñhbûå]™â¤ Ñ«-@C^mjÕäÈ!¿bFjQÿ„S0óýÎE‹éCÁä*¶t»m+øf^â‰Þ¢_œþ5ùŸËd)¢2«Že(4›©|5N,yRqnÊX9E}-s,Ý{hê¾NÚ‰æNa# öÛ›oÁBÜú.¬ÇfÐ¥/[X¨6š象†xËÃÕÁÐÆ\1…CƒŸîxø×Ë6 ¥ê‹Û{‡6ðpuœÏõ5‘ákã¼_]b’’‘±dÂ#µó6ú@¿¥a¸¯†VѪ{åb¾å8"Mú¶ƒ ^»+’º¾WÐÛ¥ ñ×ã©€«Û;'ˆQv Ç&Ôû:üˆe¹H:RÓÔ¦¿m~Ç*KßmOD!¸pŽJ?Æ]jµ6ªQÚÇ«]Pn¶~ÈåP4¸ ¼ÂrŸÁ‚òÓ7³(4ªuDK?wtõy8ºŸ<—sE½»hgâî8ÇL²,Å©·(ùÆ é;Ÿ4^qH)ÿ&oðD˜@îi²MÍ¿/7Jš„8L¬´ûsyq~ç÷0^¢P éò¥âfOxucË«(àšâ^p½iŸ-<äèH;Glå>Dj^B€xï2ı§&éÄ“cz-ºµ%E€0{FwG)ì¯%ƶ,âÉ!ÛÿH@`;”Ë äSTF4ÞØÕ˜ ã²á¥“JýÖM/0¢üÀš6‹G.áÛß ²3KÕyÏÜ~BµÃr„ØY¾ÜÆúÇÑ­ºÏº,Dý8û×NˆëW¶0§;h+ÍßP¥¥¢…·G…,]A:¹RLi mqË«•)!Cƒ);"}[ ,Âa3Æ~3^«l^oïZ„å-^ö^ a¤nõÝliH€úEBdzò²°+º+àx­Ž(–ñªM“%Þçº{I¦iñÀÍÔåKCmûiÇàýqÆÄ»¯ìŒˆHaVHiññ[„Àšî«tŽLIÈá[ m\G¬ö§ËÛ4ÿ˜òZ!eú~zíxºÝWÖV®£˜„O´ZÁjŸ3IèØv¯‡Š}?^ЇôuS‹ŠO‚-Á½…ûKZe4íhqÞIçê²m¾ª›b¸r»ä:ÅÜXÓ{2÷SnÅI¢ßp?šÔ"Xq-°(º.›u’£÷Ï]\`ãÇ a*)eòô>Ð4×±mã̰·-“¸cHæ¨!)ñqr”P‡þMZËVòî}‘–­°ªÝ’ ‚:úõí¼#ÌUå?ô¡\BUqÜŠiT('¨;˜|&ÌÇ,±x‹úžÒΨ™Bjn?’™Òçäš¡PxŽêìoŽYk[í°hõ>BÖrAÖ.ÔîûèåÒxÕdÌ‹èÀuKôëKá.`G úž’/bÙ@6é5Ò. 
škt°$sn}Å*“8÷Ûay¨¨Fžƒ†‡ò0³S Jͤ‰£?iÈíáÙ[ÚXëQGzÛní±ê0ùRLo >8:¾\ d£.‚óD³£p{x\<úfHrP½Q‰ìR7eŸïu–S/ i~™cº÷i\……G™yÌù*äƒLïJ9Á /‹‡î!øÓÁ¾Ú×uv¨¢`+’•°NBº´˜U“Ô¶¢D2<8‡&YZú ë­Un*¿Ü¦¾ô„¶ïÀL(Ò;‘I¥¨¤uÿ„ò¶Oå÷ãö:eÕÊ׬³d-B㸼nV4%fßpúv•!6› —F“®½`dKo¡W­B‡é‚Ñ?#MqWqÒ[ºyãCøR{ö™5ígèˆüàe{,^—:Œ›Mm?S·Nµ‚ç‰q‹Y±EX{>ù–ôwoe9ö¤Ì›[Ë2Þ‚ð+èO¨ ûW³ Ðjož-.ÿ¿ 3ľÔ=‹;yG4BŸÀ•@¡·˜çN 6ùœD„˜ž)ΚøþD˜ „Þ—464À@î„$²>íÝùû3#›‘†\aH›´1ë ’Ô….Û(ÎS37L£OZõr•ÿŽ»ó~hRž}¸Ç[!ˆV$‹ d3°™ b(€žEnü#l+0?ôXô¯ˆg÷#²Ž£ØõeN5›ˆÆ¹¡Uü"µñ+ÞH¬ÓÚ°° 1‰Y MŠSÛK/\b#Ðsê s£Ä%¼2õ«àIµwéYNö·—†JWÂhBËÓ]ªÕÿÇPÐTæÓŸ/W^Pº£q:,gÕ†4ï9 1)Â(·ãØn0ÂßϽ EMåØ>\‚~Æœ(‹ .ä:¡½ò‡_éö3èãúþe}M»‚eAç² ¿¸pf/"G˜5 k–½^è¨aÓ‹èdCÕ¤,&ZþÙ÷Dò4Ç&ªÅ[Â]ñ)¨Bþ@Z†­6LþâÿºmÞ*•XvAå8nYà:à¿ê.“ Ö“ÌZ9üä´ƒ®ý„ aÉ0ñç4cÁLspùXr‹"ø¤þRC‚é"¡Ê5H†­©µ4n¶ŒäšZˆ&u –âág· ±íÒIEéÌ$§÷þ\W–_³ó¶;Áü6£ÑÆ ëRÔg…7dÿ»Âã—Ä\½vP†b’¬ÆD.sz³‰¥ I+½lüÉ »øª®äÚ,ø!N¹º—‚0<[÷~ï3ùâ_H´×pU26"Ò\[¬Ëô¶[i•ApØ—Hdý6Œ¦K¾ïLl»-u“÷$P\„?éÚœ>Ã^ üúMª ÷iûl–Î!pñÄ)-¾6Bn`H˜Ÿ;76’A‚ÄßÏi 7± o3n -Òöî)eLökÿä wÙNÆ 5­ô-tlù~”–ÉfÚ:ðv°ãÛ­?Y@‘7‹‰‡ºÄkˆù‹³XÝ ÆÓ F»¢ŸG4©×Œ †d+ú>ÌpÄÌ$hU‰ µ§³Ìo  çÑýÐ _0µYí[ôû™a¹7g âY à§$©úæô¦ÙAÆ4 w¶‡i4àHqMôæz‘ûò… $â/ÿQY¯ÉwÏDòØf’ÆòAM¾l®t‡Ÿ+ÒP]š§›¿Ö*ïÇ6~ð“¦"ZÖôól¸jÌŽÕGVטq”ÛP›èò"TJÒùKÆ*.>`’vªrR R¨¬œÌ)ÈŠmgu$éi„3á&–ÏÂq³©Rn§{›ïœ×ìr‚¯†0Jé¯,‰Óõöç'Y•ÊM䎈Š…ïd´¢äF‰MÙ9ҟθliî7Ù|ÓÛQÿŸlÅw80x.Q:ƒÞŸÄm"Y¼odN«>t€AVnÿ¯fÈü®mˆ¸=Ò :%b &ô+j;Vžwœà°?»t‡ZÊ Å•­²˜âˤ,ç³Gêù`޾¨7ƒ§¸l“R%Y’º5B+ûÀ¿¯QPÿw ìÓûYÝxÐgC°È¾Ÿ¥-8l3B}?²Jª¦ídðÑSˆƒ=HtEŽd†µk?bW9õ¹Ëk£Ò]Qã`]E‡HRDÑêï\Ž .‡£à(´JõO>¤§=K¤Âï M‘Û Ðû(&ºÝÈ"^~zÁ¨;FÈ-/}ˆ½6& ·ÙÉø@£éMÏ”‚ßµøc]=èçÇuxkÛn± ¯©îlFËë‚Âvr$º'‘J°6Ç­(˜›šd§Ã=Sçˆæãgþ¶ÎÓÔp·*T… á”t´É|¤l6¸˜ÛÏsëS»R‰¹–ý²ª“éM¡En›‡0Uk5 ›J'HšJê7̼û(0¤OÕ·ü‡£ßsQ&½éø$‰ƒhÇ—íø±Öf}{§½uz䀗P|”g“35sµ×2xÆD*‰í À­‘faµ€ŠúÆ#þïo7=™`ƒUýDrë*D•$ZΠ{ÏØÿÝ(õ˜ìÕÔÓ€¿5¼ êñd/¨uÑ“M€Š™Äî0Ýýä}¬j¦ç§ü5ɨi¢ÔâùÌø ¶bû:R£-Ñ'¦»î#.þj#~Ö‘¿ý=ú§ 8*ÄYº>¹i–Û(ü¼%åþÑBO=66WQ’DõÜudµA!èÈ?¹o˜t·´Å:`NÉÞ¥ùç"º·0›n·÷Ò®Ï÷Žo•ÃÛÁÿéìaÙà†8Öhîû‹C¤`˜~t0¨LüÎ<Ö/’t‰Ö5¹Åvœ29AäXhp˜®5–©©m}ÛR1ZU˜ÃZ=‡íaÿ[­ù„z ZðâÆºÔ{¥ñ\6òâk9ãê4ÒkKfq«p‹ü:Ìr ÝðÉB€w¯¯h?ÿOÏÚ þ ¶Ú}yqi#•ž½h;é¼?àMIË©#>ÀÞ6Ó?ŒË…@f.&\kO7dõ:¼i« ,<“6üý·8é<õ‰5~‚¥8lð¯U¸Ã©ï8y{¨NüÍ]¯ w? 
ÚWýEÓb®r³Ò0À±WYJ€›Ê‹¡¤óº¢¤êKZyâ€}ÍûC¡<YPd¨§Šìdؤº+jnbz&ß@‡ q}ÇäŒBú^ÕØrNä"懴)3jœ÷$%üñ8›Š[yÓ ñŸ‚eåO»r>µ"¨‘`KÕˆ¾pêæÖ [ÉÌ9¢Ä^jÞ“¸M²ûqÃUÄOÄ <˜ÞL¡º!Ø•JéhP¹ ·.)EL¬H‘òbºuÇk¬Þ±÷­e{`e°tzïñ³çÛ¸ƒ$¢¾¬ÿo„@Þå%Y­Ù¦Ïåß’V™Š dUÂî¬Úf‚®qqA7v_¯SìÈŸ-½i1FoEØÀú›E=æ5Ûõ=]œ$ÍŒ Þý#'ëž«­™¾ÀݦÙJxªMãÝì6¿»pyéî¾Óbùî¸f=ê:Ï Ñ´ÇÝÐ'þ® ¨»?b]Ë%Îß 76f‚@‚ ›ÑJˆNem„e£8è{ì®#neÍ%€RJ »ó‰1=v0®€‘°iÂ7öÃ51š?(yVh£ ³æ ^àS1Õˆ éGa ü¢NµËä‚(ý/¬Ç—inðÑ5 îÿ~ÝûJš ¬$> NìH5ÆÚ)ïåµ$µGÞ"æ^ xõ›TþGÌûÃnÐÕ"ýês·½÷¨Ž¡á§²T4ÏžkÀsü¡˜?`çPŒkdI+?äw?uP&^º?n믪WÁ UTzœý Û®B!(]*¶{ÖTlï•0~ ô½Yꥉ»rVhƹÜ6òl§ÌÈ›WÈeõ'†Ëûe´.s¡:\hùö½ïˆ¦2$=³­Ö)‘¿B÷]øaÖ@ùØÇ[ Ú'Ÿ­L7žö,¿%Ú²XAÕΗ6ó‚V…å’jÃ*ăB»^õcÕ€†¡AôÒaÂMIÞZ‚%!­ø9âRM(‹®÷Ÿ J$¦G¿Qýv²Û“ÅßñÛðÛÝÌ/Z‡{ϵnv)ï«\Ðá†mkVCAžl©U µÍF©fr˜0RÆ¢Ü4-÷6ç±¾U¿©+‹Ü~{ehxn“•[žú¶BG”rªf@’(‹œ<òenBJdۈĪòWoƒ¢£Â8ÒÆ1¼ä´¨$ÎølÖø«KBHH Eh2#-Æcd‘ÑG 4:zµ!öâÕ©ô¥šÒg «Ûù¦­B4¬v›NnKE3£cm3úƶi»GE0šøýü0„¢Ùh¸›–2Ë?U9Ëcó¤®qÜúÝu;—?ýÖsˆGÅRsí+-/£ÔŽŽÎÖID;€¶Õ YöíöÏšœÚëJ ƒ©2ÎÍ®JU$3 ÷HB<Ÿ·.²´Ÿ_Dª¾[Ê ×zݼät@KAñ= Ö¥4ïI¾47ôÝ£g(Ü£»Ý—,»*nP£C>ù­ˆTp8AÈŒ¤×Ÿt=vÃ’öôþú{Ð ³¨â¦ìêQ(M æ¡`¸ÁPõ¡ª»Àu’iš Á…MîßĶ?̲^§›e;Z1gçAZX»m¯Ù¤m …C˜{>°¤ð×JÅì¾¾£+ÜfšššÙ:ê_ª#Ô|{¬¼ÙÄÂ|&/Ô}‰ñþƒò;阯(i¼oâL%dß7pÚ6À/ðúl4¹¶±©]–Y MH}Uýr+Nà3½ª•=y&Wç··t!¶b²NæÐd.²o!EXkcûÙNdjï4_Gä–Ö*@e•ÕRˤyâ)_Nž\ãÛ\ žÓ•† x‹!º·Ë˜I5¤êZ°´ÿ†I¥¯WšìSbØâ»ˆÇm¨ö™öé]ŠšJ'|=¿rb3cØcõŽN.’Ò•^i€˜´6ë¤UÓòÊ0CÔ}5${ ^ÕIƒß=Ç•†Ñ÷‰þë>º©KÓ?Û¿Õ7îô?¼X2¾eäcRzám«Ìi#hòÁ)I!xÕØ`Z¾Gˆî½™¤FʶÜ%Ëx05ì»É)mÆÒü!¨ ·á“/xëNÏ¡0#lZ1âµø§¶7`O §KšGæøÑŽÔåQ£)³–w=µ{* £³÷?”çšÎ>0 ‹YZVGAM/data/venice.rda0000644000176000001440000000171312136651170013712 0ustar ripleyusersBZh91AY&SYØ’•Tÿÿ¹H@äDDÿÿgÝäÄDDETDDDDDDDDDDPxsºX@3h¦“Ôß©Iê iˆšådõS‰D €Ó@¡ ™T©š €@M4 ¦™24dÁˆ&Ô hhÐ@ ¤È2¡ò~œ²].e ‘kTŽªfjŠQ"R „RZDFÕ±$„M PÌH4H• J1*,ªTHUΪ–ȨÍTh•T›IE:dŠh!/?qC#—¡qăr#€ãµãŽ$˜³MMGmÉîù}ÿOàx<Ï Æò¹½©ëû81eÎëIÁ?V½›¸qä‚<úGëÛ¿>½Îù÷õ)„XC, „‘E€Øˆ ®Û©ç(CúK†ô©–rVU”¼3’–bf塚¤Ì©f%@Yq êe ëUke,JÈæ'\,Ñ#ãYÜq[¥`@«ÌÀ ¶Îki ŸWMôÿZúö>Þ—SŒìµUåˆA ˆ›Õ£òóíô´$’I$’I$’I$’I$’I®ç9Îsœé± „"ÑB`6A66;Ø Á±Ìë­øœŒªAÜð¿»ÏÚé‹éééŒøˆ ÀXI8H°Îªªª‘a6°ÅUUUBÀ’I ’Iˆˆˆ’I&"" êëñ¾‰?—$5°c ùS$3ëJÁ#¾µªÙnñ8Ù‘ÝnÙ\£ ):˜šë¶ç.a}˜}1¾ÜnY@îK¤PN•´ÈHñœ'BQÊ/ ‰„Ó‘Âáp‚ ŽU˲âIȈ£•qVAPDQfC*„éËWaL#”h AEȹEP\äå2Š "¢å3•2¨¢Š  ˆ‰—. 
0n£´w4ÇŽ†$ØØS›ù[m°b  “ØÀƒ½SÀ¥r~P0¿^Ä’<,Ël/K¿G6ÖÙ²¯©¨÷snÙkfFj¬àáBˆQF”÷²Cô¤ŒÎ‚©Â¢*Ù=õñLb&ì…U!ÝP,̺¥R×P"ln =ábC9t‡¶µõâ÷ZZ„‹âÎŬЭ&c±|k9Í[Y¦vK¶lšº–…ô³Õmn¬“úZذ8.‹è`ÚH&S.šV)L(# E(©›LAE¼3T%]Ìg²át]bj]b8t¨[,è—g\\},Í+kj às‡ü]ÉáBCbJTVGAM/data/ucberk.txt.gz0000644000176000001440000000017512136651170014405 0ustar ripleyusers‹%1ƒP C÷œÂ'¨ˆó¿>ÛÒÞé30€Po_K,qž•Ø}9Nl¾ì?ÉÜ·õÄuãu£=¨NÀÓ^àÐ5 ”Ûì-§Â9 ÒµÓ&°ðH°©ÛGS×úb*'‹}•ᩆ&-ö¼F˜VGAM/data/toxop.rda0000644000176000001440000000074112136651170013612 0ustar ripleyusers‹mSÏOA~´K-E`E@Š 1†ph&@ˆÑ !!¤rèuSKܤv›îœöèà²ü¼yôOðèуgâÑ#GNÔ÷œošÌ„I¾ùò}óæÍÛ™·ÕÚR©V"¢åe–ÉÉñÔG 0÷'ÑǨM”¿/+ŒaÆœŸ•I†ê+>ŸþVœÕ{¦>_C™ëçô¡â£ŸŠ/ÍüÙ7s¿Ö)öíÂo[y¾"¿«øä ø9Ο¿ƒxäÏ~ÁÿŽóÿ ï_ÂP÷⢮'à1K“å{àgàI+~UñË+èàeËjémðx <û>ÿªÌ*^„¿½½ ]ýaúº^]ÿzŠ/¥!†TÑGÒdô¿Íä±ûÆF‘!Í&8ȸ‹}r§# —q1Ê>”ÛgL00äÔ)ÆCÆ4CšóCõ1ÃcÌ¢õN(Î/Z÷m¯÷tÙò=Ó×ï€]ËŸ´´Ÿq½¾ +NŸ³øU«žW`ýÞ+à­ÔÜoïŠ>Áüé[ÁûFLêâ˜ÅN¶öƒfSÅqø¹Q¨‡É§H‡¶£8Lƒ†•w }¨èÜòÀ9)¯Ûí^ÚÔ›A¬ Ðfém•ýïgu#øñzFíÀVGAM/data/ruge.rda0000644000176000001440000000037712136651170013410 0ustar ripleyusers‹ r‰0âŠàb```b`‘ ‚… H02°0p‚8E¥é© ÌÂ`5 |@ÌïàÓÀ™ ºü„nÈ€Ò t%T]¡„N„ªwƒŠ[3@h( QöPi˜ýèòP¾ª9"PZJË@i(­„f¿”Ö€ÒZPZf#Jذæ%æ¦ 똠‚lÉù¥y%Å0^^inRjšF΢ür=dÍüP FCÃÆ0†1L` Sà Æ0‡1,` K(ƒÉÐÎ2„³Œà,c8ËÎ2E÷frNb1Ì¥0A®”Ä’D½´" '€¼ ÔšMVGAM/data/pneumo.rda0000644000176000001440000000041212136651170013737 0ustar ripleyusers‹ r‰0âŠàb```b`‘ ‚… H02°0pi¶‚¼ÔÒÜ|faó1‡ƒ¸18è1€ƒi„¶†Ò ´3”v‡ªó„ðáæD@Õ{BiG¨:G˜9P¾9”Ö€Ò0c æÀ80ZJ‹@i(-ƒ&7ÕûPy¨ˆƒ”VBãÃÌéD =Ö¼ÄÜÔb ä\¨ ojEA~qiQª^Ifn*,œóò‹rs <–ÜÌœ˜LqjYjQ*ºÉÉ9‰Å0“a‚\)‰%‰ziE@KÑ”så—ë!;†*ÁhcÁÆ0† Œa c˜Áæ0†øÂkŽRáKVGAM/data/oxtemp.txt.gz0000644000176000001440000000040512136651170014442 0ustar ripleyusers‹-’;n0 Cw"G°õµŽ“!c€¢èÐÞ¾VÈéÁ2EÑŸ÷ó÷çõþzü½žßrú±{m9>TÒäÄÐåìaHë0¥>ú¢îPרï%=ë}ýjxýlh\_¿ñÙÁzbþ¦ß¾~9¤Ÿ.ìëF]UzúÔ¸vÌ× ¯ßæé!>¶¤Fg9MY§Ÿ9É|–è7æ³C6æ9óùÍw†Š{rž×z2¥'ŸyØß˜ ÷õPèÂ'¾Ô%û çŽ#=ïý\Ðç–úPñžÉ|É÷Í@&æda~œ+¬…sýJqe\;æ×ý/ߣ˜¯ø_ª1ç,ù•­×9VGAM/data/olym12.txt.gz0000644000176000001440000000163212136651170014254 0ustar ripleyusers‹]•Ëvâ8†÷zŠzß±—@§Ó™4}zB2‹Ù 6jŒÄ‘å0ðôó—ðg°àsU鯛ìÙŒmiëü…Z×í¨7Ý»ö´ñÎ^5¸;êw*%z³&èÝ:pÐ=e|Ó¤PÑro,S^S6£,§ºV9Ñ£×Þ66ÚÎ(m¨*UAô2ô½aÊ Ê*Ê3ª3U­ÝöÏ~”" !X­*DÒþÈöBi*Ò‚ŠB͈¾z¶[)¾å…ª‰ž úçF(ü¢oƒmÙ ,¨„•&4úใȪ¢{©àñŸØ +Dp^+¾¨óxèt•Œ*EZЮ³o|Ò|X‰Ê¼ÿR¼«!F^Ò ¾¬ÿ@‹¼©yB¯×ƒ:é¨PF5¥Èe¬£ –1÷Bä)=ŒÚYðÙÆ4Ö¨asúªMÌYÞ∼ ïìß&£sÇ«Õc¦= /¶×³jî->ÞšÉÅ„žQʱåîΊ$$’ Eà6ظ¨,Ô PÁG6`@kö{Æ÷<'ûÑèAÏP‰©F™ØÌ•(*Z;ËF1ü }×þ8ÆCs‡ËpÇ(œë$V7eB{í NiööÀ­‹AU‰S¡[墇Pdا·+gBA*ÐÆ©X÷Ú¡ås£‚V݇«P ”Z‡9ZÛÎö‡ÏfO×^®qRftk­k;G{þ£§‘úK+îçNEªðXÜX7G›úCÁ„¶8¹—aÊdb;|øŒ3ô †«võÛ…ËnŽÇfÊc?µòèíÂÉp½º5×Àí`ý÷…ææb!iõ»=¢“¬(,àÇörös°«nN¶<âk¢ÇáÄ÷ wÕímìúýý§2ê:¢"wø?I2×aVGAM/data/mmt.rda0000644000176000001440000001015512136651170013236 0ustar ripleyusersBZh91AY&SYZs€#3ÿÿü5$ÿÿÿþ` @0@ `_ yäAlÕPD‚ Â@ êö2¥(‚€¥ÃÈ Ø` ° §è"ª{M‚˜¤y©’O*’ Ié˜ 2`˜&e*ªi€˜ ˆÀ”ýJª¦L˜L&`˜Læ˜0˜M0™4Ð0…(”õÄòOM!=Ljl‰¦&šzÒ¥? 
éöaC#’Ñ’¦J f %"U%9A–x¸8s¿Çï“g‡]/ËñücåJ!½Y‰gަJÆ©d¿,mèÕèfúÔd^M‰: ½&(TbCÞ™ŠB9ëùºb0õâx‡Kˆö^ã®=Q=ÅÐÞÉ8ày&."UhÓ—ï½f3f÷ihªXÂ7Q·B—4……çR1x‹k#Ug466°8]¤ÆÃ³¯ 3ËþhD¶yÁ’©¹¤Ìû'òàÄ9>wÃkF^=Ž ù.[ð•ÉiøûS«"GéˆÌôxoëBkVéK°|Ò:ý~J¦32>´Ì¤‚(Ôòi<§"BµBAøº,òã±ö|ƒx\T&øÏ]m ¢Hÿt¾¸O,¹ÔP ûEtt:êîï¦ñÂ(VNê-y8¦S.#&á GßX·Çjµí‚¿†ìlºÓE8ÖaŸ=ÿh H_¦Ä[(súñA@çζ¢ ºTvÞÛö² ¿tQpúqXä˜á–XfEIY‘–[MÛœK:Ðìݶ¢[n’9ÚlÆf´I…E+PÒ¢´QU"*ÍS†¤¨bu(¤êX›&%YÍ­›µ¶œ‘Ñ8Ç6kmXœcI¶˜"ç4ÍØs6[¶Y«RÍ*IJæ,Ö¦j(‘ZÒ…#2Ö¤iJIa©a$IÍEPUJTCe±LÍ,«‰ÉUY©I…š‘µAk.i%$¤Eªa–†J6ÜZMIØÃ76-:ÉÃ,ÈŒ,+5@ÔGf¦³³,šZ¡F¥¬•«LÕI¡d¨Y’¦ªÈÄЊ¤àÐØÅµ[b3œË",Jº¥´”°´"ÔF´YM®H‰k–›m¦Š²äiˆqRCK9QM5µ·5ºÑ -iÃ1™UÕ3 :)[ D­hhe(„YF!fˆh¤(ZUUj-RÍCQEdš*&(IUEÓH’M å*dfe’#ku”6àŒÉ-m¶ÍÎã¶Íˆš[3NØÆVš[C+mgn™­M´¶ˆ™†ˆj‹2%".”ª(jHh b[)iZZÕsƒvæÒÛZÖiŽ"Î[m4SMf³Z¶²R$Òv3S¶'Y‘79²ÖËLiÛšF¶5¬-Ķe²4ÚÚá£+M¸–­±!ˆ–’µ–%‚¥ˆ†+1h†fHiµ‘–jhVª«L‰E’hJ‰-VqM ™†›T©N©Ô­(V’­%’W E„É9mÁ\.ö÷YY›Go“ϧ\|¹çMŒÅÍ@tDðT%‚·Å ™Ð~G-N‡%‚²üUUð35Àzùi6Ð0A%É3ÅñÄ#Ñ¡/7áaËR[§Ó#J|ëE‰±Õ3×Tšàã¾7ç3PïèÈ8ã©A»ôZ¬¶ úhè¶-¶ä,/s @†@@ßTdHT~衺:dhZ$B•È¡„6#/éúýÿ_¹ç÷R_éû"¬GË÷Ÿ?:K8»µÇï>´Â$$™dýìÚÍÒM ðÏŒvņdÓ˜n€õRcL÷²™O5£¸Hª7V¥ô"¨…MoYD>¨§hSy„ááÞæ$ÎÈKœÃÁ{Ò­œØ˜RQ*F§êìB<ûZ·_=Ó-ZÒ/Qpã˜ôb7uw^DUÈé§Ú Xõjˆ/'#.©øû0¹¨o¯ktÇ¥Ú¸¾§ a¹˜˜¢î°“ )6çc¬!ÓLu$±µ ¬KËy{33Š ·Ðç^²Xã]øì»¦#Kõ‘$o³ jå~}·l ¢åÁ!lDDD[(\Ù ö-ðó¸ÿß Ôa“Íç†äoGÊbøÉw¨¶ý¼>¸ÁnIǃÀìÅÎþuß}“Dk&}ýSœoe=–rÆ:Ûòdu>y;9â$A¡^³º§¼š-¿ƒùÚ„’‘ö¨gáÙxbÆb³‘w©þø{|(N¾Ÿ‚$ 8Ò®Àh0غvTný!#5„½Á…ìÎËõ@¼2椮ÌiµGÁ©& Ž`Éi˜vuW'¼_ûéYgg ‚ O ;ˆaÙO]“Ž¥‡UŠ :í˳àªnU(¸‹ƒ·Ã»J¢³Á¢ ñŽÙ äØÇsV{¦¥ ÈoÇ‘èc)Ýf¸ŒÇ™'NŽ˜F{r©}™k:Wü¨hq+–UgU§YqYTeeÖhf{lÇñöóúý>žÿËÚ~ŠÆóõ6&c6³Tƒ„C$C/Ìãé0í,B>¤ÉK $6GäfUüPä]²m ml€6ÞÁ‚&š[ßFYQ./|ºÔB̧ž™ég£M"?*öJÅSKxu»%Ê áR†lÔOºdÎÌ:›—.éO›žæ²`—ŠìËßb¾ÖN+=3ò­MGÈùñ¿œUú§*OE]컉ž SQÊò¸w^9ëf+¾q9Ì’¢Qú¥±‚5÷ON0"¥ÚÑŒD(ìÚ!»Ï³Êçܰ_>£%£÷\izrÆS*…JäÍb. ~“øW=Ô¨·ÛiÙ>ý!F‰ihi —Â'GQÑ$Q$IÜÉÑÔQGt " ’wG%$TœS¤J.’R’\G:s¤B.‚9)ÉÐw)s‡!'œŠ$Üç‚EÎrqJ ˆª"*#½¯ÜÛŸ–õàïïhhWã®lµÔ÷ò51? .Peƒòb4 Gâ'ÆÊLÀÁØûÄŒx£ðÃ;!€Ë䥒”41$̦G•¢ÂMEEZ¾ýÅ­²Ì9¥Ä’`¡ÓuïYæãŽÙ’nTðCAWM¤Ü«¤¾Ó1yÕ¡[`ÖΟ6™À]ûVQ5 ÍwŒäâzë4çð|GáþÛm²¢»€д”½P¡’(äÐРd™4†^ž>?G!®šú£óäëm­YÕÛéSû$¥¨E<CZ}~Qóe;âŽÕÔïþ™Þ±œMªSé‘ÁΟ^û¾šœ”àÈ› óH•…”ÓÜš¿;¼¤•]î¡uç­ÎRøÃl¦hÑ(÷/MË”R˜ÄÊ6÷¥\!\áocl2“añë.¾”ïÕÑãNH=oüŒ¯2ÁòŠ•™…óuNiÕ™˜‡N¸ëÍIp]ÒF2hŒƒ©¿f„ŒAÙ:ÑooÝ{©O>fõEÙpÒfgJXÝfªfÌš¼5¤Á{ 'I)[´Éɯµ8¿*CmVOLÞ£µ§*/,ƒB¥$T¥4 %Rrtq9tq€äéÀ@„œà%Η8r ÔNs ‰À(œ)'!Ô8BC’‰ÜS¤Nq)Ò”âtåà É(qÑÉ't' ŽR—Â"Ž \Žðïò¡ævñÆæJ M¡ÀR} ¤;n›2#èÍ%Ò]%âÌŽ".”¤¥eÈ6A÷YÖ`ÓÁ6ðððÄã› ÜöÒ–ðøÑÏP´f& xè·HY.º=Çc§ÎäæIÍ瓌æÕÝc“ ~þ.äŠp¡ ´æ VGAM/data/marital.nz.rda0000644000176000001440000002436012136651170014523 0ustar ripleyusersý7zXZi"Þ6!ÏXÌáÚÎ(³])TW"änRÊŸ’Ùš–š…áMÝ6r?ëjk{ZÚ:<=m@¡@ÍOŽ3øêÝAS×uš¤ZÎv˜î8>ç1ú¸^!ïÙ'„Ç«”µ5ä4Î2¼ eôëœL[c|ªV;EPÉ ÷ÔsLtÒ} wj÷6vӇΡ‘ŠY¿$º£IWм¾ ~˜—†ì«£°äLÜZ¾GU½—¶žq‘KS½ï2¾Ü}OI)[K¥•  zp`ð(.ѬÇìÓgrN„§–0b8|ä‹ ±Ç_Åk>ˆ°(ïK þ#YŠ¢â JÐxMÐodQ–Ž–¢^P™ßC|ÆNÙ[å[·®ã¼Ž’ ƒÛÏJ©÷X&¢:{KYKR¥Û³Ì—,IbEÎÒ›‚Ñu_¹¬>IûRØÅ]¦–zàEÓŽÒùtþQ˜=iŸHÿ'Iû » ïü©AÉë&‰\6l‡è'¯P8M58ë9>¦«Î¹Â "åÅâ–Û ò™ Ó¤gx§dòƉ ÅN±ã³¶¡_TWä忯QQ.³'aÒeï£oLò-‰ÆCƒ9P3˜“[(€Ý}Ãu¤,[a÷Û¥•|êœÀš‡5Jš²<\`@¸ë¥…µxØ_†ò˜îb«ÙD°àNbaf/`€÷ rïEbE®ÂQÝqWÃF“ ®…åJ´þÙ$¥ 2ÄôþÏ$CŠJOž¨ŠAÝföI*Fð!¦ìš@áÉõyNtÿüº!Â…Zã­6úºƒüId\fg7ž©Œq¸rLÒæJþý”©Ûöó¤6>T(;6ˆuïF2ýñhŠCîÂìgœ®ù½áˉÇäÆg:¥GI+,d'Î2µ4¾1½ïoû }G*AKaÀ&ò8Dº¾˜%%ýàã—ã²±‹Thjoðüf ¾—‡X ˆ­\!ŠCƒôÖbá®)º«+©›)¾aé ô9g=Øè9UIBÐ×­;ž¶ÙÌÊbçû&Œ/Ap²¤úhu±K`}“ªjq‹ÚT|qêÒ ¥R  8Ú$"ôJ•ëW<ßû©ð¯(ÁZŸ¬¿c.·Œ‚´Ö*¬[V(Kê !É`l0’çϧÞRÒ!yÉŸ;@’\„2Î6ç+qÛQGG{ü׸>.°Šo1aÁÝúî?;‡Ý¼ÂùOÑìTàºsW^Ö7/šB¿?fH<û•ÈôB~A–ê4¶×LV¹îŠ' @dÛ9ºÈÉ5ë3F¶ËY(Ÿ&p¹8™µÓeDÊnà6ñ½™×-…+m‘ XüÂD‹Î…9𛚓Oe æ©z¦/šk·&¨l¢arëìÄz~¥Eÿ²¶€>¦)F] Ÿú‹‰š±#Ýt‘ŒFÊòÝË)òĬžm46Ùîüqªˆ‚YwáõièK’ýÁCÓý SÌ´b&?õ¢´-ƒº»éeÿ˜Ý´¸Æ]‚ Mž¢°6e¸Ê< ê½FÓ°ù6¦o“¹Ô­éó¾AL"h[÷!à—c%|˜üþÁ¼~×A¢22jé@ë®râ€Û)ŠJ|KEü|8»rÑu·Ý|’¦^}ÿd¨)ä(¢MC 
PýøŠº,M̢ʓ50f’êäo¶^›ë­W$ë>#ÓoLOÉ˰æèiÛí<Ñí9Éâ#t5̨ª‡Ôñãª"EmútÁU÷á[Ïd!ÆUQGx¡Ó˜h]¥®Á†±*iDb4•ùÿw­òÁþÀ¾•×Ç”UŠSþRé¶{:Ö­Zgˆ«KÑ%µZ¾r«çž-Fd±q¹µyú„·ªN˜=Û=£î¤1hƒ+‰ÇäŠß4¥¾dÜí…˜^|³iÖàÈoçw×.H0Q·¨0çÉu¯~/ 3pïhOiþw  ¸ sÚ _#æZo1ï Õú«lr,"6Žþ®É™¦¨X¦:+"õ¬UT •†‰²£Ðޅͤ†üxáxŒ·†‡·Åw¾°:\—„ Î3Aý¯×Z䔀å}Ôt²üë‡ezƾrŠ>¾‡³Ã.¯³a˜êâ×ÓN<æ1±kºK@÷‰¼;W?«óòê á‹›ûŒ”OŠl·beG‚L†þC„œE±±Ÿ÷Ëuœ¼¹.¦åŠÆ†­×™DÚÚ%3WË1ö¡7°° ‘Y®Núæpf%¢^-dªŽÑ¼_'p1ßdÞïÅ¥Ùut]×Qq'=;f ™õØâ:Ø!CÝxäª!5æ9_\ÿa^‘žífùF€ô#IÄ‘ql †P´TjæKfNß_l«ždõ%W>dðæÅ"ñõçw žÏù=”uà8.R(gô ŠXDrjÔÀ“úsY‹m aDm¶Å’§ˆä½1®ï ;r¤âB€p"( /u—.çµåš«¬ÜÑ!݇œþèüFË7Ä ”c-6ؼ£)¦8@Ï3=Þá9,Ü"C¡ŠË>onv§Dæ1¨‘ $á¬{tÛýÓ ½‘Wb¯ŒwµÄB°©áb&ðuÚØžÛ‹#/(¦2‡,‰é-Z‚Çз \Lq2wô Qøœx¹…9¯éLb7É&Á•+ž ±ŽÈqƒ¢MNÏ`x×îKo…§1Õº¼HV^cE Ûìu9¸1+öö€;´9Q6 AƼöã:K‘Jƒ›ÃE–‹ F¯¢`צ_1j5á7+¦°”·V܃0o«píc2æÉ<ÉûÝc‚ÝÌs_OÚ•ÏÞÓA1’¸ºb’}8dbyÊ Ž2 ¥>ïëÝs㺓ÃI M_º^! +Ÿò—ÕøQ;ŠÍ™± Ûl°0hµvkÃXõÊC_ë2sÕïoÔœÇݪß4,¶_º_¯å%§´¡b6 ¶sÑMb¬ý› ,鼦ªpÚ¦’6Ž ‹ôJE"úT@¨n$KPæ?iuèÇü¥¸^‹qGtóFw¡pëhÅéÕK· ž4ÙuíÈ®‘Ó!ß´Ñ«<űßí)[æðªU]ò4ñ?Za:p&Ôž¼U§­(3Ç«R¨-e˜èb—æpÒïØêถoçü1,:˜ÇÙgжèË-Ï‘5ißÈù¼#C+0|=A…ïé\›>hç×!j8pp×)+ç]{œ@X¡£Òñ‘ãe¨¹ JV´›žçO9«÷ŠTŽÍ+ê†áÞn› SÀäâIK–1î™ã Tâzê¶8Uà ¦õ»(ô½<—ÛCQ¸EÍ"£ñÜ]‰ÐÆÙqpUƒiúq³äÄG.–ÿ­ k—L4}ØØàŸ#ï$Þîeùƒ¬7hDŠ÷}«ó1÷M‘²kÈl€¹£˜šà!ŒZˆÐ…ð!—„“ùŽÄŸÅæ·ÙÅbkØ`@²ˆÑ*ŒÄI¡NoGÑ'#ÚUpðÒ.œ›gsçAò¦¢ó7[Hypøf»ÀË©(ŸC+ŒœkñXÄ,Õ¢ ŸàÀðýAéŽÕýv¨ ß[Û¯·NKÚƒš’£å=^ ×Vú17z>ÃŽ–3Ù_…3©œ!ŸÄþÀˆl[Û©žçn×®c—ˆ=ô¸ÉóUW?Òõ—µU: ¤2ÞÀÛYN¤!RGõTï/Ù›¤(VÕ&OŠK+…Ħ ³«Ô¼”W²' ÚXQ¸Ééy¡+ Én3'Ƚ–ê„ݶ†(C&‰£Àž÷BˆEŒ…þ¼–å¡ rh µÚjiíbê¶ÙÆ—¶Î_«¼‰þ½HžÙÇ™øi½o`ïp„êšàù,²(yå©)á“Ò}=½:Î+}Ÿ³ÂN”Fóm–·ë(½ñ40Ùü˜3ü"}Ž ë"U½»,ÎBÎ+Š_X(¡þÜÁÍN¥vø ^åìúºÙI¹ÖêG<.Ut+Ò—b©(–Šåí›d÷l§Žà§üËC®dzO.‰¶*ç=ûVÞ0O[jU”/`¤H ¹«úK6z(ý“¡ª{!ÿùN`3{“Z¶Ø7ÙóÀ>3•²®¶¯uë†ÙÐ_ü"5’ÇÊWÆoð2 Nd-Ó_ ®mjú?žŽùßΡP=O{TѸE‡…‰¿XÿÛˆL.‚ëPhÝ+x[›ô±L¾“Ddz—ÖcÍÌ5~q-%ÍÜdÖ:úý ©F].½y›µÆô‡vþ]³~eY56çHûeí¨î‡©;wj@<ÔPfý%1º3Ÿ OĨV«ïuÑá·‘æqÔÍõ$Ö)–„ÍCÞP9‹`+ó2dRê¯ÔìËAµLLÎt}‹ö£$0`‚üü¡ô‘¾ËŠÜØ0omÔ!¼šOfFÚÑ-Fá^ß¡ÝYò\3êEMÃ.ë&¸;PfÕÒpÍÀ¿ ñžäH eÏ·)CÇO&»E·ffeûzÕ‘ßJQg(íšo>„¼îÜû¬ŠD méu3”ôÓ%âÿU…=ájšìûîvpëËs(Ã^ls­ðLµØ­÷šX³3o…KF¨ÿ…špߟnµ˜ѰXù=Äñ ¬ßg4–Öü=ï’Wʼ0…?PjÞØu ´o`–ËVÒ9¼2iùO4Çqv ›œ\€€“¢d‹kyвðä êE “!d•‚B»11Ø%W•ˆ kû’‡¤­*¡¨¡ƒ³¶ÿ‡z]à-»%ó…Mpý ׊t=[Zî|–ä™éTt±ç­ë¶…%åäMÙ¼Pƒg°U.úY±ææ{®g6૞0 drXÛWœüo«VsçÙUफ़qÒf æÁàbrƒíôf¢:¶1œü¼i~uˆ.U&Û`ƒä³Ü¡|ŽCßÞp` !ë—z¹fjM-V“+Jò§á·ßµ0ÇAœ8β̡ÿ8ýUÝUurænÁCäpn˜°ÎÀ)­ÎRU{ï‡_\gQò¾O1½në£H.}N/ˆü¿±yN ; ™9qå7ò!n" ùÌ¡›·Æu-†P÷|TO÷€ã‹Û˜çÿb„ž¨jTÄ+õr}XLjà£ËHß ,ýºüγ•XE6Üùë>vWó ÓBþ¥Ë¯ÒfÒ)ŸÉwö݇PíI«y%T1²+Ñ;žË$FtÃzʰ™§ Ÿž]<мþ­ç<3SK#Ú®‚ÞZþÁ»²g)–'°Û/€ZSWÜ:ßá÷¥ª}¬^󌮜U÷h·Kš ›Yé8¼[Á¨Z>ÜsC$I˜µŒ?Ê£àO,l3éó0¹‰ö‡©üWhÖâžçÌùfÅyׄšgÔxb1ç-ê‰æ˜Ê·¬Ü·¥-4GöL†Å°ã¥™Ï•cœ¹ÂÆ>r÷˜ óY|P„ܺ@3±G emÙ›.ºGžê·I†Ž0A×íÿwk€Ý塨³l8}ýnsÑÍ¡ý‚©ƒe±"‡û+CnùFÿ pLªè²$N€5 SÏ–3mÕÑLÒ»‡±'7¹I$0±ÅŽÍ½P‚~¶Ž.pIÏÒ31O\n˜Ñ€m5c–¼é ÑЗñ©ËB´äæ€5ßî ™êÅRf« `àñ±ÖÇ„a¨¹6ÎÜɱš'¼éÑôq bñ? 
[ß“iÞÒBŸIQ"7’FYgLD“½aFEŸÿì×ûj‡ÀdV VZS÷B/o}¡$³¶Ýe+IšuþíåÎEÊ•yÔÔêªüŸ/f‰Ð8çRô+ÌXˆõ¿Ïà¿<o+{PáˆO¬·5ÌÛ¯Jx(#ö•øÐÌÉ%OÈâõjòÅNLTLß<:H÷T—ÜY”µ¬ƒhOpÀž1ŽðÃMN¶K¡‰ÙaŸ606̃/®n±s}µSÞÏýÎý—©Ó9¯¿©‡”Gè@ü¹!º"å¥ÐÄú¥ÎG(E€8̃t6™ØÔ(¾D–3„#ÈͰÑÃQnÃÑ\/eOQ—¹Ïm4VȘ柽LýW|MÏÇ0b¤Ý;ù¤¡±Ýæ}á&: µ‘£9O½LaïRðQ¿VÖR#ÎéX§èNs¡z½]ÚGÿIÉG_uªÑÞ•=¯«òHô%ÁÄèüoˆë!ZÌ£\¨Íá›t 9´ˆVá&Î5^ËE“ûÐÀãªYx{nOê);«–äç.ïO€ Ša;N}}&È-{å›hKßÕ‰‚BmG½Š·‡N9S”CóõXYlJÍ À¸ ùK)b/ò®\¼AÞý\WiêKC¾:d9 l[ùŠš]ï¨Á†¨1É(€fV‹ÿmv•TWí¤’Û÷9ô'gUÛuUäY­ÐÝÎ+ÄtB-²0þ\°?¨;õ¹€ÒæíI÷Ú3†“Y:¬œFÖ OáÎÓ&¾köÎ컑)ÈVLVo²{ÊâÕ+¡.•1œí㺻Ðlf·µåžÆàÏHQÌd²‚ÆwºôŒx³P¯£ß)µùÆ“ma’ÌnµJ¶÷FE§A`Y¶¶8†ÉD³ú=Ùm©~tcé”~ˆD•"ÐH1Ì;dd6{óð§FøúÓê®>í÷{¡ ÇÍɆÂ^²C’Pùp$gÕú/ÂRÈd7æ½F”TæQë!º2ýQñ‡‹îQ0•Þ²Üd.99tÜøoùm/}3QjMÀÄØ-f¬ôê»&‘úmÑ¢Ô²þíø¬óùË£o‹ ‚ú©µºLÐ'鑉S­*_Ȧ<9Iìî®2 .ê²møš}-:àTJ›ƒ_Ñ]5œG÷.i˜êW€Y746JÜýPf=jØ;ìÞ±Å3Ù^ª4 М; åíx‘2vXø5ã0ÄRyžc9µ»å™„—îbYñ29ÜÚY×SY±ëÝ".«LQ¾› cbB#Šàì‹ Ê­ŠÐ^D|‚­¯5+:nÝôOÓ\# ¥piUÂ{Ä«,†T[=©®ï¯ÿ$,Ö×ÛÝêfDb$–ë–¥lµ¿ äAŸ]oô¸dA ¥9í~!‰’ *‚´A¬ß£÷©#+ÃbQ“ÓÂØÁÐxì8ÒÏ•±¦dÖµ’öÀ ƒj;2¡«Ð¨§*%Ÿñ׿”ý™ò‘1‹1ØôBξ<(K\Kœ7Ç û• úDeäCl¤~V{fÉ<§_ö“_Æ,ŠR†,%Æû"%ÞPÿ‹§Wþ³Šà^K“àœZü9¦-?O9\i–4I¢( ?ãäÈöúD0ô éµs¢¤gÒP»œ/>ÿïC$oëÄšµ³ùü4ÖxAâA™{ÐÁ3MA#ÀŒMÀe¸26©E3Á-…-§!g%$Ü“Hœ½Rü–"ò]x¥h{<¤^¤rö{ïª,cÚö‡—œÌÓ‡4 JÏÖ±ÌÚë«öñQG¸³åçšg-Uayá5Ãù_x†7iþ¥’˜åi$Î`ˆ0„Y땳‹Ò0Wl®z2QÚ¤“›Y棻|øï‡ët+ª6¤¾(‹Áp¯¨ÖŒÆæÃf @Þ®IŸ4QŽÖ÷Tsî] pßè XbÁmœA 5àÂÖà¶¢hö¢a!v¡|²€êDtZb“tçMhV!–ÎOˆ¥¡-¯mç ›2ªÉ=–éç²²ÏyKhàE±<-÷P¼+ƒH.`«™©18åû>üNá+÷z—Öø…÷9x”ÕŽ•Sm·íH®y'£kòÏ‹ûÿUy¸ â !8,U_ŽŽMÄÐKý&Ç$qª™ìÅ1> /î’.„Y.×HªpÍt­ehî{á©8¤˜”RrY@Û(fizrϽ&«p» 2õ…©1Pƒ“0ü<´¯Ú¯@~ÜÍŸ!(5På6|ÐHχÁ$e4ª¼—ÐZEÏ^Чn||jºŠ{ÝaŸ®W ã¶•h¡07/±¶¯»,Å«œƒ £s¤ÉÛ«™›÷ÓYÈ5&e.ŠÙ£-øX/Î[Íï‹`ƒÞƒÈ{uÙÙ/SIÿ£O,(Ámiü^2ßÂ_¬4¯oïÆ¨¹ÅJÆDE³¼£–Š•‚þ>»Æ®§àAŒÓ wÔ¦ûQÌXžèÛò%µÐoÿ?@ê°| ×ré/°ƒJñ\‹›#Fÿs‚£tÊ*Œ³ >ϯ–ÔC„ýã`—öeSØùr__[a%Z;å›ã¦Xr]8yŸí”°‘-´ß„‘3h½‚¬ƒÁÝâîóG)~Ïü+É£_o‹óeƒôó¶n{69çÁ€aÏçOø}u’m¶86ÛÒ| »Z•aØÜÎûGš½Îøñ_ÍÙ‘½6çËtUIKaì¾âøEö¾õVBýaò'õ£Î–“‹r«¦^aɱÅa•‘QªŽî–#ñwï©d˪šn†#»ö9 ±±øB ßaH ü %„‹]ÔÅ_]iû˜L&ùž? º ’˜v¹W®®X4 …lá9†+k ™¶B†JMú<*U“õ½á£¨NoZA™a–þ§xj=¸Î.¼? £M%3F !pò€'8 K~ý¥õ~âÏçùŽˆV«º{î‘«´Ž@„Þh.´>žt¶‰XgqNÃà ^©¸ðÈj{„/~Õ¢\*;ƒe±©ÆblƒAÃcÍGjËøSɘ/^7 žG¥`•EtRoØÐ\ôÒ¸D•Ñvrí£nîÀ Éö<ÄÊÆÙž ÆFgpü¹bŽþÉ–J)—©t{j"hûfª²=%8$½©¹Ã"zK Û Ñ„¹¼6gVf=vëAš9=î_ñpxOÌ«’†­utùö:ßnnÃ.ŒjBO-b%ÛŠ¯_#žŒô—”ú†6î~Qè¿OY€(#šE*ñàî70bè¤R DÑJš¥N/Œù+Ié¼thg¢<`Æ®<µáD–'áôL)‚©wÇ| ³<ðõÔÏìyÍlGž´ü”{º`,ÁPX"‚ªZ»ºº6[@­GK\Æò2qMÒðNᜇúDzÙ ‘ À— E™cÿ#$ø¹§DžÍ–ßÖ—ÃöD˜µŒ¸mî×¢tÔ¨… ô¥ óޏÃ8Q²Æ_{Fu÷çí¡ãR¦×uõ 9Œ ó%Ó?€wÞÄV a]Ô9„ Ù‘|8UñJWSζçøÐ.5©GǶA}cÉ~(†1ï1Âw¹¾€Ãÿ[Ú4¨R9 ôÏg=räÀ“D~K¢1^éŽðÅÂ#ß h‘ôhþ¯Ï%‘sšJ*èãŽÕµ½žoý3îýéðòCÊ&oîs9³‡’ “i ÀKP$‰¹«›­®ßö÷æûOŸÂžyø¨ÿبÇVÁ]ýd×2J ë#Q"‡ŒŸà,êoU«i¿¿5ó’ÌGÆÖ 1º*ÀQͤœ¦xÐ` `O¦eÂÐ!JUŒÉŠü2ºYaë ²è§|36Ôgl*{ L<}+Ù,ÈZò,=äo{‰˜þF蘬ܓè艗¾•bîBÙ#Òºì4z’ôo-zÊ#¦óÚ‰fŽ æ\Ô5Ð0lÅ8mHcd‘·M§™lx›“¢àT ©$QíÕÆÝÛê~—Œ_ÖÏ#ðí»_Ò¡À ã ÿ œ»Ÿb‚ÝóÔÁÇe ¦]âµ›0×ìзÇÉ¿$FÍ o7”oÂYø·.TÒab¨ù‚ˆE–/ I ÍÄt_ˆG‰þò>ëYeMRÅexªxQ­C_Ï"Ú—‚ê8•”%íG¶øÙØü N¯Ïü…î+ 8¡…Ž/‡g°gL§Sm"['Ê4¡_o–õo3‚¿ rÎèš)—GÈÌÞ_#ßr€“´îð‚!^ªÒ€Lc¸ˆŒ(.¢¯’–ý’zD_òe÷e @)—1'3t«XÿL÷$š ¥ß·óµžñ"„æç×+nG!ÞuOÐÜM®—×s¡æ“@YàüJØœŒw×>~T3¶ZòVãY±šŽ†Èê#x¯–²HñyqN+Mwmñ2) ëówwdUQ%î"=o|Ç©MÜ{ó[T¦“e&#g%Œ &·W£Ð¶•tý¾!;‡"ÀÍGÁ>@¸QS8ÏS+”kTrƒea´«¡·ª×¶4(VqõÑ¥éûiòÍœ3.Þ \_þd>@Ž1Ô*ÊW=_ žð¬uþt)@·o¢ƒèCvs{¥"ÿbÁPÒUÐó?¨kä,“óv¢Ó2$fM`‘¢µØûpÅ{›‡c!¸ Ú,²Rƒ¦57 ºêŸÕP»”wö#ƒhaƒà–Ð1Ÿ„ùô4愽MHfWÉÛ ï§jc0b1V.£ ÙHZ½•jZPȱ/©Æxð”~®-©3uŽÀ dú6º¼nP‰Bñ-ÀNÖÕ ;„Zü ¢É=T[È9V»[´^^no’ãÞx†Ú•¶‰ÒB‡wÇLMÔ`lþ鳩À¦¦°#X$Lö%ŠØîrÐûUm…•ÜdƒÏ@H”G=½ž¡ã8çþ! 
'JäÆËÁ‰Ö¹t„@4t+á:O ‘}w/qÚÎZO/‡à ƒ:¿ERáŽ_D¸È:aé\ªáœñì×eäÁþ¼a´×Mªc²ë×ËÒ§‰¦D—œ,öö¶ I‡4YÁ¡íí¸ápY5of¯º™<[¼¼ÎŠ;Vs€ˆ|¸Û=óe¬½¡{¸Í‚ SïŒ~^…ºº:X¶ ûAÍÖ¢Âп ?ž|±£“-†¸/2´¼Ý<°‘ÖçLøSüé_w'Vš#XüóóçÂ#Ì€c·ãäùÔMïnʤAç2ïR#„ºŒL_ã*úrMf.Ó;Ÿ!bŠÛÀdÉ,ÂG=Ô=‘8Ìoü€ðÒ;ÓŸ»‡–ki0E‚fÊ‚·GãÉv¦6MÜ{‰ØÃvüsìd EØ;ׯ>$M)µ¹ë™bÚkàlÚ,»Lð2zQGŸ“ UÅ€…hà§ exÕeŒÞŒ³u– ‚ ›¡¬/Ÿ¢ çìO¦‡Xá,„ò òf÷¥fÂö6fQ·ú4J]ûnnzbJnÄ¢sS¼7‹¿výC4rÙRŸ_çöšµÓN.™¦Nú“긅¿MØ9Ë~ȽÛäBE¬‰y·ú۸ݖÐl?û#‡Æ5šRbodÛƒÃrÿãëz,Üã[N&Z×a(¡H%(®Ð*ûWC ù†Z‚Td²N6k×¹p~u§èþWì8¯'c;›à×}“ ª¶ÔvWïÓEÎ×a»BÝ»&§6T6¿™S4ìíuØ–ØsUMShúâ׶߸O[¾Us9<-žÈȰ¡žnºìeÅT¦Øëa^ê‚×K‹Ë\¹)Vûœ—ðyÞpuÿ!Vé¥CiMZ ¯úe‘~i±\dn«)aÕøqÞ¤Ô€4Ù\9ÔÆítHïA‚z¦cp“‰HjŒoeÄnàó»Õ;3˜%0¦üÁÊmÃXXrÚÆÏ»Y¤£¥óæ)åN­Ã/Z*šXú°¬¾Å#Ž?n²@DUŒ2ÅóÄ‹3WI¼¿ëSÑšéês'?ÂWHz~gQ<3×ï1.ô8Á}JŠÛwå@^ãû*uÃ'J™sS¾U}"®Ý®^|†|Ó+‹›ï¼±ãuñuB±ÝDN[‘a½%.rbË·ÔÓ?½ÖrÄdfõIõJ1²½êÜ£XÙê„ý©>PG/ÝaÇR9ií¯°]±û Dó†œM¹Ž¤¯zìâÝI^¢ /æHh±Ÿ¼é`ªx8 ¤ÌÖÏ&+Ps4ÈK+‘ìæ¢ód€˜X­uwU,¾Pb%1ï÷ Z³óª´IÓ Ü*à»Sƒâ¥Úýe1\\X‘ Fæå¬'„zl#Æö,ÚÉyĹ¿ï¶¤É~SÉî‡FžÍL¸r €µúù}ÂÍOð“PÖ¾¸Z…WxžÌÌ£’"@ªO|psD¡AîÜ*fc.œÎ«°Üc_!â«ÿpÿöç{ÒÔ #?¸¹Mãá–•s?˜vÌ;bSXY'ýZ5 a“¼CEí\!¿¹kߘ1;(¹F—øpåt®?@ñâ·ä I(´eu*›Ú#àŒ‚@¸Gæø³`êÀqchþÞa©ytSE®£H¨O•ÿh{áäª/nq[ý™£Xi^ªEYÔ2÷„‹7Qä;?ç¿ÚxîêÀ“®µ¡·-¸EA0‡9›aƒ´Åb‡Í’B×ïݧe[2,¢7Eê€mýg>Ãn¹£(éyþ¸ØïúèKæÄ ò}(’0 ‹YZVGAM/data/lirat.txt.gz0000644000176000001440000000046112136651170014243 0ustar ripleyusers‹UQ»mÅ@ ëßœÀ°t?ß)2Âk’2ÈþEDžÎp¦ù%ãø¾ßøúýyÙ Pƒ½Œo@9œÀIfàg1Ì ÚRf’µcR6ž`¸Àöd™kÍá¨z”d殃 |þÍé’¡«Îòðó L+T˜Šö\a<—›÷Тèì†v3«h_Œ ´Õ`{ÊsŸ™ew •õ¨y·²î}f¦ÅÐqÏqÊμAt½bÑÍ;Æ .Ñãa8)»Öÿ™ÑÀu߀§¬tÌèæ:ˆ6ôÔL³‡®93º¹Vã( P·F óh½€æ´”)à À}BÖCVÕ-@ YÍÖÖ¢[0C`1ÖRF<ÄhŽñÏÕÝ­,YD3MÝýð± ½VGAM/data/leukemia.rda0000644000176000001440000000051012136651170014227 0ustar ripleyusers‹’MKÃ@@·I¨ ½Oâ¡4›¶©(6‚Š=xêui¶Ì$[õØŸì/¨nº3S(X»ïídv&Ùäõ~ÆÝ™Ë³˜]ÍÕäXzj1‡µ5¹|“i,³OõºJ8Ö£ž³Í^6Èð8ÞW†O°Žz†g¬Î^ƒÀA£Ï5ðBý‡U­>ÿôË,xý7¾+o÷Íßî³;zê²êý}XûÚ?p˜Èw™”ÚN6wMÔ}q¦ôD:/y–nƒõ"óD”Xƒ*/Ä\å…¶u3=©Ät‚ŽŠS‰[K%Ô²„Uë³±¿]äýß5º˜é¡peˆ2B£(”+ËydœÌ'’ÈÆdÙ„ŒzpêÁ©§Ü7oo7ÎׄýE¡ÏÁœ1[ÿóVGAM/data/hunua.txt.bz20000644000176000001440000000260512136651170014327 0ustar ripleyusersBZh91AY&SYxØÙœÙ€@à>ïßà` ¿‚‡Ðïç§¥S§wÏGÀÕ?DÊhšl‰è#LOPjxBjT%<¤ªiê`A‚hžªª0T`¦©%4õhÙžÛ ŽÝ³½¼fë®f–j:Žöífí–í6ê5 F­™Z’1[ ¶7)ºš¶!Q¤ÚŒ"1fZ’’),QƒDT†Ð¦ÆÁˆ“C&–¨(È$Ì„V"ØÖ,Se3m#cQƒlTF"¢dš6Òi– …,lj¬Ð±Œh£RŒal¡&¥1 A¨el’L˜Ñ¤,`ˆJ4b2F“*™E¤Ê J6Š,XÑbÒÂi³-‘‘m£ESÍ4˜¢„ˆÆ$@HÍ&HÔÅ13I±D‰²dØ IdTƒbL¤Ô(CP€QJY603(¨¬hÒTL›Õ*4ņ›dL3‰&ƈƈXˆ`ÊdB’Q¨ÐTÅ„ÈDd6#%ŒmFØ£U¶6 •X…£QF• ¥c&JdhÅh5£IŒm´b„ ØŒF‹cɵd™‰0I‹¢µ%’¬Z#`Ø1dJHÉ"X¶bª5iFVjÊÌP¦+2¶55cPhÆÁF±mH3I0Ж¶¢"YM"‚ÄXѨ¢ŠŠª5VJ´”ƒmclahT¥P¥R A¡‰š-²J"Œ`Æ"6ˆØ ,±j1ZTh‹Ò(³6eIŠ(ÄZ#E(ÔLf„1‚’e’-…fb¤ŠI6¬ÍˆÔCKY6¨¨#`*M ÙŒ ¶-EÅ( Q„¢Æ#E –ŠEAl‰I±£j"¬U#QZ) T(Ã# ŒJFЕ"bCbÑV‰PTdÕ‰1BÂX„*$„DIEI’b±$¨XÐkQX¶É„Äbb!(©²B`Ù˜…1€ÒmbÔ˜ÔRÄÔ¡™‹"HhÛ±b‹Q´b‹bÆRÚ¬T’ih,Y,%ŒZ5¡˜5¢¨HTF’J(Å" ¤-™5F¨Úˆ´T T•b¬cmEB%bL€,˜ ÖDi¬b“$“.;ºyðÿ(«·fž¸¦”Y‚ƒH&poH™uƒeJª9j;2W>Tݺ¶Ùyw8ÈO ÑŠl Ýd©tÓhÞÄ+RC]¤!nmXÛóh’ºBípxk¨ S%çh¡Mž¨ v4¨0Õ9µ•G…¬‹;ÛÈ¢Ëˈ“nä€Ö½®ð›øuÁЦ(iÁ@„ÓF +6w/2ü̼ިFÓÃUBr¶¢EöëÎÏ„pðŽs€âf°…)±#(VÓV›2³a-†R™šÓ¶F„I%R”&a-¢ÊZ¶0’¶I,š¬¦ÌVÍLÛ’5ÔÆi[Tm«Ñ3h“ÖÈX™X‹e4†2”Û1X£`–+31š²Ð«4ØÍ-¶‹Y˜¶™$™[m[–ªJ¢›M,ÚBа¶–™µdZ´’ÛA¤KlÚZ°¨²LcwxÞ—8ðç,¨BE#Å¡È\4¨ª¦Ø1p¸’A—]8¥ÕÎ''9Ìðv†²Âib–Ù™&+5Š4LÚ‘Zš‘)BšZQSÚÌÕRi1¶”†;!`9˜eͲϖœØæBoU¯yFa†’-VaŠdKP<&"" !28"y9èžg£4«¬¿£ÐFdGçK˜.§SK¥h*µ %Ê]ƒ$vË4³:&ìиŽKÄ…Á·«œv†T«!6SjY^WÛo×[Ƕ©‡Ò1Dì/'÷|Û yÔ\JEð®N¦±bŒ’Zå¿*?˜­ÉrB÷HÔ(øx¾ójâîH§ s VGAM/data/hspider.rda0000644000176000001440000000247712136651170014107 0ustar ripleyusers‹ÅXolSU뺬k7Ö–%.0ƒ™¸LÑe}l݆nïlÓ©0l¸-ÈÞÚ·¹ñÖv¯í&âô !¸H"~Pƒ"c‚ F%º/¢~0$YˆS¢€#N¦›¾®÷œõÝùX«$ܤýßýsþÝóN_Úôp›ho³ ‚`Òcß±/«EÿJ¬B–Ž™O…CÝ~E„ô…:uëŸú§,–:Ò›B™Úq¬Ñ{ìû!ÿkGß¹°-ù¸ÿó÷ªÀ‘ýø‡›Ê%ÈR?9¼¡UëßGß~¦Ò¿[kö„´Ýßìù}ÙfH¹üËdý4XZíÕ¡áQȾ{TöípÑϧ¦Á^ÌìU¿áze¬ìÚ¶VŸ~l“oþ©¦2gÔö‚íÛàÉ%¿]€Ì/O^;øçÒuýTÃÙÒø"ï÷— 
ê¥IyÆ®4å™Ñ+]ùâμóǾ–~Ý9Ö³mã’^rÑóÇÞ!ÓõÜHó®ñÕ=ü¼ô×!ïÓ¹C{’µcªßî|飗?;¶ …_]h¯#ž}È»p;îÝôÚâ,ô§¦Æ6\œóù“p.Î]9i+^ΘÏ?^/¸Ø½»7°s[ñRp(³^Z¹svžùë–å&”ip׎ž¹¾}ê¿gˆÕ-ËtÙÁ|p÷‡\ºøÀXÎâ¾~ÚïjÎh.ÊÇ<€³ï…O­ÏaÁÞYz(´ŸÅKœCÊË‘™Âä×ÉŽs‘E²ˆèêG»s$|éÝ®×)NÌ'ËåÙ={\Êsb<çë‰é'è~˜~Ì+úçpž“ËÏ{,¯'úÅÕ-ä®:þæ•WßAýóü9áÄûw@æ„'¦,ñ¸ÉÏå3„ÎñÈÖ)ïüýáÓÎÌ­û8rÑpm_{Pyûg§¯„!úq•[çìã:ظó¥Æùÿ;€uð0,bXÈÙ¯6ú!`œKÙ<ê+c˜oÂѳÃçâ¾á#ƒýä7?ž£óܼ٘o]0ù]áýIvðþ¥àGRƒêëó÷‰÷€÷|?½iœ¼ß¤§8Eÿ8}·a¤ôžp»8âs“Ç­‹Æ} y?Ÿx¾†!ÖA%Çñ>Köø1§oqöoõ€ù·߇®Þù}°j0)?æôã|Ù>¼|.¨•õ‘&݇` çöe¬ÔÏw 7¿‚¡—aëHÛǺ˜í ) Ó>y+2ÙŸOSñ81ØÇ°Î0Þö±8>Èx ìšå“ì­4èÇ¿TûUBÞ'…0¥+Q¢Íßë C²È­jéÞXÅEËÓ„fæâ•.*+&ÓªqlË!Ým6]´4AÎI¡Â‚—4¥Ä–„O{¢É̉tt‹×ÀÇiÞÄÖNîEåvá[’¶ñQ«ÃœÚ9ªq5­è9R¤¹Sm¦Á$JîX®¹atì“qJ¢*¦ 4·’rùy/3à¿3¸Œ‰ÜóØtŒË uT´CV!Y$ƒµvf¬ƒ"ix£ ÂL…¼R²ØÔ1Sw{ÚG %’àrÕ‰áJ>‹¹"œ(Hm–(VGAM/data/grain.us.txt.bz20000644000176000001440000000156312136651170014737 0ustar ripleyusersBZh91AY&SY(ž“­ÉÙ€@à+E– PÙë÷vZª xjyT4Œ ¦¦M0†P`„ ¦‰êdÄÁ)¢š4 šE)èš`’àçÒv„HÜMEçUK%GJÔh¸Ø_p¡•ªhL.Ö˜Zb`FÅ™ÁÎîðÜ­Â •p¡½ ˆB@ë»Îõyªûµ½¨ñÝoà.Py­åÞ+¼l9}ØÞ±—N ÞÓ´hmìj×CÉ[7;”#5¸âÙ/;•Þ„§sq›Ý§›±Ê“æžõAyîxå\¿’ÍÃWrS7„¡kj±5,hr,«Ã…“Bbó4šªt]%{GE»‰Ü‰WEéÆPÜî¯g5ó¹TB‹c¦ £TªêÝO#·¶Ö*ÕÞžržíê8{kMåe *ös&„ãr`ô  AN‡{`e׌s5ùRãY„°¶ù8B ¦oyРƒ$8˜pæ’f9“"âR X ² Â1ˆÀ¨µ¨XŒ†!°Ì²IX(Ùk¨*Š…j)¸‚³S °®¸ŠSRÅ•·0lÈâŠ[jZ•/fµ £®;B@ø¿Ÿ wé'ÆfLk³Æç 4TÛæ’RPXÍqm¼Xã”ï5ct”¼†[ ùžc²€†”²M3›Ç«VŠÎ—«× ÖB'Y9`B•²H[@*œÉYhÒCšï)'íê‹LUÔÖ÷ÀH ˜„ps&à{Ø­tÝËìÂBHh›¼CÊîmR˃[Ð]$‰IjŠLˆƒw¡ÀŽÑå‘åâg²»º»°íÍ~å8[G¨BE ɱß8ë´žÃî¯Us ¡e÷»Fåw$–‚ª"ñr¶ö2U-ñsÚ·t"Ó É_‰>ã!". „I’=ÎjÒÆÓ-£ôž /iã’˜¥&*½x‚ òð>íBm³LD-P% `Ša†Îp._¨FÖg•'O ¾Xa‚üÈDb÷áÕb{ׯõÕYt5hóÔJLV%u6“i£v®Ë¾ãÄŒµ ÞÒ~&$zS¶rf í1 BŽw›¬V¨f’7‚ Øè-´Ô Q´&pFoé H_Ü‘N$ '¤ë@VGAM/data/gew.txt.gz0000644000176000001440000000104012136651170013704 0ustar ripleyusers‹=T[j1 üï)|a½lù8¡„¥´iBo_½¼û³^{Æ#¤cü{}ù=Þ~~¾þù€ãóåýï«¿¿¿üzûxy÷U}õÑ×sôõmŒ‡uÄà 蟸'¬ÜgƒÅ '¡°¾kvEaŽAµ8§€Ä‚Ž Å|æÃÞù½7#m2p±Ñ Ì~ohn:{ƒ4»¤D"bš| ®C]qŸïØ í5+?3ÀfŸb[äMt97•5ÉÈvƒ[*šY2]¶­b“F8+Í^+KޤÐX A³¤<*-åÝGƒ=Àf—”—ãD²Ö^¹dWLÜÇŒ|a‡5Öã¹”Å~Ä‘¬œ¶†oš.9)û²M®çR`wË•pÙ…0aÕÎÙ†™÷êrD 7òrâpUlvºæ5¤ØÚühz´«,šqºUÖÝÂÒ­¥ ‰Ùžm{îå˜ÍN)”¼—ÞðÄW™”*h°Õ°-AÄÛ-R£àJáÚºÝ=ԽΕû˜ìµè61û(5»úçdÄ(Œ=% ;3¾$ό鼮ihãMny.ÒlŠÅòqé6ônÙÍ®^«#4¼õ^»+¦9I›mŸ¶Ë¦ò|–æ¾3¶½¼y±ß²-¾MŒþ3è>W®‰:1±cëwas–ö™0£‘°’K°¥ê¡û:‘jû(Ý¥g‚Á,û•¦ÏÁ ¢ÏÁJBi-(Í€ ú¼ òPpÚJ»CiO˜6°æ%æ¦`O@ ’óó’¡öŒÄ’äŒÔ(—³4&€jgQ~¹Ì4^P öÿÿÿ_èV&ç$ì„ r¥$–$ê¥õyÿ@,%é4…VGAM/data/enzyme.txt.gz0000644000176000001440000000015312136651167014443 0ustar ripleyusers‹UÍ1 €0 н§È Jš´iT׋ÖWØ'CÏ3f"k„øŸBiï›{Òžô›ˆz†ûS"#yI™…Î#Å> 9ÅÞ©èM"‘/D¦Üò~Œ÷å{åÞ¬Ñgð½ï´×C …^ZÌÒ“Ö±8›Ä{ç àïsóïDMö‹‡p¹‹PôV0zFû¨¶_Uëàæö\ÔOÇšðµ¯ߪ=W=ácóAøb;ƾövŸ¶%'*æ{ÛÙÙ{ãBxŸgÍñ{&ÃMñE0'( AI° X¬Ö‚{UÌ™ &ÃT2-˜–L+¦5Ó†‰= {ö0ìaJ7þ úûЦ²VGAM/data/crashp.rda0000644000176000001440000000056712136651170013727 0ustar ripleyusers‹m’[O1…»íÞXTLH Ñÿ@`¹¿ß|Qyݳ@ü÷Æn8gV›LûmÛ™33Û§ûyžÍ3cŒ5®š«)´~ Lh~e±[ãÚþ+ñvé­cŽÃšÓ8z×*LZ…_À—ÖòVE¿†Ã¨xZç¿a£é-CÜ~´oÕy °–/uêúîÞÀ7†VC}ß¡–&tb5´{U׺‚o„•ùGÈA×ë°ÚZýÔ jÆ=­C­DiVÿº…šÚ*~Š;Né°§Öüíï:ô+Vµ¤È¡‰o®ì/™:©ªY¿ÍÖœÞsrˆC¿gáyOÎ^}´)>W;4 Á¦{Ünˆ/‡ñuµ”ÝõøP¾Ÿ‹½àaSWZ|;*q3[û¢ûVú$j×åö»«“ëà èú„œ0 #˜0!L 3€í÷„úB¹Ð@h(4 M„¦B¢‘‹F.¹hä?ÿTö ÝÏÊÖ±VGAM/data/crashmc.rda0000644000176000001440000000057712136651170014070 0ustar ripleyusers‹u’ÛN1†»Û=+º®®‚@9zo¼óFMävL’]ˆoolÉ?C©ØdÚ¯Óîü3Ó}yœªlš !|!Íl¦Àד'‘ê5žÕU³\Í„×f«­¥­½¿r>Vã‹´%Úε¥Xé»P[ŸYKÜ•ˆ!­8¤C{w˜û9bfЏÖKÄ)°CðŽ«åY9çР‘UG y¨õ gbø'ꡞѬ>ÙõÄâÐGÿ–C;DߤSOˆ½ûNbV/˜‰w³6¬„?A.¡øû¾Ý7Ï:‹·Ä7·ÈßðôZâøNéØFõÐVŽw–Æ ö%r þ:};úíÃuµZ4Ã)Ÿ7k·݂ð}1gïrGøT¾V[ÆÝÚUš}U )‘3›WÛªóQë$œëi½ùîØÉµqàu zŠ O0 ŒÆ‚€ßë2õ˜SŸiÀ4d1™&L¬¡XC±†b Õ×ó±_“' ²VGAM/data/crashi.rda0000644000176000001440000000075112136651170013713 0ustar ripleyusers‹]ÒékAÆñiÒ3ÆVzë"&6›«EÔbcð Um©®m4MeÓ øF)RŠà+Å?µô;øÌ°í‹Ïî“ͶÏÌoò¨¶¤–RƘ„IÚ«½ô'¸ô™~3Â}p% »í5c’Y> á 
ÆqHyÀ¾¥ïÒ¸€›èaNàhâºø‚¸‚ã8«ÿ}*ÖcWwÃviêUÏ<6ñuõ<Æ-,â:øªg¶+K8c±ž£Ó¾†Õi{&ñ ßPÃux¢½¼Âw­ã'ž¢¬÷ 8‡Ã±žt¬7­¾ ršÑ¦î÷µ»eíë/¶ñ!fPU_I}®gB=6gµŸ¬ösŸq[îâ…zÚø‡_øƒ-aMk°ùZ¬§ªyÚó8­ýžÄ\ÄU±³ŠÐ¿'óŸðV :£ºfWÑçxϬäÔcÇÝsš‰=›çšßKu-kf³šÕeý]æÿ¬öüê:áûVW‡3¤‡ÉÆzÇÅf¯åâbkÕ?m÷\¬Gk..„>ö:û›VÞ…]×ä¦VÃ0ÿ:bû^‰Ö?æã‹×}S.\\(ºPr¡ìBÅ…ª Ó.Ì($ S>| |*úTò©ìSŧªOÓ>ùŽÀw¾#ðA‘뎵 ,ƒ¹±VGAM/data/crashf.rda0000644000176000001440000000052312136651170013705 0ustar ripleyusers‹m’_KÃ0Åo›t²‚ ø²O1Ötÿ|ß|QÁ½–­c‚vЮøíÅÔå\¡›ürs“so’—ÇËw¹ˆ¤b†~èlê»D¬Lý8Ù·Uw:Š˜{?»ñvëmöqm ÏÓˆËÂ84,ŽÁ:t¸%‘q¼¥9Ö iòKçÅ:ˆ¯ÃD¹r“_¬“„œdäœ$œÁû-Å£Kõr-|oœòˆï{q|Ø‹9×…7‹ëá»k–ö[ùη´†{ÈH~ÔÌÿˆë{ÜaF9Z—è×gMõUwîdøôW§y>7À·¾¾×õžzàSû|­.Š}+í?«Jpæ‡êRÍ­O" Ÿ¶çï9'7 ÉP ,+À°lÒb¡T(9¥Ri©´RZ+m”¶JªáTé†S Wúþg°_ëW>ɱVGAM/data/crashbc.rda0000644000176000001440000000056512136651170014052 0ustar ripleyusers‹…’_KÃ0Å“&m·ªÃ¡à?ÃØ²¿¾‹o¾¨à^kW™ t~{ñϽÆÎ±ÂI~½MÏÉ y¸¹l–)¥"eüèÑ •UMšÓ¢Ê׋—B)sæ_I'¤ŽÚÿhÒ1霓¼7H éˆÔ&ùh‹šþ×rLày  ‹Ì§¤ |3¨sŽ:!£ $Øk 6ðð½´Ð›Å¿:ðÚ—ÃýZä\ûäó³`ß‹?ÛKÔ›ø×`æž÷õÃûNá•{!‹>8‹×ð:öᜰÇúcq.I0s~‚9C-R»göß=¨¯ ûjÀ³¥~ïHôÅïz×ãϵ—ùG¹–Cú)šûÕ’ñi[2>—s©.¶ŒwÕãc¾Ü.ëIÅ{¾æ$.fó|“w_+ÚDmy³Z}vÃÍuðA÷ú ŽaÀ0d1Œ& S†@Ôï õ…œÐ@h(4 M„¦B’á$ÃI†“ 7 ñËëR{`²VGAM/data/coalminers.txt.gz0000644000176000001440000000022712136651167015272 0ustar ripleyusers‹-ޱÃ0 {OÁLJ$å6K¤N‘KçýËàunp hf¯·Iné-{ã>¿ï¡¡]HcÒÌ×t³HŒû ©œš5gé¦SJ:jÛ\ f³t¼®§=vQ¡I'6Y‹â!Bgž¼E“ 6ÏØ¤ôÎàÔäÈR(錮çÇ­8þ]ÔpVGAM/data/chinese.nz.txt.gz0000644000176000001440000000057412136651167015207 0ustar ripleyusers‹ER9rA Ë÷ûÞÇs6XE’göëEªÒTÍôt£Iÿ½_ŸÏç×ëóýüxïrÏŸÿ®Èùca½³˜W8c°dìBó0Ãέ­€agÅv˜Ì«f]¬€™ü`QVÈiÝþ‹¹)‹½)Wq%„Ìy°æË‡¡BD… 0jæiØ,i*lnZ UWo™{…[µÎ‡9–ÖPêÁ˜O­\ΉcòRV`G¼l-2Ú[6®{¹(±$G$°ëÓg6 M GûÔ6¬— ª³§ƒ§ùvw{ÍŽ,œ$*àé¨7¤u)5¸´¶%xúö“®l’ŸµÓÏŽóa}×@=™ë_<‰ÖŽP AFóøÞ;SÝ÷ dÊ43à_'âŠÖâÈo•¥µàÃVXï2`¦Š™èu¬›À¥‹NF)h€ï3Qa5ÊÓÔi%°‰ÓÌÙ Æî‚KIBðÝÉ‚`, a@4%óñ é®´ VGAM/data/chest.nz.txt.bz20000644000176000001440000000074412136651167014753 0ustar ripleyusersBZh91AY&SYx.ç&Ù€@à"…@p8ªTÂTɦSÔQè4É J( S&&A‘‚BSɪmJÿË6—ÑÁýƒ4ÙwðV,•¦]¹@¡s¥ˆ&Š•"ÁMLAL[u¿w.ätØ­“wF‰¡³®êùNÖ1¼fªÈK%ådqÄI¢8Ño†^Ñî §4.zðÝÓ­ à¡gHËUrŠ©¡X”å%µ·¥oY(?‘"lÞæÏ¬wô‰¨ç÷[Ë©a{“¨¯Ï'± Hý µƒ÷%¼á”•I’+gÇß7›šÐiA6¨Ä‘g6]fàˆ5¤ÚV6Å$DÀ2áã<Ø¥>R# æg7x…¬±'œ‚ó2©ð‘œæfà’I1a´/GÎcA ˆt÷·±…nžã!¯rù»”Ö´Ù$m"|e 9ñ’ï_½s‰À„FßTlÐ’9¼Þ±† ‚I ž³Y~2 ç{½€` H±Ê³K+/0ÁåWVˆ‡å‚6.äŠp¡ ð\%ÎVGAM/data/car.all.rda0000644000176000001440000001551312136651170013760 0ustar ripleyusersBZh91AY&SYjˆð_8˜ÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿüòÑÔwÿÿÿÿÿÿ}ÿß}àü8yåîž]óAå¨D§¼­ãx@¤§w£žâ;¶Ž,îyÊ\—¸÷˜µ’Ö½°÷¾è÷>zãÑzL›A‰„=OMMéªz™êž“M‰‚dÉ£AéŠa¨õ4dh4†4 Ôh¡  Ðh 42ja4Ä5=‡¤6 ÄÈ&ž¤Ú#ši0 @hL†˜Ð È`L&L#&šbb0ÔñSD…4Ð6£CÔБ š =LÔ=M¤ 4ˆ=CÔSÔÄ &’DI 2¦òzŒz“Ô6“MÒ24ɧ¤Ñˆ2mM1¨¦€@Èɦ#ÒzhOSMh2hi¡ iêSÒ&iF„zhÒl Óe1£L‘´&&Aµ M£ ‰‚zF4A‘¤&OQf È21©€@jl“„š#SC!O&hšž“ô¦M4Ú#2žMG¨òš=OSКzž“Ô@4€2jišÉ©£@zP C@ÐGúÂó²Éù¦6ˆÈ²Ú ²)ÎLdŒ-¢–ÙmePhV¥JØ©[jÅ€¢)mkZ•A•+Ûæe´¶[¶f(‚ )bÙJÖ–6ÚkYrUùqQ¶¡\fL-im#eh•–ÐZV"ZŒ(5¶VÒ«Z)QR‚Q¥U¥¶ÛkËD£[cJ(µ*Ñm±E«Z[`–¥£hÖ¡[¦Ž[fÍK•kXÌ)mZ4TelÛ2¸`ÛZ«j¹’¦J©m¬ª¬mm*²ûº5M”†ZW\kŒŠŠ¼iY*…3)’Y‚˜‘Tâˆp )’ ÉSL4 JsH‹@N$šTÑ$&“†ÌQF…(ƒ&TÁA8UD”¡ˆ ¨©fŒˆUÈâ±JÛ®gäqþçÎí÷_†@§KXT ž<µÅS rÉJøåNØ?Ù‰8ÞÆªw@“+2N“õx”±Q'YÏc1“è6’(\ÖŸ˜Ú.’»}²,ð¦瀣ò_ž´ß/íÀ0©ÜiÆ'qCåù1Á›‹ŠKµb£Z¢FVî"ïî£úh9Skbñ.âøµ†Žrö++zÈ'r n›Œbqõz÷@Oã!Z´Eþ†öv?‡ÂÜoíý9)š…‘Xa Ñ-ªƒý;n©d´¿è0‰er"•…:HÕû-u'ìßlº³í=íG¯ïÜs²ÀÅ’#HÜKÆÌº‘œ¬î+5-ñõ—·Ý :î6§Ø>eÛÃySM)b H‚,M¬›2I—ÔbŸ“ùW¹¯rú¬¹GÇß<»l}QùfÜšáf8bU»«œ-Òw[£­¾WîgdV+Üóýk=Ißtü¿(z‰]JVc–ª¨å ÄŒ½\ºž-_šÚÙ‰âÎÿphOŒ˜žçdcâ¹­Ÿ¡wq{v€›ëµ×¿XïѦõu¯Ú»e¨[“­R®Ý諸®ëuµ¥r:=î.«–²(È’Ò ótX{Dßy`ÞüxÛÑ ¡Ã€¦O ‰ðzÿ‹]09¬í•;¿UG™dägmÛSºyXi€mÆžY‡„Cžö}­âÉ8ÅgëŽCÖ<Ôó)ÇØyn:õ‰¹›ËÒ¡‹±Þ­æÛiÒÚ±ÛNµ·æ7¨t7×^Þ‘s–1È1SÔ`g¸!<Ç“ìŽIuÞí.í°Ù} AÛ›Ïy¯h‡3¨ zÀtˆ€@ÐN¢Q0ƒÕußel¶l+—Y 
+¿Èø^ýÃ;÷yÊI9ˆIXwäæj@шrÚT]<—r“fµƒµXˆåưÁƒn3BöWoCvª‰«L[,A™»¬3XFoQ$¤‡Úѳ±§ª»jÐSí¸¤\8†™DC¸P(6ÌzNgeìµ"é(Ó¸ïíìpîFþF–¸Ü‘sÃÇ2Ýu9|øXµ¼}1³UÛ¨ï/ 5p0(ÍŽÿ3f__‘v›’Î÷~vk?˜~æ”ãmq-‚l§;g@Ñ~T"BèQd)‰u¸«—b|S{¤ìôË;Ï~.ÙÊTäçž<Ú{Ú¯Ô,QCúΣbáçTÂÔñR©æ¡qd×èrQ½‘)бœ‰x ßZìœÆjº“êwµeS³nÿ ¹-wÔ3J{ÇLK2‚Á‘IS$6òÞ­â©À²réõt–YLj›.òÖ<‹cÜøZ¨útKY8Ð)=n|72^Kµnc•ê2!&JÏów·¶È‘1ˆ•ÂŒŠ–±,…âò±aC …W¸¥ea6 ¶˜Á{}ãê͹œ!¾·bÜá·k6ákøøîvnaÌ¥=ûûI.S „\ »+;.³sç­&†“õÀUÖÕÔx6ÖyÆøÙÕn®V{¤¹|q_抌›Œ0&ÞØQ[Ek“a²É—“gtÔ7 xl†œxä8qMõ„PÛ+¡ÈžÖ/È)pÀ"[ª’Z°ÐhLÏ:‘¢å+°Û·yÜ:Õqº ¸døBðÚQX ÇJ ‡…€Äв ("„ˆHˆ’! Tä'‚X¤ˆ²¤€„"²H ( H„Š‹¥Ø”A ÙKj'³,Òô¥dRøª­±@Y8!Y%¥­°*VAHJ’DI%d(‚À­k!P%J¬¬+B¥E­H(«H!Ä”šd“tP E ( ¢RDaÖ—õ]öjAèµ±|^÷Æ„!Æ1ŒcÆ1‹ÜÆo9 Á‘"0HD"gwtJ¯îŠÑKpçbË‹A•d,ç{¢ˆ5° ɨ;NìÜmïñÕk8ºîZ ÚEUUUQQ¯ê‰½½RV N³yZpÕÅTrÕPøUqÌ­V¨ðj*£‰TU‰”¡iDErÊ¢â7n>~:5<7Wâµä¼žS˜:Úp èÁfh²ó³®Á¦Ó_¦ÃŒCWÝnðs¨ÐC'iÜ“µ-k ¬¸ÐrA½á·Aײ¯kD Ç£¯¥pJ“#8»‰AÔDIœ¬Ý;‡”Û%±™Hä´¢£{`ó„çÅcV¥xÒDM²£8ªˆU`lH4‚7“+;·ô;@kä¶bû`ƽC)2zÝR„äÙÞȰBñ;¡ ÷h‹p±¦]ÿÄ"ÜðûiF#‹P(Y¨T¤hŽa¡Ê–½øå×q¹9o7–ýDKÁ#ž±ØˆeÛ)©Mj¢ê`ÊIÓE6Ô¦¸òp;Ãtƒ}Íö­”±}ªdEµh„.£\TKGE àœ³Ò“}Õ`¸:Žtíd@¶J¾nà„ZéÂ{ãË¿¼g5y(UfV›hY@?ÀêÿçÍÐî<÷S*9%Z߈½ÕtØJȰ0){”@Ùcùtý>É DýX] XkÕ‰¯z&B8´4&#äœ4›h5PÀ–µ,±Ö¼XË,“åÖšˆ¢"¡Î8‚Ä©„C©‹C¸Ú¦–•qBõ'mU8š¬æÌ šYÍLÝÞø¼‰ˆÌEâƒÍÚ98¾ÕÞ¡G’N©Ô6[}Qv^½påé¤ÌÕâ"wÕŒRs:c^âÛ™gT˜„’ Z{L,k9µa†Þ!·¦Ç}¿†¬:¨îšq˜JT ¨«CY‘yÆ+' ¬²o ™˜®*" Æ33Z ÅPæ1–á19‚xtHÌ@0oº ‹OYÎôdŠî«ájâÇÜ‹le€ÒnB½UlœÄ(È€ç JF`À0s¸¾˜0U(ÒÛÌJ`dUºŠ"­Éptb!Àò«ÃÁš¨A€¨$´Q*¶l{ÁD"8"˜W=÷Ù2*”ì”K›9š³m°LVchaØ`¥˜Wy¦fÀØÑ ÁÕ}–çFz’X ?ÞÂTó\³½a‚vÇİY<熸†3Ħ>m¶—íì6TGr³zÿ/5ÏjèGŒcÓô#Ú‘ÒXAu ¸|F1ÍË´Èb÷™D¦ýÜ"D2 bd°‘E¤cw¹ÁØ$;“VdÈâﻇéø[Ï*å͇ˆäkáÀÑèÀé2Æ"^óH8îEôb?Ê=~9¡PòJRƒ–ؾËMi؋lj@Ž}Ñ: Õ®ºú—šï<¥ÚBÜRE$¨$é ‰A‚#0 xu»ûðÛroÙòs1Ú³Ç!Ž’¡¦+lªÔËqŠ‚eËL•1˜Ê0ª‘`¥s/b(Èb)ÍË I½,Š‹ Œv¦ŽI¤P ' Xhde5lÅAj•wq7@®éSt0Bå ÐÑw—UiP¬ÖŠåZ©ÌÓ1ÒÑ6ÌÁ¨.²† ]e­0;¥r•F[m± Ó3+ˆ9™‘sÚ‹V¥Š0VÒ £bl•@V"9d¨*ˆ¢², 1PQTf5L¥Q ŒDTÙTPXˆ(°PX®RˆÅX,R)³QA`  ,DYÊ!”±b©™IŒE ²EXŒŠ)+©DDQQEDT*¬Q`‚‚ÁUG,¬X¤Š±cUb‚È ²(±dU‘`"FEX’TETuh±ËD"$YÒ¦0*H6’² ŒPP1dREX¤EHb°¬»ÂÂi&’¡ dP¬* HJÂrDša(¢$ATXé(.R¢e¨¶…j)¥ˆ©mSi/<<ô›gsÚÞæ‹Â$Ä{ÜÊaÃÌàÜp»Ýq.C³ºÓ´—R¾òÇk¿ÂébÈBéî£åè/;yÖœ®›_ Í%uEø€w"jÕ”žòÃhÐ8A ÛÎtSÒêùSÔÐC¥H cj€â óŽÍ¡e—eID@ ¥NùJ¨8"ºŠ‚\¸‚]zÒ!X$ÐÃ!S®­`ÌL@™•÷ÇJepÑlão¼×דަ÷Þth©逺Áæ*Рÿ’mß÷D›ÒωCNeæi{*ÑÛ'uP3q*4ŠœÑÀ¡»` æÝ{f¨;X ͒󸙴°Qiû!£f„lAL50õ'dI.è®…d}oŽÈA:uzR °;˜jǾï¸]MÇLd‹BkpÐ69f^ºríV$~´6d iy¬Û ¯ú”ÏAÓppäE× ã‚Ê jÕ¡@]§¤üÍR%ñ±ò21uù찺뮾C 2à€Þ  æ6†5yôv=ž·žR5ÕÂÒÖè¬>|ô¾kw‡¦ö-µÒOD9ôw‚€A²¶²áÊ&‡230`ß ° z`MR85˜/T<8Ÿe€ºW½Äfð×á[BwìÐ éÀAŠúÕ î¿ž¤È7Î ž:ÅÄ0Ñà5R]pO.A¶e¤ðŸrǧ.Ù… _Ѻ@ Ðd=(v?c5¿CbS7ïY'`“ À¶x§/MxÛñåÝ5~+e\›¯ª¯È1¯Æè‰ML—ZÔ±.ÖŒ÷ôZKô…©Œk ™ 8”×fçhÌ` õã;q›æ YÀ{ÁH‘#ç¶ò1Fop™Ÿ1œÉ Û(14Ng[ïòíwfY³Œz@ÑN ·K1Ù<ÂÓ:òÍ£°2=óø^^Ë9ãP†'²ddê5€íh®Ð:¢/{Òﺊ;9ZJGŽ÷jmüu•é|_@ ïY€ŒêÐ{'àú=ñÇáà$¦Š'.a©HZ½> ¼GbB$5xù¦x™ãqà* ü$ðXx ÛÌ­4ÉM¤wŽp`ÎCƒº³Ã@ÍÒóÅŠ˜p™ádÇ_›»’‰mƒ— s¸šÍÏ`éu³M&V¬Úa5'¿ bÐ¥± ˆk¹ML%¡eÁ­ºšÃžöŰ8‹ÊÖ·fòÁÏ á%ØÌ9Z \6%ÚzRÚh†Û±oKÔ÷º$@wRƒ€²c`ó`:’öOëM9¨Ó#Ý7¾í¤.n¬éíÑë¸òëFÓYÏ |':“OQ>üØ™kÐD bï‚î:ŽÞ@H)^iƒBel6(XNƒbó‡šb($30+#ÊŽpù@y޳6fûÕAl÷\CÄÞgwîœ+Š*—WÃÇMˆuWj»ãkH‹Ö ¡² e±Ü_×»îßÇ#ˆSô¾9Ó4@å«a½Ýnä±ik8QØÙÇÄ.À÷FG£è“·Úi‚ùk0ŒòVIA‘ï”Ѭ€Œ´%f7 ø»µÉÌÌ]/½}#±×Ò³©—m´¶iBúŸ“u}…8ßV Û}¬ó®þŸ_Ûàînïy*ó“±Òr½ù„”c¸5Í&þC‹—Ñï¹ós½ž&¹Ôµ‹{¹S¯¤Áìi¼Ãi¯¾™Bñ:¯ X¨Æ"ñ¨>Ê ™ºTWÕ‘N&®æ(Ÿó˜;:£ö`‡k¬¢œž=‰£2!F@ënòúÓûÿ±7ýgo헥؀Zc"cÆÈe' çß>G7Ð,aK#SlùÕxKV©‘[-é)¬Ç–é0ù`保iX,guÑ?{ì+æ=¿Nzf“òáQ<“7ï†þ,Ëšó ÷±Éòhã·Vü“æ`ÒƒèYƒÖvÄCQDôæMŠÄÒ ·îMã#YÜUùXL0úè9¿^e6-µFµŽGF4ìÜÆá)3¨bT(ƒî©MúÖÚ¿‚+á‚C&nn.‡²«0Î+BòKï£ñæ?¿nÿöÜÇ©â)y¶“ÿ)'3¼ýÁžÕ û,«U²›f«õôk^Ù7¶}=ºØÌÈxUSa÷‰×d¾üz’µÓW`¦¦-fóf*œ3Úƒ¶¸êlËm{“¢Þ]ŸÉK6ßîŽìí(!k10Ìš¡5›n P¦"à×öÍûeøÃótïý«<ü¸l* Üéµ"ïØòÄ#<-]MÔ7[Eƒ vk#.êz_’¥쩬e™g8¼´½†þ‚uªÑ‹eVjTù'ˆU–Õ=²¯¸v'þŒÆiùìQ5–^¡nüã²qxdçˆæø@ÚÈ8Ê;oOÙÕË‘qQ¨˜¢0²7( ž~È 1c%Â×ç¼à§žþÈBøJ2* | ÚFô­äÈnƒ ëŸ^7RŽåëÇF¨pVëàΟ(2ê Œª™ègrf·§ÏYq-Íé6E—<›8v»Ä»é«•ŽCˆ_# ¹ø¡øÕ%mÜÜ鹊‘4 ãjHøß\³,aØÒ­Ø•œ¬WP‹Žp:qPÃï4}Ë…"JèpØŸ¸"[Ó ž[ 
ÅHGÌâRÛÏÔà™±|äòñ§üßÓ8E£öõl¦Zdö6“,S´ àübƒ?µm#Ò¦yÀ§ýÆPrx 3}ún\þ%Ç_ž&¦#“²4?µ¡íÿÕ‰Ês°ŽBNqeÛBš Gp‘/'„_†?Y·}uÜp¢ÕÜ×H=ióƒêiˆò tµñRpdæ+)P}¯Jz•ø§d—Ÿéóïx¶„d”ÑåâËbúõXbޱðæS\á­W‰hB“µi*àPUóWäy÷e1™4hiOòkϓÔH‚€(É·W ö[+–€Cº®2{pIÁ`ËЛÔ§%“¢¤9§•`º4i8ø(˜ø¶š[÷ðC!£ÚgºÆæò;±ôói¡#×|‚P+µp5m+&‚ û~-›_1\†ñì°MašSÊ-2Œ÷·8ïúlòÅ„ ï‚•e%@Of@100âž¼ê‚>3`µ*2$éy^›ÌU~_jª .›ê²¾×dñˆIûW“çE}³ŽAÄ/$*ו…µkDÏ›rM¶·Ò gùŒì|ÝzÉÖÈoGWâQŽ„SžÜGFÕžÿc2)¶ÇýaîO†»j°NˆþÌÏ(°hšô"×ð,b4€&ª.°pØΙ§MF‡LG'ÀÍ •:~Ò ‡/c3Þ ?k︔R*‰¢Zb(Õ¸'':Ý È–Yø=ͲšöV6“x_Ó ÚeIË*F»3Ÿƒÿˆ­ 0ÿßP«¡°äkΊÿMÙtz»4BNH¬OïCw¦ìX”ƒ\ÈOk…e\ºŠ25éŸèŽLׇ“ |6QÀ‰º‹1ñ5³­ÆTJ,­{ß³hà[Hbnh{Y„4ådôPŵöŽÎ*‚’x¸pï°Ñ>ÚÈ#UeÀ†ó©óÇ¿1¿¤§gfW̺Læ%ù3ZƒÝMçµ½ª£s«A±‘ŽŸûí‘£Á@D¿•0®¼>2÷¡i“„Î=S½ÊJ¤Ÿ9”†¤%@‹?15fAŒÈ['¢P {´¢.ƒ–Û¸&’XÁc3eò&›C<Ý Ú©d)^{ö«ç@(ÀC§ùQO©`ÌD€grÍLj…yÏñéBƒó¶ËüÇöú¦S€…¼ŸÄæï^Oýn‰4þžƒ-Å—az8çGߣ.i°Q;ûõ 8ˆZ\·ÏodüÅwìh¬“¡”ÊÂH}^óòd·ÔäÙ;Ê "©RåHYàk¡§ÞÓ«oì­¯ÒÖ—„=ÿ]9æ|xy(ÜVÔ×±Þp¿ÔeˈG¸ˆÿ<Š=“ý—mrý@Á7&å0¨x1ÔC cÜ*Q00Ä^¼€§Ež{0”š6déÐÙžjÄF¬"aË~ŽzÍ ÛÝ/…xÑ?šÂ.þ˜b€0™»9i²Ó •é99ºgN¿Šáë–¨v›±ÜNéW‡*ý%J´Æ)­zßÙv4\»<Öª°ý9•M6ö%}Xà!ì¿vb£u£Ù¨Ž$a1Ñ*]£H4yètWÒ·'V¼ÀÌå¯ÓÇ5XÅ( #Åv«õ°-ó%õrËœÂÀÒn¶kQLÈã“—’i á¥b~ç2_Ôã=û¨ÙœWVü'{2«Tª‹©¸ˆ‰I寗\*œ˜ëa•yà°ÎùmÇÒÝ„œ b&êóˆ«E™ý}7;$°øÀœ™FªÍ '÷Š¿C 㘢dâh¡±ƒ¥Oéc›Rþvhh—lôðxC:6ÔÙ Sÿ]b4azL~‹^;ÛSŸ‹æ^µ“6Ç0 R©i%ƒ5U-Þü@@ÄŽÁ ö° ÃRTV P‘“tM "Çg"D0²Lô–^ZBUbûød s?zhÆ×倖b9Ý7·èáp‡Yr‰¸6Is.K§GyÃù\£?ññWD߯|<áYÇ‘Úˤ·¼ÊTHÊ ÞŠplïœ)ýõÛìlAúȦ‰í+:µZ©>ë¥Y©‡©ài26ˆdµN¨÷#âøóµÍÙ r$£~É>u“I‹×†ÞÍïÎ Ó-M]úz_àœß§Æ.> f«Å ‰Ží‘Ü0Jv|ŽÏÓû!—/øltmºKb#½å—„¬îE+<ªRÁçFŠoèÓ¥àìß5j®i-EE o$'áIk |:•&§˜ É\ç¡d'5VØ(³…‘ÉÛ.¦Ö FÑ16ÞöD^ÅñÞüIJµæ±1·¹õ<Ý÷¼qý¶‚â겘úf0ÝѽÕ)‚] â»^w+«Ò yÕ#S\Dbg¢™SJZjêDòúÉ:Õ)HhñÛ~ªôŽ{ðwœ0ãùW)3NTñ0™ô£Õ ©Qp F<΋¬öBÐÀ!C§â!D#CS:2 s´äŠ»‰Wæªó3ÀuMãt¶ÔË pO©Íx„›H 3Ô»wôJ%B ºœÚž"ŠÌF÷ ¸ÅôÌ2i`iÑpÅ€næïcÄÓ¦~ëÛ¿Æo<­— ¢1Ç(ù!cºÿDœ[ŸŠ’\P 0 ‹YZVGAM/data/backPain.txt.gz0000644000176000001440000000070312136651167014645 0ustar ripleyusers‹}•ënƒ0 …ÃÃT±sœjÍ6´R*Zm}ü1p&.¶Å¯“ø8œ4/h^ؼls?w·šåyœûÒ"‰þ<~•Ë©ëïãð]úr{¶–ÐÛÐ߯åYNc¹vå½uÓÖù¾Šk÷ñùÜÌ„˜Ò‘оtš ±é‡KÏÓøzR®ðXÌD,ï@lo8—Ãe!`I±ÞàˆŠøfÙBf 㺊ÄÓÜ.²31¶G4•ípé¾~t¬êè–ïá*e<<±ý’pÙÚhŒ¤¸iÿû?Ãø˜^Í$ç‰ÖTuìÚ1¶k‹•2ù²^$KëH.–¾*Æ2ã-#ÑýfØe‰ÿ~™$ãà 1ÖÁ|tJgW<ÔN~}Ü]Ï»‹ Kr¸,3o C…Y!}¼“€—@3ÉG…%9¯|ò*-¡ÈAPaV¢àä8 ^aa5!nT’ƒ'd™E£0Ð+¢JÕ°‹N¥^ŽÂ––e–Œ½ ä°M¨0» âäÖAœ¼¶)¨4ª4©4Ë1žÍ:·3ln”•3=;…)wAVï‚Uº½²r!€Qo0 á_)—ò½ VGAM/data/backPain.rda0000644000176000001440000000074012136651170014150 0ustar ripleyusersBZh91AY&SY¢h´€ÿòÜàP@?ïÝÀÀ;‰rv@Ñ(ÈÞžª2m HD¢?ü¥@€“Ê14ô&F˜4ô…$h™€ ƒ h"J¦Ó'¨=A i!@&D@’¡‰‘0AHB+-¹Vt(l…Ñ ’HÂ,jRÈ•¨„€T ƒP ¶$ƒ—&Ô© ².¶äT€ZI *’‡Û‰¡“-«WSD¤ %ÄðÖ*Á]c.TŠH¡Ó\;”™0Ö¢ï Ùô•8º~°P¿âŠ‚›2iÚ³]Šå‚uôfõö«?½vXªë³f͟ײ°ÙdÜt*,R$PBA,Š ž:(9`yf¾ÿÚÂíXãŽ8ãâÔæfffjê€T’I$“Q $’I/&ffffX¤*ª«…UUbªª¬RUUŠBªª±H·†_›|vÕ·þ½G:¤í³§wm£k€[ޤY¿V‰8Q@=>ÛŒ‚ÂP"õÁni×õ äQ\A€Ê“˜GÍz”ç™Ê×iO²S-JÏ„ÙÒ· iV“2{þ.äŠp¡!E:Ð:VGAM/data/auuc.rda0000644000176000001440000000036312136651170013376 0ustar ripleyusers‹ r‰0âŠàb```b`‘ ‚… H02°0p‚8‰¥¥É ÌÂ@6+óĪ0€€C¯„..€Ð PQ×ý">g„îÔ€ÐÉ PÕÕC8ýº j¾?ªºTˆxÞ¥õÐÔ¥CÍË€Ê{@ùJ0e(~dÍKÌM-2Àž„r8ççæ¦%§Â±¨¤Êf NÎtÍK‡ò˜}Ëaš|SS2“3óRÑlà,Ê/×C¶…fj°k°!Û‰mŒÄ6AwrrNb1Ì0˜ WJbI¢^ZÐ ïb>áVGAM/data/alcoff.rda0000644000176000001440000000104212136651170013666 0ustar ripleyusers‹]’MkQ†ïÌ$µ *BƒT£¨`[É$ib+]ÕŠ¶E¬«:æ¥5´¡¸Õ…‚ˆèÒ"u£àp!øº*…ÒuWnŠÕ­->—9÷fÈâ¹óÎý8ï¹çžé±Y?1›PJ¹ÊÓ£b.ƒ£bª‡oW0_nÔjJy½üíƒÐÏ €ËP„#Ð+ëIˆË~M7hñ0 Â1HC® 3pêŸGp®ˆ×@$V—Äî–ý ù…³F Kð>À[øñù a&ä\R|4ûá Ì%Åó$œ€Q¸*¹¶`™¦`îÂ\ÛÇ]ƒŸðÞAIjxŽKmNK|]«~©•®Ámx߉s þñløº»¼Ñ‹¶Ç]|îêh-x ä¤ÎBJçÔyÈË÷š¼Í°Ôù¡¼Ç/ÎÞ„mâü »0¶ñá||Ÿ¿¬¯É½tí7aUâ K夿ôÀsöí{øº^MXéèúx=xZ]@RºÃIoªQ7r¦U5òNµbg·Œo>1òV°he«ÞéTžŒ“™LT‚Å ]k’DÇöžfc)M®OœŒY#|#rFä(1dDш’—D¸ÙŒUY«|«rVå­*X5dUѪ’UÖ÷¾õ𭇟cÜÓü@>³±VGAM/data/alclevels.rda0000644000176000001440000000104512136651170014411 0ustar ripleyusers‹]“MkQ†ïÌ$µ *…†REmU4&“¤‰­¸ª¿jÅ*‚“¥1…´±¸Õ…B)E—±› þB@W"ˆkWnD 
VGAM/data/Perom.rda         [binary RDA payload omitted]
VGAM/data/Huggins89.t1.rda  [binary RDA payload omitted]
VGAM/R/0000755000176000001440000000000012136651110011233 5ustar ripleyusers
VGAM/R/vsmooth.spline.q0000644000176000001440000004544212136651110014414 0ustar ripleyusers
# These functions are
# Copyright (C) 1998-2013 T.W. Yee, University of Auckland.
# All rights reserved.


setClass("vsmooth.spline.fit", representation(
         "Bcoefficients" = "matrix",
         "knots"         = "numeric",
         "xmin"          = "numeric",
         "xmax"          = "numeric"))

setClass("vsmooth.spline", representation(
         "call"         = "call",
         "constraints"  = "list",
         "df"           = "numeric",
         "nlfit"        = "vsmooth.spline.fit",  # is the nonlinear component
         "lev"          = "matrix",
         "lfit"         = "vlm",  # 20020606 was "vlm.wfit"; is the linear component
         "spar"         = "numeric",
         "lambda"       = "numeric",
         "var"          = "matrix",
         "w"            = "matrix",
         "x"            = "numeric",
         "y"            = "matrix",
         "yin"          = "matrix"))


setMethod("coefficients", signature(object = "vsmooth.spline"),
          function(object, ...) coefvsmooth.spline(object, ...))
setMethod("coef", signature(object = "vsmooth.spline"),
          function(object, ...) coefvsmooth.spline(object, ...))

setMethod("coefficients", signature(object = "vsmooth.spline.fit"),
          function(object, ...) coefvsmooth.spline.fit(object, ...))
setMethod("coef", signature(object = "vsmooth.spline.fit"),
          function(object, ...) coefvsmooth.spline.fit(object, ...))

setMethod("fitted.values", signature(object = "vsmooth.spline"),
          function(object, ...) fittedvsmooth.spline(object, ...))
setMethod("fitted", signature(object = "vsmooth.spline"),
          function(object, ...) fittedvsmooth.spline(object, ...))

setMethod("residuals", signature(object = "vsmooth.spline"),
          function(object, ...) residvsmooth.spline(object, ...))
setMethod("resid", signature(object = "vsmooth.spline"),
          function(object, ...) residvsmooth.spline(object, ...))

setMethod("predict", signature(object = "vsmooth.spline"),
          function(object, ...) predictvsmooth.spline(object, ...))

setMethod("show", "vsmooth.spline",
          function(object) show.vsmooth.spline(object))

setMethod("plot", "vsmooth.spline", function(x, y, ...) {
  if (!missing(y))
    stop("cannot process the 'y' argument")
  invisible(plotvsmooth.spline(x, ...))})

setMethod("predict", "vsmooth.spline.fit", function(object, ...)
          predictvsmooth.spline.fit(object, ...))
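# Usage sketch for the accessor methods above (not run).  This is a
# minimal illustration only; 'fit' is a hypothetical object returned by
# vsmooth.spline(), which is defined further below in this file.
if (FALSE) {
  fit <- vsmooth.spline(x = runif(100),
                        y = cbind(rnorm(100), rnorm(100)), df = c(4, 4))
  coef(fit)         # List: linear part plus the B-spline coefficients
  head(fitted(fit))
  head(resid(fit))  # fit@yin minus fit@y
  depvar(fit)       # Extracts fit@y
}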
setMethod("model.matrix", "vsmooth.spline",
          function(object, ...) model.matrixvlm(object, ...))


depvar.vsmooth.spline <- function(object, ...) {
  object@y
}

if (!isGeneric("depvar"))
  setGeneric("depvar", function(object, ...) standardGeneric("depvar"),
             package = "VGAM")

setMethod("depvar", "vsmooth.spline", function(object, ...)
          depvar.vsmooth.spline(object, ...))


vsmooth.spline <-
  function(x, y, w = NULL, df = rep(5, M), spar = NULL, #rep(0,M),
           all.knots = FALSE,
           iconstraint = diag(M),
           xconstraint = diag(M),
           constraints = list("(Intercepts)" = diag(M), x = diag(M)),
           var.arg = FALSE, scale.w = TRUE,
           nk = NULL,
           control.spar = list()) {

  if (var.arg) {
    warning("@var will be returned, but no use will be made of it")
  }

  missing.constraints <- missing(constraints)

  if (!(missing.spar <- missing(spar)) && !missing(df)) {
    stop("cannot specify both 'spar' and 'df'")
  }

  contr.sp <- list(low   = -1.5,  ## low = 0. was default till R 1.3.x
                   high  =  1.5,
                   tol   = 1e-4,  ## tol = 0.001 was default till R 1.3.x
                   eps   = 2e-8,  ## eps = 0.00244 was default till R 1.3.x
                   maxit = 500)
  contr.sp[(names(control.spar))] <- control.spar
  if (!all(sapply(contr.sp[1:4], is.numeric)) ||
      contr.sp$tol < 0 || contr.sp$eps <= 0 || contr.sp$maxit <= 0)
    stop("invalid 'control.spar'")

  my.call <- match.call()
  if (missing(y)) {
    if (is.list(x)) {
      if (any(is.na(match(c("x", "y"), names(x)))))
        stop("cannot find 'x' and 'y' in list")
      y <- x$y
      x <- x$x
    } else if (is.complex(x)) {
      y <- Im(x)
      x <- Re(x)
    } else if (is.matrix(x)) {
      y <- x[, -1]
      x <- x[, 1]
    } else {
      y <- x
      x <- time(x)
    }
  }

  xvector <- x
  n_lm <- length(xvector)
  ymat <- as.matrix(y)
  ny2 <- dimnames(ymat)[[2]]  # NULL if vector
  M <- ncol(ymat)
  if (n_lm != nrow(ymat)) {
    stop("lengths of arguments 'x' and 'y' must match")
  }

  if (any(is.na(xvector)) || any(is.na(ymat))) {
    stop("NAs not allowed in arguments 'x' or 'y'")
  }

  if (is.null(w)) {
    wzmat <- matrix(1, n_lm, M)
  } else {
    if (any(is.na(w))) {
      stop("NAs not allowed in argument 'w'")
    }
    wzmat <- as.matrix(w)
    if (nrow(ymat) != nrow(wzmat) || ncol(wzmat) > M * (M+1) / 2) {
      stop("arguments 'w' and 'y' don't match")
    }
    if (scale.w) {
      wzmat <- wzmat / mean(wzmat[, 1:M])  # 'Average' value is 1
    }
  }
  dim2wz <- ncol(wzmat)

  if (missing.constraints) {
    constraints <- list("(Intercepts)" = eval(iconstraint),
                        "x"            = eval(xconstraint))
  }
  constraints <- eval(constraints)
  if (is.matrix(constraints)) {
    constraints <- list("(Intercepts)" = constraints,
                        "x"            = constraints)
  }
  if (!is.list(constraints) || length(constraints) != 2) {
    stop("'constraints' must equal a list (of length 2) or a matrix")
  }
  for (ii in 1:2)
    if (!is.numeric(constraints[[ii]]) ||
        !is.matrix (constraints[[ii]]) ||
        nrow(constraints[[ii]]) != M ||
        ncol(constraints[[ii]]) > M)
      stop("something wrong with argument 'constraints'")
  names(constraints) <- c("(Intercepts)", "x")

  usortx <- unique(sort(as.vector(xvector)))
  ooo <- match(xvector, usortx)  # usortx[ooo] == x
  neff <- length(usortx)
  if (neff < 7) {
    stop("not enough unique 'x' values (need 7 or more)")
  }

  dim1U <- dim2wz  # 10/1/00; was M * (M+1) / 2

  collaps <- dotC(name = "vsuff9",
                  as.integer(n_lm), as.integer(neff), as.integer(ooo),
                  as.double(xvector), as.double(ymat), as.double(wzmat),
                  xbar = double(neff),
                  ybar = double(neff * M),
                  wzbar = double(neff * dim2wz),
                  uwzbar = double(1),
                  wzybar = double(neff * M),
                  okint = as.integer(0),
                  as.integer(M),
                  dim2wz = as.integer(dim2wz),
                  dim1U = as.integer(dim1U),
                  blist = as.double(diag(M)),
                  ncolb = as.integer(M),
                  trivc = as.integer(1),
                  wuwzbar = as.integer(0),
                  dim1Uwzbar = as.integer(dim1U),
                  dim2wzbar = as.integer(dim2wz))

  if (collaps$okint != 1) {
    stop("some non-positive-definite weight matrices ",
         "detected in 'vsuff9'")
  }
  dim(collaps$ybar) <- c(neff, M)

  if (FALSE) {
  } else {
    yinyin <- collaps$ybar  # Includes both linear and nonlinear parts
    x <- collaps$xbar  # Could call this xxx for location finder

    lfit <- vlm(yinyin ~ 1 + x,  # xxx
                constraints = constraints,
                save.weight = FALSE,
                qr.arg = FALSE,
                x.arg = FALSE,
                y.arg = FALSE,
                smart = FALSE,
                weights = matrix(collaps$wzbar, neff, dim2wz))
  }

  ncb0 <- ncol(constraints[[2]])  # Of xxx and not of the intercept
  spar <- rep(if (length(spar)) spar else 0, length = ncb0)
  dfvec <- rep(df, length = ncb0)

  if (!missing.spar) {
    ispar <- 1
    if (any(spar <= 0) || !is.numeric(spar)) {
      stop("not allowed non-positive or non-numeric ",
           "smoothing parameters")
    }
    nonlin <- (spar != Inf)
  } else {
    ispar <- 0
    if (!is.numeric(dfvec) || any(dfvec < 2 | dfvec > neff)) {
      stop("you must supply '2 <= df <= ", neff, "'")
    }
    nonlin <- (abs(dfvec - 2) > contr.sp$tol)
  }

  if (all(!nonlin)) {
    junk.fill <- new("vsmooth.spline.fit",
                     "Bcoefficients" = matrix(as.numeric(NA), 1, 1),
                     "knots"         = numeric(0),
                     "xmin"          = numeric(0),
                     "xmax"          = numeric(0))  # 8/11/03

    dratio <- as.numeric(NA)

    object <-
      new("vsmooth.spline",
          "call"        = my.call,
          "constraints" = constraints,
          "df"          = if (ispar == 0) dfvec else rep(2, length(spar)),
          "lfit"        = lfit,
          "nlfit"       = junk.fill,
          "spar"        = if (ispar == 1) spar else rep(Inf, length(dfvec)),
          "lambda"      = if (ispar == 1)
                            dratio * 16.0^(spar * 6.0 - 2.0) else
                            rep(Inf, length(dfvec)),
          "w"           = matrix(collaps$wzbar, neff, dim2wz),
          "x"           = usortx,
          "y"           = lfit@fitted.values,
          "yin"         = yinyin)

    return(object)
  }

  xbar <- (usortx - usortx[1]) / (usortx[neff] - usortx[1])
  noround <- TRUE   # Improvement 3/8/02
  nknots <- nk

  if (all.knots) {
    knot <- if (noround) {
      valid.vknotl2(c(rep(xbar[1], 3), xbar, rep(xbar[neff], 3)))
    } else {
      c(rep(xbar[1], 3), xbar, rep(xbar[neff], 3))
    }
    if (length(nknots)) {
      warning("overriding 'nk' by 'all.knots = TRUE'")
    }
    nknots <- length(knot) - 4  # No longer neff + 2
  } else {
    chosen <- length(nknots)
    if (chosen && (nknots > neff+2 || nknots <= 5)) {
      stop("bad value for 'nk'")
    }
    if (!chosen) {
      nknots <- 0
    }
    knot.list <- dotC(name = "vknootl2",
                      as.double(xbar), as.integer(neff),
                      knot = double(neff+6),
                      k = as.integer(nknots+4),
                      chosen = as.integer(chosen))
    if (noround) {
      knot <- valid.vknotl2(knot.list$knot[1:(knot.list$k)])
      knot.list$k <- length(knot)
    } else {
      knot <- knot.list$knot[1:(knot.list$k)]
    }
    nknots <- knot.list$k - 4
  }
  if (nknots <= 5) {
    stop("not enough distinct knots found")
  }

  conmat <- (constraints[[2]])[, nonlin, drop = FALSE]
  ncb <- sum(nonlin)
  trivc <- trivial.constraints(conmat)

  resmat <- collaps$ybar - lfit@fitted.values  # neff by M

  spar.nl <- spar[nonlin]
  dofr.nl <- dfvec[nonlin]

  dim1Uwzbar <- if (trivc) dim1U else ncb * (ncb+1) / 2
  dim2wzbar  <- if (trivc) dim2wz else ncb * (ncb+1) / 2
  ooo <- 1:neff  # Already sorted

  collaps <- dotC(name = "vsuff9",
                  as.integer(neff), as.integer(neff), as.integer(ooo),
                  as.double(collaps$xbar), as.double(resmat),
                  as.double(collaps$wzbar),
                  xbar = double(neff),
                  ybar = double(neff * ncb),
                  wzbar = double(neff * dim2wzbar),
                  uwzbar = double(1),
                  wzybar = double(neff * ncb),
                  okint = as.integer(0),
                  as.integer(M),
                  as.integer(dim2wz),
                  as.integer(dim1U),
                  blist = as.double(conmat),
                  ncolb = as.integer(ncb),
                  as.integer(trivc),
                  wuwzbar = as.integer(0),
                  as.integer(dim1Uwzbar),
                  as.integer(dim2wzbar))

  if (collaps$okint != 1) {
    stop("some non-positive-definite weight matrices ",
         "detected in 'vsuff9' during the second call.")
  }

  dim(collaps$ybar) <- dim(collaps$wzybar) <- c(neff, ncb)
  dim(collaps$wzbar) <- c(neff, dim2wzbar)

  ldk <- 3 * ncb + 1  # 10/7/02; Previously 4 * ncb

  varmat <- if (var.arg) matrix(0, neff, ncb) else double(1)

  vsplin <- dotC(name = "Yee_spline",
                 xs = as.double(xbar),
                 as.double(collaps$wzybar),
                 as.double(collaps$wzbar),
                 xknot = as.double(knot),
                 n = as.integer(neff),
                 nknots = as.integer(nknots),
                 as.integer(ldk),
                 M = as.integer(ncb),
                 dim2wz = as.integer(dim2wzbar),
                 spar.nl = as.double(spar.nl),
                 lamvec = as.double(spar.nl),
                 iinfo = integer(1),
                 fv = double(neff * ncb),
                 Bcoef = double(nknots * ncb),
                 varmat = as.double(varmat),
                 levmat = double(neff * ncb),
                 as.double(dofr.nl),
                 ifvar = as.integer(var.arg),
                 ierror = as.integer(0),
                 n_lm = as.integer(neff),
                 double(nknots), double(nknots),
                 double(nknots), double(nknots),
                 double(1),
                 as.integer(0),
                 icontrsp = as.integer(contr.sp$maxit),
                 contrsp = as.double(unlist(contr.sp[1:4])))

  if (vsplin$ierror != 0) {
    stop("vsplin$ierror == ", vsplin$ierror,
         ". Something gone wrong in 'vsplin'")
  }
  if (vsplin$iinfo != 0) {
    stop("leading minor of order ", vsplin$iinfo,
         " is not positive-definite")
  }

  dim(vsplin$levmat) <- c(neff, ncb)  # A matrix even when ncb == 1

  if (ncb > 1) {
    dim(vsplin$fv) <- c(neff, ncb)
    if (var.arg)
      dim(vsplin$varmat) <- c(neff, ncb)
  }

  dofr.nl <- colSums(vsplin$levmat)  # Actual EDF used

  fv <- lfit@fitted.values + vsplin$fv %*% t(conmat)
  if (M > 1) {
    dimnames(fv) <- list(NULL, ny2)
  }

  dfvec[!nonlin] <- 2.0
  dfvec[ nonlin] <- dofr.nl
  if (ispar == 0) {
    spar[!nonlin] <- Inf
    spar[ nonlin] <- vsplin$spar.nl  # Actually used
  }

  fit.object <- new("vsmooth.spline.fit",
                    "Bcoefficients" = matrix(vsplin$Bcoef, nknots, ncb),
                    "knots"         = knot,
                    "xmax"          = usortx[neff],
                    "xmin"          = usortx[1])

  object <- new("vsmooth.spline",
                "call"        = my.call,
                "constraints" = constraints,
                "df"          = dfvec,
                "nlfit"       = fit.object,
                "lev"         = vsplin$levmat,
                "lfit"        = lfit,
                "spar"        = spar,  # if (ispar == 1) spar else vsplin$spar,
                "lambda"      = vsplin$lamvec,  #
                "w"           = collaps$wzbar,
                "x"           = usortx,
                "y"           = fv,
                "yin"         = yinyin)

  if (var.arg)
    object@var <- vsplin$varmat

  object
}
show.vsmooth.spline <- function(x, ...) {
  if (!is.null(cl <- x@call)) {
    cat("Call:\n")
    dput(cl)
  }

  ncb <- if (length(x@nlfit)) ncol(x@nlfit@Bcoefficients) else NULL
  cat("\nSmoothing Parameter (Spar):",
      if (length(ncb) && ncb == 1) format(x@spar) else
      paste(format(x@spar), collapse = ", "), "\n")

  cat("\nEquivalent Degrees of Freedom (Df):",
      if (length(ncb) && ncb == 1) format(x@df) else
      paste(format(x@df), collapse = ", "), "\n")

  if (!all(trivial.constraints(x@constraints) == 1)) {
    cat("\nConstraint matrices:\n")
    print(x@constraints)
  }

  invisible(x)
}


coefvsmooth.spline.fit <- function(object, ...) {
  object@Bcoefficients
}


coefvsmooth.spline <- function(object, matrix = FALSE, ...) {
  list(lfit = coefvlm(object@lfit, matrix.out = matrix),
       nlfit = coefvsmooth.spline.fit(object@nlfit))
}

fittedvsmooth.spline <- function(object, ...) {
  object@y
}

residvsmooth.spline <- function(object, ...) {
  as.matrix(object@yin - object@y)
}


plotvsmooth.spline <- function(x, xlab = "x", ylab = "", points = TRUE,
                               pcol = par()$col, pcex = par()$cex,
                               pch = par()$pch, lcol = par()$col,
                               lwd = par()$lwd, lty = par()$lty,
                               add = FALSE, ...) {
  points.arg <- points; rm(points)
  M <- ncol(x@y)
  pcol <- rep(pcol, length = M)
  pcex <- rep(pcex, length = M)
  pch  <- rep(pch,  length = M)
  lcol <- rep(lcol, length = M)
  lwd  <- rep(lwd,  length = M)
  lty  <- rep(lty,  length = M)
  if (!add)
    matplot(x@x, x@yin, type = "n", xlab = xlab, ylab = ylab, ...)
  for (ii in 1:ncol(x@y)) {
    if (points.arg)
      points(x@x, x@yin[, ii], col = pcol[ii],
             pch = pch[ii], cex = pcex[ii])
    lines(x@x, x@y[, ii], col = lcol[ii], lwd = lwd[ii], lty = lty[ii])
  }
  invisible(x)
}


predictvsmooth.spline <- function(object, x, deriv = 0, se.fit = FALSE) {
  if (se.fit)
    warning("'se.fit = TRUE' is not currently implemented. ",
            "Using 'se.fit = FALSE'")

  lfit  <- object@lfit   # Linear part of the vector spline
  nlfit <- object@nlfit  # Nonlinear part of the vector spline

  if (missing(x)) {
    if (deriv == 0) {
      return(list(x = object@x, y = object@y))
    } else {
      x <- object@x
      return(Recall(object, x, deriv))
    }
  }

  mat.coef <- coefvlm(lfit, matrix.out = TRUE)
  coeflfit <- t(mat.coef)  # M x p now
  M <- nrow(coeflfit)  # if (is.matrix(object@y)) ncol(object@y) else 1

  pred <- if (deriv == 0) predict(lfit, data.frame(x = x)) else
          if (deriv == 1) matrix(coeflfit[, 2], length(x), M,
                                 byrow = TRUE) else
          matrix(0, length(x), M)
  if (!length(nlfit@knots)) {
    return(list(x = x, y = pred))
  }

  nonlin <- (object@spar != Inf)

  conmat <- if (!length(lfit@constraints)) diag(M) else
            lfit@constraints[[2]]
  conmat <- conmat[, nonlin, drop = FALSE]  # Of nonlinear functions

  list(x = x,
       y = pred + predict(nlfit, x, deriv)$y %*% t(conmat))
}

predictvsmooth.spline.fit <- function(object, x, deriv = 0) {
  nknots <- nrow(object@Bcoefficients)
  drangex <- object@xmax - object@xmin
  if (missing(x))
    x <- seq(from = object@xmin, to = object@xmax,
             length.out = nknots-4)

  xs <- as.double((x - object@xmin) / drangex)

  bad.left  <- (xs < 0)
  bad.right <- (xs > 1)
  good <- !(bad.left | bad.right)

  ncb <- ncol(object@Bcoefficients)
  y <- matrix(as.numeric(NA), length(xs), ncb)
  if (ngood <- sum(good)) {
    junk <- dotC(name = "Yee_vbvs", as.integer(ngood),
                 as.double(object@knots),
                 as.double(object@Bcoefficients),
                 as.double(xs[good]),
                 smomat = double(ngood * ncb),
                 as.integer(nknots),
                 as.integer(deriv),
                 as.integer(ncb))
    y[good, ] <- junk$smomat

    if (TRUE && deriv > 1) {
      edges <- xs <= 0 | xs >= 1  # Zero the edges & beyond explicitly
      y[edges, ] <- 0
    }
  }

  if (any(!good)) {
    xrange <- c(object@xmin, object@xmax)
    if (deriv == 0) {
      end.object <- Recall(object, xrange)$y
      end.slopes <- Recall(object, xrange, 1)$y * drangex

      if (any(bad.left)) {
        y[bad.left, ] <-
          rep(end.object[1, ], rep(sum(bad.left), ncb)) +
          rep(end.slopes[1, ], rep(sum(bad.left), ncb)) * xs[bad.left]
      }
      if (any(bad.right)) {
        y[bad.right, ] <-
          rep(end.object[2, ], rep(sum(bad.right), ncb)) +
          rep(end.slopes[2, ], rep(sum(bad.right), ncb)) *
          (xs[bad.right] - 1)
      }
    } else if (deriv == 1) {
      end.slopes <- Recall(object, xrange, 1)$y * drangex
      y[bad.left, ]  <- rep(end.slopes[1, ], rep(sum(bad.left),  ncb))
      y[bad.right, ] <- rep(end.slopes[2, ], rep(sum(bad.right), ncb))
    } else
      y[!good, ] <- 0
  }

  if (deriv > 0)
    y <- y / (drangex^deriv)
  list(x = x, y = y)
}


valid.vknotl2 <- function(knot, tol = 1/1024) {
  junk <- dotC(name = "Yee_pknootl2",
               knot = as.double(knot),
               as.integer(length(knot)),
               keep = integer(length(knot)),
               as.double(tol))
  keep <- as.logical(junk$keep)
  knot <- junk$knot[keep]
  if (length(knot) <= 11) {
    stop("too few (distinct) knots")
  }
  knot
}
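# Prediction sketch (not run): evaluate a fitted vector spline and its
# first derivative on a new grid.  'fit' is hypothetical, as above.
if (FALSE) {
  fit <- vsmooth.spline(x = sort(runif(50)), y = rnorm(50), df = 4)
  newx <- seq(0.1, 0.9, by = 0.1)
  predict(fit, x = newx)$y             # Smoothed values
  predict(fit, x = newx, deriv = 1)$y  # First derivatives
}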
VGAM/R/vlm.wfit.q0000644000176000001440000000757512136651110013177 0ustar ripleyusers
# These functions are
# Copyright (C) 1998-2013 T.W. Yee, University of Auckland.
# All rights reserved.


vlm.wfit <-
  function(xmat, zmat, Blist, wz = NULL, U = NULL,
           matrix.out = FALSE, is.vlmX = FALSE, rss = TRUE, qr = FALSE,
           x.ret = FALSE,
           offset = NULL,
           omit.these = NULL, only.rss = FALSE,
           ncolx = if (matrix.out && is.vlmX) {
                     stop("need argument 'ncolx'")
                   } else {
                     ncol(xmat)
                   },
           xij = NULL,
           lp.names = NULL, Eta.range = NULL, Xm2 = NULL, ...) {

  missing.Blist <- missing(Blist)
  zmat <- as.matrix(zmat)
  n <- nrow(zmat)
  M <- ncol(zmat)
  if (!only.rss) {
    contrast.save <- attr(xmat, "contrasts")
    znames <- dimnames(zmat)[[2]]
  }

  if (length(offset)) {
    zmat <- zmat - offset
  }
  if (missing(U) || !length(U)) {
    U <- vchol(wz, M = M, n = n, silent = FALSE)
  }
  dU <- dim(U)
  if (dU[2] != n) {
    stop("input unconformable")
  }

  X_vlm_save <- if (is.vlmX) {
    xmat
  } else {
    if (missing.Blist || !length(Blist)) {
      Blist <- replace.constraints(vector("list", ncol(xmat)),
                                   diag(M), 1:ncol(xmat))  # NULL
    }
    lm2vlm.model.matrix(x = xmat, Blist = Blist, M = M,
                        assign.attributes = FALSE,
                        xij = xij, Xm2 = Xm2)
  }
  X_vlm <- mux111(U, X_vlm_save, M = M)
  z_vlm <- mux22(U, zmat, M = M, upper = TRUE, as.matrix = FALSE)

  if (length(omit.these)) {
    X_vlm <- X_vlm[!omit.these, , drop = FALSE]
    z_vlm <- z_vlm[!omit.these]
  }

  ans <- lm.fit(X_vlm, y = z_vlm, ...)

  if (rss) {
    ans$rss <- sum(ans$resid^2)
    if (only.rss)
      return(list(rss = ans$rss))
  }

  if (length(omit.these) && any(omit.these)) {
    stop("code beyond here cannot handle omitted observations")
  }


  fv <- ans$fitted.values
  dim(fv) <- c(M, n)
  fv <- vbacksub(U, fv, M = M, n = n)  # Have to premultiply fv by U

  if (length(Eta.range)) {
    if (length(Eta.range) != 2) {
      stop("length(Eta.range) must equal 2")
    }
    fv <- ifelse(fv < Eta.range[1], Eta.range[1], fv)
    fv <- ifelse(fv > Eta.range[2], Eta.range[2], fv)
  }

  ans$fitted.values <- if (M == 1) c(fv) else fv
  if (M > 1) {
    dimnames(ans$fitted.values) <- list(dimnames(zmat)[[1]], znames)
  }
  ans$residuals <- if (M == 1) c(zmat-fv) else zmat-fv
  if (M > 1) {
    dimnames(ans$residuals) <- list(dimnames(ans$residuals)[[1]], znames)
  }
  ans$misc <- list(M = M, n = n)
  ans$call <- match.call()

  ans$constraints <- Blist
  ans$contrasts <- contrast.save
  if (x.ret) {
    ans$X_vlm <- X_vlm_save
  }

  if (!is.null(offset)) {
    ans$fitted.values <- ans$fitted.values + offset
  }

  if (!matrix.out) {
    return(ans)
  }

  dx2 <- if (is.vlmX) NULL else dimnames(xmat)[[2]]
  B <- matrix(as.numeric(NA), nrow = M, ncol = ncolx,
              dimnames = list(lp.names, dx2))
  if (is.null(Blist)) {
    Blist <- replace.constraints(vector("list", ncolx),
                                 diag(M), 1:ncolx)
  }
  ncolBlist <- unlist(lapply(Blist, ncol))
  temp <- c(0, cumsum(ncolBlist))
  for (ii in 1:ncolx) {
    index <- (temp[ii]+1):temp[ii+1]
    cm <- Blist[[ii]]
    B[, ii] <- cm %*% ans$coef[index]
  }
  ans$mat.coefficients <- t(B)
  ans
}

if (FALSE)
print.vlm.wfit <- function(x, ...) {
  if (!is.null(cl <- x$call)) {
    cat("Call:\n")
    dput(cl)
  }

  coef <- x$coefficients
  cat("\nCoefficients:\n")
  print(coef, ...)

  rank <- x$rank
  if (is.null(rank)) {
    rank <- sum(!is.na(coef))
  }
  n <- x$misc$n
  M <- x$misc$M
  rdf <- x$df.resid
  if (is.null(rdf)) {
    rdf <- (n - rank) * M
  }
  cat("\nDegrees of Freedom:", n*M, "Total;", rdf, "Residual\n")

  if (!is.null(x$rss)) {
    cat("Residual Sum of Squares:", format(x$rss), "\n")
  }

  invisible(x)
}
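# Sketch of the weighted fit that vlm.wfit() performs (not run).  This is
# an illustrative base-R reimplementation of the M = 1 special case only;
# it is not the function's actual code path, which works on the big VLM
# model matrix via mux111()/mux22() and C routines.
if (FALSE) {
  n <- 30
  x <- cbind("(Intercept)" = 1, x1 = rnorm(n))
  z <- rnorm(n)
  wz <- runif(n)               # M = 1: one weight per observation
  U <- sqrt(wz)                # Cholesky factor of a 1 x 1 weight "matrix"
  fit <- lm.fit(U * x, U * z)  # Same normal equations vlm.wfit() solves
  sum(fit$residuals^2)         # Corresponds to ans$rss
}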
VGAM/R/vlm.R0000644000176000001440000001232012136651110012150 0ustar ripleyusers
# These functions are
# Copyright (C) 1998-2013 T.W. Yee, University of Auckland.
# All rights reserved.


vlm <- function(formula,
                data = list(),
                weights = NULL, subset = NULL, na.action = na.fail,
                prior.weights = NULL,
                control = vlm.control(...),
                method = "qr",
                model = FALSE, x.arg = FALSE, y.arg = TRUE, qr.arg = TRUE,
                contrasts = NULL,
                constraints = NULL,
                extra = NULL, offset = NULL,
                smart = TRUE, ...) {

  dataname <- as.character(substitute(data))  # "list" if no data=
  function.name <- "vlm"

  ocall <- match.call()

  if (smart)
    setup.smart("write")

  if (missing(data))
    data <- environment(formula)

  mf <- match.call(expand.dots = FALSE)
  m <- match(c("formula", "data", "subset", "weights",
               "na.action", "offset"), names(mf), 0)
  mf <- mf[c(1, m)]
  mf$drop.unused.levels <- TRUE
  mf[[1]] <- as.name("model.frame")
  mf <- eval(mf, parent.frame())
  switch(method,
         model.frame = return(mf),
         qr = 1,
         stop("invalid 'method': ", method))
  mt <- attr(mf, "terms")

  if (method != "qr")
    stop("only method = 'qr' is implemented")

  xlev <- .getXlevels(mt, mf)
  y <- model.response(mf, "any")  # model.extract(mf, "response")
  x <- if (!is.empty.model(mt)) model.matrix(mt, mf, contrasts) else
       matrix(, NROW(y), 0)
  attr(x, "assign") <- attrassigndefault(x, mt)

  offset <- model.offset(mf)
  if (is.null(offset))
    offset <- 0  # yyy ???
  if (length(offset) && any(offset != 0))
    stop("offsets are redundant for (vector) linear models")

  wz <- model.weights(mf)

  y <- as.matrix(y)
  M <- ncol(as.matrix(y))
  n <- nrow(x)
  dy <- dimnames(y)
  dy1 <- if (length(dy[[1]])) dy[[1]] else dimnames(mf)[[1]]
  dy2 <- if (length(dy[[2]])) dy[[2]] else paste("Y", 1:M, sep = "")
  dimnames(y) <- list(dy1, dy2)
  predictors.names <- dy2

  if (!length(prior.weights)) {
    prior.weights <- rep(1, len = n)
    names(prior.weights) <- dy1
  }
  if (any(prior.weights <= 0))
    stop("only positive weights allowed")
  if (!length(wz)) {
    wz <- matrix(prior.weights, n, M)
    identity.wts <- TRUE
  } else {
    identity.wts <- FALSE
    temp <- ncol(as.matrix(wz))
    if (temp < M || temp > M*(M+1)/2)
      stop("input 'w' must have between ", M, " and ", M*(M+1)/2,
           " columns")
    wz <- prior.weights * wz
  }

  control <- control
  Blist <- process.constraints(constraints, x, M)
  intercept.only <- ncol(x) == 1 &&
                    dimnames(x)[[2]] == "(Intercept)"

  fit <- vlm.wfit(xmat = x, zmat = y, Blist = Blist, wz = wz,
                  U = NULL, matrix.out = FALSE, is.vlmX = FALSE,
                  rss = TRUE, qr = qr.arg, x.ret = TRUE, offset = offset)

  ncol_X_vlm <- fit$rank
  fit$R <- fit$qr$qr[1:ncol_X_vlm, 1:ncol_X_vlm, drop = FALSE]
  fit$R[lower.tri(fit$R)] <- 0

  fit$constraints <- Blist

  dnrow_X_vlm <- labels(fit$X_vlm)
  xnrow_X_vlm <- dnrow_X_vlm[[2]]

  dn <- labels(x)
  xn <- dn[[2]]
  dX_vlm <- as.integer(dim(fit$X_vlm))
  nrow_X_vlm <- dX_vlm[[1]]
  ncol_X_vlm <- dX_vlm[[2]]

  misc <- list(
    colnames.x = xn,
    colnames.X_vlm = xnrow_X_vlm,
    function.name = function.name,
    intercept.only = intercept.only,
    predictors.names = predictors.names,
    M = M,
    n = nrow(x),
    nrow_X_vlm = nrow_X_vlm,
    orig.assign = attr(x, "assign"),
    p = ncol(x),
    ncol_X_vlm = ncol_X_vlm,
    ynames = dimnames(y)[[2]])

  fit$misc <- misc

  fit$misc$dataname <- dataname

  if (smart) {
    fit$smart.prediction <- get.smart.prediction()
    wrapup.smart()
  }

  answer <- new("vlm",
    "assign"        = attr(x, "assign"),
    "call"          = ocall,
    "coefficients"  = fit$coefficients,
    "constraints"   = fit$constraints,
    "control"       = control,
    "criterion"     = list(deviance = fit$rss),
    "dispersion"    = 1,
    "df.residual"   = fit$df.residual,
    "df.total"      = n*M,
    "effects"       = fit$effects,
    "fitted.values" = as.matrix(fit$fitted.values),
    "misc"          = fit$misc,
    "model"         = if (model) mf else data.frame(),
    "R"             = fit$R,
    "rank"          = fit$rank,
    "residuals"     = as.matrix(fit$residuals),
    "rss"           = fit$rss,
    "smart.prediction" = as.list(fit$smart.prediction),
    "terms"         = list(terms = mt))

  if (!smart)
    answer@smart.prediction <- list(smart.arg = FALSE)

  slot(answer, "prior.weights") <- as.matrix(prior.weights)

  if (length(attr(x, "contrasts")))
    slot(answer, "contrasts") <- attr(x, "contrasts")
  slot(answer, "na.action") <- if (length(aaa <- attr(mf, "na.action")))
    list(aaa) else list()
  if (length(offset))
    slot(answer, "offset") <- as.matrix(offset)
  if (qr.arg) {
    class(fit$qr) <- "list"
    slot(answer, "qr") <- fit$qr
  }
  if (x.arg)
    slot(answer, "x") <- x  # The 'small' design matrix
  if (control$save.weight)
    slot(answer, "weights") <- wz
  if (length(xlev))
    slot(answer, "xlevels") <- xlev
  if (y.arg)
    slot(answer, "y") <- as.matrix(y)

  answer
}
attr(vlm, "smart") <- TRUE
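# Usage sketch for vlm() (not run).  A bivariate-response linear model on
# simulated data; the variable names are arbitrary illustrations.
if (FALSE) {
  set.seed(1)
  mydat <- data.frame(x2 = runif(40))
  mydat <- transform(mydat,
                     y1 = 1 + 2 * x2 + rnorm(40, sd = 0.1),
                     y2 = 3 - 4 * x2 + rnorm(40, sd = 0.1))
  fit <- vlm(cbind(y1, y2) ~ x2, data = mydat)
  coef(fit)  # One intercept and one slope per response
}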
{ warning("iterations terminated because ", "half-step sizes are very small") one.more <- FALSE } else { if (trace) { cat("VGLM linear loop ", iter, ": ", criterion, "= ") UUUU <- switch(criterion, coefficients = format(new.crit, dig = round(2-log10(epsilon))), format(round(new.crit, 4))) switch(criterion, coefficients = { if (length(new.crit) > 2) cat("\n"); cat(UUUU, fill = TRUE, sep = ", ")}, cat(UUUU, fill = TRUE, sep = ", ")) } one.more <- eval(control$convergence) } } else { one.more <- eval(control$convergence) } } flush.console() if (!is.logical(one.more)) one.more <- FALSE if (one.more) { iter <- iter + 1 deriv.mu <- eval(slot(family, "deriv")) wz <- eval(slot(family, "weight")) if (control$checkwz) wz <- checkwz(wz, M = M, trace = trace, wzepsilon = control$wzepsilon) U <- vchol(wz, M = M, n = n, silent = !trace) tvfor <- vforsub(U, as.matrix(deriv.mu), M = M, n = n) z <- eta + vbacksub(U, tvfor, M = M, n = n) - offset c.list$z <- z c.list$U <- U if (copy_X_vlm) c.list$X_vlm <- X_vlm_save } c.list$one.more <- one.more c.list$coeff <- runif(length(new.coeffs)) # 12/3/03; twist needed! old.coeffs <- new.coeffs } c.list }) copy_X_vlm <- FALSE # May be overwritten in @initialize stepsize <- orig.stepsize old.coeffs <- coefstart intercept.only <- ncol(x) == 1 && dimnames(x)[[2]] == "(Intercept)" y.names <- predictors.names <- NULL # May be overwritten in @initialize n.save <- n if (length(slot(family, "initialize"))) eval(slot(family, "initialize")) # Initialize mu & M (& optionally w) if (length(etastart)) { eta <- etastart mu <- if (length(mustart)) mustart else if (length(body(slot(family, "linkinv")))) slot(family, "linkinv")(eta, extra) else warning("argument 'etastart' assigned a value ", "but there is no 'linkinv' slot to use it") } if (length(mustart)) { mu <- mustart if (length(body(slot(family, "linkfun")))) { eta <- slot(family, "linkfun")(mu, extra) } else { warning("argument 'mustart' assigned a value ", "but there is no 'link' slot to use it") } } M <- if (is.matrix(eta)) ncol(eta) else 1 if (length(slot(family, "constraints"))) eval(slot(family, "constraints")) Blist <- process.constraints(constraints, x, M, specialCM = specialCM) ncolBlist <- unlist(lapply(Blist, ncol)) dimB <- sum(ncolBlist) X_vlm_save <- if (length(X_vlm_arg)) X_vlm_arg else lm2vlm.model.matrix(x, Blist, xij = control$xij, Xm2 = Xm2) if (length(coefstart)) { eta <- if (ncol(X_vlm_save)>1) X_vlm_save %*% coefstart + offset else X_vlm_save * coefstart + offset eta <- if (M > 1) matrix(eta, ncol = M, byrow = TRUE) else c(eta) mu <- slot(family, "linkinv")(eta, extra) } if (criterion != "coefficients") { tfun <- slot(family, criterion) # family[[criterion]] } iter <- 1 new.crit <- switch(criterion, coefficients = 1, tfun(mu = mu, y = y, w = w, res = FALSE, eta = eta, extra)) old.crit <- if (minimize.criterion) 10*new.crit+10 else -10*new.crit-10 deriv.mu <- eval(slot(family, "deriv")) wz <- eval(slot(family, "weight")) if (control$checkwz) wz <- checkwz(wz, M = M, trace = trace, wzepsilon = control$wzepsilon) U <- vchol(wz, M = M, n = n, silent = !trace) tvfor <- vforsub(U, as.matrix(deriv.mu), M = M, n = n) z <- eta + vbacksub(U, tvfor, M = M, n = n) - offset c.list <- list(z = as.double(z), fit = as.double(t(eta)), one.more = TRUE, coeff = as.double(rep(1, ncol(X_vlm_save))), U = as.double(U), copy_X_vlm = copy_X_vlm, X_vlm = if (copy_X_vlm) as.double(X_vlm_save) else double(3)) dX_vlm <- as.integer(dim(X_vlm_save)) nrow_X_vlm <- dX_vlm[[1]] ncol_X_vlm <- dX_vlm[[2]] if (nrow_X_vlm < ncol_X_vlm) 
stop(ncol_X_vlm, "parameters but only ", nrow_X_vlm, " observations") bf.call <- expression(vlm.wfit(xmat = X_vlm_save, z, Blist = NULL, U = U, matrix.out = FALSE, is.vlmX = TRUE, qr = qr.arg, xij = NULL)) while(c.list$one.more) { tfit <- eval(bf.call) # fit$smooth.frame is new c.list$coeff <- tfit$coefficients tfit$predictors <- tfit$fitted.values c.list$fit <- tfit$fitted.values c.list <- eval(new.s.call) NULL } if (maxit > 1 && iter >= maxit && !control$noWarning) warning("convergence not obtained in ", maxit, " iterations") dnrow_X_vlm <- labels(X_vlm_save) xnrow_X_vlm <- dnrow_X_vlm[[2]] ynrow_X_vlm <- dnrow_X_vlm[[1]] if (length(slot(family, "fini"))) eval(slot(family, "fini")) if (M > 1) tfit$predictors <- matrix(tfit$predictors, n, M) coefs <- tfit$coefficients asgn <- attr(X_vlm_save, "assign") names(coefs) <- xnrow_X_vlm rank <- tfit$rank cnames <- xnrow_X_vlm if (check.rank && rank < ncol_X_vlm) stop("vglm only handles full-rank models (currently)") R <- tfit$qr$qr[1:ncol_X_vlm, 1:ncol_X_vlm, drop = FALSE] R[lower.tri(R)] <- 0 attributes(R) <- list(dim = c(ncol_X_vlm, ncol_X_vlm), dimnames = list(cnames, cnames), rank=rank) effects <- tfit$effects neff <- rep("", nrow_X_vlm) neff[seq(ncol_X_vlm)] <- cnames names(effects) <- neff dim(tfit$predictors) <- c(n, M) dn <- labels(x) yn <- dn[[1]] xn <- dn[[2]] residuals <- z - tfit$predictors if (M == 1) { tfit$predictors <- as.vector(tfit$predictors) residuals <- as.vector(residuals) names(residuals) <- names(tfit$predictors) <- yn } else { dimnames(residuals) <- dimnames(tfit$predictors) <- list(yn, predictors.names) } if (is.matrix(mu)) { if (length(dimnames(y)[[2]])) { y.names <- dimnames(y)[[2]] } if (length(dimnames(mu)[[2]])) { y.names <- dimnames(mu)[[2]] } dimnames(mu) <- list(yn, y.names) } else { names(mu) <- names(fv) } df.residual <- nrow_X_vlm - rank fit <- list(assign = asgn, coefficients = coefs, constraints = Blist, df.residual = df.residual, df.total = n*M, effects = effects, fitted.values = mu, offset = offset, rank = rank, residuals = residuals, R = R, terms = Terms) # terms: This used to be done in vglm() if (qr.arg) { fit$qr <- tfit$qr dimnames(fit$qr$qr) <- dnrow_X_vlm } if (M == 1) { wz <- as.vector(wz) # Convert wz into a vector } # else fit$weights <- if (save.weight) wz else NULL misc <- list( colnames.x = xn, colnames.X_vlm = xnrow_X_vlm, criterion = criterion, function.name = function.name, intercept.only = intercept.only, predictors.names = predictors.names, M = M, n = n, nonparametric = nonparametric, nrow_X_vlm = nrow_X_vlm, orig.assign = attr(x, "assign"), p = ncol(x), ncol_X_vlm = ncol_X_vlm, ynames = dimnames(y)[[2]]) crit.list <- list() if (criterion != "coefficients") crit.list[[criterion]] <- fit[[criterion]] <- new.crit for(ii in names(.min.criterion.VGAM)) { if (ii != criterion && any(slotNames(family) == ii) && length(body(slot(family, ii)))) { fit[[ii]] <- crit.list[[ii]] <- (slot(family, ii))(mu = mu, y = y, w = w, res = FALSE, eta = eta, extra) } } if (w[1] != 1 || any(w != w[1])) fit$prior.weights <- w if (length(slot(family, "last"))) eval(slot(family, "last")) structure(c(fit, list(predictors = tfit$predictors, contrasts = attr(x, "contrasts"), control = control, crit.list = crit.list, extra = extra, family = family, iter = iter, misc = misc, post = post, rss = tfit$rss, x = x, y = y)), vclass = slot(family, "vfamily")) } VGAM/R/vglm.control.q0000644000176000001440000001203712136651110014042 0ustar ripleyusers# These functions are # Copyright (C) 1998-2013 T.W. Yee, University of Auckland. 
VGAM/R/vglm.control.q0000644000176000001440000001203712136651110014042 0ustar ripleyusers
# These functions are
# Copyright (C) 1998-2013 T.W. Yee, University of Auckland.
# All rights reserved.


.min.criterion.VGAM <- c("deviance"      = TRUE,
                         "loglikelihood" = FALSE,
                         "AIC"           = TRUE,
                         "Likelihood"    = FALSE,
                         "rss"           = TRUE,
                         "coefficients"  = TRUE)


vlm.control <- function(save.weight = TRUE,
                        tol = 1e-7,
                        method = "qr",
                        checkwz = TRUE,
                        wzepsilon = .Machine$double.eps^0.75,
                        ...) {
  if (tol <= 0) {
    warning("tol not positive; using 1e-7 instead")
    tol <- 1e-7
  }
  if (!is.logical(checkwz) || length(checkwz) != 1)
    stop("bad input for argument 'checkwz'")
  if (!is.Numeric(wzepsilon, allowable.length = 1, positive = TRUE))
    stop("bad input for argument 'wzepsilon'")

  list(save.weight = save.weight,
       tol = tol,
       method = method,
       checkwz = checkwz,
       wzepsilon = wzepsilon)
}


vglm.control <- function(checkwz = TRUE,
                         Check.rank = TRUE,
                         criterion = names(.min.criterion.VGAM),
                         epsilon = 1e-7,
                         half.stepsizing = TRUE,
                         maxit = 30,
                         noWarning = FALSE,
                         stepsize = 1,
                         save.weight = FALSE,
                         trace = FALSE,
                         wzepsilon = .Machine$double.eps^0.75,
                         xij = NULL,
                         ...) {

  if (mode(criterion) != "character" && mode(criterion) != "name")
    criterion <- as.character(substitute(criterion))
  criterion <- pmatch(criterion[1], names(.min.criterion.VGAM),
                      nomatch = 1)
  criterion <- names(.min.criterion.VGAM)[criterion]

  if (!is.logical(checkwz) || length(checkwz) != 1)
    stop("bad input for argument 'checkwz'")

  if (!is.Numeric(wzepsilon, allowable.length = 1, positive = TRUE))
    stop("bad input for argument 'wzepsilon'")

  convergence <- expression({
    switch(criterion,
           coefficients = if (iter == 1) iter < maxit else
                          (iter < maxit &&
                          max(abs(new.crit - old.crit) /
                              (abs(old.crit) + epsilon)) > epsilon),
           iter < maxit &&
           abs(old.crit - new.crit) /
               (abs(old.crit) + epsilon) > epsilon)
  })

  if (!is.Numeric(epsilon, allowable.length = 1, positive = TRUE)) {
    warning("bad input for argument 'epsilon'; using 0.00001 instead")
    epsilon <- 0.00001
  }
  if (!is.Numeric(maxit, allowable.length = 1,
                  positive = TRUE, integer.valued = TRUE)) {
    warning("bad input for argument 'maxit'; using 30 instead")
    maxit <- 30
  }
  if (!is.Numeric(stepsize, allowable.length = 1, positive = TRUE)) {
    warning("bad input for argument 'stepsize'; using 1 instead")
    stepsize <- 1
  }

  list(checkwz = checkwz,
       Check.rank = Check.rank,
       convergence = convergence,
       criterion = criterion,
       epsilon = epsilon,
       half.stepsizing = as.logical(half.stepsizing)[1],
       maxit = maxit,
       noWarning = as.logical(noWarning)[1],
       min.criterion = .min.criterion.VGAM,
       save.weight = as.logical(save.weight)[1],
       stepsize = stepsize,
       trace = as.logical(trace)[1],
       wzepsilon = wzepsilon,
       xij = if (is(xij, "formula")) list(xij) else xij)
}


vcontrol.expression <- expression({

  control <- control  # First one, e.g., vgam.control(...)

  mylist <- family@vfamily
  for (i in length(mylist):1) {
    for (ii in 1:2) {
      temp <- paste(if (ii == 1) "" else
                    paste(function.name, ".", sep = ""),
                    mylist[i], ".control", sep = "")
      tempexists <- if (is.R())
                      exists(temp, envir = VGAM:::VGAMenv) else
                      exists(temp, inherit = TRUE)
      if (tempexists) {
        temp <- get(temp)
        temp <- temp(...)
        for (k in names(temp))
          control[[k]] <- temp[[k]]
      }
    }
  }

  orig.criterion <- control$criterion
  if (control$criterion != "coefficients") {
    try.crit <- c(names(.min.criterion.VGAM), "coefficients")
    for (i in try.crit) {
      if (any(slotNames(family) == i) &&
          (( is.R() && length(body(slot(family, i)))) ||
          ((!is.R() && length(slot(family, i)) > 1)))) {
        control$criterion <- i
        break
      } else
        control$criterion <- "coefficients"
    }
  }
  control$min.criterion <- control$min.criterion[control$criterion]

  for (ii in 1:2) {
    temp <- paste(if (ii == 1) "" else
                  paste(function.name, ".", sep = ""),
                  family@vfamily[1], ".", control$criterion,
                  ".control", sep = "")
    if (exists(temp, inherit = T)) {
      temp <- get(temp)
      temp <- temp(...)
      for (k in names(temp))
        control[[k]] <- temp[[k]]
    }
  }
})
VGAM/R/vglm.R0000644000176000001440000001712112136651110012323 0ustar ripleyusers
# These functions are
# Copyright (C) 1998-2013 T.W. Yee, University of Auckland.
# All rights reserved.


vglm <- function(formula,
                 family, data = list(),
                 weights = NULL, subset = NULL, na.action = na.fail,
                 etastart = NULL, mustart = NULL, coefstart = NULL,
                 control = vglm.control(...),
                 offset = NULL,
                 method = "vglm.fit",
                 model = FALSE, x.arg = TRUE, y.arg = TRUE,
                 contrasts = NULL,
                 constraints = NULL,
                 extra = list(),
                 form2 = NULL,
                 qr.arg = TRUE, smart = TRUE, ...) {

  dataname <- as.character(substitute(data))  # "list" if no data=
  function.name <- "vglm"

  ocall <- match.call()

  if (smart)
    setup.smart("write")

  if (missing(data))
    data <- environment(formula)

  mf <- match.call(expand.dots = FALSE)
  m <- match(c("formula", "data", "subset", "weights", "na.action",
               "etastart", "mustart", "offset"), names(mf), 0)
  mf <- mf[c(1, m)]
  mf$drop.unused.levels <- TRUE
  mf[[1]] <- as.name("model.frame")
  mf <- eval(mf, parent.frame())
  switch(method,
         model.frame = return(mf),
         vglm.fit = 1,
         stop("invalid 'method': ", method))
  mt <- attr(mf, "terms")

  xlev <- .getXlevels(mt, mf)
  y <- model.response(mf, "any")  # model.extract(mf, "response")
  x <- if (!is.empty.model(mt)) model.matrix(mt, mf, contrasts) else
       matrix(, NROW(y), 0)
  attr(x, "assign") <- attrassigndefault(x, mt)

  if (!is.null(form2)) {
    if (!is.null(subset))
      stop("argument 'subset' cannot be used when ",
           "argument 'form2' is used")
    retlist <- shadowvglm(formula = form2,
                          family = family, data = data,
                          na.action = na.action,
                          control = vglm.control(...),
                          method = method,
                          model = model, x.arg = x.arg, y.arg = y.arg,
                          contrasts = contrasts,
                          constraints = constraints,
                          extra = extra,
                          qr.arg = qr.arg)
    Ym2 <- retlist$Ym2
    Xm2 <- retlist$Xm2

    if (length(Ym2)) {
      if (nrow(as.matrix(Ym2)) != nrow(as.matrix(y)))
        stop("number of rows of 'y' and 'Ym2' are unequal")
    }
    if (length(Xm2)) {
      if (nrow(as.matrix(Xm2)) != nrow(as.matrix(x)))
        stop("number of rows of 'y' and 'Ym2' are unequal")
    }
  } else {
    Xm2 <- Ym2 <- NULL
  }

  offset <- model.offset(mf)
  if (is.null(offset))
    offset <- 0  # yyy ???
  w <- model.weights(mf)
  if (!length(w)) {
    w <- rep(1, nrow(mf))
  } else if (ncol(as.matrix(w)) == 1 && any(w < 0))
    stop("negative weights not allowed")

  if (is.character(family))
    family <- get(family)
  if (is.function(family))
    family <- family()
  if (!inherits(family, "vglmff")) {
    stop("'family = ", family, "' is not a VGAM family function")
  }

  eval(vcontrol.expression)

  if (length(slot(family, "first")))
    eval(slot(family, "first"))

  vglm.fitter <- get(method)

  fit <- vglm.fitter(x = x, y = y, w = w, offset = offset,
                     Xm2 = Xm2, Ym2 = Ym2,
                     etastart = etastart, mustart = mustart,
                     coefstart = coefstart,
                     family = family,
                     control = control,
                     constraints = constraints,
                     criterion = control$criterion,
                     extra = extra,
                     qr.arg = qr.arg,
                     Terms = mt, function.name = function.name, ...)

  fit$misc$dataname <- dataname

  if (smart) {
    fit$smart.prediction <- get.smart.prediction()
    wrapup.smart()
  }

  answer <- new(Class = "vglm",
    "assign"       = attr(x, "assign"),
    "call"         = ocall,
    "coefficients" = fit$coefficients,
    "constraints"  = fit$constraints,
    "criterion"    = fit$crit.list,
    "df.residual"  = fit$df.residual,
    "df.total"     = fit$df.total,
    "dispersion"   = 1,
    "effects"      = fit$effects,
    "family"       = fit$family,
    "misc"         = fit$misc,
    "model"        = if (model) mf else data.frame(),
    "R"            = fit$R,
    "rank"         = fit$rank,
    "residuals"    = as.matrix(fit$residuals),
    "rss"          = fit$rss,
    "smart.prediction" = as.list(fit$smart.prediction),
    "terms"        = list(terms = mt))

  if (!smart)
    answer@smart.prediction <- list(smart.arg = FALSE)

  if (qr.arg) {
    class(fit$qr) <- "list"
    slot(answer, "qr") <- fit$qr
  }
  if (length(attr(x, "contrasts")))
    slot(answer, "contrasts") <- attr(x, "contrasts")
  if (length(fit$fitted.values))
    slot(answer, "fitted.values") <- as.matrix(fit$fitted.values)
  slot(answer, "na.action") <- if (length(aaa <- attr(mf, "na.action")))
    list(aaa) else list()
  if (length(offset))
    slot(answer, "offset") <- as.matrix(offset)
  if (length(fit$weights))
    slot(answer, "weights") <- as.matrix(fit$weights)
  if (x.arg)
    slot(answer, "x") <- fit$x  # The 'small' (lm) design matrix
  if (x.arg && length(Xm2))
    slot(answer, "Xm2") <- Xm2  # The second (lm) design matrix
  if (y.arg && length(Ym2))
    slot(answer, "Ym2") <- as.matrix(Ym2)  # The second response
  if (!is.null(form2))
    slot(answer, "callXm2") <- retlist$call
  answer@misc$formula <- formula
  answer@misc$form2 <- form2

  if (length(xlev))
    slot(answer, "xlevels") <- xlev
  if (y.arg)
    slot(answer, "y") <- as.matrix(fit$y)

  slot(answer, "control") <- fit$control
  slot(answer, "extra") <- if (length(fit$extra)) {
    if (is.list(fit$extra)) fit$extra else {
      warning("'extra' is not a list, therefore placing ",
              "'extra' into a list")
      list(fit$extra)
    }
  } else list()  # R-1.5.0
  slot(answer, "iter") <- fit$iter
  slot(answer, "post") <- fit$post

  fit$predictors <- as.matrix(fit$predictors)  # Must be a matrix
  if (length(fit$misc$predictors.names) == ncol(fit$predictors))
    dimnames(fit$predictors) <- list(dimnames(fit$predictors)[[1]],
                                     fit$misc$predictors.names)
  slot(answer, "predictors") <- fit$predictors
  if (length(fit$prior.weights))
    slot(answer, "prior.weights") <- as.matrix(fit$prior.weights)

  answer
}
attr(vglm, "smart") <- TRUE
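# Usage sketch for vglm() (not run).  The family function cumulative()
# and the pneumo data frame live elsewhere in the package; treat their
# use here as an illustrative assumption, not part of this file.
if (FALSE) {
  pneumo <- transform(pneumo, let = log(exposure.time))
  fit <- vglm(cbind(normal, mild, severe) ~ let,
              family = cumulative(parallel = TRUE), data = pneumo)
  coef(fit, matrix = TRUE)
}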
shadowvglm <-
  function(formula,
           family, data = list(),
           weights = NULL, subset = NULL, na.action = na.fail,
           etastart = NULL, mustart = NULL, coefstart = NULL,
           control = vglm.control(...),
           offset = NULL,
           method = "vglm.fit",
           model = FALSE, x.arg = TRUE, y.arg = TRUE,
           contrasts = NULL,
           constraints = NULL,
           extra = list(),
           qr.arg = FALSE, ...) {

  dataname <- as.character(substitute(data))  # "list" if no data=
  function.name <- "shadowvglm"

  ocall <- match.call()

  if (missing(data))
    data <- environment(formula)

  mf <- match.call(expand.dots = FALSE)
  m <- match(c("formula", "data", "subset", "weights", "na.action",
               "etastart", "mustart", "offset"), names(mf), 0)
  mf <- mf[c(1, m)]
  mf$drop.unused.levels <- TRUE
  mf[[1]] <- as.name("model.frame")
  mf <- eval(mf, parent.frame())
  switch(method,
         model.frame = return(mf),
         vglm.fit = 1,
         stop("invalid 'method': ", method))
  mt <- attr(mf, "terms")

  x <- y <- NULL

  xlev <- .getXlevels(mt, mf)
  y <- model.response(mf, "any")  # model.extract(mf, "response")
  x <- if (!is.empty.model(mt)) model.matrix(mt, mf, contrasts) else
       matrix(, NROW(y), 0)
  attr(x, "assign") <- attrassigndefault(x, mt)

  list(Xm2 = x, Ym2 = y, call = ocall)
}
VGAM/R/vgam.match.q0000644000176000001440000000527012136651110013444 0ustar ripleyusers
# These functions are
# Copyright (C) 1998-2013 T.W. Yee, University of Auckland.
# All rights reserved.


vgam.match <- function(x, all.knots = FALSE, nk = NULL) {

  if (is.list(x)) {
    nvar <- length(x)
    if (length(nk))
      nk <- rep(nk, length = nvar)

    temp <- vgam.match(x[[1]], all.knots = all.knots, nk = nk[1])

    ooo <- matrix(temp$matcho, length(temp$matcho), nvar)
    neffec <- rep(temp$neffec, nvar)
    xmin <- rep(temp$xmin, nvar)
    xmax <- rep(temp$xmax, nvar)
    nknots <- rep(temp$nknots, nvar)
    knots <- vector("list", nvar)
    knots[[1]] <- temp$knots

    if (nvar > 1)
      for (ii in 2:nvar) {
        temp <- vgam.match(x[[ii]], all.knots = all.knots, nk = nk[ii])
        ooo[, ii] <- temp$matcho
        neffec[ii] <- temp$neffec
        nknots[ii] <- temp$nknots
        knots[[ii]] <- temp$knots
        xmin[ii] <- temp$xmin
        xmax[ii] <- temp$xmax
      }
    names(nknots) <- names(knots) <-
    names(neffec) <- names(xmin) <- names(xmax) <- names(x)
    dimnames(ooo) <- list(NULL, names(x))

    return(list(matcho = ooo, neffec = neffec, nknots = nknots,
                knots = knots, xmin = xmin, xmax = xmax))
  }

  if (!is.null(attributes(x)$NAs) || any(is.na(x)))
    stop("cannot smooth on variables with NAs")

  sx <- unique(sort(as.vector(x)))  # "as.vector()" strips off attributes
  ooo <- match(x, sx)  # as.integer(match(x, sx))  # sx[o]==x
  neffec <- length(sx)  # as.integer(length(sx))

  if (neffec < 7)
    stop("smoothing variables must have at least 7 unique values")

  xmin <- sx[1]  # Don't use rounded value
  xmax <- sx[neffec]
  xbar <- (sx - xmin) / (xmax - xmin)

  noround <- TRUE   # Improvement 3/8/02
  if (all.knots) {
    knot <- if (noround) {
      valid.vknotl2(c(rep(xbar[1], 3), xbar, rep(xbar[neffec], 3)))
    } else {
      c(rep(xbar[1], 3), xbar, rep(xbar[neffec], 3))
    }
    if (length(nk))
      warning("overriding nk by all.knots = TRUE")
    nk <- length(knot) - 4  # No longer: neffec + 2
  } else {
    chosen <- length(nk)
    if (chosen && (nk > neffec+2 || nk <= 5))
      stop("bad value for 'nk'")
    if (!chosen)
      nk <- 0

    knot.list <- dotC(name = "vknootl2", as.double(xbar),
                      as.integer(neffec),
                      knot = double(neffec+6), k = as.integer(nk+4),
                      chosen = as.integer(chosen))
    if (noround) {
      knot <- valid.vknotl2(knot.list$knot[1:(knot.list$k)])
      knot.list$k <- length(knot)
    } else {
      knot <- knot.list$knot[1:(knot$k)]
    }
    nk <- knot.list$k - 4
  }
  if (nk <= 5)
    stop("not enough distinct knots found")

  return(list(matcho = ooo, neffec = neffec, nknots = nk,
              knots = knot, xmin = xmin, xmax = xmax))
}
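# Usage sketch for vgam.match() (not run): it sorts a smoothing variable,
# records the match indices, and builds a normalized B-spline knot
# vector.  The input vector here is arbitrary simulated data.
if (FALSE) {
  xx <- runif(25)
  mm <- vgam.match(xx)
  mm$neffec                               # Number of unique x values
  all(sort(unique(xx))[mm$matcho] == xx)  # matcho maps back to the data
}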
VGAM/R/vgam.fit.q0000644000176000001440000002775512136651110013142 0ustar ripleyusers
# These functions are
# Copyright (C) 1998-2013 T.W. Yee, University of Auckland.
# All rights reserved.


vgam.fit <- function(x, y, w, mf,
                     etastart, mustart, coefstart,
                     offset, family, control, criterion = "coefficients",
                     constraints = NULL, extra, qr.arg,
                     Terms,
                     nonparametric, smooth.labels,
                     function.name = "vgam", ...) {
  specialCM <- NULL
  post <- list()
  check.Rank <- TRUE  # Set this to false for family functions vppr() etc.
  epsilon <- control$epsilon
  maxit <- control$maxit
  save.weight <- control$save.weight
  trace <- control$trace

  bf.maxit <- control$bf.maxit
  bf.epsilon <- control$bf.epsilon
  se.fit <- control$se.fit
  minimize.criterion <- control$min.criterion

  fv <- NULL

  n <- dim(x)[1]

  new.s.call <- expression({
    if (c.list$one.more) {
      fv <- c.list$fit
      new.coeffs <- c.list$coeff

      if (length(family@middle))
        eval(family@middle)

      eta <- fv + offset
      mu <- family@linkinv(eta, extra)

      if (length(family@middle2))
        eval(family@middle2)

      old.crit <- new.crit

      new.crit <- switch(criterion,
                         coefficients = new.coeffs,
                         tfun(mu = mu, y = y, w = w, res = FALSE,
                              eta = eta, extra))
      if (trace) {
        cat("VGAM ", bf, " loop ", iter, ": ", criterion, "= ")

        UUUU <- switch(criterion,
                       coefficients =
                         format(new.crit,
                                dig = round(2-log10(epsilon))),
                       format(round(new.crit, 4)))

        switch(criterion,
               coefficients = {if (length(new.crit) > 2) cat("\n");
                 cat(UUUU, fill = TRUE, sep = ", ")},
               cat(UUUU, fill = TRUE, sep = ", "))
      }

      one.more <- eval(control$convergence)

      flush.console()

      if (!is.finite(one.more) || !is.logical(one.more))
        one.more <- FALSE
      if (one.more) {
        iter <- iter + 1
        deriv.mu <- eval(family@deriv)
        wz <- eval(family@weight)
        if (control$checkwz)
          wz <- checkwz(wz, M = M, trace = trace,
                        wzepsilon = control$wzepsilon)

        U <- vchol(wz, M = M, n = n, silent = !trace)
        tvfor <- vforsub(U, as.matrix(deriv.mu), M = M, n = n)
        z <- eta + vbacksub(U, tvfor, M = M, n = n) - offset

        c.list$z <- z
        c.list$wz <- wz
        c.list$U <- U
      }

      c.list$one.more <- one.more
      c.list$coeff <- runif(length(new.coeffs))  # 12/3/03; twist needed!
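      # Comment added for exposition: the random values presumably
      # guarantee that the stored coefficients differ from the next
      # fit's output, so a stale copy cannot masquerade as convergence.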
      old.coeffs <- new.coeffs
    }
    c.list
  })

  old.coeffs <- coefstart

  intercept.only <- ncol(x) == 1 && dimnames(x)[[2]] == "(Intercept)"
  y.names <- predictors.names <- NULL  # May be overwritten in @initialize

  n.save <- n

  if (length(slot(family, "initialize")))
    eval(slot(family, "initialize"))  # Initialize mu & M (& optionally w)

  if (length(etastart)) {
    eta <- etastart
    mu <- if (length(mustart)) mustart else
          if (length(body(slot(family, "linkinv"))))
            slot(family, "linkinv")(eta, extra) else
            warning("argument 'etastart' assigned a value ",
                    "but there is no 'linkinv' slot to use it")
  }

  if (length(mustart)) {
    mu <- mustart
    if (length(body(slot(family, "linkfun")))) {
      eta <- slot(family, "linkfun")(mu, extra)
    } else {
      warning("argument 'mustart' assigned a value ",
              "but there is no 'link' slot to use it")
    }
  }

  M <- if (is.matrix(eta)) ncol(eta) else 1

  if (length(family@constraints))
    eval(family@constraints)
  Blist <- process.constraints(constraints, x, M, specialCM = specialCM)

  ncolBlist <- unlist(lapply(Blist, ncol))
  dimB <- sum(ncolBlist)

  if (nonparametric) {
    smooth.frame <- mf
    assignx <- attr(x, "assign")
    which <- assignx[smooth.labels]

    bf <- "s.vam"
    bf.call <- parse(text = paste(
      "s.vam(x, z, wz, tfit$smomat, which, tfit$smooth.frame,",
      "bf.maxit, bf.epsilon, trace, se = se.fit, X_vlm_save, ",
      "Blist, ncolBlist, M = M, qbig = qbig, Umat = U, ",
      "all.knots = control$all.knots, nk = control$nk)",
      sep = ""))[[1]]

    qbig <- sum(ncolBlist[smooth.labels])  # Number of component funs
    smomat <- matrix(0, n, qbig)
    dy <- if (is.matrix(y)) dimnames(y)[[1]] else names(y)
    d2 <- if (is.null(predictors.names))
      paste("(Additive predictor ", 1:M, ")", sep = "") else
      predictors.names
    dimnames(smomat) <- list(dy, vlabel(smooth.labels,
                                        ncolBlist[smooth.labels], M))

    tfit <- list(smomat = smomat, smooth.frame = smooth.frame)
  } else {
    bf.call <- expression(vlm.wfit(xmat = X_vlm_save, z,
                                   Blist = NULL, U = U,
                                   matrix.out = FALSE, is.vlmX = TRUE,
                                   qr = qr.arg, xij = NULL))
    bf <- "vlm.wfit"
  }

  X_vlm_save <- lm2vlm.model.matrix(x, Blist, xij = control$xij)

  if (length(coefstart)) {
    eta <- if (ncol(X_vlm_save) > 1) X_vlm_save %*% coefstart +
             offset else X_vlm_save * coefstart + offset
    eta <- if (M > 1) matrix(eta, ncol = M, byrow = TRUE) else c(eta)
    mu <- family@linkinv(eta, extra)
  }

  if (criterion != "coefficients") {
    tfun <- slot(family, criterion)  # Needed 4 R so have to follow suit
  }

  iter <- 1
  new.crit <- switch(criterion,
                     coefficients = 1,
                     tfun(mu = mu, y = y, w = w, res = FALSE,
                          eta = eta, extra))
  old.crit <- if (minimize.criterion) 10*new.crit+10 else
              -10*new.crit-10

  deriv.mu <- eval(family@deriv)
  wz <- eval(family@weight)
  if (control$checkwz)
    wz <- checkwz(wz, M = M, trace = trace,
                  wzepsilon = control$wzepsilon)

  U <- vchol(wz, M = M, n = n, silent = !trace)
  tvfor <- vforsub(U, as.matrix(deriv.mu), M = M, n = n)
  z <- eta + vbacksub(U, tvfor, M = M, n = n) - offset

  c.list <- list(wz = as.double(wz), z = as.double(z),
                 fit = as.double(t(eta)),
                 one.more = TRUE, U = as.double(U),
                 coeff = as.double(rep(1, ncol(X_vlm_save))))

  dX_vlm <- as.integer(dim(X_vlm_save))
  nrow_X_vlm <- dX_vlm[[1]]
  ncol_X_vlm <- dX_vlm[[2]]
  if (nrow_X_vlm < ncol_X_vlm)
    stop(ncol_X_vlm, " parameters but only ", nrow_X_vlm,
         " observations")

  while (c.list$one.more) {
    tfit <- eval(bf.call)  # fit$smooth.frame is new

    c.list$coeff <- tfit$coefficients

    tfit$predictors <- tfit$fitted.values + offset

    c.list$fit <- tfit$fitted.values
    c.list <- eval(new.s.call)
    NULL
  }

  if (maxit > 1 && iter >= maxit)
    warning("convergence not obtained in ", maxit, " iterations")

  dnrow_X_vlm <- labels(X_vlm_save)
  xnrow_X_vlm <- dnrow_X_vlm[[2]]
  ynrow_X_vlm <- dnrow_X_vlm[[1]]

  if (length(family@fini))
    eval(family@fini)

  coefs <- tfit$coefficients
  asgn <- attr(X_vlm_save, "assign")  # 29/11/01 was x
  names(coefs) <- xnrow_X_vlm
  cnames <- xnrow_X_vlm

  if (!is.null(tfit$rank)) {
    rank <- tfit$rank
    if (rank < ncol(x))
      stop("rank < ncol(x) is bad")
  } else rank <- ncol(x)

  R <- tfit$qr$qr[1:ncol_X_vlm, 1:ncol_X_vlm, drop = FALSE]
  R[lower.tri(R)] <- 0
  attributes(R) <- list(dim = c(ncol_X_vlm, ncol_X_vlm),
                        dimnames = list(cnames, cnames), rank = rank)

  dn <- labels(x)
  yn <- dn[[1]]
  xn <- dn[[2]]

  if (is.matrix(mu)) {
    if (length(dimnames(mu)[[2]])) {
      y.names <- dimnames(mu)[[2]]
    } else if (length(dimnames(y)[[2]])) {
      y.names <- dimnames(y)[[2]]
    }
    dimnames(mu) <- list(yn, y.names)
  } else {
    names(mu) <- names(fv)
  }

  tfit$fitted.values <- NULL  # Have to kill it off 3/12/01

  fit <- structure(c(tfit,
                     list(assign = asgn,
                          constraints = Blist,
                          control = control,
                          fitted.values = mu,
                          formula = as.vector(attr(Terms, "formula")),
                          iter = iter,
                          offset = offset,
                          rank = rank,
                          R = R,
                          terms = Terms)))

  df.residual <- nrow_X_vlm - rank
  if (!se.fit) {
    fit$varmat <- NULL
  }

  if (M == 1) {
    wz <- as.vector(wz)  # Convert wz into a vector
  }  # else
  fit$weights <- if (save.weight) wz else NULL

  if (M == 1) {
    fit$predictors <- as.vector(fit$predictors)
    fit$residuals <- as.vector(fit$residuals)
    names(fit$residuals) <- names(fit$predictors) <- yn
  } else
    dimnames(fit$residuals) <- dimnames(fit$predictors) <-
      list(yn, predictors.names)

  NewBlist <- process.constraints(constraints, x, M,
                                  specialCM = specialCM, by.col = FALSE)

  misc <- list(
      colnames.x = xn,
      colnames.X_vlm = xnrow_X_vlm,
      criterion = criterion,
      function.name = function.name,
      intercept.only = intercept.only,
      predictors.names = predictors.names,
      M = M,
      n = n,
      new.assign = new.assign(x, NewBlist),
      nonparametric = nonparametric,
      nrow_X_vlm = nrow_X_vlm,
      orig.assign = attr(x, "assign"),
      p = ncol(x),
      ncol_X_vlm = ncol_X_vlm,
      ynames = dimnames(y)[[2]])

  if (criterion != "coefficients")
    fit[[criterion]] <- new.crit

  if (se.fit && length(fit$s.xargument)) {
    misc$varassign <- varassign(Blist, names(fit$s.xargument))
  }

  if (nonparametric) {
    misc$smooth.labels <- smooth.labels
  }

  crit.list <- list()
  if (criterion != "coefficients")
    crit.list[[criterion]] <- fit[[criterion]] <- new.crit

  for (ii in names(.min.criterion.VGAM)) {
    if (ii != criterion &&
        any(slotNames(family) == ii) &&
        length(body(slot(family, ii)))) {
      fit[[ii]] <- crit.list[[ii]] <-
        (slot(family, ii))(mu = mu, y = y, w = w, res = FALSE,
                           eta = eta, extra)
    }
  }

  if (M == 1) {
    fit$predictors <- as.vector(fit$predictors)
    fit$residuals <- as.vector(fit$residuals)
    names(fit$residuals) <- names(fit$predictors) <- yn
  } else {
    dimnames(fit$residuals) <- dimnames(fit$predictors) <-
      list(yn, predictors.names)
  }

  if (w[1] != 1 || any(w != w[1]))
    fit$prior.weights <- w

  if (length(family@last))
    eval(family@last)

  if (!is.null(fit$smomat)) {
    fit$nl.chisq <- vgam.nlchisq(fit$qr, fit$resid, wz = wz,
                                 smomat = fit$smomat, deriv = deriv.mu,
                                 U = U,
                                 smooth.labels, attr(x, "assign"),
                                 M = M, n = n, constraints = Blist)
  }

  if (!qr.arg) {
    fit$qr <- NULL
  }

  fit$misc <- NULL  # 8/6/02; It's necessary to kill it as it exists in vgam

  structure(c(fit,
              list(contrasts = attr(x, "contrasts"),
                   control = control,
                   crit.list = crit.list,
                   extra = extra,
                   family = family,
                   iter = iter,
                   misc = misc,
                   post = post,
                   x = x,
                   y = y)),
            vclass = family@vfamily)
}
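# Usage sketch (not run) of the user-level vgam(), which drives
# vgam.fit() above; vgam() itself lives in R/vgam.R.  The data set
# (hunua) and family function (binomialff) are assumptions here, taken
# from elsewhere in the package.
if (FALSE) {
  hfit <- vgam(agaaus ~ s(altitude, df = 2), binomialff, data = hunua)
  plot(hfit, se = TRUE)
}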
new.assign <- function(X, Blist) {

  M <- nrow(Blist[[1]])
  dn <- labels(X)
  xn <- dn[[2]]

  asgn <- attr(X, "assign")
  nasgn <- names(asgn)
  lasgn <- unlist(lapply(asgn, length))

  ncolBlist <- unlist(lapply(Blist, ncol))
  names(ncolBlist) <- NULL  # This is necessary for below to work

  temp2 <- vlabel(nasgn, ncolBlist, M)
  L <- length(temp2)
  newasgn <- vector("list", L)

  kk <- 0
  low <- 1
  for (ii in 1:length(asgn)) {
    len <- low:(low + ncolBlist[ii] * lasgn[ii] - 1)
    temp <- matrix(len, ncolBlist[ii], lasgn[ii])
    for (mm in 1:ncolBlist[ii])
      newasgn[[kk+mm]] <- temp[mm, ]
    low <- low + ncolBlist[ii] * lasgn[ii]
    kk <- kk + ncolBlist[ii]
  }

  names(newasgn) <- temp2
  newasgn
}
VGAM/R/vgam.control.q0000644000176000001440000001020012136651110014025 0ustar ripleyusers
# These functions are
# Copyright (C) 1998-2013 T.W. Yee, University of Auckland.
# All rights reserved.


vgam.control <- function(all.knots = FALSE,
                         bf.epsilon = 1e-7,
                         bf.maxit = 30,
                         checkwz = TRUE,
                         criterion = names(.min.criterion.VGAM),
                         epsilon = 1e-7,
                         maxit = 30,
                         na.action = na.fail,
                         nk = NULL,
                         save.weight = FALSE,
                         se.fit = TRUE,
                         trace = FALSE,
                         wzepsilon = .Machine$double.eps^0.75,
                         ...) {

  if (mode(criterion) != "character" && mode(criterion) != "name")
    criterion <- as.character(substitute(criterion))
  criterion <- pmatch(criterion[1], names(.min.criterion.VGAM),
                      nomatch = 1)
  criterion <- names(.min.criterion.VGAM)[criterion]

  if (!is.logical(checkwz) || length(checkwz) != 1)
    stop("bad input for argument 'checkwz'")

  if (!is.Numeric(wzepsilon, allowable.length = 1, positive = TRUE))
    stop("bad input for argument 'wzepsilon'")

  if (length(all.knots) > 1)
    warning("all.knots should be of length 1; using first value only")

  if (!is.Numeric(bf.epsilon, allowable.length = 1, positive = TRUE)) {
    warning("bad input for argument 'bf.epsilon'; using 0.00001 instead")
    bf.epsilon <- 0.00001
  }
  if (!is.Numeric(bf.maxit, allowable.length = 1,
                  positive = TRUE, integer.valued = TRUE)) {
    warning("bad input for argument 'bf.maxit'; using 20 instead")
    bf.maxit <- 20
  }
  if (!is.Numeric(epsilon, allowable.length = 1, positive = TRUE)) {
    warning("bad input for argument 'epsilon'; using 0.0001 instead")
    epsilon <- 0.0001
  }
  if (!is.Numeric(maxit, allowable.length = 1,
                  positive = TRUE, integer.valued = TRUE)) {
    warning("bad input for argument 'maxit'; using 30 instead")
    maxit <- 30
  }

  convergence <- expression({
    switch(criterion,
           coefficients = if (iter == 1) iter < maxit else
                          (iter < maxit &&
                          max(abs(new.coeffs - old.coeffs) /
                              (abs(old.coeffs) + epsilon)) > epsilon),
           abs(old.crit - new.crit) /
               (abs(old.crit) + epsilon) > epsilon && iter < maxit)
  })

[... the remainder of vgam.control(), and the opening of vgam() in
R/vgam.R up to the detection of s() terms, were lost in extraction ...]

VGAM/R/vgam.R

  nonparametric <- length(smoothers$s) > 0
  if (nonparametric) {

    ff <- apply(aa$factors[smoothers[["s"]], , drop = FALSE], 2, any)
    smoothers[["s"]] <- if (any(ff))
      seq(along = ff)[aa$order == 1 & ff] else NULL

    smooth.labels <- aa$term.labels[unlist(smoothers)]
  } else
    function.name <- "vglm"  # This is effectively so

  fit <- vgam.fit(x = x, y = y, w = w, mf = mf,
                  etastart = etastart, mustart = mustart,
                  coefstart = coefstart,
                  offset = offset, family = family, control = control,
                  criterion = control$criterion,
                  constraints = constraints, extra = extra,
                  qr.arg = qr.arg,
                  Terms = mtsave,
                  nonparametric = nonparametric,
                  smooth.labels = smooth.labels,
                  function.name = function.name, ...)
if (is.Numeric(fit$nl.df) && any(fit$nl.df < 0)) { fit$nl.df[fit$nl.df < 0] <- 0 } # -------------------------------------------------------------- if (!is.null(fit[["smooth.frame"]])) { fit <- fit[-1] # Strip off smooth.frame } else { } fit$smomat <- NULL # Not needed fit$call <- ocall if (model) fit$model <- mf if (!x.arg) fit$x <- NULL if (!y.arg) fit$y <- NULL if (nonparametric) fit$misc$smooth.labels <- smooth.labels fit$misc$dataname <- dataname if (smart) fit$smart.prediction <- get.smart.prediction() answer <- new("vgam", "assign" = attr(x, "assign"), "call" = fit$call, "coefficients" = fit$coefficients, "constraints" = fit$constraints, "criterion" = fit$crit.list, "df.residual" = fit$df.residual, "dispersion" = 1, "family" = fit$family, "misc" = fit$misc, "model" = if (model) mf else data.frame(), "R" = fit$R, "rank" = fit$rank, "residuals" = as.matrix(fit$residuals), "rss" = fit$rss, "smart.prediction" = as.list(fit$smart.prediction), "terms" = list(terms=fit$terms)) if (!smart) answer@smart.prediction <- list(smart.arg = FALSE) if (qr.arg) { class(fit$qr) <- "list" slot(answer, "qr") <- fit$qr } if (length(attr(x, "contrasts"))) slot(answer, "contrasts") <- attr(x, "contrasts") if (length(fit$fitted.values)) slot(answer, "fitted.values") <- as.matrix(fit$fitted.values) slot(answer, "na.action") <- if (length(aaa <- attr(mf, "na.action"))) list(aaa) else list() if (length(offset)) slot(answer, "offset") <- as.matrix(offset) if (length(fit$weights)) slot(answer, "weights") <- as.matrix(fit$weights) if (x.arg) slot(answer, "x") <- x # The 'small' design matrix if (length(xlev)) slot(answer, "xlevels") <- xlev if (y.arg) slot(answer, "y") <- as.matrix(fit$y) answer@misc$formula <- formula slot(answer, "control") <- fit$control if (length(fit$extra)) { slot(answer, "extra") <- fit$extra } slot(answer, "iter") <- fit$iter slot(answer, "post") <- fit$post fit$predictors <- as.matrix(fit$predictors) # Must be a matrix dimnames(fit$predictors) <- list(dimnames(fit$predictors)[[1]], fit$misc$predictors.names) slot(answer, "predictors") <- fit$predictors if (length(fit$prior.weights)) slot(answer, "prior.weights") <- as.matrix(fit$prior.weights) if (nonparametric) { slot(answer, "Bspline") <- fit$Bspline slot(answer, "nl.chisq") <- fit$nl.chisq if (is.Numeric(fit$nl.df)) slot(answer, "nl.df") <- fit$nl.df slot(answer, "spar") <- fit$spar slot(answer, "s.xargument") <- fit$s.xargument if (length(fit$varmat)) { slot(answer, "var") <- fit$varmat } } if (length(fit$effects)) slot(answer, "effects") <- fit$effects answer } attr(vgam, "smart") <- TRUE VGAM/R/uqo.R0000644000176000001440000007322712136651110012173 0ustar ripleyusers# These functions are # Copyright (C) 1998-2013 T.W. Yee, University of Auckland. # All rights reserved. uqo.control = function(Rank = 1, Bestof = if (length(lvstart) && !jitter.sitescores) 1 else 10, CA1 = FALSE, Crow1positive = TRUE, epsilon = 1.0e-07, EqualTolerances = ITolerances, Etamat.colmax = 10, GradientFunction = TRUE, Hstep = 0.001, isdlv = rep(c(2, 1, rep(0.5, len = Rank)), len = Rank), ITolerances = FALSE, lvstart = NULL, jitter.sitescores = FALSE, maxitl = 40, Maxit.optim = 250, MUXfactor = rep(3, length=Rank), optim.maxit = 20, nRmax = 250, SD.sitescores = 1.0, SmallNo = 5.0e-13, trace = TRUE, Use.Init.Poisson.QO = TRUE, ...) 
{ Kinit = 0.001 if (!is.Numeric(MUXfactor, positive = TRUE)) stop("bad input for \"MUXfactor\"") if (any(MUXfactor < 1 | MUXfactor > 10)) stop("MUXfactor values must lie between 1 and 10") if (!is.Numeric(isdlv, positive = TRUE)) stop("bad input for \"isdlv\"") if (any(isdlv < 0.2 | isdlv > 10)) stop("isdlv values must lie between 0.2 and 10") if (length(isdlv) > 1 && any(diff(isdlv) > 0)) stop("successive isdlv values must not increase") if (!is.Numeric(Rank, allowable.length = 1, integer.valued = TRUE, positive = TRUE)) stop("Bad input for \"Rank\"") if (!is.Numeric(Bestof, allowable.length = 1, integer.valued = TRUE, positive = TRUE)) stop("Bad input for \"Bestof\"") if (!is.Numeric(Etamat.colmax, positive = TRUE, allowable.length = 1) || Etamat.colmax < Rank) stop("bad input for \"Etamat.colmax\"") if (!is.Numeric(maxitl, allowable.length = 1, integer.valued = TRUE, positive = TRUE)) stop("Bad input for \"maxitl\"") if (!is.Numeric(Maxit.optim, integer.valued = TRUE, positive = TRUE, allowable.length = 1)) stop("Bad input for \"Maxit.optim\"") if (!is.Numeric(optim.maxit, allowable.length = 1, integer.valued = TRUE, positive = TRUE)) stop("Bad input for \"optim.maxit\"") if (!is.Numeric(nRmax, allowable.length = 1, integer.valued = TRUE, positive = TRUE)) stop("Bad input for \"nRmax\"") if (!is.Numeric(Hstep, allowable.length = 1, positive = TRUE)) stop("Bad input for \"Hstep\"") if (!is.Numeric(epsilon, allowable.length = 1, positive = TRUE)) stop("Bad input for \"epsilon\"") if (!is.Numeric(SmallNo, allowable.length = 1, positive = TRUE)) stop("Bad input for \"SmallNo\"") if ((SmallNo < .Machine$double.eps) || (SmallNo > .0001)) stop("SmallNo is out of range") if (Use.Init.Poisson.QO && CA1) stop("cannot have both 'Use.Init.Poisson.QO = TRUE' and 'CA1 = TRUE'") ans <- list( Bestof = Bestof, CA1 = CA1, ConstrainedQO = FALSE, # A constant, not a control parameter Corner = FALSE, # Needed for valt.1iter() Crow1positive=as.logical(rep(Crow1positive, len = Rank)), epsilon = epsilon, EqualTolerances = as.logical(EqualTolerances)[1], Etamat.colmax = Etamat.colmax, FastAlgorithm = TRUE, # A constant, not a control parameter GradientFunction = GradientFunction, Hstep = Hstep, isdlv = rep(isdlv, len = Rank), ITolerances = as.logical(ITolerances)[1], lvstart = lvstart, jitter.sitescores = as.logical(jitter.sitescores), Kinit = Kinit, maxitl= maxitl, Maxit.optim = Maxit.optim, MUXfactor = rep(MUXfactor, length=Rank), nRmax = nRmax, optim.maxit = optim.maxit, OptimizeWrtC = FALSE, Quadratic = TRUE, Rank = Rank, SD.sitescores = SD.sitescores, SmallNo = SmallNo, trace = as.logical(trace), Use.Init.Poisson.QO=as.logical(Use.Init.Poisson.QO)[1]) ans } uqo <- function(formula, family, data = list(), weights = NULL, subset = NULL, na.action = na.fail, etastart = NULL, mustart = NULL, coefstart = NULL, control = uqo.control(...), offset = NULL, method = "uqo.fit", model = FALSE, x.arg = TRUE, y.arg = TRUE, contrasts = NULL, constraints = NULL, extra = NULL, qr.arg = FALSE, ...) { dataname <- as.character(substitute(data)) # "list" if no data= function.name <- "uqo" ocall <- match.call() mt <- terms(formula, data = data) if (missing(data)) data <- environment(formula) mf <- match.call(expand.dots = FALSE) mf$family <- mf$method <- mf$model <- mf$x.arg <- mf$y.arg <- mf$control <- mf$contrasts <- mf$constraints <- mf$extra <- mf$qr.arg <- NULL mf$coefstart <- mf$etastart <- mf$... 
<- NULL
  mf$drop.unused.levels <- TRUE
  mf[[1]] <- as.name("model.frame")
  mf <- eval(mf, parent.frame())
  if (method == "model.frame")
    return(mf)
  na.act <- attr(mf, "na.action")

  xvars <- as.character(attr(mt, "variables"))[-1]
  if ((yvar <- attr(mt, "response")) > 0)
    xvars <- xvars[-yvar]
  xlev <- if (length(xvars) > 0) {
    xlev <- lapply(mf[xvars], levels)
    xlev[!sapply(xlev, is.null)]
  }
  y <- model.response(mf, "numeric")  # model.extract(mf, "response")
  x <- model.matrix(mt, mf, contrasts)
  attr(x, "assign") <- attrassigndefault(x, mt)

  offset <- model.offset(mf)
  if (is.null(offset))
    offset <- 0
  w <- model.weights(mf)
  if (!length(w)) {
    w <- rep(1, nrow(mf))
  } else if (ncol(as.matrix(w)) == 1 && any(w < 0)) {
    stop("negative weights not allowed")
  }

  if (is.character(family))
    family <- get(family)
  if (is.function(family))
    family <- family()
  if (!inherits(family, "vglmff")) {
    stop("'family' is not a VGAM family function")
  }
  if (!is.null(family@first))
    eval(family@first)

  uqo.fitter <- get(method)
  if (ncol(x) != 1 || dimnames(x)[[2]] != "(Intercept)")
    stop("uqo()'s formula must have ~ 1 on the RHS")
  if (control$FastAlgorithm && length(as.list(family@deviance)) <= 1)
    stop("The fast algorithm requires the family ",
         "function to have a deviance slot")

  deviance.Bestof <- rep(as.numeric(NA), len = control$Bestof)
  for (tries in 1:control$Bestof) {
    if (control$trace && (control$Bestof > 1))
      cat(paste("\n========================= Fitting model", tries,
                "=========================\n"))
    it <- uqo.fitter(x = x, y = y, w = w, offset = offset,
                     etastart = etastart, mustart = mustart,
                     coefstart = coefstart,
                     family = family, control = control,
                     constraints = constraints, extra = extra,
                     qr.arg = qr.arg, Terms = mt,
                     function.name = function.name,
                     ca1 = control$CA1 && tries == 1, ...)
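    # 'it' now holds one candidate fit.  The next two statements
    # implement a best-of-several-restarts strategy: each candidate's
    # deviance is recorded, and 'fit' is replaced only when the new
    # deviance beats all earlier ones.  Restarts matter here because
    # the site scores are estimated by numerical optimization and the
    # objective can have local minima.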
deviance.Bestof[tries] = it$crit.list$deviance if (tries == 1 || min(deviance.Bestof[1:(tries-1)]) > deviance.Bestof[tries]) fit <- it } fit$misc$deviance.Bestof <- deviance.Bestof fit$misc$criterion <- "deviance" # Needed for calibrate; 21/1/05 fit$misc$dataname <- dataname answer <- new("uqo", "call" = ocall, "coefficients" = fit$coefficients, "constraints" = fit$constraints, "control" = fit$control, "criterion" = fit$crit.list, "lv" = fit$sitescores, "family" = fit$family, "fitted.values"= as.matrix(fit$fitted.values), "iter" = fit$iter, "misc" = fit$misc, "model" = if (model) mf else data.frame(), "na.action" = if (length(na.act)) list(na.act) else list(), "predictors" = as.matrix(fit$predictors)) answer@control$min.criterion = TRUE # Needed for calibrate; 21/1/05 if (length(fit$weights)) slot(answer, "weights") <- as.matrix(fit$weights) if (x.arg) slot(answer, "x") <- x if (y.arg) slot(answer, "y") <- as.matrix(fit$y) slot(answer, "extra") <- if (length(fit$extra)) { if (is.list(fit$extra)) fit$extra else { warning("'extra' is not a list, therefore ", "placing 'extra' into a list") list(fit$extra) } } else list() # R-1.5.0 if (length(fit$prior.weights)) slot(answer, "prior.weights") <- as.matrix(fit$prior.weights) answer } calluqof <- function(sitescores, etamat, ymat, wvec, modelno, nice31, xmat, Control, n, M, maxMr5, othint, othdbl, bnumat, Hstep = NA, alldump) { control <- Control Rank <- control$Rank itol <- othint[14] inited <- if (is.R()) { as.numeric(existsinVGAMenv("etamat", prefix = ".VGAM.UQO.")) } else 0 othint[5] <- inited # Replacement usethiseta <- if (inited == 1) getfromVGAMenv("etamat", prefix = ".VGAM.UQO.") else t(etamat) usethisbeta <- double(othint[13]) pstar <- othint[3] nstar <- if (nice31) ifelse(modelno==3 || modelno==5,n*2,n) else n*M NOS <- ifelse(modelno == 3 || modelno==5, M/2, M) sitescores <- matrix(sitescores, ncol=Rank) sitescores <- scale(sitescores, center = TRUE, scale = FALSE) if (itol) { numat <- matrix(sitescores, ncol=Rank) if (Rank > 1) { evnu <- eigen(var(numat)) numat <- numat %*% evnu$vector } sdnumat <- apply(numat, 2, sd) for(lookat in 1:Rank) if (sdnumat[lookat]>control$MUXfactor[lookat]*control$isdlv[lookat]){ muxer = control$isdlv[lookat] * control$MUXfactor[lookat] / sdnumat[lookat] numat[,lookat] <- numat[,lookat] * muxer if (control$trace) { } } } else { numat <- matrix(sitescores, ncol=Rank) evnu <- eigen(var(numat)) temp7 <- if (Rank > 1) evnu$vector %*% diag(evnu$value^(-0.5)) else evnu$vector %*% evnu$value^(-0.5) numat <- numat %*% temp7 } ans1 <- dotFortran(name = if (nice31) "cqo1f" else "cqo2f", numat=as.double(numat), as.double(ymat), as.double(xmat), as.double(wvec), etamat=as.double(usethiseta), moff=double(if(itol) n else 1), fv=double(NOS*n), z=double(n*M), wz=double(n*M), U=double(M*n), bnumat=as.double(bnumat), qr=double(nstar*pstar), qraux=double(pstar), qpivot=integer(pstar), as.integer(n), as.integer(M), NOS=as.integer(NOS), as.integer(nstar), dimu=as.integer(M), errcode=integer(1), othint=as.integer(othint), rowind=integer(maxMr5), colind=integer(maxMr5), deviance=double(1), beta=as.double(usethisbeta), twk=double(if(nice31) nstar*3 else M*n*2), wkmm=double(M*(M+pstar)), othdbl=as.double(othdbl)) if (ans1$errcode == 0) { assign2VGAMenv(c("etamat","numat"), ans1, prefix = ".VGAM.UQO.") if (alldump) { ans1$fv = matrix(ans1$fv,n,M,byrow = TRUE,dimnames=dimnames(ymat)) assign2VGAMenv(c("beta","fv"), ans1, prefix = ".VGAM.UQO.") assign2VGAMenv(c("z","U"), ans1, prefix = ".VGAM.UQO.") } } else { cat("warning in 
calluqof: error code = ", ans1$errcode, "\n") rmfromVGAMenv(c("etamat"), prefix = ".VGAM.UQO.") } ans1$deviance } callduqof = function(sitescores, etamat, ymat, wvec, modelno, nice31, xmat, Control, n, M, maxMr5, othint, othdbl, bnumat, Hstep, alldump) { control = Control itol = othint[14] inited = if (is.R()) { if (exists(".VGAM.UQO.etamat", envir = VGAM:::VGAMenv)) 1 else 0 } else 0 # 0 means fortran initializes the etamat othint[5] = inited # Replacement usethiseta = if (inited == 1) getfromVGAMenv("etamat", prefix = ".VGAM.UQO.") else t(etamat) usethisbeta = double(othint[13]) pstar = othint[3] nstar = if (nice31) ifelse(modelno==3 || modelno==5,n*2,n) else n*M NOS = ifelse(modelno == 3 || modelno==5, M/2, M) Rank = othint[1] sitescores = matrix(sitescores, ncol=Rank) sitescores = scale(sitescores, center = TRUE, scale = FALSE) if (itol) { numat = matrix(sitescores, ncol=Rank) if (Rank > 1) { evnu = eigen(var(numat)) numat = numat %*% evnu$vector } sdnumat = apply(numat, 2, sd) for(lookat in 1:Rank) if (sdnumat[lookat]>control$MUXfactor[lookat]*control$isdlv[lookat]){ muxer = control$isdlv[lookat] * control$MUXfactor[lookat] / sdnumat[lookat] numat[,lookat] = numat[,lookat] * muxer if (control$trace) { } } } else { numat = matrix(sitescores, ncol=Rank) evnu = eigen(var(numat)) temp7 = if (Rank > 1) evnu$vector %*% diag(evnu$value^(-0.5)) else evnu$vector %*% evnu$value^(-0.5) numat = numat %*% temp7 } ans1 <- dotFortran(name = "duqof", numat=as.double(numat), as.double(ymat), as.double(xmat), as.double(wvec), etamat=as.double(usethiseta), moff=double(if(itol) n else 1), fv=double(NOS*n), z=double(n*M), wz=double(n*M), U=double(M*n), bnumat=as.double(bnumat), qr=double(nstar*pstar), qraux=double(pstar), qpivot=integer(pstar), as.integer(n), as.integer(M), NOS=as.integer(M), as.integer(nstar), dimu=as.integer(M), errcode=integer(1), othint=as.integer(othint), rowind=integer(maxMr5), colind=integer(maxMr5), deviance=double(1), beta=as.double(usethisbeta), twk=double(if(nice31) nstar*3 else M*n*2), wkmm=double(M*(M+pstar)), othdbl=as.double(othdbl), onumat=as.double(numat), deriv=double(n*Rank), hstep=as.double(Hstep), betasave=usethisbeta) if (ans1$errcode == 0) { assign2VGAMenv(c("etamat"), ans1, prefix = ".VGAM.UQO.") } else { cat("warning in callduqof: error code = ", ans1$errcode, "\n") rmfromVGAMenv(c("etamat"), prefix = ".VGAM.UQO.") } ans1$deriv } uqo.fit <- function(x, y, w = rep(1, len = nrow(x)), etastart = NULL, mustart = NULL, coefstart = NULL, offset = 0, family, control = uqo.control(...), qr.arg = FALSE, constraints = NULL, extra = NULL, Terms=Terms, function.name = "uqo", ca1 = TRUE, ...) 
{ if (!all(offset == 0)) stop("cqo.fit() cannot handle offsets") nonparametric <- FALSE epsilon <- control$epsilon optim.maxit <- control$optim.maxit save.weight <- control$save.weight trace <- control$trace orig.stepsize <- control$stepsize n <- dim(x)[1] copy_X_vlm <- FALSE # May be overwritten in @initialize stepsize <- orig.stepsize old.coeffs <- coefstart intercept.only <- ncol(x) == 1 && dimnames(x)[[2]] == "(Intercept)" y.names <- predictors.names <- NULL # May be overwritten in @initialize n.save <- n Rank <- control$Rank rrcontrol <- control # if (length(family@initialize)) eval(family@initialize) # Initialize mu and M (and optionally w) n <- n.save eval(rrr.init.expression) if (length(etastart)) { eta <- etastart mu <- if (length(mustart)) mustart else family@linkinv(eta, extra) } else { if (length(mustart)) mu <- mustart eta <- family@link(mu, extra) } M <- if (is.matrix(eta)) ncol(eta) else 1 if (is.character(rrcontrol$Dzero)) { index = match(rrcontrol$Dzero, dimnames(as.matrix(y))[[2]]) if (any(is.na(index))) stop("Dzero argument didn't fully match y-names") if (length(index) == M) stop("all linear predictors are linear in the latent variable(s)") rrcontrol$Dzero = control$Dzero = index } if (length(family@constraints)) eval(family@constraints) colx1.index = 1:ncol(x) names(colx1.index) = names.colx1.index = dimnames(x)[[2]] rrcontrol$colx1.index=control$colx1.index=colx1.index #Save it on the object colx2.index = NULL p1 = length(colx1.index); p2 = 0 rrcontrol$colx2.index=control$colx2.index=colx2.index #Save it on the object rrcontrol$Quadratic = control$Quadratic = TRUE sitescores <- if (length(rrcontrol$lvstart)) { matrix(rrcontrol$lvstart, n, Rank) } else { if (rrcontrol$Use.Init.Poisson) { .Init.Poisson.QO(ymat=as.matrix(y), X1=x, X2 = NULL, Rank=rrcontrol$Rank, trace=rrcontrol$trace, max.ncol.etamat = rrcontrol$Etamat.colmax, Crow1positive=rrcontrol$Crow1positive, isdlv=rrcontrol$isdlv, constwt= any(family@vfamily[1] == c("negbinomial","gamma2","gaussianff")), takelog= any(family@vfamily[1] != c("gaussianff"))) } else if (ca1) { if (Rank == 1) .VGAM.UQO.CA(y)[,1:Rank] else { temp = .VGAM.UQO.CA(y)[,1:Rank] temp %*% solve(chol(var(temp))) } } else { matrix((runif(n*Rank)-0.5)*rrcontrol$SD.sitescores,n,Rank) } } if (rrcontrol$jitter.sitescores) sitescores <- jitteruqo(sitescores) Blist <- process.constraints(constraints, x, M) ncolBlist <- unlist(lapply(Blist, ncol)) dimB <- sum(ncolBlist) modelno = switch(family@vfamily[1], "poissonff"=2, "binomialff" = 1, "quasipoissonff" = 0, "quasibinomialff" = 0, "negbinomial" = 0, "gamma2"=5, 0) # stop("can't fit this model using fast algorithm") if (!modelno) stop("the family function does not work with uqo()") if (modelno == 1) modelno <- get("modelno", envir = VGAM:::VGAMenv) rmfromVGAMenv(c("etamat", "beta"), prefix = ".VGAM.UQO.") cqofastok <- exists("CQO.FastAlgorithm", envir = VGAM:::VGAMenv) && get("CQO.FastAlgorithm", envir = VGAM:::VGAMenv) if (!cqofastok) stop("can't fit this model using fast algorithm") nice31 <- (!control$EqualTol || control$ITolerances) && control$Quadratic && all(trivial.constraints(Blist)) X_vlm_1save <- if (nice31) { NULL } else { lm2vlm.model.matrix(x, Blist, xij=control$xij) } NOS = ifelse(modelno==3 || modelno==5, M/2, M) p1star = if (nice31) p1*ifelse(modelno==3 || modelno==5,2,1) else ncol(X_vlm_1save) p2star = if (nice31) ifelse(control$IToleran, Rank, Rank+0.5*Rank*(Rank+1)) else (NOS*Rank + Rank*(Rank+1)/2 * ifelse(control$EqualTol,1,NOS)) pstar = p1star + p2star nstar = if (nice31) 
ifelse(modelno==3 || modelno==5,n*2,n) else n*M maxMr = max(M, Rank) maxMr5 = maxMr*(maxMr+1)/2 lenbeta = pstar * ifelse(nice31, NOS, 1) othint = c(Rank, control$EqualTol, pstar, dimw = 1, inited=290, # other ints modelno, maxitl=control$maxitl, actnits = 0, twice = 0, p1star, p2star, nice31, lenbeta, control$ITolerances, control$trace, p1, p2, control$imethod) othdbl = c(small=control$SmallNo, fseps=control$epsilon, .Machine$double.eps, kinit=rep(control$Kinit, len = NOS), shapeinit=rep(control$shapeinit, len = NOS)) bnumat = if (nice31) matrix(0,nstar,pstar) else cbind(matrix(0,nstar,p2star), X_vlm_1save) rmfromVGAMenv(c("etamat", "z", "U", "beta", "deviance", "fv", "cmatrix", "ocmatrix"), prefix = ".VGAM.UQO.") for(iter in 1:optim.maxit) { if (control$trace) cat("\nIteration", iter, "\n") conjgrad <- optim(par=sitescores, fn=calluqof, gr = if (control$GradientFunction) callduqof else NULL, method = if (n*Rank>control$nRmax) "CG" else "BFGS", control=list(fnscale = 1, trace=as.integer(control$trace), maxit=control$Maxit.optim), etamat=eta, ymat=y, wvec=w, modelno=modelno, Control=rrcontrol, nice31=nice31, xmat = x, n=n, M=M, maxMr5=maxMr5, othint=othint, othdbl=othdbl, bnumat=bnumat, Hstep=control$Hstep, alldump = FALSE) sitescores = getfromVGAMenv("numat", prefix = ".VGAM.UQO.") dim(sitescores) = c(n, Rank) sitescores = scale(sitescores, center = TRUE, scale = FALSE) sitescores = crow1C(sitescores, rrcontrol$Crow1positive) dimnames(sitescores) = list(dimnames(y)[[1]], if (Rank == 1) "lv" else paste("lv", 1:Rank, sep = "")) if (converged <- (conjgrad$convergence == 0)) break } if (!converged && optim.maxit>1) warning("convergence not obtained") temp9 = calluqof(sitescores, etamat=eta, ymat=y, wvec=w, modelno=modelno, nice31=nice31, xmat = x, Control=rrcontrol, n=n, M=M, maxMr5=maxMr5, othint=othint, othdbl=othdbl, bnumat=bnumat, Hstep=NA, alldump = TRUE) coefs = getfromVGAMenv("beta", prefix = ".VGAM.UQO.") VGAM.fv = getfromVGAMenv("fv", prefix = ".VGAM.UQO.") etamat = getfromVGAMenv("etamat", prefix = ".VGAM.UQO.") dim(etamat) = c(M,n) etamat = t(etamat) wresids = getfromVGAMenv("z", prefix = ".VGAM.UQO.") - etamat dim(wresids) = c(n,M) if (!intercept.only) stop("can only handle intercept.only == TRUE currently") if (nice31) { coefs = c(t(matrix(coefs, ncol=M))) # Get into right order coefs = matrix(coefs, nrow=M) Amat = coefs[,1:Rank,drop = FALSE] if (rrcontrol$IToleran) { B1 = coefs[,-(1:Rank),drop = FALSE] Dmat = matrix(0, M, Rank*(Rank+1)/2) Dmat[,1:Rank] = -0.5 } else { Dmat = coefs[,(Rank+1):(Rank + Rank*(Rank+1)/2),drop = FALSE] B1 = coefs[,(1+(Rank + Rank*(Rank+1)/2)):ncol(coefs),drop = FALSE] } } else { Amat = t(matrix(coefs[1:(Rank*M)], Rank, M)) cptr1 = (Rank*M) Dmat = coefs[(cptr1+1):(cptr1+Rank*(Rank+1)/2)] Dmat = matrix(Dmat, M, Rank*(Rank+1)/2, byrow = TRUE) cptr1 = (Rank*M) + Rank*(Rank+1)/2 B1 = coefs[(cptr1+1):length(coefs)] } lv.names = if (Rank == 1) "lv" else paste("lv", 1:Rank, sep = "") lp.names = predictors.names if (!length(lp.names)) lp.names = NULL extra$Amat = matrix(Amat, M, Rank, dimnames = list(lp.names, lv.names)) extra$B1 = matrix(B1, ncol=M, dimnames = list(names.colx1.index, predictors.names)) extra$Dmat = matrix(Dmat, M, Rank*(Rank+1)/2) extra$Cmat = NULL # This is UQO!! 
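  # Sketch of the coefficient layout unpacked above (as read from this
  # code, not from any external documentation):
  #   Amat : M x Rank matrix of loadings on the latent variable(s);
  #   Dmat : M x Rank*(Rank+1)/2 matrix of quadratic (tolerance) terms;
  #   B1   : coefficients of the intercept-only x1 part.
  # Cmat is NULL because UQO is unconstrained: the site scores are free
  # parameters, rather than linear combinations of environmental
  # covariates as in CQO.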
VGAM.etamat = getfromVGAMenv("etamat", prefix = ".VGAM.UQO.") VGAM.etamat = matrix(VGAM.etamat, n, M, byrow = TRUE, dimnames = list(dimnames(y)[[1]], predictors.names)) coefficients = c(coefs) # Make a vector because of class "numeric" rmfromVGAMenv(c("etamat", "beta", "fv"), prefix = ".VGAM.UQO.") if (length(family@fini)) eval(family@fini) misc <- list(function.name = function.name, intercept.only=intercept.only, predictors.names = predictors.names, modelno = modelno, M = M, n = n, nstar = nstar, nice31 = nice31, p = ncol(x), pstar = pstar, p1star = p1star, p2star = p2star, ynames = dimnames(y)[[2]]) crit.list <- list(deviance=conjgrad$value) structure(c(list( coefficients = coefficients, constraints = Blist, sitescores = sitescores, crit.list = crit.list, control=control, extra=extra, family=family, fitted.values=VGAM.fv, iter=iter, misc=misc, predictors=VGAM.etamat, prior.weights = w, x=x, y=y)), vclass=family@vfamily) } show.uqo <- function(object) { if (!is.null(cl <- object@call)) { cat("Call:\n") dput(cl) } cat("\n") cat(object@misc$n, "sites and", object@misc$M, "responses/species\n") cat("Rank", object@control$Rank) cat(",", ifelse(object@control$EqualToler, "equal-tolerances", "unequal-tolerances"), "\n") if (length(deviance(object))) cat("\nResidual deviance:", format(deviance(object)), "\n") invisible(object) NULL } setMethod("show", "uqo", function(object) show.uqo(object)) deviance.uqo <- function(object, ...) object@criterion$deviance setMethod("deviance", "uqo", function(object, ...) deviance.uqo(object, ...)) setMethod("coefficients", "uqo", function(object, ...) Coef.qrrvglm(object, ...)) setMethod("coef", "uqo", function(object, ...) Coef.qrrvglm(object, ...)) setMethod("Coef", "uqo", function(object, ...) Coef.qrrvglm(object, ...)) setMethod("show", "Coef.uqo", function(object) show.Coef.qrrvglm(object, C = FALSE)) residualsuqo <- function(object, type = c("deviance", "pearson", "working", "response"), matrix.arg= TRUE) { if (mode(type) != "character" && mode(type) != "name") type = as.character(substitute(type)) type = match.arg(type, c("deviance", "pearson", "working", "response"))[1] switch(type, response = object@y - fitted(object), stop("this type of residual hasn't been implemented yet") ) } setMethod("resid", "uqo", function(object, ...) residualsuqo(object, ...)) setMethod("residuals", "uqo", function(object, ...) residualsuqo(object, ...)) fitted.values.uqo <- function(object, ...) object@fitted.values setMethod("fitted", "uqo", function(object, ...) fitted.values.uqo(object, ...)) setMethod("fitted.values", "uqo", function(object, ...) fitted.values.uqo(object, ...)) predict.uqo <- function(object, newdata = NULL, ...) { if (length(newdata) > 0) stop("can't handle newdata argument yet") object@predictors } setMethod("predict", "uqo", function(object, ...) predict.uqo(object, ...)) setMethod("persp", "uqo", function(x, ...) perspqrrvglm(x, ...)) setMethod("trplot", "uqo", function(object, ...) trplot.qrrvglm(object, check.ok = FALSE, ...)) setMethod("plot", "uqo", function(x, y, ...) invisible(plotqrrvglm(object=x, ...))) setMethod("lvplot", "uqo", function(object, ...) 
invisible(lvplot.qrrvglm(object, C = FALSE, check.ok = FALSE, ...))) .VGAM.UQO.CA = function(Y) { Y = as.matrix(Y) / sum(Y) rowsum = c(Y %*% rep(1, len = ncol(Y))) colsum = c(t(Y) %*% rep(1, len = nrow(Y))) rc = outer(rowsum, colsum) Ybar = (Y - rc) / sqrt(rc) Q = qr(Ybar) if (Q$rank > 0) { temp = svd(Ybar) colnames(temp$u) = paste("CA", 1:length(temp$d), sep = "") rownames(temp$u) = dimnames(Y)[[1]] sweep(as.matrix(temp$u[,1:Q$rank, drop = FALSE]), 1, 1/sqrt(rowsum), "*") } else stop("Null rank") } if (FALSE) { scores.uqo <- function (x, type = c("sites", "species"), ...) { if (mode(type) != "character" && mode(type) != "name") type = as.character(substitute(type)) type = match.arg(type, c("sites", "species"))[1] switch(type, sites = if (any(slotNames(x) == "lv")) x@lv else Coef(x)@lv, species = if (any(slotNames(x) == "Optimum")) x@Optimum else Coef(x)@Optimum ) } setMethod("scores", "uqo", function(x, ...) scores.uqo(x, ...)) } jitteruqo = function(mat) { mat * ifelse(runif(length(mat)) < 0.5, -1, 1) } setMethod("Opt", "uqo", function(object, ...) Opt.qrrvglm(object, ...)) setMethod("Max", "uqo", function(object, ...) Max.qrrvglm(object, ...)) setMethod("lv", "uqo", function(object, ...) latvar.qrrvglm(object, ...)) if (!isGeneric("calibrate")) setGeneric("calibrate", function(object, ...) standardGeneric("calibrate")) setMethod("calibrate", "uqo", function(object, ...) calibrate.qrrvglm(object, ...)) summary.uqo = function(object, ...) { answer = Coef(object, ...) class(answer) = "summary.uqo" answer@call = object@call answer@misc = object@misc answer } show.summary.uqo = function(x, ...) { cat("\nCall:\n") dput(x@call) show.Coef.qrrvglm(x, ...) cat("\nNumber of responses/species: ", x@NOS, "\n") if (length(x@misc$dispersion) == 1) cat("\nDispersion parameter(s): ", x@misc$dispersion, "\n") invisible(x) } setClass("summary.uqo", representation("Coef.uqo", "misc" = "list", "call" = "call")) setMethod("summary", "uqo", function(object, ...) summary.uqo(object, ...)) setMethod("show", "summary.uqo", function(object) show.summary.uqo(object)) Tol.uqo = function(object, ...) { Coef(object, ...)@Tolerance } Tol.Coef.uqo = function(object, ...) { if (length(list(...))) warning("Too late! Ignoring the extra arguments") Coef(object, ...)@Tolerance } if (!isGeneric("Tol")) setGeneric("Tol", function(object, ...) standardGeneric("Tol")) setMethod("Tol", "uqo", function(object, ...) Tol.uqo(object, ...)) setMethod("Tol", "Coef.uqo", function(object, ...) Tol.Coef.uqo(object, ...)) VGAM/R/summary.vlm.q0000644000176000001440000001161012136651110013704 0ustar ripleyusers# These functions are # Copyright (C) 1998-2013 T.W. Yee, University of Auckland. # All rights reserved. summaryvlm <- function(object, correlation = FALSE, dispersion = NULL, Colnames = c("Estimate", "Std. 
Error", "z value")) { if (is.logical(object@misc$BFGS) && object@misc$BFGS) warning(paste("the estimated variance-covariance matrix is", "usually inaccurate as the working weight matrices are a", "crude BFGS quasi-Newton approximation")) M <- object@misc$M n <- object@misc$n nrow_X_vlm <- object@misc$nrow_X_vlm ncol_X_vlm <- object@misc$ncol_X_vlm # May be NULL for CQO objects coef <- object@coefficients cnames <- names(coef) presid <- residualsvlm(object, type = "pearson") # NULL if pooled.weight if (any(is.na(coef))) { warning(paste("Some NAs in the coefficients---no summary", " provided; returning object\n")) return(object) } rdf <- object@df.residual if (!length(dispersion)) { if (is.numeric(object@misc$dispersion)) { dispersion <- object@misc$dispersion if (all(dispersion == 0)) stop("dispersion shouldn't be zero here!") } else { dispersion <- 1 object@misc$estimated.dispersion <- FALSE } } else if (dispersion == 0) { dispersion <- if (!length(object@rss)) { stop("object@rss is empty") } else { object@rss / object@df.residual } object@misc$estimated.dispersion <- TRUE } else { if (is.numeric(object@misc$dispersion) && object@misc$dispersion != dispersion) warning("overriding the value of object@misc$dispersion") object@misc$estimated.dispersion <- FALSE } sigma <- dispersion^0.5 # Can be a vector if (is.Numeric(ncol_X_vlm)) { R <- object@R if (ncol_X_vlm < max(dim(R))) stop("R is rank deficient") rinv <- diag(ncol_X_vlm) rinv <- backsolve(R, rinv) rowlen <- drop(((rinv^2) %*% rep(1, ncol_X_vlm))^0.5) covun <- rinv %*% t(rinv) dimnames(covun) <- list(cnames, cnames) } coef <- matrix(rep(coef, 3), ncol = 3) dimnames(coef) <- list(cnames, Colnames) if (length(sigma) == 1 && is.Numeric(ncol_X_vlm)) { coef[, 2] <- rowlen %o% sigma # Fails here when sigma is a vector coef[, 3] <- coef[, 1] / coef[, 2] } else { coef[,1] <- coef[,2] <- coef[,3] <- NA } if (correlation) { correl <- covun * outer(1 / rowlen, 1 / rowlen) dimnames(correl) <- list(cnames, cnames) } else { correl <- matrix(0, 0, 0) # was NULL, but now a special matrix } answer <- new("summary.vlm", object, coef3 = coef, correlation = correl, df = c(ncol_X_vlm, rdf), sigma = sigma) if (is.Numeric(ncol_X_vlm)) answer@cov.unscaled <- covun answer@dispersion <- dispersion # Overwrite this if (length(presid)) answer@pearson.resid <- as.matrix(presid) answer } show.summary.vlm <- function(x, digits = NULL, quote = TRUE, prefix = "") { M <- x@misc$M coef3 <- x@coef3 # ficients correl <- x@correlation if (is.null(digits)) { digits <- options()$digits } else { old.digits <- options(digits = digits) on.exit(options(old.digits)) } cat("\nCall:\n") dput(x@call) presid <- x@pearson.resid rdf <- x@df[2] if (length(presid) && all(!is.na(presid))) { cat("\nPearson residuals:\n") if (rdf/M > 5) { rq <- apply(as.matrix(presid), 2, quantile) # 5 x M dimnames(rq) <- list(c("Min", "1Q", "Median", "3Q", "Max"), x@misc$predictors.names) print(t(rq), digits = digits) } else if (rdf > 0) { print(presid, digits = digits) } } if (!all(is.na(coef3))) { cat("\nCoefficients:\n") print(coef3, digits = digits) } cat("\nNumber of responses: ", M, "\n") if (length(x@misc$predictors.names)) if (M == 1) { cat("\nName of response:", paste(x@misc$predictors.names, collapse = ", "), "\n") } else { UUU <- paste(x@misc$predictors.names, collapse = ", ") UUU <- x@misc$predictors.names cat("\nNames of responses:\n") cat(UUU, fill = TRUE, sep = ", ") } if (!is.null(x@rss)) cat("\nResidual Sum of Squares:", format(round(x@rss, digits)), "on", round(rdf, digits), "degrees of 
freedom\n") if (length(correl)) { ncol_X_vlm <- dim(correl)[2] if (ncol_X_vlm > 1) { cat("\nCorrelation of Coefficients:\n") ll <- lower.tri(correl) correl[ll] <- format(round(correl[ll], digits)) correl[!ll] <- "" print(correl[-1, -ncol_X_vlm, drop = FALSE], quote = FALSE, digits = digits) } } invisible(NULL) } setMethod("summary", "vlm", function(object, ...) summaryvlm(object, ...)) setMethod("show", "summary.vlm", function(object) show.summary.vlm(object)) VGAM/R/summary.vglm.q0000644000176000001440000001702312136651110014057 0ustar ripleyusers# These functions are # Copyright (C) 1998-2013 T.W. Yee, University of Auckland. # All rights reserved. yformat <- function(x, digits = options()$digits) { format(ifelse(abs(x) < 0.001, signif(x, digits), round(x, digits))) } summaryvglm <- function(object, correlation = FALSE, dispersion = NULL, digits = NULL) { if (length(dispersion) && dispersion == 0 && length(object@family@summary.dispersion) && !object@family@summary.dispersion) { stop("cannot use the general VGLM formula (based on a residual ", "sum of squares) for computing the dispersion parameter") } stuff <- summaryvlm(as(object, "vlm"), correlation = correlation, dispersion = dispersion) answer <- new("summary.vglm", object, coef3 = stuff@coef3, cov.unscaled = stuff@cov.unscaled, correlation = stuff@correlation, df = stuff@df, sigma = stuff@sigma) presid <- resid(object, type = "pearson") if (length(presid)) answer@pearson.resid <- as.matrix(presid) slot(answer, "misc") <- stuff@misc # Replace if (is.numeric(stuff@dispersion)) slot(answer, "dispersion") <- stuff@dispersion answer } setMethod("logLik", "summary.vglm", function(object, ...) logLik.vlm(object, ...)) show.summary.vglm <- function(x, digits = NULL, quote = TRUE, prefix = "", presid = TRUE, nopredictors = FALSE) { M <- x@misc$M coef <- x@coef3 # icients correl <- x@correlation digits <- if (is.null(digits)) options()$digits - 2 else digits cat("\nCall:\n") dput(x@call) Presid <- x@pearson.resid rdf <- x@df[2] if (presid && length(Presid) && all(!is.na(Presid)) && is.finite(rdf)) { cat("\nPearson Residuals:\n") if (rdf/M > 5) { rq <- apply(as.matrix(Presid), 2, quantile) # 5 x M dimnames(rq) <- list(c("Min", "1Q", "Median", "3Q", "Max"), x@misc$predictors.names) print(t(rq), digits = digits) } else if (rdf > 0) { print(Presid, digits = digits) } } cat("\nCoefficients:\n") print.default(coef, digits = digits) cat("\nNumber of linear predictors: ", M, "\n") if (!is.null(x@misc$predictors.names) && !nopredictors) { if (M == 1) { cat("\nName of linear predictor:", paste(x@misc$predictors.names, collapse = ", "), "\n") } else if (M <= 5) { cat("\nNames of linear predictors:", paste(x@misc$predictors.names, collapse = ", "), fill = TRUE) } } prose <- "" if (length(x@dispersion)) { if (is.logical(x@misc$estimated.dispersion) && x@misc$estimated.dispersion) { prose <- "(Estimated) " } else { if (is.numeric(x@misc$default.dispersion) && x@dispersion == x@misc$default.dispersion) prose <- "(Default) " if (is.numeric(x@misc$default.dispersion) && x@dispersion != x@misc$default.dispersion) prose <- "(Pre-specified) " } cat(paste("\n", prose, "Dispersion Parameter for ", x@family@vfamily[1], " family: ", yformat(x@dispersion, digits), "\n", sep = "")) } if (length(deviance(x))) { cat("\nResidual deviance:", yformat(deviance(x), digits)) if (is.finite(rdf)) cat(" on", round(rdf, digits), "degrees of freedom\n") else cat("\n") } if (length(vll <- logLik.vlm(x))) { cat("\nLog-likelihood:", yformat(vll, digits)) if (is.finite(rdf)) cat(" 
on", round(rdf, digits), "degrees of freedom\n") else cat("\n") } if (length(x@criterion)) { ncrit <- names(x@criterion) for(ii in ncrit) if (ii != "loglikelihood" && ii != "deviance") cat(paste(ii, ":", sep = ""), yformat(x@criterion[[ii]], digits), "\n") } cat("\nNumber of iterations:", format(trunc(x@iter)), "\n") if (!is.null(correl)) { ncol_X_vlm <- dim(correl)[2] if (ncol_X_vlm > 1) { cat("\nCorrelation of Coefficients:\n") ll <- lower.tri(correl) correl[ll] <- format(round(correl[ll], digits)) correl[!ll] <- "" print(correl[-1, -ncol_X_vlm, drop = FALSE], quote = FALSE, digits = digits) } } invisible(NULL) } setMethod("summary", "vglm", function(object, ...) summaryvglm(object, ...)) setMethod("show", "summary.vglm", function(object) show.summary.vglm(object)) vcovdefault <- function(object, ...) { if (is.null(object@vcov)) stop("no default") object@vcov } vcovvlm <- function(object, dispersion = NULL, untransform = FALSE) { so <- summaryvlm(object, correlation = FALSE, dispersion = dispersion) d <- if (any(slotNames(so) == "dispersion") && is.Numeric(so@dispersion)) so@dispersion else 1 answer <- d * so@cov.unscaled if (is.logical(OKRC <- object@misc$RegCondOK) && !OKRC) warning("MLE regularity conditions were violated ", "at the final iteration of the fitted object") if (!untransform) return(answer) new.way <- TRUE if (!is.logical(object@misc$intercept.only)) stop("cannot determine whether the object is", "an intercept-only fit, i.e., 'y ~ 1' is the response") if (!object@misc$intercept.only) stop("object must be an intercept-only fit, i.e., ", "y ~ 1 is the response") if (!all(trivial.constraints(constraints(object)) == 1)) stop("object must have trivial constraints") M <- object@misc$M tvector <- numeric(M) etavector <- predict(object)[1, ] # Contains transformed parameters LINK <- object@misc$link # link.names # This should be a character vector. EARG <- object@misc$earg # This could be a NULL if (is.null(EARG)) EARG <- list(theta = NULL) if (!is.list(EARG)) stop("the 'earg' component of 'object@misc' must be a list") if (length(LINK) != M && length(LINK) != 1) stop("cannot obtain the link functions to untransform 'object'") if (!is.character(LINK)) stop("the 'link' component of 'object@misc' should ", "be a character vector") learg <- length(EARG) llink <- length(LINK) if (llink != learg) stop("the 'earg' component of 'object@misc' should ", "be a list of length ", learg) level1 <- length(EARG) > 3 && length(intersect(names(EARG), c("theta", "inverse", "deriv", "short", "tag"))) > 3 if (level1) EARG <- list(oneOnly = EARG) learg <- length(EARG) for (ii in 1:M) { TTheta <- etavector[ii] # Transformed theta use.earg <- if (llink == 1) EARG[[1]] else EARG[[ii]] function.name <- if (llink == 1) LINK else LINK[ii] if (new.way) { use.earg[["inverse"]] <- TRUE # New use.earg[["theta"]] <- TTheta # New Theta <- do.call(function.name, use.earg) use.earg[["inverse"]] <- FALSE # Reset this use.earg[["deriv"]] <- 1 # New use.earg[["theta"]] <- Theta # Renew this tvector[ii] <- do.call(function.name, use.earg) } else { stop("link functions handled in the new way now") } } # of for(ii in 1:M) tvector <- abs(tvector) answer <- (cbind(tvector) %*% rbind(tvector)) * answer if (length(dmn2 <- names(object@misc$link)) == M) dimnames(answer) <- list(dmn2, dmn2) answer } setMethod("vcov", "vlm", function(object, ...) vcovvlm(object, ...)) setMethod("vcov", "vglm", function(object, ...) 
vcovvlm(object, ...)) VGAM/R/summary.vgam.q0000644000176000001440000001341212136651110014042 0ustar ripleyusers# These functions are # Copyright (C) 1998-2013 T.W. Yee, University of Auckland. # All rights reserved. summaryvgam <- function(object, dispersion = NULL, digits = options()$digits-2) { if (length(dispersion) && dispersion == 0 && length(object@family@summary.dispersion) && !object@family@summary.dispersion) { stop("cannot use the general VGLM formula (based on a residual ", "sum of squares) for computing the dispersion parameter") } newobject <- object class(newobject) <- "vglm" stuff <- summaryvglm(newobject, dispersion = dispersion) rdf <- stuff@df[2] <- object@df.residual # NA M <- object@misc$M nrow_X_vlm <- object@misc$nrow_X_vlm rank <- if (is.null(object@qr$rank)) length(object@coefficients) else object@qr$rank useF <- object@misc$useF if (is.null(useF)) useF <- FALSE df <- unlist(lapply(object@misc$new.assign, length)) nldf <- object@nl.df if (length(df)) { aod <- as.matrix(round(df, 1)) dimnames(aod) <- list(names(df), "Df") if (!is.null(object@nl.chisq)) { aod <- cbind(aod, NA, NA, NA) nl.chisq <- object@nl.chisq / object@dispersion special <- abs(nldf) < 0.1 # This was the quick fix in s.vam() nldf[special] <- 1 # Give it a plausible value for pchisq & pf snames <- names(nldf) aod[snames, 2] <- round(nldf, 1) aod[snames, 3] <- if (useF) nl.chisq/nldf else nl.chisq aod[snames, 4] <- if (useF) pf(nl.chisq / nldf, nldf, rdf, lower.tail = FALSE) else pchisq(nl.chisq, nldf, lower.tail = FALSE) if (any(special)) { aod[snames[special], 2:4] <- NA } rnames <- c("Df", "Npar Df", "Npar Chisq", "P(Chi)") if (useF) rnames[3:4] <- c("Npar F", "Pr(F)") dimnames(aod) <- list(names(df), rnames) heading <- if (useF) "\nDF for Terms and Approximate F-values for Nonparametric Effects\n" else "\nDF for Terms and Approximate Chi-squares for Nonparametric Effects\n" } else { heading <- "DF for Terms\n\n" } aod <- as.vanova(data.frame(aod, check.names = FALSE), heading) class(aod) <- "data.frame" } else { aod <- data.frame() } answer <- new("summary.vgam", object, call = stuff@call, cov.unscaled = stuff@cov.unscaled, correlation = stuff@correlation, df = stuff@df, sigma = stuff@sigma) slot(answer, "coefficients") <- stuff@coefficients # Replace if (is.numeric(stuff@dispersion)) slot(answer, "dispersion") <- stuff@dispersion presid <- residuals(object, type = "pearson") if (length(presid)) answer@pearson.resid <- as.matrix(presid) slot(answer, "anova") <- aod answer } show.summary.vgam <- function(x, quote = TRUE, prefix = "", digits = options()$digits-2) { M <- x@misc$M cat("\nCall:\n") dput(x@call) presid <- x@pearson.resid rdf <- x@df[2] if (FALSE && !is.null(presid) && all(!is.na(presid))) { cat("\nPearson Residuals:\n") if (rdf/M > 5) { rq <- apply(as.matrix(presid), 2, quantile) # 5 x M dimnames(rq) <- list(c("Min", "1Q", "Median", "3Q", "Max"), x@misc$predictors.names) print(t(rq), digits = digits) } else if (rdf > 0) { print(presid, digits = digits) } } cat("\nNumber of linear predictors: ", M, "\n") if (!is.null(x@misc$predictors.names)) if (M == 1) cat("\nName of linear predictor:", paste(x@misc$predictors.names, collapse = ", "), "\n") else if (M <= 5) cat("\nNames of linear predictors:", paste(x@misc$predictors.names, collapse = ", "), "\n") prose <- "" if (length(x@dispersion)) { if (is.logical(x@misc$estimated.dispersion) && x@misc$estimated.dispersion) { prose <- "(Estimated) " } else { if (is.numeric(x@misc$default.dispersion) && x@dispersion == x@misc$default.dispersion) 
prose <- "(Default) " if (is.numeric(x@misc$default.dispersion) && x@dispersion != x@misc$default.dispersion) prose <- "(Pre-specified) " } cat(paste("\n", prose, "Dispersion Parameter for ", x@family@vfamily[1], " family: ", format(round(x@dispersion, digits)), "\n", sep = "")) } if (length(deviance(x))) cat("\nResidual deviance: ", format(round(deviance(x), digits)), "on", format(round(rdf, 3)), "degrees of freedom\n") if (length(logLik.vlm(x))) cat("\nLog-likelihood:", format(round(logLik.vlm(x), digits)), "on", format(round(rdf, 3)), "degrees of freedom\n") if (length(x@criterion)) { ncrit <- names(x@criterion) for(ii in ncrit) if (ii != "loglikelihood" && ii != "deviance") cat(paste(ii, ":", sep = ""), format(x@criterion[[ii]]), "\n") } cat("\nNumber of iterations: ", x@iter, "\n") if (length(x@anova)) { show.vanova(x@anova, digits = digits) # ".vanova" for Splus6 } invisible(NULL) } setMethod("summary", "vgam", function(object, ...) summaryvgam(object, ...)) setMethod("show", "summary.vgam", function(object) show.summary.vgam(object)) show.vanova <- function(x, digits = .Options$digits, ...) { rrr <- row.names(x) heading <- attr(x, "heading") if (!is.null(heading)) cat(heading, sep = "\n") attr(x, "heading") <- NULL for(i in 1:length(x)) { xx <- x[[i]] xna <- is.na(xx) xx <- format(zapsmall(xx, digits)) xx[xna] <- "" x[[i]] <- xx } print.data.frame(as.data.frame(x, row.names = rrr)) invisible(x) } as.vanova <- function(x, heading) { if (!is.data.frame(x)) stop("x must be a data frame") rrr <- row.names(x) attr(x, "heading") <- heading x <- as.data.frame(x, row.names = rrr) x } VGAM/R/step.vglm.q0000644000176000001440000000035212136651110013332 0ustar ripleyusers# These functions are # Copyright (C) 1998-2013 T.W. Yee, University of Auckland. # All rights reserved. step.vglm <- function(fit, ...) { cat("Sorry, this function has not been written yet. Returning a NULL.\n") NULL } VGAM/R/smart.R0000644000176000001440000004123512136651110012507 0ustar ripleyusers# These functions are # Copyright (C) 1998-2013 T.W. Yee, University of Auckland. # All rights reserved. 
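# Minimal sketch of the write/read cycle that the functions below
# implement.  It is wrapped in if (FALSE) so nothing runs when this
# file is sourced; 'xx' and 'newxx' are hypothetical vectors, not
# package objects.
if (FALSE) {
  xx <- runif(100)                 # hypothetical fitting data
  newxx <- runif(20)               # hypothetical prediction data

  setup.smart("write")             # fitting phase: record arguments
  basis <- bs(xx, df = 4)          # the smart bs() below calls put.smart()
  sp <- get.smart.prediction()     # harvest what was recorded
  wrapup.smart()

  setup.smart("read", smart.prediction = sp)
  basis.new <- bs(newxx)           # reuses the stored df/knots/degree so
  wrapup.smart()                   # the prediction basis matches the fit
}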
smartpredenv <- new.env() smart.mode.is <- function(mode.arg = NULL) { if (!length(mode.arg)) { if (exists(".smart.prediction", envir = VGAM:::smartpredenv)) { get(".smart.prediction.mode", envir = VGAM:::smartpredenv) } else { "neutral" } } else { if (mode.arg != "neutral" && mode.arg != "read" && mode.arg != "write") stop("argument \"mode.arg\" must be one of", " \"neutral\", \"read\" or \"write\"") if (exists(".smart.prediction", envir = VGAM:::smartpredenv)) { get(".smart.prediction.mode", envir = VGAM:::smartpredenv) == mode.arg } else { mode.arg == "neutral" } } } setup.smart <- function(mode.arg, smart.prediction = NULL, max.smart = 30) { actual <- if (mode.arg == "write") vector("list", max.smart) else if (mode.arg == "read") smart.prediction else stop("value of 'mode.arg' unrecognized") wrapup.smart() # make sure if (length(actual)) { assign(".smart.prediction", actual, envir = VGAM:::smartpredenv) assign(".smart.prediction.counter", 0, envir = VGAM:::smartpredenv) assign(".smart.prediction.mode", mode.arg, envir = VGAM:::smartpredenv) assign(".max.smart", max.smart, envir = VGAM:::smartpredenv) assign(".smart.prediction", actual, envir = VGAM:::smartpredenv) } } wrapup.smart <- function() { if (exists(".smart.prediction", envir = VGAM:::smartpredenv)) rm(".smart.prediction", envir = VGAM:::smartpredenv) if (exists(".smart.prediction.counter", envir = VGAM:::smartpredenv)) rm(".smart.prediction.counter", envir = VGAM:::smartpredenv) if (exists(".smart.prediction.mode", envir = VGAM:::smartpredenv)) rm(".smart.prediction.mode", envir = VGAM:::smartpredenv) if (exists(".max.smart", envir = VGAM:::smartpredenv)) rm(".max.smart", envir = VGAM:::smartpredenv) } get.smart.prediction <- function() { smart.prediction.counter <- get(".smart.prediction.counter", envir = VGAM:::smartpredenv) max.smart <- get(".max.smart", envir = VGAM:::smartpredenv) if (smart.prediction.counter > 0) { # Save this on the object for smart prediction later smart.prediction <- get(".smart.prediction", envir = VGAM:::smartpredenv) if (max.smart >= (smart.prediction.counter + 1)) for(i in max.smart:(smart.prediction.counter + 1)) smart.prediction[[i]] <- NULL smart.prediction } else NULL } put.smart <- function(smart) { max.smart <- get(".max.smart", envir = VGAM:::smartpredenv) smart.prediction.counter <- get(".smart.prediction.counter", envir = VGAM:::smartpredenv) smart.prediction <- get(".smart.prediction", envir = VGAM:::smartpredenv) smart.prediction.counter <- smart.prediction.counter + 1 if (smart.prediction.counter > max.smart) { # if list is too small, make it larger max.smart <- max.smart + (inc.smart <- 10) # can change inc.smart smart.prediction <- c(smart.prediction, vector("list", inc.smart)) assign(".max.smart", max.smart, envir = VGAM:::smartpredenv) } smart.prediction[[smart.prediction.counter]] <- smart assign(".smart.prediction", smart.prediction, envir = VGAM:::smartpredenv) assign(".smart.prediction.counter", smart.prediction.counter, envir = VGAM:::smartpredenv) } get.smart <- function() { # Returns one list component of information smart.prediction <- get(".smart.prediction", envir = VGAM:::smartpredenv) smart.prediction.counter <- get(".smart.prediction.counter", envir = VGAM:::smartpredenv) smart.prediction.counter <- smart.prediction.counter + 1 assign(".smart.prediction.counter", smart.prediction.counter, envir = VGAM:::smartpredenv) smart <- smart.prediction[[smart.prediction.counter]] smart } smart.expression <- expression({ smart <- get.smart() assign(".smart.prediction.mode", 
"neutral", envir = VGAM:::smartpredenv) .smart.match.call <- as.character(smart$match.call) smart$match.call <- NULL # Kill it off for the do.call ans.smart <- do.call(.smart.match.call[1], c(list(x=x), smart)) assign(".smart.prediction.mode", "read", envir = VGAM:::smartpredenv) ans.smart }) is.smart <- function(object) { if (is.function(object)) { if (is.logical(a <- attr(object, "smart"))) a else FALSE } else { if (length(slotNames(object))) { if (length(object@smart.prediction) == 1 && is.logical(object@smart.prediction$smart.arg)) object@smart.prediction$smart.arg else any(slotNames(object) == "smart.prediction") } else { if (length(object$smart.prediction) == 1 && is.logical(object$smart.prediction$smart.arg)) object$smart.prediction$smart.arg else any(names(object) == "smart.prediction") } } } library(splines) bs <- function (x, df = NULL, knots = NULL, degree = 3, intercept = FALSE, Boundary.knots = range(x)) { x <- x # Evaluate x if (smart.mode.is("read")) { return(eval(smart.expression)) } nx <- names(x) x <- as.vector(x) nax <- is.na(x) if (nas <- any(nax)) x <- x[!nax] if (!missing(Boundary.knots)) { Boundary.knots <- sort(Boundary.knots) outside <- (ol <- x < Boundary.knots[1]) | (or <- x > Boundary.knots[2L]) } else outside <- FALSE ord <- 1 + (degree <- as.integer(degree)) if (ord <= 1) stop("'degree' must be integer >= 1") if (!missing(df) && missing(knots)) { nIknots <- df - ord + (1 - intercept) if (nIknots < 0) { nIknots <- 0 warning("'df' was too small; have used ", ord - (1 - intercept)) } knots <- if (nIknots > 0) { knots <- seq(from = 0, to = 1, length = nIknots + 2)[-c(1, nIknots + 2)] stats::quantile(x[!outside], knots) } } Aknots <- sort(c(rep(Boundary.knots, ord), knots)) if (any(outside)) { warning("some 'x' values beyond boundary knots may ", "cause ill-conditioned bases") derivs <- 0:degree scalef <- gamma(1L:ord) basis <- array(0, c(length(x), length(Aknots) - degree - 1L)) if (any(ol)) { k.pivot <- Boundary.knots[1L] xl <- cbind(1, outer(x[ol] - k.pivot, 1L:degree, "^")) tt <- spline.des(Aknots, rep(k.pivot, ord), ord, derivs)$design basis[ol, ] <- xl %*% (tt/scalef) } if (any(or)) { k.pivot <- Boundary.knots[2L] xr <- cbind(1, outer(x[or] - k.pivot, 1L:degree, "^")) tt <- spline.des(Aknots, rep(k.pivot, ord), ord, derivs)$design basis[or, ] <- xr %*% (tt/scalef) } if (any(inside <- !outside)) basis[inside, ] <- spline.des(Aknots, x[inside], ord)$design } else basis <- spline.des(Aknots, x, ord)$design if (!intercept) basis <- basis[, -1L, drop = FALSE] n.col <- ncol(basis) if (nas) { nmat <- matrix(NA, length(nax), n.col) nmat[!nax, ] <- basis basis <- nmat } dimnames(basis) <- list(nx, 1L:n.col) a <- list(degree = degree, knots = if (is.null(knots)) numeric(0L) else knots, Boundary.knots = Boundary.knots, intercept = intercept, Aknots = Aknots) attributes(basis) <- c(attributes(basis), a) class(basis) <- c("bs", "basis", "matrix") if (smart.mode.is("write")) put.smart(list(df = df, knots = knots, degree = degree, intercept = intercept, Boundary.knots = Boundary.knots, match.call = match.call())) basis } attr(bs, "smart") <- TRUE ns <- function (x, df = NULL, knots = NULL, intercept = FALSE, Boundary.knots = range(x)) { x <- x # Evaluate x if (smart.mode.is("read")) { return(eval(smart.expression)) } nx <- names(x) x <- as.vector(x) nax <- is.na(x) if (nas <- any(nax)) x <- x[!nax] if (!missing(Boundary.knots)) { Boundary.knots <- sort(Boundary.knots) outside <- (ol <- x < Boundary.knots[1L]) | (or <- x > Boundary.knots[2L]) } else outside <- FALSE if 
(!missing(df) && missing(knots)) { nIknots <- df - 1 - intercept if (nIknots < 0) { nIknots <- 0 warning("'df' was too small; have used ", 1 + intercept) } knots <- if (nIknots > 0) { knots <- seq.int(0, 1, length.out = nIknots + 2L)[-c(1L, nIknots + 2L)] stats::quantile(x[!outside], knots) } } else nIknots <- length(knots) Aknots <- sort(c(rep(Boundary.knots, 4), knots)) if (any(outside)) { basis <- array(0, c(length(x), nIknots + 4L)) if (any(ol)) { k.pivot <- Boundary.knots[1L] xl <- cbind(1, x[ol] - k.pivot) tt <- spline.des(Aknots, rep(k.pivot, 2L), 4, c(0, 1))$design basis[ol, ] <- xl %*% tt } if (any(or)) { k.pivot <- Boundary.knots[2L] xr <- cbind(1, x[or] - k.pivot) tt <- spline.des(Aknots, rep(k.pivot, 2L), 4, c(0, 1))$design basis[or, ] <- xr %*% tt } if (any(inside <- !outside)) basis[inside, ] <- spline.des(Aknots, x[inside], 4)$design } else basis <- spline.des(Aknots, x, 4)$design const <- spline.des(Aknots, Boundary.knots, 4, c(2, 2))$design if (!intercept) { const <- const[, -1, drop = FALSE] basis <- basis[, -1, drop = FALSE] } qr.const <- qr(t(const)) basis <- as.matrix((t(qr.qty(qr.const, t(basis))))[, -(1L:2L), drop = FALSE]) n.col <- ncol(basis) if (nas) { nmat <- matrix(NA, length(nax), n.col) nmat[!nax, ] <- basis basis <- nmat } dimnames(basis) <- list(nx, 1L:n.col) a <- list(degree = 3, knots = if (is.null(knots)) numeric(0) else knots, Boundary.knots = Boundary.knots, intercept = intercept, Aknots = Aknots) attributes(basis) <- c(attributes(basis), a) class(basis) <- c("ns", "basis", "matrix") if (smart.mode.is("write")) put.smart(list(df = df, knots = knots, intercept = intercept, Boundary.knots = Boundary.knots, match.call = match.call())) basis } attr(ns, "smart") <- TRUE poly <- function (x, ..., degree = 1, coefs = NULL, raw = FALSE) { x <- x # Evaluate x if (!raw && smart.mode.is("read")) { smart <- get.smart() degree <- smart$degree coefs <- smart$coefs raw <- smart$raw } dots <- list(...) 
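  # In "read" (prediction) mode the block above restored 'degree',
  # 'coefs' and 'raw' as saved by put.smart() at fit time.  'coefs'
  # carries alpha and norm2, so the three-term recurrence further down
  # rebuilds exactly the orthogonal polynomial basis of the fit, even
  # though the new x values differ.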
if (nd <- length(dots)) { if (nd == 1 && length(dots[[1]]) == 1L) degree <- dots[[1L]] else return(polym(x, ..., degree = degree, raw = raw)) } if (is.matrix(x)) { m <- unclass(as.data.frame(cbind(x, ...))) return(do.call("polym", c(m, degree = degree, raw = raw))) } if (degree < 1) stop("'degree' must be at least 1") # At prediction time x may be less than the degree if (smart.mode.is("write") || smart.mode.is("neutral")) if (degree >= length(x)) stop("degree must be less than number of points") if (any(is.na(x))) stop("missing values are not allowed in 'poly'") n <- degree + 1 if (raw) { if (degree >= length(unique(x))) stop("'degree' must be less than number of unique points") Z <- outer(x, 1L:degree, "^") colnames(Z) <- 1L:degree attr(Z, "degree") <- 1L:degree class(Z) <- c("poly", "matrix") return(Z) } if (is.null(coefs)) { if (degree >= length(unique(x))) stop("'degree' must be less than number of unique points") xbar <- mean(x) x <- x - xbar X <- outer(x, seq_len(n) - 1, "^") QR <- qr(X) if (QR$rank < degree) stop("'degree' must be less than number of unique points") z <- QR$qr z <- z * (row(z) == col(z)) raw <- qr.qy(QR, z) norm2 <- colSums(raw^2) alpha <- (colSums(x * raw^2)/norm2 + xbar)[1L:degree] Z <- raw/rep(sqrt(norm2), each = length(x)) colnames(Z) <- 1L:n - 1L Z <- Z[, -1, drop = FALSE] attr(Z, "degree") <- 1:degree attr(Z, "coefs") <- list(alpha = alpha, norm2 = c(1, norm2)) class(Z) <- c("poly", "matrix") } else { alpha <- coefs$alpha norm2 <- coefs$norm2 Z <- matrix(, length(x), n) Z[, 1] <- 1 Z[, 2] <- x - alpha[1L] if (degree > 1) for (i in 2:degree) Z[, i + 1] <- (x - alpha[i]) * Z[, i] - (norm2[i + 1]/norm2[i]) * Z[, i - 1] Z <- Z/rep(sqrt(norm2[-1L]), each = length(x)) colnames(Z) <- 0:degree Z <- Z[, -1, drop = FALSE] attr(Z, "degree") <- 1L:degree attr(Z, "coefs") <- list(alpha = alpha, norm2 = norm2) class(Z) <- c("poly", "matrix") } if (smart.mode.is("write")) put.smart(list(degree = degree, coefs = attr(Z, "coefs"), raw = FALSE, # raw is changed above match.call = match.call())) Z } attr(poly, "smart") <- TRUE scale.default <- function (x, center = TRUE, scale = TRUE) { x <- as.matrix(x) if (smart.mode.is("read")) { return(eval(smart.expression)) } nc <- ncol(x) if (is.logical(center)) { if (center) { center <- colMeans(x, na.rm = TRUE) x <- sweep(x, 2L, center, check.margin = FALSE) } } else if (is.numeric(center) && (length(center) == nc)) x <- sweep(x, 2L, center, check.margin = FALSE) else stop("length of 'center' must equal the number of columns of 'x'") if (is.logical(scale)) { if (scale) { f <- function(v) { v <- v[!is.na(v)] sqrt(sum(v^2)/max(1, length(v) - 1L)) } scale <- apply(x, 2L, f) x <- sweep(x, 2L, scale, "/", check.margin = FALSE) } } else if (is.numeric(scale) && length(scale) == nc) x <- sweep(x, 2L, scale, "/", check.margin = FALSE) else stop("length of 'scale' must equal the number of columns of 'x'") if (is.numeric(center)) attr(x, "scaled:center") <- center if (is.numeric(scale)) attr(x, "scaled:scale") <- scale if (smart.mode.is("write")) { put.smart(list(center = center, scale = scale, match.call = match.call())) } x } attr(scale.default, "smart") <- TRUE attr(scale, "smart") <- TRUE "my1" <- function(x, minx=min(x)) { x <- x # Evaluate x if (smart.mode.is("read")) { smart <- get.smart() minx <- smart$minx # Overwrite its value } else if (smart.mode.is("write")) put.smart(list(minx=minx)) (x-minx)^2 } attr(my1, "smart") <- TRUE "my2" <- function(x, minx=min(x)) { x <- x # Evaluate x if (smart.mode.is("read")) { 
return(eval(smart.expression)) } else if (smart.mode.is("write")) put.smart(list(minx=minx, match.call=match.call())) (x-minx)^2 } attr(my2, "smart") <- TRUE "stdze1" <- function(x, center=TRUE, scale=TRUE) { x <- x # Evaluate x if (!is.vector(x)) stop("x must be a vector") if (smart.mode.is("read")) { smart <- get.smart() return((x-smart$center)/smart$scale) } if (is.logical(center)) center <- if (center) mean(x) else 0 if (is.logical(scale)) scale <- if (scale) sqrt(var(x)) else 1 if (smart.mode.is("write")) put.smart(list(center=center, scale=scale)) # Normal use (x-center)/scale } attr(stdze1, "smart") <- TRUE "stdze2" <- function(x, center=TRUE, scale=TRUE) { x <- x # Evaluate x if (!is.vector(x)) stop("x must be a vector") if (smart.mode.is("read")) { return(eval(smart.expression)) } if (is.logical(center)) center <- if (center) mean(x) else 0 if (is.logical(scale)) scale <- if (scale) sqrt(var(x)) else 1 if (smart.mode.is("write")) put.smart(list(center=center, scale=scale, match.call=match.call())) (x-center)/scale } attr(stdze2, "smart") <- TRUE VGAM/R/s.vam.q0000644000176000001440000002062512136651110012444 0ustar ripleyusers# These functions are # Copyright (C) 1998-2013 T.W. Yee, University of Auckland. # All rights reserved. s.vam <- function(x, zedd, wz, smomat, which, smooth.frame, bf.maxit = 10, bf.epsilon = 0.001, trace = FALSE, se.fit = TRUE, X_vlm_save, Blist, ncolBlist, M, qbig, Umat, all.knots = FALSE, nk = NULL, sf.only = FALSE) { nwhich <- names(which) dX_vlm <- as.integer(dim(X_vlm_save)) pbig <- dX_vlm[2] if (!length(smooth.frame$first)) { data <- smooth.frame[, nwhich, drop = FALSE] smooth.frame <- vgam.match(data, all.knots = all.knots, nk = nk) smooth.frame$first <- TRUE # Only executed at the first time dx <- as.integer(dim(x)) smooth.frame$n_lm <- dx[1] smooth.frame$p_lm <- dx[2] attr(data, "class") <- NULL osparv <- lapply(data, attr, "spar") # "o" for original odfvec <- lapply(data, attr, "df") s.xargument <- lapply(data, attr, "s.xargument") for (kk in 1:length(nwhich)) { ii <- nwhich[kk] temp <- osparv[[ii]] if (!is.numeric(temp) || any(temp < 0)) { stop("spar cannot be negative or non-numeric") } if (length(temp) > ncolBlist[ii]) { warning("only the first ", ncolBlist[ii], " values of ", "'spar' are used for variable '", s.xargument, "'") } osparv[[ii]] <- rep(temp, length = ncolBlist[ii]) # recycle temp <- odfvec[[ii]] if (!is.numeric(temp) || any(temp < 1)) { stop("df is non-numeric or less than 1") } if (length(temp) > ncolBlist[ii]) { warning("only the first ", ncolBlist[ii], " value(s) of 'df' ", "are used for variable '", s.xargument, "'") } odfvec[[ii]] <- rep(temp, length = ncolBlist[ii]) # recycle if (max(temp) > smooth.frame$neffec[kk]-1) { stop("'df' value too high for variable '", s.xargument, "'") } if (any(osparv[[ii]] != 0) && any(odfvec[[ii]] != 4)) { stop("cannot specify both 'spar' and 'df'") } } # End of kk loop osparv <- unlist(osparv) odfvec <- unlist(odfvec) smooth.frame$osparv <- osparv # Original smooth.frame$odfvec <- odfvec # Original if (sum(smooth.frame$dfvec[smooth.frame$osparv == 0]) + pbig > smooth.frame$n_lm * sum(ncolBlist[nwhich])) { stop("too many parameters/dof for data on hand") } xnrow_X_vlm <- labels(X_vlm_save)[[2]] asgn <- attr(X_vlm_save, "assign") aa <- NULL for (ii in nwhich) { aa <- c(aa, xnrow_X_vlm[asgn[[ii]]]) } smooth.frame$ndfsparv <- aa # Stored here smooth.frame$xnrow_X_vlm <- xnrow_X_vlm # Stored here smooth.frame$s.xargument <- s.xargument # Stored here smooth.frame$smap <- as.vector(cumsum( c(1, 
ncolBlist[nwhich]))[1:length(nwhich)]) smooth.frame$try.sparv <- osparv smooth.frame$lamvector <- double(length(odfvec)) smooth.frame$bindex <- as.integer(cumsum(c(1, smooth.frame$nknots * ncolBlist[nwhich]))) smooth.frame$lindex <- as.integer(cumsum(c(1, smooth.frame$neffec * ncolBlist[nwhich]))) smooth.frame$kindex <- as.integer(cumsum(c(1, 4 + smooth.frame$nknots))) } else { smooth.frame$first <- FALSE } if (sf.only) { return(smooth.frame) } ldk <- 3 * max(ncolBlist[nwhich]) + 1 # 20020711 which <- unlist(which) p_lm <- smooth.frame$p_lm n_lm <- smooth.frame$n_lm dim2wz <- if (is.matrix(wz)) ncol(wz) else 1 dim1U <- if (is.matrix(Umat)) nrow(Umat) else 1 nBlist <- names(Blist) for (ii in length(nBlist):1) { if (!any(nBlist[ii] == nwhich)) { Blist[[ii]] <- NULL } } trivc <- trivial.constraints(Blist) ncbvec <- ncolBlist[nwhich] ncolbmax <- max(ncbvec) contr.sp <- list(low = -1.5, ## low = 0. was default till R 1.3.x high = 1.5, tol = 1e-4, ## tol = 0.001 was default till R 1.3.x eps = 2e-8, ## eps = 0.00244 was default till R 1.3.x maxit = 500) fit <- dotC(name = "Yee_vbfa", # --------------------------------- npetc = as.integer(c(n_lm, p_lm, length(which), se.fit, 0, bf.maxit, qrank = 0, M, nbig = n_lm * M, pbig, qbig, dim2wz, dim1U, ier = 0, ldk = ldk, # ldk may be unused contr.sp$maxit, iinfo = 0 )), doubvec = as.double(c(bf.epsilon, resSS = 0, unlist(contr.sp[1:4]))), as.double(x), y = as.double(zedd), wz = as.double(wz), dfvec = as.double(smooth.frame$odfvec + 1), # 20130427; + 1 added lamvec = as.double(smooth.frame$lamvector), sparv = as.double(smooth.frame$try.sparv), as.integer(smooth.frame$matcho), as.integer(smooth.frame$neffec), as.integer(which), smomat = as.double(smomat), etamat = double(M * n_lm), beta = double(pbig), varmat = if (se.fit) as.double(smomat) else double(1), qr = as.double(X_vlm_save), qraux = double(pbig), qpivot = as.integer(1:pbig), as.double(Umat), as.double(unlist(Blist)), as.integer(ncbvec), as.integer(smooth.frame$smap), trivc = as.integer(trivc), levmat = double(sum(smooth.frame$neffec * ncbvec)), # 20130427; bcoefficients = double(sum(smooth.frame$nknots * ncbvec)), knots = as.double(unlist(smooth.frame$knots)), bindex = as.integer(smooth.frame$bindex), lindex = as.integer(smooth.frame$lindex), nknots = as.integer(smooth.frame$nknots), kindex = as.integer(smooth.frame$kindex)) # End of dotC if (exists("flush.console")) flush.console() if (smooth.frame$first) { } dim(fit$qr) <- dim(X_vlm_save) dimnames(fit$qr) <- dimnames(X_vlm_save) dim(fit$y) <- dim(zedd) dimnames(fit$y) <- dimnames(zedd) dim(fit$smomat) <- dim(smomat) dimnames(fit$smomat) <- dimnames(smomat) # Needed for vgam.nlchisq if (se.fit) { dim(fit$varmat) <- dim(smomat) dimnames(fit$varmat) <- dimnames(smomat) } if (fit$npetc[14] != 0 || fit$npetc[17] != 0) { stop("something went wrong in the C function 'vbfa'") } fit$etamat <- if (M > 1) matrix(fit$etamat, n_lm, M, byrow = TRUE) else c(fit$etamat) # May no longer be a matrix nits <- fit$npetc[5] qrank <- fit$npetc[7] if (smooth.frame$first) { smooth.frame$try.sparv <- fit$sparv } if ((nits == bf.maxit) & bf.maxit > 1) { warning("'s.vam' convergence not obtained in ", bf.maxit, " iterations") } R <- fit$qr[1:pbig, 1:pbig] R[lower.tri(R)] <- 0 Bspline <- vector("list", length(nwhich)) names(Bspline) <- nwhich for (ii in 1:length(nwhich)) { b_coefs <- fit$bcoeff[(smooth.frame$bindex[ii]): (smooth.frame$bindex[ii+1]-1)] b_coefs <- matrix(b_coefs, ncol = ncolBlist[nwhich[ii]]) Bspline[[ii]] <- new("vsmooth.spline.fit", "Bcoefficients" = 
b_coefs, "xmax" = smooth.frame$xmax[ii], "xmin" = smooth.frame$xmin[ii], "knots" = as.vector(smooth.frame$knots[[ii]])) } Leverages <- vector("list", length(nwhich)) names(Leverages) <- nwhich for (ii in 1:length(nwhich)) { levvec <- fit$levmat[(smooth.frame$lindex[ii]): (smooth.frame$lindex[ii+1]-1)] levmat <- matrix(levvec, nrow = smooth.frame$neffec[ii], ncol = ncolBlist[nwhich[ii]]) Leverages[[ii]] <- levmat } nl.df <- fit$dfvec - 1 # Used to be -1; Decrement/increment ? retlist <- list( Bspline = Bspline, coefficients = fit$beta, df.residual = n_lm * M - qrank - sum(nl.df), # Decrement/increment ? fitted.values = fit$etamat, Leverages = Leverages, nl.df = nl.df, qr = list(qr = fit$qr, rank = qrank, qraux = fit$qraux, pivot = fit$qpivot), R = R, rank = qrank, residuals = fit$y - fit$etamat, rss = fit$doubvec[2], smomat = fit$smomat, sparv = fit$sparv, s.xargument = unlist(smooth.frame$s.xargument)) names(retlist$coefficients) <- smooth.frame$xnrow_X_vlm names(retlist$sparv) <- names(retlist$nl.df) <- smooth.frame$ndfspar if (se.fit) { retlist <- c(retlist, list(varmat = fit$varmat)) } c(list(smooth.frame = smooth.frame), retlist) } VGAM/R/s.q0000644000176000001440000000144012136651110011654 0ustar ripleyusers# These functions are # Copyright (C) 1998-2013 T.W. Yee, University of Auckland. # All rights reserved. s <- function(x, df = 4, spar = 0, ...) { xs <- substitute(x) ans <- as.character(xs) if (length(ans) > 1) stop("argument 'x' must be of length one") call <- deparse(sys.call()) if (ncol(as.matrix(x)) > 1) stop("argument 'x' must be a vector") if (!is.null(levels(x))) { x <- if (is.ordered(x)) { as.vector(x) } else stop("unordered factors cannot be used as smoothing variables") } attr(x, "spar") <- spar attr(x, "df") <- df attr(x, "call") <- call attr(x, "class") <- "smooth" attr(x, "s.xargument") <- ans # Needed for prediction and constraints a <- is.na(x) if (any(a)) attr(x, "NAs") <- seq(along = x)[a] x } VGAM/R/rrvglm.fit.q0000644000176000001440000004712612136651110013517 0ustar ripleyusers# These functions are # Copyright (C) 1998-2013 T.W. Yee, University of Auckland. # All rights reserved. rrvglm.fit <- function(x, y, w = rep(1, length(x[, 1])), etastart = NULL, mustart = NULL, coefstart = NULL, offset = 0, family, control=rrvglm.control(...), criterion = "coefficients", qr.arg = FALSE, constraints = NULL, extra = NULL, Terms = Terms, function.name = "rrvglm", ...) 
{ specialCM <- NULL post <- list() check.rank <- TRUE # !control$Quadratic nonparametric <- FALSE epsilon <- control$epsilon maxit <- control$maxit save.weight <- control$save.weight trace <- control$trace orig.stepsize <- control$stepsize minimize.criterion <- control$min.criterion fv <- one.more <- rrr.expression <- modelno <- NULL RRR.expression <- paste("rrr", control$Algorithm, "expression", sep = ".") n <- dim(x)[1] new.s.call <- expression({ if (c.list$one.more) { fv <- c.list$fit new.coeffs <- c.list$coeff if (length(family@middle)) eval(family@middle) eta <- fv + offset mu <- family@linkinv(eta, extra) if (length(family@middle2)) eval(family@middle2) old.crit <- new.crit new.crit <- switch(criterion, coefficients = new.coeffs, tfun(mu = mu, y = y, w = w, res = FALSE, eta = eta, extra)) if (trace && orig.stepsize == 1) { cat(if(control$Quadratic) "QRR-VGLM" else "RR-VGLM", " linear loop ", iter, ": ", criterion, "= ") UUUU <- switch(criterion, coefficients= format(new.crit, dig=round(2-log10(epsilon))), format(round(new.crit, 4))) switch(criterion, coefficients = {if(length(new.crit) > 2) cat("\n"); cat(UUUU, fill = TRUE, sep = ", ")}, cat(UUUU, fill = TRUE, sep = ", ")) } { take.half.step <- (control$half.stepsizing && length(old.coeffs)) && !control$Quadratic && ((orig.stepsize != 1) || (criterion != "coefficients" && (if(minimize.criterion) new.crit > old.crit else new.crit < old.crit))) if (!is.logical(take.half.step)) take.half.step <- TRUE if (take.half.step) { stepsize <- 2 * min(orig.stepsize, 2*stepsize) new.coeffs.save <- new.coeffs if (trace) cat("Taking a modified step") repeat { if (trace) { cat(".") flush.console() } stepsize <- stepsize / 2 if (too.small <- stepsize < 0.001) break new.coeffs <- (1-stepsize)*old.coeffs + stepsize*new.coeffs.save if (length(family@middle)) eval(family@middle) fv <- X_vlm_save %*% new.coeffs if (M > 1) fv <- matrix(fv, n, M, byrow = TRUE) eta <- fv + offset mu <- family@linkinv(eta, extra) if (length(family@middle2)) eval(family@middle2) new.crit <- switch(criterion, coefficients = new.coeffs, tfun(mu = mu,y = y,w = w,res = FALSE,eta = eta,extra)) if ((criterion == "coefficients") || ( minimize.criterion && new.crit < old.crit) || (!minimize.criterion && new.crit > old.crit)) break } if (trace) cat("\n") if (too.small) { warning("iterations terminated because ", "half-step sizes are very small") one.more <- FALSE } else { if (trace) { cat(if(control$Quadratic) "QRR-VGLM" else "RR-VGLM", " linear loop ", iter, ": ", criterion, "= ") UUUU <- switch(criterion, coefficients = format(new.crit, dig = round(2-log10(epsilon))), format(round(new.crit, 4))) switch(criterion, coefficients = {if(length(new.crit) > 2) cat("\n"); cat(UUUU, fill = TRUE, sep = ", ")}, cat(UUUU, fill = TRUE, sep = ", ")) } one.more <- eval(control$convergence) } } else { one.more <- eval(control$convergence) } } flush.console() if (one.more) { iter <- iter + 1 deriv.mu <- eval(family@deriv) wz <- eval(family@weight) if (control$checkwz) wz <- checkwz(wz, M = M, trace = trace, wzepsilon = control$wzepsilon) wz <- matrix(wz, nrow = n) U <- vchol(wz, M = M, n = n, silent=!trace) tvfor <- vforsub(U, as.matrix(deriv.mu), M = M, n = n) z <- eta + vbacksub(U, tvfor, M, n) - offset # Contains \bI \bnu rrr.expression <- get(RRR.expression) eval(rrr.expression) c.list$z <- z # contains \bI_{Rank} \bnu c.list$U <- U if (copy_X_vlm) c.list$X_vlm <- X_vlm_save } c.list$one.more <- one.more c.list$coeff <- runif(length(new.coeffs)) # 12/3/03; twist needed! 
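# Keep the accepted coefficients; if the next step degrades the criterion,
# the half-stepping code above blends old.coeffs with the new candidate.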
old.coeffs <- new.coeffs } c.list }) # end of new.s.call copy_X_vlm <- FALSE # May be overwritten in @initialize stepsize <- orig.stepsize old.coeffs <- coefstart intercept.only <- ncol(x) == 1 && dimnames(x)[[2]] == "(Intercept)" y.names <- predictors.names <- NULL # May be overwritten in @initialize n.save <- n Rank <- control$Rank rrcontrol <- control # if (length(slot(family, "initialize"))) eval(slot(family, "initialize")) # Initialize mu & M (and optionally w) eval(rrr.init.expression) if (length(etastart)) { eta <- etastart mu <- if (length(mustart)) mustart else if (length(body(slot(family, "linkinv")))) slot(family, "linkinv")(eta, extra) else warning("argument 'etastart' assigned a value ", "but there is no 'linkinv' slot to use it") } if (length(mustart)) { mu <- mustart if (length(body(slot(family, "linkfun")))) { eta <- slot(family, "linkfun")(mu, extra) } else { warning("argument 'mustart' assigned a value ", "but there is no 'link' slot to use it") } } M <- if (is.matrix(eta)) ncol(eta) else 1 if (is.character(rrcontrol$Dzero)) { index <- match(rrcontrol$Dzero, dimnames(as.matrix(y))[[2]]) if (any(is.na(index))) stop("Dzero argument didn't fully match y-names") if (length(index) == M) stop("all linear predictors are linear in the ", "latent variable(s); so set 'Quadratic = FALSE'") rrcontrol$Dzero <- control$Dzero <- index } if (length(family@constraints)) eval(family@constraints) special.matrix <- matrix(-34956.125, M, M) # An unlikely used matrix just.testing <- cm.vgam(special.matrix, x, rrcontrol$noRRR, constraints) findex <- trivial.constraints(just.testing, special.matrix) if (is.null(just.testing)) findex <- NULL # 20100617 tc1 <- trivial.constraints(constraints) if (!is.null(findex) && !control$Quadratic && sum(!tc1)) { for(ii in names(tc1)) if (!tc1[ii] && !any(ii == names(findex)[findex == 1])) warning("'", ii, "' is a non-trivial constraint that ", "will be overwritten by reduced-rank regression") } if (!is.null(findex) && all(findex == 1)) stop("use vglm(), not rrvglm()!") colx1.index <- names.colx1.index <- NULL dx2 <- dimnames(x)[[2]] if (sum(findex)) { asx <- attr(x, "assign") for(ii in names(findex)) if (findex[ii]) { names.colx1.index <- c(names.colx1.index, dx2[asx[[ii]]]) colx1.index <- c(colx1.index, asx[[ii]]) } names(colx1.index) <- names.colx1.index } rrcontrol$colx1.index <- control$colx1.index <- colx1.index # Save it on the object colx2.index <- 1:ncol(x) names(colx2.index) <- dx2 if (length(colx1.index)) colx2.index <- colx2.index[-colx1.index] p1 <- length(colx1.index); p2 <- length(colx2.index) rrcontrol$colx2.index <- control$colx2.index <- colx2.index # Save it on the object Index.corner <- control$Index.corner Amat <- if (length(rrcontrol$Ainit)) rrcontrol$Ainit else matrix(rnorm(M * Rank, sd = rrcontrol$SD.Cinit), M, Rank) Cmat <- if (length(rrcontrol$Cinit)) rrcontrol$Cinit else { if (!rrcontrol$Use.Init.Poisson.QO) { matrix(rnorm(p2 * Rank, sd = rrcontrol$SD.Cinit), p2, Rank) } else .Init.Poisson.QO(ymat = as.matrix(y), X1 = if (length(colx1.index)) x[, colx1.index, drop = FALSE] else NULL, X2 = x[, colx2.index, drop = FALSE], Rank = rrcontrol$Rank, trace = rrcontrol$trace, max.ncol.etamat = rrcontrol$Etamat.colmax, Crow1positive = rrcontrol$Crow1positive, isdlv = rrcontrol$isdlv) } if (control$Corner) Amat[control$Index.corner,] <- diag(Rank) if (length(control$szero)) Amat[control$szero,] <- 0 rrcontrol$Ainit <- control$Ainit <- Amat # Good for valt() rrcontrol$Cinit <- control$Cinit <- Cmat # Good for valt() Blist <- 
process.constraints(constraints, x, M, specialCM = specialCM) nice31 <- control$Quadratic && (!control$EqualTol || control$ITolerances) && all(trivial.constraints(Blist) == 1) Blist <- Blist.save <- replace.constraints(Blist, Amat, colx2.index) ncolBlist <- unlist(lapply(Blist, ncol)) dimB <- sum(ncolBlist) X_vlm_save <- if (control$Quadratic) { tmp500 <- lm2qrrvlm.model.matrix(x = x, Blist = Blist, C = Cmat, control = control) xsmall.qrr <- tmp500$new.lv.model.matrix B.list <- tmp500$constraints # Doesn't change or contain \bI_{Rank} \bnu if (FALSE && modelno == 3) { B.list[[1]] <- (B.list[[1]])[, c(TRUE, FALSE), drop = FALSE] # Amat B.list[[2]] <- (B.list[[2]])[, c(TRUE, FALSE), drop = FALSE] # D } lv.mat <- tmp500$lv.mat if (length(tmp500$offset)) { offset <- tmp500$offset } lm2vlm.model.matrix(xsmall.qrr, B.list, xij = control$xij) } else { lv.mat <- x[,colx2.index,drop = FALSE] %*% Cmat lm2vlm.model.matrix(x, Blist, xij=control$xij) } if (length(coefstart)) { eta <- if (ncol(X_vlm_save)>1) X_vlm_save %*% coefstart + offset else X_vlm_save * coefstart + offset eta <- if (M > 1) matrix(eta, ncol = M, byrow = TRUE) else c(eta) mu <- family@linkinv(eta, extra) } if (criterion != "coefficients") { tfun <- slot(family, criterion) # family[[criterion]] } iter <- 1 new.crit <- switch(criterion, coefficients = 1, tfun(mu = mu, y = y, w = w, res = FALSE, eta = eta, extra)) old.crit <- if (minimize.criterion) 10*new.crit+10 else -10*new.crit-10 deriv.mu <- eval(family@deriv) wz <- eval(family@weight) if (control$checkwz) wz <- checkwz(wz, M = M, trace = trace, wzepsilon = control$wzepsilon) U <- vchol(wz, M = M, n = n, silent = !trace) tvfor <- vforsub(U, as.matrix(deriv.mu), M = M, n = n) z <- eta + vbacksub(U, tvfor, M = M, n = n) - offset c.list <- list(z = as.double(z), fit = as.double(t(eta)), one.more = TRUE, coeff = as.double(rep(1,ncol(X_vlm_save))), U = as.double(U), copy_X_vlm = copy_X_vlm, X_vlm = if (copy_X_vlm) as.double(X_vlm_save) else double(3)) dX_vlm <- as.integer(dim(X_vlm_save)) nrow_X_vlm <- dX_vlm[[1]] ncol_X_vlm <- dX_vlm[[2]] if (nrow_X_vlm < ncol_X_vlm) stop(ncol_X_vlm, " parameters but only ", nrow_X_vlm, " observations") { bf.call <- expression(vlm.wfit(xmat=X_vlm_save, zedd, Blist = if (control$Quadratic) B.list else Blist, ncolx=ncol(x), U=U, Eta.range = control$Eta.range, matrix.out = if (control$Quadratic) FALSE else TRUE, is.vlmX = TRUE, qr = qr.arg, xij = control$xij)) while(c.list$one.more) { if (control$Quadratic) { zedd <- as.matrix(z) if (control$Corner) zedd[,Index.corner] <- zedd[,Index.corner] - lv.mat } else { zedd <- z } if (!nice31) tfit <- eval(bf.call) # tfit$fitted.values is n x M if (!control$Quadratic) { Cmat <- tfit$mat.coef[colx2.index,,drop = FALSE] %*% Amat %*% solve(t(Amat) %*% Amat) rrcontrol$Ainit <- control$Ainit <- Amat # Good for valt() rrcontrol$Cinit <- control$Cinit <- Cmat # Good for valt() } if (!nice31) c.list$coeff <- tfit$coefficients if (control$Quadratic) { if (control$Corner) tfit$fitted.values[,Index.corner] <- tfit$fitted.values[,Index.corner] + lv.mat } if (!nice31) tfit$predictors <- tfit$fitted.values # Doesn't contain the offset if (!nice31) c.list$fit <- tfit$fitted.values c.list <- eval(new.s.call) NULL } } if (maxit > 1 && iter >= maxit) warning("convergence not obtained in ", maxit, " iterations") dnrow_X_vlm <- labels(X_vlm_save) xnrow_X_vlm <- dnrow_X_vlm[[2]] ynrow_X_vlm <- dnrow_X_vlm[[1]] if (length(family@fini)) eval(family@fini) if (M > 1 && !nice31) tfit$predictors <- matrix(tfit$predictors, n, M) asgn <- 
attr(X_vlm_save, "assign") if (nice31) { coefs <- rep(0, len = length(xnrow_X_vlm)) rank <- ncol_X_vlm } else { coefs <- tfit$coefficients names(coefs) <- xnrow_X_vlm rank <- tfit$rank } cnames <- xnrow_X_vlm if (check.rank && rank < ncol_X_vlm) stop("rrvglm only handles full-rank models (currently)") if (nice31) { R <- matrix(as.numeric(NA), 5, 5) } else { R <- tfit$qr$qr[1:ncol_X_vlm, 1:ncol_X_vlm, drop = FALSE] R[lower.tri(R)] <- 0 attributes(R) <- list(dim = c(ncol_X_vlm, ncol_X_vlm), dimnames = list(cnames, cnames), rank = rank) } if (nice31) { effects <- rep(0, len = 77) } else { effects <- tfit$effects neff <- rep("", nrow_X_vlm) neff[seq(ncol_X_vlm)] <- cnames names(effects) <- neff dim(tfit$predictors) <- c(n, M) } dn <- labels(x) yn <- dn[[1]] xn <- dn[[2]] if (nice31) { residuals <- z - fv if (M == 1) { residuals <- as.vector(residuals) names(residuals) <- yn } else { dimnames(residuals) <- list(yn, predictors.names) } } else { residuals <- z - tfit$predictors if (M == 1) { tfit$predictors <- as.vector(tfit$predictors) residuals <- as.vector(residuals) names(residuals) <- names(tfit$predictors) <- yn } else { dimnames(residuals) <- dimnames(tfit$predictors) <- list(yn, predictors.names) } } if (is.matrix(mu)) { if (length(dimnames(y)[[2]])) { y.names <- dimnames(y)[[2]] } if (length(dimnames(mu)[[2]])) { y.names <- dimnames(mu)[[2]] } dimnames(mu) <- list(yn, y.names) } else { names(mu) <- names(fv) } elts.tildeA <- (M - Rank - length(control$szero)) * Rank no.dpar <- 0 df.residual <- nrow_X_vlm - rank - (if(control$Quadratic) Rank*p2 else 0) - no.dpar - elts.tildeA fit <- list(assign = asgn, coefficients = coefs, constraints = if (control$Quadratic) B.list else Blist, df.residual = df.residual, df.total = n*M, effects = effects, fitted.values = mu, offset = offset, rank = rank, residuals = residuals, R = R, terms = Terms) # terms: This used to be done in vglm() if (qr.arg && !nice31) { fit$qr <- tfit$qr dimnames(fit$qr$qr) <- dnrow_X_vlm } if (M == 1) { wz <- as.vector(wz) # Convert wz into a vector } # else fit$weights <- if (save.weight) wz else NULL misc <- list( colnames.x = xn, colnames.X_vlm = xnrow_X_vlm, criterion = criterion, function.name = function.name, intercept.only=intercept.only, predictors.names = predictors.names, M = M, n = n, nonparametric = nonparametric, nrow_X_vlm = nrow_X_vlm, orig.assign = attr(x, "assign"), p = ncol(x), ncol_X_vlm = ncol_X_vlm, ynames = dimnames(y)[[2]]) if (one.more) misc$rrr.expression <- rrr.expression # crit.list <- list() if (criterion != "coefficients") crit.list[[criterion]] <- fit[[criterion]] <- new.crit for(ii in names(.min.criterion.VGAM)) { if (ii != criterion && any(slotNames(family) == ii) && length(body(slot(family, ii)))) { fit[[ii]] <- crit.list[[ii]] <- (slot(family, ii))(mu = mu, y = y, w = w, res = FALSE, eta = eta, extra) } } if (w[1] != 1 || any(w != w[1])) fit$prior.weights <- w if (length(family@last)) eval(family@last) structure(c(fit, list(predictors = if (nice31) matrix(eta, n, M) else tfit$predictors, contrasts = attr(x, "contrasts"), control = control, crit.list = crit.list, extra = extra, family = family, iter = iter, misc = misc, post = post, rss = if (nice31) 000 else tfit$rss, x = x, y = y)), vclass = family@vfamily) } VGAM/R/rrvglm.control.q0000644000176000001440000001273512136651110014413 0ustar ripleyusers# These functions are # Copyright (C) 1998-2013 T.W. Yee, University of Auckland. # All rights reserved. 
rrvglm.control <- function(Rank = 1, Algorithm = c("alternating", "derivative"), Corner = TRUE, Uncorrelated.lv = FALSE, Wmat = NULL, Svd.arg = FALSE, Index.corner = if (length(szero)) head((1:1000)[-szero], Rank) else 1:Rank, Ainit = NULL, Alpha = 0.5, Bestof = 1, Cinit = NULL, Etamat.colmax = 10, SD.Ainit = 0.02, SD.Cinit = 0.02, szero = NULL, noRRR = ~ 1, Norrr = NA, trace = FALSE, Use.Init.Poisson.QO = FALSE, checkwz = TRUE, wzepsilon = .Machine$double.eps^0.75, ...) { if (length(Norrr) != 1 || !is.na(Norrr)) { warning("argument 'Norrr' has been replaced by 'noRRR'. ", "Assigning the latter; using 'Norrr' will become ", "an error in a future version of VGAM.") noRRR <- Norrr } if (mode(Algorithm) != "character" && mode(Algorithm) != "name") Algorithm <- as.character(substitute(Algorithm)) Algorithm <- match.arg(Algorithm, c("alternating", "derivative"))[1] if (Svd.arg) Corner <- FALSE if (!is.Numeric(Rank, positive = TRUE, allowable.length = 1, integer.valued = TRUE)) stop("bad input for 'Rank'") if (!is.Numeric(Alpha, positive = TRUE, allowable.length = 1) || Alpha > 1) stop("bad input for 'Alpha'") if (!is.Numeric(Bestof, positive = TRUE, allowable.length = 1, integer.valued = TRUE)) stop("bad input for 'Bestof'") if (!is.Numeric(SD.Ainit, positive = TRUE, allowable.length = 1)) stop("bad input for 'SD.Ainit'") if (!is.Numeric(SD.Cinit, positive = TRUE, allowable.length = 1)) stop("bad input for 'SD.Cinit'") if (!is.Numeric(Etamat.colmax, positive = TRUE, allowable.length = 1) || Etamat.colmax < Rank) stop("bad input for 'Etamat.colmax'") if (length(szero) && (any(round(szero) != szero) || any(szero < 1))) stop("bad input for the argument 'szero'") Quadratic <- FALSE if (!Quadratic && Algorithm == "derivative" && !Corner) { dd <- "derivative algorithm only supports corner constraints" if (length(Wmat) || Uncorrelated.lv || Svd.arg) stop(dd) warning(dd) Corner <- TRUE } if (Quadratic && Algorithm != "derivative") stop("Quadratic model can only be fitted using the derivative algorithm") if (Corner && (Svd.arg || Uncorrelated.lv || length(Wmat))) stop("cannot have 'Corner = TRUE' and either 'Svd.arg = TRUE' or ", "'Uncorrelated.lv = TRUE' or a non-NULL 'Wmat'") if (Corner && length(intersect(szero, Index.corner))) stop("'szero' and 'Index.corner' cannot have ", "values in common") if (length(Index.corner) != Rank) stop("length(Index.corner) != Rank") if (!is.logical(checkwz) || length(checkwz) != 1) stop("bad input for 'checkwz'") if (!is.Numeric(wzepsilon, allowable.length = 1, positive = TRUE)) stop("bad input for 'wzepsilon'") if (!is.null(noRRR) && !inherits(noRRR, "formula")) stop("argument 'noRRR' should be a formula or a NULL") ans <- c(vglm.control(trace = trace, ...), switch(Algorithm, "alternating" = valt.control(...), "derivative" = rrvglm.optim.control(...)), list(Rank = Rank, Ainit = Ainit, Algorithm = Algorithm, Alpha = Alpha, Bestof = Bestof, Cinit = Cinit, Index.corner = Index.corner, noRRR = noRRR, Corner = Corner, Uncorrelated.lv = Uncorrelated.lv, Wmat = Wmat, OptimizeWrtC = TRUE, # OptimizeWrtC, Quadratic = FALSE, # A constant now, here. SD.Ainit = SD.Ainit, SD.Cinit = SD.Cinit, Etamat.colmax = Etamat.colmax, szero = szero, Svd.arg = Svd.arg, Use.Init.Poisson.QO = Use.Init.Poisson.QO), checkwz = checkwz, wzepsilon = wzepsilon, if (Quadratic) qrrvglm.control(Rank = Rank, ...)
else NULL) if (Quadratic && ans$ITolerances) { ans$Svd.arg <- FALSE ans$Uncorrelated.lv <- FALSE ans$Corner <- FALSE } ans$half.stepsizing <- FALSE # Turn it off ans } setClass("summary.rrvglm", representation("rrvglm", coef3 = "matrix", cov.unscaled = "matrix", correlation = "matrix", df = "numeric", pearson.resid = "matrix", sigma = "numeric")) setMethod("summary", "rrvglm", function(object, ...) summary.rrvglm(object, ...)) show.summary.rrvglm <- function(x, digits = NULL, quote = TRUE, prefix = "") { show.summary.vglm(x, digits = digits, quote = quote, prefix = prefix) invisible(x) NULL } setMethod("show", "summary.rrvglm", function(object) show.summary.rrvglm(x = object)) VGAM/R/rrvglm.R0000644000176000001440000001466412136651110012700 0ustar ripleyusers# These functions are # Copyright (C) 1998-2013 T.W. Yee, University of Auckland. # All rights reserved. rrvglm <- function(formula, family, data=list(), weights = NULL, subset = NULL, na.action=na.fail, etastart = NULL, mustart = NULL, coefstart = NULL, control=rrvglm.control(...), offset = NULL, method="rrvglm.fit", model = FALSE, x.arg = TRUE, y.arg = TRUE, contrasts = NULL, constraints = NULL, extra = NULL, qr.arg = FALSE, smart = TRUE, ...) { dataname <- as.character(substitute(data)) # "list" if no data= function.name <- "rrvglm" ocall <- match.call() if (smart) setup.smart("write") mt <- terms(formula, data = data) if (missing(data)) data <- environment(formula) mf <- match.call(expand.dots = FALSE) mf$family <- mf$method <- mf$model <- mf$x.arg <- mf$y.arg <- mf$control <- mf$contrasts <- mf$constraints <- mf$extra <- mf$qr.arg <- NULL mf$coefstart <- mf$etastart <- mf$... <- NULL mf$smart <- NULL mf$drop.unused.levels <- TRUE mf[[1]] <- as.name("model.frame") mf <- eval(mf, parent.frame()) if (method == "model.frame") return(mf) na.act <- attr(mf, "na.action") xvars <- as.character(attr(mt, "variables"))[-1] if ((yvar <- attr(mt, "response")) > 0) xvars <- xvars[-yvar] xlev = .getXlevels(mt, mf) y <- model.response(mf, "any") x <- if (!is.empty.model(mt)) model.matrix(mt, mf, contrasts) else matrix(, NROW(y), 0) attr(x, "assign") = attrassigndefault(x, mt) offset <- model.offset(mf) if (is.null(offset)) offset <- 0 # yyy ??? w <- model.weights(mf) if (!length(w)) w <- rep(1, nrow(mf)) else if (ncol(as.matrix(w))==1 && any(w < 0)) stop("negative weights not allowed") if (is.character(family)) family <- get(family) if (is.function(family)) family <- family() if (!inherits(family, "vglmff")) { stop("'family=", family, "' is not a VGAM family function") } eval(vcontrol.expression) if (!is.null(family@first)) eval(family@first) # 10/12/04: testing for an empty (function) slot not elegant: if (control$Quadratic && control$FastAlgorithm && length(as.list(family@deviance)) <= 1) stop("The fast algorithm requires the family ", "function to have a deviance slot") rrvglm.fitter <- get(method) fit <- rrvglm.fitter(x=x, y=y, w=w, offset=offset, etastart=etastart, mustart=mustart, coefstart=coefstart, family=family, control=control, constraints=constraints, criterion=control$criterion, extra=extra, qr.arg = qr.arg, Terms=mt, function.name=function.name, ...) 
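# When control$Bestof > 1, the fit is repeated below from different random
# starts and the model with the smallest deviance is retained.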
if (control$Bestof > 1) { deviance.Bestof = rep(fit$crit.list$deviance, len= control$Bestof) for(tries in 2:control$Bestof) { if (control$trace && (control$Bestof>1)) cat(paste("\n========================= Fitting model", tries, "=========================\n\n")) it <- rrvglm.fitter(x=x, y=y, w=w, offset=offset, etastart=etastart, mustart=mustart, coefstart=coefstart, family=family, control=control, constraints=constraints, criterion=control$criterion, extra=extra, qr.arg = qr.arg, Terms=mt, function.name=function.name, ...) deviance.Bestof[tries] = it$crit.list$deviance if (min(deviance.Bestof[1:(tries-1)]) > deviance.Bestof[tries]) fit = it } fit$misc$deviance.Bestof = deviance.Bestof } fit$misc$dataname <- dataname if (smart) { fit$smart.prediction <- get.smart.prediction() wrapup.smart() } answer <- new(if(control$Quadratic) "qrrvglm" else "rrvglm", "assign" = attr(x, "assign"), "call" = ocall, "coefficients" = fit$coefficients, "constraints" = fit$constraints, "criterion" = fit$crit.list, "df.residual" = fit$df.residual, "df.total" = fit$df.total, "dispersion" = 1, "effects" = fit$effects, "family" = fit$family, "misc" = fit$misc, "model" = if (model) mf else data.frame(), "R" = fit$R, "rank" = fit$rank, "residuals" = as.matrix(fit$residuals), "rss" = fit$rss, "smart.prediction" = as.list(fit$smart.prediction), "terms" = list(terms=mt)) if (!smart) answer@smart.prediction <- list(smart.arg = FALSE) if (qr.arg) { class(fit$qr) = "list" slot(answer, "qr") = fit$qr } if (length(attr(x, "contrasts"))) slot(answer, "contrasts") = attr(x, "contrasts") if (length(fit$fitted.values)) slot(answer, "fitted.values") = as.matrix(fit$fitted.values) slot(answer, "na.action") = if (length(na.act)) list(na.act) else list() if (length(offset)) slot(answer, "offset") = as.matrix(offset) if (length(fit$weights)) slot(answer, "weights") = as.matrix(fit$weights) if (x.arg) slot(answer, "x") = fit$x # The 'small' design matrix if (length(xlev)) slot(answer, "xlevels") = xlev if (y.arg) slot(answer, "y") = as.matrix(fit$y) answer@misc$formula = formula slot(answer, "control") = fit$control slot(answer, "extra") = if (length(fit$extra)) { if (is.list(fit$extra)) fit$extra else { warning("\"extra\" is not a list, therefore placing \"extra\" into a list") list(fit$extra) } } else list() # R-1.5.0 slot(answer, "iter") = fit$iter fit$predictors = as.matrix(fit$predictors) # Must be a matrix dimnames(fit$predictors) = list(dimnames(fit$predictors)[[1]], fit$misc$predictors.names) slot(answer, "predictors") = fit$predictors if (length(fit$prior.weights)) slot(answer, "prior.weights") = as.matrix(fit$prior.weights) answer } attr(rrvglm, "smart") <- TRUE VGAM/R/residuals.vlm.q0000644000176000001440000001717112136651110014212 0ustar ripleyusers# These functions are # Copyright (C) 1998-2013 T.W. Yee, University of Auckland. # All rights reserved. 
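# Illustrative use of the residuals methods defined below; sketch only, not
# run. Any VGAM family would do in place of acat; pneumo ships with VGAM.
if (FALSE) {
  fit <- vglm(cbind(normal, mild, severe) ~ let, acat, data = pneumo)
  head(resid(fit, type = "working"))   # IRLS working residuals
  head(resid(fit, type = "pearson"))   # Scaled by the working weights
  head(resid(fit, type = "response"))  # y - fitted(fit)
}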
residualsvlm <- function(object, type = c("response", "deviance", "pearson", "working")) { if (mode(type) != "character" && mode(type) != "name") type <- as.character(substitute(type)) type <- match.arg(type, c("response", "deviance", "pearson", "working"))[1] na.act <- object@na.action object@na.action <- list() pooled.weight <- object@misc$pooled.weight if (is.null(pooled.weight)) pooled.weight <- FALSE answer <- switch(type, working = if (pooled.weight) NULL else object@residuals, pearson = { if (pooled.weight) return(NULL) n <- object@misc$n M <- object@misc$M wz <- weights(object, type = "w") # $weights if (!length(wz)) wz <- if (M == 1) rep(1, n) else matrix(1, n, M) if (M == 1) { if (any(wz < 0)) warning(paste("some weights are negative;", "their residuals will be assigned NA")) ans <- sqrt(c(wz)) * c(object@residuals) names(ans) <- names(object@residuals) ans } else { wz.sqrt <- matrix.power(wz, M = M, power = 0.5, fast = TRUE) ans <- mux22(wz.sqrt, object@residuals, M = M, upper = FALSE) dim(ans) <- c(M, n) ans <- t(ans) dimnames(ans) <- dimnames(object@residuals) # n x M ans } }, deviance = { M <- object@misc$M if (M > 1) return(NULL) ans <- residualsvlm(object, type = "pearson") ans }, response = object@residuals ) if (length(answer) && length(na.act)) { napredict(na.act[[1]], answer) } else { answer } } residualsvglm <- function(object, type = c("working", "pearson", "response", "deviance", "ldot"), matrix.arg = TRUE) { if (mode(type) != "character" && mode(type) != "name") type <- as.character(substitute(type)) type <- match.arg(type, c("working", "pearson", "response", "deviance", "ldot"))[1] na.act <- object@na.action object@na.action <- list() pooled.weight <- object@misc$pooled.weight if (is.null(pooled.weight)) pooled.weight <- FALSE answer <- switch(type, working = if (pooled.weight) NULL else object@residuals, pearson = { if (pooled.weight) return(NULL) n <- object@misc$n M <- object@misc$M wz <- weights(object, type = "w") # $weights if (M == 1) { if (any(wz < 0)) warning(paste("some weights are negative;", "their residuals will be assigned NA")) ans <- sqrt(c(wz)) * c(object@residuals) names(ans) <- names(object@residuals) ans } else { wz.sqrt <- matrix.power(wz, M = M, power = 0.5, fast = TRUE) ans <- mux22(wz.sqrt, object@residuals, M = M, upper = FALSE) dim(ans) <- c(M, n) ans <- t(ans) dimnames(ans) <- dimnames(object@residuals) # n x M ans } }, deviance = { n <- object@misc$n y <- as.matrix(object@y) mu <- object@fitted.values w <- object@prior.weights if (!length(w)) w <- rep(1, n) eta <- object@predictors dev.fn <- object@family@deviance # May not 'exist' for that model if (length(body(dev.fn)) > 0) { extra <- object@extra ans <- dev.fn(mu = mu, y = y, w = w, residuals = TRUE, eta = eta, extra) if (length(ans)) { lob <- labels(object@residuals) if (is.list(lob)) { if (is.matrix(ans)) dimnames(ans) <- lob else names(ans) <- lob[[1]] } else { names(ans) <- lob } } ans } else { NULL } }, ldot = { n <- object@misc$n y <- as.matrix(object@y) mu <- object@fitted w <- object@prior.weights if (is.null(w)) w <- rep(1, n) eta <- object@predictors if (!is.null(ll.fn <- object@family@loglikelihood)) { extra <- object@extra ans <- ll.fn(mu = mu, y = y, w = w, residuals = TRUE, eta = eta, extra) if (!is.null(ans)) { ans <- c(ans) # ldot residuals can only be a vector names(ans) <- labels(object@residuals) } ans } else NULL }, response = { y <- object@y mu <- fitted(object) true.mu <- object@misc$true.mu if (is.null(true.mu)) true.mu <- TRUE ans <- if (true.mu) y - mu else NULL if
(!matrix.arg && length(ans)) { if (ncol(ans) == 1) { names.ans <- dimnames(ans)[[1]] ans <- c(ans) names(ans) <- names.ans ans } else { warning("ncol(ans) is not 1") ans } } else { ans } }) if (length(answer) && length(na.act)) { napredict(na.act[[1]], answer) } else { answer } } residualsqrrvglm <- function(object, type = c("response"), matrix.arg = TRUE) { if (mode(type) != "character" && mode(type) != "name") type <- as.character(substitute(type)) type <- match.arg(type, c("response"))[1] na.act <- object@na.action object@na.action <- list() pooled.weight <- object@misc$pooled.weight if (is.null(pooled.weight)) pooled.weight <- FALSE answer <- switch(type, working = if (pooled.weight) NULL else object@residuals, pearson = { stop("have not programmed pearson resids yet") }, deviance = { stop("have not programmed deviance resids yet") }, ldot = { stop("have not programmed ldot resids yet") }, response = { y <- object@y mu <- fitted(object) true.mu <- object@misc$true.mu if (is.null(true.mu)) true.mu <- TRUE ans <- if (true.mu) y - mu else NULL if (!matrix.arg && length(ans)) { if (ncol(ans) == 1) { names.ans <- dimnames(ans)[[1]] ans <- c(ans) names(ans) <- names.ans ans } else { warning("ncol(ans) is not 1") ans } } else ans }) if (length(answer) && length(na.act)) { napredict(na.act[[1]], answer) } else { answer } } setMethod("residuals", "vlm", function(object, ...) residualsvlm(object, ...)) setMethod("residuals", "vglm", function(object, ...) residualsvglm(object, ...)) setMethod("residuals", "vgam", function(object, ...) residualsvglm(object, ...)) setMethod("residuals", "qrrvglm", function(object, ...) residualsqrrvglm(object, ...)) setMethod("resid", "vlm", function(object, ...) residualsvlm(object, ...)) setMethod("resid", "vglm", function(object, ...) residualsvglm(object, ...)) setMethod("resid", "vgam", function(object, ...) residualsvglm(object, ...)) setMethod("resid", "qrrvglm", function(object, ...) residualsqrrvglm(object, ...)) VGAM/R/qtplot.q0000644000176000001440000006052512136651110012746 0ustar ripleyusers# These functions are # Copyright (C) 1998-2013 T.W. Yee, University of Auckland. # All rights reserved. 
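# The qtplot.lms.* helpers below invert the fitted LMS distributions; e.g. for
# lms.bcn the 100p percentile is mu * (1 + lambda*sigma*qnorm(p))^(1/lambda),
# with eta holding (lambda, mu, sigma) columnwise. A small hand check under
# assumed parameter values (not taken from any fit):
if (FALSE) {
  lambda <- 0.5; mu <- 10; sigma <- 0.1
  mu * (1 + lambda * sigma * qnorm(0.75))^(1/lambda)  # 75th percentile
}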
qtplot.lms.bcn <- function(percentiles = c(25,50,75), eta = NULL, yoffset = 0) { lp = length(percentiles) answer <- matrix(as.numeric(NA), nrow(eta), lp, dimnames = list(dimnames(eta)[[1]], paste(as.character(percentiles), "%", sep = ""))) for(ii in 1:lp) { answer[, ii] <- eta[, 2] * (1+eta[, 1] * eta[, 3] * qnorm(percentiles[ii]/100))^(1/eta[, 1]) } answer } qtplot.lms.bcg <- function(percentiles = c(25,50,75), eta = NULL, yoffset = 0) { cc <- percentiles lp = length(percentiles) answer <- matrix(as.numeric(NA), nrow(eta), lp, dimnames = list(dimnames(eta)[[1]], paste(as.character(percentiles), "%", sep = ""))) lambda <- eta[, 1] sigma <- eta[, 3] shape <- 1 / (lambda * sigma)^2 for(ii in 1:lp) { ccc <- rep(cc[ii]/100, len=nrow(eta)) ccc <- ifelse(lambda > 0, ccc, 1-ccc) answer[, ii] <- eta[, 2] * (qgamma(ccc, shape = shape)/shape)^(1/lambda) } answer } qtplot.lms.yjn2 <- qtplot.lms.yjn <- function(percentiles = c(25,50,75), eta = NULL, yoffset = 0) { cc <- percentiles lp = length(percentiles) answer <- matrix(as.numeric(NA), nrow(eta), lp, dimnames = list(dimnames(eta)[[1]], paste(as.character(percentiles), "%", sep = ""))) lambda <- eta[, 1] mu <- eta[, 2] sigma <- eta[, 3] # Link function already taken care of above for(ii in 1:lp) { ccc <- mu + sigma * qnorm(cc[ii]/100) answer[, ii] <- yeo.johnson(ccc, lambda, inverse= TRUE) - yoffset } answer } qtplot.default <- function(object, ...) { warning("no methods function. Returning the object") invisible(object) } "qtplot.vglm" <- function(object, Attach= TRUE, ...) { LL <- length(object@family@vfamily) newcall = paste("qtplot.", object@family@vfamily[LL], "(object, ...)", sep = "") newcall = parse(text = newcall)[[1]] if (Attach) { object@post$qtplot = eval(newcall) invisible(object) } else eval(newcall) } qtplot.lmscreg <- function(object, newdata = NULL, percentiles=object@misc$percentiles, plot.it= TRUE, ...) { same <- length(percentiles) == length(object@misc$percentiles) && all(percentiles==object@misc$percentiles) lp <- length(percentiles) if (same) { fitted.values <- if (!length(newdata)) object@fitted.values else { predict(object, newdata = newdata, type = "response") } fitted.values <- as.matrix(fitted.values) } else { if (!is.numeric(percentiles)) stop("'percentiles' must be specified") eta <- if (length(newdata)) predict(object, newdata = newdata, type = "link") else object@predictors if (!length(double.check.earg <- object@misc$earg)) double.check.earg <- list(theta = NULL) eta <- eta2theta(eta, link = object@misc$link, earg = double.check.earg) # lambda, mu, sigma if (!is.logical(expectiles <- object@misc$expectiles)) { expectiles <- FALSE } newcall = paste(if (expectiles) "explot." else "qtplot.", object@family@vfamily[1], "(percentiles = percentiles", ", eta = eta, yoffset=object@misc$yoffset)", sep = "") newcall = parse(text = newcall)[[1]] fitted.values = as.matrix( eval(newcall) ) dimnames(fitted.values) <- list(dimnames(eta)[[1]], paste(as.character(percentiles), "%", sep = "")) } if (plot.it) { plotqtplot.lmscreg(fitted.values = fitted.values, object = object, newdata = newdata, lp = lp, percentiles = percentiles, ...) 
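# The percentile matrix is also returned (below) so the fitted curves can be
# reused without recomputing the predictions.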
} list(fitted.values = fitted.values, percentiles = percentiles) } plotqtplot.lmscreg <- function(fitted.values, object, newdata = NULL, percentiles = object@misc$percentiles, lp = NULL, add.arg = FALSE, y = if (length(newdata)) FALSE else TRUE, spline.fit = FALSE, label = TRUE, size.label = 0.06, xlab = NULL, ylab = "", pch = par()$pch, pcex = par()$cex, pcol.arg = par()$col, xlim = NULL, ylim = NULL, llty.arg = par()$lty, lcol.arg = par()$col, llwd.arg = par()$lwd, tcol.arg = par()$col, tadj = 1, ...) { if (!length(newdata)) { X <- model.matrixvlm(object, type = "lm") if (is.matrix(X) && length(object@y) && ncol(X)==2 && dimnames(X)[[2]][1] == "(Intercept)") { xx <- X[, 2] if (is.null(xlab)) { xlab <- if (object@misc$nonparametric) as.vector(slot(object, "s.xargument")) else names(object@assign)[2] } if (!add.arg) { if (!is.numeric(xlim)) xlim <- if (label) c(min(xx), max(xx)+size.label*diff(range(xx))) else c(min(xx), max(xx)) fred <- cbind(object@y, fitted.values) if (!is.numeric(ylim)) ylim <- c(min(fred), max(fred)) matplot(x=xx, y=fred, xlab = xlab, ylab = ylab, type = "n", xlim=xlim, ylim=ylim, ...) } if (y && length(object@y)) matpoints(x=xx, y=object@y, pch = pch, cex = pcex, col = pcol.arg) } else { warning("the model does not have exactly one covariate; ", "returning 'fitted.values'") return(fitted.values) } } else { firstterm = attr(terms(object), "term.labels")[1] if (object@misc$nonparametric && length(object@s.xargument[firstterm])) firstterm <- object@s.xargument[firstterm] xx <- newdata[[firstterm]] if (!is.numeric(xx)) stop("couldn't extract the 'primary' variable from newdata") if (!add.arg) { if (is.null(xlab)) xlab <- firstterm if (!is.numeric(xlim)) xlim <- if (label) c(min(xx), max(xx)+size.label*diff(range(xx))) else c(min(xx), max(xx)) if (!is.numeric(ylim)) ylim <- c(min(fitted.values), max(fitted.values)) matplot(x=xx, y=fitted.values, xlab = xlab, ylab = ylab, type = "n", xlim=xlim, ylim=ylim, col = pcol.arg) } if (y && length(object@y)) matpoints(x=xx, y=object@y, pch = pch, cex = pcex, col = pcol.arg) } tcol.arg = rep(tcol.arg, length = lp) lcol.arg = rep(lcol.arg, length = lp) llwd.arg = rep(llwd.arg, length = lp) llty.arg = rep(llty.arg, length = lp) for(ii in 1:lp) { temp <- cbind(xx, fitted.values[, ii]) temp <- temp[sort.list(temp[, 1]),] index <- !duplicated(temp[, 1]) if (spline.fit) { lines(spline(temp[index, 1], temp[index, 2]), lty = llty.arg[ii], col = lcol.arg[ii], lwd = llwd.arg[ii]) } else { lines(temp[index, 1], temp[index, 2], lty = llty.arg[ii], col = lcol.arg[ii], lwd = llwd.arg[ii]) } if (label) text(par()$usr[2], temp[nrow(temp), 2], paste( percentiles[ii], "%", sep = ""), adj = tadj, col = tcol.arg[ii]) } invisible(fitted.values) } if (TRUE) { if (!isGeneric("qtplot")) setGeneric("qtplot", function(object, ...) standardGeneric("qtplot")) setMethod("qtplot", signature(object = "vglm"), function(object, ...) invisible(qtplot.vglm(object, ...))) setMethod("qtplot", signature(object = "vgam"), function(object, ...) invisible(qtplot.vglm(object, ...))) } "qtplot.vextremes" <- function(object, ...) { newcall = paste("qtplot.", object@family@vfamily[1], "(object = object, ...
)", sep = "") newcall = parse(text = newcall)[[1]] eval(newcall) } qtplot.egumbel <- qtplot.gumbel <- function(object, plot.it = TRUE, y.arg = TRUE, spline.fit = FALSE, label = TRUE, R = object@misc$R, percentiles = object@misc$percentiles, add.arg = FALSE, mpv = object@misc$mpv, xlab = NULL, ylab = "", main = "", pch = par()$pch, pcol.arg = par()$col, llty.arg = par()$lty, lcol.arg = par()$col, llwd.arg = par()$lwd, tcol.arg = par()$col, tadj = 1, ...) { if (!is.logical(mpv) || length(mpv) != 1) stop("bad input for 'mpv'") if (!length(percentiles) || (!is.Numeric(percentiles, positive = TRUE) || max(percentiles) >= 100)) stop("bad input for 'percentiles'") eta <- predict(object) if (is.Numeric(R)) R <- rep(R, length=nrow(eta)) if (!is.Numeric(percentiles)) stop("the 'percentiles' argument needs to be assigned a value") extra = object@extra extra$mpv = mpv # Overwrite if necessary extra$R = R extra$percentiles = percentiles fitted.values = object@family@linkinv(eta = eta, extra = extra) answer = list(fitted.values = fitted.values, percentiles = percentiles) if (!plot.it) return(answer) lp = length(percentiles) # Does not include mpv tcol.arg = rep(tcol.arg, length = lp+mpv) lcol.arg = rep(lcol.arg, length = lp+mpv) llwd.arg = rep(llwd.arg, length = lp+mpv) llty.arg = rep(llty.arg, length = lp+mpv) X <- model.matrixvlm(object, type = "lm") if (is.matrix(X) && length(object@y) && ncol(X)==2 && dimnames(X)[[2]][1] == "(Intercept)") { xx <- X[, 2] if (!length(xlab)) xlab <- if (object@misc$nonparametric && length(object@s.xargument)) object@s.xargument else names(object@assign)[2] if (!add.arg) matplot(x=xx, y=cbind(object@y, fitted.values), main = main, xlab = xlab, ylab = ylab, type = "n", ...) if (y.arg) { matpoints(x=xx, y=object@y, pch = pch, col = pcol.arg) } } else { warning("there is not a single covariate.") return(answer) } for(ii in 1:(lp+mpv)) { temp <- cbind(xx, fitted.values[, ii]) temp <- temp[sort.list(temp[, 1]),] index <- !duplicated(temp[, 1]) if (spline.fit) { lines(spline(temp[index, 1], temp[index, 2]), lty = llty.arg[ii], col = lcol.arg[ii], lwd = llwd.arg[ii]) } else { lines(temp[index, 1], temp[index, 2], lty = llty.arg[ii], col = lcol.arg[ii], lwd = llwd.arg[ii]) } if (label) { mylabel = (dimnames(answer$fitted)[[2]])[ii] text(par()$usr[2], temp[nrow(temp), 2], mylabel, adj=tadj, col=tcol.arg[ii], err = -1) } } invisible(answer) } deplot.lms.bcn <- function(object, newdata, y.arg, eta0) { if (!any(object@family@vfamily == "lms.bcn")) warning("I think you've called the wrong function") Zvec <- ((y.arg/eta0[, 2])^(eta0[, 1]) -1) / (eta0[, 1] * eta0[, 3]) dZ.dy <- ((y.arg/eta0[, 2])^(eta0[, 1]-1)) / (eta0[, 2] * eta0[, 3]) yvec <- dnorm(Zvec) * abs(dZ.dy) list(newdata=newdata, y=y.arg, density=yvec) } deplot.lms.bcg <- function(object, newdata, y.arg, eta0) { if (!any(object@family@vfamily == "lms.bcg")) warning("I think you've called the wrong function") Zvec <- (y.arg/eta0[, 2])^(eta0[, 1]) # different from lms.bcn dZ.dy <- ((y.arg/eta0[, 2])^(eta0[, 1]-1)) * eta0[, 1] / eta0[, 2] lambda <- eta0[, 1] sigma <- eta0[, 3] shape <- 1 / (lambda * sigma)^2 yvec <- dgamma(Zvec, shape=shape, rate=shape) * abs(dZ.dy) list(newdata=newdata, y=y.arg, density=yvec) } deplot.lms.yjn2 <- deplot.lms.yjn <- function(object, newdata, y.arg, eta0) { if (!length(intersect(object@family@vfamily, c("lms.yjn","lms.yjn2")))) warning("I think you've called the wrong function") lambda <- eta0[, 1] Zvec <- (yeo.johnson(y.arg+object@misc$yoffset, lambda = eta0[, 1]) - eta0[, 2]) / eta0[, 3] dZ.dy 
<- dyj.dy.yeojohnson(y.arg+object@misc$yoffset, lambda = eta0[, 1]) / eta0[, 3] yvec <- dnorm(Zvec) * abs(dZ.dy) list(newdata=newdata, y=y.arg, density=yvec) } deplot.default <- function(object, ...) { warning("no methods function. Returning the object") invisible(object) } "deplot.vglm" <- function(object, Attach= TRUE, ...) { LL <- length(object@family@vfamily) newcall = paste("deplot.", object@family@vfamily[LL], "(object, ...)", sep = "") newcall = parse(text = newcall)[[1]] if (Attach) { object@post$deplot = eval(newcall) invisible(object) } else eval(newcall) } "deplot.lmscreg" <- function(object, newdata = NULL, x0, y.arg, plot.it= TRUE, ...) { if (!length(newdata)) { newdata <- data.frame(x0=x0) var1name <- attr(terms(object), "term.labels")[1] names(newdata) <- var1name ii <- if (object@misc$nonparametric) slot(object, "s.xargument") else NULL if (length(ii) && any(logic.vec <- names(slot(object, "s.xargument")) == var1name)) names(newdata) <- ii[logic.vec] # should be the first one } eta0 <- if (length(newdata)) predict(object, newdata) else predict(object) if (!length(double.check.earg <- object@misc$earg)) double.check.earg <- list(theta = NULL) eta0 <- eta2theta(eta0, link = object@misc$link, earg = double.check.earg) # lambda, mu, sigma newcall = paste("deplot.", object@family@vfamily[1], "(object, newdata, y.arg = y.arg, eta0 = eta0)", sep = "") newcall = parse(text = newcall)[[1]] answer = eval(newcall) if (plot.it) plotdeplot.lmscreg(answer, y.arg=y.arg, ...) invisible(answer) } plotdeplot.lmscreg <- function(answer, y.arg, add.arg= FALSE, xlab = "", ylab = "density", xlim = NULL, ylim = NULL, llty.arg = par()$lty, col.arg = par()$col, llwd.arg = par()$lwd, ...) { yvec <- answer$density xx <- y.arg if (!add.arg) { if (!is.numeric(xlim)) xlim <- c(min(xx), max(xx)) if (!is.numeric(ylim)) ylim <- c(min(yvec), max(yvec)) matplot(x=xx, y=yvec, xlab = xlab, ylab = ylab, type = "n", xlim=xlim, ylim=ylim, ...) } temp <- cbind(xx, yvec) temp <- temp[sort.list(temp[, 1]),] index <- !duplicated(temp[, 1]) lines(temp[index, 1], temp[index, 2], lty = llty.arg, col = col.arg, err = -1, lwd = llwd.arg) invisible(answer) } if (TRUE) { if (!isGeneric("deplot")) setGeneric("deplot", function(object, ...) standardGeneric("deplot")) setMethod("deplot", signature(object = "vglm"), function(object, ...) invisible(deplot.vglm(object, ...))) setMethod("deplot", signature(object = "vgam"), function(object, ...) invisible(deplot.vglm(object, ...))) } if (TRUE) { if (!isGeneric("cdf")) setGeneric("cdf", function(object, ...) standardGeneric("cdf")) setMethod("cdf", signature(object = "vglm"), function(object, ...) cdf.vglm(object, ...)) setMethod("cdf", signature(object = "vgam"), function(object, ...) cdf.vglm(object, ...)) } "cdf.vglm" <- function(object, newdata = NULL, Attach= FALSE, ...) { LL <- length(object@family@vfamily) newcall = paste("cdf.", object@family@vfamily[LL], "(object, newdata, ...)", sep = "") newcall = parse(text = newcall)[[1]] if (Attach) { object@post$cdf = eval(newcall) object } else eval(newcall) } "cdf.lmscreg" <- function(object, newdata = NULL, ...) { if (!length(newdata)) return(object@post$cdf) eta0 = if (length(newdata)) predict(object, newdata) else predict(object) if (!length(double.check.earg <- object@misc$earg)) double.check.earg <- list(theta = NULL) eta0 <- eta2theta(eta0, link = object@misc$link, earg = double.check.earg) # lambda, mu, sigma y = vgety(object, newdata) # Includes yoffset newcall = paste("cdf.", object@family@vfamily[1], "(y, eta0, ... 
)", sep = "") newcall = parse(text = newcall)[[1]] eval(newcall) } cdf.lms.bcn <- function(y, eta0) { Zvec <- ((y/eta0[, 2])^(eta0[, 1]) -1) / (eta0[, 1] * eta0[, 3]) Zvec[abs(eta0[, 3]) < 1e-5] = log(y/eta0[, 2]) / eta0[, 3] # Singularity at 0 ans = c(pnorm(Zvec)) names(ans) = dimnames(eta0)[[1]] ans } cdf.lms.bcg <- function(y, eta0) { shape = 1 / (eta0[, 1] * eta0[, 3])^2 Gvec = shape * (y/eta0[, 2])^(eta0[, 1]) ans = c(pgamma(Gvec, shape = shape)) ans[eta0[, 1] < 0] = 1-ans names(ans) = dimnames(eta0)[[1]] ans } cdf.lms.yjn <- function(y, eta0) { Zvec = (yeo.johnson(y, eta0[, 1]) - eta0[, 2])/eta0[, 3] ans = c(pnorm(Zvec)) names(ans) = dimnames(eta0)[[1]] ans } vgety = function(object, newdata = NULL) { y = if (length(newdata)) { yname = dimnames(attr(terms(object@terms),"factors"))[[1]][1] newdata[[yname]] } else { object@y } if (length(object@misc$yoffset)) y = y + object@misc$yoffset y } "rlplot.vglm" <- function(object, Attach = TRUE, ...) { LL <- length(object@family@vfamily) newcall = paste("rlplot.", object@family@vfamily[LL], "(object, ...)", sep = "") newcall = parse(text = newcall)[[1]] if (Attach) { object@post$rlplot = eval(newcall) invisible(object) } else eval(newcall) } "rlplot.vextremes" <- function(object, ...) { newcall = paste("rlplot.", object@family@vfamily[1], "(object = object, ... )", sep = "") newcall = parse(text = newcall)[[1]] eval(newcall) } rlplot.egev <- rlplot.gev <- function(object, plot.it = TRUE, probability = c((1:9)/100, (1:9)/10, 0.95, 0.99, 0.995, 0.999), add.arg = FALSE, xlab = "Return Period", ylab = "Return Level", main = "Return Level Plot", pch = par()$pch, pcol.arg = par()$col, pcex = par()$cex, llty.arg = par()$lty, lcol.arg = par()$col, llwd.arg = par()$lwd, slty.arg = par()$lty, scol.arg = par()$col, slwd.arg = par()$lwd, ylim = NULL, log = TRUE, CI = TRUE, epsilon = 1.0e-05, ...) { log.arg = log rm(log) if (!is.Numeric(epsilon, allowable.length = 1) || abs(epsilon) > 0.10) stop("bad input for 'epsilon'") if (!is.Numeric(probability, positive = TRUE) || max(probability) >= 1 || length(probability) < 5) stop("bad input for 'probability'") if (!is.logical(log.arg) || length(log.arg) != 1) stop("bad input for argument 'log'") if (!is.logical(CI) || length(CI) != 1) stop("bad input for argument 'CI'") if (!object@misc$intercept.only) stop("object must be an intercept-only fit, ", "i.e., y ~ 1 is the response") extra2 = object@extra extra2$percentiles = 100 * probability # Overwrite zp = object@family@linkinv(eta = predict(object)[1:2,], extra = extra2)[1,] yp = -log(probability) ydata = sort(object@y[, 1]) n = object@misc$n if (log.arg) { if (!add.arg) plot(log(1/yp), zp, log = "", type = "n", ylim = if (length(ylim)) ylim else c(min(c(ydata, zp)), max(c(ydata, zp))), xlab = xlab, ylab = ylab, main = main, ...) points(log(-1/log((1:n)/(n+1))), ydata, col = pcol.arg, pch = pch, cex = pcex) lines(log(1/yp), zp, lwd = llwd.arg, col = lcol.arg, lty = llty.arg) } else { if (!add.arg) plot(1/yp, zp, log = "x", type = "n", ylim = if (length(ylim)) ylim else c(min(c(ydata, zp)), max(c(ydata, zp))), xlab = xlab, ylab = ylab, main = main, ...) 
points(-1/log((1:n)/(n+1)), ydata, col = pcol.arg, pch = pch, cex = pcex) lines(1/yp, zp, lwd = llwd.arg, col = lcol.arg, lty = llty.arg) } if (CI) { zpp = cbind(zp, zp, zp) # lp x 3 eta = predict(object) Links = object@misc$link earg = object@misc$earg M = object@misc$M for(ii in 1:M) { TTheta = eta[, ii] use.earg = earg[[ii]] newcall = paste(Links[ii], "(theta=TTheta, earg=use.earg, inverse = TRUE)", sep = "") newcall = parse(text = newcall)[[1]] uteta = eval(newcall) # Theta, the untransformed parameter uteta = uteta + epsilon # perturb it newcall = paste(Links[ii], "(theta=uteta, earg=use.earg)", sep = "") newcall = parse(text = newcall)[[1]] teta = eval(newcall) # The transformed parameter peta = eta peta[, ii] = teta zpp[, ii] = object@family@linkinv(eta = peta, extra = extra2)[1,] zpp[, ii] = (zpp[, ii] - zp) / epsilon # On the transformed scale } VCOV = vcovvlm(object, untransform = TRUE) v = numeric(nrow(zpp)) for(ii in 1:nrow(zpp)) v[ii] = t(as.matrix(zpp[ii,])) %*% VCOV %*% as.matrix(zpp[ii,]) if (log.arg) { lines(log(1/yp), zp - 1.96 * sqrt(v), lwd=slwd.arg, col=scol.arg, lty=slty.arg) lines(log(1/yp), zp + 1.96 * sqrt(v), lwd=slwd.arg, col=scol.arg, lty=slty.arg) } else { lines(1/yp, zp - 1.96 * sqrt(v), lwd=slwd.arg, col=scol.arg, lty=slty.arg) lines(1/yp, zp + 1.96 * sqrt(v), lwd=slwd.arg, col=scol.arg, lty=slty.arg) } } answer = list(yp = yp, zp = zp) if (CI) { answer$lower = zp - 1.96 * sqrt(v) answer$upper = zp + 1.96 * sqrt(v) } invisible(answer) } if (!isGeneric("rlplot")) setGeneric("rlplot", function(object, ...) standardGeneric("rlplot")) setMethod("rlplot", "vglm", function(object, ...) rlplot.vglm(object, ...)) explot.lms.bcn <- function(percentiles = c(25,50,75), eta = NULL, yoffset = 0) { lp = length(percentiles) answer <- matrix(as.numeric(NA), nrow(eta), lp, dimnames = list(dimnames(eta)[[1]], paste(as.character(percentiles), "%", sep = ""))) for(ii in 1:lp) { answer[, ii] <- eta[, 2] * (1+eta[, 1] * eta[, 3] * qenorm(percentiles[ii]/100))^(1/eta[, 1]) } answer } VGAM/R/qrrvglm.control.q0000644000176000001440000001260712136651110014572 0ustar ripleyusers# These functions are # Copyright (C) 1998-2013 T.W. Yee, University of Auckland. # All rights reserved. qrrvglm.control = function(Rank = 1, Bestof = if (length(Cinit)) 1 else 10, checkwz = TRUE, Cinit = NULL, Crow1positive = TRUE, epsilon = 1.0e-06, EqualTolerances = TRUE, Etamat.colmax = 10, FastAlgorithm = TRUE, GradientFunction = TRUE, Hstep = 0.001, isdlv = rep(c(2, 1, rep(0.5, length = Rank)), length = Rank), iKvector = 0.1, iShape = 0.1, ITolerances = FALSE, maxitl = 40, imethod = 1, Maxit.optim = 250, MUXfactor = rep(7, length = Rank), noRRR = ~ 1, Norrr = NA, optim.maxit = 20, Parscale = if (ITolerances) 0.001 else 1.0, SD.Cinit = 0.02, SmallNo = 5.0e-13, trace = TRUE, Use.Init.Poisson.QO = TRUE, wzepsilon = .Machine$double.eps^0.75, ...) { if (length(Norrr) != 1 || !is.na(Norrr)) { warning("argument 'Norrr' has been replaced by 'noRRR'. 
", "Assigning the latter but using 'Norrr' will become an error in ", "the next VGAM version soon.") noRRR <- Norrr } if (!is.Numeric(iShape, positive = TRUE)) stop("bad input for 'iShape'") if (!is.Numeric(iKvector, positive = TRUE)) stop("bad input for 'iKvector'") if (!is.Numeric(isdlv, positive = TRUE)) stop("bad input for 'isdlv'") if (any(isdlv < 0.2 | isdlv > 10)) stop("isdlv values must lie between 0.2 and 10") if (length(isdlv) > 1 && any(diff(isdlv) > 0)) stop("successive isdlv values must not increase") if (!is.Numeric(epsilon, positive = TRUE, allowable.length = 1)) stop("bad input for 'epsilon'") if (!is.Numeric(Etamat.colmax, positive = TRUE, allowable.length = 1) || Etamat.colmax < Rank) stop("bad input for 'Etamat.colmax'") if (!is.Numeric(Hstep, positive = TRUE, allowable.length = 1)) stop("bad input for 'Hstep'") if (!is.Numeric(maxitl, positive = TRUE, allowable.length = 1, integer.valued = TRUE)) stop("bad input for 'maxitl'") if (!is.Numeric(imethod, positive = TRUE, allowable.length = 1, integer.valued = TRUE)) stop("bad input for 'imethod'") if (!is.Numeric(Maxit.optim, integer.valued = TRUE, positive = TRUE)) stop("Bad input for 'Maxit.optim'") if (!is.Numeric(MUXfactor, positive = TRUE)) stop("bad input for 'MUXfactor'") if (any(MUXfactor < 1 | MUXfactor > 10)) stop("MUXfactor values must lie between 1 and 10") if (!is.Numeric(optim.maxit, allowable.length = 1, integer.valued = TRUE, positive = TRUE)) stop("Bad input for 'optim.maxit'") if (!is.Numeric(Rank, positive = TRUE, allowable.length = 1, integer.valued = TRUE)) stop("bad input for 'Rank'") if (!is.Numeric(SD.Cinit, positive = TRUE, allowable.length = 1)) stop("bad input for 'SD.Cinit'") if (ITolerances && !EqualTolerances) stop("'EqualTolerances' must be TRUE if 'ITolerances' is TRUE") if (!is.Numeric(Bestof, positive = TRUE, allowable.length = 1, integer.valued = TRUE)) stop("bad input for 'Bestof'") FastAlgorithm = as.logical(FastAlgorithm)[1] if (!FastAlgorithm) stop("FastAlgorithm = TRUE is now required") if ((SmallNo < .Machine$double.eps) || (SmallNo > .0001)) stop("SmallNo is out of range") if (any(Parscale <= 0)) stop("Parscale must contain positive numbers only") if (!is.logical(checkwz) || length(checkwz) != 1) stop("bad input for 'checkwz'") if (!is.Numeric(wzepsilon, allowable.length = 1, positive = TRUE)) stop("bad input for 'wzepsilon'") ans <- list( Bestof = Bestof, checkwz=checkwz, Cinit = Cinit, Crow1positive=as.logical(rep(Crow1positive, len = Rank)), ConstrainedQO = TRUE, # A constant, not a control parameter Corner = FALSE, # Needed for valt.1iter() Dzero = NULL, epsilon = epsilon, EqualTolerances = EqualTolerances, Etamat.colmax = Etamat.colmax, FastAlgorithm = FastAlgorithm, GradientFunction = GradientFunction, Hstep = Hstep, isdlv = rep(isdlv, len = Rank), iKvector = as.numeric(iKvector), iShape = as.numeric(iShape), ITolerances = ITolerances, maxitl = maxitl, imethod = imethod, Maxit.optim = Maxit.optim, min.criterion = TRUE, # needed for calibrate MUXfactor = rep(MUXfactor, length = Rank), noRRR = noRRR, optim.maxit = optim.maxit, OptimizeWrtC = TRUE, Parscale = Parscale, Quadratic = TRUE, Rank = Rank, save.weight = FALSE, SD.Cinit = SD.Cinit, SmallNo = SmallNo, szero = NULL, Svd.arg = TRUE, Alpha = 0.5, Uncorrelated.lv = TRUE, trace = trace, Use.Init.Poisson.QO = as.logical(Use.Init.Poisson.QO)[1], wzepsilon = wzepsilon) ans } VGAM/R/print.vlm.q0000644000176000001440000000347412136651110013354 0ustar ripleyusers# These functions are # Copyright (C) 1998-2013 T.W. 
Yee, University of Auckland. # All rights reserved. show.vlm <- function(object) { if (!is.null(cl <- object@call)) { cat("Call:\n") dput(cl) } coef <- object@coefficients cat("\nCoefficients:\n") print(coef) rank <- object@rank if (is.null(rank)) rank <- sum(!is.na(coef)) n <- object@misc$n M <- object@misc$M nobs <- if (length(object@df.total)) object@df.total else n * M rdf <- object@df.residual if (is.null(rdf)) rdf <- (n - rank) * M cat("\nDegrees of Freedom:", nobs, "Total;", rdf, "Residual\n") if (length(deviance(object)) && is.finite(deviance(object))) cat("Deviance:", format(deviance(object)), "\n") if (length(object@rss) && is.finite(object@rss)) cat("Residual Sum of Squares:", format(object@rss), "\n") invisible(object) } setMethod("show", "vlm", function(object) show.vlm(object)) if (FALSE) print.vlm <- function(x, ...) { if (!is.null(cl <- x@call)) { cat("Call:\n") dput(cl) } coef <- x@coefficients cat("\nCoefficients:\n") print(coef, ...) rank <- x@rank if (is.null(rank)) rank <- sum(!is.na(coef)) n <- x@misc$n M <- x@misc$M nobs <- if (length(x@df.total)) x@df.total else n * M rdf <- x@df.residual if (is.null(rdf)) rdf <- (n - rank) * M cat("\nDegrees of Freedom:", nobs, "Total;", rdf, "Residual\n") if (length(deviance(x)) && is.finite(deviance(x))) cat("Deviance:", format(deviance(x)), "\n") if (length(x@rss) && is.finite(x@rss)) cat("Residual Sum of Squares:", format(x@rss), "\n") invisible(x) } if (!is.R()) { setMethod("show", "vlm", function(object) print.vlm(object)) } if (FALSE) setMethod("print", "vlm", function(x, ...) print.vlm(x, ...)) VGAM/R/print.vglm.q0000644000176000001440000001143312136651110013515 0ustar ripleyusers# These functions are # Copyright (C) 1998-2013 T.W. Yee, University of Auckland. # All rights reserved. 
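# Editorial note (hedged sketch): the show methods below drive what is
# printed when a fitted model's name is typed at the prompt.  A minimal
# usage illustration, wrapped in if (FALSE) in keeping with this file's
# convention for inactive code, and assuming the pneumo data frame that
# ships with VGAM:
if (FALSE) {
  fit <- vglm(cbind(normal, mild, severe) ~ let,
              cumulative(parallel = TRUE), data = pneumo)
  fit            # auto-printing dispatches show.vglm(): call, coefficients,
                 # degrees of freedom, residual deviance, log-likelihood
  logLik(fit)    # the same log-likelihood that show.vglm() reports
}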
show.vglm <- function(object) { if (!is.null(cl <- object@call)) { cat("Call:\n") dput(cl) } coef <- object@coefficients if (any(nas <- is.na(coef))) { if (is.null(names(coef))) names(coef) <- paste("b", 1:length(coef), sep = "") cat("\nCoefficients: (", sum(nas), " not defined because of singularities)\n", sep = "") } else { cat("\nCoefficients:\n") } print(coef) rank <- object@rank if (!length(rank)) rank <- sum(!nas) nobs <- if (length(object@df.total)) object@df.total else length(object@residuals) rdf <- object@df.residual if (!length(rdf)) rdf <- nobs - rank cat("\nDegrees of Freedom:", nobs, "Total;", rdf, "Residual\n") if (length(deviance(object))) cat("Residual deviance:", format(deviance(object)), "\n") llx <- logLik.vlm(object = object) if (length(llx)) cat("Log-likelihood:", format(llx), "\n") if (length(object@criterion)) { ncrit <- names(object@criterion) for(ii in ncrit) if (ii != "loglikelihood" && ii != "deviance") cat(paste(ii, ":", sep = ""), format(object@criterion[[ii]]), "\n") } invisible(object) } show.vgam <- function(object) { digits <- 2 if (!is.null(cl <- object@call)) { cat("Call:\n") dput(cl) } coef <- object@coefficients nas <- is.na(coef) rank <- object@rank if (is.null(rank)) rank <- sum(!nas) nobs <- if (length(object@df.total)) object@df.total else length(object@residuals) rdf <- object@df.residual if (is.null(rdf)) rdf <- nobs - rank cat("\nDegrees of Freedom:", nobs, "Total;", format(round(rdf, digits = digits)), "Residual\n") if (length(deviance(object))) cat("Residual deviance:", format(deviance(object)), "\n") llx <- logLik.vlm(object = object) if (length(llx)) cat("Log-likelihood:", format(llx), "\n") criterion <- attr(terms(object), "criterion") if (!is.null(criterion) && criterion != "coefficients") cat(paste(criterion, ":", sep = ""), format(object[[criterion]]), "\n") invisible(object) } setMethod("show", "vlm", function(object) show.vlm (object)) setMethod("show", "vglm", function(object) show.vglm(object)) setMethod("show", "vgam", function(object) show.vgam(object)) if (FALSE) print.vglm <- function(x, ...) { if (!is.null(cl <- x@call)) { cat("Call:\n") dput(cl) } coef <- x@coefficients if (any(nas <- is.na(coef))) { if (is.null(names(coef))) names(coef) <- paste("b", 1:length(coef), sep = "") cat("\nCoefficients: (", sum(nas), " not defined because of singularities)\n", sep = "") } else { cat("\nCoefficients:\n") } print.default(coef, ...) rank <- x@rank if (!length(rank)) rank <- sum(!nas) nobs <- if (length(x@df.total)) x@df.total else length(x@residuals) rdf <- x@df.residual if (!length(rdf)) rdf <- nobs - rank cat("\nDegrees of Freedom:", nobs, "Total;", rdf, "Residual\n") if (length(deviance(x))) cat("Residual deviance:", format(deviance(x)), "\n") llx <- logLik.vlm(object = x) if (length(llx)) cat("Log-likelihood:", format(llx), "\n") if (length(x@criterion)) { ncrit <- names(x@criterion) for(ii in ncrit) if (ii != "loglikelihood" && ii != "deviance") cat(paste(ii, ":", sep = ""), format(x@criterion[[ii]]), "\n") } invisible(x) } if (FALSE) print.vgam <- function(x, digits = 2, ...) 
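# Editorial note: the print.vglm()/print.vgam() definitions kept inside
# if (FALSE) blocks in this file are retired; the S4 show methods
# registered above are what actually run under R's method dispatch.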
{ if (!is.null(cl <- x@call)) { cat("Call:\n") dput(cl) } coef <- x@coefficients nas <- is.na(coef) rank <- x@rank if (is.null(rank)) rank <- sum(!nas) nobs <- if (length(x@df.total)) x@df.total else length(x@residuals) rdf <- x@df.residual if (is.null(rdf)) rdf <- nobs - rank cat("\nDegrees of Freedom:", nobs, "Total;", format(round(rdf, dig = digits)), "Residual\n") if (length(deviance(x))) cat("Residual deviance:", format(deviance(x)), "\n") llx <- logLik.vlm(object = x) if (length(llx)) cat("Log-likelihood:", format(llx), "\n") criterion <- attr(terms(x), "criterion") # 11/8/03; x@terms$terms, if (!is.null(criterion) && criterion != "coefficients") cat(paste(criterion, ":", sep = ""), format(x[[criterion]]), "\n") invisible(x) } if (FALSE) { setMethod("print", "vlm", function(x, ...) print.vlm(x, ...)) setMethod("print", "vglm", function(x, ...) print.vglm(x, ...)) setMethod("print", "vgam", function(x, ...) print.vgam(x, ...)) setMethod("show", "vlm", function(object) print.vlm(object)) setMethod("show", "vglm", function(object) print.vglm(object)) setMethod("show", "vgam", function(object) print.vgam(object)) } VGAM/R/predict.vlm.q0000644000176000001440000002554112136651110013651 0ustar ripleyusers# These functions are # Copyright (C) 1998-2013 T.W. Yee, University of Auckland. # All rights reserved. predict.vlm <- function(object, newdata = NULL, type = c("response", "terms"), se.fit = FALSE, scale = NULL, terms.arg = NULL, raw = FALSE, dispersion = NULL, ...) { Xm2 <- NULL xij.used <- length(form2 <- object@misc$form2) || length(object@control$xij) if (mode(type) != "character" && mode(type) != "name") type <- as.character(substitute(type)) type <- match.arg(type, c("response", "terms"))[1] na.act <- object@na.action object@na.action <- list() if (raw && type != "terms") stop("sorry, 'raw=TRUE' only works when 'type=\"terms\"'") if (!length(newdata) && type == "response" && !se.fit && length(object@fitted.values)) { if (length(na.act)) { return(napredict(na.act[[1]], object@fitted.values)) } else { return(object@fitted.values) } } ttob <- terms(object) # 11/8/03; object@terms$terms if (!length(newdata)) { offset <- object@offset if (xij.used) { bothList <- model.matrix(object, type = "bothlmlm2") X <- bothList$X Xm2 <- bothList$Xm2 } else { X <- model.matrix(object, type = "lm") } } else { if (is.smart(object) && length(object@smart.prediction)) { setup.smart("read", smart.prediction = object@smart.prediction) } X <- model.matrix(delete.response(ttob), newdata, contrasts = if (length(object@contrasts)) object@contrasts else NULL, xlev = object@xlevels) if (xij.used) { ttXm2 <- terms(form2) Xm2 <- model.matrix(delete.response(ttXm2), newdata, contrasts = if (length(object@contrasts)) object@contrasts else NULL, xlev = object@xlevels) } if (object@misc$intercept.only && nrow(X) != nrow(newdata)) { as.save <- attr(X, "assign") X <- X[rep(1, nrow(newdata)), , drop = FALSE] dimnames(X) <- list(dimnames(newdata)[[1]], "(Intercept)") attr(X, "assign") <- as.save # Restored } offset <- if (!is.null(off.num <- attr(ttob, "offset"))) { eval(attr(ttob, "variables")[[off.num+1]], newdata) } else if (!is.null(object@offset)) eval(object@call$offset, newdata) if (is.smart(object) && length(object@smart.prediction)) { wrapup.smart() } attr(X, "assign") <- attrassigndefault(X, ttob) if (length(Xm2)) attr(Xm2, "assign") <- attrassigndefault(Xm2, ttXm2) } hasintercept <- attr(ttob, "intercept") dx1 <- dimnames(X)[[1]] M <- object@misc$M Blist <- object@constraints ncolBlist <- unlist(lapply(Blist, 
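                         # ncolBlist records the number of columns of each
                         # constraint matrix in Blist, i.e. how many
                         # coefficients each term contributes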
ncol)) if (hasintercept) ncolBlist <- ncolBlist[-1] xbar <- x2bar <- NULL if (type == "terms" && hasintercept) { if (length(object@control$xij)) { x2bar <- colMeans(Xm2) Xm2 <- sweep(Xm2, 2, x2bar) } xbar <- colMeans(X) X <- sweep(X, 2, xbar) nac <- is.na(object@coefficients) if (any(nac)) { if (length(object@control$xij)) stop("cannot handle 'xij' argument when ", "there are NAs in the coefficients") X <- X[, !nac, drop = FALSE] xbar <- xbar[!nac] } } if (!is.null(newdata) && !is.data.frame(newdata)) newdata <- as.data.frame(newdata) nn <- if (!is.null(newdata)) nrow(newdata) else object@misc$n if (raw) { Blist <- canonical.Blist(Blist) object@constraints <- Blist } X_vlm <- lm2vlm.model.matrix(X, Blist = Blist, M = M, xij = object@control$xij, Xm2 = Xm2) attr(X_vlm, "constant") <- xbar attr(X_vlm, "constant2") <- x2bar coefs <- coefvlm(object) vasgn <- attr(X_vlm, "vassign") if (type == "terms") { nv <- names(vasgn) if (hasintercept) nv <- nv[-(1:ncol(object@constraints[["(Intercept)"]]))] terms.arg <- if (is.null(terms.arg)) nv else terms.arg index <- charmatch(terms.arg, nv) if (all(index == 0)) { warning("no match found; returning all terms") index <- 1:length(nv) } vasgn <- vasgn[nv[index]] } if (any(is.na(object@coefficients))) stop("cannot handle NAs in 'object@coefficients'") dname2 <- object@misc$predictors.names if (se.fit) { object <- as(object, "vlm") # Coerce fit.summary <- summaryvlm(object, dispersion=dispersion) sigma <- if (is.numeric(fit.summary@sigma)) fit.summary@sigma else sqrt(deviance(object) / object@df.residual) # was @rss pred <- Build.terms.vlm(x = X_vlm, coefs = coefs, cov = sigma^2 * fit.summary@cov.unscaled, assign = vasgn, collapse = type != "terms", M = M, dimname = list(dx1, dname2), coefmat = coefvlm(object, matrix.out = TRUE)) pred$df <- object@df.residual pred$sigma <- sigma } else { pred <- Build.terms.vlm(x = X_vlm, coefs = coefs, cov = NULL, assign = vasgn, collapse = type != "terms", M = M, dimname = list(dx1, dname2), coefmat = coefvlm(object, matrix.out = TRUE)) } constant <- attr(pred, "constant") if (type != "terms" && length(offset) && any(offset != 0)) { if (se.fit) { pred$fitted.values <- pred$fitted.values + offset } else { pred <- pred + offset } } if (type == "terms") { Blist <- subconstraints(object@misc$orig.assign, object@constraints) ncolBlist <- unlist(lapply(Blist, ncol)) if (hasintercept) ncolBlist <- ncolBlist[-1] cs <- cumsum(c(1, ncolBlist)) # Like a pointer for(ii in 1:(length(cs)-1)) if (cs[ii+1] - cs[ii] > 1) for(kk in (cs[ii]+1):(cs[ii+1]-1)) if (se.fit) { pred$fitted.values[, cs[ii]]<- pred$fitted.values[, cs[ii]] + pred$fitted.values[, kk] pred$se.fit[, cs[ii]] <- pred$se.fit[, cs[ii]] + pred$se.fit[, kk] } else { pred[, cs[ii]] <- pred[, cs[ii]] + pred[, kk] } if (se.fit) { pred$fitted.values <- pred$fitted.values[, cs[-length(cs)], drop = FALSE] pred$se.fit <- pred$se.fit[, cs[-length(cs)], drop = FALSE] } else { pred <- pred[, cs[-length(cs)], drop = FALSE] } pp <- if (se.fit) ncol(pred$fitted.values) else ncol(pred) if (se.fit) { dimnames(pred$fitted.values) <- dimnames(pred$se.fit) <- NULL dim(pred$fitted.values) <- dim(pred$se.fit) <- c(M, nn, pp) pred$fitted.values <- aperm(pred$fitted.values, c(2, 1, 3)) pred$se.fit <- aperm(pred$se.fit, c(2, 1, 3)) dim(pred$fitted.values) <- dim(pred$se.fit) <- c(nn, M*pp) } else { dimnames(pred) <- NULL # Saves a warning dim(pred) <- c(M, nn, pp) pred <- aperm(pred, c(2, 1, 3)) dim(pred) <- c(nn, M*pp) } if (raw) { kindex <- NULL for(ii in 1:pp) kindex <- c(kindex, (ii-1)*M + 
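                         # kindex picks out, within each block of M columns,
                         # the first ncolBlist[ii] columns: the raw,
                         # constraint-scale columns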
(1:ncolBlist[ii])) if (se.fit) { pred$fitted.values <- pred$fitted.values[, kindex, drop = FALSE] pred$se.fit <- pred$se.fit[, kindex, drop = FALSE] } else { pred <- pred[, kindex, drop = FALSE] } } temp <- if (raw) ncolBlist else rep(M, length(ncolBlist)) dd <- vlabel(names(ncolBlist), temp, M) if (se.fit) { dimnames(pred$fitted.values) <- dimnames(pred$se.fit) <- list(if(length(newdata)) dimnames(newdata)[[1]] else dx1, dd) } else { dimnames(pred) <- list(if(length(newdata)) dimnames(newdata)[[1]] else dx1, dd) } if (!length(newdata) && length(na.act)) { if (se.fit) { pred$fitted.values <- napredict(na.act[[1]], pred$fitted.values) pred$se.fit <- napredict(na.act[[1]], pred$se.fit) } else { pred <- napredict(na.act[[1]], pred) } } if (!raw) cs <- cumsum(c(1, M + 0*ncolBlist)) fred <- vector("list", length(ncolBlist)) for(ii in 1:length(fred)) fred[[ii]] <- cs[ii]:(cs[ii+1]-1) names(fred) <- names(ncolBlist) if (se.fit) { attr(pred$fitted.values, "vterm.assign") <- attr(pred$se.fit, "vterm.assign") <- fred } else { attr(pred, "vterm.assign") <- fred } } # End of if (type == "terms") if (!is.null(xbar)) { if (se.fit) { attr(pred$fitted.values, "constant") <- constant } else { attr(pred, "constant") <- constant } } pred } setMethod("predict", "vlm", function(object, ...) predict.vlm(object, ...)) predict.vglm.se <- function(fit, ...) { H_ss <- hatvalues(fit, type = "centralBlocks") # diag = FALSE M <- npred(fit) nn <- nobs(fit, type = "lm") U <- vchol(weights(fit, type = "working"), M = M, n = nn) Uarray <- array(0, c(M, M, nn)) ind1 <- iam(NA, NA, M = M, both = TRUE, diag = TRUE) MM12 <- M * (M + 1) / 2 for (jay in 1:MM12) Uarray[ind1$row.index[jay], ind1$col.index[jay], ] <- U[jay, ] Uinv.array <- apply(Uarray, 3, backsolve, x = diag(M)) dim(Uinv.array) <- c(M, M, nn) Utinv.array <- Uinv.array if (M > 1) for (jay in 1:(M-1)) { for (kay in (jay+1):M) { Utinv.array[kay, jay, ] <- Uinv.array[jay, kay, ] Utinv.array[jay, kay, ] <- 0 } } var.boldeta_i <- mux5(H_ss, Utinv.array, M = M, matrix.arg = TRUE) # First M cols are SE^2 sqrt(var.boldeta_i[, 1:M]) # SE(linear.predictor) sqrt(var.boldeta_i[, 1:M]) } subconstraints <- function(assign, constraints) { ans <- vector("list", length(assign)) if (!length(assign) || !length(constraints)) stop("assign and/or constraints is empty") for(ii in 1:length(assign)) ans[[ii]] <- constraints[[assign[[ii]][1]]] names(ans) <- names(assign) ans } is.linear.term <- function(ch) { lchar <- length(ch) ans <- rep(FALSE, len = lchar) for(ii in 1:lchar) { nc <- nchar(ch[ii]) x <- substring(ch[ii], 1:nc, 1:nc) ans[ii] <- all(x != "(" & x != "+" & x != "-" & x != "/" & x != "*" & x != "^") } names(ans) <- ch ans } canonical.Blist <- function(Blist) { for(ii in 1:length(Blist)) { temp <- Blist[[ii]] * 0 temp[cbind(1:ncol(temp),1:ncol(temp))] <- 1 Blist[[ii]] <- temp } Blist } VGAM/R/predict.vglm.q0000644000176000001440000002015112136651110014010 0ustar ripleyusers# These functions are # Copyright (C) 1998-2013 T.W. Yee, University of Auckland. # All rights reserved. predictvglm <- function(object, newdata = NULL, type = c("link", "response", "terms"), se.fit = FALSE, deriv = 0, dispersion = NULL, untransform = FALSE, extra = object@extra, ...) 
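# Editorial sketch (hedged; kept as comments so the source still parses):
# typical calls, assuming a fitted vglm object 'fit':
#   predict(fit)                       # eta, the linear predictors ("link")
#   predict(fit, type = "response")    # fitted values via @family@linkinv
#   predict(fit, untransform = TRUE)   # parameters back on their own scale
#                                      # (only with type = "link", se.fit = FALSE)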
{ na.act <- object@na.action object@na.action <- list() if (missing(extra)) { } if (deriv != 0) stop("'deriv' must be 0 for predictvglm()") if (mode(type) != "character" && mode(type) != "name") type <- as.character(substitute(type)) type <- match.arg(type, c("link", "response", "terms"))[1] if (untransform && (type!="link" || se.fit || deriv != 0)) stop("argument 'untransform=TRUE' only if 'type=\"link\", ", "se.fit = FALSE, deriv=0'") pred <- if (se.fit) { switch(type, response = { warning("'type=\"response\"' and 'se.fit=TRUE' not valid ", "together; setting 'se.fit = FALSE'") se.fit <- FALSE predictor <- predict.vlm(object, newdata = newdata, type = type, se.fit = se.fit, deriv = deriv, dispersion = dispersion, ...) fv <- object@family@linkinv(predictor, extra) dimnames(fv) <- list(dimnames(fv)[[1]], dimnames(object@fitted.values)[[2]]) fv }, link = { predict.vlm(object, newdata = newdata, type = "response", se.fit = se.fit, deriv = deriv, dispersion = dispersion, ...) }, terms = { predict.vlm(object, newdata = newdata, type = type, se.fit = se.fit, deriv = deriv, dispersion = dispersion, ...) }) # End of switch } else { if (is.null(newdata)) { switch(type, link = object@predictors, response = object@fitted.values, terms = { predict.vlm(object, newdata = newdata, type = type, se.fit = se.fit, deriv = deriv, dispersion = dispersion, ...) }) } else { if (!(length(object@offset) == 1 && object@offset == 0)) warning("zero offset used") switch(type, response = { predictor <- predict.vlm(object, newdata = newdata, type = type, se.fit = se.fit, deriv = deriv, dispersion = dispersion, ...) M <- object@misc$M fv <- object@family@linkinv(predictor, extra) if (M > 1 && is.matrix(fv)) { dimnames(fv) <- list(dimnames(fv)[[1]], dimnames(object@fitted.values)[[2]]) } else { } fv }, link = { predict.vlm(object, newdata = newdata, type = "response", se.fit = se.fit, deriv = deriv, dispersion = dispersion, ...) }, terms = { predict.vlm(object, newdata = newdata, type = type, se.fit = se.fit, deriv = deriv, dispersion = dispersion, ...) }) # End of switch } } if (!length(newdata) && length(na.act)) { if (se.fit) { pred$fitted.values <- napredict(na.act[[1]], pred$fitted.values) pred$se.fit <- napredict(na.act[[1]], pred$se.fit) } else { pred <- napredict(na.act[[1]], pred) } } if (untransform) untransformVGAM(object, pred) else pred } setMethod("predict", "vglm", function(object, ...) predictvglm(object, ...)) predict.rrvglm <- function(object, newdata = NULL, type = c("link", "response", "terms"), se.fit = FALSE, deriv = 0, dispersion = NULL, extra = object@extra, ...) { if (se.fit) { stop("20030811; predict.rrvglm(..., se.fit=TRUE) not complete yet") pred <- switch(type, response = { warning("'type=\"response\"' and 'se.fit=TRUE' not valid ", "together; setting 'se.fit = FALSE'") se.fit <- FALSE predictor <- predict.vlm(object, newdata = newdata, type = type, se.fit = se.fit, deriv = deriv, dispersion = dispersion, ...) fv <- object@family@linkinv(predictor, extra) dimnames(fv) <- list(dimnames(fv)[[1]], dimnames(object@fitted.values)[[2]]) fv }, link = { type <- "response" predict.vlm(object, newdata = newdata, type = type, se.fit = se.fit, deriv = deriv, dispersion = dispersion, ...) }, terms = { predict.vlm(object, newdata = newdata, type = type, se.fit = se.fit, deriv = deriv, dispersion = dispersion, ...) 
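          # (unreachable: the stop() at the top of this se.fit branch
          # exits before this switch() is evaluated)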
} ) } else { return(predictvglm(object, newdata = newdata, type = type, se.fit = se.fit, deriv = deriv, dispersion = dispersion, ...)) } na.act <- object@na.action if (!length(newdata) && length(na.act)) { if (se.fit) { pred$fitted.values <- napredict(na.act[[1]], pred$fitted.values) pred$se.fit <- napredict(na.act[[1]], pred$se.fit) } else { pred <- napredict(na.act[[1]], pred) } } pred } setMethod("predict", "rrvglm", function(object, ...) predict.rrvglm(object, ...)) untransformVGAM <- function(object, pred) { M <- object@misc$M Links <- object@misc$link if (length(Links) != M && length(Links) != 1) stop("cannot obtain the link functions to untransform the object") upred <- pred earg <- object@misc$earg LINK <- object@misc$link # link.names # This should be a character vector. EARG <- object@misc$earg # This could be a NULL if (is.null(EARG)) EARG <- list(theta = NULL) if (!is.list(EARG)) stop("the 'earg' component of 'object@misc' must be a list") if (length(LINK) != M && length(LINK) != 1) stop("cannot obtain the link functions to untransform 'object'") if (!is.character(LINK)) stop("the 'link' component of 'object@misc' should ", "be a character vector") learg <- length(EARG) llink <- length(LINK) if (llink != learg) stop("the 'earg' component of 'object@misc' should ", "be a list of length ", learg) level1 <- length(EARG) > 3 && length(intersect(names(EARG), c("theta", "inverse", "deriv", "short", "tag"))) > 3 if (level1) EARG <- list(oneOnly = EARG) learg <- length(EARG) for(ii in 1:M) { TTheta <- pred[, ii] # Transformed theta use.earg <- if (llink == 1) EARG[[1]] else EARG[[ii]] function.name <- if (llink == 1) LINK else LINK[ii] use.earg[["inverse"]] <- TRUE # New use.earg[["theta"]] <- TTheta # New Theta <- do.call(function.name, use.earg) upred[, ii] <- Theta } dmn2 <- if (length(names(object@misc$link))) names(object@misc$link) else { if (length(object@misc$parameters)) object@misc$parameters else NULL } dimnames(upred) <- list(dimnames(upred)[[1]], dmn2) upred } VGAM/R/predict.vgam.q0000644000176000001440000002520512136651110014002 0ustar ripleyusers# These functions are # Copyright (C) 1998-2013 T.W. Yee, University of Auckland. # All rights reserved. predict.vgam <- function(object, newdata = NULL, type = c("link", "response", "terms"), se.fit = FALSE, deriv.arg = 0, terms.arg = NULL, raw = FALSE, all = TRUE, offset = 0, untransform = FALSE, dispersion = NULL, ...) 
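# Editorial sketch (hedged, commented): for a fitted vgam object 'fit':
#   predict(fit, type = "terms")                  # per-term contributions
#   predict(fit, type = "terms", deriv.arg = 1)   # derivatives of s() terms
# deriv.arg > 0 is only accepted with type = "terms" and se.fit = FALSE,
# as the checks below enforce.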
{ newdata <- if (missing(newdata)) { NULL } else { as.data.frame(newdata) } no.newdata <- (length(newdata) == 0) na.act <- object@na.action object@na.action <- list() if (mode(type) != "character" && mode(type) != "name") type <- as.character(substitute(type)) type <- match.arg(type, c("link", "response", "terms"))[1] if (untransform && (type != "link" || se.fit || deriv.arg != 0 || offset != 0)) stop("argument 'untransform = TRUE' only if type='link', ", "se.fit = FALSE, deriv = 0") if (raw && type!="terms") stop("'raw = TRUE' only works when 'type = \"terms\"'") if (!is.numeric(deriv.arg) || deriv.arg < 0 || deriv.arg != round(deriv.arg) || length(deriv.arg) > 1) stop("bad input for the 'deriv' argument") if (deriv.arg > 0 && type!="terms") stop("'deriv>0' can only be specified if 'type=\"terms\"'") if (deriv.arg != 0 && !(type != "response" && !se.fit)) stop("argument 'deriv' only works with type != 'response' and ", "se.fit = FALSE") if (se.fit && length(newdata)) stop("cannot specify 'se.fit = TRUE' when argument 'newdata' ", "is assigned") tt <- terms(object) # 11/8/03; object@terms$terms ttf <- attr(tt, "factors") tto <- attr(tt, "order") intercept <- attr(tt, "intercept") if (!intercept) stop("an intercept is assumed") M <- object@misc$M Blist <- object@constraints ncolBlist <- unlist(lapply(Blist, ncol)) if (intercept) ncolBlist <- ncolBlist[-1] if (raw) { Blist <- canonical.Blist(Blist) object@constraints <- Blist } if (!length(newdata)) { if (type == "link") { if (se.fit) { stop("cannot handle this option (se.fit = TRUE) currently") } else { answer <- if (length(na.act)) { napredict(na.act[[1]], object@predictors) } else { object@predictors } if (untransform) return(untransformVGAM(object, answer)) else return(answer) } } else if (type == "response") { if (se.fit) { stop("cannot handle this option (se.fit = TRUE) currently") } else { if (length(na.act)) { return(napredict(na.act[[1]], object@fitted.values)) } else { return(object@fitted.values) } } } predictor <- predict.vlm(object, type = "terms", se.fit = se.fit, terms.arg = terms.arg, raw = raw, all = all, offset = offset, dispersion = dispersion, ...) # deriv.arg = deriv.arg, newdata <- model.matrixvlm(object, type = "lm") } else { temp.type <- if (type == "link") "response" else type predictor <- predict.vlm(object, newdata, type = temp.type, se.fit = se.fit, terms.arg = terms.arg, raw = raw, all = all, offset = offset, dispersion = dispersion, ...) 
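    # NB: 'deriv.arg' is not forwarded to predict.vlm() (see the
    # commented-out argument just below); derivatives of s() terms are
    # instead rebuilt later from object@Bspline via
    # predictvsmooth.spline.fit().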
# deriv.arg = deriv.arg, } if (deriv.arg > 0) if (se.fit) { predictor$fitted.values <- predictor$fitted.values * 0 predictor$se.fit <- predictor$se.fit * NA } else { predictor <- predictor * 0 } if (length(s.xargument <- object@s.xargument)) { dnames2 <- dimnames(newdata)[[2]] index1 <- match(s.xargument, dnames2, nomatch = FALSE) index2 <- match(names(s.xargument), dnames2, nomatch = FALSE) index <- index1 | index2 if (!length(index) || any(!index)) stop("required variables not found in newdata") if (is.null(tmp6 <- attr(if(se.fit) predictor$fitted.values else predictor, "vterm.assign"))) { Blist <- subconstraints(object@misc$orig.assign, object@constraints) ncolBlist <- unlist(lapply(Blist, ncol)) if (intercept) ncolBlist <- ncolBlist[-1] cs <- if (raw) cumsum(c(1, ncolBlist)) else cumsum(c(1, M + 0*ncolBlist)) tmp6 <- vector("list", length(ncolBlist)) for(ii in 1:length(tmp6)) tmp6[[ii]] <- cs[ii]:(cs[ii+1]-1) names(tmp6) <- names(ncolBlist) } n.s.xargument <- names(s.xargument) # e.g., c("s(x)", "s(x2)") for(ii in n.s.xargument) { fred <- s.xargument[ii] if (!any(dimnames(newdata)[[2]] == fred)) fred <- ii xx <- newdata[,fred] # [,s.xargument[ii]] # [,nindex[ii]] ox <- order(xx) rawMat <- predictvsmooth.spline.fit( object@Bspline[[ii]], x = xx, deriv = deriv.arg)$y eta.mat <- if (raw) rawMat else (rawMat %*% t(Blist[[ii]])) if (type == "terms") { hhh <- tmp6[[ii]] if (se.fit) { predictor$fitted.values[,hhh] <- predictor$fitted.values[,hhh] + eta.mat TS <- predictor$sigma^2 temp.var <- if (raw) { tmp7 <- object@misc$varassign tmp7 <- tmp7[[ii]] object@var[, tmp7, drop = FALSE] } else stop("cannot handle se's with raw = FALSE") predictor$se.fit[,hhh] <- (predictor$se.fit[,hhh]^2 + TS * temp.var)^0.5 } else { predictor[,hhh] <- predictor[,hhh] + eta.mat } } else { if (se.fit) { predictor$fitted.values <- predictor$fitted.values + eta.mat TS <- 1 # out$residual.scale^2 TS <- predictor$sigma^2 TT <- ncol(object@var) predictor$se.fit <- sqrt(predictor$se.fit^2 + TS * object@var %*% rep(1, TT)) } else { predictor <- predictor + eta.mat } } } } if (type == "link") { if (no.newdata && length(na.act)) { return(napredict(na.act[[1]], predictor)) } else { return(predictor) } } else if (type == "response") { fv <- object@family@linkinv(if(se.fit) predictor$fitted.values else predictor, object@extra) if (is.matrix(fv) && is.matrix(object@fitted.values)) dimnames(fv) <- list(dimnames(fv)[[1]], dimnames(object@fitted.values)[[2]]) if (is.matrix(fv) && ncol(fv) == 1) fv <- c(fv) if (no.newdata && length(na.act)) { fv <- if (se.fit) { napredict(na.act[[1]], fv) } else { napredict(na.act[[1]], fv) } } if (se.fit) { return(list(fit = fv, se.fit = fv*NA)) } else { return(fv) } } else { if (deriv.arg >= 1) if (se.fit) { attr(predictor$fitted.values, "constant") <- NULL } else { attr(predictor, "constant") <- NULL } if (deriv.arg >= 1) { v <- attr(if(se.fit) predictor$fitted.values else predictor, "vterm.assign") is.lin <- is.linear.term(names(v)) coefmat <- coefvlm(object, matrix.out = TRUE) ord <- 0 for(ii in names(v)) { ord <- ord + 1 index <- v[[ii]] lindex <- length(index) if (is.lin[ii]) { if (tto[ord] > 1 || (length(ttf) && ttf[ii,ii])) { if (se.fit) { predictor$fitted.values[,index] <- if (tto[ord]>1) NA else NA } else { predictor[,index] <- if (tto[ord]>1) NA else NA } } else { ans <- coefmat[ii, 1:lindex] if (se.fit) { predictor$fitted.values[,index] = if (deriv.arg == 1) matrix(ans, ncol <- lindex, byrow = TRUE) else 0 } else { predictor[,index] <- if (deriv.arg == 1) matrix(ans, ncol <- lindex, 
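              # (editorial) note 'ncol <- lindex' inside these matrix()
              # calls: that is an assignment whose value is passed
              # positionally as nrow; 'ncol = lindex' may have been intended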
byrow = TRUE) else 0 } } } else if (length(s.xargument) && any(n.s.xargument == ii)) { ans <- coefmat[ii, 1:lindex] if (se.fit) { predictor$fitted.values[,index] = predictor$fitted.values[,index] + (if(deriv.arg == 1) matrix(ans, nrow = nrow(predictor$fitted.values), ncol = lindex, byrow = TRUE) else 0) } else { predictor[, index] <- predictor[, index] + (if(deriv.arg == 1) matrix(ans, nrow = nrow(predictor), ncol = lindex, byrow = TRUE) else 0) } } else { cat("Derivatives of term ", ii, "are unknown\n") if (se.fit) { predictor$fitted.values[,index] <- NA } else { predictor[,index] <- NA } } } } if (no.newdata && length(na.act)) { if (se.fit) { predictor$fitted.values <- napredict(na.act[[1]], predictor$fitted.values) predictor$se.fit <- napredict(na.act[[1]], predictor$se.fit) } else { predictor <- napredict(na.act[[1]], predictor) } } if (se.fit) { attr(predictor$fitted.values, "derivative") <- deriv.arg } else { attr(predictor, "derivative") <- deriv.arg } return(predictor) } } setMethod("predict", "vgam", function(object, ...) predict.vgam(object, ...)) varassign <- function(constraints, n.s.xargument) { if (!length(n.s.xargument)) stop("length(n.s.xargument) must be > 0") ans <- vector("list", length(n.s.xargument)) ncolBlist <- unlist(lapply(constraints, ncol)) names(ans) <- n.s.xargument ptr <- 1 for(ii in n.s.xargument) { temp <- ncolBlist[[ii]] ans[[ii]] <- ptr:(ptr + temp - 1) ptr <- ptr + temp } ans } VGAM/R/plot.vglm.q0000644000176000001440000006354512136651110013352 0ustar ripleyusers# These functions are # Copyright (C) 1998-2013 T.W. Yee, University of Auckland. # All rights reserved. plotvgam <- function(x, newdata = NULL, y = NULL, residuals = NULL, rugplot = TRUE, se = FALSE, scale = 0, raw = TRUE, offset.arg = 0, deriv.arg = 0, overlay = FALSE, type.residuals = c("deviance", "working", "pearson", "response"), plot.arg = TRUE, which.term = NULL, which.cf = NULL, control = plotvgam.control(...), varxij = 1, ...) { missing.control <- missing(control) na.act <- x@na.action x@na.action <- list() # Don't want NAs returned from predict() or resid() if (!is.Numeric(varxij, integer.valued = TRUE, allowable.length = 1, positive = TRUE)) stop("bad input for the 'varxij' argument") if (any(slotNames(x) == "control")) { x@control$varxij <- varxij } missing.type.residuals <- missing(type.residuals) if (mode(type.residuals) != "character" && mode(type.residuals) != "name") type.residuals <- as.character(substitute(type.residuals)) if (!missing.type.residuals) type.residuals <- match.arg(type.residuals, c("deviance", "working", "pearson", "response"))[1] if (!is.Numeric(deriv.arg, integer.valued = TRUE, allowable.length = 1) || deriv.arg < 0) stop("bad input for the 'deriv' argument") if (se && deriv.arg > 0) { warning("standard errors not available with derivatives. ", "Setting 'se = FALSE'") se <- FALSE } preplot.object <- x@preplot if (!length(preplot.object)) { preplot.object <- preplotvgam(x, newdata = newdata, raw = raw, deriv.arg = deriv.arg, se = se, varxij = varxij) } x@preplot <- preplot.object if (!is.null(residuals) && length(residuals) == 1) { if (residuals) { if (missing.type.residuals) { for (rtype in type.residuals) if (!is.null(residuals <- resid(x, type = rtype))) break } else { residuals = resid(x, typ = type.residuals)# Get the prespecified type if (!length(residuals)) warning("residuals are NULL. 
Ignoring 'residuals = TRUE'") } } else { residuals <- NULL } } if (!missing.control) { control <- c(plotvgam.control(.include.dots = FALSE, ...), control, plotvgam.control(...)) } x@post$plotvgam.control <- control # Add it to the object if (plot.arg) plotpreplotvgam(preplot.object, residuals = residuals, rugplot = rugplot, scale = scale, se = se, offset.arg = offset.arg, deriv.arg = deriv.arg, overlay = overlay, which.term = which.term, which.cf = which.cf, control = control) x@na.action <- na.act # Restore its original value invisible(x) } ylim.scale <- function(ylim, scale = 0) { if (length(ylim) != 2 || ylim[2] < ylim[1]) stop("error in 'ylim'") try <- ylim[2] - ylim[1] if (try > scale) ylim else c(ylim[1] + ylim[2] - scale, ylim[1] + ylim[2] + scale) / 2 } getallresponses <- function(xij) { if (!is.list(xij)) return("") allterms <- lapply(xij, terms) allres <- NULL for(ii in 1:length(xij)) allres <- c(allres, as.character(attr(allterms[[ii]],"variables"))[2]) allres } headpreplotvgam <- function(object, newdata = NULL, terms = attr((object@terms)$terms, "term.labels"), raw = TRUE, deriv.arg = deriv.arg, se = FALSE, varxij = 1) { Terms <- terms(object) # 11/8/03; object@terms$terms aa <- attributes(Terms) all.terms <- labels(Terms) xvars <- parse(text = all.terms) names(xvars) <- all.terms terms <- sapply(terms, match.arg, all.terms) Interactions <- aa$order > 1 if (any(Interactions)) { stop("cannot handle interactions") } xvars <- xvars[terms] xnames <- as.list(terms) names(xnames) <- terms modes <- sapply(xvars, mode) for(term in terms[modes != "name"]) { evars <- all.names(xvars[term], functions= FALSE, unique = TRUE) if (!length(evars)) next xnames[[term]] <- evars evars <- parse(text=evars) if (length(evars) == 1) { evars <- evars[[1]] } else if (length(evars) > 1 && any(getallresponses(object@control$xij) == names(xnames)) ) { evars <- evars[[varxij]] } else { evars <- c(as.name("list"), evars) mode(evars) <- "call" } xvars[[term]] <- evars } xvars <- c(as.name("list"), xvars) mode(xvars) <- "call" if (length(newdata)) { xvars <- eval(xvars, newdata) } else { Call <- object@call if (!is.null(Call$subset) | !is.null(Call$na.action) | !is.null(options("na.action")[[1]])) { Rownames <- names(fitted(object)) if (!(Rl <- length(Rownames))) Rownames <- dimnames(fitted(object))[[1]] if (length(object@x) && !(Rl <- length(Rownames))) Rownames <- (dimnames(object@x))[[1]] if (length(object@y) && !(Rl <- length(Rownames))) Rownames <- (dimnames(object@y))[[1]] if (!(Rl <- length(Rownames))) stop("need to have names for fitted.values ", "when call has a 'subset' or 'na.action' argument") form <- paste("~", unlist(xnames), collapse = "+") Mcall <- c(as.name("model.frame"), list(formula = terms(as.formula(form)), subset = Rownames, na.action = function(x) x)) mode(Mcall) <- "call" Mcall$data <- Call$data xvars <- eval(xvars, eval(Mcall)) } else { ecall <- substitute(eval(expression(xvars))) ecall$local <- Call$data xvars <- eval(ecall) } } list(xnames = xnames, xvars = xvars) } preplotvgam <- function(object, newdata = NULL, terms = attr((object@terms)$terms, "term.labels"), raw = TRUE, deriv.arg = deriv.arg, se = FALSE, varxij = 1) { result1 <- headpreplotvgam(object, newdata = newdata, terms = terms, raw = raw, deriv.arg = deriv.arg, se = se, varxij = varxij) xvars <- result1$xvars xnames <- result1$xnames if (FALSE && !is.null(object@control$jix)) { myxij <- object@control$xij if (length(myxij)) { } } pred <- if (length(newdata)) { predict(object, newdata, type = "terms", raw = raw, se.fit 
= se, deriv.arg = deriv.arg) } else { predict(object, type = "terms", raw = raw, se.fit = se, deriv.arg = deriv.arg) } fits <- if (is.atomic(pred)) NULL else pred$fit se.fit <- if (is.atomic(pred)) NULL else pred$se.fit if (is.null(fits)) fits <- pred fred <- attr(fits, "vterm.assign") # NULL for M==1 Constant <- attr(fits, "constant") # NULL if se = TRUE gamplot <- xnames loop.var <- names(fred) for(term in loop.var) { .VGAM.x <- xvars[[term]] myylab <- if (all(substring(term, 1:nchar(term), 1:nchar(term)) != "(")) paste("partial for", term) else term TT <- list(x = .VGAM.x, y = fits[,(if(is.null(fred)) term else fred[[term]])], se.y = if (is.null(se.fit)) NULL else se.fit[,(if(is.null(fred)) term else fred[[term]])], xlab = xnames[[term]], ylab = myylab) class(TT) <- "preplotvgam" gamplot[[term]] <- TT } attr(gamplot, "Constant") <- Constant invisible(gamplot) } plotvlm <- function(object, residuals = NULL, rugplot= FALSE, ...) { stop("sorry, this function hasn't been written yet") } plotvglm <- function(x, residuals = NULL, smooths= FALSE, rugplot= FALSE, id.n= FALSE, ...) { stop("this function hasn't been written yet") } plotpreplotvgam <- function(x, y = NULL, residuals = NULL, rugplot= TRUE, se= FALSE, scale = 0, offset.arg = 0, deriv.arg = 0, overlay = FALSE, which.term = NULL, which.cf = NULL, control = NULL) { listof <- inherits(x[[1]], "preplotvgam") if (listof) { TT <- names(x) if (is.null(which.term)) which.term <- TT # Plot them all plot.no <- 0 for(ii in TT) { plot.no <- plot.no + 1 if ((is.character(which.term) && any(which.term == ii)) || (is.numeric(which.term) && any(which.term == plot.no))) plotpreplotvgam(x[[ii]], y = NULL, residuals, rugplot = rugplot, se = se, scale = scale, offset.arg = offset.arg, deriv.arg = deriv.arg, overlay = overlay, which.cf = which.cf, control = control) } } else { dummy <- function(residuals = NULL, rugplot = TRUE, se = FALSE, scale = 0, offset.arg = 0, deriv.arg = 0, overlay = FALSE, which.cf = NULL, control = plotvgam.control()) c(list(residuals=residuals, rugplot = rugplot, se = se, scale = scale, offset.arg = offset.arg, deriv.arg = deriv.arg, overlay = overlay, which.cf = which.cf), control) d <- dummy(residuals = residuals, rugplot = rugplot, se = se, scale = scale, offset.arg = offset.arg, deriv.arg = deriv.arg, overlay = overlay, which.cf = which.cf, control = control) uniq.comps <- unique(c(names(x), names(d))) Call <- c(as.name("vplot"), c(d, x)[uniq.comps]) mode(Call) <- "call" invisible(eval(Call)) } } vplot.default <- function(x, y, se.y = NULL, xlab = "", ylab = "", residuals = NULL, rugplot = FALSE, scale = 0, se = FALSE, offset.arg = 0, deriv.arg = 0, overlay = FALSE, which.cf = NULL, ...) { switch(data.class(x)[1], logical = vplot.factor(factor(x), y, se.y, xlab, ylab, residuals, rugplot, scale, se, offset.arg = offset.arg, overlay = overlay, ...), if (is.numeric(x)) { vplot.numeric(as.vector(x), y, se.y, xlab, ylab, residuals, rugplot, scale, se, offset.arg = offset.arg, overlay = overlay, ...) } else { warning("The \"x\" component of \"", ylab, "\" has class \"", class(x), "\"; no vplot() methods available") } ) # End of switch } vplot.list <- function(x, y, se.y = NULL, xlab, ylab, residuals = NULL, rugplot = FALSE, scale = 0, se = FALSE, offset.arg = 0, deriv.arg = 0, overlay = FALSE, which.cf = NULL, ...) { if (is.numeric(x[[1]])) { vplot.numeric(x[[1]], y, se.y, xlab, ylab, residuals, rugplot, scale, se, offset.arg = offset.arg, deriv.arg = deriv.arg, overlay = overlay, ...) 
} else { stop("this function has not been written yet") } } plotvgam.control <- function(which.cf = NULL, xlim = NULL, ylim = NULL, llty = par()$lty, slty = "dashed", pcex = par()$cex, pch = par()$pch, pcol = par()$col, lcol = par()$col, rcol = par()$col, scol = par()$col, llwd = par()$lwd, slwd = par()$lwd, add.arg = FALSE, one.at.a.time = FALSE, .include.dots = TRUE, noxmean = FALSE, ...) { ans <- list(which.cf = which.cf, xlim = xlim, ylim = ylim, llty = llty, slty = slty, pcex = pcex, pch = pch, pcol = pcol, lcol = lcol, rcol = rcol, scol = scol, llwd = llwd, slwd = slwd, add.arg = add.arg, noxmean = noxmean, one.at.a.time = one.at.a.time) if (.include.dots) { c(list(...), ans) } else { default.vals <- plotvgam.control() return.list <- list() for(ii in names(default.vals)) { replace.val <- !((length(ans[[ii]]) == length(default.vals[[ii]])) && (length(default.vals[[ii]]) > 0) && (is.logical(all.equal(ans[[ii]], default.vals[[ii]]))) && all.equal(ans[[ii]], default.vals[[ii]])) if (replace.val) return.list[[ii]] <- ans[[ii]] } if (length(return.list)) { names(return.list) <- names(return.list) return.list } else { NULL } } } vplot.numeric <- function(x, y, se.y = NULL, xlab, ylab, residuals = NULL, rugplot= FALSE, se= FALSE, scale = 0, offset.arg = 0, deriv.arg = 0, overlay= FALSE, which.cf = NULL, xlim = NULL, ylim = NULL, llty = par()$lty, slty = "dashed", pcex = par()$cex, pch = par()$pch, pcol = par()$col, lcol = par()$col, rcol = par()$col, scol = par()$col, llwd = par()$lwd, slwd = par()$lwd, add.arg= FALSE, one.at.a.time= FALSE, noxmean = FALSE, separator = ":", ...) { ylim0 <- ylim if (length(y)/length(x) != round(length(y)/length(x))) stop("length of 'x' and 'y' do not seem to match") y <- as.matrix(y) if (!length(which.cf)) which.cf <- 1:ncol(y) # Added 7/8/04 if (!is.null(se.y)) se.y <- as.matrix(se.y) if (!is.null(se.y) && any(is.na(se.y))) se.y <- NULL if (!is.null(residuals)) { residuals <- as.matrix(residuals) if (ncol(residuals) != ncol(y)) { warning("ncol(residuals) != ncol(y) so residuals are not plotted") residuals <- NULL } } offset.arg <- matrix(offset.arg, nrow(y), ncol(y), byrow = TRUE) y <- y + offset.arg ylab <- add.hookey(ylab, deriv.arg) if (xmeanAdded <- (se && !is.null(se.y) && !noxmean && all(substring(ylab, 1:nchar(ylab), 1:nchar(ylab)) != "("))) { x <- c(x, mean(x)) y <- rbind(y, 0 * y[1,]) se.y <- rbind(se.y, 0 * se.y[1,]) if (!is.null(residuals)) residuals <- rbind(residuals, NA*residuals[1,]) # NAs not plotted } ux <- unique(sort(x)) o <- match(ux, x) uy <- y[o, , drop = FALSE] xlim <- range(xlim, ux) ylim <- range(ylim, uy[, which.cf], na.rm = TRUE) if (rugplot) { usex <- if (xmeanAdded) x[-length(x)] else x jx <- jitter(usex[!is.na(usex)]) xlim <- range(c(xlim, jx)) } if (se && !is.null(se.y)) { se.upper <- uy + 2 * se.y[o,,drop = FALSE] se.lower <- uy - 2 * se.y[o,,drop = FALSE] ylim <- range(c(ylim, se.upper[,which.cf], se.lower[,which.cf])) } if (!is.null(residuals)) { if (length(residuals) == length(y)) { residuals <- as.matrix(y + residuals) ylim <- range(c(ylim, residuals[,which.cf]), na.rm = TRUE) } else { residuals <- NULL warning("Residuals do not match 'x' in \"", ylab, "\" preplot object") } } all.missingy <- all(is.na(y)) if (all.missingy) return() ylim <- ylim.scale(ylim, scale) if (overlay) { if (!length(which.cf)) which.cf <- 1:ncol(uy) # Added 7/8/04 if (!add.arg) { matplot(ux, uy[,which.cf], type = "n", xlim = xlim, ylim = ylim, xlab = xlab, ylab = ylab, ...) 
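      # matplot() with type = "n" only sets up the axes; the fitted curves,
      # residual points, rug and se bands are layered on below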
} matlines(ux, uy[,which.cf], lwd = llwd, col = lcol, lty = llty) if (!is.null(residuals)) if (ncol(y) == 1) { points(x, residuals, pch = pch, col = pcol, cex = pcex) } else { matpoints(x, residuals[,which.cf], pch = pch, col = pcol, cex = pcex) # add.arg = TRUE, } if (rugplot) rug(jx, col = rcol) if (se && !is.null(se.y)) { matlines(ux, se.upper[,which.cf], lty = slty, lwd = slwd, col = scol) matlines(ux, se.lower[,which.cf], lty = slty, lwd = slwd, col = scol) } } else { YLAB <- ylab pcex <- rep(pcex, len = ncol(uy)) pch <- rep(pch , len = ncol(uy)) pcol <- rep(pcol, len = ncol(uy)) lcol <- rep(lcol, len = ncol(uy)) llty <- rep(llty, len = ncol(uy)) llwd <- rep(llwd, len = ncol(uy)) slty <- rep(slty, len = ncol(uy)) rcol <- rep(rcol, len = ncol(uy)) scol <- rep(scol, len = ncol(uy)) slwd <- rep(slwd, len = ncol(uy)) for(ii in 1:ncol(uy)) { if (!length(which.cf) || (length(which.cf) && any(which.cf == ii))) { if (is.Numeric(ylim0, allowable.length = 2)) { ylim <- ylim0 } else { ylim <- range(ylim0, uy[, ii], na.rm = TRUE) if (se && !is.null(se.y)) ylim <- range(ylim0, se.lower[, ii], se.upper[, ii], na.rm = TRUE) if (!is.null(residuals)) ylim <- range(c(ylim, residuals[, ii]), na.rm = TRUE) ylim <- ylim.scale(ylim, scale) } if (ncol(uy)>1 && length(separator)) YLAB <- paste(ylab, separator, ii, sep = "") if (!add.arg) { if (one.at.a.time) { readline("Hit return for the next plot ") } plot(ux, uy[, ii], type = "n", xlim = xlim, ylim = ylim, xlab = xlab, ylab = YLAB, ...) } lines(ux, uy[, ii], lwd = llwd[ii], col = lcol[ii], lty = llty[ii]) if (!is.null(residuals)) points(x, residuals[, ii], pch = pch[ii], col = pcol[ii], cex = pcex[ii]) if (rugplot) rug(jx, col = rcol[ii]) if (se && !is.null(se.y)) { lines(ux, se.upper[, ii], lty = slty[ii], lwd = slwd[ii], col = scol[ii]) lines(ux, se.lower[, ii], lty = slty[ii], lwd = slwd[ii], col = scol[ii]) } } } } } vplot.matrix <- function(x, y, se.y = NULL, xlab, ylab, residuals = NULL, rugplot = FALSE, scale = 0, se = FALSE, offset.arg = 0, deriv.arg = 0, overlay = FALSE, which.cf = NULL, ...) { stop("You shouldn't ever call this function!") } add.hookey <- function(ch, deriv.arg = 0) { if (!is.Numeric(deriv.arg, integer.valued = TRUE, allowable.length = 1) || deriv.arg < 0) stop("bad input for the 'deriv' argument") if (deriv.arg == 0) return(ch) hookey <- switch(deriv.arg, "'", "''", "'''", "''''", "'''''", stop("too high a derivative")) nc <- nchar(ch) sub <- substring(ch, 1:nc, 1:nc) if (nc >= 2 && sub[1] == "s" && sub[2] == "(") { paste("s", hookey, substring(ch, 2, nc), sep = "", coll = "") } else { paste(ch, hookey, sep = "", collapse = "") } } vplot.factor <- function(x, y, se.y = NULL, xlab, ylab, residuals = NULL, rugplot = FALSE, scale = 0, se = FALSE, xlim = NULL, ylim = NULL, offset.arg = 0, deriv.arg = 0, overlay= FALSE, which.cf = NULL, ...) { if (deriv.arg > 0) return(NULL) if (length(y)/length(x) != round(length(y)/length(x))) stop("length of 'x' and 'y' do not seem to match") y <- as.matrix(y) if (!is.null(se.y)) se.y <- as.matrix(se.y) if (!is.null(se.y) && any(is.na(se.y))) se.y <- NULL if (!is.null(residuals)) { residuals <- as.matrix(residuals) if (ncol(residuals) != ncol(y)) { warning("ncol(residuals) != ncol(y) so residuals are not plotted") residuals <- NULL } } if (overlay) { vvplot.factor(x, y, se.y = if (is.null(se.y)) NULL else se.y, xlab = xlab, ylab = ylab, residuals = residuals, rugplot = rugplot, scale = scale, se = se, xlim = xlim, ylim = ylim, ...) 
} else { for(ii in 1:ncol(y)) { ylab <- rep(ylab, len = ncol(y)) if (ncol(y) > 1) ylab <- dimnames(y)[[2]] vvplot.factor(x, y[, ii,drop = FALSE], se.y = if (is.null(se.y)) NULL else se.y[, ii,drop = FALSE], xlab = xlab, ylab = ylab[ii], residuals = if (is.null(residuals)) NULL else residuals[, ii,drop = FALSE], rugplot = rugplot, scale = scale, se = se, xlim = xlim, ylim = ylim, ...) } } invisible(NULL) } vvplot.factor <- function(x, y, se.y = NULL, xlab, ylab, residuals = NULL, rugplot = FALSE, scale = 0, se = FALSE, xlim = NULL, ylim = NULL, ...) { M <- ncol(y) nn <- as.numeric(table(x)) codex <- as.numeric(x) ucodex <- seq(nn)[nn > 0] o <- match(ucodex, codex, 0) uy <- y[o,,drop = FALSE] ylim <- range(ylim, uy) xlim <- range(c(0, sum(nn), xlim)) rightx <- cumsum(nn) leftx <- c(0, rightx[ - length(nn)]) ux <- (leftx + rightx)/2 delta <- (rightx - leftx)/8 jx <- runif(length(codex), (ux - delta)[codex], (ux + delta)[codex]) nnajx <- jx[!is.na(jx)] if (rugplot) xlim <- range(c(xlim, nnajx)) if (se && !is.null(se.y)) { se.upper <- uy + 2 * se.y[o,, drop = FALSE] se.lower <- uy - 2 * se.y[o,, drop = FALSE] ylim <- range(c(ylim, se.upper, se.lower)) } if (!is.null(residuals)) { if (length(residuals) == length(y)) { residuals <- y + residuals ylim <- range(c(ylim, residuals)) } else { residuals <- NULL warning("Residuals do not match 'x' in \"", ylab, "\" preplot object") } } ylim <- ylim.scale(ylim, scale) Levels <- levels(x) if (!all(nn)) { keep <- nn > 0 nn <- nn[keep] ux <- ux[keep] delta <- delta[keep] leftx <- leftx[keep] rightx <- rightx[keep] Levels <- Levels[keep] } about <- function(ux, M, Delta = 1 / M) { if (M == 1) return(cbind(ux)) ans <- matrix(as.numeric(NA), length(ux), M) grid <- seq(-Delta, Delta, len = M) for(ii in 1:M) { ans[, ii] <- ux + grid[ii] } ans } uxx <- about(ux, M, Delta = min(delta)) xlim <- range(c(xlim, uxx)) matplot(ux, uy, ylim = ylim, xlim = xlim, xlab = "", type = "n", ylab = ylab, axes = FALSE, frame.plot = TRUE, ...) mtext(xlab, 1, 2, adj = 0.5) axis(side = 2) lpos <- par("mar")[3] mtext(Levels, side = 3, line = lpos/2, at = ux, adj = 0.5, srt = 45) for(ii in 1:M) segments(uxx[, ii] - 1.0 * delta, uy[, ii], uxx[, ii] + 1.0 * delta, uy[, ii]) if (!is.null(residuals)) { for(ii in 1:M) { jux <- uxx[, ii] jux <- jux[codex] jux <- jux + runif(length(jux), -0.7*min(delta), 0.7*min(delta)) if (M == 1) points(jux, residuals[, ii]) else points(jux, residuals[, ii], pch = as.character(ii)) } } if (rugplot) rug(nnajx) if (se) { for(ii in 1:M) { segments(uxx[, ii]+0.5*delta, se.upper[, ii], uxx[, ii]-0.5*delta, se.upper[, ii]) segments(uxx[, ii]+0.5*delta, se.lower[, ii], uxx[, ii]-0.5*delta, se.lower[, ii]) segments(uxx[, ii], se.lower[, ii], uxx[, ii], se.upper[, ii], lty = 2) } } invisible(diff(ylim)) } if(!isGeneric("vplot")) setGeneric("vplot", function(x, ...) standardGeneric("vplot")) setMethod("vplot", "factor", function(x, ...) vplot.factor(x, ...)) setMethod("vplot", "list", function(x, ...) vplot.list(x, ...)) setMethod("vplot", "matrix", function(x, ...) vplot.matrix(x, ...)) setMethod("vplot", "numeric", function(x, ...) vplot.numeric(x, ...)) setMethod("plot", "vlm", function(x, y, ...) { if (!missing(y)) stop("cannot process the 'y' argument") invisible(plotvlm(x, y, ...))}) setMethod("plot", "vglm", function(x, y, ...) { if (!missing(y)) stop("cannot process the 'y' argument") invisible(plotvglm(x, y, ...))}) setMethod("plot", "vgam", function(x, y, ...) 
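  # 'y' exists only to match the plot() generic's signature; it is
  # rejected in the body below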
{ if (!missing(y)) stop("cannot process the 'y' argument") invisible(plotvgam(x, ...))}) plotqrrvglm <- function(object, rtype = c("response", "pearson", "deviance", "working"), ask = FALSE, main = paste(Rtype, "residuals vs latent variable(s)"), xlab = "Latent Variable", ITolerances = object@control$EqualTolerances, ...) { M <- object@misc$M n <- object@misc$n Rank <- object@control$Rank Coef.object <- Coef(object, ITolerances = ITolerances) rtype <- match.arg(rtype, c("response", "pearson", "deviance", "working"))[1] res <- resid(object, type = rtype) my.ylab <- if (length(object@misc$ynames)) object@misc$ynames else rep(" ", len = M) Rtype <- switch(rtype, pearson = "Pearson", response = "Response", deviance = "Deviance", working = "Working") done <- 0 for(rr in 1:Rank) for(ii in 1:M) { plot(Coef.object@lv[, rr], res[, ii], xlab = paste(xlab, if (Rank == 1) "" else rr, sep = ""), ylab = my.ylab[ii], main = main, ...) done <- done + 1 if (done >= prod(par()$mfrow) && ask && done != Rank*M) { done <- 0 readline("Hit return for the next plot: ") } } object } setMethod("plot", "qrrvglm", function(x, y, ...) invisible(plotqrrvglm(object = x, ...))) put.caption <- function(text.arg = "(a)", w.x = c(0.50, 0.50), w.y = c(0.07, 0.93), ...) { text(text.arg, x = weighted.mean(par()$usr[1:2], w = w.x), y = weighted.mean(par()$usr[3:4], w = w.y), ...) } VGAM/R/nobs.R0000644000176000001440000000676212136651110012330 0ustar ripleyusers# These functions are # Copyright (C) 1998-2013 T.W. Yee, University of Auckland. # All rights reserved. nobs.vlm <- function(object, type = c("lm", "vlm"), ...) { if(mode(type) != "character" && mode(type) != "name") type <- as.character(substitute(type)) type <- match.arg(type, c("lm", "vlm"))[1] if (type == "lm") { object@misc$n } else { object@misc$nrow_X_vlm } } if (!isGeneric("nobs")) setGeneric("nobs", function(object, ...) standardGeneric("nobs"), package = "VGAM") setMethod("nobs", "vlm", function(object, ...) nobs.vlm(object, ...)) nvar.vlm <- function(object, type = c("vlm", "lm"), ...) { if(mode(type) != "character" && mode(type) != "name") type <- as.character(substitute(type)) type <- match.arg(type, c("vlm", "lm"))[1] if (type == "lm") { object@misc$p } else { object@misc$ncol_X_vlm } } nvar.vgam <- function(object, type = c("vgam", "zz"), ...) { if(mode(type) != "character" && mode(type) != "name") type <- as.character(substitute(type)) type <- match.arg(type, c("vgam", "zz"))[1] stop("function nvar.vgam() has not been written yet") if (type == "vgam") { object@misc$p } else { object@misc$ncol_X_vlm } } nvar.rrvglm <- function(object, type = c("rrvglm", "zz"), ...) { if(mode(type) != "character" && mode(type) != "name") type <- as.character(substitute(type)) type <- match.arg(type, c("rrvglm", "zz"))[1] stop("function nvar.rrvglm() has not been written yet") if (type == "vgam") { object@misc$p } else { object@misc$ncol_X_vlm } } nvar.qrrvglm <- function(object, type = c("qrrvglm", "zz"), ...) { if(mode(type) != "character" && mode(type) != "name") type <- as.character(substitute(type)) type <- match.arg(type, c("qrrvglm", "zz"))[1] stop("function nvar.qrrvglm() has not been written yet") if (type == "qrrvglm") { object@misc$p } else { object@misc$ncol_X_vlm } } nvar.cao <- function(object, type = c("cao", "zz"), ...) 
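# (editorial) another unfinished stub; note too that match.arg() below
# checks c("rrvglm", "zz") rather than c("cao", "zz"), which looks like a
# copy-and-paste slip from nvar.rrvglm()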
{ if(mode(type) != "character" && mode(type) != "name") type <- as.character(substitute(type)) type <- match.arg(type, c("rrvglm", "zz"))[1] stop("function nvar.cao() has not been written yet") if (type == "cao") { object@misc$p } else { object@misc$ncol_X_vlm } } nvar.rcim <- function(object, type = c("rcim", "zz"), ...) { if(mode(type) != "character" && mode(type) != "name") type <- as.character(substitute(type)) type <- match.arg(type, c("rcim", "zz"))[1] stop("function nvar.rcim() has not been written yet") if (type == "rcim") { object@misc$p } else { object@misc$ncol_X_vlm } } if (!isGeneric("nvar")) setGeneric("nvar", function(object, ...) standardGeneric("nvar"), package = "VGAM") setMethod("nvar", "vlm", function(object, ...) nvar.vlm(object, ...)) setMethod("nvar", "vgam", function(object, ...) nvar.vgam(object, ...)) setMethod("nvar", "rrvglm", function(object, ...) nvar.rrvglm(object, ...)) setMethod("nvar", "qrrvglm", function(object, ...) nvar.qrrvglm(object, ...)) setMethod("nvar", "cao", function(object, ...) nvar.cao(object, ...)) setMethod("nvar", "rcim", function(object, ...) nvar.rcim(object, ...)) VGAM/R/mux.q0000644000176000001440000002456512136651110012240 0ustar ripleyusers# These functions are # Copyright (C) 1998-2013 T.W. Yee, University of Auckland. # All rights reserved. mux34 <- function(xmat, cc, symmetric = FALSE) { if (!is.matrix(xmat)) xmat <- as.matrix(xmat) d <- dim(xmat) nnn <- d[1] RRR <- d[2] if (length(cc) == 1) cc <- matrix(cc, 1, 1) if (!is.matrix(cc)) stop("'cc' is not a matrix") c(dotC(name = "VGAM_C_mux34", as.double(xmat), as.double(cc), as.integer(nnn), as.integer(RRR), as.integer(symmetric), ans = as.double(rep(0.0, nnn)), NAOK = TRUE)$ans) } if(FALSE) mux34 <- function(xmat, cc, symmetric = FALSE) { if (!is.matrix(xmat)) xmat <- as.matrix(xmat) d <- dim(xmat) n <- d[1] R <- d[2] if (length(cc) == 1) cc = matrix(cc, 1, 1) if (!is.matrix(cc)) stop("'cc' is not a matrix") c(dotFortran(name = "vgamf90mux34", as.double(xmat), as.double(cc), as.integer(n), as.integer(R), as.integer(symmetric), ans = as.double(rep(0.0, n)), NAOK = TRUE)$ans) } mux2 <- function(cc, xmat) { if (!is.matrix(xmat)) xmat <- as.matrix(xmat) d <- dim(xmat) n <- d[1] p <- d[2] if (is.matrix(cc)) cc <- array(cc, c(dim(cc), n)) d <- dim(cc) M <- d[1] if (d[2] != p || d[3] != n) stop("dimension size inconformable") ans <- rep(as.numeric(NA), n*M) fred <- dotC(name = "mux2", as.double(cc), as.double(t(xmat)), ans = as.double(ans), as.integer(p), as.integer(n), as.integer(M), NAOK = TRUE) matrix(fred$ans, n, M, byrow = TRUE) } mux22 <- function(cc, xmat, M, upper = FALSE, as.matrix = FALSE) { n <- ncol(cc) index <- iam(NA, NA, M, both = TRUE, diag = TRUE) dimm.value <- nrow(cc) # Usually M or M(M+1)/2 ans <- rep(as.numeric(NA), n*M) fred <- dotC(name = "mux22", as.double(cc), as.double(t(xmat)), ans = as.double(ans), as.integer(dimm.value), as.integer(index$row), as.integer(index$col), as.integer(n), as.integer(M), wk = double(M*M), as.integer(as.numeric(upper)), NAOK = TRUE) if (!as.matrix) fred$ans else { dim(fred$ans) <- c(M, n) t(fred$ans) } } mux5 <- function(cc, x, M, matrix.arg = FALSE) { dimx <- dim(x) dimcc <- dim(cc) r <- dimx[2] if (matrix.arg) { n <- dimcc[1] neltscci <- ncol(cc) cc <- t(cc) } else { n <- dimcc[3] if (dimcc[1] != dimcc[2] || dimx[1] != dimcc[1] || (length(dimx) == 3 && dimx[3] != dimcc[3])) stop('input nonconformable') neltscci <- M*(M+1)/2 } if (is.matrix(x)) x <- array(x,c(M, r, n)) index.M <- iam(NA, NA, M, both = TRUE, diag = TRUE) index.r <- 
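    # iam(..., both = TRUE) supplies the row and column index vectors that
    # map the packed symmetric storage used by the compiled code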
iam(NA, NA, r, both = TRUE, diag = TRUE) size <- if (matrix.arg) dimm(r)*n else r*r*n fred <- dotC(name = "mux5", as.double(cc), as.double(x), ans = double(size), as.integer(M), as.integer(n), as.integer(r), as.integer(neltscci), as.integer(dimm(r)), as.integer(as.numeric(matrix.arg)), double(M*M), double(r*r), as.integer(index.M$row), as.integer(index.M$col), as.integer(index.r$row), as.integer(index.r$col), ok3 = as.integer(1), NAOK = TRUE) if (fred$ok3 == 0) stop("can only handle matrix.arg == 1") if (matrix.arg) { ans <- fred$ans dim(ans) <- c(dimm(r), n) t(ans) } else { array(fred$ans, c(r, r, n)) } } mux55 <- function(evects, evals, M) { d <- dim(evects) n <- ncol(evals) if (d[1] != M || d[2] != M || d[3] != n || nrow(evals)!= M || ncol(evals) != n) stop("input nonconformable") MM12 <- M*(M+1)/2 # The answer is a full-matrix index <- iam(NA, NA, M, both = TRUE, diag = TRUE) fred <- dotC(name = "mux55", as.double(evects), as.double(evals), ans = double(MM12 * n), double(M*M), double(M*M), as.integer(index$row), as.integer(index$col), as.integer(M), as.integer(n), NAOK = TRUE) dim(fred$ans) <- c(MM12, n) fred$ans } mux7 <- function(cc, x) { dimx <- dim(x) dimcc <- dim(cc) if (dimx[1]!= dimcc[2] || (length(dimx) == 3 && dimx[3]!= dimcc[3])) stop('input nonconformable') M <- dimcc[1] qq <- dimcc[2] n <- dimcc[3] r <- dimx[2] if (is.matrix(x)) x <- array(x,c(qq,r, n)) ans <- array(NA, c(M, r, n)) fred <- dotC(name = "mux7", as.double(cc), as.double(x), ans = as.double(ans), as.integer(M), as.integer(qq), as.integer(n), as.integer(r), NAOK = TRUE) array(fred$ans, c(M, r, n)) } mux9 <- function(cc, xmat) { if (is.vector(xmat)) xmat <- cbind(xmat) dimxmat <- dim(xmat) dimcc <- dim(cc) if (dimcc[1] != dimcc[2] || dimxmat[1] != dimcc[3] || dimxmat[2] != dimcc[1]) stop('input nonconformable') M <- dimcc[1] n <- dimcc[3] ans <- matrix(as.numeric(NA), n, M) fred <- dotC(name = "mux9", as.double(cc), as.double(xmat), ans = as.double(ans), as.integer(M), as.integer(n), NAOK = TRUE) matrix(fred$ans, n, M) } mux11 <- function(cc, xmat) { dcc <- dim(cc) d <- dim(xmat) M <- dcc[1] R <- d[2] n <- dcc[3] if (M != dcc[2] || d[1] != n*M) stop("input inconformable") Xmat <- array(c(t(xmat)), c(R, M, n)) Xmat <- aperm(Xmat, c(2, 1, 3)) # Xmat becomes M x R x n mat <- mux7(cc, Xmat) # mat is M x R x n mat <- aperm(mat, c(2, 1, 3)) # mat becomes R x M x n mat <- matrix(c(mat), n*M, R, byrow = TRUE) mat } mux111 <- function(cc, xmat, M, upper = TRUE) { R <- ncol(xmat) n <- nrow(xmat) / M index <- iam(NA, NA, M, both = TRUE, diag = TRUE) dimm.value <- nrow(cc) # M or M(M+1)/2 fred <- dotC(name = "mux111", as.double(cc), b = as.double(t(xmat)), as.integer(M), as.integer(R), as.integer(n), wk = double(M * M), wk2 = double(M * R), as.integer(index$row), as.integer(index$col), as.integer(dimm.value), as.integer(as.numeric(upper)), NAOK = TRUE) ans <- fred$b dim(ans) <- c(R, nrow(xmat)) d <- dimnames(xmat) dimnames(ans) <- list(d[[2]], d[[1]]) t(ans) } mux15 <- function(cc, xmat) { n <- nrow(xmat) M <- ncol(xmat) if (nrow(cc) != M || ncol(cc) != M) stop("input inconformable") if (max(abs(t(cc)-cc))>0.000001) stop("argument 'cc' is not symmetric") ans <- rep(as.numeric(NA), n*M*M) fred <- dotC(name = "mux15", as.double(cc), as.double(t(xmat)), ans = as.double(ans), as.integer(M), as.integer(n), NAOK = TRUE) array(fred$ans, c(M, M, n)) } vforsub <- function(cc, b, M, n) { index <- iam(NA, NA, M, both = TRUE, diag = TRUE) dimm.value <- nrow(cc) # M or M(M+1)/2 fred <- dotC(name = "vforsub", as.double(cc), b = 
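    # b is transposed here so the compiled routine receives the responses
    # column-major, one M-vector per observation (a hedged reading; the
    # C source is authoritative)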
as.double(t(b)), as.integer(M), as.integer(n), wk = double(M*M), as.integer(index$row), as.integer(index$col), as.integer(dimm.value), NAOK = TRUE) dim(fred$b) <- c(M, n) fred$b } vbacksub <- function(cc, b, M, n) { index <- iam(NA, NA, M, both = TRUE, diag = TRUE) dimm.value <- nrow(cc) if (nrow(b) != M || ncol(b) != n) stop("dimension size inconformable") fred <- dotC(name = "vbacksub", as.double(cc), b = as.double(b), as.integer(M), as.integer(n), wk = double(M*M), as.integer(index$row), as.integer(index$col), as.integer(dimm.value), NAOK = TRUE) if (M == 1) { fred$b } else { dim(fred$b) <- c(M, n) t(fred$b) } } vchol <- function(cc, M, n, silent = FALSE, callno = 0) { index <- iam(NA, NA, M = M, both = TRUE, diag = TRUE) cc <- t(cc) MM <- nrow(cc) # cc is big enough to hold its Cholesky decom. fred <- dotC(name = "vchol", cc = as.double(cc), as.integer(M), as.integer(n), ok = integer(n), wk = double(M*M), as.integer(index$row), as.integer(index$col), as.integer(MM), NAOK = TRUE) failed <- (fred$ok != 1) if ((correction.needed <- any(failed))) { index <- (1:n)[failed] if (!silent) { if (length(index) < 11) warning("weight matri", ifelse(length(index) > 1, "ces ","x "), paste(index, collapse = ", "), " not positive-definite") } } ans <- fred$cc dim(ans) <- c(MM, n) if (correction.needed) { temp <- cc[, index, drop = FALSE] tmp777 <- vchol.greenstadt(temp, M = M, silent = silent, callno = callno + 1) if (length(index) == n) { ans <- tmp777[1:nrow(ans),,drop = FALSE] } else { ans[,index] <- tmp777 # restored 16/10/03 } } dim(ans) <- c(MM, n) # Make sure ans } vchol.greenstadt <- function(cc, M, silent = FALSE, callno = 0) { MM <- dim(cc)[1] n <- dim(cc)[2] if (!silent) cat(paste("Applying Greenstadt modification to ", n, " matri", ifelse(n > 1, "ces", "x"), "\n", sep = "")) temp <- veigen(cc, M = M) # , mat = TRUE) dim(temp$vectors) <- c(M, M, n) # Make sure (when M = 1) for mux5 dim(temp$values) <- c(M, n) # Make sure (when M = 1) for mux5 is.neg <- (temp$values < .Machine$double.eps) is.pos <- (temp$values > .Machine$double.eps) zilch <- (!is.pos & !is.neg) temp$values <- abs(temp$values) temp.small.value <- quantile(temp$values[!zilch], prob = 0.15) if (callno > 2) { temp.small.value <- abs(temp.small.value) * 1.50^callno small.value <- temp.small.value temp$values[zilch] <- small.value } if (callno > 9) { warning("taking drastic action; setting all wz to ", "scaled versions of the order-M identity matrix") cc2mean <- abs(colMeans(cc[1:M, , drop = FALSE])) temp$values <- matrix(cc2mean, M, n, byrow = TRUE) temp$vectors <- array(c(diag(M)), c(M, M, n)) } temp3 <- mux55(temp$vectors, temp$values, M = M) #, matrix.arg = TRUE) ans <- vchol(t(temp3), M = M, n = n, silent = silent, callno = callno + 1) #, matrix.arg = TRUE) if (nrow(ans) == MM) ans else ans[1:MM, , drop = FALSE] } if (FALSE) myf <- function(x) { dotFortran("VGAM_F90_fill9", x = as.double(x), lenx = as.integer(length(x)), answer = as.double(x), NAOK = TRUE)$answer } VGAM/R/model.matrix.vglm.q0000644000176000001440000005271612136651110014775 0ustar ripleyusers# These functions are # Copyright (C) 1998-2013 T.W. Yee, University of Auckland. # All rights reserved. attrassigndefault <- function(mmat, tt) { if (!inherits(tt, "terms")) stop("need terms object") aa <- attr(mmat, "assign") if (is.null(aa)) stop("argument is not really a model matrix") ll <- attr(tt, "term.labels") if (attr(tt, "intercept") > 0) ll <- c("(Intercept)", ll) aaa <- factor(aa, labels = ll) split(order(aa), aaa) } attrassignlm <- function(object, ...) 
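# Illustrative note (commented out).  vchol(), vforsub() and vbacksub()
# above work together on the per-observation systems U_i' U_i = W_i:
# vchol() returns the packed Cholesky factors U_i, and the two
# substitution routines then solve the triangular systems against them.
# vchol.greenstadt() handles non-positive-definite W_i by replacing the
# eigenvalues with their absolute values before re-attempting vchol().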
attrassigndefault(model.matrix(object), object@terms) vlabel <- function(xn, ncolBlist, M, separator = ":") { if (length(xn) != length(ncolBlist)) stop("length of first two arguments not equal") n1 <- rep(xn, ncolBlist) if (M == 1) return(n1) n2 <- as.list(ncolBlist) n2 <- lapply(n2, seq) n2 <- unlist(n2) n2 <- as.character(n2) n2 <- paste(separator, n2, sep = "") n3 <- rep(ncolBlist, ncolBlist) n2[n3 == 1] <- "" n1n2 <- paste(n1, n2, sep = "") n1n2 } vlm2lm.model.matrix <- function(x_vlm, Blist = NULL, which.lp = 1, M = NULL) { if (is.numeric(M)) { if (M != nrow(Blist[[1]])) stop("argument 'M' does not match argument 'Blist'") } else { M <- nrow(Blist[[1]]) } Hmatrices <- matrix(c(unlist(Blist)), nrow = M) if (ncol(Hmatrices) != ncol(x_vlm)) stop("ncol(Hmatrices) != ncol(x_vlm)") n_lm <- nrow(x_vlm) / M if (round(n_lm) != n_lm) stop("'n_lm' does not seem to be an integer") lapred.index <- which.lp vecTF <- Hmatrices[lapred.index, ] != 0 X_lm_jay <- x_vlm[(0:(n_lm - 1)) * M + lapred.index, vecTF, drop = FALSE] X_lm_jay } lm2vlm.model.matrix <- function(x, Blist = NULL, assign.attributes = TRUE, M = NULL, xij = NULL, Xm2 = NULL) { if (length(Blist) != ncol(x)) stop("length(Blist) != ncol(x)") if (length(xij)) { if (inherits(xij, "formula")) xij <- list(xij) if (!is.list(xij)) stop("'xij' is not a list of formulae") } if (!is.numeric(M)) M <- nrow(Blist[[1]]) nrow_X_lm <- nrow(x) if (all(trivial.constraints(Blist) == 1)) { X_vlm <- if (M > 1) kronecker(x, diag(M)) else x ncolBlist <- rep(M, ncol(x)) } else { allB <- matrix(unlist(Blist), nrow = M) ncolBlist <- unlist(lapply(Blist, ncol)) Rsum <- sum(ncolBlist) X1 <- rep(c(t(x)), rep(ncolBlist, nrow_X_lm)) dim(X1) <- c(Rsum, nrow_X_lm) X_vlm <- kronecker(t(X1), matrix(1, M, 1)) * kronecker(matrix(1, nrow_X_lm, 1), allB) rm(X1) } dn <- labels(x) yn <- dn[[1]] xn <- dn[[2]] dimnames(X_vlm) <- list(vlabel(yn, rep(M, nrow_X_lm), M), vlabel(xn, ncolBlist, M)) if (assign.attributes) { attr(X_vlm, "contrasts") <- attr(x, "contrasts") attr(X_vlm, "factors") <- attr(x, "factors") attr(X_vlm, "formula") <- attr(x, "formula") attr(X_vlm, "class") <- attr(x, "class") attr(X_vlm, "order") <- attr(x, "order") attr(X_vlm, "term.labels") <- attr(x, "term.labels") nasgn <- oasgn <- attr(x, "assign") lowind <- 0 for(ii in 1:length(oasgn)) { mylen <- length(oasgn[[ii]]) * ncolBlist[oasgn[[ii]][1]] nasgn[[ii]] <- (lowind+1):(lowind+mylen) lowind <- lowind + mylen } # End of ii if (lowind != ncol(X_vlm)) stop("something gone wrong") attr(X_vlm, "assign") <- nasgn fred <- unlist(lapply(nasgn, length)) / unlist(lapply(oasgn, length)) vasgn <- vector("list", sum(fred)) kk <- 0 for(ii in 1:length(oasgn)) { temp <- matrix(nasgn[[ii]], ncol = length(oasgn[[ii]])) for(jloc in 1:nrow(temp)) { kk <- kk + 1 vasgn[[kk]] <- temp[jloc,] } } names(vasgn) <- vlabel(names(oasgn), fred, M) attr(X_vlm, "vassign") <- vasgn attr(X_vlm, "constraints") <- Blist } # End of if (assign.attributes) if (!length(xij)) return(X_vlm) at.x <- attr(x, "assign") at.vlmx <- attr(X_vlm, "assign") at.Xm2 <- attr(Xm2, "assign") for(ii in 1:length(xij)) { form.xij <- xij[[ii]] if (length(form.xij) != 3) stop("xij[[", ii, "]] is not a formula with a response") tform.xij <- terms(form.xij) aterm.form <- attr(tform.xij, "term.labels") # Does not include response if (length(aterm.form) != M) stop("xij[[", ii, "]] does not contain ", M, " terms") name.term.y <- as.character(form.xij)[2] cols.X_vlm <- at.vlmx[[name.term.y]] # May be > 1 in length. 
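# Illustrative sketch (commented out).  With wholly trivial constraints
# the big VLM model matrix reduces to the Kronecker product used above:
#   X.lm  <- cbind("(Intercept)" = 1, x2 = rnorm(5))  # hypothetical LM matrix
#   M     <- 2                                        # two linear predictors
#   X.vlm <- kronecker(X.lm, diag(M))   # 10 x 4, rows interleaved by eta_j
# Non-trivial constraint matrices H_k replace diag(M) column by column.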
x.name.term.2 <- aterm.form[1] # Choose the first one One.such.term <- at.Xm2[[x.name.term.2]] for(bbb in 1:length(One.such.term)) { use.cols.Xm2 <- NULL for(sss in 1:M) { x.name.term.2 <- aterm.form[sss] one.such.term <- at.Xm2[[x.name.term.2]] use.cols.Xm2 <- c(use.cols.Xm2, one.such.term[bbb]) } # End of sss allXk <- Xm2[,use.cols.Xm2,drop=FALSE] cmat.no <- (at.x[[name.term.y]])[1] # First one will do (all the same). cmat <- Blist[[cmat.no]] Rsum.k <- ncol(cmat) tmp44 <- kronecker(matrix(1, nrow_X_lm, 1), t(cmat)) * kronecker(allXk, matrix(1,ncol(cmat), 1)) # n*Rsum.k x M tmp44 <- array(t(tmp44), c(M, Rsum.k, nrow_X_lm)) tmp44 <- aperm(tmp44, c(1,3,2)) # c(M, n, Rsum.k) rep.index <- cols.X_vlm[((bbb-1)*Rsum.k+1):(bbb*Rsum.k)] X_vlm[,rep.index] <- c(tmp44) } # End of bbb } # End of for(ii in 1:length(xij)) if (assign.attributes) { attr(X_vlm, "vassign") <- vasgn attr(X_vlm, "assign") <- nasgn attr(X_vlm, "xij") <- xij } X_vlm } model.matrixvlm <- function(object, type = c("vlm", "lm", "lm2", "bothlmlm2"), lapred.index = NULL, ...) { if (mode(type) != "character" && mode(type) != "name") type <- as.character(substitute(type)) type <- match.arg(type, c("vlm", "lm", "lm2", "bothlmlm2"))[1] if (length(lapred.index) && type != "lm") stop("Must set 'type = \"lm\"' when 'lapred.index' is ", "assigned a value") if (length(lapred.index) && length(object@control$xij)) stop("Currently cannot handle 'xij' models when 'lapred.index' is ", "assigned a value") x <- slot(object, "x") Xm2 <- if (any(slotNames(object) == "Xm2")) slot(object, "Xm2") else numeric(0) if (!length(x)) { data <- model.frame(object, xlev = object@xlevels, ...) kill.con <- if (length(object@contrasts)) object@contrasts else NULL x <- vmodel.matrix.default(object, data = data, contrasts.arg = kill.con) tt <- terms(object) attr(x, "assign") <- attrassigndefault(x, tt) } if ((type == "lm2" || type == "bothlmlm2") && !length(Xm2)) { object.copy2 <- object data <- model.frame(object.copy2, xlev = object.copy2@xlevels, ...) kill.con <- if (length(object.copy2@contrasts)) object.copy2@contrasts else NULL Xm2 <- vmodel.matrix.default(object.copy2, data = data, contrasts.arg = kill.con) ttXm2 <- terms(object.copy2@misc$form2) attr(Xm2, "assign") <- attrassigndefault(Xm2, ttXm2) } if (type == "lm" && is.null(lapred.index)) { return(x) } else if (type == "lm2") { return(Xm2) } else if (type == "bothlmlm2") { return(list(X = x, Xm2 = Xm2)) } M <- object@misc$M Blist <- object@constraints # == constraints(object, type = "lm") X_vlm <- lm2vlm.model.matrix(x = x, Blist = Blist, xij = object@control$xij, Xm2 = Xm2) if (type == "vlm") { return(X_vlm) } else if (type == "lm" && length(lapred.index)) { if (!is.Numeric(lapred.index, integer.valued = TRUE, positive = TRUE, allowable.length = 1)) stop("bad input for argument 'lapred.index'") if (!length(intersect(lapred.index, 1:M))) stop("argument 'lapred.index' should have ", "a single value from the set 1:", M) Hlist <- Blist n_lm <- nobs(object) # Number of rows of the LM matrix M <- object@misc$M # Number of linear/additive predictors Hmatrices <- matrix(c(unlist(Hlist)), nrow = M) jay <- lapred.index index0 <- Hmatrices[jay, ] != 0 X_lm_jay <- X_vlm[(0:(n_lm - 1)) * M + jay, index0, drop = FALSE] X_lm_jay } else { stop("am confused. Do not know what to return") } } setMethod("model.matrix", "vlm", function(object, ...) model.matrixvlm(object, ...)) model.framevlm <- function(object, setupsmart = TRUE, wrapupsmart = TRUE, ...) { dots <- list(...) 
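# Illustrative usage sketch (commented out; 'fit' denotes a hypothetical
# vglm() object):
#   model.matrix(fit, type = "lm")     # the ordinary n x p LM matrix
#   model.matrix(fit, type = "vlm")    # the (n*M) x ncol_X_vlm big matrix
#   model.matrix(fit, type = "lm", lapred.index = 1)  # columns acting on eta_1
# The "lm2" and "bothlmlm2" types additionally return the form2 matrix Xm2.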
nargs <- dots[match(c("data", "na.action", "subset"), names(dots), 0)] if (length(nargs) || !length(object@model)) { fcall <- object@call fcall$method <- "model.frame" fcall[[1]] <- as.name("vlm") fcall$smart <- FALSE if (setupsmart && length(object@smart.prediction)) { setup.smart("read", smart.prediction=object@smart.prediction) } fcall[names(nargs)] <- nargs env <- environment(object@terms$terms) # @terms or @terms$terms ?? if (is.null(env)) env <- parent.frame() ans <- eval(fcall, env, parent.frame()) if (wrapupsmart && length(object@smart.prediction)) { wrapup.smart() } ans } else object@model } if (!isGeneric("model.frame")) setGeneric("model.frame", function(formula, ...) standardGeneric("model.frame")) setMethod("model.frame", "vlm", function(formula, ...) model.framevlm(object=formula, ...)) vmodel.matrix.default <- function(object, data = environment(object), contrasts.arg = NULL, xlev = NULL, ...) { t <- if (missing(data)) terms(object) else terms(object, data = data) if (is.null(attr(data, "terms"))) data <- model.frame(object, data, xlev = xlev) else { reorder <- match(sapply(attr(t, "variables"), deparse, width.cutoff = 500)[-1], names(data)) if (any(is.na(reorder))) stop("model frame and formula mismatch in model.matrix()") if (!identical(reorder, seq_len(ncol(data)))) data <- data[, reorder, drop = FALSE] } int <- attr(t, "response") if (length(data)) { contr.funs <- as.character(getOption("contrasts")) namD <- names(data) for (i in namD) if (is.character(data[[i]])) { data[[i]] <- factor(data[[i]]) warning(gettextf("variable '%s' converted to a factor", i), domain = NA) } isF <- sapply(data, function(x) is.factor(x) || is.logical(x)) isF[int] <- FALSE isOF <- sapply(data, is.ordered) for (nn in namD[isF]) if (is.null(attr(data[[nn]], "contrasts"))) contrasts(data[[nn]]) <- contr.funs[1 + isOF[nn]] if (!is.null(contrasts.arg) && is.list(contrasts.arg)) { if (is.null(namC <- names(contrasts.arg))) stop("invalid 'contrasts.arg' argument") for (nn in namC) { if (is.na(ni <- match(nn, namD))) warning(gettextf( "variable '%s' is absent, its contrast will be ignored", nn), domain = NA) else { ca <- contrasts.arg[[nn]] if (is.matrix(ca)) contrasts(data[[ni]], ncol(ca)) <- ca else contrasts(data[[ni]]) <- contrasts.arg[[nn]] } } } } else { isF <- FALSE data <- list(x = rep(0, nrow(data))) } ans <- (model.matrix(t, data)) cons <- if (any(isF)) lapply(data[isF], function(x) attr(x, "contrasts")) else NULL attr(ans, "contrasts") <- cons ans } depvar.vlm <- function(object, ...) { object@y } if (!isGeneric("depvar")) setGeneric("depvar", function(object, ...) standardGeneric("depvar"), package = "VGAM") setMethod("depvar", "vlm", function(object, ...) depvar.vlm(object, ...)) setMethod("depvar", "rrvglm", function(object, ...) depvar.vlm(object, ...)) setMethod("depvar", "qrrvglm", function(object, ...) depvar.vlm(object, ...)) setMethod("depvar", "cao", function(object, ...) depvar.vlm(object, ...)) setMethod("depvar", "rcim", function(object, ...) depvar.vlm(object, ...)) npred.vlm <- function(object, ...) { if (length(object@misc$M)) object@misc$M else if (ncol(as.matrix(predict(object))) > 0) ncol(as.matrix(predict(object))) else stop("cannot seem to obtain 'M'") } if (!isGeneric("npred")) setGeneric("npred", function(object, ...) standardGeneric("npred"), package = "VGAM") setMethod("npred", "vlm", function(object, ...) npred.vlm(object, ...)) setMethod("npred", "rrvglm", function(object, ...) npred.vlm(object, ...)) setMethod("npred", "qrrvglm", function(object, ...) 
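# Illustrative usage sketch (commented out; 'fit' hypothetical):
#   depvar(fit)  # the response matrix, i.e. fit@y
#   npred(fit)   # M, the number of linear/additive predictors
# For instance, a bivariate odds-ratio model has M = 3, so npred()
# would return 3 there.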
npred.vlm(object, ...))
setMethod("npred", "cao", function(object, ...) npred.vlm(object, ...))
setMethod("npred", "rcim", function(object, ...) npred.vlm(object, ...))

hatvaluesvlm <- function(model,
                         type = c("diagonal", "matrix", "centralBlocks"),
                         ...) {
  if(!missing(type))
    type <- as.character(substitute(type))
  type.arg <- match.arg(type, c("diagonal", "matrix", "centralBlocks"))[1]

  qrSlot <- model@qr
  if (!is.list(qrSlot) && class(qrSlot) != "qr")
    stop("slot 'qr' should be a list or a 'qr' object")
  M <- npred(model)
  nn <- nobs(model, type = "lm")

  if (is.empty.list(qrSlot)) {
    wzedd <- weights(model, type = "working")
    UU <- vchol(wzedd, M = M, n = nn, silent = TRUE)  # Few rows, many cols
    X.vlm <- model.matrix(model, type = "vlm")
    UU.X.vlm <- mux111(cc = UU, xmat = X.vlm, M = M)
    qrSlot <- qr(UU.X.vlm)
  } else {
    X.vlm <- NULL
    class(qrSlot) <- "qr"  # S3 class
  }
  Q.S3 <- qr.Q(qrSlot)

  if (type.arg == "diagonal") {
    Diag.Hat <- rowSums(Q.S3^2)
    Diag.Elts <- matrix(Diag.Hat, nn, M, byrow = TRUE)
    if (length(model@misc$predictors.names) == M)
      colnames(Diag.Elts) <- model@misc$predictors.names
    if (length(rownames(model.matrix(model, type = "lm"))))
      rownames(Diag.Elts) <- rownames(model.matrix(model, type = "lm"))
    attr(Diag.Elts, "predictors.names") <- model@misc$predictors.names
    attr(Diag.Elts, "ncol_X_vlm") <- model@misc$ncol_X_vlm
    Diag.Elts
  } else if (type.arg == "matrix") {
    all.mat <- Q.S3 %*% t(Q.S3)
    if (!length(X.vlm))
      X.vlm <- model.matrix(model, type = "vlm")
    dimnames(all.mat) <- list(rownames(X.vlm), rownames(X.vlm))
    attr(all.mat, "M") <- M
    attr(all.mat, "predictors.names") <- model@misc$predictors.names
    attr(all.mat, "ncol_X_vlm") <- model@misc$ncol_X_vlm
    all.mat
  } else {
    ind1 <- iam(NA, NA, M = M, both = TRUE, diag = TRUE)
    MM12 <- M * (M + 1) / 2
    all.rows.index <- rep((0:(nn-1)) * M, rep(MM12, nn)) + ind1$row.index
    all.cols.index <- rep((0:(nn-1)) * M, rep(MM12, nn)) + ind1$col.index
    H_ss <- rowSums(Q.S3[all.rows.index, ] * Q.S3[all.cols.index, ])
    H_ss <- matrix(H_ss, nn, MM12, byrow = TRUE)
    H_ss
  }
}

if (!isGeneric("hatvalues"))
  setGeneric("hatvalues", function(model, ...)
    standardGeneric("hatvalues"), package = "VGAM")

setMethod("hatvalues", "vlm", function(model, ...) hatvaluesvlm(model, ...))
setMethod("hatvalues", "vglm", function(model, ...) hatvaluesvlm(model, ...))
setMethod("hatvalues", "rrvglm", function(model, ...) hatvaluesvlm(model, ...))
setMethod("hatvalues", "qrrvglm", function(model, ...) hatvaluesvlm(model, ...))
setMethod("hatvalues", "cao", function(model, ...) hatvaluesvlm(model, ...))
setMethod("hatvalues", "rcim", function(model, ...) hatvaluesvlm(model, ...))

hatplot.vlm <- function(model, multiplier = c(2, 3),
                        lty = "dashed",
                        xlab = "Observation", ylab = "Hat values",
                        ylim = NULL, ...) {
  if (is(model, "vlm")) {
    hatval <- hatvalues(model, type = "diagonal")
  } else {
    hatval <- model
  }
  if (!is.matrix(hatval))
    stop("argument 'model' seems to be neither a vglm() object nor a matrix")
  ncol_X_vlm <- attr(hatval, "ncol_X_vlm")
  M <- attr(hatval, "M")
  predictors.names <- attr(hatval, "predictors.names")
  if (!length(predictors.names)) {
    predictors.names <- paste("Linear/additive predictor", 1:M)
  }
  if (length(M)) {
    N <- nrow(hatval) / M
    hatval <- matrix(hatval, N, M, byrow = TRUE)
  } else {
    M <- ncol(hatval)
    N <- nrow(hatval)
  }
  if (is.null(ylim))
    ylim <- c(0, max(hatval))
  for (jay in 1:M) {
    plot(hatval[, jay], type = "n", main = predictors.names[jay],
         ylim = ylim, xlab = xlab, ylab = ylab, ...)
    points(1:N, hatval[, jay], ...)
    abline(h = multiplier * ncol_X_vlm / (N * M), lty = lty, ...)
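# Note (commented out).  The reference lines drawn by abline() above sit
# at multiplier * ncol_X_vlm / (N * M): the hat values of a VLM sum to
# ncol_X_vlm (the rank of X_vlm) and there are N * M of them, so this is
# the usual "2 or 3 times the average leverage" rule of thumb.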
  }
}

if (!isGeneric("hatplot"))
  setGeneric("hatplot", function(model, ...) standardGeneric("hatplot"),
             package = "VGAM")

setMethod("hatplot", "matrix", function(model, ...) hatplot.vlm(model, ...))
setMethod("hatplot", "vlm", function(model, ...) hatplot.vlm(model, ...))
setMethod("hatplot", "vglm", function(model, ...) hatplot.vlm(model, ...))
setMethod("hatplot", "rrvglm", function(model, ...) hatplot.vlm(model, ...))
setMethod("hatplot", "qrrvglm", function(model, ...) hatplot.vlm(model, ...))
setMethod("hatplot", "cao", function(model, ...) hatplot.vlm(model, ...))
setMethod("hatplot", "rcim", function(model, ...) hatplot.vlm(model, ...))

dfbetavlm <- function(model, maxit.new = 1,
                      trace.new = FALSE,
                      smallno = 1.0e-8, ...) {
  if (!is(model, "vlm"))
    stop("argument 'model' does not seem to be a vglm() object")
  n_lm <- nobs(model, type = "lm")
  X_lm <- model.matrix(model, type = "lm")
  X_vlm <- model.matrix(model, type = "vlm")
  p_vlm <- ncol(X_vlm)  # nvar(model, type = "vlm")
  M <- npred(model)
  wz <- weights(model, type = "work")  # currently unused
  etastart <- predict(model)
  offset <- matrix(model@offset, n_lm, M)
  new.control <- model@control
  pweights <- weights(model, type = "prior")
  orig.w <- if (is.numeric(model@extra$orig.w)) model@extra$orig.w else 1
  y.integer <- if (is.logical(model@extra$y.integer))
    model@extra$y.integer else FALSE

  new.control$trace <- trace.new
  new.control$maxit <- maxit.new
  dfbeta <- matrix(0, n_lm, p_vlm)
  Terms.zz <- NULL

  for (ii in 1:n_lm) {
    if (trace.new) {
      cat("\n", "Observation ", ii, "\n")
      flush.console()
    }
    w.orig <- if (length(orig.w) != n_lm)
      rep(orig.w, length.out = n_lm) else orig.w
    w.orig[ii] <- w.orig[ii] * smallno  # Relative shrinkage, not absolute
    fit <- vglm.fit(x = X_lm,
                    X_vlm_arg = X_vlm,  # Should be more efficient
                    y = if (y.integer)
                      round(depvar(model) * c(pweights) / c(orig.w)) else
                      (depvar(model) * c(pweights) / c(orig.w)),
                    w = w.orig,  # Shrunk to near zero so that obsn ii is effectively 'deleted'.
                    Xm2 = NULL, Ym2 = NULL,
                    etastart = etastart,  # coefstart = NULL,
                    offset = offset, family = model@family,
                    control = new.control,
                    criterion = new.control$criterion,  # "coefficients",
                    qr.arg = FALSE,
                    constraints = constraints(model, type = "term"),
                    extra = model@extra,
                    Terms = Terms.zz,
                    function.name = "vglm")
    dfbeta[ii, ] <- fit$coeff
  }

  dimnames(dfbeta) <- list(rownames(X_lm), names(coef(model)))
  dfbeta
}

if (!isGeneric("dfbeta"))
  setGeneric("dfbeta", function(model, ...) standardGeneric("dfbeta"),
             package = "VGAM")

setMethod("dfbeta", "matrix", function(model, ...) dfbetavlm(model, ...))
setMethod("dfbeta", "vlm", function(model, ...) dfbetavlm(model, ...))
setMethod("dfbeta", "vglm", function(model, ...) dfbetavlm(model, ...))
setMethod("dfbeta", "rrvglm", function(model, ...) dfbetavlm(model, ...))
setMethod("dfbeta", "qrrvglm", function(model, ...) dfbetavlm(model, ...))
setMethod("dfbeta", "cao", function(model, ...) dfbetavlm(model, ...))
setMethod("dfbeta", "rcim", function(model, ...)
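# Illustrative usage sketch (commented out; 'fit' hypothetical):
#   db <- dfbeta(fit, maxit.new = 3)
#   head(db)  # one row per observation, one column per coefficient
# Each row holds the coefficient vector refitted (for a single IRLS
# iteration by default) with that observation's weight shrunk to
# near zero.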
dfbetavlm(model, ...)) hatvaluesbasic <- function(X_vlm, diagWm, M = 1) { if (M > 1) stop("currently argument 'M' must be 1") nn <- nrow(X_vlm) ncol_X_vlm <- ncol(X_vlm) XtW <- t(c(diagWm) * X_vlm) UU <- sqrt(diagWm) # Only for M == 1 UU.X_vlm <- c(UU) * X_vlm # c(UU) okay for M==1 qrSlot <- qr(UU.X_vlm) Rmat <- qr.R(qrSlot) rinv <- diag(ncol_X_vlm) rinv <- backsolve(Rmat, rinv) Diag.Hat <- if (FALSE) { covun <- rinv %*% t(rinv) rhs.mat <- covun %*% XtW colSums(t(X_vlm) * rhs.mat) } else { mymat <- X_vlm %*% rinv rowSums(diagWm * mymat^2) } Diag.Hat } VGAM/R/lrwaldtest.R0000644000176000001440000003602212136651110013544 0ustar ripleyusers# These functions are # Copyright (C) 1998-2013 T.W. Yee, University of Auckland. # All rights reserved. update_default <- function (object, formula., ..., evaluate = TRUE) { if (is.null(call <- getCall(object))) stop("need an object with call component") extras <- match.call(expand.dots = FALSE)$... if (!missing(formula.)) { call$formula <- update_formula(formula(object), formula.) } if (length(extras)) { existing <- !is.na(match(names(extras), names(call))) for (a in names(extras)[existing]) call[[a]] <- extras[[a]] if (any(!existing)) { call <- c(as.list(call), extras[!existing]) call <- as.call(call) } } if (evaluate) eval(call, parent.frame()) else call } update_formula <- function (old, new, ...) { tmp <- (update.formula(as.formula(old), as.formula(new))) out <- formula(terms.formula(tmp, simplify = TRUE)) return(out) } if (FALSE) print_anova <- function (x, digits = max(getOption("digits") - 2, 3), signif.stars = getOption("show.signif.stars"), ...) { x <- x@Body if (!is.null(heading <- attr(x, "heading"))) cat(heading, sep = "\n") nc <- dim(x)[2L] if (is.null(cn <- colnames(x))) stop("'anova' object must have colnames") has.P <- grepl("^(P|Pr)\\(", cn[nc]) zap.i <- 1L:(if (has.P) nc - 1 else nc) i <- which(substr(cn, 2, 7) == " value") i <- c(i, which(!is.na(match(cn, c("F", "Cp", "Chisq"))))) if (length(i)) zap.i <- zap.i[!(zap.i %in% i)] tst.i <- i if (length(i <- grep("Df$", cn))) zap.i <- zap.i[!(zap.i %in% i)] stats::printCoefmat(x, digits = digits, signif.stars = signif.stars, has.Pvalue = has.P, P.values = has.P, cs.ind = NULL, zap.ind = zap.i, tst.ind = tst.i, na.print = "", ...) invisible(x) } setGeneric("lrtest", function(object, ...) standardGeneric("lrtest"), package = "VGAM") setClass("VGAManova", representation( "Body" = "data.frame")) lrtest_vglm <- function(object, ..., name = NULL) { cls <- class(object)[1] nobs <- function(x) x@misc$nrow_X_vlm tlab <- function(x) attr(terms(x), "term.labels") if (is.null(name)) name <- function(x) paste(deparse(formula(x)), collapse = "\n") modelUpdate <- function(fm, update) { if (is.numeric(update)) { if (any(update < 1)) { warning("for numeric model specifications all values ", "have to be >=1") update <- abs(update)[abs(update) > 0] } if (any(update > length(tlab(fm)))) { warning(paste("more terms specified than existent in the model:", paste(as.character(update[update > length(tlab(fm))]), collapse = ", "))) update <- update[update <= length(tlab(fm))] } update <- tlab(fm)[update] } if (is.character(update)) { if (!all(update %in% tlab(fm))) { warning(paste("terms specified that are not in the model:", paste(dQuote(update[!(update %in% tlab(fm))]), collapse = ", "))) update <- update[update %in% tlab(fm)] } if (length(update) < 1) stop("empty model specification") update <- as.formula(paste(". ~ . 
-", paste(update, collapse = " - "))) } if (inherits(update, "formula")) { update <- update_default(fm, update) } if (!inherits(update, cls)) { warning(paste("original model was of class \"", cls, "\", updated model is of class \"", class(update)[1], "\"", sep = "")) } return(update) } objects <- list(object, ...) nmodels <- length(objects) if (nmodels < 2) { objects <- c(objects, . ~ 1) nmodels <- 2 } no.update <- sapply(objects, function(obj) inherits(obj, cls)) for(i in 2:nmodels) { objects[[i]] <- modelUpdate(objects[[i-1]], objects[[i]]) } ns <- sapply(objects, nobs) if (any(ns != ns[1])) { for(i in 2:nmodels) { if (ns[1] != ns[i]) { if (no.update[i]) stop("models were not all fitted to ", "the same size of dataset") else { commonobs <- row.names(model.frame(objects[[i]])) %in% row.names(model.frame(objects[[i-1]])) objects[[i]] <- eval(substitute(update(objects[[i]], subset = commonobs), list(commonobs = commonobs))) if (nobs(objects[[i]]) != ns[1]) stop("models could not be fitted to the same size of dataset") } } } } rval <- matrix(rep(as.numeric(NA), 5 * nmodels), ncol = 5) colnames(rval) <- c("#Df", "LogLik", "Df", "Chisq", "Pr(>Chisq)") rownames(rval) <- 1:nmodels logLlist <- lapply(objects, logLik) dflist <- lapply(objects, df.residual) rval[,1] <- unlist(dflist) rval[,2] <- unlist(logLlist) rval[2:nmodels, 3] <- rval[2:nmodels, 1] - rval[1:(nmodels-1), 1] rval[2:nmodels, 4] <- 2 * abs(rval[2:nmodels, 2] - rval[1:(nmodels-1), 2]) rval[,5] <- pchisq(rval[,4], round(abs(rval[,3])), lower.tail = FALSE) variables <- lapply(objects, name) title <- "Likelihood ratio test\n" topnote <- paste("Model ", format(1:nmodels), ": ", variables, sep = "", collapse = "\n") new("VGAManova", Body = structure(as.data.frame(rval), heading = c(title, topnote))) } setMethod("lrtest", "vglm", function(object, ...) lrtest_vglm(object = object, ...)) setMethod("show", "VGAManova", function(object) stats::print.anova(object@Body)) use.S3.lrtest <- TRUE use.S3.lrtest <- FALSE if (use.S3.lrtest) lrtest <- function(object, ...) { UseMethod("lrtest") } if (use.S3.lrtest) lrtest.formula <- function(object, ..., data = list()) { object <- if (length(data) < 1) eval(call("lm", formula = as.formula(deparse(substitute(object))), environment(object))) else eval(call("lm", formula = as.formula(deparse(substitute(object))), data = as.name(deparse(substitute(data))), environment(data))) lrtest.default(object, ...) } if (use.S3.lrtest) lrtest.default <- function(object, ..., name = NULL) { cls <- class(object)[1] nobs <- function(x) NROW(residuals(x)) tlab <- function(x) attr(terms(x), "term.labels") if (is.null(name)) name <- function(x) paste(deparse(formula(x)), collapse = "\n") modelUpdate <- function(fm, update) { if (is.numeric(update)) { if (any(update < 1)) { warning("for numeric model specifications all values ", "have to be >=1") update <- abs(update)[abs(update) > 0] } if (any(update > length(tlab(fm)))) { warning(paste("more terms specified than existent in the model:", paste(as.character(update[update > length(tlab(fm))]), collapse = ", "))) update <- update[update <= length(tlab(fm))] } update <- tlab(fm)[update] } if (is.character(update)) { if (!all(update %in% tlab(fm))) { warning(paste("terms specified that are not in the model:", paste(dQuote(update[!(update %in% tlab(fm))]), collapse = ", "))) update <- update[update %in% tlab(fm)] } if (length(update) < 1) stop("empty model specification") update <- as.formula(paste(". ~ . 
-", paste(update, collapse = " - "))) } if (inherits(update, "formula")) update <- update(fm, update) if (!inherits(update, cls)) warning(paste("original model was of class \"", cls, "\", updated model is of class \"", class(update)[1], "\"", sep = "")) return(update) } objects <- list(object, ...) nmodels <- length(objects) if (nmodels < 2) { objects <- c(objects, . ~ 1) print("objects 1") print( objects ) nmodels <- 2 } no.update <- sapply(objects, function(obj) inherits(obj, cls)) print("no.update") print( no.update ) for(i in 2:nmodels) objects[[i]] <- modelUpdate(objects[[i-1]], objects[[i]]) print("objects i") print( objects ) ns <- sapply(objects, nobs) if (any(ns != ns[1])) { for(i in 2:nmodels) { if (ns[1] != ns[i]) { if (no.update[i]) stop("models were not all fitted to ", "the same size of dataset") else { commonobs <- row.names(model.frame(objects[[i]])) %in% row.names(model.frame(objects[[i-1]])) print("commonobs") print( commonobs ) objects[[i]] <- eval(substitute(update(objects[[i]], subset = commonobs), list(commonobs = commonobs))) if (nobs(objects[[i]]) != ns[1]) stop("models could not be fitted to the same size of dataset") } } } } rval <- matrix(rep(as.numeric(NA), 5 * nmodels), ncol = 5) colnames(rval) <- c("#Df", "LogLik", "Df", "Chisq", "Pr(>Chisq)") rownames(rval) <- 1:nmodels logL <- lapply(objects, logLik) rval[,1] <- as.numeric(sapply(logL, function(x) attr(x, "df"))) rval[,2] <- sapply(logL, as.numeric) rval[2:nmodels, 3] <- rval[2:nmodels, 1] - rval[1:(nmodels-1), 1] rval[2:nmodels, 4] <- 2 * abs(rval[2:nmodels, 2] - rval[1:(nmodels-1), 2]) rval[,5] <- pchisq(rval[,4], round(abs(rval[,3])), lower.tail = FALSE) variables <- lapply(objects, name) title <- "Likelihood ratio test\n" topnote <- paste("Model ", format(1:nmodels), ": ", variables, sep = "", collapse = "\n") structure(as.data.frame(rval), heading = c(title, topnote), class = c("anova", "data.frame")) } # End of lrtest.default if (FALSE) setGeneric("waldtest", function(object, ...) standardGeneric("waldtest"), package = "VGAM") if (FALSE) waldtest <- function(object, ...) { UseMethod("waldtest") } waldtest_formula <- function(object, ..., data = list()) { stop("cannot find waldtest_lm()") object <- if (length(data) < 1) eval(call("lm", formula = as.formula(deparse(substitute(object))), environment(object))) else eval(call("lm", formula = as.formula(deparse(substitute(object))), data = as.name(deparse(substitute(data))), environment(data))) } waldtest_default <- function(object, ..., vcov = NULL, test = c("Chisq", "F"), name = NULL) { vcov. <- vcov cls <- class(object)[1] nobs <- function(x) NROW(residuals(x)) tlab <- function(x) attr(terms(x), "term.labels") if (is.null(name)) name <- function(x) paste(deparse(formula(x)), collapse = "\n") modelUpdate <- function(fm, update) { if (is.numeric(update)) { if (any(update < 1)) { warning("for numeric model specifications all values ", "have to be >=1") update <- abs(update)[abs(update) > 0] } if (any(update > length(tlab(fm)))) { warning(paste("more terms specified than existent in the model:", paste(as.character(update[update > length(tlab(fm))]), collapse = ", "))) update <- update[update <= length(tlab(fm))] } update <- tlab(fm)[update] } if (is.character(update)) { if (!all(update %in% tlab(fm))) { warning(paste("terms specified that are not in the model:", paste(dQuote(update[!(update %in% tlab(fm))]), collapse = ", "))) update <- update[update %in% tlab(fm)] } if (length(update) < 1) stop("empty model specification") update <- as.formula(paste(". ~ . 
-", paste(update, collapse = " - "))) } if (inherits(update, "formula")) update <- update(fm, update) if (!inherits(update, cls)) stop(paste("original model was of class \"", cls, "\", updated model is of class \"", class(update)[1], "\"", sep = "")) return(update) } modelCompare <- function(fm, fm.up, vfun = NULL) { q <- length(coef(fm)) - length(coef(fm.up)) if (q > 0) { fm0 <- fm.up fm1 <- fm } else { fm0 <- fm fm1 <- fm.up } k <- length(coef(fm1)) n <- nobs(fm1) if (!all(tlab(fm0) %in% tlab(fm1))) stop("models are not nested") ovar <- which(!(names(coef(fm1)) %in% names(coef(fm0)))) vc <- if (is.null(vfun)) vcov(fm1) else if (is.function(vfun)) vfun(fm1) else vfun stat <- t(coef(fm1)[ovar]) %*% solve(vc[ovar,ovar]) %*% coef(fm1)[ovar] return(c(-q, stat)) } objects <- list(object, ...) nmodels <- length(objects) if (nmodels < 2) { objects <- c(objects, . ~ 1) nmodels <- 2 } no.update <- sapply(objects, function(obj) inherits(obj, cls)) for(i in 2:nmodels) objects[[i]] <- modelUpdate(objects[[i-1]], objects[[i]]) responses <- as.character(lapply(objects, function(x) deparse(terms(x)[[2]]))) sameresp <- responses == responses[1] if (!all(sameresp)) { objects <- objects[sameresp] warning("models with response ", deparse(responses[!sameresp]), " removed because response differs from ", "model 1") } ns <- sapply(objects, nobs) if (any(ns != ns[1])) { for(i in 2:nmodels) { if (ns[1] != ns[i]) { if (no.update[i]) stop("models were not all fitted to the ", "same size of dataset") else { commonobs <- row.names(model.frame(objects[[i]])) %in% row.names(model.frame(objects[[i-1]])) objects[[i]] <- eval(substitute(update(objects[[i]], subset = commonobs), list(commonobs = commonobs))) if (nobs(objects[[i]]) != ns[1]) stop("models could not be fitted to the same size of dataset") } } } } if (nmodels > 2 && !is.null(vcov.) && !is.function(vcov.)) stop("to compare more than 2 models `vcov.' needs to be a function") test <- match.arg(test) rval <- matrix(rep(as.numeric(NA), 4 * nmodels), ncol = 4) colnames(rval) <- c("Res.Df", "Df", test, paste("Pr(>", test, ")", sep = "")) rownames(rval) <- 1:nmodels rval[,1] <- as.numeric(sapply(objects, df.residual)) for(i in 2:nmodels) rval[i, 2:3] <- modelCompare(objects[[i-1]], objects[[i]], vfun = vcov.) if (test == "Chisq") { rval[,4] <- pchisq(rval[,3], round(abs(rval[,2])), lower.tail = FALSE) } else { df <- rval[,1] for(i in 2:nmodels) if (rval[i,2] < 0) df[i] <- rval[i-1,1] rval[,3] <- rval[,3]/abs(rval[,2]) rval[,4] <- pf(rval[,3], abs(rval[,2]), df, lower.tail = FALSE) } variables <- lapply(objects, name) title <- "Wald test\n" topnote <- paste("Model ", format(1:nmodels), ": ", variables, sep = "", collapse = "\n") new("VGAManova", Body = structure(as.data.frame(rval), heading = c(title, topnote))) } if (FALSE) setMethod("waldtest", "vglm", function(object, ...) waldtest_vglm(object = object, ...)) VGAM/R/logLik.vlm.q0000644000176000001440000000372312136651110013436 0ustar ripleyusers# These functions are # Copyright (C) 1998-2013 T.W. Yee, University of Auckland. # All rights reserved. logLik.vlm <- function(object, ...) object@criterion$loglikelihood if (!isGeneric("logLik")) setGeneric("logLik", function(object, ...) standardGeneric("logLik"), package = "VGAM") setMethod("logLik", "vlm", function(object, ...) logLik.vlm(object, ...)) setMethod("logLik", "vglm", function(object, ...) logLik.vlm(object, ...)) setMethod("logLik", "vgam", function(object, ...) 
logLik.vlm(object, ...)) constraints.vlm <- function(object, type = c("lm", "term"), all = TRUE, which, matrix.out = FALSE, ...) { type <- match.arg(type, c("lm", "term"))[1] Hlist <- ans <- slot(object, "constraints") # For "lm" (formerly "vlm") if (type == "term") { oassign.LM <- object@misc$orig.assign x.LM <- model.matrix(object) att.x.LM <- attr(x.LM, "assign") names.att.x.LM <- names(att.x.LM) ppp <- length(names.att.x.LM) ans <- vector("list", ppp) for (ii in 1:ppp) { col.ptr <- (oassign.LM[[ii]])[1] # 20110114 ans[[ii]] <- (Hlist[[col.ptr]]) } names(ans) <- names.att.x.LM } # End of "term" if (matrix.out) { if (all) { M <- npred(object) mat.ans <- matrix(unlist(ans), nrow = M) if (length(object@misc$predictors.names) == M) rownames(mat.ans) <- object@misc$predictors.names if (length(object@misc$colnames.X_vlm) == ncol(mat.ans)) colnames(mat.ans) <- object@misc$colnames.X_vlm mat.ans } else { ans[[which]] } } else { if (all) ans else ans[[which]] } } if (!isGeneric("constraints")) setGeneric("constraints", function(object, ...) standardGeneric("constraints")) setMethod("constraints", "vlm", function(object, ...) constraints.vlm(object, ...)) VGAM/R/links.q0000644000176000001440000012052212136651110012535 0ustar ripleyusers# These functions are # Copyright (C) 1998-2013 T.W. Yee, University of Auckland. # All rights reserved. ToString <- function(x) paste(x, collapse = ", ") TypicalVGAMfamilyFunction <- function(lsigma = "loge", isigma = NULL, gsigma = exp(-5:5), parallel = TRUE, apply.parint = FALSE, shrinkage.init = 0.95, nointercept = NULL, imethod = 1, probs.x = c(0.15, 0.85), probs.y = c(0.25, 0.50, 0.75), mv = FALSE, earg.link = FALSE, whitespace = FALSE, bred = FALSE, oim = FALSE, nsimEIM = 100, zero = NULL) { NULL } TypicalVGAMlinkFunction <- function(theta, someParameter = 0, bvalue = NULL, # .Machine$double.xmin is an alternative inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) { NULL } care.exp <- function(x, thresh = -log( sqrt( .Machine$double.xmin ) ) ) { x[x > thresh] <- thresh x[x < (-thresh)] <- -thresh exp(x) } loge <- function(theta, bvalue = NULL, # .Machine$double.xmin is an alternative inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) { if (is.character(theta)) { string <- if (short) paste("log(", theta, ")", sep = "") else paste("log(", theta, ")", sep = "") if (tag) string <- paste("Log:", string) return(string) } if (!inverse && length(bvalue)) theta[theta <= 0.0] <- bvalue if (inverse) { if (deriv > 0) { 1 / Recall(theta = theta, bvalue = bvalue, inverse = FALSE, deriv = deriv) } else { exp(theta) } } else { switch(deriv + 1, { log(theta)}, theta, theta) } } logoff <- function(theta, offset = 0, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) { if (!is.Numeric(offset)) stop("bad input for argument 'offset'") if (is.character(theta)) { string <- if (short) paste("logoff(", theta, ", offset = ", as.character(offset), ")", sep = "") else paste("log(", as.character(offset), "+", theta, ")", sep = "") if (tag) string <- paste("Log with offset:", string) return(string) } if (inverse) { if (deriv > 0) { 1 / Recall(theta = theta, offset = offset, inverse = FALSE, deriv = deriv) } else { exp(theta) - offset } } else { switch(deriv + 1, log(theta + offset), theta + offset, theta + offset) } } identity <- function(theta, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) { if (is.character(theta)) { string <- theta if (tag) string <- paste("Identity:", string) return(string) } if (inverse) { if (deriv > 0) { 1 / Recall(theta = theta, inverse = 
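# Illustrative sketch (commented out).  The link functions in this file
# share one calling convention, e.g. for loge():
#   loge(0.25)                       # eta = log(theta)
#   loge(log(0.25), inverse = TRUE)  # theta = exp(eta)
#   loge(0.25, deriv = 1)            # d theta / d eta, here theta itself
#   loge("mu")                       # the character label "log(mu)"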
FALSE, deriv = deriv) } else { theta } } else { switch(deriv+1, theta, theta*0 + 1, theta*0) } } nidentity <- function(theta, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) { if (is.character(theta)) { string <- paste("-", theta, sep = "") if (tag) string <- paste("Negative-identity:", string) return(string) } if (inverse) { if (deriv > 0) { 1 / Recall(theta = theta, inverse = FALSE, deriv = deriv) } else { -theta } } else { switch(deriv+1, -theta, theta*0 - 1, theta*0) } } logit <- function(theta, bvalue = NULL, # .Machine$double.eps is an alternative inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) { if (is.character(theta)) { string <- if (short) paste("logit(", theta, ")", sep = "") else paste("log(", theta, "/(1-", theta, "))", sep = "") if (tag) string <- paste("Logit:", string) return(string) } if (!inverse && length(bvalue)) { theta[theta <= 0.0] <- bvalue theta[theta >= 1.0] <- 1.0 - bvalue } if (inverse) { if (deriv > 0) { 1 / Recall(theta = theta, bvalue = bvalue, inverse = FALSE, deriv = deriv) } else { exp(theta - log1p(exp(theta))) } } else { switch(deriv+1, { temp2 <- log(theta) - log1p(-theta) if (any(near0.5 <- (abs(theta - 0.5) < 0.000125))) temp2[near0.5] <- log(theta[near0.5] / (1 - theta[near0.5])) temp2 }, exp(log(theta) + log1p(-theta)), exp(log(theta) + log1p(-theta)) * (1 - 2 * theta)) } } loglog <- function(theta, bvalue = NULL, # .Machine$double.eps is an alternative inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) { if (is.character(theta)) { string <- if (short) paste("loglog(", theta, ")", sep = "") else paste("log(log(", theta, "))", sep = "") if (tag) string <- paste("Log-Log:", string) return(string) } if (!inverse && length(bvalue)) theta[theta <= 1.0] <- bvalue if (inverse) { if (deriv > 0) { 1 / Recall(theta = theta, bvalue = bvalue, inverse = FALSE, deriv = deriv) } else { exp(exp(theta)) } } else { switch(deriv+1, { log(log(theta))}, theta * log(theta), { junk <- log(theta) -junk^2 / (1 + junk) }, stop("argument 'deriv' unmatched")) } } cloglog <- function(theta, bvalue = NULL, # .Machine$double.eps is an alternative inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) { if (is.character(theta)) { string <- if (short) paste("cloglog(", theta, ")", sep = "") else paste("log(-log(1-", theta, "))", sep = "") if (tag) string <- paste("Complementary log-log:", string) return(string) } if (!inverse && length(bvalue)) { theta[theta <= 0.0] <- bvalue theta[theta >= 1.0] <- 1.0 - bvalue } if (inverse) { if (deriv > 0) { 1 / Recall(theta = theta, bvalue = bvalue, inverse = FALSE, deriv = deriv) } else { junk <- exp(theta) -expm1(-junk) } } else { switch(deriv+1, { log(-log1p(-theta)) }, -(1 - theta) * log1p(-theta), { junk <- log1p(-theta) -(1 - theta) * (1 + junk) * junk }, stop("argument 'deriv' unmatched")) } } probit <- function(theta, bvalue = NULL, # .Machine$double.eps is an alternative inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) { if (is.character(theta)) { string <- if (short) paste("probit(", theta, ")", sep = "") else paste("qnorm(", theta, ")", sep = "") if (tag) string <- paste("Probit:", string) return(string) } if (!inverse && length(bvalue)) { theta[theta <= 0.0] <- bvalue theta[theta >= 1.0] <- 1 - bvalue } if (inverse) { if (deriv > 0) { 1 / Recall(theta = theta, bvalue = bvalue, inverse = FALSE, deriv = deriv) } else { ans <- pnorm(theta) if (is.matrix(theta)) dim(ans) <- dim(theta) ans } } else { switch(deriv+1, { ans <- qnorm(theta) if (is.matrix(theta)) dim(ans) <- dim(theta) ans }, { if (is.matrix(theta)) { 
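# Illustrative sketch (commented out).  The binary-response links above
# invert cleanly away from the boundaries, e.g.:
#   p <- 0.3
#   logit(logit(p), inverse = TRUE)      # recovers 0.3
#   cloglog(cloglog(p), inverse = TRUE)  # recovers 0.3
# The 'bvalue' argument nudges values at 0 or 1 inwards before
# transforming, which is why these links accept it.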
ans <- dnorm(qnorm(theta)) dim(ans) <- dim(theta) ans } else dnorm(qnorm(as.vector(theta))) }, { junk <- qnorm(theta) ans <- -junk * dnorm(junk) if (is.vector(theta)) ans else if (is.matrix(theta)) { dim(ans) <- dim(theta) ans } else { warning("can only handle vectors and matrices;", " converting to vector") ans } }) } } explink <- function(theta, bvalue = NULL, # .Machine$double.eps is an alternative inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) { if (is.character(theta)) { string <- if (short) paste("explink(", theta, ")", sep = "") else paste("exp(", theta, ")", sep = "") if (tag) string <- paste("Exp:", string) return(string) } if (!inverse && length(bvalue)) theta[theta <= 0.0] <- bvalue if (inverse) { if (deriv > 0) { 1 / Recall(theta = theta, bvalue = bvalue, inverse = FALSE, deriv = deriv) } else { log(theta) } } else { switch(deriv+1, { exp(theta)}, 1 / exp(theta), -1 / exp(theta * 2)) } } reciprocal <- function(theta, bvalue = NULL, # .Machine$double.eps is an alternative inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) { if (is.character(theta)) { string <- paste("1/", theta, sep = "") if (tag) string <- paste("Reciprocal:", string) return(string) } if (!inverse && length(bvalue)) theta[theta == 0.0] <- bvalue if (inverse) { if (deriv > 0) { 1 / Recall(theta = theta, bvalue = bvalue, inverse = FALSE, deriv = deriv) } else { 1/theta } } else { switch(deriv+1, { 1/theta}, -theta^2, 2*theta^3) } } nloge <- function(theta, bvalue = NULL, # .Machine$double.eps is an alternative inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) { if (is.character(theta)) { string <- if (short) paste("nloge(", theta, ")", sep = "") else paste("-log(", theta, ")", sep = "") if (tag) string <- paste("Negative log:", string) return(string) } if (!inverse && length(bvalue)) theta[theta <= 0.0] <- bvalue if (inverse) { if (deriv > 0) { 1 / Recall(theta = theta, bvalue = bvalue, inverse = FALSE, deriv = deriv) } else { exp(-theta) } } else { switch(deriv+1, { -log(theta)}, -theta, theta) } } nreciprocal <- function(theta, bvalue = NULL, # .Machine$double.eps is an alternative inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) { if (is.character(theta)) { string <- paste("-1/", theta, sep = "") if (tag) string <- paste("Negative reciprocal:", string) return(string) } if (!inverse && length(bvalue)) theta[theta == 0.0] <- bvalue if (inverse) { if (deriv > 0) { 1 / Recall(theta, bvalue = bvalue, inverse = FALSE, deriv = deriv) } else { -1/theta } } else { switch(deriv+1, { -1/theta}, theta^2, 2*theta^3) } } natural.ig <- function(theta, bvalue = NULL, # .Machine$double.eps is an alternative inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) { if (is.character(theta)) { string <- paste("-1/", theta, sep = "") if (tag) string <- paste("Negative inverse:", string) return(string) } if (inverse) { if (deriv > 0) { 1 / nreciprocal(theta, bvalue = bvalue, inverse = FALSE, deriv = deriv) } else { 1 / sqrt(-2*theta) } } else { switch(deriv+1, -1 / (2 * theta^2), theta^3, 3 * theta^5) } } rhobit <- function(theta, bminvalue = NULL, bmaxvalue = NULL, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) { if (is.character(theta)) { string <- if (short) paste("rhobit(", theta, ")", sep = "") else paste("log((1+", theta, ")/(1-", theta, "))", sep = "") if (tag) string <- paste("Rhobit:", string) return(string) } if (!inverse) { if (length(bminvalue)) theta[theta <= -1.0] <- bminvalue if (length(bmaxvalue)) theta[theta >= 1.0] <- bmaxvalue } if (inverse) { if (deriv > 0) { 1 / Recall(theta = 
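# Illustrative sketch (commented out).  rhobit() and fisherz() map a
# correlation-type parameter from (-1, 1) onto the whole real line:
#   rho <- 0.8
#   rhobit(rho)    # log((1 + rho) / (1 - rho))
#   fisherz(rho)   # atanh(rho), exactly half the rhobit value
#   rhobit(rhobit(rho), inverse = TRUE)  # recovers 0.8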
theta, bminvalue = bminvalue, bmaxvalue = bmaxvalue, inverse = FALSE, deriv = deriv) } else { junk <- exp(theta) expm1(theta) / (junk+1.0) } } else { switch(deriv+1, { log1p(theta) - log1p(-theta)}, (1 - theta^2) / 2, (1 - theta^2)^2 / (4*theta)) } } fisherz <- function(theta, bminvalue = NULL, bmaxvalue = NULL, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) { if (is.character(theta)) { string <- if (short) paste("fisherz(", theta, ")", sep = "") else paste("(1/2) * log((1+", theta, ")/(1-", theta, "))", sep = "") if (tag) string <- paste("Fisher's Z transformation:", string) return(string) } if (!inverse) { if (length(bminvalue)) theta[theta <= -1.0] <- bminvalue if (length(bmaxvalue)) theta[theta >= 1.0] <- bmaxvalue } if (inverse) { if (deriv > 0) { 1 / Recall(theta = theta, bminvalue = bminvalue, bmaxvalue = bmaxvalue, inverse = FALSE, deriv = deriv) } else { tanh(theta) } } else { switch(deriv+1, atanh(theta), 1.0 - theta^2, (1.0 - theta^2)^2 / (2*theta)) } } mlogit <- function(theta, refLevel = "last", M = NULL, # stop("argument 'M' not specified"), whitespace = FALSE, bvalue = NULL, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) { fillerChar <- ifelse(whitespace, " ", "") if (length(refLevel) != 1) stop("the length of 'refLevel' must be one") if (is.character(refLevel)) { if (refLevel != "last") stop('if a character, refLevel must be "last"') refLevel <- -1 } else if (is.factor(refLevel)) { if (is.ordered(refLevel)) warning("argument 'refLevel' is from an ordered factor") refLevel <- as.character(refLevel) == levels(refLevel) refLevel <- (1:length(refLevel))[refLevel] if (!is.Numeric(refLevel, allowable.length = 1, integer.valued = TRUE, positive = TRUE)) stop("could not coerce 'refLevel' into a single positive integer") } else if (!is.Numeric(refLevel, allowable.length = 1, integer.valued = TRUE)) stop("'refLevel' must be a single (positive?) 
integer") if (is.character(theta)) { is.M <- is.finite(M) && is.numeric(M) string <- if (short) paste("mlogit(", theta, ")", sep = "") else { if (refLevel < 0) { ifelse(whitespace, paste("log(", theta, "[,j] / ", theta, "[,", ifelse(is.M, M+1, "M+1"), "]), j = 1:", ifelse(is.M, M, "M"), sep = ""), paste("log(", theta, "[,j]/", theta, "[,", ifelse(is.M, M+1, "M+1"), "]), j=1:", ifelse(is.M, M, "M"), sep = "")) } else { if (refLevel == 1) { paste("log(", theta, "[,", "j]", fillerChar, "/", fillerChar, "", theta, "[,", refLevel, "]), j", fillerChar, "=", fillerChar, "2:", ifelse(is.M, (M+1), "(M+1)"), sep = "") } else { paste("log(", theta, "[,", "j]", fillerChar, "/", "", theta, "[,", refLevel, "]), j", fillerChar, "=", fillerChar, "c(1:", refLevel-1, ",", fillerChar, refLevel+1, ":", ifelse(is.M, (M+1), "(M+1)"), ")", sep = "") } } } if (tag) string <- paste("Multinomial logit link:", string) return(string) } M.orig <- M M <- if (inverse) ncol(cbind(theta)) else ncol(cbind(theta)) - 1 if (M < 1) ifelse(inverse, stop("argument 'eta' should have at least one column"), stop("argument 'theta' should have at least two columns")) if (is.numeric(M.orig) && M != M.orig) { warning("argument 'M' does not seem right but using it") M <- M.orig } if (!inverse && length(bvalue)) theta[theta <= 0.0] <- bvalue if (!inverse && length(bvalue)) theta[theta >= 1.0] <- 1 - bvalue foo <- function(eta, refLevel = -1, M) { phat <- if ((refLevel < 0) || (refLevel == M+1)) { cbind(care.exp(eta), 1.0) } else if ( refLevel == 1) { cbind(1.0, care.exp(eta)) } else { use.refLevel <- if ( refLevel < 0) M+1 else refLevel etamat <- cbind(eta[, 1:( refLevel - 1)], 0.0, eta[, ( refLevel ):M]) care.exp(etamat) } ans <- phat / rowSums(phat) colnames(ans) <- NULL ans } if (inverse) { if (deriv > 0) { 1 / Recall(theta = theta, refLevel = refLevel, bvalue = bvalue, inverse = FALSE, deriv = deriv) } else { foo(theta, refLevel, M = M) # log(theta[, -jay] / theta[, jay]) } } else { switch(deriv + 1, { ans <- if (refLevel < 0) { log(theta[, -ncol(theta)] / theta[, ncol(theta)]) } else { use.refLevel <- if (refLevel < 0) ncol(theta) else refLevel log(theta[, -( use.refLevel )] / theta[, use.refLevel ]) } colnames(ans) <- NULL ans }, care.exp(log(theta) + log1p(-theta)), care.exp(log(theta) + log1p(-theta)) * (1 - 2 * theta)) } } # end of mlogit fsqrt <- function(theta, # = NA , = NULL, min = 0, max = 1, mux = sqrt(2), inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) { if (!is.Numeric(min, allowable.length = 1)) stop("bad input for 'min' component") if (!is.Numeric(max, allowable.length = 1)) stop("bad input for 'max' component") if (!is.Numeric(mux, allowable.length = 1, positive = TRUE)) stop("bad input for 'mux' component") if (min >= max) stop("'min' >= 'max' is not allowed") if (is.character(theta)) { string <- if (short) paste("fsqrt(", theta, ")", sep = "") else { if (abs(mux-sqrt(2)) < 1.0e-10) paste("sqrt(2*", theta, ") - sqrt(2*(1-", theta, "))", sep = "") else paste(as.character(mux), " * (sqrt(", theta, "-", min, ") - sqrt(", max, "-", theta, "))", sep = "") } if (tag) string <- paste("Folded square root:", string) return(string) } if (inverse) { if (deriv > 0) { 1 / Recall(theta = theta, min = min, max = max, mux = mux, inverse = FALSE, deriv = deriv) } else { mid <- (min + max) / 2 boundary <- mux * sqrt(max - min) temp <- pmax(0, (theta/mux)^2 * (2*(max-min) - (theta/mux)^2)) ans <- theta if (any(ind5 <- theta < 0)) ans[ind5] <- mid - 0.5 * sqrt(temp[ind5]) if (any(ind5 <- theta >= 0)) ans[ind5] <- mid + 0.5 * 
sqrt(temp[ind5]) ans[theta < -boundary] <- NA ans[theta > boundary] <- NA ans } } else { switch(deriv+1, mux * (sqrt(theta-min) - sqrt(max-theta)), (2 / mux) / (1/sqrt(theta-min) + 1/sqrt(max-theta)), -(4 / mux) / ((theta-min)^(-3/2) - (max-theta)^(-3/2))) } } powl <- function(theta, power = 1, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) { exponent <- power if (exponent == 0) stop("use the 'loge' link") if (is.character(theta)) { string <- if (short) paste("powl(", theta, ", power = ", as.character(exponent), ")", sep = "") else paste(theta, "^(", as.character(exponent), ")", sep = "") if (tag) string <- paste("Power link:", string) return(string) } if (inverse) { if (deriv > 0) { 1 / Recall(theta = theta, power = power, inverse = FALSE, deriv = deriv) } else { theta^(1/exponent) } } else { switch(deriv+1, { theta^exponent }, { (theta^(1-exponent)) / exponent }, { (theta^(2-exponent)) / (exponent * (exponent-1)) }) } } elogit <- function(theta, min = 0, max = 1, bminvalue = NULL, bmaxvalue = NULL, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) { A = min B = max if (!inverse && length(bminvalue)) theta[theta <= A] <- bminvalue if (!inverse && length(bmaxvalue)) theta[theta >= B] <- bmaxvalue if (is.character(theta)) { string <- if (short) { if (A != 0 || B != 1) paste("elogit(", theta, ", min = ", A, ", max = ", B, ")", sep = "") else paste("elogit(", theta, ")", sep = "") } else { paste("log((", theta, "-min)/(max-", theta, "))", sep = "") } if (tag) string <- paste("Extended logit:", string) return(string) } if (inverse) { if (deriv > 0) { 1 / Recall(theta = theta, min = min, max = max, bminvalue = bminvalue, bmaxvalue = bmaxvalue, inverse = FALSE, deriv = deriv) } else { junk <- care.exp(theta) (A + B * junk) / (1.0 + junk) } } else { switch(deriv+1, { log((theta - A)/(B - theta))}, (theta - A) * (B - theta) / (B-A), (theta - A) * (B - theta) * (B - 2 * theta + A) / (B-A)^2) } } logc <- function(theta, bvalue = NULL, # .Machine$double.xmin is an alternative inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) { if (is.character(theta)) { string <- if (short) paste("logc(", theta, ")", sep = "") else paste("log(1-", theta, ")", sep = "") if (tag) string <- paste("Log Complementary:", string) return(string) } if (!inverse && length(bvalue)) { theta[theta >= 1.0] <- bvalue; } if (inverse) { if (deriv > 0) { 1 / Recall(theta = theta, bvalue = bvalue, inverse = FALSE, deriv = deriv) } else { -expm1(theta) } } else { switch(deriv+1, { log1p(-theta)}, -(1.0 - theta), -(1.0 - theta)^2) } } cauchit <- function(theta, bvalue = .Machine$double.eps, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) { if (is.character(theta)) { string <- if (short) paste("cauchit(", theta, ")", sep = "") else paste("tan(pi*(", theta, "-0.5))", sep = "") if (tag) string <- paste("Cauchit:", string) return(string) } if (!inverse && length(bvalue)) { theta[theta <= 0.0] <- bvalue theta[theta >= 1.0] <- 1.0 - bvalue } if (inverse) { if (deriv > 0) { 1 / Recall(theta = theta, bvalue = bvalue, inverse = FALSE, deriv = deriv) } else { 0.5 + atan(theta) / pi } } else { switch(deriv+1, tan(pi * (theta-0.5)), cos(pi * (theta-0.5))^2 / pi, -sin(pi * (theta-0.5) * 2) ) } } golf <- function(theta, lambda = 1, cutpoint = NULL, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) { if (!is.Numeric(lambda, positive = TRUE)) stop('could not determine lambda or lambda has negative values') if (is.Numeric(cutpoint)) if (any(cutpoint < 0) || !is.Numeric(cutpoint, integer.valued = TRUE)) warning("argument 
'cutpoint' should contain ", "non-negative integer values") if (is.character(theta)) { string <- if (short) { lenl <- length(lambda) > 1 lenc <- length(cutpoint) > 1 paste("golf(", theta, ", lambda = ", if (lenl) "c(" else "", ToString(lambda), if (lenl) ")" else "", if (is.Numeric(cutpoint)) paste(", cutpoint = ", if (lenc) "c(" else "", ToString(cutpoint), if (lenc) ")" else "", sep = "") else "", ")", sep = "") } else { if (is.Numeric(cutpoint)) { paste("-3*log(1-qnorm(", theta, ")/(3*sqrt(lambda)))", " + log(cutpoint)", sep = "") } else { paste("-3*log(1-qnorm(", theta, ")/(3*sqrt(lambda)))", sep = "") } } if (tag) string <- paste("Gamma-ordinal link function:", string) return(string) } thmat <- cbind(theta) lambda <- rep(lambda, len = ncol(thmat)) # Allow recycling for lambda if (is.Numeric(cutpoint)) cutpoint <- rep(cutpoint, len = ncol(thmat)) if (ncol(thmat) > 1) { answer <- thmat for(ii in 1:ncol(thmat)) answer[,ii] <- Recall(theta = thmat[,ii], lambda = lambda[ii], cutpoint = if (is.Numeric(cutpoint)) cutpoint[ii] else NULL, inverse = inverse, deriv = deriv) return(answer) } answer <- if (inverse) { if (deriv > 0) { 1 / Recall(theta = theta, lambda = lambda, cutpoint = cutpoint, inverse = FALSE, deriv = deriv) } else { if (is.Numeric(cutpoint)) { pnorm((1-care.exp(-(theta-log(cutpoint))/3)) * 3 * sqrt(lambda)) } else { pnorm((1-care.exp(-theta/3)) * 3 * sqrt(lambda)) } } } else { smallno <- 1 * .Machine$double.eps Theta <- theta Theta <- pmin(Theta, 1 - smallno) # Since theta == 1 is a possibility Theta <- pmax(Theta, smallno) # Since theta == 0 is a possibility Ql <- qnorm(Theta) switch(deriv+1, { temp <- Ql / (3*sqrt(lambda)) temp <- pmin(temp, 1.0 - smallno) # 100 / .Machine$double.eps -3*log1p(-temp) + if (is.Numeric(cutpoint)) log(cutpoint) else 0}, (1 - Ql / (3*sqrt(lambda))) * sqrt(lambda) * dnorm(Ql), { stop('cannot handle deriv = 2') }, stop("argument 'deriv' unmatched")) } if (!is.Numeric(answer)) stop("the answer contains some NAs") answer } polf <- function(theta, # = 1, cutpoint = NULL, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) { if (!is.Numeric(cutpoint)) stop("could not determine the cutpoint") if (any(cutpoint < 0) || !is.Numeric(cutpoint, integer.valued = TRUE)) warning("argument 'cutpoint' should", " contain non-negative integer values") if (is.character(theta)) { string <- if (short) { lenc = length(cutpoint) > 1 paste("polf(", theta, ", cutpoint = ", if (lenc) "c(" else "", ToString(cutpoint), if (lenc) ")" else "", ")", sep = "") } else paste("2*log(0.5*qnorm(", theta, ") + sqrt(cutpoint+7/8))", sep = "") if (tag) string <- paste("Poisson-ordinal link function:", string) return(string) } thmat = cbind(theta) if (ncol(thmat) > 1) { answer = thmat cutpoint = rep(cutpoint, len = ncol(thmat)) # Reqd for the for loop for(ii in 1:ncol(thmat)) answer[,ii] = Recall(theta = thmat[,ii], cutpoint = cutpoint, inverse = inverse, deriv = deriv) return(answer) } answer = if (inverse) { if (deriv > 0) { 1 / Recall(theta = theta, cutpoint = cutpoint, inverse = FALSE, deriv = deriv) } else { if (any(cp.index <- cutpoint == 0)) { tmp <- theta tmp[cp.index] <- cloglog(theta = theta[cp.index], inverse = inverse, deriv = deriv) tmp[!cp.index] <- pnorm(2 * exp(theta[!cp.index]/2) - 2 * sqrt(cutpoint[!cp.index] + 7/8)) tmp } else { pnorm(2 * exp(theta/2) - 2 * sqrt(cutpoint + 7/8)) } } } else { if (any(cp.index <- cutpoint == 0)) { cloglog(theta = theta, inverse = inverse, deriv = deriv) } else { smallno = 1 * .Machine$double.eps SMALLNO = 1 * .Machine$double.xmin Theta 
= theta Theta = pmin(Theta, 1 - smallno) # Since theta == 1 is a possibility Theta = pmax(Theta, smallno) # Since theta == 0 is a possibility Ql = qnorm(Theta) switch(deriv+1, { temp = 0.5 * Ql + sqrt(cutpoint + 7/8) temp = pmax(temp, SMALLNO) 2 * log(temp)}, (Ql/2 + sqrt(cutpoint + 7/8)) * dnorm(Ql), { stop('cannot handle deriv = 2') }, stop("argument 'deriv' unmatched")) } } if (!is.Numeric(answer)) stop("the answer contains some NAs") answer } nbolf <- function(theta, cutpoint = NULL, k = NULL, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) { kay = k if (!is.Numeric(kay, positive = TRUE)) stop("could not determine 'k' or it is not positive-valued") if (!is.Numeric(cutpoint)) stop("could not determine the cutpoint") if (any(cutpoint < 0) || !is.Numeric(cutpoint, integer.valued = TRUE)) warning("argument 'cutpoint' should", " contain non-negative integer values") if (is.character(theta)) { string <- if (short) { lenc = length(cutpoint) > 1 lenk = length(kay) > 1 paste("nbolf(", theta, ", cutpoint = ", if (lenc) "c(" else "", ToString(cutpoint), if (lenc) ")" else "", ", k = ", if (lenk) "c(" else "", ToString(kay), if (lenk) ")" else "", ")", sep = "") } else paste("2*log(sqrt(k) * sinh(qnorm(", theta, ")/(2*sqrt(k)) + ", "asinh(sqrt(cutpoint/k))))", sep = "") if (tag) string <- paste("Negative binomial-ordinal link function:", string) return(string) } thmat = cbind(theta) kay = rep(kay, len = ncol(thmat)) # Allow recycling for kay cutpoint = rep(cutpoint, len = ncol(thmat)) # Allow recycling for cutpoint if (ncol(thmat) > 1) { answer = thmat for(ii in 1:ncol(thmat)) answer[,ii] = Recall(theta = thmat[,ii], cutpoint = cutpoint[ii], k = kay[ii], inverse = inverse, deriv = deriv) return(answer) } answer = if (inverse) { if (deriv > 0) { 1 / Recall(theta = theta, cutpoint = cutpoint, k = kay, inverse = FALSE, deriv = deriv) } else { if (cutpoint == 0) { 1.0 - (kay / (kay + care.exp(theta)))^kay } else { pnorm((asinh(exp(theta/2)/sqrt(kay)) - asinh(sqrt(cutpoint/kay))) * 2 * sqrt(kay)) } } } else { smallno = 1 * .Machine$double.eps SMALLNO = 1 * .Machine$double.xmin Theta = theta Theta = pmin(Theta, 1 - smallno) # Since theta == 1 is a possibility Theta = pmax(Theta, smallno) # Since theta == 0 is a possibility if (cutpoint == 0) { switch(deriv+1, { temp = (1 - Theta)^(-1/kay) - 1 temp = pmax(temp, SMALLNO) log(kay) + log(temp)}, (kay / (1 - Theta)^(1/kay) - kay) * (1 - Theta)^(kay+1/kay), { stop('cannot handle deriv = 2') }, stop("argument 'deriv' unmatched")) } else { Ql = qnorm(Theta) switch(deriv+1, { temp = sqrt(kay) * sinh(Ql/(2*sqrt(kay)) + asinh(sqrt(cutpoint/kay))) temp = pmax(temp, SMALLNO) 2 * log(temp)}, { arg1 = (Ql/(2*sqrt(kay)) + asinh(sqrt(cutpoint/kay))) sqrt(kay) * tanh(arg1) * dnorm(Ql) }, { stop('cannot handle deriv = 2') }, stop("argument 'deriv' unmatched")) } } if (!is.Numeric(answer)) stop("the answer contains some NAs") answer } nbolf2 <- function(theta, cutpoint = NULL, k = NULL, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) { kay <- k if (!is.Numeric(kay, positive = TRUE)) stop("could not determine argument 'k' or ", "it is not positive-valued") if (!is.Numeric(cutpoint)) stop("could not determine the cutpoint") if (any(cutpoint < 0) || !is.Numeric(cutpoint, integer.valued = TRUE)) warning("argument 'cutpoint' should ", "contain non-negative integer values") if (is.character(theta)) { string <- if (short) { lenc = length(cutpoint) > 1 lenk = length(kay) > 1 paste("nbolf2(", theta, ", earg = list(cutpoint = ", if (lenc) "c(" else "", 
ToString(cutpoint), if (lenc) ")" else "", ", k = ", if (lenk) "c(" else "", ToString(kay), if (lenk) ")" else "", "))", sep = "") } else { paste("3*log()", sep = "") } if (tag) string = paste("Negative binomial-ordinal link function 2:", string) return(string) } thmat = cbind(theta) kay = rep(kay, len = ncol(thmat)) # Allow recycling for kay if (ncol(thmat) > 1) { answer = thmat for(ii in 1:ncol(thmat)) answer[,ii] = Recall(theta = thmat[,ii], cutpoint = cutpoint[ii], k = kay[ii], inverse = inverse, deriv = deriv) return(answer) } answer = if (inverse) { if (deriv > 0) { 1 / Recall(theta = theta, cutpoint = cutpoint, k = kay, inverse = FALSE, deriv = deriv) } else { if (cutpoint == 0) { 1.0 - (kay / (kay + care.exp(theta)))^kay } else { a1 = -(9*cutpoint+8) / (cutpoint+1) a2 = (9*kay-1) / (kay * (cutpoint+1)^(1/3)) a3 = 9 / (kay * (cutpoint+1)^(2/3)) a4 = 9 / (cutpoint+1) B = exp(theta/3) mymat = rbind(a1^2*a2^2 + 2*a1*a2^3*B + B^2*a2^4, 0, -2*a1*a2*a3*B - 2*a2^2*a3*B^2 - a1^2*a3 - a2^2*a4, 0, B^2 * a3^2 + a3 * a4) ans = Re(t(apply(mymat, 2, polyroot))) theta2 = invfun = pnorm(-ans) # pnorm(-x) = 1-pnorm(x) for(ii in 1:4) { theta2[,ii] = Recall(theta = theta2[,ii], cutpoint = cutpoint, k = kay, inverse = FALSE, deriv = deriv) } rankmat = t(apply(abs(theta2 - theta), 1, rank)) for(ii in 2:4) { if (any(index4 <- (rankmat[,ii] == 1))) { invfun[index4,1] = invfun[index4,ii] } } invfun[,1] } } } else { smallno = 1 * .Machine$double.eps SMALLNO = 1 * .Machine$double.xmin Theta = theta Theta = pmin(Theta, 1 - smallno) # Since theta == 1 is a possibility Theta = pmax(Theta, smallno) # Since theta == 0 is a possibility if (cutpoint == 0) { switch(deriv+1, { temp = (1 - Theta)^(-1/kay) - 1 temp = pmax(temp, SMALLNO) log(kay) + log(temp)}, (kay / (1 - Theta)^(1/kay) - kay) * (1 - Theta)^(kay+1/kay), { stop("cannot handle 'deriv = 2'") }, stop("argument 'deriv' unmatched")) } else { Ql = qnorm(Theta) a1 = -(9*cutpoint+8) / (cutpoint+1) a2 = (9*kay-1) / (kay * (cutpoint+1)^(1/3)) a3 = 9 / (kay * (cutpoint+1)^(2/3)) a4 = 9 / (cutpoint+1) discrim = a1^2 * a3 + a2^2 * a4 - Ql^2 * a3 * a4 denomin = Ql^2 * a3 - a2^2 numerat = (a1*a2 - Ql * sqrt(discrim)) argmax1 = numerat / denomin switch(deriv+1, { argmax2 = (a1*a2 + Ql * sqrt(discrim)) / denomin temp = ifelse(argmax1 > 0, argmax1, argmax2) temp = pmax(temp, SMALLNO) 3 * log(temp)}, { BB = (sqrt(discrim) - Ql^2 * a3 * a4 / sqrt(discrim)) / dnorm(Ql) CC = 2 * Ql * a3 / dnorm(Ql) dA.dtheta = (-denomin * BB - numerat * CC) / denomin^2 argmax1 / (3 * dA.dtheta) }, { stop('cannot handle deriv = 2') }, stop("argument 'deriv' unmatched")) } } if (!is.Numeric(answer)) stop("the answer contains some NAs") answer } Cut <- function(y, breaks = c(-Inf, quantile(c(y), prob = (1:4)/4))) { y <- as.matrix(y) temp <- cut(y, breaks = breaks, labels = FALSE) temp <- c(temp) # integer vector of integers if (any(is.na(temp))) stop("there are NAs") answer <- if (ncol(y) > 1) matrix(temp, nrow(y), ncol(y)) else temp if (ncol(y) > 1) { ynames <- dimnames(y)[[2]] if (!length(ynames)) ynames <- paste("Y", 1:ncol(y), sep = "") xnames <- dimnames(y)[[1]] if (!length(xnames)) xnames = as.character(1:nrow(y)) dimnames(answer) <- list(xnames, ynames) } attr(answer, "breaks") <- breaks answer } checkCut <- function(y) { if (!is.Numeric(y, positive = TRUE, integer.valued = TRUE)) stop("argument 'y' must contain positive integers only") uy <- unique(y) L <- max(uy) oklevels <- 1:L if (L == 1) stop("only one unique value") for(ii in oklevels) { if (all(ii != uy)) stop("there is no ", ii, " 
value") } TRUE } nbcanlink <- function(theta, size = NULL, wrt.eta = NULL, bvalue = NULL, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) { if (is.character(theta)) { string <- if (short) paste("nbcanlink(", theta, ")", sep = "") else paste("log(", theta, " / (", theta, " + size))", sep = "") if (tag) string <- paste("Nbcanlink:", string) return(string) } kmatrix <- size theta <- cbind(theta) kmatrix <- cbind(kmatrix) if (ncol(kmatrix) != ncol(theta)) stop("arguments 'theta' and 'size' do not have ", "an equal number of cols") if (nrow(kmatrix) != nrow(theta)) stop("arguments 'theta' and 'size' do not have ", "an equal number of rows") if (deriv > 0) { if (!(wrt.eta %in% 1:2)) stop("argument 'wrt.eta' should be 1 or 2") } if (!inverse && length(bvalue)) theta[theta <= 0.0] <- bvalue if (inverse) { if (deriv > 0) { 1 / Recall(theta = theta, size = size, wrt.eta = wrt.eta, bvalue = bvalue, inverse = FALSE, deriv = deriv) } else { ans <- (kmatrix / expm1(-theta)) if (is.matrix(ans)) dimnames(ans) <- NULL else names(ans) <- NULL ans } } else { ans <- switch(deriv+1, (log(theta / (theta + kmatrix))), if (wrt.eta == 1) theta * (theta + kmatrix) / kmatrix else -(theta + kmatrix), if (wrt.eta == 1) -(theta * (theta + kmatrix))^2 / ((2 * theta + kmatrix) * kmatrix) else (theta + kmatrix)^2) if (is.matrix(ans)) dimnames(ans) <- NULL else names(ans) <- NULL ans } } VGAM/R/generic.q0000644000176000001440000000304712136651110013033 0ustar ripleyusers# These functions are # Copyright (C) 1998-2013 T.W. Yee, University of Auckland. # All rights reserved. add1.vgam <- function(...) stop("no add1 method implemented for vgam() models (yet)") alias.vgam <- function(...) stop("no alias method implemented for vgam() models (yet)") anova.vgam <- function(...) stop("no anova method implemented for vgam() models (yet)") drop1.vgam <- function(...) stop("no drop1 method implemented for vgam() models (yet)") effects.vgam <- function(...) stop("no effects method implemented for vgam() models (yet)") proj.vgam <- function(...) stop("no proj method implemented for vgam() models (yet)") step.vgam <- function(...) stop("no step method implemented for vgam() models (yet)") update.vgam <- function(...) stop("no update method implemented for vgam() models (yet)") add1.vglm <- function(...) stop("no add1 method implemented for vglm() models (yet)") alias.vglm <- function(...) stop("no alias method implemented for vglm() models (yet)") anova.vglm <- function(...) stop("no anova method implemented for vglm() models (yet)") drop1.vglm <- function(...) stop("no drop1 method implemented for vglm() models (yet)") plot.vglm <- function(...) stop("no plot method implemented for vglm() models (yet)") proj.vglm <- function(...) stop("no proj method implemented for vglm() models (yet)") step.vglm <- function(...) stop("no step method implemented for vglm() models (yet)") update.vglm <- function(...) stop("no update method implemented for vglm() models (yet)") VGAM/R/formula.vlm.q0000644000176000001440000000730212136651110013657 0ustar ripleyusers# These functions are # Copyright (C) 1998-2013 T.W. Yee, University of Auckland. # All rights reserved. formulavlm = function(x, fnumber=1, ...) { if (!is.Numeric(fnumber, integer.valued = TRUE, allowable.length = 1, positive = TRUE) || fnumber > 2) stop("argument 'fnumber' must be 1 or 2") if (!any(slotNames(x) == "misc")) stop("cannot find slot 'misc'") if (fnumber == 1) x@misc$formula else x@misc$form2 } formulaNA.VGAM = function(x, ...) 
{ stop("a formula does not make sense for object 'x'") } setMethod("formula", "vlm", function(x, ...) formulavlm(x = x, ...)) setMethod("formula", "vglm", function(x, ...) formulavlm(x = x, ...)) setMethod("formula", "vgam", function(x, ...) formulavlm(x = x, ...)) setMethod("formula", "rrvglm", function(x, ...) formulavlm(x = x, ...)) setMethod("formula", "qrrvglm", function(x, ...) formulavlm(x = x, ...)) setMethod("formula", "grc", function(x, ...) formulavlm(x = x, ...)) variable.namesvlm <- function(object, full = FALSE, ...) { qrslot <- object@qr if (!length(qrslot$qr)) { use.this <- object@x if (!length(use.this)) stop("argument 'object' has empty 'qr' and 'x' slots.") } else { use.this = qrslot$qr } if (full) dimnames(use.this)[[2]] else if (object@rank) dimnames(use.this)[[2]][seq_len(object@rank)] else character(0) } variable.namesrrvglm <- function(object, ...) { qrslot <- object@qr if (!length(qrslot$qr)) { use.this <- object@x if (!length(use.this)) stop("argument 'object' has empty 'qr' and 'x' slots.") } else { use.this = qrslot$qr } dimnames(use.this)[[2]] } case.namesvlm <- function(object, full = FALSE, ...) { w <- weights(object, type="prior") use.this <- residuals(object, type="working") if (!length(use.this)) use.this <- object@x if (!length(use.this)) use.this <- object@y if (!length(use.this)) stop("argument 'object' has empty 'x' and 'y' slots.") dn <- dimnames(use.this)[[1]] if (full || is.null(w) || ncol(cbind(w)) != 1) dn else dn[w!=0] } setMethod("variable.names", "vlm", function(object, ...) variable.namesvlm(object=object, ...)) setMethod("variable.names", "vglm", function(object, ...) variable.namesvlm(object=object, ...)) setMethod("variable.names", "vgam", function(object, ...) variable.namesvlm(object=object, ...)) setMethod("variable.names", "rrvglm", function(object, ...) variable.namesrrvglm(object=object, ...)) setMethod("variable.names", "qrrvglm", function(object, ...) variable.namesvlm(object=object, ...)) setMethod("variable.names", "grc", function(object, ...) variable.namesvlm(object=object, ...)) setMethod("case.names", "vlm", function(object, ...) case.namesvlm(object=object, ...)) setMethod("case.names", "vglm", function(object, ...) case.namesvlm(object=object, ...)) setMethod("case.names", "vgam", function(object, ...) case.namesvlm(object=object, ...)) setMethod("case.names", "rrvglm", function(object, ...) case.namesvlm(object=object, ...)) setMethod("case.names", "qrrvglm", function(object, ...) case.namesvlm(object=object, ...)) setMethod("case.names", "grc", function(object, ...) case.namesvlm(object=object, ...)) VGAM/R/fittedvlm.R0000644000176000001440000000374512136651110013363 0ustar ripleyusers# These functions are # Copyright (C) 1998-2013 T.W. Yee, University of Auckland. # All rights reserved. fittedvlm <- function(object, matrix.arg = TRUE, ...) { answer = if (matrix.arg) object@fitted.values else { if (!is.matrix(object@fitted.values) || !length(object@fitted.values)) stop("object@fitted.values is not a matrix or is empty") if (ncol(object@fitted.values) == 1) c(object@fitted.values) else { warning("ncol(object@fitted.values) is not 1") c(object@fitted.values) } } if (length(answer) && length(object@na.action)) { napredict(object@na.action[[1]], answer) } else { answer } } if(!isGeneric("fitted")) setGeneric("fitted", function(object, ...) standardGeneric("fitted")) setMethod("fitted.values", "vlm", function(object, ...) fittedvlm(object, ...)) setMethod("fitted", "vlm", function(object, ...) 
setMethod("fitted", "vlm",
          function(object, ...) fittedvlm(object, ...))
setMethod("fitted.values", "vglm",
          function(object, ...) fittedvlm(object, ...))
setMethod("fitted", "vglm",
          function(object, ...) fittedvlm(object, ...))

predictors.vglm <- function(object, matrix = TRUE, ...) {
  answer <- if (matrix) object@predictors else {
    if (!is.matrix(object@predictors) || !length(object@predictors))
      stop("object@predictors is not a matrix or is empty")
    if (ncol(object@predictors) == 1) c(object@predictors) else {
      warning("ncol(object@predictors) is not 1")
      c(object@predictors)
    }
  }

  if (length(answer) && length(object@na.action)) {
    napredict(object@na.action[[1]], answer)
  } else {
    answer
  }
}

if (!isGeneric("predictors"))
  setGeneric("predictors",
             function(object, ...) standardGeneric("predictors"))

setMethod("predictors", "vglm",
          function(object, ...) predictors.vglm(object, ...))

# --------------------------------------------------------------------
# VGAM/R/family.zeroinf.R
# These functions are
# Copyright (C) 1998-2013 T.W. Yee, University of Auckland.
# All rights reserved.

dzanegbin <- function(x, size, prob = NULL, munb = NULL, pobs0 = 0,
                      log = FALSE) {
  if (length(munb)) {
    if (length(prob))
      stop("arguments 'prob' and 'munb' both specified")
    prob <- size / (size + munb)
  }
  if (!is.logical(log.arg <- log) || length(log) != 1)
    stop("bad input for argument 'log'")
  rm(log)

  LLL <- max(length(x), length(pobs0), length(prob), length(size))
  if (length(x)     != LLL) x     <- rep(x,     len = LLL)
  if (length(pobs0) != LLL) pobs0 <- rep(pobs0, len = LLL)
  if (length(prob)  != LLL) prob  <- rep(prob,  len = LLL)
  if (length(size)  != LLL) size  <- rep(size,  len = LLL)
  ans <- rep(0.0, len = LLL)

  if (!is.Numeric(pobs0) || any(pobs0 < 0) || any(pobs0 > 1))
    stop("argument 'pobs0' must be in [0,1]")
  if (!is.Numeric(prob, positive = TRUE))
    stop("argument 'prob' must be in [0,Inf)")
  if (!is.Numeric(size, positive = TRUE))
    stop("argument 'size' must be in [0,Inf)")

  index0 <- x == 0
  if (log.arg) {
    ans[ index0] <- log(pobs0[index0])
    ans[!index0] <- log1p(-pobs0[!index0]) +
                    dposnegbin(x[!index0], prob = prob[!index0],
                               size = size[!index0], log = TRUE)
  } else {
    ans[ index0] <- pobs0[index0]
    ans[!index0] <- (1 - pobs0[!index0]) *
                    dposnegbin(x[!index0], prob = prob[!index0],
                               size = size[!index0])
  }
  ans
}

pzanegbin <- function(q, size, prob = NULL, munb = NULL, pobs0 = 0) {
  if (length(munb)) {
    if (length(prob))
      stop("arguments 'prob' and 'munb' both specified")
    prob <- size / (size + munb)
  }
  LLL <- max(length(q), length(pobs0), length(prob), length(size))
  if (length(q)     != LLL) q     <- rep(q,     len = LLL)
  if (length(pobs0) != LLL) pobs0 <- rep(pobs0, len = LLL)
  if (length(prob)  != LLL) prob  <- rep(prob,  len = LLL)
  if (length(size)  != LLL) size  <- rep(size,  len = LLL)
  ans <- rep(0.0, len = LLL)

  if (!is.Numeric(pobs0) ||
any(pobs0 < 0) || any(pobs0 > 1)) stop("argument 'pobs0' must be between 0 and 1 inclusive") ans <- p ans[p <= pobs0] <- 0 pindex <- (p > pobs0) ans[pindex] <- qposnegbin((p[pindex] - pobs0[pindex]) / (1 - pobs0[pindex]), prob = prob[pindex], size = size[pindex]) ans } rzanegbin <- function(n, size, prob = NULL, munb = NULL, pobs0 = 0) { use.n <- if ((length.n <- length(n)) > 1) length.n else if (!is.Numeric(n, integer.valued = TRUE, allowable.length = 1, positive = TRUE)) stop("bad input for argument 'n'") else n if (length(munb)) { if (length(prob)) stop("arguments 'prob' and 'munb' both specified") prob <- size / (size + munb) } ans <- rposnegbin(n = use.n, prob = prob, size = size) if (length(pobs0) != use.n) pobs0 <- rep(pobs0, len = use.n) if (!is.Numeric(pobs0) || any(pobs0 < 0) || any(pobs0 > 1)) stop("argument 'pobs0' must be between 0 and 1 inclusive") ifelse(runif(use.n) < pobs0, 0, ans) } dzapois <- function(x, lambda, pobs0 = 0, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) LLL <- max(length(x), length(lambda), length(pobs0)) if (length(x) != LLL) x <- rep(x, len = LLL); if (length(lambda) != LLL) lambda <- rep(lambda, len = LLL); if (length(pobs0) != LLL) pobs0 <- rep(pobs0, len = LLL); ans <- rep(0.0, len = LLL) if (!is.Numeric(pobs0) || any(pobs0 < 0) || any(pobs0 > 1)) stop("argument 'pobs0' must be in [0,1]") index0 <- (x == 0) if (log.arg) { ans[ index0] <- log(pobs0[index0]) ans[!index0] <- log1p(-pobs0[!index0]) + dpospois(x[!index0], lambda[!index0], log = TRUE) } else { ans[ index0] <- pobs0[index0] ans[!index0] <- (1 - pobs0[!index0]) * dpospois(x[!index0], lambda[!index0]) } ans } pzapois <- function(q, lambda, pobs0 = 0) { LLL <- max(length(q), length(lambda), length(pobs0)) if (length(q) != LLL) q <- rep(q, len = LLL); if (length(lambda) != LLL) lambda <- rep(lambda, len = LLL); if (length(pobs0) != LLL) pobs0 <- rep(pobs0, len = LLL); ans <- rep(0.0, len = LLL) if (!is.Numeric(pobs0) || any(pobs0 < 0) || any(pobs0 > 1)) stop("argument 'pobs0' must be in [0,1]") ans[q > 0] <- pobs0[q > 0] + (1-pobs0[q > 0]) * ppospois(q[q > 0], lambda[q > 0]) ans[q < 0] <- 0 ans[q == 0] <- pobs0[q == 0] ans } qzapois <- function(p, lambda, pobs0 = 0) { LLL <- max(length(p), length(lambda), length(pobs0)) if (length(p) != LLL) p <- rep(p, len = LLL); if (length(lambda) != LLL) lambda <- rep(lambda, len = LLL); if (length(pobs0) != LLL) pobs0 <- rep(pobs0, len = LLL); if (!is.Numeric(pobs0) || any(pobs0 < 0) || any(pobs0 > 1)) stop("argument 'pobs0' must be between 0 and 1 inclusive") ans <- p ind4 <- (p > pobs0) ans[!ind4] <- 0 ans[ ind4] <- qpospois((p[ind4] - pobs0[ind4]) / (1 - pobs0[ind4]), lambda = lambda[ind4]) ans } rzapois <- function(n, lambda, pobs0 = 0) { use.n <- if ((length.n <- length(n)) > 1) length.n else if (!is.Numeric(n, integer.valued = TRUE, allowable.length = 1, positive = TRUE)) stop("bad input for argument 'n'") else n ans <- rpospois(use.n, lambda) if (length(pobs0) != use.n) pobs0 <- rep(pobs0, length = use.n) if (!is.Numeric(pobs0) || any(pobs0 < 0) || any(pobs0 > 1)) stop("argument 'pobs0' must be between 0 and 1 inclusive") ifelse(runif(use.n) < pobs0, 0, ans) } dzipois <- function(x, lambda, pstr0 = 0, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) LLL <- max(length(x), length(lambda), length(pstr0)) if (length(x) != LLL) x <- rep(x, len = LLL); if (length(lambda) != LLL) lambda <- rep(lambda, len = LLL); if 
(length(pstr0) != LLL) pstr0 <- rep(pstr0, len = LLL); ans <- x + lambda + pstr0 index0 <- (x == 0) if (log.arg) { ans[ index0] <- log(pstr0[ index0] + (1 - pstr0[ index0]) * dpois(x[ index0], lambda[ index0])) ans[!index0] <- log1p(-pstr0[!index0]) + dpois(x[!index0], lambda[!index0], log = TRUE) } else { ans[ index0] <- pstr0[ index0] + (1 - pstr0[ index0]) * dpois(x[ index0], lambda[ index0]) ans[!index0] <- (1 - pstr0[!index0]) * dpois(x[!index0], lambda[!index0]) } deflat_limit <- -1 / expm1(lambda) ans[pstr0 < deflat_limit] <- NaN ans[pstr0 > 1] <- NaN ans } pzipois <- function(q, lambda, pstr0 = 0) { LLL <- max(length(pstr0), length(lambda), length(q)) if (length(pstr0) != LLL) pstr0 <- rep(pstr0, len = LLL); if (length(lambda) != LLL) lambda <- rep(lambda, len = LLL); if (length(q) != LLL) q <- rep(q, len = LLL); ans <- ppois(q, lambda) ans <- ifelse(q < 0, 0, pstr0 + (1 - pstr0) * ans) deflat_limit <- -1 / expm1(lambda) ans[pstr0 < deflat_limit] <- NaN ans[pstr0 > 1] <- NaN ans } qzipois <- function(p, lambda, pstr0 = 0) { LLL <- max(length(p), length(lambda), length(pstr0)) ans = p <- rep(p, len = LLL) lambda <- rep(lambda, len = LLL) pstr0 <- rep(pstr0, len = LLL) ans[p <= pstr0] <- 0 pindex <- (p > pstr0) ans[pindex] <- qpois((p[pindex] - pstr0[pindex]) / (1 - pstr0[pindex]), lambda = lambda[pindex]) deflat_limit <- -1 / expm1(lambda) ind0 <- (deflat_limit <= pstr0) & (pstr0 < 0) if (any(ind0)) { pobs0 <- pstr0[ind0] + (1 - pstr0[ind0]) * exp(-lambda[ind0]) ans[p[ind0] <= pobs0] <- 0 pindex <- (1:LLL)[ind0 & (p > pobs0)] Pobs0 <- pstr0[pindex] + (1 - pstr0[pindex]) * exp(-lambda[pindex]) ans[pindex] <- qpospois((p[pindex] - Pobs0) / (1 - Pobs0), lambda = lambda[pindex]) } ans[pstr0 < deflat_limit] <- NaN ans[pstr0 > 1] <- NaN ans[p < 0] <- NaN ans[p > 1] <- NaN ans } rzipois <- function(n, lambda, pstr0 = 0) { use.n <- if ((length.n <- length(n)) > 1) length.n else if (!is.Numeric(n, integer.valued = TRUE, allowable.length = 1, positive = TRUE)) stop("bad input for argument 'n'") else n if (length(pstr0) != use.n) pstr0 <- rep(pstr0, len = use.n); if (length(lambda) != use.n) lambda <- rep(lambda, len = use.n); ans <- rpois(use.n, lambda) ans <- ifelse(runif(use.n) < pstr0, 0, ans) prob0 <- exp(-lambda) deflat_limit <- -1 / expm1(lambda) ind0 <- (deflat_limit <= pstr0) & (pstr0 < 0) if (any(ind0)) { pobs0 <- pstr0[ind0] + (1 - pstr0[ind0]) * prob0[ind0] ans[ind0] <- rpospois(sum(ind0), lambda[ind0]) ans[ind0] <- ifelse(runif(sum(ind0)) < pobs0, 0, ans[ind0]) } ans[pstr0 < deflat_limit] <- NaN ans[pstr0 > 1] <- NaN ans } yip88 <- function(link = "loge", n.arg = NULL) { link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") new("vglmff", blurb = c("Zero-inflated Poisson (based on Yip (1988))\n\n", "Link: ", namesof("lambda", link, earg), "\n", "Variance: (1 - pstr0) * lambda"), first = eval(substitute(expression({ zero <- y == 0 if (any(zero)) { if (length(extra)) extra$sumw <- sum(w) else extra <- list(sumw=sum(w)) if (is.numeric(.n.arg) && extra$sumw != .n.arg) stop("value of 'n.arg' conflicts with data ", "(it need not be specified anyway)") warning("trimming out the zero observations") axa.save <- attr(x, "assign") x <- x[!zero,, drop = FALSE] attr(x, "assign") <- axa.save # Don't lose these!! 
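# The response is trimmed below because yip88() fits the ZIP model via
# the positive counts only; the number of zeros enters through 'n.arg'
# (or extra$sumw, the saved sum of the prior weights).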
w <- w[!zero] y <- y[!zero] } else { if (!is.numeric(.n.arg)) stop("n.arg must be supplied") } }), list( .n.arg = n.arg ))), initialize = eval(substitute(expression({ narg <- if (is.numeric(.n.arg)) .n.arg else extra$sumw if (sum(w) > narg) stop("sum(w) > narg") w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1) predictors.names <- namesof("lambda", .link, list(theta = NULL), tag = FALSE) if (!length(etastart)) { lambda.init <- rep(median(y), length = length(y)) etastart <- theta2eta(lambda.init, .link , earg = .earg ) } if (length(extra)) { extra$sumw <- sum(w) extra$narg <- narg # For @linkinv } else { extra <- list(sumw = sum(w), narg = narg) } }), list( .link = link, .earg = earg, .n.arg = n.arg ))), linkinv = eval(substitute(function(eta, extra = NULL) { lambda <- eta2theta(eta, .link, .earg) temp5 <- exp(-lambda) pstr0 <- (1 - temp5 - extra$sumw/extra$narg) / (1 - temp5) if (any(pstr0 <= 0)) stop("non-positive value(s) of pstr0") (1 - pstr0) * lambda }, list( .link = link, .earg = earg ))), last = eval(substitute(expression({ misc$link <- c(lambda = .link ) misc$earg <- list(lambda = .earg ) if (intercept.only) { suma <- extra$sumw pstr0 <- (1 - temp5[1] - suma / narg) / (1 - temp5[1]) pstr0 <- if (pstr0 < 0 || pstr0 > 1) NA else pstr0 misc$pstr0 <- pstr0 } }), list( .link = link, .earg = earg ))), loglikelihood = eval(substitute(function(mu, y, w, residuals = FALSE, eta, extra = NULL) { lambda <- eta2theta(eta, .link) temp5 <- exp(-lambda) pstr0 <- (1 - temp5 - extra$sumw / extra$narg) / (1 - temp5) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else { sum(c(w) * dzipois(x = y, pstr0 = pstr0, lambda = lambda, log = TRUE)) } }, list( .link = link, .earg = earg ))), vfamily = c("yip88"), deriv = eval(substitute(expression({ lambda <- eta2theta(eta, .link , earg = .earg ) temp5 <- exp(-lambda) dl.dlambda <- -1 + y/lambda - temp5/(1-temp5) dlambda.deta <- dtheta.deta(lambda, .link , earg = .earg ) w * dl.dlambda * dlambda.deta }), list( .link = link, .earg = earg ))), weight = eval(substitute(expression({ d2lambda.deta2 <- d2theta.deta2(lambda, .link , earg = .earg ) d2l.dlambda2 <- -y / lambda^2 + temp5 / (1 - temp5)^2 -w * (d2l.dlambda2*dlambda.deta^2 + dl.dlambda*d2lambda.deta2) }), list( .link = link, .earg = earg )))) } zapoisson <- function(lpobs0 = "logit", llambda = "loge", zero = NULL) { lpobs_0 <- as.list(substitute(lpobs0)) epobs_0 <- link2list(lpobs_0) lpobs_0 <- attr(epobs_0, "function.name") llambda <- as.list(substitute(llambda)) elambda <- link2list(llambda) llambda <- attr(elambda, "function.name") new("vglmff", blurb = c("Zero-altered Poisson ", "(Bernoulli and positive-Poisson conditional model)\n\n", "Links: ", namesof("pobs0", lpobs_0, earg = epobs_0, tag = FALSE), ", ", namesof("lambda", llambda, earg = elambda, tag = FALSE), "\n", "Mean: (1 - pobs0) * lambda / (1 - exp(-lambda))"), constraints = eval(substitute(expression({ dotzero <- .zero Musual <- 2 eval(negzero.expression) }), list( .zero = zero ))), infos = eval(substitute(function(...) 
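# The 'infos' slot below just reports constants to the fitting
# machinery: Musual = 2 linear predictors per response (pobs0 and
# lambda), and 'zero' flags any predictors modelled as intercept-only.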
{ list(Musual = 2, zero = .zero ) }, list( .zero = zero ))), initialize = eval(substitute(expression({ Musual <- 2 if (any(y < 0)) stop("the response must not have negative values") temp5 <- w.y.check(w = w, y = y, ncol.w.max = Inf, ncol.y.max = Inf, Is.integer.y = TRUE, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y extra$y0 <- y0 <- ifelse(y == 0, 1, 0) extra$NOS <- NOS <- ncoly <- ncol(y) # Number of species extra$skip.these <- skip.these <- matrix(as.logical(y0), n, NOS) mynames1 <- if (ncoly == 1) "pobs0" else paste("pobs0", 1:ncoly, sep = "") mynames2 <- if (ncoly == 1) "lambda" else paste("lambda", 1:ncoly, sep = "") predictors.names <- c(namesof(mynames1, .lpobs_0, earg = .epobs_0, tag = FALSE), namesof(mynames2, .llambda, earg = .elambda, tag = FALSE))[ interleave.VGAM(Musual*NOS, M = Musual)] if (!length(etastart)) { etastart <- cbind(theta2eta((0.5 + w*y0) / (1+w), .lpobs_0, earg = .epobs_0 ), matrix(1, n, NOS)) # 1 here is any old value for(spp. in 1:NOS) { sthese <- skip.these[, spp.] etastart[!sthese, NOS+spp.] = theta2eta(y[!sthese, spp.] / (-expm1(-y[!sthese, spp.])), .llambda, earg = .elambda ) } etastart <- etastart[, interleave.VGAM(ncol(etastart), M = Musual)] } }), list( .lpobs_0 = lpobs_0, .llambda = llambda, .epobs_0 = epobs_0, .elambda = elambda ))), linkinv = eval(substitute(function(eta, extra = NULL) { NOS <- extra$NOS Musual <- 2 pobs_0 <- cbind(eta2theta(eta[, Musual*(1:NOS)-1, drop = FALSE], .lpobs_0, earg = .epobs_0 )) lambda <- cbind(eta2theta(eta[, Musual*(1:NOS)-0, drop = FALSE], .llambda, earg = .elambda )) (1 - pobs_0) * lambda / (-expm1(-lambda)) }, list( .lpobs_0 = lpobs_0, .llambda = llambda, .epobs_0 = epobs_0, .elambda = elambda ))), last = eval(substitute(expression({ misc$expected <- TRUE misc$multipleResponses <- TRUE temp.names <- c(rep( .lpobs_0 , len = NOS), rep( .llambda , len = NOS)) temp.names <- temp.names[interleave.VGAM(Musual*NOS, M = Musual)] misc$link <- temp.names names(misc$link) <- c(mynames1, mynames2)[interleave.VGAM(Musual*NOS, M = Musual)] misc$earg <- vector("list", Musual * NOS) names(misc$earg) <- names(misc$link) for(ii in 1:NOS) { misc$earg[[Musual*ii-1]] <- .epobs_0 misc$earg[[Musual*ii ]] <- .elambda } }), list( .lpobs_0 = lpobs_0, .llambda = llambda, .epobs_0 = epobs_0, .elambda = elambda ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { NOS <- extra$NOS Musual <- 2 pobs0 <- cbind(eta2theta(eta[, Musual*(1:NOS)-1, drop = FALSE], .lpobs_0, earg = .epobs_0)) lambda <- cbind(eta2theta(eta[, Musual*(1:NOS)-0, drop = FALSE], .llambda, earg = .elambda )) if (residuals) stop("loglikelihood residuals not implemented yet") else { sum(c(w) * dzapois(x = y, pobs0 = pobs0, lambda = lambda, log = TRUE)) } }, list( .lpobs_0 = lpobs_0, .llambda = llambda, .epobs_0 = epobs_0, .elambda = elambda ))), vfamily = c("zapoisson"), deriv = eval(substitute(expression({ Musual <- 2 NOS <- extra$NOS y0 <- extra$y0 skip <- extra$skip.these phimat <- cbind(eta2theta(eta[, Musual*(1:NOS)-1, drop = FALSE], .lpobs_0, earg = .epobs_0 )) lambda <- cbind(eta2theta(eta[, Musual*(1:NOS)-0, drop = FALSE], .llambda, earg = .elambda )) dl.dlambda <- y / lambda + 1 / expm1(-lambda) dl.dphimat <- -1 / (1 - phimat) # For y > 0 obsns for(spp. in 1:NOS) { dl.dphimat[skip[, spp.], spp.] <- 1 / phimat[skip[, spp.], spp.] dl.dlambda[skip[, spp.], spp.] 
<- 0 } dlambda.deta <- dtheta.deta(lambda, .llambda, earg = .elambda) mu.phi0 <- phimat temp3 <- if (.lpobs_0 == "logit") { c(w) * (y0 - mu.phi0) } else { c(w) * dtheta.deta(mu.phi0, link = .lpobs_0 , earg = .epobs_0 ) * dl.dphimat } ans <- cbind(temp3, c(w) * dl.dlambda * dlambda.deta) ans <- ans[, interleave.VGAM(ncol(ans), M = Musual)] ans }), list( .lpobs_0 = lpobs_0, .llambda = llambda, .epobs_0 = epobs_0, .elambda = elambda ))), weight = eval(substitute(expression({ wz <- matrix(0.0, n, Musual * NOS) temp5 <- expm1(lambda) ned2l.dlambda2 <- (1 - phimat) * (temp5 + 1) * (1 / lambda - 1 / temp5) / temp5 wz[, NOS+(1:NOS)] <- w * ned2l.dlambda2 * dlambda.deta^2 tmp100 <- mu.phi0 * (1.0 - mu.phi0) tmp200 <- if ( .lpobs_0 == "logit" && is.empty.list( .epobs_0 )) { cbind(c(w) * tmp100) } else { cbind(c(w) * (1 / tmp100) * dtheta.deta(mu.phi0, link = .lpobs_0, earg = .epobs_0)^2) } if (FALSE) for(ii in 1:NOS) { index200 <- abs(tmp200[, ii]) < .Machine$double.eps if (any(index200)) { tmp200[index200, ii] <- 10.0 * .Machine$double.eps^(3/4) } } wz[, 1:NOS] <- tmp200 wz <- wz[, interleave.VGAM(ncol(wz), M = Musual)] wz }), list( .lpobs_0 = lpobs_0, .epobs_0 = epobs_0 )))) } # End of zapoisson zanegbinomial.control <- function(save.weight = TRUE, ...) { list(save.weight = save.weight) } zanegbinomial <- function(lpobs0 = "logit", lmunb = "loge", lsize = "loge", ipobs0 = NULL, isize = NULL, zero = c(-1, -3), imethod = 1, nsimEIM = 250, shrinkage.init = 0.95) { if (!is.Numeric(nsimEIM, allowable.length = 1, positive = TRUE, integer.valued = TRUE)) stop("argument 'nsimEIM' must be a positive integer") if (nsimEIM <= 30) warning("argument 'nsimEIM' should be greater than 30, say") if (length(ipobs0) && (!is.Numeric(ipobs0, positive = TRUE) || max(ipobs0) >= 1)) stop("If given, argument 'ipobs0' must contain values in (0,1) only") if (length(isize) && !is.Numeric(isize, positive = TRUE)) stop("If given, argument 'isize' must contain positive values only") if (!is.Numeric(imethod, allowable.length = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("argument 'imethod' must be 1 or 2") if (!is.Numeric(shrinkage.init, allowable.length = 1) || shrinkage.init < 0 || shrinkage.init > 1) stop("bad input for argument 'shrinkage.init'") lmunb <- as.list(substitute(lmunb)) emunb <- link2list(lmunb) lmunb <- attr(emunb, "function.name") lsize <- as.list(substitute(lsize)) esize <- link2list(lsize) lsize <- attr(esize, "function.name") lpobs0 <- as.list(substitute(lpobs0)) epobs0 <- link2list(lpobs0) lpobs0 <- attr(epobs0, "function.name") new("vglmff", blurb = c("Zero-altered negative binomial (Bernoulli and\n", "positive-negative binomial conditional model)\n\n", "Links: ", namesof("pobs0", lpobs0, earg = epobs0, tag = FALSE), ", ", namesof("munb", lmunb, earg = emunb, tag = FALSE), ", ", namesof("size", lsize, earg = esize, tag = FALSE), "\n", "Mean: (1 - pobs0) * munb / (1 - (size / (size + ", "munb))^size)"), constraints = eval(substitute(expression({ dotzero <- .zero Musual <- 3 eval(negzero.expression) }), list( .zero = zero ))), initialize = eval(substitute(expression({ Musual <- 3 if (any(y < 0)) stop("the response must not have negative values") temp5 <- w.y.check(w = w, y = y, ncol.w.max = Inf, ncol.y.max = Inf, Is.integer.y = TRUE, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y extra$NOS <- NOS <- ncoly <- ncol(y) # Number of species M <- Musual * ncoly # mynames1 <- if (NOS == 1) "pobs0" else paste("pobs0", 1:NOS, sep = "") mynames2 <- if (NOS == 1) "munb" 
else paste("munb", 1:NOS, sep = "") mynames3 <- if (NOS == 1) "size" else paste("size", 1:NOS, sep = "") predictors.names <- c(namesof(mynames1, .lpobs0 , earg = .epobs0 , tag = FALSE), namesof(mynames2, .lmunb , earg = .emunb , tag = FALSE), namesof(mynames3, .lsize , earg = .esize , tag = FALSE))[ interleave.VGAM(Musual*NOS, M = Musual)] extra$y0 <- y0 <- ifelse(y == 0, 1, 0) extra$skip.these <- skip.these <- matrix(as.logical(y0), n, NOS) if (!length(etastart)) { mu.init <- y for(iii in 1:ncol(y)) { index.posy <- (y[, iii] > 0) if ( .imethod == 1) { use.this <- weighted.mean(y[index.posy, iii], w[index.posy, iii]) mu.init[ index.posy, iii] <- (1 - .sinit ) * y[index.posy, iii] + .sinit * use.this mu.init[!index.posy, iii] <- use.this } else { use.this <- mu.init[, iii] <- (y[, iii] + weighted.mean(y[index.posy, iii], w[index.posy, iii])) / 2 } if (TRUE) { max.use.this <- 7 * use.this + 10 vecTF <- (mu.init[, iii] > max.use.this) if (any(vecTF)) mu.init[vecTF, iii] <- max.use.this } } pnb0 <- matrix(if (length( .ipobs0 )) .ipobs0 else -1, nrow = n, ncol = NOS, byrow = TRUE) for(spp. in 1:NOS) { if (any(pnb0[, spp.] < 0)) { index.y0 <- y[, spp.] < 0.5 pnb0[, spp.] <- max(min(sum(index.y0)/n, 0.97), 0.03) } } if ( is.Numeric( .isize )) { kmat0 <- matrix( .isize , nrow = n, ncol = ncoly, byrow = TRUE) } else { posnegbinomial.Loglikfun <- function(kmat, y, x, w, extraargs) { munb <- extraargs sum(c(w) * dposnegbin(x = y, munb = munb, size = kmat, log = TRUE)) } k.grid <- 2^((-6):6) kmat0 <- matrix(0, nrow = n, ncol = NOS) for(spp. in 1:NOS) { index.posy <- y[, spp.] > 0 posy <- y[index.posy, spp.] kmat0[, spp.] <- getMaxMin(k.grid, objfun = posnegbinomial.Loglikfun, y = posy, x = x[index.posy, ], w = w[index.posy, spp.], extraargs = mu.init[index.posy, spp.]) } } etastart <- cbind(theta2eta(pnb0, .lpobs0 , earg = .epobs0 ), theta2eta(mu.init, .lmunb , earg = .emunb ), theta2eta(kmat0, .lsize , earg = .esize )) etastart <- etastart[, interleave.VGAM(ncol(etastart), M = Musual)] } # End of if (!length(etastart)) }), list( .lpobs0 = lpobs0, .lmunb = lmunb, .lsize = lsize, .epobs0 = epobs0, .emunb = emunb, .esize = esize, .ipobs0 = ipobs0, .isize = isize, .imethod = imethod, .sinit = shrinkage.init ))), linkinv = eval(substitute(function(eta, extra = NULL) { Musual <- 3 NOS <- extra$NOS phi0 <- eta2theta(eta[, Musual*(1:NOS)-2], .lpobs0 , earg = .epobs0 ) munb <- eta2theta(eta[, Musual*(1:NOS)-1], .lmunb , earg = .emunb ) kmat <- eta2theta(eta[, Musual*(1:NOS) ], .lsize , earg = .esize ) pnb0 <- (kmat / (kmat + munb))^kmat # p(0) from negative binomial (1 - phi0) * munb / (1 - pnb0) }, list( .lpobs0 = lpobs0, .lsize = lsize, .lmunb = lmunb, .epobs0 = epobs0, .emunb = emunb, .esize = esize ))), last = eval(substitute(expression({ misc$link = c(rep( .lpobs0 , length = NOS), rep( .lmunb , length = NOS), rep( .lsize , length = NOS))[interleave.VGAM(Musual*NOS, M = Musual)] temp.names <- c(mynames1, mynames2, mynames3)[interleave.VGAM(Musual*NOS, M = Musual)] names(misc$link) <- temp.names misc$earg <- vector("list", Musual*NOS) names(misc$earg) <- temp.names for(ii in 1:NOS) { misc$earg[[Musual*ii-2]] <- .epobs0 misc$earg[[Musual*ii-1]] <- .emunb misc$earg[[Musual*ii ]] <- .esize } misc$nsimEIM <- .nsimEIM misc$imethod <- .imethod misc$ipobs0 <- .ipobs0 misc$isize <- .isize misc$multipleResponses <- TRUE }), list( .lpobs0 = lpobs0, .lmunb = lmunb, .lsize = lsize, .epobs0 = epobs0, .emunb = emunb, .esize = esize, .ipobs0 = ipobs0, .isize = isize, .nsimEIM = nsimEIM, .imethod = imethod ))), 
loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { NOS <- extra$NOS Musual <- 3 phi0 <- eta2theta(eta[, Musual*(1:NOS)-2], .lpobs0 , earg = .epobs0 ) munb <- eta2theta(eta[, Musual*(1:NOS)-1], .lmunb , earg = .emunb ) kmat <- eta2theta(eta[, Musual*(1:NOS) ], .lsize , earg = .esize ) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else { sum(c(w) * dzanegbin(x = y, pobs0 = phi0, munb = munb, size = kmat, log = TRUE)) } }, list( .lpobs0 = lpobs0, .lmunb = lmunb, .lsize = lsize, .epobs0 = epobs0, .emunb = emunb, .esize = esize ))), vfamily = c("zanegbinomial"), deriv = eval(substitute(expression({ Musual <- 3 NOS <- extra$NOS y0 <- extra$y0 phi0 <- eta2theta(eta[, Musual*(1:NOS)-2, drop = FALSE], .lpobs0 , earg = .epobs0 ) munb <- eta2theta(eta[, Musual*(1:NOS)-1, drop = FALSE], .lmunb , earg = .emunb ) kmat <- eta2theta(eta[, Musual*(1:NOS) , drop = FALSE], .lsize , earg = .esize ) skip <- extra$skip.these dphi0.deta <- dtheta.deta(phi0, .lpobs0 , earg = .epobs0 ) dmunb.deta <- dtheta.deta(munb, .lmunb , earg = .emunb ) dsize.deta <- dtheta.deta(kmat, .lsize , earg = .esize ) tempk <- kmat / (kmat + munb) tempm <- munb / (kmat + munb) prob0 <- tempk^kmat oneminusf0 <- 1 - prob0 df0.dmunb <- -tempk * prob0 df0.dkmat <- prob0 * (tempm + log(tempk)) dl.dphi0 <- -1 / (1 - phi0) dl.dmunb <- y / munb - (y + kmat) / (munb + kmat) + df0.dmunb / oneminusf0 dl.dsize <- digamma(y + kmat) - digamma(kmat) - (y + kmat)/(munb + kmat) + 1 + log(tempk) + df0.dkmat / oneminusf0 dl.dphi0[y == 0] <- 1 / phi0[y == 0] # Do it in one line skip <- extra$skip.these for(spp. in 1:NOS) { dl.dsize[skip[, spp.], spp.] <- dl.dmunb[skip[, spp.], spp.] <- 0 } dl.deta23 <- c(w) * cbind(dl.dmunb * dmunb.deta, dl.dsize * dsize.deta) muphi0 <- phi0 dl.deta1 <- if ( .lpobs0 == "logit") { c(w) * (y0 - muphi0) } else { c(w) * dphi0.deta * (y0 / muphi0 - 1) / (1 - muphi0) } ans <- cbind(dl.deta1, dl.deta23) ans <- ans[, interleave.VGAM(ncol(ans), M = Musual)] ans }), list( .lpobs0 = lpobs0 , .lmunb = lmunb , .lsize = lsize , .epobs0 = epobs0 , .emunb = emunb , .esize = esize ))), weight = eval(substitute(expression({ six <- dimm(Musual) wz = run.varcov <- matrix(0.0, n, six*NOS-1) Musualm1 <- Musual - 1 ind2 <- iam(NA, NA, M = Musual - 1, both = TRUE, diag = TRUE) for(ii in 1:( .nsimEIM )) { ysim <- rzanegbin(n = n*NOS, pobs0 = phi0, size = kmat, mu = munb) dim(ysim) <- c(n, NOS) dl.dphi0 <- -1 / (1 - phi0) dl.dmunb <- ysim / munb - (ysim + kmat) / (munb + kmat) + df0.dmunb / oneminusf0 dl.dsize <- digamma(ysim + kmat) - digamma(kmat) - (ysim + kmat)/(munb + kmat) + 1 + log(tempk) + df0.dkmat / oneminusf0 dl.dphi0[ysim == 0] <- 1 / phi0[ysim == 0] # Do it in one line ysim0 <- ifelse(ysim == 0, 1, 0) skip.sim <- matrix(as.logical(ysim0), n, NOS) for(spp. in 1:NOS) { dl.dsize[skip.sim[, spp.], spp.] <- dl.dmunb[skip.sim[, spp.], spp.] 
<- 0 } for(kk in 1:NOS) { temp2 <- cbind(dl.dmunb[, kk] * dmunb.deta[, kk], dl.dsize[, kk] * dsize.deta[, kk]) small.varcov <- temp2[, ind2$row.index] * temp2[, ind2$col.index] run.varcov[, ((kk-1)*Musual+2):(kk*Musual)] = run.varcov[, ((kk-1)*Musual+2):(kk*Musual)] + c(small.varcov[, 1:Musualm1]) run.varcov[, M + (kk-1)*Musual + 2] = run.varcov[, M + (kk-1)*Musual + 2] + c(small.varcov[, Musualm1 + 1]) } # kk; end of NOS } # ii; end of nsimEIM run.varcov <- cbind(run.varcov / .nsimEIM ) run.varcov <- if (intercept.only) matrix(colMeans(run.varcov), n, ncol(run.varcov), byrow = TRUE) else run.varcov wzind1 <- sort(c( Musual*(1:NOS) - 1, Musual*(1:NOS) - 0, M + Musual*(1:NOS) - 1)) wz[, wzind1] <- c(w) * run.varcov[, wzind1] tmp100 <- muphi0 * (1 - muphi0) tmp200 <- if ( .lpobs0 == "logit") { cbind(c(w) * tmp100) } else { c(w) * cbind(dphi0.deta^2 / tmp100) } for(ii in 1:NOS) { index200 <- abs(tmp200[, ii]) < .Machine$double.eps if (any(index200)) { tmp200[index200, ii] <- .Machine$double.eps # Diagonal 0's are bad } } wz[, Musual*(1:NOS)-2] <- tmp200 wz }), list( .lpobs0 = lpobs0, .epobs0 = epobs0, .nsimEIM = nsimEIM )))) } # End of zanegbinomial() if (FALSE) rposnegbin <- function(n, munb, size) { if (!is.Numeric(size, positive = TRUE)) stop("argument 'size' must be positive") if (!is.Numeric(munb, positive = TRUE)) stop("argument 'munb' must be positive") if (!is.Numeric(n, positive = TRUE, integer.valued = TRUE, allowable.length = 1)) stop("argument 'n' must be a positive integer") ans <- rnbinom(n = n, mu = munb, size = size) munb <- rep(munb, length = n) size <- rep(size, length = n) index <- ans == 0 while(any(index)) { more <- rnbinom(n = sum(index), mu = munb[index], size = size[index]) ans[index] <- more index <- ans == 0 } ans } if (FALSE) dposnegbin <- function(x, munb, size, log = FALSE) { if (!is.Numeric(size, positive = TRUE)) stop("argument 'size' must be positive") if (!is.Numeric(munb, positive = TRUE)) stop("argument 'munb' must be positive") ans <- dnbinom(x = x, mu = munb, size = size, log=log) ans0 <- dnbinom(x=0, mu = munb, size = size, log = FALSE) ans <- if (log) ans - log1p(-ans0) else ans/(1-ans0) ans[x == 0] <- if (log) -Inf else 0 ans } zipoisson <- function(lpstr0 = "logit", llambda = "loge", ipstr0 = NULL, ilambda = NULL, imethod = 1, shrinkage.init = 0.8, zero = NULL) { ipstr00 <- ipstr0 lpstr0 <- as.list(substitute(lpstr0)) epstr00 <- link2list(lpstr0) lpstr00 <- attr(epstr00, "function.name") llambda <- as.list(substitute(llambda)) elambda <- link2list(llambda) llambda <- attr(elambda, "function.name") if (length(ipstr00)) if (!is.Numeric(ipstr00, positive = TRUE) || any(ipstr00 >= 1)) stop("argument 'ipstr0' values must be inside the interval (0,1)") if (length(ilambda)) if (!is.Numeric(ilambda, positive = TRUE)) stop("argument 'ilambda' values must be positive") if (!is.Numeric(imethod, allowable.length = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("argument 'imethod' must be 1 or 2") if (!is.Numeric(shrinkage.init, allowable.length = 1) || shrinkage.init < 0 || shrinkage.init > 1) stop("bad input for argument 'shrinkage.init'") new("vglmff", blurb = c("Zero-inflated Poisson\n\n", "Links: ", namesof("pstr0", lpstr00, earg = epstr00 ), ", ", namesof("lambda", llambda, earg = elambda ), "\n", "Mean: (1 - pstr0) * lambda"), constraints = eval(substitute(expression({ dotzero <- .zero Musual <- 2 eval(negzero.expression) }), list( .zero = zero ))), infos = eval(substitute(function(...) 
{ list(Musual = 2, zero = .zero ) }, list( .zero = zero ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, ncol.w.max = Inf, ncol.y.max = Inf, Is.integer.y = TRUE, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y ncoly <- ncol(y) Musual <- 2 extra$ncoly <- ncoly extra$Musual <- Musual M <- Musual * ncoly if (any(round(y) != y)) stop("integer-valued responses only allowed for ", "the 'zipoisson' family") mynames1 <- paste("pstr0", if (ncoly > 1) 1:ncoly else "", sep = "") mynames2 <- paste("lambda", if (ncoly > 1) 1:ncoly else "", sep = "") predictors.names <- c(namesof(mynames1, .lpstr00 , earg = .epstr00 , tag = FALSE), namesof(mynames2, .llambda , earg = .elambda , tag = FALSE))[ interleave.VGAM(M, M = Musual)] if (!length(etastart)) { matL <- matrix(if (length( .ilambda )) .ilambda else 0, n, ncoly, byrow = TRUE) matP <- matrix(if (length( .ipstr00 )) .ipstr00 else 0, n, ncoly, byrow = TRUE) for (spp. in 1:ncoly) { yvec <- y[, spp.] Phi.init <- 1 - 0.85 * sum(w[yvec > 0]) / sum(w) Phi.init[Phi.init <= 0.02] <- 0.02 # Last resort Phi.init[Phi.init >= 0.98] <- 0.98 # Last resort if ( length(mustart)) { mustart <- matrix(mustart, n, ncoly) # Make sure right size Lambda.init <- mustart / (1 - Phi.init) } else if ( .imethod == 2) { mymean <- weighted.mean(yvec[yvec > 0], w[yvec > 0]) + 1/16 Lambda.init <- (1 - .sinit) * (yvec + 1/8) + .sinit * mymean } else { use.this <- median(yvec[yvec > 0]) + 1 / 16 Lambda.init <- (1 - .sinit) * (yvec + 1/8) + .sinit * use.this } zipois.Loglikfun <- function(phival, y, x, w, extraargs) { sum(c(w) * dzipois(x = y, pstr0 = phival, lambda = extraargs$lambda, log = TRUE)) } phi.grid <- seq(0.02, 0.98, len = 21) Phimat.init <- getMaxMin(phi.grid, objfun = zipois.Loglikfun, y = y, x = x, w = w, extraargs = list(lambda = Lambda.init)) if (length(mustart)) { Lambda.init <- Lambda.init / (1 - Phimat.init) } if (!length( .ipstr00 )) matP[, spp.] <- Phimat.init if (!length( .ilambda )) matL[, spp.] <- Lambda.init } # spp. etastart <- cbind(theta2eta(matP, .lpstr00, earg = .epstr00 ), theta2eta(matL, .llambda, earg = .elambda ))[, interleave.VGAM(M, M = Musual)] mustart <- NULL # Since etastart has been computed. 
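# At this point 'etastart' holds starting values for both parameters
# on their link scales, interleaved per response as
# (pstr0_1, lambda_1, pstr0_2, lambda_2, ...); 'mustart' is cleared
# above so that these starting values are used directly.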
} # End of !length(etastart) }), list( .lpstr00 = lpstr00, .llambda = llambda, .epstr00 = epstr00, .elambda = elambda, .ipstr00 = ipstr00, .ilambda = ilambda, .imethod = imethod, .sinit = shrinkage.init ))), linkinv = eval(substitute(function(eta, extra = NULL) { phimat <- eta2theta(eta[, c(TRUE, FALSE)], .lpstr00 , earg = .epstr00 ) lambda <- eta2theta(eta[, c(FALSE, TRUE)], .llambda , earg = .elambda ) (1 - phimat) * lambda }, list( .lpstr00 = lpstr00, .llambda = llambda, .epstr00 = epstr00, .elambda = elambda ))), last = eval(substitute(expression({ Musual <- extra$Musual misc$link <- c(rep( .lpstr00 , length = ncoly), rep( .llambda , length = ncoly))[interleave.VGAM(M, M = Musual)] temp.names <- c(mynames1, mynames2)[interleave.VGAM(M, M = Musual)] names(misc$link) <- temp.names misc$earg <- vector("list", M) names(misc$earg) <- temp.names for(ii in 1:ncoly) { misc$earg[[Musual*ii-1]] <- .epstr00 misc$earg[[Musual*ii ]] <- .elambda } misc$Musual <- Musual misc$imethod <- .imethod misc$expected <- TRUE misc$multipleResponses <- TRUE misc$pobs0 <- phimat + (1 - phimat) * exp(-lambda) # P(Y=0) if (length(dimnames(y)[[2]]) > 0) dimnames(misc$pobs0) <- dimnames(y) misc$pstr0 <- phimat if (length(dimnames(y)[[2]]) > 0) dimnames(misc$pstr0) <- dimnames(y) }), list( .lpstr00 = lpstr00, .llambda = llambda, .epstr00 = epstr00, .elambda = elambda, .imethod = imethod ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { phimat <- eta2theta(eta[, c(TRUE, FALSE)], .lpstr00 , earg = .epstr00 ) lambda <- eta2theta(eta[, c(FALSE, TRUE)], .llambda , earg = .elambda ) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else { sum(c(w) * dzipois(x = y, pstr0 = phimat, lambda = lambda, log = TRUE)) } }, list( .lpstr00 = lpstr00, .llambda = llambda, .epstr00 = epstr00, .elambda = elambda ))), vfamily = c("zipoisson"), deriv = eval(substitute(expression({ Musual <- 2 phimat <- eta2theta(eta[, c(TRUE, FALSE), drop = FALSE], .lpstr00 , earg = .epstr00 ) lambda <- eta2theta(eta[, c(FALSE, TRUE), drop = FALSE], .llambda , earg = .elambda ) prob0 <- exp(-lambda) pobs0 <- phimat + (1 - phimat) * prob0 index0 <- as.matrix(y == 0) dl.dphimat <- -expm1(-lambda) / pobs0 dl.dphimat[!index0] <- -1 / (1 - phimat[!index0]) dl.dlambda <- -(1 - phimat) * exp(-lambda) / pobs0 dl.dlambda[!index0] <- (y[!index0] - lambda[!index0]) / lambda[!index0] dphimat.deta <- dtheta.deta(phimat, .lpstr00 , earg = .epstr00 ) dlambda.deta <- dtheta.deta(lambda, .llambda , earg = .elambda ) ans <- c(w) * cbind(dl.dphimat * dphimat.deta, dl.dlambda * dlambda.deta) ans <- ans[, interleave.VGAM(M, M = Musual)] if ( .llambda == "loge" && is.empty.list( .elambda ) && any(lambda[!index0] < .Machine$double.eps)) { for(spp. in 1:(M / Musual)) { ans[!index0[, spp.], Musual * spp.] <- w[!index0[, spp.]] * (y[!index0[, spp.], spp.] 
- lambda[!index0[, spp.], spp.]) } } ans }), list( .lpstr00 = lpstr00, .llambda = llambda, .epstr00 = epstr00, .elambda = elambda ))), weight = eval(substitute(expression({ wz <- matrix(0.0, nrow = n, ncol = M + M-1) ned2l.dphimat2 <- -expm1(-lambda) / ((1 - phimat) * pobs0) ned2l.dphimatlambda <- -exp(-lambda) / pobs0 ned2l.dlambda2 <- (1 - phimat) / lambda - phimat * (1 - phimat) * exp(-lambda) / pobs0 wz <- array(c(c(w) * ned2l.dphimat2 * dphimat.deta^2, c(w) * ned2l.dlambda2 * dlambda.deta^2, c(w) * ned2l.dphimatlambda * dphimat.deta * dlambda.deta), dim = c(n, M / Musual, 3)) wz <- arwz2wz(wz, M = M, Musual = Musual) wz }), list( .llambda = llambda, .elambda = elambda )))) } # zipoisson zibinomial <- function(lpstr0 = "logit", lprob = "logit", ipstr0 = NULL, zero = 1, mv = FALSE, imethod = 1) { if (as.logical(mv)) stop("argument 'mv' must be FALSE") lpstr0 <- as.list(substitute(lpstr0)) epstr0 <- link2list(lpstr0) lpstr0 <- attr(epstr0, "function.name") lprob <- as.list(substitute(lprob)) eprob <- link2list(lprob) lprob <- attr(eprob, "function.name") if (is.Numeric(ipstr0)) if (!is.Numeric(ipstr0, positive = TRUE) || any(ipstr0 >= 1)) stop("'ipstr0' values must be inside the interval (0,1)") if (!is.Numeric(imethod, allowable.length = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("argument 'imethod' must be 1 or 2") new("vglmff", blurb = c("Zero-inflated binomial\n\n", "Links: ", namesof("pstr0", lpstr0, earg = epstr0), ", ", namesof("prob" , lprob , earg = eprob ), "\n", "Mean: (1 - pstr0) * prob"), constraints = eval(substitute(expression({ constraints <- cm.zero.vgam(constraints, x, .zero , M) }), list( .zero = zero ))), initialize = eval(substitute(expression({ if (!all(w == 1)) extra$orig.w <- w { NCOL <- function (x) if (is.array(x) && length(dim(x)) > 1 || is.data.frame(x)) ncol(x) else as.integer(1) if (NCOL(y) == 1) { if (is.factor(y)) y <- y != levels(y)[1] nn <- rep(1, n) if (!all(y >= 0 & y <= 1)) stop("response values must be in [0, 1]") if (!length(mustart) && !length(etastart)) mustart <- (0.5 + w * y) / (1.0 + w) no.successes <- y if (min(y) < 0) stop("Negative data not allowed!") if (any(abs(no.successes - round(no.successes)) > 1.0e-8)) stop("Number of successes must be integer-valued") } else if (NCOL(y) == 2) { if (min(y) < 0) stop("Negative data not allowed!") if (any(abs(y - round(y)) > 1.0e-8)) stop("Count data must be integer-valued") y <- round(y) nvec <- y[, 1] + y[, 2] y <- ifelse(nvec > 0, y[, 1] / nvec, 0) w <- w * nvec if (!length(mustart) && !length(etastart)) mustart <- (0.5 + nvec * y) / (1 + nvec) } else { stop("for the binomialff family, response 'y' must be a ", "vector of 0 and 1's\n", "or a factor ", "(first level = fail, other levels = success),\n", "or a 2-column matrix where col 1 is the no. of ", "successes and col 2 is the no. 
of failures") } } if ( .imethod == 1) mustart <- (mustart + y) / 2 predictors.names <- c(namesof("pstr0", .lpstr0 , earg = .epstr0 , tag = FALSE), namesof("prob" , .lprob , earg = .eprob , tag = FALSE)) phi.init <- if (length( .ipstr0 )) .ipstr0 else { prob0.est <- sum(w[y == 0]) / sum(w) if ( .imethod == 1) { (prob0.est - (1 - mustart)^w) / (1 - (1 - mustart)^w) } else { prob0.est } } phi.init[phi.init <= -0.10] <- 0.10 # Lots of sample variation phi.init[phi.init <= 0.05] <- 0.15 # Last resort phi.init[phi.init >= 0.80] <- 0.80 # Last resort if ( length(mustart) && !length(etastart)) mustart <- cbind(rep(phi.init, len = n), mustart) # 1st coln not a real mu }), list( .lpstr0 = lpstr0, .lprob = lprob, .epstr0 = epstr0, .eprob = eprob, .ipstr0 = ipstr0, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { pstr0 <- eta2theta(eta[, 1], .lpstr0 , earg = .epstr0 ) mubin <- eta2theta(eta[, 2], .lprob , earg = .eprob ) (1 - pstr0) * mubin }, list( .lpstr0 = lpstr0, .lprob = lprob, .epstr0 = epstr0, .eprob = eprob ))), last = eval(substitute(expression({ misc$link <- c("pstr0" = .lpstr0 , "prob" = .lprob ) misc$earg <- list("pstr0" = .epstr0 , "prob" = .eprob ) misc$imethod <- .imethod misc$pobs0 <- phi + (1 - phi) * (1 - mubin)^w # [1] # P(Y=0) misc$pstr0 <- phi }), list( .lpstr0 = lpstr0, .lprob = lprob, .epstr0 = epstr0, .eprob = eprob, .imethod = imethod ))), linkfun = eval(substitute(function(mu, extra = NULL) { cbind(theta2eta(mu[, 1], .lpstr0 , earg = .epstr0 ), theta2eta(mu[, 2], .lprob , earg = .eprob )) }, list( .lpstr0 = lpstr0, .lprob = lprob, .epstr0 = epstr0, .eprob = eprob ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { pstr0 <- eta2theta(eta[, 1], .lpstr0 , earg = .epstr0 ) mubin <- eta2theta(eta[, 2], .lprob , earg = .eprob ) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else { sum(dzibinom(x = round(w * y), size = w, prob = mubin, log = TRUE, pstr0 = pstr0)) } }, list( .lpstr0 = lpstr0, .lprob = lprob, .epstr0 = epstr0, .eprob = eprob ))), vfamily = c("zibinomial"), deriv = eval(substitute(expression({ phi <- eta2theta(eta[, 1], .lpstr0 , earg = .epstr0 ) mubin <- eta2theta(eta[, 2], .lprob , earg = .eprob ) prob0 <- (1 - mubin)^w # Actually q^w pobs0 <- phi + (1 - phi) * prob0 index <- (y == 0) dl.dphi <- (1 - prob0) / pobs0 dl.dphi[!index] <- -1 / (1 - phi[!index]) dl.dmubin <- -w * (1 - phi) * (1 - mubin)^(w - 1) / pobs0 dl.dmubin[!index] <- w[!index] * ( y[!index] / mubin[!index] - (1 - y[!index]) / (1 - mubin[!index])) dphi.deta <- dtheta.deta(phi, .lpstr0 , earg = .epstr0 ) dmubin.deta <- dtheta.deta(mubin, .lprob , earg = .eprob ) ans <- cbind(dl.dphi * dphi.deta, dl.dmubin * dmubin.deta) if ( .lprob == "logit") { ans[!index, 2] <- w[!index] * (y[!index] - mubin[!index]) } ans }), list( .lpstr0 = lpstr0, .lprob = lprob, .epstr0 = epstr0, .eprob = eprob ))), weight = eval(substitute(expression({ wz <- matrix(as.numeric(NA), nrow = n, ncol = dimm(M)) ned2l.dphi2 <- (1 - prob0) / ((1 - phi) * pobs0) ned2l.dphimubin <- -w * ((1 - mubin)^(w - 1)) / pobs0 ned2l.dmubin2 <- (w * (1 - phi) / (mubin * (1 - mubin)^2)) * (1 - mubin - w * mubin * (1 - mubin)^w * phi / pobs0) wz[,iam(1, 1, M)] <- ned2l.dphi2 * dphi.deta^2 wz[,iam(2, 2, M)] <- ned2l.dmubin2 * dmubin.deta^2 wz[,iam(1, 2, M)] <- ned2l.dphimubin * dphi.deta * dmubin.deta if (TRUE) { ind6 <- (wz[, iam(2, 2, M)] < .Machine$double.eps) if (any(ind6)) wz[ind6, iam(2, 2, M)] <- .Machine$double.eps } wz }), list( .lpstr0 = 
lpstr0, .lprob = lprob, .epstr0 = epstr0, .eprob = eprob )))) } dzibinom <- function(x, size, prob, pstr0 = 0, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) LLL <- max(length(x), length(size), length(prob), length(pstr0)) if (length(x) != LLL) x <- rep(x, len = LLL); if (length(size) != LLL) size <- rep(size, len = LLL); if (length(prob) != LLL) prob <- rep(prob, len = LLL); if (length(pstr0) != LLL) pstr0 <- rep(pstr0, len = LLL); ans <- dbinom(x = x, size = size, prob = prob, log = TRUE) ans <- if (log.arg) { ifelse(x == 0, log(pstr0 + (1-pstr0) * exp(ans)), log1p(-pstr0) + ans) } else { ifelse(x == 0, pstr0 + (1-pstr0) * exp(ans) , (1-pstr0) * exp(ans)) } prob0 <- (1 - prob)^size deflat_limit <- -prob0 / (1 - prob0) ans[pstr0 < deflat_limit] <- NaN ans[pstr0 > 1] <- NaN ans } pzibinom <- function(q, size, prob, pstr0 = 0, lower.tail = TRUE, log.p = FALSE) { LLL <- max(length(pstr0), length(size), length(prob), length(q)) if (length(q) != LLL) q <- rep(q, len = LLL); if (length(size) != LLL) size <- rep(size, len = LLL); if (length(prob) != LLL) prob <- rep(prob, len = LLL); if (length(pstr0) != LLL) pstr0 <- rep(pstr0, len = LLL); ans <- pbinom(q, size, prob, lower.tail = lower.tail, log.p = log.p) ans <- ifelse(q < 0, 0, pstr0 + (1 - pstr0) * ans) prob0 <- (1 - prob)^size deflat_limit <- -prob0 / (1 - prob0) ans[pstr0 < deflat_limit] <- NaN ans[pstr0 > 1] <- NaN ans } qzibinom <- function(p, size, prob, pstr0 = 0, lower.tail = TRUE, log.p = FALSE) { LLL <- max(length(p), length(size), length(prob), length(pstr0)) p <- rep(p, length = LLL) size <- rep(size, length = LLL) prob <- rep(prob, length = LLL) pstr0 <- rep(pstr0, length = LLL) ans <- p ans[p <= pstr0] <- 0 ans[p > pstr0] <- qbinom((p[p > pstr0] - pstr0[p > pstr0]) / (1 - pstr0[p > pstr0]), size[p > pstr0], prob[p > pstr0], lower.tail = lower.tail, log.p = log.p) prob0 <- (1 - prob)^size deflat_limit <- -prob0 / (1 - prob0) ind0 <- (deflat_limit <= pstr0) & (pstr0 < 0) if (any(ind0)) { pobs0 <- pstr0[ind0] + (1 - pstr0[ind0]) * prob0[ind0] ans[p[ind0] <= pobs0] <- 0 pindex <- (1:LLL)[ind0 & (p > pobs0)] Pobs0 <- pstr0[pindex] + (1 - pstr0[pindex]) * prob0[pindex] ans[pindex] <- qposbinom((p[pindex] - Pobs0) / (1 - Pobs0), size = size[pindex], prob = prob[pindex]) } ans[pstr0 < deflat_limit] <- NaN ans[pstr0 > 1] <- NaN ans } rzibinom <- function(n, size, prob, pstr0 = 0) { use.n <- if ((length.n <- length(n)) > 1) length.n else if (!is.Numeric(n, integer.valued = TRUE, allowable.length = 1, positive = TRUE)) stop("bad input for argument 'n'") else n pstr0 <- rep(pstr0, len = use.n) size <- rep(size, len = use.n) prob <- rep(prob, len = use.n) ans <- rbinom(use.n, size, prob) ans[runif(use.n) < pstr0] <- 0 prob0 <- (1 - prob)^size deflat_limit <- -prob0 / (1 - prob0) ind0 <- (deflat_limit <= pstr0) & (pstr0 < 0) if (any(ind0)) { pobs0 <- pstr0[ind0] + (1 - pstr0[ind0]) * prob0[ind0] ans[ind0] <- rposbinom(sum(ind0), size = size[ind0], prob = prob[ind0]) ans[ind0] <- ifelse(runif(sum(ind0)) < pobs0, 0, ans[ind0]) } ans[pstr0 < deflat_limit] <- NaN ans[pstr0 > 1] <- NaN ans } dzinegbin <- function(x, size, prob = NULL, munb = NULL, pstr0 = 0, log = FALSE) { if (length(munb)) { if (length(prob)) stop("arguments 'prob' and 'munb' both specified") prob <- size / (size + munb) } if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) LLL <- max(length(pstr0), length(size), length(prob), length(x)) if (length(x) != LLL) x <- 
rep(x, len = LLL); if (length(size) != LLL) size <- rep(size, len = LLL); if (length(prob) != LLL) prob <- rep(prob, len = LLL); if (length(pstr0) != LLL) pstr0 <- rep(pstr0, len = LLL); ans <- dnbinom(x = x, size = size, prob = prob, log = log.arg) ans <- if (log.arg) ifelse(x == 0, log(pstr0+(1-pstr0)*exp(ans)), log1p(-pstr0) + ans) else ifelse(x == 0, pstr0+(1-pstr0)* ans, (1-pstr0) * ans) prob0 <- prob^size deflat_limit <- -prob0 / (1 - prob0) ans[pstr0 < deflat_limit] <- NaN ans[pstr0 > 1] <- NaN ans } pzinegbin <- function(q, size, prob = NULL, munb = NULL, pstr0 = 0) { if (length(munb)) { if (length(prob)) stop("arguments 'prob' and 'munb' both specified") prob <- size / (size + munb) } LLL <- max(length(pstr0), length(size), length(prob), length(q)) if (length(q) != LLL) q <- rep(q, len = LLL); if (length(size) != LLL) size <- rep(size, len = LLL); if (length(prob) != LLL) prob <- rep(prob, len = LLL); if (length(pstr0) != LLL) pstr0 <- rep(pstr0, len = LLL); ans <- pnbinom(q = q, size = size, prob = prob) ans <- ifelse(q < 0, 0, pstr0 + (1 - pstr0) * ans) prob0 <- prob^size deflat_limit <- -prob0 / (1 - prob0) ans[pstr0 < deflat_limit] <- NaN ans[pstr0 > 1] <- NaN ans } qzinegbin <- function(p, size, prob = NULL, munb = NULL, pstr0 = 0) { if (length(munb)) { if (length(prob)) stop("arguments 'prob' and 'munb' both specified") prob <- size/(size + munb) } LLL <- max(length(p), length(prob), length(pstr0), length(size)) if (length(p) != LLL) p <- rep(p, len = LLL) if (length(pstr0) != LLL) pstr0 <- rep(pstr0, len = LLL); if (length(prob) != LLL) prob <- rep(prob, len = LLL) if (length(size) != LLL) size <- rep(size, len = LLL); ans <- p ind4 <- (p > pstr0) ans[!ind4] <- 0 ans[ ind4] <- qnbinom(p = (p[ind4] - pstr0[ind4]) / (1 - pstr0[ind4]), size = size[ind4], prob = prob[ind4]) prob0 <- prob^size deflat_limit <- -prob0 / (1 - prob0) ind0 <- (deflat_limit <= pstr0) & (pstr0 < 0) if (any(ind0)) { pobs0 <- pstr0[ind0] + (1 - pstr0[ind0]) * prob0[ind0] ans[p[ind0] <= pobs0] <- 0 pindex <- (1:LLL)[ind0 & (p > pobs0)] Pobs0 <- pstr0[pindex] + (1 - pstr0[pindex]) * prob0[pindex] ans[pindex] <- qposnegbin((p[pindex] - Pobs0) / (1 - Pobs0), size = size[pindex], prob = prob[pindex]) } ans[pstr0 < deflat_limit] <- NaN ans[pstr0 > 1] <- NaN ans } rzinegbin <- function(n, size, prob = NULL, munb = NULL, pstr0 = 0) { if (length(munb)) { if (length(prob)) stop("arguments 'prob' and 'munb' both specified") prob <- size / (size + munb) } use.n <- if ((length.n <- length(n)) > 1) length.n else if (!is.Numeric(n, integer.valued = TRUE, allowable.length = 1, positive = TRUE)) stop("bad input for argument 'n'") else n pstr0 <- rep(pstr0, len = use.n) size <- rep(size, len = use.n) prob <- rep(prob, len = use.n) ans <- rnbinom(n = use.n, size = size, prob = prob) ans <- ifelse(runif(use.n) < pstr0, rep(0, use.n), ans) prob0 <- rep(prob^size, len = use.n) deflat_limit <- -prob0 / (1 - prob0) ind0 <- (deflat_limit <= pstr0) & (pstr0 < 0) if (any(ind0, na.rm = TRUE)) { pobs0 <- pstr0[ind0] + (1 - pstr0[ind0]) * prob0[ind0] ans[ind0] <- rposnegbin(sum(ind0, na.rm = TRUE), size = size[ind0], prob = prob[ind0]) ans[ind0] <- ifelse(runif(sum(ind0)) < pobs0, 0, ans[ind0]) } ans[pstr0 < deflat_limit] <- NaN ans[pstr0 > 1] <- NaN ans } zinegbinomial.control <- function(save.weight = TRUE, ...) 
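# Note on the [dpqr]zinegbin functions above (a sketch; the numbers are
# illustrative): 'munb' and 'prob' are alternative parameterisations with
# prob = size / (size + munb), so the following two calls agree:
#   dzinegbin(3, size = 2, munb = 4, pstr0 = 0.1)
#   dzinegbin(3, size = 2, prob = 2 / (2 + 4), pstr0 = 0.1)
# Supplying both 'prob' and 'munb' at once is an error, as checked above.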
{ list(save.weight = save.weight) } zinegbinomial <- function(lpstr0 = "logit", lmunb = "loge", lsize = "loge", ipstr0 = NULL, isize = NULL, zero = c(-1, -3), imethod = 1, shrinkage.init = 0.95, nsimEIM = 250) { lpstr0 <- as.list(substitute(lpstr0)) epstr0 <- link2list(lpstr0) lpstr0 <- attr(epstr0, "function.name") lmunb <- as.list(substitute(lmunb)) emunb <- link2list(lmunb) lmunb <- attr(emunb, "function.name") lsize <- as.list(substitute(lsize)) esize <- link2list(lsize) lsize <- attr(esize, "function.name") if (length(ipstr0) && (!is.Numeric(ipstr0, positive = TRUE) || any(ipstr0 >= 1))) stop("argument 'ipstr0' must contain values in (0,1)") if (length(isize) && !is.Numeric(isize, positive = TRUE)) stop("argument 'isize' must contain positive values only") if (!is.Numeric(imethod, allowable.length = 1, integer.valued = TRUE, positive = TRUE) || imethod > 3) stop("argument 'imethod' must be 1, 2 or 3") if (!is.Numeric(nsimEIM, allowable.length = 1, integer.valued = TRUE)) stop("argument 'nsimEIM' must be a positive integer") if (nsimEIM <= 50) warning("argument 'nsimEIM' should be greater than 50, say") if (!is.Numeric(shrinkage.init, allowable.length = 1) || shrinkage.init < 0 || shrinkage.init > 1) stop("bad input for argument 'shrinkage.init'") new("vglmff", blurb = c("Zero-inflated negative binomial\n\n", "Links: ", namesof("pstr0", lpstr0, earg = epstr0, tag = FALSE), ", ", namesof("munb", lmunb, earg = emunb, tag = FALSE), ", ", namesof("size", lsize, earg = esize, tag = FALSE), "\n", "Mean: (1 - pstr0) * munb"), constraints = eval(substitute(expression({ dotzero <- .zero Musual <- 3 eval(negzero.expression) }), list( .zero = zero ))), initialize = eval(substitute(expression({ Musual <- 3 temp5 <- w.y.check(w = w, y = y, ncol.w.max = Inf, ncol.y.max = Inf, Is.integer.y = TRUE, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y extra$NOS <- NOS <- ncoly <- ncol(y) # Number of species if (length(dimnames(y))) extra$dimnamesy2 <- dimnames(y)[[2]] mynames1 <- if (NOS == 1) "pstr0" else paste("pstr0", 1:NOS, sep = "") mynames2 <- if (NOS == 1) "munb" else paste("munb", 1:NOS, sep = "") mynames3 <- if (NOS == 1) "size" else paste("size", 1:NOS, sep = "") predictors.names <- c(namesof(mynames1, .lpstr0 , earg = .epstr0 , tag = FALSE), namesof(mynames2, .lmunb , earg = .emunb , tag = FALSE), namesof(mynames3, .lsize , earg = .esize , tag = FALSE))[ interleave.VGAM(Musual*NOS, M = Musual)] if (!length(etastart)) { mum.init <- if ( .imethod == 3) { y + 1/16 } else { mum.init <- y for(iii in 1:ncol(y)) { index <- (y[, iii] > 0) mum.init[, iii] <- if ( .imethod == 2) weighted.mean(y[index, iii], w = w[index, iii]) else median(rep(y[index, iii], times = w[index, iii])) + 1/8 } (1 - .sinit) * (y + 1/16) + .sinit * mum.init } pstr0.init <- if (length( .ipstr0 )) { matrix( .ipstr0 , n, ncoly, byrow = TRUE) } else { pstr0.init <- y for(iii in 1:ncol(y)) pstr0.init[, iii] <- sum(w[y[, iii] == 0, iii]) / sum(w[, iii]) pstr0.init[pstr0.init <= 0.02] <- 0.02 # Last resort pstr0.init[pstr0.init >= 0.98] <- 0.98 # Last resort pstr0.init } kay.init = if ( is.Numeric( .isize )) { matrix( .isize, nrow = n, ncol = ncoly, byrow = TRUE) } else { zinegbin.Loglikfun <- function(kval, y, x, w, extraargs) { index0 <- (y == 0) pstr0vec <- extraargs$pstr0 muvec <- extraargs$mu ans1 <- 0.0 if (any( index0)) ans1 <- ans1 + sum(w[ index0] * dzinegbin(x = y[ index0], size = kval, munb = muvec[ index0], pstr0 = pstr0vec[ index0], log = TRUE)) if (any(!index0)) ans1 <- ans1 + sum(w[!index0] * 
dzinegbin(x = y[!index0], size = kval, munb = muvec[!index0], pstr0 = pstr0vec[!index0], log = TRUE)) ans1 } k.grid <- 2^((-6):6) kay.init <- matrix(0, nrow = n, ncol = NOS) for(spp. in 1:NOS) { kay.init[, spp.] <- getMaxMin(k.grid, objfun = zinegbin.Loglikfun, y = y[, spp.], x = x, w = w[, spp.], extraargs = list(pstr0 = pstr0.init[, spp.], mu = mum.init[, spp.])) } kay.init } etastart <- cbind(theta2eta(pstr0.init, .lpstr0 , earg = .epstr0 ), theta2eta(mum.init, .lmunb , earg = .emunb ), theta2eta(kay.init, .lsize , earg = .esize )) etastart <- etastart[, interleave.VGAM(ncol(etastart), M = Musual)] } }), list( .lpstr0 = lpstr0, .lmunb = lmunb, .lsize = lsize, .epstr0 = epstr0, .emunb = emunb, .esize = esize, .ipstr0 = ipstr0, .isize = isize, .sinit = shrinkage.init, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { Musual <- 3 NOS <- extra$NOS pstr0 <- eta2theta(eta[, Musual*(1:NOS)-2, drop = FALSE], .lpstr0 , earg = .epstr0 ) munb <- eta2theta(eta[, Musual*(1:NOS)-1, drop = FALSE], .lmunb , earg = .emunb ) fv.matrix <- (1 - pstr0) * munb if (length(extra$dimnamesy2)) dimnames(fv.matrix) <- list(dimnames(pstr0)[[1]], extra$dimnamesy2) fv.matrix }, list( .lpstr0 = lpstr0, .lsize = lsize, .lmunb = lmunb, .epstr0 = epstr0, .esize = esize, .emunb = emunb ))), last = eval(substitute(expression({ misc$link <- c(rep( .lpstr0 , length = NOS), rep( .lmunb , length = NOS), rep( .lsize , length = NOS))[interleave.VGAM(Musual*NOS, M = Musual)] temp.names = c(mynames1, mynames2, mynames3)[interleave.VGAM(Musual*NOS, M = Musual)] names(misc$link) <- temp.names misc$earg <- vector("list", Musual*NOS) names(misc$earg) <- temp.names for(ii in 1:NOS) { misc$earg[[Musual*ii-2]] <- .epstr0 misc$earg[[Musual*ii-1]] <- .emunb misc$earg[[Musual*ii ]] <- .esize } misc$imethod <- .imethod misc$nsimEIM <- .nsimEIM misc$expected <- TRUE misc$Musual <- Musual misc$ipstr0 <- .ipstr0 misc$isize <- .isize misc$multipleResponses <- TRUE misc$pobs0 <- pstr0 + (1 - pstr0) * (kmat / (kmat + munb))^kmat # P(Y=0) misc$pstr0 <- pstr0 }), list( .lpstr0 = lpstr0, .lmunb = lmunb, .lsize = lsize, .epstr0 = epstr0, .emunb = emunb, .esize = esize, .ipstr0 = ipstr0, .isize = isize, .nsimEIM = nsimEIM, .imethod = imethod ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { Musual <- 3 NOS <- extra$NOS pstr0 <- eta2theta(eta[, Musual*(1:NOS)-2, drop = FALSE], .lpstr0 , earg = .epstr0 ) munb <- eta2theta(eta[, Musual*(1:NOS)-1, drop = FALSE], .lmunb , earg = .emunb ) kmat <- eta2theta(eta[, Musual*(1:NOS) , drop = FALSE], .lsize , earg = .esize ) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else { sum(c(w) * dzinegbin(x = y, size = kmat, munb = munb, pstr0 = pstr0, log = TRUE)) } }, list( .lpstr0 = lpstr0, .lmunb = lmunb, .lsize = lsize, .epstr0 = epstr0, .emunb = emunb, .esize = esize ))), vfamily = c("zinegbinomial"), deriv = eval(substitute(expression({ Musual <- 3 NOS <- extra$NOS pstr0 <- eta2theta(eta[, Musual*(1:NOS)-2, drop = FALSE], .lpstr0 , earg = .epstr0 ) munb <- eta2theta(eta[, Musual*(1:NOS)-1, drop = FALSE], .lmunb , earg = .emunb ) kmat <- eta2theta(eta[, Musual*(1:NOS) , drop = FALSE], .lsize , earg = .esize ) dpstr0.deta <- dtheta.deta(pstr0, .lpstr0 , earg = .epstr0 ) dmunb.deta <- dtheta.deta(munb , .lmunb , earg = .emunb ) dsize.deta <- dtheta.deta(kmat , .lsize , earg = .esize ) dthetas.detas = (cbind(dpstr0.deta, dmunb.deta, dsize.deta))[, interleave.VGAM(Musual*NOS, M = Musual)] dl.dpstr0 <- -1 / (1 - pstr0) 
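# The zero-observation corrections in the loop below use, for y = 0,
#   loglik = log(pstr0 + (1 - pstr0) * prob0),
#   prob0  = (size / (size + munb))^size,
# with df0/dmunb = -(size/(size+munb)) * prob0 and
#      df0/dsize = prob0 * (munb/(size+munb) + log(size/(size+munb))).
# A finite-difference check of the first identity (a sketch with
# illustrative values, not part of the original source):
#   k <- 2; mu <- 4; p0 <- function(m) (k / (k + m))^k
#   (p0(mu + 1e-6) - p0(mu)) / 1e-6    # ~ -(k / (k + mu)) * p0(mu)
#   -(k / (k + mu)) * p0(mu)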
dl.dmunb <- y / munb - (y + kmat) / (munb + kmat) dl.dsize <- digamma(y + kmat) - digamma(kmat) - (y + kmat) / (munb + kmat) + 1 + log(kmat / (kmat + munb)) for(spp. in 1:NOS) { index0 <- (y[, spp.] == 0) if (!any(index0) || !any(!index0)) stop("must have some 0s AND some positive counts in the data") kmat. <- kmat[index0, spp.] munb. <- munb[index0, spp.] pstr0. <- pstr0[index0, spp.] tempk. <- kmat. / (kmat. + munb.) tempm. <- munb. / (kmat. + munb.) prob0. <- tempk.^kmat. df0.dmunb. <- -tempk.* prob0. df0.dkmat. <- prob0. * (tempm. + log(tempk.)) denom. <- pstr0. + (1 - pstr0.) * prob0. dl.dpstr0[index0, spp.] <- (1 - prob0.) / denom. dl.dmunb[index0, spp.] <- (1 - pstr0.) * df0.dmunb. / denom. dl.dsize[index0, spp.] <- (1 - pstr0.) * df0.dkmat. / denom. } # of spp. dl.dthetas <- cbind(dl.dpstr0, dl.dmunb, dl.dsize)[, interleave.VGAM(Musual*NOS, M = Musual)] c(w) * dl.dthetas * dthetas.detas }), list( .lpstr0 = lpstr0, .lmunb = lmunb, .lsize = lsize, .epstr0 = epstr0, .emunb = emunb, .esize = esize ))), weight = eval(substitute(expression({ wz <- matrix(0, n, Musual*M - Musual) ind3 <- iam(NA, NA, M = Musual, both = TRUE, diag = TRUE) run.varcov <- array(0.0, c(n, length(ind3$row.index), NOS)) for(ii in 1:( .nsimEIM )) { ysim <- rzinegbin(n = n*NOS, pstr0 = pstr0, size = kmat, mu = munb) dim(ysim) <- c(n, NOS) index0 <- (ysim[, spp.] == 0) dl.dpstr0 <- -1 / (1 - pstr0) dl.dmunb <- ysim / munb - (ysim + kmat) / (munb + kmat) dl.dsize <- digamma(ysim + kmat) - digamma(kmat) - (ysim + kmat) / (munb + kmat) + 1 + log(kmat / (kmat + munb)) for(spp. in 1:NOS) { index0 <- (ysim[, spp.] == 0) if (!any(index0) || !any(!index0)) stop("must have some 0s AND some positive counts in the data") kmat. <- kmat[index0, spp.] munb. <- munb[index0, spp.] pstr0. <- pstr0[index0, spp.] tempk. <- kmat. / (kmat. + munb.) tempm. <- munb. / (kmat. + munb.) prob0. <- tempk.^kmat. df0.dmunb. <- -tempk.* prob0. df0.dkmat. <- prob0. * (tempm. + log(tempk.)) denom. <- pstr0. + (1 - pstr0.) * prob0. dl.dpstr0[index0, spp.] <- (1 - prob0.) / denom. dl.dmunb[index0, spp.] <- (1 - pstr0.) * df0.dmunb. / denom. dl.dsize[index0, spp.] <- (1 - pstr0.) * df0.dkmat. / denom. sdl.dthetas <- cbind(dl.dpstr0[, spp.], dl.dmunb[, spp.], dl.dsize[, spp.]) temp3 <- sdl.dthetas run.varcov[,, spp.] <- run.varcov[,, spp.] + temp3[, ind3$row.index] * temp3[, ind3$col.index] } # End of for(spp.) loop } # End of ii nsimEIM loop run.varcov <- run.varcov / .nsimEIM wz1 <- if (intercept.only) { for(spp. in 1:NOS) { for(jay in 1:length(ind3$row.index)) { run.varcov[, jay, spp.] <- mean(run.varcov[, jay, spp.]) } } run.varcov } else { run.varcov } for(spp. in 1:NOS) { wz1[,, spp.] <- wz1[,, spp.] * dthetas.detas[, Musual * (spp. - 1) + ind3$row] * dthetas.detas[, Musual * (spp. - 1) + ind3$col] } for(spp. in 1:NOS) { for(jay in 1:Musual) { for(kay in jay:Musual) { cptr <- iam((spp. - 1) * Musual + jay, (spp. - 1) * Musual + kay, M = M) temp.wz1 <- wz1[,, spp.] wz[, cptr] <- temp.wz1[, iam(jay, kay, M = Musual)] } } } w.wz.merge(w = w, wz = wz, n = n, M = M, ndepy = M / Musual) }), list( .lpstr0 = lpstr0, .epstr0 = epstr0, .nsimEIM = nsimEIM )))) } # End of zinegbinomial zipoissonff <- function(llambda = "loge", lprobp = "logit", ilambda = NULL, iprobp = NULL, imethod = 1, shrinkage.init = 0.8, zero = -2) { lprobp. <- lprobp iprobp. <- iprobp llambda <- as.list(substitute(llambda)) elambda <- link2list(llambda) llambda <- attr(elambda, "function.name") lprobp <- as.list(substitute(lprobp)) eprobp. <- link2list(lprobp) lprobp. 
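# Note on the 'weight' slot of zinegbinomial above: the EIM is approximated
# by simulation.  Each of the nsimEIM iterations draws ysim from
# rzinegbin(), evaluates the scores, and accumulates their outer products
# in run.varcov; the average over iterations estimates
# E[(dl/dtheta) (dl/dtheta)^T].  Larger nsimEIM trades run time for a less
# noisy EIM, e.g. (a sketch, with 'y' an assumed response vector):
#   fit <- vglm(y ~ 1, zinegbinomial(nsimEIM = 500), trace = TRUE)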
<- attr(eprobp., "function.name") if (length(ilambda)) if (!is.Numeric(ilambda, positive = TRUE)) stop("'ilambda' values must be positive") if (length(iprobp.)) if (!is.Numeric(iprobp., positive = TRUE) || any(iprobp. >= 1)) stop("'iprobp' values must be inside the interval (0,1)") if (!is.Numeric(imethod, allowable.length = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("argument 'imethod' must be 1 or 2") if (!is.Numeric(shrinkage.init, allowable.length = 1) || shrinkage.init < 0 || shrinkage.init > 1) stop("bad input for argument 'shrinkage.init'") new("vglmff", blurb = c("Zero-inflated Poisson\n\n", "Links: ", namesof("lambda", llambda, earg = elambda), ", ", namesof("probp", lprobp., earg = eprobp.), "\n", "Mean: probp * lambda"), constraints = eval(substitute(expression({ dotzero <- .zero Musual <- 2 eval(negzero.expression) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(Musual = 2, zero = .zero ) }, list( .zero = zero ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, ncol.w.max = Inf, ncol.y.max = Inf, Is.integer.y = TRUE, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y ncoly <- ncol(y) Musual <- 2 extra$ncoly <- ncoly extra$Musual <- Musual M <- Musual * ncoly mynames1 <- paste("lambda", if (ncoly > 1) 1:ncoly else "", sep = "") mynames2 <- paste("probp", if (ncoly > 1) 1:ncoly else "", sep = "") predictors.names <- c(namesof(mynames1, .llambda, earg = .elambda, tag = FALSE), namesof(mynames2, .lprobp., earg = .eprobp., tag = FALSE))[ interleave.VGAM(M, M = Musual)] if (!length(etastart)) { matL <- matrix(if (length( .ilambda )) .ilambda else 0, n, ncoly, byrow = TRUE) matP <- matrix(if (length( .iprobp. )) .iprobp. else 0, n, ncoly, byrow = TRUE) for (jay in 1:ncoly) { yjay <- y[, jay] Phi0.init <- 1 - 0.85 * sum(w[yjay > 0]) / sum(w) Phi0.init[Phi0.init <= 0.02] <- 0.02 # Last resort Phi0.init[Phi0.init >= 0.98] <- 0.98 # Last resort if ( length(mustart)) { mustart <- matrix(mustart, n, ncoly) # Make sure right size Lambda.init <- mustart / (1 - Phi0.init) } else if ( .imethod == 2) { mymean <- weighted.mean(yjay[yjay > 0], w[yjay > 0]) + 1/16 Lambda.init <- (1 - .sinit) * (yjay + 1/8) + .sinit * mymean } else { use.this <- median(yjay[yjay > 0]) + 1 / 16 Lambda.init <- (1 - .sinit) * (yjay + 1/8) + .sinit * use.this } zipois.Loglikfun <- function(phival, y, x, w, extraargs) { sum(c(w) * dzipois(x = y, pstr0 = phival, lambda = extraargs$lambda, log = TRUE)) } phi0.grid <- seq(0.02, 0.98, len = 21) Phi0mat.init <- getMaxMin(phi0.grid, objfun = zipois.Loglikfun, y = y, x = x, w = w, extraargs = list(lambda = Lambda.init)) if (length(mustart)) { Lambda.init <- Lambda.init / (1 - Phi0mat.init) } if (!length( .ilambda )) matL[, jay] <- Lambda.init if (!length( .iprobp. )) matP[, jay] <- Phi0mat.init } etastart <- cbind(theta2eta( matL, .llambda , earg = .elambda ), theta2eta(1 - matP, .lprobp. , earg = .eprobp. ))[, interleave.VGAM(M, M = Musual)] mustart <- NULL # Since etastart has been computed. } }), list( .lprobp. = lprobp., .llambda = llambda, .eprobp. = eprobp., .elambda = elambda, .iprobp. = iprobp., .ilambda = ilambda, .imethod = imethod, .sinit = shrinkage.init ))), linkinv = eval(substitute(function(eta, extra = NULL) { Musual <- 2 ncoly <- extra$ncoly lambda <- eta2theta(eta[, Musual*(1:ncoly) - 1], .llambda, earg = .elambda ) probp. <- eta2theta(eta[, Musual*(1:ncoly) ], .lprobp., earg = .eprobp. ) probp. * lambda }, list( .lprobp. = lprobp., .llambda = llambda, .eprobp. 
= eprobp., .elambda = elambda ))), last = eval(substitute(expression({ Musual <- extra$Musual misc$link <- c(rep( .llambda, length = ncoly), rep( .lprobp., length = ncoly))[interleave.VGAM(M, M = Musual)] temp.names <- c(mynames1, mynames2)[interleave.VGAM(M, M = Musual)] names(misc$link) <- temp.names misc$earg <- vector("list", Musual * ncoly) names(misc$earg) <- temp.names for(ii in 1:ncoly) { misc$earg[[Musual*ii-1]] <- .elambda misc$earg[[Musual*ii ]] <- .eprobp. } misc$Musual <- Musual misc$imethod <- .imethod misc$expected <- TRUE misc$multipleResponses <- TRUE misc$pobs0 <- (1 - probp.) + probp. * exp(-lambda) # P(Y=0) misc$pobs0 <- as.matrix(misc$pobs0) if (length(dimnames(y)[[2]]) > 0) dimnames(misc$pobs0) <- dimnames(y) misc$pstr0 <- (1 - probp.) misc$pstr0 <- as.matrix(misc$pstr0) if (length(dimnames(y)[[2]]) > 0) dimnames(misc$pstr0) <- dimnames(y) }), list( .lprobp. = lprobp., .llambda = llambda, .eprobp. = eprobp., .elambda = elambda, .imethod = imethod ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { Musual <- 2 ncoly <- extra$ncoly lambda <- eta2theta(eta[, Musual*(1:ncoly) - 1], .llambda, earg = .elambda ) probp. <- eta2theta(eta[, Musual*(1:ncoly) ], .lprobp., earg = .eprobp. ) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else { sum(c(w) * dzipois(x = y, pstr0 = 1 - probp., lambda = lambda, log = TRUE)) } }, list( .lprobp. = lprobp., .llambda = llambda, .eprobp. = eprobp., .elambda = elambda ))), vfamily = c("zipoissonff"), deriv = eval(substitute(expression({ Musual <- 2 ncoly <- extra$ncoly lambda <- eta2theta(eta[, Musual*(1:ncoly) - 1], .llambda, earg = .elambda ) probp. <- eta2theta(eta[, Musual*(1:ncoly) ], .lprobp., earg = .eprobp. ) dlambda.deta <- dtheta.deta(lambda, .llambda, earg = .elambda ) dprobp..deta <- dtheta.deta(probp., .lprobp., earg = .eprobp. ) denom <- 1 + probp. * expm1(-lambda) ind0 <- (y == 0) dl.dlambda <- -probp. * exp(-lambda) / denom dl.dlambda[!ind0] <- (y[!ind0] - lambda[!ind0]) / lambda[!ind0] dl.dprobp. <- expm1(-lambda) / denom dl.dprobp.[!ind0] <- 1 / probp.[!ind0] ans <- c(w) * cbind(dl.dlambda * dlambda.deta, dl.dprobp. * dprobp..deta) ans <- ans[, interleave.VGAM(ncol(ans), M = Musual)] if ( .llambda == "loge" && is.empty.list( .elambda ) && any(lambda[!ind0] < .Machine$double.eps)) { for(spp. in 1:ncoly) { ans[!ind0[, spp.], Musual * spp.] = w[!ind0[, spp.]] * (y[!ind0[, spp.], spp.] - lambda[!ind0[, spp.], spp.]) } } ans }), list( .lprobp. = lprobp., .llambda = llambda, .eprobp. = eprobp., .elambda = elambda ))), weight = eval(substitute(expression({ ned2l.dlambda2 <- ( probp.) / lambda - probp. * (1 - probp.) * exp(-lambda) / denom ned2l.dprobp.2 <- -expm1(-lambda) / (( probp.) 
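# Note on zipoissonff() (a sketch; the data below are illustrative and not
# from the original source): this family models probp = P(structural
# non-zero) = 1 - pstr0, so fitted(fit) equals probp * lambda, and by
# default (zero = -2) probp is intercept-only:
#   zdata <- data.frame(x2 = runif(200))
#   zdata <- transform(zdata,
#                      y = rzipois(200, lambda = exp(1 + x2), pstr0 = 0.3))
#   fit <- vglm(y ~ x2, zipoissonff, data = zdata, trace = TRUE)
#   coef(fit, matrix = TRUE)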
* denom) ned2l.dphilambda <- +exp(-lambda) / denom wz <- array(c(c(w) * ned2l.dlambda2 * dlambda.deta^2, c(w) * ned2l.dprobp.2 * dprobp..deta^2, c(w) * ned2l.dphilambda * dprobp..deta * dlambda.deta), dim = c(n, M / Musual, 3)) wz <- arwz2wz(wz, M = M, Musual = Musual) wz }), list( .llambda = llambda )))) } dzigeom <- function(x, prob, pstr0 = 0, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) LLL <- max(length(x), length(prob), length(pstr0)) if (length(x) != LLL) x <- rep(x, len = LLL); if (length(prob) != LLL) prob <- rep(prob, len = LLL); if (length(pstr0) != LLL) pstr0 <- rep(pstr0, len = LLL); ans <- dgeom(x = x, prob = prob, log = TRUE) ans <- if (log.arg) { ifelse(x == 0, log(pstr0 + (1 - pstr0) * exp(ans)), log1p(-pstr0) + ans) } else { ifelse(x == 0, pstr0 + (1 - pstr0) * exp(ans) , (1 - pstr0) * exp(ans)) } prob0 <- prob deflat_limit <- -prob0 / (1 - prob0) ans[pstr0 < deflat_limit] <- NaN ans[pstr0 > 1] <- NaN ans } pzigeom <- function(q, prob, pstr0 = 0) { LLL <- max(length(q), length(prob), length(pstr0)) if (length(q) != LLL) q <- rep(q, len = LLL); if (length(prob) != LLL) prob <- rep(prob, len = LLL); if (length(pstr0) != LLL) pstr0 <- rep(pstr0, len = LLL); ans <- pgeom(q, prob) ans <- ifelse(q < 0, 0, pstr0 + (1-pstr0) * ans) prob0 <- prob deflat_limit <- -prob0 / (1 - prob0) ans[pstr0 < deflat_limit] <- NaN ans[pstr0 > 1] <- NaN ans } qzigeom <- function(p, prob, pstr0 = 0) { LLL <- max(length(p), length(prob), length(pstr0)) ans <- p <- rep(p, len = LLL) prob <- rep(prob, len = LLL) pstr0 <- rep(pstr0, len = LLL) ans[p <= pstr0] <- 0 ind1 <- (p > pstr0) ans[ind1] <- qgeom((p[ind1] - pstr0[ind1]) / (1 - pstr0[ind1]), prob = prob[ind1]) prob0 <- prob deflat_limit <- -prob0 / (1 - prob0) ind0 <- (deflat_limit <= pstr0) & (pstr0 < 0) if (any(ind0)) { pobs0 <- pstr0[ind0] + (1 - pstr0[ind0]) * prob0[ind0] ans[p[ind0] <= pobs0] <- 0 pindex <- (1:LLL)[ind0 & (p > pobs0)] Pobs0 <- pstr0[pindex] + (1 - pstr0[pindex]) * prob0[pindex] ans[pindex] <- 1 + qgeom((p[pindex] - Pobs0) / (1 - Pobs0), prob = prob[pindex]) } ans[pstr0 < deflat_limit] <- NaN ans[pstr0 > 1] <- NaN ans } rzigeom <- function(n, prob, pstr0 = 0) { use.n <- if ((length.n <- length(n)) > 1) length.n else if (!is.Numeric(n, integer.valued = TRUE, allowable.length = 1, positive = TRUE)) stop("bad input for argument 'n'") else n pstr0 <- rep(pstr0, len = use.n) prob <- rep(prob, len = use.n) ans <- rgeom(use.n, prob) ans[runif(use.n) < pstr0] <- 0 prob0 <- prob deflat_limit <- -prob0 / (1 - prob0) ind0 <- (deflat_limit <= pstr0) & (pstr0 < 0) if (any(ind0)) { pobs0 <- pstr0[ind0] + (1 - pstr0[ind0]) * prob0[ind0] ans[ind0] <- 1 + rgeom(sum(ind0), prob = prob[ind0]) ans[ind0] <- ifelse(runif(sum(ind0)) < pobs0, 0, ans[ind0]) } ans[pstr0 < deflat_limit] <- NaN ans[pstr0 > 1] <- NaN ans } zigeometric <- function(lprob = "logit", lpstr0 = "logit", iprob = NULL, ipstr0 = NULL, imethod = 1, bias.red = 0.5, zero = 2) { expected <- TRUE lprob <- as.list(substitute(lprob)) eprob <- link2list(lprob) lprob <- attr(eprob, "function.name") lpstr0 <- as.list(substitute(lpstr0)) epstr0 <- link2list(lpstr0) lpstr0 <- attr(epstr0, "function.name") if (length(iprob)) if (!is.Numeric(iprob, positive = TRUE) || iprob >= 1) stop("argument 'iprob' is out of range") if (length(ipstr0)) if (!is.Numeric(ipstr0, positive = TRUE) || ipstr0 >= 1) stop("argument 'ipstr0' is out of range") if (!is.Numeric(bias.red, allowable.length = 1, positive = TRUE) || bias.red > 1) 
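# A quick consistency sketch for the [dpqr]zigeom functions above
# (illustrative values, not from the original source).  Here prob0 =
# P(Y = 0) of the parent geometric equals 'prob' itself, so the
# zero-deflation bound is -prob / (1 - prob):
#   sum(dzigeom(0:500, prob = 0.4, pstr0 = 0.2))               # ~ 1
#   qzigeom(pzigeom(3, 0.4, pstr0 = 0.2), 0.4, pstr0 = 0.2)    # 3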
stop("argument 'bias.red' must be between 0 and 1") if (!is.Numeric(imethod, allowable.length = 1, integer.valued = TRUE, positive = TRUE) || imethod > 3) stop("argument 'imethod' must be 1 or 2 or 3") new("vglmff", blurb = c("Zero-inflated geometric distribution,\n", "P[Y = 0] = pstr0 + (1 - pstr0) * prob,\n", "P[Y = y] = (1 - pstr0) * prob * (1 - prob)^y, ", "y = 1, 2, ...\n\n", "Link: ", namesof("prob", lprob, earg = eprob ), ", ", namesof("pstr0", lpstr0, earg = epstr0), "\n", "Mean: (1 - pstr0) * (1 - prob) / prob"), constraints = eval(substitute(expression({ dotzero <- .zero Musual <- 2 eval(negzero.expression) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(Musual = 2, zero = .zero ) }, list( .zero = zero ))), initialize = eval(substitute(expression({ Musual <- 2 if (any(y < 0)) stop("the response must not have negative values") temp5 <- w.y.check(w = w, y = y, Is.nonnegative.y = TRUE, Is.integer.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y extra$NOS <- NOS <- ncoly <- ncol(y) # Number of species mynames1 <- if (ncoly == 1) "prob" else paste("prob", 1:ncoly, sep = "") mynames2 <- if (ncoly == 1) "pobs0" else paste("pobs0", 1:ncoly, sep = "") predictors.names <- c(namesof(mynames1, .lprob, earg = .eprob, tag = FALSE), namesof(mynames2, .lpstr0, earg = .epstr0, tag = FALSE))[ interleave.VGAM(Musual*NOS, M = Musual)] if (!length(etastart)) { prob.init <- if ( .imethod == 3) .bias.red / (1 + y + 1/8) else if ( .imethod == 2) .bias.red / (1 + matrix(colMeans(y) + 1/8, n, ncoly, byrow = TRUE)) else .bias.red / (1 + matrix(colSums(y * w) / colSums(w) + 1/8, n, ncoly, byrow = TRUE)) prob.init <- if (length( .iprob )) { matrix( .iprob , n, ncoly, byrow = TRUE) } else { prob.init # Already a matrix } prob0.est <- psze.init <- matrix(0, n, NOS) for (jlocal in 1:NOS) { prob0.est[, jlocal] <- sum(w[y[, jlocal] == 0, jlocal]) / sum(w[, jlocal]) psze.init[, jlocal] <- if ( .imethod == 3) prob0.est[, jlocal] / 2 else if ( .imethod == 1) pmax(0.05, (prob0.est[, jlocal] - median(prob.init[, jlocal]))) else prob0.est[, jlocal] / 5 } psze.init <- if (length( .ipstr0 )) { matrix( .ipstr0 , n, ncoly, byrow = TRUE) } else { psze.init # Already a matrix } etastart <- cbind(theta2eta(prob.init, .lprob , earg = .eprob ), theta2eta(psze.init, .lpstr0, earg = .epstr0)) etastart <- etastart[, interleave.VGAM(ncol(etastart), M = Musual)] } }), list( .lprob = lprob, .lpstr0 = lpstr0, .eprob = eprob, .epstr0 = epstr0, .iprob = iprob, .ipstr0 = ipstr0, .bias.red = bias.red, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { prob <- eta2theta(eta[, c(TRUE, FALSE)], .lprob , earg = .eprob ) pstr0 <- eta2theta(eta[, c(FALSE, TRUE)], .lpstr0 , earg = .epstr0 ) (1 - pstr0) * (1 - prob) / prob }, list( .lprob = lprob, .lpstr0 = lpstr0, .eprob = eprob, .epstr0 = epstr0 ))), last = eval(substitute(expression({ temp.names <- c(rep( .lprob , len = NOS), rep( .lpstr0 , len = NOS)) temp.names <- temp.names[interleave.VGAM(Musual*NOS, M = Musual)] misc$link <- temp.names misc$earg <- vector("list", Musual * NOS) names(misc$link) <- names(misc$earg) <- c(mynames1, mynames2)[interleave.VGAM(Musual*NOS, M = Musual)] for(ii in 1:NOS) { misc$earg[[Musual*ii-1]] <- .eprob misc$earg[[Musual*ii ]] <- .epstr0 } misc$imethod <- .imethod misc$zero <- .zero misc$bias.red <- .bias.red misc$expected <- .expected misc$ipstr0 <- .ipstr0 misc$pobs0 <- pobs0 if (length(dimnames(y)[[2]]) > 0) dimnames(misc$pobs0) <- 
dimnames(y) misc$pstr0 <- pstr0 if (length(dimnames(y)[[2]]) > 0) dimnames(misc$pstr0) <- dimnames(y) }), list( .lprob = lprob, .lpstr0 = lpstr0, .eprob = eprob, .epstr0 = epstr0, .ipstr0 = ipstr0, .zero = zero, .expected = expected, .bias.red = bias.red, .imethod = imethod ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { prob <- eta2theta(eta[, c(TRUE, FALSE)], .lprob , earg = .eprob ) pstr0 <- eta2theta(eta[, c(FALSE, TRUE)], .lpstr0 , earg = .epstr0 ) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else { sum(c(w) * dzigeom(x = y, prob = prob, pstr0 = pstr0, log = TRUE)) } }, list( .lprob = lprob, .lpstr0 = lpstr0, .eprob = eprob, .epstr0 = epstr0 ))), vfamily = c("zigeometric"), deriv = eval(substitute(expression({ Musual <- 2 prob <- eta2theta(eta[, c(TRUE, FALSE)], .lprob , earg = .eprob ) pstr0 <- eta2theta(eta[, c(FALSE, TRUE)], .lpstr0 , earg = .epstr0 ) prob0 <- prob # P(Y == 0) pobs0 <- pstr0 + (1 - pstr0) * prob0 index0 <- (y == 0) dl.dpstr0 <- (1 - prob0) / pobs0 dl.dpstr0[!index0] <- -1 / (1 - pstr0[!index0]) dl.dprob <- (1 - pstr0) / pobs0 dl.dprob[!index0] <- 1 / prob[!index0] - y[!index0] / (1 - prob[!index0]) dprob.deta <- dtheta.deta(prob, .lprob, earg = .eprob ) dpstr0.deta <- dtheta.deta(pstr0 , .lpstr0 , earg = .epstr0 ) dl.deta12 <- c(w) * cbind(dl.dprob * dprob.deta, dl.dpstr0 * dpstr0.deta) dl.deta12 <- dl.deta12[, interleave.VGAM(ncol(dl.deta12), M = Musual)] dl.deta12 }), list( .lprob = lprob, .lpstr0 = lpstr0, .eprob = eprob, .epstr0 = epstr0 ))), weight = eval(substitute(expression({ if ( .expected ) { ned2l.dprob2 <- (1 - pstr0) * (1 / (prob^2 * (1 - prob)) + (1 - pstr0) / pobs0) ned2l.dpstr0.prob <- 1 / pobs0 ned2l.dpstr02 <- (1 - prob0) / ((1 - pstr0) * pobs0) } else { od2l.dprob2 <- ((1 - pstr0) / pobs0)^2 od2l.dprob2[!index0] <- 1 / (prob[!index0])^2 + y[!index0] / (1 - prob[!index0])^2 od2l.dpstr0.prob <- (pobs0 + (1 - prob0) * (1 - pstr0)) / pobs0^2 od2l.dpstr0.prob[!index0] <- 0 od2l.dpstr02 <- ((1 - prob0) / pobs0)^2 od2l.dpstr02[!index0] <- 1 / (1 - pstr0[!index0])^2 } allvals <- if ( .expected ) c(c(w) * ned2l.dprob2 * dprob.deta^2, c(w) * ned2l.dpstr02 * dpstr0.deta^2, c(w) * ned2l.dpstr0.prob * dprob.deta * dpstr0.deta) else c(c(w) * od2l.dprob2 * dprob.deta^2, c(w) * od2l.dpstr02 * dpstr0.deta^2, c(w) * od2l.dpstr0.prob * dprob.deta * dpstr0.deta) wz <- array(allvals, dim = c(n, M / Musual, 3)) wz <- arwz2wz(wz, M = M, Musual = Musual) wz }), list( .lprob = lprob, .lpstr0 = lpstr0, .eprob = eprob, .epstr0 = epstr0, .expected = expected )))) } dzageom <- function(x, prob, pobs0 = 0, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) LLL <- max(length(x), length(prob), length(pobs0)) if (length(x) != LLL) x <- rep(x, len = LLL); if (length(prob) != LLL) prob <- rep(prob, len = LLL); if (length(pobs0) != LLL) pobs0 <- rep(pobs0, len = LLL); ans <- rep(0.0, len = LLL) if (!is.Numeric(pobs0) || any(pobs0 < 0) || any(pobs0 > 1)) stop("argument 'pobs0' must be in [0,1]") index0 <- (x == 0) if (log.arg) { ans[ index0] <- log(pobs0[index0]) ans[!index0] <- log1p(-pobs0[!index0]) + dposgeom(x[!index0], prob = prob[!index0], log = TRUE) } else { ans[ index0] <- pobs0[index0] ans[!index0] <- (1-pobs0[!index0]) * dposgeom(x[!index0], prob = prob[!index0]) } ans } pzageom <- function(q, prob, pobs0 = 0) { LLL <- max(length(q), length(prob), length(pobs0)) if (length(q) != LLL) q <- rep(q, len = LLL); if (length(prob) != LLL) prob 
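# Note on the "za" (zero-altered, i.e. hurdle) functions here versus the
# "zi" (zero-inflated) ones above: pobs0 is P(Y = 0) exactly, and the
# positive part follows a positive (zero-truncated) distribution.
# A sketch with illustrative values:
#   dzageom(0, prob = 0.4, pobs0 = 0.25)            # exactly 0.25
#   sum(dzageom(0:500, prob = 0.4, pobs0 = 0.25))   # ~ 1
# By contrast, zigeometric() has P(Y = 0) = pstr0 + (1 - pstr0) * prob.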
<- rep(prob, len = LLL); if (length(pobs0) != LLL) pobs0 <- rep(pobs0, len = LLL); ans <- rep(0.0, len = LLL) if (!is.Numeric(pobs0) || any(pobs0 < 0) || any(pobs0 > 1)) stop("argument 'pobs0' must be in [0,1]") ans[q > 0] <- pobs0[q > 0] + (1 - pobs0[q > 0]) * pposgeom(q[q > 0], prob = prob[q > 0]) ans[q < 0] <- 0 ans[q == 0] <- pobs0[q == 0] ans } qzageom <- function(p, prob, pobs0 = 0) { LLL <- max(length(p), length(prob), length(pobs0)) if (length(p) != LLL) p <- rep(p, len = LLL); if (length(prob) != LLL) prob <- rep(prob, len = LLL); if (length(pobs0) != LLL) pobs0 <- rep(pobs0, len = LLL); if (!is.Numeric(pobs0) || any(pobs0 < 0) || any(pobs0 > 1)) stop("argument 'pobs0' must be in [0,1]") ans <- p ind4 <- (p > pobs0) ans[!ind4] <- 0.0 ans[ ind4] <- qposgeom((p[ind4] - pobs0[ind4]) / (1 - pobs0[ind4]), prob = prob[ind4]) ans } rzageom <- function(n, prob, pobs0 = 0) { use.n <- if ((length.n <- length(n)) > 1) length.n else if (!is.Numeric(n, integer.valued = TRUE, allowable.length = 1, positive = TRUE)) stop("bad input for argument 'n'") else n ans <- rposgeom(use.n, prob) if (length(pobs0) != use.n) pobs0 <- rep(pobs0, len = use.n) if (!is.Numeric(pobs0) || any(pobs0 < 0) || any(pobs0 > 1)) stop("argument 'pobs0' must be between 0 and 1 inclusive") ifelse(runif(use.n) < pobs0, 0, ans) } dzabinom <- function(x, size, prob, pobs0 = 0, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) LLL <- max(length(x), length(size), length(prob), length(pobs0)) if (length(x) != LLL) x <- rep(x, len = LLL); if (length(size) != LLL) size <- rep(size, len = LLL); if (length(prob) != LLL) prob <- rep(prob, len = LLL); if (length(pobs0) != LLL) pobs0 <- rep(pobs0, len = LLL); ans <- rep(0.0, len = LLL) if (!is.Numeric(pobs0) || any(pobs0 < 0) || any(pobs0 > 1)) stop("argument 'pobs0' must be in [0,1]") index0 <- (x == 0) if (log.arg) { ans[ index0] <- log(pobs0[index0]) ans[!index0] <- log1p(-pobs0[!index0]) + dposbinom(x[!index0], size = size[!index0], prob = prob[!index0], log = TRUE) } else { ans[ index0] <- pobs0[index0] ans[!index0] <- (1-pobs0[!index0]) * dposbinom(x[!index0], size = size[!index0], prob = prob[!index0]) } ans } pzabinom <- function(q, size, prob, pobs0 = 0) { LLL <- max(length(q), length(size), length(prob), length(pobs0)) if (length(q) != LLL) q <- rep(q, len = LLL); if (length(size) != LLL) size <- rep(size, len = LLL); if (length(prob) != LLL) prob <- rep(prob, len = LLL); if (length(pobs0) != LLL) pobs0 <- rep(pobs0, len = LLL); ans <- rep(0.0, len = LLL) if (!is.Numeric(pobs0) || any(pobs0 < 0) || any(pobs0 > 1)) stop("argument 'pobs0' must be in [0,1]") ans[q > 0] <- pobs0[q > 0] + (1 - pobs0[q > 0]) * pposbinom(q[q > 0], size = size[q > 0], prob = prob[q > 0]) ans[q < 0] <- 0 ans[q == 0] <- pobs0[q == 0] ans } qzabinom <- function(p, size, prob, pobs0 = 0) { LLL <- max(length(p), length(size), length(prob), length(pobs0)) if (length(p) != LLL) p <- rep(p, len = LLL); if (length(size) != LLL) size <- rep(size, len = LLL); if (length(prob) != LLL) prob <- rep(prob, len = LLL); if (length(pobs0) != LLL) pobs0 <- rep(pobs0, len = LLL); if (!is.Numeric(pobs0) || any(pobs0 < 0) || any(pobs0 > 1)) stop("argument 'pobs0' must be in [0,1]") ans <- p ind4 <- (p > pobs0) ans[!ind4] <- 0.0 ans[ ind4] <- qposbinom((p[ind4] - pobs0[ind4]) / (1 - pobs0[ind4]), size = size[ind4], prob = prob[ind4]) ans } rzabinom <- function(n, size, prob, pobs0 = 0) { use.n <- if ((length.n <- length(n)) > 1) length.n else if (!is.Numeric(n, 
integer.valued = TRUE, allowable.length = 1, positive = TRUE)) stop("bad input for argument 'n'") else n ans <- rposbinom(use.n, size, prob) if (length(pobs0) != use.n) pobs0 <- rep(pobs0, len = use.n) if (!is.Numeric(pobs0) || any(pobs0 < 0) || any(pobs0 > 1)) stop("argument 'pobs0' must be between 0 and 1 inclusive") ifelse(runif(use.n) < pobs0, 0, ans) } zabinomial <- function(lprob = "logit", lpobs0 = "logit", iprob = NULL, ipobs0 = NULL, imethod = 1, zero = 2) { lprob <- as.list(substitute(lprob)) eprob <- link2list(lprob) lprob <- attr(eprob, "function.name") lpobs0 <- as.list(substitute(lpobs0)) epobs0 <- link2list(lpobs0) lpobs0 <- attr(epobs0, "function.name") if (length(iprob)) if (!is.Numeric(iprob, positive = TRUE) || iprob >= 1) stop("argument 'iprob' is out of range") if (length(ipobs0)) if (!is.Numeric(ipobs0, positive = TRUE) || ipobs0 >= 1) stop("argument 'ipobs0' is out of range") if (!is.Numeric(imethod, allowable.length = 1, integer.valued = TRUE, positive = TRUE) || imethod > 3) stop("argument 'imethod' must be 1 or 2 or 3") new("vglmff", blurb = c("Zero-altered binomial distribution ", "(Bernoulli and positive-binomial conditional model)\n\n", "P[Y = 0] = pobs0,\n", "P[Y = y] = (1 - pobs0) * dposbinom(x = y, size, prob), ", "y = 1, 2, ..., size,\n\n", "Link: ", namesof("prob" , lprob, earg = eprob), ", ", namesof("pobs0", lpobs0, earg = epobs0), "\n", "Mean: (1 - pobs0) * prob / (1 - (1 - prob)^size)"), constraints = eval(substitute(expression({ constraints <- cm.zero.vgam(constraints, x, .zero , M) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(Musual = 2, zero = .zero ) }, list( .zero = zero ))), initialize = eval(substitute(expression({ if (!all(w == 1)) extra$orig.w <- w { NCOL <- function (x) if (is.array(x) && length(dim(x)) > 1 || is.data.frame(x)) ncol(x) else as.integer(1) if (NCOL(y) == 1) { if (is.factor(y)) y <- y != levels(y)[1] nn <- rep(1, n) if (!all(y >= 0 & y <= 1)) stop("response values must be in [0, 1]") if (!length(mustart) && !length(etastart)) mustart <- (0.5 + w * y) / (1.0 + w) no.successes <- y if (min(y) < 0) stop("Negative data not allowed!") if (any(abs(no.successes - round(no.successes)) > 1.0e-8)) stop("Number of successes must be integer-valued") } else if (NCOL(y) == 2) { if (min(y) < 0) stop("Negative data not allowed!") if (any(abs(y - round(y)) > 1.0e-8)) stop("Count data must be integer-valued") y <- round(y) nvec <- y[, 1] + y[, 2] y <- ifelse(nvec > 0, y[, 1] / nvec, 0) w <- w * nvec if (!length(mustart) && !length(etastart)) mustart <- (0.5 + nvec * y) / (1 + nvec) } else { stop("for the binomialff family, response 'y' must be a ", "vector of 0 and 1's\n", "or a factor ", "(first level = fail, other levels = success),\n", "or a 2-column matrix where col 1 is the no. of ", "successes and col 2 is the no. 
of failures") } } if (!all(w == 1)) extra$new.w <- w y <- as.matrix(y) extra$y0 <- y0 <- ifelse(y == 0, 1, 0) extra$NOS <- NOS <- ncoly <- ncol(y) # Number of species extra$skip.these <- skip.these <- matrix(as.logical(y0), n, NOS) predictors.names <- c(namesof("prob" , .lprob , earg = .eprob , tag = FALSE), namesof("pobs0", .lpobs0 , earg = .epobs0 , tag = FALSE)) orig.w <- if (length(extra$orig.w)) extra$orig.w else 1 new.w <- if (length(extra$new.w)) extra$new.w else 1 Size <- new.w / orig.w phi.init <- if (length( .ipobs0 )) .ipobs0 else { prob0.est <- sum(Size[y == 0]) / sum(Size) if ( .imethod == 1) { (prob0.est - (1 - mustart)^Size) / (1 - (1 - mustart)^Size) } else if ( .imethod == 2) { prob0.est } else { prob0.est * 0.5 } } phi.init[phi.init <= -0.10] <- 0.50 # Lots of sample variation phi.init[phi.init <= 0.01] <- 0.05 # Last resort phi.init[phi.init >= 0.99] <- 0.95 # Last resort if (!length(etastart)) { etastart <- cbind(theta2eta( mustart, .lprob, earg = .eprob ), theta2eta(phi.init, .lpobs0, earg = .epobs0 )) mustart <- NULL } }), list( .lprob = lprob, .lpobs0 = lpobs0, .eprob = eprob, .epobs0 = epobs0, .iprob = iprob, .ipobs0 = ipobs0, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { prob <- eta2theta(eta[, 1], .lprob, earg = .eprob ) phi0 <- eta2theta(eta[, 2], .lpobs0, earg = .epobs0 ) orig.w <- if (length(extra$orig.w)) extra$orig.w else 1 new.w <- if (length(extra$new.w)) extra$new.w else 1 Size <- new.w / orig.w (1 - phi0) * prob / (1 - (1 - prob)^Size) }, list( .lprob = lprob, .lpobs0 = lpobs0, .eprob = eprob, .epobs0 = epobs0 ))), last = eval(substitute(expression({ misc$link <- c(prob = .lprob, pobs0 = .lpobs0 ) misc$earg <- list(prob = .eprob, pobs0 = .epobs0 ) misc$imethod <- .imethod misc$zero <- .zero misc$expected <- TRUE }), list( .lprob = lprob, .lpobs0 = lpobs0, .eprob = eprob, .epobs0 = epobs0, .zero = zero, .imethod = imethod ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { orig.w <- if (length(extra$orig.w)) extra$orig.w else 1 new.w <- if (length(extra$new.w)) extra$new.w else 1 Size <- new.w / orig.w prob <- eta2theta(eta[, 1], .lprob , earg = .eprob ) pobs0 <- eta2theta(eta[, 2], .lpobs0 , earg = .epobs0 ) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else { sum(orig.w * dzabinom(x = round(y * Size), size = Size, prob = prob, pobs0 = pobs0, log = TRUE)) } }, list( .lprob = lprob, .lpobs0 = lpobs0, .eprob = eprob, .epobs0 = epobs0 ))), vfamily = c("zabinomial"), deriv = eval(substitute(expression({ NOS <- if (length(extra$NOS)) extra$NOS else 1 Musual <- 2 orig.w <- if (length(extra$orig.w)) extra$orig.w else 1 new.w <- if (length(extra$new.w)) extra$new.w else 1 Size <- new.w / orig.w prob <- eta2theta(eta[, 1], .lprob , earg = .eprob ) phi0 <- eta2theta(eta[, 2], .lpobs0 , earg = .epobs0 ) dprob.deta <- dtheta.deta(prob, .lprob , earg = .eprob ) dphi0.deta <- dtheta.deta(phi0, .lpobs0, earg = .epobs0 ) df0.dprob <- -Size * (1 - prob)^(Size - 1) df02.dprob2 <- Size * (Size - 1) * (1 - prob)^(Size - 2) prob0 <- (1 - prob)^(Size) oneminusf0 <- 1 - prob0 dl.dprob <- c(w) * (y / prob - (1 - y) / (1 - prob)) + c(orig.w) * df0.dprob / oneminusf0 dl.dphi0 <- -1 / (1 - phi0) dl.dphi0[y == 0] <- 1 / phi0[y == 0] # Do it in one line skip <- extra$skip.these for(spp. in 1:NOS) { dl.dprob[skip[, spp.], spp.] 
<- 0 } ans <- cbind( dl.dprob * dprob.deta, c(orig.w) * dl.dphi0 * dphi0.deta) ans }), list( .lprob = lprob, .lpobs0 = lpobs0, .eprob = eprob, .epobs0 = epobs0 ))), weight = eval(substitute(expression({ wz <- matrix(0.0, n, Musual) usualmeanY <- prob meanY <- (1 - phi0) * usualmeanY / oneminusf0 term1 <- c(Size) * (meanY / prob^2 - meanY / (1 - prob)^2) + c(Size) * (1 - phi0) / (1 - prob)^2 term2 <- -(1 - phi0) * df02.dprob2 / oneminusf0 term3 <- -(1 - phi0) * (df0.dprob / oneminusf0)^2 ned2l.dprob2 <- term1 + term2 + term3 wz[, iam(1, 1, M)] <- ned2l.dprob2 * dprob.deta^2 mu.phi0 <- phi0 tmp100 <- mu.phi0 * (1.0 - mu.phi0) tmp200 <- if ( .lpobs0 == "logit" && is.empty.list( .epobs0 )) { tmp100 } else { (dphi0.deta^2) / tmp100 } wz[, iam(2, 2, M)] <- tmp200 c(orig.w) * wz }), list( .lprob = lprob, .lpobs0 = lpobs0, .eprob = eprob, .epobs0 = epobs0 )))) } zageometric <- function(lpobs0 = "logit", lprob = "logit", imethod = 1, ipobs0 = NULL, iprob = NULL, zero = NULL) { lpobs0 <- as.list(substitute(lpobs0)) epobs0 <- link2list(lpobs0) lpobs0 <- attr(epobs0, "function.name") lprob <- as.list(substitute(lprob)) eprob <- link2list(lprob) lprob <- attr(eprob, "function.name") if (!is.Numeric(imethod, allowable.length = 1, integer.valued = TRUE, positive = TRUE) || imethod > 3) stop("argument 'imethod' must be 1 or 2 or 3") if (length(iprob)) if (!is.Numeric(iprob, positive = TRUE) || max(iprob) >= 1) stop("argument 'iprob' out of range") if (length(ipobs0)) if (!is.Numeric(ipobs0, positive = TRUE) || max(ipobs0) >= 1) stop("argument 'ipobs0' out of range") new("vglmff", blurb = c("Zero-altered geometric ", "(Bernoulli and positive-geometric conditional model)\n\n", "Links: ", namesof("pobs0", lpobs0, earg = epobs0, tag = FALSE), ", ", namesof("prob" , lprob , earg = eprob , tag = FALSE), "\n", "Mean: (1 - pobs0) / prob"), constraints = eval(substitute(expression({ dotzero <- .zero Musual <- 2 eval(negzero.expression) }), list( .zero = zero ))), initialize = eval(substitute(expression({ Musual <- 2 if (any(y < 0)) stop("the response must not have negative values") temp5 <- w.y.check(w = w, y = y, ncol.w.max = Inf, ncol.y.max = Inf, Is.integer.y = TRUE, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y extra$y0 <- y0 <- ifelse(y == 0, 1, 0) extra$NOS <- NOS <- ncoly <- ncol(y) # Number of species extra$skip.these <- skip.these <- matrix(as.logical(y0), n, NOS) mynames1 <- if (ncoly == 1) "pobs0" else paste("pobs0", 1:ncoly, sep = "") mynames2 <- if (ncoly == 1) "prob" else paste("prob", 1:ncoly, sep = "") predictors.names <- c(namesof(mynames1, .lpobs0 , earg = .epobs0 , tag = FALSE), namesof(mynames2, .lprob , earg = .eprob , tag = FALSE))[ interleave.VGAM(Musual*NOS, M = Musual)] if (!length(etastart)) { foo <- function(x) mean(as.numeric(x == 0)) phi0.init <- matrix(apply(y, 2, foo), n, ncoly, byrow = TRUE) if (length( .ipobs0 )) phi0.init <- matrix( .ipobs0 , n, ncoly, byrow = TRUE) prob.init = if ( .imethod == 2) 1 / (1 + y + 1/16) else if ( .imethod == 1) (1 - phi0.init) / (1 + matrix(colSums(y * w) / colSums(w) + 1/16, n, ncoly, byrow = TRUE)) else (1 - phi0.init) / (1 + matrix(apply(y, 2, median), n, ncoly, byrow = TRUE) + 1/16) if (length( .iprob )) prob.init <- matrix( .iprob , n, ncoly, byrow = TRUE) etastart <- cbind(theta2eta(phi0.init, .lpobs0 , earg = .epobs0 ), theta2eta(prob.init, .lprob , earg = .eprob )) etastart <- etastart[, interleave.VGAM(ncol(etastart), M = Musual)] } }), list( .lpobs0 = lpobs0, .lprob = lprob, .epobs0 = epobs0, .eprob = eprob, 
.ipobs0 = ipobs0, .iprob = iprob,
            .imethod = imethod ))),

  linkinv = eval(substitute(function(eta, extra = NULL) {
    NOS <- extra$NOS
    Musual <- 2
    phi0 <- cbind(eta2theta(eta[, Musual*(1:NOS)-1, drop = FALSE],
                            .lpobs0 , earg = .epobs0 ))
    prob <- cbind(eta2theta(eta[, Musual*(1:NOS)-0, drop = FALSE],
                            .lprob , earg = .eprob ))
    (1 - phi0) / prob
  }, list( .lpobs0 = lpobs0, .lprob = lprob,
           .epobs0 = epobs0, .eprob = eprob ))),

  last = eval(substitute(expression({
    temp.names <- c(rep( .lpobs0 , len = NOS),
                    rep( .lprob  , len = NOS))
    temp.names <- temp.names[interleave.VGAM(Musual*NOS, M = Musual)]
    misc$link <- temp.names
    misc$earg <- vector("list", Musual * NOS)
    names(misc$link) <- names(misc$earg) <-
      c(mynames1, mynames2)[interleave.VGAM(Musual*NOS, M = Musual)]
    for(ii in 1:NOS) {
      misc$earg[[Musual*ii-1]] <- .epobs0
      misc$earg[[Musual*ii  ]] <- .eprob
    }
    misc$expected <- TRUE
    misc$imethod <- .imethod
    misc$ipobs0 <- .ipobs0
    misc$iprob <- .iprob
    misc$multipleResponses <- TRUE
  }), list( .lpobs0 = lpobs0, .lprob = lprob,
            .epobs0 = epobs0, .eprob = eprob,
            .ipobs0 = ipobs0, .iprob = iprob,
            .imethod = imethod ))),

  loglikelihood = eval(substitute(
    function(mu, y, w, residuals = FALSE, eta, extra = NULL) {
    NOS <- extra$NOS
    Musual <- 2
    phi0 <- cbind(eta2theta(eta[, Musual*(1:NOS)-1, drop = FALSE],
                            .lpobs0 , earg = .epobs0 ))
    prob <- cbind(eta2theta(eta[, Musual*(1:NOS)-0, drop = FALSE],
                            .lprob , earg = .eprob ))
    if (residuals) stop("loglikelihood residuals not implemented yet") else {
      sum(c(w) * dzageom(x = y, pobs0 = phi0, prob = prob, log = TRUE))
    }
  }, list( .lpobs0 = lpobs0, .lprob = lprob,
           .epobs0 = epobs0, .eprob = eprob ))),

  vfamily = c("zageometric"),

  deriv = eval(substitute(expression({
    Musual <- 2
    NOS <- extra$NOS
    y0 <- extra$y0
    skip <- extra$skip.these
    phi0 <- cbind(eta2theta(eta[, Musual*(1:NOS)-1, drop = FALSE],
                            .lpobs0 , earg = .epobs0 ))
    prob <- cbind(eta2theta(eta[, Musual*(1:NOS)-0, drop = FALSE],
                            .lprob , earg = .eprob ))
    dl.dprob <- 1 / prob - (y - 1) / (1 - prob)
    dl.dphi0 <- -1 / (1 - phi0)
    for(spp. in 1:NOS) {
      dl.dphi0[skip[, spp.], spp.] <- 1 / phi0[skip[, spp.], spp.]
      dl.dprob[skip[, spp.], spp.] <- 0
    }
    dphi0.deta <- dtheta.deta(phi0, .lpobs0 , earg = .epobs0 )
    dprob.deta <- dtheta.deta(prob, .lprob  , earg = .eprob  )
    ans <- c(w) * cbind(dl.dphi0 * dphi0.deta,
                        dl.dprob * dprob.deta)
    ans <- ans[, interleave.VGAM(ncol(ans), M = Musual)]
    ans
  }), list( .lpobs0 = lpobs0, .lprob = lprob,
            .epobs0 = epobs0, .eprob = eprob ))),

  weight = eval(substitute(expression({
    wz <- matrix(0.0, n, Musual*NOS)
    ned2l.dprob2 <- (1 - phi0) / (prob^2 * (1 - prob))
    wz[, NOS+(1:NOS)] <- c(w) * ned2l.dprob2 * dprob.deta^2
    mu.phi0 <- phi0
    tmp100 <- mu.phi0 * (1.0 - mu.phi0)
    tmp200 <- if ( .lpobs0 == "logit" && is.empty.list( .epobs0 )) {
      cbind(c(w) * tmp100)
    } else {
      cbind(c(w) * (dphi0.deta^2) / tmp100)
    }
    wz[, 1:NOS] <- tmp200
    wz <- wz[, interleave.VGAM(ncol(wz), M = Musual)]
    wz
  }), list( .lpobs0 = lpobs0,
            .epobs0 = epobs0 ))))
}  # End of zageometric


# ---- File: VGAM/R/family.vglm.R ----
# These functions are
# Copyright (C) 1998-2013 T.W. Yee, University of Auckland.
# All rights reserved.

if (FALSE)
family.vglm <- function(object, ...)
  object$vfamily

if (FALSE)
print.vfamily <- function(x, ...)
{
  f <- x$vfamily
  if (is.null(f))
    stop("not a VGAM family function")
  nn <- x$blurb
  if (is.null(nn))
    invisible(return(x))
  cat("Family: ", f[1], "\n")
  if (length(f) > 1)
    cat("Classes:", paste(f, collapse = ", "), "\n")
  cat("\n")
  for(i in 1:length(nn))
    cat(nn[i])
  cat("\n")
  invisible(return(x))
}


# ---- File: VGAM/R/family.univariate.R ----
# These functions are
# Copyright (C) 1998-2013 T.W. Yee, University of Auckland.
# All rights reserved.

getMaxMin <- function(vov, objfun, y, x, w,
                      extraargs = NULL, maximize = TRUE,
                      abs.arg = FALSE, ret.objfun = FALSE) {
  if (!is.vector(vov))
    stop("'vov' must be a vector")
  objvals <- vov
  for(ii in 1:length(vov))
    objvals[ii] <- objfun(vov[ii], y = y, x = x, w = w,
                          extraargs = extraargs)
  try.this <- if (abs.arg) {
    if (maximize) vov[abs(objvals) == max(abs(objvals))] else
                  vov[abs(objvals) == min(abs(objvals))]
  } else {
    if (maximize) vov[objvals == max(objvals)] else
                  vov[objvals == min(objvals)]
  }
  if (!length(try.this))
    stop("something has gone wrong!")
  ans <- if (length(try.this) == 1) try.this else
         sample(try.this, size = 1)
  if (ret.objfun) c(ans, objvals[ans == vov]) else ans
}


mccullagh89 <- function(ltheta = "rhobit",
                        lnu = logoff(offset = 0.5),
                        itheta = NULL, inu = NULL,
                        zero = NULL) {
  ltheta <- as.list(substitute(ltheta))
  etheta <- link2list(ltheta)
  ltheta <- attr(etheta, "function.name")

  lnu <- as.list(substitute(lnu))
  enu <- link2list(lnu)
  lnu <- attr(enu, "function.name")

  if (length(zero) &&
      !is.Numeric(zero, integer.valued = TRUE, positive = TRUE))
    stop("bad input for argument 'zero'")

  new("vglmff",
  blurb = c("McCullagh (1989)'s distribution \n",
            "f(y) = (1-2*theta*y+theta^2)^(-nu) * [1 - y^2]^(nu-1/2) /\n",
            "       Beta[nu+1/2, 1/2], ",
            " -1 < y < 1, -1 < theta < 1, nu > -1/2\n",
            "Links: ",
            namesof("theta", ltheta, earg = etheta), ", ",
            namesof("nu", lnu, earg = enu),
            "\n", "\n",
            "Mean: nu*theta/(1+nu)"),
  constraints = eval(substitute(expression({
    constraints <- cm.zero.vgam(constraints, x, .zero , M)
  }), list( .zero = zero ))),

  initialize = eval(substitute(expression({
    w.y.check(w, y)
    y <- as.numeric(y)
    if (any(y <= -1 | y >= 1))
      stop("all y values must be in (-1, 1)")

    predictors.names <-
      c(namesof("theta", .ltheta , earg = .etheta, tag = FALSE),
        namesof("nu",    .lnu    , earg = .enu,    tag = FALSE))

    if (!length(etastart)) {
      theta.init <- if (length( .itheta ))
        rep( .itheta, length = n) else {
        mccullagh89.aux <- function(thetaval, y, x, w, extraargs)
          mean((y-thetaval)*(thetaval^2-1)/(1-2*thetaval*y+thetaval^2))
        theta.grid <- seq(-0.9, 0.9, by = 0.05)
        try.this <- getMaxMin(theta.grid, objfun = mccullagh89.aux,
                              y = y, x = x, w = w,
                              maximize = FALSE, abs.arg = TRUE)
        try.this <- rep(try.this, length.out = n)
        try.this
      }
      tmp <- y / (theta.init - y)
      tmp[tmp < -0.4] <- -0.4
      tmp[tmp > 10.0] <- 10.0
      nu.init <- rep(if (length( .inu)) .inu else tmp, length = n)
      nu.init[!is.finite(nu.init)] <- 0.4
      etastart <-
        cbind(theta2eta(theta.init, .ltheta , earg = .etheta ),
              theta2eta(nu.init, .lnu, earg = .enu ))
    }
  }), list( .ltheta = ltheta, .lnu = lnu,
            .inu = inu, .itheta = itheta,
            .etheta = etheta, .enu = enu ))),

  linkinv = eval(substitute(function(eta, extra = NULL) {
    Theta <- eta2theta(eta[, 1], .ltheta , earg = .etheta )
    nu <- eta2theta(eta[, 2], .lnu, earg = .enu )
    nu * Theta / (1 + nu)
  }, list( .ltheta = ltheta, .lnu = lnu,
           .etheta = etheta, .enu = enu ))),

  last = eval(substitute(expression({
    misc$link <- c("theta" = .ltheta , "nu" = .lnu )
    misc$earg <- list("theta" = .etheta , "nu" = .enu )
  }), list( .ltheta = ltheta, .lnu = lnu,
.etheta = etheta, .enu = enu ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { Theta <- eta2theta(eta[, 1], .ltheta , earg = .etheta ) nu <- eta2theta(eta[, 2], .lnu, earg = .enu ) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else sum(c(w) * ((nu-0.5)*log1p(-y^2) - nu * log1p(-2*Theta*y + Theta^2) - lbeta(nu + 0.5, 0.5))) }, list( .ltheta = ltheta, .lnu = lnu, .etheta = etheta, .enu = enu ))), vfamily = c("mccullagh89"), deriv = eval(substitute(expression({ Theta <- eta2theta(eta[, 1], .ltheta , earg = .etheta ) nu <- eta2theta(eta[, 2], .lnu, earg = .enu ) dTheta.deta <- dtheta.deta(Theta, .ltheta , earg = .etheta ) dnu.deta <- dtheta.deta(nu, .lnu, earg = .enu ) dl.dTheta <- 2 * nu * (y-Theta) / (1 -2*Theta*y + Theta^2) dl.dnu <- log1p(-y^2) - log1p(-2*Theta*y + Theta^2) - digamma(nu + 0.5) + digamma(nu + 1) c(w) * cbind(dl.dTheta * dTheta.deta, dl.dnu * dnu.deta) }), list( .ltheta = ltheta, .lnu = lnu, .etheta = etheta, .enu = enu ))), weight = eval(substitute(expression({ d2l.dTheta2 <- (2 * nu^2 / (1+nu)) / (1-Theta^2) d2l.dnu2 <- trigamma(nu+0.5) - trigamma(nu+1) wz <- matrix(as.numeric(NA), n, M) # diagonal matrix wz[, iam(1, 1, M)] <- d2l.dTheta2 * dTheta.deta^2 wz[, iam(2, 2, M)] <- d2l.dnu2 * dnu.deta^2 c(w) * wz }), list( .ltheta = ltheta, .lnu = lnu )))) } hzeta.control <- function(save.weight = TRUE, ...) { list(save.weight = save.weight) } hzeta <- function(link = "loglog", ialpha = NULL, nsimEIM = 100) { stopifnot(ialpha > 0) stopifnot(nsimEIM > 10, length(nsimEIM) == 1, nsimEIM == round(nsimEIM)) link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") new("vglmff", blurb = c("Haight's Zeta distribution f(y) = (2y-1)^(-alpha) - ", "(2y+1)^(-alpha),\n", " alpha>0, y = 1, 2,....\n\n", "Link: ", namesof("alpha", link, earg = earg), "\n\n", "Mean: (1-2^(-alpha)) * zeta(alpha) if alpha>1", "\n", "Variance: (1-2^(1-alpha)) * zeta(alpha-1) - mean^2 if alpha>2"), initialize = eval(substitute(expression({ w.y.check(w = w, y = y) if (any(y < 1)) stop("all y values must be in 1, 2, 3,....") predictors.names <- namesof("alpha", .link , earg = .earg , tag = FALSE) if (!length(etastart)) { a.init <- if (length( .ialpha)) .ialpha else { if ((meany <- weighted.mean(y, w)) < 1.5) 3.0 else if (meany < 2.5) 1.4 else 1.1 } a.init <- rep(a.init, length = n) etastart <- theta2eta(a.init, .link , earg = .earg ) } }), list( .link = link, .earg = earg, .ialpha = ialpha ))), linkinv = eval(substitute(function(eta, extra = NULL) { alpha <- eta2theta(eta, .link , earg = .earg ) mu <- (1-2^(-alpha)) * zeta(alpha) mu[alpha <= 1] <- Inf mu }, list( .link = link, .earg = earg ))), last = eval(substitute(expression({ misc$link <- c(alpha = .link) misc$earg <- list(alpha = .earg ) misc$nsimEIM <- .nsimEIM }), list( .link = link, .earg = earg, .nsimEIM = nsimEIM ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { alpha <- eta2theta(eta, .link , earg = .earg ) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else { sum(c(w) * dhzeta(x = y, alpha = alpha, log = TRUE)) } }, list( .link = link, .earg = earg ))), vfamily = c("hzeta"), deriv = eval(substitute(expression({ alpha <- eta2theta(eta, .link , earg = .earg ) dalpha.deta <- dtheta.deta(alpha, .link , earg = .earg ) d3 <- deriv3(~ log((2*y-1)^(-alpha) - (2*y+1)^(-alpha)), "alpha", hessian = FALSE) eval.d3 <- eval(d3) dl.dalpha <- attr(eval.d3, "gradient") c(w) * dl.dalpha * 
dalpha.deta }), list( .link = link, .earg = earg ))), weight = eval(substitute(expression({ sd3 <- deriv3(~ log((2*ysim-1)^(-alpha) - (2*ysim+1)^(-alpha)), "alpha", hessian = FALSE) run.var <- 0 for(ii in 1:( .nsimEIM )) { ysim <- rhzeta(n, alpha=alpha) eval.sd3 <- eval(sd3) dl.dalpha <- attr(eval.d3, "gradient") rm(ysim) temp3 <- dl.dalpha run.var <- ((ii-1) * run.var + temp3^2) / ii } wz <- if (intercept.only) matrix(colMeans(cbind(run.var)), n, dimm(M), byrow = TRUE) else cbind(run.var) wz <- wz * dalpha.deta^2 c(w) * wz }), list( .nsimEIM = nsimEIM )))) } dhzeta <- function(x, alpha, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) if (!is.Numeric(alpha, positive = TRUE)) stop("'alpha' must be numeric and have positive values") nn <- max(length(x), length(alpha)) x <- rep(x, length.out = nn); alpha <- rep(alpha, length.out = nn) ox <- !is.finite(x) zero <- ox | round(x) != x | x < 1 ans <- rep(0, length.out = nn) ans[!zero] <- (2*x[!zero]-1)^(-alpha[!zero]) - (2*x[!zero]+1)^(-alpha[!zero]) if (log.arg) log(ans) else ans } phzeta <- function(q, alpha) { nn <- max(length(q), length(alpha)) q <- rep(q, length.out = nn) alpha <- rep(alpha, length.out = nn) oq <- !is.finite(q) zero <- oq | q < 1 q <- floor(q) ans <- 0 * q ans[!zero] <- 1 - (2*q[!zero]+1)^(-alpha[!zero]) ans[alpha <= 0] <- NaN ans } qhzeta <- function(p, alpha) { if (!is.Numeric(p, positive = TRUE) || any(p >= 1)) stop("argument 'p' must have values inside the interval (0,1)") nn <- max(length(p), length(alpha)) p <- rep(p, length.out = nn) alpha <- rep(alpha, length.out = nn) ans <- (((1 - p)^(-1/alpha) - 1) / 2) # p is in (0,1) ans[alpha <= 0] <- NaN floor(ans + 1) } rhzeta <- function(n, alpha) { ans <- (runif(n)^(-1/alpha) - 1) / 2 ans[alpha <= 0] <- NaN floor(ans + 1) } dirmultinomial <- function(lphi = "logit", iphi = 0.10, parallel = FALSE, zero = "M") { lphi <- as.list(substitute(lphi)) ephi <- link2list(lphi) lphi <- attr(ephi, "function.name") if (length(zero) && !(is.Numeric(zero, integer.valued = TRUE, positive = TRUE) || is.character(zero ))) stop("bad input for argument 'zero'") if (!is.Numeric(iphi, positive = TRUE) || max(iphi) >= 1.0) stop("bad input for argument 'iphi'") new("vglmff", blurb = c("Dirichlet-multinomial distribution\n\n", "Links: ", "log(prob[1]/prob[M]), ..., log(prob[M-1]/prob[M]), ", namesof("phi", lphi, earg = ephi), "\n", "\n", "Mean: shape_j / sum_j(shape_j)"), constraints = eval(substitute(expression({ .ZERO <- .zero if (is.character( .ZERO)) .ZERO <- eval(parse(text = .ZERO)) .PARALLEL <- .parallel if (is.logical( .PARALLEL) && .PARALLEL) { mycmatrix <- if (length( .ZERO )) stop("can only handle parallel = TRUE when zero = NULL") else cbind(rbind(matrix(1, M - 1, 1), 0), rbind(matrix(0, M - 1, 1), 1)) } else { mycmatrix <- if (M == 1) diag(1) else diag(M) } constraints <- cm.vgam(mycmatrix, x, .PARALLEL , constraints, apply.int = TRUE) constraints <- cm.zero.vgam(constraints, x, .ZERO , M) }), list( .parallel = parallel, .zero = zero ))), initialize = eval(substitute(expression({ mustart.orig <- mustart delete.zero.colns <- TRUE eval(process.categorical.data.vgam) if (length(mustart.orig)) mustart <- mustart.orig y <- as.matrix(y) ycount <- as.matrix(y * c(w)) M <- ncol(y) if (max(abs(ycount - round(ycount))) > 1.0e-6) warning("there appears to be non-integer responses") if (min(ycount) < 0) stop("all values of the response (matrix) must be non-negative") predictors.names <- c(paste("log(prob[,", 1:(M-1), "]/prob[,", M, 
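# Note on the [dpqr]hzeta functions above (a sketch; values illustrative):
# dhzeta() telescopes, so the cdf has the closed form
# phzeta(q, alpha) = 1 - (2*q + 1)^(-alpha), and hence
#   sum(dhzeta(1:10, alpha = 1.5))    # equals...
#   phzeta(10, alpha = 1.5)           # ...1 - 21^(-1.5)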
"])", sep = ""), namesof("phi", .lphi , short = TRUE)) extra$n2 <- w # aka omega, must be integer # as.vector(apply(y, 1, sum)) if (!length(etastart)) { if (length(mustart.orig)) { prob.init <- mustart } else { prob.init <- colSums(ycount) prob.init <- prob.init / sum(prob.init) prob.init <- matrix(prob.init, n, M, byrow = TRUE) } phi.init <- rep( .iphi , length.out = n) etastart <- cbind(log(prob.init[, -M] / prob.init[, M]), theta2eta(phi.init, .lphi , earg = .ephi )) } mustart <- NULL # Since etastart has been computed. }), list( .lphi = lphi, .ephi = ephi, .iphi = iphi ))), linkinv = eval(substitute(function(eta, extra = NULL) { M <- if (is.matrix(eta)) ncol(eta) else 1 temp <- cbind(exp(eta[, -M, drop = FALSE]), 1) prop.table(temp, 1) }, list( .ephi = ephi, .lphi = lphi ))), last = eval(substitute(expression({ misc$link <- c(rep("loge", length = M-1), .lphi ) names(misc$link) <- c( paste("prob[,", 1:(M-1), "]/prob[,", M, "])", sep = ""), "phi") misc$earg <- vector("list", M) names(misc$earg) <- names(misc$link) for(ii in 1:(M-1)) misc$earg[[ii]] <- list() misc$earg[[M]] <- .ephi misc$expected <- TRUE if (intercept.only) { misc$shape <- probs[1,] * (1/phi[1]-1) # phi & probs computed in @deriv } }), list( .ephi = ephi, .lphi = lphi ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { M <- if (is.matrix(eta)) ncol(eta) else 1 probs <- cbind(exp(eta[, -M]), 1) probs <- prop.table(probs, 1) phi <- eta2theta(eta[, M], .lphi , earg = .ephi ) n <- length(phi) ycount <- as.matrix(y * c(w)) ycount <- round(ycount) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else { ans <- rep(0.0, length.out = n) omega <- extra$n2 for(jay in 1:M) { maxyj <- max(ycount[, jay]) loopOveri <- (n < maxyj) if (loopOveri) { for(iii in 1:n) { rrr <- 1:ycount[iii, jay] # a vector if (ycount[iii, jay] > 0) ans[iii] <- ans[iii] + sum(log((1-phi[iii]) * probs[iii, jay] + (rrr-1)*phi[iii])) } } else { for(rrr in 1:maxyj) { index <- (rrr <= ycount[, jay]) & (ycount[, jay] > 0) if (any(index)) ans[index] <- ans[index] + log((1-phi[index]) * probs[index, jay] + (rrr-1) * phi[index]) } } } # end of jay loop maxomega <- max(omega) loopOveri <- n < maxomega if (loopOveri) { for(iii in 1:n) { rrr <- 1:omega[iii] ans[iii]<- ans[iii] - sum(log1p(-phi[iii] + (rrr-1) * phi[iii])) } } else { for(rrr in 1:maxomega) { ind8 <- rrr <= omega ans[ind8] <- ans[ind8] - log1p(-phi[ind8] + (rrr-1) * phi[ind8]) } } sum(ans) } }, list( .ephi = ephi, .lphi = lphi ))), vfamily = c("dirmultinomial"), deriv = eval(substitute(expression({ probs <- cbind(exp(eta[, -M]), 1) probs <- prop.table(probs, 1) phi <- eta2theta(eta[, M], .lphi , earg = .ephi ) dl.dprobs <- matrix(0.0, n, M-1) dl.dphi <- rep(0.0, length.out = n) omega <- extra$n2 ycount <- as.matrix(y * c(w)) ycount <- round(ycount) for(jay in 1:M) { maxyj <- max(ycount[, jay]) loopOveri <- n < maxyj if (loopOveri) { for(iii in 1:n) { rrr <- 1:ycount[iii, jay] if (ycount[iii, jay] > 0) { PHI <- phi[iii] dl.dphi[iii] <- dl.dphi[iii] + sum((rrr-1-probs[iii, jay]) / ((1-PHI)*probs[iii, jay] + (rrr-1)*PHI)) tmp9 <- (1-PHI) / ((1-PHI)*probs[iii, jay] + (rrr-1)*PHI) if (jay < M) { dl.dprobs[iii, jay] <- dl.dprobs[iii, jay] + sum(tmp9) } else { for(jay2 in 1:(M-1)) dl.dprobs[iii, jay2]<-dl.dprobs[iii, jay2]-sum(tmp9) } } } } else { for(rrr in 1:maxyj) { index <- (rrr <= ycount[, jay]) & (ycount[, jay] > 0) PHI <- phi[index] dl.dphi[index] <- dl.dphi[index] + (rrr-1-probs[index, jay]) / ((1-PHI)*probs[index, jay] + (rrr-1)*PHI) tmp9 
<- (1-PHI) / ((1-PHI)*probs[index, jay] + (rrr-1)*PHI) if (jay < M) { dl.dprobs[index, jay] <- dl.dprobs[index, jay] + tmp9 } else { for(jay2 in 1:(M-1)) dl.dprobs[index, jay2] <- dl.dprobs[index, jay2] - tmp9 } } } } # end of jay loop maxomega <- max(omega) loopOveri <- n < maxomega if (loopOveri) { for(iii in 1:n) { rrr <- 1:omega[iii] dl.dphi[iii]<-dl.dphi[iii] - sum((rrr-2)/(1 + (rrr-2)*phi[iii])) } } else { for(rrr in 1:maxomega) { index <- rrr <= omega dl.dphi[index] <- dl.dphi[index] - (rrr-2)/(1 + (rrr-2)*phi[index]) } } dprobs.deta <- probs[, -M] * (1 - probs[, -M]) # n x (M-1) dphi.deta <- dtheta.deta(phi, .lphi , earg = .ephi ) ans <- cbind(dl.dprobs * dprobs.deta, dl.dphi * dphi.deta) ans }), list( .ephi = ephi, .lphi = lphi ))), weight = eval(substitute(expression({ wz <- matrix(0, n, dimm(M)) loopOveri <- (n < maxomega) if (loopOveri) { for(iii in 1:n) { rrr <- 1:omega[iii] # A vector PHI <- phi[iii] pYiM.ge.rrr <- 1 - pbetabinom.ab(q = rrr-1, size = omega[iii], shape1<-probs[iii, M]*(1/PHI-1), shape2<-(1-probs[iii, M])*(1/PHI-1)) # A vector denomM <- ((1-PHI)*probs[iii, M] + (rrr-1)*PHI)^2 # A vector wz[iii, iam(M, M, M)] <- wz[iii, iam(M, M, M)] + sum(probs[iii, M]^2 * pYiM.ge.rrr / denomM) - sum(1 / (1 + (rrr-2)*PHI)^2) for(jay in 1:(M-1)) { denomj <- ((1-PHI)*probs[iii, jay] + (rrr-1)*PHI)^2 pYij.ge.rrr <- 1 - pbetabinom.ab(q = rrr-1, size = omega[iii], shape1<-probs[iii, jay]*(1/PHI-1), shape2<-(1-probs[iii, jay])*(1/PHI-1)) wz[iii, iam(jay, jay, M)] <- wz[iii, iam(jay, jay, M)] + sum(pYij.ge.rrr / denomj) + sum(pYiM.ge.rrr / denomM) for(kay in jay:(M-1)) if (kay > jay) { wz[iii, iam(jay, kay, M)] <- wz[iii, iam(jay, kay, M)] + sum(pYiM.ge.rrr / denomM) } wz[iii, iam(jay, M, M)] <- wz[iii, iam(jay, M, M)] + sum(probs[iii, jay] * pYij.ge.rrr / denomj) - sum(probs[iii, M] * pYiM.ge.rrr / denomM) wz[iii, iam(M, M, M)] <- wz[iii, iam(M, M, M)] + sum(probs[iii, jay]^2 * pYij.ge.rrr / denomj) } # end of jay loop } # end of iii loop } else { for(rrr in 1:maxomega) { ind5 <- rrr <= omega PHI <- phi[ind5] pYiM.ge.rrr <- 1 - pbetabinom.ab(q = rrr-1, size = omega[ind5], shape1<-probs[ind5, M]*(1/PHI-1), shape2<-(1-probs[ind5, M])*(1/PHI-1)) denomM <- ((1-PHI)*probs[ind5, M] + (rrr-1)*PHI)^2 wz[ind5, iam(M, M, M)] <- wz[ind5, iam(M, M, M)] + probs[ind5, M]^2 * pYiM.ge.rrr / denomM - 1 / (1 + (rrr-2)*PHI)^2 for(jay in 1:(M-1)) { denomj <- ((1-PHI)*probs[ind5, jay] + (rrr-1)*PHI)^2 pYij.ge.rrr <- 1 - pbetabinom.ab(q = rrr-1, size = omega[ind5], shape1<-probs[ind5, jay]*(1/PHI-1), shape2<-(1-probs[ind5, jay])*(1/PHI-1)) wz[ind5, iam(jay, jay, M)] <- wz[ind5, iam(jay, jay, M)] + pYij.ge.rrr / denomj + pYiM.ge.rrr / denomM for(kay in jay:(M-1)) if (kay > jay) { wz[ind5, iam(jay, kay, M)] <- wz[ind5, iam(jay, kay, M)] + pYiM.ge.rrr / denomM } wz[ind5, iam(jay, M, M)] <- wz[ind5, iam(jay, M, M)] + probs[ind5, jay] * pYij.ge.rrr / denomj - probs[ind5, M] * pYiM.ge.rrr / denomM wz[ind5, iam(M, M, M)] <- wz[ind5, iam(M, M, M)] + probs[ind5, jay]^2 * pYij.ge.rrr / denomj } # end of jay loop } # end of rrr loop } for(jay in 1:(M-1)) for(kay in jay:(M-1)) wz[, iam(jay, kay, M)] <- wz[, iam(jay, kay, M)] * (1-phi)^2 for(jay in 1:(M-1)) wz[, iam(jay, M, M)] <- wz[, iam(jay, M, M)] * (phi-1) / phi wz[, iam(M, M, M)] <- wz[, iam(M, M, M)] / phi^2 d1Thetas.deta <- cbind(dprobs.deta, dphi.deta) index <- iam(NA, NA, M, both = TRUE, diag = TRUE) wz <- wz * d1Thetas.deta[, index$row] * d1Thetas.deta[, index$col] wz }), list( .ephi = ephi, .lphi = lphi )))) } dirmul.old <- function(link = "loge", 
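                       # A rough, not-run usage sketch for dirmultinomial()
                       # above (column names are hypothetical; see the
                       # documentation for the exact calling convention):
                       #   fit <- vglm(cbind(y1, y2, y3) ~ 1,
                       #               dirmultinomial, trace = TRUE)
                       # The response is a matrix of counts; the row totals
                       # become extra$n2 (omega) via the prior weights.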
init.alpha = 0.01, parallel = FALSE, zero = NULL) { link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") if (length(zero) && !is.Numeric(zero, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'zero'") if (!is.Numeric(init.alpha, positive = TRUE)) stop("'init.alpha' must contain positive values only") new("vglmff", blurb = c("Dirichlet-Multinomial distribution\n\n", "Links: ", namesof("shape1", link, earg = earg), ", ..., ", namesof("shapeM", link, earg = earg), "\n\n", "Posterior mean: (n_j + shape_j)/(2*sum(n_j) + ", "sum(shape_j))\n"), constraints = eval(substitute(expression({ constraints <- cm.vgam(matrix(1, M, 1), x, .parallel , constraints, apply.int = TRUE) constraints <- cm.zero.vgam(constraints, x, .zero , M) }), list( .parallel = parallel, .zero = zero ))), initialize = eval(substitute(expression({ y <- as.matrix(y) M <- ncol(y) if (any(y != round(y ))) stop("all y values must be integer-valued") predictors.names <- namesof(paste("shape", 1:M, sep = ""), .link , earg = .earg , short = TRUE) extra$n2 <- rowSums(y) # Nb. don't multiply by 2 extra$y <- y if (!length(etastart)) { yy <- if (is.numeric( .init.alpha)) matrix( .init.alpha, n, M, byrow = TRUE) else matrix(runif(n*M), n, M) etastart <- theta2eta(yy, .link , earg = .earg ) } }), list( .link = link, .earg = earg, .init.alpha = init.alpha ))), linkinv = eval(substitute(function(eta, extra = NULL) { shape <- eta2theta(eta, .link , earg = .earg ) M <- if (is.matrix(eta)) ncol(eta) else 1 sumshape <- as.vector(shape %*% rep(1, length.out = M)) (extra$y + shape) / (extra$n2 + sumshape) }, list( .link = link, .earg = earg ))), last = eval(substitute(expression({ misc$link <- rep( .link , length = M) names(misc$link) <- paste("shape", 1:M, sep = "") misc$earg <- vector("list", M) names(misc$earg) <- names(misc$link) for(ii in 1:M) misc$earg[[ii]] <- .earg misc$pooled.weight <- pooled.weight }), list( .link = link, .earg = earg ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { shape <- eta2theta(eta, .link , earg = .earg ) M <- if (is.matrix(eta)) ncol(eta) else 1 sumshape <- as.vector(shape %*% rep(1, length.out = M)) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else sum(c(w) * (lgamma(sumshape) - lgamma(extra$n2 + sumshape ))) + sum(c(w) * (lgamma(y + shape) - lgamma(shape ))) }, list( .link = link, .earg = earg ))), vfamily = c("dirmul.old"), deriv = eval(substitute(expression({ shape <- eta2theta(eta, .link , earg = .earg ) sumshape <- as.vector(shape %*% rep(1, length.out = M)) dl.dsh <- digamma(sumshape) - digamma(extra$n2 + sumshape) + digamma(y + shape) - digamma(shape) dsh.deta <- dtheta.deta(shape, .link , earg = .earg ) c(w) * dl.dsh * dsh.deta }), list( .link = link, .earg = earg ))), weight = eval(substitute(expression({ index <- iam(NA, NA, M, both = TRUE, diag = TRUE) wz <- matrix(trigamma(sumshape) - trigamma(extra$n2 + sumshape), nrow = n, ncol = dimm(M)) wz[, 1:M] <- wz[, 1:M] + trigamma(y + shape) - trigamma(shape) wz <- -wz * dsh.deta[, index$row] * dsh.deta[, index$col] if (TRUE && intercept.only) { sumw <- sum(w) for(ii in 1:ncol(wz)) wz[, ii] <- sum(wz[, ii]) / sumw pooled.weight <- TRUE wz <- c(w) * wz # Put back the weights } else pooled.weight <- FALSE wz }), list( .link = link, .earg = earg )))) } rdiric <- function(n, shape, dimension = NULL) { use.n <- if ((length.n <- length(n)) > 1) length.n else if (!is.Numeric(n, integer.valued = TRUE, allowable.length = 1, positive = 
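    # rdiric() below uses the classical construction of a Dirichlet
    # draw: generate independent Gamma(shape_j, 1) variates and
    # normalise each row by its sum.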
TRUE)) stop("bad input for argument 'n'") else n if (!is.numeric(dimension)) dimension <- length(shape) shape <- rep(shape, length.out = dimension) ans <- rgamma(use.n * dimension, rep(shape, rep(use.n, dimension))) dim(ans) <- c(use.n, dimension) ans <- ans / rowSums(ans) ans } dirichlet <- function(link = "loge", parallel = FALSE, zero = NULL) { link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") if (length(zero) && !is.Numeric(zero, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'zero'") new("vglmff", blurb = c("Dirichlet distribution\n\n", "Links: ", namesof("shapej", link, earg = earg), "\n\n", "Mean: shape_j/(1 + sum(shape_j)), j = 1,..,ncol(y)"), constraints = eval(substitute(expression({ constraints <- cm.vgam(matrix(1, M, 1), x, .parallel , constraints, apply.int = TRUE) constraints <- cm.zero.vgam(constraints, x, .zero , M) }), list( .parallel = parallel, .zero = zero ))), initialize = eval(substitute(expression({ y <- as.matrix(y) M <- ncol(y) w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = 1, ncol.y.max = Inf, out.wy = FALSE, colsyperw = NULL, maximize = FALSE) if (any(y <= 0) || any(y >= 1)) stop("all y values must be > 0 and < 1") mynames1 <- paste("shape", 1:M, sep = "") predictors.names <- namesof(mynames1, .link , earg = .earg , short = TRUE) if (!length(etastart)) { yy <- matrix(t(y) %*% rep(1 / nrow(y), nrow(y)), nrow(y), M, byrow = TRUE) etastart <- theta2eta(yy, .link , earg = .earg ) } }), list( .link = link, .earg = earg ))), linkinv = eval(substitute(function(eta, extra = NULL) { shape <- eta2theta(eta, .link , earg = .earg ) prop.table(shape, 1) }, list( .link = link, .earg = earg ))), last = eval(substitute(expression({ misc$link <- rep( .link , length.out = M) names(misc$link) <- mynames1 misc$earg <- vector("list", M) names(misc$earg) <- mynames1 for(ii in 1:M) misc$earg[[ii]] <- .earg misc$expected <- TRUE }), list( .link = link, .earg = earg ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { shape <- eta2theta(eta, .link , earg = .earg ) M <- if (is.matrix(eta)) ncol(eta) else 1 sumshape <- as.vector(shape %*% rep(1, length.out = M)) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else { sum(c(w) * lgamma(sumshape)) - sum(c(w) * lgamma(shape)) + sum(c(w) * (shape-1) * log(y)) } }, list( .link = link, .earg = earg ))), vfamily = c("dirichlet"), deriv = eval(substitute(expression({ shape <- eta2theta(eta, .link , earg = .earg ) sumshape <- as.vector(shape %*% rep(1, length.out = M)) dl.dsh <- digamma(sumshape) - digamma(shape) + log(y) dsh.deta <- dtheta.deta(shape, .link , earg = .earg ) c(w) * dl.dsh * dsh.deta }), list( .link = link, .earg = earg ))), weight = expression({ index <- iam(NA, NA, M, both = TRUE, diag = TRUE) wz <- matrix(trigamma(sumshape), nrow = n, ncol = dimm(M)) wz[, 1:M] <- wz[, 1:M] - trigamma(shape) wz <- -c(w) * wz * dsh.deta[, index$row] * dsh.deta[, index$col] wz })) } zeta <- function(x, deriv = 0) { deriv.arg <- deriv rm(deriv) if (!is.Numeric(deriv.arg, allowable.length = 1, integer.valued = TRUE)) stop("'deriv' must be a single non-negative integer") if (deriv.arg < 0 || deriv.arg > 2) stop("'deriv' must be 0, 1, or 2") if (deriv.arg > 0) return(Zeta.derivative(x, deriv.arg = deriv.arg)) if (any(special <- Re(x) <= 1)) { ans <- x ans[special] <- Inf # For Re(x) == 1 special3 <- Re(x) < 1 ans[special3] <- NA # For 0 < Re(x) < 1 special4 <- (0 < Re(x)) & (Re(x) < 1) & (Im(x) == 0) 
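  # Real arguments with 0 < x < 1 (special4) are delegated to the
  # compiled routine via Zeta.derivative(); for Re(x) < 0 (special2)
  # the reflection formula
  #   zeta(x) = 2^x * pi^(x-1) * sin(pi*x/2) * gamma(1-x) * zeta(1-x)
  # maps the argument into the half-plane where the Euler-Maclaurin
  # summation at the bottom of this function converges.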
ans[special4] <- Zeta.derivative(x[special4], deriv.arg = deriv.arg) special2 <- Re(x) < 0 if (any(special2)) { x2 <- x[special2] cx <- 1-x2 ans[special2] <- 2^(x2) * pi^(x2-1) * sin(pi*x2/2) * gamma(cx) * Recall(cx) } if (any(!special)) { ans[!special] <- Recall(x[!special]) } return(ans) } a <- 12; k <- 8 B <- c(1/6, -1/30,1/42,-1/30,5/66,-691/2730,7/6,-3617/510) ans <- 0 for(ii in 1:(a-1)) ans <- ans + 1.0 / ii^x ans <- ans + 1.0 / ((x-1.0)* a^(x-1.0)) + 1.0 / (2.0 * a^x) term <- (x/2) / a^(x+1) ans <- ans + term * B[1] for(mm in 2:k) { term <- term * (x+2*mm-2) * (x+2*mm-3) / (a * a * 2 * mm * (2*mm-1)) ans <- ans + term * B[mm] } ans } Zeta.derivative <- function(x, deriv.arg = 0) { if (!is.Numeric(deriv.arg, allowable.length = 1, integer.valued = TRUE)) stop("'deriv.arg' must be a single non-negative integer") if (deriv.arg < 0 || deriv.arg > 2) stop("'deriv.arg' must be 0, 1, or 2") if (any(Im(x) != 0)) stop("Sorry, currently can only handle x real, not complex") if (any(x < 0)) stop("Sorry, currently cannot handle x < 0") ok <- is.finite(x) & x > 0 & x != 1 # Handles NAs ans <- rep(as.numeric(NA), length(x)) nn <- sum(ok) # Effective length (excludes x < 0 and x = 1 values) if (nn) ans[ok] <- dotC(name = "vzetawr", as.double(x[ok]), ans = double(nn), as.integer(deriv.arg), as.integer(nn))$ans if (deriv.arg == 0) ans[is.finite(x) & abs(x) < 1.0e-12] <- -0.5 ans } dzeta <- function(x, p, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) if (!is.Numeric(p, positive = TRUE)) # || min(p) <= 1 stop("'p' must be numeric and > 0") LLL <- max(length(p), length(x)) x <- rep(x, length.out = LLL); p <- rep(p, length.out = LLL) ox <- !is.finite(x) zero <- ox | round(x) != x | x < 1 if (any(zero)) warning("non-integer x and/or x < 1 or NAs") ans <- rep(if (log.arg) log(0) else 0, length.out = LLL) if (any(!zero)) { if (log.arg) { ans[!zero] <- (-p[!zero]-1)*log(x[!zero]) - log(zeta(p[!zero]+1)) } else { ans[!zero] <- x[!zero]^(-p[!zero]-1) / zeta(p[!zero]+1) } } if (any(ox)) ans[ox] <- NA ans } zetaff <- function(link = "loge", init.p = NULL, zero = NULL) { if (length(init.p) && !is.Numeric(init.p, positive = TRUE)) stop("argument 'init.p' must be > 0") link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") if (length(zero) && !is.Numeric(zero, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'zero'") new("vglmff", blurb = c("Zeta distribution ", "f(y) = 1/(y^(p+1) zeta(p+1)), p>0, y = 1, 2,..\n\n", "Link: ", namesof("p", link, earg = earg), "\n\n", "Mean: zeta(p) / zeta(p+1), provided p>1\n", "Variance: zeta(p-1) / zeta(p+1) - mean^2, provided p>2"), infos = eval(substitute(function(...) 
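    # (Aside: dzeta() above parameterises the zeta pmf as
    # f(y; p) = y^(-(p+1)) / zeta(p+1), so zetaff() estimates p rather
    # than the exponent itself.  A quick not-run sanity check:
    #   sum(dzeta(1:1e5, p = 1.5))  # should be close to 1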
{ list(Musual = 1, multipleResponses = TRUE, zero = .zero , link = .link) }, list( .link = link, .zero = zero ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, ncol.w.max = Inf, ncol.y.max = Inf, Is.integer.y = TRUE, Is.positive.y = TRUE, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y ncoly <- ncol(y) mynames1 <- paste("p", if (ncoly > 1) 1:ncoly else "", sep = "") predictors.names <- namesof(mynames1, .link , earg = .earg , tag = FALSE) Musual <- 1 extra$ncoly <- ncoly extra$Musual <- Musual M <- Musual * ncoly if (!length(etastart)) { zetaff.Loglikfun <- function(pp, y, x, w, extraargs) { sum(c(w) * dzeta(x = y, p = pp, log = TRUE)) } p.grid <- seq(0.1, 3.0, length.out = 19) pp.init <- matrix( if (length( .init.p )) .init.p else -1, n, M, byrow = TRUE) if (!length( .init.p )) for (spp. in 1:ncoly) { pp.init[, spp.] <- getMaxMin(p.grid, objfun = zetaff.Loglikfun, y = y[, spp.], x = x, w = w[, spp.]) if ( .link == "loglog") pp.init[pp.init <= 1, spp.] <- 1.2 } etastart <- theta2eta(pp.init, .link , earg = .earg ) } }), list( .link = link, .earg = earg, .init.p = init.p ))), linkinv = eval(substitute(function(eta, extra = NULL) { ans <- pp <- eta2theta(eta, .link , earg = .earg ) ans[pp > 1] <- zeta(pp[pp > 1]) / zeta(pp[pp > 1] + 1) ans[pp <= 1] <- NA ans }, list( .link = link, .earg = earg ))), last = eval(substitute(expression({ Musual <- extra$Musual misc$link <- rep( .link , length = ncoly) names(misc$link) <- mynames1 misc$earg <- vector("list", M) names(misc$earg) <- mynames1 for(ii in 1:ncoly) { misc$earg[[ii]] <- .earg } misc$multipleResponses <- TRUE }), list( .link = link, .earg = earg ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { pp <- eta2theta(eta, .link , earg = .earg ) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else { sum(c(w) * dzeta(x = y, p = pp, log = TRUE)) } }, list( .link = link, .earg = earg ))), vfamily = c("zetaff"), deriv = eval(substitute(expression({ pp <- eta2theta(eta, .link , earg = .earg ) fred1 <- zeta(pp+1) fred2 <- zeta(pp+1, deriv = 1) dl.dpp <- -log(y) - fred2 / fred1 dpp.deta <- dtheta.deta(pp, .link , earg = .earg ) c(w) * dl.dpp * dpp.deta }), list( .link = link, .earg = earg ))), weight = expression({ NOS <- ncol(y) nd2l.dpp2 <- zeta(pp + 1, deriv = 2) / fred1 - (fred2/fred1)^2 wz <- nd2l.dpp2 * dpp.deta^2 w.wz.merge(w = w, wz = wz, n = n, M = M, ndepy = NOS) })) } gharmonic <- function(n, s = 1, lognexponent = 0) { if (!is.Numeric(n, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'n'") if (!is.Numeric(lognexponent, allowable.length = 1)) stop("bad input for argument 'lognexponent'") if (length(n) == 1 && length(s) == 1) { if (lognexponent != 0) sum(log(1:n)^lognexponent * (1:n)^(-s)) else sum((1:n)^(-s)) } else { LEN <- max(length(n), length(s)) n <- rep(n, length.out = LEN) ans <- s <- rep(s, length.out = LEN) if (lognexponent != 0) { for(ii in 1:LEN) ans[ii] <- sum(log(1:n[ii])^lognexponent * (1:n[ii])^(-s[ii])) } else for(ii in 1:LEN) ans[ii] <- sum((1:n[ii])^(-s[ii])) ans } } rzipf <- function(n, N, s) { r <- runif(n) sapply(r, function(x) {min(which(pzipf(1:N, N, s) > x))}) } dzipf <- function(x, N, s, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) if (!is.Numeric(x)) stop("bad input for argument 'x'") if (!is.Numeric(N, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'N'") if (!is.Numeric(s, positive 
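      # (gharmonic() above returns generalised harmonic numbers
      #   H(N, s; k) = sum_{i=1}^N log(i)^k * i^(-s),
      # where k = lognexponent; k = 1 and k = 2 give the first two
      # derivatives of H(N, s; 0) with respect to -s, which zipf()
      # uses for its score and working weights below.)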
= TRUE)) stop("bad input for argument 's'") nn <- max(length(x), length(N), length(s)) x <- rep(x, length.out = nn); N <- rep(N, length.out = nn); s <- rep(s, length.out = nn); ox <- !is.finite(x) zero <- ox | round(x) != x | x < 1 | x > N ans <- (if (log.arg) log(0) else 0) * x if (any(!zero)) if (log.arg) { ans[!zero] <- (-s[!zero]) * log(x[!zero]) - log(gharmonic(N[!zero], s[!zero])) } else { ans[!zero] <- x[!zero]^(-s[!zero]) / gharmonic(N[!zero], s[!zero]) } ans } pzipf <- function(q, N, s) { if (!is.Numeric(q)) stop("bad input for argument 'q'") if (!is.Numeric(N, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'N'") if (!is.Numeric(s, positive = TRUE)) stop("bad input for argument 's'") nn <- max(length(q), length(N), length(s)) q <- rep(q, length.out = nn); N <- rep(N, length.out = nn); s <- rep(s, length.out = nn); oq <- !is.finite(q) zeroOR1 <- oq | q < 1 | q >= N floorq <- floor(q) ans <- 0 * floorq ans[oq | q >= N] <- 1 if (any(!zeroOR1)) ans[!zeroOR1] <- gharmonic(floorq[!zeroOR1], s[!zeroOR1]) / gharmonic(N[!zeroOR1], s[!zeroOR1]) ans } zipf <- function(N = NULL, link = "loge", init.s = NULL) { if (length(N) && (!is.Numeric(N, positive = TRUE, integer.valued = TRUE, allowable.length = 1) || N <= 1)) stop("bad input for argument 'N'") enteredN <- length(N) if (length(init.s) && !is.Numeric(init.s, positive = TRUE)) stop("argument 'init.s' must be > 0") link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") new("vglmff", blurb = c("Zipf distribution f(y;s) = y^(-s) / sum((1:N)^(-s)),", " s > 0, y = 1, 2,...,N", ifelse(enteredN, paste(" = ",N,sep = ""), ""), "\n\n", "Link: ", namesof("s", link, earg = earg), "\n\n", "Mean: gharmonic(N,s-1) / gharmonic(N,s)"), initialize = eval(substitute(expression({ w.y.check(w = w, y = y, Is.integer.y = TRUE) predictors.names <- namesof("s", .link , earg = .earg , tag = FALSE) NN <- .N if (!is.Numeric(NN, allowable.length = 1, positive = TRUE, integer.valued = TRUE)) NN <- max(y) if (max(y) > NN) stop("maximum of the response is greater than argument 'N'") if (any(y < 1)) stop("all response values must be in 1, 2, 3,...,N( = ", NN,")") extra$N <- NN if (!length(etastart)) { llfun <- function(ss, y, N, w) { sum(c(w) * dzipf(x = y, N=extra$N, s=ss, log = TRUE)) } ss.init <- if (length( .init.s )) .init.s else getInitVals(gvals = seq(0.1, 3.0, length.out = 19), llfun=llfun, y = y, N=extra$N, w = w) ss.init <- rep(ss.init, length = length(y)) if ( .link == "loglog") ss.init[ss.init <= 1] = 1.2 etastart <- theta2eta(ss.init, .link , earg = .earg ) } }), list( .link = link, .earg = earg, .init.s = init.s, .N = N ))), linkinv = eval(substitute(function(eta, extra = NULL) { ss <- eta2theta(eta, .link , earg = .earg ) gharmonic(extra$N, s=ss - 1) / gharmonic(extra$N, s=ss) }, list( .link = link, .earg = earg ))), last = eval(substitute(expression({ misc$expected <- FALSE misc$link <- c(s = .link) misc$earg <- list(s = .earg ) misc$N <- extra$N }), list( .link = link, .earg = earg ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { ss <- eta2theta(eta, .link , earg = .earg ) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else { sum(c(w) * dzipf(x = y, N=extra$N, s=ss, log = TRUE)) } }, list( .link = link, .earg = earg ))), vfamily = c("zipf"), deriv = eval(substitute(expression({ ss <- eta2theta(eta, .link , earg = .earg ) fred1 <- gharmonic(extra$N, ss) fred2 <- gharmonic(extra$N, ss, lognexp = 1) dl.dss <- -log(y) + fred2 / 
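      # (fred1 = H(N,s;0) and fred2 = H(N,s;1) in the gharmonic()
      # notation, so dl.dss = -log(y) + H(N,s;1)/H(N,s;0) is the score
      # of the Zipf log-likelihood with respect to s.)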
fred1 dss.deta <- dtheta.deta(ss, .link , earg = .earg ) d2ss.deta2 <- d2theta.deta2(ss, .link , earg = .earg ) c(w) * dl.dss * dss.deta }), list( .link = link, .earg = earg ))), weight = expression({ d2l.dss <- gharmonic(extra$N, ss, lognexp = 2) / fred1 - (fred2/fred1)^2 wz <- c(w) * (dss.deta^2 * d2l.dss - d2ss.deta2 * dl.dss) wz })) } cauchy.control <- function(save.weight = TRUE, ...) { list(save.weight = save.weight) } cauchy <- function(llocation = "identity", lscale = "loge", ilocation = NULL, iscale = NULL, iprobs = seq(0.2, 0.8, by=0.2), imethod = 1, nsimEIM = NULL, zero = 2) { llocat <- as.list(substitute(llocation)) elocat <- link2list(llocat) llocat <- attr(elocat, "function.name") ilocat <- ilocation lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") if (!is.Numeric(imethod, allowable.length = 1, integer.valued = TRUE, positive = TRUE) || imethod > 3) stop("argument 'imethod' must be 1 or 2 or 3") if (length(zero) && !is.Numeric(zero, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'zero'") if (length(nsimEIM) && (!is.Numeric(nsimEIM, allowable.length = 1, integer.valued = TRUE) || nsimEIM <= 50)) stop("argument 'nsimEIM' should be an integer greater than 50") if (length(iscale) && !is.Numeric(iscale, positive = TRUE)) stop("bad input for argument 'iscale'") if (!is.Numeric(iprobs, positive = TRUE) || max(iprobs) >= 1) stop("bad input for argument 'iprobs'") new("vglmff", blurb = c("Two parameter Cauchy distribution ", "(location & scale unknown)\n\n", "Link: ", namesof("location", llocat, earg = elocat), "\n", namesof("scale", lscale, earg = escale), "\n\n", "Mean: NA\n", "Variance: NA"), constraints = eval(substitute(expression({ constraints <- cm.zero.vgam(constraints, x, .zero , M) }), list( .zero = zero ))), initialize = eval(substitute(expression({ predictors.names <- c( namesof("location", .llocat , earg = .elocat , tag = FALSE), namesof("scale", .lscale , earg = .escale , tag = FALSE)) w.y.check(w = w, y = y) if (!length(etastart)) { loc.init <- if (length( .ilocat)) .ilocat else { if ( .imethod == 2) median(rep(y, w)) else if ( .imethod == 3) y else { cauchy2.Loglikfun <- function(loc, y, x, w, extraargs) { iprobs <- .iprobs qy <- quantile(rep(y, w), probs = iprobs) ztry <- tan(pi*(iprobs-0.5)) btry <- (qy - loc) / ztry scal <- median(btry, na.rm = TRUE) if (scal <= 0) scal <- 0.1 sum(c(w) * dcauchy(x = y, loc = loc, scale = scal, log = TRUE)) } loc.grid <- c(quantile(y, probs = seq(0.1, 0.9, by=0.05))) try.this <- getMaxMin(loc.grid, objfun = cauchy2.Loglikfun, y = y, x = x, w = w) try.this <- rep(c(try.this), length.out = n) try.this } } loc.init <- rep(c(loc.init), length.out = n) sca.init <- if (length( .iscale )) .iscale else { iprobs <- .iprobs qy <- quantile(rep(y, w), probs = iprobs) ztry <- tan(pi*(iprobs-0.5)) btry <- (qy - loc.init[1]) / ztry sca.init <- median(btry, na.rm = TRUE) if (sca.init <= 0) sca.init <- 0.01 sca.init } sca.init <- rep(c(sca.init), length.out = n) if ( .llocat == "loge") loc.init <- abs(loc.init)+0.01 etastart <- cbind(theta2eta(loc.init, .llocat , earg = .elocat ), theta2eta(sca.init, .lscale , earg = .escale )) } }), list( .ilocat = ilocat, .elocat = elocat, .llocat = llocat, .iscale = iscale, .escale = escale, .lscale = lscale, .iprobs = iprobs, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { eta2theta(eta[, 1], .llocat , earg = .elocat ) }, list( .llocat = llocat, .elocat = elocat ))), last = eval(substitute(expression({ 
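    # (Note on the initialisation above: for Cauchy(location, scale),
    # the p-th quantile is location + scale * tan(pi*(p - 1/2)), so
    # matching sample quantiles qy against ztry = tan(pi*(iprobs - 0.5))
    # yields a robust median-based scale estimate, after which the
    # location is grid-searched by maximising the log-likelihood.)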
misc$expected <- TRUE misc$link <- c("location" = .llocat , "scale" =.lscale) misc$earg <- list("location" = .elocat , "scale" = .escale ) misc$imethod <- .imethod }), list( .escale = escale, .elocat = elocat, .imethod = imethod, .llocat = llocat, .lscale = lscale ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { locat <- eta2theta(eta[, 1], .llocat , earg = .elocat ) myscale <- eta2theta(eta[, 2], .lscale , earg = .escale ) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else { sum(c(w) * dcauchy(x = y, loc=locat, sc=myscale, log = TRUE)) } }, list( .escale = escale, .lscale = lscale, .elocat = elocat, .llocat = llocat ))), vfamily = c("cauchy"), deriv = eval(substitute(expression({ location <- eta2theta(eta[, 1], .llocat , earg = .elocat ) myscale <- eta2theta(eta[, 2], .lscale , earg = .escale ) dlocation.deta <- dtheta.deta(location, .llocat , earg = .elocat ) dscale.deta <- dtheta.deta(myscale, .lscale , earg = .escale ) Z <- (y-location) / myscale dl.dlocation <- 2 * Z / ((1 + Z^2) * myscale) dl.dscale <- (Z^2 - 1) / ((1 + Z^2) * myscale) c(w) * cbind(dl.dlocation * dlocation.deta, dl.dscale * dscale.deta) }), list( .escale = escale, .lscale = lscale, .elocat = elocat, .llocat = llocat ))), weight = eval(substitute(expression({ run.varcov <- 0 ind1 <- iam(NA, NA, M = M, both = TRUE, diag = TRUE) dthetas.detas = cbind(dlocation.deta, dscale.deta) if (length( .nsimEIM )) { for(ii in 1:( .nsimEIM )) { ysim <- rcauchy(n, loc = location, scale = myscale) Z <- (ysim-location) / myscale dl.dlocation <- 2 * Z / ((1 + Z^2) * myscale) dl.dscale <- (Z^2 - 1) / ((1 + Z^2) * myscale) rm(ysim) temp3 <- matrix(c(dl.dlocation, dl.dscale), n, 2) run.varcov <- ((ii-1) * run.varcov + temp3[, ind1$row.index] * temp3[, ind1$col.index]) / ii } wz <- if (intercept.only) matrix(colMeans(run.varcov), n, ncol(run.varcov), byrow = TRUE) else run.varcov wz <- wz * dthetas.detas[, ind1$row] * dthetas.detas[, ind1$col] wz <- c(w) * matrix(wz, n, dimm(M)) } else { wz <- cbind(matrix(0.5 / myscale^2,n,2), matrix(0,n,1)) * dthetas.detas[, ind1$row] * dthetas.detas[, ind1$col] wz <- c(w) * wz[, 1:M] # diagonal wz } wz }), list( .escale = escale, .lscale = lscale, .nsimEIM = nsimEIM, .elocat = elocat, .llocat = llocat )))) } cauchy1 <- function(scale.arg = 1, llocation = "identity", ilocation = NULL, imethod = 1) { llocat <- as.list(substitute(llocation)) elocat <- link2list(llocat) llocat <- attr(elocat, "function.name") ilocat <- ilocation if (!is.Numeric(scale.arg, positive = TRUE)) stop("bad input for 'scale.arg'") if (!is.Numeric(imethod, allowable.length = 1, integer.valued = TRUE, positive = TRUE) || imethod > 3) stop("argument 'imethod' must be 1 or 2 or 3") new("vglmff", blurb = c("One-parameter Cauchy distribution ", "(location unknown, scale known)\n\n", "Link: ", namesof("location", llocat, earg = elocat), "\n\n", "Mean: NA\n", "Variance: NA"), initialize = eval(substitute(expression({ predictors.names <- namesof("location", .llocat , earg = .elocat , tag = FALSE) w.y.check(w = w, y = y) if (!length(etastart)) { loc.init <- if (length( .ilocat)) .ilocat else { if ( .imethod == 2) median(rep(y, w)) else if ( .imethod == 3) y else { cauchy1.Loglikfun <- function(loc, y, x, w, extraargs) { scal <- extraargs sum(c(w) * dcauchy(x = y, loc = loc, scale = scal, log = TRUE)) } loc.grid <- quantile(y, probs = seq(0.1, 0.9, by = 0.05)) try.this <- getMaxMin(loc.grid, objfun = cauchy1.Loglikfun, y = y, x = x, w = w, extraargs = .scale.arg ) 
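            # getMaxMin() does a coarse grid search: it evaluates the
            # weighted Cauchy log-likelihood at each candidate location
            # (a grid of sample quantiles of y) and returns the
            # maximiser, which is recycled to length n just below.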
try.this <- rep(try.this, length.out = n) try.this } } loc.init <- rep(loc.init, length.out = n) if ( .llocat == "loge") loc.init = abs(loc.init)+0.01 etastart <- theta2eta(loc.init, .llocat , earg = .elocat ) } }), list( .scale.arg = scale.arg, .ilocat = ilocat, .elocat = elocat, .llocat = llocat, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { eta2theta(eta, .llocat , earg = .elocat ) }, list( .llocat = llocat, .elocat = elocat ))), last = eval(substitute(expression({ misc$link <- c("location" = .llocat) misc$earg <- list("location" = .elocat ) misc$expected <- TRUE misc$scale.arg <- .scale.arg }), list( .scale.arg = scale.arg, .elocat = elocat, .llocat = llocat ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { locat <- eta2theta(eta, .llocat , earg = .elocat ) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else { sum(c(w) * dcauchy(x = y, loc=locat, scale = .scale.arg, log = TRUE)) } }, list( .scale.arg = scale.arg, .elocat = elocat, .llocat = llocat ))), vfamily = c("cauchy1"), deriv = eval(substitute(expression({ locat <- eta2theta(eta, .llocat , earg = .elocat ) temp <- (y-locat)/.scale.arg dl.dlocat <- 2 * temp / ((1 + temp^2) * .scale.arg) dlocation.deta <- dtheta.deta(locat, .llocat , earg = .elocat ) c(w) * dl.dlocat * dlocation.deta }), list( .scale.arg = scale.arg, .elocat = elocat, .llocat = llocat ))), weight = eval(substitute(expression({ wz <- c(w) * dlocation.deta^2 / ( .scale.arg^2 * 2) wz }), list( .scale.arg = scale.arg, .elocat = elocat, .llocat = llocat )))) } logistic1 <- function(llocation = "identity", scale.arg = 1, imethod = 1) { if (!is.Numeric(scale.arg, allowable.length = 1, positive = TRUE)) stop("'scale.arg' must be a single positive number") if (!is.Numeric(imethod, allowable.length = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("argument 'imethod' must be 1 or 2") llocat <- as.list(substitute(llocation)) elocat <- link2list(llocat) llocat <- attr(elocat, "function.name") new("vglmff", blurb = c("One-parameter logistic distribution ", "(location unknown, scale known)\n\n", "Link: ", namesof("location", llocat, earg = elocat), "\n\n", "Mean: location", "\n", "Variance: (pi*scale)^2 / 3"), initialize = eval(substitute(expression({ w.y.check(w = w, y = y) predictors.names <- namesof("location", .llocat , earg = .elocat , tag = FALSE) if (!length(etastart)) { locat.init <- if ( .imethod == 1) y else median(rep(y, w)) locat.init <- rep(locat.init, length.out = n) if ( .llocat == "loge") locat.init <- abs(locat.init) + 0.001 etastart <- theta2eta(locat.init, .llocat , earg = .elocat ) } }), list( .imethod = imethod, .llocat = llocat, .elocat = elocat ))), linkinv = eval(substitute(function(eta, extra = NULL) { eta2theta(eta, .llocat , earg = .elocat ) }, list( .llocat = llocat, .elocat = elocat ))), last = eval(substitute(expression({ misc$expected <- TRUE misc$link <- c(location = .llocat) misc$earg <- list(location = .elocat ) misc$scale.arg <- .scale.arg }), list( .llocat = llocat, .elocat = elocat, .scale.arg = scale.arg ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { locat <- eta2theta(eta, .llocat , earg = .elocat ) zedd <- (y-locat)/.scale.arg if (residuals) stop("loglikelihood residuals not ", "implemented yet") else { sum(c(w) * dlogis(x = y, locat = locat, scale = .scale.arg, log = TRUE)) } }, list( .llocat = llocat, .elocat = elocat, .scale.arg = scale.arg ))), vfamily = 
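  # logistic1() treats the scale as known (scale.arg) and estimates the
  # location only; dlogis() supplies the log-density, and the expected
  # information per observation, 1/(3*scale^2), appears in @weight below.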
c("logistic1"), deriv = eval(substitute(expression({ locat <- eta2theta(eta, .llocat , earg = .elocat ) ezedd <- exp(-(y-locat) / .scale.arg ) dl.dlocat <- (1 - ezedd) / ((1 + ezedd) * .scale.arg) dlocat.deta <- dtheta.deta(locat, .llocat , earg = .elocat ) c(w) * dl.dlocat * dlocat.deta }), list( .llocat = llocat, .elocat = elocat, .scale.arg = scale.arg ))), weight = eval(substitute(expression({ wz <- c(w) * dlocat.deta^2 / ( .scale.arg^2 * 3) wz }), list( .scale.arg = scale.arg )))) } erlang <- function(shape.arg, link = "loge", imethod = 1, zero = NULL) { if (!is.Numeric(shape.arg, allowable.length = 1, integer.valued = TRUE, positive = TRUE)) stop("'shape' must be a positive integer") if (!is.Numeric(imethod, allowable.length = 1, integer.valued = TRUE, positive = TRUE) || imethod > 3) stop("argument 'imethod' must be 1 or 2 or 3") link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") if (length(zero) && !is.Numeric(zero, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'zero'") new("vglmff", blurb = c("Erlang distribution\n\n", "Link: ", namesof("scale", link, earg = earg), "\n", "\n", "Mean: shape * scale", "\n", "Variance: shape * scale^2"), constraints = eval(substitute(expression({ dotzero <- .zero Musual <- 1 eval(negzero.expression) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(Musual = 1, zero = .zero ) }, list( .zero = zero ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, Is.nonnegative.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y ncoly <- ncol(y) Musual <- 1 extra$ncoly <- ncoly extra$Musual <- Musual M <- Musual * ncoly mynames1 <- paste("scale", if (ncoly > 1) 1:ncoly else "", sep = "") predictors.names <- namesof(mynames1, .link , earg = .earg , tag = FALSE) if (!length(etastart)) { if ( .imethod == 1) { sc.init <- y / .shape.arg } if ( .imethod == 2) { sc.init <- (colSums(y * w) / colSums(w))/ .shape.arg } if ( .imethod == 3) { sc.init <- median(y) / .shape.arg } if ( !is.matrix(sc.init)) sc.init <- matrix(sc.init, n, M, byrow = TRUE) etastart <- theta2eta(sc.init, .link , earg = .earg ) } }), list( .link = link, .earg = earg, .shape.arg = shape.arg, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { sc <- eta2theta(eta, .link , earg = .earg ) .shape.arg * sc }, list( .link = link, .earg = earg, .shape.arg = shape.arg ))), last = eval(substitute(expression({ Musual <- extra$Musual misc$link <- c(rep( .link , length = ncoly)) names(misc$link) <- mynames1 misc$earg <- vector("list", M) names(misc$earg) <- mynames1 for(ii in 1:ncoly) { misc$earg[[ii]] <- .earg } misc$Musual <- Musual misc$expected <- TRUE misc$multipleResponses <- TRUE misc$shape.arg <- .shape.arg }), list( .link = link, .earg = earg, .shape.arg = shape.arg ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { sc <- eta2theta(eta, .link , earg = .earg ) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else { sum(c(w) * (( .shape.arg - 1) * log(y) - y / sc - .shape.arg * log(sc) - lgamma( .shape.arg ))) } }, list( .link = link, .earg = earg, .shape.arg = shape.arg ))), vfamily = c("erlang"), deriv = eval(substitute(expression({ sc <- eta2theta(eta, .link , earg = .earg ) dl.dsc <- (y / sc - .shape.arg) / sc dsc.deta <- dtheta.deta(sc, .link , earg = .earg ) c(w) * dl.dsc * dsc.deta }), list( .link = link, .earg = earg, 
.shape.arg = shape.arg ))), weight = eval(substitute(expression({ ned2l.dsc2 <- .shape.arg / sc^2 wz <- c(w) * dsc.deta^2 * ned2l.dsc2 wz }), list( .earg = earg, .shape.arg = shape.arg )))) } dbort <- function(x, Qsize = 1, a = 0.5, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) if (!is.Numeric(x)) stop("bad input for argument 'x'") if (!is.Numeric(Qsize, allowable.length = 1, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'Qsize'") if (!is.Numeric(a, positive = TRUE) || max(a) >= 1) stop("bad input for argument 'a'") N <- max(length(x), length(Qsize), length(a)) x <- rep(x, length.out = N); Qsize <- rep(Qsize, length.out = N); a <- rep(a, length.out = N); xok <- (x >= Qsize) & (x == round(x)) & (a > 0) & (a < 1) ans <- rep(if (log.arg) log(0) else 0, length.out = N) # loglikelihood ans[xok] <- lgamma(1 + Qsize[xok]) - lgamma(x[xok] + 1 - Qsize[xok]) + (x[xok] - 1 - Qsize[xok]) * log(x[xok]) + (x[xok] - Qsize[xok]) * log(a[xok]) - a[xok] * x[xok] if (!log.arg) { ans[xok] <- exp(ans[xok]) } ans } rbort <- function(n, Qsize = 1, a = 0.5) { use.n <- if ((length.n <- length(n)) > 1) length.n else if (!is.Numeric(n, integer.valued = TRUE, allowable.length = 1, positive = TRUE)) stop("bad input for argument 'n'") else n if (!is.Numeric(Qsize, allowable.length = 1, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'Qsize'") if (!is.Numeric(a, positive = TRUE) || max(a) >= 1) stop("bad input for argument 'a'") N <- use.n qsize <- rep(Qsize, length.out = N); a <- rep(a, length.out = N) totqsize <- qsize fini <- (qsize < 1) while(any(!fini)) { additions <- rpois(sum(!fini), a[!fini]) qsize[!fini] <- qsize[!fini] + additions totqsize[!fini] <- totqsize[!fini] + additions qsize <- qsize - 1 fini <- fini | (qsize < 1) } totqsize } borel.tanner <- function(Qsize = 1, link = "logit", imethod = 1) { if (!is.Numeric(Qsize, allowable.length = 1, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'Qsize'") link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") if (!is.Numeric(imethod, allowable.length = 1, integer.valued = TRUE, positive = TRUE) || imethod > 4) stop("argument 'imethod' must be 1 or 2, 3 or 4") new("vglmff", blurb = c("Borel-Tanner distribution\n\n", "Link: ", namesof("a", link, earg = earg), "\n\n", "Mean: Qsize/(1-a)", "\n", "Variance: Qsize*a / (1-a)^3"), initialize = eval(substitute(expression({ if (any(y < .Qsize )) stop("all y values must be >= ", .Qsize ) w.y.check(w = w, y = y, Is.positive.y = TRUE, Is.integer.y = TRUE) predictors.names <- namesof("a", .link , earg = .earg , tag = FALSE) if (!length(etastart)) { a.init <- switch(as.character( .imethod ), "1" = 1 - .Qsize / (y+1/8), "2" = rep(1 - .Qsize / weighted.mean(y, w), length.out = n), "3" = rep(1 - .Qsize / median(y), length.out = n), "4" = rep(0.5, length.out = n)) etastart <- theta2eta(a.init, .link , earg = .earg ) } }), list( .link = link, .earg = earg, .Qsize = Qsize, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { aa <- eta2theta(eta, .link , earg = .earg ) .Qsize / (1 - aa) }, list( .link = link, .earg = earg, .Qsize = Qsize ))), last = eval(substitute(expression({ misc$link <- c(a = .link) misc$earg <- list(a = .earg ) misc$expected <- TRUE misc$Qsize <- .Qsize }), list( .link = link, .earg = earg, .Qsize = Qsize ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { aa <- 
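    # (Initialisation note: the Borel-Tanner mean is Qsize/(1 - a), so
    # imethod = 1 inverts it pointwise, a ~ 1 - Qsize/(y + 1/8); methods
    # 2-4 use the weighted mean, the median, or a flat 0.5 instead.)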
eta2theta(eta, .link , earg = .earg ) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else { sum(c(w) * dbort(x = y, Qsize = .Qsize, a = aa, log = TRUE)) } }, list( .link = link, .earg = earg, .Qsize = Qsize ))), vfamily = c("borel.tanner"), deriv = eval(substitute(expression({ aa <- eta2theta(eta, .link , earg = .earg ) dl.da <- (y - .Qsize) / aa - y da.deta <- dtheta.deta(aa, .link , earg = .earg ) c(w) * dl.da * da.deta }), list( .link = link, .earg = earg, .Qsize = Qsize ))), weight = eval(substitute(expression({ ned2l.da2 <- .Qsize / (aa * (1 - aa)) wz <- c(w) * ned2l.da2 * da.deta^2 wz }), list( .Qsize = Qsize )))) } dfelix <- function(x, a = 0.25, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) if (!is.Numeric(x)) stop("bad input for argument 'x'") if (!is.Numeric(a, positive = TRUE)) stop("bad input for argument 'a'") N <- max(length(x), length(a)) x <- rep(x, length.out = N); a <- rep(a, length.out = N); xok <- (x %% 2 == 1) & (x == round(x)) & (x >= 1) & (a > 0) & (a < 0.5) ans <- rep(if (log.arg) log(0) else 0, length.out = N) # loglikelihood ans[xok] <- ((x[xok]-3)/2) * log(x[xok]) + ((x[xok]-1)/2) * log(a[xok]) - lgamma(x[xok]/2 + 0.5) - a[xok] * x[xok] if (!log.arg) { ans[xok] <- exp(ans[xok]) } ans } felix <- function(link = elogit(min = 0, max = 0.5), imethod = 1) { link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") if (!is.Numeric(imethod, allowable.length = 1, integer.valued = TRUE, positive = TRUE) || imethod > 4) stop("argument 'imethod' must be 1 or 2, 3 or 4") new("vglmff", blurb = c("Felix distribution\n\n", "Link: ", namesof("a", link, earg = earg), "\n\n", "Mean: 1/(1-2*a)"), initialize = eval(substitute(expression({ if (any(y < 1) || any((y+1)/2 != round((y+1)/2))) warning("response should be positive, odd and integer-valued") w.y.check(w = w, y = y) predictors.names <- namesof("a", .link , earg = .earg , tag = FALSE) if (!length(etastart)) { wymean <- weighted.mean(y, w) a.init <- switch(as.character( .imethod ), "1" = (y - 1 + 1/8) / (2 * (y + 1/8) + 1/8), "2" = rep((wymean-1+1/8) / (2*(wymean+1/8)+1/8), length.out = n), "3" = rep((median(y)-1+1/8) / (2*(median(y)+1/8)+1/8), length.out = n), "4" = rep(0.25, length.out = n)) etastart <- theta2eta(a.init, .link , earg = .earg ) } }), list( .link = link, .earg = earg, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { aa <- eta2theta(eta, .link , earg = .earg ) 1 / (1 - 2 * aa) }, list( .link = link, .earg = earg ))), last = eval(substitute(expression({ misc$expected <- TRUE misc$link <- c(a = .link) misc$earg <- list(a = .earg ) }), list( .link = link, .earg = earg ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { aa <- eta2theta(eta, .link , earg = .earg ) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else { sum(c(w) * dfelix(x = y, a = aa, log = TRUE)) } }, list( .link = link, .earg = earg ))), vfamily = c("felix"), deriv = eval(substitute(expression({ aa <- eta2theta(eta, .link , earg = .earg ) dl.da <- (y - 1) / (2 * aa) - y da.deta <- dtheta.deta(aa, .link , earg = .earg ) c(w) * dl.da * da.deta }), list( .link = link, .earg = earg ))), weight = eval(substitute(expression({ ned2l.da2 <- 1 / (aa * (1 - 2 * aa)) wz <- c(w) * da.deta^2 * ned2l.da2 wz }), list( .link = link )))) } betaff <- function(A = 0, B = 1, lmu = "logit", lphi = "loge", imu = NULL, iphi = NULL, imethod = 1, zero 
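                  # (On felix() above: the support is the odd integers
                  # 1, 3, 5, ..., the mean is 1/(1 - 2*a) for
                  # 0 < a < 1/2, and the default elogit(min = 0,
                  # max = 0.5) link keeps a inside that interval.)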
= NULL) {
  stdbeta <- (A == 0 && B == 1)

  lmu <- as.list(substitute(lmu))
  emu <- link2list(lmu)
  lmu <- attr(emu, "function.name")

  lphi <- as.list(substitute(lphi))
  ephi <- link2list(lphi)
  lphi <- attr(ephi, "function.name")

  if (!is.Numeric(A, allowable.length = 1) ||
      !is.Numeric(B, allowable.length = 1) || A >= B)
    stop("A must be < B, and both must be of length one")
  if (length(zero) &&
      !is.Numeric(zero, integer.valued = TRUE, positive = TRUE))
    stop("bad input for argument 'zero'")
  if (length(imu) && (!is.Numeric(imu, positive = TRUE) ||
      any(imu <= A) || any(imu >= B)))
    stop("bad input for argument 'imu'")
  if (length(iphi) && !is.Numeric(iphi, positive = TRUE))
    stop("bad input for argument 'iphi'")
  if (!is.Numeric(imethod, allowable.length = 1,
                  integer.valued = TRUE, positive = TRUE) ||
      imethod > 2)
    stop("argument 'imethod' must be 1 or 2")

  new("vglmff",
  blurb = c("Beta distribution parameterized by mu and a ",
            "precision parameter\n",
            if (stdbeta)
              paste("f(y) = y^(mu*phi-1) * (1-y)^((1-mu)*phi-1)",
                    "/ beta(mu*phi,(1-mu)*phi),\n",
                    " 0 < y < 1, 0 < mu < 1, phi > 0\n\n") else
              paste("f(y) = (y-",A,")^(mu1*phi-1) * (",B,
                    "-y)^(((1-mu1)*phi)-1) / \n(beta(mu1*phi,(1-mu1)*phi) * (",
                    B, "-", A, ")^(phi-1)),\n",
                    A," < y < ",B, ", ", A," < mu < ",B,
                    ", mu = ", A, " + ", (B-A), " * mu1",
                    ", phi > 0\n\n", sep = ""),
            "Links: ",
            namesof("mu", lmu, earg = emu), ", ",
            namesof("phi", lphi, earg = ephi)),
  constraints = eval(substitute(expression({
    constraints <- cm.zero.vgam(constraints, x, .zero , M)
  }), list( .zero = zero ))),
  initialize = eval(substitute(expression({
    if (min(y) <= .A || max(y) >= .B)
      stop("data not within (A, B)")
    w.y.check(w = w, y = y)
    predictors.names <- c(namesof("mu", .lmu , .emu , short = TRUE),
                          namesof("phi", .lphi , .ephi, short = TRUE))
    if (!length(etastart)) {
      mu.init <- if (is.Numeric( .imu )) .imu else
                 {if ( .imethod == 1) weighted.mean(y, w) else
                  median(rep(y, w))}
      mu1.init <- (mu.init - .A) / ( .B - .A)  # In (0,1)
      phi.init <- if (is.Numeric( .iphi )) .iphi else
        max(0.01, -1 + ( .B - .A)^2 * mu1.init*(1-mu1.init)/var(y))
      etastart <- matrix(0, n, 2)
      etastart[, 1] <- theta2eta(mu.init, .lmu , earg = .emu )
      etastart[, 2] <- theta2eta(phi.init, .lphi , earg = .ephi )
    }
  }), list( .lmu = lmu, .lphi = lphi, .imu = imu, .iphi = iphi,
            .A = A, .B = B, .emu = emu, .ephi = ephi,
            .imethod = imethod ))),
  linkinv = eval(substitute(function(eta, extra = NULL) {
    mu <- eta2theta(eta[, 1], .lmu , .emu )
    mu
  }, list( .lmu = lmu, .emu = emu, .A = A, .B = B))),
  last = eval(substitute(expression({
    misc$link <- c(mu = .lmu , phi = .lphi)
    misc$earg <- list(mu = .emu , phi = .ephi)
    misc$limits <- c( .A, .B)
    misc$stdbeta <- .stdbeta
  }), list( .lmu = lmu, .lphi = lphi, .A = A, .B = B,
            .emu = emu, .ephi = ephi, .stdbeta = stdbeta ))),
  loglikelihood = eval(substitute(
    function(mu, y, w, residuals = FALSE, eta, extra = NULL){
    mu <- eta2theta(eta[, 1], .lmu , .emu )
    m1u <- if ( .stdbeta ) mu else (mu - .A) / ( .B - .A)
    phi <- eta2theta(eta[, 2], .lphi , .ephi )
    if (residuals) stop("loglikelihood residuals not ",
                        "implemented yet") else {
      shape1 <- phi * m1u
      shape2 <- (1 - m1u) * phi
      zedd <- (y - .A) / ( .B - .A)
      sum(c(w) * (dbeta(x = zedd, shape1 = shape1, shape2 = shape2,
                        log = TRUE) - log( abs( .B - .A ))))
    }
  }, list( .lmu = lmu, .lphi = lphi, .A = A, .B = B,
           .emu = emu, .ephi = ephi, .stdbeta = stdbeta ))),
  vfamily = "betaff",
  deriv = eval(substitute(expression({
    mu <- eta2theta(eta[, 1], .lmu , .emu )
    phi <- eta2theta(eta[, 2], .lphi , .ephi )
    m1u <- if ( .stdbeta ) mu else (mu - .A) / ( .B - .A)
    dmu.deta <- dtheta.deta(mu, .lmu ,
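    # betaff works internally with shape1 = mu1*phi and
    # shape2 = (1 - mu1)*phi, where mu1 = (mu - A)/(B - A); the chain
    # rule below therefore carries the factor dmu1/dmu = 1/(B - A).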
.emu ) dmu1.dmu <- 1 / ( .B - .A) dphi.deta <- dtheta.deta(phi, .lphi , .ephi ) temp1 <- m1u*phi temp2 <- (1-m1u)*phi if ( .stdbeta ) { dl.dmu1 <- phi*(digamma(temp2) - digamma(temp1) + log(y) - log1p(-y)) dl.dphi <- digamma(phi) - mu*digamma(temp1) - (1-mu)*digamma(temp2) + mu*log(y) + (1-mu)*log1p(-y) } else { dl.dmu1 <- phi*(digamma(temp2) - digamma(temp1) + log(y-.A) - log( .B-y)) dl.dphi <- digamma(phi) - m1u*digamma(temp1) - (1-m1u)*digamma(temp2) + m1u*log(y-.A) + (1-m1u)*log( .B-y) - log( .B -.A) } c(w) * cbind(dl.dmu1 * dmu1.dmu * dmu.deta, dl.dphi * dphi.deta) }), list( .lmu = lmu, .lphi = lphi, .emu = emu, .ephi = ephi, .A = A, .B = B, .stdbeta = stdbeta ))), weight = eval(substitute(expression({ d2l.dmu12 <- phi^2 * (trigamma(temp1) + trigamma(temp2)) d2l.dphi2 <- -trigamma(phi) + trigamma(temp1) * m1u^2 + trigamma(temp2) * (1-m1u)^2 d2l.dmu1phi <- temp1*trigamma(temp1) - temp2*trigamma(temp2) wz <- matrix(as.numeric(NA), n, dimm(M)) wz[, iam(1, 1, M)] <- d2l.dmu12 * dmu1.dmu^2 * dmu.deta^2 wz[, iam(2, 2, M)] <- d2l.dphi2 * dphi.deta^2 wz[, iam(1, 2, M)] <- d2l.dmu1phi * dmu1.dmu * dmu.deta * dphi.deta c(w) * wz }), list( .A = A, .B = B )))) } beta.ab <- function(lshape1 = "loge", lshape2 = "loge", i1 = NULL, i2 = NULL, trim = 0.05, A = 0, B = 1, parallel = FALSE, zero = NULL) { lshape1 <- as.list(substitute(lshape1)) eshape1 <- link2list(lshape1) lshape1 <- attr(eshape1, "function.name") lshape2 <- as.list(substitute(lshape2)) eshape2 <- link2list(lshape2) lshape2 <- attr(eshape2, "function.name") if (length(zero) && !is.Numeric(zero, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'zero'") if (length( i1 ) && !is.Numeric( i1, positive = TRUE)) stop("bad input for argument 'i1'") if (length( i2 ) && !is.Numeric( i2, positive = TRUE)) stop("bad input for argument 'i2'") if (!is.Numeric(A, allowable.length = 1) || !is.Numeric(B, allowable.length = 1) || A >= B) stop("A must be < B, and both must be of length one") stdbeta <- (A == 0 && B == 1) # stdbeta == T iff standard beta distn new("vglmff", blurb = c("Two-parameter Beta distribution ", "(shape parameters parameterization)\n", if (stdbeta) paste("y^(shape1-1) * (1-y)^(shape2-1) / B(shape1,shape2),", "0 <= y <= 1, shape1>0, shape2>0\n\n") else paste("(y-",A,")^(shape1-1) * (",B, "-y)^(shape2-1) / [B(shape1,shape2) * (", B, "-", A, ")^(shape1+shape2-1)], ", A," <= y <= ",B," shape1>0, shape2>0\n\n", sep = ""), "Links: ", namesof("shape1", lshape1, earg = eshape1), ", ", namesof("shape2", lshape2, earg = eshape2)), constraints = eval(substitute(expression({ constraints <- cm.vgam(matrix(1, M, 1), x, .parallel , constraints, apply.int = TRUE) constraints <- cm.zero.vgam(constraints, x, .zero , M) }), list( .parallel = parallel, .zero = zero ))), initialize = eval(substitute(expression({ if (min(y) <= .A || max(y) >= .B) stop("data not within (A, B)") if (ncol(cbind(y)) != 1) stop("response must be a vector or a one-column matrix") w.y.check(w = w, y = y) predictors.names <- c(namesof("shape1", .lshape1 , earg = .eshape1 , short = TRUE), namesof("shape2", .lshape2 , earg = .eshape2 , short = TRUE)) if (!length(etastart)) { mu1d <- mean(y, trim = .trim) uu <- (mu1d - .A) / ( .B - .A) DD <- ( .B - .A)^2 pinit <- max(0.01, uu^2 * (1 - uu) * DD / var(y) - uu) qinit <- max(0.01, pinit * (1 - uu) / uu) etastart <- matrix(0, n, 2) etastart[, 1] <- theta2eta( pinit, .lshape1 , earg = .eshape1 ) etastart[, 2] <- theta2eta( qinit, .lshape2 , earg = .eshape2 ) } if (is.Numeric( .i1 )) etastart[, 1] <- theta2eta( .i1, 
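      # (Method-of-moments start above: with u = (trimmed mean - A)/(B - A)
      # and D = (B - A)^2, matching the beta mean and variance gives
      # shape1 ~ u^2*(1 - u)*D/var(y) - u and shape2 = shape1*(1 - u)/u,
      # both floored at 0.01; user-supplied i1/i2 override these here.)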
.lshape1 , earg = .eshape1 ) if (is.Numeric( .i2 )) etastart[, 2] <- theta2eta( .i2, .lshape2 , earg = .eshape2 ) }), list( .lshape1 = lshape1, .lshape2 = lshape2, .i1 = i1, .i2 = i2, .trim = trim, .A = A, .B = B, .eshape1 = eshape1, .eshape2 = eshape2 ))), linkinv = eval(substitute(function(eta, extra = NULL) { shapes <- cbind(eta2theta(eta[, 1], .lshape1 , earg = .eshape1 ), eta2theta(eta[, 2], .lshape2 , earg = .eshape2 )) .A + ( .B-.A) * shapes[, 1] / (shapes[, 1] + shapes[, 2]) }, list( .lshape1 = lshape1, .lshape2 = lshape2, .A = A, .B = B, .eshape1 = eshape1, .eshape2 = eshape2 ))), last = eval(substitute(expression({ misc$link <- c(shape1 = .lshape1 , shape2 = .lshape2) misc$earg <- list(shape1 = .eshape1 , shape2 = .eshape2) misc$limits <- c( .A, .B) }), list( .lshape1 = lshape1, .lshape2 = lshape2, .A = A, .B = B, .eshape1 = eshape1, .eshape2 = eshape2 ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL){ shapes <- cbind(eta2theta(eta[, 1], .lshape1 , earg = .eshape1 ), eta2theta(eta[, 2], .lshape2 , earg = .eshape2 )) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else { zedd <- (y - .A) / ( .B - .A) sum(c(w) * (dbeta(x = zedd, shape1 = shapes[, 1], shape2 = shapes[, 2], log = TRUE) - log( abs( .B - .A )))) } }, list( .lshape1 = lshape1, .lshape2 = lshape2, .A = A, .B = B, .eshape1 = eshape1, .eshape2 = eshape2 ))), vfamily = "beta.ab", deriv = eval(substitute(expression({ shapes <- cbind(eta2theta(eta[, 1], .lshape1 , earg = .eshape1 ), eta2theta(eta[, 2], .lshape2 , earg = .eshape2 )) dshapes.deta <- cbind(dtheta.deta(shapes[, 1], .lshape1 , earg = .eshape1), dtheta.deta(shapes[, 2], .lshape2 , earg = .eshape2)) dl.dshapes <- cbind(log(y-.A), log( .B-y)) - digamma(shapes) + digamma(shapes[, 1] + shapes[, 2]) - log( .B - .A) c(w) * dl.dshapes * dshapes.deta }), list( .lshape1 = lshape1, .lshape2 = lshape2, .A = A, .B = B, .eshape1 = eshape1, .eshape2 = eshape2 ))), weight = expression({ temp2 <- trigamma(shapes[, 1]+shapes[, 2]) d2l.dshape12 <- temp2 - trigamma(shapes[, 1]) d2l.dshape22 <- temp2 - trigamma(shapes[, 2]) d2l.dshape1shape2 <- temp2 wz <- matrix(as.numeric(NA), n, dimm(M)) #3=dimm(M) wz[, iam(1, 1, M)] <- d2l.dshape12 * dshapes.deta[, 1]^2 wz[, iam(2, 2, M)] <- d2l.dshape22 * dshapes.deta[, 2]^2 wz[, iam(1, 2, M)] <- d2l.dshape1shape2 * dshapes.deta[, 1] * dshapes.deta[, 2] -c(w) * wz })) } simple.exponential <- function() { new("vglmff", blurb = c("Simple exponential distribution\n", "Link: log(rate)\n"), deviance = function(mu, y, w, residuals = FALSE, eta, extra = NULL) { devy <- -log(y) - 1 devmu <- -log(mu) - y/mu devi <- 2 * (devy - devmu) if (residuals) sign(y - mu) * sqrt(abs(devi) * w) else sum(w * devi) }, initialize = expression({ predictors.names <- "log(rate)" mustart <- y + (y == 0) / 8 }), linkinv = function(eta, extra = NULL) exp(-eta), linkfun = function(mu, extra = NULL) -log(mu), vfamily = "simple.exponential", deriv = expression({ rate <- 1 / mu dl.drate <- mu - y drate.deta <- dtheta.deta(rate, "loge") c(w) * dl.drate * drate.deta }), weight = expression({ ned2l.drate2 <- 1 / rate^2 wz <- c(w) * drate.deta^2 * ned2l.drate2 wz })) } exponential <- function(link = "loge", location = 0, expected = TRUE, shrinkage.init = 0.95, zero = NULL) { if (!is.Numeric(location, allowable.length = 1)) stop("bad input for argument 'location'") if (!is.logical(expected) || length(expected) != 1) stop("bad input for argument 'expected'") link <- as.list(substitute(link)) earg <- link2list(link) 
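  # (simple.exponential above is the minimal version of this family:
  # its unit deviance is 2*(y/mu - log(y/mu) - 1), i.e. devy - devmu,
  # and its working weights use the expected information 1/rate^2.)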
link <- attr(earg, "function.name") if (length(zero) && !is.Numeric(zero, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'zero'") if (!is.Numeric(shrinkage.init, allowable.length = 1) || shrinkage.init < 0 || shrinkage.init > 1) stop("bad input for argument 'shrinkage.init'") new("vglmff", blurb = c("Exponential distribution\n\n", "Link: ", namesof("rate", link, earg, tag = TRUE), "\n", "Mean: ", "mu = ", if (location == 0) "1 / rate" else paste(location, "+ 1 / rate"), "\n", "Variance: ", if (location == 0) "Exponential: mu^2" else paste("(mu - ", location, ")^2", sep = "")), constraints = eval(substitute(expression({ constraints <- cm.zero.vgam(constraints, x, .zero , M) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(Musual = 1, zero = .zero ) }, list( .zero = zero ))), deviance = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { devy <- -log(y - .location) - 1 devmu <- -log(mu - .location) - (y - .location) / (mu - .location) devi <- 2 * (devy - devmu) if (residuals) { sign(y - mu) * sqrt(abs(devi) * w) } else { sum(w * devi) } }, list( .location = location, .earg = earg ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y ncoly <- ncol(y) Musual <- 1 extra$ncoly <- ncoly extra$Musual <- Musual M <- Musual * ncoly extra$Loc <- matrix( .location , n, ncoly, byrow = TRUE) if (any(y <= extra$Loc)) stop("all responses must be greater than ", extra$Loc) mynames1 <- if (M == 1) "rate" else paste("rate", 1:M, sep = "") predictors.names <- namesof(mynames1, .link , earg = .earg , short = TRUE) if (length(mustart) + length(etastart) == 0) mustart <- matrix(colSums(y * w) / colSums(w), n, M, byrow = TRUE) * .sinit + (1 - .sinit) * y + 1 / 8 if (!length(etastart)) etastart <- theta2eta(1 / (mustart - extra$Loc), .link , earg = .earg ) }), list( .location = location, .link = link, .earg = earg, .sinit = shrinkage.init ))), linkinv = eval(substitute(function(eta, extra = NULL) extra$Loc + 1 / eta2theta(eta, .link , earg = .earg ), list( .link = link, .earg = earg ))), last = eval(substitute(expression({ misc$link <- rep( .link , length = M) names(misc$link) <- mynames1 misc$earg <- vector("list", M) names(misc$earg) <- mynames1 for(ii in 1:M) { misc$earg[[ii]] <- .earg } misc$location <- .location misc$expected <- .expected misc$multipleResponses <- TRUE misc$Musual <- Musual }), list( .link = link, .earg = earg, .expected = expected, .location = location ))), linkfun = eval(substitute(function(mu, extra = NULL) theta2eta(1 / (mu - extra$Loc), .link , earg = .earg ), list( .link = link, .earg = earg ))), vfamily = c("exponential"), deriv = eval(substitute(expression({ rate <- 1 / (mu - extra$Loc) dl.drate <- mu - y drate.deta <- dtheta.deta(rate, .link , earg = .earg ) c(w) * dl.drate * drate.deta }), list( .link = link, .earg = earg ))), weight = eval(substitute(expression({ ned2l.drate2 <- (mu - extra$Loc)^2 wz <- ned2l.drate2 * drate.deta^2 if (! 
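      # When expected = FALSE, the branch below turns Fisher scoring
      # into a Newton-Raphson-type step: the observed information
      # subtracts dl.drate * d2rate.deta2 from the expected piece.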
.expected ) { d2rate.deta2 <- d2theta.deta2(rate, .link , earg = .earg ) wz <- wz - dl.drate * d2rate.deta2 } c(w) * wz }), list( .link = link, .expected = expected, .earg = earg )))) } gamma1 <- function(link = "loge", zero = NULL) { link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") if (length(zero) && !is.Numeric(zero, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'zero'") new("vglmff", blurb = c("1-parameter Gamma distribution\n", "Link: ", namesof("shape", link, earg = earg, tag = TRUE), "\n", "Mean: mu (=shape)\n", "Variance: mu (=shape)"), constraints = eval(substitute(expression({ dotzero <- .zero Musual <- 1 eval(negzero.expression) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(Musual = 1, zero = .zero ) }, list( .zero = zero ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y M <- if (is.matrix(y)) ncol(y) else 1 Musual <- 1 mynames1 <- if (M == 1) "shape" else paste("shape", 1:M, sep = "") predictors.names <- namesof(mynames1, .link , earg = .earg , short = TRUE) if (!length(etastart)) etastart <- cbind(theta2eta(y + 1/8, .link , earg = .earg )) }), list( .link = link, .earg = earg ))), linkinv = eval(substitute(function(eta, extra = NULL) eta2theta(eta, .link , earg = .earg )), list( .link = link, .earg = earg )), last = eval(substitute(expression({ misc$link <- rep( .link , length = M) names(misc$link) <- mynames1 misc$earg <- vector("list", M) names(misc$earg) <- names(misc$link) for(ii in 1:M) misc$earg[[ii]] <- .earg misc$expected <- TRUE misc$multipleResponses <- TRUE misc$Musual <- Musual }), list( .link = link, .earg = earg ))), linkfun = eval(substitute(function(mu, extra = NULL) theta2eta(mu, .link , earg = .earg )), list( .link = link, .earg = earg )), loglikelihood = function(mu, y, w, residuals = FALSE, eta, extra = NULL) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else { sum(c(w) * dgamma(x = y, shape = mu, scale = 1, log = TRUE)) }, vfamily = c("gamma1"), deriv = eval(substitute(expression({ shape <- mu dl.dshape <- log(y) - digamma(shape) dshape.deta <- dtheta.deta(shape, .link , earg = .earg ) ans <- c(w) * dl.dshape * dshape.deta ans c(w) * dl.dshape * dshape.deta }), list( .link = link, .earg = earg ))), weight = expression({ ned2l.dshape <- trigamma(shape) wz <- ned2l.dshape * dshape.deta^2 c(w) * wz })) } gamma2.ab <- function(lrate = "loge", lshape = "loge", irate = NULL, ishape = NULL, expected = TRUE, zero = 2) { lrate <- as.list(substitute(lrate)) erate <- link2list(lrate) lrate <- attr(erate, "function.name") lshape <- as.list(substitute(lshape)) eshape <- link2list(lshape) lshape <- attr(eshape, "function.name") if (length( irate) && !is.Numeric(irate, positive = TRUE)) stop("bad input for argument 'irate'") if (length( ishape) && !is.Numeric(ishape, positive = TRUE)) stop("bad input for argument 'ishape'") if (length(zero) && !is.Numeric(zero, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'zero'") if (!is.logical(expected) || length(expected) != 1) stop("bad input for argument 'expected'") new("vglmff", blurb = c("2-parameter Gamma distribution\n", "Links: ", namesof("rate", lrate, earg = erate), ", ", namesof("shape", lshape, earg = eshape), "\n", "Mean: mu = shape/rate\n", "Variance: (mu^2)/shape = shape/rate^2"), constraints = 
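
# Illustrative usage sketch (not part of the package source): fitting
# exponential() with a covariate and gamma1() intercept-only.  The
# simulated data and object names are invented for this demo.
if (FALSE) {
  set.seed(123)
  edata <- data.frame(x2 = runif(200))
  edata <- transform(edata,
                     y1 = rexp(200, rate = exp(1 + x2)),
                     y2 = rgamma(200, shape = exp(0.5)))
  fit1 <- vglm(y1 ~ x2, exponential, data = edata, trace = TRUE)
  fit2 <- vglm(y2 ~ 1, gamma1, data = edata, trace = TRUE)
  coef(fit1, matrix = TRUE)  # Regression on the log(rate) scale
  Coef(fit2)                 # Estimate of the shape parameter
}
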
gamma2.ab <- function(lrate = "loge", lshape = "loge",
                      irate = NULL, ishape = NULL,
                      expected = TRUE, zero = 2) {
  lrate <- as.list(substitute(lrate))
  erate <- link2list(lrate)
  lrate <- attr(erate, "function.name")

  lshape <- as.list(substitute(lshape))
  eshape <- link2list(lshape)
  lshape <- attr(eshape, "function.name")

  if (length( irate) && !is.Numeric(irate, positive = TRUE))
    stop("bad input for argument 'irate'")
  if (length( ishape) && !is.Numeric(ishape, positive = TRUE))
    stop("bad input for argument 'ishape'")

  if (length(zero) &&
      !is.Numeric(zero, integer.valued = TRUE, positive = TRUE))
    stop("bad input for argument 'zero'")

  if (!is.logical(expected) || length(expected) != 1)
    stop("bad input for argument 'expected'")

  new("vglmff",
  blurb = c("2-parameter Gamma distribution\n",
            "Links: ",
            namesof("rate",  lrate,  earg = erate), ", ",
            namesof("shape", lshape, earg = eshape), "\n",
            "Mean: mu = shape/rate\n",
            "Variance: (mu^2)/shape = shape/rate^2"),

  constraints = eval(substitute(expression({
    constraints <- cm.zero.vgam(constraints, x, .zero , M)
  }), list( .zero = zero ))),

  initialize = eval(substitute(expression({
    w.y.check(w = w, y = y, Is.positive.y = TRUE)

    predictors.names <-
      c(namesof("rate",  .lrate  , earg = .erate  , tag = FALSE),
        namesof("shape", .lshape , earg = .eshape , tag = FALSE))

    if (!length(etastart)) {
      mymu <- y + 0.167 * (y == 0)

      junk <- lsfit(x, y, wt = w, intercept = FALSE)
      var.y.est <- sum(c(w) * junk$resid^2) / (nrow(x) -
                                               length(junk$coef))
      init.shape <- if (length( .ishape )) .ishape else
                    mymu^2 / var.y.est
      init.rate <- if (length( .irate )) .irate else init.shape / mymu
      init.rate <- rep(init.rate, length.out = n)
      init.shape <- rep(init.shape, length.out = n)
      if ( .lshape == "loglog")
        init.shape[init.shape <= 1] <- 3.1  # Hopefully value is big enough

      etastart <-
        cbind(theta2eta(init.rate,  .lrate  , earg = .erate ),
              theta2eta(init.shape, .lshape , earg = .eshape ))
    }
  }), list( .lrate = lrate, .lshape = lshape,
            .irate = irate, .ishape = ishape,
            .erate = erate, .eshape = eshape ))),

  linkinv = eval(substitute(function(eta, extra = NULL) {
    eta2theta(eta[, 2], .lshape , earg = .eshape ) /
    eta2theta(eta[, 1], .lrate  , earg = .erate )
  }, list( .lrate = lrate, .lshape = lshape,
           .erate = erate, .eshape = eshape ))),

  last = eval(substitute(expression({
    misc$link <-    c(rate = .lrate , shape = .lshape )
    misc$earg <- list(rate = .erate , shape = .eshape )
  }), list( .lrate = lrate, .lshape = lshape,
            .erate = erate, .eshape = eshape ))),

  loglikelihood = eval(substitute(
    function(mu, y, w, residuals = FALSE, eta, extra = NULL) {
    rate  <- eta2theta(eta[, 1], .lrate  , earg = .erate )
    shape <- eta2theta(eta[, 2], .lshape , earg = .eshape )
    if (residuals) stop("loglikelihood residuals not ",
                        "implemented yet") else {
      sum(c(w) * dgamma(x = y, shape = shape, rate = rate, log = TRUE))
    }
  }, list( .lrate = lrate, .lshape = lshape,
           .erate = erate, .eshape = eshape ))),

  vfamily = c("gamma2.ab"),

  deriv = eval(substitute(expression({
    rate  <- eta2theta(eta[, 1], .lrate  , earg = .erate )
    shape <- eta2theta(eta[, 2], .lshape , earg = .eshape )

    dl.drate <- mu - y
    dl.dshape <- log(y * rate) - digamma(shape)
    dratedeta <- dtheta.deta(rate, .lrate , earg = .erate )
    dshape.deta <- dtheta.deta(shape, .lshape , earg = .eshape )

    c(w) * cbind(dl.drate * dratedeta,
                 dl.dshape * dshape.deta)
  }), list( .lrate = lrate, .lshape = lshape,
            .erate = erate, .eshape = eshape ))),

  weight = eval(substitute(expression({
    d2l.dshape2 <- -trigamma(shape)
    d2l.drate2 <- -shape / (rate^2)
    d2l.drateshape <- 1 / rate

    wz <- matrix(as.numeric(NA), n, dimm(M))  # 3 = dimm(M)
    wz[, iam(1, 1, M)] <- -d2l.drate2 * dratedeta^2
    wz[, iam(2, 2, M)] <- -d2l.dshape2 * dshape.deta^2
    wz[, iam(1, 2, M)] <- -d2l.drateshape * dratedeta * dshape.deta

    if (! .expected ) {  # Use the observed information matrix
      d2ratedeta2 <- d2theta.deta2(rate, .lrate , earg = .erate )
      d2shapedeta2 <- d2theta.deta2(shape, .lshape , earg = .eshape )
      wz[, iam(1, 1, M)] <- wz[, iam(1, 1, M)] - dl.drate * d2ratedeta2
      wz[, iam(2, 2, M)] <- wz[, iam(2, 2, M)] - dl.dshape * d2shapedeta2
    }

    c(w) * wz
  }), list( .lrate = lrate, .lshape = lshape,
            .erate = erate, .eshape = eshape,
            .expected = expected ))))
}
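
# Illustrative usage sketch (not part of the package source).  By
# default gamma2.ab() sets zero = 2, so the shape parameter is
# intercept-only; the simulated data and names are for this demo only.
if (FALSE) {
  set.seed(123)
  gdata <- data.frame(x2 = runif(500))
  gdata <- transform(gdata,
                     y1 = rgamma(500, shape = 4, rate = exp(1 - x2)))
  fit <- vglm(y1 ~ x2, gamma2.ab, data = gdata, trace = TRUE)
  coef(fit, matrix = TRUE)  # Columns for log(rate) and log(shape)
}
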
gamma2 <- function(lmu = "loge", lshape = "loge",
                   imethod = 1, ishape = NULL,
                   parallel = FALSE, apply.parint = FALSE,
                   deviance.arg = FALSE, zero = -2) {
  lmu <- as.list(substitute(lmu))
  emu <- link2list(lmu)
  lmu <- attr(emu, "function.name")

  lshape <- as.list(substitute(lshape))
  eshape <- link2list(lshape)
  lshape <- attr(eshape, "function.name")

  if (length(zero) && !is.Numeric(zero, integer.valued = TRUE))
    stop("bad input for argument 'zero'")
  if (length( ishape) && !is.Numeric(ishape, positive = TRUE))
    stop("bad input for argument 'ishape'")
  if (!is.Numeric(imethod, allowable.length = 1,
                  integer.valued = TRUE, positive = TRUE) || imethod > 2)
    stop("argument 'imethod' must be 1 or 2")

  if (!is.logical(apply.parint) || length(apply.parint) != 1)
    stop("argument 'apply.parint' must be a single logical")

  if (is.logical(parallel) && parallel && length(zero))
    stop("set 'zero = NULL' if 'parallel = TRUE'")

  ans <- new("vglmff",
  blurb = c("2-parameter Gamma distribution",
            " (McCullagh and Nelder 1989 parameterization)\n",
            "Links: ",
            namesof("mu", lmu, earg = emu), ", ",
            namesof("shape", lshape, earg = eshape), "\n",
            "Mean: mu\n",
            "Variance: (mu^2)/shape"),

  constraints = eval(substitute(expression({
    constraints <- cm.vgam(matrix(1, M, 1), x, .parallel , constraints,
                           apply.int = .apply.parint )
    dotzero <- .zero
    Musual <- 2
    eval(negzero.expression)
    constraints <- cm.zero.vgam(constraints, x, z_Index, M)
  }), list( .zero = zero,
            .parallel = parallel, .apply.parint = apply.parint ))),

  infos = eval(substitute(function(...) {
    list(Musual = 2, zero = .zero )
  }, list( .zero = zero ))),

  initialize = eval(substitute(expression({
    Musual <- 2

    temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE,
                       ncol.w.max = Inf, ncol.y.max = Inf,
                       out.wy = TRUE, colsyperw = 1, maximize = TRUE)
    w <- temp5$w
    y <- temp5$y

    assign("CQO.FastAlgorithm",
           ( .lmu == "loge" && .lshape == "loge"),
           envir = VGAM:::VGAMenv)
    if (any(function.name == c("cqo", "cao")) &&
        is.Numeric( .zero , allowable.length = 1) && .zero != -2)
      stop("argument zero = -2 is required")

    M <- Musual * ncol(y)
    NOS <- ncoly <- ncol(y)  # Number of species

    temp1.names <- if (NOS == 1) "mu" else
                   paste("mu", 1:NOS, sep = "")
    temp2.names <- if (NOS == 1) "shape" else
                   paste("shape", 1:NOS, sep = "")
    predictors.names <-
      c(namesof(temp1.names, .lmu ,    earg = .emu ,    tag = FALSE),
        namesof(temp2.names, .lshape , earg = .eshape , tag = FALSE))
    predictors.names <- predictors.names[interleave.VGAM(M, M = Musual)]

    if (is.logical( .parallel ) & .parallel & ncoly > 1)
      warning("the constraint matrices may not be correct with ",
              "multiple responses")

    if (!length(etastart)) {
      init.shape <- matrix(1.0, n, NOS)
      mymu <- y  # + 0.167 * (y == 0)  # imethod == 1 (the default)

      if ( .imethod == 2) {
        for (ii in 1:ncol(y)) {
          mymu[, ii] <- weighted.mean(y[, ii], w = w[, ii])
        }
      }

      for (spp in 1:NOS) {
        junk <- lsfit(x, y[, spp], wt = w[, spp], intercept = FALSE)
        var.y.est <- sum(w[, spp] * junk$resid^2) / (n -
                                                     length(junk$coef))
        init.shape[, spp] <- if (length( .ishape )) .ishape else
                             mymu[, spp]^2 / var.y.est
        if ( .lshape == "loglog")
          init.shape[init.shape[, spp] <= 1, spp] <- 3.1
      }
      etastart <-
        cbind(theta2eta(mymu,       .lmu ,    earg = .emu ),
              theta2eta(init.shape, .lshape , earg = .eshape ))
      etastart <-
        etastart[, interleave.VGAM(M, M = Musual), drop = FALSE]
    }
  }), list( .lmu = lmu, .lshape = lshape, .ishape = ishape,
            .emu = emu, .eshape = eshape,
            .parallel = parallel, .apply.parint = apply.parint,
            .zero = zero, .imethod = imethod ))),

  linkinv = eval(substitute(function(eta, extra = NULL) {
    Musual <- 2
    NOS <- ncol(eta) / Musual
    eta2theta(eta[, Musual * (1:NOS) - 1, drop = FALSE],
              .lmu , earg = .emu )
  }, list( .lmu = lmu, .emu = emu ))),

  last = eval(substitute(expression({
    if (exists("CQO.FastAlgorithm", envir = VGAM:::VGAMenv))
      rm("CQO.FastAlgorithm", envir = VGAM:::VGAMenv)

    tmp34 <- c(rep( .lmu ,    length = NOS),
               rep( .lshape , length = NOS))
    names(tmp34) <-
      c(if (NOS == 1) "mu"    else paste("mu",    1:NOS, sep = ""),
        if (NOS == 1) "shape" else paste("shape", 1:NOS, sep = ""))
    tmp34 <- tmp34[interleave.VGAM(M, M = 2)]
    misc$link <- tmp34  # Already named

    misc$earg <- vector("list", M)
    names(misc$earg) <- names(misc$link)
    for (ii in 1:NOS) {
      misc$earg[[Musual*ii-1]] <- .emu
      misc$earg[[Musual*ii  ]] <- .eshape
    }

    misc$Musual <- Musual
    misc$expected <- TRUE
    misc$multipleResponses <- TRUE
    misc$parallel <- .parallel
    misc$apply.parint <- .apply.parint
  }), list( .lmu = lmu, .lshape = lshape,
            .emu = emu, .eshape = eshape,
            .parallel = parallel, .apply.parint = apply.parint ))),

  linkfun = eval(substitute(function(mu, extra = NULL) {
    temp <- theta2eta(mu, .lmu , earg = .emu )
    temp <- cbind(temp, NA * temp)
    temp[, interleave.VGAM(ncol(temp), M = 2), drop = FALSE]
  }, list( .lmu = lmu, .emu = emu ))),

  loglikelihood = eval(substitute(
    function(mu, y, w, residuals = FALSE, eta, extra = NULL) {
    Musual <- 2
    NOS <- ncol(eta) / Musual

    mymu <- mu  # eta2theta(eta[, 2*(1:NOS)-1], .lmu , earg = .emu )
    shapemat <- eta2theta(eta[, Musual * (1:NOS), drop = FALSE],
                          .lshape , earg = .eshape )
    if (residuals) stop("loglikelihood residuals not ",
                        "implemented yet") else {
      sum(c(w) * dgamma(x = y, shape = c(shapemat),
                        scale = c(mymu / shapemat), log = TRUE))
    }
  }, list( .lmu = lmu, .lshape = lshape,
           .emu = emu, .eshape = eshape ))),

  vfamily = c("gamma2"),

  deriv = eval(substitute(expression({
    Musual <- 2
    NOS <- ncol(eta) / Musual

    mymu  <- eta2theta(eta[, Musual * (1:NOS) - 1], .lmu ,
                       earg = .emu )
    shape <- eta2theta(eta[, Musual * (1:NOS)],     .lshape ,
                       earg = .eshape )

    dl.dmu <- shape * (y / mymu - 1) / mymu
    dl.dshape <- log(y) + log(shape) - log(mymu) + 1 -
                 digamma(shape) - y / mymu

    dmu.deta    <- dtheta.deta(mymu,  .lmu ,    earg = .emu )
    dshape.deta <- dtheta.deta(shape, .lshape , earg = .eshape )

    myderiv <- c(w) * cbind(dl.dmu * dmu.deta,
                            dl.dshape * dshape.deta)
    myderiv[, interleave.VGAM(M, M = Musual)]
  }), list( .lmu = lmu, .lshape = lshape,
            .emu = emu, .eshape = eshape ))),

  weight = eval(substitute(expression({
    ned2l.dmu2 <- shape / (mymu^2)
    ned2l.dshape2 <- trigamma(shape) - 1 / shape

    wz <- matrix(as.numeric(NA), n, M)  # 2 = M; diagonal!
    wz[, Musual*(1:NOS) - 1] <- ned2l.dmu2 * dmu.deta^2
    wz[, Musual*(1:NOS)    ] <- ned2l.dshape2 * dshape.deta^2

    w.wz.merge(w = w, wz = wz, n = n, M = M, ndepy = NOS)
  }), list( .lmu = lmu ))))

  if (deviance.arg)
    ans@deviance <- eval(substitute(
      function(mu, y, w, residuals = FALSE, eta, extra = NULL) {
      if (ncol(as.matrix(y)) > 1 && ncol(as.matrix(w)) > 1)
        stop("cannot handle matrix 'w' yet")

      Musual <- 2
      NOS <- ncol(eta) / 2
      temp300 <- eta[, 2*(1:NOS), drop = FALSE]
      shape <- eta2theta(temp300, .lshape , earg = .eshape )
      devi <- -2 * (log(y/mu) - y/mu + 1)
      if (residuals) {
        warning("not 100% sure about these deviance residuals!")
        sign(y - mu) * sqrt(abs(devi) * w)
      } else {
        sum(c(w) * devi)
      }
    }, list( .lshape = lshape, .eshape = eshape )))

  ans
}
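
# Illustrative usage sketch (not part of the package source), using the
# McCullagh and Nelder parameterization with two responses; simulated
# data and invented names, for this demo only.
if (FALSE) {
  set.seed(123)
  gdata <- data.frame(x2 = runif(400))
  gdata <- transform(gdata,
                     y1 = rgamma(400, shape = 3, scale = exp(1 + x2) / 3),
                     y2 = rgamma(400, shape = 5, scale = exp(1 - x2) / 5))
  fit <- vglm(cbind(y1, y2) ~ x2, gamma2, data = gdata, trace = TRUE)
  coef(fit, matrix = TRUE)  # mu and shape columns, interleaved
}
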
geometric <- function(link = "logit", expected = TRUE,
                      imethod = 1, iprob = NULL, zero = NULL) {
  if (!is.logical(expected) || length(expected) != 1)
    stop("bad input for argument 'expected'")

  link <- as.list(substitute(link))
  earg <- link2list(link)
  link <- attr(earg, "function.name")

  if (!is.Numeric(imethod, allowable.length = 1,
                  integer.valued = TRUE, positive = TRUE) || imethod > 3)
    stop("argument 'imethod' must be 1 or 2 or 3")

  if (length(zero) &&
      !is.Numeric(zero, integer.valued = TRUE, positive = TRUE))
    stop("bad input for argument 'zero'")

  new("vglmff",
  blurb = c("Geometric distribution ",
            "(P[Y=y] = prob * (1 - prob)^y, y = 0, 1, 2,...)\n",
            "Link: ", namesof("prob", link, earg = earg), "\n",
            "Mean: mu = (1 - prob) / prob\n",
            "Variance: mu * (1 + mu) = (1 - prob) / prob^2"),

  constraints = eval(substitute(expression({
    dotzero <- .zero
    Musual <- 1
    eval(negzero.expression)
  }), list( .zero = zero ))),

  infos = eval(substitute(function(...) {
    list(Musual = 1, zero = .zero )
  }, list( .zero = zero ))),

  initialize = eval(substitute(expression({
    temp5 <- w.y.check(w = w, y = y,
                       Is.nonnegative.y = TRUE, Is.integer.y = TRUE,
                       ncol.w.max = Inf, ncol.y.max = Inf,
                       out.wy = TRUE, colsyperw = 1, maximize = TRUE)
    w <- temp5$w
    y <- temp5$y

    ncoly <- ncol(y)
    Musual <- 1
    extra$ncoly <- ncoly
    extra$Musual <- Musual
    M <- Musual * ncoly

    mynames1 <- paste("prob", if (ncoly > 1) 1:ncoly else "", sep = "")
    predictors.names <-
      namesof(mynames1, .link , earg = .earg , tag = FALSE)

    if (!length(etastart)) {
      prob.init <- if ( .imethod == 2)
                     1 / (1 + y + 1/16) else
                   if ( .imethod == 3)
                     1 / (1 + apply(y, 2, median) + 1/16) else
                     1 / (1 + colSums(y * w) / colSums(w) + 1/16)

      if (!is.matrix(prob.init))
        prob.init <- matrix(prob.init, n, M, byrow = TRUE)

      if (length( .iprob ))
        prob.init <- matrix( .iprob , n, M, byrow = TRUE)

      etastart <- theta2eta(prob.init, .link , earg = .earg )
    }
  }), list( .link = link, .earg = earg,
            .imethod = imethod, .iprob = iprob ))),

  linkinv = eval(substitute(function(eta, extra = NULL) {
    prob <- eta2theta(eta, .link , earg = .earg )
    (1 - prob) / prob
  }, list( .link = link, .earg = earg ))),

  last = eval(substitute(expression({
    Musual <- extra$Musual
    misc$link <- c(rep( .link , length = ncoly))
    names(misc$link) <- mynames1

    misc$earg <- vector("list", M)
    names(misc$earg) <- mynames1
    for (ii in 1:ncoly) {
      misc$earg[[ii]] <- .earg
    }

    misc$Musual <- Musual
    misc$multipleResponses <- TRUE
    misc$expected <- .expected
    misc$imethod <- .imethod
    misc$iprob <- .iprob
  }), list( .link = link, .earg = earg, .iprob = iprob,
            .expected = expected, .imethod = imethod ))),

  loglikelihood = eval(substitute(
    function(mu, y, w, residuals = FALSE, eta, extra = NULL) {
    prob <- eta2theta(eta, .link , earg = .earg )
    if (residuals) stop("loglikelihood residuals not ",
                        "implemented yet") else {
      sum(c(w) * dgeom(x = y, prob = prob, log = TRUE))
    }
  }, list( .link = link, .earg = earg ))),

  vfamily = c("geometric"),

  deriv = eval(substitute(expression({
    prob <- eta2theta(eta, .link , earg = .earg )

    dl.dprob <- -y / (1 - prob) + 1 / prob
    dprobdeta <- dtheta.deta(prob, .link , earg = .earg )
    c(w) * cbind(dl.dprob * dprobdeta)
  }), list( .link = link, .earg = earg, .expected = expected ))),

  weight = eval(substitute(expression({
    ned2l.dprob2 <- if ( .expected ) {
      1 / (prob^2 * (1 - prob))
    } else {
      y / (1 - prob)^2 + 1 / prob^2
    }
    wz <- ned2l.dprob2 * dprobdeta^2
    if (!( .expected ))
      wz <- wz - dl.dprob * d2theta.deta2(prob, .link , earg = .earg )
    c(w) * wz
  }), list( .link = link, .earg = earg, .expected = expected ))))
}
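
# Illustrative usage sketch (not part of the package source); the
# simulated data and names are for demonstration only.
if (FALSE) {
  set.seed(123)
  gdata <- data.frame(x2 = runif(300))
  gdata <- transform(gdata,
                     y1 = rgeom(300, prob = logit(-1 + x2,
                                                  inverse = TRUE)))
  fit <- vglm(y1 ~ x2, geometric, data = gdata, trace = TRUE)
  coef(fit, matrix = TRUE)  # Coefficients on the logit(prob) scale
}
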
argument 'shape2'") N <- max(length(q), length(shape1), length(shape2)) q <- rep(q, length.out = N); shape1 <- rep(shape1, length.out = N); shape2 <- rep(shape2, length.out = N) ans <- q * 0 # Retains names(q) if (max(abs(shape1-shape1[1])) < 1.0e-08 && max(abs(shape2-shape2[1])) < 1.0e-08) { qstar <- floor(q) temp <- if (max(qstar) >= 0) dbetageom(x = 0:max(qstar), shape1 = shape1[1], shape2 = shape2[1]) else 0*qstar unq <- unique(qstar) for(ii in unq) { index <- (qstar == ii) ans[index] <- if (ii >= 0) sum(temp[1:(1+ii)]) else 0 } } else for(ii in 1:N) { qstar <- floor(q[ii]) ans[ii] <- if (qstar >= 0) sum(dbetageom(x = 0:qstar, shape1 = shape1[ii], shape2 = shape2[ii])) else 0 } if (log.p) log(ans) else ans } rbetageom <- function(n, shape1, shape2) { rgeom(n = n, prob = rbeta(n = n, shape1 = shape1, shape2 = shape2)) } negbinomial.control <- function(save.weight = TRUE, ...) { list(save.weight = save.weight) } negbinomial <- function(lmu = "loge", lsize = "loge", imu = NULL, isize = NULL, probs.y = 0.75, nsimEIM = 100, cutoff = 0.995, Maxiter = 5000, deviance.arg = FALSE, imethod = 1, parallel = FALSE, shrinkage.init = 0.95, zero = -2) { lmuuu <- as.list(substitute(lmu)) emuuu <- link2list(lmuuu) lmuuu <- attr(emuuu, "function.name") imuuu <- imu lsize <- as.list(substitute(lsize)) esize <- link2list(lsize) lsize <- attr(esize, "function.name") if (length(imuuu) && !is.Numeric(imuuu, positive = TRUE)) stop("bad input for argument 'imu'") if (length(isize) && !is.Numeric(isize, positive = TRUE)) stop("bad input for argument 'isize'") if (!is.Numeric(cutoff, allowable.length = 1) || cutoff < 0.8 || cutoff >= 1) stop("range error in the argument 'cutoff'") if (!is.Numeric(Maxiter, integer.valued = TRUE, allowable.length = 1) || Maxiter < 100) stop("bad input for argument 'Maxiter'") if (!is.Numeric(imethod, allowable.length = 1, integer.valued = TRUE, positive = TRUE) || imethod > 3) stop("argument 'imethod' must be 1 or 2 or 3") if (!is.Numeric(shrinkage.init, allowable.length = 1) || shrinkage.init < 0 || shrinkage.init > 1) stop("bad input for argument 'shrinkage.init'") if (!is.null(nsimEIM)) { if (!is.Numeric(nsimEIM, allowable.length = 1, integer.valued = TRUE)) stop("bad input for argument 'nsimEIM'") if (nsimEIM <= 10) warning("argument 'nsimEIM' should be an integer ", "greater than 10, say") } if (!is.logical( parallel ) || length( parallel ) != 1) stop("argument 'parallel' must be TRUE or FALSE") if ( parallel && length(zero)) stop("need to set 'zero = NULL' when parallel = TRUE") ans <- new("vglmff", blurb = c("Negative-binomial distribution\n\n", "Links: ", namesof("mu", lmuuu, earg = emuuu), ", ", namesof("size", lsize, earg = esize), "\n", "Mean: mu\n", "Variance: mu * (1 + mu / size) for NB-2"), constraints = eval(substitute(expression({ dotzero <- .zero Musual <- 2 eval(negzero.expression) if ( .parallel && ncol(cbind(y)) > 1) stop("univariate responses needed if 'parallel = TRUE'") constraints <- cm.vgam(matrix(1, M, 1), x, .parallel , constraints) }), list( .parallel = parallel, .zero = zero ))), infos = eval(substitute(function(...) 
negbinomial.control <- function(save.weight = TRUE, ...) {
  list(save.weight = save.weight)
}


negbinomial <- function(lmu = "loge", lsize = "loge",
                        imu = NULL, isize = NULL,
                        probs.y = 0.75,
                        nsimEIM = 100, cutoff = 0.995, Maxiter = 5000,
                        deviance.arg = FALSE, imethod = 1,
                        parallel = FALSE,
                        shrinkage.init = 0.95, zero = -2) {
  lmuuu <- as.list(substitute(lmu))
  emuuu <- link2list(lmuuu)
  lmuuu <- attr(emuuu, "function.name")
  imuuu <- imu

  lsize <- as.list(substitute(lsize))
  esize <- link2list(lsize)
  lsize <- attr(esize, "function.name")

  if (length(imuuu) && !is.Numeric(imuuu, positive = TRUE))
    stop("bad input for argument 'imu'")
  if (length(isize) && !is.Numeric(isize, positive = TRUE))
    stop("bad input for argument 'isize'")

  if (!is.Numeric(cutoff, allowable.length = 1) ||
      cutoff < 0.8 || cutoff >= 1)
    stop("range error in the argument 'cutoff'")
  if (!is.Numeric(Maxiter, integer.valued = TRUE,
                  allowable.length = 1) || Maxiter < 100)
    stop("bad input for argument 'Maxiter'")
  if (!is.Numeric(imethod, allowable.length = 1,
                  integer.valued = TRUE, positive = TRUE) || imethod > 3)
    stop("argument 'imethod' must be 1 or 2 or 3")
  if (!is.Numeric(shrinkage.init, allowable.length = 1) ||
      shrinkage.init < 0 || shrinkage.init > 1)
    stop("bad input for argument 'shrinkage.init'")

  if (!is.null(nsimEIM)) {
    if (!is.Numeric(nsimEIM, allowable.length = 1,
                    integer.valued = TRUE))
      stop("bad input for argument 'nsimEIM'")
    if (nsimEIM <= 10)
      warning("argument 'nsimEIM' should be an integer ",
              "greater than 10, say")
  }

  if (!is.logical( parallel ) || length( parallel ) != 1)
    stop("argument 'parallel' must be TRUE or FALSE")
  if ( parallel && length(zero))
    stop("need to set 'zero = NULL' when parallel = TRUE")

  ans <- new("vglmff",
  blurb = c("Negative-binomial distribution\n\n",
            "Links: ",
            namesof("mu",   lmuuu, earg = emuuu), ", ",
            namesof("size", lsize, earg = esize), "\n",
            "Mean: mu\n",
            "Variance: mu * (1 + mu / size) for NB-2"),

  constraints = eval(substitute(expression({
    dotzero <- .zero
    Musual <- 2
    eval(negzero.expression)

    if ( .parallel && ncol(cbind(y)) > 1)
      stop("univariate responses needed if 'parallel = TRUE'")
    constraints <- cm.vgam(matrix(1, M, 1), x, .parallel , constraints)
  }), list( .parallel = parallel, .zero = zero ))),

  infos = eval(substitute(function(...) {
    list(Musual = 2, zero = .zero )
  }, list( .zero = zero ))),

  initialize = eval(substitute(expression({
    Musual <- 2

    temp5 <- w.y.check(w = w, y = y, Is.integer.y = TRUE,
                       ncol.w.max = Inf, ncol.y.max = Inf,
                       out.wy = TRUE, colsyperw = 1, maximize = TRUE)
    w <- temp5$w
    y <- temp5$y

    assign("CQO.FastAlgorithm",
           ( .lmuuu == "loge") && ( .lsize == "loge"),
           envir = VGAM:::VGAMenv)
    if (any(function.name == c("cqo", "cao")) &&
        is.Numeric( .zero , allowable.length = 1) && .zero != -2)
      stop("argument zero = -2 is required")

    if (any(y < 0))
      stop("negative values not allowed for the 'negbinomial' family")
    if (any(round(y) != y))
      stop("integer-values only allowed for the 'negbinomial' family")

    if (ncol(w) > ncol(y))
      stop("number of columns of prior-'weights' is greater than ",
           "the number of responses")

    M <- Musual * ncol(y)
    NOS <- ncoly <- ncol(y)  # Number of species

    predictors.names <-
      c(namesof(if (NOS == 1) "mu" else
                paste("mu",   1:NOS, sep = ""),
                .lmuuu , earg = .emuuu , tag = FALSE),
        namesof(if (NOS == 1) "size" else
                paste("size", 1:NOS, sep = ""),
                .lsize , earg = .esize , tag = FALSE))
    predictors.names <- predictors.names[interleave.VGAM(M, M = Musual)]

    if (is.null( .nsimEIM )) {
      save.weight <- control$save.weight <- FALSE
    }

    if (is.numeric( .mu.init ))
      MU.INIT <- matrix( .mu.init , nrow(y), ncol(y), byrow = TRUE)

    if (!length(etastart)) {
      mu.init <- y
      for (iii in 1:ncol(y)) {
        use.this <- if ( .imethod == 1) {
          weighted.mean(y[, iii], w[, iii]) + 1/16
        } else if ( .imethod == 3) {
          c(quantile(y[, iii], probs = .probs.y ) + 1/16)
        } else {
          median(y[, iii]) + 1/16
        }

        if (is.numeric( .mu.init )) {
          mu.init[, iii] <- MU.INIT[, iii]
        } else {
          medabsres <- median(abs(y[, iii] - use.this)) + 1/32
          allowfun <- function(z, maxtol = 1)
            sign(z) * pmin(abs(z), maxtol)
          mu.init[, iii] <- use.this + (1 - .sinit ) *
                            allowfun(y[, iii] - use.this,
                                     maxtol = medabsres)
          mu.init[, iii] <- abs(mu.init[, iii]) + 1 / 1024
        }
      }  # of for (iii)

      if (is.Numeric( .k.init )) {
        kay.init <- matrix( .k.init , nrow = n, ncol = NOS, byrow = TRUE)
      } else {
        negbinomial.Loglikfun <- function(kmat, y, x, w, extraargs) {
          mu <- extraargs
          sum(c(w) * dnbinom(x = y, mu = mu, size = kmat, log = TRUE))
        }
        k.grid <- 2^(seq(-8, 8, length = 40))
        kay.init <- matrix(0, nrow = n, ncol = NOS)
        for (spp. in 1:NOS) {
          kay.init[, spp.] <-
            getMaxMin(k.grid, objfun = negbinomial.Loglikfun,
                      y = y[, spp.], x = x, w = w[, spp.],
                      extraargs = mu.init[, spp.])
        }
      }

      newemu <- .emuuu
      if ( .lmuuu == "nbcanlink") {
        newemu$size <- kay.init
      }

      etastart <-
        cbind(theta2eta(mu.init,  link = .lmuuu , earg = newemu ),
              theta2eta(kay.init, link = .lsize , earg = .esize ))
      etastart <-
        etastart[, interleave.VGAM(M, M = Musual), drop = FALSE]
    }
  }), list( .lmuuu = lmuuu, .lsize = lsize,
            .emuuu = emuuu, .esize = esize,
            .mu.init = imu, .k.init = isize,
            .probs.y = probs.y,
            .sinit = shrinkage.init, .nsimEIM = nsimEIM,
            .zero = zero, .imethod = imethod ))),

  linkinv = eval(substitute(function(eta, extra = NULL) {
    Musual <- 2
    NOS <- ncol(eta) / Musual
    eta.k <- eta[, Musual * (1:NOS), drop = FALSE]
    kmat <- eta2theta(eta.k, .lsize , earg = .esize )

    newemu <- .emuuu
    if ( .lmuuu == "nbcanlink") {
      newemu$size <- kmat
    }

    eta2theta(eta[, Musual * (1:NOS) - 1, drop = FALSE],
              .lmuuu , earg = newemu)
  }, list( .lmuuu = lmuuu, .lsize = lsize,
           .emuuu = emuuu, .esize = esize ))),

  last = eval(substitute(expression({
    if (exists("CQO.FastAlgorithm", envir = VGAM:::VGAMenv))
      rm("CQO.FastAlgorithm", envir = VGAM:::VGAMenv)

    temp0303 <- c(rep( .lmuuu , length = NOS),
                  rep( .lsize , length = NOS))
    names(temp0303) <-
      c(if (NOS == 1) "mu"   else paste("mu",   1:NOS, sep = ""),
        if (NOS == 1) "size" else paste("size", 1:NOS, sep = ""))
    temp0303 <- temp0303[interleave.VGAM(M, M = 2)]
    misc$link <- temp0303  # Already named

    misc$earg <- vector("list", M)
    names(misc$earg) <- names(misc$link)
    for (ii in 1:NOS) {
      misc$earg[[Musual*ii-1]] <- newemu
      misc$earg[[Musual*ii  ]] <- .esize
    }

    misc$cutoff <- .cutoff
    misc$imethod <- .imethod
    misc$nsimEIM <- .nsimEIM
    misc$expected <- TRUE
    misc$shrinkage.init <- .sinit
    misc$multipleResponses <- TRUE
  }), list( .lmuuu = lmuuu, .lsize = lsize,
            .emuuu = emuuu, .esize = esize,
            .cutoff = cutoff,
            .nsimEIM = nsimEIM, .sinit = shrinkage.init,
            .imethod = imethod ))),

  linkfun = eval(substitute(function(mu, extra = NULL) {
    Musual <- 2

    newemu <- .emuuu

    eta.temp <- theta2eta(mu, .lmuuu , earg = newemu)
    eta.kayy <- theta2eta(if (is.numeric( .isize )) .isize else 1.0,
                          .lsize , earg = .esize )
    eta.kayy <- 0 * eta.temp + eta.kayy  # Right dimension now.

    if ( .lmuuu == "nbcanlink") {
      newemu$size <- eta2theta(eta.kayy, .lsize , earg = .esize )
    }

    eta.temp <- cbind(eta.temp, eta.kayy)
    eta.temp[, interleave.VGAM(ncol(eta.temp), M = Musual),
             drop = FALSE]
  }, list( .lmuuu = lmuuu, .lsize = lsize,
           .emuuu = emuuu, .esize = esize,
           .isize = isize ))),

  loglikelihood = eval(substitute(
    function(mu, y, w, residuals = FALSE, eta, extra = NULL) {
    Musual <- 2
    NOS <- ncol(eta) / Musual
    eta.k <- eta[, Musual*(1:NOS), drop = FALSE]
    if ( .lsize == "loge") {
      bigval <- 68
      eta.k <- ifelse(eta.k >  bigval,  bigval, eta.k)
      eta.k <- ifelse(eta.k < -bigval, -bigval, eta.k)
    }
    kmat <- eta2theta(eta.k, .lsize , earg = .esize )

    newemu <- .emuuu
    if ( .lmuuu == "nbcanlink") {
      newemu$size <- kmat
    }

    if (residuals) stop("loglikelihood residuals not ",
                        "implemented yet") else
      sum(c(w) * dnbinom(x = y, mu = mu, size = kmat, log = TRUE))
  }, list( .lsize = lsize, .lmuuu = lmuuu,
           .emuuu = emuuu, .esize = esize ))),

  vfamily = c("negbinomial"),

  deriv = eval(substitute(expression({
    Musual <- 2
    NOS <- ncol(eta) / Musual
    M <- ncol(eta)

    eta.k <- eta[, Musual*(1:NOS), drop = FALSE]
    if ( .lsize == "loge") {
      bigval <- 68
      eta.k <- ifelse(eta.k >  bigval,  bigval, eta.k)
      eta.k <- ifelse(eta.k < -bigval, -bigval, eta.k)
    }
    kmat <- eta2theta(eta.k, .lsize , earg = .esize )

    newemu <- .emuuu
    if ( .lmuuu == "nbcanlink") {
      newemu$size <- kmat
    }

    dl.dmu <- y / mu - (y + kmat) / (mu + kmat)
    dl.dk <- digamma(y + kmat) - digamma(kmat) -
             (y + kmat) / (mu + kmat) + 1 +
             log(kmat / (kmat + mu))

    if ( .lmuuu == "nbcanlink")
      newemu$wrt.eta <- 1
    dmu.deta <- dtheta.deta(mu, .lmuuu , earg = newemu)  # eta1

    if ( .lmuuu == "nbcanlink")
      newemu$wrt.eta <- 2
    dk.deta1 <- dtheta.deta(mu, .lmuuu , earg = newemu)  # eta2

    dk.deta2 <- dtheta.deta(kmat, .lsize , earg = .esize )

    myderiv <- c(w) * cbind(dl.dmu * dmu.deta,
                            dl.dk * dk.deta2)

    if ( .lmuuu == "nbcanlink") {
      myderiv[, 1:NOS] <- myderiv[, 1:NOS] + c(w) * dl.dk * dk.deta1
    }

    myderiv[, interleave.VGAM(M, M = Musual)]
  }), list( .lmuuu = lmuuu, .lsize = lsize,
            .emuuu = emuuu, .esize = esize ))),

  weight = eval(substitute(expression({
    wz <- matrix(as.numeric(NA), n, M)

    if (is.null( .nsimEIM )) {
      fred2 <- dotFortran(name = "enbin9",
                          ans = double(n*NOS),
                          as.double(kmat), as.double(mu),
                          as.double( .cutoff ),
                          as.integer(n), ok = as.integer(1),
                          as.integer(NOS), sumpdf = double(1),
                          as.double( .Machine$double.eps ),
                          as.integer( .Maxiter ))
      if (fred2$ok != 1)
        stop("error in Fortran subroutine exnbin9")
      dim(fred2$ans) <- c(n, NOS)
      ned2l.dk2 <- -fred2$ans - 1/kmat + 1/(kmat+mu)
      wz[, Musual*(1:NOS)] <- dk.deta2^2 * ned2l.dk2
    } else {
      run.varcov <- matrix(0, n, NOS)
      for (ii in 1:( .nsimEIM )) {
        ysim <- rnbinom(n = n*NOS, mu = c(mu), size = c(kmat))
        if (NOS > 1)
          dim(ysim) <- c(n, NOS)
        dl.dk <- digamma(ysim + kmat) - digamma(kmat) -
                 (ysim + kmat) / (mu + kmat) + 1 +
                 log(kmat / (kmat + mu))
        run.varcov <- run.varcov + dl.dk^2
      }  # end of for loop

      run.varcov <- cbind(run.varcov / .nsimEIM )

      ned2l.dk2 <- if (intercept.only)
        matrix(colMeans(run.varcov),
               n, ncol(run.varcov), byrow = TRUE) else run.varcov

      wz[, Musual*(1:NOS)] <- ned2l.dk2 * dk.deta2^2
    }  # end of else

    ned2l.dmu2 <- 1 / mu - 1 / (mu + kmat)
    wz[, Musual*(1:NOS) - 1] <- ned2l.dmu2 * dmu.deta^2

    if ( .lmuuu == "nbcanlink") {
      wz[, Musual*(1:NOS) - 1] <- wz[, Musual*(1:NOS) - 1] +
                                  ned2l.dk2 * dk.deta1^2
      wz <- cbind(wz,
                  kronecker(ned2l.dk2 * dk.deta1 * dk.deta2,
                            if (NOS > 1) cbind(1, 0) else 1))
    }

    w.wz.merge(w = w, wz = wz, n = n, M = M, ndepy = NOS)
  }), list( .cutoff = cutoff, .Maxiter = Maxiter,
            .lmuuu = lmuuu, .nsimEIM = nsimEIM ))))

  if (deviance.arg)
    ans@deviance <- eval(substitute(
      function(mu, y, w, residuals = FALSE, eta, extra = NULL) {
      Musual <- 2
      NOS <- ncol(eta) / Musual

      if (ncol(as.matrix(y)) > 1 && ncol(as.matrix(w)) > 1)
        stop("cannot handle matrix 'w' yet")

      temp300 <- eta[, Musual*(1:NOS), drop = FALSE]
      if ( .lsize == "loge") {
        bigval <- 68
        temp300[temp300 >  bigval] <-  bigval
        temp300[temp300 < -bigval] <- -bigval
      } else {
        stop("can only handle the 'loge' link")
      }
      kmat <- eta2theta(temp300, .lsize , earg = .esize )
      devi <- 2 * (y * log(ifelse(y < 1, 1, y) / mu) +
                   (y + kmat) * log((mu + kmat) / (kmat + y)))
      if (residuals) {
        sign(y - mu) * sqrt(abs(devi) * w)
      } else {
        sum(c(w) * devi)
      }
    }, list( .lsize = lsize,
             .emuuu = emuuu, .esize = esize )))

  ans
}
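
# Illustrative usage sketch (not part of the package source).  The
# default zero = -2 makes every 'size' parameter intercept-only;
# simulated data and invented names, for this demo only.
if (FALSE) {
  set.seed(123)
  ndata <- data.frame(x2 = runif(500))
  ndata <- transform(ndata,
                     y1 = rnbinom(500, mu = exp(1 + x2), size = exp(1)))
  fit <- vglm(y1 ~ x2, negbinomial, data = ndata, trace = TRUE)
  coef(fit, matrix = TRUE)  # log(mu) regression, log(size) intercept
}
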
", "Try negbinomial()") temp5 <- w.y.check(w = w, y = y, Is.integer.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y M <- Musual * ncol(y) NOS <- ncoly <- ncol(y) # Number of species predictors.names <- c(namesof(if (NOS == 1) "prob" else paste("prob", 1:NOS, sep = ""), .lprob , earg = .eprob , tag = FALSE), namesof(if (NOS == 1) "size" else paste("size", 1:NOS, sep = ""), .lsize , earg = .esize , tag = FALSE)) predictors.names <- predictors.names[interleave.VGAM(M, M = 2)] if (is.null( .nsimEIM )) { save.weight <- control$save.weight <- FALSE } PROB.INIT <- if (is.numeric( .pinit )) { matrix( .pinit, nrow(y), ncol(y), byrow = TRUE) } else { NULL } if (!length(etastart)) { mu.init <- y for(iii in 1:ncol(y)) { use.this <- if ( .imethod == 1) { weighted.mean(y[, iii], w[, iii]) + 1/16 } else if ( .imethod == 3) { c(quantile(y[, iii], probs <- .probs.y) + 1/16) } else { median(y[, iii]) + 1/16 } if (FALSE) { mu.init[, iii] <- MU.INIT[, iii] } else { medabsres <- median(abs(y[, iii] - use.this)) + 1/32 allowfun <- function(z, maxtol = 1) sign(z) * pmin(abs(z), maxtol) mu.init[, iii] <- use.this + (1 - .sinit) * allowfun(y[, iii] - use.this, maxtol = medabsres) mu.init[, iii] <- abs(mu.init[, iii]) + 1 / 1024 } } if ( is.Numeric( .kinit )) { kayy.init <- matrix( .kinit, nrow = n, ncol = NOS, byrow = TRUE) } else { negbinomial.Loglikfun <- function(kmat, y, x, w, extraargs) { mu <- extraargs sum(c(w) * dnbinom(x = y, mu = mu, size = kmat, log = TRUE)) } k.grid <- 2^((-7):7) k.grid <- 2^(seq(-8, 8, length = 40)) kayy.init <- matrix(0, nrow = n, ncol = NOS) for(spp. in 1:NOS) { kayy.init[, spp.] <- getMaxMin(k.grid, objfun = negbinomial.Loglikfun, y = y[, spp.], x = x, w = w, extraargs = mu.init[, spp.]) } } prob.init <- if (length(PROB.INIT)) PROB.INIT else kayy.init / (kayy.init + mu.init) etastart <- cbind(theta2eta(prob.init, .lprob , earg = .eprob), theta2eta(kayy.init, .lsize , earg = .esize)) etastart <- etastart[, interleave.VGAM(M, M = Musual), drop = FALSE] } }), list( .lprob = lprob, .lsize = lsize, .eprob = eprob, .esize = esize, .pinit = iprob, .kinit = isize, .probs.y = probs.y, .sinit = shrinkage.init, .nsimEIM = nsimEIM, .zero = zero, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { Musual <- 2 NOS <- ncol(eta) / Musual pmat <- eta2theta(eta[, Musual*(1:NOS) - 1, drop = FALSE], .lprob , earg = .eprob) kmat <- eta2theta(eta[, Musual*(1:NOS)- 0, drop = FALSE], .lsize , earg = .esize) kmat / (kmat + pmat) }, list( .lprob = lprob, .eprob = eprob, .lsize = lsize, .esize = esize))), last = eval(substitute(expression({ temp0303 <- c(rep( .lprob , length = NOS), rep( .lsize , length = NOS)) names(temp0303) = c(if (NOS == 1) "prob" else paste("prob", 1:NOS, sep = ""), if (NOS == 1) "size" else paste("size", 1:NOS, sep = "")) temp0303 <- temp0303[interleave.VGAM(M, M = 2)] misc$link <- temp0303 # Already named misc$earg <- vector("list", M) names(misc$earg) <- names(misc$link) for(ii in 1:NOS) { misc$earg[[Musual*ii-1]] <- .eprob misc$earg[[Musual*ii ]] <- .esize } misc$isize <- .isize misc$imethod <- .imethod misc$nsimEIM <- .nsimEIM misc$expected <- TRUE misc$shrinkage.init <- .sinit misc$Musual <- 2 misc$multipleResponses <- TRUE }), list( .lprob = lprob, .lsize = lsize, .eprob = eprob, .esize = esize, .isize = isize, .nsimEIM = nsimEIM, .sinit = shrinkage.init, .imethod = imethod ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { Musual <- 2 
NOS <- ncol(eta) / Musual pmat <- eta2theta(eta[, Musual*(1:NOS) - 1, drop = FALSE], .lprob , earg = .eprob) temp300 <- eta[, Musual*(1:NOS) , drop = FALSE] if ( .lsize == "loge") { bigval <- 68 temp300 <- ifelse(temp300 > bigval, bigval, temp300) temp300 <- ifelse(temp300 < -bigval, -bigval, temp300) } kmat <- eta2theta(temp300, .lsize , earg = .esize) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else sum(c(w) * dnbinom(x = y, prob = pmat, size = kmat, log = TRUE)) }, list( .lsize = lsize, .lprob = lprob, .esize = esize, .eprob = eprob ))), vfamily = c("polya"), deriv = eval(substitute(expression({ Musual <- 2 NOS <- ncol(eta) / Musual M <- ncol(eta) pmat <- eta2theta(eta[, Musual*(1:NOS) - 1, drop = FALSE], .lprob , earg = .eprob) temp3 <- eta[, Musual*(1:NOS) , drop = FALSE] if ( .lsize == "loge") { bigval <- 68 temp3 <- ifelse(temp3 > bigval, bigval, temp3) temp3 <- ifelse(temp3 < -bigval, -bigval, temp3) } kmat <- eta2theta(temp3, .lsize , earg = .esize) dl.dprob <- kmat / pmat - y / (1.0 - pmat) dl.dkayy <- digamma(y + kmat) - digamma(kmat) + log(pmat) dprob.deta <- dtheta.deta(pmat, .lprob , earg = .eprob) dkayy.deta <- dtheta.deta(kmat, .lsize , earg = .esize) dthetas.detas <- cbind(dprob.deta, dkayy.deta) dThetas.detas <- dthetas.detas[, interleave.VGAM(M, M = Musual)] myderiv <- c(w) * cbind(dl.dprob, dl.dkayy) * dthetas.detas myderiv[, interleave.VGAM(M, M = Musual)] }), list( .lprob = lprob, .lsize = lsize, .eprob = eprob, .esize = esize))), weight = eval(substitute(expression({ wz <- matrix(0.0, n, M + M - 1) # wz is 'tridiagonal' ind1 <- iam(NA, NA, M = Musual, both = TRUE, diag = TRUE) mumat <- as.matrix(mu) for(spp. in 1:NOS) { run.varcov <- 0 kvec <- kmat[, spp.] pvec <- pmat[, spp.] for(ii in 1:( .nsimEIM )) { ysim <- rnbinom(n = n, prob = pvec, size = kvec) dl.dprob <- kvec / pvec - ysim / (1.0 - pvec) dl.dkayy <- digamma(ysim + kvec) - digamma(kvec) + log(pvec) temp3 <- cbind(dl.dprob, dl.dkayy) run.varcov <- run.varcov + temp3[, ind1$row.index] * temp3[, ind1$col.index] } run.varcov <- cbind(run.varcov / .nsimEIM) wz1 <- if (intercept.only) matrix(colMeans(run.varcov), nrow = n, ncol = ncol(run.varcov), byrow = TRUE) else run.varcov wz1 <- wz1 * dThetas.detas[, Musual * (spp. - 1) + ind1$row] * dThetas.detas[, Musual * (spp. - 1) + ind1$col] for(jay in 1:Musual) for(kay in jay:Musual) { cptr <- iam((spp. - 1) * Musual + jay, (spp. - 1) * Musual + kay, M = M) wz[, cptr] <- wz1[, iam(jay, kay, M = Musual)] } } # End of for(spp.) 
loop w.wz.merge(w = w, wz = wz, n = n, M = M, ndepy = NOS) }), list( .nsimEIM = nsimEIM )))) if (deviance.arg) ans@deviance = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { Musual <- 2 NOS <- ncol(eta) / Musual temp300 <- eta[, Musual*(1:NOS), drop = FALSE] if (ncol(as.matrix(y)) > 1 && ncol(as.matrix(w)) > 1) stop("cannot handle matrix 'w' yet") if ( .lsize == "loge") { bigval <- 68 temp300[temp300 > bigval] <- bigval temp300[temp300 < -bigval] <- -bigval } else { stop("can only handle the 'loge' link") } kayy <- eta2theta(temp300, .lsize , earg = .esize) devi <- 2 * (y * log(ifelse(y < 1, 1, y) / mu) + (y + kayy) * log((mu + kayy) / (kayy + y))) if (residuals) sign(y - mu) * sqrt(abs(devi) * w) else sum(c(w) * devi) }, list( .lsize = lsize, .eprob = eprob, .esize = esize))) ans } # End of polya() simple.poisson <- function() { new("vglmff", blurb = c("Poisson distribution\n\n", "Link: log(lambda)", "\n", "Variance: lambda"), deviance = function(mu, y, w, residuals = FALSE, eta, extra = NULL) { nz <- y > 0 devi <- - (y - mu) devi[nz] <- devi[nz] + y[nz] * log(y[nz]/mu[nz]) if (residuals) sign(y - mu) * sqrt(2 * abs(devi) * w) else 2 * sum(c(w) * devi) }, initialize = expression({ if (ncol(cbind(w)) != 1) stop("prior weight must be a vector or a one-column matrix") if (ncol(cbind(y)) != 1) stop("response must be a vector or a one-column matrix") predictors.names <- "log(lambda)" mu <- (weighted.mean(y, w) + y) / 2 + 1/8 if (!length(etastart)) etastart <- log(mu) }), linkinv = function(eta, extra = NULL) exp(eta), last = expression({ misc$link <- c(lambda = "loge") misc$earg <- list(lambda = list()) }), link = function(mu, extra = NULL) log(mu), vfamily = "simple.poisson", deriv = expression({ lambda <- mu dl.dlambda <- -1 + y/lambda dlambda.deta <- dtheta.deta(theta = lambda, link = "loge") c(w) * dl.dlambda * dlambda.deta }), weight = expression({ d2l.dlambda2 <- 1 / lambda c(w) * d2l.dlambda2 * dlambda.deta^2 })) } studentt <- function(ldf = "loglog", idf = NULL, tol1 = 0.1, imethod = 1) { ldof <- as.list(substitute(ldf)) edof <- link2list(ldof) ldof <- attr(edof, "function.name") idof <- idf if (length(idof)) if (!is.Numeric(idof) || any(idof <= 1)) stop("argument 'idf' should be > 1") if (!is.Numeric(tol1, positive = TRUE)) stop("argument 'tol1' should be positive") if (!is.Numeric(imethod, allowable.length = 1, integer.valued = TRUE, positive = TRUE) || imethod > 3) stop("argument 'imethod' must be 1 or 2 or 3") new("vglmff", blurb = c("Student t-distribution\n\n", "Link: ", namesof("df", ldof, earg = edof), "\n", "Variance: df / (df - 2) if df > 2\n"), infos = eval(substitute(function(...) 
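
# Illustrative usage sketch (not part of the package source): polya()
# fits the same distribution as negbinomial() but parameterized by
# (prob, size); simulated data and invented names, for this demo only.
if (FALSE) {
  set.seed(123)
  pdata <- data.frame(y1 = rnbinom(500, prob = 0.3, size = 2))
  fit <- vglm(y1 ~ 1, polya, data = pdata, trace = TRUE)
  Coef(fit)  # Estimates of prob and size
}
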
simple.poisson <- function() {
  new("vglmff",
  blurb = c("Poisson distribution\n\n",
            "Link: log(lambda)", "\n",
            "Variance: lambda"),
  deviance = function(mu, y, w, residuals = FALSE, eta, extra = NULL) {
    nz <- y > 0
    devi <- -(y - mu)
    devi[nz] <- devi[nz] + y[nz] * log(y[nz] / mu[nz])
    if (residuals)
      sign(y - mu) * sqrt(2 * abs(devi) * w) else
      2 * sum(c(w) * devi)
  },
  initialize = expression({
    if (ncol(cbind(w)) != 1)
      stop("prior weight must be a vector or a one-column matrix")
    if (ncol(cbind(y)) != 1)
      stop("response must be a vector or a one-column matrix")

    predictors.names <- "log(lambda)"
    mu <- (weighted.mean(y, w) + y) / 2 + 1/8
    if (!length(etastart))
      etastart <- log(mu)
  }),
  linkinv = function(eta, extra = NULL) exp(eta),
  last = expression({
    misc$link <- c(lambda = "loge")
    misc$earg <- list(lambda = list())
  }),
  linkfun = function(mu, extra = NULL) log(mu),
  vfamily = "simple.poisson",
  deriv = expression({
    lambda <- mu
    dl.dlambda <- -1 + y / lambda
    dlambda.deta <- dtheta.deta(theta = lambda, link = "loge")
    c(w) * dl.dlambda * dlambda.deta
  }),
  weight = expression({
    d2l.dlambda2 <- 1 / lambda
    c(w) * d2l.dlambda2 * dlambda.deta^2
  }))
}


studentt <- function(ldf = "loglog", idf = NULL,
                     tol1 = 0.1, imethod = 1) {
  ldof <- as.list(substitute(ldf))
  edof <- link2list(ldof)
  ldof <- attr(edof, "function.name")

  idof <- idf

  if (length(idof))
    if (!is.Numeric(idof) || any(idof <= 1))
      stop("argument 'idf' should be > 1")

  if (!is.Numeric(tol1, positive = TRUE))
    stop("argument 'tol1' should be positive")

  if (!is.Numeric(imethod, allowable.length = 1,
                  integer.valued = TRUE, positive = TRUE) || imethod > 3)
    stop("argument 'imethod' must be 1 or 2 or 3")

  new("vglmff",
  blurb = c("Student t-distribution\n\n",
            "Link: ", namesof("df", ldof, earg = edof), "\n",
            "Variance: df / (df - 2) if df > 2\n"),

  infos = eval(substitute(function(...) {
    list(Musual = 1, tol1 = .tol1 )
  }, list( .tol1 = tol1 ))),

  initialize = eval(substitute(expression({
    w.y.check(w = w, y = y)

    predictors.names <- namesof("df", .ldof , earg = .edof , tag = FALSE)

    if (!length(etastart)) {
      init.df <- if (length( .idof )) .idof else {
        VarY <- var(y)
        MadY <- mad(y)
        if (VarY <= (1 + .tol1 ))
          VarY <- 1.12
        if ( .imethod == 1) {
          2 * VarY / (VarY - 1)
        } else if ( .imethod == 2) {
          ifelse(MadY < 1.05, 30, ifelse(MadY > 1.2, 2, 5))
        } else
          10
      }

      etastart <- rep(theta2eta(init.df, .ldof , earg = .edof ),
                      length.out = length(y))
    }
  }), list( .ldof = ldof, .edof = edof, .idof = idof,
            .tol1 = tol1, .imethod = imethod ))),

  linkinv = eval(substitute(function(eta, extra = NULL) {
    Dof <- eta2theta(eta, .ldof , earg = .edof )
    ans <- 0 * eta  # The mean is zero (when it exists)
    ans[Dof <= 1] <- NA
    ans
  }, list( .ldof = ldof, .edof = edof ))),

  last = eval(substitute(expression({
    misc$link <-    c(df = .ldof )
    misc$earg <- list(df = .edof )

    misc$imethod <- .imethod
    misc$expected <- TRUE
  }), list( .ldof = ldof, .edof = edof,
            .imethod = imethod ))),

  loglikelihood = eval(substitute(
    function(mu, y, w, residuals = FALSE, eta, extra = NULL) {
    Dof <- eta2theta(eta, .ldof , earg = .edof )
    if (residuals) stop("loglikelihood residuals not ",
                        "implemented yet") else {
      sum(c(w) * dt(x = y, df = Dof, log = TRUE))
    }
  }, list( .ldof = ldof, .edof = edof ))),

  vfamily = c("studentt"),

  deriv = eval(substitute(expression({
    Dof <- eta2theta(eta, .ldof , earg = .edof )
    ddf.deta <- dtheta.deta(theta = Dof, .ldof , earg = .edof )

    DDS  <- function(df) digamma((df + 1) / 2) - digamma(df / 2)
    DDSp <- function(df) 0.5 * (trigamma((df + 1) / 2) -
                                trigamma(df / 2))

    temp0 <- 1 / Dof
    temp1 <- temp0 * y^2
    dl.ddf <- 0.5 * (-temp0 - log1p(temp1) +
                     (Dof + 1) * y^2 / (Dof^2 * (1 + temp1)) +
                     DDS(Dof))
    c(w) * dl.ddf * ddf.deta
  }), list( .ldof = ldof, .edof = edof ))),

  weight = eval(substitute(expression({
    const2 <- (Dof + 0) / (Dof + 3)
    const2[!is.finite(Dof)] <- 1  # Handles Inf

    tmp6 <- DDS(Dof)
    nedl2.dnu2 <- 0.5 * (tmp6 * (const2 * tmp6 - 2 / (Dof + 1)) -
                         DDSp(Dof))
    wz <- c(w) * nedl2.dnu2 * ddf.deta^2
    wz
  }), list( .ldof = ldof, .edof = edof ))))
}
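
# Illustrative usage sketch (not part of the package source):
# studentt() estimates only the degrees of freedom, on the loglog
# scale by default; simulated data and invented names.
if (FALSE) {
  set.seed(123)
  tdata <- data.frame(y1 = rt(1000, df = 5))
  fit <- vglm(y1 ~ 1, studentt, data = tdata, trace = TRUE)
  Coef(fit)  # Estimated df; compare with the true value 5
}
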
= eloc), ", ", namesof("scale", lsca, earg = esca), ", ", namesof("df", ldof, earg = edof), "\n", "Variance: scale^2 * df / (df - 2) if df > 2\n"), constraints = eval(substitute(expression({ dotzero <- .zero Musual <- 3 eval(negzero.expression) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(Musual = 3, zero = .zero) }, list( .zero = zero ))), initialize = eval(substitute(expression({ Musual <- 3 temp5 <- w.y.check(w = w, y = y, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, maximize = TRUE) w <- temp5$w y <- temp5$y extra$NOS <- NOS <- ncoly <- ncol(y) # Number of species extra$Musual <- Musual M <- Musual * ncoly # mynames1 <- paste("location", if (NOS > 1) 1:NOS else "", sep = "") mynames2 <- paste("scale", if (NOS > 1) 1:NOS else "", sep = "") mynames3 <- paste("df", if (NOS > 1) 1:NOS else "", sep = "") predictors.names <- c(namesof(mynames1, .lloc , earg = .eloc , tag = FALSE), namesof(mynames2, .lsca , earg = .esca , tag = FALSE), namesof(mynames3, .ldof , earg = .edof , tag = FALSE)) predictors.names <- predictors.names[interleave.VGAM(Musual * NOS, M = Musual)] if (!length(etastart)) { init.loc <- if (length( .iloc )) .iloc else { if ( .imethod == 2) apply(y, 2, median) else if ( .imethod == 3) (colMeans(y) + t(y)) / 2 else { colSums(w * y) / colSums(w) } } sdvec <- apply(y, 2, sd) init.sca <- if (length( .isca )) .isca else sdvec / 2.3 sdvec <- rep(sdvec, length.out <- max(length(sdvec), length(init.sca))) init.sca <- rep(init.sca, length.out <- max(length(sdvec), length(init.sca))) ind9 <- (sdvec / init.sca <= (1 + 0.12)) sdvec[ind9] <- sqrt(1.12) * init.sca[ind9] init.dof <- if (length( .idof )) .idof else (2 * (sdvec / init.sca)^2) / ((sdvec / init.sca)^2 - 1) if (!is.Numeric(init.dof) || init.dof <= 1) init.dof <- rep(3, length.out = ncoly) mat1 <- matrix(theta2eta(init.loc, .lloc , earg = .eloc ), n, NOS, byrow = TRUE) mat2 <- matrix(theta2eta(init.sca, .lsca , earg = .esca ), n, NOS, byrow = TRUE) mat3 <- matrix(theta2eta(init.dof, .ldof , earg = .edof ), n, NOS, byrow = TRUE) etastart <- cbind(mat1, mat2, mat3) etastart <- etastart[, interleave.VGAM(ncol(etastart), M = Musual)] } }), list( .lloc = lloc, .eloc = eloc, .iloc = iloc, .lsca = lsca, .esca = esca, .isca = isca, .ldof = ldof, .edof = edof, .idof = idof, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { NOS <- extra$NOS Musual <- extra$Musual Loc <- eta2theta(eta[, Musual*(1:NOS)-2], .lloc , earg = .eloc ) Dof <- eta2theta(eta[, Musual*(1:NOS)-0], .ldof , earg = .edof ) Loc[Dof <= 1] <- NA Loc }, list( .lloc = lloc, .eloc = eloc, .lsca = lsca, .esca = esca, .ldof = ldof, .edof = edof ))), last = eval(substitute(expression({ Musual <- extra$Musual misc$link <- c(rep( .lloc , length = NOS), rep( .lsca , length = NOS), rep( .ldof , length = NOS)) misc$link <- misc$link[interleave.VGAM(Musual * NOS, M = Musual)] temp.names <- c(mynames1, mynames2, mynames3) temp.names <- temp.names[interleave.VGAM(Musual * NOS, M = Musual)] names(misc$link) <- temp.names misc$earg <- vector("list", Musual * NOS) names(misc$earg) <- temp.names for(ii in 1:NOS) { misc$earg[[Musual*ii-2]] <- .eloc misc$earg[[Musual*ii-1]] <- .esca misc$earg[[Musual*ii ]] <- .edof } misc$Musual <- Musual misc$imethod <- .imethod misc$expected <- TRUE misc$multipleResponses <- TRUE }), list( .lloc = lloc, .eloc = eloc, .lsca = lsca, .esca = esca, .ldof = ldof, .edof = edof, .imethod = imethod ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { NOS <- 
extra$NOS Musual <- extra$Musual Loc <- eta2theta(eta[, Musual*(1:NOS)-2], .lloc , earg = .eloc ) Sca <- eta2theta(eta[, Musual*(1:NOS)-1], .lsca , earg = .esca ) Dof <- eta2theta(eta[, Musual*(1:NOS)-0], .ldof , earg = .edof ) zedd <- (y - Loc) / Sca if (residuals) stop("loglikelihood residuals not ", "implemented yet") else { sum(c(w) * (dt(x = zedd, df = Dof, log = TRUE) - log(Sca))) } }, list( .lloc = lloc, .eloc = eloc, .lsca = lsca, .esca = esca, .ldof = ldof, .edof = edof ))), vfamily = c("studentt3"), deriv = eval(substitute(expression({ Musual <- extra$Musual NOS <- extra$NOS Loc <- eta2theta(eta[, Musual*(1:NOS)-2], .lloc , earg = .eloc ) Sca <- eta2theta(eta[, Musual*(1:NOS)-1], .lsca , earg = .esca ) Dof <- eta2theta(eta[, Musual*(1:NOS)-0], .ldof , earg = .edof ) dloc.deta <- cbind(dtheta.deta(theta = Loc, .lloc , earg = .eloc )) dsca.deta <- cbind(dtheta.deta(theta = Sca, .lsca , earg = .esca )) ddof.deta <- cbind(dtheta.deta(theta = Dof, .ldof , earg = .edof )) zedd <- (y - Loc) / Sca temp0 <- 1 / Dof temp1 <- temp0 * zedd^2 dl.dloc <- (Dof + 1) * zedd / (Sca * (Dof + zedd^2)) dl.dsca <- zedd * dl.dloc - 1 / Sca dl.ddof <- 0.5 * (-temp0 - log1p(temp1) + (Dof+1) * zedd^2 / (Dof^2 * (1 + temp1)) + digamma((Dof+1)/2) - digamma(Dof/2)) ans <- c(w) * cbind(dl.dloc * dloc.deta, dl.dsca * dsca.deta, dl.ddof * ddof.deta) ans <- ans[, interleave.VGAM(ncol(ans), M = Musual)] ans }), list( .lloc = lloc, .eloc = eloc, .lsca = lsca, .esca = esca, .ldof = ldof, .edof = edof ))), weight = eval(substitute(expression({ const1 <- (Dof + 1) / (Dof + 3) const2 <- (Dof + 0) / (Dof + 3) const1[!is.finite(Dof)] <- 1 # Handles Inf const2[!is.finite(Dof)] <- 1 # Handles Inf const4 <- dnorm(0) ned2l.dlocat2 <- const1 / (Sca * (Kayfun.studentt(Dof) / const4))^2 ned2l.dscale2 <- 2 * const2 / Sca^2 DDS <- function(df) digamma((df + 1) / 2) - digamma(df / 2) DDSp <- function(df) 0.5 * (trigamma((df + 1) / 2) - trigamma(df / 2)) tmp6 <- DDS(Dof) edl2.dnu2 <- 0.5 * (tmp6 * (const2 * tmp6 - 2 / (Dof + 1)) - DDSp(Dof)) ned2l.dshape2 <- cbind(edl2.dnu2) # cosmetic name change ned2l.dshape.dlocat <- cbind(0 * Sca) ned2l.dshape.dscale <- cbind((-1 / (Dof + 1.0) + const2 * DDS(Dof)) / Sca) wz <- array(c(c(w) * ned2l.dlocat2 * dloc.deta^2, c(w) * ned2l.dscale2 * dsca.deta^2, c(w) * ned2l.dshape2 * ddof.deta^2, c(w) * ned2l.dshape2 * 0, c(w) * ned2l.dshape.dscale * dsca.deta * ddof.deta, c(w) * ned2l.dshape.dlocat * dloc.deta * ddof.deta), dim = c(n, M / Musual, 6)) wz <- arwz2wz(wz, M = M, Musual = Musual) if (FALSE) { wz <- matrix(0.0, n, dimm(M)) wz[, Musual*(1:NOS) - 2] <- ned2l.dlocat2 * dloc.deta^2 wz[, Musual*(1:NOS) - 1] <- ned2l.dscale2 * dsca.deta^2 wz[, Musual*(1:NOS) - 0] <- ned2l.dshape2 * ddof.deta^2 for (ii in ((1:NOS) - 1)) { ind3 <- 1 + ii wz[, iam(ii*Musual + 1, ii*Musual + 3, M = M)] <- ned2l.dshape.dlocat[, ind3] * dloc.deta[, ind3] * ddof.deta[, ind3] wz[, iam(ii*Musual + 2, ii*Musual + 3, M = M)] <- ned2l.dshape.dscale[, ind3] * dsca.deta[, ind3] * ddof.deta[, ind3] } while (all(wz[, ncol(wz)] == 0)) wz <- wz[, -ncol(wz)] } wz }), list( .lloc = lloc, .eloc = eloc, .lsca = lsca, .esca = esca, .ldof = ldof, .edof = edof )))) } studentt2 <- function(df = Inf, llocation = "identity", lscale = "loge", ilocation = NULL, iscale = NULL, imethod = 1, zero = -2) { lloc <- as.list(substitute(llocation)) eloc <- link2list(lloc) lloc <- attr(eloc, "function.name") lsca <- as.list(substitute(lscale)) esca <- link2list(lsca) lsca <- attr(esca, "function.name") iloc <- ilocation; isca <- iscale doff <- df if 
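
# Illustrative usage sketch (not part of the package source):
# studentt3() adds location and scale, and by default zero = -(2:3)
# keeps scale and df intercept-only; simulated data, invented names.
if (FALSE) {
  set.seed(123)
  tdata <- data.frame(x2 = runif(1000))
  tdata <- transform(tdata, y1 = 1 + 2 * x2 + 0.5 * rt(1000, df = 8))
  fit <- vglm(y1 ~ x2, studentt3, data = tdata, trace = TRUE)
  coef(fit, matrix = TRUE)  # location slope; scale and df intercepts
}
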
studentt2 <- function(df = Inf,
                      llocation = "identity", lscale = "loge",
                      ilocation = NULL, iscale = NULL,
                      imethod = 1, zero = -2) {
  lloc <- as.list(substitute(llocation))
  eloc <- link2list(lloc)
  lloc <- attr(eloc, "function.name")

  lsca <- as.list(substitute(lscale))
  esca <- link2list(lsca)
  lsca <- attr(esca, "function.name")

  iloc <- ilocation
  isca <- iscale
  doff <- df

  if (is.finite(doff))
    if (!is.Numeric(doff, positive = TRUE))
      stop("argument 'df' must be positive")
  if (!is.Numeric(imethod, allowable.length = 1,
                  integer.valued = TRUE, positive = TRUE) || imethod > 3)
    stop("argument 'imethod' must be 1 or 2 or 3")

  if (length(iloc))
    if (!is.Numeric(iloc))
      stop("bad input in argument 'ilocation'")
  if (length(isca))
    if (!is.Numeric(isca, positive = TRUE))
      stop("argument 'iscale' should be positive")

  new("vglmff",
  blurb = c("Student t-distribution\n\n",
            "Link: ",
            namesof("location", lloc, earg = eloc), ", ",
            namesof("scale",    lsca, earg = esca), "\n",
            "Variance: scale^2 * df / (df - 2) if df > 2\n"),

  constraints = eval(substitute(expression({
    dotzero <- .zero
    Musual <- 2
    eval(negzero.expression)
  }), list( .zero = zero ))),

  infos = eval(substitute(function(...) {
    list(Musual = 2, zero = .zero )
  }, list( .zero = zero ))),

  initialize = eval(substitute(expression({
    Musual <- 2

    temp5 <- w.y.check(w = w, y = y,
                       ncol.w.max = Inf,
                       out.wy = TRUE, maximize = TRUE)
    w <- temp5$w
    y <- temp5$y

    extra$NOS <- NOS <- ncoly <- ncol(y)  # Number of species
    extra$Musual <- Musual
    M <- Musual * ncoly

    mynames1 <- paste("location", if (NOS > 1) 1:NOS else "", sep = "")
    mynames2 <- paste("scale",    if (NOS > 1) 1:NOS else "", sep = "")
    predictors.names <-
      c(namesof(mynames1, .lloc , earg = .eloc , tag = FALSE),
        namesof(mynames2, .lsca , earg = .esca , tag = FALSE))
    predictors.names <-
      predictors.names[interleave.VGAM(Musual * NOS, M = Musual)]

    if (!length(etastart)) {
      init.loc <- if (length( .iloc )) .iloc else {
        if ( .imethod == 2) apply(y, 2, median) else
        if ( .imethod == 3) (colMeans(y) + t(y)) / 2 else {
          colSums(w * y) / colSums(w)
        }
      }

      sdvec <- apply(y, 2, sd)
      init.sca <- if (length( .isca )) .isca else sdvec / 2.3

      mat1 <- matrix(theta2eta(init.loc, .lloc , earg = .eloc ),
                     n, NOS, byrow = TRUE)
      mat2 <- matrix(theta2eta(init.sca, .lsca , earg = .esca ),
                     n, NOS, byrow = TRUE)
      etastart <- cbind(mat1, mat2)
      etastart <- etastart[, interleave.VGAM(ncol(etastart),
                                             M = Musual)]
    }
  }), list( .lloc = lloc, .eloc = eloc, .iloc = iloc,
            .lsca = lsca, .esca = esca, .isca = isca,
            .doff = doff,
            .imethod = imethod ))),

  linkinv = eval(substitute(function(eta, extra = NULL) {
    NOS <- extra$NOS
    Musual <- extra$Musual
    Loc <- eta2theta(eta[, Musual*(1:NOS) - 1], .lloc , earg = .eloc )
    Dof <- matrix( .doff , nrow(cbind(Loc)), NOS, byrow = TRUE)
    Loc[Dof <= 1] <- NA
    Loc
  }, list( .lloc = lloc, .eloc = eloc,
           .lsca = lsca, .esca = esca,
           .doff = doff ))),

  last = eval(substitute(expression({
    Musual <- extra$Musual
    misc$link <- c(rep( .lloc , length = NOS),
                   rep( .lsca , length = NOS))
    temp.names <- c(mynames1, mynames2)
    temp.names <- temp.names[interleave.VGAM(Musual * NOS, M = Musual)]
    names(misc$link) <- temp.names

    misc$earg <- vector("list", Musual * NOS)
    names(misc$earg) <- temp.names
    for (ii in 1:NOS) {
      misc$earg[[Musual*ii-1]] <- .eloc
      misc$earg[[Musual*ii-0]] <- .esca
    }

    misc$Musual <- Musual
    misc$simEIM <- TRUE
    misc$df <- .doff
    misc$imethod <- .imethod
    misc$expected <- TRUE
    misc$multipleResponses <- TRUE
  }), list( .lloc = lloc, .eloc = eloc,
            .lsca = lsca, .esca = esca,
            .doff = doff, .imethod = imethod ))),

  loglikelihood = eval(substitute(
    function(mu, y, w, residuals = FALSE, eta, extra = NULL) {
    NOS <- extra$NOS
    Musual <- extra$Musual
    Loc <- eta2theta(eta[, Musual*(1:NOS)-1], .lloc , earg = .eloc )
    Sca <- eta2theta(eta[, Musual*(1:NOS)-0], .lsca , earg = .esca )
    Dof <- matrix( .doff , nrow(cbind(Loc)), NOS, byrow = TRUE)
    zedd <- (y - Loc) / Sca
    if (residuals) stop("loglikelihood residuals not ",
                        "implemented yet") else {
      sum(c(w) * (dt(x = zedd, df = Dof, log = TRUE) - log(Sca)))
    }
  }, list( .lloc = lloc, .eloc = eloc,
           .lsca = lsca, .esca = esca,
           .doff = doff ))),

  vfamily = c("studentt2"),

  deriv = eval(substitute(expression({
    Musual <- extra$Musual
    NOS <- extra$NOS

    Loc <- eta2theta(eta[, Musual*(1:NOS)-1], .lloc , earg = .eloc )
    Sca <- eta2theta(eta[, Musual*(1:NOS)-0], .lsca , earg = .esca )
    Dof <- matrix( .doff , n, NOS, byrow = TRUE)

    dlocat.deta <- dtheta.deta(theta = Loc, .lloc , earg = .eloc )
    dscale.deta <- dtheta.deta(theta = Sca, .lsca , earg = .esca )

    zedd <- (y - Loc) / Sca
    temp0 <- 1 / Dof
    temp1 <- temp0 * zedd^2
    dl.dlocat <- (Dof + 1) * zedd / (Sca * (Dof + zedd^2))
    dl.dlocat[!is.finite(Dof)] <- zedd / Sca  # Adjust for df = Inf
    dl.dscale <- zedd * dl.dlocat - 1 / Sca

    ans <- c(w) * cbind(dl.dlocat * dlocat.deta,
                        dl.dscale * dscale.deta)
    ans <- ans[, interleave.VGAM(ncol(ans), M = Musual)]
    ans
  }), list( .lloc = lloc, .eloc = eloc,
            .lsca = lsca, .esca = esca,
            .doff = doff ))),

  weight = eval(substitute(expression({
    const1 <- (Dof + 1) / (Dof + 3)
    const2 <- (Dof + 0) / (Dof + 3)
    const1[!is.finite( Dof )] <- 1  # Handles Inf
    const2[!is.finite( Dof )] <- 1  # Handles Inf

    const4 <- dnorm(0)
    ned2l.dlocat2 <- const1 / (Sca * (Kayfun.studentt(Dof) / const4))^2
    ned2l.dscale2 <- 2.0 * const2 / Sca^2  # 2.0 seems to work

    wz <- matrix(as.numeric(NA), n, M)  # 2 = M; diagonal!
    wz[, Musual*(1:NOS) - 1] <- ned2l.dlocat2 * dlocat.deta^2
    wz[, Musual*(1:NOS)    ] <- ned2l.dscale2 * dscale.deta^2

    w.wz.merge(w = w, wz = wz, n = n, M = M, ndepy = NOS)
  }), list( .lloc = lloc, .eloc = eloc,
            .lsca = lsca, .esca = esca,
            .doff = doff ))))
}
chisq <- function(link = "loge", zero = NULL) {
  link <- as.list(substitute(link))
  earg <- link2list(link)
  link <- attr(earg, "function.name")

  if (length(zero) &&
      !is.Numeric(zero, integer.valued = TRUE, positive = TRUE))
    stop("bad input for argument 'zero'")

  new("vglmff",
  blurb = c("Chi-squared distribution\n\n",
            "Link: ",
            namesof("df", link, earg = earg, tag = FALSE)),

  constraints = eval(substitute(expression({
    dotzero <- .zero
    Musual <- 1
    eval(negzero.expression)
  }), list( .zero = zero ))),

  infos = eval(substitute(function(...) {
    list(Musual = 1, zero = .zero )
  }, list( .zero = zero ))),

  initialize = eval(substitute(expression({
    temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE,
                       ncol.w.max = Inf, ncol.y.max = Inf,
                       out.wy = TRUE, colsyperw = 1, maximize = TRUE)
    w <- temp5$w
    y <- temp5$y

    ncoly <- ncol(y)
    Musual <- 1
    extra$ncoly <- ncoly
    extra$Musual <- Musual
    M <- Musual * ncoly
    extra$ncoly <- NOS <- ncoly  # Number of species

    mynames1 <- paste("df", if (NOS > 1) 1:NOS else "", sep = "")
    predictors.names <-
      namesof(mynames1, .link , earg = .earg , tag = FALSE)

    if (!length(mustart) && !length(etastart))
      mustart <- y + (1 / 8) * (y == 0)
  }), list( .link = link, .earg = earg ))),

  linkinv = eval(substitute(function(eta, extra = NULL) {
    eta2theta(eta, .link , earg = .earg )
  }, list( .link = link, .earg = earg ))),

  last = eval(substitute(expression({
    Musual <- extra$Musual
    misc$link <- c(rep( .link , length = ncoly))
    names(misc$link) <- mynames1

    misc$earg <- vector("list", M)
    names(misc$earg) <- mynames1
    for (ii in 1:ncoly) {
      misc$earg[[ii]] <- .earg
    }

    misc$Musual <- Musual
    misc$expected <- TRUE
    misc$multipleResponses <- TRUE
  }), list( .link = link, .earg = earg ))),

  linkfun = eval(substitute(function(mu, extra = NULL) {
    theta2eta(mu, .link , earg = .earg )
  }, list( .link = link, .earg = earg ))),

  loglikelihood = eval(substitute(
    function(mu, y, w, residuals = FALSE, eta, extra = NULL) {
    mydf <- eta2theta(eta, .link , earg = .earg )
    if (residuals) stop("loglikelihood residuals not ",
                        "implemented yet") else
      sum(c(w) * dchisq(x = y, df = mydf, ncp = 0, log = TRUE))
  }, list( .link = link, .earg = earg ))),

  vfamily = "chisq",

  deriv = eval(substitute(expression({
    mydf <- eta2theta(eta, .link , earg = .earg )
    dl.dv <- (log(y / 2) - digamma(mydf / 2)) / 2
    dv.deta <- dtheta.deta(mydf, .link , earg = .earg )
    c(w) * dl.dv * dv.deta
  }), list( .link = link, .earg = earg ))),

  weight = eval(substitute(expression({
    ned2l.dv2 <- trigamma(mydf / 2) / 4
    wz <- ned2l.dv2 * dv.deta^2
    c(w) * wz
  }), list( .link = link, .earg = earg ))))
}


dsimplex <- function(x, mu = 0.5, dispersion = 1, log = FALSE) {
  if (!is.logical(log.arg <- log) || length(log) != 1)
    stop("bad input for argument 'log'")
  rm(log)

  sigma <- dispersion

  deeFun <- function(y, mu)
    (((y - mu) / (mu * (1 - mu)))^2) / (y * (1 - y))
  logpdf <- (-0.5 * log(2 * pi) - log(sigma) - 1.5 * log(x) -
             1.5 * log1p(-x) - 0.5 * deeFun(x, mu) / sigma^2)
  logpdf[x <= 0.0] <- -Inf  # log(0.0)
  logpdf[x >= 1.0] <- -Inf  # log(0.0)
  logpdf[mu <= 0.0] <- NaN
  logpdf[mu >= 1.0] <- NaN
  logpdf[sigma <= 0.0] <- NaN
  if (log.arg) logpdf else exp(logpdf)
}
Kay1[ii]+2*mu[ii]^2, -3*Kay1[ii]+1-2*mu[ii], 2*Kay1[ii])) myroots <- myroots[abs(Im(myroots)) < 0.00001] myroots <- Re(myroots) myroots <- myroots[myroots >= 0.0] myroots <- myroots[myroots <= 1.0] pdfmax[ii] <- max(dsimplex(myroots, mu[ii], dispersion[ii])) } } index <- 1:use.n nleft <- length(index) while (nleft > 0) { xx <- runif(nleft) # , 0, 1 yy <- runif(nleft, max = pdfmax[index]) newindex <- (1:nleft)[yy < dsimplex(xx, mu[index], dispersion[index])] if (length(newindex)) { answer[index[newindex]] <- xx[newindex] index <- setdiff(index, index[newindex]) nleft <- nleft - length(newindex) } } answer } simplex <- function(lmu = "logit", lsigma = "loge", imu = NULL, isigma = NULL, imethod = 1, shrinkage.init = 0.95, zero = 2) { lmu <- as.list(substitute(lmu)) emu <- link2list(lmu) lmu <- attr(emu, "function.name") lsigma <- as.list(substitute(lsigma)) esigma <- link2list(lsigma) lsigma <- attr(esigma, "function.name") if (!is.Numeric(imethod, allowable.length = 1, integer.valued = TRUE, positive = TRUE) || imethod > 3) stop("argument 'imethod' must be 1 or 2 or 3") if (!is.Numeric(shrinkage.init, allowable.length = 1) || shrinkage.init < 0 || shrinkage.init > 1) stop("bad input for argument 'shrinkage.init'") if (length(zero) && !is.Numeric(zero, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'zero'") new("vglmff", blurb = c("Univariate simplex distribution\n\n", "f(y) = [2*pi*sigma^2*(y*(1-y))^3]^(-0.5) * \n", " exp[-0.5*(y-mu)^2 / (sigma^2 * y * ", "(1-y) * mu^2 * (1-mu)^2)],\n", " 0 < y < 1, 0 < mu < 1, sigma > 0\n\n", "Links: ", namesof("mu", lmu, earg = emu), ", ", namesof("sigma", lsigma, earg = esigma), "\n\n", "Mean: mu\n", "Variance function: V(mu) = mu^3 * (1 - mu)^3"), constraints = eval(substitute(expression({ constraints <- cm.zero.vgam(constraints, x, .zero , M) }), list( .zero = zero ))), initialize = eval(substitute(expression({ if (any(y <= 0.0 | y >= 1.0)) stop("all 'y' values must be in (0,1)") w.y.check(w = w, y = y, Is.positive.y = TRUE) predictors.names <- c( namesof("mu", .lmu , earg = .emu , tag = FALSE), namesof("sigma", .lsigma, earg = .esigma, tag = FALSE)) deeFun <- function(y, mu) (((y - mu) / (mu * (1 - mu)))^2) / (y * (1 - y)) if (!length(etastart)) { use.this = if ( .imethod == 3) weighted.mean(y, w = w) else if ( .imethod == 1) median(y) else mean(y, trim = 0.1) init.mu <- (1 - .sinit) * y + .sinit * use.this mu.init <- rep(if (length( .imu )) .imu else init.mu, length = n) sigma.init <- if (length( .isigma )) rep( .isigma, leng = n) else { use.this <- deeFun(y, mu=init.mu) rep(sqrt( if ( .imethod == 3) weighted.mean(use.this, w) else if ( .imethod == 1) median(use.this) else mean(use.this, trim = 0.1)), length = n) } etastart <- cbind(theta2eta(mu.init, .lmu , earg = .emu), theta2eta(sigma.init, .lsigma, earg = .esigma)) } }), list( .lmu = lmu, .lsigma = lsigma, .emu = emu, .esigma = esigma, .imu = imu, .isigma = isigma, .sinit = shrinkage.init, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { eta2theta(eta[, 1], .lmu , earg = .emu) }, list( .lmu = lmu, .emu = emu ))), last = eval(substitute(expression({ misc$link <- c(mu = .lmu , sigma = .lsigma) misc$earg <- list(mu = .emu , sigma = .esigma) misc$imu <- .imu misc$isigma <- .isigma misc$imethod <- .imethod misc$shrinkage.init <- .sinit }), list( .lmu = lmu, .lsigma = lsigma, .imu = imu, .isigma = isigma, .emu = emu, .esigma = esigma, .sinit = shrinkage.init, .imethod = imethod ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = 
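# Sanity-check sketch: the rejection sampler rsimplex() above should agree
# with the density dsimplex() (illustrative only):
#   set.seed(3)
#   yy <- rsimplex(2000, mu = 0.4, dispersion = 2)
#   hist(yy, prob = TRUE, breaks = 30)
#   curve(dsimplex(x, mu = 0.4, dispersion = 2), add = TRUE, col = "blue")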
FALSE, eta, extra = NULL) { sigma <- eta2theta(eta[, 2], .lsigma, earg = .esigma) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else { sum(c(w) * dsimplex(x = y, mu = mu, dispersion = sigma, log = TRUE)) } }, list( .lsigma = lsigma, .emu = emu, .esigma = esigma ))), vfamily = c("simplex"), deriv = eval(substitute(expression({ deeFun <- function(y, mu) (((y - mu) / (mu * (1 - mu)))^2) / (y * (1 - y)) sigma <- eta2theta(eta[, 2], .lsigma, earg = .esigma) dmu.deta <- dtheta.deta(mu, .lmu , earg = .emu) dsigma.deta <- dtheta.deta(sigma, .lsigma, earg = .esigma) dl.dmu <- (y - mu) * (deeFun(y, mu) + 1 / (mu * (1 - mu))^2) / (mu * (1 - mu) * sigma^2) dl.dsigma <- (deeFun(y, mu) / sigma^2 - 1) / sigma cbind(dl.dmu * dmu.deta, dl.dsigma * dsigma.deta) }), list( .lmu = lmu, .lsigma = lsigma, .emu = emu, .esigma = esigma ))), weight = eval(substitute(expression({ wz <- matrix(0.0, n, M) # Diagonal!! eim11 <- 3 / (mu * (1 - mu)) + 1 / (sigma^2 * (mu * (1 - mu))^3) wz[, iam(1, 1, M)] <- eim11 * dmu.deta^2 wz[, iam(2, 2, M)] <- (2 / sigma^2) * dsigma.deta^2 c(w) * wz }), list( .lmu = lmu, .lsigma = lsigma, .emu = emu, .esigma = esigma )))) } rig <- function(lmu = "identity", llambda = "loge", imu = NULL, ilambda = 1) { if (!is.Numeric(ilambda, positive = TRUE)) stop("bad input for 'ilambda'") lmu <- as.list(substitute(lmu)) emu <- link2list(lmu) lmu <- attr(emu, "function.name") llambda <- as.list(substitute(llambda)) elambda <- link2list(llambda) llambda <- attr(elambda, "function.name") new("vglmff", blurb = c("Reciprocal inverse Gaussian distribution \n", "f(y) = [lambda/(2*pi*y)]^(0.5) * \n", " exp[-0.5*(lambda/y) * (y-mu)^2], ", " 0 < y,\n", "Links: ", namesof("mu", lmu, earg = emu), ", ", namesof("lambda", llambda, earg = elambda), "\n\n", "Mean: mu"), initialize = eval(substitute(expression({ w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = 1, ncol.y.max = 1) predictors.names <- c(namesof("mu", .lmu , earg = .emu , tag = FALSE), namesof("lambda", .llambda , earg = .elambda , tag = FALSE)) if (!length(etastart)) { mu.init <- rep(if (length( .imu )) .imu else median(y), length = n) lambda.init <- rep(if (length( .ilambda )) .ilambda else sqrt(var(y)), length = n) etastart <- cbind(theta2eta(mu.init, .lmu , earg = .emu), theta2eta(lambda.init, .llambda , earg = .elambda )) } }), list( .lmu = lmu, .llambda = llambda, .emu = emu, .elambda = elambda, .imu = imu, .ilambda = ilambda ))), linkinv = eval(substitute(function(eta, extra = NULL) { eta2theta(eta[, 1], .lmu , earg = .emu) }, list( .lmu = lmu, .emu = emu, .elambda = elambda ))), last = eval(substitute(expression({ misc$d3 <- d3 # because save.weights = FALSE misc$link <- c(mu = .lmu , lambda = .llambda ) misc$earg <- list(mu = .emu , lambda = .elambda ) misc$pooled.weight <- pooled.weight }), list( .lmu = lmu, .llambda = llambda, .emu = emu, .elambda = elambda ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { lambda <- eta2theta(eta[, 2], .llambda , earg = .elambda ) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else sum(c(w) * (-0.5*log(y) + 0.5*log(lambda) - (0.5*lambda/y) * (y-mu)^2)) }, list( .llambda = llambda, .emu = emu, .elambda = elambda ))), vfamily = c("rig"), deriv = eval(substitute(expression({ if (iter == 1) { d3 <- deriv3(~ w * (-0.5*log(y) + 0.5*log(lambda) - (0.5*lambda/y) * (y-mu)^2), c("mu", "lambda"), hessian= TRUE) } lambda <- eta2theta(eta[, 2], .llambda , earg = .elambda ) eval.d3 <- eval(d3) dl.dthetas <- 
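# Usage sketch for simplex() above (illustrative; 'sdata' is hypothetical);
# the default zero = 2 models sigma as intercept-only:
#   set.seed(4)
#   sdata <- data.frame(y = rsimplex(500, mu = 0.4, dispersion = 1))
#   fit <- vglm(y ~ 1, simplex, data = sdata, trace = TRUE)
#   coef(fit, matrix = TRUE)  # columns: logit(mu), loge(sigma)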
attr(eval.d3, "gradient") dmu.deta <- dtheta.deta(mu, .lmu , earg = .emu) dlambda.deta <- dtheta.deta(lambda, .llambda , earg = .elambda ) dtheta.detas <- cbind(dmu.deta, dlambda.deta) dl.dthetas * dtheta.detas }), list( .lmu = lmu, .llambda = llambda, .emu = emu, .elambda = elambda ))), weight = eval(substitute(expression({ d2l.dthetas2 = attr(eval.d3, "hessian") wz <- matrix(as.numeric(NA), n, dimm(M)) #3=dimm(M) wz[, iam(1, 1, M)] <- -d2l.dthetas2[, 1, 1] * dtheta.detas[, 1]^2 wz[, iam(2, 2, M)] <- -d2l.dthetas2[, 2, 2] * dtheta.detas[, 2]^2 wz[, iam(1, 2, M)] <- -d2l.dthetas2[, 1, 2] * dtheta.detas[, 1] * dtheta.detas[, 2] if (!.expected) { d2mudeta2 <- d2theta.deta2(mu, .lmu , earg = .emu) d2lambda <- d2theta.deta2(lambda, .llambda , earg = .elambda ) wz[, iam(1, 1, M)] <- wz[, iam(1, 1, M)] - dl.dthetas[, 1] * d2mudeta2 wz[, iam(2, 2, M)] <- wz[, iam(2, 2, M)] - dl.dthetas[, 2] * d2lambda } if (intercept.only) { sumw <- sum(w) for(ii in 1:ncol(wz)) wz[, ii] <- sum(wz[, ii]) / sumw pooled.weight <- TRUE wz <- c(w) * wz # Put back the weights } else pooled.weight <- FALSE wz }), list( .lmu = lmu, .llambda = llambda, .expected = FALSE, .emu = emu, .elambda = elambda )))) } hypersecant <- function(link.theta = elogit(min = -pi/2, max = pi/2), init.theta = NULL) { link.theta <- as.list(substitute(link.theta)) earg <- link2list(link.theta) link.theta <- attr(earg, "function.name") new("vglmff", blurb = c("Hyperbolic Secant distribution \n", "f(y) = exp(theta*y + log(cos(theta ))) / (2*cosh(pi*y/2))\n", " for all y,\n", "Link: ", namesof("theta", link.theta , earg = earg), "\n\n", "Mean: tan(theta)"), initialize = eval(substitute(expression({ w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1) predictors.names <- namesof("theta", .link.theta , earg = .earg , tag = FALSE) if (!length(etastart)) { theta.init <- rep(if (length( .init.theta )) .init.theta else median(y), length = n) etastart <- theta2eta(theta.init, .link.theta , earg = .earg ) } }), list( .link.theta = link.theta , .earg = earg, .init.theta = init.theta ))), linkinv = eval(substitute(function(eta, extra = NULL) { theta <- eta2theta(eta, .link.theta , earg = .earg ) tan(theta) }, list( .link.theta = link.theta , .earg = earg ))), last = eval(substitute(expression({ misc$link <- c(theta = .link.theta ) misc$earg <- list(theta = .earg ) misc$expected <- TRUE }), list( .link.theta = link.theta , .earg = earg ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { theta <- eta2theta(eta, .link.theta , earg = .earg ) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else sum(c(w) * (theta*y + log(cos(theta)) - log(cosh(pi*y/2 )))) }, list( .link.theta = link.theta , .earg = earg ))), vfamily = c("hypersecant"), deriv = eval(substitute(expression({ theta <- eta2theta(eta, .link.theta , earg = .earg ) dl.dthetas <- y - tan(theta) dparam.deta <- dtheta.deta(theta, .link.theta , earg = .earg ) c(w) * dl.dthetas * dparam.deta }), list( .link.theta = link.theta , .earg = earg ))), weight = expression({ d2l.dthetas2 <- 1 / cos(theta)^2 wz <- c(w) * d2l.dthetas2 * dparam.deta^2 wz })) } hypersecant.1 <- function(link.theta = elogit(min = -pi/2, max = pi/2), init.theta = NULL) { link.theta <- as.list(substitute(link.theta)) earg <- link2list(link.theta) link.theta <- attr(earg, "function.name") new("vglmff", blurb = c("Hyperbolic Secant distribution \n", "f(y) = (cos(theta)/pi) * y^(-0.5+theta/pi) * \n", " (1-y)^(-0.5-theta/pi), ", " 0 < y < 1,\n", "Link: ", namesof("theta", 
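# Usage sketch for the rig() and hypersecant() families above (illustrative;
# 'hdata' is hypothetical; any real-valued response will do for a demo):
#   set.seed(5)
#   hdata <- data.frame(y = rnorm(200))
#   fit <- vglm(y ~ 1, hypersecant, data = hdata, trace = TRUE)
#   Coef(fit)  # theta; the fitted mean is tan(theta)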
link.theta , earg = earg), "\n\n", "Mean: 0.5 + theta/pi", "\n", "Variance: (pi^2 - 4*theta^2) / (8*pi^2)"), initialize = eval(substitute(expression({ if (any(y <= 0 | y >= 1)) stop("all response 'y' values must be in (0,1)") w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = 1, ncol.y.max = 1) predictors.names <- namesof("theta", .link.theta , earg = .earg , tag = FALSE) if (!length(etastart)) { theta.init <- rep(if (length( .init.theta )) .init.theta else median(y), length = n) etastart <- theta2eta(theta.init, .link.theta , earg = .earg ) } }), list( .link.theta = link.theta , .earg = earg, .init.theta = init.theta ))), linkinv = eval(substitute(function(eta, extra = NULL) { theta <- eta2theta(eta, .link.theta , earg = .earg ) 0.5 + theta / pi }, list( .link.theta = link.theta , .earg = earg ))), last = eval(substitute(expression({ misc$link <- c(theta = .link.theta ) misc$earg <- list(theta = .earg ) misc$expected <- TRUE }), list( .link.theta = link.theta , .earg = earg ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { theta <- eta2theta(eta, .link.theta , earg = .earg ) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else sum(c(w) * (log(cos(theta)) + (-0.5+theta/pi)*log(y) + (-0.5-theta/pi)*log1p(-y ))) }, list( .link.theta = link.theta , .earg = earg ))), vfamily = c("hypersecant.1"), deriv = eval(substitute(expression({ theta <- eta2theta(eta, .link.theta , earg = .earg ) dl.dthetas <- -tan(theta) + log(y/(1-y)) / pi dparam.deta <- dtheta.deta(theta, .link.theta , earg = .earg ) c(w) * dl.dthetas * dparam.deta }), list( .link.theta = link.theta , .earg = earg ))), weight = expression({ d2l.dthetas2 <- 1 / cos(theta)^2 wz <- c(w) * d2l.dthetas2 * dparam.deta^2 wz })) } leipnik <- function(lmu = "logit", llambda = "loge", imu = NULL, ilambda = NULL) { lmu <- as.list(substitute(lmu)) emu <- link2list(lmu) lmu <- attr(emu, "function.name") llambda <- as.list(substitute(llambda)) elambda <- link2list(llambda) llambda <- attr(elambda, "function.name") if (is.Numeric(ilambda) && any(ilambda <= -1)) stop("argument 'ilambda' must be > -1") new("vglmff", blurb = c("Leipnik's distribution \n", "f(y) = (y(1-y))^(-1/2) * [1 + (y-mu)^2 / (y*(1-y))]^(-lambda/2) /\n", " Beta[(lambda+1)/2, 1/2], ", " 0 < y < 1, lambda > -1\n", "Links: ", namesof("mu", lmu, earg = emu), ", ", namesof("lambda", llambda, earg = elambda), "\n\n", "Mean: mu\n", "Variance: mu*(1-mu)"), initialize = eval(substitute(expression({ if (any(y <= 0 | y >= 1)) stop("all response 'y' values must be in (0,1)") w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = 1, ncol.y.max = 1) predictors.names <- c(namesof("mu", .lmu , earg = .emu , tag = FALSE), namesof("lambda", .llambda , earg = .elambda , tag = FALSE)) if (!length(etastart)) { mu.init <- rep(if (length( .imu )) .imu else (y), length = n) lambda.init <- rep(if (length( .ilambda )) .ilambda else 1/var(y), length = n) etastart <- cbind(theta2eta(mu.init, .lmu , earg = .emu ), theta2eta(lambda.init, .llambda , earg = .elambda )) } }), list( .lmu = lmu, .llambda = llambda, .emu = emu, .elambda = elambda, .imu = imu, .ilambda = ilambda ))), linkinv = eval(substitute(function(eta, extra = NULL) { eta2theta(eta[, 1], .lmu , earg = .emu) }, list( .lmu = lmu, .emu = emu, .elambda = elambda ))), last = eval(substitute(expression({ misc$link <- c(mu = .lmu , lambda = .llambda ) misc$earg <- list(mu = .emu , lambda = .elambda ) misc$pooled.weight <- pooled.weight misc$expected <- FALSE }), list( .lmu = 
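# Usage sketch for hypersecant.1() above, the (0,1)-supported variant
# (illustrative; 'bdata' is a hypothetical proxy data set):
#   set.seed(6)
#   bdata <- data.frame(y = rbeta(300, 2, 2))
#   fit <- vglm(y ~ 1, hypersecant.1, data = bdata, trace = TRUE)
#   head(fitted(fit))  # estimates of 0.5 + theta/pi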
lmu, .llambda = llambda, .emu = emu, .elambda = elambda ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { lambda <- eta2theta(eta[, 2], .llambda , earg = .elambda ) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else sum(c(w) * (-0.5*log(y*(1-y)) - 0.5 * lambda * log1p((y-mu)^2 / (y*(1-y ))) - lgamma((lambda+1)/2) + lgamma(1+ lambda/2 ))) }, list( .llambda = llambda, .emu = emu, .elambda = elambda ))), vfamily = c("leipnik"), deriv = eval(substitute(expression({ lambda <- eta2theta(eta[, 2], .llambda , earg = .elambda ) dl.dthetas = cbind(dl.dmu = lambda*(y-mu) / (y*(1-y)+(y-mu)^2), dl.dlambda= -0.5 * log1p((y-mu)^2 / (y*(1-y))) - 0.5*digamma((lambda+1)/2) + 0.5*digamma(1+lambda/2)) dmu.deta <- dtheta.deta(mu, .lmu , earg = .emu) dlambda.deta <- dtheta.deta(lambda, .llambda , earg = .elambda ) dtheta.detas <- cbind(dmu.deta, dlambda.deta) c(w) * dl.dthetas * dtheta.detas }), list( .lmu = lmu, .llambda = llambda, .emu = emu, .elambda = elambda ))), weight = eval(substitute(expression({ denominator <- y*(1-y) + (y-mu)^2 d2l.dthetas2 <- array(NA, c(n, 2, 2)) d2l.dthetas2[, 1, 1] <- c(w) * lambda*(-y*(1-y)+(y-mu)^2)/denominator^2 d2l.dthetas2[, 1, 2] <- d2l.dthetas2[, 2, 1] <- c(w) * (y-mu) / denominator d2l.dthetas2[, 2, 2] <- c(w) * (-0.25*trigamma((lambda+1)/2) + 0.25*trigamma(1+lambda/2)) wz <- matrix(as.numeric(NA), n, dimm(M)) #3=dimm(M) wz[, iam(1, 1, M)] <- -d2l.dthetas2[, 1, 1] * dtheta.detas[, 1]^2 wz[, iam(2, 2, M)] <- -d2l.dthetas2[, 2, 2] * dtheta.detas[, 2]^2 wz[, iam(1, 2, M)] <- -d2l.dthetas2[, 1, 2] * dtheta.detas[, 1] * dtheta.detas[, 2] if (!.expected) { d2mudeta2 <- d2theta.deta2(mu, .lmu , earg = .emu) d2lambda <- d2theta.deta2(lambda, .llambda , earg = .elambda ) wz[, iam(1, 1, M)] <- wz[, iam(1, 1, M)] - dl.dthetas[, 1] * d2mudeta2 wz[, iam(2, 2, M)] <- wz[, iam(2, 2, M)] - dl.dthetas[, 2] * d2lambda } if (intercept.only) { sumw <- sum(w) for(ii in 1:ncol(wz)) wz[, ii] <- sum(wz[, ii]) / sumw pooled.weight <- TRUE wz <- c(w) * wz # Put back the weights } else pooled.weight <- FALSE wz }), list( .lmu = lmu, .llambda = llambda, .expected = FALSE, .emu = emu, .elambda = elambda )))) } invbinomial <- function(lrho = elogit(min = 0.5, max = 1), llambda = "loge", irho = NULL, ilambda = NULL, zero = NULL) { if (length(zero) && !is.Numeric(zero, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'zero'") lrho <- as.list(substitute(lrho)) erho <- link2list(lrho) lrho <- attr(erho, "function.name") llambda <- as.list(substitute(llambda)) elambda <- link2list(llambda) llambda <- attr(elambda, "function.name") new("vglmff", blurb = c("Inverse binomial distribution\n\n", "Links: ", namesof("rho", lrho, earg = erho), ", ", namesof("lambda", llambda, earg = elambda), "\n", "Mean: lambda*(1-rho)/(2*rho-1)\n", "Variance: lambda*rho*(1-rho)/(2*rho-1)^3\n"), constraints = eval(substitute(expression({ constraints <- cm.zero.vgam(constraints, x, .zero , M) }), list( .zero = zero ))), initialize = eval(substitute(expression({ w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1) predictors.names <- c(namesof("rho", .lrho, earg = .erho, tag = FALSE), namesof("lambda", .llambda , earg = .elambda , tag = FALSE)) if (!length(etastart)) { covarn <- sd(c(y))^2 / weighted.mean(y, w) temp1 <- 0.5 + (1 + sqrt(1+8*covarn)) / (8*covarn) temp2 <- 0.5 + (1 - sqrt(1+8*covarn)) / (8*covarn) init.rho <- rep(if (length( .irho)) .irho else { ifelse(temp1 > 0.5 && temp1 < 1, temp1, temp2) }, length = n) init.lambda <- 
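# Usage sketch for leipnik() above (illustrative; 'ldat0' is a hypothetical
# proxy data set on (0,1); note that 'ilambda' must exceed -1):
#   set.seed(7)
#   ldat0 <- data.frame(y = rbeta(200, 3, 3))
#   fit <- vglm(y ~ 1, leipnik(ilambda = 1), data = ldat0, trace = TRUE)
#   coef(fit, matrix = TRUE)  # columns: logit(mu), loge(lambda)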
rep(if (length( .ilambda)) .ilambda else { (2*init.rho-1) * weighted.mean(y, w) / (1-init.rho) }, length = n) etastart <- cbind(theta2eta(init.rho, .lrho, earg = .erho), theta2eta(init.lambda, .llambda , earg = .elambda )) } }), list( .llambda = llambda, .lrho = lrho, .elambda = elambda, .erho = erho, .ilambda = ilambda, .irho = irho ))), linkinv = eval(substitute(function(eta, extra = NULL) { rho <- eta2theta(eta[, 1], .lrho, earg = .erho) lambda <- eta2theta(eta[, 2], .llambda , earg = .elambda ) ifelse(rho > 0.5, lambda*(1-rho)/(2*rho-1), NA) }, list( .llambda = llambda, .lrho = lrho, .elambda = elambda, .erho = erho ))), last = eval(substitute(expression({ misc$link <- c(rho= .lrho, lambda = .llambda ) misc$earg <- list(rho= .erho, lambda = .elambda ) misc$pooled.weight <- pooled.weight }), list( .llambda = llambda, .lrho = lrho, .elambda = elambda, .erho = erho ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { rho <- eta2theta(eta[, 1], .lrho, earg = .erho) lambda <- eta2theta(eta[, 2], .llambda , earg = .elambda ) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else sum(c(w) * (log(lambda) - lgamma(2*y+lambda) - lgamma(y+1) - lgamma(y+lambda+1) + y*log(rho) + y*log1p(-rho) + lambda*log(rho))) }, list( .llambda = llambda, .lrho = lrho, .elambda = elambda, .erho = erho ))), vfamily = c("invbinomial"), deriv = eval(substitute(expression({ rho <- eta2theta(eta[, 1], .lrho, earg = .erho) lambda <- eta2theta(eta[, 2], .llambda , earg = .elambda ) dl.drho <- (y + lambda)/rho - y/(1-rho) dl.dlambda <- 1/lambda - digamma(2*y+lambda) - digamma(y+lambda+1) + log(rho) drho.deta <- dtheta.deta(rho, .lrho, earg = .erho) dlambda.deta <- dtheta.deta(lambda, .llambda , earg = .elambda ) c(w) * cbind(dl.drho * drho.deta, dl.dlambda * dlambda.deta ) }), list( .llambda = llambda, .lrho = lrho, .elambda = elambda, .erho = erho ))), weight = eval(substitute(expression({ ned2l.drho2 <- (mu+lambda) / rho^2 + mu / (1-rho)^2 d2l.dlambda2 <- 1/(lambda^2) + trigamma(2*y+lambda)+trigamma(y+lambda+1) ned2l.dlambdarho <- -1/rho wz <- matrix(as.numeric(NA), n, dimm(M)) #3=dimm(M) wz[, iam(1, 1, M)] <- ned2l.drho2 * drho.deta^2 wz[, iam(1, 2, M)] <- ned2l.dlambdarho * dlambda.deta * drho.deta wz[, iam(2, 2, M)] <- d2l.dlambda2 * dlambda.deta^2 d2rhodeta2 <- d2theta.deta2(rho, .lrho, earg = .erho) d2lambda.deta2 <- d2theta.deta2(lambda, .llambda , earg = .elambda ) wz <- c(w) * wz if (intercept.only) { pooled.weight <- TRUE wz[, iam(2, 2, M)] <- sum(wz[, iam(2, 2, M)]) / sum(w) } else { pooled.weight <- FALSE } wz }), list( .llambda = llambda, .lrho = lrho, .elambda = elambda, .erho = erho )))) } genpoisson <- function(llambda = elogit(min = -1, max = 1), ltheta = "loge", ilambda = NULL, itheta = NULL, use.approx = TRUE, imethod = 1, zero = 1) { llambda <- as.list(substitute(llambda)) elambda <- link2list(llambda) llambda <- attr(elambda, "function.name") ltheta <- as.list(substitute(ltheta)) etheta <- link2list(ltheta) ltheta <- attr(etheta, "function.name") if (length(zero) && !is.Numeric(zero, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'zero'") if (!is.Numeric(imethod, allowable.length = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("argument 'imethod' must be 1 or 2") if (!is.logical(use.approx) || length(use.approx) != 1) stop("'use.approx' must be logical value") new("vglmff", blurb = c("Generalized Poisson distribution\n\n", "Links: ", namesof("lambda", llambda, earg = elambda), ", ", 
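# Usage sketch for invbinomial() above (illustrative; 'idata' is a
# hypothetical count proxy, and convergence can be delicate here):
#   set.seed(8)
#   idata <- data.frame(y = rnbinom(300, size = 3, prob = 0.6))
#   fit <- vglm(y ~ 1, invbinomial, data = idata, trace = TRUE)
#   Coef(fit)  # rho in (0.5, 1) and lambda > 0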
namesof("theta", ltheta, earg = etheta), "\n", "Mean: theta / (1-lambda)\n", "Variance: theta / (1-lambda)^3"), constraints = eval(substitute(expression({ constraints <- cm.zero.vgam(constraints, x, .zero , M) }), list( .zero = zero ))), initialize = eval(substitute(expression({ w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1) predictors.names <- c(namesof("lambda", .llambda , earg = .elambda , tag = FALSE), namesof("theta", .ltheta , earg = .etheta, tag = FALSE)) init.lambda <- if ( .imethod == 1) 1 - sqrt(weighted.mean(y, w) / var(y)) else 0.5 init.theta <- if ( .imethod == 1) sqrt((0.01 + weighted.mean(y, w)^3) / var(y)) else median(y) * (1-init.lambda) if (init.theta <= 0) init.theta <- 0.1 cutpt <- if (init.lambda < 0) { mmm <- max(trunc(-init.theta / init.lambda), 4) max(-1, -init.theta /mmm) } else -1 if (init.lambda <= cutpt) init.lambda <- cutpt + 0.1 if (init.lambda >= 1) init.lambda <- 0.9 if (!length(etastart)) { lambda <- rep(if (length( .ilambda)) .ilambda else init.lambda, length = n) theta <- rep(if (length( .itheta)) .itheta else init.theta , length = n) etastart <- cbind(theta2eta(lambda, .llambda , earg = .elambda ), theta2eta(theta, .ltheta , earg = .etheta )) } }), list( .ltheta = ltheta, .llambda = llambda, .etheta = etheta, .elambda = elambda, .imethod = imethod, .itheta = itheta, .ilambda = ilambda )) ), linkinv = eval(substitute(function(eta, extra = NULL) { lambda <- eta2theta(eta[, 1], .llambda , earg = .elambda ) theta <- eta2theta(eta[, 2], .ltheta , earg = .etheta ) theta / (1 - lambda) }, list( .ltheta = ltheta, .llambda = llambda, .etheta = etheta, .elambda = elambda ))), last = eval(substitute(expression({ misc$link <- c(lambda = .llambda , theta = .ltheta ) misc$earg <- list(lambda = .elambda , theta = .etheta ) if (! 
.use.approx ) misc$pooled.weight <- pooled.weight }), list( .ltheta = ltheta, .llambda = llambda, .use.approx = use.approx, .etheta = etheta, .elambda = elambda ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { lambda <- eta2theta(eta[, 1], .llambda , earg = .elambda ) theta <- eta2theta(eta[, 2], .ltheta , earg = .etheta ) index <- (y == 0) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else sum(w[index] * (-theta[index])) + sum(w[!index] * (-y[!index]*lambda[!index]-theta[!index] + (y[!index]-1)*log(theta[!index]+y[!index]*lambda[!index]) + log(theta[!index]) - lgamma(y[!index]+1)) ) }, list( .ltheta = ltheta, .llambda = llambda, .etheta = etheta, .elambda = elambda ))), vfamily = c("genpoisson"), deriv = eval(substitute(expression({ lambda <- eta2theta(eta[, 1], .llambda , earg = .elambda ) theta <- eta2theta(eta[, 2], .ltheta , earg = .etheta ) dl.dlambda <- -y + y*(y-1)/(theta+y*lambda) dl.dtheta <- -1 + (y-1)/(theta+y*lambda) + 1/theta dTHETA.deta <- dtheta.deta(theta, .ltheta , earg = .etheta ) dlambda.deta <- dtheta.deta(lambda, .llambda , earg = .elambda ) c(w) * cbind(dl.dlambda * dlambda.deta, dl.dtheta * dTHETA.deta ) }), list( .ltheta = ltheta, .llambda = llambda, .etheta = etheta, .elambda = elambda ))), weight = eval(substitute(expression({ wz <- matrix(as.numeric(NA), n, dimm(M)) #3=dimm(M) if ( .use.approx ) { BBB <- (theta+2)*(theta+2*lambda-theta*lambda)-(theta^2)*(1-lambda) d2l.dlambda2 <- 2 * theta * (theta+2) / ((1-lambda) * BBB) d2l.dtheta2 <- 2 * (1 + lambda * (2/theta - 1)) / BBB d2l.dthetalambda <- 2 * theta / BBB wz[, iam(1, 1, M)] <- d2l.dlambda2 * dlambda.deta^2 wz[, iam(2, 2, M)] <- d2l.dtheta2 * dTHETA.deta^2 wz[, iam(1, 2, M)] <- d2l.dthetalambda * dTHETA.deta * dlambda.deta wz <- c(w) * wz } else { d2l.dlambda2 <- -y^2 * (y-1) / (theta+y*lambda)^2 d2l.dtheta2 <- -(y-1)/(theta+y*lambda)^2 - 1 / theta^2 d2l.dthetalambda <- -y * (y-1) / (theta+y*lambda)^2 wz[, iam(1, 1, M)] <- -d2l.dlambda2 * dlambda.deta^2 wz[, iam(2, 2, M)] <- -d2l.dtheta2 * dTHETA.deta^2 wz[, iam(1, 2, M)] <- -d2l.dthetalambda * dTHETA.deta * dlambda.deta d2THETA.deta2 <- d2theta.deta2(theta, .ltheta , earg = .etheta ) d2lambdadeta2 <- d2theta.deta2(lambda, .llambda , earg = .elambda ) wz[, iam(1, 1, M)] <- wz[, iam(1, 1, M)] - dl.dlambda * d2lambdadeta2 wz[, iam(2, 2, M)] <- wz[, iam(2, 2, M)] - dl.dtheta * d2THETA.deta2 wz <- c(w) * wz if (intercept.only) { sumw <- sum(w) for(ii in 1:ncol(wz)) wz[, ii] <- sum(wz[, ii]) / sumw pooled.weight <- TRUE wz <- c(w) * wz # Put back the weights } else pooled.weight <- FALSE } wz }), list( .ltheta = ltheta, .llambda = llambda, .use.approx = use.approx, .etheta = etheta, .elambda = elambda )))) } dlgamma <- function(x, location = 0, scale = 1, k = 1, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) if (!is.Numeric(scale, positive = TRUE)) stop("bad input for argument 'scale'") if (!is.Numeric(k, positive = TRUE)) stop("bad input for argument 'k'") z <- (x-location) / scale if (log.arg) { k * z - exp(z) - log(scale) - lgamma(k) } else { exp(k * z - exp(z)) / (scale * gamma(k)) } } plgamma <- function(q, location = 0, scale = 1, k = 1) { zedd <- (q - location) / scale ans <- pgamma(exp(zedd), k) ans[scale < 0] <- NaN ans } qlgamma <- function(p, location = 0, scale = 1, k = 1) { if (!is.Numeric(scale, positive = TRUE)) stop("bad input for argument 'scale'") ans <- location + scale * log(qgamma(p, k)) ans[scale < 0] <- 
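# Usage sketch for genpoisson() above (illustrative; 'gdata' is a
# hypothetical equidispersed proxy; lambda = 0 recovers the Poisson):
#   set.seed(9)
#   gdata <- data.frame(y = rpois(300, lambda = 4))
#   fit <- vglm(y ~ 1, genpoisson, data = gdata, trace = TRUE)
#   coef(fit, matrix = TRUE)  # columns: elogit(lambda), loge(theta)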
NaN
  ans
}


rlgamma <- function(n, location = 0, scale = 1, k = 1) {
  ans <- location + scale * log(rgamma(n, k))
  ans[scale < 0] <- NaN
  ans
}


lgammaff <- function(link = "loge", init.k = NULL) {
  link <- as.list(substitute(link))
  earg <- link2list(link)
  link <- attr(earg, "function.name")

  new("vglmff",
  blurb = c("Log-gamma distribution ",
            "f(y) = exp(ky - e^y)/gamma(k), k>0\n\n",
            "Link: ",
            namesof("k", link, earg = earg), "\n", "\n",
            "Mean: digamma(k)", "\n"),
  initialize = eval(substitute(expression({
    w.y.check(w = w, y = y,
              ncol.w.max = 1, ncol.y.max = 1)

    predictors.names <- namesof("k", .link , earg = .earg , tag = FALSE)

    if (!length(etastart)) {
      k.init <- if (length( .init.k ))
        rep( .init.k , length.out = length(y)) else {
        medy <- median(y)
        if (medy < 2) 5 else if (medy < 4) 20 else exp(0.7 * medy)
      }
      etastart <- theta2eta(k.init, .link , earg = .earg )
    }
  }), list( .link = link, .earg = earg, .init.k = init.k ))),
  linkinv = eval(substitute(function(eta, extra = NULL) {
    k <- eta2theta(eta, .link , earg = .earg )
    digamma(k)
  }, list( .link = link, .earg = earg ))),
  last = eval(substitute(expression({
    misc$link <- c(k = .link )
    misc$earg <- list(k = .earg )
    misc$expected <- TRUE
  }), list( .link = link, .earg = earg ))),
  loglikelihood = eval(substitute(
    function(mu, y, w, residuals = FALSE, eta, extra = NULL) {
    kk <- eta2theta(eta, .link , earg = .earg )
    if (residuals) stop("loglikelihood residuals not ",
                        "implemented yet") else {
      sum(c(w) * dlgamma(x = y, location = 0, scale = 1,
                         k = kk, log = TRUE))
    }
  }, list( .link = link, .earg = earg ))),
  vfamily = c("lgammaff"),
  deriv = eval(substitute(expression({
    kk <- eta2theta(eta, .link , earg = .earg )
    dl.dk <- y - digamma(kk)
    dk.deta <- dtheta.deta(kk, .link , earg = .earg )
    c(w) * dl.dk * dk.deta
  }), list( .link = link, .earg = earg ))),
  weight = eval(substitute(expression({
    ned2l.dk2 <- trigamma(kk)
    wz <- c(w) * dk.deta^2 * ned2l.dk2
    wz
  }), list( .link = link, .earg = earg ))))
}


lgamma3ff <- function(llocation = "identity", lscale = "loge",
                      lshape = "loge",
                      ilocation = NULL, iscale = NULL, ishape = 1,
                      zero = NULL) {
  if (length(zero) &&
      !is.Numeric(zero, integer.valued = TRUE, positive = TRUE))
    stop("bad input for argument 'zero'")
  if (length(iscale) && !is.Numeric(iscale, positive = TRUE))
    stop("bad input for argument 'iscale'")

  llocat <- as.list(substitute(llocation))
  elocat <- link2list(llocat)
  llocat <- attr(elocat, "function.name")
  ilocat <- ilocation

  lscale <- as.list(substitute(lscale))
  escale <- link2list(lscale)
  lscale <- attr(escale, "function.name")

  lshape <- as.list(substitute(lshape))
  eshape <- link2list(lshape)
  lshape <- attr(eshape, "function.name")

  new("vglmff",
  blurb = c("Log-gamma distribution",
            " f(y) = exp(k(y-a)/b - e^((y-a)/b))/(b*gamma(k)), ",
            "location=a, scale=b>0, shape=k>0\n\n",
            "Links: ",
            namesof("location", llocat, earg = elocat), ", ",
            namesof("scale", lscale, earg = escale), ", ",
            namesof("shape", lshape, earg = eshape), "\n\n",
            "Mean: a + b*digamma(k)", "\n"),
  constraints = eval(substitute(expression({
    constraints <- cm.zero.vgam(constraints, x, .zero , M)
  }), list( .zero = zero ))),
  initialize = eval(substitute(expression({
    w.y.check(w = w, y = y,
              ncol.w.max = 1, ncol.y.max = 1)

    predictors.names <-
      c(namesof("location", .llocat , earg = .elocat , tag = FALSE),
        namesof("scale", .lscale , earg = .escale , tag = FALSE),
        namesof("shape", .lshape , earg = .eshape , tag = FALSE))

    if (!length(etastart)) {
      k.init <- if (length( .ishape ))
        rep( .ishape , length.out = length(y)) else {
        rep(exp(median(y)), length.out = length(y))
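# Sanity-check and usage sketch for the log-gamma helpers and the
# one-parameter lgammaff() family above (illustrative only):
#   pp <- seq(0.05, 0.95, by = 0.05)
#   max(abs(plgamma(qlgamma(pp, k = 2), k = 2) - pp))  # should be ~0
#   set.seed(10)
#   gdat1 <- data.frame(y = rlgamma(500, k = 3))
#   fit <- vglm(y ~ 1, lgammaff, data = gdat1, trace = TRUE)
#   Coef(fit)  # k; the fitted mean is digamma(k)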
} scale.init <- if (length( .iscale )) rep( .iscale, length.out = length(y)) else { rep(sqrt(var(y) / trigamma(k.init)), length.out = length(y)) } loc.init <- if (length( .ilocat )) rep( .ilocat, length.out = length(y)) else { rep(median(y) - scale.init * digamma(k.init), length.out = length(y)) } etastart <- cbind(theta2eta(loc.init, .llocat , earg = .elocat ), theta2eta(scale.init, .lscale , earg = .escale ), theta2eta(k.init, .lshape , earg = .eshape )) } }), list( .llocat = llocat, .lscale = lscale, .lshape = lshape, .elocat = elocat, .escale = escale, .eshape = eshape, .ilocat = ilocat, .iscale = iscale, .ishape = ishape ))), linkinv = eval(substitute(function(eta, extra = NULL) { eta2theta(eta[, 1], .llocat , earg = .elocat ) + eta2theta(eta[, 2], .lscale , earg = .escale ) * digamma(eta2theta(eta[, 3], .lshape , earg = .eshape )) }, list( .llocat = llocat, .lscale = lscale, .lshape = lshape, .elocat = elocat, .escale = escale, .eshape = eshape))), last = eval(substitute(expression({ misc$link <- c(location = .llocat , scale = .lscale , shape = .lshape) misc$earg <- list(location = .elocat , scale = .escale , shape = .eshape ) }), list( .llocat = llocat, .lscale = lscale, .lshape = lshape, .elocat = elocat, .escale = escale, .eshape = eshape))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { aa <- eta2theta(eta[, 1], .llocat , earg = .elocat ) bb <- eta2theta(eta[, 2], .lscale , earg = .escale ) kk <- eta2theta(eta[, 3], .lshape , earg = .eshape ) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else { sum(c(w) * dlgamma(x = y, locat=aa, scale=bb, k=kk, log = TRUE)) } }, list( .llocat = llocat, .lscale = lscale, .lshape = lshape, .elocat = elocat, .escale = escale, .eshape = eshape))), vfamily = c("lgamma3ff"), deriv = eval(substitute(expression({ a <- eta2theta(eta[, 1], .llocat , earg = .elocat ) b <- eta2theta(eta[, 2], .lscale , earg = .escale ) k <- eta2theta(eta[, 3], .lshape , earg = .eshape ) zedd <- (y-a)/b dl.da <- (exp(zedd) - k) / b dl.db <- (zedd * (exp(zedd) - k) - 1) / b dl.dk <- zedd - digamma(k) da.deta <- dtheta.deta(a, .llocat , earg = .elocat ) db.deta <- dtheta.deta(b, .lscale , earg = .escale ) dk.deta <- dtheta.deta(k, .lshape , earg = .eshape ) c(w) * cbind(dl.da * da.deta, dl.db * db.deta, dl.dk * dk.deta) }), list( .llocat = llocat, .lscale = lscale, .lshape = lshape, .elocat = elocat, .escale = escale, .eshape = eshape))), weight = eval(substitute(expression({ ned2l.da2 <- k / b^2 ned2l.db2 <- (1 + k*(trigamma(k+1) + (digamma(k+1))^2)) / b^2 ned2l.dk2 <- trigamma(k) ned2l.dadb <- (1 + k*digamma(k)) / b^2 ned2l.dadk <- 1 / b ned2l.dbdk <- digamma(k) / b wz <- matrix(as.numeric(NA), n, dimm(M)) wz[, iam(1, 1, M)] <- ned2l.da2 * da.deta^2 wz[, iam(2, 2, M)] <- ned2l.db2 * db.deta^2 wz[, iam(3, 3, M)] <- ned2l.dk2 * dk.deta^2 wz[, iam(1, 2, M)] <- ned2l.dadb * da.deta * db.deta wz[, iam(1, 3, M)] <- ned2l.dadk * da.deta * dk.deta wz[, iam(2, 3, M)] <- ned2l.dbdk * db.deta * dk.deta wz <- c(w) * wz wz }), list( .llocat = llocat, .lscale = lscale, .lshape = lshape, .elocat = elocat, .escale = escale, .eshape = eshape)))) } prentice74 <- function( llocation = "identity", lscale = "loge", lshape = "identity", ilocation = NULL, iscale = NULL, ishape = NULL, zero = 2:3) { if (length(zero) && !is.Numeric(zero, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'zero'") if (length(iscale) && !is.Numeric(iscale, positive = TRUE)) stop("bad input for argument 'iscale'") llocat <- 
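# Usage sketch for the three-parameter lgamma3ff() family above
# (illustrative; 'gdat3' is hypothetical):
#   set.seed(11)
#   gdat3 <- data.frame(y = rlgamma(400, location = 2, scale = 1.5, k = 2))
#   fit <- vglm(y ~ 1, lgamma3ff, data = gdat3, trace = TRUE)
#   coef(fit, matrix = TRUE)  # location, loge(scale), loge(shape)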
as.list(substitute(llocation)) elocat <- link2list(llocat) llocat <- attr(elocat, "function.name") ilocat <- ilocation lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") lshape <- as.list(substitute(lshape)) eshape <- link2list(lshape) lshape <- attr(eshape, "function.name") new("vglmff", blurb = c("Log-gamma distribution (Prentice, 1974)", " f(y) = |q| * exp(w/q^2 - e^w) / (b*gamma(1/q^2)) ,\n", "w = (y-a)*q/b + digamma(1/q^2), ", "location = a, scale = b > 0, shape = q\n\n", "Links: ", namesof("location", llocat, earg = elocat), ", ", namesof("scale", lscale, earg = escale), ", ", namesof("shape", lshape, earg = eshape), "\n", "\n", "Mean: a", "\n"), constraints = eval(substitute(expression({ constraints <- cm.zero.vgam(constraints, x, .zero , M) }), list( .zero = zero ))), initialize = eval(substitute(expression({ w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1) predictors.names <- c(namesof("location", .llocat , earg = .elocat , tag = FALSE), namesof("scale", .lscale , earg = .escale , tag = FALSE), namesof("shape", .lshape , earg = .eshape, tag = FALSE)) if (!length(etastart)) { sdy <- sqrt(var(y)) k.init <- if (length( .ishape )) rep( .ishape, length.out = length(y)) else { skewness <- mean((y-mean(y))^3) / sdy^3 # <0 Left Skewed rep(-skewness, length.out = length(y)) } scale.init <- if (length( .iscale )) rep( .iscale, length.out = length(y)) else { rep(sdy, length.out = length(y)) } loc.init <- if (length( .iloc )) rep( .iloc, length.out = length(y)) else { rep(median(y), length.out = length(y)) } etastart <- cbind(theta2eta(loc.init, .llocat , earg = .elocat ), theta2eta(scale.init, .lscale , earg = .escale ), theta2eta(k.init, .lshape , earg = .eshape )) } }), list( .llocat = llocat, .lscale = lscale, .lshape = lshape, .elocat = elocat, .escale = escale, .eshape = eshape, .iloc = ilocat, .iscale = iscale, .ishape = ishape ))), linkinv = eval(substitute(function(eta, extra = NULL) { eta2theta(eta[, 1], .llocat , earg = .elocat ) }, list( .llocat = llocat, .lscale = lscale, .lshape = lshape, .elocat = elocat, .escale = escale, .eshape = eshape))), last = eval(substitute(expression({ misc$link <- c(location = .llocat , scale = .lscale , shape = .lshape ) misc$earg <- list(location = .elocat , scale = .escale , shape = .eshape ) }), list( .llocat = llocat, .lscale = lscale, .lshape = lshape, .elocat = elocat, .escale = escale, .eshape = eshape))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { a <- eta2theta(eta[, 1], .llocat , earg = .elocat ) b <- eta2theta(eta[, 2], .lscale , earg = .escale ) k <- eta2theta(eta[, 3], .lshape , earg = .eshape ) tmp55 <- k^(-2) doubw <- (y-a)*k/b + digamma(tmp55) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else sum(c(w)*(log(abs(k)) - log(b) - lgamma(tmp55) + doubw * tmp55 - exp(doubw ))) }, list( .llocat = llocat, .lscale = lscale, .lshape = lshape, .elocat = elocat, .escale = escale, .eshape = eshape))), vfamily = c("prentice74"), deriv = eval(substitute(expression({ a <- eta2theta(eta[, 1], .llocat , earg = .elocat ) b <- eta2theta(eta[, 2], .lscale , earg = .escale ) k <- eta2theta(eta[, 3], .lshape , earg = .eshape ) tmp55 <- k^(-2) mustar <- digamma(tmp55) doubw <- (y-a)*k/b + mustar sigmastar2 <- trigamma(tmp55) dl.da <- k*(exp(doubw) - tmp55) / b dl.db <- ((doubw - mustar) * (exp(doubw) - tmp55) - 1) / b dl.dk <- 1/k - 2 * (doubw - mustar) / k^3 - (exp(doubw) - tmp55) * ((doubw - mustar) / k - 2 * sigmastar2 / 
k^3) da.deta <- dtheta.deta(a, .llocat , earg = .elocat ) db.deta <- dtheta.deta(b, .lscale , earg = .escale ) dk.deta <- dtheta.deta(k, .lshape , earg = .eshape ) c(w) * cbind(dl.da * da.deta, dl.db * db.deta, dl.dk * dk.deta) }), list( .llocat = llocat, .lscale = lscale, .lshape = lshape, .elocat = elocat, .escale = escale, .eshape = eshape))), weight = eval(substitute(expression({ ned2l.da2 <- 1 / b^2 ned2l.db2 <- (1 + sigmastar2*tmp55) / b^2 ned2l.dk2 <- tmp55 - 3*sigmastar2*tmp55^2 + 4*sigmastar2*tmp55^4 * (sigmastar2 - k^2) ned2l.dadb <- k / b^2 ned2l.dadk <- (2*(sigmastar2*tmp55^2 - tmp55) - 1) / b ned2l.dbdk <- (sigmastar2*tmp55 - 1) / (b*k) wz <- matrix(as.numeric(NA), n, dimm(M)) wz[, iam(1, 1, M)] <- ned2l.da2 * da.deta^2 wz[, iam(2, 2, M)] <- ned2l.db2 * db.deta^2 wz[, iam(3, 3, M)] <- ned2l.dk2 * dk.deta^2 wz[, iam(1, 2, M)] <- ned2l.dadb * da.deta * db.deta wz[, iam(1, 3, M)] <- ned2l.dadk * da.deta * dk.deta wz[, iam(2, 3, M)] <- ned2l.dbdk * db.deta * dk.deta wz <- c(w) * wz wz }), list( .llocat = llocat, .lscale = lscale, .lshape = lshape, .elocat = elocat, .escale = escale, .eshape = eshape)))) } dgengamma <- function(x, scale = 1, d = 1, k = 1, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) if (!is.Numeric(scale, positive = TRUE)) stop("bad input for argument 'scale'") if (!is.Numeric(d, positive = TRUE)) stop("bad input for argument 'd'") if (!is.Numeric(k, positive = TRUE)) stop("bad input for argument 'k'") N <- max(length(x), length(scale), length(d), length(k)) x <- rep(x, length.out = N); scale <- rep(scale, length.out = N); d <- rep(d, length.out = N); k <- rep(k, length.out = N); Loglik <- rep(log(0), length.out = N) xok <- x > 0 if (any(xok)) { zedd <- (x[xok]/scale[xok])^d[xok] Loglik[xok] <- log(d[xok]) + (-d[xok]*k[xok]) * log(scale[xok]) + (d[xok]*k[xok]-1) * log(x[xok]) - zedd - lgamma(k[xok]) } if (log.arg) { Loglik } else { exp(Loglik) } } pgengamma <- function(q, scale = 1, d = 1, k = 1) { zedd <- (q / scale)^d ans <- pgamma(zedd, k) ans[scale < 0] <- NaN ans[d <= 0] <- NaN ans } qgengamma <- function(p, scale = 1, d = 1, k = 1) { ans <- scale * qgamma(p, k)^(1/d) ans[scale < 0] <- NaN ans[d <= 0] <- NaN ans } rgengamma <- function(n, scale = 1, d = 1, k = 1) { ans <- scale * rgamma(n, k)^(1/d) ans[scale < 0] <- NaN ans[d <= 0] <- NaN ans } gengamma <- function(lscale = "loge", ld = "loge", lk = "loge", iscale = NULL, id = NULL, ik = NULL, zero = NULL) { lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") ld <- as.list(substitute(ld)) ed <- link2list(ld) ld <- attr(ed, "function.name") lk <- as.list(substitute(lk)) ek <- link2list(lk) lk <- attr(ek, "function.name") if (length(zero) && !is.Numeric(zero, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'zero'") if (length(iscale) && !is.Numeric(iscale, positive = TRUE)) stop("bad input for argument 'iscale'") new("vglmff", blurb = c("Generalized gamma distribution", " f(y) = d * b^(-d*k) * y^(d*k-1) * exp(-(y/b)^d) / gamma(k),\n", "scale=b>0, d>0, k>0, y>0\n\n", "Links: ", namesof("scale", lscale, earg = escale), ", ", namesof("d", ld, earg = ed), ", ", namesof("k", lk, earg = ek), "\n", "\n", "Mean: b * gamma(k+1/d) / gamma(k)", "\n"), constraints = eval(substitute(expression({ constraints <- cm.zero.vgam(constraints, x, .zero , M) }), list( .zero = zero ))), initialize = eval(substitute(expression({ w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = 1, 
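# Usage sketch for prentice74() above, whose mean equals the location a
# (illustrative; 'pdat' is hypothetical):
#   set.seed(12)
#   pdat <- data.frame(y = rlgamma(500, location = 1, scale = 2, k = 1))
#   fit <- vglm(y ~ 1, prentice74, data = pdat, trace = TRUE)
#   Coef(fit)  # a, b, q
# Round-trip check for the generalized gamma helpers defined just above:
#   pp <- c(0.1, 0.3, 0.5, 0.7, 0.9)
#   max(abs(pgengamma(qgengamma(pp, scale = 2, d = 1.5, k = 3),
#                     scale = 2, d = 1.5, k = 3) - pp))  # should be ~0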
ncol.y.max = 1) predictors.names <- c(namesof("scale", .lscale , earg = .escale , tag = FALSE), namesof("d", .ld , earg = .ed , tag = FALSE), namesof("k", .lk , earg = .ek , tag = FALSE)) if (!length(etastart)) { b.init <- if (length( .iscale )) rep( .iscale, length.out = length(y)) else { rep(mean(y^2) / mean(y), length.out = length(y)) } k.init <- if (length( .ik )) rep( .ik , length.out = length(y)) else { rep(mean(y) / b.init, length.out = length(y)) } d.init <- if (length( .id )) rep( .id , length.out = length(y)) else { rep(digamma(k.init) / mean(log(y / b.init)), length.out = length(y)) } etastart <- cbind(theta2eta(b.init, .lscale , earg = .escale ), theta2eta(d.init, .ld , earg = .ed ), theta2eta(k.init, .lk , earg = .ek )) } }), list( .lscale = lscale, .ld = ld, .lk = lk, .escale = escale, .ed = ed, .ek = ek, .iscale = iscale, .id = id, .ik = ik ))), linkinv = eval(substitute(function(eta, extra = NULL) { b <- eta2theta(eta[, 1], .lscale , earg = .escale ) d <- eta2theta(eta[, 2], .ld , earg = .ed ) k <- eta2theta(eta[, 3], .lk , earg = .ek ) b * gamma(k + 1 / d) / gamma(k) }, list( .lscale = lscale, .lk = lk, .ld = ld, .escale = escale, .ek = ek, .ed = ed ))), last = eval(substitute(expression({ misc$link <- c(scale = .lscale , d = .ld , k = .lk ) misc$earg <- list(scale = .escale , d = .ed , k = .ek ) misc$expected <- TRUE }), list( .lscale = lscale, .ld = ld, .lk = lk, .escale = escale, .ed = ed, .ek = ek ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { b <- eta2theta(eta[, 1], .lscale , earg = .escale ) d <- eta2theta(eta[, 2], .ld , earg = .ed ) k <- eta2theta(eta[, 3], .lk , earg = .ek ) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else { sum(c(w) * dgengamma(x = y, scale = b, d = d, k = k, log = TRUE)) } }, list( .lscale = lscale, .ld = ld, .lk = lk, .escale = escale, .ed = ed, .ek = ek ))), vfamily = c("gengamma"), deriv = eval(substitute(expression({ b <- eta2theta(eta[, 1], .lscale , earg = .escale ) d <- eta2theta(eta[, 2], .ld , earg = .ed ) k <- eta2theta(eta[, 3], .lk , earg = .ek ) tmp22 <- (y/b)^d tmp33 <- log(y/b) dl.db <- d * (tmp22 - k) / b dl.dd <- 1/d + tmp33 * (k - tmp22) dl.dk <- d * tmp33 - digamma(k) db.deta <- dtheta.deta(b, .lscale , earg = .escale ) dd.deta <- dtheta.deta(d, .ld , earg = .ed ) dk.deta <- dtheta.deta(k, .lk , earg = .ek ) c(w) * cbind(dl.db * db.deta, dl.dd * dd.deta, dl.dk * dk.deta) }), list( .lscale = lscale, .ld = ld, .lk = lk, .escale = escale, .ed = ed, .ek = ek ))), weight = eval(substitute(expression({ ned2l.db2 <- k * (d/b)^2 ned2l.dd2 <- (1 + k * (trigamma(k+1) + (digamma(k+1))^2)) / d^2 ned2l.dk2 <- trigamma(k) ned2l.dbdd <- -(1 + k*digamma(k)) / b ned2l.dbdk <- d / b ned2l.dddk <- -digamma(k) / d wz <- matrix(as.numeric(NA), n, dimm(M)) wz[, iam(1, 1, M)] <- ned2l.db2 * db.deta^2 wz[, iam(2, 2, M)] <- ned2l.dd2 * dd.deta^2 wz[, iam(3, 3, M)] <- ned2l.dk2 * dk.deta^2 wz[, iam(1, 2, M)] <- ned2l.dbdd * db.deta * dd.deta wz[, iam(1, 3, M)] <- ned2l.dbdk * db.deta * dk.deta wz[, iam(2, 3, M)] <- ned2l.dddk * dd.deta * dk.deta wz <- c(w) * wz wz }), list( .lscale = lscale, .ld = ld, .lk = lk, .escale = escale, .ed = ed, .ek = ek )))) } dlog <- function(x, prob, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) if (!is.Numeric(prob, positive = TRUE) || max(prob) >= 1) stop("bad input for argument 'prob'") N <- max(length(x), length(prob)) if (length(x) != N) x <- rep(x, length.out = N) if 
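# Usage sketch for gengamma() above (illustrative; 'ggdat' is hypothetical):
#   set.seed(13)
#   ggdat <- data.frame(y = rgengamma(500, scale = 2, d = 1.5, k = 3))
#   fit <- vglm(y ~ 1, gengamma, data = ggdat, trace = TRUE)
#   coef(fit, matrix = TRUE)  # columns: loge(scale), loge(d), loge(k)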
(length(prob) != N) prob <- rep(prob, length.out = N)

  ox <- !is.finite(x)
  zero <- ox | round(x) != x | x < 1
  ans <- rep(0.0, length.out = length(x))
  if (log.arg) {
    ans[ zero] <- log(0.0)
    ans[!zero] <- x[!zero] * log(prob[!zero]) - log(x[!zero]) -
                  log(-log1p(-prob[!zero]))
  } else {
    ans[!zero] <- -(prob[!zero]^(x[!zero])) / (x[!zero] * log1p(-prob[!zero]))
  }
  if (any(ox)) ans[ox] <- NA
  ans
}


plog <- function(q, prob, log.p = FALSE) {
  if (!is.Numeric(q)) stop("bad input for argument 'q'")
  if (!is.Numeric(prob, positive = TRUE) || max(prob) >= 1)
    stop("bad input for argument 'prob'")
  N <- max(length(q), length(prob))
  q <- rep(q, length.out = N); prob <- rep(prob, length.out = N);

  bigno <- 10
  owen1965 <- (q * (1 - prob) > bigno)
  if (specialCase <- any(owen1965)) {
    qqq <- q[owen1965]
    ppp <- prob[owen1965]
    pqp <- qqq * (1 - ppp)
    bigans <- (ppp^(1+qqq) / (1-ppp)) *
              (1/qqq -
               1 / ( pqp * (qqq-1)) +
               2 / ((1-ppp)   * pqp * (qqq-1) * (qqq-2)) -
               6 / ((1-ppp)^2 * pqp * (qqq-1) * (qqq-2) * (qqq-3)) +
              24 / ((1-ppp)^3 * pqp * (qqq-1) * (qqq-2) * (qqq-3) * (qqq-4)))
    bigans <- 1 + bigans / log1p(-ppp)
  }

  floorq <- pmax(1, floor(q))  # Ensures at least one element per q value
  floorq[owen1965] <- 1
  seqq <- sequence(floorq)
  seqp <- rep(prob, floorq)
  onevector <- (seqp^seqq / seqq) / (-log1p(-seqp))
  rlist <- dotC(name = "tyee_C_cum8sum",
                as.double(onevector), answer = double(N),
                as.integer(N), as.double(seqq),
                as.integer(length(onevector)), notok = integer(1))
  if (rlist$notok != 0) stop("error in 'cum8sum'")
  ans <- if (log.p) log(rlist$answer) else rlist$answer
  if (specialCase)
    ans[owen1965] <- if (log.p) log(bigans) else bigans
  ans[q < 1] <- if (log.p) log(0.0) else 0.0
  ans
}


rlog <- function(n, prob, Smallno = 1.0e-6) {
  use.n <- if ((length.n <- length(n)) > 1) length.n else
           if (!is.Numeric(n, integer.valued = TRUE,
                           allowable.length = 1, positive = TRUE))
             stop("bad input for argument 'n'") else n
  if (!is.Numeric(prob, allowable.length = 1, positive = TRUE) ||
      max(prob) >= 1)
    stop("bad input for argument 'prob'")
  if (!is.Numeric(Smallno, positive = TRUE, allowable.length = 1) ||
      Smallno > 0.01 || Smallno < 2 * .Machine$double.eps)
    stop("bad input for argument 'Smallno'")
  ans <- rep(0.0, length.out = use.n)

  ptr1 <- 1; ptr2 <- 0
  a <- -1 / log1p(-prob)
  mean <- a*prob/(1-prob)  # E(Y)
  sigma <- sqrt(a * prob * (1 - a * prob)) / (1 - prob)  # sd(Y)
  ymax <- dlog(x = 1, prob)
  while(ptr2 < use.n) {
    Lower <- 0.5  # Continuity correction: the support starts at 1, so 1 - 0.5
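# Sanity-check sketch for the logarithmic-distribution helpers dlog() and
# plog() above (illustrative only):
#   sum(dlog(1:100, prob = 0.9))                           # ~1: pmf sums to one
#   abs(plog(5, prob = 0.9) - sum(dlog(1:5, prob = 0.9)))  # ~0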
Upper <- mean + 5 * sigma while(plog(q = Upper, prob) < 1 - Smallno) Upper <- Upper + sigma Upper <- Upper + 0.5 x <- round(runif(2 * use.n, min = Lower, max = Upper)) index <- runif(2 * use.n, max = ymax) < dlog(x,prob) sindex <- sum(index) if (sindex) { ptr2 <- min(use.n, ptr1 + sindex - 1) ans[ptr1:ptr2] <- (x[index])[1:(1+ptr2-ptr1)] ptr1 <- ptr2 + 1 } } ans } logff <- function(link = "logit", init.c = NULL, zero = NULL) { if (length(init.c) && (!is.Numeric(init.c, positive = TRUE) || max(init.c) >= 1)) stop("init.c must be in (0,1)") link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") if (length(zero) && !is.Numeric(zero, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'zero'") new("vglmff", blurb = c("Logarithmic distribution f(y) = a * c^y / y, ", "y = 1, 2, 3,...,\n", " 0 < c < 1, a = -1 / log(1-c) \n\n", "Link: ", namesof("c", link, earg = earg), "\n", "\n", "Mean: a * c / (1 - c)", "\n"), constraints = eval(substitute(expression({ dotzero <- .zero Musual <- 1 eval(negzero.expression) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(Musual = 1, zero = .zero ) }, list( .zero = zero ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE, Is.integer.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y ncoly <- ncol(y) Musual <- 1 extra$ncoly <- ncoly extra$Musual <- Musual M <- Musual * ncoly mynames1 <- paste("c", if (ncoly > 1) 1:ncoly else "", sep = "") predictors.names <- namesof(mynames1, .link , earg = .earg , tag = FALSE) if (!length(etastart)) { logff.Loglikfun <- function(probval, y, x, w, extraargs) { sum(c(w) * dlog(x = y, prob = probval, log = TRUE)) } Init.c <- matrix(if (length( .init.c )) .init.c else 0, n, M, byrow = TRUE) if (!length( .init.c )) for(ilocal in 1:ncoly) { prob.grid <- seq(0.05, 0.95, by = 0.05) Init.c[, ilocal] <- getMaxMin(prob.grid, objfun = logff.Loglikfun, y = y[, ilocal], x = x, w = w[, ilocal]) } etastart <- theta2eta(Init.c, .link , earg = .earg ) } }), list( .link = link, .earg = earg, .init.c = init.c ))), linkinv = eval(substitute(function(eta, extra = NULL) { cc <- eta2theta(eta, .link , earg = .earg ) aa <- -1 / log1p(-cc) aa * cc / (1 - cc) }, list( .link = link, .earg = earg ))), last = eval(substitute(expression({ Musual <- extra$Musual misc$link <- c(rep( .link , length = ncoly)) names(misc$link) <- mynames1 misc$earg <- vector("list", M) names(misc$earg) <- mynames1 for(ii in 1:ncoly) { misc$earg[[ii]] <- .earg } misc$Musual <- Musual misc$expected <- TRUE misc$multipleResponses <- TRUE }), list( .link = link, .earg = earg ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { cc <- eta2theta(eta, .link , earg = .earg ) aa <- -1 / log1p(-cc) if (residuals) stop("loglikelihood residuals ", "not implemented yet") else { sum(c(w) * dlog(x = y, prob = -expm1(-1/aa), log = TRUE)) } }, list( .link = link, .earg = earg ))), vfamily = c("logff"), deriv = eval(substitute(expression({ Musual <- 1 cc <- eta2theta(eta, .link , earg = .earg ) aa <- -1 / log1p(-cc) dl.dc <- 1 / ((1 - cc) * log1p(-cc)) + y / cc dc.deta <- dtheta.deta(cc, .link , earg = .earg ) c(w) * dl.dc * dc.deta }), list( .link = link, .earg = earg ))), weight = eval(substitute(expression({ ned2l.dc2 <- aa * (1 - aa * cc) / (cc * (1-cc)^2) wz <- c(w) * dc.deta^2 * ned2l.dc2 wz }), list( .link = link, .earg = earg )))) } levy <- function(delta 
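# Usage sketch for logff() above (illustrative; 'ldat1' is hypothetical):
#   set.seed(15)
#   ldat1 <- data.frame(y = rlog(1000, prob = 0.8))
#   fit <- vglm(y ~ 1, logff, data = ldat1, trace = TRUE)
#   Coef(fit)  # c; the fitted mean is a*c/(1-c) with a = -1/log(1-c)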
= NULL, link.gamma = "loge", idelta = NULL, igamma = NULL) { delta.known = is.Numeric(delta, allowable.length = 1) link.gamma <- as.list(substitute(link.gamma)) earg <- link2list(link.gamma) link.gamma <- attr(earg, "function.name") new("vglmff", blurb = c("Levy distribution f(y) = sqrt(gamma/(2*pi)) * ", "(y-delta)^(-3/2) * \n", " exp(-gamma / (2*(y-delta ))),\n", " delta < y, gamma > 0", if (delta.known) paste(", delta = ", delta, ",", sep = ""), "\n\n", if (delta.known) "Link: " else "Links: ", namesof("gamma", link.gamma, earg = earg), if (! delta.known) c(", ", namesof("delta", "identity", earg = list())), "\n\n", "Mean: NA", "\n"), initialize = eval(substitute(expression({ w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1) predictors.names <- c(namesof("gamma", .link.gamma, earg = .earg , tag = FALSE), if ( .delta.known) NULL else namesof("delta", "identity", earg = list(), tag = FALSE)) if (!length(etastart)) { delta.init <- if ( .delta.known) { if (min(y,na.rm = TRUE) <= .delta) stop("delta must be < min(y)") .delta } else { if (length( .idelta)) .idelta else min(y,na.rm = TRUE) - 1.0e-4 * diff(range(y,na.rm = TRUE)) } gamma.init <- if (length( .igamma)) .igamma else median(y - delta.init) # = 1/median(1/(y-delta.init)) gamma.init <- rep(gamma.init, length = length(y)) etastart <- cbind(theta2eta(gamma.init, .link.gamma , earg = .earg ), if ( .delta.known) NULL else delta.init) } }), list( .link.gamma = link.gamma, .earg = earg, .delta.known = delta.known, .delta = delta, .idelta = idelta, .igamma = igamma ))), linkinv = eval(substitute(function(eta, extra = NULL) { eta <- as.matrix(eta) mygamma <- eta2theta(eta[, 1], .link.gamma, earg = .earg ) delta <- if ( .delta.known) .delta else eta[, 2] NA * mygamma }, list( .link.gamma = link.gamma, .earg = earg, .delta.known = delta.known, .delta = delta ))), last = eval(substitute(expression({ misc$link <- if ( .delta.known) NULL else c(delta = "identity") misc$link <- c(gamma = .link.gamma, misc$link) misc$earg <- if ( .delta.known) list(gamma = .earg ) else list(gamma = .earg , delta = list()) if ( .delta.known) misc$delta <- .delta }), list( .link.gamma = link.gamma, .earg = earg, .delta.known = delta.known, .delta = delta ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { eta <- as.matrix(eta) mygamma <- eta2theta(eta[, 1], .link.gamma, earg = .earg ) delta <- if ( .delta.known) .delta else eta[, 2] if (residuals) stop("loglikelihood residuals ", "not implemented yet") else sum(c(w) * 0.5 * (log(mygamma) -3 * log(y - delta) - mygamma / (y - delta))) }, list( .link.gamma = link.gamma, .earg = earg, .delta.known = delta.known, .delta = delta ))), vfamily = c("levy"), deriv = eval(substitute(expression({ eta <- as.matrix(eta) mygamma <- eta2theta(eta[, 1], .link.gamma, earg = .earg ) delta <- if ( .delta.known) .delta else eta[, 2] if (! .delta.known) dl.ddelta <- (3 - mygamma / (y-delta)) / (2 * (y-delta)) dl.dgamma <- 0.5 * (1 / mygamma - 1 / (y-delta)) dgamma.deta <- dtheta.deta(mygamma, .link.gamma, earg = .earg ) c(w) * cbind(dl.dgamma * dgamma.deta, if ( .delta.known) NULL else dl.ddelta) }), list( .link.gamma = link.gamma, .earg = earg, .delta.known = delta.known, .delta = delta ))), weight = eval(substitute(expression({ wz <- matrix(as.numeric(NA), n, dimm(M)) # M = if (delta is known) 1 else 2 wz[, iam(1, 1, M)] <- 1 * dgamma.deta^2 if (! 
.delta.known) { wz[, iam(1, 2, M)] <- 3 * dgamma.deta wz[, iam(2, 2, M)] <- 21 } wz <- c(w) * wz / (2 * mygamma^2) wz }), list( .link.gamma = link.gamma, .earg = earg, .delta.known = delta.known, .delta = delta )))) } dlino <- function(x, shape1, shape2, lambda = 1, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) loglik <- dbeta(x = x, shape1 = shape1, shape2 = shape2, log = TRUE) + shape1 * log(lambda) - (shape1+shape2) * log1p(-(1-lambda)*x) if (log.arg) loglik else exp(loglik) } plino <- function(q, shape1, shape2, lambda = 1) { ans <- pbeta(q = lambda * q / (1 - (1-lambda)*q), shape1 = shape1, shape2 = shape2) ans[lambda <= 0] <- NaN ans } qlino <- function(p, shape1, shape2, lambda = 1) { Y <- qbeta(p = p, shape1 = shape1, shape2 = shape2) ans <- Y / (lambda + (1-lambda)*Y) ans[lambda <= 0] <- NaN ans } rlino <- function(n, shape1, shape2, lambda = 1) { Y <- rbeta(n = n, shape1 = shape1, shape2 = shape2) ans <- Y / (lambda + (1 - lambda) * Y) ans[lambda <= 0] <- NaN ans } lino <- function(lshape1 = "loge", lshape2 = "loge", llambda = "loge", ishape1 = NULL, ishape2 = NULL, ilambda = 1, zero = NULL) { if (length(zero) && !is.Numeric(zero, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'zero'") if (!is.Numeric(ilambda, positive = TRUE)) stop("bad input for argument 'ilambda'") lshape1 <- as.list(substitute(lshape1)) eshape1 <- link2list(lshape1) lshape1 <- attr(eshape1, "function.name") lshape2 <- as.list(substitute(lshape2)) eshape2 <- link2list(lshape2) lshape2 <- attr(eshape2, "function.name") llambda <- as.list(substitute(llambda)) elambda <- link2list(llambda) llambda <- attr(elambda, "function.name") new("vglmff", blurb = c("Generalized Beta distribution (Libby and Novick, 1982)\n\n", "Links: ", namesof("shape1", lshape1, earg = eshape1), ", ", namesof("shape2", lshape2, earg = eshape2), ", ", namesof("lambda", llambda, earg = elambda), "\n", "Mean: something complicated"), constraints = eval(substitute(expression({ constraints <- cm.zero.vgam(constraints, x, .zero , M) }), list( .zero = zero ))), initialize = eval(substitute(expression({ if (min(y) <= 0 || max(y) >= 1) stop("values of the response must be between 0 and 1 (0,1)") w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = 1, ncol.y.max = 1) predictors.names <- c(namesof("shape1", .lshape1 , earg = .eshape1 , tag = FALSE), namesof("shape2", .lshape2 , earg = .eshape2 , tag = FALSE), namesof("lambda", .llambda , earg = .elambda , tag = FALSE)) if (!length(etastart)) { lambda.init <- rep(if (length( .ilambda )) .ilambda else 1, length = n) sh1.init <- if (length( .ishape1 )) rep( .ishape1, length = n) else NULL sh2.init <- if (length( .ishape2 )) rep( .ishape2, length = n) else NULL txY.init <- lambda.init * y / (1+lambda.init*y - y) mean1 <- mean(txY.init) mean2 <- mean(1/txY.init) if (!is.Numeric(sh1.init)) sh1.init <- rep((mean2 - 1) / (mean2 - 1/mean1), length = n) if (!is.Numeric(sh2.init)) sh2.init <- rep(sh1.init * (1-mean1) / mean1, length = n) etastart <- cbind(theta2eta(sh1.init, .lshape1 , earg = .eshape1), theta2eta(sh2.init, .lshape2 , earg = .eshape2), theta2eta(lambda.init, .llambda , earg = .elambda )) } }), list( .lshape1 = lshape1, .lshape2 = lshape2, .llambda = llambda, .eshape1 = eshape1, .eshape2 = eshape2, .elambda = elambda, .ishape1=ishape1, .ishape2=ishape2, .ilambda = ilambda ))), linkinv = eval(substitute(function(eta, extra = NULL) { sh1 <- eta2theta(eta[, 1], .lshape1 , earg = .eshape1) sh2 <- 
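# Sanity-check sketch for the Libby-Novick helpers above; rlino() has
# support (0, 1) (illustrative only):
#   set.seed(16)
#   yy <- rlino(1000, shape1 = 2, shape2 = 3, lambda = 2)
#   range(yy)  # strictly inside (0, 1)
#   pp <- c(0.25, 0.5, 0.75)
#   max(abs(plino(qlino(pp, 2, 3, lambda = 2), 2, 3, lambda = 2) - pp))  # ~0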
eta2theta(eta[, 2], .lshape2 , earg = .eshape2) lambda <- eta2theta(eta[, 3], .llambda , earg = .elambda ) rep(as.numeric(NA), length = nrow(eta)) }, list( .lshape1 = lshape1, .lshape2 = lshape2, .llambda = llambda, .eshape1 = eshape1, .eshape2 = eshape2, .elambda = elambda ))), last = eval(substitute(expression({ misc$link <- c(shape1 = .lshape1 , shape2 = .lshape2 , lambda = .llambda ) misc$earg <- list(shape1 = .eshape1 , shape2 = .eshape2 , lambda = .elambda ) }), list( .lshape1 = lshape1, .lshape2 = lshape2, .llambda = llambda, .eshape1 = eshape1, .eshape2 = eshape2, .elambda = elambda ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { sh1 <- eta2theta(eta[, 1], .lshape1 , earg = .eshape1) sh2 <- eta2theta(eta[, 2], .lshape2 , earg = .eshape2) lambda <- eta2theta(eta[, 3], .llambda , earg = .elambda ) if (residuals) stop("loglikelihood residuals ", "not implemented yet") else { sum(c(w) * dlino(y, shape1 = sh1, shape2 = sh2, lambda = lambda, log = TRUE)) } }, list( .lshape1 = lshape1, .lshape2 = lshape2, .llambda = llambda, .eshape1 = eshape1, .eshape2 = eshape2, .elambda = elambda ))), vfamily = c("lino"), deriv = eval(substitute(expression({ sh1 <- eta2theta(eta[, 1], .lshape1 , earg = .eshape1) sh2 <- eta2theta(eta[, 2], .lshape2 , earg = .eshape2) lambda <- eta2theta(eta[, 3], .llambda , earg = .elambda ) temp1 <- log1p(-(1-lambda) * y) temp2 <- digamma(sh1+sh2) dl.dsh1 <- log(lambda) + log(y) - digamma(sh1) + temp2 - temp1 dl.dsh2 <- log1p(-y) - digamma(sh2) + temp2 - temp1 dl.dlambda <- sh1/lambda - (sh1+sh2) * y / (1 - (1-lambda) * y) dsh1.deta <- dtheta.deta(sh1, .lshape1 , earg = .eshape1) dsh2.deta <- dtheta.deta(sh2, .lshape2 , earg = .eshape2) dlambda.deta <- dtheta.deta(lambda, .llambda , earg = .elambda ) c(w) * cbind( dl.dsh1 * dsh1.deta, dl.dsh2 * dsh2.deta, dl.dlambda * dlambda.deta) }), list( .lshape1 = lshape1, .lshape2 = lshape2, .llambda = llambda, .eshape1 = eshape1, .eshape2 = eshape2, .elambda = elambda ))), weight = eval(substitute(expression({ temp3 <- trigamma(sh1+sh2) ned2l.dsh1 <- trigamma(sh1) - temp3 ned2l.dsh2 <- trigamma(sh2) - temp3 ned2l.dlambda2 <- sh1 * sh2 / (lambda^2 * (sh1+sh2+1)) ned2l.dsh1sh2 <- -temp3 ned2l.dsh1lambda <- -sh2 / ((sh1+sh2)*lambda) ned2l.dsh2lambda <- sh1 / ((sh1+sh2)*lambda) wz <- matrix(as.numeric(NA), n, dimm(M)) #M==3 means 6=dimm(M) wz[, iam(1, 1, M)] <- ned2l.dsh1 * dsh1.deta^2 wz[, iam(2, 2, M)] <- ned2l.dsh2 * dsh2.deta^2 wz[, iam(3, 3, M)] <- ned2l.dlambda2 * dlambda.deta^2 wz[, iam(1, 2, M)] <- ned2l.dsh1sh2 * dsh1.deta * dsh2.deta wz[, iam(1, 3, M)] <- ned2l.dsh1lambda * dsh1.deta * dlambda.deta wz[, iam(2, 3, M)] <- ned2l.dsh2lambda * dsh2.deta * dlambda.deta wz <- c(w) * wz wz }), list( .lshape1 = lshape1, .lshape2 = lshape2, .llambda = llambda, .eshape1 = eshape1, .eshape2 = eshape2, .elambda = elambda )))) } betaprime <- function(link = "loge", i1 = 2, i2 = NULL, zero = NULL) { link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") new("vglmff", blurb = c("Beta-prime distribution\n", "y^(shape1-1) * (1+y)^(-shape1-shape2) / Beta(shape1,shape2),", " y>0, shape1>0, shape2>0\n\n", "Links: ", namesof("shape1", link, earg = earg), ", ", namesof("shape2", link, earg = earg), "\n", "Mean: shape1/(shape2-1) provided shape2>1"), constraints = eval(substitute(expression({ constraints <- cm.zero.vgam(constraints, x, .zero , M) }), list( .zero = zero ))), initialize = eval(substitute(expression({ w.y.check(w = w, y = y, Is.positive.y = TRUE, 
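# --- Illustrative usage sketch for lino(), using the rlino() defined above
# (the data-frame name and parameter values are invented for illustration):
#   set.seed(2)
#   ldat2 <- data.frame(y = rlino(1000, shape1 = exp(0.5),
#                                 shape2 = exp(1), lambda = exp(0)))
#   fit <- vglm(y ~ 1, lino, data = ldat2, trace = TRUE)
#   coef(fit, matrix = TRUE)  # loge(shape1), loge(shape2), loge(lambda)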
ncol.w.max = 1, ncol.y.max = 1) predictors.names <- c(namesof("shape1", .link , earg = .earg , short = TRUE), namesof("shape2", .link , earg = .earg , short = TRUE)) if (is.numeric( .i1) && is.numeric( .i2)) { vec <- c( .i1, .i2) vec <- c(theta2eta(vec[1], .link , earg = .earg ), theta2eta(vec[2], .link , earg = .earg )) etastart <- matrix(vec, n, 2, byrow = TRUE) } if (!length(etastart)) { init1 <- if (length( .i1)) rep( .i1, length.out = n) else rep(1, length.out = n) init2 <- if (length( .i2)) rep( .i2, length.out = n) else 1 + init1 / (y + 0.1) etastart <- matrix(theta2eta(c(init1, init2), .link , earg = .earg ), n, 2, byrow = TRUE) } }), list( .link = link, .earg = earg, .i1 = i1, .i2 = i2 ))), linkinv = eval(substitute(function(eta, extra = NULL) { shapes <- eta2theta(eta, .link , earg = .earg ) ifelse(shapes[, 2] > 1, shapes[, 1] / (shapes[, 2]-1), NA) }, list( .link = link, .earg = earg ))), last = eval(substitute(expression({ misc$link <- c(shape1 = .link , shape2 = .link) misc$earg <- list(shape1 = .earg , shape2 = .earg ) }), list( .link = link, .earg = earg ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL){ shapes <- eta2theta(eta, .link , earg = .earg ) if (residuals) stop("loglikelihood residuals ", "not implemented yet") else { sum(c(w) *((shapes[, 1]-1) * log(y) - lbeta(shapes[, 1], shapes[, 2]) - (shapes[, 2]+shapes[, 1]) * log1p(y))) } }, list( .link = link, .earg = earg ))), vfamily = "betaprime", deriv = eval(substitute(expression({ shapes <- eta2theta(eta, .link , earg = .earg ) dshapes.deta <- dtheta.deta(shapes, .link , earg = .earg ) dl.dshapes <- cbind(log(y) - log1p(y) - digamma(shapes[, 1]) + digamma(shapes[, 1]+shapes[, 2]), - log1p(y) - digamma(shapes[, 2]) + digamma(shapes[, 1]+shapes[, 2])) c(w) * dl.dshapes * dshapes.deta }), list( .link = link, .earg = earg ))), weight = expression({ temp2 <- trigamma(shapes[, 1] + shapes[, 2]) d2l.dshape12 <- temp2 - trigamma(shapes[, 1]) d2l.dshape22 <- temp2 - trigamma(shapes[, 2]) d2l.dshape1shape2 <- temp2 wz <- matrix(as.numeric(NA), n, dimm(M)) #3=dimm(M) wz[, iam(1, 1, M)] <- d2l.dshape12 * dshapes.deta[, 1]^2 wz[, iam(2, 2, M)] <- d2l.dshape22 * dshapes.deta[, 2]^2 wz[, iam(1, 2, M)] <- d2l.dshape1shape2 * dshapes.deta[, 1] * dshapes.deta[, 2] -c(w) * wz })) } dmaxwell <- function(x, a, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) L <- max(length(x), length(a)) x <- rep(x, length.out = L); a = rep(a, length.out = L); logdensity <- rep(log(0), length.out = L) xok <- (x > 0) logdensity[xok] <- 0.5 * log(2/pi) + 1.5 * log(a[xok]) + 2 * log(x[xok]) - 0.5 * a[xok] * x[xok]^2 logdensity[a <= 0] <- NaN if (log.arg) logdensity else exp(logdensity) } pmaxwell <- function(q, a) { L <- max(length(q), length(a)) q <- rep(q, length.out = L); a <- rep(a, length.out = L); ans <- ifelse(q > 0, erf(q*sqrt(a/2)) - q*exp(-0.5*a*q^2) * sqrt(2*a/pi), 0) ans[a <= 0] <- NaN ans } rmaxwell <- function(n, a) { sqrt(2 * rgamma(n = n, 1.5) / a) } qmaxwell <- function(p, a) { if (!is.Numeric(p, positive = TRUE) || any(p >= 1)) stop("bad input for argument 'p'") if (any(a <= 0)) stop("argument 'a' must be positive") N <- max(length(p), length(a)); p <- rep(p, length.out = N); a <- rep(a, length.out = N) sqrt(2 * qgamma(p = p, 1.5) / a) } maxwell <- function(link = "loge", zero = NULL) { link <- as.list(substitute(link)) # orig earg <- link2list(link) link <- attr(earg, "function.name") if (length(zero) && !is.Numeric(zero, 
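# --- Illustrative sketch for betaprime(): if B ~ Beta(shape1, shape2) then
# B/(1 - B) is beta-prime, a standard fact used here only to simulate test
# data (names and values are invented):
#   set.seed(3)
#   B <- rbeta(1000, shape1 = 4, shape2 = 3)
#   bdat <- data.frame(y = B / (1 - B))
#   fit <- vglm(y ~ 1, betaprime, data = bdat, trace = TRUE)
#   Coef(fit)  # back-transformed shape1 and shape2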
integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'zero'") new("vglmff", blurb = c("Maxwell distribution f(y;a) = sqrt(2/pi) * a^(3/2) * y^2 *", " exp(-0.5*a*y^2), y>0, a>0\n", "Link: ", namesof("a", link, earg = earg), "\n", "\n", "Mean: sqrt(8 / (a * pi))"), constraints = eval(substitute(expression({ constraints <- cm.zero.vgam(constraints, x, .zero , M) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(Musual = 1, zero = .zero ) }, list( .zero = zero ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y ncoly <- ncol(y) Musual <- 1 extra$ncoly <- ncoly extra$Musual <- Musual M <- Musual * ncoly mynames1 <- paste("a", if (ncoly > 1) 1:ncoly else "", sep = "") predictors.names <- namesof(mynames1, .link , earg = .earg ) if (!length(etastart)) { a.init <- 8 / (pi * (y + 0.1)^2) etastart <- theta2eta(a.init, .link , earg = .earg ) } }), list( .link = link, .earg = earg ))), linkinv = eval(substitute(function(eta, extra = NULL) { aa <- eta2theta(eta, .link , earg = .earg ) sqrt(8 / (aa * pi)) }, list( .link = link, .earg = earg ))), last = eval(substitute(expression({ Musual <- extra$Musual misc$earg <- vector("list", M) names(misc$earg) <- mynames1 for(ilocal in 1:ncoly) { misc$earg[[ilocal]] <- .earg } misc$link <- rep( .link , length = ncoly) names(misc$link) <- mynames1 misc$Musual <- Musual misc$expected <- TRUE misc$multipleResponses <- TRUE }), list( .link = link, .earg = earg ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { aa <- eta2theta(eta, .link , earg = .earg ) if (residuals) stop("loglikelihood residuals ", "not implemented yet") else sum(c(w) * dmaxwell(x = y, a = aa, log = TRUE)) }, list( .link = link, .earg = earg ))), vfamily = c("maxwell"), deriv = eval(substitute(expression({ aa <- eta2theta(eta, .link , earg = .earg ) dl.da <- 1.5 / aa - 0.5 * y^2 da.deta <- dtheta.deta(aa, .link , earg = .earg ) c(w) * dl.da * da.deta }), list( .link = link, .earg = earg ))), weight = eval(substitute(expression({ ned2l.da2 <- 1.5 / aa^2 wz <- c(w) * ned2l.da2 * da.deta^2 wz }), list( .link = link, .earg = earg )))) } dnaka <- function(x, shape, scale = 1, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) L <- max(length(x), length(shape), length(scale)) x <- rep(x, length.out = L) shape <- rep(shape, length.out = L) scale <- rep(scale, length.out = L); logdensity <- rep(log(0), length.out = L) xok <- (x > 0) logdensity[xok] <- dgamma(x = x[xok]^2, shape = shape[xok], scale = scale[xok]/shape[xok], log = TRUE) + log(2) + log(x[xok]) if (log.arg) logdensity else exp(logdensity) } pnaka <- function(q, shape, scale = 1) { if (!is.Numeric(q)) stop("bad input for argument 'q'") if (!is.Numeric(shape, positive = TRUE)) stop("bad input for argument 'shape'") if (!is.Numeric(scale, positive = TRUE)) stop("bad input for argument 'scale'") L <- max(length(q), length(shape), length(scale)) q <- rep(q, length.out = L) shape <- rep(shape, length.out = L) scale <- rep(scale, length.out = L); ifelse(q <= 0, 0, pgamma(shape * q^2 / scale, shape)) } qnaka <- function(p, shape, scale = 1, ...) 
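# --- Illustrative checks for the Maxwell functions above (values invented):
# qmaxwell() should invert pmaxwell(), and maxwell() should recover 'a'.
#   p <- (1:9) / 10
#   max(abs(pmaxwell(qmaxwell(p, a = 2), a = 2) - p))  # approx 0
#   mdat <- data.frame(y = rmaxwell(1000, a = exp(2)))
#   fit <- vglm(y ~ 1, maxwell, data = mdat, trace = TRUE)
#   head(fitted(fit), 1)  # estimate of the mean sqrt(8/(a*pi))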
{ if (!is.Numeric(p, positive = TRUE) || max(p) >= 1) stop("bad input for argument 'p'") if (!is.Numeric(shape, positive = TRUE)) stop("bad input for argument 'shape'") if (!is.Numeric(scale, positive = TRUE)) stop("bad input for argument 'scale'") L <- max(length(p), length(shape), length(scale)) p <- rep(p, length.out = L); shape = rep(shape, length.out = L); scale <- rep(scale, length.out = L); ans <- rep(0.0, length.out = L) myfun <- function(x, shape, scale = 1, p) pnaka(q = x, shape = shape, scale = scale) - p for(ii in 1:L) { EY <- sqrt(scale[ii]/shape[ii]) * gamma(shape[ii] + 0.5) / gamma(shape[ii]) Upper <- 5 * EY while(pnaka(q = Upper, shape = shape[ii], scale = scale[ii]) < p[ii]) Upper <- Upper + scale[ii] ans[ii] <- uniroot(f = myfun, lower = 0, upper = Upper, shape = shape[ii], scale = scale[ii], p = p[ii], ...)$root } ans } rnaka <- function(n, shape, scale = 1, Smallno = 1.0e-6) { use.n <- if ((length.n <- length(n)) > 1) length.n else if (!is.Numeric(n, integer.valued = TRUE, allowable.length = 1, positive = TRUE)) stop("bad input for argument 'n'") else n if (!is.Numeric(scale, positive = TRUE, allowable.length = 1)) stop("bad input for argument 'scale'") if (!is.Numeric(shape, positive = TRUE, allowable.length = 1)) stop("bad input for argument 'shape'") if (!is.Numeric(Smallno, positive = TRUE, allowable.length = 1) || Smallno > 0.01 || Smallno < 2 * .Machine$double.eps) stop("bad input for argument 'Smallno'") ans <- rep(0.0, length.out = use.n) ptr1 <- 1; ptr2 <- 0 ymax <- dnaka(x = sqrt(scale * (1 - 0.5 / shape)), shape = shape, scale = scale) while(ptr2 < use.n) { EY <- sqrt(scale / shape) * gamma(shape + 0.5) / gamma(shape) Upper <- EY + 5 * scale while(pnaka(q = Upper, shape = shape, scale = scale) < 1 - Smallno) Upper <- Upper + scale x <- runif(2*use.n, min = 0, max = Upper) index <- runif(2*use.n, max = ymax) < dnaka(x, shape = shape, scale = scale) sindex <- sum(index) if (sindex) { ptr2 <- min(use.n, ptr1 + sindex - 1) ans[ptr1:ptr2] <- (x[index])[1:(1+ptr2-ptr1)] ptr1 <- ptr2 + 1 } } ans } nakagami <- function(lshape = "loge", lscale = "loge", ishape = NULL, iscale = 1) { if (!is.null(iscale) && !is.Numeric(iscale, positive = TRUE)) stop("argument 'iscale' must be a positive number or NULL") lshape <- as.list(substitute(lshape)) eshape <- link2list(lshape) lshape <- attr(eshape, "function.name") lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") new("vglmff", blurb = c("Nakagami distribution f(y) = 2 * (shape/scale)^shape *\n", " ", "y^(2*shape-1) * exp(-shape*y^2/scale) / gamma(shape),\n", " ", "y>0, shape>0, scale>0\n", "Links: ", namesof("shape", lshape, earg = eshape), ", ", namesof("scale", lscale, earg = escale), "\n", "\n", "Mean: sqrt(scale/shape) * gamma(shape+0.5) / gamma(shape)"), initialize = eval(substitute(expression({ w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1) predictors.names <- c(namesof("shape", .lshape , earg = .eshape, tag = FALSE), namesof("scale", .lscale , earg = .escale , tag = FALSE)) if (!length(etastart)) { init2 <- if (is.Numeric( .iscale, positive = TRUE)) rep( .iscale, length.out = n) else rep(1, length.out = n) init1 <- if (is.Numeric( .ishape, positive = TRUE)) rep( .ishape, length.out = n) else rep(init2 / (y+1/8)^2, length.out = n) etastart <- cbind(theta2eta(init1, .lshape , earg = .eshape ), theta2eta(init2, .lscale , earg = .escale )) } }), list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape, .ishape = ishape, .iscale = iscale 
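# --- Illustrative round-trip check for the Nakagami d/p/q functions above
# (parameter values invented): qnaka() numerically inverts pnaka().
#   p <- (1:9) / 10
#   max(abs(pnaka(qnaka(p, shape = 2, scale = 3),
#                 shape = 2, scale = 3) - p))  # approx 0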
))), linkinv = eval(substitute(function(eta, extra = NULL) { shape <- eta2theta(eta[, 1], .lshape , earg = .eshape ) scale <- eta2theta(eta[, 2], .lscale , earg = .escale ) sqrt(scale/shape) * gamma(shape+0.5) / gamma(shape) }, list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape))), last = eval(substitute(expression({ misc$link <- c(shape = .lshape , scale = .lscale) misc$earg <- list(shape = .eshape, scale = .escale ) misc$expected = TRUE }), list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { shape <- eta2theta(eta[, 1], .lshape , earg = .eshape ) scale <- eta2theta(eta[, 2], .lscale , earg = .escale ) if (residuals) stop("loglikelihood residuals ", "not implemented yet") else sum(c(w) * dnaka(x = y, shape = shape, scale = scale, log = TRUE)) }, list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape))), vfamily = c("nakagami"), deriv = eval(substitute(expression({ shape <- eta2theta(eta[, 1], .lshape , earg = .eshape ) Scale <- eta2theta(eta[, 2], .lscale , earg = .escale ) dl.dshape <- 1 + log(shape/Scale) - digamma(shape) + 2 * log(y) - y^2 / Scale dl.dscale <- -shape/Scale + shape * (y/Scale)^2 dshape.deta <- dtheta.deta(shape, .lshape , earg = .eshape ) dscale.deta <- dtheta.deta(Scale, .lscale , earg = .escale ) c(w) * cbind(dl.dshape * dshape.deta, dl.dscale * dscale.deta) }), list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape))), weight = eval(substitute(expression({ d2l.dshape2 <- trigamma(shape) - 1/shape d2l.dscale2 <- shape / Scale^2 wz <- matrix(as.numeric(NA), n, M) # diagonal wz[, iam(1, 1, M)] <- d2l.dshape2 * dshape.deta^2 wz[, iam(2, 2, M)] <- d2l.dscale2 * dscale.deta^2 c(w) * wz }), list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape)))) } drayleigh <- function(x, scale = 1, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) L <- max(length(x), length(scale)) x <- rep(x, length.out = L); scale = rep(scale, length.out = L); logdensity <- rep(log(0), length.out = L) xok <- (x > 0) logdensity[xok] <- log(x[xok]) - 0.5 * (x[xok]/scale[xok])^2 - 2 * log(scale[xok]) if (log.arg) logdensity else exp(logdensity) } prayleigh <- function(q, scale = 1) { if (any(scale <= 0)) stop("argument 'scale' must be positive") L <- max(length(q), length(scale)) q <- rep(q, length.out = L); scale = rep(scale, length.out = L); ifelse(q > 0, -expm1(-0.5*(q/scale)^2), 0) } qrayleigh <- function(p, scale = 1) { if (any(p <= 0) || any(p >= 1)) stop("argument 'p' must be between 0 and 1") ans <- scale * sqrt(-2 * log1p(-p)) ans[scale <= 0] <- NaN ans } rrayleigh <- function(n, scale = 1) { ans <- scale * sqrt(-2 * log(runif(n))) ans[scale <= 0] <- NaN ans } rayleigh <- function(lscale = "loge", nrfs = 1 / 3 + 0.01, oim.mean = TRUE, zero = NULL) { lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") if (!is.Numeric(nrfs, allowable.length = 1) || nrfs < 0 || nrfs > 1) stop("bad input for 'nrfs'") if (!is.logical(oim.mean) || length(oim.mean) != 1) stop("bad input for argument 'oim.mean'") if (length(zero) && !is.Numeric(zero, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'zero'") new("vglmff", blurb = c("Rayleigh distribution\n\n", "f(y) = y*exp(-0.5*(y/scale)^2)/scale^2, y>0, scale>0\n\n", "Link: ", namesof("scale", lscale, earg = escale), "\n\n", 
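# --- Illustrative usage sketch for nakagami(), with data drawn from the
# rnaka() rejection sampler above (names and values invented):
#   set.seed(4)
#   ndat <- data.frame(y = rnaka(1000, shape = 2, scale = 3))
#   fit <- vglm(y ~ 1, nakagami, data = ndat, trace = TRUE)
#   coef(fit, matrix = TRUE)  # loge(shape), loge(scale)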
"Mean: scale * sqrt(pi / 2)"), constraints = eval(substitute(expression({ dotzero <- .zero Musual <- 1 eval(negzero.expression) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(Musual = 1, zero = .zero ) }, list( .zero = zero ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y ncoly <- ncol(y) Musual <- 1 extra$ncoly <- ncoly extra$Musual <- Musual M <- Musual * ncoly mynames1 <- paste("scale", if (ncoly > 1) 1:ncoly else "", sep = "") predictors.names <- namesof(mynames1, .lscale , earg = .escale , tag = FALSE) if (!length(etastart)) { Ymat <- matrix(colSums(y) / colSums(w), n, ncoly, byrow = TRUE) b.init <- (Ymat + 1/8) / sqrt(pi/2) etastart <- theta2eta(b.init, .lscale , earg = .escale ) } }), list( .lscale = lscale, .escale = escale))), linkinv = eval(substitute(function(eta, extra = NULL) { Scale <- eta2theta(eta, .lscale , earg = .escale ) Scale * sqrt(pi / 2) }, list( .lscale = lscale, .escale = escale))), last = eval(substitute(expression({ Musual <- extra$Musual misc$link <- c(rep( .lscale , length = ncoly)) names(misc$link) <- mynames1 misc$earg <- vector("list", M) names(misc$earg) <- mynames1 for(ii in 1:ncoly) { misc$earg[[ii]] <- .escale } misc$Musual <- Musual misc$multipleResponses <- TRUE misc$nrfs <- .nrfs }), list( .lscale = lscale, .escale = escale, .nrfs = nrfs ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { Scale <- eta2theta(eta, .lscale , earg = .escale ) if (residuals) stop("loglikelihood residuals ", "not implemented yet") else { sum(c(w) * drayleigh(x = y, scale = Scale, log = TRUE)) } }, list( .lscale = lscale, .escale = escale))), vfamily = c("rayleigh"), deriv = eval(substitute(expression({ Scale <- eta2theta(eta, .lscale , earg = .escale ) dl.dScale <- ((y/Scale)^2 - 2) / Scale dScale.deta <- dtheta.deta(Scale, .lscale , earg = .escale ) c(w) * dl.dScale * dScale.deta }), list( .lscale = lscale, .escale = escale))), weight = eval(substitute(expression({ d2l.dScale2 <- (3 * (y/Scale)^2 - 2) / Scale^2 ned2l.dScale2 <- 4 / Scale^2 wz <- c(w) * dScale.deta^2 * ((1 - .nrfs) * d2l.dScale2 + .nrfs * ned2l.dScale2) if (intercept.only && .oim.mean ) { ave.oim <- weighted.mean(d2l.dScale2, w) if (ave.oim > 0) { wz <- c(w) * dScale.deta^2 * ave.oim } } wz }), list( .lscale = lscale, .escale = escale, .nrfs = nrfs, .oim.mean = oim.mean )))) } dparetoIV <- function(x, location = 0, scale = 1, inequality = 1, shape = 1, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) N <- max(length(x), length(location), length(scale), length(inequality), length(shape)) x <- rep(x, length.out = N); location <- rep(location, length.out = N) scale <- rep(scale, length.out = N); inequality <- rep(inequality, length.out = N) shape <- rep(shape, length.out = N) logdensity <- rep(log(0), length.out = N) xok <- (x > location) zedd <- (x - location) / scale logdensity[xok] <- log(shape[xok]) - log(scale[xok]) - log(inequality[xok]) + (1/inequality[xok]-1) * log(zedd[xok]) - (shape[xok]+1) * log1p(zedd[xok]^(1/inequality[xok])) if (log.arg) logdensity else exp(logdensity) } pparetoIV <- function(q, location = 0, scale = 1, inequality = 1, shape = 1) { if (!is.Numeric(q)) stop("bad input for argument 'q'") if (!is.Numeric(scale, positive = TRUE)) stop("bad input for argument 'scale'") if 
(!is.Numeric(inequality, positive = TRUE)) stop("bad input for argument 'inequality'") if (!is.Numeric(shape, positive = TRUE)) stop("bad input for argument 'shape'") N <- max(length(q), length(location), length(scale), length(inequality), length(shape)) q <- rep(q, length.out = N); location <- rep(location, length.out = N) scale <- rep(scale, length.out = N); inequality <- rep(inequality, length.out = N) shape <- rep(shape, length.out = N) answer <- q * 0 ii <- q > location zedd <- (q[ii] - location[ii]) / scale[ii] answer[ii] <- 1 - (1 + zedd^(1/inequality[ii]))^(-shape[ii]) answer } qparetoIV <- function(p, location = 0, scale = 1, inequality = 1, shape = 1) { if (!is.Numeric(p, positive = TRUE) || any(p >= 1)) stop("bad input for argument 'p'") if (!is.Numeric(inequality, positive = TRUE)) stop("bad input for argument 'inequality'") if (!is.Numeric(shape, positive = TRUE)) stop("bad input for argument 'shape'") ans <- location + scale * (-1 + (1-p)^(-1/shape))^inequality ans[scale <= 0] <- NaN ans[shape <= 0] <- NaN ans } rparetoIV = function(n, location = 0, scale = 1, inequality = 1, shape = 1) { if (!is.Numeric(inequality, positive = TRUE)) stop("bad input for argument 'inequality'") ans <- location + scale * (-1 + runif(n)^(-1/shape))^inequality ans[scale <= 0] <- NaN ans[shape <= 0] <- NaN ans } dparetoIII <- function(x, location = 0, scale = 1, inequality = 1, log = FALSE) dparetoIV(x = x, location = location, scale = scale, inequality = inequality, shape = 1, log = log) pparetoIII <- function(q, location = 0, scale = 1, inequality = 1) pparetoIV(q = q, location = location, scale = scale, inequality = inequality, shape = 1) qparetoIII <- function(p, location = 0, scale = 1, inequality = 1) qparetoIV(p = p, location = location, scale = scale, inequality = inequality, shape = 1) rparetoIII <- function(n, location = 0, scale = 1, inequality = 1) rparetoIV(n = n, location= location, scale = scale, inequality = inequality, shape = 1) dparetoII <- function(x, location = 0, scale = 1, shape = 1, log = FALSE) dparetoIV(x = x, location = location, scale = scale, inequality = 1, shape = shape, log = log) pparetoII <- function(q, location = 0, scale = 1, shape = 1) pparetoIV(q = q, location = location, scale = scale, inequality = 1, shape = shape) qparetoII <- function(p, location = 0, scale = 1, shape = 1) qparetoIV(p = p, location = location, scale = scale, inequality = 1, shape = shape) rparetoII <- function(n, location = 0, scale = 1, shape = 1) rparetoIV(n = n, location = location, scale = scale, inequality = 1, shape = shape) dparetoI <- function(x, scale = 1, shape = 1) dparetoIV(x = x, location = scale, scale = scale, inequality = 1, shape = shape) pparetoI <- function(q, scale = 1, shape = 1) pparetoIV(q = q, location = scale, scale = scale, inequality = 1, shape = shape) qparetoI <- function(p, scale = 1, shape = 1) qparetoIV(p = p, location = scale, scale = scale, inequality = 1, shape = shape) rparetoI <- function(n, scale = 1, shape = 1) rparetoIV(n = n, location = scale, scale = scale, inequality = 1, shape = shape) paretoIV <- function(location = 0, lscale = "loge", linequality = "loge", lshape = "loge", iscale = 1, iinequality = 1, ishape = NULL, imethod = 1) { if (!is.Numeric(location)) stop("argument 'location' must be numeric") if (is.Numeric(iscale) && any(iscale <= 0)) stop("argument 'iscale' must be positive") if (is.Numeric(iinequality) && any(iinequality <= 0)) stop("argument 'iinequality' must be positive") if (is.Numeric(ishape) && any(ishape <= 0)) stop("argument 
'ishape' must be positive") if (!is.Numeric(imethod, allowable.length = 1, integer.valued = TRUE) || imethod > 2) stop("bad input for argument 'imethod'") if (linequality == "nloge" && location != 0) warning("The Burr distribution has 'location = 0' and ", "'linequality = nloge'") lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") linequ <- as.list(substitute(linequality)) einequ <- link2list(linequ) linequ <- attr(einequ, "function.name") lshape <- as.list(substitute(lshape)) eshape <- link2list(lshape) lshape <- attr(eshape, "function.name") iinequ <- iinequality new("vglmff", blurb = c("Pareto(IV) distribution F(y)=1-[1+((y - ", location, ")/scale)^(1/inequality)]^(-shape),", "\n", " y > ", location, ", scale > 0, inequality > 0, shape > 0,\n", "Links: ", namesof("scale", lscale, earg = escale), ", ", namesof("inequality", linequ, earg = einequ ), ", ", namesof("shape", lshape, earg = eshape), "\n", "Mean: location + scale * NA"), initialize = eval(substitute(expression({ w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1) predictors.names <- c(namesof("scale", .lscale , earg = .escale , tag = FALSE), namesof("inequality", .linequ , earg = .einequ , tag = FALSE), namesof("shape", .lshape , earg = .eshape, tag = FALSE)) extra$location <- location <- .location if (any(y <= location)) stop("the response must have values > than the 'location' argument") if (!length(etastart)) { inequ.init <- if (length( .iinequ )) .iinequ else 1 scale.init <- if (length( .iscale )) .iscale else 1 shape.init <- if (length( .ishape )) .ishape else NULL if (!length(shape.init)) { zedd <- (y - location) / scale.init if ( .imethod == 1) { A1 <- weighted.mean(1/(1 + zedd^(1/inequ.init)), w = w) A2 <- weighted.mean(1/(1 + zedd^(1/inequ.init))^2, w = w) } else { A1 <- median(1/(1 + zedd^(1/inequ.init ))) A2 <- median(1/(1 + zedd^(1/inequ.init))^2) } shape.init <- max(0.01, (2*A2-A1)/(A1-A2)) } etastart <- cbind( theta2eta(rep(scale.init, length.out = n), .lscale , earg = .escale ), theta2eta(rep(inequ.init, length.out = n), .linequ, earg = .einequ), theta2eta(rep(shape.init, length.out = n), .lshape , earg = .eshape )) } }), list( .location = location, .lscale = lscale, .linequ = linequ, .lshape = lshape, .imethod = imethod, .escale = escale, .einequ = einequ, .eshape = eshape, .iscale = iscale, .iinequ = iinequ, .ishape = ishape ))), linkinv = eval(substitute(function(eta, extra = NULL) { location <- extra$location Scale <- eta2theta(eta[, 1], .lscale , earg = .escale ) inequ <- eta2theta(eta[, 2], .linequ, earg = .einequ) shape <- eta2theta(eta[, 3], .lshape , earg = .eshape ) location + Scale * NA }, list( .lscale = lscale, .linequ = linequ, .lshape = lshape, .escale = escale, .einequ = einequ, .eshape = eshape))), last = eval(substitute(expression({ misc$link <- c("scale" = .lscale , "inequality" = .linequ, "shape" = .lshape) misc$earg <- list("scale" = .escale , "inequality" = .einequ, "shape" = .eshape ) misc$location = extra$location # Use this for prediction }), list( .lscale = lscale, .linequ = linequ, .escale = escale, .einequ = einequ, .lshape = lshape, .eshape = eshape))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { location <- extra$location Scale <- eta2theta(eta[, 1], .lscale , earg = .escale ) inequ <- eta2theta(eta[, 2], .linequ, earg = .einequ) shape <- eta2theta(eta[, 3], .lshape , earg = .eshape ) zedd <- (y - location) / Scale if (residuals) stop("loglikelihood residuals ", "not 
implemented yet") else { sum(c(w) * dparetoIV(x = y, location = location, scale = Scale, inequ = inequ, shape = shape, log = TRUE)) } }, list( .lscale = lscale, .linequ = linequ, .escale = escale, .einequ = einequ, .lshape = lshape, .eshape = eshape))), vfamily = c("paretoIV"), deriv = eval(substitute(expression({ location = extra$location Scale <- eta2theta(eta[, 1], .lscale , earg = .escale ) inequ <- eta2theta(eta[, 2], .linequ, earg = .einequ) shape <- eta2theta(eta[, 3], .lshape , earg = .eshape ) zedd <- (y - location) / Scale temp100 <- 1 + zedd^(1/inequ) dl.dscale <- (shape - (1+shape) / temp100) / (inequ * Scale) dl.dinequ <- ((log(zedd) * (shape - (1+shape)/temp100)) / inequ - 1) / inequ dl.dshape <- -log(temp100) + 1/shape dscale.deta <- dtheta.deta(Scale, .lscale , earg = .escale ) dinequ.deta <- dtheta.deta(inequ, .linequ, earg = .einequ) dshape.deta <- dtheta.deta(shape, .lshape , earg = .eshape ) c(w) * cbind(dl.dscale * dscale.deta, dl.dinequ * dinequ.deta, dl.dshape * dshape.deta) }), list( .lscale = lscale, .linequ = linequ, .lshape = lshape, .escale = escale, .einequ = einequ, .eshape = eshape))), weight = eval(substitute(expression({ temp200 <- digamma(shape) - digamma(1) - 1 d2scale.deta2 <- shape / ((inequ*Scale)^2 * (shape+2)) d2inequ.deta2 <- (shape * (temp200^2 + trigamma(shape) + trigamma(1) ) + 2*(temp200+1)) / (inequ^2 * (shape+2)) d2shape.deta2 <- 1 / shape^2 d2si.deta2 <- (shape*(-temp200) -1) / (inequ^2 * Scale * (shape+2)) d2ss.deta2 <- -1 / ((inequ*Scale) * (shape+1)) d2is.deta2 <- temp200 / (inequ*(shape+1)) wz <- matrix(0, n, dimm(M)) wz[, iam(1, 1, M)] <- dscale.deta^2 * d2scale.deta2 wz[, iam(2, 2, M)] <- dinequ.deta^2 * d2inequ.deta2 wz[, iam(3, 3, M)] <- dshape.deta^2 * d2shape.deta2 wz[, iam(1, 2, M)] <- dscale.deta * dinequ.deta * d2si.deta2 wz[, iam(1, 3, M)] <- dscale.deta * dshape.deta * d2ss.deta2 wz[, iam(2, 3, M)] <- dinequ.deta * dshape.deta * d2is.deta2 c(w) * wz }), list( .lscale = lscale, .linequ = linequ, .lshape = lshape, .escale = escale, .einequ = einequ, .eshape = eshape)))) } paretoIII <- function(location = 0, lscale = "loge", linequality = "loge", iscale = NULL, iinequality = NULL) { if (!is.Numeric(location)) stop("argument 'location' must be numeric") if (is.Numeric(iscale) && any(iscale <= 0)) stop("argument 'iscale' must be positive") if (is.Numeric(iinequality) && any(iinequality <= 0)) stop("argument 'iinequality' must be positive") lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") linequ <- as.list(substitute(linequality)) einequ <- link2list(linequ) linequ <- attr(einequ, "function.name") iinequ <- iinequality new("vglmff", blurb = c("Pareto(III) distribution F(y)=1-[1+((y - ", location, ")/scale)^(1/inequality)]^(-1),", "\n", " y > ", location, ", scale > 0, inequality > 0, \n", "Links: ", namesof("scale", lscale, earg = escale), ", ", namesof("inequality", linequ, earg = einequ ), "\n", "Mean: location + scale * NA"), initialize = eval(substitute(expression({ w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1) predictors.names <- c(namesof("scale", .lscale , earg = .escale , tag = FALSE), namesof("inequ", .linequ, earg = .einequ, tag = FALSE)) extra$location = location = .location if (any(y <= location)) stop("the response must have values > than the 'location' argument") if (!length(etastart)) { inequ.init <- if (length( .iinequ)) .iinequ else NULL scale.init <- if (length( .iscale )) .iscale else NULL if (!length(inequ.init) || !length(scale.init)) { probs <- 
(1:4)/5 ytemp <- quantile(x = log(y-location), probs = probs) fittemp <- lsfit(x = logit(probs), y = ytemp, intercept = TRUE) if (!length(inequ.init)) inequ.init <- max(fittemp$coef["X"], 0.01) if (!length(scale.init)) scale.init <- exp(fittemp$coef["Intercept"]) } etastart=cbind( theta2eta(rep(scale.init, length.out = n), .lscale , earg = .escale ), theta2eta(rep(inequ.init, length.out = n), .linequ, earg = .einequ)) } }), list( .location = location, .lscale = lscale, .linequ = linequ, .escale = escale, .einequ = einequ, .iscale = iscale, .iinequ = iinequ ))), linkinv = eval(substitute(function(eta, extra = NULL) { location <- extra$location Scale <- eta2theta(eta[, 1], .lscale , earg = .escale ) inequ <- eta2theta(eta[, 2], .linequ, earg = .einequ) location + Scale * NA }, list( .lscale = lscale, .linequ = linequ, .escale = escale, .einequ = einequ ))), last = eval(substitute(expression({ misc$link <- c("scale" = .lscale , "inequality" = .linequ) misc$earg <- list("scale" = .escale , "inequality" = .einequ) misc$location <- extra$location # Use this for prediction }), list( .lscale = lscale, .linequ = linequ, .escale = escale, .einequ = einequ ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { location <- extra$location Scale <- eta2theta(eta[, 1], .lscale , earg = .escale ) inequ <- eta2theta(eta[, 2], .linequ, earg = .einequ) zedd <- (y - location) / Scale if (residuals) stop("loglikelihood residuals ", "not implemented yet") else { sum(c(w) * dparetoIII(x = y, location = location, scale=Scale, inequ=inequ, log = TRUE)) } }, list( .lscale = lscale, .linequ = linequ, .escale = escale, .einequ = einequ ))), vfamily = c("paretoIII"), deriv = eval(substitute(expression({ location <- extra$location Scale <- eta2theta(eta[, 1], .lscale , earg = .escale ) inequ <- eta2theta(eta[, 2], .linequ, earg = .einequ) shape <- 1 zedd <- (y - location) / Scale temp100 <- 1 + zedd^(1/inequ) dl.dscale <- (shape - (1+shape) / temp100) / (inequ * Scale) dl.dinequ <- ((log(zedd) * (shape - (1+shape)/temp100)) / inequ - 1) / inequ dscale.deta <- dtheta.deta(Scale, .lscale , earg = .escale ) dinequ.deta <- dtheta.deta(inequ, .linequ, earg = .einequ) c(w) * cbind(dl.dscale * dscale.deta, dl.dinequ * dinequ.deta) }), list( .lscale = lscale, .linequ = linequ, .escale = escale, .einequ = einequ ))), weight = eval(substitute(expression({ d2scale.deta2 <- 1 / ((inequ*Scale)^2 * 3) d2inequ.deta2 <- (1 + 2* trigamma(1)) / (inequ^2 * 3) wz <- matrix(0, n, M) # It is diagonal wz[, iam(1, 1, M)] <- dscale.deta^2 * d2scale.deta2 wz[, iam(2, 2, M)] <- dinequ.deta^2 * d2inequ.deta2 c(w) * wz }), list( .lscale = lscale, .linequ = linequ, .escale = escale, .einequ = einequ )))) } paretoII <- function(location = 0, lscale = "loge", lshape = "loge", iscale = NULL, ishape = NULL) { if (!is.Numeric(location)) stop("argument 'location' must be numeric") if (is.Numeric(iscale) && any(iscale <= 0)) stop("argument 'iscale' must be positive") if (is.Numeric(ishape) && any(ishape <= 0)) stop("argument 'ishape' must be positive") lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") lshape <- as.list(substitute(lshape)) eshape <- link2list(lshape) lshape <- attr(eshape, "function.name") new("vglmff", blurb = c("Pareto(II) distribution F(y)=1-[1+(y - ", location, ")/scale]^(-shape),", "\n", " y > ", location, ", scale > 0, shape > 0,\n", "Links: ", namesof("scale", lscale, earg = escale), ", ", namesof("shape", lshape, earg = eshape), "\n", 
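# --- Illustrative usage sketch for paretoIII() (names and values invented):
#   pdat3 <- data.frame(y = rparetoIII(2000, scale = exp(1),
#                                      inequality = exp(-0.5)))
#   fit <- vglm(y ~ 1, paretoIII, data = pdat3, trace = TRUE)
#   coef(fit, matrix = TRUE)  # loge(scale), loge(inequality)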
"Mean: location + scale * NA"), initialize = eval(substitute(expression({ w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1) predictors.names <- c(namesof("scale", .lscale , earg = .escale , tag = FALSE), namesof("shape", .lshape , earg = .eshape , tag = FALSE)) extra$location <- location <- .location if (any(y <= location)) stop("the response must have values > than the 'location' argument") if (!length(etastart)) { scale.init <- if (length( .iscale )) .iscale else NULL shape.init <- if (length( .ishape )) .ishape else NULL if (!length(shape.init) || !length(scale.init)) { probs <- (1:4)/5 scale.init.0 <- 1 ytemp <- quantile(x = log(y-location+scale.init.0), probs = probs) fittemp <- lsfit(x = log1p(-probs), y = ytemp, intercept = TRUE) if (!length(shape.init)) shape.init <- max(-1/fittemp$coef["X"], 0.01) if (!length(scale.init)) scale.init <- exp(fittemp$coef["Intercept"]) } etastart=cbind( theta2eta(rep(scale.init, length.out = n), .lscale , earg = .escale ), theta2eta(rep(shape.init, length.out = n), .lshape , earg = .eshape )) } }), list( .location = location, .lscale = lscale, .escale = escale, .eshape = eshape, .lshape = lshape, .iscale = iscale, .ishape = ishape ))), linkinv = eval(substitute(function(eta, extra = NULL) { location <- extra$location Scale <- eta2theta(eta[, 1], .lscale , earg = .escale ) shape <- eta2theta(eta[, 2], .lshape , earg = .eshape ) location + Scale * NA }, list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape))), last = eval(substitute(expression({ misc$link <- c("scale" = .lscale , "shape" = .lshape) misc$earg <- list("scale" = .escale , "shape" = .eshape ) misc$location <- extra$location # Use this for prediction }), list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { location <- extra$location Scale <- eta2theta(eta[, 1], .lscale , earg = .escale ) shape <- eta2theta(eta[, 2], .lshape , earg = .eshape ) zedd <- (y - location) / Scale if (residuals) stop("loglikelihood residuals ", "not implemented yet") else { sum(c(w) * dparetoII(x = y, location = location, scale=Scale, shape = shape, log = TRUE)) } }, list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape))), vfamily = c("paretoII"), deriv = eval(substitute(expression({ location <- extra$location Scale <- eta2theta(eta[, 1], .lscale , earg = .escale ) shape <- eta2theta(eta[, 2], .lshape , earg = .eshape ) zedd <- (y - location) / Scale temp100 <- 1 + zedd dl.dscale <- (shape - (1+shape) / temp100) / (1 * Scale) dl.dshape <- -log(temp100) + 1/shape dscale.deta <- dtheta.deta(Scale, .lscale , earg = .escale ) dshape.deta <- dtheta.deta(shape, .lshape , earg = .eshape ) c(w) * cbind(dl.dscale * dscale.deta, dl.dshape * dshape.deta) }), list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape))), weight = eval(substitute(expression({ d2scale.deta2 <- shape / (Scale^2 * (shape+2)) d2shape.deta2 <- 1 / shape^2 d2ss.deta2 <- -1 / (Scale * (shape+1)) wz <- matrix(0, n, dimm(M)) wz[, iam(1, 1, M)] <- dscale.deta^2 * d2scale.deta2 wz[, iam(2, 2, M)] <- dshape.deta^2 * d2shape.deta2 wz[, iam(1, 2, M)] <- dscale.deta * dshape.deta * d2ss.deta2 c(w) * wz }), list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape)))) } dpareto <- function(x, location, shape, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) L <- max(length(x), 
length(location), length(shape))
  x <- rep(x, length.out = L)
  location <- rep(location, length.out = L)
  shape <- rep(shape, length.out = L)

  logdensity <- rep(log(0), length.out = L)
  xok <- (x > location)
  logdensity[xok] <- log(shape[xok]) + shape[xok] * log(location[xok]) -
                     (shape[xok]+1) * log(x[xok])
  if (log.arg) logdensity else exp(logdensity)
}


ppareto <- function(q, location, shape) {
  L <- max(length(q), length(location), length(shape))
  q <- rep(q, length.out = L)
  location <- rep(location, length.out = L)
  shape <- rep(shape, length.out = L)

  ans <- ifelse(q > location, 1 - (location/q)^shape, 0)
  ans[location <= 0] <- NaN
  ans[shape <= 0] <- NaN
  ans
}


qpareto <- function(p, location, shape) {
  if (any(p <= 0) || any(p >= 1))
    stop("argument 'p' must be between 0 and 1")

  ans <- location / (1 - p)^(1/shape)
  ans[location <= 0] <- NaN
  ans[shape <= 0] <- NaN
  ans
}


rpareto <- function(n, location, shape) {
  ans <- location / runif(n)^(1/shape)
  ans[location <= 0] <- NaN
  ans[shape <= 0] <- NaN
  ans
}


pareto1 <- function(lshape = "loge", location = NULL) {
  if (is.Numeric(location) && location <= 0)
    stop("argument 'location' must be positive")

  lshape <- as.list(substitute(lshape))
  eshape <- link2list(lshape)
  lshape <- attr(eshape, "function.name")
  earg <- eshape

  new("vglmff",
  blurb = c("Pareto distribution ",
            "f(y) = shape * location^shape / y^(shape+1),",
            " 0 < location < y, shape > 0\n",
            "Link: ", namesof("shape", lshape, earg = earg),
            "\n", "\n",
            "Mean: location*shape/(shape-1) for shape>1"),
  initialize = eval(substitute(expression({
    w.y.check(w = w, y = y,
              ncol.w.max = 1, ncol.y.max = 1)

    predictors.names <- namesof("shape", .lshape , earg = .earg , tag = FALSE)

    locationhat <- if (!length( .location)) {
      locationEstimated <- TRUE
      min(y)  # - .smallno
    } else {
      locationEstimated <- FALSE
      .location
    }
    if (any(y < locationhat))
      stop("the value of location is too high ",
           "(requires 0 < location < min(y))")
    extra$location <- locationhat
    extra$locationEstimated <- locationEstimated

    if (!length(etastart)) {
      k.init <- (y + 1/8) / (y - locationhat + 1/8)
      etastart <- theta2eta(k.init, .lshape , earg = .earg )
    }
  }), list( .lshape = lshape, .earg = earg, .location = location ))),
  linkinv = eval(substitute(function(eta, extra = NULL) {
    k <- eta2theta(eta, .lshape , earg = .earg )
    location <- extra$location
    ifelse(k > 1, k * location / (k-1), NA)
  }, list( .lshape = lshape, .earg = earg ))),
  last = eval(substitute(expression({
    misc$link <- c(k = .lshape)
    misc$earg <- list(k = .earg )
    misc$location <- extra$location  # Use this for prediction
  }), list( .lshape = lshape, .earg = earg ))),
  loglikelihood = eval(substitute(
    function(mu, y, w, residuals = FALSE, eta, extra = NULL) {
    k <- eta2theta(eta, .lshape , earg = .earg )
    location <- extra$location
    if (residuals) stop("loglikelihood residuals ",
                        "not implemented yet") else {
      sum(c(w) * (log(k) + k * log(location) - (k+1) * log(y)))
    }
  }, list( .lshape = lshape, .earg = earg ))),
  vfamily = c("pareto1"),
  deriv = eval(substitute(expression({
    location <- extra$location
    k <- eta2theta(eta, .lshape , earg = .earg )
    dl.dk <- 1/k + log(location/y)
    dk.deta <- dtheta.deta(k, .lshape , earg = .earg )
    c(w) * dl.dk * dk.deta
  }), list( .lshape = lshape, .earg = earg ))),
  weight = eval(substitute(expression({
    ed2l.dk2 <- 1 / k^2
    wz <- c(w) * dk.deta^2 * ed2l.dk2
    wz
  }), list( .lshape = lshape, .earg = earg ))))
}


dtpareto <- function(x, lower, upper, shape, log = FALSE) {
  if (!is.logical(log.arg <- log) || length(log) != 1)
    stop("bad input for argument 'log'")
  rm(log)

  if (!is.Numeric(x)) stop("bad input
for argument 'x'") if (!is.Numeric(lower, positive = TRUE)) stop("argument 'lower' must be positive") if (!is.Numeric(upper, positive = TRUE)) stop("argument 'upper' must be positive") if (!is.Numeric(shape, positive = TRUE)) stop("argument 'shape' must be positive") L <- max(length(x), length(lower), length(upper), length(shape)) x <- rep(x, length.out = L); shape <- rep(shape, length.out = L) lower <- rep(lower, length.out = L); upper <- rep(upper, length.out = L); logdensity <- rep(log(0), length.out = L) xok <- (0 < lower) & (lower < x) & (x < upper) & (shape > 0) logdensity[xok] <- log(shape[xok]) + shape[xok] * log(lower[xok]) - (shape[xok] + 1) * log(x[xok]) - log1p(-(lower[xok] / upper[xok])^(shape[xok])) logdensity[shape <= 0] <- NaN logdensity[upper < lower] <- NaN logdensity[0 > lower] <- NaN if (log.arg) logdensity else exp(logdensity) } ptpareto <- function(q, lower, upper, shape) { if (!is.Numeric(q)) stop("bad input for argument 'q'") L <- max(length(q), length(lower), length(upper), length(shape)) q <- rep(q, length.out = L); lower <- rep(lower, length.out = L); upper <- rep(upper, length.out = L); shape <- rep(shape, length.out = L) ans <- q * 0 xok <- (0 < lower) & (lower < q) & (q < upper) & (shape > 0) ans[xok] <- (1 - (lower[xok]/q[xok])^shape[xok]) / (1 - (lower[xok]/upper[xok])^shape[xok]) ans[q >= upper] <- 1 ans[upper < lower] <- NaN ans[lower <= 0] <- NaN ans[upper <= 0] <- NaN ans[shape <= 0] <- NaN ans } qtpareto <- function(p, lower, upper, shape) { if (!is.Numeric(p, positive = TRUE)) stop("bad input for argument 'p'") if (max(p) >= 1) stop("argument 'p' must be in (0, 1)") ans <- lower / (1 - p*(1-(lower/upper)^shape))^(1/shape) ans[lower <= 0] <- NaN ans[upper <= 0] <- NaN ans[shape <= 0] <- NaN ans[upper < lower] <- NaN ans } rtpareto <- function(n, lower, upper, shape) { ans <- qtpareto(p = runif(n), lower = lower, upper = upper, shape = shape) ans[lower <= 0] <- NaN ans[upper <= 0] <- NaN ans[shape <= 0] <- NaN ans } tpareto1 <- function(lower, upper, lshape = "loge", ishape = NULL, imethod = 1) { if (!is.Numeric(lower, positive = TRUE, allowable.length = 1)) stop("bad input for argument 'lower'") if (!is.Numeric(upper, positive = TRUE, allowable.length = 1)) stop("bad input for argument 'upper'") if (lower >= upper) stop("lower < upper is required") if (length(ishape) && !is.Numeric(ishape, positive = TRUE)) stop("bad input for argument 'ishape'") lshape <- as.list(substitute(lshape)) eshape <- link2list(lshape) lshape <- attr(eshape, "function.name") earg <- eshape if (!is.Numeric(imethod, allowable.length = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("argument 'imethod' must be 1 or 2") new("vglmff", blurb = c("Truncated Pareto distribution f(y) = shape * lower^shape /", "(y^(shape+1) * (1-(lower/upper)^shape)),", " 0 < lower < y < upper < Inf, shape>0\n", "Link: ", namesof("shape", lshape, earg = earg), "\n", "\n", "Mean: shape*lower^shape*(upper^(1-shape)-lower^(1-shape)) /", " ((1-shape) * (1-(lower/upper)^shape))"), initialize = eval(substitute(expression({ w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1) predictors.names <- namesof("shape", .lshape , earg = .earg , tag = FALSE) if (any(y <= .lower)) stop("the value of argument 'lower' is too high ", "(requires '0 < lower < min(y)')") extra$lower <- .lower if (any(y >= .upper)) stop("the value of argument 'upper' is too low ", "(requires 'max(y) < upper')") extra$upper <- .upper if (!length(etastart)) { shape.init <- if (is.Numeric( .ishape )) 0 * y + .ishape else if ( 
.imethod == 2) {
          0 * y + median(rep((y + 1/8) / (y - .lower + 1/8), times = w))
        } else {
          tpareto1.Loglikfun <- function(shape, y, x, w, extraargs) {
            myratio <- .lower / .upper
            sum(c(w) * (log(shape) + shape * log( .lower) -
                        (shape+1) * log(y) - log1p(-myratio^shape)))
          }
          shape.grid <- 2^((-4):4)
          try.this <- getMaxMin(shape.grid, objfun = tpareto1.Loglikfun,
                                y = y, x = x, w = w)
          try.this <- rep(try.this, length.out = n)
          try.this
        }
      etastart <- theta2eta(shape.init, .lshape , earg = .earg )
    }
  }), list( .lshape = lshape, .earg = earg,
            .ishape = ishape, .imethod = imethod,
            .lower = lower, .upper = upper ))),
  linkinv = eval(substitute(function(eta, extra = NULL) {
    shape <- eta2theta(eta, .lshape , earg = .earg )
    myratio <- .lower / .upper
    constprop <- shape * .lower^shape / (1 - myratio^shape)
    constprop * ( .upper^(1-shape) - .lower^(1-shape)) / (1-shape)
  }, list( .lshape = lshape, .earg = earg,
           .lower = lower, .upper = upper ))),
  last = eval(substitute(expression({
    misc$link <- c(shape = .lshape)
    misc$earg <- list(shape = .earg )
    misc$lower <- extra$lower
    misc$upper <- extra$upper
    misc$expected <- TRUE
  }), list( .lshape = lshape, .earg = earg,
            .lower = lower, .upper = upper ))),
  loglikelihood = eval(substitute(
    function(mu, y, w, residuals = FALSE, eta, extra = NULL) {
    shape <- eta2theta(eta, .lshape , earg = .earg )
    if (residuals) stop("loglikelihood residuals ",
                        "not implemented yet") else {
      ans <- sum(c(w) * dtpareto(x = y, lower = .lower , upper = .upper ,
                                 shape = shape, log = TRUE))
      ans
    }
  }, list( .lshape = lshape, .earg = earg,
           .lower = lower, .upper = upper ))),
  vfamily = c("tpareto1"),
  deriv = eval(substitute(expression({
    shape <- eta2theta(eta, .lshape , earg = .earg )
    myratio <- .lower / .upper
    myratio2 <- myratio^shape
    tmp330 <- myratio2 * log(myratio) / (1 - myratio2)
    dl.dshape <- 1 / shape + log( .lower) - log(y) + tmp330
    dshape.deta <- dtheta.deta(shape, .lshape , earg = .earg )
    c(w) * dl.dshape * dshape.deta
  }), list( .lshape = lshape, .earg = earg,
            .lower = lower, .upper = upper ))),
  weight = eval(substitute(expression({
    ned2l.dshape2 <- 1 / shape^2 - tmp330^2 / myratio2
    wz <- c(w) * dshape.deta^2 * ned2l.dshape2
    wz
  }), list( .lshape = lshape, .earg = earg,
            .lower = lower, .upper = upper ))))
}


erf <- function(x) 2 * pnorm(x * sqrt(2)) - 1

erfc <- function(x) 2 * pnorm(x * sqrt(2), lower.tail = FALSE)


wald <- function(link.lambda = "loge", init.lambda = NULL) {
  link.lambda <- as.list(substitute(link.lambda))
  earg <- link2list(link.lambda)
  link.lambda <- attr(earg, "function.name")

  new("vglmff",
  blurb = c("Standard Wald distribution\n\n",
            "f(y) = sqrt(lambda/(2*pi*y^3)) * ",
            "exp(-lambda*(y-1)^2/(2*y)), y > 0, lambda > 0",
            "\n",
            "Link: ", namesof("lambda", link.lambda, earg = earg),
            "\n",
            "Mean: ", "1\n",
            "Variance: 1 / lambda"),
  initialize = eval(substitute(expression({
    w.y.check(w = w, y = y, Is.positive.y = TRUE,
              ncol.w.max = 1, ncol.y.max = 1)

    predictors.names <- namesof("lambda", .link.lambda, earg = .earg ,
                                short = TRUE)

    if (!length(etastart)) {
      initlambda <- if (length( .init.lambda)) .init.lambda else
                    1 / (0.01 + (y-1)^2)
      initlambda <- rep(initlambda, length.out = n)
      etastart <- cbind(theta2eta(initlambda, link = .link.lambda ,
                                  earg = .earg ))
    }
  }), list( .link.lambda = link.lambda, .earg = earg,
            .init.lambda = init.lambda ))),
  linkinv = function(eta, extra = NULL) {
    0 * eta + 1
  },
  last = eval(substitute(expression({
    misc$link <- c(lambda = .link.lambda )
    misc$earg <- list(lambda = .earg )
  }), list( .link.lambda = link.lambda, .earg = earg ))),
  loglikelihood = eval(substitute(
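# --- Illustrative sketches (values invented): fitting tpareto1(), and a
# sanity check of erf() against its pnorm()-based definition above.
#   tdat <- data.frame(y = rtpareto(1000, lower = 1, upper = 9, shape = 2))
#   fit <- vglm(y ~ 1, tpareto1(lower = 1, upper = 9), data = tdat,
#               trace = TRUE)
#   erf(1)  # 0.8427008, i.e. 2*pnorm(sqrt(2)) - 1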
function(mu, y, w, residuals = FALSE, eta, extra = NULL) { lambda <- eta2theta(eta, link=.link.lambda, earg = .earg ) if (residuals) stop("loglikelihood residuals ", "not implemented yet") else sum(c(w) * (0.5 * log(lambda/(2*pi*y^3)) - lambda * (y-1)^2 / (2*y))) }, list( .link.lambda = link.lambda, .earg = earg ))), vfamily = "wald", deriv = eval(substitute(expression({ lambda <- eta2theta(eta, link=.link.lambda, earg = .earg ) dl.dlambda <- 0.5 / lambda + 1 - 0.5 * (y + 1/y) dlambda.deta <- dtheta.deta(theta=lambda, link=.link.lambda, earg = .earg ) c(w) * cbind(dl.dlambda * dlambda.deta) }), list( .link.lambda = link.lambda, .earg = earg ))), weight = eval(substitute(expression({ d2l.dlambda2 <- 0.5 / (lambda^2) c(w) * cbind(dlambda.deta^2 * d2l.dlambda2) }), list( .link.lambda = link.lambda, .earg = earg )))) } expexp <- function(lshape = "loge", lscale = "loge", ishape = 1.1, iscale = NULL, # ishape cannot be 1 tolerance = 1.0e-6, zero = NULL) { if (length(zero) && !is.Numeric(zero, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'zero'") if (!is.Numeric(tolerance, positive = TRUE, allowable.length = 1) || tolerance > 1.0e-2) stop("bad input for argument 'tolerance'") if (!is.Numeric(ishape, positive = TRUE)) stop("bad input for argument 'ishape'") if (length(iscale) && !is.Numeric(iscale, positive = TRUE)) stop("bad input for argument 'iscale'") ishape[ishape == 1] <- 1.1 # Fails in @deriv lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") lshape <- as.list(substitute(lshape)) eshape <- link2list(lshape) lshape <- attr(eshape, "function.name") new("vglmff", blurb = c("Exponentiated Exponential Distribution\n", "Links: ", namesof("shape", lshape, earg = eshape), ", ", namesof("scale", lscale, earg = escale),"\n", "Mean: (digamma(shape+1)-digamma(1))/scale"), constraints = eval(substitute(expression({ constraints <- cm.zero.vgam(constraints, x, .zero , M) }), list( .zero = zero ))), initialize = eval(substitute(expression({ w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1) predictors.names <- c(namesof("shape", .lshape , earg = .eshape, short = TRUE), namesof("scale", .lscale , earg = .escale , short = TRUE)) if (!length(etastart)) { shape.init <- if (!is.Numeric( .ishape, positive = TRUE)) stop("argument 'ishape' must be positive") else rep( .ishape, length.out = n) scale.init <- if (length( .iscale )) rep( .iscale, length.out = n) else (digamma(shape.init+1) - digamma(1)) / (y+1/8) scale.init <- rep(weighted.mean(scale.init, w = w), length.out = n) etastart <- cbind(theta2eta(shape.init, .lshape , earg = .eshape ), theta2eta(scale.init, .lscale , earg = .escale )) } }), list( .lshape = lshape, .lscale = lscale, .iscale = iscale, .ishape = ishape, .eshape = eshape, .escale = escale))), linkinv = eval(substitute(function(eta, extra = NULL) { shape <- eta2theta(eta[, 1], .lshape , earg = .eshape ) scale <- eta2theta(eta[, 2], .lscale , earg = .escale ) (digamma(shape+1)-digamma(1)) / scale }, list( .lshape = lshape, .lscale = lscale, .eshape = eshape, .escale = escale))), last = eval(substitute(expression({ misc$link <- c("shape" = .lshape , "scale" = .lscale ) misc$earg <- list("shape" = .eshape , "scale" = .escale ) misc$expected <- TRUE }), list( .lshape = lshape, .lscale = lscale, .eshape = eshape, .escale = escale))), loglikelihood= eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { shape <- eta2theta(eta[, 1], .lshape , earg = .eshape ) scale <- eta2theta(eta[, 2], .lscale , 
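# --- Illustrative usage sketch for wald(); the transform below is the
# standard Michael-Schucany-Haas inverse-Gaussian simulator specialized to
# mean 1 (an assumption made here for illustration, not package code):
#   set.seed(7); n <- 1000; lambda <- 2
#   nu <- rnorm(n)^2
#   x <- 1 + (nu - sqrt(nu * (4 * lambda + nu))) / (2 * lambda)
#   wdat <- data.frame(y = ifelse(runif(n) <= 1 / (1 + x), x, 1 / x))
#   fit <- vglm(y ~ 1, wald, data = wdat, trace = TRUE)
#   Coef(fit)  # estimate of lambda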
earg = .escale ) if (residuals) stop("loglikelihood residuals ", "not implemented yet") else sum(c(w) * (log(shape) + log(scale) + (shape-1)*log1p(-exp(-scale*y)) - scale*y)) }, list( .lscale = lscale, .lshape = lshape, .eshape = eshape, .escale = escale))), vfamily = c("expexp"), deriv = eval(substitute(expression({ shape <- eta2theta(eta[, 1], .lshape , earg = .eshape ) scale <- eta2theta(eta[, 2], .lscale , earg = .escale ) dl.dscale <- 1/scale + (shape-1)*y*exp(-scale*y) / (-expm1(-scale*y)) - y dl.dshape <- 1/shape + log1p(-exp(-scale*y)) dscale.deta <- dtheta.deta(scale, .lscale , earg = .escale ) dshape.deta <- dtheta.deta(shape, .lshape , earg = .eshape ) c(w) * cbind(dl.dshape * dshape.deta, dl.dscale * dscale.deta) }), list( .lshape = lshape, .lscale = lscale, .eshape = eshape, .escale = escale))), weight = eval(substitute(expression({ d11 <- 1 / shape^2 # True for all shape d22 <- d12 <- rep(as.numeric(NA), length.out = n) index2 <- abs(shape - 2) > .tolerance # index2 = shape != 1 largeno <- 10000 if (any(index2)) { Shape <- shape[index2] Shape[abs(Shape-1) < .tolerance] <- 1.001 # digamma(0) is undefined Scale <- scale[index2] tmp200 <- trigamma(1)-trigamma(Shape-1) + (digamma(Shape-1)-digamma(1))^2 # Fails when Shape == 1 tmp300 <- trigamma(1)-digamma(Shape)+(digamma(Shape)-digamma(1))^2 d22[index2] <- (1 + Shape*(Shape-1)*tmp200/(Shape-2)) / Scale^2 + Shape*tmp300 / Scale^2 } if (any(!index2)) { Scale <- scale[!index2] d22[!index2] <- (1 + 4 * sum(1/(2 + (0:largeno))^3)) / Scale^2 } index1 <- abs(shape - 1) > .tolerance # index1 <- shape != 1 if (any(index1)) { Shape <- shape[index1] Scale <- scale[index1] d12[index1] <- -(Shape*(digamma(Shape)-digamma(1))/(Shape-1) - digamma(Shape+1) + digamma(1)) / Scale } if (any(!index1)) { Scale <- scale[!index1] d12[!index1] <- -sum(1/(2 + (0:largeno))^2) / Scale } wz <- matrix(0, n, dimm(M)) wz[, iam(1, 1, M)] <- dshape.deta^2 * d11 wz[, iam(2, 2, M)] <- dscale.deta^2 * d22 wz[, iam(1, 2, M)] <- dscale.deta * dshape.deta * d12 c(w) * wz }), list( .tolerance = tolerance )))) } expexp1 <- function(lscale = "loge", iscale = NULL, ishape = 1) { lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") if (length(iscale) && !is.Numeric(iscale, positive = TRUE)) stop("bad input for argument 'iscale'") new("vglmff", blurb = c("Exponentiated Exponential Distribution", " (profile likelihood estimation)\n", "Links: ", namesof("scale", lscale, earg = escale), "\n", "Mean: (digamma(shape+1)-digamma(1))/scale"), initialize = eval(substitute(expression({ w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1) predictors.names <- namesof("scale", .lscale , earg = .escale , short = TRUE) if (length(w) != n || !is.Numeric(w, integer.valued = TRUE, positive = TRUE)) stop("argument 'weights' must be a vector of positive integers") if (!intercept.only) stop("this family function only works for an ", "intercept-only, i.e., y ~ 1") extra$yvector <- y extra$sumw <- sum(w) extra$w <- w if (!length(etastart)) { shape.init <- if (!is.Numeric( .ishape, positive = TRUE)) stop("argument 'ishape' must be positive") else rep( .ishape, length.out = n) scaleinit <- if (length( .iscale )) rep( .iscale, length.out = n) else (digamma(shape.init+1) - digamma(1)) / (y+1/8) etastart <- cbind(theta2eta(scaleinit, .lscale , earg = .escale )) } }), list( .lscale = lscale, .iscale = iscale, .ishape = ishape, .escale = escale))), linkinv = eval(substitute(function(eta, extra = NULL) { scale <- eta2theta(eta, .lscale , earg = 
.escale ) temp7 <- -expm1(-scale*extra$yvector) shape <- -extra$sumw / sum(extra$w*log(temp7)) # \gamma(\theta) (digamma(shape+1)-digamma(1)) / scale }, list( .lscale = lscale, .escale = escale))), last = eval(substitute(expression({ misc$link <- c("scale" = .lscale) misc$earg <- list("scale" = .escale ) temp7 <- -expm1(-scale*y) shape <- -extra$sumw / sum(w*log(temp7)) # \gamma(\theta) misc$shape <- shape # Store the ML estimate here misc$pooled.weight <- pooled.weight }), list( .lscale = lscale, .escale = escale))), loglikelihood= eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { scale <- eta2theta(eta, .lscale , earg = .escale ) temp7 <- -expm1(-scale*y) shape <- -extra$sumw / sum(w*log(temp7)) # \gamma(\theta) if (residuals) stop("loglikelihood residuals ", "not implemented yet") else sum(c(w) * (log(shape) + log(scale) + (shape-1)*log1p(-exp(-scale*y)) - scale*y)) }, list( .lscale = lscale, .escale = escale))), vfamily = c("expexp1"), deriv = eval(substitute(expression({ scale <- eta2theta(eta, .lscale , earg = .escale ) temp6 <- exp(-scale*y) temp7 <- 1-temp6 shape <- -extra$sumw / sum(w*log(temp7)) # \gamma(\theta) d1 <- 1/scale + (shape-1)*y*temp6/temp7 - y c(w) * cbind(d1 * dtheta.deta(scale, .lscale , earg = .escale )) }), list( .lscale = lscale, .escale = escale))), weight = eval(substitute(expression({ d11 <- 1/scale^2 + y*(temp6/temp7^2) * ((shape-1) * (y*temp7+temp6) - y*temp6 / (log(temp7))^2) wz <- matrix(0, n, dimm(M)) wz[, iam(1, 1, M)] = dtheta.deta(scale, .lscale , earg = .escale )^2 * d11 - d2theta.deta2(scale, .lscale , earg = .escale ) * d1 if (FALSE && intercept.only) { sumw <- sum(w) for(ii in 1:ncol(wz)) wz[, ii] <- sum(wz[, ii]) / sumw pooled.weight <- TRUE wz <- c(w) * wz # Put back the weights } else pooled.weight <- FALSE c(w) * wz }), list( .lscale = lscale, .escale = escale)))) } logistic2 <- function(llocation = "identity", lscale = "loge", ilocation = NULL, iscale = NULL, imethod = 1, zero = -2) { ilocat <- ilocation if (!is.Numeric(imethod, allowable.length = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("argument 'imethod' must be 1 or 2") if (length(zero) && !is.Numeric(zero, integer.valued = TRUE)) stop("bad input for argument 'zero'") if (length(iscale) && !is.Numeric(iscale, positive = TRUE)) stop("bad input for argument 'iscale'") llocat <- as.list(substitute(llocation)) elocat <- link2list(llocat) llocat <- attr(elocat, "function.name") lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") new("vglmff", blurb = c("Two-parameter logistic distribution\n\n", "Links: ", namesof("location", llocat, earg = elocat), ", ", namesof("scale", lscale, earg = escale), "\n", "\n", "Mean: location", "\n", "Variance: (pi * scale)^2 / 3"), constraints = eval(substitute(expression({ dotzero <- .zero Musual <- 2 eval(negzero.expression) }), list( .zero = zero ))), infos = eval(substitute(function(...) 
{ list(Musual = 2, zero = .zero ) }, list( .zero = zero ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y ncoly <- ncol(y) Musual <- 2 extra$ncoly <- ncoly extra$Musual <- Musual M <- Musual * ncoly mynames1 <- paste("location", if (ncoly > 1) 1:ncoly else "", sep = "") mynames2 <- paste("scale", if (ncoly > 1) 1:ncoly else "", sep = "") predictors.names <- c(namesof(mynames1, .llocat , earg = .elocat , tag = FALSE), namesof(mynames2, .lscale , earg = .escale , tag = FALSE))[ interleave.VGAM(M, M = Musual)] if (!length(etastart)) { if ( .imethod == 1) { locat.init <- y scale.init <- sqrt(3) * apply(y, 2, sd) / pi } else { locat.init <- scale.init <- NULL for(ii in 1:ncoly) { locat.init <- c(locat.init, median(rep(y[, ii], w[, ii]))) scale.init <- c(scale.init, sqrt(3) * sum(w[, ii] * (y[, ii] - locat.init[ii])^2) / (sum(w[, ii]) * pi)) } } locat.init <- matrix(if (length( .ilocat )) .ilocat else locat.init, n, ncoly, byrow = TRUE) if ( .llocat == "loge") locat.init <- abs(locat.init) + 0.001 scale.init <- matrix(if (length( .iscale )) .iscale else scale.init, n, ncoly, byrow = TRUE) etastart <- cbind( theta2eta(locat.init, .llocat , earg = .elocat ), theta2eta(scale.init, .lscale , earg = .escale ))[, interleave.VGAM(M, M = Musual)] } }), list( .imethod = imethod, .elocat = elocat, .escale = escale, .llocat = llocat, .lscale = lscale, .ilocat = ilocat, .iscale = iscale ))), linkinv = eval(substitute(function(eta, extra = NULL) { M <- ncol(eta) Musual <- 2 ncoly <- M / Musual eta2theta(eta[, (1:ncoly) * Musual - 1], .llocat , earg = .elocat ) }, list( .llocat = llocat, .elocat = elocat ))), last = eval(substitute(expression({ Musual <- extra$Musual misc$link <- c(rep( .llocat , length = ncoly), rep( .lscale , length = ncoly))[interleave.VGAM(M, M = Musual)] temp.names <- c(mynames1, mynames2)[ interleave.VGAM(M, M = Musual)] names(misc$link) <- temp.names misc$earg <- vector("list", M) names(misc$earg) <- temp.names for(ii in 1:ncoly) { misc$earg[[Musual*ii-1]] <- .elocat misc$earg[[Musual*ii ]] <- .escale } misc$Musual <- Musual misc$imethod <- .imethod misc$expected <- TRUE misc$multipleResponses <- TRUE }), list( .imethod = imethod, .llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = escale))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { M <- ncol(eta) Musual <- 2 ncoly <- M / Musual locat <- eta2theta(eta[, (1:ncoly)*Musual-1], .llocat , earg = .elocat ) Scale <- eta2theta(eta[, (1:ncoly)*Musual ], .lscale , earg = .escale ) if (residuals) stop("loglikelihood residuals ", "not implemented yet") else { sum(c(w) * dlogis(x = y, location = locat, scale = Scale, log = TRUE)) } }, list( .llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = escale))), vfamily = c("logistic2"), deriv = eval(substitute(expression({ Musual <- 2 ncoly <- M / Musual locat <- eta2theta(eta[, (1:ncoly)*Musual-1], .llocat , earg = .elocat ) Scale <- eta2theta(eta[, (1:ncoly)*Musual ], .lscale , earg = .escale ) zedd <- (y - locat) / Scale ezedd <- exp(-zedd) dl.dlocat <- (-expm1(-zedd)) / ((1 + ezedd) * Scale) dl.dscale <- zedd * (-expm1(-zedd)) / ((1 + ezedd) * Scale) - 1 / Scale dlocat.deta <- dtheta.deta(locat, .llocat , earg = .elocat ) dscale.deta <- dtheta.deta(Scale, .lscale , earg = .escale ) c(w) * cbind(dl.dlocat * dlocat.deta, dl.dscale * dscale.deta)[, interleave.VGAM(M, M = Musual)] }), list( 
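# Illustrative sketch: logistic2() recovers location and scale from
# simulated logistic data (rlogis() is base R; the numbers are arbitrary):
#   library(VGAM)
#   ldata <- data.frame(y = rlogis(500, location = 2, scale = 1.5))
#   fit2 <- vglm(y ~ 1, logistic2, data = ldata, trace = TRUE)
#   Coef(fit2)   # approximately location = 2, scale = 1.5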
.llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = escale))), weight = eval(substitute(expression({ ned2l.dlocat2 <- 1 / (3 * Scale^2) ned2l.dscale2 <- (3 + pi^2) / (9 * Scale^2) wz <- matrix(as.numeric(NA), nrow = n, ncol = M) # diagonal wz[, (1:ncoly) * Musual - 1] <- ned2l.dlocat2 * dlocat.deta^2 wz[, (1:ncoly) * Musual ] <- ned2l.dscale2 * dscale.deta^2 w.wz.merge(w = w, wz = wz, n = n, M = M, ndepy = ncoly) }), list( .llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = escale)))) } negbinomial.size <- function(size = Inf, lmu = "loge", imu = NULL, probs.y = 0.75, imethod = 1, shrinkage.init = 0.95, zero = NULL) { if (any(size <= 0)) stop("bad input for argument 'size'") if (any(is.na(size))) stop("bad input for argument 'size'") lmu <- as.list(substitute(lmu)) emu <- link2list(lmu) lmu <- attr(emu, "function.name") if (length(imu) && !is.Numeric(imu, positive = TRUE)) stop("bad input for argument 'imu'") if (!is.Numeric(imethod, allowable.length = 1, integer.valued = TRUE, positive = TRUE) || imethod > 3) stop("argument 'imethod' must be 1 or 2 or 3") if (!is.Numeric(shrinkage.init, allowable.length = 1) || shrinkage.init < 0 || shrinkage.init > 1) stop("bad input for argument 'shrinkage.init'") ans <- new("vglmff", blurb = c("Negative-binomial distribution with size known\n\n", "Links: ", namesof("mu", lmu, earg = emu), "\n", "Mean: mu\n", "Variance: mu * (1 + mu / size) for NB-2"), constraints = eval(substitute(expression({ dotzero <- .zero Musual <- 2 eval(negzero.expression) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(Musual = 1, zero = .zero) }, list( .zero = zero ))), initialize = eval(substitute(expression({ Musual <- 1 if (any(y < 0)) stop("negative values not allowed for the 'negbinomial.size' family") temp5 <- w.y.check(w = w, y = y, Is.nonnegative.y = TRUE, Is.integer.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y M <- Musual * ncol(y) NOS <- ncoly <- ncol(y) # Number of species mynames1 <- paste("mu", if (NOS > 1) 1:NOS else "", sep = "") predictors.names <- namesof(mynames1, .lmu , earg = .emu , tag = FALSE) if (is.numeric( .mu.init )) MU.INIT <- matrix( .mu.init, nrow(y), ncol(y), byrow = TRUE) if (!length(etastart)) { mu.init <- y for(iii in 1:ncol(y)) { use.this <- if ( .imethod == 1) { weighted.mean(y[, iii], w[, iii]) + 1/16 } else if ( .imethod == 3) { c(quantile(y[, iii], probs = .probs.y) + 1/16) } else { median(y[, iii]) + 1/16 } if (is.numeric( .mu.init )) { mu.init[, iii] <- MU.INIT[, iii] } else { medabsres <- median(abs(y[, iii] - use.this)) + 1/32 allowfun <- function(z, maxtol = 1) sign(z)*pmin(abs(z), maxtol) mu.init[, iii] <- use.this + (1 - .sinit) * allowfun(y[, iii] - use.this, maxtol = medabsres) mu.init[, iii] <- abs(mu.init[, iii]) + 1 / 1024 } } # of for(iii) kmat <- matrix( .size , n, NOS, byrow = TRUE) newemu <- .emu if ( .lmu == "nbcanlink") { newemu$size <- kmat } etastart <- cbind(theta2eta(mu.init , link = .lmu , earg = newemu )) } }), list( .lmu = lmu, .emu = emu, .mu.init = imu, .size = size, .probs.y = probs.y, .sinit = shrinkage.init, .zero = zero, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { Musual <- 1 eta <- cbind(eta) NOS <- ncol(eta) / Musual n <- nrow(eta) kmat <- matrix( .size , n, NOS, byrow = TRUE) newemu <- .emu if ( .lmu == "nbcanlink") { newemu$size <- kmat } eta2theta(eta, .lmu , earg = newemu) }, list( .lmu = lmu, .emu = emu, .size = size ))), last = 
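# Illustrative sketch for negbinomial.size() (hypothetical data): with
# 'size' held fixed only mu is estimated, and size = Inf gives the
# Poisson limit, handled via dpois() in the @loglikelihood slot below:
#   library(VGAM)
#   ndata <- data.frame(y = rnbinom(300, mu = 4, size = 5))
#   fit3 <- vglm(y ~ 1, negbinomial.size(size = 5), data = ndata)
#   head(fitted(fit3))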
eval(substitute(expression({
    misc$link <- rep( .lmu , length = NOS)
    names(misc$link) <- mynames1

    misc$earg <- vector("list", M)
    names(misc$earg) <- mynames1
    for(ii in 1:NOS) {
      misc$earg[[ii]] <- newemu
    }

    misc$imethod <- .imethod
    misc$expected <- TRUE
    misc$shrinkage.init <- .sinit
    misc$size <- kmat
  }), list( .lmu = lmu, .emu = emu,
            .sinit = shrinkage.init, .imethod = imethod ))),
  loglikelihood = eval(substitute(
    function(mu, y, w, residuals = FALSE, eta, extra = NULL) {
    mu <- cbind(mu)
    y <- cbind(y)
    w <- cbind(w)
    eta <- cbind(eta)
    NOS <- ncol(eta)
    n <- nrow(eta)
    kmat <- matrix( .size , n, NOS, byrow = TRUE)
    if (residuals) stop("loglikelihood residuals not ",
                        "implemented yet") else {
      ind1 <- is.finite(kmat)
      ans1 <- ans2 <- 0
      for (kk in 1:NOS) {
        ind1 <- is.finite(kmat[, kk])
        # Finite 'size' values use dnbinom(); infinite ones use the
        # Poisson limit.
        ans1 <- ans1 + sum(w[ind1] * dnbinom(x = y[ind1, kk],
                                             mu = mu[ind1, kk],
                                             size = kmat[ind1, kk],
                                             log = TRUE))
        ans2 <- ans2 + sum(w[!ind1] * dpois(x = y[!ind1, kk],
                                            lambda = mu[!ind1, kk],
                                            log = TRUE))
      }
      ans <- ans1 + ans2
      ans
    }
  }, list( .size = size ))),
  vfamily = c("negbinomial.size"),
  deriv = eval(substitute(expression({
    eta <- cbind(eta)
    NOS <- M <- ncol(eta)

    kmat <- matrix( .size , n, M, byrow = TRUE)

    newemu <- .emu
    if ( .lmu == "nbcanlink") {
      newemu$size <- kmat
    }

    dl.dmu <- y/mu - (y+kmat)/(kmat+mu)
    dl.dmu[!is.finite(dl.dmu)] <- (y/mu)[!is.finite(dl.dmu)] - 1

    if ( .lmu == "nbcanlink") newemu$wrt.eta <- 1
    dmu.deta <- dtheta.deta(mu, .lmu , earg = newemu) # eta1

    myderiv <- c(w) * dl.dmu * dmu.deta
    myderiv
  }), list( .lmu = lmu, .emu = emu, .size = size ))),
  weight = eval(substitute(expression({
    wz <- matrix(as.numeric(NA), n, M)  # wz is 'diagonal'

    ned2l.dmu2 <- 1 / mu - 1 / (mu + kmat)
    wz <- dmu.deta^2 * ned2l.dmu2

    w.wz.merge(w = w, wz = wz, n = n, M = M, ndepy = NOS)
  }), list( .lmu = lmu ))))

  ans
}



# File: VGAM/R/family.ts.R
# These functions are
# Copyright (C) 1998-2013 T.W. Yee, University of Auckland.
# All rights reserved.



rrar.Ci <- function(i, coeffs, aa, Ranks., MM) {
  # Unpack the relevant block of 'coeffs' into the MM x Ranks.[i]
  # matrix C_i.
  index <- cumsum(c(aa, MM*Ranks.))
  ans <- matrix(coeffs[(index[i]+1):index[i+1]], Ranks.[i], MM,
                byrow = TRUE)
  t(ans)
}


rrar.Ak1 <- function(MM, coeffs, Ranks., aa) {
  ptr <- 0
  Ak1 <- diag(MM)
  for(jay in 1:MM) {
    for(i in 1:MM) {
      if (i > jay && (MM+1)-(Ranks.[jay]-1) <= i) {
        ptr <- ptr + 1
        Ak1[i,jay] <- coeffs[ptr]
      }
    }
  }
  if (aa > 0 && ptr != aa) stop("something wrong")
  Ak1
}


rrar.Di <- function(i, Ranks.)
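# Tiny illustrative call for rrar.Ak1() above (the values are arbitrary):
# for MM = 3 and Ranks. = c(2, 2) there are aa = 2 free elements, placed
# in the strict lower triangle of an identity matrix, so
#   rrar.Ak1(MM = 3, coeffs = c(0.4, -0.2), Ranks. = c(2, 2), aa = 2)
# returns diag(3) with element [3,1] = 0.4 and element [3,2] = -0.2.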
{ if (Ranks.[1] == Ranks.[i]) diag(Ranks.[i]) else rbind(diag(Ranks.[i]), matrix(0, Ranks.[1] - Ranks.[i], Ranks.[i])) } rrar.Mi <- function(i, MM, Ranks., ki) { if (Ranks.[ki[i]] == MM) return(NULL) hi <- Ranks.[ki[i]] - Ranks.[ki[i+1]] Ji <- matrix(0, hi, Ranks.[1]) for(j in 1:hi) { Ji[j,j+Ranks.[ki[i+1]]] <- 1 } Mi <- matrix(0, MM-Ranks.[ki[i]], MM) # dim(Oi) == dim(Ji) for(j in 1:(MM-Ranks.[ki[i]])) { Mi[j,j+Ranks.[ki[i ]]] <- 1 } kronecker(Mi, Ji) } rrar.Mmat <- function(MM, uu, Ranks., ki) { Mmat <- NULL for(ii in uu:1) { Mmat <- rbind(Mmat, rrar.Mi(ii, MM, Ranks., ki)) } Mmat } block.diag <- function(A, B) { if (is.null(A) && is.null(B)) return(NULL) if (!is.null(A) && is.null(B)) return(A) if (is.null(A) && !is.null(B)) return(B) A <- as.matrix(A) B <- as.matrix(B) temp <- cbind(A, matrix(0, nrow(A), ncol(B))) rbind(temp, cbind(matrix(0, nrow(B), ncol(A)), B)) } rrar.Ht <- function(plag, MM, Ranks., coeffs, aa, uu, ki) { Htop <- Hbot <- NULL Mmat <- rrar.Mmat(MM, uu, Ranks., ki) # NULL if full rank Ak1 <- rrar.Ak1(MM, coeffs, Ranks., aa) if (!is.null(Mmat)) for(i in 1:plag) { Di <- rrar.Di(i, Ranks.) Ci <- rrar.Ci(i, coeffs, aa, Ranks., MM) temp <- Di %*% t(Ci) Htop <- cbind(Htop, Mmat %*% kronecker(diag(MM), temp)) } for(i in 1:plag) { Di <- rrar.Di(i, Ranks.) temp <- kronecker(t(Di) %*% t(Ak1), diag(MM)) Hbot <- block.diag(Hbot, temp) } rbind(Htop, Hbot) } rrar.Ut <- function(y, tt, plag, MM) { Ut <- NULL if (plag>1) for(i in 1:plag) { Ut <- rbind(Ut, kronecker(diag(MM), cbind(y[tt-i,]))) } Ut } rrar.UU <- function(y, plag, MM, n) { UU <- NULL for(i in (plag+1):n) { UU <- rbind(UU, t(rrar.Ut(y, i, plag, MM))) } UU } rrar.Wmat <- function(y, Ranks., MM, ki, plag, aa, uu, n, coeffs) { temp1 <- rrar.UU(y, plag, MM, n) temp2 <- t(rrar.Ht(plag, MM, Ranks., coeffs, aa, uu, ki)) list(UU = temp1, Ht = temp2) } rrar.control <- function(stepsize = 0.5, save.weight = TRUE, ...) { if (stepsize <= 0 || stepsize > 1) { warning("bad value of stepsize; using 0.5 instead") stepsize <- 0.5 } list(stepsize = stepsize, save.weight = as.logical(save.weight)[1]) } rrar <- function(Ranks = 1, coefstart = NULL) { lag.p <- length(Ranks) new("vglmff", blurb = c("Nested reduced-rank vector autoregressive model AR(", lag.p, ")\n\n", "Link: ", namesof("mu_t", "identity"), ", t = ", paste(paste(1:lag.p, coll = ",", sep = ""))), initialize = eval(substitute(expression({ Ranks. <- .Ranks plag <- length(Ranks.) nn <- nrow(x) # original n indices <- 1:plag copy_X_vlm <- TRUE # X_vlm_save matrix changes at each iteration dsrank <- -sort(-Ranks.) # ==rev(sort(Ranks.)) if (any(dsrank != Ranks.)) stop("Ranks must be a non-increasing sequence") if (!is.matrix(y) || ncol(y) == 1) { stop("response must be a matrix with more than one column") } else { MM <- ncol(y) ki <- udsrank <- unique(dsrank) uu <- length(udsrank) for(i in 1:uu) ki[i] <- max((1:plag)[dsrank == udsrank[i]]) ki <- c(ki, plag+1) # For computing a Ranks. 
<- c(Ranks., 0) # For computing aa
    aa <- sum( (MM-Ranks.[ki[1:uu]]) *
               (Ranks.[ki[1:uu]]-Ranks.[ki[-1]]) )
  }

  if (!intercept.only)
    warning("ignoring explanatory variables")

  if (any(MM < Ranks.))
    stop("'max(Ranks)' can only be ", MM, " or less")
  y.save <- y # Save the original

  if (any(w != 1))
    stop("all weights should be 1")

  new.coeffs <- .coefstart # Needed for iter = 1 of $weight
  new.coeffs <- if (length(new.coeffs))
                  rep(new.coeffs, len = aa+sum(Ranks.)*MM) else
                  runif(aa+sum(Ranks.)*MM)

  temp8 <- rrar.Wmat(y.save, Ranks., MM, ki, plag, aa, uu, nn, new.coeffs)
  X_vlm_save <- temp8$UU %*% temp8$Ht

  if (!length(etastart)) {
    etastart <- X_vlm_save %*% new.coeffs
    etastart <- matrix(etastart, ncol = ncol(y), byrow = TRUE)
  }

  extra$Ranks. <- Ranks.; extra$aa <- aa
  extra$plag <- plag; extra$nn <- nn
  extra$MM <- MM; extra$coeffs <- new.coeffs; extra$y.save <- y.save

  keep.assign <- attr(x, "assign")
  x <- x[-indices, , drop = FALSE]
  if (is.R())
    attr(x, "assign") <- keep.assign
  y <- y[-indices, , drop = FALSE]
  w <- w[-indices]
  n.save <- n <- nn - plag
}), list( .Ranks = Ranks, .coefstart = coefstart ))),
linkinv = function(eta, extra = NULL) {
  aa <- extra$aa
  coeffs <- extra$coeffs
  MM <- extra$MM
  nn <- extra$nn
  plag <- extra$plag
  Ranks. <- extra$Ranks.
  y.save <- extra$y.save

  tt <- (1+plag):nn
  mu <- matrix(0, nn-plag, MM)
  Ak1 <- rrar.Ak1(MM, coeffs, Ranks., aa)
  for(i in 1:plag) {
    Di <- rrar.Di(i, Ranks.)
    Ci <- rrar.Ci(i, coeffs, aa, Ranks., MM)
    mu <- mu + y.save[tt-i, , drop = FALSE] %*% t(Ak1 %*% Di %*% t(Ci))
  }
  mu
},
last = expression({
  misc$plag <- plag
  misc$Ranks <- Ranks.
  misc$Ak1 <- Ak1
  misc$omegahat <- omegahat
  misc$Cmatrices <- Cmatrices
  misc$Dmatrices <- Dmatrices
  misc$Hmatrix <- temp8$Ht
  misc$Phimatrices <- vector("list", plag)
  for(ii in 1:plag) {
    misc$Phimatrices[[ii]] <- Ak1 %*% Dmatrices[[ii]] %*%
                              t(Cmatrices[[ii]])
  }
  misc$Z <- y.save %*% t(solve(Ak1))
}),
vfamily = "rrar",
deriv = expression({
  temp8 <- rrar.Wmat(y.save, Ranks., MM, ki, plag, aa, uu, nn, new.coeffs)
  X_vlm_save <- temp8$UU %*% temp8$Ht

  extra$coeffs <- new.coeffs

  resmat <- y
  tt <- (1+plag):nn
  Ak1 <- rrar.Ak1(MM, new.coeffs, Ranks., aa)
  Cmatrices <- Dmatrices <- vector("list", plag)
  for(ii in 1:plag) {
    Dmatrices[[ii]] <- Di <- rrar.Di(ii, Ranks.)
    Cmatrices[[ii]] <- Ci <- rrar.Ci(ii, new.coeffs, aa, Ranks., MM)
    resmat <- resmat - y.save[tt - ii, , drop = FALSE] %*%
                       t(Ak1 %*% Di %*% t(Ci))
  }
  omegahat <- (t(resmat) %*% resmat) / n # MM x MM
  omegainv <- solve(omegahat)

  ind1 <- iam(NA, NA, MM, both = TRUE)
  wz <- matrix(omegainv[cbind(ind1$row, ind1$col)],
               nn-plag, length(ind1$row), byrow = TRUE)
  mux22(t(wz), y-mu, M = extra$MM, as.matrix = TRUE)
}),
weight = expression({
  wz
}))
}


vglm.garma.control <- function(save.weight = TRUE, ...)
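# Hedged usage sketch for the rrar() family above (the data frame
# 'tsdata' is hypothetical; the response must be a multivariate time
# series and all prior weights must be 1, as checked in @initialize):
#   library(VGAM)
#   fit4 <- vglm(cbind(y1, y2, y3, y4) ~ 1, rrar(Ranks = c(4, 1)),
#                data = tsdata, trace = TRUE)
#   fit4@misc$Phimatrices   # the fitted AR coefficient matrices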
{ list(save.weight = as.logical(save.weight)[1]) } garma <- function(link = "identity", p.ar.lag = 1, q.ma.lag = 0, coefstart = NULL, step = 1.0) { if (!is.Numeric(p.ar.lag, integer.valued = TRUE, allowable.length = 1)) stop("bad input for argument 'p.ar.lag'") if (!is.Numeric(q.ma.lag, integer.valued = TRUE, allowable.length = 1)) stop("bad input for argument 'q.ma.lag'") if (q.ma.lag != 0) stop("sorry, only q.ma.lag = 0 is currently implemented") link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") new("vglmff", blurb = c("GARMA(", p.ar.lag, ",", q.ma.lag, ")\n\n", "Link: ", namesof("mu_t", link, earg = earg), ", t = ", paste(paste(1:p.ar.lag, coll = ",", sep = ""))), initialize = eval(substitute(expression({ plag <- .p.ar.lag predictors.names <- namesof("mu", .link , earg = .earg , tag = FALSE) indices <- 1:plag tt.index <- (1 + plag):nrow(x) p_lm <- ncol(x) copy_X_vlm <- TRUE # x matrix changes at each iteration if ( .link == "logit" || .link == "probit" || .link == "cloglog" || .link == "cauchit") { delete.zero.colns <- TRUE eval(process.categorical.data.vgam) mustart <- mustart[tt.index, 2] y <- y[, 2] } else { } x.save <- x # Save the original y.save <- y # Save the original w.save <- w # Save the original new.coeffs <- .coefstart # Needed for iter = 1 of @weight new.coeffs <- if (length(new.coeffs)) rep(new.coeffs, len = p_lm + plag) else c(rnorm(p_lm, sd = 0.1), rep(0, plag)) if (!length(etastart)) { etastart <- x[-indices, , drop = FALSE] %*% new.coeffs[1:p_lm] } x <- cbind(x, matrix(as.numeric(NA), n, plag)) # Right size now dx <- dimnames(x.save) morenames <- paste("(lag", 1:plag, ")", sep = "") dimnames(x) <- list(dx[[1]], c(dx[[2]], morenames)) x <- x[-indices, , drop = FALSE] class(x) <- "matrix" y <- y[-indices] w <- w[-indices] n.save <- n <- n - plag more <- vector("list", plag) names(more) <- morenames for(ii in 1:plag) more[[ii]] <- ii + max(unlist(attr(x.save, "assign"))) attr(x, "assign") <- c(attr(x.save, "assign"), more) }), list( .link = link, .p.ar.lag = p.ar.lag, .coefstart = coefstart, .earg = earg ))), linkinv = eval(substitute(function(eta, extra = NULL) { eta2theta(eta, link = .link , earg = .earg) }, list( .link = link, .earg = earg ))), last = eval(substitute(expression({ misc$link <- c(mu = .link ) misc$earg <- list(mu = .earg ) misc$plag <- plag }), list( .link = link, .earg = earg ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { if (residuals) switch( .link , identity = y - mu, loge = w * (y / mu - 1), reciprocal = w * (y / mu - 1), inverse = w * (y / mu - 1), w * (y / mu - (1-y) / (1 - mu))) else switch( .link , identity = sum(w * (y - mu)^2), loge = sum(w * (-mu + y * log(mu))), reciprocal = sum(w * (-mu + y * log(mu))), inverse = sum(w * (-mu + y * log(mu))), sum(w * (y * log(mu) + (1-y) * log1p(-mu)))) }, list( .link = link, .earg = earg ))), middle2 = eval(substitute(expression({ realfv <- fv for(ii in 1:plag) { realfv <- realfv + old.coeffs[ii + p_lm] * (x.save[tt.index-ii, 1:p_lm, drop = FALSE] %*% new.coeffs[1:p_lm]) # + } true.eta <- realfv + offset mu <- family@linkinv(true.eta, extra) # overwrite mu with correct one }), list( .link = link, .earg = earg ))), vfamily = c("garma", "vglmgam"), deriv = eval(substitute(expression({ dl.dmu <- switch( .link , identity = y-mu, loge = (y - mu) / mu, reciprocal = (y - mu) / mu, inverse = (y - mu) / mu, (y - mu) / (mu * (1 - mu))) dmu.deta <- dtheta.deta(mu, .link , earg = .earg) Step <- .step # This is another method 
of adjusting step lengths.
    Step * c(w) * dl.dmu * dmu.deta
  }), list( .link = link, .step = step,
            .earg = earg ))),
  weight = eval(substitute(expression({
    x[, 1:p_lm] <- x.save[tt.index, 1:p_lm] # Reinstate

    for(ii in 1:plag) {
      temp <- theta2eta(y.save[tt.index-ii], .link , earg = .earg )
      x[, 1:p_lm] <- x[, 1:p_lm] -
                     x.save[tt.index-ii, 1:p_lm] * new.coeffs[ii + p_lm]
      x[, p_lm+ii] <- temp - x.save[tt.index-ii, 1:p_lm,
                                    drop = FALSE] %*% new.coeffs[1:p_lm]
    }
    class(x) <- "matrix" # Added 27/2/02; 26/2/04

    if (iter == 1)
      old.coeffs <- new.coeffs

    X_vlm_save <- lm2vlm.model.matrix(x, Blist, xij = control$xij)

    vary <- switch( .link ,
                   identity = 1,
                   loge = mu,
                   reciprocal = mu^2,
                   inverse = mu^2,
                   mu * (1 - mu))
    c(w) * dtheta.deta(mu, link = .link , earg = .earg )^2 / vary
  }), list( .link = link,
            .earg = earg ))))
}



if (FALSE) {
setClass(Class = "Coef.rrar", representation(
         "plag"  = "integer",
         "Ranks" = "integer",
         "omega" = "integer",
         "C"     = "matrix",
         "D"     = "matrix",
         "H"     = "matrix",
         "Z"     = "matrix",
         "Phi"   = "list",  # list of matrices
         "Ak1"   = "matrix"))

Coef.rrar = function(object, ...) {
  result = new(Class = "Coef.rrar",
               "plag"  = object@misc$plag,
               "Ranks" = object@misc$Ranks,
               "omega" = object@misc$omega,
               "C"     = object@misc$C,
               "D"     = object@misc$D,
               "H"     = object@misc$H,
               "Z"     = object@misc$Z,
               "Phi"   = object@misc$Phi,
               "Ak1"   = object@misc$Ak1)
}

show.Coef.rrar <- function(object) {
  cat(object@plag)
}

setMethod("Coef", "rrar",
          function(object, ...) Coef(object, ...))
setMethod("show", "Coef.rrar",
          function(object) show.Coef.rrar(object))
}



# File: VGAM/R/family.survival.R
# These functions are
# Copyright (C) 1998-2013 T.W. Yee, University of Auckland.
# All rights reserved.



dcennormal1 <- function(r1 = 0, r2 = 0,
                        lmu = "identity",
                        lsd = "loge",
                        imu = NULL, isd = NULL,
                        zero = 2) {
  if (!is.Numeric(r1, allowable.length = 1, integer.valued = TRUE) ||
      r1 < 0)
    stop("bad input for 'r1'")
  if (!is.Numeric(r2, allowable.length = 1, integer.valued = TRUE) ||
      r2 < 0)
    stop("bad input for 'r2'")

  lmu <- as.list(substitute(lmu))
  emu <- link2list(lmu)
  lmu <- attr(emu, "function.name")

  lsd <- as.list(substitute(lsd))
  esd <- link2list(lsd)
  lsd <- attr(esd, "function.name")

  new("vglmff",
  blurb = c("Univariate Normal distribution with double censoring\n\n",
            "Links: ",
            namesof("mu", lmu, earg = emu, tag = TRUE), ", ",
            namesof("sd", lsd, earg = esd, tag = TRUE), "\n",
            "Variance: sd^2"),
  constraints = eval(substitute(expression({
    constraints <- cm.zero.vgam(constraints, x, .zero , M)
  }) , list( .zero = zero))),
  initialize = eval(substitute(expression({
    predictors.names <- c(namesof("mu", .lmu, earg =.emu, tag = FALSE),
                          namesof("sd", .lsd, earg =.esd, tag = FALSE))
    if (ncol(y <- cbind(y)) != 1)
      stop("the response must be a vector or a one-column matrix")
    if (length(w) != n ||
        !is.Numeric(w, integer.valued = TRUE, positive = TRUE))
      stop("the argument 'weights' must be a vector ",
           "of positive integers")

    sumw <- sum(w)
    extra$bign <- sumw + .r1 + .r2  # Total number of censored and
                                    # uncensored observations
    if (!length(etastart)) {
      yyyy.est <- if (length( .imu )) .imu else median(y)
      sd.y.est <- if (length( .isd )) .isd else {
        junk <- lm.wfit(x = x, y = c(y), w = c(w))
        1.25 * sqrt( sum(w * junk$resid^2) / junk$df.residual )
      }
      yyyy.est <- rep(yyyy.est , len = n)
      sd.y.est <- rep(sd.y.est , len = n)
      etastart <- cbind(mu = theta2eta(yyyy.est, .lmu , earg =.emu ),
                        sd = theta2eta(sd.y.est, .lsd , earg =.esd ))
    }
  }) , list( .lmu = lmu, .lsd = lsd,
             .emu = emu, .esd = esd,
             .imu = imu, .isd = isd,
             .r1 = r1, .r2 =
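# Hedged usage sketch for dcennormal1() (hypothetical doubly-censored
# sample: the r1 smallest and r2 largest values are unobserved and only
# their counts are supplied):
#   library(VGAM)
#   ddata <- data.frame(y = sort(rnorm(100))[6:95])  # drop 5 per tail
#   fit6 <- vglm(y ~ 1, dcennormal1(r1 = 5, r2 = 5), data = ddata)
#   Coef(fit6)   # estimates of mu and sd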
r2 ))), linkinv = function(eta, extra = NULL) eta[, 1], last = eval(substitute(expression({ misc$link <- c(mu = .lmu , sd = .lsd ) misc$earg <- list(mu = .emu , sd = .esd ) misc$multipleResponses <- FALSE misc$expected <- TRUE misc$r1 <- .r1 misc$r2 <- .r2 }) , list( .lmu = lmu, .lsd = lsd, .emu = emu, .esd = esd, .r1 = r1, .r2 = r2 ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { sd <- eta2theta(eta[, 2], .lsd, earg = .esd ) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else sum(w * dnorm(y, m = mu, sd = sd, log = TRUE)) + (if ( .r1 == 0) 0 else { z1 <- min((y - mu) / sd); Fz1 = pnorm(z1); .r1 * log(Fz1)}) + (if ( .r2 == 0) 0 else { z2 <- max((y - mu) / sd); Fz2 = pnorm(z2); .r2 * log1p(-Fz2)}) } , list( .lmu = lmu, .lsd = lsd, .emu = emu, .esd = esd, .r1 = r1, .r2 = r2 ))), vfamily = c("dcennormal1"), deriv = eval(substitute(expression({ sd <- eta2theta(eta[, 2], .lsd, earg =.esd) q1 <- .r1 / extra$bign q2 <- .r2 / extra$bign pee <- 1 - q1 - q2 # 1 if r1==r2==0 z1 <- if ( .r1 == 0) - 100 else min((y - mu) / sd) # 100==Inf z2 <- if ( .r2 == 0) + 100 else max((y - mu) / sd) # 100==Inf fz1 <- if ( .r1 == 0) 0 else dnorm(z1) fz2 <- if ( .r2 == 0) 0 else dnorm(z2) Fz1 <- if ( .r1 == 0) 0.02 else pnorm(z1) # 0/0 undefined Fz2 <- if ( .r2 == 0) 0.99 else pnorm(z2) dl.dmu <- (y - mu) / sd^2 + ((- .r1 * fz1/Fz1 + .r2 * fz2/(1-Fz2)) / sd) / (n*w) dl.dsd <- -1/sd + (y-mu)^2 / sd^3 + ((- .r1 * z1*fz1/Fz1 + .r2 * z2*fz2/(1-Fz2)) / sd) / (n*w) dmu.deta <- dtheta.deta(mu, .lmu, earg =.emu) dsd.deta <- dtheta.deta(sd, .lsd, earg =.esd) c(w) * cbind(dl.dmu * dmu.deta, dl.dsd * dsd.deta) }) , list( .lmu = lmu, .lsd = lsd, .emu = emu, .esd = esd, .r1 = r1, .r2 = r2 ))), weight=expression({ wz <- matrix(as.numeric(NA), n, dimm(M)) Q1 <- ifelse(q1 == 0, 1, q1) # Saves division by 0 below; not elegant Q2 <- ifelse(q2 == 0, 1, q2) # Saves division by 0 below; not elegant ed2l.dmu2 <- 1 / (sd^2) + ((fz1*(z1+fz1/Q1) - fz2*(z2-fz2/Q2)) / sd^2) / (pee*w) ed2l.dmusd <- ((fz1-fz2 + z1*fz1*(z1+fz1/Q1) - z2*fz2*(z2-fz2/Q2)) / sd^2) / (pee*w) ed2l.dsd2 <- 2 / (sd^2) + ((z1*fz1-z2*fz2 + z1^2 *fz1 *(z1+fz1/Q1) - z2^2 *fz2*(z2-fz2/Q2)) / sd^2) / (pee*w) wz[,iam(1,1,M)] <- w * ed2l.dmu2 * dmu.deta^2 wz[,iam(2,2,M)] <- w * ed2l.dsd2 * dsd.deta^2 wz[,iam(1,2,M)] <- w * ed2l.dmusd * dsd.deta * dmu.deta wz })) } dbisa <- function(x, shape, scale = 1, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) L <- max(length(x), length(shape), length(scale)) x <- rep(x, len = L); shape <- rep(shape, len = L); scale <- rep(scale, len = L); logdensity <- rep(log(0), len = L) xok <- (x > 0) xifun <- function(x) {temp <- sqrt(x); temp - 1/temp} logdensity[xok] <- dnorm(xifun(x[xok] / scale[xok]) / shape[xok], log = TRUE) + log1p(scale[xok]/x[xok]) - log(2) - log(shape[xok]) - 0.5 * log(x[xok]) - 0.5 * log(scale[xok]) logdensity[scale <= 0] <- NaN logdensity[shape <= 0] <- NaN if (log.arg) logdensity else exp(logdensity) } pbisa <- function(q, shape, scale=1) { if (!is.Numeric(q)) stop("bad input for argument 'q'") if (!is.Numeric(shape, positive = TRUE)) stop("bad input for argument 'shape'") if (!is.Numeric(scale, positive = TRUE)) stop("bad input for argument 'scale'") ans <- pnorm(((temp <- sqrt(q/scale)) - 1/temp) / shape) ans[scale < 0 | shape < 0] <- NA ans[q <= 0] <- 0 ans } qbisa <- function(p, shape, scale=1) { if (!is.Numeric(p, positive = TRUE) || any(p >= 1)) stop("argument 'p' must have values 
inside the interval (0,1)") if (!is.Numeric(shape, positive = TRUE)) stop("bad input for argument 'shape'") if (!is.Numeric(scale, positive = TRUE)) stop("bad input for argument 'scale'") A <- qnorm(p) temp1 <- A * shape * sqrt(4 + A^2 * shape^2) ans1 <- (2 + A^2 * shape^2 + temp1) * scale / 2 ans2 <- (2 + A^2 * shape^2 - temp1) * scale / 2 ifelse(p < 0.5, pmin(ans1, ans2), pmax(ans1, ans2)) } rbisa <- function(n, shape, scale = 1) { A <- rnorm(n) temp1 <- A * shape temp1 <- temp1 * sqrt(4 + temp1^2) ans1 <- (2 + A^2 * shape^2 + temp1) * scale / 2 ans2 <- (2 + A^2 * shape^2 - temp1) * scale / 2 ans <- ifelse(A < 0, pmin(ans1, ans2), pmax(ans1, ans2)) ans[shape <= 0] <- NaN ans[scale <= 0] <- NaN ans } bisa <- function(lshape = "loge", lscale = "loge", ishape = NULL, iscale = 1, imethod = 1, zero = NULL) { lshape <- as.list(substitute(lshape)) eshape <- link2list(lshape) lshape <- attr(eshape, "function.name") lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") if (length(ishape) && !is.Numeric(ishape, positive = TRUE)) stop("bad input for argument 'ishape'") if (!is.Numeric(iscale, positive = TRUE)) stop("bad input for argument 'iscale'") if (!is.Numeric(imethod, allowable.length = 1, integer.valued = TRUE, positive = TRUE) || imethod > 3) stop("argument 'imethod' must be 1 or 2 or 3") new("vglmff", blurb = c("Birnbaum-Saunders distribution\n\n", "Links: ", namesof("shape", lshape, earg = eshape, tag = TRUE), "; ", namesof("scale", lscale, earg = escale, tag = TRUE)), constraints = eval(substitute(expression({ constraints <- cm.zero.vgam(constraints, x, .zero , M) }) , list( .zero = zero))), initialize = eval(substitute(expression({ if (ncol(y <- cbind(y)) != 1) stop("the response must be a vector or a one-column matrix") predictors.names <- c(namesof("shape", .lshape, earg = .eshape, tag = FALSE), namesof("scale", .lscale, earg = .escale, tag = FALSE)) if (!length(etastart)) { scale.init <- rep( .iscale, len = n) shape.init <- if (is.Numeric( .ishape)) rep( .ishape, len = n) else { if ( .imethod == 1) { ybar <- rep(weighted.mean(y, w), len = n) ybarr <- rep(1 / weighted.mean(1/y, w), len = n) # Reqrs y > 0 sqrt(ybar / scale.init + scale.init / ybarr - 2) } else if ( .imethod == 2) { sqrt(2*( pmax(y, scale.init+0.1) / scale.init - 1)) } else { ybar <- rep(weighted.mean(y, w), len = n) sqrt(2*(pmax(ybar, scale.init + 0.1) / scale.init - 1)) } } etastart <- cbind(theta2eta(shape.init, .lshape, earg = .eshape), theta2eta(scale.init, .lscale, earg = .escale)) } }) , list( .lshape = lshape, .lscale = lscale, .ishape = ishape, .iscale = iscale, .eshape = eshape, .escale = escale, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { sh <- eta2theta(eta[, 1], .lshape, earg = .eshape) sc <- eta2theta(eta[, 2], .lscale, earg = .escale) sc * (1 + sh^2 / 2) }, list( .lshape = lshape, .lscale = lscale, .eshape = eshape, .escale = escale ))), last = eval(substitute(expression({ misc$link <- c(shape = .lshape, scale = .lscale) misc$earg <- list(shape = .eshape, scale = .escale) misc$expected <- TRUE misc$multipleResponses <- FALSE }) , list( .lshape = lshape, .lscale = lscale, .eshape = eshape, .escale = escale ))), loglikelihood = eval(substitute( function(mu,y,w,residuals = FALSE,eta, extra = NULL) { sh <- eta2theta(eta[, 1], .lshape , earg = .eshape ) sc <- eta2theta(eta[, 2], .lscale , earg = .escale ) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else { sum(w * dbisa(x = y, shape = sh, scale = 
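# Quick self-consistency checks for the dbisa()/pbisa()/qbisa()/rbisa()
# functions defined above (the shape value is arbitrary):
#   pp <- (1:9)/10
#   all.equal(pbisa(qbisa(pp, shape = 2), shape = 2), pp)   # TRUE
#   integrate(dbisa, 0, Inf, shape = 2)$value               # close to 1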
sc, log = TRUE))
    }
  }, list( .lshape = lshape, .lscale = lscale,
           .eshape = eshape, .escale = escale ))),
  vfamily = c("bisa"),
  deriv = eval(substitute(expression({
    sh <- eta2theta(eta[, 1], .lshape, earg = .eshape)
    sc <- eta2theta(eta[, 2], .lscale, earg = .escale)

    dl.dsh <- ((y/sc - 2 + sc/y) / sh^2 - 1) / sh
    dl.dsc <- -0.5 / sc + 1/(y+sc) + sqrt(y) * ((y+sc)/y) *
              (sqrt(y/sc) - sqrt(sc/y)) / (2 * sh^2 * sc^1.5)

    dsh.deta <- dtheta.deta(sh, .lshape, earg = .eshape)
    dsc.deta <- dtheta.deta(sc, .lscale, earg = .escale)

    c(w) * cbind(dl.dsh * dsh.deta,
                 dl.dsc * dsc.deta)
  }) , list( .lshape = lshape, .lscale = lscale,
             .eshape = eshape, .escale = escale ))),
  weight = eval(substitute(expression({
    wz <- matrix(as.numeric(NA), n, M) # Diagonal!!

    wz[,iam(1,1,M)] <- 2 * dsh.deta^2 / sh^2

    hfunction <- function(alpha)
      alpha * sqrt(pi/2) - pi * exp(2/alpha^2) *
      pnorm(2/alpha, lower.tail = FALSE)
    wz[,iam(2,2,M)] <- dsc.deta^2 *
                       (sh * hfunction(sh) / sqrt(2*pi) + 1) / (sh*sc)^2

    c(w) * wz
  }), list( .zero = zero ))))
}



# File: VGAM/R/family.sur.R
# These functions are Copyright (C) 1998-2013 T. W. Yee
# All rights reserved.



SUR <- function(
                mle.normal = FALSE,
                divisor = c("n", "n-max(pj,pk)",
                            "sqrt((n-pj)*(n-pk))"),
                parallel = FALSE,
                apply.parint = TRUE,
                Varcov = NULL,
                matrix.arg = FALSE) {
  # Notes:
  # 1. Varcov may be assigned a solve(wz) (= solve(\bSigma)), and
  #    matrix.arg says what format it is in.
  # 2. Based a little on normal1().
  # 3. Set maxit = 1 for Zellner's (two-stage) estimator; a large
  #    maxit, e.g., 111, gives iterative GLS === IGLS.

  lmean <- "identity"
  lsdev <- "loge"
  emean <- list()
  esdev <- list()

  if (!is.logical(mle.normal) ||
      length(mle.normal) != 1)
    stop("argument 'mle.normal' must be a single logical")

  if (!is.logical(apply.parint) ||
      length(apply.parint) != 1)
    stop("argument 'apply.parint' must be a single logical")

  divisor <- match.arg(divisor,
                       c("n", "n-max(pj,pk)",
                         "sqrt((n-pj)*(n-pk))"))[1]

  if (mle.normal && divisor != "n")
    warning("MLE requires 'n' as the value of argument 'divisor'. ",
            "The solution will probably not be the MLE")
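  # Hedged usage sketch (the data frame 'sdata' is hypothetical): per
  # the notes above, maxit = 1 gives Zellner's two-stage estimator and a
  # larger maxit gives iterative GLS; mle.normal = TRUE adds the normal
  # log-likelihood slot constructed near the end of this function:
  #   library(VGAM)
  #   fitz <- vglm(cbind(y1, y2) ~ x2, SUR(), maxit = 1, data = sdata)
  #   figls <- vglm(cbind(y1, y2) ~ x2, SUR(mle.normal = TRUE),
  #                 maxit = 100, data = sdata, trace = TRUE)
  #   coef(figls, matrix = TRUE)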
", "The solution will probably not be the MLE") ret.ff <- new("vglmff", blurb = c("Seemingly unrelated regressions"), constraints = eval(substitute(expression({ constraints <- cm.vgam(matrix(1, M, 1), x, .parallel , constraints, apply.int = .apply.parint ) # constraints <- cm.zero.vgam(constraints, x, .zero , M) }), list( .parallel = parallel, # .zero = zero, .apply.parint = apply.parint ))), # deviance = function(y, mu, w, residuals = FALSE, # eta = NULL, extra = NULL) { # Returns the residual sum of squares # Nb. extra$wz is wz #print("head(y - mu)") #print( head(y - mu) ) #print("head(extra$wz)") #print( head(extra$wz) ) # M <- if (length(extra$M)) extra$M else ifelse(is.matrix(y), ncol(y), 1) # if (residuals) { # if (M > 1) NULL else (y-mu) * sqrt(extra$wz) # } else { # ResSS.vgam(y - mu, extra$wz, M = M) # } # }, infos = eval(substitute(function(...) { list(Musual = 1, # zz??? # zero = .zero , # link = .link , parallel = .parallel , multipleResponses = TRUE ) }, list( .parallel = parallel ))), initialize = eval(substitute(expression({ if (!is.matrix(y) || ncol(y) == 1) stop("response must be a matrix with at least 2 columns") ncoly <- ncol(y) #print("constraints") #print( constraints ) if (is.logical( .parallel ) && .parallel && !all(as.logical(trivial.constraints(constraints)))) warning("setting 'parallel = TRUE' with nontrivial constraints may not ", "make sense") temp5 <- w.y.check(w = w, y = y, # Is.positive.y = TRUE, ncol.w.min = 1, ncol.w.max = 1, ncol.y.max = Inf, Is.integer.y = FALSE, Is.positive.y = FALSE, out.wy = TRUE, colsyperw = ncoly, maximize = TRUE) w <- temp5$w y <- temp5$y #print("head(w)") #print( head(w) ) if (!all(w[1, 1] == w)) stop("all prior 'weights' must currently have equal values") ncoly <- ncol(y) Musual <- 1 extra$ncoly <- ncoly extra$Musual <- Musual M <- Musual * ncoly predictors.names <- if (!length(ddd <- dimnames(y)[[2]])) paste("Y", 1:M, sep = "") else ddd # if ( .estimator == "classical") # maxit <- 1 # Iteration may lead to an increase in RSS # if ( .estimator == "iterative") # half.stepsizing <- FALSE # Assign "extra$wz" something corresponding to the M x M identity matrix. extra$wz <- matrix(1, nrow(x), M) if (!length(etastart)) { # Note: it is a good idea to start with the OLS estimators here first. etastart <- matrix(0, n, M) Blist.early <- process.constraints(constraints, x, M, specialCM = specialCM) #print("Blist.early") #print( Blist.early ) X_vlm.early <- lm2vlm.model.matrix(x, Blist.early, xij = control$xij, Xm2 = Xm2) #print("head(X_vlm.early)") #print( head(X_vlm.early) ) Hmatrices <- matrix(c(unlist(Blist.early)), nrow = M) jay.index <- 1:ncol(Hmatrices) extra$ncols_X_lm <- numeric(ncoly) for (jay in 1:ncoly) { # model.matrix(fit, lapred.index = 1, type = "lm") #print("Hmatrices") #print( Hmatrices ) # 20121231; this code adapted from model.matrixvlm(): # lapred.index <- jay.index[jay] # index0 <- Hmatrices[jay, ] != 0 # Orig. # Index0 <- Hmatrices[lapred.index, ] != 0 # X_lm_jay <- X_vlm[(0:(n_lm - 1)) * M + lapred.index, Index0, # drop = FALSE] X_lm_jay <- vlm2lm.model.matrix(x_vlm = X_vlm.early, Blist = Blist.early, which.lp = jay, M = M) #print("head(X_lm_jay)") #print( head(X_lm_jay) ) # This is useful, e.g,. 
        extra$ncols_X_lm[jay] <- ncol(X_lm_jay)

        etastart[, jay] <- y[, jay] -
                           lsfit(x = X_lm_jay, y = y[, jay],
                                 wt = c(w), intercept = FALSE)$residuals
      } # jay
    } # !length(etastart)
  }), list( .parallel = parallel ))),
  linkinv = function(eta, extra = NULL) eta,
  last = eval(substitute(expression({
    Musual <- extra$Musual
    misc$link <- c(rep( .lmean , length = ncoly))
    temp.names <- predictors.names
    names(misc$link) <- temp.names

    misc$earg <- vector("list", Musual * ncoly)
    names(misc$earg) <- temp.names
    for(ii in 1:ncoly) {
      misc$earg[[Musual*ii]] <- .emean
    }
    names(misc$earg) <- temp.names

    misc$Musual <- Musual
    misc$expected <- TRUE
    misc$divisor <- .divisor
    misc$values.divisor <- round(n / ratio.df)
  }), list( .lmean = lmean, .lsdev = lsdev,
            .emean = emean, .esdev = esdev,
            .divisor = divisor ))),
  vfamily = "SUR",
  deriv = eval(substitute(expression({
    mymu <- eta

    iam.indices <- iam(NA, NA, M = M, both = TRUE)
    resmat <- y - mymu
    Sigma.elts <- colMeans(resmat[, iam.indices$row.index] *
                           resmat[, iam.indices$col.index])

    if ( .divisor != "n") {
      # Make an adjustment for the denominator (above assumes "n").
      # Here, ratio.df >= 1.
      ratio.df <- n / switch( .divisor ,
        "n-max(pj,pk)" =
          n - pmax(extra$ncols_X_lm[iam.indices$row.index],
                   extra$ncols_X_lm[iam.indices$col.index]),
        "sqrt((n-pj)*(n-pk))" =
          sqrt((n - extra$ncols_X_lm[iam.indices$row.index]) *
               (n - extra$ncols_X_lm[iam.indices$col.index])),
        stop("argument 'divisor' unmatched"))
      Sigma.elts <- Sigma.elts * ratio.df
    } else {
      ratio.df <- rep(1, length = M*(M+1)/2)
    }

    Sigma.mat <- matrix(0, M, M)
    Sigma.mat[cbind(iam.indices$row.index,
                    iam.indices$col.index)] <- Sigma.elts
    Sigma.mat[cbind(iam.indices$col.index,
                    iam.indices$row.index)] <- Sigma.elts

    # Cholesky is more efficient than solve():
    invSigma.mat <- chol2inv(chol(Sigma.mat))

    # dl.dmu returns \bW_i (\biy_i - \bmu_i).
    temp3 <- matrix(invSigma.mat[cbind(iam.indices$row.index,
                                       iam.indices$col.index)],
                    M*(M+1)/2, n)
    dl.dmu <- mux22(temp3, y - mymu, M = M,
                    upper = FALSE, as.matrix = TRUE)

    dmu.deta <- dtheta.deta(mymu, .lmean , earg = .emean )

    c(w) * dl.dmu * dmu.deta
  }), list( .lmean = lmean, .emean = emean,
            .divisor = divisor ))),
  weight = eval(substitute(expression({
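    # Aside: chol2inv(chol(S)) equals solve(S) for a symmetric
    # positive-definite S, but is cheaper; a quick check in plain R:
    #   S <- crossprod(matrix(rnorm(9), 3, 3)) + diag(3)
    #   all.equal(chol2inv(chol(S)), solve(S))   # TRUE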
    # Overwrite invSigma.mat with the inverse variance, if given.
    if (length( .Varcov )) {
      Sigma.mat <- if ( .matrix.arg ) .Varcov else {
        temp.vec <- rep( .Varcov , len = M*(M+1)/2)
        temp.mat <- matrix(0, M, M)
        temp.mat[cbind(iam.indices$col.index,
                       iam.indices$row.index)] <- temp.vec
        temp.mat[cbind(iam.indices$row.index,
                       iam.indices$col.index)] <- temp.vec
        temp.mat
      }
      invSigma.mat <- chol2inv(chol(Sigma.mat))
    }

    wz <- extra$wz <-
      c(w) * matrix(invSigma.mat[cbind(iam.indices$col.index,
                                       iam.indices$row.index)],
                    n, M*(M+1)/2, byrow = TRUE)
    extra$Sigma.mat <- Sigma.mat
    extra$invSigma.mat <- invSigma.mat

    wz
  }), list( .divisor = divisor,
            .Varcov = Varcov,
            .matrix.arg = matrix.arg ))))

  if (mle.normal) {
    # Add a 'loglikelihood' slot to the object.
    # This code is based on normal1().
    # Note that wz is retrieved from 'extra', and 'wz' holds only one
    # general symmetric positive-definite matrix that is replicated a
    # lot.  If the restriction that all prior 'weights' have equal
    # values is ever relaxed, some code changes will be needed here.
    ret.ff@loglikelihood <- function(mu, y, w,
                                     residuals = FALSE,
                                     eta, extra = NULL) {
      M <- if (is.matrix(y)) ncol(y) else 1
      n <- if (is.matrix(y)) nrow(y) else length(y)
      wz <- extra$wz

      temp1 <- ResSS.vgam(y-mu, wz = wz, M = M)

      # Each row of wz is the same (or should be!!)
      onewz <- if (length(extra$invSigma.mat))
        extra$invSigma.mat else
        (m2adefault(wz[1, , drop = FALSE], M = M))[,, 1] # M x M

      logdet <- determinant(onewz)$modulus
      logretval <- -0.5 * temp1 + 0.5 * n * logdet -
                   n * (M / 2) * log(2*pi)
      logretval
    }
  }

  ret.ff
}



# File: VGAM/R/family.rrr.R
# These functions are
# Copyright (C) 1998-2013 T.W. Yee, University of Auckland.
# All rights reserved.



replace.constraints <- function(Blist, cm, index) {
  # Overwrite the constraint matrices at positions 'index' with 'cm'.
  for (iii in index)
    Blist[[iii]] <- cm
  Blist
}


valt.control <- function(
                 Alphavec = c(2, 4, 6, 9, 12, 16, 20, 25, 30, 40, 50,
                              60, 80, 100, 125, 2^(8:12)),
                 Criterion = c("rss", "coefficients"),
                 Linesearch = FALSE, Maxit = 7,
                 Suppress.warning = TRUE,
                 Tolerance = 1e-7, ...)
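# Context: valt() below carries out the alternating least-squares
# estimation of A and C that underlies rrvglm().  A hedged sketch of a
# reduced-rank fit that exercises it ('rdata' is hypothetical):
#   library(VGAM)
#   fit8 <- rrvglm(cbind(y1, y2, y3) ~ x2 + x3, poissonff,
#                  data = rdata, Rank = 1)
#   Coef(fit8)   # shows the estimated A, B1 and C matrices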
{ if (mode(Criterion) != "character" && mode(Criterion) != "name") Criterion <- as.character(substitute(Criterion)) Criterion <- match.arg(Criterion, c("rss", "coefficients"))[1] list(Alphavec = Alphavec, Criterion = Criterion, Linesearch = Linesearch, Maxit = Maxit, Suppress.warning = Suppress.warning, Tolerance = Tolerance) } qrrvglm.xprod <- function(numat, Aoffset, Quadratic, ITolerances) { Rank <- ncol(numat) moff <- NULL ans <- if (Quadratic) { index <- iam(NA, NA, M = Rank, diag = TRUE, both = TRUE) temp1 <- cbind(numat[,index$row] * numat[,index$col]) if (ITolerances) { moff <- 0 for (ii in 1:Rank) moff <- moff - 0.5 * temp1[,ii] } cbind(numat, if (ITolerances) NULL else temp1) } else { as.matrix(numat) } list(matrix = if (Aoffset>0) ans else ans[, -(1:Rank), drop = FALSE], offset = moff) } valt <- function(x, z, U, Rank = 1, Blist = NULL, Cinit = NULL, Alphavec = c(2, 4, 6, 9, 12, 16, 20, 25, 30, 40, 50, 60, 80, 100, 125, 2^(8:12)), Criterion = c("rss", "coefficients"), Crow1positive = rep(TRUE, length.out = Rank), colx1.index, Linesearch = FALSE, Maxit = 20, szero = NULL, SD.Cinit = 0.02, Suppress.warning = FALSE, Tolerance = 1e-6, trace = FALSE, xij = NULL) { if (mode(Criterion) != "character" && mode(Criterion) != "name") Criterion <- as.character(substitute(Criterion)) Criterion <- match.arg(Criterion, c("rss", "coefficients"))[1] if (any(diff(Alphavec) <= 0)) stop("'Alphavec' must be an increasing sequence") if (!is.matrix(z)) z <- as.matrix(z) n <- nrow(z) M <- ncol(z) if (!is.matrix(x)) x <- as.matrix(x) colx2.index <- if (is.null(colx1.index)) 1:ncol(x) else (1:ncol(x))[-colx1.index] p1 <- length(colx1.index) p2 <- length(colx2.index) p <- p1 + p2 if (!p2) stop("'p2', the number of variables for the ", "reduced-rank regression, must be > 0") if (!length(Blist)) { Blist <- replace.constraints(vector("list", p), diag(M), 1:p) } dU <- dim(U) if (dU[2] != n) stop("input unconformable") clist2 <- replace.constraints(vector("list", Rank+p1), if (length(szero)) diag(M)[, -szero, drop = FALSE] else diag(M), 1:Rank) if (p1) { for (kk in 1:p1) clist2[[Rank+kk]] <- Blist[[colx1.index[kk]]] } if (is.null(Cinit)) Cinit <- matrix(rnorm(p2*Rank, sd = SD.Cinit), p2, Rank) fit <- list(rss = 0) # Only for initial old.crit below C <- Cinit # This is input for the main iter loop old.crit <- switch(Criterion, coefficients=C, rss=fit$rss) recover <- 0 # Allow a few iterations between different line searches for (iter in 1:Maxit) { iter.save <- iter lv.mat <- x[, colx2.index, drop = FALSE] %*% C new.lv.model.matrix <- cbind(lv.mat, if (p1) x[, colx1.index] else NULL) fit <- vlm.wfit(xmat = new.lv.model.matrix, z, Blist = clist2, U = U, matrix.out = TRUE, is.vlmX = FALSE, rss = FALSE, qr = FALSE, xij = xij) A <- t(fit$mat.coef[1:Rank, , drop = FALSE]) clist1 <- replace.constraints(Blist, A, colx2.index) fit <- vlm.wfit(xmat = x, z, Blist = clist1, U = U, matrix.out = TRUE, is.vlmX = FALSE, rss = TRUE, qr = FALSE, xij = xij) C <- fit$mat.coef[colx2.index, , drop = FALSE] %*% A %*% solve(t(A) %*% A) numat <- x[, colx2.index, drop = FALSE] %*% C evnu <- eigen(var(numat)) temp7 <- if (Rank > 1) evnu$vector %*% diag(evnu$value^(-0.5)) else evnu$vector %*% evnu$value^(-0.5) C <- C %*% temp7 A <- A %*% t(solve(temp7)) temp8 <- crow1C(cmat = C, Crow1positive, amat = A) C <- temp8$cmat A <- temp8$amat ratio <- switch(Criterion, coefficients = max(abs(C - old.crit) / (Tolerance+abs(C))), rss = max(abs(fit$rss - old.crit) / (Tolerance+fit$rss))) if (trace) { cat(" Alternating iteration", iter, ", 
Convergence criterion = ", format(ratio), "\n") if (!is.null(fit$rss)) cat(" ResSS = ", fit$rss, "\n") flush.console() } if (ratio < Tolerance) { if (!Linesearch || (Linesearch && iter >= 3)) break } else if (iter == Maxit && !Suppress.warning) { warning("did not converge") } fini.linesearch <- FALSE if (Linesearch && iter - recover >= 2) { xnew <- C direction1 <- (xnew-xold) # / sqrt(1 + sum((xnew-xold)^2)) ftemp <- fit$rss # Most recent objective function use.alpha <- 0 # The current step relative to (xold, yold) for (itter in 1:length(Alphavec)) { CC <- xold + Alphavec[itter] * direction1 try.lv.mat <- x[, colx2.index, drop = FALSE] %*% CC try.new.lv.model.matrix = cbind(try.lv.mat, if (p1) x[,colx1.index] else NULL) try <- vlm.wfit(xmat = try.new.lv.model.matrix, z, Blist = clist2, U = U, matrix.out = TRUE, is.vlmX = FALSE, rss = TRUE, qr = FALSE, xij = xij) if (try$rss < ftemp) { use.alpha <- Alphavec[itter] fit <- try ftemp <- try$rss C <- CC A <- t(fit$mat.coef[1:Rank, , drop = FALSE]) lv.mat <- x[, colx2.index, drop = FALSE] %*% C recover <- iter # Give it some altg iters to recover } else { if (trace && use.alpha > 0) { cat(" Finished line search using Alpha = ", use.alpha, "\n") flush.console() } fini.linesearch <- TRUE } if (fini.linesearch) break } # End of itter loop } xold <- C # Do not take care of drift old.crit <- switch(Criterion, coefficients = C, rss = fit$rss) } # End of iter loop list(A = A, C = C, fitted = fit$fitted, new.coeffs = fit$coef, rss = fit$rss) } lm2qrrvlm.model.matrix <- function(x, Blist, C, control, assign = TRUE, no.thrills = FALSE) { Rank <- control$Rank colx1.index <- control$colx1.index Quadratic <- control$Quadratic Dzero <- control$Dzero Corner <- control$Corner ITolerances <- control$ITolerances M <- nrow(Blist[[1]]) p1 <- length(colx1.index) combine2 <- c(control$szero, if (Corner) control$Index.corner else NULL) Qoffset <- if (Quadratic) ifelse(ITolerances, 0, sum(1:Rank)) else 0 NoA <- length(combine2) == M # No unknown parameters in A clist2 <- if (NoA) { Aoffset <- 0 vector("list", Aoffset+Qoffset+p1) } else { Aoffset <- Rank replace.constraints(vector("list", Aoffset+Qoffset+p1), if (length(combine2)) diag(M)[,-combine2,drop = FALSE] else diag(M), 1:Rank) # If Corner then does not contain \bI_{Rank} } if (Quadratic && !ITolerances) clist2 <- replace.constraints(clist2, if (control$EqualTolerances) matrix(1, M, 1) - eijfun(Dzero, M) else { if (length(Dzero)) diag(M)[,-Dzero,drop = FALSE] else diag(M)}, Aoffset + (1:Qoffset)) if (p1) for (kk in 1:p1) clist2[[Aoffset+Qoffset+kk]] <- Blist[[colx1.index[kk]]] if (!no.thrills) { i63 <- iam(NA, NA, M=Rank, both = TRUE) names(clist2) <- c( if (NoA) NULL else paste("(lv", 1:Rank, ")", sep = ""), if (Quadratic && Rank == 1 && !ITolerances) "(lv^2)" else if (Quadratic && Rank>1 && !ITolerances) paste("(lv", i63$row, ifelse(i63$row == i63$col, "^2", paste("*lv", i63$col, sep = "")), ")", sep = "") else NULL, if (p1) names(colx1.index) else NULL) } lv.mat <- x[,control$colx2.index,drop = FALSE] %*% C tmp900 <- qrrvglm.xprod(lv.mat, Aoffset, Quadratic, ITolerances) new.lv.model.matrix <- cbind(tmp900$matrix, if (p1) x[,colx1.index] else NULL) if (!no.thrills) dimnames(new.lv.model.matrix) <- list(dimnames(x)[[1]], names(clist2)) if (assign) { asx <- attr(x, "assign") asx <- vector("list", ncol(new.lv.model.matrix)) names(asx) <- names(clist2) for (ii in 1:length(names(asx))) { asx[[ii]] <- ii } attr(new.lv.model.matrix, "assign") <- asx } if (no.thrills) list(new.lv.model.matrix = new.lv.model.matrix, 
constraints = clist2, offset = tmp900$offset) else list(new.lv.model.matrix = new.lv.model.matrix, constraints = clist2, NoA = NoA, Aoffset = Aoffset, lv.mat = lv.mat, offset = tmp900$offset) } valt.2iter <- function(x, z, U, Blist, A, control) { clist1 <- replace.constraints(Blist, A, control$colx2.index) fit <- vlm.wfit(xmat = x, z, Blist = clist1, U = U, matrix.out = TRUE, is.vlmX = FALSE, rss = TRUE, qr = FALSE, xij = control$xij) C <- fit$mat.coef[control$colx2.index, , drop = FALSE] %*% A %*% solve(t(A) %*% A) list(A = A, C = C, fitted = fit$fitted, new.coeffs = fit$coef, Blist = clist1, rss = fit$rss) } valt.1iter <- function(x, z, U, Blist, C, control, lp.names = NULL, nice31 = FALSE, MSratio = 1) { Rank <- control$Rank Quadratic <- control$Quadratic Index.corner <- control$Index.corner p1 <- length(control$colx1.index) M <- ncol(zedd <- as.matrix(z)) NOS <- M / MSratio Corner <- control$Corner ITolerances <- control$ITolerances Qoffset <- if (Quadratic) ifelse(ITolerances, 0, sum(1:Rank)) else 0 tmp833 <- lm2qrrvlm.model.matrix(x = x, Blist = Blist, C=C, control=control) new.lv.model.matrix <- tmp833$new.lv.model.matrix clist2 <- tmp833$constraints # Does not contain \bI_{Rank} lv.mat <- tmp833$lv.mat if (Corner) zedd[,Index.corner] <- zedd[,Index.corner] - lv.mat if (nice31 && MSratio == 1) { fit <- list(mat.coef = NULL, fitted.values = NULL, rss = 0) clist2 <- NULL # for vlm.wfit i5 <- rep(0, length.out = MSratio) for (ii in 1:NOS) { i5 <- i5 + 1:MSratio tmp100 <- vlm.wfit(xmat = new.lv.model.matrix, zedd[, i5, drop = FALSE], Blist = clist2, U = U[i5,,drop = FALSE], matrix.out = TRUE, is.vlmX = FALSE, rss = TRUE, qr = FALSE, Eta.range = control$Eta.range, xij = control$xij, lp.names = lp.names[i5]) fit$rss <- fit$rss + tmp100$rss fit$mat.coef <- cbind(fit$mat.coef, tmp100$mat.coef) fit$fitted.values <- cbind(fit$fitted.values, tmp100$fitted.values) } } else { fit <- vlm.wfit(xmat = new.lv.model.matrix, zedd, Blist = clist2, U = U, matrix.out = TRUE, is.vlmX = FALSE, rss = TRUE, qr = FALSE, Eta.range = control$Eta.range, xij = control$xij, lp.names = lp.names) } A <- if (tmp833$NoA) matrix(0, M, Rank) else t(fit$mat.coef[1:Rank,,drop = FALSE]) if (Corner) A[Index.corner,] <- diag(Rank) B1 <- if (p1) fit$mat.coef[-(1:(tmp833$Aoffset+Qoffset)),,drop = FALSE] else NULL fv <- as.matrix(fit$fitted.values) if (Corner) fv[,Index.corner] <- fv[,Index.corner] + lv.mat Dmat <- if (Quadratic) { if (ITolerances) { tmp800 <- matrix(0, M, Rank*(Rank+1)/2) tmp800[if (MSratio == 2) c(TRUE, FALSE) else TRUE, 1:Rank] <- -0.5 tmp800 } else t(fit$mat.coef[(tmp833$Aoffset+1): (tmp833$Aoffset+Qoffset),,drop = FALSE]) } else NULL list(Amat = A, B1 = B1, Cmat = C, Dmat = Dmat, fitted = if (M == 1) c(fv) else fv, new.coeffs = fit$coef, constraints = clist2, rss = fit$rss, offset = if (length(tmp833$offset)) tmp833$offset else NULL) } rrr.init.expression <- expression({ if (length(control$Quadratic) && control$Quadratic) copy_X_vlm <- TRUE if (function.name %in% c("cqo", "cao")) { modelno <- switch(family@vfamily[1], "poissonff" = 2, "quasipoissonff" = 2, "quasipoisson" = 2, "binomialff" = 1, "quasibinomialff" = 1, "quasibinomial" = 1, "negbinomial" = 3, "gamma2" = 5, "gaussianff" = 8, 0) # stop("cannot fit this model using fast algorithm") if (modelno == 1) modelno = get("modelno", envir = VGAM:::VGAMenv) rrcontrol$modelno = control$modelno = modelno if (modelno == 3 || modelno == 5) { M <- 2 * ifelse(is.matrix(y), ncol(y), 1) control$szero <- rrcontrol$szero <- seq(from = 2, to=M, by = 2) # Handles A 
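# Context: the modelno codes above select the compiled fast algorithm
# used by cqo() and cao() for constrained ordination.  A hedged sketch
# of a call that goes down this path ('odata' is hypothetical):
#   library(VGAM)
#   ord1 <- cqo(cbind(spp1, spp2, spp3) ~ x2 + x3, poissonff,
#               data = odata)
#   lvplot(ord1)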
control$Dzero <- rrcontrol$Dzero <- seq(from = 2, to=M, by = 2) # Handles D } } else { modelno <- 0 # Any value will do as the variable is unused. } }) rrr.alternating.expression <- expression({ alt <- valt(x, z, U, Rank=Rank, Blist = Blist, Cinit = rrcontrol$Cinit, Criterion = rrcontrol$Criterion, colx1.index = rrcontrol$colx1.index, Linesearch = rrcontrol$Linesearch, Maxit = rrcontrol$Maxit, szero = rrcontrol$szero, SD.Cinit = rrcontrol$SD.Cinit, Suppress.warning = rrcontrol$Suppress.warning, Tolerance = rrcontrol$Tolerance, trace = trace, xij = control$xij) # This is subject to drift in A and C ans2 <- rrr.normalize(rrcontrol = rrcontrol, A=alt$A, C=alt$C, x = x) Amat <- ans2$A # Fed into Blist below (in rrr.end.expression) tmp.fitted <- alt$fitted # Also fed; was alt2$fitted rrcontrol$Cinit <- ans2$C # For next valt() call eval(rrr.end.expression) # Put Amat into Blist, and create new z }) adjust.Dmat.expression <- function(Mmat, Rank, Dmat, M) { if (length(Dmat)) { ind0 <- iam(NA, NA, both = TRUE, M = Rank) for (kay in 1:M) { elts <- Dmat[kay, , drop = FALSE] # Manual recycling if (length(elts) < Rank) elts <- matrix(elts, 1, Rank) Dk <- m2adefault(elts, M = Rank)[, , 1] Dk <- matrix(Dk, Rank, Rank) Dk <- t(Mmat) %*% Dk %*% Mmat # 22/8/03; Not diagonal in general Dmat[kay, ] <- Dk[cbind(ind0$row.index[1:ncol(Dmat)], ind0$col.index[1:ncol(Dmat)])] } } Dmat } rrr.normalize <- function(rrcontrol, A, C, x, Dmat = NULL) { colx2.index <- rrcontrol$colx2.index Rank <- rrcontrol$Rank Index.corner <- rrcontrol$Index.corner M <- nrow(A) C.old <- C if (rrcontrol$Corner) { tmp87 <- A[Index.corner,,drop = FALSE] Mmat <- solve(tmp87) # The normalizing matrix C <- C %*% t(tmp87) A <- A %*% Mmat A[Index.corner,] <- diag(Rank) # Make sure Dmat <- adjust.Dmat.expression(Mmat = Mmat, Rank = Rank, Dmat = Dmat, M = M) } if (rrcontrol$Svd.arg) { temp <- svd(C %*% t(A)) if (!is.matrix(temp$v)) temp$v <- as.matrix(temp$v) C <- temp$u[, 1:Rank, drop = FALSE] %*% diag(temp$d[1:Rank]^(1-rrcontrol$Alpha), nrow = Rank) A <- diag(temp$d[1:Rank]^( rrcontrol$Alpha), nrow = Rank) %*% t(temp$v[, 1:Rank, drop = FALSE]) A <- t(A) Mmat <- t(C.old) %*% C.old %*% solve(t(C) %*% C.old) Dmat <- adjust.Dmat.expression(Mmat = Mmat, Rank = Rank, Dmat = Dmat, M = M) } if (rrcontrol$Uncorrelated.lv) { lv.mat <- x[, colx2.index, drop = FALSE] %*% C var.lv.mat <- var(lv.mat) UU <- chol(var.lv.mat) Ut <- solve(UU) Mmat <- t(UU) C <- C %*% Ut A <- A %*% t(UU) Dmat <- adjust.Dmat.expression(Mmat = Mmat, Rank = Rank, Dmat = Dmat, M = M) } if (rrcontrol$Quadratic) { Mmat <- diag(Rank) for (LV in 1:Rank) if (( rrcontrol$Crow1positive[LV] && C[1,LV] < 0) || (!rrcontrol$Crow1positive[LV] && C[1,LV] > 0)) { C[,LV] <- -C[,LV] A[,LV] <- -A[,LV] Mmat[LV,LV] <- -1 } Dmat <- adjust.Dmat.expression(Mmat = Mmat, Rank = Rank, Dmat = Dmat, M = M) } list(Amat=A, Cmat=C, Dmat=Dmat) } rrr.end.expression <- expression({ if (exists(".VGAM.etamat", envir = VGAM:::VGAMenv)) rm(".VGAM.etamat", envir = VGAM:::VGAMenv) if (control$Quadratic) { if (!length(extra)) extra <- list() extra$Cmat <- Cmat # Saves the latest iteration extra$Dmat <- Dmat # Not the latest iteration extra$B1 <- B1.save # Not the latest iteration (not good) } else { Blist <- replace.constraints(Blist.save, Amat, colx2.index) } X_vlm_save <- if (control$Quadratic) { tmp300 <- lm2qrrvlm.model.matrix(x = x, Blist = Blist.save, C = Cmat, control=control) lv.mat <- tmp300$lv.mat # Needed at the top of new.s.call lm2vlm.model.matrix(tmp300$new.lv.model.matrix, B.list, xij = control$xij) } else 
{ lm2vlm.model.matrix(x, Blist, xij = control$xij) } fv <- tmp.fitted # Contains \bI \bnu eta <- fv + offset if (FALSE && control$Rank == 1) { ooo <- order(lv.mat[,1]) } mu <- family@linkinv(eta, extra) if (any(is.na(mu))) warning("there are NAs in mu") deriv.mu <- eval(family@deriv) wz <- eval(family@weight) if (control$checkwz) wz <- checkwz(wz, M = M, trace = trace, wzepsilon = control$wzepsilon) U <- vchol(wz, M = M, n = n, silent=!trace) tvfor <- vforsub(U, as.matrix(deriv.mu), M = M, n = n) z <- eta + vbacksub(U, tvfor, M = M, n = n) - offset # Contains \bI \bnu }) rrr.derivative.expression <- expression({ which.optimizer <- if (control$Quadratic && control$FastAlgorithm) { "BFGS" } else { if (iter <= rrcontrol$Switch.optimizer) "Nelder-Mead" else "BFGS" } if (trace && control$OptimizeWrtC) { cat("\n\n") cat("Using", which.optimizer, "\n") flush.console() } constraints=replace.constraints(constraints,diag(M),rrcontrol$colx2.index) nice31 <- (!control$EqualTol || control$ITolerances) && all(trivial.constraints(constraints) == 1) theta0 <- c(Cmat) assign(".VGAM.dot.counter", 0, envir = VGAM:::VGAMenv) if (control$OptimizeWrtC) { if (control$Quadratic && control$FastAlgorithm) { if (iter == 2) { if (exists(".VGAM.etamat", envir = VGAM:::VGAMenv)) rm(".VGAM.etamat", envir = VGAM:::VGAMenv) } if (iter > 2 && !quasi.newton$convergence) { if (zthere <- exists(".VGAM.z", envir = VGAM:::VGAMenv)) { ..VGAM.z <- get(".VGAM.z", envir = VGAM:::VGAMenv) ..VGAM.U <- get(".VGAM.U", envir = VGAM:::VGAMenv) ..VGAM.beta <- get(".VGAM.beta", envir = VGAM:::VGAMenv) } if (zthere) { z <- matrix(..VGAM.z, n, M) # minus any offset U <- matrix(..VGAM.U, M, n) } } if (iter == 2 || quasi.newton$convergence) { NOS <- ifelse(modelno == 3 || modelno == 5, M/2, M) canfitok <- (exists("CQO.FastAlgorithm", envir=VGAM:::VGAMenv) && get("CQO.FastAlgorithm", envir = VGAM:::VGAMenv)) if (!canfitok) stop("cannot fit this model using fast algorithm") p2star <- if (nice31) ifelse(control$IToleran, Rank, Rank+0.5*Rank*(Rank+1)) else (NOS*Rank + Rank*(Rank+1)/2 * ifelse(control$EqualTol,1,NOS)) p1star <- if (nice31) p1 * ifelse(modelno == 3 || modelno == 5, 2, 1) else (ncol(X_vlm_save) - p2star) X_vlm_1save <- if (p1star > 0) X_vlm_save[,-(1:p2star)] else NULL quasi.newton <- optim(par=Cmat, fn=callcqof, gr <- if (control$GradientFunction) calldcqo else NULL, method=which.optimizer, control=list(fnscale = 1, trace = as.integer(control$trace), parscale = rep(control$Parscale, length.out = length(Cmat)), maxit = 250), etamat=eta, xmat=x, ymat=y, wvec=w, X_vlm_1save = if (nice31) NULL else X_vlm_1save, modelno=modelno, Control=control, n = n, M = M, p1star=p1star, p2star=p2star, nice31 = nice31) if (zthere <- exists(".VGAM.z", envir = VGAM:::VGAMenv)) { ..VGAM.z <- get(".VGAM.z", envir = VGAM:::VGAMenv) ..VGAM.U <- get(".VGAM.U", envir = VGAM:::VGAMenv) ..VGAM.beta <- get(".VGAM.beta", envir = VGAM:::VGAMenv) } if (zthere) { z <- matrix(..VGAM.z, n, M) # minus any offset U <- matrix(..VGAM.U, M, n) } } else { if (exists(".VGAM.offset", envir = VGAM:::VGAMenv)) rm(".VGAM.offset", envir = VGAM:::VGAMenv) } } else { use.reltol <- if (length(rrcontrol$Reltol) >= iter) rrcontrol$Reltol[iter] else rev(rrcontrol$Reltol)[1] quasi.newton <- optim(par=theta0, fn=rrr.derivC.rss, method=which.optimizer, control=list(fnscale=rrcontrol$Fnscale, maxit=rrcontrol$Maxit, abstol=rrcontrol$Abstol, reltol=use.reltol), U = U, z= if (control$ITolerances) z+offset else z, M = M, xmat=x, # varbix2=varbix2, Blist = Blist, rrcontrol = rrcontrol) } Cmat 
<- matrix(quasi.newton$par, p2, Rank, byrow = FALSE) if (Rank > 1 && rrcontrol$ITolerances) { numat <- x[,rrcontrol$colx2.index,drop = FALSE] %*% Cmat evnu <- eigen(var(numat)) Cmat <- Cmat %*% evnu$vector numat <- x[,rrcontrol$colx2.index,drop = FALSE] %*% Cmat offset <- if (Rank > 1) -0.5*rowSums(numat^2) else -0.5*numat^2 } } alt <- valt.1iter(x = x, z = z, U = U, Blist = Blist, C = Cmat, nice31 = nice31, control = rrcontrol, lp.names = predictors.names) if (length(alt$offset)) offset <- alt$offset B1.save <- alt$B1 # Put later into extra tmp.fitted <- alt$fitted # contains \bI_{Rank} \bnu if Corner if (modelno != 33 && control$OptimizeWrtC) alt <- rrr.normalize(rrc = rrcontrol, A = alt$Amat, C = alt$Cmat, x = x, Dmat = alt$Dmat) if (trace && control$OptimizeWrtC) { cat("\n") cat(which.optimizer, "using optim():\n") cat("Objective = ", quasi.newton$value, "\n") cat("Parameters (= c(C)) = ", if (length(quasi.newton$par) < 5) "" else "\n") cat(alt$Cmat, fill = TRUE) cat("\n") cat("Number of function evaluations = ", quasi.newton$count[1], "\n") if (length(quasi.newton$message)) cat("Message = ", quasi.newton$message, "\n") cat("\n") flush.console() } Amat <- alt$Amat # Needed in rrr.end.expression Cmat <- alt$Cmat # Needed in rrr.end.expression if Quadratic Dmat <- alt$Dmat # Put later into extra eval(rrr.end.expression) # Put Amat into Blist, and create new z }) rrr.derivC.rss <- function(theta, U, z, M, xmat, Blist, rrcontrol, omit.these = NULL) { if (rrcontrol$trace) { cat(".") flush.console() } alreadyThere <- exists(".VGAM.dot.counter", envir = VGAM:::VGAMenv) if (alreadyThere) { VGAM.dot.counter <- get(".VGAM.dot.counter", envir = VGAM:::VGAMenv) VGAM.dot.counter <- VGAM.dot.counter + 1 assign(".VGAM.dot.counter", VGAM.dot.counter, envir = VGAM:::VGAMenv) if (VGAM.dot.counter > max(50, options()$width - 5)) { if (rrcontrol$trace) { cat("\n") flush.console() } assign(".VGAM.dot.counter", 0, envir = VGAM:::VGAMenv) } } Cmat <- matrix(theta, length(rrcontrol$colx2.index), rrcontrol$Rank) tmp700 <- lm2qrrvlm.model.matrix(x = xmat, Blist = Blist, no.thrills = !rrcontrol$Corner, C = Cmat, control = rrcontrol, assign = FALSE) Blist <- tmp700$constraints # Does not contain \bI_{Rank} \bnu if (rrcontrol$Corner) { z <- as.matrix(z) # should actually call this zedd z[,rrcontrol$Index.corner] <- z[,rrcontrol$Index.corner] - tmp700$lv.mat } if (length(tmp700$offset)) z <- z - tmp700$offset vlm.wfit(xmat=tmp700$new.lv.model.matrix, zmat=z, Blist = Blist, ncolx=ncol(xmat), U = U, only.rss = TRUE, matrix.out = FALSE, is.vlmX = FALSE, rss= TRUE, qr = FALSE, Eta.range = rrcontrol$Eta.range, xij = rrcontrol$xij)$rss } rrvglm.optim.control <- function(Fnscale = 1, Maxit = 100, Switch.optimizer = 3, Abstol = -Inf, Reltol = sqrt(.Machine$double.eps), ...) { list(Fnscale = Fnscale, Maxit = Maxit, Switch.optimizer = Switch.optimizer, Abstol = Abstol, Reltol = Reltol) } nlminbcontrol <- function(Abs.tol = 10^(-6), Eval.max = 91, Iter.max = 91, Rel.err = 10^(-6), Rel.tol = 10^(-6), Step.min = 10^(-6), X.tol = 10^(-6), ...) { list(Abs.tol = Abs.tol, Eval.max = Eval.max, Iter.max = Iter.max, Rel.err = Rel.err, Rel.tol = Rel.tol, Step.min = Step.min, X.tol = X.tol) } Coef.qrrvglm <- function(object, varlvI = FALSE, reference = NULL, ...) 
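## Illustrative usage sketch (added commentary; not part of the original
## source, and 'fit1' below is a hypothetical rank-1 qrrvglm fit).
## The accessor defined here collects A, B1, C, the optimums, maximums
## and tolerance matrices into one object:
##   cfit <- Coef(fit1)   # class "Coef.qrrvglm" (or "Coef.uqo")
##   cfit@Optimum         # Rank x NOS matrix of species optimums
##   Tol(fit1)            # tolerances, via the methods defined later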
{ if (length(varlvI) != 1 || !is.logical(varlvI)) stop("'varlvI' must be TRUE or FALSE") if (length(reference) > 1) stop("'reference' must be of length 0 or 1") if (length(reference) && is.Numeric(reference)) if (!is.Numeric(reference, allowable.length = 1, integer.valued = TRUE)) stop("bad input for argument 'reference'") if (!is.logical(ConstrainedQO <- object@control$ConstrainedQO)) stop("cannot determine whether the model is constrained or not") ocontrol <- object@control coef.object <- object@coefficients Rank <- ocontrol$Rank M <- object@misc$M NOS <- if (length(object@y)) ncol(object@y) else M MSratio <- M / NOS # First value is g(mean) = quadratic form in lv Quadratic <- if (ConstrainedQO) ocontrol$Quadratic else TRUE if (!Quadratic) stop("object is not a quadratic ordination object") p1 <- length(ocontrol$colx1.index) p2 <- length(ocontrol$colx2.index) Index.corner <- ocontrol$Index.corner szero <- ocontrol$szero EqualTolerances <- ocontrol$EqualTolerances Dzero <- ocontrol$Dzero Corner <- if (ConstrainedQO) ocontrol$Corner else FALSE estITol <- if (ConstrainedQO) object@control$ITolerances else FALSE modelno <- object@control$modelno # 1,2,3,4,5,6,7 or 0 combine2 <- c(szero, if (Corner) Index.corner else NULL) NoA <- length(combine2) == M # A is fully known. Qoffset <- if (Quadratic) ifelse(estITol, 0, sum(1:Rank)) else 0 ynames <- object@misc$ynames if (!length(ynames)) ynames <- object@misc$predictors.names if (!length(ynames)) ynames <- object@misc$ynames if (!length(ynames)) ynames <- paste("Y", 1:NOS, sep = "") lp.names <- object@misc$predictors.names if (!length(lp.names)) lp.names <- NULL dzero.vector <- rep(FALSE, length = M) if (length(Dzero)) dzero.vector[Dzero] <- TRUE names(dzero.vector) <- ynames lv.names <- if (Rank == 1) "lv" else paste("lv", 1:Rank, sep = "") td.expression <- function(Dmat, Amat, M, Dzero, Rank, bellshaped) { Tolerance <- Darray <- m2adefault(Dmat, M = Rank) for (ii in 1:M) if (length(Dzero) && any(Dzero == ii)) { Tolerance[, , ii] <- NA # Darray[,,ii] == O bellshaped[ii] <- FALSE } else { Tolerance[, , ii] <- -0.5 * solve(Darray[, , ii]) bellshaped[ii] <- all(eigen(Tolerance[, , ii])$values > 0) } optimum <- matrix(as.numeric(NA), Rank, M) for (ii in 1:M) if (bellshaped[ii]) optimum[, ii] <- Tolerance[, , ii] %*% cbind(Amat[ii, ]) list(optimum = optimum, Tolerance = Tolerance, Darray = Darray, bellshaped = bellshaped) } Amat <- object@extra$Amat # M x Rank Cmat <- object@extra$Cmat # p2 x Rank Dmat <- object@extra$Dmat # B1 <- object@extra$B1 # bellshaped <- rep(FALSE, length = M) if (is.character(reference)) { reference <- (1:NOS)[reference == ynames] if (length(reference) != 1) stop("could not match argument 'reference' with any response") } ptr1 <- 1 candidates <- if (length(reference)) reference else { if (length(ocontrol$Dzero)) (1:M)[-ocontrol$Dzero] else (1:M)} repeat { if (ptr1 > 0) { this.spp <- candidates[ptr1] } elts <- Dmat[this.spp,,drop = FALSE] if (length(elts) < Rank) elts <- matrix(elts, 1, Rank) Dk <- m2adefault(elts, M = Rank)[,,1] # Hopefully negative-def temp400 <- eigen(Dk) ptr1 <- ptr1 + 1 if (all(temp400$value < 0)) break if (ptr1 > length(candidates)) break } if (all(temp400$value < 0)) { temp1tol <- -0.5 * solve(Dk) dim(temp1tol) <- c(Rank,Rank) Mmat <- t(chol(temp1tol)) if (ConstrainedQO) { temp900 <- solve(t(Mmat)) Cmat <- Cmat %*% temp900 Amat <- Amat %*% Mmat } if (length(Cmat)) { temp800 <- crow1C(Cmat, ocontrol$Crow1positive, amat = Amat) Cmat <- temp800$cmat Amat <- temp800$amat } Dmat <- 
adjust.Dmat.expression(Mmat = Mmat, Rank = Rank, Dmat = Dmat, M = M) retlist <- td.expression(Dmat = Dmat, Amat = Amat, M = M, Dzero = Dzero, Rank = Rank, bellshaped = bellshaped) optimum <- retlist$optimum Tolerance <- retlist$Tolerance Darray <- retlist$Darray bellshaped <- retlist$bellshaped } else { if (length(reference) == 1) stop("tolerance matrix specified by 'reference' ", "is not positive-definite") else warning("could not find any positive-definite ", "tolerance matrix") } if (ConstrainedQO) if (Rank > 1) { if (!length(xmat <- object@x)) stop("cannot obtain the model matrix") numat <- xmat[,ocontrol$colx2.index,drop = FALSE] %*% Cmat evnu <- eigen(var(numat)) Mmat <- solve(t(evnu$vector)) Cmat <- Cmat %*% evnu$vector # == Cmat %*% solve(t(Mmat)) Amat <- Amat %*% Mmat temp800 <- crow1C(Cmat, ocontrol$Crow1positive, amat = Amat) Cmat <- temp800$cmat Amat <- temp800$amat Dmat <- adjust.Dmat.expression(Mmat = Mmat, Rank = Rank, Dmat = Dmat, M = M) retlist <- td.expression(Dmat = Dmat, Amat = Amat, M = M, Dzero = Dzero, Rank = Rank, bellshaped = bellshaped) optimum <- retlist$optimum Tolerance <- retlist$Tolerance Darray <- retlist$Darray bellshaped <- retlist$bellshaped } if (ConstrainedQO) if (varlvI) { if (!length(xmat <- object@x)) stop("cannot obtain the model matrix") numat <- xmat[,ocontrol$colx2.index,drop = FALSE] %*% Cmat sdnumat <- apply(cbind(numat), 2, sd) Mmat <- if (Rank > 1) diag(sdnumat) else matrix(sdnumat, 1, 1) Cmat <- Cmat %*% solve(t(Mmat)) Amat <- Amat %*% Mmat temp800 <- crow1C(Cmat, ocontrol$Crow1positive, amat = Amat) Cmat <- temp800$cmat Amat <- temp800$amat Dmat <- adjust.Dmat.expression(Mmat = Mmat, Rank = Rank, Dmat = Dmat, M = M) retlist <- td.expression(Dmat = Dmat, Amat = Amat, M = M, Dzero = Dzero, Rank = Rank, bellshaped = bellshaped) optimum <- retlist$optimum Tolerance <- retlist$Tolerance Darray <- retlist$Darray bellshaped <- retlist$bellshaped } cx1i <- ocontrol$colx1.index maximum <- if (length(cx1i) == 1 && names(cx1i) == "(Intercept)") { eta.temp <- B1 for (ii in 1:M) eta.temp[ii] <- eta.temp[ii] + Amat[ii, , drop = FALSE] %*% optimum[, ii, drop = FALSE] + t(optimum[, ii, drop = FALSE]) %*% Darray[,, ii, drop = TRUE] %*% optimum[, ii, drop = FALSE] mymax <- object@family@linkinv(rbind(eta.temp), extra = object@extra) c(mymax) # Convert from matrix to vector } else { 5 * rep(as.numeric(NA), length.out = M) # Make "numeric" } names(maximum) <- ynames lv.mat <- if (ConstrainedQO) { object@x[,ocontrol$colx2.index,drop = FALSE] %*% Cmat } else { object@lv } dimnames(Amat) <- list(lp.names, lv.names) if (ConstrainedQO) dimnames(Cmat) <- list(names(ocontrol$colx2.index), lv.names) if (!length(xmat <- object@x)) stop("cannot obtain the model matrix") dimnames(lv.mat) <- list(dimnames(xmat)[[1]], lv.names) ans <- new(Class <- if (ConstrainedQO) "Coef.qrrvglm" else "Coef.uqo", A = Amat, B1 = B1, Constrained = ConstrainedQO, D = Darray, NOS = NOS, Rank = Rank, lv = lv.mat, lvOrder = lv.mat, Optimum = optimum, OptimumOrder = optimum, bellshaped = bellshaped, Dzero = dzero.vector, Maximum = maximum, Tolerance = Tolerance) if (ConstrainedQO) {ans@C <- Cmat} else {Cmat <- NULL} for (rrr in 1:Rank) ans@OptimumOrder[rrr,] <- order(ans@Optimum[rrr,]) for (rrr in 1:Rank) ans@lvOrder[,rrr] <- order(ans@lv[,rrr]) if (length(object@misc$estimated.dispersion) && object@misc$estimated.dispersion) { p <- length(object@coefficients) n <- object@misc$n M <- object@misc$M NOS <- if (length(object@y)) ncol(object@y) else M pstar <- if (ConstrainedQO) (p + length(Cmat)) 
else p + n*Rank # Adjustment; not sure about UQO adjusted.dispersion <- object@misc$dispersion * (n*M - p) / (n*M - pstar) ans@dispersion <- adjusted.dispersion } if (MSratio > 1) { keepIndex <- seq(from = 1, to = M, by = MSratio) ans@Dzero <- ans@Dzero[keepIndex] ans@Optimum <- ans@Optimum[,keepIndex,drop = FALSE] ans@Tolerance <- ans@Tolerance[,,keepIndex,drop = FALSE] ans@bellshaped <- ans@bellshaped[keepIndex] names(ans@Dzero) <- ynames } else { dimnames(ans@D) <- list(lv.names, lv.names, ynames) } names(ans@bellshaped) <- ynames dimnames(ans@Optimum) <- list(lv.names, ynames) dimnames(ans@Tolerance) <- list(lv.names, lv.names, ynames) ans } setClass(Class = "Coef.rrvglm", representation( "A" = "matrix", "B1" = "matrix", # This may be unassigned if p1 = 0. "C" = "matrix", "Rank" = "numeric", "colx1.index" = "numeric", "colx2.index" = "numeric", "Atilde" = "matrix")) setClass(Class = "Coef.uqo", representation( "A" = "matrix", "B1" = "matrix", "Constrained" = "logical", "D" = "array", "NOS" = "numeric", "Rank" = "numeric", "lv" = "matrix", "lvOrder" = "matrix", "Maximum" = "numeric", "Optimum" = "matrix", "OptimumOrder" = "matrix", "bellshaped" = "logical", "dispersion" = "numeric", "Dzero" = "logical", "Tolerance" = "array")) setClass(Class = "Coef.qrrvglm", representation( "C" = "matrix"), contains = "Coef.uqo") show.Coef.qrrvglm <- function(x, ...) { object <- x Rank <- object@Rank M <- nrow(object@A) NOS <- object@NOS mymat <- matrix(as.numeric(NA), NOS, Rank) if (Rank == 1) { # || object@Diagonal for (ii in 1:NOS) { fred <- if (Rank>1) diag(object@Tolerance[,,ii,drop = FALSE]) else object@Tolerance[,,ii] if (all(fred > 0)) mymat[ii,] <- sqrt(fred) } dimnames(mymat) <- list(dimnames(object@Tolerance)[[3]], if (Rank == 1) "lv" else paste("Tolerance", dimnames(mymat)[[2]], sep = "")) } else { for (ii in 1:NOS) { fred <- eigen(object@Tolerance[,,ii]) if (all(fred$value > 0)) mymat[ii,] <- sqrt(fred$value) } dimnames(mymat) <- list(dimnames(object@Tolerance)[[3]], paste("tol", 1:Rank, sep = "")) } dimnames(object@A) <- list(dimnames(object@A)[[1]], if (Rank > 1) paste("A", dimnames(object@A)[[2]], sep = ".") else "A") Maximum <- if (length(object@Maximum)) cbind(Maximum = object@Maximum) else NULL if (length(Maximum) && length(mymat) && Rank == 1) Maximum[is.na(mymat),] <- NA optmat <- cbind(t(object@Optimum)) dimnames(optmat) <- list(dimnames(optmat)[[1]], if (Rank > 1) paste("Optimum", dimnames(optmat)[[2]], sep = ".") else "Optimum") if (length(optmat) && length(mymat) && Rank == 1) optmat[is.na(mymat),] <- NA if ( object@Constrained ) { cat("\nC matrix (constrained/canonical coefficients)\n") print(object@C, ...) } cat("\nB1 and A matrices\n") print(cbind(t(object@B1), A = object@A), ...) cat("\nOptima and maxima\n") print(cbind(Optimum = optmat, Maximum), ...) if (Rank > 1) { # !object@Diagonal && Rank > 1 cat("\nTolerances\n") } else cat("\nTolerance\n") print(mymat, ...) cat("\nStandard deviation of the latent variables (site scores)\n") print(apply(cbind(object@lv), 2, sd)) invisible(object) } setMethod("show", "Coef.qrrvglm", function(object) show.Coef.qrrvglm(object)) setMethod("summary", "qrrvglm", function(object, ...) summary.qrrvglm(object, ...)) predictqrrvglm <- function(object, newdata = NULL, type = c("link", "response", "lv", "terms"), se.fit = FALSE, deriv = 0, dispersion = NULL, extra = object@extra, varlvI = FALSE, reference = NULL, ...) 
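## Illustrative usage sketch (added commentary, not part of the original
## source; 'fit1' and 'newdf' are hypothetical):
##   etahat <- predictqrrvglm(fit1, type = "link")                # predictors
##   muhat  <- predictqrrvglm(fit1, newdata = newdf, type = "response")
## The checks at the top of the function body show that se.fit = TRUE and
## type = "lv"/"terms" are not yet supported.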
{ if (se.fit) stop("cannot handle se.fit == TRUE yet") if (deriv != 0) stop("derivative is not equal to 0") if (mode(type) != "character" && mode(type) != "name") type <- as.character(substitute(type)) type <- match.arg(type, c("link", "response", "lv", "terms"))[1] if (type == "lv") stop("cannot handle type='lv' yet") if (type == "terms") stop("cannot handle type='terms' yet") M <- object@misc$M Rank <- object@control$Rank na.act <- object@na.action object@na.action <- list() if (!length(newdata) && type == "response" && length(object@fitted.values)) { if (length(na.act)) { return(napredict(na.act[[1]], object@fitted.values)) } else { return(object@fitted.values) } } if (!length(newdata)) { X <- model.matrixvlm(object, type = "lm", ...) offset <- object@offset tt <- object@terms$terms # terms(object) if (!length(object@x)) attr(X, "assign") <- attrassignlm(X, tt) } else { if (is.smart(object) && length(object@smart.prediction)) { setup.smart("read", smart.prediction = object@smart.prediction) } tt <- object@terms$terms # terms(object) # 11/8/03; object@terms$terms X <- model.matrix(delete.response(tt), newdata, contrasts = if (length(object@contrasts)) object@contrasts else NULL, xlev = object@xlevels) if (nrow(X) != nrow(newdata)) { as.save <- attr(X, "assign") X <- X[rep(1, nrow(newdata)),,drop = FALSE] dimnames(X) <- list(dimnames(newdata)[[1]], "(Intercept)") attr(X, "assign") <- as.save # Restored } offset <- if (!is.null(off.num<-attr(tt,"offset"))) { eval(attr(tt,"variables")[[off.num+1]], newdata) } else if (!is.null(object@offset)) eval(object@call$offset, newdata) if (any(c(offset) != 0)) stop("currently cannot handle nonzero offsets") if (is.smart(object) && length(object@smart.prediction)) { wrapup.smart() } attr(X, "assign") <- attrassigndefault(X, tt) } ocontrol <- object@control Rank <- ocontrol$Rank NOS <- ncol(object@y) sppnames <- dimnames(object@y)[[2]] modelno <- ocontrol$modelno # 1,2,3,5 or 0 M <- if (any(slotNames(object) == "predictors") && is.matrix(object@predictors)) ncol(object@predictors) else object@misc$M MSratio <- M / NOS # First value is g(mean) = quadratic form in lv if (MSratio != 1) stop("can only handle MSratio == 1 for now") if (length(newdata)) { Coefs <- Coef(object, varlvI = varlvI, reference = reference) X1mat <- X[,ocontrol$colx1.index,drop = FALSE] X2mat <- X[,ocontrol$colx2.index,drop = FALSE] lvmat <- as.matrix(X2mat %*% Coefs@C) # n x Rank etamat <- as.matrix(X1mat %*% Coefs@B1 + lvmat %*% t(Coefs@A)) whichSpecies <- 1:NOS # Do it all for all species for (sppno in 1:length(whichSpecies)) { thisSpecies <- whichSpecies[sppno] Dmat <- matrix(Coefs@D[,,thisSpecies], Rank, Rank) etamat[,thisSpecies] <- etamat[,thisSpecies] + mux34(lvmat, Dmat, symmetric = TRUE) } } else { etamat <- object@predictors } pred <- switch(type, response = { fv = if (length(newdata)) object@family@linkinv(etamat, extra) else fitted(object) if (M > 1 && is.matrix(fv)) { dimnames(fv) <- list(dimnames(fv)[[1]], dimnames(object@fitted.values)[[2]]) } fv }, link = etamat, lv = stop("failure here"), terms = stop("failure here")) if (!length(newdata) && length(na.act)) { if (se.fit) { pred$fitted.values <- napredict(na.act[[1]], pred$fitted.values) pred$se.fit <- napredict(na.act[[1]], pred$se.fit) } else { pred <- napredict(na.act[[1]], pred) } } pred } setMethod("predict", "qrrvglm", function(object, ...) 
predictqrrvglm(object, ...)) coefqrrvglm <- function(object, matrix.out = FALSE, label = TRUE) { if (matrix.out) stop("currently cannot handle matrix.out = TRUE") coefvlm(object, matrix.out = matrix.out, label = label) } residualsqrrvglm <- function(object, type = c("deviance", "pearson", "working", "response", "ldot"), matrix.arg = TRUE) { stop("this function has not been written yet") } setMethod("residuals", "qrrvglm", function(object, ...) residualsqrrvglm(object, ...)) show.rrvglm <- function(x, ...) { if (!is.null(cl <- x@call)) { cat("Call:\n") dput(cl) } vecOfBetas <- x@coefficients if (any(nas <- is.na(vecOfBetas))) { if (is.null(names(vecOfBetas))) names(vecOfBetas) <- paste("b", 1:length(vecOfBetas), sep = "") cat("\nCoefficients: (", sum(nas), " not defined because of singularities)\n", sep = "") } else cat("\nCoefficients:\n") print.default(vecOfBetas, ...) # used to be print() if (FALSE) { Rank <- x@Rank if (!length(Rank)) Rank <- sum(!nas) } if (FALSE) { nobs <- if (length(x@df.total)) x@df.total else length(x@residuals) rdf <- x@df.residual if (!length(rdf)) rdf <- nobs - Rank } cat("\n") if (length(deviance(x))) cat("Residual deviance:", format(deviance(x)), "\n") if (length(vll <- logLik.vlm(x))) cat("Log-likelihood:", format(vll), "\n") if (length(x@criterion)) { ncrit <- names(x@criterion) for (iii in ncrit) if (iii != "loglikelihood" && iii != "deviance") cat(paste(iii, ":", sep = ""), format(x@criterion[[iii]]), "\n") } invisible(x) } setMethod("show", "rrvglm", function(object) show.rrvglm(object)) rrvglm.control.Gaussian <- function(half.stepsizing = FALSE, save.weight = TRUE, ...) { list(half.stepsizing = FALSE, save.weight = as.logical(save.weight)[1]) } summary.rrvglm <- function(object, correlation = FALSE, dispersion = NULL, digits = NULL, numerical= TRUE, h.step = 0.0001, kill.all = FALSE, omit13 = FALSE, fixA = FALSE, ...) { if (!is.Numeric(h.step, allowable.length = 1) || abs(h.step) > 1) stop("bad input for 'h.step'") if (!object@control$Corner) stop("this function works with corner constraints only") if (is.null(dispersion)) dispersion <- object@misc$dispersion newobject <- as(object, "vglm") stuff <- summaryvglm(newobject, correlation=correlation, dispersion=dispersion) answer <- new(Class = "summary.rrvglm", object, call = stuff@call, coef3 = stuff@coef3, cov.unscaled = stuff@cov.unscaled, correlation = stuff@correlation, df = stuff@df, pearson.resid = stuff@pearson.resid, sigma = stuff@sigma) if (is.numeric(stuff@dispersion)) slot(answer, "dispersion") <- stuff@dispersion tmp5 <- get.rrvglm.se1(object, omit13 = omit13, numerical = numerical, h.step = h.step, kill.all = kill.all, fixA = fixA, ...) 
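  ## Comment added for exposition: get.rrvglm.se1() (defined below)
  ## returns the unscaled covariance matrix of all of c(A, B1, C) jointly,
  ## so the standard errors assembled here account for A itself being
  ## estimated, rather than treating the latent variables as fixed
  ## covariates.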
  if (any(diag(tmp5$cov.unscaled) <= 0) ||
      any(eigen(tmp5$cov.unscaled)$values <= 0)) {
    warning("cov.unscaled is not positive definite")
  }
  answer@cov.unscaled <- tmp5$cov.unscaled

  od <- if (is.numeric(object@misc$disper)) object@misc$disper else
        object@misc$default.disper
  if (is.numeric(dispersion)) {
    if (is.numeric(od) && dispersion != od)
      warning("dispersion != object@misc$dispersion; ",
              "using the former")
  } else {
    dispersion <- if (is.numeric(od)) od else 1
  }

  tmp8 <- object@misc$M - object@control$Rank - length(object@control$szero)
  answer@df[1] <- answer@df[1] + tmp8 * object@control$Rank
  answer@df[2] <- answer@df[2] - tmp8 * object@control$Rank
  if (dispersion == 0) {
    dispersion <- tmp5$rss / answer@df[2]  # Estimate
  }
  answer@coef3 <- get.rrvglm.se2(answer@cov.unscaled, dispersion = dispersion,
                                 coefficients = tmp5$coefficients)
  answer@dispersion <- dispersion
  answer@sigma <- dispersion^0.5
  answer
}


get.rrvglm.se1 <- function(fit, omit13 = FALSE, kill.all = FALSE,
                           numerical = TRUE,
                           fixA = FALSE, h.step = 0.0001,
                           trace.arg = FALSE, ...) {
  if (length(fit@control$Nested) && fit@control$Nested)
    stop("sorry, cannot handle nested models yet")
  szero <- fit@control$szero
  if (!length(fit@x))
    stop("fit@x is empty. Run rrvglm(..., x = TRUE)")
  colx1.index <- fit@control$colx1.index  # May be NULL
  colx2.index <- fit@control$colx2.index
  Blist <- fit@constraints
  ncolBlist <- unlist(lapply(Blist, ncol))
  p1 <- length(colx1.index)  # May be 0
  p2 <- length(colx2.index)
  Rank <- fit@control$Rank  # fit@misc$Nested.Rank

  Amat <- fit@constraints[[colx2.index[1]]]
  B1mat <- if (p1)
    coefvlm(fit, matrix.out = TRUE)[colx1.index, , drop = FALSE] else NULL
  C.try <- coefvlm(fit, matrix.out = TRUE)[colx2.index, , drop = FALSE]
  Cmat <- C.try %*% Amat %*% solve(t(Amat) %*% Amat)

  x1mat <- if (p1) fit@x[, colx1.index, drop = FALSE] else NULL
  x2mat <- fit@x[, colx2.index, drop = FALSE]
  wz <- weights(fit, type = "work")  # old: wweights(fit)  # fit@weights
  if (!length(wz))
    stop("cannot get fit@weights")
  M <- fit@misc$M
  n <- fit@misc$n
  Index.corner <- fit@control$Index.corner  # used to be (1:Rank);
  zmat <- fit@predictors + fit@residuals
  theta <- c(Amat[-c(Index.corner, szero), ])
  if (fit@control$checkwz)
    wz <- checkwz(wz, M = M, trace = trace.arg,
                  wzepsilon = fit@control$wzepsilon)
  U <- vchol(wz, M = M, n = n, silent = TRUE)

  if (numerical) {
    delct.da <- num.deriv.rrr(fit, M = M, r = Rank,
                              x1mat = x1mat, x2mat = x2mat, p2 = p2,
                              Index.corner, Aimat = Amat,
                              B1mat = B1mat, Cimat = Cmat,
                              h.step = h.step, colx2.index = colx2.index,
                              xij = fit@control$xij, szero = szero)
  } else {
    delct.da <- dctda.fast.only(theta = theta, wz = wz, U = U, zmat,
                                M = M, r = Rank,
                                x1mat = x1mat, x2mat = x2mat, p2 = p2,
                                Index.corner, Aimat = Amat,
                                B1mat = B1mat, Cimat = Cmat,
                                xij = fit@control$xij, szero = szero)
  }

  newobject <- as(fit, "vglm")
  sfit2233 <- summaryvglm(newobject)
  d8 <- dimnames(sfit2233@cov.unscaled)[[1]]
  cov2233 <- solve(sfit2233@cov.unscaled)  # Includes any intercepts
  dimnames(cov2233) <- list(d8, d8)

  log.vec33 <- NULL
  nassign <- names(fit@constraints)
  choose.from <- varassign(fit@constraints, nassign)
  for (ii in nassign)
    if (any(ii == names(colx2.index))) {
      log.vec33 <- c(log.vec33, choose.from[[ii]])
    }
  cov33 <- cov2233[ log.vec33,  log.vec33, drop = FALSE]  # r*p2 by r*p2
  cov23 <- cov2233[-log.vec33,  log.vec33, drop = FALSE]
  cov22 <- cov2233[-log.vec33, -log.vec33, drop = FALSE]

  lv.mat <- x2mat %*% Cmat
  offs <- matrix(0, n, M)  # The "0" handles szero's
  offs[, Index.corner] <- lv.mat
  if (M == (Rank + length(szero)))
    stop("cannot handle full-rank models yet")
  cm <- matrix(0, M, M - Rank - length(szero))
  cm[-c(Index.corner, szero), ] <- diag(M - Rank - length(szero))

  Blist <- vector("list", length(colx1.index) + 1)
  names(Blist) <- c(names(colx1.index), "I(lv.mat)")
  for (ii in names(colx1.index))
    Blist[[ii]] <- fit@constraints[[ii]]
  Blist[["I(lv.mat)"]] <- cm

  if (p1) {
    ooo <- fit@assign
    bb <- NULL
    for (ii in 1:length(ooo)) {
      if (any(ooo[[ii]][1] == colx1.index))
        bb <- c(bb, names(ooo)[ii])
    }
    has.intercept <- any(bb == "(Intercept)")
    bb[bb == "(Intercept)"] <- "1"
    if (p1 > 1)
      bb <- paste(bb, collapse = "+")
    if (has.intercept) {
      bb <- paste("zmat - offs ~ ", bb, " + I(lv.mat)", collapse = " ")
    } else {
      bb <- paste("zmat - offs ~ -1 + ", bb, " + I(lv.mat)", collapse = " ")
    }
    bb <- as.formula(bb)
  } else {
    bb <- as.formula("zmat - offs ~ -1 + I(lv.mat)")
  }

  if (fit@misc$dataname == "list") {
    dspec <- FALSE
  } else {
    mytext1 <- "exists(x = fit@misc$dataname, envir = VGAM:::VGAMenv)"
    myexp1 <- parse(text = mytext1)
    is.there <- eval(myexp1)
    bbdata <- if (is.there)
      get(fit@misc$dataname, envir = VGAM:::VGAMenv) else
      get(fit@misc$dataname)
    dspec <- TRUE
  }

  fit1122 <- if (dspec)
    vlm(bb, constraints = Blist, criterion = "d", weights = wz,
        data = bbdata,
        save.weight = TRUE, smart = FALSE, trace = trace.arg,
        x.arg = TRUE) else
    vlm(bb, constraints = Blist, criterion = "d", weights = wz,
        save.weight = TRUE, smart = FALSE, trace = trace.arg,
        x.arg = TRUE)

  sfit1122 <- summaryvlm(fit1122)
  d8 <- dimnames(sfit1122@cov.unscaled)[[1]]
  cov1122 <- solve(sfit1122@cov.unscaled)
  dimnames(cov1122) <- list(d8, d8)
  lcs <- length(coefvlm(sfit1122))
  log.vec11 <- (lcs - (M - Rank - length(szero)) * Rank + 1):lcs
  cov11 <- cov1122[ log.vec11,  log.vec11, drop = FALSE]
  cov12 <- cov1122[ log.vec11, -log.vec11, drop = FALSE]
  cov22 <- cov1122[-log.vec11, -log.vec11, drop = FALSE]
  cov13 <- delct.da %*% cov33

  if (omit13)
    cov13 <- cov13 * 0  # zero it
  if (kill.all) {
    cov13 <- cov13 * 0  # zero it
    if (fixA) {
      cov12 <- cov12 * 0  # zero it
    } else {
      cov23 <- cov23 * 0  # zero it
    }
  }
  cov13 <- -cov13  # Richards (1961)

  if (fixA) {
    cov.unscaled <- rbind(cbind(cov1122, rbind(cov13, cov23)),
                          cbind(t(cov13), t(cov23), cov33))
  } else {
    cov.unscaled <- rbind(cbind(cov11, cov12, cov13),
                          cbind(rbind(t(cov12), t(cov13)), cov2233))
  }
  ans <- solve(cov.unscaled)

  # Get all the coefficients
  acoefs <- c(fit1122@coefficients[log.vec11], fit@coefficients)
  dimnames(ans) <- list(names(acoefs), names(acoefs))
  list(cov.unscaled = ans, coefficients = acoefs, rss = sfit1122@rss)
}


get.rrvglm.se2 <- function(cov.unscaled, dispersion = 1, coefficients) {
  d8 <- dimnames(cov.unscaled)[[1]]
  ans <- matrix(coefficients, length(coefficients), 3)
  ans[, 2] <- sqrt(dispersion) * sqrt(diag(cov.unscaled))
  ans[, 3] <- ans[, 1] / ans[, 2]
  dimnames(ans) <- list(d8, c("Estimate", "Std. Error", "z value"))
  ans
}


num.deriv.rrr <- function(fit, M, r, x1mat, x2mat,
                          p2, Index.corner, Aimat, B1mat, Cimat,
                          h.step = 0.0001, colx2.index,
                          xij = NULL, szero = NULL) {
  nn <- nrow(x2mat)
  if (nrow(Cimat) != p2 || ncol(Cimat) != r)
    stop("'Cimat' wrong shape")
  dct.da <- matrix(as.numeric(NA), (M - r - length(szero)) * r, r * p2)
  if ((length(Index.corner) + length(szero)) == M)
    stop("cannot handle full rank models yet")
  cbindex <- (1:M)[-c(Index.corner, szero)]
  ptr <- 1
  for (sss in 1:r)
    for (tt in cbindex) {
      small.Blist <- vector("list", p2)
      pAmat <- Aimat
      pAmat[tt, sss] <- pAmat[tt, sss] + h.step  # Perturb it
      for (ii in 1:p2)
        small.Blist[[ii]] <- pAmat

      offset <- if (length(fit@offset)) fit@offset else 0
      if (all(offset == 0)) offset <- 0
      neweta <- x2mat %*% Cimat %*% t(pAmat)
      if (is.numeric(x1mat))
        neweta <- neweta + x1mat %*% B1mat
      fit@predictors <- neweta

      newmu <- fit@family@linkinv(neweta, fit@extra)
      fit@fitted.values <- as.matrix(newmu)  # 20100909

      fred <- weights(fit, type = "w", deriv = TRUE, ignore.slot = TRUE)
      if (!length(fred))
        stop("cannot get @weights and $deriv from object")
      wz <- fred$weights
      deriv.mu <- fred$deriv
      U <- vchol(wz, M = M, n = nn, silent = TRUE)
      tvfor <- vforsub(U, as.matrix(deriv.mu), M = M, n = nn)
      newzmat <- neweta + vbacksub(U, tvfor, M = M, n = nn) - offset
      if (is.numeric(x1mat))
        newzmat <- newzmat - x1mat %*% B1mat

      newfit <- vlm.wfit(xmat = x2mat, zmat = newzmat,
                         Blist = small.Blist, U = U,
                         matrix.out = FALSE, is.vlmX = FALSE,
                         rss = TRUE, qr = FALSE, x.ret = FALSE,
                         offset = NULL, xij = xij)
      dct.da[ptr, ] <- (newfit$coef - t(Cimat)) / h.step
      ptr <- ptr + 1
    }
  dct.da
}


dctda.fast.only <- function(theta, wz, U, zmat, M, r, x1mat, x2mat,
                            p2, Index.corner, Aimat, B1mat, Cimat,
                            xij = NULL, szero = NULL) {
  if (length(szero))
    stop("cannot handle 'szero' in dctda.fast.only()")
  nn <- nrow(x2mat)
  if (nrow(Cimat) != p2 || ncol(Cimat) != r)
    stop("Cimat wrong shape")
  fred <- kronecker(matrix(1, 1, r), x2mat)
  fred <- kronecker(fred, matrix(1, M, 1))
  barney <- kronecker(Aimat, matrix(1, 1, p2))
  barney <- kronecker(matrix(1, nn, 1), barney)
  temp <- array(t(barney * fred), c(p2 * r, M, nn))
  temp <- aperm(temp, c(2, 1, 3))  # M by p2*r by nn
  temp <- mux5(wz, temp, M = M, matrix.arg = TRUE)
  temp <- m2adefault(temp, M = p2 * r)  # Note: this 'M' is p2*r, not the model's M!
G <- solve(rowSums(temp, dims = 2)) # p2*r by p2*r dc.da <- array(NA, c(p2, r, M, r)) # different from other functions if (length(Index.corner) == M) stop("cannot handle full rank models yet") cbindex <- (1:M)[-Index.corner] # complement of Index.corner resid2 <- if (length(x1mat)) mux22(t(wz), zmat - x1mat %*% B1mat, M = M, upper = FALSE, as.matrix = TRUE) else mux22(t(wz), zmat , M = M, upper = FALSE, as.matrix = TRUE) for (sss in 1:r) for (ttt in cbindex) { fred <- t(x2mat) * matrix(resid2[, ttt], p2, nn, byrow = TRUE) # p2 * nn temp2 <- kronecker(eifun(sss,r), rowSums(fred)) for (kkk in 1:r) { Wiak <- mux22(t(wz), matrix(Aimat[,kkk], nn, M, byrow = TRUE), M = M, upper = FALSE, as.matrix = TRUE) # nn * M wxx <- Wiak[,ttt] * x2mat blocki <- t(x2mat) %*% wxx temp4a <- blocki %*% Cimat[,kkk] if (kkk == 1) { temp4b <- blocki %*% Cimat[,sss] } temp2 <- temp2 - kronecker(eifun(sss,r), temp4a) - kronecker(eifun(kkk,r), temp4b) } dc.da[,,ttt,sss] <- G %*% temp2 } ans1 <- dc.da[,,cbindex,,drop = FALSE] # p2 x r x (M-r) x r ans1 <- aperm(ans1, c(2,1,3,4)) # r x p2 x (M-r) x r ans1 <- matrix(c(ans1), r*p2, (M-r)*r) ans1 <- t(ans1) ans1 } dcda.fast <- function(theta, wz, U, z, M, r, xmat, pp, Index.corner, intercept= TRUE, xij = NULL) { nn <- nrow(xmat) Aimat <- matrix(as.numeric(NA), M, r) Aimat[Index.corner,] <- diag(r) Aimat[-Index.corner,] <- theta # [-(1:M)] if (intercept) { Blist <- vector("list", pp+1) Blist[[1]] <- diag(M) for (ii in 2:(pp+1)) Blist[[ii]] <- Aimat } else { Blist <- vector("list", pp) for (ii in 1:pp) Blist[[ii]] <- Aimat } coeffs <- vlm.wfit(xmat=xmat, z, Blist, U = U, matrix.out = TRUE, xij = xij)$mat.coef c3 <- coeffs <- t(coeffs) # transpose to make M x (pp+1) int.vec <- if (intercept) c3[,1] else 0 # \boldeta_0 Cimat <- if (intercept) t(c3[Index.corner,-1,drop = FALSE]) else t(c3[Index.corner,,drop = FALSE]) if (nrow(Cimat)!=pp || ncol(Cimat)!=r) stop("Cimat wrong shape") fred <- kronecker(matrix(1,1,r), if (intercept) xmat[,-1,drop = FALSE] else xmat) fred <- kronecker(fred, matrix(1,M,1)) barney <- kronecker(Aimat, matrix(1,1,pp)) barney <- kronecker(matrix(1, nn, 1), barney) temp <- array(t(barney*fred), c(r*pp,M,nn)) temp <- aperm(temp, c(2,1,3)) temp <- mux5(wz, temp, M = M, matrix.arg = TRUE) temp <- m2adefault(temp, M = r*pp) # Note M != M here! 
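  ## Comment added for exposition: dcda.fast() mirrors dctda.fast.only()
  ## above, but for models that may contain an intercept; G below again
  ## inverts the summed weighted cross-products, and the (s, tt) loop
  ## builds the right-hand side of the implicit-function equations one
  ## free element of A at a time.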
G <- solve(rowSums(temp, dims = 2)) dc.da <- array(NA, c(pp,r,M,r)) # different from other functions cbindex <- (1:M)[-Index.corner] resid2 <- mux22(t(wz), z - matrix(int.vec, nn, M, byrow = TRUE), M = M, upper = FALSE, as.matrix = TRUE) # mat= TRUE, for (s in 1:r) for (tt in cbindex) { fred <- (if (intercept) t(xmat[, -1, drop = FALSE]) else t(xmat)) * matrix(resid2[, tt], pp, nn, byrow = TRUE) temp2 <- kronecker(eifun(s,r), rowSums(fred)) temp4 <- rep(0,pp) for (k in 1:r) { Wiak <- mux22(t(wz), matrix(Aimat[, k], nn, M, byrow = TRUE), M = M, upper = FALSE, as.matrix = TRUE) wxx <- Wiak[,tt] * (if (intercept) xmat[, -1, drop = FALSE] else xmat) blocki <- (if (intercept) t(xmat[, -1, drop = FALSE]) else t(xmat)) %*% wxx temp4 <- temp4 + blocki %*% Cimat[, k] } dc.da[,,tt,s] <- G %*% (temp2 - 2 * kronecker(eifun(s,r),temp4)) } ans1 <- dc.da[,,cbindex,,drop = FALSE] # pp x r x (M-r) x r ans1 <- aperm(ans1, c(2,1,3,4)) # r x pp x (M-r) x r ans1 <- matrix(c(ans1), (M-r)*r, r*pp, byrow = TRUE) detastar.da <- array(0,c(M,r,r,nn)) for (s in 1:r) for (j in 1:r) { t1 <- t(dc.da[,j,,s]) t1 <- matrix(t1, M, pp) detastar.da[,j,s,] <- t1 %*% (if (intercept) t(xmat[,-1,drop = FALSE]) else t(xmat)) } etastar <- (if (intercept) xmat[,-1,drop = FALSE] else xmat) %*% Cimat eta <- matrix(int.vec, nn, M, byrow = TRUE) + etastar %*% t(Aimat) sumWinv <- solve((m2adefault(t(colSums(wz)), M = M))[,,1]) deta0.da <- array(0,c(M,M,r)) AtWi <- kronecker(matrix(1, nn, 1), Aimat) AtWi <- mux111(t(wz), AtWi, M = M, upper= FALSE) # matrix.arg= TRUE, AtWi <- array(t(AtWi), c(r,M,nn)) for (ss in 1:r) { temp90 <- (m2adefault(t(colSums(etastar[,ss]*wz)), M = M))[,,1] #MxM temp92 <- array(detastar.da[,,ss,], c(M,r,nn)) temp93 <- mux7(temp92, AtWi) temp91 <- rowSums(temp93, dims = 2) # M x M deta0.da[,,ss] <- -(temp90 + temp91) %*% sumWinv } ans2 <- deta0.da[-(1:r),,,drop = FALSE] # (M-r) x M x r ans2 <- aperm(ans2, c(1,3,2)) # (M-r) x r x M ans2 <- matrix(c(ans2), (M-r)*r, M) list(dc.da = ans1, dint.da = ans2) } rrr.deriv.rss <- function(theta, wz, U, z, M, r, xmat, pp, Index.corner, intercept = TRUE, xij = NULL) { Amat <- matrix(as.numeric(NA), M, r) Amat[Index.corner,] <- diag(r) Amat[-Index.corner,] <- theta # [-(1:M)] if (intercept) { Blist <- vector("list", pp+1) Blist[[1]] <- diag(M) for (ii in 2:(pp+1)) Blist[[ii]] <- Amat } else { Blist <- vector("list", pp) for (ii in 1:pp) Blist[[ii]] <- Amat } vlm.wfit(xmat = xmat, z, Blist, U = U, matrix.out = FALSE, rss = TRUE, xij = xij)$rss } rrr.deriv.gradient.fast <- function(theta, wz, U, z, M, r, xmat, pp, Index.corner, intercept = TRUE) { nn <- nrow(xmat) Aimat <- matrix(as.numeric(NA), M, r) Aimat[Index.corner,] <- diag(r) Aimat[-Index.corner,] <- theta # [-(1:M)] if (intercept) { Blist <- vector("list", pp+1) Blist[[1]] <- diag(M) for (i in 2:(pp+1)) Blist[[i]] <- Aimat } else { Blist <- vector("list", pp) for (i in 1:(pp)) Blist[[i]] <- Aimat } coeffs <- vlm.wfit(xmat, z, Blist, U = U, matrix.out= TRUE, xij = NULL)$mat.coef c3 <- coeffs <- t(coeffs) # transpose to make M x (pp+1) int.vec <- if (intercept) c3[,1] else 0 # \boldeta_0 Cimat <- if (intercept) t(c3[Index.corner, -1, drop = FALSE]) else t(c3[Index.corner,,drop = FALSE]) if (nrow(Cimat) != pp || ncol(Cimat) != r) stop("Cimat wrong shape") fred <- kronecker(matrix(1,1,r), if (intercept) xmat[, -1, drop = FALSE] else xmat) fred <- kronecker(fred, matrix(1, M, 1)) barney <- kronecker(Aimat, matrix(1, 1, pp)) barney <- kronecker(matrix(1, nn, 1), barney) temp <- array(t(barney*fred), c(r*pp, M, nn)) temp <- 
aperm(temp, c(2, 1, 3)) temp <- mux5(wz, temp, M = M, matrix.arg= TRUE) temp <- m2adefault(temp, M = r*pp) # Note M != M here! G <- solve(rowSums(temp, dims = 2)) dc.da <- array(NA,c(pp,r,r,M)) cbindex <- (1:M)[-Index.corner] resid2 <- mux22(t(wz), z - matrix(int.vec, nn, M, byrow = TRUE), M = M, upper = FALSE, as.matrix = TRUE) for (s in 1:r) for (tt in cbindex) { fred <- (if (intercept) t(xmat[,-1,drop = FALSE]) else t(xmat)) * matrix(resid2[,tt],pp,nn,byrow = TRUE) temp2 <- kronecker(eifun(s,r), rowSums(fred)) temp4 <- rep(0,pp) for (k in 1:r) { Wiak <- mux22(t(wz), matrix(Aimat[, k], nn, M, byrow = TRUE), M = M, upper = FALSE, as.matrix = TRUE) wxx <- Wiak[,tt] * (if (intercept) xmat[, -1, drop = FALSE] else xmat) blocki <- (if (intercept) t(xmat[, -1, drop = FALSE]) else t(xmat)) %*% wxx temp4 <- temp4 + blocki %*% Cimat[,k] } dc.da[,,s,tt] <- G %*% (temp2 - 2 * kronecker(eifun(s,r),temp4)) } detastar.da <- array(0,c(M,r,r,nn)) for (s in 1:r) for (j in 1:r) { t1 <- t(dc.da[,j,s,]) t1 <- matrix(t1, M, pp) detastar.da[,j,s,] <- t1 %*% (if (intercept) t(xmat[,-1,drop = FALSE]) else t(xmat)) } etastar <- (if (intercept) xmat[,-1,drop = FALSE] else xmat) %*% Cimat eta <- matrix(int.vec, nn, M, byrow = TRUE) + etastar %*% t(Aimat) sumWinv <- solve((m2adefault(t(colSums(wz)), M = M))[,,1]) deta0.da <- array(0,c(M,M,r)) AtWi <- kronecker(matrix(1, nn, 1), Aimat) AtWi <- mux111(t(wz), AtWi, M = M, upper= FALSE) # matrix.arg= TRUE, AtWi <- array(t(AtWi), c(r,M,nn)) for (ss in 1:r) { temp90 <- (m2adefault(t(colSums(etastar[,ss]*wz)), M = M))[,,1] temp92 <- array(detastar.da[,,ss,],c(M,r,nn)) temp93 <- mux7(temp92,AtWi) temp91 <- apply(temp93,1:2,sum) # M x M temp91 <- rowSums(temp93, dims = 2) # M x M deta0.da[,,ss] <- -(temp90 + temp91) %*% sumWinv } ans <- matrix(0,M,r) fred <- mux22(t(wz), z - eta, M = M, upper = FALSE, as.matrix = TRUE) fred.array <- array(t(fred %*% Aimat),c(r, 1, nn)) for (s in 1:r) { a1 <- colSums(fred %*% t(deta0.da[,, s])) a2 <- colSums(fred * etastar[, s]) temp92 <- array(detastar.da[, , s, ],c(M, r, nn)) temp93 <- mux7(temp92, fred.array) a3 <- rowSums(temp93, dims = 2) ans[,s] <- a1 + a2 + a3 } ans <- -2 * c(ans[cbindex,]) ans } vellipse <- function(R, ratio = 1, orientation = 0, center = c(0,0), N=300) { if (length(center) != 2) stop("center must be of length 2") theta <- 2*pi*(0:N)/N x1 <- R*cos(theta) y1 <- ratio*R*sin(theta) x <- center[1] + cos(orientation)*x1 - sin(orientation)*y1 y <- center[2] + sin(orientation)*x1 + cos(orientation)*y1 cbind(x, y) } biplot.qrrvglm <- function(x, ...) { stop("biplot.qrrvglm has been replaced by the function lvplot.qrrvglm") } lvplot.qrrvglm <- function(object, varlvI = FALSE, reference = NULL, add = FALSE, plot.it= TRUE, rug= TRUE, y = FALSE, type = c("fitted.values", "predictors"), xlab=paste("Latent Variable", if (Rank == 1) "" else " 1", sep = ""), ylab= if (Rank == 1) switch(type, predictors = "Predictors", fitted.values = "Fitted values") else "Latent Variable 2", pcex=par()$cex, pcol=par()$col, pch=par()$pch, llty=par()$lty, lcol=par()$col, llwd=par()$lwd, label.arg = FALSE, adj.arg = -0.1, ellipse = 0.95, Absolute = FALSE, elty = par()$lty, ecol = par()$col, elwd = par()$lwd, egrid = 200, chull.arg = FALSE, clty = 2, ccol=par()$col, clwd=par()$lwd, cpch = " ", C = FALSE, OriginC = c("origin","mean"), Clty = par()$lty, Ccol = par()$col, Clwd = par()$lwd, Ccex = par()$cex, Cadj.arg = -0.1, stretchC = 1, sites = FALSE, spch = NULL, scol = par()$col, scex = par()$cex, sfont = par()$font, check.ok = TRUE, ...) 
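## Illustrative usage sketch (added commentary, not part of the original
## source; 'fit2' is a hypothetical rank-2 qrrvglm fit). An ordination
## diagram with fitted ellipses, site scores and a convex hull could be
## drawn with
##   lvplot(fit2, ellipse = 0.95, sites = TRUE, chull.arg = TRUE)
## and since the site scores are returned invisibly,
##   nu <- lvplot(fit2, plot.it = FALSE)
## retrieves them without plotting.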
{
  if (mode(type) != "character" && mode(type) != "name")
    type <- as.character(substitute(type))
  type <- match.arg(type, c("fitted.values", "predictors"))[1]

  if (is.numeric(OriginC)) {
    OriginC <- rep(OriginC, length.out = 2)
  } else {
    if (mode(OriginC) != "character" && mode(OriginC) != "name")
      OriginC <- as.character(substitute(OriginC))
    OriginC <- match.arg(OriginC, c("origin", "mean"))[1]
  }

  if (length(ellipse) > 1)
    stop("ellipse must be of length 1 or 0")
  if (is.logical(ellipse)) {
    ellipse <- if (ellipse) 0.95 else NULL
  }

  Rank <- object@control$Rank
  if (Rank > 2)
    stop("can only handle rank 1 or 2 models")
  M <- object@misc$M
  NOS <- ncol(object@y)
  MSratio <- M / NOS  # First value is g(mean) = quadratic form in lv
  n <- object@misc$n
  colx2.index <- object@control$colx2.index
  cx1i <- object@control$colx1.index  # May be NULL
  if (check.ok)
    if (!(length(cx1i) == 1 && names(cx1i) == "(Intercept)"))
      stop("latent variable plots allowable only for ",
           "noRRR = ~ 1 models")

  Coef.list <- Coef(object, varlvI = varlvI, reference = reference)
  if ( C )
    Cmat <- Coef.list@C
  nustar <- Coef.list@lv  # n x Rank

  if (!plot.it)
    return(nustar)

  r.curves <- slot(object, type)  # n times M (\boldeta or \boldmu)
  if (!add) {
    if (Rank == 1) {
      matplot(nustar,
              if ( y && type == "fitted.values") object@y else r.curves,
              type = "n", xlab = xlab, ylab = ylab, ...)
    } else {  # Rank == 2
      matplot(c(Coef.list@Optimum[1, ], nustar[, 1]),
              c(Coef.list@Optimum[2, ], nustar[, 2]),
              type = "n", xlab = xlab, ylab = ylab, ...)
    }
  }

  if ((length(pch)  != 1 && length(pch)  != ncol(r.curves)) ||
      (length(pcol) != 1 && length(pcol) != ncol(r.curves)) ||
      (length(pcex) != 1 && length(pcex) != ncol(r.curves)))
    stop("pch, pcol and pcex must be of length 1 or ncol(r.curves)")

  pch  <- rep(pch,  length.out = ncol(r.curves))
  pcol <- rep(pcol, length.out = ncol(r.curves))
  pcex <- rep(pcex, length.out = ncol(r.curves))
  llty <- rep(llty, length.out = ncol(r.curves))
  lcol <- rep(lcol, length.out = ncol(r.curves))
  llwd <- rep(llwd, length.out = ncol(r.curves))
  elty <- rep(elty, length.out = ncol(r.curves))
  ecol <- rep(ecol, length.out = ncol(r.curves))
  elwd <- rep(elwd, length.out = ncol(r.curves))
  adj.arg <- rep(adj.arg, length.out = ncol(r.curves))
  if ( C ) {
    Clwd <- rep(Clwd, length.out = nrow(Cmat))
    Clty <- rep(Clty, length.out = nrow(Cmat))
    Ccol <- rep(Ccol, length.out = nrow(Cmat))
    Cadj.arg <- rep(Cadj.arg, length.out = nrow(Cmat))
    Ccex <- rep(Ccex, length.out = nrow(Cmat))
  }

  if (Rank == 1) {
    for (i in 1:ncol(r.curves)) {
      xx <- nustar
      yy <- r.curves[, i]
      o <- sort.list(xx)
      xx <- xx[o]
      yy <- yy[o]
      lines(xx, yy, col = lcol[i], lwd = llwd[i], lty = llty[i])
      if ( y && type == "fitted.values") {
        ypts <- object@y
        if (ncol(as.matrix(ypts)) == ncol(r.curves))
          points(xx, ypts[o, i], col = pcol[i], cex = pcex[i], pch = pch[i])
      }
    }
    if (rug)
      rug(xx)
  } else {
    for (i in 1:ncol(r.curves))
      points(Coef.list@Optimum[1, i], Coef.list@Optimum[2, i],
             col = pcol[i], cex = pcex[i], pch = pch[i])
    if (label.arg) {
      for (i in 1:ncol(r.curves))
        text(Coef.list@Optimum[1, i], Coef.list@Optimum[2, i],
             labels = (dimnames(Coef.list@Optimum)[[2]])[i],
             adj = adj.arg[i], col = pcol[i], cex = pcex[i])
    }
    if (chull.arg) {
      hull <- chull(nustar[, 1], nustar[, 2])
      hull <- c(hull, hull[1])
      lines(nustar[hull, 1], nustar[hull, 2],
            type = "b", pch = cpch, lty = clty, col = ccol, lwd = clwd)
    }
    if (length(ellipse)) {
      ellipse.temp <- if (ellipse > 0) ellipse else 0.95
      if (ellipse < 0 && (!object@control$EqualTolerances || varlvI))
        stop("an equal-tolerances assumption and 'varlvI = FALSE' ",
             "is needed for 'ellipse' < 0")
      if ( check.ok ) {
        colx1.index <- object@control$colx1.index
        if (!(length(colx1.index) == 1 &&
              names(colx1.index) == "(Intercept)"))
          stop("can only plot ellipses for intercept-only models")
      }
      for (i in 1:ncol(r.curves)) {
        cutpoint <- object@family@linkfun(if (Absolute) ellipse.temp else
                                          Coef.list@Maximum[i] * ellipse.temp,
                                          extra = object@extra)
        if (MSratio > 1)
          cutpoint <- cutpoint[1, 1]
        cutpoint <- object@family@linkfun(Coef.list@Maximum[i],
                                          extra = object@extra) - cutpoint
        if (is.finite(cutpoint) && cutpoint > 0) {
          Mmat <- diag(rep(ifelse(object@control$Crow1positive, 1, -1),
                           length.out = Rank))
          etoli <- eigen(t(Mmat) %*% Coef.list@Tolerance[, , i] %*% Mmat)
          A <- ifelse(etoli$values[1] > 0,
                      sqrt(2 * cutpoint * etoli$values[1]), Inf)
          B <- ifelse(etoli$values[2] > 0,
                      sqrt(2 * cutpoint * etoli$values[2]), Inf)
          if (ellipse < 0)
            A <- B <- -ellipse / 2
          theta.angle <- asin(etoli$vectors[2, 1]) *
            ifelse(object@control$Crow1positive[2], 1, -1)
          if (object@control$Crow1positive[1])
            theta.angle <- pi - theta.angle
          if (all(is.finite(c(A, B))))
            lines(vellipse(R = 2 * A, ratio = B / A,
                           orientation = theta.angle,
                           center = Coef.list@Optimum[, i], N = egrid),
                  lwd = elwd[i], col = ecol[i], lty = elty[i])
        }
      }
    }
    if ( C ) {
      if (is.character(OriginC) && OriginC == "mean")
        OriginC <- c(mean(nustar[, 1]), mean(nustar[, 2]))
      if (is.character(OriginC) && OriginC == "origin")
        OriginC <- c(0, 0)
      for (i in 1:nrow(Cmat))
        arrows(x0 = OriginC[1], y0 = OriginC[2],
               x1 = OriginC[1] + stretchC * Cmat[i, 1],
               y1 = OriginC[2] + stretchC * Cmat[i, 2],
               lty = Clty[i], col = Ccol[i], lwd = Clwd[i])
      if (label.arg) {
        temp200 <- dimnames(Cmat)[[1]]
        for (i in 1:nrow(Cmat))
          text(OriginC[1] + stretchC * Cmat[i, 1],
               OriginC[2] + stretchC * Cmat[i, 2],
               col = Ccol[i], labels = temp200[i],
               adj = Cadj.arg[i], cex = Ccex[i])
      }
    }
    if (sites) {
      text(nustar[, 1], nustar[, 2], adj = 0.5,
           labels = if (is.null(spch)) dimnames(nustar)[[1]] else
                    rep(spch, length.out = nrow(nustar)),
           col = scol, cex = scex, font = sfont)
    }
  }
  invisible(nustar)
}


lvplot.rrvglm <- function(object,
                          A = TRUE, C = TRUE, scores = FALSE,
                          plot.it = TRUE,
                          groups = rep(1, n),
                          gapC = sqrt(sum(par()$cxy^2)), scaleA = 1,
                          xlab = "Latent Variable 1",
                          ylab = "Latent Variable 2",
                          Alabels = if (length(object@misc$predictors.names))
                            object@misc$predictors.names else
                            paste("LP", 1:M, sep = ""),
                          Aadj = par()$adj, Acex = par()$cex,
                          Acol = par()$col, Apch = NULL,
                          Clabels = rownames(Cmat), Cadj = par()$adj,
                          Ccex = par()$cex, Ccol = par()$col,
                          Clty = par()$lty, Clwd = par()$lwd,
                          chull.arg = FALSE,
                          ccex = par()$cex, ccol = par()$col,
                          clty = par()$lty, clwd = par()$lwd,
                          spch = NULL, scex = par()$cex, scol = par()$col,
                          slabels = rownames(x2mat), ...) {
  if (object@control$Rank != 2 && plot.it)
    stop("can only handle rank-2 models")
  M <- object@misc$M
  n <- object@misc$n
  colx2.index <- object@control$colx2.index
  Coef.list <- Coef(object)
  Amat <- Coef.list@A
  Cmat <- Coef.list@C

  Amat <- Amat * scaleA
  dimnames(Amat) <- list(object@misc$predictors.names, NULL)
  Cmat <- Cmat / scaleA

  if (!length(object@x)) {
    object@x <- model.matrixvlm(object, type = "lm")
  }
  x2mat <- object@x[, colx2.index, drop = FALSE]
  nuhat <- x2mat %*% Cmat
  if (!plot.it)
    return(as.matrix(nuhat))

  index.nosz <- 1:M
  allmat <- rbind(if (A) Amat else NULL,
                  if (C) Cmat else NULL,
                  if (scores) nuhat else NULL)
  plot(allmat[, 1], allmat[, 2], type = "n",
       xlab = xlab, ylab = ylab, ...)  # xlim etc. supplied through ...
if (A) { Aadj <- rep(Aadj, length.out = length(index.nosz)) Acex <- rep(Acex, length.out = length(index.nosz)) Acol <- rep(Acol, length.out = length(index.nosz)) if (length(Alabels) != M) stop("'Alabels' must be of length ", M) if (length(Apch)) { Apch <- rep(Apch, length.out = length(index.nosz)) for (i in index.nosz) points(Amat[i,1], Amat[i,2], pch=Apch[i],cex=Acex[i],col=Acol[i]) } else { for (i in index.nosz) text(Amat[i,1], Amat[i,2], Alabels[i], cex=Acex[i], col=Acol[i], adj=Aadj[i]) } } if (C) { p2 <- nrow(Cmat) gapC <- rep(gapC, length.out = p2) Cadj <- rep(Cadj, length.out = p2) Ccex <- rep(Ccex, length.out = p2) Ccol <- rep(Ccol, length.out = p2) Clwd <- rep(Clwd, length.out = p2) Clty <- rep(Clty, length.out = p2) if (length(Clabels) != p2) stop("'length(Clabels)' must be equal to ", p2) for (ii in 1:p2) { arrows(0, 0, Cmat[ii,1], Cmat[ii,2], lwd=Clwd[ii], lty=Clty[ii], col=Ccol[ii]) const <- 1 + gapC[ii] / sqrt(Cmat[ii,1]^2 + Cmat[ii,2]^2) text(const*Cmat[ii,1], const*Cmat[ii,2], Clabels[ii], cex=Ccex[ii], adj=Cadj[ii], col=Ccol[ii]) } } if (scores) { ugrp <- unique(groups) nlev <- length(ugrp) # number of groups clty <- rep(clty, length.out = nlev) clwd <- rep(clwd, length.out = nlev) ccol <- rep(ccol, length.out = nlev) if (length(spch)) spch <- rep(spch, length.out = n) scol <- rep(scol, length.out = n) scex <- rep(scex, length.out = n) for (ii in ugrp) { gp <- groups == ii if (nlev > 1 && (length(unique(spch[gp])) != 1 || length(unique(scol[gp])) != 1 || length(unique(scex[gp])) != 1)) warning("spch/scol/scex is different for individuals ", "from the same group") temp <- nuhat[gp,,drop = FALSE] if (length(spch)) { points(temp[,1], temp[,2], cex=scex[gp], pch=spch[gp], col=scol[gp]) } else { text(temp[,1], temp[,2], label=slabels, cex=scex[gp], col=scol[gp]) } if (chull.arg) { hull <- chull(temp[,1],temp[,2]) hull <- c(hull, hull[1]) lines(temp[hull,1], temp[hull,2], type = "b", lty=clty[ii], col=ccol[ii], lwd=clwd[ii], pch = " ") } } } invisible(nuhat) } Coef.rrvglm <- function(object, ...) { M <- object@misc$M n <- object@misc$n colx1.index <- object@control$colx1.index colx2.index <- object@control$colx2.index p1 <- length(colx1.index) # May be 0 Amat <- object@constraints[[colx2.index[1]]] B1mat <- if (p1) coefvlm(object, matrix.out = TRUE)[colx1.index,,drop = FALSE] else NULL C.try <- coefvlm(object, matrix.out = TRUE)[colx2.index, , drop = FALSE] Cmat <- C.try %*% Amat %*% solve(t(Amat) %*% Amat) Rank <- object@control$Rank lv.names <- if (Rank>1) paste("lv", 1:Rank, sep = "") else "lv" dimnames(Amat) <- list(object@misc$predictors.names, lv.names) dimnames(Cmat) <- list(dimnames(Cmat)[[1]], lv.names) ans <- new(Class = "Coef.rrvglm", A = Amat, C = Cmat, Rank = Rank, colx2.index = colx2.index) if (!is.null(colx1.index)) { ans@colx1.index <- colx1.index ans@B1 <- B1mat } if (object@control$Corner) ans@Atilde <- Amat[-c(object@control$Index.corner, object@control$szero),,drop = FALSE] ans } setMethod("Coef", "rrvglm", function(object, ...) Coef.rrvglm(object, ...)) show.Coef.rrvglm <- function(x, ...) { object <- x cat("A matrix:\n") print(object@A, ...) cat("\nC matrix:\n") print(object@C, ...) p1 <- length(object@colx1.index) if (p1) { cat("\nB1 matrix:\n") print(object@B1, ...) } invisible(object) } if (!isGeneric("biplot")) setGeneric("biplot", function(x, ...) standardGeneric("biplot")) setMethod("Coef", "qrrvglm", function(object, ...) Coef.qrrvglm(object, ...)) setMethod("biplot", "qrrvglm", function(x, ...) 
{ biplot.qrrvglm(x, ...)}) setMethod("lvplot", "qrrvglm", function(object, ...) { invisible(lvplot.qrrvglm(object, ...))}) setMethod("lvplot", "rrvglm", function(object, ...) { invisible(lvplot.rrvglm(object, ...))}) biplot.rrvglm <- function(x, ...) lvplot(object = x, ...) setMethod("biplot", "rrvglm", function(x, ...) invisible(biplot.rrvglm(x, ...))) summary.qrrvglm <- function(object, varlvI = FALSE, reference = NULL, ...) { answer <- object answer@post$Coef <- Coef(object, varlvI = varlvI, reference = reference, ...) # Store it here; non-elegant if (length((answer@post$Coef)@dispersion) && length(object@misc$estimated.dispersion) && object@misc$estimated.dispersion) answer@dispersion <- answer@misc$dispersion <- (answer@post$Coef)@dispersion as(answer, "summary.qrrvglm") } show.summary.qrrvglm <- function(x, ...) { cat("\nCall:\n") dput(x@call) print(x@post$Coef, ...) # non-elegant programming if (length(x@dispersion) > 1) { cat("\nDispersion parameters:\n") if (length(x@misc$ynames)) { names(x@dispersion) <- x@misc$ynames print(x@dispersion, ...) } else { cat(x@dispersion, fill = TRUE) } cat("\n") } else if (length(x@dispersion) == 1) { cat("\nDispersion parameter: ", x@dispersion, "\n") } } setClass("summary.qrrvglm", contains = "qrrvglm") setMethod("summary", "qrrvglm", function(object, ...) summary.qrrvglm(object, ...)) setMethod("show", "summary.qrrvglm", function(object) show.summary.qrrvglm(object)) setMethod("show", "Coef.rrvglm", function(object) show.Coef.rrvglm(object)) grc <- function(y, Rank = 1, Index.corner = 2:(1+Rank), szero = 1, summary.arg = FALSE, h.step = 0.0001, ...) { myrrcontrol <- rrvglm.control(Rank = Rank, Index.corner = Index.corner, szero = szero, ...) object.save <- y if (is(y, "rrvglm")) { y <- object.save@y } else { y <- as.matrix(y) y <- as(y, "matrix") } if (length(dim(y)) != 2 || nrow(y) < 3 || ncol(y) < 3) stop("y must be a matrix with >= 3 rows & columns, ", "or a rrvglm() object") ei <- function(i, n) diag(n)[, i, drop = FALSE] .grc.df <- data.frame(Row.2 = eifun(2, nrow(y))) yn1 <- if (length(dimnames(y)[[1]])) dimnames(y)[[1]] else paste("X2.", 1:nrow(y), sep = "") warn.save <- options()$warn options(warn = -3) # Suppress the warnings (hopefully, temporarily) if (any(!is.na(as.numeric(substring(yn1, 1, 1))))) yn1 <- paste("X2.", 1:nrow(y), sep = "") options(warn = warn.save) Row. <- factor(1:nrow(y)) modmat.row <- model.matrix( ~ Row.) Col. <- factor(1:ncol(y)) modmat.col <- model.matrix( ~ Col.) 
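  ## Comment added for exposition (a rough description): the loops below
  ## recast Goodman's RC association model as an RR-VGLM. Row main
  ## effects enter via 'Row.' indicators with one-column constraint
  ## matrices of ones, column main effects via 'Col.' indicator
  ## constraints, and the per-row terms with diag(ncol(y)) constraints
  ## carry the reduced-rank row-by-column interaction.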
cms <- list("(Intercept)" = matrix(1, ncol(y), 1)) for (ii in 2:nrow(y)) { cms[[paste("Row.", ii, sep = "")]] <- matrix(1, ncol(y), 1) .grc.df[[paste("Row.", ii, sep = "")]] <- modmat.row[,ii] } for (ii in 2:ncol(y)) { cms[[paste("Col.", ii, sep = "")]] <- modmat.col[,ii,drop = FALSE] .grc.df[[paste("Col.", ii, sep = "")]] <- rep(1, nrow(y)) } for (ii in 2:nrow(y)) { cms[[yn1[ii]]] <- diag(ncol(y)) .grc.df[[yn1[ii]]] <- eifun(ii, nrow(y)) } dimnames(.grc.df) <- list(if (length(dimnames(y)[[1]])) dimnames(y)[[1]] else as.character(1:nrow(y)), dimnames(.grc.df)[[2]]) str1 <- "~ Row.2" if (nrow(y) > 2) for (ii in 3:nrow(y)) str1 <- paste(str1, paste("Row.", ii, sep = ""), sep = " + ") for (ii in 2:ncol(y)) str1 <- paste(str1, paste("Col.", ii, sep = ""), sep = " + ") str2 <- paste("y ", str1) for (ii in 2:nrow(y)) str2 <- paste(str2, yn1[ii], sep = " + ") myrrcontrol$noRRR <- as.formula(str1) # Overwrite this assign(".grc.df", .grc.df, envir = VGAM:::VGAMenv) warn.save <- options()$warn options(warn = -3) # Suppress the warnings (hopefully, temporarily) answer <- if (is(object.save, "rrvglm")) object.save else rrvglm(as.formula(str2), family = poissonff, constraints = cms, control = myrrcontrol, data = .grc.df) options(warn = warn.save) if (summary.arg) { answer <- as(answer, "rrvglm") answer <- summary.rrvglm(answer, h.step = h.step) } else { answer <- as(answer, "grc") } if (exists(".grc.df", envir = VGAM:::VGAMenv)) rm(".grc.df", envir = VGAM:::VGAMenv) answer } summary.grc <- function(object, ...) { grc(object, summary.arg= TRUE, ...) } trplot.qrrvglm <- function(object, whichSpecies = NULL, add = FALSE, plot.it = TRUE, label.sites = FALSE, sitenames = rownames(object@y), axes.equal = TRUE, cex=par()$cex, col = 1:(nos*(nos-1)/2), log = "", lty = rep(par()$lty, length.out = nos*(nos-1)/2), lwd = rep(par()$lwd, length.out = nos*(nos-1)/2), tcol= rep(par()$col, length.out = nos*(nos-1)/2), xlab = NULL, ylab = NULL, main = "", # "Trajectory plot", type = "b", check.ok = TRUE, ...) 
{
  coef.obj <- Coef(object)  # use defaults for those two arguments
  if (coef.obj@Rank != 1)
    stop("object must be a rank-1 model")
  fv <- fitted(object)
  modelno <- object@control$modelno  # 1,2,3, or 0
  NOS <- ncol(fv)  # Number of species
  M <- object@misc$M
  nn <- nrow(fv)  # Number of sites
  if (length(sitenames))
    sitenames <- rep(sitenames, length.out = nn)
  sppNames <- dimnames(object@y)[[2]]
  if (!length(whichSpecies)) {
    whichSpecies <- sppNames[1:NOS]
    whichSpecies.numer <- 1:NOS
  } else if (is.numeric(whichSpecies)) {
    whichSpecies.numer <- whichSpecies
    whichSpecies <- sppNames[whichSpecies.numer]  # Convert to character
  } else {
    whichSpecies.numer <- match(whichSpecies, sppNames)
  }
  nos <- length(whichSpecies)  # nos = number of species to be plotted
  if (length(whichSpecies.numer) <= 1)
    stop("must have at least 2 species to be plotted")
  cx1i <- object@control$colx1.index
  if (check.ok)
    if (!(length(cx1i) == 1 && names(cx1i) == "(Intercept)"))
      stop("trajectory plots allowable only for noRRR = ~ 1 models")

  first.spp  <- iam(1, 1, M = M, both = TRUE, diag = FALSE)$row.index
  second.spp <- iam(1, 1, M = M, both = TRUE, diag = FALSE)$col.index
  myxlab <- if (length(whichSpecies.numer) == 2) {
    paste("Fitted value for", sppNames[whichSpecies.numer[1]])
  } else "Fitted value for 'first' species"
  myxlab <- if (length(xlab)) xlab else myxlab
  myylab <- if (length(whichSpecies.numer) == 2) {
    paste("Fitted value for", sppNames[whichSpecies.numer[2]])
  } else "Fitted value for 'second' species"
  myylab <- if (length(ylab)) ylab else myylab

  if (!add) {
    xxx <- if (axes.equal) fv[, whichSpecies.numer] else
           fv[, whichSpecies.numer[first.spp]]
    yyy <- if (axes.equal) fv[, whichSpecies.numer] else
           fv[, whichSpecies.numer[second.spp]]
    matplot(xxx, yyy, type = "n", log = log,
            xlab = myxlab, ylab = myylab, main = main, ...)
  }

  lwd  <- rep(lwd,  length.out = nos * (nos - 1) / 2)
  col  <- rep(col,  length.out = nos * (nos - 1) / 2)
  lty  <- rep(lty,  length.out = nos * (nos - 1) / 2)
  tcol <- rep(tcol, length.out = nos * (nos - 1) / 2)

  oo <- order(coef.obj@lv)  # Sort by the latent variable
  ii <- 0
  species.names <- NULL
  if (plot.it)
    for (i1 in seq(whichSpecies.numer)) {
      for (i2 in seq(whichSpecies.numer))
        if (i1 < i2) {
          ii <- ii + 1
          species.names <- rbind(species.names,
                                 cbind(sppNames[i1], sppNames[i2]))
          matplot(fv[oo, whichSpecies.numer[i1]],
                  fv[oo, whichSpecies.numer[i2]],
                  type = type, add = TRUE,
                  lty = lty[ii], lwd = lwd[ii], col = col[ii],
                  pch = if (label.sites) " " else "*")
          if (label.sites && length(sitenames))
            text(fv[oo, whichSpecies.numer[i1]],
                 fv[oo, whichSpecies.numer[i2]],
                 labels = sitenames[oo], cex = cex, col = tcol[ii])
        }
    }
  invisible(list(species.names = species.names,
                 sitenames = sitenames[oo]))
}

if (!isGeneric("trplot"))
  setGeneric("trplot", function(object, ...) standardGeneric("trplot"))
setMethod("trplot", "qrrvglm",
          function(object, ...) trplot.qrrvglm(object, ...))
setMethod("trplot", "cao",
          function(object, ...) trplot.qrrvglm(object, ...))


vcovrrvglm <- function(object, ...) {
  summary.rrvglm(object, ...)@cov.unscaled
}


vcovqrrvglm <- function(object,
                        ITolerances = object@control$EqualTolerances,
                        MaxScale = c("predictors", "response"),
                        dispersion = rep(if (length(sobj@dispersion))
                                         sobj@dispersion else 1,
                                         length.out = M), ...)
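## Comment added for exposition: vcovqrrvglm() below is an unfinished
## stub (its body stop()s immediately), so only vcov() on "rrvglm"
## objects, which delegates to summary.rrvglm()@cov.unscaled via
## vcovrrvglm() above, is currently usable, e.g.
##   V <- vcov(fitrr)   # 'fitrr' a hypothetical rrvglm fit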
{ stop("this function is not yet completed") if (mode(MaxScale) != "character" && mode(MaxScale) != "name") MaxScale <- as.character(substitute(MaxScale)) MaxScale <- match.arg(MaxScale, c("predictors", "response"))[1] if (MaxScale != "predictors") stop("can currently only handle MaxScale='predictors'") sobj <- summary(object) cobj <- Coef(object, ITolerances = ITolerances, ...) M <- nrow(cobj@A) dispersion <- rep(dispersion, length.out = M) if (cobj@Rank != 1) stop("object must be a rank 1 model") dvecMax <- cbind(1, -0.5 * cobj@A / c(cobj@D), (cobj@A / c(2*cobj@D))^2) dvecTol <- cbind(0, 0, 1 / c(-2 * cobj@D)^1.5) dvecOpt <- cbind(0, -0.5 / c(cobj@D), 0.5 * cobj@A / c(cobj@D^2)) if ((length(object@control$colx1.index) != 1) || (names(object@control$colx1.index) != "(Intercept)")) stop("Can only handle noRRR=~1 models") okvals <- c(3*M, 2*M+1) if (all(length(coef(object)) != okvals)) stop("Can only handle intercepts-only model with ", "EqualTolerances = FALSE") answer <- NULL Cov.unscaled <- array(NA, c(3, 3, M), dimnames = list( c("(Intercept)", "lv", "lv^2"), c("(Intercept)", "lv", "lv^2"), dimnames(cobj@D)[[3]])) for (spp in 1:M) { index <- c(M + ifelse(object@control$EqualTolerances, 1, M) + spp, spp, M + ifelse(object@control$EqualTolerances, 1, spp)) vcov <- Cov.unscaled[,,spp] <- sobj@cov.unscaled[index, index] # Order is A, D, B1 se2Max <- dvecMax[spp,,drop = FALSE] %*% vcov %*% cbind(dvecMax[spp,]) se2Tol <- dvecTol[spp,,drop = FALSE] %*% vcov %*% cbind(dvecTol[spp,]) se2Opt <- dvecOpt[spp,,drop = FALSE] %*% vcov %*% cbind(dvecOpt[spp,]) answer <- rbind(answer, dispersion[spp]^0.5 * c(se2Opt = se2Opt, se2Tol = se2Tol, se2Max = se2Max)) } link.function <- if (MaxScale == "predictors") remove.arg(object@misc$predictors.names[1]) else "" dimnames(answer) <- list(dimnames(cobj@D)[[3]], c("Optimum", "Tolerance", if (nchar(link.function)) paste(link.function, "(Maximum)", sep = "") else "Maximum")) NAthere <- is.na(answer %*% rep(1, length.out = 3)) answer[NAthere,] <- NA # NA in tolerance means NA everywhere else new(Class = "vcov.qrrvglm", Cov.unscaled = Cov.unscaled, dispersion = dispersion, se = sqrt(answer)) } setMethod("vcov", "rrvglm", function(object, ...) vcovrrvglm(object, ...)) setMethod("vcov", "qrrvglm", function(object, ...) vcovqrrvglm(object, ...)) setClass(Class = "vcov.qrrvglm", representation( Cov.unscaled = "array", # permuted cov.unscaled dispersion = "numeric", se = "matrix")) model.matrix.qrrvglm <- function(object, type = c("lv", "vlm"), ...) { if (mode(type) != "character" && mode(type) != "name") type <- as.character(substitute(type)) type <- match.arg(type, c("lv", "vlm"))[1] switch(type, lv = Coef(object, ...)@lv, vlm = object@x) } setMethod("model.matrix", "qrrvglm", function(object, ...) model.matrix.qrrvglm(object, ...)) perspqrrvglm <- function(x, varlvI = FALSE, reference = NULL, plot.it = TRUE, xlim = NULL, ylim = NULL, zlim = NULL, # zlim ignored if Rank == 1 gridlength = if (Rank == 1) 301 else c(51,51), whichSpecies = NULL, xlab = if (Rank == 1) "Latent Variable" else "Latent Variable 1", ylab = if (Rank == 1) "Expected Value" else "Latent Variable 2", zlab = "Expected value", labelSpecies = FALSE, # For Rank == 1 only stretch = 1.05, # quick and dirty, Rank == 1 only main = "", ticktype = "detailed", col = if (Rank == 1) par()$col else "white", llty = par()$lty, llwd = par()$lwd, add1 = FALSE, ...) 
{ oylim <- ylim object <- x # don't like x as the primary argument coef.obj <- Coef(object, varlvI = varlvI, reference = reference) if ((Rank <- coef.obj@Rank) > 2) stop("object must be a rank-1 or rank-2 model") fv <- fitted(object) NOS <- ncol(fv) # Number of species M <- object@misc$M # xlim <- rep(if (length(xlim)) xlim else range(coef.obj@lv[,1]), length = 2) if (!length(oylim)) { ylim <- if (Rank == 1) c(0, max(fv)*stretch) else rep(range(coef.obj@lv[,2]), length = 2) } gridlength <- rep(gridlength, length = Rank) lv1 <- seq(xlim[1], xlim[2], length = gridlength[1]) if (Rank == 1) { m <- cbind(lv1) } else { lv2 <- seq(ylim[1], ylim[2], length = gridlength[2]) m <- expand.grid(lv1,lv2) } if (dim(coef.obj@B1)[1] != 1 || dimnames(coef.obj@B1)[[1]] != "(Intercept)") stop("noRRR = ~ 1 is needed") LP <- coef.obj@A %*% t(cbind(m)) # M by n LP <- LP + c(coef.obj@B1) # Assumes \bix_1 = 1 (intercept only) mm <- as.matrix(m) N <- ncol(LP) for (jay in 1:M) { for (ii in 1:N) { LP[jay, ii] <- LP[jay, ii] + mm[ii, , drop = FALSE] %*% coef.obj@D[,,jay] %*% t(mm[ii, , drop = FALSE]) } } LP <- t(LP) # n by M fitvals <- object@family@linkinv(LP) # n by NOS dimnames(fitvals) <- list(NULL, dimnames(fv)[[2]]) sppNames <- dimnames(object@y)[[2]] if (!length(whichSpecies)) { whichSpecies <- sppNames[1:NOS] whichSpecies.numer <- 1:NOS } else if (is.numeric(whichSpecies)) { whichSpecies.numer <- whichSpecies whichSpecies <- sppNames[whichSpecies.numer] # Convert to character } else { whichSpecies.numer <- match(whichSpecies, sppNames) } if (Rank == 1) { if (plot.it) { if (!length(oylim)) ylim <- c(0, max(fitvals[,whichSpecies.numer]) * stretch) # A revision col <- rep(col, length.out = length(whichSpecies.numer)) llty <- rep(llty, leng = length(whichSpecies.numer)) llwd <- rep(llwd, leng = length(whichSpecies.numer)) if (!add1) matplot(lv1, fitvals, xlab = xlab, ylab = ylab, type = "n", main = main, xlim = xlim, ylim = ylim, ...) for (j in 1:length(whichSpecies.numer)) { ptr2 <- whichSpecies.numer[j] # points to species column lines(lv1, fitvals[,ptr2], col=col[j], lty=llty[j], lwd=llwd[j], ...) if (labelSpecies) { ptr1 <- (1:nrow(fitvals))[max(fitvals[,ptr2]) == fitvals[,ptr2]] ptr1 <- ptr1[1] text(lv1[ptr1], fitvals[ptr1,ptr2]+ (stretch-1)*diff(range(ylim)), label=sppNames[j], col=col[j], ...) } } } } else { maxfitted <- matrix(fitvals[,whichSpecies[1]], length(lv1), length(lv2)) if (length(whichSpecies) > 1) for (j in whichSpecies[-1]) { maxfitted <- pmax(maxfitted, matrix(fitvals[,j], length(lv1), length(lv2))) } if (!length(zlim)) zlim <- range(maxfitted, na.rm = TRUE) if (plot.it) graphics:::persp.default(lv1, lv2, maxfitted, zlim = zlim, xlab = xlab, ylab = ylab, zlab = zlab, ticktype = ticktype, col = col, main = main, ...) } invisible(list(fitted = fitvals, lv1grid = lv1, lv2grid = if (Rank == 2) lv2 else NULL, maxfitted = if (Rank == 2) maxfitted else NULL)) } if (!isGeneric("persp")) setGeneric("persp", function(x, ...) standardGeneric("persp"), package = "VGAM") setMethod("persp", "qrrvglm", function(x, ...) perspqrrvglm(x = x, ...)) Rank.rrvglm <- function(object, ...) { object@control$Rank } Rank.qrrvglm <- function(object, ...) { object@control$Rank } Rank.cao <- function(object, ...) { object@control$Rank } ccoef.qrrvglm <- function(object, varlvI = FALSE, reference = NULL, ...) { Coef(object, varlvI = varlvI, reference = reference, ...)@C } ccoef.Coef.qrrvglm <- function(object, ...) { if (length(list(...))) warning("Too late! 
Ignoring the extra arguments") object@C } latvar.qrrvglm <- function(object, varlvI = FALSE, reference = NULL, ...) { Coef(object, varlvI = varlvI, reference = reference, ...)@lv } lv.rrvglm <- function(object, ...) { ans <- lvplot(object, plot.it = FALSE) if (ncol(ans) == 1) dimnames(ans) <- list(dimnames(ans)[[1]], "lv") ans } latvar.Coef.qrrvglm <- function(object, ...) { if (length(list(...))) warning("Too late! Ignoring the extra arguments") object@lv } Max.qrrvglm <- function(object, varlvI = FALSE, reference = NULL, ...) { Coef(object, varlvI = varlvI, reference = reference, ...)@Maximum } Max.Coef.qrrvglm <- function(object, ...) { if (length(list(...))) warning("Too late! Ignoring the extra arguments") if (any(slotNames(object) == "Maximum")) object@Maximum else Max(object, ...) } Opt.qrrvglm <- function(object, varlvI = FALSE, reference = NULL, ...) { Coef(object, varlvI = varlvI, reference = reference, ...)@Optimum } Opt.Coef.qrrvglm <- function(object, ...) { if (length(list(...))) warning("Too late! Ignoring the extra arguments") Coef(object, ...)@Optimum } Tol.qrrvglm <- function(object, varlvI = FALSE, reference = NULL, ...) { Coef(object, varlvI = varlvI, reference = reference, ...)@Tolerance } Tol.Coef.qrrvglm <- function(object, ...) { if (length(list(...))) warning("Too late! Ignoring the extra arguments") if (any(slotNames(object) == "Tolerance")) object@Tolerance else Tol(object, ...) } if (!isGeneric("ccoef")) setGeneric("ccoef", function(object, ...) standardGeneric("ccoef")) setMethod("ccoef", "rrvglm", function(object, ...) ccoef.qrrvglm(object, ...)) setMethod("ccoef", "qrrvglm", function(object, ...) ccoef.qrrvglm(object, ...)) setMethod("ccoef", "Coef.rrvglm", function(object, ...) ccoef.Coef.qrrvglm(object, ...)) setMethod("ccoef", "Coef.qrrvglm", function(object, ...) ccoef.Coef.qrrvglm(object, ...)) setMethod("coef", "qrrvglm", function(object, ...) Coef.qrrvglm(object, ...)) setMethod("coefficients", "qrrvglm", function(object, ...) Coef.qrrvglm(object, ...)) if (!isGeneric("lv")) setGeneric("lv", function(object, ...) standardGeneric("lv")) setMethod("lv", "rrvglm", function(object, ...) lv.rrvglm(object, ...)) setMethod("lv", "qrrvglm", function(object, ...) latvar.qrrvglm(object, ...)) setMethod("lv", "Coef.rrvglm", function(object, ...) latvar.Coef.qrrvglm(object, ...)) setMethod("lv", "Coef.qrrvglm", function(object, ...) latvar.Coef.qrrvglm(object, ...)) if (!isGeneric("latvar")) setGeneric("latvar", function(object, ...) standardGeneric("latvar")) setMethod("latvar", "rrvglm", function(object, ...) lv.rrvglm(object, ...)) setMethod("latvar", "qrrvglm", function(object, ...) latvar.qrrvglm(object, ...)) setMethod("latvar", "Coef.rrvglm", function(object, ...) latvar.Coef.qrrvglm(object, ...)) setMethod("latvar", "Coef.qrrvglm", function(object, ...) latvar.Coef.qrrvglm(object, ...)) if (!isGeneric("Max")) setGeneric("Max", function(object, ...) standardGeneric("Max")) setMethod("Max", "qrrvglm", function(object, ...) Max.qrrvglm(object, ...)) setMethod("Max", "Coef.qrrvglm", function(object, ...) Max.Coef.qrrvglm(object, ...)) if (!isGeneric("Opt")) setGeneric("Opt", function(object, ...) standardGeneric("Opt")) setMethod("Opt", "qrrvglm", function(object, ...) Opt.qrrvglm(object, ...)) setMethod("Opt", "Coef.qrrvglm", function(object, ...) Opt.Coef.qrrvglm(object, ...)) if (!isGeneric("Tol")) setGeneric("Tol", function(object, ...) standardGeneric("Tol")) setMethod("Tol", "qrrvglm", function(object, ...) 
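# Quick reference for the accessors defined in this block (hedged,
# untested; 'p1' is the assumed cqo() fit from the sketches above):
#   latvar(p1)  # or lv(p1): site scores on the ordination axes
#   Max(p1)     # fitted species maxima
#   Opt(p1)     # species optima
#   Tol(p1)     # species tolerances
#   Rank(p1)    # rank of the model, e.g. 1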
Tol.qrrvglm(object, ...)) setMethod("Tol", "Coef.qrrvglm", function(object, ...) Tol.Coef.qrrvglm(object, ...)) cgo <- function(...) { stop("The function 'cgo' has been renamed 'cqo'. ", "Ouch! Sorry!") } clo <- function(...) { stop("Constrained linear ordination is fitted with ", "the function 'rrvglm'") } is.bell.vlm <- is.bell.rrvglm <- function(object, ...) { M <- object@misc$M ynames <- object@misc$ynames ans <- rep(FALSE, length.out = M) if (length(ynames)) names(ans) <- ynames ans } is.bell.uqo <- is.bell.qrrvglm <- function(object, ...) { is.finite(Max(object, ...)) } is.bell.cao <- function(object, ...) { NA * Max(object, ...) } if (!isGeneric("is.bell")) setGeneric("is.bell", function(object, ...) standardGeneric("is.bell")) setMethod("is.bell","uqo", function(object, ...) is.bell.uqo(object, ...)) setMethod("is.bell","qrrvglm", function(object,...) is.bell.qrrvglm(object,...)) setMethod("is.bell","rrvglm", function(object, ...) is.bell.rrvglm(object, ...)) setMethod("is.bell","vlm", function(object, ...) is.bell.vlm(object, ...)) setMethod("is.bell","cao", function(object, ...) is.bell.cao(object, ...)) setMethod("is.bell","Coef.qrrvglm", function(object,...) is.bell.qrrvglm(object,...)) if (!isGeneric("Rank")) setGeneric("Rank", function(object, ...) standardGeneric("Rank")) setMethod("Rank", "rrvglm", function(object, ...) Rank.rrvglm(object, ...)) setMethod("Rank", "qrrvglm", function(object, ...) Rank.qrrvglm(object, ...)) setMethod("Rank", "cao", function(object, ...) Rank.cao(object, ...)) VGAM/R/family.robust.R0000644000176000001440000002704212136651110014157 0ustar ripleyusers# These functions are # Copyright (C) 1998-2013 T.W. Yee, University of Auckland. # All rights reserved. edhuber <- function(x, k = 0.862, mu = 0, sigma = 1, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) zedd <- (x - mu) / sigma fk <- dnorm(k) eps <- 1 - 1 / (pnorm(k) - pnorm(-k) + 2 * fk /k) ceps <- 1 / (pnorm(k) - pnorm(-k) + 2 * fk / k) if (log.arg) { val <- log(ceps) + dnorm(zedd, log = TRUE) val[zedd < (-k)] <- (log(ceps) + log(fk) + ( k * (zedd+k)))[zedd < (-k)] val[zedd > (+k)] <- (log(ceps) + log(fk) + (-k * (zedd-k)))[zedd > (+k)] } else { val <- (ceps) * dnorm(zedd) val[zedd < (-k)] <- ((ceps) * fk * exp( k * (zedd + k)))[zedd < (-k)] val[zedd > (+k)] <- ((ceps) * fk * exp(-k * (zedd - k)))[zedd > (+k)] } list(val = if (log.arg) val - log(sigma) else val / sigma, eps = eps) } dhuber <- function(x, k = 0.862, mu = 0, sigma = 1, log = FALSE) edhuber(x, k, mu, sigma, log = log)$val rhuber <- function(n, k = 0.862, mu = 0, sigma = 1) { use.n <- if ((length.n <- length(n)) > 1) length.n else if (!is.Numeric(n, integer.valued = TRUE, allowable.length = 1, positive = TRUE)) stop("bad input for argument 'n'") else n myl <- rep(0.0, len = use.n) lowlim <- 1 upplim <- 0 chunksize <- 2 * use.n while (lowlim <= use.n) { x <- rexp(chunksize) s <- sample(c(-1, 1), size = chunksize, replace = TRUE) y <- s*x/k u <- runif(chunksize) yok <- (abs(y) >= k | u <= exp(k * abs(y) - (k * k + y * y) / 2)) sumyok <- sum(yok) if (sumyok > 0) { upplim <- upplim + sumyok if (upplim > use.n) myl <- rep(myl, len = upplim) myl[lowlim:upplim] <- y[yok] lowlim <- lowlim + sumyok } } myl <- rep(myl, len = use.n) # Prune to right length rep(mu + sigma * myl, len = use.n) } qhuber <- function (p, k = 0.862, mu = 0, sigma = 1) { if(min(sigma) <= 0) stop("argument 'sigma' must be positive") if(min(k) <= 0) stop("argument 'k' must be positive") cnorm <- sqrt(2 * 
pi) * ((2 * pnorm(k) - 1) + 2 * dnorm(k) / k) x <- pmin(p, 1 - p) q <- ifelse(x <= sqrt(2 * pi) * dnorm(k) / ( k * cnorm), log(k * cnorm * x) / k - k / 2, qnorm(abs(1 - pnorm(k) + x * cnorm / sqrt(2 * pi) - dnorm(k) / k))) ifelse(p < 0.5, mu + q * sigma, mu - q * sigma) } phuber <- function(q, k = 0.862, mu = 0, sigma = 1) { if (any(sigma <= 0)) stop("argument 'sigma' must be positive") A1 <- (2 * dnorm(k) / k - 2 * pnorm(-k)) eps <- A1 / (1 + A1) zedd <- (q - mu) / sigma x <- -abs(zedd) p <- ifelse(x <= -k , exp(k^2 / 2) / k * exp(k * x) / sqrt(2 * pi), dnorm(k) / k + pnorm(x) - pnorm(-k)) p <- p * (1 - eps) ifelse(zedd <= 0, p, 1 - p) } huber2 <- function(llocation = "identity", lscale = "loge", k = 0.862, imethod = 1, zero = 2) { A1 <- (2 * dnorm(k) / k - 2 * pnorm(-k)) eps <- A1 / (1 + A1) if (!is.Numeric(imethod, allowable.length = 1, integer.valued = TRUE, positive = TRUE) || imethod > 4) stop("argument 'imethod' must be 1 or 2 or 3 or 4") if (!is.Numeric(k, allowable.length = 1, positive = TRUE)) stop("bad input for argument 'k'") if (length(zero) && !is.Numeric(zero, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'zero'") llocat <- as.list(substitute(llocation)) elocat <- link2list(llocat) llocat <- attr(elocat, "function.name") lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") new("vglmff", blurb = c("Huber least favorable distribution\n\n", "Links: ", namesof("location", llocat, earg = elocat), ", ", namesof("scale", lscale, earg = escale), "\n\n", "Mean: location"), constraints = eval(substitute(expression({ constraints <- cm.zero.vgam(constraints, x, .zero, M) }), list( .zero = zero ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, out.wy = TRUE, maximize = TRUE) w <- temp5$w y <- temp5$y predictors.names <- c(namesof("location", .llocat, earg = .elocat, tag = FALSE), namesof("scale", .lscale, earg = .escale, tag = FALSE)) if (!length(etastart)) { junk <- lm.wfit(x = x, y = c(y), w = c(w)) scale.y.est <- sqrt( sum(c(w) * junk$resid^2) / junk$df.residual ) location.init <- if ( .llocat == "loge") pmax(1/1024, y) else { if ( .imethod == 3) { rep(weighted.mean(y, w), len = n) } else if ( .imethod == 2) { rep(median(rep(y, w)), len = n) } else if ( .imethod == 1) { junk$fitted } else { y } } etastart <- cbind( theta2eta(location.init, .llocat, earg = .elocat), theta2eta(scale.y.est, .lscale, earg = .escale)) } }), list( .llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = escale, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { eta2theta(eta[, 1], .llocat, earg = .elocat) }, list( .llocat = llocat, .elocat = elocat, .escale = escale ))), last = eval(substitute(expression({ misc$link <- c("location" = .llocat , "scale" = .lscale ) misc$earg <- list("location" = .elocat , "scale" = .escale ) misc$expected <- TRUE misc$k.huber <- .k misc$imethod <- .imethod misc$multipleResponses <- FALSE }), list( .llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = escale, .k = k, .imethod = imethod ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { location <- eta2theta(eta[, 1], .llocat, earg = .elocat) myscale <- eta2theta(eta[, 2], .lscale, earg = .escale) kay <- .k if (residuals) stop("loglikelihood residuals not ", "implemented yet") else { sum(c(w) * dhuber(y, k = kay, mu = location, sigma = myscale, log = TRUE)) } }, list( .llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = 
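# Consistency sketch for the dhuber()/phuber()/qhuber()/rhuber()
# functions in this block (hedged, untested; base R only):
#   integrate(dhuber, -Inf, Inf, k = 0.862)  # should be very close to 1
#   pp <- ppoints(19)
#   max(abs(phuber(qhuber(pp, k = 1.5), k = 1.5) - pp))  # should be ~ 0
#   hist(rhuber(1000), prob = TRUE)  # compare against a dhuber() overlay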
escale, .k = k ))), vfamily = c("huber2"), deriv = eval(substitute(expression({ mylocat <- eta2theta(eta[, 1], .llocat, earg = .elocat) myscale <- eta2theta(eta[, 2], .lscale, earg = .escale) myk <- .k zedd <- (y - mylocat) / myscale cond2 <- (abs(zedd) <= myk) cond3 <- (zedd > myk) dl.dlocat <- -myk + 0 * zedd # cond1 dl.dlocat[cond2] <- zedd[cond2] dl.dlocat[cond3] <- myk # myk is a scalar dl.dlocat <- dl.dlocat / myscale dl.dscale <- (-myk * zedd) dl.dscale[cond2] <- (zedd^2)[cond2] dl.dscale[cond3] <- ( myk * zedd)[cond3] dl.dscale <- (-1 + dl.dscale) / myscale dlocat.deta <- dtheta.deta(mylocat, .llocat, earg = .elocat) dscale.deta <- dtheta.deta(myscale, .lscale, earg = .escale) ans <- c(w) * cbind(dl.dlocat * dlocat.deta, dl.dscale * dscale.deta) ans }), list( .llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = escale, .eps = eps, .k = k ))), weight = eval(substitute(expression({ wz <- matrix(as.numeric(NA), n, 2) # diag matrix; y is one-col too temp4 <- erf(myk / sqrt(2)) ned2l.dlocat2 <- temp4 * (1 - .eps) / myscale^2 ned2l.dscale2 <- (dnorm(myk) * (1 - myk^2) + temp4) * 2 * (1 - .eps) / (myk * myscale^2) wz[, iam(1, 1, M)] <- ned2l.dlocat2 * dlocat.deta^2 wz[, iam(2, 2, M)] <- ned2l.dscale2 * dscale.deta^2 ans c(w) * wz }), list( .eps = eps )))) } huber1 <- function(llocation = "identity", k = 0.862, imethod = 1) { A1 <- (2 * dnorm(k) / k - 2 * pnorm(-k)) eps <- A1 / (1 + A1) if (!is.Numeric(imethod, allowable.length = 1, integer.valued = TRUE, positive = TRUE) || imethod > 4) stop("argument 'imethod' must be 1 or 2 or 3 or 4") if (!is.Numeric(k, allowable.length = 1, positive = TRUE)) stop("bad input for argument 'k'") llocat <- as.list(substitute(llocation)) elocat <- link2list(llocat) llocat <- attr(elocat, "function.name") new("vglmff", blurb = c("Huber least favorable distribution\n\n", "Links: ", namesof("location", llocat, earg = elocat), "\n\n", "Mean: location"), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, out.wy = TRUE, maximize = TRUE) w <- temp5$w y <- temp5$y predictors.names <- c(namesof("location", .llocat, earg = .elocat, tag = FALSE)) if (!length(etastart)) { junk <- lm.wfit(x = x, y = c(y), w = c(w)) location.init <- if ( .llocat == "loge") pmax(1/1024, y) else { if ( .imethod == 3) { rep(weighted.mean(y, w), len = n) } else if ( .imethod == 2) { rep(median(rep(y, w)), len = n) } else if ( .imethod == 1) { junk$fitted } else { y } } etastart <- cbind( theta2eta(location.init, .llocat, earg = .elocat)) } }), list( .llocat = llocat, .elocat = elocat, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { eta2theta(eta, .llocat, earg = .elocat) }, list( .llocat = llocat, .elocat = elocat ))), last = eval(substitute(expression({ misc$link <- c("location" = .llocat ) misc$earg <- list("location" = .elocat ) misc$expected <- TRUE misc$k.huber <- .k misc$imethod <- .imethod misc$multipleResponses <- FALSE }), list( .llocat = llocat, .elocat = elocat, .k = k, .imethod = imethod ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { location <- eta2theta(eta, .llocat, earg = .elocat) kay <- .k if (residuals) stop("loglikelihood residuals not ", "implemented yet") else { sum(c(w) * dhuber(y, k = kay, mu = location, sigma = 1, log = TRUE)) } }, list( .llocat = llocat, .elocat = elocat, .k = k ))), vfamily = c("huber1"), deriv = eval(substitute(expression({ mylocat <- eta2theta(eta, .llocat, earg = .elocat) myk <- .k zedd <- (y - mylocat) # / myscale cond2 <- 
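# Fitting sketch for huber2() above (hedged, untested; 'hdata' is a
# made-up data frame, in the style of the package's other examples):
#   set.seed(123); nn <- 100
#   hdata <- data.frame(x2 = sort(runif(nn)))
#   hdata <- transform(hdata, y = 1 + 5 * x2 + rhuber(nn))
#   fit <- vglm(y ~ x2, huber2(imethod = 3), data = hdata, trace = TRUE)
#   coef(fit, matrix = TRUE)  # location and log(scale) coefficients
# huber1(), defined alongside, is the one-parameter (sigma = 1) analogue.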
(abs(zedd) <= myk)
    cond3 <- (zedd > myk)

    dl.dlocat <- -myk + 0 * zedd  # cond1
    dl.dlocat[cond2] <- zedd[cond2]
    dl.dlocat[cond3] <- myk  # myk is a scalar
    dl.dlocat <- dl.dlocat  # / myscale

    if (FALSE) {
      dl.dscale <- (-myk * zedd)
      dl.dscale[cond2] <- (zedd^2)[cond2]
      dl.dscale[cond3] <- ( myk * zedd)[cond3]
      dl.dscale <- (-1 + dl.dscale) / myscale
    }

    dlocat.deta <- dtheta.deta(mylocat, .llocat, earg = .elocat)
    ans <- c(w) * cbind(dl.dlocat * dlocat.deta)
    ans
  }), list( .llocat = llocat, .elocat = elocat,
            .eps = eps, .k = k ))),
  weight = eval(substitute(expression({
    wz <- matrix(as.numeric(NA), n, 1)  # diag matrix; y is one-col too
    temp4 <- erf(myk / sqrt(2))
    ned2l.dlocat2 <- temp4 * (1 - .eps)  # / myscale^2
    wz[, iam(1, 1, M)] <- ned2l.dlocat2 * dlocat.deta^2
    c(w) * wz
  }), list( .eps = eps ))))
}
VGAM/R/family.rcqo.R
# These functions are
# Copyright (C) 1998-2013 T.W. Yee, University of Auckland.
# All rights reserved.

rcqo <- function(n, p, S, Rank = 1,
                 family = c("poisson", "negbinomial",
                            "binomial-poisson", "Binomial-negbinomial",
                            "ordinal-poisson", "Ordinal-negbinomial",
                            "gamma2"),
                 EqualMaxima = FALSE,
                 EqualTolerances = TRUE,
                 ESOptima = FALSE,
                 loabundance = if (EqualMaxima) hiabundance else 10,
                 hiabundance = 100,
                 sdlv = head(1.5/2^(0:3), Rank),
                 sdOptima = ifelse(ESOptima, 1.5/Rank, 1) *
                            ifelse(scalelv, sdlv, 1),
                 sdTolerances = 0.25,
                 Kvector = 1,
                 Shape = 1,
                 sqrt = FALSE,
                 Log = FALSE,
                 rhox = 0.5,
                 breaks = 4,  # ignored unless family = "ordinal"
                 seed = NULL,
                 Crow1positive = TRUE,
                 xmat = NULL,  # Can be input
                 scalelv = TRUE) {
  family <- match.arg(family,
                      c("poisson", "negbinomial",
                        "binomial-poisson", "Binomial-negbinomial",
                        "ordinal-poisson", "Ordinal-negbinomial",
                        "gamma2"))[1]
  if (!is.Numeric(n, integer.valued = TRUE, positive = TRUE,
                  allowable.length = 1))
    stop("bad input for argument 'n'")
  if (!is.Numeric(p, integer.valued = TRUE, positive = TRUE,
                  allowable.length = 1) || p < 1 + Rank)
    stop("bad input for argument 'p'")
  if (!is.Numeric(S, integer.valued = TRUE, positive = TRUE,
                  allowable.length = 1))
    stop("bad input for argument 'S'")
  if (!is.Numeric(Rank, integer.valued = TRUE, positive = TRUE,
                  allowable.length = 1) || Rank > 4)
    stop("bad input for argument 'Rank'")
  if (!is.Numeric(Kvector, positive = TRUE))
    stop("bad input for argument 'Kvector'")
  if (!is.Numeric(rhox) || abs(rhox) >= 1)
    stop("bad input for argument 'rhox'")
  if (length(seed) && !is.Numeric(seed, integer.valued = TRUE,
                                  positive = TRUE))
    stop("bad input for argument 'seed'")
  if (!is.logical(EqualTolerances) || length(EqualTolerances) > 1)
    stop("bad input for argument 'EqualTolerances'")
  if (!is.logical(sqrt) || length(sqrt) > 1)
    stop("bad input for argument 'sqrt'")
  if (family != "negbinomial" && sqrt)
    warning("argument 'sqrt' is used only with family='negbinomial'")
  if (!EqualTolerances && !is.Numeric(sdTolerances, positive = TRUE))
    stop("bad input for argument 'sdTolerances'")
  if (!is.Numeric(loabundance, positive = TRUE))
    stop("bad input for argument 'loabundance'")
  if (!is.Numeric(sdlv, positive = TRUE))
    stop("bad input for argument 'sdlv'")
  if (!is.Numeric(sdOptima, positive = TRUE))
    stop("bad input for argument 'sdOptima'")
  if (EqualMaxima && loabundance != hiabundance)
    stop("arguments 'loabundance' and 'hiabundance' must ",
         "be equal when 'EqualMaxima = TRUE'")
  if (any(loabundance > hiabundance))
    stop("loabundance > hiabundance is not allowed")
  if (!is.logical(Crow1positive)) {
    stop("bad input for argument 'Crow1positive'")
  } else {
    Crow1positive <-
rep(Crow1positive, len=Rank) } Shape <- rep(Shape, len=S) sdlv <- rep(sdlv, len=Rank) sdOptima <- rep(sdOptima, len=Rank) sdTolerances <- rep(sdTolerances, len=Rank) AA <- sdOptima / 3^0.5 if (Rank > 1 && any(diff(sdlv) > 0)) stop("argument 'sdlv)' must be a vector with decreasing values") if (FALSE) change.seed.expression <- expression({ if (!exists(".Random.seed", envir = .GlobalEnv, inherits = FALSE)) { runif(1) # initialize the RNG if necessary } if (is.null(seed)) { RNGstate <- get(".Random.seed", envir = .GlobalEnv) } else { R.seed <- get(".Random.seed", envir = .GlobalEnv) set.seed(seed) RNGstate <- structure(seed, kind = as.list(RNGkind())) on.exit(assign(".Random.seed", R.seed, envir = .GlobalEnv)) } }) change.seed.expression <- expression({ if (length(seed)) set.seed(seed) }) eval(change.seed.expression) V <- matrix(rhox, p-1, p-1) diag(V) <- 1 L <- chol(V) if (length(xmat)) { xnames <- colnames(xmat) } else { eval(change.seed.expression) xmat <- matrix(rnorm(n*(p-1)), n, p-1) %*% L xmat <- scale(xmat, center = TRUE) xnames <- paste("x", 2:p, sep="") dimnames(xmat) <- list(as.character(1:n), xnames) } eval(change.seed.expression) ccoefs <- matrix(rnorm((p-1)*Rank), p-1, Rank) lvmat <- cbind(xmat %*% ccoefs) if (Rank > 1) { Rmat <- chol(var(lvmat)) iRmat <- solve(Rmat) lvmat <- lvmat %*% iRmat # var(lvmat) == diag(Rank) ccoefs <- ccoefs %*% iRmat } for(r in 1:Rank) if (( Crow1positive[r] && ccoefs[1,r] < 0) || (!Crow1positive[r] && ccoefs[1,r] > 0)) { ccoefs[,r] <- -ccoefs[,r] lvmat[,r] <- -lvmat[,r] } if (scalelv) { for(r in 1:Rank) { sdlvr <- sd(lvmat[,r]) lvmat[,r] <- lvmat[,r] * sdlv[r] / sdlvr ccoefs[,r] <- ccoefs[,r] * sdlv[r] / sdlvr } } else { sdlvr <- NULL for(r in 1:Rank) { sdlvr <- c(sdlvr, sd(lvmat[,r])) } } if (ESOptima) { if (!is.Numeric(S^(1/Rank), integer.valued = TRUE) || S^(1/Rank) < 2) stop("S^(1/Rank) must be an integer greater or equal to 2") if (Rank == 1) { optima <- matrix(as.numeric(NA), S, Rank) for(r in 1:Rank) { optima[,r] <- seq(-AA, AA, len=S^(1/Rank)) } } else if (Rank == 2) { optima <- expand.grid(lv1=seq(-AA[1], AA[1], len=S^(1/Rank)), lv2=seq(-AA[2], AA[2], len=S^(1/Rank))) } else if (Rank == 3) { optima <- expand.grid(lv1=seq(-AA[1], AA[1], len=S^(1/Rank)), lv2=seq(-AA[2], AA[2], len=S^(1/Rank)), lv3=seq(-AA[3], AA[3], len=S^(1/Rank))) } else { optima <- expand.grid(lv1=seq(-AA[1], AA[1], len=S^(1/Rank)), lv2=seq(-AA[2], AA[2], len=S^(1/Rank)), lv3=seq(-AA[3], AA[3], len=S^(1/Rank)), lv4=seq(-AA[4], AA[4], len=S^(1/Rank))) } if (Rank > 1) optima <- matrix(unlist(optima), S, Rank) # Make sure its a matrix } else { optima <- matrix(1, S, Rank) eval(change.seed.expression) for(r in 1:Rank) { optima[,r] <- rnorm(n=S, sd=sdOptima[r]) } } for(r in 1:Rank) optima[,r] <- optima[,r] * sdOptima[r] / sd(optima[,r]) ynames <- paste("y", 1:S, sep="") Kvector <- rep(Kvector, len=S) names(Kvector) <- ynames lvnames <- if (Rank==1) "lv" else paste("lv", 1:Rank, sep="") Tols <- if (EqualTolerances) matrix(1, S, Rank) else { eval(change.seed.expression) temp <- matrix(1, S, Rank) if (S > 1) for(r in 1:Rank) { temp[-1,r] <- rnorm(S-1, mean=1, sd=sdTolerances[r]) if (any(temp[,r] <= 0)) stop("negative tolerances!") temp[,r] <- temp[,r]^2 # Tolerance matrix = var-cov matrix) } temp } dimnames(Tols) <- list(ynames, lvnames) dimnames(ccoefs) <- list(xnames, lvnames) dimnames(optima) <- list(ynames, lvnames) loeta <- log(loabundance) # May be a vector hieta <- log(hiabundance) eval(change.seed.expression) logmaxima <- runif(S, min=loeta, max=hieta) # loeta and hieta may 
# be vectors
  names(logmaxima) <- ynames
  etamat <- matrix(logmaxima, n, S, byrow = TRUE)
  for (jay in 1:S) {
    optmat <- matrix(optima[jay, ], nrow = n, ncol = Rank, byrow = TRUE)
    tolmat <- matrix(Tols[jay, ], nrow = n, ncol = Rank, byrow = TRUE)
    temp <- cbind((lvmat - optmat) / tolmat)
    for (r in 1:Rank)
      etamat[, jay] <- etamat[, jay] -
                       0.5 * (lvmat[, r] - optmat[jay, r]) * temp[, r]
  }

  rootdist <- switch(family,
                     "poisson" = 1, "binomial-poisson" = 1,
                     "ordinal-poisson" = 1,
                     "negbinomial" = 2, "Binomial-negbinomial" = 2,
                     "Ordinal-negbinomial" = 2,
                     "gamma2" = 3)
  eval(change.seed.expression)
  if (rootdist == 1) {
    ymat <- matrix(rpois(n*S, lambda = exp(etamat)), n, S)
  } else if (rootdist == 2) {
    mKvector <- matrix(Kvector, n, S, byrow = TRUE)
    ymat <- matrix(rnbinom(n = n*S, mu = exp(etamat), size = mKvector), n, S)
    if (sqrt) ymat <- ymat^0.5
  } else if (rootdist == 3) {
    Shape <- matrix(Shape, n, S, byrow = TRUE)
    ymat <- matrix(rgamma(n*S, shape = Shape, scale = exp(etamat)/Shape),
                   n, S)
    if (Log) ymat <- log(ymat)
  } else stop("argument 'rootdist' unmatched")

  tmp1 <- NULL
  if (any(family == c("ordinal-poisson", "Ordinal-negbinomial"))) {
    tmp1 <- cut(c(ymat), breaks = breaks, labels = NULL)  # To get attributes(tmp1)
    ymat <- cut(c(ymat), breaks = breaks, labels = FALSE)
    dim(ymat) <- c(n, S)
  }
  if (any(family == c("binomial-poisson", "Binomial-negbinomial")))
    ymat <- 0 + (ymat > 0)

  myform <- as.formula(paste(paste("cbind(",
                   paste(paste("y", 1:S, sep = ""), collapse = ","),
                   ") ~ ", sep = ""),
                   paste(paste("x", 2:p, sep = ""), collapse = "+"),
                   sep = ""))

  dimnames(ymat) <- list(as.character(1:n), ynames)
  ans <- data.frame(xmat, ymat)
  attr(ans, "ccoefficients") <- ccoefs
  attr(ans, "Crow1positive") <- Crow1positive
  attr(ans, "family") <- family
  attr(ans, "formula") <- myform  # Useful for running cqo() on the data
  attr(ans, "Rank") <- Rank
  attr(ans, "Kvector") <- Kvector
  attr(ans, "logmaxima") <- logmaxima
  attr(ans, "loabundance") <- loabundance
  attr(ans, "hiabundance") <- hiabundance
  attr(ans, "optima") <- optima
  attr(ans, "Log") <- Log
  attr(ans, "lv") <- lvmat
  attr(ans, "eta") <- etamat
  attr(ans, "EqualTolerances") <- EqualTolerances
  attr(ans, "EqualMaxima") <- EqualMaxima || all(loabundance == hiabundance)
  attr(ans, "ESOptima") <- ESOptima
  attr(ans, "seed") <- seed  # RNGstate
  attr(ans, "sdTolerances") <- sdTolerances
  attr(ans, "sdlv") <- if (scalelv) sdlv else sdlvr
  attr(ans, "sdOptima") <- sdOptima
  attr(ans, "Shape") <- Shape
  attr(ans, "sqrt") <- sqrt
  attr(ans, "tolerances") <- Tols^0.5  # Like a standard deviation
  attr(ans, "breaks") <- if (length(tmp1)) attributes(tmp1) else breaks
  ans
}

if (FALSE)
dcqo <- function(x, p, S,
                 family = c("poisson", "binomial", "negbinomial", "ordinal"),
                 Rank = 1,
                 EqualTolerances = TRUE,
                 EqualMaxima = FALSE,
                 EquallySpacedOptima = FALSE,
                 loabundance = if (EqualMaxima) 100 else 10,
                 hiabundance = 100,
                 sdTolerances = 1,
                 sdOptima = 1,
                 nlevels = 4,  # ignored unless family = "ordinal"
                 seed = NULL) {
  warning("12/6/06; needs a lot of work based on rcqo()")
  if (mode(family) != "character" && mode(family) != "name")
    family <- as.character(substitute(family))
  family <- match.arg(family,
                      c("poisson", "binomial", "negbinomial", "ordinal"))[1]
  if (!is.Numeric(p, integer.valued = TRUE, positive = TRUE,
                  allowable.length = 1) || p < 2)
    stop("bad input for argument 'p'")
  if (!is.Numeric(S, integer.valued = TRUE, positive = TRUE,
                  allowable.length = 1))
    stop("bad input for argument 'S'")
  if (!is.Numeric(Rank, integer.valued = TRUE, positive = TRUE,
                  allowable.length = 1))
    stop("bad input for argument 'Rank'")
  if (length(seed) && !is.Numeric(seed,
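# Usage sketch for rcqo() above (hedged, untested): simulate rank-1
# Poisson counts for S = 5 species and p - 1 = 4 environmental variables,
# then recover the constrained coefficients with cqo().
#   set.seed(123)
#   mydata <- rcqo(n = 100, p = 5, S = 5, family = "poisson")
#   myform <- attr(mydata, "formula")  # cbind(y1,...,y5) ~ x2 + x3 + x4 + x5
#   fit <- cqo(myform, poissonff, data = mydata)
#   ccoef(fit)  # compare with attr(mydata, "ccoefficients")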
integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'seed'") if (!is.logical(EqualTolerances) || length(EqualTolerances)>1) stop("bad input for argument 'EqualTolerances)'") if (EqualMaxima && loabundance != hiabundance) stop("'loabundance' and 'hiabundance' must ", "be equal when 'EqualTolerances = TRUE'") if (length(seed)) set.seed(seed) xmat <- matrix(rnorm(n*(p-1)), n, p-1, dimnames=list(as.character(1:n), paste("x", 2:p, sep=""))) ccoefs <- matrix(rnorm((p-1)*Rank), p-1, Rank) lvmat <- xmat %*% ccoefs optima <- matrix(rnorm(Rank*S, sd=sdOptima), S, Rank) Tols <- if (EqualTolerances) matrix(1, S, Rank) else matrix(rnorm(Rank*S, mean=1, sd=1), S, Rank) loeta <- log(loabundance) hieta <- log(hiabundance) logmaxima <- runif(S, min=loeta, max=hieta) etamat <- matrix(logmaxima,n,S,byrow = TRUE) # eta=log(mu) only; intercept term for(jay in 1:S) { optmat <- matrix(optima[jay,], n, Rank, byrow = TRUE) tolmat <- matrix(Tols[jay,], n, Rank, byrow = TRUE) temp <- cbind((lvmat - optmat) * tolmat) for(r in 1:Rank) etamat[,jay] <- etamat[,jay] - 0.5 * temp[,r] * (lvmat[,r] - optmat[jay,r]) } ymat <- if (family == "negbinomial") { } else { matrix(rpois(n*S, lambda = exp(etamat)), n, S) } if (family == "binomial") ymat <- 0 + (ymat > 0) dimnames(ymat) <- list(as.character(1:n), paste("y", 1:S, sep="")) ans <- data.frame(xmat, ymat) attr(ans, "ccoefficients") <- ccoefs attr(ans, "family") <- family ans } getInitVals <- function(gvals, llfun, ...) { LLFUN <- match.fun(llfun) ff <- function(myx, ...) LLFUN(myx, ...) objFun <- gvals for(ii in 1:length(gvals)) objFun[ii] <- ff(myx=gvals[ii], ...) try.this <- gvals[objFun == max(objFun)] # Usually scalar, maybe vector try.this } campp <- function(q, size, prob, mu) { if (!missing(mu)) { if (!missing(prob)) stop("'prob' and 'mu' both specified") prob <- size/(size + mu) } K <- (1/3) * ((9*q+8)/(q+1) - ((9*size-1)/size) * (mu/(q+1))^(1/3)) / sqrt( (1/size) * (mu/(q+1))^(2/3) + 1 / (q+1)) # Note the +, not - pnorm(K) } VGAM/R/family.rcim.R0000644000176000001440000007603512136651110013601 0ustar ripleyusers# These functions are # Copyright (C) 1998-2013 T.W. Yee, University of Auckland. # All rights reserved. rcim <- function(y, family = poissonff, Rank = 0, Musual = NULL, weights = NULL, which.lp = 1, Index.corner = if (!Rank) NULL else 1 + Musual * (1:Rank), rprefix = "Row.", cprefix = "Col.", offset = 0, szero = if (!Rank) NULL else { if (Musual == 1) 1 else setdiff(1:(Musual*ncol(y)), c( # 1:Musual, 1 + (1:ncol(y)) * Musual, Index.corner))}, summary.arg = FALSE, h.step = 0.0001, rbaseline = 1, cbaseline = 1, ...) 
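# Sketch of a typical rcim() call (the function body follows).  Assumes
# any two-way table of counts with at least 3 rows and 3 columns, e.g.
# occupationalStatus from base R's datasets package; hedged, untested.
#   fit0 <- rcim(occupationalStatus, poissonff)            # main effects
#   fit1 <- rcim(occupationalStatus, poissonff, Rank = 1)  # + interaction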
{ noroweffects <- FALSE nocoleffects <- FALSE if (!is.Numeric(which.lp, allowable.length = 1, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'which.lp'") if (!is.character(rprefix)) stop("argument 'rprefix' must be character") if (!is.character(cprefix)) stop("argument 'cprefix' must be character") if (is.character(family)) family <- get(family) if (is.function(family)) family <- ((family)()) if (!inherits(family, "vglmff")) { stop("'family = ", family, "' is not a VGAM family function") } efamily <- family if (!is.Numeric(Musual)) { iefamily <- efamily@infos if (is.function(iefamily)) Musual <- (iefamily())$Musual } if (!is.Numeric(Musual)) { warning("cannot determine the value of 'Musual'.", "Assuming the value one.") Musual <- 1 } object.save <- y y <- if (is(y, "rrvglm")) { object.save@y } else { as(as.matrix(y), "matrix") } if (length(dim(y)) != 2 || nrow(y) < 3 || ncol(y) < 3) stop("argument 'y' must be a matrix with >= 3 rows & columns, or ", "a rrvglm() object") eifun <- function(i, n) diag(n)[, i, drop = FALSE] .rcim.df <- if (!noroweffects) data.frame("Row.2" = eifun(2, nrow(y))) else if (!nocoleffects) data.frame("Col.2" = eifun(2, nrow(y))) else stop("at least one of 'noroweffects' and 'nocoleffects' must be FALSE") colnames( .rcim.df ) <- paste(rprefix, "2", sep = "") # Overwrite "Row.2" yn1 <- if (length(dimnames(y)[[1]])) dimnames(y)[[1]] else paste("X2.", 1:nrow(y), sep = "") warn.save <- options()$warn options(warn = -3) # Suppress the warnings (hopefully, temporarily) if (any(!is.na(as.numeric(substring(yn1, 1, 1))))) yn1 <- paste("X2.", 1:nrow(y), sep = "") options(warn = warn.save) nrprefix <- as.name(rprefix) ncprefix <- as.name(cprefix) assign(rprefix, factor(1:nrow(y))) modmat.row <- substitute( model.matrix( ~ .rprefix ), list( .rprefix = nrprefix )) assign(cprefix, factor(1:ncol(y))) modmat.col <- substitute( model.matrix( ~ .cprefix ), list( .cprefix = ncprefix )) modmat.row <- eval( modmat.row ) modmat.col <- eval( modmat.col ) Hlist <- list("(Intercept)" = matrix(1, ncol(y), 1)) if (!noroweffects) for(ii in 2:nrow(y)) { Hlist[[paste(rprefix, ii, sep = "")]] <- matrix(1, ncol(y), 1) .rcim.df[[paste(rprefix, ii, sep = "")]] <- modmat.row[, ii] } if (!nocoleffects) for(ii in 2:ncol(y)) { Hlist[[ paste(cprefix, ii, sep = "")]] <- modmat.col[, ii, drop = FALSE] .rcim.df[[paste(cprefix, ii, sep = "")]] <- rep(1, nrow(y)) } if (Rank > 0) { for(ii in 2:nrow(y)) { Hlist[[yn1[ii]]] <- diag(ncol(y)) .rcim.df[[yn1[ii]]] <- eifun(ii, nrow(y)) } } dimnames(.rcim.df) <- list(if (length(dimnames(y)[[1]])) dimnames(y)[[1]] else as.character(1:nrow(y)), dimnames(.rcim.df)[[2]]) str1 <- paste("~ ", rprefix, "2", sep = "") if (nrow(y) > 2) for(ii in 3:nrow(y)) { str1 <- paste(str1, paste(rprefix, ii, sep = ""), sep = " + ") } for(ii in 2:ncol(y)) { str1 <- paste(str1, paste(cprefix, ii, sep = ""), sep = " + ") } str2 <- paste("y ", str1) if (Rank > 0) { for(ii in 2:nrow(y)) str2 <- paste(str2, yn1[ii], sep = " + ") } controlfun <- if (Rank == 0) rrvglm.control else rrvglm.control controlfun <- if (Rank == 0) vglm.control else rrvglm.control # orig. mycontrol <- controlfun(Rank = Rank, Index.corner = Index.corner, szero = szero, ...) if (mycontrol$trace) { } if ((mindim <- min(nrow(y), ncol(y))) <= Rank) { stop("argument 'Rank' is too high. 
Must be a value from 0 ", "to ", mindim - 1, " inclusive") } if (Rank > 0) mycontrol$noRRR <- as.formula(str1) # Overwrite this assign(".rcim.df", .rcim.df, envir = VGAM::VGAMenv) warn.save <- options()$warn options(warn = -3) # Suppress the warnings (hopefully, temporarily) if (mycontrol$trace) { } if (Musual > 1) { orig.Hlist <- Hlist kmat1 <- rbind(1, 0) kmat0 <- rbind(0, 1) kmat1 <- matrix(0, nrow = Musual, ncol = 1) kmat1[which.lp, 1] <- 1 kmat0 <- matrix(1, nrow = Musual, ncol = 1) kmat0[which.lp, 1] <- 0 for (ii in 1:length(Hlist)) { Hlist[[ii]] <- kronecker(Hlist[[ii]], kmat1) } Hlist[["(Intercept)"]] <- cbind(Hlist[["(Intercept)"]], kronecker(matrix(1, nrow(orig.Hlist[[1]]), 1), kmat0)) if (mycontrol$trace) { } } offset.matrix <- matrix(offset, nrow = nrow(y), ncol = ncol(y) * Musual) # byrow = TRUE answer <- if (Rank > 0) { if (is(object.save, "rrvglm")) object.save else rrvglm(as.formula(str2), family = family, constraints = Hlist, offset = offset.matrix, weights = if (length(weights)) weights else rep(1, length = nrow(y)), ..., control = mycontrol, data = .rcim.df) } else { if (is(object.save, "vglm")) object.save else vglm(as.formula(str2), family = family, constraints = Hlist, offset = offset.matrix, weights = if (length(weights)) weights else rep(1, length = nrow(y)), ..., control = mycontrol, data = .rcim.df) } options(warn = warn.save) answer <- if (summary.arg) { if (Rank > 0) { summary.rrvglm(as(answer, "rrvglm"), h.step = h.step) } else { summary(answer) } } else { as(answer, ifelse(Rank > 0, "rcim", "rcim0")) } answer@misc$rbaseline <- rbaseline answer@misc$cbaseline <- cbaseline answer@misc$which.lp <- which.lp answer@misc$offset <- offset.matrix answer } summaryrcim <- function(object, ...) { rcim(object, summary.arg = TRUE, ...) } setClass("rcim0", representation(not.needed = "numeric"), contains = "vglm") # Added 20110506 setClass("rcim", representation(not.needed = "numeric"), contains = "rrvglm") setMethod("summary", "rcim0", function(object, ...) summaryrcim(object, ...)) setMethod("summary", "rcim", function(object, ...) 
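# summary() on "rcim"/"rcim0" objects dispatches to summaryrcim(), i.e.
# a refit with summary.arg = TRUE.  Continuing the hedged sketch above:
#   summary(fit0)  # Wald table for the row and column effects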
summaryrcim(object, ...))

Rcim <- function(mat, rbaseline = 1, cbaseline = 1) {
  mat <- as.matrix(mat)
  RRR <- dim(mat)[1]
  CCC <- dim(mat)[2]

  rnames <- if (is.null(rownames(mat))) {
    paste("X", 1:RRR, sep = "")
  } else {
    rownames(mat)
  }
  cnames <- if (is.null(colnames(mat))) {
    paste("Y", 1:CCC, sep = "")
  } else {
    colnames(mat)
  }

  r.index <- if (is.character(rbaseline))
    which(rownames(mat) == rbaseline) else
    if (is.numeric(rbaseline)) rbaseline else
    stop("argument 'rbaseline' must be numeric ",
         "or a character string naming a row")
  c.index <- if (is.character(cbaseline))
    which(colnames(mat) == cbaseline) else
    if (is.numeric(cbaseline)) cbaseline else
    stop("argument 'cbaseline' must be numeric ",
         "or a character string naming a column")

  if (length(r.index) != 1)
    stop("Could not match with argument 'rbaseline'")
  if (length(c.index) != 1)
    stop("Could not match with argument 'cbaseline'")

  yswap <- rbind(mat[r.index:RRR, ],
                 if (r.index > 1) mat[1:(r.index - 1), ] else NULL)
  yswap <- cbind(yswap[, c.index:CCC],
                 if (c.index > 1) yswap[, 1:(c.index - 1)] else NULL)

  new.rnames <- rnames[c(r.index:RRR,
                         if (r.index > 1) 1:(r.index - 1) else NULL)]
  new.cnames <- cnames[c(c.index:CCC,
                         if (c.index > 1) 1:(c.index - 1) else NULL)]
  colnames(yswap) <- new.cnames
  rownames(yswap) <- new.rnames

  yswap
}

plotrcim0 <- function(object, centered = TRUE, whichplots = c(1, 2),
                      hline0 = TRUE, hlty = "dashed",
                      hcol = par()$col, hlwd = par()$lwd,
                      rfirst = 1, cfirst = 1,
                      rtype = "h", ctype = "h",
                      rcex.lab = 1, rcex.axis = 1,  # rlabels = FALSE,
                      rtick = FALSE,
                      ccex.lab = 1, ccex.axis = 1,  # clabels = FALSE,
                      ctick = FALSE,
                      rmain = "Row effects", rsub = "",
                      rxlab = "", rylab = "Row effects",
                      cmain = "Column effects", csub = "",
                      cxlab = "", cylab = "Column effects",
                      rcol = par()$col, ccol = par()$col,
                      ...) {
  nparff <- if (is.numeric(object@family@infos()$Musual)) {
    object@family@infos()$Musual
  } else {
    1
  }

  if (is.numeric(object@control$Rank) && object@control$Rank != 0)
    warning("argument 'object' is not Rank-0")

  n_lm <- nrow(object@y)
  cobj <- coefficients(object)
  upperbound <- if (!is.numeric(object@control$Rank) ||
                    object@control$Rank == 0) length(cobj) else
                length(object@control$colx1.index)

  orig.roweff <- c("Row.1" = 0, cobj[(nparff + 1) : (nparff + n_lm - 1)])
  orig.coleff <- c("Col.1" = 0, cobj[(nparff + n_lm) : upperbound])
  last.r <- length(orig.roweff)
  last.c <- length(orig.coleff)

  orig.raxisl <- rownames(object@y)
  orig.caxisl <- colnames(object@y)
  if (is.null(orig.raxisl))
    orig.raxisl <- as.character(1:nrow(object@y))
  if (is.null(orig.caxisl))
    orig.caxisl <- as.character(1:ncol(object@y))

  roweff.orig <- roweff <-
    orig.roweff[c(rfirst:last.r, if (rfirst > 1) 1:(rfirst-1) else NULL)]
  coleff.orig <- coleff <-
    orig.coleff[c(cfirst:last.c, if (cfirst > 1) 1:(cfirst-1) else NULL)]

  if (centered) {
    roweff <- scale(roweff, scale = FALSE)  # Center it only
    coleff <- scale(coleff, scale = FALSE)  # Center it only
  }

  raxisl <- orig.raxisl[c(rfirst:last.r,
                          if (rfirst > 1) 1:(rfirst-1) else NULL)]
  caxisl <- orig.caxisl[c(cfirst:last.c,
                          if (cfirst > 1) 1:(cfirst-1) else NULL)]

  if (any(whichplots == 1, na.rm = TRUE)) {
    plot(roweff, type = rtype, axes = FALSE, col = rcol,
         main = rmain, sub = rsub, xlab = rxlab, ylab = rylab, ...)
    axis(1, at = 1:length(raxisl),
         cex.lab = rcex.lab, cex.axis = rcex.axis, labels = raxisl)
    axis(2, cex.lab = rcex.lab, ...)
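# Plotting sketch (hedged, untested): plot() on an "rcim0" object calls
# plotrcim0() (this function), drawing centred row and column effects.
#   par(mfrow = c(1, 2))
#   plot(fit0, rcol = "blue", ccol = "orange")  # fit0 as sketched above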
# las = rlas) if (hline0) abline(h = 0, lty = hlty, col = hcol, lwd = hlwd) } if (any(whichplots == 2, na.rm = TRUE)) { plot(coleff, type = ctype, axes = FALSE, col = ccol, main = cmain, # lwd = 2, xpd = FALSE, sub = csub, xlab = cxlab, ylab = cylab, ...) axis(1, at = 1:length(caxisl), cex.lab = ccex.lab, cex.axis = ccex.axis, labels = caxisl) axis(2, cex.lab = ccex.lab, ...) # las = clas) if (hline0) abline(h = 0, lty = hlty, col = hcol, lwd = hlwd) } object@post$row.effects = roweff object@post$col.effects = coleff object@post$raw.row.effects = roweff.orig object@post$raw.col.effects = coleff.orig invisible(object) } setMethod("plot", "rcim0", function(x, y, ...) plotrcim0(object = x, ...)) setMethod("plot", "rcim", function(x, y, ...) plotrcim0(object = x, ...)) moffset <- function(mat, roffset = 0, coffset = 0, postfix = "") { if ((is.numeric(roffset) && (roffset == 0)) && (is.numeric(coffset) && (coffset == 0))) return(mat) vecmat <- c(unlist(mat)) ind1 <- if (is.character(roffset)) which(rownames(mat) == roffset) else if (is.numeric(roffset)) roffset + 1 else stop("argument 'roffset' not matched (character). ", "It must be numeric, ", "else character and match the ", "row names of the response") ind2 <- if (is.character(coffset)) which(colnames(mat) == coffset) else if (is.numeric(coffset)) coffset + 1 else stop("argument 'coffset' not matched (character). ", "It must be numeric, ", "else character and match the ", "column names of the response") if (!is.Numeric(ind1, positive = TRUE, integer.valued = TRUE, allowable.length = 1) || !is.Numeric(ind2, positive = TRUE, integer.valued = TRUE, allowable.length = 1)) stop("bad input for arguments 'roffset' and/or 'coffset'") if (ind1 > nrow(mat)) stop("too large a value for argument 'roffset'") if (ind2 > ncol(mat)) stop("too large a value for argument 'coffset'") start.ind <- (ind2 - 1)* nrow(mat) + ind1 svecmat <- vecmat[c(start.ind:(nrow(mat) * ncol(mat)), 0:(start.ind - 1))] rownames.mat <- rownames(mat) if (length(rownames.mat) != nrow(mat)) rownames.mat <- paste("Row.", 1:nrow(mat), sep = "") colnames.mat <- colnames(mat) if (length(colnames.mat) != ncol(mat)) colnames.mat <- paste("Col.", 1:ncol(mat), sep = "") newrn <- if (roffset > 0) c(rownames.mat[c(ind1:nrow(mat))], paste(rownames.mat[0:(ind1-1)], postfix, sep = "")) else rownames.mat newcn <- c(colnames.mat[c(ind2:ncol(mat), 0:(ind2 - 1))]) if (roffset > 0) newcn <- paste(newcn, postfix, sep = "") newmat <- matrix(svecmat, nrow(mat), ncol(mat), dimnames = list(newrn, newcn)) newmat } Confint.rrnb <- function(rrnb2, level = 0.95) { if (class(rrnb2) != "rrvglm") stop("argument 'rrnb2' does not appear to be a rrvglm() object") if (!any(rrnb2@family@vfamily == "negbinomial")) stop("argument 'rrnb2' does not appear to be a negbinomial() fit") if (rrnb2@control$Rank != 1) stop("argument 'rrnb2' is not Rank-1") if (rrnb2@misc$M != 2) stop("argument 'rrnb2' does not have M = 2") if (!all(rrnb2@misc$link == "loge")) stop("argument 'rrnb2' does not have log links for both parameters") a21.hat <- (Coef(rrnb2)@A)["log(size)", 1] beta11.hat <- Coef(rrnb2)@B1["(Intercept)", "log(mu)"] beta21.hat <- Coef(rrnb2)@B1["(Intercept)", "log(size)"] delta1.hat <- exp(a21.hat * beta11.hat - beta21.hat) delta2.hat <- 2 - a21.hat SE.a21.hat <- sqrt(vcovrrvglm(rrnb2)["I(lv.mat)", "I(lv.mat)"]) ci.a21 <- a21.hat + c(-1, 1) * qnorm(1 - (1 - level)/2) * SE.a21.hat (ci.delta2 <- 2 - rev(ci.a21)) # e.g., the 95 percent CI list(a21.hat = a21.hat, beta11.hat = beta11.hat, beta21.hat = beta21.hat, CI.a21 = 
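# Sketch for Rcim() and moffset() above (hedged, untested): both rebase
# a two-way table so that a chosen cell becomes the top-left reference,
# e.g. before calling rcim().  'mytab' is any labelled count matrix.
#   Rcim(mytab, rbaseline = 2, cbaseline = 3)
#   moffset(mytab, roffset = 1, coffset = 1, postfix = "*")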
ci.a21, CI.delta2 = ci.delta2, delta1 = delta1.hat, delta2 = delta2.hat, SE.a21.hat = SE.a21.hat) } Confint.nb1 <- function(nb1, level = 0.95) { if (class(nb1) != "vglm") stop("argument 'nb1' does not appear to be a vglm() object") if (!any(nb1@family@vfamily == "negbinomial")) stop("argument 'nb1' does not appear to be a negbinomial() fit") if (!all(unlist(constraints(nb1)[-1]) == 1)) stop("argument 'nb1' does not appear to have 'parallel = TRUE'") if (!all(unlist(constraints(nb1)[1]) == c(diag(nb1@misc$M)))) stop("argument 'nb1' does not have 'parallel = FALSE' ", "for the intercept") if (nb1@misc$M != 2) stop("argument 'nb1' does not have M = 2") if (!all(nb1@misc$link == "loge")) stop("argument 'nb1' does not have log links for both parameters") cnb1 <- coefficients(as(nb1, "vglm"), matrix = TRUE) mydiff <- (cnb1["(Intercept)", "log(size)"] - cnb1["(Intercept)", "log(mu)"]) delta0.hat <- exp(mydiff) (phi0.hat <- 1 + 1 / delta0.hat) # MLE of phi0 myvcov <- vcovvlm(as(nb1, "vglm")) # Not great; improve this! myvec <- cbind(c(-1, 1, rep(0, len = nrow(myvcov) - 2))) (se.mydiff <- sqrt(t(myvec) %*% myvcov %*% myvec)) ci.mydiff <- mydiff + c(-1, 1) * qnorm(1 - (1 - level)/2) * se.mydiff ci.delta0 <- ci.exp.mydiff <- exp(ci.mydiff) (ci.phi0 <- 1 + 1 / rev(ci.delta0)) # e.g., the 95 percent CI for phi0 list(CI.phi0 = ci.phi0, CI.delta0 = ci.delta0, delta0 = delta0.hat, phi0 = phi0.hat) } plota21 <- function(rrvglm2, plot.it = TRUE, nseq.a21 = 31, se.eachway = c(5, 5), # == c(LHS, RHS), trace.arg = TRUE, ...) { if (class(rrvglm2) != "rrvglm") stop("argument 'rrvglm2' does not appear to be a rrvglm() object") if (rrvglm2@control$Rank != 1) stop("argument 'rrvglm2' is not Rank-1") if (rrvglm2@misc$M != 2) stop("argument 'rrvglm2' does not have M = 2") loglik.orig <- logLik(rrvglm2) temp1 <- Confint.rrnb(rrvglm2) # zz a21.hat <- (Coef(rrvglm2)@A)[2, 1] SE.a21.hat <- temp1$SE.a21.hat SE.a21.hat <- sqrt(vcov(rrvglm2)["I(lv.mat)", "I(lv.mat)"]) big.ci.a21 <- a21.hat + c(-1, 1) * se.eachway * SE.a21.hat seq.a21 <- seq(big.ci.a21[1], big.ci.a21[2], length = nseq.a21) Hlist.orig <- constraints.vlm(rrvglm2, type = "term") alreadyComputed <- !is.null(rrvglm2@post$a21.matrix) a21.matrix <- if (alreadyComputed) rrvglm2@post$a21.matrix else cbind(a21 = seq.a21, loglikelihood = 0) prev.etastart <- predict(rrvglm2) # Halves the computing time funname <- "vglm" listcall <- as.list(rrvglm2@call) if (!alreadyComputed) for (ii in 1:nseq.a21) { if (trace.arg) print(ii) argslist <- vector("list", length(listcall) - 1) for (kay in 2:(length(listcall))) argslist[[kay - 1]] <- listcall[[kay]] names(argslist) <- c(names(listcall)[-1]) argslist$trace <- trace.arg argslist$etastart <- prev.etastart argslist$constraints <- Hlist.orig for (kay in 2:length(argslist[["constraints"]])) { argslist[["constraints"]][[kay]] <- rbind(1, a21.matrix[ii, 1]) } fitnew <- do.call(what = funname, args = argslist) a21.matrix[ii, 2] <- logLik(fitnew) prev.etastart <- predict(fitnew) } if (plot.it) { plot(a21.matrix[ ,1], a21.matrix[ ,2], type = "l", col = "blue", xlab = expression(a[21]), ylab = "Log-likelihood") # ... 
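# Sketch for Confint.rrnb() and Confint.nb1() above (hedged, untested).
# 'rrnb2' is assumed to be a rank-1 rrvglm(..., negbinomial(zero = NULL))
# fit and 'nb1' a parallel NB-1 vglm() fit, as those functions require.
#   Confint.rrnb(rrnb2, level = 0.95)$CI.delta2  # CI for delta2 = 2 - a21
#   Confint.nb1(nb1, level = 0.95)$CI.phi0       # CI for phi0
# plota21() (this function) profiles the log-likelihood over a21.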
abline(v = (Hlist.orig[[length(Hlist.orig)]])[2, 1], col = "darkorange", lty = "dashed") abline(h = loglik.orig, col = "darkorange", lty = "dashed") abline(h = loglik.orig - qchisq(0.95, df = 1), col = "darkorange", lty = "dashed") abline(v = a21.hat + c(-1, 1) * 1.96 * SE.a21.hat, col = "gray50", lty = "dashed", lwd = 2.0) } # End of (plot.it) rrvglm2@post <- list(a21.matrix = a21.matrix) invisible(rrvglm2) } Qvar <- function(object, factorname = NULL, which.eta = 1, coef.indices = NULL, labels = NULL, dispersion = NULL, reference.name = "(reference)", estimates = NULL ) { if (!is.Numeric(which.eta, allowable.length = 1, integer.valued = TRUE, positive = TRUE)) stop("argument 'which.eta' must be a positive integer") coef.indices.saved <- coef.indices if (!is.matrix(object)) { model <- object if (is.null(factorname) && is.null(coef.indices)) { stop("arguments 'factorname' and 'coef.indices' are ", "both NULL") } if (is.null(coef.indices)) { M <- npred(model) if (M < which.eta) stop("argument 'which.eta' must be a value from the set 1:", M) newfactorname <- if (M > 1) { clist <- constraints(model, type = "term") Hk <- clist[[factorname]] Mdot <- ncol(Hk) Hk.row <- Hk[which.eta, ] if (sum(Hk.row != 0) > 1) stop("cannot handle rows of constraint matrices with more ", "than one nonzero value") foo <- function(ii) switch(as.character(ii), '1'="1st", '2'="2nd", '3'="3rd", paste(ii, "th", sep = "")) if (sum(Hk.row != 0) == 0) stop("factor '", factorname, "' is not used the ", foo(which.eta), " eta (linear predictor)") row.index <- (1:Mdot)[Hk.row != 0] all.labels <- vlabel(factorname, ncolBlist = Mdot, M = M) all.labels[row.index] } else { factorname } colptr <- attr(model.matrix(object, type = "vlm"), "vassign") colptr <- if (M > 1) { colptr[newfactorname] } else { colptr[[newfactorname]] } coef.indices <- colptr contmat <- if (length(model@xlevels[[factorname]]) == length(coef.indices)) { diag(length(coef.indices)) } else { eval(call(model@contrasts[[factorname]], model@xlevels [[factorname]])) } rownames(contmat) <- model@xlevels[[factorname]] if (is.null(estimates)) { if (M > 1) { estimates <- matrix(-1, nrow(contmat), 1) # Used to be nc = Mdot ii <- 1 estimates[, ii] <- contmat %*% (coefvlm(model)[(coef.indices[[ii]])]) } else { estimates <- contmat %*% (coefvlm(model)[coef.indices]) } } Covmat <- vcovvlm(model, dispersion = dispersion) covmat <- Covmat[unlist(coef.indices), unlist(coef.indices), drop = FALSE] covmat <- if (M > 1) { ii <- 1 ans <- contmat %*% Covmat[(colptr[[ii]]), (colptr[[ii]])] %*% t(contmat) ans } else { contmat %*% covmat %*% t(contmat) } } else { # ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, kk <- length(coef.indices) refPos <- numeric(0) if (0 %in% coef.indices) { refPos <- which(coef.indices == 0) coef.indices <- coef.indices[-refPos] } covmat <- vcovvlm(model, dispersion = dispersion) covmat <- covmat[coef.indices, coef.indices, drop = FALSE] if (is.null(estimates)) estimates <- coefvlm(model)[coef.indices] if (length(refPos) == 1) { if (length(estimates) != kk) estimates <- c(0, estimates) covmat <- rbind(0, cbind(0, covmat)) names(estimates)[1] <- rownames(covmat)[1] <- colnames(covmat)[1] <- reference.name if (refPos != 1) { perm <- if (refPos == kk) c(2:kk, 1) else c(2:refPos, 1, (refPos + 1):kk) estimates <- estimates[perm] covmat <- covmat[perm, perm, drop = FALSE] } } } # ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, return(Recall(covmat, factorname = factorname, which.eta = which.eta, coef.indices = coef.indices.saved, labels = labels, 
dispersion = dispersion, estimates = estimates ) ) } else { # ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; covmat <- object if (length(labels)) rownames(covmat) <- colnames(covmat) <- labels if ((LLL <- dim(covmat)[1]) <= 2) stop("This function works only for factors with 3 ", "or more levels") } # ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; allvcov <- covmat for (ilocal in 1:LLL) for (jlocal in ilocal:LLL) allvcov[ilocal, jlocal] <- allvcov[jlocal, ilocal] <- covmat[ilocal, ilocal] + covmat[jlocal, jlocal] - covmat[ilocal, jlocal] * 2 diag(allvcov) <- rep(1.0, length = LLL) # Any positive value should do wmat <- matrix(1.0, LLL, LLL) diag(wmat) <- sqrt( .Machine$double.eps ) logAllvcov <- log(allvcov) attr(logAllvcov, "Prior.Weights") <- wmat attr(logAllvcov, "estimates") <- estimates attr(logAllvcov, "coef.indices") <- coef.indices attr(logAllvcov, "factorname") <- factorname attr(logAllvcov, "regularVar") <- diag(covmat) attr(logAllvcov, "which.eta") <- which.eta logAllvcov } WorstErrors <- function(qv.object) { stop("20110729; does not work") reducedForm <- function(covmat, qvmat) { nlevels <- dim(covmat)[1] firstRow <- covmat[1, ] ones <- rep(1, nlevels) J <- outer(ones, ones) notzero <- 2:nlevels r.covmat <- covmat + (firstRow[1]*J) - outer(firstRow, ones) - outer(ones, firstRow) r.covmat <- r.covmat[notzero, notzero] qv1 <- qvmat[1, 1] r.qvmat <- (qvmat + qv1*J)[notzero, notzero] list(r.covmat, r.qvmat) } covmat <- qv.object$covmat qvmat <- diag(qv.object$qvframe$quasiVar) r.form <- reducedForm(covmat, qvmat) r.covmat <- r.form[[1]] r.qvmat <- r.form[[2]] inverse.sqrt <- solve(chol(r.covmat)) evalues <- eigen(t(inverse.sqrt) %*% r.qvmat %*% inverse.sqrt, symmetric = TRUE)$values sqrt(c(min(evalues), max(evalues))) - 1 } IndentPrint <- function(object, indent = 4, ...) { stop("20110729; does not work") zz <- "" tc <- textConnection("zz", "w", local = TRUE) sink(tc) try(print(object, ...)) sink() close(tc) indent <- paste(rep(" ", indent), sep = "", collapse = "") cat(paste(indent, zz, sep = ""), sep = "\n") } Print.qv <- function(x, ...) { stop("20110729; does not work") } summary.qvar <- function(object, ...) { relerrs <- 1 - sqrt(exp(residuals(object, type = "response"))) diag(relerrs) <- NA minErrSimple <- round(100 * min(relerrs, na.rm = TRUE), 1) maxErrSimple <- round(100 * max(relerrs, na.rm = TRUE), 1) estimates <- c(object@extra$attributes.y$estimates) if (!length(names(estimates)) && is.matrix(object@extra$attributes.y$estimates)) names( estimates) <- rownames(object@extra$attributes.y$estimates) if (!length(names(estimates))) names( estimates) <- paste("Level", 1:length(estimates), sep = "") regularVar <- c(object@extra$attributes.y$regularVar) QuasiVar <- exp(diag(fitted(object))) / 2 QuasiSE <- sqrt(QuasiVar) structure(list(estimate = estimates, SE = sqrt(regularVar), minErrSimple = minErrSimple, maxErrSimple = maxErrSimple, quasiSE = QuasiSE, object = object, quasiVar = QuasiVar), class = "summary.qvar") } print.summary.qvar <- function(x, ...) 
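# Quasi-variance workflow sketch (hedged, untested): Qvar() above turns
# a fitted model plus a factor into a matrix that rcim(, normal1) can
# fit; summary() and plotqvar() then report and draw the quasi SEs.
# warpbreaks is from base R; 'explink' and the extra iterations are
# assumptions here.
#   fit <- vglm(breaks ~ wool + tension, poissonff, data = warpbreaks)
#   qv <- rcim(Qvar(fit, "tension"), normal1("explink"), maxit = 99)
#   summary(qv)
#   plotqvar(qv, scol = "blue")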
{
  object <- x$object
  minErrSimple <- x$minErrSimple
  maxErrSimple <- x$maxErrSimple

  x$minErrSimple <- NULL
  x$maxErrSimple <- NULL
  x$object <- NULL

  if (length(cl <- object@call)) {
    cat("Call:\n")
    dput(cl)
  }

  facname <- c(object@extra$attributes.y$factorname)
  if (length(facname))
    cat("Factor name: ", facname, "\n")

  if (length(object@dispersion))
    cat("\nDispersion: ", object@dispersion, "\n\n")

  x <- as.data.frame(c(x))
  print.data.frame(x)

  cat("\nWorst relative errors in SEs of simple contrasts (%): ",
      minErrSimple, ", ", maxErrSimple, "\n")

  invisible(x)
}

plotqvar <- function(object,
                     intervalWidth = 2,
                     ylab = "Estimate",
                     xlab = NULL,  # x$factorname,
                     ylim = NULL,
                     main = "",
                     levelNames = NULL,
                     conf.level = 0.95,
                     warn.ratio = 10,
                     border = "transparent",  # None
                     points.arg = TRUE,
                     length.arrows = 0.25, angle = 30,
                     scol = par()$col, slwd = par()$lwd, slty = par()$lty,
                     ...) {
  if (!is.numeric(intervalWidth) && !is.numeric(conf.level))
    stop("at least one of arguments 'intervalWidth' and 'conf.level' ",
         "should be numeric")
  if (!any("normal1" %in% object@family@vfamily))
    stop("argument 'object' does not appear to be a ",
         "rcim(, normal1) object")

  estimates <- c(object@extra$attributes.y$estimates)
  if (!length(names(estimates)) &&
      is.matrix(object@extra$attributes.y$estimates))
    names(estimates) <- rownames(object@extra$attributes.y$estimates)
  if (!length(names(estimates)))
    names(estimates) <- paste("Level", 1:length(estimates), sep = "")

  QuasiVar <- exp(diag(fitted(object))) / 2
  QuasiSE <- sqrt(QuasiVar)

  if (!is.numeric(estimates))
    stop("Cannot plot, because there are no 'proper' ",
         "parameter estimates")
  if (!is.numeric(QuasiSE))
    stop("Cannot plot, because there are no ",
         "quasi standard errors")

  faclevels <- factor(names(estimates), levels = names(estimates))
  xvalues <- seq(along = faclevels)
  tops  <- estimates + intervalWidth * QuasiSE
  tails <- estimates - intervalWidth * QuasiSE

  if (is.numeric(conf.level)) {
    zedd <- abs(qnorm((1 - conf.level) / 2))
    lsd.tops  <- estimates + zedd * QuasiSE / sqrt(2)
    lsd.tails <- estimates - zedd * QuasiSE / sqrt(2)
    if (max(QuasiSE) / min(QuasiSE) > warn.ratio)
      warning("Quasi SEs appear to be quite different... the ",
              "LSD intervals may not be very accurate")
  } else {
    lsd.tops <- NULL
    lsd.tails <- NULL
  }

  if (is.null(ylim))
    ylim <- range(c(tails, tops, lsd.tails, lsd.tops), na.rm = TRUE)
  if (is.null(xlab))
    xlab <- "Factor level"

  plot(faclevels, estimates, border = border,
       ylim = ylim, xlab = xlab, ylab = ylab, main = main, ...)

  if (points.arg)
    points(estimates, ...)

  if (is.numeric(intervalWidth)) {
    segments(xvalues, tails, xvalues, tops,
             col = scol, lty = slty, lwd = slwd)
  }
  if (is.numeric(conf.level)) {
    arrows(xvalues, lsd.tails, xvalues, lsd.tops,
           col = scol, lty = slty, lwd = slwd,
           code = 3, length = length.arrows, angle = angle)
  }

  if (any(slotNames(object) == "post")) {
    object@post$estimates <- estimates
    object@post$xvalues   <- xvalues
    if (is.numeric(intervalWidth)) {
      object@post$tails <- tails
      object@post$tops  <- tops
    }
    if (is.numeric(conf.level)) {
      object@post$lsd.tails <- lsd.tails
      object@post$lsd.tops  <- lsd.tops
    }
  }

  invisible(object)
}
VGAM/R/family.quantal.R
# These functions are
# Copyright (C) 1998-2013 T.W. Yee, University of Auckland.
# All rights reserved.
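# Usage sketch for abbott() defined next (hedged, untested): a binary
# dose-response model with natural ("control") mortality.  The data
# frame 'adata' is made up for illustration only.
#   set.seed(1)
#   adata <- data.frame(dose = rep(0:5, each = 20))
#   adata <- transform(adata,
#                      y = rbinom(120, 1, 0.1 + 0.9 * pnorm(dose - 3)))
#   fit <- vglm(y ~ dose, abbott(link1 = "probit"), data = adata,
#               trace = TRUE)
#   head(fitted(fit))  # prob0 + (1 - prob0) * prob1 by default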
abbott <- function(link0 = "logit", link1 = "logit", iprob0 = NULL, iprob1 = NULL, fitted.type = c("observed", "treatment", "control"), mux.offdiagonal = 0.98, zero = 1) { fitted.type <- match.arg(fitted.type, c("observed", "treatment", "control"), several.ok = TRUE) link0 <- as.list(substitute(link0)) earg0 <- link2list(link0) link0 <- attr(earg0, "function.name") link1 <- as.list(substitute(link1)) earg1 <- link2list(link1) link1 <- attr(earg1, "function.name") if (!is.Numeric(mux.offdiagonal, allowable.length = 1) || mux.offdiagonal >= 1 || mux.offdiagonal < 0) stop("argument 'mux.offdiagonal' must be in the interval [0, 1)") new("vglmff", blurb = c("Abbott's model for binary responses\n", "mu = prob0 + (1 - prob0) * prob1\n", "where 'prob0' is the 'control' mortality and\n", "'prob1' is the 'treatment' mortality and\n", "'mu' is the 'observed' mortality\n\n", "Links: ", namesof("prob0", link0, earg = earg0), ", ", namesof("prob1", link1, earg = earg1)), constraints = eval(substitute(expression({ constraints <- cm.zero.vgam(constraints, x, .zero, M) }), list( .zero = zero # , ))), initialize = eval(substitute(expression({ eval(binomialff(link = .link0 )@initialize) # w, y, mustart are assigned predictors.names <- c(namesof("prob0", .link0, earg = .earg0, short = TRUE), namesof("prob1", .link1, earg = .earg1, short = TRUE)) if (is.null(etastart)) { prob0.init <- if (length( .iprob0 )) { rep( .iprob0, length.out = n) } else { mustart / 2 } prob1.init <- if (length( .iprob1 )) { rep( .iprob1, length.out = n) } else { mustart / 2 } mustart <- NULL etastart <- cbind(theta2eta(prob0.init, link = .link0 , earg = .earg0 ), theta2eta(prob1.init, link = .link1 , earg = .earg1 )) } }), list( .link0 = link0, .earg0 = earg0, .link1 = link1, .earg1 = earg1, .iprob0 = iprob0, .iprob1 = iprob1 ))), linkinv = eval(substitute(function(eta, extra = NULL) { prob0 <- eta2theta(eta[, 1], .link0 , earg = .earg0 ) prob1 <- eta2theta(eta[, 2], .link1 , earg = .earg1 ) con.fv <- prob0 trt.fv <- prob1 obs.fv <- prob0 + (1 - prob0) * prob1 ans <- cbind("observed" = obs.fv, "treatment" = trt.fv, "control" = con.fv) ans[, .fitted.type , drop = FALSE] }, list( .link0 = link0, .earg0 = earg0, .link1 = link1, .earg1 = earg1, .fitted.type = fitted.type ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { prob0 <- eta2theta(eta[, 1], .link0 , earg = .earg0 ) prob1 <- eta2theta(eta[, 2], .link1 , earg = .earg1 ) mymu <- prob0 + (1 - prob0) * prob1 if (residuals) { w * (y / mymu - (1 - y) / (1 - mymu)) } else { ycounts <- if (is.numeric(extra$orig.w)) y * w / extra$orig.w else y * w # Convert proportions to counts nvec <- if (is.numeric(extra$orig.w)) round(w / extra$orig.w) else round(w) smallno <- 1.0e6 * .Machine$double.eps smallno <- sqrt(.Machine$double.eps) if (max(abs(ycounts - round(ycounts))) > smallno) warning("converting 'ycounts' to integer in @loglikelihood") ycounts <- round(ycounts) sum((if (is.numeric(extra$orig.w)) extra$orig.w else 1) * dbinom(x = ycounts, size = nvec, prob = mymu, log = TRUE)) } }, list( .link0 = link0, .earg0 = earg0, .link1 = link1, .earg1 = earg1 ))), last = eval(substitute(expression({ misc$link <- c(prob0 = .link0 , prob1 = .link1 ) misc$earg <- list(prob0 = .earg0 , prob1 = .earg1 ) misc$mux.offdiagonal <- .mux.offdiagonal misc$fitted.type <- .fitted.type misc$true.mu <- ( .fitted.type == "observed") }), list( .link0 = link0, .earg0 = earg0, .link1 = link1, .earg1 = earg1, .mux.offdiagonal = mux.offdiagonal, .fitted.type = fitted.type 
))), vfamily = c("abbott", "vquantal"), deriv = eval(substitute(expression({ prob0 <- eta2theta(eta[, 1], .link0, earg = .earg0 ) prob1 <- eta2theta(eta[, 2], .link1, earg = .earg1 ) dprob0.deta <- dtheta.deta(prob0, .link0 , earg = .earg0 ) dprob1.deta <- dtheta.deta(prob1, .link1 , earg = .earg1 ) mymu <- prob0 + (1 - prob0) * prob1 dl.dmu <- y / mymu - (1 - y) / (1 - mymu) dmu.dprob0 <- 1 - prob1 dmu.dprob1 <- 1 - prob0 dl.dprob0 <- dl.dmu * dmu.dprob0 dl.dprob1 <- dl.dmu * dmu.dprob1 c(w) * cbind(dl.dprob0 * dprob0.deta, dl.dprob1 * dprob1.deta) }), list( .link0 = link0, .earg0 = earg0, .link1 = link1, .earg1 = earg1 ))), weight = eval(substitute(expression({ ned2l.dmu2 <- 1 / (mymu * (1-mymu)) ned2l.dprob02 <- ned2l.dmu2 * dmu.dprob0^2 ned2l.dprob12 <- ned2l.dmu2 * dmu.dprob1^2 ned2l.dprob1prob2 <- ( 1) # seems sort of ok but slow cvgc ned2l.dprob1prob2 <- ( 0) # kill it ned2l.dprob1prob2 <- ned2l.dmu2 * ( 1) # dont seem to work ned2l.dprob1prob2 <- ned2l.dmu2 * dmu.dprob1 * dmu.dprob0 * .mux.offdiagonal od2l.dmu2 <- y / mymu^2 + (1 - y) / (1 - mymu)^2 od2l.dprob02 <- od2l.dmu2 * dmu.dprob0^2 od2l.dprob12 <- od2l.dmu2 * dmu.dprob1^2 od2l.dprob1prob2 <- od2l.dmu2 * dmu.dprob1 * dmu.dprob0 + dl.dmu wz <- cbind(ned2l.dprob02 * dprob0.deta^2, ned2l.dprob12 * dprob1.deta^2, ned2l.dprob1prob2 * dprob1.deta * dprob0.deta) c(w) * wz }), list( .link0 = link0, .earg0 = earg0, .link1 = link1, .earg1 = earg1, .mux.offdiagonal = mux.offdiagonal )))) } if (FALSE) Abbott <- function(lprob1 = elogit(min = 0, max = 1), # For now, that is lprob0 = "logit", iprob0 = NULL, iprob1 = NULL, nointercept = 2, # NULL, zero = 1) { stop("does not work") lprob1 <- as.list(substitute(lprob1)) eprob1 <- link2list(lprob1) lprob1 <- attr(eprob1, "function.name") lprob0 <- as.list(substitute(lprob0)) eprob0 <- link2list(lprob0) lprob0 <- attr(eprob0, "function.name") new("vglmff", blurb = c("Abbott's model for binary response\n", "mu = prob0 + prob1\n", "where 'prob0' is the control mortality and\n", "'prob1' is the treatment mortality\n\n", "Links: ", namesof("prob0", lprob0, earg = eprob0), ", ", namesof("prob1", lprob1, earg = eprob1)), constraints = eval(substitute(expression({ constraints <- cm.zero.vgam(constraints, x, .zero, M) constraints <- cm.nointercept.vgam(constraints, x, .nointercept, M) }), list( .zero = zero, .nointercept = nointercept ))), initialize = eval(substitute(expression({ print("here1") eval(binomialff(link = .lprob1)@initialize) # w, y, mustart are assigned print("here2") print("summary(mustart)") print( summary(mustart) ) predictors.names <- c(namesof("prob0", .lprob0, earg = .eprob0, short = TRUE), namesof("prob1", .lprob1, earg = .eprob1, short = TRUE)) if (is.null(etastart)) { prob0.init <- if (length( .iprob0 )) { rep( .iprob0, len = n) } else { mustart / 2 } prob1.init <- if (length( .iprob1 )) { rep( .iprob1, len = n) } else { mustart * 1 / 4 } mustart <- NULL print("prob0.init ") print( sort(prob0.init) ) print("prob1.init ") print( sort(prob1.init) ) eprob1 <- list(min = prob0.init, max = 1) etastart <- cbind(theta2eta(prob0.init, link = .lprob0 , earg = .eprob0 ), theta2eta(prob1.init, link = .lprob1 , earg = eprob1 )) print("head(etastart)") print( head(etastart) ) } }), list( .lprob1 = lprob1, .eprob1 = eprob1, .lprob0 = lprob0, .eprob0 = eprob0, .iprob0 = iprob0, .iprob1 = iprob1 ))), linkinv = eval(substitute(function(eta, extra = NULL) { prob0 <- eta2theta(eta[, 1], .lprob0 , earg = .eprob0) eprob1 <- list(min = prob0, max = 1) prob1 <- eta2theta(eta[, 2], .lprob1 , earg = 
eprob1) prob0 + prob1 }, list( .lprob1 = lprob1, .eprob1 = eprob1, .lprob0 = lprob0, .eprob0 = eprob0 ))), last = eval(substitute(expression({ eprob1 <- list(min = prob0, max = 1) misc$link <- c(prob0 = .lprob0, prob1 = .lprob1) misc$earg <- list(prob0 = .eprob0, prob1 = eprob1) misc$nointercept = .nointercept }), list( .lprob1 = lprob1, .eprob1 = eprob1, .lprob0 = lprob0, .eprob0 = eprob0, .nointercept = nointercept ))), vfamily = c("Abbott", "vquantal"), deriv = eval(substitute(expression({ prob0 <- eta2theta(eta[,1], .lprob0, earg = .eprob0) eprob1 <- list(min = prob0, max = 1) prob1 <- eta2theta(eta[,2], .lprob1, earg = eprob1) dprob0.deta <- dtheta.deta(prob0, .lprob0 , earg = .eprob0 ) dprob1.deta <- dtheta.deta(prob1, .lprob1 , earg = eprob1 ) dl.dmu <- y / mu - (1 - y) / (1 - mu) dmu.dprob0 <- 1 # - prob1 dmu.dprob1 <- 1 # - prob0 dl.dprob0 <- dl.dmu * dmu.dprob0 dl.dprob1 <- dl.dmu * dmu.dprob1 c(w) * cbind(dl.dmu * dmu.dprob0 * dprob0.deta, dl.dmu * dmu.dprob1 * dprob1.deta) }), list( .lprob1 = lprob1, .eprob1 = eprob1, .lprob0 = lprob0, .eprob0 = eprob0 ))), weight = eval(substitute(expression({ ned2l.dmu2 <- 1 / (mu * (1-mu)) ned2l.dprob02 <- ned2l.dmu2 * dmu.dprob0^2 ned2l.dprob12 <- ned2l.dmu2 * dmu.dprob1^2 wz <- cbind(ned2l.dprob02 * dprob0.deta^2, ned2l.dprob12 * dprob1.deta^2) print("head(wz)") print( head(wz) ) c(w) * wz }), list( .lprob1 = lprob1, .eprob1 = eprob1, .lprob0 = lprob0, .eprob0 = eprob0 )))) } abbott.EM.control <- function(maxit = 1000, ...) { list(maxit = maxit) } abbott.EM <- function(link = "probit", b1.arg = 0, b2.arg = 0, imethod = 1, ilambda = 0.5, iprob = NULL) { link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") if (!is.Numeric(b1.arg, # allowable.length = 1, integer.valued = TRUE) || b1.arg < 0) stop("argument 'b1.arg' must be a vector of non-negative integers") if (!is.Numeric(b2.arg, # allowable.length = 1, integer.valued = TRUE) || b2.arg < 0) stop("argument 'b2.arg' must be a vector of non-negative integers") if (!is.Numeric(imethod, allowable.length = 1, integer.valued = TRUE, positive = TRUE) || imethod > 3) stop("argument 'imethod' must be 1 or 2 or 3") zero <- NULL if (length(zero) && !is.Numeric(zero, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'zero'") new("vglmff", blurb = c("Probit regression with nonzero background (EM algorithm)\n", "P[Y=1] = mu = prob0 + (1 - prob0) * linkinv(eta)\n\n", "Link: ", namesof("pi", link, earg = earg), "\n", "Mean: mu"), constraints = eval(substitute(expression({ dotzero <- .zero Musual <- 1 eval(negzero.expression) }), list( .zero = zero ))), infos = eval(substitute(function(...) 
{ list(Musual = 1, zero = .zero ) }, list( .zero = zero ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, Is.nonnegative.y = TRUE, Is.integer.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y if (length(table(y)) != 2 || max(y) > 1) stop("response must be a vector of 0s and 1s only") ncoly <- ncol(y) Musual <- 1 extra$ncoly <- ncoly extra$Musual <- Musual M <- Musual * ncoly extra$lambda <- matrix( .ilambda , n, M, byrow = TRUE) extra$orig.w <- w mynames1 <- paste("prob0", if (ncoly > 1) 1:ncoly else "", sep = "") predictors.names <- namesof(mynames1, .link , earg = .earg , tag = FALSE) if (!length(etastart)) { prob.init <- if ( .imethod == 2) 1 / (1 + y + 1/16) else if ( .imethod == 3) 1 / (1 + apply(y, 2, median) + 1/16) else rnorm(n * M, mean = 0.5, sd = 0.01) # Mean 0.5 if (!is.matrix(prob.init)) prob.init <- matrix(prob.init, n, M, byrow = TRUE) if (length( .iprob )) prob.init <- matrix( .iprob , n, M, byrow = TRUE) # Mean 0.5 etastart <- theta2eta(prob.init, .link , earg = .earg ) # Mean 0 } }), list( .link = link, .earg = earg, .ilambda = ilambda, .imethod = imethod, .iprob = iprob ))), linkinv = eval(substitute(function(eta, extra = NULL) { prob <- eta2theta(eta, .link , earg = .earg ) mymu <- extra$lambda + (1 - extra$lambda) * prob # Eqn (3) mymu }, list( .link = link, .earg = earg ))), last = eval(substitute(expression({ Musual <- extra$Musual misc$link <- c(rep( .link , length = ncoly)) names(misc$link) <- mynames1 misc$earg <- vector("list", M) names(misc$earg) <- mynames1 for(ii in 1:ncoly) { misc$earg[[ii]] <- .earg } misc$Musual <- Musual misc$multipleResponses <- TRUE misc$imethod <- .imethod misc$iprob <- .iprob misc$b1.arg <- .b1.arg misc$b2.arg <- .b2.arg extra$lambda <- extra$lambda[1, ] # Now a vector }), list( .link = link, .earg = earg, .iprob = iprob, .b1.arg = b1.arg, .b2.arg = b2.arg, .imethod = imethod ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { prob <- eta2theta(eta, .link , earg = .earg ) mymu <- extra$lambda + (1 - extra$lambda) * prob # Eqn (3) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else { nvec <- if (is.numeric(extra$orig.w)) round(w / extra$orig.w) else round(w) sum(c(w) * dbinom(x = y, prob = mymu, size = nvec, log = TRUE)) } }, list( .link = link, .earg = earg ))), vfamily = c("abbott.EM"), deriv = eval(substitute(expression({ prob <- eta2theta(eta, .link , earg = .earg ) mymu <- extra$lambda + (1 - extra$lambda) * prob # Eqn (3) wz <- cbind((1 - extra$lambda) * prob / mymu) # Eqn (4) Deriv1 <- ifelse(y == 0, -dnorm(eta) / pnorm(eta, lower.tail = FALSE), dnorm(eta) / pnorm(eta)) c(w) * wz * Deriv1 }), list( .link = link, .earg = earg ))), weight = eval(substitute(expression({ extra$lambda <- matrix((colSums((1 - wz) * y) + .b1.arg ) / (n + .b1.arg + .b2.arg ), n, M, byrow = TRUE) # Between eqns (6),(7) c(w) * wz }), list( .link = link, .earg = earg, .b1.arg = b1.arg, .b2.arg = b2.arg )))) } VGAM/R/family.qreg.R0000644000176000001440000051563712136651110013613 0ustar ripleyusers# These functions are # Copyright (C) 1998-2013 T.W. Yee, University of Auckland. # All rights reserved. lms.bcn.control <- lms.bcg.control <- lms.yjn.control <- function(trace = TRUE, ...) 
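# Aside, not package code: the E-step quantities that abbott.EM() above
# iterates.  With background rate lambda and probit probability
# p = pnorm(eta), the marginal success probability is
#   mu = lambda + (1 - lambda) * p,       # eqn (3) in the code comments
# and
#   wz = (1 - lambda) * p / mu            # eqn (4)
# is the posterior probability that a success arose from the probit
# component rather than the background.  Toy values:
lambda <- 0.2; eta <- 0.5
p  <- pnorm(eta)
mu <- lambda + (1 - lambda) * p
wz <- (1 - lambda) * p / mu
stopifnot(all.equal(lambda / mu + wz, 1))  # the two posteriors sum to 1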
list(trace = trace) lms.bcn <- function(percentiles = c(25, 50, 75), zero = c(1, 3), llambda = "identity", lmu = "identity", lsigma = "loge", dfmu.init = 4, dfsigma.init = 2, ilambda = 1, isigma = NULL, tol0 = 0.001, expectiles = FALSE) { llambda <- as.list(substitute(llambda)) elambda <- link2list(llambda) llambda <- attr(elambda, "function.name") lmu <- as.list(substitute(lmu)) emu <- link2list(lmu) lmu <- attr(emu, "function.name") lsigma <- as.list(substitute(lsigma)) esigma <- link2list(lsigma) lsigma <- attr(esigma, "function.name") if (!is.Numeric(tol0, positive = TRUE, allowable.length = 1)) stop("bad input for argument 'tol0'") if (!is.Numeric(ilambda)) stop("bad input for argument 'ilambda'") if (length(isigma) && !is.Numeric(isigma, positive = TRUE)) stop("bad input for argument 'isigma'") if (length(expectiles) != 1 || !is.logical(expectiles)) stop("bad input for argument 'expectiles'") new("vglmff", blurb = c("LMS ", if (expectiles) "Expectile" else "Quantile", " Regression (Box-Cox transformation to normality)\n", "Links: ", namesof("lambda", link = llambda, earg = elambda), ", ", namesof("mu", link = lmu, earg = emu), ", ", namesof("sigma", link = lsigma, earg = esigma)), constraints = eval(substitute(expression({ constraints <- cm.zero.vgam(constraints, x, .zero, M) }), list( .zero = zero))), initialize = eval(substitute(expression({ w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = 1, ncol.y.max = 1) predictors.names <- c(namesof("lambda", .llambda, earg = .elambda, short= TRUE), namesof("mu", .lmu, earg = .emu, short= TRUE), namesof("sigma", .lsigma, earg = .esigma, short= TRUE)) if (!length(etastart)) { Fit5 <- vsmooth.spline(x = x[, min(ncol(x), 2)], y = y, w = w, df = .dfmu.init) fv.init <- c(predict(Fit5, x = x[, min(ncol(x), 2)])$y) lambda.init <- if (is.Numeric( .ilambda )) .ilambda else 1.0 sigma.init <- if (is.null(.isigma)) { myratio <- ((y/fv.init)^lambda.init - 1) / lambda.init if (is.Numeric( .dfsigma.init )) { fit600 <- vsmooth.spline(x = x[, min(ncol(x), 2)], y = myratio^2, w = w, df = .dfsigma.init) sqrt(c(abs(predict(fit600, x = x[, min(ncol(x), 2)])$y))) } else { sqrt(var(myratio)) } } else { .isigma } etastart <- cbind(theta2eta(lambda.init, .llambda, earg = .elambda), theta2eta(fv.init, .lmu, earg = .emu), theta2eta(sigma.init, .lsigma, earg = .esigma)) } }), list( .llambda = llambda, .lmu = lmu, .lsigma = lsigma, .elambda = elambda, .emu = emu, .esigma = esigma, .dfmu.init = dfmu.init, .dfsigma.init = dfsigma.init, .ilambda = ilambda, .isigma = isigma ))), linkinv = eval(substitute(function(eta, extra = NULL) { eta[, 1] <- eta2theta(eta[, 1], .llambda, earg = .elambda) eta[, 2] <- eta2theta(eta[, 2], .lmu, earg = .emu) eta[, 3] <- eta2theta(eta[, 3], .lsigma, earg = .esigma) if ( .expectiles ) { explot.lms.bcn(percentiles= .percentiles, eta = eta) } else { qtplot.lms.bcn(percentiles= .percentiles, eta = eta) } }, list( .llambda = llambda, .lmu = lmu, .lsigma = lsigma, .elambda = elambda, .emu = emu, .esigma = esigma, .percentiles = percentiles, .expectiles = expectiles ))), last = eval(substitute(expression({ misc$links <- c(lambda = .llambda, mu = .lmu, sigma = .lsigma ) misc$earg <- list(lambda = .elambda, mu = .emu, sigma = .esigma ) misc$tol0 <- .tol0 misc$percentiles <- .percentiles misc$true.mu <- FALSE # @fitted is not a true mu misc$expectiles <- .expectiles if (control$cdf) { post$cdf <- cdf.lms.bcn(y, eta0 = matrix(c(lambda, mymu, sigma), ncol = 3, dimnames = list(dimnames(x)[[1]], NULL))) } }), list( .llambda = llambda, .lmu = lmu, 
.lsigma = lsigma, .elambda = elambda, .emu = emu, .esigma = esigma, .percentiles = percentiles, .expectiles = expectiles, .tol0 = tol0 ))), loglikelihood = eval(substitute( function(mu,y, w, residuals= FALSE, eta, extra = NULL) { lambda <- eta2theta(eta[, 1], .llambda, earg = .elambda) muvec <- eta2theta(eta[, 2], .lmu, earg = .emu) sigma <- eta2theta(eta[, 3], .lsigma, earg = .esigma) zedd <- ((y/muvec)^lambda - 1) / (lambda * sigma) log.dz.dy <- (lambda - 1) * log(y/muvec) - log(muvec * sigma) is.eff.0 <- abs(lambda) < .tol0 if (any(is.eff.0)) { zedd[is.eff.0] <- log(y[is.eff.0] / muvec[is.eff.0]) / sigma[is.eff.0] log.dz.dy[is.eff.0] <- -log(y[is.eff.0] * sigma[is.eff.0]) } if (residuals) stop("loglikelihood residuals not ", "implemented") else { sum(c(w) * (dnorm(zedd, log = TRUE) + log.dz.dy)) } }, list( .llambda = llambda, .lmu = lmu, .lsigma = lsigma, .elambda = elambda, .emu = emu, .esigma = esigma, .tol0 = tol0 ))), vfamily = c("lms.bcn", "lmscreg"), deriv = eval(substitute(expression({ lambda <- eta2theta(eta[, 1], .llambda, earg = .elambda) mymu <- eta2theta(eta[, 2], .lmu, earg = .emu) sigma <- eta2theta(eta[, 3], .lsigma, earg = .esigma) zedd <- ((y / mymu)^lambda - 1) / (lambda * sigma) z2m1 <- zedd * zedd - 1 dl.dlambda <- zedd * (zedd - log(y/mymu) / sigma) / lambda - z2m1 * log(y/mymu) dl.dmu <- zedd / (mymu * sigma) + z2m1 * lambda / mymu dl.dsigma <- z2m1 / sigma dlambda.deta <- dtheta.deta(lambda, .llambda, earg = .elambda) dmu.deta <- dtheta.deta(mymu, .lmu, earg = .emu) dsigma.deta <- dtheta.deta(sigma, .lsigma, earg = .esigma) c(w) * cbind(dl.dlambda * dlambda.deta, dl.dmu * dmu.deta, dl.dsigma * dsigma.deta) }), list( .llambda = llambda, .lmu = lmu, .lsigma = lsigma, .elambda = elambda, .emu = emu, .esigma = esigma ))), weight = eval(substitute(expression({ wz <- matrix(as.numeric(NA), n, 6) wz[,iam(1, 1, M)] <- (7 * sigma^2 / 4) * dlambda.deta^2 wz[,iam(2, 2, M)] <- (1 + 2*(lambda*sigma)^2)/(mymu*sigma)^2 * dmu.deta^2 wz[,iam(3, 3, M)] <- (2 / sigma^2) * dsigma.deta^2 wz[,iam(1, 2, M)] <- (-1 / (2 * mymu)) * dlambda.deta * dmu.deta wz[,iam(1, 3, M)] <- (lambda * sigma) * dlambda.deta * dsigma.deta wz[,iam(2, 3, M)] <- (2*lambda/(mymu * sigma)) * dmu.deta * dsigma.deta c(w) * wz }), list( .llambda = llambda, .lmu = lmu, .lsigma = lsigma, .elambda = elambda, .emu = emu, .esigma = esigma )))) } lms.bcg <- function(percentiles = c(25, 50, 75), zero = c(1, 3), llambda = "identity", lmu = "identity", lsigma = "loge", dfmu.init=4, dfsigma.init = 2, ilambda = 1, isigma = NULL) { llambda <- as.list(substitute(llambda)) elambda <- link2list(llambda) llambda <- attr(elambda, "function.name") lmu <- as.list(substitute(lmu)) emu <- link2list(lmu) lmu <- attr(emu, "function.name") lsigma <- as.list(substitute(lsigma)) esigma <- link2list(lsigma) lsigma <- attr(esigma, "function.name") if (!is.Numeric(ilambda)) stop("bad input for argument 'ilambda'") if (length(isigma) && !is.Numeric(isigma, positive = TRUE)) stop("bad input for argument 'isigma'") new("vglmff", blurb = c("LMS Quantile Regression ", "(Box-Cox transformation to a Gamma distribution)\n", "Links: ", namesof("lambda", link = llambda, earg = elambda), ", ", namesof("mu", link = lmu, earg = emu), ", ", namesof("sigma", link = lsigma, earg = esigma)), constraints = eval(substitute(expression({ constraints <- cm.zero.vgam(constraints, x, .zero, M) }), list(.zero = zero))), initialize = eval(substitute(expression({ w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = 1, ncol.y.max = 1) predictors.names <- c( 
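# Aside, not package code: the Box-Cox-normal quantile map that lms.bcn()
# above inverts.  Its @loglikelihood uses
#   z = ((y/mu)^lambda - 1) / (lambda * sigma),
# so the alpha-quantile (for lambda != 0) is
#   y_alpha = mu * (1 + lambda * sigma * qnorm(alpha))^(1/lambda).
# A hypothetical helper, with made-up parameter values:
lms.bcn.qf <- function(alpha, lambda, mu, sigma)
  mu * (1 + lambda * sigma * qnorm(alpha))^(1 / lambda)
qq <- lms.bcn.qf(0.75, lambda = 0.5, mu = 20, sigma = 0.1)
zz <- ((qq / 20)^0.5 - 1) / (0.5 * 0.1)        # back-transform to a z-score
stopifnot(all.equal(zz, qnorm(0.75)))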
namesof("lambda", .llambda, earg = .elambda, short = TRUE), namesof("mu", .lmu, earg = .emu, short = TRUE), namesof("sigma", .lsigma, earg = .esigma, short = TRUE)) if (!length(etastart)) { Fit5 = vsmooth.spline(x = x[, min(ncol(x), 2)], y = y, w = w, df = .dfmu.init) fv.init = c(predict(Fit5, x = x[, min(ncol(x), 2)])$y) lambda.init = if (is.Numeric( .ilambda )) .ilambda else 1.0 sigma.init = if (is.null(.isigma)) { myratio = ((y/fv.init)^lambda.init-1) / lambda.init if (is.numeric( .dfsigma.init ) && is.finite( .dfsigma.init )) { fit600 = vsmooth.spline(x = x[, min(ncol(x), 2)], y=(myratio)^2, w = w, df = .dfsigma.init) sqrt(c(abs(predict(fit600, x = x[, min(ncol(x), 2)])$y))) } else sqrt(var(myratio)) } else .isigma etastart <- cbind(theta2eta(lambda.init, .llambda, earg = .elambda), theta2eta(fv.init, .lmu, earg = .emu), theta2eta(sigma.init, .lsigma, earg = .esigma)) } }), list( .llambda = llambda, .lmu = lmu, .lsigma = lsigma, .elambda = elambda, .emu = emu, .esigma = esigma, .dfmu.init = dfmu.init, .dfsigma.init = dfsigma.init, .ilambda = ilambda, .isigma = isigma ))), linkinv = eval(substitute(function(eta, extra = NULL) { eta[, 1] <- eta2theta(eta[, 1], .llambda, earg = .elambda) eta[, 2] <- eta2theta(eta[, 2], .lmu, earg = .emu) eta[, 3] <- eta2theta(eta[, 3], .lsigma, earg = .esigma) qtplot.lms.bcg(percentiles= .percentiles, eta = eta) }, list( .llambda = llambda, .lmu = lmu, .lsigma = lsigma, .elambda = elambda, .emu = emu, .esigma = esigma, .percentiles = percentiles ))), last = eval(substitute(expression({ misc$link <- c(lambda = .llambda, mu = .lmu, sigma = .lsigma) misc$earg <- list(lambda = .elambda, mu = .emu, sigma = .esigma) misc$percentiles <- .percentiles misc$true.mu <- FALSE # $fitted is not a true mu if (control$cdf) { post$cdf <- cdf.lms.bcg(y, eta0=matrix(c(lambda,mymu,sigma), ncol=3, dimnames = list(dimnames(x)[[1]], NULL))) } }), list( .llambda = llambda, .lmu = lmu, .lsigma = lsigma, .elambda = elambda, .emu = emu, .esigma = esigma, .percentiles = percentiles ))), loglikelihood = eval(substitute( function(mu,y, w, residuals= FALSE, eta, extra = NULL) { lambda <- eta2theta(eta[, 1], .llambda, earg = .elambda) mu <- eta2theta(eta[, 2], .lmu, earg = .emu) sigma <- eta2theta(eta[, 3], .lsigma, earg = .esigma) Gee <- (y / mu)^lambda theta <- 1 / (sigma * lambda)^2 if (residuals) stop("loglikelihood residuals not ", "implemented yet") else sum(c(w) * (log(abs(lambda)) + theta * (log(theta) + log(Gee)-Gee) - lgamma(theta) - log(y))) }, list( .llambda = llambda, .lmu = lmu, .lsigma = lsigma, .elambda = elambda, .emu = emu, .esigma = esigma ))), vfamily = c("lms.bcg", "lmscreg"), deriv = eval(substitute(expression({ lambda <- eta2theta(eta[, 1], .llambda, earg = .elambda) mymu <- eta2theta(eta[, 2], .lmu, earg = .emu) sigma <- eta2theta(eta[, 3], .lsigma, earg = .esigma) Gee <- (y / mymu)^lambda theta <- 1 / (sigma * lambda)^2 dd <- digamma(theta) dl.dlambda <- (1 + 2 * theta * (dd + Gee -1 -log(theta) - 0.5 * (Gee + 1) * log(Gee))) / lambda dl.dmu <- lambda * theta * (Gee-1) / mymu dl.dsigma <- 2*theta*(dd + Gee - log(theta * Gee)-1) / sigma dlambda.deta <- dtheta.deta(lambda, link = .llambda, earg = .elambda) dmu.deta <- dtheta.deta(mymu, link = .lmu, earg = .emu) dsigma.deta <- dtheta.deta(sigma, link = .lsigma, earg = .esigma) cbind(dl.dlambda * dlambda.deta, dl.dmu * dmu.deta, dl.dsigma * dsigma.deta) * w }), list( .llambda = llambda, .lmu = lmu, .lsigma = lsigma, .elambda = elambda, .emu = emu, .esigma = esigma ))), weight = eval(substitute(expression({ tritheta 
<- trigamma(theta) wz <- matrix(0, n, 6) if (TRUE) { part2 <- dd + 2/theta - 2*log(theta) wz[,iam(1, 1, M)] <- ((1 + theta*(tritheta*(1+4*theta) - 4*(1+1/theta) - log(theta)*(2/theta - log(theta)) + dd*part2)) / lambda^2) * dlambda.deta^2 } else { temp <- mean( Gee*(log(Gee))^2 ) wz[,iam(1, 1, M)] <- ((4 * theta * (theta * tritheta-1) - 1 + theta*temp) / lambda^2) * dlambda.deta^2 } wz[,iam(2, 2, M)] <- dmu.deta^2 / (mymu * sigma)^2 wz[,iam(3, 3, M)] <- (4 * theta * (theta * tritheta - 1) / sigma^2) * dsigma.deta^2 wz[,iam(1, 2, M)] <- (-theta * (dd + 1 / theta - log(theta)) / mymu) * dlambda.deta * dmu.deta wz[,iam(1, 3, M)] <- 2 * theta^1.5 * (2 * theta * tritheta - 2 - 1 / theta) * dlambda.deta * dsigma.deta c(w) * wz }), list( .llambda = llambda, .lmu = lmu, .lsigma = lsigma, .elambda = elambda, .emu = emu, .esigma = esigma )))) } dy.dpsi.yeojohnson <- function(psi, lambda) { L <- max(length(psi), length(lambda)) psi <- rep(psi, length.out = L); lambda <- rep(lambda, length.out = L); ifelse(psi > 0, (1 + psi * lambda)^(1/lambda - 1), (1 - (2-lambda) * psi)^((lambda - 1) / (2-lambda))) } dyj.dy.yeojohnson <- function(y, lambda) { L <- max(length(y), length(lambda)) y <- rep(y, length.out = L); lambda <- rep(lambda, length.out = L); ifelse(y>0, (1 + y)^(lambda - 1), (1 - y)^(1 - lambda)) } yeo.johnson <- function(y, lambda, derivative = 0, epsilon = sqrt(.Machine$double.eps), inverse = FALSE) { if (!is.Numeric(derivative, allowable.length = 1, integer.valued = TRUE) || derivative < 0) stop("argument 'derivative' must be a non-negative integer") ans <- y if (!is.Numeric(epsilon, allowable.length = 1, positive = TRUE)) stop("argument 'epsilon' must be a single positive number") L <- max(length(lambda), length(y)) if (length(y) != L) y <- rep(y, length.out = L) if (length(lambda) != L) lambda <- rep(lambda, length.out = L) # lambda may be of length 1 if (inverse) { if (derivative != 0) stop("argument 'derivative' must 0 when inverse = TRUE") if (any(index <- y >= 0 & abs(lambda) > epsilon)) ans[index] <- (y[index]*lambda[index] + 1)^(1/lambda[index]) - 1 if (any(index <- y >= 0 & abs(lambda) <= epsilon)) ans[index] <- expm1(y[index]) if (any(index <- y < 0 & abs(lambda-2) > epsilon)) ans[index] <- 1- (-(2-lambda[index]) * y[index]+1)^(1/(2-lambda[index])) if (any(index <- y < 0 & abs(lambda-2) <= epsilon)) ans[index] <- -expm1(-y[index]) return(ans) } if (derivative == 0) { if (any(index <- y >= 0 & abs(lambda) > epsilon)) ans[index] <- ((y[index]+1)^(lambda[index]) - 1) / lambda[index] if (any(index <- y >= 0 & abs(lambda) <= epsilon)) ans[index] <- log1p(y[index]) if (any(index <- y < 0 & abs(lambda-2) > epsilon)) ans[index] <- -((-y[index]+1)^(2-lambda[index]) - 1)/(2 - lambda[index]) if (any(index <- y < 0 & abs(lambda-2) <= epsilon)) ans[index] <- -log1p(-y[index]) } else { psi <- Recall(y = y, lambda=lambda, derivative=derivative-1, epsilon=epsilon, inverse=inverse) if (any(index <- y >= 0 & abs(lambda) > epsilon)) ans[index] <- ( (y[index]+1)^(lambda[index]) * (log1p(y[index]))^(derivative) - derivative * psi[index] ) / lambda[index] if (any(index <- y >= 0 & abs(lambda) <= epsilon)) ans[index] <- (log1p(y[index]))^(derivative + 1) / (derivative+1) if (any(index <- y < 0 & abs(lambda-2) > epsilon)) ans[index] <- -( (-y[index]+1)^(2-lambda[index]) * (-log1p(-y[index]))^(derivative) - derivative * psi[index] ) / (2-lambda[index]) if (any(index <- y < 0 & abs(lambda-2) <= epsilon)) ans[index] <- (-log1p(-y[index]))^(derivative + 1) / (derivative+1) } ans } dpsi.dlambda.yjn <- 
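# Aside, not package code: a round-trip self-check of the yeo.johnson()
# transformation defined above, covering positive and negative y and the
# special cases lambda = 0 and lambda = 2 that are handled via 'epsilon'.
y.rt <- c(-3, -0.5, 0, 0.5, 3)
for (lam in c(-1, 0, 0.5, 1, 2)) {
  psi  <- yeo.johnson(y.rt, lambda = lam)
  back <- yeo.johnson(psi, lambda = lam, inverse = TRUE)
  stopifnot(all.equal(back, y.rt))
}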
function(psi, lambda, mymu, sigma, derivative = 0, smallno = 1.0e-8) { if (!is.Numeric(derivative, allowable.length = 1, integer.valued = TRUE) || derivative < 0) stop("argument 'derivative' must be a non-negative integer") if (!is.Numeric(smallno, allowable.length = 1, positive = TRUE)) stop("argument 'smallno' must be a single positive number") L <- max(length(psi), length(lambda), length(mymu), length(sigma)) if (length(psi) != L) psi <- rep(psi, length.out = L) if (length(lambda) != L) lambda <- rep(lambda, length.out = L) if (length(mymu) != L) mymu <- rep(mymu, length.out = L) if (length(sigma) != L) sigma <- rep(sigma, length.out = L) answer <- matrix(as.numeric(NA), L, derivative+1) CC <- psi >= 0 BB <- ifelse(CC, lambda, -2+lambda) AA <- psi * BB temp8 <- if (derivative > 0) { answer[,1:derivative] <- Recall(psi = psi, lambda = lambda, mymu = mymu, sigma = sigma, derivative = derivative-1, smallno = smallno) answer[,derivative] * derivative } else { 0 } answer[,1+derivative] <- ((AA+1) * (log1p(AA)/BB)^derivative - temp8) / BB pos <- (CC & abs(lambda) <= smallno) | (!CC & abs(lambda-2) <= smallno) if (any(pos)) answer[pos,1+derivative] = (answer[pos, 1]^(1+derivative))/(derivative+1) answer } gh.weight.yjn.11 <- function(z, lambda, mymu, sigma, derivmat = NULL) { if (length(derivmat)) { ((derivmat[, 2]/sigma)^2 + sqrt(2) * z * derivmat[, 3] / sigma) / sqrt(pi) } else { # Long-winded way psi <- mymu + sqrt(2) * sigma * z (1 / sqrt(pi)) * (dpsi.dlambda.yjn(psi, lambda, mymu, sigma, derivative = 1)[, 2]^2 + (psi - mymu) * dpsi.dlambda.yjn(psi, lambda, mymu, sigma, derivative = 2)[, 3]) / sigma^2 } } gh.weight.yjn.12 <- function(z, lambda, mymu, sigma, derivmat = NULL) { if (length(derivmat)) { (-derivmat[, 2]) / (sqrt(pi) * sigma^2) } else { psi <- mymu + sqrt(2) * sigma * z (1 / sqrt(pi)) * (- dpsi.dlambda.yjn(psi, lambda, mymu, sigma, derivative = 1)[, 2]) / sigma^2 } } gh.weight.yjn.13 <- function(z, lambda, mymu, sigma, derivmat = NULL) { if (length(derivmat)) { sqrt(8 / pi) * (-derivmat[, 2]) * z / sigma^2 } else { psi <- mymu + sqrt(2) * sigma * z (1 / sqrt(pi)) * (-2 * dpsi.dlambda.yjn(psi, lambda, mymu, sigma, derivative = 1)[, 2]) * (psi - mymu) / sigma^3 } } glag.weight.yjn.11 <- function(z, lambda, mymu, sigma, derivmat = NULL) { if (length(derivmat)) { derivmat[, 4] * (derivmat[, 2]^2 + sqrt(2) * sigma * z * derivmat[, 3]) } else { psi <- mymu + sqrt(2) * sigma * z discontinuity <- -mymu / (sqrt(2) * sigma) (1 / (2 * sqrt((z-discontinuity^2)^2 + discontinuity^2))) * (1 / sqrt(pi)) * (dpsi.dlambda.yjn(psi, lambda, mymu, sigma, derivative = 1)[, 2]^2 + (psi - mymu) * dpsi.dlambda.yjn(psi, lambda, mymu, sigma, derivative = 2)[, 3]) / sigma^2 } } glag.weight.yjn.12 <- function(z, lambda, mymu, sigma, derivmat = NULL) { discontinuity <- -mymu / (sqrt(2) * sigma) if (length(derivmat)) { derivmat[, 4] * (-derivmat[, 2]) } else { psi <- mymu + sqrt(2) * sigma * z (1 / (2 * sqrt((z-discontinuity^2)^2 + discontinuity^2))) * (1 / sqrt(pi)) * (- dpsi.dlambda.yjn(psi, lambda, mymu, sigma, derivative = 1)[, 2]) / sigma^2 } } glag.weight.yjn.13 <- function(z, lambda, mymu, sigma, derivmat = NULL) { if (length(derivmat)) { derivmat[, 4] * (-derivmat[, 2]) * sqrt(8) * z } else { psi <- mymu + sqrt(2) * sigma * z discontinuity <- -mymu / (sqrt(2) * sigma) (1 / (2 * sqrt((z-discontinuity^2)^2 + discontinuity^2))) * (1 / sqrt(pi)) * (-2 * dpsi.dlambda.yjn(psi, lambda, mymu, sigma, derivative = 1)[, 2]) * (psi - mymu) / sigma^3 } } gleg.weight.yjn.11 <- function(z, lambda, mymu, sigma, 
derivmat = NULL) { if (length(derivmat)) { derivmat[, 4] * (derivmat[, 2]^2 + sqrt(2) * sigma * z * derivmat[, 3]) } else { psi <- mymu + sqrt(2) * sigma * z (exp(-z^2) / sqrt(pi)) * (dpsi.dlambda.yjn(psi, lambda, mymu, sigma, derivative = 1)[, 2]^2 + (psi - mymu) * dpsi.dlambda.yjn(psi, lambda, mymu, sigma, derivative = 2)[, 3]) / sigma^2 } } gleg.weight.yjn.12 <- function(z, lambda, mymu, sigma, derivmat = NULL) { if (length(derivmat)) { derivmat[, 4] * (- derivmat[, 2]) } else { psi <- mymu + sqrt(2) * sigma * z (exp(-z^2) / sqrt(pi)) * (- dpsi.dlambda.yjn(psi, lambda, mymu, sigma, derivative = 1)[, 2]) / sigma^2 } } gleg.weight.yjn.13 <- function(z, lambda, mymu, sigma, derivmat = NULL) { if (length(derivmat)) { derivmat[, 4] * (-derivmat[, 2]) * sqrt(8) * z } else { psi <- mymu + sqrt(2) * sigma * z (exp(-z^2) / sqrt(pi)) * (-2 * dpsi.dlambda.yjn(psi, lambda, mymu, sigma, derivative = 1)[, 2]) * (psi - mymu) / sigma^3 } } lms.yjn2.control <- function(save.weight = TRUE, ...) { list(save.weight=save.weight) } lms.yjn2 <- function(percentiles = c(25, 50, 75), zero = c(1, 3), llambda = "identity", lmu = "identity", lsigma = "loge", dfmu.init=4, dfsigma.init = 2, ilambda=1.0, isigma = NULL, yoffset = NULL, nsimEIM = 250) { llambda <- as.list(substitute(llambda)) elambda <- link2list(llambda) llambda <- attr(elambda, "function.name") lmu <- as.list(substitute(lmu)) emu <- link2list(lmu) lmu <- attr(emu, "function.name") lsigma <- as.list(substitute(lsigma)) esigma <- link2list(lsigma) lsigma <- attr(esigma, "function.name") if (!is.Numeric(ilambda)) stop("bad input for argument 'ilambda'") if (length(isigma) && !is.Numeric(isigma, positive = TRUE)) stop("bad input for argument 'isigma'") new("vglmff", blurb = c("LMS Quantile Regression (Yeo-Johnson transformation", " to normality)\n", "Links: ", namesof("lambda", link = llambda, earg = elambda), ", ", namesof("mu", link = lmu, earg = emu), ", ", namesof("sigma", link = lsigma, earg = esigma)), constraints = eval(substitute(expression({ constraints <- cm.zero.vgam(constraints, x, .zero, M) }), list(.zero = zero))), initialize = eval(substitute(expression({ w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1) predictors.names <- c(namesof("lambda", .llambda, earg = .elambda, short= TRUE), namesof("mu", .lmu, earg = .emu, short= TRUE), namesof("sigma", .lsigma, earg = .esigma, short= TRUE)) y.save <- y yoff <- if (is.Numeric( .yoffset)) .yoffset else -median(y) extra$yoffset <- yoff y <- y + yoff if (!length(etastart)) { lambda.init <- if (is.Numeric( .ilambda )) .ilambda else 1. 
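# Aside, not package code: yeo.johnson(, derivative = k) above returns the
# k-th partial derivative of psi(y, lambda) with respect to lambda; a
# central finite difference over lambda agrees with the analytic value on
# both the y >= 0 and y < 0 branches.
y.chk <- c(-2, 0.3, 1.7); lam <- 0.4; h <- 1e-6
fd <- (yeo.johnson(y.chk, lam + h) - yeo.johnson(y.chk, lam - h)) / (2 * h)
stopifnot(max(abs(fd - yeo.johnson(y.chk, lam, derivative = 1))) < 1e-6)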
y.tx <- yeo.johnson(y, lambda.init) fv.init = if (smoothok <- (length(unique(sort(x[, min(ncol(x), 2)]))) > 7)) { fit700 <- vsmooth.spline(x = x[, min(ncol(x), 2)], y = y.tx, w = w, df = .dfmu.init) c(predict(fit700, x = x[, min(ncol(x), 2)])$y) } else { rep(weighted.mean(y, w), length.out = n) } sigma.init <- if (!is.Numeric(.isigma)) { if (is.Numeric( .dfsigma.init) && smoothok) { fit710 = vsmooth.spline(x = x[, min(ncol(x), 2)], y = (y.tx - fv.init)^2, w = w, df = .dfsigma.init) sqrt(c(abs(predict(fit710, x = x[, min(ncol(x), 2)])$y))) } else { sqrt( sum( w * (y.tx - fv.init)^2 ) / sum(w) ) } } else .isigma etastart <- matrix(0, n, 3) etastart[, 1] <- theta2eta(lambda.init, .llambda, earg = .elambda) etastart[, 2] <- theta2eta(fv.init, .lmu, earg = .emu) etastart[, 3] <- theta2eta(sigma.init, .lsigma, earg = .esigma) } }), list(.llambda = llambda, .lmu = lmu, .lsigma = lsigma, .elambda = elambda, .emu = emu, .esigma = esigma, .dfmu.init = dfmu.init, .dfsigma.init = dfsigma.init, .ilambda = ilambda, .yoffset=yoffset, .isigma = isigma))), linkinv = eval(substitute(function(eta, extra = NULL) { eta[, 1] <- eta2theta(eta[, 1], .llambda, earg = .elambda) eta[, 3] <- eta2theta(eta[, 3], .lsigma, earg = .esigma) qtplot.lms.yjn(percentiles = .percentiles, eta = eta, yoffset = extra$yoff) }, list(.percentiles = percentiles, .esigma = esigma, .elambda = elambda, .llambda = llambda, .lsigma = lsigma))), last = eval(substitute(expression({ misc$link <- c(lambda = .llambda, mu = .lmu, sigma = .lsigma) misc$earg <- list(lambda = .elambda, mu = .emu, sigma = .esigma) misc$expected <- TRUE misc$nsimEIM <- .nsimEIM misc$percentiles <- .percentiles misc$true.mu <- FALSE # $fitted is not a true mu misc[["yoffset"]] <- extra$yoffset y <- y.save # Restore back the value; to be attached to object if (control$cdf) { post$cdf <- cdf.lms.yjn(y + misc$yoffset, eta0=matrix(c(lambda,mymu,sigma), ncol=3, dimnames = list(dimnames(x)[[1]], NULL))) } }), list(.percentiles = percentiles, .elambda = elambda, .emu = emu, .esigma = esigma, .nsimEIM=nsimEIM, .llambda = llambda, .lmu = lmu, .lsigma = lsigma ))), loglikelihood = eval(substitute( function(mu,y, w, residuals= FALSE, eta, extra = NULL) { lambda <- eta2theta(eta[, 1], .llambda, earg = .elambda) mu <- eta2theta(eta[, 2], .lmu, earg = .emu) sigma <- eta2theta(eta[, 3], .lsigma, earg = .esigma) psi <- yeo.johnson(y, lambda) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else sum(c(w) * (-log(sigma) - 0.5 * ((psi-mu)/sigma)^2 + (lambda-1) * sign(y) * log1p(abs(y)))) }, list( .elambda = elambda, .emu = emu, .esigma = esigma, .llambda = llambda, .lmu = lmu, .lsigma = lsigma ))), vfamily = c("lms.yjn2", "lmscreg"), deriv = eval(substitute(expression({ lambda <- eta2theta(eta[, 1], .llambda, earg = .elambda) mymu <- eta2theta(eta[, 2], .lmu, earg = .emu) sigma <- eta2theta(eta[, 3], .lsigma, earg = .esigma) dlambda.deta <- dtheta.deta(lambda, link = .llambda, earg = .elambda) dmu.deta <- dtheta.deta(mymu, link = .lmu, earg = .emu) dsigma.deta <- dtheta.deta(sigma, link = .lsigma, earg = .esigma) psi <- yeo.johnson(y, lambda) d1 <- yeo.johnson(y, lambda, deriv = 1) AA <- (psi - mymu) / sigma dl.dlambda <- -AA * d1 /sigma + sign(y) * log1p(abs(y)) dl.dmu <- AA / sigma dl.dsigma <- (AA^2 -1) / sigma dthetas.detas <- cbind(dlambda.deta, dmu.deta, dsigma.deta) c(w) * cbind(dl.dlambda, dl.dmu, dl.dsigma) * dthetas.detas }), list( .elambda = elambda, .emu = emu, .esigma = esigma, .llambda = llambda, .lmu = lmu, .lsigma = lsigma ))), weight = 
eval(substitute(expression({ run.varcov <- 0 ind1 <- iam(NA, NA, M = M, both = TRUE, diag = TRUE) for(ii in 1:( .nsimEIM )) { psi <- rnorm(n, mymu, sigma) ysim <- yeo.johnson(y=psi, lam=lambda, inv = TRUE) d1 <- yeo.johnson(ysim, lambda, deriv = 1) AA <- (psi - mymu) / sigma dl.dlambda <- -AA * d1 /sigma + sign(ysim) * log1p(abs(ysim)) dl.dmu <- AA / sigma dl.dsigma <- (AA^2 -1) / sigma rm(ysim) temp3 <- cbind(dl.dlambda, dl.dmu, dl.dsigma) run.varcov <- ((ii-1) * run.varcov + temp3[,ind1$row.index]*temp3[,ind1$col.index]) / ii } if (intercept.only) run.varcov <- matrix(colMeans(run.varcov), nr=n, nc=ncol(run.varcov), byrow = TRUE) wz <- run.varcov * dthetas.detas[,ind1$row] * dthetas.detas[,ind1$col] dimnames(wz) <- list(rownames(wz), NULL) # Remove the colnames c(w) * wz }), list(.lsigma = lsigma, .esigma = esigma, .elambda = elambda, .nsimEIM=nsimEIM, .llambda = llambda)))) } lms.yjn <- function(percentiles = c(25, 50, 75), zero = c(1, 3), llambda = "identity", lsigma = "loge", dfmu.init=4, dfsigma.init = 2, ilambda=1.0, isigma = NULL, rule = c(10, 5), yoffset = NULL, diagW = FALSE, iters.diagW = 6) { llambda <- as.list(substitute(llambda)) elambda <- link2list(llambda) llambda <- attr(elambda, "function.name") lsigma <- as.list(substitute(lsigma)) esigma <- link2list(lsigma) lsigma <- attr(esigma, "function.name") rule <- rule[1] # Number of points (common) for all the quadrature schemes if (rule != 5 && rule != 10) stop("only rule=5 or 10 is supported") new("vglmff", blurb = c("LMS Quantile Regression ", "(Yeo-Johnson transformation to normality)\n", "Links: ", namesof("lambda", link = llambda, earg = elambda), ", mu, ", namesof("sigma", link = lsigma, earg = esigma)), constraints = eval(substitute(expression({ constraints <- cm.zero.vgam(constraints, x, .zero, M) }), list(.zero = zero))), initialize = eval(substitute(expression({ w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1) predictors.names <- c(namesof("lambda", .llambda, earg = .elambda, short= TRUE), "mu", namesof("sigma", .lsigma, earg = .esigma, short= TRUE)) y.save <- y yoff <- if (is.Numeric( .yoffset )) .yoffset else -median(y) extra$yoffset <- yoff y <- y + yoff if (!length(etastart)) { lambda.init <- if (is.Numeric( .ilambda )) .ilambda else 1.0 y.tx <- yeo.johnson(y, lambda.init) if (smoothok <- (length(unique(sort(x[, min(ncol(x), 2)]))) > 7)) { fit700 <- vsmooth.spline(x = x[, min(ncol(x), 2)], y = y.tx, w = w, df = .dfmu.init) fv.init <- c(predict(fit700, x = x[, min(ncol(x), 2)])$y) } else { fv.init <- rep(weighted.mean(y, w), length.out = n) } sigma.init <- if (!is.Numeric( .isigma )) { if (is.Numeric( .dfsigma.init) && smoothok) { fit710 = vsmooth.spline(x = x[, min(ncol(x), 2)], y = (y.tx - fv.init)^2, w = w, df = .dfsigma.init) sqrt(c(abs(predict(fit710, x = x[, min(ncol(x), 2)])$y))) } else { sqrt( sum( w * (y.tx - fv.init)^2 ) / sum(w) ) } } else .isigma etastart <- cbind(theta2eta(lambda.init,.llambda, earg = .elambda), fv.init, theta2eta(sigma.init, .lsigma, earg = .esigma)) } }), list(.lsigma = lsigma, .llambda = llambda, .esigma = esigma, .elambda = elambda, .dfmu.init = dfmu.init, .dfsigma.init = dfsigma.init, .ilambda = ilambda, .yoffset=yoffset, .isigma = isigma))), linkinv = eval(substitute(function(eta, extra = NULL) { eta[, 1] <- eta2theta(eta[, 1], .llambda, earg = .elambda) eta[, 3] <- eta2theta(eta[, 3], .lsigma, earg = .esigma) qtplot.lms.yjn(percentiles = .percentiles, eta = eta, yoffset = extra$yoff) }, list(.percentiles = percentiles, .esigma = esigma, .elambda = elambda, .llambda = 
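# Aside, not package code: the simulated expected information computed in
# the @weight slot of lms.yjn2() above, in miniature.  For a N(mu, sigma^2)
# model the exact EIM per observation is diag(1/sigma^2, 2/sigma^2);
# averaging outer products of simulated scores recovers it to Monte Carlo
# accuracy, which is exactly the run.varcov idea.
set.seed(1)
mu <- 2; sigma <- 3; nsim <- 1e5
zz <- (rnorm(nsim, mu, sigma) - mu) / sigma
scores <- cbind(dl.dmu    = zz / sigma,           # score for mu
                dl.dsigma = (zz^2 - 1) / sigma)   # score for sigma
EIM <- crossprod(scores) / nsim
stopifnot(abs(EIM[1, 1] - 1 / sigma^2) < 0.02,
          abs(EIM[2, 2] - 2 / sigma^2) < 0.02)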
llambda, .lsigma = lsigma))), last = eval(substitute(expression({ misc$link <- c(lambda = .llambda, mu = "identity", sigma = .lsigma) misc$earg <- list(lambda = .elambda, mu = list(theta = NULL), sigma = .esigma) misc$percentiles <- .percentiles misc$true.mu <- FALSE # $fitted is not a true mu misc[["yoffset"]] <- extra$yoff y <- y.save # Restore back the value; to be attached to object if (control$cdf) { post$cdf = cdf.lms.yjn(y + misc$yoffset, eta0 = matrix(c(lambda,mymu,sigma), ncol = 3, dimnames = list(dimnames(x)[[1]], NULL))) } }), list(.percentiles = percentiles, .esigma = esigma, .elambda = elambda, .llambda = llambda, .lsigma = lsigma))), loglikelihood = eval(substitute( function(mu,y, w, residuals= FALSE, eta, extra = NULL) { lambda <- eta2theta(eta[, 1], .llambda, earg = .elambda) mu <- eta[, 2] sigma <- eta2theta(eta[, 3], .lsigma, earg = .esigma) psi <- yeo.johnson(y, lambda) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else sum(c(w) * (-log(sigma) - 0.5 * ((psi-mu)/sigma)^2 + (lambda-1) * sign(y) * log1p(abs(y)))) }, list( .esigma = esigma, .elambda = elambda, .lsigma = lsigma, .llambda = llambda))), vfamily = c("lms.yjn", "lmscreg"), deriv = eval(substitute(expression({ lambda <- eta2theta(eta[, 1], .llambda, earg = .elambda) mymu <- eta[, 2] sigma <- eta2theta(eta[, 3], .lsigma, earg = .esigma) psi <- yeo.johnson(y, lambda) d1 <- yeo.johnson(y, lambda, deriv = 1) AA <- (psi - mymu) / sigma dl.dlambda <- -AA * d1 /sigma + sign(y) * log1p(abs(y)) dl.dmu <- AA / sigma dl.dsigma <- (AA^2 -1) / sigma dlambda.deta <- dtheta.deta(lambda, link = .llambda, earg = .elambda) dsigma.deta <- dtheta.deta(sigma, link = .lsigma, earg = .esigma) cbind(dl.dlambda * dlambda.deta, dl.dmu, dl.dsigma * dsigma.deta) * c(w) }), list( .esigma = esigma, .elambda = elambda, .lsigma = lsigma, .llambda = llambda ))), weight = eval(substitute(expression({ wz <- matrix(0, n, 6) wz[,iam(2, 2, M)] <- 1 / sigma^2 wz[,iam(3, 3, M)] <- 2 * wz[,iam(2, 2, M)] # 2 / sigma^2 if (.rule == 10) { glag.abs = c(0.13779347054,0.729454549503, 1.80834290174,3.40143369785, 5.55249614006,8.33015274676, 11.8437858379,16.2792578314, 21.996585812, 29.9206970123) glag.wts = c(0.308441115765, 0.401119929155, 0.218068287612, 0.0620874560987, 0.00950151697517, 0.000753008388588, 2.82592334963e-5, 4.24931398502e-7, 1.83956482398e-9, 9.91182721958e-13) } else { glag.abs = c(0.2635603197180449, 1.4134030591060496, 3.5964257710396850, 7.0858100058570503, 12.6408008442729685) glag.wts = c(5.217556105826727e-01, 3.986668110832433e-01, 7.594244968176882e-02, 3.611758679927785e-03, 2.336997238583738e-05) } if (.rule == 10) { sgh.abs = c(0.03873852801690856, 0.19823332465268367, 0.46520116404433082, 0.81686197962535023, 1.23454146277833154, 1.70679833036403172, 2.22994030591819214, 2.80910399394755972, 3.46387269067033854, 4.25536209637269280) sgh.wts = c(9.855210713854302e-02, 2.086780884700499e-01, 2.520517066468666e-01, 1.986843323208932e-01,9.719839905023238e-02, 2.702440190640464e-02, 3.804646170194185e-03, 2.288859354675587e-04, 4.345336765471935e-06, 1.247734096219375e-08) } else { sgh.abs = c(0.1002421519682381, 0.4828139660462573, 1.0609498215257607, 1.7797294185202606, 2.6697603560875995) sgh.wts = c(0.2484061520284881475,0.3923310666523834311, 0.2114181930760276606, 0.0332466603513424663, 0.0008248533445158026) } if (.rule == 10) { gleg.abs = c(-0.973906528517, -0.865063366689, -0.679409568299, -0.433395394129, -0.148874338982) gleg.abs = c(gleg.abs, rev(-gleg.abs)) gleg.wts = c(0.0666713443087, 
0.149451349151, 0.219086362516, 0.26926671931, 0.295524224715) gleg.wts = c(gleg.wts, rev(gleg.wts)) } else { gleg.abs = c(-0.9061798459386643,-0.5384693101056820, 0, 0.5384693101056828, 0.9061798459386635) gleg.wts = c(0.2369268850561853,0.4786286704993680, 0.5688888888888889, 0.4786286704993661, 0.2369268850561916) } discontinuity = -mymu/(sqrt(2)*sigma) LL <- pmin(discontinuity, 0) UU <- pmax(discontinuity, 0) if (FALSE) { AA <- (UU-LL)/2 for(kk in 1:length(gleg.wts)) { temp1 <- AA * gleg.wts[kk] abscissae <- (UU+LL)/2 + AA * gleg.abs[kk] psi <- mymu + sqrt(2) * sigma * abscissae temp9 <- dpsi.dlambda.yjn(psi, lambda, mymu, sigma, derivative = 2) temp9 <- cbind(temp9, exp(-abscissae^2) / (sqrt(pi) * sigma^2)) wz[,iam(1, 1, M)] <- wz[,iam(1, 1, M)] + temp1 * gleg.weight.yjn.11(abscissae, lambda, mymu, sigma, temp9) wz[,iam(1, 2, M)] <- wz[,iam(1, 2, M)] + temp1 * gleg.weight.yjn.12(abscissae, lambda, mymu, sigma, temp9) wz[,iam(1, 3, M)] <- wz[,iam(1, 3, M)] + temp1 * gleg.weight.yjn.13(abscissae, lambda, mymu, sigma, temp9) } } else { temp9 = dotFortran(name = "yjngintf", as.double(LL), as.double(UU), as.double(gleg.abs), as.double(gleg.wts), as.integer(n), as.integer(length(gleg.abs)), as.double(lambda), as.double(mymu), as.double(sigma), answer=double(3*n), eps=as.double(1.0e-5))$ans dim(temp9) <- c(3,n) wz[,iam(1, 1, M)] <- temp9[1,] wz[,iam(1, 2, M)] <- temp9[2,] wz[,iam(1, 3, M)] <- temp9[3,] } for(kk in 1:length(sgh.wts)) { abscissae <- sign(-discontinuity) * sgh.abs[kk] psi <- mymu + sqrt(2) * sigma * abscissae # abscissae = z temp9 <- dpsi.dlambda.yjn(psi, lambda, mymu, sigma, derivative = 2) wz[,iam(1, 1, M)] <- wz[,iam(1, 1, M)] + sgh.wts[kk] * gh.weight.yjn.11(abscissae, lambda, mymu, sigma, temp9) wz[,iam(1, 2, M)] <- wz[,iam(1, 2, M)] + sgh.wts[kk] * gh.weight.yjn.12(abscissae, lambda, mymu, sigma, temp9) wz[,iam(1, 3, M)] <- wz[,iam(1, 3, M)] + sgh.wts[kk] * gh.weight.yjn.13(abscissae, lambda, mymu, sigma, temp9) } temp1 <- exp(-discontinuity^2) for(kk in 1:length(glag.wts)) { abscissae <- sign(discontinuity) * sqrt(glag.abs[kk]) + discontinuity^2 psi <- mymu + sqrt(2) * sigma * abscissae temp9 <- dpsi.dlambda.yjn(psi, lambda, mymu, sigma, derivative = 2) temp9 <- cbind(temp9, 1 / (2 * sqrt((abscissae-discontinuity^2)^2 + discontinuity^2) * sqrt(pi) * sigma^2)) temp7 <- temp1 * glag.wts[kk] wz[,iam(1, 1, M)] <- wz[,iam(1, 1, M)] + temp7 * glag.weight.yjn.11(abscissae, lambda, mymu, sigma, temp9) wz[,iam(1, 2, M)] <- wz[,iam(1, 2, M)] + temp7 * glag.weight.yjn.12(abscissae, lambda, mymu, sigma, temp9) wz[,iam(1, 3, M)] <- wz[,iam(1, 3, M)] + temp7 * glag.weight.yjn.13(abscissae, lambda, mymu, sigma, temp9) } wz[,iam(1, 1, M)] <- wz[,iam(1, 1, M)] * dlambda.deta^2 wz[,iam(1, 2, M)] <- wz[,iam(1, 2, M)] * dlambda.deta wz[,iam(1, 3, M)] <- wz[,iam(1, 3, M)] * dsigma.deta * dlambda.deta if ( .diagW && iter <= .iters.diagW) { wz[,iam(1, 2, M)] <- wz[,iam(1, 3, M)] <- 0 } wz[,iam(2, 3, M)] <- wz[,iam(2, 3, M)] * dsigma.deta wz[,iam(3, 3, M)] <- wz[,iam(3, 3, M)] * dsigma.deta^2 c(w) * wz }), list(.lsigma = lsigma, .esigma = esigma, .elambda = elambda, .rule=rule, .diagW=diagW, .iters.diagW=iters.diagW, .llambda = llambda)))) } lmscreg.control <- function(cdf = TRUE, at.arg = NULL, x0 = NULL, ...) 
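# Aside, not package code: the hard-coded abscissae in the @weight slot
# above are standard quadrature rules.  For instance the rule = 5
# Gauss-Legendre nodes and weights integrate polynomials of degree <= 9
# exactly over [-1, 1]:
gl.abs <- c(-0.9061798459386643, -0.5384693101056820, 0,
             0.5384693101056828,  0.9061798459386635)
gl.wts <- c(0.2369268850561853, 0.4786286704993680, 0.5688888888888889,
            0.4786286704993661, 0.2369268850561916)
stopifnot(all.equal(sum(gl.wts * gl.abs^2), 2/3),   # integral of x^2
          all.equal(sum(gl.wts * gl.abs^8), 2/9))   # integral of x^8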
{ if (!is.logical(cdf)) { warning("'cdf' is not logical; using TRUE instead") cdf <- TRUE } list(cdf = cdf, at.arg = at.arg, x0 = x0) } Wr1 <- function(r, w) ifelse(r <= 0, 1, w) Wr2 <- function(r, w) (r <= 0) * 1 + (r > 0) * w amlnormal.deviance <- function(mu, y, w, residuals = FALSE, eta, extra = NULL) { M <- length(extra$w.aml) if (M > 1) y <- matrix(y, extra$n, extra$M) devi <- cbind((y - mu)^2) if (residuals) { stop("not sure here") wz <- VGAM.weights.function(w = w, M = extra$M, n = extra$n) return((y - mu) * sqrt(wz) * matrix(extra$w.aml,extra$n,extra$M)) } else { all.deviances <- numeric(M) myresid <- matrix(y,extra$n,extra$M) - cbind(mu) for(ii in 1:M) all.deviances[ii] <- sum(c(w) * devi[, ii] * Wr1(myresid[, ii], w=extra$w.aml[ii])) } if (is.logical(extra$individual) && extra$individual) all.deviances else sum(all.deviances) } amlnormal <- function(w.aml = 1, parallel = FALSE, lexpectile = "identity", iexpectile = NULL, imethod = 1, digw = 4) { if (!is.Numeric(w.aml, positive = TRUE)) stop("argument 'w.aml' must be a vector of positive values") if (!is.Numeric(imethod, allowable.length = 1, integer.valued = TRUE, positive = TRUE) || imethod > 3) stop("argument 'imethod' must be 1, 2 or 3") lexpectile <- as.list(substitute(lexpectile)) eexpectile <- link2list(lexpectile) lexpectile <- attr(eexpectile, "function.name") if (length(iexpectile) && !is.Numeric(iexpectile)) stop("bad input for argument 'iexpectile'") new("vglmff", blurb = c("Asymmetric least squares quantile regression\n\n", "Links: ", namesof("expectile", link = lexpectile, earg = eexpectile)), constraints = eval(substitute(expression({ constraints <- cm.vgam(matrix(1, M, 1), x, .parallel, constraints) }), list( .parallel = parallel ))), deviance = function(mu, y, w, residuals = FALSE, eta, extra = NULL) { amlnormal.deviance(mu = mu, y = y, w = w, residuals = residuals, eta = eta, extra = extra) }, initialize = eval(substitute(expression({ extra$w.aml <- .w.aml temp5 <- w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1, out.wy = TRUE, maximize = TRUE) w <- temp5$w y <- temp5$y extra$M <- M <- length(extra$w.aml) # Recycle if necessary extra$n <- n extra$y.names <- y.names <- paste("w.aml = ", round(extra$w.aml, digits = .digw ), sep = "") predictors.names <- c(namesof( paste("expectile(",y.names,")", sep = ""), .lexpectile , earg = .eexpectile, tag = FALSE)) if (!length(etastart)) { mean.init <- if ( .imethod == 1) rep(median(y), length = n) else if ( .imethod == 2) rep(weighted.mean(y, w), length = n) else { junk <- lm.wfit(x = x, y = c(y), w = c(w)) junk$fitted } if (length( .iexpectile)) mean.init <- matrix( .iexpectile, n, M, byrow = TRUE) etastart <- matrix(theta2eta(mean.init, .lexpectile, earg = .eexpectile), n, M) } }), list( .lexpectile = lexpectile, .eexpectile = eexpectile, .iexpectile = iexpectile, .imethod = imethod, .digw = digw, .w.aml = w.aml ))), linkinv = eval(substitute(function(eta, extra = NULL) { ans <- eta <- as.matrix(eta) for(ii in 1:ncol(eta)) ans[, ii] <- eta2theta(eta[, ii], .lexpectile, earg = .eexpectile) dimnames(ans) <- list(dimnames(eta)[[1]], extra$y.names) ans }, list( .lexpectile = lexpectile, .eexpectile = eexpectile ))), last = eval(substitute(expression({ misc$link <- rep(.lexpectile, length = M) names(misc$link) <- extra$y.names misc$earg <- vector("list", M) for (ilocal in 1:M) misc$earg[[ilocal]] <- list(theta = NULL) names(misc$earg) <- names(misc$link) misc$parallel <- .parallel misc$expected <- TRUE extra$percentile <- numeric(M) misc$multipleResponses <- TRUE for(ii 
in 1:M) { use.w <- if (M > 1 && ncol(cbind(w)) == M) w[, ii] else w extra$percentile[ii] <- 100 * weighted.mean(myresid[, ii] <= 0, use.w) } names(extra$percentile) <- names(misc$link) extra$individual <- TRUE if (!(M > 1 && ncol(cbind(w)) == M)) { extra$deviance <- amlnormal.deviance(mu = mu, y = y, w = w, residuals = FALSE, eta = eta, extra = extra) names(extra$deviance) <- extra$y.names } }), list( .lexpectile = lexpectile, .eexpectile = eexpectile, .parallel = parallel ))), vfamily = c("amlnormal"), deriv = eval(substitute(expression({ mymu <- eta2theta(eta, .lexpectile, earg = .eexpectile) dexpectile.deta <- dtheta.deta(mymu, .lexpectile, earg = .eexpectile) myresid <- matrix(y,extra$n,extra$M) - cbind(mu) wor1 <- Wr2(myresid, w = matrix(extra$w.aml, extra$n, extra$M, byrow = TRUE)) c(w) * myresid * wor1 * dexpectile.deta }), list( .lexpectile = lexpectile, .eexpectile = eexpectile ))), weight = eval(substitute(expression({ wz <- c(w) * wor1 * dexpectile.deta^2 wz }), list( .lexpectile = lexpectile, .eexpectile = eexpectile )))) } amlpoisson.deviance <- function(mu, y, w, residuals = FALSE, eta, extra = NULL) { M <- length(extra$w.aml) if (M > 1) y <- matrix(y,extra$n,extra$M) nz <- y > 0 devi <- cbind(-(y - mu)) devi[nz] <- devi[nz] + y[nz] * log(y[nz]/mu[nz]) if (residuals) { stop("not sure here") return(sign(y - mu) * sqrt(2 * abs(devi) * w) * matrix(extra$w,extra$n,extra$M)) } else { all.deviances <- numeric(M) myresid <- matrix(y,extra$n,extra$M) - cbind(mu) for(ii in 1:M) all.deviances[ii] <- 2 * sum(c(w) * devi[, ii] * Wr1(myresid[, ii], w=extra$w.aml[ii])) } if (is.logical(extra$individual) && extra$individual) all.deviances else sum(all.deviances) } amlpoisson <- function(w.aml = 1, parallel = FALSE, imethod = 1, digw = 4, link = "loge") { if (!is.Numeric(w.aml, positive = TRUE)) stop("'w.aml' must be a vector of positive values") link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") new("vglmff", blurb = c("Poisson expectile regression by", " asymmetric maximum likelihood estimation\n\n", "Link: ", namesof("expectile", link, earg = earg)), constraints = eval(substitute(expression({ constraints <- cm.vgam(matrix(1, M, 1), x, .parallel, constraints) }), list( .parallel = parallel ))), deviance = function(mu, y, w, residuals = FALSE, eta, extra = NULL) { amlpoisson.deviance(mu = mu, y = y, w = w, residuals = residuals, eta = eta, extra = extra) }, initialize = eval(substitute(expression({ extra$w.aml <- .w.aml temp5 <- w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1, out.wy = TRUE, maximize = TRUE) w <- temp5$w y <- temp5$y extra$M <- M <- length(extra$w.aml) # Recycle if necessary extra$n <- n extra$y.names <- y.names <- paste("w.aml = ", round(extra$w.aml, digits = .digw ), sep = "") extra$individual <- FALSE predictors.names <- c(namesof(paste("expectile(",y.names,")", sep = ""), .link , earg = .earg , tag = FALSE)) if (!length(etastart)) { mean.init <- if ( .imethod == 2) rep(median(y), length = n) else if ( .imethod == 1) rep(weighted.mean(y, w), length = n) else { junk = lm.wfit(x = x, y = c(y), w = c(w)) abs(junk$fitted) } etastart <- matrix(theta2eta(mean.init, .link , earg = .earg ), n, M) } }), list( .link = link, .earg = earg, .imethod = imethod, .digw = digw, .w.aml = w.aml ))), linkinv = eval(substitute(function(eta, extra = NULL) { mu.ans <- eta <- as.matrix(eta) for(ii in 1:ncol(eta)) mu.ans[, ii] <- eta2theta(eta[, ii], .link , earg = .earg ) dimnames(mu.ans) <- list(dimnames(eta)[[1]], extra$y.names) mu.ans }, 
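# Aside, not package code: the asymmetric least-squares estimate that
# amlnormal() above produces, reduced to the scalar case.  The w-expectile
# is a fixed point of a weighted mean whose weights come from Wr1()
# defined above:
expectile1 <- function(y, w.aml = 1, tol = 1e-10) {
  mu <- mean(y)
  repeat {
    ww <- Wr1(y - mu, w.aml)        # weight 1 below mu, w.aml above it
    mu.new <- weighted.mean(y, ww)
    if (abs(mu.new - mu) < tol) return(mu.new)
    mu <- mu.new
  }
}
set.seed(2)
yy <- rnorm(500)
stopifnot(all.equal(expectile1(yy, w.aml = 1), mean(yy)))  # w.aml = 1: mean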
list( .link = link, .earg = earg ))), last = eval(substitute(expression({ misc$multipleResponses <- TRUE misc$expected <- TRUE misc$parallel <- .parallel misc$link <- rep(.link , length = M) names(misc$link) <- extra$y.names misc$earg <- vector("list", M) for (ilocal in 1:M) misc$earg[[ilocal]] <- list(theta = NULL) names(misc$earg) <- names(misc$link) extra$percentile <- numeric(M) for(ii in 1:M) extra$percentile[ii] <- 100 * weighted.mean(myresid[, ii] <= 0, w) names(extra$percentile) <- names(misc$link) extra$individual <- TRUE extra$deviance <- amlpoisson.deviance(mu = mu, y = y, w = w, residuals = FALSE, eta = eta, extra = extra) names(extra$deviance) <- extra$y.names }), list( .link = link, .earg = earg, .parallel = parallel ))), linkfun = eval(substitute(function(mu, extra = NULL) { theta2eta(mu, link = .link , earg = .earg ) }, list( .link = link, .earg = earg ))), vfamily = c("amlpoisson"), deriv = eval(substitute(expression({ mymu <- eta2theta(eta, .link , earg = .earg ) dexpectile.deta <- dtheta.deta(mymu, .link , earg = .earg ) myresid <- matrix(y,extra$n,extra$M) - cbind(mu) wor1 <- Wr2(myresid, w = matrix(extra$w.aml, extra$n, extra$M, byrow = TRUE)) c(w) * myresid * wor1 * (dexpectile.deta / mymu) }), list( .link = link, .earg = earg ))), weight = eval(substitute(expression({ use.mu <- mymu use.mu[use.mu < .Machine$double.eps^(3/4)] <- .Machine$double.eps^(3/4) wz <- c(w) * wor1 * use.mu * (dexpectile.deta / mymu)^2 wz }), list( .link = link, .earg = earg )))) } amlbinomial.deviance <- function(mu, y, w, residuals = FALSE, eta, extra = NULL) { M <- length(extra$w.aml) if (M > 1) y <- matrix(y,extra$n,extra$M) devy <- y nz <- y != 0 devy[nz] <- y[nz] * log(y[nz]) nz <- (1 - y) != 0 devy[nz] <- devy[nz] + (1 - y[nz]) * log1p(-y[nz]) devmu <- y * log(mu) + (1 - y) * log1p(-mu) if (any(small <- mu * (1 - mu) < .Machine$double.eps)) { warning("fitted values close to 0 or 1") smu <- mu[small] sy <- y[small] smu <- ifelse(smu < .Machine$double.eps, .Machine$double.eps, smu) onemsmu <- ifelse((1 - smu) < .Machine$double.eps, .Machine$double.eps, 1 - smu) devmu[small] <- sy * log(smu) + (1 - sy) * log(onemsmu) } devi <- 2 * (devy - devmu) if (residuals) { stop("not sure here") return(sign(y - mu) * sqrt(abs(devi) * w)) } else { all.deviances <- numeric(M) myresid <- matrix(y,extra$n,extra$M) - matrix(mu,extra$n,extra$M) for(ii in 1:M) all.deviances[ii] <- sum(c(w) * devi[, ii] * Wr1(myresid[, ii], w=extra$w.aml[ii])) } if (is.logical(extra$individual) && extra$individual) all.deviances else sum(all.deviances) } amlbinomial <- function(w.aml = 1, parallel = FALSE, digw = 4, link = "logit") { if (!is.Numeric(w.aml, positive = TRUE)) stop("'w.aml' must be a vector of positive values") link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") new("vglmff", blurb = c("Logistic expectile regression by ", "asymmetric maximum likelihood estimation\n\n", "Link: ", namesof("expectile", link, earg = earg)), constraints = eval(substitute(expression({ constraints <- cm.vgam(matrix(1, M, 1), x, .parallel, constraints) }), list( .parallel = parallel ))), deviance = function(mu, y, w, residuals = FALSE, eta, extra = NULL) { amlbinomial.deviance(mu = mu, y = y, w = w, residuals = residuals, eta = eta, extra = extra) }, initialize = eval(substitute(expression({ { NCOL <- function (x) if (is.array(x) && length(dim(x)) > 1 || is.data.frame(x)) ncol(x) else as.integer(1) if (NCOL(y) == 1) { if (is.factor(y)) y <- y != levels(y)[1] nn <- rep(1, n) if (!all(y >= 0 & y <= 
1)) stop("response values must be in [0, 1]") if (!length(mustart) && !length(etastart)) mustart <- (0.5 + w * y) / (1 + w) no.successes <- w * y if (any(abs(no.successes - round(no.successes)) > 0.001)) stop("Number of successes must be integer-valued") } else if (NCOL(y) == 2) { if (any(abs(y - round(y)) > 0.001)) stop("Count data must be integer-valued") nn <- y[, 1] + y[, 2] y <- ifelse(nn > 0, y[, 1]/nn, 0) w <- w * nn if (!length(mustart) && !length(etastart)) mustart <- (0.5 + nn * y) / (1 + nn) } else stop("Response not of the right form") } extra$w.aml <- .w.aml if (ncol(y <- cbind(y)) != 1) stop("response must be a vector or a one-column matrix") extra$M <- M <- length(extra$w.aml) # Recycle if necessary extra$n <- n extra$y.names <- y.names <- paste("w.aml = ", round(extra$w.aml, digits = .digw ), sep = "") extra$individual <- FALSE predictors.names <- c(namesof(paste("expectile(", y.names, ")", sep = ""), .link , earg = .earg , tag = FALSE)) if (!length(etastart)) { etastart <- matrix(theta2eta(mustart, .link , earg = .earg ), n, M) mustart <- NULL } }), list( .link = link, .earg = earg, .digw = digw, .w.aml = w.aml ))), linkinv = eval(substitute(function(eta, extra = NULL) { mu.ans <- eta <- as.matrix(eta) for(ii in 1:ncol(eta)) mu.ans[, ii] <- eta2theta(eta[, ii], .link , earg = .earg ) dimnames(mu.ans) <- list(dimnames(eta)[[1]], extra$y.names) mu.ans }, list( .link = link, .earg = earg ))), last = eval(substitute(expression({ misc$link <- rep(.link , length = M) names(misc$link) <- extra$y.names misc$earg <- vector("list", M) for (ilocal in 1:M) misc$earg[[ilocal]] <- list(theta = NULL) names(misc$earg) <- names(misc$link) misc$parallel <- .parallel misc$expected <- TRUE extra$percentile <- numeric(M) for(ii in 1:M) extra$percentile[ii] <- 100 * weighted.mean(myresid[, ii] <= 0, w) names(extra$percentile) <- names(misc$link) extra$individual <- TRUE extra$deviance <- amlbinomial.deviance(mu = mu, y = y, w = w, residuals = FALSE, eta = eta, extra = extra) names(extra$deviance) <- extra$y.names }), list( .link = link, .earg = earg, .parallel = parallel ))), linkfun = eval(substitute(function(mu, extra = NULL) { theta2eta(mu, link = .link , earg = .earg ) }, list( .link = link, .earg = earg ))), vfamily = c("amlbinomial"), deriv = eval(substitute(expression({ mymu <- eta2theta(eta, .link , earg = .earg ) use.mu <- mymu use.mu[use.mu < .Machine$double.eps^(3/4)] = .Machine$double.eps^(3/4) dexpectile.deta <- dtheta.deta(use.mu, .link , earg = .earg ) myresid <- matrix(y,extra$n,extra$M) - cbind(mu) wor1 <- Wr2(myresid, w = matrix(extra$w.aml, extra$n, extra$M, byrow = TRUE)) c(w) * myresid * wor1 * (dexpectile.deta / (use.mu * (1-use.mu))) }), list( .link = link, .earg = earg ))), weight = eval(substitute(expression({ wz <- c(w) * wor1 * (dexpectile.deta^2 / (use.mu * (1 - use.mu))) wz }), list( .link = link, .earg = earg)))) } amlexponential.deviance <- function(mu, y, w, residuals = FALSE, eta, extra = NULL) { M <- length(extra$w.aml) if (M > 1) y <- matrix(y,extra$n,extra$M) devy <- cbind(-log(y) - 1) devi <- cbind(-log(mu) - y / mu) if (residuals) { stop("not sure here") return(sign(y - mu) * sqrt(2 * abs(devi) * w) * matrix(extra$w,extra$n,extra$M)) } else { all.deviances <- numeric(M) myresid <- matrix(y,extra$n,extra$M) - cbind(mu) for(ii in 1:M) all.deviances[ii] = 2 * sum(c(w) * (devy[, ii] - devi[, ii]) * Wr1(myresid[, ii], w=extra$w.aml[ii])) } if (is.logical(extra$individual) && extra$individual) all.deviances else sum(all.deviances) } amlexponential <- 
function(w.aml = 1, parallel = FALSE, imethod = 1, digw = 4, link = "loge") { if (!is.Numeric(w.aml, positive = TRUE)) stop("'w.aml' must be a vector of positive values") if (!is.Numeric(imethod, allowable.length = 1, integer.valued = TRUE, positive = TRUE) || imethod > 3) stop("argument 'imethod' must be 1, 2 or 3") link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") y.names <- paste("w.aml = ", round(w.aml, digits = digw), sep = "") predictors.names <- c(namesof( paste("expectile(", y.names,")", sep = ""), link, earg = earg)) predictors.names <- paste(predictors.names, collapse = ", ") new("vglmff", blurb = c("Exponential expectile regression by", " asymmetric maximum likelihood estimation\n\n", "Link: ", predictors.names), constraints = eval(substitute(expression({ constraints <- cm.vgam(matrix(1, M, 1), x, .parallel, constraints) }), list( .parallel = parallel ))), deviance = function(mu, y, w, residuals = FALSE, eta, extra = NULL) { amlexponential.deviance(mu = mu, y = y, w = w, residuals = residuals, eta = eta, extra = extra) }, initialize = eval(substitute(expression({ extra$w.aml <- .w.aml temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = 1, ncol.y.max = 1, out.wy = TRUE, maximize = TRUE) w <- temp5$w y <- temp5$y extra$M <- M <- length(extra$w.aml) # Recycle if necessary extra$n <- n extra$y.names <- y.names <- paste("w.aml = ", round(extra$w.aml, digits = .digw ), sep = "") extra$individual = FALSE predictors.names <- c(namesof( paste("expectile(", y.names, ")", sep = ""), .link , earg = .earg , tag = FALSE)) if (!length(etastart)) { mean.init <- if ( .imethod == 1) rep(median(y), length = n) else if ( .imethod == 2) rep(weighted.mean(y, w), length = n) else { 1 / (y + 1) } etastart <- matrix(theta2eta(mean.init, .link , earg = .earg ), n, M) } }), list( .link = link, .earg = earg, .imethod = imethod, .digw = digw, .w.aml = w.aml ))), linkinv = eval(substitute(function(eta, extra = NULL) { mu.ans <- eta <- as.matrix(eta) for(ii in 1:ncol(eta)) mu.ans[, ii] <- eta2theta(eta[, ii], .link , earg = .earg ) dimnames(mu.ans) <- list(dimnames(eta)[[1]], extra$y.names) mu.ans }, list( .link = link, .earg = earg ))), last = eval(substitute(expression({ misc$multipleResponses <- TRUE misc$expected <- TRUE misc$parallel <- .parallel misc$link <- rep(.link , length = M) names(misc$link) <- extra$y.names misc$earg <- vector("list", M) for (ilocal in 1:M) misc$earg[[ilocal]] <- list(theta = NULL) names(misc$earg) <- names(misc$link) extra$percentile <- numeric(M) for(ii in 1:M) extra$percentile[ii] <- 100 * weighted.mean(myresid[, ii] <= 0, w) names(extra$percentile) <- names(misc$link) extra$individual <- TRUE extra$deviance = amlexponential.deviance(mu = mu, y = y, w = w, residuals = FALSE, eta = eta, extra = extra) names(extra$deviance) <- extra$y.names }), list( .link = link, .earg = earg, .parallel = parallel ))), linkfun = eval(substitute(function(mu, extra = NULL) { theta2eta(mu, link = .link , earg = .earg ) }, list( .link = link, .earg = earg ))), vfamily = c("amlexponential"), deriv = eval(substitute(expression({ mymu <- eta2theta(eta, .link , earg = .earg ) bigy <- matrix(y,extra$n,extra$M) dl.dmu <- (bigy - mymu) / mymu^2 dmu.deta <- dtheta.deta(mymu, .link , earg = .earg ) myresid <- bigy - cbind(mymu) wor1 <- Wr2(myresid, w = matrix(extra$w.aml, extra$n, extra$M, byrow = TRUE)) c(w) * wor1 * dl.dmu * dmu.deta }), list( .link = link, .earg = earg ))), weight = eval(substitute(expression({ ned2l.dmu2 <- 1 / mymu^2 wz <- 
c(w) * wor1 * ned2l.dmu2 * dmu.deta^2
    wz
  }), list( .link = link, .earg = earg ))))
}


rho1check <- function(u, tau = 0.5)
  u * (tau - (u <= 0))


dalap <- function(x, location = 0, scale = 1, tau = 0.5,
                  kappa = sqrt(tau/(1-tau)), log = FALSE) {
  if (!is.logical(log.arg <- log) || length(log) != 1)
    stop("bad input for argument 'log'")
  rm(log)

  NN <- max(length(x), length(location), length(scale), length(kappa))
  location <- rep(location, length.out = NN)
  scale <- rep(scale, length.out = NN)
  kappa <- rep(kappa, length.out = NN)
  x <- rep(x, length.out = NN)
  tau <- rep(tau, length.out = NN)

  logconst <- 0.5 * log(2) - log(scale) + log(kappa) - log1p(kappa^2)
  exponent <- -(sqrt(2) / scale) * abs(x - location) *
              ifelse(x >= location, kappa, 1/kappa)

  indexTF <- (scale > 0) & (tau > 0) & (tau < 1) & (kappa > 0)
  logconst[!indexTF] <- NaN
  if (log.arg) logconst + exponent else exp(logconst + exponent)
}


ralap <- function(n, location = 0, scale = 1, tau = 0.5,
                  kappa = sqrt(tau/(1-tau))) {
  use.n <- if ((length.n <- length(n)) > 1) length.n else
           if (!is.Numeric(n, integer.valued = TRUE,
                           allowable.length = 1, positive = TRUE))
             stop("bad input for argument 'n'") else n

  location <- rep(location, length.out = use.n)
  scale <- rep(scale, length.out = use.n)
  tau <- rep(tau, length.out = use.n)
  kappa <- rep(kappa, length.out = use.n)

  ans <- location + scale *
         log(runif(use.n)^kappa / runif(use.n)^(1/kappa)) / sqrt(2)
  indexTF <- (scale > 0) & (tau > 0) & (tau < 1) & (kappa > 0)
  ans[!indexTF] <- NaN
  ans
}


palap <- function(q, location = 0, scale = 1, tau = 0.5,
                  kappa = sqrt(tau/(1-tau))) {
  NN <- max(length(q), length(location), length(scale), length(kappa))
  location <- rep(location, length.out = NN)
  scale <- rep(scale, length.out = NN)
  kappa <- rep(kappa, length.out = NN)
  q <- rep(q, length.out = NN)
  tau <- rep(tau, length.out = NN)

  exponent <- -(sqrt(2) / scale) * abs(q - location) *
              ifelse(q >= location, kappa, 1/kappa)
  temp5 <- exp(exponent) / (1 + kappa^2)
  ans <- 1 - temp5
  index1 <- (q < location)
  ans[index1] <- (kappa[index1])^2 * temp5[index1]

  indexTF <- (scale > 0) & (tau > 0) & (tau < 1) & (kappa > 0)
  ans[!indexTF] <- NaN
  ans
}


qalap <- function(p, location = 0, scale = 1, tau = 0.5,
                  kappa = sqrt(tau / (1 - tau))) {
  NN <- max(length(p), length(location), length(scale), length(kappa))
  location <- rep(location, length.out = NN)
  scale <- rep(scale, length.out = NN)
  kappa <- rep(kappa, length.out = NN)
  p <- rep(p, length.out = NN)
  tau <- rep(tau, length.out = NN)

  ans <- p
  temp5 <- kappa^2 / (1 + kappa^2)
  index1 <- (p <= temp5)
  exponent <- p[index1] / temp5[index1]
  ans[index1] <- location[index1] + (scale[index1] * kappa[index1]) *
                 log(exponent) / sqrt(2)
  ans[!index1] <- location[!index1] - (scale[!index1] / kappa[!index1]) *
                  (log1p((kappa[!index1])^2) + log1p(-p[!index1])) / sqrt(2)

  indexTF <- (scale > 0) & (tau > 0) & (tau < 1) & (kappa > 0) &
             (p >= 0) & (p <= 1)
  ans[!indexTF] <- NaN
  ans[p == 0 & indexTF] <- -Inf
  ans[p == 1 & indexTF] <- Inf
  ans
}
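# A minimal sanity-check sketch for the d/p/q/r asymmetric-Laplace
# functions above.  The seed, 'tau' value and sample size are arbitrary
# illustration choices, and the block is guarded by 'if (FALSE)' so
# that nothing runs when this file is sourced as part of the package.
if (FALSE) {
  set.seed(1)
  tau <- 0.25
  pvec <- c(0.1, 0.5, 0.9)

  # palap() and qalap() should be inverses of each other:
  max(abs(palap(qalap(pvec, tau = tau), tau = tau) - pvec))     # ~ 0

  # dalap() should integrate to 1:
  integrate(dalap, lower = -Inf, upper = Inf, tau = tau)$value  # ~ 1

  # Sample quantiles of ralap() draws should approximate qalap():
  quantile(ralap(1e5, tau = tau), probs = pvec) - qalap(pvec, tau = tau)
}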
rloglap <- function(n, location.ald = 0, scale.ald = 1, tau = 0.5,
                    kappa = sqrt(tau/(1-tau))) {
  use.n <- if ((length.n <- length(n)) > 1) length.n else
           if (!is.Numeric(n, integer.valued = TRUE,
                           allowable.length = 1, positive = TRUE))
             stop("bad input for argument 'n'") else n

  location.ald <- rep(location.ald, length.out = use.n)
  scale.ald <- rep(scale.ald, length.out = use.n)
  tau <- rep(tau, length.out = use.n)
  kappa <- rep(kappa, length.out = use.n)

  ans <- exp(location.ald) *
         (runif(use.n)^kappa / runif(use.n)^(1/kappa))^(scale.ald / sqrt(2))
  indexTF <- (scale.ald > 0) & (tau > 0) & (tau < 1) & (kappa > 0)
  ans[!indexTF] <- NaN
  ans
}


dloglap <- function(x, location.ald = 0, scale.ald = 1, tau = 0.5,
                    kappa = sqrt(tau/(1-tau)), log = FALSE) {
  if (!is.logical(log.arg <- log) || length(log) != 1)
    stop("bad input for argument 'log'")
  rm(log)

  NN <- max(length(x), length(location.ald), length(scale.ald),
            length(kappa))
  location <- rep(location.ald, length.out = NN)
  scale <- rep(scale.ald, length.out = NN)
  kappa <- rep(kappa, length.out = NN)
  x <- rep(x, length.out = NN)
  tau <- rep(tau, length.out = NN)

  Alpha <- sqrt(2) * kappa / scale.ald
  Beta  <- sqrt(2) / (scale.ald * kappa)
  Delta <- exp(location.ald)
  exponent <- ifelse(x >= Delta, -(Alpha+1), (Beta-1)) *
              (log(x) - location.ald)
  logdensity <- -location.ald + log(Alpha) + log(Beta) -
                log(Alpha + Beta) + exponent
  indexTF <- (scale.ald > 0) & (tau > 0) & (tau < 1) & (kappa > 0)
  logdensity[!indexTF] <- NaN
  logdensity[x < 0 & indexTF] <- -Inf
  if (log.arg) logdensity else exp(logdensity)
}


qloglap <- function(p, location.ald = 0, scale.ald = 1, tau = 0.5,
                    kappa = sqrt(tau/(1-tau))) {
  NN <- max(length(p), length(location.ald), length(scale.ald),
            length(kappa))
  location <- rep(location.ald, length.out = NN)
  scale <- rep(scale.ald, length.out = NN)
  kappa <- rep(kappa, length.out = NN)
  p <- rep(p, length.out = NN)
  tau <- rep(tau, length.out = NN)

  Alpha <- sqrt(2) * kappa / scale.ald
  Beta  <- sqrt(2) / (scale.ald * kappa)
  Delta <- exp(location.ald)
  temp9 <- Alpha + Beta

  ans <- Delta * (p * temp9 / Alpha)^(1/Beta)
  index1 <- (p > Alpha / temp9)
  ans[index1] <- (Delta * ((1-p) * temp9 / Beta)^(-1/Alpha))[index1]
  ans[p == 0] <- 0
  ans[p == 1] <- Inf
  indexTF <- (scale.ald > 0) & (tau > 0) & (tau < 1) & (kappa > 0) &
             (p >= 0) & (p <= 1)
  ans[!indexTF] <- NaN
  ans
}


ploglap <- function(q, location.ald = 0, scale.ald = 1, tau = 0.5,
                    kappa = sqrt(tau/(1-tau))) {
  NN <- max(length(q), length(location.ald), length(scale.ald),
            length(kappa))
  location <- rep(location.ald, length.out = NN)
  scale <- rep(scale.ald, length.out = NN)
  kappa <- rep(kappa, length.out = NN)
  q <- rep(q, length.out = NN)
  tau <- rep(tau, length.out = NN)

  Alpha <- sqrt(2) * kappa / scale.ald
  Beta  <- sqrt(2) / (scale.ald * kappa)
  Delta <- exp(location.ald)
  temp9 <- Alpha + Beta

  ans <- (Alpha / temp9) * (q / Delta)^(Beta)
  ans[q <= 0] <- 0
  index1 <- (q >= Delta)
  ans[index1] <- (1 - (Beta/temp9) * (Delta/q)^(Alpha))[index1]

  indexTF <- (scale.ald > 0) & (tau > 0) & (tau < 1) & (kappa > 0)
  ans[!indexTF] <- NaN
  ans
}
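# A rough consistency check for the log-Laplace functions just defined
# (illustrative only; the parameter values are arbitrary and the block
# is never executed when the package is built).
if (FALSE) {
  pvec <- c(0.05, 0.25, 0.5, 0.75, 0.95)
  qq <- qloglap(pvec, location.ald = 1, scale.ald = 0.5, tau = 0.3)
  ploglap(qq, location.ald = 1, scale.ald = 0.5, tau = 0.3) - pvec  # ~ 0
}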
rlogitlap <- function(n, location.ald = 0, scale.ald = 1, tau = 0.5,
                      kappa = sqrt(tau/(1-tau))) {
  logit(ralap(n = n, location = location.ald, scale = scale.ald,
              tau = tau, kappa = kappa),
        inverse = TRUE)  # earg = earg
}


dlogitlap <- function(x, location.ald = 0, scale.ald = 1, tau = 0.5,
                      kappa = sqrt(tau/(1-tau)), log = FALSE) {
  if (!is.logical(log.arg <- log) || length(log) != 1)
    stop("bad input for argument 'log'")
  rm(log)

  NN <- max(length(x), length(location.ald), length(scale.ald),
            length(kappa))
  location <- rep(location.ald, length.out = NN)
  scale <- rep(scale.ald, length.out = NN)
  kappa <- rep(kappa, length.out = NN)
  x <- rep(x, length.out = NN)
  tau <- rep(tau, length.out = NN)

  Alpha <- sqrt(2) * kappa / scale.ald
  Beta  <- sqrt(2) / (scale.ald * kappa)
  Delta <- logit(location.ald, inverse = TRUE)  # earg = earg
  exponent <- ifelse(x >= Delta, -Alpha, Beta) * (logit(x) -  # earg = earg
              location.ald)
  logdensity <- log(Alpha) + log(Beta) - log(Alpha + Beta) -
                log(x) - log1p(-x) + exponent
  indexTF <- (scale.ald > 0) & (tau > 0) & (tau < 1) & (kappa > 0)
  logdensity[!indexTF] <- NaN
  logdensity[x < 0 & indexTF] <- -Inf
  logdensity[x > 1 & indexTF] <- -Inf
  if (log.arg) logdensity else exp(logdensity)
}


qlogitlap <- function(p, location.ald = 0, scale.ald = 1, tau = 0.5,
                      kappa = sqrt(tau/(1-tau))) {
  qqq <- qalap(p = p, location = location.ald, scale = scale.ald,
               tau = tau, kappa = kappa)
  ans <- logit(qqq, inverse = TRUE)  # earg = earg
  ans[(p < 0) | (p > 1)] <- NaN
  ans[p == 0] <- 0
  ans[p == 1] <- 1
  ans
}


plogitlap <- function(q, location.ald = 0, scale.ald = 1, tau = 0.5,
                      kappa = sqrt(tau/(1-tau))) {
  NN <- max(length(q), length(location.ald), length(scale.ald),
            length(kappa))
  location.ald <- rep(location.ald, length.out = NN)
  scale.ald <- rep(scale.ald, length.out = NN)
  kappa <- rep(kappa, length.out = NN)
  q <- rep(q, length.out = NN)
  tau <- rep(tau, length.out = NN)

  indexTF <- (q > 0) & (q < 1)
  qqq <- logit(q[indexTF])  # earg = earg
  ans <- q
  ans[indexTF] <- palap(q = qqq, location = location.ald[indexTF],
                        scale = scale.ald[indexTF],
                        tau = tau[indexTF], kappa = kappa[indexTF])
  ans[q >= 1] <- 1
  ans[q <= 0] <- 0
  ans
}


rprobitlap <- function(n, location.ald = 0, scale.ald = 1, tau = 0.5,
                       kappa = sqrt(tau/(1-tau))) {
  probit(ralap(n = n, location = location.ald, scale = scale.ald,
               tau = tau, kappa = kappa),
         inverse = TRUE)
}


dprobitlap <- function(x, location.ald = 0, scale.ald = 1, tau = 0.5,
                       kappa = sqrt(tau/(1-tau)), log = FALSE,
                       meth2 = TRUE) {
  if (!is.logical(log.arg <- log) || length(log) != 1)
    stop("bad input for argument 'log'")
  rm(log)

  NN <- max(length(x), length(location.ald), length(scale.ald),
            length(kappa))
  location.ald <- rep(location.ald, length.out = NN)
  scale.ald <- rep(scale.ald, length.out = NN)
  kappa <- rep(kappa, length.out = NN)
  x <- rep(x, length.out = NN)
  tau <- rep(tau, length.out = NN)

  logdensity <- x * NaN
  index1 <- (x > 0) & (x < 1)
  indexTF <- (scale.ald > 0) & (tau > 0) & (tau < 1) & (kappa > 0)
  if (meth2) {
    dx.dy <- x
    use.x <- probit(x[index1])  # earg = earg
    logdensity[index1] <- dalap(x = use.x,
                                location = location.ald[index1],
                                scale = scale.ald[index1],
                                tau = tau[index1],
                                kappa = kappa[index1], log = TRUE)
  } else {
    Alpha <- sqrt(2) * kappa / scale.ald
    Beta  <- sqrt(2) / (scale.ald * kappa)
    Delta <- pnorm(location.ald)
    use.x <- qnorm(x)  # qnorm(x[index1])
    log.dy.dw <- dnorm(use.x, log = TRUE)
    exponent <- ifelse(x >= Delta, -Alpha, Beta) * (use.x -
                location.ald) - log.dy.dw
    logdensity[index1] <- (log(Alpha) + log(Beta) -
                           log(Alpha + Beta) + exponent)[index1]
  }
  logdensity[!indexTF] <- NaN
  logdensity[x < 0 & indexTF] <- -Inf
  logdensity[x > 1 & indexTF] <- -Inf

  if (meth2) {
    dx.dy[index1] <- probit(x[index1],  # earg = earg,
                            inverse = FALSE, deriv = 1)
    dx.dy[!index1] <- 0
    dx.dy[!indexTF] <- NaN
    if (log.arg) logdensity - log(abs(dx.dy)) else
                 exp(logdensity) / abs(dx.dy)
  } else {
    if (log.arg) logdensity else exp(logdensity)
  }
}


qprobitlap <- function(p, location.ald = 0, scale.ald = 1, tau = 0.5,
                       kappa = sqrt(tau/(1-tau))) {
  qqq <- qalap(p = p, location = location.ald, scale = scale.ald,
               tau = tau, kappa = kappa)
  ans <- probit(qqq, inverse = TRUE)  # , earg = earg
  ans[(p < 0) | (p > 1)] <- NaN
  ans[p == 0] <- 0
  ans[p == 1] <- 1
  ans
}


pprobitlap <- function(q, location.ald = 0, scale.ald = 1, tau = 0.5,
                       kappa = sqrt(tau/(1-tau))) {
  NN <- max(length(q), length(location.ald), length(scale.ald),
            length(kappa))
  location.ald <- rep(location.ald, length.out = NN)
  scale.ald <- rep(scale.ald, length.out = NN)
  kappa <- rep(kappa, length.out = NN)
  q <- rep(q, length.out = NN)
  tau <- rep(tau,
length.out = NN); indexTF <- (q > 0) & (q < 1) qqq <- probit(q[indexTF]) # earg = earg ans <- q ans[indexTF] <- palap(q = qqq, location = location.ald[indexTF], scale = scale.ald[indexTF], tau = tau[indexTF], kappa = kappa[indexTF]) ans[q >= 1] <- 1 ans[q <= 0] <- 0 ans } rclogloglap <- function(n, location.ald = 0, scale.ald = 1, tau = 0.5, kappa = sqrt(tau/(1-tau))) { cloglog(ralap(n = n, location = location.ald, scale = scale.ald, tau = tau, kappa = kappa), # earg = earg, inverse = TRUE) } dclogloglap <- function(x, location.ald = 0, scale.ald = 1, tau = 0.5, kappa = sqrt(tau/(1-tau)), log = FALSE, meth2 = TRUE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) NN <- max(length(x), length(location.ald), length(scale.ald), length(kappa)) location.ald <- rep(location.ald, length.out = NN); scale.ald <- rep(scale.ald, length.out = NN) kappa <- rep(kappa, length.out = NN); x <- rep(x, length.out = NN) tau <- rep(tau, length.out = NN) logdensity <- x * NaN index1 <- (x > 0) & (x < 1) indexTF <- (scale.ald > 0) & (tau > 0) & (tau < 1) & (kappa > 0) # & if (meth2) { dx.dy <- x use.w <- cloglog(x[index1]) # earg = earg logdensity[index1] <- dalap(x = use.w, location = location.ald[index1], scale = scale.ald[index1], tau = tau[index1], kappa = kappa[index1], log = TRUE) } else { Alpha <- sqrt(2) * kappa / scale.ald Beta <- sqrt(2) / (scale.ald * kappa) Delta <- cloglog(location.ald, inverse = TRUE) exponent <- ifelse(x >= Delta, -(Alpha+1), Beta-1) * log(-log1p(-x)) + ifelse(x >= Delta, Alpha, -Beta) * location.ald logdensity[index1] <- (log(Alpha) + log(Beta) - log(Alpha + Beta) - log1p(-x) + exponent)[index1] } logdensity[!indexTF] <- NaN logdensity[x < 0 & indexTF] <- -Inf logdensity[x > 1 & indexTF] <- -Inf if (meth2) { dx.dy[index1] <- cloglog(x[index1], # earg = earg, inverse = FALSE, deriv = 1) dx.dy[!index1] <- 0 dx.dy[!indexTF] <- NaN if (log.arg) logdensity - log(abs(dx.dy)) else exp(logdensity) / abs(dx.dy) } else { if (log.arg) logdensity else exp(logdensity) } } qclogloglap <- function(p, location.ald = 0, scale.ald = 1, tau = 0.5, kappa = sqrt(tau/(1-tau))) { qqq <- qalap(p = p, location = location.ald, scale = scale.ald, tau = tau, kappa = kappa) ans <- cloglog(qqq, inverse = TRUE) # , earg = earg ans[(p < 0) | (p > 1)] <- NaN ans[p == 0] <- 0 ans[p == 1] <- 1 ans } pclogloglap <- function(q, location.ald = 0, scale.ald = 1, tau = 0.5, kappa = sqrt(tau/(1-tau))) { NN <- max(length(q), length(location.ald), length(scale.ald), length(kappa)) location.ald <- rep(location.ald, length.out = NN); scale.ald <- rep(scale.ald, length.out = NN) kappa <- rep(kappa, length.out = NN); q <- rep(q, length.out = NN) tau <- rep(tau, length.out = NN); indexTF <- (q > 0) & (q < 1) qqq <- cloglog(q[indexTF]) # earg = earg ans <- q ans[indexTF] <- palap(q = qqq, location = location.ald[indexTF], scale = scale.ald[indexTF], tau = tau[indexTF], kappa = kappa[indexTF]) ans[q >= 1] <- 1 ans[q <= 0] <- 0 ans } alaplace2.control <- function(maxit = 100, ...) 
{ list(maxit = maxit) } alaplace2 <- function(tau = NULL, llocation = "identity", lscale = "loge", ilocation = NULL, iscale = NULL, kappa = sqrt(tau / (1-tau)), shrinkage.init = 0.95, parallelLocation = FALSE, digt = 4, eq.scale = TRUE, dfmu.init = 3, intparloc = FALSE, imethod = 1, zero = -2) { llocat <- as.list(substitute(llocation)) elocat <- link2list(llocat) llocat <- attr(elocat, "function.name") lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") ilocat <- ilocation if (!is.Numeric(kappa, positive = TRUE)) stop("bad input for argument 'kappa'") if (!is.Numeric(imethod, allowable.length = 1, integer.valued = TRUE, positive = TRUE) || imethod > 4) stop("argument 'imethod' must be 1, 2 or ... 4") if (length(iscale) && !is.Numeric(iscale, positive = TRUE)) stop("bad input for argument 'iscale'") if (!is.Numeric(shrinkage.init, allowable.length = 1) || shrinkage.init < 0 || shrinkage.init > 1) stop("bad input for argument 'shrinkage.init'") if (length(zero) && !(is.Numeric(zero, integer.valued = TRUE) || is.character(zero ))) stop("bad input for argument 'zero'") if (length(tau) && max(abs(kappa - sqrt(tau / (1 - tau)))) > 1.0e-6) stop("arguments 'kappa' and 'tau' do not match") if (!is.logical(intparloc) || length(intparloc) != 1) stop("argument 'intparloc' must be a single logical") if (!is.logical(eq.scale) || length(eq.scale) != 1) stop("argument 'eq.scale' must be a single logical") if (!is.logical(parallelLocation) || length(parallelLocation) != 1) stop("argument 'parallelLocation' must be a single logical") fittedMean <- FALSE if (!is.logical(fittedMean) || length(fittedMean) != 1) stop("bad input for argument 'fittedMean'") new("vglmff", blurb = c("Two-parameter asymmetric Laplace distribution\n\n", "Links: ", namesof("location", llocat, earg = elocat), ", ", namesof("scale", lscale, earg = escale), "\n\n", "Mean: ", "location + scale * (1/kappa - kappa) / sqrt(2)", "\n", "Quantiles: location", "\n", "Variance: scale^2 * (1 + kappa^4) / (2 * kappa^2)"), constraints = eval(substitute(expression({ orig.constraints <- constraints .PARALLEL <- .parallelLocation onemat <- matrix(1, Mdiv2, 1) locatHmat1 <- kronecker(if ( .intparloc ) onemat else diag(Mdiv2), rbind(1, 0)) scaleHmat1 <- kronecker(if ( .eq.scale ) onemat else diag(Mdiv2), rbind(0, 1)) locatHmatk <- kronecker(if ( .PARALLEL ) onemat else diag(Mdiv2), rbind(1, 0)) scaleHmatk <- scaleHmat1 constraints <- cm.vgam(cbind(locatHmatk, scaleHmatk), x, .PARALLEL , constraints, apply.int = FALSE) if (names(constraints)[1] == "(Intercept)") { constraints[["(Intercept)"]] = cbind(locatHmat1, scaleHmat1) } dotzero <- .zero Musual <- 2 eval(negzero.expression) constraints <- cm.zero.vgam(constraints, x, z_Index, M) if (length(orig.constraints)) { if (!identical(orig.constraints, constraints)) { warning("the inputted 'constraints' argument does not match with ", "the 'zero', 'parallel', 'eq.scale' arguments. ", "Using the inputted 'constraints'.") constraints <- orig.constraints } } }), list( .eq.scale = eq.scale, .parallelLocation = parallelLocation, .intparloc = intparloc, .zero = zero ))), infos = eval(substitute(function(...) 
{ list(Musual = 2, zero = .zero) }, list( .zero = zero ))), initialize = eval(substitute(expression({ extra$Musual <- Musual <- 2 temp5 <- w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = if (length( .kappa ) > 1) 1 else Inf, out.wy = TRUE, maximize = TRUE) w <- temp5$w y <- temp5$y extra$ncoly <- ncoly <- ncol(y) if ((ncoly > 1) && (length( .kappa ) > 1)) stop("response must be a vector if 'kappa' or 'tau' ", "has a length greater than one") extra$kappa <- .kappa extra$tau <- extra$kappa^2 / (1 + extra$kappa^2) extra$Mdiv2 <- Mdiv2 <- max(ncoly, length( .kappa )) extra$M <- M <- Musual * Mdiv2 extra$n <- n extra$tau.names <- tau.names <- paste("(tau = ", round(extra$tau, digits = .digt), ")", sep = "") extra$Y.names <- Y.names <- if (ncoly > 1) dimnames(y)[[2]] else "y" if (is.null(Y.names) || any(Y.names == "")) extra$Y.names <- Y.names <- paste("y", 1:ncoly, sep = "") extra$y.names <- y.names <- if (ncoly > 1) paste(Y.names, tau.names, sep = "") else tau.names extra$individual <- FALSE mynames1 <- paste("location", if (Mdiv2 > 1) 1:Mdiv2 else "", sep = "") mynames2 <- paste("scale", if (Mdiv2 > 1) 1:Mdiv2 else "", sep = "") predictors.names <- c(namesof(mynames1, .llocat , earg = .elocat, tag = FALSE), namesof(mynames2, .lscale , earg = .escale, tag = FALSE)) predictors.names <- predictors.names[interleave.VGAM(M, M = Musual)] locat.init <- scale.init <- matrix(0, n, Mdiv2) if (!length(etastart)) { for(jay in 1:Mdiv2) { y.use <- if (ncoly > 1) y[, jay] else y if ( .imethod == 1) { locat.init[, jay] <- weighted.mean(y.use, w[, jay]) scale.init[, jay] <- sqrt(var(y.use) / 2) } else if ( .imethod == 2) { locat.init[, jay] <- median(y.use) scale.init[, jay] <- sqrt(sum(c(w[, jay]) * abs(y - median(y.use))) / (sum(w[, jay]) * 2)) } else if ( .imethod == 3) { Fit5 <- vsmooth.spline(x = x[, min(ncol(x), 2)], y = y.use, w = w[, jay], df = .dfmu.init ) locat.init[, jay] <- predict(Fit5, x = x[, min(ncol(x), 2)])$y scale.init[, jay] <- sqrt(sum(c(w[, jay]) * abs(y.use - median(y.use))) / (sum(w[, jay]) * 2)) } else { use.this <- weighted.mean(y.use, w[, jay]) locat.init[, jay] <- (1 - .sinit) * y.use + .sinit * use.this scale.init[, jay] = sqrt(sum(c(w[, jay]) * abs(y.use - median(y.use ))) / (sum(w[, jay]) * 2)) } } if (length( .ilocat )) { locat.init <- matrix( .ilocat , n, Mdiv2, byrow = TRUE) } if (length( .iscale )) { scale.init <- matrix( .iscale , n, Mdiv2, byrow = TRUE) } etastart <- cbind(theta2eta(locat.init, .llocat , earg = .elocat ), theta2eta(scale.init, .lscale , earg = .escale )) etastart <- etastart[, interleave.VGAM(M, M = Musual), drop = FALSE] } }), list( .imethod = imethod, .dfmu.init = dfmu.init, .sinit = shrinkage.init, .digt = digt, .elocat = elocat, .escale = escale, .llocat = llocat, .lscale = lscale, .kappa = kappa, .ilocat = ilocat, .iscale = iscale ))), linkinv = eval(substitute(function(eta, extra = NULL) { Mdiv2 <- extra$Mdiv2 locat <- eta2theta(eta[, 2 * (1:Mdiv2) - 1, drop = FALSE], .llocat , earg = .elocat ) dimnames(locat) <- list(dimnames(eta)[[1]], extra$y.names) myans <- if ( .fittedMean ) { kappamat <- matrix(extra$kappa, extra$n, extra$Mdiv2, byrow = TRUE) Scale <- eta2theta(eta[, 2 * (1:Mdiv2) , drop = FALSE], .lscale , earg = .escale ) locat + Scale * (1/kappamat - kappamat) } else { locat } dimnames(myans) <- list(dimnames(myans)[[1]], extra$y.names) myans }, list( .elocat = elocat, .llocat = llocat, .escale = escale, .lscale = lscale, .fittedMean = fittedMean, .kappa = kappa ))), last = eval(substitute(expression({ Musual <- extra$Musual tmp34 <- 
c(rep( .llocat , length = Mdiv2), rep( .lscale , length = Mdiv2)) names(tmp34) <- c(mynames1, mynames2) tmp34 <- tmp34[interleave.VGAM(M, M = Musual)] misc$link <- tmp34 # Already named misc$earg <- vector("list", M) misc$Musual <- Musual for(ii in 1:Mdiv2) { misc$earg[[Musual * ii - 1]] <- .elocat misc$earg[[Musual * ii ]] <- .escale } names(misc$earg) <- names(misc$link) misc$multipleResponses <- TRUE misc$expected <- TRUE extra$kappa <- misc$kappa <- .kappa extra$tau <- misc$tau <- misc$kappa^2 / (1 + misc$kappa^2) misc$true.mu <- .fittedMean # @fitted is not a true mu? misc$intparloc <- .intparloc extra$percentile <- numeric(Mdiv2) # length(misc$kappa) locat <- as.matrix(locat) for(ii in 1:Mdiv2) { y.use <- if (ncoly > 1) y[, ii] else y extra$percentile[ii] <- 100 * weighted.mean(y.use <= locat[, ii], w[, ii]) } # if (ncoly > 1) names(misc$link) else zz: names(extra$percentile) <- y.names }), list( .elocat = elocat, .llocat = llocat, .escale = escale, .lscale = lscale, .fittedMean = fittedMean, .intparloc = intparloc, .kappa = kappa ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { Musual <- 2 Mdiv2 <- extra$Mdiv2 ymat <- matrix(y, extra$n, extra$Mdiv2) kappamat <- matrix(extra$kappa, extra$n, extra$Mdiv2, byrow = TRUE) locat <- eta2theta(eta[, 2 * (1:Mdiv2) - 1, drop = FALSE], .llocat , earg = .elocat ) Scale <- eta2theta(eta[, 2 * (1:Mdiv2) , drop = FALSE], .lscale , earg = .escale ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { sum(c(w) * dalap(x = c(ymat), location = c(locat), scale = c(Scale), kappa = c(kappamat), log = TRUE)) } }, list( .elocat = elocat, .llocat = llocat, .escale = escale, .lscale = lscale, .kappa = kappa ))), vfamily = c("alaplace2"), deriv = eval(substitute(expression({ Musual <- 2 Mdiv2 <- extra$Mdiv2 ymat <- matrix(y, n, Mdiv2) locat <- eta2theta(eta[, Musual * (1:(Mdiv2)) - 1, drop = FALSE], .llocat , earg = .elocat ) Scale <- eta2theta(eta[, Musual * (1:(Mdiv2)) , drop = FALSE], .lscale , earg = .escale ) kappamat <- matrix(extra$kappa, n, Mdiv2, byrow = TRUE) zedd <- abs(ymat - locat) / Scale dl.dlocat <- sqrt(2) * ifelse(ymat >= locat, kappamat, 1/kappamat) * sign(ymat - locat) / Scale dl.dscale <- sqrt(2) * ifelse(ymat >= locat, kappamat, 1/kappamat) * zedd / Scale - 1 / Scale dlocat.deta <- dtheta.deta(locat, .llocat , earg = .elocat ) dscale.deta <- dtheta.deta(Scale, .lscale , earg = .escale ) ans <- c(w) * cbind(dl.dlocat * dlocat.deta, dl.dscale * dscale.deta) ans <- ans[, interleave.VGAM(ncol(ans), M = Musual)] ans }), list( .escale = escale, .lscale = lscale, .elocat = elocat, .llocat = llocat, .kappa = kappa ))), weight = eval(substitute(expression({ wz <- matrix(as.numeric(NA), n, M) d2l.dlocat2 <- 2 / Scale^2 d2l.dscale2 <- 1 / Scale^2 wz[, Musual*(1:Mdiv2) - 1] <- d2l.dlocat2 * dlocat.deta^2 wz[, Musual*(1:Mdiv2) ] <- d2l.dscale2 * dscale.deta^2 c(w) * wz }), list( .escale = escale, .lscale = lscale, .elocat = elocat, .llocat = llocat )))) } alaplace1.control <- function(maxit = 100, ...) 
{ list(maxit = maxit) } alaplace1 <- function(tau = NULL, llocation = "identity", ilocation = NULL, kappa = sqrt(tau/(1-tau)), Scale.arg = 1, shrinkage.init = 0.95, parallelLocation = FALSE, digt = 4, dfmu.init = 3, intparloc = FALSE, imethod = 1) { if (!is.Numeric(kappa, positive = TRUE)) stop("bad input for argument 'kappa'") if (length(tau) && max(abs(kappa - sqrt(tau/(1-tau)))) > 1.0e-6) stop("arguments 'kappa' and 'tau' do not match") if (!is.Numeric(imethod, allowable.length = 1, integer.valued = TRUE, positive = TRUE) || imethod > 4) stop("argument 'imethod' must be 1, 2 or ... 4") llocation <- llocation llocat <- as.list(substitute(llocation)) elocat <- link2list(llocat) llocat <- attr(elocat, "function.name") ilocat <- ilocation if (!is.Numeric(shrinkage.init, allowable.length = 1) || shrinkage.init < 0 || shrinkage.init > 1) stop("bad input for argument 'shrinkage.init'") if (!is.Numeric(Scale.arg, positive = TRUE)) stop("bad input for argument 'Scale.arg'") if (!is.logical(parallelLocation) || length(parallelLocation) != 1) stop("bad input for argument 'parallelLocation'") fittedMean <- FALSE if (!is.logical(fittedMean) || length(fittedMean) != 1) stop("bad input for argument 'fittedMean'") new("vglmff", blurb = c("One-parameter asymmetric Laplace distribution\n\n", "Links: ", namesof("location", llocat, earg = elocat), "\n", "\n", "Mean: location + scale * (1/kappa - kappa) / ", "sqrt(2)", "\n", "Quantiles: location", "\n", "Variance: scale^2 * (1 + kappa^4) / (2 * kappa^2)"), constraints = eval(substitute(expression({ orig.constraints <- constraints onemat <- matrix(1, M, 1) locatHmat1 <- if ( .intparloc ) onemat else diag(M) locatHmatk <- if ( .parallelLocation ) onemat else diag(M) constraints <- cm.vgam(locatHmatk, x, .parallelLocation, constraints, apply.int = FALSE) if (names(constraints)[1] == "(Intercept)") { constraints[["(Intercept)"]] = locatHmat1 } if (length(orig.constraints)) { if (!identical(orig.constraints, constraints)) { warning("the inputted 'constraints' argument does not match with ", "the 'parallel', 'eq.scale' arguments. ", "Using the inputted 'constraints'.") constraints <- orig.constraints } } }), list( .parallelLocation = parallelLocation, .intparloc = intparloc ))), infos = eval(substitute(function(...) 
{ list(Musual = 1, tau = .tau, kappa = .kappa) }, list( .kappa = kappa, .tau = tau ))), initialize = eval(substitute(expression({ extra$Musual <- Musual <- 1 temp5 <- w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = if (length( .kappa ) > 1) 1 else Inf, out.wy = TRUE, maximize = TRUE) w <- temp5$w y <- temp5$y extra$ncoly <- ncoly <- ncol(y) if ((ncoly > 1) && (length( .kappa ) > 1 || length( .Scale.arg ) > 1)) stop("response must be a vector if 'kappa' or 'Scale.arg' ", "has a length greater than one") extra$kappa <- .kappa extra$tau <- extra$kappa^2 / (1 + extra$kappa^2) extra$M <- M <- max(length( .Scale.arg ), ncoly, length( .kappa )) # Recycle extra$Scale <- rep( .Scale.arg, length = M) extra$kappa <- rep( .kappa, length = M) extra$tau <- extra$kappa^2 / (1 + extra$kappa^2) extra$n <- n extra$tau.names <- tau.names <- paste("(tau = ", round(extra$tau, digits = .digt), ")", sep = "") extra$Y.names <- Y.names <- if (ncoly > 1) dimnames(y)[[2]] else "y" if (is.null(Y.names) || any(Y.names == "")) extra$Y.names <- Y.names <- paste("y", 1:ncoly, sep = "") extra$y.names <- y.names <- if (ncoly > 1) paste(Y.names, tau.names, sep = "") else tau.names extra$individual <- FALSE mynames1 <- paste("location", if (M > 1) 1:M else "", sep = "") predictors.names <- c(namesof(mynames1, .llocat , earg = .elocat, tag = FALSE)) locat.init <- matrix(0, n, M) if (!length(etastart)) { for(jay in 1:M) { y.use <- if (ncoly > 1) y[, jay] else y if ( .imethod == 1) { locat.init[, jay] <- weighted.mean(y.use, w) } else if ( .imethod == 2) { locat.init[, jay] <- median(y.use) } else if ( .imethod == 3) { Fit5 = vsmooth.spline(x = x[, min(ncol(x), 2)], y = y.use, w = w, df = .dfmu.init) locat.init[, jay] <- c(predict(Fit5, x = x[, min(ncol(x), 2)])$y) } else { use.this <- weighted.mean(y.use, w) locat.init[, jay] <- (1- .sinit) * y.use + .sinit * use.this } if (length( .ilocat )) { locat.init <- matrix( .ilocat , n, M, byrow = TRUE) } if ( .llocat == "loge") locat.init <- abs(locat.init) etastart <- cbind(theta2eta(locat.init, .llocat , earg = .elocat )) } } }), list( .imethod = imethod, .dfmu.init = dfmu.init, .sinit = shrinkage.init, .digt = digt, .elocat = elocat, .Scale.arg = Scale.arg, .llocat = llocat, .kappa = kappa, .ilocat = ilocat ))), linkinv = eval(substitute(function(eta, extra = NULL) { if ( .fittedMean ) { kappamat <- matrix(extra$kappa, extra$n, extra$M, byrow = TRUE) locat <- eta2theta(eta, .llocat , earg = .elocat ) Scale <- matrix(extra$Scale, extra$n, extra$M, byrow = TRUE) locat + Scale * (1/kappamat - kappamat) } else { locat <- eta2theta(eta, .llocat , earg = .elocat ) if (length(locat) > extra$n) dimnames(locat) <- list(dimnames(eta)[[1]], extra$y.names) locat } }, list( .elocat = elocat, .llocat = llocat, .fittedMean = fittedMean, .Scale.arg = Scale.arg, .kappa = kappa ))), last = eval(substitute(expression({ Musual <- extra$Musual misc$Musual <- Musual misc$multipleResponses <- TRUE tmp34 <- c(rep( .llocat , length = M)) names(tmp34) <- mynames1 misc$link <- tmp34 # Already named misc$earg <- vector("list", M) names(misc$earg) <- names(misc$link) for(ii in 1:M) { misc$earg[[ii]] <- .elocat } misc$expected <- TRUE extra$kappa <- misc$kappa <- .kappa extra$tau <- misc$tau <- misc$kappa^2 / (1 + misc$kappa^2) misc$true.mu <- .fittedMean # @fitted is not a true mu? 
extra$percentile <- numeric(M) locat <- as.matrix(locat) for(ii in 1:M) { y.use <- if (ncoly > 1) y[, ii] else y extra$percentile[ii] = 100 * weighted.mean(y.use <= locat[, ii], w) } names(extra$percentile) <- y.names extra$Scale.arg <- .Scale.arg }), list( .elocat = elocat, .llocat = llocat, .Scale.arg = Scale.arg, .fittedMean = fittedMean, .kappa = kappa ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { ymat <- matrix(y, extra$n, extra$M) kappamat <- matrix(extra$kappa, extra$n, extra$M, byrow = TRUE) locat <- eta2theta(eta, .llocat , earg = .elocat ) Scale <- matrix(extra$Scale, extra$n, extra$M, byrow = TRUE) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { sum(c(w) * dalap(x = c(ymat), locat = c(locat), scale = c(Scale), kappa = c(kappamat), log = TRUE)) } }, list( .elocat = elocat, .llocat = llocat, .Scale.arg = Scale.arg, .kappa = kappa ))), vfamily = c("alaplace1"), deriv = eval(substitute(expression({ ymat <- matrix(y, n, M) Scale <- matrix(extra$Scale, extra$n, extra$M, byrow = TRUE) locat <- eta2theta(eta, .llocat , earg = .elocat ) kappamat <- matrix(extra$kappa, n, M, byrow = TRUE) zedd <- abs(ymat-locat) / Scale dl.dlocat <- ifelse(ymat >= locat, kappamat, 1/kappamat) * sqrt(2) * sign(ymat - locat) / Scale dlocat.deta <- dtheta.deta(locat, .llocat , earg = .elocat ) c(w) * cbind(dl.dlocat * dlocat.deta) }), list( .Scale.arg = Scale.arg, .elocat = elocat, .llocat = llocat, .kappa = kappa ))), weight = eval(substitute(expression({ d2l.dlocat2 <- 2 / Scale^2 wz <- cbind(d2l.dlocat2 * dlocat.deta^2) c(w) * wz }), list( .Scale.arg = Scale.arg, .elocat = elocat, .llocat = llocat )))) } alaplace3.control <- function(maxit = 100, ...) { list(maxit = maxit) } alaplace3 <- function( llocation = "identity", lscale = "loge", lkappa = "loge", ilocation = NULL, iscale = NULL, ikappa = 1.0, imethod = 1, zero = 2:3) { llocat <- as.list(substitute(llocation)) elocat <- link2list(llocat) llocat <- attr(elocat, "function.name") ilocat <- ilocation lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") lkappa <- as.list(substitute(lkappa)) ekappa <- link2list(lkappa) lkappa <- attr(ekappa, "function.name") if (!is.Numeric(imethod, allowable.length = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("argument 'imethod' must be 1 or 2") if (length(zero) && !is.Numeric(zero, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'zero'") if (length(iscale) && !is.Numeric(iscale, positive = TRUE)) stop("bad input for argument 'iscale'") new("vglmff", blurb = c("Three-parameter asymmetric Laplace distribution\n\n", "Links: ", namesof("location", llocat, earg = elocat), ", ", namesof("scale", lscale, earg = escale), ", ", namesof("kappa", lkappa, earg = ekappa), "\n", "\n", "Mean: location + scale * (1/kappa - kappa) / sqrt(2)", "\n", "Variance: Scale^2 * (1 + kappa^4) / (2 * kappa^2)"), constraints = eval(substitute(expression({ constraints <- cm.zero.vgam(constraints, x, .zero, M) }), list( .zero = zero ))), initialize = eval(substitute(expression({ w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1) predictors.names <- c(namesof("location", .llocat , earg = .elocat, tag = FALSE), namesof("scale", .lscale , earg = .escale, tag = FALSE), namesof("kappa", .lkappa , earg = .ekappa, tag = FALSE)) if (!length(etastart)) { kappa.init <- if (length( .ikappa )) rep( .ikappa, length.out = n) else rep( 1.0, length.out = n) if ( .imethod == 1) { 
locat.init <- median(y) scale.init <- sqrt(var(y) / 2) } else { locat.init <- y scale.init <- sqrt(sum(c(w)*abs(y-median(y ))) / (sum(w) *2)) } locat.init <- if (length( .ilocat)) rep( .ilocat, length.out = n) else rep(locat.init, length.out = n) scale.init <- if (length( .iscale)) rep( .iscale, length.out = n) else rep(scale.init, length.out = n) etastart <- cbind(theta2eta(locat.init, .llocat , earg = .elocat ), theta2eta(scale.init, .lscale , earg = .escale ), theta2eta(kappa.init, .lkappa, earg = .ekappa)) } }), list( .imethod = imethod, .elocat = elocat, .escale = escale, .ekappa = ekappa, .llocat = llocat, .lscale = lscale, .lkappa = lkappa, .ilocat = ilocat, .iscale = iscale, .ikappa = ikappa ))), linkinv = eval(substitute(function(eta, extra = NULL) { locat <- eta2theta(eta[, 1], .llocat , earg = .elocat ) Scale <- eta2theta(eta[, 2], .lscale , earg = .escale ) kappa <- eta2theta(eta[, 3], .lkappa, earg = .ekappa) locat + Scale * (1/kappa - kappa) / sqrt(2) }, list( .elocat = elocat, .llocat = llocat, .escale = escale, .lscale = lscale, .ekappa = ekappa, .lkappa = lkappa ))), last = eval(substitute(expression({ misc$link <- c(location = .llocat , scale = .lscale , kappa = .lkappa ) misc$earg <- list(location = .elocat, scale = .escale, kappa = .ekappa ) misc$expected = TRUE }), list( .elocat = elocat, .llocat = llocat, .escale = escale, .lscale = lscale, .ekappa = ekappa, .lkappa = lkappa ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { locat <- eta2theta(eta[, 1], .llocat , earg = .elocat ) Scale <- eta2theta(eta[, 2], .lscale , earg = .escale ) kappa <- eta2theta(eta[, 3], .lkappa , earg = .ekappa ) # a matrix if (residuals) { stop("loglikelihood residuals not implemented yet") } else { sum(c(w) * dalap(x = y, locat = locat, scale = Scale, kappa = kappa, log = TRUE)) } }, list( .elocat = elocat, .llocat = llocat, .escale = escale, .lscale = lscale, .ekappa = ekappa, .lkappa = lkappa ))), vfamily = c("alaplace3"), deriv = eval(substitute(expression({ locat <- eta2theta(eta[, 1], .llocat , earg = .elocat ) Scale <- eta2theta(eta[, 2], .lscale , earg = .escale ) kappa <- eta2theta(eta[, 3], .lkappa, earg = .ekappa) zedd <- abs(y - locat) / Scale dl.dlocat <- sqrt(2) * ifelse(y >= locat, kappa, 1/kappa) * sign(y-locat) / Scale dl.dscale <- sqrt(2) * ifelse(y >= locat, kappa, 1/kappa) * zedd / Scale - 1 / Scale dl.dkappa <- 1 / kappa - 2 * kappa / (1+kappa^2) - (sqrt(2) / Scale) * ifelse(y > locat, 1, -1/kappa^2) * abs(y-locat) dlocat.deta <- dtheta.deta(locat, .llocat , earg = .elocat ) dscale.deta <- dtheta.deta(Scale, .lscale , earg = .escale ) dkappa.deta <- dtheta.deta(kappa, .lkappa, earg = .ekappa) c(w) * cbind(dl.dlocat * dlocat.deta, dl.dscale * dscale.deta, dl.dkappa * dkappa.deta) }), list( .escale = escale, .lscale = lscale, .elocat = elocat, .llocat = llocat, .ekappa = ekappa, .lkappa = lkappa ))), weight = eval(substitute(expression({ d2l.dlocat2 <- 2 / Scale^2 d2l.dscale2 <- 1 / Scale^2 d2l.dkappa2 <- 1 / kappa^2 + 4 / (1+kappa^2)^2 d2l.dkappadloc <- -sqrt(8) / ((1+kappa^2) * Scale) d2l.dkappadscale <- -(1-kappa^2) / ((1+kappa^2) * kappa * Scale) wz <- matrix(0, nrow = n, dimm(M)) wz[,iam(1, 1, M)] <- d2l.dlocat2 * dlocat.deta^2 wz[,iam(2, 2, M)] <- d2l.dscale2 * dscale.deta^2 wz[,iam(3, 3, M)] <- d2l.dkappa2 * dkappa.deta^2 wz[,iam(1, 3, M)] <- d2l.dkappadloc * dkappa.deta * dlocat.deta wz[,iam(2, 3, M)] <- d2l.dkappadscale * dkappa.deta * dscale.deta c(w) * wz }), list( .escale = escale, .lscale = lscale, .elocat = 
elocat, .llocat = llocat ))))
}


dlaplace <- function(x, location = 0, scale = 1, log = FALSE) {
  if (!is.logical(log.arg <- log) || length(log) != 1)
    stop("bad input for argument 'log'")
  rm(log)

  logdensity <- (-abs(x-location)/scale) - log(2*scale)
  if (log.arg) logdensity else exp(logdensity)
}


plaplace <- function(q, location = 0, scale = 1) {
  if (!is.Numeric(scale, positive = TRUE))
    stop("argument 'scale' must be positive")
  zedd <- (q-location) / scale

  L <- max(length(q), length(location), length(scale))
  q <- rep(q, length.out = L)
  location <- rep(location, length.out = L)
  scale <- rep(scale, length.out = L)

  ifelse(q < location, 0.5*exp(zedd), 1-0.5*exp(-zedd))
}


qlaplace <- function(p, location = 0, scale = 1) {
  if (!is.Numeric(scale, positive = TRUE))
    stop("argument 'scale' must be positive")

  L <- max(length(p), length(location), length(scale))
  p <- rep(p, length.out = L)
  location <- rep(location, length.out = L)
  scale <- rep(scale, length.out = L)

  location - sign(p-0.5) * scale * log(2*ifelse(p < 0.5, p, 1-p))
}


rlaplace <- function(n, location = 0, scale = 1) {
  if (!is.Numeric(n, positive = TRUE, integer.valued = TRUE,
                  allowable.length = 1))
    stop("bad input for argument 'n'")
  if (!is.Numeric(scale, positive = TRUE))
    stop("'scale' must be positive")

  location <- rep(location, length.out = n)
  scale <- rep(scale, length.out = n)

  r <- runif(n)
  location - sign(r-0.5) * scale * log(2 * ifelse(r < 0.5, r, 1-r))
}
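# Sketch of a typical use of the d/p/q/r Laplace functions with the
# laplace() family function defined below, fitted by vglm().  The data
# frame, seed and coefficient values are hypothetical illustration
# choices; 'vglm' is assumed to be available from this package.
if (FALSE) {
  set.seed(123)
  ldata <- data.frame(x2 = runif(nn <- 500))
  ldata$y <- rlaplace(nn, location = 1 + 2 * ldata$x2, scale = exp(0.5))
  fit <- vglm(y ~ x2, laplace(zero = 2), data = ldata, trace = TRUE)
  coef(fit, matrix = TRUE)  # location near 1 + 2*x2; loge(scale) near 0.5
}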
laplace <- function(llocation = "identity", lscale = "loge",
                    ilocation = NULL, iscale = NULL,
                    imethod = 1, zero = 2) {
  llocat <- as.list(substitute(llocation))
  elocat <- link2list(llocat)
  llocat <- attr(elocat, "function.name")
  ilocat <- ilocation

  lscale <- as.list(substitute(lscale))
  escale <- link2list(lscale)
  lscale <- attr(escale, "function.name")

  if (!is.Numeric(imethod, allowable.length = 1,
                  integer.valued = TRUE, positive = TRUE) ||
      imethod > 3)
    stop("argument 'imethod' must be 1 or 2 or 3")
  if (length(zero) &&
      !is.Numeric(zero, integer.valued = TRUE, positive = TRUE))
    stop("bad input for argument 'zero'")
  if (length(iscale) && !is.Numeric(iscale, positive = TRUE))
    stop("bad input for argument 'iscale'")

  new("vglmff",
  blurb = c("Two-parameter Laplace distribution\n\n",
            "Links: ",
            namesof("location", llocat, earg = elocat), ", ",
            namesof("scale", lscale, earg = escale),
            "\n", "\n",
            "Mean: location", "\n",
            "Variance: 2*scale^2"),
  constraints = eval(substitute(expression({
    constraints <- cm.zero.vgam(constraints, x, .zero, M)
  }), list( .zero = zero ))),
  initialize = eval(substitute(expression({
    w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1)

    predictors.names <-
      c(namesof("location", .llocat , earg = .elocat, tag = FALSE),
        namesof("scale", .lscale , earg = .escale, tag = FALSE))

    if (!length(etastart)) {
      if ( .imethod == 1) {
        locat.init <- median(y)
        scale.init <- sqrt(var(y) / 2)
      } else if ( .imethod == 2) {
        locat.init <- weighted.mean(y, w)
        scale.init <- sqrt(var(y) / 2)
      } else {
        locat.init <- median(y)
        scale.init <- sqrt(sum(c(w)*abs(y-median(y ))) / (sum(w) *2))
      }
      locat.init <- if (length( .ilocat))
                      rep( .ilocat, length.out = n) else
                      rep(locat.init, length.out = n)
      scale.init <- if (length( .iscale))
                      rep( .iscale, length.out = n) else
                      rep(scale.init, length.out = n)
      etastart <-
        cbind(theta2eta(locat.init, .llocat , earg = .elocat ),
              theta2eta(scale.init, .lscale , earg = .escale ))
    }
  }), list( .imethod = imethod,
            .elocat = elocat, .escale = escale,
            .llocat = llocat, .lscale = lscale,
            .ilocat = ilocat, .iscale = iscale ))),
  linkinv = eval(substitute(function(eta, extra = NULL) {
    eta2theta(eta[, 1], .llocat , earg = .elocat )
  }, list( .elocat = elocat, .llocat = llocat ))),
  last = eval(substitute(expression({
    misc$link <- c(location = .llocat , scale = .lscale )
    misc$earg <- list(location = .elocat , scale = .escale )
    misc$expected <- TRUE
    misc$RegCondOK <- FALSE  # Save this for later
  }), list( .escale = escale, .lscale = lscale,
            .elocat = elocat, .llocat = llocat ))),
  loglikelihood = eval(substitute(
    function(mu, y, w, residuals = FALSE, eta, extra = NULL) {
    locat <- eta2theta(eta[, 1], .llocat , earg = .elocat )
    Scale <- eta2theta(eta[, 2], .lscale , earg = .escale )
    if (residuals) {
      stop("loglikelihood residuals not implemented yet")
    } else {
      sum(c(w) * dlaplace(x = y, locat = locat,
                          scale = Scale, log = TRUE))
    }
  }, list( .escale = escale, .lscale = lscale,
           .elocat = elocat, .llocat = llocat ))),
  vfamily = c("laplace"),
  deriv = eval(substitute(expression({
    Locat <- eta2theta(eta[, 1], .llocat , earg = .elocat )
    Scale <- eta2theta(eta[, 2], .lscale , earg = .escale )
    zedd <- abs(y-Locat) / Scale
    dl.dLocat <- sign(y - Locat) / Scale
    dl.dscale <- zedd / Scale - 1 / Scale
    dLocat.deta <- dtheta.deta(Locat, .llocat , earg = .elocat )
    dscale.deta <- dtheta.deta(Scale, .lscale , earg = .escale )
    c(w) * cbind(dl.dLocat * dLocat.deta,
                 dl.dscale * dscale.deta)
  }), list( .escale = escale, .lscale = lscale,
            .elocat = elocat, .llocat = llocat ))),
  weight = eval(substitute(expression({
    d2l.dLocat2 <- d2l.dscale2 <- 1 / Scale^2
    wz <- matrix(0, nrow = n, ncol = M)  # diagonal
    wz[,iam(1, 1, M)] <- d2l.dLocat2 * dLocat.deta^2
    wz[,iam(2, 2, M)] <- d2l.dscale2 * dscale.deta^2
    c(w) * wz
  }), list( .escale = escale, .lscale = lscale,
            .elocat = elocat, .llocat = llocat ))))
}


fff.control <- function(save.weight = TRUE, ...) {
  list(save.weight = save.weight)
}


fff <- function(link = "loge",
                idf1 = NULL, idf2 = NULL, nsimEIM = 100,  # ncp = 0,
                imethod = 1, zero = NULL) {
  link <- as.list(substitute(link))
  earg <- link2list(link)
  link <- attr(earg, "function.name")

  if (!is.Numeric(imethod, allowable.length = 1,
                  integer.valued = TRUE, positive = TRUE) ||
      imethod > 2)
    stop("argument 'imethod' must be 1 or 2")
  if (length(zero) &&
      !is.Numeric(zero, integer.valued = TRUE, positive = TRUE))
    stop("bad input for argument 'zero'")
  if (!is.Numeric(nsimEIM, allowable.length = 1,
                  integer.valued = TRUE) || nsimEIM <= 10)
    stop("argument 'nsimEIM' should be an integer greater than 10")

  ncp <- 0
  if (any(ncp != 0))
    warning("not sure about ncp != 0 wrt dl/dtheta")

  new("vglmff",
  blurb = c("F-distribution\n\n",
            "Links: ",
            namesof("df1", link, earg = earg), ", ",
            namesof("df2", link, earg = earg),
            "\n", "\n",
            "Mean: df2/(df2-2) provided df2>2 and ncp = 0",
            "\n",
            "Variance: ",
            "2*df2^2*(df1+df2-2)/(df1*(df2-2)^2*(df2-4)) ",
            "provided df2>4 and ncp = 0"),
  constraints = eval(substitute(expression({
    constraints <- cm.zero.vgam(constraints, x, .zero, M)
  }), list( .zero = zero ))),
  initialize = eval(substitute(expression({
    w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1)

    predictors.names <-
      c(namesof("df1", .link , earg = .earg , tag = FALSE),
        namesof("df2", .link , earg = .earg , tag = FALSE))

    if (!length(etastart)) {
      if ( .imethod == 1) {
        df2.init <- b <- 2*mean(y) / (mean(y)-1)
        df1.init <- 2*b^2*(b-2)/(var(y)*(b-2)^2 * (b-4) - 2*b^2)
        if (df2.init < 4) df2.init <- 5
        if (df1.init < 2) df1.init <- 3
      } else {
        df2.init <- b <- 2*median(y) / (median(y)-1)
        summy <- summary(y)
        var.est <- summy[5] - summy[2]
        df1.init <- 2*b^2*(b-2)/(var.est*(b-2)^2 * (b-4) - 2*b^2)
      }
      df1.init <- if (length( .idf1))
rep( .idf1, length.out = n) else rep(df1.init, length.out = n) df2.init <- if (length( .idf2)) rep( .idf2, length.out = n) else rep(1, length.out = n) etastart <- cbind(theta2eta(df1.init, .link , earg = .earg ), theta2eta(df2.init, .link , earg = .earg )) } }), list( .imethod = imethod, .idf1 = idf1, .earg = earg, .idf2 = idf2, .link = link ))), linkinv = eval(substitute(function(eta, extra = NULL) { df2 <- eta2theta(eta[, 2], .link , earg = .earg ) ans <- df2 * NA ans[df2>2] <- df2[df2>2] / (df2[df2>2]-2) ans }, list( .link = link, .earg = earg ))), last = eval(substitute(expression({ misc$link <- c(df1 = .link , df2 = .link ) misc$earg <- list(df1 = .earg , df2 = .earg ) misc$nsimEIM <- .nsimEIM misc$ncp <- .ncp }), list( .link = link, .earg = earg, .ncp = ncp, .nsimEIM = nsimEIM ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { df1 <- eta2theta(eta[, 1], .link , earg = .earg ) df2 <- eta2theta(eta[, 2], .link , earg = .earg ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { sum(c(w) * df(x = y, df1 = df1, df2 = df2, ncp = .ncp, log = TRUE)) } }, list( .link = link, .earg = earg, .ncp=ncp ))), vfamily = c("fff"), deriv = eval(substitute(expression({ df1 <- eta2theta(eta[, 1], .link , earg = .earg ) df2 <- eta2theta(eta[, 2], .link , earg = .earg ) dl.ddf1 <- 0.5*digamma(0.5*(df1+df2)) + 0.5 + 0.5*log(df1/df2) + 0.5*log(y) - 0.5*digamma(0.5*df1) - 0.5*(df1+df2)*(y/df2) / (1 + df1*y/df2) - 0.5*log1p(df1*y/df2) dl.ddf2 <- 0.5*digamma(0.5*(df1+df2)) - 0.5*df1/df2 - 0.5*digamma(0.5*df2) - 0.5*(df1+df2) * (-df1*y/df2^2) / (1 + df1*y/df2) - 0.5*log1p(df1*y/df2) ddf1.deta <- dtheta.deta(df1, .link , earg = .earg ) ddf2.deta <- dtheta.deta(df2, .link , earg = .earg ) dthetas.detas <- cbind(ddf1.deta, ddf2.deta) c(w) * dthetas.detas * cbind(dl.ddf1, dl.ddf2) }), list( .link = link, .earg = earg ))), weight = eval(substitute(expression({ run.varcov <- 0 ind1 <- iam(NA, NA, M = M, both = TRUE, diag = TRUE) for(ii in 1:( .nsimEIM )) { ysim <- rf(n = n, df1=df1, df2=df2) dl.ddf1 <- 0.5*digamma(0.5*(df1+df2)) + 0.5 + 0.5*log(df1/df2) + 0.5*log(ysim) - 0.5*digamma(0.5*df1) - 0.5*(df1+df2)*(ysim/df2) / (1 + df1*ysim/df2) - 0.5*log1p(df1*ysim/df2) dl.ddf2 <- 0.5*digamma(0.5*(df1+df2)) - 0.5*df1/df2 - 0.5*digamma(0.5*df2) - 0.5*(df1+df2) * (-df1*ysim/df2^2)/(1 + df1*ysim/df2) - 0.5*log1p(df1*ysim/df2) rm(ysim) temp3 <- cbind(dl.ddf1, dl.ddf2) run.varcov <- ((ii-1) * run.varcov + temp3[,ind1$row.index]*temp3[,ind1$col.index]) / ii } wz <- if (intercept.only) matrix(colMeans(run.varcov), n, ncol(run.varcov), byrow = TRUE) else run.varcov wz <- c(w) * wz * dthetas.detas[, ind1$row] * dthetas.detas[, ind1$col] wz }), list( .link = link, .earg = earg, .nsimEIM = nsimEIM, .ncp = ncp )))) } hyperg <- function(N = NULL, D = NULL, lprob = "logit", iprob = NULL) { inputN <- is.Numeric(N, positive = TRUE) inputD <- is.Numeric(D, positive = TRUE) if (inputD && inputN) stop("only one of 'N' and 'D' is to be inputted") if (!inputD && !inputN) stop("one of 'N' and 'D' needs to be inputted") lprob <- as.list(substitute(lprob)) earg <- link2list(lprob) lprob <- attr(earg, "function.name") new("vglmff", blurb = c("Hypergeometric distribution\n\n", "Link: ", namesof("prob", lprob, earg = earg), "\n", "Mean: D/N\n"), initialize = eval(substitute(expression({ NCOL <- function (x) if (is.array(x) && length(dim(x)) > 1 || is.data.frame(x)) ncol(x) else as.integer(1) if (NCOL(y) == 1) { if (is.factor(y)) y <- y != levels(y)[1] nn <- rep(1, length.out = 
n) if (!all(y >= 0 & y <= 1)) stop("response values must be in [0, 1]") mustart <- (0.5 + w * y) / (1 + w) no.successes <- w * y if (any(abs(no.successes - round(no.successes)) > 0.001)) stop("Number of successes must be integer-valued") } else if (NCOL(y) == 2) { if (any(abs(y - round(y)) > 0.001)) stop("Count data must be integer-valued") nn <- y[, 1] + y[, 2] y <- ifelse(nn > 0, y[, 1]/nn, 0) w <- w * nn mustart <- (0.5 + nn * y) / (1 + nn) mustart[mustart >= 1] <- 0.95 } else stop("Response not of the right form") predictors.names <- namesof("prob", .lprob , earg = .earg , tag = FALSE) extra$Nvector <- .N extra$Dvector <- .D extra$Nunknown <- length(extra$Nvector) == 0 if (!length(etastart)) { init.prob <- if (length( .iprob)) rep( .iprob, length.out = n) else mustart etastart <- matrix(init.prob, n, ncol(cbind(y ))) } }), list( .lprob = lprob, .earg = earg, .N = N, .D = D, .iprob = iprob ))), linkinv = eval(substitute(function(eta, extra = NULL) { eta2theta(eta, .lprob, earg = .earg ) }, list( .lprob = lprob, .earg = earg ))), last = eval(substitute(expression({ misc$link <- c("prob" = .lprob) misc$earg <- list("prob" = .earg ) misc$Dvector <- .D misc$Nvector <- .N }), list( .N = N, .D = D, .lprob = lprob, .earg = earg ))), linkfun = eval(substitute(function(mu, extra = NULL) { theta2eta(mu, .lprob, earg = .earg ) }, list( .lprob = lprob, .earg = earg ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { N <- extra$Nvector Dvec <- extra$Dvector prob <- mu yvec <- w * y if (residuals) { stop("loglikelihood residuals not implemented yet") } else { if (extra$Nunknown) { tmp12 <- Dvec * (1-prob) / prob sum(lgamma(1+tmp12) + lgamma(1+Dvec/prob-w) - lgamma(1+tmp12-w+yvec) - lgamma(1+Dvec/prob)) } else { sum(lgamma(1+N*prob) + lgamma(1+N*(1-prob)) - lgamma(1+N*prob-yvec) - lgamma(1+N*(1-prob) -w + yvec)) } } }, list( .lprob = lprob, .earg = earg ))), vfamily = c("hyperg"), deriv = eval(substitute(expression({ prob <- mu # equivalently, eta2theta(eta, .lprob, earg = .earg ) dprob.deta <- dtheta.deta(prob, .lprob, earg = .earg ) Dvec <- extra$Dvector Nvec <- extra$Nvector yvec <- w * y if (extra$Nunknown) { tmp72 <- -Dvec / prob^2 tmp12 <- Dvec * (1-prob) / prob dl.dprob <- tmp72 * (digamma(1 + tmp12) + digamma(1 + Dvec/prob -w) - digamma(1 + tmp12-w+yvec) - digamma(1 + Dvec/prob)) } else { dl.dprob <- Nvec * (digamma(1+Nvec*prob) - digamma(1+Nvec*(1-prob)) - digamma(1+Nvec*prob-yvec) + digamma(1+Nvec*(1-prob)-w+yvec)) } c(w) * dl.dprob * dprob.deta }), list( .lprob = lprob, .earg = earg ))), weight = eval(substitute(expression({ if (extra$Nunknown) { tmp722 <- tmp72^2 tmp13 <- 2*Dvec / prob^3 d2l.dprob2 <- tmp722 * (trigamma(1 + tmp12) + trigamma(1 + Dvec/prob - w) - trigamma(1 + tmp12 - w + yvec) - trigamma(1 + Dvec/prob)) + tmp13 * (digamma(1 + tmp12) + digamma(1 + Dvec/prob - w) - digamma(1 + tmp12 - w + yvec) - digamma(1 + Dvec/prob)) } else { d2l.dprob2 <- Nvec^2 * (trigamma(1+Nvec*prob) + trigamma(1+Nvec*(1-prob)) - trigamma(1+Nvec*prob-yvec) - trigamma(1+Nvec*(1-prob)-w+yvec)) } d2prob.deta2 <- d2theta.deta2(prob, .lprob, earg = .earg ) wz <- -(dprob.deta^2) * d2l.dprob2 wz <- c(w) * wz wz[wz < .Machine$double.eps] <- .Machine$double.eps wz }), list( .lprob = lprob, .earg = earg )))) } dbenini <- function(x, shape, y0, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) N <- max(length(x), length(shape), length(y0)) x <- rep(x, length.out = N); shape <- rep(shape, length.out = N); y0 
<- rep(y0, length.out = N)

  logdensity <- rep(log(0), length.out = N)
  xok <- (x > y0)
  tempxok <- log(x[xok]/y0[xok])
  logdensity[xok] <- log(2*shape[xok]) - shape[xok] * tempxok^2 +
                     log(tempxok) - log(x[xok])
  if (log.arg) logdensity else exp(logdensity)
}


pbenini <- function(q, shape, y0) {
  if (!is.Numeric(q))
    stop("bad input for argument 'q'")
  if (!is.Numeric(shape, positive = TRUE))
    stop("bad input for argument 'shape'")
  if (!is.Numeric(y0, positive = TRUE))
    stop("bad input for argument 'y0'")

  N <- max(length(q), length(shape), length(y0))
  q <- rep(q, length.out = N)
  shape <- rep(shape, length.out = N)
  y0 <- rep(y0, length.out = N)

  ans <- y0 * 0
  ok <- q > y0
  ans[ok] <- -expm1(-shape[ok] * (log(q[ok]/y0[ok]))^2)
  ans
}


qbenini <- function(p, shape, y0) {
  if (!is.Numeric(p, positive = TRUE) || any(p >= 1))
    stop("bad input for argument 'p'")
  if (!is.Numeric(shape, positive = TRUE))
    stop("bad input for argument 'shape'")
  if (!is.Numeric(y0, positive = TRUE))
    stop("bad input for argument 'y0'")

  y0 * exp(sqrt(-log1p(-p) / shape))
}


rbenini <- function(n, shape, y0) {
  y0 * exp(sqrt(-log(runif(n)) / shape))
}
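# Quick numerical checks for the Benini functions above (illustrative
# only; the shape and y0 values are arbitrary, and the block never runs
# when the package is built).
if (FALSE) {
  Shape <- 2; y0 <- 1.5
  pvec <- c(0.1, 0.5, 0.9)
  pbenini(qbenini(pvec, Shape, y0), Shape, y0) - pvec       # ~ 0
  mean(rbenini(1e5, Shape, y0) <= qbenini(0.5, Shape, y0))  # ~ 0.5
}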
benini <- function(y0 = stop("argument 'y0' must be specified"),
                   lshape = "loge",
                   ishape = NULL, imethod = 1, zero = NULL) {
  lshape <- as.list(substitute(lshape))
  eshape <- link2list(lshape)
  lshape <- attr(eshape, "function.name")

  if (!is.Numeric(imethod, allowable.length = 1,
                  integer.valued = TRUE, positive = TRUE) ||
      imethod > 2)
    stop("argument 'imethod' must be 1 or 2")
  if (!is.Numeric(y0, positive = TRUE))
    stop("bad input for argument 'y0'")
  if (length(zero) &&
      !is.Numeric(zero, integer.valued = TRUE, positive = TRUE))
    stop("bad input for argument 'zero'")

  new("vglmff",
  blurb = c("1-parameter Benini distribution\n\n",
            "Link: ",
            namesof("shape", lshape, earg = eshape),
            "\n", "\n",
            "Median: qbenini(p = 0.5, shape, y0)"),
  constraints = eval(substitute(expression({
    dotzero <- .zero
    Musual <- 1
    eval(negzero.expression)
  }), list( .zero = zero ))),
  infos = eval(substitute(function(...) {
    list(Musual = 1,
         lshape = .lshape ,
         eshape = .eshape)
  }, list( .eshape = eshape, .lshape = lshape ))),
  initialize = eval(substitute(expression({
    temp5 <- w.y.check(w = w, y = y,
                       ncol.w.max = Inf, ncol.y.max = Inf,
                       out.wy = TRUE, colsyperw = 1, maximize = TRUE)
    w <- temp5$w
    y <- temp5$y
    ncoly <- ncol(y)

    Musual <- 1
    extra$ncoly <- ncoly
    extra$Musual <- Musual
    M <- Musual * ncoly

    mynames1 <- paste("shape", if (ncoly > 1) 1:ncoly else "", sep = "")
    predictors.names <- namesof(mynames1, .lshape , earg = .eshape ,
                                tag = FALSE)

    extra$y0 <- matrix( .y0 , n, ncoly, byrow = TRUE)
    if (any(y <= extra$y0))
      stop("some values of the response are <= argument 'y0' values")

    if (!length(etastart)) {
      probs.y <- (1:3) / 4
      qofy <- quantile(rep(y, times = w), probs = probs.y)
      if ( .imethod == 1) {
        shape.init <- mean(-log1p(-probs.y) / (log(qofy))^2)
      } else {
        shape.init <- median(-log1p(-probs.y) / (log(qofy))^2)
      }
      shape.init <- matrix(if (length( .ishape )) .ishape else
                           shape.init, n, ncoly, byrow = TRUE)
      etastart <- cbind(theta2eta(shape.init, .lshape , earg = .eshape ))
    }
  }), list( .imethod = imethod, .ishape = ishape,
            .lshape = lshape, .eshape = eshape,
            .y0 = y0 ))),
  linkinv = eval(substitute(function(eta, extra = NULL) {
    shape <- eta2theta(eta, .lshape , earg = .eshape )
    qbenini(p = 0.5, shape, y0 = extra$y0)
  }, list( .lshape = lshape, .eshape = eshape ))),
  last = eval(substitute(expression({
    Musual <- extra$Musual
    misc$link <- c(rep( .lshape , length = ncoly))
    names(misc$link) <- mynames1

    misc$earg <- vector("list", M)
    names(misc$earg) <- mynames1
    for(ii in 1:ncoly) {
      misc$earg[[ii]] <- .eshape
    }

    misc$Musual <- Musual
    misc$expected <- TRUE
    misc$multipleResponses <- TRUE
    extra$y0 <- .y0
  }), list( .lshape = lshape, .eshape = eshape, .y0 = y0 ))),
  loglikelihood = eval(substitute(
    function(mu, y, w, residuals = FALSE, eta, extra = NULL) {
    shape <- eta2theta(eta, .lshape , earg = .eshape )
    y0 <- extra$y0
    if (residuals) {
      stop("loglikelihood residuals not implemented yet")
    } else {
      sum(c(w) * dbenini(x = y, shape = shape, y0 = y0, log = TRUE))
    }
  }, list( .lshape = lshape, .eshape = eshape ))),
  vfamily = c("benini"),
  deriv = eval(substitute(expression({
    shape <- eta2theta(eta, .lshape , earg = .eshape )
    y0 <- extra$y0
    dl.dshape <- 1/shape - (log(y/y0))^2
    dshape.deta <- dtheta.deta(shape, .lshape , earg = .eshape )
    c(w) * dl.dshape * dshape.deta
  }), list( .lshape = lshape, .eshape = eshape ))),
  weight = eval(substitute(expression({
    ned2l.dshape2 <- 1 / shape^2
    wz <- ned2l.dshape2 * dshape.deta^2
    c(w) * wz
  }), list( .lshape = lshape, .eshape = eshape ))))
}


dpolono <- function(x, meanlog = 0, sdlog = 1, bigx = 170, ...) {
  mapply(function(x, meanlog, sdlog, ...) {
    if (abs(x) > floor(x)) {  # zero prob for -ve or non-integer
      0
    } else if (x > bigx) {
      z <- (log(x) - meanlog) / sdlog
      (1 + (z^2 + log(x) - meanlog - 1) / (2 * x * sdlog^2)) *
        exp(-0.5 * z^2) / (sqrt(2 * pi) * sdlog * x)
    } else
      integrate(function(t) exp(t * x - exp(t) -
                                0.5 * ((t - meanlog) / sdlog)^2),
                lower = -Inf, upper = Inf, ...)$value /
        (sqrt(2 * pi) * sdlog * exp(lgamma(x + 1.0)))
  }, x, meanlog, sdlog, ...)
}


ppolono <- function(q, meanlog = 0, sdlog = 1,
                    isOne = 1 - sqrt( .Machine$double.eps ), ...) {
  .cumprob <- rep(0, length(q))
  .cumprob[q == Inf] <- 1  # special case

  q <- floor(q)
  ii <- -1
  while (any(xActive <- ((.cumprob < isOne) & (q > ii))))
    .cumprob[xActive] <- .cumprob[xActive] +
      dpolono(ii <- (ii+1), meanlog, sdlog, ...)
  .cumprob
}
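# Illustrative checks for the Poisson-lognormal functions: dpolono()
# should sum to approximately 1, and the sample mean of rpolono() draws
# should be near exp(meanlog + sdlog^2/2).  The parameter values are
# arbitrary, and rpolono() is defined immediately below.
if (FALSE) {
  sum(dpolono(0:200, meanlog = 0.5, sdlog = 0.5))  # ~ 1
  ppolono(10, meanlog = 0.5, sdlog = 0.5)          # P(Y <= 10)
  mean(rpolono(1e5, meanlog = 0.5, sdlog = 0.5))   # ~ exp(0.625)
}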
.cumprob } rpolono <- function(n, meanlog = 0, sdlog = 1) { lambda <- rlnorm(n = n, meanlog = meanlog, sdlog = sdlog) rpois(n = n, lambda = lambda) } dtriangle <- function(x, theta, lower = 0, upper = 1, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) N <- max(length(x), length(theta), length(lower), length(upper)) x <- rep(x, length.out = N); lower <- rep(lower, length.out = N); upper <- rep(upper, length.out = N); theta <- rep(theta, length.out = N) denom1 <- ((upper-lower)*(theta-lower)) denom2 <- ((upper-lower)*(upper-theta)) logdensity <- rep(log(0), length.out = N) xok.neg <- (lower < x) & (x <= theta) xok.pos <- (theta <= x) & (x < upper) logdensity[xok.neg] = log(2 * (x[xok.neg] - lower[xok.neg]) / denom1[xok.neg]) logdensity[xok.pos] = log(2 * (upper[xok.pos] - x[xok.pos]) / denom2[xok.pos]) logdensity[lower >= upper] <- NaN logdensity[lower > theta] <- NaN logdensity[upper < theta] <- NaN if (log.arg) logdensity else exp(logdensity) } rtriangle <- function(n, theta, lower = 0, upper = 1) { if (!is.Numeric(n, integer.valued = TRUE, allowable.length = 1)) stop("bad input for argument 'n'") if (!is.Numeric(theta)) stop("bad input for argument 'theta'") if (!is.Numeric(lower)) stop("bad input for argument 'lower'") if (!is.Numeric(upper)) stop("bad input for argument 'upper'") if (!all(lower < theta & theta < upper)) stop("lower < theta < upper values are required") N <- n lower <- rep(lower, length.out = N); upper <- rep(upper, length.out = N); theta <- rep(theta, length.out = N) t1 <- sqrt(runif(n)) t2 <- sqrt(runif(n)) ifelse(runif(n) < (theta - lower) / (upper - lower), lower + (theta - lower) * t1, upper - (upper - theta) * t2) } qtriangle <- function(p, theta, lower = 0, upper = 1) { if (!is.Numeric(p, positive = TRUE)) stop("bad input for argument 'p'") if (!is.Numeric(theta)) stop("bad input for argument 'theta'") if (!is.Numeric(lower)) stop("bad input for argument 'lower'") if (!is.Numeric(upper)) stop("bad input for argument 'upper'") if (!all(lower < theta & theta < upper)) stop("lower < theta < upper values are required") N <- max(length(p), length(theta), length(lower), length(upper)) p <- rep(p, length.out = N); lower <- rep(lower, length.out = N); upper <- rep(upper, length.out = N); theta <- rep(theta, length.out = N) bad <- (p < 0) | (p > 1) if (any(bad)) stop("bad input for argument 'p'") Neg <- (p <= (theta - lower)/(upper - lower)) ans <- as.numeric(NA) * p temp1 <- p * (upper - lower) * (theta - lower) ans[ Neg] <- lower[ Neg] + sqrt(temp1[ Neg]) Pos <- (p >= (theta - lower)/(upper - lower)) if (any(Pos)) { pstar <- (p - (theta - lower)/(upper - lower)) / (1 - (theta - lower) / (upper - lower)) qstar <- cbind(1 - sqrt(1-pstar), 1 + sqrt(1-pstar)) qstar <- qstar[Pos,, drop = FALSE] qstar <- ifelse(qstar[, 1] >= 0 & qstar[, 1] <= 1, qstar[, 1], qstar[, 2]) ans[Pos] <- theta[Pos] + qstar * (upper - theta)[Pos] } ans } ptriangle <- function(q, theta, lower = 0, upper = 1) { if (!is.Numeric(q)) stop("bad input for argument 'q'") if (!is.Numeric(theta)) stop("bad input for argument 'theta'") if (!is.Numeric(lower)) stop("bad input for argument 'lower'") if (!is.Numeric(upper)) stop("bad input for argument 'upper'") if (!all(lower < theta & theta < upper)) stop("lower < theta < upper values are required") N <- max(length(q), length(theta), length(lower), length(upper)) q <- rep(q, length.out = N); lower <- rep(lower, length.out = N); upper <- rep(upper, length.out = N); theta <- rep(theta, length.out = N) 
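# Illustrative sketch (arbitrary values; run at top level): qtriangle()
# above and ptriangle() below are inverses of each other, and
# rtriangle() samples each side of the triangle via sqrt(runif(n)),
# which has density 2t on (0, 1):
#   pp <- c(0.05, 0.25, 0.50, 0.75, 0.95)
#   qq <- qtriangle(pp, theta = 0.3)              # lower = 0, upper = 1
#   max(abs(ptriangle(qq, theta = 0.3) - pp))     # ~ 0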
ans <- q * 0 qstar <- (q - lower)^2 / ((upper-lower) * (theta-lower)) Neg <- (lower <= q & q <= theta) ans[Neg] <- (qstar)[Neg] Pos <- (theta <= q & q <= upper) qstar <- (q - theta) / (upper-theta) ans[Pos] <- ((theta-lower)/(upper-lower))[Pos] + (qstar * (2-qstar) * (upper-theta) / (upper - lower))[Pos] ans[q >= upper] <- 1 ans } triangle <- function(lower = 0, upper = 1, link = elogit(min = lower, max = upper), itheta = NULL) { if (!is.Numeric(lower)) stop("bad input for argument 'lower'") if (!is.Numeric(upper)) stop("bad input for argument 'upper'") if (!all(lower < upper)) stop("lower < upper values are required") if (length(itheta) && !is.Numeric(itheta)) stop("bad input for 'itheta'") link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") new("vglmff", blurb = c("Triangle distribution\n\n", "Link: ", namesof("theta", link, earg = earg)), infos = eval(substitute(function(...) { list(Musual = 1, link = .link ) }, list( .link = link ))), initialize = eval(substitute(expression({ w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1) extra$lower <- rep( .lower, length.out = n) extra$upper <- rep( .upper, length.out = n) if (any(y <= extra$lower | y >= extra$upper)) stop("some y values outside the interval (lower, upper) detected") predictors.names <- namesof("theta", .link , earg = .earg , tag = FALSE) if (!length(etastart)) { Theta.init <- if (length( .itheta )) .itheta else { weighted.mean(y, w) } Theta.init <- rep(Theta.init, length = n) etastart <- theta2eta(Theta.init, .link , earg = .earg ) } }), list( .link = link, .earg = earg, .itheta=itheta, .upper = upper, .lower = lower ))), linkinv = eval(substitute(function(eta, extra = NULL) { Theta <- eta2theta(eta, .link , earg = .earg ) lower <- extra$lower upper <- extra$upper mu <- ((Theta^3 / 3 - lower * Theta^2 / 2 + lower^3 / 6) / (Theta - lower) + ((Theta^3 / 3 - upper * Theta^2 / 2 + upper^3 / 6) / (upper - Theta))) * 2 / (upper-lower) mu }, list( .link = link, .earg = earg ))), last = eval(substitute(expression({ misc$link <- c(theta = .link ) misc$earg <- list(theta = .earg ) misc$expected <- TRUE }), list( .link = link, .earg = earg ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { Theta <- eta2theta(eta, .link , earg = .earg ) lower <- extra$lower upper <- extra$upper if (residuals) { stop("loglikelihood residuals not implemented yet") } else { sum(c(w) * dtriangle(x = y, theta = Theta, lower = lower, upper = upper, log = TRUE)) } }, list( .link = link, .earg = earg ))), vfamily = c("triangle"), deriv = eval(substitute(expression({ Theta <- eta2theta(eta, .link , earg = .earg ) dTheta.deta <- dtheta.deta(Theta, .link , earg = .earg ) pos <- y > Theta neg <- y < Theta lower <- extra$lower upper <- extra$upper dl.dTheta <- 0 * y dl.dTheta[neg] <- -1 / (Theta[neg]-lower[neg]) dl.dTheta[pos] <- 1 / (upper[pos]-Theta[pos]) w * dl.dTheta * dTheta.deta }), list( .link = link, .earg = earg ))), weight = eval(substitute(expression({ d2l.dTheta2 <- 1 / ((Theta - lower) * (upper - Theta)) wz <- d2l.dTheta2 * dTheta.deta^2 c(w) * wz }), list( .link = link, .earg = earg )))) } adjust0.loglaplace1 <- function(ymat, y, w, rep0) { rangey0 <- range(y[y > 0]) ymat[ymat <= 0] <- min(rangey0[1] / 2, rep0) ymat } loglaplace1.control <- function(maxit = 300, ...)
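# loglaplace1() below performs quantile regression for positive
# responses via the asymmetric Laplace distribution (ALD): the
# asymmetry parameter kappa and the quantile level tau are tied by
# kappa = sqrt(tau / (1 - tau)), equivalently tau = kappa^2 / (1 + kappa^2).
# Hypothetical usage sketch ('mydata', 'y', 'x' are placeholders, not
# objects from this file):
#   fit <- vglm(y ~ x, loglaplace1(tau = c(0.25, 0.50, 0.75)),
#               data = mydata)
#   fit@extra$percentile   # sample percentiles attained by the fit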
{ list(maxit = maxit) } loglaplace1 <- function(tau = NULL, llocation = "loge", ilocation = NULL, kappa = sqrt(tau/(1-tau)), Scale.arg = 1, shrinkage.init = 0.95, parallelLocation = FALSE, digt = 4, dfmu.init = 3, rep0 = 0.5, # 0.0001, minquantile = 0, maxquantile = Inf, imethod = 1, zero = NULL) { if (length(minquantile) != 1) stop("bad input for argument 'minquantile'") if (length(maxquantile) != 1) stop("bad input for argument 'maxquantile'") if (!is.Numeric(rep0, positive = TRUE, allowable.length = 1) || rep0 > 1) stop("bad input for argument 'rep0'") if (!is.Numeric(kappa, positive = TRUE)) stop("bad input for argument 'kappa'") if (length(tau) && max(abs(kappa - sqrt(tau/(1-tau)))) > 1.0e-6) stop("arguments 'kappa' and 'tau' do not match") llocat <- as.list(substitute(llocation)) elocat <- link2list(llocat) llocat <- attr(elocat, "function.name") ilocat <- ilocation llocat.identity <- as.list(substitute("identity")) elocat.identity <- link2list(llocat.identity) llocat.identity <- attr(elocat.identity, "function.name") if (!is.Numeric(imethod, allowable.length = 1, integer.valued = TRUE, positive = TRUE) || imethod > 4) stop("argument 'imethod' must be 1, 2 or ... 4") if (!is.Numeric(shrinkage.init, allowable.length = 1) || shrinkage.init < 0 || shrinkage.init > 1) stop("bad input for argument 'shrinkage.init'") if (length(zero) && !(is.Numeric(zero, integer.valued = TRUE, positive = TRUE) || is.character(zero ))) stop("bad input for argument 'zero'") if (!is.Numeric(Scale.arg, positive = TRUE)) stop("bad input for argument 'Scale.arg'") if (!is.logical(parallelLocation) || length(parallelLocation) != 1) stop("bad input for argument 'parallelLocation'") fittedMean <- FALSE if (!is.logical(fittedMean) || length(fittedMean) != 1) stop("bad input for argument 'fittedMean'") mystring0 <- namesof("location", llocat, earg = elocat) mychars <- substring(mystring0, first = 1:nchar(mystring0), last = 1:nchar(mystring0)) mychars[nchar(mystring0)] <- ", inverse = TRUE)" mystring1 <- paste(mychars, collapse = "") new("vglmff", blurb = c("One-parameter ", if (llocat == "loge") "log-Laplace" else c(llocat, "-Laplace"), " distribution\n\n", "Links: ", mystring0, "\n", "\n", "Quantiles: ", mystring1), constraints = eval(substitute(expression({ constraints <- cm.vgam(matrix(1, M, 1), x, .parallelLocation, constraints, apply.int = FALSE) constraints <- cm.zero.vgam(constraints, x, .zero, M) }), list( .parallelLocation = parallelLocation, .Scale.arg = Scale.arg, .zero = zero ))), initialize = eval(substitute(expression({ extra$M <- M <- max(length( .Scale.arg ), length( .kappa )) # Recycle extra$Scale <- rep( .Scale.arg, length = M) extra$kappa <- rep( .kappa, length = M) extra$tau <- extra$kappa^2 / (1 + extra$kappa^2) temp5 <- w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1, out.wy = TRUE, maximize = TRUE) w <- temp5$w y <- temp5$y extra$n <- n extra$y.names <- y.names <- paste("tau = ", round(extra$tau, digits = .digt), sep = "") extra$individual <- FALSE predictors.names <- namesof(paste("quantile(", y.names, ")", sep = ""), .llocat , earg = .elocat , tag = FALSE) if (FALSE) { if (min(y) < 0) stop("negative response values detected") if ((prop.0. <- weighted.mean(1*(y == 0), w)) >= min(extra$tau)) stop("sample proportion of 0s == ", round(prop.0., digits = 4), " > minimum 'tau' value. 
Choose larger values for 'tau'.") if ( .rep0 == 0.5 && (ave.tau <- (weighted.mean(1*(y <= 0), w) + weighted.mean(1*(y <= 1), w))/2) >= min(extra$tau)) warning("the minimum 'tau' value should be greater than ", round(ave.tau, digits = 4)) } if (!length(etastart)) { if ( .imethod == 1) { locat.init <- quantile(rep(y, w), probs= extra$tau) + 1/16 } else if ( .imethod == 2) { locat.init <- weighted.mean(y, w) } else if ( .imethod == 3) { locat.init <- median(y) } else if ( .imethod == 4) { Fit5 <- vsmooth.spline(x = x[, min(ncol(x), 2)], y = y, w = w, df = .dfmu.init) locat.init <- c(predict(Fit5, x = x[, min(ncol(x), 2)])$y) } else { use.this <- weighted.mean(y, w) locat.init <- (1- .sinit)*y + .sinit * use.this } locat.init <- if (length( .ilocat)) rep( .ilocat, length.out = M) else rep(locat.init, length.out = M) locat.init <- matrix(locat.init, n, M, byrow = TRUE) if ( .llocat == "loge") locat.init <- abs(locat.init) etastart <- cbind(theta2eta(locat.init, .llocat , earg = .elocat )) } }), list( .imethod = imethod, .dfmu.init = dfmu.init, .rep0 = rep0, .sinit = shrinkage.init, .digt = digt, .elocat = elocat, .Scale.arg = Scale.arg, .llocat = llocat, .kappa = kappa, .ilocat = ilocat ))), linkinv = eval(substitute(function(eta, extra = NULL) { locat.y = eta2theta(eta, .llocat , earg = .elocat ) if ( .fittedMean ) { stop("Yet to do: handle 'fittedMean = TRUE'") kappamat <- matrix(extra$kappa, extra$n, extra$M, byrow = TRUE) Scale <- matrix(extra$Scale, extra$n, extra$M, byrow = TRUE) locat.y + Scale * (1/kappamat - kappamat) } else { if (length(locat.y) > extra$n) dimnames(locat.y) <- list(dimnames(eta)[[1]], extra$y.names) locat.y } locat.y[locat.y < .minquantile] = .minquantile locat.y[locat.y > .maxquantile] = .maxquantile locat.y }, list( .elocat = elocat, .llocat = llocat, .minquantile = minquantile, .maxquantile = maxquantile, .fittedMean = fittedMean, .Scale.arg = Scale.arg, .kappa = kappa ))), last = eval(substitute(expression({ misc$link <- c(location = .llocat) misc$earg <- list(location = .elocat ) misc$expected <- TRUE extra$kappa <- misc$kappa <- .kappa extra$tau <- misc$tau <- misc$kappa^2 / (1 + misc$kappa^2) extra$Scale.arg <- .Scale.arg misc$true.mu <- .fittedMean # @fitted is not a true mu? 
misc$rep0 <- .rep0 misc$minquantile <- .minquantile misc$maxquantile <- .maxquantile extra$percentile <- numeric(length(misc$kappa)) locat.y <- as.matrix(locat.y) for(ii in 1:length(misc$kappa)) extra$percentile[ii] <- 100 * weighted.mean(y <= locat.y[, ii], w) }), list( .elocat = elocat, .llocat = llocat, .Scale.arg = Scale.arg, .fittedMean = fittedMean, .minquantile = minquantile, .maxquantile = maxquantile, .rep0 = rep0, .kappa = kappa ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { kappamat <- matrix(extra$kappa, extra$n, extra$M, byrow = TRUE) Scale.w <- matrix(extra$Scale, extra$n, extra$M, byrow = TRUE) ymat <- matrix(y, extra$n, extra$M) if ( .llocat == "loge") ymat <- adjust0.loglaplace1(ymat = ymat, y = y, w = w, rep0= .rep0) w.mat <- theta2eta(ymat, .llocat , earg = .elocat ) # e.g., logoff() if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ALDans <- sum(c(w) * dalap(x = c(w.mat), locat = c(eta), scale = c(Scale.w), kappa = c(kappamat), log = TRUE)) ALDans } }, list( .elocat = elocat, .llocat = llocat, .rep0 = rep0, .Scale.arg = Scale.arg, .kappa = kappa ))), vfamily = c("loglaplace1"), deriv = eval(substitute(expression({ ymat <- matrix(y, n, M) Scale.w <- matrix(extra$Scale, extra$n, extra$M, byrow = TRUE) locat.w <- eta locat.y <- eta2theta(locat.w, .llocat , earg = .elocat ) kappamat <- matrix(extra$kappa, n, M, byrow = TRUE) ymat <- adjust0.loglaplace1(ymat = ymat, y = y, w = w, rep0= .rep0) w.mat <- theta2eta(ymat, .llocat , earg = .elocat ) # e.g., logit() zedd <- abs(w.mat-locat.w) / Scale.w dl.dlocat <- ifelse(w.mat >= locat.w, kappamat, 1/kappamat) * sqrt(2) * sign(w.mat-locat.w) / Scale.w dlocat.deta <- dtheta.deta(locat.w, .llocat.identity , earg = .elocat.identity ) c(w) * cbind(dl.dlocat * dlocat.deta) }), list( .Scale.arg = Scale.arg, .rep0 = rep0, .llocat = llocat, .elocat = elocat, .elocat.identity = elocat.identity, .llocat.identity = llocat.identity, .kappa = kappa ))), weight = eval(substitute(expression({ ned2l.dlocat2 <- 2 / Scale.w^2 wz <- cbind(ned2l.dlocat2 * dlocat.deta^2) c(w) * wz }), list( .Scale.arg = Scale.arg, .elocat = elocat, .llocat = llocat, .elocat.identity = elocat.identity, .llocat.identity = llocat.identity )))) } loglaplace2.control <- function(save.weight = TRUE, ...) 
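# loglaplace2() below adds a second (scale) parameter; note the warning
# in its body that loglaplace1() is usually the better choice.  When
# 'nsimEIM' is supplied, the expected information is approximated by
# simulating rloglap() deviates; otherwise diagonal closed-form working
# weights are used.  Hypothetical sketch ('mydata', 'y', 'x' are
# placeholders):
#   fit2 <- vglm(y ~ x, loglaplace2(tau = 0.5, nsimEIM = 200),
#                data = mydata)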
{ list(save.weight = save.weight) } loglaplace2 <- function(tau = NULL, llocation = "loge", lscale = "loge", ilocation = NULL, iscale = NULL, kappa = sqrt(tau/(1-tau)), shrinkage.init = 0.95, parallelLocation = FALSE, digt = 4, eq.scale = TRUE, dfmu.init = 3, rep0 = 0.5, nsimEIM = NULL, imethod = 1, zero = "(1 + M/2):M") { warning("it is best to use loglaplace1()") if (length(nsimEIM) && (!is.Numeric(nsimEIM, allowable.length = 1, integer.valued = TRUE) || nsimEIM <= 10)) stop("argument 'nsimEIM' should be an integer greater than 10") if (!is.Numeric(rep0, positive = TRUE, allowable.length = 1) || rep0 > 1) stop("bad input for argument 'rep0'") if (!is.Numeric(kappa, positive = TRUE)) stop("bad input for argument 'kappa'") if (length(tau) && max(abs(kappa - sqrt(tau/(1-tau)))) > 1.0e-6) stop("arguments 'kappa' and 'tau' do not match") llocat <- as.list(substitute(llocation)) elocat <- link2list(llocat) llocat <- attr(elocat, "function.name") ilocat <- ilocation lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") if (!is.Numeric(imethod, allowable.length = 1, integer.valued = TRUE, positive = TRUE) || imethod > 4) stop("argument 'imethod' must be 1, 2 or ... 4") if (length(iscale) && !is.Numeric(iscale, positive = TRUE)) stop("bad input for argument 'iscale'") if (!is.Numeric(shrinkage.init, allowable.length = 1) || shrinkage.init < 0 || shrinkage.init > 1) stop("bad input for argument 'shrinkage.init'") if (length(zero) && !(is.Numeric(zero, integer.valued = TRUE, positive = TRUE) || is.character(zero ))) stop("bad input for argument 'zero'") if (!is.logical(eq.scale) || length(eq.scale) != 1) stop("bad input for argument 'eq.scale'") if (!is.logical(parallelLocation) || length(parallelLocation) != 1) stop("bad input for argument 'parallelLocation'") fittedMean <- FALSE if (!is.logical(fittedMean) || length(fittedMean) != 1) stop("bad input for argument 'fittedMean'") if (llocat != "loge") stop("argument 'llocat' must be \"loge\"") new("vglmff", blurb = c("Two-parameter log-Laplace distribution\n\n", "Links: ", namesof("location", llocat, earg = elocat), ", ", namesof("scale", lscale, earg = escale), "\n", "\n", "Mean: zz location + scale * ", "(1/kappa - kappa) / sqrt(2)", "\n", "Quantiles: location", "\n", "Variance: zz scale^2 * (1 + kappa^4) / (2 * kappa^2)"), constraints = eval(substitute(expression({ .ZERO <- .zero if (is.character( .ZERO)) .ZERO <- eval(parse(text = .ZERO)) .PARALLEL <- .parallelLocation parelHmat <- if (is.logical( .PARALLEL ) && .PARALLEL ) matrix(1, M/2, 1) else diag(M/2) scaleHmat <- if (is.logical( .eq.scale ) && .eq.scale ) matrix(1, M/2, 1) else diag(M/2) mycmatrix <- cbind(rbind( parelHmat, 0*parelHmat), rbind(0*scaleHmat, scaleHmat)) constraints <- cm.vgam(mycmatrix, x, .PARALLEL, constraints, apply.int = FALSE) constraints <- cm.zero.vgam(constraints, x, .ZERO, M) if ( .PARALLEL && names(constraints)[1] == "(Intercept)") { parelHmat <- diag(M/2) mycmatrix <- cbind(rbind( parelHmat, 0*parelHmat), rbind(0*scaleHmat, scaleHmat)) constraints[["(Intercept)"]] <- mycmatrix } if (is.logical( .eq.scale) && .eq.scale && names(constraints)[1] == "(Intercept)") { temp3 <- constraints[["(Intercept)"]] temp3 <- cbind(temp3[,1:(M/2)], rbind(0*scaleHmat, scaleHmat)) constraints[["(Intercept)"]] = temp3 } }), list( .eq.scale = eq.scale, .parallelLocation = parallelLocation, .zero = zero ))), initialize = eval(substitute(expression({ extra$kappa <- .kappa extra$tau <- extra$kappa^2 / (1 + extra$kappa^2) temp5 <- 
w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1, out.wy = TRUE, maximize = TRUE) w <- temp5$w y <- temp5$y extra$M <- M <- 2 * length(extra$kappa) extra$n <- n extra$y.names <- y.names <- paste("tau = ", round(extra$tau, digits = .digt), sep = "") extra$individual = FALSE predictors.names <- c(namesof(paste("quantile(", y.names, ")", sep = ""), .llocat , earg = .elocat, tag = FALSE), namesof(if (M == 2) "scale" else paste("scale", 1:(M/2), sep = ""), .lscale , earg = .escale, tag = FALSE)) if (weighted.mean(1 * (y < 0.001), w) >= min(extra$tau)) stop("sample proportion of 0s > minimum 'tau' value. ", "Choose larger values for 'tau'.") if (!length(etastart)) { if ( .imethod == 1) { locat.init.y <- weighted.mean(y, w) scale.init <- sqrt(var(y) / 2) } else if ( .imethod == 2) { locat.init.y <- median(y) scale.init <- sqrt(sum(c(w)*abs(y-median(y))) / (sum(w) *2)) } else if ( .imethod == 3) { Fit5 <- vsmooth.spline(x = x[, min(ncol(x), 2)], y = y, w = w, df = .dfmu.init) locat.init.y <- c(predict(Fit5, x = x[, min(ncol(x), 2)])$y) scale.init <- sqrt(sum(c(w)*abs(y-median(y))) / (sum(w) *2)) } else { use.this <- weighted.mean(y, w) locat.init.y <- (1- .sinit)*y + .sinit * use.this scale.init <- sqrt(sum(c(w)*abs(y-median(y ))) / (sum(w) *2)) } locat.init.y <- if (length( .ilocat )) rep( .ilocat , length.out = n) else rep(locat.init.y, length.out = n) locat.init.y <- matrix(locat.init.y, n, M/2) scale.init <- if (length( .iscale)) rep( .iscale, length.out = n) else rep(scale.init, length.out = n) scale.init <- matrix(scale.init, n, M/2) etastart <- cbind(theta2eta(locat.init.y, .llocat , earg = .elocat ), theta2eta(scale.init, .lscale , earg = .escale )) } }), list( .imethod = imethod, .dfmu.init = dfmu.init, .kappa = kappa, .sinit = shrinkage.init, .digt = digt, .llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = escale, .ilocat = ilocat, .iscale = iscale ))), linkinv = eval(substitute(function(eta, extra = NULL) { locat.y <- eta2theta(eta[,1:(extra$M/2), drop = FALSE], .llocat , earg = .elocat ) if ( .fittedMean ) { kappamat <- matrix(extra$kappa, extra$n, extra$M/2, byrow = TRUE) Scale.y <- eta2theta(eta[,(1+extra$M/2):extra$M], .lscale , earg = .escale ) locat.y + Scale.y * (1/kappamat - kappamat) } else { dimnames(locat.y) = list(dimnames(eta)[[1]], extra$y.names) locat.y } }, list( .llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = escale, .fittedMean = fittedMean, .kappa = kappa ))), last = eval(substitute(expression({ misc$link <- c(location = .llocat , scale = .lscale ) misc$earg <- list(location = .elocat , scale = .escale ) misc$expected <- TRUE extra$kappa <- misc$kappa <- .kappa extra$tau <- misc$tau <- misc$kappa^2 / (1 + misc$kappa^2) misc$true.mu <- .fittedMean # @fitted is not a true mu? 
misc$nsimEIM <- .nsimEIM misc$rep0 <- .rep0 extra$percentile <- numeric(length(misc$kappa)) locat <- as.matrix(locat.y) for(ii in 1:length(misc$kappa)) extra$percentile[ii] <- 100 * weighted.mean(y <= locat.y[, ii], w) }), list( .elocat = elocat, .llocat = llocat, .escale = escale, .lscale = lscale, .fittedMean = fittedMean, .nsimEIM = nsimEIM, .rep0 = rep0, .kappa = kappa ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { kappamat <- matrix(extra$kappa, extra$n, extra$M/2, byrow = TRUE) Scale.w <- eta2theta(eta[,(1+extra$M/2):extra$M], .lscale , earg = .escale ) ymat <- matrix(y, extra$n, extra$M/2) ymat[ymat <= 0] <- min(min(y[y > 0]), .rep0) # Adjust for 0s ell.mat <- matrix(c(dloglaplace(x = c(ymat), locat.ald = c(eta[,1:(extra$M/2)]), scale.ald = c(Scale.w), kappa = c(kappamat), log = TRUE)), extra$n, extra$M/2) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { sum(c(w) * ell.mat) } }, list( .elocat = elocat, .llocat = llocat, .escale = escale, .lscale = lscale, .rep0 = rep0, .kappa = kappa ))), vfamily = c("loglaplace2"), deriv = eval(substitute(expression({ ymat <- matrix(y, n, M/2) Scale.w <- eta2theta(eta[,(1+extra$M/2):extra$M], .lscale , earg = .escale ) locat.w <- eta[,1:(extra$M/2), drop = FALSE] locat.y <- eta2theta(locat.w, .llocat , earg = .elocat ) kappamat <- matrix(extra$kappa, n, M/2, byrow = TRUE) w.mat <- ymat w.mat[w.mat <= 0] <- min(min(w.mat[w.mat > 0]), .rep0) # Adjust for 0s w.mat <- theta2eta(w.mat, .llocat , earg = .elocat ) # w.mat=log(w.mat) zedd <- abs(w.mat-locat.w) / Scale.w dl.dlocat <- sqrt(2) * ifelse(w.mat >= locat.w, kappamat, 1/kappamat) * sign(w.mat-locat.w) / Scale.w dl.dscale <- sqrt(2) * ifelse(w.mat >= locat.w, kappamat, 1/kappamat) * zedd / Scale.w - 1 / Scale.w dlocat.deta <- dtheta.deta(locat.w, .llocat , earg = .elocat ) dscale.deta <- dtheta.deta(Scale.w, .lscale , earg = .escale ) c(w) * cbind(dl.dlocat * dlocat.deta, dl.dscale * dscale.deta) }), list( .escale = escale, .lscale = lscale, .elocat = elocat, .llocat = llocat, .rep0 = rep0, .kappa = kappa ))), weight = eval(substitute(expression({ run.varcov <- 0 ind1 <- iam(NA, NA, M = M, both = TRUE, diag = TRUE) dthetas.detas <- cbind(dlocat.deta, dscale.deta) if (length( .nsimEIM )) { for(ii in 1:( .nsimEIM )) { wsim <- matrix(rloglap(n*M/2, loc = c(locat.w), sca = c(Scale.w), kappa = c(kappamat)), n, M/2) zedd <- abs(wsim-locat.w) / Scale.w dl.dlocat <- sqrt(2) * ifelse(wsim >= locat.w, kappamat, 1/kappamat) * sign(wsim-locat.w) / Scale.w dl.dscale <- sqrt(2) * ifelse(wsim >= locat.w, kappamat, 1/kappamat) * zedd / Scale.w - 1 / Scale.w rm(wsim) temp3 <- cbind(dl.dlocat, dl.dscale) # n x M matrix run.varcov <- ((ii-1) * run.varcov + temp3[,ind1$row.index]*temp3[,ind1$col.index]) / ii } wz <- if (intercept.only) matrix(colMeans(run.varcov), n, ncol(run.varcov), byrow = TRUE) else run.varcov wz <- wz * dthetas.detas[,ind1$row] * dthetas.detas[,ind1$col] wz <- c(w) * matrix(wz, n, dimm(M)) wz } else { d2l.dlocat2 <- 2 / (Scale.w * locat.w)^2 d2l.dscale2 <- 1 / Scale.w^2 wz <- cbind(d2l.dlocat2 * dlocat.deta^2, d2l.dscale2 * dscale.deta^2) c(w) * wz } }), list( .elocat = elocat, .escale = escale, .llocat = llocat, .lscale = lscale, .nsimEIM = nsimEIM) ))) } logitlaplace1.control <- function(maxit = 300, ...) 
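# logitlaplace1() below is the analogue of loglaplace1() for responses
# in (0, 1): quantiles are modelled on the logit scale, and responses
# exactly equal to 0 or 1 are first nudged inside (0, 1) by
# adjust01.logitlaplace1() (controlled by 'rep01'), since logit(0) and
# logit(1) are infinite.  Hypothetical sketch ('prop' in (0, 1), 'x',
# 'mydata' are placeholders):
#   fitL <- vglm(prop ~ x, logitlaplace1(tau = c(0.25, 0.50, 0.75)),
#                data = mydata)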
{ list(maxit = maxit) } adjust01.logitlaplace1 <- function(ymat, y, w, rep01) { rangey01 <- range(y[(y > 0) & (y < 1)]) ymat[ymat <= 0] <- min(rangey01[1] / 2, rep01 / w[y <= 0]) ymat[ymat >= 1] <- max((1 + rangey01[2]) / 2, 1 - rep01 / w[y >= 1]) ymat } logitlaplace1 <- function(tau = NULL, llocation = "logit", ilocation = NULL, kappa = sqrt(tau/(1-tau)), Scale.arg = 1, shrinkage.init = 0.95, parallelLocation = FALSE, digt = 4, dfmu.init = 3, rep01 = 0.5, imethod = 1, zero = NULL) { if (!is.Numeric(rep01, positive = TRUE, allowable.length = 1) || rep01 > 0.5) stop("bad input for argument 'rep01'") if (!is.Numeric(kappa, positive = TRUE)) stop("bad input for argument 'kappa'") if (length(tau) && max(abs(kappa - sqrt(tau/(1-tau)))) > 1.0e-6) stop("arguments 'kappa' and 'tau' do not match") llocat <- as.list(substitute(llocation)) elocat <- link2list(llocat) llocat <- attr(elocat, "function.name") ilocat <- ilocation llocat.identity <- as.list(substitute("identity")) elocat.identity <- link2list(llocat.identity) llocat.identity <- attr(elocat.identity, "function.name") if (!is.Numeric(imethod, allowable.length = 1, integer.valued = TRUE, positive = TRUE) || imethod > 4) stop("argument 'imethod' must be 1, 2 or ... 4") if (!is.Numeric(shrinkage.init, allowable.length = 1) || shrinkage.init < 0 || shrinkage.init > 1) stop("bad input for argument 'shrinkage.init'") if (length(zero) && !(is.Numeric(zero, integer.valued = TRUE, positive = TRUE) || is.character(zero ))) stop("bad input for argument 'zero'") if (!is.Numeric(Scale.arg, positive = TRUE)) stop("bad input for argument 'Scale.arg'") if (!is.logical(parallelLocation) || length(parallelLocation) != 1) stop("bad input for argument 'parallelLocation'") fittedMean <- FALSE if (!is.logical(fittedMean) || length(fittedMean) != 1) stop("bad input for argument 'fittedMean'") mystring0 <- namesof("location", llocat, earg = elocat) mychars <- substring(mystring0, first = 1:nchar(mystring0), last = 1:nchar(mystring0)) mychars[nchar(mystring0)] = ", inverse = TRUE)" mystring1 <- paste(mychars, collapse = "") new("vglmff", blurb = c("One-parameter ", llocat, "-Laplace distribution\n\n", "Links: ", mystring0, "\n", "\n", "Quantiles: ", mystring1), constraints = eval(substitute(expression({ constraints <- cm.vgam(matrix(1, M, 1), x, .parallelLocation, constraints, apply.int = FALSE) constraints <- cm.zero.vgam(constraints, x, .zero, M) }), list( .parallelLocation = parallelLocation, .Scale.arg = Scale.arg, .zero = zero ))), initialize = eval(substitute(expression({ extra$M <- M <- max(length( .Scale.arg ), length( .kappa )) # Recycle extra$Scale <- rep( .Scale.arg, length = M) extra$kappa <- rep( .kappa, length = M) extra$tau <- extra$kappa^2 / (1 + extra$kappa^2) temp5 <- w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1, out.wy = TRUE, maximize = TRUE) w <- temp5$w y <- temp5$y extra$n <- n extra$y.names <- y.names <- paste("tau = ", round(extra$tau, digits = .digt), sep = "") extra$individual <- FALSE predictors.names <- namesof(paste("quantile(", y.names, ")", sep = ""), .llocat , earg = .elocat, tag = FALSE) if (all(y == 0 | y == 1)) stop("response cannot be all 0s or 1s") if (min(y) < 0) stop("negative response values detected") if (max(y) > 1) stop("response values greater than 1 detected") if ((prop.0. <- weighted.mean(1*(y == 0), w)) >= min(extra$tau)) stop("sample proportion of 0s == ", round(prop.0., digits = 4), " > minimum 'tau' value. Choose larger values for 'tau'.") if ((prop.1. 
<- weighted.mean(1*(y == 1), w)) >= 1 - max(extra$tau)) stop("sample proportion of 1s == ", round(prop.1., digits = 4), " > 1 - maximum 'tau' value. Choose smaller values for 'tau'.") if (!length(etastart)) { if ( .imethod == 1) { locat.init <- quantile(rep(y, w), probs= extra$tau) } else if ( .imethod == 2) { locat.init <- weighted.mean(y, w) locat.init <- median(rep(y, w)) } else if ( .imethod == 3) { use.this <- weighted.mean(y, w) locat.init <- (1- .sinit)*y + use.this * .sinit } else { stop("this option not implemented") } locat.init <- if (length( .ilocat )) rep( .ilocat , length.out = M) else rep(locat.init, length.out = M) locat.init <- matrix(locat.init, n, M, byrow = TRUE) locat.init <- abs(locat.init) etastart <- cbind(theta2eta(locat.init, .llocat , earg = .elocat )) } }), list( .imethod = imethod, .dfmu.init = dfmu.init, .sinit = shrinkage.init, .digt = digt, .elocat = elocat, .Scale.arg = Scale.arg, .llocat = llocat, .kappa = kappa, .ilocat = ilocat ))), linkinv = eval(substitute(function(eta, extra = NULL) { locat.y <- eta2theta(eta, .llocat , earg = .elocat ) if ( .fittedMean ) { stop("Yet to do: handle 'fittedMean = TRUE'") kappamat <- matrix(extra$kappa, extra$n, extra$M, byrow = TRUE) Scale <- matrix(extra$Scale, extra$n, extra$M, byrow = TRUE) locat.y + Scale * (1/kappamat - kappamat) } else { if (length(locat.y) > extra$n) dimnames(locat.y) <- list(dimnames(eta)[[1]], extra$y.names) locat.y } }, list( .elocat = elocat, .llocat = llocat, .fittedMean = fittedMean, .Scale.arg = Scale.arg, .kappa = kappa ))), last = eval(substitute(expression({ misc$link <- c(location = .llocat ) misc$earg <- list(location = .elocat ) misc$expected <- TRUE extra$kappa <- misc$kappa <- .kappa extra$tau <- misc$tau <- misc$kappa^2 / (1 + misc$kappa^2) extra$Scale.arg <- .Scale.arg misc$true.mu <- .fittedMean # @fitted is not a true mu?
misc$rep01 <- .rep01 extra$percentile <- numeric(length(misc$kappa)) locat.y <- eta2theta(eta, .llocat , earg = .elocat ) locat.y <- as.matrix(locat.y) for(ii in 1:length(misc$kappa)) extra$percentile[ii] <- 100 * weighted.mean(y <= locat.y[, ii], w) }), list( .elocat = elocat, .llocat = llocat, .Scale.arg = Scale.arg, .fittedMean = fittedMean, .rep01 = rep01, .kappa = kappa ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { kappamat <- matrix(extra$kappa, extra$n, extra$M, byrow = TRUE) Scale.w <- matrix(extra$Scale, extra$n, extra$M, byrow = TRUE) ymat <- matrix(y, extra$n, extra$M) ymat <- adjust01.logitlaplace1(ymat = ymat, y = y, w = w, rep01 = .rep01) w.mat <- theta2eta(ymat, .llocat , earg = .elocat ) # e.g., logit() if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ALDans = sum(c(w) * dalap(x = c(w.mat), location = c(eta), scale = c(Scale.w), kappa = c(kappamat), log = TRUE)) ALDans } }, list( .elocat = elocat, .llocat = llocat, .rep01 = rep01, .Scale.arg = Scale.arg, .kappa = kappa ))), vfamily = c("logitlaplace1"), deriv = eval(substitute(expression({ ymat <- matrix(y, n, M) Scale.w <- matrix(extra$Scale, extra$n, extra$M, byrow = TRUE) locat.w <- eta kappamat <- matrix(extra$kappa, n, M, byrow = TRUE) ymat <- adjust01.logitlaplace1(ymat = ymat, y = y, w = w, rep01 = .rep01) w.mat <- theta2eta(ymat, .llocat , earg = .elocat ) # e.g., logit() zedd <- abs(w.mat-locat.w) / Scale.w dl.dlocat <- ifelse(w.mat >= locat.w, kappamat, 1/kappamat) * sqrt(2) * sign(w.mat-locat.w) / Scale.w dlocat.deta <- dtheta.deta(locat.w, "identity", earg = .elocat.identity ) c(w) * cbind(dl.dlocat * dlocat.deta) }), list( .Scale.arg = Scale.arg, .rep01 = rep01, .elocat = elocat, .llocat = llocat, .elocat.identity = elocat.identity, .llocat.identity = llocat.identity, .kappa = kappa ))), weight = eval(substitute(expression({ d2l.dlocat2 <- 2 / Scale.w^2 wz <- cbind(d2l.dlocat2 * dlocat.deta^2) c(w) * wz }), list( .Scale.arg = Scale.arg, .elocat = elocat, .llocat = llocat )))) } VGAM/R/family.positive.R0000644000176000001440000020146312136651110014504 0ustar ripleyusers# These functions are # Copyright (C) 1998-2013 T.W. Yee, University of Auckland. # All rights reserved. 
N.hat.posbernoulli <- function(eta, link, earg = list(), R = NULL, w = NULL, X_vlm = NULL, Hlist = NULL, extra = list(), model.type = c("b", "t", "tb") ) { if (!is.null(w) && !all(1 == w)) warning("estimate of N may be wrong when prior weights ", "are not all unity") model.type <- match.arg(model.type, c("b", "t", "tb"))[1] tau <- switch(model.type, "b" = extra$tau, "t" = ncol(eta), "tb" = (ncol(eta) + 1) / 2) if (length(extra$tau) && extra$tau != tau) warning("variable 'tau' is mistaken") # Checking only jay.index <- switch(model.type, "b" = rep(1, length = tau), # Subset: 1 out of 1:2 "t" = 1:tau, # All of them "tb" = 1:tau) # Subset: first tau of them out of M = 2*tau-1 prc <- eta2theta(eta[, jay.index], link, earg = earg) # cap.probs QQQ <- exp(rowSums(log1p(-prc))) pibbeta <- exp(log1p(-QQQ)) # One.minus.QQQ N.hat <- sum(1 / pibbeta) # Point estimate ss2 <- sum(QQQ / pibbeta^2) # Assumes bbeta is known if (length(R)) { dvect <- matrix(0, length(pibbeta), ncol = ncol(X_vlm)) M <- nrow(Hlist[[1]]) n_lm <- nrow(X_vlm) / M # Number of rows of the LM matrix dprc.deta <- dtheta.deta(prc, link, earg = earg) Hmatrices <- matrix(c(unlist(Hlist)), nrow = M) for (jay in 1:tau) { lapred.index <- jay.index[jay] Index0 <- Hmatrices[lapred.index, ] != 0 X_lm_jay <- X_vlm[(0:(n_lm - 1)) * M + lapred.index, Index0, drop = FALSE] dvect[, Index0] <- dvect[, Index0] + (QQQ / (1-prc[, jay])) * dprc.deta[, jay] * X_lm_jay } dvect <- dvect * (-1 / pibbeta^2) dvect <- colSums(dvect) # Now a vector ncol_X_vlm <- nrow(R) rinv <- diag(ncol_X_vlm) rinv <- backsolve(R, rinv) rowlen <- drop(((rinv^2) %*% rep(1, ncol_X_vlm))^0.5) covun <- rinv %*% t(rinv) vecTF <- FALSE for (jay in 1:tau) { lapred.index <- jay.index[jay] vecTF <- vecTF | (Hmatrices[lapred.index, ] != 0) } vecTF.index <- (1:length(vecTF))[vecTF] covun <- covun[vecTF.index, vecTF.index, drop = FALSE] dvect <- dvect[vecTF.index, drop = FALSE] } list(N.hat = N.hat, SE.N.hat = if (length(R)) sqrt(ss2 + t(dvect) %*% covun %*% dvect) else sqrt(ss2) ) } aux.posbernoulli <- function(y, check.y = FALSE) { y <- as.matrix(y) if ((tau <- ncol(y)) == 1) stop("argument 'y' needs to be a matrix with at least two columns") if (check.y) { if (!all(y == 0 | y == 1 | y == 1/tau | is.na(y))) stop("response 'y' must contain 0s and 1s only") } zeddij <- cbind(0, t(apply(y, 1, cumsum))) # tau + 1 columns zij <- (0 + (zeddij > 0))[, 1:tau] # 0 or 1. 
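# N.hat.posbernoulli() above is a Horvitz-Thompson-type estimator of
# the population size: writing pi_i = 1 - prod_t (1 - p_{it}) for the
# probability that animal i is captured at least once, it returns
# N.hat = sum_i 1 / pi_i, and (when the QR factor R is supplied) adds a
# delta-method term for the estimated coefficients to SE.N.hat.
# Hypothetical access after fitting, e.g., a posbernoulli.t() model:
#   fit@extra$N.hat; fit@extra$SE.N.hat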
if (length(colnames(y))) colnames(zij) <- colnames(y) cp1 <- numeric(nrow(y)) for (jay in tau:1) cp1[y[, jay] > 0] <- jay if (any(cp1 == 0)) warning("some individuals were never captured!") yr1i <- zeddij[, tau + 1] - 1 list(cap.hist1 = zij, cap1 = cp1, # aka ti1 y0i = cp1 - 1, yr0i = tau - cp1 - yr1i, yr1i = yr1i) } rposbern <- function(n, nTimePts = 5, pvars = length(xcoeff), xcoeff = c(-2, 1, 2), cap.effect = -1, link = "logit", is.popn = FALSE, earg.link = FALSE) { use.n <- if ((length.n <- length(n)) > 1) length.n else if (!is.Numeric(n, integer.valued = TRUE, allowable.length = 1, positive = TRUE)) stop("bad input for argument 'n'") else n orig.n <- use.n if (!is.popn) use.n <- 1.50 * use.n + 100 # Bigger due to rejections if (pvars == 0) stop("argument 'pvars' must be at least one") if (pvars > length(xcoeff)) stop("argument 'pvars' is too high") if (earg.link) { earg <- link } else { link <- as.list(substitute(link)) earg <- link2list(link) } link <- attr(earg, "function.name") Ymatrix <- matrix(0, use.n, nTimePts, dimnames = list(as.character(1:use.n), paste("y", 1:nTimePts, sep = ""))) CHmatrix <- matrix(0, use.n, nTimePts, dimnames = list(as.character(1:use.n), paste("ch", 0:(nTimePts-1), sep = ""))) Xmatrix <- cbind(x1 = rep(1.0, len = use.n)) if (pvars > 1) Xmatrix <- cbind(Xmatrix, matrix(runif(n = use.n * (pvars-1)), use.n, pvars - 1, dimnames = list(as.character(1:use.n), paste("x", 2:pvars, sep = "")))) lin.pred.baseline <- xcoeff[1] if (pvars > 1) lin.pred.baseline <- lin.pred.baseline + Xmatrix[, 2:pvars, drop = FALSE] %*% xcoeff[2:pvars] sumrowy <- rep(0, length = use.n) for (jlocal in 1:nTimePts) { CHmatrix[, jlocal] <- as.numeric(sumrowy > 0) lin.pred <- lin.pred.baseline + (CHmatrix[, jlocal] > 0) * cap.effect Ymatrix[, jlocal] <- rbinom(use.n, size = 1, prob = eta2theta(lin.pred, link = link, earg = earg)) sumrowy <- sumrowy + Ymatrix[, jlocal] } index0 <- (sumrowy == 0) if (all(!index0)) stop("bug in this code: cannot handle no animals being caught") Ymatrix <- Ymatrix[!index0, , drop = FALSE] Xmatrix <- Xmatrix[!index0, , drop = FALSE] CHmatrix <- CHmatrix[!index0, , drop = FALSE] zCHmatrix <- matrix(0, nrow(CHmatrix), ncol(CHmatrix), dimnames = list(as.character(1:nrow(CHmatrix)), paste("zch", 0:(ncol(CHmatrix)-1), sep = ""))) ans <- data.frame(Ymatrix, Xmatrix, CHmatrix, zCHmatrix, Chistory = rep(0, length = nrow(Ymatrix))) if (!is.popn) { ans <- if (nrow(ans) >= orig.n) { ans[1:orig.n, ] } else { rbind(ans, Recall(n = orig.n - nrow(ans), nTimePts = nTimePts, pvars = pvars, xcoeff = xcoeff, cap.effect = cap.effect, link = earg, earg.link = TRUE)) } } rownames(ans) <- as.character(1:nrow(ans)) attr(ans, "pvars") <- pvars attr(ans, "nTimePts") <- nTimePts attr(ans, "cap.effect") <- cap.effect attr(ans, "is.popn") <- is.popn attr(ans, "n") <- n ans } dposbern <- function(x, prob, prob0 = prob, log = FALSE) { x <- as.matrix(x) prob <- as.matrix(prob) prob0 <- as.matrix(prob0) if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) if (ncol(x) < 2) stop("columns of argument 'x' should be 2 or more") logAA0 <- rowSums(log1p(-prob0)) AA0 <- exp(logAA0) ell1 <- x * log(prob) + (1 - x) * log1p(-prob) - log1p(-AA0) / ncol(x) if (log.arg) ell1 else exp(ell1) } dposnegbin <- function(x, size, prob = NULL, munb = NULL, log = FALSE) { if (length(munb)) { if (length(prob)) stop("'prob' and 'munb' both specified") prob <- size / (size + munb) } if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") 
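# rposbern() above simulates capture-recapture data: each returned row
# is an animal caught at least once over 'nTimePts' occasions, with a
# behavioural effect ('cap.effect') applied after first capture;
# all-zero capture histories are rejected and redrawn, which is why
# 'n' is inflated internally when is.popn = FALSE.  Illustrative call
# (default arguments; run at top level):
#   set.seed(1); pdata <- rposbern(n = 100); head(pdata)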
rm(log) LLL <- max(length(x), length(prob), length(size)) x <- rep(x, len = LLL); prob <- rep(prob, len = LLL); size <- rep(size, len = LLL); ans <- dnbinom(x = x, size = size, prob = prob, log = log.arg) index0 <- (x == 0) if (log.arg) { ans[ index0] <- log(0.0) ans[!index0] <- ans[!index0] - log1p(-dnbinom(x = 0 * x[!index0], size = size[!index0], prob = prob[!index0])) } else { ans[ index0] <- 0.0 ans[!index0] <- ans[!index0] / pnbinom(q = 0 * x[!index0], size = size[!index0], prob = prob[!index0], lower.tail = FALSE) } ans } pposnegbin <- function(q, size, prob = NULL, munb = NULL) { if (length(munb)) { if (length(prob)) stop("'prob' and 'munb' both specified") prob <- size / (size + munb) } L <- max(length(q), length(prob), length(size)) if (length(q) != L) q <- rep(q, length.out = L); if (length(prob) != L) prob <- rep(prob, length.out = L); if (length(size) != L) size <- rep(size, length.out = L) ifelse(q < 1, 0, (pnbinom(q, size = size, prob = prob) - dnbinom(0, size = size, prob = prob)) / pnbinom(0, size = size, prob = prob, lower.tail = FALSE)) } qposnegbin <- function(p, size, prob = NULL, munb = NULL) { if (length(munb)) { if (length(prob)) stop("'prob' and 'munb' both specified") prob <- size / (size + munb) } ans <- qnbinom(pnbinom(q = 0, size = size, prob = prob, lower.tail = FALSE) * p + dnbinom(x = 0, size = size, prob = prob), size = size, prob = prob) ans[p > 1] <- NaN ans[p < 0] <- NaN ans[p == 1] <- Inf ans } posnegbinomial.control <- function(save.weight = TRUE, ...) { list(save.weight = save.weight) } posnegbinomial <- function(lmunb = "loge", lsize = "loge", isize = NULL, zero = -2, nsimEIM = 250, shrinkage.init = 0.95, imethod = 1) { if (!is.Numeric(imethod, allowable.length = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("argument 'imethod' must be 1 or 2") if (length(isize) && !is.Numeric(isize, positive = TRUE)) stop("bad input for argument 'isize'") if (!is.Numeric(shrinkage.init, allowable.length = 1) || shrinkage.init < 0 || shrinkage.init > 1) stop("bad input for argument 'shrinkage.init'") lmunb <- as.list(substitute(lmunb)) emunb <- link2list(lmunb) lmunb <- attr(emunb, "function.name") lsize <- as.list(substitute(lsize)) esize <- link2list(lsize) lsize <- attr(esize, "function.name") if (!is.Numeric(nsimEIM, allowable.length = 1, positive = TRUE, integer.valued = TRUE)) stop("argument 'nsimEIM' must be a positive integer") if (nsimEIM <= 30) warning("argument 'nsimEIM' should be greater than 30, say") new("vglmff", blurb = c("Positive-negative binomial distribution\n\n", "Links: ", namesof("munb", lmunb, earg = emunb ), ", ", namesof("size", lsize, earg = esize ), "\n", "Mean: munb / (1 - (size / (size + munb))^size)"), constraints = eval(substitute(expression({ dotzero <- .zero Musual <- 2 eval(negzero.expression) }), list( .zero = zero ))), infos = eval(substitute(function(...) 
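# The zero-truncated negative binomial dpqr functions above use the
# standard truncation identities: for y = 1, 2, ...,
#   dposnegbin(y, ...) = dnbinom(y, ...) / (1 - dnbinom(0, ...)),
# and pposnegbin() rescales the cdf by the same 1 - P(Y = 0) factor.
# Illustrative sketch (arbitrary parameters; run at top level):
#   sum(dposnegbin(1:500, size = 2, munb = 3))   # ~ 1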
{ list(Musual = 2, lmunb = .lmunb , emunb = .emunb , lsize = .lsize , esize = .esize ) }, list( .lmunb = lmunb, .lsize = lsize, .isize = isize, .emunb = emunb, .esize = esize, .sinit = shrinkage.init, .imethod = imethod ))), initialize = eval(substitute(expression({ Musual <- 2 if (any(y == 0)) stop("there are zero values in the response") y <- as.matrix(y) temp5 <- w.y.check(w = w, y = y, Is.nonnegative.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, Is.integer.y = TRUE, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y M <- Musual * ncol(y) extra$NOS <- NOS <- ncoly <- ncol(y) # Number of species predictors.names <- c( namesof(if (NOS == 1) "munb" else paste("munb", 1:NOS, sep = ""), .lmunb, earg = .emunb, tag = FALSE), namesof(if (NOS == 1) "size" else paste("size", 1:NOS, sep = ""), .lsize, earg = .esize, tag = FALSE)) predictors.names <- predictors.names[interleave.VGAM(M, M = Musual)] if (!length(etastart)) { mu.init <- y for(iii in 1:ncol(y)) { use.this <- if ( .imethod == 1) { weighted.mean(y[, iii], w[, iii]) } else { median(y[,iii]) } mu.init[, iii] <- (1 - .sinit) * y[, iii] + .sinit * use.this } if ( is.Numeric( .isize )) { kmat0 <- matrix( .isize , nrow = n, ncol = NOS, byrow = TRUE) } else { posnegbinomial.Loglikfun = function(kmat, y, x, w, extraargs) { munb <- extraargs sum(w * dposnegbin(x = y, size = kmat, munb = munb, log = TRUE)) } k.grid <- 2^((-6):6) kmat0 <- matrix(0, nrow = n, ncol = NOS) for(spp. in 1:NOS) { kmat0[, spp.] <- getMaxMin(k.grid, objfun = posnegbinomial.Loglikfun, y = y[, spp.], x = x, w = w[, spp.], extraargs = mu.init[, spp.]) } } p00 <- (kmat0 / (kmat0 + mu.init))^kmat0 etastart <- cbind( theta2eta(mu.init * (1 - p00), .lmunb, earg = .emunb ), theta2eta(kmat0, .lsize, earg = .esize )) etastart <- etastart[,interleave.VGAM(M, M = Musual), drop = FALSE] } }), list( .lmunb = lmunb, .lsize = lsize, .isize = isize, .emunb = emunb, .esize = esize, .sinit = shrinkage.init, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { Musual <- 2 NOS <- ncol(eta) / Musual munb <- eta2theta(eta[, Musual*(1:NOS)-1, drop = FALSE], .lmunb, earg = .emunb ) kmat <- eta2theta(eta[, Musual*(1:NOS), drop = FALSE], .lsize, earg = .esize ) po0 <- (kmat / (kmat + munb))^kmat munb / (1 - po0) }, list( .lsize = lsize, .lmunb = lmunb, .esize = esize, .emunb = emunb ))), last = eval(substitute(expression({ temp0303 <- c(rep( .lmunb , length = NOS), rep( .lsize , length = NOS)) names(temp0303) = c(if (NOS == 1) "munb" else paste("munb", 1:NOS, sep = ""), if (NOS == 1) "size" else paste("size", 1:NOS, sep = "")) temp0303 <- temp0303[interleave.VGAM(M, M = Musual)] misc$link <- temp0303 # Already named misc$earg <- vector("list", Musual*NOS) names(misc$earg) <- names(misc$link) for(ii in 1:NOS) { misc$earg[[Musual*ii-1]] <- .emunb misc$earg[[Musual*ii ]] <- .esize } misc$nsimEIM <- .nsimEIM misc$imethod <- .imethod }), list( .lmunb = lmunb, .lsize = lsize, .emunb = emunb, .esize = esize, .nsimEIM = nsimEIM, .imethod = imethod ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { Musual <- 2 NOS <- ncol(eta) / Musual munb <- eta2theta(eta[, Musual*(1:NOS)-1, drop = FALSE], .lmunb, earg = .emunb ) kmat <- eta2theta(eta[, Musual*(1:NOS) , drop = FALSE], .lsize, earg = .esize ) if (residuals) stop("loglikelihood residuals not implemented yet") else { sum(w * dposnegbin(x = y, size = kmat, munb = munb, log = TRUE)) } }, list( .lmunb = lmunb, .lsize = lsize, .emunb = emunb, .esize = esize ))), vfamily 
= c("posnegbinomial"), deriv = eval(substitute(expression({ Musual <- 2 NOS <- extra$NOS munb <- eta2theta(eta[, Musual*(1:NOS)-1, drop = FALSE], .lmunb , earg = .emunb ) kmat <- eta2theta(eta[, Musual*(1:NOS) , drop = FALSE], .lsize , earg = .esize ) dmunb.deta <- dtheta.deta(munb, .lmunb, earg = .emunb ) dsize.deta <- dtheta.deta(kmat, .lsize, earg = .esize ) NOS <- ncol(eta) / Musual tempk <- kmat / (kmat + munb) tempm <- munb / (kmat + munb) prob0 <- tempk^kmat oneminusf0 <- 1 - prob0 df0.dmunb <- -tempk * prob0 df0.dkmat <- prob0 * (tempm + log(tempk)) df02.dmunb2 <- prob0 * tempk / (kmat + munb) - tempk * df0.dmunb df02.dkmat2 <- (prob0 / kmat) * tempm^2 df02.dkmat.dmunb <- prob0 * (-tempk) * (tempm + log(tempk)) - tempm * prob0 / (kmat + munb) dl.dmunb <- y / munb - (y + kmat) / (munb + kmat) + df0.dmunb / oneminusf0 dl.dsize <- digamma(y + kmat) - digamma(kmat) - (y + kmat)/(munb + kmat) + 1 + log(tempk) + df0.dkmat / oneminusf0 myderiv <- c(w) * cbind(dl.dmunb * dmunb.deta, dl.dsize * dsize.deta) myderiv[, interleave.VGAM(M, M = Musual)] }), list( .lmunb = lmunb, .lsize = lsize, .emunb = emunb, .esize = esize ))), weight = eval(substitute(expression({ run.varcov = wz <- matrix(0.0, n, 2 * Musual * NOS - 1) if (FALSE) { usualmeanY <- munb meanY <- usualmeanY / oneminusf0 ed2l.dmu2 <- meanY / munb^2 - (meanY + kmat) / (munb + kmat)^2 - df02.dmunb2 / oneminusf0 - (df0.dmunb / oneminusf0)^2 } { ind2 <- iam(NA, NA, M = Musual, both = TRUE, diag = TRUE) for(ii in 1:( .nsimEIM )) { ysim <- rposnegbin(n = n*NOS, mu = c(munb), size = c(kmat)) dim(ysim) <- c(n, NOS) dl.dmunb <- ysim / munb - (ysim + kmat) / (munb + kmat) + df0.dmunb / oneminusf0 dl.dsize <- digamma(ysim + kmat) - digamma(kmat) - (ysim + kmat) / (munb + kmat) + 1 + log(tempk) + df0.dkmat / oneminusf0 for(kk in 1:NOS) { temp2 <- cbind(dl.dmunb[, kk], dl.dsize[, kk]) * cbind(dmunb.deta[, kk], dsize.deta[, kk]) small.varcov <- temp2[, ind2$row.index] * temp2[, ind2$col.index] run.varcov[, ((kk-1)*Musual+1):(kk*Musual)] = run.varcov[, ((kk-1)*Musual+1):(kk*Musual)] + c(small.varcov[, 1:Musual]) run.varcov[, M + (kk-1)*Musual + 1] = run.varcov[, M + (kk-1)*Musual + 1] + c(small.varcov[, Musual + 1]) } } # ii run.varcov <- cbind(run.varcov / .nsimEIM ) wz <- if (intercept.only) matrix(colMeans(run.varcov), n, ncol(run.varcov), byrow = TRUE) else run.varcov } w.wz.merge(w = w, wz = wz, n = n, M = M, ndepy = M / Musual) }), list( .nsimEIM = nsimEIM )))) } dposgeom <- function(x, prob, log = FALSE) { dgeom(x - 1, prob = prob, log = log) } pposgeom <- function(q, prob) { if (!is.Numeric(prob, positive = TRUE)) stop("bad input for argument 'prob'") L <- max(length(q), length(prob)) if (length(q) != L) q = rep(q, length.out = L); if (length(prob) != L) prob = rep(prob, length.out = L); ifelse(q < 1, 0, (pgeom(q, prob) - dgeom(0, prob)) / pgeom(0, prob, lower.tail = FALSE)) } qposgeom <- function(p, prob) { ans <- qgeom(pgeom(0, prob, lower.tail = FALSE) * p + dgeom(0, prob), prob = prob) ans[p > 1] <- NaN ans[p < 0] <- NaN ans[p == 1] <- Inf ans } rposgeom <- function(n, prob) { qgeom(p = runif(n, min = dgeom(0, prob)), prob) } dpospois <- function(x, lambda, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) if (!is.Numeric(lambda, positive = TRUE)) stop("bad input for argument 'lambda'") L <- max(length(x), length(lambda)) x <- rep(x, len = L); lambda <- rep(lambda, len = L); ans <- if (log.arg) { ifelse(x == 0, log(0.0), dpois(x, lambda, log = TRUE) - log1p(-exp(-lambda))) 
} else { ifelse(x == 0, 0, -dpois(x, lambda) / expm1(-lambda)) } ans } ppospois <- function(q, lambda) { if (!is.Numeric(lambda, positive = TRUE)) stop("bad input for argument 'lambda'") L <- max(length(q), length(lambda)) if (length(q) != L) q <- rep(q, length.out = L); if (length(lambda) != L) lambda <- rep(lambda, length.out = L); ifelse(q < 1, 0, (ppois(q, lambda) - dpois(0, lambda)) / ppois(0, lambda, lower.tail = FALSE)) } qpospois <- function(p, lambda) { ans <- qpois(ppois(0, lambda, lower.tail = FALSE) * p + dpois(0, lambda), lambda = lambda) ans[p > 1] <- NaN ans[p < 0] <- NaN ans[p == 1] <- Inf ans } rpospois <- function(n, lambda) { qpois(p = runif(n, min = dpois(0, lambda)), lambda) } rposnegbin <- function(n, size, prob = NULL, munb = NULL) { if (!is.null(munb)) { if (!is.null(prob)) stop("'prob' and 'mu' both specified") qnbinom(p = runif(n, min = dnbinom(0, size, mu = munb)), size, mu = munb) } else { qnbinom(p = runif(n, min = dnbinom(0, size, prob = prob )), size, prob = prob ) } } pospoisson <- function(link = "loge", expected = TRUE, ilambda = NULL, imethod = 1, zero = NULL) { link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") if (!is.logical(expected) || length(expected) != 1) stop("bad input for argument 'expected'") if (length( ilambda) && !is.Numeric(ilambda, positive = TRUE)) stop("bad input for argument 'ilambda'") if (!is.Numeric(imethod, allowable.length = 1, integer.valued = TRUE, positive = TRUE) || imethod > 3) stop("argument 'imethod' must be 1 or 2 or 3") if (length(zero) && !is.Numeric(zero, integer.valued = TRUE)) stop("bad input for argument 'zero'") new("vglmff", blurb = c("Positive-Poisson distribution\n\n", "Links: ", namesof("lambda", link, earg = earg, tag = FALSE)), constraints = eval(substitute(expression({ dotzero <- .zero Musual <- 1 eval(negzero.expression) }), list( .zero = zero ))), infos = eval(substitute(function(...) 
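# dpospois() above is the zero-truncated Poisson pmf; the pospoisson()
# family function returns its mean as the fitted value,
#   E(Y) = lambda / (1 - exp(-lambda)) = -lambda / expm1(-lambda).
# Illustrative check (arbitrary lambda; run at top level):
#   lam <- 2
#   sum((1:100) * dpospois(1:100, lam))   # ~ 2.313
#   -lam / expm1(-lam)                    # same value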
{ list(Musual = 1, link = .link , earg = .earg) }, list( .link = link, .earg = earg ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, Is.integer.y = TRUE, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y ncoly <- ncol(y) Musual <- 1 extra$ncoly <- ncoly extra$Musual <- Musual M <- Musual * ncoly mynames1 <- paste("lambda", if (ncoly > 1) 1:ncoly else "", sep = "") predictors.names <- namesof(mynames1, .link , earg = .earg, tag = FALSE) if ( .imethod == 1) { lambda.init <- apply(y, 2, median) + 1/8 lambda.init <- matrix(lambda.init, n, ncoly, byrow = TRUE) } else if ( .imethod == 2) { lambda.init <- apply(y, 2, weighted.mean, w = w) + 1/8 lambda.init <- matrix(lambda.init, n, ncoly, byrow = TRUE) } else { lambda.init <- -y / expm1(-y) } if (length( .ilambda)) lambda.init <- lambda.init * 0 + .ilambda if (!length(etastart)) etastart <- theta2eta(lambda.init, .link , earg = .earg) }), list( .link = link, .earg = earg, .ilambda = ilambda, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { lambda <- eta2theta(eta, .link , earg = .earg ) -lambda / expm1(-lambda) }, list( .link = link, .earg = earg ))), last = eval(substitute(expression({ misc$link <- rep( .link , len = M) names(misc$link) <- mynames1 misc$earg <- vector("list", M) names(misc$earg) <- mynames1 for(ii in 1:M) misc$earg[[ii]] <- .earg misc$Musual <- Musual misc$expected <- TRUE misc$multipleResponses <- TRUE }), list( .link = link, .earg = earg, .expected = expected ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { lambda <- eta2theta(eta, .link , earg = .earg ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { sum(w * dpospois(x = y, lambda = lambda, log = TRUE)) } }, list( .link = link, .earg = earg ))), vfamily = c("pospoisson"), deriv = eval(substitute(expression({ lambda <- eta2theta(eta, .link , earg = .earg ) temp6 <- expm1(lambda) dl.dlambda <- y / lambda - 1 - 1 / temp6 dlambda.deta <- dtheta.deta(lambda, .link , earg = .earg ) c(w) * dl.dlambda * dlambda.deta }), list( .link = link, .earg = earg ))), weight = eval(substitute(expression({ if ( .expected ) { ned2l.dlambda2 <- (temp6 + 1) * (1/lambda - 1/temp6) / temp6 wz <- ned2l.dlambda2 * dlambda.deta^2 } else { d2l.dlambda2 <- y / lambda^2 - (temp6 + 1) / temp6^2 d2lambda.deta2 <- d2theta.deta2(lambda, .link , earg = .earg) wz <- (dlambda.deta^2) * d2l.dlambda2 - dl.dlambda * d2lambda.deta2 } c(w) * wz }), list( .link = link, .earg = earg, .expected = expected )))) } pposbinom <- function(q, size, prob ) { if (!is.Numeric(prob, positive = TRUE)) stop("no zero or non-numeric values allowed for argument 'prob'") L <- max(length(q), length(size), length(prob)) if (length(q) != L) q <- rep(q, length.out = L); if (length(size) != L) size <- rep(size, length.out = L); if (length(prob) != L) prob <- rep(prob, length.out = L); ifelse(q < 1, 0, (pbinom(q = q, size = size, prob = prob) - dbinom(x = 0, size = size, prob = prob)) / pbinom(q = 0, size = size, prob = prob, lower.tail = FALSE)) } qposbinom <- function(p, size, prob ) { ans <- qbinom(pbinom(0, size, prob, lower.tail = FALSE) * p + dbinom(0, size, prob), size = size, prob = prob) ans[p > 1] <- NaN ans[p < 0] <- NaN ans[p == 1] <- size[p == 1] ans } rposbinom <- function(n, size, prob) { qbinom(p = runif(n, min = dbinom(0, size, prob)), size, prob) } dposbinom <- function(x, size, prob, log = FALSE) { if 
(!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) L <- max(length(x), length(size), length(prob)) x <- rep(x, len = L); size <- rep(size, len = L); prob <- rep(prob, len = L); answer <- NaN * x is0 <- (x == 0) ok2 <- (prob > 0) & (prob <= 1) & (size == round(size)) & (size > 0) answer <- dbinom(x = x, size = size, prob = prob, log = TRUE) - log1p(-dbinom(x = 0, size = size, prob = prob)) answer[!ok2] <- NaN if (log.arg) { answer[is0 & ok2] <- log(0.0) } else { answer <- exp(answer) answer[is0 & ok2] <- 0.0 } answer } posbinomial <- function(link = "logit", mv = FALSE, parallel = FALSE, zero = NULL) { link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") if (!is.logical(mv) || length(mv) != 1) stop("bad input for argument 'mv'") if (mv && length(zero) && !is.Numeric(zero, integer.valued = TRUE)) stop("bad input for argument 'zero'") new("vglmff", blurb = c("Positive-binomial distribution\n\n", "Links: ", if (mv) c(namesof("prob1", link, earg = earg, tag = FALSE), ",...,", namesof("probM", link, earg = earg, tag = FALSE)) else namesof("prob", link, earg = earg, tag = FALSE), "\n"), constraints = eval(substitute(expression({ constraints <- cm.vgam(matrix(1, M, 1), x, .parallel , constraints) dotzero <- .zero Musual <- 1 eval(negzero.expression) }), list( .parallel = parallel, .zero = zero ))), infos = eval(substitute(function(...) { list(Musual = 1, zero = .zero) }, list( .zero = zero ))), initialize = eval(substitute(expression({ mustart.orig <- mustart if ( .mv ) { temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y ncoly <- ncol(y) Musual <- 1 extra$ncoly <- ncoly extra$Musual <- Musual M <- Musual * ncoly extra$orig.w <- w mustart <- matrix(colSums(y) / colSums(w), # Not colSums(y * w)... 
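# (The fitted value returned by the linkinv slot of this family is the
# mean of the zero-truncated binomial on the sample-proportion scale:
#   E(Y/n) = prob / (1 - (1 - prob)^nvec).
# Illustrative value: with prob = 0.2 and size 5,
# 0.2 / (1 - 0.8^5) = 0.2975 approximately.)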
n, ncoly, byrow = TRUE) } else { eval(binomialff(link = .earg , # earg = .earg , earg.link = TRUE)@initialize) } if ( .mv ) { dn2 <- if (is.matrix(y)) dimnames(y)[[2]] else NULL dn2 <- if (length(dn2)) { paste("E[", dn2, "]", sep = "") } else { paste("prob", 1:M, sep = "") } predictors.names <- namesof(if (M > 1) dn2 else "prob", .link , earg = .earg, short = TRUE) w <- matrix(w, n, ncoly) y <- y / w # Now sample proportion } else { predictors.names <- namesof("prob", .link , earg = .earg , tag = FALSE) } if (length(extra)) extra$w <- w else extra <- list(w = w) if (!length(etastart)) { mustart.use <- if (length(mustart.orig)) mustart.orig else mustart etastart <- cbind(theta2eta(mustart.use, .link , earg = .earg )) } mustart <- NULL }), list( .link = link, .earg = earg, .mv = mv ))), linkinv = eval(substitute(function(eta, extra = NULL) { w <- extra$w binprob <- eta2theta(eta, .link , earg = .earg ) nvec <- if ( .mv ) { w } else { if (is.numeric(extra$orig.w)) round(w / extra$orig.w) else round(w) } binprob / (1.0 - (1.0 - binprob)^nvec) }, list( .link = link, .earg = earg, .mv = mv ))), last = eval(substitute(expression({ extra$w <- NULL # Kill it off misc$link <- rep( .link , length = M) names(misc$link) <- if (M > 1) dn2 else "prob" misc$earg <- vector("list", M) names(misc$earg) <- names(misc$link) for(ii in 1:M) misc$earg[[ii]] <- .earg misc$expected <- TRUE misc$mv <- .mv w <- as.numeric(w) }), list( .link = link, .earg = earg, .mv = mv ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { ycounts <- if ( .mv ) { round(y * extra$orig.w) } else { if (is.numeric(extra$orig.w)) y * w / extra$orig.w else y * w # Convert proportions to counts } nvec <- if ( .mv ) { w } else { if (is.numeric(extra$orig.w)) round(w / extra$orig.w) else round(w) } use.orig.w <- if (is.numeric(extra$orig.w)) extra$orig.w else 1 binprob <- eta2theta(eta, .link , earg = .earg ) if (residuals) stop("loglikelihood residuals ", "not implemented yet") else { sum(use.orig.w * dposbinom(x = ycounts, size = nvec, prob = binprob, log = TRUE)) } }, list( .link = link, .earg = earg, .mv = mv ))), vfamily = c("posbinomial"), deriv = eval(substitute(expression({ use.orig.w <- if (is.numeric(extra$orig.w)) extra$orig.w else rep(1, n) nvec <- if ( .mv ) { w } else { if (is.numeric(extra$orig.w)) round(w / extra$orig.w) else round(w) } binprob <- eta2theta(eta, .link , earg = .earg ) dmu.deta <- dtheta.deta(binprob, .link , earg = .earg ) temp1 <- 1 - (1 - binprob)^nvec temp2 <- (1 - binprob)^2 temp3 <- (1 - binprob)^(nvec-2) dl.dmu <- y / binprob - (1 - y) / (1 - binprob) - (1 - binprob) * temp3 / temp1 c(w) * dl.dmu * dmu.deta }), list( .link = link, .earg = earg, .mv = mv ))), weight = eval(substitute(expression({ ned2l.dmu2 <- 1 / (binprob * temp1) + (1 - mu) / temp2 - (nvec-1) * temp3 / temp1 - nvec * (temp2^(nvec-1)) / temp1^2 wz <- c(w) * ned2l.dmu2 * dmu.deta^2 wz }), list( .link = link, .earg = earg, .mv = mv )))) } posbernoulli.t <- function(link = "logit", parallel.t = FALSE, apply.parint = TRUE, iprob = NULL) { link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") if (length(iprob)) if (!is.Numeric(iprob, positive = TRUE) || max(iprob) >= 1) stop("argument 'iprob' must have values in (0, 1)") if (!is.logical(apply.parint) || length(apply.parint) != 1) stop("argument 'apply.parint' must be a single logical") new("vglmff", blurb = c("(Multiple) positive-Bernoulli (capture-recapture) model ", "with temporal effects (M_t)\n\n", "Links: 
", namesof("prob1", link, earg = earg, tag = FALSE), ", ", namesof("prob2", link, earg = earg, tag = FALSE), ", ..., ", namesof("probM", link, earg = earg, tag = FALSE), "\n"), constraints = eval(substitute(expression({ constraints <- cm.vgam(matrix(1, M, 1), x, .parallel.t , constraints, apply.int = .apply.parint , # TRUE, cm.default = diag(M), cm.intercept.default = diag(M)) }), list( .parallel.t = parallel.t, .apply.parint = apply.parint ))), infos = eval(substitute(function(...) { list(Musual = 1, multipleResponses = TRUE, apply.parint = .apply.parint , parallel.t = .parallel.t ) }, list( .parallel.t = parallel.t, .apply.parint = apply.parint ))), initialize = eval(substitute(expression({ Musual <- 1 mustart.orig <- mustart y <- as.matrix(y) M <- ncoly <- ncol(y) extra$tau <- tau <- ncol(y) extra$orig.w <- w w <- matrix(w, n, ncoly) mustart <- matrix(colSums(y) / colSums(w), n, ncol(y), byrow = TRUE) mustart[mustart == 0] <- 0.05 mustart[mustart == 1] <- 0.95 if (ncoly == 1) stop("the response is univariate, therefore use posbinomial()") if (!all(y == 0 | y == 1)) stop("response must contain 0s and 1s only") if (!all(w == 1)) stop("argument 'weight' must contain 1s only") dn2 <- if (is.matrix(y)) dimnames(y)[[2]] else NULL dn2 <- if (length(dn2)) { paste("E[", dn2, "]", sep = "") } else { paste("prob", 1:M, sep = "") } predictors.names <- namesof(dn2, .link , earg = .earg, short = TRUE) if (length(extra)) extra$w <- w else extra <- list(w = w) if (!length(etastart)) { mustart.use <- if (length(mustart.orig)) { mustart.orig } else { mustart } etastart <- cbind(theta2eta(mustart.use, .link , earg = .earg )) } mustart <- NULL }), list( .link = link, .earg = earg ))), linkinv = eval(substitute(function(eta, extra = NULL) { probs <- eta2theta(eta, .link , earg = .earg ) logAA0 <- rowSums(log1p(-probs)) AA0 <- exp(logAA0) AAA <- exp(log1p(-AA0)) # 1 - AA0 probs / AAA }, list( .link = link, .earg = earg ))), last = eval(substitute(expression({ extra$w <- NULL # Kill it off misc$link <- rep( .link , length = M) names(misc$link) <- if (M > 1) dn2 else "prob" misc$earg <- vector("list", M) names(misc$earg) <- names(misc$link) for(ii in 1:M) misc$earg[[ii]] <- .earg misc$mv <- TRUE misc$iprob <- .iprob R <- tfit$qr$qr[1:ncol_X_vlm, 1:ncol_X_vlm, drop = FALSE] R[lower.tri(R)] <- 0 tmp6 <- N.hat.posbernoulli(eta = eta, link = .link , earg = .earg , R = R, w = w, X_vlm = X_vlm_save, Hlist = constraints, extra = extra, model.type = "t") extra$N.hat <- tmp6$N.hat extra$SE.N.hat <- tmp6$SE.N.hat misc$parallel.t <- .parallel.t misc$apply.parint <- .apply.parint }), list( .link = link, .earg = earg, .parallel.t = parallel.t, .apply.parint = apply.parint, .iprob = iprob ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { ycounts <- y use.orig.w <- if (length(extra$orig.w)) extra$orig.w else 1 probs <- eta2theta(eta, .link , earg = .earg ) if (residuals) stop("loglikelihood residuals ", "not implemented yet") else { sum(dposbern(x = ycounts, # size = 1, # Bernoulli trials prob = probs, prob0 = probs, log = TRUE)) sum(use.orig.w * dposbern(x = ycounts, # size = 1, # Bernoulli trials prob = probs, prob0 = probs, log = TRUE)) } }, list( .link = link, .earg = earg ))), vfamily = c("posbernoulli.t"), deriv = eval(substitute(expression({ probs <- eta2theta(eta, .link , earg = .earg ) dprobs.deta <- dtheta.deta(probs, .link , earg = .earg ) logAA0 <- rowSums(log1p(-probs)) AA0 <- exp(logAA0) AAA <- exp(log1p(-AA0)) # 1 - AA0 B_s <- AA0 / (1 - probs) B_st <- array(AA0, 
c(n, M, M)) for(slocal in 1:(M-1)) for(tlocal in (slocal+1):M) B_st[, slocal, tlocal] = B_st[, tlocal, slocal] <- B_s[, slocal] / (1 - probs[, tlocal]) temp2 <- (1 - probs)^2 dl.dprobs <- y / probs - (1 - y) / (1 - probs) - B_s / AAA deriv.ans <- w * dl.dprobs * dprobs.deta deriv.ans }), list( .link = link, .earg = earg ))), weight = eval(substitute(expression({ ed2l.dprobs2 <- 1 / (probs * AAA) + 1 / temp2 - probs / (AAA * temp2) - (B_s / AAA)^2 wz <- matrix(as.numeric(NA), n, dimm(M)) wz[, 1:M] <- ed2l.dprobs2 * (dprobs.deta^2) for(slocal in 1:(M-1)) for(tlocal in (slocal+1):M) wz[, iam(slocal, tlocal, M = M)] <- dprobs.deta[, slocal] * dprobs.deta[, tlocal] * (B_st[,slocal,tlocal] + B_s [,slocal] * B_s [,tlocal] / AAA) / (-AAA) wz }), list( .link = link, .earg = earg )))) } posbernoulli.b <- function(link = "logit", parallel.b = FALSE, # TRUE, apply.parint = TRUE, icap.prob = NULL, irecap.prob = NULL ) { fit.type <- 1 # Currently only this is implemented link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") if (length(icap.prob)) if (!is.Numeric(icap.prob, positive = TRUE) || max(icap.prob) >= 1) stop("argument 'icap.prob' must have values in (0, 1)") if (length(irecap.prob)) if (!is.Numeric(irecap.prob, positive = TRUE) || max(irecap.prob) >= 1) stop("argument 'irecap.prob' must have values in (0, 1)") if (!is.logical(parallel.b) || length(parallel.b) != 1) stop("argument 'parallel.b' must be a single logical") new("vglmff", blurb = c("(Multiple) positive-Bernoulli (capture-recapture) model ", "with behavioural effects (M_b)\n\n", "Links: ", namesof("cap.prob", link, earg = earg, tag = FALSE), ", ", namesof("recap.prob", link, earg = earg, tag = FALSE), "\n"), constraints = eval(substitute(expression({ constraints <- cm.vgam(matrix(1, 2, 1), x = x, bool = .parallel.b , constraints = constraints, apply.int = .apply.parint , # TRUE, cm.default = matrix(1, 2, 1), cm.intercept.default = cbind(1, 0:1)) }), list( .parallel.b = parallel.b, .apply.parint = apply.parint ))), infos = eval(substitute(function(...) 
{ list( Musual = 2, apply.parint = .apply.parint , multipleResponses = FALSE) }, list( .apply.parint = apply.parint ))), initialize = eval(substitute(expression({ Musual <- 2 if (!is.matrix(y) || ncol(y) == 1) stop("the response appears to be univariate") if (!all(y == 0 | y == 1)) stop("response must contain 0s and 1s only") orig.y <- y extra$orig.w <- w extra$tau <- tau <- ncol(y) mustart.orig <- mustart M <- 2 tmp3 <- aux.posbernoulli(y) y0i <- extra$y0i <- tmp3$y0i yr0i <- extra$yr0i <- tmp3$yr0i yr1i <- extra$yr1i <- tmp3$yr1i cap1 <- extra$cap1 <- tmp3$cap1 cap.hist1 <- extra$cap.hist1 <- tmp3$cap.hist1 temp5 <- w.y.check(w = w, y = y, Is.nonnegative.y = TRUE, ncol.w.max = 1, ncol.y.min = 2, ncol.y.max = Inf, Is.integer.y = TRUE, out.wy = TRUE, colsyperw = ncol(y), maximize = TRUE) w <- temp5$w # Retain the 0-1 response y <- temp5$y # Retain the 0-1 response mustart <- matrix(colMeans(y), n, tau, byrow = TRUE) mustart <- (mustart + orig.y) / 2 predictors.names <- c(namesof( "cap.prob", .link , earg = .earg, short = TRUE), namesof("recap.prob", .link , earg = .earg, short = TRUE)) if (tau >= 4) { pbd <- posbern.aux(tau = tau) } if (!length(etastart)) { mustart.use <- if (length(mustart.orig)) { mustart.orig } else { mustart } etastart <- cbind(theta2eta(rowMeans(mustart.use), .link , earg = .earg ), theta2eta(rowMeans(mustart.use), .link , earg = .earg )) if (length( .icap.prob )) etastart[, 1] <- theta2eta( .icap.prob , .link , earg = .earg ) if (length( .irecap.prob )) etastart[, 2] <- theta2eta( .irecap.prob , .link , earg = .earg ) } mustart <- NULL }), list( .link = link, .earg = earg, .icap.prob = icap.prob, .irecap.prob = irecap.prob ))), linkinv = eval(substitute(function(eta, extra = NULL) { cap.probs <- eta2theta(eta[, 1], .link , earg = .earg ) rec.probs <- eta2theta(eta[, 2], .link , earg = .earg ) cap.probs <- matrix(cap.probs, nrow(eta), extra$tau) rec.probs <- matrix(rec.probs, nrow(eta), extra$tau) tau <- extra$tau if ( .fit.type == 1) { fv <- rec.probs mat.index <- cbind(1:nrow(fv), extra$cap1) fv[mat.index] <- cap.probs[mat.index] fv[extra$cap.hist1 == 0] <- cap.probs[extra$cap.hist1 == 0] } else if ( .fit.type == 2) { fv <- cap.probs } else if ( .fit.type == 3) { fv <- rec.probs } else if ( .fit.type == 4) { stop("argument 'fit.type' unmatched") } else { stop("argument 'fit.type' unmatched") } fv }, list( .link = link, .fit.type = fit.type, .earg = earg ))), last = eval(substitute(expression({ misc$link <- c( .link , .link ) names(misc$link) <- predictors.names misc$earg <- vector("list", M) names(misc$earg) <- names(misc$link) misc$earg[[1]] <- .earg misc$earg[[2]] <- .earg misc$expected <- TRUE misc$mv <- TRUE misc$icap.prob <- .icap.prob misc$irecap.prob <- .irecap.prob misc$parallel.b <- .parallel.b misc$fit.type <- .fit.type misc$multipleResponses <- FALSE if (tau >= 4) { misc$pbd <- pbd # Needed for vcov() post-analysis. 
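    ## 'pbd' holds the lookup tables returned by posbern.aux() (defined
    ## below): for tau >= 4 the expected information for
    ## (cap.prob, recap.prob) has no compact closed form, so the @weight
    ## slot sums over all 2^tau - 1 observable capture histories using
    ## these precomputed exponent tables ('part1.cap', ..., 'part4.rec')
    ## and constants ('ml..konst.cap', 'ml..konst.rec').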
} misc$apply.parint <- .apply.parint R <- tfit$qr$qr[1:ncol_X_vlm, 1:ncol_X_vlm, drop = FALSE] R[lower.tri(R)] <- 0 tmp6 <- N.hat.posbernoulli(eta = eta, link = .link , earg = .earg , R = R, w = w, X_vlm = X_vlm_save, Hlist = constraints, extra = extra, model.type = "b") extra$N.hat <- tmp6$N.hat extra$SE.N.hat <- tmp6$SE.N.hat }), list( .link = link, .earg = earg, .fit.type = fit.type, .parallel.b = parallel.b, .icap.prob = icap.prob, .irecap.prob = irecap.prob, .apply.parint = apply.parint ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { ycounts <- y use.orig.w <- if (length(extra$orig.w)) extra$orig.w else 1 cap.probs <- eta2theta(eta[, 1], .link , earg = .earg ) rec.probs <- eta2theta(eta[, 2], .link , earg = .earg ) cap.probs <- matrix(cap.probs, nrow(eta), extra$tau) rec.probs <- matrix(rec.probs, nrow(eta), extra$tau) if (residuals) stop("loglikelihood residuals ", "not implemented yet") else { sum(use.orig.w * dposbern(x = ycounts, # Bernoulli trials prob = mu, prob0 = cap.probs, log = TRUE)) } }, list( .link = link, .earg = earg ))), vfamily = c("posbernoulli.b"), deriv = eval(substitute(expression({ cap.probs <- eta2theta(eta[, 1], .link , earg = .earg ) rec.probs <- eta2theta(eta[, 2], .link , earg = .earg ) y0i <- extra$y0i yr0i <- extra$yr0i yr1i <- extra$yr1i cap1 <- extra$cap1 tau <- extra$tau dcapprobs.deta <- dtheta.deta(cap.probs, .link , earg = .earg ) drecprobs.deta <- dtheta.deta(rec.probs, .link , earg = .earg ) QQQ <- (1 - cap.probs)^tau dl.dcap <- 1 / cap.probs - y0i / (1 - cap.probs) - tau * ((1 - cap.probs)^(tau - 1)) / (1 - QQQ) dl.drec <- yr1i / rec.probs - yr0i / (1 - rec.probs) deriv.ans <- c(w) * cbind(dl.dcap * dcapprobs.deta, dl.drec * drecprobs.deta) deriv.ans }), list( .link = link, .earg = earg ))), weight = eval(substitute(expression({ wz <- matrix(0, n, M) # Diagonal EIM if (tau == 2) wz[, iam(2, 2, M = M)] <- (cap.probs / (rec.probs * (1 - rec.probs) * (1 - QQQ))) * drecprobs.deta^2 if (tau == 3) wz[, iam(2, 2, M = M)] <- (cap.probs * (3 - cap.probs) / ( rec.probs * (1 - rec.probs) * (1 - QQQ))) * drecprobs.deta^2 if (tau >= 4) { # rec.probs = rec.probs) eim.rec.tot <- 0 for (ii in 1:nrow(pbd$part1.rec)) { if (pbd$ml..konst.rec[ii, 1] != 0) eim.rec.tot <- eim.rec.tot + pbd$ml..konst.rec[ii, 1] * (( cap.probs)^pbd$part1.rec[ii, 1] * (1-cap.probs)^pbd$part1.rec[ii, 2] * ( rec.probs)^pbd$part1.rec[ii, 3] * (1-rec.probs)^pbd$part1.rec[ii, 4]) if (pbd$ml..konst.rec[ii, 2] != 0) eim.rec.tot <- eim.rec.tot + pbd$ml..konst.rec[ii, 2] * (( cap.probs)^pbd$part2.rec[ii, 1] * (1-cap.probs)^pbd$part2.rec[ii, 2] * ( rec.probs)^pbd$part2.rec[ii, 3] * (1-rec.probs)^pbd$part2.rec[ii, 4]) if (pbd$ml..konst.rec[ii, 3] != 0) eim.rec.tot <- eim.rec.tot + pbd$ml..konst.rec[ii, 3] * (( cap.probs)^pbd$part3.rec[ii, 1] * (1-cap.probs)^pbd$part3.rec[ii, 2] * ( rec.probs)^pbd$part3.rec[ii, 3] * (1-rec.probs)^pbd$part3.rec[ii, 4]) if (pbd$ml..konst.rec[ii, 4] != 0) eim.rec.tot <- eim.rec.tot + pbd$ml..konst.rec[ii, 4] * (( cap.probs)^pbd$part4.rec[ii, 1] * (1-cap.probs)^pbd$part4.rec[ii, 2] * ( rec.probs)^pbd$part4.rec[ii, 3] * (1-rec.probs)^pbd$part4.rec[ii, 4]) } eim.rec.tot <- (eim.rec.tot / (1 - QQQ)) * drecprobs.deta^2 wz[, iam(2, 2, M = M)] <- eim.rec.tot } dA.dcapprobs <- -tau * ((1 - QQQ) * (tau-1) * (1 - cap.probs)^(tau-2) + tau * (1 - cap.probs)^(2*tau -2)) / (1 - QQQ)^2 if (tau == 2) wz[, iam(1, 1, M = M)] <- ((2 - 3 * cap.probs + 2 * cap.probs^2) / ((1 - QQQ) * cap.probs * (1 - cap.probs)) + dA.dcapprobs) * 
dcapprobs.deta^2 if (tau == 3) wz[, iam(1, 1, M = M)] <- ((3 + cap.probs * (-6 + cap.probs * (7 + cap.probs * (-3)))) / ( (1 - QQQ) * cap.probs * (1 - cap.probs)) + dA.dcapprobs) * dcapprobs.deta^2 if (tau >= 4) { eim.cap.tot <- 0 for (ii in 1:nrow(pbd$part1.cap)) { if (pbd$ml..konst.cap[ii, 1] != 0) eim.cap.tot <- eim.cap.tot + pbd$ml..konst.cap[ii, 1] * (( cap.probs)^pbd$part1.cap[ii, 1] * (1-cap.probs)^pbd$part1.cap[ii, 2] * ( rec.probs)^pbd$part1.cap[ii, 3] * (1-rec.probs)^pbd$part1.cap[ii, 4]) if (pbd$ml..konst.cap[ii, 2] != 0) eim.cap.tot <- eim.cap.tot + pbd$ml..konst.cap[ii, 2] * (( cap.probs)^pbd$part2.cap[ii, 1] * (1-cap.probs)^pbd$part2.cap[ii, 2] * ( rec.probs)^pbd$part2.cap[ii, 3] * (1-rec.probs)^pbd$part2.cap[ii, 4]) if (pbd$ml..konst.cap[ii, 3] != 0) eim.cap.tot <- eim.cap.tot + pbd$ml..konst.cap[ii, 3] * (( cap.probs)^pbd$part3.cap[ii, 1] * (1-cap.probs)^pbd$part3.cap[ii, 2] * ( rec.probs)^pbd$part3.cap[ii, 3] * (1-rec.probs)^pbd$part3.cap[ii, 4]) if (pbd$ml..konst.cap[ii, 4] != 0) eim.cap.tot <- eim.cap.tot + pbd$ml..konst.cap[ii, 4] * (( cap.probs)^pbd$part4.cap[ii, 1] * (1-cap.probs)^pbd$part4.cap[ii, 2] * ( rec.probs)^pbd$part4.cap[ii, 3] * (1-rec.probs)^pbd$part4.cap[ii, 4]) } eim.cap.tot <- (eim.cap.tot / (1 - QQQ) + dA.dcapprobs) * dcapprobs.deta^2 wz[, iam(1, 1, M = M)] <- eim.cap.tot } wz <- c(w) * wz wz }), list( .link = link, .earg = earg )))) } posbern.aux <- function(tau) { y.all <- matrix(0, 2^tau - 0, tau) for (jlocal in 1:tau) y.all[, jlocal] <- c(rep(0, len = 2^(tau-jlocal)), rep(1, len = 2^(tau-jlocal))) y.all <- y.all[-1, ] aux <- aux.posbernoulli(y.all, check.y = FALSE) nstar <- nrow(y.all) l.power.cap <- matrix(0, nstar, 4) l.konst.cap <- matrix(0, nstar, 4) ml..power.cap <- matrix(0, nstar, 4) ml..konst.cap <- matrix(0, nstar, 4) l.power.rec <- matrix(0, nstar, 4) l.konst.rec <- matrix(0, nstar, 4) ml..power.rec <- matrix(0, nstar, 4) ml..konst.rec <- matrix(0, nstar, 4) l.power.rec[, 3] <- -1 l.power.rec[, 4] <- -1 for (jlocal in 1:tau) { l.konst.rec[, 3] <- l.konst.rec[, 3] + ifelse(y.all[, jlocal] > 0 & jlocal > aux$cap1, 1, 0) l.konst.rec[, 4] <- l.konst.rec[, 4] - ifelse(y.all[, jlocal] == 0 & jlocal > aux$cap1, 1, 0) } ml..power.rec[, 3] <- -2 ml..power.rec[, 4] <- -2 ml..konst.rec[, 3] <- l.konst.rec[, 3] ml..konst.rec[, 4] <- -l.konst.rec[, 4] mux.mat <- cbind(1, aux$y0i, aux$yr1i, aux$yr0i) part1.rec <- mux.mat + cbind(ml..power.rec[, 1], 0, 0, 0) part2.rec <- mux.mat + cbind(0, ml..power.rec[, 2], 0, 0) part3.rec <- mux.mat + cbind(0, 0, ml..power.rec[, 3], 0) part4.rec <- mux.mat + cbind(0, 0, 0, ml..power.rec[, 4]) l.power.cap[, 1] <- 1 l.power.cap[, 2] <- -1 l.konst.cap[, 1] <- 1 l.konst.cap[, 2] <- -aux$y0i ml..power.cap[, 1] <- -2 ml..power.cap[, 2] <- -2 ml..konst.cap[, 1] <- 1 ml..konst.cap[, 2] <- aux$y0i mux.mat <- cbind(1, aux$y0i, aux$yr1i, aux$yr0i) part1.cap <- mux.mat + cbind(ml..power.cap[, 1], 0, 0, 0) part2.cap <- mux.mat + cbind(0, ml..power.cap[, 2], 0, 0) part3.cap <- mux.mat + cbind(0, 0, ml..power.cap[, 3], 0) part4.cap <- mux.mat + cbind(0, 0, 0, ml..power.cap[, 4]) list( y.all = y.all, part1.cap = part1.cap, part2.cap = part2.cap, part3.cap = part3.cap, part4.cap = part4.cap, part1.rec = part1.rec, part2.rec = part2.rec, part3.rec = part3.rec, part4.rec = part4.rec, l.konst.cap = l.konst.cap, l.power.cap = l.power.cap, ml..konst.cap = ml..konst.cap, ml..power.cap = ml..power.cap, l.konst.rec = l.konst.rec, l.power.rec = l.power.rec, ml..konst.rec = ml..konst.rec, ml..power.rec = ml..power.rec) } posbernoulli.tb <- 
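## posbernoulli.tb() fits the M_{tb} model with M = 2 * tau - 1 linear
## predictors: cap.prob.1, ..., cap.prob.tau for first captures and
## recap.prob.2, ..., recap.prob.tau for recaptures (an animal cannot be
## recaptured on occasion 1, so recap.prob.1 is omitted).  A small ridge,
## dconst * iter^dpower, is added to the diagonal working weights at each
## IRLS iteration (see the @weight slot) to stabilise the fit; it is
## recorded on the fitted object as misc$working.ridge.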
function(link = "logit", parallel.t = FALSE, parallel.b = FALSE, apply.parint = FALSE, imethod = 1, iprob = NULL, dconst = 0.1, dpower = -2) { link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") if (!is.Numeric(imethod, allowable.length = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("argument 'imethod' must be 1 or 2") if (length(iprob)) if (!is.Numeric(iprob, positive = TRUE) || max(iprob) >= 1) stop("argument 'iprob' must have values in (0, 1)") if (!is.logical(parallel.t) || length(parallel.t) != 1) stop("argument 'parallel.t' must be a single logical") if (!is.logical(parallel.b) || length(parallel.b) != 1) stop("argument 'parallel.b' must be a single logical") if (!is.logical(apply.parint) || length(apply.parint) != 1) stop("argument 'apply.parint' must be a single logical") new("vglmff", blurb = c("(Multiple) positive-Bernoulli (capture-recapture) model\n", "with temporal and behavioural effects (M_{tb})\n\n", "Links: ", namesof("cap.prob.1", link, earg = earg, tag = FALSE), ", ", namesof("cap.prob.2", link, earg = earg, tag = FALSE), ", ", ", ...,\n", namesof("cap.prob.tau", link, earg = earg, tag = FALSE), ", ", namesof("recap.prob.2", link, earg = earg, tag = FALSE), ", ...,\n", namesof("recap.prob.tau", link, earg = earg, tag = FALSE), "\n"), constraints = eval(substitute(expression({ tmp8.mat <- cbind(c(1, rep(0, len = 2*(tau-1))), rbind(rep(0, len = tau-1), diag(tau-1), diag(tau-1))) tmp9.mat <- cbind(c(rep(0, len = tau), rep(1, len = tau-1))) cmk_tb <- if ( .parallel.t ) matrix(1, M, 1) else tmp8.mat cm1_tb <- if ( ( .parallel.t ) && ( .parallel.b )) matrix(1, M, 1) else if ( ( .parallel.t ) && !( .parallel.b )) cbind(1, tmp9.mat) else if (!( .parallel.t ) && ( .parallel.b )) tmp8.mat else if (!( .parallel.t ) && !( .parallel.b )) cbind(tmp8.mat, tmp9.mat) constraints <- cm.vgam(cmk_tb, x = x, bool = .parallel.t , # Same as .parallel.b constraints = constraints, apply.int = .apply.parint , # FALSE, cm.default = cmk_tb, cm.intercept.default = cm1_tb) }), list( .parallel.t = parallel.t, .parallel.b = parallel.b, .apply.parint = apply.parint ))), infos = eval(substitute(function(...) { list(Musual = 2, multipleResponses = TRUE, imethod = .imethod , dconst = .dconst , dpower = .dpower , apply.parint = .apply.parint , parallel.t = .parallel.t , parallel.b = .parallel.b ) }, list( .parallel.t = parallel.t, .parallel.b = parallel.b, .imethod = imethod, .dconst = dconst, .dpower = dpower, .apply.parint = apply.parint ))), initialize = eval(substitute(expression({ Musual <- 2 # Not quite true if (ncol(cbind(w)) > 1) stop("variable 'w' should be a vector or one-column matrix") w <- c(w) # Make it a vector mustart.orig <- mustart y <- as.matrix(y) extra$tau <- tau <- ncol(y) extra$ncoly <- ncoly <- ncol(y) extra$orig.w <- w extra$ycounts <- y M <- Musual * tau - 1 # recap.prob.1 is unused if (!(ncoly %in% 2:3)) stop("the response currently must be a two- or three-column matrix") mustart <- matrix(c(weighted.mean(y[, 1], w), weighted.mean(y[, 2], w), if (tau == 3) weighted.mean(y[, 3], w) else NULL), n, tau, byrow = TRUE) mustart[mustart == 0] <- 0.05 mustart[mustart == 1] <- 0.95 if (!all(y == 0 | y == 1)) stop("response must contain 0s and 1s only") tmp3 <- aux.posbernoulli(y) cap.hist1 <- extra$cap.hist1 <- tmp3$cap.hist1 if (tau > 2) { yindex <- 4 * y[, 1] + 2 * y[, 2] + 1 * y[, 3] if (length(table(yindex)) != 2^tau - 1) warning("there should be ", 2^tau - 1, " patterns of 0s and 1s ", "in the response matrix. 
May crash.") } dn2.cap <- paste("cap.prob.", 1:ncoly, sep = "") dn2.recap <- paste("recap.prob.", 2:ncoly, sep = "") predictors.names <- c( namesof(dn2.cap, .link , earg = .earg, short = TRUE), namesof(dn2.recap, .link , earg = .earg, short = TRUE)) if (length(extra)) extra$w <- w else extra <- list(w = w) if (!length(etastart)) { if ( .imethod == 1) { mu.init <- if (length( .iprob )) matrix( .iprob , n, M, byrow = TRUE) else if (length(mustart.orig)) matrix(rep(mustart.orig, length = n * M), n, M) else matrix(rep(mustart, length = n * M), n, M) etastart <- theta2eta(mu.init, .link , earg = .earg ) # n x M } else { mu.init <- matrix(runif(n * M), n, M) etastart <- theta2eta(mu.init, .link , earg = .earg ) # n x M } } mustart <- NULL }), list( .link = link, .earg = earg, .iprob = iprob, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { tau <- extra$ncoly probs <- eta2theta(eta, .link , earg = .earg ) prc <- probs[, 1:tau] prr <- cbind(0, probs[, (1+tau):ncol(probs)]) # 1st coln ignored probs.numer <- cbind(probs[, 1], ifelse(extra$cap.hist1[, 2] == 1, prr[, 2], prc[, 2])) if (tau == 3) probs.numer <- cbind(probs.numer, ifelse(extra$cap.hist1[, 3] == 1, prr[, 3], prc[, 3])) logQQQ <- rowSums(log1p(-prc)) QQQ <- exp(logQQQ) AAA <- exp(log1p(-QQQ)) # 1 - QQQ probs.numer / AAA }, list( .link = link, .earg = earg ))), last = eval(substitute(expression({ extra$w <- NULL # Kill it off misc$link <- rep( .link , length = M) names(misc$link) <- c(dn2.cap, dn2.recap) misc$earg <- vector("list", M) names(misc$earg) <- names(misc$link) for(ii in 1:M) misc$earg[[ii]] <- .earg misc$mv <- TRUE misc$iprob <- .iprob R <- tfit$qr$qr[1:ncol_X_vlm, 1:ncol_X_vlm, drop = FALSE] R[lower.tri(R)] <- 0 tmp6 <- N.hat.posbernoulli(eta = eta, link = .link , earg = .earg , R = R, w = w, X_vlm = X_vlm_save, Hlist = constraints, extra = extra, model.type = "tb") extra$N.hat <- tmp6$N.hat extra$SE.N.hat <- tmp6$SE.N.hat misc$parallel.t <- .parallel.t misc$parallel.b <- .parallel.b misc$dconst <- .dconst misc$dpower <- .dpower misc$working.ridge <- c(rep(adjustment.posbern_tb, length = tau), rep(0, length = tau-1)) misc$apply.parint <- .apply.parint }), list( .link = link, .earg = earg, .apply.parint = apply.parint, .parallel.t = parallel.t, .parallel.b = parallel.b, .dconst = dconst, .dpower = dpower, .iprob = iprob ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { tau <- extra$ncoly ycounts <- y use.orig.w <- if (length(extra$orig.w)) extra$orig.w else 1 probs <- eta2theta(eta, .link , earg = .earg ) prc <- probs[, 1:tau] prr <- cbind(0, probs[, (1+tau):ncol(probs)]) # 1st coln ignored if (residuals) stop("loglikelihood residuals ", "not implemented yet") else { probs.numer <- cbind(probs[, 1], ifelse(extra$cap.hist1[, 2] == 1, prr[, 2], prc[, 2])) if (tau == 3) probs.numer <- cbind(probs.numer, ifelse(extra$cap.hist1[, 3] == 1, prr[, 3], prc[, 3])) sum(use.orig.w * dposbern(x = ycounts, # size = 1, # Bernoulli trials prob = probs.numer, prob0 = prc, log = TRUE)) } }, list( .link = link, .earg = earg ))), vfamily = c("posbernoulli.tb"), deriv = eval(substitute(expression({ tau <- extra$ncoly probs <- eta2theta(eta, .link , earg = .earg ) prc <- probs[, 1:tau] prr <- cbind(0, probs[, (1+tau):ncol(probs)]) # 1st coln ignored logQQQ <- rowSums(log1p(-prc)) QQQ <- exp(logQQQ) dprobs.deta <- dtheta.deta(probs, .link , earg = .earg ) dprc.deta <- dprobs.deta[, 1:tau] dprr.deta <- cbind(0, dprobs.deta[, (1+tau):ncol(probs)]) # 1st coln ignored dQ.dprc <- 
-QQQ / (1 - prc) d2Q.dprc <- array(0, c(n, tau, tau)) for (jay in 1:(tau-1)) for (kay in (jay+1):tau) d2Q.dprc[, jay, kay] <- d2Q.dprc[, kay, jay] <- QQQ / ((1 - prc[, jay]) * (1 - prc[, kay])) if (tau == 2) dl.dpr <- cbind(y[, 1] / prc[, 1] - (1 - y[, 1]) / (1 - prc[, 1]) + dQ.dprc[, 1] / (1 - QQQ), (1 - y[, 1]) * (y[, 2] / prc[, 2] - (1 - y[, 2]) / (1 - prc[, 2])) + dQ.dprc[, 2] / (1 - QQQ), y[, 1] * (y[, 2] / prr[, 2] - (1 - y[, 2]) / (1 - prr[, 2]))) if (tau == 3) dl.dpr <- cbind(y[, 1] / prc[, 1] - (1 - y[, 1]) / (1 - prc[, 1]) + dQ.dprc[, 1] / (1 - QQQ), (1 - extra$cap.hist1[, 2]) * # (1 - y[, 1]) * (y[, 2] / prc[, 2] - (1 - y[, 2]) / (1 - prc[, 2])) + dQ.dprc[, 2] / (1 - QQQ), (1 - extra$cap.hist1[, 3]) * # (1 - y[, 1]) * (1 - y[, 2]) * y[, 3] / prc[, 3] + dQ.dprc[, 3] / (1 - QQQ), extra$cap.hist1[, 2] * # y[, 1] * (y[, 2] / prr[, 2] - (1 - y[, 2]) / (1 - prr[, 2])), extra$cap.hist1[, 3] * (y[, 3] / prr[, 3] - (1 - y[, 3]) / (1 - prr[, 3])) ) deriv.ans <- c(w) * dl.dpr * dprobs.deta deriv.ans }), list( .link = link, .earg = earg ))), weight = eval(substitute(expression({ wz <- matrix(0, n, sum(M:(M - (tau - 1)))) cindex <- iam(NA, NA, M = M, both = TRUE) cindex$row.index <- rep(cindex$row.index, length = ncol(wz)) cindex$col.index <- rep(cindex$col.index, length = ncol(wz)) if (tau == 2) { wz[, iam(1, 1, M = M)] <- (1 - prc[, 1] * (1 - prc[, 2])) / (prc[, 1] * (1 - prc[, 1]) * (1 - QQQ)) - ((1 - prc[, 2]) / (1 - QQQ))^2 wz[, iam(1, 1, M = M)] <- wz[, iam(1, 1, M = M)] * dprc.deta[, 1]^2 wz[, iam(2, 2, M = M)] <- (prc[, 1] * (1 - prc[, 1]) / (prc[, 2] * (1 - QQQ)^2)) * dprc.deta[, 2]^2 wz[, iam(3, 3, M = M)] <- (prc[, 1] / (prr[, 2] * (1 - prr[, 2]) * (1 - QQQ))) * dprr.deta[, 2]^2 wz[, iam(1, 2, M = M)] <- -dprc.deta[, 1] * dprc.deta[, 2] / (1 - QQQ)^2 } else if (tau == 3) { wz[, iam(1, 1, M = M)] <- ((1 - prc[, 2]) * prc[, 3] + prc[, 2]) / ((1 - prc[, 1]) * (1 - QQQ)) + 1 / (prc[, 1] * (1 - QQQ)) - (dQ.dprc[, 1] / (1 - QQQ))^2 wz[, iam(2, 2, M = M)] <- (1 - prc[, 1]) * (1 - prc[, 2] * (1 - prc[, 3])) / ( prc[, 2] * (1 - prc[, 2]) * (1 - QQQ)) - (dQ.dprc[, 2] / (1 - QQQ))^2 wz[, iam(3, 3, M = M)] <- (1 - prc[, 1]) * (1 - prc[, 2]) / (prc[, 3] * (1 - QQQ)) - (dQ.dprc[, 3] / (1 - QQQ))^2 wz[, iam(4, 4, M = M)] <- prc[, 1] / (prr[, 2] * (1 - prr[, 2]) * (1 - QQQ)) wz[, iam(5, 5, M = M)] <- (prc[, 1] + prc[, 2] * (1 - prc[, 1])) / ( prr[, 3] * (1 - prr[, 3]) * (1 - QQQ)) for (jay in 1:(tau-1)) for (kay in (jay+1):tau) wz[, iam(jay, kay, M = M)] <- -(d2Q.dprc[, jay, kay] + dQ.dprc[, jay] * dQ.dprc[, kay] / (1 - QQQ)) / (1 - QQQ) wz <- wz * dprobs.deta[, cindex$row.index] * dprobs.deta[, cindex$col.index] } else { stop("tau must equal 2 or 3") } adjustment.posbern_tb <- .dconst * iter^( .dpower ) for (jay in 1:tau) wz[, iam(jay, jay, M = M)] <- wz[, iam(jay, jay, M = M)] + adjustment.posbern_tb c(w) * wz }), list( .link = link, .earg = earg, .dconst = dconst, .dpower = dpower )))) } VGAM/R/family.others.R0000644000176000001440000012543212136651110014147 0ustar ripleyusers# These functions are # Copyright (C) 1998-2013 T.W. Yee, University of Auckland. # All rights reserved. 
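## Hedged usage sketch (not run) for the positive-Bernoulli
## capture-recapture families defined above.  The data and object names
## are hypothetical, and the VGAM package is assumed to be attached so
## that vglm() is available.
##   set.seed(1)
##   N <- 200; tau <- 3                  # true population size; occasions
##   yfull <- matrix(rbinom(N * tau, size = 1, prob = 0.3), N, tau)
##   ycap <- yfull[rowSums(yfull) > 0, ] # never-captured animals are unseen
##   fit <- vglm(ycap ~ 1, posbernoulli.t(parallel.t = TRUE))
##   fit@extra$N.hat                     # Horvitz-Thompson-type estimate of N
##   fit@extra$SE.N.hat                  # and its standard error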
dexppois <- function(x, lambda, betave = 1, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) N <- max(length(x), length(lambda), length(betave)) x <- rep(x, len = N); lambda = rep(lambda, len = N); betave <- rep(betave, len = N) logdensity <- rep(log(0), len = N) xok <- (0 < x) logdensity[xok] <- log(lambda[xok]) + log(betave[xok]) - log1p(-exp(-lambda[xok])) - lambda[xok] - betave[xok] * x[xok] + lambda[xok] * exp(-betave[xok] * x[xok]) logdensity[lambda <= 0] <- NaN logdensity[betave <= 0] <- NaN if (log.arg) logdensity else exp(logdensity) } qexppois<- function(p, lambda, betave = 1) { ans <- -log(log(p * -(expm1(lambda)) + exp(lambda)) / lambda) / betave ans[(lambda <= 0) | (betave <= 0)] = NaN ans[p < 0] <- NaN ans[p > 1] <- NaN ans } pexppois<- function(q, lambda, betave = 1) { ans <-(exp(lambda * exp(-betave * q)) - exp(lambda)) / -expm1(lambda) ans[q <= 0] <- 0 ans[(lambda <= 0) | (betave <= 0)] <- NaN ans } rexppois <- function(n, lambda, betave = 1) { ans <- -log(log(runif(n) * -(expm1(lambda)) + exp(lambda)) / lambda) / betave ans[(lambda <= 0) | (betave <= 0)] <- NaN ans } exppoisson <- function(llambda = "loge", lbetave = "loge", ilambda = 1.1, ibetave = 2.0, zero = NULL) { llambda <- as.list(substitute(llambda)) elambda <- link2list(llambda) llambda <- attr(elambda, "function.name") lbetave <- as.list(substitute(lbetave)) ebetave <- link2list(lbetave) lbetave <- attr(ebetave, "function.name") if (length(zero) && !is.Numeric(zero, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'zero'") if (length(ilambda) && !is.Numeric(ilambda, positive = TRUE)) stop("bad input for argument 'ilambda'") if (length(ibetave) && !is.Numeric(ibetave, positive = TRUE)) stop("bad input for argument 'ibetave'") ilambda[abs(ilambda - 1) < 0.01] = 1.1 new("vglmff", blurb = c("Exponential Poisson distribution \n \n", "Links: ", namesof("lambda", llambda, earg = elambda), ", ", namesof("betave", lbetave, earg = ebetave), "\n", "Mean: lambda/(expm1(lambda) * betave)) * ", "genhypergeo(c(1, 1),c(2, 2),lambda)"), constraints = eval(substitute(expression({ constraints <- cm.zero.vgam(constraints, x, .zero , M) }), list( .zero = zero))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, out.wy = TRUE, maximize = TRUE) w <- temp5$w y <- temp5$y predictors.names <- c( namesof("lambda", .llambda, earg = .elambda, short = TRUE), namesof("betave", .lbetave, earg = .ebetave, short = TRUE)) if (!length(etastart)) { betave.init <- if (length( .ibetave )) rep( .ibetave , len = n) else stop("Need to input a value into argument 'ibetave'") lambda.init <- if (length( .ilambda )) rep( .ilambda , len = n) else (1/betave.init - mean(y)) / ((y * exp(-betave.init * y))/n) betave.init <- rep(weighted.mean(betave.init, w = w), len = n) etastart <- cbind(theta2eta(lambda.init, .llambda ,earg = .elambda ), theta2eta(betave.init, .lbetave ,earg = .ebetave )) } }), list( .llambda = llambda, .lbetave = lbetave, .ilambda = ilambda, .ibetave = ibetave, .elambda = elambda, .ebetave = ebetave))), linkinv = eval(substitute(function(eta, extra = NULL) { lambda <- eta2theta(eta[, 1], .llambda , earg = .elambda ) betave <- eta2theta(eta[, 2], .lbetave , earg = .ebetave ) -lambda * genhypergeo(c(1, 1), c(2, 2), lambda) / (expm1(-lambda) * betave) }, list( .llambda = llambda, .lbetave = lbetave, .elambda = elambda, .ebetave = ebetave))), last = eval(substitute(expression({ misc$link <- c(lambda = .llambda , betave = .lbetave ) 
misc$earg <- list(lambda = .elambda , betave = .ebetave ) misc$expected <- TRUE misc$multipleResponses <- FALSE }), list( .llambda = llambda, .lbetave = lbetave, .elambda = elambda, .ebetave = ebetave))), loglikelihood = eval(substitute(function(mu, y, w, residuals = FALSE, eta, extra = NULL) { lambda <- eta2theta(eta[, 1], .llambda , earg = .elambda ) betave <- eta2theta(eta[, 2], .lbetave , earg = .ebetave ) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else { sum(c(w) * dexppois(x = y, lambda = lambda, betave = betave, log = TRUE)) } }, list( .lbetave = lbetave , .llambda = llambda , .elambda = elambda , .ebetave = ebetave ))), vfamily = c("exppoisson"), deriv = eval(substitute(expression({ lambda <- eta2theta(eta[, 1], .llambda , earg = .elambda ) betave <- eta2theta(eta[, 2], .lbetave , earg = .ebetave ) dl.dbetave <- 1/betave - y - y * lambda * exp(-betave * y) dl.dlambda <- 1/lambda - 1/expm1(lambda) - 1 + exp(-betave * y) dbetave.deta <- dtheta.deta(betave, .lbetave , earg = .ebetave ) dlambda.deta <- dtheta.deta(lambda, .llambda , earg = .elambda ) c(w) * cbind(dl.dlambda * dlambda.deta, dl.dbetave * dbetave.deta) }), list( .llambda = llambda, .lbetave = lbetave, .elambda = elambda, .ebetave = ebetave ))), weight = eval(substitute(expression({ temp1 <- -expm1(-lambda) ned2l.dlambda2 <- (1 + exp(2 * lambda) - lambda^2 * exp(lambda) - 2 * exp(lambda)) / (lambda * temp1)^2 ned2l.dbetave2 <- 1 / betave^2 - (lambda^2 * exp(-lambda) / (4 * betave^2 * temp1)) * genhypergeo(c(2, 2, 2),c(3, 3, 3),lambda) ned2l.dbetavelambda <- (lambda * exp(-lambda) / (4 * betave * temp1)) * genhypergeo(c(2, 2),c(3, 3),lambda) wz <- matrix(0, n, dimm(M)) wz[, iam(1, 1, M)] <- dlambda.deta^2 * ned2l.dlambda2 wz[, iam(2, 2, M)] <- dbetave.deta^2 * ned2l.dbetave2 wz[, iam(1, 2, M)] <- dbetave.deta * dlambda.deta * ned2l.dbetavelambda c(w) * wz }), list( .zero = zero )))) } dgenray <- function(x, shape, scale = 1, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) N <- max(length(x), length(shape), length(scale)) x <- rep(x, len = N) shape <- rep(shape, len = N) scale <- rep(scale, len = N) logdensity <- rep(log(0), len = N) if (any(xok <- (x > 0))) { temp1 <- x[xok] / scale[xok] logdensity[xok] <- log(2) + log(shape[xok]) + log(x[xok]) - 2 * log(scale[xok]) - temp1^2 + (shape[xok] - 1) * log1p(-exp(-temp1^2)) } logdensity[(shape <= 0) | (scale <= 0)] <- NaN if (log.arg) { logdensity } else { exp(logdensity) } } pgenray <- function(q, shape, scale = 1) { ans <- (-expm1(-(q/scale)^2))^shape ans[q <= 0] <- 0 ans[(shape <= 0) | (scale <= 0)] <- NaN ans } qgenray <- function(p, shape, scale = 1) { ans <- scale * sqrt(-log1p(-(p^(1/shape)))) ans[(shape <= 0) | (scale <= 0)] <- NaN ans[p < 0] <- NaN ans[p > 1] <- NaN ans[p == 0] <- 0 ans[p == 1] <- Inf ans } rgenray <- function(n, shape, scale = 1) { ans <- qgenray(runif(n), shape = shape, scale = scale) ans[(shape <= 0) | (scale <= 0)] <- NaN ans } genrayleigh.control <- function(save.weight = TRUE, ...) 
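## genrayleigh.control(), like expgeometric.control() and
## explogarithmic.control() below, sets save.weight = TRUE; a plausible
## reading is that because these families approximate the expected
## information by simulation (their @weight slots average over nsimEIM
## random draws), the working weight matrices are kept on the fitted
## object rather than being regenerated.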
{ list(save.weight = save.weight) } genrayleigh <- function(lshape = "loge", lscale = "loge", ishape = NULL, iscale = NULL, tol12 = 1.0e-05, nsimEIM = 300, zero = 1) { lshape <- as.list(substitute(lshape)) eshape <- link2list(lshape) lshape <- attr(eshape, "function.name") lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") if (length(ishape) && !is.Numeric(ishape, positive = TRUE)) stop("bad input for argument 'ishape'") if (length(iscale) && !is.Numeric(iscale, positive = TRUE)) stop("bad input for argument 'iscale'") if (length(zero) && !is.Numeric(zero, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'zero'") if (!is.Numeric(nsimEIM, allowable.length = 1, integer.valued = TRUE) || nsimEIM <= 50) stop("argument 'nsimEIM' should be an integer greater than 50") new("vglmff", blurb = c("Generalized Rayleigh distribution\n", "Links: ", namesof("shape", lshape, earg = eshape), ", ", namesof("scale", lscale, earg = escale), "\n"), constraints = eval(substitute(expression({ constraints <- cm.zero.vgam(constraints, x, .zero, M) }), list( .zero = zero ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, out.wy = TRUE, maximize = TRUE) w <- temp5$w y <- temp5$y predictors.names <- c( namesof("shape", .lshape , earg = .eshape , short = TRUE), namesof("scale", .lscale , earg = .escale , short = TRUE)) if (!length(etastart)) { genrayleigh.Loglikfun <- function(scale, y, x, w, extraargs) { temp1 <- y / scale shape <- -1 / weighted.mean(log1p(-exp(-temp1^2)), w = w) ans <- sum(c(w) * (log(2) + log(shape) + log(y) - 2 * log(scale) - temp1^2 + (shape - 1) * log1p(-exp(-temp1^2)))) ans } scale.grid <- seq(0.2 * stats::sd(c(y)), 5.0 * stats::sd(c(y)), len = 29) scale.init <- if (length( .iscale )) .iscale else getMaxMin(scale.grid, objfun = genrayleigh.Loglikfun, y = y, x = x, w = w) scale.init <- rep(scale.init, length = length(y)) shape.init <- if (length( .ishape )) .ishape else -1 / weighted.mean(log1p(-exp(-(y/scale.init)^2)), w = w) shape.init <- rep(shape.init, length = length(y)) etastart <- cbind(theta2eta(shape.init, .lshape, earg = .eshape), theta2eta(scale.init, .lscale, earg = .escale)) } }), list( .lscale = lscale, .lshape = lshape, .iscale = iscale, .ishape = ishape, .escale = escale, .eshape = eshape))), linkinv = eval(substitute(function(eta, extra = NULL) { shape <- eta2theta(eta[, 1], .lshape , earg = .eshape ) Scale <- eta2theta(eta[, 2], .lscale , earg = .escale ) qgenray(p = 0.5, shape = shape, scale = Scale) }, list( .lshape = lshape, .lscale = lscale, .eshape = eshape, .escale = escale ))), last = eval(substitute(expression({ misc$link <- c(shape = .lshape , scale = .lscale ) misc$earg <- list(shape = .eshape , scale = .escale ) misc$expected <- TRUE misc$nsimEIM <- .nsimEIM misc$multipleResponses <- FALSE }), list( .lshape = lshape, .lscale = lscale, .eshape = eshape, .escale = escale, .nsimEIM = nsimEIM ))), loglikelihood = eval(substitute(function(mu, y, w, residuals = FALSE, eta, extra = NULL) { shape <- eta2theta(eta[, 1], .lshape , earg = .eshape ) Scale <- eta2theta(eta[, 2], .lscale , earg = .escale ) if (residuals) stop("loglikelihood residuals", "not implemented yet") else { sum(c(w) * dgenray(x = y, shape = shape, scale = Scale, log = TRUE)) } }, list( .lshape = lshape , .lscale = lscale , .eshape = eshape , .escale = escale ))), vfamily = c("genrayleigh"), deriv = eval(substitute(expression({ shape <- eta2theta(eta[, 1], .lshape , earg = .eshape ) Scale <- 
eta2theta(eta[, 2], .lscale , earg = .escale ) dshape.deta <- dtheta.deta(shape, .lshape , earg = .eshape ) dscale.deta <- dtheta.deta(Scale, .lscale , earg = .escale ) dthetas.detas <- cbind(dshape.deta, dscale.deta) temp1 <- y / Scale temp2 <- exp(-temp1^2) temp3 <- temp1^2 / Scale AAA <- 2 * temp1^2 / Scale # 2 * y^2 / Scale^3 BBB <- -expm1(-temp1^2) # denominator dl.dshape <- 1/shape + log1p(-temp2) dl.dscale <- -2 / Scale + AAA * (1 - (shape - 1) * temp2 / BBB) dl.dshape[!is.finite(dl.dshape)] = max(dl.dshape[is.finite(dl.dshape)]) answer <- c(w) * cbind(dl.dshape, dl.dscale) * dthetas.detas answer }), list( .lshape = lshape , .lscale = lscale, .eshape = eshape, .escale = escale ))), weight = eval(substitute(expression({ run.varcov <- 0 ind1 <- iam(NA, NA, M = M, both = TRUE, diag = TRUE) for(ii in 1:( .nsimEIM )) { ysim <- rgenray(n = n, shape = shape, scale = Scale) temp1 <- ysim / Scale temp2 <- exp(-temp1^2) # May be 1 if ysim is very close to 0. temp3 <- temp1^2 / Scale AAA <- 2 * temp1^2 / Scale # 2 * y^2 / Scale^3 BBB <- -expm1(-temp1^2) # denominator dl.dshape <- 1/shape + log1p(-temp2) dl.dscale <- -2 / Scale + AAA * (1 - (shape - 1) * temp2 / BBB) dl.dshape[!is.finite(dl.dshape)] <- max( dl.dshape[is.finite(dl.dshape)]) temp3 <- cbind(dl.dshape, dl.dscale) run.varcov <- run.varcov + temp3[, ind1$row.index] * temp3[, ind1$col.index] } run.varcov <- run.varcov / .nsimEIM wz <- if (intercept.only) matrix(colMeans(run.varcov, na.rm = FALSE), n, ncol(run.varcov), byrow = TRUE) else run.varcov wz <- wz * dthetas.detas[, ind1$row] * dthetas.detas[, ind1$col] c(w) * wz }), list( .lshape = lshape , .lscale = lscale, .eshape = eshape, .escale = escale, .tol12 = tol12, .nsimEIM = nsimEIM )))) } dexpgeom <- function(x, scale = 1, shape, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) N <- max(length(x), length(scale), length(shape)) x <- rep(x, len = N) scale <- rep(scale, len = N) shape <- rep(shape, len = N) logdensity <- rep(log(0), len = N) if (any(xok <- (x > 0))) { temp1 <- -x[xok] / scale[xok] logdensity[xok] <- -log(scale[xok]) + log1p(-shape[xok]) + temp1 - 2 * log1p(-shape[xok] * exp(temp1)) } logdensity[(scale <= 0) | (shape <= 0) | (shape >= 1)] <- NaN if (log.arg) { logdensity } else { exp(logdensity) } } pexpgeom <- function(q, scale = 1, shape) { temp1 <- -q / scale ans <- -expm1(temp1) / (1 - shape * exp(temp1)) ans[q <= 0] <- 0 ans[(scale <= 0) | (shape <= 0) | (shape >= 1)] <- NaN ans } qexpgeom <- function(p, scale = 1, shape) { ans <- (-scale) * log((p - 1) / (p * shape - 1)) ans[(scale <= 0) | (shape <= 0) | (shape >= 1)] <- NaN ans[p < 0] <- NaN ans[p > 1] <- NaN ans[p == 0] <- 0 ans[p == 1] <- Inf ans } rexpgeom <- function(n, scale = 1, shape) { ans <- qexpgeom(runif(n), shape = shape, scale = scale) ans[(scale <= 0) | (shape <= 0) | (shape >= 1)] <- NaN ans } expgeometric.control <- function(save.weight = TRUE, ...) 
{ list(save.weight = save.weight) } expgeometric <- function(lscale = "loge", lshape = "logit", iscale = NULL, ishape = NULL, tol12 = 1.0e-05, zero = 1, nsimEIM = 400) { lshape <- as.list(substitute(lshape)) eshape <- link2list(lshape) lshape <- attr(eshape, "function.name") lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") if (length(ishape)) if (!is.Numeric(ishape, positive = TRUE) || any(ishape >= 1)) stop("bad input for argument 'ishape'") if (length(iscale)) if (!is.Numeric(iscale, positive = TRUE)) stop("bad input for argument 'iscale'") if (length(zero) && !is.Numeric(zero, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'zero'") if (!is.Numeric(nsimEIM, allowable.length = 1, integer.valued = TRUE)) stop("bad input for argument 'nsimEIM'") if (nsimEIM <= 50) stop("'nsimEIM' should be an integer greater than 50") new("vglmff", blurb = c("Exponential geometric distribution\n\n", "Links: ", namesof("Scale", lscale, earg = escale), ", ", namesof("shape", lshape, earg = eshape), "\n", "Mean: ", "(shape - 1) * log(1 - ", "shape) / (shape / Scale)"), constraints = eval(substitute(expression({ constraints <- cm.zero.vgam(constraints, x, .zero, M) }), list( .zero = zero ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, out.wy = TRUE, maximize = TRUE) w <- temp5$w y <- temp5$y predictors.names <- c( namesof("Scale", .lscale , earg = .escale , short = TRUE), namesof("shape", .lshape , earg = .eshape , short = TRUE)) if (!length(etastart)) { scale.init <- if (is.Numeric( .iscale , positive = TRUE)) { rep( .iscale , len = n) } else { stats::sd(c(y)) # The papers scale parameter beta } shape.init <- if (is.Numeric( .ishape , positive = TRUE)) { rep( .ishape , len = n) } else { rep(2 - exp(median(y)/scale.init), len = n) } shape.init[shape.init >= 0.95] <- 0.95 shape.init[shape.init <= 0.05] <- 0.05 etastart <- cbind(theta2eta(scale.init, .lscale , earg = .escale ), theta2eta(shape.init, .lshape , earg = .eshape )) } }), list( .lscale = lscale, .lshape = lshape, .iscale = iscale, .ishape = ishape, .escale = escale, .eshape = eshape))), linkinv = eval(substitute(function(eta, extra = NULL) { Scale <- eta2theta(eta[, 1], .lscale , earg = .escale ) shape <- eta2theta(eta[, 2], .lshape , earg = .eshape ) (shape - 1) * log1p(-shape) / (shape / Scale) }, list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape ))), last = eval(substitute(expression({ misc$link <- c(Scale = .lscale , shape = .lshape ) misc$earg <- list(Scale = .escale , shape = .eshape ) misc$expected <- TRUE misc$nsimEIM <- .nsimEIM misc$multipleResponses <- FALSE }), list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape, .nsimEIM = nsimEIM ))), loglikelihood = eval(substitute(function(mu, y, w, residuals = FALSE, eta, extra = NULL) { Scale <- eta2theta(eta[, 1], .lscale , earg = .escale ) shape <- eta2theta(eta[, 2], .lshape , earg = .eshape ) if (residuals) stop("loglikelihood residuals", "not implemented yet") else { sum(c(w) * dexpgeom(x = y, scale = Scale, shape = shape, log = TRUE)) } }, list( .lscale = lscale , .lshape = lshape , .escale = escale , .eshape = eshape ))), vfamily = c("expgeometric"), deriv = eval(substitute(expression({ Scale <- eta2theta(eta[, 1], .lscale , earg = .escale ) shape <- eta2theta(eta[, 2], .lshape , earg = .eshape ) temp2 <- exp(-y / Scale) temp3 <- shape * temp2 temp4 <- y / Scale^2 dl.dscale <- -1 / Scale + temp4 + 2 * temp4 * temp3 / (1 - temp3) 
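    ## From the exponential-geometric log-density
    ##   log f(y) = -log(Scale) + log1p(-shape) - y/Scale
    ##              - 2 * log1p(-shape * exp(-y/Scale)),
    ## with temp2 = exp(-y/Scale), temp3 = shape * temp2 and
    ## temp4 = y/Scale^2, the score components are
    ##   dl/dScale = -1/Scale + temp4 + 2 * temp4 * temp3 / (1 - temp3)
    ##   dl/dshape = -1/(1 - shape) + 2 * temp2 / (1 - temp3)
    ## matching 'dl.dscale' above and 'dl.dshape' below.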
dl.dshape <- -1 / (1 - shape) + 2 * temp2 / (1 - temp3) dscale.deta <- dtheta.deta(Scale, .lscale , earg = .escale ) dshape.deta <- dtheta.deta(shape, .lshape , earg = .eshape ) dthetas.detas <- cbind(dscale.deta, dshape.deta) answer <- c(w) * cbind(dl.dscale, dl.dshape) * dthetas.detas answer }), list( .lscale = lscale , .lshape = lshape, .escale = escale, .eshape = eshape ))), weight = eval(substitute(expression({ run.varcov <- 0 ind1 <- iam(NA, NA, M = M, both = TRUE, diag = TRUE) if (length( .nsimEIM )) { for(ii in 1:( .nsimEIM )) { ysim <- rexpgeom(n, scale=Scale, shape=shape) temp2 <- exp(-ysim / Scale) temp3 <- shape * temp2 temp4 <- ysim / Scale^2 dl.dscale <- -1 / Scale + temp4 + 2 * temp4 * temp3 / (1 - temp3) dl.dshape <- -1 / (1 - shape) + 2 * temp2 / (1 - temp3) temp6 <- cbind(dl.dscale, dl.dshape) run.varcov <- run.varcov + temp6[,ind1$row.index] * temp6[,ind1$col.index] } run.varcov <- run.varcov / .nsimEIM wz <- if (intercept.only) matrix(colMeans(run.varcov), n, ncol(run.varcov), byrow = TRUE) else run.varcov wz <- wz * dthetas.detas[, ind1$row] * dthetas.detas[, ind1$col] } c(w) * wz }), list( .nsimEIM = nsimEIM )))) } dexplog <- function(x, scale = 1, shape, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) N <- max(length(x), length(scale), length(shape)) x <- rep(x, len = N) scale <- rep(scale, len = N) shape <- rep(shape, len = N) logdensity <- rep(log(0), len = N) if (any(xok <- (x > 0))) { temp1 <- -x[xok] / scale[xok] logdensity[xok] <- -log(-log(shape[xok])) - log(scale[xok]) + log1p(-shape[xok]) + temp1 - log1p(-(1-shape[xok]) * exp(temp1)) } logdensity[(scale <= 0) | (shape <= 0) | (shape >= 1)] <- NaN if (log.arg) { logdensity } else { exp(logdensity) } } pexplog <- function(q, scale = 1, shape) { ans <- 1 - log1p(-(1-shape) * exp(-q / scale)) / log(shape) ans[q <= 0] <- 0 ans[(scale <= 0) | (shape <= 0) | (shape >= 1)] <- NaN ans } qexplog <- function(p, scale = 1, shape) { ans <- -scale * (log1p(-shape^(1.0 - p)) - log1p(-shape)) ans[(scale <= 0) | (shape <= 0) | (shape >= 1)] <- NaN ans[p < 0] <- NaN ans[p > 1] <- NaN ans[p == 0] <- 0 ans[p == 1] <- Inf ans } rexplog <- function(n, scale = 1, shape) { ans <- qexplog(runif(n), scale = scale, shape = shape) ans[(scale <= 0) | (shape <= 0) | (shape >= 1)] <- NaN ans } explogarithmic.control <- function(save.weight = TRUE, ...) 
{ list(save.weight = save.weight) } explogarithmic <- function(lscale = "loge", lshape = "logit", iscale = NULL, ishape = NULL, tol12 = 1.0e-05, zero = 1, nsimEIM = 400) { lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") lshape <- as.list(substitute(lshape)) eshape <- link2list(lshape) lshape <- attr(eshape, "function.name") if (length(ishape)) if (!is.Numeric(ishape, positive = TRUE) || any(ishape >= 1)) stop("bad input for argument 'ishape'") if (length(iscale)) if (!is.Numeric(iscale, positive = TRUE)) stop("bad input for argument 'iscale'") if (length(zero) && !is.Numeric(zero, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'zero'") if (!is.Numeric(nsimEIM, allowable.length = 1, integer.valued = TRUE)) stop("bad input for argument 'nsimEIM'") if (nsimEIM <= 50) stop("argument 'nsimEIM' should be an integer greater than 50") new("vglmff", blurb = c("Exponential logarithmic distribution\n\n", "Links: ", namesof("Scale", lscale, earg = escale), ", ", namesof("shape", lshape, earg = eshape), "\n", "Mean: ", "(-polylog(2, 1 - p) * Scale) / log(shape)"), constraints = eval(substitute(expression({ constraints <- cm.zero.vgam(constraints, x, .zero, M) }), list( .zero = zero ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, out.wy = TRUE, maximize = TRUE) w <- temp5$w y <- temp5$y predictors.names <- c( namesof("Scale", .lscale , earg = .escale , short = TRUE), namesof("shape", .lshape , earg = .eshape , short = TRUE)) if (!length(etastart)) { scale.init <- if (is.Numeric( .iscale , positive = TRUE)) { rep( .iscale , len = n) } else { stats::sd(c(y)) } shape.init <- if (is.Numeric( .ishape , positive = TRUE)) { rep( .ishape , len = n) } else { rep((exp(median(y)/scale.init) - 1)^2, len = n) } shape.init[shape.init >= 0.95] <- 0.95 shape.init[shape.init <= 0.05] <- 0.05 etastart <- cbind(theta2eta(scale.init, .lscale , earg = .escale ), theta2eta(shape.init, .lshape , earg = .eshape )) } }), list( .lscale = lscale, .lshape = lshape, .iscale = iscale, .ishape = ishape, .escale = escale, .eshape = eshape))), linkinv = eval(substitute(function(eta, extra = NULL) { Scale <- eta2theta(eta[, 1], .lscale , earg = .escale ) shape <- eta2theta(eta[, 2], .lshape , earg = .eshape ) qexplog(p = 0.5, shape = shape, scale = Scale) }, list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape ))), last = eval(substitute(expression({ misc$link <- c(Scale = .lscale , shape = .lshape ) misc$earg <- list(Scale = .escale , shape = .eshape ) misc$expected <- TRUE misc$nsimEIM <- .nsimEIM misc$multipleResponses <- FALSE }), list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape, .nsimEIM = nsimEIM ))), loglikelihood = eval(substitute(function(mu, y, w, residuals = FALSE, eta, extra = NULL) { Scale <- eta2theta(eta[, 1], .lscale , earg = .escale ) shape <- eta2theta(eta[, 2], .lshape , earg = .eshape ) if (residuals) stop("loglikelihood residuals", "not implemented yet") else { sum(c(w) * dexplog(x = y, scale = Scale, shape = shape, log = TRUE)) } }, list( .lscale = lscale , .lshape = lshape , .escale = escale , .eshape = eshape ))), vfamily = c("explogarithmic"), deriv = eval(substitute(expression({ Scale <- eta2theta(eta[, 1], .lscale , earg = .escale ) shape <- eta2theta(eta[, 2], .lshape , earg = .eshape ) temp2 <- exp(-y / Scale) temp3 <- y / Scale^2 temp4 <- 1 - shape dl.dscale <- (-1 / Scale) + temp3 + (temp4 * temp3 * temp2) / (1 - temp4 * temp2) dl.dshape <- -1 
/ (shape * log(shape)) - 1 / temp4 - temp2 / (1 - temp4 * temp2) dscale.deta <- dtheta.deta(Scale, .lscale , earg = .escale ) dshape.deta <- dtheta.deta(shape, .lshape , earg = .eshape ) dthetas.detas <- cbind(dscale.deta, dshape.deta) answer <- c(w) * cbind(dl.dscale, dl.dshape) * dthetas.detas answer }), list( .lscale = lscale , .lshape = lshape, .escale = escale, .eshape = eshape ))), weight = eval(substitute(expression({ run.varcov <- 0 ind1 <- iam(NA, NA, M = M, both = TRUE, diag = TRUE) if (length( .nsimEIM )) { for(ii in 1:( .nsimEIM )) { ysim <- rexplog(n, scale=Scale, shape=shape) temp2 <- exp(-ysim / Scale) temp3 <- ysim / Scale^2 temp4 <- 1 - shape dl.dscale <- (-1 / Scale) + temp3 + (temp4 * temp3 * temp2) / (1 - temp4 * temp2) dl.dshape <- -1 / (shape * log(shape)) - 1 / temp4 - temp2 / (1 - temp4 * temp2) temp6 <- cbind(dl.dscale, dl.dshape) run.varcov <- run.varcov + temp6[,ind1$row.index] * temp6[,ind1$col.index] } run.varcov <- run.varcov / .nsimEIM wz <- if (intercept.only) matrix(colMeans(run.varcov), n, ncol(run.varcov), byrow = TRUE) else run.varcov wz <- wz * dthetas.detas[, ind1$row] * dthetas.detas[, ind1$col] } c(w) * wz }), list( .nsimEIM = nsimEIM )))) } dweibull3 <- function(x, location = 0, scale = 1, shape, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) dweibull(x = x - location, shape = shape, scale = scale, log = log.arg) } pweibull3 <- function(q, location = 0, scale = 1, shape) { pweibull(q = q - location, scale = scale, shape = shape) } qweibull3 <- function(p, location = 0, scale = 1, shape) { location + qweibull(p = p, shape = shape, scale = scale) } rweibull3 <- function(n, location = 0, scale = 1, shape) { location + rweibull(n = n, shape = shape, scale = scale) } ### Two-piece normal (TPN) family dtpn <- function(x, location = 0, scale = 1, skewpar = 0.5, log.arg = FALSE) { if (any(skewpar <= 0 | skewpar >= 1 | scale <= 0 , na.rm = TRUE)) stop("some parameters out of bound") LLL <- max(length(x), length(location), length(scale), length(skewpar)) if (length(x) != LLL) x <- rep(x, length = LLL) if (length(location) != LLL) location <- rep(location, length = LLL) if (length(scale) != LLL) scale <- rep(scale, length = LLL) if (length(skewpar) != LLL) skewpar <- rep(skewpar, length = LLL) zedd <- (x - location) / scale log.s1 <- -zedd^2 / (8 * skewpar^2) log.s2 <- -zedd^2 / (8 * (1 - skewpar)^2) logdensity <- log.s1 logdensity[zedd > 0] <- log.s2[zedd > 0] logdensity <- logdensity -log(scale) - log(sqrt(2 * pi)) if (log.arg) logdensity else exp(logdensity) } ptpn <- function(q, location = 0, scale = 1, skewpar = 0.5) { if (any(skewpar <= 0 | skewpar >= 1 | scale <= 0 , na.rm = TRUE)) stop("some parameters out of bound") zedd <- (q - location) / scale s1 <- 2 * skewpar * pnorm(zedd, sd = 2 * skewpar) #/ scale s2 <- skewpar + (1 - skewpar) * pgamma(zedd^2 / (8 * (1-skewpar)^2), 0.5) ans <- rep(0.0, length(zedd)) ans[zedd <= 0] <- s1[zedd <= 0] ans[zedd > 0] <- s2[zedd > 0] ans } pos <- function(x) ifelse(x > 0, x, 0.0) qtpn <- function(p, location = 0, scale = 1, skewpar = 0.5) { pp = p if (any(pp <= 0 | pp >= 1 | skewpar <= 0 | skewpar >= 1 | scale <= 0 , na.rm = TRUE)) stop("some parameters out of bound") # Recycle the vectors to equal lengths LLL <- max(length(pp), length(location), length(scale), length(skewpar)) if (length(pp) != LLL) pp <- rep(pp, length = LLL) if (length(location) != LLL) location <- rep(location, length = LLL) if (length(scale) != LLL) scale <- rep(scale, length = 
LLL) if (length(skewpar) != LLL) skewpar <- rep(skewpar, length = LLL) qtpn <- rep(as.numeric(NA), length(LLL)) qtpn <- qnorm(pp / (2 * skewpar), sd = 2 * skewpar) qtpn[pp > skewpar] <- sqrt(8 * ( 1 - skewpar)^2 * qgamma(pos( pp - skewpar) / ( 1 - skewpar),.5))[pp > skewpar] qtpn * scale + location } rtpn <- function(n, location = 0, scale = 1, skewpar = 0.5) { qtpn(p = runif(n), location = location, scale = scale, skewpar = skewpar) } tpnff <- function(llocation = "identity", lscale = "loge", pp = 0.5, method.init = 1, zero = 2) { if (!is.Numeric(method.init, allowable.length = 1, integer.valued = TRUE, positive = TRUE) || method.init > 4) stop("argument 'imethod' must be 1 or 2 or 3 or 4") if (!is.Numeric(pp, allowable.length = 1, positive = TRUE)) stop("bad input for argument 'pp'") llocat <- as.list(substitute(llocation)) elocat <- link2list(llocat) llocat <- attr(elocat, "function.name") lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") if (length(zero) && !is.Numeric(zero, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'zero'") new("vglmff", blurb = c("Two-piece normal distribution \n\n", "Links: ", namesof("location", llocat, earg = elocat), ", ", namesof("scale", lscale, earg = escale), "\n\n", "Mean: "), constraints = eval(substitute(expression({ constraints <- cm.zero.vgam(constraints, x, .zero, M) }), list( .zero = zero ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, out.wy = TRUE, maximize = TRUE) w <- temp5$w y <- temp5$y predictors.names <- c(namesof("location", .llocat, earg = .elocat, tag = FALSE), namesof("scale", .lscale, earg = .escale, tag = FALSE)) if (!length(etastart)) { junk <- lm.wfit(x = x, y = c(y), w = c(w)) scale.y.est <- sqrt( sum(c(w) * junk$resid^2) / junk$df.residual ) location.init <- if ( .llocat == "loge") pmax(1/1024, y) else { if ( .method.init == 3) { rep(weighted.mean(y, w), len = n) } else if ( .method.init == 2) { rep(median(rep(y, w)), len = n) } else if ( .method.init == 1) { junk$fitted } else { y } } etastart <- cbind( theta2eta(location.init, .llocat, earg = .elocat), theta2eta(scale.y.est, .lscale, earg = .escale)) } }), list( .llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = escale, .method.init=method.init ))), linkinv = eval(substitute(function(eta, extra = NULL) { eta2theta(eta[, 1], .llocat, earg = .elocat) }, list( .llocat = llocat, .elocat = elocat, .escale = escale ))), last = eval(substitute(expression({ misc$link <- c("location" = .llocat, "scale" = .lscale) misc$earg <- list("location" = .elocat, "scale" = .escale) misc$expected <- TRUE misc$pp <- .pp misc$method.init <- .method.init misc$multipleResponses <- FALSE }), list( .llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = escale, .pp = pp, .method.init = method.init ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { location <- eta2theta(eta[, 1], .llocat, earg = .elocat) myscale <- eta2theta(eta[, 2], .lscale, earg = .escale) ppay <- .pp if (residuals) stop("loglikelihood residuals not ", "implemented yet") else { sum(c(w) * dtpn(y, skewpar = ppay, location = location, scale = myscale, log.arg = TRUE)) } }, list( .llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = escale, .pp = pp ))), vfamily = c("tpnff"), deriv = eval(substitute(expression({ mylocat <- eta2theta(eta[, 1], .llocat, earg = .elocat) myscale <- eta2theta(eta[, 2], .lscale, earg = .escale) mypp <- .pp zedd <- (y - 
mylocat) / myscale # cond1 <- (zedd <= 0) cond2 <- (zedd > 0) dl.dlocat <- zedd / (4 * mypp^2) # cond1 dl.dlocat[cond2] <- (zedd / (4 * (1 - mypp)^2))[cond2] dl.dlocat <- dl.dlocat / myscale dl.dscale <- zedd^2 / (4 * mypp^2) dl.dscale[cond2] <- (zedd^2 / (4 * (1 - mypp)^2))[cond2] dl.dscale <- (-1 + dl.dscale) / myscale #dl.dpp <- zedd^2 / (4 * mypp^3) #dl.dpp[cond2] <- -zedd^2 / (4 * (1 - mypp)^3)[cond2] dlocat.deta <- dtheta.deta(mylocat, .llocat, earg = .elocat) dscale.deta <- dtheta.deta(myscale, .lscale, earg = .escale) ans <- c(w) * cbind(dl.dlocat * dlocat.deta, dl.dscale * dscale.deta) ans }), list( .llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = escale, .pp = pp ))), weight = eval(substitute(expression({ wz <- matrix(as.numeric(NA), n, M) # diag matrix; y is one-col too temp10 <- mypp * (1 - mypp) ned2l.dlocat2 <- 1 / ((4 * temp10) * myscale^2) ned2l.dscale2 <- 2 / myscale^2 wz[, iam(1, 1,M)] <- ned2l.dlocat2 * dlocat.deta^2 wz[, iam(2, 2,M)] <- ned2l.dscale2 * dscale.deta^2 # wz[, iam(3, 3,M)] <- ned2l.dskewpar2 * dskewpa.deta^2 # wz[, iam(1, 3,M)] <- ned2l.dlocatdskewpar * dskewpar.deta * dlocat.deta ans c(w) * wz })))) } ######################################################################## tpnff3 <- function(llocation = "identity", lscale = "loge", lskewpar = "identity", method.init = 1, zero = 2) { if (!is.Numeric(method.init, allowable.length = 1, integer.valued = TRUE, positive = TRUE) || method.init > 4) stop("argument 'imethod' must be 1 or 2 or 3 or 4") llocat <- as.list(substitute(llocation)) elocat <- link2list(llocat) llocat <- attr(elocat, "function.name") lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") lskewp <- as.list(substitute(lskewpar)) eskewp <- link2list(lskewp) lskewp <- attr(eskewp, "function.name") if (length(zero) && !is.Numeric(zero, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'zero'") new("vglmff", blurb = c("Two-piece normal distribution \n\n", "Links: ", namesof("location", llocat, earg = elocat), ", ", namesof("scale", lscale, earg = escale), ", ", namesof("skewpar", lscale, earg = eskewp), "\n\n", "Mean: "), constraints = eval(substitute(expression({ constraints <- cm.zero.vgam(constraints, x, .zero, M) }), list( .zero = zero ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, out.wy = TRUE, maximize = TRUE) w <- temp5$w y <- temp5$y predictors.names <- c(namesof("location", .llocat, earg = .elocat, tag = FALSE), namesof("scale", .lscale, earg = .escale, tag = FALSE), namesof("skewpar", .lskewp, earg = .eskewp, tag = FALSE)) if (!length(etastart)) { junk = lm.wfit(x = x, y = c(y), w = c(w)) scale.y.est <- sqrt(sum(c(w) * junk$resid^2) / junk$df.residual) location.init <- if ( .llocat == "loge") pmax(1/1024, y) else { if ( .method.init == 3) { rep(weighted.mean(y, w), len = n) } else if ( .method.init == 2) { rep(median(rep(y, w)), len = n) } else if ( .method.init == 1) { junk$fitted } else { y } } skew.l.in <- sum((y < location.init)) / length(y) etastart <- cbind( theta2eta(location.init, .llocat, earg = .elocat), theta2eta(scale.y.est, .lscale, earg = .escale), theta2eta(skew.l.in, .lskewp, earg = .escale)) } }), list( .llocat = llocat, .lscale = lscale, .lskewp = lskewp, .elocat = elocat, .escale = escale, .eskewp = eskewp, .method.init=method.init ))), linkinv = eval(substitute(function(eta, extra = NULL) { eta2theta(eta[, 1], .llocat, earg = .elocat) }, list( .llocat = llocat, .elocat = elocat, .escale = escale 
))), last = eval(substitute(expression({ misc$link <- c("location" = .llocat, "scale" = .lscale, "skewpar" = .lskewp) misc$earg <- list("location" = .elocat, "scale" = .escale, "skewpar" = .eskewp) misc$expected <- TRUE misc$method.init <- .method.init }), list( .llocat = llocat, .lscale = lscale, .lskewp = lskewp, .elocat = elocat, .escale = escale, .eskewp = eskewp, .method.init = method.init ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { locat <- eta2theta(eta[, 1], .llocat, earg = .elocat) myscale <- eta2theta(eta[, 2], .lscale, earg = .escale) myskew <- eta2theta(eta[, 3], .lskewp, earg = .eskewp) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else { sum(c(w) * dtpn(y, location = locat, scale = myscale, skewpar = myskew, log.arg = TRUE)) } }, list( .llocat = llocat, .lscale = lscale, .lskewp = lskewp, .elocat = elocat, .escale = escale, .eskewp = eskewp ))), vfamily = c("tpnff3"), deriv = eval(substitute(expression({ mylocat <- eta2theta(eta[, 1], .llocat, earg = .elocat) myscale <- eta2theta(eta[, 2], .lscale, earg = .escale) myskew <- eta2theta(eta[, 3], .lskewp, earg = .eskewp) zedd <- (y - mylocat) / myscale cond2 <- (zedd > 0) dl.dlocat <- zedd / (4 * myskew^2) # cond1 dl.dlocat[cond2] <- (zedd / (4 * (1 - myskew)^2))[cond2] dl.dlocat <- dl.dlocat / myscale dl.dscale <- zedd^2 / (4 * myskew^2) dl.dscale[cond2] <- (zedd^2 / (4 * (1 - myskew)^2))[cond2] dl.dscale <- (-1 + dl.dscale) / myscale dl.dskewpar <- zedd^2 / (4 * myskew^3) dl.dskewpar[cond2] <- (-zedd^2 / (4 * (1 - myskew)^3))[cond2] dlocat.deta <- dtheta.deta(mylocat, .llocat, earg = .elocat) dscale.deta <- dtheta.deta(myscale, .lscale, earg = .escale) dskewpar.deta <- dtheta.deta(myskew, .lskewp, earg = .eskewp) ans <- c(w) * cbind(dl.dlocat * dlocat.deta, dl.dscale * dscale.deta, dl.dskewpar * dskewpar.deta ) ans }), list( .llocat = llocat, .lscale = lscale, .lskewp = lskewp, .elocat = elocat, .escale = escale, .eskewp = eskewp ))), weight = eval(substitute(expression({ wz <- matrix(as.numeric(NA), n, dimm(M)) # diag matrix; y is one-col too temp10 <- myskew * (1 - myskew) ned2l.dlocat2 <- 1 / ((4 * temp10) * myscale^2) ned2l.dscale2 <- 2 / myscale^2 ned2l.dskewpar2 <- 3 / temp10 ned2l.dlocatdskewpar <- (-2 * sqrt(2)) / (temp10 * sqrt(pi) * myscale) wz[, iam(1, 1, M)] <- ned2l.dlocat2 * dlocat.deta^2 wz[, iam(2, 2, M)] <- ned2l.dscale2 * dscale.deta^2 wz[, iam(3, 3, M)] <- ned2l.dskewpar2 * dskewpar.deta^2 wz[, iam(1, 3, M)] <- ned2l.dlocatdskewpar * dskewpar.deta * dlocat.deta c(w) * wz })))) } VGAM/R/family.normal.R0000644000176000001440000027571312136651110014143 0ustar ripleyusers# These functions are # Copyright (C) 1998-2013 T.W. Yee, University of Auckland. # All rights reserved.
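# VGAM.weights.function() below expands the prior weights 'w' into an
# n x M matrix 'wz' of working weights: a single column is recycled
# across all M linear predictors, an n x M matrix is used as-is, and
# anything wider than M*(M+1)/2 columns (the most that matrix-band
# format can hold) is an error. A minimal illustrative sketch of the
# recycling case, kept inside if (FALSE) so that it never runs:
if (FALSE) {
  n <- 5
  w <- runif(n)  # a single column of prior weights
  wz <- VGAM.weights.function(w = w, M = 2, n = n)
  dim(wz)  # c(5, 2): row i holds w[i] for both linear predictors
}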
VGAM.weights.function <- function(w, M, n) { ncolw <- ncol(as.matrix(w)) if (ncolw == 1) { wz <- matrix(w, nrow = n, ncol = M) # w_i * diag(M) } else if (ncolw == M) { wz <- as.matrix(w) } else if (ncolw < M && M > 1) { stop("ambiguous input for 'weights'") } else if (ncolw > M*(M+1)/2) { stop("too many columns") } else { wz <- as.matrix(w) } wz } gaussianff <- function(dispersion = 0, parallel = FALSE, zero = NULL) { if (!is.Numeric(dispersion, allowable.length = 1) || dispersion < 0) stop("bad input for argument 'dispersion'") estimated.dispersion <- dispersion == 0 new("vglmff", blurb = c("Vector linear/additive model\n", "Links: identity for Y1,...,YM"), constraints = eval(substitute(expression({ constraints <- cm.vgam(matrix(1, M, 1), x, .parallel , constraints) constraints <- cm.zero.vgam(constraints, x, .zero , M) }), list( .parallel = parallel, .zero = zero ))), deviance = function(mu, y, w, residuals = FALSE, eta, extra = NULL) { M <- if (is.matrix(y)) ncol(y) else 1 n <- if (is.matrix(y)) nrow(y) else length(y) wz <- VGAM.weights.function(w = w, M = M, n = n) if (residuals) { if (M > 1) { U <- vchol(wz, M = M, n = n) temp <- mux22(U, y-mu, M = M, upper = TRUE, as.matrix = TRUE) dimnames(temp) <- dimnames(y) temp } else (y-mu) * sqrt(wz) } else { ResSS.vgam(y-mu, wz = wz, M = M) } }, infos = eval(substitute(function(...) { list(Musual = 1, zero = .zero) }, list( .zero = zero ))), initialize = eval(substitute(expression({ if (is.R()) assign("CQO.FastAlgorithm", TRUE, envir = VGAM::VGAMenv) else CQO.FastAlgorithm <<- TRUE if (any(function.name == c("cqo", "cao")) && (length( .zero ) || (is.logical( .parallel ) && .parallel ))) stop("cannot handle non-default arguments for cqo() and cao()") temp5 <- w.y.check(w = w, y = y, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y M <- if (is.matrix(y)) ncol(y) else 1 dy <- dimnames(y) predictors.names <- if (!is.null(dy[[2]])) dy[[2]] else paste("Y", 1:M, sep = "") if (!length(etastart)) etastart <- 0 * y }), list( .parallel = parallel, .zero = zero ))), linkinv = function(eta, extra = NULL) eta, last = eval(substitute(expression({ dy <- dimnames(y) if (!is.null(dy[[2]])) dimnames(fit$fitted.values) <- dy dpar <- .dispersion if (!dpar) { wz <- VGAM.weights.function(w = w, M = M, n = n) temp5 <- ResSS.vgam(y-mu, wz = wz, M = M) dpar <- temp5 / (length(y) - (if (is.numeric(ncol(X_vlm_save))) ncol(X_vlm_save) else 0)) } misc$dispersion <- dpar misc$default.dispersion <- 0 misc$estimated.dispersion <- .estimated.dispersion misc$link <- rep("identity", length = M) names(misc$link) <- predictors.names misc$earg <- vector("list", M) for (ilocal in 1:M) misc$earg[[ilocal]] <- list() names(misc$earg) <- predictors.names if (is.R()) { if (exists("CQO.FastAlgorithm", envir = VGAM::VGAMenv)) rm("CQO.FastAlgorithm", envir = VGAM::VGAMenv) } else { while (exists("CQO.FastAlgorithm")) remove("CQO.FastAlgorithm") } misc$expected <- TRUE misc$multipleResponses <- TRUE }), list( .dispersion = dispersion, .estimated.dispersion = estimated.dispersion ))), loglikelihood = function(mu, y, w, residuals = FALSE, eta, extra = NULL) { M <- if (is.matrix(y)) ncol(y) else 1 n <- if (is.matrix(y)) nrow(y) else length(y) wz <- VGAM.weights.function(w = w, M = M, n = n) temp1 <- ResSS.vgam(y-mu, wz = wz, M = M) if (M == 1 || ncol(wz) == M) { -0.5 * temp1 + 0.5 * sum(log(wz)) - n * (M / 2) * log(2*pi) } else { if (all(wz[1, ] ==
apply(wz, 2, min)) && all(wz[1, ] == apply(wz, 2, max))) { onewz <- m2adefault(wz[1, , drop = FALSE], M = M) onewz <- onewz[,, 1] # M x M logdet <- determinant(onewz)$modulus logretval <- -0.5 * temp1 + 0.5 * n * logdet - n * (M / 2) * log(2*pi) logretval } else { logretval <- -0.5 * temp1 - n * (M / 2) * log(2*pi) for (ii in 1:n) { onewz <- m2adefault(wz[ii, , drop = FALSE], M = M) onewz <- onewz[,, 1] # M x M logdet <- determinant(onewz)$modulus logretval <- logretval + 0.5 * logdet } logretval } } }, linkfun = function(mu, extra = NULL) mu, vfamily = "gaussianff", deriv = expression({ wz <- VGAM.weights.function(w = w, M = M, n = n) mux22(cc = t(wz), xmat = y-mu, M = M, as.matrix = TRUE) }), weight = expression({ wz })) } dposnorm <- function(x, mean = 0, sd = 1, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) L <- max(length(x), length(mean), length(sd)) x <- rep(x, len = L); mean <- rep(mean, len = L); sd <- rep(sd, len = L); if (log.arg) { ifelse(x < 0, log(0), dnorm(x, mean = mean, sd = sd, log = TRUE) - pnorm(mean / sd, log.p = TRUE)) } else { ifelse(x < 0, 0, dnorm(x = x, mean = mean, sd = sd) / pnorm(mean / sd)) } } pposnorm <- function(q, mean = 0, sd = 1) { L <- max(length(q), length(mean), length(sd)) q <- rep(q, len = L); mean <- rep(mean, len = L); sd <- rep(sd, len = L); ifelse(q < 0, 0, (pnorm(q, mean = mean, sd = sd) - pnorm(0, mean = mean, sd = sd)) / pnorm(q = mean/sd)) } qposnorm <- function(p, mean = 0, sd = 1) { if (!is.Numeric(p, positive = TRUE) || max(p) >= 1) stop("bad input for argument 'p'") qnorm(p = p + (1-p) * pnorm(0, mean = mean, sd = sd), mean = mean, sd = sd) } rposnorm <- function(n, mean = 0, sd = 1) { if (!is.Numeric(n, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'n'") mean <- rep(mean, length = n) sd <- rep(sd, length = n) qnorm(p = runif(n, min = pnorm(0, mean = mean, sd = sd)), mean = mean, sd = sd) } posnormal1.control <- function(save.weight = TRUE, ...) { list(save.weight = save.weight) } posnormal1 <- function(lmean = "identity", lsd = "loge", imean = NULL, isd = NULL, nsimEIM = 100, zero = NULL) { warning("this VGAM family function is not working properly yet") lmean <- as.list(substitute(lmean)) emean <- link2list(lmean) lmean <- attr(emean, "function.name") lsd <- as.list(substitute(lsd)) esd <- link2list(lsd) lsd <- attr(esd, "function.name") if (length(zero) && !is.Numeric(zero, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'zero'") if (length(isd) && !is.Numeric(isd, positive = TRUE)) stop("bad input for argument 'isd'") if (length(nsimEIM)) if (!is.Numeric(nsimEIM, allowable.length = 1, integer.valued = TRUE) || nsimEIM <= 10) stop("argument 'nsimEIM' should be an integer greater than 10") new("vglmff", blurb = c("Positive (univariate) normal distribution\n\n", "Links: ", namesof("mean", lmean, earg = emean, tag = TRUE), "; ", namesof("sd", lsd, earg = esd, tag = TRUE)), constraints = eval(substitute(expression({ constraints <- cm.zero.vgam(constraints, x, .zero , M) }), list( .zero = zero ))), infos = eval(substitute(function(...)
{ list(Musual = 2, par.names = c("mean", "sd"), zero = .zero ) }, list( .zero = zero ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = 1, ncol.y.max = 1, out.wy = TRUE, maximize = TRUE) w <- temp5$w y <- temp5$y predictors.names <- c(namesof("mean", .lmean, earg = .emean, tag = FALSE), namesof("sd", .lsd, earg = .esd, tag = FALSE)) if (!length(etastart)) { init.me <- if (length( .imean)) rep( .imean, len = n) else NULL init.sd <- if (length( .isd )) rep( .isd , len = n) else NULL if (!length(init.me)) init.me <- rep(quantile(y, probs = 0.40), len = n) if (!length(init.sd)) init.sd <- rep(sd(c(y)) * 1.2, len = n) etastart <- cbind(theta2eta(init.me, .lmean, earg = .emean), theta2eta(init.sd, .lsd, earg = .esd )) } }), list( .lmean = lmean, .lsd = lsd, .imean = imean, .isd = isd, .emean = emean, .esd = esd ))), linkinv = eval(substitute(function(eta, extra = NULL) { mymu <- eta2theta(eta[, 1], .lmean, earg = .emean ) mysd <- eta2theta(eta[, 2], .lsd, earg = .esd ) mymu + mysd * dnorm(-mymu/mysd) / pnorm(mymu/mysd) }, list( .lmean = lmean, .lsd = lsd, .emean = emean, .esd = esd ))), last = eval(substitute(expression({ misc$link <- c("mean" = .lmean , "sd" = .lsd ) misc$earg <- list("mean" = .emean , "sd" = .esd ) misc$expected <- TRUE misc$nsimEIM <- .nsimEIM }), list( .lmean = lmean, .lsd = lsd, .emean = emean, .esd = esd, .nsimEIM = nsimEIM ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { mymu <- eta2theta(eta[, 1], .lmean, earg = .emean) mysd <- eta2theta(eta[, 2], .lsd, earg = .esd ) if (residuals) stop("loglikelihood residuals ", "not implemented yet") else { sum(c(w) * dposnorm(x = y, m = mymu, sd = mysd, log = TRUE)) } }, list( .lmean = lmean, .lsd = lsd, .emean = emean, .esd = esd ))), vfamily = c("posnormal1"), deriv = eval(substitute(expression({ mymu <- eta2theta(eta[, 1], .lmean, earg = .emean) mysd <- eta2theta(eta[, 2], .lsd, earg = .esd ) zedd <- (y-mymu) / mysd temp7 <- dnorm(-mymu/mysd) temp8 <- pnorm(mymu/mysd) * mysd dl.dmu <- zedd / mysd^2 - temp7 / temp8 dl.dsd <- (mymu*temp7/temp8 + zedd^3 / mysd - 1) / mysd dmu.deta <- dtheta.deta(mymu, .lmean, earg = .emean) dsd.deta <- dtheta.deta(mysd, .lsd, earg = .esd ) dthetas.detas <- cbind(dmu.deta, dsd.deta) c(w) * dthetas.detas * cbind(dl.dmu, dl.dsd) }), list( .lmean = lmean, .lsd = lsd, .emean = emean, .esd = esd ))), weight = eval(substitute(expression({ run.varcov <- 0 ind1 <- iam(NA, NA, M = M, both = TRUE, diag = TRUE) if (length( .nsimEIM )) { for(ii in 1:( .nsimEIM )) { ysim <- rposnorm(n, m=mymu, sd = mysd) zedd <- (ysim-mymu) / mysd temp7 <- dnorm(-mymu/mysd) temp8 <- pnorm(mymu/mysd) * mysd dl.dmu <- zedd / mysd^2 - temp7 / temp8 dl.dsd <- (mymu*temp7/temp8 + zedd^3 / mysd - 1) / mysd rm(ysim) temp3 <- matrix(c(dl.dmu, dl.dsd), n, 2) run.varcov <- ((ii-1) * run.varcov + temp3[, ind1$row.index]*temp3[, ind1$col.index]) / ii } wz <- if (intercept.only) matrix(colMeans(run.varcov), n, ncol(run.varcov), byrow = TRUE) else run.varcov wz <- wz * dthetas.detas[, ind1$row] * dthetas.detas[, ind1$col] wz <- c(w) * matrix(wz, n, dimm(M)) } else { wz <- matrix(as.numeric(NA), n, dimm(M)) ned2l.dmu2 <- (1 - temp7*mymu/temp8) / mysd^2 - (temp7/temp8)^2 ned2l.dmusd <- (temp7 /(mysd * temp8)) * (1 + (mymu/mysd)^2 + mymu*temp7 / temp8) ned2l.dsd2 <- 2 / mysd^2 - (temp7 * mymu /(mysd^2 * temp8)) * (1 + (mymu/mysd)^2 + mymu*temp7/temp8) wz[, iam(1, 1, M)] <- ned2l.dmu2 * dmu.deta^2 wz[, iam(2, 2, M)] <- ned2l.dsd2 * 
dsd.deta^2 wz[, iam(1, 2, M)] <- ned2l.dmusd * dsd.deta * dmu.deta wz = c(w) * wz } wz }), list( .lmean = lmean, .lsd = lsd, .emean = emean, .esd = esd, .nsimEIM = nsimEIM )))) } dbetanorm <- function(x, shape1, shape2, mean = 0, sd = 1, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) ans <- if (log.arg) { dnorm(x = x, mean = mean, sd = sd, log = TRUE) + (shape1-1) * pnorm(q = x, mean = mean, sd = sd, log.p = TRUE) + (shape2-1) * pnorm(q = x, mean = mean, sd = sd, log.p = TRUE, lower.tail = FALSE) - lbeta(shape1, shape2) } else { dnorm(x = x, mean = mean, sd = sd) * pnorm(q = x, mean = mean, sd = sd)^(shape1-1) * pnorm(q = x, mean = mean, sd = sd, lower.tail = FALSE)^(shape2-1) / beta(shape1, shape2) } ans } pbetanorm <- function(q, shape1, shape2, mean = 0, sd = 1, lower.tail = TRUE, log.p = FALSE) { pbeta(q = pnorm(q = q, mean = mean, sd = sd), shape1 = shape1, shape2 = shape2, lower.tail = lower.tail, log.p = log.p) } qbetanorm <- function(p, shape1, shape2, mean = 0, sd = 1) { if (!is.Numeric(p, positive = TRUE) || max(p) >= 1) stop("bad input for argument 'p'") qnorm(p = qbeta(p = p, shape1 = shape1, shape2 = shape2), mean = mean, sd = sd) } rbetanorm <- function(n, shape1, shape2, mean = 0, sd = 1) { if (!is.Numeric(n, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'n'") qnorm(p = qbeta(p = runif(n), shape1 = shape1, shape2 = shape2), mean = mean, sd = sd) } dtikuv <- function(x, d, mean = 0, sigma = 1, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) if (!is.Numeric(d, allowable.length = 1) || max(d) >= 2) stop("bad input for argument 'd'") L <- max(length(x), length(mean), length(sigma)) x <- rep(x, len = L); mean <- rep(mean, len = L); sigma <- rep(sigma, len = L); hh <- 2 - d KK <- 1 / (1 + 1/hh + 0.75/hh^2) if (log.arg) { dnorm(x = x, mean = mean, sd = sigma, log = TRUE) + log(KK) + 2 * log1p(((x-mean)/sigma)^2 / (2*hh)) } else { dnorm(x = x, mean = mean, sd = sigma) * KK * (1 + ((x-mean)/sigma)^2 / (2*hh))^2 } } ptikuv <- function(q, d, mean = 0, sigma = 1) { if (!is.Numeric(d, allowable.length = 1) || max(d) >= 2) stop("bad input for argument 'd'") L <- max(length(q), length(mean), length(sigma)) q <- rep(q, len = L); mean <- rep(mean, len = L); sigma <- rep(sigma, len = L); zedd1 <- 0.5 * ((q - mean) / sigma)^2 ans <- q*0 + 0.5 hh <- 2 - d KK <- 1 / (1 + 1/hh + 0.75/hh^2) if (any(lhs <- q < mean)) { ans[lhs] <- ( KK/(2*sqrt(pi))) * ( gamma(0.5) * (1 - pgamma(zedd1[lhs], 0.5)) + 2 * gamma(1.5) * (1 - pgamma(zedd1[lhs], 1.5)) / hh + gamma(2.5) * (1 - pgamma(zedd1[lhs], 2.5)) / hh^2) } if (any(rhs <- q > mean)) { ans[rhs] <- 1.0 - Recall(q = (2*mean[rhs] - q[rhs]), d = d, mean = mean[rhs], sigma = sigma[rhs]) } ans } qtikuv <- function(p, d, mean = 0, sigma = 1, ...) 
{ if (!is.Numeric(p, positive = TRUE) || max(p) >= 1) stop("bad input for argument 'p'") if (!is.Numeric(d, allowable.length = 1) || max(d) >= 2) stop("bad input for argument 'd'") if (!is.Numeric(mean)) stop("bad input for argument 'mean'") if (!is.Numeric(sigma)) stop("bad input for argument 'sigma'") L <- max(length(p), length(mean), length(sigma)) p <- rep(p, len = L); mean <- rep(mean, len = L); sigma <- rep(sigma, len = L); ans <- rep(0.0, len = L) myfun <- function(x, d, mean = 0, sigma = 1, p) ptikuv(q = x, d = d, mean = mean, sigma = sigma) - p for (ii in 1:L) { Lower <- ifelse(p[ii] <= 0.5, mean[ii] - 3 * sigma[ii], mean[ii]) while (ptikuv(q = Lower, d = d, mean = mean[ii], sigma = sigma[ii]) > p[ii]) Lower <- Lower - sigma[ii] Upper <- ifelse(p[ii] >= 0.5, mean[ii] + 3 * sigma[ii], mean[ii]) while (ptikuv(q = Upper, d = d, mean = mean[ii], sigma = sigma[ii]) < p[ii]) Upper <- Upper + sigma[ii] ans[ii] <- uniroot(f = myfun, lower = Lower, upper = Upper, d = d, p = p[ii], mean = mean[ii], sigma = sigma[ii], ...)$root } ans } rtikuv <- function(n, d, mean = 0, sigma = 1, Smallno = 1.0e-6) { if (!is.Numeric(n, positive = TRUE, integer.valued = TRUE)) stop("bad input for argument 'n'") if (!is.Numeric(d, allowable.length = 1) || max(d) >= 2) stop("bad input for argument 'd'") if (!is.Numeric(mean, allowable.length = 1)) stop("bad input for argument 'mean'") if (!is.Numeric(sigma, allowable.length = 1)) stop("bad input for argument 'sigma'") if (!is.Numeric(Smallno, positive = TRUE, allowable.length = 1) || Smallno > 0.01 || Smallno < 2 * .Machine$double.eps) stop("bad input for argument 'Smallno'") ans <- rep(0.0, len = n) ptr1 <- 1; ptr2 <- 0 hh <- 2 - d KK <- 1 / (1 + 1/hh + 0.75/hh^2) ymax <- ifelse(hh < 2, dtikuv(x = mean + sigma*sqrt(4 - 2*hh), d = d, mean = mean, sigma = sigma), KK / (sqrt(2 * pi) * sigma)) while (ptr2 < n) { Lower <- mean - 5 * sigma while (ptikuv(q = Lower, d = d, mean = mean, sigma = sigma) > Smallno) Lower <- Lower - sigma Upper <- mean + 5 * sigma while (ptikuv(q = Upper, d = d, mean = mean, sigma = sigma) < 1-Smallno) Upper <- Upper + sigma x <- runif(2*n, min = Lower, max = Upper) index <- runif(2*n, max = ymax) < dtikuv(x, d = d, mean = mean, sigma = sigma) sindex <- sum(index) if (sindex) { ptr2 <- min(n, ptr1 + sindex - 1) ans[ptr1:ptr2] = (x[index])[1:(1+ptr2-ptr1)] ptr1 <- ptr2 + 1 } } ans } tikuv <- function(d, lmean = "identity", lsigma = "loge", isigma = NULL, zero = 2) { lmean <- as.list(substitute(lmean)) emean <- link2list(lmean) lmean <- attr(emean, "function.name") lsigma <- as.list(substitute(lsigma)) esigma <- link2list(lsigma) lsigma <- attr(esigma, "function.name") if (length(zero) && (!is.Numeric(zero, integer.valued = TRUE, positive = TRUE) || max(zero) > 2)) stop("bad input for argument 'zero'") if (!is.Numeric(d, allowable.length = 1) || max(d) >= 2) stop("bad input for argument 'd'") new("vglmff", blurb = c("Short-tailed symmetric [Tiku and Vaughan (1999)] ", "distribution\n", "Link: ", namesof("mean", lmean, earg = emean), ", ", namesof("sigma", lsigma, earg = esigma), "\n", "\n", "Mean: mean"), constraints = eval(substitute(expression({ constraints <- cm.zero.vgam(constraints, x, .zero , M) }), list( .zero = zero ))), infos = eval(substitute(function(...) 
{ list(Musual = 2, zero = .zero) }, list( .zero = zero ))), initialize = eval(substitute(expression({ w.y.check(w = w, y = y) predictors.names <- c(namesof("mean", .lmean, earg = .emean, tag = FALSE), namesof("sigma", .lsigma, earg = .esigma, tag = FALSE)) if (!length(etastart)) { sigma.init <- if (length(.isigma)) rep(.isigma, length = n) else { hh <- 2 - .d KK <- 1 / (1 + 1/hh + 0.75/hh^2) K2 <- 1 + 3/hh + 15/(4*hh^2) rep(sqrt(var(y) / (KK*K2)), len = n) } mean.init <- rep(weighted.mean(y, w), len = n) etastart <- cbind(theta2eta(mean.init, .lmean, earg = .emean), theta2eta(sigma.init, .lsigma, earg = .esigma)) } }),list( .lmean = lmean, .lsigma = lsigma, .isigma = isigma, .d = d, .emean = emean, .esigma = esigma ))), linkinv = eval(substitute(function(eta, extra = NULL) { eta2theta(eta[, 1], .lmean, earg = .emean) }, list( .lmean = lmean, .emean = emean, .esigma = esigma ))), last = eval(substitute(expression({ misc$link <- c("mean"= .lmean , "sigma"= .lsigma ) misc$earg <- list("mean"= .emean , "sigma"= .esigma ) misc$expected <- TRUE misc$d <- .d }), list( .lmean = lmean, .lsigma = lsigma, .d = d, .emean = emean, .esigma = esigma ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { mymu <- eta2theta(eta[, 1], .lmean, earg = .emean) sigma <- eta2theta(eta[, 2], .lsigma, earg = .esigma) if (residuals) stop("loglikelihood residuals ", "not implemented yet") else { sum(c(w) * dtikuv(x = y, d = .d , mean = mymu, sigma = sigma, log = TRUE)) } }, list( .lmean = lmean, .lsigma = lsigma, .d = d, .emean = emean, .esigma = esigma ))), vfamily = c("tikuv"), deriv = eval(substitute(expression({ mymu <- eta2theta(eta[, 1], .lmean, earg = .emean) sigma <- eta2theta(eta[, 2], .lsigma, earg = .esigma) dmu.deta <- dtheta.deta(mymu, .lmean, earg = .emean) dsigma.deta <- dtheta.deta(sigma, .lsigma, earg = .esigma) zedd <- (y - mymu) / sigma hh <- 2 - .d gzedd <- zedd / (1 + 0.5*zedd^2 / hh) dl.dmu <- zedd / sigma - 2 * gzedd / (hh*sigma) dl.dsigma <- (zedd^2 - 1 - 2 * zedd * gzedd / hh) / sigma c(w) * cbind(dl.dmu * dmu.deta, dl.dsigma * dsigma.deta) }), list( .lmean = lmean, .lsigma = lsigma, .d = d, .emean = emean, .esigma = esigma ))), weight = eval(substitute(expression({ ayy <- 1 / (2*hh) Dnos <- 1 - (2/hh) * (1 - ayy) / (1 + 2*ayy + 3*ayy^2) Dstar <- -1 + 3 * (1 + 2*ayy + 11*ayy^2) / (1 + 2*ayy + 3*ayy^2) ned2l.dmymu2 <- Dnos / sigma^2 ned2l.dnu2 <- Dstar / sigma^2 wz <- matrix(as.numeric(NA), n, M) # diagonal matrix wz[, iam(1, 1, M)] <- ned2l.dmymu2 * dmu.deta^2 wz[, iam(2, 2, M)] <- ned2l.dnu2 * dsigma.deta^2 c(w) * wz }), list( .lmean = lmean, .lsigma = lsigma, .emean = emean, .esigma = esigma )))) } dfnorm <- function(x, mean = 0, sd = 1, a1 = 1, a2=1) { if (!is.Numeric(a1, positive = TRUE) || !is.Numeric(a2, positive = TRUE)) stop("bad input for arguments 'a1' and 'a2'") if (any(a1 <= 0 | a2 <= 0)) stop("arguments 'a1' and 'a2' must have positive values only") ans <- dnorm(x = x/(a1*sd) - mean/sd)/(a1*sd) + dnorm(x = x/(a2*sd) + mean/sd)/(a2*sd) ans[x < 0] <- 0 ans[a1 <= 0 | a2 <= 0 | is.na(a1) | is.na(a2)] <- NA ans } pfnorm <- function(q, mean = 0, sd = 1, a1 = 1, a2=1) { if (!is.Numeric(a1, positive = TRUE) || !is.Numeric(a2, positive = TRUE)) stop("bad input for arguments 'a1' and 'a2'") if (any(a1 <= 0 | a2 <= 0)) stop("arguments 'a1' and 'a2' must have positive values only") L <- max(length(q), length(mean), length(sd)) q <- rep(q, len = L); mean <- rep(mean, len = L); sd <- rep(sd, len = L); ifelse(q < 0, 0, pnorm(q = q/(a1*sd) - mean/sd) - 
pnorm(q = -q/(a2*sd) - mean/sd)) } qfnorm <- function(p, mean = 0, sd = 1, a1 = 1, a2 = 1, ...) { if (!is.Numeric(p, positive = TRUE) || max(p) >= 1) stop("bad input for argument 'p'") if (!is.Numeric(a1, positive = TRUE) || !is.Numeric(a2, positive = TRUE)) stop("bad input for arguments 'a1' and 'a2'") if (any(a1 <= 0 | a2 <= 0)) stop("arguments 'a1' and 'a2' must have positive values only") L <- max(length(p), length(mean), length(sd), length(a1), length(a2)) p <- rep(p, len = L); mean <- rep(mean, len = L); sd <- rep(sd, len = L); a1 <- rep(a1, len = L); a2 <- rep(a2, len = L); ans <- rep(0.0 , len = L) myfun <- function(x, mean = 0, sd = 1, a1 = 1, a2=2, p) pfnorm(q = x, mean = mean, sd = sd, a1 = a1, a2 = a2) - p for(i in 1:L) { mytheta <- mean[i]/sd[i] EY <- sd[i] * ((a1[i]+a2[i]) * (mytheta * pnorm(mytheta) + dnorm(mytheta)) - a2[i] * mytheta) Upper <- 2 * EY while (pfnorm(q = Upper, mean = mean[i], sd = sd[i], a1 = a1[i], a2 = a2[i]) < p[i]) Upper <- Upper + sd[i] ans[i] <- uniroot(f = myfun, lower = 0, upper = Upper, mean = mean[i], sd = sd[i], a1 = a1[i], a2 = a2[i], p = p[i], ...)$root } ans } rfnorm <- function(n, mean = 0, sd = 1, a1 = 1, a2=1) { if (!is.Numeric(n, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'n'") if (!is.Numeric(a1, positive = TRUE) || !is.Numeric(a2, positive = TRUE)) stop("bad input for arguments 'a1' and 'a2'") if (any(a1 <= 0 | a2 <= 0)) stop("arguments 'a1' and 'a2' must have positive values only") X <- rnorm(n, mean = mean, sd = sd) pmax(a1 * X, -a2*X) } fnormal1 <- function(lmean = "identity", lsd = "loge", imean = NULL, isd = NULL, a1 = 1, a2 = 1, nsimEIM = 500, imethod = 1, zero = NULL) { if (!is.Numeric(a1, positive = TRUE, allowable.length = 1) || !is.Numeric(a2, positive = TRUE, allowable.length = 1)) stop("bad input for arguments 'a1' and 'a2'") if (any(a1 <= 0 | a2 <= 0)) stop("arguments 'a1' and 'a2' must each be a positive value") if (!is.Numeric(imethod, allowable.length = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("argument 'imethod' must be 1 or 2") lmean <- as.list(substitute(lmean)) emean <- link2list(lmean) lmean <- attr(emean, "function.name") lsd <- as.list(substitute(lsd)) esd <- link2list(lsd) lsd <- attr(esd, "function.name") if (length(zero) && !is.Numeric(zero, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'zero'") if (!is.Numeric(nsimEIM, allowable.length = 1, integer.valued = TRUE) || nsimEIM <= 10) stop("argument 'nsimEIM' should be an integer greater than 10") if (length(imean) && !is.Numeric(imean)) stop("bad input for 'imean'") if (length(isd) && !is.Numeric(isd, positive = TRUE)) stop("bad input for 'isd'") new("vglmff", blurb = c("(Generalized) folded univariate normal distribution\n\n", "Link: ", namesof("mean", lmean, earg = emean, tag = TRUE), "; ", namesof("sd", lsd, earg = esd, tag = TRUE)), infos = eval(substitute(function(...) 
{ list(Musual = 2, a1 = .a1 , a2 = .a2 , zero = .zero , nsimEIM = .nsimEIM ) }, list( .zero = zero, .a1 = a1, .a2 = a2, .nsimEIM = nsimEIM ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = 1, ncol.y.max = 1, out.wy = TRUE, maximize = TRUE) w <- temp5$w y <- temp5$y predictors.names <- c(namesof("mean", .lmean, earg = .emean, tag = FALSE), namesof("sd", .lsd, earg = .esd, tag = FALSE)) if (!length(etastart)) { junk <- lm.wfit(x = x, y = c(y), w = c(w)) if (FALSE) { if ((ncol(cbind(w)) != 1) || any(w != round(w))) stop("'weights' must be a vector or a one-column matrix ", "with integer values") m1d <- meany <- weighted.mean(y, w) m2d <- weighted.mean(y^2, w) stddev <- sqrt( sum(c(w) * junk$resid^2) / junk$df.residual ) Ahat <- m1d^2 / m2d thetahat <- sqrt(max(1/Ahat -1, 0.1)) mean.init <- rep(if(length( .imean)) .imean else thetahat * sqrt((stddev^2 + meany^2) * Ahat), len = n) sd.init <- rep(if(length( .isd)) .isd else sqrt((stddev^2 + meany^2) * Ahat), len = n) } stddev <- sqrt( sum(c(w) * junk$resid^2) / junk$df.residual ) meany <- weighted.mean(y, w) mean.init <- rep(if(length( .imean )) .imean else {if( .imethod == 1) median(y) else meany}, len = n) sd.init <- rep(if(length( .isd )) .isd else {if( .imethod == 1) stddev else 1.2*sd(c(y))}, len = n) etastart <- cbind(theta2eta(mean.init, .lmean, earg = .emean), theta2eta(sd.init, .lsd, earg = .esd )) } }), list( .lmean = lmean, .lsd = lsd, .emean = emean, .esd = esd, .imean = imean, .isd = isd, .a1 = a1, .a2 = a2, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { mymu <- eta2theta(eta[, 1], .lmean, earg = .emean) mysd <- eta2theta(eta[, 2], .lsd, earg = .esd ) mytheta <- mymu / mysd mysd * (( .a1 + .a2 ) * (mytheta * pnorm(mytheta) + dnorm(mytheta)) - .a2 * mytheta) }, list( .lmean = lmean, .lsd = lsd, .emean = emean, .esd = esd, .a1 = a1, .a2 = a2 ))), last = eval(substitute(expression({ misc$link <- c("mu" = .lmean , "sd" = .lsd ) misc$earg <- list("mu" = .emean , "sd" = .esd ) misc$multipleResponses <- FALSE misc$expected <- TRUE misc$nsimEIM <- .nsimEIM misc$simEIM <- TRUE misc$imethod <- .imethod misc$a1 <- .a1 misc$a2 <- .a2 }), list( .lmean = lmean, .lsd = lsd, .emean = emean, .esd = esd, .imethod = imethod, .nsimEIM = nsimEIM, .a1 = a1, .a2 = a2 ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { mymu <- eta2theta(eta[, 1], .lmean, earg = .emean) mysd <- eta2theta(eta[, 2], .lsd, earg = .esd ) a1vec <- .a1 a2vec <- .a2 if (residuals) stop("loglikelihood residuals ", "not implemented yet") else { sum(c(w)*log(dnorm(x=y/(a1vec*mysd) - mymu/mysd)/(a1vec*mysd) + dnorm(x=y/(a2vec*mysd) + mymu/mysd)/(a2vec*mysd))) } }, list( .lmean = lmean, .lsd = lsd, .emean = emean, .esd = esd, .a1 = a1, .a2 = a2 ))), vfamily = c("fnormal1"), deriv = eval(substitute(expression({ Musual <- 2 mymu <- eta2theta(eta[, 1], .lmean, earg = .emean) mysd <- eta2theta(eta[, 2], .lsd, earg = .esd ) dmu.deta <- dtheta.deta(mymu, .lmean, earg = .emean) dsd.deta <- dtheta.deta(mysd, .lsd, earg = .esd ) a1vec <- .a1 a2vec <- .a2 d3 <- deriv3(~ log((exp(-0.5*(y/(a1vec*mysd) - mymu/mysd)^2)/a1vec + exp(-0.5*(y/(a2vec*mysd) + mymu/mysd)^2)/a2vec)/(mysd*sqrt(2*pi))), name = c("mymu", "mysd"), hessian = FALSE) eval.d3 <- eval(d3) dl.dthetas <- attr(eval.d3, "gradient") # == cbind(dl.dmu, dl.dsd) DTHETA.detas <- cbind(dmu.deta, dsd.deta) c(w) * DTHETA.detas * dl.dthetas }), list( .lmean = lmean, .lsd = lsd, .emean = emean, .esd = esd, 
.a1 = a1, .a2 = a2 ))), weight = eval(substitute(expression({ de3 <- deriv3(~ log((exp(-0.5*(ysim/(a1vec*mysd) - mymu/mysd)^2)/a1vec + exp(-0.5*(ysim/(a2vec*mysd) + mymu/mysd)^2)/a2vec)/(mysd*sqrt(2*pi))), name = c("mymu", "mysd"), hessian = TRUE) run.mean <- 0 for(ii in 1:( .nsimEIM )) { ysim <- rfnorm(n = n, mean = mymu, sd = mysd, a1 = a1vec, a2 = a2vec) eval.de3 <- eval(de3) d2l.dthetas2 <- attr(eval.de3, "hessian") rm(ysim) temp3 <- matrix(0, n, dimm(M)) for(ss in 1:M) for(tt in ss:M) temp3[, iam(ss,tt, M)] <- -d2l.dthetas2[, ss,tt] run.mean <- ((ii-1) * run.mean + temp3) / ii } wz <- if (intercept.only) matrix(colMeans(run.mean), n, dimm(M), byrow = TRUE) else run.mean index0 <- iam(NA, NA, M = M, both = TRUE, diag = TRUE) wz <- wz * DTHETA.detas[, index0$row] * DTHETA.detas[, index0$col] }), list( .nsimEIM = nsimEIM, .a1 = a1, .a2 = a2 )))) } lqnorm.control <- function(trace = TRUE, ...) { list(trace = trace) } lqnorm <- function(qpower = 2, link = "identity", imethod = 1, imu = NULL, shrinkage.init = 0.95) { link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") if (!is.Numeric(qpower, allowable.length = 1) || qpower <= 1) stop("bad input for argument 'qpower'") if (!is.Numeric(imethod, allowable.length = 1, integer.valued = TRUE, positive = TRUE) || imethod > 3) stop("argument 'imethod' must be 1 or 2 or 3") if (!is.Numeric(shrinkage.init, allowable.length = 1) || shrinkage.init < 0 || shrinkage.init > 1) stop("bad input for argument 'shrinkage.init'") new("vglmff", blurb = c("Minimizing the q-norm of residuals\n", "Links: ", namesof("Y1", link, earg = earg, tag = TRUE)), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1, out.wy = TRUE, maximize = TRUE) w <- temp5$w y <- temp5$y M <- if (is.matrix(y)) ncol(y) else 1 dy <- dimnames(y) predictors.names <- if (!is.null(dy[[2]])) dy[[2]] else paste("mu", 1:M, sep = "") predictors.names <- namesof(predictors.names, link = .link, earg = .earg, short = TRUE) if (!length(etastart)) { meany <- weighted.mean(y, w) mean.init <- rep(if(length( .i.mu)) .i.mu else {if( .imethod == 2) median(y) else if ( .imethod == 1) meany else .sinit * meany + (1 - .sinit) * y }, len = n) etastart <- theta2eta(mean.init, link = .link, earg = .earg) } }), list( .imethod = imethod, .i.mu = imu, .sinit = shrinkage.init, .link = link, .earg = earg ))), linkinv = eval(substitute(function(eta, extra = NULL) { mu <- eta2theta(eta, link = .link, earg = .earg) mu }, list( .link = link, .earg = earg ))), last = eval(substitute(expression({ dy <- dimnames(y) if (!is.null(dy[[2]])) dimnames(fit$fitted.values) <- dy misc$link <- rep( .link, length = M) names(misc$link) <- predictors.names misc$earg <- list(mu = .earg) misc$qpower <- .qpower misc$imethod <- .imethod misc$objectiveFunction <- sum( c(w) * (abs(y - mu))^(.qpower) ) }), list( .qpower = qpower, .link = link, .earg = earg, .imethod = imethod ))), linkfun = eval(substitute(function(mu, extra = NULL) { theta2eta(mu, link = .link, earg = .earg) }, list( .link = link, .earg = earg ))), vfamily = "lqnorm", deriv = eval(substitute(expression({ dmu.deta <- dtheta.deta(theta = mu, link = .link, earg = .earg ) myresid <- y - mu signresid <- sign(myresid) temp2 <- (abs(myresid))^(.qpower-1) .qpower * c(w) * temp2 * signresid * dmu.deta }), list( .qpower = qpower, .link = link, .earg = earg ))), weight = eval(substitute(expression({ temp3 <- (abs(myresid))^(.qpower-2) wz <- .qpower *
(.qpower - 1) * c(w) * temp3 * dmu.deta^2 wz }), list( .qpower = qpower, .link = link, .earg = earg )))) } dtobit <- function(x, mean = 0, sd = 1, Lower = 0, Upper = Inf, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) L <- max(length(x), length(mean), length(sd), length(Lower), length(Upper)) x <- rep(x, len = L); mean <- rep(mean, len = L); sd <- rep(sd, len = L); Lower <- rep(Lower, len = L); Upper <- rep(Upper, len = L); ans <- dnorm(x = x, mean = mean, sd = sd, log = log.arg) ans[x < Lower] <- if (log.arg) log(0.0) else 0.0 ans[x > Upper] <- if (log.arg) log(0.0) else 0.0 ind3 <- x == Lower ans[ind3] <- if (log.arg) { log(exp(ans[ind3]) + pnorm(q = Lower[ind3], mean = mean[ind3], sd = sd[ind3])) } else { ans[ind3] + pnorm(q = Lower[ind3], mean = mean[ind3], sd = sd[ind3]) } ind4 <- x == Upper ans[ind4] <- if (log.arg) { log(exp(ans[ind4]) + pnorm(q = Upper[ind4], mean = mean[ind4], sd = sd[ind4], lower.tail = FALSE)) } else { ans[ind4] + pnorm(q = Upper[ind4], mean = mean[ind4], sd = sd[ind4], lower.tail = FALSE) } ans } ptobit <- function(q, mean = 0, sd = 1, Lower = 0, Upper = Inf, lower.tail = TRUE, log.p = FALSE) { if (!is.logical(lower.tail) || length(lower.tail) != 1) stop("argument 'lower.tail' must be a single logical") if (!is.logical(log.p) || length(log.p) != 1) stop("argument 'log.p' must be a single logical") L <- max(length(q), length(mean), length(sd), length(Lower), length(Upper)) q <- rep(q, len = L); mean <- rep(mean, len = L); sd <- rep(sd, len = L); Lower <- rep(Lower, len = L); Upper <- rep(Upper, len = L); ans <- pnorm(q = q, mean = mean, sd = sd, lower.tail = lower.tail) ind1 <- (q < Lower) ans[ind1] <- if (lower.tail) ifelse(log.p, log(0.0), 0.0) else ifelse(log.p, log(1.0), 1.0) ind2 <- (Upper <= q) ans[ind2] <- if (lower.tail) ifelse(log.p, log(1.0), 1.0) else ifelse(log.p, log(0.0), 0.0) ans } qtobit <- function(p, mean = 0, sd = 1, Lower = 0, Upper = Inf) { L <- max(length(p), length(mean), length(sd), length(Lower), length(Upper)) p <- rep(p, len = L); mean <- rep(mean, len = L); sd <- rep(sd, len = L); Lower <- rep(Lower, len = L); Upper <- rep(Upper, len = L); ans <- qnorm(p = p, mean = mean, sd = sd) pnorm.Lower <- ptobit(q = Lower, mean = mean, sd = sd) pnorm.Upper <- ptobit(q = Upper, mean = mean, sd = sd) ind1 <- (p <= pnorm.Lower) ans[ind1] <- Lower[ind1] ind2 <- (pnorm.Upper <= p) ans[ind2] <- Upper[ind2] ans } rtobit <- function(n, mean = 0, sd = 1, Lower = 0, Upper = Inf) { use.n <- if ((length.n <- length(n)) > 1) length.n else if (!is.Numeric(n, integer.valued = TRUE, allowable.length = 1, positive = TRUE)) stop("bad input for argument 'n'") else n L <- max(use.n, length(mean), length(sd), length(Lower), length(Upper)) mean <- rep(mean, len = L); sd <- rep(sd, len = L); Lower <- rep(Lower, len = L); Upper <- rep(Upper, len = L); ans <- rnorm(n = use.n, mean = mean, sd = sd) cenL <- (ans < Lower) ans[cenL] <- Lower[cenL] cenU <- (ans > Upper) ans[cenU] <- Upper[cenU] attr(ans, "Lower") <- Lower attr(ans, "Upper") <- Upper attr(ans, "cenL") <- cenL attr(ans, "cenU") <- cenU ans } tobit.control <- function(save.weight = TRUE, ...) 
{ list(save.weight = save.weight) } tobit <- function(Lower = 0, Upper = Inf, lmu = "identity", lsd = "loge", nsimEIM = 250, imu = NULL, isd = NULL, type.fitted = c("uncensored", "censored", "mean.obs"), imethod = 1, zero = -2) { lmu <- as.list(substitute(lmu)) emu <- link2list(lmu) lmu <- attr(emu, "function.name") lsd <- as.list(substitute(lsd)) esd <- link2list(lsd) lsd <- attr(esd, "function.name") if (!is.Numeric(imethod, allowable.length = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("argument 'imethod' must be 1 or 2") if ( # length(Lower) != 1 || length(Upper) != 1 || !is.numeric(Lower) || !is.numeric(Upper) || any(Lower >= Upper)) stop("Lower and Upper must ", "be numeric with Lower < Upper") if (length(zero) && !is.Numeric(zero, integer.valued = TRUE)) stop("bad input for argument 'zero'") if (!is.Numeric(nsimEIM, allowable.length = 1, integer.valued = TRUE) || nsimEIM <= 10) stop("argument 'nsimEIM' should be an integer greater than 10") if(mode(type.fitted) != "character" && mode(type.fitted) != "name") type.fitted <- as.character(substitute(type.fitted)) type.fitted <- match.arg(type.fitted, c("uncensored", "censored", "mean.obs"))[1] stdTobit <- all(Lower == 0.0) && all(!is.finite(Upper)) && all(lmu == "identity") new("vglmff", blurb = c("Tobit model\n\n", "Links: ", namesof("mu", lmu, earg = emu, tag = TRUE), "; ", namesof("sd", lsd, earg = esd, tag = TRUE), "\n", "Mean: mu", "\n", "Conditional variance: sd^2"), constraints = eval(substitute(expression({ dotzero <- .zero Musual <- 2 eval(negzero.expression) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(Musual = 2, zero = .zero , nsimEIM = .nsimEIM ) }, list( .zero = zero, .nsimEIM = nsimEIM ))), initialize = eval(substitute(expression({ Musual <- 2 temp5 <- w.y.check(w = w, y = y, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, maximize = TRUE) w <- temp5$w y <- temp5$y ncoly <- ncol(y) M <- Musual * ncoly Lowmat <- matrix( .Lower , nrow = n, ncol = ncoly, byrow = TRUE) Uppmat <- matrix( .Upper , nrow = n, ncol = ncoly, byrow = TRUE) extra$censoredL <- (y <= Lowmat) extra$censoredU <- (y >= Uppmat) if (any(y < Lowmat)) { warning("replacing response values less than the value ", .Lower , " by ", .Lower ) y[y < Lowmat] <- Lowmat[y < Lowmat] } if (any(y > Uppmat)) { warning("replacing response values greater than the value ", .Upper, " by ", .Upper) y[y > Uppmat] <- Uppmat[y > Uppmat] } temp1.names <- if (ncoly == 1) "mu" else paste("mu", 1:ncoly, sep = "") temp2.names <- if (ncoly == 1) "sd" else paste("sd", 1:ncoly, sep = "") predictors.names <- c(namesof(temp1.names, .lmu, earg = .emu, tag = FALSE), namesof(temp2.names, .lsd, earg = .esd, tag = FALSE)) predictors.names <- predictors.names[interleave.VGAM(M, M = Musual)] if (!length(etastart)) { anyc <- cbind(extra$censoredL | extra$censoredU) i11 <- if ( .imethod == 1) anyc else FALSE # can be all data mu.init <- sd.init <- matrix(0.0, n, ncoly) for(ii in 1:ncol(y)) { use.i11 <- i11[, ii] mylm <- lm.wfit(x = cbind(x[!use.i11, ]), y = y[!use.i11, ii], w = w[!use.i11, ii]) sd.init[, ii] <- sqrt( sum(w[!use.i11, ii] * mylm$resid^2) / mylm$df.residual ) * 1.5 mu.init[!use.i11, ii] <- mylm$fitted.values if (any(anyc[, ii])) mu.init[anyc[, ii], ii] <- x[anyc[, ii],, drop = FALSE] %*% mylm$coeff } if (length( .i.mu )) mu.init <- matrix( .i.mu , n, ncoly, byrow = TRUE) if (length( .isd )) sd.init <- matrix( .isd , n, ncoly, byrow = TRUE) etastart <- cbind(theta2eta(mu.init, .lmu, earg = .emu ), theta2eta(sd.init, .lsd, earg = .esd )) 
etastart <- etastart[, interleave.VGAM(M, M = Musual), drop = FALSE] } }), list( .Lower = Lower, .Upper = Upper, .lmu = lmu, .lsd = lsd, .emu = emu, .esd = esd, .i.mu = imu, .isd = isd, .imethod = imethod ))), linkinv = eval(substitute( function(eta, extra = NULL) { Musual <- 2 ncoly <- ncol(eta) / Musual mum <- eta2theta(eta[, Musual*(1:ncoly)-1, drop=FALSE], .lmu, earg = .emu ) if ( .type.fitted == "uncensored") return(mum) Lowmat <- matrix( .Lower, nrow = nrow(eta), ncol = ncoly, byrow = TRUE) Uppmat <- matrix( .Upper, nrow = nrow(eta), ncol = ncoly, byrow = TRUE) if ( .type.fitted == "censored") { mum[mum < Lowmat] <- Lowmat[mum < Lowmat] mum[mum > Uppmat] <- Uppmat[mum > Uppmat] mum } else { sdm <- eta2theta(eta[, Musual*(1:ncoly)-0, drop = FALSE], .lsd , earg = .esd ) zeddL <- (Lowmat - mum) / sdm zeddU <- (Uppmat - mum) / sdm Phi.L <- pnorm(zeddL) phi.L <- dnorm(zeddL) Phi.U <- pnorm(zeddU) phi.U <- dnorm(zeddU) mum * (Phi.U - Phi.L) + sdm * (phi.L - phi.U) + Lowmat * Phi.L + Uppmat * (1 - Phi.U) } }, list( .lmu = lmu, .lsd = lsd, .emu = emu, .esd = esd, .Lower = Lower, .Upper = Upper, .type.fitted = type.fitted ))), last = eval(substitute(expression({ temp0303 <- c(rep( .lmu, length = ncoly), rep( .lsd, length = ncoly)) names(temp0303) = c(if (ncoly == 1) "mu" else paste("mu", 1:ncoly, sep = ""), if (ncoly == 1) "sd" else paste("sd", 1:ncoly, sep = "")) temp0303 <- temp0303[interleave.VGAM(M, M = Musual)] misc$link <- temp0303 # Already named misc$earg <- vector("list", M) names(misc$earg) <- names(misc$link) for(ii in 1:ncoly) { misc$earg[[Musual*ii-1]] <- .emu misc$earg[[Musual*ii ]] <- .esd } misc$multipleResponses <- TRUE misc$expected <- TRUE misc$imethod <- .imethod misc$nsimEIM <- .nsimEIM misc$Musual <- Musual misc$stdTobit <- .stdTobit misc$Lower <- Lowmat misc$Upper <- Uppmat if ( .stdTobit ) { save.weight <- control$save.weight <- FALSE fit$weights <- NULL } }), list( .lmu = lmu, .lsd = lsd, .emu = emu, .esd = esd, .nsimEIM = nsimEIM, .imethod = imethod, .stdTobit = stdTobit, .Lower = Lower, .Upper = Upper ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { Musual <- 2 y <- cbind(y) ncoly <- ncol(y) cenL <- extra$censoredL cenU <- extra$censoredU cen0 <- !cenL & !cenU # uncensored obsns Lowmat <- matrix( .Lower , nrow = nrow(eta), ncol = ncoly, byrow = TRUE) Uppmat <- matrix( .Upper , nrow = nrow(eta), ncol = ncoly, byrow = TRUE) mum <- eta2theta(eta[, Musual*(1:ncoly)-1, drop = FALSE], .lmu , earg = .emu ) sdm <- eta2theta(eta[, Musual*(1:ncoly)-0, drop = FALSE], .lsd , earg = .esd ) ell0 <- dnorm( y[cen0], mean = mum[cen0], sd = sdm[cen0], log = TRUE) ellL <- pnorm(Lowmat[cenL], mean = mum[cenL], sd = sdm[cenL], log.p = TRUE, lower.tail = TRUE) ellU <- pnorm(Uppmat[cenU], mean = mum[cenU], sd = sdm[cenU], log.p = TRUE, lower.tail = FALSE) wmat <- matrix(w, nrow = nrow(eta), ncol = ncoly) if (residuals) { stop("loglikelihood residuals not ", "implemented yet") } else { sum(wmat[cen0] * ell0) + sum(wmat[cenL] * ellL) + sum(wmat[cenU] * ellU) } }, list( .lmu = lmu, .lsd = lsd, .emu = emu, .esd = esd, .Lower = Lower, .Upper = Upper ))), vfamily = c("tobit"), deriv = eval(substitute(expression({ Musual <- 2 y <- cbind(y) ncoly <- ncol(y) Lowmat <- matrix( .Lower , nrow = n, ncol = ncoly, byrow = TRUE) Uppmat <- matrix( .Upper , nrow = n, ncol = ncoly, byrow = TRUE) cenL <- extra$censoredL cenU <- extra$censoredU cen0 <- !cenL & !cenU # uncensored obsns mum <- eta2theta(eta[, Musual*(1:ncoly)-1, drop = FALSE], .lmu, earg = .emu ) 
sdm <- eta2theta(eta[, Musual*(1:ncoly)-0, drop = FALSE], .lsd, earg = .esd ) zedd <- (y - mum) / sdm dl.dmu <- zedd / sdm dl.dsd <- (zedd^2 - 1) / sdm dmu.deta <- dtheta.deta(mum, .lmu, earg = .emu ) dsd.deta <- dtheta.deta(sdm, .lsd, earg = .esd ) if (any(cenL)) { mumL <- Lowmat - mum temp21L <- mumL[cenL] / sdm[cenL] PhiL <- pnorm(temp21L) phiL <- dnorm(temp21L) fred21 <- phiL / PhiL dl.dmu[cenL] <- -fred21 / sdm[cenL] dl.dsd[cenL] <- fred21 * (-mumL[cenL] / sdm[cenL]^2) } if (any(cenU)) { mumU <- Uppmat - mum temp21U <- mumU[cenU] / sdm[cenU] PhiU <- pnorm(temp21U, lower.tail = FALSE) phiU <- dnorm(temp21U) fred21 <- -phiU / PhiU dl.dmu[cenU] <- -fred21 / sdm[cenU] # Negated dl.dsd[cenU] <- fred21 * (-mumU[cenU] / sdm[cenU]^2) } dthetas.detas <- cbind(dmu.deta, dsd.deta) dThetas.detas <- dthetas.detas[, interleave.VGAM(M, M = Musual)] myderiv <- cbind(c(w) * dl.dmu, c(w) * dl.dsd) * dthetas.detas myderiv[, interleave.VGAM(M, M = Musual)] }), list( .lmu = lmu, .lsd = lsd, .emu = emu, .esd = esd, .Lower = Lower, .Upper = Upper ))), weight = eval(substitute(expression({ wz <- matrix(0.0, n, M + M - 1) # wz is 'tridiagonal' ind1 <- iam(NA, NA, M = Musual, both = TRUE, diag = TRUE) if (is.numeric( .nsimEIM ) && ! .stdTobit ) { run.varcov <- 0 for(spp. in 1:ncoly) { run.varcov <- 0 muvec <- mum[, spp.] sdvec <- sdm[, spp.] for(ii in 1:( .nsimEIM )) { ysim <- rtobit(n = n, mean = muvec, sd = sdvec, Lower = Lowmat[, spp.], Upper = Uppmat[, spp.]) cenL <- attr(ysim, "cenL") cenU <- attr(ysim, "cenU") cen0 <- !cenL & !cenU # uncensored obsns zedd <- (ysim - muvec) / sdvec dl.dmu <- zedd / sdvec dl.dsd <- (zedd^2 - 1) / sdvec if (any(cenL)) { mumL <- Lowmat[, spp.] - muvec temp21L <- mumL[cenL] / sdvec[cenL] PhiL <- pnorm(temp21L) phiL <- dnorm(temp21L) fred21 <- phiL / PhiL dl.dmu[cenL] <- -fred21 / sdvec[cenL] dl.dsd[cenL] <- fred21 * (-mumL[cenL] / sdvec[cenL]^2) } if (any(cenU)) { mumU <- Uppmat[, spp.] - muvec temp21U <- mumU[cenU] / sdvec[cenU] PhiU <- pnorm(temp21U, lower.tail = FALSE) phiU <- dnorm(temp21U) fred21 <- -phiU / PhiU dl.dmu[cenU] <- -fred21 / sdvec[cenU] # Negated dl.dsd[cenU] <- fred21 * (-mumU[cenU] / sdvec[cenU]^2) } rm(ysim) temp3 <- cbind(dl.dmu, dl.dsd) run.varcov <- run.varcov + temp3[, ind1$row.index] * temp3[, ind1$col.index] } run.varcov <- run.varcov / .nsimEIM wz1 <- if (intercept.only && FALSE) matrix(colMeans(run.varcov), n, ncol(run.varcov), byrow = TRUE) else run.varcov wz1 <- wz1 * dThetas.detas[, Musual * (spp. - 1) + ind1$row] * dThetas.detas[, Musual * (spp. - 1) + ind1$col] for(jay in 1:Musual) for(kay in jay:Musual) { cptr <- iam((spp. - 1) * Musual + jay, (spp. - 1) * Musual + kay, M = M) wz[, cptr] = wz1[, iam(jay, kay, M = Musual)] } } # End of for(spp.) loop } else { wz1 <- matrix(0.0, n, dimm(Musual)) for(spp. in 1:ncoly) { zedd <- (y[, spp.] - mum[, spp.]) / sdm[, spp.] zedd0 <- ( mum[, spp.]) / sdm[, spp.] phivec <- dnorm(zedd0) Phivec <- pnorm(zedd0) wz1[, iam(1, 1, M = Musual)] <- -(phivec * zedd0 - phivec^2 / (1 - Phivec) - Phivec) wz1[, iam(2, 2, M = Musual)] <- -(phivec * zedd0^3 + phivec * zedd0 - phivec^2 * zedd0^2 / (1 - Phivec) - 2 * Phivec) wz1[, iam(1, 2, M = Musual)] <- +(phivec * zedd0^2 + phivec - phivec^2 * zedd0 / (1 - Phivec)) wz1 <- wz1 / sdm[, spp.]^2 wz1 <- wz1 * dThetas.detas[, Musual * (spp. - 1) + ind1$row] * dThetas.detas[, Musual * (spp. - 1) + ind1$col] for(jay in 1:Musual) for(kay in jay:Musual) { cptr <- iam((spp. - 1) * Musual + jay, (spp. 
- 1) * Musual + kay, M = M) wz[, cptr] <- wz1[, iam(jay, kay, M = Musual)] } } # End of for(spp.) loop } # End of EIM w.wz.merge(w = w, wz = wz, n = n, M = M, ndepy = ncoly) }), list( .lmu = lmu, .Lower = Lower, .Upper = Upper, .lsd = lsd, .stdTobit = stdTobit, .nsimEIM = nsimEIM )))) } # End of tobit() normal1 <- function(lmean = "identity", lsd = "loge", lvar = "loge", var.arg = FALSE, imethod = 1, isd = NULL, parallel = FALSE, apply.parint = FALSE, smallno = 1.0e-5, zero = -2) { lmean <- as.list(substitute(lmean)) emean <- link2list(lmean) lmean <- attr(emean, "function.name") lsdev <- as.list(substitute(lsd)) esdev <- link2list(lsdev) lsdev <- attr(esdev, "function.name") lvare <- as.list(substitute(lvar)) evare <- link2list(lvare) lvare <- attr(evare, "function.name") if (length(zero) && !is.Numeric(zero, integer.valued = TRUE)) stop("bad input for argument 'zero'") if (!is.Numeric(smallno, allowable.length = 1, positive = TRUE)) stop("argument 'smallno' must be positive and close to 0") if (smallno > 0.1) { warning("replacing argument 'smallno' with 0.1") smallno <- 0.1 } if (!is.Numeric(imethod, allowable.length = 1, integer.valued = TRUE, positive = TRUE) || imethod > 4) stop("argument 'imethod' must be 1 or 2 or 3 or 4") if (!is.logical(var.arg) || length(var.arg) != 1) stop("argument 'var.arg' must be a single logical") if (!is.logical(apply.parint) || length(apply.parint) != 1) stop("argument 'apply.parint' must be a single logical") if (is.logical(parallel) && parallel && length(zero)) stop("set 'zero = NULL' if 'parallel = TRUE'") new("vglmff", blurb = c("Univariate normal distribution\n\n", "Links: ", namesof("mean", lmean, earg = emean, tag = TRUE), "; ", if (var.arg) namesof("var", lvare, earg = evare, tag = TRUE) else namesof("sd" , lsdev, earg = esdev, tag = TRUE), "\n", if (var.arg) "Variance: var" else "Variance: sd^2"), constraints = eval(substitute(expression({ constraints <- cm.vgam(matrix(1, M, 1), x, .parallel , constraints, apply.int = .apply.parint ) dotzero <- .zero Musual <- 2 eval(negzero.expression) }), list( .zero = zero, .parallel = parallel, .apply.parint = apply.parint ))), infos = eval(substitute(function(...)
{ list(Musual = 2, zero = .zero) }, list( .zero = zero ))), initialize = eval(substitute(expression({ orig.y <- y if (length(attr(orig.y, "Prior.Weights"))) { if (any(c(w) != 1)) warning("replacing the 'weights' argument by the 'Prior.Weights'", "attribute of the response (probably due to Qvar()") w <- attr(orig.y, "Prior.Weights") extra$attributes.y <- attributes(orig.y) } else { } temp5 <- w.y.check(w = w, y = y, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y ncoly <- ncol(y) Musual <- 2 extra$ncoly <- ncoly extra$Musual <- Musual M <- Musual * ncoly mynames1 <- paste("mean", if (ncoly > 1) 1:ncoly else "", sep = "") mynames2 <- paste(if ( .var.arg ) "var" else "sd", if (ncoly > 1) 1:ncoly else "", sep = "") predictors.names <- c(namesof(mynames1, .lmean , earg = .emean , tag = FALSE), if ( .var.arg ) namesof(mynames2, .lvare , earg = .evare , tag = FALSE) else namesof(mynames2, .lsdev , earg = .esdev , tag = FALSE)) predictors.names <- predictors.names[interleave.VGAM(M, M = Musual)] extra$predictors.names <- predictors.names if (!length(etastart)) { sdev.init <- mean.init <- matrix(0, n, ncoly) for (jay in 1:ncoly) { jfit <- lm.wfit(x = x, y = y[, jay], w = w[, jay]) mean.init[, jay] <- if ( .lmean == "loge") pmax(1/1024, y[, jay]) else if( .imethod == 1) median(y[, jay]) else if( .imethod == 2) weighted.mean(y[, jay], w = w[, jay]) else if( .imethod == 3) weighted.mean(y[, jay], w = w[, jay]) * 0.5 + y[, jay] * 0.5 else mean(jfit$fitted) sdev.init[, jay] <- if( .imethod == 1) { sqrt( sum(w[, jay] * (y[, jay] - mean.init[, jay])^2) / sum(w[, jay]) ) } else if( .imethod == 2) { if (jfit$df.resid > 0) sqrt( sum(w[, jay] * jfit$resid^2) / jfit$df.resid ) else sqrt( sum(w[, jay] * jfit$resid^2) / sum(w[, jay]) ) } else if( .imethod == 3) { sqrt( sum(w[, jay] * (y[, jay] - mean.init[, jay])^2) / sum(w[, jay]) ) } else { sqrt( sum(w[, jay] * abs(y[, jay] - mean.init[, jay])) / sum(w[, jay]) ) } if (any(sdev.init[, jay] <= sqrt( .Machine$double.eps ) )) sdev.init[, jay] <- 1.01 } if (length( .isdev )) { sdev.init <- matrix( .isdev , n, ncoly, byrow = TRUE) } etastart <- cbind(theta2eta(mean.init, .lmean , earg = .emean ), if ( .var.arg ) theta2eta(sdev.init^2, .lvare , earg = .evare ) else theta2eta(sdev.init , .lsdev , earg = .esdev )) etastart <- etastart[, interleave.VGAM(ncol(etastart), M = Musual)] colnames(etastart) <- predictors.names } }), list( .lmean = lmean, .lsdev = lsdev, .lvare = lvare, .emean = emean, .esdev = esdev, .evare = evare, .isdev = isd, .var.arg = var.arg, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { Musual <- extra$Musual ncoly <- extra$ncoly if ( .lmean == "explink") { if (any(eta[, Musual*(1:ncoly) - 1] < 0)) { warning("turning some columns of 'eta' positive in @linkinv") for (ii in 1:ncoly) eta[, Musual*ii - 1] <- pmax( .smallno , eta[, Musual*ii - 1]) } } eta2theta(eta[, Musual*(1:ncoly) - 1], .lmean , earg = .emean ) }, list( .lmean = lmean, .emean = emean, .esdev = esdev , .evare = evare, .smallno = smallno ))), last = eval(substitute(expression({ Musual <- extra$Musual misc$link <- c(rep( .lmean , length = ncoly), rep( .lsdev , length = ncoly)) misc$link <- misc$link [interleave.VGAM(Musual * ncoly, M = Musual)] temp.names <- c(mynames1, mynames2) temp.names <- temp.names[interleave.VGAM(Musual * ncoly, M = Musual)] names(misc$link) <- temp.names misc$earg <- vector("list", Musual * ncoly) names(misc$earg) <- temp.names for(ii in 1:ncoly) { misc$earg[[Musual*ii-1]] 
<- .emean misc$earg[[Musual*ii ]] <- if ( .var.arg ) .evare else .esdev } names(misc$earg) <- temp.names misc$var.arg <- .var.arg misc$Musual <- Musual misc$expected <- TRUE misc$imethod <- .imethod misc$multipleResponses <- TRUE misc$parallel <- .parallel misc$apply.parint <- .apply.parint misc$smallno <- .smallno }), list( .lmean = lmean, .lsdev = lsdev, .lvare = lvare, .emean = emean, .esdev = esdev, .evare = evare, .parallel = parallel, .apply.parint = apply.parint, .smallno = smallno, .var.arg = var.arg, .imethod = imethod ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { ncoly <- extra$ncoly Musual <- extra$Musual if ( .lmean == "explink") { if (any(eta[, Musual*(1:ncoly) - 1] < 0)) { warning("turning some columns of 'eta' positive in @loglikelihood") for (ii in 1:ncoly) eta[, Musual*ii - 1] <- pmax( .smallno , eta[, Musual*ii - 1]) } } if ( .var.arg ) { Varm <- eta2theta(eta[, Musual*(1:ncoly)], .lvare , earg = .evare ) sdev <- sqrt(Varm) } else { sdev <- eta2theta(eta[, Musual*(1:ncoly)], .lsdev , earg = .esdev ) } if (residuals) stop("loglikelihood residuals not ", "implemented yet") else { sum(c(w) * dnorm(y, m = mu, sd = sdev, log = TRUE)) } }, list( .lsdev = lsdev, .lvare = lvare, .esdev = esdev, .evare = evare, .lmean = lmean, .smallno = smallno, .var.arg = var.arg ))), vfamily = c("normal1"), deriv = eval(substitute(expression({ ncoly <- extra$ncoly Musual <- extra$Musual if ( .lmean == "explink") { if (any(eta[, Musual*(1:ncoly) - 1] < 0)) { warning("turning some columns of 'eta' positive in @deriv") for (ii in 1:ncoly) eta[, Musual*ii - 1] <- pmax( .smallno , eta[, Musual*ii - 1]) } } mymu <- eta2theta( eta[, Musual*(1:ncoly) - 1], .lmean , earg = .emean ) if ( .var.arg ) { Varm <- eta2theta(eta[, Musual*(1:ncoly) ], .lvare , earg = .evare ) sdev <- sqrt(Varm) } else { sdev <- eta2theta(eta[, Musual*(1:ncoly) ], .lsdev , earg = .esdev ) } dl.dmu <- (y - mymu) / sdev^2 if ( .var.arg ) { dl.dva <- -0.5 / Varm + 0.5 * (y - mymu)^2 / sdev^4 } else { dl.dsd <- -1.0 / sdev + (y - mymu)^2 / sdev^3 } dmu.deta <- dtheta.deta(mymu, .lmean , earg = .emean ) if ( .var.arg ) { dva.deta <- dtheta.deta(Varm, .lvare , earg = .evare ) } else { dsd.deta <- dtheta.deta(sdev, .lsdev , earg = .esdev ) } ans <- c(w) * cbind(dl.dmu * dmu.deta, if ( .var.arg ) dl.dva * dva.deta else dl.dsd * dsd.deta) ans <- ans[, interleave.VGAM(ncol(ans), M = Musual)] ans }), list( .lmean = lmean, .lsdev = lsdev, .lvare = lvare, .emean = emean, .esdev = esdev, .evare = evare, .smallno = smallno, .var.arg = var.arg ))), weight = eval(substitute(expression({ wz <- matrix(as.numeric(NA), n, M) # diag matrix; y is 1-column too ned2l.dmu2 <- 1 / sdev^2 if ( .var.arg ) { ned2l.dva2 <- 0.5 / Varm^2 } else { ned2l.dsd2 <- 2 / sdev^2 } wz[, Musual*(1:ncoly) - 1] <- ned2l.dmu2 * dmu.deta^2 wz[, Musual*(1:ncoly) ] <- if ( .var.arg ) { ned2l.dva2 * dva.deta^2 } else { ned2l.dsd2 * dsd.deta^2 } w.wz.merge(w = w, wz = wz, n = n, M = M, ndepy = ncoly) }), list( .var.arg = var.arg )))) } # End of normal1() normal1.term <- function(linklist = NULL, # list(), earglist = NULL, # list(), lsd = "loge", lvar = "loge", esd = list(), evar = list(), var.arg = FALSE, imethod = 1, isd = NULL, ieta.coeffs = NULL, zero = "M") { lsd <- as.list(substitute(lsd)) esd <- link2list(lsd) lsd <- attr(esd, "function.name") lvar <- as.list(substitute(lvar)) evar <- link2list(lvar) lvar <- attr(evar, "function.name") if (is.character(zero) && zero != "M") stop("bad input for argument 'zero'") if 
(!is.Numeric(imethod, allowable.length = 1, integer.valued = TRUE, positive = TRUE) || imethod > 4) stop("argument 'imethod' must be 1 or 2 or 3 or 4") if (!is.logical(var.arg) || length(var.arg) != 1) stop("argument 'var.arg' must be a single logical") new("vglmff", blurb = c("Univariate normal distribution with ", "varying coefficients links/constraints\n\n", "Links: ", if (var.arg) namesof("var", lvar, earg = evar, tag = TRUE) else namesof("sd" , lsd, earg = esd, tag = TRUE), "; ", "\n", if (var.arg) "Variance: var" else "Variance: sd^2"), constraints = eval(substitute(expression({ dotzero <- .zero if (is.character(dotzero) && dotzero == "M") dotzero <- M Musual <- M eval(negzero.expression) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(Musual = NA, zero = .zero ) }, list( .zero = zero ))), initialize = eval(substitute(expression({ asgn <- attr(x, "assign") nasgn <- names(asgn) asgn2 <- attr(Xm2, "assign") nasgn2 <- names(asgn2) linklist <- .linklist Linklist <- vector("list", length(nasgn2)) names(Linklist) <- nasgn2 for (ilocal in 1:length(nasgn2)) Linklist[[ilocal]] <- "identity" if (length( linklist ) > 0) { for (ilocal in 1:length(nasgn2)) if (any(names(linklist) == nasgn2[ilocal])) Linklist[[ilocal]] <- linklist[[(nasgn2[ilocal])]] } orig.y <- y temp5 <- w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y number.mlogit <- sum(unlist(Linklist) == "mlogit") if (number.mlogit == 1) stop('cannot have only one "mlogit"') ncoly <- ncol(y) Musual <- NA extra$ncoly <- ncoly extra$Musual <- Musual M <- ncol(Xm2) - (number.mlogit > 0) + 1 extra$Xm2 <- Xm2 cn.Xm2 <- colnames(Xm2) mynames1 <- NULL for (ilocal in 1:length(cn.Xm2)) mynames1 <- c(mynames1, namesof(cn.Xm2[ilocal], Linklist[[ilocal]], list(), tag = FALSE)) mynames2 <- paste(if ( .var.arg ) "var" else "sd", if (ncoly > 1) 1:ncoly else "", sep = "") predictors.names <- c(mynames1, if ( .var.arg ) namesof(mynames2, .lvar , earg = .evar , tag = FALSE) else namesof(mynames2, .lsd , earg = .esd , tag = FALSE)) extra$predictors.names <- predictors.names if (!length(etastart)) { sdev.init <- mean.init <- matrix(0, n, ncoly) for (jay in 1:ncoly) { jfit <- lm.wfit(x = Xm2, y = y[, jay], w = w[, jay]) mean.init[, jay] <- if ( mynames2 == "loge") pmax(1/1024, y[, jay]) else if( .imethod == 1) median(y[, jay]) else if( .imethod == 2) weighted.mean(y[, jay], w = w[, jay]) else if( .imethod == 3) weighted.mean(y[, jay], w = w[, jay]) * 0.5 + y[, jay] * 0.5 else mean(jfit$fitted) sdev.init[, jay] <- if( .imethod == 1) { sqrt( sum(w[, jay] * (y[, jay] - mean.init[, jay])^2) / sum(w[, jay]) ) } else if( .imethod == 2) { if (jfit$df.resid > 0) sqrt( sum(w[, jay] * jfit$resid^2) / jfit$df.resid ) else sqrt( sum(w[, jay] * jfit$resid^2) / sum(w[, jay]) ) } else if( .imethod == 3) { sqrt( sum(w[,
jay] * abs(y[, jay] - mean.init[, jay])) / sum(w[, jay]) ) } if (any(sdev.init[, jay] <= sqrt( .Machine$double.eps ) )) sdev.init[, jay] <- 1.01 print("head(sdev.init[, jay])9") print( head(sdev.init[, jay]) ) } if (length( .isdev )) { sdev.init <- matrix( .isdev , n, ncoly, byrow = TRUE) } etastart <- cbind(eta.equi.probs, if ( .var.arg ) theta2eta(sdev.init^2, .lvar , earg = .evar ) else theta2eta(sdev.init , .lsd , earg = .esd )) colnames(etastart) <- predictors.names print("head(etastart)9") print( head(etastart) ) new.coeffs <- weighted.mean(y, w) extra$new.coeffs <- new.coeffs } }), list( .linklist = linklist, .earglist = earglist, .lsd = lsd, .lvar = lvar, .esd = esd, .evar = evar, .isdev = isd, .ieta.coeffs = ieta.coeffs, .var.arg = var.arg, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { print("hi9") M <- ncol(eta) betas.matrix <- 1 / (1 + exp(-eta[, -M, drop = FALSE])) betas.matrix <- cbind(betas.matrix, 1 / (1 + rowSums(exp(eta[, -M, drop = FALSE])))) print("head(betas.matrix)1") print( head(betas.matrix) ) betas.matrix <- cbind(extra$new.coeffs[1], betas.matrix) print("head(betas.matrix)2") print( head(betas.matrix) ) print("head(extra$Xm2)") print( head(extra$Xm2) ) rowSums(extra$Xm2 * betas.matrix) }, list( .linklist = linklist, .earglist = earglist, .esd = esd , .evar = evar ))), last = eval(substitute(expression({ Musual <- extra$Musual misc$link <- c(rep( "mlogit", length = M - 1), rep( .lsd , length = ncoly)) temp.names <- c(mynames1, mynames2) names(misc$link) <- temp.names misc$var.arg <- .var.arg misc$Musual <- Musual misc$expected <- TRUE misc$imethod <- .imethod misc$multipleResponses <- FALSE }), list( .linklist = linklist, .earglist = earglist, .lsd = lsd, .lvar = lvar, .esd = esd, .evar = evar, .var.arg = var.arg, .imethod = imethod ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { ncoly <- extra$ncoly Musual <- 1 # extra$Musual if ( .var.arg ) { Varm <- eta2theta(eta[, Musual*(1:ncoly)], .lvar , earg = .evar ) sdev <- sqrt(Varm) } else { sdev <- eta2theta(eta[, Musual*(1:ncoly)], .lsd , earg = .esd ) } if (residuals) stop("loglikelihood residuals not ", "implemented yet") else { sum(c(w) * dnorm(y, m = mu, sd = sdev, log = TRUE)) } }, list( .lsd = lsd, .lvar = lvar, .esd = esd, .evar = evar, .var.arg = var.arg ))), vfamily = c("normal1.term"), deriv = eval(substitute(expression({ print("------ in @ deriv -------------") extra$new.coeffs <- new.coeffs ncoly <- extra$ncoly Musual <- 1 # extra$Musual if ( .var.arg ) { Varm <- eta2theta(eta[, Musual*(1:ncoly) ], .lvar , earg = .evar ) sdev <- sqrt(Varm) } else { sdev <- eta2theta(eta[, Musual*(1:ncoly) ], .lsd , earg = .esd ) } betas.matrix <- 1 / (1 + exp(-eta[, -M, drop = FALSE])) betas.matrix <- cbind(betas.matrix, 1 / (1 + rowSums(exp(eta[, -M, drop = FALSE])))) print("head(betas.matrix)5") print( head(betas.matrix) ) if ( !extra$sum1.intercept && any(colnames(extra$X_LM) == "(Intercept)")) betas.matrix <- cbind(extra$new.coeffs[1], betas.matrix) print("head(betas.matrix)6") print( head(betas.matrix) ) print("head(extra$Xm2)") print( head(extra$Xm2) ) use.x <- if ( sum1.intercept ) Xm2[, -ncol(Xm2), drop = FALSE] else Xm2[, -c(1, ncol(Xm2)), drop = FALSE] mymu <- rowSums(Xm2 * betas.matrix) dMu.deta <- mymu * (1 - mymu) * use.x print("head(mymu)9") print( head(mymu) ) print("head(dMu.deta)9") print( head(dMu.deta) ) if ( .var.arg ) { dl.dva <- -0.5 / Varm + 0.5 * (y - mymu)^2 / sdev^4 } else { dl.dsd <- -1.0 / sdev + (y - mymu)^2 / 
sdev^3
    }
    dl.dmu <- (y - mymu) / sdev^2
    if ( .var.arg ) {
      dva.deta <- dtheta.deta(Varm, .lvar , earg = .evar )
    } else {
      dsd.deta <- dtheta.deta(sdev, .lsd , earg = .esd )
    }
    ans <- c(w) * cbind(dl.dmu * dMu.deta,
                        if ( .var.arg ) dl.dva * dva.deta else
                                        dl.dsd * dsd.deta)
    print("head(deriv.ans)9")
    print( head(ans) )
    ans
  }), list( .linklist = linklist, .lsd = lsd, .lvar = lvar,
            .earglist = earglist, .esd = esd, .evar = evar,
            .var.arg = var.arg ))),
  weight = eval(substitute(expression({
    print("------ in @ weight -------------")
    wz <- matrix(0, n, dimm(M))  # diag matrix; y is 1-column too
    print("head(wz)")
    print( head(wz) )
    if ( .var.arg ) {
      ned2l.dva2 <- 0.5 / Varm^2
    } else {
      ned2l.dsd2 <- 2 / sdev^2
    }
    wz[, iam(M, M, M = M)] <- if ( .var.arg ) {
      ned2l.dva2 * dva.deta^2
    } else {
      ned2l.dsd2 * dsd.deta^2
    }
    index <- iam(NA, NA, M  , both = TRUE, diag = TRUE)
    indtw <- iam(NA, NA, M-1, both = TRUE, diag = TRUE)
    print("index")
    print( index )
    print("indtw")
    print( indtw )
    twz <- dMu.deta[, indtw$row.index, drop = FALSE] *
           dMu.deta[, indtw$col.index, drop = FALSE]
    print("head(twz)9------------------------------------------------")
    print( head(twz) )
    for (ilocal in 1:ncol(twz))
      wz[, iam(index$row.index[ilocal],
               index$col.index[ilocal], M = M)] <-
      twz[, iam(indtw$row.index[ilocal],
                indtw$col.index[ilocal], M = M-1)]
    print("head(wz)9------------------------------------------------")
    print( head(wz) )
    w.wz.merge(w = w, wz = wz, n = n, M = M, ndepy = ncoly)
  }), list( .var.arg = var.arg ))))
} # End of normal1.term()


lognormal <- function(lmeanlog = "identity", lsdlog = "loge", zero = 2) {
  lmulog <- as.list(substitute(lmeanlog))
  emulog <- link2list(lmulog)
  lmulog <- attr(emulog, "function.name")

  lsdlog <- as.list(substitute(lsdlog))
  esdlog <- link2list(lsdlog)
  lsdlog <- attr(esdlog, "function.name")

  if (length(zero) &&
      (!is.Numeric(zero, integer.valued = TRUE, positive = TRUE) ||
       zero > 2))
    stop("bad input for argument 'zero'")

  new("vglmff",
  blurb = c("Two-parameter (univariate) lognormal distribution\n\n",
            "Links: ",
            namesof("meanlog", lmulog, earg = emulog, tag = TRUE), ", ",
            namesof("sdlog",   lsdlog, earg = esdlog, tag = TRUE)),
  constraints = eval(substitute(expression({
    constraints <- cm.zero.vgam(constraints, x, .zero , M)
  }), list( .zero = zero ))),
  initialize = eval(substitute(expression({
    w.y.check(w = w, y = y, Is.positive.y = TRUE)

    predictors.names <-
      c(namesof("meanlog", .lmulog, earg = .emulog, tag = FALSE),
        namesof("sdlog",   .lsdlog, earg = .esdlog, tag = FALSE))

    if (!length(etastart)) {
      mylm <- lm.wfit(x = x, y = c(log(y)), w = c(w))
      sdlog.y.est <- sqrt( sum(c(w) * mylm$resid^2) / mylm$df.residual )
      etastart <- cbind(
        meanlog = rep(theta2eta(log(median(y)), .lmulog,
                                earg = .emulog), length = n),
        sdlog   = rep(theta2eta(sdlog.y.est, .lsdlog,
                                earg = .esdlog), length = n))
    }
  }), list( .lmulog = lmulog, .lsdlog = lsdlog,
            .emulog = emulog, .esdlog = esdlog ))),
  linkinv = eval(substitute(function(eta, extra = NULL) {
    mulog <- eta2theta(eta[, 1], .lmulog , earg = .emulog )
    sdlog <- eta2theta(eta[, 2], .lsdlog , earg = .esdlog )
    exp(mulog + 0.5 * sdlog^2)
  }, list( .lmulog = lmulog, .lsdlog = lsdlog,
           .emulog = emulog, .esdlog = esdlog ))),
  last = eval(substitute(expression({
    misc$link <- c("meanlog" = .lmulog , "sdlog" = .lsdlog )
    misc$earg <- list("meanlog" = .emulog , "sdlog" = .esdlog )
    misc$expected <- TRUE
  }), list( .lmulog = lmulog, .lsdlog = lsdlog,
            .emulog = emulog, .esdlog = esdlog ))),
  loglikelihood = eval(substitute(
    function(mu, y, w, residuals = FALSE, eta, extra = NULL) {
    mulog <- eta2theta(eta[, 1], .lmulog, earg = .emulog)
    sdlog <- eta2theta(eta[, 2], .lsdlog, earg = .esdlog)
    if (residuals) stop("loglikelihood residuals not ",
                        "implemented yet") else {
      sum(c(w) * dlnorm(y, meanlog = mulog, sdlog = sdlog, log = TRUE))
    }
  }, list( .lmulog = lmulog, .lsdlog = lsdlog,
           .emulog = emulog, .esdlog = esdlog ))),
  vfamily = c("lognormal"),
  deriv = eval(substitute(expression({
    mulog <- eta2theta(eta[, 1], .lmulog, earg = .emulog)
    sdlog <- eta2theta(eta[, 2], .lsdlog, earg = .esdlog)
    dmulog.deta <- dtheta.deta(mulog, .lmulog, earg = .emulog)
    dsdlog.deta <- dtheta.deta(sdlog, .lsdlog, earg = .esdlog)
    dl.dmulog <- (log(y) - mulog) / sdlog^2
    dl.dsdlog <- -1 / sdlog + (log(y) - mulog)^2 / sdlog^3
    c(w) * cbind(dl.dmulog * dmulog.deta,
                 dl.dsdlog * dsdlog.deta)
  }), list( .lmulog = lmulog, .lsdlog = lsdlog,
            .emulog = emulog, .esdlog = esdlog ))),
  weight = expression({
    wz <- matrix(as.numeric(NA), n, 2)  # Diagonal!
    ned2l.dmulog2 <- 1 / sdlog^2
    ned2l.dsdlog2 <- 2 * ned2l.dmulog2
    wz[, iam(1, 1, M)] <- ned2l.dmulog2 * dmulog.deta^2
    wz[, iam(2, 2, M)] <- ned2l.dsdlog2 * dsdlog.deta^2
    wz <- c(w) * wz
    wz
  }))
}


lognormal3 <- function(lmeanlog = "identity", lsdlog = "loge",
                       powers.try = (-3):3, delta = NULL, zero = 2) {
  if (length(delta) && !is.Numeric(delta, positive = TRUE))
    stop("bad input for argument 'delta'")

  lmulog <- as.list(substitute(lmeanlog))
  emulog <- link2list(lmulog)
  lmulog <- attr(emulog, "function.name")

  lsdlog <- as.list(substitute(lsdlog))
  esdlog <- link2list(lsdlog)
  lsdlog <- attr(esdlog, "function.name")

  if (length(zero) &&
      (!is.Numeric(zero, integer.valued = TRUE, positive = TRUE) ||
       zero > 3))
    stop("bad input for argument 'zero'")

  new("vglmff",
  blurb = c("Three-parameter (univariate) lognormal distribution\n\n",
            "Links: ",
            namesof("meanlog", lmulog, earg = emulog, tag = TRUE), "; ",
            namesof("sdlog",   lsdlog, earg = esdlog, tag = TRUE), "; ",
            namesof("lambda", "identity", earg = list(), tag = TRUE)),
  constraints = eval(substitute(expression({
    constraints <- cm.zero.vgam(constraints, x, .zero , M)
  }), list( .zero = zero ))),
  initialize = eval(substitute(expression({
    w.y.check(w = w, y = y)

    predictors.names <-
      c(namesof("meanlog", .lmulog, earg = .emulog, tag = FALSE),
        namesof("sdlog",   .lsdlog, earg = .esdlog, tag = FALSE),
        "lambda")

    if (!length(etastart)) {
      miny <- min(y)
      if (length( .delta )) {
        lambda.init <- rep(miny - .delta , length = n)
      } else {
        pvalue.vec <- NULL
        powers.try <- .powers.try
        for (delta in 10^powers.try)
          pvalue.vec <-
            c(pvalue.vec,
              shapiro.test(sample(log(y - miny + delta),
                                  size = min(5000, length(y))))$p.value)
        index.lambda <- (1:length(powers.try))[pvalue.vec ==
                                               max(pvalue.vec)]
        lambda.init <- miny - 10^powers.try[index.lambda]
      }
      mylm <- lm.wfit(x = x, y = c(log(y - lambda.init)), w = c(w))
      sdlog.y.est <- sqrt( sum(c(w) * mylm$resid^2) / mylm$df.residual )
      etastart <-
        cbind(mu = log(median(y - lambda.init)),
              sdlog = rep(theta2eta(sdlog.y.est, .lsdlog ,
                                    earg = .esdlog ), length = n),
              lambda = lambda.init)
    }
  }), list( .lmulog = lmulog, .lsdlog = lsdlog,
            .emulog = emulog, .esdlog = esdlog,
            .delta = delta, .powers.try = powers.try ))),
  linkinv = eval(substitute(function(eta, extra = NULL) {
    mymu   <- eta2theta(eta[, 1], .lmulog, earg = .emulog)
    sdlog  <- eta2theta(eta[, 2], .lsdlog, earg = .esdlog)
    lambda <- eta2theta(eta[, 3], "identity", earg = list(theta = NULL))
    lambda + exp(mymu + 0.5 * sdlog^2)
  }, list( .lmulog = lmulog, .lsdlog = lsdlog,
           .emulog = emulog, .esdlog = esdlog ))),
  last = eval(substitute(expression({
    misc$link <- c("meanlog" =
.lmulog, "sdlog" = .lsdlog, "lambda" = "identity") misc$earg <- list("meanlog" = .emulog, "sdlog" = .esdlog, "lambda" = list()) misc$expected <- TRUE }), list( .lmulog = lmulog, .lsdlog = lsdlog, .emulog = emulog, .esdlog = esdlog ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { mymu <- eta2theta(eta[, 1], .lmulog , earg = .emulog) sdlog <- eta2theta(eta[, 2], .lsdlog , earg = .esdlog) lambda <- eta2theta(eta[, 3], "identity", earg = list(theta = NULL)) if (any(y < lambda)) warning("bad 'y'") if (residuals) stop("loglikelihood residuals not ", "implemented yet") else { sum(c(w) * dlnorm(y - lambda, meanlog = mymu, sdlog = sdlog, log = TRUE)) } }, list( .lmulog = lmulog, .lsdlog = lsdlog, .emulog = emulog, .esdlog = esdlog ))), vfamily = c("lognormal3"), deriv = eval(substitute(expression({ mymu <- eta2theta(eta[, 1], .lmulog, earg = .emulog) sdlog <- eta2theta(eta[, 2], .lsdlog, earg = .esdlog) lambda <- eta2theta(eta[, 3], "identity", earg = list(theta = NULL)) if (any(y < lambda)) warning("bad 'y'") dl.dmymu <- (log(y-lambda)-mymu) / sdlog^2 dl.dsdlog <- -1/sdlog + (log(y-lambda)-mymu)^2 / sdlog^3 dl.dlambda <- (1 + (log(y-lambda)-mymu) / sdlog^2) / (y-lambda) dmymu.deta <- dtheta.deta(mymu, .lmulog, earg = .emulog) dsdlog.deta <- dtheta.deta(sdlog, .lsdlog, earg = .esdlog) dlambda.deta <- dtheta.deta(lambda, "identity", earg = list()) c(w) * cbind(dl.dmymu * dmymu.deta, dl.dsdlog * dsdlog.deta, dl.dlambda * dlambda.deta) }), list( .lmulog = lmulog, .lsdlog = lsdlog, .emulog = emulog, .esdlog = esdlog ))), weight = expression({ wz <- matrix(0, n, dimm(M)) ned2l.dmymu2 <- 1 / sdlog^2 ned2l.dsdlog <- 2 / sdlog^2 temp9 <- exp(-mymu + sdlog^2 / 2) ned2l.dlambda2 <- exp(2*(-mymu+sdlog^2)) * (1+sdlog^2) / sdlog^2 wz[, iam(1, 1, M)] <- ned2l.dmymu2 * dmymu.deta^2 wz[, iam(2, 2, M)] <- ned2l.dsdlog * dsdlog.deta^2 wz[, iam(3, 3, M)] <- ned2l.dlambda2 * dlambda.deta^2 wz[, iam(1, 3, M)] <- temp9 * dmymu.deta * dlambda.deta / sdlog^2 wz[, iam(2, 3, M)] <- -2 * temp9 / sdlog * dsdlog.deta * dlambda.deta wz <- c(w) * wz wz })) } dsnorm <- function(x, location = 0, scale = 1, shape = 0, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) if (!is.Numeric(scale, positive = TRUE)) stop("bad input for argument 'scale'") zedd <- (x - location) / scale loglik <- log(2) + dnorm(zedd, log = TRUE) + pnorm(shape * zedd, log.p = TRUE) - log(scale) if (log.arg) { loglik } else { exp(loglik) } } rsnorm <- function(n, location = 0, scale = 1, shape = 0) { rho <- shape / sqrt(1 + shape^2) u0 <- rnorm(n) v <- rnorm(n) u1 <- rho*u0 + sqrt(1 - rho^2) * v ans <- location + scale * ifelse(u0 >= 0, u1, -u1) ans[scale <= 0] <- NA ans } skewnormal1 <- function(lshape = "identity", ishape = NULL, nsimEIM = NULL) { lshape <- as.list(substitute(lshape)) eshape <- link2list(lshape) lshape <- attr(eshape, "function.name") if (length(nsimEIM) && (!is.Numeric(nsimEIM, allowable.length = 1, integer.valued = TRUE) || nsimEIM <= 10)) stop("argument 'nsimEIM' should be an integer greater than 10") new("vglmff", blurb = c("1-parameter skew-normal distribution\n\n", "Link: ", namesof("shape", lshape , earg = eshape ), "\n", "Mean: shape * sqrt(2 / (pi * (1 + shape^2 )))\n", "Variance: 1-mu^2"), infos = eval(substitute(function(...) 
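  # (For reference: dsnorm() above implements the skew-normal density
  # 2 * dnorm(z) * pnorm(shape * z) / scale with z = (x - location)/scale,
  # and @linkinv below returns the mean stated in the blurb,
  # shape * sqrt(2 / (pi * (1 + shape^2))).)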
{ list(Musual = 1, nsimEIM = .nsimEIM) }, list( .nsimEIM = nsimEIM ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1, out.wy = TRUE, maximize = TRUE) w <- temp5$w y <- temp5$y predictors.names <- namesof("shape", .lshape , earg = .eshape , tag = FALSE) if (!length(etastart)) { init.shape <- if (length( .ishape)) rep( .ishape, len = n) else { temp <- y index <- abs(y) < sqrt(2/pi)-0.01 temp[!index] <- y[!index] temp[index] <- sign(y[index]) / sqrt(2/(pi*y[index]*y[index])-1) temp } etastart <- matrix(init.shape, n, ncol(y)) } }), list( .lshape = lshape, .eshape = eshape, .ishape = ishape ))), linkinv = eval(substitute(function(eta, extra = NULL) { alpha <- eta2theta(eta, .lshape, earg = .eshape) alpha * sqrt(2/(pi * (1+alpha^2 ))) }, list( .eshape = eshape, .lshape = lshape ))), last = eval(substitute(expression({ misc$link <- c(shape = .lshape) misc$earg <- list(shape = .eshape ) misc$nsimEIM <- .nsimEIM misc$expected <- (length( .nsimEIM ) > 0) }), list( .eshape = eshape, .lshape = lshape, .nsimEIM = nsimEIM ))), linkfun = eval(substitute(function(mu, extra = NULL) { alpha <- mu / sqrt(2/pi - mu^2) theta2eta(alpha, .lshape, earg = .eshape) }, list( .eshape = eshape, .lshape = lshape ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { alpha <- eta2theta(eta, .lshape, earg = .eshape) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else { sum(c(w) * dsnorm(x = y, location = 0, scale = 1, shape = alpha, log = TRUE)) } }, list( .eshape = eshape, .lshape = lshape ))), vfamily = c("skewnormal1"), deriv = eval(substitute(expression({ alpha <- eta2theta(eta, .lshape, earg = .eshape) zedd <- y*alpha tmp76 <- pnorm(zedd) tmp86 <- dnorm(zedd) dl.dshape <- tmp86 * y / tmp76 dshape.deta <- dtheta.deta(alpha, .lshape, earg = .eshape) c(w) * dl.dshape * dshape.deta }), list( .eshape = eshape, .lshape = lshape ))), weight = eval(substitute(expression({ if ( length( .nsimEIM )) { run.mean <- 0 for(ii in 1:( .nsimEIM)) { ysim <- rsnorm(n, location = 0, scale = 1, shape = alpha) zedd <- ysim*alpha tmp76 <- pnorm(zedd) tmp86 <- dnorm(zedd) d2l.dshape2 <- -ysim*ysim*tmp86*(tmp76*zedd+tmp86)/tmp76^2 rm(ysim) run.mean <- ((ii-1) * run.mean + d2l.dshape2) / ii } if (intercept.only) run.mean <- mean(run.mean) wz <- -c(w) * (dshape.deta^2) * run.mean } else { d2shape.deta2 <- d2theta.deta2(alpha, .lshape, earg = .eshape) d2l.dshape2 <- -y*y * tmp86 * (tmp76 * zedd + tmp86) / tmp76^2 wz <- -(dshape.deta^2) * d2l.dshape2 - d2shape.deta2 * dl.dshape wz <- c(w) * wz } wz }), list( .eshape = eshape, .lshape = lshape, .nsimEIM = nsimEIM )))) } if (FALSE) halfnormal1 <- function(lsd = "loge", lvar = "loge", var.arg = FALSE, imethod = 1, isd = NULL, parallel = FALSE, apply.parint = FALSE, zero = NULL) { warning("20121101; not working; yettodo: finish it!") lsd <- as.list(substitute(lsd)) esd <- link2list(lsd) lsd <- attr(esd, "function.name") lvar <- as.list(substitute(lvar)) evar <- link2list(lvar) lvar <- attr(evar, "function.name") emean <- list() lmean <- "identity" if (length(zero) && !is.Numeric(zero, integer.valued = TRUE)) stop("bad input for argument 'zero'") if (!is.Numeric(imethod, allowable.length = 1, integer.valued = TRUE, positive = TRUE) || imethod > 4) stop("argument 'imethod' must be 1 or 2 or 3 or 4") if (!is.logical(var.arg) || length(var.arg) != 1) stop("argument 'var.arg' must be a single logical") if (!is.logical(apply.parint) || length(apply.parint) != 1) stop("argument 
'apply.parint' must be a single logical") if (is.logical(parallel) && parallel && length(zero)) stop("set 'zero = NULL' if 'parallel = TRUE'") new("vglmff", blurb = c("Half-normal distribution\n\n", "Links: ", if (var.arg) namesof("var", lvar, earg = evar, tag = TRUE) else namesof("sd" , lsd, earg = esd, tag = TRUE), "\n", if (var.arg) "Variance: var zz" else "Variance: sd^2 zz"), constraints = eval(substitute(expression({ constraints <- cm.vgam(matrix(1, M, 1), x, .parallel , constraints, apply.int = .apply.parint ) dotzero <- .zero Musual <- 2 eval(negzero.expression) }), list( .zero = zero, .parallel = parallel, .apply.parint = apply.parint ))), infos = eval(substitute(function(...) { list(Musual = 1, zero = .zero) }, list( .zero = zero ))), initialize = eval(substitute(expression({ orig.y <- y if (length(attr(orig.y, "Prior.Weights"))) { if (any(c(w) != 1)) warning("replacing the 'weights' argument by the 'Prior.Weights'", "attribute of the response (probably due to Qvar()") w <- attr(orig.y, "Prior.Weights") extra$attributes.y <- attributes(orig.y) } else { } temp5 <- w.y.check(w = w, y = y, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y ncoly <- ncol(y) Musual <- 1 extra$ncoly <- ncoly extra$Musual <- Musual M <- Musual * ncoly mynames2 <- paste(if ( .var.arg ) "var" else "sd", if (ncoly > 1) 1:ncoly else "", sep = "") predictors.names <- c(if ( .var.arg ) namesof(mynames2, .lvar , earg = .evar , tag = FALSE) else namesof(mynames2, .lsd , earg = .esd , tag = FALSE)) extra$predictors.names <- predictors.names if (!length(etastart)) { sdev.init <- mean.init <- matrix(0, n, ncoly) for (jay in 1:ncoly) { jfit <- lm.wfit(x = x, y = y[, jay], w = w[, jay]) mean.init[, jay] <- if ( .lmean == "loge") pmax(1/1024, y[, jay]) else if( .imethod == 1) median(y[, jay]) else if( .imethod == 2) weighted.mean(y[, jay], w = w[, jay]) else if( .imethod == 3) weighted.mean(y[, jay], w = w[, jay]) * 0.5 + y[, jay] * 0.5 else mean(jfit$fitted) sdev.init[, jay] <- if( .imethod == 1) { sqrt( sum(w[, jay] * (y[, jay] - mean.init[, jay])^2) / sum(w[, jay]) ) } else if( .imethod == 2) { if (jfit$df.resid > 0) sqrt( sum(w[, jay] * jfit$resid^2) / jfit$df.resid ) else sqrt( sum(w[, jay] * jfit$resid^2) / sum(w[, jay]) ) } else if( .imethod == 3) { sqrt( sum(w[, jay] * (y[, jay] - mean.init[, jay])^2) / sum(w[, jay]) ) } else { sqrt( sum(w[, jay] * abs(y[, jay] - mean.init[, jay])) / sum(w[, jay]) ) } if (any(sdev.init[, jay] <= sqrt( .Machine$double.eps ) )) sdev.init[, jay] <- 1.01 } if (length( .isd )) { sdev.init <- matrix( .isd , n, ncoly, byrow = TRUE) } etastart <- cbind(if ( .var.arg ) theta2eta(sdev.init^2, .lvar , earg = .evar ) else theta2eta(sdev.init , .lsd , earg = .esd )) colnames(etastart) <- predictors.names } }), list( .lsd = lsd, .lvar = lvar, .esd = esd, .evar = evar, .lmean = lmean, .isd = isd, .var.arg = var.arg, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { Musual <- extra$Musual ncoly <- extra$ncoly eta2theta(eta[, Musual*(1:ncoly) - 1], .lmean , earg = .emean ) }, list( .esd = esd , .evar = evar, .emean = emean, .lmean = lmean ))), last = eval(substitute(expression({ Musual <- extra$Musual misc$link <- c(rep( .lsd , length = ncoly)) temp.names <- c(mynames2) names(misc$link) <- temp.names misc$earg <- vector("list", Musual * ncoly) names(misc$earg) <- temp.names for(ii in 1:ncoly) { misc$earg[[Musual*ii ]] <- if ( .var.arg ) .evar else .esd } names(misc$earg) <- temp.names misc$var.arg <- 
.var.arg misc$Musual <- Musual misc$expected <- TRUE misc$imethod <- .imethod misc$multipleResponses <- TRUE misc$parallel <- .parallel misc$apply.parint <- .apply.parint }), list( .lsd = lsd, .lvar = lvar, .esd = esd, .evar = evar, .parallel = parallel, .apply.parint = apply.parint, .var.arg = var.arg, .imethod = imethod ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { ncoly <- extra$ncoly Musual <- extra$Musual if ( .var.arg ) { Varm <- eta2theta(eta[, Musual*(1:ncoly)], .lvar , earg = .evar ) sdev <- sqrt(Varm) } else { sdev <- eta2theta(eta[, Musual*(1:ncoly)], .lsd , earg = .esd ) } if (residuals) stop("loglikelihood residuals not ", "implemented yet") else { sum(c(w) * dnorm(y, m = mu, sd = sdev, log = TRUE)) } }, list( .lsd = lsd, .lvar = lvar, .esd = esd, .evar = evar, .var.arg = var.arg ))), vfamily = c("halfnormal1"), deriv = eval(substitute(expression({ ncoly <- extra$ncoly Musual <- extra$Musual mymu <- zz if ( .var.arg ) { Varm <- eta2theta(eta[, Musual*(1:ncoly) ], .lvar , earg = .evar ) sdev <- sqrt(Varm) } else { sdev <- eta2theta(eta[, Musual*(1:ncoly) ], .lsd , earg = .esd ) } dl.dmu <- zz * (y - mymu) / sdev^2 if ( .var.arg ) { dl.dva <- -0.5 / Varm + 0.5 * (y - mymu)^2 / sdev^4 } else { dl.dsd <- -1.0 / sdev + (y - mymu)^2 / sdev^3 } dmu.deta <- dtheta.deta(mymu, .lmean , earg = .emean ) if ( .var.arg ) { dva.deta <- dtheta.deta(Varm, .lvar , earg = .evar ) } else { dsd.deta <- dtheta.deta(sdev, .lsd , earg = .esd ) } ans <- c(w) * cbind(if ( .var.arg ) dl.dva * dva.deta else dl.dsd * dsd.deta) ans }), list( .lmean = lmean, .lsd = lsd, .lvar = lvar, .emean = emean, .esd = esd, .evar = evar, .var.arg = var.arg ))), weight = eval(substitute(expression({ wz <- matrix(as.numeric(NA), n, M) # diag matrix; y is 1-column too ned2l.dmu2 <- 1 / sdev^2 if ( .var.arg ) { ned2l.dva2 <- 0.5 / Varm^2 } else { ned2l.dsd2 <- 2 / sdev^2 } wz[, Musual*(1:ncoly) ] <- if ( .var.arg ) { ned2l.dva2 * dva.deta^2 } else { ned2l.dsd2 * dsd.deta^2 } wz }), list( .var.arg = var.arg )))) } VGAM/R/family.nonlinear.R0000644000176000001440000004703512136651110014632 0ustar ripleyusers# These functions are # Copyright (C) 1998-2013 T.W. Yee, University of Auckland. # All rights reserved. vnonlinear.control <- function(save.weight = TRUE, ...) 
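# The family functions in this file are nonlinear regression models whose
# single regressor enters via vglm()'s 'form2' argument.  A minimal usage
# sketch for micmen() (defined further below) -- 'mydata' with response
# 'vel' and regressor 'conc' is hypothetical, not part of this package:
#   fit <- vglm(vel ~ 1, micmen, data = mydata, trace = TRUE,
#               form2 = ~ conc - 1)   # no intercept in form2
#   coef(fit, matrix = TRUE)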
{
  list(save.weight = as.logical(save.weight)[1])
}


subset_lohi <- function(xvec, yvec,
                        probs.x = c(0.15, 0.85),
                        type = c("median", "wtmean", "unwtmean"),
                        wtvec = rep(1, len = length(xvec))) {
  if (!is.Numeric(probs.x, allowable.length = 2))
    stop("argument 'probs.x' must be numeric and of length two")

  min.q <- quantile(xvec, probs = probs.x[1])
  max.q <- quantile(xvec, probs = probs.x[2])

  if (mode(type) != "character" && mode(type) != "name")
    type <- as.character(substitute(type))
  type <- match.arg(type, c("median", "wtmean", "unwtmean"))[1]

  if (type == "median") {
    y1bar <- median(yvec[xvec < min.q])
    x1bar <- median(xvec[xvec < min.q])
    y2bar <- median(yvec[xvec > max.q])
    x2bar <- median(xvec[xvec > max.q])
  }
  if (type == "wtmean") {
    y1bar <- weighted.mean(yvec[xvec < min.q], w = wtvec[xvec < min.q])
    x1bar <- weighted.mean(xvec[xvec < min.q], w = wtvec[xvec < min.q])
    y2bar <- weighted.mean(yvec[xvec > max.q], w = wtvec[xvec > max.q])
    x2bar <- weighted.mean(xvec[xvec > max.q], w = wtvec[xvec > max.q])
  }
  if (type == "unwtmean") {
    y1bar <- mean(yvec[xvec < min.q])
    x1bar <- mean(xvec[xvec < min.q])
    y2bar <- mean(yvec[xvec > max.q])
    x2bar <- mean(xvec[xvec > max.q])
  }
  if (x1bar >= x2bar)
    stop("cannot find two distinct x values; try decreasing the first ",
         "value of argument 'probs.x' and increasing the second value")

  list(x1bar = x1bar, y1bar = y1bar,
       x2bar = x2bar, y2bar = y2bar,
       slopeUp = (y2bar > y1bar))
}


micmen.control <- function(save.weight = TRUE, ...) {
  list(save.weight = save.weight)
}


micmen <- function(rpar = 0.001, divisor = 10,
                   init1 = NULL, init2 = NULL,
                   imethod = 1, oim = TRUE,
                   link1 = "identity", link2 = "identity",
                   firstDeriv = c("nsimEIM", "rpar"),
                   probs.x = c(0.15, 0.85),
                   nsimEIM = 500, dispersion = 0, zero = NULL) {
  firstDeriv <- match.arg(firstDeriv, c("nsimEIM", "rpar"))[1]

  if (!is.Numeric(imethod, allowable.length = 1,
                  integer.valued = TRUE, positive = TRUE))
    stop("argument 'imethod' must be integer")
  if (!is.Numeric(probs.x, allowable.length = 2))
    stop("argument 'probs.x' must be numeric and of length two")
  if (!is.logical(oim) || length(oim) != 1)
    stop("argument 'oim' must be single logical")
  stopifnot(nsimEIM > 10, length(nsimEIM) == 1,
            nsimEIM == round(nsimEIM))
  if (!is.Numeric(imethod, allowable.length = 1,
                  integer.valued = TRUE, positive = TRUE) ||
      imethod > 3)
    stop("'imethod' must be 1 or 2 or 3")

  estimated.dispersion <- (dispersion == 0)

  link1 <- as.list(substitute(link1))
  earg1 <- link2list(link1)
  link1 <- attr(earg1, "function.name")

  link2 <- as.list(substitute(link2))
  earg2 <- link2list(link2)
  link2 <- attr(earg2, "function.name")

  new("vglmff",
  blurb = c("Michaelis-Menten regression model\n",
            "Y_i = theta1 * u_i / (theta2 + u_i) + e_i\n\n",
            "Links: ",
            namesof("theta1", link1, earg = earg1), ", ",
            namesof("theta2", link2, earg = earg2), "\n",
            "Variance: constant"),
  constraints = eval(substitute(expression({
    constraints <- cm.zero.vgam(constraints, x, .zero, M = 2)
  }), list( .zero = zero))),
  deviance = function(mu, y, w, residuals = FALSE, eta, extra = NULL) {
    M <- if (is.matrix(y)) ncol(y) else 1
    if (residuals) {
      if (M > 1) NULL else (y - mu) * sqrt(w)
    } else {
      ResSS.vgam(y - mu, w, M = M)
    }
  },
  initialize = eval(substitute(expression({
    temp5 <- w.y.check(w = w, y = y,
                       out.wy = TRUE, maximize = TRUE)
    w <- temp5$w
    if (!length(Xm2))
      stop("regressor not found")
    if (ncol(as.matrix(Xm2)) != 1)
      stop("regressor not found or is not a vector.
Use the ", "'form2' argument without an intercept") Xm2 <- as.vector(Xm2) # Make sure extra$Xm2 <- Xm2 # Needed for @linkinv predictors.names <- c(namesof("theta1", .link1 , earg = .earg1, tag = FALSE), namesof("theta2", .link2 , earg = .earg2, tag = FALSE)) if (length(mustart) || length(coefstart)) stop("cannot handle 'mustart' or 'coefstart'") if (!length(etastart)) { if ( .imethod == 3 ) { index0 <- (1:n)[Xm2 <= quantile(Xm2, prob = .probs.x[2] )] init1 <- median(y[index0]) init2 <- median(init1 * Xm2 / y - Xm2) } if ( .imethod == 1 || .imethod == 2) { mysubset <- subset_lohi(Xm2, y, probs.x = .probs.x, type = ifelse( .imethod == 1, "median", "wtmean"), wtvec = w) mat.x <- with(mysubset, cbind(c(x1bar, x2bar), -c(y1bar, y2bar))) theta.temp <- with(mysubset, solve(mat.x, c(x1bar * y1bar, x2bar * y2bar))) init1 <- theta.temp[1] init2 <- theta.temp[2] } if (length( .init1 )) init1 <- .init1 if (length( .init2 )) init2 <- .init2 etastart <- cbind( rep(theta2eta(init1, .link1 , earg = .earg1 ), len = n), rep(theta2eta(init2, .link2 , earg = .earg2 ), len = n)) } else { stop("cannot handle 'etastart' or 'mustart'") } }), list( .init1 = init1, .link1 = link1, .earg1 = earg1, .init2 = init2, .link2 = link2, .earg2 = earg2, .imethod = imethod, .probs.x = probs.x ))), linkinv = eval(substitute(function(eta, extra = NULL) { theta1 <- eta2theta(eta[, 1], .link1 , earg = .earg1 ) theta2 <- eta2theta(eta[, 2], .link2 , earg = .earg2 ) theta1 * extra$Xm2 / (theta2 + extra$Xm2) }, list( .link1 = link1, .earg1 = earg1, .link2 = link2, .earg2 = earg2))), last = eval(substitute(expression({ misc$link <- c(theta1 = .link1 , theta2 = .link2 ) misc$earg <- list(theta1 = .earg1 , theta2 = .earg2 ) misc$rpar <- rpar fit$df.residual <- n - rank # Not nrow_X_vlm - rank fit$df.total <- n # Not nrow_X_vlm extra$Xm2 <- NULL # Regressor is in control$regressor dpar <- .dispersion if (!dpar) { dpar <- sum(c(w) * (y - mu)^2) / (n - ncol_X_vlm) } misc$dispersion <- dpar misc$default.dispersion <- 0 misc$estimated.dispersion <- .estimated.dispersion misc$imethod <- .imethod misc$nsimEIM <- .nsimEIM misc$firstDeriv <- .firstDeriv misc$oim <- .oim misc$rpar <- rpar misc$orig.rpar <- .rpar misc$multipleResponses <- FALSE }), list( .link1 = link1, .earg1 = earg1, .link2 = link2, .earg2 = earg2, .dispersion = dispersion, .imethod = imethod, .firstDeriv = firstDeriv, .oim = oim, .rpar = rpar, .nsimEIM = nsimEIM, .estimated.dispersion = estimated.dispersion ))), summary.dispersion = FALSE, vfamily = c("micmen", "vnonlinear"), deriv = eval(substitute(expression({ theta1 <- eta2theta(eta[, 1], .link1 , earg = .earg1 ) theta2 <- eta2theta(eta[, 2], .link2 , earg = .earg2 ) dthetas.detas <- cbind(dtheta.deta(theta1, .link1 , earg = .earg1 ), dtheta.deta(theta2, .link2 , earg = .earg2 )) rpar <- if ( .firstDeriv == "rpar") { if (iter > 1) { max(rpar / .divisor, 1000 * .Machine$double.eps) } else { d3 <- deriv3(~ theta1 * Xm2 / (theta2 + Xm2), c("theta1", "theta2"), hessian = FALSE) .rpar } } else { .rpar } dmus.dthetas <- if (FALSE) { attr(eval(d3), "gradient") } else { dmu.dtheta1 <- Xm2 / (theta2 + Xm2) dmu.dtheta2 <- -theta1 * Xm2 / (Xm2 + theta2)^2 cbind(dmu.dtheta1, dmu.dtheta2) } myderiv <- if ( .firstDeriv == "rpar") { if (TRUE) { index <- iam(NA, NA, M = M, both = TRUE) temp200809 <- dmus.dthetas * dthetas.detas if (M > 1) temp200809[, 2:M] <- temp200809[, 2:M] + sqrt(rpar) c(w) * (y - mu) * temp200809 } else { c(w) * (y - mu) * cbind(dmus.dthetas[, 1] * dthetas.detas[, 1], dmus.dthetas[, 2] * dthetas.detas[, 2] + 
sqrt(rpar)) } } else { temp20101111 <- dmus.dthetas * dthetas.detas c(w) * (y - mu) * temp20101111 } myderiv }), list( .link1 = link1, .earg1 = earg1, .link2 = link2, .earg2 = earg2, .firstDeriv = firstDeriv, .rpar = rpar, .divisor = divisor ))), weight = eval(substitute(expression({ if ( .oim ) { wz <- matrix(0, n, dimm(M)) wz[, iam(1, 1, M)] <- Xm2 wz[, iam(1, 2, M)] <- y - 2 * mu wz[, iam(2, 2, M)] <- theta1 * (3 * mu - 2 * y) / (theta2 + Xm2) wz <- wz * Xm2 / (theta2 + Xm2)^2 } if ( .firstDeriv == "rpar") { if (FALSE) { wz <- dmus.dthetas[,index$row] * dmus.dthetas[,index$col] * dthetas.detas[,index$row] * dthetas.detas[,index$col] if (M > 1) wz[, 2:M] <- wz[, 2:M] + rpar } else { wz <- cbind(( dmus.dthetas[, 1] * dthetas.detas[, 1])^2, ( dmus.dthetas[, 2] * dthetas.detas[, 2])^2 + rpar, dmus.dthetas[, 1] * dmus.dthetas[, 2] * dthetas.detas[, 1] * dthetas.detas[, 2]) } } else { run.varcov <- 0 index0 <- iam(NA, NA, M = M, both = TRUE, diag = TRUE) mysigma <- 1 for (ii in 1:( .nsimEIM )) { ysim <- theta1 * Xm2 / (theta2 + Xm2) + rnorm(n, sd = mysigma) temp3 <- (ysim - mu) * dmus.dthetas * dthetas.detas run.varcov <- run.varcov + temp3[, index0$row.index] * temp3[, index0$col.index] } run.varcov <- run.varcov / .nsimEIM wz <- if (intercept.only) matrix(colMeans(run.varcov), n, ncol(run.varcov), byrow = TRUE) else run.varcov } c(w) * wz }), list( .link1 = link1, .link2 = link2, .firstDeriv = firstDeriv, .nsimEIM = nsimEIM, .oim = oim )))) } skira.control <- function(save.weight = TRUE, ...) { list(save.weight = save.weight) } skira <- function(rpar = 0.1, divisor = 10, init1 = NULL, init2 = NULL, link1 = "identity", link2 = "identity", earg1 = list(), earg2 = list(), imethod = 1, oim = TRUE, probs.x = c(0.15, 0.85), smallno = 1.0e-3, nsimEIM = 500, firstDeriv = c("nsimEIM", "rpar"), dispersion = 0, zero = NULL) { firstDeriv <- match.arg(firstDeriv, c("nsimEIM", "rpar"))[1] if (!is.Numeric(probs.x, allowable.length = 2)) stop("argument 'probs.x' must be numeric and of length two") estimated.dispersion <- dispersion == 0 if (mode(link1) != "character" && mode(link1) != "name") link1 <- as.character(substitute(link1)) if (mode(link2) != "character" && mode(link2) != "name") link2 <- as.character(substitute(link2)) if (!is.Numeric(imethod, allowable.length = 1, integer.valued = TRUE, positive = TRUE)) stop("argument 'imethod' must be integer") if (imethod > 5) stop("argument 'imethod' must be 1, 2, 3, 4 or 5") if (!is.list(earg1)) earg1 = list() if (!is.list(earg2)) earg2 = list() stopifnot(nsimEIM > 10, length(nsimEIM) == 1, nsimEIM == round(nsimEIM)) new("vglmff", blurb = c("Shinozaki-Kira regression model\n", "Y_i = 1 / (theta1 + theta2 * u_i) + e_i\n\n", "Links: ", namesof("theta1", link1, earg = earg1), ", ", namesof("theta2", link2, earg = earg2)), constraints = eval(substitute(expression({ constraints <- cm.zero.vgam(constraints, x, .zero, M = 2) }), list( .zero = zero ))), deviance = function(mu, y, w, residuals = FALSE, eta, extra = NULL) { M <- if (is.matrix(y)) ncol(y) else 1 if (residuals) { if (M > 1) NULL else (y - mu) * sqrt(w) } else ResSS.vgam(y - mu, w, M = M) }, initialize = eval(substitute(expression({ warning("20101105; need to fix a bug in the signs of initial vals") temp5 <- w.y.check(w = w, y = y, out.wy = TRUE, maximize = TRUE) w <- temp5$w y <- temp5$y if (!length(Xm2)) stop("regressor not found") if (ncol(as.matrix(Xm2)) != 1) stop("regressor not found or is not a vector. 
", "Use the 'form2' argument without an intercept") Xm2 <- as.vector(Xm2) extra$Xm2 <- Xm2 predictors.names <- c(namesof("theta1", .link1 , earg = .earg1, tag = FALSE), namesof("theta2", .link2 , earg = .earg2, tag = FALSE)) if (length(mustart) || length(coefstart)) stop("cannot handle 'mustart' or 'coefstart'") if (!length(etastart)) { min.q <- quantile(Xm2, probs = .probs.x[1] ) max.q <- quantile(Xm2, probs = .probs.x[2] ) if ( .imethod == 3 || .imethod == 2 ) { mysubset <- subset_lohi(Xm2, y, probs.x = .probs.x, type = ifelse( .imethod == 2, "median", "wtmean"), wtvec = w) mat.x <- with(mysubset, cbind(c(1, 1), c(x1bar, x2bar)) * c(y1bar, y2bar)) theta.temp <- solve(mat.x, c(1, 1)) init1 <- theta.temp[1] init2 <- theta.temp[2] } else if ( .imethod == 1 ) { yy <- as.vector( y[(Xm2 > min.q) & (Xm2 < max.q)]) xx <- as.vector(Xm2[(Xm2 > min.q) & (Xm2 < max.q)]) ww <- as.vector( w[(Xm2 > min.q) & (Xm2 < max.q)]) yy[ abs(yy) < .smallno ] <- .smallno * sign(yy[ abs(yy) < .smallno ]) wt.temp <- (yy^4) * ww wt.temp.max <- median(wt.temp) * 100 wt.temp[wt.temp > wt.temp.max] <- wt.temp.max mylm.wfit <- lm.wfit(x = cbind(1, xx), y = c(1 / yy), w = c(wt.temp)) init1 <- mylm.wfit$coef[1] init2 <- mylm.wfit$coef[2] } else if (( .imethod == 4) || ( .imethod == 5)) { tempfit <- if ( .imethod == 4 ) { fitted(loess(y ~ Xm2)) } else { fitted(smooth.spline(Xm2, y, w = w, df = 2.0)) } mysubset <- subset_lohi(Xm2, y, probs.x = .probs.x, type = "wtmean", wtvec = w) mat.x <- with(mysubset, cbind(c(1, 1), c(x1bar, x2bar)) * c(y1bar, y2bar)) theta.temp <- solve(mat.x, c(1, 1)) init1 <- theta.temp[1] init2 <- theta.temp[2] } else { stop("argument 'imethod' unmatched") } mu <- 1 / (init1 + init2 * Xm2) matplot(Xm2, cbind(y, mu), col = c("blue", "green"), main = "Initial values in green") if ( .imethod == 1 ) { points(Xm2, 1 / (init1 + init2 * Xm2), col = "green") } else { with(mysubset, points(c(x1bar, x2bar), c(y1bar, y2bar), col = "red", pch = "+", cex = 2)) } if (length( .init1 )) init1 <- .init1 if (length( .init2 )) init2 <- .init2 etastart <- cbind( rep(theta2eta(init1, .link1 , earg = .earg1 ), len = n), rep(theta2eta(init2, .link2 , earg = .earg2 ), len = n)) } else { stop("cannot handle 'etastart' or 'mustart'") } }), list( .init1 = init1, .link1 = link1, .earg1 = earg1, .init2 = init2, .link2 = link2, .earg2 = earg2, .smallno = smallno, .probs.x = probs.x, .nsimEIM = nsimEIM, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { theta1 <- eta2theta(eta[, 1], .link1 , earg = .earg1 ) theta2 <- eta2theta(eta[, 2], .link2 , earg = .earg2 ) 1 / (theta1 + theta2 * extra$Xm2) }, list( .link1 = link1, .earg1 = earg1, .link2 = link2, .earg2 = earg2 ))), last = eval(substitute(expression({ misc$link <- c(theta1 = .link1 , theta2 = .link2 ) misc$earg <- list(theta1 = .earg1 , theta2 = .earg2 ) misc$rpar <- rpar misc$orig.rpar <- .rpar fit$df.residual <- n - rank fit$df.total <- n dpar <- .dispersion if (!dpar) { dpar <- sum(c(w) * (y - mu)^2) / (n - ncol_X_vlm) } misc$dispersion <- dpar misc$default.dispersion <- 0 misc$estimated.dispersion <- .estimated.dispersion misc$imethod <- .imethod misc$nsimEIM <- .nsimEIM misc$firstDeriv <- .firstDeriv misc$oim <- .oim misc$multipleResponses <- FALSE }), list( .link1 = link1, .earg1 = earg1, .link2 = link2, .earg2 = earg2, .dispersion = dispersion, .rpar = rpar, .imethod = imethod, .nsimEIM = nsimEIM, .firstDeriv = firstDeriv, .oim = oim, .estimated.dispersion = estimated.dispersion ))), summary.dispersion = FALSE, vfamily = c("skira", 
"vnonlinear"), deriv = eval(substitute(expression({ rpar <- if ( .firstDeriv == "rpar") { if (iter > 1) { max(rpar / .divisor, 1000 * .Machine$double.eps) } else { d3 <- deriv3( ~ 1 / (theta1 + theta2 * Xm2), c("theta1", "theta2"), hessian = FALSE) .rpar } } else { .rpar } theta1 <- eta2theta(eta[, 1], .link1 , earg = .earg1 ) theta2 <- eta2theta(eta[, 2], .link2 , earg = .earg2 ) dthetas.detas <- cbind(dtheta.deta(theta1, .link1 , earg = .earg1 ), dtheta.deta(theta2, .link2 , earg = .earg2 )) dmus.dthetas <- if (FALSE) { attr(eval(d3), "gradient") } else { dmu.dtheta1 <- -1 / (theta1 + theta2 * Xm2)^2 dmu.dtheta2 <- -Xm2 / (theta1 + theta2 * Xm2)^2 cbind(dmu.dtheta1, dmu.dtheta2) } myderiv <- if ( .firstDeriv == "nsimEIM") { c(w) * (y - mu) * dmus.dthetas * dthetas.detas } else { c(w) * (y - mu) * cbind(dmus.dthetas[, 1] * dthetas.detas[, 1], dmus.dthetas[, 2] * dthetas.detas[, 2] + sqrt(rpar)) } myderiv }), list( .link1 = link1, .earg1 = earg1, .link2 = link2, .earg2 = earg2, .firstDeriv = firstDeriv, .rpar = rpar, .divisor = divisor ))), weight = eval(substitute(expression({ if ( .firstDeriv == "rpar") { if (FALSE) { index5 <- iam(NA, NA, M = M, both = TRUE) wz <- dmus.dthetas[, index5$row] * dmus.dthetas[, index5$col] * dthetas.detas[, index5$row] * dthetas.detas[, index5$col] if (M > 1) wz[, -(1:M)] <- wz[, -(1:M)] / 100 } else { wz <- cbind((dmus.dthetas[, 1] * dthetas.detas[, 1])^2, (dmus.dthetas[, 2] * dthetas.detas[, 2])^2 + rpar, dmus.dthetas[, 1] * dmus.dthetas[, 2] * dthetas.detas[, 1] * dthetas.detas[, 2]) } } else { run.varcov <- 0 index0 <- iam(NA, NA, M = M, both = TRUE, diag = TRUE) mysigma <- sqrt( median( (y - mu)^2 ) ) / 100 mysigma <- 1 for(ii in 1:( .nsimEIM )) { ysim <- 1 / (theta1 + theta2 * Xm2) + rnorm(n, sd = mysigma) temp3 <- (ysim - mu) * dmus.dthetas * dthetas.detas run.varcov <- run.varcov + temp3[, index0$row.index] * temp3[, index0$col.index] } run.varcov <- run.varcov / .nsimEIM wz <- if (intercept.only) matrix(colMeans(run.varcov), n, ncol(run.varcov), byrow = TRUE) else run.varcov } c(w) * wz }), list( .link1 = link1, .link2 = link2, .firstDeriv = firstDeriv, .nsimEIM = nsimEIM, .oim = oim )))) } VGAM/R/family.mixture.R0000644000176000001440000005737612136651110014353 0ustar ripleyusers# These functions are # Copyright (C) 1998-2013 T.W. Yee, University of Auckland. # All rights reserved. mix2normal1.control <- function(trace = TRUE, ...) 
{
  list(trace = trace)
}


mix2normal1 <- function(lphi = "logit", lmu = "identity", lsd = "loge",
                        iphi = 0.5, imu1 = NULL, imu2 = NULL,
                        isd1 = NULL, isd2 = NULL,
                        qmu = c(0.2, 0.8), equalsd = TRUE,
                        nsimEIM = 100, zero = 1) {
  lphi <- as.list(substitute(lphi))
  ephi <- link2list(lphi)
  lphi <- attr(ephi, "function.name")

  lmu <- as.list(substitute(lmu))
  emu <- link2list(lmu)
  lmu <- attr(emu, "function.name")

  lsd <- as.list(substitute(lsd))
  esd <- link2list(lsd)
  lsd <- attr(esd, "function.name")

  emu1 <- emu2 <- emu
  esd1 <- esd2 <- esd

  if (!is.Numeric(qmu, allowable.length = 2, positive = TRUE) ||
      any(qmu >= 1))
    stop("bad input for argument 'qmu'")
  if (length(iphi) &&
      (!is.Numeric(iphi, allowable.length = 1, positive = TRUE) ||
       iphi >= 1))
    stop("bad input for argument 'iphi'")
  if (length(imu1) && !is.Numeric(imu1))
    stop("bad input for argument 'imu1'")
  if (length(imu2) && !is.Numeric(imu2))
    stop("bad input for argument 'imu2'")
  if (length(isd1) && !is.Numeric(isd1, positive = TRUE))
    stop("bad input for argument 'isd1'")
  if (length(isd2) && !is.Numeric(isd2, positive = TRUE))
    stop("bad input for argument 'isd2'")
  if (!is.logical(equalsd) || length(equalsd) != 1)
    stop("bad input for argument 'equalsd'")
  if (!is.Numeric(nsimEIM, allowable.length = 1,
                  integer.valued = TRUE) || nsimEIM <= 10)
    stop("'nsimEIM' should be an integer greater than 10")

  new("vglmff",
  blurb = c("Mixture of two univariate normals\n\n",
            "Links: ",
            namesof("phi", lphi, earg = ephi, tag = FALSE), ", ",
            namesof("mu1", lmu, earg = emu1, tag = FALSE), ", ",
            namesof("sd1", lsd, earg = esd1, tag = FALSE), ", ",
            namesof("mu2", lmu, earg = emu2, tag = FALSE), ", ",
            namesof("sd2", lsd, earg = esd2, tag = FALSE), "\n",
            "Mean: phi*mu1 + (1 - phi)*mu2\n",
            "Variance: phi*sd1^2 + (1 - phi)*sd2^2 + ",
            "phi*(1 - phi)*(mu1-mu2)^2"),
  constraints = eval(substitute(expression({
    constraints <- cm.vgam(rbind(diag(4), c(0, 0, 1, 0)), x,
                           .equalsd , constraints, apply.int = TRUE)
    constraints <- cm.zero.vgam(constraints, x, .zero , M)
  }), list( .zero = zero, .equalsd = equalsd ))),
  initialize = eval(substitute(expression({
    temp5 <- w.y.check(w = w, y = y,
                       ncol.w.max = 1, ncol.y.max = 1,
                       out.wy = TRUE, maximize = TRUE)
    w <- temp5$w
    y <- temp5$y

    predictors.names <-
      c(namesof("phi", .lphi, tag = FALSE),
        namesof("mu1", .lmu, earg = .emu1, tag = FALSE),
        namesof("sd1", .lsd, earg = .esd1, tag = FALSE),
        namesof("mu2", .lmu, earg = .emu2, tag = FALSE),
        namesof("sd2", .lsd, earg = .esd2, tag = FALSE))

    if (!length(etastart)) {
      qy <- quantile(y, prob = .qmu)
      init.phi <- rep(if (length( .iphi )) .iphi else 0.5, length = n)
      init.mu1 <- rep(if (length( .imu1 )) .imu1 else qy[1], length = n)
      init.mu2 <- rep(if (length( .imu2 )) .imu2 else qy[2], length = n)
      ind.1 <- if (init.mu1[1] < init.mu2[1])
                 1:round(n * init.phi[1]) else
                 round(n * init.phi[1]):n
      ind.2 <- if (init.mu1[1] < init.mu2[1])
                 round(n * init.phi[1]):n else
                 1:round(n * init.phi[1])
      sorty <- sort(y)
      init.sd1 <- rep(if (length( .isd1 )) .isd1 else
                      sd(sorty[ind.1]), len = n)
      init.sd2 <- rep(if (length( .isd2 )) .isd2 else
                      sd(sorty[ind.2]), len = n)
      if ( .equalsd ) {
        init.sd1 <- init.sd2 <- (init.sd1 + init.sd2) / 2
        # all.equal() returns TRUE or a character string, so wrap in isTRUE().
        if (!isTRUE(all.equal( .esd1, .esd2 )))
          stop("'esd1' and 'esd2' must be equal if 'equalsd = TRUE'")
      }
      etastart <-
        cbind(theta2eta(init.phi, .lphi, earg = .ephi),
              theta2eta(init.mu1, .lmu, earg = .emu1),
              theta2eta(init.sd1, .lsd, earg = .esd1),
              theta2eta(init.mu2, .lmu, earg = .emu2),
              theta2eta(init.sd2, .lsd, earg = .esd2))
    }
  }), list(.lphi = lphi, .lmu = lmu, .iphi = iphi,
           .imu1 = imu1, .imu2 = imu2,
           .ephi = ephi, .emu1 =
emu1, .emu2 = emu2, .esd1 = esd1, .esd2 = esd2, .equalsd = equalsd, .lsd = lsd, .isd1 = isd1, .isd2 = isd2, .qmu = qmu))), linkinv = eval(substitute(function(eta, extra = NULL){ phi <- eta2theta(eta[, 1], link = .lphi, earg = .ephi) mu1 <- eta2theta(eta[, 2], link = .lmu, earg = .emu1) mu2 <- eta2theta(eta[, 4], link = .lmu, earg = .emu2) phi * mu1 + (1 - phi) * mu2 }, list( .lphi = lphi, .lmu = lmu, .ephi = ephi, .emu1 = emu1, .emu2 = emu2, .esd1 = esd1, .esd2 = esd2 ))), last = eval(substitute(expression({ misc$link <- c("phi" = .lphi, "mu1" = .lmu, "sd1" = .lsd , "mu2" = .lmu, "sd2" = .lsd) misc$earg <- list("phi" = .ephi, "mu1" = .emu1, "sd1" = .esd1, "mu2" = .emu2, "sd2" = .esd2) misc$expected <- TRUE misc$equalsd <- .equalsd misc$nsimEIM <- .nsimEIM misc$multipleResponses <- FALSE }), list(.lphi = lphi, .lmu = lmu, .lsd = lsd, .equalsd = equalsd, .ephi = ephi, .emu1 = emu1, .emu2 = emu2, .esd1 = esd1, .esd2 = esd2, .nsimEIM = nsimEIM ))), loglikelihood = eval(substitute( function(mu,y,w,residuals = FALSE,eta,extra = NULL) { phi <- eta2theta(eta[, 1], link = .lphi, earg = .ephi) mu1 <- eta2theta(eta[, 2], link = .lmu, earg = .emu1) sd1 <- eta2theta(eta[, 3], link = .lsd, earg = .esd1) mu2 <- eta2theta(eta[, 4], link = .lmu, earg = .emu2) sd2 <- eta2theta(eta[, 5], link = .lsd, earg = .esd2) f1 <- dnorm(y, mean=mu1, sd=sd1) f2 <- dnorm(y, mean=mu2, sd=sd2) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else sum(w * log(phi*f1 + (1 - phi)*f2)) }, list(.lphi = lphi, .lmu = lmu, .ephi = ephi, .emu1 = emu1, .emu2 = emu2, .esd1 = esd1, .esd2 = esd2, .lsd = lsd ))), vfamily = c("mix2normal1"), deriv = eval(substitute(expression({ phi <- eta2theta(eta[, 1], link = .lphi, earg = .ephi) mu1 <- eta2theta(eta[, 2], link = .lmu, earg = .emu1) sd1 <- eta2theta(eta[, 3], link = .lsd, earg = .esd1) mu2 <- eta2theta(eta[, 4], link = .lmu, earg = .emu2) sd2 <- eta2theta(eta[, 5], link = .lsd, earg = .esd2) dphi.deta <- dtheta.deta(phi, link = .lphi, earg = .ephi) dmu1.deta <- dtheta.deta(mu1, link = .lmu, earg = .emu1) dmu2.deta <- dtheta.deta(mu2, link = .lmu, earg = .emu2) dsd1.deta <- dtheta.deta(sd1, link = .lsd, earg = .esd1) dsd2.deta <- dtheta.deta(sd2, link = .lsd, earg = .esd2) f1 <- dnorm(y, mean=mu1, sd=sd1) f2 <- dnorm(y, mean=mu2, sd=sd2) pdf <- phi*f1 + (1 - phi)*f2 z1 <- (y-mu1) / sd1 z2 <- (y-mu2) / sd2 df1.dmu1 <- z1 * f1 / sd1 df2.dmu2 <- z2 * f2 / sd2 df1.dsd1 <- (z1^2 - 1) * f1 / sd1 df2.dsd2 <- (z2^2 - 1) * f2 / sd2 dl.dphi <- (f1-f2) / pdf dl.dmu1 <- phi * df1.dmu1 / pdf dl.dmu2 <- (1 - phi) * df2.dmu2 / pdf dl.dsd1 <- phi * df1.dsd1 / pdf dl.dsd2 <- (1 - phi) * df2.dsd2 / pdf c(w) * cbind(dl.dphi * dphi.deta, dl.dmu1 * dmu1.deta, dl.dsd1 * dsd1.deta, dl.dmu2 * dmu2.deta, dl.dsd2 * dsd2.deta) }), list(.lphi = lphi, .lmu = lmu, .lsd = lsd, .ephi = ephi, .emu1 = emu1, .emu2 = emu2, .esd1 = esd1, .esd2 = esd2, .nsimEIM = nsimEIM ))), weight = eval(substitute(expression({ d3 <- deriv3(~ log( phi * dnorm((ysim-mu1)/sd1) / sd1 + (1 - phi) * dnorm((ysim-mu2)/sd2) / sd2), c("phi","mu1","sd1","mu2","sd2"), hessian= TRUE) run.mean <- 0 for(ii in 1:( .nsimEIM )) { ysim <- ifelse(runif(n) < phi, rnorm(n, mu1, sd1), rnorm(n, mu2, sd2)) eval.d3 <- eval(d3) d2l.dthetas2 <- attr(eval.d3, "hessian") rm(ysim) temp3 <- matrix(0, n, dimm(M)) for(ss in 1:M) for(tt in ss:M) temp3[,iam(ss,tt, M)] <- -d2l.dthetas2[,ss,tt] run.mean <- ((ii-1) * run.mean + temp3) / ii } wz <- if (intercept.only) matrix(colMeans(run.mean), n, dimm(M), byrow = TRUE) else run.mean dtheta.detas <- 
cbind(dphi.deta, dmu1.deta, dsd1.deta, dmu2.deta, dsd2.deta) index0 <- iam(NA, NA, M = M, both = TRUE, diag = TRUE) wz <- wz * dtheta.detas[, index0$row] * dtheta.detas[, index0$col] c(w) * wz }), list(.lphi = lphi, .lmu = lmu, .nsimEIM = nsimEIM )))) } mix2poisson.control <- function(trace = TRUE, ...) { list(trace = trace) } mix2poisson <- function(lphi = "logit", llambda = "loge", iphi = 0.5, il1 = NULL, il2 = NULL, qmu = c(0.2, 0.8), nsimEIM = 100, zero = 1) { lphi <- as.list(substitute(lphi)) ephi <- link2list(lphi) lphi <- attr(ephi, "function.name") llambda <- as.list(substitute(llambda)) elambda <- link2list(llambda) llambda <- attr(elambda, "function.name") el1 <- el2 <- elambda if (!is.Numeric(qmu, allowable.length = 2, positive = TRUE) || any(qmu >= 1)) stop("bad input for argument 'qmu'") if (length(iphi) && (!is.Numeric(iphi, allowable.length = 1, positive = TRUE) || iphi >= 1)) stop("bad input for argument 'iphi'") if (length(il1) && !is.Numeric(il1)) stop("bad input for argument 'il1'") if (length(il2) && !is.Numeric(il2)) stop("bad input for argument 'il2'") if (!is.Numeric(nsimEIM, allowable.length = 1, integer.valued = TRUE) || nsimEIM <= 10) stop("'nsimEIM' should be an integer greater than 10") new("vglmff", blurb = c("Mixture of two Poisson distributions\n\n", "Links: ", namesof("phi",lphi, earg = ephi), ", ", namesof("lambda1", llambda, earg = el1, tag = FALSE), ", ", namesof("lambda2", llambda, earg = el2, tag = FALSE), "\n", "Mean: phi*lambda1 + (1 - phi)*lambda2"), constraints = eval(substitute(expression({ constraints <- cm.zero.vgam(constraints, x, .zero, M) }), list( .zero = zero ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, Is.nonnegative.y = TRUE, ncol.w.max = 1, ncol.y.max = 1, Is.integer.y = TRUE, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y predictors.names <- c(namesof("phi", .lphi , earg = .ephi , tag = FALSE), namesof("lambda1", .llambda , earg = .el1 , tag = FALSE), namesof("lambda2", .llambda , earg = .el2 , tag = FALSE)) if (!length(etastart)) { qy <- quantile(y, prob = .qmu) init.phi <- rep(if(length(.iphi)) .iphi else 0.5, length = n) init.lambda1 <- rep(if(length(.il1)) .il1 else qy[1], length = n) init.lambda2 <- rep(if(length(.il2)) .il2 else qy[2], length = n) if (!length(etastart)) etastart <- cbind(theta2eta(init.phi, .lphi , earg = .ephi ), theta2eta(init.lambda1, .llambda , earg = .el1 ), theta2eta(init.lambda2, .llambda , earg = .el2 )) } }), list(.lphi = lphi, .llambda = llambda, .ephi = ephi, .el1 = el1, .el2 = el2, .iphi = iphi, .il1 = il1, .il2 = il2, .qmu = qmu))), linkinv = eval(substitute(function(eta, extra = NULL){ phi <- eta2theta(eta[, 1], link = .lphi , earg = .ephi ) lambda1 <- eta2theta(eta[, 2], link = .llambda , earg = .el1 ) lambda2 <- eta2theta(eta[, 3], link = .llambda , earg = .el2 ) phi * lambda1 + (1 - phi) * lambda2 }, list(.lphi = lphi, .llambda = llambda, .ephi = ephi, .el1 = el1, .el2 = el2 ))), last = eval(substitute(expression({ misc$link <- c("phi" = .lphi, "lambda1" = .llambda, "lambda2" = .llambda ) misc$earg <- list("phi" = .ephi, "lambda1" = .el1, "lambda2" = .el2 ) misc$expected <- TRUE misc$nsimEIM <- .nsimEIM misc$multipleResponses <- FALSE }), list(.lphi = lphi, .llambda = llambda, .ephi = ephi, .el1 = el1, .el2 = el2, .nsimEIM = nsimEIM ))), loglikelihood = eval(substitute( function(mu,y,w,residuals = FALSE,eta,extra = NULL) { phi <- eta2theta(eta[, 1], link = .lphi, earg = .ephi) lambda1 <- eta2theta(eta[, 2], link = .llambda, earg = .el1) 
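    # The next few lines assemble the two-component Poisson mixture
    # log-likelihood: with component densities f1 and f2, the model's
    # log-likelihood is sum(w * log(phi * f1 + (1 - phi) * f2)).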
lambda2 <- eta2theta(eta[, 3], link = .llambda, earg = .el2) f1 <- dpois(y, lam = lambda1) f2 <- dpois(y, lam = lambda2) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else sum(w * log(phi*f1 + (1 - phi)*f2)) }, list(.lphi = lphi, .llambda = llambda, .ephi = ephi, .el1 = el1, .el2 = el2 ))), vfamily = c("mix2poisson"), deriv = eval(substitute(expression({ phi <- eta2theta(eta[, 1], link = .lphi, earg = .ephi) lambda1 <- eta2theta(eta[, 2], link = .llambda, earg = .el1) lambda2 <- eta2theta(eta[, 3], link = .llambda, earg = .el2) dphi.deta <- dtheta.deta(phi, link = .lphi, earg = .ephi) dlambda1.deta <- dtheta.deta(lambda1, link = .llambda, earg = .el1) dlambda2.deta <- dtheta.deta(lambda2, link = .llambda, earg = .el2) f1 <- dpois(x = y, lam = lambda1) f2 <- dpois(x = y, lam = lambda2) pdf <- phi*f1 + (1 - phi)*f2 df1.dlambda1 <- dpois(y-1, lam = lambda1) - f1 df2.dlambda2 <- dpois(y-1, lam = lambda2) - f2 dl.dphi <- (f1-f2) / pdf dl.dlambda1 <- phi * df1.dlambda1 / pdf dl.dlambda2 <- (1 - phi) * df2.dlambda2 / pdf c(w) * cbind(dl.dphi * dphi.deta, dl.dlambda1 * dlambda1.deta, dl.dlambda2 * dlambda2.deta) }), list(.lphi = lphi, .llambda = llambda, .ephi = ephi, .el1 = el1, .el2 = el2, .nsimEIM = nsimEIM ))), weight = eval(substitute(expression({ run.mean <- 0 for(ii in 1:( .nsimEIM )) { ysim <- ifelse(runif(n) < phi, rpois(n, lambda1), rpois(n, lambda2)) f1 <- dpois(x = ysim, lam = lambda1) f2 <- dpois(x = ysim, lam = lambda2) pdf <- phi*f1 + (1 - phi)*f2 df1.dlambda1 <- dpois(ysim-1, lam = lambda1) - f1 df2.dlambda2 <- dpois(ysim-1, lam = lambda2) - f2 dl.dphi <- (f1 - f2) / pdf dl.dlambda1 <- phi * df1.dlambda1 / pdf dl.dlambda2 <- (1 - phi) * df2.dlambda2 / pdf d2f1.dlambda12 <- dpois(ysim-2, lambda1) - 2*dpois(ysim-1, lambda1) + dpois(ysim, lambda1) d2f2.dlambda22 <- dpois(ysim-2, lambda2) - 2*dpois(ysim-1, lambda2) + dpois(ysim, lambda2) d2l.dphi2 <- dl.dphi^2 d2l.dlambda12 <- phi * (phi * df1.dlambda1^2 / pdf - d2f1.dlambda12) / pdf d2l.dlambda22 <- (1 - phi) * ((1 - phi) * df2.dlambda2^2 / pdf - d2f2.dlambda22) / pdf d2l.dlambda1lambda2 <- phi * (1 - phi) * df1.dlambda1 * df2.dlambda2 / pdf^2 d2l.dphilambda1 <- df1.dlambda1 * (phi*(f1-f2)/pdf - 1) / pdf d2l.dphilambda2 <- df2.dlambda2 * ((1 - phi)*(f1-f2)/pdf - 1) / pdf rm(ysim) temp3 <- matrix(0, n, dimm(M)) temp3[,iam(1, 1, M = 3)] <- d2l.dphi2 temp3[,iam(2, 2, M = 3)] <- d2l.dlambda12 temp3[,iam(3, 3, M = 3)] <- d2l.dlambda22 temp3[,iam(1, 2, M = 3)] <- d2l.dphilambda1 temp3[,iam(1, 3, M = 3)] <- d2l.dphilambda2 temp3[,iam(2, 3, M = 3)] <- d2l.dlambda1lambda2 run.mean <- ((ii-1) * run.mean + temp3) / ii } wz <- if (intercept.only) matrix(colMeans(run.mean), n, dimm(M), byrow = TRUE) else run.mean dtheta.detas <- cbind(dphi.deta, dlambda1.deta, dlambda2.deta) index0 <- iam(NA, NA, M = M, both = TRUE, diag = TRUE) wz <- wz * dtheta.detas[, index0$row] * dtheta.detas[, index0$col] c(w) * wz }), list(.lphi = lphi, .llambda = llambda, .ephi = ephi, .el1 = el1, .el2 = el2, .nsimEIM = nsimEIM )))) } mix2exp.control <- function(trace = TRUE, ...) 
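# mix2exp() (defined just below) is the analogous two-component
# exponential mixture, with fitted mean phi/lambda1 + (1 - phi)/lambda2.
# As with any finite mixture, the component labels are interchangeable,
# so the initial values (see 'qmu', 'il1', 'il2') determine which
# component is reported first.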
{ list(trace = trace) } mix2exp <- function(lphi = "logit", llambda = "loge", iphi = 0.5, il1 = NULL, il2 = NULL, qmu = c(0.8, 0.2), nsimEIM = 100, zero = 1) { lphi <- as.list(substitute(lphi)) ephi <- link2list(lphi) lphi <- attr(ephi, "function.name") llambda <- as.list(substitute(llambda)) elambda <- link2list(llambda) llambda <- attr(elambda, "function.name") el1 <- el2 <- elambda if (!is.Numeric(qmu, allowable.length = 2, positive = TRUE) || any(qmu >= 1)) stop("bad input for argument 'qmu'") if (length(iphi) && (!is.Numeric(iphi, allowable.length = 1, positive = TRUE) || iphi >= 1)) stop("bad input for argument 'iphi'") if (length(il1) && !is.Numeric(il1)) stop("bad input for argument 'il1'") if (length(il2) && !is.Numeric(il2)) stop("bad input for argument 'il2'") if (!is.Numeric(nsimEIM, allowable.length = 1, integer.valued = TRUE) || nsimEIM <= 10) stop("'nsimEIM' should be an integer greater than 10") new("vglmff", blurb = c("Mixture of two univariate exponentials\n\n", "Links: ", namesof("phi", lphi, earg = ephi, tag = FALSE), ", ", namesof("lambda1", llambda, earg = el1 , tag = FALSE), ", ", namesof("lambda2", llambda, earg = el2 , tag = FALSE), "\n", "Mean: phi / lambda1 + (1 - phi) / lambda2\n"), constraints = eval(substitute(expression({ constraints <- cm.zero.vgam(constraints, x, .zero, M) }), list( .zero = zero ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, Is.nonnegative.y = TRUE, ncol.w.max = 1, ncol.y.max = 1, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y predictors.names <- c(namesof("phi", .lphi , earg = .ephi , tag = FALSE), namesof("lambda1", .llambda , earg = .el1 , tag = FALSE), namesof("lambda2", .llambda , earg = .el2 , tag = FALSE)) if (!length(etastart)) { qy <- quantile(y, prob = .qmu) init.phi <- rep(if(length(.iphi)) .iphi else 0.5, length = n) init.lambda1 <- rep(if(length(.il1)) .il1 else 1/qy[1], length = n) init.lambda2 <- rep(if(length(.il2)) .il2 else 1/qy[2], length = n) if (!length(etastart)) etastart <- cbind(theta2eta(init.phi, .lphi, earg = .ephi), theta2eta(init.lambda1, .llambda, earg = .el1), theta2eta(init.lambda2, .llambda, earg = .el2)) } }), list(.lphi = lphi, .llambda = llambda, .ephi = ephi, .el1 = el1, .el2 = el2, .iphi = iphi, .il1 = il1, .il2 = il2, .qmu = qmu))), linkinv = eval(substitute(function(eta, extra = NULL){ phi <- eta2theta(eta[, 1], link = .lphi, earg = .ephi) lambda1 <- eta2theta(eta[, 2], link = .llambda, earg = .el1) lambda2 <- eta2theta(eta[, 3], link = .llambda, earg = .el2) phi / lambda1 + (1 - phi) / lambda2 }, list(.lphi = lphi, .llambda = llambda, .ephi = ephi, .el1 = el1, .el2 = el2 ))), last = eval(substitute(expression({ misc$link <- c("phi" = .lphi, "lambda1" = .llambda, "lambda2" = .llambda) misc$earg <- list("phi" = .ephi, "lambda1" = .el1, "lambda2" = .el2) misc$expected <- TRUE misc$nsimEIM <- .nsimEIM misc$multipleResponses <- FALSE }), list(.lphi = lphi, .llambda = llambda, .nsimEIM = nsimEIM, .ephi = ephi, .el1 = el1, .el2 = el2 ))), loglikelihood = eval(substitute( function(mu,y,w,residuals = FALSE,eta,extra = NULL) { phi <- eta2theta(eta[, 1], link = .lphi, earg = .ephi) lambda1 <- eta2theta(eta[, 2], link = .llambda, earg = .el1) lambda2 <- eta2theta(eta[, 3], link = .llambda, earg = .el2) f1 <- dexp(y, rate=lambda1) f2 <- dexp(y, rate=lambda2) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else sum(w * log(phi*f1 + (1 - phi)*f2)) }, list(.lphi = lphi, .llambda = llambda, .ephi = ephi, .el1 = el1, .el2 = el2 ))), 
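  # The @deriv slot below uses the identity
  #   (d/d lambda) dexp(y, rate = lambda) = (1 - lambda*y) * exp(-lambda*y),
  # coded as exp(-lambda*y) - y * dexp(y, rate = lambda).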
vfamily = c("mix2exp"), deriv = eval(substitute(expression({ phi <- eta2theta(eta[, 1], link = .lphi, earg = .ephi) lambda1 <- eta2theta(eta[, 2], link = .llambda, earg = .el1) lambda2 <- eta2theta(eta[, 3], link = .llambda, earg = .el2) dphi.deta <- dtheta.deta(phi, link = .lphi, earg = .ephi) dlambda1.deta <- dtheta.deta(lambda1, link = .llambda, earg = .el1) dlambda2.deta <- dtheta.deta(lambda2, link = .llambda, earg = .el2) f1 <- dexp(x = y, rate=lambda1) f2 <- dexp(x = y, rate=lambda2) pdf <- phi*f1 + (1 - phi)*f2 df1.dlambda1 <- exp(-lambda1*y) - y * dexp(y, rate=lambda1) df2.dlambda2 <- exp(-lambda2*y) - y * dexp(y, rate=lambda2) dl.dphi <- (f1-f2) / pdf dl.dlambda1 <- phi * df1.dlambda1 / pdf dl.dlambda2 <- (1 - phi) * df2.dlambda2 / pdf c(w) * cbind(dl.dphi * dphi.deta, dl.dlambda1 * dlambda1.deta, dl.dlambda2 * dlambda2.deta) }), list(.lphi = lphi, .llambda = llambda, .ephi = ephi, .el1 = el1, .el2 = el2 ))), weight = eval(substitute(expression({ run.mean <- 0 for(ii in 1:( .nsimEIM )) { ysim <- ifelse(runif(n) < phi, rexp(n, lambda1), rexp(n, lambda2)) f1 <- dexp(x = ysim, rate=lambda1) f2 <- dexp(x = ysim, rate=lambda2) pdf <- phi*f1 + (1 - phi)*f2 df1.dlambda1 <- exp(-lambda1*ysim) - ysim * dexp(ysim, rate=lambda1) df2.dlambda2 <- exp(-lambda2*ysim) - ysim * dexp(ysim, rate=lambda2) dl.dphi <- (f1-f2) / pdf dl.dlambda1 <- phi * df1.dlambda1 / pdf dl.dlambda2 <- (1 - phi) * df2.dlambda2 / pdf d2f1.dlambda12 <- ysim*(ysim*lambda1-2)*exp(-lambda1*ysim) d2f2.dlambda22 <- ysim*(ysim*lambda2-2)*exp(-lambda2*ysim) d2l.dphi2 <- dl.dphi^2 d2l.dlambda12 <- phi * (phi * df1.dlambda1^2 / pdf - d2f1.dlambda12) / pdf d2l.dlambda22 <- (1 - phi) * ((1 - phi) * df2.dlambda2^2 / pdf - d2f2.dlambda22) / pdf d2l.dlambda1lambda2 <- phi * (1 - phi) * df1.dlambda1 * df2.dlambda2 / pdf^2 d2l.dphilambda1 <- df1.dlambda1 * (phi*(f1-f2)/pdf - 1) / pdf d2l.dphilambda2 <- df2.dlambda2 * ((1 - phi)*(f1-f2)/pdf - 1) / pdf rm(ysim) temp3 <- matrix(0, n, dimm(M)) temp3[,iam(1, 1, M = 3)] <- d2l.dphi2 temp3[,iam(2, 2, M = 3)] <- d2l.dlambda12 temp3[,iam(3, 3, M = 3)] <- d2l.dlambda22 temp3[,iam(1, 2, M = 3)] <- d2l.dphilambda1 temp3[,iam(1, 3, M = 3)] <- d2l.dphilambda2 temp3[,iam(2, 3, M = 3)] <- d2l.dlambda1lambda2 run.mean <- ((ii-1) * run.mean + temp3) / ii } wz <- if (intercept.only) matrix(colMeans(run.mean), n, dimm(M), byrow = TRUE) else run.mean dtheta.detas <- cbind(dphi.deta, dlambda1.deta, dlambda2.deta) index0 <- iam(NA, NA, M = M, both = TRUE, diag = TRUE) wz <- wz * dtheta.detas[, index0$row] * dtheta.detas[, index0$col] c(w) * wz }), list(.lphi = lphi, .llambda = llambda, .ephi = ephi, .el1 = el1, .el2 = el2, .nsimEIM = nsimEIM )))) } VGAM/R/family.math.R0000644000176000001440000000516012136651110013567 0ustar ripleyusers# These functions are # Copyright (C) 1998-2013 T.W. Yee, University of Auckland. # All rights reserved. 
VGAM/R/family.math.R0000644000176000001440000000516012136651110013577 0ustar ripleyusers# These functions are
# Copyright (C) 1998-2013 T.W. Yee, University of Auckland.
# All rights reserved.


lambertW <- function(x, tolerance = 1.0e-10, maxit = 50) {
  if (any(Im(x) != 0.0))
    stop("argument 'x' must be real, not complex!")

  ans <- x
  ans[!is.na(x) & x <  -exp(-1)] <- NA
  ans[!is.na(x) & x >= -exp(-1)] <- log1p(x[!is.na(x) & x >= -exp(-1)])
  ans[!is.na(x) & x >= 0       ] <- sqrt(x[!is.na(x) & x >= 0]) / 2

  cutpt <- 3.0
  if (any(myTF <- !is.na(x) & x > cutpt)) {
    L1 <- log(x[!is.na(x) & x > cutpt])  # log(as.complex(x))
    L2 <- log(L1)  # log(as.complex(L1))
    wzinit <- L1 - L2 +
          (L2 +
          (L2*( -2 + L2)/(2) +
          (L2*(  6 + L2*(-9 + L2*   2)) / (6) +
           L2*(-12 + L2*(36 + L2*(-22 + L2*3))) / (12*L1)) / L1) / L1) / L1
    ans[myTF] <- wzinit
  }

  for (ii in 1:maxit) {
    exp1 <- exp(ans)
    exp2 <- ans * exp1
    delta <- (exp2 - x) / (exp2 + exp1 -
             ((ans + 2) * (exp2 - x) / (2 * (ans + 1.0))))
    ans <- ans - delta
    if (all(is.na(delta)) ||
        max(abs(delta), na.rm = TRUE) < tolerance) break
    # NB: the original tested all(is.na(delta) || ...), which applies '||'
    # to a vector and only looks at its first element; the intent is
    # clearly to stop once every delta is NA or the largest one is small.
    if (ii == maxit)
      warning("did not converge")
  }
  ans[x == Inf] <- Inf
  ans
}


pgamma.deriv <- function(q, shape, tmax = 100) {
  nnn <- max(length(q), length(shape))
  if (length(q)     != nnn) q     <- rep(q,     length = nnn)
  if (length(shape) != nnn) shape <- rep(shape, length = nnn)

  if (!is.Numeric(q, positive = TRUE))
    stop("bad input for argument 'q'")
  if (!is.Numeric(shape, positive = TRUE))
    stop("bad input for argument 'shape'")
  if (!is.Numeric(tmax, allowable.length = 1, positive = TRUE))
    stop("bad input for argument 'tmax'")
  if (tmax < 10)
    warning("probably argument 'tmax' is too small")

  gplog  <- lgamma(shape)
  gp1log <- gplog + log(shape)
  psip   <- digamma(shape)
  psip1  <- psip + 1 / shape
  psidp  <- trigamma(shape)
  psidp1 <- psidp - 1 / shape^2

  fred <- dotC(name = "VGAM_C_vdigami",
               d = as.double(matrix(0, 6, nnn)),
               x = as.double(q), p = as.double(shape),
               as.double(gplog), as.double(gp1log), as.double(psip),
               as.double(psip1), as.double(psidp), as.double(psidp1),
               ifault = integer(nnn),
               tmax = as.double(tmax),
               as.integer(nnn))
  answer <- matrix(fred$d, nnn, 6, byrow = TRUE)
  dimnames(answer) <- list(names(q),
                           c("q", "q^2", "shape", "shape^2",
                             "q.shape", "pgamma(q, shape)"))

  if (any(fred$ifault != 0)) {
    indices <- which(fred$ifault != 0)
    warning("convergence problems with elements ", indices)
  }

  answer
}
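# Editor's note: two quick sanity checks for the functions above (a sketch,
# assuming the VGAM package is attached).  lambertW(1) is the omega
# constant, approximately 0.5671433, so w * exp(w) should return 1.
if (FALSE) {
  w <- lambertW(1)
  w * exp(w)                      # Numerically 1
  pgamma.deriv(q = 2, shape = 3)  # Derivatives of the incomplete gamma ratio
}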
VGAM/R/family.loglin.R0000644000176000001440000002347012136651110014126 0ustar ripleyusers# These functions are
# Copyright (C) 1998-2013 T.W. Yee, University of Auckland.
# All rights reserved.


loglinb2 <- function(exchangeable = FALSE, zero = NULL) {
  new("vglmff",
  blurb = c("Log-linear model for binary data\n\n",
            "Links: ",
            "Identity: u1, u2, u12",
            "\n"),
  constraints = eval(substitute(expression({
    constraints <- cm.vgam(matrix(c(1,1,0, 0,0,1), 3, 2), x,
                           .exchangeable , constraints,
                           apply.int = TRUE)
    constraints <- cm.zero.vgam(constraints, x, .zero , M)
  }), list( .exchangeable = exchangeable, .zero = zero ))),
  initialize = expression({
    temp5 <- w.y.check(w = w, y = y,
                       Is.nonnegative.y = TRUE,
                       ncol.w.max = 1, ncol.y.max = 2,
                       out.wy = TRUE, colsyperw = 2, maximize = TRUE)
    w <- temp5$w
    y <- temp5$y

    if (ncol(y) != 2)
      stop("ncol(y) must be = 2")

    predictors.names <- c("u1", "u2", "u12")

    if (length(mustart) + length(etastart) == 0) {
      mustart <- matrix(as.numeric(NA), nrow(y), 4)
      mustart[, 1] <- weighted.mean((1-y[, 1]) * (1-y[, 2]), w)
      mustart[, 2] <- weighted.mean((1-y[, 1]) *    y[, 2],  w)
      mustart[, 3] <- weighted.mean(   y[, 1]  * (1-y[, 2]), w)
      mustart[, 4] <- weighted.mean(   y[, 1]  *    y[, 2],  w)
      if (any(mustart == 0))
        stop("some combinations of the response not realized")
    }
  }),
  linkinv = function(eta, extra = NULL) {
    u1  <- eta[, 1]
    u2  <- eta[, 2]
    u12 <- eta[, 3]
    denom <- 1 + exp(u1) + exp(u2) + exp(u1 + u2 + u12)
    cbind("00" = 1/denom,
          "01" = exp(u2) / denom,
          "10" = exp(u1) / denom,
          "11" = exp(u1+u2+u12) / denom)
  },
  last = expression({
    misc$link <- c("u1" = "identity", "u2" = "identity",
                   "u12" = "identity")
    misc$earg <- list("u1" = list(), "u2" = list(), "u12" = list())
    misc$expected <- TRUE
    misc$multipleResponses <- TRUE
  }),
  linkfun = function(mu, extra = NULL) {
    u0  <- log(mu[, 1])
    u2  <- log(mu[, 2]) - u0
    u1  <- log(mu[, 3]) - u0
    u12 <- log(mu[, 4]) - u0 - u1 - u2
    cbind(u1, u2, u12)
  },
  loglikelihood = function(mu, y, w, residuals = FALSE, eta, extra = NULL) {
    u1  <- eta[, 1]
    u2  <- eta[, 2]
    u12 <- eta[, 3]
    denom <- 1 + exp(u1) + exp(u2) + exp(u1 + u2 + u12)
    u0 <- -log(denom)
    if (residuals) stop("loglikelihood residuals not implemented yet") else
      sum(c(w) * (u0 + u1*y[, 1] + u2*y[, 2] + u12*y[, 1]*y[, 2]))
  },
  vfamily = c("loglinb2"),
  deriv = expression({
    u1  <- eta[, 1]
    u2  <- eta[, 2]
    u12 <- eta[, 3]
    denom <- 1 + exp(u1) + exp(u2) + exp(u1 + u2 + u12)
    du0.du1  <- -(exp(u1) + exp(u1 + u2 + u12)) / denom
    du0.du2  <- -(exp(u2) + exp(u1 + u2 + u12)) / denom
    du0.du12 <- -exp(u1 + u2 + u12) / denom
    c(w) * cbind(du0.du1  + y[, 1],
                 du0.du2  + y[, 2],
                 du0.du12 + y[, 1] * y[, 2])
  }),
  weight = expression({
    d2u0.du1.2 <- -(exp(u1) + exp(u1 + u2 + u12)) * (1+exp(u2)) / denom^2
    d2u0.du22  <- -(exp(u2) + exp(u1 + u2 + u12)) * (1+exp(u1)) / denom^2
    d2u0.du122 <- -exp(u1 + u2 + u12) * (1+exp(u1)+exp(u2)) / denom^2
    d2u0.du1u2 <- -(exp(u1 + u2 + u12) - exp(u1 + u2)) / denom^2
    d2u0.du1u3 <- -(1 + exp(u2)) * exp(u1 + u2 + u12) / denom^2
    d2u0.du2u3 <- -(1 + exp(u1)) * exp(u1 + u2 + u12) / denom^2

    wz <- matrix(as.numeric(NA), n, dimm(M))
    wz[, iam(1, 1, M)] <- -d2u0.du1.2
    wz[, iam(2, 2, M)] <- -d2u0.du22
    wz[, iam(3, 3, M)] <- -d2u0.du122
    wz[, iam(1, 2, M)] <- -d2u0.du1u2
    wz[, iam(1, 3, M)] <- -d2u0.du1u3
    wz[, iam(2, 3, M)] <- -d2u0.du2u3
    c(w) * wz
  }))
}
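# Editor's note: a minimal sketch of loglinb2() on simulated bivariate
# binary data (assumes VGAM is attached; 'ldata' is a made-up name).
# With independently generated responses, u12 should be close to zero.
if (FALSE) {
  set.seed(123)
  ldata <- data.frame(y1 = rbinom(500, size = 1, prob = 0.6),
                      y2 = rbinom(500, size = 1, prob = 0.4))
  fit <- vglm(cbind(y1, y2) ~ 1, loglinb2, data = ldata)
  coef(fit, matrix = TRUE)  # u1, u2, u12
  head(fitted(fit), 1)      # P(00), P(01), P(10), P(11)
}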
"u23") temp5 <- w.y.check(w = w, y = y, Is.nonnegative.y = TRUE, ncol.w.max = 1, ncol.y.max = 3, out.wy = TRUE, colsyperw = 3, maximize = TRUE) w <- temp5$w y <- temp5$y if (ncol(y) != 3) stop("ncol(y) must be = 3") if (FALSE) extra$my.expression <- expression({ u1 <- eta[, 1] u2 <- eta[, 2] u3 <- eta[, 3] u12 <- eta[, 4] u13 <- eta[, 5] u23 <- eta[, 6] denom <- 1 + exp(u1) + exp(u2) + exp(u3) + exp(u1 + u2 + u12) + exp(u1 + u3 + u13) + exp(u2 + u3 + u23) + exp(u1 + u2 + u3 + u12 + u13 + u23) }) if (length(mustart) + length(etastart) == 0) { mustart <- matrix(as.numeric(NA), nrow(y), 2^3) mustart[,1] <- weighted.mean((1-y[,1])*(1-y[,2])*(1-y[,3]), w) mustart[,2] <- weighted.mean((1-y[,1])*(1-y[,2])*y[,3], w) mustart[,3] <- weighted.mean((1-y[,1])*y[,2]*(1-y[,3]), w) mustart[,4] <- weighted.mean((1-y[,1])*y[,2]*y[,3], w) mustart[,5] <- weighted.mean(y[,1]*(1-y[,2])*(1-y[,3]), w) mustart[,6] <- weighted.mean(y[,1]*(1-y[,2])*y[,3], w) mustart[,7] <- weighted.mean(y[,1]*y[,2]*(1-y[,3]), w) mustart[,8] <- weighted.mean(y[,1]*y[,2]*y[,3], w) if (any(mustart == 0)) stop("some combinations of the response not realized") } }), linkinv = function(eta, extra = NULL) { u1 <- eta[, 1] u2 <- eta[, 2] u3 <- eta[, 3] u12 <- eta[, 4] u13 <- eta[, 5] u23 <- eta[, 6] denom <- 1 + exp(u1) + exp(u2) + exp(u3) + exp(u1 + u2 + u12) + exp(u1 + u3 + u13) + exp(u2 + u3 + u23) + exp(u1 + u2 + u3 + u12 + u13 + u23) cbind("000" = 1, "001" = exp(u3), "010" = exp(u2), "011" = exp(u2+u3+u23), "100" = exp(u1), "101" = exp(u1+u3+u13), "110" = exp(u1+u2+u12), "111" = exp(u1+u2+u3+u12+u13+u23)) / denom }, last = expression({ misc$link <- rep("identity", length = M) names(misc$link) <- predictors.names misc$earg <- list(u1 = list(), u2 = list(), u3 = list(), u12 = list(), u13 = list(), u23 = list()) misc$expected <- TRUE misc$multipleResponses <- TRUE }), linkfun = function(mu, extra = NULL) { u0 <- log(mu[,1]) u3 <- log(mu[,2]) - u0 u2 <- log(mu[,3]) - u0 u23 <- log(mu[,4]) - u0 - u2 - u3 u1 <- log(mu[,5]) - u0 u13 <- log(mu[,6]) - u0 - u1 - u3 u12 <- log(mu[,7]) - u0 - u1 - u2 cbind(u1, u2, u3, u12, u13, u23) }, loglikelihood = function(mu,y,w,residuals = FALSE,eta,extra = NULL) { u1 <- eta[, 1] u2 <- eta[, 2] u3 <- eta[, 3] u12 <- eta[, 4] u13 <- eta[, 5] u23 <- eta[, 6] denom <- 1 + exp(u1) + exp(u2) + exp(u3) + exp(u1 + u2 + u12) + exp(u1 + u3 + u13) + exp(u2 + u3 + u23) + exp(u1 + u2 + u3 + u12 + u13 + u23) u0 <- -log(denom) if (residuals) stop("loglikelihood residuals not implemented yet") else sum(c(w) *(u0 + u1*y[,1] + u2*y[,2] + u3*y[,3] +u12*y[,1]*y[,2] + u13*y[,1]*y[,3] + u23*y[,2]*y[,3])) }, vfamily = c("loglinb3"), deriv = expression({ u1 <- eta[, 1] u2 <- eta[, 2] u3 <- eta[, 3] u12 <- eta[, 4] u13 <- eta[, 5] u23 <- eta[, 6] denom <- 1 + exp(u1) + exp(u2) + exp(u3) + exp(u1 + u2 + u12) + exp(u1 + u3 + u13) + exp(u2 + u3 + u23) + exp(u1 + u2 + u3 + u12 + u13 + u23) allterms <- exp(u1+u2+u3+u12+u13+u23) A1 <- exp(u1) + exp(u1 + u2 + u12) + exp(u1 + u3 + u13) + allterms A2 <- exp(u2) + exp(u1 + u2 + u12) + exp(u2 + u3 + u23) + allterms A3 <- exp(u3) + exp(u3 + u2 + u23) + exp(u1 + u3 + u13) + allterms A12 <- exp(u1 + u2 + u12) + allterms A13 <- exp(u1 + u3 + u13) + allterms A23 <- exp(u2 + u3 + u23) + allterms c(w) * cbind(-A1/denom + y[,1], -A2/denom + y[,2], -A3/denom + y[,3], -A12/denom + y[,1]*y[,2], -A13/denom + y[,1]*y[,3], -A23/denom + y[,2]*y[,3]) }), weight = expression({ u0 <- -log(denom) dA2.du1 <- exp(u1 + u2 + u12) + allterms dA3.du1 <- exp(u1 + u3 + u13) + allterms dA3.du2 <- exp(u2 + u3 + u23) + 
    wz <- matrix(as.numeric(NA), n, dimm(6))
    expu0 <- exp(u0)

    wz[, iam(1, 1, M)] <- A1 * (1 - expu0 * A1)
    wz[, iam(2, 2, M)] <- A2 * (1 - expu0 * A2)
    wz[, iam(3, 3, M)] <- A3 * (1 - expu0 * A3)
    wz[, iam(1, 2, M)] <- (dA2.du1 - expu0 * A1 * A2)
    wz[, iam(1, 3, M)] <- (dA3.du1 - expu0 * A1 * A3)
    wz[, iam(2, 3, M)] <- (dA3.du2 - expu0 * A2 * A3)
    wz[, iam(4, 4, M)] <- A12 * (1 - expu0 * A12)
    wz[, iam(5, 5, M)] <- A13 * (1 - expu0 * A13)
    wz[, iam(6, 6, M)] <- A23 * (1 - expu0 * A23)
    wz[, iam(4, 6, M)] <- (allterms - expu0 * A12 * A23)
    wz[, iam(5, 6, M)] <- (allterms - expu0 * A13 * A23)  # Was A12 * A23,
      # which duplicated the (4,6) entry; by the symmetry of the other
      # off-diagonal terms, the (u13, u23) element should use A13 and A23.
    wz[, iam(4, 5, M)] <- (allterms - expu0 * A12 * A13)
    wz[, iam(1, 4, M)] <- A12 * (1 - expu0 * A1)
    wz[, iam(1, 5, M)] <- A13 * (1 - expu0 * A1)
    wz[, iam(1, 6, M)] <- (allterms - expu0 * A1 * A23)
    wz[, iam(2, 4, M)] <- A12 * (1 - expu0 * A2)
    wz[, iam(2, 5, M)] <- (allterms - expu0 * A2 * A13)
    wz[, iam(2, 6, M)] <- A23 * (1 - expu0 * A2)
    wz[, iam(3, 4, M)] <- (allterms - expu0 * A3 * A12)
    wz[, iam(3, 5, M)] <- A13 * (1 - expu0 * A3)
    wz[, iam(3, 6, M)] <- A23 * (1 - expu0 * A3)
    wz <- expu0 * wz
    c(w) * wz
  }))
}
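# Editor's note: the trivariate analogue, sketched the same way (assumes
# VGAM is attached; 'ldata3' is a made-up name).
if (FALSE) {
  set.seed(123)
  ldata3 <- data.frame(y1 = rbinom(500, size = 1, prob = 0.5),
                       y2 = rbinom(500, size = 1, prob = 0.5),
                       y3 = rbinom(500, size = 1, prob = 0.5))
  fit3 <- vglm(cbind(y1, y2, y3) ~ 1, loglinb3, data = ldata3)
  coef(fit3, matrix = TRUE)  # u1, u2, u3, u12, u13, u23
}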
VGAM/R/family.glmgam.R0000644000176000001440000016412312136651110014107 0ustar ripleyusers# These functions are
# Copyright (C) 1998-2013 T.W. Yee, University of Auckland.
# All rights reserved.


binomialff <- function(link = "logit",
                       dispersion = 1, mv = FALSE,
                       onedpar = !mv, parallel = FALSE,
                       apply.parint = FALSE,
                       zero = NULL, bred = FALSE,
                       earg.link = FALSE) {
  if (!is.logical(bred) || length(bred) > 1)
    stop("argument 'bred' must be a single logical")

  estimated.dispersion <- dispersion == 0

  if (earg.link) {
    earg <- link
  } else {
    link <- as.list(substitute(link))
    earg <- link2list(link)
  }
  link <- attr(earg, "function.name")

  ans <- new("vglmff",
  blurb = if (mv)
          c("Multiple binomial model\n\n",
            "Link: ", namesof("mu[,j]", link, earg = earg), "\n",
            "Variance: mu[,j]*(1-mu[,j])") else
          c("Binomial model\n\n",
            "Link: ", namesof("mu", link, earg = earg), "\n",
            "Variance: mu * (1 - mu)"),
  constraints = eval(substitute(expression({
    constraints <- cm.vgam(matrix(1, M, 1), x, .parallel , constraints,
                           apply.int = .apply.parint )
    constraints <- cm.zero.vgam(constraints, x, .zero , M)
  }), list( .zero = zero, .parallel = parallel,
            .apply.parint = apply.parint ))),
  infos = eval(substitute(function(...)
  {
    list(Musual = 1,
         bred = .bred ,
         zero = .zero )
  }, list( .zero = zero, .bred = bred ))),
  initialize = eval(substitute(expression({
    assign("CQO.FastAlgorithm",
           ( .link == "logit" || .link == "cloglog"),
           envir = VGAM:::VGAMenv)
    assign("modelno", if ( .link == "logit") 1 else
                      if ( .link == "cloglog") 4 else NULL,
           envir = VGAM:::VGAMenv)

    if ( .mv ) {
      temp5 <- w.y.check(w = w, y = y,
                         Is.nonnegative.y = TRUE,
                         ncol.w.max = Inf, ncol.y.max = Inf,
                         out.wy = TRUE, colsyperw = 1, maximize = TRUE)
      w <- temp5$w
      y <- temp5$y

      M <- ncol(y)
      if (!all(y == 0 | y == 1))
        stop("response must contain 0's and 1's only")

      dn2 <- if (is.matrix(y)) dimnames(y)[[2]] else NULL
      dn2 <- if (length(dn2)) {
        paste("E[", dn2, "]", sep = "")
      } else {
        paste("mu", 1:M, sep = "")
      }
      predictors.names <- namesof(if (M > 1) dn2 else "mu",
                                  .link , earg = .earg , short = TRUE)

      if (!length(mustart) && !length(etastart))
        mustart <- matrix(colMeans(y), nrow = nrow(y), ncol = ncol(y),
                          byrow = TRUE)
      if (!all(w == 1))
        extra$orig.w <- w
      extra$mv <- TRUE
    } else {
      if (!all(w == 1))
        extra$orig.w <- w

      NCOL <- function (x)
        if (is.array(x) && length(dim(x)) > 1 ||
            is.data.frame(x)) ncol(x) else as.integer(1)

      if (NCOL(y) == 1) {
        if (is.factor(y))
          y <- (y != levels(y)[1])
        nvec <- rep(1, n)
        y[w == 0] <- 0
        if (!all(y == 0 | y == 1))
          stop("response values 'y' must be 0 or 1")
        if (!length(mustart) && !length(etastart))
          mustart <- (0.5 + w * y) / (1 + w)

        no.successes <- y
        if (min(y) < 0)
          stop("Negative data not allowed!")
        if (any(abs(no.successes - round(no.successes)) > 1.0e-8))
          stop("Number of successes must be integer-valued")
      } else if (NCOL(y) == 2) {
        if (min(y) < 0)
          stop("Negative data not allowed!")
        if (any(abs(y - round(y)) > 1.0e-8))
          stop("Count data must be integer-valued")
        y <- round(y)
        nvec <- y[, 1] + y[, 2]
        y <- ifelse(nvec > 0, y[, 1] / nvec, 0)
        w <- w * nvec
        if (!length(mustart) && !length(etastart))
          mustart <- (0.5 + nvec * y) / (1 + nvec)
      } else {
        stop("for the binomialff family, response 'y' must be a ",
             "vector of 0 and 1's\n",
             "or a factor (first level = fail, other levels = success),\n",
             "or a 2-column matrix where col 1 is the no. of ",
             "successes and col 2 is the no. of failures")
      }
      predictors.names <-
        namesof("mu", .link , earg = .earg , short = TRUE)
    }

    if ( .bred ) {
      if ( !control$save.weight ) {
        save.weight <- control$save.weight <- TRUE
      }
    }
  }), list( .link = link, .mv = mv,
            .earg = earg, .bred = bred ))),
  linkinv = eval(substitute(function(eta, extra = NULL) {
    mu <- eta2theta(eta, link = .link , earg = .earg )
    colnames(mu) <- NULL
    mu
  }, list( .link = link, .earg = earg ))),
  last = eval(substitute(expression({
    if (exists("CQO.FastAlgorithm", envir = VGAM:::VGAMenv))
      rm("CQO.FastAlgorithm", envir = VGAM:::VGAMenv)
    if (exists("modelno", envir = VGAM:::VGAMenv))
      rm("modelno", envir = VGAM:::VGAMenv)
    dpar <- .dispersion
    if (!dpar) {
      temp87 <- (y-mu)^2 *
                wz / (dtheta.deta(mu, link = .link ,
                                  earg = .earg )^2)  # w cancel
      if ( .mv && ! .onedpar ) {
        dpar <- rep(as.numeric(NA), len = M)
        temp87 <- cbind(temp87)
        nrow.mu <- if (is.matrix(mu)) nrow(mu) else length(mu)
        for (ii in 1:M)
          dpar[ii] <- sum(temp87[, ii]) / (nrow.mu - ncol(x))
        if (is.matrix(y) && length(dimnames(y)[[2]]) == length(dpar))
          names(dpar) <- dimnames(y)[[2]]
      } else {
        dpar <- sum(temp87) / (length(mu) - ncol(x))
      }
    }
    misc$mv <- .mv
    misc$dispersion <- dpar
    misc$default.dispersion <- 1
    misc$estimated.dispersion <- .estimated.dispersion

    misc$bred <- .bred
    misc$expected <- TRUE

    misc$link <- rep( .link , length = M)
    names(misc$link) <- if (M > 1) dn2 else "mu"

    misc$earg <- vector("list", M)
    names(misc$earg) <- names(misc$link)
    for (ii in 1:M)
      misc$earg[[ii]] <- .earg
  }), list( .dispersion = dispersion,
            .estimated.dispersion = estimated.dispersion,
            .onedpar = onedpar, .mv = mv, .bred = bred,
            .link = link, .earg = earg ))),
  linkfun = eval(substitute(function(mu, extra = NULL) {
    theta2eta(mu, .link , earg = .earg )
  }, list( .link = link, .earg = earg ))),
  loglikelihood = eval(substitute(
    function(mu, y, w, residuals = FALSE, eta, extra = NULL) {
    if (residuals) {
      w * (y / mu - (1-y) / (1-mu))
    } else {
      ycounts <- if (is.numeric(extra$orig.w)) y * w / extra$orig.w else
                 y * w  # Convert proportions to counts
      nvec <- if (is.numeric(extra$orig.w)) round(w / extra$orig.w) else
              round(w)
      smallno <- 1.0e6 * .Machine$double.eps
      smallno <- sqrt(.Machine$double.eps)
      if (max(abs(ycounts - round(ycounts))) > smallno)
        warning("converting 'ycounts' to integer in @loglikelihood")
      ycounts <- round(ycounts)
      if ( .mv ) {
        sum((ycounts * log(mu) + (1 - ycounts) * log1p(-mu)) * w)
      } else {
        sum((if (is.numeric(extra$orig.w)) extra$orig.w else 1) *
            dbinom(x = ycounts, size = nvec, prob = mu, log = TRUE))
      }
    }
  }, list( .mv = mv ))),
  vfamily = c("binomialff", "vcategorical"),
  deriv = eval(substitute(expression({
    yBRED <- if ( .bred ) {
      Hvector <- hatvaluesbasic(X_vlm = X_vlm_save,
                                diagWm = c(t(w * mu)))  # Handles M>1
      varY <- mu * (1 - mu) / w  # Is a matrix if M>1. Seems the most correct.
      d1.ADJ <-   dtheta.deta(mu, .link , earg = .earg )
      d2.ADJ <- d2theta.deta2(mu, .link , earg = .earg )
      yBRED <- y + matrix(Hvector, n, M, byrow = TRUE) *
                   varY * d2.ADJ / (2 * d1.ADJ^2)
      yBRED
    } else {
      y
    }

    answer <- if ( .link == "logit") {
      c(w) * (yBRED - mu)
    } else if ( .link == "cloglog") {
      mu.use <- mu
      smallno <- 100 * .Machine$double.eps
      mu.use[mu.use <       smallno] <-       smallno
      mu.use[mu.use > 1.0 - smallno] <- 1.0 - smallno
      -c(w) * (yBRED - mu) * log1p(-mu.use) / mu.use
    } else {
      c(w) * dtheta.deta(mu, link = .link , earg = .earg ) *
             (yBRED / mu - 1.0) / (1.0 - mu)
    }
    answer
  }), list( .link = link, .earg = earg, .bred = bred ))),
  weight = eval(substitute(expression({
    tmp100 <- mu * (1.0 - mu)

    tmp200 <- if ( .link == "logit") {
      cbind(c(w) * tmp100)
    } else if ( .link == "cloglog") {
      cbind(c(w) * (1.0 - mu.use) * (log1p(-mu.use))^2 / mu.use)
    } else {
      cbind(c(w) * dtheta.deta(mu, link = .link , earg = .earg )^2 / tmp100)
    }
    for (ii in 1:M) {
      index500 <- !is.finite(tmp200[, ii]) |
                  (abs(tmp200[, ii]) < .Machine$double.eps)
      if (any(index500)) {  # Diagonal 0's are bad
        tmp200[index500, ii] <- .Machine$double.eps
      }
    }
    tmp200
  }), list( .link = link, .earg = earg ))))

  if (!mv)
    ans@deviance <- function(mu, y, w, residuals = FALSE, eta,
                             extra = NULL) {
      Deviance.categorical.data.vgam(mu = cbind(mu, 1-mu),
                                     y = cbind(y, 1-y),
                                     w = w, residuals = residuals,
                                     eta = eta, extra = extra)
    }

  ans
}
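# Editor's note: a minimal sketch of binomialff() with the two-column
# (successes, failures) response form (assumes VGAM is attached; 'bdata'
# and the coefficient values are made up).
if (FALSE) {
  set.seed(1)
  bdata <- data.frame(x2 = sort(runif(100)))
  bdata <- transform(bdata,
                     succ = rbinom(100, size = 10,
                                   prob = logit(-1 + 2 * x2,
                                                inverse = TRUE)))
  bdata <- transform(bdata, fail = 10 - succ)
  fit <- vglm(cbind(succ, fail) ~ x2, binomialff, data = bdata)
  coef(fit, matrix = TRUE)  # Should be near (-1, 2)
}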
gammaff <- function(link = "nreciprocal", dispersion = 0) {
  estimated.dispersion <- dispersion == 0

  link <- as.list(substitute(link))
  earg <- link2list(link)
  link <- attr(earg, "function.name")

  new("vglmff",
  blurb = c("Gamma distribution\n\n",
            "Link: ", namesof("mu", link, earg = earg), "\n",
            "Variance: mu^2 / k"),
  deviance = function(mu, y, w, residuals = FALSE, eta, extra = NULL) {
    devi <- -2 * w * (log(ifelse(y == 0, 1, y/mu)) - (y - mu)/mu)
    if (residuals) {
      sign(y - mu) * sqrt(abs(devi) * w)
    } else
      sum(w * devi)
  },
  infos = eval(substitute(function(...)
  {
    list(Musual = 1,
         dispersion = .dispersion )
  }, list( .dispersion = dispersion ))),
  initialize = eval(substitute(expression({
    temp5 <- w.y.check(w = w, y = y,
                       Is.nonnegative.y = TRUE,
                       out.wy = TRUE, colsyperw = 1, maximize = TRUE)
    w <- temp5$w
    y <- temp5$y

    mustart <- y + 0.167 * (y == 0)

    M <- if (is.matrix(y)) ncol(y) else 1
    dn2 <- if (is.matrix(y)) dimnames(y)[[2]] else NULL
    dn2 <- if (length(dn2)) {
      paste("E[", dn2, "]", sep = "")
    } else {
      paste("mu", 1:M, sep = "")
    }
    predictors.names <- namesof(if (M > 1) dn2 else "mu",
                                .link , earg = .earg , short = TRUE)
    if (!length(etastart))
      etastart <- theta2eta(mustart, link = .link , earg = .earg )
  }), list( .link = link, .earg = earg ))),
  linkinv = eval(substitute(function(eta, extra = NULL) {
    eta2theta(eta, link = .link , earg = .earg )
  }, list( .link = link, .earg = earg ))),
  last = eval(substitute(expression({
    dpar <- .dispersion
    if (!dpar) {
      if (M == 1) {
        temp <- w * dmu.deta^2
        dpar <- sum(w * (y-mu)^2 * wz / temp) / (length(mu) - ncol(x))
      } else {
        dpar <- rep(0, len = M)
        for (spp in 1:M) {
          temp <- w * dmu.deta[, spp]^2
          dpar[spp] <- sum(w * (y[, spp]-mu[, spp])^2 * wz[, spp]/temp) / (
                       length(mu[, spp]) - ncol(x))
        }
      }
    }
    misc$dispersion <- dpar
    misc$default.dispersion <- 0
    misc$estimated.dispersion <- .estimated.dispersion

    misc$link <- rep( .link , length = M)
    names(misc$link) <- if (M > 1) paste("mu", 1:M, sep = "") else "mu"

    misc$earg <- vector("list", M)
    names(misc$earg) <- names(misc$link)
    for (ii in 1:M)
      misc$earg[[ii]] <- .earg

    misc$expected <- TRUE
    misc$multipleResponses <- TRUE
  }), list( .dispersion = dispersion, .earg = earg,
            .estimated.dispersion = estimated.dispersion,
            .link = link ))),
  linkfun = eval(substitute(function(mu, extra = NULL) {
    theta2eta(mu, link = .link , earg = .earg )
  }, list( .link = link, .earg = earg ))),
  vfamily = "gammaff",
  deriv = eval(substitute(expression({
    Musual <- 1
    ncoly <- ncol(as.matrix(y))

    dl.dmu <- (y-mu) / mu^2
    dmu.deta <- dtheta.deta(theta = mu, link = .link , earg = .earg )
    c(w) * dl.dmu * dmu.deta
  }), list( .link = link, .earg = earg ))),
  weight = eval(substitute(expression({
    d2l.dmu2 <- 1 / mu^2
    wz <- dmu.deta^2 * d2l.dmu2
    w.wz.merge(w = w, wz = wz, n = n, M = M, ndepy = ncoly)
  }), list( .link = link, .earg = earg ))))
}


inverse.gaussianff <- function(link = "natural.ig", dispersion = 0) {
  estimated.dispersion <- dispersion == 0
  warning("@deviance() not finished")
  warning("needs checking, but I'm sure it works")

  link <- as.list(substitute(link))
  earg <- link2list(link)
  link <- attr(earg, "function.name")

  new("vglmff",
  blurb = c("Inverse Gaussian distribution\n\n",
            "Link: ", namesof("mu", link, earg = earg), "\n",
            "Variance: mu^3 / k"),
  deviance = function(mu, y, w, residuals = FALSE, eta, extra = NULL) {
    pow <- 3  # Use Quasi()$deviance with pow==3
    devy  <- y^(2-pow) / (1-pow) - y^(2-pow) / (2-pow)
    devmu <- y * mu^(1-pow) / (1-pow) - mu^(2-pow) / (2-pow)
    devi <- 2 * (devy - devmu)
    if (residuals) {
      sign(y - mu) * sqrt(abs(devi) * w)
    } else
      sum(w * devi)
  },
  infos = eval(substitute(function(...)
  {
    list(Musual = 1,
         dispersion = .dispersion )
  }, list( .earg = earg , .dispersion = dispersion ))),
  initialize = eval(substitute(expression({
    temp5 <- w.y.check(w = w, y = y,
                       Is.positive.y = TRUE,
                       out.wy = TRUE, colsyperw = 1, maximize = TRUE)
    w <- temp5$w
    y <- temp5$y

    mu <- y + 0.167 * (y == 0)

    M <- if (is.matrix(y)) ncol(y) else 1
    dn2 <- if (is.matrix(y)) dimnames(y)[[2]] else NULL
    dn2 <- if (length(dn2)) {
      paste("E[", dn2, "]", sep = "")
    } else {
      paste("mu", 1:M, sep = "")
    }
    predictors.names <- namesof(if (M > 1) dn2 else "mu",
                                .link , .earg , short = TRUE)
    if (!length(etastart))
      etastart <- theta2eta(mu, link = .link , .earg )
  }), list( .link = link, .earg = earg ))),
  linkinv = eval(substitute(function(eta, extra = NULL) {
    eta2theta(eta, link = .link , earg = .earg )
  }, list( .link = link, .earg = earg ))),
  last = eval(substitute(expression({
    dpar <- .dispersion
    if (!dpar) {
      temp <- w * dmu.deta^2
      dpar <- sum( w * (y-mu)^2 * wz / temp ) / (length(mu) - ncol(x))
    }
    misc$dispersion <- dpar
    misc$default.dispersion <- 0
    misc$estimated.dispersion <- .estimated.dispersion

    misc$link <- rep( .link , length = M)
    names(misc$link) <- if (M > 1) paste("mu", 1:M, sep = "") else "mu"

    misc$earg <- vector("list", M)
    names(misc$earg) <- names(misc$link)
    for (ii in 1:M)
      misc$earg[[ii]] <- .earg

    misc$expected <- TRUE
    misc$multipleResponses <- TRUE
  }), list( .dispersion = dispersion,
            .estimated.dispersion = estimated.dispersion,
            .link = link, .earg = earg ))),
  linkfun = eval(substitute(function(mu, extra = NULL) {
    theta2eta(mu, link = .link, earg = .earg )
  }, list( .link = link, .earg = earg ))),
  vfamily = "inverse.gaussianff",
  deriv = eval(substitute(expression({
    Musual <- 1
    ncoly <- ncol(as.matrix(y))

    dl.dmu <- (y - mu) / mu^3
    dmu.deta <- dtheta.deta(theta = mu, link = .link , earg = .earg )
    c(w) * dl.dmu * dmu.deta
  }), list( .link = link, .earg = earg ))),
  weight = eval(substitute(expression({
    d2l.dmu2 <- 1 / mu^3
    wz <- dmu.deta^2 * d2l.dmu2
    w.wz.merge(w = w, wz = wz, n = n, M = M, ndepy = ncoly)
  }), list( .link = link, .earg = earg ))))
}


dinv.gaussian <- function(x, mu, lambda, log = FALSE) {
  if (!is.logical(log.arg <- log) || length(log) != 1)
    stop("bad input for argument 'log'")
  rm(log)

  LLL <- max(length(x), length(mu), length(lambda))
  x      <- rep(x,      len = LLL)
  mu     <- rep(mu,     len = LLL)
  lambda <- rep(lambda, len = LLL)

  logdensity <- rep(log(0), len = LLL)
  xok <- (x > 0)
  logdensity[xok] <- 0.5 * log(lambda[xok] / (2 * pi * x[xok]^3)) -
                     lambda[xok] * (x[xok]-mu[xok])^2 / (
                     2 * mu[xok]^2 * x[xok])
  logdensity[mu     <= 0] <- NaN
  logdensity[lambda <= 0] <- NaN
  if (log.arg) logdensity else exp(logdensity)
}


pinv.gaussian <- function(q, mu, lambda) {
  if (any(mu     <= 0)) stop("mu must be positive")
  if (any(lambda <= 0)) stop("lambda must be positive")

  LLL <- max(length(q), length(mu), length(lambda))
  q      <- rep(q,      len = LLL)
  mu     <- rep(mu,     len = LLL)
  lambda <- rep(lambda, len = LLL)
  ans <- q

  ans[q <= 0] <- 0
  bb <- q > 0
  ans[bb] <- pnorm( sqrt(lambda[bb]/q[bb]) * (q[bb]/mu[bb] - 1)) +
             exp(2*lambda[bb]/mu[bb]) *
             pnorm(-sqrt(lambda[bb]/q[bb]) * (q[bb]/mu[bb] + 1))
  ans
}
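# Editor's note: a quick consistency check between dinv.gaussian() and
# pinv.gaussian() above (a sketch; assumes VGAM is attached).
if (FALSE) {
  integrate(dinv.gaussian, lower = 0, upper = 2,
            mu = 1, lambda = 1)$value
  pinv.gaussian(2, mu = 1, lambda = 1)  # Should agree to several digits
}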
rinv.gaussian <- function(n, mu, lambda) {
  use.n <- if ((length.n <- length(n)) > 1) length.n else
           if (!is.Numeric(n, integer.valued = TRUE,
                           allowable.length = 1, positive = TRUE))
             stop("bad input for argument 'n'") else n

  mu     <- rep(mu,     len = use.n)
  lambda <- rep(lambda, len = use.n)

  u <- runif(use.n)
  Z <- rnorm(use.n)^2  # rchisq(use.n, df = 1)
  phi <- lambda / mu
  y1 <- 1 - 0.5 * (sqrt(Z^2 + 4*phi*Z) - Z) / phi
  ans <- mu * ifelse((1+y1)*u > 1, 1/y1, y1)
  ans[mu     <= 0] <- NaN
  ans[lambda <= 0] <- NaN
  ans
}


inv.gaussianff <- function(lmu = "loge", llambda = "loge",
                           imethod = 1, ilambda = NULL,
                           parallel = FALSE, apply.parint = FALSE,
                           shrinkage.init = 0.99, zero = NULL) {
  lmu <- as.list(substitute(lmu))
  emu <- link2list(lmu)
  lmu <- attr(emu, "function.name")

  llambda <- as.list(substitute(llambda))
  elambda <- link2list(llambda)
  llambda <- attr(elambda, "function.name")

  if (!is.Numeric(imethod, allowable.length = 1,
                  integer.valued = TRUE, positive = TRUE) ||
      imethod > 3)
    stop("argument 'imethod' must be 1 or 2 or 3")

  if (!is.Numeric(shrinkage.init, allowable.length = 1) ||
      shrinkage.init < 0 || shrinkage.init > 1)
    stop("bad input for argument 'shrinkage.init'")

  if (length(zero) &&
      !is.Numeric(zero, integer.valued = TRUE, positive = TRUE))
    stop("bad input for argument 'zero'")

  if (is.logical(parallel) && parallel && length(zero))
    stop("set 'zero = NULL' if 'parallel = TRUE'")

  new("vglmff",
  blurb = c("Inverse Gaussian distribution\n\n",
            "f(y) = sqrt(lambda/(2*pi*y^3)) * ",
            "exp(-lambda * (y - mu)^2 / (2 * mu^2 * y)); y, mu & lambda > 0",
            "Link: ", namesof("mu", lmu, earg = emu), ", ",
                      namesof("lambda", llambda, earg = elambda), "\n",
            "Mean: ", "mu\n",
            "Variance: mu^3 / lambda"),
  constraints = eval(substitute(expression({
    constraints <- cm.vgam(matrix(1, M, 1), x, .parallel , constraints,
                           apply.int = .apply.parint )
    constraints <- cm.zero.vgam(constraints, x, .zero , M)
  }), list( .zero = zero, .parallel = parallel,
            .apply.parint = apply.parint ))),
  infos = eval(substitute(function(...) {
    list(Musual = 2,
         zero = .zero )
  }, list( .zero = zero ))),
  initialize = eval(substitute(expression({
    temp5 <- w.y.check(w = w, y = y,
                       Is.positive.y = TRUE,
                       ncol.w.max = Inf, ncol.y.max = Inf,
                       out.wy = TRUE, colsyperw = 1, maximize = TRUE)
    w <- temp5$w
    y <- temp5$y

    ncoly <- ncol(y)
    Musual <- 2
    extra$ncoly <- ncoly
    extra$Musual <- Musual
    M <- Musual * ncoly

    mynames1 <- paste("mu",     if (ncoly > 1) 1:ncoly else "", sep = "")
    mynames2 <- paste("lambda", if (ncoly > 1) 1:ncoly else "", sep = "")
    predictors.names <-
      c(namesof(mynames1, .lmu ,     earg = .emu ,     short = TRUE),
        namesof(mynames2, .llambda , earg = .elambda , short = TRUE))[
        interleave.VGAM(M, M = Musual)]

    if (!length(etastart)) {
      init.mu <- if ( .imethod == 2) {
        mediany <- apply(y, 2, median)
        matrix(1.1 * mediany + 1/8, n, ncoly, byrow = TRUE)
      } else if ( .imethod == 3) {
        use.this <- colSums(y * w) / colSums(w)  # weighted.mean(y, w)
        (1 - .sinit) * y + .sinit * use.this
      } else {
        matrix(colSums(y * w) / colSums(w) + 1/8,
               n, ncoly, byrow = TRUE)
      }

      variancey <- apply(y, 2, var)
      init.la <- matrix(if (length( .ilambda )) .ilambda else
                        (init.mu^3) / (0.10 + variancey),
                        n, ncoly, byrow = TRUE)

      etastart <-
        cbind(theta2eta(init.mu, link = .lmu ,     earg = .emu ),
              theta2eta(init.la, link = .llambda , earg = .elambda ))[,
              interleave.VGAM(M, M = Musual)]
    }
  }), list( .lmu = lmu, .llambda = llambda,
            .emu = emu, .elambda = elambda,
            .sinit = shrinkage.init,
            .imethod = imethod, .ilambda = ilambda ))),
  linkinv = eval(substitute(function(eta, extra = NULL) {
    eta2theta(eta[, c(TRUE, FALSE)], link = .lmu , earg = .emu )
  }, list( .lmu = lmu, .emu = emu, .elambda = elambda ))),
  last = eval(substitute(expression({
    Musual <- extra$Musual
    misc$link <-
      c(rep( .lmu ,     length = ncoly),
        rep( .llambda , length = ncoly))[interleave.VGAM(M, M = Musual)]
    temp.names <- c(mynames1, mynames2)[interleave.VGAM(M, M = Musual)]
    names(misc$link) <- temp.names

    misc$earg <- vector("list", M)
    names(misc$earg) <- temp.names
    for (ii in 1:ncoly) {
      misc$earg[[Musual*ii-1]] <- .emu
      misc$earg[[Musual*ii  ]] <- .elambda
    }

    misc$Musual <- Musual
    misc$imethod <- .imethod
    misc$shrinkage.init <- .sinit
    misc$expected <- TRUE
    misc$multipleResponses <- FALSE
    misc$parallel <- .parallel
    misc$apply.parint <- .apply.parint
  }), list( .lmu = lmu, .llambda = llambda,
            .emu = emu, .elambda = elambda,
            .parallel = parallel, .apply.parint = apply.parint,
            .sinit = shrinkage.init, .imethod = imethod ))),
  loglikelihood = eval(substitute(
    function(mu, y, w, residuals = FALSE, eta, extra = NULL) {
    mymu   <- eta2theta(eta[, c(TRUE, FALSE)], link = .lmu ,
                        earg = .emu )
    lambda <- eta2theta(eta[, c(FALSE, TRUE)], link = .llambda ,
                        earg = .elambda )
    if (residuals) stop("loglikelihood residuals ",
                        "not implemented yet") else {
      sum(c(w) * dinv.gaussian(x = y, mu = mymu,
                               lambda = lambda, log = TRUE))
    }
  }, list( .lmu = lmu, .llambda = llambda,
           .emu = emu, .elambda = elambda ))),
  vfamily = "inv.gaussianff",
  deriv = eval(substitute(expression({
    Musual <- 2
    mymu   <- eta2theta(eta[, c(TRUE, FALSE)], link = .lmu ,
                        earg = .emu )
    lambda <- eta2theta(eta[, c(FALSE, TRUE)], link = .llambda ,
                        earg = .elambda )

    dmu.deta     <- dtheta.deta(theta = mymu ,  link = .lmu ,
                                earg = .emu )
    dlambda.deta <- dtheta.deta(theta = lambda, link = .llambda ,
                                earg = .elambda )

    dl.dmu <- lambda * (y - mymu) / mymu^3
    dl.dlambda <- 0.5 / lambda - (y - mymu)^2 / (2 * mymu^2 * y)

    myderiv <- c(w) * cbind(dl.dmu     * dmu.deta,
                            dl.dlambda * dlambda.deta)
    myderiv[, interleave.VGAM(M, M = Musual)]
  }), list( .lmu = lmu, .llambda = llambda,
            .emu = emu, .elambda = elambda ))),
  weight = eval(substitute(expression({
    ned2l.dmu2 <- lambda / mymu^3
    ned2l.dlambda2 <- 0.5 / (lambda^2)

    wz <- cbind(dmu.deta^2     * ned2l.dmu2,
                dlambda.deta^2 * ned2l.dlambda2)[,
                interleave.VGAM(M, M = Musual)]
    w.wz.merge(w = w, wz = wz, n = n, M = M, ndepy = M / Musual)
  }), list( .lmu = lmu, .llambda = llambda,
            .emu = emu, .elambda = elambda ))))
}
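# Editor's note: simulate with rinv.gaussian() and recover the parameters
# with inv.gaussianff() (a sketch; assumes VGAM is attached; 'igdata' and
# the true values mu = 2, lambda = 3 are made up).
if (FALSE) {
  set.seed(2)
  igdata <- data.frame(y = rinv.gaussian(1000, mu = 2, lambda = 3))
  fit <- vglm(y ~ 1, inv.gaussianff, data = igdata, trace = TRUE)
  Coef(fit)  # Back-transformed estimates of mu and lambda
}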
poissonff <- function(link = "loge",
                      dispersion = 1, onedpar = FALSE, imu = NULL,
                      imethod = 1,
                      parallel = FALSE, zero = NULL, bred = FALSE,
                      earg.link = FALSE) {
  if (!is.logical(bred) || length(bred) > 1)
    stop("argument 'bred' must be a single logical")

  estimated.dispersion <- (dispersion == 0)

  if (earg.link) {
    earg <- link
  } else {
    link <- as.list(substitute(link))
    earg <- link2list(link)
  }
  link <- attr(earg, "function.name")

  if (!is.Numeric(imethod, allowable.length = 1,
                  integer.valued = TRUE, positive = TRUE) ||
      imethod > 3)
    stop("argument 'imethod' must be 1 or 2 or 3")

  if (length(imu) && !is.Numeric(imu, positive = TRUE))
    stop("bad input for argument 'imu'")

  new("vglmff",
  blurb = c("Poisson distribution\n\n",
            "Link: ", namesof("mu", link, earg = earg), "\n",
            "Variance: mu"),
  constraints = eval(substitute(expression({
    constraints <- cm.vgam(matrix(1, M, 1), x, .parallel , constraints)
    constraints <- cm.zero.vgam(constraints, x, .zero , M)
  }), list( .parallel = parallel, .zero = zero ))),
  deviance = function(mu, y, w, residuals = FALSE, eta, extra = NULL) {
    nz <- (y > 0)
    devi <- -(y - mu)
    devi[nz] <- devi[nz] + y[nz] * log(y[nz]/mu[nz])
    if (residuals) sign(y - mu) * sqrt(2 * abs(devi) * c(w)) else
      2 * sum(c(w) * devi)
  },
  infos = eval(substitute(function(...)
  {
    list(Musual = 1,
         bred = .bred ,
         zero = .zero )
  }, list( .zero = zero, .bred = bred ))),
  initialize = eval(substitute(expression({
    temp5 <- w.y.check(w = w, y = y,
                       Is.nonnegative.y = TRUE,
                       ncol.w.max = Inf, ncol.y.max = Inf,
                       out.wy = TRUE, colsyperw = 1, maximize = TRUE)
    w <- temp5$w
    y <- temp5$y

    M <- ncoly <- ncol(y)

    assign("CQO.FastAlgorithm", ( .link == "loge"),
           envir = VGAM:::VGAMenv)

    dn2 <- if (is.matrix(y)) dimnames(y)[[2]] else NULL
    dn2 <- if (length(dn2)) {
      paste("E[", dn2, "]", sep = "")
    } else {
      paste("mu", 1:M, sep = "")
    }
    predictors.names <- namesof(if (M > 1) dn2 else "mu",
                                .link , earg = .earg , short = TRUE)

    if ( .bred ) {
      if ( !control$save.weight ) {
        save.weight <- control$save.weight <- TRUE
      }
    }

    if (!length(etastart)) {
      mu.init <- pmax(y, 1/8)
      for (iii in 1:ncol(y)) {
        if ( .imethod == 2) {
          mu.init[, iii] <- weighted.mean(y[, iii], w[, iii]) + 1/8
        } else if ( .imethod == 3) {
          mu.init[, iii] <- median(y[, iii]) + 1/8
        }
      }
      if (length( .imu ))
        mu.init <- matrix( .imu , n, ncoly, byrow = TRUE)
      etastart <- theta2eta(mu.init, link = .link , earg = .earg )
    }
  }), list( .link = link,
            .estimated.dispersion = estimated.dispersion,
            .bred = bred,
            .imethod = imethod, .imu = imu, .earg = earg ))),
  linkinv = eval(substitute(function(eta, extra = NULL) {
    mu <- eta2theta(eta, link = .link , earg = .earg )
    mu
  }, list( .link = link, .earg = earg ))),
  last = eval(substitute(expression({
    if (exists("CQO.FastAlgorithm", envir = VGAM:::VGAMenv))
      rm("CQO.FastAlgorithm", envir = VGAM:::VGAMenv)
    dpar <- .dispersion
    if (!dpar) {
      temp87 <- (y-mu)^2 *
                wz / (dtheta.deta(mu, link = .link ,
                                  earg = .earg )^2)  # w cancel
      if (M > 1 && ! .onedpar ) {
        dpar <- rep(as.numeric(NA), length = M)
        temp87 <- cbind(temp87)
        nrow.mu <- if (is.matrix(mu)) nrow(mu) else length(mu)
        for (ii in 1:M)
          dpar[ii] <- sum(temp87[, ii]) / (nrow.mu - ncol(x))
        if (is.matrix(y) && length(dimnames(y)[[2]]) == length(dpar))
          names(dpar) <- dimnames(y)[[2]]
      } else {
        dpar <- sum(temp87) / (length(mu) - ncol(x))
      }
    }
    misc$dispersion <- dpar
    misc$default.dispersion <- 1
    misc$estimated.dispersion <- .estimated.dispersion

    misc$expected <- TRUE
    misc$imethod <- .imethod
    misc$multipleResponses <- TRUE
    misc$bred <- .bred

    misc$link <- rep( .link , length = M)
    names(misc$link) <- if (M > 1) dn2 else "mu"

    misc$earg <- vector("list", M)
    names(misc$earg) <- names(misc$link)
    for (ii in 1:M)
      misc$earg[[ii]] <- .earg
  }), list( .dispersion = dispersion, .imethod = imethod,
            .estimated.dispersion = estimated.dispersion,
            .bred = bred,
            .onedpar = onedpar, .link = link, .earg = earg ))),
  linkfun = eval(substitute(function(mu, extra = NULL) {
    theta2eta(mu, link = .link , earg = .earg )
  }, list( .link = link, .earg = earg ))),
  loglikelihood = function(mu, y, w, residuals = FALSE, eta,
                           extra = NULL) {
    if (residuals) w * (y / mu - 1) else {
      sum(w * dpois(x = y, lambda = mu, log = TRUE))
    }
  },
  vfamily = "poissonff",
  deriv = eval(substitute(expression({
    yBRED <- if ( .bred ) {
      Hvector <- hatvaluesbasic(X_vlm = X_vlm_save,
                                diagWm = c(t(w * mu)))  # Handles M>1
      varY <- mu  # Is a matrix if M>1.
      d1.BRED <-   dtheta.deta(mu, .link , earg = .earg )
      d2.BRED <- d2theta.deta2(mu, .link , earg = .earg )
      y + matrix(Hvector, n, M, byrow = TRUE) *
          varY * d2.BRED / (2 * d1.BRED^2)
    } else {
      y
    }

    answer <- if ( .link == "loge" && (any(mu < .Machine$double.eps))) {
      c(w) * (yBRED - mu)
    } else {
      lambda <- mu
      dl.dlambda <- (yBRED - lambda) / lambda
      dlambda.deta <- dtheta.deta(theta = lambda, link = .link ,
                                  earg = .earg )
      c(w) * dl.dlambda * dlambda.deta
    }
    answer
  }), list( .link = link, .earg = earg, .bred = bred ))),
  weight = eval(substitute(expression({
    if ( .link == "loge" && (any(mu < .Machine$double.eps))) {
      tmp600 <- mu
      tmp600[tmp600 < .Machine$double.eps] <- .Machine$double.eps
      c(w) * tmp600
    } else {
      ned2l.dlambda2 <- 1 / lambda
      ned2lambda.deta2 <- d2theta.deta2(theta = lambda, link = .link ,
                                        earg = .earg )
      c(w) * dlambda.deta^2 * ned2l.dlambda2
    }
  }), list( .link = link, .earg = earg ))))
}


quasibinomialff <- function(
                   link = "logit",
                   mv = FALSE, onedpar = !mv,
                   parallel = FALSE, zero = NULL) {
  link <- as.list(substitute(link))
  earg <- link2list(link)
  link <- attr(earg, "function.name")

  dispersion <- 0  # Estimated; this is the only difference with binomialff()
  ans <- binomialff(link = earg, earg.link = TRUE,
                    dispersion = dispersion, mv = mv, onedpar = onedpar,
                    parallel = parallel, zero = zero)
  ans@vfamily <- "quasibinomialff"
  ans
}


quasipoissonff <- function(link = "loge", onedpar = FALSE,
                           parallel = FALSE, zero = NULL) {
  link <- as.list(substitute(link))
  earg <- link2list(link)
  link <- attr(earg, "function.name")

  dispersion <- 0  # Estimated; this is the only difference with poissonff()
  ans <- poissonff(link = earg, earg.link = TRUE,
                   dispersion = dispersion, onedpar = onedpar,
                   parallel = parallel, zero = zero)
  ans@vfamily <- "quasipoissonff"
  ans
}
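# Editor's note: poissonff() and quasipoissonff() fitted to the same
# simulated counts (a sketch; assumes VGAM is attached; 'pdata' made up).
if (FALSE) {
  set.seed(3)
  pdata <- data.frame(x2 = runif(200))
  pdata <- transform(pdata, y = rpois(200, lambda = exp(1 + x2)))
  fit1 <- vglm(y ~ x2, poissonff, data = pdata)       # Dispersion fixed at 1
  fit2 <- vglm(y ~ x2, quasipoissonff, data = pdata)  # Dispersion estimated
  coef(fit1, matrix = TRUE)
  summary(fit2)
}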
dexppoisson <- function(lmean = "loge", ldispersion = "logit",
                        idispersion = 0.8, zero = NULL) {
  if (!is.Numeric(idispersion, positive = TRUE))
    stop("bad input for 'idispersion'")

  lmean <- as.list(substitute(lmean))
  emean <- link2list(lmean)
  lmean <- attr(emean, "function.name")

  ldisp <- as.list(substitute(ldispersion))
  edisp <- link2list(ldisp)
  ldisp <- attr(edisp, "function.name")

  idisp <- idispersion

  new("vglmff",
  blurb = c("Double exponential Poisson distribution\n\n",
            "Link: ",
            namesof("mean", lmean, earg = emean), ", ",
            namesof("dispersion", ldisp, earg = edisp), "\n",
            "Mean: ", "mean\n",
            "Variance: mean / dispersion"),
  constraints = eval(substitute(expression({
    constraints = cm.zero.vgam(constraints, x, .zero , M)
  }), list( .zero = zero ))),
  infos = eval(substitute(function(...)
  {
    list(Musual = 2,
         lmean = .lmean ,
         zero = .zero )
  }, list( .lmean = lmean ))),
  initialize = eval(substitute(expression({
    w.y.check(w = w, y = y,
              Is.nonnegative.y = TRUE,
              ncol.w.max = 1, ncol.y.max = 1)

    M <- if (is.matrix(y)) ncol(y) else 1
    dn2 <- if (is.matrix(y)) dimnames(y)[[2]] else NULL
    dn2 <- if (length(dn2)) {
      paste("E[", dn2, "]", sep = "")
    } else {
      "mu"
    }
    predictors.names <-
      c(namesof(dn2, link = .lmean, earg = .emean, short = TRUE),
        namesof("dispersion", link = .ldisp, earg = .edisp,
                short = TRUE))

    init.mu <- pmax(y, 1/8)
    tmp2 <- rep( .idisp , length.out = n)
    if (!length(etastart))
      etastart <- cbind(theta2eta(init.mu, link = .lmean , earg = .emean ),
                        theta2eta(tmp2,    link = .ldisp , earg = .edisp ))
  }), list( .lmean = lmean, .emean = emean,
            .ldisp = ldisp, .edisp = edisp,
            .idisp = idisp ))),
  linkinv = eval(substitute(function(eta, extra = NULL) {
    eta2theta(eta[, 1], link = .lmean, earg = .emean)
  }, list( .lmean = lmean, .emean = emean,
           .ldisp = ldisp, .edisp = edisp ))),
  last = eval(substitute(expression({
    misc$link <- c(mean = .lmean , dispersion = .ldisp )
    misc$earg <- list(mean = .emean , dispersion = .edisp )
    misc$expected <- TRUE
  }), list( .lmean = lmean, .emean = emean,
            .ldisp = ldisp, .edisp = edisp ))),
  loglikelihood = eval(substitute(
    function(mu, y, w, residuals = FALSE, eta, extra = NULL) {
    lambda <- eta2theta(eta[, 1], link = .lmean, earg = .emean )
    Disper <- eta2theta(eta[, 2], link = .ldisp, earg = .edisp )
    if (residuals) stop("loglikelihood residuals ",
                        "not implemented yet") else {
      sum(w * (0.5*log(Disper) + Disper*(y-lambda) +
               Disper*y*log(lambda)))
    }
  }, list( .lmean = lmean, .emean = emean,
           .ldisp = ldisp, .edisp = edisp ))),
  vfamily = "dexppoisson",
  deriv = eval(substitute(expression({
    lambda <- eta2theta(eta[, 1], link = .lmean, earg = .emean)
    Disper <- eta2theta(eta[, 2], link = .ldisp, earg = .edisp)

    dl.dlambda <- Disper * (y / lambda - 1)
    dl.dDisper <- y * log(lambda) + y - lambda + 0.5 / Disper

    dlambda.deta <- dtheta.deta(theta = lambda, link = .lmean,
                                earg = .emean)
    dDisper.deta <- dtheta.deta(theta = Disper, link = .ldisp,
                                earg = .edisp)

    c(w) * cbind(dl.dlambda * dlambda.deta,
                 dl.dDisper * dDisper.deta)
  }), list( .lmean = lmean, .emean = emean,
            .ldisp = ldisp, .edisp = edisp ))),
  weight = eval(substitute(expression({
    wz <- matrix(as.numeric(NA), nrow = n, ncol = 2)  # diagonal
    usethis.lambda <- pmax(lambda, .Machine$double.eps / 10000)
    wz[, iam(1, 1, M)] <- (Disper / usethis.lambda) * dlambda.deta^2
    wz[, iam(2, 2, M)] <- (0.5 / Disper^2) * dDisper.deta^2
    c(w) * wz
  }), list( .lmean = lmean, .emean = emean,
            .ldisp = ldisp, .edisp = edisp ))))
}


dexpbinomial <- function(lmean = "logit", ldispersion = "logit",
                         idispersion = 0.25, zero = 2) {
  lmean <- as.list(substitute(lmean))
  emean <- link2list(lmean)
  lmean <- attr(emean, "function.name")

  ldisp <- as.list(substitute(ldispersion))
  edisp <- link2list(ldisp)
  ldisp <- attr(edisp, "function.name")

  idisp <- idispersion
  if (!is.Numeric(idispersion, positive = TRUE))
    stop("bad input for 'idispersion'")

  new("vglmff",
  blurb = c("Double Exponential Binomial distribution\n\n",
            "Link: ",
            namesof("mean", lmean, earg = emean), ", ",
            namesof("dispersion", ldisp, earg = edisp), "\n",
            "Mean: ", "mean\n"),
  constraints = eval(substitute(expression({
    constraints <- cm.zero.vgam(constraints, x, .zero , M)
  }), list( .zero = zero ))),
  initialize = eval(substitute(expression({
    if (!all(w == 1))
      extra$orig.w <- w

    if (ncol(cbind(w)) != 1)
      stop("'weights' must be a vector or a one-column matrix")

    NCOL <- function (x)
      if (is.array(x) && length(dim(x)) > 1 ||
          is.data.frame(x)) ncol(x) else as.integer(1)
    if (NCOL(y) == 1) {
      if (is.factor(y))
        y <- (y != levels(y)[1])
      nvec <- rep(1, n)
      y[w == 0] <- 0
      if (!all(y == 0 | y == 1))
        stop("response values 'y' must be 0 or 1")
      init.mu <- mustart <- (0.5 + w * y) / (1 + w)

      no.successes <- y
      if (min(y) < 0)
        stop("Negative data not allowed!")
      if (any(abs(no.successes - round(no.successes)) > 1.0e-8))
        stop("Number of successes must be integer-valued")
    } else if (NCOL(y) == 2) {
      if (min(y) < 0)
        stop("Negative data not allowed!")
      if (any(abs(y - round(y)) > 1.0e-8))
        stop("Count data must be integer-valued")
      y <- round(y)
      nvec <- y[, 1] + y[, 2]
      y <- ifelse(nvec > 0, y[, 1] / nvec, 0)
      w <- w * nvec
      init.mu <- mustart <- (0.5 + nvec * y) / (1 + nvec)
    } else
      stop("for the dexpbinomial family, response 'y' must be a ",
           "vector of 0 and 1's\n",
           "or a factor (first level = fail, ",
           "other levels = success),\n",
           "or a 2-column matrix where col 1 is the no. of ",
           "successes and col 2 is the no. of failures")

    dn2 <- if (is.matrix(y)) dimnames(y)[[2]] else NULL
    dn2 <- if (length(dn2)) {
      paste("E[", dn2, "]", sep = "")
    } else {
      "mu"
    }
    predictors.names <-
      c(namesof(dn2, .lmean, earg = .emean, short = TRUE),
        namesof("dispersion", .ldisp, earg = .edisp, short = TRUE))
    tmp2 <- rep( .idisp , len = n)
    if (!length(etastart))
      etastart <- cbind(theta2eta(init.mu, .lmean, earg = .emean),
                        theta2eta(tmp2,    .ldisp, earg = .edisp))
  }), list( .lmean = lmean, .emean = emean,
            .ldisp = ldisp, .edisp = edisp,
            .idisp = idisp ))),
  linkinv = eval(substitute(function(eta, extra = NULL) {
    eta2theta(eta[, 1], link = .lmean, earg = .emean)
  }, list( .lmean = lmean, .emean = emean,
           .ldisp = ldisp, .edisp = edisp ))),
  last = eval(substitute(expression({
    misc$link <- c("mean" = .lmean, "dispersion" = .ldisp)
    misc$earg <- list( mean = .emean, dispersion = .edisp)
    misc$expected <- TRUE
  }), list( .lmean = lmean, .emean = emean,
            .ldisp = ldisp, .edisp = edisp ))),
  loglikelihood = eval(substitute(
    function(mu, y, w, residuals = FALSE, eta, extra = NULL) {
    prob   <- eta2theta(eta[, 1], link = .lmean, earg = .emean)
    Disper <- eta2theta(eta[, 2], link = .ldisp, earg = .edisp)
    if (residuals) stop("loglikelihood residuals ",
                        "not implemented yet") else {
      temp1 <-     y  * log(ifelse(y > 0,  y, 1))     # y*log(y)
      temp2 <- (1.0-y) * log1p(ifelse(y < 1, -y, 0))  # (1-y)*log(1-y)
      sum(0.5 * log(Disper) +
          w * (y * Disper * log(prob) +
               (1-y) * Disper * log1p(-prob) +
               temp1 * (1-Disper) + temp2 * (1 - Disper)))
    }
  }, list( .lmean = lmean, .emean = emean,
           .ldisp = ldisp, .edisp = edisp ))),
  vfamily = "dexpbinomial",
  deriv = eval(substitute(expression({
    prob   <- eta2theta(eta[, 1], link = .lmean, earg = .emean)
    Disper <- eta2theta(eta[, 2], link = .ldisp, earg = .edisp)

    temp1 <-     y  * log(ifelse(y > 0,  y, 1))     # y*log(y)
    temp2 <- (1.0-y) * log1p(ifelse(y < 1, -y, 0))  # (1-y)*log(1-y)
    temp3 <- prob * (1.0-prob)
    temp3 <- pmax(temp3, .Machine$double.eps * 10000)

    dl.dprob <- w * Disper * (y - prob) / temp3
    dl.dDisper <- 0.5 / Disper + w * (y * log(prob) +
                  (1-y)*log1p(-prob) - temp1 - temp2)

    dprob.deta   <- dtheta.deta(theta = prob,   .lmean, earg = .emean)
    dDisper.deta <- dtheta.deta(theta = Disper, .ldisp, earg = .edisp)

    cbind(dl.dprob   * dprob.deta,
          dl.dDisper * dDisper.deta)
  }), list( .lmean = lmean, .emean = emean,
            .ldisp = ldisp, .edisp = edisp ))),
  weight = eval(substitute(expression({
    wz <- matrix(as.numeric(NA), nrow = n, ncol = 2)  # diagonal
    wz[, iam(1, 1, M)] <- w * (Disper / temp3) * dprob.deta^2
    wz[, iam(2, 2, M)] <- (0.5 / Disper^2) * dDisper.deta^2
    wz
  }), list( .lmean = lmean, .emean = emean,
            .ldisp = ldisp, .edisp = edisp ))))
}
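# Editor's note: dexpbinomial() above appears to follow Efron's (1986,
# JASA) double exponential binomial, where 'dispersion' in (0, 1] tempers
# the binomial log-likelihood and dispersion = 1 recovers the ordinary
# binomial; this attribution is the editor's reading of the code, not a
# statement from the package source.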
mbinomial <- function(mvar = NULL, link = "logit",
                      parallel = TRUE,
                      smallno = .Machine$double.eps^(3/4)) {
  link <- as.list(substitute(link))
  earg <- link2list(link)
  link <- attr(earg, "function.name")

  if (!is.Numeric(smallno, positive = TRUE, allowable.length = 1) ||
      smallno > 1e-4)
    stop("bad input for 'smallno'")
  if (is.logical(parallel) && !parallel)
    stop("'parallel' must be TRUE")

  temp <- terms(mvar)
  mvar <- attr(temp, "term.labels")
  if (length(mvar) != 1)
    stop("cannot obtain the matching variable")
  if (!is.character(mvar) || length(mvar) != 1) {
    stop("bad input for 'mvar'")
  }

  new("vglmff",
  blurb = c("Matched binomial model (intercepts fitted)\n\n",
            "Link: ", namesof("mu[,j]", link, earg = earg)),
  constraints = eval(substitute(expression({
    constraints <- cm.vgam(matrix(1, M, 1), x, .parallel , constraints,
                           apply.int = TRUE)
    constraints[[extra$mvar]] <- diag(M)

    specialCM <- list(a = vector("list", M-1))
    for (ii in 1:(M-1)) {
      specialCM[[1]][[ii]] <-
        (constraints[[extra$mvar]])[, 1+ii, drop = FALSE]
    }
    names(specialCM) = extra$mvar
  }), list( .parallel = parallel ))),
  initialize = eval(substitute(expression({
    if (!all(w == 1))
      extra$orig.w = w

    mvar <- .mvar

    NCOL <- function (x)
      if (is.array(x) && length(dim(x)) > 1 ||
          is.data.frame(x)) ncol(x) else as.integer(1)

    if (NCOL(y) == 1) {
      if (is.factor(y))
        y <- y != levels(y)[1]
      nvec <- rep(1, n)
      if (!all(y >= 0 & y <= 1))
        stop("response values must be in [0, 1]")
      mustart <- (0.5 + w * y) / (1 + w)
      no.successes <- w * y
      if (any(abs(no.successes - round(no.successes)) > 0.001))
        stop("Number of successes must be integer-valued")
    } else if (NCOL(y) == 2) {
      if (any(abs(y - round(y)) > 0.001))
        stop("Count data must be integer-valued")
      nvec <- y[, 1] + y[, 2]
      y <- ifelse(nvec > 0, y[, 1] / nvec, 0)
      w <- w * nvec
      mustart <- (0.5 + nvec * y) / (1 + nvec)
    } else
      stop("Response not of the right form")

    temp1 <- attr(x, "assign")
    if (colnames(x)[1] != "(Intercept)")
      stop("x must have an intercept")
    M <- CCC <- length(temp1[[mvar]]) +
                (colnames(x)[1] == "(Intercept)")
    temp9 <- x[, temp1[[mvar]], drop = FALSE]
    temp9 <- temp9 * matrix(2:CCC, n, CCC-1, byrow = TRUE)
    temp9 <- apply(temp9, 1, max)
    temp9[temp9 == 0] <- 1
    extra$NoMatchedSets <- CCC
    extra$n <- n
    extra$M <- M
    extra$mvar <- mvar
    extra$index9 <- temp9

    predictors.names <- namesof("mu", .link , earg = .earg ,
                                short = TRUE)
    predictors.names <- rep(predictors.names, len = M)
  }), list( .link = link, .earg = earg, .mvar = mvar ))),
  linkinv = eval(substitute(function(eta, extra = NULL) {
    mu <- eta2theta(eta, link = .link , earg = .earg )
    mu[cbind(1:extra$n, extra$index9)]
  }, list( .link = link, .earg = earg ))),
  last = eval(substitute(expression({
    misc$link <- rep( .link , length = M)
    names(misc$link) <- if (M > 1)
      paste("mu(matched set ", 1:M, ")", sep = "") else "mu"

    misc$earg <- vector("list", M)
    names(misc$earg) <- names(misc$link)
    for (ii in 1:M)
      misc$earg[[ii]] <- .earg

    misc$expected <- TRUE
  }), list( .link = link, .earg = earg ))),
  linkfun = eval(substitute(function(mu, extra = NULL) {
    temp <- theta2eta(mu, .link , earg = .earg )
    matrix(temp, extra$n, extra$M)
  }, list( .link = link, .earg = earg ))),
  loglikelihood = function(mu, y, w, residuals = FALSE, eta,
                           extra = NULL) {
    if (residuals) w * (y / mu - (1-y) / (1-mu)) else {
      ycounts <- if (is.numeric(extra$orig.w)) y * w / extra$orig.w else
                 y * w  # Convert proportions to counts
      nvec <- if (is.numeric(extra$orig.w)) round(w / extra$orig.w) else
              round(w)
      smallno <- 1.0e6 * .Machine$double.eps
      if (max(abs(ycounts - round(ycounts))) > smallno)
        warning("converting 'ycounts' to integer in @loglikelihood")
      ycounts <- round(ycounts)

      sum((if (is.numeric(extra$orig.w)) extra$orig.w else 1) *
          dbinom(x = ycounts, size = nvec, prob = mu, log = TRUE))
    }
  },
  vfamily = c("mbinomial", "vcategorical"),
  deriv = eval(substitute(expression({
    answer <- if ( .link == "logit") {
      w * (y - mu)
    } else if ( .link == "cloglog") {
      mu.use <- mu
      smallno <- 100 * .Machine$double.eps
      mu.use[mu.use <     smallno] <-     smallno
      mu.use[mu.use > 1 - smallno] <- 1 - smallno
      -w * (y - mu) * log1p(-mu.use) / mu.use
    } else {
      w * dtheta.deta(mu, link = .link , earg = .earg ) *
          (y/mu - 1)/(1-mu)
    }
    result <- matrix(0, n, M)
    result[cbind(1:n, extra$index9)] = answer
    result
  }), list( .link = link, .earg = earg ))),
  weight = eval(substitute(expression({
    tmp100 <- mu*(1-mu)
    answer <- if ( .link == "logit") {
      cbind(w * tmp100)
    } else if ( .link == "cloglog") {
      cbind(w * (1-mu.use) * (log1p(-mu.use))^2 / mu.use )
    } else {
      cbind(w * dtheta.deta(mu, link = .link , earg = .earg )^2 / tmp100)
    }

    result <- matrix( .smallno, n, M)
    result[cbind(1:n, extra$index9)] <- answer
    result
  }), list( .link = link, .earg = earg, .smallno = smallno ))))
}


mypool <- function(x, index) {
  answer <- x
  uindex <- unique(index)
  for (ii in uindex) {
    ind0 <- (index == ii)
    answer[ind0] <- sum(x[ind0])
  }
  answer
}
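# Editor's note: mypool() replaces each element of x by the sum over all
# elements sharing the same index value.  A tiny illustration:
if (FALSE) {
  mypool(c(1, 2, 3, 4), index = c("a", "a", "b", "b"))  # 3 3 7 7
}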
if (FALSE)
mbino <- function() {
  link <- "logit"
  earg <- list()
  parallel <- TRUE

  link <- as.list(substitute(link))
  earg <- link2list(link)
  link <- attr(earg, "function.name")

  if (is.logical(parallel) && !parallel)
    stop("'parallel' must be TRUE")

  new("vglmff",
  blurb = c("Matched binomial model (intercepts not fitted)\n\n",
            "Link: ", namesof("mu[,j]", link, earg = earg)),
  constraints = eval(substitute(expression({
    constraints <- cm.vgam(matrix(1, M, 1), x, .parallel , constraints,
                           apply.int = FALSE)
  }), list( .parallel = parallel ))),
  initialize = eval(substitute(expression({
    if (colnames(x)[1] == "(Intercept)")
      stop("the model matrix must not have an intercept")

    NCOL = function (x)
      if (is.array(x) && length(dim(x)) > 1 ||
          is.data.frame(x)) ncol(x) else as.integer(1)

    if (NCOL(y) == 1) {
      if (is.factor(y))
        y = y != levels(y)[1]
      nvec = rep(1, n)
      if (!all(y >= 0 & y <= 1))
        stop("response values must be in [0, 1]")
      mustart = (0.5 + w * y) / (1 + w)
      no.successes = w * y
      if (any(abs(no.successes - round(no.successes)) > 0.001))
        stop("Number of successes must be integer-valued")
    } else if (NCOL(y) == 2) {
      if (any(abs(y - round(y)) > 0.001))
        stop("Count data must be integer-valued")
      nvec = y[, 1] + y[, 2]
      y = ifelse(nvec > 0, y[, 1] / nvec, 0)
      w = w * nvec
      mustart = (0.5 + nvec * y) / (1 + nvec)
    } else
      stop("Response not of the right form")

    if (!length(etastart))
      etastart <- theta2eta(mustart, link = "logit", earg = list())

    temp1 = attr(x, "assign")
    mvar = extra$mvar
    if (length(mvar) != n)
      stop("input extra$mvar doesn't look right")

    if (any(y != 0 & y != 1))
      stop("response vector must have 0 or 1 values only")

    xrle = rle(mvar)
    if (length(unique(mvar)) != length(xrle$lengths))  # Was 'xrel$length',
      stop("extra$mvar must take on contiguous values")  # an apparent typo.

    temp9 = factor(mvar)
    extra$NoMatchedSets = levels(temp9)
    extra$n = n
    extra$M = M
    extra$rlex = xrle
    extra$index9 = temp9
    predictors.names <- namesof("mu", .link , earg = .earg ,
                                short = TRUE)
  }), list( .link = link, .earg = earg, .mvar = mvar ))),
  linkinv = eval(substitute(function(eta, extra = NULL) {
    denominator <- exp(eta)
    numerator <- mypool(denominator, extra$mvar)
    numerator / denominator
  }, list( .link = link, .earg = earg ))),
  last = eval(substitute(expression({
    misc$link <- c(mu = .link )
    misc$earg <- list(mu = .earg )
    misc$expected <- TRUE
  }), list( .link = link, .earg = earg ))),
  loglikelihood = function(mu, y, w, residuals = FALSE, eta,
                           extra = NULL) {
    if (residuals) w*(y/mu - (1-y)/(1-mu)) else {
      sum(w*(y*log(mu) + (1-y)*log1p(-mu)))
    }
  },
  vfamily = c("mbin", "vcategorical"),
  deriv = eval(substitute(expression({
    answer = if ( .link == "logit") {
      w * (y - mu)
    } else
      stop("can only handle the logit link")
    answer
  }), list( .link = link, .earg = earg ))),
  weight = eval(substitute(expression({
    tmp100 = mu*(1-mu)
    answer = if ( .link == "logit") {
      cbind(w * tmp100)
    } else
      stop("can only handle the logit link")

    result = matrix( .smallno, n, M)  # NB: 'smallno' is not defined in
    result[cbind(1:n, extra$index9)] = answer  # this disabled function.
    result
  }), list( .link = link, .earg = earg, .smallno = smallno ))))
}


augbinomial <- function(link = "logit", mv = FALSE,
                        parallel = TRUE) {
  if (!is.logical(parallel) ||
      length(parallel) != 1 ||
      !parallel)
    warning("Argument 'parallel' should be assigned 'TRUE' only")

  link <- as.list(substitute(link))
  earg <- link2list(link)
  link <- attr(earg, "function.name")

  new("vglmff",
  blurb = if (mv)
          c("Augmented multivariate binomial model\n\n",
            "Link: ",
            namesof("mu.1[,j]", link, earg = earg), ", ",
            namesof("mu.2[,j]", link, earg = earg), "\n",
            "Variance: mu[,j]*(1-mu[,j])") else
          c("Augmented binomial model\n\n",
            "Link: ",
            namesof("mu.1[,j]", link, earg = earg), ", ",
            namesof("mu.2[,j]", link, earg = earg), "\n",
            "Variance: mu*(1-mu)"),
  deviance = function(mu, y, w, residuals = FALSE, eta, extra = NULL) {
    Deviance.categorical.data.vgam(mu = cbind(mu, 1-mu),
                                   y = cbind(y, 1-y),
                                   w = w, residuals = residuals,
                                   eta = eta, extra = extra)
  },
  infos = eval(substitute(function(...) {
    list(Musual = 2,
         parallel = .parallel)
  }, list( .parallel = parallel ))),
  initialize = eval(substitute(expression({
    Musual = 2

    if ( .mv ) {
      y = as.matrix(y)
      M = Musual * ncol(y)
      if (!all(y == 0 | y == 1))
        stop("response must contain 0's and 1's only")
      dn2 = if (is.matrix(y)) dimnames(y)[[2]] else NULL
      dn2 = if (length(dn2)) {
        paste("E[", dn2, "]", sep = "")
      } else {
        paste("mu", 1:M, sep = "")
      }
      predictors.names <-
        c(namesof(if (M > 1) dn2 else "mu.1", .link ,
                  earg = .earg , short = TRUE),
          namesof(if (M > 1) dn2 else "mu.2", .link ,
                  earg = .earg , short = TRUE))
      NOS = M / Musual
      predictors.names <-
        predictors.names[interleave.VGAM(Musual * NOS, M = Musual)]

      if (!length(mustart) && !length(etastart))
        mustart = (0.5 + w * y) / (1 + w)
    } else {
      dn2 = c("mu1.", "mu2.")
      M = Musual
      if (!all(w == 1))
        extra$orig.w = w

      NCOL = function (x)
        if (is.array(x) && length(dim(x)) > 1 ||
            is.data.frame(x)) ncol(x) else as.integer(1)

      if (NCOL(y) == 1) {
        if (is.factor(y))
          y = (y != levels(y)[1])
        nvec = rep(1, n)
        y[w == 0] <- 0
        if (!all(y == 0 | y == 1))
          stop("response values 'y' must be 0 or 1")
        if (!length(mustart) && !length(etastart))
          mustart = (0.5 + w * y) / (1 + w)
        no.successes = y
        if (min(y) < 0)
          stop("Negative data not allowed!")
        if (any(abs(no.successes - round(no.successes)) > 1.0e-8))
          stop("Number of successes must be integer-valued")
      } else if (NCOL(y) == 2) {
        if (min(y) < 0)
          stop("Negative data not allowed!")
        if (any(abs(y - round(y)) > 1.0e-8))
          stop("Count data must be integer-valued")
        y = round(y)
        nvec = y[, 1] + y[, 2]
        y = ifelse(nvec > 0, y[, 1] / nvec, 0)
        w = w * nvec
        if (!length(mustart) && !length(etastart))
          mustart = (0.5 + nvec * y) / (1 + nvec)
      } else {
        stop("for the binomialff family, response 'y' must be a ",
             "vector of 0 and 1's\n",
             "or a factor (first level = fail, ",
             "other levels = success),\n",
             "or a 2-column matrix where col 1 is the no. of ",
             "successes and col 2 is the no. of failures")
      }
= success),\n", "or a 2-column matrix where col 1 is the no. of ", "successes and col 2 is the no. of failures") } predictors.names <- c(namesof("mu.1", .link , earg = .earg , short = TRUE), namesof("mu.2", .link , earg = .earg , short = TRUE)) } }), list( .link = link, .mv = mv, .earg = earg))), linkinv = eval(substitute(function(eta, extra = NULL) { Mdiv2 = ncol(eta) / 2 index1 = 2*(1:Mdiv2) - 1 mu = eta2theta(eta[, index1], link = .link , earg = .earg ) mu }, list( .link = link, .earg = earg ))), last = eval(substitute(expression({ misc$link <- rep( .link , length = M) names(misc$link) <- if (M > 1) dn2 else "mu" misc$earg <- vector("list", M) names(misc$earg) <- names(misc$link) for(ii in 1:M) misc$earg[[ii]] <- .earg misc$parallel <- .parallel misc$expected <- TRUE misc$mv <- .mv }), list( .link = link, .mv = mv, .earg = earg, .parallel = parallel ))), linkfun = eval(substitute(function(mu, extra = NULL) { usualanswer = theta2eta(mu, .link , earg = .earg ) kronecker(usualanswer, matrix(1, 1, 2)) }, list( .link = link, .earg = earg))), loglikelihood = function(mu, y, w, residuals = FALSE, eta, extra = NULL) { if (residuals) w * (y / mu - (1-y) / (1-mu)) else { ycounts = if (is.numeric(extra$orig.w)) y * w / extra$orig.w else y * w # Convert proportions to counts nvec = if (is.numeric(extra$orig.w)) round(w / extra$orig.w) else round(w) smallno = 1.0e6 * .Machine$double.eps smallno = sqrt(.Machine$double.eps) if (max(abs(ycounts - round(ycounts))) > smallno) warning("converting 'ycounts' to integer in @loglikelihood") ycounts = round(ycounts) sum((if (is.numeric(extra$orig.w)) extra$orig.w else 1) * dbinom(x = ycounts, size = nvec, prob = mu, log = TRUE)) } }, vfamily = c("augbinomial", "vcategorical"), deriv = eval(substitute(expression({ Musual = 2 Mdiv2 = M / 2 NOS = M / Musual Konst1 = 1 # Works with this deriv1 = Konst1 * w * if ( .link == "logit") { y * (1 - mu) } else { stop("this is not programmed in yet") dtheta.deta(mu, link = .link , earg = .earg ) * (y / mu - 1.0) / (1.0 - mu) } deriv2 = Konst1 * w * if ( .link == "logit") { -(1 - y) * mu } else { stop("this is not programmed in yet") dtheta.deta(mu, link = .link , earg = .earg ) * (y / mu - 1.0) / (1.0 - mu) } myderiv = (cbind(deriv1, deriv2))[, interleave.VGAM(Musual * NOS, M = Musual)] myderiv }), list( .link = link, .earg = earg))), weight = eval(substitute(expression({ tmp100 = mu * (1.0 - mu) tmp200 = if ( .link == "logit") { cbind(w * tmp100) } else { cbind(w * dtheta.deta(mu, link = .link , earg = .earg )^2 / tmp100) } wk_wt1 = (Konst1^2) * tmp200 * (1 - mu) wk_wt2 = (Konst1^2) * tmp200 * mu my_wk_wt = cbind(wk_wt1, wk_wt2) my_wk_wt = my_wk_wt[, interleave.VGAM(Musual * NOS, M = Musual)] my_wk_wt }), list( .link = link, .earg = earg)))) } VGAM/R/family.genetic.R0000644000176000001440000006670412136651110014267 0ustar ripleyusers# These functions are # Copyright (C) 1998-2013 T.W. Yee, University of Auckland. # All rights reserved. 
G1G2G3 <- function(link = "logit", ip1 = NULL, ip2 = NULL, iF = NULL) { link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") new("vglmff", blurb = c("G1-G2-G3 phenotype\n\n", "Links: ", namesof("p1", link, earg = earg), ", ", namesof("p2", link, earg = earg), ", ", namesof("f", link, earg = earg, tag = FALSE)), deviance = Deviance.categorical.data.vgam, initialize = eval(substitute(expression({ mustart.orig <- mustart delete.zero.colns <- FALSE eval(process.categorical.data.vgam) if (length(mustart.orig)) mustart <- mustart.orig ok.col.ny <- c("G1G1","G1G2","G1G3","G2G2","G2G3","G3G3") if (length(col.ny <- colnames(y)) == length(ok.col.ny) && setequal(ok.col.ny, col.ny)) { if (!all(ok.col.ny == col.ny)) stop("the columns of the response matrix should have ", "names (output of colnames()) ordered as ", "c('G1G1','G1G2','G1G3','G2G2','G2G3','G3G3')") } predictors.names <- c(namesof("p1", .link , earg = .earg , tag = FALSE), namesof("p2", .link , earg = .earg , tag = FALSE), namesof("f", .link , earg = .earg , tag = FALSE)) if (is.null(etastart)) { mydeterminant <- mustart[, 2] * mustart[, 3] + mustart[, 2] * mustart[, 5] + mustart[, 3] * mustart[, 5] p1 <- if (is.numeric( .ip1 )) rep( .ip1 , len = n) else mustart[, 2] * mustart[, 3] / mydeterminant p2 <- if (is.numeric( .ip2 )) rep( .ip2 , len = n) else mustart[, 2] * mustart[, 5] / mydeterminant ff <- if (is.numeric( .iF )) rep( .iF , len = n) else abs(1 - mustart[, 2] / (2 * p1 * p2)) if (any(p1 <= 0) || any(p1 >= 1)) stop("bad initial value for 'p1'") if (any(p2 <= 0) || any(p2 >= 1)) stop("bad initial value for 'p2'") etastart <- cbind(theta2eta(p1, .link , earg = .earg ), theta2eta(p2, .link , earg = .earg ), theta2eta(ff, .link , earg = .earg )) mustart <- NULL # Since etastart has been computed. 
} }), list( .link = link, .ip1 = ip1, .ip2 = ip2, .iF = iF, .earg = earg))), linkinv = eval(substitute(function(eta, extra = NULL) { p1 <- eta2theta(eta[, 1], link = .link , earg = .earg ) p2 <- eta2theta(eta[, 2], link = .link , earg = .earg ) f <- eta2theta(eta[, 3], link = .link , earg = .earg ) p3 <- abs(1 - p1 - p2) cbind("G1G1" = f*p1+(1-f)*p1^2, "G1G2" = 2*p1*p2*(1-f), "G1G3" = 2*p1*p3*(1-f), "G2G2" = f*p2+(1-f)*p2^2, "G2G3" = 2*p2*p3*(1-f), "G3G3" = f*p3+(1-f)*p3^2) }, list( .link = link, .earg = earg))), last = eval(substitute(expression({ misc$link <- c(p1 = .link , p2 = .link , f = .link ) misc$earg <- list(p1 = .earg , p2 = .earg , f = .earg ) misc$expected <- TRUE }), list( .link = link, .earg = earg))), loglikelihood = function(mu, y, w, residuals = FALSE, eta, extra = NULL) if (residuals) stop("loglikelihood residuals not implemented yet") else { sum(dmultinomial(x = w * y, size = w, prob = mu, log = TRUE, dochecking = FALSE)) }, vfamily = c("G1G2G3", "vgenetic"), deriv = eval(substitute(expression({ p1 <- eta2theta(eta[, 1], link = .link , earg = .earg ) p2 <- eta2theta(eta[, 2], link = .link , earg = .earg ) p3 <- 1-p1-p2 f <- eta2theta(eta[, 3], link = .link , earg = .earg ) dP1 <- cbind(f + 2*p1*(1-f), 2*(1-f)*p2, 2*(1-f)*(1-p2-2*p1), 0, -2*(1-f)*p2, -f - 2*p3*(1-f)) dP2 <- cbind(0, 2*p1*(1-f), -2*(1-f)*p1, f+2*p2*(1-f), 2*(1-f)*(1-p1-2*p2), -f - 2*p3*(1-f)) dP3 <- cbind(p1*(1-p1), -2*p1*p2, -2*p1*p3, p2*(1-p2), -2*p2*p3, p3*(1-p3)) dl1 <- rowSums(y * dP1 / mu) dl2 <- rowSums(y * dP2 / mu) dl3 <- rowSums(y * dP3 / mu) dPP.deta <- dtheta.deta(cbind(p1, p2, f), link = .link , earg = .earg ) c(w) * cbind(dPP.deta[, 1] * dl1, dPP.deta[, 2] * dl2, dPP.deta[, 3] * dl3) }), list( .link = link, .earg = earg))), weight = eval(substitute(expression({ dPP <- array(c(dP1, dP2, dP3), c(n, 6, 3)) wz <- matrix(as.numeric(NA), n, dimm(M)) # dimm(M)==6 because M==3 for(i1 in 1:M) for(i2 in i1:M) { index <- iam(i1,i2, M) wz[,index] <- rowSums(dPP[, , i1, drop = TRUE] * dPP[, , i2, drop = TRUE] / mu) * dPP.deta[, i1] * dPP.deta[, i2] } c(w) * wz }), list( .link = link, .earg = earg)))) } AAaa.nohw <- function(link = "logit", ipA = NULL, iF = NULL) { link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") new("vglmff", blurb = c("AA-Aa-aa phenotype (without Hardy-Weinberg assumption)\n\n", "Links: ", namesof("pA", link, earg = earg), ", ", namesof("f", "identity", tag = FALSE)), deviance = Deviance.categorical.data.vgam, initialize = eval(substitute(expression({ mustart.orig <- mustart delete.zero.colns <- FALSE eval(process.categorical.data.vgam) if (length(mustart.orig)) mustart <- mustart.orig ok.col.ny <- c("AA","Aa","aa") if (length(col.ny <- colnames(y)) == length(ok.col.ny) && setequal(ok.col.ny, col.ny)) { if (!all(ok.col.ny == col.ny)) stop("the columns of the response matrix should have names ", "(output of colnames()) ordered as c('AA','Aa','aa')") } predictors.names <- c(namesof("pA", .link , earg = .earg , tag = FALSE), namesof("f", "identity", earg = list(), tag = FALSE)) if (is.null(etastart)) { pA <- if (is.numeric( .ipA )) rep( .ipA , len = n) else c(sqrt(mustart[, 1] - mustart[, 2] / 2)) f <- if (is.numeric( .iF )) rep( .iF , len = n) else rep(0.01, len = n) # 1- mustart[, 2]/(2*pA*(1-pA)) if (any(pA <= 0) || any(pA >= 1)) stop("bad initial value for 'pA'") etastart <- cbind(theta2eta(pA, .link , earg = .earg ), theta2eta(f, "identity")) mustart <- NULL # Since etastart has been computed. 
} }), list( .link = link, .ipA = ipA, .iF = iF, .earg = earg))), linkinv = eval(substitute(function(eta, extra = NULL) { pA <- eta2theta(eta[, 1], link = .link , earg = .earg ) f <- eta2theta(eta[, 2], link = "identity", earg = list()) cbind(AA = pA^2+pA*(1-pA)*f, Aa = 2*pA*(1-pA)*(1-f), aa = (1-pA)^2 + pA*(1-pA)*f) }, list( .link = link, .earg = earg))), last = eval(substitute(expression({ misc$link <- c(pA = .link , f = "identity") misc$earg <- list(pA = .earg , f = list() ) misc$expected <- TRUE }), list( .link = link, .earg = earg))), loglikelihood = function(mu, y, w, residuals = FALSE, eta, extra = NULL) if (residuals) stop("loglikelihood residuals not implemented yet") else { sum(dmultinomial(x = w * y, size = w, prob = mu, log = TRUE, dochecking = FALSE)) }, vfamily = c("AAaa.nohw", "vgenetic"), deriv = eval(substitute(expression({ pA <- eta2theta(eta[, 1], link = .link , earg = .earg ) f <- eta2theta(eta[, 2], link = "identity") dP1 <- cbind(f + 2*pA*(1-f), 2*(1-f)*(1-2*pA), -2*(1-pA) +f*(1-2*pA)) dP2 <- cbind(pA*(1-pA), -2*pA*(1-pA), pA*(1-pA)) dl1 <- rowSums(y * dP1 / mu) dl2 <- rowSums(y * dP2 / mu) dPP.deta <- dtheta.deta(pA, link = .link , earg = .earg ) c(w) * cbind(dPP.deta * dl1, dl2) }), list( .link = link, .earg = earg))), weight = eval(substitute(expression({ dPP <- array(c(dP1, dP2), c(n, 3, 2)) dPP.deta <- cbind(dtheta.deta(pA, link = .link , earg = .earg ), dtheta.deta(f, link = "identity")) wz <- matrix(as.numeric(NA), n, dimm(M)) # dimm(M)==3 because M==2 for(i1 in 1:M) for(i2 in i1:M) { index <- iam(i1, i2, M) wz[,index] <- rowSums(dPP[,,i1,drop = TRUE] * dPP[,,i2,drop = TRUE] / mu) * dPP.deta[, i1] * dPP.deta[, i2] } c(w) * wz }), list( .link = link, .earg = earg)))) } AB.Ab.aB.ab2 <- function(link = "logit", init.p = NULL) { link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") new("vglmff", blurb = c("AB-Ab-aB-ab2 phenotype\n\n", "Links: ", namesof("p", link, earg = earg)), deviance = Deviance.categorical.data.vgam, initialize = eval(substitute(expression({ mustart.orig <- mustart delete.zero.colns <- FALSE eval(process.categorical.data.vgam) predictors.names <- namesof("p", .link , earg = .earg , tag = FALSE) if (length(mustart.orig)) mustart <- mustart.orig ok.col.ny <- c("AB","Ab","aB","ab") if (length(col.ny <- colnames(y)) == length(ok.col.ny) && setequal(ok.col.ny, col.ny)) { if (!all(ok.col.ny == col.ny)) stop("the columns of the response matrix should have names ", "(output of colnames()) ordered as c('AB','Ab','aB','ab')") } if (is.null(etastart)) { p.init <- if (is.numeric(.init.p)) rep(.init.p, n) else c(1 - 2 * sqrt(mustart[, 4])) etastart <- theta2eta(p.init, .link , earg = .earg ) mustart <- NULL # Since etastart has been computed. 
} }), list( .link = link, .init.p=init.p, .earg = earg))), linkinv = eval(substitute(function(eta,extra = NULL) { p <- eta2theta(eta, link = .link , earg = .earg ) cbind("AB" = (2+(1-p)^2), "Ab" = (1-(1-p)^2), "aB" = (1-(1-p)^2), "ab" = (1-p)^2) / 4 }, list( .link = link, .earg = earg) )), last = eval(substitute(expression({ misc$link <- c(p = .link ) misc$earg <- list(p = .earg ) misc$expected <- TRUE }), list( .link = link, .earg = earg) )), loglikelihood = function(mu, y, w, residuals = FALSE, eta, extra = NULL) if (residuals) stop("loglikelihood residuals not implemented yet") else { sum(dmultinomial(x = w * y, size = w, prob = mu, log = TRUE, dochecking = FALSE)) }, vfamily = c("AB.Ab.aB.ab2", "vgenetic"), deriv = eval(substitute(expression({ pp <- eta2theta(eta, link = .link , earg = .earg ) dP1 <- cbind(-0.5*(1-pp), 0.5*(1-pp), 0.5*(1-pp), -0.5*(1-pp)) dl1 <- rowSums(y * dP1 / mu) dPP.deta <- dtheta.deta(pp, link = .link , earg = .earg ) c(w) * dPP.deta * dl1 }), list( .link = link, .earg = earg) )), weight = eval(substitute(expression({ wz <- rowSums(dP1 * dP1 / mu) * dPP.deta^2 c(w) * wz }), list( .link = link, .earg = earg) ))) } A1A2A3 <- function(link = "logit", ip1 = NULL, ip2 = NULL) { link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") new("vglmff", blurb = c("A1A2A3 Allele System ", "(A1A1, A1A2, A2A2, A1A3, A2A3, A3A3)\n\n", "Links: ", namesof("p1", link, earg = earg), ", ", namesof("p2", link, earg = earg, tag = FALSE)), deviance = Deviance.categorical.data.vgam, initialize = eval(substitute(expression({ mustart.orig <- mustart delete.zero.colns <- FALSE eval(process.categorical.data.vgam) if (length(mustart.orig)) mustart <- mustart.orig ok.col.ny <- c("A1A1","A1A2","A2A2","A1A3","A2A3","A3A3") if (length(col.ny <- colnames(y)) == length(ok.col.ny) && setequal(ok.col.ny, col.ny)) { if (!all(ok.col.ny == col.ny)) stop("the columns of the response matrix should have names ", "(output of colnames()) ordered as ", "c('A1A1','A1A2','A2A2','A1A3','A2A3','A3A3')") } predictors.names <- c(namesof("pA", .link , earg = .earg , tag = FALSE), namesof("pB", .link , earg = .earg , tag = FALSE)) if (is.null(etastart)) { p1 <- if (is.numeric(.ip1)) rep(.ip1, n) else c(sqrt(mustart[, 1])) p2 <- if (is.numeric(.ip2)) rep(.ip2, n) else c(sqrt(mustart[, 3])) etastart <- cbind(theta2eta(p1, .link , earg = .earg ), theta2eta(p2, .link , earg = .earg )) mustart <- NULL # Since etastart has been computed. 
} }), list( .link = link, .ip1 = ip1, .ip2 = ip2, .earg = earg))), linkinv = eval(substitute(function(eta, extra = NULL) { p1 <- eta2theta(eta[, 1], link = .link , earg = .earg ) p2 <- eta2theta(eta[, 2], link = .link , earg = .earg ) qq <- abs(1 - p1 - p2) cbind(A1A1 = p1*p1, A1A2 = 2*p1*p2, A2A2 = p2*p2, A1A3 = 2*p1*qq, A2A3 = 2*p2*qq, A3A3 = qq*qq) }, list( .link = link, .earg = earg))), last = eval(substitute(expression({ misc$link <- c(p1 = .link , p2 = .link ) misc$earg <- list(p1 = .earg , p2 = .earg ) misc$expected <- TRUE }), list( .link = link, .earg = earg))), loglikelihood = function(mu, y, w, residuals = FALSE, eta, extra = NULL) if (residuals) stop("loglikelihood residuals not implemented yet") else { sum(dmultinomial(x = w * y, size = w, prob = mu, log = TRUE, dochecking = FALSE)) }, vfamily = c("A1A2A3", "vgenetic"), deriv = eval(substitute(expression({ p1 <- eta2theta(eta[, 1], link = .link , earg = .earg ) p2 <- eta2theta(eta[, 2], link = .link , earg = .earg ) dl.dp1 <- (2*y[, 1]+y[, 2]+y[, 4])/p1 - (2*y[,6]+y[, 4]+y[,5])/(1-p1-p2) dl.dp2 <- (2*y[, 3]+y[, 2]+y[,5])/p2 - (2*y[,6]+y[, 4]+y[,5])/(1-p1-p2) dp1.deta <- dtheta.deta(p1, link = .link , earg = .earg ) dp2.deta <- dtheta.deta(p2, link = .link , earg = .earg ) c(w) * cbind(dl.dp1 * dp1.deta, dl.dp2 * dp2.deta) }), list( .link = link, .earg = earg))), weight = eval(substitute(expression({ qq <- 1-p1-p2 wz <- matrix(as.numeric(NA), n, dimm(M)) # dimm(M)==3 because M==2 ned2l.dp12 <- 2 * (1/p1 + 1/qq) ned2l.dp22 <- 2 * (1/p2 + 1/qq) ned2l.dp1dp2 <- 2 / qq wz[, iam(1, 1, M)] <- ned2l.dp12 * dp1.deta^2 wz[, iam(2, 2, M)] <- ned2l.dp22 * dp2.deta^2 wz[, iam(1, 2, M)] <- ned2l.dp1dp2 * dp1.deta * dp2.deta c(w) * wz }), list( .link = link, .earg = earg)))) } MNSs <- function(link = "logit", imS = NULL, ims = NULL, inS = NULL) { link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") new("vglmff", blurb = c("MNSs Blood Group System (MS-Ms-MNS-MNs-NS-Ns phenotype)\n\n", "Links: ", namesof("mS", link, earg = earg), ", ", namesof("ms", link, earg = earg), ", ", namesof("nS", link, earg = earg, tag = FALSE)), deviance = Deviance.categorical.data.vgam, initialize = eval(substitute(expression({ mustart.orig <- mustart delete.zero.colns <- FALSE eval(process.categorical.data.vgam) if (length(mustart.orig)) mustart <- mustart.orig ok.col.ny <- c("MS","Ms","MNS","MNs","NS","Ns") if (length(col.ny <- colnames(y)) == length(ok.col.ny) && setequal(ok.col.ny, col.ny)) { if (!all(ok.col.ny == col.ny)) stop("the columns of the response matrix should have ", "names (output of colnames()) ordered as ", "c('MS','Ms','MNS','MNs','NS','Ns')") } predictors.names <- c(namesof("mS", .link , earg = .earg , tag = FALSE), namesof("ms", .link , earg = .earg , tag = FALSE), namesof("nS", .link , earg = .earg , tag = FALSE)) if (is.null(etastart)) { ms <- if (is.numeric(.ims)) rep(.ims, n) else c(sqrt(mustart[, 2])) ns <- c(sqrt(mustart[,6])) nS <- if (is.numeric(.inS)) rep(.inS, n) else c(-ns + sqrt(ns^2 + mustart[,5])) # Solve a quadratic eqn mS <- if (is.numeric(.imS)) rep(.imS, n) else 1-ns-ms-nS etastart <- cbind(theta2eta(mS, .link , earg = .earg ), theta2eta(ms, .link , earg = .earg ), theta2eta(nS, .link , earg = .earg )) mustart <- NULL # Since etastart has been computed. 
} }), list( .link = link, .imS = imS, .ims = ims, .inS = inS, .earg = earg))), linkinv = eval(substitute(function(eta, extra = NULL) { mS <- eta2theta(eta[, 1], link = .link , earg = .earg ) ms <- eta2theta(eta[, 2], link = .link , earg = .earg ) nS <- eta2theta(eta[, 3], link = .link , earg = .earg ) ns <- abs(1 - mS - ms - nS) cbind(MS = mS^2 + 2*mS*ms, Ms = ms^2, MNS = 2*(mS*nS + ms*nS + mS*ns), MNs = 2*ms*ns, NS = nS^2 + 2*nS*ns, Ns = ns^2) }, list( .link = link, .earg = earg))), last = eval(substitute(expression({ misc$link <- c(mS = .link , ms = .link , nS = .link ) misc$earg <- list(mS = .earg , ms = .earg , nS = .earg ) misc$expected <- TRUE }), list( .link = link, .earg = earg))), loglikelihood = function(mu, y, w, residuals = FALSE, eta, extra = NULL) if (residuals) stop("loglikelihood residuals not implemented yet") else { sum(dmultinomial(x = w * y, size = w, prob = mu, log = TRUE, dochecking = FALSE)) }, vfamily = c("MNSs", "vgenetic"), deriv = eval(substitute(expression({ mS <- eta2theta(eta[, 1], link = .link , earg = .earg ) ms <- eta2theta(eta[, 2], link = .link , earg = .earg ) nS <- eta2theta(eta[, 3], link = .link , earg = .earg ) ns <- 1-mS-ms-nS dP1 <- cbind(2*(mS+ms), 0, 2*(nS+ns-mS), -2*ms, -2*nS, -2*ns) dP2 <- cbind(2*mS, 2*ms, 2*(nS-mS), 2*(ns-ms), -2*nS, -2*ns) dP3 <- cbind(0, 0, 2*ms, -2*ms, 2*ns, -2*ns) # n x 6 dl1 <- rowSums(y * dP1 / mu) dl2 <- rowSums(y * dP2 / mu) dl3 <- rowSums(y * dP3 / mu) dPP.deta <- dtheta.deta(cbind(mS, ms, nS), link = .link , earg = .earg ) c(w) * dPP.deta * cbind(dl1, dl2, dl3) }), list( .link = link, .earg = earg))), weight = eval(substitute(expression({ dPP <- array(c(dP1,dP2,dP3), c(n,6, 3)) wz <- matrix(as.numeric(NA), n, dimm(M)) # dimm(M)==6 because M==3 for(i1 in 1:M) for(i2 in i1:M) { index <- iam(i1,i2, M) wz[,index] <- rowSums(dPP[,,i1,drop = TRUE] * dPP[,,i2,drop = TRUE] / mu) * dPP.deta[,i1] * dPP.deta[,i2] } c(w) * wz }), list( .link = link, .earg = earg)))) } ABO <- function(link = "logit", ipA = NULL, ipO = NULL) { link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") new("vglmff", blurb = c("ABO Blood Group System (A-B-AB-O phenotype)\n\n", "Links: ", namesof("pA", link, earg = earg), ", ", namesof("pB", link, earg = earg, tag = FALSE)), deviance = Deviance.categorical.data.vgam, initialize = eval(substitute(expression({ mustart.orig <- mustart delete.zero.colns <- FALSE eval(process.categorical.data.vgam) if (length(mustart.orig)) mustart <- mustart.orig ok.col.ny <- c("A","B","AB","O") if (length(col.ny <- colnames(y)) == length(ok.col.ny) && setequal(ok.col.ny, col.ny)) { if (!all(ok.col.ny == col.ny)) stop("the columns of the response matrix should have names ", "(output of colnames()) ordered as c('A','B','AB','O')") } predictors.names <- c(namesof("pA", .link , earg = .earg , tag = FALSE), namesof("pB", .link , earg = .earg , tag = FALSE)) if (!length(etastart)) { pO <- if (is.Numeric( .ipO )) rep( .ipO , len = n) else c(sqrt(mustart[, 4])) pA <- if (is.Numeric( .ipA )) rep( .ipA , len = n) else c(1 - sqrt(mustart[, 2] + mustart[, 4])) pB <- abs(1 - pA - pO) etastart <- cbind(theta2eta(pA, .link , earg = .earg ), theta2eta(pB, .link , earg = .earg )) mustart <- NULL # Since etastart has been computed. 
} }), list( .link = link, .ipO = ipO, .ipA = ipA, .earg = earg))), linkinv = eval(substitute(function(eta, extra = NULL) { pA <- eta2theta(eta[, 1], link = .link , earg = .earg ) pB <- eta2theta(eta[, 2], link = .link , earg = .earg ) pO <- abs(1 - pA - pB) cbind(A = pA*(pA+2*pO), B = pB*(pB+2*pO), AB = 2*pA*pB, O = pO*pO) }, list( .link = link, .earg = earg))), last = eval(substitute(expression({ misc$link <- c(pA = .link , pB = .link ) misc$earg <- list(pA = .earg , pB = .earg ) misc$expected <- TRUE }), list( .link = link, .earg = earg))), loglikelihood = function(mu, y, w, residuals = FALSE, eta, extra = NULL) if (residuals) stop("loglikelihood residuals not implemented yet") else { sum(dmultinomial(x = w * y, size = w, prob = mu, log = TRUE, dochecking = FALSE)) }, vfamily = c("ABO", "vgenetic"), deriv = eval(substitute(expression({ ppp <- eta2theta(eta[, 1], link = .link , earg = .earg ) qqq <- eta2theta(eta[, 2], link = .link , earg = .earg ) rrr <- abs(1 - ppp - qqq) pbar <- 2*rrr + ppp qbar <- 2*rrr + qqq naa <- y[, 1] nbb <- y[, 2] nab <- y[, 3] noo <- y[, 4] dl.dp <- (naa+nab)/ppp - naa/pbar - 2*nbb/qbar - 2*noo/rrr dl.dq <- (nbb+nab)/qqq - 2*naa/pbar - nbb/qbar - 2*noo/rrr dp.deta <- dtheta.deta(ppp, link = .link , earg = .earg ) dq.deta <- dtheta.deta(qqq, link = .link , earg = .earg ) c(w) * cbind(dl.dp * dp.deta, dl.dq * dq.deta) }), list( .link = link, .earg = earg))), weight = eval(substitute(expression({ wz <- matrix(as.numeric(NA), n, dimm(M)) # dimm(M)==3 because M==2 ned2l.dp2 <- (1 + 2/ppp + 4*qqq/qbar + ppp/pbar) ned2l.dq2 <- (1 + 2/qqq + 4*ppp/pbar + qqq/qbar) ned2l.dpdq <- 2 * (1 + qqq/qbar + ppp/pbar) wz[, iam(1, 1, M)] <- ned2l.dp2 * dp.deta^2 wz[, iam(2, 2, M)] <- ned2l.dq2 * dq.deta^2 wz[, iam(1, 2, M)] <- ned2l.dpdq * dp.deta * dq.deta c(w) * wz }), list( .link = link, .earg = earg)))) } AB.Ab.aB.ab <- function(link = "logit", init.p = NULL) { link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") new("vglmff", blurb = c("AB-Ab-aB-ab phenotype\n\n", "Links: ", namesof("p", link, earg = earg, tag = TRUE)), deviance = Deviance.categorical.data.vgam, initialize = eval(substitute(expression({ mustart.orig <- mustart delete.zero.colns <- FALSE eval(process.categorical.data.vgam) if (length(mustart.orig)) mustart <- mustart.orig ok.col.ny <- c("AB","Ab","aB","ab") if (length(col.ny <- colnames(y)) == length(ok.col.ny) && setequal(ok.col.ny, col.ny)) { if (!all(ok.col.ny == col.ny)) stop("the columns of the response matrix should have ", "names (output of colnames()) ordered as ", "c('AB','Ab','aB','ab')") } predictors.names <- namesof("p", .link , earg = .earg , tag = FALSE) if (is.null(etastart)) { p <- if (is.numeric( .init.p )) rep(.init.p, len = n) else c(sqrt(4 * mustart[, 4])) etastart <- cbind(theta2eta(p, .link , earg = .earg )) mustart <- NULL # Since etastart has been computed. 
} }), list( .link = link, .init.p=init.p, .earg = earg))), linkinv = eval(substitute(function(eta,extra = NULL) { p <- eta2theta(eta, link = .link , earg = .earg ) pp4 <- p * p / 4 cbind(AB = 0.5 + pp4, Ab = 0.25 - pp4, aB = 0.25 - pp4, ab = pp4) }, list( .link = link, .earg = earg))), last = eval(substitute(expression({ misc$link <- c(p = .link ) misc$earg <- list(p = .earg ) misc$expected <- TRUE }), list( .link = link, .earg = earg))), loglikelihood = function(mu, y, w, residuals = FALSE, eta, extra = NULL) if (residuals) stop("loglikelihood residuals not implemented yet") else { sum(dmultinomial(x = w * y, size = w, prob = mu, log = TRUE, dochecking = FALSE)) }, vfamily = c("AB.Ab.aB.ab", "vgenetic"), deriv = eval(substitute(expression({ pp <- eta2theta(eta, link = .link , earg = .earg ) p2 <- pp*pp nAB <- w * y[, 1] nAb <- w * y[, 2] naB <- w * y[, 3] nab <- w * y[, 4] dl.dp <- 8 * pp * (nAB/(2+p2) - (nAb+naB)/(1-p2) + nab/p2) dp.deta <- dtheta.deta(pp, link = .link , earg = .earg ) dl.dp * dp.deta }), list( .link = link, .earg = earg))), weight = eval(substitute(expression({ ned2l.dp2 <- 4 * p2 * (1/(2+p2) + 2/(1-p2) + 1/p2) wz <- cbind((dp.deta^2) * ned2l.dp2) c(w) * wz }), list( .link = link, .earg = earg)))) } AA.Aa.aa <- function(link = "logit", init.pA = NULL) { link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") new("vglmff", blurb = c("AA-Aa-aa phenotype\n\n", "Links: ", namesof("pA", link, earg = earg)), deviance = Deviance.categorical.data.vgam, initialize = eval(substitute(expression({ mustart.orig <- mustart delete.zero.colns <- FALSE eval(process.categorical.data.vgam) if (length(mustart.orig)) mustart <- mustart.orig ok.col.ny <- c("AA","Aa","aa") if (length(col.ny <- colnames(y)) == length(ok.col.ny) && setequal(ok.col.ny, col.ny)) { if (!all(ok.col.ny == col.ny)) stop("the columns of the response matrix ", "should have names ", "(output of colnames()) ordered as c('AA','Aa','aa')") } predictors.names <- namesof("pA", .link , earg = .earg , tag = FALSE) if (is.null(etastart)) { pA <- if (is.numeric(.init.pA)) rep(.init.pA, n) else c(sqrt(mustart[, 1])) etastart <- cbind(theta2eta(pA, .link , earg = .earg )) mustart <- NULL # Since etastart has been computed. 
  }
  }), list( .link = link, .init.pA=init.pA, .earg = earg))),

  linkinv = eval(substitute(function(eta,extra = NULL) {
    pA <- eta2theta(eta, link = .link , earg = .earg )
    pp <- pA*pA
    cbind(AA = pp,
          Aa = 2*pA*(1-pA),
          aa = (1-pA)^2)
  }, list( .link = link, .earg = earg))),

  last = eval(substitute(expression({
    misc$link <- c("pA" = .link )
    misc$earg <- list("pA" = .earg )
    misc$expected = TRUE
  }), list( .link = link, .earg = earg))),

  loglikelihood = function(mu, y, w, residuals = FALSE, eta, extra = NULL)
    if (residuals) stop("loglikelihood residuals not implemented yet") else {
      sum(dmultinomial(x = w * y, size = w, prob = mu,
                       log = TRUE, dochecking = FALSE))
    },
  vfamily = c("AA.Aa.aa", "vgenetic"),
  deriv = eval(substitute(expression({
    pA <- eta2theta(eta, link = .link , earg = .earg )
    nAA <- w * y[, 1]
    nAa <- w * y[, 2]
    naa <- w * y[, 3]
    dl.dpA <- (2*nAA+nAa)/pA - (nAa+2*naa)/(1-pA)
    dpA.deta <- dtheta.deta(pA, link = .link , earg = .earg )
    dl.dpA * dpA.deta
  }), list( .link = link, .earg = earg))),
  weight = eval(substitute(expression({
    ned2l.dp2 <- (2*nAA+nAa)/pA^2 + (nAa+2*naa)/(1-pA)^2
    wz <- cbind((dpA.deta^2) * ned2l.dp2)
    wz
  }), list( .link = link, .earg = earg))))
}


# File: VGAM/R/family.functions.R
# These functions are
# Copyright (C) 1998-2013 T.W. Yee, University of Auckland.
# All rights reserved.


fill <- fill1 <- fill2 <- fill3 <- function(x, values = 0, ncolx = ncol(x)) {
  x <- as.matrix(x)
  matrix(values, nrow = nrow(x), ncol = ncolx, byrow = TRUE)
}

extract.arg <- function(a) {
  s <- substitute(a)
  as.character(s)
}

remove.arg <- function(string) {
  nc <- nchar(string)
  bits <- substring(string, 1:nc, 1:nc)
  b1 <- (1:nc)[bits == "("]
  b1 <- if (length(b1)) b1[1]-1 else nc
  if (b1 == 0)
    return("")
  string <- paste(bits[1:b1], collapse = "")
  string
}

add.arg <- function(string, arg.string) {
  if (arg.string == "")
    return(string)
  nc <- nchar(string)
  lastc <- substring(string, nc, nc)
  if (lastc == ")") {
    if (substring(string, nc-1, nc-1) == "(") {
      paste(substring(string, 1, nc-2), "(", arg.string, ")", sep = "")
    } else {
      paste(substring(string, 1, nc-1), ", ", arg.string, ")", sep = "")
    }
  } else {
    paste(string, "(", arg.string, ")", sep = "")
  }
}

get.arg <- function(string) {
  nc <- nchar(string)
  bits <- substring(string, 1:nc, 1:nc)
  b1 <- (1:nc)[bits == "("]
  b2 <- (1:nc)[bits == ")"]
  b1 <- if (length(b1)) min(b1) else return("")
  b2 <- if (length(b2)) max(b2) else return("")
  if (b2-b1 == 1) "" else paste(bits[(1+b1):(b2-1)], collapse = "")
}

eifun <- function(i, n)
  cbind(as.numeric((1:n) == i))

eifun <- function(i, n)  # This second definition overrides the one above
  diag(n)[, i, drop = FALSE]

eijfun <- function(i, n) {
  temp <- matrix(0, n, 1)
  if (length(i))
    temp[i, ] <- 1
  temp
}

dneg.binomial <- function(x, k, prob) {
  care.exp(x * log1p(-prob) + k * log(prob) +
           lgamma(x+k) - lgamma(k) - lgamma(x + 1))
}

tapplymat1 <- function(mat,
                       function.arg = c("cumsum", "diff", "cumprod")) {
  if (!missing(function.arg))
    function.arg <- as.character(substitute(function.arg))
  function.arg <- match.arg(function.arg,
                            c("cumsum", "diff", "cumprod"))[1]
  type <- switch(function.arg, cumsum = 1, diff = 2, cumprod = 3,
                 stop("function.arg not matched"))
  if (!is.matrix(mat))
    mat <- as.matrix(mat)
  nr <- nrow(mat)
  nc <- ncol(mat)
  fred <- dotC(name = "tapplymat1", mat = as.double(mat),
               as.integer(nr), as.integer(nc), as.integer(type))
  dim(fred$mat) <- c(nr, nc)
  dimnames(fred$mat) <- dimnames(mat)
  switch(function.arg,
         cumsum = fred$mat,
         diff = fred$mat[, -1, drop = FALSE],
         cumprod = fred$mat)
}
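# Quick sanity checks (illustrative, not part of the original sources) for
# the small string-manipulation and indexing helpers above; these are pure
# R and can be run directly:
if (FALSE) {
  remove.arg("logoff(offset = 0.5)")  # "logoff"
  get.arg("logoff(offset = 0.5)")     # "offset = 0.5"
  add.arg("loge", "offset = 0.5")     # "loge(offset = 0.5)"
  eifun(2, 4)                         # 4 x 1 unit vector e_2
}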
matrix.power <- function(wz, M, power, fast = TRUE) {
  n <- nrow(wz)
  index <- iam(NA, NA, M, both = TRUE, diag = TRUE)
  dimm.value <- if (is.matrix(wz)) ncol(wz) else 1
  if (dimm.value > M*(M+1)/2)
    stop("too many columns")
  if (M == 1 || dimm.value == M) {
    WW <- wz^power  # May contain NAs
    return(t(WW))
  }
  if (fast) {
    k <- veigen(t(wz), M = M)  # matrix.arg)
    evals <- k$values    # M x n
    evects <- k$vectors  # M x M x n
  } else {
    stop("sorry, cannot handle matrix-band form yet")
    k <- unlist(apply(wz, 3, eigen), use.names = FALSE)
    dim(k) <- c(M, M+1, n)
    evals <- k[, 1, , drop = TRUE]    # M x n
    evects <- k[, -1, , drop = TRUE]  # M x M x n
  }
  temp <- evals^power  # Some values may be NAs
  index <- as.vector( matrix(1, 1, M) %*% is.na(temp) )
  index <- (index == 0)
  if (!all(index)) {
    warning(paste("Some weight matrices have negative",
                  "eigenvalues. They\nwill be assigned NAs"))
    temp[,!index] <- 1
  }
  WW <- mux55(evects, temp, M = M)
  WW[,!index] <- NA
  WW
}

ResSS.vgam <- function(z, wz, M) {
  if (M == 1)
    return(sum(c(wz) * c(z^2)))
  wz.z <- mux22(t(wz), z, M = M, as.matrix = TRUE)
  sum(wz.z * z)
}

wweighted.mean <- function(y, w = NULL, matrix.arg = TRUE) {
  if (!matrix.arg)
    stop("currently, matrix.arg must be TRUE")
  y <- as.matrix(y)
  M <- ncol(y)
  n <- nrow(y)
  if (M == 1) {
    if (missing(w)) mean(y) else sum(w * y)/sum(w)
  } else {
    if (missing(w)) y %*% rep(1, n) else {
      numer <- mux22(t(w), y, M, as.matrix = TRUE)
      numer <- t(numer) %*% rep(1, n)
      denom <- t(w) %*% rep(1, n)
      denom <- matrix(denom, 1, length(denom))
      if (matrix.arg)
        denom <- m2adefault(denom, M = M)[,,1]
      c(solve(denom, numer))
    }
  }
}

veigen <- function(x, M) {
  n <- ncol(x)
  index <- iam(NA, NA, M = M, both = TRUE, diag = TRUE)
  dimm.value <- nrow(x)  # usually M or M(M+1)/2
  z <- dotFortran(name = "veigen", as.integer(M), as.integer(n),
                  as.double(x),
                  values = double(M * n), as.integer(1),
                  vectors = double(M*M*n), double(M), double(M),
                  wk = double(M*M),
                  as.integer(index$row), as.integer(index$col),
                  as.integer(dimm.value),
                  error.code = integer(1))
  if (z$error.code)
    stop("eigen algorithm (rs) returned error code ", z$error.code)
  ord <- M:1
  dim(z$values) <- c(M, n)
  z$values <- z$values[ord, , drop = FALSE]
  dim(z$vectors) <- c(M, M, n)
  z$vectors <- z$vectors[, ord, , drop = FALSE]
  return(list(values = z$values, vectors = z$vectors))
}

ima <- function(j, k, M) {
  if (length(M) > 1 || M <= 0 || j <= 0 || k <= 0 || j > M || k > M)
    stop("input wrong in ima()")
  m <- diag(M)
  m[col(m) <= row(m)] <- 1:(M*(M+1)/2)
  if (j >= k) m[j, k] else m[k, j]
}

checkwz <- function(wz, M, trace = FALSE,
                    wzepsilon = .Machine$double.eps^0.75) {
  if (wzepsilon > 0.5)
    warning("'wzepsilon' is probably too large")
  if (!is.matrix(wz))
    wz <- as.matrix(wz)
  if ((temp <- sum(wz[, 1:M, drop = FALSE] < wzepsilon)))
    warning(paste(temp, "elements replaced by",
                  signif(wzepsilon, 5)))
  wz[, 1:M] <- pmax(wzepsilon, wz[, 1:M])
  wz
}


# File: VGAM/R/family.fishing.R
# These functions are
# Copyright (C) 1998-2013 T.W. Yee, University of Auckland.
# All rights reserved.
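# Illustrative sketch (not part of the original sources) of the DeLury()
# depletion estimator defined below, on made-up catch and effort data;
# N0 is the estimated initial population size.
if (FALSE) {
  catch  <- c(100, 82, 67, 52, 41, 33)  # hypothetical catches
  effort <- rep(10, length(catch))      # hypothetical constant effort
  fit <- DeLury(catch, effort, type = "DeLury")
  fit$N0                      # estimated initial population size
  plot(fit$E, log(fit$CPUE))  # DeLury plot: log(CPUE) vs cumulative effort
}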
DeLury <- function(catch, effort,
                   type = c("DeLury", "Leslie"), ricker = FALSE) {
  type <- match.arg(type, c("DeLury", "Leslie"))[1]
  if (!is.logical(ricker))
    stop("bad input for argument 'ricker'")
  if ((LLL <- Lcatch <- length(catch)) != (Leffort <- length(effort)))
    stop("length(catch) != length(effort)")
  CPUE <- catch / effort
  if (type == "DeLury") {
    Et <- cumsum(effort) - ifelse(ricker, 0.5, 1) * effort
    logCPUE <- log(CPUE)
    lmfit <- lm(logCPUE ~ Et, x = TRUE)
    myq <- catchabilityCoefficient <- -coef(lmfit)[2]
    N0 <- exp(coef(lmfit)["(Intercept)"]) / myq
  } else {
    Kt <- cumsum(catch) - ifelse(ricker, 0.5, 1) * catch
    lmfit <- lm(CPUE ~ Kt, x = TRUE)
    myq <- catchabilityCoefficient <- -coef(lmfit)[2]
    N0 <- coef(lmfit)["(Intercept)"] / myq
  }
  rlist <- list(catch = catch, effort = effort, type = type,
                N0 = N0, CPUE = CPUE, lmfit = lmfit)
  if (type == "DeLury") {
    rlist$E <- Et
  } else {
    rlist$K <- Kt
  }
  rlist
}

wffc.P1 <- function(length, c1 = 100, min.eligible = 0.18, ppm = 2000)
  ifelse(length >= min.eligible,
         c1 + (ppm/100) * ceiling( signif(100 * length, digits = 8) ),
         0)

wffc.P1star <- function(length, c1 = 100, min.eligible = 0.18, ppm = 2000)
  ifelse(length >= min.eligible, c1 + ppm * length, 0)

wffc.P2 <- function(length, c1 = 100, min.eligible = 0.18, ppm = 2000)
  wffc.P1(length, c1 = c1, min.eligible = min.eligible, ppm = ppm) +
  ifelse(length >= min.eligible,
         ceiling(100*(length-min.eligible))^2, 0)

wffc.P2star <- function(length, c1 = 100, min.eligible = 0.18, ppm = 2000)
  wffc.P1star(length, c1 = c1, min.eligible = min.eligible, ppm = ppm) +
  ifelse(length >= min.eligible,
         10000 * (length-min.eligible)^2, 0)

wffc.P3 <- function(length, c1 = 100, min.eligible = 0.18, ppm = 2000) {
  # A superseded first attempt, kept for reference:
  # temp1 <- floor((ceiling(100*length)/100) / min.eligible)  # zz not sure
  temp1 <- floor(length / min.eligible)
  ans <- ifelse(temp1 >= 1, c1, length * 0)  # Handles NAs
  ans <- ans + ifelse(temp1 >= 1, ppm * (ceiling(100*length)/100), 0)
  maxtemp1 <- max(temp1, na.rm = TRUE)
  if (maxtemp1 > 1)
    for (ii in 2:maxtemp1) {
      ans <- ans +
        ifelse(ii < temp1, min.eligible * (ii-1) * ppm, 0) +
        ifelse(ii == temp1,
               (ceiling(100*length)/100 - ii*min.eligible) * (ii-1) * ppm,
               0)
    }
  ans
}

wffc.P3star <- function(length, c1 = 100, min.eligible = 0.18, ppm = 2000) {
  temp1 <- floor(length / min.eligible)
  ans <- ifelse(temp1 >= 1, c1, length * 0)  # Handles NAs
  ans <- ans + ifelse(temp1 >= 1, length * ppm, 0)
  maxtemp1 <- max(temp1, na.rm = TRUE)
  if (maxtemp1 > 1)
    for (ii in 2:maxtemp1) {
      ans <- ans +
        ifelse(ii < temp1, min.eligible * (ii-1) * ppm, 0) +
        ifelse(ii == temp1, (length - ii*min.eligible) * (ii-1) * ppm, 0)
    }
  ans
}


# File: VGAM/R/family.extremes.R
# These functions are
# Copyright (C) 1998-2013 T.W. Yee, University of Auckland.
# All rights reserved.
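# Consistency sketch (illustrative, not part of the original sources) for
# the GEV d/p/q/r functions defined below: random variates, the cdf at a
# point, and the quantile/cdf round trip.
if (FALSE) {
  set.seed(123)
  x <- rgev(n = 1000, location = 1, scale = 2, shape = 0.1)
  mean(x <= 3)                                   # empirical cdf at 3
  pgev(3, location = 1, scale = 2, shape = 0.1)  # should be close
  pvec <- c(0.1, 0.5, 0.9)
  pgev(qgev(pvec, location = 1, scale = 2, shape = 0.1),
       location = 1, scale = 2, shape = 0.1)     # recovers pvec
}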
rgev <- function(n, location = 0, scale = 1, shape = 0) { use.n <- if ((length.n <- length(n)) > 1) length.n else if (!is.Numeric(n, integer.valued = TRUE, allowable.length = 1, positive = TRUE)) stop("bad input for argument 'n'") else n if (!is.Numeric(location)) stop("bad input for argument argument 'location'") if (!is.Numeric(shape)) stop("bad input for argument argument 'shape'") ans <- numeric(use.n) if (length(shape) != use.n) shape <- rep(shape, length.out = use.n) if (length(location) != use.n) location <- rep(location, length.out = use.n) if (length(scale) != use.n) scale <- rep(scale, length.out = use.n) scase <- abs(shape) < sqrt( .Machine$double.eps ) nscase <- sum(scase) if (use.n - nscase) ans[!scase] <- location[!scase] + scale[!scase] * ((-log(runif(use.n - nscase)))^(-shape[!scase]) -1) / shape[!scase] if (nscase) ans[scase] <- rgumbel(nscase, location = location[scase], scale = scale[scase]) ans[scale <= 0] <- NaN ans } dgev <- function(x, location = 0, scale = 1, shape = 0, log = FALSE, tolshape0 = sqrt( .Machine$double.eps ), oobounds.log = -Inf, giveWarning = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) if (oobounds.log > 0) stop("bad input for argument 'oobounds.log'") if (!is.Numeric(tolshape0, allowable.length = 1, positive = TRUE)) stop("bad input for argument 'tolshape0'") use.n <- max(length(x), length(location), length(scale), length(shape)) if (length(shape) != use.n) shape <- rep(shape, length.out = use.n) if (length(location) != use.n) location <- rep(location, length.out = use.n) if (length(scale) != use.n) scale <- rep(scale, length.out = use.n) x <- rep(x, length.out = use.n) logdensity <- rep(log(0), length.out = use.n) scase <- (abs(shape) < tolshape0) nscase <- sum(scase) if (use.n - nscase) { zedd <- 1 + shape * (x - location) / scale # pmax(0, (1+shape*xc/scale)) xok <- (!scase) & (zedd > 0) logdensity[xok] <- -log(scale[xok]) - zedd[xok]^(-1/shape[xok]) - (1 + 1/shape[xok]) * log(zedd[xok]) outofbounds <- (!scase) & (zedd <= 0) if (any(outofbounds)) { logdensity[outofbounds] <- oobounds.log no.oob <- sum(outofbounds) if (giveWarning) warning(no.oob, " observation", ifelse(no.oob > 1, "s are", " is"), " out of bounds") } } if (nscase) { logdensity[scase] <- dgumbel(x[scase], location = location[scase], scale = scale[scase], log = TRUE) } logdensity[scale <= 0] <- NaN if (log.arg) logdensity else exp(logdensity) } pgev <- function(q, location = 0, scale = 1, shape = 0) { if (!is.Numeric(q)) stop("bad input for argument 'q'") if (!is.Numeric(location)) stop("bad input for argument 'location'") if (!is.Numeric(shape)) stop("bad input for argument 'shape'") use.n <- max(length(q), length(location), length(scale), length(shape)) ans <- numeric(use.n) if (length(shape) != use.n) shape <- rep(shape, length.out = use.n) if (length(location) != use.n) location <- rep(location, length.out = use.n); if (length(scale) != use.n) scale <- rep(scale, length.out = use.n) if (length(q) != use.n) q <- rep(q, length.out = use.n) scase <- abs(shape) < sqrt( .Machine$double.eps ) nscase <- sum(scase) zedd <- (q - location) / scale if (use.n - nscase) { use.zedd <- pmax(0, 1 + shape * zedd) ans[!scase] <- exp(-use.zedd[!scase]^(-1 / shape[!scase])) } if (nscase) { ans[scase] <- pgumbel(q[scase], location = location[scase], scale = scale[scase]) } ans[scale <= 0] <- NaN ans } qgev <- function(p, location = 0, scale = 1, shape = 0) { if (!is.Numeric(p, positive = TRUE) || any(p >= 1)) stop("0 < p < 1 is 
required") if (!is.Numeric(location)) stop("bad input for argument 'location'") if (!is.Numeric(shape)) stop("bad input for argument 'shape'") use.n <- max(length(p), length(location), length(scale), length(shape)) ans <- numeric(use.n) if (length(shape) != use.n) shape <- rep(shape, length.out = use.n) if (length(location) != use.n) location <- rep(location, length.out = use.n); if (length(scale) != use.n) scale <- rep(scale, length.out = use.n) if (length(p) != use.n) p <- rep(p, length.out = use.n) scase <- abs(shape) < sqrt( .Machine$double.eps ) nscase <- sum(scase) if (use.n - nscase) { ans[!scase] <- location[!scase] + scale[!scase] * ((-log(p[!scase]))^(-shape[!scase]) - 1) / shape[!scase] } if (nscase) ans[scase] <- qgumbel(p[scase], location = location[scase], scale = scale[scase]) ans[scale <= 0] <- NaN ans } gev <- function( llocation = "identity", lscale = "loge", lshape = logoff(offset = 0.5), percentiles = c(95, 99), iscale = NULL, ishape = NULL, imethod = 1, gshape = c(-0.45, 0.45), tolshape0 = 0.001, giveWarning = TRUE, zero = 3) { llocat <- as.list(substitute(llocation)) elocat <- link2list(llocat) llocat <- attr(elocat, "function.name") lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") lshape <- as.list(substitute(lshape)) eshape <- link2list(lshape) lshape <- attr(eshape, "function.name") if (!is.logical(giveWarning) || length(giveWarning) != 1) stop("bad input for argument 'giveWarning'") mean <- FALSE if (length(iscale) && !is.Numeric(iscale, positive = TRUE)) stop("bad input for argument 'iscale'") if (!mean && length(percentiles) && (!is.Numeric(percentiles, positive = TRUE) || max(percentiles) >= 100)) stop("bad input for argument 'percentiles'") if (!is.Numeric(imethod, allowable.length = 1, positive = TRUE, integer.valued = TRUE) || imethod > 2.5) stop("argument 'imethod' must be 1 or 2") if (length(ishape) && !is.Numeric(ishape)) stop("bad input for argument 'ishape'") if (!is.Numeric(tolshape0, allowable.length = 1, positive = TRUE) || tolshape0 > 0.1) stop("bad input for argument 'tolshape0'") if (!is.Numeric(gshape, allowable.length = 2) || gshape[1] >= gshape[2]) stop("bad input for argument 'gshape'") if (length(zero) && !is.Numeric(zero, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'zero'") new("vglmff", blurb = c("Generalized extreme value distribution\n", "Links: ", namesof("location", llocat, elocat), ", ", namesof("scale", lscale, escale), ", ", namesof("shape", lshape, eshape)), constraints = eval(substitute(expression({ constraints <- cm.zero.vgam(constraints, x, .zero, M) }), list( .zero = zero ))), infos = eval(substitute(function(...) 
{ list(Musual = 3, multipleResponses = FALSE, zero = .zero ) }, list( .zero = zero ))), initialize = eval(substitute(expression({ Musual <- extra$Musual <- 3 ncoly <- ncol(y) extra$ncoly <- ncoly extra$Musual <- Musual mynames1 <- "location" mynames2 <- "scale" mynames3 <- "shape" llocat <- .llocat lscale <- .lscale lshape <- .lshape predictors.names <- c( namesof(mynames1, .llocat , .elocat , short = TRUE), namesof(mynames2, .lscale , .escale , short = TRUE), namesof(mynames3, .lshape , .eshape , short = TRUE)) y <- as.matrix(y) if (ncol(y) > 1) y <- -t(apply(-y, 1, sort, na.last = TRUE)) r.vec <- rowSums(cbind(!is.na(y))) if (any(r.vec == 0)) stop("A row contains all missing values") extra$percentiles <- .percentiles if (!length(etastart)) { init.sig <- if (length( .iscale )) rep( .iscale, length.out = nrow(y)) else NULL init.xi <- if (length( .ishape )) rep( .ishape, length.out = nrow(y)) else NULL LIST.lshape <- .lshape if ( .lshape == "elogit" && length(init.xi) && (any(init.xi <= LIST.lshape$min | init.xi >= LIST.lshape$max))) stop("bad input for an argument in 'lshape'") if ( .imethod == 1) { nvector <- 4:10 # Arbitrary; could be made an argument ynvector <- quantile(y[, 1], probs = 1-1/nvector) objecFunction <- -Inf # Actually the log-likelihood est.sigma <- !length(init.sig) gshape <- .gshape temp234 <- if (length(init.xi)) init.xi[1] else seq(gshape[1], gshape[2], length.out = 12) for(shapeTry in temp234) { xvec <- if (abs(shapeTry) < .tolshape0) log(nvector) else (nvector^shapeTry - 1) / shapeTry fit0 <- lsfit(x = xvec, y = ynvector, intercept = TRUE) sigmaTry = if (est.sigma) rep(fit0$coef["X"], length.out = nrow(y)) else init.sig LocatTry <- rep(fit0$coef["Intercept"], length.out = nrow(y)) llTry <- egev(giveWarning = FALSE)@loglikelihood(mu = NULL, y = y[, 1], w = w, residuals = FALSE, eta = cbind(theta2eta(LocatTry, .llocat , .elocat ), theta2eta(sigmaTry, .lscale , .escale ), theta2eta(shapeTry, .lshape , .eshape ))) if (llTry >= objecFunction) { if (est.sigma) init.sig <- sigmaTry init.mu <- rep(LocatTry, length.out = nrow(y)) objecFunction <- llTry bestxi <- shapeTry } } if (!length(init.xi)) init.xi <- rep(bestxi, length.out = nrow(y)) } else { init.xi <- rep(0.05, length.out = nrow(y)) if (!length(init.sig)) init.sig <- rep(sqrt(6 * var(y[, 1]))/pi, length.out = nrow(y)) EulerM <- -digamma(1) init.mu <- rep(median(y[, 1]) - EulerM*init.sig, length.out = nrow(y)) } bad <- ((1 + init.xi*(y-init.mu)/init.sig) <= 0) if (fred <- sum(bad)) { warning(paste(fred, "observations violating boundary", "constraints while initializing. 
Taking corrective action.")) init.xi[bad] <- ifelse(y[bad] > init.mu[bad], 0.1, -0.1) } etastart <- cbind(theta2eta(init.mu, .llocat , .elocat ), theta2eta(init.sig, .lscale , .escale ), theta2eta(init.xi, .lshape , .eshape )) } }), list( .llocat = llocat, .lscale = lscale, .lshape = lshape, .elocat = elocat, .escale = escale, .eshape = eshape, .ishape = ishape, .iscale = iscale, .gshape = gshape, .percentiles = percentiles, .tolshape0 = tolshape0, .imethod = imethod, .giveWarning = giveWarning ))), linkinv = eval(substitute(function(eta, extra = NULL) { Locat <- eta2theta(eta[, 1], .llocat , .elocat ) sigma <- eta2theta(eta[, 2], .lscale , .escale ) shape <- eta2theta(eta[, 3], .lshape , .eshape ) is.zero <- (abs(shape) < .tolshape0 ) cent <- extra$percentiles LP <- length(cent) fv <- matrix(as.numeric(NA), nrow(eta), LP) if (LP) { for(ii in 1:LP) { yp <- -log(cent[ii]/100) fv[!is.zero, ii] <- Locat[!is.zero] - sigma[!is.zero] * (1 - yp^(-shape[!is.zero])) / shape[!is.zero] fv[ is.zero, ii] <- Locat[ is.zero] - sigma[ is.zero] * log(yp) } dimnames(fv) <- list(dimnames(eta)[[1]], paste(as.character(cent), "%", sep = "")) } else { EulerM <- -digamma(1) fv <- Locat + sigma * EulerM # When shape = 0, is Gumbel fv[!is.zero] <- Locat[!is.zero] + sigma[!is.zero] * (gamma(1-shape[!is.zero])-1) / shape[!is.zero] fv[shape >= 1] <- NA # Mean exists only if shape < 1. } fv }, list( .llocat = llocat, .lscale = lscale, .lshape = lshape, .elocat = elocat, .escale = escale, .eshape = eshape, .tolshape0 = tolshape0 ))), last = eval(substitute(expression({ misc$earg <- vector("list", M) names(misc$earg) <- c(mynames1, mynames2, mynames3) misc$earg[[1]] <- .elocat misc$earg[[2]] <- .escale misc$earg[[3]] <- .eshape misc$link <- c( .llocat , .lscale , .lshape ) names(misc$link) <- c(mynames1, mynames2, mynames3) misc$Musual <- Musual misc$expected <- TRUE misc$multipleResponses <- FALSE misc$true.mu <- !length( .percentiles) # @fitted is not a true mu misc$percentiles <- .percentiles misc$expected <- TRUE misc$tolshape0 <- .tolshape0 if (ncol(y) == 1) y <- as.vector(y) if (any(shape < -0.5)) warning("some values of the shape parameter are less than -0.5") }), list( .llocat = llocat, .lscale = lscale, .lshape = lshape, .elocat = elocat, .escale = escale, .eshape = eshape, .tolshape0 = tolshape0, .percentiles = percentiles ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE,eta,extra = NULL) { Locat <- eta2theta(eta[, 1], .llocat , .elocat ) sigma <- eta2theta(eta[, 2], .lscale , .escale ) shape <- eta2theta(eta[, 3], .lshape , .eshape ) is.zero <- (abs(shape) < .tolshape0) zedd <- (y-Locat) / sigma r.vec <- rowSums(cbind(!is.na(y))) A <- 1 + shape * (y-Locat)/sigma ii <- 1:nrow(eta) A1 <- A[cbind(ii, r.vec)] mytolerance <- 0 # .Machine$double.eps if (any(bad <- (A1 <= mytolerance), na.rm = TRUE)) { cat("There are", sum(bad), "range violations in @loglikelihood\n") flush.console() } igev <- !is.zero & !bad igum <- is.zero & !bad pow <- 1 + 1/shape[igev] if (residuals) stop("loglikelihood residuals not ", "implemented yet") else { old.answer <- sum(bad) * (-1.0e10) + sum(w[igum] * (-r.vec[igum]*log(sigma[igum]) - exp(-zedd[igum,r.vec]) - rowSums(cbind(zedd, na.rm = TRUE)))) + sum(w[igev] * (-r.vec[igev]*log(sigma[igev]) - pow*rowSums(cbind(log(A[igev])), na.rm = TRUE) - A1[igev]^(-1/shape[igev]))) old.answer } }, list( .llocat = llocat, .lscale = lscale, .lshape = lshape, .elocat = elocat, .escale = escale, .eshape = eshape, .giveWarning = giveWarning, .tolshape0 = tolshape0 ))), vfamily 
= c("gev", "vextremes"), deriv = eval(substitute(expression({ Musual <- 3 r.vec <- rowSums(cbind(!is.na(y))) Locat <- eta2theta(eta[, 1], .llocat , .elocat ) sigma <- eta2theta(eta[, 2], .lscale , .escale ) shape <- eta2theta(eta[, 3], .lshape , .eshape ) dmu.deta <- dtheta.deta(Locat, .llocat , .elocat ) dsi.deta <- dtheta.deta(sigma, .lscale , .escale ) dxi.deta <- dtheta.deta(shape, .lshape , .eshape ) is.zero <- (abs(shape) < .tolshape0) ii <- 1:nrow(eta) zedd <- (y-Locat) / sigma A <- 1 + shape * zedd dA.dxi <- zedd # matrix dA.dmu <- -shape/sigma # vector dA.dsigma <- -shape*zedd/sigma # matrix pow <- 1 + 1/shape A1 <- A[cbind(ii, r.vec)] AAr1 <- dA.dmu/(shape * A1^pow) - pow * rowSums(cbind(dA.dmu/A), na.rm = TRUE) AAr2 <- dA.dsigma[cbind(ii,r.vec)] / (shape * A1^pow) - pow * rowSums(cbind(dA.dsigma/A), na.rm = TRUE) AAr3 <- 1/(shape * A1^pow) - pow * rowSums(cbind(dA.dsigma/A), na.rm = TRUE) dl.dmu <- AAr1 dl.dsi <- AAr2 - r.vec/sigma dl.dxi <- rowSums(cbind(log(A)), na.rm = TRUE)/shape^2 - pow * rowSums(cbind(dA.dxi/A), na.rm = TRUE) - (log(A1) / shape^2 - dA.dxi[cbind(ii,r.vec)] / (shape*A1)) * A1^(-1/shape) if (any(is.zero)) { zorro <- c(zedd[cbind(1:n,r.vec)]) zorro <- zorro[is.zero] ezedd <- exp(-zorro) dl.dmu[is.zero] <- (1-ezedd) / sigma[is.zero] dl.dsi[is.zero] <- (zorro * (1-ezedd) - 1) / sigma[is.zero] dl.dxi[is.zero] <- zorro * ((1 - ezedd) * zorro / 2 - 1) } c(w) * cbind(dl.dmu * dmu.deta, dl.dsi * dsi.deta, dl.dxi * dxi.deta) }), list( .llocat = llocat, .lscale = lscale, .lshape = lshape, .elocat = elocat, .escale = escale, .eshape = eshape, .tolshape0 = tolshape0 ))), weight = eval(substitute(expression({ kay <- -shape dd <- digamma(r.vec-kay+1) ddd <- digamma(r.vec+1) # Unnecessarily evaluated at each iteration temp13 <- -kay * dd + (kay^2 - kay + 1) / (1-kay) temp33 <- 1 - 2 * kay * ddd + kay^2 * (1 + trigamma(r.vec+1) + ddd^2) temp23 <- -kay * dd + (1+(1-kay)^2) / (1-kay) GR.gev <- function(j, ri, kay) gamma(ri - j*kay + 1) / gamma(ri) tmp2 <- (1-kay)^2 * GR.gev(2, r.vec, kay) # Latter is GR2 tmp1 <- (1-2*kay) * GR.gev(1, r.vec, kay) # Latter is GR1 k0 <- (1-2*kay) k1 <- k0 * kay k2 <- k1 * kay k3 <- k2 * kay # kay^3 * (1-2*kay) wz <- matrix(as.numeric(NA), n, 6) wz[, iam(1, 1, M)] <- tmp2 / (sigma^2 * k0) wz[, iam(1, 2, M)] <- (tmp2 - tmp1) / (sigma^2 * k1) wz[, iam(1, 3, M)] <- (tmp1 * temp13 - tmp2) / (sigma * k2) wz[, iam(2, 2, M)] <- (r.vec*k0 - 2*tmp1 + tmp2) / (sigma^2 * k2) wz[, iam(2, 3, M)] <- (r.vec*k1*ddd + tmp1 * temp23 - tmp2 - r.vec*k0) / (sigma * k3) wz[, iam(3, 3, M)] <- (2*tmp1*(-temp13) + tmp2 + r.vec*k0*temp33)/(k3*kay) if (any(is.zero)) { if (ncol(y) > 1) stop("cannot handle shape == 0 with a multivariate response") EulerM <- -digamma(1) wz[is.zero, iam(2, 2, M)] <- (pi^2/6 + (1-EulerM)^2) / sigma^2 wz[is.zero, iam(3, 3, M)] <- 2.4236 wz[is.zero, iam(1, 2, M)] <- (digamma(2) + 2*(EulerM-1)) / sigma^2 wz[is.zero, iam(1, 3, M)]= -(trigamma(1)/2 + digamma(1)* (digamma(1)/2+1))/sigma wz[is.zero, iam(2, 3, M)] <- (-dgammadx(2, 3)/6 + dgammadx(1, 1) + 2*dgammadx(1, 2) + 2*dgammadx(1, 3)/3) / sigma if (FALSE ) { wz[, iam(1, 2, M)] <- 2 * r.vec / sigma^2 wz[, iam(2, 2, M)] <- -4 * r.vec * digamma(r.vec+1) + 2 * r.vec + (4 * dgammadx(r.vec+1, deriv.arg = 1) - 3 * dgammadx(r.vec+1, deriv.arg = 2)) / gamma(r.vec) # Not checked } } wz[, iam(1, 1, M)] <- wz[, iam(1, 1, M)] * dmu.deta^2 wz[, iam(2, 2, M)] <- wz[, iam(2, 2, M)] * dsi.deta^2 wz[, iam(3, 3, M)] <- wz[, iam(3, 3, M)] * dxi.deta^2 wz[, iam(1, 2, M)] <- wz[, iam(1, 2, M)] * dmu.deta * dsi.deta 
wz[, iam(1, 3, M)] <- wz[, iam(1, 3, M)] * dmu.deta * (-dxi.deta) wz[, iam(2, 3, M)] <- wz[, iam(2, 3, M)] * dsi.deta * (-dxi.deta) c(w) * wz }), list( .llocat = llocat, .lscale = lscale, .lshape = lshape, .elocat = elocat, .escale = escale, .eshape = eshape )))) } dgammadx <- function(x, deriv.arg = 1) { if (deriv.arg == 0) { gamma(x) } else if (deriv.arg == 1) { digamma(x) * gamma(x) } else if (deriv.arg == 2) { gamma(x) * (trigamma(x) + digamma(x)^2) } else if (deriv.arg == 3) { gamma(x) * (psigamma(x, deriv = 2) + 2 * digamma(x) * trigamma(x)) + Recall(x, deriv.arg = 1) * (trigamma(x) + digamma(x)^2) } else if (deriv.arg == 4) { Recall(x, deriv.arg = 2) * (trigamma(x) + digamma(x)^2) + 2 * Recall(x, deriv.arg = 1) * (psigamma(x, deriv = 2) + 2*digamma(x) * trigamma(x)) + gamma(x) * (psigamma(x, deriv = 3) + 2*trigamma(x)^2 + 2 * digamma(x) * psigamma(x, deriv = 2)) } else { stop("cannot handle 'deriv' > 4") } } egev <- function(llocation = "identity", lscale = "loge", lshape = logoff(offset = 0.5), percentiles = c(95, 99), iscale = NULL, ishape = NULL, imethod = 1, gshape = c(-0.45, 0.45), tolshape0 = 0.001, giveWarning = TRUE, zero = 3) { if (!is.logical(giveWarning) || length(giveWarning) != 1) stop("bad input for argument 'giveWarning'") if (length(iscale) && !is.Numeric(iscale, positive = TRUE)) stop("bad input for argument 'iscale'") llocat <- as.list(substitute(llocation)) elocat <- link2list(llocat) llocat <- attr(elocat, "function.name") lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") lshape <- as.list(substitute(lshape)) eshape <- link2list(lshape) lshape <- attr(eshape, "function.name") if (!is.Numeric(gshape, allowable.length = 2) || gshape[1] >= gshape[2]) stop("bad input for argument 'gshape'") if (length(percentiles) && (!is.Numeric(percentiles, positive = TRUE) || max(percentiles) >= 100)) stop("bad input for argument 'percentiles'") if (!is.Numeric(imethod, allowable.length = 1, positive = TRUE, integer.valued = TRUE) || imethod > 2.5) stop("argument 'imethod' must be 1 or 2") if (length(ishape) && !is.Numeric(ishape)) stop("bad input for argument 'ishape'") if (!is.Numeric(tolshape0, allowable.length = 1, positive = TRUE) || tolshape0 > 0.1) stop("bad input for argument 'tolshape0'") if (length(zero) && !is.Numeric(zero, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'zero'") new("vglmff", blurb = c("Generalized extreme value distribution\n", "Links: ", namesof("location", link = llocat, earg = elocat), ", ", namesof("scale", link = lscale, earg = escale), ", ", namesof("shape", link = lshape, earg = eshape)), constraints = eval(substitute(expression({ constraints <- cm.zero.vgam(constraints, x, .zero, M) }), list( .zero = zero ))), initialize = eval(substitute(expression({ predictors.names <- c(namesof("location", .llocat , earg = .elocat , short = TRUE), namesof("scale", .lscale , earg = .escale , short = TRUE), namesof("shape", .lshape , earg = .eshape , short = TRUE)) if (ncol(as.matrix(y)) != 1) stop("response must be a vector or one-column matrix") if (!length(etastart)) { init.sig <- if (length( .iscale )) rep( .iscale , length.out = length(y)) else NULL init.xi <- if (length( .ishape )) rep( .ishape , length.out = length(y)) else NULL eshape <- .eshape if ( .lshape == "elogit" && length(init.xi) && (any(init.xi <= eshape$min | init.xi >= eshape$max))) stop("bad input for argument 'eshape'") if ( .imethod == 1) { nvector <- 4:10 # Arbitrary; could be made an argument ynvector <- 
quantile(y, probs = 1-1/nvector) objecFunction <- -Inf # Actually the log-likelihood est.sigma <- !length(init.sig) gshape <- .gshape temp234 <- if (length(init.xi)) init.xi[1] else seq(gshape[1], gshape[2], length.out = 12) for(xi.try in temp234) { xvec <- if (abs(xi.try) < .tolshape0) log(nvector) else (nvector^xi.try - 1) / xi.try fit0 <- lsfit(x = xvec, y=ynvector, intercept = TRUE) if (est.sigma) { sigmaTry <- rep(fit0$coef["X"], length.out = length(y)) } else { sigmaTry <- init.sig } muTry <- rep(fit0$coef["Intercept"], length.out = length(y)) llTry <- egev(giveWarning = FALSE)@loglikelihood(mu = NULL, y = y, w = w, residuals = FALSE, eta = cbind(theta2eta(muTry, .llocat , earg = .elocat ), theta2eta(sigmaTry, .lscale , earg = .escale ), theta2eta(xi.try, .lshape , earg = .eshape ))) if (llTry >= objecFunction) { if (est.sigma) init.sig <- sigmaTry init.mu <- rep(muTry, length.out = length(y)) objecFunction <- llTry bestxi <- xi.try } } if (!length(init.xi)) init.xi <- rep(bestxi, length.out = length(y)) } else { init.xi = rep(if (length(init.xi)) init.xi else 0.05, length.out = length(y)) if (!length(init.sig)) init.sig <- rep(sqrt(6*var(y))/pi, length.out = length(y)) EulerM <- -digamma(1) init.mu <- rep(median(y) - EulerM * init.sig, length.out = length(y)) } bad <- (1 + init.xi*(y-init.mu)/init.sig <= 0) if (fred <- sum(bad, na.rm = TRUE)) { warning(paste(fred, "observations violating boundary", "constraints while initializing. Taking corrective action.")) init.xi[bad] = ifelse(y[bad] > init.mu[bad], 0.01, -0.01) } extra$percentiles <- .percentiles etastart <- cbind(theta2eta(init.mu, .llocat , earg = .elocat ), theta2eta(init.sig, .lscale , earg = .escale ), theta2eta(init.xi, .lshape , earg = .eshape )) } }), list( .llocat = llocat, .lscale = lscale, .lshape = lshape, .elocat = elocat, .escale = escale, .eshape = eshape, .percentiles = percentiles, .tolshape0 = tolshape0, .imethod = imethod, .giveWarning= giveWarning, .iscale = iscale, .ishape = ishape, .gshape = gshape ))), linkinv = eval(substitute(function(eta, extra = NULL) { loc <- eta2theta(eta[, 1], .llocat , earg = .elocat ) sigma <- eta2theta(eta[, 2], .lscale , earg = .escale ) xi <- eta2theta(eta[, 3], .lshape , earg = .eshape ) is.zero <- (abs(xi) < .tolshape0) cent <- extra$percentiles LP <- length(cent) fv <- matrix(as.numeric(NA), nrow(eta), LP) if (LP) { for(ii in 1:LP) { yp <- -log(cent[ii]/100) fv[!is.zero,ii] <- loc[!is.zero] - sigma[!is.zero] * (1 - yp^(-xi[!is.zero])) / xi[!is.zero] fv[is.zero,ii] <- loc[is.zero] - sigma[is.zero] * log(yp) } dimnames(fv) <- list(dimnames(eta)[[1]], paste(as.character(cent), "%", sep = "")) } else { EulerM <- -digamma(1) fv <- loc + sigma * EulerM # When xi = 0, is Gumbel fv[!is.zero] <- loc[!is.zero] + sigma[!is.zero] * (gamma(1-xi[!is.zero])-1) / xi[!is.zero] fv[xi >= 1] <- NA # Mean exists only if xi < 1. 
} fv }, list( .llocat = llocat, .lscale = lscale, .lshape = lshape, .elocat = elocat, .escale = escale, .eshape = eshape, .tolshape0 = tolshape0 ))), last = eval(substitute(expression({ misc$links <- c(location = .llocat, scale = .lscale , shape = .lshape) misc$earg <- list(location = .elocat, scale = .escale, shape = .eshape) misc$true.mu <- !length( .percentiles) # @fitted is not a true mu misc$percentiles <- .percentiles misc$tolshape0 <- .tolshape0 misc$expected <- TRUE if (any(xi < -0.5)) warning("some values of the shape parameter are less than -0.5") }), list( .llocat = llocat, .lscale = lscale, .lshape = lshape, .elocat = elocat, .escale = escale, .eshape = eshape, .tolshape0 = tolshape0, .percentiles = percentiles ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { mmu <- eta2theta(eta[, 1], .llocat , earg = .elocat ) sigma <- eta2theta(eta[, 2], .lscale , earg = .escale ) xi <- eta2theta(eta[, 3], .lshape , earg = .eshape ) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else { sum(w * dgev(x=y, location=mmu, scale=sigma, shape=xi, tolshape0 = .tolshape0, log = TRUE, oobounds.log = -1.0e04, giveWarning= .giveWarning)) } }, list( .llocat = llocat, .lscale = lscale, .lshape = lshape, .elocat = elocat, .escale = escale, .eshape = eshape, .giveWarning= giveWarning, .tolshape0 = tolshape0 ))), vfamily = c("egev", "vextremes"), deriv = eval(substitute(expression({ Locat <- eta2theta(eta[, 1], .llocat , earg = .elocat ) sigma <- eta2theta(eta[, 2], .lscale , earg = .escale ) xi <- eta2theta(eta[, 3], .lshape , earg = .eshape) is.zero <- (abs(xi) < .tolshape0) zedd <- (y-Locat) / sigma A <- 1 + xi * zedd dA.dxi <- zedd dA.dmu <- -xi / sigma dA.dsigma <- -xi * zedd / sigma pow <- 1 + 1/xi if (any(bad <- A <= 0, na.rm = TRUE)) stop(sum(bad, na.rm = TRUE), " observations violating boundary constraints in '@deriv'") AA <- 1/(xi*A^pow)- pow/A dl.dmu <- dA.dmu * AA dl.dsi <- dA.dsigma * AA - 1/sigma dl.dxi <- log(A)/xi^2 - pow * dA.dxi / A - (log(A)/xi^2 - dA.dxi /(xi*A)) * A^(-1/xi) if (any(is.zero)) { ezedd <- exp(-zedd[is.zero]) dl.dmu[is.zero] <- (1 - ezedd) / sigma[is.zero] dl.dsi[is.zero] <- (zedd[is.zero] * (1 - ezedd) - 1) / sigma[is.zero] dl.dxi[is.zero] <- zedd[is.zero] * ((1 - ezedd) * zedd[is.zero] / 2 - 1) } dmu.deta <- dtheta.deta(Locat, .llocat , earg = .elocat ) dsi.deta <- dtheta.deta(sigma, .lscale , earg = .escale ) dxi.deta <- dtheta.deta(xi, .lshape , earg = .eshape) c(w) * cbind(dl.dmu * dmu.deta, dl.dsi * dsi.deta, dl.dxi * dxi.deta) }), list( .llocat = llocat, .lscale = lscale, .lshape = lshape, .elocat = elocat, .escale = escale, .eshape = eshape, .tolshape0 = tolshape0 ))), weight = eval(substitute(expression({ bad <- A <= 0 if (any(bad, na.rm = TRUE)) stop(sum(bad, na.rm = TRUE), " observations violating boundary constraints in '@weight'") kay <- -xi # for the formulae kay[abs(kay-0.5) < .tolshape0] <- 0.501 temp100 <- gamma(2-kay) pp <- (1-kay)^2 * gamma(1-2*kay) # gamma(0) is undefined so kay != 0.5 qq <- temp100 * (digamma(1-kay) - (1-kay)/kay) wz <- matrix(as.numeric(NA), n, 6) wz[, iam(1, 1, M)] <- pp / sigma^2 wz[, iam(2, 2, M)] <- (1-2*temp100 + pp) / (sigma * kay)^2 EulerM <- -digamma(1) wz[, iam(3, 3, M)] <- (pi^2 / 6 + (1-EulerM-1/kay)^2 + (2*qq + pp/kay)/kay) / kay^2 wz[, iam(1, 2, M)] <- (pp - temp100) / (sigma^2 * kay) wz[, iam(1, 3, M)] <- -(qq + pp/kay) / (sigma * kay) wz[, iam(2, 3, M)] <- (1-EulerM - (1-temp100)/kay - qq - pp/kay) / (sigma * kay^2) if (any(is.zero)) { wz[is.zero, 
iam(2, 2, M)] <- (pi^2/6 + (1-EulerM)^2) / sigma^2 wz[is.zero, iam(3, 3, M)] <- 2.4236 wz[is.zero, iam(1, 2, M)] <- (digamma(2) + 2*(EulerM-1)) / sigma^2 wz[is.zero, iam(1, 3, M)] <- -(trigamma(1)/2 + digamma(1)* (digamma(1)/2+1))/sigma wz[is.zero, iam(2, 3, M)] <- (-dgammadx(2, 3)/6 + dgammadx(1, 1) + 2*dgammadx(1, 2) + 2*dgammadx(1, 3)/3)/sigma } wz[, iam(1, 1, M)] <- wz[, iam(1, 1, M)] * dmu.deta^2 wz[, iam(2, 2, M)] <- wz[, iam(2, 2, M)] * dsi.deta^2 wz[, iam(3, 3, M)] <- wz[, iam(3, 3, M)] * dxi.deta^2 wz[, iam(1, 2, M)] <- wz[, iam(1, 2, M)] * dmu.deta * dsi.deta wz[, iam(1, 3, M)] <- wz[, iam(1, 3, M)] * dmu.deta * (-dxi.deta) wz[, iam(2, 3, M)] <- wz[, iam(2, 3, M)] * dsi.deta * (-dxi.deta) c(w) * wz }), list( .eshape = eshape, .tolshape0 = tolshape0 )))) } rgumbel <- function(n, location = 0, scale = 1) { use.n <- if ((length.n <- length(n)) > 1) length.n else if (!is.Numeric(n, integer.valued = TRUE, allowable.length = 1, positive = TRUE)) stop("bad input for argument 'n'") else n answer <- location - scale * log(-log(runif(use.n))) answer[scale <= 0] <- NaN answer } dgumbel <- function(x, location = 0, scale = 1, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) zedd <- (x - location) / scale logdensity <- -zedd - exp(-zedd) - log(scale) if (log.arg) logdensity else exp(logdensity) } qgumbel <- function(p, location = 0, scale = 1) { answer <- location - scale * log(-log(p)) answer[scale <= 0] <- NaN answer[p < 0] <- NaN answer[p > 1] <- NaN answer[p == 0] <- -Inf answer[p == 1] <- Inf answer } pgumbel <- function(q, location = 0, scale = 1) { answer <- exp(-exp(-(q - location) / scale)) answer[scale <= 0] <- NaN answer } gumbel <- function(llocation = "identity", lscale = "loge", iscale = NULL, R = NA, percentiles = c(95, 99), mpv = FALSE, zero = NULL) { llocat <- as.list(substitute(llocation)) elocat <- link2list(llocat) llocat <- attr(elocat, "function.name") lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") if (!is.logical(mpv) || length(mpv) != 1) stop("bad input for argument 'mpv'") if (length(percentiles) && (!is.Numeric(percentiles, positive = TRUE) || max(percentiles) >= 100)) stop("bad input for argument 'percentiles'") if (length(zero) && !is.Numeric(zero, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'zero'") if (length(iscale) && !is.Numeric(iscale, positive = TRUE)) stop("bad input for argument 'iscale'") new("vglmff", blurb = c("Gumbel distribution for extreme value regression\n", "Links: ", namesof("location", llocat, earg = elocat ), ", ", namesof("scale", lscale, earg = escale )), constraints = eval(substitute(expression({ constraints <- cm.zero.vgam(constraints, x, .zero, M) }), list( .zero = zero ))), initialize = eval(substitute(expression({ predictors.names <- c(namesof("location", .llocat , earg = .elocat , short = TRUE), namesof("scale", .lscale , earg = .escale , short = TRUE)) y <- as.matrix(y) if (ncol(y) > 1) y <- -t(apply(-y, 1, sort, na.last = TRUE)) r.vec <- rowSums(cbind(!is.na(y))) if (any(r.vec == 0)) stop("There is at least one row of the response containing all NAs") if (ncol(y) > 1) { yiri <- y[cbind(1:nrow(y), r.vec)] sc.init <- if (is.Numeric( .iscale, positive = TRUE)) .iscale else {3 * (rowMeans(y, na.rm = TRUE) - yiri)} sc.init <- rep(sc.init, length = nrow(y)) sc.init[sc.init <= 0.0001] <- 1 # Used to be .iscale loc.init <- yiri + sc.init * log(r.vec) } else { sc.init <- if (is.Numeric( .iscale, 
positive = TRUE)) .iscale else 1.1 * (0.01+sqrt(var(y)*6)) / pi sc.init <- rep(sc.init, length.out = n) EulerM <- -digamma(1) loc.init <- (y - sc.init * EulerM) loc.init[loc.init <= 0] <- min(y) } extra$R <- .R extra$mpv <- .mpv extra$percentiles <- .percentiles if (!length(etastart)) { etastart <- cbind(theta2eta(loc.init, .llocat , earg = .elocat ), theta2eta( sc.init, .lscale , earg = .escale )) } }), list( .llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = escale, .iscale = iscale, .R = R, .mpv = mpv, .percentiles = percentiles ))), linkinv = eval(substitute(function(eta, extra = NULL) { loc <- eta2theta(eta[, 1], .llocat , earg = .elocat ) sigma <- eta2theta(eta[, 2], .lscale , earg = .escale ) Percentiles <- extra$percentiles LP <- length(Percentiles) # may be 0 if (LP > 0) { mpv <- extra$mpv mu <- matrix(as.numeric(NA), nrow(eta), LP + mpv) # LP may be 0 Rvec <- extra$R for(ii in 1:LP) { ci <- if (is.Numeric(Rvec)) Rvec * (1 - Percentiles[ii] / 100) else -log(Percentiles[ii] / 100) mu[,ii] <- loc - sigma * log(ci) } if (mpv) mu[,ncol(mu)] <- loc - sigma * log(log(2)) dmn2 <- paste(as.character(Percentiles), "%", sep = "") if (mpv) dmn2 <- c(dmn2, "MPV") dimnames(mu) <- list(dimnames(eta)[[1]], dmn2) } else { EulerM <- -digamma(1) mu <- loc + sigma * EulerM } mu }, list( .llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = escale ))), last = eval(substitute(expression({ misc$links <- c(location = .llocat , scale = .lscale ) misc$earg <- list(location = .elocat , scale = .escale ) misc$R <- .R misc$mpv <- .mpv misc$true.mu <- !length( .percentiles) # @fitted is not a true mu misc$percentiles <- .percentiles }), list( .llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = escale, .percentiles = percentiles, .mpv = mpv, .R = R ))), vfamily = c("gumbel", "vextremes"), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { loc <- eta2theta(eta[, 1], .llocat, earg = .elocat ) sigma <- eta2theta(eta[, 2], .lscale , earg = .escale ) r.vec <- rowSums(cbind(!is.na(y))) yiri <- y[cbind(1:nrow(y),r.vec)] ans <- -r.vec * log(sigma) - exp( -(yiri-loc)/sigma ) max.r.vec <- max(r.vec) for(jay in 1:max.r.vec) { index <- (jay <= r.vec) ans[index] <- ans[index] - (y[index,jay]-loc[index]) / sigma[index] } if (residuals) stop("loglikelihood residuals not ", "implemented yet") else { sum(c(w) * ans) } }, list( .llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = escale ))), deriv = eval(substitute(expression({ loc <- eta2theta(eta[, 1], .llocat , earg = .elocat ) sigma <- eta2theta(eta[, 2], .lscale , earg = .escale ) r.vec <- rowSums(cbind(!is.na(y))) yiri <- y[cbind(1:nrow(y),r.vec)] yi.bar <- rowMeans(y, na.rm = TRUE) temp2 <- (yiri - loc) / sigma term2 <- exp(-temp2) dloc.deta <- dtheta.deta(loc, .llocat, earg = .elocat) dsigma.deta <- dtheta.deta(sigma, .lscale , earg = .escale ) dl.dloc <- (r.vec - term2) / sigma dl.dsigma <- (rowSums((y - loc) / sigma, na.rm = TRUE) - r.vec - temp2 * term2) / sigma c(w) * cbind(dl.dloc * dloc.deta, dl.dsigma * dsigma.deta) }), list( .llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = escale ))), weight = eval(substitute(expression({ temp6 <- digamma(r.vec) # , integer = T temp5 <- digamma(1:max(r.vec)) # , integer=T temp5 <- matrix(temp5, n, max(r.vec), byrow = TRUE) temp5[col(temp5) > r.vec] <- 0 temp5 <- temp5 %*% rep(1, ncol(temp5)) wz <- matrix(as.numeric(NA), n, dimm(M = 2)) # 3=dimm(M = 2) wz[, iam(1, 1, M)] <- r.vec / sigma^2 wz[, iam(2, 1, M)] <- -(1 + r.vec * 
temp6) / sigma^2 wz[, iam(2, 2, M)] <- (2*(r.vec+1)*temp6 + r.vec*(trigamma(r.vec) + temp6^2) + 2 - r.vec - 2*temp5) / sigma^2 wz[, iam(1, 1, M)] <- wz[, iam(1, 1, M)] * dloc.deta^2 wz[, iam(2, 1, M)] <- wz[, iam(2, 1, M)] * dsigma.deta * dloc.deta wz[, iam(2, 2, M)] <- wz[, iam(2, 2, M)] * dsigma.deta^2 c(w) * wz }), list( .lscale = lscale )))) } rgpd <- function(n, location = 0, scale = 1, shape = 0) { use.n <- if ((length.n <- length(n)) > 1) length.n else if (!is.Numeric(n, integer.valued = TRUE, allowable.length = 1, positive = TRUE)) stop("bad input for argument 'n'") else n if (!is.Numeric(location)) stop("bad input for argument 'location'") if (!is.Numeric(shape)) stop("bad input for argument 'shape'") ans <- numeric(use.n) if (length(shape) != use.n) shape <- rep(shape, length.out = use.n) if (length(location) != use.n) location <- rep(location, length.out = use.n); if (length(scale) != use.n) scale <- rep(scale, length.out = use.n) scase <- abs(shape) < sqrt( .Machine$double.eps ) nscase <- sum(scase) if (use.n - nscase) ans[!scase] <- location[!scase] + scale[!scase] * ((runif(use.n - nscase))^(-shape[!scase])-1) / shape[!scase] if (nscase) ans[scase] <- location[scase] - scale[scase] * log(runif(nscase)) ans[scale <= 0] <- NaN ans } dgpd <- function(x, location = 0, scale = 1, shape = 0, log = FALSE, tolshape0 = sqrt( .Machine$double.eps ), oobounds.log = -Inf, giveWarning = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) if (oobounds.log > 0) stop("bad input for argument 'oobounds.log'") if (!is.Numeric(tolshape0, allowable.length = 1, positive = TRUE)) stop("bad input for argument 'tolshape0'") L <- max(length(x), length(location), length(scale), length(shape)) if (length(shape) != L) shape <- rep(shape, length.out = L) if (length(location) != L) location <- rep(location, length.out = L); if (length(scale) != L) scale <- rep(scale, length.out = L) if (length(x) != L) x <- rep(x, length.out = L) logdensity <- rep(log(0), length.out = L) scase <- abs(shape) < tolshape0 nscase <- sum(scase) if (L - nscase) { zedd <- (x-location) / scale xok <- (!scase) & (zedd > 0) & (1 + shape*zedd > 0) logdensity[xok] <- -(1 + 1/shape[xok])*log1p(shape[xok]*zedd[xok]) - log(scale[xok]) outofbounds <- (!scase) & ((zedd <= 0) | (1 + shape*zedd <= 0)) if (any(outofbounds)) { logdensity[outofbounds] <- oobounds.log no.oob <- sum(outofbounds) if (giveWarning) warning(no.oob, " observation", ifelse(no.oob > 1, "s are", " is"), " out of bounds") } } if (nscase) { xok <- scase & (x > location) logdensity[xok] <- -(x[xok] - location[xok]) / scale[xok] - log(scale[xok]) outofbounds <- scase & (x <= location) if (any(outofbounds)) { logdensity[outofbounds] <- oobounds.log no.oob <- sum(outofbounds) if (giveWarning) warning(no.oob, " observation", ifelse(no.oob > 1, "s are", " is"), " out of bounds") } } logdensity[scale <= 0] <- NaN if (log.arg) logdensity else exp(logdensity) } pgpd <- function(q, location = 0, scale = 1, shape = 0) { if (!is.Numeric(q)) stop("bad input for argument 'q'") if (!is.Numeric(location)) stop("bad input for argument 'location'") if (!is.Numeric(shape)) stop("bad input for argument 'shape'") use.n <- max(length(q), length(location), length(scale), length(shape)) ans <- numeric(use.n) if (length(shape) != use.n) shape <- rep(shape, length.out = use.n) if (length(location) != use.n) location <- rep(location, length.out = use.n); if (length(scale) != use.n) scale <- rep(scale, length.out = use.n) if (length(q) != use.n) q 
<- rep(q, length.out = use.n) zedd <- (q - location) / scale use.zedd <- pmax(zedd, 0) scase <- abs(shape) < sqrt( .Machine$double.eps ) nscase <- sum(scase) if (use.n - nscase) { ans <- 1 - pmax(1 + shape * use.zedd, 0)^(-1/shape) } if (nscase) { pos <- (zedd >= 0) ind9 <- ( pos & scase) ans[ind9] <- -expm1(-use.zedd[ind9]) ind9 <- (!pos & scase) ans[ind9] <- 0 } ans[scale <= 0] <- NaN ans } qgpd <- function(p, location = 0, scale = 1, shape = 0) { use.n <- max(length(p), length(location), length(scale), length(shape)) ans <- numeric(use.n) if (length(shape) != use.n) shape <- rep(shape, length.out = use.n) if (length(location) != use.n) location <- rep(location, length.out = use.n); if (length(scale) != use.n) scale <- rep(scale, length.out = use.n) if (length(p) != use.n) p <- rep(p, length.out = use.n) scase <- abs(shape) < sqrt( .Machine$double.eps ) nscase <- sum(scase) if (use.n - nscase) { ans[!scase] <- location[!scase] + scale[!scase] * ((1-p[!scase])^(-shape[!scase]) - 1) / shape[!scase] } if (nscase) { ans[scase] <- location[scase] - scale[scase] * log1p(-p[scase]) } ans[p < 0] <- NaN ans[p > 1] <- NaN ans[(p == 0)] <- location[p == 0] ans[(p == 1) & (shape >= 0)] <- Inf ind5 <- (p == 1) & (shape < 0) ans[ind5] <- location[ind5] - scale[ind5] / shape[ind5] ans[scale <= 0] <- NaN ans } gpd <- function(threshold = 0, lscale = "loge", lshape = logoff(offset = 0.5), percentiles = c(90, 95), iscale = NULL, ishape = NULL, tolshape0 = 0.001, giveWarning = TRUE, imethod = 1, zero = -2) { if (!is.logical(giveWarning) || length(giveWarning) != 1) stop("bad input for argument 'giveWarning'") if (!is.Numeric(threshold)) stop("bad input for argument 'threshold'") if (!is.Numeric(imethod, allowable.length = 1, positive = TRUE, integer.valued = TRUE) || imethod > 2.5) stop("argument 'imethod' must be 1 or 2") lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") lshape <- as.list(substitute(lshape)) eshape <- link2list(lshape) lshape <- attr(eshape, "function.name") if (length(percentiles) && (!is.Numeric(percentiles, positive = TRUE) || max(percentiles) >= 100)) stop("bad input for argument 'percentiles'") if (!is.Numeric(tolshape0, allowable.length = 1, positive = TRUE) || tolshape0 > 0.1) stop("bad input for argument 'tolshape0'") if (length(zero) && !is.Numeric(zero, integer.valued = TRUE)) stop("bad input for argument 'zero'") new("vglmff", blurb = c("Generalized Pareto distribution\n", "Links: ", namesof("scale", link = lscale, earg = escale ), ", ", namesof("shape", link = lshape, earg = eshape )), constraints = eval(substitute(expression({ dotzero <- .zero Musual <- 2 eval(negzero.expression) }), list( .zero = zero ))), infos = eval(substitute(function(...) 
{ list(Musual = 2, zero = .zero ) }, list( .zero = zero ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y ncoly <- ncol(y) Musual <- 2 extra$ncoly <- ncoly extra$Musual <- Musual M <- Musual * ncoly y.names <- dimnames(y)[[2]] if (length(y.names) != ncoly) y.names <- paste("Y", 1:ncoly, sep = "") extra$y.names <- y.names Threshold <- if (is.Numeric( .threshold )) .threshold else 0 Threshold <- matrix(Threshold, n, ncoly, byrow = TRUE) if (is.Numeric( .threshold )) { orig.y <- y } ystar <- as.matrix(y - Threshold) # Operate on ystar extra$threshold <- Threshold mynames1 <- paste("scale", if (ncoly > 1) 1:ncoly else "", sep = "") mynames2 <- paste("shape", if (ncoly > 1) 1:ncoly else "", sep = "") predictors.names <- c(namesof(mynames1, .lscale , earg = .escale , tag = FALSE), namesof(mynames2, .lshape , earg = .eshape , tag = FALSE))[ interleave.VGAM(M, M = Musual)] if (!length(etastart)) { meany <- colSums(ystar * w) / colSums(w) vary <- apply(ystar, 2, var) mediany <- apply(ystar, 2, median) init.xii <- if (length( .ishape )) .ishape else { if ( .imethod == 1) -0.5 * (meany^2 / vary - 1) else 0.5 * (1 - mediany^2 / vary) } init.sig <- if (length( .iscale )) .iscale else { if (.imethod == 1) 0.5 * meany * (meany^2 / vary + 1) else abs(1 - init.xii) * mediany } init.xii <- matrix(init.xii, n, ncoly, byrow = TRUE) init.sig <- matrix(init.sig, n, ncoly, byrow = TRUE) init.sig[init.sig <= 0.0] <- 0.01 # sigma > 0 init.xii[init.xii <= -0.5] <- -0.40 # Fisher scoring works if xi > -0.5 init.xii[init.xii >= 1.0] <- 0.90 # Mean/var exists if xi < 1 / 0.5 if ( .lshape == "loge") init.xii[init.xii <= 0.0] <- 0.05 etastart <- cbind(theta2eta(init.sig, .lscale , earg = .escale ), theta2eta(init.xii, .lshape , earg = .eshape ))[, interleave.VGAM(M, M = Musual)] } }), list( .lscale = lscale, .lshape = lshape, .iscale = iscale, .ishape = ishape, .escale = escale, .eshape = eshape, .percentiles = percentiles, .threshold = threshold, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { sigma <- eta2theta(eta[, c(TRUE, FALSE)], .lscale , earg = .escale ) shape <- eta2theta(eta[, c(FALSE, TRUE)], .lshape , earg = .eshape ) if (!is.matrix(sigma)) sigma <- as.matrix(sigma) if (!is.matrix(shape)) shape <- as.matrix(shape) Musual <- 2 pcent <- .percentiles LP <- length(pcent) # NULL means LP == 0 and the mean is returned ncoly <- ncol(eta) / Musual if (!length(y.names <- extra$y.names)) y.names <- paste("Y", 1:ncoly, sep = "") Threshold <- extra$threshold if (LP) { do.one <- function(yvec, shape, scale, threshold, percentiles = c(90, 95), y.name = NULL, tolshape0 = 0.001) { is.zero <- (abs(shape) < tolshape0 ) # A matrix LP <- length(percentiles) fv <- matrix(as.numeric(NA), length(shape), LP) is.zero <- (abs(shape) < tolshape0) for(ii in 1:LP) { temp <- 1 - percentiles[ii] / 100 fv[!is.zero, ii] <- threshold[!is.zero] + (temp^(-shape[!is.zero]) - 1) * scale[!is.zero] / shape[!is.zero] fv[ is.zero, ii] <- threshold[is.zero] - scale[is.zero] * log(temp) } post.name <- paste(as.character(percentiles), "%", sep = "") dimnames(fv) <- list(dimnames(shape)[[1]], if (is.null(y.name)) post.name else paste(y.name, post.name, sep = " ")) fv } fv <- matrix(-1, nrow(sigma), LP * ncoly) colnames.cumsum.fv <- NULL for(jlocal in 1:ncoly) { block.mat.fv <- do.one(yvec = y[, jlocal], shape = shape[, jlocal], scale = sigma[, jlocal], threshold = Threshold[, jlocal], 
percentiles = pcent, y.name = if (ncoly > 1) y.names[jlocal] else NULL, tolshape0 = .tolshape0 ) fv[, (jlocal - 1) * LP + (1:LP)] <- block.mat.fv colnames.cumsum.fv <- c(colnames.cumsum.fv, colnames(block.mat.fv)) } colnames(fv) <- colnames.cumsum.fv } else { fv <- Threshold + sigma / (1 - shape) fv[shape >= 1] <- NA # Mean exists only if shape < 1. dimnames(fv) <- list(dimnames(eta)[[1]], y.names) } fv }, list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape, .threshold = threshold, .tolshape0 = tolshape0, .percentiles = percentiles ))), last = eval(substitute(expression({ Musual <- extra$Musual misc$link <- c(rep( .lscale , length = ncoly), rep( .lshape , length = ncoly))[interleave.VGAM(M, M = Musual)] temp.names <- c(mynames1, mynames2)[interleave.VGAM(M, M = Musual)] names(misc$link) <- temp.names misc$earg <- vector("list", M) names(misc$earg) <- temp.names for(ii in 1:ncoly) { misc$earg[[Musual*ii-1]] <- .escale misc$earg[[Musual*ii ]] <- .eshape } misc$Musual <- Musual misc$expected <- TRUE misc$multipleResponses <- TRUE misc$true.mu <- FALSE # @fitted is not a true mu misc$percentiles <- .percentiles misc$tolshape0 <- .tolshape0 if (any(Shape < -0.5)) warning("some values of the shape parameter are less than -0.5") }), list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape, .threshold = threshold, .tolshape0 = tolshape0, .percentiles = percentiles ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { sigma <- eta2theta(eta[, c(TRUE, FALSE)], .lscale , earg = .escale ) Shape <- eta2theta(eta[, c(FALSE, TRUE)], .lshape , earg = .eshape ) Threshold <- extra$threshold if (residuals) stop("loglikelihood residuals not ", "implemented yet") else { sum(c(w) * dgpd(x = y, location = Threshold, scale = sigma, shape = Shape, tolshape0 = .tolshape0, giveWarning = .giveWarning, log = TRUE, oobounds.log = -1.0e04)) } }, list( .tolshape0 = tolshape0, .giveWarning= giveWarning, .escale = escale, .eshape = eshape, .lscale = lscale, .lshape = lshape ))), vfamily = c("gpd", "vextremes"), deriv = eval(substitute(expression({ Musual <- 2 sigma <- eta2theta(eta[, c(TRUE, FALSE)], .lscale , earg = .escale ) Shape <- eta2theta(eta[, c(FALSE, TRUE)], .lshape , earg = .eshape ) Threshold <- extra$threshold ystar <- y - Threshold # Operate on ystar A <- 1 + Shape * ystar / sigma mytolerance <- .Machine$double.eps bad <- (A <= mytolerance) if (any(bad) && any(w[bad] != 0)) { cat(sum(w[bad],na.rm = TRUE), # "; ignoring them" "observations violating boundary constraints\n") flush.console() } if (any(is.zero <- (abs(Shape) < .tolshape0))) { } igpd <- !is.zero & !bad iexp <- is.zero & !bad dl.dShape <- dl.dsigma <- rep(0, length.out = length(y)) dl.dsigma[igpd] <- ((1 + Shape[igpd]) * ystar[igpd] / (sigma[igpd] + Shape[igpd] * ystar[igpd])-1) / sigma[igpd] dl.dShape[igpd] <- log(A[igpd])/Shape[igpd]^2 - (1 + 1/Shape[igpd]) * ystar[igpd] / (A[igpd] * sigma[igpd]) dl.dShape[iexp] <- ystar[iexp] * (0.5*ystar[iexp]/sigma[iexp] - 1) / sigma[iexp] dsigma.deta <- dtheta.deta(sigma, .lscale , earg = .escale ) dShape.deta <- dtheta.deta(Shape, .lshape , earg = .eshape ) myderiv <- c(w) * cbind(dl.dsigma * dsigma.deta, dl.dShape * dShape.deta) myderiv[, interleave.VGAM(M, M = Musual)] }), list( .tolshape0 = tolshape0, .lscale = lscale, .escale = escale, .lshape = lshape, .eshape = eshape ))), weight = eval(substitute(expression({ ned2l.dscale2 <- 1 / ((1+2*Shape) * sigma^2) ned2l.dshape2 <- 2 / ((1+2*Shape) * (1+Shape)) 
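# Together with the cross term below, these are the entries of the 2 x 2
# expected (Fisher) information matrix of the GPD. All three ned2l.*
# quantities are finite only for Shape > -0.5, the usual regularity
# boundary for Fisher scoring; this is why @initialize clips the initial
# shape values to lie above -0.5.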
ned2l.dshapescale <- 1 / ((1+2*Shape) * (1+Shape) * sigma) # > 0 ! NOS <- M / Musual wz <- array(c(c(w) * ned2l.dscale2 * dsigma.deta^2, c(w) * ned2l.dshape2 * dShape.deta^2, c(w) * ned2l.dshapescale * dsigma.deta * dShape.deta), dim = c(n, M / Musual, 3)) wz <- arwz2wz(wz, M = M, Musual = Musual) wz }), list( .lscale = lscale )))) } meplot.default <- function(y, main = "Mean Excess Plot", xlab = "Threshold", ylab = "Mean Excess", lty = c(2, 1:2), conf = 0.95, col = c("blue", "black", "blue"), type = "l", ...) { if (!is.Numeric(y)) stop("bad input for argument 'y'") n <- length(y) sy <- sort(y) dsy <- rev(sy) # decreasing sequence me <- rev(cumsum(dsy)) / (n:1) - sy me2 <- rev(cumsum(dsy^2)) var <- (me2 - (n:1) * (me+sy)^2) / (n:1) ci <- qnorm((1+conf)/2) * sqrt(abs(var)) / sqrt(n:1) ci[length(ci)] <- NA mymat <- cbind(me - ci, me, me + ci) sy <- sy - sqrt( .Machine$double.eps ) matplot(sy, mymat, main = main, xlab = xlab, ylab = ylab, lty = lty, col = col, type = type, ...) invisible(list(threshold = sy, meanExcess = me, plusminus = ci)) } meplot.vlm <- function(object, ...) { if (!length(y <- object@y)) stop("y slot is empty") ans <- meplot(as.numeric(y), ...) invisible(ans) } if (!isGeneric("meplot")) setGeneric("meplot", function(object, ...) standardGeneric("meplot")) setMethod("meplot", "numeric", function(object, ...) meplot.default(y=object, ...)) setMethod("meplot", "vlm", function(object, ...) meplot.vlm(object, ...)) guplot.default <- function(y, main = "Gumbel Plot", xlab = "Reduced data", ylab = "Observed data", type = "p", ...) { if (!is.Numeric(y)) stop("bad input for argument 'y'") n <- length(y) sy <- sort(y) x <- -log(-log(((1:n) - 0.5) / n)) plot(x, sy, main = main, xlab = xlab, ylab = ylab, type = type, ...) invisible(list(x = x, y = sy)) } guplot.vlm <- function(object, ...) { if (!length(y <- object@y)) stop("y slot is empty") ans <- guplot(as.numeric(y), ...) invisible(ans) } if (!isGeneric("guplot")) setGeneric("guplot", function(object, ...) standardGeneric("guplot")) setMethod("guplot", "numeric", function(object, ...) guplot.default(y=object, ...)) setMethod("guplot", "vlm", function(object, ...) 
guplot.vlm(object, ...)) egumbel <- function(llocation = "identity", lscale = "loge", iscale = NULL, R = NA, percentiles = c(95, 99), mpv = FALSE, zero = NULL) { llocat <- as.list(substitute(llocation)) elocat <- link2list(llocat) llocat <- attr(elocat, "function.name") lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") if (!is.logical(mpv) || length(mpv) != 1) stop("bad input for argument 'mpv'") if (length(percentiles) && (!is.Numeric(percentiles, positive = TRUE) || max(percentiles) >= 100)) stop("bad input for argument 'percentiles'") if (length(zero) && !is.Numeric(zero, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'zero'") if (length(iscale) && !is.Numeric(iscale, positive = TRUE)) stop("bad input for argument 'iscale'") new("vglmff", blurb = c("Gumbel distribution (univariate response)\n\n", "Links: ", namesof("location", llocat, earg = elocat, tag = TRUE), ", ", namesof("scale", lscale, earg = escale , tag = TRUE), "\n", "Mean: location + scale*0.5772..\n", "Variance: pi^2 * scale^2 / 6"), constraints = eval(substitute(expression({ constraints <- cm.zero.vgam(constraints, x, .zero, M) }), list( .zero = zero ))), initialize = eval(substitute(expression({ y <- cbind(y) if (ncol(y) > 1) stop("Use gumbel() to handle multivariate responses") if (min(y) <= 0) stop("all response values must be positive") predictors.names <- c(namesof("location", .llocat , earg = .elocat , tag = FALSE), namesof("scale", .lscale , earg = .escale , tag = FALSE)) extra$R <- .R extra$mpv <- .mpv extra$percentiles <- .percentiles if (!length(etastart)) { sca.init <- if (is.Numeric( .iscale, positive = TRUE)) .iscale else 1.5 * (0.01+sqrt(var(y)*6)) / pi sca.init <- rep(sca.init, length.out = n) EulerM <- -digamma(1) loc.init <- (y - sca.init * EulerM) etastart <- cbind(theta2eta(loc.init, .llocat , earg = .elocat ), theta2eta(sca.init, .lscale , earg = .escale )) } }), list( .llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = escale, .iscale = iscale, .R = R, .mpv = mpv, .percentiles = percentiles ))), linkinv = eval(substitute( function(eta, extra = NULL) { locat <- eta2theta(eta[, 1], .llocat, earg = .elocat) sigma <- eta2theta(eta[, 2], .lscale , earg = .escale ) EulerM <- -digamma(1) Percentiles <- extra$percentiles mpv <- extra$mpv LP <- length(Percentiles) # may be 0 if (!LP) return(locat + sigma * EulerM) mu <- matrix(as.numeric(NA), nrow(eta), LP + mpv) Rvec <- extra$R if (1 <= LP) for(ii in 1:LP) { ci <- if (is.Numeric(Rvec)) Rvec * (1 - Percentiles[ii] / 100) else -log(Percentiles[ii] / 100) mu[,ii] <- locat - sigma * log(ci) } if (mpv) mu[, ncol(mu)] <- locat - sigma * log(log(2)) dmn2 <- if (LP >= 1) paste(as.character(Percentiles), "%", sep = "") else NULL if (mpv) dmn2 <- c(dmn2, "MPV") dimnames(mu) <- list(dimnames(eta)[[1]], dmn2) mu }, list( .llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = escale ))), last = eval(substitute(expression({ misc$link <- c(location = .llocat, scale = .lscale) misc$earg <- list(location = .elocat, scale = .escale) misc$true.mu <- !length( .percentiles) # @fitted is not a true mu misc$R <- .R misc$mpv <- .mpv misc$percentiles = .percentiles }), list( .llocat = llocat, .lscale = lscale, .mpv = mpv, .elocat = elocat, .escale = escale, .R = R, .percentiles = percentiles ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE,eta,extra = NULL) { loc <- eta2theta(eta[, 1], .llocat , earg = .elocat ) sca <- eta2theta(eta[, 2], .lscale , earg = 
.escale )
    if (residuals) stop("loglikelihood residuals not ",
                        "implemented yet") else {
      sum(w * dgumbel(x = y, location = loc, scale = sca, log = TRUE))
    }
  }, list( .llocat = llocat, .lscale = lscale,
           .elocat = elocat, .escale = escale ))),
  vfamily = "egumbel",
  deriv = eval(substitute(expression({
    loc <- eta2theta(eta[, 1], .llocat , earg = .elocat )
    sca <- eta2theta(eta[, 2], .lscale , earg = .escale )
    zedd <- (y-loc) / sca
    temp2 <- -expm1(-zedd)
    dl.dloc <- temp2 / sca
    dl.dsca <- -1/sca + temp2 * zedd / sca
    dloc.deta <- dtheta.deta(loc, .llocat , earg = .elocat )
    dsca.deta <- dtheta.deta(sca, .lscale , earg = .escale )
    c(w) * cbind(dl.dloc * dloc.deta,
                 dl.dsca * dsca.deta)
  }), list( .llocat = llocat, .lscale = lscale,
            .elocat = elocat, .escale = escale ))),
  weight = expression({
    digamma1 <- digamma(1)
    ned2l.dsca2 <- ((2 + digamma1) * digamma1 + trigamma(1) + 1) / sca^2
    ned2l.dloc2 <- 1 / sca^2
    ned2l.dscaloc <- -(1 + digamma1) / sca^2
    wz <- matrix(as.numeric(NA), n, dimm(M = 2))
    wz[, iam(1, 1, M)] <- ned2l.dloc2 * dloc.deta^2
    wz[, iam(2, 2, M)] <- ned2l.dsca2 * dsca.deta^2
    wz[, iam(1, 2, M)] <- ned2l.dscaloc * dloc.deta * dsca.deta
    c(w) * wz
  }))
}


 cgumbel <- function(llocation = "identity", lscale = "loge",
                     iscale = NULL,
                     mean = TRUE, percentiles = NULL, zero = 2) {

  llocat <- as.list(substitute(llocation))
  elocat <- link2list(llocat)
  llocat <- attr(elocat, "function.name")

  lscale <- as.list(substitute(lscale))
  escale <- link2list(lscale)
  lscale <- attr(escale, "function.name")

  if (!is.logical(mean) || length(mean) != 1)
    stop("argument 'mean' must be a single logical value")
  if (!mean && (!is.Numeric(percentiles, positive = TRUE) ||
                any(percentiles >= 100)))
    stop("valid 'percentiles' values must be given when mean = FALSE")
  if (length(zero) &&
      !is.Numeric(zero, integer.valued = TRUE, positive = TRUE))
    stop("bad input for argument 'zero'")

  new("vglmff",
  blurb = c("Censored Gumbel distribution\n\n",
            "Links: ",
            namesof("location", llocat, earg = elocat, tag = TRUE), ", ",
            namesof("scale", lscale, earg = escale, tag = TRUE), "\n",
            "Mean: location + scale*0.5772..\n",
            "Variance: pi^2 * scale^2 / 6"),
  constraints = eval(substitute(expression({
    constraints <- cm.zero.vgam(constraints, x, .zero, M)
  }), list( .zero = zero ))),
  initialize = eval(substitute(expression({
    y <- cbind(y)
    if (ncol(y) > 1)
      stop("Use gumbel.block() to handle multivariate responses")
    if (any(y <= 0))
      stop("all response values must be positive")

    if (!length(extra$leftcensored))
      extra$leftcensored <- rep(FALSE, length.out = n)
    if (!length(extra$rightcensored))
      extra$rightcensored <- rep(FALSE, length.out = n)
    if (any(extra$rightcensored & extra$leftcensored))
      stop("some observations are both right and left censored!")

    predictors.names <-
      c(namesof("location", .llocat, earg = .elocat, tag = FALSE),
        namesof("scale", .lscale , earg = .escale , tag = FALSE))

    if (!length(etastart)) {
      sc.init <- if (is.Numeric( .iscale, positive = TRUE))
                   .iscale else 1.1 * sqrt(var(y) * 6) / pi
      sc.init <- rep(sc.init, length.out = n)
      EulerM <- -digamma(1)
      loc.init <- (y - sc.init * EulerM)
      loc.init[loc.init <= 0] <- min(y)
      etastart <-
        cbind(theta2eta(loc.init, .llocat , earg = .elocat ),
              theta2eta(sc.init, .lscale , earg = .escale ))
    }
  }), list( .lscale = lscale, .iscale = iscale,
            .llocat = llocat,
            .elocat = elocat, .escale = escale ))),
  linkinv = eval(substitute( function(eta, extra = NULL) {
    loc <- eta2theta(eta[, 1], .llocat , earg = .elocat )
    sc  <- eta2theta(eta[, 2], .lscale , earg = .escale )
    EulerM <- -digamma(1)
    if (.mean) loc + sc * EulerM else {
      LP <- length(.percentiles)  # 0 if NULL
      mu <-
matrix(as.numeric(NA), nrow(eta), LP) for(ii in 1:LP) { ci <- -log( .percentiles[ii] / 100) mu[, ii] <- loc - sc * log(ci) } dmn2 <- paste(as.character(.percentiles), "%", sep = "") dimnames(mu) <- list(dimnames(eta)[[1]], dmn2) mu } }, list( .lscale = lscale, .percentiles = percentiles, .llocat = llocat, .elocat = elocat, .escale = escale , .mean=mean ))), last = eval(substitute(expression({ misc$link = c(location= .llocat, scale = .lscale) misc$earg = list(location= .elocat, scale= .escale ) misc$true.mu = .mean # if FALSE then @fitted is not a true mu misc$percentiles = .percentiles }), list( .lscale = lscale, .mean=mean, .llocat = llocat, .elocat = elocat, .escale = escale , .percentiles = percentiles ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE,eta,extra = NULL) { loc <- eta2theta(eta[, 1], .llocat, earg = .elocat ) sc <- eta2theta(eta[, 2], .lscale , earg = .escale ) zedd <- (y-loc) / sc cenL <- extra$leftcensored cenU <- extra$rightcensored cen0 <- !cenL & !cenU # uncensored obsns Fy <- exp(-exp(-zedd)) ell1 <- -log(sc[cen0]) - zedd[cen0] - exp(-zedd[cen0]) ell2 <- log(Fy[cenL]) ell3 <- log1p(-Fy[cenU]) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else sum(w[cen0] * ell1) + sum(w[cenL] * ell2) + sum(w[cenU] * ell3) }, list( .lscale = lscale, .llocat = llocat, .elocat = elocat, .escale = escale ))), vfamily = "cgumbel", deriv = eval(substitute(expression({ cenL <- extra$leftcensored cenU <- extra$rightcensored cen0 <- !cenL & !cenU # uncensored obsns loc <- eta2theta(eta[, 1], .llocat, earg = .elocat ) sc <- eta2theta(eta[, 2], .lscale , earg = .escale ) zedd <- (y-loc) / sc temp2 <- -expm1(-zedd) dl.dloc <- temp2 / sc dl.dsc <- -1/sc + temp2 * zedd / sc dloc.deta <- dtheta.deta(loc, .llocat, earg = .elocat ) dsc.deta <- dtheta.deta(sc, .lscale , earg = .escale ) ezedd <- exp(-zedd) Fy <- exp(-ezedd) dFy.dloc <- -ezedd * Fy / sc dFy.dsc <- zedd * dFy.dloc # -zedd * exp(-zedd) * Fy / sc if (any(cenL)) { dl.dloc[cenL] <- -ezedd[cenL] / sc[cenL] dl.dsc[cenL] <- -zedd[cenL] * ezedd[cenL] / sc[cenL] } if (any(cenU)) { dl.dloc[cenU] <- -dFy.dloc[cenU] / (1-Fy[cenU]) dl.dsc[cenU] <- -dFy.dsc[cenU] / (1-Fy[cenU]) } c(w) * cbind(dl.dloc * dloc.deta, dl.dsc * dsc.deta) }), list( .lscale = lscale, .llocat = llocat, .elocat = elocat, .escale = escale ))), weight=expression({ A1 <- ifelse(cenL, Fy, 0) A3 <- ifelse(cenU, 1-Fy, 0) A2 <- 1 - A1 - A3 # Middle; uncensored digamma1 <- digamma(1) ed2l.dsc2 <- ((2+digamma1)*digamma1 + trigamma(1) + 1) / sc^2 ed2l.dloc2 <- 1 / sc^2 ed2l.dlocsc <- -(1 + digamma1) / sc^2 wz <- matrix(as.numeric(NA), n, dimm(M = 2)) wz[, iam(1, 1, M)] <- A2 * ed2l.dloc2 * dloc.deta^2 wz[, iam(2, 2, M)] <- A2 * ed2l.dsc2 * dsc.deta^2 wz[, iam(1, 2, M)] <- A2 * ed2l.dlocsc * dloc.deta * dsc.deta d2l.dloc2 <- -ezedd / sc^2 d2l.dsc2 <- (2 - zedd) * zedd * ezedd / sc^2 d2l.dlocsc <- (1 - zedd) * ezedd / sc^2 wz[, iam(1, 1, M)] <- wz[, iam(1, 1, M)]-A1^2 * d2l.dloc2 * dloc.deta^2 wz[, iam(2, 2, M)] <- wz[, iam(2, 2, M)]-A1^2 * d2l.dsc2 * dsc.deta^2 wz[, iam(1, 2, M)] <- wz[, iam(1, 2, M)]-A1^2 * d2l.dlocsc * dloc.deta * dsc.deta d2Fy.dloc2 <- dFy.dloc * dl.dloc + Fy * d2l.dloc2 d2Fy.dsc2 <- dFy.dsc * dl.dsc + Fy * d2l.dsc2 d2Fy.dlocsc <- dFy.dsc * dl.dloc + Fy * d2l.dlocsc d2l.dloc2 <- -((1-Fy) * d2Fy.dloc2 - dFy.dloc^2) / (1-Fy)^2 d2l.dsc2 <- -((1-Fy) * d2Fy.dsc2 - dFy.dsc^2) / (1-Fy)^2 d2l.dlocsc <- -((1-Fy) * d2Fy.dlocsc - dFy.dloc * dFy.dsc) / (1-Fy)^2 wz[, iam(1, 1, M)] <- wz[, iam(1, 1, M)]-A3^2 * d2l.dloc2 * 
dloc.deta^2 wz[, iam(2, 2, M)] <- wz[, iam(2, 2, M)]-A3^2 * d2l.dsc2 * dsc.deta^2 wz[, iam(1, 2, M)] <- wz[, iam(1, 2, M)]-A3^2 * d2l.dlocsc * dloc.deta * dsc.deta c(w) * wz })) } dfrechet <- function(x, location = 0, scale = 1, shape, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) L <- max(length(x), length(scale), length(shape), length(location)) x <- rep(x, length.out = L) scale <- rep(scale, length.out = L) shape <- rep(shape, length.out = L) location <- rep(location, length.out = L) logdensity <- rep(log(0), length.out = L) xok <- (x > location) rzedd <- scale / (x - location) logdensity[xok] <- log(shape[xok]) - (rzedd[xok]^shape[xok]) + (shape[xok]+1) * log(rzedd[xok]) - log(scale[xok]) logdensity[shape <= 0] <- NaN logdensity[scale <= 0] <- NaN if (log.arg) logdensity else exp(logdensity) } pfrechet <- function(q, location = 0, scale = 1, shape) { if (!is.Numeric(scale, positive = TRUE)) stop("scale must be positive") if (!is.Numeric(shape, positive = TRUE)) stop("shape must be positive") rzedd <- scale / (q - location) ans <- exp(-(rzedd^shape)) ans[q <= location] <- 0 ans } qfrechet <- function(p, location = 0, scale = 1, shape) { if (!is.Numeric(p, positive = TRUE) || any(p >= 1)) stop("0 < p < 1 is required") if (!is.Numeric(scale, positive = TRUE)) stop("scale must be positive") if (!is.Numeric(shape, positive = TRUE)) stop("shape must be positive") location + scale * (-log(p))^(-1/shape) } rfrechet <- function(n, location = 0, scale = 1, shape) { if (!is.Numeric(scale, positive = TRUE)) stop("scale must be positive") if (!is.Numeric(shape, positive = TRUE)) stop("shape must be positive") location + scale * (-log(runif(n)))^(-1/shape) } frechet2.control <- function(save.weight = TRUE, ...) 
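# (save.weight = TRUE presumably so that the simulated working weights --
#  see the 'nsimEIM' argument of frechet2() below -- are retained in the
#  fit rather than having to be regenerated afterwards.)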
{ list(save.weight = save.weight) } frechet2 <- function(location = 0, lscale = "loge", lshape = logoff(offset = -2), iscale = NULL, ishape = NULL, nsimEIM = 250, zero = NULL) { if (!is.Numeric(location)) stop("bad input for argument 'location'") lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") lshape <- as.list(substitute(lshape)) eshape <- link2list(lshape) lshape <- attr(eshape, "function.name") stopifnot(nsimEIM > 10, length(nsimEIM) == 1, nsimEIM == round(nsimEIM)) new("vglmff", blurb = c("2-parameter Frechet distribution\n", "Links: ", namesof("scale", link = lscale, earg = escale ), ", ", namesof("shape", link = lshape, earg = eshape )), constraints = eval(substitute(expression({ constraints <- cm.zero.vgam(constraints, x, .zero, M) }), list( .zero = zero ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y predictors.names <- c(namesof("scale", .lscale , earg = .escale, short = TRUE), namesof("shape", .lshape , earg = .eshape, short = TRUE)) extra$location <- rep( .location , length.out = n) # stored here if (!length(etastart)) { locinit = extra$location if (any(y <= locinit)) stop("initial values for 'location' are out of range") frech.aux <- function(shapeval, y, x, w, extraargs) { myprobs <- c(0.25, 0.5, 0.75) myobsns <- quantile(y, probs = myprobs) myquant <- (-log(myprobs))^(-1/shapeval) myfit <- lsfit(x = myquant, y = myobsns, intercept = TRUE) sum(myfit$resid^2) } shape.grid <- c(100, 70, 40, 20, 12, 8, 4, 2, 1.5) shape.grid <- c(1 / shape.grid, 1, shape.grid) try.this <- getMaxMin(shape.grid, objfun = frech.aux, y = y, x = x, w = w, maximize = FALSE, abs.arg = TRUE) shape.init <- if (length( .ishape )) rep( .ishape , length.out = n) else { rep(try.this , length.out = n) # variance exists if shape > 2 } myprobs <- c(0.25, 0.5, 0.75) myobsns <- quantile(y, probs = myprobs) myquant <- (-log(myprobs))^(-1/shape.init[1]) myfit <- lsfit(x = myquant, y = myobsns) Scale.init <- if (length( .iscale )) rep( .iscale , length.out = n) else { if (all(shape.init > 1)) { myfit$coef[2] } else { rep(1.0, length.out = n) } } etastart <- cbind(theta2eta(Scale.init, .lscale , earg = .escale ), theta2eta(shape.init, .lshape , earg = .eshape )) } }), list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape, .iscale = iscale, .ishape = ishape, .location = location ))), linkinv = eval(substitute(function(eta, extra = NULL) { loc <- extra$location Scale <- eta2theta(eta[, 1], .lscale , earg = .escale ) shape <- eta2theta(eta[, 2], .lshape , earg = .eshape ) ans <- rep(as.numeric(NA), length.out = length(shape)) ok <- shape > 1 ans[ok] <- loc[ok] + Scale[ok] * gamma(1 - 1/shape[ok]) ans }, list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape ))), last = eval(substitute(expression({ misc$links <- c("scale" = .lscale , "shape" = .lshape ) misc$earg <- list("scale" = .escale , "shape" = .eshape ) misc$nsimEIM <- .nsimEIM }), list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape, .nsimEIM = nsimEIM ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { loctn <- extra$location Scale <- eta2theta(eta[, 1], .lscale , earg = .escale ) shape <- eta2theta(eta[, 2], .lshape , earg = .eshape ) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else sum(w * dfrechet(x = y, location = loctn, scale = 
Scale, shape = shape, log = TRUE))
  }, list( .lscale = lscale, .lshape = lshape,
           .escale = escale, .eshape = eshape ))),
  vfamily = c("frechet2", "vextremes"),
  deriv = eval(substitute(expression({
    loctn <- extra$location
    Scale <- eta2theta(eta[, 1], .lscale , earg = .escale )
    shape <- eta2theta(eta[, 2], .lshape , earg = .eshape )
    rzedd <- Scale / (y - loctn)  # Reciprocal of zedd
    dl.dloctn <- (shape + 1) / (y - loctn) -
                 (shape / (y - loctn)) * (rzedd)^shape
    dl.dScale <- shape * (1 - rzedd^shape) / Scale
    dl.dshape <- 1 / shape + log(rzedd) * (1 - rzedd^shape)
    dthetas.detas <- cbind(
      dScale.deta <- dtheta.deta(Scale, .lscale , earg = .escale ),
      dShape.deta <- dtheta.deta(shape, .lshape , earg = .eshape ))
    c(w) * cbind(dl.dScale, dl.dshape) * dthetas.detas
  }), list( .lscale = lscale, .lshape = lshape,
            .escale = escale, .eshape = eshape ))),
  weight = eval(substitute(expression({
    run.varcov <- 0
    ind1 <- iam(NA, NA, M = M, both = TRUE, diag = TRUE)
    if (length( .nsimEIM )) {
      for(ii in 1:( .nsimEIM )) {
        ysim <- rfrechet(n, location = loctn, scale = Scale, shape = shape)
        rzedd <- Scale / (ysim - loctn)  # Reciprocal of zedd
        dl.dloctn <- (shape + 1) / (ysim - loctn) -
                     (shape / (ysim - loctn)) * (rzedd)^shape
        dl.dScale <- shape * (1 - rzedd^shape) / Scale
        dl.dshape <- 1 / shape + log(rzedd) * (1 - rzedd^shape)
        rm(ysim)
        temp3 <- cbind(dl.dScale, dl.dshape)
        run.varcov <- run.varcov +
                      temp3[, ind1$row.index] * temp3[, ind1$col.index]
      }
      run.varcov <- run.varcov / .nsimEIM
      wz <- if (intercept.only)
              matrix(colMeans(run.varcov), n, ncol(run.varcov),
                     byrow = TRUE) else run.varcov
      wz <- c(w) * wz * dthetas.detas[, ind1$row.index] *
                        dthetas.detas[, ind1$col.index]
    } else {
      stop("argument 'nsimEIM' must be numeric")
    }
    wz
  }), list( .nsimEIM = nsimEIM ))))
}


frechet3.control <- function(save.weight = TRUE, ...)
{
  list(save.weight = save.weight)
}


if (FALSE)
frechet3 <- function(anchor = NULL,
                     ldifference = "loge",
                     lscale = "loge",
                     lshape = logoff(offset = -2),
                     ilocation = NULL, iscale = NULL, ishape = NULL,
                     nsimEIM = 250,
                     zero = 1) {

  lscale <- as.list(substitute(lscale))
  escale <- link2list(lscale)
  lscale <- attr(escale, "function.name")

  lshape <- as.list(substitute(lshape))
  eshape <- link2list(lshape)
  lshape <- attr(eshape, "function.name")

  ldiffr <- as.list(substitute(ldifference))
  ediffr <- link2list(ldiffr)
  ldiffr <- attr(ediffr, "function.name")

  stopifnot(nsimEIM > 10, length(nsimEIM) == 1, nsimEIM == round(nsimEIM))

  new("vglmff",
  blurb = c("3-parameter Frechet distribution\n",
            "Links: ",
            namesof("difference", link = ldiffr, earg = ediffr), ", ",
            namesof("scale", link = lscale, earg = escale), ", ",
            namesof("shape", link = lshape, earg = eshape)),
  constraints = eval(substitute(expression({
    constraints <- cm.zero.vgam(constraints, x, .zero, M)
  }), list( .zero = zero ))),
  initialize = eval(substitute(expression({
    if (ncol(cbind(y)) != 1)
      stop("response must be a vector or a one-column matrix")

    predictors.names <-
      c(namesof("difference", .ldiffr , earg = .ediffr, short = TRUE),
        namesof("scale", .lscale , earg = .escale, short = TRUE),
        namesof("shape", .lshape , earg = .eshape, short = TRUE))

    anchorpt <- if (is.Numeric( .anchor, allowable.length = 1))
                  .anchor else min(y)
    if (min(y) < anchorpt)
      stop("anchor point is too large")
    extra$LHSanchor <- anchorpt

    if (!length(etastart)) {
      frech.aux <- function(shapeval, y, x, w, extraargs) {
        myprobs <- c(0.25, 0.5, 0.75)
        myobsns <- quantile(y, probs = myprobs)
        myquant <- (-log(myprobs))^(-1/shapeval)
        myfit <- lsfit(x = myquant, y = myobsns, intercept = TRUE)
        sum(myfit$resid^2)
      }
      shape.grid <- c(100, 70, 40, 20, 12, 8, 4, 2, 1.5)
      shape.grid <- c(1 / shape.grid, 1, shape.grid)
      try.this <- getMaxMin(shape.grid, objfun = frech.aux,
                            y = y, x = x, w = w, maximize = FALSE,
                            abs.arg = TRUE)
      shape.init <- if (length( .ishape ))
        rep( .ishape , length.out = n) else {
        rep(try.this , length.out = n)  # Variance exists if shape > 2
      }
      myprobs <- c(0.25, 0.5, 0.75)
      myobsns <- quantile(y, probs = myprobs)
      myquant <- (-log(myprobs))^(-1/shape.init[1])
      myfit <- lsfit(x = myquant, y = myobsns)
      Scale.init <- if (length( .iscale )) {
        rep( .iscale , length.out = n)
      } else {
        if (all(shape.init > 1)) {
          myfit$coef[2]
        } else {
          rep(1.0, length.out = n)
        }
      }
      locinit <- if (length( .ilocation ))
        rep( .ilocation , length.out = n) else {
        if (myfit$coef[1] < min(y)) {
          rep(myfit$coef[1], length.out = n)
        } else {
          rep(anchorpt - 0.01 * diff(range(y)), length.out = n)
        }
      }
      if (any(y <= locinit))
        stop("initial values for 'location' are out of range")
      if (any(anchorpt <= locinit))
        stop("require anchor point > initial location parameter value")

      etastart <-
        cbind(theta2eta(anchorpt - locinit, .ldiffr),
              theta2eta(Scale.init, .lscale),
              theta2eta(shape.init, .lshape))
    }
  }), list( .ldiffr = ldiffr, .lscale = lscale, .lshape = lshape,
            .ediffr = ediffr, .escale = escale, .eshape = eshape,
            .iscale = iscale, .ishape = ishape,
            .ilocation = ilocation, .anchor = anchor ))),
  linkinv = eval(substitute(function(eta, extra = NULL) {
    loctn <- extra$LHSanchor -
             eta2theta(eta[, 1], .ldiffr , earg = .ediffr)
    Scale <- eta2theta(eta[, 2], .lscale , earg = .escale )
    shape <- eta2theta(eta[, 3], .lshape , earg = .eshape )
    ans <- rep(as.numeric(NA), length.out = length(shape))
    okay <- shape > 1
    ans[okay] <- loctn[okay] + Scale[okay] * gamma(1 - 1/shape[okay])
    ans
  }, list( .ldiffr = ldiffr, .lscale = lscale, .lshape = lshape,
           .ediffr = ediffr, .escale = escale, .eshape = eshape ))),
  last = eval(substitute(expression({
    misc$links <- c("difference" = .ldiffr ,
                    "scale" = .lscale ,
                    "shape" = .lshape)
    misc$earg <- list("difference" = .ediffr,
                      "scale" = .escale,
                      "shape" = .eshape)
    misc$nsimEIM <- .nsimEIM
    extra$location <- loctn  # Store the location parameter estimate here
  }), list( .ldiffr = ldiffr, .lscale = lscale, .lshape = lshape,
            .ediffr = ediffr, .escale = escale, .eshape = eshape,
            .nsimEIM = nsimEIM ))),
  loglikelihood = eval(substitute(
    function(mu, y, w, residuals = FALSE, eta, extra = NULL) {
    loctn <- extra$LHSanchor -
             eta2theta(eta[, 1], .ldiffr , earg = .ediffr)
    Scale <- eta2theta(eta[, 2], .lscale , earg = .escale )
    shape <- eta2theta(eta[, 3], .lshape , earg = .eshape )
    if (residuals) stop("loglikelihood residuals not ",
                        "implemented yet") else {
      sum(w * dfrechet(x = y, location = loctn, scale = Scale,
                       shape = shape, log = TRUE))
    }
  }, list( .ldiffr = ldiffr, .lscale = lscale, .lshape = lshape,
           .ediffr = ediffr, .escale = escale, .eshape = eshape ))),
  vfamily = c("frechet3", "vextremes"),
  deriv = eval(substitute(expression({
    Difrc <- eta2theta(eta[, 1], .ldiffr , earg = .ediffr )
    Scale <- eta2theta(eta[, 2], .lscale , earg = .escale )
    shape <- eta2theta(eta[, 3], .lshape , earg = .eshape )
    loctn <- extra$LHSanchor - Difrc
    rzedd <- Scale / (y - loctn)  # Reciprocal of zedd
    dl.dloct <- (shape + 1) / (y - loctn) -
                (shape / (y - loctn)) * (rzedd)^shape
    dl.ddifff <- -dl.dloct
    dl.dScale <- shape * (1 - rzedd^shape) / Scale
    dl.dshape <- 1 / shape + log(rzedd) * (1 - rzedd^shape)
    dthetas.detas <- cbind(
      ddifff.deta <- dtheta.deta(Difrc, .ldiffr , earg = .ediffr ),
      dScale.deta <- dtheta.deta(Scale, .lscale , earg = .escale ),
      dShape.deta <- dtheta.deta(shape, .lshape , earg = .eshape ))
    ans <- c(w) * cbind(dl.ddifff, dl.dScale, dl.dshape) * dthetas.detas
    ans
  }), list( .ldiffr = ldiffr, .lscale = lscale, .lshape = lshape,
            .ediffr = ediffr, .escale = escale, .eshape = eshape ))),
  weight = eval(substitute(expression({
    run.varcov <- 0
    ind1 <- iam(NA, NA, M = M, both = TRUE, diag = TRUE)
    if (length( .nsimEIM )) {
      for(ii in 1:( .nsimEIM )) {
        ysim <- rfrechet(n, location = loctn, scale = Scale, shape = shape)
        rzedd <- Scale / (ysim - loctn)  # Reciprocal of zedd
        dl.dloct <- (shape + 1) / (ysim - loctn) -
                    (shape / (ysim - loctn)) * (rzedd)^shape
        dl.ddifff <- -dl.dloct
        dl.dScale <- shape * (1 - rzedd^shape) / Scale
        dl.dshape <- 1 / shape + log(rzedd) * (1 - rzedd^shape)
        rm(ysim)
        temp3 <- cbind(dl.ddifff, dl.dScale, dl.dshape)
        run.varcov <- run.varcov +
                      temp3[, ind1$row.index] * temp3[, ind1$col.index]
      }
      run.varcov <- run.varcov / .nsimEIM
      wz <- if (intercept.only)
              matrix(colMeans(run.varcov), n, ncol(run.varcov),
                     byrow = TRUE) else run.varcov
      wz <- c(w) * wz * dthetas.detas[, ind1$row] *
                        dthetas.detas[, ind1$col]
    } else {
      stop("argument 'nsimEIM' must be numeric")
    }
    wz
  }), list( .nsimEIM = nsimEIM ))))
}


recnormal1.control <- function(save.weight = TRUE, ...)
{ list(save.weight = save.weight) } recnormal1 <- function(lmean = "identity", lsd = "loge", imean = NULL, isd = NULL, imethod = 1, zero = NULL) { lmean <- as.list(substitute(lmean)) emean <- link2list(lmean) lmean <- attr(emean, "function.name") lsdev <- as.list(substitute(lsd)) esdev <- link2list(lsdev) lsdev <- attr(esdev, "function.name") isdev <- isd if (!is.Numeric(imethod, allowable.length = 1, integer.valued = TRUE, positive = TRUE) || imethod > 3.5) stop("argument 'imethod' must be 1 or 2 or 3") new("vglmff", blurb = c("Upper record values from a univariate normal distribution\n\n", "Links: ", namesof("mean", lmean, emean, tag = TRUE), "; ", namesof("sd", lsdev, esdev, tag = TRUE), "\n", "Variance: sd^2"), constraints = eval(substitute(expression({ constraints = cm.zero.vgam(constraints, x, .zero, M) }), list( .zero = zero ))), initialize = eval(substitute(expression({ predictors.names <- c(namesof("mean", .lmean, .emean, tag = FALSE), namesof("sd", .lsdev, .esdev, tag = FALSE)) if (ncol(y <- cbind(y)) != 1) stop("response must be a vector or a one-column matrix") if (any(diff(y) <= 0)) stop("response must have increasingly larger and larger values") if (any(w != 1)) warning("weights should have unit values only") if (!length(etastart)) { mean.init <- if (length( .imean )) rep( .imean , length.out = n) else { if (.lmean == "loge") pmax(1/1024, min(y)) else min(y)} sd.init <- if (length( .isdev)) rep( .isdev , length.out = n) else { if (.imethod == 1) 1*(sd(c(y))) else if (.imethod == 2) 5*(sd(c(y))) else .5*(sd(c(y))) } etastart <- cbind(theta2eta(rep(mean.init, len = n), .lmean, .emean ), theta2eta(rep(sd.init, len = n), .lsdev, .esdev )) } }), list( .lmean = lmean, .lsdev = lsdev, .emean = emean, .esdev = esdev, .imean = imean, .isdev = isdev, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { eta2theta(eta[, 1], .lmean, .emean ) }, list( .lmean = lmean, .emean = emean ))), last = eval(substitute(expression({ misc$link <- c("mu" = .lmean , "sd" = .lsdev ) misc$earg <- list("mu" = .emean , "sd" = .esdev ) misc$expected = FALSE }), list( .lmean = lmean, .lsdev = lsdev, .emean = emean, .esdev = esdev ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE,eta, extra = NULL) { sdev <- eta2theta(eta[, 2], .lsdev) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else { zedd <- (y - mu) / sdev NN <- nrow(eta) sum(w * (-log(sdev) - 0.5 * zedd^2)) - sum(w[-NN] * pnorm(zedd[-NN], lower.tail = FALSE, log.p = TRUE)) } }, list( .lsdev = lsdev, .esdev = esdev ))), vfamily = c("recnormal1"), deriv = eval(substitute(expression({ NN <- nrow(eta) mymu <- eta2theta(eta[, 1], .lmean) sdev <- eta2theta(eta[, 2], .lsdev) zedd <- (y - mymu) / sdev temp200 <- dnorm(zedd) / (1-pnorm(zedd)) dl.dmu <- (zedd - temp200) / sdev dl.dmu[NN] <- zedd[NN] / sdev[NN] dl.dsd <- (-1 + zedd^2 - zedd * temp200) / sdev dl.dsd[NN] <- (-1 + zedd[NN]^2) / sdev[NN] dmu.deta <- dtheta.deta(mymu, .lmean, .emean ) dsd.deta <- dtheta.deta(sdev, .lsdev, .esdev ) if (iter == 1) { etanew <- eta } else { derivold <- derivnew etaold <- etanew etanew <- eta } derivnew <- c(w) * cbind(dl.dmu * dmu.deta, dl.dsd * dsd.deta) derivnew }), list( .lmean = lmean, .lsdev = lsdev, .emean = emean, .esdev = esdev ))), weight = expression({ if (iter == 1) { wznew <- cbind(matrix(w, n, M), matrix(0, n, dimm(M)-M)) } else { wzold <- wznew wznew <- qnupdate(w=w, wzold = wzold, dderiv=(derivold - derivnew), deta=etanew-etaold, M = M, trace=trace) # weights incorporated in args } 
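# qnupdate() applies a BFGS-type quasi-Newton correction built from the
# change in the score vectors (derivold - derivnew) and in eta. This is
# used here because the expected information has no convenient closed
# form for upper record values (note misc$expected = FALSE in @last),
# and it is why recnormal1.control() sets save.weight = TRUE.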
wznew })) } recexp1.control <- function(save.weight = TRUE, ...) { list(save.weight = save.weight) } recexp1 <- function(lrate = "loge", irate = NULL, imethod = 1) { lrate <- as.list(substitute(lrate)) erate <- link2list(lrate) lrate <- attr(erate, "function.name") if (!is.Numeric(imethod, allowable.length = 1, integer.valued = TRUE, positive = TRUE) || imethod > 3.5) stop("argument 'imethod' must be 1 or 2 or 3") new("vglmff", blurb = c("Upper record values from a ", "1-parameter exponential distribution\n\n", "Links: ", namesof("rate", lrate, erate, tag = TRUE), "\n", "Variance: 1/rate^2"), initialize = eval(substitute(expression({ predictors.names <- c(namesof("rate", .lrate , .erate , tag = FALSE)) if (ncol(y <- cbind(y)) != 1) stop("response must be a vector or a one-column matrix") if (any(diff(y) <= 0)) stop("response must have increasingly larger and larger values") if (any(w != 1)) warning("weights should have unit values only") if (!length(etastart)) { rate.init <- if (length( .irate )) rep( .irate , len = n) else { init.rate <- if (.imethod == 1) length(y) / y[length(y), 1] else if (.imethod == 2) 1/mean(y) else 1/median(y) if (.lrate == "loge") pmax(1/1024, init.rate) else init.rate} etastart <- cbind(theta2eta(rep(rate.init, len = n), .lrate , .erate )) } }), list( .lrate = lrate, .erate = erate, .irate = irate, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { eta2theta(eta, .lrate , .erate ) }, list( .lrate = lrate, .erate = erate ))), last = eval(substitute(expression({ misc$link <- c("rate" = .lrate) misc$earg <- list("rate" = .erate) misc$expected = TRUE }), list( .lrate = lrate, .erate = erate ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE,eta, extra = NULL) { rate = eta2theta(eta, .lrate , .erate ) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else { NN <- length(eta) y <- cbind(y) sum(w * log(rate)) - w[NN] * rate[NN] * y[NN, 1] } }, list( .lrate = lrate, .erate = erate ))), vfamily = c("recexp1"), deriv = eval(substitute(expression({ NN <- length(eta) rate <- c(eta2theta(eta, .lrate , .erate )) dl.drate <- 1 / rate dl.drate[NN] <- 1/ rate[NN] - y[NN, 1] drate.deta <- dtheta.deta(rate, .lrate , .erate ) c(w) * cbind(dl.drate * drate.deta) }), list( .lrate = lrate, .erate = erate ))), weight = expression({ ed2l.drate2 <- 1 / rate^2 wz <- drate.deta^2 * ed2l.drate2 c(w) * wz })) } poissonp <- function(ostatistic, dimension = 2, link = "loge", idensity = NULL, imethod = 1) { if (!is.Numeric(ostatistic, positive = TRUE, allowable.length = 1, integer.valued = TRUE)) stop("argument 'ostatistic' must be a single positive integer") if (!is.Numeric(dimension, positive = TRUE, allowable.length = 1, integer.valued = TRUE) || dimension > 3) stop("argument 'dimension' must be 2 or 3") link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") if (!is.Numeric(imethod, allowable.length = 1, positive = TRUE, integer.valued = TRUE) || imethod > 2.5) stop("argument 'imethod' must be 1 or 2") if (length(idensity) && !is.Numeric(idensity, positive = TRUE)) stop("bad input for argument 'idensity'") new("vglmff", blurb = c(if (dimension == 2) "Poisson-points-on-a-plane distances distribution\n" else "Poisson-points-on-a-volume distances distribution\n", "Link: ", namesof("density", link, earg = earg), "\n\n", if (dimension == 2) "Mean: gamma(s+0.5) / (gamma(s) * sqrt(density * pi))" else "Mean: gamma(s+1/3) / (gamma(s) * (4*density*pi/3)^(1/3))"), initialize = 
eval(substitute(expression({ if (ncol(cbind(y)) != 1) stop("response must be a vector or a one-column matrix") if (any(y <= 0)) stop("response must contain positive values only") predictors.names <- namesof("density", .link, earg = .earg, tag = FALSE) if (!length(etastart)) { use.this <- if ( .imethod == 1) median(y) + 1/8 else weighted.mean(y,w) if ( .dimension == 2) { myratio <- exp(lgamma( .ostatistic + 0.5) - lgamma( .ostatistic )) density.init <- if (is.Numeric( .idensity )) rep( .idensity , len = n) else rep(myratio^2 / (pi * use.this^2), len = n) etastart <- theta2eta(density.init, .link, earg = .earg) } else { myratio <- exp(lgamma( .ostatistic +1/3) - lgamma( .ostatistic )) density.init <- if (is.Numeric( .idensity )) rep( .idensity , len = n) else rep(3 * myratio^3 / (4 * pi * use.this^3), len = n) etastart <- theta2eta(density.init, .link, earg = .earg) } } }), list( .link = link, .earg = earg, .ostatistic = ostatistic, .dimension = dimension, .imethod = imethod, .idensity = idensity ))), linkinv = eval(substitute(function(eta, extra = NULL) { density = eta2theta(eta, .link, earg = .earg) if ( .dimension == 2) { myratio <- exp(lgamma( .ostatistic +0.5) - lgamma( .ostatistic )) myratio / sqrt(density * pi) } else { myratio <- exp(lgamma( .ostatistic +1/3) - lgamma( .ostatistic)) myratio / (4 * density * pi/3)^(1/3) } }, list( .link = link, .earg = earg, .ostatistic = ostatistic, .dimension = dimension ))), last = eval(substitute(expression({ misc$link <- c("density" = .link) misc$earg <- list("density" = .earg) misc$expected <- TRUE misc$ostatistic <- .ostatistic misc$dimension <- .dimension }), list( .link = link, .earg = earg, .ostatistic = ostatistic, .dimension = dimension ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE,eta, extra = NULL) { density = eta2theta(eta, .link, earg = .earg) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else if ( .dimension == 2) sum(w * (log(2) + .ostatistic * log(pi * density) - lgamma( .ostatistic) + (2* .ostatistic-1) * log(y) - density * pi * y^2)) else sum(w * (log(3) + .ostatistic * log(4*pi * density/3) - lgamma( .ostatistic) + (3* .ostatistic-1) * log(y) - (4/3) * density * pi * y^3)) }, list( .link = link, .earg = earg, .ostatistic = ostatistic, .dimension = dimension ))), vfamily = c("poissonp"), deriv = eval(substitute(expression({ density <- eta2theta(eta, .link, earg = .earg) if ( .dimension == 2) { dl.ddensity <- .ostatistic / density - pi * y^2 } else { dl.ddensity <- .ostatistic / density - (4/3) * pi * y^3 } ddensity.deta <- dtheta.deta(density, .link, earg = .earg) c(w) * dl.ddensity * ddensity.deta }), list( .link = link, .earg = earg, .ostatistic = ostatistic, .dimension = dimension ))), weight = eval(substitute(expression({ ned2l.ddensity2 <- .ostatistic / density^2 wz <- ddensity.deta^2 * ned2l.ddensity2 c(w) * wz }), list( .link = link, .earg = earg, .ostatistic = ostatistic, .dimension = dimension )))) } VGAM/R/family.exp.R0000644000176000001440000003230212136651110013430 0ustar ripleyusers# These functions are # Copyright (C) 1998-2013 T.W. Yee, University of Auckland. # All rights reserved. 
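# Informal self-consistency sketch (not run) for the d/p/q/r functions
# defined below in this file; they appear to be expectile analogues of the
# uniform, normal and exponential distributions, each q*() inverting its
# p*() by Newton-Raphson. Nothing here is part of the package proper: the
# values are invented for illustration and the checks are only approximate,
# since the solvers iterate to roughly Tol_nr accuracy.
if (FALSE) {
  pvec <- c(0.1, 0.25, 0.5, 0.75, 0.9)
  all.equal(peunif(qeunif(pvec)), pvec, tolerance = 1e-5)
  all.equal(penorm(qenorm(pvec)), pvec, tolerance = 1e-5)
  all.equal(peexp( qeexp( pvec)), pvec, tolerance = 1e-5)
  integrate(deunif, lower = 0, upper = 1)$value  # should be close to 1
}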
qeunif <- function(p, min = 0, max = 1, Maxit_nr = 10, Tol_nr = 1.0e-6) {
  ppp <- p
  vsmallno <- sqrt(.Machine$double.eps)
   smallno <- 0.10
  if (any(min >= max))
    stop("argument 'min' has values greater or equal ",
         "to argument 'max'")
  if (!is.Numeric( Tol_nr, allowable.length = 1,
                   positive = TRUE) || Tol_nr > 0.10)
    stop("argument 'Tol_nr' is not a single positive value, ",
         "or is too large")

  nrok <- ppp >= vsmallno & ppp <= 1.0 - vsmallno & is.finite(ppp)

  eee <- qbeta(ppp, shape1 = 3, shape2 = 3)
  eee[ppp <       smallno] <-       sqrt(ppp[ppp < smallno])
  eee[ppp > 1.0 - smallno] <- 1.0 - sqrt(1.0 - ppp[ppp > 1.0 - smallno])

  for(iii in 1:Maxit_nr) {
    realdiff <- (peunif(eee[nrok]) - ppp[nrok]) / deunif(eee[nrok])
    eee[nrok] <- eee[nrok] - realdiff
    if (all(abs(realdiff) / (1.0 + abs(realdiff)) < Tol_nr )) break
    if (iii == Maxit_nr) warning("did not converge")
  }

  if (max(abs(peunif(eee[nrok]) - ppp[nrok])) > Tol_nr)
    warning("did not converge on the second check")

  eee[ppp <       vsmallno] <-       sqrt(ppp[ppp < vsmallno])
  eee[ppp > 1.0 - vsmallno] <- 1.0 - sqrt(1.0 - ppp[ppp > 1.0 - vsmallno])
  eee[ppp == 0] <- 0
  eee[ppp == 1] <- 1
  eee[ppp <  0] <- NA
  eee[ppp >  1] <- NA
  min + eee * (max - min)
}


peunif <- function(q, min = 0, max = 1, log = FALSE) {
  if (!is.logical(log.arg <- log) || length(log) != 1)
    stop("bad input for argument 'log'")
  rm(log)
  if (any(min >= max))
    stop("argument 'min' has values greater or equal to argument 'max'")

  eee <- (q - min) / (max - min)
  if (log.arg) {
    logGofy <- 2 * log(eee) - log1p(2 * eee * (eee - 1))
    logGofy[eee < 0] <- -Inf
    logGofy[eee > 1] <- 0.0
    logGofy
  } else {
    Gofy <- eee^2 / (1 + 2 * eee * (eee - 1))
    Gofy[eee < 0] <- 0.0
    Gofy[eee > 1] <- 1.0
    Gofy
  }
}


deunif <- function(x, min = 0, max = 1, log = FALSE) {
  if (!is.logical(log.arg <- log) || length(log) != 1)
    stop("bad input for argument 'log'")
  rm(log)
  if (any(min >= max))
    stop("argument 'min' has values greater or equal to argument 'max'")

  eee <- (x - min) / (max - min)
  if (log.arg) {
    ans <- log(2) + log(eee) + log1p(-eee) -
           2.0 * log1p(2 * eee * (eee - 1)) - log(max - min)
    ans[eee <= 0.0] <- log(0.0)
    ans[eee >= 1.0] <- log(0.0)
  } else {
    gunif <- function(y)
      as.numeric(y >= 0 & y <= 1) * 2*y*(1-y) / (2*y*(1-y) - 1)^2
    ans <- gunif(eee) / (max - min)
  }
  ans
}


reunif <- function(n, min = 0, max = 1) {
  qeunif(runif(n), min = min, max = max)
}


qenorm <- function(p, mean = 0, sd = 1, Maxit_nr = 10, Tol_nr = 1.0e-6) {
  ppp <- p
  if (!is.Numeric( Tol_nr, allowable.length = 1,
                   positive = TRUE) || Tol_nr > 0.10)
    stop("argument 'Tol_nr' is not a single ",
         "positive value, or is too large")

  nrok <- is.finite(ppp)

  eee <- qnorm(ppp, sd = 2/3)

  gnorm <- function(y) dnorm(y) / (y * (1 - 2*pnorm(y)) - 2*dnorm(y))^2

  for(iii in 1:Maxit_nr) {
    realdiff <- (penorm(eee[nrok]) - ppp[nrok]) / gnorm(eee[nrok])
    eee[nrok] <- eee[nrok] - realdiff
    if (all(abs(realdiff) / (1.0 + abs(realdiff)) < Tol_nr )) break
    if (iii == Maxit_nr) warning("did not converge")
  }

  if (max(abs(penorm(eee[nrok]) - ppp[nrok])) > Tol_nr)
    warning("did not converge on the second check")

  eee[ppp == 0] <- -Inf
  eee[ppp == 1] <-  Inf
  eee[ppp <  0] <- NA
  eee[ppp >  1] <- NA
  eee * ifelse(sd >= 0, sd, NaN) + mean
}


penorm <- function(q, mean = 0, sd = 1, log = FALSE) {
  if (!is.logical(log.arg <- log) || length(log) != 1)
    stop("bad input for argument 'log'")
  rm(log)

  eee <- (q - mean) / sd
  tmp1 <- -dnorm(eee) - eee * pnorm(eee)
  if (log.arg) {
    logGofy <- log(-tmp1) - log(-(2 * tmp1 + eee))
    logGofy[eee <= -Inf] <- -Inf
    logGofy[eee >=  Inf] <- 0.0
    logGofy
  } else {
    Gofy <- tmp1 / (2 * tmp1 + eee)
    Gofy[eee <= -Inf] <- 0.0
    Gofy[eee >=  Inf] <- 1.0
    Gofy
  }
}


denorm <- function(x, mean = 0, sd = 1, log = FALSE) {
  if (!is.logical(log.arg <- log) || length(log) != 1)
    stop("bad input for argument 'log'")
  rm(log)

  eee <- (x - mean) / sd
  if (log.arg) {
    ans <- dnorm(eee, log = TRUE) -
           2.0 * log(2*dnorm(eee) - eee * (1 - 2*pnorm(eee))) - log(sd)
  } else {
    gnorm <- function(y) dnorm(y) / (y * (1 - 2*pnorm(y)) - 2*dnorm(y))^2
    ans <- gnorm(eee) / sd
    ans[sd <= 0.0] <- NaN
  }
  ans
}


renorm <- function(n, mean = 0, sd = 1) {
  qenorm(runif(n), mean = mean, sd = sd)
}


qeexp <- function(p, rate = 1, Maxit_nr = 10, Tol_nr = 1.0e-6) {
  ppp <- p
  vsmallno <- sqrt(.Machine$double.eps)
  if (!is.Numeric( Tol_nr, allowable.length = 1,
                   positive = TRUE) || Tol_nr > 0.10)
    stop("argument 'Tol_nr' is not a single positive value, or ",
         "is too large")

  nrok <- ppp >= vsmallno & is.finite(ppp)

  eee <- qf(1.0 * ppp, df1 = 4.0, df2 = 44)
  if ( any(rangex <- ppp < 0.8) )
    eee[rangex] <- qrayleigh(ppp[rangex], scale = 0.8)

  eee[ppp < vsmallno] <- sqrt(ppp[ppp < vsmallno])

  for(iii in 1:Maxit_nr) {
    realdiff <- (peexp(eee[nrok]) - ppp[nrok]) / deexp(eee[nrok])
    eee[nrok] <- eee[nrok] - realdiff
    if (all(abs(realdiff) / (1.0 + abs(realdiff)) < Tol_nr )) break
    if (iii == Maxit_nr) warning("did not converge")
  }

  if (max(abs(peexp(eee[nrok]) - ppp[nrok])) > Tol_nr)
    warning("did not converge on the second check")

  eee[ppp < vsmallno] <- sqrt(ppp[ppp < vsmallno])
  eee[ppp == 0] <- 0
  eee[ppp == 1] <- Inf
  eee[ppp <  0] <- NaN
  eee[ppp >  1] <- NaN
  eee / rate
}


peexp <- function(q, rate = 1, log = FALSE) {
  if (!is.logical(log.arg <- log) || length(log) != 1)
    stop("bad input for argument 'log'")
  rm(log)

  eee <- q * rate
  if (log.arg) {
    tmp1 <- -expm1(-eee) - eee
    logGofy <- log(-tmp1) - log(1.0 - eee - 2 * tmp1)
    logGofy[eee <  0  ] <- log(0.0)
    logGofy[eee >= Inf] <- log(1.0)
    logGofy
  } else {
    tmp1 <- -expm1(-eee) - eee
    Gofy <- tmp1 / (2 * tmp1 + eee - 1.0)
    Gofy[eee <  0  ] <- 0.0
    Gofy[eee >= Inf] <- 1.0
    Gofy
  }
}


deexp <- function(x, rate = 1, log = FALSE) {
  if (!is.logical(log.arg <- log) || length(log) != 1)
    stop("bad input for argument 'log'")
  rm(log)
  if (any(rate <= 0))
    stop("argument 'rate' must have positive values")

  eee <- x * rate
  if (log.arg) {
    ans <- log(eee) - eee -
           2.0 * log(eee - 1.0 + 2 * exp(-eee)) + log(rate)
    ans[eee < 0] <- log(0.0)
  } else {
    gexp <- function(y)
      as.numeric(y >= 0) * y * exp(-y) / ((1-y) - 2 * exp(-y))^2
    ans <- gexp(eee) * rate
    ans[rate <= 0.0] <- NaN
  }
  ans
}


reexp <- function(n, rate = 1) {
  qeexp(runif(n), rate = rate)
}


dkoenker <- function(x, location = 0, scale = 1, log = FALSE) {
  if (!is.logical(log.arg <- log) || length(log) != 1)
    stop("bad input for argument 'log'")
  rm(log)

  zedd <- (x - location) / scale
  zedd[scale <= 0] <- NaN
  if (log.arg) {
    log(0.25) - 1.5 * log1p((zedd / 2)^2) - log(scale)
  } else {
    2 / (scale * (4 + zedd^2)^1.5)
  }
}


pkoenker <- function(q, location = 0, scale = 1, log = FALSE) {
  if (!is.logical(log.arg <- log) || length(log) != 1)
    stop("bad input for argument 'log'")
  rm(log)

  zedd <- (q - location) / scale
  zedd[scale <= 0] <- NaN
  if (log.arg) {
    -log(2) + log1p(zedd / sqrt(4 + zedd^2))
  } else {
    0.5 * (1 + zedd / sqrt(4 + zedd^2))
  }
}


qkoenker <- function(p, location = 0, scale = 1) {
  answer <- -2 * (1 - 2*p) / sqrt(1 - (1 - 2*p)^2)
  answer[p <  0] <- NaN
  answer[p >  1] <- NaN
  answer[p == 0] <- -Inf
  answer[p == 1] <- +Inf
  answer <- answer * scale + location
  answer[scale <= 0] <- NaN
  answer
}


rkoenker <- function(n, location = 0, scale = 1) {
  answer <- qkoenker(runif(n)) * scale + location
  answer[scale <= 0] <- NaN
  answer
}


koenker <- function(percentile =
50, llocation = "identity", lscale = "loge", ilocation = NULL, iscale = NULL, imethod = 1, zero = 2) { llocat <- as.list(substitute(llocation)) elocat <- link2list(llocat) llocat <- attr(elocat, "function.name") lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") ilocat <- ilocation if (length(ilocat) && (!is.Numeric(ilocat, allowable.length = 1, positive = TRUE))) stop("bad input for argument 'ilocation'") if (length(iscale) && !is.Numeric(iscale)) stop("bad input for argument 'iscale'") if (!is.Numeric(percentile, positive = TRUE) || any(percentile >= 100)) stop("bad input for argument 'percentile'") if (!is.Numeric(imethod, allowable.length = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("'imethod' must be 1 or 2") new("vglmff", blurb = c("Koenker distribution\n\n", "Links: ", namesof("location", llocat, earg = elocat, tag = FALSE), ", ", namesof("scale", lscale, earg = escale, tag = FALSE), "\n\n", "Mean: location\n", "Variance: infinite"), constraints = eval(substitute(expression({ constraints <- cm.zero.vgam(constraints, x, .zero, M) }), list( .zero = zero ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1, out.wy = TRUE, maximize = TRUE) w <- temp5$w y <- temp5$y predictors.names <- c( namesof("location", .llocat, earg = .elocat, tag = FALSE), namesof("scale", .lscale, earg = .escale, tag = FALSE)) if (!length(etastart)) { locat.init <- if ( .imethod == 2) { weighted.mean(y, w) } else { median(y) } Scale.init <- if (length( .iscale )) .iscale else diff(quantile(y, prob = c(0.25, 0.75))) / (2 * 1.155) + 1.0e-5 locat.init <- rep(locat.init, length = length(y)) Scale.init <- rep(Scale.init, length = length(y)) etastart <- cbind(theta2eta(locat.init, .llocat, earg = .elocat), theta2eta(Scale.init, .lscale, earg = .escale)) } }), list( .llocat = llocat, .lscale = lscale, .ilocat = ilocat, .iscale = iscale, .elocat = elocat, .escale = escale, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL){ Perce <- .percentile locat <- eta2theta(eta[, 1], link = .llocat, earg = .elocat) Scale <- eta2theta(eta[, 2], link = .lscale, earg = .escale) answer <- matrix(locat, nrow(eta), length(Perce)) for (ii in 1:length(Perce)) answer[, ii] <- qkoenker(Perce[ii] / 100, loc = locat, sc = Scale) dimnames(answer) <- list(dimnames(eta)[[1]], paste(as.character(Perce), "%", sep = "")) answer }, list( .llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = escale, .percentile = percentile ))), last = eval(substitute(expression({ misc$link <- c("location" = .llocat, "scale" = .lscale) misc$earg <- list("location" = .elocat, "scale" = .escale) misc$expected <- TRUE misc$percentile <- .percentile misc$imethod <- .imethod misc$multipleResponses <- FALSE ncoly <- ncol(y) for(ii in 1:length( .percentile )) { y.use <- if (ncoly > 1) y[, ii] else y mu <- cbind(mu) extra$percentile[ii] <- 100 * weighted.mean(y.use <= mu[, ii], w) } names(extra$percentile) <- colnames(mu) }), list( .llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = escale, .imethod = imethod, .percentile = percentile ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { locat <- eta2theta(eta[, 1], link = .llocat, earg = .elocat) Scale <- eta2theta(eta[, 2], link = .lscale, earg = .escale) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { sum(w * dkoenker(x = y, location = locat, scale = Scale, log = TRUE)) } 
}, list( .llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = escale ))), vfamily = c("koenker"), deriv = eval(substitute(expression({ locat <- eta2theta(eta[, 1], link = .llocat, earg = .elocat) Scale <- eta2theta(eta[, 2], link = .lscale, earg = .escale) dlocat.deta <- dtheta.deta(locat, link = .llocat, earg = .elocat) dscale.deta <- dtheta.deta(Scale, link = .lscale, earg = .escale) zedd <- (y - locat) / Scale dl.dlocat <- 3 * zedd / (Scale * (4 + zedd^2)) dl.dscale <- 3 * zedd^2 / (Scale * (4 + zedd^2)) - 1 / Scale c(w) * cbind(dl.dlocat * dlocat.deta, dl.dscale * dscale.deta) }), list( .llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = escale ))), weight = eval(substitute(expression({ ned2l.dlocat2 <- 0.3 / Scale^2 ned2l.dscale2 <- 2.0 / (3 * Scale^2) wz <- matrix(-10, n, M) # Diagonal EIM wz[, iam(1, 1, M = M)] <- ned2l.dlocat2 * dlocat.deta^2 wz[, iam(2, 2, M = M)] <- ned2l.dscale2 * dscale.deta^2 c(w) * wz }), list( .llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = escale )))) } VGAM/R/family.circular.R0000644000176000001440000003024512136651110014444 0ustar ripleyusers# These functions are # Copyright (C) 1998-2013 T.W. Yee, University of Auckland. # All rights reserved. dcard <- function(x, mu, rho, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) L <- max(length(x), length(mu), length(rho)) x <- rep(x, len = L); mu <- rep(mu, len = L); rho <- rep(rho, len = L); logdensity <- rep(log(0), len = L) xok <- (x > 0) & (x < (2*pi)) logdensity[xok] <- -log(2*pi) + log1p(2 * rho[xok] * cos(x[xok]-mu[xok])) logdensity[mu <= 0] <- NaN logdensity[mu >= 2*pi] <- NaN logdensity[rho <= -0.5] <- NaN logdensity[rho >= 0.5] <- NaN if (log.arg) logdensity else exp(logdensity) } pcard <- function(q, mu, rho) { if (!is.Numeric(mu) || any(mu < 0) || any(mu > 2*pi)) stop("'mu' must be between 0 and 2*pi inclusive") if (!is.Numeric(rho) || max(abs(rho) > 0.5)) stop("'rho' must be between -0.5 and 0.5 inclusive") ans <- (q + 2 * rho * (sin(q-mu) + sin(mu))) / (2*pi) ans[q >= (2*pi)] <- 1 ans[q <= 0] <- 0 ans } qcard <- function(p, mu, rho, tolerance=1.0e-7, maxits=500) { if (!is.Numeric(mu) || any(mu < 0) || any(mu > 2*pi)) stop("'mu' must be between 0 and 2*pi inclusive") if (!is.Numeric(rho) || max(abs(rho) > 0.5)) stop("'rho' must be between -0.5 and 0.5 inclusive") if (!is.Numeric(p, positive = TRUE) || any(p > 1)) stop("'p' must be between 0 and 1") nn <- max(length(p), length(mu), length(rho)) p <- rep(p, len=nn) mu <- rep(mu, len=nn) rho <- rep(rho, len=nn) oldans <- 2 * pi * p for(its in 1:maxits) { ans <- oldans - (oldans + 2 * rho * (sin(oldans-mu)+sin(mu)) - 2*pi*p) / (1 + 2 * rho * cos(oldans - mu)) index <- (ans <= 0) | (ans > 2*pi) if (any(index)) { ans[index] <- runif (sum(index), 0, 2*pi) } if (max(abs(ans - oldans)) < tolerance) break; if (its == maxits) {warning("did not converge"); break} oldans <- ans } ans } rcard <- function(n, mu, rho, ...) { if (!is.Numeric(mu) || any(mu < 0) || any(mu > 2*pi)) stop("argument 'mu' must be between 0 and 2*pi inclusive") if (!is.Numeric(rho) || max(abs(rho) > 0.5)) stop("argument 'rho' must be between -0.5 and 0.5 inclusive") if (!is.Numeric(n, positive = TRUE, integer.valued = TRUE, allowable.length = 1)) stop("argument 'n' must be a single positive integer") mu <- rep(mu, len = n) rho <- rep(rho, len = n) qcard(runif (n), mu = mu, rho = rho, ...) } cardioid.control <- function(save.weight = TRUE, ...) 
{ list(save.weight = save.weight) } cardioid <- function( lmu = elogit(min = 0, max = 2*pi), lrho = elogit(min = -0.5, max = 0.5), imu = NULL, irho = 0.3, nsimEIM = 100, zero = NULL) { lmu <- as.list(substitute(lmu)) emu <- link2list(lmu) lmu <- attr(emu, "function.name") lrho <- as.list(substitute(lrho)) erho <- link2list(lrho) lrho <- attr(erho, "function.name") if (length(imu) && (!is.Numeric(imu, positive = TRUE) || any(imu > 2*pi))) stop("bad input for argument 'imu'") if (!is.Numeric(irho) || max(abs(irho)) > 0.5) stop("bad input for argument 'irho'") if (!is.Numeric(nsimEIM, allowable.length = 1, integer.valued = TRUE) || nsimEIM <= 50) stop("'nsimEIM' should be an integer greater than 50") new("vglmff", blurb = c("Cardioid distribution\n\n", "Links: ", namesof("mu", lmu, earg = emu, tag = FALSE), ", ", namesof("rho", lrho, earg = erho, tag = FALSE), "\n", "Mean: ", "pi + (rho/pi) *", "((2*pi-mu)*sin(2*pi-mu)+cos(2*pi-mu)-mu*sin(mu)-cos(mu))"), constraints = eval(substitute(expression({ constraints <- cm.zero.vgam(constraints, x, .zero, M) }), list( .zero = zero ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y if (any((y <= 0) | (y >=2*pi))) stop("the response must be in (0, 2*pi)") predictors.names <- c( namesof("mu", .lmu, earg = .emu , tag = FALSE), namesof("rho", .lrho, earg = .erho, tag = FALSE)) if (!length(etastart)) { rho.init <- rep(if (length(.irho)) .irho else 0.3, length=n) cardioid.Loglikfun <- function(mu, y, x, w, extraargs) { rho <- extraargs$irho sum(w * (-log(2*pi) + log1p(2*rho*cos(y-mu)))) } mu.grid <- seq(0.1, 6.0, len=19) mu.init <- if (length( .imu )) .imu else getMaxMin(mu.grid, objfun = cardioid.Loglikfun, y = y, x = x, w = w, extraargs = list(irho = rho.init)) mu.init <- rep(mu.init, length=length(y)) etastart <- cbind(theta2eta( mu.init, .lmu, earg = .emu), theta2eta(rho.init, .lrho, earg = .erho)) } }), list( .lmu = lmu, .lrho = lrho, .imu = imu, .irho = irho, .emu = emu, .erho = erho ))), linkinv = eval(substitute(function(eta, extra = NULL){ mu <- eta2theta(eta[, 1], link = .lmu, earg = .emu) rho <- eta2theta(eta[, 2], link = .lrho, earg = .erho) pi + (rho/pi) * ((2*pi-mu)*sin(2*pi-mu) + cos(2*pi-mu) - mu*sin(mu) - cos(mu)) }, list( .lmu = lmu, .lrho = lrho, .emu = emu, .erho = erho ))), last = eval(substitute(expression({ misc$link <- c("mu" = .lmu, "rho" = .lrho) misc$earg <- list("mu" = .emu, "rho" = .erho) misc$expected <- TRUE misc$nsimEIM <- .nsimEIM }), list( .lmu = lmu, .lrho = lrho, .emu = emu, .erho = erho, .nsimEIM = nsimEIM ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { mu <- eta2theta(eta[, 1], link = .lmu, earg = .emu) rho <- eta2theta(eta[, 2], link = .lrho, earg = .erho) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else { sum(w * dcard(x = y, mu = mu, rho = rho, log = TRUE)) } }, list( .lmu = lmu, .lrho=lrho, .emu = emu, .erho=erho ))), vfamily = c("cardioid"), deriv = eval(substitute(expression({ mu <- eta2theta(eta[, 1], link = .lmu, earg = .emu) rho <- eta2theta(eta[, 2], link = .lrho, earg = .erho) dmu.deta <- dtheta.deta(mu, link = .lmu, earg = .emu) drho.deta <- dtheta.deta(rho, link = .lrho, earg = .erho) dl.dmu <- 2 * rho * sin(y-mu) / (1 + 2 * rho * cos(y-mu)) dl.drho <- 2 * cos(y-mu) / (1 + 2 * rho * cos(y-mu)) c(w) * cbind(dl.dmu * dmu.deta, dl.drho * drho.deta) }), list( .lmu = lmu, .lrho=lrho, .emu = emu, .erho=erho, 
.nsimEIM=nsimEIM ))), weight = eval(substitute(expression({ run.varcov <- 0 ind1 <- iam(NA, NA, M = M, both = TRUE, diag = TRUE) index0 <- iam(NA, NA, M = M, both = TRUE, diag = TRUE) for(ii in 1:( .nsimEIM )) { ysim <- rcard(n, mu=mu, rho=rho) dl.dmu <- 2 * rho * sin(ysim-mu) / (1 + 2 * rho * cos(ysim-mu)) dl.drho <- 2 * cos(ysim-mu) / (1 + 2 * rho * cos(ysim-mu)) rm(ysim) temp3 <- cbind(dl.dmu, dl.drho) run.varcov <- ((ii-1) * run.varcov + temp3[,ind1$row.index]*temp3[,ind1$col.index]) / ii } wz <- if (intercept.only) matrix(colMeans(run.varcov), n, ncol(run.varcov), byrow = TRUE) else run.varcov dtheta.detas <- cbind(dmu.deta, drho.deta) wz <- wz * dtheta.detas[,index0$row] * dtheta.detas[,index0$col] c(w) * wz }), list( .lmu = lmu, .lrho = lrho, .emu = emu, .erho = erho, .nsimEIM = nsimEIM )))) } vonmises <- function(llocation = elogit(min = 0, max = 2*pi), lscale = "loge", ilocation = NULL, iscale = NULL, imethod = 1, zero = NULL) { llocat <- as.list(substitute(llocation)) elocat <- link2list(llocat) llocat <- attr(elocat, "function.name") lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") ilocat <- ilocation if (!is.Numeric(imethod, allowable.length = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("argument 'imethod' must be 1 or 2") if (length(zero) && !is.Numeric(zero, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'zero'") new("vglmff", blurb = c("Von Mises distribution\n\n", "Links: ", namesof("location", llocat, earg = elocat), ", ", namesof("scale", lscale, earg = escale), "\n", "\n", "Mean: location"), constraints = eval(substitute(expression({ constraints <- cm.zero.vgam(constraints, x, .zero, M) }), list( .zero = zero ))), infos = eval(substitute(function(...) 
{ list(Musual = 2, zero = .zero , parameterNames = c("location", "scale")) }, list( .zero = zero ))), initialize = eval(substitute(expression({ w.y.check(w = w, y = y) predictors.names <- c(namesof("location", .llocat, earg = .elocat, tag = FALSE), namesof("scale", .lscale, earg = .escale, tag = FALSE)) if (!length(etastart)) { if ( .imethod == 1) { locat.init <- mean(y) rat10 <- sqrt((sum(w*cos(y )))^2 + sum(w*sin(y))^2) / sum(w) scale.init <- sqrt(1 - rat10) } else { locat.init <- median(y) scale.init <- sqrt(sum(w*abs(y - locat.init)) / sum(w)) } locat.init <- if (length( .ilocat )) rep( .ilocat , len=n) else rep(locat.init, len=n) scale.init <- if (length( .iscale )) rep( .iscale , len = n) else rep(1, len = n) etastart <- cbind( theta2eta(locat.init, .llocat, earg = .elocat), theta2eta(scale.init, .lscale, earg = .escale)) } y <- y %% (2*pi) # Coerce after initial values have been computed }), list( .imethod = imethod, .ilocat = ilocat, .escale = escale, .elocat = elocat, .lscale = lscale, .llocat = llocat, .iscale = iscale ))), linkinv = eval(substitute(function(eta, extra = NULL) { eta2theta(eta[, 1], .llocat, earg = .elocat) %% (2*pi) }, list( .escale = escale, .lscale = lscale, .llocat = llocat, .elocat = elocat ))), last = eval(substitute(expression({ misc$link <- c(location = .llocat , scale = .lscale ) misc$earg <- list(location = .elocat , scale = .escale ) }), list( .llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = escale ))), loglikelihood = eval(substitute( function(mu,y,w,residuals= FALSE,eta, extra = NULL) { locat <- eta2theta(eta[, 1], .llocat, earg = .elocat) Scale <- eta2theta(eta[, 2], .lscale, earg = .escale) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else sum(w * (Scale * cos(y - locat) - log(mbesselI0(x = Scale )))) }, list( .escale = escale, .lscale = lscale, .llocat = llocat, .elocat = elocat ))), vfamily = c("vonmises"), deriv = eval(substitute(expression({ locat <- eta2theta(eta[, 1], .llocat, earg = .elocat) Scale <- eta2theta(eta[, 2], .lscale, earg = .escale) tmp6 <- mbesselI0(x = Scale, deriv = 2) dl.dlocat <- Scale * sin(y - locat) dl.dscale <- cos(y - locat) - tmp6[, 2] / tmp6[, 1] dlocat.deta <- dtheta.deta(locat, .llocat , earg = .elocat ) dscale.deta <- dtheta.deta(Scale, .lscale, earg = .escale) c(w) * cbind(dl.dlocat * dlocat.deta, dl.dscale * dscale.deta) }), list( .escale = escale, .lscale = lscale, .llocat = llocat, .elocat = elocat ))), weight = eval(substitute(expression({ ned2l.dlocat2 <- Scale * tmp6[, 2] / tmp6[, 1] ned2l.dscale2 <- tmp6[, 3] / tmp6[, 1] - (tmp6[, 2] / tmp6[, 1])^2 wz <- matrix(as.numeric(NA), nrow = n, ncol = 2) # diagonal wz[,iam(1, 1, M)] <- ned2l.dlocat2 * dlocat.deta^2 wz[,iam(2, 2, M)] <- ned2l.dscale2 * dscale.deta^2 c(w) * wz }), list( .escale = escale, .elocat = elocat, .lscale = lscale, .llocat = llocat )))) } VGAM/R/family.censored.R0000644000176000001440000012665612136651110014456 0ustar ripleyusers# These functions are # Copyright (C) 1998-2013 T.W. Yee, University of Auckland. # All rights reserved. 
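# Hypothetical usage sketch (not run) for cenpoisson() below. The data set,
# censoring point and variable names are all invented for illustration;
# SurvS4(), defined later in this file, supplies the "type" attribute that
# the initialize slot of cenpoisson() inspects.
if (FALSE) {
  set.seed(123)
  cdata <- data.frame(yfull = rpois(100, lambda = 4))
  cdata <- transform(cdata,
                     status = as.numeric(yfull < 6),  # 1 = uncensored
                     yobs   = pmin(yfull, 6))         # right-censored at 6
  fit <- vglm(SurvS4(yobs, status, type = "right") ~ 1,
              cenpoisson, data = cdata, trace = TRUE)
  Coef(fit)  # estimate of lambda; should be near 4
}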
cenpoisson <- function(link = "loge", imu = NULL) { link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") new("vglmff", blurb = c("Censored Poisson distribution\n\n", "Link: ", namesof("mu", link, earg = earg), "\n", "Variance: mu"), initialize = eval(substitute(expression({ if (any(is.na(y))) stop("NAs are not allowed in the response") w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 3, Is.integer.y = TRUE) centype <- attr(y, "type") if (centype == "right") { temp <- y[, 2] extra$uncensored <- ifelse(temp == 1, TRUE, FALSE) extra$rightcensored <- ifelse(temp == 0, TRUE, FALSE) extra$leftcensored <- rep(FALSE, len = n) extra$interval <- rep(FALSE, len = n) init.mu <- pmax(y[, 1], 1/8) } else if (centype == "left") { temp <- y[, 2] extra$uncensored <- ifelse(temp == 1, TRUE, FALSE) extra$rightcensored <- rep(FALSE, len = n) extra$leftcensored <- ifelse(temp == 0, TRUE, FALSE) extra$interval <- rep(FALSE, len = n) init.mu <- pmax(y[, 1], 1/8) } else if (centype == "interval" || centype == "interval2") { temp <- y[, 3] extra$uncensored <- ifelse(temp == 1, TRUE, FALSE) extra$rightcensored <- ifelse(temp == 0, TRUE, FALSE) extra$leftcensored <- ifelse(temp == 2, TRUE, FALSE) extra$intervalcensored <- ifelse(temp == 3, TRUE, FALSE) init.mu <- pmax((y[, 1] + y[, 2])/2, 1/8) # for intervalcensored if (any(extra$uncensored)) init.mu[extra$uncensored] <- pmax(y[extra$uncensored, 1], 1/8) if (any(extra$rightcensored)) init.mu[extra$rightcensored] <- pmax(y[extra$rightcensored, 1], 1/8) if (any(extra$leftcensored)) init.mu[extra$leftcensored] <- pmax(y[extra$leftcensored, 1], 1/8) } else if (centype == "counting") { stop("type == 'counting' not compatible with cenpoisson()") init.mu <- pmax(y[, 1], 1/8) stop("currently not working") } else stop("response have to be in a class of SurvS4") if (length( .imu )) init.mu <- 0 * y[, 1] + .imu predictors.names <- namesof("mu", .link, earg = .earg, short = TRUE) if (!length(etastart)) etastart <- theta2eta(init.mu, link = .link, earg = .earg) }), list( .link = link, .earg = earg, .imu = imu))), linkinv = eval(substitute(function(eta, extra = NULL) { mu <- eta2theta(eta, link = .link, earg = .earg) mu }, list( .link = link, .earg = earg ))), last = eval(substitute(expression({ misc$expected <- FALSE misc$link <- c("mu" = .link) misc$earg <- list("mu" = .earg) misc$multipleResponses <- FALSE }), list( .link = link, .earg = earg ))), linkfun = eval(substitute(function(mu, extra = NULL) { theta2eta(mu, link = .link, earg = .earg) }, list( .link = link, .earg = earg ))), loglikelihood = function(mu, y, w, residuals = FALSE, eta, extra = NULL) { cen0 <- extra$uncensored cenL <- extra$leftcensored cenU <- extra$rightcensored cenI <- extra$intervalcensored if (residuals){ stop("loglikelihood residuals not implemented yet") } else { sum(w[cen0] * dpois(y[cen0, 1], mu[cen0], log = TRUE)) + sum(w[cenU] * log1p(-ppois(y[cenU, 1] - 1, mu[cenU]))) + sum(w[cenL] * ppois(y[cenL, 1] - 1, mu[cenL], log.p = TRUE)) + sum(w[cenI] * log(ppois(y[cenI, 2], mu[cenI]) - ppois(y[cenI, 1], mu[cenI]))) } }, vfamily = "cenpoisson", deriv = eval(substitute(expression({ cen0 <- extra$uncensored cenL <- extra$leftcensored cenU <- extra$rightcensored cenI <- extra$intervalcensored lambda <- eta2theta(eta, link = .link, earg = .earg) dl.dlambda <- (y[, 1] - lambda)/lambda # uncensored yllim <- yulim <- y[, 1] # uncensored if (any(cenU)) { yllim[cenU] <- y[cenU, 1] densm1 <- dpois(yllim-1, lambda) queue <- ppois(yllim-1, lambda, lower.tail = FALSE) 
dl.dlambda[cenU] <- densm1[cenU] / queue[cenU] } if (any(cenL)) { yulim[cenL] <- y[cenL, 1] - 1 densm0 <- dpois(yulim, lambda) Queue <- ppois(yulim, lambda) # Left tail probability dl.dlambda[cenL] <- -densm0[cenL] / Queue[cenL] } if (any(cenI)) { yllim[cenI] <- y[cenI, 1] + 1 yulim[cenI] <- y[cenI, 2] Queue1 <- ppois(yllim-1, lambda) Queue2 <- ppois(yulim, lambda) densm02 <- dpois(yulim, lambda) densm12 <- dpois(yllim-1, lambda) dl.dlambda[cenI] <- (-densm02[cenI]+densm12[cenI]) / (Queue2[cenI]-Queue1[cenI]) } dlambda.deta <- dtheta.deta(theta=lambda, link = .link, earg = .earg) c(w) * dl.dlambda * dlambda.deta }), list( .link = link, .earg = earg ))), weight = eval(substitute(expression({ d2lambda.deta2 <- d2theta.deta2(theta = lambda, link = .link, earg = .earg) d2l.dlambda2 <- 1 / lambda # uncensored; Fisher scoring if (any(cenU)) { densm2 <- dpois(yllim-2, lambda) d2l.dlambda2[cenU] <- (dl.dlambda[cenU])^2 - (densm2[cenU]-densm1[cenU])/queue[cenU] } if (any(cenL)) { densm1 <- dpois(yulim-1, lambda) d2l.dlambda2[cenL] <- (dl.dlambda[cenL])^2 - (densm0[cenL]-densm1[cenL])/Queue[cenL] } if (any(cenI)) { densm03 <- dpois(yulim-1, lambda) densm13 <- dpois(yllim-2, lambda) d2l.dlambda2[cenI] <- (dl.dlambda[cenI])^2 - (densm13[cenI]-densm12[cenI]-densm03[cenI] + densm02[cenI]) / (Queue2[cenI]-Queue1[cenI]) } wz <- c(w) * ((dlambda.deta^2) * d2l.dlambda2) wz }), list( .link = link, .earg = earg )))) } if (FALSE) cexpon <- ecexpon <- function(link = "loge", location = 0) { if (!is.Numeric(location, allowable.length = 1)) stop("bad input for 'location'") link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") new("vglmff", blurb = c("Censored exponential distribution\n\n", "Link: ", namesof("rate", link, tag = TRUE), "\n", "Mean: ", "mu = ", location, " + 1 / ", namesof("rate", link, tag = FALSE), "\n", "Variance: ", if (location == 0) "Exponential: mu^2" else paste("(mu-", location, ")^2", sep = "")), initialize = eval(substitute(expression({ extra$location <- .location if (any(y[, 1] <= extra$location)) stop("all responses must be greater than ", extra$location) predictors.names <- namesof("rate", .link , .earg , tag = FALSE) type <- attr(y, "type") if (type == "right" || type == "left"){ mu <- y[, 1] + (abs(y[, 1] - extra$location) < 0.001) / 8 }else if (type == "interval"){ temp <- y[, 3] mu <- ifelse(temp == 3, y[, 2] + (abs(y[, 2] - extra$location) < 0.001) / 8, y[, 1] + (abs(y[, 1] - extra$location) < 0.001) / 8) } if (!length(etastart)) etastart <- theta2eta(1/(mu-extra$location), .link , .earg ) if (type == "right") { temp <- y[, 2] extra$uncensored <- ifelse(temp == 1, TRUE, FALSE) extra$rightcensored <- ifelse(temp == 0, TRUE, FALSE) extra$leftcensored <- rep(FALSE, len = n) extra$interval <- rep(FALSE, len = n) } else if (type == "left") { temp <- y[, 2] extra$uncensored <- ifelse(temp == 1, TRUE, FALSE) extra$rightcensored <- rep(FALSE, len = n) extra$leftcensored <- ifelse(temp == 0, TRUE, FALSE) extra$interval <- rep(FALSE, len = n) } else if (type == "counting") { stop("type == 'counting' not recognized") extra$uncensored <- rep(temp == 1, TRUE, FALSE) extra$interval <- rep(FALSE, len = n) extra$leftcensored <- rep(FALSE, len = n) extra$rightcensored <- rep(FALSE, len = n) extra$counting <- ifelse(temp == 0, TRUE, FALSE) } else if (type == "interval") { temp <- y[, 3] extra$uncensored <- ifelse(temp == 1, TRUE, FALSE) extra$rightcensored <- ifelse(temp == 0, TRUE, FALSE) extra$leftcensored <- ifelse(temp == 2, TRUE, FALSE) extra$interval <- 
ifelse(temp == 3, TRUE, FALSE) } else stop("'type' not recognized") }), list( .location=location, .link = link ))), linkinv = eval(substitute(function(eta, extra = NULL) extra$location + 1 / eta2theta(eta, .link , .earg ), list( .link = link ) )), last = eval(substitute(expression({ misc$location <- extra$location misc$link <- c("rate" = .link) misc$multipleResponses <- FALSE }), list( .link = link ))), link = eval(substitute(function(mu, extra = NULL) theta2eta(1/(mu-extra$location), .link , .earg ), list( .link = link ) )), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { rate <- 1 / (mu - extra$location) cen0 <- extra$uncensored cenL <- extra$leftcensored cenU <- extra$rightcensored cenI <- extra$interval if (residuals) stop("loglikelihood residuals not ", "implemented yet") else sum(w[cenL] * log1p(-exp(-rate[cenL]*(y[cenL, 1]-extra$location)))) + sum(w[cenU] * (-rate[cenU]*(y[cenU, 1]-extra$location))) + sum(w[cen0] * (log(rate[cen0]) - rate[cen0]*(y[cen0, 1]-extra$location))) + sum(w[cenI] * log(-exp(-rate[cenI]*(y[cenI, 2]-extra$location))+ exp(-rate[cenI]*(y[cenI, 1]-extra$location)))) }, list( .link = link ))), vfamily = c("ecexpon"), deriv = eval(substitute(expression({ rate <- 1 / (mu - extra$location) cen0 <- extra$uncensored cenL <- extra$leftcensored cenU <- extra$rightcensored cenI <- extra$interval dl.drate <- 1/rate - (y[, 1]-extra$location) # uncensored tmp200 <- exp(-rate*(y[, 1]-extra$location)) tmp200b <- exp(-rate*(y[, 2]-extra$location)) # for interval censored if (any(cenL)) dl.drate[cenL] <- (y[cenL, 1]-extra$location) * tmp200[cenL] / (1 - tmp200[cenL]) if (any(cenU)) dl.drate[cenU] <- -(y[cenU, 1]-extra$location) if (any(cenI)) dl.drate[cenI] <- ((y[cenI, 2]-extra$location)*tmp200b[cenI]- (y[cenI, 1]-extra$location)*tmp200[cenI])/ (-tmp200b[cenI]+tmp200[cenI]) drate.deta <- dtheta.deta(rate, .link , .earg ) c(w) * dl.drate * drate.deta }), list( .link = link ) )), weight = eval(substitute(expression({ A123 <- ((mu-extra$location)^2) # uncensored d2l.drate2 Lowpt <- ifelse(cenL, y[, 1], extra$location) Lowpt <- ifelse(cenI, y[, 1], Lowpt) #interval censored Upppt <- ifelse(cenU, y[, 1], Inf) Upppt <- ifelse(cenI, y[, 2], Upppt) #interval censored tmp300 <- exp(-rate*(Lowpt - extra$location)) d2l.drate2 <- 0 * y[, 1] ind50 <- Lowpt > extra$location d2l.drate2[ind50] <- (Lowpt[ind50]-extra$location)^2 * tmp300[ind50] / (1-tmp300[ind50]) d2l.drate2 <- d2l.drate2 + (exp(-rate*(Lowpt-extra$location)) - exp(-rate*(Upppt-extra$location))) * A123 wz <- c(w) * (drate.deta^2) * d2l.drate2 wz }), list( .link = link )))) } cennormal1 <- function(lmu = "identity", lsd = "loge", imethod = 1, zero = 2) { lmu <- as.list(substitute(lmu)) emu <- link2list(lmu) lmu <- attr(emu, "function.name") lsd <- as.list(substitute(lsd)) esd <- link2list(lsd) lsd <- attr(esd, "function.name") if (!is.Numeric(imethod, allowable.length = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("argument 'imethod' must be 1 or 2") new("vglmff", blurb = c("Censored univariate normal\n\n", "Links: ", namesof("mu", lmu, tag = TRUE), "; ", namesof("sd", lsd, tag = TRUE), "\n", "Conditional variance: sd^2"), constraints = eval(substitute(expression({ constraints <- cm.zero.vgam(constraints, x, .zero, M) }), list( .zero=zero ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, out.wy = TRUE, maximize = TRUE) w <- temp5$w y <- temp5$y if (!length(extra$leftcensored)) extra$leftcensored <- rep(FALSE, len = n) if 
(!length(extra$rightcensored)) extra$rightcensored <- rep(FALSE, len = n) if (any(extra$rightcensored & extra$leftcensored)) stop("some observations are both right and left censored!") predictors.names <- c(namesof("mu", .lmu, earg =.emu, tag = FALSE), namesof("sd", .lsd, earg =.esd, tag = FALSE)) if (!length(etastart)) { anyc <- extra$leftcensored | extra$rightcensored i11 <- if ( .imethod == 1) anyc else FALSE # can be all data junk <- lm.wfit(x = cbind(x[!i11, ]), y = y[!i11], w = w[!i11]) sd.y.est <- sqrt(sum(w[!i11] * junk$resid^2) / junk$df.residual) etastart <- cbind(mu = y, rep(theta2eta(sd.y.est, .lsd), length = n)) if (any(anyc)) etastart[anyc, 1] <- x[anyc,,drop = FALSE] %*% junk$coeff } }), list( .lmu = lmu, .lsd = lsd, .emu = emu, .esd = esd, .imethod = imethod ))), linkinv = eval(substitute( function(eta, extra = NULL) { eta2theta(eta[, 1], .lmu, earg = .emu) }, list( .lmu = lmu, .emu = emu ))), last = eval(substitute(expression({ misc$link <- c("mu" = .lmu, "sd" = .lsd) misc$earg <- list("mu" = .emu ,"sd" = .esd ) misc$expected <- TRUE misc$multipleResponses <- FALSE }), list( .lmu = lmu, .lsd = lsd, .emu = emu, .esd = esd ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { cenL <- extra$leftcensored cenU <- extra$rightcensored cen0 <- !cenL & !cenU # uncensored obsns mum <- eta2theta(eta[, 1], .lmu, earg = .emu ) sdv <- eta2theta(eta[, 2], .lsd, earg = .esd ) Lower <- ifelse(cenL, y, -Inf) Upper <- ifelse(cenU, y, Inf) ell1 <- -log(sdv[cen0]) - 0.5 * ((y[cen0] - mum[cen0])/sdv[cen0])^2 ell2 <- log1p(-pnorm((mum[cenL] - Lower[cenL])/sdv[cenL])) ell3 <- log1p(-pnorm(( Upper[cenU] - mum[cenU])/sdv[cenU])) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else sum(w[cen0] * ell1) + sum(w[cenL] * ell2) + sum(w[cenU] * ell3) }, list( .lmu = lmu, .lsd = lsd, .emu = emu, .esd = esd ))), vfamily = c("cennormal1"), deriv = eval(substitute(expression({ cenL <- extra$leftcensored cenU <- extra$rightcensored cen0 <- !cenL & !cenU # uncensored obsns Lower <- ifelse(cenL, y, -Inf) Upper <- ifelse(cenU, y, Inf) mum <- eta2theta(eta[, 1], .lmu) sdv <- eta2theta(eta[, 2], .lsd) dl.dmu <- (y-mum) / sdv^2 dl.dsd <- (((y-mum)/sdv)^2 - 1) / sdv dmu.deta <- dtheta.deta(mum, .lmu, earg = .emu ) dsd.deta <- dtheta.deta(sdv, .lsd, earg = .esd ) if (any(cenL)) { mumL <- mum - Lower temp21L <- mumL[cenL] / sdv[cenL] PhiL <- pnorm(temp21L) phiL <- dnorm(temp21L) fred21 <- phiL / (1 - PhiL) dl.dmu[cenL] <- -fred21 / sdv[cenL] dl.dsd[cenL] <- mumL[cenL] * fred21 / sdv[cenL]^2 rm(fred21) } if (any(cenU)) { mumU <- Upper - mum temp21U <- mumU[cenU] / sdv[cenU] PhiU <- pnorm(temp21U) phiU <- dnorm(temp21U) fred21 <- phiU / (1 - PhiU) dl.dmu[cenU] <- fred21 / sdv[cenU] # Negated dl.dsd[cenU] <- mumU[cenU] * fred21 / sdv[cenU]^2 rm(fred21) } c(w) * cbind(dl.dmu * dmu.deta, dl.dsd * dsd.deta) }), list( .lmu = lmu, .lsd = lsd, .emu = emu, .esd = esd ))), weight = eval(substitute(expression({ A1 <- 1 - pnorm((mum - Lower) / sdv) # Lower A3 <- 1 - pnorm((Upper - mum) / sdv) # Upper A2 <- 1 - A1 - A3 # Middle; uncensored wz <- matrix(0, n, 3) wz[,iam(1, 1,M)] <- A2 * 1 / sdv^2 # ed2l.dmu2 wz[,iam(2, 2,M)] <- A2 * 2 / sdv^2 # ed2l.dsd2 mumL <- mum - Lower temp21L <- mumL / sdv PhiL <- pnorm(temp21L) phiL <- dnorm(temp21L) temp31L <- ((1-PhiL) * sdv)^2 wz.cenL11 <- phiL * (phiL - (1-PhiL)*temp21L) / temp31L wz.cenL22 <- mumL * phiL * ((1-PhiL) * (2 - temp21L^2) + mumL * phiL / sdv) / (sdv * temp31L) wz.cenL12 <- phiL * ((1-PhiL)*(temp21L^2 - 1) - 
temp21L*phiL) / temp31L wz.cenL11[!is.finite(wz.cenL11)] <- 0 wz.cenL22[!is.finite(wz.cenL22)] <- 0 wz.cenL12[!is.finite(wz.cenL12)] <- 0 wz[,iam(1, 1,M)] <- wz[,iam(1, 1,M)] + A1 * wz.cenL11 wz[,iam(2, 2,M)] <- wz[,iam(2, 2,M)] + A1 * wz.cenL22 wz[,iam(1, 2,M)] <- A1 * wz.cenL12 mumU <- Upper - mum # often Inf temp21U <- mumU / sdv # often Inf PhiU <- pnorm(temp21U) # often 1 phiU <- dnorm(temp21U) # often 0 temp31U <- ((1-PhiU) * sdv)^2 # often 0 tmp8 <- (1-PhiU)*temp21U wzcenU11 <- phiU * (phiU - tmp8) / temp31U tmp9 <- (1-PhiU) * (2 - temp21U^2) wzcenU22 <- mumU * phiU * (tmp9 + mumU * phiU / sdv) / (sdv * temp31U) wzcenU12 <- -phiU * ((1-PhiU)*(temp21U^2 - 1) - temp21U*phiU) / temp31U wzcenU11[!is.finite(wzcenU11)] <- 0 # Needed when Upper==Inf wzcenU22[!is.finite(wzcenU22)] <- 0 # Needed when Upper==Inf wzcenU12[!is.finite(wzcenU12)] <- 0 # Needed when Upper==Inf wz[,iam(1, 1,M)] <- wz[,iam(1, 1,M)] + A3 * wzcenU11 wz[,iam(2, 2,M)] <- wz[,iam(2, 2,M)] + A3 * wzcenU22 wz[,iam(1, 2,M)] <- wz[,iam(1, 2,M)] + A3 * wzcenU12 wz[,iam(1, 1,M)] <- wz[,iam(1, 1,M)] * dmu.deta^2 wz[,iam(2, 2,M)] <- wz[,iam(2, 2,M)] * dsd.deta^2 wz[,iam(1, 2,M)] <- wz[,iam(1, 2,M)] * dmu.deta * dsd.deta c(w) * wz }), list( .lmu = lmu, .lsd = lsd )))) } cenrayleigh <- function(lscale = "loge", oim = TRUE) { lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") if (!is.logical(oim) || length(oim) != 1) stop("bad input for argument 'oim'") new("vglmff", blurb = c("Censored Rayleigh distribution\n\n", "f(y) = y*exp(-0.5*(y/scale)^2)/scale^2, y>0, scale>0\n", "Link: ", namesof("scale", lscale, earg = escale ), "\n", "\n", "Mean: scale * sqrt(pi / 2)"), initialize = eval(substitute(expression({ if (ncol(cbind(y)) != 1) stop("response must be a vector or a one-column matrix") if (length(extra$leftcensored)) stop("cannot handle left-censored data") if (!length(extra$rightcensored)) extra$rightcensored <- rep(FALSE, len = n) predictors.names <- namesof("scale", .lscale, earg = .escale, tag = FALSE) if (!length(etastart)) { a.init <- (y+1/8) / sqrt(pi/2) etastart <- theta2eta(a.init, .lscale, earg = .escale ) } }), list( .lscale = lscale, .escale = escale ))), linkinv = eval(substitute(function(eta, extra = NULL) { Scale <- eta2theta(eta, .lscale, earg = .escale ) Scale * sqrt(pi/2) }, list( .lscale = lscale, .escale = escale ))), last = eval(substitute(expression({ misc$link <- c("scale" = .lscale) misc$earg <- list("scale" = .escale) misc$oim <- .oim }), list( .lscale = lscale, .escale = escale, .oim = oim ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { Scale <- eta2theta(eta, .lscale, earg = .escale ) cen0 <- !extra$rightcensored # uncensored obsns cenU <- extra$rightcensored if (residuals) stop("loglikelihood residuals not ", "implemented yet") else sum(w[cen0] * (log(y[cen0]) - 2*log(Scale[cen0]) - 0.5*(y[cen0]/Scale[cen0])^2)) - sum(w[cenU] * (y[cenU]/Scale[cenU])^2) * 0.5 }, list( .lscale = lscale, .escale = escale ))), vfamily = c("cenrayleigh"), deriv = eval(substitute(expression({ cen0 <- !extra$rightcensored # uncensored obsns cenU <- extra$rightcensored Scale <- eta2theta(eta, .lscale, earg = .escale ) dl.dScale <- ((y/Scale)^2 - 2) / Scale dScale.deta <- dtheta.deta(Scale, .lscale, earg = .escale ) dl.dScale[cenU] <- y[cenU]^2 / Scale[cenU]^3 c(w) * dl.dScale * dScale.deta }), list( .lscale = lscale, .escale = escale ))), weight = eval(substitute(expression({ ned2l.dScale2 <- 4 / Scale^2 wz <- dScale.deta^2 * 
ned2l.dScale2 if ( .oim ) { d2l.dScale2 <- 3 * (y[cenU])^2 / (Scale[cenU])^4 d2Scale.deta2 <- d2theta.deta2(Scale[cenU], .lscale, earg = .escale ) wz[cenU] <- (dScale.deta[cenU])^2 * d2l.dScale2 - dl.dScale[cenU] * d2Scale.deta2 } else { ned2l.dScale2[cenU] <- 6 / (Scale[cenU])^2 wz[cenU] <- (dScale.deta[cenU])^2 * ned2l.dScale2[cenU] } c(w) * wz }), list( .lscale = lscale, .escale = escale, .oim = oim )))) } weibull <- function(lshape = "loge", lscale = "loge", ishape = NULL, iscale = NULL, nrfs = 1, probs.y = c(0.2, 0.5, 0.8), imethod = 1, zero = -2) { lshape <- as.list(substitute(lshape)) eshape <- link2list(lshape) lshape <- attr(eshape, "function.name") lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") if (length(zero) && !is.Numeric(zero, integer.valued = TRUE)) stop("bad input for argument 'zero'") if (!is.Numeric(imethod, allowable.length = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("argument 'imethod' must be 1 or 2") if (!is.Numeric(probs.y, positive = TRUE) || length(probs.y) < 2 || max(probs.y) >= 1) stop("bad input for argument 'probs.y'") if (!is.Numeric(nrfs, allowable.length = 1) || nrfs < 0 || nrfs > 1) stop("bad input for argument 'nrfs'") if (length(ishape)) if (!is.Numeric(ishape, positive = TRUE)) stop("argument 'ishape' values must be positive") if (length(iscale)) if (!is.Numeric(iscale, positive = TRUE)) stop("argument 'iscale' values must be positive") new("vglmff", blurb = c("Weibull distribution\n\n", "Links: ", namesof("shape", lshape, earg = eshape), ", ", namesof("scale", lscale, earg = escale), "\n", "Mean: scale * gamma(1 + 1/shape)\n", "Variance: scale^2 * (gamma(1 + 2/shape) - ", "gamma(1 + 1/shape)^2)"), constraints = eval(substitute(expression({ dotzero <- .zero Musual <- 2 eval(negzero.expression) }), list( .zero = zero ))), infos = eval(substitute(function(...) 
{ list(Musual = 2, zero = .zero ) }, list( .zero = zero ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y ncoly <- ncol(y) Musual <- 2 extra$ncoly <- ncoly extra$Musual <- Musual M <- Musual * ncoly if (is.SurvS4(y)) stop("only uncensored observations are allowed; ", "don't use SurvS4()") mynames1 <- paste("shape", if (ncoly > 1) 1:ncoly else "", sep = "") mynames2 <- paste("scale", if (ncoly > 1) 1:ncoly else "", sep = "") predictors.names <- c(namesof(mynames1, .lshape , earg = .eshape , tag = FALSE), namesof(mynames2, .lscale , earg = .escale , tag = FALSE))[ interleave.VGAM(M, M = Musual)] Shape.init <- matrix(if(length( .ishape )) .ishape else 0 + NA, n, ncoly, byrow = TRUE) Scale.init <- matrix(if(length( .iscale )) .iscale else 0 + NA, n, ncoly, byrow = TRUE) if (!length(etastart)) { if (!length( .ishape ) || !length( .iscale )) { for (ilocal in 1:ncoly) { anyc <- FALSE # extra$leftcensored | extra$rightcensored i11 <- if ( .imethod == 1) anyc else FALSE # Can be all data probs.y <- .probs.y xvec <- log(-log1p(-probs.y)) fit0 <- lsfit(x = xvec, y = log(quantile(y[!i11, ilocal], probs = probs.y ))) if (!is.Numeric(Shape.init[, ilocal])) Shape.init[, ilocal] <- 1 / fit0$coef["X"] if (!is.Numeric(Scale.init[, ilocal])) Scale.init[, ilocal] <- exp(fit0$coef["Intercept"]) } # ilocal etastart <- cbind(theta2eta(Shape.init, .lshape , earg = .eshape ), theta2eta(Scale.init, .lscale , earg = .escale ))[, interleave.VGAM(M, M = Musual)] } } }), list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape, .iscale = iscale, .ishape = ishape, .probs.y = probs.y, .imethod = imethod ) )), linkinv = eval(substitute(function(eta, extra = NULL) { Shape <- eta2theta(eta[, c(TRUE, FALSE)], .lshape , earg = .eshape ) Scale <- eta2theta(eta[, c(FALSE, TRUE)], .lscale , earg = .escale ) Scale * gamma(1 + 1 / Shape) }, list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape ) )), last = eval(substitute(expression({ regnotok <- any(Shape <= 2) if (any(Shape <= 1)) { warning("MLE regularity conditions are violated", "(shape <= 1) at the final iteration: ", "MLEs are not consistent") } else if (any(1 < Shape & Shape < 2)) { warning("MLE regularity conditions are violated", "(1 < shape < 2) at the final iteration: ", "MLEs exist but are not asymptotically normal") } else if (any(2 == Shape)) { warning("MLE regularity conditions are violated", "(shape == 2) at the final iteration: ", "MLEs exist and are normal and asymptotically ", "efficient but with a slower convergence rate than when ", "shape > 2") } Musual <- extra$Musual misc$link <- c(rep( .lshape , length = ncoly), rep( .lscale , length = ncoly))[interleave.VGAM(M, M = Musual)] temp.names <- c(mynames1, mynames2)[interleave.VGAM(M, M = Musual)] names(misc$link) <- temp.names misc$earg <- vector("list", M) names(misc$earg) <- temp.names for(ii in 1:ncoly) { misc$earg[[Musual*ii-1]] <- .eshape misc$earg[[Musual*ii ]] <- .escale } misc$Musual <- Musual misc$imethod <- .imethod misc$expected <- TRUE misc$multipleResponses <- TRUE misc$nrfs <- .nrfs misc$RegCondOK <- !regnotok # Save this for later }), list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape, .imethod = imethod, .nrfs = nrfs ) )), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { Shape <- eta2theta(eta[, c(TRUE, FALSE)], .lshape , earg 
= .eshape ) Scale <- eta2theta(eta[, c(FALSE, TRUE)], .lscale , earg = .escale ) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else { sum(c(w) * dweibull(x = y, shape = Shape, scale = Scale, log = TRUE)) } }, list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape ) )), vfamily = c("weibull"), deriv = eval(substitute(expression({ Musual <- 2 Shape <- eta2theta(eta[, c(TRUE, FALSE)], .lshape , earg = .eshape ) Scale <- eta2theta(eta[, c(FALSE, TRUE)], .lscale , earg = .escale ) dl.dshape <- 1 / Shape + log(y / Scale) - log(y / Scale) * (y / Scale)^Shape dl.dscale <- (Shape / Scale) * (-1.0 + (y / Scale)^Shape) dshape.deta <- dtheta.deta(Shape, .lshape, earg = .eshape ) dscale.deta <- dtheta.deta(Scale, .lscale, earg = .escale ) myderiv <- c(w) * cbind(dl.dshape * dshape.deta, dl.dscale * dscale.deta) myderiv[, interleave.VGAM(M, M = Musual)] }), list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape ) )), weight = eval(substitute(expression({ EulerM <- -digamma(1.0) ned2l.dshape <- (6*(EulerM - 1)^2 + pi^2)/(6*Shape^2) # KK (2003) ned2l.dscale <- (Shape / Scale)^2 ned2l.dshapescale <- (EulerM-1) / Scale wz <- array(c(c(w) * ned2l.dshape * dshape.deta^2, c(w) * ned2l.dscale * dscale.deta^2, c(w) * ned2l.dshapescale * dscale.deta * dshape.deta), dim = c(n, M / Musual, 3)) wz <- arwz2wz(wz, M = M, Musual = Musual) wz }), list( .eshape = eshape, .nrfs = nrfs )))) } setOldClass(c("SurvS4", "Surv")) SurvS4 <- function (time, time2, event, type = c("right", "left", "interval", "counting", "interval2"), origin = 0) { nn <- length(time) ng <- nargs() if (missing(type)) { if (ng == 1 || ng == 2) type <- "right" else if (ng == 3) type <- "counting" else stop("Invalid number of arguments") } else { type <- match.arg(type) ng <- ng - 1 if (ng != 3 && (type == "interval" || type == "counting")) stop("Wrong number of args for this type of survival data") if (ng != 2 && (type == "right" || type == "left" || type == "interval2")) stop("Wrong number of args for this type of survival data") } who <- !is.na(time) if (ng == 1) { if (!is.numeric(time)) stop("Time variable is not numeric") ss <- cbind(time, 1) dimnames(ss) <- list(NULL, c("time", "status")) } else if (type == "right" || type == "left") { if (!is.numeric(time)) stop("Time variable is not numeric") if (length(time2) != nn) stop("Time and status are different lengths") if (is.logical(time2)) status <- 1 * time2 else if (is.numeric(time2)) { who2 <- !is.na(time2) if (max(time2[who2]) == 2) status <- time2 - 1 else status <- time2 if (any(status[who2] != 0 & status[who2] != 1)) stop("Invalid status value") } else stop("Invalid status value") ss <- cbind(time, status) dimnames(ss) <- list(NULL, c("time", "status")) } else if (type == "counting") { if (length(time2) != nn) stop("Start and stop are different lengths") if (length(event) != nn) stop("Start and event are different lengths") if (!is.numeric(time)) stop("Start time is not numeric") if (!is.numeric(time2)) stop("Stop time is not numeric") who3 <- who & !is.na(time2) if (any(time[who3] >= time2[who3])) stop("Stop time must be > start time") if (is.logical(event)) status <- 1 * event else if (is.numeric(event)) { who2 <- !is.na(event) if (max(event[who2]) == 2) status <- event - 1 else status <- event if (any(status[who2] != 0 & status[who2] != 1)) stop("Invalid status value") } else stop("Invalid status value") ss <- cbind(time - origin, time2 - origin, status) } else { if (type == "interval2") { event <- ifelse(is.na(time), 2, 
ifelse(is.na(time2), 0, ifelse(time == time2, 1, 3))) if (any(time[event == 3] > time2[event == 3])) stop("Invalid interval: start > stop") time <- ifelse(event != 2, time, time2) type <- "interval" } else { temp <- event[!is.na(event)] if (!is.numeric(temp)) stop("Status indicator must be numeric") if (length(temp) > 0 && any(temp != floor(temp) | temp < 0 | temp > 3)) stop("Status indicator must be 0, 1, 2 or 3") } status <- event ss <- cbind(time, ifelse(!is.na(event) & event == 3, time2, 1), status) } attr(ss, "type") <- type class(ss) <- "SurvS4" ss } is.SurvS4 <- function(x) inherits(x, "SurvS4") setIs(class1 = "SurvS4", class2 = "matrix") # Forces vglm()@y to be a matrix as.character.SurvS4 <- function (x, ...) { class(x) <- NULL type <- attr(x, "type") if (type == "right") { temp <- x[, 2] temp <- ifelse(is.na(temp), "?", ifelse(temp == 0, "+", " ")) paste(format(x[, 1]), temp, sep = "") } else if (type == "counting") { temp <- x[, 3] temp <- ifelse(is.na(temp), "?", ifelse(temp == 0, "+", " ")) paste("(", format(x[, 1]), ",", format(x[, 2]), temp, "]", sep = "") } else if (type == "left") { temp <- x[, 2] temp <- ifelse(is.na(temp), "?", ifelse(temp == 0, "<", " ")) paste(temp, format(x[, 1]), sep = "") } else { stat <- x[, 3] temp <- c("+", "", "-", "]")[stat + 1] temp2 <- ifelse(stat == 3, paste("(", format(x[, 1]), ", ", format(x[, 2]), sep = ""), format(x[, 1])) ifelse(is.na(stat), as.character(NA), paste(temp2, temp, sep = "")) } } "[.SurvS4" <- function(x, i, j, drop = FALSE) { if (missing(j)) { temp <- class(x) type <- attr(x, "type") class(x) <- NULL x <- x[i, , drop = FALSE] class(x) <- temp attr(x, "type") <- type x } else { class(x) <- NULL NextMethod("[") } } is.na.SurvS4 <- function(x) { as.vector( (1* is.na(unclass(x)))%*% rep(1, ncol(x)) >0) } show.SurvS4 <- function (object) print.default(as.character.SurvS4(object), quote = FALSE) setMethod("show", "SurvS4", function(object) show.SurvS4(object)) pgamma.deriv.unscaled <- function(q, shape) { gam0 <- exp(lgamma(shape) + pgamma(q = q, shape = shape, log.p = TRUE)) I.sq <- pgamma(q = q, shape = shape) alld <- pgamma.deriv(q = q, shape = shape) # 6-coln matrix tmp3 <- alld[, 3] / I.sq # RHS of eqn (4.5) of \cite{wing:1989} G1s <- digamma(shape) + tmp3 # eqn (4.9) gam1 <- gam0 * G1s dG1s <- trigamma(shape) + alld[, 4] / I.sq - tmp3^2 # eqn (4.13) G2s <- dG1s + G1s^2 # eqn (4.12) gam2 <- gam0 * G2s cbind("0" = gam0, "1" = gam1, "2" = gam2) } truncweibull <- function(lower.limit = 1e-5, lAlpha = "loge", lBetaa = "loge", iAlpha = NULL, iBetaa = NULL, nrfs = 1, probs.y = c(0.2, 0.5, 0.8), imethod = 1, zero = -2) { lAlpha <- as.list(substitute(lAlpha)) eAlpha <- link2list(lAlpha) lAlpha <- attr(eAlpha, "function.name") lBetaa <- as.list(substitute(lBetaa)) eBetaa <- link2list(lBetaa) lBetaa <- attr(eBetaa, "function.name") if (length(zero) && !is.Numeric(zero, integer.valued = TRUE)) stop("bad input for argument 'zero'") if (!is.Numeric(imethod, allowable.length = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("argument 'imethod' must be 1 or 2") if (!is.Numeric(probs.y, positive = TRUE) || length(probs.y) < 2 || max(probs.y) >= 1) stop("bad input for argument 'probs.y'") if (!is.Numeric(nrfs, allowable.length = 1) || nrfs < 0 || nrfs > 1) stop("bad input for argument 'nrfs'") if (length(iAlpha)) if (!is.Numeric(iAlpha, positive = TRUE)) stop("argument 'iAlpha' values must be positive") if (length(iBetaa)) if (!is.Numeric(iBetaa, positive = TRUE)) stop("argument 'iBetaa' values must be positive") 
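# For a Weibull distribution left-truncated at T = lower.limit, each
# observation contributes log f(y) - log S(T) to the log-likelihood, where
# f(y) = Alpha * Betaa * y^(Betaa - 1) * exp(-Alpha * y^Betaa) and
# S(t) = exp(-Alpha * t^Betaa); in dweibull() terms this parameterization
# is shape = Betaa and scale = Alpha^(-1/Betaa). The loglikelihood slot
# below computes exactly this via dweibull(log = TRUE) minus
# pweibull(lower.tail = FALSE, log.p = TRUE).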
new("vglmff",
  blurb = c("Truncated Weibull distribution\n\n",
            "Links: ",
            namesof("Alpha", lAlpha, earg = eAlpha), ", ",
            namesof("Betaa", lBetaa, earg = eBetaa), "\n",
            if (length( lower.limit ) < 5)
              paste("Truncation point(s): ",
                    lower.limit, sep = ", ") else ""),
  constraints = eval(substitute(expression({
    dotzero <- .zero
    Musual <- 2
    eval(negzero.expression)
  }), list( .zero = zero ))),
  infos = eval(substitute(function(...) {
    list(Musual = 2,
         lower.limit = .lower.limit ,
         zero = .zero )
  }, list( .zero = zero,
           .lower.limit = lower.limit ))),
  initialize = eval(substitute(expression({
    temp5 <- w.y.check(w = w, y = y,
                       Is.positive.y = TRUE,
                       ncol.w.max = Inf,
                       ncol.y.max = Inf,
                       out.wy = TRUE,
                       colsyperw = 1,
                       maximize = TRUE)
    w <- temp5$w
    y <- temp5$y

    ncoly <- ncol(y)
    Musual <- 2
    extra$ncoly <- ncoly
    extra$Musual <- Musual
    M <- Musual * ncoly

    extra$lower.limit <- matrix( .lower.limit , n, ncoly, byrow = TRUE)
    if (any(y < extra$lower.limit)) {
      stop("some response values less than argument 'lower.limit'")
    }

    if (is.SurvS4(y))
      stop("only uncensored observations are allowed; ",
           "don't use SurvS4()")

    mynames1 <- paste("Alpha", if (ncoly > 1) 1:ncoly else "", sep = "")
    mynames2 <- paste("Betaa", if (ncoly > 1) 1:ncoly else "", sep = "")
    predictors.names <-
      c(namesof(mynames1, .lAlpha , earg = .eAlpha , tag = FALSE),
        namesof(mynames2, .lBetaa , earg = .eBetaa , tag = FALSE))[
        interleave.VGAM(M, M = Musual)]

    Alpha.init <- matrix(if (length( .iAlpha )) .iAlpha else 0 + NA,
                         n, ncoly, byrow = TRUE)
    Betaa.init <- matrix(if (length( .iBetaa )) .iBetaa else 0 + NA,
                         n, ncoly, byrow = TRUE)

    if (!length(etastart)) {
      if (!length( .iAlpha ) || !length( .iBetaa )) {
        for (ilocal in 1:ncoly) {
          anyc <- FALSE  # extra$leftcensored | extra$rightcensored
          i11 <- if ( .imethod == 1) anyc else FALSE  # Can be all data
          probs.y <- .probs.y
          xvec <- log(-log1p(-probs.y))
          fit0 <- lsfit(x = xvec,
                        y = log(quantile(y[!i11, ilocal],
                                         probs = probs.y )))
          aaa.init <- 1 / fit0$coef["X"]
          bbb.init <- exp(fit0$coef["Intercept"])
          if (!is.Numeric(Betaa.init[, ilocal]))
            Betaa.init[, ilocal] <- aaa.init
          if (!is.Numeric(Alpha.init[, ilocal]))
            Alpha.init[, ilocal] <- (1 / bbb.init)^aaa.init
        }  # ilocal
      } else {
        Alpha.init <- rep( .iAlpha , length = n)
        Betaa.init <- rep( .iBetaa , length = n)
      }
      etastart <-
        cbind(theta2eta(Alpha.init, .lAlpha , earg = .eAlpha ),
              theta2eta(Betaa.init, .lBetaa , earg = .eBetaa ))[,
              interleave.VGAM(M, M = Musual)]
    }
  }), list( .lBetaa = lBetaa, .lAlpha = lAlpha,
            .eBetaa = eBetaa, .eAlpha = eAlpha,
            .iBetaa = iBetaa, .iAlpha = iAlpha,
            .lower.limit = lower.limit,
            .probs.y = probs.y,
            .imethod = imethod ) )),
  linkinv = eval(substitute(function(eta, extra = NULL) {
    Alpha <- eta2theta(eta[, c(TRUE, FALSE)], .lAlpha , earg = .eAlpha )
    Betaa <- eta2theta(eta[, c(FALSE, TRUE)], .lBetaa , earg = .eBetaa )
    aTb <- Alpha * extra$lower.limit^Betaa
    wingo3 <- pgamma.deriv.unscaled(q = aTb, shape = 1 + 1 / Betaa)
    exp.aTb <- exp(aTb)
    (gamma(1 + 1 / Betaa) - wingo3[, 1]) * exp.aTb / Alpha^(1 / Betaa)
  }, list( .lBetaa = lBetaa, .lAlpha = lAlpha,
           .eBetaa = eBetaa, .eAlpha = eAlpha,
           .lower.limit = lower.limit) )),
  last = eval(substitute(expression({
    aaa.hat <- Betaa
    regnotok <- any(aaa.hat <= 2)
    if (any(aaa.hat <= 1)) {
      warning("MLE regularity conditions are violated ",
              "(Betaa <= 1) at the final iteration: ",
              "MLEs are not consistent")
    } else if (any(1 < aaa.hat & aaa.hat < 2)) {
      warning("MLE regularity conditions are violated ",
              "(1 < Betaa < 2) at the final iteration: ",
              "MLEs exist but are not asymptotically normal")
    } else if (any(2 == aaa.hat)) {
      warning("MLE regularity conditions are violated ",
              "(Betaa == 2) at the final iteration: ",
              "MLEs exist and are normal and asymptotically ",
              "efficient but with a slower convergence rate than when ",
              "Betaa > 2")
    }

    Musual <- extra$Musual
    misc$link <- c(rep( .lAlpha , length = ncoly),
                   rep( .lBetaa , length = ncoly))[
                   interleave.VGAM(M, M = Musual)]
    temp.names <- c(mynames1, mynames2)[interleave.VGAM(M, M = Musual)]
    names(misc$link) <- temp.names

    misc$earg <- vector("list", M)
    names(misc$earg) <- temp.names
    for (ii in 1:ncoly) {
      misc$earg[[Musual*ii-1]] <- .eAlpha
      misc$earg[[Musual*ii  ]] <- .eBetaa
    }

    misc$Musual <- Musual
    misc$imethod <- .imethod
    misc$expected <- TRUE
    misc$multipleResponses <- TRUE
    misc$nrfs <- .nrfs
    misc$RegCondOK <- !regnotok  # Save this for later
  }), list( .lBetaa = lBetaa, .lAlpha = lAlpha,
            .eBetaa = eBetaa, .eAlpha = eAlpha,
            .imethod = imethod, .lower.limit = lower.limit,
            .nrfs = nrfs ) )),
  loglikelihood = eval(substitute(
    function(mu, y, w, residuals = FALSE, eta, extra = NULL) {
    Alpha <- eta2theta(eta[, c(TRUE, FALSE)], .lAlpha , earg = .eAlpha )
    Betaa <- eta2theta(eta[, c(FALSE, TRUE)], .lBetaa , earg = .eBetaa )
    Shape <- Betaa
    Scale <- 1 / Alpha^(1/Betaa)
    if (residuals) stop("loglikelihood residuals not ",
                        "implemented yet") else {
      sum(c(w) * (dweibull(x = y, shape = Shape, scale = Scale,
                           log = TRUE) -
                  pweibull(q = extra$lower.limit, shape = Shape,
                           scale = Scale,
                           log.p = TRUE, lower.tail = FALSE)))
    }
  }, list( .lBetaa = lBetaa, .lAlpha = lAlpha,
           .eBetaa = eBetaa, .eAlpha = eAlpha,
           .lower.limit = lower.limit ) )),
  vfamily = c("truncweibull"),
  deriv = eval(substitute(expression({
    Musual <- 2
    Alpha <- eta2theta(eta[, c(TRUE, FALSE)], .lAlpha , earg = .eAlpha )
    Betaa <- eta2theta(eta[, c(FALSE, TRUE)], .lBetaa , earg = .eBetaa )
    Shape <- Betaa
    Scale <- 1 / Alpha^(1/Betaa)
    TTT <- extra$lower.limit

    dl.dAlpha <- 1 / Alpha - y^Betaa + TTT^Betaa
    dl.dBetaa <- (1 / Betaa) + log(y) -
                 Alpha * (y^Betaa * log(y) - TTT^Betaa * log(TTT))

    dAlpha.deta <- dtheta.deta(Alpha, .lAlpha, earg = .eAlpha )
    dBetaa.deta <- dtheta.deta(Betaa, .lBetaa, earg = .eBetaa )

    myderiv <- c(w) * cbind(dl.dAlpha * dAlpha.deta,
                            dl.dBetaa * dBetaa.deta)
    myderiv[, interleave.VGAM(M, M = Musual)]
  }), list( .lBetaa = lBetaa, .lAlpha = lAlpha,
            .eBetaa = eBetaa, .eAlpha = eAlpha,
            .lower.limit = lower.limit ) )),
  weight = eval(substitute(expression({
    aTb <- Alpha * TTT^Betaa
    exp.aTb <- exp(aTb)
    TblogT <- (TTT^Betaa) * log(TTT)
    wingo3 <- pgamma.deriv.unscaled(q = aTb, shape = 2)  # 3-cols

    Eyblogy  <- (exp.aTb * (digamma(2) - wingo3[, 2]) -
                 (aTb + 1) * log(Alpha)) / (Alpha * Betaa)
    Eyblog2y <- (exp.aTb * (digamma(2)^2 + trigamma(2) - wingo3[, 3]) -
                 2 * log(Alpha) * (digamma(2) - wingo3[, 2])) /
                (Alpha * Betaa^2) +
                (log(Alpha)^2) * (aTb + 1) / (Alpha * Betaa^2)

    ned2l.daa <- 1 / Alpha^2
    ned2l.dab <- Eyblogy - TblogT
    ned2l.dbb <- (1 / Betaa)^2 + Alpha * Eyblog2y - aTb * (log(TTT))^2

    wz <- array(c(c(w) * ned2l.daa * dAlpha.deta^2,
                  c(w) * ned2l.dbb * dBetaa.deta^2,
                  c(w) * ned2l.dab * dBetaa.deta * dAlpha.deta),
                dim = c(n, M / Musual, 3))
    wz <- arwz2wz(wz, M = M, Musual = Musual)
    wz
  }), list( .nrfs = nrfs ))))
}


# VGAM/R/family.categorical.R

# These functions are
# Copyright (C) 1998-2013 T.W. Yee, University of Auckland.
# All rights reserved.
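# A hedged usage sketch for the truncweibull() family that closes the
# previous file (not run; 'wdata' and the 0.5 threshold are
# hypothetical).  The response must lie above 'lower.limit':
#   wdata <- data.frame(y = rweibull(5000, shape = 3, scale = 1))
#   wdata <- subset(wdata, y > 0.5)   # left-truncated sample
#   fit <- vglm(y ~ 1, truncweibull(lower.limit = 0.5), data = wdata)
#   coef(fit, matrix = TRUE)          # Alpha and Betaa on link scales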
process.categorical.data.vgam <- expression({ extra$y.integer <- TRUE if (!all(w == 1)) extra$orig.w <- w if (!is.matrix(y)) { yf <- as.factor(y) lev <- levels(yf) llev <- length(lev) nn <- length(yf) y <- matrix(0, nn, llev) y[cbind(1:nn,as.vector(unclass(yf)))] <- 1 dimnames(y) <- list(names(yf), lev) if (llev <= 1) stop("the response matrix does not have 2 or more columns") } else { nn <- nrow(y) } nvec <- rowSums(y) if (min(y) < 0 || any(round(y) != y)) stop("the response must be non-negative counts (integers)") if (!exists("delete.zero.colns") || (exists("delete.zero.colns") && delete.zero.colns)) { sumy2 <- colSums(y) if (any(index <- sumy2 == 0)) { y <- y[,!index, drop = FALSE] sumy2 <- sumy2[!index] if (all(index) || ncol(y) <= 1) stop("'y' matrix has 0 or 1 columns") warning("Deleted ", sum(!index), " columns of the response matrix due to zero counts") } } if (any(miss <- (nvec == 0))) { smiss <- sum(miss) warning("Deleted ", smiss, " rows of the response matrix due to zero counts") x <- x[!miss,, drop = FALSE] y <- y[!miss,, drop = FALSE] w <- cbind(w) w <- w[!miss,, drop = FALSE] nvec <- nvec[!miss] nn <- nn - smiss } w <- w * nvec nvec[nvec == 0] <- 1 y <- prop.table(y, 1) # Convert to proportions if (length(mustart) + length(etastart) == 0) { mustart <- y + (1 / ncol(y) - y) / nvec } }) Deviance.categorical.data.vgam <- function(mu, y, w, residuals = FALSE, eta, extra = NULL) { if (ncol(y) == 1 || ncol(mu) == 1) stop("arguments 'y' and 'mu' must have at least 2 columns") double.eps <- sqrt( .Machine$double.xmin ) devy <- y nonz <- (y != 0) devy[nonz] <- y[nonz] * log(y[nonz]) devmu <- 0 * y # filler; y*log(mu) gives a warning (fixed up anyway). if (any(smallmu <- (mu * (1 - mu) < double.eps))) { warning("fitted values close to 0 or 1") smu <- mu[smallmu] smy <- y[smallmu] smu <- ifelse(smu < double.eps, double.eps, smu) devmu[smallmu] <- smy * log(smu) } devmu[!smallmu] <- y[!smallmu] * log(mu[!smallmu]) devi <- 2 * (devy - devmu) if (residuals) { M <- if (is.matrix(eta)) ncol(eta) else 1 if (M > 1) return(NULL) devi <- devi %*% rep(1, ncol(devi)) # deviance = \sum_i devi[i] return(c(sign(y[, 1] - mu[, 1]) * sqrt(abs(devi) * w))) } else { sum(w * devi) } } dmultinomial <- function(x, size = NULL, prob, log = FALSE, dochecking = TRUE, smallno = 1.0e-7) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) x <- as.matrix(x) prob <- as.matrix(prob) if (((K <- ncol(x)) <= 1) || ncol(prob) != K) stop("arguments 'x' and 'prob' must be matrices with ", "two or more columns") if (dochecking) { if (min(prob) < 0) stop("argument 'prob' contains some negative values") if (any(abs((rsprob <- rowSums(prob)) - 1) > smallno)) stop("some rows of 'prob' do not add to unity") if (any(abs(x - round(x)) > smallno)) stop("argument 'x' should be integer-valued") if (length(size)) { if (any(abs(size - rowSums(x)) > smallno)) stop("rowSums(x) does not agree with argument 'size'") } else { size <- round(rowSums(x)) } } else { if (!length(size)) size <- round(rowSums(prob)) } logdensity <- lgamma(size + 1) + rowSums(x * log(prob) - lgamma(x + 1)) if (log.arg) logdensity else exp(logdensity) } sratio <- function(link = "logit", parallel = FALSE, reverse = FALSE, zero = NULL, whitespace = FALSE) { link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") if (!is.logical(reverse) || length(reverse) != 1) stop("argument 'reverse' must be a single logical") stopifnot(is.logical(whitespace) && length(whitespace) == 1) 
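# Aside: a small worked check of dmultinomial() above (hedged, not
# run).  Each row of 'x' is one multinomial observation and each row
# of 'prob' must sum to unity:
#   xx <- rbind(c(2, 1, 0), c(0, 0, 3))
#   pp <- rbind(c(0.5, 0.3, 0.2), c(0.2, 0.2, 0.6))
#   dmultinomial(xx, prob = pp)   # rowwise dmultinom() values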
fillerChar <- ifelse(whitespace, " ", "") new("vglmff", blurb = c("Stopping ratio model\n\n", "Links: ", namesof(if (reverse) ifelse(whitespace, "P[Y = j+1|Y <= j+1]", "P[Y=j+1|Y<=j+1]") else ifelse(whitespace, "P[Y = j|Y >= j]", "P[Y=j|Y>=j]"), link, earg = earg), "\n", "Variance: ", ifelse(whitespace, "mu[,j] * (1 - mu[,j]); -mu[,j] * mu[,k]", "mu[,j]*(1-mu[,j]); -mu[,j]*mu[,k]")), constraints = eval(substitute(expression({ constraints <- cm.vgam(matrix(1, M, 1), x, .parallel , constraints) constraints <- cm.zero.vgam(constraints, x, .zero , M) }), list( .parallel = parallel, .zero = zero ))), deviance = Deviance.categorical.data.vgam, initialize = eval(substitute(expression({ if (is.factor(y) && !is.ordered(y)) warning("response should be ordinal---see ordered()") delete.zero.colns <- TRUE eval(process.categorical.data.vgam) extra$wy.prod <- TRUE M <- ncol(y) - 1 mynames <- if ( .reverse ) paste("P[Y", .fillerChar, "=", .fillerChar, 2:(M+1), "|Y", .fillerChar, "<=", .fillerChar, 2:(M+1), "]", sep = "") else paste("P[Y", .fillerChar, "=", .fillerChar, 1:M, "|Y", .fillerChar, ">=", .fillerChar, 1:M, "]", sep = "") predictors.names <- namesof(mynames, .link , short = TRUE, earg = .earg ) y.names <- paste("mu", 1:(M+1), sep = "") extra$mymat <- if ( .reverse ) tapplymat1(y, "cumsum") else tapplymat1(y[, ncol(y):1], "cumsum")[, ncol(y):1] if (length(dimnames(y))) extra$dimnamesy2 <- dimnames(y)[[2]] }), list( .earg = earg, .link = link, .reverse = reverse, .fillerChar = fillerChar, .whitespace = whitespace ))), linkinv = eval(substitute( function(eta, extra = NULL) { if (!is.matrix(eta)) eta <- as.matrix(eta) fv.matrix <- if ( .reverse ) { M <- ncol(eta) djr <- eta2theta(eta, .link , earg = .earg ) temp <- tapplymat1(1 - djr[, M:1], "cumprod")[, M:1] cbind(1, djr) * cbind(temp, 1) } else { dj <- eta2theta(eta, .link , earg = .earg ) temp <- tapplymat1(1 - dj, "cumprod") cbind(dj, 1) * cbind(1, temp) } if (length(extra$dimnamesy2)) dimnames(fv.matrix) = list(dimnames(eta)[[1]], extra$dimnamesy2) fv.matrix }, list( .earg = earg, .link = link, .reverse = reverse) )), last = eval(substitute(expression({ misc$link <- rep( .link , length = M) names(misc$link) <- mynames misc$earg <- vector("list", M) names(misc$earg) <- names(misc$link) for (ii in 1:M) misc$earg[[ii]] <- .earg misc$parameters <- mynames misc$reverse <- .reverse misc$fillerChar <- .fillerChar misc$whitespace <- .whitespace extra <- list() # kill what was used }), list( .earg = earg, .link = link, .reverse = reverse, .fillerChar = fillerChar, .whitespace = whitespace ))), linkfun = eval(substitute( function(mu, extra = NULL) { cump <- tapplymat1(mu, "cumsum") if ( .reverse ) { djr <- mu[, -1] / cump[, -1] theta2eta(djr, .link , earg = .earg ) } else { M <- ncol(mu) - 1 dj <- if (M == 1) mu[, 1] else mu[, 1:M] / (1 - cbind(0, cump[, 1:(M-1)])) theta2eta(dj, .link , earg = .earg ) } }, list( .earg = earg, .link = link, .reverse = reverse) )), loglikelihood = function(mu, y, w, residuals = FALSE, eta, extra = NULL) if (residuals) stop("loglikelihood residuals ", "not implemented yet") else { ycounts <- if (is.numeric(extra$orig.w)) y * w / extra$orig.w else y * w # Convert proportions to counts nvec <- if (is.numeric(extra$orig.w)) round(w / extra$orig.w) else round(w) smallno <- 1.0e4 * .Machine$double.eps if (max(abs(ycounts - round(ycounts))) > smallno) warning("converting 'ycounts' to integer in @loglikelihood") ycounts <- round(ycounts) sum((if (is.numeric(extra$orig.w)) extra$orig.w else 1) * dmultinomial(x = ycounts, size = 
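# Roughly: sratio() models the stopping ratios P[Y = j | Y >= j] (or
# P[Y = j+1 | Y <= j+1] when reverse = TRUE), one linear predictor per
# ratio.  A hedged usage sketch (not run), with the pneumo coal-miners
# data from this package:
#   pneumo <- transform(pneumo, let = log(exposure.time))
#   fit <- vglm(cbind(normal, mild, severe) ~ let, sratio, data = pneumo)
#   fitted(fit)   # one column of probabilities per response category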
                              nvec, prob = mu, log = TRUE,
                              dochecking = FALSE))
  },
  vfamily = c("sratio", "vcategorical"),
  deriv = eval(substitute(expression({
    if (!length(extra$mymat)) {
      extra$mymat <- if ( .reverse ) tapplymat1(y, "cumsum") else
                     tapplymat1(y[, ncol(y):1], "cumsum")[, ncol(y):1]
    }
    if ( .reverse ) {
      djr <- eta2theta(eta, .link , earg = .earg )
      Mp1 <- ncol(extra$mymat)
      c(w) * (y[, -1] / djr - extra$mymat[, -Mp1] / (1 - djr)) *
      dtheta.deta(djr, .link , earg = .earg )
    } else {
      dj <- eta2theta(eta, .link , earg = .earg )
      c(w) * (y[, -ncol(y)] / dj - extra$mymat[, -1] / (1 - dj)) *
      dtheta.deta(dj, .link , earg = .earg )
    }
  }), list( .earg = earg, .link = link, .reverse = reverse) )),
  weight = eval(substitute(expression({
    if ( .reverse ) {
      cump <- tapplymat1(mu, "cumsum")
      ddjr.deta <- dtheta.deta(djr, .link , earg = .earg )
      wz <- c(w) * ddjr.deta^2 *
            (mu[, -1] / djr^2 + cump[, 1:M] / (1 - djr)^2)
    } else {
      ccump <- tapplymat1(mu[, ncol(mu):1], "cumsum")[, ncol(mu):1]
      ddj.deta <- dtheta.deta(dj, .link , earg = .earg )
      wz <- c(w) * ddj.deta^2 *
            (mu[, 1:M] / dj^2 + ccump[, -1] / (1 - dj)^2)
    }
    wz
  }), list( .earg = earg, .link = link, .reverse = reverse ))))
}


cratio <- function(link = "logit",
                   parallel = FALSE, reverse = FALSE, zero = NULL,
                   whitespace = FALSE) {
  link <- as.list(substitute(link))
  earg <- link2list(link)
  link <- attr(earg, "function.name")

  if (!is.logical(reverse) || length(reverse) != 1)
    stop("argument 'reverse' must be a single logical")
  stopifnot(is.logical(whitespace) && length(whitespace) == 1)
  fillerChar <- ifelse(whitespace, " ", "")

  new("vglmff",
  blurb = c("Continuation ratio model\n\n",
            "Links: ",
            namesof(if (reverse)
                    ifelse(whitespace, "P[Y < j+1|Y <= j+1]",
                                       "P[Y<j+1|Y<=j+1]") else
                    ifelse(whitespace, "P[Y > j|Y >= j]",
                                       "P[Y>j|Y>=j]"),
                    link, earg = earg),
            "\n",
            "Variance: ",
            ifelse(whitespace,
                   "mu[,j] * (1 - mu[,j]); -mu[,j] * mu[,k]",
                   "mu[,j]*(1-mu[,j]); -mu[,j]*mu[,k]")),
  constraints = eval(substitute(expression({
    constraints <- cm.vgam(matrix(1, M, 1), x, .parallel , constraints)
    constraints <- cm.zero.vgam(constraints, x, .zero , M)
  }), list( .parallel = parallel, .zero = zero ))),
  deviance = Deviance.categorical.data.vgam,
  initialize = eval(substitute(expression({
    if (is.factor(y) && !is.ordered(y))
      warning("response should be ordinal---see ordered()")

    delete.zero.colns <- TRUE
    eval(process.categorical.data.vgam)
    M <- ncol(y) - 1

    mynames <- if ( .reverse )
      paste("P[Y", .fillerChar, "<", .fillerChar, 2:(M+1), "|Y",
            .fillerChar, "<=", .fillerChar, 2:(M+1), "]", sep = "") else
      paste("P[Y", .fillerChar, ">", .fillerChar, 1:M, "|Y",
            .fillerChar, ">=", .fillerChar, 1:M, "]", sep = "")
    predictors.names <- namesof(mynames, .link , earg = .earg ,
                                short = TRUE)
    y.names <- paste("mu", 1:(M+1), sep = "")

    extra$mymat <- if ( .reverse ) tapplymat1(y, "cumsum") else
                   tapplymat1(y[, ncol(y):1], "cumsum")[, ncol(y):1]

    if (length(dimnames(y)))
      extra$dimnamesy2 <- dimnames(y)[[2]]
  }), list( .earg = earg, .link = link, .reverse = reverse,
            .fillerChar = fillerChar, .whitespace = whitespace ))),
  linkinv = eval(substitute(
    function(eta, extra = NULL) {
    if (!is.matrix(eta))
      eta <- as.matrix(eta)
    fv.matrix <- if ( .reverse ) {
      M <- ncol(eta)
      djrs <- eta2theta(eta, .link , earg = .earg )
      temp <- tapplymat1(djrs[, M:1], "cumprod")[, M:1]
      cbind(1, 1 - djrs) * cbind(temp, 1)
    } else {
      djs <- eta2theta(eta, .link , earg = .earg )
      temp <- tapplymat1(djs, "cumprod")
      cbind(1 - djs, 1) * cbind(1, temp)
    }
    if (length(extra$dimnamesy2))
      dimnames(fv.matrix) <- list(dimnames(eta)[[1]],
                                  extra$dimnamesy2)
    fv.matrix
  }, list( .earg = earg, .link = link, .reverse = reverse)
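# Note: cratio() here models the continuation ratios
# P[Y > j | Y >= j] = 1 - P[Y = j | Y >= j], the complement of
# sratio(); with a logit link the two fits should differ only in the
# signs of the coefficients.  A hedged sketch (not run):
#   fit2 <- vglm(cbind(normal, mild, severe) ~ let, cratio, data = pneumo)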
)), last = eval(substitute(expression({ misc$link <- rep( .link , length = M) names(misc$link) <- mynames misc$earg <- vector("list", M) names(misc$earg) <- names(misc$link) for (ii in 1:M) misc$earg[[ii]] <- .earg misc$parameters <- mynames misc$reverse <- .reverse misc$fillerChar <- .fillerChar misc$whitespace <- .whitespace extra <- list() # kill what was used }), list( .earg = earg, .link = link, .reverse = reverse, .fillerChar = fillerChar, .whitespace = whitespace ))), linkfun = eval(substitute( function(mu, extra = NULL) { cump <- tapplymat1(mu, "cumsum") if ( .reverse ) { djrs <- 1 - mu[, -1] / cump[, -1] theta2eta(djrs, .link , earg = .earg ) } else { M <- ncol(mu) - 1 djs <- if (M == 1) 1 - mu[, 1] else 1 - mu[, 1:M] / (1 - cbind(0, cump[, 1:(M-1)])) theta2eta(djs, .link , earg = .earg ) } }, list( .earg = earg, .link = link, .reverse = reverse) )), loglikelihood = function(mu, y, w, residuals = FALSE, eta, extra = NULL) if (residuals) stop("loglikelihood residuals ", "not implemented yet") else { ycounts <- if (is.numeric(extra$orig.w)) y * w / extra$orig.w else y * w # Convert proportions to counts nvec <- if (is.numeric(extra$orig.w)) round(w / extra$orig.w) else round(w) smallno <- 1.0e4 * .Machine$double.eps if (max(abs(ycounts - round(ycounts))) > smallno) warning("converting 'ycounts' to integer in @loglikelihood") ycounts <- round(ycounts) sum((if (is.numeric(extra$orig.w)) extra$orig.w else 1) * dmultinomial(x = ycounts, size = nvec, prob = mu, log = TRUE, dochecking = FALSE)) }, vfamily = c("cratio", "vcategorical"), deriv = eval(substitute(expression({ if (!length(extra$mymat)) { extra$mymat <- if ( .reverse ) tapplymat1(y, "cumsum") else tapplymat1(y[, ncol(y):1], "cumsum")[, ncol(y):1] } if ( .reverse ) { djrs <- eta2theta(eta, .link , earg = .earg ) Mp1 <- ncol(extra$mymat) -c(w) * (y[, -1]/(1 - djrs) - extra$mymat[, -Mp1]/djrs) * dtheta.deta(djrs, .link , earg = .earg ) } else { djs <- eta2theta(eta, .link , earg = .earg ) -c(w) * (y[, -ncol(y)]/(1 - djs) - extra$mymat[, -1]/djs) * dtheta.deta(djs, .link , earg = .earg ) } }), list( .earg = earg, .link = link, .reverse = reverse) )), weight = eval(substitute(expression({ if ( .reverse ) { cump <- tapplymat1(mu, "cumsum") ddjrs.deta <- dtheta.deta(djrs, .link , earg = .earg ) wz <- c(w) * ddjrs.deta^2 * (mu[, -1] / (1 - djrs)^2 + cump[, 1:M] / djrs^2) } else { ccump <- tapplymat1(mu[, ncol(mu):1], "cumsum")[, ncol(mu):1] ddjs.deta <- dtheta.deta(djs, .link , earg = .earg ) wz <- c(w) * ddjs.deta^2 * (mu[, 1:M] / (1 - djs)^2 + ccump[, -1] / djs^2) } wz }), list( .earg = earg, .link = link, .reverse = reverse )))) } vglm.multinomial.deviance.control <- function(maxit = 21, panic = FALSE, ...) { if (maxit < 1) { warning("bad value of maxit; using 21 instead") maxit = 21 } list(maxit = maxit, panic = as.logical(panic)[1]) } vglm.multinomial.control <- function(maxit = 21, panic = FALSE, criterion = c("aic1", "aic2", names( .min.criterion.VGAM )), ...) { if (mode(criterion) != "character" && mode(criterion) != "name") criterion <- as.character(substitute(criterion)) criterion <- match.arg(criterion, c("aic1", "aic2", names( .min.criterion.VGAM )))[1] if (maxit < 1) { warning("bad value of maxit; using 21 instead") maxit <- 21 } list(maxit = maxit, panic = as.logical(panic)[1], criterion = criterion, min.criterion = c("aic1" = FALSE, "aic2" = TRUE, .min.criterion.VGAM)) } vglm.vcategorical.control <- function(maxit = 30, trace = FALSE, panic = TRUE, ...) 
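# Aside (hedged): these vglm.*.control() functions supply per-family
# defaults for the IRLS loop; they seem to be looked up by name when
# the matching family is used, so most users just pass overrides to
# vglm() directly, e.g. (not run; 'mydata' hypothetical):
#   fit <- vglm(ymat ~ x2, multinomial, data = mydata,
#               trace = TRUE, maxit = 30)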
{ if (maxit < 1) { warning("bad value of maxit; using 200 instead") maxit = 200 } list(maxit = maxit, trace = as.logical(trace)[1], panic = as.logical(panic)[1]) } multinomial <- function(zero = NULL, parallel = FALSE, nointercept = NULL, refLevel = "last", whitespace = FALSE) { if (length(refLevel) != 1) stop("the length of 'refLevel' must be one") if (is.character(refLevel)) { if (refLevel != "last") stop('if a character, refLevel must be "last"') refLevel <- -1 } else if (is.factor(refLevel)) { if (is.ordered(refLevel)) warning("'refLevel' is from an ordered factor") refLevel <- as.character(refLevel) == levels(refLevel) refLevel <- (1:length(refLevel))[refLevel] if (!is.Numeric(refLevel, allowable.length = 1, integer.valued = TRUE, positive = TRUE)) stop("could not coerce 'refLevel' into a single positive integer") } else if (!is.Numeric(refLevel, allowable.length = 1, integer.valued = TRUE, positive = TRUE)) stop("'refLevel' must be a single positive integer") stopifnot(is.logical(whitespace) && length(whitespace) == 1) fillerChar <- ifelse(whitespace, " ", "") new("vglmff", blurb = c("Multinomial logit model\n\n", "Links: ", if (refLevel < 0) { ifelse(whitespace, "log(mu[,j] / mu[,M+1]), j = 1:M,\n", "log(mu[,j]/mu[,M+1]), j=1:M,\n") } else { if (refLevel == 1) { paste("log(mu[,", "j]", fillerChar, "/", fillerChar, "mu[,", refLevel, "]), j", fillerChar, "=", fillerChar, "2:(M+1),\n", sep = "") } else { paste("log(mu[,", "j]", fillerChar, "/", "mu[,", refLevel, "]), j", fillerChar, "=", fillerChar, "c(1:", refLevel-1, ",", fillerChar, refLevel+1, ":(M+1)),\n", sep = "") } }, "Variance: ", ifelse(whitespace, "mu[,j] * (1 - mu[,j]); -mu[,j] * mu[,k]", "mu[,j]*(1-mu[,j]); -mu[,j]*mu[,k]")), constraints = eval(substitute(expression({ constraints <- cm.vgam(matrix(1, M, 1), x, .parallel, constraints, apply.int = FALSE) constraints <- cm.zero.vgam(constraints, x, .zero, M) constraints <- cm.nointercept.vgam(constraints, x, .nointercept, M) }), list( .parallel = parallel, .zero = zero, .nointercept = nointercept, .refLevel = refLevel ))), deviance = Deviance.categorical.data.vgam, infos = eval(substitute(function(...) 
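# A hedged usage sketch for multinomial() (not run): the linear
# predictors are log(mu[,j]/mu[,refLevel]).  The default
# refLevel = "last" treats the last response column as baseline;
# refLevel = 1 uses the first column instead:
#   fit <- vglm(cbind(y1, y2, y3) ~ x2, multinomial(refLevel = 1),
#               data = mydata)   # 'mydata' and its columns hypothetical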
{ list(parallel = .parallel , refLevel = .refLevel , multipleResponses = FALSE, zero = .zero ) }, list( .zero = zero, .refLevel = refLevel, .parallel = parallel ))), initialize = eval(substitute(expression({ delete.zero.colns <- TRUE eval(process.categorical.data.vgam) M <- ncol(y)-1 use.refLevel <- if ( .refLevel < 0) M+1 else .refLevel if (use.refLevel > (M+1)) stop("argument 'refLevel' has a value that is too high") allbut.refLevel <- (1:(M+1))[-use.refLevel] predictors.names <- paste("log(mu[,", allbut.refLevel, "]", .fillerChar, "/", .fillerChar, "mu[,", use.refLevel, "])", sep = "") y.names <- paste("mu", 1:(M+1), sep = "") }), list( .refLevel = refLevel, .fillerChar = fillerChar, .whitespace = whitespace ))), linkinv = eval(substitute( function(eta, extra = NULL) { if (any(is.na(eta))) warning("there are NAs in eta in slot inverse") ans <- mlogit(eta, refLevel = .refLevel , inverse = TRUE) if (any(is.na(ans))) warning("there are NAs here in slot linkinv") if (min(ans) == 0 || max(ans) == 1) warning("fitted probabilities numerically 0 or 1 occurred") ans }), list( .refLevel = refLevel )), last = eval(substitute(expression({ misc$refLevel <- if ( .refLevel < 0) M+1 else .refLevel misc$link <- "mlogit" misc$earg <- list(mlogit = list( M = M, refLevel = use.refLevel )) dy <- dimnames(y) if (!is.null(dy[[2]])) dimnames(fit$fitted.values) <- dy misc$multipleResponses <- FALSE misc$nointercept <- .nointercept misc$parallel <- .parallel misc$refLevel <- use.refLevel misc$refLevel.orig <- .refLevel misc$zero <- .zero }), list( .refLevel = refLevel, .nointercept = nointercept, .parallel = parallel, .zero = zero ))), linkfun = eval(substitute( function(mu, extra = NULL) { mlogit(mu, refLevel = .refLevel ) }), list( .refLevel = refLevel )), loglikelihood = function(mu, y, w, residuals = FALSE, eta, extra = NULL) if (residuals) stop("loglikelihood residuals ", "not implemented yet") else { ycounts <- if (is.numeric(extra$orig.w)) y * w / extra$orig.w else y * w # Convert proportions to counts nvec <- if (is.numeric(extra$orig.w)) round(w / extra$orig.w) else round(w) smallno <- 1.0e4 * .Machine$double.eps if (max(abs(ycounts - round(ycounts))) > smallno) warning("converting 'ycounts' to integer in @loglikelihood") ycounts <- round(ycounts) sum((if (is.numeric(extra$orig.w)) extra$orig.w else 1) * dmultinomial(x = ycounts, size = nvec, prob = mu, log = TRUE, dochecking = FALSE)) }, vfamily = c("multinomial", "vcategorical"), deriv = eval(substitute(expression({ if ( .refLevel < 0) { c(w) * (y[, -ncol(y)] - mu[, -ncol(y)]) } else { use.refLevel <- if ( .refLevel < 0) M+1 else .refLevel c(w) * (y[, -use.refLevel] - mu[, -use.refLevel]) } }), list( .refLevel = refLevel ))), weight = eval(substitute(expression({ mytiny <- (mu < sqrt(.Machine$double.eps)) | (mu > 1.0 - sqrt(.Machine$double.eps)) use.refLevel <- if ( .refLevel < 0) M+1 else .refLevel if (M == 1) { wz <- mu[, 3-use.refLevel] * (1-mu[, 3-use.refLevel]) } else { index <- iam(NA, NA, M, both = TRUE, diag = TRUE) myinc <- (index$row.index >= use.refLevel) index$row.index[myinc] <- index$row.index[myinc] + 1 myinc <- (index$col.index >= use.refLevel) index$col.index[myinc] <- index$col.index[myinc] + 1 wz <- -mu[, index$row] * mu[, index$col] wz[, 1:M] <- wz[, 1:M] + mu[, -use.refLevel ] } atiny <- (mytiny %*% rep(1, ncol(mu))) > 0 # apply(mytiny, 1, any) if (any(atiny)) { if (M == 1) wz[atiny] <- wz[atiny] * (1 + .Machine$double.eps^0.5) + .Machine$double.eps else wz[atiny, 1:M] <- wz[atiny, 1:M] * (1 + .Machine$double.eps^0.5) + 
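# For the multinomial, @deriv above is the score w * (y - mu) with the
# baseline column dropped, and @weight builds the expected information
# diag(mu) - mu %*% t(mu) in band format; the fuzz being added here
# keeps wz positive definite when fitted probabilities reach 0 or 1.
# Small check of the 2 x 2 block for mu = (0.2, 0.3, 0.5), refLevel 3:
#   rbind(c( 0.2 * 0.8, -0.2 * 0.3),
#         c(-0.2 * 0.3,  0.3 * 0.7))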
.Machine$double.eps } c(w) * wz }), list( .refLevel = refLevel )))) } cumulative <- function(link = "logit", parallel = FALSE, reverse = FALSE, mv = FALSE, apply.parint = FALSE, whitespace = FALSE) { link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") stopifnot(is.logical(whitespace) && length(whitespace) == 1) fillerChar <- ifelse(whitespace, " ", "") if (!is.logical(mv) || length(mv) != 1) stop("argument 'mv' must be a single logical") if (!is.logical(reverse) || length(reverse) != 1) stop("argument 'reverse' must be a single logical") new("vglmff", blurb = if ( mv ) c(paste("Multivariate cumulative", link, "model\n\n"), "Links: ", namesof(if (reverse) ifelse(whitespace, "P[Y1 >= j+1]", "P[Y1>=j+1]") else ifelse(whitespace, "P[Y1 <= j]", "P[Y1<=j]"), link, earg = earg), ", ...") else c(paste("Cumulative", link, "model\n\n"), "Links: ", namesof(if (reverse) ifelse(whitespace, "P[Y >= j+1]", "P[Y>=j+1]") else ifelse(whitespace, "P[Y <= j]", "P[Y<=j]"), link, earg = earg)), constraints = eval(substitute(expression({ if ( .mv ) { if ( !length(constraints) ) { Llevels <- extra$Llevels NOS <- extra$NOS Hk.matrix <- kronecker(diag(NOS), matrix(1,Llevels-1,1)) constraints <- cm.vgam(Hk.matrix, x, .parallel, constraints, apply.int = .apply.parint) } } else { constraints <- cm.vgam(matrix(1, M, 1), x, .parallel, constraints, apply.int = .apply.parint) } }), list( .parallel = parallel, .mv = mv, .apply.parint = apply.parint ))), deviance = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { answer <- if ( .mv ) { totdev <- 0 NOS <- extra$NOS Llevels <- extra$Llevels for (iii in 1:NOS) { cindex <- (iii-1)*(Llevels-1) + 1:(Llevels-1) aindex <- (iii-1)*(Llevels) + 1:(Llevels) totdev <- totdev + Deviance.categorical.data.vgam( mu = mu[, aindex, drop = FALSE], y = y[, aindex, drop = FALSE], w = w, residuals = residuals, eta = eta[, cindex, drop = FALSE], extra = extra) } totdev } else { Deviance.categorical.data.vgam(mu = mu, y = y, w = w, residuals = residuals, eta = eta, extra = extra) } answer }, list( .earg = earg, .link = link, .mv = mv ) )), initialize = eval(substitute(expression({ if (colnames(x)[1] != "(Intercept)") warning("there seems to be no intercept term!") if (is.factor(y) && !is.ordered(y)) warning("response should be ordinal---see ordered()") extra$mv <- .mv if ( .mv ) { checkCut(y) # Check the input; stops if there is an error. 
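# When mv = TRUE, the @constraints slot above uses
# Hk.matrix = kronecker(diag(NOS), matrix(1, Llevels-1, 1)) so that,
# per covariate, the Llevels-1 cutpoint predictors of each response
# share a single coefficient (parallelism within each response), e.g.
#   kronecker(diag(2), matrix(1, 2, 1))   # 4 x 2 for NOS = 2, 3 levels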
if (any(w != 1) || ncol(cbind(w)) != 1) stop("the 'weights' argument must be a vector of all ones") Llevels <- max(y) delete.zero.colns <- FALSE orig.y <- cbind(y) # Convert y into a matrix if necessary NOS <- ncol(cbind(orig.y)) use.y <- use.mustart <- NULL for (iii in 1:NOS) { y <- as.factor(orig.y[,iii]) eval(process.categorical.data.vgam) use.y <- cbind(use.y, y) use.mustart <- cbind(use.mustart, mustart) } mustart <- use.mustart y <- use.y # n x (Llevels*NOS) M <- NOS * (Llevels-1) mynames <- y.names <- NULL for (iii in 1:NOS) { Y.names <- paste("Y", iii, sep = "") mu.names <- paste("mu", iii, ".", sep = "") mynames <- c(mynames, if ( .reverse ) paste("P[", Y.names, ">=", 2:Llevels, "]", sep = "") else paste("P[", Y.names, "<=", 1:(Llevels-1), "]", sep = "")) y.names <- c(y.names, paste(mu.names, 1:Llevels, sep = "")) } predictors.names <- namesof(mynames, .link , short = TRUE, earg = .earg ) extra$NOS <- NOS extra$Llevels <- Llevels } else { delete.zero.colns <- TRUE eval(process.categorical.data.vgam) M <- ncol(y) - 1 mynames <- if ( .reverse ) paste("P[Y", .fillerChar , ">=", .fillerChar, 2:(1+M), "]", sep = "") else paste("P[Y", .fillerChar , "<=", .fillerChar, 1:M, "]", sep = "") predictors.names <- namesof(mynames, .link , short = TRUE, earg = .earg ) y.names <- paste("mu", 1:(M+1), sep = "") if (ncol(cbind(w)) == 1) { if (length(mustart) && all(c(y) %in% c(0, 1))) for (iii in 1:ncol(y)) mustart[,iii] <- weighted.mean(y[,iii], w) } if (length(dimnames(y))) extra$dimnamesy2 <- dimnames(y)[[2]] } }), list( .reverse = reverse, .mv = mv, .link = link, .earg = earg, .fillerChar = fillerChar, .whitespace = whitespace ))), linkinv = eval(substitute( function(eta, extra = NULL) { answer <- if ( .mv ) { NOS <- extra$NOS Llevels <- extra$Llevels fv.matrix <- matrix(0, nrow(eta), NOS*Llevels) for (iii in 1:NOS) { cindex <- (iii-1)*(Llevels-1) + 1:(Llevels-1) aindex <- (iii-1)*(Llevels) + 1:(Llevels) if ( .reverse ) { ccump <- cbind(1, eta2theta(eta[, cindex, drop = FALSE], .link , earg = .earg )) fv.matrix[,aindex] <- cbind(-tapplymat1(ccump, "diff"), ccump[, ncol(ccump)]) } else { cump <- cbind(eta2theta(eta[, cindex, drop = FALSE], .link , earg = .earg ), 1) fv.matrix[,aindex] <- cbind(cump[, 1], tapplymat1(cump, "diff")) } } fv.matrix } else { fv.matrix <- if ( .reverse ) { ccump <- cbind(1, eta2theta(eta, .link , earg = .earg )) cbind(-tapplymat1(ccump, "diff"), ccump[, ncol(ccump)]) } else { cump <- cbind(eta2theta(eta, .link , earg = .earg ), 1) cbind(cump[, 1], tapplymat1(cump, "diff")) } if (length(extra$dimnamesy2)) dimnames(fv.matrix) <- list(dimnames(eta)[[1]], extra$dimnamesy2) fv.matrix } answer }, list( .reverse = reverse, .link = link, .earg = earg, .mv = mv ))), last = eval(substitute(expression({ if ( .mv ) { misc$link <- .link misc$earg <- list( .earg ) } else { misc$link <- rep( .link , length = M) names(misc$link) <- mynames misc$earg <- vector("list", M) names(misc$earg) <- names(misc$link) for (ii in 1:M) misc$earg[[ii]] <- .earg } misc$fillerChar <- .fillerChar misc$whitespace <- .whitespace misc$parameters <- mynames misc$reverse <- .reverse misc$parallel <- .parallel misc$mv <- .mv }), list( .reverse = reverse, .parallel = parallel, .link = link, .earg = earg, .fillerChar = fillerChar, .mv = mv, .whitespace = whitespace ))), linkfun = eval(substitute( function(mu, extra = NULL) { answer <- if ( .mv ) { NOS <- extra$NOS Llevels <- extra$Llevels eta.matrix <- matrix(0, nrow(mu), NOS*(Llevels-1)) for (iii in 1:NOS) { cindex <- (iii-1)*(Llevels-1) + 1:(Llevels-1) 
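# The @linkinv slot above recovers cell probabilities from the
# cumulative ones gamma_j = P[Y <= j] by first differences, e.g.
#   cump <- c(0.3, 0.8, 1)       # gamma_1, gamma_2, 1
#   c(cump[1], diff(cump))       # 0.3 0.5 0.2, sums to one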
aindex <- (iii-1)*(Llevels) + 1:(Llevels) cump <- tapplymat1(as.matrix(mu[,aindex]), "cumsum") eta.matrix[,cindex] = theta2eta(if ( .reverse ) 1-cump[, 1:(Llevels-1)] else cump[, 1:(Llevels-1)], .link , earg = .earg ) } eta.matrix } else { cump <- tapplymat1(as.matrix(mu), "cumsum") M <- ncol(as.matrix(mu)) - 1 theta2eta(if ( .reverse ) 1-cump[, 1:M] else cump[, 1:M], .link , earg = .earg ) } answer }, list( .link = link, .earg = earg, .reverse = reverse, .mv = mv ))), loglikelihood = function(mu, y, w, residuals = FALSE, eta, extra = NULL) if (residuals) stop("loglikelihood residuals ", "not implemented yet") else { ycounts <- if (is.numeric(extra$orig.w)) y * w / extra$orig.w else y * w # Convert proportions to counts nvec <- if (is.numeric(extra$orig.w)) round(w / extra$orig.w) else round(w) smallno <- 1.0e4 * .Machine$double.eps if (max(abs(ycounts - round(ycounts))) > smallno) warning("converting 'ycounts' to integer in @loglikelihood") ycounts <- round(ycounts) sum((if (is.numeric(extra$orig.w)) extra$orig.w else 1) * dmultinomial(x = ycounts, size = nvec, prob = mu, log = TRUE, dochecking = FALSE)) }, vfamily = c("cumulative", "vcategorical"), deriv = eval(substitute(expression({ mu.use <- pmax(mu, .Machine$double.eps * 1.0e-0) deriv.answer <- if ( .mv ) { NOS <- extra$NOS Llevels <- extra$Llevels dcump.deta <- resmat <- matrix(0, n, NOS * (Llevels-1)) for (iii in 1:NOS) { cindex <- (iii-1)*(Llevels-1) + 1:(Llevels-1) aindex <- (iii-1)*(Llevels) + 1:(Llevels-1) cump <- eta2theta(eta[,cindex, drop = FALSE], .link , earg = .earg ) dcump.deta[,cindex] <- dtheta.deta(cump, .link , earg = .earg ) resmat[,cindex] <- (y[, aindex, drop = FALSE] / mu.use[, aindex, drop = FALSE] - y[, 1+aindex, drop = FALSE] / mu.use[, 1+aindex, drop = FALSE]) } (if ( .reverse ) -c(w) else c(w)) * dcump.deta * resmat } else { cump <- eta2theta(eta, .link , earg = .earg ) dcump.deta <- dtheta.deta(cump, .link , earg = .earg ) c(if ( .reverse ) -c(w) else c(w)) * dcump.deta * (y[, -(M+1)] / mu.use[, -(M+1)] - y[, -1] / mu.use[, -1]) } deriv.answer }), list( .link = link, .earg = earg, .reverse = reverse, .mv = mv ))), weight = eval(substitute(expression({ if ( .mv ) { NOS <- extra$NOS Llevels <- extra$Llevels wz <- matrix(0, n, NOS*(Llevels-1)) # Diagonal elts only for a start for (iii in 1:NOS) { cindex <- (iii-1)*(Llevels-1) + 1:(Llevels-1) aindex <- (iii-1)*(Llevels) + 1:(Llevels-1) wz[,cindex] <- c(w) * dcump.deta[,cindex, drop = FALSE]^2 * (1 / mu.use[, aindex, drop = FALSE] + 1 / mu.use[, 1+aindex, drop = FALSE]) } if (Llevels-1 > 1) { iii <- 1 oindex <- (iii-1) * (Llevels-1) + 1:(Llevels-2) wz <- cbind(wz, -c(w) * dcump.deta[, oindex] * dcump.deta[, 1+oindex]) if (NOS > 1) { cptrwz <- ncol(wz) # Like a pointer wz <- cbind(wz, matrix(0, nrow(wz), (NOS-1) * (Llevels-1))) for (iii in 2:NOS) { oindex <- (iii-1)*(Llevels-1) + 1:(Llevels-2) wz[,cptrwz + 1 + (1:(Llevels-2))] = -c(w) * dcump.deta[,oindex] * dcump.deta[, 1+oindex] cptrwz <- cptrwz + Llevels - 1 # Move it along a bit } } } } else { wz <- c(w) * dcump.deta^2 * (1/mu.use[, 1:M] + 1/mu.use[, -1]) if (M > 1) wz <- cbind(wz, -c(w) * dcump.deta[, -M] * dcump.deta[, 2:M] / mu.use[, 2:M]) } wz }), list( .earg = earg, .link = link, .mv = mv )))) } propodds <- function(reverse = TRUE, whitespace = FALSE) { if (!is.logical(reverse) || length(reverse) != 1) stop("argument 'reverse' must be a single logical") cumulative(parallel = TRUE, reverse = reverse, whitespace = whitespace) } acat <- function(link = "loge", parallel = FALSE, reverse = FALSE, zero = NULL, 
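# propodds() above is simply cumulative(parallel = TRUE,
# reverse = reverse), i.e. the proportional odds model; a hedged
# sketch (not run):
#   fit <- vglm(cbind(normal, mild, severe) ~ let, propodds, data = pneumo)
# acat() here models the adjacent-category ratios P[Y = j+1]/P[Y = j]
# (or their reciprocals when reverse = TRUE), by default on a log scale.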
whitespace = FALSE) { link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") if (!is.logical(reverse) || length(reverse) != 1) stop("argument 'reverse' must be a single logical") stopifnot(is.logical(whitespace) && length(whitespace) == 1) fillerChar <- ifelse(whitespace, " ", "") new("vglmff", blurb = c("Adjacent-categories model\n\n", "Links: ", namesof(if (reverse) ifelse(whitespace, "P[Y = j] / P[Y = j + 1]", "P[Y=j]/P[Y=j+1]") else ifelse(whitespace, "P[Y = j + 1] / P[Y = j]", "P[Y=j+1]/P[Y=j]"), link, earg = earg), "\n", "Variance: ", ifelse(whitespace, "mu[,j] * (1 - mu[,j]); -mu[,j] * mu[,k]", "mu[,j]*(1-mu[,j]); -mu[,j]*mu[,k]")), constraints = eval(substitute(expression({ constraints <- cm.vgam(matrix(1, M, 1), x, .parallel, constraints) constraints <- cm.zero.vgam(constraints, x, .zero, M) }), list( .parallel = parallel, .zero = zero ))), deviance = Deviance.categorical.data.vgam, initialize = eval(substitute(expression({ if (is.factor(y) && !is.ordered(y)) warning("response should be ordinal---see ordered()") delete.zero.colns <- TRUE eval(process.categorical.data.vgam) M <- ncol(y) - 1 mynames <- if ( .reverse ) paste("P[Y", .fillerChar , "=", 1:M, "]", .fillerChar , "/", .fillerChar , "P[Y", .fillerChar , "=", .fillerChar , 2:(M+1), "]", sep = "") else paste("P[Y", .fillerChar , "=", .fillerChar , 2:(M+1), "]", .fillerChar , "/", .fillerChar , "P[Y", .fillerChar , "=", .fillerChar , 1:M, "]", sep = "") predictors.names <- namesof(mynames, .link , short = TRUE, earg = .earg ) y.names <- paste("mu", 1:(M+1), sep = "") if (length(dimnames(y))) extra$dimnamesy2 <- dimnames(y)[[2]] }), list( .earg = earg, .link = link, .reverse = reverse, .fillerChar = fillerChar, .whitespace = whitespace ))), linkinv = eval(substitute( function(eta, extra = NULL) { if (!is.matrix(eta)) eta <- as.matrix(eta) M <- ncol(eta) fv.matrix <- if ( .reverse ) { zetar <- eta2theta(eta, .link , earg = .earg ) temp <- tapplymat1(zetar[, M:1], "cumprod")[, M:1, drop = FALSE] cbind(temp, 1) / drop(1 + temp %*% rep(1, ncol(temp))) } else { zeta <- eta2theta(eta, .link , earg = .earg ) temp <- tapplymat1(zeta, "cumprod") cbind(1, temp) / drop(1 + temp %*% rep(1, ncol(temp))) } if (length(extra$dimnamesy2)) dimnames(fv.matrix) <- list(dimnames(eta)[[1]], extra$dimnamesy2) fv.matrix }, list( .earg = earg, .link = link, .reverse = reverse) )), last = eval(substitute(expression({ misc$link <- rep( .link , length = M) names(misc$link) <- mynames misc$earg <- vector("list", M) names(misc$earg) <- names(misc$link) for (ii in 1:M) misc$earg[[ii]] <- .earg misc$parameters <- mynames misc$reverse <- .reverse misc$fillerChar <- .fillerChar misc$whitespace <- .whitespace }), list( .earg = earg, .link = link, .reverse = reverse, .fillerChar = fillerChar, .whitespace = whitespace ))), linkfun = eval(substitute( function(mu, extra = NULL) { M <- ncol(mu) - 1 theta2eta(if ( .reverse ) mu[, 1:M] / mu[, -1] else mu[, -1] / mu[, 1:M], .link , earg = .earg ) }, list( .earg = earg, .link = link, .reverse = reverse) )), loglikelihood = function(mu, y, w, residuals = FALSE, eta, extra = NULL) if (residuals) stop("loglikelihood residuals ", "not implemented yet") else { ycounts <- if (is.numeric(extra$orig.w)) y * w / extra$orig.w else y * w # Convert proportions to counts nvec <- if (is.numeric(extra$orig.w)) round(w / extra$orig.w) else round(w) smallno <- 1.0e4 * .Machine$double.eps if (max(abs(ycounts - round(ycounts))) > smallno) warning("converting 'ycounts' to integer in @loglikelihood") 
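# The @linkinv slot above inverts the adjacent-category ratios by
# cumulative products: with zeta_j = P[Y = j+1]/P[Y = j], P[Y = j+1]
# is proportional to prod(zeta[1:j]), normalised to sum to one, e.g.
#   zeta <- c(2, 0.5)                        # M = 2, three levels
#   pj <- c(1, cumprod(zeta)); pj / sum(pj)  # 0.25 0.50 0.25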
ycounts <- round(ycounts) sum((if (is.numeric(extra$orig.w)) extra$orig.w else 1) * dmultinomial(x = ycounts, size = nvec, prob = mu, log = TRUE, dochecking = FALSE)) }, vfamily = c("acat", "vcategorical"), deriv = eval(substitute(expression({ zeta <- eta2theta(eta, .link , earg = .earg ) # May be zetar dzeta.deta <- dtheta.deta(zeta, .link , earg = .earg ) d1 <- acat.deriv(zeta, M = M, n = n, reverse = .reverse ) score <- attr(d1, "gradient") / d1 answer <- if ( .reverse ) { cumy <- tapplymat1(y, "cumsum") c(w) * dzeta.deta * (cumy[, 1:M] / zeta - score) } else { ccumy <- tapplymat1(y[, ncol(y):1], "cumsum")[, ncol(y):1] c(w) * dzeta.deta * (ccumy[, -1] / zeta - score) } answer }), list( .earg = earg, .link = link, .reverse = reverse) )), weight = eval(substitute(expression({ wz <- matrix(as.numeric(NA), n, dimm(M)) hess <- attr(d1, "hessian") / d1 if (M > 1) for (jay in 1:(M-1)) for (kay in (jay+1):M) wz[,iam(jay, kay,M)] <- (hess[, jay, kay] - score[, jay] * score[, kay]) * dzeta.deta[, jay] * dzeta.deta[, kay] if ( .reverse ) { cump <- tapplymat1(mu, "cumsum") wz[, 1:M] <- (cump[, 1:M] / zeta^2 - score^2) * dzeta.deta^2 } else { ccump <- tapplymat1(mu[, ncol(mu):1], "cumsum")[, ncol(mu):1] wz[, 1:M] <- (ccump[, -1] / zeta^2 - score^2) * dzeta.deta^2 } c(w) * wz }), list( .earg = earg, .link = link, .reverse = reverse )))) } acat.deriv <- function(zeta, reverse, M, n) { alltxt <- NULL for (ii in 1:M) { index <- if (reverse) ii:M else 1:ii vars <- paste("zeta", index, sep = "") txt <- paste(vars, collapse = "*") alltxt <- c(alltxt, txt) } alltxt <- paste(alltxt, collapse = " + ") alltxt <- paste(" ~ 1 +", alltxt) txt <- as.formula(alltxt) allvars <- paste("zeta", 1:M, sep = "") d1 <- deriv3(txt, allvars, hessian = TRUE) zeta <- as.matrix(zeta) for (ii in 1:M) assign(paste("zeta", ii, sep = ""), zeta[, ii]) ans <- eval(d1) ans } brat <- function(refgp = "last", refvalue = 1, init.alpha = 1) { if (!is.Numeric(init.alpha, positive = TRUE)) stop("'init.alpha' must contain positive values only") if (!is.Numeric(refvalue, allowable.length = 1, positive = TRUE)) stop("'refvalue' must be a single positive value") if (!is.character(refgp) && !is.Numeric(refgp, allowable.length = 1, integer.valued = TRUE, positive = TRUE)) stop("'refgp' must be a single positive integer") new("vglmff", blurb = c(paste("Bradley-Terry model (without ties)\n\n"), "Links: ", namesof("alpha's", "loge")), initialize = eval(substitute(expression({ are.ties <- attr(y, "are.ties") # If Brat() was used if (is.logical(are.ties) && are.ties) stop("use bratt(), not brat(), when there are ties") try.index <- 1:400 M <- (1:length(try.index))[(try.index+1)*(try.index) == ncol(y)] if (!is.finite(M)) stop("cannot determine 'M'") init.alpha <- matrix(rep( .init.alpha , length.out = M), n, M, byrow = TRUE) etastart <- matrix(theta2eta(init.alpha, "loge", earg = list(theta = NULL)), n, M, byrow = TRUE) refgp <- .refgp if (!intercept.only) warning("this function only works with intercept-only models") extra$ybrat.indices <- .brat.indices(NCo = M+1, are.ties = FALSE) uindex <- if ( .refgp == "last") 1:M else (1:(M+1))[-( .refgp ) ] predictors.names <- namesof(paste("alpha", uindex, sep = ""), "loge", short = TRUE) }), list( .refgp = refgp, .init.alpha=init.alpha ))), linkinv = eval(substitute( function(eta, extra = NULL) { probs <- NULL eta <- as.matrix(eta) # in case M = 1 for (ii in 1:nrow(eta)) { alpha <- .brat.alpha(eta2theta(eta[ii,], "loge", earg = list(theta = NULL)), .refvalue , .refgp ) alpha1 <- alpha[extra$ybrat.indices[, 
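# Two hedged notes: acat.deriv() above builds the normalising constant
# 1 + (sum of cumulative products of the zetas) as a formula and lets
# deriv3() differentiate it symbolically, supplying the gradient and
# Hessian used in acat()'s @deriv and @weight.  brat() fits the
# Bradley-Terry model: each worth alpha_i > 0, one contestant (refgp)
# fixed at refvalue = 1 for identifiability, and here
# P(i beats j) = alpha_i / (alpha_i + alpha_j); e.g. with
# alpha = c(2, 1, 1), P(1 beats 2) = 2/3 and P(2 beats 3) = 1/2.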
"rindex"]] alpha2 <- alpha[extra$ybrat.indices[, "cindex"]] probs <- rbind(probs, alpha1 / (alpha1 + alpha2)) } dimnames(probs) <- dimnames(eta) probs }, list( .refgp = refgp, .refvalue = refvalue) )), last = eval(substitute(expression({ misc$link <- rep( "loge", length = M) names(misc$link) <- paste("alpha", uindex, sep = "") misc$earg <- vector("list", M) names(misc$earg) <- names(misc$link) for (ii in 1:M) misc$earg[[ii]] <- list(theta = NULL) misc$refgp <- .refgp misc$refvalue <- .refvalue }), list( .refgp = refgp, .refvalue = refvalue ))), loglikelihood = function(mu, y, w, residuals = FALSE, eta, extra = NULL) if (residuals) stop("loglikelihood residuals ", "not implemented yet") else { ycounts <- if (is.numeric(extra$orig.w)) y * w / extra$orig.w else y * w # Convert proportions to counts nvec <- if (is.numeric(extra$orig.w)) round(w / extra$orig.w) else round(w) smallno <- 1.0e4 * .Machine$double.eps if (max(abs(ycounts - round(ycounts))) > smallno) warning("converting 'ycounts' to integer in @loglikelihood") ycounts <- round(ycounts) sum((if (is.numeric(extra$orig.w)) extra$orig.w else 1) * dmultinomial(x = ycounts, size = nvec, prob = mu, log = TRUE, dochecking = FALSE)) }, vfamily = c("brat"), deriv = eval(substitute(expression({ ans <- NULL uindex <- if ( .refgp == "last") 1:M else (1:(M+1))[-( .refgp ) ] eta <- as.matrix(eta) # in case M = 1 for (ii in 1:nrow(eta)) { alpha <- .brat.alpha(eta2theta(eta[ii,], "loge", earg = list(theta = NULL)), .refvalue, .refgp) ymat <- InverseBrat(y[ii,], NCo = M+1, diag = 0) answer <- rep(0, len = M) for (aa in 1:(M+1)) { answer <- answer + (1 - (aa == uindex)) * (ymat[uindex,aa] * alpha[aa] - ymat[aa,uindex] * alpha[uindex]) / (alpha[aa] + alpha[uindex]) } ans <- rbind(ans, w[ii] * answer) } dimnames(ans) <- dimnames(eta) ans }), list( .refvalue = refvalue, .refgp = refgp) )), weight = eval(substitute(expression({ wz <- matrix(0, n, dimm(M)) for (ii in 1:nrow(eta)) { alpha <- .brat.alpha(eta2theta(eta[ii,], "loge", earg = list(theta = NULL)), .refvalue, .refgp) ymat <- InverseBrat(y[ii,], NCo = M+1, diag = 0) for (aa in 1:(M+1)) { wz[ii, 1:M] <- wz[ii, 1:M] + (1 - (aa == uindex)) * (ymat[aa,uindex] + ymat[uindex,aa]) * alpha[aa] * alpha[uindex] / (alpha[aa] + alpha[uindex])^2 } if (M > 1) { ind5 <- iam(1, 1, M, both = TRUE, diag = FALSE) wz[ii,(M+1):ncol(wz)] = -(ymat[cbind(uindex[ind5$row],uindex[ind5$col])] + ymat[cbind(uindex[ind5$col],uindex[ind5$row])]) * alpha[uindex[ind5$col]] * alpha[uindex[ind5$row]] / (alpha[uindex[ind5$row]] + alpha[uindex[ind5$col]])^2 } } wz <- c(w) * wz wz }), list( .refvalue = refvalue, .refgp = refgp )))) } bratt <- function(refgp = "last", refvalue = 1, init.alpha = 1, i0 = 0.01) { if (!is.Numeric(i0, allowable.length = 1, positive = TRUE)) stop("'i0' must be a single positive value") if (!is.Numeric(init.alpha, positive = TRUE)) stop("'init.alpha' must contain positive values only") if (!is.Numeric(refvalue, allowable.length = 1, positive = TRUE)) stop("'refvalue' must be a single positive value") if (!is.character(refgp) && !is.Numeric(refgp, allowable.length = 1, integer.valued = TRUE, positive = TRUE)) stop("'refgp' must be a single positive integer") new("vglmff", blurb = c(paste("Bradley-Terry model (with ties)\n\n"), "Links: ", namesof("alpha's", "loge"), ", log(alpha0)"), initialize = eval(substitute(expression({ try.index <- 1:400 M <- (1:length(try.index))[(try.index*(try.index-1)) == ncol(y)] if (!is.Numeric(M, allowable.length = 1, integer.valued = TRUE)) stop("cannot determine 'M'") NCo <- M # 
Number of contestants are.ties <- attr(y, "are.ties") # If Brat() was used if (is.logical(are.ties)) { if (!are.ties) stop("use brat(), not bratt(), when there are no ties") ties <- attr(y, "ties") } else { are.ties <- FALSE ties <- 0 * y } init.alpha <- rep( .init.alpha, len = NCo-1) ialpha0 <- .i0 etastart <- cbind(matrix(theta2eta(init.alpha, "loge", list(theta = NULL)), n, NCo-1, byrow = TRUE), theta2eta(rep(ialpha0, length.out = n), "loge", list(theta = NULL))) refgp <- .refgp if (!intercept.only) warning("this function only works with intercept-only models") extra$ties <- ties # Flat (1-row) matrix extra$ybrat.indices <- .brat.indices(NCo=NCo, are.ties = FALSE) extra$tbrat.indices <- .brat.indices(NCo=NCo, are.ties = TRUE) # unused extra$dnties <- dimnames(ties) uindex <- if (refgp == "last") 1:(NCo-1) else (1:(NCo))[-refgp ] predictors.names <- c( namesof(paste("alpha", uindex, sep = ""), "loge", short = TRUE), namesof("alpha0", "loge", short = TRUE)) }), list( .refgp = refgp, .i0 = i0, .init.alpha=init.alpha ))), linkinv = eval(substitute( function(eta, extra = NULL) { probs <- qprobs <- NULL M <- ncol(eta) for (ii in 1:nrow(eta)) { alpha <- .brat.alpha(eta2theta(eta[ii, -M], "loge"), .refvalue , .refgp ) alpha0 <- loge(eta[ii, M], inverse = TRUE) alpha1 <- alpha[extra$ybrat.indices[, "rindex"]] alpha2 <- alpha[extra$ybrat.indices[, "cindex"]] probs <- rbind(probs, alpha1 / (alpha1+alpha2+alpha0)) # qprobs <- rbind(qprobs, alpha0 / (alpha1+alpha2+alpha0)) # } if (length(extra$dnties)) dimnames(qprobs) <- extra$dnties attr(probs, "probtie") <- qprobs probs }, list( .refgp = refgp, .refvalue = refvalue) )), last = eval(substitute(expression({ misc$link <- rep( "loge", length = M) names(misc$link) <- c(paste("alpha", uindex, sep = ""), "alpha0") misc$earg <- vector("list", M) names(misc$earg) <- names(misc$link) for (ii in 1:M) misc$earg[[ii]] <- list(theta = NULL) misc$refgp <- .refgp misc$refvalue <- .refvalue misc$alpha <- alpha misc$alpha0 <- alpha0 }), list( .refgp = refgp, .refvalue = refvalue ))), loglikelihood = function(mu, y, w, residuals = FALSE, eta, extra = NULL) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else { sum(c(w) * (y * log(mu) + 0.5 * extra$ties * log(attr(mu, "probtie")))) }, vfamily = c("bratt"), deriv = eval(substitute(expression({ ans <- NULL ties <- extra$ties NCo <- M uindex <- if ( .refgp == "last") 1:(M-1) else (1:(M))[-( .refgp )] eta <- as.matrix(eta) for (ii in 1:nrow(eta)) { alpha <- .brat.alpha(eta2theta(eta[ii, -M], "loge", earg = list(theta = NULL)), .refvalue, .refgp) alpha0 <- loge(eta[ii, M], inverse = TRUE) ymat <- InverseBrat(y[ii,], NCo = M, diag = 0) tmat <- InverseBrat(ties[ii,], NCo = M, diag = 0) answer <- rep(0, len=NCo-1) # deriv wrt eta[-M] for (aa in 1:NCo) { Daj <- alpha[aa] + alpha[uindex] + alpha0 pja <- alpha[uindex] / Daj answer <- answer + alpha[uindex] * (-ymat[aa,uindex] + ymat[uindex,aa]*(1-pja)/pja - tmat[uindex,aa]) / Daj } deriv0 <- 0 # deriv wrt eta[M] for (aa in 1:(NCo-1)) for (bb in (aa+1):NCo) { Dab <- alpha[aa] + alpha[bb] + alpha0 qab <- alpha0 / Dab deriv0 <- deriv0 + alpha0 * (-ymat[aa,bb] - ymat[bb,aa] + tmat[aa,bb]*(1-qab)/qab) / Dab } ans <- rbind(ans, w[ii] * c(answer, deriv0)) } dimnames(ans) <- dimnames(eta) ans }), list( .refvalue = refvalue, .refgp = refgp) )), weight = eval(substitute(expression({ wz <- matrix(0, n, dimm(M)) # includes diagonal for (ii in 1:nrow(eta)) { alpha <- .brat.alpha(eta2theta(eta[ii, -M], "loge", earg = list(theta = NULL)), .refvalue, .refgp) alpha0 <- 
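# bratt() extends brat() with a tie parameter alpha0: below,
# P(i beats j) = alpha_i / (alpha_i + alpha_j + alpha0) and
# P(i ties j)  = alpha0  / (alpha_i + alpha_j + alpha0); the ties
# matrix is supplied through Brat(mat, ties = tiesmat) (hedged note).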
loge(eta[ii, M], inverse = TRUE) ymat <- InverseBrat(y[ii,], NCo = M, diag = 0) tmat <- InverseBrat(ties[ii,], NCo = M, diag = 0) for (aa in 1:(NCo)) { Daj <- alpha[aa] + alpha[uindex] + alpha0 pja <- alpha[uindex] / Daj nja <- ymat[aa,uindex] + ymat[uindex,aa] + tmat[uindex,aa] wz[ii, 1:(NCo-1)] <- wz[ii, 1:(NCo - 1)] + alpha[uindex]^2 * nja * (1 - pja) / (pja * Daj^2) if (aa < NCo) for (bb in (aa+1):(NCo)) { nab <- ymat[aa,bb] + ymat[bb,aa] + tmat[bb,aa] Dab <- alpha[aa] + alpha[bb] + alpha0 qab <- alpha0 / Dab wz[ii, NCo] <- wz[ii,NCo] + alpha0^2 * nab * (1-qab) / (qab * Dab^2) } } if (NCo > 2) { ind5 <- iam(1, 1, M = NCo, both = TRUE, diag = FALSE) alphajunk <- c(alpha, junk=NA) mat4 <- cbind(uindex[ind5$row],uindex[ind5$col]) wz[ii,(M+1):ncol(wz)] <- -(ymat[mat4] + ymat[mat4[, 2:1]] + tmat[mat4]) * alphajunk[uindex[ind5$col]] * alphajunk[uindex[ind5$row]] / (alpha0 + alphajunk[uindex[ind5$row]] + alphajunk[uindex[ind5$col]])^2 } for (sss in 1:length(uindex)) { jay <- uindex[sss] naj <- ymat[, jay] + ymat[jay,] + tmat[, jay] Daj <- alpha[jay] + alpha + alpha0 wz[ii,iam(sss, NCo, M = NCo, diag = TRUE)] = -alpha[jay] * alpha0 * sum(naj / Daj^2) } } wz <- c(w) * wz wz }), list( .refvalue = refvalue, .refgp = refgp )))) } .brat.alpha <- function(vec, value, posn) { if (is.character(posn)) if (posn != "last") stop("can only handle \"last\"") else return(c(vec, value)) c(if (posn == 1) NULL else vec[1:(posn-1)], value, if (posn == length(vec) + 1) NULL else vec[posn:length(vec)]) } .brat.indices <- function(NCo, are.ties = FALSE) { if (!is.Numeric(NCo, allowable.length = 1, integer.valued = TRUE) || NCo < 2) stop("bad input for 'NCo'") m <- diag(NCo) if (are.ties) { cbind(rindex = row(m)[col(m) < row(m)], cindex = col(m)[col(m) < row(m)]) } else cbind(rindex = row(m)[col(m) != row(m)], cindex = col(m)[col(m) != row(m)]) } Brat <- function(mat, ties = 0 * mat, string = c(">", "=="), whitespace = FALSE) { stopifnot(is.logical(whitespace) && length(whitespace) == 1) fillerChar <- ifelse(whitespace, " ", "") string <- paste(fillerChar, string, fillerChar, sep = "") allargs <- list(mat) # ,... 
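# A hedged sketch of Brat() (not run): it flattens a square contest
# matrix, m[i, j] being the number of times i beat j, into the 1-row
# response that brat() expects (diagonal entries may be NA):
#   m <- matrix(c(NA, 3, 1,  2, NA, 4,  5, 2, NA), 3, 3, byrow = TRUE,
#               dimnames = list(LETTERS[1:3], LETTERS[1:3]))
#   ybrat <- Brat(m)                 # 1 x 6 matrix of counts
#   fit <- vglm(ybrat ~ 1, brat)     # intercept-only, as required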
callit <- if (length(names(allargs))) names(allargs) else as.character(1:length(allargs)) ans <- ans.ties <- NULL for (ii in 1:length(allargs)) { m <- allargs[[ii]] if (!is.matrix(m) || dim(m)[1] != dim(m)[2]) stop("m must be a square matrix") diag(ties) <- 0 if (!all(ties == t(ties))) stop("ties must be a symmetric matrix") are.ties <- any(ties > 0) diag(ties) <- NA diag(m) <- 0 # Could have been NAs if (any(is.na(m))) stop("missing values not allowed (except on the diagonal)") diag(m) <- NA dm <- as.data.frame.table(m) dt <- as.data.frame.table(ties) dm <- dm[!is.na(dm$Freq),] dt <- dt[!is.na(dt$Freq),] usethis1 <- paste(dm[, 1], string[1], dm[, 2], sep = "") usethis2 <- paste(dm[, 1], string[2], dm[, 2], sep = "") ans <- rbind(ans, matrix(dm$Freq, nrow = 1)) ans.ties <- rbind(ans.ties, matrix(dt$Freq, nrow = 1)) } dimnames(ans) <- list(callit, usethis1) dimnames(ans.ties) <- list(callit, usethis2) attr(ans, "ties") <- ans.ties attr(ans, "are.ties") <- are.ties ans } InverseBrat <- function(yvec, NCo = (1:900)[(1:900)*((1:900)-1) == ncol(rbind(yvec))], multiplicity = if (is.matrix(yvec)) nrow(yvec) else 1, diag = NA, string = c(">","=="), whitespace = FALSE) { stopifnot(is.logical(whitespace) && length(whitespace) == 1) fillerChar <- ifelse(whitespace, " ", "") string <- paste(fillerChar, string, fillerChar, sep = "") ans <- array(diag, c(NCo, NCo, multiplicity)) yvec.orig <- yvec yvec <- c(yvec) ptr <- 1 for (mul in 1:multiplicity) for (i1 in 1:(NCo)) for (i2 in 1:(NCo)) if (i1 != i2) { ans[i2,i1,mul] <- yvec[ptr] ptr <- ptr + 1 } ans <- if (multiplicity > 1) ans else matrix(ans, NCo, NCo) if (is.array(yvec.orig) || is.matrix(yvec.orig)) { names.yvec <- dimnames(yvec.orig)[[2]] ii <- strsplit(names.yvec, string[1]) cal <- NULL for (kk in c(NCo, 1:(NCo-1))) cal <- c(cal, (ii[[kk]])[1]) if (multiplicity>1) { dimnames(ans) <- list(cal, cal, dimnames(yvec.orig)[[1]]) } else { dimnames(ans) <- list(cal, cal) } } ans } tapplymat1 <- function(mat, function.arg = c("cumsum", "diff", "cumprod")) { if (!missing(function.arg)) function.arg <- as.character(substitute(function.arg)) function.arg <- match.arg(function.arg, c("cumsum", "diff", "cumprod"))[1] type <- switch(function.arg, cumsum = 1, diff = 2, cumprod = 3, stop("function.arg not matched")) if (!is.matrix(mat)) mat <- as.matrix(mat) NR <- nrow(mat) NC <- ncol(mat) fred <- dotC(name = "tapplymat1", mat=as.double(mat), as.integer(NR), as.integer(NC), as.integer(type)) dim(fred$mat) <- c(NR, NC) dimnames(fred$mat) <- dimnames(mat) switch(function.arg, cumsum = fred$mat, diff = fred$mat[, -1, drop = FALSE], cumprod= fred$mat) } ordpoisson <- function(cutpoints, countdata = FALSE, NOS = NULL, Levels = NULL, init.mu = NULL, parallel = FALSE, zero = NULL, link = "loge") { link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") fcutpoints <- cutpoints[is.finite(cutpoints)] if (!is.Numeric(fcutpoints, integer.valued = TRUE) || any(fcutpoints < 0)) stop("'cutpoints' must have non-negative integer or Inf ", "values only") if (is.finite(cutpoints[length(cutpoints)])) cutpoints <- c(cutpoints, Inf) if (!is.logical(countdata) || length(countdata) != 1) stop("argument 'countdata' must be a single logical") if (countdata) { if (!is.Numeric(NOS, integer.valued = TRUE, positive = TRUE)) stop("'NOS' must have integer values only") if (!is.Numeric(Levels, integer.valued = TRUE, positive = TRUE) || any(Levels < 2)) stop("'Levels' must have integer values (>= 2) only") Levels <- rep(Levels, length = NOS) } new("vglmff", 
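# tapplymat1() above wraps C code for row-wise scans; in pure R terms
# (a correctness sketch):
#   tapplymat1(m, "cumsum")   is   t(apply(m, 1, cumsum))
#   tapplymat1(m, "diff")     is   t(apply(m, 1, diff))
#   tapplymat1(m, "cumprod")  is   t(apply(m, 1, cumprod))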
  blurb = c(paste("Ordinal Poisson model\n\n"),
            "Link: ", namesof("mu", link, earg = earg)),
  constraints = eval(substitute(expression({
    constraints <- cm.vgam(matrix(1, M, 1), x, .parallel, constraints,
                           apply.int = TRUE)
    constraints <- cm.zero.vgam(constraints, x, .zero, M)
  }), list( .parallel = parallel, .zero = zero ))),
  initialize = eval(substitute(expression({
    orig.y <- cbind(y)  # Convert y into a matrix if necessary
    if ( .countdata ) {
      extra$NOS <- M <- NOS <- .NOS
      extra$Levels <- Levels <- .Levels
      y.names <- dimnames(y)[[2]]  # Hopefully the user inputted them
    } else {
      if (any(w != 1) || ncol(cbind(w)) != 1)
        stop("the 'weights' argument must be a vector of all ones")
      extra$NOS <- M <- NOS <- if (is.Numeric( .NOS )) .NOS else
                   ncol(orig.y)
      Levels <- rep(if (is.Numeric( .Levels )) .Levels else 0,
                    len = NOS)
      if (!is.Numeric( .Levels ))
        for (iii in 1:NOS) {
          Levels[iii] <- length(unique(sort(orig.y[,iii])))
        }
      extra$Levels <- Levels
    }

    initmu <- if (is.Numeric( .init.mu )) rep( .init.mu, len = NOS) else
              NULL
    cutpoints <- rep( .cutpoints, len = sum(Levels))
    delete.zero.colns <- FALSE

    use.y <- if ( .countdata ) y else matrix(0, n, sum(Levels))
    use.etastart <- matrix(0, n, M)
    cptr <- 1
    for (iii in 1:NOS) {
      y <- factor(orig.y[,iii], levels=(1:Levels[iii]))
      if ( !( .countdata )) {
        eval(process.categorical.data.vgam)  # Creates mustart and y
        use.y[,cptr:(cptr+Levels[iii]-1)] <- y
      }
      use.etastart[,iii] <- if (is.Numeric(initmu)) initmu[iii] else
        median(cutpoints[cptr:(cptr+Levels[iii]-1-1)])
      cptr <- cptr + Levels[iii]
    }
    mustart <- NULL  # Overwrite it
    etastart <- theta2eta(use.etastart, .link , earg = .earg )

    y <- use.y  # n x sum(Levels)
    M <- NOS
    for (iii in 1:NOS) {
      mu.names <- paste("mu", iii, ".", sep = "")
    }

    ncoly <- extra$ncoly <- sum(Levels)
    cp.vector <- rep( .cutpoints, length=ncoly)
    extra$countdata <- .countdata
    extra$cutpoints <- cp.vector
    extra$n <- n

    mynames <- if (M > 1) paste("mu", 1:M, sep = "") else "mu"
    predictors.names <- namesof(mynames, .link , short = TRUE,
                                earg = .earg )
  }), list( .link = link, .countdata = countdata,
            .earg = earg, .cutpoints=cutpoints,
            .NOS=NOS, .Levels=Levels,
            .init.mu = init.mu ))),
  linkinv = eval(substitute(
    function(eta, extra = NULL) {
    mu <- eta2theta(eta, link= .link , earg = .earg )  # Poisson means
    mu <- cbind(mu)
    mu
  }, list( .link = link,
           .earg = earg, .countdata = countdata ))),
  last = eval(substitute(expression({
    if ( .countdata ) {
      misc$link <- .link
      misc$earg <- list( .earg )
    } else {
      misc$link <- rep( .link , length = M)
      names(misc$link) <- mynames
      misc$earg <- vector("list", M)
      names(misc$earg) <- names(misc$link)
      for (ii in 1:M) misc$earg[[ii]] <- .earg
    }
    misc$parameters <- mynames
    misc$countdata <- .countdata
    misc$true.mu = FALSE  # $fitted is not a true mu
  }), list( .link = link, .countdata = countdata,
            .earg = earg ))),
  loglikelihood =
    function(mu, y, w, residuals = FALSE, eta, extra = NULL) {
    if (residuals) stop("loglikelihood residuals not ",
                        "implemented yet") else {
      probs <- ordpoissonProbs(extra, mu)
      index0 <- y == 0
      probs[index0] <- 1
      pindex0 <- probs == 0
      probs[pindex0] <- 1
      sum(pindex0) * (-1.0e+10) + sum(w * y * log(probs))
    }
  },
  vfamily = c("ordpoisson", "vcategorical"),
  deriv = eval(substitute(expression({
    probs <- ordpoissonProbs(extra, mu)
    probs.use <- pmax(probs, .Machine$double.eps * 1.0e-0)

    cp.vector <- extra$cutpoints
    NOS <- extra$NOS
    Levels <- extra$Levels

    resmat <- matrix(0, n, M)
    dl.dprob <- y / probs.use
    dmu.deta <- dtheta.deta(mu, .link , earg = .earg )
    dprob.dmu <- ordpoissonProbs(extra, mu, deriv = 1)
    cptr <-
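# A hedged sketch of ordpoisson() (not run): the ordinal response is a
# Poisson count classified into intervals by 'cutpoints' (each
# response's set of cutpoints ends with Inf), and the Poisson mean is
# estimated from the interval frequencies.  For example,
# cutpoints = c(5, 10, Inf) gives the three levels
# {0,...,5}, {6,...,10}, {11,...}:
#   fit <- vglm(ordered.y ~ x2, ordpoisson(cutpoints = c(5, 10, Inf)),
#               data = mydata)   # 'mydata', ordered.y, x2 hypothetical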
1 for (iii in 1:NOS) { for (kkk in 1:Levels[iii]) { resmat[,iii] <- resmat[,iii] + dl.dprob[,cptr] * dprob.dmu[,cptr] cptr <- cptr + 1 } } resmat <- c(w) * resmat * dmu.deta resmat }), list( .link = link, .earg = earg, .countdata=countdata ))), weight = eval(substitute(expression({ d2l.dmu2 <- matrix(0, n, M) # Diagonal matrix cptr <- 1 for (iii in 1:NOS) { for (kkk in 1:Levels[iii]) { d2l.dmu2[,iii] <- d2l.dmu2[,iii] + dprob.dmu[,cptr]^2 / probs.use[,cptr] cptr <- cptr + 1 } } wz <- c(w) * d2l.dmu2 * dmu.deta^2 wz }), list( .earg = earg, .link = link, .countdata = countdata )))) } ordpoissonProbs <- function(extra, mu, deriv = 0) { cp.vector <- extra$cutpoints NOS <- extra$NOS if (deriv == 1) { dprob.dmu <- matrix(0, extra$n, extra$ncoly) } else { probs <- matrix(0, extra$n, extra$ncoly) } mu <- cbind(mu) cptr <- 1 for (iii in 1:NOS) { if (deriv == 1) { dprob.dmu[,cptr] <- -dpois(x = cp.vector[cptr], lambda = mu[,iii]) } else { probs[,cptr] <- ppois(q = cp.vector[cptr], lambda = mu[,iii]) } cptr <- cptr + 1 while(is.finite(cp.vector[cptr])) { if (deriv == 1) { dprob.dmu[,cptr] <- dpois(x = cp.vector[cptr-1], lambda = mu[,iii]) - dpois(x = cp.vector[cptr ], lambda = mu[,iii]) } else { probs[,cptr] <- ppois(q = cp.vector[cptr ], lambda = mu[,iii]) - ppois(q = cp.vector[cptr-1], lambda = mu[,iii]) } cptr <- cptr + 1 } if (deriv == 1) { dprob.dmu[,cptr] <- dpois(x = cp.vector[cptr-1], lambda = mu[,iii]) - dpois(x = cp.vector[cptr ], lambda = mu[,iii]) } else { probs[,cptr] <- ppois(q = cp.vector[cptr ], lambda = mu[,iii]) - ppois(q = cp.vector[cptr-1], lambda = mu[,iii]) } cptr <- cptr + 1 } if (deriv == 1) dprob.dmu else probs } if (FALSE) scumulative <- function(link = "logit", earg = list(), lscale = "loge", escale = list(), parallel = FALSE, sparallel = TRUE, reverse = FALSE, iscale = 1) { stop("sorry, not working yet") link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") if (!is.Numeric(iscale, positive = TRUE)) stop("bad input for argument 'iscale'") if (!is.logical(reverse) || length(reverse) != 1) stop("argument 'reverse' must be a single logical") new("vglmff", blurb = c(paste("Scaled cumulative", link, "model\n\n"), "Links: ", namesof(if (reverse) "P[Y>=j+1]" else "P[Y<=j]", link, earg = earg), ", ", namesof("scale_j", lscale, escale)), constraints = eval(substitute(expression({ J = M / 2 constraints <- cm.vgam(matrix(1,J,1), x, .parallel, constraints, apply.int = FALSE) constraints[["(Intercept)"]] = rbind(constraints[["(Intercept)"]], matrix(0, J, ncol(constraints[["(Intercept)"]]))) cm2 <- cm.vgam(matrix(1,J,1), x, .sparallel, constraints = NULL, apply.int = FALSE) for (ii in 2:length(constraints)) constraints[[ii]] = cbind(rbind(constraints[[ii]], matrix(0, J, ncol(constraints[[ii]]))), rbind(matrix(0, J, ncol(cm2[[ii]])), cm2[[ii]])) for (ii in 1:length(constraints)) constraints[[ii]] = (constraints[[ii]])[interleave.VGAM(M, M = 2),, drop = FALSE] }), list( .parallel = parallel, .sparallel=sparallel ))), deviance = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { answer = Deviance.categorical.data.vgam(mu = mu, y = y, w = w, residuals = residuals, eta = eta, extra = extra) answer }, list( .earg = earg, .link = link ) )), initialize = eval(substitute(expression({ if (is.factor(y) && !is.ordered(y)) warning("response should be ordinal---see ordered()") if (intercept.only) stop("use cumulative() for 
intercept-only models") delete.zero.colns = TRUE # Cannot have FALSE since then prob(Y=jay)=0 eval(process.categorical.data.vgam) M = 2*(ncol(y)-1) J = M / 2 extra$J = J mynames = if ( .reverse ) paste("P[Y>=", 2:(1+J), "]", sep = "") else paste("P[Y<=", 1:J, "]", sep = "") predictors.names <- c( namesof(mynames, .link , short = TRUE, earg = .earg ), namesof(paste("scale_", 1:J, sep = ""), .lscale, short = TRUE, earg = .escale )) y.names = paste("mu", 1:(J+1), sep = "") if (length(dimnames(y))) extra$dimnamesy2 = dimnames(y)[[2]] predictors.names <- predictors.names[interleave.VGAM(M, M = 2)] }), list( .link = link, .lscale = lscale, .reverse = reverse, .earg = earg, .escale = escale ))), linkinv = eval(substitute( function(eta, extra = NULL) { J = extra$J M = 2*J etamat1 = eta[, 2*(1:J)-1, drop = FALSE] etamat2 = eta[, 2*(1:J), drop = FALSE] scalemat = eta2theta(etamat2, .lscale, earg = .escale ) fv.matrix = if ( .reverse ) { ccump = cbind(1, eta2theta(etamat1 / scalemat, .link , earg = .earg )) cbind(-tapplymat1(ccump, "diff"), ccump[, ncol(ccump)]) } else { cump = cbind(eta2theta(etamat1 / scalemat, .link , earg = .earg ), 1) cbind(cump[, 1], tapplymat1(cump, "diff")) } if (length(extra$dimnamesy2)) dimnames(fv.matrix) = list(dimnames(eta)[[1]], extra$dimnamesy2) fv.matrix }, list( .link = link, .lscale = lscale, .reverse = reverse, .earg = earg, .escale = escale ))), last = eval(substitute(expression({ J = extra$J misc$link = c(rep( .link , length = J), rep( .lscale, length = J))[interleave.VGAM(M, M = 2)] names(misc$link) = predictors.names misc$earg <- vector("list", M) names(misc$earg) <- names(misc$link) for (ii in 1:J) misc$earg[[2*ii-1]] <- .earg for (ii in 1:J) misc$earg[[2*ii ]] <- .escale misc$parameters = mynames misc$reverse = .reverse misc$parallel = .parallel misc$sparallel = .sparallel }), list( .link = link, .lscale = lscale, .reverse = reverse, .parallel = parallel, .sparallel = sparallel, .earg = earg, .escale = escale ))), linkfun = eval(substitute( function(mu, extra = NULL) { cump = tapplymat1(as.matrix(mu), "cumsum") J = ncol(as.matrix(mu)) - 1 M = 2 * J answer = cbind( theta2eta(if ( .reverse ) 1-cump[, 1:J] else cump[, 1:J], .link , earg = .earg ), matrix(theta2eta( .iscale, .lscale , earg = .escale ), nrow(as.matrix(mu)), J, byrow = TRUE)) answer = answer[,interleave.VGAM(M, M = 2)] answer }, list( .link = link, .lscale = lscale, .reverse = reverse, .iscale = iscale, .earg = earg, .escale = escale ))), loglikelihood = function(mu, y, w, residuals = FALSE, eta, extra = NULL) if (residuals) stop("loglikelihood residuals ", "not implemented yet") else { ycounts = if (is.numeric(extra$orig.w)) y * w / extra$orig.w else y * w # Convert proportions to counts nvec = if (is.numeric(extra$orig.w)) round(w / extra$orig.w) else round(w) smallno = 1.0e4 * .Machine$double.eps if (max(abs(ycounts - round(ycounts))) > smallno) warning("converting 'ycounts' to integer in @loglikelihood") ycounts = round(ycounts) sum((if (is.numeric(extra$orig.w)) extra$orig.w else 1) * dmultinomial(x = ycounts, size = nvec, prob = mu, log = TRUE, dochecking = FALSE)) }, vfamily = c("scumulative", "vcategorical"), deriv = eval(substitute(expression({ ooz = iter %% 2 J = extra$J mu.use = pmax(mu, .Machine$double.eps * 1.0e-0) etamat1 = eta[, 2*(1:J)-1, drop = FALSE] etamat2 = eta[, 2*(1:J) , drop = FALSE] scalemat = eta2theta(etamat2, .lscale, earg = .escale ) cump = eta2theta(etamat1 / scalemat, .link , earg = .earg ) dcump.deta = dtheta.deta(cump, .link , earg = .earg ) dscale.deta = 
dtheta.deta(scalemat, .lscale, earg = .escale ) dl.dcump = (if ( .reverse ) -w else w) * (y[, 1:J]/mu.use[, 1:J] - y[, -1]/mu.use[, -1]) dcump.dscale = -dcump.deta * etamat1 / scalemat^2 ans = cbind(dl.dcump * dcump.deta / scalemat, dl.dcump * dcump.dscale * dscale.deta) ans = ans[,interleave.VGAM(M, M = 2)] if (ooz) ans[, c(TRUE, FALSE)] = 0 else ans[, c(FALSE, TRUE)] = 0 ans }), list( .link = link, .lscale = lscale, .reverse = reverse, .earg = earg, .escale = escale ))), weight = eval(substitute(expression({ wz = matrix(0, n, 2*(2*M-3)) wz[, 2*(1:J)-1] = if (ooz) c(w) * (dcump.deta / scalemat)^2 * (1/mu.use[, 1:J] + 1/mu.use[, -1]) else 1 wz[, 2*(1:J)] = if (ooz) 1 else c(w) * (dcump.dscale * dscale.deta)^2 * (1/mu.use[, 1:J] + 1/mu.use[, -1]) wz0 = c(w) * (dcump.deta / scalemat) * (dcump.dscale * dscale.deta) * (1/mu.use[, 1:J] + 1/mu.use[, -1]) wz0 = as.matrix(wz0) for (ii in 1:J) wz[,iam(2*ii-1,2*ii,M = M)] = if (ooz) wz0[, ii] else 0 if (J > 1) { wz0 = -c(w) * (dcump.deta[, -J] / scalemat[, -J]) * (dcump.deta[, -1] / scalemat[, -1]) / mu.use[, 2:J] wz0 = as.matrix(wz0) # Just in case J=2 for (ii in 1:(J-1)) wz[, iam(2*ii-1, 2*ii+1, M = M)] = if (ooz) wz0[, ii] else 0 wz0 = -c(w) * (dcump.dscale[, -1] * dscale.deta[, -1]) * (dcump.dscale[, -J] * dscale.deta[, -J]) / mu.use[, 2:J] wz0 = as.matrix(wz0) for (ii in 1:(J-1)) wz[,iam(2*ii,2*ii+2,M = M)] = if (ooz) wz0[, ii] else 0 wz0 = -c(w) * (dcump.deta[, -J] / scalemat[, -J]) * (dcump.dscale[, -1] * dscale.deta[, -1]) / mu.use[, 2:J] wz0 = as.matrix(wz0) for (ii in 1:(J-1)) wz[,iam(2*ii-1,2*ii+2,M = M)] = if (ooz) wz0[, ii] else 0 wz0 = -c(w) * (dcump.deta[, -1] / scalemat[, -1]) * (dcump.dscale[, -J] * dscale.deta[, -J]) / mu.use[, 2:J] wz0 = as.matrix(wz0) for (ii in 1:(J-1)) wz[,iam(2*ii,2*ii+1,M = M)] = if (ooz) wz0[, ii] else 0 } wz }), list( .link = link, .lscale = lscale, .earg = earg, .escale = escale )))) } margeff <- function(object, subset = NULL) { ii <- ii.save <- subset if (!is(object, "vglm")) stop("'object' is not a vglm() object") if (!any(temp.logical <- is.element(c("multinomial","cumulative"), object@family@vfamily))) stop("'object' is not a 'multinomial' or 'cumulative' VGLM!") model.multinomial <- temp.logical[1] if (is(object, "vgam")) stop("'object' is a vgam() object") if (length(object@control$xij)) stop("'object' contains 'xij' terms") if (length(object@misc$form2)) stop("'object' contains 'form2' terms") oassign <- object@misc$orig.assign if (any(unlist(lapply(oassign, length)) > 1)) warning("some terms in 'object' create more than one column of ", "the LM design matrix") nnn <- object@misc$n M <- object@misc$M # ncol(B) # length(pvec) - 1 if (model.multinomial) { rlev <- object@misc$refLevel cfit <- coefvlm(object, matrix.out = TRUE) B <- if (!length(rlev)) { cbind(cfit, 0) } else { if (rlev == M+1) { # Default cbind(cfit, 0) } else if (rlev == 1) { cbind(0, cfit) } else { cbind(cfit[, 1:(rlev-1)], 0, cfit[,rlev:M]) } } ppp <- nrow(B) pvec1 <- fitted(object)[ 1,] colnames(B) <- if (length(names(pvec1))) names(pvec1) else paste("mu", 1:(M+1), sep = "") if (is.null(ii)) { BB <- array(B, c(ppp, M+1, nnn)) pvec <- c(t(fitted(object))) pvec <- rep(pvec, each=ppp) temp1 <- array(BB * pvec, c(ppp, M+1, nnn)) temp2 <- aperm(temp1, c(2,1,3)) # (M+1) x ppp x nnn temp2 <- colSums(temp2) # ppp x nnn temp2 <- array(rep(temp2, each=M+1), c(M+1, ppp, nnn)) temp2 <- aperm(temp2, c(2, 1, 3)) # ppp x (M+1) x nnn temp3 <- pvec ans <- array((BB - temp2) * temp3, c(ppp, M+1, nnn), dimnames = list(dimnames(B)[[1]], 
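# Multinomial-logit marginal effects: d p_j / d x = p_j * (b_j - sum_s p_s b_s),
# which is what (BB - temp2) * temp3 computes for every observation here.
# A one-observation sketch with hypothetical values (two classes, one slope):
#   b <- c(0.5, 0)        # slope for each class (reference class has 0)
#   p <- c(0.6, 0.4)      # fitted probabilities
#   p * (b - sum(p * b))  # c(0.12, -0.12); the effects sum to zero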
dimnames(B)[[2]], dimnames(fitted(object))[[1]])) ans } else if (is.numeric(ii) && (length(ii) == 1)) { pvec <- fitted(object)[ii,] temp1 <- B * matrix(pvec, ppp, M+1, byrow = TRUE) temp2 <- matrix(rowSums(temp1), ppp, M+1) temp3 <- matrix(pvec, nrow(B), M+1, byrow = TRUE) (B - temp2) * temp3 } else { if (is.logical(ii)) ii <- (1:nnn)[ii] ans <- array(0, c(ppp, M+1, length(ii)), dimnames = list(dimnames(B)[[1]], dimnames(B)[[2]], dimnames(fitted(object)[ii,])[[1]])) for (ilocal in 1:length(ii)) { pvec <- fitted(object)[ii[ilocal],] temp1 <- B * matrix(pvec, ppp, M+1, byrow = TRUE) temp2 <- matrix(rowSums(temp1), ppp, M+1) temp3 <- matrix(pvec, nrow(B), M+1, byrow = TRUE) ans[,,ilocal] <- (B - temp2) * temp3 } ans } } else { if (is.logical(is.multivariateY <- object@misc$mv) && is.multivariateY) stop("cannot handle cumulative(mv = TRUE)") reverse <- object@misc$reverse linkfunctions <- object@misc$link all.eargs <- object@misc$earg B <- cfit <- coefvlm(object, matrix.out = TRUE) ppp <- nrow(B) hdot <- lpmat <- kronecker(predict(object), matrix(1, ppp, 1)) resmat <- cbind(hdot, 1) for (jlocal in 1:M) { Cump <- eta2theta(lpmat[,jlocal], link = linkfunctions[jlocal], earg = all.eargs[[jlocal]]) hdot[, jlocal] <- dtheta.deta(Cump, link = linkfunctions[jlocal], earg = all.eargs[[jlocal]]) } resmat[, 1] <- ifelse(reverse, -1, 1) * hdot[, 1] * cfit[, 1] if (M > 1) { for (jlocal in 2:M) resmat[, jlocal] <- ifelse(reverse, -1, 1) * (hdot[, jlocal ] * cfit[, jlocal ] - hdot[, jlocal - 1] * cfit[, jlocal - 1]) } resmat[, M+1] <- ifelse(reverse, 1, -1) * hdot[, M] * cfit[, M] temp1 <- array(resmat, c(ppp, nnn, M+1), dimnames = list(dimnames(B)[[1]], dimnames(fitted(object))[[1]], dimnames(fitted(object))[[2]])) temp1 <- aperm(temp1, c(1, 3, 2)) # ppp x (M+1) x nnn if (is.null(ii)) { return(temp1) } else { return(temp1[,, ii]) } } } prplot <- function(object, control = prplot.control(...), ...) { if (!any(slotNames(object) == "family") || !any(object@family@vfamily == "vcategorical")) stop("'object' does not seem to be a VGAM categorical model object") if (!any(object@family@vfamily == "cumulative")) stop("'object' does not seem to be a cumulative()-type VGAM object") control <- prplot.control(...) object <- plotvgam(object, plot.arg = FALSE, raw = FALSE) # , ...
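# Typical call (a sketch, using the pneumo data shipped with the package),
# matching the checks above: prplot() wants a cumulative()-type fit with
# exactly one term.
#   pneumo <- transform(pneumo, let = log(exposure.time))
#   fit <- vglm(cbind(normal, mild, severe) ~ let, cumulative, data = pneumo)
#   prplot(fit)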
if (length(names(object@preplot)) != 1) stop("object needs to have only one term") MM <- object@misc$M use.y <- cbind((object@preplot[[1]])$y) Constant <- attr(object@preplot, "Constant") if (is.numeric(Constant) && length(Constant) == ncol(use.y)) use.y <- use.y + matrix(Constant, nrow(use.y), ncol(use.y), byrow = TRUE) for (ii in 1:MM) { use.y[, ii] <- eta2theta(use.y[, ii], link = object@misc$link[[ii]], earg = object@misc$earg[[ii]]) } if (ncol(use.y) != MM) use.y = use.y[, 1:MM, drop = FALSE] use.x <- (object@preplot[[1]])$x myxlab <- if (length(control$xlab)) control$xlab else (object@preplot[[1]])$xlab mymain <- if (MM <= 3) paste(object@misc$parameters, collapse = ", ") else paste(object@misc$parameters[c(1, MM)], collapse = ",...,") if (length(control$main)) mymain = control$main if (length(control$ylab)) myylab = control$ylab matplot(use.x, use.y, type = "l", xlab = myxlab, ylab = myylab, lty = control$lty, col = control$col, las = control$las, xlim = if (is.Numeric(control$xlim)) control$xlim else range(use.x), ylim = if (is.Numeric(control$ylim)) control$ylim else range(use.y), main=mymain) if (control$rug.arg) rug(use.x, col=control$rcol, lwd=control$rlwd) invisible(object) } prplot.control <- function(xlab = NULL, ylab = "Probability", main = NULL, xlim = NULL, ylim = NULL, lty = par()$lty, col = par()$col, rcol = par()$col, lwd = par()$lwd, rlwd = par()$lwd, las = par()$las, rug.arg = FALSE, ...) { list(xlab = xlab, ylab = ylab, xlim = xlim, ylim = ylim, lty = lty, col = col, rcol = rcol, lwd = lwd, rlwd = rlwd, rug.arg = rug.arg, las = las, main = main) } is.parallel.matrix <- function(object, ...) is.matrix(object) && all(!is.na(object)) && all(c(object) == 1) && ncol(object) == 1 is.parallel.vglm <- function(object, type = c("term", "lm"), ...) { type <- match.arg(type, c("term", "lm"))[1] Hlist <- constraints(object, type = type) unlist(lapply(Hlist, is.parallel.matrix)) } if (!isGeneric("is.parallel")) setGeneric("is.parallel", function(object, ...) standardGeneric("is.parallel"), package = "VGAM") setMethod("is.parallel", "matrix", function(object, ...) is.parallel.matrix(object, ...)) setMethod("is.parallel", "vglm", function(object, ...) is.parallel.vglm(object, ...)) is.zero.matrix <- function(object, ...) { rnames <- rownames(object) intercept.index <- if (length(rnames)) { if (any(rnames == "(Intercept)")) { (1:length(rnames))[rnames == "(Intercept)"] } else { stop("the matrix does not seem to have an intercept") NULL } } else { stop("the matrix does not seem to have an intercept") NULL } if (nrow(object) <= 1) stop("the matrix needs to have more than one row, i.e., more than ", "an intercept on the RHS of the formula") cfit <- object[-intercept.index, , drop = FALSE] foo <- function(conmat.col) all(!is.na(conmat.col)) && all(c(conmat.col) == 0) unlist(apply(cfit, 2, foo)) } is.zero.vglm <- function(object, ...) { is.zero.matrix(coef(object, matrix = TRUE)) } if (!isGeneric("is.zero")) setGeneric("is.zero", function(object, ...) standardGeneric("is.zero"), package = "VGAM") setMethod("is.zero", "matrix", function(object, ...) is.zero.matrix(object, ...)) setMethod("is.zero", "vglm", function(object, ...) is.zero.vglm(object, ...)) VGAM/R/family.bivariate.R0000644000176000001440000022477712136651110014625 0ustar ripleyusers# These functions are # Copyright (C) 1998-2013 T.W. Yee, University of Auckland. # All rights reserved. bilogistic4.control <- function(save.weight = TRUE, ...) 
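# Quick checks of the two query methods above, on a hypothetical fit
# (assumes the pneumo data with 'let' constructed as in the earlier sketch):
#   fit <- vglm(cbind(normal, mild, severe) ~ let, propodds, data = pneumo)
#   is.parallel(fit)  # TRUE per term: propodds() fits parallel cutpoints
#   is.zero(fit)      # which coef columns are all zero beyond the intercept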
{ list(save.weight = save.weight) } bilogistic4 <- function(llocation = "identity", lscale = "loge", iloc1 = NULL, iscale1 = NULL, iloc2 = NULL, iscale2 = NULL, imethod = 1, zero = NULL) { llocat <- as.list(substitute(llocation)) elocat <- link2list(llocat) llocat <- attr(elocat, "function.name") lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") if (!is.Numeric(imethod, allowable.length = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("imethod must be 1 or 2") new("vglmff", blurb = c("Bivariate logistic distribution\n\n", "Link: ", namesof("location1", llocat, elocat), ", ", namesof("scale1", lscale, escale), ", ", namesof("location2", llocat, elocat), ", ", namesof("scale2", lscale, escale), "\n", "\n", "Means: location1, location2"), constraints = eval(substitute(expression({ constraints = cm.zero.vgam(constraints, x, .zero, M) }), list( .zero = zero))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 2, ncol.y.min = 2, out.wy = TRUE, colsyperw = 2, maximize = TRUE) w <- temp5$w y <- temp5$y predictors.names <- c(namesof("location1", .llocat, .elocat , tag = FALSE), namesof("scale1", .lscale, .escale , tag = FALSE), namesof("location2", .llocat, .elocat , tag = FALSE), namesof("scale2", .lscale, .escale , tag = FALSE)) if (!length(etastart)) { if ( .imethod == 1) { locat.init1 <- y[, 1] scale.init1 <- sqrt(3) * sd(y[, 1]) / pi locat.init2 <- y[, 2] scale.init2 <- sqrt(3) * sd(y[, 2]) / pi } else { locat.init1 <- median(rep(y[, 1], w)) locat.init2 <- median(rep(y[, 2], w)) const4 <- sqrt(3) / (sum(w) * pi) scale.init1 <- const4 * sum(c(w) *(y[, 1] - locat.init1)^2) scale.init2 <- const4 * sum(c(w) *(y[, 2] - locat.init2)^2) } loc1.init <- if (length( .iloc1 )) rep( .iloc1, length.out = n) else rep(locat.init1, length.out = n) loc2.init <- if (length( .iloc2 )) rep( .iloc2, length.out = n) else rep(locat.init2, length.out = n) scale1.init <- if (length( .iscale1 )) rep( .iscale1, length.out = n) else rep(1, length.out = n) scale2.init <- if (length( .iscale2 )) rep( .iscale2, length.out = n) else rep(1, length.out = n) if ( .llocat == "loge") locat.init1 <- abs(locat.init1) + 0.001 if ( .llocat == "loge") locat.init2 <- abs(locat.init2) + 0.001 etastart <- cbind(theta2eta(locat.init1, .llocat , .elocat ), theta2eta(scale1.init, .lscale , .escale ), theta2eta(locat.init2, .llocat , .elocat ), theta2eta(scale2.init, .lscale , .escale )) } }), list(.imethod = imethod, .iloc1 = iloc1, .iloc2 = iloc2, .llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = escale, .iscale1 = iscale1, .iscale2 = iscale2))), linkinv = function(eta, extra = NULL) { cbind(eta[, 1], eta[, 2]) }, last = eval(substitute(expression({ misc$link <- c(location1 = .llocat, scale1 = .lscale, location2 = .llocat, scale2 = .lscale) misc$earg <- list(location1 = .elocat, scale1 = .escale, location2 = .elocat, scale2 = .escale) misc$expected <- FALSE misc$BFGS <- TRUE misc$multipleResponses <- FALSE }), list( .llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = escale ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { locat1 <- eta2theta(eta[, 1], .llocat , .elocat ) Scale1 <- eta2theta(eta[, 2], .lscale , .escale ) locat2 <- eta2theta(eta[, 3], .llocat , .elocat ) Scale2 <- eta2theta(eta[, 4], .lscale , .escale ) zedd1 <- (y[, 1]-locat1) / Scale1 zedd2 <- (y[, 2]-locat2) / Scale2 if (residuals) stop("loglikelihood residuals not ", 
"implemented yet") else sum(c(w) * (-zedd1 - zedd2 - 3 * log1p(exp(-zedd1)+exp(-zedd2)) - log(Scale1) - log(Scale2))) }, list( .llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = escale ))), vfamily = c("bilogistic4"), deriv = eval(substitute(expression({ locat1 <- eta2theta(eta[, 1], .llocat , .elocat ) Scale1 <- eta2theta(eta[, 2], .lscale , .escale ) locat2 <- eta2theta(eta[, 3], .llocat , .elocat ) Scale2 <- eta2theta(eta[, 4], .lscale , .escale ) zedd1 <- (y[, 1]-locat1) / Scale1 zedd2 <- (y[, 2]-locat2) / Scale2 ezedd1 <- exp(-zedd1) ezedd2 <- exp(-zedd2) denom <- 1 + ezedd1 + ezedd2 dl.dlocat1 <- (1 - 3 * ezedd1 / denom) / Scale1 dl.dlocat2 <- (1 - 3 * ezedd2 / denom) / Scale2 dl.dscale1 <- (zedd1 - 1 - 3 * ezedd1 * zedd1 / denom) / Scale1 dl.dscale2 <- (zedd2 - 1 - 3 * ezedd2 * zedd2 / denom) / Scale2 dlocat1.deta <- dtheta.deta(locat1, .llocat , .elocat ) dlocat2.deta <- dtheta.deta(locat2, .llocat , .elocat ) dscale1.deta <- dtheta.deta(Scale1, .lscale , .escale ) dscale2.deta <- dtheta.deta(Scale2, .lscale , .escale ) if (iter == 1) { etanew <- eta } else { derivold <- derivnew etaold <- etanew etanew <- eta } derivnew <- c(w) * cbind(dl.dlocat1 * dlocat1.deta, dl.dscale1 * dscale1.deta, dl.dlocat2 * dlocat2.deta, dl.dscale2 * dscale2.deta) derivnew }), list( .llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = escale ))), weight = eval(substitute(expression({ if (iter == 1) { wznew <- cbind(matrix(w, n, M), matrix(0, n, dimm(M)-M)) } else { wzold <- wznew wznew <- qnupdate(w = w, wzold=wzold, dderiv=(derivold - derivnew), deta=etanew-etaold, M = M, trace=trace) # weights incorporated in args } wznew }), list( .lscale = lscale, .escale = escale, .llocat = llocat)))) } dbilogis4 <- function(x1, x2, loc1 = 0, scale1 = 1, loc2 = 0, scale2 = 1, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) L <- max(length(x1), length(x2), length(loc1), length(loc2), length(scale1), length(scale2)) x1 <- rep(x1, length.out = L); x2 <- rep(x2, length.out = L); loc1 <- rep(loc1, length.out = L); loc2 <- rep(loc2, length.out = L); scale1 <- rep(scale1, length.out = L); scale2 <- rep(scale2, length.out = L); zedd1 <- (-(x1-loc1)/scale1) zedd2 <- (-(x2-loc2)/scale2) logdensity <- log(2) + log(zedd1) + log(zedd2) - log(scale1) - log(scale1) - 3 * log1p(exp(zedd1) + exp(zedd2)) if (log.arg) logdensity else exp(logdensity) } pbilogis4 <- function(q1, q2, loc1 = 0, scale1 = 1, loc2 = 0, scale2 = 1) { ans <- 1 / (1 + exp(-(q1-loc1)/scale1) + exp(-(q2-loc2)/scale2)) ans[scale1 <= 0] <- NA ans[scale2 <= 0] <- NA ans } rbilogis4 <- function(n, loc1 = 0, scale1 = 1, loc2 = 0, scale2 = 1) { y1 <- rlogis(n = n, location = loc1, scale = scale1) ezedd1 <- exp(-(y1-loc1)/scale1) y2 <- loc2 - scale2 * log(1/sqrt(runif(n) / (1 + ezedd1)^2) - 1 - ezedd1) ans <- cbind(y1, y2) ans[scale2 <= 0, ] <- NA ans } freund61 <- function(la = "loge", lap = "loge", lb = "loge", lbp = "loge", ia = NULL, iap = NULL, ib = NULL, ibp = NULL, independent = FALSE, zero = NULL) { la <- as.list(substitute(la)) ea <- link2list(la) la <- attr(ea, "function.name") lap <- as.list(substitute(lap)) eap <- link2list(lap) lap <- attr(eap, "function.name") lb <- as.list(substitute(lb)) eb <- link2list(lb) lb <- attr(eb, "function.name") lbp <- as.list(substitute(lbp)) ebp <- link2list(lbp) lbp <- attr(ebp, "function.name") new("vglmff", blurb = c("Freund (1961) bivariate exponential distribution\n", "Links: ", namesof("a", la, earg = ea ), ", ", namesof("ap", lap, 
earg = eap), ", ", namesof("b", lb, earg = eb ), ", ", namesof("bp", lbp, earg = ebp)), constraints = eval(substitute(expression({ constraints <- cm.vgam(matrix(c(1, 1,0,0, 0,0, 1, 1), M, 2), x, .independent, constraints, apply.int = TRUE) constraints = cm.zero.vgam(constraints, x, .zero, M) }), list(.independent = independent, .zero = zero))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 2, ncol.y.min = 2, out.wy = TRUE, colsyperw = 2, maximize = TRUE) w <- temp5$w y <- temp5$y predictors.names <- c(namesof("a", .la, earg = .ea , short = TRUE), namesof("ap", .lap, earg = .eap, short = TRUE), namesof("b", .lb, earg = .eb , short = TRUE), namesof("bp", .lbp, earg = .ebp, short = TRUE)) extra$y1.lt.y2 = y[, 1] < y[, 2] if (!(arr <- sum(extra$y1.lt.y2)) || arr == n) stop("identifiability problem: either all y1 2.5) stop("argument 'imethod' must be 1 or 2") new("vglmff", blurb = c("Bivariate gamma: McKay's distribution\n", "Links: ", namesof("scale", lscale), ", ", namesof("shape1", lshape1), ", ", namesof("shape2", lshape2)), constraints = eval(substitute(expression({ constraints <- cm.zero.vgam(constraints, x, .zero, M) }), list( .zero = zero ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 2, ncol.y.min = 2, out.wy = TRUE, colsyperw = 2, maximize = TRUE) w <- temp5$w y <- temp5$y if (any(y[, 1] >= y[, 2])) stop("the second column minus the first column must be a vector ", "of positive values") predictors.names <- c(namesof("scale", .lscale, .escale, short = TRUE), namesof("shape1", .lshape1, .eshape1, short = TRUE), namesof("shape2", .lshape2, .eshape2, short = TRUE)) if (!length(etastart)) { momentsY <- if ( .imethod == 1) { cbind(median(y[, 1]), # This may not be monotonic median(y[, 2])) + 0.01 } else { cbind(weighted.mean(y[, 1], w), weighted.mean(y[, 2], w)) } mcg2.loglik <- function(thetaval, y, x, w, extraargs) { ainit <- a <- thetaval momentsY <- extraargs$momentsY p <- (1/a) * abs(momentsY[1]) + 0.01 q <- (1/a) * abs(momentsY[2] - momentsY[1]) + 0.01 sum(c(w) * (-(p+q)*log(a) - lgamma(p) - lgamma(q) + (p - 1)*log(y[, 1]) + (q - 1)*log(y[, 2]-y[, 1]) - y[, 2] / a )) } a.grid <- if (length( .iscale )) c( .iscale ) else c(0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 1, 2, 5, 10, 20, 50, 100) extraargs <- list(momentsY = momentsY) ainit <- getMaxMin(a.grid, objfun = mcg2.loglik, y = y, x = x, w = w, maximize = TRUE, extraargs = extraargs) ainit <- rep(if(is.Numeric( .iscale )) .iscale else ainit, length.out = n) pinit <- (1/ainit) * abs(momentsY[1]) + 0.01 qinit <- (1/ainit) * abs(momentsY[2] - momentsY[1]) + 0.01 pinit <- rep(if(is.Numeric( .ishape1 )) .ishape1 else pinit, length.out = n) qinit <- rep(if(is.Numeric( .ishape2 )) .ishape2 else qinit, length.out = n) etastart <- cbind(theta2eta(ainit, .lscale), theta2eta(pinit, .lshape1), theta2eta(qinit, .lshape2)) } }), list( .lscale = lscale, .lshape1 = lshape1, .lshape2 = lshape2, .escale = escale, .eshape1 = eshape1, .eshape2 = eshape2, .iscale = iscale, .ishape1 = ishape1, .ishape2 = ishape2, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { a <- eta2theta(eta[, 1], .lscale , .escale ) p <- eta2theta(eta[, 2], .lshape1 , .eshape1 ) q <- eta2theta(eta[, 3], .lshape2 , .eshape2 ) cbind("y1" = p*a, "y2" = (p+q)*a) }, list( .lscale = lscale, .lshape1 = lshape1, .lshape2 = lshape2, .escale = escale, .eshape1 = eshape1, .eshape2 = eshape2 ))), last = eval(substitute(expression({ misc$link <- c("scale" = 
.lscale , "shape1" = .lshape1 , "shape2" = .lshape2 ) misc$earg <- list("scale" = .escale , "shape1" = .eshape1 , "shape2" = .eshape2 ) misc$ishape1 <- .ishape1 misc$ishape2 <- .ishape2 misc$iscale <- .iscale misc$expected <- TRUE misc$multipleResponses <- FALSE }), list( .lscale = lscale, .lshape1 = lshape1, .lshape2 = lshape2, .escale = escale, .eshape1 = eshape1, .eshape2 = eshape2, .iscale = iscale, .ishape1 = ishape1, .ishape2 = ishape2, .imethod = imethod ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { a <- eta2theta(eta[, 1], .lscale , .escale ) p <- eta2theta(eta[, 2], .lshape1 , .eshape1 ) q <- eta2theta(eta[, 3], .lshape2 , .eshape2 ) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else sum(c(w) * (-(p+q)*log(a) - lgamma(p) - lgamma(q) + (p - 1)*log(y[, 1]) + (q - 1)*log(y[, 2]-y[, 1]) - y[, 2] / a)) }, list( .lscale = lscale, .lshape1 = lshape1, .lshape2 = lshape2, .escale = escale, .eshape1 = eshape1, .eshape2 = eshape2 ))), vfamily = c("bivgamma.mckay"), deriv = eval(substitute(expression({ aparam <- eta2theta(eta[, 1], .lscale , .escale ) shape1 <- eta2theta(eta[, 2], .lshape1 , .eshape1 ) shape2 <- eta2theta(eta[, 3], .lshape2 , .eshape2 ) dl.da <- (-(shape1+shape2) + y[, 2] / aparam) / aparam dl.dshape1 <- -log(aparam) - digamma(shape1) + log(y[, 1]) dl.dshape2 <- -log(aparam) - digamma(shape2) + log(y[, 2]-y[, 1]) c(w) * cbind(dl.da * dtheta.deta(aparam, .lscale), dl.dshape1 * dtheta.deta(shape1, .lshape1), dl.dshape2 * dtheta.deta(shape2, .lshape2)) }), list( .lscale = lscale, .lshape1 = lshape1, .lshape2 = lshape2, .escale = escale, .eshape1 = eshape1, .eshape2 = eshape2 ))), weight = eval(substitute(expression({ d11 <- (shape1+shape2) / aparam^2 d22 <- trigamma(shape1) d33 <- trigamma(shape2) d12 <- 1 / aparam d13 <- 1 / aparam d23 <- 0 wz <- matrix(0, n, dimm(M)) wz[, iam(1, 1, M)] <- dtheta.deta(aparam, .lscale)^2 * d11 wz[, iam(2, 2, M)] <- dtheta.deta(shape1, .lshape1)^2 * d22 wz[, iam(3, 3, M)] <- dtheta.deta(shape2, .lshape2)^2 * d33 wz[, iam(1, 2, M)] <- dtheta.deta(aparam, .lscale) * dtheta.deta(shape1, .lshape1) * d12 wz[, iam(1, 3, M)] <- dtheta.deta(aparam, .lscale) * dtheta.deta(shape2, .lshape2) * d13 wz[, iam(2, 3, M)] <- dtheta.deta(shape1, .lshape1) * dtheta.deta(shape2, .lshape2) * d23 c(w) * wz }), list( .lscale = lscale, .lshape1 = lshape1, .lshape2 = lshape2 )))) } rfrank <- function(n, alpha) { use.n <- if ((length.n <- length(n)) > 1) length.n else if (!is.Numeric(n, integer.valued = TRUE, allowable.length = 1, positive = TRUE)) stop("bad input for argument 'n'") else n if (!is.Numeric(alpha, positive = TRUE)) stop("bad input for argument 'alpha'") alpha <- rep(alpha, length.out = use.n) U <- runif(use.n) V <- runif(use.n) T <- alpha^U + (alpha - alpha^U) * V X <- U index <- abs(alpha - 1) < .Machine$double.eps Y <- U if (any(!index)) Y[!index] <- logb(T[!index]/(T[!index]+(1-alpha[!index])*V[!index]), base = alpha[!index]) ans <- matrix(c(X, Y), nrow = use.n, ncol = 2) if (any(index)) { ans[index, 1] <- runif(sum(index)) # Uniform density for alpha == 1 ans[index, 2] <- runif(sum(index)) } ans } pfrank <- function(q1, q2, alpha) { if (!is.Numeric(q1)) stop("bad input for 'q1'") if (!is.Numeric(q2)) stop("bad input for 'q2'") if (!is.Numeric(alpha, positive = TRUE)) stop("bad input for 'alpha'") L <- max(length(q1), length(q2), length(alpha)) alpha <- rep(alpha, length.out = L) q1 <- rep(q1, length.out = L) q2 <- rep(q2, length.out = L) x <- q1; y <- q2 index <- (x >= 1 & y < 
1) | (y >= 1 & x < 1) | (x <= 0 | y <= 0) | (x >= 1 & y >= 1) | (abs(alpha - 1) < .Machine$double.eps) ans <- as.numeric(index) if (any(!index)) ans[!index] <- logb(1 + ((alpha[!index])^(x[!index]) - 1)* ((alpha[!index])^(y[!index]) - 1)/(alpha[!index] - 1), base=alpha[!index]) ind2 <- (abs(alpha - 1) < .Machine$double.eps) ans[ind2] <- x[ind2] * y[ind2] ans[x >= 1 & y < 1] <- y[x >= 1 & y < 1] # P(Y2 < q2) = q2 ans[y >= 1 & x < 1] <- x[y >= 1 & x < 1] # P(Y1 < q1) = q1 ans[x <= 0 | y <= 0] <- 0 ans[x >= 1 & y >= 1] <- 1 ans } dfrank <- function(x1, x2, alpha, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) if (!is.Numeric(x1)) stop("bad input for 'x1'") if (!is.Numeric(x2)) stop("bad input for 'x2'") if (!is.Numeric(alpha, positive = TRUE)) stop("bad input for 'alpha'") L <- max(length(x1), length(x2), length(alpha)) alpha <- rep(alpha, length.out = L) x1 <- rep(x1, length.out = L) x2 <- rep(x2, length.out = L) if (log.arg) { denom <- alpha-1 + (alpha^x1 - 1) * (alpha^x2 - 1) denom <- abs(denom) log((alpha - 1) * log(alpha)) + (x1+x2)*log(alpha) - 2 * log(denom) } else { temp <- (alpha - 1) + (alpha^x1 - 1) * (alpha^x2 - 1) index <- (abs(alpha - 1) < .Machine$double.eps) ans <- x1 if (any(!index)) ans[!index] <- (alpha[!index] - 1) * log(alpha[!index]) * (alpha[!index])^(x1[!index]+x2[!index]) / (temp[!index])^2 ans[x1 <= 0 | x2 <= 0 | x1 >= 1 | x2 >= 1] <- 0 ans[index] <- 1 ans } } frank.control <- function(save.weight = TRUE, ...) { list(save.weight = save.weight) } frank <- function(lapar = "loge", iapar = 2, nsimEIM = 250) { lapar <- as.list(substitute(lapar)) eapar <- link2list(lapar) lapar <- attr(eapar, "function.name") if (!is.Numeric(iapar, positive = TRUE)) stop("'iapar' must be positive") if (length(nsimEIM) && (!is.Numeric(nsimEIM, allowable.length = 1, integer.valued = TRUE) || nsimEIM <= 50)) stop("'nsimEIM' should be an integer greater than 50") new("vglmff", blurb = c("Frank's bivariate distribution\n", "Links: ", namesof("apar", lapar, earg = eapar )), initialize = eval(substitute(expression({ if (any(y <= 0) || any(y >= 1)) stop("the response must have values between 0 and 1") temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = 1, ncol.y.max = 2, ncol.y.min = 2, out.wy = TRUE, colsyperw = 2, maximize = TRUE) w <- temp5$w y <- temp5$y predictors.names <- c(namesof("apar", .lapar, earg = .eapar, short = TRUE)) if (length(dimnames(y))) extra$dimnamesy2 <- dimnames(y)[[2]] if (!length(etastart)) { apar.init <- rep(.iapar, length.out = n) etastart <- cbind(theta2eta(apar.init, .lapar, earg = .eapar )) } }), list( .lapar = lapar, .eapar = eapar, .iapar = iapar))), linkinv = eval(substitute(function(eta, extra = NULL) { apar <- eta2theta(eta, .lapar, earg = .eapar ) fv.matrix <- matrix(0.5, length(apar), 2) if (length(extra$dimnamesy2)) dimnames(fv.matrix) <- list(names(eta), extra$dimnamesy2) fv.matrix }, list( .lapar = lapar, .eapar = eapar ))), last = eval(substitute(expression({ misc$link <- c("apar" = .lapar ) misc$earg <- list("apar" = .eapar ) misc$expected <- TRUE misc$nsimEIM <- .nsimEIM misc$pooled.weight <- pooled.weight misc$multipleResponses <- FALSE }), list( .lapar = lapar, .eapar = eapar, .nsimEIM = nsimEIM ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { apar <- eta2theta(eta, .lapar, earg = .eapar ) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else { sum(c(w) * dfrank(x1 = y[, 1], x2 = y[, 2], alpha = apar, 
log = TRUE)) } }, list( .lapar = lapar, .eapar = eapar ))), vfamily = c("frank"), deriv = eval(substitute(expression({ apar <- eta2theta(eta, .lapar, earg = .eapar ) dapar.deta <- dtheta.deta(apar, .lapar, earg = .eapar ) de3 <- deriv3(~ (log((apar - 1) * log(apar)) + (y1+y2)*log(apar) - 2 * log(apar-1 + (apar^y1 - 1) * (apar^y2 - 1))), name = "apar", hessian = TRUE) denom <- apar-1 + (apar^y[, 1] - 1) * (apar^y[, 2] - 1) tmp700 <- 2*apar^(y[, 1]+y[, 2]) - apar^y[, 1] - apar^y[, 2] numerator <- 1 + y[, 1] * apar^(y[, 1] - 1) * (apar^y[, 2] - 1) + y[, 2] * apar^(y[, 2] - 1) * (apar^y[, 1] - 1) Dl.dapar <- 1/(apar - 1) + 1/(apar*log(apar)) + (y[, 1]+y[, 2])/apar - 2 * numerator / denom c(w) * Dl.dapar * dapar.deta }), list( .lapar = lapar, .eapar = eapar, .nsimEIM = nsimEIM ))), weight = eval(substitute(expression({ if ( is.Numeric( .nsimEIM)) { pooled.weight <- FALSE # For @last run.mean <- 0 for(ii in 1:( .nsimEIM )) { ysim <- rfrank(n,alpha=apar) y1 <- ysim[, 1]; y2 <- ysim[, 2]; eval.de3 <- eval(de3) d2l.dthetas2 <- attr(eval.de3, "hessian") rm(ysim) temp3 <- -d2l.dthetas2[, 1, 1] # M = 1 run.mean <- ((ii - 1) * run.mean + temp3) / ii } wz <- if (intercept.only) matrix(mean(run.mean), n, dimm(M)) else run.mean wz <- wz * dapar.deta^2 c(w) * wz } else { nump <- apar^(y[, 1]+y[, 2]-2) * (2 * y[, 1] * y[, 2] + y[, 1]*(y[, 1] - 1) + y[, 2]*(y[, 2] - 1)) - y[, 1]*(y[, 1] - 1) * apar^(y[, 1]-2) - y[, 2]*(y[, 2] - 1) * apar^(y[, 2]-2) D2l.dapar2 <- 1/(apar - 1)^2 + (1+log(apar))/(apar*log(apar))^2 + (y[, 1]+y[, 2])/apar^2 + 2 * (nump / denom - (numerator/denom)^2) d2apar.deta2 <- d2theta.deta2(apar, .lapar) wz <- c(w) * (dapar.deta^2 * D2l.dapar2 - Dl.dapar * d2apar.deta2) if (TRUE && intercept.only) { wz <- cbind(wz) sumw <- sum(w) for(iii in 1:ncol(wz)) wz[,iii] <- sum(wz[, iii]) / sumw pooled.weight <- TRUE wz <- c(w) * wz # Put back the weights } else { pooled.weight <- FALSE } wz } }), list( .lapar = lapar, .eapar = eapar, .nsimEIM = nsimEIM )))) } gammahyp <- function(ltheta = "loge", itheta = NULL, expected = FALSE) { ltheta <- as.list(substitute(ltheta)) etheta <- link2list(ltheta) ltheta <- attr(etheta, "function.name") if (!is.logical(expected) || length(expected) != 1) stop("argument 'expected' must be a single logical") new("vglmff", blurb = c("Gamma hyperbola bivariate distribution\n", "Links: ", namesof("theta", ltheta, etheta)), initialize = eval(substitute(expression({ if (any(y[, 1] <= 0) || any(y[, 2] <= 1)) stop("the response has values that are out of range") temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = 1, ncol.y.max = 2, ncol.y.min = 2, out.wy = TRUE, colsyperw = 2, maximize = TRUE) w <- temp5$w y <- temp5$y predictors.names <- c(namesof("theta", .ltheta, .etheta , short = TRUE)) if (!length(etastart)) { theta.init <- if (length( .itheta)) { rep( .itheta , length.out = n) } else { 1 / (y[, 2] - 1 + 0.01) } etastart <- cbind(theta2eta(theta.init, .ltheta , .etheta )) } }), list( .ltheta = ltheta, .etheta = etheta, .itheta = itheta))), linkinv = eval(substitute(function(eta, extra = NULL) { theta <- eta2theta(eta, .ltheta , .etheta ) cbind(theta*exp(theta), 1+1/theta) }, list( .ltheta = ltheta, .etheta = etheta ))), last = eval(substitute(expression({ misc$link <- c("theta" = .ltheta ) misc$earg <- list("theta" = .etheta ) misc$expected <- .expected misc$multipleResponses <- FALSE }), list( .ltheta = ltheta, .etheta = etheta, .expected = expected ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { theta <- 
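# gammahyp(): the fitted means in @linkinv above are E(Y1) = theta * exp(theta)
# and E(Y2) = 1 + 1/theta; e.g. theta = 1 gives
#   c(1 * exp(1), 1 + 1/1)  # c(2.718282, 2)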
eta2theta(eta, .ltheta , .etheta ) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else { sum(c(w) * (-exp(-theta)*y[, 1]/theta - theta*y[, 2])) } }, list( .ltheta = ltheta, .etheta = etheta ))), vfamily = c("gammahyp"), deriv = eval(substitute(expression({ theta <- eta2theta(eta, .ltheta , .etheta ) Dl.dtheta <- exp(-theta) * y[, 1] * (1+theta) / theta^2 - y[, 2] DTHETA.deta <- dtheta.deta(theta, .ltheta , .etheta ) c(w) * Dl.dtheta * DTHETA.deta }), list( .ltheta = ltheta, .etheta = etheta ))), weight = eval(substitute(expression({ temp300 <- 2 + theta * (2 + theta) if ( .expected ) { D2l.dtheta2 <- temp300 / theta^2 wz <- c(w) * DTHETA.deta^2 * D2l.dtheta2 } else { D2l.dtheta2 <- temp300 * y[, 1] * exp(-theta) / theta^3 D2theta.deta2 <- d2theta.deta2(theta, .ltheta ) wz <- c(w) * (DTHETA.deta^2 * D2l.dtheta2 - Dl.dtheta * D2theta.deta2) } wz }), list( .ltheta = ltheta, .etheta = etheta, .expected = expected )))) } morgenstern <- function(lapar = "rhobit", iapar = NULL, tola0 = 0.01, imethod = 1) { lapar <- as.list(substitute(lapar)) earg <- link2list(lapar) lapar <- attr(earg, "function.name") if (length(iapar) && (!is.Numeric(iapar, allowable.length = 1) || abs(iapar) >= 1)) stop("argument 'iapar' must be a single number between -1 and 1") if (!is.Numeric(tola0, allowable.length = 1, positive = TRUE)) stop("argument 'tola0' must be a single positive number") if (length(iapar) && abs(iapar) <= tola0) stop("argument 'iapar' must not be between -tola0 and tola0") if (!is.Numeric(imethod, allowable.length = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2.5) stop("argument 'imethod' must be 1 or 2") new("vglmff", blurb = c("Morgenstern's bivariate exponential distribution\n", "Links: ", namesof("apar", lapar, earg = earg )), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, Is.nonnegative.y = TRUE, ncol.w.max = 1, ncol.y.max = 2, ncol.y.min = 2, out.wy = TRUE, colsyperw = 2, maximize = TRUE) w <- temp5$w y <- temp5$y predictors.names <- c(namesof("apar", .lapar, earg = .earg , short = TRUE)) if (length(dimnames(y))) extra$dimnamesy2 = dimnames(y)[[2]] if (!length(etastart)) { ainit <- if (length(.iapar)) rep( .iapar , length.out = n) else { mean1 <- if ( .imethod == 1) median(y[, 1]) else mean(y[, 1]) mean2 <- if ( .imethod == 1) median(y[, 2]) else mean(y[, 2]) Finit <- 0.01 + mean(y[, 1] <= mean1 & y[, 2] <= mean2) ((Finit+expm1(-mean1)+exp(-mean2)) / exp(-mean1-mean2) - 1) / ( expm1(-mean1) * expm1(-mean2)) } etastart <- theta2eta(rep(ainit, length.out = n), .lapar, earg = .earg ) } }), list( .iapar = iapar, .lapar = lapar, .earg = earg, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { alpha <- eta2theta(eta, .lapar, earg = .earg ) fv.matrix <- matrix(1, length(alpha), 2) if (length(extra$dimnamesy2)) dimnames(fv.matrix) = list(names(eta), extra$dimnamesy2) fv.matrix }, list( .lapar = lapar, .earg = earg ))), last = eval(substitute(expression({ misc$link <- c("apar" = .lapar ) misc$earg <- list("apar" = .earg ) misc$expected <- FALSE misc$pooled.weight <- pooled.weight misc$multipleResponses <- FALSE }), list( .lapar = lapar, .earg = earg ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { alpha <- eta2theta(eta, .lapar, earg = .earg ) alpha[abs(alpha) < .tola0 ] <- .tola0 if (residuals) stop("loglikelihood residuals not ", "implemented yet") else { denom <- (1 + alpha - 2*alpha*(exp(-y[, 1]) + exp(-y[, 2])) + 4*alpha*exp(-y[, 1] - y[, 2])) sum(c(w) * 
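# The Morgenstern log-density being summed: 'denom' above expands to
# 1 + alpha * (2*exp(-y1) - 1) * (2*exp(-y2) - 1), so
# log f(y1, y2) = -y1 - y2 + log(denom).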
(-y[, 1] - y[, 2] + log(denom))) } }, list( .lapar = lapar, .earg = earg, .tola0=tola0 ))), vfamily = c("morgenstern"), deriv = eval(substitute(expression({ alpha <- eta2theta(eta, .lapar, earg = .earg ) alpha[abs(alpha) < .tola0 ] <- .tola0 numerator <- 1 - 2*(exp(-y[, 1]) + exp(-y[, 2])) + 4*exp(-y[, 1] - y[, 2]) denom <- (1 + alpha - 2*alpha*(exp(-y[, 1]) + exp(-y[, 2])) + 4 *alpha*exp(-y[, 1] - y[, 2])) dl.dalpha <- numerator / denom dalpha.deta <- dtheta.deta(alpha, .lapar, earg = .earg ) c(w) * cbind(dl.dalpha * dalpha.deta) }), list( .lapar = lapar, .earg = earg, .tola0=tola0 ))), weight = eval(substitute(expression({ d2l.dalpha2 <- dl.dalpha^2 d2alpha.deta2 <- d2theta.deta2(alpha, .lapar, earg = .earg ) wz <- c(w) * (dalpha.deta^2 * d2l.dalpha2 - d2alpha.deta2 * dl.dalpha) if (TRUE && intercept.only) { wz <- cbind(wz) sumw <- sum(w) for(iii in 1:ncol(wz)) wz[,iii] <- sum(wz[, iii]) / sumw pooled.weight <- TRUE wz <- c(w) * wz # Put back the weights } else { pooled.weight <- FALSE } wz }), list( .lapar = lapar, .earg = earg )))) } rfgm <- function(n, alpha) { use.n <- if ((length.n <- length(n)) > 1) length.n else if (!is.Numeric(n, integer.valued = TRUE, allowable.length = 1, positive = TRUE)) stop("bad input for argument 'n'") else n if (!is.Numeric(alpha)) stop("bad input for argument 'alpha'") if (any(abs(alpha) > 1)) stop("argument 'alpha' has values out of range") y1 <- V1 <- runif(use.n) V2 <- runif(use.n) temp <- 2*y1 - 1 A <- alpha * temp - 1 B <- sqrt(1 - 2 * alpha * temp + (alpha*temp)^2 + 4 * alpha * V2 * temp) y2 <- 2 * V2 / (B - A) matrix(c(y1, y2), nrow = use.n, ncol = 2) } dfgm <- function(x1, x2, alpha, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) if (!is.Numeric(alpha)) stop("bad input for 'alpha'") if (any(abs(alpha) > 1)) stop("'alpha' values out of range") if ( !is.logical( log.arg ) || length( log.arg ) != 1 ) stop("bad input for argument 'log'") L <- max(length(x1), length(x2), length(alpha)) if (length(x1) != L) x1 <- rep(x1, length.out = L) if (length(x2) != L) x2 <- rep(x2, length.out = L) if (length(alpha) != L) alpha <- rep(alpha, length.out = L) ans <- 0 * x1 xnok <- (x1 <= 0) | (x1 >= 1) | (x2 <= 0) | (x2 >= 1) if ( log.arg ) { ans[!xnok] <- log1p(alpha[!xnok] * (1-2*x1[!xnok]) * (1-2*x2[!xnok])) ans[xnok] <- log(0) } else { ans[!xnok] <- 1 + alpha[!xnok] * (1-2*x1[!xnok]) * (1-2*x2[!xnok]) ans[xnok] <- 0 if (any(ans<0)) stop("negative values in the density (alpha out of range)") } ans } pfgm <- function(q1, q2, alpha) { if (!is.Numeric(q1)) stop("bad input for 'q1'") if (!is.Numeric(q2)) stop("bad input for 'q2'") if (!is.Numeric(alpha)) stop("bad input for 'alpha'") if (any(abs(alpha) > 1)) stop("'alpha' values out of range") L <- max(length(q1), length(q2), length(alpha)) if (length(q1) != L) q1 <- rep(q1, length.out = L) if (length(q2) != L) q2 <- rep(q2, length.out = L) if (length(alpha) != L) alpha <- rep(alpha, length.out = L) x <- q1; y <- q2 index <- (x >= 1 & y < 1) | (y >= 1 & x < 1) | (x <= 0 | y <= 0) | (x >= 1 & y >= 1) ans <- as.numeric(index) if (any(!index)) { ans[!index] <- q1[!index] * q2[!index] * (1 + alpha[!index] * (1-q1[!index])*(1-q2[!index])) } ans[x >= 1 & y<1] <- y[x >= 1 & y<1] # P(Y2 < q2) = q2 ans[y >= 1 & x<1] <- x[y >= 1 & x<1] # P(Y1 < q1) = q1 ans[x <= 0 | y <= 0] <- 0 ans[x >= 1 & y >= 1] <- 1 ans } fgm.control <- function(save.weight = TRUE, ...) 
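# Quick check of the FGM d/p/r functions above (hypothetical alpha):
#   set.seed(1)
#   yy <- rfgm(5000, alpha = 0.7)
#   mean(yy[, 1] <= 0.5 & yy[, 2] <= 0.5)  # approximates pfgm(0.5, 0.5, 0.7)
#   pfgm(0.5, 0.5, alpha = 0.7)            # 0.25 * (1 + 0.7 * 0.25)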
{ list(save.weight = save.weight) } fgm <- function(lapar = "rhobit", iapar = NULL, imethod = 1, nsimEIM = 200) { lapar <- as.list(substitute(lapar)) earg <- link2list(lapar) lapar <- attr(earg, "function.name") if (!is.Numeric(imethod, allowable.length = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2.5) stop("argument 'imethod' must be 1 or 2") if (!length(nsimEIM) || (!is.Numeric(nsimEIM, allowable.length = 1, integer.valued = TRUE) || nsimEIM <= 50)) stop("'nsimEIM' should be an integer greater than 50") if (length(iapar) && (abs(iapar) >= 1)) stop("'iapar' should be less than 1 in absolute value") new("vglmff", blurb = c("Farlie-Gumbel-Morgenstern distribution\n", "Links: ", namesof("apar", lapar, earg = earg )), initialize = eval(substitute(expression({ if (any(y < 0) || any(y > 1)) stop("the response must have values in the unit square") temp5 <- w.y.check(w = w, y = y, Is.nonnegative.y = TRUE, ncol.w.max = 1, ncol.y.max = 2, ncol.y.min = 2, out.wy = TRUE, colsyperw = 2, maximize = TRUE) w <- temp5$w y <- temp5$y predictors.names <- namesof("apar", .lapar, earg = .earg , short = TRUE) if (length(dimnames(y))) extra$dimnamesy2 <- dimnames(y)[[2]] if (!length(etastart)) { ainit <- if (length( .iapar )) .iapar else { mean1 <- if ( .imethod == 1) weighted.mean(y[, 1], w) else median(y[, 1]) mean2 <- if ( .imethod == 1) weighted.mean(y[, 2], w) else median(y[, 2]) Finit <- weighted.mean(y[, 1] <= mean1 & y[, 2] <= mean2, w) (Finit / (mean1 * mean2) - 1) / ((1-mean1) * (1-mean2)) } ainit <- min(0.95, max(ainit, -0.95)) etastart <- theta2eta(rep(ainit, length.out = n), .lapar, earg = .earg ) } }), list( .iapar = iapar, .lapar = lapar, .earg = earg, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { alpha <- eta2theta(eta, .lapar, earg = .earg ) fv.matrix <- matrix(0.5, length(alpha), 2) if (length(extra$dimnamesy2)) dimnames(fv.matrix) <- list(names(eta), extra$dimnamesy2) fv.matrix }, list( .lapar = lapar, .earg = earg ))), last = eval(substitute(expression({ misc$link <- c("apar" = .lapar ) misc$earg <- list("apar" = .earg ) misc$expected <- FALSE misc$nsimEIM <- .nsimEIM misc$multipleResponses <- FALSE }), list( .lapar = lapar, .earg = earg, .nsimEIM = nsimEIM ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { alpha <- eta2theta(eta, .lapar, earg = .earg ) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else { sum(c(w) * dfgm(x1 = y[, 1], x2 = y[, 2], alpha = alpha, log = TRUE)) } }, list( .lapar = lapar, .earg = earg ))), vfamily = c("fgm"), deriv = eval(substitute(expression({ alpha <- eta2theta(eta, .lapar, earg = .earg ) dalpha.deta <- dtheta.deta(alpha, .lapar, earg = .earg ) numerator <- (1 - 2 * y[, 1]) * (1 - 2 * y[, 2]) denom <- 1 + alpha * numerator mytolerance <- .Machine$double.eps bad <- (denom <= mytolerance) # Range violation if (any(bad)) { cat("There are some range violations in @deriv\n") flush.console() denom[bad] <- 2 * mytolerance } dl.dalpha <- numerator / denom c(w) * cbind(dl.dalpha * dalpha.deta) }), list( .lapar = lapar, .earg = earg, .nsimEIM = nsimEIM ))), weight = eval(substitute(expression({ run.var <- 0 for(ii in 1:( .nsimEIM )) { ysim <- rfgm(n, alpha=alpha) numerator <- (1 - 2 * ysim[, 1]) * (1 - 2 * ysim[, 2]) denom <- 1 + alpha * numerator dl.dalpha <- numerator / denom rm(ysim) temp3 <- dl.dalpha run.var <- ((ii - 1) * run.var + temp3^2) / ii } wz <- if (intercept.only) matrix(colMeans(cbind(run.var)), n, dimm(M), byrow = TRUE) else 
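# Simulated Fisher scoring: the expected information for alpha is
# approximated by the running mean of squared scores over .nsimEIM draws
# from rfgm(); with intercept.only the average over observations is
# recycled for every row.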
cbind(run.var) wz <- wz * dalpha.deta^2 c(w) * wz }), list( .lapar = lapar, .earg = earg, .nsimEIM = nsimEIM )))) } gumbelIbiv <- function(lapar = "identity", iapar = NULL, imethod = 1) { lapar <- as.list(substitute(lapar)) earg <- link2list(lapar) lapar <- attr(earg, "function.name") if (length(iapar) && !is.Numeric(iapar, allowable.length = 1)) stop("'iapar' must be a single number") if (!is.Numeric(imethod, allowable.length = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2.5) stop("argument 'imethod' must be 1 or 2") new("vglmff", blurb = c("Gumbel's Type I bivariate distribution\n", "Links: ", namesof("apar", lapar, earg = earg )), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, Is.nonnegative.y = TRUE, ncol.w.max = 1, ncol.y.max = 2, ncol.y.min = 2, out.wy = TRUE, colsyperw = 2, maximize = TRUE) w <- temp5$w y <- temp5$y predictors.names <- c(namesof("apar", .lapar , earg = .earg , short = TRUE)) if (!length(etastart)) { ainit <- if (length( .iapar )) rep( .iapar, length.out = n) else { mean1 <- if ( .imethod == 1) median(y[, 1]) else mean(y[, 1]) mean2 <- if ( .imethod == 1) median(y[, 2]) else mean(y[, 2]) Finit <- 0.01 + mean(y[, 1] <= mean1 & y[, 2] <= mean2) (log(Finit+expm1(-mean1)+exp(-mean2))+mean1+mean2)/(mean1*mean2) } etastart <- theta2eta(rep(ainit, length.out = n), .lapar, earg = .earg ) } }), list( .iapar = iapar, .lapar = lapar, .earg = earg, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { alpha <- eta2theta(eta, .lapar, earg = .earg ) cbind(rep(1, len = length(alpha)), rep(1, len = length(alpha))) }, list( .lapar = lapar, .earg = earg ))), last = eval(substitute(expression({ misc$link <- c("apar" = .lapar ) misc$earg <- list("apar" = .earg ) misc$expected <- FALSE misc$pooled.weight <- pooled.weight misc$multipleResponses <- FALSE }), list( .lapar = lapar, .earg = earg ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { alpha <- eta2theta(eta, .lapar, earg = .earg ) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else { denom <- (alpha*y[, 1] - 1) * (alpha*y[, 2] - 1) + alpha mytolerance <- .Machine$double.xmin bad <- (denom <= mytolerance) # Range violation if (any(bad)) { cat("There are some range violations in @deriv\n") flush.console() } sum(bad) * (-1.0e10) + sum(w[!bad] * (-y[!bad, 1] - y[!bad, 2] + alpha[!bad]*y[!bad, 1]*y[!bad, 2] + log(denom[!bad]))) } }, list( .lapar = lapar, .earg = earg ))), vfamily = c("gumbelIbiv"), deriv = eval(substitute(expression({ alpha <- eta2theta(eta, .lapar, earg = .earg ) numerator <- (alpha*y[, 1] - 1)*y[, 2] + (alpha*y[, 2] - 1)*y[, 1] + 1 denom <- (alpha*y[, 1] - 1) * (alpha*y[, 2] - 1) + alpha denom <- abs(denom) dl.dalpha <- numerator / denom + y[, 1]*y[, 2] dalpha.deta <- dtheta.deta(alpha, .lapar, earg = .earg ) c(w) * cbind(dl.dalpha * dalpha.deta) }), list( .lapar = lapar, .earg = earg ))), weight = eval(substitute(expression({ d2l.dalpha2 <- (numerator/denom)^2 - 2*y[, 1]*y[, 2] / denom d2alpha.deta2 <- d2theta.deta2(alpha, .lapar, earg = .earg ) wz <- c(w) * (dalpha.deta^2 * d2l.dalpha2 - d2alpha.deta2 * dl.dalpha) if (TRUE && intercept.only) { wz <- cbind(wz) sumw <- sum(w) for(iii in 1:ncol(wz)) wz[, iii] <- sum(wz[, iii]) / sumw pooled.weight <- TRUE wz <- c(w) * wz # Put back the weights } else { pooled.weight <- FALSE } wz }), list( .lapar = lapar, .earg = earg )))) } pplack <- function(q1, q2, oratio) { if (!is.Numeric(q1)) stop("bad input for 'q1'") if (!is.Numeric(q2)) 
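# Plackett copula computed below: with S = 1 + (psi - 1)*(u + v) and odds
# ratio psi != 1, the cdf is
# C(u, v) = (S - sqrt(S^2 - 4*psi*(psi - 1)*u*v)) / (2*(psi - 1)),
# which is what temp1/temp2 evaluate.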
stop("bad input for 'q2'") if (!is.Numeric(oratio, positive = TRUE)) stop("bad input for 'oratio'") L <- max(length(q1), length(q2), length(oratio)) if (length(q1) != L) q1 <- rep(q1, length.out = L) if (length(q2) != L) q2 <- rep(q2, length.out = L) if (length(oratio) != L) oratio <- rep(oratio, length.out = L) x <- q1; y <- q2 index <- (x >= 1 & y < 1) | (y >= 1 & x < 1) | (x <= 0 | y <= 0) | (x >= 1 & y >= 1) | (abs(oratio - 1) < 1.0e-6) # .Machine$double.eps ans <- as.numeric(index) if (any(!index)) { temp1 <- 1 + (oratio[!index] - 1) * (q1[!index] + q2[!index]) temp2 <- temp1 - sqrt(temp1^2 - 4 * oratio[!index] * (oratio[!index] - 1) * q1[!index] * q2[!index]) ans[!index] <- 0.5 * temp2 / (oratio[!index] - 1) } ind2 <- (abs(oratio - 1) < 1.0e-6) # .Machine$double.eps ans[ind2] <- x[ind2] * y[ind2] ans[x >= 1 & y<1] <- y[x >= 1 & y<1] # P(Y2 < q2) = q2 ans[y >= 1 & x<1] <- x[y >= 1 & x<1] # P(Y1 < q1) = q1 ans[x <= 0 | y <= 0] <- 0 ans[x >= 1 & y >= 1] <- 1 ans } rplack <- function(n, oratio) { use.n <- if ((length.n <- length(n)) > 1) length.n else if (!is.Numeric(n, integer.valued = TRUE, allowable.length = 1, positive = TRUE)) stop("bad input for argument 'n'") else n y1 <- U <- runif(use.n) V <- runif(use.n) Z <- V * (1-V) y2 <- (2*Z*(y1*oratio^2 + 1 - y1) + oratio * (1 - 2 * Z) - (1 - 2 * V) * sqrt(oratio * (oratio + 4*Z*y1*(1-y1)*(1-oratio)^2))) / (oratio + Z*(1-oratio)^2) matrix(c(y1, 0.5 * y2), nrow = use.n, ncol = 2) } dplack <- function(x1, x2, oratio, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) ans <- log(oratio) + log1p((oratio - 1) * (x1+x2 - 2*x1*x2)) - 1.5 * log((1 + (x1+x2)*(oratio - 1))^2 - 4 * oratio * (oratio - 1)*x1*x2) ans[ # !is.na(x1) & !is.na(x2) & !is.na(oratio) & ((x1 < 0) | (x1 > 1) | (x2 < 0) | (x2 > 1))] <- log(0) if (log.arg) ans else exp(ans) } plackett.control <- function(save.weight = TRUE, ...) 
{ list(save.weight = save.weight) } plackett <- function(link = "loge", ioratio = NULL, imethod = 1, nsimEIM = 200) { link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") if (length(ioratio) && (!is.Numeric(ioratio, positive = TRUE))) stop("'ioratio' must be positive") if (!is.Numeric(imethod, allowable.length = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("argument 'imethod' must be 1 or 2") new("vglmff", blurb = c("Plackett distribution\n", "Links: ", namesof("oratio", link, earg = earg )), initialize = eval(substitute(expression({ if (any(y < 0) || any(y > 1)) stop("the response must have values in the unit square") temp5 <- w.y.check(w = w, y = y, Is.nonnegative.y = TRUE, ncol.w.max = 1, ncol.y.max = 2, ncol.y.min = 2, out.wy = TRUE, colsyperw = 2, maximize = TRUE) w <- temp5$w y <- temp5$y predictors.names <- namesof("oratio", .link , earg = .earg, short = TRUE) if (length(dimnames(y))) extra$dimnamesy2 <- dimnames(y)[[2]] if (!length(etastart)) { orinit <- if (length( .ioratio )) .ioratio else { if ( .imethod == 2) { scorp <- cor(y)[1, 2] if (abs(scorp) <= 0.1) 1 else if (abs(scorp) <= 0.3) 3^sign(scorp) else if (abs(scorp) <= 0.6) 5^sign(scorp) else if (abs(scorp) <= 0.8) 20^sign(scorp) else 40^sign(scorp) } else { y10 <- weighted.mean(y[, 1], w) y20 <- weighted.mean(y[, 2], w) (0.5 + sum(w[(y[, 1] < y10) & (y[, 2] < y20)])) * (0.5 + sum(w[(y[, 1] >= y10) & (y[, 2] >= y20)])) / ( ((0.5 + sum(w[(y[, 1] < y10) & (y[, 2] >= y20)])) * (0.5 + sum(w[(y[, 1] >= y10) & (y[, 2] < y20)])))) } } etastart <- theta2eta(rep(orinit, length.out = n), .link , earg = .earg ) } }), list( .ioratio = ioratio, .link = link, .earg = earg, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { oratio <- eta2theta(eta, .link , earg = .earg ) fv.matrix <- matrix(0.5, length(oratio), 2) if (length(extra$dimnamesy2)) dimnames(fv.matrix) <- list(dimnames(eta)[[1]], extra$dimnamesy2) fv.matrix }, list( .link = link, .earg = earg ))), last = eval(substitute(expression({ misc$link <- c(oratio = .link) misc$earg <- list(oratio = .earg) misc$expected <- FALSE misc$nsimEIM <- .nsimEIM misc$multipleResponses <- FALSE }), list( .link = link, .earg = earg, .nsimEIM = nsimEIM ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { oratio <- eta2theta(eta, .link , earg = .earg ) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else { sum(c(w) * dplack(x1 = y[, 1], x2 = y[, 2], oratio = oratio, log = TRUE)) } }, list( .link = link, .earg = earg ))), vfamily = c("plackett"), deriv = eval(substitute(expression({ oratio <- eta2theta(eta, .link , earg = .earg ) doratio.deta <- dtheta.deta(oratio, .link , earg = .earg ) y1 <- y[, 1] y2 <- y[, 2] de3 <- deriv3(~ (log(oratio) + log(1+(oratio - 1) * (y1+y2-2*y1*y2)) - 1.5 * log((1 + (y1+y2)*(oratio - 1))^2 - 4 * oratio * (oratio - 1)*y1*y2)), name = "oratio", hessian = FALSE) eval.de3 <- eval(de3) dl.doratio <- attr(eval.de3, "gradient") c(w) * dl.doratio * doratio.deta }), list( .link = link, .earg = earg ))), weight = eval(substitute(expression({ sd3 <- deriv3(~ (log(oratio) + log(1+(oratio - 1) * (y1sim+y2sim-2*y1sim*y2sim)) - 1.5 * log((1 + (y1sim+y2sim)*(oratio - 1))^2 - 4 * oratio * (oratio - 1)*y1sim*y2sim)), name = "oratio", hessian= FALSE) run.var <- 0 for(ii in 1:( .nsimEIM )) { ysim <- rplack(n, oratio=oratio) y1sim <- ysim[, 1] y2sim <- ysim[, 2] eval.sd3 <- eval(sd3) dl.doratio <- attr(eval.sd3, "gradient") rm(ysim, y1sim,
y2sim) temp3 <- dl.doratio run.var <- ((ii - 1) * run.var + temp3^2) / ii } wz <- if (intercept.only) matrix(colMeans(cbind(run.var)), n, dimm(M), byrow = TRUE) else cbind(run.var) wz <- wz * doratio.deta^2 c(w) * wz }), list( .link = link, .earg = earg, .nsimEIM = nsimEIM )))) } damh <- function(x1, x2, alpha, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) L <- max(length(x1), length(x2), length(alpha)) alpha <- rep(alpha, length.out = L) x1 <- rep(x1, length.out = L) x2 <- rep(x2, length.out = L) temp <- 1 - alpha*(1-x1)*(1-x2) if (log.arg) { ans <- log1p(-alpha+2*alpha*x1*x2/temp) - 2*log(temp) ans[(x1 <= 0) | (x1 >= 1) | (x2 <= 0) | (x2 >= 1)] <- log(0) } else { ans <- (1-alpha+2*alpha*x1*x2/temp) / (temp^2) ans[(x1 <= 0) | (x1 >= 1) | (x2 <= 0) | (x2 >= 1)] <- 0 } ans[abs(alpha) > 1] <- NA ans } pamh <- function(q1, q2, alpha) { if (!is.Numeric(q1)) stop("bad input for 'q1'") if (!is.Numeric(q2)) stop("bad input for 'q2'") if (!is.Numeric(alpha)) stop("bad input for 'alpha'") L <- max(length(q1), length(q2), length(alpha)) if (length(q1) != L) q1 <- rep(q1, length.out = L) if (length(q2) != L) q2 <- rep(q2, length.out = L) if (length(alpha) != L) alpha <- rep(alpha, length.out = L) x <- q1; y <- q2 index <- (x >= 1 & y < 1) | (y >= 1 & x < 1) | (x <= 0 | y<= 0) | (x >= 1 & y >= 1) ans <- as.numeric(index) if (any(!index)) { ans[!index] <- (q1[!index]*q2[!index]) / (1 - alpha[!index]*(1-q1[!index])*(1-q2[!index])) } ans[x >= 1 & y < 1] <- y[x >= 1 & y < 1] # P(Y2 < q2) = q2 ans[y >= 1 & x < 1] <- x[y >= 1 & x < 1] # P(Y1 < q1) = q1 ans[x <= 0 | y <= 0] <- 0 ans[x >= 1 & y >= 1] <- 1 ans[abs(alpha) > 1] <- NA ans } ramh <- function(n, alpha) { use.n <- if ((length.n <- length(n)) > 1) length.n else if (!is.Numeric(n, integer.valued = TRUE, allowable.length = 1, positive = TRUE)) stop("bad input for argument 'n'") else n if (any(abs(alpha) > 1)) stop("'alpha' values out of range") U1 <- V1 <- runif(use.n) V2 <- runif(use.n) b <- 1-V1 A <- -alpha*(2*b*V2+1)+2*alpha^2*b^2*V2+1 B <- alpha^2*(4*b^2*V2-4*b*V2+1)+alpha*(4*V2-4*b*V2-2)+1 U2 <- (2*V2*(alpha*b - 1)^2)/(A+sqrt(B)) matrix(c(U1, U2), nrow = use.n, ncol = 2) } amh.control <- function(save.weight = TRUE, ...) 
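# Quick check of the Ali-Mikhail-Haq d/p/r functions above:
#   set.seed(1)
#   yy <- ramh(5000, alpha = 0.8)
#   mean(yy[, 1] <= 0.5 & yy[, 2] <= 0.5)  # approximates pamh(0.5, 0.5, 0.8)
#   pamh(0.5, 0.5, alpha = 0.8)            # 0.25 / (1 - 0.8 * 0.25) = 0.3125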
{ list(save.weight = save.weight) } amh <- function(lalpha = "rhobit", ialpha = NULL, imethod = 1, nsimEIM = 250) { lalpha <- as.list(substitute(lalpha)) ealpha <- link2list(lalpha) lalpha <- attr(ealpha, "function.name") if (length(ialpha) && (abs(ialpha) > 1)) stop("'ialpha' should be less than or equal to 1 in absolute value") if (!is.Numeric(imethod, allowable.length = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("argument 'imethod' must be 1 or 2") if (length(nsimEIM) && (!is.Numeric(nsimEIM, allowable.length = 1, integer.valued = TRUE) || nsimEIM <= 50)) stop("'nsimEIM' should be an integer greater than 50") new("vglmff", blurb = c("Ali-Mikhail-Haq distribution\n", "Links: ", namesof("alpha", lalpha, earg = ealpha )), initialize = eval(substitute(expression({ if (any(y < 0) || any(y > 1)) stop("the response must have values in the unit square") temp5 <- w.y.check(w = w, y = y, Is.nonnegative.y = TRUE, ncol.w.max = 1, ncol.y.max = 2, ncol.y.min = 2, out.wy = TRUE, colsyperw = 2, maximize = TRUE) w <- temp5$w y <- temp5$y predictors.names <- c(namesof("alpha", .lalpha, earg = .ealpha, short = TRUE)) if (length(dimnames(y))) extra$dimnamesy2 <- dimnames(y)[[2]] if (!length(etastart)) { ainit <- if (length( .ialpha )) .ialpha else { mean1 <- if ( .imethod == 1) weighted.mean(y[, 1], w) else median(y[, 1]) mean2 <- if ( .imethod == 1) weighted.mean(y[, 2], w) else median(y[, 2]) Finit <- weighted.mean(y[, 1] <= mean1 & y[, 2] <= mean2, w) (1 - (mean1 * mean2 / Finit)) / ((1-mean1) * (1-mean2)) } ainit <- min(0.95, max(ainit, -0.95)) etastart <- theta2eta(rep(ainit, length.out = n), .lalpha, earg = .ealpha ) } }), list( .lalpha = lalpha, .ealpha = ealpha, .ialpha = ialpha, .imethod = imethod))), linkinv = eval(substitute(function(eta, extra = NULL) { alpha <- eta2theta(eta, .lalpha, earg = .ealpha ) fv.matrix <- matrix(0.5, length(alpha), 2) if (length(extra$dimnamesy2)) dimnames(fv.matrix) <- list(names(eta), extra$dimnamesy2) fv.matrix }, list( .lalpha = lalpha, .ealpha = ealpha ))), last = eval(substitute(expression({ misc$link <- c("alpha" = .lalpha ) misc$earg <- list("alpha" = .ealpha ) misc$expected <- TRUE misc$nsimEIM <- .nsimEIM misc$multipleResponses <- FALSE }), list( .lalpha = lalpha, .ealpha = ealpha, .nsimEIM = nsimEIM ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { alpha <- eta2theta(eta, .lalpha, earg = .ealpha ) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else { sum(c(w) * damh(x1 = y[, 1], x2 = y[, 2], alpha = alpha, log = TRUE)) } }, list( .lalpha = lalpha, .ealpha = ealpha ))), vfamily = c("amh"), deriv = eval(substitute(expression({ alpha <- eta2theta(eta, .lalpha, earg = .ealpha ) dalpha.deta <- dtheta.deta(alpha, .lalpha, earg = .ealpha ) y1 <- y[, 1] y2 <- y[, 2] de3 <- deriv3(~ (log(1-alpha+(2*alpha*y1*y2/(1-alpha*(1-y1)*(1-y2))))- 2*log(1-alpha*(1-y1)*(1-y2))) , name = "alpha", hessian= FALSE) eval.de3 <- eval(de3) dl.dalpha <- attr(eval.de3, "gradient") c(w) * dl.dalpha * dalpha.deta }), list( .lalpha = lalpha, .ealpha = ealpha ))), weight = eval(substitute(expression({ sd3 <- deriv3(~ (log(1-alpha+ (2*alpha*y1sim*y2sim/(1-alpha*(1-y1sim)*(1-y2sim)))) - 2*log(1-alpha*(1-y1sim)*(1-y2sim))) , name = "alpha", hessian= FALSE) run.var <- 0 for(ii in 1:( .nsimEIM )) { ysim <- ramh(n, alpha=alpha) y1sim <- ysim[, 1] y2sim <- ysim[, 2] eval.sd3 <- eval(sd3) dl.dalpha <- attr(eval.sd3, "gradient") rm(ysim, y1sim, y2sim) temp3 <- dl.dalpha run.var <- ((ii - 1) * run.var + temp3^2) / ii } wz
<- if (intercept.only) matrix(colMeans(cbind(run.var)), n, dimm(M), byrow = TRUE) else cbind(run.var) wz <- wz * dalpha.deta^2 c(w) * wz }), list( .lalpha = lalpha, .ealpha = ealpha, .nsimEIM = nsimEIM )))) } dbinorm <- function(x1, x2, mean1 = 0, mean2 = 0, sd1 = 1, sd2 = 1, rho = 0, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) temp5 <- 1 - rho^2 zedd1 <- (x1 - mean1) / sd1 zedd2 <- (x2 - mean2) / sd2 logpdf <- -log(2 * pi) - log(sd1 ) - log(sd2) - 0.5 * log1p(-rho^2) + -(0.5 / temp5) * (zedd1^2 - 2 * rho * zedd1 * zedd2 + zedd2^2) if (log.arg) logpdf else exp(logpdf) } binormal <- function(lmean1 = "identity", lmean2 = "identity", lsd1 = "loge", lsd2 = "loge", lrho = "rhobit", imean1 = NULL, imean2 = NULL, isd1 = NULL, isd2 = NULL, irho = NULL, imethod = 1, eq.mean = FALSE, eq.sd = FALSE, zero = 3:5) { lmean1 <- as.list(substitute(lmean1)) emean1 <- link2list(lmean1) lmean1 <- attr(emean1, "function.name") lmean2 <- as.list(substitute(lmean2)) emean2 <- link2list(lmean2) lmean2 <- attr(emean2, "function.name") lsd1 <- as.list(substitute(lsd1)) esd1 <- link2list(lsd1) lsd1 <- attr(esd1, "function.name") lsd2 <- as.list(substitute(lsd2)) esd2 <- link2list(lsd2) lsd2 <- attr(esd2, "function.name") lrho <- as.list(substitute(lrho)) erho <- link2list(lrho) lrho <- attr(erho, "function.name") trivial1 <- is.logical(eq.mean) && length(eq.mean) == 1 && !eq.mean trivial2 <- is.logical(eq.sd ) && length(eq.sd ) == 1 && !eq.sd if(!trivial1 && !trivial2) stop("only one of 'eq.mean' and 'eq.sd' can be assigned a value") if (!is.Numeric(imethod, allowable.length = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("argument 'imethod' must be 1 or 2") new("vglmff", blurb = c("Bivariate normal distribution\n", "Links: ", namesof("mean1", lmean1, earg = emean1 ), ", ", namesof("mean2", lmean2, earg = emean2 ), ", ", namesof("sd1", lsd1, earg = esd1 ), ", ", namesof("sd2", lsd2, earg = esd2 ), ", ", namesof("rho", lrho, earg = erho )), constraints = eval(substitute(expression({ temp8.m <- diag(5)[, -2] temp8.m[2, 1] <- 1 temp8.s <- diag(5)[, -4] temp8.s[4, 3] <- 1 constraints <- cm.vgam(temp8.m, x, .eq.mean, constraints, apply.int = TRUE) constraints <- cm.vgam(temp8.s, x, .eq.sd, constraints, apply.int = TRUE) constraints <- cm.zero.vgam(constraints, x, .zero, M) }), list( .zero = zero, .eq.sd = eq.sd, .eq.mean = eq.mean ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 2, ncol.y.min = 2, out.wy = TRUE, colsyperw = 2, maximize = TRUE) w <- temp5$w y <- temp5$y predictors.names <- c( namesof("mean1", .lmean1, earg = .emean1, short = TRUE), namesof("mean2", .lmean2, earg = .emean2, short = TRUE), namesof("sd1", .lsd1, earg = .esd1, short = TRUE), namesof("sd2", .lsd2, earg = .esd2, short = TRUE), namesof("rho", .lrho, earg = .erho, short = TRUE)) if (length(dimnames(y))) extra$dimnamesy2 <- dimnames(y)[[2]] if (!length(etastart)) { imean1 <- rep(if (length( .imean1 )) .imean1 else weighted.mean(y[, 1], w = w), length.out = n) imean2 <- rep(if (length( .imean2 )) .imean2 else weighted.mean(y[, 2], w = w), length.out = n) isd1 <- rep(if (length( .isd1 )) .isd1 else sd(y[, 1]), length.out = n) isd2 <- rep(if (length( .isd2 )) .isd2 else sd(y[, 2]), length.out = n) irho <- rep(if (length( .irho )) .irho else cor(y[, 1], y[, 2]), length.out = n) if ( .imethod == 2) { imean1 <- abs(imean1) + 0.01 imean2 <- abs(imean2) + 0.01 } etastart <- cbind(theta2eta(imean1, .lmean1, 
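      ## Starting values for binormal: method-of-moments estimates
      ## (weighted means, marginal SDs, and the sample correlation),
      ## each mapped to the predictor scale by theta2eta(); with
      ## imethod = 2 the means are first shifted to be positive.
      ## Outside the family function the same quantities would be,
      ## e.g. (a sketch; 'y' is a two-column response, 'w' weights):
      ##   imean1 <- weighted.mean(y[, 1], w);  isd1 <- sd(y[, 1])
      ##   irho   <- cor(y[, 1], y[, 2])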
earg = .emean1), theta2eta(imean2, .lmean2, earg = .emean2), theta2eta(isd1, .lsd1, earg = .esd1), theta2eta(isd2, .lsd2, earg = .esd2), theta2eta(irho, .lrho, earg = .erho)) } }), list( .lmean1 = lmean1, .lmean2 = lmean2, .emean1 = emean1, .emean2 = emean2, .lsd1 = lsd1 , .lsd2 = lsd2 , .lrho = lrho, .esd1 = esd1 , .esd2 = esd2 , .erho = erho, .imethod = imethod, .imean1 = imean1, .imean2 = imean2, .isd1 = isd1, .isd2 = isd2, .irho = irho ))), linkinv = eval(substitute(function(eta, extra = NULL) { mean1 <- eta2theta(eta[, 1], .lmean1, earg = .emean1) mean2 <- eta2theta(eta[, 2], .lmean2, earg = .emean2) fv.matrix <- cbind(mean1, mean2) if (length(extra$dimnamesy2)) dimnames(fv.matrix) <- list(names(eta), extra$dimnamesy2) fv.matrix } , list( .lmean1 = lmean1, .lmean2 = lmean2, .emean1 = emean1, .emean2 = emean2, .lsd1 = lsd1 , .lsd2 = lsd2 , .lrho = lrho, .esd1 = esd1 , .esd2 = esd2 , .erho = erho ))), last = eval(substitute(expression({ misc$link <- c("mean1" = .lmean1, "mean2" = .lmean2, "sd1" = .lsd1, "sd2" = .lsd2, "rho" = .lrho) misc$earg <- list("mean1" = .emean1, "mean2" = .emean2, "sd1" = .esd1, "sd2" = .esd2, "rho" = .erho) misc$expected <- TRUE misc$multipleResponses <- FALSE }) , list( .lmean1 = lmean1, .lmean2 = lmean2, .emean1 = emean1, .emean2 = emean2, .lsd1 = lsd1 , .lsd2 = lsd2 , .lrho = lrho, .esd1 = esd1 , .esd2 = esd2 , .erho = erho ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { mean1 <- eta2theta(eta[, 1], .lmean1, earg = .emean1) mean2 <- eta2theta(eta[, 2], .lmean2, earg = .emean2) sd1 <- eta2theta(eta[, 3], .lsd1 , earg = .esd1 ) sd2 <- eta2theta(eta[, 4], .lsd2 , earg = .esd2 ) Rho <- eta2theta(eta[, 5], .lrho , earg = .erho ) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else { sum(c(w) * dbinorm(x1 = y[, 1], x2 = y[, 2], mean1 = mean1, mean2 = mean2, sd1 = sd1, sd2 = sd2, rho = Rho, log = TRUE)) } } , list( .lmean1 = lmean1, .lmean2 = lmean2, .emean1 = emean1, .emean2 = emean2, .lsd1 = lsd1 , .lsd2 = lsd2 , .lrho = lrho, .esd1 = esd1 , .esd2 = esd2 , .erho = erho, .imethod = imethod ))), vfamily = c("binormal"), deriv = eval(substitute(expression({ mean1 <- eta2theta(eta[, 1], .lmean1, earg = .emean1) mean2 <- eta2theta(eta[, 2], .lmean2, earg = .emean2) sd1 <- eta2theta(eta[, 3], .lsd1 , earg = .esd1 ) sd2 <- eta2theta(eta[, 4], .lsd2 , earg = .esd2 ) Rho <- eta2theta(eta[, 5], .lrho , earg = .erho ) zedd1 <- (y[, 1] - mean1) / sd1 zedd2 <- (y[, 2] - mean2) / sd2 temp5 <- 1 - Rho^2 SigmaInv <- matrix(0, n, dimm(2)) SigmaInv[, iam(1, 1, M = 2)] <- 1 / ((sd1^2) * temp5) SigmaInv[, iam(2, 2, M = 2)] <- 1 / ((sd2^2) * temp5) SigmaInv[, iam(1, 2, M = 2)] <- -Rho / (sd1 * sd2 * temp5) dl.dmeans <- mux22(t(SigmaInv), y - cbind(mean1, mean2), M = 2, as.matrix = TRUE) dl.dsd1 <- -1 / sd1 + zedd1 * (zedd1 - Rho * zedd2) / (sd1 * temp5) dl.dsd2 <- -1 / sd2 + zedd2 * (zedd2 - Rho * zedd1) / (sd2 * temp5) dl.drho <- -Rho * (zedd1^2 - 2 * Rho * zedd1 * zedd2 + zedd2^2) / temp5^2 + zedd1 * zedd2 / temp5 + Rho / temp5 dmean1.deta <- dtheta.deta(mean1, .lmean1) dmean2.deta <- dtheta.deta(mean2, .lmean2) dsd1.deta <- dtheta.deta(sd1 , .lsd1 ) dsd2.deta <- dtheta.deta(sd2 , .lsd2 ) drho.deta <- dtheta.deta(Rho , .lrho ) dthetas.detas <- cbind(dmean1.deta, dmean2.deta, dsd1.deta, dsd2.deta, drho.deta) c(w) * cbind(dl.dmeans[, 1], dl.dmeans[, 2], dl.dsd1, dl.dsd2, dl.drho) * dthetas.detas }), list( .lmean1 = lmean1, .lmean2 = lmean2, .emean1 = emean1, .emean2 = emean2, .lsd1 = lsd1 , .lsd2 = lsd2 , .lrho = 
lrho, .esd1 = esd1 , .esd2 = esd2 , .erho = erho,
            .imethod = imethod ))),
  weight = eval(substitute(expression({
    wz <- matrix(0.0, n, dimm(M))
    wz[, iam(1, 1, M)] <- SigmaInv[, iam(1, 1, M = 2)]
    wz[, iam(2, 2, M)] <- SigmaInv[, iam(2, 2, M = 2)]
    wz[, iam(1, 2, M)] <- SigmaInv[, iam(1, 2, M = 2)]
    wz[, iam(3, 3, M)] <- (1 + 1 / temp5) / sd1^2
    wz[, iam(4, 4, M)] <- (1 + 1 / temp5) / sd2^2
    wz[, iam(3, 4, M)] <- -(Rho^2) / (temp5 * sd1 * sd2)
    wz[, iam(5, 5, M)] <- 2 * (1 + 2 * Rho^2) / temp5^2 -
                          (1 + Rho^2) / temp5^2
    wz[, iam(3, 5, M)] <- -Rho / (sd1 * temp5)
    wz[, iam(4, 5, M)] <- -Rho / (sd2 * temp5)
    for (ilocal in 1:M)
      for (jlocal in ilocal:M)
        wz[, iam(ilocal, jlocal, M)] <- wz[, iam(ilocal, jlocal, M)] *
                                        dthetas.detas[, ilocal] *
                                        dthetas.detas[, jlocal]
    c(w) * wz
  }), list( .lmean1 = lmean1, .lmean2 = lmean2,
            .emean1 = emean1, .emean2 = emean2,
            .lsd1 = lsd1 , .lsd2 = lsd2 , .lrho = lrho,
            .esd1 = esd1 , .esd2 = esd2 , .erho = erho,
            .imethod = imethod ))))
}


gumbelI <- function(la = "identity", earg = list(), ia = NULL, imethod = 1) {

  la <- as.list(substitute(la))
  earg <- link2list(la)
  la <- attr(earg, "function.name")

  if (length(ia) && !is.Numeric(ia, allowable.length = 1))
    stop("'ia' must be a single number")
  if (!is.Numeric(imethod, allowable.length = 1,
                  integer.valued = TRUE, positive = TRUE) ||
      imethod > 2)
    stop("argument 'imethod' must be 1 or 2")


  new("vglmff",
  blurb = c("Gumbel's Type I Bivariate Distribution\n",
            "Links:    ",
            namesof("a", la, earg = earg )),
  initialize = eval(substitute(expression({
    if (!is.matrix(y) || ncol(y) != 2)
      stop("the response must be a 2 column matrix")
    if (any(y < 0))
      stop("the response must have non-negative values only")

    predictors.names <- c(namesof("a", .la, earg = .earg , short = TRUE))

    if (!length(etastart)) {
      ainit <- if (length( .ia )) rep( .ia, len = n) else {
        mean1 <- if ( .imethod == 1) median(y[, 1]) else mean(y[, 1])
        mean2 <- if ( .imethod == 1) median(y[, 2]) else mean(y[, 2])
        Finit <- 0.01 + mean(y[, 1] <= mean1 & y[, 2] <= mean2)
        (log(Finit + expm1(-mean1) + exp(-mean2)) + mean1 + mean2) /
        (mean1 * mean2)
      }
      etastart <- theta2eta(rep(ainit, len = n), .la, earg = .earg )
    }
  }), list( .ia = ia, .la = la, .earg = earg, .imethod = imethod ))),
  linkinv = eval(substitute(function(eta, extra = NULL) {
    alpha <- eta2theta(eta, .la, earg = .earg )
    cbind(rep(1, len = length(alpha)),
          rep(1, len = length(alpha)))
  }, list( .la = la ))),
  last = eval(substitute(expression({
    misc$link <- c("a" = .la )
    misc$earg <- list("a" = .earg )
    misc$expected <- FALSE
    misc$pooled.weight <- pooled.weight
  }), list( .la = la, .earg = earg ))),
  loglikelihood = eval(substitute(
    function(mu, y, w, residuals = FALSE, eta, extra = NULL) {
    alpha <- eta2theta(eta, .la, earg = .earg )
    if (residuals) stop("loglikelihood residuals not ",
                        "implemented yet") else {
      denom <- (alpha * y[, 1] - 1) * (alpha * y[, 2] - 1) + alpha
      mytolerance <- .Machine$double.xmin
      bad <- (denom <= mytolerance)  # Range violation
      if (any(bad)) {
        cat("There are some range violations in @loglikelihood\n")
        flush.console()
        denom[bad] <- 2 * mytolerance
      }
      sum(w * (-y[, 1] - y[, 2] + alpha * y[, 1] * y[, 2] + log(denom)))
    }
  }, list( .la = la, .earg = earg ))),
  vfamily = c("gumbelI"),
  deriv = eval(substitute(expression({
    alpha <- eta2theta(eta, .la, earg = .earg )
    numerator <- (alpha * y[, 1] - 1) * y[, 2] +
                 (alpha * y[, 2] - 1) * y[, 1] + 1
    denom <- (alpha * y[, 1] - 1) * (alpha * y[, 2] - 1) + alpha
    denom <- abs(denom)
    dl.dalpha <- numerator / denom + y[, 1] * y[, 2]
    dalpha.deta <- dtheta.deta(alpha, .la, earg = .earg )
    c(w) * cbind(dl.dalpha * dalpha.deta)
  }), list( .la = la, .earg = earg
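    ## The score above follows from Gumbel's Type I bivariate
    ## exponential, whose cdf is
    ##   F(y1, y2) = 1 - exp(-y1) - exp(-y2) + exp(-y1 - y2 + a*y1*y2),
    ## so the log-density is
    ##   l = -y1 - y2 + a*y1*y2 + log((a*y1 - 1)*(a*y2 - 1) + a)
    ## and dl/da = numerator/denom + y1*y2, as computed in @deriv.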
))), weight=eval(substitute(expression({ d2l.dalpha2 <- (numerator/denom)^2 - 2*y[,1]*y[,2] / denom d2alpha.deta2 <- d2theta.deta2(alpha, .la, earg = .earg ) wz <- w * (dalpha.deta^2 * d2l.dalpha2 - d2alpha.deta2 * dl.dalpha) if (TRUE && intercept.only) { wz <- cbind(wz) sumw <- sum(w) for(iii in 1:ncol(wz)) wz[,iii] <- sum(wz[,iii]) / sumw pooled.weight <- TRUE wz <- c(w) * wz # Put back the weights } else pooled.weight <- FALSE wz }), list( .la = la, .earg = earg )))) } VGAM/R/family.binomial.R0000644000176000001440000031245612136651110014441 0ustar ripleyusers# These functions are # Copyright (C) 1998-2013 T.W. Yee, University of Auckland. # All rights reserved. process.binomial2.data.vgam <- expression({ if (!all(w == 1)) extra$orig.w <- w if (!is.matrix(y)) { yf <- as.factor(y) lev <- levels(yf) llev <- length(lev) if (llev != 4) stop("response must have 4 levels") nn <- length(yf) y <- matrix(0, nn, llev) y[cbind(1:nn, as.vector(unclass(yf)))] <- 1 colnamesy <- paste(lev, ":", c("00", "01", "10", "11"), sep = "") dimnames(y) <- list(names(yf), colnamesy) input.type <- 1 } else if (ncol(y) == 2) { if (!all(y == 0 | y == 1)) stop("response must contains 0's and 1's only") col.index <- y[, 2] + 2*y[, 1] + 1 # 1:4 nn <- nrow(y) y <- matrix(0, nn, 4) y[cbind(1:nn, col.index)] <- 1 dimnames(y) <- list(dimnames(y)[[1]], c("00", "01", "10", "11")) input.type <- 2 } else if (ncol(y) == 4) { input.type <- 3 } else stop("response unrecognized") nvec <- rowSums(y) w <- w * nvec y <- y / nvec # Convert to proportions if (length(mustart) + length(etastart) == 0) { mu <- y + (1 / ncol(y) - y) / nvec dimnames(mu) <- dimnames(y) mustart <- mu } }) betabinomial.control <- function(save.weight = TRUE, ...) { list(save.weight = save.weight) } betabinomial <- function(lmu = "logit", lrho = "logit", irho = NULL, imethod = 1, shrinkage.init = 0.95, nsimEIM = NULL, zero = 2) { lmu <- as.list(substitute(lmu)) emu <- link2list(lmu) lmu <- attr(emu, "function.name") lrho <- as.list(substitute(lrho)) erho <- link2list(lrho) lrho <- attr(erho, "function.name") if (!is.Numeric(imethod, allowable.length = 1, integer.valued = TRUE, positive = TRUE) || imethod > 4) stop("argument 'imethod' must be 1, 2, 3 or 4") if (!is.Numeric(shrinkage.init, allowable.length = 1) || shrinkage.init < 0 || shrinkage.init > 1) stop("bad input for argument 'shrinkage.init'") if (!is.null(nsimEIM)) { if (!is.Numeric(nsimEIM, allowable.length = 1, integer.valued = TRUE)) stop("bad input for argument 'nsimEIM'") if (nsimEIM <= 10) warning("'nsimEIM' should be an integer greater than 10, say") } new("vglmff", blurb = c("Beta-binomial model\n", "Links: ", namesof("mu", lmu, earg = emu), ", ", namesof("rho", lrho, earg = erho), "\n", "Mean: mu", "\n", "Variance: mu*(1-mu)*(1+(w-1)*rho)/w"), constraints = eval(substitute(expression({ constraints <- cm.zero.vgam(constraints, x, .zero , M) }), list( .zero = zero ))), initialize = eval(substitute(expression({ if (!all(w == 1)) extra$orig.w <- w if (is.null( .nsimEIM)) { save.weight <- control$save.weight <- FALSE } mustart.orig <- mustart eval(binomialff()@initialize) # Note: n,w,y,mustart is changed if (length(mustart.orig)) mustart <- mustart.orig # Retain it if inputted ycounts <- if (is.numeric(extra$orig.w)) y * w / extra$orig.w else y * w # Convert proportions to counts if (max(abs(ycounts - round(ycounts))) > 1.0e-6) warning("the response (as counts) does not appear to ", "be integer-valued. 
Am rounding to integer values.") ycounts <- round(ycounts) # Make sure it is an integer predictors.names <- c(namesof("mu", .lmu , earg = .emu , tag = FALSE), namesof("rho", .lrho , earg = .erho , tag = FALSE)) if (!length(etastart)) { betabinomial.Loglikfun <- function(rhoval, y, x, w, extraargs) { shape1 <- extraargs$mustart * (1-rhoval) / rhoval shape2 <- (1-extraargs$mustart) * (1-rhoval) / rhoval ycounts <- extraargs$ycounts # Ought to be integer-valued nvec <- extraargs$nvec sum(dbetabinom.ab(x = ycounts, size = nvec, shape1 = shape1, shape2 = shape2, log = TRUE)) } rho.grid <- seq(0.05, 0.95, len=21) # rvar = mustart.use = if (length(mustart.orig)) { mustart.orig } else if ( .imethod == 1) { rep(weighted.mean(y, w), len = n) } else if ( .imethod == 2) { .sinit * weighted.mean(y, w) + (1 - .sinit) * y } else if ( .imethod == 3) { y.matrix <- cbind(y) mat.temp <- matrix(colMeans(y.matrix), nrow(y.matrix), ncol(y.matrix), byrow = TRUE) 0.5 * mustart + 0.5 * mat.temp } else { mustart } try.this <- getMaxMin(rho.grid, objfun=betabinomial.Loglikfun, y = y, x = x, w = w, extraargs = list( ycounts=ycounts, nvec = if (is.numeric(extra$orig.w)) round(w / extra$orig.w) else round(w), mustart = mustart.use)) init.rho <- if (is.Numeric( .irho )) rep( .irho , length = n) else rep(try.this, length = n) etastart <- cbind(theta2eta(mustart.use, .lmu , earg = .emu), theta2eta(init.rho, .lrho , earg = .erho)) mustart <- NULL # Since etastart has been computed. } }), list( .lmu = lmu, .lrho = lrho, .emu = emu, .erho = erho, .imethod = imethod, .sinit = shrinkage.init, .nsimEIM = nsimEIM, .irho = irho ))), linkinv = eval(substitute(function(eta, extra = NULL) eta2theta(eta[, 1], .lmu, earg = .emu), list( .lmu = lmu, .emu = emu ))), last = eval(substitute(expression({ misc$link <- c(mu = .lmu, rho = .lrho) misc$earg <- list(mu = .emu, rho = .erho) misc$zero <- .zero misc$expected <- TRUE misc$nsimEIM <- .nsimEIM misc$rho <- 1 / (shape1 + shape2 + 1) }), list( .lmu = lmu, .lrho = lrho, .emu = emu, .erho = erho, .nsimEIM = nsimEIM, .zero = zero ))), loglikelihood = eval(substitute( function(mu,y,w,residuals = FALSE, eta, extra = NULL) { ycounts <- if (is.numeric(extra$orig.w)) y * w / extra$orig.w else y * w # Convert proportions to counts mymu <- eta2theta(eta[, 1], .lmu, earg = .emu) rho <- eta2theta(eta[, 2], .lrho , earg = .erho) smallno <- 1.0e4 * .Machine$double.eps if (max(abs(ycounts - round(ycounts))) > smallno) warning("converting 'ycounts' to integer in @loglikelihood") ycounts <- round(ycounts) rho <- pmax(rho, smallno) rho <- pmin(rho, 1 - smallno) shape1 <- mymu * (1 - rho) / rho shape2 <- (1 - mymu) * (1 - rho) / rho nvec <- if (is.numeric(extra$orig.w)) round(w / extra$orig.w) else round(w) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { sum((if (is.numeric(extra$orig.w)) extra$orig.w else 1) * dbetabinom.ab(x = ycounts, size = nvec, shape1 = shape1, shape2 = shape2, log = TRUE )) } }, list( .lmu = lmu, .lrho = lrho, .emu = emu, .erho = erho ))), vfamily = c("betabinomial"), deriv = eval(substitute(expression({ nvec <- if (is.numeric(extra$orig.w)) round(w / extra$orig.w) else round(w) ycounts <- if (is.numeric(extra$orig.w)) y * w / extra$orig.w else y * w # Convert proportions to counts ycounts <- round(ycounts) mymu <- eta2theta(eta[, 1], .lmu, earg = .emu) rho <- eta2theta(eta[, 2], .lrho , earg = .erho) smallno <- 100 * .Machine$double.eps rho <- pmax(rho, smallno) rho <- pmin(rho, 1-smallno) shape1 <- mymu * (1 - rho) / rho shape2 <- (1 - mymu) * (1 - 
rho) / rho dshape1.dmu <- (1 - rho) / rho dshape2.dmu <- -(1 - rho) / rho dshape1.drho <- -mymu / rho^2 dshape2.drho <- -(1 - mymu) / rho^2 dmu.deta <- dtheta.deta(mymu, .lmu , earg = .emu) drho.deta <- dtheta.deta(rho, .lrho , earg = .erho) dl.dmu <- dshape1.dmu * (digamma(shape1+ycounts) - digamma(shape2+nvec-ycounts) - digamma(shape1) + digamma(shape2)) dl.drho <- (-1/rho^2) * (mymu * digamma(shape1 + ycounts) + (1 - mymu) * digamma(shape2 + nvec - ycounts) - digamma(shape1 + shape2 + nvec) - mymu * digamma(shape1) - (1 - mymu)*digamma(shape2) + digamma(shape1+shape2)) c(if (is.numeric(extra$orig.w)) extra$orig.w else 1) * cbind(dl.dmu * dmu.deta, dl.drho * drho.deta) }), list( .lmu = lmu, .lrho = lrho, .emu = emu, .erho = erho ))), weight = eval(substitute(expression({ if (is.null( .nsimEIM )) { wz <- matrix(as.numeric(NA), n, dimm(M)) #3=dimm(2) wz11 <- -(expected.betabin.ab(nvec, shape1, shape2, TRUE) - trigamma(shape1+shape2+nvec) - trigamma(shape1) + trigamma(shape1+shape2)) wz22 <- -(expected.betabin.ab(nvec, shape1, shape2, FALSE) - trigamma(shape1+shape2+nvec) - trigamma(shape2) + trigamma(shape1+shape2)) wz21 <- -(trigamma(shape1+shape2) - trigamma(shape1+shape2+nvec)) wz[, iam(1, 1, M)] <- dmu.deta^2 * (wz11 * dshape1.dmu^2 + wz22 * dshape2.dmu^2 + 2 * wz21 * dshape1.dmu * dshape2.dmu) wz[, iam(2, 2, M)] <- drho.deta^2 * (wz11 * dshape1.drho^2 + wz22 * dshape2.drho^2 + 2 * wz21 * dshape1.drho * dshape2.drho) wz[, iam(2, 1, M)] <- dmu.deta * drho.deta * (dshape1.dmu*(wz11*dshape1.drho + wz21*dshape2.drho) + dshape2.dmu*(wz21*dshape1.drho + wz22*dshape2.drho)) wz * (if (is.numeric(extra$orig.w)) extra$orig.w else 1) } else { run.varcov <- 0 ind1 <- iam(NA, NA, M = M, both = TRUE, diag = TRUE) dthetas.detas <- cbind(dmu.deta, drho.deta) for (ii in 1:( .nsimEIM )) { ysim <- rbetabinom.ab(n = n, size = nvec, shape1 = shape1, shape2 = shape2) dl.dmu <- dshape1.dmu * (digamma(shape1+ysim) - digamma(shape2+nvec-ysim) - digamma(shape1) + digamma(shape2)) dl.drho <- (-1/rho^2) * (mymu * digamma(shape1+ysim) + (1-mymu) * digamma(shape2+nvec-ysim) - digamma(shape1+shape2+nvec) - mymu * digamma(shape1) - (1-mymu)*digamma(shape2) + digamma(shape1+shape2)) temp3 <- cbind(dl.dmu, dl.drho) # n x M matrix run.varcov <- run.varcov + temp3[, ind1$row.index] * temp3[, ind1$col.index] } run.varcov <- run.varcov / .nsimEIM wz <- if (intercept.only) matrix(colMeans(run.varcov), n, ncol(run.varcov), byrow = TRUE) else run.varcov wz <- wz * dthetas.detas[, ind1$row] * dthetas.detas[, ind1$col] wz * (if (is.numeric(extra$orig.w)) extra$orig.w else 1) } }), list( .lmu = lmu, .lrho = lrho, .emu = emu, .erho = erho, .nsimEIM = nsimEIM )))) } dbinom2.or <- function(mu1, mu2 = if (exchangeable) mu1 else stop("'mu2' not specified"), oratio = 1, exchangeable = FALSE, tol = 0.001, colnames = c("00", "01", "10", "11"), ErrorCheck = TRUE) { if (ErrorCheck) { if (!is.Numeric(mu1, positive = TRUE) || max(mu1) >= 1) stop("bad input for argument 'mu1'") if (!is.Numeric(mu2, positive = TRUE) || max(mu2) >= 1) stop("bad input for argument 'mu2'") if (!is.Numeric(oratio, positive = TRUE)) stop("bad input for argument 'oratio'") if (!is.Numeric(tol, positive = TRUE, allowable.length = 1) || tol > 0.1) stop("bad input for argument 'tol'") if (exchangeable && max(abs(mu1 - mu2)) > 0.00001) stop("argument 'exchangeable' is TRUE but 'mu1' and 'mu2' differ") } n <- max(length(mu1), length(mu2), length(oratio)) oratio <- rep(oratio, len = n) mu1 <- rep(mu1, len = n) mu2 <- rep(mu2, len = n) a.temp <- 1 + (mu1+mu2)*(oratio-1) 
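  ## p11 solves the quadratic implied by the odds ratio
  ## psi = p11*p00 / (p01*p10): with a.temp = 1 + (mu1+mu2)*(psi-1)
  ## and b.temp = -4*psi*(psi-1)*mu1*mu2 below,
  ##   p11 = (a.temp - sqrt(a.temp^2 + b.temp)) / (2*(psi-1)),
  ## which tends to the independence value mu1*mu2 as psi -> 1
  ## (hence the ifelse() guard). A quick check (not run):
  ##   dbinom2.or(mu1 = 0.4, mu2 = 0.3, oratio = 1)  # "11" cell is 0.12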
b.temp <- -4 * oratio * (oratio-1) * mu1 * mu2 temp <- sqrt(a.temp^2 + b.temp) p11 <- ifelse(abs(oratio-1) < tol, mu1*mu2, (a.temp-temp)/(2*(oratio-1))) p01 <- mu2 - p11 p10 <- mu1 - p11 p00 <- 1 - p11 - p01 - p10 matrix(c(p00, p01, p10, p11), n, 4, dimnames = list(NULL, colnames)) } rbinom2.or <- function(n, mu1, mu2 = if (exchangeable) mu1 else stop("argument 'mu2' not specified"), oratio = 1, exchangeable = FALSE, tol = 0.001, twoCols = TRUE, colnames = if (twoCols) c("y1", "y2") else c("00", "01", "10", "11"), ErrorCheck = TRUE) { if (ErrorCheck) { if (!is.Numeric(n, integer.valued = TRUE, positive = TRUE, allowable.length = 1)) stop("bad input for argument 'n'") if (!is.Numeric(mu1, positive = TRUE) || max(mu1) >= 1) stop("bad input for argument 'mu1'") if (!is.Numeric(mu2, positive = TRUE) || max(mu2) >= 1) stop("bad input for argument 'mu2'") if (!is.Numeric(oratio, positive = TRUE)) stop("bad input for argument 'oratio'") if (!is.Numeric(tol, positive = TRUE, allowable.length = 1) || tol > 0.1) stop("bad input for argument 'tol'") if (exchangeable && max(abs(mu1 - mu2)) > 0.00001) stop("argument 'exchangeable' is TRUE but 'mu1' and 'mu2' differ") } dmat <- dbinom2.or(mu1 = mu1, mu2 = mu2, oratio = oratio, exchangeable = exchangeable, tol = tol, ErrorCheck = ErrorCheck) answer <- matrix(0, n, 2, dimnames = list(NULL, if (twoCols) colnames else NULL)) yy <- runif(n) cs1 <- dmat[, "00"] + dmat[, "01"] cs2 <- cs1 + dmat[, "10"] index <- (dmat[, "00"] < yy) & (yy <= cs1) answer[index, 2] <- 1 index <- (cs1 < yy) & (yy <= cs2) answer[index, 1] <- 1 index <- (yy > cs2) answer[index,] <- 1 if (twoCols) answer else { answer4 <- matrix(0, n, 4, dimnames = list(NULL, colnames)) answer4[cbind(1:n, 1 + 2*answer[, 1] + answer[, 2])] <- 1 answer4 } } binom2.or <- function(lmu = "logit", lmu1 = lmu, lmu2 = lmu, loratio = "loge", imu1 = NULL, imu2 = NULL, ioratio = NULL, zero = 3, exchangeable = FALSE, tol = 0.001, morerobust = FALSE) { lmu1 <- lmu1 lmu2 <- lmu2 lmu1 <- as.list(substitute(lmu1)) emu1 <- link2list(lmu1) lmu1 <- attr(emu1, "function.name") lmu2 <- as.list(substitute(lmu2)) emu2 <- link2list(lmu2) lmu2 <- attr(emu2, "function.name") loratio <- as.list(substitute(loratio)) eoratio <- link2list(loratio) loratio <- attr(eoratio, "function.name") if (is.logical(exchangeable) && exchangeable && ((lmu1 != lmu2) || !all.equal(emu1, emu2))) stop("exchangeable = TRUE but marginal links are not equal") if (!is.Numeric(tol, positive = TRUE, allowable.length = 1) || tol > 0.1) stop("bad input for argument 'tol'") new("vglmff", blurb = c("Bivariate binomial regression with an odds ratio\n", "Links: ", namesof("mu1", lmu1, earg = emu1), ", ", namesof("mu2", lmu2, earg = emu2), "; ", namesof("oratio", loratio, earg = eoratio)), constraints = eval(substitute(expression({ constraints <- cm.vgam(matrix(c(1, 1,0,0,0, 1), 3, 2), x, .exchangeable , constraints, apply.int = TRUE) constraints = cm.zero.vgam(constraints, x, .zero , M) }), list( .exchangeable = exchangeable, .zero = zero ))), deviance = Deviance.categorical.data.vgam, initialize = eval(substitute(expression({ mustart.orig <- mustart eval(process.binomial2.data.vgam) if (length(mustart.orig)) mustart <- mustart.orig # Retain it if inputted predictors.names <- c(namesof("mu1", .lmu1, earg = .emu1, short = TRUE), namesof("mu2", .lmu2, earg = .emu2, short = TRUE), namesof("oratio", .loratio, earg = .eoratio, short = TRUE)) if (!length(etastart)) { pmargin <- cbind(mustart[, 3] + mustart[, 4], mustart[, 2] + mustart[, 4]) ioratio <- if (length( 
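      ## Initial odds ratio: unless 'ioratio' is supplied, it is the
      ## empirical value p11*p00 / (p01*p10) from the mustart cell
      ## proportions, whose columns are ("00", "01", "10", "11").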
.ioratio)) rep( .ioratio , len = n) else mustart[, 4] * mustart[, 1] / (mustart[, 2] * mustart[, 3]) if (length( .imu1 )) pmargin[, 1] <- .imu1 if (length( .imu2 )) pmargin[, 2] <- .imu2 etastart <- cbind(theta2eta(pmargin[, 1], .lmu1, earg = .emu1), theta2eta(pmargin[, 2], .lmu2, earg = .emu2), theta2eta(ioratio, .loratio, earg = .eoratio)) } }), list( .lmu1 = lmu1, .lmu2 = lmu2, .loratio = loratio, .emu1 = emu1, .emu2 = emu2, .eoratio = eoratio, .imu1 = imu1, .imu2 = imu2, .ioratio = ioratio ))), linkinv = eval(substitute(function(eta, extra = NULL) { pmargin <- cbind(eta2theta(eta[, 1], .lmu1, earg = .emu1), eta2theta(eta[, 2], .lmu2, earg = .emu2)) oratio <- eta2theta(eta[, 3], .loratio, earg = .eoratio) a.temp <- 1 + (pmargin[, 1]+pmargin[, 2])*(oratio-1) b.temp <- -4 * oratio * (oratio-1) * pmargin[, 1] * pmargin[, 2] temp <- sqrt(a.temp^2 + b.temp) pj4 <- ifelse(abs(oratio-1) < .tol, pmargin[, 1]*pmargin[, 2], (a.temp-temp)/(2*(oratio-1))) pj2 <- pmargin[, 2] - pj4 pj3 <- pmargin[, 1] - pj4 cbind("00" = 1-pj4-pj2-pj3, "01" = pj2, "10" = pj3, "11" = pj4) }, list( .lmu1 = lmu1, .lmu2 = lmu2, .loratio = loratio, .emu1 = emu1, .emu2 = emu2, .eoratio = eoratio, .tol = tol ))), last = eval(substitute(expression({ misc$link <- c(mu1 = .lmu1 , mu2 = .lmu2 , oratio = .loratio ) misc$earg <- list(mu1 = .emu1 , mu2 = .emu2 , oratio = .eoratio ) misc$tol <- .tol misc$expected <- TRUE }), list( .lmu1 = lmu1, .lmu2 = lmu2, .loratio = loratio, .emu1 = emu1, .emu2 = emu2, .eoratio = eoratio, .tol = tol ))), linkfun = eval(substitute(function(mu, extra = NULL) { pmargin <- cbind(mu[, 3]+mu[, 4], mu[, 2]+mu[, 4]) oratio <- mu[, 4]*mu[, 1] / (mu[, 2]*mu[, 3]) cbind(theta2eta(pmargin[, 1], .lmu1 , earg = .emu1), theta2eta(pmargin[, 2], .lmu2 , earg = .emu2), theta2eta(oratio, .loratio, earg = .eoratio)) }, list( .lmu1 = lmu1, .lmu2 = lmu2, .loratio = loratio, .emu1 = emu1, .emu2 = emu2, .eoratio = eoratio ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { if (residuals) stop("loglikelihood residuals not implemented yet") else { if ( .morerobust) { vsmallno <- 1.0e4 * .Machine$double.xmin mu[mu < vsmallno] <- vsmallno } ycounts <- if (is.numeric(extra$orig.w)) y * w / extra$orig.w else y * w # Convert proportions to counts nvec <- if (is.numeric(extra$orig.w)) round(w / extra$orig.w) else round(w) smallno <- 1.0e4 * .Machine$double.eps if (max(abs(ycounts - round(ycounts))) > smallno) warning("converting 'ycounts' to integer in @loglikelihood") ycounts <- round(ycounts) sum((if (is.numeric(extra$orig.w)) extra$orig.w else 1) * dmultinomial(x = ycounts, size = nvec, prob = mu, log = TRUE, dochecking = FALSE)) } }, list( .morerobust = morerobust ))), vfamily = c("binom2.or", "binom2"), deriv = eval(substitute(expression({ smallno <- 1.0e4 * .Machine$double.eps mu.use <- mu mu.use[mu.use < smallno] <- smallno mu.use[mu.use > 1 - smallno] <- 1 - smallno pmargin <- cbind(mu.use[, 3] + mu.use[, 4], mu.use[, 2] + mu.use[, 4]) pmargin[, 1] <- pmax( smallno, pmargin[, 1]) pmargin[, 1] <- pmin(1 - smallno, pmargin[, 1]) pmargin[, 2] <- pmax( smallno, pmargin[, 2]) pmargin[, 2] <- pmin(1 - smallno, pmargin[, 2]) oratio <- mu.use[, 4]*mu.use[, 1] / (mu.use[, 2]*mu.use[, 3]) use.oratio <- pmax(smallno, oratio) a.temp <- 1 + (pmargin[, 1]+pmargin[, 2])*(oratio-1) b.temp <- -4 * oratio * (oratio-1) * pmargin[, 1] * pmargin[, 2] temp9 <- sqrt(a.temp^2 + b.temp) coeff12 <- -0.5 + (2*oratio*pmargin - a.temp) / (2*temp9) dl.dmu1 <- coeff12[, 2] * (y[, 1]/mu.use[, 1]-y[, 
3]/mu.use[, 3]) - (1+coeff12[, 2]) * (y[, 2]/mu.use[, 2]-y[, 4]/mu.use[, 4]) dl.dmu2 <- coeff12[, 1] * (y[, 1]/mu.use[, 1]-y[, 2]/mu.use[, 2]) - (1+coeff12[, 1]) * (y[, 3]/mu.use[, 3]-y[, 4]/mu.use[, 4]) coeff3 <- (y[, 1]/mu.use[, 1] - y[, 2]/mu.use[, 2] - y[, 3]/mu.use[, 3] + y[, 4]/mu.use[, 4]) Vab <- pmax(smallno, 1 / (1/mu.use[, 1] + 1/mu.use[, 2] + 1/mu.use[, 3] + 1/mu.use[, 4])) dp11.doratio <- Vab / use.oratio dl.doratio <- coeff3 * dp11.doratio c(w) * cbind(dl.dmu1 * dtheta.deta(pmargin[, 1], .lmu1, earg = .emu1), dl.dmu2 * dtheta.deta(pmargin[, 2], .lmu2, earg = .emu2), dl.doratio * dtheta.deta(oratio, .loratio, earg = .eoratio)) }), list( .lmu1 = lmu1, .lmu2 = lmu2, .loratio = loratio, .emu1 = emu1, .emu2 = emu2, .eoratio = eoratio ))), weight = eval(substitute(expression({ Deltapi <- mu.use[, 3]*mu.use[, 2] - mu.use[, 4]*mu.use[, 1] myDelta <- pmax(smallno, mu.use[, 1] * mu.use[, 2] * mu.use[, 3] * mu.use[, 4]) pqmargin <- pmargin * (1-pmargin) pqmargin[pqmargin < smallno] <- smallno wz <- matrix(0, n, 4) wz[, iam(1, 1, M)] <- (pqmargin[, 2] * Vab / myDelta) * dtheta.deta(pmargin[, 1], .lmu1, earg = .emu1)^2 wz[, iam(2, 2, M)] <- (pqmargin[, 1] * Vab / myDelta) * dtheta.deta(pmargin[, 2], .lmu2, earg = .emu2)^2 wz[, iam(3, 3, M)] <- (Vab / use.oratio^2) * dtheta.deta(use.oratio, .loratio, earg = .eoratio)^2 wz[, iam(1, 2, M)] <- (Vab * Deltapi / myDelta) * dtheta.deta(pmargin[, 1], .lmu1, earg = .emu1) * dtheta.deta(pmargin[, 2], .lmu2, earg = .emu2) c(w) * wz }), list( .lmu1 = lmu1, .lmu2 = lmu2, .loratio = loratio, .emu1 = emu1, .emu2 = emu2, .eoratio = eoratio )))) } dbinom2.rho <- function(mu1, mu2 = if (exchangeable) mu1 else stop("'mu2' not specified"), rho = 0, exchangeable = FALSE, colnames = c("00", "01", "10", "11"), ErrorCheck = TRUE) { if (ErrorCheck) { if (!is.Numeric(mu1, positive = TRUE) || max(mu1) >= 1) stop("bad input for argument 'mu1'") if (!is.Numeric(mu2, positive = TRUE) || max(mu2) >= 1) stop("bad input for argument 'mu2'") if (!is.Numeric(rho) || min(rho) <= -1 || max(rho) >= 1) stop("bad input for argument 'rho'") if (exchangeable && max(abs(mu1 - mu2)) > 0.00001) stop("argument 'exchangeable' is TRUE but 'mu1' and 'mu2' differ") } nn <- max(length(mu1), length(mu2), length(rho)) rho <- rep(rho, len = nn) mu1 <- rep(mu1, len = nn) mu2 <- rep(mu2, len = nn) eta1 <- qnorm(mu1) eta2 <- qnorm(mu2) p11 <- pnorm2(eta1, eta2, cov12 = rho) p01 <- mu2 - p11 p10 <- mu1 - p11 p00 <- 1.0 - p01 - p10 - p11 matrix(c(p00, p01, p10, p11), nn, 4, dimnames = list(NULL, colnames)) } rbinom2.rho <- function(n, mu1, mu2 = if (exchangeable) mu1 else stop("argument 'mu2' not specified"), rho = 0, exchangeable = FALSE, twoCols = TRUE, colnames = if (twoCols) c("y1", "y2") else c("00", "01", "10", "11"), ErrorCheck = TRUE) { if (ErrorCheck) { if (!is.Numeric(n, integer.valued = TRUE, positive = TRUE, allowable.length = 1)) stop("bad input for argument 'n'") if (!is.Numeric(mu1, positive = TRUE) || max(mu1) >= 1) stop("bad input for argument 'mu1'") if (!is.Numeric(mu2, positive = TRUE) || max(mu2) >= 1) stop("bad input for argument 'mu2'") if (!is.Numeric(rho) || min(rho) <= -1 || max(rho) >= 1) stop("bad input for argument 'rho'") if (exchangeable && max(abs(mu1 - mu2)) > 0.00001) stop("argument 'exchangeable' is TRUE but 'mu1' and 'mu2' differ") } dmat <- dbinom2.rho(mu1 = mu1, mu2 = mu2, rho = rho, exchangeable = exchangeable, ErrorCheck = ErrorCheck) answer <- matrix(0, n, 2, dimnames = list(NULL, if (twoCols) colnames else NULL)) yy <- runif(n) cs1 <- dmat[, "00"] + dmat[, 
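  ## Sampling is by inversion of the 4-cell multinomial: one uniform
  ## draw per observation is compared with the cumulative cell
  ## probabilities p00, p00 + p01, p00 + p01 + p10, and (y1, y2) is
  ## set accordingly. E.g. (not run):
  ##   ymat <- rbinom2.rho(n = 5, mu1 = 0.6, mu2 = 0.4, rho = 0.5)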
"01"] cs2 <- cs1 + dmat[, "10"] index <- (dmat[, "00"] < yy) & (yy <= cs1) answer[index, 2] <- 1 index <- (cs1 < yy) & (yy <= cs2) answer[index, 1] <- 1 index <- (yy > cs2) answer[index,] <- 1 if (twoCols) answer else { answer4 <- matrix(0, n, 4, dimnames = list(NULL, colnames)) answer4[cbind(1:n, 1 + 2*answer[, 1] + answer[, 2])] <- 1 answer4 } } binom2.rho.control <- function(save.weight = TRUE, ...) { list(save.weight = save.weight) } binom2.rho <- function(lrho = "rhobit", lmu = "probit", # added 20120817 imu1 = NULL, imu2 = NULL, irho = NULL, imethod = 1, zero = 3, exchangeable = FALSE, grho = seq(-0.95, 0.95, by = 0.05), nsimEIM = NULL) { lrho <- as.list(substitute(lrho)) erho <- link2list(lrho) lrho <- attr(erho, "function.name") lmu <- as.list(substitute(lmu)) emu <- link2list(lmu) lmu <- attr(emu, "function.name") if (lmu != "probit") warning("argument 'lmu' should be 'probit'") lmu12 <- "probit" # But emu may contain some arguments. emu12 <- emu # list() if (is.Numeric(nsimEIM)) { if (!is.Numeric(nsimEIM, allowable.length = 1, integer.valued = TRUE)) stop("bad input for argument 'nsimEIM'") if (nsimEIM <= 100) warning("'nsimEIM' should be an integer greater than 100") } if (!is.Numeric(imethod, allowable.length = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("argument 'imethod' must be 1 or 2") new("vglmff", blurb = c("Bivariate probit model\n", "Links: ", namesof("mu1", lmu12, earg = emu12), ", ", namesof("mu2", lmu12, earg = emu12), ", ", namesof("rho", lrho, earg = erho)), constraints = eval(substitute(expression({ constraints <- cm.vgam(matrix(c(1, 1, 0, 0, 0, 1), 3, 2), x, .exchangeable , constraints, apply.int = TRUE) constraints <- cm.zero.vgam(constraints, x, .zero , M) }), list( .exchangeable = exchangeable, .zero = zero ))), infos = eval(substitute(function(...) { list(Musual = 3, multipleResponses = FALSE, zero = .zero ) }, list( .zero = zero ))), initialize = eval(substitute(expression({ mustart.orig <- mustart eval(process.binomial2.data.vgam) if (length(mustart.orig)) mustart <- mustart.orig # Retain it if inputted predictors.names <- c( namesof("mu1", .lmu12 , earg = .emu12 , short = TRUE), namesof("mu2", .lmu12 , earg = .emu12 , short = TRUE), namesof("rho", .lrho , earg = .erho, short = TRUE)) if (is.null( .nsimEIM)) { save.weight <- control$save.weight <- FALSE } ycounts <- if (is.numeric(extra$orig.w)) y * c(w) / extra$orig.w else y * c(w) # Convert proportions to counts if (max(abs(ycounts - round(ycounts))) > 1.0e-6) warning("the response (as counts) does not appear to ", "be integer-valued. 
Am rounding to integer values.") ycounts <- round(ycounts) # Make sure it is an integer nvec <- if (is.numeric(extra$orig.w)) round(w / extra$orig.w) else round(w) if (is.null(etastart)) { if (length(mustart.orig)) { mu1.init <- mustart.orig[, 3] + mustart.orig[, 4] mu2.init <- mustart.orig[, 2] + mustart.orig[, 4] } else if ( .imethod == 1) { glm1.fit <- glm(cbind(ycounts[, 3] + ycounts[, 4], ycounts[, 1] + ycounts[, 2]) ~ x - 1, fam = binomial("probit")) glm2.fit <- glm(cbind(ycounts[, 2] + ycounts[, 4], ycounts[, 1] + ycounts[, 3]) ~ x - 1, fam = binomial("probit")) mu1.init <- fitted(glm1.fit) mu2.init <- fitted(glm2.fit) } else if ( .imethod == 2) { mu1.init <- if (is.Numeric( .imu1 )) rep( .imu1 , length = n) else mu[, 3] + mu[, 4] mu2.init <- if (is.Numeric( .imu2 )) rep( .imu2 , length = n) else mu[, 2] + mu[, 4] } else { stop("bad value for argument 'imethod'") } binom2.rho.Loglikfun <- function(rhoval, y, x, w, extraargs) { init.mu1 <- extraargs$initmu1 init.mu2 <- extraargs$initmu2 ycounts <- extraargs$ycounts nvec <- extraargs$nvec eta1 <- qnorm(init.mu1) eta2 <- qnorm(init.mu2) p11 <- pnorm2(eta1, eta2, cov12 = rhoval) p01 <- pmin(init.mu2 - p11, init.mu2) p10 <- pmin(init.mu1 - p11, init.mu1) p00 <- 1.0 - p01 - p10 - p11 mumat <- abs(cbind("00" = p00, "01" = p01, "10" = p10, "11" = p11)) mumat <- mumat / rowSums(mumat) mumat[mumat < 1.0e-100] <- 1.0e-100 sum((if (is.numeric(extraargs$orig.w)) extraargs$orig.w else 1) * dmultinomial(x = ycounts, size = nvec, prob = mumat, log = TRUE, dochecking = FALSE)) } rho.grid <- .grho # seq(-0.95, 0.95, len = 31) try.this <- getMaxMin(rho.grid, objfun = binom2.rho.Loglikfun, y = y, x = x, w = w, extraargs = list( orig.w = extra$orig.w, ycounts = ycounts, initmu1 = mu1.init, initmu2 = mu2.init, nvec = nvec )) rho.init <- if (is.Numeric( .irho )) rep( .irho , len = n) else { try.this } etastart <- cbind(theta2eta(mu1.init, .lmu12 , earg = .emu12 ), theta2eta(mu2.init, .lmu12 , earg = .emu12 ), theta2eta(rho.init, .lrho , earg = .erho)) mustart <- NULL # Since etastart has been computed. 
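      ## Initialization: with imethod = 1 the two marginal
      ## probabilities come from probit GLM fits, and rho is chosen by
      ## a grid search over 'grho' that maximizes the multinomial
      ## log-likelihood via getMaxMin(). A minimal end-to-end sketch
      ## (not run; simulated data):
      ##   set.seed(123)
      ##   ymat <- rbinom2.rho(n = 1000, mu1 = 0.7, mu2 = 0.5,
      ##                       rho = 0.4, twoCols = FALSE)
      ##   fit <- vglm(ymat ~ 1, binom2.rho)
      ##   coef(fit, matrix = TRUE)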
} }), list( .lmu12 = lmu12, .lrho = lrho, .emu12 = emu12, .erho = erho, .grho = grho, .irho = irho, .imethod = imethod, .nsimEIM = nsimEIM, .imu1 = imu1, .imu2 = imu2 ))), linkinv = eval(substitute(function(eta, extra = NULL) { pmargin <- cbind(eta2theta(eta[, 1], .lmu12 , earg = .emu12 ), eta2theta(eta[, 2], .lmu12 , earg = .emu12 )) rho <- eta2theta(eta[, 3], .lrho , earg = .erho) p11 <- pnorm2(eta[, 1], eta[, 2], cov12 = rho) p01 <- pmin(pmargin[, 2] - p11, pmargin[, 2]) p10 <- pmin(pmargin[, 1] - p11, pmargin[, 1]) p00 <- 1.0 - p01 - p10 - p11 ansmat <- abs(cbind("00" = p00, "01" = p01, "10" = p10, "11" = p11)) ansmat / rowSums(ansmat) }, list( .lmu12 = lmu12, .lrho = lrho, .emu12 = emu12, .erho = erho ))), last = eval(substitute(expression({ misc$link <- c(mu1 = .lmu12 , mu2 = .lmu12 , rho = .lrho ) misc$earg <- list(mu1 = .emu12 , mu2 = .emu12 , rho = .erho ) misc$nsimEIM <- .nsimEIM misc$expected <- TRUE misc$multipleResponses <- FALSE }), list( .lmu12 = lmu12, .lrho = lrho, .nsimEIM = nsimEIM, .emu12 = emu12, .erho = erho ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { if (residuals) stop("loglikelihood residuals not implemented yet") else { ycounts <- if (is.numeric(extra$orig.w)) y * c(w) / extra$orig.w else y * c(w) # Convert proportions to counts smallno <- 1.0e4 * .Machine$double.eps if (max(abs(ycounts - round(ycounts))) > smallno) warning("converting 'ycounts' to integer in @loglikelihood") ycounts <- round(ycounts) nvec <- if (is.numeric(extra$orig.w)) round(w / extra$orig.w) else round(w) sum((if (is.numeric(extra$orig.w)) extra$orig.w else 1) * dmultinomial(x = ycounts, size = nvec, prob = mu, log = TRUE, dochecking = FALSE)) } }, list( .erho = erho ))), vfamily = c("binom2.rho", "binom2"), deriv = eval(substitute(expression({ nvec <- if (is.numeric(extra$orig.w)) round(w / extra$orig.w) else round(w) ycounts <- if (is.numeric(extra$orig.w)) y * c(w) / extra$orig.w else y * c(w) # Convert proportions to counts pmargin <- cbind(eta2theta(eta[, 1], .lmu12 , earg = .emu12 ), eta2theta(eta[, 2], .lmu12 , earg = .emu12 )) rhovec <- eta2theta(eta[, 3], .lrho , earg = .erho) p11 <- pnorm2(eta[, 1], eta[, 2], cov12 = rhovec) p01 <- pmargin[, 2] - p11 p10 <- pmargin[, 1] - p11 p00 <- 1 - p01 - p10 - p11 ABmat <- (eta[, 1:2] - rhovec * eta[, 2:1]) / sqrt(pmax(1e5 * .Machine$double.eps, 1.0 - rhovec^2)) PhiA <- pnorm(ABmat[, 1]) PhiB <- pnorm(ABmat[, 2]) onemPhiA <- pnorm(ABmat[, 1], lower.tail = FALSE) onemPhiB <- pnorm(ABmat[, 2], lower.tail = FALSE) smallno <- 1000 * .Machine$double.eps p00[p00 < smallno] <- smallno p01[p01 < smallno] <- smallno p10[p10 < smallno] <- smallno p11[p11 < smallno] <- smallno dprob00 <- dnorm2(eta[, 1], eta[, 2], rho = rhovec) dl.dprob1 <- PhiB * (ycounts[, 4]/p11 - ycounts[, 2]/p01) + onemPhiB * (ycounts[, 3]/p10 - ycounts[, 1]/p00) dl.dprob2 <- PhiA * (ycounts[, 4]/p11 - ycounts[, 3]/p10) + onemPhiA * (ycounts[, 2]/p01 - ycounts[, 1]/p00) dl.drho <- (ycounts[, 4]/p11 - ycounts[, 3]/p10 - ycounts[, 2]/p01 + ycounts[, 1]/p00) * dprob00 dprob1.deta <- dtheta.deta(pmargin[, 1], .lmu12 , earg = .emu12 ) dprob2.deta <- dtheta.deta(pmargin[, 2], .lmu12 , earg = .emu12 ) drho.deta <- dtheta.deta(rhovec, .lrho , earg = .erho) dthetas.detas <- cbind(dprob1.deta, dprob2.deta, drho.deta) (if (is.numeric(extra$orig.w)) extra$orig.w else 1) * cbind(dl.dprob1, dl.dprob2, dl.drho) * dthetas.detas }), list( .lmu12 = lmu12, .lrho = lrho, .emu12 = emu12, .erho = erho ))), weight = eval(substitute(expression({ if 
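    ## Working weights: when nsimEIM is NULL the expected information
    ## comes from the analytic ned2l.* expressions below; otherwise it
    ## is approximated by simulating nsimEIM responses with
    ## rbinom2.rho() and averaging outer products of the scores.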
(is.null( .nsimEIM )) { ned2l.dprob1prob1 <- PhiB^2 * (1/p11 + 1/p01) + onemPhiB^2 * (1/p10 + 1/p00) ned2l.dprob2prob2 <- PhiA^2 * (1/p11 + 1/p10) + onemPhiA^2 * (1/p01 + 1/p00) ned2l.dprob1prob2 <- PhiA * ( PhiB/p11 - onemPhiB/p10) + onemPhiA * (onemPhiB/p00 - PhiB/p01) ned2l.dprob1rho <- (PhiB * (1/p11 + 1/p01) - onemPhiB * (1/p10 + 1/p00)) * dprob00 ned2l.dprob2rho <- (PhiA * (1/p11 + 1/p10) - onemPhiA * (1/p01 + 1/p00)) * dprob00 ned2l.drho2 <- (1/p11 + 1/p01 + 1/p10 + 1/p00) * dprob00^2 wz <- matrix(0, n, dimm(M)) # 6=dimm(M) wz[, iam(1, 1, M)] <- ned2l.dprob1prob1 * dprob1.deta^2 wz[, iam(2, 2, M)] <- ned2l.dprob2prob2 * dprob2.deta^2 wz[, iam(3, 3, M)] <- ned2l.drho2 * drho.deta^2 wz[, iam(1, 2, M)] <- ned2l.dprob1prob2 * dprob1.deta * dprob2.deta wz[, iam(2, 3, M)] <- ned2l.dprob2rho * dprob2.deta * drho.deta wz[, iam(1, 3, M)] <- ned2l.dprob1rho * dprob1.deta * drho.deta } else { run.varcov <- 0 ind1 <- iam(NA, NA, M = M, both = TRUE, diag = TRUE) for (ii in 1:( .nsimEIM )) { ysim <- rbinom2.rho(n, mu1 = pmargin[, 1], mu2 = pmargin[, 2], twoCols = FALSE, rho = rhovec) dl.dprob1 <- PhiB * (ysim[, 4]/p11 - ysim[, 2]/p01) + onemPhiB * (ysim[, 3]/p10 - ysim[, 1]/p00) dl.dprob2 <- PhiA * (ysim[, 4]/p11 - ysim[, 3]/p10) + onemPhiA * (ysim[, 2]/p01 - ysim[, 1]/p00) dl.drho <- (ysim[, 4]/p11 - ysim[, 3]/p10 - ysim[, 2]/p01 + ysim[, 1]/p00) * dprob00 rm(ysim) temp3 <- cbind(dl.dprob1, dl.dprob2, dl.drho) run.varcov <- ((ii-1) * run.varcov + temp3[, ind1$row.index] * temp3[, ind1$col.index]) / ii } wz <- if (intercept.only) matrix(colMeans(run.varcov), n, ncol(run.varcov), byrow = TRUE) else run.varcov wz <- wz * dthetas.detas[, ind1$row] * dthetas.detas[, ind1$col] } c(w) * wz }), list( .nsimEIM = nsimEIM )))) } dnorm2 <- function(x, y, rho = 0, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) logdnorm2 <- (-0.5*(x * (x - 2*y*rho) + y^2) / (1.0 - rho^2)) - log(2 * pi) - 0.5 * log1p(-rho^2) if (log.arg) { logdnorm2 } else { exp(logdnorm2) } } pnorm2 <- function(x1, x2, mean1 = 0, mean2 = 0, var1 = 1, var2 = 1, cov12 = 0) { sd1 <- sqrt(var1) sd2 <- sqrt(var2) rho <- cov12 / (sd1 * sd2) if (any(is.na(x1) | is.na(x2) | is.na(sd1) | is.na(sd2) | is.na(mean1) | is.na(mean2) | is.na(rho))) stop("no NAs allowed in arguments or variables 'x1', 'x2', 'mean1', ", "'mean2', 'sd1', 'sd2', 'cov12'") if (min(rho) < -1 || max(rho) > +1) stop("correlation 'rho' is out of range") if (length(mean1) > 1 && length(mean2) == 1 && length(var1) == 1 && length(var2) == 1 && length(cov12) == 1) warning("the call to pnorm2() seems based on the old version ", "of the arguments") LLL <- max(length(x1), length(x2), length(mean1), length(mean2), length(sd1), length(sd2), length(rho)) if (length(x1) != LLL) x1 <- rep(x1, len = LLL) if (length(x2) != LLL) x2 <- rep(x2, len = LLL) if (length(mean1) != LLL) mean1 <- rep(mean1, len = LLL) if (length(mean2) != LLL) mean2 <- rep(mean2, len = LLL) if (length(sd1) != LLL) sd1 <- rep(sd1, len = LLL) if (length(sd2) != LLL) sd2 <- rep(sd2, len = LLL) if (length(rho) != LLL) rho <- rep(rho, len = LLL) Z1 <- (x1 - mean1) / sd1 Z2 <- (x2 - mean2) / sd2 ans <- Z1 singler <- ifelse(length(rho) == 1, 1, 0) answer <- dotC(name = "pnorm2", ah = as.double(-Z1), ak = as.double(-Z2), r = as.double(rho), size = as.integer(LLL), singler = as.integer(singler), ans = as.double(ans))$ans if (any(answer < 0.0)) warning("some negative values returned") answer } my.dbinom <- function(x, size = stop("no 'size' argument"), prob = stop("no 
'prob' argument")) { exp(lgamma(size + 1) - lgamma(size - x + 1) - lgamma(x + 1) + x * log(prob / (1 - prob)) + size * log1p(-prob)) } size.binomial <- function(prob = 0.5, link = "loge") { if (any(prob <= 0 | prob >= 1)) stop("some values of prob out of range") link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") new("vglmff", blurb = c("Binomial with n unknown, prob known (prob = ", prob, ")\n", "Links: ", namesof("size", link, tag = TRUE), " (treated as real-valued)\n", "Variance: Var(Y) = size * prob * (1-prob);", " Var(size) is intractable"), initialize = eval(substitute(expression({ predictors.names <- "size" extra$temp2 <- rep( .prob , length = n) if (is.null(etastart)) { nvec <- (y+0.1)/extra$temp2 etastart <- theta2eta(nvec, .link ) } }), list( .prob = prob, .link = link ))), linkinv = eval(substitute(function(eta, extra = NULL) { nvec <- eta2theta(eta, .link) nvec * extra$temp2 }, list( .link = link ))), last = eval(substitute(expression({ misc$link <- c(size = .link ) misc$prob <- extra$temp2 }), list( .link = link ))), linkfun = eval(substitute(function(mu, extra = NULL) { nvec <- mu / extra$temp2 theta2eta(nvec, .link) }, list( .link = link ))), loglikelihood = eval(substitute( function(mu, y, w, res = FALSE,eta, extra = NULL) { nvec <- mu / extra$temp2 if (residuals) stop("loglikelihood residuals not implemented yet") else { sum(c(w) * (lgamma(nvec+1) - lgamma(y+1) - lgamma(nvec-y+1) + y * log( .prob / (1- .prob )) + nvec * log1p(- .prob ))) } }, list( .prob = prob ))), vfamily = c("size.binomial"), deriv = eval(substitute(expression({ nvec <- mu/extra$temp2 dldnvec <- digamma(nvec+1) - digamma(nvec-y+1) + log1p(-extra$temp2) dnvecdeta <- dtheta.deta(nvec, .link) c(w) * cbind(dldnvec * dnvecdeta) }), list( .link = link ))), weight = eval(substitute(expression({ d2ldnvec2 <- trigamma(nvec+1) - trigamma(nvec-y+1) d2ldnvec2[y == 0] <- -sqrt( .Machine$double.eps ) wz <- -c(w) * dnvecdeta^2 * d2ldnvec2 wz }), list( .link = link )))) } dbetabinom.ab <- function(x, size, shape1, shape2, log = FALSE, .dontuse.prob = NULL) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) if (!is.Numeric(x)) stop("bad input for argument 'x'") if (!is.Numeric(size, integer.valued = TRUE)) stop("bad input for argument 'size'") if (any(shape1 < 0, na.rm = TRUE)) stop("negative values for argument 'shape1' not allowed") if (any(shape2 < 0, na.rm = TRUE)) stop("negative values for argument 'shape2' not allowed") LLL <- max(length(x), length(size), length(shape1), length(shape2)) if (length(x) != LLL) x <- rep(x, len = LLL) if (length(size) != LLL) size <- rep(size, len = LLL) if (length(shape1) != LLL) shape1 <- rep(shape1, len = LLL) if (length(shape2) != LLL) shape2 <- rep(shape2, len = LLL) ans <- 0 * x ok <- (round(x) == x) & (x >= 0) & (x <= size) & is.finite(shape1) & is.finite(shape2) if (any(ok)) { ans[ok] <- lchoose(size[ok], x[ok]) + lbeta(shape1[ok] + x[ok], shape2[ok] + size[ok] - x[ok]) - lbeta(shape1[ok], shape2[ok]) if (log.arg) { } else { ans[ok] <- exp(ans[ok]) } } okay1 <- is.na(shape1) & is.infinite(shape2) # rho = 0 and prob == 0 okay2 <- is.infinite(shape1) & is.na(shape2) # rho = 0 and prob == 1 okay3 <- is.infinite(shape1) & is.infinite(shape2) # rho = 0 and 0 < prob < 1 if (sum.okay1 <- sum(okay1)) ans[okay1] <- dbinom(x = x[okay1], size = size[okay1], prob = 0, log = log.arg) if (sum.okay2 <- sum(okay2)) ans[okay2] <- dbinom(x = x[okay2], size = size[okay2], prob = 1, log = log.arg) if 
(sum.okay3 <- sum(okay3)) { if (length(.dontuse.prob) != LLL) .dontuse.prob <- rep( .dontuse.prob , len = LLL) ans[okay3] <- dbinom(x = x[okay3], size = size[okay3], prob = .dontuse.prob[okay3], log = log.arg) } ans } pbetabinom.ab <- function(q, size, shape1, shape2, log.p = FALSE) { if (!is.Numeric(q)) stop("bad input for argument 'q'") if (!is.Numeric(size, integer.valued = TRUE)) stop("bad input for argument 'size'") if (!is.Numeric(shape1, positive = TRUE)) stop("bad input for argument 'shape1'") if (!is.Numeric(shape2, positive = TRUE)) stop("bad input for argument 'shape2'") LLL <- max(length(q), length(size), length(shape1), length(shape2)) if (length(q) != LLL) q <- rep(q, len = LLL) if (length(shape1) != LLL) shape1 <- rep(shape1, len = LLL) if (length(shape2) != LLL) shape2 <- rep(shape2, len = LLL) if (length(size) != LLL) size <- rep(size, len = LLL); ans <- q * 0 # Retains names(q) if (max(abs(size - size[1])) < 1.0e-08 && max(abs(shape1 - shape1[1])) < 1.0e-08 && max(abs(shape2 - shape2[1])) < 1.0e-08) { qstar <- floor(q) temp <- if (max(qstar) >= 0) { dbetabinom.ab(0:max(qstar), size = size[1], shape1 = shape1[1], shape2 = shape2[1]) } else { 0 * qstar } unq <- unique(qstar) for (ii in unq) { index <- qstar == ii ans[index] <- if (ii >= 0) sum(temp[1:(1+ii)]) else 0 } } else { for (ii in 1:LLL) { qstar <- floor(q[ii]) ans[ii] <- if (qstar >= 0) { sum(dbetabinom.ab(x = 0:qstar, size = size[ii], shape1 = shape1[ii], shape2 = shape2[ii])) } else 0 } } if (log.p) log(ans) else ans } rbetabinom.ab <- function(n, size, shape1, shape2, .dontuse.prob = NULL) { # checkargs = TRUE if (!is.Numeric(size, integer.valued = TRUE)) stop("bad input for argument 'size'") if (any(shape1 < 0, na.rm = TRUE)) stop("negative values for argument 'shape1' not allowed") if (any(shape2 < 0, na.rm = TRUE)) stop("negative values for argument 'shape2' not allowed") use.n <- if ((length.n <- length(n)) > 1) length.n else if (!is.Numeric(n, integer.valued = TRUE, allowable.length = 1, positive = TRUE)) stop("bad input for argument 'n'") else n if (length(size) != use.n) size <- rep(size, len = use.n) if (length(shape1) != use.n) shape1 <- rep(shape1, len = use.n) if (length(shape2) != use.n) shape2 <- rep(shape2, len = use.n) ans <- rep(as.numeric(NA), len = use.n) okay0 <- is.finite(shape1) & is.finite(shape2) if (smalln <- sum(okay0)) ans[okay0] <- rbinom(n = smalln, size = size[okay0], prob = rbeta(n = smalln, shape1 = shape1[okay0], shape2 = shape2[okay0])) okay1 <- is.na(shape1) & is.infinite(shape2) # rho = 0 and prob == 0 okay2 <- is.infinite(shape1) & is.na(shape2) # rho = 0 and prob == 1 okay3 <- is.infinite(shape1) & is.infinite(shape2) # rho = 0 and 0 < prob < 1 if (sum.okay1 <- sum(okay1)) ans[okay1] <- rbinom(n = sum.okay1, size = size[okay1], prob = 0) if (sum.okay2 <- sum(okay2)) ans[okay2] <- rbinom(n = sum.okay2, size = size[okay2], prob = 1) if (sum.okay3 <- sum(okay3)) { if (length( .dontuse.prob ) != use.n) .dontuse.prob <- rep(.dontuse.prob, len = use.n) ans[okay3] <- rbinom(n = sum.okay3, size = size[okay3], prob = .dontuse.prob[okay3]) } ans } dbetabinom <- function(x, size, prob, rho = 0, log = FALSE) { dbetabinom.ab(x = x, size = size, shape1 = prob*(1-rho)/rho, shape2 = (1-prob)*(1-rho)/rho, log = log, .dontuse.prob = prob) } pbetabinom <- function(q, size, prob, rho, log.p = FALSE) { pbetabinom.ab(q = q, size = size, shape1 = prob*(1-rho)/rho, shape2 = (1-prob)*(1-rho)/rho, log.p = log.p) } rbetabinom <- function(n, size, prob, rho = 0) { rbetabinom.ab(n = n, size = size, shape1 
= prob*(1-rho)/rho, shape2 = (1-prob)*(1-rho)/rho, .dontuse.prob = prob) } expected.betabin.ab <- function(nvec, shape1, shape2, first) { NN <- length(nvec) ans <- rep(0.0, len = NN) if (first) { for (ii in 1:NN) { temp639 <- lbeta(shape1[ii], shape2[ii]) yy <- 0:nvec[ii] ans[ii] <- ans[ii] + sum(trigamma(shape1[ii] + yy) * exp(lchoose(nvec[ii], yy) + lbeta(shape1[ii]+yy, shape2[ii]+nvec[ii]-yy) - temp639)) } } else { for (ii in 1:NN) { temp639 <- lbeta(shape1[ii], shape2[ii]) yy <- 0:nvec[ii] ans[ii] <- ans[ii] + sum(trigamma(nvec[ii]+shape2[ii] - yy) * exp(lchoose(nvec[ii], yy) + lbeta(shape1[ii]+yy, shape2[ii]+nvec[ii]-yy) - temp639)) } } ans } betabinomial.ab.control <- function(save.weight = TRUE, ...) { list(save.weight = save.weight) } betabinomial.ab <- function(lshape12 = "loge", i1 = 1, i2 = NULL, imethod = 1, shrinkage.init = 0.95, nsimEIM = NULL, zero = NULL) { lshape12 <- as.list(substitute(lshape12)) earg <- link2list(lshape12) lshape12 <- attr(earg, "function.name") if (!is.Numeric(i1, positive = TRUE)) stop("bad input for argument 'i1'") if (!is.Numeric(imethod, allowable.length = 1, integer.valued = TRUE, positive = TRUE) || imethod > 3) stop("argument 'imethod' must be 1, 2 or 3") if (length(i2) && !is.Numeric(i2, positive = TRUE)) stop("bad input for argument 'i2'") if (!is.null(nsimEIM)) { if (!is.Numeric(nsimEIM, allowable.length = 1, integer.valued = TRUE)) stop("bad input for argument 'nsimEIM'") if (nsimEIM <= 10) warning("'nsimEIM' should be an integer greater than 10, say") } new("vglmff", blurb = c("Beta-binomial model\n", "Links: ", namesof("shape1", lshape12, earg = earg), ", ", namesof("shape2", lshape12, earg = earg), "\n", "Mean: mu = shape1 / (shape1+shape2)", "\n", "Variance: mu * (1-mu) * (1+(w-1)*rho) / w, ", "where rho = 1 / (shape1+shape2+1)"), constraints = eval(substitute(expression({ constraints <- cm.zero.vgam(constraints, x, .zero , M) }), list( .zero = zero ))), initialize = eval(substitute(expression({ if (!all(w == 1)) extra$orig.w <- w if (is.null( .nsimEIM)) { save.weight <- control$save.weight <- FALSE } mustart.orig <- mustart eval(binomialff()@initialize) # Note: n,w,y,mustart is changed if (length(mustart.orig)) mustart <- mustart.orig # Retain it if inputted predictors.names <- c(namesof("shape1", .lshape12, earg = .earg, tag = FALSE), namesof("shape2", .lshape12, earg = .earg, tag = FALSE)) if (!length(etastart)) { mustart.use <- if (length(mustart.orig)) mustart.orig else mustart shape1 <- rep( .i1 , len = n) shape2 <- if (length( .i2 )) { rep( .i2 , len = n) } else if (length(mustart.orig)) { shape1 * (1 / mustart.use - 1) } else if ( .imethod == 1) { shape1 * (1 / weighted.mean(y, w) - 1) } else if ( .imethod == 2) { temp777 <- .sinit * weighted.mean(y, w) + (1 - .sinit) * y shape1 * (1 / temp777 - 1) } else { shape1 * (1 / weighted.mean(mustart.use, w) - 1) } ycounts <- if (is.numeric(extra$orig.w)) y * w / extra$orig.w else y * w # Convert proportions to counts if (max(abs(ycounts - round(ycounts))) > 1.0e-6) warning("the response (as counts) does not appear to ", "be integer-valued. Am rounding to integer values.") ycounts <- round(ycounts) # Make sure it is an integer etastart <- cbind(theta2eta(shape1, .lshape12, earg = .earg), theta2eta(shape2, .lshape12, earg = .earg)) mustart <- NULL # Since etastart has been computed. 
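      ## Initialization: shape1 starts at 'i1' and, unless 'i2' is
      ## given, shape2 is set by moment matching so that the implied
      ## mean shape1/(shape1+shape2) equals the chosen estimate of mu.
      ## A small check of the parametrization (not run):
      ##   probs <- dbetabinom.ab(0:10, size = 10, shape1 = 2, shape2 = 3)
      ##   sum(probs)            # 1
      ##   sum((0:10) * probs)   # 10 * 2/(2+3) = 4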
    }
  }), list( .lshape12 = lshape12, .earg = earg,
            .i1 = i1, .i2 = i2,
            .nsimEIM = nsimEIM, .imethod = imethod,
            .sinit = shrinkage.init ))),
  linkinv = eval(substitute(function(eta, extra = NULL) {
    shape1 <- eta2theta(eta[, 1], .lshape12, earg = .earg)
    shape2 <- eta2theta(eta[, 2], .lshape12, earg = .earg)
    shape1 / (shape1 + shape2)
  }, list( .lshape12 = lshape12, .earg = earg ))),
  last = eval(substitute(expression({
    misc$link <- c("shape1" = .lshape12 , "shape2" = .lshape12 )
    misc$earg <- list("shape1" = .earg , "shape2" = .earg )

    shape1 <- eta2theta(eta[, 1], .lshape12, earg = .earg)
    shape2 <- eta2theta(eta[, 2], .lshape12, earg = .earg)
    misc$rho <- 1 / (shape1 + shape2 + 1)
    misc$expected <- TRUE
    misc$nsimEIM <- .nsimEIM
    misc$zero <- .zero
  }), list( .lshape12 = lshape12, .earg = earg,
            .nsimEIM = nsimEIM, .zero = zero ))),
  loglikelihood = eval(substitute(
    function(mu, y, w, residuals = FALSE, eta, extra = NULL) {
    ycounts <- if (is.numeric(extra$orig.w)) y * w / extra$orig.w else
               y * w  # Convert proportions to counts
    smallno <- 1.0e4 * .Machine$double.eps
    if (max(abs(ycounts - round(ycounts))) > smallno)
      warning("converting 'ycounts' to integer in @loglikelihood")
    ycounts <- round(ycounts)

    shape1 <- eta2theta(eta[, 1], .lshape12, earg = .earg)
    shape2 <- eta2theta(eta[, 2], .lshape12, earg = .earg)
    nvec <- if (is.numeric(extra$orig.w)) round(w / extra$orig.w) else
            round(w)
    if (residuals) stop("loglikelihood residuals not implemented yet") else {
      sum((if (is.numeric(extra$orig.w)) extra$orig.w else 1) *
          dbetabinom.ab(x = ycounts, size = nvec,
                        shape1 = shape1, shape2 = shape2,
                        log = TRUE ))
    }
  }, list( .lshape12 = lshape12, .earg = earg ))),
  vfamily = c("betabinomial.ab"),
  deriv = eval(substitute(expression({
    nvec <- if (is.numeric(extra$orig.w)) round(w / extra$orig.w) else
            round(w)
    ycounts <- if (is.numeric(extra$orig.w)) y * w / extra$orig.w else
               y * w  # Convert proportions to counts
    shape1 <- eta2theta(eta[, 1], .lshape12, earg = .earg)
    shape2 <- eta2theta(eta[, 2], .lshape12, earg = .earg)
    dshape1.deta <- dtheta.deta(shape1, .lshape12, earg = .earg)
    dshape2.deta <- dtheta.deta(shape2, .lshape12, earg = .earg)
    dl.dshape1 <- digamma(shape1 + ycounts) -
                  digamma(shape1 + shape2 + nvec) -
                  digamma(shape1) + digamma(shape1 + shape2)
    dl.dshape2 <- digamma(nvec + shape2 - ycounts) -
                  digamma(shape1 + shape2 + nvec) -
                  digamma(shape2) + digamma(shape1 + shape2)
    (if (is.numeric(extra$orig.w)) extra$orig.w else 1) *
    cbind(dl.dshape1 * dshape1.deta,
          dl.dshape2 * dshape2.deta)
  }), list( .lshape12 = lshape12, .earg = earg ))),
  weight = eval(substitute(expression({
    if (is.null( .nsimEIM )) {
      wz <- matrix(as.numeric(NA), n, dimm(M))  # 3 = dimm(2)
      wz[, iam(1, 1, M)] <- -(expected.betabin.ab(nvec, shape1, shape2,
                                                  TRUE) -
                              trigamma(shape1 + shape2 + nvec) -
                              trigamma(shape1) +
                              trigamma(shape1 + shape2)) * dshape1.deta^2
      wz[, iam(2, 2, M)] <- -(expected.betabin.ab(nvec, shape1, shape2,
                                                  FALSE) -
                              trigamma(shape1 + shape2 + nvec) -
                              trigamma(shape2) +
                              trigamma(shape1 + shape2)) * dshape2.deta^2
      wz[, iam(2, 1, M)] <- -(trigamma(shape1 + shape2) -
                              trigamma(shape1 + shape2 + nvec)) *
                            dshape1.deta * dshape2.deta
      wz * (if (is.numeric(extra$orig.w)) extra$orig.w else 1)
    } else {
      run.varcov <- 0
      ind1 <- iam(NA, NA, M = M, both = TRUE, diag = TRUE)
      dthetas.detas <- cbind(dshape1.deta, dshape2.deta)
      for (ii in 1:( .nsimEIM )) {
        ysim <- rbetabinom.ab(n = n, size = nvec,
                              shape1 = shape1, shape2 = shape2)
        dl.dshape1 <- digamma(shape1 + ysim) -
                      digamma(shape1 + shape2 + nvec) -
                      digamma(shape1) + digamma(shape1 + shape2)
        dl.dshape2 <-
digamma(nvec+shape2-ysim) - digamma(shape1+shape2+nvec) - digamma(shape2) + digamma(shape1+shape2) rm(ysim) temp3 <- cbind(dl.dshape1, dl.dshape2) # n x M matrix run.varcov <- ((ii-1) * run.varcov + temp3[, ind1$row.index]* temp3[, ind1$col.index]) / ii } wz <- if (intercept.only) matrix(colMeans(run.varcov), n, ncol(run.varcov), byrow = TRUE) else run.varcov wz <- wz * dthetas.detas[, ind1$row] * dthetas.detas[, ind1$col] wz * (if (is.numeric(extra$orig.w)) extra$orig.w else 1) } }), list( .lshape12 = lshape12, .earg = earg, .nsimEIM = nsimEIM )))) } betageometric <- function(lprob = "logit", lshape = "loge", iprob = NULL, ishape = 0.1, moreSummation = c(2, 100), tolerance = 1.0e-10, zero = NULL) { lprob <- as.list(substitute(lprob)) eprob <- link2list(lprob) lprob <- attr(eprob, "function.name") lshape <- as.list(substitute(lshape)) eshape <- link2list(lshape) lshape <- attr(eshape, "function.name") if (!is.Numeric(ishape, positive = TRUE)) stop("bad input for argument 'ishape'") if (!is.Numeric(moreSummation, positive = TRUE, allowable.length = 2, integer.valued = TRUE)) stop("bad input for argument 'moreSummation'") if (!is.Numeric(tolerance, positive = TRUE, allowable.length = 1) || 1.0 - tolerance >= 1.0) stop("bad input for argument 'tolerance'") new("vglmff", blurb = c("Beta-geometric distribution\n", "Links: ", namesof("prob", lprob, earg = eprob), ", ", namesof("shape", lshape, earg = eshape)), constraints = eval(substitute(expression({ constraints <- cm.zero.vgam(constraints, x, .zero , M) }), list( .zero = zero ))), initialize = eval(substitute(expression({ eval(geometric()@initialize) predictors.names <- c(namesof("prob", .lprob, earg = .eprob, tag = FALSE), namesof("shape", .lshape, earg = .eshape, short = FALSE)) if (length( .iprob )) prob.init <- rep( .iprob , len = n) if (!length(etastart) || ncol(cbind(etastart)) != 2) { shape.init <- rep( .ishape , len = n) etastart <- cbind(theta2eta(prob.init, .lprob, earg = .eprob), theta2eta(shape.init, .lshape, earg = .eshape)) } }), list( .iprob = iprob, .ishape = ishape, .lprob = lprob, .eprob = eprob, .eshape = eshape, .lshape = lshape ))), linkinv = eval(substitute(function(eta, extra = NULL) { prob <- eta2theta(eta[, 1], .lprob, earg = .eprob) shape <- eta2theta(eta[, 2], .lshape, earg = .eshape) mymu <- (1-prob) / (prob - shape) ifelse(mymu >= 0, mymu, NA) }, list( .lprob = lprob, .lshape = lshape, .eprob = eprob, .eshape = eshape ))), last = eval(substitute(expression({ misc$link <- c("prob" = .lprob , "shape" = .lshape ) misc$earg <- list("prob" = .eprob , "shape" = .eshape ) if (intercept.only) { misc$shape1 <- shape1[1] # These quantities computed in @deriv misc$shape2 <- shape2[1] } misc$expected <- TRUE misc$tolerance <- .tolerance misc$zero <- .zero misc$moreSummation = .moreSummation }), list( .lprob = lprob, .lshape = lshape, .eprob = eprob, .eshape = eshape, .tolerance = tolerance, .moreSummation = moreSummation, .zero = zero ))), loglikelihood = eval(substitute( function(mu,y,w,residuals = FALSE,eta, extra = NULL) { prob <- eta2theta(eta[, 1], .lprob, earg = .eprob) shape <- eta2theta(eta[, 2], .lshape, earg = .eshape) ans <- log(prob) maxy <- max(y) if (residuals) stop("loglikelihood residuals not implemented yet") else { for (ii in 1:maxy) { index <- (ii <= y) ans[index] <- ans[index] + log1p(-prob[index]+(ii-1) * shape[index]) - log1p((ii-1)*shape[index]) } ans <- ans - log1p((y+1-1)*shape) sum(w * ans) } }, list( .lprob = lprob, .lshape = lshape, .eprob = eprob, .eshape = eshape ))), vfamily = 
c("betageometric"), deriv = eval(substitute(expression({ prob <- eta2theta(eta[, 1], .lprob, earg = .eprob) shape <- eta2theta(eta[, 2], .lshape, earg = .eshape) shape1 <- prob / shape; shape2 <- (1 - prob) / shape; dprob.deta <- dtheta.deta(prob, .lprob, earg = .eprob) dshape.deta <- dtheta.deta(shape, .lshape, earg = .eshape) dl.dprob <- 1 / prob dl.dshape <- 0 * y maxy <- max(y) for (ii in 1:maxy) { index <- (ii <= y) dl.dprob[index] <- dl.dprob[index] - 1/(1-prob[index]+(ii-1)*shape[index]) dl.dshape[index] <- dl.dshape[index] + (ii-1)/(1-prob[index]+(ii-1)*shape[index]) - (ii-1)/(1+(ii-1)*shape[index]) } dl.dshape <- dl.dshape - (y+1 -1)/(1+(y+1 -1)*shape) c(w) * cbind(dl.dprob * dprob.deta, dl.dshape * dshape.deta) }), list( .lprob = lprob, .lshape = lshape, .eprob = eprob, .eshape = eshape ))), weight = eval(substitute(expression({ wz <- matrix(0, n, dimm(M)) #3=dimm(2) wz[, iam(1, 1, M)] <- 1 / prob^2 moresum <- .moreSummation maxsummation <- round(maxy * moresum[1] + moresum[2]) for (ii in 3:maxsummation) { temp7 <- 1 - pbetageom(q = ii-1-1, shape1 = shape1, shape2 = shape2) denom1 <- (1-prob+(ii-2)*shape)^2 denom2 <- (1+(ii-2)*shape)^2 wz[, iam(1, 1, M)] <- wz[, iam(1, 1, M)] + temp7 / denom1 wz[, iam(1, 2, M)] <- wz[, iam(1, 2, M)] - (ii-2) * temp7 / denom1 wz[, iam(2, 2, M)] <- wz[, iam(2, 2, M)] + (ii-2)^2 * temp7 / denom1 - (ii-1)^2 * temp7 / denom2 if (max(temp7) < .tolerance ) break; } ii <- 2 temp7 <- 1 - pbetageom(q=ii-1-1, shape1 = shape1, shape2 = shape2) denom1 <- (1-prob+(ii-2)*shape)^2 denom2 <- (1+(ii-2)*shape)^2 wz[, iam(1, 1, M)] <- wz[, iam(1, 1, M)] + temp7 / denom1 wz[, iam(2, 2, M)] <- wz[, iam(2, 2, M)] - (ii-1)^2 * temp7 / denom2 wz[, iam(1, 1, M)] <- wz[, iam(1, 1, M)] * dprob.deta^2 wz[, iam(2, 2, M)] <- wz[, iam(2, 2, M)] * dshape.deta^2 wz[, iam(2, 1, M)] <- wz[, iam(2, 1, M)] * dprob.deta * dshape.deta c(w) * wz }), list( .lprob = lprob, .lshape = lshape, .eprob = eprob, .eshape = eshape, .moreSummation = moreSummation, .tolerance = tolerance )))) } seq2binomial <- function(lprob1 = "logit", lprob2 = "logit", iprob1 = NULL, iprob2 = NULL, parallel = FALSE, apply.parint = TRUE, zero = NULL) { lprob1 <- as.list(substitute(lprob1)) eprob1 <- link2list(lprob1) lprob1 <- attr(eprob1, "function.name") lprob2 <- as.list(substitute(lprob2)) eprob2 <- link2list(lprob2) lprob2 <- attr(eprob2, "function.name") if (length(iprob1) && (!is.Numeric(iprob1, positive = TRUE) || max(iprob1) >= 1)) stop("bad input for argument 'iprob1'") if (length(iprob2) && (!is.Numeric(iprob2, positive = TRUE) || max(iprob2) >= 1)) stop("bad input for argument 'iprob2'") new("vglmff", blurb = c("Sequential binomial distribution ", "(Crowder and Sweeting, 1989)\n", "Links: ", namesof("prob1", lprob1, earg = eprob1), ", ", namesof("prob2", lprob2, earg = eprob2)), constraints = eval(substitute(expression({ constraints <- cm.vgam(matrix(1, M, 1), x, .parallel , constraints, apply.int = .apply.parint ) constraints <- cm.zero.vgam(constraints, x, .zero , M) }), list( .parallel = parallel, .apply.parint = apply.parint, .zero = zero ))), initialize = eval(substitute(expression({ if (!is.vector(w)) stop("the 'weights' argument must be a vector") if (any(abs(w - round(w)) > 0.000001)) stop("the 'weights' argument does not seem to be integer-valued") if (ncol(y <- cbind(y)) != 2) stop("the response must be a 2-column matrix") if (any(y < 0 | y > 1)) stop("the response must have values between 0 and 1") w <- round(w) rvector <- w * y[, 1] if (any(abs(rvector - round(rvector)) > 1.0e-8)) 
warning("number of successes in column one ", "should be integer-valued") svector <- rvector * y[, 2] if (any(abs(svector - round(svector)) > 1.0e-8)) warning("number of successes in", " column two should be integer-valued") predictors.names <- c(namesof("prob1", .lprob1,earg = .eprob1, tag = FALSE), namesof("prob2", .lprob2,earg = .eprob2, tag = FALSE)) prob1.init <- if (is.Numeric( .iprob1)) rep( .iprob1 , len = n) else rep(weighted.mean(y[, 1], w = w), len = n) prob2.init <- if (is.Numeric( .iprob2 )) rep( .iprob2 , length = n) else rep(weighted.mean(y[, 2], w = w*y[, 1]), length = n) if (!length(etastart)) { etastart <- cbind(theta2eta(prob1.init, .lprob1, earg = .eprob1), theta2eta(prob2.init, .lprob2, earg = .eprob2)) } }), list( .iprob1 = iprob1, .iprob2 = iprob2, .lprob1 = lprob1, .lprob2 = lprob2, .eprob1 = eprob1, .eprob2 = eprob2 ))), linkinv = eval(substitute(function(eta, extra = NULL) { prob1 <- eta2theta(eta[, 1], .lprob1, earg = .eprob1) prob2 <- eta2theta(eta[, 2], .lprob2, earg = .eprob2) cbind(prob1, prob2) }, list( .lprob1 = lprob1, .lprob2 = lprob2, .eprob1 = eprob1, .eprob2 = eprob2 ))), last = eval(substitute(expression({ misc$link <- c("prob1" = .lprob1 , "prob2" = .lprob2 ) misc$earg <- list("prob1" = .eprob1 , "prob2" = .eprob2 ) misc$expected <- TRUE misc$zero <- .zero misc$parallel <- .parallel misc$apply.parint <- .apply.parint }), list( .lprob1 = lprob1, .lprob2 = lprob2, .eprob1 = eprob1, .eprob2 = eprob2, .parallel = parallel, .apply.parint = apply.parint, .zero = zero ))), loglikelihood = eval(substitute( function(mu,y,w,residuals = FALSE,eta, extra = NULL) { prob1 <- eta2theta(eta[, 1], .lprob1, earg = .eprob1) prob2 <- eta2theta(eta[, 2], .lprob2, earg = .eprob2) smallno <- 100 * .Machine$double.eps prob1 <- pmax(prob1, smallno) prob1 <- pmin(prob1, 1-smallno) prob2 <- pmax(prob2, smallno) prob2 <- pmin(prob2, 1-smallno) mvector <- w rvector <- w * y[, 1] svector <- rvector * y[, 2] if (residuals) stop("loglikelihood residuals not implemented yet") else { ans1 <- sum(dbinom(rvector, size = mvector, prob = prob1, log = TRUE) + dbinom(svector, size = rvector, prob = prob2, log = TRUE)) ans1 } }, list( .lprob1 = lprob1, .lprob2 = lprob2, .eprob1 = eprob1, .eprob2 = eprob2 ))), vfamily = c("seq2binomial"), deriv = eval(substitute(expression({ prob1 <- eta2theta(eta[, 1], .lprob1, earg = .eprob1) prob2 <- eta2theta(eta[, 2], .lprob2, earg = .eprob2) smallno <- 100 * .Machine$double.eps prob1 <- pmax(prob1, smallno) prob1 <- pmin(prob1, 1-smallno) prob2 <- pmax(prob2, smallno) prob2 <- pmin(prob2, 1-smallno) dprob1.deta <- dtheta.deta(prob1, .lprob1, earg = .eprob1) dprob2.deta <- dtheta.deta(prob2, .lprob2, earg = .eprob2) mvector <- w rvector <- w * y[, 1] svector <- rvector * y[, 2] dl.dprob1 <- rvector / prob1 - (mvector-rvector) / (1-prob1) dl.dprob2 <- svector / prob2 - (rvector-svector) / (1-prob2) cbind(dl.dprob1 * dprob1.deta, dl.dprob2 * dprob2.deta) }), list( .lprob1 = lprob1, .lprob2 = lprob2, .eprob1 = eprob1, .eprob2 = eprob2 ))), weight = eval(substitute(expression({ wz <- matrix(0, n, M) wz[, iam(1, 1, M)] <- (dprob1.deta^2) / (prob1 * (1-prob1)) wz[, iam(2, 2, M)] <- (dprob2.deta^2) * prob1 / (prob2 * (1-prob2)) c(w) * wz }), list( .lprob1 = lprob1, .lprob2 = lprob2, .eprob1 = eprob1, .eprob2 = eprob2 )))) } zipebcom <- function(lmu12 = "cloglog", lphi12 = "logit", loratio = "loge", imu12 = NULL, iphi12 = NULL, ioratio = NULL, zero = 2:3, tol = 0.001, addRidge = 0.001) { lmu12 <- as.list(substitute(lmu12)) emu12 <- link2list(lmu12) lmu12 <- 
attr(emu12, "function.name") lphi12 <- as.list(substitute(lphi12)) ephi12 <- link2list(lphi12) lphi12 <- attr(ephi12, "function.name") loratio <- as.list(substitute(loratio)) eoratio <- link2list(loratio) loratio <- attr(eoratio, "function.name") if (!is.Numeric(tol, positive = TRUE, allowable.length = 1) || tol > 0.1) stop("bad input for argument 'tol'") if (!is.Numeric(addRidge, allowable.length = 1, positive = TRUE) || addRidge > 0.5) stop("bad input for argument 'addRidge'") if (lmu12 != "cloglog") warning("argument 'lmu12' should be 'cloglog'") new("vglmff", blurb = c("Exchangeable bivariate ", lmu12, " odds-ratio model based on\n", "a zero-inflated Poisson distribution\n\n", "Links: ", namesof("mu12", lmu12, earg = emu12), ", ", namesof("phi12", lphi12, earg = ephi12), ", ", namesof("oratio", loratio, earg = eoratio)), constraints = eval(substitute(expression({ constraints <- cm.zero.vgam(constraints, x, .zero , M) }), list( .zero = zero ))), initialize = eval(substitute(expression({ eval(process.binomial2.data.vgam) predictors.names <- c( namesof("mu12", .lmu12 , earg = .emu12 , short = TRUE), namesof("phi12", .lphi12, earg = .ephi12, short = TRUE), namesof("oratio", .loratio, earg = .eoratio, short = TRUE)) propY1.eq.0 <- weighted.mean(y[,'00'], w) + weighted.mean(y[,'01'], w) propY2.eq.0 <- weighted.mean(y[,'00'], w) + weighted.mean(y[,'10'], w) if (length( .iphi12) && any( .iphi12 > propY1.eq.0)) warning("iphi12 must be less than the sample proportion of Y1==0") if (length( .iphi12) && any( .iphi12 > propY2.eq.0)) warning("iphi12 must be less than the sample proportion of Y2==0") if (!length(etastart)) { pstar.init <- ((mu[, 3]+mu[, 4]) + (mu[, 2]+mu[, 4])) / 2 phi.init <- if (length(.iphi12)) rep(.iphi12, len = n) else min(propY1.eq.0 * 0.95, propY2.eq.0 * 0.95, pstar.init/1.5) oratio.init <- if (length( .ioratio)) rep( .ioratio, len = n) else mu[, 4]*mu[, 1]/(mu[, 2]*mu[, 3]) mu12.init <- if (length(.imu12)) rep(.imu12, len = n) else pstar.init / (1-phi.init) etastart <- cbind( theta2eta(mu12.init, .lmu12 , earg = .emu12 ), theta2eta(phi.init, .lphi12, earg = .ephi12), theta2eta(oratio.init, .loratio, earg = .eoratio)) } }), list( .lmu12 = lmu12, .lphi12 = lphi12, .loratio = loratio, .emu12 = emu12, .ephi12 = ephi12, .eoratio = eoratio, .imu12 = imu12, .iphi12 = iphi12, .ioratio = ioratio ))), linkinv = eval(substitute(function(eta, extra = NULL) { A1vec <- eta2theta(eta[, 1], .lmu12 , earg = .emu12 ) phivec <- eta2theta(eta[, 2], .lphi12, earg = .ephi12) pmargin <- matrix((1 - phivec) * A1vec, nrow(eta), 2) oratio <- eta2theta(eta[, 3], .loratio, earg = .eoratio) a.temp <- 1 + (pmargin[, 1]+pmargin[, 2])*(oratio-1) b.temp <- -4 * oratio * (oratio-1) * pmargin[, 1] * pmargin[, 2] temp <- sqrt(a.temp^2 + b.temp) pj4 <- ifelse(abs(oratio-1) < .tol, pmargin[, 1]*pmargin[, 2], (a.temp-temp)/(2*(oratio-1))) pj2 <- pmargin[, 2] - pj4 pj3 <- pmargin[, 1] - pj4 cbind("00" = 1-pj4-pj2-pj3, "01" = pj2, "10" = pj3, "11" = pj4) }, list( .tol = tol, .lmu12 = lmu12, .lphi12 = lphi12, .loratio = loratio, .emu12 = emu12, .ephi12 = ephi12, .eoratio = eoratio ))), last = eval(substitute(expression({ misc$link <- c("mu12"= .lmu12 , "phi12" = .lphi12, "oratio" = .loratio) misc$earg <- list("mu12"= .emu12 , "phi12" = .ephi12, "oratio" = .eoratio) misc$tol <- .tol misc$expected <- TRUE misc$addRidge <- .addRidge }), list( .tol = tol, .addRidge = addRidge, .lmu12 = lmu12, .lphi12 = lphi12, .loratio = loratio, .emu12 = emu12, .ephi12 = ephi12, .eoratio = eoratio ))), loglikelihood = function(mu, y, w, 
residuals = FALSE, eta, extra = NULL) if (residuals) stop("loglikelihood residuals ", "not implemented yet") else { ycounts <- if (is.numeric(extra$orig.w)) y * w / extra$orig.w else y * w # Convert proportions to counts nvec <- if (is.numeric(extra$orig.w)) round(w / extra$orig.w) else round(w) smallno <- 1.0e4 * .Machine$double.eps if (max(abs(ycounts - round(ycounts))) > smallno) warning("converting 'ycounts' to integer in @loglikelihood") ycounts <- round(ycounts) sum((if (is.numeric(extra$orig.w)) extra$orig.w else 1) * dmultinomial(x = ycounts, size = nvec, prob = mu, log = TRUE, dochecking = FALSE)) }, vfamily = c("zipebcom"), deriv = eval(substitute(expression({ A1vec <- eta2theta(eta[, 1], .lmu12 , earg = .emu12 ) smallno <- .Machine$double.eps^(2/4) A1vec[A1vec > 1.0 -smallno] <- 1.0 - smallno phivec <- eta2theta(eta[, 2], .lphi12, earg = .ephi12) pmargin <- matrix((1 - phivec) * A1vec, n, 2) oratio <- eta2theta(eta[, 3], .loratio, earg = .eoratio) Vab <- 1 / (1/mu[, 1] + 1/mu[, 2] + 1/mu[, 3] + 1/mu[, 4]) Vabc <- 1/mu[, 1] + 1/mu[, 2] denom3 <- 2 * oratio * mu[, 2] + mu[, 1] + mu[, 4] temp1 <- oratio * mu[, 2] + mu[, 4] dp11star.dp1unstar <- 2*(1-phivec)*Vab * Vabc dp11star.dphi1 <- -2 * A1vec * Vab * Vabc dp11star.doratio <- Vab / oratio yandmu <- (y[, 1]/mu[, 1] - y[, 2]/mu[, 2] - y[, 3]/mu[, 3] + y[, 4]/mu[, 4]) dp11.doratio <- Vab / oratio check.dl.doratio <- yandmu * dp11.doratio cyandmu <- (y[, 2]+y[, 3])/mu[, 2] - 2 * y[, 1]/mu[, 1] dl.dmu1 <- dp11star.dp1unstar * yandmu + (1-phivec) * cyandmu dl.dphi1 <- dp11star.dphi1 * yandmu - A1vec * cyandmu dl.doratio <- check.dl.doratio dthetas.detas = cbind(dtheta.deta(A1vec, .lmu12 , earg = .emu12 ), dtheta.deta(phivec, .lphi12, earg = .ephi12), dtheta.deta(oratio, .loratio, earg = .eoratio)) c(w) * cbind(dl.dmu1, dl.dphi1, dl.doratio) * dthetas.detas }), list( .lmu12 = lmu12, .lphi12 = lphi12, .loratio = loratio, .emu12 = emu12, .ephi12 = ephi12, .eoratio = eoratio ))), weight = eval(substitute(expression({ wz <- matrix(0, n, 4) alternwz11 <- 2 * (1-phivec)^2 * (2/mu[, 1] + 1/mu[, 2] - 2*Vab*Vabc^2) * (dthetas.detas[, 1])^2 wz[, iam(1, 1, M)] <- alternwz11 alternwz22 <- 2* A1vec^2 *(2/mu[, 1] + 1/mu[, 2] - 2*Vab*Vabc^2) * (dthetas.detas[, 2])^2 wz[, iam(2, 2, M)] <- alternwz22 alternwz12 <- -2*A1vec*(1-phivec)* (2/mu[, 1] + 1/mu[, 2] - 2*Vab*Vabc^2) * dthetas.detas[, 1] * dthetas.detas[, 2] wz[, iam(1, 2, M)] <- alternwz12 alternwz33 <- (Vab / oratio^2) * dthetas.detas[, 3]^2 wz[, iam(3, 3, M)] <- alternwz33 wz[, 1:2] <- wz[, 1:2] * (1 + .addRidge) c(w) * wz }), list( .addRidge = addRidge )))) } if (FALSE) lusted68 <- function(lrhopos = "loge", lrhoneg = "loge", erhopos = list(), erhoneg = list(), irhopos = NULL, irhoneg = NULL, iprob1 = NULL, iprob2 = NULL, zero = NULL) { lrhopos <- as.list(substitute(lrhopos)) erhopos <- link2list(lrhopos) lrhopos <- attr(erhopos, "function.name") lrhoneg <- as.list(substitute(lrhoneg)) erhoneg <- link2list(lrhoneg) lrhoneg <- attr(erhoneg, "function.name") new("vglmff", blurb = c("Lusted (1968)'s model\n", "Links: ", namesof("rhopos", lrhopos, earg = erhopos), ", ", namesof("rhoneg", lrhoneg, earg = erhoneg)), initialize = eval(substitute(expression({ eval(process.binomial2.data.vgam) predictors.names <- c( namesof("rhopos", .lrhopos, earg = .erhopos, short = TRUE), namesof("rhoneg", .lrhoneg, earg = .erhoneg, short = TRUE)) if (!length(etastart)) { nnn1 <- round(w * (y[, 1] + y[, 2])) nnn2 <- round(w * (y[, 3] + y[, 4])) init.pee1 <- if (length( .iprob1 )) rep( .iprob1 , len = n) else mu[, 
1] / (mu[, 1] + mu[, 2]) init.pee2 <- if (length( .iprob2 )) rep( .iprob2 , len = n) else mu[, 3] / (mu[, 3] + mu[, 4]) init.rhopos <- pmax(1.1, init.pee1 / init.pee2) # Should be > 1 init.rhoneg <- pmin(0.4, (1 - init.pee1) / (1 - init.pee2)) # c. 0 if (length( .irhopos)) init.rhopos <- rep( .irhopos , len = n) if (length( .irhoneg)) init.rhoneg <- rep( .irhoneg , len = n) etastart <- cbind(theta2eta(init.rhopos, .lrhopos, earg = .erhopos), theta2eta(init.rhoneg, .lrhoneg, earg = .erhoneg)) } }), list( .lrhopos = lrhopos, .lrhoneg = lrhoneg, .erhopos = erhopos, .erhoneg = erhoneg, .iprob1 = iprob1, .iprob2 = iprob2, .irhopos = irhopos, .irhoneg = irhoneg ))), linkinv = eval(substitute(function(eta, extra = NULL) { rhopos <- eta2theta(eta[, 1], .lrhopos, earg = .erhopos) rhoneg <- eta2theta(eta[, 2], .lrhoneg, earg = .erhoneg) pee2 <- (1 - rhoneg) / (rhopos - rhoneg) pee1 <- pee2 * rhopos cbind(rhopos, rhoneg, "mu1" = pee1, "mu2" = pee2) }, list( .lrhopos = lrhopos, .lrhoneg = lrhoneg, .erhopos = erhopos, .erhoneg = erhoneg ))), last = eval(substitute(expression({ misc$link <- c("rhopos" = .lrhopos, "rhoneg" = .lrhoneg ) misc$earg <- list("rhopos" = .erhopos, "rhoneg" = .erhoneg ) misc$expected <- TRUE }), list( .lrhopos = lrhopos, .lrhoneg = lrhoneg, .erhopos = erhopos, .erhoneg = erhoneg, .irhopos = irhopos, .irhoneg = irhoneg ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { rhopos <- eta2theta(eta[, 1], .lrhopos, earg = .erhopos) rhoneg <- eta2theta(eta[, 2], .lrhoneg, earg = .erhoneg) pee2 <- (1 - rhoneg) / (rhopos - rhoneg) pee1 <- pee2 * rhopos if (min(pee1) <= 0.5) { warning("too small pee1 values") pee1[pee1 <= 0.5] <- 0.66 } if (max(pee1) >= 1) { warning("large pee1 values") pee1[pee1 >= 1] <- 0.99 } if (min(pee2) <= 0.0) { warning("too small pee2 values") pee2[pee2 <= 0.0] <- 0.01 } if (max(pee2) >= 0.5) { warning("too large pee2 values") pee2[pee2 >= 0.5] <- 0.44 } if (residuals) stop("loglikelihood residuals not implemented yet") else { nnn1 <- round(w * (y[, 1] + y[, 2])) nnn2 <- round(w * (y[, 3] + y[, 4])) index1 <- nnn1 > 0 index2 <- nnn2 > 0 print(head(dbinom(round(w[index1] * y[index1, 1]), nnn1[index1], prob = pee1[index1], log = TRUE), 18)) sum(dbinom(round(w[index1] * y[index1, 1]), nnn1[index1], prob = pee1[index1], log = TRUE)) + sum(dbinom(round(w[index2] * y[index2, 3]), nnn2[index2], prob = pee2[index2], log = TRUE)) } }, list( .lrhopos = lrhopos, .lrhoneg = lrhoneg, .erhopos = erhopos, .erhoneg = erhoneg, .irhopos = irhopos, .irhoneg = irhoneg ))), vfamily = c("lusted68", "binom2"), deriv = eval(substitute(expression({ rhopos <- eta2theta(eta[, 1], .lrhopos, earg = .erhopos) rhoneg <- eta2theta(eta[, 2], .lrhoneg, earg = .erhoneg) pee2 <- (1 - rhoneg) / (rhopos - rhoneg) pee1 <- pee2 * rhopos nnn1 <- round(w * (y[, 1] + y[, 3])) nnn2 <- round(w * (y[, 2] + y[, 4])) rhodif <- rhopos - rhoneg drhopos.deta <- dtheta.deta(rhopos, .lrhopos, earg = .erhopos) drhoneg.deta <- dtheta.deta(rhoneg, .lrhoneg, earg = .erhoneg) dl1.drhopos <- y[, 1] / rhopos + y[, 2] / (rhopos - 1) - 1 / rhodif dl1.drhoneg <- -y[, 1] / (1 - rhoneg) + y[, 2] / rhoneg + 1 / rhodif dl2.drhopos <- y[, 4] / (rhopos - 1) - 1 / rhodif dl2.drhoneg <- -y[, 3] / (1 - rhoneg) + 1 / rhodif cbind((nnn1 * dl1.drhopos + nnn2 * dl2.drhopos) * drhopos.deta, (nnn1 * dl1.drhoneg + nnn2 * dl2.drhoneg) * drhoneg.deta) }), list( .lrhopos = lrhopos, .lrhoneg = lrhoneg, .erhopos = erhopos, .erhoneg = erhoneg, .irhopos = irhopos, .irhoneg = irhoneg ))), weight = 
eval(substitute(expression({ wz <- matrix(0, n, dimm(M)) # 3 = dimm(2) wz[, iam(1, 1, M)] <- wz[, iam(1, 1, M)] + nnn1 * (pee1 / rhopos^2 + (1 - pee1) / (rhopos - 1)^2 - 1 / rhodif^2) wz[, iam(2, 2, M)] <- wz[, iam(2, 2, M)] + nnn1 * (pee1 / (1 - rhoneg)^2 + (1 - pee1) / rhoneg^2 - 1 / rhodif^2) wz[, iam(1, 2, M)] <- wz[, iam(1, 2, M)] + nnn1 / rhodif^2 wz[, iam(1, 1, M)] <- wz[, iam(1, 1, M)] + nnn2 * ((1 - pee2) / (rhopos - 1)^2 - 1 / rhodif^2) wz[, iam(2, 2, M)] <- wz[, iam(2, 2, M)] + nnn2 * (pee2 / (1 - rhoneg)^2 - 1 / rhodif^2) wz[, iam(1, 2, M)] <- wz[, iam(1, 2, M)] + nnn2 / rhodif^2 wz[, iam(1, 1, M)] <- wz[, iam(1, 1, M)] * drhopos.deta^2 wz[, iam(2, 2, M)] <- wz[, iam(2, 2, M)] * drhoneg.deta^2 wz[, iam(1, 2, M)] <- wz[, iam(1, 2, M)] * drhopos.deta * drhoneg.deta wz }), list( .lrhopos = lrhopos, .lrhoneg = lrhoneg, .erhopos = erhopos, .erhoneg = erhoneg, .irhopos = irhopos, .irhoneg = irhoneg )))) } binom2.Rho <- function(rho = 0, imu1 = NULL, imu2 = NULL, exchangeable = FALSE, nsimEIM = NULL) { lmu12 <- "probit" emu12 <- list() if (is.Numeric(nsimEIM)) { if (!is.Numeric(nsimEIM, allowable.length = 1, integer.valued = TRUE)) stop("bad input for argument 'nsimEIM'") if (nsimEIM <= 100) warning("'nsimEIM' should be an integer greater than 100") } new("vglmff", blurb = c("Bivariate probit model with rho = ", format(rho), "\n", "Links: ", namesof("mu1", lmu12, earg = emu12), ", ", namesof("mu2", lmu12, earg = emu12)), constraints = eval(substitute(expression({ constraints <- cm.vgam(matrix(c(1, 1), 2, 1), x, .exchangeable , constraints, apply.int = TRUE) }), list( .exchangeable = exchangeable ))), deviance = Deviance.categorical.data.vgam, initialize = eval(substitute(expression({ eval(process.binomial2.data.vgam) predictors.names <- c( namesof("mu1", .lmu12 , earg = .emu12 , short = TRUE), namesof("mu2", .lmu12 , earg = .emu12 , short = TRUE)) if (is.null( .nsimEIM)) { save.weight <- control$save.weight <- FALSE } if (is.null(etastart)) { mu1.init= if (is.Numeric(.imu1)) rep(.imu1, length = n) else mu[, 3] + mu[, 4] mu2.init= if (is.Numeric(.imu2)) rep(.imu2, length = n) else mu[, 2] + mu[, 4] etastart <- cbind(theta2eta(mu1.init, .lmu12 , earg = .emu12 ), theta2eta(mu2.init, .lmu12 , earg = .emu12 )) } }), list( .lmu12 = lmu12, .emu12 = emu12, .nsimEIM = nsimEIM, .imu1 = imu1, .imu2 = imu2 ))), linkinv = eval(substitute(function(eta, extra = NULL) { pmargin <- cbind(eta2theta(eta[, 1], .lmu12 , earg = .emu12 ), eta2theta(eta[, 2], .lmu12 , earg = .emu12 )) rhovec <- rep( .rho , len = nrow(eta)) p11 <- pnorm2(eta[, 1], eta[, 2], cov12 = rhovec) p01 <- pmin(pmargin[, 2] - p11, pmargin[, 2]) p10 <- pmin(pmargin[, 1] - p11, pmargin[, 1]) p00 <- 1 - p01 - p10 - p11 ansmat <- abs(cbind("00"=p00, "01"=p01, "10"=p10, "11"=p11)) ansmat / rowSums(ansmat) }, list( .lmu12 = lmu12, .emu12 = emu12, .rho = rho ))), last = eval(substitute(expression({ misc$link <- c(mu1 = .lmu12 , mu2 = .lmu12 ) misc$earg <- list(mu1 = .emu12 , mu2 = .emu12 ) misc$nsimEIM <- .nsimEIM misc$expected <- TRUE misc$rho <- .rho }), list( .lmu12 = lmu12, .emu12 = emu12, .rho = rho, .nsimEIM = nsimEIM ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { if (residuals) stop("loglikelihood residuals not implemented yet") else { ycounts <- if (is.numeric(extra$orig.w)) y * w / extra$orig.w else y * w # Convert proportions to counts nvec <- if (is.numeric(extra$orig.w)) round(w / extra$orig.w) else round(w) smallno <- 1.0e4 * .Machine$double.eps if (max(abs(ycounts - 
round(ycounts))) > smallno) warning("converting 'ycounts' to integer in @loglikelihood") ycounts <- round(ycounts) sum((if (is.numeric(extra$orig.w)) extra$orig.w else 1) * dmultinomial(x = ycounts, size = nvec, prob = mu, log = TRUE, dochecking = FALSE)) } }, list( .rho = rho ))), vfamily = c("binom2.Rho", "binom2"), deriv = eval(substitute(expression({ pmargin <- cbind(eta2theta(eta[, 1], .lmu12 , earg = .emu12 ), eta2theta(eta[, 2], .lmu12 , earg = .emu12 )) rhovec <- rep( .rho , len = nrow(eta)) p11 <- pnorm2(eta[, 1], eta[, 2], cov12 = rhovec) p01 <- pmargin[, 2]-p11 p10 <- pmargin[, 1]-p11 p00 <- 1-p01-p10-p11 ABmat <- (eta[, 1:2] - rhovec * eta[, 2:1]) / sqrt(pmax(1e5 * .Machine$double.eps, 1.0 - rhovec^2)) PhiA <- pnorm(ABmat[, 1]) PhiB <- pnorm(ABmat[, 2]) onemPhiA <- pnorm(ABmat[, 1], lower.tail = FALSE) onemPhiB <- pnorm(ABmat[, 2], lower.tail = FALSE) smallno <- 1000 * .Machine$double.eps p00[p00 < smallno] <- smallno p01[p01 < smallno] <- smallno p10[p10 < smallno] <- smallno p11[p11 < smallno] <- smallno dprob00 <- dnorm2(eta[, 1], eta[, 2], rho = rhovec) dl.dprob1 <- PhiB*(y[, 4]/p11-y[, 2]/p01) + onemPhiB*(y[, 3]/p10-y[, 1]/p00) dl.dprob2 <- PhiA*(y[, 4]/p11-y[, 3]/p10) + onemPhiA*(y[, 2]/p01-y[, 1]/p00) dprob1.deta <- dtheta.deta(pmargin[, 1], .lmu12 , earg = .emu12 ) dprob2.deta <- dtheta.deta(pmargin[, 2], .lmu12 , earg = .emu12 ) dthetas.detas <- cbind(dprob1.deta, dprob2.deta) c(w) * cbind(dl.dprob1, dl.dprob2) * dthetas.detas }), list( .lmu12 = lmu12, .emu12 = emu12, .rho = rho ))), weight = eval(substitute(expression({ if (is.null( .nsimEIM)) { ned2l.dprob1prob1 <- PhiB^2 *(1/p11+1/p01) + onemPhiB^2 *(1/p10+1/p00) ned2l.dprob2prob2 <- PhiA^2 *(1/p11+1/p10) + onemPhiA^2 *(1/p01+1/p00) ned2l.dprob1prob2 <- PhiA * (PhiB/p11 - onemPhiB/p10) + onemPhiA * (onemPhiB/p00 - PhiB/p01) wz <- matrix(0, n, dimm(M)) # 6=dimm(M) wz[, iam(1, 1, M)] <- ned2l.dprob1prob1 * dprob1.deta^2 wz[, iam(2, 2, M)] <- ned2l.dprob2prob2 * dprob2.deta^2 wz[, iam(1, 2, M)] <- ned2l.dprob1prob2 * dprob1.deta * dprob2.deta } else { run.varcov <- 0 ind1 <- iam(NA, NA, M = M, both = TRUE, diag = TRUE) for (ii in 1:( .nsimEIM )) { ysim <- rbinom2.rho(n = n, mu1 = pmargin[, 1], mu2 = pmargin[, 2], twoCols = FALSE, rho = rhovec) dl.dprob1 <- PhiB * (ysim[, 4]/p11-ysim[, 2]/p01) + onemPhiB * (ysim[, 3]/p10-ysim[, 1]/p00) dl.dprob2 <- PhiA * (ysim[, 4]/p11-ysim[, 3]/p10) + onemPhiA * (ysim[, 2]/p01-ysim[, 1]/p00) rm(ysim) temp3 <- cbind(dl.dprob1, dl.dprob2) run.varcov <- ((ii-1) * run.varcov + temp3[, ind1$row.index] * temp3[, ind1$col.index]) / ii } wz <- if (intercept.only) matrix(colMeans(run.varcov), n, ncol(run.varcov), byrow = TRUE) else run.varcov wz <- wz * dthetas.detas[, ind1$row] * dthetas.detas[, ind1$col] } c(w) * wz }), list( .nsimEIM = nsimEIM )))) } binom2.rho.ss <- function(lrho = "rhobit", lmu = "probit", # added 20120817 imu1 = NULL, imu2 = NULL, irho = NULL, imethod = 1, zero = 3, exchangeable = FALSE, grho = seq(-0.95, 0.95, by = 0.05)) { lrho <- as.list(substitute(lrho)) e.rho <- link2list(lrho) l.rho <- attr(e.rho, "function.name") lmu <- as.list(substitute(lmu)) emu <- link2list(lmu) lmu <- attr(emu, "function.name") if (lmu != "probit") warning("argument 'lmu' should be 'probit'. Changing it.") lmu12 <- "probit" # But emu may contain some arguments. 
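  # Usage sketch for binom2.rho.ss() (not run; 'sdata', 'x2', 'y1', 'y2'
  # are hypothetical names).  Y2 is observed only when Y1 = 1, as in a
  # sample-selection problem, so the response may be a 2-column 0/1
  # matrix whose second column is ignored whenever the first is 0:
  #   fit <- vglm(cbind(y1, y2) ~ x2, binom2.rho.ss, data = sdata)
  #   coef(fit, matrix = TRUE)   # "rho" is the third linear predictor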
emu12 <- emu # list() if (!is.Numeric(imethod, allowable.length = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("argument 'imethod' must be 1 or 2") new("vglmff", blurb = c("Bivariate probit model with sample selection\n", "Links: ", namesof("mu1", lmu12, earg = emu12), ", ", namesof("mu2", lmu12, earg = emu12), ", ", namesof("rho", l.rho, earg = e.rho)), constraints = eval(substitute(expression({ constraints <- cm.vgam(matrix(c(1, 1, 0, 0, 0, 1), 3, 2), x, .exchangeable , constraints, apply.int = TRUE) constraints <- cm.zero.vgam(constraints, x, .zero , M) }), list( .exchangeable = exchangeable, .zero = zero ))), infos = eval(substitute(function(...) { list(Musual = 3, multipleResponses = FALSE, zero = .zero ) }, list( .zero = zero ))), initialize = eval(substitute(expression({ if (!is.matrix(y)) stop("response must be a 2- or 3-column matrix") ncoly <- ncol(y) temp5 <- w.y.check(w = w, y = y, ncol.w.min = 1, ncol.w.max = 1, ncol.y.min = 2, ncol.y.max = 3, Is.integer.y = TRUE, Is.nonnegative.y = TRUE, out.wy = TRUE, colsyperw = ncoly, maximize = TRUE) w <- temp5$w y <- temp5$y if (!all(c(y) == 0 | c(y) == 1)) stop("response matrix must have values 0 and 1 only") if (ncoly == 2) { extra$ymat2col <- y y <- cbind("0" = 1 - y[, 1], "10" = y[, 1] * (1 - y[, 2]), "11" = y[, 1] * y[, 2]) } else { if(!all(rowSums(y) == 1)) stop("response matrix must have two 0s and one 1 in each row") y1vec <- 1 - y[, 1] # Not a 0 means a 1. y2vec <- ifelse(y1vec == 1, y[, 3], 0) extra$ymat2col <- cbind(y1vec, y2vec) } predictors.names <- c( namesof("mu1", .lmu12 , earg = .emu12 , short = TRUE), namesof("mu2", .lmu12 , earg = .emu12 , short = TRUE), namesof("rho", .l.rho , earg = .e.rho, short = TRUE)) ycounts <- y nvec <- 1 if (!length(etastart)) { if (length(mustart)) { mu1.init <- mustart[, 1] mu2.init <- mustart[, 2] } else if ( .imethod == 1) { mu1.init <- weighted.mean(extra$ymat2col[, 1], c(w)) index1 <- (extra$ymat2col[, 1] == 1) mu2.init <- weighted.mean(extra$ymat2col[index1, 2], w[index1, 1]) mu1.init <- rep(mu1.init, len = n) mu2.init <- rep(mu2.init, len = n) } else if ( .imethod == 2) { warning("not working yet2") glm1.fit <- glm(ycounts ~ x - 1, weights = c(w), fam = binomial("probit")) glm2.fit <- glm(ycounts[, 2:1] ~ x - 1, weights = c(w), fam = binomial("probit")) mu1.init <- fitted(glm1.fit) mu2.init <- fitted(glm2.fit) } else { stop("bad value for argument 'imethod'") } if (length( .imu1 )) mu1.init <- rep( .imu1 , length = n) if (length( .imu2 )) mu2.init <- rep( .imu2 , length = n) binom2.rho.ss.Loglikfun <- function(rhoval, y, x, w, extraargs) { init.mu1 <- extraargs$initmu1 init.mu2 <- extraargs$initmu2 ymat2col <- extraargs$ymat2col nvec <- extraargs$nvec eta1 <- qnorm(init.mu1) eta2 <- qnorm(init.mu2) smallno <- 1000 * .Machine$double.eps p11 <- pmax(smallno, pnorm2(eta1, eta2, cov12 = rhoval)) p10 <- pmax(smallno, pnorm( eta1) - p11) p0 <- pmax(smallno, pnorm(-eta1)) mumat <- abs(cbind("0" = p0, "10" = p10, "11" = p11)) # rows sum to unity smallpos <- 1.0e-100 mumat[mumat < smallpos] <- smallpos ycounts <- y # n x 3 use.mu <- mumat # cbind(p0, p10, p11) retval <- sum(c(w) * dmultinomial(x = ycounts, size = nvec, prob = use.mu, # mumat, log = TRUE, dochecking = FALSE)) retval } rho.grid <- .grho # seq(-0.95, 0.95, len = 31) try.this <- getMaxMin(rho.grid, objfun = binom2.rho.ss.Loglikfun, y = y, x = x, w = w, extraargs = list( ymat2col = extra$ymat2col, initmu1 = mu1.init, initmu2 = mu2.init, nvec = nvec )) rho.init <- if (is.Numeric( .irho )) rep( .irho , len = n) else 
{ try.this } etastart <- cbind(theta2eta(mu1.init, .lmu12 , earg = .emu12 ), theta2eta(mu2.init, .lmu12 , earg = .emu12 ), theta2eta(rho.init, .l.rho , earg = .e.rho )) } mustart <- NULL # Since etastart has been computed and/or no @linkfun. }), list( .lmu12 = lmu12, .l.rho = l.rho, .emu12 = emu12, .e.rho = e.rho, .grho = grho, .irho = irho, .imethod = imethod, .imu1 = imu1, .imu2 = imu2 ))), linkinv = eval(substitute(function(eta, extra = NULL) { rhovec <- eta2theta(eta[, 3], .l.rho , earg = .e.rho ) smallno <- 1000 * .Machine$double.eps p11 <- pmax(smallno, pnorm2(eta[, 1], eta[, 2], cov12 = rhovec)) p10 <- pmax(smallno, pnorm( eta[, 1]) - p11) p0 <- pmax(smallno, pnorm(-eta[, 1])) sumprob <- p11 + p10 + p0 p11 <- p11 / sumprob p10 <- p10 / sumprob p0 <- p0 / sumprob ansmat <- abs(cbind("0" = p0, # p0 == P(Y_1 = 0) "10" = p10, "11" = p11)) ansmat }, list( .lmu12 = lmu12, .l.rho = l.rho, .emu12 = emu12, .e.rho = e.rho ))), last = eval(substitute(expression({ misc$link <- c(mu1 = .lmu12 , mu2 = .lmu12 , rho = .l.rho ) misc$earg <- list(mu1 = .emu12 , mu2 = .emu12 , rho = .e.rho ) misc$expected <- TRUE misc$multipleResponses <- FALSE }), list( .lmu12 = lmu12, .l.rho = l.rho, .emu12 = emu12, .e.rho = e.rho ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { if (residuals) stop("loglikelihood residuals not implemented yet") else { ycounts <- y # n x 3 nvec <- 1 smallno <- 1000 * .Machine$double.eps rhovec <- eta2theta(eta[, 3], .l.rho , earg = .e.rho ) p11 <- pmax(smallno, pnorm2(eta[, 1], eta[, 2], cov12 = rhovec)) p10 <- pmax(smallno, pnorm( eta[, 1]) - p11) p0 <- pmax(smallno, pnorm(-eta[, 1])) sumprob <- p11 + p10 + p0 p11 <- p11 / sumprob p10 <- p10 / sumprob p0 <- p0 / sumprob sum(c(w) * dmultinomial(x = ycounts, size = nvec, prob = mu, # use.mu, log = TRUE, dochecking = FALSE)) } }, list( .l.rho = l.rho, .e.rho = e.rho ))), vfamily = c("binom2.rho.ss", "binom2"), deriv = eval(substitute(expression({ nvec <- 1 ycounts <- extra$ymat2col pmargin <- cbind(eta2theta(eta[, 1], .lmu12 , earg = .emu12 ), eta2theta(eta[, 2], .lmu12 , earg = .emu12 )) rhovec <- eta2theta(eta[, 3], .l.rho , earg = .e.rho ) smallno <- 1000 * .Machine$double.eps p11 <- pmax(smallno, pnorm2(eta[, 1], eta[, 2], cov12 = rhovec)) p10 <- pmax(smallno, pnorm( eta[, 1]) - p11) p0 <- pmax(smallno, pnorm(-eta[, 1])) sumprob <- p11 + p10 + p0 p11 <- p11 / sumprob p10 <- p10 / sumprob p0 <- p0 / sumprob BAmat <- (eta[, 1:2] - rhovec * eta[, 2:1]) / sqrt(pmax(1e5 * .Machine$double.eps, 1.0 - rhovec^2)) PhiA <- pnorm(BAmat[, 2]) PhiB <- pnorm(BAmat[, 1]) onemPhiA <- pnorm(BAmat[, 2], lower.tail = FALSE) onemPhiB <- pnorm(BAmat[, 1], lower.tail = FALSE) mycode <- FALSE # zz mycode <- TRUE # zz if (mycode) { dprob00 <- dnorm2(eta[, 1], eta[, 2], rho = rhovec) dl.dprob1 <- PhiA * ycounts[, 1] * ycounts[, 2] / p11 + onemPhiA * ycounts[, 1] * (1 - ycounts[, 2]) / p10 - (1 - ycounts[, 1]) / p0 dl.dprob2 <- PhiB * ( ycounts[, 1] * ycounts[, 2] / p11 - ycounts[, 1] * (1 - ycounts[, 2]) / p10) dl.drho <- dprob00 * ( ycounts[, 1] * ycounts[, 2] / p11 - ycounts[, 1] * (1 - ycounts[, 2]) / p10) dprob1.deta <- dtheta.deta(pmargin[, 1], .lmu12 , earg = .emu12 ) dprob2.deta <- dtheta.deta(pmargin[, 2], .lmu12 , earg = .emu12 ) drho...deta <- dtheta.deta(rhovec, .l.rho , earg = .e.rho ) ans.deriv <- c(w) * cbind(dl.dprob1 * dprob1.deta, dl.dprob2 * dprob2.deta, dl.drho * drho...deta) } # else { eta1 <- eta[, 1] # dat1 %*% params[1:X1.d2] eta2 <- eta[, 2] # dat2 %*% params[(X1.d2 + 1):(X1.d2 + 
    corr.st <- eta[, 3]  # params[(X1.d2 + X2.d2 + 1)]
    corr <- rhovec  # tanh(corr.st)
    dat <- ycounts
    y1.y2  <- dat[, 1] * dat[, 2]
    y1.cy2 <- dat[, 1] * (1 - dat[, 2])
    cy1 <- (1 - dat[, 1])
    d.r <- 1 / sqrt(pmax(10000 * .Machine$double.eps, 1 - corr^2))
    A <- pnorm((eta2 - corr * eta1) * d.r)
    A.c <- 1 - A
    B <- pnorm((eta1 - corr * eta2) * d.r)
    p11 <- pmax(pnorm2(eta1, eta2, cov12 = corr), 1000 * .Machine$double.eps)
    p10 <- pmax(pnorm( eta1) - p11, 1000 * .Machine$double.eps)
    p0  <- pmax(pnorm(-eta1),       1000 * .Machine$double.eps)
    d.n1 <- dnorm(eta1)
    d.n2 <- dnorm(eta2)
    d.n1n2 <- dnorm2(eta1, eta2, rho = corr)
    drh.drh.st <- 4 * exp(2 * corr.st) / (exp(2 * corr.st) + 1)^2
    dl.dbe1 <- d.n1 * (y1.y2/p11 * A + y1.cy2/p10 * A.c - cy1/p0)
    dl.dbe2 <- d.n2 * B * (y1.y2/p11 - y1.cy2/p10)
    dl.drho <- d.n1n2 * (y1.y2/p11 - y1.cy2/p10) * drh.drh.st
    ans.deriv2 <- c(w) * cbind(dl.dbe1, dl.dbe2, dl.drho)
    # }
    ans.deriv
  }), list( .lmu12 = lmu12, .l.rho = l.rho,
            .emu12 = emu12, .e.rho = e.rho ))),

  weight = eval(substitute(expression({
    if (mycode) {
      ned2l.dprob1prob1 <- PhiA^2 / p11 + onemPhiA^2 / p10 + 1 / p0
      ned2l.dprob2prob2 <- (1/p11 + 1/p10) * PhiB^2
      ned2l.drho2 <- (1/p11 + 1/p10) * dprob00^2
      ned2l.dprob1prob2 <- PhiA * PhiB / p11 - onemPhiA * PhiB / p10
      ned2l.dprob1rho <- (PhiA/p11 - onemPhiA/p10) * dprob00
      ned2l.dprob2rho <- (1/p11 + 1/p10) * PhiB * dprob00
      wz <- matrix(0, n, dimm(M))  # 6 = dimm(M)
      wz[, iam(1, 1, M)] <- ned2l.dprob1prob1 * dprob1.deta^2
      wz[, iam(2, 2, M)] <- ned2l.dprob2prob2 * dprob2.deta^2
      wz[, iam(3, 3, M)] <- ned2l.drho2 * drho...deta^2
      wz[, iam(1, 2, M)] <- ned2l.dprob1prob2 * dprob1.deta * dprob2.deta
      wz[, iam(1, 3, M)] <- ned2l.dprob1rho * dprob1.deta * drho...deta
      wz[, iam(2, 3, M)] <- ned2l.dprob2rho * dprob2.deta * drho...deta
    }
    # else {
    # Note: the alternative computation below fills 'WZ' but is not
    # returned; 'c(w) * wz' from the branch above is the value used.
    ned2l.be1.be1 <- (A^2/p11 + A.c^2/p10 + 1/p0) * d.n1^2
    ned2l.be2.be2 <- ( 1/p11 + 1/p10) * B^2 * d.n2^2
    ned2l.rho.rho <- ( 1/p11 + 1/p10) * d.n1n2^2 * drh.drh.st^2
    ned2l.be1.be2 <- (A * B/p11 - A.c * B/p10) * d.n1 * d.n2
    ned2l.be1.rho <- (A * (1/p11) - A.c * (1/p10)) * d.n1n2 * d.n1 * drh.drh.st
    ned2l.be2.rho <- B * (1/p11 + 1/p10) * d.n1n2 * d.n2 * drh.drh.st
    WZ <- matrix(0, n, dimm(M))  # 6 = dimm(M)
    WZ[, iam(1, 1, M)] <- ned2l.be1.be1
    WZ[, iam(2, 2, M)] <- ned2l.be2.be2
    WZ[, iam(3, 3, M)] <- ned2l.rho.rho
    WZ[, iam(1, 2, M)] <- ned2l.be1.be2
    WZ[, iam(1, 3, M)] <- ned2l.be1.rho
    WZ[, iam(2, 3, M)] <- ned2l.be2.rho
    c(w) * wz
  }), list( .zero = zero ))))
}



# VGAM/R/family.basics.R

# These functions are
# Copyright (C) 1998-2013 T.W. Yee, University of Auckland.
# All rights reserved.
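# Illustrative note (a sketch, not from the original source): several
# utilities in this file index symmetric M x M working-weight matrices
# that are stored row-wise as n x dimm(M) matrices, with the diagonal
# elements first.  For example (not run), with M = 3:
#   dimm(3)       # 6 distinct elements in a symmetric 3 x 3 matrix
#   iam(1, 1, 3)  # 1: the diagonal occupies columns 1 to 3
#   iam(1, 2, 3)  # 4: off-diagonal elements follow the diagonal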
getind <- function(constraints, M, ncolx) { if (!length(constraints)) { constraints <- vector("list", ncolx) for (ii in 1:ncolx) constraints[[ii]] <- diag(M) } ans <- vector("list", M+1) names(ans) <- c(paste("eta", 1:M, sep = ""), "ncolX_vlm") temp2 <- matrix(unlist(constraints), nrow = M) for (kk in 1:M) { ansx <- NULL for (ii in 1:length(constraints)) { temp <- constraints[[ii]] isfox <- any(temp[kk, ] != 0) if (isfox) { ansx <- c(ansx, ii) } } ans[[kk]] <- list(xindex = ansx, X_vlmindex = (1:ncol(temp2))[temp2[kk,] != 0]) } ans[[M+1]] <- ncol(temp2) ans } cm.vgam <- function(cm, x, bool, constraints, apply.int = FALSE, overwrite = FALSE, cm.default = diag(nrow(cm)), # 20121226 cm.intercept.default = diag(nrow(cm)) # 20121226 ) { if (is.null(bool)) return(NULL) if (!is.matrix(cm)) stop("argument 'cm' is not a matrix") M <- nrow(cm) asgn <- attr(x, "assign") if(is.null(asgn)) stop("the 'assign' attribute is missing from 'x'; this ", "may be due to some missing values") # 20100306 nasgn <- names(asgn) ninasgn <- nasgn[nasgn != "(Intercept)"] if (!length(constraints)) { constraints <- vector("list", length(nasgn)) for (ii in 1:length(nasgn)) { constraints[[ii]] <- cm.default # diag(M) } names(constraints) <- nasgn if (any(nasgn == "(Intercept)")) constraints[["(Intercept)"]] <- cm.intercept.default } if (!is.list(constraints)) stop("argument 'constraints' must be a list") if (length(constraints) != length(nasgn) || any(sort(names(constraints)) != sort(nasgn))) { cat("\nnames(constraints)\n") print(names(constraints) ) cat("\nnames(attr(x, 'assign'))\n") print( nasgn ) stop("The above do not match; 'constraints' is half-pie") } if (is.logical(bool)) { if (bool) { if (any(nasgn == "(Intercept)") && apply.int) constraints[["(Intercept)"]] <- cm if (length(ninasgn)) for (ii in ninasgn) constraints[[ii]] <- cm } else { return(constraints) } } else { tbool <- terms(bool) if (attr(tbool, "response")) { ii <- attr(tbool, "factors") default <- dimnames(ii)[[1]] default <- default[1] default <- parse(text = default[1])[[1]] default <- as.logical(eval(default)) } else { default <- TRUE } tl <- attr(tbool, "term.labels") if (attr(tbool, "intercept")) tl <- c("(Intercept)", tl) for (ii in nasgn) { if ( default && any(tl == ii)) constraints[[ii]] <- cm if (!default && !any(tl == ii)) constraints[[ii]] <- cm } } constraints } cm.nointercept.vgam <- function(constraints, x, nointercept, M) { asgn <- attr(x, "assign") nasgn <- names(asgn) if (is.null(constraints)) { constraints <- vector("list", length(nasgn)) # list() names(constraints) <- nasgn } if (!is.list(constraints)) stop("'constraints' must be a list") for (ii in 1:length(asgn)) constraints[[nasgn[ii]]] <- if (is.null(constraints[[nasgn[ii]]])) diag(M) else eval(constraints[[nasgn[ii]]]) if (is.null(nointercept)) return(constraints) if (!is.numeric(nointercept)) stop("'nointercept' must be numeric") nointercept <- unique(sort(nointercept)) if (length(nointercept) == 0 || length(nointercept) >= M) stop("too few or too many values") if (any(nointercept < 1 | nointercept > M)) stop("'nointercept' out of range") if (nasgn[1] != "(Intercept)" || M == 1) stop("Need an (Intercept) constraint matrix with M>1") if (!all.equal(constraints[["(Intercept)"]], diag(M))) warning("Constraint matrix of (Intercept) not diagonal") temp <- constraints[["(Intercept)"]] temp <- temp[, -nointercept, drop = FALSE] constraints[["(Intercept)"]] <- temp constraints } cm.zero.vgam <- function(constraints, x, zero, M) { asgn <- attr(x, "assign") nasgn <- names(asgn) if 
(is.null(constraints)) { constraints <- vector("list", length(nasgn)) # list() names(constraints) <- nasgn } if (!is.list(constraints)) stop("'constraints' must be a list") for (ii in 1:length(asgn)) constraints[[nasgn[ii]]] <- if (is.null(constraints[[nasgn[ii]]])) diag(M) else eval(constraints[[nasgn[ii]]]) if (is.null(zero)) return(constraints) if (!is.numeric(zero)) stop("'zero' must be numeric") if (any(zero < 1 | zero > M)) stop("'zero' out of range") if (nasgn[1] != "(Intercept)") stop("cannot fit an intercept to a no-intercept model") if (2 <= length(constraints)) for (ii in 2:length(constraints)) { Hmatk <- constraints[[nasgn[ii]]] Hmatk[zero, ] <- 0 index <- NULL for (kk in 1:ncol(Hmatk)) if (all(Hmatk[,kk] == 0)) index <- c(index, kk) if (length(index) == ncol(Hmatk)) stop("constraint matrix has no columns!") if (!is.null(index)) Hmatk <- Hmatk[, -index, drop = FALSE] constraints[[nasgn[ii]]] <- Hmatk } constraints } process.constraints <- function(constraints, x, M, by.col = TRUE, specialCM = NULL) { asgn <- attr(x, "assign") nasgn <- names(asgn) if (is.null(constraints)) { constraints <- vector("list", length(nasgn)) for (ii in 1:length(nasgn)) constraints[[ii]] <- diag(M) names(constraints) <- nasgn } if (is.matrix(constraints)) constraints <- list(constraints) if (!is.list(constraints)) stop("'constraints' must be a list") lenconstraints <- length(constraints) if (lenconstraints > 0) for (ii in 1:lenconstraints) { constraints[[ii]] <- eval(constraints[[ii]]) if (!is.null (constraints[[ii]]) && !is.matrix(constraints[[ii]])) stop("'constraints[[", ii, "]]' is not a matrix") } if (is.null(names(constraints))) names(constraints) <- rep(nasgn, length.out = lenconstraints) temp <- if (!is.R()) list() else { junk <- vector("list", length(nasgn)) names(junk) <- nasgn junk } for (ii in 1:length(nasgn)) temp[[nasgn[ii]]] <- if (is.null(constraints[[nasgn[ii]]])) diag(M) else eval(constraints[[nasgn[ii]]]) for (ii in 1:length(asgn)) { if (!is.matrix(temp[[ii]])) { stop("not a constraint matrix") } if (ncol(temp[[ii]]) > M) stop("constraint matrix has too many columns") } if (!by.col) return(temp) constraints <- temp Blist <- vector("list", ncol(x)) for (ii in 1:length(asgn)) { cols <- asgn[[ii]] ictr <- 0 for (jay in cols) { ictr <- ictr + 1 cm <- if (is.list(specialCM) && any(nasgn[ii] == names(specialCM))) { slist <- specialCM[[(nasgn[ii])]] slist[[ictr]] } else constraints[[ii]] Blist[[jay]] <- cm } } names(Blist) <- dimnames(x)[[2]] Blist } trivial.constraints <- function(Blist, target = diag(M)) { if (is.null(Blist)) return(1) if (is.matrix(Blist)) Blist <- list(Blist) M <- dim(Blist[[1]])[1] if (!is.matrix(target)) stop("target is not a matrix") dimtar <- dim(target) trivc <- rep(1, length(Blist)) names(trivc) <- names(Blist) for (ii in 1:length(Blist)) { d <- dim(Blist[[ii]]) if (d[1] != dimtar[1]) trivc[ii] <- 0 if (d[2] != dimtar[2]) trivc[ii] <- 0 if (d[1] != M) trivc[ii] <- 0 if (length(Blist[[ii]]) != length(target)) trivc[ii] <- 0 if (trivc[ii] == 0) next if (!all(c(Blist[[ii]]) == c(target))) trivc[ii] <- 0 if (trivc[ii] == 0) next } trivc } add.constraints <- function(constraints, new.constraints, overwrite = FALSE, check = FALSE) { empty.list <- function(l) (is.null(l) || (is.list(l) && length(l) == 0)) if (empty.list(constraints)) if (is.list(new.constraints)) return(new.constraints) else return(list()) # Both NULL probably constraints <- as.list(constraints) new.constraints <- as.list(new.constraints) nc <- names(constraints) # May be NULL nn <- 
names(new.constraints) # May be NULL if (is.null(nc) || is.null(nn)) stop("lists must have names") if (any(nc == "") || any(nn == "")) stop("lists must have names") if (!empty.list(constraints) && !empty.list(new.constraints)) { for (ii in nn) { if (any(ii == nc)) { if (check && (!(all(dim(constraints[[ii]]) == dim(new.constraints[[ii]])) && all( constraints[[ii]] == new.constraints[[ii]])))) stop("apparent contradiction in the specification ", "of the constraints") if (overwrite) constraints[[ii]] <- new.constraints[[ii]] } else constraints[[ii]] <- new.constraints[[ii]] } } else { if (!empty.list(constraints)) return(as.list(constraints)) else return(as.list(new.constraints)) } constraints } iam <- function(j, k, M, # hbw = M, both = FALSE, diag = TRUE) { jay <- j kay <- k if (M == 1) if (!diag) stop("cannot handle this") if (M == 1) if (both) return(list(row.index = 1, col.index = 1)) else return(1) upper <- if (diag) M else M - 1 i2 <- as.list(upper:1) i2 <- lapply(i2, seq) i2 <- unlist(i2) i1 <- matrix(1:M, M, M) i1 <- if (diag) c(i1[row(i1) >= col(i1)]) else c(i1[row(i1) > col(i1)]) if (both) { list(row.index = i2, col.index = i1) } else { if (jay > M || kay > M || jay < 1 || kay < 1) stop("range error in j or k") both <- (i1 == jay & i2 == kay) | (i1 == kay & i2 == jay) (1:length(i2))[both] } } dimm <- function(M, hbw = M) { if (!is.numeric(hbw)) hbw <- M if (hbw > M || hbw < 1) stop("range error in argument 'hbw'") hbw * (2 * M - hbw +1) / 2 } m2avglm <- function(object, upper = FALSE, allow.vector = FALSE) { m2adefault(wweights(object), M = object@misc$M, upper = upper, allow.vector = allow.vector) } m2adefault <- function(m, M, upper = FALSE, allow.vector = FALSE) { if (!is.numeric(m)) stop("argument 'm' is not numeric") if (!is.matrix(m)) m <- cbind(m) n <- nrow(m) dimm <- ncol(m) index <- iam(NA, NA, M = M, both = TRUE, diag = TRUE) if (dimm > length(index$row.index)) stop("bad value for 'M'; it is too small") if (dimm < M) { stop("bad value for 'M'; it is too big") } fred <- dotC(name = "m2a", as.double(t(m)), ans=double(M*M*n), as.integer(dimm), as.integer(index$row-1), as.integer(index$col-1), as.integer(n), as.integer(M), as.integer(as.numeric(upper)), NAOK = TRUE) dim(fred$ans) <- c(M, M, n) alpn <- NULL dimnames(fred$ans) <- list(alpn, alpn, dimnames(m)[[1]]) fred$a } a2m <- function(a, hbw = M) { if (is.matrix(a) && ncol(a) == nrow(a)) a <- array(a, c(nrow(a), ncol(a), 1)) if (!is.array(a)) dim(a) <- c(1,1,length(a)) M <- dim(a)[1] n <- dim(a)[3] dimm.value <- dimm(M, hbw) index <- iam(NA, NA, M, both = TRUE, diag = TRUE) fred <- dotC(name = "a2m", as.double(a), m=double(dimm.value*n), as.integer(dimm.value), as.integer(index$row-1), as.integer(index$col-1), as.integer(n), as.integer(M), NAOK = TRUE) dim(fred$m) <- c(dimm.value,n) fred$m <- t(fred$m) if (hbw != M) attr(fred$m, "hbw") <- hbw if (length(lpn <- dimnames(a)[[1]]) != 0) attr(fred$m, "predictors.names") <- lpn fred$m } vindex <- function(M, row.arg = FALSE, col.arg = FALSE, length.arg = M * (M + 1) / 2) { if ((row.arg + col.arg) != 1) stop("only one of row and col must be TRUE") if (M == 1) { ans <- 1 } else { if (row.arg) { i1 <- matrix(1:M, M, M) ans <- c(i1[row(i1) + col(i1) <= (M + 1)]) } else { i1 <- matrix(1:M, M, M) ans <- c(i1[row(i1) >= col(i1)]) } } if (length.arg > length(ans)) stop("argument 'length.arg' too big") rep(ans, length.out = length.arg) } if(!exists("is.R")) is.R <- function() exists("version") && !is.null(version$language) && version$language == "R" wweights <- function(object, 
matrix.arg = TRUE, deriv.arg = FALSE, ignore.slot = FALSE, checkwz = TRUE) { if (length(wz <- object@weights) && !ignore.slot && !deriv.arg) { return(wz) } M <- object@misc$M # Done below n <- object@misc$n # Done below if (any(slotNames(object) == "extra")) { extra <- object@extra if (length(extra) == 1 && !length(names(extra))) { extra <- extra[[1]] } } mu <- object@fitted.values if (any(slotNames(object) == "predictors")) eta <- object@predictors mt <- terms(object) # object@terms$terms; 11/8/03 Blist <- constraints <- object@constraints new.coeffs <- object@coefficients if (any(slotNames(object) == "iter")) iter <- object@iter w <- rep(1, n) if (any(slotNames(object) == "prior.weights")) w <- object@prior.weights if (!length(w)) w <- rep(1, n) x <- object@x if (!length(x)) x <- model.matrixvlm(object, type = "lm") y <- object@y if (any(slotNames(object) == "control")) for (ii in names(object@control)) { assign(ii, object@control[[ii]]) } if (length(object@misc)) for (ii in names(object@misc)) { assign(ii, object@misc[[ii]]) } if (any(slotNames(object) == "family")) { expr <- object@family@deriv deriv.mu <- eval(expr) if (!length(wz)) { expr <- object@family@weight wz <- eval(expr) if (M > 1) dimnames(wz) <- list(dimnames(wz)[[1]], NULL) # Remove colnames wz <- if (matrix.arg) as.matrix(wz) else c(wz) } if (deriv.arg) list(deriv = deriv.mu, weights = wz) else wz } else { NULL } } pweights <- function(object, ...) { ans <- object@prior.weights if (length(ans)) { ans } else { temp <- object@y ans <- rep(1, nrow(temp)) # Assumed all equal and unity. names(ans) <- dimnames(temp)[[1]] ans } } procVec <- function(vec, yn, Default) { if (any(is.na(vec))) stop("vec cannot contain any NAs") L <- length(vec) nvec <- names(vec) # vec[""] undefined named <- length(nvec) # FALSE for c(1,3) if (named) { index <- (1:L)[nvec == ""] default <- if (length(index)) vec[index] else Default } else { default <- vec } answer <- rep(default, length.out = length(yn)) names(answer) <- yn if (named) { nvec2 <- nvec[nvec != ""] if (length(nvec2)) { if (any(!is.element(nvec2, yn))) stop("some names given which are superfluous") answer <- rep(as.numeric(NA), length.out = length(yn)) names(answer) <- yn answer[nvec2] <- vec[nvec2] answer[is.na(answer)] <- rep(default, length.out <- sum(is.na(answer))) } } answer } if (FALSE) { if (!isGeneric("m2a")) setGeneric("m2a", function(object, ...) standardGeneric("m2a")) setMethod("m2a", "vglm", function(object, ...) m2avglm(object, ...)) } weightsvglm <- function(object, type = c("prior", "working"), matrix.arg = TRUE, ignore.slot = FALSE, deriv.arg = FALSE, ...) { weightsvlm(object, type = type, matrix.arg = matrix.arg, ignore.slot = ignore.slot, deriv.arg = deriv.arg, ...) } weightsvlm <- function(object, type = c("prior", "working"), matrix.arg = TRUE, ignore.slot = FALSE, deriv.arg = FALSE, ...) { if (mode(type) != "character" && mode(type) != "name") type <- as.character(substitute(type)) type <- match.arg(type, c("prior", "working"))[1] if (type == "working") { wweights(object = object, matrix.arg = matrix.arg, deriv.arg = deriv.arg, ignore.slot = ignore.slot, ...) } else { if (deriv.arg) stop("cannot set 'deriv = TRUE' when 'type=\"prior\"'") ans <- pweights(object) if (matrix.arg) as.matrix(ans) else c(ans) } } if (!isGeneric("weights")) setGeneric("weights", function(object, ...) standardGeneric("weights")) setMethod("weights", "vlm", function(object, ...) weightsvlm(object, ...)) setMethod("weights", "vglm", function(object, ...) 
weightsvglm(object, ...)) dotFortran <- function(name, ..., NAOK = FALSE, DUP = TRUE, PACKAGE = "VGAM") { .Fortran(name, ..., NAOK = NAOK, DUP = DUP, PACKAGE = PACKAGE) } dotC <- function(name, ..., NAOK = FALSE, DUP = TRUE, PACKAGE = "VGAM") { .C(name, ..., NAOK = NAOK, DUP = DUP, PACKAGE = PACKAGE) } qnupdate <- function(w, wzold, dderiv, deta, M, keeppd = TRUE, trace = FALSE, reset = FALSE, effpos=.Machine$double.eps^0.75) { if (M == 1) { dderiv <- cbind(dderiv) deta <- cbind(deta) } Bs <- mux22(t(wzold), deta, M = M, upper = FALSE, as.matrix = TRUE) # n x M sBs <- c( (deta * Bs) %*% rep(1, M) ) # should have positive values sy <- c( (dderiv * deta) %*% rep(1, M) ) wznew <- wzold index <- iam(NA, NA, M = M, both = TRUE) index$row.index <- rep(index$row.index, len=ncol(wzold)) index$col.index <- rep(index$col.index, len=ncol(wzold)) updateThese <- if (keeppd) (sy > effpos) else rep(TRUE, len=length(sy)) if (!keeppd || any(updateThese)) { wznew[updateThese,] <- wznew[updateThese,] - Bs[updateThese,index$row] * Bs[updateThese,index$col] / sBs[updateThese] + dderiv[updateThese,index$row] * dderiv[updateThese,index$col] / sy[updateThese] notupdated <- sum(!updateThese) if (notupdated && trace) cat(notupdated, "weight matrices not updated out of", length(sy), "\n") } else { warning("no BFGS quasi-Newton update made at all") cat("no BFGS quasi-Newton update made at all\n") flush.console() } wznew } mbesselI0 <- function(x, deriv.arg = 0) { if (!is.Numeric(deriv.arg, allowable.length = 1, integer.valued = TRUE, positive = TRUE) && deriv.arg != 0) stop("argument 'deriv.arg' must be a single non-negative integer") if (!(deriv.arg == 0 || deriv.arg == 1 || deriv.arg == 2)) stop("argument 'deriv' must be 0, 1, or 2") if (!is.Numeric(x)) stop("bad input for argument 'x'") nn <- length(x) if (FALSE) { } ans <- matrix(as.numeric(NA), nrow = nn, ncol = deriv.arg+1) ans[, 1] <- besselI(x, nu = 0) if (deriv.arg>=1) ans[,2] <- besselI(x, nu = 1) if (deriv.arg>=2) ans[,3] <- ans[,1] - ans[,2] / x ans } VGAM.matrix.norm <- function(A, power = 2, suppressWarning = FALSE) { if ((nrow(A) != ncol(A)) && !suppressWarning) warning("norms should be calculated for square matrices; ", "'A' is not square") if (power == "F") { sqrt(sum(A^2)) } else if (power == 1) { max(colSums(abs(A))) } else if (power == 2) { sqrt(max(eigen(t(A) %*% A)$value)) } else if (!is.finite(power)) { max(colSums(abs(A))) } else { stop("argument 'power' not recognized") } } rmfromVGAMenv <- function(varnames, prefix = "") { evarnames <- paste(prefix, varnames, sep = "") for (ii in evarnames) { mytext1 <- "exists(x = ii, envir = VGAM:::VGAMenv)" myexp1 <- parse(text = mytext1) is.there <- eval(myexp1) if (is.there) { rm(list = ii, envir = VGAM:::VGAMenv) } } } existsinVGAMenv <- function(varnames, prefix = "") { evarnames <- paste(prefix, varnames, sep = "") ans <- NULL for (ii in evarnames) { mytext1 <- "exists(x = ii, envir = VGAM:::VGAMenv)" myexp1 <- parse(text = mytext1) is.there <- eval(myexp1) ans <- c(ans, is.there) } ans } assign2VGAMenv <- function(varnames, mylist, prefix = "") { evarnames <- paste(prefix, varnames, sep = "") for (ii in 1:length(varnames)) { assign(evarnames[ii], mylist[[(varnames[ii])]], envir = VGAM:::VGAMenv) } } getfromVGAMenv <- function(varname, prefix = "") { varname <- paste(prefix, varname, sep = "") if (length(varname) > 1) stop("'varname' must be of length 1") get(varname, envir = VGAM:::VGAMenv) } lerch <- function(x, s, v, tolerance = 1.0e-10, iter = 100) { if (!is.Numeric(x) || !is.Numeric(s) || 
!is.Numeric(v)) stop("bad input in 'x', 's', and/or 'v'") if (is.complex(c(x,s,v))) stop("complex arguments not allowed in 'x', 's' and 'v'") if (!is.Numeric(tolerance, allowable.length = 1, positive = TRUE) || tolerance > 0.01) stop("bad input for argument 'tolerance'") if (!is.Numeric(iter, allowable.length = 1, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'iter'") L <- max(length(x), length(s), length(v)) x <- rep(x, length.out = L); s <- rep(s, length.out = L); v <- rep(v, length.out = L); xok <- abs(x) < 1 & !(v <= 0 & v == round(v)) x[!xok] <- 0 # Fix this later ans <- dotC(name = "lerchphi123", err = integer(L), as.integer(L), as.double(x), as.double(s), as.double(v), acc=as.double(tolerance), result=double(L), as.integer(iter)) ifelse(ans$err == 0 & xok , ans$result, NA) } negzero.expression <- expression({ posdotzero <- dotzero[dotzero > 0] negdotzero <- dotzero[dotzero < 0] bigUniqInt <- 1080 zneg_index <- if (length(negdotzero)) { if (!is.Numeric(-negdotzero, positive = TRUE, integer.valued = TRUE) || max(-negdotzero) > Musual) stop("bad input for argument 'zero'") zneg_index <- rep(0:bigUniqInt, rep(length(negdotzero), 1 + bigUniqInt)) * Musual + abs(negdotzero) sort(intersect(zneg_index, 1:M)) } else { NULL } zpos_index <- if (length(posdotzero)) posdotzero else NULL z_Index <- if (!length(dotzero)) NULL else unique(sort(c(zneg_index, zpos_index))) constraints <- cm.zero.vgam(constraints, x, z_Index, M) }) is.empty.list <- function(mylist) { is.list(mylist) && length(unlist(mylist)) == 0 } interleave.VGAM <- function(L, M) c(matrix(1:L, nrow = M, byrow = TRUE)) w.wz.merge <- function(w, wz, n, M, ndepy, intercept.only = FALSE) { wz <- as.matrix(wz) if (ndepy == 1) return( c(w) * wz) if (intercept.only) warning("yettodo: support intercept.only == TRUE") if (ncol(as.matrix(w)) > ndepy) stop("number of columns of 'w' exceeds number of responses") w <- matrix(w, n, ndepy) w.rep <- matrix(0, n, ncol(wz)) Musual <- M / ndepy all.indices <- iam(NA, NA, M = M, both = TRUE) if (FALSE) for (ii in 1:ncol(wz)) { if ((ind1 <- ceiling(all.indices$row[ii] / Musual)) == ceiling(all.indices$col[ii] / Musual)) { w.rep[, ii] <- w[, ind1] } } # ii res.Ind1 <- ceiling(all.indices$row.index / Musual) Ind1 <- res.Ind1 == ceiling(all.indices$col.index / Musual) LLLL <- min(ncol(wz), length(Ind1)) Ind1 <- Ind1[1:LLLL] res.Ind1 <- res.Ind1[1:LLLL] for (ii in 1:ndepy) { sub.ind1 <- (1:LLLL)[Ind1 & (res.Ind1 == ii)] w.rep[, sub.ind1] <- w[, ii] } # ii w.rep * wz } w.y.check <- function(w, y, ncol.w.max = 1, ncol.y.max = 1, ncol.w.min = 1, ncol.y.min = 1, out.wy = FALSE, colsyperw = 1, maximize = FALSE, Is.integer.y = FALSE, Is.positive.y = FALSE, Is.nonnegative.y = FALSE, prefix.w = "PriorWeight", prefix.y = "Response") { if (!is.matrix(w)) w <- as.matrix(w) if (!is.matrix(y)) y <- as.matrix(y) n_lm <- nrow(y) rn.w <- rownames(w) rn.y <- rownames(y) cn.w <- colnames(w) cn.y <- colnames(y) if (Is.integer.y && any(y != round(y))) stop("response variable 'y' must be integer-valued") if (Is.positive.y && any(y <= 0)) stop("response variable 'y' must be positive-valued") if (Is.nonnegative.y && any(y < 0)) stop("response variable 'y' must be 0 or positive-valued") if (nrow(w) != n_lm) stop("nrow(w) should be equal to nrow(y)") if (ncol(w) > ncol.w.max) stop("prior-weight variable 'w' has too many columns") if (ncol(y) > ncol.y.max) stop("response variable 'y' has too many columns; ", "only ", ncol.y.max, " allowed") if (ncol(w) < ncol.w.min) stop("prior-weight variable 'w' has too few 
columns") if (ncol(y) < ncol.y.min) stop("response variable 'y' has too few columns; ", "at least ", ncol.y.max, " needed") if (min(w) <= 0) stop("prior-weight variable 'w' must contain positive values only") if (is.numeric(colsyperw) && ncol(y) %% colsyperw != 0) stop("number of columns of the response variable 'y' is not ", "a multiple of ", colsyperw) if (maximize) { Ncol.max.w <- max(ncol(w), ncol(y) / colsyperw) Ncol.max.y <- max(ncol(y), ncol(w) * colsyperw) } else { Ncol.max.w <- ncol(w) Ncol.max.y <- ncol(y) } if (out.wy && ncol(w) < Ncol.max.w) { nblanks <- sum(cn.w == "") if (nblanks > 0) cn.w[cn.w == ""] <- paste(prefix.w, 1:nblanks, sep = "") if (length(cn.w) < Ncol.max.w) cn.w <- c(cn.w, paste(prefix.w, (length(cn.w)+1):Ncol.max.w, sep = "")) w <- matrix(w, n_lm, Ncol.max.w, dimnames = list(rn.w, cn.w)) } if (out.wy && ncol(y) < Ncol.max.y) { nblanks <- sum(cn.y == "") if (nblanks > 0) cn.y[cn.y == ""] <- paste(prefix.y, 1:nblanks, sep = "") if (length(cn.y) < Ncol.max.y) cn.y <- c(cn.y, paste(prefix.y, (length(cn.y)+1):Ncol.max.y, sep = "")) y <- matrix(y, n_lm, Ncol.max.y, dimnames = list(rn.y, cn.y)) } list(w = if (out.wy) w else NULL, y = if (out.wy) y else NULL) } arwz2wz <- function(arwz, M = 1, Musual = 1) { if (length(dim.arwz <- dim(arwz)) != 3) stop("dimension of 'arwz' should be of length 3") n <- dim.arwz[1] ndepy <- dim.arwz[2] dim.val <- dim.arwz[3] if (ndepy == 1) { dim(arwz) <- c(n, dim.val) return(arwz) } wz <- matrix(0.0, nrow = n, ncol = sum(M:(M-Musual+1))) ind1 <- iam(NA, NA, M = Musual, both = TRUE, diag = TRUE) len.ind1 <- dim.val # length(ind1$col.index) for (ii in 1:ndepy) { for (jlocal in 1:len.ind1) { wz[, iam(Musual * (ii - 1) + ind1$row[jlocal], Musual * (ii - 1) + ind1$col[jlocal], M = M)] <- arwz[, ii, jlocal] } } colind <- ncol(wz) while (all(wz[, colind] == 0)) colind <- colind - 1 if (colind < ncol(wz)) wz <- wz[, 1:colind, drop = FALSE] wz } vweighted.mean.default <- function (x, w, ..., na.rm = FALSE) { temp5 <- w.y.check(w = w, y = x, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE, Is.integer.y = FALSE, Is.positive.y = FALSE, Is.nonnegative.y = FALSE, prefix.w = "PriorWeight", prefix.y = "Response") x <- temp5$y w <- temp5$w ans <- numeric(ncol(w)) for (ii in 1:ncol(w)) ans[ii] <- weighted.mean(x[, ii], w = w[, ii], ..., na.rm = na.rm) ans } VGAM/R/family.aunivariate.R0000644000176000001440000017410512136651110015154 0ustar ripleyusers# These functions are # Copyright (C) 1998-2013 T.W. Yee, University of Auckland. # All rights reserved. 
dkumar <- function(x, shape1, shape2, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) N <- max(length(x), length(shape1), length(shape2)) x <- rep(x, len = N); shape1 <- rep(shape1, len = N); shape2 <- rep(shape2, len = N) logdensity <- rep(log(0), len = N) xok <- (0 <= x & x <= 1) logdensity[xok] <- log(shape1[xok]) + log(shape2[xok]) + (shape1[xok] - 1) * log(x[xok]) + (shape2[xok] - 1) * log1p(-x[xok]^shape1[xok]) logdensity[shape1 <= 0] <- NaN logdensity[shape2 <= 0] <- NaN if (log.arg) logdensity else exp(logdensity) } rkumar <- function(n, shape1, shape2) { ans <- (1 - (runif(n))^(1/shape2))^(1/shape1) ans[(shape1 <= 0) | (shape2 <= 0)] <- NaN ans } qkumar <- function(p, shape1, shape2) { ans <- (1.0 - (1.0 - p)^(1/shape2))^(1/shape1) ans[(shape1 <= 0) | (shape2 <= 0)] = NaN ans[p < 0] <- NaN ans[p > 1] <- NaN ans } pkumar <- function(q, shape1, shape2) { ans <- 1.0 - (1.0 - q^shape1)^shape2 ans[q <= 0] <- 0 ans[q >= 1] <- 1 ans[(shape1 <= 0) | (shape2 <= 0)] <- NaN ans } kumar <- function(lshape1 = "loge", lshape2 = "loge", ishape1 = NULL, ishape2 = NULL, grid.shape1 = c(0.4, 6.0), tol12 = 1.0e-4, zero = NULL) { lshape1 <- as.list(substitute(lshape1)) eshape1 <- link2list(lshape1) lshape1 <- attr(eshape1, "function.name") lshape2 <- as.list(substitute(lshape2)) eshape2 <- link2list(lshape2) lshape2 <- attr(eshape2, "function.name") if (length(ishape1) && (!is.Numeric(ishape1, allowable.length = 1, positive = TRUE))) stop("bad input for argument 'ishape1'") if (length(ishape2) && !is.Numeric(ishape2)) stop("bad input for argument 'ishape2'") if (!is.Numeric(tol12, allowable.length = 1, positive = TRUE)) stop("bad input for argument 'tol12'") if (!is.Numeric(grid.shape1, allowable.length = 2, positive = TRUE)) stop("bad input for argument 'grid.shape1'") if (length(zero) && !is.Numeric(zero, integer.valued = TRUE)) stop("bad input for argument 'zero'") new("vglmff", blurb = c("Kumaraswamy distribution\n\n", "Links: ", namesof("shape1", lshape1, eshape1, tag = FALSE), ", ", namesof("shape2", lshape2, eshape2, tag = FALSE), "\n", "Mean: ", "shape2 * beta(1+1/shape1, shape2)"), constraints = eval(substitute(expression({ dotzero <- .zero Musual <- 2 eval(negzero.expression) }), list( .zero = zero ))), infos = eval(substitute(function(...) 
  { list(Musual = 2,
         lshape1 = .lshape1 ,
         zero = .zero )
  }, list( .zero = zero, .lshape1 = lshape1 ))),

  initialize = eval(substitute(expression({
    temp5 <- w.y.check(w = w, y = y,
                       Is.positive.y = TRUE,
                       ncol.w.max = Inf, ncol.y.max = Inf,
                       out.wy = TRUE, colsyperw = 1, maximize = TRUE)
    w <- temp5$w
    y <- temp5$y

    if (any((y <= 0) | (y >= 1)))
      stop("the response must be in (0, 1)")

    ncoly <- ncol(y)
    Musual <- 2
    extra$ncoly <- ncoly
    extra$Musual <- Musual
    M <- Musual * ncoly

    mynames1 <- paste("shape1", if (ncoly > 1) 1:ncoly else "", sep = "")
    mynames2 <- paste("shape2", if (ncoly > 1) 1:ncoly else "", sep = "")
    predictors.names <-
      c(namesof(mynames1, .lshape1 , earg = .eshape1 , tag = FALSE),
        namesof(mynames2, .lshape2 , earg = .eshape2 , tag = FALSE))[
        interleave.VGAM(M, M = Musual)]

    if (!length(etastart)) {
      kumar.Loglikfun <- function(shape1, y, x, w, extraargs) {
        mediany <- colSums(y * w) / colSums(w)  # weighted.mean(y, w)
        shape2 <- log(0.5) / log1p(-(mediany^shape1))
        sum(c(w) * dkumar(x = y, shape1 = shape1, shape2 = shape2,
                          log = TRUE))
      }
      shape1.grid <- seq( .grid.shape1[1], .grid.shape1[2], len = 19)
      shape1.init <- if (length( .ishape1 )) .ishape1 else
        getMaxMin(shape1.grid, objfun = kumar.Loglikfun,
                  y = y, x = x, w = w)
      shape1.init <- matrix(shape1.init, n, ncoly, byrow = TRUE)
      mediany <- colSums(y * w) / colSums(w)  # weighted.mean(y, w)
      shape2.init <- if (length( .ishape2 )) .ishape2 else
        log(0.5) / log1p(-(mediany^shape1.init))
      shape2.init <- matrix(shape2.init, n, ncoly, byrow = TRUE)
      etastart <- cbind(
        theta2eta(shape1.init, .lshape1 , earg = .eshape1 ),
        theta2eta(shape2.init, .lshape2 , earg = .eshape2 ))[,
        interleave.VGAM(M, M = Musual)]
    }
  }), list( .lshape1 = lshape1, .lshape2 = lshape2,
            .ishape1 = ishape1, .ishape2 = ishape2,
            .eshape1 = eshape1, .eshape2 = eshape2,
            .grid.shape1 = grid.shape1 ))),

  linkinv = eval(substitute(function(eta, extra = NULL){
    shape1 <- eta2theta(eta[, c(TRUE, FALSE)], .lshape1 , earg = .eshape1 )
    shape2 <- eta2theta(eta[, c(FALSE, TRUE)], .lshape2 , earg = .eshape2 )
    shape2 * (base::beta(1 + 1/shape1, shape2))
  }, list( .lshape1 = lshape1, .lshape2 = lshape2,
           .eshape1 = eshape1, .eshape2 = eshape2 ))),

  last = eval(substitute(expression({
    Musual <- extra$Musual
    misc$link <-
      c(rep( .lshape1 , length = ncoly),
        rep( .lshape2 , length = ncoly))[interleave.VGAM(M, M = Musual)]
    temp.names <- c(mynames1, mynames2)[interleave.VGAM(M, M = Musual)]
    names(misc$link) <- temp.names
    misc$earg <- vector("list", M)
    names(misc$earg) <- temp.names
    for(ii in 1:ncoly) {
      misc$earg[[Musual*ii-1]] <- .eshape1
      misc$earg[[Musual*ii  ]] <- .eshape2
    }
    misc$Musual <- Musual
    misc$expected <- TRUE
    misc$multipleResponses <- TRUE
  }), list( .lshape1 = lshape1, .lshape2 = lshape2,
            .eshape1 = eshape1, .eshape2 = eshape2 ))),

  loglikelihood = eval(substitute(
    function(mu, y, w, residuals = FALSE, eta, extra = NULL) {
    shape1 <- eta2theta(eta[, c(TRUE, FALSE)], .lshape1 , earg = .eshape1 )
    shape2 <- eta2theta(eta[, c(FALSE, TRUE)], .lshape2 , earg = .eshape2 )
    if (residuals) {
      stop("loglikelihood residuals not implemented yet")
    } else {
      sum(c(w) * dkumar(x = y, shape1 = shape1, shape2 = shape2,
                        log = TRUE))
    }
  }, list( .lshape1 = lshape1, .lshape2 = lshape2,
           .eshape1 = eshape1, .eshape2 = eshape2 ))),

  vfamily = c("kumar"),

  deriv = eval(substitute(expression({
    shape1 <- eta2theta(eta[, c(TRUE, FALSE)], .lshape1 , earg = .eshape1 )
    shape2 <- eta2theta(eta[, c(FALSE, TRUE)], .lshape2 , earg = .eshape2 )
    dshape1.deta <- dtheta.deta(shape1, link = .lshape1 , earg = .eshape1 )
    dshape2.deta <-
      dtheta.deta(shape2, link = .lshape2 , earg = .eshape2 )
    dl.dshape1 <- 1 / shape1 + log(y) -
                  (shape2 - 1) * log(y) * (y^shape1) / (1 - y^shape1)
    dl.dshape2 <- 1 / shape2 + log1p(-y^shape1)
    myderiv <- c(w) * cbind(dl.dshape1 * dshape1.deta,
                            dl.dshape2 * dshape2.deta)
    myderiv[, interleave.VGAM(M, M = Musual)]
  }), list( .lshape1 = lshape1, .lshape2 = lshape2,
            .eshape1 = eshape1, .eshape2 = eshape2 ))),

  weight = eval(substitute(expression({
    ned2l.dshape11 <- (1 + (shape2 / (shape2 - 2)) *
                       ((digamma(shape2) - digamma(2))^2 -
                        (trigamma(shape2) - trigamma(2)))) / shape1^2
    ned2l.dshape22 <- 1.0 / shape2^2
    ned2l.dshape12 <- -((digamma(1 + shape2) - digamma(2)) /
                        (shape2 - 1.0)) / shape1
    index1 <- (abs(shape2 - 1.0) < .tol12)
    if (any(index1))
      ned2l.dshape12[index1] <- -trigamma(2) / shape1[index1]
    index2 <- (abs(shape2 - 2.0) < .tol12 )
    if (any(index2))
      ned2l.dshape11[index2] <- (1.0 - 2.0 * psigamma(2.0, deriv = 2)) /
                                shape1[index2]^2
    wz <- array(c(c(w) * ned2l.dshape11 * dshape1.deta^2,
                  c(w) * ned2l.dshape22 * dshape2.deta^2,
                  c(w) * ned2l.dshape12 * dshape1.deta * dshape2.deta),
                dim = c(n, M / Musual, 3))
    wz <- arwz2wz(wz, M = M, Musual = Musual)
    wz
  }), list( .lshape1 = lshape1, .lshape2 = lshape2,
            .eshape1 = eshape1, .eshape2 = eshape2,
            .tol12 = tol12 ))))
}


drice <- function(x, vee, sigma, log = FALSE) {
  if (!is.logical(log.arg <- log) || length(log) != 1)
    stop("bad input for argument 'log'")
  rm(log)

  N <- max(length(x), length(vee), length(sigma))
  x <- rep(x, len = N);
  vee <- rep(vee, len = N);
  sigma <- rep(sigma, len = N)
  logdensity <- rep(log(0), len = N)
  xok <- (x > 0)
  x.abs <- abs(x[xok] * vee[xok] / sigma[xok]^2)
  logdensity[xok] <- log(x[xok]) - 2 * log(sigma[xok]) +
                     (-(x[xok]^2+vee[xok]^2)/(2*sigma[xok]^2)) +
                     log(besselI(x.abs, nu = 0, expon.scaled = TRUE)) +
                     x.abs
  logdensity[sigma <= 0] <- NaN
  logdensity[vee < 0] <- NaN
  if (log.arg) logdensity else exp(logdensity)
}


rrice <- function(n, vee, sigma) {
  if (!is.Numeric(n, integer.valued = TRUE, allowable.length = 1))
    stop("bad input for argument 'n'")
  theta <- 1  # any number
  X <- rnorm(n, mean = vee * cos(theta), sd = sigma)
  Y <- rnorm(n, mean = vee * sin(theta), sd = sigma)
  sqrt(X^2 + Y^2)
}


riceff.control <- function(save.weight = TRUE, ...)
{ list(save.weight = save.weight) } riceff <- function(lvee = "loge", lsigma = "loge", ivee = NULL, isigma = NULL, nsimEIM = 100, zero = NULL) { lvee <- as.list(substitute(lvee)) evee <- link2list(lvee) lvee <- attr(evee, "function.name") lsigma <- as.list(substitute(lsigma)) esigma <- link2list(lsigma) lsigma <- attr(esigma, "function.name") if (length(ivee) && !is.Numeric(ivee, positive = TRUE)) stop("bad input for argument 'ivee'") if (length(isigma) && !is.Numeric(isigma, positive = TRUE)) stop("bad input for argument 'isigma'") if (!is.Numeric(nsimEIM, allowable.length = 1, integer.valued = TRUE) || nsimEIM <= 50) stop("'nsimEIM' should be an integer greater than 50") new("vglmff", blurb = c("Rice distribution\n\n", "Links: ", namesof("vee", lvee, earg = evee, tag = FALSE), ", ", namesof("sigma", lsigma, earg = esigma, tag = FALSE), "\n", "Mean: ", "sigma*sqrt(pi/2)*exp(z/2)*((1-z)*", "besselI(-z/2, nu = 0) - z * besselI(-z/2, nu = 1)) ", "where z=-vee^2/(2*sigma^2)"), constraints = eval(substitute(expression({ constraints = cm.zero.vgam(constraints, x, .zero, M) }), list( .zero = zero ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = 1, ncol.y.max = 1, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y predictors.names <- c(namesof("vee", .lvee, earg = .evee, tag = FALSE), namesof("sigma", .lsigma, earg = .esigma, tag = FALSE)) if (!length(etastart)) { riceff.Loglikfun <- function(vee, y, x, w, extraargs) { sigma.init <- sd(rep(y, w)) sum(c(w) * (log(y) - 2*log(sigma.init) + log(besselI(y*vee/sigma.init^2, nu = 0)) - (y^2 + vee^2)/(2*sigma.init^2))) } vee.grid <- seq(quantile(rep(y, w), probs = seq(0, 1, 0.2))["20%"], quantile(rep(y, w), probs = seq(0, 1, 0.2))["80%"], len=11) vee.init <- if (length( .ivee )) .ivee else getMaxMin(vee.grid, objfun = riceff.Loglikfun, y = y, x = x, w = w) vee.init <- rep(vee.init, length = length(y)) sigma.init <- if (length( .isigma )) .isigma else sqrt(max((weighted.mean(y^2, w) - vee.init^2)/2, 0.001)) sigma.init <- rep(sigma.init, length = length(y)) etastart <- cbind(theta2eta(vee.init, .lvee, earg = .evee), theta2eta(sigma.init, .lsigma, earg = .esigma)) } }), list( .lvee = lvee, .lsigma = lsigma, .ivee = ivee, .isigma = isigma, .evee = evee, .esigma = esigma ))), linkinv = eval(substitute(function(eta, extra = NULL){ vee <- eta2theta(eta[, 1], link = .lvee, earg = .evee) sigma <- eta2theta(eta[, 2], link = .lsigma, earg = .esigma) temp9 <- -vee^2 / (2*sigma^2) sigma * sqrt(pi/2) * ((1-temp9) * besselI(-temp9/2, nu = 0, expon = TRUE) - temp9 * besselI(-temp9/2, nu = 1, expon = TRUE)) }, list( .lvee = lvee, .lsigma = lsigma, .evee = evee, .esigma = esigma ))), last = eval(substitute(expression({ misc$link <- c("vee" = .lvee, "sigma" = .lsigma) misc$earg <- list("vee" = .evee, "sigma" = .esigma) misc$expected <- TRUE misc$nsimEIM <- .nsimEIM misc$multipleResponses <- FALSE }), list( .lvee = lvee, .lsigma = lsigma, .evee = evee, .esigma = esigma, .nsimEIM = nsimEIM ))), loglikelihood = eval(substitute( function(mu,y, w, residuals = FALSE,eta,extra = NULL) { vee <- eta2theta(eta[, 1], link = .lvee , earg = .evee ) sigma <- eta2theta(eta[, 2], link = .lsigma , earg = .esigma ) if (residuals) stop("loglikelihood residuals not implemented yet") else { sum(c(w) * drice(x = y, vee = vee, sigma = sigma, log = TRUE)) } }, list( .lvee = lvee, .lsigma = lsigma, .evee = evee, .esigma = esigma ))), vfamily = c("riceff"), deriv = eval(substitute(expression({ vee <- 
           eta2theta(eta[, 1], link = .lvee, earg = .evee)
    sigma <- eta2theta(eta[, 2], link = .lsigma, earg = .esigma)
    dvee.deta <- dtheta.deta(vee, link = .lvee, earg = .evee)
    dsigma.deta <- dtheta.deta(sigma, link = .lsigma, earg = .esigma)
    temp8 <- y * vee / sigma^2
    dl.dvee <- -vee/sigma^2 + (y/sigma^2) *
               besselI(temp8, nu = 1) / besselI(temp8, nu = 0)
    dl.dsigma <- -2/sigma + (y^2 + vee^2)/(sigma^3) -
                 (2 * temp8 / sigma) *
                 besselI(temp8, nu = 1) / besselI(temp8, nu = 0)
    c(w) * cbind(dl.dvee * dvee.deta,
                 dl.dsigma * dsigma.deta)
  }), list( .lvee = lvee, .lsigma = lsigma,
            .evee = evee, .esigma = esigma, .nsimEIM = nsimEIM ))),

  weight = eval(substitute(expression({
    run.var <- run.cov <- 0
    for(ii in 1:( .nsimEIM )) {
      ysim <- rrice(n, vee = vee, sigma = sigma)
      temp8 <- ysim * vee / sigma^2
      dl.dvee <- -vee/sigma^2 + (ysim/sigma^2) *
                 besselI(temp8, nu = 1) / besselI(temp8, nu = 0)
      dl.dsigma <- -2/sigma + (ysim^2 + vee^2)/(sigma^3) -
                   (2 * temp8 / sigma) *
                   besselI(temp8, nu = 1) / besselI(temp8, nu = 0)
      rm(ysim)
      temp3 <- cbind(dl.dvee, dl.dsigma)
      run.var <- ((ii-1) * run.var + temp3^2) / ii
      run.cov <- ((ii-1) * run.cov + temp3[, 1] * temp3[, 2]) / ii
    }
    wz <- if (intercept.only)
      matrix(colMeans(cbind(run.var, run.cov)),
             n, dimm(M), byrow = TRUE) else cbind(run.var, run.cov)
    dtheta.detas <- cbind(dvee.deta, dsigma.deta)
    index0 <- iam(NA, NA, M = M, both = TRUE, diag = TRUE)
    wz <- wz * dtheta.detas[, index0$row] * dtheta.detas[, index0$col]
    c(w) * wz
  }), list( .lvee = lvee, .lsigma = lsigma,
            .evee = evee, .esigma = esigma, .nsimEIM = nsimEIM ))))
}


dskellam <- function(x, mu1, mu2, log = FALSE) {
  if (!is.logical(log.arg <- log) || length(log) != 1)
    stop("bad input for argument 'log'")
  rm(log)

  L <- max(length(x), length(mu1), length(mu2))
  x <- rep(x, len = L);
  mu1 <- rep(mu1, len = L);
  mu2 <- rep(mu2, len = L);
  ok2 <- is.finite(mu1) & is.finite(mu2) & (mu1 >= 0) & (mu2 >= 0)
  ok3 <- (mu1 == 0) & (mu2 >  0)  # X = -N2, a negated Poisson
  ok4 <- (mu1 >  0) & (mu2 == 0)  # X =  N1, an ordinary Poisson
  ok5 <- (mu1 == 0) & (mu2 == 0)
  if (log.arg) {
    ans <- -mu1 - mu2 + 2 * sqrt(mu1*mu2) +
           0.5 * x * log(mu1) - 0.5 * x * log(mu2) +
           log(besselI(2 * sqrt(mu1*mu2), nu = x, expon.scaled = TRUE))
    ans[ok3] <- dpois(x = -x[ok3], lambda = mu2[ok3], log = TRUE)
    ans[ok4] <- dpois(x =  x[ok4], lambda = mu1[ok4], log = TRUE)
    ans[ok5] <- dpois(x = x[ok5], lambda = 0.0, log = TRUE)
    ans[x != round(x)] <- log(0.0)
  } else {
    ans <- (mu1/mu2)^(x/2) * exp(-mu1-mu2 + 2 * sqrt(mu1*mu2)) *
           besselI(2 * sqrt(mu1*mu2), nu = x, expon.scaled = TRUE)
    ans[ok3] <- dpois(x = -x[ok3], lambda = mu2[ok3])
    ans[ok4] <- dpois(x =  x[ok4], lambda = mu1[ok4])
    ans[ok5] <- dpois(x = x[ok5], lambda = 0.0)
    ans[x != round(x)] <- 0.0
  }
  ans[!ok2] <- NaN
  ans
}


rskellam <- function(n, mu1, mu2) {
  rpois(n, mu1) - rpois(n, mu2)
}


skellam.control <- function(save.weight = TRUE, ...)
{
  list(save.weight = save.weight)
}


skellam <- function(lmu1 = "loge", lmu2 = "loge",
                    imu1 = NULL, imu2 = NULL,
                    nsimEIM = 100, parallel = FALSE, zero = NULL) {
  lmu1 <- as.list(substitute(lmu1))
  emu1 <- link2list(lmu1)
  lmu1 <- attr(emu1, "function.name")

  lmu2 <- as.list(substitute(lmu2))
  emu2 <- link2list(lmu2)
  lmu2 <- attr(emu2, "function.name")

  if (length(imu1) && !is.Numeric(imu1, positive = TRUE))
    stop("bad input for argument 'imu1'")
  if (length(imu2) && !is.Numeric(imu2, positive = TRUE))
    stop("bad input for argument 'imu2'")
  if (!is.Numeric(nsimEIM, allowable.length = 1,
                  integer.valued = TRUE) || nsimEIM <= 50)
    stop("argument 'nsimEIM' should be an integer greater than 50")

  new("vglmff",
  blurb = c("Skellam distribution\n\n",
            "Links: ",
            namesof("mu1", lmu1, earg = emu1, tag = FALSE), ", ",
            namesof("mu2", lmu2, earg = emu2, tag = FALSE), "\n",
            "Mean: mu1-mu2", "\n",
            "Variance: mu1+mu2"),
  constraints = eval(substitute(expression({
    constraints <- cm.vgam(matrix(1, M, 1), x, .parallel, constraints,
                           apply.int = TRUE)
    constraints <- cm.zero.vgam(constraints, x, .zero, M)
  }), list( .parallel = parallel, .zero = zero ))),
  initialize = eval(substitute(expression({
    temp5 <- w.y.check(w = w, y = y,
                       ncol.w.max = 1, ncol.y.max = 1,
                       Is.integer.y = TRUE,
                       out.wy = TRUE, maximize = TRUE)
    w <- temp5$w
    y <- temp5$y

    predictors.names <- c(
      namesof("mu1", .lmu1, earg = .emu1, tag = FALSE),
      namesof("mu2", .lmu2, earg = .emu2, tag = FALSE))

    if (!length(etastart)) {
      junk <- lm.wfit(x = x, y = c(y), w = c(w))
      var.y.est <- sum(c(w) * junk$resid^2) / junk$df.residual
      mean.init <- weighted.mean(y, w)
      mu1.init <- max((var.y.est + mean.init) / 2, 0.01)
      mu2.init <- max((var.y.est - mean.init) / 2, 0.01)
      mu1.init <- rep(if(length( .imu1 )) .imu1 else mu1.init,
                      length = n)
      mu2.init <- rep(if(length( .imu2 )) .imu2 else mu2.init,
                      length = n)
      etastart <- cbind(theta2eta(mu1.init, .lmu1, earg = .emu1),
                        theta2eta(mu2.init, .lmu2, earg = .emu2))
    }
  }), list( .lmu1 = lmu1, .lmu2 = lmu2,
            .imu1 = imu1, .imu2 = imu2,
            .emu1 = emu1, .emu2 = emu2 ))),
  linkinv = eval(substitute(function(eta, extra = NULL){
    mu1 <- eta2theta(eta[, 1], link = .lmu1, earg = .emu1)
    mu2 <- eta2theta(eta[, 2], link = .lmu2, earg = .emu2)
    mu1 - mu2
  }, list( .lmu1 = lmu1, .lmu2 = lmu2,
           .emu1 = emu1, .emu2 = emu2 ))),
  last = eval(substitute(expression({
    misc$link <- c("mu1" = .lmu1, "mu2" = .lmu2)
    misc$earg <- list("mu1" = .emu1, "mu2" = .emu2)
    misc$expected <- TRUE
    misc$nsimEIM <- .nsimEIM
  }), list( .lmu1 = lmu1, .lmu2 = lmu2,
            .emu1 = emu1, .emu2 = emu2, .nsimEIM = nsimEIM ))),
  loglikelihood = eval(substitute(
    function(mu,y, w, residuals = FALSE,eta,extra = NULL) {
    mu1 <- eta2theta(eta[, 1], link = .lmu1, earg = .emu1)
    mu2 <- eta2theta(eta[, 2], link = .lmu2, earg = .emu2)
    if (residuals) stop("loglikelihood residuals not implemented yet") else {
      if ( is.logical( .parallel ) &&
           length( .parallel ) == 1 && .parallel )
        sum(c(w) * log(besselI(2*mu1, nu = y, expon = TRUE)))
      else
        sum(c(w) * (-mu1 - mu2 +
                    0.5 * y * log(mu1) -
                    0.5 * y * log(mu2) +
                    2 * sqrt(mu1*mu2) +  # Use this when expon = TRUE
                    log(besselI(2 * sqrt(mu1*mu2), nu = y, expon = TRUE))))
    }
  }, list( .lmu1 = lmu1, .lmu2 = lmu2, .parallel = parallel,
           .emu1 = emu1, .emu2 = emu2 ))),
  vfamily = c("skellam"),
  deriv = eval(substitute(expression({
    mu1 <- eta2theta(eta[, 1], link = .lmu1, earg = .emu1)
    mu2 <- eta2theta(eta[, 2], link = .lmu2, earg = .emu2)
    dmu1.deta <- dtheta.deta(mu1, link = .lmu1, earg = .emu1)
    dmu2.deta <- dtheta.deta(mu2, link = .lmu2, earg = .emu2)
    temp8 <- 2 * sqrt(mu1*mu2)
    temp9 <-
      besselI(temp8, nu = y  , expon = TRUE)
    temp7 <- (besselI(temp8, nu = y-1, expon = TRUE) +
              besselI(temp8, nu = y+1, expon = TRUE)) / 2
    temp6 <- temp7 / temp9
    dl.dmu1 <- -1 + 0.5 * y / mu1 + sqrt(mu2/mu1) * temp6
    dl.dmu2 <- -1 - 0.5 * y / mu2 + sqrt(mu1/mu2) * temp6
    c(w) * cbind(dl.dmu1 * dmu1.deta,
                 dl.dmu2 * dmu2.deta)
  }), list( .lmu1 = lmu1, .lmu2 = lmu2,
            .emu1 = emu1, .emu2 = emu2, .nsimEIM = nsimEIM ))),
  weight = eval(substitute(expression({
    run.var <- run.cov <- 0
    for(ii in 1:( .nsimEIM )) {
      ysim <- rskellam(n, mu1 = mu1, mu2 = mu2)
      temp9 <- besselI(temp8, nu = ysim, expon = TRUE)
      temp7 <- (besselI(temp8, nu = ysim-1, expon = TRUE) +
                besselI(temp8, nu = ysim+1, expon = TRUE)) / 2
      temp6 <- temp7 / temp9
      dl.dmu1 <- -1 + 0.5 * ysim/mu1 + sqrt(mu2/mu1) * temp6
      dl.dmu2 <- -1 - 0.5 * ysim/mu2 + sqrt(mu1/mu2) * temp6
      rm(ysim)
      temp3 <- cbind(dl.dmu1, dl.dmu2)
      run.var <- ((ii-1) * run.var + temp3^2) / ii
      run.cov <- ((ii-1) * run.cov + temp3[, 1] * temp3[, 2]) / ii
    }
    wz <- if (intercept.only)
      matrix(colMeans(cbind(run.var, run.cov)),
             n, dimm(M), byrow = TRUE) else cbind(run.var, run.cov)
    dtheta.detas <- cbind(dmu1.deta, dmu2.deta)
    index0 <- iam(NA, NA, M = M, both = TRUE, diag = TRUE)
    wz <- wz * dtheta.detas[, index0$row] * dtheta.detas[, index0$col]
    c(w) * wz
  }), list( .lmu1 = lmu1, .lmu2 = lmu2,
            .emu1 = emu1, .emu2 = emu2, .nsimEIM = nsimEIM ))))
}


dyules <- function(x, rho, log = FALSE) {
  if (!is.logical(log.arg <- log) || length(log) != 1)
    stop("bad input for argument 'log'")
  rm(log)

  if ( log.arg ) {
    ans <- log(rho) + lbeta(abs(x), rho+1)
    ans[(x != round(x)) | (x < 1)] <- log(0)
  } else {
    ans <- rho * beta(x, rho+1)
    ans[(x != round(x)) | (x < 1)] <- 0
  }
  ans[!is.finite(rho) | (rho <= 0)] <- NA
  ans
}


ryules <- function(n, rho) {
  if (!is.Numeric(n, integer.valued = TRUE, allowable.length = 1))
    stop("bad input for argument 'n'")
  rgeom(n, prob = exp(-rexp(n, rate = rho))) + 1
}


pyules <- function(q, rho) {
  tq <- trunc(q)
  ans <- 1 - tq * beta(abs(tq), rho+1)
  ans[q < 1] <- 0
  ans[(rho <= 0)] <- NA
  ans
}


yulesimon.control <- function(save.weight = TRUE, ...) {
  list(save.weight = save.weight)
}


yulesimon <- function(link = "loge",
                      irho = NULL, nsimEIM = 200, zero = NULL) {
  if (length(irho) && !is.Numeric(irho, positive = TRUE))
    stop("argument 'irho' must be > 0")

  link <- as.list(substitute(link))
  earg <- link2list(link)
  link <- attr(earg, "function.name")

  if (!is.Numeric(nsimEIM, allowable.length = 1,
                  integer.valued = TRUE) || nsimEIM <= 50)
    stop("argument 'nsimEIM' should be an integer greater than 50")
  if (length(zero) &&
      !is.Numeric(zero, integer.valued = TRUE, positive = TRUE))
    stop("bad input for argument 'zero'")

  new("vglmff",
  blurb = c("Yule-Simon distribution f(y) = rho * beta(y, rho + 1), ",
            "rho > 0, y = 1, 2,...\n\n",
            "Link:    ",
            namesof("rho", link, earg = earg), "\n\n",
            "Mean:     rho / (rho - 1), provided rho > 1\n",
            "Variance: rho^2 / ((rho - 1)^2 * (rho - 2)), ",
            "provided rho > 2"),
  constraints = eval(substitute(expression({
    dotzero <- .zero
    Musual <- 1
    eval(negzero.expression)
  }), list( .zero = zero ))),

  infos = eval(substitute(function(...)
  { list(Musual = 1,
         nsimEIM = .nsimEIM,
         zero = .zero )
  }, list( .zero = zero, .nsimEIM = nsimEIM ))),

  initialize = eval(substitute(expression({
    temp5 <- w.y.check(w = w, y = y,
                       Is.positive.y = TRUE,
                       ncol.w.max = Inf, ncol.y.max = Inf,
                       Is.integer.y = TRUE,
                       out.wy = TRUE, colsyperw = 1, maximize = TRUE)
    w <- temp5$w
    y <- temp5$y

    ncoly <- ncol(y)
    Musual <- 1
    extra$ncoly <- ncoly
    extra$Musual <- Musual
    M <- Musual * ncoly

    mynames1 <- paste("rho", if (ncoly > 1) 1:ncoly else "", sep = "")
    predictors.names <- namesof(mynames1, .link , earg = .earg ,
                                tag = FALSE)

    if (!length(etastart)) {
      wmeany <- colSums(y * w) / colSums(w) + 1/8
      rho.init <- wmeany / (wmeany - 1)
      rho.init <- matrix(if (length( .irho )) .irho else rho.init,
                         n, M, byrow = TRUE)
      etastart <- theta2eta(rho.init, .link , earg = .earg )
    }
  }), list( .link = link, .earg = earg, .irho = irho ))),

  linkinv = eval(substitute(function(eta, extra = NULL) {
    ans <- rho <- eta2theta(eta, .link , earg = .earg )
    ans[rho >  1] <- (rho / (rho - 1))[rho > 1]
    ans[rho <= 1] <- NA
    ans
  }, list( .link = link, .earg = earg ))),

  last = eval(substitute(expression({
    Musual <- extra$Musual
    misc$link <- c(rep( .link , length = ncoly))
    names(misc$link) <- mynames1
    misc$earg <- vector("list", M)
    names(misc$earg) <- mynames1
    for(ii in 1:ncoly) {
      misc$earg[[ii]] <- .earg
    }
    misc$Musual <- Musual
    misc$irho <- .irho
    misc$expected <- TRUE
    misc$multipleResponses <- TRUE
    misc$nsimEIM <- .nsimEIM
  }), list( .link = link, .earg = earg,
            .nsimEIM = nsimEIM, .irho = irho ))),

  loglikelihood = eval(substitute(
    function(mu,y, w, residuals = FALSE,eta, extra = NULL) {
    rho <- eta2theta(eta, .link , earg = .earg )
    if (residuals) stop("loglikelihood residuals not implemented yet") else {
      sum(c(w) * dyules(x = y, rho = rho, log = TRUE))
    }
  }, list( .link = link, .earg = earg ))),
  vfamily = c("yulesimon"),

  deriv = eval(substitute(expression({
    Musual <- 1
    rho <- eta2theta(eta, .link , earg = .earg )
    dl.drho <- 1/rho + digamma(1+rho) - digamma(1+rho+y)
    drho.deta <- dtheta.deta(rho, .link , earg = .earg )
    c(w) * dl.drho * drho.deta
  }), list( .link = link, .earg = earg ))),

  weight = eval(substitute(expression({
    run.var <- 0
    for(ii in 1:( .nsimEIM )) {
      ysim <- ryules(n, rho = rho)
      dl.drho <- 1/rho + digamma(1+rho) - digamma(1+rho+ysim)
      rm(ysim)
      temp3 <- dl.drho
      run.var <- ((ii-1) * run.var + temp3^2) / ii
    }
    wz <- if (intercept.only)
      matrix(colMeans(cbind(run.var)),
             n, M, byrow = TRUE) else cbind(run.var)
    wz <- wz * drho.deta^2
    c(w) * wz
  }), list( .nsimEIM = nsimEIM ))))
}


dlind <- function(x, theta, log = FALSE) {
  if (!is.logical(log.arg <- log) || length(log) != 1)
    stop("bad input for argument 'log'")
  rm(log)

  if ( log.arg ) {
    ans <- 2 * log(theta) + log1p(x) - theta * x - log1p(theta)
    ans[(x < 0)] <- log(0)
  } else {
    ans <- theta^2 * (1 + x) * exp(-theta * x) / (1 + theta)
    ans[(x < 0)] <- 0
  }
  ans[(theta <= 0)] <- NaN
  ans
}


plind <- function(q, theta) {
  ifelse(q > 0,
         1 - (theta + 1 + theta * q) * exp(-theta * q) / (1 + theta),
         0)
}


rlind <- function(n, theta) {
  ifelse(runif(n) < theta / (1 + theta),
         rexp(n, theta),
         rgamma(n, shape = 2, scale = 1 / theta))
}


lindley <- function(link = "loge",
                    itheta = NULL, zero = NULL) {
  if (length(itheta) && !is.Numeric(itheta, positive = TRUE))
    stop("argument 'itheta' must be > 0")

  link <- as.list(substitute(link))
  earg <- link2list(link)
  link <- attr(earg, "function.name")

  if (length(zero) &&
      !is.Numeric(zero, integer.valued = TRUE, positive = TRUE))
    stop("bad input for argument 'zero'")

  new("vglmff",
  blurb = c("Lindley distribution f(y) = ",
            "theta^2 * (1 + y) * exp(-theta * y) / (1 + theta), ",
            "theta > 0, y > 0,\n\n",
            "Link:    ",
            namesof("theta", link, earg = earg), "\n\n",
            "Mean:     (theta + 2) / (theta * (theta + 1))\n",
            "Variance: (theta^2 + 4 * theta + 2) / (theta * (theta + 1))^2"),
  constraints = eval(substitute(expression({
    dotzero <- .zero
    Musual <- 1
    eval(negzero.expression)
  }), list( .zero = zero ))),

  infos = eval(substitute(function(...) {
    list(Musual = 1,
         zero = .zero )
  }, list( .zero = zero ))),

  initialize = eval(substitute(expression({
    temp5 <- w.y.check(w = w, y = y,
                       Is.positive.y = TRUE,
                       ncol.w.max = Inf, ncol.y.max = Inf,
                       out.wy = TRUE, colsyperw = 1, maximize = TRUE)
    w <- temp5$w
    y <- temp5$y

    ncoly <- ncol(y)
    Musual <- 1
    extra$ncoly <- ncoly
    extra$Musual <- Musual
    M <- Musual * ncoly

    mynames1 <- paste("theta", if (ncoly > 1) 1:ncoly else "", sep = "")
    predictors.names <- namesof(mynames1, .link , earg = .earg ,
                                tag = FALSE)

    if (!length(etastart)) {
      wmeany <- colSums(y * w) / colSums(w) + 1/8
      theta.init <- 1 / (wmeany + 1)
      theta.init <- matrix(if (length( .itheta )) .itheta else theta.init,
                           n, M, byrow = TRUE)
      etastart <- theta2eta(theta.init, .link , earg = .earg )
    }
  }), list( .link = link, .earg = earg, .itheta = itheta ))),

  linkinv = eval(substitute(function(eta, extra = NULL) {
    theta <- eta2theta(eta, .link , earg = .earg )
    (theta + 2) / (theta * (theta + 1))
  }, list( .link = link, .earg = earg ))),

  last = eval(substitute(expression({
    Musual <- extra$Musual
    misc$link <- c(rep( .link , length = ncoly))
    names(misc$link) <- mynames1
    misc$earg <- vector("list", M)
    names(misc$earg) <- mynames1
    for(ii in 1:ncoly) {
      misc$earg[[ii]] <- .earg
    }
    misc$Musual <- Musual
    misc$itheta <- .itheta
    misc$expected <- TRUE
    misc$multipleResponses <- TRUE
  }), list( .link = link, .earg = earg, .itheta = itheta ))),

  loglikelihood = eval(substitute(
    function(mu,y, w, residuals = FALSE,eta, extra = NULL) {
    theta <- eta2theta(eta, .link , earg = .earg )
    if (residuals) stop("loglikelihood residuals not implemented yet") else {
      sum(c(w) * dlind(x = y, theta = theta, log = TRUE))
    }
  }, list( .link = link, .earg = earg ))),
  vfamily = c("lindley"),

  deriv = eval(substitute(expression({
    Musual <- 1
    theta <- eta2theta(eta, .link , earg = .earg )
    dl.dtheta <- 2 / theta - 1 / (1 + theta) - y
    dtheta.deta <- dtheta.deta(theta, .link , earg = .earg )
    c(w) * dl.dtheta * dtheta.deta
  }), list( .link = link, .earg = earg ))),

  weight = eval(substitute(expression({
    ned2l.dtheta2 <- (theta^2 + 4 * theta + 2) / (theta * (1 + theta))^2
    c(w) * ned2l.dtheta2 * dtheta.deta^2
  }), list( .zero = zero ))))
}


dpoislindley <- function(x, theta, log = FALSE) {
  if (!is.logical(log.arg <- log) || length(log) != 1)
    stop("bad input for argument 'log'")
  rm(log)

  if ( log.arg ) {
    ans <- 2 * log(theta) + log(theta + 2 + x) -
           (x+3) * log1p(theta)
    ans[(x != round(x)) | (x < 0)] <- log(0)
  } else {
    ans <- theta^2 * (theta + 2 + x) / (theta + 1)^(x+3)
    ans[(x != round(x)) | (x < 0)] <- 0
  }
  ans[(theta <= 0)] <- NA
  ans
}


if (FALSE)
rpoislindley <- function(n, theta) {
}


if (FALSE)
ppoislindley <- function(q, theta) {
}


if (FALSE)
poislindley.control <- function(save.weight = TRUE, ...)
{ list(save.weight = save.weight) } if (FALSE) poissonlindley <- function(link = "loge", itheta = NULL, nsimEIM = 200, zero = NULL) { stop("not working since rpoislindley() not written") if (length(itheta) && !is.Numeric(itheta, positive = TRUE)) stop("argument 'itheta' must be > 0") link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") if (!is.Numeric(nsimEIM, allowable.length = 1, integer.valued = TRUE) || nsimEIM <= 50) stop("argument 'nsimEIM' should be an integer greater than 50") if (length(zero) && !is.Numeric(zero, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'zero'") new("vglmff", blurb = c("Poisson-Lindley distribution f(y) = ", "theta^2 * (theta + 2 + y) / (theta + 1)^(y+3), ", "theta > 0, y = 0, 1, 2,..\n\n", "Link: ", namesof("theta", link, earg = earg), "\n\n", "Mean: (theta + 2) / (theta * (theta + 1)),\n", "Variance: (theta^3 + 4 * theta^2 + 6 * theta + 2) / ", "(theta * (theta + 1))^2, " ), constraints = eval(substitute(expression({ dotzero <- .zero Musual <- 1 eval(negzero.expression) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(Musual = 1, nsimEIM = .nsimEIM, zero = .zero ) }, list( .zero = zero, .nsimEIM = nsimEIM ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, Is.integer.y = TRUE, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y ncoly <- ncol(y) Musual <- 1 extra$ncoly <- ncoly extra$Musual <- Musual M <- Musual * ncoly mynames1 <- paste("theta", if (ncoly > 1) 1:ncoly else "", sep = "") predictors.names <- namesof(mynames1, .link , earg = .earg , tag = FALSE) if (!length(etastart)) { wmeany <- colSums(y * w) / colSums(w) + 1/8 MOM <- (sqrt((wmeany - 1)^2 + 8 * wmeany) - wmeany + 1) / (2 * wmeany) MOM[MOM < 0.01] <- 0.01 theta.init <- MOM theta.init <- matrix(if (length( .itheta )) .itheta else theta.init, n, M, byrow = TRUE) etastart <- theta2eta(theta.init, .link , earg = .earg ) } }), list( .link = link, .earg = earg, .itheta = itheta ))), linkinv = eval(substitute(function(eta, extra = NULL) { theta <- eta2theta(eta, .link , earg = .earg ) (theta + 2) / (theta * (theta + 1)) }, list( .link = link, .earg = earg ))), last = eval(substitute(expression({ Musual <- extra$Musual misc$link <- c(rep( .link , length = ncoly)) names(misc$link) <- mynames1 misc$earg <- vector("list", M) names(misc$earg) <- mynames1 for(ii in 1:ncoly) { misc$earg[[ii]] <- .earg } misc$Musual <- Musual misc$itheta <- .itheta misc$expected <- TRUE misc$multipleResponses <- TRUE misc$nsimEIM <- .nsimEIM }), list( .link = link, .earg = earg, .nsimEIM = nsimEIM, .itheta = itheta ))), loglikelihood = eval(substitute( function(mu,y, w, residuals = FALSE,eta, extra = NULL) { theta = eta2theta(eta, .link , earg = .earg ) if (residuals) stop("loglikelihood residuals not implemented yet") else { sum(c(w) * dpoislindley(x = y, theta = theta, log = TRUE)) } }, list( .link = link, .earg = earg ))), vfamily = c("poissonlindley"), deriv = eval(substitute(expression({ Musual <- 1 theta <- eta2theta(eta, .link , earg = .earg ) dl.dtheta <- 2 / theta + 1 / (y + 2 + theta) - (y + 3) / (theta + 1) dtheta.deta <- dtheta.deta(theta, .link , earg = .earg ) c(w) * dl.dtheta * dtheta.deta }), list( .link = link, .earg = earg ))), weight = eval(substitute(expression({ run.var <- 0 for(ii in 1:( .nsimEIM )) { ysim <- rpoislindley(n, theta = theta) dl.dtheta <- 2 / theta + 1 / (ysim + 2 + theta) - (ysim + 3) / (theta + 
                                                                  1)
      rm(ysim)
      temp3 <- dl.dtheta
      run.var <- ((ii-1) * run.var + temp3^2) / ii
    }
    wz <- if (intercept.only)
      matrix(colMeans(cbind(run.var)),
             n, M, byrow = TRUE) else cbind(run.var)
    wz <- wz * dtheta.deta^2
    c(w) * wz
  }), list( .nsimEIM = nsimEIM ))))
}


dslash <- function(x, mu = 0, sigma = 1, log = FALSE,
                   smallno = .Machine$double.eps * 1000) {
  if (!is.logical(log.arg <- log) || length(log) != 1)
    stop("bad input for argument 'log'")
  rm(log)
  if (!is.Numeric(sigma) || any(sigma <= 0))
    stop("argument 'sigma' must be positive")
  L <- max(length(x), length(mu), length(sigma))
  x <- rep(x, len = L);
  mu <- rep(mu, len = L);
  sigma <- rep(sigma, len = L)
  zedd <- (x-mu)/sigma
  if (log.arg)
    ifelse(abs(zedd) < smallno,
           -log(2*sigma*sqrt(2*pi)),
           log1p(-exp(-zedd^2/2)) - log(sqrt(2*pi) * sigma * zedd^2))
  else
    ifelse(abs(zedd) < smallno,
           1 / (2*sigma*sqrt(2*pi)),
           -expm1(-zedd^2/2) / (sqrt(2*pi) * sigma * zedd^2))
}


rslash <- function(n, mu = 0, sigma = 1) {
  rnorm(n = n, mean = mu, sd = sigma) / runif(n = n)
}


slash.control <- function(save.weight = TRUE, ...) {
  list(save.weight = save.weight)
}


slash <- function(lmu = "identity", lsigma = "loge",
                  imu = NULL, isigma = NULL,
                  iprobs = c(0.1, 0.9), nsimEIM = 250, zero = NULL,
                  smallno = .Machine$double.eps * 1000) {
  lmu <- as.list(substitute(lmu))
  emu <- link2list(lmu)
  lmu <- attr(emu, "function.name")

  lsigma <- as.list(substitute(lsigma))
  esigma <- link2list(lsigma)
  lsigma <- attr(esigma, "function.name")

  if (length(isigma) && !is.Numeric(isigma, positive = TRUE))
    stop("argument 'isigma' must be > 0")
  if (length(zero) &&
      !is.Numeric(zero, integer.valued = TRUE, positive = TRUE))
    stop("bad input for argument 'zero'")
  if (!is.Numeric(nsimEIM, allowable.length = 1,
                  integer.valued = TRUE) || nsimEIM <= 50)
    stop("argument 'nsimEIM' should be an integer greater than 50")
  if (!is.Numeric(iprobs, positive = TRUE) ||
      max(iprobs) >= 1 || length(iprobs) != 2)
    stop("bad input for argument 'iprobs'")
  if (!is.Numeric(smallno, positive = TRUE) || smallno > 0.1)
    stop("bad input for argument 'smallno'")

  new("vglmff",
  blurb = c("Slash distribution\n\n",
            "Links: ",
            namesof("mu", lmu, earg = emu, tag = FALSE), ", ",
            namesof("sigma", lsigma, earg = esigma, tag = FALSE), "\n",
            paste("(1-exp(-(((y-mu)/sigma)^2)/2))/(sqrt(2*pi)*",
                  "sigma*((y-mu)/sigma)^2)",
                  "\ty!=mu",
                  "\n1/(2*sigma*sqrt(2*pi))",
                  "\t\t\t\t\t\t\ty=mu\n")),
  constraints = eval(substitute(expression({
    constraints <- cm.zero.vgam(constraints, x, .zero, M)
  }), list( .zero = zero ))),
  initialize = eval(substitute(expression({
    temp5 <- w.y.check(w = w, y = y,
                       ncol.w.max = 1, ncol.y.max = 1,
                       out.wy = TRUE, maximize = TRUE)
    w <- temp5$w
    y <- temp5$y

    predictors.names <- c(
      namesof("mu", .lmu, earg = .emu, tag = FALSE),
      namesof("sigma", .lsigma, earg = .esigma, tag = FALSE))

    if (!length(etastart)) {
      slash.Loglikfun <- function(mu, y, x, w, extraargs) {
        sigma <- if (is.Numeric(.isigma)) .isigma else
          max(0.01,
              ((quantile(rep(y, w), prob = 0.75)/2)-mu)/qnorm(0.75))
        zedd <- (y-mu)/sigma
        sum(c(w) * ifelse(abs(zedd)<.smallno,
                          -log(2*sigma*sqrt(2*pi)),
                          log1p(-exp(-zedd^2/2)) -
                          log(sqrt(2*pi) * sigma * zedd^2)))
      }
      iprobs <- .iprobs
      mu.grid <- quantile(rep(y, w), probs = iprobs)
      mu.grid <- seq(mu.grid[1], mu.grid[2], length = 100)
      mu.init <- if (length( .imu )) .imu else
        getMaxMin(mu.grid, objfun = slash.Loglikfun,
                  y = y, x = x, w = w)
      sigma.init <- if (is.Numeric(.isigma)) .isigma else
        max(0.01,
            ((quantile(rep(y, w), prob = 0.75)/2) - mu.init) / qnorm(0.75))
      mu.init <- rep(mu.init, length = length(y))
      etastart <- matrix(0, n, 2)
      etastart[, 1] <- theta2eta(mu.init, .lmu, earg = .emu)
      etastart[, 2] <- theta2eta(sigma.init, .lsigma, earg = .esigma)
    }
  }), list( .lmu = lmu, .lsigma = lsigma,
            .imu = imu, .isigma = isigma,
            .emu = emu, .esigma = esigma,
            .iprobs = iprobs, .smallno = smallno))),
  linkinv = eval(substitute(function(eta, extra = NULL){
    NA * eta2theta(eta[, 1], link = .lmu, earg = .emu)
  }, list( .lmu = lmu, .emu = emu ))),
  last = eval(substitute(expression({
    misc$link <- c("mu" = .lmu, "sigma" = .lsigma)
    misc$earg <- list("mu" = .emu, "sigma" = .esigma)
    misc$expected <- TRUE
    misc$nsimEIM <- .nsimEIM
  }), list( .lmu = lmu, .lsigma = lsigma,
            .emu = emu, .esigma = esigma, .nsimEIM = nsimEIM ))),
  loglikelihood = eval(substitute(
    function(mu,y, w, residuals = FALSE,eta,extra = NULL) {
    mu <- eta2theta(eta[, 1], link = .lmu, earg = .emu)
    sigma <- eta2theta(eta[, 2], link = .lsigma, earg = .esigma)
    zedd <- (y - mu) / sigma
    if (residuals) stop("loglikelihood residuals not implemented yet") else {
      sum(c(w) * dslash(x = y, mu = mu, sigma = sigma, log = TRUE,
                        smallno = .smallno))
    }
  }, list( .lmu = lmu, .lsigma = lsigma,
           .emu = emu, .esigma = esigma, .smallno = smallno ))),
  vfamily = c("slash"),
  deriv = eval(substitute(expression({
    mu <- eta2theta(eta[, 1], link = .lmu, earg = .emu)
    sigma <- eta2theta(eta[, 2], link = .lsigma, earg = .esigma)
    dmu.deta <- dtheta.deta(mu, link = .lmu, earg = .emu)
    dsigma.deta <- dtheta.deta(sigma, link = .lsigma, earg = .esigma)
    zedd <- (y-mu)/sigma
    d3 <- deriv3(~ w * log(1-exp(-(((y-mu)/sigma)^2)/2))-
                     log(sqrt(2*pi)*sigma*((y-mu)/sigma)^2),
                 c("mu", "sigma"))
    eval.d3 <- eval(d3)
    dl.dthetas <- attr(eval.d3, "gradient")
    dl.dmu <- dl.dthetas[, 1]
    dl.dsigma <- dl.dthetas[, 2]
    ind0 <- (abs(zedd) < .smallno)
    dl.dmu[ind0] <- 0
    dl.dsigma[ind0] <- -1/sigma[ind0]
    ans <- c(w) * cbind(dl.dmu * dmu.deta,
                        dl.dsigma * dsigma.deta)
    ans
  }), list( .lmu = lmu, .lsigma = lsigma,
            .emu = emu, .esigma = esigma, .smallno = smallno ))),
  weight = eval(substitute(expression({
    run.varcov <- 0
    ind1 <- iam(NA, NA, M = M, both = TRUE, diag = TRUE)
    sd3 <- deriv3(~ w * log(1-exp(-(((ysim-mu)/sigma)^2)/2))-
                      log(sqrt(2*pi)*sigma*((ysim-mu)/sigma)^2),
                  c("mu", "sigma"))
    for(ii in 1:( .nsimEIM )) {
      ysim <- rslash(n, mu = mu, sigma = sigma)
      seval.d3 <- eval(sd3)
      dl.dthetas <- attr(seval.d3, "gradient")
      dl.dmu <- dl.dthetas[, 1]
      dl.dsigma <- dl.dthetas[, 2]
      temp3 <- cbind(dl.dmu, dl.dsigma)
      run.varcov <- ((ii-1) * run.varcov +
                     temp3[, ind1$row.index] *
                     temp3[, ind1$col.index]) / ii
    }
    wz <- if (intercept.only)
      matrix(colMeans(run.varcov, na.rm = FALSE),
             n, ncol(run.varcov), byrow = TRUE) else run.varcov
    dthetas.detas <- cbind(dmu.deta, dsigma.deta)
    wz <- wz * dthetas.detas[, ind1$row] * dthetas.detas[, ind1$col]
    c(w) * wz
  }), list( .lmu = lmu, .lsigma = lsigma,
            .emu = emu, .esigma = esigma,
            .nsimEIM = nsimEIM, .smallno = smallno ))))
}


dnefghs <- function(x, tau, log = FALSE) {
  if (!is.logical(log.arg <- log) || length(log) != 1)
    stop("bad input for argument 'log'")
  rm(log)

  N <- max(length(x), length(tau))
  x <- rep(x, len = N);
  tau <- rep(tau, len = N);
  logdensity <- log(sin(pi*tau)) + (1-tau)*x - log(pi) - log1p(exp(x))
  logdensity[tau < 0] <- NaN
  logdensity[tau > 1] <- NaN
  if (log.arg) logdensity else exp(logdensity)
}


nefghs <- function(link = "logit", itau = NULL, imethod = 1) {
  if (length(itau) &&
      (!is.Numeric(itau, positive = TRUE) || any(itau >= 1)))
    stop("argument 'itau' must be in (0, 1)")

  link <- as.list(substitute(link))
  earg <- link2list(link)
  link <- attr(earg, "function.name")

  if (!is.Numeric(imethod, allowable.length = 1,
                  integer.valued = TRUE, positive = TRUE) || imethod > 2)
    stop("argument 'imethod' must be 1 or 2")

  new("vglmff",
  blurb = c("Natural exponential family generalized hyperbolic ",
            "secant distribution\n",
            "f(y) = sin(pi*tau)*exp((1-tau)*y)/(pi*(1+exp(y)))\n\n",
            "Link:    ",
            namesof("tau", link, earg = earg), "\n\n",
            "Mean:     pi / tan(pi * tau)\n"),
  initialize = eval(substitute(expression({
    temp5 <- w.y.check(w = w, y = y,
                       ncol.w.max = 1, ncol.y.max = 1,
                       out.wy = TRUE, maximize = TRUE)
    w <- temp5$w
    y <- temp5$y

    predictors.names <- namesof("tau", .link , earg = .earg ,
                                tag = FALSE)

    if (!length(etastart)) {
      wmeany <- if ( .imethod == 1) weighted.mean(y, w) else
        median(rep(y, w))
      if (abs(wmeany) < 0.01)
        wmeany <- 0.01
      tau.init <- atan(pi / wmeany) / pi + 0.5
      tau.init[tau.init < 0.03] <- 0.03
      tau.init[tau.init > 0.97] <- 0.97
      tau.init <- rep(if (length( .itau )) .itau else tau.init,
                      len = n)
      etastart <- theta2eta(tau.init, .link , earg = .earg )
    }
  }), list( .link = link, .earg = earg,
            .itau = itau, .imethod = imethod ))),
  linkinv = eval(substitute(function(eta, extra = NULL) {
    tau <- eta2theta(eta, .link , earg = .earg )
    pi / tan(pi * tau)
  }, list( .link = link, .earg = earg ))),
  last = eval(substitute(expression({
    misc$link <- c(tau = .link)
    misc$earg <- list(tau = .earg )
    misc$expected <- TRUE
    misc$imethod <- .imethod
  }), list( .link = link, .earg = earg, .imethod = imethod ))),
  loglikelihood = eval(substitute(
    function(mu,y, w, residuals = FALSE,eta, extra = NULL) {
    tau <- eta2theta(eta, .link , earg = .earg )
    if (residuals) stop("loglikelihood residuals not implemented yet") else {
      sum(c(w) * dnefghs(x = y, tau = tau, log = TRUE))
    }
  }, list( .link = link, .earg = earg ))),
  vfamily = c("nefghs"),
  deriv = eval(substitute(expression({
    tau <- eta2theta(eta, .link , earg = .earg )
    dl.dtau <- pi / tan(pi * tau) - y
    dtau.deta <- dtheta.deta(tau, .link , earg = .earg )
    w * dl.dtau * dtau.deta
  }), list( .link = link, .earg = earg ))),
  weight = eval(substitute(expression({
    d2l.dtau2 <- (pi / sin(pi * tau))^2
    wz <- d2l.dtau2 * dtau.deta^2
    c(w) * wz
  }), list( .link = link ))))
}


dlogF <- function(x, shape1, shape2, log = FALSE) {
  if (!is.logical(log.arg <- log) || length(log) != 1)
    stop("bad input for argument 'log'")
  rm(log)

  logdensity <- -shape2*x - lbeta(shape1, shape2) -
                (shape1 + shape2) * log1p(exp(-x))
  if (log.arg) logdensity else exp(logdensity)
}


logF <- function(lshape1 = "loge", lshape2 = "loge",
                 ishape1 = NULL, ishape2 = 1, imethod = 1) {
  if (length(ishape1) && !is.Numeric(ishape1, positive = TRUE))
    stop("argument 'ishape1' must be positive")
  if ( # length(ishape2) &&
      !is.Numeric(ishape2, positive = TRUE))
    stop("argument 'ishape2' must be positive")

  lshape1 <- as.list(substitute(lshape1))
  eshape1 <- link2list(lshape1)
  lshape1 <- attr(eshape1, "function.name")

  lshape2 <- as.list(substitute(lshape2))
  eshape2 <- link2list(lshape2)
  lshape2 <- attr(eshape2, "function.name")

  if (!is.Numeric(imethod, allowable.length = 1,
                  integer.valued = TRUE, positive = TRUE) || imethod > 2)
    stop("argument 'imethod' must be 1 or 2")

  new("vglmff",
  blurb = c("log F distribution\n",
            "f(y) = exp(-shape2 * y) / (beta(shape1, shape2) * ",
            "(1 + exp(-y))^(shape1 + shape2))\n\n",
            "Link:    ",
            namesof("shape1", lshape1, earg = eshape1), ", ",
            namesof("shape2", lshape2, earg = eshape2), "\n\n",
            "Mean:     digamma(shape1) - digamma(shape2)"),
  initialize = eval(substitute(expression({
    temp5 <- w.y.check(w = w, y = y,
                       ncol.w.max = 1, ncol.y.max = 1,
                       out.wy = TRUE, maximize = TRUE)
    w <- temp5$w
    y <- temp5$y

    predictors.names <- c(
      namesof("shape1", .lshape1, earg = .eshape1, tag = FALSE),
      namesof("shape2", .lshape2, earg = .eshape2, tag = FALSE))

    if (!length(etastart)) {
      wmeany <- if ( .imethod == 1) weighted.mean(y, w) else
        median(rep(y, w))

      shape1.init <- shape2.init <- rep( .ishape2 , len = n)
      shape1.init <- if (length( .ishape1)) rep( .ishape1, len = n) else {
        index1 <- (y > wmeany)
        shape1.init[ index1] <- shape2.init[ index1] + 1/1
        shape1.init[!index1] <- shape2.init[!index1] - 1/1
        shape1.init <- pmax(shape1.init, 1/8)
        shape1.init
      }
      etastart <-
        cbind(theta2eta(shape1.init, .lshape1, earg = .eshape1),
              theta2eta(shape2.init, .lshape2, earg = .eshape2))
    }
  }), list( .lshape1 = lshape1, .lshape2 = lshape2,
            .eshape1 = eshape1, .eshape2 = eshape2,
            .ishape1 = ishape1, .ishape2 = ishape2,
            .imethod = imethod ))),
  linkinv = eval(substitute(function(eta, extra = NULL) {
    shape1 <- eta2theta(eta[, 1],
.lshape1, earg = .eshape1) shape2 <- eta2theta(eta[, 2], .lshape2, earg = .eshape2) digamma(shape1) - digamma(shape2) }, list( .lshape1 = lshape1, .lshape2 = lshape2, .eshape1 = eshape1, .eshape2 = eshape2 ))), last = eval(substitute(expression({ misc$link <- c(shape1 = .lshape1, shape2 = .lshape2) misc$earg <- list(shape1 = .eshape1, shape2 = .eshape2) misc$expected <- TRUE misc$imethod <- .imethod }), list( .lshape1 = lshape1, .lshape2 = lshape2, .eshape1 = eshape1, .eshape2 = eshape2, .imethod = imethod ))), loglikelihood = eval(substitute( function(mu,y, w, residuals = FALSE,eta, extra = NULL) { shape1 <- eta2theta(eta[, 1], .lshape1, earg = .eshape1) shape2 <- eta2theta(eta[, 2], .lshape2, earg = .eshape2) if (residuals) stop("loglikelihood residuals not implemented yet") else { sum(c(w) * dlogF(x = y, shape1 = shape1, shape2 = shape2, log = TRUE)) } }, list( .lshape1 = lshape1, .lshape2 = lshape2, .eshape1 = eshape1, .eshape2 = eshape2 ))), vfamily = c("logF"), deriv = eval(substitute(expression({ shape1 <- eta2theta(eta[, 1], .lshape1, earg = .eshape1) shape2 <- eta2theta(eta[, 2], .lshape2, earg = .eshape2) tmp888 <- digamma(shape1 + shape2) - log1p(exp(-y)) dl.dshape1 <- tmp888 - digamma(shape1) dl.dshape2 <- tmp888 - digamma(shape2) - y dshape1.deta <- dtheta.deta(shape1, .lshape1, earg = .eshape1) dshape2.deta <- dtheta.deta(shape2, .lshape2, earg = .eshape2) c(w) * cbind(dl.dshape1 * dshape1.deta, dl.dshape2 * dshape2.deta) }), list( .lshape1 = lshape1, .lshape2 = lshape2, .eshape1 = eshape1, .eshape2 = eshape2 ))), weight = eval(substitute(expression({ tmp888 <- trigamma(shape1 + shape2) ned2l.dshape12 <- trigamma(shape1) - tmp888 ned2l.dshape22 <- trigamma(shape2) - tmp888 ned2l.dshape1shape2 <- -tmp888 wz <- matrix(0, n, dimm(M)) wz[,iam(1, 1, M = M)] <- ned2l.dshape12 * dshape1.deta^2 wz[,iam(2, 2, M = M)] <- ned2l.dshape22 * dshape2.deta^2 wz[,iam(1, 2, M = M)] <- ned2l.dshape1shape2 * dshape1.deta * dshape2.deta c(w) * wz }), list( .lshape1 = lshape1, .lshape2 = lshape2, .eshape1 = eshape1, .eshape2 = eshape2 )))) } dbenf <- function(x, ndigits = 1, log = FALSE) { if (!is.Numeric(ndigits, allowable.length = 1, positive = TRUE, integer.valued = TRUE) || ndigits > 2) stop("argument 'ndigits' must be 1 or 2") lowerlimit <- ifelse(ndigits == 1, 1, 10) upperlimit <- ifelse(ndigits == 1, 9, 99) if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) ans <- x * NA indexTF <- is.finite(x) & (x >= lowerlimit) ans[indexTF] <- log10(1 + 1/x[indexTF]) ans[!is.na(x) & !is.nan(x) & ((x < lowerlimit) | (x > upperlimit) | (x != round(x)))] <- 0.0 if (log.arg) log(ans) else ans } rbenf <- function(n, ndigits = 1) { if (!is.Numeric(ndigits, allowable.length = 1, positive = TRUE, integer.valued = TRUE) || ndigits > 2) stop("argument 'ndigits' must be 1 or 2") lowerlimit <- ifelse(ndigits == 1, 1, 10) upperlimit <- ifelse(ndigits == 1, 9, 99) use.n <- if ((length.n <- length(n)) > 1) length.n else if (!is.Numeric(n, integer.valued = TRUE, allowable.length = 1, positive = TRUE)) stop("bad input for argument 'n'") else n myrunif <- runif(use.n) ans <- rep(lowerlimit, length = use.n) for(ii in (lowerlimit+1):upperlimit) { indexTF <- (pbenf(ii-1, ndigits = ndigits) < myrunif) & (myrunif <= pbenf(ii, ndigits = ndigits)) ans[indexTF] <- ii } ans } pbenf <- function(q, ndigits = 1, log.p = FALSE) { if (!is.Numeric(ndigits, allowable.length = 1, positive = TRUE, integer.valued = TRUE) || ndigits > 2) stop("argument 'ndigits' must be 1 or 2") lowerlimit <- 
                ifelse(ndigits == 1, 1, 10)
  upperlimit <- ifelse(ndigits == 1, 9, 99)
  ans <- q * NA
  floorq <- floor(q)
  indexTF <- is.finite(q) & (floorq >= lowerlimit)
  ans[indexTF] <- log10(1 + floorq[indexTF]) -
                  ifelse(ndigits == 1, 0, 1)
  ans[!is.na(q) & !is.nan(q) & (q >= upperlimit)] <- 1
  ans[!is.na(q) & !is.nan(q) & (q <  lowerlimit)] <- 0
  if (log.p) log(ans) else ans
}


qbenf <- function(p, ndigits = 1) {
  if (!is.Numeric(ndigits, allowable.length = 1,
                  positive = TRUE, integer.valued = TRUE) ||
      ndigits > 2)
    stop("argument 'ndigits' must be 1 or 2")
  lowerlimit <- ifelse(ndigits == 1, 1, 10)
  upperlimit <- ifelse(ndigits == 1, 9, 99)
  bad <- !is.na(p) & !is.nan(p) & ((p < 0) | (p > 1))
  if (any(bad))
    stop("bad input for argument 'p'")

  ans <- rep(lowerlimit, length = length(p))
  for(ii in (lowerlimit+1):upperlimit) {
    indexTF <- is.finite(p) &
               (pbenf(ii-1, ndigits = ndigits) < p) &
               (p <= pbenf(ii, ndigits = ndigits))
    ans[indexTF] <- ii
  }

  ans[ is.na(p) |  is.nan(p)] <- NA
  ans[!is.na(p) & !is.nan(p) & (p == 0)] <- lowerlimit
  ans[!is.na(p) & !is.nan(p) & (p == 1)] <- upperlimit
  ans
}


truncgeometric <-
  function(upper.limit = Inf,  # lower.limit = 1,  # Inclusive
           link = "logit", expected = TRUE,
           imethod = 1, iprob = NULL, zero = NULL) {
  if (is.finite(upper.limit) &&
      !is.Numeric(upper.limit, integer.valued = TRUE, positive = TRUE))
    stop("bad input for argument 'upper.limit'")
  if (any(upper.limit < 0))
    stop("bad input for argument 'upper.limit'")

  if (!is.logical(expected) || length(expected) != 1)
    stop("bad input for argument 'expected'")

  link <- as.list(substitute(link))
  earg <- link2list(link)
  link <- attr(earg, "function.name")

  if (!is.Numeric(imethod, allowable.length = 1,
                  integer.valued = TRUE, positive = TRUE) || imethod > 3)
    stop("argument 'imethod' must be 1 or 2 or 3")

  if (length(zero) &&
      !is.Numeric(zero, integer.valued = TRUE, positive = TRUE))
    stop("bad input for argument 'zero'")

  uu.ll <- min(upper.limit)

  new("vglmff",
  blurb = c("Truncated geometric distribution ",
            "(P[Y=y] =\n", "  ",
            "prob * (1 - prob)^y / [1-(1-prob)^", uu.ll+1, "], ",
            "y = 0,1,...,", uu.ll, ")\n",
            "Link:    ",
            namesof("prob", link, earg = earg), "\n",
            "Mean:    mu = 1 / prob - 1 ",
            ifelse(is.finite(upper.limit),
                   paste("- (", upper.limit+1, ") * (1 - prob)^",
                         upper.limit+1, " / (1 - ",
                         "(1 - prob)^", upper.limit+1, ")", sep = ""),
                   "")),
  constraints = eval(substitute(expression({
    dotzero <- .zero
    Musual <- 1
    eval(negzero.expression)
  }), list( .zero = zero ))),

  infos = eval(substitute(function(...)
  { list(Musual = 1,
         upper.limit = .upper.limit ,
         zero = .zero )
  }, list( .zero = zero, .upper.limit = upper.limit ))),

  initialize = eval(substitute(expression({
    temp5 <- w.y.check(w = w, y = y,
                       Is.nonnegative.y = TRUE,
                       Is.integer.y = TRUE,
                       ncol.w.max = Inf, ncol.y.max = Inf,
                       out.wy = TRUE, colsyperw = 1, maximize = TRUE)
    w <- temp5$w
    y <- temp5$y

    ncoly <- ncol(y)
    Musual <- 1
    extra$ncoly <- ncoly
    extra$Musual <- Musual
    M <- Musual * ncoly

    extra$upper.limit <- matrix( .upper.limit , n, ncoly, byrow = TRUE)
    if (any(y > extra$upper.limit))
      stop("some response values greater than argument 'upper.limit'")

    mynames1 <- paste("prob", if (ncoly > 1) 1:ncoly else "", sep = "")
    predictors.names <- namesof(mynames1, .link , earg = .earg ,
                                tag = FALSE)

    if (!length(etastart)) {
      prob.init <- if ( .imethod == 2)
                     1 / (1 + y + 1/16) else
                   if ( .imethod == 3)
                     1 / (1 + apply(y, 2, median) + 1/16) else
                     1 / (1 + colSums(y * w) / colSums(w) + 1/16)

      if (!is.matrix(prob.init))
        prob.init <- matrix(prob.init, n, M, byrow = TRUE)

      if (length( .iprob ))
        prob.init <- matrix( .iprob , n, M, byrow = TRUE)

      etastart <- theta2eta(prob.init, .link , earg = .earg )
    }
  }), list( .link = link, .earg = earg,
            .upper.limit = upper.limit,
            .imethod = imethod, .iprob = iprob ))),
  linkinv = eval(substitute(function(eta, extra = NULL) {
    prob <- eta2theta(eta, .link , earg = .earg )
    QQQ <- 1 - prob
    upper.limit <- extra$upper.limit
    tmp1 <- QQQ^(upper.limit+1)
    answer <- 1 / prob - 1 - (upper.limit+1) * tmp1 / (1 - tmp1)
    answer[!is.finite(answer)] <- 1 / prob[!is.finite(answer)] - 1
    answer
  }, list( .link = link, .earg = earg ))),
  last = eval(substitute(expression({
    Musual <- extra$Musual
    misc$link <- c(rep( .link , length = ncoly))
    names(misc$link) <- mynames1
    misc$earg <- vector("list", M)
    names(misc$earg) <- mynames1
    for(ii in 1:ncoly) {
      misc$earg[[ii]] <- .earg
    }
    misc$Musual <- Musual
    misc$multipleResponses <- TRUE
    misc$expected <- .expected
    misc$imethod <- .imethod
    misc$iprob <- .iprob
  }), list( .link = link, .earg = earg,
            .iprob = iprob,
            .upper.limit = upper.limit,
            .expected = expected, .imethod = imethod ))),
  loglikelihood = eval(substitute(
    function(mu, y, w, residuals = FALSE, eta, extra = NULL) {
    prob <- eta2theta(eta, .link , earg = .earg )
    if (residuals) stop("loglikelihood residuals not ",
                        "implemented yet") else {
      upper.limit <- extra$upper.limit
      sum(c(w) * (dgeom(x = y, prob = prob, log = TRUE) -
                  log1p(-(1.0 - prob)^(1 + upper.limit))))
    }
  }, list( .link = link, .earg = earg ))),
  vfamily = c("truncgeometric"),
  deriv = eval(substitute(expression({
    prob <- eta2theta(eta, .link , earg = .earg )
    sss <- upper.limit <- extra$upper.limit  # Is a matrix
    QQQ <- 1 - prob
    tmp1 <- QQQ^(upper.limit + 1)
    dl.dprob <- 1 / prob + (0 - y) / (1 - prob) -
                (1+upper.limit) * QQQ^(upper.limit - 0) / (1 - tmp1)
    dl.dprob[!is.finite(upper.limit)] <-
      1 / prob[!is.finite(upper.limit)] +
      (0 - y[!is.finite(upper.limit)]) / (1 - prob[!is.finite(upper.limit)])
    dprobdeta <- dtheta.deta(prob, .link , earg = .earg )
    c(w) * cbind(dl.dprob * dprobdeta)
  }), list( .link = link, .earg = earg,
            .upper.limit = upper.limit,
            .expected = expected ))),
  weight = eval(substitute(expression({
    eim.oim.fun <- function(mu.y, sss)
      ifelse(is.finite(sss),
             1/prob^2 + (0 + mu.y) / QQQ^2 -
             (1+sss) * ((sss-0) * QQQ^(sss-1) / (1 - tmp1) +
                        (1+sss) * QQQ^(2*sss) / (1 - tmp1)^2),
             1 / (prob^2 * (1 - prob)))

    ned2l.dprob2 <- if ( .expected ) {
      eim.oim.fun(mu, sss)
    } else {
      eim.oim.fun(y, sss)
    }
    wz <- ned2l.dprob2 * dprobdeta^2
    if ( !( .expected ))
      wz <- wz - dl.dprob * d2theta.deta2(prob, .link , earg = .earg )
    c(w) * wz
  }), list( .link = link, .earg = earg,
            .expected = expected ))))
}
VGAM/R/family.actuary.R0000644000176000001440000040001612136651110014305 0ustar ripleyusers# These functions are
# Copyright (C) 1998-2013 T.W. Yee, University of Auckland.
# All rights reserved.


dgumbelII <- function(x, shape, scale = 1, log = FALSE) {
  if (!is.logical(log.arg <- log) || length(log) != 1)
    stop("bad input for argument 'log'")
  rm(log)

  LLL <- max(length(x), length(shape), length(scale))
  if (length(x) != LLL) x <- rep(x, length.out = LLL)
  if (length(shape) != LLL) shape <- rep(shape, length.out = LLL)
  if (length(scale) != LLL) scale <- rep(scale, length.out = LLL)

  ans <- x
  index0 <- (x < 0) & is.finite(x) & !is.na(x)
  ans[!index0] <- log(shape[!index0] / scale[!index0]) +
                  (shape[!index0] + 1) * log(scale[!index0] / x[!index0]) -
                  (x[!index0] / scale[!index0])^(-shape[!index0])
  ans[index0] <- log(0)
  ans[x == Inf] <- log(0)

  if (log.arg) {
  } else {
    ans <- exp(ans)
    ans[index0] <- 0
    ans[x == Inf] <- 0
  }
  ans[shape <= 0 | scale <= 0] <- NaN
  ans
}


pgumbelII <- function(q, shape, scale = 1) {
  LLL <- max(length(q), length(shape), length(scale))
  if (length(q) != LLL) q <- rep(q, length.out = LLL)
  if (length(shape) != LLL) shape <- rep(shape, length.out = LLL)
  if (length(scale) != LLL) scale <- rep(scale, length.out = LLL)

  ans <- exp(-(q / scale)^(-shape))
  ans[(q <= 0)] <- 0
  ans[shape <= 0 | scale <= 0] <- NaN
  ans[q == Inf] <- 1
  ans
}


qgumbelII <- function(p, shape, scale = 1) {
  LLL <- max(length(p), length(shape), length(scale))
  if (length(p) != LLL) p <- rep(p, length.out = LLL)
  if (length(shape) != LLL) shape <- rep(shape, length.out = LLL)
  if (length(scale) != LLL) scale <- rep(scale, length.out = LLL)

  ans <- scale * (-log(p))^(-1 / shape)
  ans[p < 0] <- NaN
  ans[p == 0] <- 0
  ans[p == 1] <- Inf
  ans[p > 1] <- NaN
  ans[shape <= 0 | scale <= 0] <- NaN
  ans
}


rgumbelII <- function(n, shape, scale = 1) {
  qgumbelII(runif(n), shape = shape, scale = scale)
}


gumbelII <- function(lshape = "loge", lscale = "loge",
                     ishape = NULL, iscale = NULL,
                     probs.y = c(0.2, 0.5, 0.8),
                     perc.out = NULL,  # 50,
                     imethod = 1, zero = -2) {
  lshape <- as.list(substitute(lshape))
  eshape <- link2list(lshape)
  lshape <- attr(eshape, "function.name")

  lscale <- as.list(substitute(lscale))
  escale <- link2list(lscale)
  lscale <- attr(escale, "function.name")

  if (length(zero) && !is.Numeric(zero, integer.valued = TRUE))
    stop("bad input for argument 'zero'")
  if (!is.Numeric(imethod, allowable.length = 1,
                  integer.valued = TRUE, positive = TRUE) || imethod > 2)
    stop("argument 'imethod' must be 1 or 2")

  if (!is.Numeric(probs.y, positive = TRUE) ||
      length(probs.y) < 2 || max(probs.y) >= 1)
    stop("bad input for argument 'probs.y'")
  if (length(perc.out))
    if (!is.Numeric(perc.out, positive = TRUE) || max(perc.out) >= 100)
      stop("bad input for argument 'perc.out'")

  if (length(ishape))
    if (!is.Numeric(ishape, positive = TRUE))
      stop("argument 'ishape' values must be positive")
  if (length(iscale))
    if (!is.Numeric(iscale, positive = TRUE))
      stop("argument 'iscale' values must be positive")

  new("vglmff",
  blurb = c("Gumbel Type II distribution\n\n",
            "Links: ",
            namesof("shape", lshape, eshape), ", ",
            namesof("scale", lscale, escale), "\n",
            "Mean:     scale^(1/shape) * gamma(1 - 1 / shape)\n",
            "Variance: scale^(2/shape) * (gamma(1 - 2/shape) - ",
            "gamma(1 + 1/shape)^2)"),
  constraints = eval(substitute(expression({
    dotzero <- .zero
    Musual <- 2
    eval(negzero.expression)
  }), list( .zero = zero ))),

  infos = eval(substitute(function(...)
{ list(Musual = 2, perc.out = .perc.out , zero = .zero ) }, list( .zero = zero, .perc.out = perc.out ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y ncoly <- ncol(y) Musual <- 2 extra$ncoly <- ncoly extra$Musual <- Musual M <- Musual * ncoly mynames1 <- paste("shape", if (ncoly > 1) 1:ncoly else "", sep = "") mynames2 <- paste("scale", if (ncoly > 1) 1:ncoly else "", sep = "") predictors.names <- c(namesof(mynames1, .lshape , .eshape , tag = FALSE), namesof(mynames2, .lscale , .escale , tag = FALSE))[ interleave.VGAM(M, M = Musual)] Shape.init <- matrix(if(length( .ishape )) .ishape else 0 + NA, n, ncoly, byrow = TRUE) Scale.init <- matrix(if(length( .iscale )) .iscale else 0 + NA, n, ncoly, byrow = TRUE) if (!length(etastart)) { if (!length( .ishape ) || !length( .iscale )) { for (ilocal in 1:ncoly) { anyc <- FALSE # extra$leftcensored | extra$rightcensored i11 <- if ( .imethod == 1) anyc else FALSE # can be all data probs.y <- .probs.y xvec <- log(-log(probs.y)) fit0 <- lsfit(y = xvec, x = log(quantile(y[!i11, ilocal], probs = probs.y ))) if (!is.Numeric(Shape.init[, ilocal])) Shape.init[, ilocal] <- -fit0$coef["X"] if (!is.Numeric(Scale.init[, ilocal])) Scale.init[, ilocal] <- exp(fit0$coef["Intercept"] / Shape.init[, ilocal]) } # ilocal etastart <- cbind(theta2eta(Shape.init, .lshape , .eshape ), theta2eta(Scale.init, .lscale , .escale ))[, interleave.VGAM(M, M = Musual)] } } }), list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape, .iscale = iscale, .ishape = ishape, .probs.y = probs.y, .imethod = imethod ) )), linkinv = eval(substitute(function(eta, extra = NULL) { Shape <- eta2theta(eta[, c(TRUE, FALSE)], .lshape , .eshape ) Scale <- eta2theta(eta[, c(FALSE, TRUE)], .lscale , .escale ) Shape <- as.matrix(Shape) if (length( .perc.out ) > 1 && ncol(Shape) > 1) stop("argument 'perc.out' should be of length one since ", "there are multiple responses") if (!length( .perc.out )) { return(Scale * gamma(1 - 1 / Shape)) } ans <- if (length( .perc.out ) > 1) { qgumbelII(p = matrix( .perc.out / 100, length(Shape), length( .perc.out ), byrow = TRUE), shape = Shape, scale = Scale) } else { qgumbelII(p = .perc.out / 100, shape = Shape, scale = Scale) } colnames(ans) <- paste(as.character( .perc.out ), "%", sep = "") ans }, list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape, .perc.out = perc.out ) )), last = eval(substitute(expression({ Musual <- extra$Musual misc$link <- c(rep( .lshape , length = ncoly), rep( .lscale , length = ncoly))[interleave.VGAM(M, M = Musual)] temp.names <- c(mynames1, mynames2)[interleave.VGAM(M, M = Musual)] names(misc$link) <- temp.names misc$earg <- vector("list", M) names(misc$earg) <- temp.names for(ii in 1:ncoly) { misc$earg[[Musual*ii-1]] <- .eshape misc$earg[[Musual*ii ]] <- .escale } misc$Musual <- Musual misc$imethod <- .imethod misc$expected <- TRUE misc$multipleResponses <- TRUE misc$perc.out <- .perc.out misc$true.mu <- FALSE # @fitted is not a true mu }), list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape, .perc.out = perc.out, .imethod = imethod ) )), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE,eta, extra = NULL) { Shape <- eta2theta(eta[, c(TRUE, FALSE)], .lshape , .eshape ) Scale <- eta2theta(eta[, c(FALSE, TRUE)], .lscale , .escale ) if (residuals) stop("loglikelihood residuals not ", 
"implemented yet") else sum(c(w) * dgumbelII(x = y, shape = Shape, scale = Scale, log = TRUE)) }, list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape ) )), vfamily = c("gumbelII"), deriv = eval(substitute(expression({ Musual <- 2 Shape <- eta2theta(eta[, c(TRUE, FALSE)], .lshape , .eshape ) Scale <- eta2theta(eta[, c(FALSE, TRUE)], .lscale , .escale ) dl.dshape <- 1 / Shape + log(Scale / y) - log(Scale / y) * (Scale / y)^Shape dl.dscale <- Shape / Scale - (Shape / y) * (Scale / y)^(Shape - 1) dshape.deta <- dtheta.deta(Shape, .lshape , .eshape ) dscale.deta <- dtheta.deta(Scale, .lscale , .escale ) myderiv <- c(w) * cbind(dl.dshape, dl.dscale) * cbind(dshape.deta, dscale.deta) myderiv[, interleave.VGAM(M, M = Musual)] }), list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape ) )), weight = eval(substitute(expression({ EulerM <- -digamma(1.0) ned2l.dshape2 <- (1 + trigamma(2) + digamma(2)^2) / Shape^2 ned2l.dscale2 <- (Shape / Scale)^2 ned2l.dshapescale <- digamma(2) / Scale wz <- array(c(c(w) * ned2l.dshape2 * dshape.deta^2, c(w) * ned2l.dscale2 * dscale.deta^2, c(w) * ned2l.dshapescale * dscale.deta * dshape.deta), dim = c(n, M / Musual, 3)) wz <- arwz2wz(wz, M = M, Musual = Musual) wz }), list( .lscale = lscale, .lshape = lshape )))) } dmbeard <- function(x, shape, scale = 1, rho, epsilon, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) LLL <- max(length(x), length(shape), length(scale), length(rho), length(epsilon)) if (length(x) != LLL) x <- rep(x, length.out = LLL) if (length(shape) != LLL) shape <- rep(shape, length.out = LLL) if (length(scale) != LLL) scale <- rep(scale, length.out = LLL) if (length(rho) != LLL) rho <- rep(rho, length.out = LLL) if (length(epsilon) != LLL) epsilon <- rep(epsilon, length.out = LLL) index0 <- (x < 0) ans <- log(epsilon * exp(-x * scale) + shape) + (-epsilon * x - ((rho * epsilon - 1) / (rho * scale)) * (log1p(rho * shape) - log(exp(-x * scale) + rho * shape) - scale * x)) - log(exp(-x * scale) + shape * rho) ans[index0] <- log(0) ans[x == Inf] <- log(0) if (log.arg) { } else { ans <- exp(ans) ans[index0] <- 0 ans[x == Inf] <- 0 } ans[shape <= 0 | scale <= 0 | rho <= 0 | epsilon <= 0] <- NaN ans } pmbeard <- function(q, shape, scale = 1, rho, epsilon) { LLL <- max(length(q), length(shape), length(scale), length(rho), length(epsilon)) if (length(q) != LLL) q <- rep(q, length.out = LLL) if (length(shape) != LLL) shape <- rep(shape, length.out = LLL) if (length(scale) != LLL) scale <- rep(scale, length.out = LLL) if (length(rho) != LLL) rho <- rep(rho, length.out = LLL) if (length(epsilon) != LLL) epsilon <- rep(epsilon, length.out = LLL) ans <- -expm1(-epsilon * q - ((rho * epsilon - 1) / (rho * scale)) * (log1p(rho * shape) - log(exp(-scale * q) + rho * shape) - scale * q)) ans[(q <= 0)] <- 0 ans[shape <= 0 | scale <= 0 | rho <= 0 | epsilon <= 0] <- NaN ans[q == Inf] <- 1 ans } dmperks <- function(x, shape, scale = 1, epsilon, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) LLL <- max(length(x), length(shape), length(scale), length(epsilon)) if (length(x) != LLL) x <- rep(x, length.out = LLL) if (length(shape) != LLL) shape <- rep(shape, length.out = LLL) if (length(scale) != LLL) scale <- rep(scale, length.out = LLL) if (length(epsilon) != LLL) epsilon <- rep(epsilon, length.out = LLL) index0 <- (x < 0) ans <- log(epsilon * exp(-x * scale) + shape) + (-epsilon * x - ((epsilon - 
1) / scale) *
    (log1p(shape) - log(shape + exp(-x * scale)) - x * scale)) -
    log(exp(-x * scale) + shape)
  ans[index0] <- log(0)
  ans[x == Inf] <- log(0)

  if (!log.arg) {
    ans <- exp(ans)
    ans[index0] <- 0
    ans[x == Inf] <- 0
  }
  ans[shape <= 0 | scale <= 0 | epsilon <= 0] <- NaN
  ans
}


pmperks <- function(q, shape, scale = 1, epsilon) {
  LLL <- max(length(q), length(shape), length(scale), length(epsilon))
  if (length(q)       != LLL) q       <- rep(q,       length.out = LLL)
  if (length(shape)   != LLL) shape   <- rep(shape,   length.out = LLL)
  if (length(scale)   != LLL) scale   <- rep(scale,   length.out = LLL)
  if (length(epsilon) != LLL) epsilon <- rep(epsilon, length.out = LLL)

  ans <- -expm1(-epsilon * q - ((epsilon - 1) / scale) *
                (log1p(shape) - log(shape + exp(-q * scale)) - q * scale))
  ans[(q <= 0)] <- 0
  ans[shape <= 0 | scale <= 0 | epsilon <= 0] <- NaN
  ans[q == Inf] <- 1
  ans
}


dbeard <- function(x, shape, scale = 1, rho, log = FALSE) {
  warning("does not integrate to unity")

  if (!is.logical(log.arg <- log) || length(log) != 1)
    stop("bad input for argument 'log'")
  rm(log)

  LLL <- max(length(x), length(shape), length(scale), length(rho))
  if (length(x)     != LLL) x     <- rep(x,     length.out = LLL)
  if (length(shape) != LLL) shape <- rep(shape, length.out = LLL)
  if (length(scale) != LLL) scale <- rep(scale, length.out = LLL)
  if (length(rho)   != LLL) rho   <- rep(rho,   length.out = LLL)

  index0 <- (x < 0)
  ans <- log(shape) - x * scale * (rho^(-1 / scale)) + log(rho) +
    log(scale) + (rho^(-1 / scale)) * log1p(shape * rho) -
    (1 + rho^(-1 / scale)) * log(shape * rho + exp(-x * scale))
  ans[index0] <- log(0)
  ans[x == Inf] <- log(0)

  if (!log.arg) {
    ans <- exp(ans)
    ans[index0] <- 0
    ans[x == Inf] <- 0
  }
  ans[shape <= 0 | scale <= 0 | rho <= 0] <- NaN
  ans
}


# NB: this second definition masks the one above (alpha = shape,
# beta = scale); note that it ignores its 'log' argument.
dbeard <- function(x, shape, scale = 1, rho, log = FALSE) {
  alpha <- shape
  beta <- scale
  warning("does not integrate to unity")
  ifelse(x <= 0 | beta <= 0, NaN,
         exp(alpha + beta * x) *
           (1 + exp(alpha + rho))^(exp(-rho / beta)) /
           (1 + exp(alpha + rho + beta * x))^(1 + exp(-rho / beta)))
}


qbeard <- function(x, u = 0.5, alpha = 1, beta = 1, rho = 1) {
  ifelse(x <= 0 | u <= 0 | u >= 1 | length(x) != length(u) | beta <= 0,
         NaN,
         (1 / beta) * (log((u^(-beta * exp(rho))) *
           (1 + exp(alpha + rho + beta * x)) - 1) - alpha - rho) - x)
}


dperks <- function(x, shape, scale = 1, log = FALSE) {
  if (!is.logical(log.arg <- log) || length(log) != 1)
    stop("bad input for argument 'log'")
  rm(log)

  LLL <- max(length(x), length(shape), length(scale))
  if (length(x)     != LLL) x     <- rep(x,     length.out = LLL)
  if (length(shape) != LLL) shape <- rep(shape, length.out = LLL)
  if (length(scale) != LLL) scale <- rep(scale, length.out = LLL)

  index0 <- (x < 0)
  ans <- log(shape) - x + log1p(shape) / scale -
    (1 + 1 / scale) * log(shape + exp(-x * scale))
  ans[index0] <- log(0)
  ans[x == Inf] <- log(0)

  if (!log.arg) {
    ans <- exp(ans)
    ans[index0] <- 0
    ans[x == Inf] <- 0
  }
  ans[shape <= 0 | scale <= 0] <- NaN
  ans
}


pperks <- function(q, shape, scale = 1) {
  LLL <- max(length(q), length(shape), length(scale))
  if (length(q)     != LLL) q     <- rep(q,     length.out = LLL)
  if (length(shape) != LLL) shape <- rep(shape, length.out = LLL)
  if (length(scale) != LLL) scale <- rep(scale, length.out = LLL)

  logS <- -q + (log1p(shape) - log(shape + exp(-q * scale))) / scale
  ans <- -expm1(logS)
  ans[(q <= 0)] <- 0
  ans[shape <= 0 | scale <= 0] <- NaN
  ans[q == Inf] <- 1
  ans
}


qperks <- function(p, shape, scale = 1) {
  LLL <- max(length(p), length(shape), length(scale))
  if (length(p)     != LLL) p     <- rep(p,     length.out = LLL)
  if (length(shape) != LLL) shape <- rep(shape, length.out = LLL)
  if (length(scale) != LLL) scale <- rep(scale, length.out = LLL)

  tmp <- scale * log1p(-p)
  onemFb <- exp(tmp)
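  # Quantile via inverting the Perks survival function
  # S(q) = ((1 + shape) / (1 + shape * exp(scale * q)))^(1/scale):
  # with onemFb = (1 - p)^scale, solving S(q) = 1 - p for q gives
  # exp(scale * q) = (1 + shape - onemFb) / (shape * onemFb),
  # i.e. the expression below.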
ans <- (log1p(shape - onemFb) - log(shape) - tmp) / scale ans[p < 0] <- NaN ans[p == 0] <- 0 ans[p > 1] <- NaN ans[p == 1] <- Inf ans[shape <= 0 | scale <= 0] <- NaN ans } rperks <- function(n, shape, scale = 1) { qperks(runif(n), shape = shape, scale = scale) } perks.control <- function(save.weight = TRUE, ...) { list(save.weight = save.weight) } perks <- function(lshape = "loge", lscale = "loge", ishape = NULL, iscale = NULL, gshape = exp(-5:5), gscale = exp(-5:5), nsimEIM = 500, oim.mean = FALSE, zero = NULL) { lshape <- as.list(substitute(lshape)) eshape <- link2list(lshape) lshape <- attr(eshape, "function.name") lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") if (!is.Numeric(nsimEIM, allowable.length = 1, integer.valued = TRUE)) stop("bad input for argument 'nsimEIM'") if (nsimEIM <= 50) warning("argument 'nsimEIM' should be an integer ", "greater than 50, say") if (length(ishape)) if (!is.Numeric(ishape, positive = TRUE)) stop("argument 'ishape' values must be positive") if (length(iscale)) if (!is.Numeric(iscale, positive = TRUE)) stop("argument 'iscale' values must be positive") if (!is.logical(oim.mean) || length(oim.mean) != 1) stop("bad input for argument 'oim.mean'") new("vglmff", blurb = c("Perks' distribution\n\n", "Links: ", namesof("shape", lshape, eshape), ", ", namesof("scale", lscale, escale), "\n", "Median: qperks(p = 0.5, shape, scale)"), constraints = eval(substitute(expression({ dotzero <- .zero Musual <- 2 eval(negzero.expression) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(Musual = 2, nsimEIM = .nsimEIM, zero = .zero ) }, list( .zero = zero, .nsimEIM = nsimEIM ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y ncoly <- ncol(y) Musual <- 2 extra$ncoly <- ncoly extra$Musual <- Musual M <- Musual * ncoly mynames1 <- paste("shape", if (ncoly > 1) 1:ncoly else "", sep = "") mynames2 <- paste("scale", if (ncoly > 1) 1:ncoly else "", sep = "") predictors.names <- c(namesof(mynames1, .lshape , .eshape , tag = FALSE), namesof(mynames2, .lscale , .escale , tag = FALSE))[ interleave.VGAM(M, M = Musual)] if (!length(etastart)) { matH <- matrix(if (length( .ishape )) .ishape else 0 + NA, n, ncoly, byrow = TRUE) matC <- matrix(if (length( .iscale )) .iscale else 0 + NA, n, ncoly, byrow = TRUE) shape.grid <- .gshape scale.grid <- .gscale for (spp. in 1:ncoly) { yvec <- y[, spp.] wvec <- w[, spp.] perks.Loglikfun <- function(scaleval, y, x, w, extraargs) { ans <- sum(c(w) * dperks(x = y, shape = extraargs$Shape, scale = scaleval, log = TRUE)) ans } mymat <- matrix(-1, length(shape.grid), 2) for (jlocal in 1:length(shape.grid)) { mymat[jlocal, ] <- getMaxMin(scale.grid, objfun = perks.Loglikfun, y = yvec, x = x, w = wvec, ret.objfun = TRUE, extraargs = list(Shape = shape.grid[jlocal])) } index.shape <- which(mymat[, 2] == max(mymat[, 2]))[1] if (!length( .ishape )) matH[, spp.] <- shape.grid[index.shape] if (!length( .iscale )) matC[, spp.] <- mymat[index.shape, 1] } # spp. 
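# The loop above profiles the log-likelihood over 'shape.grid', using getMaxMin() to maximize over 'scale.grid' for each trial shape; matH and matC then hold the winning (shape, scale) pair per response. Roughly (an unweighted sketch; 'yy' is a hypothetical response vector):
#   prof <- sapply(shape.grid, function(sh)
#     max(sapply(scale.grid, function(sc)
#       sum(dperks(yy, shape = sh, scale = sc, log = TRUE)))))
#   shape.grid[which.max(prof)]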
etastart <- cbind(theta2eta(matH, .lshape , .eshape ), theta2eta(matC, .lscale , .escale ))[, interleave.VGAM(M, M = Musual)] } # End of !length(etastart) }), list( .lscale = lscale, .lshape = lshape, .eshape = eshape, .escale = escale, .gshape = gshape, .gscale = gscale, .ishape = ishape, .iscale = iscale ))), linkinv = eval(substitute(function(eta, extra = NULL) { Shape <- eta2theta(eta[, c(TRUE, FALSE)], .lshape , .eshape ) Scale <- eta2theta(eta[, c(FALSE, TRUE)], .lscale , .escale ) qperks(p = 0.5, shape = Shape, scale = Scale) }, list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape ))), last = eval(substitute(expression({ misc$link <- c(rep( .lshape , length = ncoly), rep( .lscale , length = ncoly))[interleave.VGAM(M, M = Musual)] temp.names <- c(mynames1, mynames2)[interleave.VGAM(M, M = Musual)] names(misc$link) <- temp.names misc$earg <- vector("list", M) names(misc$earg) <- temp.names for(ii in 1:ncoly) { misc$earg[[Musual*ii-1]] <- .eshape misc$earg[[Musual*ii ]] <- .escale } misc$Musual <- Musual misc$expected <- TRUE misc$multipleResponses <- TRUE misc$nsimEIM <- .nsimEIM }), list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape, .nsimEIM = nsimEIM ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { Shape <- eta2theta(eta[, c(TRUE, FALSE)], .lshape , .eshape ) Scale <- eta2theta(eta[, c(FALSE, TRUE)], .lscale , .escale ) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else { sum(c(w) * dperks(x = y, shape = Shape, scale = Scale, log = TRUE)) } }, list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape ))), vfamily = c("perks"), deriv = eval(substitute(expression({ Musual <- 2 shape <- eta2theta(eta[, c(TRUE, FALSE), drop = FALSE], .lshape , .eshape ) scale <- eta2theta(eta[, c(FALSE, TRUE), drop = FALSE], .lscale , .escale ) temp2 <- exp(y * scale) temp3 <- 1 + shape * temp2 dl.dshape <- 1 / shape + 1 / (scale * (1 + shape)) - (1 + 1 / scale) * temp2 / temp3 dl.dscale <- y - log1p(shape) / scale^2 + log1p(shape * temp2) / scale^2 - (1 + 1 / scale) * shape * y * temp2 / temp3 dshape.deta <- dtheta.deta(shape, .lshape , .eshape ) dscale.deta <- dtheta.deta(scale, .lscale , .escale ) dthetas.detas <- cbind(dshape.deta, dscale.deta) myderiv <- c(w) * cbind(dl.dshape, dl.dscale) * dthetas.detas myderiv[, interleave.VGAM(M, M = Musual)] }), list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape ))), weight = eval(substitute(expression({ NOS <- M / Musual dThetas.detas <- dthetas.detas[, interleave.VGAM(M, M = Musual)] wz <- matrix(0.0, n, M + M - 1) # wz is 'tridiagonal' ind1 <- iam(NA, NA, M = Musual, both = TRUE, diag = TRUE) for(spp. in 1:NOS) { run.varcov <- 0 Shape <- shape[, spp.] Scale <- scale[, spp.] if (FALSE && intercept.only && .oim.mean ) { stop("this is wrong") temp8 <- (1 + Shape * exp(Scale * y[, spp.]))^2 nd2l.dadb <- 2 * y[, spp.] 
* exp(Scale * y[, spp.]) / temp8 nd2l.dada <- 1 / Shape^2 + 1 / (1 + Shape)^2 - 2 * exp(2 * Scale * y[, spp.]) / temp8 nd2l.dbdb <- 2 * Shape * y[, spp.]^2 * exp(Scale * y[, spp.]) / temp8 ave.oim11 <- weighted.mean(nd2l.dada, w[, spp.]) ave.oim12 <- weighted.mean(nd2l.dadb, w[, spp.]) ave.oim22 <- weighted.mean(nd2l.dbdb, w[, spp.]) run.varcov <- cbind(ave.oim11, ave.oim22, ave.oim12) } else { for(ii in 1:( .nsimEIM )) { ysim <- rperks(n = n, shape = Shape, scale = Scale) if (ii < 3) { } temp2 <- exp(ysim * Scale) temp3 <- 1 + Shape * temp2 dl.dshape <- 1 / Shape + 1 / (Scale * (1 + Shape)) - (1 + 1 / Scale) * temp2 / temp3 dl.dscale <- ysim - log1p(Shape) / Scale^2 + log1p(Shape * temp2) / Scale^2 - (1 + 1 / Scale) * Shape * ysim * temp2 / temp3 temp7 <- cbind(dl.dshape, dl.dscale) if (ii < 3) { } run.varcov <- run.varcov + temp7[, ind1$row.index] * temp7[, ind1$col.index] } run.varcov <- cbind(run.varcov / .nsimEIM ) } wz1 <- if (intercept.only) matrix(colMeans(run.varcov), nrow = n, ncol = ncol(run.varcov), byrow = TRUE) else run.varcov wz1 <- wz1 * dThetas.detas[, Musual * (spp. - 1) + ind1$row] * dThetas.detas[, Musual * (spp. - 1) + ind1$col] for(jay in 1:Musual) for(kay in jay:Musual) { cptr <- iam((spp. - 1) * Musual + jay, (spp. - 1) * Musual + kay, M = M) wz[, cptr] <- wz1[, iam(jay, kay, M = Musual)] } } # End of for(spp.) loop w.wz.merge(w = w, wz = wz, n = n, M = M, ndepy = M / Musual) }), list( .lscale = lscale, .escale = escale, .nsimEIM = nsimEIM, .oim.mean = oim.mean )))) } # perks() dmakeham <- function(x, shape, scale = 1, epsilon = 0, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) LLL <- max(length(x), length(shape), length(scale), length(epsilon)) if (length(x) != LLL) x <- rep(x, length.out = LLL) if (length(shape) != LLL) shape <- rep(shape, length.out = LLL) if (length(scale) != LLL) scale <- rep(scale, length.out = LLL) if (length(epsilon) != LLL) epsilon <- rep(epsilon, length.out = LLL) index0 <- (x < 0) ans <- log(epsilon * exp(-x * scale) + shape) + x * (scale - epsilon) - (shape / scale) * expm1(x * scale) ans[index0] <- log(0) ans[x == Inf] <- log(0) if (log.arg) { } else { ans <- exp(ans) ans[index0] <- 0 ans[x == Inf] <- 0 } ans[shape <= 0 | scale <= 0 | epsilon < 0] <- NaN ans } pmakeham <- function(q, shape, scale = 1, epsilon = 0) { LLL <- max(length(q), length(shape), length(scale), length(epsilon)) if (length(q) != LLL) q <- rep(q, length.out = LLL) if (length(shape) != LLL) shape <- rep(shape, length.out = LLL) if (length(scale) != LLL) scale <- rep(scale, length.out = LLL) if (length(epsilon) != LLL) epsilon <- rep(epsilon, length.out = LLL) ans <- -expm1(-q * epsilon - (shape / scale) * expm1(scale * q)) ans[(q <= 0)] <- 0 ans[shape <= 0 | scale <= 0 | epsilon < 0] <- NaN ans[q == Inf] <- 1 ans } qmakeham <- function(p, shape, scale = 1, epsilon = 0) { LLL <- max(length(p), length(shape), length(scale), length(epsilon)) if (length(p) != LLL) p <- rep(p, length.out = LLL) if (length(shape) != LLL) shape <- rep(shape, length.out = LLL) if (length(scale) != LLL) scale <- rep(scale, length.out = LLL) if (length(epsilon) != LLL) epsilon <- rep(epsilon, length.out = LLL) ans <- shape / (scale * epsilon) - log1p(-p) / epsilon - lambertW((shape / epsilon) * exp(shape / epsilon) * (1 - p)^(-(scale / epsilon))) / scale ans[epsilon == 0] <- qgompertz(p = p[epsilon == 0], shape = shape[epsilon == 0], scale = scale[epsilon == 0]) ans[p < 0] <- NaN ans[p == 0] <- 0 ans[p == 1] <- Inf ans[p > 1] <- 
NaN ans[shape <= 0 | scale <= 0 | epsilon < 0] <- NaN ans } rmakeham <- function(n, shape, scale = 1, epsilon = 0) { qmakeham(runif(n), shape = shape, scale = scale, epsilon = epsilon) } makeham.control <- function(save.weight = TRUE, ...) { list(save.weight = save.weight) } makeham <- function(lshape = "loge", lscale = "loge", lepsilon = "loge", ishape = NULL, iscale = NULL, iepsilon = NULL, # 0.3, gshape = exp(-5:5), gscale = exp(-5:5), gepsilon = exp(-4:1), nsimEIM = 500, oim.mean = TRUE, zero = NULL) { lepsil <- lepsilon iepsil <- iepsilon lshape <- as.list(substitute(lshape)) eshape <- link2list(lshape) lshape <- attr(eshape, "function.name") lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") lepsil <- as.list(substitute(lepsil)) eepsil <- link2list(lepsil) lepsil <- attr(eepsil, "function.name") if (!is.Numeric(nsimEIM, allowable.length = 1, integer.valued = TRUE)) stop("bad input for argument 'nsimEIM'") if (nsimEIM <= 50) warning("argument 'nsimEIM' should be an integer ", "greater than 50, say") if (length(ishape)) if (!is.Numeric(ishape, positive = TRUE)) stop("argument 'ishape' values must be positive") if (length(iscale)) if (!is.Numeric(iscale, positive = TRUE)) stop("argument 'iscale' values must be positive") if (length(iepsil)) if (!is.Numeric(iepsil, positive = TRUE)) stop("argument 'iepsil' values must be positive") if (!is.logical(oim.mean) || length(oim.mean) != 1) stop("bad input for argument 'oim.mean'") new("vglmff", blurb = c("Makeham distribution\n\n", "Links: ", namesof("shape", lshape, eshape), ", ", namesof("scale", lscale, escale), ", ", namesof("epsilon", lepsil, eepsil), "\n", "Median: qmakeham(p = 0.5, shape, scale, epsilon)"), constraints = eval(substitute(expression({ dotzero <- .zero Musual <- 3 eval(negzero.expression) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(Musual = 3, nsimEIM = .nsimEIM, zero = .zero ) }, list( .zero = zero, .nsimEIM = nsimEIM ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y ncoly <- ncol(y) Musual <- 3 extra$ncoly <- ncoly extra$Musual <- Musual M <- Musual * ncoly mynames1 <- paste("shape", if (ncoly > 1) 1:ncoly else "", sep = "") mynames2 <- paste("scale", if (ncoly > 1) 1:ncoly else "", sep = "") mynames3 <- paste("epsilon", if (ncoly > 1) 1:ncoly else "", sep = "") predictors.names <- c(namesof(mynames1, .lshape , .eshape , tag = FALSE), namesof(mynames2, .lscale , .escale , tag = FALSE), namesof(mynames3, .lepsil , .eepsil , tag = FALSE))[ interleave.VGAM(M, M = Musual)] if (!length(etastart)) { matC <- matrix(if (length( .iscale )) .iscale else 0 + NA, n, ncoly, byrow = TRUE) matH <- matrix(if (length( .ishape )) .ishape else 0 + NA, n, ncoly, byrow = TRUE) matE <- matrix(if (length( .iepsil )) .iepsil else 0.3, n, ncoly, byrow = TRUE) shape.grid <- unique(sort(c( .gshape ))) scale.grid <- unique(sort(c( .gscale ))) for (spp. in 1:ncoly) { yvec <- y[, spp.] wvec <- w[, spp.] 
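# Initial values are found in two passes: first a grid search over (shape, scale) with epsilon held at its starting value (0.3 unless 'iepsilon' is given), then a one-dimensional search over 'gepsilon' with (shape, scale) fixed at the pass-one winners.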
makeham.Loglikfun <- function(scaleval, y, x, w, extraargs) { ans <- sum(c(w) * dmakeham(x = y, shape = extraargs$Shape, epsilon = extraargs$Epsil, scale = scaleval, log = TRUE)) ans } mymat <- matrix(-1, length(shape.grid), 2) for (jlocal in 1:length(shape.grid)) { mymat[jlocal, ] <- getMaxMin(scale.grid, objfun = makeham.Loglikfun, y = yvec, x = x, w = wvec, ret.objfun = TRUE, extraargs = list(Shape = shape.grid[jlocal], Epsil = matE[1, spp.])) } index.shape <- which(mymat[, 2] == max(mymat[, 2]))[1] if (!length( .ishape )) matH[, spp.] <- shape.grid[index.shape] if (!length( .iscale )) matC[, spp.] <- mymat[index.shape, 1] } # spp. epsil.grid <- c( .gepsil ) for (spp. in 1:ncoly) { yvec <- y[, spp.] wvec <- w[, spp.] makeham.Loglikfun2 <- function(epsilval, y, x, w, extraargs) { ans <- sum(c(w) * dmakeham(x = y, shape = extraargs$Shape, epsilon = epsilval, scale = extraargs$Scale, log = TRUE)) ans } Init.epsil <- getMaxMin(epsil.grid, objfun = makeham.Loglikfun2, y = yvec, x = x, w = wvec, extraargs = list(Shape = matH[1, spp.], Scale = matC[1, spp.])) matE[, spp.] <- Init.epsil } # spp. etastart <- cbind(theta2eta(matH, .lshape , .eshape ), theta2eta(matC, .lscale , .escale ), theta2eta(matE, .lepsil , .eepsil ))[, interleave.VGAM(M, M = Musual)] } # End of !length(etastart) }), list( .lshape = lshape, .lscale = lscale, .lepsil = lepsil, .eshape = eshape, .escale = escale, .eepsil = eepsil, .gshape = gshape, .gscale = gscale, .gepsil = gepsilon, .ishape = ishape, .iscale = iscale, .iepsil = iepsil ))), linkinv = eval(substitute(function(eta, extra = NULL) { shape <- eta2theta(eta[, c(TRUE, FALSE, FALSE)], .lshape , .eshape ) scale <- eta2theta(eta[, c(FALSE, TRUE, FALSE)], .lscale , .escale ) epsil <- eta2theta(eta[, c(FALSE, FALSE, TRUE)], .lepsil , .eepsil ) qmakeham(p = 0.5, shape = shape, scale = scale, epsil = epsil) }, list( .lshape = lshape, .lscale = lscale, .lepsil = lepsil, .eshape = eshape, .escale = escale, .eepsil = eepsil ))), last = eval(substitute(expression({ Musual <- extra$Musual misc$link <- c(rep( .lshape , length = ncoly), rep( .lscale , length = ncoly), rep( .lepsil , length = ncoly))[interleave.VGAM(M, M = Musual)] temp.names <- c(mynames1, mynames2, mynames3)[ interleave.VGAM(M, M = Musual)] names(misc$link) <- temp.names misc$earg <- vector("list", M) names(misc$earg) <- temp.names for(ii in 1:ncoly) { misc$earg[[Musual*ii-2]] <- .eshape misc$earg[[Musual*ii-1]] <- .escale misc$earg[[Musual*ii ]] <- .eepsil } misc$Musual <- Musual misc$expected <- TRUE misc$multipleResponses <- TRUE misc$nsimEIM <- .nsimEIM }), list( .lshape = lshape, .lscale = lscale, .lepsil = lepsil, .eshape = eshape, .escale = escale, .eepsil = eepsil, .nsimEIM = nsimEIM ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { shape <- eta2theta(eta[, c(TRUE, FALSE, FALSE)], .lshape , .eshape ) scale <- eta2theta(eta[, c(FALSE, TRUE, FALSE)], .lscale , .escale ) epsil <- eta2theta(eta[, c(FALSE, FALSE, TRUE)], .lepsil , .eepsil ) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else { sum(c(w) * dmakeham(x = y, shape = shape, scale = scale, epsil = epsil, log = TRUE)) } }, list( .lshape = lshape, .lscale = lscale, .lepsil = lepsil, .eshape = eshape, .escale = escale, .eepsil = eepsil ))), vfamily = c("makeham"), deriv = eval(substitute(expression({ Musual <- 3 shape <- eta2theta(eta[, c(TRUE, FALSE, FALSE), drop = FALSE], .lshape , .eshape ) scale <- eta2theta(eta[, c(FALSE, TRUE, FALSE), drop = FALSE], .lscale , .escale ) epsil 
<- eta2theta(eta[, c(FALSE, FALSE, TRUE), drop = FALSE], .lepsil , .eepsil ) temp2 <- exp(y * scale) temp3 <- epsil + shape * temp2 dl.dshape <- temp2 / temp3 - expm1(y * scale) / scale dl.dscale <- shape * y * temp2 / temp3 + shape * expm1(y * scale) / scale^2 - shape * y * temp2 / scale dl.depsil <- 1 / temp3 - y dshape.deta <- dtheta.deta(shape, .lshape , .eshape ) dscale.deta <- dtheta.deta(scale, .lscale , .escale ) depsil.deta <- dtheta.deta(epsil, .lepsil , .eepsil ) dthetas.detas <- cbind(dshape.deta, dscale.deta, depsil.deta) myderiv <- c(w) * cbind(dl.dshape, dl.dscale, dl.depsil) * dthetas.detas myderiv[, interleave.VGAM(M, M = Musual)] }), list( .lshape = lshape, .lscale = lscale, .lepsil = lepsil, .eshape = eshape, .escale = escale, .eepsil = eepsil ))), weight = eval(substitute(expression({ NOS <- M / Musual dThetas.detas <- dthetas.detas[, interleave.VGAM(M, M = Musual)] wz <- matrix(0.0, n, M + M - 1 + M - 2) # wz has half-bw 3 ind1 <- iam(NA, NA, M = Musual, both = TRUE, diag = TRUE) for(spp. in 1:NOS) { run.varcov <- 0 Shape <- shape[, spp.] Scale <- scale[, spp.] Epsil <- epsil[, spp.] if (FALSE && intercept.only && .oim.mean ) { temp8 <- (1 + Shape * exp(Scale * y[, spp.]))^2 nd2l.dadb <- 2 * y[, spp.] * exp(Scale * y[, spp.]) / temp8 nd2l.dada <- 1 / Shape^2 + 1 / (1 + Shape)^2 - 2 * exp(2 * Scale * y[, spp.]) / temp8 nd2l.dbdb <- 2 * Shape * y[, spp.]^2 * exp(Scale * y[, spp.]) / temp8 ave.oim11 <- weighted.mean(nd2l.dada, w[, spp.]) ave.oim12 <- weighted.mean(nd2l.dadb, w[, spp.]) ave.oim22 <- weighted.mean(nd2l.dbdb, w[, spp.]) run.varcov <- cbind(ave.oim11, ave.oim22, ave.oim12) } else { for(ii in 1:( .nsimEIM )) { ysim <- rmakeham(n = n, shape = Shape, scale = Scale, epsil = Epsil) if (ii < 3) { } temp2 <- exp(ysim * Scale) temp3 <- Epsil + Shape * temp2 if (!is.Numeric(temp2)) stop("temp2 is not Numeric") if (!is.Numeric(temp3)) stop("temp3 is not Numeric") dl.dshape <- temp2 / temp3 - expm1(ysim * Scale) / Scale dl.dscale <- Shape * ysim * temp2 / temp3 + Shape * expm1(ysim * Scale) / Scale^2 - Shape * ysim * temp2 / Scale dl.depsil <- 1 / temp3 - ysim temp7 <- cbind(dl.dshape, dl.dscale, dl.depsil) if (ii < 3) { } run.varcov <- run.varcov + temp7[, ind1$row.index] * temp7[, ind1$col.index] } run.varcov <- cbind(run.varcov / .nsimEIM ) } for (ilocal in 1:ncol(run.varcov)) { indexInf <- is.finite(run.varcov[, ilocal]) run.varcov[!indexInf, ilocal] <- mean(run.varcov[indexInf, ilocal]) } wz1 <- if (intercept.only) matrix(colMeans(run.varcov, na.rm = TRUE), nrow = n, ncol = ncol(run.varcov), byrow = TRUE) else run.varcov wz1 <- wz1 * dThetas.detas[, Musual * (spp. - 1) + ind1$row] * dThetas.detas[, Musual * (spp. - 1) + ind1$col] for(jay in 1:Musual) for(kay in jay:Musual) { cptr <- iam((spp. - 1) * Musual + jay, (spp. - 1) * Musual + kay, M = M) wz[, cptr] <- wz1[, iam(jay, kay, M = Musual)] } } # End of for(spp.) 
loop w.wz.merge(w = w, wz = wz, n = n, M = M, ndepy = M / Musual) }), list( .lshape = lshape, .lscale = lscale, .lepsil = lepsil, .eshape = eshape, .escale = escale, .eepsil = eepsil, .nsimEIM = nsimEIM, .oim.mean = oim.mean )))) } # makeham() dgompertz <- function(x, shape, scale = 1, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) LLL <- max(length(x), length(shape), length(scale)) if (length(x) != LLL) x <- rep(x, length.out = LLL) if (length(shape) != LLL) shape <- rep(shape, length.out = LLL) if (length(scale) != LLL) scale <- rep(scale, length.out = LLL) index0 <- (x < 0) index1 <- abs(x * scale) < 0.1 & is.finite(x * scale) ans <- log(shape) + x * scale - (shape / scale) * (exp(x * scale) - 1) ans[index1] <- log(shape[index1]) + x[index1] * scale[index1] - (shape[index1] / scale[index1]) * expm1(x[index1] * scale[index1]) ans[index0] <- log(0) ans[x == Inf] <- log(0) if (log.arg) { } else { ans <- exp(ans) ans[index0] <- 0 ans[x == Inf] <- 0 } ans[shape <= 0 | scale <= 0] <- NaN ans } pgompertz <- function(q, shape, scale = 1) { LLL <- max(length(q), length(shape), length(scale)) if (length(q) != LLL) q <- rep(q, length.out = LLL) if (length(shape) != LLL) shape <- rep(shape, length.out = LLL) if (length(scale) != LLL) scale <- rep(scale, length.out = LLL) ans <- -expm1((-shape / scale) * expm1(scale * q)) ans[(q <= 0)] <- 0 ans[shape <= 0 | scale <= 0] <- NaN ans[q == Inf] <- 1 ans } qgompertz <- function(p, shape, scale = 1) { LLL <- max(length(p), length(shape), length(scale)) if (length(p) != LLL) p <- rep(p, length.out = LLL) if (length(shape) != LLL) shape <- rep(shape, length.out = LLL) if (length(scale) != LLL) scale <- rep(scale, length.out = LLL) ans <- log1p((-scale / shape) * log1p(-p)) / scale ans[p < 0] <- NaN ans[p == 0] <- 0 ans[p == 1] <- Inf ans[p > 1] <- NaN ans[shape <= 0 | scale <= 0] <- NaN ans } rgompertz <- function(n, shape, scale = 1) { qgompertz(runif(n), shape = shape, scale = scale) } gompertz.control <- function(save.weight = TRUE, ...) { list(save.weight = save.weight) } gompertz <- function(lshape = "loge", lscale = "loge", ishape = NULL, iscale = NULL, nsimEIM = 500, zero = NULL) { lshape <- as.list(substitute(lshape)) eshape <- link2list(lshape) lshape <- attr(eshape, "function.name") lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") if (!is.Numeric(nsimEIM, allowable.length = 1, integer.valued = TRUE)) stop("bad input for argument 'nsimEIM'") if (nsimEIM <= 50) warning("argument 'nsimEIM' should be an integer ", "greater than 50, say") if (length(ishape)) if (!is.Numeric(ishape, positive = TRUE)) stop("argument 'ishape' values must be positive") if (length(iscale)) if (!is.Numeric(iscale, positive = TRUE)) stop("argument 'iscale' values must be positive") new("vglmff", blurb = c("Gompertz distribution\n\n", "Links: ", namesof("shape", lshape, eshape ), ", ", namesof("scale", lscale, escale ), "\n", "Median: scale * log(2 - 1 / shape)"), constraints = eval(substitute(expression({ dotzero <- .zero Musual <- 2 eval(negzero.expression) }), list( .zero = zero ))), infos = eval(substitute(function(...) 
{ list(Musual = 2, nsimEIM = .nsimEIM, zero = .zero ) }, list( .zero = zero, .nsimEIM = nsimEIM ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y ncoly <- ncol(y) Musual <- 2 extra$ncoly <- ncoly extra$Musual <- Musual M <- Musual * ncoly mynames1 <- paste("shape", if (ncoly > 1) 1:ncoly else "", sep = "") mynames2 <- paste("scale", if (ncoly > 1) 1:ncoly else "", sep = "") predictors.names <- c(namesof(mynames1, .lshape , .eshape , tag = FALSE), namesof(mynames2, .lscale , .escale , tag = FALSE))[ interleave.VGAM(M, M = Musual)] if (!length(etastart)) { matH <- matrix(if (length( .ishape )) .ishape else 0 + NA, n, ncoly, byrow = TRUE) matC <- matrix(if (length( .iscale )) .iscale else 0 + NA, n, ncoly, byrow = TRUE) shape.grid <- c(exp(-seq(4, 0.1, len = 07)), 1, exp( seq(0.1, 4, len = 07))) scale.grid <- c(exp(-seq(4, 0.1, len = 07)), 1, exp( seq(0.1, 4, len = 07))) for (spp. in 1:ncoly) { yvec <- y[, spp.] wvec <- w[, spp.] gompertz.Loglikfun <- function(scaleval, y, x, w, extraargs) { ans <- sum(c(w) * dgompertz(x = y, shape = extraargs$Shape, scale = scaleval, log = TRUE)) ans } mymat <- matrix(-1, length(shape.grid), 2) for (jlocal in 1:length(shape.grid)) { mymat[jlocal, ] <- getMaxMin(scale.grid, objfun = gompertz.Loglikfun, y = yvec, x = x, w = wvec, ret.objfun = TRUE, extraargs = list(Shape = shape.grid[jlocal])) } index.shape <- which(mymat[, 2] == max(mymat[, 2]))[1] if (!length( .ishape )) matH[, spp.] <- shape.grid[index.shape] if (!length( .iscale )) matC[, spp.] <- mymat[index.shape, 1] } # spp. etastart <- cbind(theta2eta(matH, .lshape , .eshape ), theta2eta(matC, .lscale , .escale ))[, interleave.VGAM(M, M = Musual)] } # End of !length(etastart) }), list( .lshape = lshape, .lscale = lscale, .eshape = eshape, .escale = escale, .ishape = ishape, .iscale = iscale ))), linkinv = eval(substitute(function(eta, extra = NULL) { shape <- eta2theta(eta[, c(TRUE, FALSE)], .lshape , .eshape ) scale <- eta2theta(eta[, c(FALSE, TRUE)], .lscale , .escale ) log1p((scale / shape) * log(2)) / scale }, list( .lshape = lshape, .lscale = lscale, .eshape = eshape, .escale = escale ))), last = eval(substitute(expression({ Musual <- extra$Musual misc$link <- c(rep( .lshape , length = ncoly), rep( .lscale , length = ncoly))[interleave.VGAM(M, M = Musual)] temp.names <- c(mynames1, mynames2)[interleave.VGAM(M, M = Musual)] names(misc$link) <- temp.names misc$earg <- vector("list", M) names(misc$earg) <- temp.names for(ii in 1:ncoly) { misc$earg[[Musual*ii-1]] <- .eshape misc$earg[[Musual*ii ]] <- .escale } misc$Musual <- Musual misc$expected <- TRUE misc$multipleResponses <- TRUE misc$nsimEIM <- .nsimEIM }), list( .lshape = lshape, .lscale = lscale, .eshape = eshape, .escale = escale, .nsimEIM = nsimEIM ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { shape <- eta2theta(eta[, c(TRUE, FALSE)], .lshape , .eshape ) scale <- eta2theta(eta[, c(FALSE, TRUE)], .lscale , .escale ) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else { sum(c(w) * dgompertz(x = y, shape = shape, scale = scale, log = TRUE)) } }, list( .lshape = lshape, .lscale = lscale, .eshape = eshape, .escale = escale ))), vfamily = c("gompertz"), deriv = eval(substitute(expression({ Musual <- 2 shape <- eta2theta(eta[, c(TRUE, FALSE), drop = FALSE], .lshape , .eshape ) scale <- eta2theta(eta[, c(FALSE, TRUE), 
drop = FALSE], .lscale , .escale )
  temp2 <- exp(y * scale)
  temp4 <- -expm1(y * scale)

  dl.dshape <- 1 / shape + temp4 / scale
  dl.dscale <- y * (1 - shape * temp2 / scale) -
               shape * temp4 / scale^2

  dshape.deta <- dtheta.deta(shape, .lshape , .eshape )
  dscale.deta <- dtheta.deta(scale, .lscale , .escale )
  dthetas.detas <- cbind(dshape.deta, dscale.deta)
  myderiv <- c(w) * cbind(dl.dshape, dl.dscale) * dthetas.detas
  myderiv[, interleave.VGAM(M, M = Musual)]
  }), list( .lshape = lshape, .lscale = lscale,
            .eshape = eshape, .escale = escale ))),

  weight = eval(substitute(expression({
  NOS <- M / Musual
  dThetas.detas <- dthetas.detas[, interleave.VGAM(M, M = Musual)]
  wz <- matrix(0.0, n, M + M - 1)  # wz is 'tridiagonal'
  ind1 <- iam(NA, NA, M = Musual, both = TRUE, diag = TRUE)
  for(spp. in 1:NOS) {
    run.varcov <- 0
    Shape <- shape[, spp.]
    Scale <- scale[, spp.]
    for(ii in 1:( .nsimEIM )) {
      ysim <- rgompertz(n = n, shape = Shape, scale = Scale)
      temp2 <- exp(ysim * Scale)
      temp4 <- -expm1(ysim * Scale)
      dl.dshape <- 1 / Shape + temp4 / Scale
      dl.dscale <- ysim * (1 - Shape * temp2 / Scale) -
                   Shape * temp4 / Scale^2
      temp7 <- cbind(dl.dshape, dl.dscale)
      run.varcov <- run.varcov +
                    temp7[, ind1$row.index] * temp7[, ind1$col.index]
    }
    run.varcov <- cbind(run.varcov / .nsimEIM )
    wz1 <- if (intercept.only)
      matrix(colMeans(run.varcov),
             nrow = n, ncol = ncol(run.varcov), byrow = TRUE) else
      run.varcov
    wz1 <- wz1 * dThetas.detas[, Musual * (spp. - 1) + ind1$row] *
                 dThetas.detas[, Musual * (spp. - 1) + ind1$col]
    for(jay in 1:Musual)
      for(kay in jay:Musual) {
        cptr <- iam((spp. - 1) * Musual + jay,
                    (spp. - 1) * Musual + kay, M = M)
        wz[, cptr] <- wz1[, iam(jay, kay, M = Musual)]
      }
  }  # End of for(spp.) loop
  w.wz.merge(w = w, wz = wz, n = n, M = M, ndepy = M / Musual)
  }), list( .lscale = lscale, .escale = escale,
            .nsimEIM = nsimEIM ))))
}  # gompertz()


dmoe <- function(x, alpha = 1, lambda = 1, log = FALSE) {
  if (!is.logical(log.arg <- log) || length(log) != 1)
    stop("bad input for argument 'log'")
  rm(log)

  LLL <- max(length(x), length(alpha), length(lambda))
  if (length(x)      != LLL) x      <- rep(x,      length.out = LLL)
  if (length(alpha)  != LLL) alpha  <- rep(alpha,  length.out = LLL)
  if (length(lambda) != LLL) lambda <- rep(lambda, length.out = LLL)

  index0 <- (x < 0)
  if (log.arg) {
    ans <- log(lambda) + (lambda * x) -
           2 * log(expm1(lambda * x) + alpha)
    ans[index0] <- log(0)
  } else {
    ans <- lambda * exp(lambda * x) / (expm1(lambda * x) + alpha)^2
    ans[index0] <- 0
  }
  ans[alpha <= 0 | lambda <= 0] <- NaN
  ans
}


pmoe <- function(q, alpha = 1, lambda = 1) {
  ret <- ifelse(alpha <= 0 | lambda <= 0, NaN,
                1 - 1 / (expm1(lambda * q) + alpha))
  # The support begins at log(2 - alpha) / lambda, which is -Inf when
  # alpha >= 2; which() also keeps NAs out of the subassignment.
  ret[which(q < log(pmax(2 - alpha, 0)) / lambda)] <- 0
  ret
}


qmoe <- function(p, alpha = 1, lambda = 1) {
  ifelse(p < 0 | p > 1 | alpha <= 0 | lambda <= 0, NaN,
         log1p(-alpha + 1 / (1 - p)) / lambda)
}


rmoe <- function(n, alpha = 1, lambda = 1) {
  qmoe(p = runif(n), alpha = alpha, lambda = lambda)
}


exponential.mo.control <- function(save.weight = TRUE, ...)
{ list(save.weight = save.weight) } exponential.mo <- function(lalpha = "loge", llambda = "loge", ealpha = list(), elambda = list(), ialpha = 1, ilambda = NULL, imethod = 1, nsimEIM = 200, zero = NULL) { stop("fundamentally unable to estimate the parameters as ", "the support of the density depends on the parameters") lalpha <- as.list(substitute(lalpha)) ealpha <- link2list(lalpha) lalpha <- attr(ealpha, "function.name") llambda <- as.list(substitute(llambda)) elambda <- link2list(llambda) llambda <- attr(elambda, "function.name") lalpha0 <- lalpha ealpha0 <- ealpha ialpha0 <- ialpha if (!is.Numeric(nsimEIM, allowable.length = 1, integer.valued = TRUE)) stop("bad input for argument 'nsimEIM'") if (nsimEIM <= 50) warning("argument 'nsimEIM' should be an integer ", "greater than 50, say") if (length(ialpha0)) if (!is.Numeric(ialpha0, positive = TRUE)) stop("argument 'ialpha' values must be positive") if (length(ilambda)) if (!is.Numeric(ilambda, positive = TRUE)) stop("argument 'ilambda' values must be positive") if (!is.Numeric(imethod, allowable.length = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("argument 'imethod' must be 1 or 2") new("vglmff", blurb = c("Marshall-Olkin exponential distribution\n\n", "Links: ", namesof("alpha", lalpha0, ealpha0 ), ", ", namesof("lambda", llambda, elambda ), "\n", "Median: log(3 - alpha) / lambda"), constraints = eval(substitute(expression({ dotzero <- .zero Musual <- 2 eval(negzero.expression) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(Musual = 2, nsimEIM = .nsimEIM, zero = .zero ) }, list( .zero = zero, .nsimEIM = nsimEIM ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y ncoly <- ncol(y) Musual <- 2 extra$ncoly <- ncoly extra$Musual <- Musual M <- Musual * ncoly mynames1 <- paste("alpha", if (ncoly > 1) 1:ncoly else "", sep = "") mynames2 <- paste("lambda", if (ncoly > 1) 1:ncoly else "", sep = "") predictors.names <- c(namesof(mynames1, .lalpha0 , .ealpha0 , tag = FALSE), namesof(mynames2, .llambda , .elambda , tag = FALSE))[ interleave.VGAM(M, M = Musual)] if (!length(etastart)) { matL <- matrix(if (length( .ilambda )) .ilambda else 0, n, ncoly, byrow = TRUE) matA <- matrix(if (length( .ialpha0 )) .ialpha0 else 0, n, ncoly, byrow = TRUE) for (spp. in 1:ncoly) { yvec <- y[, spp.] moexpon.Loglikfun <- function(lambdaval, y, x, w, extraargs) { ans <- sum(c(w) * log(dmoe(x = y, alpha = extraargs$alpha, lambda = lambdaval))) ans } Alpha.init <- .ialpha0 lambda.grid <- seq(0.1, 10.0, len = 21) Lambda.init <- getMaxMin(lambda.grid, objfun = moexpon.Loglikfun, y = y, x = x, w = w, extraargs = list(alpha = Alpha.init)) if (length(mustart)) { Lambda.init <- Lambda.init / (1 - Phimat.init) } if (!length( .ialpha0 )) matA[, spp.] <- Alpha0.init if (!length( .ilambda )) matL[, spp.] <- Lambda.init } # spp. etastart <- cbind(theta2eta(matA, .lalpha0, .ealpha0 ), theta2eta(matL, .llambda, .elambda ))[, interleave.VGAM(M, M = Musual)] mustart <- NULL # Since etastart has been computed. 
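# NB: none of this initialization is reachable, since exponential.mo() stop()s on entry: the support of the density begins at log(2 - alpha) / lambda (see pmoe()), so it depends on the very parameters being estimated and regular Fisher scoring breaks down.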
} # End of !length(etastart) }), list( .lalpha0 = lalpha0, .llambda = llambda, .ealpha0 = ealpha0, .elambda = elambda, .ialpha0 = ialpha0, .ilambda = ilambda, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { alpha0 <- eta2theta(eta[, c(TRUE, FALSE)], .lalpha0 , .ealpha0 ) lambda <- eta2theta(eta[, c(FALSE, TRUE)], .llambda , .elambda ) log(3 - alpha0) / lambda }, list( .lalpha0 = lalpha0, .llambda = llambda, .ealpha0 = ealpha0, .elambda = elambda ))), last = eval(substitute(expression({ Musual <- extra$Musual misc$link <- c(rep( .lalpha0 , length = ncoly), rep( .llambda , length = ncoly))[interleave.VGAM(M, M = Musual)] temp.names <- c(mynames1, mynames2)[interleave.VGAM(M, M = Musual)] names(misc$link) <- temp.names misc$earg <- vector("list", M) names(misc$earg) <- temp.names for(ii in 1:ncoly) { misc$earg[[Musual*ii-1]] <- .ealpha0 misc$earg[[Musual*ii ]] <- .elambda } misc$Musual <- Musual misc$imethod <- .imethod misc$expected <- TRUE misc$multipleResponses <- TRUE misc$nsimEIM <- .nsimEIM }), list( .lalpha0 = lalpha0, .llambda = llambda, .ealpha0 = ealpha0, .elambda = elambda, .nsimEIM = nsimEIM, .imethod = imethod ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { alpha0 <- eta2theta(eta[, c(TRUE, FALSE)], .lalpha0 , .ealpha0 ) lambda <- eta2theta(eta[, c(FALSE, TRUE)], .llambda , .elambda ) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else { sum(c(w) * log(dmoe(x = y, alpha = alpha0, lambda = lambda))) } }, list( .lalpha0 = lalpha0, .llambda = llambda, .ealpha0 = ealpha0, .elambda = elambda ))), vfamily = c("exponential.mo"), deriv = eval(substitute(expression({ Musual <- 2 alpha0 <- eta2theta(eta[, c(TRUE, FALSE), drop = FALSE], .lalpha0 , .ealpha0 ) lambda <- eta2theta(eta[, c(FALSE, TRUE), drop = FALSE], .llambda , .elambda ) temp2 <- (expm1(lambda * y) + alpha0) dl.dalpha0 <- -2 / temp2 dl.dlambda <- 1 / lambda + y - 2 * y * exp(lambda * y) / temp2 dalpha0.deta <- dtheta.deta(alpha0, .lalpha0 , .ealpha0 ) dlambda.deta <- dtheta.deta(lambda, .llambda , .elambda ) dthetas.detas <- cbind(dalpha0.deta, dlambda.deta) myderiv <- c(w) * cbind(dl.dalpha0, dl.dlambda) * dthetas.detas myderiv[, interleave.VGAM(M, M = Musual)] }), list( .lalpha0 = lalpha0, .llambda = llambda, .ealpha0 = ealpha0, .elambda = elambda ))), weight = eval(substitute(expression({ NOS <- M / Musual dThetas.detas <- dthetas.detas[, interleave.VGAM(M, M = Musual)] wz <- matrix(0.0, n, M + M - 1) # wz is 'tridiagonal' ind1 <- iam(NA, NA, M = Musual, both = TRUE, diag = TRUE) for(spp. in 1:NOS) { run.varcov <- 0 Alph <- alpha0[, spp.] Lamb <- lambda[, spp.] for(ii in 1:( .nsimEIM )) { ysim <- rmoe(n = n, alpha = Alph, lambda = Lamb) if (ii < 3) { } temp2 <- (expm1(lambda * ysim) + alpha0) dl.dalpha0 <- -2 / temp2 dl.dlambda <- 1 / lambda + ysim - 2 * ysim * exp(lambda * ysim) / temp2 temp3 <- cbind(dl.dalpha0, dl.dlambda) run.varcov <- run.varcov + temp3[, ind1$row.index] * temp3[, ind1$col.index] } run.varcov <- cbind(run.varcov / .nsimEIM) wz1 <- if (intercept.only) matrix(colMeans(run.varcov), nrow = n, ncol = ncol(run.varcov), byrow = TRUE) else run.varcov wz1 <- wz1 * dThetas.detas[, Musual * (spp. - 1) + ind1$row] * dThetas.detas[, Musual * (spp. - 1) + ind1$col] for(jay in 1:Musual) for(kay in jay:Musual) { cptr <- iam((spp. - 1) * Musual + jay, (spp. - 1) * Musual + kay, M = M) wz[, cptr] <- wz1[, iam(jay, kay, M = Musual)] } } # End of for(spp.) 
loop w.wz.merge(w = w, wz = wz, n = n, M = M, ndepy = M / Musual) }), list( .llambda = llambda, .elambda = elambda, .nsimEIM = nsimEIM )))) } # exponential.mo() genbetaII <- function(lshape1.a = "loge", lscale = "loge", lshape2.p = "loge", lshape3.q = "loge", ishape1.a = NULL, iscale = NULL, ishape2.p = 1.0, ishape3.q = 1.0, zero = NULL) { if (length(zero) && !is.Numeric(zero, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'zero'") lshape1.a <- as.list(substitute(lshape1.a)) eshape1.a <- link2list(lshape1.a) lshape1.a <- attr(eshape1.a, "function.name") lshape2.p <- as.list(substitute(lshape2.p)) eshape2.p <- link2list(lshape2.p) lshape2.p <- attr(eshape2.p, "function.name") lshape3.q <- as.list(substitute(lshape3.q)) eshape3.q <- link2list(lshape3.q) lshape3.q <- attr(eshape3.q, "function.name") lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") new("vglmff", blurb = c("Generalized Beta II distribution\n\n", "Links: ", namesof("shape1.a", lshape1.a, earg = eshape1.a), ", ", namesof("scale", lscale, earg = escale), ", ", namesof("shape2.p", lshape2.p, earg = eshape2.p), ", ", namesof("shape3.q", lshape3.q, earg = eshape3.q), "\n", "Mean: scale * gamma(shape2.p + 1/shape1.a) * ", "gamma(shape3.q - 1/shape1.a) / ", "(gamma(shape2.p) * gamma(shape3.q))"), constraints = eval(substitute(expression({ constraints <- cm.zero.vgam(constraints, x, .zero , M) }), list( .zero = zero ))), initialize = eval(substitute(expression({ w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1) predictors.names <- c(namesof("shape1.a", .lshape1.a, earg = .eshape1.a, tag = FALSE), namesof("scale", .lscale , earg = .escale , tag = FALSE), namesof("shape2.p", .lshape2.p, earg = .eshape2.p, tag = FALSE), namesof("shape3.q", .lshape3.q, earg = .eshape3.q, tag = FALSE)) if (!length( .ishape1.a ) || !length( .iscale )) { qvec <- c( .25, .5, .75) # Arbitrary; could be made an argument ishape3.q <- if (length( .ishape3.q)) .ishape3.q else 1 xvec <- log( (1-qvec)^(-1/ ishape3.q ) - 1 ) fit0 <- lsfit(x = xvec, y = log(quantile(y, qvec ))) } if (!length(etastart)) { aa <- rep(if (length( .ishape1.a )) .ishape1.a else abs(1 / fit0$coef[2]), length.out = n) scale <- rep(if (length( .iscale )) .iscale else exp(fit0$coef[1]), length = n) qq <- rep(if (length( .ishape3.q )) .ishape3.q else 1.0, length.out = n) parg <- rep(if (length( .ishape2.p )) .ishape2.p else 1.0, length.out = n) outOfRange <- (qq - 1/aa <= 0) qq[outOfRange] <- 1 / aa[outOfRange] + 1 outOfRange <- (parg + 1/aa <= 0) parg[outOfRange] <- 1 / aa[outOfRange] + 1 etastart <- cbind(theta2eta(aa, .lshape1.a, earg = .eshape1.a), theta2eta(scale, .lscale , earg = .escale ), theta2eta(parg, .lshape2.p, earg = .eshape2.p), theta2eta(qq, .lshape3.q, earg = .eshape3.q)) } }), list( .lshape1.a = lshape1.a, .lscale = lscale, .lshape2.p = lshape2.p, .lshape3.q = lshape3.q, .eshape1.a = eshape1.a, .escale = escale, .eshape2.p = eshape2.p, .eshape3.q = eshape3.q, .ishape1.a = ishape1.a, .iscale = iscale, .ishape2.p = ishape2.p, .ishape3.q = ishape3.q ))), linkinv = eval(substitute(function(eta, extra = NULL) { aa <- eta2theta(eta[, 1], .lshape1.a, earg = .eshape1.a) Scale <- eta2theta(eta[, 2], .lscale , earg = .escale ) parg <- eta2theta(eta[, 3], .lshape2.p, earg = .eshape2.p) qq <- eta2theta(eta[, 4], .lshape3.q, earg = .eshape3.q) ans <- Scale * exp(lgamma(parg + 1/aa) + lgamma(qq - 1/aa) - lgamma(parg) - lgamma(qq)) ans[parg + 1/aa <= 0] <- NA ans[qq - 1/aa <= 0] <- NA ans[aa <= 0] <- NA 
ans[Scale <= 0] <- NA ans[parg <= 0] <- NA ans[qq <= 0] <- NA ans }, list( .lshape1.a = lshape1.a, .lscale = lscale, .eshape1.a = eshape1.a, .escale = escale, .eshape2.p = eshape2.p, .eshape3.q = eshape3.q, .lshape2.p = lshape2.p, .lshape3.q = lshape3.q ))), last = eval(substitute(expression({ misc$link <- c(shape1.a = .lshape1.a , scale = .lscale , shape2.p = .lshape2.p , shape3.q = .lshape3.q ) misc$earg <- list(shape1.a = .eshape1.a , scale = .escale , shape2.p = .eshape2.p , shape3.q = .eshape3.q ) }), list( .lshape1.a = lshape1.a, .lscale = lscale, .eshape1.a = eshape1.a, .escale = escale, .eshape2.p = eshape2.p, .eshape3.q = eshape3.q, .lshape2.p = lshape2.p, .lshape3.q = lshape3.q ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { aa <- eta2theta(eta[, 1], .lshape1.a, earg = .eshape1.a) scale <- eta2theta(eta[, 2], .lscale , earg = .escale ) parg <- eta2theta(eta[, 3], .lshape2.p, earg = .eshape2.p) qq <- eta2theta(eta[, 4], .lshape3.q, earg = .eshape3.q) if (residuals) stop("loglikelihood residuals ", "not implemented yet") else { sum(c(w) * (log(aa) + (aa * parg - 1) * log(y) - aa * parg * log(scale) + - lbeta(parg, qq) - (parg + qq) * log1p((y/scale)^aa))) } }, list( .lshape1.a = lshape1.a, .lscale = lscale, .eshape1.a = eshape1.a, .escale = escale, .eshape2.p = eshape2.p, .eshape3.q = eshape3.q, .lshape2.p = lshape2.p, .lshape3.q = lshape3.q ))), vfamily = c("genbetaII"), deriv = eval(substitute(expression({ aa <- eta2theta(eta[, 1], .lshape1.a, earg = .eshape1.a) scale <- eta2theta(eta[, 2], .lscale , earg = .escale ) parg <- eta2theta(eta[, 3], .lshape2.p, earg = .eshape2.p) qq <- eta2theta(eta[, 4], .lshape3.q, earg = .eshape3.q) temp1 <- log(y/scale) temp2 <- (y/scale)^aa temp3 <- digamma(parg + qq) temp3a <- digamma(parg) temp3b <- digamma(qq) temp4 <- log1p(temp2) dl.da <- 1/aa + parg * temp1 - (parg+qq) * temp1 / (1+1/temp2) dl.dscale <- (aa/scale) * (-parg + (parg+qq) / (1+1/temp2)) dl.dp <- aa * temp1 + temp3 - temp3a - temp4 dl.dq <- temp3 - temp3b - temp4 da.deta <- dtheta.deta(aa, .lshape1.a, earg = .eshape1.a) dscale.deta <- dtheta.deta(scale, .lscale , earg = .escale ) dp.deta <- dtheta.deta(parg, .lshape2.p, earg = .eshape2.p) dq.deta <- dtheta.deta(qq, .lshape3.q, earg = .eshape3.q) c(w) * cbind( dl.da * da.deta, dl.dscale * dscale.deta, dl.dp * dp.deta, dl.dq * dq.deta ) }), list( .lshape1.a = lshape1.a, .lscale = lscale, .eshape1.a = eshape1.a, .escale = escale, .eshape2.p = eshape2.p, .eshape3.q = eshape3.q, .lshape2.p = lshape2.p, .lshape3.q = lshape3.q ))), weight = eval(substitute(expression({ temp5 <- trigamma(parg + qq) temp5a <- trigamma(parg) temp5b <- trigamma(qq) ned2l.da <- (1 + parg+qq + parg * qq * (temp5a + temp5b + (temp3b - temp3a + (parg-qq)/(parg*qq))^2 - (parg^2 + qq^2) / (parg*qq)^2)) / (aa^2 * (1+parg+qq)) ned2l.dscale <- aa^2 * parg * qq / (scale^2 * (1+parg+qq)) ned2l.dp <- temp5a - temp5 ned2l.dq <- temp5b - temp5 ned2l.dascale <- (parg - qq - parg * qq * (temp3a -temp3b)) / (scale*(1 + parg+qq)) ned2l.dap <- -(qq * (temp3a -temp3b) -1) / (aa*(parg+qq)) ned2l.daq <- -(parg * (temp3b -temp3a) -1) / (aa*(parg+qq)) ned2l.dscalep <- aa * qq / (scale*(parg+qq)) ned2l.dscaleq <- -aa * parg / (scale*(parg+qq)) ned2l.dpq <- -temp5 wz <- matrix(as.numeric(NA), n, dimm(M)) # M==4 means 10=dimm(M) wz[, iam(1, 1, M)] <- ned2l.da * da.deta^2 wz[, iam(2, 2, M)] <- ned2l.dscale * dscale.deta^2 wz[, iam(3, 3, M)] <- ned2l.dp * dp.deta^2 wz[, iam(4, 4, M)] <- ned2l.dq * dq.deta^2 wz[, iam(1, 2, M)] <- 
ned2l.dascale * da.deta * dscale.deta wz[, iam(1, 3, M)] <- ned2l.dap * da.deta * dp.deta wz[, iam(1, 4, M)] <- ned2l.daq * da.deta * dq.deta wz[, iam(2, 3, M)] <- ned2l.dscalep * dscale.deta * dp.deta wz[, iam(2, 4, M)] <- ned2l.dscaleq * dscale.deta * dq.deta wz[, iam(3, 4, M)] <- ned2l.dpq * dp.deta * dq.deta wz <- c(w) * wz wz }), list( .lshape1.a <- lshape1.a, .lscale = lscale, .eshape1.a <- eshape1.a, .escale = escale, .eshape2.p <- eshape2.p, .eshape3.q = eshape3.q, .lshape2.p <- lshape2.p, .lshape3.q = lshape3.q )))) } rsinmad <- function(n, shape1.a, scale = 1, shape3.q) qsinmad(runif(n), shape1.a, scale = scale, shape3.q) rlomax <- function(n, scale = 1, shape3.q) rsinmad(n, shape1.a = 1, scale = scale, shape3.q) rfisk <- function(n, shape1.a, scale = 1) rsinmad(n, shape1.a, scale = scale, shape3.q = 1) rparalogistic <- function(n, shape1.a, scale = 1) rsinmad(n, shape1.a, scale = scale, shape1.a) rdagum <- function(n, shape1.a, scale = 1, shape2.p) qdagum(runif(n), shape1.a = shape1.a, scale = scale, shape2.p = shape2.p) rinvlomax <- function(n, scale = 1, shape2.p) rdagum(n, shape1.a = 1, scale = scale, shape2.p) rinvparalogistic <- function(n, shape1.a, scale = 1) rdagum(n, shape1.a, scale = scale, shape1.a) qsinmad <- function(p, shape1.a, scale = 1, shape3.q) { bad <- (p < 0) | (p > 1) | is.na(p) ans <- NA * p ans[is.nan(p)] <- NaN LLL <- max(length(p), length(shape1.a), length(scale), length(shape3.q)) if (length(p) != LLL) p <- rep(p, length.out = LLL) if (length(shape1.a) != LLL) shape1.a <- rep(shape1.a, length.out = LLL) if (length(scale) != LLL) scale <- rep(scale, length.out = LLL) if (length(shape3.q) != LLL) shape3.q <- rep(shape3.q, length.out = LLL) Shape1.a <- shape1.a[!bad] Scale <- scale[!bad] Shape3.q <- shape3.q[!bad] QQ <- p[!bad] ans[!bad] <- Scale * ((1 - QQ)^(-1/Shape3.q) - 1)^(1/Shape1.a) ans } qlomax <- function(p, scale = 1, shape3.q) qsinmad(p, shape1.a = 1, scale = scale, shape3.q) qfisk <- function(p, shape1.a, scale = 1) qsinmad(p, shape1.a, scale = scale, shape3.q = 1) qparalogistic <- function(p, shape1.a, scale = 1) qsinmad(p, shape1.a, scale = scale, shape1.a) qdagum <- function(p, shape1.a, scale = 1, shape2.p) { LLL <- max(length(p), length(shape1.a), length(scale), length(shape2.p)) if (length(p) != LLL) p <- rep(p, length.out = LLL) if (length(shape1.a) != LLL) shape1.a <- rep(shape1.a, length.out = LLL) if (length(scale) != LLL) scale <- rep(scale, length.out = LLL) if (length(shape2.p) != LLL) shape2.p <- rep(shape2.p, length.out = LLL) bad <- (p < 0) | (p > 1) | (scale <= 0) | is.na(p) ans <- NA * p ans[is.nan(p)] <- NaN ans[!bad] <- scale[!bad] * (p[!bad]^(-1/shape2.p[!bad]) - 1)^(-1/shape1.a[!bad]) ans } qinvlomax <- function(p, scale = 1, shape2.p) qdagum(p, shape1.a = 1, scale = scale, shape2.p) qinvparalogistic <- function(p, shape1.a, scale = 1) qdagum(p, shape1.a, scale = scale, shape1.a) psinmad <- function(q, shape1.a, scale = 1, shape3.q) { LLL <- max(length(q), length(shape1.a), length(scale), length(shape3.q)) if (length(q) != LLL) q <- rep(q, length.out = LLL) if (length(shape1.a) != LLL) shape1.a <- rep(shape1.a, length.out = LLL) if (length(scale) != LLL) scale <- rep(scale, length.out = LLL) if (length(shape3.q) != LLL) shape3.q <- rep(shape3.q, length.out = LLL) notpos <- (q <= 0) & !is.na(q) Shape1.a <- shape1.a[!notpos] Scale <- scale[!notpos] Shape3.q <- shape3.q[!notpos] QQ <- q[!notpos] ans <- 0 * q # rep(0.0, len = LLL) ans[!notpos] <- 1 - (1 + (QQ / Scale)^Shape1.a)^(-Shape3.q) ans[scale <= 0] <- NaN ans[shape1.a 
<= 0] <- NaN ans[shape3.q <= 0] <- NaN ans[q == -Inf] <- 0 ans } plomax <- function(q, scale = 1, shape3.q) psinmad(q, shape1.a = 1, scale = scale, shape3.q) pfisk <- function(q, shape1.a, scale = 1) psinmad(q, shape1.a, scale = scale, shape3.q = 1) pparalogistic <- function(q, shape1.a, scale = 1) psinmad(q, shape1.a, scale = scale, shape1.a) pdagum <- function(q, shape1.a, scale = 1, shape2.p) { LLL <- max(length(q), length(shape1.a), length(scale), length(shape2.p)) if (length(q) != LLL) q <- rep(q, length.out = LLL) if (length(shape1.a) != LLL) shape1.a <- rep(shape1.a, length.out = LLL) if (length(scale) != LLL) scale <- rep(scale, length.out = LLL) if (length(shape2.p) != LLL) shape2.p <- rep(shape2.p, length.out = LLL) notpos <- (q <= 0) & !is.na(q) Shape1.a <- shape1.a[!notpos] Scale <- scale[!notpos] Shape2.p <- shape2.p[!notpos] QQ <- q[!notpos] ans <- 0 * q ans[!notpos] <- (1 + (QQ/Scale)^(-Shape1.a))^(-Shape2.p) ans[scale <= 0] <- NaN ans[shape1.a <= 0] <- NaN ans[shape2.p <= 0] <- NaN ans[q == -Inf] <- 0 ans } pinvlomax <- function(q, scale = 1, shape2.p) pdagum(q, shape1.a = 1, scale = scale, shape2.p) pinvparalogistic <- function(q, shape1.a, scale = 1) pdagum(q, shape1.a, scale = scale, shape1.a) dsinmad <- function(x, shape1.a, scale = 1, shape3.q, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) LLL <- max(length(x), length(shape1.a), length(scale), length(shape3.q)) x <- rep(x, length.out = LLL); shape1.a <- rep(shape1.a, length.out = LLL) scale <- rep(scale, length.out = LLL); shape3.q <- rep(shape3.q, length.out = LLL) Loglik <- rep(log(0), length.out = LLL) xok <- (x > 0) & !is.na(x) # Avoids log(x) if x<0, and handles NAs Loglik[xok] <- log(shape1.a[xok]) + log(shape3.q[xok]) + (shape1.a[xok]-1) * log(x[xok]) - shape1.a[xok] * log(scale[xok]) - (1 + shape3.q[xok]) * log1p((x[xok]/scale[xok])^shape1.a[xok]) x.eq.0 <- (x == 0) & !is.na(x) Loglik[x.eq.0] <- log(shape1.a[x.eq.0]) + log(shape3.q[x.eq.0]) - shape1.a[x.eq.0] * log(scale[x.eq.0]) Loglik[is.na(x)] <- NA Loglik[is.nan(x)] <- NaN Loglik[x == Inf] <- log(0) if (log.arg) Loglik else exp(Loglik) } dlomax <- function(x, scale = 1, shape3.q, log = FALSE) dsinmad(x, shape1.a = 1, scale = scale, shape3.q, log = log) dfisk <- function(x, shape1.a, scale = 1, log = FALSE) dsinmad(x, shape1.a, scale = scale, shape3.q = 1, log = log) dparalogistic <- function(x, shape1.a, scale = 1, log = FALSE) dsinmad(x, shape1.a, scale = scale, shape1.a, log = log) ddagum <- function(x, shape1.a, scale = 1, shape2.p, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) LLL <- max(length(x), length(shape1.a), length(scale), length(shape2.p)) x <- rep(x, length.out = LLL) shape1.a <- rep(shape1.a, length.out = LLL) scale <- rep(scale, length.out = LLL) shape2.p <- rep(shape2.p, length.out = LLL) Loglik <- rep(log(0), length.out = LLL) xok <- (x > 0) & !is.na(x) # Avoids log(x) if x<0, and handles NAs Loglik[xok] <- log(shape1.a[xok]) + log(shape2.p[xok]) + (shape1.a[xok] * shape2.p[xok]-1) * log( x[xok]) - shape1.a[xok] * shape2.p[xok] * log(scale[xok]) - (1 + shape2.p[xok]) * log1p((x[xok]/scale[xok])^shape1.a[xok]) Loglik[shape2.p <= 0] <- NaN x.eq.0 <- (x == 0) & !is.na(x) Loglik[x.eq.0] <- log(shape1.a[x.eq.0]) + log(shape2.p[x.eq.0]) - shape1.a[x.eq.0] * shape2.p[x.eq.0] * log(scale[x.eq.0]) Loglik[is.na(x)] <- NA Loglik[is.nan(x)] <- NaN Loglik[x == Inf] <- log(0) if (log.arg) Loglik else exp(Loglik) } 
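# Consistency checks for the d/p/q/r functions above (a minimal
# illustrative sketch, not part of the package API; wrapped in
# if (FALSE) in the same spirit as the other disabled demo code in
# this package, and assuming only base R plus the functions above).
# Recall F(y) = 1 - (1 + (y/scale)^shape1.a)^(-shape3.q) for the
# Singh-Maddala and F(y) = (1 + (y/scale)^(-shape1.a))^(-shape2.p)
# for the Dagum distribution.
if (FALSE) {
  set.seed(1)
  pvec <- seq(0.05, 0.95, by = 0.10)

  # The p- and q-functions should be mutual inverses on (0, 1):
  max(abs(psinmad(qsinmad(pvec, shape1.a = 2, scale = 3, shape3.q = 4),
                  shape1.a = 2, scale = 3, shape3.q = 4) - pvec))
  max(abs(pdagum(qdagum(pvec, shape1.a = 2, scale = 3, shape2.p = 4),
                 shape1.a = 2, scale = 3, shape2.p = 4) - pvec))

  # The density should integrate to (nearly) one:
  integrate(dsinmad, 0, Inf, shape1.a = 2, scale = 3, shape3.q = 4)

  # Random deviates (inverse-transform sampled) should match the CDF:
  yy <- rsinmad(1000, shape1.a = 2, scale = 3, shape3.q = 4)
  ks.test(yy, psinmad, shape1.a = 2, scale = 3, shape3.q = 4)
}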
dinvlomax <- function(x, scale = 1, shape2.p, log = FALSE) ddagum(x, shape1.a = 1, scale = scale, shape2.p, log = log) dinvparalogistic <- function(x, shape1.a, scale = 1, log = FALSE) ddagum(x, shape1.a, scale = scale, shape1.a, log = log) sinmad <- function(lshape1.a = "loge", lscale = "loge", lshape3.q = "loge", ishape1.a = NULL, iscale = NULL, ishape3.q = 1.0, zero = NULL) { if (length(zero) && !is.Numeric(zero, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'zero'") lshape1.a <- as.list(substitute(lshape1.a)) eshape1.a <- link2list(lshape1.a) lshape1.a <- attr(eshape1.a, "function.name") lshape3.q <- as.list(substitute(lshape3.q)) eshape3.q <- link2list(lshape3.q) lshape3.q <- attr(eshape3.q, "function.name") lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") new("vglmff", blurb = c("Singh-Maddala distribution\n\n", "Links: ", namesof("shape1.a", lshape1.a, earg = eshape1.a), ", ", namesof("scale", lscale, earg = escale), ", ", namesof("shape3.q", lshape3.q, earg = eshape3.q), "\n", "Mean: scale * gamma(1 + 1/shape1.a) * ", "gamma(shape3.q - 1/shape1.a) / ", "gamma(shape3.q)"), constraints = eval(substitute(expression({ constraints <- cm.zero.vgam(constraints, x, .zero , M) }), list( .zero = zero ))), initialize = eval(substitute(expression({ w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1) predictors.names <- c(namesof("shape1.a", .lshape1.a, earg = .eshape1.a, tag = FALSE), namesof("scale", .lscale , earg = .escale , tag = FALSE), namesof("shape3.q", .lshape3.q, earg = .eshape3.q, tag = FALSE)) parg <- 1 if (!length( .ishape1.a) || !length( .iscale )) { qvec <- c( .25, .5, .75) # Arbitrary; could be made an argument ishape3.q <- if (length( .ishape3.q)) .ishape3.q else 1 xvec <- log( (1-qvec)^(-1/ ishape3.q ) - 1 ) fit0 <- lsfit(x = xvec, y = log(quantile(y, qvec ))) } if (!length(etastart)) { aa <- rep(if (length( .ishape1.a)) .ishape1.a else 1/fit0$coef[2], length.out = n) scale <- rep(if (length( .iscale )) .iscale else exp(fit0$coef[1]), length.out = n) qq <- rep(if (length( .ishape3.q)) .ishape3.q else 1.0, length.out = n) outOfRange <- (aa * qq <= 1) qq[outOfRange] <- 1 / aa[outOfRange] + 1 etastart <- cbind(theta2eta(aa, .lshape1.a, earg = .eshape1.a), theta2eta(scale, .lscale , earg = .escale ), theta2eta(qq, .lshape3.q, earg = .eshape3.q)) } }), list( .lshape1.a = lshape1.a, .lscale = lscale, .lshape3.q = lshape3.q, .eshape1.a = eshape1.a, .escale = escale, .eshape3.q = eshape3.q, .ishape1.a = ishape1.a, .iscale = iscale, .ishape3.q = ishape3.q ))), linkinv = eval(substitute(function(eta, extra = NULL) { aa <- eta2theta(eta[, 1], .lshape1.a, earg = .eshape1.a) Scale <- eta2theta(eta[, 2], .lscale , earg = .escale ) parg <- 1 qq <- eta2theta(eta[, 3], .lshape3.q, earg = .eshape3.q) ans <- Scale * exp(lgamma(parg + 1/aa) + lgamma(qq - 1/aa) - lgamma(parg) - lgamma(qq)) ans[parg + 1/aa <= 0] <- NA ans[qq - 1/aa <= 0] <- NA ans[aa <= 0] <- NA ans[Scale <= 0] <- NA ans[qq <= 0] <- NA ans }, list( .lshape1.a = lshape1.a, .lscale = lscale, .eshape1.a = eshape1.a, .escale = escale, .eshape3.q = eshape3.q, .lshape3.q = lshape3.q ))), last = eval(substitute(expression({ misc$link <- c(shape1.a = .lshape1.a, scale = .lscale , shape3.q = .lshape3.q) misc$earg <- list(shape1.a = .eshape1.a, scale = .escale , shape3.q = .eshape3.q) }), list( .lshape1.a = lshape1.a, .lscale = lscale, .eshape1.a = eshape1.a, .escale = escale, .eshape3.q = eshape3.q, .lshape3.q = lshape3.q ))), loglikelihood = 
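  # The loglikelihood slot below maps the three linear predictors back
  # to (shape1.a, scale, shape3.q) via eta2theta() and then sums the
  # prior-weighted log-density dsinmad(..., log = TRUE).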
eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { aa <- eta2theta(eta[, 1], .lshape1.a , earg = .eshape1.a) scale <- eta2theta(eta[, 2], .lscale , earg = .escale ) parg <- 1 qq <- eta2theta(eta[, 3], .lshape3.q, earg = .eshape3.q ) if (residuals) stop("loglikelihood residuals ", "not implemented yet") else { sum(c(w) * dsinmad(x = y, shape1.a = aa, scale = scale, shape3.q = qq, log = TRUE)) } }, list( .lshape1.a = lshape1.a, .lscale = lscale, .lshape3.q = lshape3.q, .eshape1.a = eshape1.a, .escale = escale, .eshape3.q = eshape3.q ))), vfamily = c("sinmad"), deriv = eval(substitute(expression({ aa <- eta2theta(eta[, 1], .lshape1.a, earg = .eshape1.a) scale <- eta2theta(eta[, 2], .lscale , earg = .escale ) parg <- 1 qq <- eta2theta(eta[, 3], .lshape3.q, earg = .eshape3.q) temp1 <- log(y/scale) temp2 <- (y/scale)^aa temp3a <- digamma(parg) temp3b <- digamma(qq) dl.da <- 1 / aa + parg * temp1 - (parg + qq) * temp1 / (1 + 1 / temp2) dl.dscale <- (aa / scale) * (-parg + (parg + qq) / (1 + 1 / temp2)) dl.dq <- digamma(parg + qq) - temp3b - log1p(temp2) da.deta <- dtheta.deta(aa, .lshape1.a, earg = .eshape1.a) dscale.deta <- dtheta.deta(scale, .lscale , earg = .escale ) dq.deta <- dtheta.deta(qq, .lshape3.q, earg = .eshape3.q) c(w) * cbind(dl.da * da.deta, dl.dscale * dscale.deta, dl.dq * dq.deta ) }), list( .lshape1.a = lshape1.a, .lscale = lscale, .eshape1.a = eshape1.a, .escale = escale, .eshape3.q = eshape3.q, .lshape3.q = lshape3.q ))), weight = eval(substitute(expression({ ned2l.da <- (1 + parg+qq + parg * qq * (trigamma(parg) + trigamma(qq) + (temp3b - temp3a + (parg-qq)/(parg*qq))^2 - (parg^2 + qq^2) / (parg*qq)^2)) / (aa^2 * (1 + parg + qq)) ned2l.dscale <- aa^2 * parg * qq / (scale^2 * (1+parg+qq)) ned2l.dq <- 1/qq^2 ned2l.dascale <- (parg - qq - parg*qq * (temp3a -temp3b)) / (scale*(1 + parg+qq)) ned2l.daq <- -(parg * (temp3b -temp3a) -1) / (aa*(parg+qq)) ned2l.dscaleq <- -aa * parg / (scale*(parg+qq)) wz <- matrix(as.numeric(NA), n, dimm(M)) #M==3 means 6=dimm(M) wz[, iam(1, 1, M)] <- ned2l.da * da.deta^2 wz[, iam(2, 2, M)] <- ned2l.dscale * dscale.deta^2 wz[, iam(3, 3, M)] <- ned2l.dq * dq.deta^2 wz[, iam(1, 2, M)] <- ned2l.dascale * da.deta * dscale.deta wz[, iam(1, 3, M)] <- ned2l.daq * da.deta * dq.deta wz[, iam(2, 3, M)] <- ned2l.dscaleq * dscale.deta * dq.deta wz <- c(w) * wz wz }), list( .lshape1.a = lshape1.a, .lscale = lscale, .eshape1.a = eshape1.a, .escale = escale, .eshape3.q = eshape3.q, .lshape3.q = lshape3.q )))) } dagum <- function(lshape1.a = "loge", lscale = "loge", lshape2.p = "loge", ishape1.a = NULL, iscale = NULL, ishape2.p = 1.0, zero = NULL) { if (length(zero) && !is.Numeric(zero, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'zero'") lshape1.a <- as.list(substitute(lshape1.a)) eshape1.a <- link2list(lshape1.a) lshape1.a <- attr(eshape1.a, "function.name") lshape2.p <- as.list(substitute(lshape2.p)) eshape2.p <- link2list(lshape2.p) lshape2.p <- attr(eshape2.p, "function.name") lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") new("vglmff", blurb = c("Dagum distribution\n\n", "Links: ", namesof("shape1.a", lshape1.a, earg = eshape1.a), ", ", namesof("scale", lscale, earg = escale), ", ", namesof("shape2.p", lshape2.p, earg = eshape2.p), "\n", "Mean: scale * gamma(shape2.p + 1/shape1.a) * ", "gamma(1 - 1/shape1.a) / ", "gamma(shape2.p)"), constraints = eval(substitute(expression({ constraints <- cm.zero.vgam(constraints, x, .zero , M) }), list( .zero = zero 
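  # (The 'zero' argument is converted into constraint matrices by
  # cm.zero.vgam() above, so the nominated linear predictors are
  # modelled as intercept-only.)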
))), initialize = eval(substitute(expression({ w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1) predictors.names <- c(namesof("shape1.a", .lshape1.a, earg = .eshape1.a, tag = FALSE), namesof("scale", .lscale , earg = .escale , tag = FALSE), namesof("shape2.p", .lshape2.p, earg = .eshape2.p, tag = FALSE)) if (!length( .ishape1.a) || !length( .iscale )) { qvec <- c( .25, .5, .75) # Arbitrary; could be made an argument ishape2.p <- if (length( .ishape2.p)) .ishape2.p else 1 xvec <- log( qvec^(-1/ ishape2.p ) - 1 ) fit0 <- lsfit(x = xvec, y = log(quantile(y, qvec ))) } if (!length(etastart)) { parg <- rep(if (length( .ishape2.p )) .ishape2.p else 1.0, length.out = n) aa <- rep(if (length( .ishape1.a )) .ishape1.a else -1/fit0$coef[2], length.out = n) scale <- rep(if (length( .iscale )) .iscale else exp(fit0$coef[1]), length.out = n) outOfRange <- (parg + 1/aa <= 0) parg[outOfRange] <- 1 / aa[outOfRange] + 1 etastart <- cbind(theta2eta(aa, .lshape1.a, earg = .eshape1.a), theta2eta(scale, .lscale , earg = .escale ), theta2eta(parg, .lshape2.p, earg = .eshape2.p)) } }), list( .lshape1.a = lshape1.a, .lscale = lscale, .lshape2.p = lshape2.p, .eshape1.a = eshape1.a, .escale = escale, .eshape2.p = eshape2.p, .ishape1.a = ishape1.a, .iscale = iscale, .ishape2.p = ishape2.p ))), linkinv = eval(substitute(function(eta, extra = NULL) { aa <- eta2theta(eta[, 1], .lshape1.a, earg = .eshape1.a) Scale <- eta2theta(eta[, 2], .lscale , earg = .escale ) parg <- eta2theta(eta[, 3], .lshape2.p, earg = .eshape2.p) qq <- 1 ans <- Scale * exp(lgamma(parg + 1/aa) + lgamma(qq - 1/aa) - lgamma(parg) - lgamma(qq)) ans[parg + 1/aa <= 0] <- NA ans[qq - 1/aa <= 0] <- NA ans[aa <= 0] <- NA ans[Scale <= 0] <- NA ans[parg <= 0] <- NA ans }, list( .lshape1.a = lshape1.a, .lscale = lscale, .eshape1.a = eshape1.a, .escale = escale, .eshape2.p = eshape2.p, .lshape2.p = lshape2.p ))), last = eval(substitute(expression({ misc$link <- c(shape1.a = .lshape1.a, scale = .lscale , p = .lshape2.p ) misc$earg <- list(shape1.a = .eshape1.a, scale = .escale , p = .eshape2.p ) }), list( .lshape1.a = lshape1.a, .lscale = lscale, .lshape2.p = lshape2.p, .eshape1.a = eshape1.a, .escale = escale, .eshape2.p = eshape2.p ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { aa <- eta2theta(eta[, 1], .lshape1.a, earg = .eshape1.a) Scale <- eta2theta(eta[, 2], .lscale , earg = .escale ) parg <- eta2theta(eta[, 3], .lshape2.p, earg = .eshape2.p) qq <- 1 if (residuals) stop("loglikelihood residuals ", "not implemented yet") else { sum(c(w) * ddagum(x = y, shape1.a = aa, scale = Scale, shape2.p = parg, log = TRUE)) } }, list( .lshape1.a = lshape1.a, .lscale = lscale, .lshape2.p = lshape2.p, .eshape1.a = eshape1.a, .escale = escale, .eshape2.p = eshape2.p ))), vfamily = c("dagum"), deriv = eval(substitute(expression({ aa <- eta2theta(eta[, 1], .lshape1.a, earg = .eshape1.a) Scale <- eta2theta(eta[, 2], .lscale , earg = .escale ) parg <- eta2theta(eta[, 3], .lshape2.p, earg = .eshape2.p) qq <- 1 temp1 <- log(y / Scale) temp2 <- (y / Scale)^aa temp3a <- digamma(parg) temp3b <- digamma(qq) dl.da <- 1/aa + parg * temp1 - (parg+qq) * temp1 / (1+1/temp2) dl.dscale <- (aa/Scale) * (-parg + (parg+qq) / (1+1/temp2)) dl.dp <- aa * temp1 + digamma(parg + qq) - temp3a - log1p(temp2) da.deta <- dtheta.deta(aa, .lshape1.a, earg = .eshape1.a) dscale.deta <- dtheta.deta(Scale, .lscale , earg = .escale ) dp.deta <- dtheta.deta(parg, .lshape2.p, earg = .eshape2.p) c(w) * cbind( dl.da * da.deta, dl.dscale * dscale.deta, 
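                   # Each column of cbind() is a chain-rule product:
                   # the score d(loglik)/d(theta) times d(theta)/d(eta),
                   # i.e. the derivative wrt each linear predictor.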
dl.dp * dp.deta ) }), list( .lshape1.a = lshape1.a, .lscale = lscale, .eshape1.a = eshape1.a, .escale = escale, .eshape2.p = eshape2.p, .lshape2.p = lshape2.p ))), weight = eval(substitute(expression({ ned2l.da <- (1 + parg+qq + parg * qq * (trigamma(parg) + trigamma(qq) + (temp3b - temp3a + (parg-qq)/(parg*qq))^2 - (parg^2 + qq^2) / (parg*qq)^2)) / (aa^2 * (1+parg+qq)) ned2l.dscale <- aa^2 * parg * qq / (Scale^2 * (1+parg+qq)) ned2l.dp <- 1 / parg^2 ned2l.dascale <- (parg - qq - parg * qq *(temp3a -temp3b) ) / (Scale * (1 + parg+qq)) ned2l.dap= -(qq * (temp3a -temp3b) -1) / (aa*(parg+qq)) ned2l.dscalep <- aa * qq / (Scale * (parg + qq)) wz <- matrix(as.numeric(NA), n, dimm(M)) #M==3 means 6=dimm(M) wz[, iam(1, 1, M)] <- ned2l.da * da.deta^2 wz[, iam(2, 2, M)] <- ned2l.dscale * dscale.deta^2 wz[, iam(3, 3, M)] <- ned2l.dp * dp.deta^2 wz[, iam(1, 2, M)] <- ned2l.dascale * da.deta * dscale.deta wz[, iam(1, 3, M)] <- ned2l.dap * da.deta * dp.deta wz[, iam(2, 3, M)] <- ned2l.dscalep * dscale.deta * dp.deta wz <- c(w) * wz wz }), list( .lshape1.a = lshape1.a, .lscale = lscale, .eshape1.a = eshape1.a, .escale = escale, .eshape2.p = eshape2.p, .lshape2.p = lshape2.p )))) } betaII <- function(lscale = "loge", lshape2.p = "loge", lshape3.q = "loge", iscale = NULL, ishape2.p = 2, ishape3.q = 2, zero = NULL) { if (length(zero) && !is.Numeric(zero, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'zero'") lshape2.p <- as.list(substitute(lshape2.p)) eshape2.p <- link2list(lshape2.p) lshape2.p <- attr(eshape2.p, "function.name") lshape3.q <- as.list(substitute(lshape3.q)) eshape3.q <- link2list(lshape3.q) lshape3.q <- attr(eshape3.q, "function.name") lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") new("vglmff", blurb = c("Beta II distribution\n\n", "Links: ", namesof("scale", lscale, earg = escale), ", ", namesof("shape2.p", lshape2.p, earg = eshape2.p), ", ", namesof("shape3.q", lshape3.q, earg = eshape3.q), "\n", "Mean: scale * gamma(shape2.p + 1) * ", "gamma(shape3.q - 1) / ", "(gamma(shape2.p) * gamma(shape3.q))"), constraints = eval(substitute(expression({ constraints <- cm.zero.vgam(constraints, x, .zero , M) }), list( .zero = zero ))), initialize = eval(substitute(expression({ w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1) predictors.names <- c(namesof("scale", .lscale , earg = .escale , tag = FALSE), namesof("shape2.p", .lshape2.p, earg = .eshape2.p, tag = FALSE), namesof("shape3.q", .lshape3.q, earg = .eshape3.q, tag = FALSE)) if (!length( .iscale )) { qvec <- c(0.25, .5, .75) # Arbitrary; could be made an argument ishape3.q <- if (length( .ishape3.q)) .ishape3.q else 1 xvec <- log( (1-qvec)^(-1/ ishape3.q ) - 1 ) fit0 <- lsfit(x = xvec, y = log(quantile(y, qvec ))) } if (!length(etastart)) { scale <- rep(if (length( .iscale )) .iscale else exp(fit0$coef[1]), length.out = n) qq <- rep(if (length( .ishape3.q)) .ishape3.q else 1.0, length.out = n) parg <- rep(if (length( .ishape2.p)) .ishape2.p else 1.0, length.out = n) aa <- 1 outOfRange <- (parg + 1/aa <= 0) parg[outOfRange] <- 1 / aa[outOfRange] + 1 outOfRange <- (qq - 1/aa <= 0) qq[outOfRange] <- 1 / aa + 1 etastart <- cbind(theta2eta(scale, .lscale , earg = .escale ), theta2eta(parg, .lshape2.p, earg = .eshape2.p), theta2eta(qq, .lshape3.q, earg = .eshape3.q)) } }), list( .lscale = lscale, .escale = escale, .lshape2.p = lshape2.p, .lshape3.q = lshape3.q, .eshape2.p = eshape2.p, .eshape3.q = eshape3.q, .iscale = iscale, .ishape2.p = ishape2.p, .ishape3.q = 
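  # (linkinv below evaluates the mean scale * gamma(parg + 1) *
  # gamma(qq - 1) / (gamma(parg) * gamma(qq)), i.e. with shape1.a
  # fixed at 1, using lgamma() for numerical stability.)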
ishape3.q ))), linkinv = eval(substitute(function(eta, extra = NULL) { aa <- 1 Scale <- eta2theta(eta[, 1], .lscale , earg = .escale ) parg <- eta2theta(eta[, 2], .lshape2.p, earg = .eshape2.p) qq <- eta2theta(eta[, 3], .lshape3.q, earg = .eshape3.q) ans <- Scale * exp(lgamma(parg + 1/aa) + lgamma(qq - 1/aa) - lgamma(parg) - lgamma(qq)) ans[parg + 1/aa <= 0] <- NA ans[qq - 1/aa <= 0] <- NA ans[Scale <= 0] <- NA ans[parg <= 0] <- NA ans[qq <= 0] <- NA ans }, list( .lscale = lscale, .escale = escale, .eshape2.p = eshape2.p, .eshape3.q = eshape3.q, .lshape2.p = lshape2.p, .lshape3.q = lshape3.q ))), last = eval(substitute(expression({ misc$link <- c(scale = .lscale , shape2.p = .lshape2.p, shape3.q = .lshape3.q) misc$earg <- list(scale = .escale , shape2.p = .eshape2.p, shape3.q = .eshape3.q) misc$expected <- TRUE }), list( .lscale = lscale, .lshape2.p = lshape2.p, .lshape3.q = lshape3.q, .escale = escale, .eshape2.p = eshape2.p, .eshape3.q = eshape3.q ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { aa <- 1 scale <- eta2theta(eta[, 1], .lscale , earg = .escale ) parg <- eta2theta(eta[, 2], .lshape2.p, earg = .eshape2.p) qq <- eta2theta(eta[, 3], .lshape3.q, earg = .eshape3.q) if (residuals) stop("loglikelihood residuals ", "not implemented yet") else { sum(c(w) * (log(aa) + (aa*parg-1)*log(y) - aa*parg*log(scale) + (-lbeta(parg, qq)) - (parg+qq)*log1p((y/scale)^aa))) } }, list( .lscale = lscale, .escale = escale, .eshape2.p = eshape2.p, .eshape3.q = eshape3.q, .lshape2.p = lshape2.p, .lshape3.q = lshape3.q ))), vfamily = c("betaII"), deriv = eval(substitute(expression({ aa <- 1 scale <- eta2theta(eta[, 1], .lscale , earg = .escale ) parg <- eta2theta(eta[, 2], .lshape2.p, earg = .eshape2.p) qq <- eta2theta(eta[, 3], .lshape3.q, earg = .eshape3.q) temp1 <- log(y/scale) temp2 <- (y/scale)^aa temp3 <- digamma(parg + qq) temp3a <- digamma(parg) temp3b <- digamma(qq) temp4 <- log1p(temp2) dl.dscale <- (aa/scale) * (-parg + (parg+qq) / (1+1/temp2)) dl.dp <- aa * temp1 + temp3 - temp3a - temp4 dl.dq <- temp3 - temp3b - temp4 dscale.deta <- dtheta.deta(scale, .lscale , earg = .escale ) dp.deta <- dtheta.deta(parg, .lshape2.p, earg = .eshape2.p) dq.deta <- dtheta.deta(qq, .lshape3.q, earg = .eshape3.q) c(w) * cbind( dl.dscale * dscale.deta, dl.dp * dp.deta, dl.dq * dq.deta ) }), list( .lscale = lscale, .escale = escale, .eshape2.p = eshape2.p, .eshape3.q = eshape3.q, .lshape2.p = lshape2.p, .lshape3.q = lshape3.q ))), weight = eval(substitute(expression({ temp5 <- trigamma(parg + qq) ned2l.dscale <- aa^2 * parg * qq / (scale^2 * (1+parg+qq)) ned2l.dp <- trigamma(parg) - temp5 ned2l.dq <- trigamma(qq) - temp5 ned2l.dscalep <- aa * qq / (scale*(parg+qq)) ned2l.dscaleq <- -aa * parg / (scale*(parg+qq)) ned2l.dpq <- -temp5 wz <- matrix(as.numeric(NA), n, dimm(M)) #M==3 means 6=dimm(M) wz[, iam(1, 1, M)] <- ned2l.dscale * dscale.deta^2 wz[, iam(2, 2, M)] <- ned2l.dp * dp.deta^2 wz[, iam(3, 3, M)] <- ned2l.dq * dq.deta^2 wz[, iam(1, 2, M)] <- ned2l.dscalep * dscale.deta * dp.deta wz[, iam(1, 3, M)] <- ned2l.dscaleq * dscale.deta * dq.deta wz[, iam(2, 3, M)] <- ned2l.dpq * dp.deta * dq.deta wz <- c(w) * wz wz }), list( .lscale = lscale, .lshape2.p = lshape2.p, .lshape3.q = lshape3.q, .escale = escale, .eshape2.p = eshape2.p, .eshape3.q = eshape3.q )))) } lomax <- function(lscale = "loge", lshape3.q = "loge", iscale = NULL, ishape3.q = NULL, # 2.0, gshape3.q = exp(-5:5), zero = NULL) { if (length(zero) && !is.Numeric(zero, integer.valued = TRUE, positive = 
TRUE)) stop("bad input for argument 'zero'") lshape3.q <- as.list(substitute(lshape3.q)) eshape3.q <- link2list(lshape3.q) lshape3.q <- attr(eshape3.q, "function.name") lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") new("vglmff", blurb = c("Lomax distribution\n\n", "Links: ", namesof("scale", lscale, earg = escale), ", ", namesof("shape3.q", lshape3.q, earg = eshape3.q), "\n", "Mean: scale / (shape3.q - 1)"), constraints = eval(substitute(expression({ constraints <- cm.zero.vgam(constraints, x, .zero , M) }), list( .zero = zero ))), initialize = eval(substitute(expression({ if (ncol(cbind(y)) != 1) stop("response must be a vector or a one-column matrix") w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1) mynames1 <- "scale" mynames2 <- "shape3.q" predictors.names <- c(namesof(mynames1, .lscale , earg = .escale , tag = FALSE), namesof(mynames2, .lshape3.q , earg = .eshape3.q , tag = FALSE)) aa <- parg <- 1 lomax.Loglikfun <- function(shape3.q, y, x, w, extraargs) { qvec <- c(0.25, 0.5, 0.75) # Arbitrary; could be made an argument xvec <- log( (1-qvec)^(-1/ shape3.q ) - 1 ) fit0 <- lsfit(x = xvec, y = log(quantile(y, qvec ))) init.scale <- exp(fit0$coef[1]) ans <- sum(c(w) * dlomax(x = y, shape3.q = shape3.q, scale = init.scale, log = TRUE)) ans } shape3.q.grid <- .gshape3.q yvec <- y wvec <- w Init.shape3.q <- getMaxMin(shape3.q.grid, objfun = lomax.Loglikfun, y = yvec, x = x, w = wvec, extraargs = NULL) if (!length( .iscale )) { qvec <- c(0.25, 0.5, 0.75) # Arbitrary; could be made an argument ishape3.q <- if (length( .ishape3.q )) .ishape3.q else Init.shape3.q xvec <- log( (1-qvec)^(-1/ ishape3.q ) - 1 ) fit0 <- lsfit(x = xvec, y = log(quantile(y, qvec ))) } if (!length(etastart)) { qq <- rep(if (length( .ishape3.q)) .ishape3.q else Init.shape3.q, length.out = n) scale <- rep(if (length( .iscale )) .iscale else exp(fit0$coef[1]), length.out = n) aa <- 1 outOfRange <- (qq - 1/aa <= 0) qq[outOfRange] <- 1 / aa + 1 etastart <- cbind(theta2eta(scale, .lscale , earg = .escale ), theta2eta(qq, .lshape3.q, earg = .eshape3.q)) } }), list( .lscale = lscale, .lshape3.q = lshape3.q, .escale = escale, .eshape3.q = eshape3.q, .gshape3.q = gshape3.q, .iscale = iscale, .ishape3.q = ishape3.q ))), linkinv = eval(substitute(function(eta, extra = NULL) { aa <- 1 Scale <- eta2theta(eta[, 1], .lscale , earg = .escale ) parg <- 1 qq <- eta2theta(eta[, 2], .lshape3.q , earg = .eshape3.q ) ans <- Scale * exp(lgamma(parg + 1/aa) + lgamma(qq - 1/aa) - lgamma(parg) - lgamma(qq)) ans[parg + 1/aa <= 0] <- NA ans[qq - 1/aa <= 0] <- NA ans[Scale <= 0] <- NA ans[qq <= 0] <- NA ans }, list( .lscale = lscale, .lshape3.q = lshape3.q, .escale = escale, .eshape3.q = eshape3.q ))), last = eval(substitute(expression({ misc$link <- c(scale = .lscale , shape3.q = .lshape3.q) misc$earg <- list(scale = .escale , shape3.q = .eshape3.q) misc$expected <- TRUE }), list( .lscale = lscale, .lshape3.q = lshape3.q, .escale = escale, .eshape3.q = eshape3.q ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { aa <- 1 scale <- eta2theta(eta[, 1], .lscale , earg = .escale ) parg <- 1 qq <- eta2theta(eta[, 2], .lshape3.q, earg = .eshape3.q) if (residuals) stop("loglikelihood residuals ", "not implemented yet") else { sum(c(w) * dlomax(x = y, scale = scale, shape3.q = qq, log = TRUE)) } }, list( .lscale = lscale, .lshape3.q = lshape3.q, .escale = escale, .eshape3.q = eshape3.q ))), vfamily = c("lomax"), deriv = 
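  # The Lomax is the Singh-Maddala special case shape1.a = 1 (with
  # shape2.p = 1 as well), so the scores in deriv below are the
  # sinmad ones with aa and parg hard-coded to 1.  Note also that,
  # unlike the other families here, lomax() initializes shape3.q by
  # maximizing the log-likelihood over the grid .gshape3.q via
  # getMaxMin() in the initialize slot above.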
eval(substitute(expression({ aa <- 1 scale <- eta2theta(eta[, 1], .lscale , earg = .escale ) parg <- 1 qq <- eta2theta(eta[, 2], .lshape3.q, earg = .eshape3.q) temp2 <- (y/scale)^aa dl.dscale <- (aa/scale) * (-parg + (parg+qq) / (1+1/temp2)) dl.dq <- digamma(parg + qq) - digamma(qq) - log1p(temp2) dscale.deta <- dtheta.deta(scale, .lscale , earg = .escale ) dq.deta <- dtheta.deta(qq, .lshape3.q, earg = .eshape3.q) c(w) * cbind( dl.dscale * dscale.deta, dl.dq * dq.deta ) }), list( .lscale = lscale, .lshape3.q = lshape3.q, .escale = escale, .eshape3.q = eshape3.q ))), weight = eval(substitute(expression({ ned2l.dscale <- aa^2 * parg * qq / (scale^2 * (1+parg+qq)) ned2l.dq <- 1/qq^2 ned2l.dscaleq <- -aa * parg / (scale*(parg+qq)) wz <- matrix(as.numeric(NA), n, dimm(M)) #M == 2 means 3=dimm(M) wz[, iam(1, 1, M)] <- ned2l.dscale * dscale.deta^2 wz[, iam(2, 2, M)] <- ned2l.dq * dq.deta^2 wz[, iam(1, 2, M)] <- ned2l.dscaleq * dscale.deta * dq.deta wz <- c(w) * wz wz }), list( .lscale = lscale, .lshape3.q = lshape3.q, .escale = escale, .eshape3.q = eshape3.q )))) } fisk <- function(lshape1.a = "loge", lscale = "loge", ishape1.a = NULL, iscale = NULL, zero = NULL) { if (length(zero) && !is.Numeric(zero, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'zero'") lshape1.a <- as.list(substitute(lshape1.a)) eshape1.a <- link2list(lshape1.a) lshape1.a <- attr(eshape1.a, "function.name") lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") new("vglmff", blurb = c("Fisk distribution\n\n", "Links: ", namesof("shape1.a", lshape1.a, earg = eshape1.a), ", ", namesof("scale", lscale, earg = escale), "\n", "Mean: scale * gamma(1 + 1/shape1.a) * ", "gamma(1 - 1/shape1.a)"), constraints = eval(substitute(expression({ constraints <- cm.zero.vgam(constraints, x, .zero , M) }), list( .zero = zero ))), initialize = eval(substitute(expression({ w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1) predictors.names <- c(namesof("shape1.a", .lshape1.a , earg = .eshape1.a , tag = FALSE), namesof("scale", .lscale , earg = .escale , tag = FALSE)) qq <- parg <- 1 if (!length( .iscale )) { qvec <- c( 0.25, 0.5, 0.75) # Arbitrary; could be made an argument xvec <- log( 1 / qvec - 1 ) fit0 <- lsfit(x = xvec, y = log(quantile(y, qvec ))) } if (!length(etastart)) { aa <- rep(if (length( .ishape1.a)) .ishape1.a else abs(-1 / fit0$coef[2]), length.out = n) scale <- rep(if (length( .iscale )) .iscale else exp(fit0$coef[1]), length.out = n) parg <- 1 qq <- 1 outOfRange <- (parg + 1/aa <= 0) parg[outOfRange] <- 1 / aa[outOfRange] + 1 outOfRange <- (qq - 1/aa <= 0) qq[outOfRange] <- 1 / aa + 1 etastart <- cbind(theta2eta(aa, .lshape1.a , earg = .eshape1.a ), theta2eta(scale, .lscale , earg = .escale )) } }), list( .lshape1.a = lshape1.a, .lscale = lscale, .eshape1.a = eshape1.a, .escale = escale, .ishape1.a = ishape1.a, .iscale = iscale ))), linkinv = eval(substitute(function(eta, extra = NULL) { aa <- eta2theta(eta[, 1], .lshape1.a , earg = .eshape1.a) Scale <- eta2theta(eta[, 2], .lscale , earg = .escale ) parg <- 1 qq <- 1 ans <- Scale * exp(lgamma(parg + 1/aa) + lgamma(qq - 1/aa) - lgamma(parg) - lgamma(qq)) ans[parg + 1/aa <= 0] <- NA ans[qq - 1/aa <= 0] <- NA ans[aa <= 0] <- NA ans[Scale <= 0] <- NA ans }, list( .lshape1.a = lshape1.a, .lscale = lscale, .eshape1.a = eshape1.a, .escale = escale))), last = eval(substitute(expression({ misc$link <- c(shape1.a = .lshape1.a , scale = .lscale ) misc$earg <- list(shape1.a = .eshape1.a , scale = .escale ) 
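    # misc$expected records that the working weight matrices use the
    # expected (Fisher) information rather than the observed information.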
misc$expected <- TRUE }), list( .lshape1.a = lshape1.a, .lscale = lscale, .eshape1.a = eshape1.a, .escale = escale))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { aa <- eta2theta(eta[, 1], .lshape1.a , earg = .eshape1.a ) scale <- eta2theta(eta[, 2], .lscale , earg = .escale ) parg <- qq <- 1 if (residuals) stop("loglikelihood residuals ", "not implemented yet") else { sum(c(w) * dfisk(x = y, shape1.a = aa, scale = scale, log = TRUE)) } }, list( .lshape1.a = lshape1.a, .lscale = lscale, .eshape1.a = eshape1.a, .escale = escale))), vfamily = c("fisk"), deriv = eval(substitute(expression({ aa <- eta2theta(eta[, 1], .lshape1.a, earg = .eshape1.a) scale <- eta2theta(eta[, 2], .lscale , earg = .escale ) parg <- qq <- 1 temp1 <- log(y/scale) temp2 <- (y/scale)^aa temp3a <- digamma(parg) temp3b <- digamma(qq) dl.da <- 1/aa + parg * temp1 - (parg+qq) * temp1 / (1+1/temp2) dl.dscale <- (aa/scale) * (-parg + (parg+qq) / (1+1/temp2)) da.deta <- dtheta.deta(aa, .lshape1.a , earg = .eshape1.a ) dscale.deta <- dtheta.deta(scale, .lscale , earg = .escale ) c(w) * cbind( dl.da * da.deta, dl.dscale * dscale.deta ) }), list( .lshape1.a = lshape1.a, .lscale = lscale, .eshape1.a = eshape1.a, .escale = escale))), weight = eval(substitute(expression({ ned2l.da <- (1 + parg+qq + parg * qq * (trigamma(parg) + trigamma(qq) + (temp3b - temp3a + (parg-qq)/(parg*qq))^2 - (parg^2 + qq^2) / (parg*qq)^2)) / (aa^2 * (1+parg+qq)) ned2l.dscale <- aa^2 * parg * qq / (scale^2 * (1 + parg + qq)) ned2l.dascale <- (parg - qq - parg*qq*(temp3a - temp3b)) / ( scale * (1 + parg + qq)) wz <- matrix(as.numeric(NA), n, dimm(M)) #M == 2 means 3=dimm(M) wz[, iam(1, 1, M)] <- ned2l.da * da.deta^2 wz[, iam(2, 2, M)] <- ned2l.dscale * dscale.deta^2 wz[, iam(1, 2, M)] <- ned2l.dascale * da.deta * dscale.deta wz <- c(w) * wz wz }), list( .lshape1.a = lshape1.a, .lscale = lscale, .eshape1.a = eshape1.a, .escale = escale)))) } invlomax <- function(lscale = "loge", lshape2.p = "loge", iscale = NULL, ishape2.p = 1.0, zero = NULL) { if (length(zero) && !is.Numeric(zero, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'zero'") lshape2.p <- as.list(substitute(lshape2.p)) eshape2.p <- link2list(lshape2.p) lshape2.p <- attr(eshape2.p, "function.name") lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") new("vglmff", blurb = c("Inverse Lomax distribution\n\n", "Links: ", namesof("scale", lscale, earg = escale), ", ", namesof("shape2.p", lshape2.p, earg = eshape2.p), "\n", "Mean: does not exist"), constraints = eval(substitute(expression({ constraints <- cm.zero.vgam(constraints, x, .zero , M) }), list( .zero = zero ))), initialize = eval(substitute(expression({ w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1) predictors.names <- c(namesof("scale", .lscale , earg = .escale , tag = FALSE), namesof("shape2.p", .lshape2.p, earg = .eshape2.p, tag = FALSE)) qq <- aa <- 1 if (!length( .iscale )) { qvec <- c(0.25, .5, .75) # Arbitrary; could be made an argument ishape2.p <- if (length( .ishape2.p)) .ishape2.p else 1 xvec <- log( qvec^(-1/ ishape2.p ) - 1 ) fit0 <- lsfit(x = xvec, y = log(quantile(y, qvec ))) } if (!length(etastart)) { scale <- rep(if (length( .iscale )) .iscale else exp(fit0$coef[1]), length.out = n) parg <- rep(if (length( .ishape2.p)) .ishape2.p else 1.0, length.out = n) etastart <- cbind(theta2eta(scale, .lscale , earg = .escale ), theta2eta(parg, .lshape2.p, earg = .eshape2.p)) } }), list( .lscale = lscale, 
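  # (The inverse Lomax has no finite mean -- see the blurb above --
  # which is why the linkinv slot below returns NA * Scale rather
  # than a fitted mean.)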
.lshape2.p = lshape2.p, .escale = escale, .eshape2.p = eshape2.p, .iscale = iscale, .ishape2.p = ishape2.p ))), linkinv = eval(substitute(function(eta, extra = NULL) { Scale <- eta2theta(eta[, 1], .lscale , earg = .escale ) parg <- eta2theta(eta[, 2], .lshape2.p, earg = .eshape2.p) NA * Scale }, list( .lscale = lscale, .escale = escale, .eshape2.p = eshape2.p, .lshape2.p = lshape2.p ))), last = eval(substitute(expression({ misc$link <- c(scale = .lscale , shape2.p = .lshape2.p ) misc$earg <- list(scale = .escale , shape2.p = .eshape2.p ) misc$expected <- TRUE }), list( .lscale = lscale, .escale = escale, .eshape2.p = eshape2.p, .lshape2.p = lshape2.p ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { aa <- 1 scale <- eta2theta(eta[, 1], .lscale , earg = .escale ) parg <- eta2theta(eta[, 2], .lshape2.p, earg = .eshape2.p) qq <- 1 if (residuals) stop("loglikelihood residuals ", "not implemented yet") else { sum(c(w) * dinvlomax(x = y, scale = scale, shape2.p = parg, log = TRUE)) } }, list( .lscale = lscale, .lshape2.p = lshape2.p, .escale = escale, .eshape2.p = eshape2.p ))), vfamily = c("invlomax"), deriv = eval(substitute(expression({ aa <- qq <- 1 scale <- eta2theta(eta[, 1], .lscale , earg = .escale ) parg <- eta2theta(eta[, 2], .lshape2.p , earg = .eshape2.p) temp1 <- log(y/scale) temp2 <- (y/scale)^aa dl.dscale <- (aa/scale) * (-parg + (parg+qq) / (1+1/temp2)) dl.dp <- aa * temp1 + digamma(parg + qq) - digamma(parg) - log1p(temp2) dscale.deta <- dtheta.deta(scale, .lscale , earg = .escale ) dp.deta <- dtheta.deta(parg, .lshape2.p , earg = .eshape2.p) c(w) * cbind( dl.dscale * dscale.deta, dl.dp * dp.deta ) }), list( .lscale = lscale, .lshape2.p = lshape2.p, .escale = escale, .eshape2.p = eshape2.p ))), weight = eval(substitute(expression({ ned2l.dscale <- aa^2 * parg * qq / (scale^2 * (1 + parg + qq)) ned2l.dp <- 1 / parg^2 ned2l.dscalep <- aa * qq / (scale * (parg + qq)) wz <- matrix(as.numeric(NA), n, dimm(M)) #M == 2 means 3=dimm(M) wz[, iam(1, 1, M)] <- ned2l.dscale * dscale.deta^2 wz[, iam(2, 2, M)] <- ned2l.dp * dp.deta^2 wz[, iam(1, 2, M)] <- ned2l.dscalep * dscale.deta * dp.deta wz <- c(w) * wz wz }), list( .lscale = lscale, .lshape2.p = lshape2.p, .escale = escale, .eshape2.p = eshape2.p )))) } paralogistic <- function(lshape1.a = "loge", lscale = "loge", ishape1.a = 2, iscale = NULL, zero = NULL) { if (length(zero) && !is.Numeric(zero, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'zero'") lshape1.a <- as.list(substitute(lshape1.a)) eshape1.a <- link2list(lshape1.a) lshape1.a <- attr(eshape1.a, "function.name") lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") new("vglmff", blurb = c("Paralogistic distribution\n\n", "Links: ", namesof("shape1.a", lshape1.a, earg = eshape1.a), ", ", namesof("scale", lscale, earg = escale), "\n", "Mean: scale * gamma(1 + 1/shape1.a) * ", "gamma(shape1.a - 1/shape1.a) / gamma(shape1.a)"), constraints = eval(substitute(expression({ constraints <- cm.zero.vgam(constraints, x, .zero , M) }), list( .zero = zero ))), initialize = eval(substitute(expression({ w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1) predictors.names <- c(namesof("shape1.a", .lshape1.a, earg = .eshape1.a, tag = FALSE), namesof("scale", .lscale , earg = .escale , tag = FALSE)) parg <- 1 if (!length( .ishape1.a) || !length( .iscale )) { qvec <- c( .25, .5, .75) # Arbitrary; could be made an argument ishape1.a <- if (length( .ishape1.a)) .ishape1.a 
else 1 xvec <- log( (1-qvec)^(-1/ ishape1.a ) - 1 ) fit0 <- lsfit(x = xvec, y = log(quantile(y, qvec ))) } if (!length(etastart)) { aa <- rep(if (length( .ishape1.a)) .ishape1.a else abs(1/fit0$coef[2]), length.out = n) scale <- rep(if (length( .iscale )) .iscale else exp(fit0$coef[1]), length.out = n) parg <- 1 qq <- aa outOfRange <- (parg + 1/aa <= 0) parg[outOfRange] <- 1 / aa[outOfRange] + 1 outOfRange <- (qq - 1/aa <= 0) aa[outOfRange] <- qq[outOfRange] <- 2 # Need aa > 1, where aa == qq etastart <- cbind(theta2eta(aa, .lshape1.a, earg = .eshape1.a), theta2eta(scale, .lscale , earg = .escale )) } }), list( .lshape1.a = lshape1.a, .lscale = lscale, .eshape1.a = eshape1.a, .escale = escale, .ishape1.a = ishape1.a, .iscale = iscale ))), linkinv = eval(substitute(function(eta, extra = NULL) { aa <- eta2theta(eta[, 1], .lshape1.a , earg = .eshape1.a ) Scale <- eta2theta(eta[, 2], .lscale , earg = .escale ) parg <- 1 qq <- aa ans <- Scale * exp(lgamma(parg + 1/aa) + lgamma(qq - 1/aa) - lgamma(parg) - lgamma(qq)) ans[parg + 1/aa <= 0] <- NA ans[qq - 1/aa <= 0] <- NA ans[aa <= 0] <- NA ans[Scale <= 0] <- NA ans }, list( .lshape1.a = lshape1.a, .lscale = lscale, .eshape1.a = eshape1.a, .escale = escale))), last = eval(substitute(expression({ misc$link <- c(shape1.a = .lshape1.a, scale = .lscale) misc$earg <- list(shape1.a = .eshape1.a, scale = .escale ) misc$expected <- TRUE }), list( .lshape1.a = lshape1.a, .lscale = lscale, .eshape1.a = eshape1.a, .escale = escale))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { aa <- eta2theta(eta[, 1], .lshape1.a, earg = .eshape1.a) scale <- eta2theta(eta[, 2], .lscale , earg = .escale ) parg <- 1 qq <- aa if (residuals) stop("loglikelihood residuals ", "not implemented yet") else { sum(c(w) * dparalogistic(x = y, shape1.a = aa, scale = scale, log = TRUE)) } }, list( .lshape1.a = lshape1.a, .lscale = lscale, .eshape1.a = eshape1.a, .escale = escale))), vfamily = c("paralogistic"), deriv = eval(substitute(expression({ aa <- eta2theta(eta[, 1], .lshape1.a , earg = .eshape1.a) scale <- eta2theta(eta[, 2], .lscale , earg = .escale ) parg <- 1 qq <- aa temp1 <- log(y/scale) temp2 <- (y/scale)^aa temp3a <- digamma(parg) temp3b <- digamma(qq) dl.da <- 1/aa + parg * temp1 - (parg+qq) * temp1 / (1+1/temp2) dl.dscale <- (aa/scale) * (-parg + (parg+qq) / (1+1/temp2)) da.deta <- dtheta.deta(aa, .lshape1.a, earg = .eshape1.a) dscale.deta <- dtheta.deta(scale, .lscale , earg = .escale ) c(w) * cbind( dl.da * da.deta, dl.dscale * dscale.deta) }), list( .lshape1.a = lshape1.a, .lscale = lscale, .eshape1.a = eshape1.a, .escale = escale))), weight = eval(substitute(expression({ ned2l.da <- (1 + parg + qq + parg * qq * (trigamma(parg) + trigamma(qq) + (temp3b - temp3a + (parg-qq)/(parg*qq))^2 - (parg^2 + qq^2) / (parg*qq)^2)) / (aa^2 * (1+parg+qq)) ned2l.dscale <- aa^2 * parg * qq / (scale^2 * (1+parg+qq)) ned2l.dascale <- (parg - qq - parg*qq*(temp3a -temp3b)) /( scale*(1 + parg+qq)) wz <- matrix(as.numeric(NA), n, dimm(M)) #M == 2 means 3=dimm(M) wz[, iam(1, 1, M)] <- ned2l.da * da.deta^2 wz[, iam(2, 2, M)] <- ned2l.dscale * dscale.deta^2 wz[, iam(1, 2, M)] <- ned2l.dascale * da.deta * dscale.deta wz <- c(w) * wz wz }), list( .lshape1.a = lshape1.a, .lscale = lscale, .eshape1.a = eshape1.a, .escale = escale)))) } invparalogistic <- function(lshape1.a = "loge", lscale = "loge", ishape1.a = 2, iscale = NULL, zero = NULL) { if (length(zero) && !is.Numeric(zero, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 
'zero'") lshape1.a <- as.list(substitute(lshape1.a)) eshape1.a <- link2list(lshape1.a) lshape1.a <- attr(eshape1.a, "function.name") lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") new("vglmff", blurb = c("Inverse paralogistic distribution\n\n", "Links: ", namesof("shape1.a", lshape1.a, earg = eshape1.a), ", ", namesof("scale", lscale, earg = escale), "\n", "Mean: scale * gamma(shape1.a + 1/shape1.a) * ", "gamma(1 - 1/shape1.a)/gamma(shape1.a)"), constraints = eval(substitute(expression({ constraints <- cm.zero.vgam(constraints, x, .zero , M) }), list( .zero = zero ))), initialize = eval(substitute(expression({ w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1) predictors.names <- c(namesof("shape1.a", .lshape1.a, earg = .eshape1.a , tag = FALSE), namesof("scale", .lscale , earg = .escale , tag = FALSE)) if (!length( .ishape1.a) || !length( .iscale )) { qvec <- c( .25, .5, .75) # Arbitrary; could be made an argument ishape2.p <- if (length( .ishape1.a )) .ishape1.a else 1 xvec <- log( qvec^(-1/ ishape2.p ) - 1 ) fit0 <- lsfit(x = xvec, y = log(quantile(y, qvec ))) } qq <- 1 if (!length(etastart)) { aa <- rep(if (length( .ishape1.a)) .ishape1.a else -1/fit0$coef[2], length = n) scale <- rep(if (length( .iscale )) .iscale else exp(fit0$coef[1]), length = n) parg <- aa qq <- 1 outOfRange <- (parg + 1/aa <= 0) parg[outOfRange] <- aa[outOfRange] <- 2 outOfRange <- (qq - 1/aa <= 0) qq[outOfRange] <- 1 / aa[outOfRange] + 1 etastart <- cbind(theta2eta(aa, .lshape1.a , earg = .eshape1.a), theta2eta(scale, .lscale , earg = .escale )) } }), list( .lshape1.a = lshape1.a, .lscale = lscale, .eshape1.a = eshape1.a, .escale = escale, .ishape1.a = ishape1.a, .iscale = iscale ))), linkinv = eval(substitute(function(eta, extra = NULL) { aa <- eta2theta(eta[, 1], .lshape1.a, earg = .eshape1.a) Scale <- eta2theta(eta[, 2], .lscale , earg = .escale ) parg <- aa qq <- 1 ans <- Scale * exp(lgamma(parg + 1/aa) + lgamma(qq - 1/aa) - lgamma(parg) - lgamma(qq)) ans[parg + 1/aa <= 0] <- NA ans[qq - 1/aa <= 0] <- NA ans[aa <= 0] <- NA ans[Scale <= 0] <- NA ans }, list( .lshape1.a = lshape1.a, .lscale = lscale, .eshape1.a = eshape1.a, .escale = escale))), last = eval(substitute(expression({ misc$link <- c(shape1.a = .lshape1.a, scale = .lscale ) misc$earg <- list(shape1.a = .eshape1.a, scale = .escale ) misc$expected <- TRUE }), list( .lshape1.a = lshape1.a, .lscale = lscale, .eshape1.a = eshape1.a, .escale = escale))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { aa <- eta2theta(eta[, 1], .lshape1.a, earg = .eshape1.a) scale <- eta2theta(eta[, 2], .lscale , earg = .escale ) parg <- aa qq <- 1 if (residuals) stop("loglikelihood residuals ", "not implemented yet") else { sum(c(w) * dinvparalogistic(x = y, shape1.a = aa, scale = scale, log = TRUE)) } }, list( .lshape1.a = lshape1.a, .lscale = lscale, .eshape1.a = eshape1.a, .escale = escale))), vfamily = c("invparalogistic"), deriv = eval(substitute(expression({ aa <- eta2theta(eta[, 1], .lshape1.a, earg = .eshape1.a) scale <- eta2theta(eta[, 2], .lscale , earg = .escale ) parg <- aa qq <- 1 temp1 <- log(y/scale) temp2 <- (y/scale)^aa temp3a <- digamma(parg) temp3b <- digamma(qq) dl.da <- 1/aa + parg * temp1 - (parg+qq) * temp1 / (1+1/temp2) dl.dscale <- (aa/scale) * (-parg + (parg+qq) / (1+1/temp2)) da.deta <- dtheta.deta(aa, .lshape1.a, earg = .eshape1.a) dscale.deta <- dtheta.deta(scale, .lscale , earg = .escale ) c(w) * cbind( dl.da * da.deta, dl.dscale * 
dscale.deta ) }), list( .lshape1.a = lshape1.a, .lscale = lscale, .eshape1.a = eshape1.a, .escale = escale))), weight = eval(substitute(expression({ ned2l.da <- (1 + parg + qq + parg * qq * (trigamma(parg) + trigamma(qq) + (temp3b - temp3a + (parg-qq)/(parg*qq))^2 - (parg^2 + qq^2) / (parg*qq)^2)) / (aa^2 * (1 + parg + qq)) ned2l.dscale <- aa^2 * parg * qq / (scale^2 * (1 + parg + qq)) ned2l.dascale <- (parg - qq - parg * qq * (temp3a -temp3b)) / (scale * (1 + parg + qq)) wz <- matrix(as.numeric(NA), n, dimm(M)) #M==3 means 6=dimm(M) wz[, iam(1, 1, M)] <- ned2l.da * da.deta^2 wz[, iam(2, 2, M)] <- ned2l.dscale * dscale.deta^2 wz[, iam(1, 2, M)] <- ned2l.dascale * da.deta * dscale.deta wz <- c(w) * wz wz }), list( .lshape1.a = lshape1.a, .lscale = lscale, .eshape1.a = eshape1.a, .escale = escale)))) } if (FALSE) genlognormal <- function(link.sigma = "loge", link.r = "loge", init.sigma = 1, init.r = 1, zero = NULL) { warning("2/4/04; doesn't work, possibly because first derivs are ", "not continuous (sign() is used). Certainly, the derivs wrt ", "mymu are problematic (run with maxit=4:9 and look at weight ", "matrices). Possibly fundamentally cannot be estimated by IRLS. ", "Pooling doesn't seem to help") if (length(zero) && !is.Numeric(zero, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'zero'") link.sigma <- as.list(substitute(link.sigma)) esigma <- link2list(link.sigma) link.sigma <- attr(esigma, "function.name") link.r <- as.list(substitute(link.r)) er <- link2list(link.r) link.r <- attr(er, "function.name") new("vglmff", blurb = c("Three-parameter generalized lognormal distribution\n\n", "Links: ", "loc; ", namesof("sigma", link.sigma, earg = esigma, tag = TRUE), ", ", namesof("r", link.r, earg = er, tag = TRUE)), constraints = eval(substitute(expression({ constraints <- cm.zero.vgam(constraints, x, .zero , M) }), list( .zero = zero ))), initialize = eval(substitute(expression({ if (ncol(cbind(y)) != 1) stop("response must be a vector or a one-column matrix") predictors.names <- c(namesof("loc", "identity", earg = list(), tag = FALSE), namesof("sigma", .link.sigma, earg = .esigma, tag = FALSE), namesof("r", .link.r, earg = .er, tag = FALSE)) if (!length( .init.sigma) || !length( .init.r)) { init.r <- if (length( .init.r)) .init.r else 1 sigma.init <- (0.5 * sum(abs(log(y) - mean(log(y )))^init.r))^(1/init.r) } if (any(y <= 0)) stop("y must be positive") if (!length(etastart)) { sigma.init <- rep(if (length( .init.sigma)) .init.sigma else sigma.init, length.out = n) r.init <- if (length( .init.r)) .init.r else init.r etastart <- cbind(mu = rep(log(median(y)), length.out = n), sigma = sigma.init, r = r.init) } }), list( .link.sigma = link.sigma, .link.r = link.r, .init.sigma = init.sigma, .init.r = init.r ))), linkinv = eval(substitute(function(eta, extra = NULL) { mymu <- eta2theta(eta[, 1], "identity", earg = list()) sigma <- eta2theta(eta[, 2], .link.sigma, earg = .esigma) r <- eta2theta(eta[, 3], .link.r, earg = .er) r }, list( .link.sigma = link.sigma, .link.r = link.r ))), last = eval(substitute(expression({ misc$link = c(loc = "identity", "sigma" = .link.sigma, r = .link.r ) misc$expected = TRUE }), list( .link.sigma = link.sigma, .link.r = link.r ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { mymu <- eta2theta(eta[, 1], "identity", earg = list()) sigma <- eta2theta(eta[, 2], .link.sigma, earg = .esigma) r <- eta2theta(eta[, 3], .link.r, earg = .er) temp89 <- (abs(log(y)-mymu)/sigma)^r if (residuals) 
stop("loglikelihood residuals ", "not implemented yet") else sum(c(w) * (-log(r^(1/r) * sigma) - lgamma(1+1/r) - temp89/r)) }, list( .link.sigma = link.sigma, .link.r = link.r ))), vfamily = c("genlognormal3"), deriv = eval(substitute(expression({ mymu <- eta2theta(eta[, 1], "identity", earg = list()) sigma <- eta2theta(eta[, 2], .link.sigma, earg = .esigma) r <- eta2theta(eta[, 3], .link.r, earg = .er) ss <- 1 + 1/r temp33 <- (abs(log(y)-mymu)/sigma) temp33r1 <- temp33^(r-1) dl.dmymu <- temp33r1 * sign(log(y)-mymu) / sigma dl.dsigma <- (temp33*temp33r1 - 1) / sigma dl.dr <- (log(r) - 1 + digamma(ss) + temp33*temp33r1)/r^2 - temp33r1 * log(temp33r1) / r dmymu.deta <- dtheta.deta(mymu, "identity", earg = list()) dsigma.deta <- dtheta.deta(sigma, .link.sigma, earg = .esigma) dr.deta <- dtheta.deta(r, .link.r, earg = .er) c(w) * cbind(dl.dmymu * dmymu.deta, dl.dsigma * dsigma.deta, dl.dr * dr.deta) }), list( .link.sigma = link.sigma, .link.r = link.r ))), weight = expression({ wz <- matrix(0, n, 6) # 5 will have small savings of 1 column B <- log(r) + digamma(ss) ned2l.dmymu2 <- (r-1) * gamma(1-1/r) / (sigma^2 * r^(2/r) * gamma(ss)) ned2l.dsigma2 <- r / sigma^2 ned2l.dr2 <- (ss * trigamma(ss) + B^2 - 1) / r^3 ned2l.dsigmar <- -B / (r * sigma) wz[, iam(1, 1, M)] <- ned2l.dmymu2 * dmymu.deta^2 wz[, iam(2, 2, M)] <- ned2l.dsigma2 * dsigma.deta^2 wz[, iam(3, 3, M)] <- ned2l.dr2 * dr.deta^2 wz[, iam(2, 3, M)] <- ned2l.dsigmar * dsigma.deta * dr.deta wz = c(w) * wz wz })) } VGAM/R/effects.vglm.q0000644000176000001440000000103312136651110013773 0ustar ripleyusers# These functions are # Copyright (C) 1998-2013 T.W. Yee, University of Auckland. # All rights reserved. effects.vlm <- function(object, ...) { cat("Sorry, this function has not been written yet. Returning a NULL.\n") invisible(NULL) } if(!isGeneric("effects")) setGeneric("effects", function(object, ...) standardGeneric("effects")) if(is.R()) { setMethod("effects", "vlm", function(object, ...) effects.vlm(object, ...)) } else { setMethod("effects", "vlm", function(object, ...) effects.vlm(object, ...)) } VGAM/R/deviance.vlm.q0000644000176000001440000000574112136651110013775 0ustar ripleyusers# These functions are # Copyright (C) 1998-2013 T.W. Yee, University of Auckland. # All rights reserved. deviance.vlm <- function(object, ...) object@criterion$deviance deviance.vglm <- function(object, ...) object@criterion$deviance if(!isGeneric("deviance")) setGeneric("deviance", function(object, ...) standardGeneric("deviance")) setMethod("deviance", "vlm", function(object, ...) deviance.vlm(object, ...)) setMethod("deviance", "vglm", function(object, ...) deviance.vglm(object, ...)) df.residual_vlm <- function(object, type = c("vlm", "lm"), ...) { type <- type[1] switch(type, vlm = object@df.residual, lm = nobs(object, type = "lm") - nvar_vlm(object, type = "lm"), stop("argument 'type' unmatched")) } setMethod("df.residual", "vlm", function(object, ...) df.residual_vlm(object, ...)) nvar_vlm <- function(object, ...) 
{ M <- npred(object) allH <- matrix(unlist(constraints(object, type = "lm")), nrow = M) checkNonZero <- function(m) sum(as.logical(m)) numPars <- apply(allH, 1, checkNonZero) if (length(object@misc$predictors.names) == M) names(numPars) <- object@misc$predictors.names NumPars <- rep(0, length = M) for (jay in 1:M) { X_lm_jay <- model.matrix(object, type = "lm", lapred.index = jay) NumPars[jay] <- ncol(X_lm_jay) } if (length(object@misc$predictors.names) == M) names(NumPars) <- object@misc$predictors.names if (!all(NumPars == numPars)) { print(NumPars - numPars) # Should be all 0s stop("something wrong in nvar_vlm()") } numPars } if (FALSE) { set.seed(123) zapdat = data.frame(x2 = runif(nn <- 2000)) zapdat = transform(zapdat, p0 = logit(-0.5 + 1*x2, inverse = TRUE), lambda = loge( 0.5 + 2*x2, inverse = TRUE), f1 = gl(4, 50, labels = LETTERS[1:4]), x3 = runif(nn)) zapdat = transform(zapdat, y = rzapois(nn, lambda, p0)) with(zapdat, table(y)) fit1 = vglm(y ~ x2, zapoisson, zapdat, trace = TRUE) fit1 = vglm(y ~ bs(x2), zapoisson, zapdat, trace = TRUE) coef(fit1, matrix = TRUE) # These should agree with the above values fit2 = vglm(y ~ bs(x2) + x3, zapoisson(zero = 2), zapdat, trace = TRUE) coef(fit2, matrix = TRUE) clist = list("(Intercept)" = diag(2), "x2" = rbind(0,1), "x3" = rbind(1,0)) fit3 = vglm(y ~ x2 + x3, zapoisson(zero = NULL), zapdat, constraints = clist, trace = TRUE) coef(fit3, matrix = TRUE) constraints(fit2, type = "term") constraints(fit2, type = "lm") head(model.matrix(fit2, type = "term")) head(model.matrix(fit2, type = "lm")) allH = matrix(unlist(constraints(fit1)), nrow = fit1@misc$M) allH = matrix(unlist(constraints(fit2)), nrow = fit2@misc$M) allH = matrix(unlist(constraints(fit3)), nrow = fit3@misc$M) checkNonZero = function(m) sum(as.logical(m)) (numPars = apply(allH, 1, checkNonZero)) nvar_vlm(fit1) nvar_vlm(fit2) nvar_vlm(fit3) } VGAM/R/cqo.fit.q0000644000176000001440000007570112136651110012770 0ustar ripleyusers# These functions are # Copyright (C) 1998-2013 T.W. Yee, University of Auckland. # All rights reserved. 
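# callcqoc() evaluates the deviance of a constrained quadratic
# ordination (CQO) fit for a given constrained-coefficient matrix C,
# passed vectorized as 'cmatrix'; calldcqo() further below returns
# the corresponding derivatives.  Conceptually the pair serve as
# objective and gradient for the optimizer searching over C, along
# the lines of (a sketch only -- the real call sites in cqo.fit()
# prepare many more arguments than shown here):
#
#   opt <- optim(par = c(Cinit), fn = callcqoc, gr = calldcqo,
#                method = "BFGS",
#                etamat = etamat, xmat = x, ymat = y, wvec = w,
#                X_vlm_1save = X_vlm_1save, modelno = modelno,
#                Control = control, n = n, M = M, p1star = p1star,
#                p2star = p2star, nice31 = nice31)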
callcqoc <- function(cmatrix, etamat, xmat, ymat, wvec, X_vlm_1save, modelno, Control, n, M, p1star, p2star, nice31, allofit = FALSE) { ocmatrix <- cmatrix control <- Control Rank <- control$Rank p1 <- length(control$colx1.index); p2 <- length(control$colx2.index) dim(cmatrix) <- c(p2, Rank) # for crow1C pstar <- p1star + p2star maxMr <- max(M, Rank) nstar <- if (nice31) ifelse(modelno == 3 || modelno == 5,n*2,n) else n*M NOS <- ifelse(modelno == 3 || modelno==5, M/2, M) lenbeta <- pstar * ifelse(nice31, NOS, 1) if (itol <- control$ITolerances) { if (Rank > 1) { numat <- xmat[,control$colx2.index,drop = FALSE] %*% cmatrix evnu <- eigen(var(numat)) cmatrix <- cmatrix %*% evnu$vector } cmatrix <- crow1C(cmatrix, control$Crow1positive) numat <- xmat[, control$colx2.index, drop = FALSE] %*% cmatrix sdnumat <- apply(numat, 2, sd) for(lookat in 1:Rank) if (sdnumat[lookat] > control$MUXfactor[lookat] * control$isdlv[lookat]) { muxer <- control$isdlv[lookat] * control$MUXfactor[lookat] / sdnumat[lookat] numat[,lookat] <- numat[,lookat] * muxer cmatrix[,lookat] <- cmatrix[,lookat] * muxer if (control$trace) { cat(paste("Taking evasive action for latent variable ", lookat, ".\n", sep = "")) flush.console() } rmfromVGAMenv(c("etamat", "z", "U", "beta", "deviance", "cmatrix", "ocmatrix"), prefix = ".VGAM.CQO.") } } else { numat <- xmat[, control$colx2.index, drop = FALSE] %*% cmatrix evnu <- eigen(var(numat)) temp7 <- if (Rank > 1) evnu$vector %*% diag(evnu$value^(-0.5)) else evnu$vector %*% evnu$value^(-0.5) cmatrix <- cmatrix %*% temp7 cmatrix <- crow1C(cmatrix, control$Crow1positive) numat <- xmat[, control$colx2.index, drop = FALSE] %*% cmatrix } inited <- if (is.R()) { if (exists(".VGAM.CQO.etamat", envir = VGAM:::VGAMenv)) 1 else 0 } else 0 usethiseta <- if (inited == 1) getfromVGAMenv("etamat", prefix = ".VGAM.CQO.") else t(etamat) usethisbeta <- if (inited == 2) getfromVGAMenv("beta", prefix = ".VGAM.CQO.") else double(lenbeta) othint <- c(Rank = Rank, control$EqualTol, pstar = pstar, dimw = 1, inited = inited, modelno = modelno, maxitl = control$maxitl, actnits = 0, twice = 0, p1star = p1star, p2star = p2star, nice31 = nice31, lenbeta = lenbeta, itol = itol, control$trace, p1 = p1, p2 = p2, control$imethod) bnumat <- if (nice31) matrix(0,nstar,pstar) else cbind(matrix(0,nstar,p2star), X_vlm_1save) ans1 <- dotC(name = if (nice31) "cqo_1" else "cqo_2", numat = as.double(numat), as.double(ymat), as.double(if (p1) xmat[,control$colx1.index] else 999), as.double(wvec), etamat = as.double(usethiseta), moff = double(if (itol) n else 1), fv = double(NOS*n), z = double(n*M), wz = double(n*M), U = double(M*n), bnumat = as.double(bnumat), qr = double(nstar*pstar), qraux = double(pstar), qpivot = integer(pstar), as.integer(n), as.integer(M), NOS = as.integer(NOS), as.integer(nstar), dim1U = as.integer(M), errcode = integer(1 + NOS), othint = as.integer(othint), deviance = double(1+NOS), beta = as.double(usethisbeta), othdbl = as.double(c(small = control$SmallNo, epsilon = control$epsilon, .Machine$double.eps, iKvector = rep(control$iKvector, len = NOS), iShape = rep(control$iShape, len = NOS)))) if (ans1$errcode[1] == 0) { assign2VGAMenv(c("etamat", "z", "U", "beta", "deviance"), ans1, prefix = ".VGAM.CQO.") if (is.R()) { assign(".VGAM.CQO.cmatrix", cmatrix, envir = VGAM:::VGAMenv) assign(".VGAM.CQO.ocmatrix", ocmatrix, envir = VGAM:::VGAMenv) } else { } } else { print("hi 88 20100402; all the species did not converge in callcqo") warning("error code in callcqoc = ", ans1$errcode[1]) if (nice31) { 
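    # errcode is integer(1 + NOS): element 1 is the overall code and
    # the remainder hold one convergence code per response (species).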
print("ans1$errcode[-1]") # Only if (nice31) print( ans1$errcode[-1] ) } rmfromVGAMenv(c("etamat", "z", "U", "beta", "deviance", "cmatrix", "ocmatrix"), prefix = ".VGAM.CQO.") } if (control$trace) flush.console() if (allofit) list(deviance = ans1$deviance[1], alldeviance = ans1$deviance[-1], coefficients = ans1$beta) else ans1$deviance[1] } calldcqo <- function(cmatrix, etamat, xmat, ymat, wvec, X_vlm_1save, modelno, Control, n, M, p1star, p2star, nice31, allofit = FALSE) { control <- Control Rank <- control$Rank p1 <- length(control$colx1.index); p2 <- length(control$colx2.index) dim(cmatrix) <- c(p2, Rank) # for crow1C xmat2 <- xmat[, control$colx2.index, drop = FALSE] #ccc numat <- double(n*Rank) #ccc pstar <- p1star + p2star maxMr <- max(M, Rank) nstar <- if (nice31) ifelse(modelno == 3 || modelno == 5,n*2,n) else n*M NOS <- ifelse(modelno == 3 || modelno == 5, M/2, M) lenbeta <- pstar * ifelse(nice31, NOS, 1) if (itol <- control$ITolerances) { if (Rank > 1) { numat <- xmat[, control$colx2.index, drop=FALSE] %*% cmatrix evnu <- eigen(var(numat)) cmatrix <- cmatrix %*% evnu$vector } cmatrix <- crow1C(cmatrix, control$Crow1positive) numat <- xmat[,control$colx2.index,drop=FALSE] %*% cmatrix sdnumat <- apply(numat, 2, sd) for(lookat in 1:Rank) if (sdnumat[lookat] > control$MUXfactor[lookat] * control$isdlv[lookat]) { muxer <- control$isdlv[lookat] * control$MUXfactor[lookat] / sdnumat[lookat] cmatrix[,lookat] <- cmatrix[,lookat] * muxer if (control$trace) { cat(paste("Taking evasive action for latent variable ", lookat, ".\n", sep="")) flush.console() } rmfromVGAMenv(c("etamat", "z", "U", "beta", "deviance", "cmatrix", "ocmatrix"), prefix = ".VGAM.CQO.") } } else { numat <- xmat[,control$colx2.index,drop=FALSE] %*% cmatrix evnu <- eigen(var(numat)) temp7 <- if (Rank > 1) evnu$vector %*% diag(evnu$value^(-0.5)) else evnu$vector %*% evnu$value^(-0.5) cmatrix <- cmatrix %*% temp7 cmatrix <- crow1C(cmatrix, control$Crow1positive) numat <- xmat[, control$colx2.index, drop = FALSE] %*% cmatrix } inited <- if (is.R()) { if (exists(".VGAM.CQO.etamat", envir = VGAM:::VGAMenv)) 1 else 0 } else 0 usethiseta <- if (inited == 1) getfromVGAMenv("etamat", prefix = ".VGAM.CQO.") else t(etamat) usethisbeta <- if (inited == 2) getfromVGAMenv("beta", prefix = ".VGAM.CQO.") else double(lenbeta) othint <- c(Rank, control$EqualTol, pstar, dimw = 1, inited = inited, modelno, maxitl = control$maxitl, actnits = 0, twice = 0, p1star = p1star, p2star = p2star, nice31 = nice31, lenbeta, itol = itol, control$trace, p1, p2, control$imethod) # other ints bnumat <- if (nice31) matrix(0,nstar,pstar) else cbind(matrix(0,nstar,p2star), X_vlm_1save) flush.console() ans1 <- dotC(name = "dcqo1", numat = as.double(numat), as.double(ymat), as.double(if (p1) xmat[,control$colx1.index] else 999), as.double(wvec), etamat = as.double(usethiseta), moff = double(if (itol) n else 1), fv = double(NOS*n), z = double(n*M), wz = double(n*M), U = double(M*n), bnumat = as.double(bnumat), qr = double(nstar * pstar), qraux = double(pstar), qpivot = integer(pstar), as.integer(n), as.integer(M), NOS = as.integer(NOS), as.integer(nstar), dim1U = as.integer(M), errcode = integer(1 + NOS), othint = as.integer(othint), deviance = double(1 + NOS), beta = as.double(usethisbeta), othdbl = as.double(c(small = control$SmallNo, epsilon = control$epsilon, .Machine$double.eps, iKvector = rep(control$iKvector, len = NOS), iShape = rep(control$iShape, len = NOS))), xmat2 = as.double(xmat2), cmat = as.double(cmatrix), p2 = as.integer(p2), deriv = 
double(p2*Rank), hstep = as.double(control$Hstep)) if (ans1$errcode[1] != 0) { warning("error code in calldcqo = ", ans1$errcode[1]) print("hi 88 20100402; all the species did not converge in calldcqo") print("ans1$errcode[]") print( ans1$errcode[] ) } flush.console() ans1$deriv } checkCMCO <- function(Blist, control, modelno) { p1 <- length(colx1.index <- control$colx1.index) p2 <- length(colx2.index <- control$colx2.index) if (p1 + p2 != length(Blist)) stop("'Blist' is the wrong length") if (p1 == 0 || p2 == 0) stop("Some variables are needed in noRRR and non-noRRR arguments") if (all(names(colx1.index) != "(Intercept)")) stop("an intercept term must be in the argument 'noRRR' formula") Blist1 <- vector("list", p1) Blist2 <- vector("list", p2) for(kk in 1:p1) Blist1[[kk]] <- Blist[[(colx1.index[kk])]] for(kk in 1:p2) Blist2[[kk]] <- Blist[[(colx2.index[kk])]] if (modelno == 3 || modelno == 5) { if (p1 > 1) for(kk in 2:p1) Blist1[[kk]] <- (Blist1[[kk]])[c(TRUE,FALSE),,drop = FALSE] for(kk in 1:p2) Blist2[[kk]] <- (Blist2[[kk]])[c(TRUE,FALSE),,drop = FALSE] } if (!all(trivial.constraints(Blist2) == 1)) stop("the constraint matrices for the non-noRRR terms ", "are not trivial") if (!trivial.constraints(Blist1[[1]])) stop("the constraint matrices for intercept term is ", "not trivial") if (p1 > 1) for(kk in 2:p1) if (!trivial.constraints(list(Blist1[[kk]]))) stop("the constraint matrices for some 'noRRR' ", "terms is not trivial") nice31 <- if (control$Quadratic) (!control$EqualTol || control$ITolerances) else TRUE as.numeric(nice31) } cqo.fit <- function(x, y, w = rep(1, length(x[, 1])), etastart = NULL, mustart = NULL, coefstart = NULL, offset = 0, family, control = qrrvglm.control(...), constraints = NULL, extra = NULL, Terms = Terms, function.name = "cqo", ...) 
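# callcqoc() returns the deviance for a given c(Cmat) and calldcqo() its
# derivative, so the pair plugs straight into optim() as the objective
# function and gradient; see cqo.derivative.expression below.  A sketch of
# that interface (argument values as supplied by cqo.fit()):
#   optim(par = c(Cmat), fn = callcqoc, gr = calldcqo, method = "BFGS",
#         etamat = eta, xmat = x, ymat = y, wvec = w,
#         X_vlm_1save = X_vlm_1save, modelno = modelno, Control = control,
#         n = n, M = M, p1star = p1star, p2star = p2star, nice31 = nice31)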
{ modelno <- quasi.newton <- NOS <- z <- fv <- NULL if (!all(offset == 0)) stop("cqo.fit() cannot handle offsets") specialCM <- NULL post <- list() nonparametric <- FALSE epsilon <- control$epsilon maxitl <- control$maxitl save.weight <- control$save.weight trace <- control$trace orig.stepsize <- control$stepsize n <- dim(x)[1] intercept.only <- ncol(x) == 1 && dimnames(x)[[2]] == "(Intercept)" y.names <- predictors.names <- NULL # May be overwritten in @initialize n.save <- n Rank <- control$Rank rrcontrol <- control # if (length(family@initialize)) eval(family@initialize) # Initialize mu and M (and optionally w) n <- n.save eval(rrr.init.expression) if (length(etastart)) { eta <- etastart mu <- if (length(mustart)) mustart else if (length(body(slot(family, "linkinv")))) slot(family, "linkinv")(eta, extra) else warning("argument 'etastart' assigned a value ", "but there is no 'linkinv' slot to use it") } if (length(mustart)) { mu <- mustart if (length(body(slot(family, "linkfun")))) { eta <- slot(family, "linkfun")(mu, extra) } else { warning("argument 'mustart' assigned a value ", "but there is no 'link' slot to use it") } } M <- if (is.matrix(eta)) ncol(eta) else 1 if (is.character(rrcontrol$Dzero)) { index <- match(rrcontrol$Dzero, dimnames(as.matrix(y))[[2]]) if (any(is.na(index))) stop("Dzero argument didn't fully match y-names") if (length(index) == M) stop("all linear predictors are linear in the", " latent variable(s); so set 'Quadratic=FALSE'") rrcontrol$Dzero <- control$Dzero <- index } if (length(family@constraints)) eval(family@constraints) special.matrix <- matrix(-34956.125, M, M) # An unlikely used matrix just.testing <- cm.vgam(special.matrix, x, rrcontrol$noRRR, constraints) findex <- trivial.constraints(just.testing, special.matrix) tc1 <- trivial.constraints(constraints) if (!control$Quadratic && sum(!tc1)) { for(ii in names(tc1)) if (!tc1[ii] && !any(ii == names(findex)[findex == 1])) warning("'", ii, "' is a non-trivial constraint that will ", "be overwritten by reduced-rank regression") } if (all(findex == 1)) stop("use vglm(), not rrvglm()!") colx1.index <- names.colx1.index <- NULL dx2 <- dimnames(x)[[2]] if (sum(findex)) { asx <- attr(x, "assign") for(ii in names(findex)) if (findex[ii]) { names.colx1.index <- c(names.colx1.index, dx2[asx[[ii]]]) colx1.index <- c(colx1.index, asx[[ii]]) } names(colx1.index) <- names.colx1.index } rrcontrol$colx1.index <- control$colx1.index <- colx1.index colx2.index <- 1:ncol(x) names(colx2.index) <- dx2 colx2.index <- colx2.index[-colx1.index] p1 <- length(colx1.index); p2 <- length(colx2.index) rrcontrol$colx2.index <- control$colx2.index <- colx2.index Amat <- if (length(rrcontrol$Ainit)) rrcontrol$Ainit else matrix(rnorm(M * Rank, sd = rrcontrol$SD.Cinit), M, Rank) Cmat <- if (length(rrcontrol$Cinit)) { matrix(rrcontrol$Cinit, p2, Rank) } else { if (!rrcontrol$Use.Init.Poisson.QO) { matrix(rnorm(p2 * Rank, sd = rrcontrol$SD.Cinit), p2, Rank) } else .Init.Poisson.QO(ymat = as.matrix(y), X1 = x[, colx1.index, drop = FALSE], X2 = x[, colx2.index, drop = FALSE], Rank = rrcontrol$Rank, trace = rrcontrol$trace, max.ncol.etamat = rrcontrol$Etamat.colmax, Crow1positive = rrcontrol$Crow1positive, isdlv = rrcontrol$isdlv, constwt = any(family@vfamily[1] == c("negbinomial","gamma2","gaussianff")), takelog = any(family@vfamily[1] != c("gaussianff"))) } if (rrcontrol$ITolerances) { lvmat <- x[, rrcontrol$colx2.index, drop = FALSE] %*% Cmat lvmatmeans <- t(lvmat) %*% matrix(1/n, n, 1) if (!all(abs(lvmatmeans) < 4)) 
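# With ITolerances = TRUE the fast algorithm assumes the 'colx2.index'
# variables are (roughly) centred, so the latent variable means should be
# close to zero; |mean| >= 4 is the heuristic cutoff used here.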
warning("ITolerances=TRUE but the variables making up the ", "latent variable(s) do not appear to be centered.") } if (modelno == 3 || modelno == 5) Amat[c(FALSE,TRUE),] <- 0 # Intercept only for log(k) if (length(control$szero)) Amat[control$szero,] <- 0 rrcontrol$Ainit <- control$Ainit <- Amat # Good for valt() rrcontrol$Cinit <- control$Cinit <- Cmat # Good for valt() Blist <- process.constraints(constraints, x, M, specialCM = specialCM) nice31 <- checkCMCO(Blist, control = control, modelno = modelno) ncolBlist <- unlist(lapply(Blist, ncol)) dimB <- sum(ncolBlist) X_vlm_save <- if (nice31) { NULL } else { tmp500 <- lm2qrrvlm.model.matrix(x = x, Blist = Blist, C = Cmat, control = control) xsmall.qrr <- tmp500$new.lv.model.matrix B.list <- tmp500$constraints lv.mat <- tmp500$lv.mat if (length(tmp500$offset)) { offset <- tmp500$offset } lm2vlm.model.matrix(xsmall.qrr, B.list, xij = control$xij) } if (length(coefstart) && length(X_vlm_save)) { eta <- if (ncol(X_vlm_save) > 1) X_vlm_save %*% coefstart + offset else X_vlm_save * coefstart + offset eta <- if (M > 1) matrix(eta, ncol = M, byrow = TRUE) else c(eta) mu <- family@linkinv(eta, extra) } rmfromVGAMenv(c("etamat", "z", "U", "beta", "deviance", "cmatrix", "ocmatrix"), prefix = ".VGAM.CQO.") eval(cqo.init.derivative.expression) for(iter in 1:control$optim.maxit) { eval(cqo.derivative.expression) if (!quasi.newton$convergence) break } if (maxitl > 1 && iter >= maxitl && quasi.newton$convergence) warning("convergence not obtained in", maxitl, "iterations.") if (length(family@fini)) eval(family@fini) asgn <- attr(x, "assign") coefs <- getfromVGAMenv("beta", prefix = ".VGAM.CQO.") if (control$ITolerances) { if (NOS == M) { coefs <- c(t(matrix(coefs, ncol = M))) # Get into right order } else { coefs <- coefs } } dn <- labels(x) yn <- dn[[1]] xn <- dn[[2]] residuals <- z - fv if (M == 1) { residuals <- as.vector(residuals) names(residuals) <- yn } else { dimnames(residuals) <- list(yn, predictors.names) } if (is.matrix(mu)) { if (length(dimnames(y)[[2]])) { y.names <- dimnames(y)[[2]] } if (length(dimnames(mu)[[2]])) { y.names <- dimnames(mu)[[2]] } dimnames(mu) <- list(yn, y.names) } else { names(mu) <- names(fv) y.names <- NULL } df.residual <- 55 - 8 - Rank*p2 fit <- list(assign = asgn, coefficients = coefs, constraints = Blist, df.residual = df.residual, df.total = n*M, fitted.values = mu, offset = offset, residuals = residuals, terms = Terms) # terms: This used to be done in vglm() if (M == 1) { wz <- as.vector(wz) # Convert wz into a vector } fit$weights <- if (save.weight) wz else NULL misc <- list( colnames.x = xn, criterion = "deviance", function.name = function.name, intercept.only=intercept.only, predictors.names = predictors.names, M = M, n = n, nonparametric = nonparametric, orig.assign = attr(x, "assign"), p = ncol(x), ynames = dimnames(y)[[2]]) if (w[1] != 1 || any(w != w[1])) fit$prior.weights <- w if (length(family@last)) eval(family@last) edeviance <- getfromVGAMenv("deviance", prefix = ".VGAM.CQO.") crit.list <- list(deviance = edeviance[1], alldeviance = edeviance[-1]) if (is.character(y.names) && length(y.names) == length(crit.list$alldeviance)) names(crit.list$alldeviance) = y.names structure(c(fit, list(predictors = matrix(eta, n, M), contrasts = attr(x, "contrasts"), control = control, crit.list = crit.list, extra = extra, family = family, iter = iter, misc = misc, post = post, rss = 000, x = x, y = y)), vclass = family@vfamily) } .Init.Poisson.QO <- function(ymat, X1, X2, Rank = 1, epsilon = 1/32, max.ncol.etamat = 
10, trace = FALSE, Crow1positive = rep(TRUE, len = Rank), isdlv = rep(1, lengt = Rank), constwt = FALSE, takelog = TRUE) { print.CQO.expression = expression({ if (trace && length(X2)) { cat("\nUsing initial values\n") dimnames(ans) <- list(dimnames(X2)[[2]], if (Rank == 1) "lv" else paste("lv", 1:Rank, sep = "")) if (p2 > 5) print(ans, dig = 3) else print(t(ans), dig = 3) } flush.console() }) sd.scale.X2.expression <- expression({ if (length(isdlv)) { actualSD <- c( sqrt(diag(var(X2 %*% ans))) ) for(ii in 1:Rank) ans[,ii] <- ans[,ii] * isdlv[ii] / actualSD[ii] } }) Crow1positive <- if (length(Crow1positive)) rep(Crow1positive, len = Rank) else rep(TRUE, len = Rank) if (epsilon <= 0) stop("epsilon > 0 is required") ymat <- cbind(ymat) + epsilon # ymat == 0 cause problems NOS <- ncol(ymat) p2 <- ncol(X2) if (NOS < 2*Rank) { ans <- crow1C(matrix(rnorm(p2 * Rank, sd = 0.02), p2, Rank), Crow1positive) eval(sd.scale.X2.expression) if (NOS == 1) { eval(print.CQO.expression) return(ans) } else { ans.save <- ans; # ans.save contains scaled guesses } } calS <- 1:NOS # Set of all species available for the approximation effrank <- min(Rank, floor(NOS/2)) # effective rank ncol.etamat <- min(if (length(X2)) floor(NOS/2) else effrank, max.ncol.etamat) etamat <- wts <- matrix(0, nrow = nrow(ymat), ncol = ncol.etamat) # has >=1 coln rr <- 1 for(ii in 1:floor(NOS/2)) { if (length(calS) < 2) break index <- sample(calS, size = 2) # Randomness here etamat[,rr] <- etamat[,rr] + (if (takelog) log(ymat[,index[1]] / ymat[,index[2]]) else ymat[,index[1]] - ymat[,index[2]]) wts[,rr] <- wts[,rr] + (if (constwt) 1 else ymat[,index[1]] + ymat[,index[2]]) calS <- setdiff(calS, index) rr <- (rr %% ncol.etamat) + 1 } if (trace) cat("\nObtaining initial values\n") if (length(X2)) { alt <- valt(x = cbind(X1, X2), z = etamat, U = sqrt(t(wts)), Rank = effrank, Blist = NULL, Cinit = NULL, trace = FALSE, colx1.index = 1:ncol(X1), Criterion = "rss") temp.control <- list(Rank = effrank, colx1.index = 1:ncol(X1), Alpha = 0.5, colx2.index = (ncol(X1)+1):(ncol(X1) + ncol(X2)), Corner = FALSE, Svd.arg = TRUE, Uncorrelated.lv = TRUE, Quadratic = FALSE) ans2 <- if (Rank > 1) rrr.normalize(rrcontrol = temp.control, A = alt$A, C = alt$C, x = cbind(X1, X2)) else alt ans <- crow1C(ans2$C, rep(Crow1positive, length.out = effrank)) Rank.save <- Rank Rank <- effrank eval(sd.scale.X2.expression) Rank <- Rank.save if (effrank < Rank) { ans <- cbind(ans, ans.save[,-(1:effrank)]) # ans is better } eval(print.CQO.expression) } else { xij <- NULL # temporary measure U <- t(sqrt(wts)) tmp <- vlm.wfit(xmat = X1, zmat = etamat, Blist = NULL, U = U, matrix.out = TRUE, is.vlmX = FALSE, rss = TRUE, qr = FALSE, xij = xij) ans <- crow1C(as.matrix(tmp$resid), rep(Crow1positive, length.out = effrank)) if (effrank < Rank) { ans <- cbind(ans, ans.save[,-(1:effrank)]) # ans is better } if (Rank > 1) { evnu <- eigen(var(ans)) ans <- ans %*% evnu$vector } if (length(isdlv)) { actualSD <- apply(cbind(ans), 2, sd) for(ii in 1:Rank) ans[,ii] <- ans[,ii] * isdlv[ii] / actualSD[ii] } ans <- crow1C(ans, rep(Crow1positive, length.out = Rank)) dimnames(ans) <- list(dimnames(X1)[[1]], if (Rank == 1) "lv" else paste("lv", 1:Rank, sep = "")) if (trace) { if (nrow(ans) > 10) print(t(ans), dig = 3) else print(ans, dig = 3) } } ans } cqo.init.derivative.expression <- expression({ which.optimizer <- if (is.R()) { if (control$Quadratic && control$FastAlgorithm) { "BFGS" } else { if (iter <= rrcontrol$Switch.optimizer) "Nelder-Mead" else "BFGS" } } else "Quasi-Newton" if (trace 
&& control$OptimizeWrtC) { cat("\nUsing", which.optimizer, "algorithm\n") flush.console() } if (FALSE) { constraints <- replace.constraints(constraints, diag(M), rrcontrol$colx2.index) nice31 <- (!control$EqualTol || control$ITolerances) && all(trivial.constraints(constraints) == 1) } NOS <- ifelse(modelno == 3 || modelno == 5, M/2, M) canfitok <- if (is.R()) (exists("CQO.FastAlgorithm", envir=VGAM:::VGAMenv) && get("CQO.FastAlgorithm", envir = VGAM:::VGAMenv)) else (exists("CQO.FastAlgorithm",inherits=TRUE) && CQO.FastAlgorithm) if (!canfitok) stop("cannot fit this model using fast algorithm") p2star <- if (nice31) ifelse(control$IToleran, Rank, Rank+0.5*Rank*(Rank+1)) else (NOS*Rank + Rank*(Rank+1)/2 * ifelse(control$EqualTol,1,NOS)) p1star <- if (nice31) ifelse(modelno == 3 || modelno == 5,1+p1,p1) else (ncol(X_vlm_save)-p2star) X_vlm_1save <- if (p1star > 0) X_vlm_save[,-(1:p2star)] else NULL }) cqo.derivative.expression <- expression({ if (iter == 1 || quasi.newton$convergence) { quasi.newton <- optim(par = Cmat, fn = callcqoc, gr = if (control$GradientFunction) calldcqo else NULL, method = which.optimizer, control = list(fnscale = 1,trace = as.integer(control$trace), parscale = rep(control$Parscale, len = length(Cmat)), maxit = control$Maxit.optim), etamat = eta, xmat = x, ymat = y, wvec = w, X_vlm_1save = X_vlm_1save, modelno = modelno, Control = control, n = n, M = M, p1star = p1star, p2star = p2star, nice31 = nice31) z <- matrix(getfromVGAMenv("z", prefix = ".VGAM.CQO."), n, M) U <- matrix(getfromVGAMenv("U", prefix = ".VGAM.CQO."), M, n) } ocmatrix <- getfromVGAMenv("ocmatrix", prefix = ".VGAM.CQO.") maxdiff <- max(abs(c(ocmatrix) - c(quasi.newton$par)) / (1 + abs(c(ocmatrix)))) if (maxdiff < 1.0e-4) { Cmat <- getfromVGAMenv("cmatrix", prefix = ".VGAM.CQO.") } else { warning("solution does not correspond to .VGAM.CQO.cmatrix") } alt <- valt.1iter(x = x, z = z, U = U, Blist = Blist, C = Cmat, nice31 = nice31, control = rrcontrol, lp.names = predictors.names, MSratio = M/NOS) if (length(alt$offset)) offset <- alt$offset B1.save <- alt$B1 # Put later into extra tmp.fitted <- alt$fitted # contains \bI_{Rank} \bnu if Corner if (trace && control$OptimizeWrtC) { cat("\n") cat(which.optimizer, "using", if (is.R()) "optim():" else "nlminb():", "\n") cat("Objective =", quasi.newton$value, "\n") cat("Parameters (= c(C)) = ", if (length(quasi.newton$par) < 5) "" else "\n") cat( if (is.R()) alt$Cmat else format(alt$Cmat), fill=TRUE) cat("\n") if (!is.R()) { cat("Gradient norm =", format(quasi.newton$grad.norm), "\n") cat("Number of gradient evaluations =", quasi.newton$g.evals, "\n") } cat("Number of function evaluations =", if (is.R()) quasi.newton$count[1] else quasi.newton$f.evals, "\n") if (length(quasi.newton$message)) cat("Message =", quasi.newton$message, "\n") cat("\n") flush.console() } Amat <- alt$Amat # Cmat <- alt$Cmat # Dmat <- alt$Dmat # eval(cqo.end.expression) # }) cqo.end.expression <- expression({ rmfromVGAMenv(c("etamat"), prefix = ".VGAM.CQO.") if (control$Quadratic) { if (!length(extra)) extra =list() extra$Amat <- Amat # Not the latest iteration ?? 
extra$Cmat <- Cmat # Saves the latest iteration extra$Dmat <- Dmat # Not the latest iteration extra$B1 <- B1.save # Not the latest iteration (not good) } else { Blist <- replace.constraints(Blist.save, Amat, colx2.index) } fv <- tmp.fitted # Contains \bI \bnu eta <- fv + offset mu <- family@linkinv(eta, extra) if (any(is.na(mu))) warning("there are NAs in mu") deriv.mu <- eval(family@deriv) wz <- eval(family@weight) if (control$checkwz) wz <- checkwz(wz, M = M, trace = trace, wzeps = control$wzepsilon) U <- vchol(wz, M = M, n = n, silent = !trace) tvfor <- vforsub(U, as.matrix(deriv.mu), M = M, n = n) z <- eta + vbacksub(U, tvfor, M = M, n = n) - offset # Contains \bI \bnu }) crow1C <- function(cmat, crow1positive = rep(TRUE, length.out = ncol(cmat)), amat = NULL) { if (!is.logical(crow1positive) || length(crow1positive) != ncol(cmat)) stop("bad input in crow1C") for(LV in 1:ncol(cmat)) if (( crow1positive[LV] && cmat[1,LV] < 0) || (!crow1positive[LV] && cmat[1,LV] > 0)) { cmat[,LV] <- -cmat[,LV] if (length(amat)) amat[,LV] <- -amat[,LV] } if (length(amat)) list(cmat = cmat, amat = amat) else cmat } printqrrvglm <- function(x, ...) { if (!is.null(cl <- x@call)) { cat("Call:\n") dput(cl) } if (FALSE) { } if (FALSE) { nobs <- if (length(x@df.total)) x@df.total else length(x@residuals) rdf <- x@df.residual if (!length(rdf)) rdf <- nobs - Rank } cat("\n") if (length(deviance(x))) cat("Residual deviance:", format(deviance(x)), "\n") if (FALSE && length(x@criterion)) { ncrit <- names(x@criterion) for(ii in ncrit) if (ii != "loglikelihood" && ii != "deviance") cat(paste(ii, ":", sep=""), format(x@criterion[[ii]]), "\n") } invisible(x) } setMethod("Coef", "qrrvglm", function(object, ...) Coef.qrrvglm(object, ...)) setMethod("coef", "qrrvglm", function(object, ...) Coef.qrrvglm(object, ...)) setMethod("coefficients", "qrrvglm", function(object, ...) Coef.qrrvglm(object, ...)) if (!isGeneric("deviance")) setGeneric("deviance", function(object, ...) standardGeneric("deviance")) setMethod("deviance", "qrrvglm", function(object,...) object@criterion$deviance) setMethod("fitted", "qrrvglm", function(object, ...) fittedvlm(object)) setMethod("fitted.values", "qrrvglm", function(object, ...) fittedvlm(object)) setMethod("show", "qrrvglm", function(object) printqrrvglm(object)) VGAM/R/cqo.R0000644000176000001440000001267712136651110012153 0ustar ripleyusers# These functions are # Copyright (C) 1998-2013 T.W. Yee, University of Auckland. # All rights reserved. cqo <- function(formula, family, data = list(), weights = NULL, subset = NULL, na.action = na.fail, etastart = NULL, mustart = NULL, coefstart = NULL, control = qrrvglm.control(...), offset = NULL, method = "cqo.fit", model = FALSE, x.arg = TRUE, y.arg = TRUE, contrasts = NULL, constraints = NULL, extra = NULL, smart = TRUE, ...) { dataname <- as.character(substitute(data)) # "list" if no data = function.name <- "cqo" ocall <- match.call() if (smart) setup.smart("write") mt <- terms(formula, data = data) if (missing(data)) data <- environment(formula) mf <- match.call(expand.dots = FALSE) mf$family <- mf$method <- mf$model <- mf$x.arg <- mf$y.arg <- mf$control <- mf$contrasts <- mf$constraints <- mf$extra <- NULL mf$coefstart <- mf$etastart <- mf$... 
<- NULL mf$smart <- NULL mf$drop.unused.levels <- TRUE mf[[1]] <- as.name("model.frame") mf <- eval(mf, parent.frame()) if (method == "model.frame") return(mf) na.act <- attr(mf, "na.action") xvars <- as.character(attr(mt, "variables"))[-1] if ((yvar <- attr(mt, "response")) > 0) xvars <- xvars[-yvar] xlev <- if (length(xvars) > 0) { xlev <- lapply(mf[xvars], levels) xlev[!sapply(xlev, is.null)] } y <- model.response(mf, "numeric") # model.extract(mf, "response") x <- model.matrix(mt, mf, contrasts) attr(x, "assign") = attrassigndefault(x, mt) offset <- model.offset(mf) if (is.null(offset)) offset <- 0 # yyy ??? w <- model.weights(mf) if (!length(w)) w <- rep(1, nrow(mf)) else if (ncol(as.matrix(w)) == 1 && any(w < 0)) stop("negative weights not allowed") if (is.character(family)) family <- get(family) if (is.function(family)) family <- family() if (!inherits(family, "vglmff")) { stop("'family=", family, "' is not a VGAM family function") } control$criterion = "coefficients" # Specifically 4 vcontrol.expression eval(vcontrol.expression) if (!is.null(family@first)) eval(family@first) cqo.fitter <- get(method) deviance.Bestof = rep(as.numeric(NA), len=control$Bestof) for(tries in 1:control$Bestof) { if (control$trace && (control$Bestof>1)) cat(paste("\n========================= Fitting model", tries, "=========================\n")) onefit <- cqo.fitter(x=x, y=y, w=w, offset=offset, etastart=etastart, mustart=mustart, coefstart=coefstart, family=family, control=control, constraints=constraints, extra=extra, Terms=mt, function.name=function.name, ...) deviance.Bestof[tries] = if (length(onefit$crit.list$deviance)) onefit$crit.list$deviance else onefit$crit.list$loglikelihood if (tries == 1 || min(deviance.Bestof[1:(tries-1)]) > deviance.Bestof[tries]) fit = onefit } fit$misc$deviance.Bestof = deviance.Bestof fit$misc$dataname <- dataname if (smart) { fit$smart.prediction <- get.smart.prediction() wrapup.smart() } answer <- new(Class="qrrvglm", "assign" = attr(x, "assign"), "call" = ocall, "coefficients" = fit$coefficients, "constraints" = fit$constraints, "criterion" = fit$crit.list, # list("deviance" = min(deviance.Bestof)), "dispersion" = 1, "family" = fit$family, "misc" = fit$misc, "model" = if (model) mf else data.frame(), "residuals" = as.matrix(fit$residuals), "smart.prediction" = as.list(fit$smart.prediction), "terms" = list(terms=mt)) if (!smart) answer@smart.prediction <- list(smart.arg = FALSE) if (length(attr(x, "contrasts"))) slot(answer, "contrasts") = attr(x, "contrasts") if (length(fit$fitted.values)) slot(answer, "fitted.values") = as.matrix(fit$fitted.values) slot(answer, "na.action") = if (length(na.act)) list(na.act) else list() if (length(offset)) slot(answer, "offset") = as.matrix(offset) if (length(fit$weights)) slot(answer, "weights") = as.matrix(fit$weights) if (x.arg) slot(answer, "x") = fit$x # The 'small' design matrix if (length(xlev)) slot(answer, "xlevels") = xlev if (y.arg) slot(answer, "y") = as.matrix(fit$y) fit$control$min.criterion = TRUE # needed for calibrate; a special case slot(answer, "control") = fit$control slot(answer, "extra") = if (length(fit$extra)) { if (is.list(fit$extra)) fit$extra else { warning("'extra' is not a list, therefore placing ", "'extra' into a list") list(fit$extra) } } else list() # R-1.5.0 slot(answer, "iter") = fit$iter fit$predictors = as.matrix(fit$predictors) # Must be a matrix dimnames(fit$predictors) = list(dimnames(fit$predictors)[[1]], fit$misc$predictors.names) slot(answer, "predictors") = fit$predictors if 
(length(fit$prior.weights)) slot(answer, "prior.weights") = as.matrix(fit$prior.weights) answer } attr(cqo, "smart") <- TRUE VGAM/R/coef.vlm.q0000644000176000001440000000552012136651110013126 0ustar ripleyusers# These functions are # Copyright (C) 1998-2013 T.W. Yee, University of Auckland. # All rights reserved. coefvlm <- function(object, matrix.out = FALSE, label = TRUE) { ans <- object@coefficients if (!label) names(ans) <- NULL if (!matrix.out) return(ans) ncolx <- object@misc$p # = length(object@constraints) M <- object@misc$M Blist <- object@constraints if (all(trivial.constraints(Blist) == 1)) { Bmat <- matrix(ans, nrow = ncolx, ncol = M, byrow = TRUE) } else { Bmat <- matrix(as.numeric(NA), nrow = ncolx, ncol = M) if (!matrix.out) return(ans) ncolBlist <- unlist(lapply(Blist, ncol)) nasgn <- names(Blist) temp <- c(0, cumsum(ncolBlist)) for(ii in 1:length(nasgn)) { index <- (temp[ii] + 1):temp[ii + 1] cmat <- Blist[[nasgn[ii]]] Bmat[ii,] <- cmat %*% ans[index] } } if (label) { d1 <- object@misc$colnames.x d2 <- object@misc$predictors.names # Could be NULL dimnames(Bmat) <- list(d1, d2) } Bmat } # end of coefvlm setMethod("coefficients", "vlm", function(object, ...) coefvlm(object, ...)) setMethod("coef", "vlm", function(object, ...) coefvlm(object, ...)) setMethod("coefficients", "vglm", function(object, ...) coefvlm(object, ...)) setMethod("coef", "vglm", function(object, ...) coefvlm(object, ...)) setMethod("coefficients", "summary.vglm", function(object, ...) object@coef3) setMethod("coef", "summary.vglm", function(object, ...) object@coef3) Coef.vlm <- function(object, ...) { LL <- length(object@family@vfamily) funname <- paste("Coef.", object@family@vfamily[LL], sep = "") if (exists(funname)) { newcall <- paste("Coef.", object@family@vfamily[LL], "(object, ...)", sep = "") newcall <- parse(text = newcall)[[1]] return(eval(newcall)) } answer <- if (length(tmp2 <- object@misc$link) && object@misc$intercept.only && trivial.constraints(object@constraints)) { if (!is.list(use.earg <- object@misc$earg)) use.earg <- list() answer <- eta2theta(rbind(coefvlm(object)), link = object@misc$link, earg = use.earg) answer <- c(answer) if (length(ntmp2 <- names(tmp2)) == object@misc$M) names(answer) <- ntmp2 answer } else { coefvlm(object, ... ) } if (length(tmp3 <- object@misc$parameter.names) && object@misc$intercept.only && trivial.constraints(object@constraints)) { answer <- c(answer) if (length(tmp3) == object@misc$M && is.character(tmp3)) names(answer) <- tmp3 } answer } setMethod("Coefficients", "vlm", function(object, ...) Coef.vlm(object, ...)) setMethod("Coef", "vlm", function(object, ...) Coef.vlm(object, ...)) VGAM/R/cao.fit.q0000644000176000001440000020750712136651110012751 0ustar ripleyusers# These functions are # Copyright (C) 1998-2013 T.W. Yee, University of Auckland. # All rights reserved. cao.fit <- function(x, y, w = rep(1, length(x[, 1])), etastart = NULL, mustart = NULL, coefstart = NULL, offset = 0, family, control = cao.control(...), criterion = "coefficients", qr.arg = FALSE, constraints = NULL, extra = NULL, Terms = Terms, function.name = "cao", ...) 
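# cao.fit() below is the workhorse called by cao(); a typical call to the
# user-level wrapper might look like (data and variable names hypothetical):
#   fit <- cao(cbind(sp1, sp2, sp3) ~ x2 + x3, family = poissonff,
#              data = mydata, Rank = 1, df1.nl = 2.5)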
{ maxitl <- fv <- NULL specialCM <- NULL post <- list() check.rank <- TRUE # nonparametric <- TRUE optim.maxit <- control$optim.maxit save.weight <- control$save.weight trace <- control$trace minimize.criterion <- control$min.criterion n <- dim(x)[1] copy_X_vlm <- FALSE # May be overwritten in @initialize X_vlm_save <- NULL intercept.only <- ncol(x) == 1 && dimnames(x)[[2]] == "(Intercept)" y.names <- predictors.names <- NULL # May be overwritten in @initialize n.save <- n Rank <- control$Rank rrcontrol <- control # if (length(family@initialize)) eval(family@initialize) # Initialize mu and M (and optionally w) n <- n.save modelno <- switch(family@vfamily[1], "poissonff" = 2, "binomialff" = 1, "quasipoissonff" = 0, "quasibinomialff" = 0, "negbinomial" = 3, "gamma2" = 5, "gaussianff" = 8, 0) # stop("cannot fit this model using fast algorithm") if (!modelno) stop("the family function does not work with cao()") if (modelno == 1) modelno <- get("modelno", envir = VGAM:::VGAMenv) eval(rrr.init.expression) if (length(etastart)) { eta <- etastart mu <- if (length(mustart)) mustart else if (length(body(slot(family, "linkinv")))) slot(family, "linkinv")(eta, extra) else warning("argument 'etastart' assigned a value ", "but there is no 'linkinv' slot to use it") } if (length(mustart)) { mu <- mustart if (length(body(slot(family, "linkfun")))) { eta <- slot(family, "linkfun")(mu, extra) } else { warning("argument 'mustart' assigned a value ", "but there is no 'link' slot to use it") } } M <- if (is.matrix(eta)) ncol(eta) else 1 if (length(family@constraints)) eval(family@constraints) special.matrix <- matrix(-34956.125, M, M) # An unlikely used matrix just.testing <- cm.vgam(special.matrix, x, rrcontrol$noRRR, constraints) findex <- trivial.constraints(just.testing, special.matrix) tc1 <- trivial.constraints(constraints) if (all(findex == 1)) stop("No covariates to form latent variables from.") colx1.index <- names.colx1.index <- NULL dx2 <- dimnames(x)[[2]] if (sum(findex)) { asx <- attr(x, "assign") for(ii in names(findex)) if (findex[ii]) { names.colx1.index <- c(names.colx1.index, dx2[asx[[ii]]]) colx1.index <- c(colx1.index, asx[[ii]]) } names(colx1.index) <- names.colx1.index } rrcontrol$colx1.index <- control$colx1.index <- colx1.index colx2.index <- 1:ncol(x) names(colx2.index) <- dx2 colx2.index <- colx2.index[-colx1.index] p1 <- length(colx1.index); p2 <- length(colx2.index) rrcontrol$colx2.index <- control$colx2.index <- colx2.index Cmat <- if (length(rrcontrol$Cinit)) { matrix(rrcontrol$Cinit, p2, Rank) } else { if (!rrcontrol$Use.Init.Poisson.QO) { matrix(rnorm(p2*Rank, sd = rrcontrol$SD.Cinit), p2, Rank) } else { .Init.Poisson.QO(ymat = as.matrix(y), X1 = x[,colx1.index, drop = FALSE], X2 = x[,colx2.index, drop = FALSE], Rank = rrcontrol$Rank, trace = rrcontrol$trace, max.ncol.etamat = rrcontrol$Etamat.colmax, Crow1positive = rrcontrol$Crow1positive, constwt = any(family@vfamily[1] == c("negbinomial", "gamma2", "gaussianff")), takelog = any(family@vfamily[1] != c("gaussianff"))) } } rrcontrol$Cinit <- control$Cinit <- Cmat # Good for valt() Blist <- process.constraints(constraints, x, M, specialCM = specialCM) nice31 <- checkCMCO(Blist, control = control, modelno = modelno) if (nice31 != 1) stop("not nice") ncolBlist <- unlist(lapply(Blist, ncol)) lv.mat <- x[, colx2.index, drop = FALSE] %*% Cmat rmfromVGAMenv(c("etamat", "beta"), prefix = ".VGAM.CAO.") Nice21 <- length(names.colx1.index) == 1 && names.colx1.index == "(Intercept)" if (!Nice21) stop("'noRRR = ~ 1' is supported only, 
without constraints") NOS <- ifelse(modelno %in% c(3, 5), M/2, M) p1star. <- if (Nice21) ifelse(modelno %in% c(3, 5), 2, 1) else M p2star. <- if (Nice21) Rank else stop("not Nice21") pstar. <- p1star. + p2star. nstar <- if (Nice21) ifelse(modelno %in% c(3, 5), n * 2, n) else n * M lenbeta <- pstar. * ifelse(Nice21, NOS, 1) othint <- c(Rank, control$EqualTol, pstar. , dim2wz = 1, inited = 0, # w(,dimw) cols modelno, maxitl = control$maxitl, actnits = 0, twice = 0, p1star. , p2star. , Nice21, lenbeta, controlITolerances = 0, control$trace, p1, p2 = p2, imethod = control$imethod, bchat = 0) othdbl <- c(small = control$SmallNo, fseps = control$epsilon, .Machine$double.eps, iKvector = rep(control$iKvector, len = NOS), iShape = rep(control$iShape, len = NOS), resss = 0, bfeps = control$bf.epsilon, hstep = 0.1) for(iter in 1:optim.maxit) { if (control$trace) { cat("\nIteration", iter, "\n") flush.console() } conjgrad <- optim(par = c(Cmat), fn = callcaoc, gr = if (control$GradientFunction) calldcaoc else NULL, method = "BFGS", control=list(fnscale = 1, trace = as.integer(control$trace), maxit = control$Maxit.optim, REPORT = 10), etamat = eta, xmat = x, ymat = y, # as.matrix(y), wvec = w, modelno = modelno, Control = control, Nice21 = Nice21, p1star. = p1star. , p2star. = p2star. , n = n, M = M, othint = othint, othdbl = othdbl, alldump = FALSE) Cmat <- matrix(conjgrad$par, p2, Rank) # old becoz of scale(cmatrix) # Cmat <- Cmat %*% Ut # Normalized if (converged <- (conjgrad$convergence == 0)) break } if (!converged) { if (maxitl > 1) { warning("convergence not obtained in", maxitl, "iterations.") } else { warning("convergence not obtained") } } else { } Cmat <- crow1C(Cmat, control$Crow1positive) # Make sure signs are right flush.console() temp9 <- callcaoc(cmatrix = Cmat, etamat = eta, xmat = x, ymat = y, wvec = w, modelno = modelno, Control = control, Nice21 = Nice21, p1star. = p1star. , p2star. = p2star. 
, n = n, M = M, othint = othint, othdbl = othdbl, alldump = TRUE) if (!is.list(extra)) extra <- list() extra$Cmat <- temp9$Cmat ynames <- dimnames(y)[[2]] extra$df1.nl <- temp9$df1.nl extra$lambda1 <- temp9$lambda1 extra$spar1 <- temp9$spar1 names(extra$df1.nl) <- names(extra$lambda1) <- names(extra$spar1) <- ynames if (Rank == 2) { extra$spar2 <- temp9$spar2 extra$lambda2 <- temp9$lambda2 extra$df2.nl <- temp9$df2.nl names(extra$df2.nl) <- names(extra$lambda2) <- names(extra$spar2) <- ynames } extra$alldeviance <- temp9$alldeviance names(extra$alldeviance) <- ynames mu <- matrix(temp9$fitted, n, NOS, byrow = TRUE) dn <- labels(x) yn <- dn[[1]] if (is.matrix(mu)) { if (length(dimnames(y)[[2]])) { y.names <- dimnames(y)[[2]] } if (length(dimnames(mu)[[2]])) { y.names <- dimnames(mu)[[2]] } dimnames(mu) <- list(yn, y.names) } else { names(mu) <- names(fv) } fit <- list( fitted.values = mu, Cmatrix = Cmat, terms = Terms) # terms: This used to be done in vglm() misc <- list( criterion = criterion, predictors.names = predictors.names, M = M, n = n, nonparametric = nonparametric, p = ncol(x), ynames = ynames) crit.list <- list() crit.list$deviance <- temp9$deviance if (w[1] != 1 || any(w != w[1])) fit$prior.weights <- w if (length(family@last)) eval(family@last) structure(c(fit, temp9, list( contrasts = attr(x, "contrasts"), control = control, crit.list = crit.list, extra = extra, family = family, iter = iter, misc = misc, post = post, x = x, y = y)), vclass = family@vfamily) } cao.control <- function(Rank = 1, all.knots = FALSE, criterion = "deviance", Cinit = NULL, Crow1positive = TRUE, epsilon = 1.0e-05, Etamat.colmax = 10, GradientFunction = FALSE, # For now 24/12/04 iKvector = 0.1, iShape = 0.1, noRRR = ~ 1, Norrr = NA, SmallNo = 5.0e-13, Use.Init.Poisson.QO = TRUE, Bestof = if (length(Cinit)) 1 else 10, maxitl = 10, # was 40 prior to 20100420 imethod = 1, bf.epsilon = 1.0e-7, bf.maxit = 10, # was 40 prior to 20100420 Maxit.optim = 250, optim.maxit = 20, SD.sitescores = 1.0, SD.Cinit = 0.02, trace = TRUE, df1.nl = 2.5, # About 1.5--2.5 gives the flexibility of a quadratic df2.nl = 2.5, # About 1.5--2.5 gives the flexibility of a quadratic spar1 = 0, # 0 means df1.nl is used spar2 = 0, # 0 means df2.nl is used ...) { if (length(Norrr) != 1 || !is.na(Norrr)) { warning("argument 'Norrr' has been replaced by 'noRRR'. 
", "Assigning the latter but using 'Norrr' will become an error in ", "the next VGAM version soon.") noRRR <- Norrr } if (!is.Numeric(iShape, positive = TRUE)) stop("bad input for 'iShape'") if (!is.Numeric(iKvector, positive = TRUE)) stop("bad input for 'iKvector'") if (!is.Numeric(imethod, positive = TRUE, allowable.length = 1, integer.valued = TRUE)) stop("bad input for 'imethod'") if (criterion != "deviance") stop("'criterion' must be 'deviance'") if (GradientFunction) stop("14/1/05; GradientFunction = TRUE not working yet") se.fit <- as.logical(FALSE) if (se.fit) stop("se.fit = FALSE handled only") if (length(Cinit) && !is.Numeric(Cinit)) stop("Bad input for 'Cinit'") if (!is.Numeric(Bestof, allowable.length = 1, integer.valued = TRUE, positive = TRUE)) stop("Bad input for 'Bestof'") if (!is.Numeric(maxitl, allowable.length = 1, integer.valued = TRUE, positive = TRUE)) stop("Bad input for 'maxitl'") if (!is.Numeric(bf.epsilon, allowable.length = 1, positive = TRUE)) stop("Bad input for 'bf.epsilon'") if (!is.Numeric(bf.maxit, integer.valued = TRUE, positive = TRUE, allowable.length = 1)) stop("Bad input for 'bf.maxit'") if (!is.Numeric(Etamat.colmax, positive = TRUE, allowable.length = 1) || Etamat.colmax < Rank) stop("bad input for 'Etamat.colmax'") if (!is.Numeric(Maxit.optim, integer.valued = TRUE, positive = TRUE, allowable.length = 1)) stop("Bad input for 'Maxit.optim'") if (!is.Numeric(optim.maxit, allowable.length = 1, integer.valued = TRUE, positive = TRUE)) stop("Bad input for 'optim.maxit'") if (!is.Numeric(SD.sitescores, allowable.length = 1, positive = TRUE)) stop("Bad input for 'SD.sitescores'") if (!is.Numeric(SD.Cinit, allowable.length = 1, positive = TRUE)) stop("Bad input for 'SD.Cinit'") if (!is.Numeric(df1.nl) || any(df1.nl < 0)) stop("Bad input for 'df1.nl'") if (any(df1.nl >= 0 & df1.nl < 0.05)) { warning("'df1.nl' values between 0 and 0.05 converted to 0.05") df1.nl[df1.nl < 0.05] <- 0.05 } if (!is.Numeric(df2.nl) || any(df2.nl < 0)) stop("Bad input for 'df2.nl'") if (any(df2.nl >= 0 & df2.nl < 0.05)) { warning("'df2.nl' values between 0 and 0.05 converted to 0.05") df2.nl[df2.nl < 0.05] = 0.05 } if (!is.Numeric(spar1) || any(spar1 < 0)) stop("Bad input for 'spar1'") if (!is.Numeric(spar2) || any(spar2 < 0)) stop("Bad input for 'spar2'") if (!is.Numeric(epsilon, positive = TRUE, allowable.length = 1)) stop("Bad input for 'epsilon'") if (!is.Numeric(SmallNo, positive = TRUE, allowable.length = 1)) stop("Bad input for 'SmallNo'") if ((SmallNo < .Machine$double.eps) || (SmallNo > .0001)) stop("'SmallNo' is out of range") ans <- list( Corner = FALSE, # A constant, not a control parameter; unneeded? EqualTolerances = FALSE, # A constant, not a control parameter; needed ITolerances = FALSE, # A constant, not a control parameter; unneeded? Quadratic = FALSE, # A constant, not a control parameter; unneeded? 
all.knots = as.logical(all.knots)[1], Bestof = Bestof, Cinit = Cinit, ConstrainedO = TRUE, # A constant, not a control parameter criterion = criterion, Crow1positive = as.logical(rep(Crow1positive, len = Rank)), epsilon = epsilon, Etamat.colmax = Etamat.colmax, FastAlgorithm = TRUE, # A constant, not a control parameter GradientFunction = as.logical(GradientFunction), maxitl = maxitl, bf.epsilon = bf.epsilon, bf.maxit = bf.maxit, imethod = imethod, Maxit.optim = Maxit.optim, optim.maxit = optim.maxit, noRRR = noRRR, Rank = Rank, SD.sitescores = SD.sitescores, SD.Cinit = SD.Cinit, se.fit = se.fit, # If TRUE, then would need storage for S QR fits SmallNo = SmallNo, trace = as.integer(trace), Use.Init.Poisson.QO = Use.Init.Poisson.QO, iKvector = as.numeric(iKvector), iShape = as.numeric(iShape), DF1 = 2.5, # Used as Default value if df1.nl has no default DF2 = 2.5, # Used as Default value if df2.nl has no default SPAR1 = 0, # Used as Default value if spar1 has no default SPAR2 = 0, # Used as Default value if spar2 has no default df1.nl = df1.nl, df2.nl = df2.nl, spar1 = spar1, spar2 = spar2) ans } create.cms <- function(Rank = 1, M, MSratio = 1, which, p1 = 1) { if (!is.Numeric(p1, allowable.length = 1, integer.valued = TRUE, positive = TRUE)) stop("bad input for 'p1'") Blist. <- vector("list", p1 + Rank) for(rr in 1:(p1+Rank)) Blist.[[rr]] <- diag(M) names(Blist.) <- if (p1 == 1) c("(Intercept)", names(which)) else stop() if (MSratio == 2) { for(r in 1:Rank) Blist.[[p1+r]] <- eijfun(1, M) } Blist. } callcaoc <- function(cmatrix, etamat, xmat, ymat, wvec, modelno, Control, Nice21 = TRUE, p1star. = if (modelno %in% c(3, 5)) 2 else 1, p2star. = Rank, n, M, othint, othdbl, alldump = FALSE) { flush.console() control <- Control Rank <- control$Rank p1 <- length(control$colx1.index) p2 <- length(control$colx2.index) yn <- dimnames(ymat)[[2]] if (length(yn) != ncol(ymat)) stop("the column names of 'ymat' must be given") queue <- qbig <- Rank # 19/10/05; number of smooths per species NOS <- if (modelno %in% c(3, 5)) M/2 else M df1.nl <- procVec(control$df1.nl, yn = yn , Default = control$DF1) spar1 <- procVec(control$spar1, yn = yn , Default = control$SPAR1) df2.nl <- procVec(control$df2.nl, yn = yn , Default = control$DF2) spar2 <- procVec(control$spar2, yn = yn , Default = control$SPAR2) if (any(c(length(spar1), length(spar2), length(df1.nl), length(df2.nl)) != NOS)) stop("wrong length in at least one of ", "'df1.nl', 'df2.nl', 'spar1', 'spar2'") cmatrix <- matrix(cmatrix, p2, Rank) # crow1C() needs a matrix as input cmatrix <- crow1C(cmatrix, crow1positive = control$Crow1positive) numat <- xmat[,control$colx2.index, drop = FALSE] %*% cmatrix evnu <- eigen(var(numat)) temp7 <- if (Rank > 1) evnu$vector %*% diag(evnu$value^(-0.5)) else evnu$vector %*% evnu$value^(-0.5) cmatrix <- cmatrix %*% temp7 cmatrix <- crow1C(cmatrix, crow1positive =control$Crow1positive) numat <- xmat[,control$colx2.index, drop = FALSE] %*% cmatrix dim(numat) <- c(n, Rank) mynames5 <- if (Rank == 1) "lv" else paste("lv", 1:Rank, sep = "") nu1mat <- cbind("(Intercept)" = 1, lv = numat) dimnames(nu1mat) <- list(dimnames(xmat)[[1]], c("(Intercept)", mynames5)) temp.smooth.frame <- vector("list", p1+Rank) # Temporary makeshift frame names(temp.smooth.frame) <- c(names(control$colx1.index), mynames5) for(uu in 1:(p1+Rank)) { temp.smooth.frame[[uu]] <- nu1mat[,uu] } temp.smooth.frame <- data.frame(temp.smooth.frame) for(uu in 1:Rank) { attr(temp.smooth.frame[,uu+p1], "spar") <- 0 # this value unused attr(temp.smooth.frame[,uu+p1], 
"df") <- 4 # this value unused } pstar. <- p1star. + p2star. # = Mdot + Rank nstar <- if (Nice21) ifelse(modelno %in% c(3, 5), n * 2, n) else n * M lenbeta <- pstar. * ifelse(Nice21, NOS, 1) # Holds the linear coeffs inited <- if (exists(".VGAM.CAO.etamat", envir=VGAM:::VGAMenv)) 1 else 0 usethiseta <- if (inited == 1) getfromVGAMenv("etamat", prefix = ".VGAM.CAO.") else t(etamat) if (any(is.na(usethiseta))) { usethiseta <- t(etamat) # So that dim(usethiseta) == c(M,n) rmfromVGAMenv("etamat", prefix = ".VGAM.CAO.") } usethisbeta <- if (inited == 2) getfromVGAMenv("beta", prefix = ".VGAM.CAO.") else double(lenbeta) othint[5] <- inited # Refine initialization within C pstar <- NOS * pstar. bnumat <- if (Nice21) matrix(0, nstar, pstar.) else stop("code not written here") M. <- MSratio <- M / NOS # 1 or 2 usually which <- p1 + (1:Rank) # These columns are smoothed nwhich <- names(which) <- mynames5 origBlist <- Blist. <- create.cms(Rank = Rank, M = M., MSratio = MSratio, which = which, p1 = p1) # For 1 species only ncolBlist. <- unlist(lapply(Blist. , ncol)) smooth.frame <- s.vam(x = nu1mat, zedd = NULL, wz = NULL, smomat = NULL, which = which, smooth.frame = temp.smooth.frame, bf.maxit = control$bf.maxit, bf.epsilon = control$bf.epsilon, trace = FALSE, se.fit = control$se.fit, X_vlm_save = bnumat, Blist = Blist. , ncolBlist = ncolBlist. , M = M. , qbig = NULL, Umat = NULL, # NULL ==> unneeded all.knots = control$all.knots, nk = NULL, sf.only = TRUE) ldk <- 3 * max(ncolBlist.[nwhich]) + 1 # 11/7/02 dimw. <- M. # Smoothing one spp. at a time dim1U. <- M. wz. <- matrix(0, n, dimw. ) if (names(Blist.)[1] != "(Intercept)") stop("something wrong here") Blist.[[1]] <- NULL trivc <- rep(2 - M. , len = queue) # All of queue smooths are basic smooths ncbvec <- ncolBlist.[nwhich] ncolb <- max(ncbvec) qbig. <- NOS * qbig # == NOS * Rank; holds all the smooths if (!all.equal(as.vector(ncbvec), rep(1, len = queue))) stop("'ncbvec' not right---should be a queue-vector of ones") pbig <- pstar. # contr.sp <- list(low = -1.5,## low = 0. was default till R 1.3.x high = 1.5, tol = 1e-4,## tol = 0.001 was default till R 1.3.x eps = 2e-8,## eps = 0.00244 was default till R 1.3.x maxit = 500 ) if (FALSE) contr.sp <- list(low = -1.5,## low = 0. was default till R 1.3.x high = 1.5, tol = 0.001, # was default till R 1.3.x eps = 0.00244, # was default till R 1.3.x maxit = 500 ) npetc <- c(n = nrow(nu1mat), p. = ncol(nu1mat), q = length(which), se.fit = control$se.fit, 0, control$bf.maxit, qrank = 0, M = M. , nbig = nstar, pbig = pbig, qbig = qbig, dim2wz = dimw. , dim1U = dim1U. , ierror = 0, ldk=ldk, contr.sp$maxit, iinfo = 0) if (Rank == 2) { smopar <- (c(spar1, spar2))[interleave.VGAM(4*NOS, M = 2)] dofvec <- (1.0 + c(df1.nl, df2.nl))[interleave.VGAM(4*NOS, M = 2)] lamvec <- 0 * dofvec stop("20100414; havent got Rank = 2 going yet") } else { smopar <- c(spar1, spar2) dofvec <- c(df1.nl, df2.nl) + 1.0 lamvec <- 0 * dofvec } ans1 <- dotC(name = "vcao6", numat = as.double(numat), ymat = as.double(ymat), wvec = as.double(wvec), etamat = as.double(usethiseta), fv = double(NOS*n), zedd = double(n*M), wz = double(n*M), U = double(M*n), # bnumat = as.double(bnumat), qr = double(nstar*pstar.), qraux = double(pstar.), qpivot = integer(pstar.), n = as.integer(n), M = as.integer(M), NOS = as.integer(NOS), nstar = as.integer(nstar), dim1U = as.integer( M ), # for U, not U. 
errcode = integer(1), othint = as.integer(othint), deviance = double(1 + NOS), # NOS more elts added 20100413 beta = as.double(usethisbeta), othdbl = as.double(othdbl), npetc = as.integer(npetc), M. = as.integer( M. ), dofvec = as.double(dofvec), lamvec = as.double(lamvec), smopar = as.double(smopar), match = as.integer(smooth.frame$o), as.integer(smooth.frame$nef), which = as.integer(which), smomat = as.double(matrix(0, n, qbig. )), nu1mat = as.double(nu1mat), blist = as.double(unlist( Blist. )), as.integer(ncbvec), smap = as.integer(1:(Rank+1)), # trivc = as.integer(trivc), levmat = as.double(matrix(0, n, qbig. )), bcoefficients = double(NOS * sum(smooth.frame$nknots*ncbvec)), xknots = as.double(unlist(smooth.frame$knots)), bindex = as.integer(smooth.frame$bindex), nknots = as.integer(smooth.frame$nknots), kindex = as.integer(smooth.frame$kindex)) flush.console() if (ans1$errcode == 0) { assign2VGAMenv(c("etamat", "beta"), ans1, prefix = ".VGAM.CAO.") assign(".VGAM.CAO.cmatrix", matrix(cmatrix,p2,Rank), envir=VGAM:::VGAMenv) } else { cat("warning in callcaoc: error code = ", ans1$errcode, "\n") cat("warning in callcaoc: npetc[14] = ", ans1$npetc[14], "\n") flush.console() rmfromVGAMenv(c("etamat", "beta"), prefix = ".VGAM.CAO.") } returnans <- if (alldump) { bindex <- ans1$bindex ncolBlist <- ncbvec Bspline2 <- vector("list", NOS) names(Bspline2) <- dimnames(ymat)[[2]] Bspline <- vector("list", length(nwhich)) names(Bspline) <- nwhich ind9 <- 0 # moving index for(sppno in 1:NOS) { for(ii in 1:length(nwhich)) { ind7 <- (smooth.frame$bindex[ii]):(smooth.frame$bindex[ii+1]-1) ans <- ans1$bcoeff[ind9+ind7] ans <- matrix(ans, ncol=ncolBlist[nwhich[ii]]) Bspline[[ii]] = new(Class = "vsmooth.spline.fit", "Bcoefficients" = ans, "xmax" = smooth.frame$xmax[ii], "xmin" = smooth.frame$xmin[ii], "knots" = as.vector(smooth.frame$knots[[ii]])) } ind9 <- ind9 + smooth.frame$bindex[length(nwhich)+1]-1 Bspline2[[sppno]] <- Bspline } qrank <- npetc[7] # Assume all species have the same qrank value dim(ans1$etamat) <- c(M,n) # was c(n,M) prior to 22/8/06 df1.nl <- ans1$dofvec[1:NOS] - 1.0 lambda1 <- ans1$lamvec[1:NOS] spar1 <- ans1$smopar[1:NOS] if (Rank == 2) { stop("20100414; this isnt working yet") df2.nl <- ans1$dofvec[NOS + (1:NOS)] - 1.0 lambda2 <- ans1$lamvec[NOS + (1:NOS)] spar2 <- ans1$smopar[NOS + (1:NOS)] } list(deviance = ans1$deviance[1], alldeviance = ans1$deviance[-1], bcoefficients = ans1$bcoefficients, bindex = ans1$bindex, Bspline = Bspline2, Cmat = matrix(cmatrix, p2, Rank, dimnames=list( names(control$colx2.index), mynames5)), coefficients = ans1$beta, df1.nl = df1.nl, df2.nl = if (Rank == 2) df2.nl else NULL, df.residual = n*M - qrank - sum(ans1$df - 1), fitted = ans1$fv, # NOS x n kindex = ans1$kindex, lambda1 = lambda1, lambda2 = if (Rank == 2) lambda2 else NULL, predictors = matrix(ans1$etamat, n, M, byrow = TRUE), wresiduals = ans1$zedd - t(ans1$etamat), # n x M spar1 = spar1, spar2 = if (Rank == 2) spar2 else NULL) } else ans1$deviance[1] flush.console() returnans } calldcaoc <- function(cmatrix, etamat, xmat, ymat, wvec, modelno, Control, Nice21 = TRUE, p1star. = if (modelno %in% c(3, 5)) 2 else 1, p2star. 
= Rank, n, M, othint, othdbl, alldump = FALSE) { if (alldump) stop("really used?") flush.console() U <- NULL if (!Nice21) stop("'Nice21' must be TRUE") control <- Control Rank <- control$Rank p2 <- length(control$colx2.index) yn <- dimnames(ymat)[[2]] if (!length( yn )) yn <- paste("Y", 1:ncol(ymat), sep = "") cmatrix <- scale(cmatrix) xmat2 <- xmat[,control$colx2.index, drop = FALSE] #ccc numat <- xmat2 %*% matrix(cmatrix, p2, Rank) dim(numat) <- c(nrow(xmat), Rank) temp.smooth.frame <- vector("list", 1+Rank) # Temporary makeshift frame mynames5 <- if (Rank == 1) "lv" else paste("lv",1:Rank,sep = "") names(temp.smooth.frame) <- c("(Intercept)", mynames5) temp.smooth.frame[[1]] <- rep(1, len=n) for(uu in 1:Rank) { temp.smooth.frame[[uu+1]] <- numat[,uu] } temp.smooth.frame <- data.frame(temp.smooth.frame) for(uu in 1:Rank) { attr(temp.smooth.frame[,uu+1], "spar") <- 0 # any old value attr(temp.smooth.frame[,uu+1], "df") <- 4 # any old value } pstar. <- p1star. + p2star. nstar <- if (Nice21) ifelse(modelno %in% c(3, 5), n * 2, n) else n * M NOS <- ifelse(modelno %in% c(3, 5), M / 2, M) lenbeta <- pstar. * ifelse(Nice21, NOS, 1) if (TRUE) { inited <- if (exists(".VGAM.CAO.etamat", envir = VGAM:::VGAMenv)) 1 else 0 usethiseta <- if (inited == 1) get(".VGAM.CAO.etamat", envir = VGAM:::VGAMenv) else t(etamat) } usethisbeta <- if (inited == 2) get(".VGAM.CAO.beta", envir = VGAM:::VGAMenv) else double(lenbeta) pstar <- NOS * pstar. bnumat <- if (Nice21) matrix(0,nstar,pstar) else stop("need 'Nice21'") M. <- MSratio <- M / NOS # 1 or 2 usually p1 <- 1 which <- p1 + (1:Rank) # The first 1 is the intercept term nwhich <- names(which) <- mynames5 origBlist <- Blist. <- create.cms(Rank = Rank, M = M., MSratio = MSratio, which = which, p1 = p1) # For 1 species ncolBlist. <- unlist(lapply(Blist. , ncol)) nu1mat <- cbind("(Intercept)" = 1, lv = numat) dimnames(nu1mat) <- list(dimnames(xmat)[[1]], c("(Intercept)","lv")) smooth.frame <- s.vam(x = nu1mat, zedd = NULL, wz = NULL, smomat = NULL, which = which, smooth.frame = temp.smooth.frame, bf.maxit = control$bf.maxit, bf.epsilon = control$bf.epsilon, trace = FALSE, se.fit = control$se.fit, X_vlm_save = bnumat, Blist = Blist., ncolBlist = ncolBlist. , M = M. , qbig = NULL, Umat = U, # NULL value ==> not needed all.knots = control$all.knots, nk = NULL, sf.only = TRUE) ldk <- 4 * max(ncolBlist.[nwhich]) # was M; # Prior to 11/7/02 ldk <- 3 * max(ncolBlist.[nwhich]) + 1 # 11/7/02 wz. <- matrix(0, n, M. ) # not sure dimw. <- if (is.matrix( wz. )) ncol( wz. ) else 1 dim1U. <- M. # 20100410 queue <- qbig <- Rank # 19/10/05; number of smooths per species Blist.[[1]] <- NULL trivc <- rep(2 - M. , len = queue) # All of queue smooths are basic smooths ncbvec <- ncolBlist.[nwhich] ncolb <- max(ncbvec) qbig. <- NOS * qbig # == NOS * Rank pbig <- pstar. # Not sure if (FALSE) { df1.nl <- rep(control$df1.nl, len = NOS) # This is used df2.nl <- rep(control$df2.nl, len = NOS) # This is used spar1 <- rep(control$spar1, len = NOS) # This is used spar2 <- rep(control$spar2, len = NOS) # This is used } else { # This is used df1.nl <- procVec(control$df1.nl, yn = yn , Default = control$DF1) df2.nl <- df1.nl # 20100417; stopgap spar1 <- procVec(control$spar1, yn = yn , Default = control$SPAR1) spar2 <- spar1 # 20100417; stopgap dofvec <- c(df1.nl, df2.nl) lamvec <- 0 * dofvec smopar <- c(spar1, spar2) } contr.sp <- list(low = -1.5,## low = 0. 
was default till R 1.3.x
                   high = 1.5,
                   tol = 1e-4,  ## tol = 0.001 was default till R 1.3.x
                   eps = 2e-8,  ## eps = 0.00244 was default till R 1.3.x
                   maxit = 500)
  if (FALSE)
    contr.sp <- list(low = -1.5,    ## low = 0. was default till R 1.3.x
                     high = 1.5,
                     tol = 0.001,    # was default till R 1.3.x
                     eps = 0.00244,  # was default till R 1.3.x
                     maxit = 500)
  # The npetc layout below (20100405) supersedes this older one:
  # npetc <- c(n = n, p = 1+Rank, length(which), se.fit = control$se.fit, 0,
  #            maxitl = control$maxitl, qrank = 0, M = M. , n.M = n* M. ,
  #            pbig = sum( ncolBlist.), qbig = qbig, dimw = dimw. ,
  #            dim1U = dim1U. , ierror = 0, ldk = ldk)
  npetc <- c(n = nrow(nu1mat), p. = ncol(nu1mat), q = length(which),
             se.fit = control$se.fit, 0, control$bf.maxit, qrank = 0,
             M = M. , nbig = nstar, pbig = pbig, qbig = qbig,
             dim2wz = dimw. , dim1U = dim1U. , ierror = 0, ldk = ldk,
             contr.sp$maxit, iinfo = 0)
  flush.console()
  ans1 <- dotC(name = if (Nice21) "vdcao6" else stop("need 'Nice21'"),
       numat = as.double(numat), as.double(ymat), as.double(wvec),
       etamat = as.double(usethiseta),
       fv = double(NOS*n), zedd = double(n*M), wz = double(n*M),
       U = double(M*n), # bnumat = as.double(bnumat),
       qr = double(nstar*pstar.), qraux = double(pstar.),
       qpivot = integer(pstar.),
       as.integer(n), as.integer(M), NOS = as.integer(NOS),
       as.integer(nstar), dim1U = as.integer(M),
       errcode = integer(1), othint = as.integer(othint),
       deviance = double(1 + NOS), beta = as.double(usethisbeta),
       othdbl = as.double(othdbl),
       as.double(xmat2), cmat = as.double(cmatrix),
       p2 = as.integer(p2), deriv = double(p2*Rank),
       betasave = double(lenbeta), npetc = as.integer(npetc),
       M. = as.integer( M. ), dofvec = as.double(dofvec + 1.0),
       lamvec = as.double(0 * dofvec), smopar = as.double(smopar),
       match = as.integer(smooth.frame$o), as.integer(smooth.frame$nef),
       as.integer(which), smomat = as.double(matrix(0, n, qbig. )),
       nu1mat = as.double(nu1mat), as.double(unlist( Blist. )),
       as.integer(ncbvec), smap = as.integer(1:(Rank+1)),
       trivc = as.integer(trivc), levmat = as.double(matrix(0, n, qbig.
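# vdcao6 is the derivative counterpart of vcao6: its 'deriv' component
# carries d(deviance)/d(c(cmatrix)), which calldcaoc() returns to optim()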
)), bcoefficients = double(NOS * sum(smooth.frame$nknots*ncbvec)), xknots = as.double(unlist(smooth.frame$knots)), bindex = as.integer(smooth.frame$bindex), nknots = as.integer(smooth.frame$nknots), kindex = as.integer(smooth.frame$kindex)) flush.console() assign(".VGAM.CAO.etamat", ans1$etamat, envir = VGAM:::VGAMenv) assign(".VGAM.CAO.z", ans1$zedd, envir = VGAM:::VGAMenv) # z; minus any offset assign(".VGAM.CAO.U", ans1$U, envir = VGAM:::VGAMenv) # U if (ans1$errcode == 0) { } else { cat("warning in calldcaoc: error code = ", ans1$errcode, "\n") flush.console() } returnans <- if (alldump) { bindex <- ans1$bindex ncolBlist <- ncbvec Bspline2 <- vector("list", NOS) names(Bspline2) <- dimnames(ymat)[[2]] Bspline <- vector("list", length(nwhich)) names(Bspline) <- nwhich ind9 <- 0 # moving index for(jay in 1:NOS) { for(ii in 1:length(nwhich)) { ind9 <- ind9[length(ind9)] + (bindex[ii]):(bindex[ii+1]-1) ans <- ans1$bcoeff[ind9] ans <- matrix(ans, ncol = ncolBlist[nwhich[ii]]) Bspline[[ii]] <- new(Class = "vsmooth.spline.fit", "Bcoefficients" = ans, "xmax" = smooth.frame$xmax[ii], "xmin" = smooth.frame$xmin[ii], "knots" = as.vector(smooth.frame$knots[[ii]])) } Bspline2[[jay]] <- Bspline } qrank <- npetc[7] # Assume all species have the same qrank value dim(ans1$etamat) <- c(M,n) # bug: was c(n,M) prior to 22/8/06 list(deviance = ans1$deviance[1], alldeviance = ans1$deviance[-1], bcoefficients = ans1$bcoefficients, bindex = ans1$bindex, Bspline = Bspline2, Cmat=matrix(cmatrix, p2, Rank, dimnames=list( names(control$colx2.index), mynames5)), coefficients=ans1$beta, df1.nl = ans1$dofvec[1:NOS] - 1, df2.nl = if (Rank == 2) ans1$dofvec[2*(1:NOS) - 1] - 1 else NULL, lambda1 = ans1$lambda[1:NOS], lambda2 = if (Rank == 2) ans1$lambda[2*(1:NOS) - 1] else NULL, df.residual = n*M - qrank - sum(ans1$df - 1), fitted=ans1$fv, kindex = ans1$kindex, predictors=matrix(ans1$etamat, n, M, byrow = TRUE), wresiduals = ans1$zedd - t(ans1$etamat), # n x M spar1 = ans1$smopar[1:NOS], spar2 = if (Rank == 2) ans1$smopar[2*(1:NOS) - 1] else NULL) } else { ans1$deriv } flush.console() returnans } setClass(Class = "Coef.cao", representation( "Bspline" = "list", "C" = "matrix", "Constrained" = "logical", "df1.nl" = "numeric", "df2.nl" = "numeric", "dispersion" = "numeric", "eta2" = "matrix", "lv" = "matrix", "lvOrder" = "matrix", "M" = "numeric", "Maximum" = "numeric", "NOS" = "numeric", "Optimum" = "matrix", "OptimumOrder" = "matrix", "Rank" = "numeric", "spar1" = "numeric", "spar2" = "numeric")) Coef.cao <- function(object, epsOptimum = 0.00001, # determines how accurately Optimum is estimated gridlen = 40, # Number of points on the grid (one level at a time) maxgriditer = 10, # Maximum number of iterations allowed for grid search smallno = 0.05, ...) 
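# Coef.cao() locates each species' optimum by evaluating its fitted curve
# over a grid on the latent variable(s) and refining until the grid spacing
# falls below epsOptimum; e.g. (hypothetical fit): Coef(fit, gridlen = 40)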
{ if (!is.Numeric(epsOptimum, positive = TRUE, allowable.length = 1)) stop("bad input for argument 'epsOptimum'") if (!is.Numeric(gridlen, positive = TRUE, integer.valued = TRUE) || gridlen < 5) stop("bad input for argument 'gridlen'") if (!is.Numeric(maxgriditer, positive = TRUE, allowable.length = 1, integer.valued = TRUE) || maxgriditer < 3) stop("bad input for argument 'maxgriditer'") if (!is.logical(ConstrainedO <- object@control$ConstrainedO)) stop("cannot determine whether the model is constrained or not") if (!is.Numeric(smallno, positive = TRUE, allowable.length = 1) || smallno > 0.5 || smallno < 0.0001) stop("bad input for argument 'smallno'") ocontrol <- object@control if ((Rank <- ocontrol$Rank) > 2) stop("'Rank' must be 1 or 2") gridlen <- rep(gridlen, length=Rank) M <- if (any(slotNames(object) == "predictors") && is.matrix(object@predictors)) ncol(object@predictors) else object@misc$M NOS <- if (length(object@y)) ncol(object@y) else M MSratio <- M / NOS # 1 or 2; First value is g(mean)=quadratic form in lv nice21 <- (length(ocontrol$colx1.index) == 1) && (names(ocontrol$colx1.index) == "(Intercept)") if (!nice21) stop("Can only handle 'noRRR = ~ 1'") p1 <- length(ocontrol$colx1.index) p2 <- length(ocontrol$colx2.index) modelno <- object@control$modelno # 1,2,3,... or 0 ynames <- object@misc$ynames if (!length(ynames)) ynames <- object@misc$predictors.names if (!length(ynames)) ynames <- object@misc$ynames if (!length(ynames)) ynames <- paste("Y", 1:NOS, sep = "") lp.names <- object@misc$predictors.names if (!length(lp.names)) lp.names <- NULL lv.names <- if (Rank == 1) "lv" else paste("lv", 1:Rank, sep = "") Cmat <- object@extra$Cmat # p2 x Rank (provided maxitl > 1) if (ConstrainedO) dimnames(Cmat) <- list(names(ocontrol$colx2.index), lv.names) lv.mat <- if (ConstrainedO) { object@x[,ocontrol$colx2.index, drop = FALSE] %*% Cmat } else { object@lv } optimum <- matrix(as.numeric(NA), Rank, NOS, dimnames = list(lv.names, ynames)) extents <- apply(lv.mat, 2, range) # 2 by R maximum <- rep(as.numeric(NA), len=NOS) whichSpecies <- 1:NOS # Do it for all species if (Rank == 1) { gridd <- cbind(seq(extents[1,1], extents[2,1], len=gridlen)) } else { gridd <- expand.grid(seq(extents[1,1], extents[2,1], len=gridlen[1]), seq(extents[1,2], extents[2,2], len=gridlen[2])) eta2matrix <- matrix(0, NOS, 1) } gridd.orig <- gridd # if (Rank == 2) then this is for initial values for(sppno in 1:length(whichSpecies)) { gridd <- gridd.orig gridres1 <- gridd[2,1] - gridd[1,1] gridres2 <- if (Rank == 2) gridd[2,2] - gridd[1,2] else 0 griditer <- 1 thisSpecies <- whichSpecies[sppno] indexSpecies <- if (is.character(whichSpecies)) match(whichSpecies[sppno], ynames) else whichSpecies[sppno] if (is.na(indexSpecies)) stop("mismatch found in 'whichSpecies'") while(griditer == 1 || ((griditer <= maxgriditer) && ((gridres1 > epsOptimum) || (gridres2 > epsOptimum)))) { temp <- predictcao(object, grid=gridd, sppno=thisSpecies, Rank=Rank, deriv = 0, MSratio=MSratio) yvals <- temp$yvals # gridlen-vector xvals <- temp$xvals # gridlen x Rank; gridd if (length(temp$eta2)) eta2matrix[sppno,1] <- temp$eta2 nnn <- length(yvals) index <- (1:nnn)[yvals == max(yvals)] if (length(index) != 1) warning("could not find a single maximum") if (Rank == 2) { initvalue <- rep(xvals[index,], length=Rank) # for optim() # Make sure initvalue is in the interior if (abs(initvalue[1] - extents[1,1]) < smallno) initvalue[1] <- extents[1,1] + smallno if (abs(initvalue[1] - extents[2,1]) < smallno) initvalue[1] <- extents[2,1] - smallno 
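# The enclosing while() loop shrinks the search grid around the current
# best latent-variable value until its spacing drops below epsOptimum.
# A standalone sketch of that refinement (hypothetical unimodal f and
# interval; the block is never run here):
if (FALSE) {
  f <- function(x) -(x - 1)^2            # unimodal curve, optimum at 1
  lo <- -3; hi <- 3
  for (it in 1:10) {
    g <- seq(lo, hi, length.out = 40)    # cf. gridlen = 40
    x0 <- g[which.max(f(g))]             # current best grid point
    res <- g[2] - g[1]                   # current grid resolution
    if (res < 1e-5) break                # cf. epsOptimum
    lo <- max(-3, x0 - res); hi <- min(3, x0 + res)
  }
}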
if (abs(initvalue[2] - extents[1,2]) < smallno) initvalue[2] <- extents[1,2] + smallno if (abs(initvalue[2] - extents[2,2]) < smallno) initvalue[2] <- extents[2,2] - smallno break } if (index == 1 || index == nnn) { maximum[sppno] <- optimum[1,sppno] <- NA gridres1 <- epsOptimum + 1 # equivalent to a break break # just in case } else { maximum[sppno] <- yvals[index] # on the eta scale optimum[1,sppno] <- xvals[index,1] gridd[,1] <- seq( max(extents[1,1], optimum[1,sppno]-gridres1), min(extents[2,1], optimum[1,sppno]+gridres1), len=gridlen) gridres1 <- gridd[2,1] - gridd[1,1] griditer <- griditer + 1 } } # of while if (Rank == 2) { # Rank = 2, so use optim(). The above was to get initial values. myfun <- function(x, object, sppno, Rank = 1, deriv = 0, MSratio = 1) { # x is a 2-vector x <- matrix(x, 1, length(x)) temp <- predictcao(object, grid = x, sppno = sppno, Rank = Rank, deriv = deriv, MSratio = MSratio) temp$yval } answer <- optim(initvalue, myfun, gr = NULL, method = "L-BFGS-B", lower = extents[1,], upper = extents[2,], control = list(fnscale = -1), # maximize! object = object, sppno = sppno, Rank = Rank, deriv = 0, MSratio = MSratio) # Check to see if the soln is @ boundary. If not, assign it. for(rindex in 1:Rank) if (abs(answer$par[rindex] - extents[1,rindex]) > smallno && abs(answer$par[rindex] - extents[2,rindex]) > smallno) { optimum[rindex,sppno] <- answer$par[rindex] maximum[sppno] <- answer$value } } # end of Rank = 2 } # end of sppno myetamat <- rbind(maximum) if (MSratio == 2) myetamat <- kronecker(myetamat, matrix(1:0, 1, 2)) maximum <- object@family@linkinv(eta = myetamat, extra = object@extra) maximum <- c(maximum) # Convert from matrix to vector names(maximum) <- ynames ans <- new(Class = "Coef.cao", Bspline = object@Bspline, Constrained=ConstrainedO, df1.nl = object@extra$df1.nl, lv = lv.mat, lvOrder = lv.mat, Maximum = maximum, M = M, NOS = NOS, Optimum=optimum, OptimumOrder=optimum, Rank = Rank, spar1 = object@extra$spar1) if (ConstrainedO) {ans@C = Cmat} else {Cmat = NULL} if (Rank == 2) { dimnames(eta2matrix) <- list( object@misc$predictors.names[c(FALSE,TRUE)], " ") ans@eta2 <- eta2matrix ans@df2.nl <- object@extra$df2.nl ans@spar2 <- object@extra$spar2 } for(rindex in 1:Rank) { ans@OptimumOrder[rindex,] <- order(ans@Optimum[rindex,]) ans@lvOrder[,rindex] <- order(ans@lv[,rindex]) } if (length(object@misc$estimated.dispersion) && object@misc$estimated.dispersion) { p <- length(object@coefficients) n <- object@misc$n M <- object@misc$M NOS <- if (length(object@y)) ncol(object@y) else M pstar <- p + length(Cmat) # Adjustment adjusted.dispersion <- object@misc$dispersion * (n*M - p) / (n*M - pstar) ans@dispersion <- adjusted.dispersion } if (MSratio == 2) { lcoef <- object@coefficients temp <- lcoef[((1:NOS)-1)*(2+Rank)+2] names(temp) <- object@misc$predictors.names[2*(1:NOS)] ans@dispersion <- temp } dimnames(ans@Optimum) <- list(lv.names, ynames) ans } show.Coef.cao <- function(object, digits = max(2, options()$digits-2), ...) { Rank <- object@Rank NOS <- object@NOS M <- object@M Maximum <- if (length(object@Maximum)) cbind(Maximum = object@Maximum) else NULL optmat <- cbind(t(object@Optimum)) dimnames(optmat) <- list(dimnames(optmat)[[1]], if (Rank > 1) paste("Optimum", dimnames(optmat)[[2]], sep = ".") else "Optimum") if ( object@Constrained ) { cat("\nC matrix (constrained/canonical coefficients)\n") print(object@C, digits = digits, ...) 
} cat("\nOptima and maxima\n") print(cbind(Optimum = optmat, Maximum), digits = max(1, digits-1)) cat("\nNonlinear degrees of freedom\n") if (Rank == 1) { print(cbind(df1.nl = object@df1.nl), digits = max(2, digits-1), ...) } else { print(cbind(df1.nl = object@df1.nl, df2.nl = object@df2.nl), digits = max(2, digits-1), ...) } invisible(object) } setMethod("show", "Coef.cao", function(object) show.Coef.cao(object)) setMethod("coef", "cao", function(object, ...) Coef.cao(object, ...)) setMethod("coefficients", "cao", function(object, ...) Coef.cao(object, ...)) setMethod("Coef", "cao", function(object, ...) Coef.cao(object, ...)) lvplot.cao <- function(object, add = FALSE, plot.it = TRUE, rugplot = TRUE, y = FALSE, type = c("fitted.values", "predictors"), xlab = paste("Latent Variable", if (Rank == 1) "" else " 1", sep = ""), ylab = if (Rank == 1) switch(type, predictors = "Predictors", fitted.values = "Fitted values") else "Latent Variable 2", pcex=par()$cex, pcol=par()$col, pch=par()$pch, llty=par()$lty, lcol=par()$col, llwd=par()$lwd, label.arg= FALSE, adj.arg=-0.5, sites= FALSE, spch = NULL, scol=par()$col, scex=par()$cex, sfont=par()$font, whichSpecies = NULL, check.ok = TRUE, ...) { type <- match.arg(type, c("fitted.values", "predictors"))[1] if ((Rank <- object@control$Rank) > 2) stop("can only handle 'Rank' = 1 or 2 models") M <- if (any(slotNames(object) == "predictors") && is.matrix(object@predictors)) ncol(object@predictors) else object@misc$M NOS <- ncol(object@y) MSratio <- M / NOS # First value is g(mean) = quadratic form in lv n <- object@misc$n colx2.index <- object@control$colx2.index cx1i <- object@control$colx1.index if (!length(whichSpecies)) whichSpecies <- 1:NOS if (check.ok) if (!(length(cx1i) == 1 && names(cx1i) == "(Intercept)")) stop("latent variable plots allowable only for 'noRRR = ~ 1' models") Coeflist <- Coef(object) Cmat <- Coeflist@C lvmat <- Coeflist@lv # n x Rank if (!plot.it) return(lvmat) r.curves <- slot(object, type) # n times (M or S) (\boldeta or \boldmu) if (MSratio != 1 && type == "predictors") stop("can only plot the predictors if M == S") MorS <- ncol(r.curves) # Actually, here, the value is S always. if (!add) { if (Rank == 1) { matplot(lvmat, if ( y && type == "fitted.values") object@y[,whichSpecies, drop = FALSE] else r.curves[,whichSpecies, drop = FALSE], type = "n", xlab = xlab, ylab=ylab, ...) } else { # Rank == 2 matplot(c(Coeflist@Optimum[1,whichSpecies], lvmat[,1]), c(Coeflist@Optimum[2,whichSpecies], lvmat[,2]), type = "n", xlab = xlab, ylab=ylab, ...) 
    }
  }
  pch     <- rep(pch,     len = length(whichSpecies))
  pcol    <- rep(pcol,    len = length(whichSpecies))
  pcex    <- rep(pcex,    len = length(whichSpecies))
  llty    <- rep(llty,    len = length(whichSpecies))
  lcol    <- rep(lcol,    len = length(whichSpecies))
  llwd    <- rep(llwd,    len = length(whichSpecies))
  adj.arg <- rep(adj.arg, len = length(whichSpecies))
  sppnames <- if (type == "predictors") dimnames(r.curves)[[2]] else
              dimnames(object@y)[[2]]
  if (Rank == 1) {
    for (sppno in 1:length(whichSpecies)) {
      thisSpecies <- whichSpecies[sppno]
      indexSpecies <- if (is.character(whichSpecies))
          match(whichSpecies[sppno], sppnames) else whichSpecies[sppno]
      if (is.na(indexSpecies))
        stop("mismatch found in 'whichSpecies'")
      xx <- lvmat
      yy <- r.curves[, indexSpecies]
      o <- sort.list(xx)
      xx <- xx[o]
      yy <- yy[o]
      lines(xx, yy, col = lcol[sppno], lwd = llwd[sppno], lty = llty[sppno])
      if (y && type == "fitted.values") {
        ypts <- object@y
        if (ncol(as.matrix(ypts)) == ncol(r.curves))
          points(xx, ypts[o, sppno], col = pcol[sppno],
                 cex = pcex[sppno], pch = pch[sppno])
      }
    }
    if (rugplot) rug(xx)
  } else {
    if (sites) {
      text(lvmat[, 1], lvmat[, 2], adj = 0.5,
           labels = if (is.null(spch)) dimnames(lvmat)[[1]] else
                    rep(spch, length = nrow(lvmat)),
           col = scol, cex = scex, font = sfont)
    }
    for (sppno in 1:length(whichSpecies)) {
      thisSpecies <- whichSpecies[sppno]
      indexSpecies <- if (is.character(whichSpecies))
          match(whichSpecies[sppno], sppnames) else whichSpecies[sppno]
      if (is.na(indexSpecies))
        stop("mismatch found in 'whichSpecies'")
      points(Coeflist@Optimum[1, indexSpecies],
             Coeflist@Optimum[2, indexSpecies],
             col = pcol[sppno], cex = pcex[sppno], pch = pch[sppno])
    }
    if (label.arg) {
      for (sppno in 1:length(whichSpecies)) {
        thisSpecies <- whichSpecies[sppno]
        indexSpecies <- if (is.character(whichSpecies))
            match(whichSpecies[sppno], sppnames) else whichSpecies[sppno]
        text(Coeflist@Optimum[1, indexSpecies],
             Coeflist@Optimum[2, indexSpecies],
             labels = (dimnames(Coeflist@Optimum)[[2]])[indexSpecies],
             adj = adj.arg[sppno], col = pcol[sppno], cex = pcex[sppno])
      }
    }
  }
  invisible(lvmat)
}

setMethod("lvplot", "cao",
          function(object, ...) {
          invisible(lvplot.cao(object, ...))})

predict.cao <- function (object, newdata = NULL,
                         type = c("link", "response", "terms"),
                         deriv = 0, ...) {
  type <- match.arg(type, c("link", "response", "terms"))[1]
  if (type != "link" && deriv != 0)
    stop("setting 'deriv' > 0 requires type = \"link\"")
  na.act <- object@na.action
  object@na.action <- list()
  ocontrol <- object@control
  nice21 <- (length(ocontrol$colx1.index) == 1) &&
            (names(ocontrol$colx1.index) == "(Intercept)")
  if (!nice21)
    stop("Can only handle 'noRRR = ~ 1'")
  if (!length(newdata) && type == "response" &&
      length(object@fitted.values)) {
    if (length(na.act)) {
      return(napredict(na.act[[1]], object@fitted.values))
    } else {
      return(object@fitted.values)
    }
  }
  if (!length(newdata)) {
    X <- model.matrixvlm(object, type = "lm", ...)
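    # No 'newdata' supplied: reuse the LM-type model matrix and the
    # offset saved in the original fit (see the next few statements).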
offset <- object@offset tt <- terms(object) if (!length(object@x)) attr(X, "assign") <- attrassignlm(X, tt) } else { if (is.smart(object) && length(object@smart.prediction)) { setup.smart("read", smart.prediction=object@smart.prediction) } tt <- terms(object) # 11/8/03; object@terms$terms X <- model.matrix(delete.response(tt), newdata, contrasts = if (length(object@contrasts)) object@contrasts else NULL, xlev = object@xlevels) if (nice21 && nrow(X)!=nrow(newdata)) { as.save <- attr(X, "assign") X <- X[rep(1, nrow(newdata)),, drop = FALSE] dimnames(X) <- list(dimnames(newdata)[[1]], "(Intercept)") attr(X, "assign") <- as.save # Restored } offset <- if (!is.null(off.num <- attr(tt, "offset"))) { eval(attr(tt, "variables")[[off.num+1]], newdata) } else if (!is.null(object@offset)) eval(object@call$offset, newdata) if (is.smart(object) && length(object@smart.prediction)) { wrapup.smart() } attr(X, "assign") <- attrassigndefault(X, tt) } cancoefs <- ccoef(object) lvmat <- X[,ocontrol$colx2.index, drop = FALSE] %*% cancoefs # n x Rank Rank <- ocontrol$Rank NOS <- ncol(object@y) sppnames <- dimnames(object@y)[[2]] modelno <- ocontrol$modelno # 1,2,3,5 or 0 M <- if (any(slotNames(object) == "predictors") && is.matrix(object@predictors)) ncol(object@predictors) else object@misc$M MSratio <- M / NOS # First value is g(mean) = quadratic form in lv if (type == "terms") { terms.mat = matrix(0,nrow(X),Rank*NOS) # 1st R cols for spp.1, etc. interceptvector <- rep(0, len=NOS) } else { etamat <- matrix(0, nrow(X), M) # Could contain derivatives } ind8 <- 1:Rank whichSpecies <- 1:NOS # Do it all for all species for(sppno in 1:length(whichSpecies)) { thisSpecies <- whichSpecies[sppno] indexSpecies <- if (is.character(whichSpecies)) match(whichSpecies[sppno], sppnames) else whichSpecies[sppno] if (is.na(indexSpecies)) stop("mismatch found in 'whichSpecies'") temp345 <- predictcao(object, grid=lvmat, sppno=thisSpecies, Rank=Rank, deriv=deriv, MSratio=MSratio, type=ifelse(type == "response", "link", type)) if (MSratio == 2) { if (any(type == c("link", "response"))) { etamat[,2*sppno-1] <- temp345$yvals etamat[,2*sppno ] <- temp345$eta2 } else { terms.mat[,ind8] <- temp345 interceptvector[sppno] <- attr(temp345, "constant") } } else { if (any(type == c("link", "response"))) { etamat[,sppno] <- temp345$yvals } else { terms.mat[,ind8] <- temp345 interceptvector[sppno] <- attr(temp345, "constant") } } ind8 <- ind8 + Rank } if (length(offset) && any(offset != 0)) etamat <- etamat + offset if (type == "link") { dimnames(etamat) <- list(dimnames(X)[[1]], if (deriv == 0) object@misc$predictors.names else NULL) return(etamat) } else if (type == "response") { fv <- object@family@linkinv(etamat, extra=object@extra) dimnames(fv) <- list(dimnames(fv)[[1]], dimnames(object@fitted.values)[[2]]) return(fv) } else { attr(terms.mat, "constant") = interceptvector terms.mat } } setMethod("predict", "cao", function(object, ...) 
          predict.cao(object, ...))


predictcao <- function(object, grid, sppno, Rank = 1,
                       deriv = 0, MSratio = 1, type = "link") {
  if (type != "link" && type != "terms")
    stop("'type' must be \"link\" or \"terms\"")
  if (ncol(grid <- as.matrix(grid)) != Rank)
    stop("'grid' must have ", Rank, " columns")
  if (!is.Numeric(1 + deriv, allowable.length = 1, positive = TRUE,
                  integer.valued = TRUE))
    stop("'deriv' must be a non-negative integer")
  if (type == "terms" && deriv != 0)
    stop("'deriv' must be 0 when type=\"terms\"")

  temp.b <- object@Bspline[[sppno]]
  if (type == "terms") {
    meanlv <- colMeans(grid)
    answer <- matrix(0, nrow(grid), Rank)
  } else {
    nlfunvalues <- 0
  }
  for (rindex in 1:Rank) {
    temp <- temp.b[[rindex]]  # temp is of class "vsmooth.spline.fit"
    nlpart <- predict(temp, grid[, rindex], deriv = deriv)
    yvals <- nlpart$y
    if (type == "terms") {
      answer[, rindex] = yvals
    } else {
      nlfunvalues <- nlfunvalues + yvals
    }
  }

  # Get the linear part of the additive predictor (intercept and slopes)
  lcoef <- object@coefficients  # linear coefs; don't use coef() (== Coef)
  llcoef <- lcoef[(1 + (sppno-1) * (MSratio + Rank)):(sppno * (MSratio + Rank))]
  if (type == "terms") {
    interceptvector <- llcoef[1]
    for (rindex in 1:Rank) {
      answer[, rindex] <- answer[, rindex] +
        (grid[, rindex] - meanlv[rindex]) * llcoef[MSratio + rindex]
      interceptvector <- interceptvector +
        meanlv[rindex] * llcoef[MSratio + rindex]
    }
  } else {
    linpar <- if (deriv == 0) {
      llcoef[1] + grid %*% llcoef[-(1:MSratio)]
    } else {
      # Note: rindex == Rank after the loop above, so for deriv == 1
      # only the slope of the last latent variable is added here.
      if (deriv == 1) llcoef[MSratio + rindex] else 0
    }
    nlfunvalues <- nlfunvalues + linpar  # Now complete
  }
  if (type == "terms") {
    attr(answer, "constant") <- interceptvector
    answer
  } else {
    list(xvals = grid,
         yvals = c(nlfunvalues),
         eta2 = if (MSratio == 2) llcoef[MSratio] else NULL)
  }
}


plot.cao <- function(x,
                     xlab = if (Rank == 1) "Latent Variable" else
                            paste("Latent Variable", 1:Rank),
                     ylab = NULL, residuals.arg = FALSE,
                     pcol = par()$col, pcex = par()$cex, pch = par()$pch,
                     lcol = par()$col, lwd = par()$lwd, lty = par()$lty,
                     add = FALSE, main = NULL,
                     center.cf = Rank > 1,
                     WhichRank = 1:Rank,
                     whichSpecies = NULL,  # a numeric or character vector
                     rugplot = TRUE, se.arg = FALSE, deriv = 0,
                     scale = 0, ylim = NULL,
                     overlay = FALSE, ...)
{ Rank <- x@control$Rank if (!is.logical(center.cf) || length(center.cf) != 1) stop("bad input for argument 'center.cf'") if (Rank > 1 && !center.cf) stop("center.cf = TRUE is needed for models with Rank > 1") NOS <- ncol(x@y) sppnames <- dimnames(x@y)[[2]] modelno <- x@control$modelno # 1,2,3, or 0 M <- if (any(slotNames(x) == "predictors") && is.matrix(x@predictors)) ncol(x@predictors) else x@misc$M if (all((MSratio <- M / NOS) != c(1,2))) stop("bad value for 'MSratio'") pcol <- rep(pcol, length = Rank*NOS) pcex <- rep(pcex, length = Rank*NOS) pch <- rep(pch, length = Rank*NOS) lcol <- rep(lcol, length = Rank*NOS) lwd <- rep(lwd, length = Rank*NOS) lty <- rep(lty, length = Rank*NOS) xlab <- rep(xlab, length = Rank) if (!length(whichSpecies)) whichSpecies <- 1:NOS if (length(ylab)) ylab <- rep(ylab, len=length(whichSpecies)) # Too long if overlay if (length(main)) main <- rep(main, len=length(whichSpecies)) # Too long if overlay lvmat <- lv(x) nice21 <- length(x@control$colx1.index) == 1 && names(x@control$colx1.index) == "(Intercept)" if (!nice21) stop("can only handle intercept-only models") counter <- 0 for(sppno in 1:length(whichSpecies)) { thisSpecies <- whichSpecies[sppno] indexSpecies <- if (is.character(whichSpecies)) match(whichSpecies[sppno], sppnames) else whichSpecies[sppno] if (is.na(indexSpecies)) stop("mismatch found in 'whichSpecies'") terms.mat <- predictcao(object = x, grid=lvmat, type = "terms", sppno=indexSpecies, Rank=Rank, deriv=deriv, MSratio=MSratio) for(rindex in WhichRank) { xvals <- lvmat[,rindex] yvals <- terms.mat[,rindex] o <- sort.list(xvals) xvals <- xvals[ o ] yvals <- yvals[ o ] if (!center.cf) yvals <- yvals + attr(terms.mat, "constant") if (!add) if (sppno == 1 || !overlay) { ylim.use <- if (length(ylim)) ylim else ylim.scale(range(yvals), scale) matplot(xvals, yvals, type = "n", xlab = xlab[rindex], ylab = if (length(ylab)) ylab[sppno] else ifelse(overlay, "Fitted functions", "Fitted function"), main = if (length(main)) main[sppno] else ifelse(overlay, "", sppnames[thisSpecies]), ylim=ylim.use, ...) } if (residuals.arg) { stop("cannot handle residuals = TRUE yet") } counter <- counter + 1 lines(xvals, yvals, col=lcol[counter], lwd=lwd[counter], lty=lty[counter]) if (rugplot) rug(xvals) } } invisible(x) } setMethod("plot", "cao", function(x, y, ...) { if (!missing(y)) stop("cannot process the 'y' argument") invisible(plot.cao(x, ...))}) persp.cao <- function(x, plot.it = TRUE, xlim = NULL, ylim = NULL, zlim = NULL, # zlim ignored if Rank == 1 gridlength = if (Rank == 1) 301 else c(51,51), whichSpecies = NULL, xlab= if (Rank == 1) "Latent Variable" else "Latent Variable 1", ylab= if (Rank == 1) "Expected Value" else "Latent Variable 2", zlab = "Expected value", labelSpecies = FALSE, # For Rank == 1 only stretch = 1.05, # quick and dirty, Rank == 1 only main = "", ticktype = "detailed", col = if (Rank == 1) par()$col else "white", lty=par()$lty, lwd=par()$lwd, rugplot = FALSE, ...) 
{ object <- x # don't like x as the primary argument coefobj <- Coef(object) if ((Rank <- coefobj@Rank) > 2) stop("object must be a rank-1 or rank-2 model") fvmat <- fitted(object) NOS <- ncol(fvmat) # Number of species M <- if (any(slotNames(object) == "predictors") && is.matrix(object@predictors)) ncol(object@predictors) else object@misc$M MSratio <- M / NOS # First value is g(mean) = quadratic form in lv xlim <- if (length(xlim)) xlim else range(coefobj@lv[,1]) if (!length(ylim.orig <- ylim)) { ylim <- if (Rank == 1) c(0, max(fvmat)*stretch) else range(coefobj@lv[,2]) } xlim <- rep(xlim, length = 2) ylim <- rep(ylim, length = 2) gridlength <- rep(gridlength, length=Rank) lv1 <- seq(xlim[1], xlim[2], length=gridlength[1]) lv2 <- if (Rank == 2) seq(ylim[1], ylim[2], len=gridlength[2]) else NULL lvmat <- if (Rank == 2) expand.grid(lv1, lv2) else cbind(lv1) sppNames <- dimnames(object@y)[[2]] if (!length(whichSpecies)) { whichSpecies <- sppNames[1:NOS] whichSpecies.numer <- 1:NOS } else if (is.numeric(whichSpecies)) { whichSpecies.numer <- whichSpecies whichSpecies <- sppNames[whichSpecies.numer] # Convert to character } else whichSpecies.numer <- match(whichSpecies, sppNames) LP <- matrix(as.numeric(NA),nrow(lvmat),NOS) # For 1st eta for each spp. for(sppno in 1:NOS) { temp <- predictcao(object=object, grid=lvmat, sppno=sppno, Rank=Rank, deriv = 0, MSratio=MSratio) LP[,sppno] <- temp$yval } if (MSratio == 2) { LP <- kronecker(LP, matrix(1:0, 1, 2)) # n x M } fitvals <- object@family@linkinv(LP, extra=object@extra) # n by NOS dimnames(fitvals) <- list(NULL, dimnames(fvmat)[[2]]) if (Rank == 1) { if (plot.it) { if (!length(ylim.orig)) ylim <- c(0, max(fitvals[,whichSpecies.numer])*stretch) # A revision col <- rep(col, len=length(whichSpecies.numer)) lty <- rep(lty, len=length(whichSpecies.numer)) lwd <- rep(lwd, len=length(whichSpecies.numer)) matplot(lv1, fitvals, xlab = xlab, ylab=ylab, type = "n", main=main, xlim = xlim, ylim=ylim, ...) if (rugplot) rug(lv(object)) for(sppno in 1:length(whichSpecies.numer)) { ptr2 <- whichSpecies.numer[sppno] # points to species column lines(lv1, fitvals[,ptr2], col=col[sppno], lty=lty[sppno], lwd=lwd [sppno], ...) if (labelSpecies) { ptr1 <- (1:nrow(fitvals))[max(fitvals[,ptr2]) == fitvals[,ptr2]] ptr1 <- ptr1[1] text(lv1[ptr1], fitvals[ptr1,ptr2]+(stretch-1) * diff(range(ylim)), label=sppNames[sppno], col=col[sppno], ...) } } } } else { maxfitted <- matrix(fitvals[,whichSpecies[1]], length(lv1), length(lv2)) if (length(whichSpecies) > 1) for(sppno in whichSpecies[-1]) { maxfitted <- pmax(maxfitted, matrix(fitvals[,sppno], length(lv1), length(lv2))) } if (!length(zlim)) zlim <- range(maxfitted, na.rm = TRUE) if (plot.it) graphics:::persp.default(lv1, lv2, maxfitted, zlim=zlim, xlab = xlab, ylab=ylab, zlab=zlab, ticktype = ticktype, col = col, main=main, ...) } invisible(list(fitted = fitvals, lv1grid = lv1, lv2grid = if (Rank == 2) lv2 else NULL, maxfitted = if (Rank == 2) maxfitted else NULL)) } if(!isGeneric("persp")) setGeneric("persp", function(x, ...) standardGeneric("persp")) setMethod("persp", "cao", function(x, ...) persp.cao(x = x, ...)) lv.cao <- function(object, ...) { Coef(object, ...)@lv } if(!isGeneric("lv")) setGeneric("lv", function(object, ...) standardGeneric("lv"), package = "VGAM") setMethod("lv", "cao", function(object, ...) lv.cao(object, ...)) setClass(Class = "summary.cao", representation("misc" = "list", "call" = "call"), contains = "Coef.cao") summary.cao <- function(object, ...) { answer <- Coef(object, ...) 
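  # The summary is essentially the Coef() output coerced to class
  # "summary.cao", with the @misc and @call slots filled in below.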
answer <- as(answer, "summary.cao") answer@misc <- object@misc answer@call <- object@call answer } setMethod("summary", "cao", function(object, ...) summary.cao(object, ...)) show.summary.cao <- function(x, ...) { cat("\nCall:\n") dput(x@call) show.Coef.cao(x, ...) cat("\nNumber of species: ", x@NOS, "\n") if (length(x@misc$dispersion) == 1) { cat("\nDispersion parameter(s): ", x@misc$dispersion, "\n") } else if (is.Numeric(x@dispersion)) { cat("\nDispersion parameter(s)\n") print( x@dispersion, ... ) } invisible(x) } setMethod("show", "summary.cao", function(object) show.summary.cao(object)) ccoef.cao <- function(object, ...) { Coef(object, ...)@C } ccoef.Coef.cao <- function(object, ...) { if (length(list(...))) warning("Too late! Ignoring the extra arguments") object@C } if(!isGeneric("ccoef")) setGeneric("ccoef", function(object, ...) standardGeneric("ccoef")) setMethod("ccoef", "cao", function(object, ...) ccoef.cao(object, ...)) setMethod("ccoef", "Coef.cao", function(object, ...) ccoef.Coef.cao(object, ...)) if(!isGeneric("calibrate")) setGeneric("calibrate", function(object, ...) standardGeneric("calibrate")) setMethod("calibrate", "cao", function(object, ...) calibrate.qrrvglm(object, ...)) setMethod("calibrate", "qrrvglm", function(object, ...) calibrate.qrrvglm(object, ...)) Tol.cao <- function(object, ...) { stop("The tolerance for a 'cao' object is undefined") } if(!isGeneric("Tol")) setGeneric("Tol", function(object, ...) standardGeneric("Tol")) setMethod("Tol", "cao", function(object, ...) Tol.cao(object, ...)) setMethod("show", "cao", function(object) show.vgam(object)) VGAM/R/cao.R0000644000176000001440000001271312136651110012122 0ustar ripleyusers# These functions are # Copyright (C) 1998-2013 T.W. Yee, University of Auckland. # All rights reserved. cao <- function(formula, family, data=list(), weights=NULL, subset=NULL, na.action=na.fail, etastart=NULL, mustart=NULL, coefstart=NULL, control=cao.control(...), offset=NULL, method="cao.fit", model=FALSE, x.arg=TRUE, y.arg=TRUE, contrasts=NULL, constraints=NULL, extra=NULL, qr.arg=FALSE, smart=TRUE, ...) { dataname <- as.character(substitute(data)) # "list" if no data= function.name <- "cao" ocall <- match.call() if (smart) setup.smart("write") mt <- terms(formula, data = data) if (missing(data)) data <- environment(formula) mf <- match.call(expand.dots = FALSE) mf$family <- mf$method <- mf$model <- mf$x.arg <- mf$y.arg <- mf$control <- mf$contrasts <- mf$constraints <- mf$extra <- mf$qr.arg <- NULL mf$coefstart <- mf$etastart <- mf$... <- NULL mf$smart <- NULL mf$drop.unused.levels <- TRUE mf[[1]] <- as.name("model.frame") mf <- eval(mf, parent.frame()) if (method == "model.frame") return(mf) na.act <- attr(mf, "na.action") xvars <- as.character(attr(mt, "variables"))[-1] if ((yvar <- attr(mt, "response")) > 0) xvars <- xvars[-yvar] xlev <- if (length(xvars) > 0) { xlev <- lapply(mf[xvars], levels) xlev[!sapply(xlev, is.null)] } y <- model.response(mf, "numeric") # model.extract(mf, "response") x <- model.matrix(mt, mf, contrasts) attr(x, "assign") = attrassigndefault(x, mt) offset <- model.offset(mf) if (is.null(offset)) offset <- 0 # yyy ??? 
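  # model.offset() returns NULL when the formula has no offset term,
  # hence the zero default above.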
w <- model.weights(mf) if (!length(w)) w <- rep(1, nrow(mf)) else if (ncol(as.matrix(w))==1 && any(w < 0)) stop("negative weights not allowed") if (is.character(family)) family <- get(family) if (is.function(family)) family <- family() if (!inherits(family, "vglmff")) { stop("'family=", family, "' is not a VGAM family function") } eval(vcontrol.expression) if (!is.null(family@first)) eval(family@first) cao.fitter <- get(method) deviance.Bestof = rep(as.numeric(NA), len = control$Bestof) for(tries in 1:control$Bestof) { if (control$trace && (control$Bestof>1)) { cat(paste("\n========================= Fitting model", tries, "=========================\n")) if (exists("flush.console")) flush.console() } onefit <- cao.fitter(x=x, y=y, w=w, offset=offset, etastart=etastart, mustart=mustart, coefstart=coefstart, family=family, control=control, constraints=constraints, criterion=control$criterion, extra=extra, qr.arg = qr.arg, Terms=mt, function.name=function.name, ...) deviance.Bestof[tries] = onefit$crit.list$deviance if (tries == 1 || min(deviance.Bestof[1:(tries-1)]) > deviance.Bestof[tries]) fit = onefit } fit$misc$deviance.Bestof = deviance.Bestof fit$misc$dataname <- dataname if (smart) { fit$smart.prediction <- get.smart.prediction() wrapup.smart() } answer <- new("cao", "assign" = attr(x, "assign"), "Bspline" = fit$Bspline, "call" = ocall, "coefficients" = fit$coefficients, "criterion" = fit$crit.list, "family" = fit$family, "misc" = fit$misc, "model" = if (model) mf else data.frame(), "residuals" = as.matrix(fit$wresiduals), "smart.prediction" = as.list(fit$smart.prediction), "terms" = list(terms=mt)) if (!smart) answer@smart.prediction <- list(smart.arg=FALSE) if (qr.arg) { class(fit$qr) = "list" slot(answer, "qr") = fit$qr } if (length(attr(x, "contrasts"))) slot(answer, "contrasts") = attr(x, "contrasts") if (length(fit$fitted.values)) slot(answer, "fitted.values") = as.matrix(fit$fitted.values) slot(answer, "na.action") = if (length(na.act)) list(na.act) else list() if (length(offset)) slot(answer, "offset") = as.matrix(offset) if (length(fit$weights)) slot(answer, "weights") = as.matrix(fit$weights) if (x.arg) slot(answer, "x") = fit$x # The 'small' design matrix if (length(xlev)) slot(answer, "xlevels") = xlev if (y.arg) slot(answer, "y") = as.matrix(fit$y) slot(answer, "control") = fit$control slot(answer, "extra") = if (length(fit$extra)) { if (is.list(fit$extra)) fit$extra else { warning("'extra' is not a list, therefore ", "placing 'extra' into a list") list(fit$extra) } } else list() # R-1.5.0 slot(answer, "iter") = fit$iter fit$predictors = as.matrix(fit$predictors) # Must be a matrix dimnames(fit$predictors) = list(dimnames(fit$predictors)[[1]], fit$misc$predictors.names) slot(answer, "predictors") = fit$predictors if (length(fit$prior.weights)) slot(answer, "prior.weights") = as.matrix(fit$prior.weights) answer } attr(cao, "smart") <- TRUE VGAM/R/calibrate.q0000644000176000001440000002637612136651110013357 0ustar ripleyusers# These functions are # Copyright (C) 1998-2013 T.W. Yee, University of Auckland. # All rights reserved. calibrate.qrrvglm.control = function(object, trace=FALSE, # passed into optim() Method.optim="BFGS", # passed into optim(method=Method) gridSize = if (Rank==1) 9 else 5, varlvI = FALSE, ...) 
{ Rank = object@control$Rank EqualTolerances = object@control$EqualTolerances if (!is.Numeric(gridSize, positive = TRUE, integer.valued = TRUE, allowable.length = 1)) stop("bad input for 'gridSize'") if (gridSize < 2) stop("'gridSize' must be >= 2") list(# maxit=Maxit.optim, # Note the name change trace=as.numeric(trace)[1], Method.optim=Method.optim, gridSize=gridSize, varlvI = as.logical(varlvI)[1]) } if(!isGeneric("calibrate")) setGeneric("calibrate", function(object, ...) standardGeneric("calibrate")) calibrate.qrrvglm = function(object, newdata = NULL, type = c("lv","predictors","response","vcov","all3or4"), initial.vals = NULL, ...) { Quadratic = if (is.logical(object@control$Quadratic)) object@control$Quadratic else FALSE # T if CQO, F if CAO if (!length(newdata)) { if (!length(object@y)) stop("no newdata") else newdata = data.frame(object@y) } if (mode(type) != "character" && mode(type) != "name") type <- as.character(substitute(type)) type <- match.arg(type, c("lv","predictors","response","vcov","all3or4"))[1] if (!Quadratic && type=="vcov") stop("cannot have 'type=\"vcov\"' when object is a \"cao\" object") if (is.vector(newdata)) newdata = rbind(newdata) if (!is.matrix(newdata)) newdata = as.matrix(newdata) newdata = newdata[,object@misc$ynames,drop=FALSE] obfunct = slot(object@family, object@misc$criterion) # Objective function minimize.obfunct = if (Quadratic) object@control$min.criterion else TRUE # Logical; TRUE for CAO objects because deviance is minimized if (!is.logical(minimize.obfunct)) stop("object@control$min.criterion is not a logical") optim.control = calibrate.qrrvglm.control(object=object, ...) # For cao too if ((Rank <- object@control$Rank) > 2) stop("currently can only handle Rank=1 and 2") Coefobject = if (Quadratic) { Coef(object, varlvI=optim.control$varlvI) } else { Coef(object) } if (!length(initial.vals)) { L = apply(Coefobject@lv, 2, min) U = apply(Coefobject@lv, 2, max) initial.vals = if (Rank==1) cbind(seq(L, U, length=optim.control$gridSize)) else expand.grid(seq(L[1], U[1], length=optim.control$gridSize), seq(L[2], U[2], length=optim.control$gridSize)) } ok = length(object@control$colx1.index)==1 && names(object@control$colx1.index) == "(Intercept)" if (!ok) stop("The x1 vector must be an intercept only") nn = nrow(newdata) BestOFpar = NULL # It may be more efficient not to append BestOFvalues = NULL # Best OF objective function values for(i1 in 1:nn) { if (optim.control$trace) cat("\nOptimizing for observation", i1, "-----------------\n") OFvalues = OFpar = NULL # OF means objective function for(ii in 1:nrow(initial.vals)) { if (optim.control$trace) { cat("Starting from grid-point", ii, ":") flush.console() } ans = if (is.R()) { if (Quadratic) optim(par=initial.vals[ii,], fn=.my.calib.objfunction.qrrvglm, method=optim.control$Method.optim, # "BFGS", or "CG" or ... control = c(fnscale=ifelse(minimize.obfunct,1,-1), optim.control), y=newdata[i1,], extra=object@extra, objfun=obfunct, Coefs=Coefobject, misc.list = object@misc, everything = FALSE, mu.function = slot(object@family, "inverse")) else optim(par=initial.vals[ii,], fn=.my.calib.objfunction.cao, method=optim.control$Method.optim, # "BFGS", or "CG" or ... 
control = c(fnscale=ifelse(minimize.obfunct,1,-1), optim.control), y=newdata[i1,], extra=object@extra, objfun=obfunct, object=object, Coefs=Coefobject, misc.list = object@misc, everything = FALSE, mu.function = slot(object@family, "inverse")) } else stop("not implemented in S-PLUS yet") if (optim.control$trace) { if (ans$convergence == 0) cat("Successful convergence\n") else cat("Unsuccessful convergence\n") flush.console() } if (ans$convergence == 0) { OFvalues = c(OFvalues, ans$value) OFpar = rbind(OFpar, ans$par) } } if (length(OFpar)) { index = if (minimize.obfunct) (1:nrow(OFpar))[OFvalues==min(OFvalues)] else (1:nrow(OFpar))[OFvalues==max(OFvalues)] if (length(index) > 1) { warning(paste("multiple solutions found for observation ", i1, ". Choosing one randomly.", sep="")) index = sample(index, size=1) } else if (length(index) == 0) stop("length(index) is zero") BestOFpar = rbind(BestOFpar, OFpar[index,]) BestOFvalues = c(BestOFvalues, OFvalues[index]) } else { BestOFpar = rbind(BestOFpar, rep(as.numeric(NA), len=Rank)) BestOFvalues = c(BestOFvalues, NA) } } pretty = function(BestOFpar, newdata, Rank) { if (Rank==1) { BestOFpar = c(BestOFpar) names(BestOFpar) = dimnames(newdata)[[1]] } else dimnames(BestOFpar) = list(dimnames(newdata)[[1]], if (Rank==1) "lv" else paste("lv", 1:Rank, sep="")) BestOFpar } if (type=="lv") { BestOFpar = pretty(BestOFpar, newdata, Rank) attr(BestOFpar,"objectiveFunction")=pretty(BestOFvalues,newdata,Rank=1) BestOFpar } else { etaValues = muValues = NULL # if (Quadratic) vcValues = array(0, c(Rank,Rank,nn)) for(i1 in 1:nn) { ans = if (Quadratic) .my.calib.objfunction.qrrvglm(BestOFpar[i1, ], y=newdata[i1,], extra=object@extra, objfun=obfunct, Coefs=Coefobject, misc.list = object@misc, everything = TRUE, mu.function = slot(object@family, "inverse")) else .my.calib.objfunction.cao(BestOFpar[i1, ], y=newdata[i1,], extra=object@extra, objfun=obfunct, object=object, Coefs=Coefobject, misc.list = object@misc, everything = TRUE, mu.function = slot(object@family, "inverse")) muValues = rbind(muValues, matrix(ans$mu, nrow=1)) etaValues = rbind(etaValues, matrix(ans$eta, nrow=1)) if (Quadratic) vcValues[,,i1] = ans$vcmat # Can be NULL for "cao" objects } if (type=="response") { dimnames(muValues) = dimnames(newdata) muValues } else if (type=="predictors") { dimnames(etaValues) = list(dimnames(newdata)[[1]], dimnames(object@predictors)[[2]]) etaValues } else if (type=="vcov") { if (Quadratic) dimnames(vcValues) = list(as.character(1:Rank), as.character(1:Rank), dimnames(newdata)[[1]]) vcValues } else if (type=="all3or4") { if (Quadratic) dimnames(vcValues) = list(as.character(1:Rank), as.character(1:Rank), dimnames(newdata)[[1]]) dimnames(muValues) = dimnames(newdata) dimnames(etaValues) = list(dimnames(newdata)[[1]], dimnames(object@predictors)[[2]]) BestOFpar = pretty(BestOFpar, newdata, Rank) attr(BestOFpar,"objectiveFunction") = pretty(BestOFvalues,newdata,Rank=1) list(lv=BestOFpar, predictors=etaValues, response=muValues, vcov = if(Quadratic) vcValues else NULL) } else stop("type not matched") } } .my.calib.objfunction.qrrvglm = function(bnu, y, extra = NULL, objfun, Coefs, misc.list, everything=TRUE, mu.function) { bnumat = cbind(bnu) Rank = length(bnu) eta = cbind(c(Coefs@B1)) + Coefs@A %*% bnumat # bix1 = intercept only M = misc.list$M for(s in 1:M) { temp = Coefs@D[,,s,drop=FALSE] dim(temp) = dim(temp)[1:2] # c(Rank, Rank) eta[s,1] = eta[s,1] + t(bnumat) %*% temp %*% bnumat } eta = matrix(eta, 1, M, byrow=TRUE) mu = rbind(mu.function(eta, extra)) # Make sure 
it has one row value = objfun(mu=mu, y=y, w=1, # ignore prior.weights on the object residuals=FALSE, eta=eta, extra=extra) if (everything) { vcmat = matrix(0, Rank, Rank) for(s in 1:M) { vec1 = cbind(Coefs@A[s,]) + 2 * matrix(Coefs@D[,,s], Rank, Rank) %*% bnumat vcmat = vcmat + mu[1,s] * vec1 %*% t(vec1) } vcmat = solve(vcmat) } else vcmat = NULL if (everything) list(eta=eta, mu=mu, value=value, vcmat=vcmat) else value } .my.calib.objfunction.cao = function(bnu, y, extra = NULL, objfun, object, Coefs, misc.list, everything=TRUE, mu.function) { Rank = length(bnu) NOS = Coefs@NOS eta = matrix(as.numeric(NA), 1, NOS) for(j in 1:NOS) { eta[1,j] = predictcao(object, grid=bnu, sppno=j, Rank=Rank, deriv=0)$yvals } mu = rbind(mu.function(eta, extra)) # Make sure it has one row value = objfun(mu=mu, y=y, w=1, # ignore prior.weights on the object residuals=FALSE, eta=eta, extra=extra) vcmat = NULL # No theory as of yet to compute the vcmat if (everything) list(eta=eta, mu=mu, value=value, vcmat=vcmat) else value } setMethod("calibrate", "qrrvglm", function(object, ...) calibrate.qrrvglm(object, ...)) VGAM/R/build.terms.vlm.q0000644000176000001440000000455112136651110014445 0ustar ripleyusers# These functions are # Copyright (C) 1998-2013 T.W. Yee, University of Auckland. # All rights reserved. if (!isGeneric("terms")) setGeneric("terms", function(x, ...) standardGeneric("terms")) terms.vlm = function(x, ...) { v = x@terms if (!length(v)) stop("terms slot is empty") v = v$terms if (!length(v)) stop("no terms component") v } setMethod("terms", "vlm", function(x, ...) terms.vlm(x, ...)) Build.terms.vlm = function(x, coefs, cov = NULL, assign, collapse = TRUE, M, dimname=NULL, coefmat = NULL) { cov.true = !is.null(cov) if (collapse) { fit = matrix(x %*% coefs, ncol=M, byrow=TRUE) dimnames(fit) = dimname if (M==1) fit = c(fit) if (cov.true) { var = ((x %*% cov) * x) %*% rep(1, length(coefs)) list(fitted.values = fit, se.fit = if (M==1) c(sqrt(var)) else matrix(sqrt(var), ncol=M, byrow=TRUE, dimnames=dimname)) } else { fit } } else { constant = attr(x, "constant") if (!is.null(constant)) { constant = as.vector( t(coefmat) %*% constant ) } if (missing(assign)) assign = attr(x, "assign") if (is.null(assign)) stop("Need an 'assign' list") fit = array(0, c(nrow(x), length(assign)), list(dimnames(x)[[1]], names(assign))) if (cov.true) se = fit TL = sapply(assign, length) simple = TL == 1 complex = TL > 1 if (any(simple)) { asss = unlist(assign[simple]) ones = rep(1, nrow(x)) fit[, simple] = x[, asss] * outer(ones, coefs[asss]) if (cov.true) se[,simple] = abs(x[,asss]) * outer(ones, sqrt(diag(cov))[asss]) } if (any(complex)) { assign = assign[complex] for(term in names(assign)) { TT = assign[[term]] xt = x[, TT] fit[, term] = xt %*% coefs[TT] if (cov.true) se[, term] = sqrt(drop(((xt %*% cov[TT, TT]) * xt) %*% rep(1, length(TT)))) } } attr(fit, "constant") = constant if (cov.true) list(fitted.values = fit, se.fit = se) else fit } } VGAM/R/bAIC.q0000644000176000001440000000761712136651110012164 0ustar ripleyusers# These functions are # Copyright (C) 1998-2013 T.W. Yee, University of Auckland. # All rights reserved. if (TRUE) { if (!isGeneric("AIC")) setGeneric("AIC", function(object, ..., k = 2) standardGeneric("AIC"), package = "VGAM") AICvlm = function(object, ..., k = 2) { estdisp = object@misc$estimated.dispersion no.dpar = if (length(estdisp) && is.logical(estdisp) && estdisp) length(object@misc$dispersion) else 0 -2 * logLik.vlm(object, ...) 
+ k * (length(coefvlm(object)) + no.dpar) } AICvgam = function(object, ..., k = 2) { estdisp = object@misc$estimated.dispersion no.dpar = if (length(estdisp) && is.logical(estdisp) && estdisp) length(object@misc$dispersion) else 0 nldf = if (is.Numeric(object@nl.df)) sum(object@nl.df) else 0 -2 * logLik.vlm(object, ...) + k * (length(coefvlm(object)) + no.dpar + nldf) } AICrrvglm = function(object, ..., k = 2) { estdisp = object@misc$estimated.dispersion no.dpar = if (length(estdisp) && is.logical(estdisp) && estdisp) length(object@misc$dispersion) else 0 szero = object@control$szero MMM = object@misc$M Rank = object@control$Rank elts.tildeA = (MMM - Rank - length(szero)) * Rank -2 * logLik.vlm(object, ...) + k * (length(coefvlm(object)) + no.dpar + elts.tildeA) } AICqrrvglm = function(object, ..., k = 2) { estdisp = object@misc$estimated.dispersion no.dpar = if (length(estdisp) && is.logical(estdisp) && estdisp) length(object@misc$dispersion) else 0 szero = object@control$szero MMM = object@misc$M Rank = object@control$Rank elts.tildeA = (MMM - Rank - length(szero)) * Rank EqualTolerances = object@control$EqualTolerances ITolerances = object@control$ITolerances if (!(length(EqualTolerances) == 1 && is.logical(EqualTolerances))) stop("could not determine whether the fitted object used an ", "equal-tolerances assumption based on ", "argument 'EqualTolerances'") if (!(length(ITolerances) == 1 && is.logical(ITolerances))) stop("could not determine whether the fitted object used an ", "equal-tolerances assumption based on argument 'ITolerances'") NOS = if (length(object@y)) ncol(object@y) else MMM MSratio = MMM / NOS # First value is g(mean) = quadratic form in l if (round(MSratio) != MSratio) stop("'MSratio' is not an integer") elts.D = ifelse(ITolerances || EqualTolerances, 1, NOS) * Rank*(Rank+1)/2 deviance(object, ...) + k * (length(coefvlm(object)) + no.dpar + elts.tildeA + elts.D) } setMethod("AIC", "vlm", function(object, ..., k = 2) AICvlm(object, ..., k = k)) setMethod("AIC", "vglm", function(object, ..., k = 2) AICvlm(object, ..., k = k)) setMethod("AIC", "vgam", function(object, ..., k = 2) AICvgam(object, ..., k = k)) setMethod("AIC", "rrvglm", function(object, ..., k = 2) AICrrvglm(object, ..., k = k)) setMethod("AIC", "qrrvglm", function(object, ..., k = 2) AICqrrvglm(object, ..., k = k)) } if (FALSE) { AICvglm = function(object, ..., k = 2) { crit = logLik.vlm(object, ...) -2 * crit + k * length(coef(object)) } AICrrvglm = function(object, ..., k = 2) { stop("not working yet") crit = logLik.vlm(object) sign = -2 if (!length(crit) || !is.numeric(crit)) { crit = deviance(object) sign = 1 } if (!length(crit) || !is.numeric(crit)) stop("cannot get at the deviance or loglikelihood of the object") sign * crit + 2 * (length(coef(object)) + object@control$rank * (object@misc$M - object@control$rank)) } setMethod("AIC", signature(object = "vglm"), function(object, ..., k = 2) AICvglm(object, ..., k = k)) setMethod("AIC", signature(object = "rrvglm"), function(object, ..., k = 2) AICrrvglm(object, ..., k = k)) } VGAM/R/attrassign.R0000644000176000001440000000147612136651110013543 0ustar ripleyusers# These functions are # Copyright (C) 1998-2013 T.W. Yee, University of Auckland. # All rights reserved. 
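# A minimal sketch of what attrassigndefault() below computes: it splits
# the columns of a model matrix by the term that generated them, via the
# matrix's "assign" attribute.  The data frame 'dd' and fit 'fit' are
# hypothetical; the block is wrapped in if (FALSE) so it never runs at
# source time.
if (FALSE) {
  dd <- data.frame(y = rnorm(12), x2 = runif(12), ff = gl(3, 4))
  fit <- lm(y ~ x2 + ff, data = dd)
  attrassigndefault(model.matrix(fit), terms(fit))
  # $`(Intercept)`: 1   $x2: 2   $ff: 3 4
}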
attrassignlm <- function(lmobj) { attrassign(model.matrix(lmobj),terms(lmobj)) } attrassigndefault <- function(mmat,tt) { if (!inherits(tt,"terms")) stop("need terms object") aa<-attr(mmat,"assign") if (is.null(aa)) stop("argument is not really a model matrix") ll<-attr(tt,"term.labels") if (attr(tt,"intercept")>0) ll<-c("(Intercept)",ll) aaa<-factor(aa,labels=ll) split(order(aa),aaa) } if(!isGeneric("attrassign")) setGeneric("attrassign", function(object, ...) standardGeneric("attrassign")) setMethod("attrassign", "lm", function(object, ...) attrassignlm(object, ...)) VGAM/R/add1.vglm.q0000644000176000001440000000015512136651110013171 0ustar ripleyusers# These functions are # Copyright (C) 1998-2013 T.W. Yee, University of Auckland. # All rights reserved. VGAM/R/aamethods.q0000644000176000001440000002604612136651110013370 0ustar ripleyusers# These functions are # Copyright (C) 1998-2013 T.W. Yee, University of Auckland. # All rights reserved. is.Numeric <- function(x, allowable.length = Inf, integer.valued = FALSE, positive = FALSE) if (all(is.numeric(x)) && all(is.finite(x)) && (if (is.finite(allowable.length)) length(x) == allowable.length else TRUE) && (if (integer.valued) all(x == round(x)) else TRUE) && (if (positive) all(x>0) else TRUE)) TRUE else FALSE VGAMenv <- new.env() .onLoad <- function(lib, pkg) require(methods) # 25/1/05 if (!any(search() == "package:methods")) library(methods) if (!any(search() == "package:splines")) require(splines) .VGAM.prototype.list = list( "constraints" = expression({}), "fini" = expression({}), "first" = expression({}), "initialize" = expression({}), "last" = expression({}), "middle" = expression({}), "middle2" = expression({}), "deriv" = expression({}), "weight" = expression({})) setClass("vglmff", representation( "blurb" = "character", "constraints" = "expression", "deviance" = "function", "fini" = "expression", "first" = "expression", "infos" = "function", # Added 20101203 "initialize" = "expression", "last" = "expression", "linkfun" = "function", "linkinv" = "function", "loglikelihood"= "function", "middle" = "expression", "middle2" = "expression", "summary.dispersion" = "logical", "vfamily" = "character", "deriv" = "expression", "weight" = "expression"), # "call" prototype = .VGAM.prototype.list) valid.vglmff = function(object) { compulsory = c("initialize", "weight", "deriv", "linkinv") for(ii in compulsory) { if (!length(slot(object, ii))) stop("slot ", ii, " is empty") } if (length(as.list(object@linkinv)) != 3) stop("wrong number of arguments in object@linkinv") } if (FALSE) setValidity("vglmff", valid.vglmff) show.vglmff <- function(object) { f <- object@vfamily if (is.null(f)) stop("not a VGAM family function") nn <- object@blurb cat("Family: ", f[1], "\n") if (length(f) > 1) cat("Informal classes:", paste(f, collapse = ", "), "\n") cat("\n") for(ii in 1:length(nn)) cat(nn[ii]) cat("\n") } setMethod("show", "vglmff", function(object) show.vglmff(object = object)) setClass("vlmsmall", representation( "call" = "call", "coefficients" = "numeric", "constraints" = "list", "control" = "list", "criterion" = "list", "fitted.values"= "matrix", "misc" = "list", "model" = "data.frame", "na.action" = "list", "post" = "list", "preplot" = "list", "prior.weights"= "matrix", "residuals" = "matrix", "weights" = "matrix", "x" = "matrix", "y" = "matrix"), ) setClass("vlm", representation( "assign" = "list", "callXm2" = "call", "contrasts" = "list", "df.residual" = "numeric", "df.total" = "numeric", "dispersion" = "numeric", "effects" = "numeric", "offset" = 
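# A minimal sketch of the "vglmff" class in use.  Every VGAM family
# function (e.g. poissonff(), defined elsewhere in the package) returns
# an object of this class; wrapped in if (FALSE) so it never runs at
# source time.
if (FALSE) {
  fam <- poissonff()
  fam@vfamily      # "poissonff"
  slotNames(fam)   # the slots declared in setClass("vglmff") above
  fam              # show() dispatches to show.vglmff() below
}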
"matrix", "qr" = "list", "R" = "matrix", "rank" = "integer", "rss" = "numeric", "smart.prediction" = "list", "terms" = "list", "Xm2" = "matrix", "Ym2" = "matrix", "xlevels" = "list" ), contains = "vlmsmall" ) setClass("vglm", representation( "extra" = "list", "family" = "vglmff", "iter" = "numeric", "predictors" = "matrix"), contains = "vlm") setClass("vgam", representation( "Bspline" = "list", # each [[i]] is a "vsmooth.spline.fit" "nl.chisq" = "numeric", "nl.df" = "numeric", "spar" = "numeric", "s.xargument" = "character", "var" = "matrix"), contains = "vglm") setClass("summary.vgam", representation( anova = "data.frame", cov.unscaled = "matrix", correlation = "matrix", df = "numeric", pearson.resid = "matrix", sigma = "numeric"), prototype(anova=data.frame()), contains = "vgam") setClass("summary.vglm", representation( coef3 = "matrix", cov.unscaled = "matrix", correlation = "matrix", df = "numeric", pearson.resid = "matrix", sigma = "numeric"), contains = "vglm") setClass("summary.vlm", representation( coef3 = "matrix", cov.unscaled = "matrix", correlation = "matrix", df = "numeric", pearson.resid = "matrix", sigma = "numeric"), contains = "vlm") setClass(Class = "rrvglm", contains = "vglm") if (FALSE) setClass("qrrvglm", representation( "assign" = "list", "call" = "call", "coefficients" = "numeric", "constraints" = "list", "contrasts" = "list", "control" = "list", "criterion" = "list", "df.residual" = "numeric", "df.total" = "numeric", "dispersion" = "numeric", "extra" = "list", "family" = "vglmff", "fitted.values"= "matrix", "iter" = "numeric", "misc" = "list", "model" = "data.frame", "na.action" = "list", "offset" = "matrix", "post" = "list", "predictors" = "matrix", "preplot" = "list", "prior.weights"= "matrix", "residuals" = "matrix", "smart.prediction" = "list", "terms" = "list", "weights" = "matrix", "x" = "matrix", "Xm2" = "matrix", "Ym2" = "matrix", "xlevels" = "list", "y" = "matrix") ) setClass(Class = "qrrvglm", contains = "rrvglm") if (FALSE) setAs("qrrvglm", "vglm", function(from) new("vglm", "extra"=from@extra, "family"=from@family, "iter"=from@iter, "predictors"=from@predictors, "assign"=from@assign, "call"=from@call, "coefficients"=from@coefficients, "constraints"=from@constraints, "contrasts"=from@contrasts, "control"=from@control, "criterion"=from@criterion, "df.residual"=from@df.residual, "df.total"=from@df.total, "dispersion"=from@dispersion, "effects"=from@effects, "fitted.values"=from@fitted.values, "misc"=from@misc, "model"=from@model, "na.action"=from@na.action, "offset"=from@offset, "post"=from@post, "preplot"=from@preplot, "prior.weights"=from@prior.weights, "qr"=from@qr, "R"=from@R, "rank"=from@rank, "residuals"=from@residuals, "rss"=from@rss, "smart.prediction"=from@smart.prediction, "terms"=from@terms, "weights"=from@weights, "x"=from@x, "xlevels"=from@xlevels, "y"=from@y)) setClass("rcim0", representation(not.needed = "numeric"), contains = "vglm") # Added 20110506 setClass("rcim", representation(not.needed = "numeric"), contains = "rrvglm") setClass("grc", representation(not.needed = "numeric"), contains = "rrvglm") setMethod("summary", "rcim", function(object, ...) summary.rcim(object, ...)) setMethod("summary", "grc", function(object, ...) summary.grc(object, ...)) if (FALSE) { setClass("vfamily", representation("list")) } if (!isGeneric("Coef")) setGeneric("Coef", function(object, ...) standardGeneric("Coef"), package = "VGAM") if (!isGeneric("Coefficients")) setGeneric("Coefficients", function(object, ...) 
standardGeneric("Coefficients"), package = "VGAM") if (!isGeneric("logLik")) setGeneric("logLik", function(object, ...) standardGeneric("logLik"), package = "VGAM") if (!isGeneric("plot")) setGeneric("plot", function(x, y, ...) standardGeneric("plot"), package = "VGAM") if (!isGeneric("vcov")) setGeneric("vcov", function(object, ...) standardGeneric("vcov"), package = "VGAM") setClass("uqo", representation( "lv" = "matrix", "extra" = "list", "family" = "vglmff", "iter" = "numeric", "predictors" = "matrix"), contains = "vlmsmall") setClass(Class = "cao", contains = "vgam") if (!isGeneric("lvplot")) setGeneric("lvplot", function(object, ...) standardGeneric("lvplot"), package = "VGAM") if (!isGeneric("ccoef")) setGeneric("ccoef", function(object, ...) standardGeneric("ccoef"), package = "VGAM") setGeneric("coef", function(object, ...) standardGeneric("coef"), package = "VGAM") setGeneric("coefficients", function(object, ...) standardGeneric("coefficients"), package = "VGAM") if (!isGeneric("df.residual")) setGeneric("df.residual", function(object, ...) standardGeneric("df.residual"), package = "VGAM") if (!isGeneric("fitted")) setGeneric("fitted", function(object, ...) standardGeneric("fitted"), package = "VGAM") if (!isGeneric("fitted.values")) setGeneric("fitted.values", function(object, ...) standardGeneric("fitted.values"), package = "VGAM") if (!isGeneric("model.matrix")) setGeneric("model.matrix", function(object, ...) standardGeneric("model.matrix")) if (!isGeneric("model.frame")) setGeneric("model.frame", function(formula, ...) standardGeneric("model.frame")) if (!isGeneric("predict")) setGeneric("predict", function(object, ...) standardGeneric("predict")) if (!isGeneric("resid")) setGeneric("resid", function(object, ...) standardGeneric("resid")) if (!isGeneric("residuals")) setGeneric("residuals", function(object, ...) standardGeneric("residuals"), package = "VGAM") if (!isGeneric("weights")) setGeneric("weights", function(object, ...) standardGeneric("weights"), package = "VGAM") if (!isGeneric("AIC")) setGeneric("AIC", function(object, ..., k=2) standardGeneric("AIC"), package = "VGAM") if (!isGeneric("formula")) setGeneric("formula", function(x, ...) standardGeneric("formula"), package = "VGAM") if (!isGeneric("case.names")) setGeneric("case.names", function(object, ...) standardGeneric("case.names"), package = "VGAM") if (!isGeneric("variable.names")) setGeneric("variable.names", function(object, ...) standardGeneric("variable.names"), package = "VGAM") if (!isGeneric("summary")) setGeneric("summary", function(object, ...) standardGeneric("summary"), package = "VGAM") VGAM/R/Links.R0000644000176000001440000001043512136651110012437 0ustar ripleyusers# These functions are # Copyright (C) 1998-2013 T.W. Yee, University of Auckland. # All rights reserved. 
dtheta.deta <- function(theta, link = "identity", earg = list(theta = theta, # Needed inverse = FALSE, deriv = 1, short = TRUE, tag = FALSE)) { function.name <- link function.name2 <- attr(earg, "function.name") if (length(function.name2) && function.name != function.name2) { warning("apparent conflict in name of link function") } earg[["theta"]] <- theta # New data earg[["deriv"]] <- 1 # New do.call(what = function.name, args = earg) } d2theta.deta2 <- function(theta, link = "identity", earg = list(theta = theta, # Needed inverse = FALSE, deriv = 2, short = TRUE, tag = FALSE)) { function.name <- link function.name2 <- attr(earg, "function.name") if (length(function.name2) && function.name != function.name2) warning("apparent conflict in name of link function in D2theta.deta2") earg[["theta"]] <- theta # New data earg[["deriv"]] <- 2 # New do.call(what = function.name, args = earg) } theta2eta <- function(theta, link = "identity", earg = list(theta = NULL)) { function.name <- link function.name2 <- attr(earg, "function.name") if (length(function.name2) && function.name != function.name2) warning("apparent conflict in name of link function") earg[["theta"]] <- theta # New data do.call(what = function.name, args = earg) } eta2theta <- function(theta, # This is really eta. link = "identity", earg = list(theta = NULL)) { orig.earg <- earg if (!is.list(earg)) stop("argument 'earg' is not a list") level1 <- length(earg) > 3 && length(intersect(names(earg), c("theta", "inverse", "deriv", "short", "tag"))) > 3 if (level1) earg <- list(oneOnly = earg) llink <- length(link) if (llink != length(earg)) stop("length of argument 'link' differs from ", "length of argument 'earg'") if (llink == 0) stop("length(earg) == 0 not allowed") if (llink == 1) { # ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, if (is.list(earg[[1]])) earg <- earg[[1]] function.name <- link function.name2 <- attr(earg, "function.name") # May be, e.g., NULL if (length(function.name2) && function.name != function.name2) warning("apparent conflict in name of link function") earg[["theta"]] <- theta # New data earg[["inverse"]] <- TRUE # New return(do.call(what = function.name, args = earg)) } if (!is.matrix(theta) && length(theta) == length(earg)) theta <- rbind(theta) ans <- NULL for(iii in 1:llink) { use.earg <- earg[[iii]] use.earg[["inverse"]] <- TRUE # New use.earg[["theta"]] <- theta[, iii] # New use.function.name <- link[iii] ans <- cbind(ans, do.call(what = use.function.name, args = use.earg)) } if (length(orig.earg) == ncol(ans) && length(names(orig.earg)) > 0 && ncol(ans) > 0) colnames(ans) <- names(orig.earg) ans } namesof <- function(theta, link = "identity", earg = list(tag = tag, short = short), tag = FALSE, short = TRUE) { earg[["theta"]] <- as.character(theta) earg[["tag"]] <- tag earg[["short"]] <- short do.call(link, args = earg) } link2list <- function(link ) { ans <- link fun.name <- as.character(ans[[1]]) big.list <- as.list(as.function(get(fun.name))) big.list[[length(big.list)]] <- NULL # Kill the body of code t.index <- pmatch(names(ans[-1]), names(big.list)) t.index if (any(is.na(t.index))) stop("in '", fun.name, "' could not match argument(s) ", paste('"', names(ans[-1])[is.na(t.index)], '"', sep = "", collapse = ", ")) Big.list <- big.list trivial.call <- (length(t.index) == 0) if (!trivial.call) { Big.list[t.index] <- ans[-1] } attr(Big.list, "function.name") <- fun.name Big.list } VGAM/NEWS0000755000176000001440000017123612136651105011550 0ustar ripleyusers ************************************************** * * 
* 0.9 SERIES NEWS * * * ************************************************** CHANGES IN VGAM VERSION 0.9-1 NEW FEATURES o A companion package, called \pkg{VGAMdata}, is new. Some large data sets previously in \pkg{VGAM} have been shifted there, e.g., xs.nz and ugss. In \pkg{VGAMdata} there is (new) oly12 and students.tw. o pnorm2() argument names have changed from 'sd1' to 'var1', etc. and 'rho' to 'cov12'. See documentation. Warning given if it returns any negative value. o Introduction of g-type arguments for grid search. o Improved initial values for: lomax(). o Argument 'bred' works for poissonff(). o latvar() generic available, identical to lv(). But the latter will be withdrawn soon. o Rank() generic available for RR-VGLMs, QRR-VGLMs, CAO models. o New function: pgamma.deriv(), pgamma.deriv.unscaled(), vlm2lm.model.matrix(). o New VGAM family functions: posbernoulli.b(), posbernoulli.t(), posbernoulli.tb(tau = 2 or 3). These provide estimates of N as well as its standard error. Also, truncgeometric() and truncweibull() are new. Also, SUR() is new. Also, binom2.rho.ss() does not work yet. o New argument 'matrix.out = FALSE' for constraints.vlm(). o cm.vgam() has a few more arguments to provide more flexibility. But there should be no changes for VGAM users at this stage. o Renamed functions: confint_rrnb() is now renamed to Confint.rrnb() and confint_nb1() is now renamed to Confint.nb1(). o Some changes to component names returned by Confint.rrnb() and Confint.nb1(): $CI. and $SE. are uppercase. o Some zero-inflated VGAM family functions return a "vglm" object with @misc$pstr0 for the estimated probability of a structural zero. o New data set: olym12. Note that Students.tw is earmarked for \pkg{VGAMdata}. o Data sets renamed: olympic renamed to olym08. o Qvar() has a 'which.eta = 1' argument specifying which linear predictor to use. So quasi-variances are now available to models with M > 1 linear predictors. o Tested okay on R 3.0.0. BUG FIXES and CHANGES o VGAM now depends on R >= 2.15.1. o Fortran array bounds problems (picked up by AddressSanitizer) have been fixed. o All "no visible binding for global variables" warnings have been suppressed. o vgam() with a s(spar = myspar) term should run, and if myspar is extracted from a previous vgam() model then the two models should effectively be the same. o summaryvgam() did not calculate or print out all the p-values for testing linearity. o fnormal1()@initialize was faulty wrt lm.wfit(). o zageometric() and zigeometric() handle multiple responses. o mlogit(inverse = TRUE) and mlogit(inverse = FALSE) were switched. Now multinomial() makes use of mlogit(). mlogit() now calls care.exp() to avoid overflow and underflow; this stops multinomial() from returning a NA as a fitted value if abs(eta) is very large. o arwz2wz() introduced to simplify multiple responses working weight matrices (wrt construction). o Renamed functions: dhuggins91() is now dposbern(), huber() is now huber2(), ei() is now eifun(), eij() is now eijfun(), rss.vgam() is now ResSS.vgam(). o fisherz(theta) was wrong. Corrected, then replaced by atanh(theta). o [dpq]dagum(x), [dpq]lomax(x), [dpq]sinmad(x), etc. handled correctly for x = 0, Inf, -Inf, NaN, NA. o qdagum(x) failed due to 'Scale' [thanks to Alena Tartalova]. o Arguments renamed: 'intercept.apply' renamed to 'apply.parint', 'Norrr' renamed to 'noRRR' (warning/error message issued), 'nowarning' renamed to 'noWarning' in vglm.control(). o seq2binomial()@loglikelihood includes the binomial lchoose() constants. 
o qgev() bug fixed [thanks to Alex Cannon], and a qgpd() bug too.

o cao() usually produces fewer error/warning messages.

o Data sets corrected for errors: chinese.nz.

o Data set changes: gew had incorrect y1 and y2 values, and variables
  x1 to x4 have been renamed to value.g, capital.w, etc.
  The year variable has been added.


CHANGES IN VGAM VERSION 0.9-0

NEW FEATURES

o Major change: VGAM family functions no longer have arguments such as
  earg, escale, eshape, etc. Arguments such as offset that used to be
  passed in via those arguments can now be given directly through the
  link function. For example,
  gev(lshape = "logoff", eshape = list(offset = 0.5)) is replaced by
  gev(lshape = logoff(offset = 0.5)). The @misc slot retains the $link
  and $earg components, however, the latter is in a different format.
  Functions such as dtheta.deta(), d2theta.deta2(), eta2theta(),
  theta2eta() have been modified. Link functions have been simplified
  somewhat. The casual user will probably not be affected, but
  programmers will. Sorry about this!

o New VGAM family functions: [dpqr]gompertz(), [dpqr]gumbelII(),
  [dpr]lindley(), [dpqr]makeham(), [dpqr]perks().

o df.residual() supports a new formula/equation for 'type = "lm"'.

o garma("reciprocal") supported.

o is.parallel() gives a summary of constraint matrices.

o Improved family functions: these can now handle multiple responses:
  benini(), chisq(), erlang(), exponential(), gamma1(), geometric(),
  gpd(), inv.gaussianff(), logff(), maxwell(), rayleigh(),
  yulesimon(), zetaff().

o New data set: hormone
  [http://www.stat.tamu.edu/~carroll/data/hormone_data.txt].

o If a factor response is not ordered then a warning is issued for
  acat(), cratio(), cumulative() and sratio().

o New dpqr-type functions: [dpqr]perks(), [dpqr]mperks(),
  [dpqr]mbeard().

o Argument 'parallel' added to gamma2().

o New link functions: mlogit().

BUG FIXES and CHANGES

o zibinomial() had 1 wrong element in the EIM; one of the corrections
  of VGAM 0.8-4 was actually incorrect.

o zibinomial() blurb was wrong: previously it was
  "(1 - pstr0) * prob / (1 - (1 - prob)^w)", where prob is the mean of
  the ordinary binomial distribution. Now it is "(1 - pstr0) * prob".

o betaff() no longer has "A" and "B" arguments; they are now extracted
  from "lmu = elogit(min = A, max = B)".

o binom2.rho() has "lmu" as its new second argument.

o logistic2() has zero = -2 as the default, and can handle multiple
  responses.

o gengamma() returned the wrong mean (picked up by Andrea Venturini):
  not b * k but b * gamma(k + 1 / d) / gamma(k).

o tobit.Rd now states that vector values for 'Lower' and 'Upper' are
  permitted. Also, @misc$Lower and @misc$Upper are matrices of the
  same dimension as the response.

o constraints.vlm(type = c("vlm", "lm")) has been changed to
  constraints.vlm(type = c("lm", "term")) [respectively].

o Rcam() renamed to Rcim(), and rcam() renamed to rcim().
  Class "rcam" changed to "rcim".

o Days changed from "Monday" to "Mon" in all crash data frames, etc.

o w.wz.merge() written to handle the working weights for multiple
  responses. w.y.check() written to check the integrity of the prior
  weights and response.

o Argument 'sameScale' changed to 'eq.scale'; 'quantile.probs' in
  negbinomial-type families changed to 'probs.y'.

o No more warnings: dirmultinomial().

o Renamed arguments: benini(earg <- eshape),
  binormal(equalmean <- eq.mean), binormal(equalsd <- eq.sd).

o dirmultinomial() can handle a 1-row response [thanks to Peng Yu].

o weibull() gives improved warnings re. the shape parameter wrt
  regularity conditions.
o The 12 most time-consuming examples have been placed in a \dontrun{} to save time. o Argument "prob.x" renamed to "probs.x". o Argument "hbw" removed from iam(). o Argument 'name' is passed into .C and .Fortran() [in dotC() and dotFortran()] is now okay because the first argument is unnamed. CHANGES IN VGAM VERSION 0.8-7 NEW FEATURES o Modified VGAM family functions: genbetaII()@initialize has been improved, as well as those special cases of that distribution (such as sinmad, lomax, paralogistic, dagum, etc.). o Argument 'lapred.index' added to model.matrix(). o npred() is now defined as a generic function (returns M). o hatvalues() and hatplot() written for vglm() objects. o The argument 'qr.arg' is set TRUE now by default in vglm(). o df.residual() supports the argument 'type = c("vlm", "lm")'. o Argument 'nowarning' added to vglm.control(). o New data set: ucberk. o Improved functions: rposbinom(), rposgeom(), rposnegbin(), rpospois(). o Tested okay on R 2.15.0. BUG FIXES and CHANGES o Labelling of the linear predictors for sratio(), cratio() etc. was faulty. o pbetabinom.ab() did not recycle shape1 correctly [found by David Venet]. o Arguments lower.tail and log.p not supported (temporarily) in pposbinom() and qposbinom(). CHANGES IN VGAM VERSION 0.8-6 NEW FEATURES o Modified VGAM family functions: sinmad()@initialize has been improved. BUG FIXES and CHANGES o VGAM now depends on R >= 2.14.0. o Trying to eliminate some residual errors with the NAMESPACE. CHANGES IN VGAM VERSION 0.8-5 NEW FEATURES o New VGAM family functions: negbinomial.size(), zabinomial(dpqr), zageometric(dpqr), [dpqr]posgeom(). o New link functions: nbcanlink(). o Modified VGAM family functions: posnegbinomial(), zanegbinomial() and zinegbinomial() use the nsimEIM argument; zipoisson() handles a matrix response; all [dpqr]zi-type functions handle zero-deflation, normal1() can model the variance too as the 2nd parameter. o Rudimentary methods functions for lrtest() and update(), based on packages lmtest and base. o The VGAM family functions for genetic models have been improved wrt initial values. o New data sets: xs.nz. BUG FIXES and CHANGES o In anticipation for R version 2.15.0, VGAM imports from stats4 'coef', 'plot', 'summary', 'vcov'. Calls to 'print' have been replaced by 'show' since VGAM uses S4 methods. Numerous NAMESPACE changes have been made. No more warnings during checking and installation! o Labelling in summary() of vglm() objects changed. It now closely follows glm(). In particular, it has changed from c("Value", "Std. Error", "t value") to c("Estimate", "Std. Error", "z value"). Note that "z value" might change later to, e.g., "Wald". o Zero-inflated and zero-altered functions have renamed and reordered arguments. Ouch! These include 'pstr0' for probability of a structural 0 [zero-inflated], and 'pobs0' for probability of an observed 0 [zero-altered]. For example, argument lpstr0 replaces lphi in zipoisson(). The order of these arguments, including the respective dpqr-type functions, may have changed too. o zapoisson() now implements Fisher scoring. o zipoissonff() had the wrong sign for the non-diagonal EIM element. o nobs() is now defined as a generic function (needed for older versions of R---versions 2-12.2 or earlier, actually). o Data sets renamed: uscrime and usgrain renamed to crime.us and grain.us; bminz renamed to bmi.nz, nzc renamed to chinese.nz, nzmarital renamed to marital.nz. 
o Improved family functions: genbetaII(), betaII(), sinmad(), dagum(), lomax(), invlomax(), fisk(), invparalogistic(), paralogistic(); wrt fitted values (range checks in place now). These functions have many argument names changed, e.g., link.a is now lshape1.a, init.a is now ishape1.a. Also, some default initial values have changed from 1 to 2. o Argument names changed (Ouch!): q.lag.ma changed to q.ma.lag in garma(). CHANGES IN VGAM VERSION 0.8-4 NEW FEATURES o VGAM family functions renamed (Ouch!): 1. 'betabinom.ab' renamed to 'betabinomial.ab'. o Other functions renamed (Ouch!): 1. '[dpr]betabin' renamed to '[dpr]betabinom' etc.; 2. '[dpr]betabin.ab' renamed to '[dpr]betabinom.ab' etc.; o Slot names changed (Ouch!): 1. 'link' renamed to 'linkfun' for class "vglmff"; 2. 'inverse' renamed to 'linkinv' for class "vglmff". o Extra arguments added to freund61(). o New VGAM family functions: abbott(), zigeometric(dpqr), huber1(). o New functions: [p]polono(), depvar() generic for the dependent (response) variable, Qvar() and explink() for quasi-variances. o Improved functions: [d]polono(), [dr]betabin() handles rho = 0 (suggested by Peng Yu). o Improved family functions: normal1() handles matrix 'weights'. o Defaults changed: [dr]betabin(rho = 0). o New methods functions: nobs(), nvar(), depvar(). o Renaming: fitted.vlm() is now fittedvlm(), persp.qrrvglm() is now perspqrrvglm(), predict.qrrvglm() is now predictqrrvglm(), predict.vglm() is now predictvglm(). o New data sets: finney44. o VGAM now depends on R >= 2.11.1. o Tested okay on R 2.14.0. BUG FIXES o zibinomial() had 2 wrong elements in the EIM, thanks to Alan Welsh for picking this up. o margeff() for cumulative() was faulty. o blurb slot of binormal() was faulty. o betabinomial() did not return the estimated rho in @misc$rho as did betabinomial.ab(). o kumar() did not initialize well with non-integer prior weights. o rdagum() did not handle the 'scale' argument correctly. o codes() in s() is defunct so it has been replaced. CHANGES IN VGAM VERSION 0.8-3 NEW FEATURES o Argument names changed (Ouch!): 1. 'method.init' renamed to 'imethod'; 2. 'k' renamed to 'size' in negbinomial(), zanegbinomial(), posnegbinomial(), and zinegbinomial(): e.g., 'lk' renamed to 'lsize', 'ik' renamed to 'isize', etc. o New data sets: hued, huie, huse, ugss, uscrime; usagrain renamed to usgrain. o The "prior.weights" slot is now a "matrix", not "numeric". o [dpr]betabin.ab() now handles size = 0. Thanks to Robert Wolpert for picking up this bug. o New VGAM family functions: [d]binormal(), [dr]huggins91() but this is not working properly, [dpqr]explogarithmic(), polya(). o New functions: [dpqr]tobit(). o Improved family functions: tobit() implements the proper EIM for the standard model, dcennormal1() has some new arguments [and renamed], cennormal1(), cenrayleigh() renamed. o VGAM now depends on R >= 2.10.0. o Rcam(), moffset(), etc. for RCAMs have been improved and modified. o VGAM family functions currently withdrawn: frechet3(). o Tested ok on R 2.13.0. BUG FIXES o tobit()@loglikelihood had omitted the constant in dnorm(). Also, tobit() uses simulated Fisher scoring (for nonstandard model). o moffset() was buggy. o ABO() is a little more robust. o dirichlet()@loglikelihood miscalculated. Ditto for gaussian()@loglikelihood (constants were omitted). Thanks to Arne Henningsen for picking up these bugs. o zipoissonff() did not initialize correctly and labelling was wrong with matrix responses. 
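A minimal sketch of the renamed negative binomial interface described
in the 0.8-3 entries above (editorial, not part of the NEWS file; the
data frame mydat, the covariate x2 and the count response y are
hypothetical, and the argument values are illustrative only):

    # Old style (pre-0.8-3):
    #   negbinomial(lk = "loge", ik = 1, method.init = 2)
    # New style (0.8-3 onwards):
    fit <- vglm(y ~ x2,
                negbinomial(lsize = "loge", isize = 1, imethod = 2),
                data = mydat)
    coef(fit, matrix = TRUE)  # One column per linear predictor
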
CHANGES IN VGAM VERSION 0.8-2 NEW FEATURES o Objects of class "vglmff" have a "infos" slot to give information about the family. o New functions: lambertW(), rcam(), wffc.P3(), wffc.P3star(), confint_rrnb(), confint_nb1(). o New VGAM family functions: binom2.Rho(), [dpqr]expgeometric(), [dpqr]genrayleigh(), [dpqr]huber(), [dpqr]koenker(), studentt[23](), zipoissonff(). o Argument 'imethod' changed to 'method.init' for some families, e.g., cnormal1(), tobit(), weibull(). o Improvements have been made to binom2.rho(). o Improved family functions: negbinomial() has a new argument 'parallel', micmen() has more initial value choices and fitting algorithms, kumar(), studentt() and studentt2() now implement the EIM, normal1() can handle multiple responses. o Argument names changed: 'init.rho' renamed to 'irho' in binom2.rho(), 'a' renamed to 'scale' in rayleigh(), 'Structural.zero' renamed to 'szero' thoughout. o zapoisson() permutes the linear/additive predictors. o Several families such as negbinomial(), z[ai]negbinomial(), zapoisson(), gamma2(), handle zero = c(-2, 3), say, i.e., negative and positive values. o New data sets: about half a dozen road crash data frames of 2009 NZ data. o constraints(vglmfit) now has a 'type' argument that can be fed into the original fit (type = "lm") as the constraints argument. o vchol() takes drastic action to avoid infinite looping: it sets the working weights to be proportional to the order-M diagonal matrix. o lognormal() and lognormal3() now have zero = 2 as the default (was zero = NULL). o Some variable names within grc() changed, e.g., Row. and not Row. o The smartpred package within VGAM has updated poly(), ns(), bs() and scale() for R version 2.12.0 (2010-10-15). Calls to smartpredenv are now VGAM:::smartpredenv. BUG FIXES o VGAM:::VGAMenv is used now to avoid problems locating this environment. o Input of the mustart, etastart and coefstart arguments at the solution should results in only one iteration being needed. o binomialff() and categorical familes (e.g., multinomial) only accept a factor or non-negative counts as the response. This allows the 'weights' vector to have any value really. In the past the denominator of a sample proportion was allowed via the 'weights' argument. o wffc.P1() had some rounding problems, e.g., with 0.280 m. CHANGES IN VGAM VERSION 0.8-1 NEW FEATURES o Most of the Fortran 77 code has been converted to C. This change will be largely hidden from most users but there may be the occasional bug not detected. Much of the heavy work was done by Alvin Sou. o lms.bcn()@loglikelihood incorporates the constants in the log-likelihood. o Also, no more F90 code! This means less portability/platform problems. o bivgamma.mckay, formerly mckaygamma2(), has been modified substantially. o Improvements have been made to simplex() and [dr]simplex() have been written. o Expectile functions for the uniform, normal and exponential distributions: [dpqr]-type functions. o cqo() has EqualTolerances = TRUE and ITolerances = FALSE as the default now. The result is that cqo() should work without the environmental variables being scaled. If it is scaled then setting ITolerances = TRUE will result in greater speed and requiring less memory. o Families that deal with proportions, such as binomialff() and betabinomial(), incorporate weights separately from the weights generated by the response/counts. So the weights argument can now have any positive values. 
o rrvglm(..., Norrr = NULL) can be used so that the reduced-rank
  regression is applied to every variable including the intercept.

o Renaming: ggamma() is now gengamma(), etc.

o Improved functions: negbinomial() has a few new arguments.

BUG FIXES

o Deviance.categorical.data.vgam did not handle small fitted
  probabilities.

o binom2.rho() could produce small negative fitted probabilities.

o seq2binomial() did not initialize 'mvector'.

o zeta() crashed on some platforms.

o cqo() appears to be working again with the new C code.

o cao() is still not working with the new C code.

o zapoisson() did not implement the elambda argument correctly.

o Tested ok on R 2.11.1.


**************************************************
*                                                 *
*               0.7 SERIES NEWS                   *
*                                                 *
**************************************************


CHANGES IN VGAM VERSION 0.7-10

NEW FEATURES

o Surv() renamed to SurvS4(), class "Surv" renamed to "SurvS4".

o coef(summary(vglmObject)) returns a 3-column matrix of estimates,
  standard errors and Wald statistics, rather than coef(vglmObject)
  as in the past.

o Improved VGAM family functions: fff() uses simulated Fisher scoring
  now and has slightly better initial values.

o New VGAM family functions: propodds(reverse) is equivalent to
  cumulative(parallel=TRUE, reverse=reverse) (for convenience only).

o Compatible with R 2.10.1 and the article "The VGAM package for
  categorical data analysis," Journal of Statistical Software, 2010.
  A vignette based on this paper is included.

o Argument w.als renamed to w.aml in amlnormal().

BUG FIXES

o VGAM family functions: fff() had poor initial values.

o betabinomial()@loglikelihood required 'ycounts' to be integer.

o [dpqr]betanorm() were written but not in the NAMESPACE.


CHANGES IN VGAM VERSION 0.7-9

NEW FEATURES

o New functions: margeff() for marginal effects of a vglm()
  "multinomial" or "cumulative" model.

o Almost all VGAM family functions now have a "loglikelihood" slot
  that incorporates any constants in the density function. Hence the
  fitted likelihood may differ by a constant from previous results.
  In particular, models such as multinomial(), cumulative() and
  binom2.or() have this new feature.

o vglm() now has a modified 'xij' argument which implements
  eta-specific covariates. Usage now involves the 'form2' argument,
  and the 'xij' argument does not interfere with constraint matrices.
  Documentation is supplied on the VGAM website, in particular,
  http://www.stat.auckland.ac.nz/~yee/VGAM/doc/xij.pdf

o case.names() and variable.names() methods functions written for
  vglm()/vgam() objects.

o cumulative() has improved initial values, especially for the long
  data format, i.e., when each row of the data frame is an
  observation rather than inputting a matrix of counts.

o rrvglm() handles a factor response without giving a warning.

o New data: olympic.

o testf90.f90 has been renamed to testf90.f95. This may decrease the
  incidence of compiler problems on some platforms (f95 seems more
  popular than f90).

o For cqo() objects: AIC(), resid() have been written.

o Improved functions: negbinomial() default initial values are more
  robust to outliers in the response; betabinomial() and betabin.ab()
  have better initialization and a "loglikelihood" slot that matches
  dbetabin.ab(log=TRUE).

o Renamed VGAM family functions: alsqreg() becomes amlnormal().

o Renamed arguments: lmu replaces link.mu in zibinomial().

o dzeta(p) has changed wrt 'p'.

o The functions summary.lms() and summary.rc.exponential() are no
  longer distributed, to avoid a warning wrt S3 vs S4 methods
  dispatch.
o The VGAM family functions for genetic models have been improved, e.g., some basic error checking. Also some changes in the names of the parameters, e.g., "q" to "pB" for ABO(), plus some switching of the order of the arguments. BUG FIXES o VGAM interferes much less in regard to generic functions such as predict(), fitted(), resid(), wrt other packages and also including base's lm(), glm(), etc. o AIC() method for rrvglm() objects was wrong (did not account for argument 'Structural.zero'). o dzibinom(log=TRUE) was wrong. CHANGES IN VGAM VERSION 0.7-8 NEW FEATURES o [dpqr]benf() written for Benford's distribution. o plog() and dlog() improved. o multinomial() now has a refLevel argument to specify the reference or baseline level of the factor. o binom2.or() has a new argument 'morerobust'. o Renamed arguments in mix2normal1(). o DeLury() written. o [dpqr]-type functions related to the negative binomial distribution have changed wrt argument names and order. o [pq]posnegbin(), [dpqr]zapois(), [dr]zanegbin() written. o [dpqr]zinb() renamed to [dpqr]zinegbin(). o lms.bcn(), lms.bcg(), lms.yjn() have zero=c(1,3) as the new default. This will increase the chances of successive convergence. o Renamed arguments in lms.bcn(), lms.bcg(), lms.yjn(), e.g., link.sigma is now lsigma. Some new arguments added too, e.g., llambda. o Works for R 2.5.0 and later (not 2.4.0 and later). Compatible with R 2.7.2. o Contains Fortran 90 code (since 0.7-7, actually). This will cause problems on older Linux machines without such a compiler. Actually, removing the .f90 file(s) will not be too much of a problem as there is very little F90 code in use by the package at the moment. o New functions: dbinom2.rho(), rbinom2.rho(), dposnegbin(). o New data: wffc, wffc.indiv, wffc.teams, wffc.nc. o Improved functions: binom2.rho(), micmen(), negbinomial(), poissonff(), posnegbinomial(), zanegbinomial(), o A new form2 argument has been added to vglm(). VGAM family functions such as micmen() have the regressor inputted using form2 now, rather than the regressor argument. The resulting usage is a more elegant. Fitted objects have a few more slots and formulas put in set places on the object. o AIC() methods functions has been modified. BUG FIXES o The big bug whereby vgam(... ~ s(x), ... ) did not work under Windows was due to a single array element that was not initialized. Evidently, Linux compilers seemed to have set it to zero. Funny, the code has worked for decade or so... o dposbinom() was buggy at x=0. Also it now handles size=0 and prob=0 or prob=1. o pzipois() was buggy at x<0. o dbetabin.ab(log=T) was incorrect outside its support. o zipf() did not handle 0 < s < 1. o data(ruge) was faulty. o summary(rrvglmObject) failed. o persp.cao(ylim=) did not work. o plotvgam() failed when se=TRUE and which.cf was specified. CHANGES IN VGAM VERSION 0.7-7 NEW FEATURES o Labelling changes: binom2.or() uses "oratio" instead of "OR" (stands for the odds ratio). o New VGAM family functions: zipebcom(). o New functions: dbinom2.or(), rbinom2.or(). o binom2.or() has new arguments 'imu1, 'imu2' and 'ioratio' for inputting optional marginal probabilities and odds ratio. The third element of the score vector uses a new formula. o [dpqr]zinb() has arguments prob and munb set to NULL by default. o Compatible with R 2.7.0. BUG FIXES o gaussianff()@loglikelihood was buggy. o all(trivial.constraints(Blist)) changed to all(trivial.constraints(Blist) == 1) to avoid a warning in R 2.7.0. Ditto for 'all(findex)' and 'any(diff(Alphavec))'. 
o qtriangle(0.3, theta=0.3) used to fail. o gharmonic() handles a negative argument s. CHANGES IN VGAM VERSION 0.7-6 NEW FEATURES o dpolono() has a new argument 'bigx' which implements an approximation. It is for handling large values of x. o vglm() and vgam() now create the response and model matrices etc. in the same way as glm(). A consequence is that the response does not have to be "numeric" as in lm(), e.g., a factor response is now permitted. o New VGAM family functions: alaplace1(), alaplace2(), alaplace3(dpqr), amlbinomial(), amlexponential(), amlpoisson(), amh(), lqnorm(), mbinomial(), scumulative(). o Other VGAM family functions with argument names changed or added: lms.yjn2(). o These VGAM family functions have been improved: alsqreg() [parallel option, w argument can be a vector, link function for the expectiles]. o The data set "aml" has been renamed "leukemia". o Previously laplace(zero=NULL), now laplace(zero=2). BUG FIXES o deplot() applied to a "lms.yjn2" object gave an unnecessary warning. o In the leukemia and toxop data sets 1L is replaced by 1 and 2L by 2 etc. CHANGES IN VGAM VERSION 0.7-5 NEW FEATURES o New VGAM family functions: betaff(), cardioid(dpqr), cauchy(), felix(d), fnormal1(dpqr), invbinomial(), kumar(dpqr), lms.yjn2(), mix2exp(), plackett(dpr), riceff(dr), skellam(dr), zinegbinomial(dpqr). o These VGAM family functions have been improved: frank(), genpoisson(), hzeta(), mix2normal1(), mix2poisson(), pospoisson(), studentt(). o These VGAM family functions have had their default arguments changed: genpoisson(), mix2normal1(). o New documentation: borel.tanner(dr). o expm1() used whenever possible. o Renamed VGAM family functions: betaff() changed to beta.ab(). o cauchy1() now returns the location estimates as the fitted values instead of NA (for the mean). BUG FIXES o cumulative(), sratio(), cratio(), acat() had response-matrix column names which got lost. o lms.yjn() failed if there was not enough data. CHANGES IN VGAM VERSION 0.7-4 NEW FEATURES o weibull() does not handle any censored observations at all. The function cenweibull(), which will handle censored observations, is currently being written and will use Surv() as input; it should be distributed with version 0.7-5 of VGAM. o bisa() now implements full Fisher scoring. No numerical integration is needed. o Certain functions from the smartpred package are no longer distributed with the VGAM package. These are lm, glm, predict.lm, predict.mlm, predict.glm. This is done because many users have found they interfere with the VGAM package in unpredictable ways. o The following VGAM family functions have improved initial values: betabinomial(), cauchy1(), mccullagh89(), negbinomial(), tpareto1(), zipoisson(). o New family functions: alsqreg(), dexpbinomial(), laplace(), poissonp(), seq2binomial(), triangle(dpqr). o VGAM family functions currently withdrawn: cexpon(). o A new class called "SurvS4" has been prepared. It will be used later to handle VGAM family functions beginning with "cen" that use Surv() as input. o log1p() used whenever possible. BUG FIXES o bisa() did not make use of ishape. o cao(..., family=gaussianff) failed. It now works, although the dispersion parameter is computed using a slightly different formula. CHANGES IN VGAM VERSION 0.7-3 NEW FEATURES o gpd() now does not delete any data internally. The user should use the subset argument of vglm() and vgam() in order to select any subset of a data frame. o zapoisson() has a zero argument, and this can be assigned a negative value. 
o "partial for" is added to the ylabel of linear terms of a vgam() object when it is plotted. o When a vgam() object is plotted with se=TRUE and if there are linear terms then the mean of x is added to the plot (this makes the standard error curves meet there). o This package has been tested (somewhat) under R 2.5.0. BUG FIXES o plotvgam() did not work for vgam() objects using the subset argument. o cao() objects would not show() or print(), at least under R 2.4.1. o summary(vgam.object) failed if vgam.object was a totally linear model (i.e., no s() term in the formula). Now the "R" slot is assigned for all vgam() objects. o preplotvgam() had a bug regarding $se.fit of an atomic pred$se.fit. CHANGES IN VGAM VERSION 0.7-2 NEW FEATURES o Almost all VGAM family functions now have an earg-type argument to support each link function. This allows parameters specific to each link to be passed in, e.g., VGAMfamilyfunction(link="logoff", earg=list(offset=1)) o rinv.gaussian() is new. o New VGAM family functions: morgenstern(), fgm(), gumbelIbiv(), ordpoisson(). o New documentation: powl(), fsqrt(). BUG FIXES o zanegbinomial()@last had wrong names in misc$link. o summary(vgam.object) failed to print the anova table. o summary(cao.object) failed. CHANGES o binom2.or() has argument names changed from "lp" to "lmu" etc. This is partly to make it in keeping with other VGAM family functions for binary responses. o Other VGAM family functions with argument names changed: frank(). o lms.bcn(), lms.bcg(), lms.yjn() arguments have changed order. o hyper() renamed to hyperg(). o plotvgam() uses ylim if it is inputted. CHANGES IN VGAM VERSION 0.7-1 NEW FEATURES o VGAM family functions now require full (name) specification of parameter link functions. For example, binomialff(link=probit) is ok, as is binomialff(link="probit"), but binomialff(link="pr") isn't. VGAM family functions no longer offer a fixed set of link functions but the user can invoke any, as well as write their own link function. o Working residuals for vglm() objects are now the default. They used to be deviance residuals but they are not defined for most VGAM family functions. In the future the default may become "pearson" residuals. For safety, use the type argument, e.g., resid(vglmobject, type="response"). o ITolerances=TRUE is now the default for qrrvglm.control(), consequently, equal tolerances CQO models are fitted. The rationale for this change that setting ITolerances=TRUE provides the fast computational speed as well as the easiest interpretation of the results. Also, rcqo() matches this by having EqualTolerances=TRUE as its default. However, having an equal tolerances assumption should be checked. o New VGAM family functions: tikuv(dpqr), [dpqr]naka(), [dpr]log(), [dpqr]tpareto1(), betabinomial(). o VGAM family functions which have been renamed (and often improved): New name Old name -------- -------- dirmultinomial() dirmul() negbinomial() negbin.mu() negbinomial.ab() negbin.mn() posnegbinomial() posnegbin.mu() zanegbinomial() zanegbin.mu() rposnegbin() rposnegbin.mu() gamma2() gamma2.ab() gamma2mu() gamma2() o New functions: lerch(), rcqo(). o In the smartpred package smart.mode.is(mode.arg) now requires mode.arg, if given, to be exactly one of 3 character strings. Also, is.smart(object) handles object being a smart function or a fitted object. o The VGAM package comes with modified lm, predict.lm, predict.glm, predict.mlm, glm functions---these implement smart prediction, and are current to R version 2.3.1 (2006-06-01). 
o The order of the linear/additive predictors for expexp() have been switched. o weibull(zero=2) is the default now. o negbin.mu(), posnegbin.mu(), zanegbin.mu: these have a few added arguments for further flexibility, and some arguments have changed names, e.g., 'k.init' has been changed to 'ik' and 'link.mu' to 'lmu'. o Negative binomial random variates are now generated using rnbinom() in the stats package rather than rnegbin() in the MASS package. o binom2.or() and binom2.rho() have more choices for some arguments such as lor and lrho. o Initial values have been improved for logff(), zipf() and zetaff(). o This package should work for R 2.4.0 after additional tweaks to handle changes in show(). BUG FIXES o pbetabin() had a bug. o studentt() has a mean of 0 only if df > 1. o garma() failed for link="loge". It now works for binary data with the "logit" link. o Internally, wz <- matrix(NA, ...) changed to wz <- matrix(as.numeric(NA), ...). Ditto for rep(NA, ...) to rep(as.numeric(NA), ...). o tobit() had a bug in the initialize slot. o rposnegbin.mu() now calls the MASS library function rnegbin() explicitly. o gammaff() now works. o Working residuals for cao() objects were wrong. o lvplot() for cao() objects have a whichSpecies argument which allows selective plotting of the species' curves. o gaussianff() did not work with rrvglm(). It now has a loglikelihood slot, and returns deviance residuals for M>1. CHANGES o gaussianff(), studentt() have the order of its arguments changed. o eta2theta(), theta2eta(): if these have a matrix "theta" then it no longer calls the VGAM link function one column at a time. Hence VGAM link functions must handle matrix "theta" using one value of "earg" argument. o The earg argument has changed for many VGAM link functions. It is now a list, with component names that are specific to each link function. See the online help files for the list component names. Soon, every VGAM family function that allows a link function will have an earg argument to match it, thus giving maximum flexibility. ************************************************** * * * 0.6 SERIES NEWS * * * ************************************************** CHANGES IN VGAM VERSION 0.6-9 NEW FEATURES o New VGAM family functions: lino(dpqr), recexp1(), posnormal1(dpqr), betageometric(dpr), [dr]polono(), [dpr]betabin(), gamma2mu(), bisa(dpqr), zipf(dp). There is a new dirmul() (the old one is renamed to dirmul.old()) but it hasn't yet be completed. o Renamed VGAM family functions: beta2() changed to betaff(). o Renamed VGAM functions: is.a.number() changed to is.Numeric(). o The Windows crossbuild was done under R 2.3.0. BUG FIXES o Family functions lognormal(), lognormal3() now include the 1/sqrt(2*pi) constant in @loglikelihood because of its use of dnorm(..., log=TRUE) and dlnorm(..., log=TRUE). o [dpqr]lognormal() withdrawn as they exist in R already. o Documentation for betaff() contained mistakes. o summary() of a betabin.ab() object used to fail. o The assign statement has been removed from some FORTRAN code. CHANGES IN VGAM VERSION 0.6-8 NEW FEATURES o New VGAM family functions: recnormal1(), recexp1(), paretoIV(dpqr), paretoIII(dpqr), paretoII(dpqr), gammahyp(), benini(dpqr). However, the fitted value (mean) for benini() may be faulty. o Decommissioned VGAM family functions: gpdold(), ogev(), zipoissonX(). o gumbel.block() renamed to gumbel(), and gumbel() renamed to egumbel(). 
o Argument names and defaults have changed for: gpd(), egev(), gev(), ogev(), cgumbel(), egumbel(), gumbel(), and weibull(). Also, gpd(), gev() and egev() have some improvements done internally. Also, rlplot() is new. o Several family functions have been converted to a new convention whereby ilocation, iscale, ishape arguments are used, and also llocation, lscale, lshape arguments for the link functions etc. o New link function: nidentity(theta) for negative-identity: -theta. o New argument "untransform" in predict() and vcov() for VGLMs. o For intercept-only models, Coef(fit) returns more user-friendly labelled output. BUG FIXES o ppareto() had a bug. o gpd() had an incorrect second derivative. CHANGES IN VGAM VERSION 0.6-7 NEW FEATURES o New VGAM family functions: bilogistic4(dpr), frechet2(), frechet3(), freund61(), frank(dpr), mccullagh89(). o For cao(), df1.nl has a default of 2.5, changed from 2.0 before. o For vglm(), vgam() etc., diagonal elements of the working weight matrices that are less than .Machine$double.eps^0.75 are replaced by this value. The arguments checkwz and wzepsilon support this feature. o More documentation on: fill() [re. the xij argument], cauchy1(). o logistic2() now uses Fisher scoring. o Argument init.method changed to method.init in several family functions. o Any non-smart-prediction use of smartpredenv has been changed to VGAMenv. BUG FIXES o rayleigh() was not in NAMESPACE. o logistic1() and logistic2() had wrong first derivatives and loglikelihood function. logistic1() offers some choice of link function for the location parameter. CHANGES IN VGAM VERSION 0.6-6 NEW FEATURES o New functions: zibinomial(), zibinom(dpqr), posbinom(dpqr), mix2normal1(), mix2poisson(), dsnorm(), rsnorm(), cexpon(), cgumbel(), cnormal1(), hyper(). o New generic functions: is.bell() works for RR-VGLMs, QRR-VGLMs and RR-VGAMs (CLO, CQO and CAO, respectively). o normal1() has a new (first) argument: lmean for the mean. o Documentation for skewnormal1() and snorm(dr). BUG FIXES o tobit() now implements Fisher scoring properly. o Coef.vlm() needed to test for trivial constraints. o skewnorm1() had a bug in it. It has been fixed and renamed to skewnormal1(). o cao() had a problem with the variable "usethiseta" when it had possible NAs. o An appropriate error message is given if residuals=TRUE in the call to @loglikelihood, for all VGAM family functions. o Two unneeded lines in rgam.f have been removed. CHANGES IN VGAM VERSION 0.6-5 NEW FEATURES o New functions: guplot(), meplot(), ggamma(dpqr), fff(), vonmises(), lgamma3ff, lgamma(dpqr), prentice74, tobit, zipoisson(dpqr), [dpqr]pospois(), [dpqr]laplace but there is no laplace(). o cqo() has been largely rewritten. It now sports a new algorithm for ITolerances=TRUE. It can handle large data sets (e.g., 1000 sites with 100 species). Compared to other cqo() options, it is the fastest. There are a few things to learn though to take full advantage of the new algorithm, e.g., centering the variables. o Windows version is cross built with R 2.2.0. The Linux version has been tested with R 2.2.0. o cao() has been largely rewritten. It now should not hang in the the windows cross build version. o .Init.Poisson.CQO() has been renamed .Init.Poisson.QO(), and also improved (however, it uses more memory by default). o Modelling functions such as vglm(), vgam() and cao() have qr.arg=FALSE now. This means object sizes can be a lot smaller. o The functions positive.poisson(), positive.binomial() etc. 
have been renamed pospoisson(), posbinomial() etc. o The functions [dpqr]gpd now have a location=0 argument. o Some VGAM family functions will be adapted later to use the BFGS quasi-Newton update for their working weight matrices. o The link function logoff() now works, for constant offsets. Link functions had the argument "extra"; now called "earg" to avoid confusion with the argument "extra" used in vglm() etc. Also, elogit() is new, which allows a parameter to lie between two values (A,B), say. BUG FIXES o plotvgam() was incorrect if one of the terms (but not the first) was "x" or a function of "x" such as bs(x). o smart.expression now handles multiple 'arguments' by choosing the first, which is the smart function name. o lv(rrvglm.object) failed. CHANGES IN VGAM VERSION 0.6-4 NEW FEATURES o New family functions: betabin.ab(), betaprime(), dcnormal1(), erlang(), expexp(), inv.gaussianff(), maxwell(), mckaygamma2(), nakagami(), pareto1(), rayleigh(), wald(). Of these, Pareto, Rayleigh and Maxwell have random number generation etc. o If criter="coef" and trace=TRUE, then the number of decimal places used to print the estimated coefficients at each iteration is proportional to the control constant epsilon. o tanl() has been named to cauchit(), and appropriate family functions reflect this change, i.e., link="cauchit" instead of link="tanl". o size.binomial() has been improved. o Documentation for gamma1(), gamma2(). BUG FIXES o The documentation for the reverse argument in cumulative(), cratio(), etc. was incorrect. o vcov() didn't work on the windows version. o cao() still hangs under the windows version, so hopefully this bug will be fixed soon! CHANGES IN VGAM VERSION 0.6-3 NEW FEATURES o Built with R 2.1.0 for the .zip file (Windows version) and deposited in the right directory at www.stat.auckland.ac.nz. o More documentation, e.g., fitted(), yeo.johnson(), dirmul(). o zeta() and zetaff() have been improved and/or corrected. o The family functions binomial, poisson, quasibinomial, quasipoisson, gaussian, inverse.gaussian, Gamma have been withdrawn because of inteference with glm(). CHANGES IN VGAM VERSION 0.6-2 NEW FEATURES o model.frame() and model.matrix() are roughly working for objects that inherit from "vlm"s, e.g., "vglm" objects. Both of these methods functions accept a "data" argument etc. Also, for these, smart prediction works. o A methods function for the generic function weights() has been written for VGLM objects. It returns either the prior or working weights. BUG FIXES o The Crow1positive argument in cao() did not function correctly. o The family functions dagum, fisk, lomax, invlomax, paralogistic, invparalogistic, lognormal were not exported in the NAMESPACE file. o Functions in gaut.c and mux.c used "long" to represent integers. In R, these should be "int". Although these are equivalent on 32-bit machines, they differ on 64-bit machines and crash. The files are now renamed to gautr.c and muxr.c in R. o summary(cao.object) failed. CHANGES IN VGAM VERSION 0.6-1 NEW FEATURES o New functions: cao() for "constrained additive ordination", and uqo() for "unconstrained quadratic ordination". Both of these are unfinished but will hopefully be completed in the forseeable future. o The function cgo() has been renamed to cqo(). Ouch! CQO stands for "constrained quadratic ordination", and is better than the old name cgo(), for canonical Gaussian ordination. o The inverse() link function has been renamed to reciprocal(). o More documentation: loglinb2() and loglinb3(). 
o zipbipp() renamed to zapoisson(), where "za" stands for
  "zero-altered". This is more in line with the literature.
  New families: zanegbin.mu, positive.negbin.mu.
  New random variates: rposnegbin.mu, rpospois.

o negbin.mu() now works for cgo(). The subsequent methods functions
  have been adapted to work on it too. However, negbin.mu() is not
  recommended because maximum likelihood estimation of the index
  parameter is numerically fraught. It is better to use
  quasipoissonff().

o cgo() now uses the function .Init.Poisson.CGO() to obtain initial
  values for the canonical coefficients, C. The argument
  Use.Init.Poisson.CGO in qrrvglm.control() now controls this
  feature.

o Lazy loading has been enabled for the VGAM package.

o Name spaces have been introduced into the VGAM package.
  The consequences of this might be far reaching for code heavily
  based on the internals of the VGAM package.

o The application of name spaces means "ff" can be dropped from
  certain family functions. In particular, poisson() can be used
  instead of poissonff(), and binomial() instead of binomialff().
  Ditto for quasipoissonff() and quasibinomialff().

o names.of() changed to namesof(). Many other function names have
  been changed, particularly those of the S3 classes such as
  coef.something, e.g., coef.vlm to coefvlm. In general, S3 methods
  functions such as print.summary.vlm have the first "." deleted,
  but classes such as "summary.vlm" retain the ".", and the function
  is printsummary.vlm.

BUG FIXES

o Some documentation regarding the negative binomial distribution
  was wrong.

o The digamma function in FORTRAN was buggy.

o gumbel.block() now handles a vector response (equivalently, a
  one-column matrix) and the deviance has been decommissioned.
  Instead, the log-likelihood is computed.


**************************************************
*                                                 *
*               0.5 SERIES NEWS                   *
*                                                 *
**************************************************


CHANGES IN VGAM VERSION 0.5-24

NEW FEATURES

o zipbipp() and zipoissonX() are new alternatives to yip88(). They
  fit a zero-inflated Poisson distribution. Both can handle
  covariates for both parameters (p0 or phi, and lambda).
  zipbipp() is recommended over the others. zipoissonX() is
  experimental at this stage and should be used with caution.
  rpospois() is new.

o More documentation: rhobit and binom2.rho.

o binom2.or() now has lp1 and lp2 arguments, which allow a different
  link function for each of the two marginal probabilities.

o bratt() is a new family function. It fits the Bradley Terry model
  with ties.

o flush.console() is used if it exists. This makes the Windows
  version nicer for large data sets and when trace=TRUE is used.

o wweights() extracts the working weights of an object. It used to
  be called vweights().

CHANGES IN VGAM VERSION 0.5-23

NEW FEATURES

o The package works under the latest version of R, viz. 2.0.0.
  There are fewer warning messages when checking :)

o persp() for CGO objects now handles Rank=1 models.

CHANGES IN VGAM VERSION 0.5-22

BUG FIXES

o plot.vgam(..., overlay=TRUE, which.cf=1:2) was incorrect.

NEW FEATURES

o demo files are now available for VGAM. These include lmsqreg,
  distributions, and cgo. More will be added later.

CHANGES IN VGAM VERSION 0.5-21

BUG FIXES

o .Rd files adapted to reflect new changes in the library names.

o persp.qrrvglm() argument whichSpecies was faulty.

o gpd()@inverse returned erroneous centiles.

o Coef(cgo(..., FastAlgorithm=TRUE)) produced wrong results.

NEW FEATURES

o cgo(..., FastAlgorithm=TRUE) has been fine-tuned to give greater
  speed and accuracy.
o lms.yjn() uses FORTRAN code to implement the Gauss-Legendre
  algorithm. This results in greater accuracy.

o More documentation, especially for family functions for extreme
  values modelling.

CHANGES IN VGAM VERSION 0.5-20

BUG FIXES

o vglm(y ~ x, binomialff(link=tanl)) used to fail.

o The CHECK command failed previously, but now it only gives 5
  warnings.

NEW FEATURES

o persp.qrrvglm() has been written to apply persp() to a rank-2 CGO
  model.

o cgo(..., FastAlgorithm=TRUE) now has a logical argument
  GradientFunction which, if TRUE (the default), computes the
  derivatives by using finite-difference approximations. The default
  will generally increase the speed.

CHANGES IN VGAM VERSION 0.5-19

BUG FIXES

o garma() did not coerce the model matrix into the correct class.

o fisherz() could not work out the inverse.

NEW FEATURES

o trplot() is a new generic function, and for objects of class
  "qrrvglm" (a cgo() object) it produces a trajectory plot for
  species.

o vcov.qrrvglm() now computes standard errors and returns the
  variance-covariance matrix for rank-1 QRR-VGLMs.

o A new fast algorithm is implemented for
  cgo(..., FastAlgorithm=TRUE), which only works under Windows.
  It is a new undocumented algorithm.

o New family functions: lognormal(), lognormal3(), weibull().

o New family functions: genbetaII(), betaII(), sinmad(), dagum(),
  lomax(), invlomax(), fisk(), invparalogistic(), paralogistic().
  Additionally, d*, r*, p* and q* forms of the
  density/random-generation etc. functions for all of these except
  for betaII and genbetaII.

o New link function for (0,1) parameters: tanl() for the tan link.
  It has a heavier tail and corresponds to a Cauchy distribution
  (cf. probit for the normal).

o New family function: brat() for the Bradley Terry model
  (intercept model only).

CHANGES IN VGAM VERSION 0.5-18

NEW FEATURES

o I've changed deplot.lmscreg() so that the "at" argument is now
  "y.arg", and the density is returned with name "density" instead
  of "y". That is, "at" is now "y", and "y" is now "density".

o lvplot.rrvglm() and biplot.rrvglm() have been merged and are now
  equivalent.

CHANGES IN VGAM VERSION 0.5-17

BUG FIXES

o Bestof argument in cgo() and rrvglm() was faulty.

o Bug in plot.vgam(type.resid) fixed.

NEW FEATURES

o Updated to work under R 1.8.1.

o logLik() and AIC() methods functions supported for many VGAM
  objects.

o lms.bcn.control(), lms.bcg.control(), lms.yjn.control() now have
  trace=TRUE because monitoring LMS quantile regression models is a
  good idea.

o lms.bcn(), lms.bcg(), lms.yjn() now improved.

CHANGES IN VGAM VERSION 0.5-16

BUG FIXES

o biplot.rrvglm() had an internal bug with @C.

o Runs under R 1.8.0 now, having a fix with the "qr" slot.

o etastart, coefstart, mustart arguments were not functional in
  vgam().

o vchol() did not replace the correct elements; sometimes the index
  was out of subscript range.

o residuals.vlm() tried to evaluate a deviance slot in a "vglmff"
  object even when it was empty.

o Documentation links to functions in other packages now work.

NEW FEATURES

o lvplot.qrrvglm() has been renamed biplot.qrrvglm().
  Argument Equal.tolerances changed to EqualTolerances.
  Argument Circular changed to ITolerances.
  rrvglm.control() now split into qrrvglm.control() and itself.

o cgo() now performs canonical Gaussian ordination.

CHANGES IN VGAM VERSION 0.5-15

BUG FIXES

o Coef.qrrvglm() failed wrt Equal.tolerances and Circular when
  Rank>2.

NEW FEATURES

o gco() is now an easier interface for fitting Gaussian canonical
  ordination models. gco(...) is essentially
  rrvglm(..., Quadratic=TRUE).
o Documentation for deplot.lmscreg(), qtplot.lmscreg(), cdf.lmscreg() and related functions. Also for positive.poisson(), positive.binomial() and yip88(). o lvplot.qrrvglm() improved to handle non-diagonal tolerance matrices, and a new Rotate option is available for QRR-VGLMs. o By default, QRR-VGLMs now have the constraint that the latent variables are uncorrelated and have unit variances, i.e., their variance-covariance matrix is diag(Rank). Also, the Crow1positive argument allows ordinations to be reflected across axes. CHANGES IN VGAM VERSION 0.5-14 BUG FIXES o vgam() with s() terms and subset= used to give a bug because the attributes of s() were lost. o summary() of a gaussianff was faulty because control.gaussianff() was called gaussianff.control(). NEW FEATURES o install.packages("VGAM", CRAN="http://www.stat.auckland.ac.nz/~yee") now works for PC and Linux/Unix, i.e., the distribution of the VGAM package allows for this type of download. o poissonff(), quasipoissonff(), binomialff() and quasibinomialff() now handle multiple dispersion parameters when mv=TRUE and onedpar=FALSE. o Generic function predictx(), with methods function for "qrrvglm" objects. This solves (with limited functionality) the calibration problem. o predict.qrrvglm() and predict.rrvglm() written (but don't work 100%) o Coef.rrvglm() now returns an S4 object, which can be printed nicely. o summary.qrrvglm() has been improved. o Documentation for poissonff(), quasipoissonff(), binomialff() and quasibinomialff(). CHANGES IN VGAM VERSION 0.5-13 BUG FIXES o Code with T and F now use TRUE and FALSE. NEW FEATURES o Documentation for lms.bcn(), lms.bcg(), lms.yjn(), and bmi. Additionally, the overall documentation has been improved throughout. o print.Coef.qrrvglm prints the contents of Coef(qrrvglm.object) in a nicer format. It uses S4 features. CHANGES IN VGAM VERSION 0.5-12 BUG FIXES o The package now works under R 1.7.1. This includes the smart prediction library. o dirichlet(), skewnorm1(), geometric(), gamma2() and erlang() had a bug that has been fixed. NEW FEATURES o documentation for beta2(), and dirichlet(). o Easier installation; use something like "R CMD INSTALL -l ./myRlibs VGAM_0.5-12.tar.gz" for a local library. CHANGES IN VGAM VERSION 0.5-11 BUG FIXES o The code has been upgraded to work under R 1.7.0 because of the calls to LAPACK and object oriented features. NEW FEATURES o levy() added, plus grc() documentation. o constraints added to binomialff() and poissonff() since they both handle multivariate responses. CHANGES IN VGAM VERSION 0.5-10 BUG FIXES o Many univariate family functions had a faulty loglikelihood slot. o negbin.mu() was faulty causing very slow convergence. o Coef.vglm() had a bug due to "fit" rather than "object" NEW FEATURES o logff() added. o The undocumented backchat facility now works for Splus 6.x. This should increase the efficiency of vglm() in particular. Thanks to Insightful and Dr J. Chambers for helping to get it going under the S4 engine. CHANGES IN VGAM VERSION 0.5-9 BUG FIXES o binomialff() had a bug in @weight. o binomialff(mv=T) used to fail. o gev(), ogev() and egev() had @loglikelihood that was faulty. NEW FEATURES o .Rd documentation included for vglm(), vgam(), rrvglm(), and associated control and class functions, plus smart prediction. CHANGES IN VGAM VERSION 0.5-8 NEW FEATURES o rrvglm() now has a Quadratic argument to implement the class of Quadratic Reduced-rank VGLMs, which gives maximum likelihood solutions to Gaussian canonical ordination problems. 
Documentation is in rrvglm.pdf CHANGES IN VGAM VERSION 0.5-7 NEW FEATURES o vglm() now has a xij argument which implements eta-specific covariates. Documentation is supplied on the VGAM website. o grc() has been written for Goodman's RC association model for a contingency table. Documentation is in rrvglm.pdf VGAM/NAMESPACE0000644000176000001440000003107712136651110012257 0ustar ripleyusers# These functions are # Copyright (C) 1998-2013 T.W. Yee, University of Auckland. # All rights reserved. useDynLib(VGAM) export(pgamma.deriv, pgamma.deriv.unscaled, truncweibull) export(binom2.rho.ss) export(arwz2wz) export(link2list) export(mlogit) export(perks, dperks, pperks, qperks, rperks) export(gumbelII, dgumbelII, pgumbelII, qgumbelII, rgumbelII) export(makeham, dmakeham, pmakeham, qmakeham, rmakeham) export(gompertz, dgompertz, pgompertz, qgompertz, rgompertz) export(lindley, dlind, plind, rlind) export(w.wz.merge, w.y.check, vweighted.mean.default) export(is.parallel.matrix, is.parallel.vglm, is.zero.matrix, is.zero.vglm) exportMethods(is.parallel, is.zero) export(nvar_vlm) importFrom("stats4", nobs) exportMethods(nobs) importMethodsFrom("stats4") importFrom(stats4, AIC, coef, summary, plot, logLik, vcov) exportMethods(AIC, coef, summary, plot, logLik, vcov) export(npred, npred.vlm) exportMethods(npred) export(hatvalues, hatvaluesvlm) exportMethods(hatvalues) importFrom("stats", hatvalues) export(dfbeta, dfbetavlm) exportMethods(dfbeta) export(hatplot, hatplot.vlm) exportMethods(hatplot) export(VGAMenv) export(lrtest, lrtest_vglm) export(update_default, update_formula) export(nvar, nvar.vlm, nvar.vgam, nvar.rrvglm, nvar.qrrvglm, nvar.cao, nvar.rcim) export( nobs.vlm) export(plota21) export(Confint.rrnb, Confint.nb1) export(vcovrrvglm) export(posbernoulli.b, posbernoulli.t, posbernoulli.tb, aux.posbernoulli) export(N.hat.posbernoulli) export(dposbern, rposbern) export(posbern.aux) export(is.empty.list) export( Build.terms.vlm, procVec, ResSS.vgam, vcontrol.expression, vplot, vplot.default, vplot.factor, vplot.list, vplot.matrix, vplot.numeric, vvplot.factor) export( case.namesvlm, variable.namesvlm ) export(expgeometric, dexpgeom, pexpgeom, qexpgeom, rexpgeom, genrayleigh, dgenray, pgenray, qgenray, rgenray, exppoisson, dexppois, pexppois, qexppois, rexppois, explogarithmic, dexplog, pexplog, qexplog, rexplog) export(Rcim, plotrcim0, rcim, summaryrcim) export(moffset) export(plotqvar, Qvar) export(depvar, depvar.vlm) export(put.caption) export( d2theta.deta2, Deviance.categorical.data.vgam, lm2qrrvlm.model.matrix, m2avglm, dotFortran, dotC, dimm) export(is.smart, smart.mode.is, wrapup.smart, setup.smart, my1, my2) export( smart.expression, get.smart, get.smart.prediction, put.smart) export( dbinorm, binormal) export( bs, ns, scale.default, poly ) export(iam, fill, fill1, fill2, fill3, abbott, amh, damh, pamh, ramh, bivgamma.mckay, freund61, frechet2, dfrechet, pfrechet, qfrechet, rfrechet, frank, dfrank, pfrank, rfrank, plackett, dplack, pplack, rplack, benini, dbenini, pbenini, qbenini, rbenini, maxwell, dmaxwell, pmaxwell, qmaxwell, rmaxwell, morgenstern, dfgm, pfgm, rfgm, fgm, gumbelIbiv, erf, erfc, lerch, lambertW, tpareto1, dtpareto, qtpareto, rtpareto, ptpareto, pareto1, dpareto, qpareto, rpareto, ppareto, paretoIV, dparetoIV, qparetoIV, rparetoIV, pparetoIV, paretoIII, dparetoIII, qparetoIII, rparetoIII, pparetoIII, paretoII, dparetoII, qparetoII, rparetoII, pparetoII, dparetoI, qparetoI, rparetoI, pparetoI, cgumbel, egumbel, gumbel, dgumbel, pgumbel, qgumbel, rgumbel, fnormal1, dfnorm, 
pfnorm, qfnorm, rfnorm, cennormal1, dcennormal1, recnormal1, recexp1, cenrayleigh, rayleigh, drayleigh, prayleigh, qrayleigh, rrayleigh, drice, rrice, riceff, dskellam, rskellam, skellam, inv.gaussianff, dinv.gaussian, pinv.gaussian, rinv.gaussian, wald, expexp1, expexp) export(A1A2A3, a2m, AAaa.nohw, AICvlm, AICvgam, AICrrvglm, AICqrrvglm, # AICvglm, anova.vgam, anova.vglm, bisa, dbisa, pbisa, qbisa, rbisa, betabinomial.ab, betabinomial, dexpbinomial, dbetabinom, pbetabinom, rbetabinom, dbetabinom.ab, pbetabinom.ab, rbetabinom.ab, biplot.qrrvglm, dbort, rbort, borel.tanner, care.exp, cauchy, cauchy1, ccoef.cao, ccoef.Coef.cao, ccoef.Coef.qrrvglm, ccoef.qrrvglm, cdf, cdf.lms.bcg, cdf.lms.bcn, cdf.lms.yjn, cdf.vglm, Coef.cao, Coefficients, coefqrrvglm, coefvlm, coefvsmooth.spline, coefvsmooth.spline.fit, constraints, constraints.vlm, deplot, deplot.default, deplot.lms.bcg, deplot.lms.bcn, deplot.lms.yjn, deplot.lms.yjn2, deplot.vglm, deviance.uqo, deviance.vglm, deviance.vlm, df.residual_vlm, dirmultinomial, dirmul.old, dnorm2, dtheta.deta) export(cloglog,cauchit,elogit,explink,fisherz,logc,loge,logit, logoff,nreciprocal, probit,reciprocal,rhobit, golf,polf,nbolf,nbolf2,Cut) export(ordpoisson, poissonp) export(m2adefault, erlang, dfelix, felix, fitted.values.uqo, fittedvlm, fittedvsmooth.spline, fsqrt, formulavlm, formulaNA.VGAM, garma, gaussianff, hypersecant, hypersecant.1, hyperg, invbinomial, InverseBrat, inverse.gaussianff, is.Numeric, mccullagh89, leipnik, levy, lms.bcg.control, lms.bcn.control, lmscreg.control, lms.yjn.control, lms.bcg, lms.bcn, lms.yjn, lms.yjn2, lqnorm, dbilogis4, pbilogis4, rbilogis4, bilogistic4, logistic1, logistic2, logLik.vlm, lv.cao, latvar.Coef.qrrvglm, latvar.qrrvglm, lvplot.cao, Rank, Rank.rrvglm, Rank.qrrvglm, Rank.cao, Max.Coef.qrrvglm, Max.qrrvglm, is.bell.vlm, is.bell.rrvglm, is.bell.qrrvglm, is.bell.cao, is.bell, model.matrix.qrrvglm, model.matrixvlm, model.framevlm, nakagami, dnaka, pnaka, qnaka, rnaka, namesof, nlminbcontrol, nloge, Opt.Coef.qrrvglm, Opt.qrrvglm, persp.cao) export( micmen ) export( plot.cao, plotpreplotvgam, plotvglm, plotvlm, plotvsmooth.spline, pnorm2, powl, predict.cao, predictcao, predictors, predictors.vglm, predictqrrvglm, predict.rrvglm, predict.uqo, predict.vgam, predictvglm, predict.vlm, predictvsmooth.spline, predictvsmooth.spline.fit, show.Coef.cao, show.Coef.qrrvglm, show.Coef.rrvglm, show.rrvglm, show.summary.cao, show.summary.qrrvglm, show.summary.rrvglm, show.summary.uqo, show.summary.vgam, show.summary.vglm, show.summary.vlm, show.uqo, show.vanova, show.vgam, show.vglm, show.vlm, show.vglmff, show.vsmooth.spline, process.binomial2.data.vgam, process.categorical.data.vgam, negzero.expression, qtplot, qtplot.default, qtplot.gumbel, qtplot.lms.bcg, qtplot.lms.bcn, qtplot.lms.yjn, qtplot.lms.yjn2, qtplot.vextremes, qtplot.vglm, explot.lms.bcn, rlplot, rlplot.egev, rlplot.gev, rlplot.vextremes, rlplot.vglm, rlplot, rlplot.vglm, rrar.control, rrvglm.control.Gaussian) export( SurvS4, is.SurvS4, as.character.SurvS4, show.SurvS4, simple.exponential, simple.poisson, mbinomial, seq2binomial, size.binomial, stdze1, stdze2, summary.cao, summary.grc, summary.qrrvglm, summary.rrvglm, summary.uqo, summaryvgam, summaryvglm, summaryvlm, s.vam, terms.vlm, theta2eta, Tol.Coef.qrrvglm, Tol.Coef.uqo, Tol.qrrvglm, Tol.uqo, triangle, dtriangle, ptriangle, qtriangle, rtriangle, vcovvlm, vglm.fit, vglm.garma.control, vglm.multinomial.control, vglm.multinomial.deviance.control, vglm.vcategorical.control, vlm, vlm.control, vnonlinear.control, 
wweights, yeo.johnson, dzipf, pzipf, zipf, zeta, zetaff, dzeta) export(lm2vlm.model.matrix) export(vlm2lm.model.matrix) importFrom("stats", model.matrix) importFrom("stats", model.frame) importFrom("stats", terms) importFrom("stats", resid) importFrom("stats", residuals) importFrom("stats", fitted) importFrom("stats", predict) importFrom("stats", df.residual) importFrom("stats", deviance) importFrom("stats", fitted.values) importFrom("stats", effects) importFrom("stats", weights) importFrom("stats", formula) importFrom("stats", case.names) importFrom("stats", variable.names) importFrom("stats", dchisq, pchisq, pf, dexp, rexp, dpois, ppois, qpois, rpois, dnorm, pnorm, qnorm, rnorm) importFrom("graphics", persp) export(ddagum, rdagum, qdagum, pdagum, dagum) export(dfisk, pfisk, qfisk, rfisk, fisk) export(dlomax, plomax, qlomax, rlomax, lomax) export(dinvlomax, pinvlomax, qinvlomax, rinvlomax, invlomax) export(dparalogistic, pparalogistic, qparalogistic, rparalogistic, paralogistic) export(dinvparalogistic, pinvparalogistic, qinvparalogistic, rinvparalogistic, invparalogistic) export(dsinmad, psinmad, qsinmad, rsinmad, sinmad) export(lognormal) export(dpolono, ppolono, rpolono) export(dgpd, pgpd, qgpd, rgpd, gpd) export(dgev, pgev, qgev, rgev, gev, egev) export(dlaplace, plaplace, qlaplace, rlaplace, laplace) export(dalap, palap, qalap, ralap, alaplace1.control, alaplace2.control, alaplace3.control, alaplace1, alaplace2, alaplace3) export(dloglap, ploglap, qloglap, rloglap) export(loglaplace1.control, loglaplace1) export(dlogitlap, plogitlap, qlogitlap, rlogitlap, logitlaplace1.control, logitlaplace1) export(dprobitlap, pprobitlap, qprobitlap, rprobitlap) export(dclogloglap, pclogloglap, qclogloglap, rclogloglap) export(dcard, pcard, qcard, rcard, cardioid) export(fff, fff.control, mbesselI0, vonmises) export( AA.Aa.aa, AB.Ab.aB.ab2, AB.Ab.aB.ab, ABO, acat, beta.ab, betaff, dbetageom, pbetageom, rbetageom, betageometric, dbetanorm, pbetanorm, qbetanorm, rbetanorm, # betanorm, betaprime, betaII, zipebcom, binom2.or, dbinom2.or, rbinom2.or, binom2.rho, dbinom2.rho, rbinom2.rho, binom2.Rho, binomialff, biplot.rrvglm, brat, bratt, Brat, calibrate.qrrvglm.control, calibrate.qrrvglm, calibrate, cao.control, cao, cdf.lmscreg, cgo, chisq, clo, ccoef, Coef, Coef.qrrvglm, Coef.rrvglm, Coef.vlm, predictqrrvglm, cratio, cumulative, propodds, prplot, prplot.control) export( deplot.lmscreg, dirichlet, exponential, G1G2G3) export( lgammaff, lgamma3ff) export( gammahyp, gengamma, gamma1, gamma2, gamma2.ab, gammaff) export(dlgamma, plgamma, qlgamma, rlgamma) export(dgengamma, pgengamma, qgengamma, rgengamma) export( dbenf, pbenf, qbenf, rbenf, genbetaII, genpoisson, geometric, truncgeometric, dlino, plino, qlino, rlino, lino, grc, dhzeta, phzeta, qhzeta, rhzeta, hzeta, nidentity, identity, prentice74, amlnormal, amlbinomial, amlexponential, amlpoisson, Wr1, Wr2, dkumar, pkumar, qkumar, rkumar, kumar, dyules, pyules, ryules, yulesimon, logff, dlog, plog, rlog, loglinb2, loglinb3, loglog, lognormal3, lvplot.qrrvglm, lvplot, lvplot.rrvglm, lv, latvar, Max, MNSs, dmultinomial, multinomial, margeff) export( huber2, huber1, dhuber, edhuber, phuber, qhuber, rhuber) export( slash, dslash, pslash, rslash) export( deunif, peunif, qeunif, reunif, denorm, penorm, qenorm, renorm, koenker, dkoenker, pkoenker, qkoenker, rkoenker, deexp, peexp, qeexp, reexp) export( meplot, meplot.default, meplot.vlm, guplot, guplot.default, guplot.vlm, negbinomial, negbinomial.size, polya, normal1, SUR, nbcanlink, tobit, dtobit, ptobit, 
       qtobit, rtobit,
       Opt, perspqrrvglm,
       plotdeplot.lmscreg, plotqrrvglm, plotqtplot.lmscreg,
       plotvgam.control, plotvgam,
       cenpoisson, poissonff,
       dposbinom, pposbinom, qposbinom, rposbinom, posbinomial,
       dposgeom, pposgeom, qposgeom, rposgeom, # posgeometric,
       dposnegbin, pposnegbin, qposnegbin, rposnegbin, posnegbinomial,
       dposnorm, pposnorm, qposnorm, rposnorm, posnormal1,
       dpospois, ppospois, qpospois, rpospois, pospoisson,
       qtplot.lmscreg,
       quasibinomialff, quasipoissonff,
       rdiric, rig,
       rrar, rrvglm.control, rrvglm.optim.control)

export(eta2theta, rrvglm,
       simplex, dsimplex, rsimplex,
       sratio, s,
       studentt, studentt2, studentt3, Kayfun.studentt,
       Tol, trplot.qrrvglm, trplot,
       rcqo, cqo, qrrvglm.control,
       uqo.control, uqo,
       vgam.control, vgam, vglm.control, vglm,
       vsmooth.spline,
       weibull, yip88,
       dzanegbin, pzanegbin, qzanegbin, rzanegbin, zanegbinomial,
       dzabinom, pzabinom, qzabinom, rzabinom, zabinomial,
       dzapois, pzapois, qzapois, rzapois, zapoisson,
       dzibinom, pzibinom, qzibinom, rzibinom, zibinomial,
       dzinegbin, pzinegbin, qzinegbin, rzinegbin, zinegbinomial,
       dzigeom, pzigeom, qzigeom, rzigeom, zigeometric,
       dzageom, pzageom, qzageom, rzageom, zageometric,
       dzipois, pzipois, qzipois, rzipois, zipoisson, zipoissonff,
       mix2exp, mix2normal1, mix2poisson,
       mix2exp.control, mix2normal1.control, mix2poisson.control,
       skewnormal1, dsnorm, rsnorm,
       tikuv, dtikuv, ptikuv, qtikuv, rtikuv)

export(DeLury,
       wffc.P1, wffc.P1star, wffc.P2, wffc.P2star, wffc.P3, wffc.P3star)

exportClasses(vglmff, vlm, vglm, vgam,
              rrvglm, qrrvglm, grc, rcim,
              vlmsmall, uqo, cao,
              summary.vgam, summary.vglm, summary.vlm,
              summary.qrrvglm, summary.cao, summary.rrvglm,
              Coef.rrvglm, Coef.uqo, Coef.qrrvglm, Coef.cao,
              vcov.qrrvglm,
              vsmooth.spline.fit, vsmooth.spline)
exportClasses(SurvS4)

exportMethods(Coef, coefficients, constraints, effects,
              predict, fitted, fitted.values,
              resid, residuals, show, terms,
              model.frame, model.matrix, summary, coef,
              AIC, plot, logLik, vcov, deviance,
              calibrate, cdf, ccoef, df.residual,
              lv, latvar, Max, Opt, Tol,
              biplot, deplot, lvplot, qtplot, rlplot, meplot, trplot,
              vplot, formula, case.names, variable.names, weights,
              persp)

VGAM/DISCLAIMER

The VGAM package for R is still in the development stage, meaning that
new features are still being added and bugs are still being found on a
regular basis. This product is available on a use-at-your-own-risk
basis: the Author assumes no liability for loss or damage of any kind
resulting from the use of this product. The code is distributed in the
hope that it will be useful, but WITHOUT ANY WARRANTY; without even the
implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE.

VGAM/DESCRIPTION

Package: VGAM
Version: 0.9-1
Date: 2013-04-27
Title: Vector Generalized Linear and Additive Models
Author: Thomas W. Yee
Maintainer: Thomas Yee
Depends: R (>= 2.15.1), splines, methods, stats, stats4
Suggests: MASS
Description: Vector generalized linear and additive models, and
    associated models (Reduced-Rank VGLMs, Quadratic RR-VGLMs,
    Reduced-Rank VGAMs). This package fits many models and
    distributions by maximum likelihood estimation (MLE) or penalized
    MLE. Also fits constrained ordination models in ecology.
License: GPL-2
Imports: methods, stats, stats4
URL: http://www.stat.auckland.ac.nz/~yee/VGAM
LazyLoad: yes
LazyData: yes
Packaged: 2013-04-27 04:22:15 UTC; tyee001
NeedsCompilation: yes
Repository: CRAN
Date/Publication: 2013-04-27 09:02:44

VGAM/BUGS

Here is a list of known bugs.

2012-09

loge('a', short = FALSE, inverse = FALSE)
loge('a', short = FALSE, inverse = TRUE)
give the same answer.

Coef(vglm.dirmultinomial.fit) fails. Evidently, multiple "mlogit"s
saved on vglm.dirmultinomial.fit@misc do not suffice.

2011-12

VGAM version 0.8-4 said it needed R version 2.11.1 or later. But
really, R version 2.13.0 or later is needed. This is because the
generic nobs() was not defined properly. Another fix is to install
the (latest) prerelease version at
http://www.stat.auckland.ac.nz/~yee/VGAM/prerelease

2010-04-12

cqo() should be working now. It uses new C code. Also, vgam() and
vsmooth.spline() should not be noticeably different from before. But
cao() is still not working; hopefully it will be going again soon.

2009/07/13

cqo() fails... I think it is due to initial values being faulty. Hope
to look into it soon.

2009/06/18

For a given VGAM family function, arguments such as parallel,
exchangeable, etc. will not work if the RHS of the formula is an
intercept only. For example, parallel = FALSE ~ 1 and
exchangeable = TRUE ~ 1 will fail. Instead, try something like
parallel = TRUE ~ x2 + x3 + x4 - 1 and
exchangeable = FALSE ~ x2 + x3 + x4 + x5 - 1, respectively; see the
sketch below.
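To illustrate the Description field above, here is a minimal sketch of
a typical fit using the bundled pneumo data; this is an editorial
illustration, not part of the original file:

  library(VGAM)
  # Proportional odds (cumulative logit) model for an ordinal response:
  pneumo <- transform(pneumo, let = log(exposure.time))
  fit <- vglm(cbind(normal, mild, severe) ~ let, propodds, data = pneumo)
  coef(fit, matrix = TRUE)  # one linear predictor per cutpoint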
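A minimal sketch of that workaround (the data frame mydat and the
columns y1, y2, y3, x2, x3, x4 are hypothetical, not from the package):

  # parallel = TRUE ~ 1 would trigger the bug; instead name the
  # non-intercept terms explicitly and drop the intercept:
  fit <- vglm(cbind(y1, y2, y3) ~ x2 + x3 + x4,
              family = cumulative(parallel = TRUE ~ x2 + x3 + x4 - 1),
              data = mydat)

The same pattern applies to exchangeable and similar constraint
arguments.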
2009/01/01

Prediction with vgam( ~ offset(myoffsetmatrix) + ...) fails inside a
function because myoffsetmatrix cannot be found.

2008/08/12

Under Windows, the vgam() example involving the Hunua data seems to
fail. It is under investigation.

2008/08/04

VGAM interferes with other packages, e.g., predict() and summary().
This is due to S3 and S4 interference, and currently I haven't sussed
out the full details (e.g., NAMESPACES). For now it is best to attach
VGAM only when needed and detach it when other packages are to be
used. This can be done with
library(VGAM)
and
detach("package:VGAM")

2008/05/16

zipf() did not handle 0 < s < 1. The prerelease version fixes this.

2008/03/12

A call such as
mydof = 4
Fit = vgam(y ~ s(x, df = mydof), fam = poissonff)
will result in failure when
plot(Fit)
Instead, one needs
Fit = vgam(y ~ s(x, df = 4), fam = poissonff)

2008/02/16

The VGAM package interferes with other functions. For example, if
VGAM is loaded and lmobject is an "lm" object, then
fitted(lmobject)
predict(lmobject)
resid(lmobject)
residuals(lmobject)
will fail.

2006/05/18

dirmul() is not working yet.

2005/11/16

cao() now works in Windows. The argument xij does not work properly.

2005/8/31

The Windows version of cao() seems to hang. It does not hang in Linux.

2005/6/10

cao() works in Linux but seems to hang in Windows. The latter
(distributed in a .zip file format) is made from an R cross-build
process, which may be a reason for the bug. I'm slowly looking into
the bug.

2005/5/6

The VGAM package interferes with other code, including glm(). This may
be due to the smart prediction code, or due to the NAMESPACE facility.
In order to use other functions outside the VGAM package you may need
to type "detach()".

2003/7/14

vgam(y ~ s(x, df = 2), subset = x > 2)
will fail in R because the subset argument has the effect that the
"df" and "spar" attributes are removed from the data frame containing
the smoothing variables. Current fix: create a separate data frame
satisfying the subset= condition, and then run vgam() on this smaller
data frame; see the sketch below. Thanks to Eugene Zwane for finding
this bug.
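A minimal sketch of that fix (assuming a hypothetical data frame
mydata with columns y and x; the names are illustrative only):

  # Subset the data first, so the "df"/"spar" attributes created by
  # s() are not stripped by vgam()'s subset= argument:
  mydata2 <- subset(mydata, x > 2)
  Fit <- vgam(y ~ s(x, df = 2), fam = poissonff, data = mydata2)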